Hugging Face

Software Development

The AI community building the future.

About us

Website
https://huggingface.co
Industry
Software Development
Company size
51-200 employees
Type
Privately Held
Founded
2016
Specialties
machine learning, natural language processing, and deep learning

Updates

  • Hugging Face reposted this

    Lysandre Debut

    Head of Open Source at Hugging Face

    Transformers v4.45 was just released, and it introduces a change I would not have expected: modularity in modeling files. Transformers has always been strict about its single-file policy: a model must be defined in a single file rather than through layers of abstraction. So, what changed, and why are we seemingly moving away from the concept that made Transformers what it is today, with 250+ model architectures across many modalities?

    We are responding to an issue that affects both contributors and maintainers: contributing a model to Transformers is long and tedious. It often results in PRs spanning 20+ files, with thousands of lines of code. We wanted a solution that removes that constraint from contributors, significantly easing model additions from model authors and community members. Still, the single-file policy is at the core of Transformers. Controversial to some due to the constraints it brings, we know for a fact that it enabled: - Researchers to experiment with and tweak the modeling files - Students to go through the code without jumping from abstraction to abstraction - Community members to contribute models without first needing to understand the rest of the overwhelmingly large package.

    Therefore, we've worked on "Modular Transformers," an approach to designing modeling files in a modular way while maintaining the single-file policy. Contributing a model to Transformers can now be done by subclassing other models, inheriting all their attributes, methods, and forward definitions. The tool we contribute unravels that inheritance into a single file. The RoBERTa "modular" modeling file above defines the base and masked-LM models. It is then unraveled into a 1,700+ line single-file model definition, which can be inspected, debugged, tweaked, and adapted. The modular definition spans ~30 lines of code: only the differences are now explicit.

    This is particularly important in the wake of LLMs, with each released model being only slightly different in terms of architecture; most of the difference lies in the data for the pretrained checkpoints. While the "modular" and "single-file" model definitions serve different purposes, they should both result in the exact same code execution. We aim for no magic and no hidden behavior: define a code path, a property, or a method in the modular file, and you'll see it reflected in the single file. With this now merged, we are already seeing model contributions come in at 215 LoC for the modular file, which unravels into a single-file definition of 1,300+ LoC. Now, please come and help us break it! It's experimental and brittle, but it should drastically lower the barrier to entry for model contribution. Come and contribute your model to make it accessible to the community at large.
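The subclass-and-unravel idea can be illustrated with a plain-Python sketch. This is only an analogy, not the actual Transformers tooling or class definitions: a "modular" class subclasses an existing model and spells out only its differences, and the conversion tool then expands the inheritance back into one standalone file.

```python
# Plain-Python analogy of Modular Transformers (illustrative only; the
# real tooling operates on modeling files in the transformers repo).

class BertSelfAttention:
    def forward(self, hidden_states):
        # Stand-in for a real attention computation.
        return f"attention({hidden_states})"

class BertModel:
    def __init__(self):
        self.attention = BertSelfAttention()

    def forward(self, hidden_states):
        return self.attention.forward(hidden_states)

class RobertaModel(BertModel):
    # A "modular" definition states only the differences from BERT;
    # everything else (init, forward, attention) is inherited, and the
    # unraveling tool copies it verbatim into a single modeling file.
    pass
```

Because `RobertaModel` overrides nothing here, the unraveled single-file version would be a verbatim copy of the BERT definitions under the RoBERTa name, which is exactly why only the differences need to be written down.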

  • Hugging Face reposted this

    Gradio

    35,703 followers

    🎤 Voice-Restore is now LIVE on Hugging Face! 🚀 This cutting-edge model can fix background noise, reverberation, distortion, and signal loss. 📣 VoiceRestore uses flow-matching Transformers for speech recording quality restoration. 🔊 The audio restoration app is built with Gradio 5 (we are still in beta! 😎): https://lnkd.in/g9NZpK2e 💻 Super easy to use: built on 🤗 Transformers by Jade Choghari, integrated seamlessly with Gradio for a smooth experience! 🔧 Build the Gradio app locally: https://lnkd.in/grbSusMV Kudos to the author, Stanislav Kirdey, for the release! With Gradio 5, Python is the language for you if you want to build highly performant apps with a slick UI. Extremely simple to start using the beta release: `pip install gradio==5.0b5` Docs for the Gradio 5 beta: https://lnkd.in/ghJ97rRn
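A Gradio audio-to-audio app like the one linked above can be sketched in a few lines. The `restore` function here is a placeholder (the real app runs the VoiceRestore flow-matching model), and the Gradio import is deferred so the heavy dependency only loads when you actually serve the app.

```python
def restore(audio_path):
    # Placeholder "restoration": the real app would run the VoiceRestore
    # model here. We simply pass the input file through unchanged.
    return audio_path

def build_demo():
    # Deferred import; requires `pip install gradio==5.0b5` (Gradio 5 beta).
    import gradio as gr
    return gr.Interface(fn=restore,
                        inputs=gr.Audio(type="filepath"),
                        outputs=gr.Audio(type="filepath"))

# Call build_demo().launch() to serve the app locally.
```

Swapping the placeholder for a real model only means changing the body of `restore`; the UI wiring stays the same.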

  • Hugging Face reposted this

    Gradio

    Now you can take audio notes and transcribe them in real time with Whisper Turbo and Gradio 5! 🤩 ✨ A completely open-source stack for building high-performing Python apps. Build them locally or host them publicly. Real-time Whisper-Large-v3 Turbo with a Gradio app on Hugging Face Spaces: https://lnkd.in/gh_tgd7W Kudos to Nishith Jain (@kingnish24 on X) for the brilliant Gradio app 👏

  • Hugging Face reposted this

    Argilla

    9,414 followers

    How do you start your text classification project on the Hugging Face Hub? David Berenstein will guide you through the journey of creating a text classifier from scratch using Open Source tools. 🚀 Agenda: - Deploy Argilla on Hugging Face Spaces - Configure and create an Argilla dataset - Use model predictions to accelerate labeling - Train a SetFit model 👇🏾Link to the event in the comments
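The final agenda step can be sketched as below. The dataset contents are invented stand-ins for annotations exported from Argilla, and the checkpoint name and `SetFitTrainer` API are from memory of the SetFit library, so verify them against its docs before relying on this.

```python
def make_examples():
    # Tiny invented dataset standing in for labels exported from Argilla.
    texts = ["I love this library", "This release is broken",
             "The docs are clear", "It crashes on startup"]
    labels = ["positive", "negative", "positive", "negative"]
    return texts, labels

def train_classifier():
    # Heavy imports deferred; requires `pip install setfit datasets`.
    from datasets import Dataset
    from setfit import SetFitModel, SetFitTrainer

    texts, labels = make_examples()
    train_ds = Dataset.from_dict({"text": texts, "label": labels})

    # SetFit fine-tunes a small sentence-transformer with few examples.
    model = SetFitModel.from_pretrained(
        "sentence-transformers/paraphrase-MiniLM-L3-v2")
    trainer = SetFitTrainer(model=model, train_dataset=train_ds)
    trainer.train()
    return model

# model = train_classifier()
# model.predict(["The new version works well"])
```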

  • Hugging Face reposted this

    Philipp Schmid

    Technical Lead & LLMs at Hugging Face 🤗 | AWS ML HERO 🦸🏻♂️

    OpenAI has released new Whisper models! 👀 Yesterday, OpenAI updated their GitHub and added a new Whisper V3 Turbo model! The Turbo model is an optimized version of large-v3 that offers 8x faster transcription with minimal degradation in accuracy (no benchmarks yet) at roughly half the size. ⚡️ Coming to Hugging Face soon… 🔜
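Once the checkpoint lands on the Hub, running it through the Transformers ASR pipeline would look roughly like this. The model id `openai/whisper-large-v3-turbo` is an assumption based on the naming of the existing large-v3 checkpoint, since the upload had not happened yet when this was posted.

```python
def load_turbo(model_id="openai/whisper-large-v3-turbo"):
    # Deferred import; requires `pip install transformers torch`.
    from transformers import pipeline
    # chunk_length_s lets the pipeline handle recordings longer than 30s
    # by splitting them into overlapping chunks.
    return pipeline("automatic-speech-recognition",
                    model=model_id, chunk_length_s=30)

# asr = load_turbo()
# print(asr("meeting.wav")["text"])
```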

  • Hugging Face reposted this

    As promised! Here is an open call for contributions to the huggingface-llama-recipes repository. https://lnkd.in/ganGMgBd I have been asked about open source many times. I have always advised people to choose a repository first, go to its issues tab, and take it from there, but I have never been able to point them to a repository myself. That changes today. This is a neat little repository that holds recipes for the Llama family of models using the Hugging Face ecosystem. Help us make it better! For now we have opened up some recipe ideas, but feel free to suggest something of your own. We are more than happy to help! Happy coding, and happy open source!

    GitHub - huggingface/huggingface-llama-recipes

  • Hugging Face

    745,609 followers

    A new model was recently added to the Transformers library: OmDet-Turbo. It can detect objects based on text prompts in real time, similar to models like Grounding DINO and OWLv2, just a lot faster. Check it out below!

    Yoni Gozlan

    ML Engineer @Hugging Face 🤗

    OmDet-Turbo is now supported in Hugging Face 🤗 Transformers! OmDet-Turbo is a real-time, open-vocabulary object detection model developed by Tiancheng Zhao, Peng Liu, Xuan He, Lu Zhang, and Kyusong Lee from Om AI Research Lab. It builds on components from RT-DETR and incorporates a fast multimodal fusion module, enabling its real-time and open-vocabulary capabilities while maintaining high accuracy. The model comes with an Apache 2.0 license, meaning people can freely use it for commercial applications. You can test OmDet-Turbo's real-time open-vocabulary object detection capabilities on Spaces, as demonstrated in the video below! 🚀 * Try it on Spaces: Live https://lnkd.in/e2B7-SbF, Async https://lnkd.in/eSWJ5wGX * Try it in 🤗 Transformers: https://lnkd.in/erRW4EWg * arXiv: https://lnkd.in/eYJrYEU5 #ai #artificialintelligence #objectdetection #huggingface #computervision
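Open-vocabulary detection with the new model would look roughly like this in Transformers. The class names, checkpoint id, and post-processing call are written from memory of the Transformers documentation and are not guaranteed exact; check the OmDet-Turbo model card before relying on them.

```python
def detection_prompts():
    # Open-vocabulary queries are free-form text; these are just examples.
    return ["cat", "remote control"]

def run_omdet(image, model_id="omlab/omdet-turbo-swin-tiny-hf"):
    # Class, checkpoint, and post-processing names are from memory of the
    # Transformers docs; verify them against the model card before use.
    import torch
    from transformers import AutoProcessor, OmDetTurboForObjectDetection

    processor = AutoProcessor.from_pretrained(model_id)
    model = OmDetTurboForObjectDetection.from_pretrained(model_id)

    classes = detection_prompts()
    inputs = processor(image, text=classes, return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs)

    # Maps raw outputs back to boxes/scores/labels in image coordinates
    # (image is a PIL.Image, so image.size is (width, height)).
    return processor.post_process_grounded_object_detection(
        outputs, classes=classes,
        target_sizes=[image.size[::-1]], score_threshold=0.3)
```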

  • Hugging Face reposted this

    Gradio

    Llama 3.2 with Gradio 5.0 chatbots! 🔥🔥🔥 🚀 No more separate buttons for Retry, Undo, Clear, and Send 💪 Examples display inside the chat window ✨ Full-screen support out of the box 💯 Supports multimodality as well How do you use Meta's Llama 3.2 with the hot new Gradio 5.0 chatbots? > Build locally or on Google Colab: https://lnkd.in/gN8Ca-Yy > Explore on Hugging Face Spaces: https://lnkd.in/gdvJN3zu
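A minimal chatbot of the kind described above can be sketched with Gradio's `ChatInterface`. The responder here is a placeholder echo function; a real app would call a Llama 3.2 chat model (e.g. `meta-llama/Llama-3.2-3B-Instruct`, an assumed checkpoint name) with the running history.

```python
def respond(message, history):
    # Placeholder responder: a real app would generate a reply from a
    # Llama 3.2 chat model using `message` plus the `history` turns.
    return f"Echo: {message}"

def build_chat():
    # Deferred import; requires the Gradio 5 beta (`pip install gradio==5.0b5`).
    import gradio as gr
    # ChatInterface bundles retry/undo/clear into the chat window itself,
    # which is why the separate buttons are gone in Gradio 5.
    return gr.ChatInterface(respond)

# Call build_chat().launch() to serve the chatbot locally.
```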

Funding

Hugging Face 7 total rounds

Last Round

Series D
See more info on Crunchbase