Today is a huge day for open source AI: Argilla is joining Hugging Face 🤗 🚀 It's time to double down on community, good data for AI, product features, and open collaboration. We're thrilled to continue our path with the wonderful Argilla team and a broader team and vision, with shared values and culture! Thanks to our investors Zetta Venture Partners (James Alcorn), Criteria Venture Tech (Roma Jelinskaite, Albert Morro, Aleix Pérez), Eniac Ventures (Hadley Harris, Dan Jaeck, Monica Lim), and many others, so lucky to have worked with you! https://lnkd.in/dfxvgpsT
Argilla
Software Development
Madrid, Madrid · 9,712 followers
The Platform where experts improve AI models
About us
Build robust NLP products through faster data labeling and curation. Argilla empowers teams with the easiest-to-use human-in-the-loop and programmatic labeling features.
- Website
- https://www.argilla.io
- Industry
- Software Development
- Company size
- 11-50 employees
- Headquarters
- Madrid, Madrid
- Type
- Self-owned
- Founded
- 2017
- Specialties
- NLP, artificial intelligence, data science, and open source
Products
Argilla
Data Labeling Platforms
The feedback layer for enterprise LLMs. Build robust language models with human and machine feedback. Argilla empowers data teams, from fine-tuning and RLHF to continuous model improvement.
Locations
-
Primary
Calle de Vandergoten, 1
Madrid, Madrid 28005, ES
-
Moli Canyars, 7
Carpesa, Valencia 46132, ES
Employees at Argilla
-
Roma Jelinskaite
VC Investor | SaaS & DeepTech
-
Natalia E.
Building Argilla @ Hugging Face 🤗 | Computational Linguist | PhD
-
Agustín Piqueres Lajarín
ML Engineer @ Hugging Face 🤗
-
Averill Roy
Translator (ES>FR), graphic designer, freelance rewriter. Also Operations Assistant for Argilla.io & ceramics apprentice
Updates
-
Argilla shared this
LIPN / F.initiatives' GliNER, NuMind (YC S22)'s NuNER and NuExtract, and Tom Aarsen's SpanMarker are good tools to create the foundation of token classification projects. Start with a teacher model and let's go smoller from there! To go smoller, we first need a high-quality dataset, which we can structure with Argilla. We first make initial suggestions with a teacher model and a good project understanding. Next, we go over the suggestions and explore whether they make sense, while iterating on the data quality. Lastly, the high-quality dataset can be used to fine-tune a smoller and better student model (see the sketch below)! Learn to go brrrr with less:
Setting up a token classification/NER project using Argilla, GliNER and SpanMarker · Luma
lu.ma
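A minimal sketch of that bootstrapping step, assuming Argilla 2.x running locally and GLiNER as the teacher; the API URL, API key, dataset name, label set, and example text are placeholders:

```python
import argilla as rg
from gliner import GLiNER

# Connect to a local Argilla instance (URL and key are placeholder defaults).
client = rg.Argilla(api_url="https://localhost:6900", api_key="argilla.apikey")

labels = ["person", "organization", "location"]  # example label set
texts = ["Hugging Face has acquired Argilla, a startup based in Madrid."]

# Teacher model: zero-shot NER suggestions with GLiNER.
teacher = GLiNER.from_pretrained("urchade/gliner_medium-v2.1")

# Dataset with a span question so reviewers can accept or correct each suggestion.
settings = rg.Settings(
    fields=[rg.TextField(name="text")],
    questions=[rg.SpanQuestion(name="entities", field="text", labels=labels)],
)
dataset = rg.Dataset(name="ner-bootstrap", settings=settings)
dataset.create()

records = []
for text in texts:
    predictions = teacher.predict_entities(text, labels, threshold=0.5)
    records.append(
        rg.Record(
            fields={"text": text},
            suggestions=[
                rg.Suggestion(
                    question_name="entities",
                    value=[
                        {"start": p["start"], "end": p["end"], "label": p["label"]}
                        for p in predictions
                    ],
                )
            ],
        )
    )
dataset.records.log(records)
```

Once the suggestions have been reviewed in the UI, the validated spans can be exported and used to fine-tune the smaller student model (for example with SpanMarker).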
-
Argilla shared this
🔥 Huge update to the Hugging Face Synthetic Data Generator app: quickly build synthetic datasets for text classification
👩 Human-in-the-loop: iterate on prompts and samples and review them in Argilla!
⚙ Tons of configs: labels, language, clarity, difficulty, and more!
🦙 Powered by Llama-3.1. Run it on Spaces or locally.
Try it out! https://lnkd.in/decn9kEX
Example of a quick iteration to build a text classification dataset for medical questions in Spanish 👇
-
🚀 Big update to the Synthetic Data Generator: generate text classification datasets by describing them in natural language! Stop using big and costly LLMs for your text classification problems. Start fine-tuning smaller and more efficient models with custom data, generated without writing a single line of code! Here's how it works:
- Describe your dataset and iterate on the classification prompt until you're satisfied with the result.
- Add the labels and some extra configuration: What style should the texts have? What type of task is it: single-label or multi-label?
- Generate your dataset and review it in Argilla, because we all know how quickly a "wow" can turn into an "oops".
It takes a minute and 0€/$ to get started! Try it out, and share your feedback with us! Space: https://lnkd.in/dHdwkWCU Distilabel: https://lnkd.in/dtXWRvXb
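The Space itself is no-code, but as a rough idea of what happens under the hood, here is a minimal distilabel sketch for the Spanish medical-questions example (assuming distilabel 1.x with Hugging Face Inference Endpoints access; the seed instructions, model id, and repo id are placeholders):

```python
from distilabel.llms import InferenceEndpointsLLM
from distilabel.pipeline import Pipeline
from distilabel.steps import LoadDataFromDicts
from distilabel.steps.tasks import TextGeneration

# Seed instructions describing the data we want (placeholders).
seed = [
    {"instruction": "Write a short medical question in Spanish about cardiology."},
    {"instruction": "Write a short medical question in Spanish about dermatology."},
]

with Pipeline(name="medical-textcat-synth") as pipeline:
    load = LoadDataFromDicts(data=seed)
    generate = TextGeneration(
        llm=InferenceEndpointsLLM(model_id="meta-llama/Llama-3.1-8B-Instruct"),
        num_generations=3,  # several samples per seed instruction
    )
    load >> generate

if __name__ == "__main__":
    distiset = pipeline.run()
    # Push the generated texts to the Hub so they can be reviewed in Argilla
    # (placeholder repo id).
    distiset.push_to_hub("my-org/medical-questions-es")
```

The app wraps this kind of pipeline behind a UI, adds the labels/language/clarity/difficulty configuration, and hands the result to Argilla for review.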
-
Argilla shared this
⚡️ LLMs do a good job at NER, but don't you want to learn how to do more with less? Go from 🐢 -> 🐇 If you want a small model to perform well on your problem, you need to fine-tune it. Bootstrap with a teacher model. Correct potential mistakes to get high-quality data. Fine-tune your student model. Get more accurate and more efficient. Free signup: https://buff.ly/4ebcvlo
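A minimal sketch of the student fine-tuning step with SpanMarker, assuming a token-classification dataset with `tokens` and `ner_tags` columns; CoNLL-2003 and the tiny encoder are stand-ins for your corrected data and preferred student model:

```python
from datasets import load_dataset
from span_marker import SpanMarkerModel, Trainer
from transformers import TrainingArguments

# Stand-in dataset; in practice, export the reviewed annotations from Argilla.
dataset = load_dataset("conll2003")
labels = dataset["train"].features["ner_tags"].feature.names

# Small encoder as the student model (placeholder checkpoint).
model = SpanMarkerModel.from_pretrained("prajjwal1/bert-tiny", labels=labels)

args = TrainingArguments(
    output_dir="models/span-marker-student",
    learning_rate=5e-5,
    per_device_train_batch_size=32,
    num_train_epochs=1,
)
trainer = Trainer(
    model=model,
    args=args,
    train_dataset=dataset["train"],
    eval_dataset=dataset["validation"],
)
trainer.train()
trainer.save_model("models/span-marker-student/final")
```

This is the "more with less" the post describes: a much smaller model that is cheaper to run and, after fine-tuning on corrected data, more accurate on your specific problem.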
-
Argilla shared this
Attention, data enthusiasts! You know that data quality is key to building robust AI systems. So, it's no surprise that the Hugging Face Hub is your go-to for laying the foundation of your projects. Here's why:
👉 Access the most useful datasets.
👉 Upload your own data effortlessly from a simple CSV.
Now, imagine Argilla, the open source data curation tool, seamlessly integrated with the Hugging Face Hub. Here's what's possible:
💪 Curate a game-changing dataset for your own use and for the benefit of the AI community, all with just a few clicks.
💪💪 Prepare a robust training dataset with built-in feedback and your own data.
💪💪💪 Invite friends or colleagues to collaborate easily just by logging in with Hugging Face.
All without writing a single line of code. 🤤 The future of dataset collaboration is here, and it arrives tomorrow ⏳ #releasespoiler
-
Argilla shared this
Stoked to see the Arabic Open AI community contributing to building a high-quality benchmark for RAG on Hugging Face using Argilla! In just two weeks, the community (more than 60 people) has validated 2,100 samples, which will be used to measure the quality of LLMs for Retrieval Augmented Generation in Arabic. Special mention to the three women leading the effort:
1. 🥇 Hasna Chouikhi, with more than 60% of the total contributions 🤯
2. 🥈 Nouhaila Chab
3. 🥉 Ihssane NEDJAOUI
This is a good example of what the community can achieve with open science and collaboration! If you want support for improving the understanding and quality of AI for your domain or language, reach out!
-
Argilla shared this
Want to contribute to the Arabic Open AI community? Join the ALRAGE community annotation sprint organized with 2A2I. Your work will help improve the evaluation of RAG systems in Arabic. 12% of the task is now completed. Sign in and start giving feedback: https://lnkd.in/d9q6UVQX Leaderboard: https://lnkd.in/d6c6_5Pj The top 3 contributors will receive a Hugging Face PRO monthly subscription.
AlRAGE Sprint - a Hugging Face Space by OALL
huggingface.co
-
Argilla shared this
🚀 You can now use Argilla with LiteLLM (YC W23)! LiteLLM is a framework that lets you call different LLM APIs using the OpenAI format. However, as we all know, LLMs are far from perfect. Thanks to this cool integration, you'll get the outputs in a chat format that can be directly and easily reviewed in the Argilla UI! More information in the docs: https://lnkd.in/dFwWYRyv
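A rough sketch of what this looks like from the LiteLLM side, based on its callback-style logging; the callback name, environment variable names, and dataset setup are assumptions here, so follow the linked docs for the exact configuration:

```python
import os

import litellm
from litellm import completion

# Tell the Argilla logger where to send records (placeholder values; the exact
# environment variable names may differ, see the docs linked above).
os.environ["ARGILLA_API_URL"] = "https://localhost:6900"
os.environ["ARGILLA_API_KEY"] = "argilla.apikey"
os.environ["ARGILLA_DATASET_NAME"] = "litellm-chats"

# Route call results to Argilla via LiteLLM's logging callbacks (callback name assumed).
litellm.callbacks = ["argilla"]

# Any provider works through the OpenAI-style interface; this model needs OPENAI_API_KEY.
response = completion(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Summarize what Argilla is in one sentence."}],
)
print(response.choices[0].message.content)

# The prompt/response pair then appears in the Argilla UI as a chat record ready for review.
```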
-
Argilla shared this
Thrilled to add 📊 Argilla.io logging to LiteLLM (YC W23) for finetuning LLMs 👉 https://lnkd.in/dcbSQ7x8 (+4 more updates 👇)
✨ New 'ensure_alternating_roles' param
🔄 Finetuning: regex for client auth
📚 DBRX tutorial (mlflow+langchain+litellm)
🔒 Fixed SSO admin access check
Funding
Last round
Seed: US$5,500,000