📣 Introducing Llama 3.2: Lightweight models for edge devices, vision models and more!

What’s new?
• Llama 3.2 1B & 3B models deliver state-of-the-art capabilities for their class across several on-device use cases, with support for Arm, MediaTek & Qualcomm hardware on day one.
• Llama 3.2 11B & 90B vision models deliver performance competitive with leading closed models and can be used as drop-in replacements for Llama 3.1 8B & 70B.
• New Llama Guard models support multimodal use cases and edge deployments.
• The first official distro of Llama Stack simplifies and supercharges the way developers & enterprises can build around Llama to support agentic applications and more.

With Llama 3.2 we’re making it possible to run Llama in even more places, with even more flexible capabilities.

Details in the full announcement ➡️ https://go.fb.me/8ar7oz
Download Llama 3.2 models ➡️ https://go.fb.me/7eiq2z

These models are available to download now directly from Meta and Hugging Face, and will be available across offerings from 25+ partners rolling out starting today, including Accenture, Amazon Web Services (AWS), AMD, Microsoft Azure, Databricks, Dell Technologies, Deloitte, Fireworks AI, Google Cloud, Groq, IBM, Infosys, Intel Corporation, Kaggle, NVIDIA, Oracle Cloud, PwC, Scale AI, Snowflake, Together AI and more.

We’ve said it before and we’ll say it again: open source AI is how we ensure that these innovations reflect the global community they’re built for and benefit everyone. We’re continuing our drive to make open source the standard with Llama 3.2.
AI at Meta
Research Services
Menlo Park, California 876,907 followers
Together with the AI community, we’re pushing boundaries through open science to create a more connected world.
About us
Through open science and collaboration with the AI community, we are pushing the boundaries of artificial intelligence to create a more connected world. We can’t advance the progress of AI alone, so we actively engage with the AI research and academic communities. Our goal is to advance AI in Infrastructure, Natural Language Processing, Generative AI, Vision, Human-Computer Interaction and many other areas of AI, and to enable the community to build safe and responsible solutions that address some of the world’s greatest challenges.
- Website: https://ai.meta.com/
- Industry: Research Services
- Company size: 10,001+ employees
- Headquarters: Menlo Park, California
- Specialties: research, engineering, development, software development, artificial intelligence, machine learning, machine intelligence, deep learning, computer vision, speech recognition, and natural language processing
Updates
-
One year ago we opened applications for the first-ever Llama Impact Grants program seeking proposals from around the world to use open source AI to address challenges in education, environment and innovation. Now, we're excited to announce the recipients of our first grants with projects ranging from reading assessments in India to personalized maternal and newborn health support in Sub-Saharan Africa. See the full list of Llama Impact Grant and Llama Impact Innovation Award recipients ➡️ https://go.fb.me/khdznv
-
Following the initial RFP period, we’re excited to share the first official distribution of Llama Stack. Details ➡️ https://go.fb.me/xfi7g3 Llama Stack packages multiple API providers behind a single endpoint, giving developers a simple, consistent way to work with Llama models on-prem, in the cloud, on a single node or on-device.
-
We’re on the ground at #ECCV2024 in Milan this week to showcase some of our latest research, new artifacts and more. Here are four things you won’t want to miss from Meta FAIR, GenAI and Reality Labs Research this week, whether you’re here in person or following from your feed.
1. We’re releasing SAM 2.1, an upgraded version of the Segment Anything Model 2, and the SAM 2 Developer Suite featuring open source tools for training, inference and demos. Live in the Segment Anything repo on GitHub ➡️ https://go.fb.me/mk6ofh
2. We’re supporting 10+ presentations and workshops in areas like computer vision for smart glasses and the metaverse, 3D vision for eCommerce, egocentric research with Project Aria and more.
3. We’re presenting seven orals at ECCV, in addition to the 50+ publications from researchers at Meta that were accepted for this year’s conference. Look out for more details on some of these papers later this week.
4. Demos and discussions with Meta researchers at our booth all week: come by to discuss projects like SAM 2, Ego-Exo4D, DINOv2 and more.
-
Llama 3.2 features our first multimodal Llama models with support for vision tasks. These models can take in both image and text prompts to understand and reason over their inputs, and they are the next step towards even richer agentic applications built with Llama. More on all of our new Llama 3.2 models ➡️ https://go.fb.me/14f79n
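As an illustrative sketch only (not official Meta sample code), the snippet below shows the interleaved image-and-text chat-message layout commonly used to prompt vision-capable chat models; the exact schema a given Llama 3.2 runtime expects may differ, so check its documentation.

```python
# Hypothetical sketch: building a combined image + text prompt in the
# content-part style many multimodal chat APIs use. The helper name and
# field names here are illustrative, not a specific Llama 3.2 API.

def build_vision_prompt(image_url: str, question: str) -> list[dict]:
    """Return a chat message list pairing one image with a text question."""
    return [
        {
            "role": "user",
            "content": [
                {"type": "image", "url": image_url},  # the image input
                {"type": "text", "text": question},   # the text input
            ],
        }
    ]

messages = build_vision_prompt(
    "https://example.com/chart.png",
    "What trend does this chart show?",
)

# Pull the text parts back out, e.g. for logging.
text_parts = [
    part["text"]
    for msg in messages
    for part in msg["content"]
    if part["type"] == "text"
]
print(text_parts)  # ['What trend does this chart show?']
```

The same message list would then be handed to whatever inference endpoint serves the vision model; only the transport differs between on-device and hosted setups.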
-
Ready to start working with our new lightweight and multimodal Llama 3.2 models? Check out all of the newest resources in the updated repos on GitHub. Llama GitHub repo ➡️ https://go.fb.me/1sn5cb Llama recipes ➡️ https://go.fb.me/3w78ol Llama Stack ➡️ https://go.fb.me/ci7y5w Model Cards ➡️ https://go.fb.me/2dtbbu The repos include code, new training recipes, updated model cards, details on our new Llama Guard models and our first official release of Llama Stack.
-
With Llama 3.2 we released our first-ever lightweight Llama models: 1B & 3B. These models outperform competing models on a range of tasks even at smaller sizes; feature support for Arm, MediaTek and Qualcomm devices; and empower developers to build personalized, on-device agentic applications with capabilities like summarization, tool use and RAG with strong privacy where data never leaves the device. We’ve shared more, including reference applications as part of the Llama 3.2 release. Details and model downloads ➡️ https://go.fb.me/vbjzj3
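The on-device RAG pattern mentioned above can be sketched in a few lines: retrieve the locally stored snippets most relevant to a query, then pack them into the prompt handed to a local 1B/3B model, so no data ever leaves the device. This is a hypothetical illustration; the word-overlap scoring and prompt template are ours, not part of the Llama 3.2 release.

```python
# Hypothetical on-device RAG sketch: documents stay local, and the most
# relevant ones are packed into the prompt for a local lightweight model.

def _words(s: str) -> set[str]:
    """Lowercase, split, and strip punctuation for naive matching."""
    return {w.strip(".,?!:") for w in s.lower().split()}

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Rank docs by word-overlap with the query (a stand-in for a real
    on-device embedding index) and return the top k."""
    q = _words(query)
    return sorted(docs, key=lambda d: len(q & _words(d)), reverse=True)[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    """Assemble a grounded prompt from the retrieved local context."""
    context = "\n".join(f"- {d}" for d in retrieve(query, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

notes = [
    "Meeting moved to Thursday at 3pm.",
    "Grocery list: eggs, milk, coffee.",
    "Flight lands Thursday at noon.",
]
prompt = build_prompt("When is the meeting?", notes)
print(prompt)
```

In a real deployment the overlap scorer would be replaced by an embedding index, but the shape of the loop (retrieve locally, prompt locally) is the privacy property the post describes.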
-
New research from Meta FAIR: MoMa, Efficient Early-Fusion Pre-training with Mixture of Modality-Aware Experts ➡️ https://go.fb.me/kz3b0c This paper introduces modality-aware sparse architectures for early-fusion, mixed-modality foundation models and opens up several promising directions for future research.
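The core idea behind modality-aware experts, that each token is dispatched only within its own modality's expert pool, can be illustrated with a toy router. This is a simplified, hypothetical sketch of modality-aware routing in general (hash-based dispatch standing in for a learned gate), not MoMa's actual architecture.

```python
# Toy sketch of modality-aware sparse routing: each modality owns its own
# expert pool, and a token is only ever dispatched within that pool. This
# illustrates the general idea, not the paper's implementation.

EXPERT_POOLS = {
    "text": ["text_expert_0", "text_expert_1"],
    "image": ["image_expert_0", "image_expert_1"],
}

def route(token_id: int, modality: str) -> str:
    """Pick an expert from the token's modality pool. A real model uses a
    learned gating network here; we hash on token_id for illustration."""
    pool = EXPERT_POOLS[modality]
    return pool[token_id % len(pool)]

# A mixed-modality sequence of (token_id, modality) pairs.
sequence = [(0, "text"), (1, "image"), (2, "text"), (3, "image")]
assignments = [route(tid, mod) for tid, mod in sequence]
print(assignments)
# ['text_expert_0', 'image_expert_1', 'text_expert_0', 'image_expert_1']
```

Because routing never crosses modality pools, each expert only sees one kind of token, which is the sparsity structure the post's "modality-aware" phrasing refers to.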
-
Fragmented regulation means the EU risks missing out on the rapid innovation happening in open source and multimodal AI. We're joining representatives from 25+ European companies, researchers and developers in calling for regulatory certainty ➡️ EUneedsAI.com
-
With the release of Llama 3.1, Together AI built LlamaCoder — an open source web app that enables people to generate entire apps from a prompt. Since release, the project has been starred over 2K times and cloned by hundreds of developers on GitHub. More on this project, built with Llama 405B ➡️ https://go.fb.me/or1rcl