Ada Lovelace (microarchitecture)
Launched | October 12, 2022
---|---
Designed by | Nvidia
Manufactured by | TSMC
Fabrication process | TSMC 4N
Codename(s) | AD10x
Product series (desktop) | GeForce RTX 40 series
Product series (professional/workstation) | Nvidia RTX Ada Generation
Product series (server/datacenter) | Nvidia L series
Clock rate | 735 MHz to 2640 MHz
L1 cache | 128 KB (per SM)
L2 cache | 32 MB to 96 MB
Memory support | GDDR6, GDDR6X
Memory speed | 21–23 Gbit/s
PCIe support | PCIe 4.0
DirectX | DirectX 12 Ultimate (Feature Level 12_2)
Direct3D | Direct3D 12
Shader Model | Shader Model 6.8
OpenCL | OpenCL 3.0
OpenGL | OpenGL 4.6
CUDA | Compute Capability 8.9
Vulkan | Vulkan 1.3
CUDA Toolkit | 11.8 and later
DirectCompute | Yes
Encode codecs | H.264, HEVC, AV1
Decode codecs | H.264, HEVC, VP9, AV1
Encoder(s) supported | NVENC (8th generation)
Display outputs | DisplayPort 1.4a, HDMI 2.1a
Predecessor | Ampere
Variant | Hopper (datacenter)
Successor | Blackwell
Support status | Supported
Ada Lovelace, also referred to simply as Lovelace,[1] is a graphics processing unit (GPU) microarchitecture developed by Nvidia as the successor to the Ampere architecture, officially announced on September 20, 2022. It is named after the English mathematician Ada Lovelace,[2] one of the first computer programmers. Nvidia announced the architecture alongside the GeForce RTX 40 series consumer GPUs[3] and the RTX 6000 Ada Generation workstation graphics card.[4] The Lovelace architecture is fabricated on TSMC's custom 4N process, which offers increased efficiency over the Samsung 8 nm and TSMC N7 processes used for the previous-generation Ampere architecture.[5]
Background
The Ada Lovelace architecture follows the Ampere architecture, released in 2020. It was announced by Nvidia CEO Jensen Huang during a GTC 2022 keynote on September 20, 2022, and powers Nvidia's GPUs for gaming, workstations, and datacenters.[6]
Architectural details
Architectural improvements of the Ada Lovelace architecture include the following:[7]
- CUDA Compute Capability 8.9[8]
- TSMC 4N process (custom designed for Nvidia) - not to be confused with TSMC's regular N4 node
- 4th-generation Tensor Cores with FP8, FP16, bfloat16, TensorFloat-32 (TF32) and sparsity acceleration
- 3rd-generation Ray Tracing Cores, with concurrent ray tracing, shading, and compute
- Shader Execution Reordering (SER)[9]
- Nvidia video encoder/decoder (NVENC/NVDEC) with fixed-function hardware encoding of 8K 10-bit AV1 at 60 FPS[10][11]
- No NVLink support[12][13]
Streaming multiprocessors (SMs)
CUDA cores
Each SM contains 128 CUDA cores; the full AD102 die, with 144 SMs, therefore contains 18,432 CUDA cores.
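Because the per-SM core count is fixed, the total CUDA core count of a part follows directly from its enabled SM count. The following is a minimal sketch using the CUDA runtime API; the 128-cores-per-SM factor is hard-coded because cudaDeviceProp does not report cores per SM, and it applies only to compute capability 8.9:

```cuda
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    cudaDeviceProp prop;
    cudaGetDeviceProperties(&prop, 0);  // query device 0

    // Ada Lovelace GPUs report compute capability 8.9.
    printf("Compute capability: %d.%d\n", prop.major, prop.minor);
    printf("Streaming multiprocessors: %d\n", prop.multiProcessorCount);

    if (prop.major == 8 && prop.minor == 9) {
        // Ada Lovelace: 128 FP32 CUDA cores per SM.
        printf("CUDA cores: %d\n", prop.multiProcessorCount * 128);
    }
    return 0;
}
```

On a GeForce RTX 4090, for example, this reports 128 SMs and therefore 16,384 CUDA cores.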
RT cores
Ada Lovelace features third-generation RT cores. The RTX 4090 has 128 RT cores, up from 84 in the previous-generation RTX 3090 Ti; Nvidia rates these 128 RT cores at up to 191 TFLOPS in total, or 1.49 TFLOPS per RT core.[14] The Lovelace architecture also adds Shader Execution Reordering (SER), a new stage in the ray tracing pipeline that Nvidia claims provides up to a 2x performance improvement in ray tracing workloads.[6]
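SER is exposed to developers through Nvidia extensions to the ray tracing APIs rather than through CUDA itself, so the sketch below is only a conceptual illustration of the idea behind reordering, not Nvidia's hardware mechanism: rays are regrouped by a hypothetical material ID so that threads about to run the same hit shader become adjacent, here using Thrust's sort_by_key:

```cuda
#include <cstdio>
#include <thrust/device_vector.h>
#include <thrust/sort.h>

int main() {
    // Hypothetical hit results: each ray will invoke the shader of one material.
    // In this divergent order, neighbouring threads would run different shaders.
    int h_material[8] = {2, 0, 1, 2, 0, 1, 1, 0};
    int h_ray[8]      = {0, 1, 2, 3, 4, 5, 6, 7};
    thrust::device_vector<int> material(h_material, h_material + 8);
    thrust::device_vector<int> ray(h_ray, h_ray + 8);

    // Regroup rays so that rays hitting the same material are adjacent,
    // letting a warp execute one shader coherently. SER performs an
    // analogous regrouping in hardware, mid-pipeline, with far lower overhead.
    thrust::sort_by_key(material.begin(), material.end(), ray.begin());

    for (int i = 0; i < 8; ++i)
        printf("material %d -> ray %d\n", (int)material[i], (int)ray[i]);
    return 0;
}
```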
Tensor cores
Lovelace's new fourth-generation Tensor cores enable the AI technology used in DLSS 3's frame generation. As in Ampere, each SM contains 4 Tensor cores, but Lovelace contains more Tensor cores overall because of its larger SM counts.
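In CUDA, Tensor cores are programmed through the warp-level matrix (wmma) API. The following minimal sketch multiplies a single 16×16 FP16 tile with FP32 accumulation; it uses the generic wmma path available since Volta rather than Ada's FP8 capability, which is normally reached through libraries such as cuBLAS:

```cuda
#include <cstdio>
#include <cuda_fp16.h>
#include <mma.h>

using namespace nvcuda;

// One warp computes D = A * B for a single 16x16x16 tile on the Tensor cores.
__global__ void tile_matmul(const half *a, const half *b, float *d) {
    wmma::fragment<wmma::matrix_a, 16, 16, 16, half, wmma::row_major> fa;
    wmma::fragment<wmma::matrix_b, 16, 16, 16, half, wmma::row_major> fb;
    wmma::fragment<wmma::accumulator, 16, 16, 16, float> acc;

    wmma::fill_fragment(acc, 0.0f);
    wmma::load_matrix_sync(fa, a, 16);   // leading dimension 16
    wmma::load_matrix_sync(fb, b, 16);
    wmma::mma_sync(acc, fa, fb, acc);    // issued to the Tensor cores
    wmma::store_matrix_sync(d, acc, 16, wmma::mem_row_major);
}

int main() {
    half *a, *b;
    float *d;
    cudaMallocManaged(&a, 256 * sizeof(half));
    cudaMallocManaged(&b, 256 * sizeof(half));
    cudaMallocManaged(&d, 256 * sizeof(float));
    for (int i = 0; i < 256; ++i) {
        a[i] = __float2half(1.0f);
        b[i] = __float2half(1.0f);
    }

    tile_matmul<<<1, 32>>>(a, b, d);     // one warp of 32 threads
    cudaDeviceSynchronize();
    printf("d[0] = %.1f\n", d[0]);       // 16.0, the dot product of 16 ones
    return 0;
}
```

Compiled with, for example, nvcc -arch=sm_89, each mma_sync call issues the tile multiply-accumulate to the SM's Tensor cores.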
Clock speeds
Clock speeds increase significantly with the Ada Lovelace architecture: the base clock of the RTX 4090 is higher than the boost clock of the RTX 3090 Ti.
 | RTX 2080 Ti | RTX 3090 Ti | RTX 4090
---|---|---|---
Architecture | Turing | Ampere | Ada Lovelace
Base clock speed (MHz) | 1350 | 1560 | 2235
Boost clock speed (MHz) | 1635 | 1860 | 2520
Cache and memory subsystem
 | RTX 2080 Ti | RTX 3090 Ti | RTX 4090
---|---|---|---
Architecture | Turing | Ampere | Ada Lovelace
L1 data cache | 6.375 MB (96 KB per SM) | 10.5 MB (128 KB per SM) | 16 MB (128 KB per SM)
L2 cache | 5.5 MB | 6 MB | 72 MB
The full AD102 die features 96 MB of L2 cache, a 16x increase over the 6 MB of the Ampere-based GA102 die.[15] Quick access to a large L2 cache benefits complex workloads such as ray tracing, because fetching data from GDDR video memory is much slower. With less reliance on memory accesses for important, frequently used data, a narrower memory bus can be used in tandem with the large L2 cache.
Each memory controller uses a 32-bit connection, with up to 12 controllers present for a combined memory bus width of 384-bit. The Lovelace architecture can use either GDDR6 or GDDR6X memory: GDDR6X features on the desktop GeForce RTX 40 series, while the more energy-efficient GDDR6 is used on the corresponding mobile parts and on workstation GPUs such as the RTX 6000 Ada Generation.
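Both the L2 size and the memory configuration can be read back at runtime. Below is a minimal sketch with the CUDA runtime API; the bandwidth figure is the usual bus-width times data-rate estimate (cudaDeviceProp reports the memory clock in kHz, and GDDR6/GDDR6X transfers two bits per pin per clock):

```cuda
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    cudaDeviceProp prop;
    cudaGetDeviceProperties(&prop, 0);

    printf("L2 cache: %d MB\n", prop.l2CacheSize / (1024 * 1024));
    printf("Memory bus width: %d-bit\n", prop.memoryBusWidth);

    // Theoretical peak bandwidth = effective data rate x bus width.
    // memoryClockRate is in kHz; the factor 2 is the double data rate.
    double gbps = 2.0 * prop.memoryClockRate * (prop.memoryBusWidth / 8) / 1e6;
    printf("Peak memory bandwidth: %.0f GB/s\n", gbps);
    return 0;
}
```

On a GeForce RTX 4090 (72 MB of L2, 384-bit bus, 21 Gbit/s GDDR6X) this works out to roughly 1008 GB/s.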
Power efficiency and process node
The Ada Lovelace architecture is able to use lower voltages than its predecessor.[6] Nvidia claims a 2x performance increase for the RTX 4090 at the same 450 W used by the previous-generation flagship RTX 3090 Ti.[16]
The increased power efficiency can be attributed in part to the smaller fabrication node used by the Lovelace architecture. Ada Lovelace is fabricated on TSMC's 4N process, a node custom-designed for Nvidia, whereas the previous-generation Ampere architecture used Samsung's 8 nm-based 8N node, which dated from 2018 and was already two years old at Ampere's launch.[17][18] The AD102 die, with its 76.3 billion transistors, has a transistor density of 125.5 million per mm², a 178% increase over GA102's 45.1 million per mm².
Media engine
The Lovelace architecture utilizes the new 8th-generation Nvidia NVENC video encoder, while the 5th-generation NVDEC video decoder introduced with Ampere returns.[19]
NVENC adds AV1 hardware encoding with support for up to 8K resolution at 60 FPS in 10-bit color, enabling higher video fidelity at lower bit rates than the H.264 and H.265 codecs.[20] Nvidia claims that the NVENC AV1 encoder in the Lovelace architecture is 40% more efficient than the H.264 encoder in the Ampere architecture.[21]
The Lovelace architecture received criticism for omitting DisplayPort 2.0, which offers higher display bandwidth; it instead retains the older DisplayPort 1.4a, limited to a peak bandwidth of 32.4 Gbit/s.[22] As a result, Lovelace GPUs are constrained by DisplayPort 1.4a's supported refresh rates even when they can render higher frame rates. Intel's Arc GPUs, also released in October 2022, included DisplayPort 2.0, and AMD's competing RDNA 3 architecture, released two months after Lovelace, included DisplayPort 2.1.[23]
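The constraint can be estimated with back-of-the-envelope signal arithmetic. The sketch below compares raw uncompressed video bandwidth against the DisplayPort 1.4a link; it deliberately ignores blanking intervals and Display Stream Compression (which raise and lower the practical requirement, respectively), so the figures are approximations only:

```cuda
#include <cstdio>

// Approximate uncompressed video bandwidth: pixels x bits per pixel x refresh.
double signal_gbps(int w, int h, int bits_per_channel, int hz) {
    return (double)w * h * 3 * bits_per_channel * hz / 1e9;
}

int main() {
    // DP 1.4a HBR3: 32.4 Gbit/s raw, ~25.9 Gbit/s effective after 8b/10b coding.
    double link = 32.4 * 0.8;
    printf("4K 144 Hz 10-bit: %.1f Gbit/s (link budget: %.1f)\n",
           signal_gbps(3840, 2160, 10, 144), link);
    printf("4K 240 Hz 10-bit: %.1f Gbit/s (link budget: %.1f)\n",
           signal_gbps(3840, 2160, 10, 240), link);
    return 0;
}
```

Both modes exceed the roughly 25.9 Gbit/s available after 8b/10b line coding, which is why high-refresh 4K output on these cards depends on Display Stream Compression or chroma subsampling.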
Ada Lovelace dies
Die[24] | AD102[25] | AD103[26] | AD104[27] | AD106[28] | AD107[29]
---|---|---|---|---|---
Die size | 609 mm² | 379 mm² | 294 mm² | 188 mm² | 159 mm²
Transistors | 76.3B | 45.9B | 35.8B | 22.9B | 18.9B
Transistor density | 125.3 MTr/mm² | 121.1 MTr/mm² | 121.8 MTr/mm² | 121.8 MTr/mm² | 118.9 MTr/mm²
Graphics processing clusters | 12 | 7 | 5 | 3 | 2
Streaming multiprocessors | 144 | 80 | 60 | 36 | 24
CUDA cores | 18432 | 10240 | 7680 | 4608 | 3072
Texture mapping units | 576 | 320 | 240 | 144 | 96
Render output units | 192 | 112 | 80 | 48 | 32
Tensor cores | 576 | 320 | 240 | 144 | 96
RT cores | 144 | 80 | 60 | 36 | 24
L1 cache (128 KB per SM) | 18 MB | 10 MB | 7.5 MB | 4.5 MB | 3 MB
L2 cache | 96 MB | 64 MB | 48 MB | 32 MB | 32 MB
Ada Lovelace-based products
Consumer
Desktop
- GeForce RTX 40 series
- GeForce RTX 4060 (AD107)
- GeForce RTX 4060 Ti (AD106)
- GeForce RTX 4070 (AD104)
- GeForce RTX 4070 SUPER (AD104)
- GeForce RTX 4070 Ti (AD104)
- GeForce RTX 4070 Ti SUPER (AD103)
- GeForce RTX 4080 (AD103)
- GeForce RTX 4080 SUPER (AD103)
- GeForce RTX 4090 D (AD102)
- GeForce RTX 4090 (AD102)
Mobile
- GeForce RTX 40 series
- GeForce RTX 4050 Laptop (AD107)
- GeForce RTX 4060 Laptop (AD107)
- GeForce RTX 4070 Laptop (AD106)
- GeForce RTX 4080 Laptop (AD104)
- GeForce RTX 4090 Laptop (AD103)
Professional
Desktop workstation
- Nvidia Workstation GPUs (formerly Quadro)
- Nvidia RTX 2000 Ada Generation (AD107)
- Nvidia RTX 4000 Ada Generation (AD104)
- Nvidia RTX 4000 SFF Ada Generation (AD104)
- Nvidia RTX 4500 Ada Generation (AD104)
- Nvidia RTX 5000 Ada Generation (AD102)
- Nvidia RTX 5880 Ada Generation (AD102)
- Nvidia RTX 6000 Ada Generation (AD102)
Mobile workstation
- Nvidia Workstation GPUs (formerly Quadro)
- Nvidia RTX 500 Ada Generation Laptop (AD107)
- Nvidia RTX 1000 Ada Generation Laptop (AD107)
- Nvidia RTX 2000 Ada Generation Laptop (AD107)
- Nvidia RTX 3000 Ada Generation Laptop (AD106)
- Nvidia RTX 3500 Ada Generation Laptop (AD104)
- Nvidia RTX 4000 Ada Generation Laptop (AD104)
- Nvidia RTX 5000 Ada Generation Laptop (AD103)
Datacenter
- Nvidia Data Center GPUs (formerly Tesla)
- Nvidia L4 (AD104)
- Nvidia L40 (AD102)
- Nvidia L40G (AD102)
- Nvidia L40CNX (AD102)
Type | AD107 | AD106 | AD104 | AD103 | AD102
---|---|---|---|---|---
GeForce 40 series (desktop) | GeForce RTX 4060 | GeForce RTX 4060 Ti | GeForce RTX 4070, RTX 4070 SUPER, RTX 4070 Ti | GeForce RTX 4070 Ti SUPER, RTX 4080, RTX 4080 SUPER | GeForce RTX 4090 D, RTX 4090
GeForce 40 series (mobile) | GeForce RTX 4050, RTX 4060 | GeForce RTX 4070 | GeForce RTX 4080 | GeForce RTX 4090 | —
Nvidia Workstation GPUs (desktop) | RTX 2000 Ada Generation | — | RTX 4000 Ada Generation, RTX 4000 SFF Ada Generation, RTX 4500 Ada Generation | — | RTX 5000 Ada Generation, RTX 5880 Ada Generation, RTX 6000 Ada Generation
Nvidia Workstation GPUs (mobile) | RTX 500 Ada Generation, RTX 1000 Ada Generation, RTX 2000 Ada Generation | RTX 3000 Ada Generation | RTX 3500 Ada Generation, RTX 4000 Ada Generation | RTX 5000 Ada Generation | —
Nvidia Data Center GPUs | — | — | Nvidia L4[30] | — | Nvidia L40, L40G, L40CNX
References
- ^ Freund, Karl (September 20, 2022). "Nvidia Launches Lovelace GPU, Cloud Services, Ships H100 GPUs, New Drive Thor". Forbes. Retrieved November 18, 2022.
- ^ Mujtaba, Hassan (September 15, 2022). "Nvidia's Next-Gen Ada Lovelace Gaming GPU Architecture For GeForce RTX 40 Series Confirmed". Wccftech. Retrieved November 18, 2022.
- ^ "Nvidia Delivers Quantum Leap in Performance, Introduces New Era of Neural Rendering with GeForce RTX 40 Series". Nvidia Newsroom (Press release). September 20, 2022. Retrieved September 20, 2022.
- ^ "Nvidia's New Ada Lovelace RTX GPU Arrives for Designers and Creators". Nvidia Newsroom. September 20, 2022. Retrieved November 18, 2022.
- ^ Machkovech, Sam (September 20, 2022). "Nvidia's Ada Lovelace GPU generation: $1,599 for RTX 4090, $899 and up for 4080". Ars Technica. Retrieved November 18, 2022.
- ^ a b c Chiappetta, Marco (September 22, 2022). "NVIDIA GeForce RTX 40 Architecture Overview: Ada's Special Sauce Unveiled". HotHardware. Retrieved April 8, 2023.
- ^ "NVIDIA Ada Lovelace Architecture". NVIDIA. September 20, 2022. Retrieved September 20, 2022.
- ^ "CUDA C++ Programming Guide". docs.nvidia.com. Retrieved April 15, 2023.
- ^ "Improve Shader Performance and In-Game Frame Rates with Shader Execution Reordering". NVIDIA Technical Blog. October 13, 2022. Retrieved April 6, 2023.
- ^ Delgado, Gerardo (September 20, 2022). "Creativity At The Speed of Light: GeForce RTX 40 Series Graphics Cards Unleash Up To 2X Performance in 3D Rendering, AI, and Video Exports For Gamers and Creators". NVIDIA. Retrieved September 20, 2022.
- ^ "Nvidia Video Codec SDK". NVIDIA Developer. September 20, 2022. Retrieved November 18, 2022.
- ^ Nguyen, Chuong (September 21, 2022). "Nvidia kills off NVLink on RTX 4090". Windows Central. Retrieved January 1, 2023.
- ^ btarunr (September 21, 2022). "Jensen Confirms: NVLink Support in Ada Lovelace is Gone". TechPowerUp. Retrieved November 18, 2022.
- ^ "Nvidia Ada Lovelace GPU Architecture: Designed to deliver outstanding gaming and creating, professional graphics, AI, and compute performance" (PDF). Nvidia. p. 30. Retrieved April 5, 2023.
- ^ "Nvidia Ada Lovelace GPU Architecture: Designed to deliver outstanding gaming and creating, professional graphics, AI, and compute performance" (PDF). Nvidia. p. 12. Retrieved April 6, 2023.
- ^ "Nvidia Ada Lovelace GPU Architecture: Designed to deliver outstanding gaming and creating, professional graphics, AI, and compute performance" (PDF). Nvidia. p. 12. Retrieved April 5, 2023.
- ^ James, Dave (September 1, 2020). "Nvidia confirms Samsung 8nm process for RTX 3090, RTX 3080, and RTX 3070". PC Gamer. Retrieved April 5, 2023.
- ^ Bosnjak, Dominik (September 1, 2020). "Samsung's old 8nm tech at the heart of NVIDIA's monstrous Ampere cards". SamMobile. Retrieved April 5, 2023.
- ^ "Nvidia Ada Lovelace GPU Architecture: Designed to deliver outstanding gaming and creating, professional graphics, AI, and compute performance" (PDF). Nvidia. p. 25. Retrieved April 5, 2023.
- ^ Muthana, Prathap; Mishra, Sampurnananda; Patait, Abhijit (January 18, 2023). "Improving Video Quality and Performance with AV1 and NVIDIA Ada Lovelace Architecture". Nvidia Developer. Retrieved April 5, 2023.
- ^ "Nvidia Ada Science: How Ada advances the science of graphics with DLSS 3" (PDF). Nvidia. p. 13. Retrieved April 5, 2023.
- ^ Garreffa, Anthony (September 25, 2022). "NVIDIA's next-gen GeForce RTX 40 series lack DP2.0 connectivity, silly". TweakTown. Retrieved April 5, 2023.
- ^ Judd, Will (November 3, 2022). "AMD announces 7900 XTX and 7900 XT graphics cards with FSR 3". Eurogamer. Retrieved April 5, 2023.
- ^ "NVIDIA confirms Ada 102/103/104 GPU specs, AD104 has more transistors than GA102". VideoCardz. September 23, 2022. Retrieved September 23, 2022.
- ^ "NVIDIA AD102 GPU Specs". TechPowerUp. Retrieved December 17, 2022.
- ^ "NVIDIA AD103 GPU Specs". TechPowerUp. Retrieved July 16, 2024.
- ^ "NVIDIA AD104 GPU Specs". TechPowerUp. Retrieved October 18, 2022.
- ^ "NVIDIA AD106 GPU Specs". TechPowerUp. Retrieved December 17, 2022.
- ^ "NVIDIA AD107 GPU Specs". TechPowerUp. Retrieved December 17, 2022.
- ^ "NVIDIA L4 Specs". TechPowerUp. March 21, 2023. Retrieved April 15, 2024.