{{Short description|Series of GPUs by Nvidia}}
{{for|GeForce cards with a model number of 3X0|GeForce 300 series}}
{{Use mdy dates|date=October 2018}}
{{Infobox GPU
| name = GeForce 3 series
| image = [[File:Nvidia GeForce 3 Series Logo.png|200px|GeForce 3 series logo]]
| codename = NV20
| created = {{start date and age|February 27, 2001}}
| model1 = GeForce 3 series
| model2 = GeForce 3 Ti series
| midrange = GeForce 3, Ti 200
| highend = GeForce 3, Ti 500
| d3dversion = [[Microsoft Direct3D#Direct3D 8.0|Direct3D 8.0]]<br />[[High Level Shader Language|Vertex Shader 1.1]]<br />[[High Level Shader Language|Pixel Shader 1.1]]
| openglversion = [[OpenGL#OpenGL 1.3|OpenGL 1.3]]
| predecessor = [[GeForce 2 series]]
| successor = [[GeForce 4 series]]
| support status = Unsupported
| architecture = [[Kelvin (microarchitecture)|Kelvin]]
}}

The '''GeForce 3 series''' (NV20) is the third generation of [[Nvidia]]'s [[GeForce]] line of [[graphics processing unit]]s (GPUs). Introduced in February 2001,<ref name="Nvidia GeForce3 Roundup - July 2001">{{Cite web|url=https://www.anandtech.com/show/802|title=NVIDIA GeForce3 Roundup - July 2001}}</ref> it advanced the GeForce architecture by adding programmable pixel and vertex shaders and [[multisample anti-aliasing]], and by improving the overall efficiency of the rendering process.

The GeForce 3 was unveiled during the 2001 [[Macworld Conference & Expo|Macworld Conference & Expo/Tokyo 2001]] at [[Makuhari Messe]] and powered real-time demos of Pixar's [[Luxo Jr.|Junior Lamp]] and [[id Software]]'s [[Doom 3]]. Apple would later announce that it had secured launch rights for its new line of computers.

The GeForce 3 family comprises three consumer models: the '''GeForce 3''', the '''GeForce 3 Ti200''', and the '''GeForce 3 Ti500'''. A separate professional version, with a feature-set tailored for computer-aided design, was sold as the '''Quadro DCC'''. A derivative of the GeForce 3, known as the '''NV2A''', is used in the Microsoft [[Xbox (console)|Xbox]] game console.

==Architecture==
[[Image:Geforce3gpu.jpg|thumb|GeForce3 Ti 200 GPU]]

The GeForce 3 was introduced three months after Nvidia acquired the assets of [[3dfx]]. Marketed as the ''nFinite FX Engine'', it was the first [[Microsoft Direct3D]] 8.0-compliant 3D card. Its programmable shader architecture enabled applications to execute custom visual-effects programs written in Microsoft [[Shader]] language 1.1. It is believed that the fixed-function T&L hardware from the GeForce 2 was still included on the chip for use with Direct3D 7.0 applications, as the single vertex shader was not yet fast enough to emulate it.<ref>{{Cite web|url=https://paroj.github.io/gltut/History%20Radeon8500.html|title=Programming at Last}}</ref> With respect to pure pixel and texel throughput, the GeForce 3 has four pixel pipelines, each of which can sample two textures per clock. This is the same configuration as the GeForce 2, excluding the slower GeForce 2 MX line.

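The throughput implied by this configuration follows directly from the core clock. The sketch below (illustrative Python, not vendor code) multiplies the four pipelines, and the two texture samplers per pipeline, by the core clocks listed under Product positioning:

<syntaxhighlight lang="python">
# Theoretical peak fill rates for a chip with 4 pixel pipelines,
# each sampling up to 2 textures per clock (the GeForce 3 layout).
PIPELINES = 4
TEXTURES_PER_PIPE = 2

def fill_rates(core_mhz):
    """Return (megapixels/s, megatexels/s) at the given core clock in MHz."""
    pixel_rate = PIPELINES * core_mhz            # 4 pixels per clock
    texel_rate = pixel_rate * TEXTURES_PER_PIPE  # 2 texture samples per pixel
    return pixel_rate, texel_rate

print(fill_rates(200))  # original GeForce 3 at 200 MHz -> (800, 1600)
print(fill_rates(240))  # Ti500 at 240 MHz              -> (960, 1920)
</syntaxhighlight>
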
To take better advantage of available memory performance, the GeForce 3 has a memory subsystem dubbed ''Lightspeed Memory Architecture'' (LMA). This is composed of several mechanisms that reduce overdraw, conserve memory bandwidth by compressing the [[z-buffering|z-buffer]] (depth buffer) and better manage interaction with the DRAM.

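Of these mechanisms, the reduction of overdraw is the easiest to illustrate: fragments that the depth buffer already shows to be hidden are discarded before any shading work is spent on them. The following is a simplified sketch of that general idea (early depth rejection), not Nvidia's actual hardware logic; the function names are invented for the example:

<syntaxhighlight lang="python">
# Early depth rejection: skip shading for fragments already known to be
# hidden behind previously drawn geometry, reducing overdraw.
def rasterize(fragments, z_buffer, shade):
    """fragments: iterable of (x, y, depth); z_buffer maps (x, y) -> depth."""
    shaded = 0
    for x, y, z in fragments:
        if z >= z_buffer.get((x, y), float("inf")):
            continue              # occluded: rejected before shading
        z_buffer[(x, y)] = z      # record the new nearest depth
        shade(x, y)               # only potentially visible fragments are shaded
        shaded += 1
    return shaded
</syntaxhighlight>
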
Other architectural changes include support for environment-mapped bump mapping (EMBM)<ref>{{Cite web|url=http://ixbtlabs.com/articles/digest3d/0402/itogi-video-gf3.html|title=April 2002 3Digest - NVIDIA GeForce3|first=iXBT|last=Labs|website=iXBT Labs}}</ref><ref>{{Cite web|url=https://www.nvidia.com/object/geforce3_faq.html|title=GeForce3 FAQ}}</ref> (first introduced by Matrox in 1999) and improvements to [[Spatial anti-aliasing|anti-aliasing]] functionality. Previous GeForce chips could perform only super-sampled anti-aliasing (SSAA), a demanding process that renders the image at a larger size internally and then scales it down to the final output resolution. The GeForce 3 adds multi-sample anti-aliasing (MSAA) and [[Quincunx]] anti-aliasing, both of which perform significantly better than super-sampling at the expense of image quality. With multi-sampling, the render output units super-sample only the [[Z-buffering|Z-buffer]]s and [[stencil buffer]]s, and use that extra geometric detail to determine whether a pixel covers more than one polygonal object. This saves the pixel/fragment shader from having to render multiple fragments for pixels where the same object covers all of the sub-pixels. The method fails with texture maps that have varying transparency (e.g. a texture map representing a chain-link fence). Quincunx anti-aliasing is a blur filter that shifts the rendered image half a pixel up and half a pixel left to create sub-pixels, which are then averaged together in a diagonal cross pattern, softening not only jagged edges but also some overall image detail. Finally, the GeForce 3's [[texture sampling unit]]s were upgraded to support 8-tap [[anisotropic filtering]], compared to the previous limit of 2-tap on the GeForce 2. With 8-tap anisotropic filtering enabled, distant textures can be noticeably sharper.

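The "diagonal cross" is a five-tap quincunx kernel. A minimal sketch of the filtering step follows, assuming the commonly cited weights of 1/2 for the centre sample and 1/8 for each diagonal neighbour; in hardware the diagonal taps come from a second, half-pixel-shifted sample grid, for which a single image stands in here:

<syntaxhighlight lang="python">
# Quincunx-style filter: each output pixel averages its own sample with its
# four diagonal neighbours in a quincunx ("dice-five") pattern.
def quincunx(img):
    """img: 2D list of grayscale values; returns the filtered image."""
    h, w = len(img), len(img[0])

    def at(y, x):  # clamp sampling at the image borders
        return img[min(max(y, 0), h - 1)][min(max(x, 0), w - 1)]

    return [[0.5 * img[y][x]
             + 0.125 * (at(y - 1, x - 1) + at(y - 1, x + 1)
                        + at(y + 1, x - 1) + at(y + 1, x + 1))
             for x in range(w)] for y in range(h)]
</syntaxhighlight>

Because the weights sum to 1, the filter preserves overall brightness while blending each pixel with its diagonal neighbours, which is what smooths edges and also softens fine detail.
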
==Performance==
The GeForce 3 GPU (NV20) has the same theoretical pixel and texel throughput per clock as the GeForce 2 (NV15). GeForce 2 Ultra is clocked 25% faster than the original GeForce 3 and 43% faster than the Ti200; this means that in select instances, like Direct3D 7 T&L benchmarks, the GeForce 2 Ultra and sometimes even GTS can outperform the GeForce 3 and Ti200, because the newer GPUs use the same fixed-function T&L unit, but are clocked lower.<ref>{{cite web|url=https://techreport.com/review/2515/nvidia-geforce3-graphics-processor/11|title=NVIDIA's GeForce3 graphics processor|website=techreport.com|date=June 26, 2001|access-date=June 25, 2017}}</ref> The GeForce 2 Ultra also has considerable raw memory bandwidth available to it, only matched by the GeForce 3 Ti500. However, when comparing anti-aliasing performance the GeForce 3 is clearly superior because of its MSAA support and memory bandwidth/fillrate management efficiency.

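The quoted percentages are simple clock ratios; the GeForce 2 Ultra's 250&nbsp;MHz core clock is not stated in this article but is the figure the comparison assumes:

<syntaxhighlight lang="python">
# Core-clock ratios behind the quoted figures.
ULTRA, GF3, TI200 = 250, 200, 175   # MHz (Ultra figure assumed, see above)

print(ULTRA / GF3 - 1)    # 0.25   -> "25% faster" than the original GeForce 3
print(ULTRA / TI200 - 1)  # ~0.429 -> "43% faster" than the Ti200
</syntaxhighlight>
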
When comparing the shading capabilities to the Radeon 8500, reviewers noted superior precision with the ATi card.<ref>{{cite web|url=https://techreport.com/review/3266/ati-radeon-8500-off-the-beaten-path/5|title=ATI's Radeon 8500: Off the beaten path|website=techreport.com|date=December 31, 2001|access-date=June 25, 2017}}</ref>

==Product positioning==
Nvidia refreshed the lineup in October 2001 with the release of the GeForce 3 Ti200 and Ti500. This coincided with ATI's releases of the [[Radeon R200|Radeon 8500]] and [[Radeon R100|Radeon 7500]]. The Ti500 has higher core and memory clocks (240&nbsp;MHz core/250&nbsp;MHz RAM) than the original GeForce 3 (200&nbsp;MHz/230&nbsp;MHz), and generally matches the Radeon 8500. The Ti200 was the slowest and lowest-priced GeForce3 release. It is clocked lower (175&nbsp;MHz/200&nbsp;MHz), yet it surpasses the Radeon 7500 in speed and feature set, apart from dual-monitor support.

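The memory clocks translate into peak bandwidth as follows. This is a worked sketch; the 128-bit DDR memory bus is not stated in this article and is assumed here:

<syntaxhighlight lang="python">
# Peak memory bandwidth = effective transfers/s * bus width in bytes.
# Assumes a 128-bit (16-byte) DDR bus: two transfers per memory clock.
def bandwidth_gb_s(mem_clock_mhz, bus_bits=128, transfers_per_clock=2):
    return mem_clock_mhz * 1e6 * transfers_per_clock * (bus_bits / 8) / 1e9

print(bandwidth_gb_s(230))  # GeForce 3: 7.36 GB/s
print(bandwidth_gb_s(250))  # Ti500:     8.0 GB/s
print(bandwidth_gb_s(200))  # Ti200:     6.4 GB/s
</syntaxhighlight>
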
The original GeForce3 was only released in 64 MiB configurations, while the Ti200 and Ti500 were also released as 128 MiB versions.

==Specifications==

==Discontinued support==
Nvidia has ceased driver support for the GeForce 3 series.

[[Image:GeForce3 Ti 500.jpg|thumb|Nvidia GeForce3 Ti 500]]

===Final drivers===
* Windows 9x & Windows Me: 81.98, released on December 21, 2005; [http://www.nvidia.com/object/win9x_81.98.html Download]
:[http://www.nvidia.com/object/81.98_9x_supported.html Product Support List Windows 95/98/Me – 81.98]
* Windows 2000, 32-bit Windows XP & Media Center Edition: 93.71, released on November 2, 2006; [https://www.nvidia.com/download/driverResults.aspx/5/ Download] (despite claims in the documentation that 94.24 supports the GeForce 3 series, it does not).

The drivers for Windows 2000/XP may be installed on later versions of Windows such as Windows Vista and 7; however, they do not support desktop compositing or the [[Windows Aero|Aero]] effects of these operating systems.

:(Products supported list also on this page)
[http://www.nvidia.com/object/win9x_archive.html Windows 95/98/Me Driver Archive]<br />
[http://www.nvidia.com/object/winxp-2k_archive.html Windows XP/2000 Driver Archive]

== Successor ==
The [[GeForce 4 series]] (Non-MX), introduced in April 2002, was a revision of the GeForce 3 architecture. The budget variant, dubbed the [[GeForce 4 MX]], was closer in terms of design to the GeForce 2.

== See also ==
*[[Graphics card]]
*[[Graphics processing unit]]
*[[Kelvin (microarchitecture)]]

==References==
{{Reflist}}

{{Nvidia}}

{{DEFAULTSORT:Geforce 3 series}}
[[Category:Computer-related introductions in 2001]]
[[Category:GeForce series|3 Series]]
[[Category:Graphics cards]]
