List of Nvidia graphics processing units#GeForce 300 series
{{Short description|none}}
{{Use dmy dates|date=April 2024}}
This list contains general information about graphics processing units (GPUs) and video cards from Nvidia, based on official specifications. In addition, some Nvidia motherboards come with integrated onboard GPUs. Limited/Special/Collectors' Editions and add-in-board (AIB) partner versions are not included.
{{toclimit|3}}
Field explanations
The fields in the table listed below describe the following:
- Model – The marketing name for the processor, assigned by Nvidia.
- Launch – Date of release for the processor.
- Code name – The internal engineering codename for the processor (typically designated by an NVXY name and later GXY where X is the series number and Y is the schedule of the project for that generation).
- Fab – Fabrication process. Average feature size of components of the processor.
- Bus interface – Bus by which the graphics processor is attached to the system (typically an expansion slot, such as PCI, AGP, or PCI-Express).
- Memory – The amount of graphics memory available to the processor.
- SM Count – Number of streaming multiprocessors.{{cite web |url=http://www.motherboards.org/reviews/hardware/2038_5.html |title=Nvidia GeForce GTX 480 Video Card Review :: Streaming Multiprocessor |website=Motherboards.org |date=March 26, 2010 |author=Benjamin Sun |access-date=March 14, 2012 |archive-url=https://web.archive.org/web/20111220040904/http://www.motherboards.org/reviews/hardware/2038_5.html |archive-date=December 20, 2011 |url-status=live }}
- Core clock – The factory core clock frequency; while some manufacturers adjust clocks lower and higher, this number will always be the reference clock used by Nvidia.
- Memory clock – The factory effective memory clock frequency (while some manufacturers adjust clocks lower and higher, this number will always be the reference clock used by Nvidia). All DDR/GDDR memories operate at half this frequency, except for GDDR5, which operates at one quarter of it.
- Core config – The layout of the graphics pipeline, in terms of functional units. Over time the number, type, and variety of functional units in the GPU core have changed significantly; before each section in the list there is an explanation as to what functional units are present in each generation of processors. In later models, shaders are integrated into a unified shader architecture, where any one shader can perform any of the functions listed.
- Fillrate – Maximum theoretical fill rate in textured pixels per second. This number is generally used as a measure of maximum throughput for the GPU; a higher fill rate corresponds to a more powerful (and faster) GPU (a short calculation sketch covering fillrate and memory bandwidth follows this list).
- Memory subsection
- Bandwidth – Maximum theoretical bandwidth for the processor at factory clock with factory bus width. GHz = 10{{sup|9}} Hz.
- Bus type – Type of memory bus or buses used.
- Bus width – Maximum bit width of the memory bus or buses used. This will always be a factory bus width.
- API support section
- Direct3D – Maximum version of Direct3D fully supported.
- OpenGL – Maximum version of OpenGL fully supported.
- OpenCL – Maximum version of OpenCL fully supported.
- Vulkan – Maximum version of Vulkan fully supported.
- CUDA – Maximum version of CUDA fully supported.
- Features – Additional features that are not a standard part of the two graphics libraries.
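The derived columns follow directly from the listed clocks and unit counts: bandwidth is the effective memory transfer rate multiplied by the bus width in bytes, and fillrates are the core clock multiplied by the number of pixel pipelines or texture units. The minimal sketch below (illustrative only, not Nvidia tooling) reproduces the GeForce 256 DDR figures from the tables that follow, assuming the listed 150 MHz is the real DDR clock with two transfers per cycle.

<syntaxhighlight lang="cuda">
// Minimal host-side sketch (plain C++; builds with nvcc or any C++ compiler)
// relating the derived table fields to the listed clocks, using the
// GeForce 256 DDR row as input. Assumes the listed 150 MHz memory clock is
// the real DDR clock, i.e. two transfers per cycle.
#include <cstdio>

int main() {
    const double core_clock_mhz      = 120.0;  // Core clock (MHz)
    const double mem_clock_mhz       = 150.0;  // Memory clock (MHz), DDR
    const double transfers_per_clock = 2.0;    // DDR: two transfers per cycle
    const int    bus_width_bits      = 128;    // Bus width (bit)
    const int    pixel_pipelines     = 4;      // core config 4:4:4
    const int    texture_units       = 4;

    // Bandwidth (GB/s) = effective transfer rate * bus width in bytes
    double bandwidth_gbs =
        mem_clock_mhz * 1e6 * transfers_per_clock * (bus_width_bits / 8.0) / 1e9;

    // Fillrates scale with the core clock and the number of units
    double mpixels = pixel_pipelines * core_clock_mhz;  // MPixels/s
    double mtexels = texture_units   * core_clock_mhz;  // MTexels/s

    printf("Bandwidth       : %.3f GB/s\n", bandwidth_gbs);  // 4.800, as listed
    printf("Pixel fillrate  : %.0f MPixels/s\n", mpixels);   // 480
    printf("Texture fillrate: %.0f MTexels/s\n", mtexels);   // 480
    return 0;
}
</syntaxhighlight>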
Desktop GPUs
=Pre-GeForce=
{{unreferenced section|date=May 2024}}
{{further|Fahrenheit (microarchitecture)}}
{{sort-under}}
{{Row hover highlight}}
class="mw-datatable wikitable sortable sort-under" style="font-size:85%; text-align:center;" |
rowspan="2" style="vertical-align: bottom"|Model
! rowspan="2" style="vertical-align: bottom"|Launch ! rowspan="2" {{Vertical header|Code name}} ! rowspan="2" {{Vertical header|Fab (nm){{cite web |title=3D accelerator database |url=http://vintage3d.org/dbn.php |website=Vintage 3D |access-date=21 July 2019 |archive-url=https://web.archive.org/web/20181023222614/http://www.vintage3d.org/dbn.php |archive-date=23 October 2018 |url-status=live }}}} ! rowspan="2" {{Vertical header|Transistors (million)}} ! rowspan="2" {{Vertical header|Die size (mm2)}} ! rowspan="2" {{Vertical header|Bus interface}} ! rowspan="2" {{Vertical header|Core clock (MHz)}} ! rowspan="2" {{Vertical header|Memory clock (MHz)}} ! rowspan="2" {{Vertical header|Core config{{efn|name=geforce 256 1|Pixel pipelines: texture mapping units: render output units}}}} ! colspan="4" |Memory ! colspan="4" |Fillrate ! colspan="2" |Latest API support ! rowspan="2" {{Vertical header|TDP (Watts)}} |
---|
{{Vertical header|Size (MiB)}}
!{{Vertical header|Bandwidth (GB/s)}} !{{Vertical header|Bus type}} !{{Vertical header|Bus width (bit)}} !{{Vertical header|MOperations/s}} !{{Vertical header|MPixels/s}} !{{Vertical header|MTexels/s}} !{{Vertical header|MVertices/s}} !{{Vertical header|Direct3D}} !{{Vertical header|OpenGL}} |
style="text-align:left" |NV1
|{{Date table sorting|May 22, 1995}} |NV1 |90 |PCI |12 |50 |rowspan="3" |1:1:1 |1 |0.4 |FPM |64 |12 |12 |12 | rowspan="10" |0 |1.0 |n/a |? |
style="text-align:left" |Riva 128
|{{Date table sorting|August 25, 1997}} | rowspan="2" |NV3 |SGS 350 nm | rowspan="2" |90 | rowspan="2" |100 | rowspan="2" |100 |4 | rowspan="2" |1.6 | rowspan="9" |SDR | rowspan="3" |128 | rowspan="2" |100 | rowspan="2" |100 | rowspan="2" |100 |5.0 | rowspan="2" |1.0 |? |
style="text-align:left" |Riva 128ZX
|{{Date table sorting|February 23, 1998}} |SGS/TSMC 350 nm | rowspan="2" |AGP 2x, PCI |8 | |? |
style="text-align:left" |Riva TNT
|{{Date table sorting|June 15, 1998}} |NV4 |TSMC 350 nm |90 |90 |110 |rowspan="7" |2:2:2 |8 |1.76 |180 |180 |180 |6.0 | rowspan="7" |1.2 |? |
style="text-align:left" |Vanta
|{{Date table sorting|March 22, 1999}} | rowspan="3" |NV6 | rowspan="4" |TSMC 250 nm | rowspan="3" | | rowspan="3" | |AGP 4x, PCI |100 |125 |8 |1.0 |rowspan="3" |64 |200 |200 |200 | |? |
style="text-align:left" |Vanta LT
|{{Date table sorting|March 2000}} |AGP 2x |80 |100 |8 |0.8 |160 |160 |160 | |? |
style="text-align:left" |Riva TNT2 M64
|{{Date table sorting|October 1999}} | rowspan="4" |AGP 4x, PCI | rowspan="2" |125 | rowspan="2" |150 |8 |1.2 | rowspan="2" |250 | rowspan="2" |250 | rowspan="2" |250 | |? |
style="text-align:left" |Riva TNT2
|{{Date table sorting|March 15, 1999}} | rowspan="3" |NV5 | rowspan="3" |90 |16 |2.4 | rowspan="3" |128 | |? |
style="text-align:left" |Riva TNT2 Pro
|{{Date table sorting|October 12, 1999}} |TSMC 220 nm |143 |166 |16 |2.656 |286 |286 |286 | |? |
style="text-align:left" |Riva TNT2 Ultra
|{{Date table sorting|March 15, 1999}} |TSMC 250 nm |150 |183 |16 |2.928 |300 |300 |300 | |? |
{{notelist}}
=GeForce 256 series=
{{Further|GeForce 256|Celsius (microarchitecture)}}
- All models are manufactured on the TSMC 220 nm fabrication process
- All models support Direct3D 7.0 and OpenGL 1.2
- All models support hardware Transform and Lighting (T&L) and Cube Environment Mapping
{{Row hover highlight}}
class="mw-datatable wikitable sortable sort-under" style="font-size:85%; text-align:center;" |
rowspan="2" style="vertical-align: bottom"|Model
! rowspan="2" style="vertical-align: bottom"|Launch ! rowspan="2" {{Vertical header|Code name}} ! rowspan="2" {{Vertical header|Transistors (million)}} ! rowspan="2" {{Vertical header|Die size (mm2)}} ! rowspan="2" {{Vertical header|Bus interface}} ! rowspan="2" {{Vertical header|Core clock (MHz)}} ! rowspan="2" {{Vertical header|Memory clock (MHz)}} ! rowspan="2" {{Vertical header|Core config{{efn|name=geforce 256 1|Pixel pipelines: texture mapping units: render output units}}}} ! colspan="4" |Memory ! colspan="4" |Fillrate ! rowspan="2" {{Vertical header|Performance (MFLOPS ! rowspan="2" {{Vertical header|TDP (Watts)}} |
---|
{{Vertical header|Size (MiB)}}
!{{Vertical header|Bandwidth (GB/s)}} !{{Vertical header|Bus type}} !{{Vertical header|Bus width (bit)}} !{{Vertical header|MOperations/s}} !{{Vertical header|MPixels/s}} !{{Vertical header|MTexels/s}} !{{Vertical header|MVertices/s}} |
style="text-align:left" |GeForce 256 SDR{{Cite web|title=4x AGP GeForce 256 Graphics Accelerator|url=http://vgamuseum.info/images/doc/nvidia/gf256/geforce256_graphics.pdf|access-date=September 26, 2022|website=vgamuseum.info|archive-date=7 January 2024|archive-url=https://web.archive.org/web/20240107050758/http://vgamuseum.info/images/doc/nvidia/gf256/geforce256_graphics.pdf|url-status=live}}
|{{Dts|1999|October|11|format=mdy|abbr=on}} | rowspan="2" |NV10 | rowspan="2" |17 | rowspan="2" |139 | rowspan="2" |{{nowrap|AGP 4x}}, PCI | rowspan="2" |120 |166 | rowspan="2" |4:4:4 | rowspan="2" |32 |2.656 |SDR | rowspan="2" |128 | rowspan="2" |480 | rowspan="2" |480 | rowspan="2" |480 | rowspan="2" |0 | rowspan="2" |960 |13 |
style="text-align:left" |GeForce 256 DDR{{cite web|title=NVIDIA GeForce 256 DDR Specs|url=https://www.techpowerup.com/gpu-specs/geforce-256-ddr.c734|access-date=February 16, 2021|website=TechPowerUp|language=en}}
|{{Dts|1999|December|13|format=mdy|abbr=on}} |150 |4.800 |DDR |12 |
{{notelist}}
=GeForce2 series=
{{Further|GeForce 2 series|Celsius (microarchitecture)}}
- All models support Direct3D 7 and OpenGL 1.2
- All models support TwinView Dual-Display Architecture, Second Generation Transform and Lighting (T&L), Nvidia Shading Rasterizer (NSR), High-Definition Video Processor (HDVP)
- GeForce2 MX models support Digital Vibrance Control (DVC)
{{Row hover highlight}}
class="mw-datatable wikitable sortable sort-under" style="font-size:85%; text-align:center;" |
rowspan="2" style="vertical-align: bottom"|Model
! rowspan="2" style="vertical-align: bottom"|Launch ! rowspan="2" {{Vertical header|Code name}} ! rowspan="2" style="vertical-align: bottom"|Fab (nm) ! rowspan="2" {{Vertical header|Transistors (million)}} ! rowspan="2" {{Vertical header|Die size (mm2)}} ! rowspan="2" {{Vertical header|Bus interface}} ! rowspan="2" {{Vertical header|Core clock (MHz)}} ! rowspan="2" {{Vertical header|Memory clock (MHz)}} ! rowspan="2" {{Vertical header|Core config{{efn|name=geforce 2 1|Pixel pipelines: texture mapping units: render output units}}}} ! colspan="4" |Memory ! colspan="4" |Fillrate ! rowspan="2" {{Vertical header|Performance (GFLOPS ! rowspan="2" {{Vertical header|TDP (Watts)}} |
---|
{{Vertical header|Size (MiB)}}
!{{Vertical header|Bandwidth (GB/s)}} !{{Vertical header|Bus type}} !{{Vertical header|Bus width (bit)}} !{{Vertical header|MOperations/s}} !{{Vertical header|MPixels/s}} !{{Vertical header|MTexels/s}} !{{Vertical header|MVertices/s}} |
style="text-align:left" |GeForce2 MX IGP + nForce 220/420
|June 4, 2001 | rowspan="4" |NV1A (IGP) / NV11 (MX) | rowspan="4" |64 |FSB | rowspan="3" |175 |133 | rowspan="4" |2:4:2 |Up to 32 system RAM |2.128 |DDR |64 | rowspan="3" |350 | rowspan="3" |350 | rowspan="3" |700 | rowspan="8" |0 |0.700 |3 |
style="text-align:left" |GeForce2 MX200
|March 3, 2001 | rowspan="3" |{{nowrap|AGP 4x}}, PCI | rowspan="2" |166 | rowspan="6" |32 |1.328 | rowspan="2" |SDR |64 | |1 |
style="text-align:left" |GeForce2 MX
|June 28, 2000 |2.656 |128 | |4 |
style="text-align:left" |GeForce2 MX400
|March 3, 2001 | rowspan="3" |200 |166,200 (SDR) |1.328 3.200 2.656 |SDR |64/128 (SDR) |400 |400 |800 |0.800 |5 |
style="text-align:left" |GeForce2 GTS
|April 26, 2000 | rowspan="4" |NV15 | rowspan="4" |88 | rowspan="4" |AGP 4x |166 | rowspan="4" |4:8:4 |5.312 |rowspan="4" |DDR |rowspan="4" |128 | rowspan="2" |800 | rowspan="2" |800 | rowspan="2" |1,600 |1.600 |6 |
style="text-align:left" |GeForce2 Pro
|December 5, 2000 | rowspan="2" |200 | rowspan="2" |6.4 | |? |
style="text-align:left" |GeForce2 Ti
|October 1, 2001 |TSMC |rowspan="2" |250 | rowspan="2" |1,000 | rowspan="2" |1,000 | rowspan="2" |2,000 |2.000 |? |
style="text-align:left" |GeForce2 Ultra
|August 14, 2000 |TSMC |230 |64 |7.36 | |? |
{{notelist}}
=GeForce3 series=
{{Further|GeForce 3 series|Kelvin (microarchitecture)}}
- All models are manufactured on the TSMC 150 nm fabrication process
- All models support Direct3D 8.0 and OpenGL 1.3
- All models support 3D Textures, Lightspeed Memory Architecture (LMA), nFiniteFX Engine, Shadow Buffers
{{Row hover highlight}}
class="mw-datatable wikitable sortable sort-under" style="font-size:85%; text-align:center;" |
rowspan="2" style="vertical-align: bottom"|Model
! rowspan="2" style="vertical-align: bottom"|Launch ! rowspan="2" {{Vertical header|Code name}} ! rowspan="2" {{Vertical header|Transistors (million)}} ! rowspan="2" {{Vertical header|Die size (mm2)}} ! rowspan="2" {{Vertical header|Bus interface}} ! rowspan="2" {{Vertical header|Core clock (MHz)}} ! rowspan="2" {{Vertical header|Memory clock (MHz)}} ! rowspan="2" {{Vertical header|Core config{{efn|name=geforce 3 1|Pixel shaders: vertex shaders: texture mapping units: render output units}}}} ! colspan="4" |Memory ! colspan="4" |Fillrate ! rowspan="2" {{Vertical header|Performance (GFLOPS ! rowspan="2" {{Vertical header|TDP (Watts)}} |
---|
{{Vertical header|Size (MiB)}}
!{{Vertical header|Bandwidth (GB/s)}} !{{Vertical header|Bus type}} !{{Vertical header|Bus width (bit)}} !{{Vertical header|MOperations/s}} !{{Vertical header|MPixels/s}} !{{Vertical header|MTexels/s}} !{{Vertical header|MVertices/s}} |
style="text-align:left" |GeForce3 Ti200
|October 1, 2001 | rowspan="3" |NV20 | rowspan="3" |57 | rowspan="3" |128 | rowspan="3" |{{nowrap|AGP 4x}}, PCI |175 |200 | rowspan="3" |4:1:8:4 |64 |6.4 |rowspan="3" |DDR |rowspan="3" |128 |700 |700 |1400 |43.75 |8.750 |? |
style="text-align:left" |GeForce3
|February 27, 2001 |200 |230 |64 |7.36 |800 |800 |1600 |50 |10.00 |? |
style="text-align:left" |GeForce3 Ti500
|October 1, 2001 |240 |250 |64 |8.0 |960 |960 |1920 |60 |12.00 |29 |
{{notelist}}
=GeForce4 series=
{{Further|GeForce 4 series|Kelvin (microarchitecture)}}
- All models are manufactured on the TSMC 150 nm fabrication process
- All models support Accuview Antialiasing (AA), Lightspeed Memory Architecture II (LMA II), nView
{{sort-under}}
{{Row hover highlight}}
class="mw-datatable wikitable sortable sort-under" style="font-size:85%; text-align:center;" |
rowspan="2" style="vertical-align: bottom"|Model
! rowspan="2" style="vertical-align: bottom"|Launch ! rowspan="2" {{Vertical header|Code name}} ! rowspan="2" {{Vertical header|Transistors (million)}} ! rowspan="2" {{Vertical header|Die size (mm2)}} ! rowspan="2" {{Vertical header|Bus interface}} ! rowspan="2" {{Vertical header|Core clock (MHz)}} ! rowspan="2" {{Vertical header|Memory clock (MHz)}} ! rowspan="2" {{Vertical header|Core config{{efn|name=geforce 4 1|Pixel shaders: vertex shaders: texture mapping units: render output units}}}} ! colspan="4" |Memory ! colspan="4" |Fillrate ! rowspan="2" {{Vertical header|Performance ! colspan="2" |Supported API version ! rowspan="2" {{Vertical header|TDP (Watts)}} |
---|
{{Vertical header|Size (MiB)}}
!{{Vertical header|Bandwidth (GB/s)}} !{{Vertical header|Bus type}} !{{Vertical header|Bus width (bit)}} !{{Vertical header|MOperations/s}} !{{Vertical header|MPixels/s}} !{{Vertical header|MTexels/s}} !{{Vertical header|MVertices/s}} !{{Vertical header|Direct3D}} !{{Vertical header|OpenGL}} |
style="text-align:left" |GeForce4 MX IGP + nForce2
|October 1, 2002 |NV1F |? |? |FSB |rowspan="5" |250 |133 |rowspan="8" |2:0:4:2 |Up to 128 system RAM |2.128 |DDR |64 | rowspan="5" |500 | rowspan="5" |500 | rowspan="2" |1,000 | rowspan="5" |125 |1.000 | rowspan="8" |7.0 |rowspan="8"|1.2 |? |
style="text-align:left" |GeForce4 MX420
| February 6, 2002 | rowspan="2" |NV17 | rowspan="2" |65 | rowspan="2" |AGP 4x |166 | 64 | 2.656 | SDR | 128 (SDR) | | 14 |
style="text-align:left" |GeForce4 MX440 SE
| 2002 | rowspan="2" |64 |rowspan="12" |DDR 1000 | |13 |
style="text-align:left" |GeForce MX4000
| December 14, 2003 | rowspan="2" |NV18B | rowspan="2" |29 | rowspan="2" |65 | AGP 8x | rowspan="2" |166 | rowspan="2" |2.656 | rowspan="2" |64 | rowspan="2" |1000 | | 14 |
style="text-align:left" |GeForce PCX4300
| February 19, 2004 | PCIe x16 | 128 | | 16 |
style="text-align:left" |GeForce4 MX440
| February 6, 2002 | NV17 | 29 | 65 | AGP 4x | rowspan="2" |275 | 200 |rowspan="5" |64 |6.4 |128 | rowspan="2" |550 | rowspan="2" |550 | rowspan="2" |1,100 | rowspan="2" |137.5 |1.100 |18 |
style="text-align:left" |GeForce4 MX440 8x
| September 25, 2002 | NV18 | 65 | AGP 8x | 166 |64 | |19 |
style="text-align:left" |GeForce4 MX460
| February 6, 2002 | NV17 |29 |65 |AGP 4x |300 |275 |8.8 |rowspan="7" |128 |600 |600 |1,200 |150 |1.200 |22 |
style="text-align:left" |GeForce4 Ti4200
|April 16, 2002 |NV25 |142 |AGP 4x |rowspan="2" |250 |222 (128 MiB) |rowspan="6" |4:2:8:4 |7.104 (128 MiB) | rowspan="2" |1,000 | rowspan="2" |1,000 | rowspan="2" |2,000 | rowspan="2" |125 |15.00 | rowspan="6" |8.0a |rowspan="6" |1.3 |33 |
style="text-align:left" |GeForce4 Ti4200 8x
| September 25, 2002 | NV28 | 142 | AGP 8x | 250 | 8.0 | | 34 |
style="text-align:left" |GeForce4 Ti4400
| February 6, 2002 | NV25 | 63 | 142 | AGP 4x | rowspan="2" |275 | rowspan="2" |275 | rowspan="4" |128 | rowspan="2" |8.8 | rowspan="2" |1,100 | rowspan="2" |1,100 | rowspan="2" |2,200 | rowspan="2" |137.5 |16.50 | 37 |
style="text-align:left" |GeForce4 Ti4400 8x (Ti4800SE{{efn|name=geforce 4 2|GeForce4 Ti4400 8x: Card manufacturers utilizing this chip labeled the card as a Ti4800SE. The surface of the chip has "Ti-8x" printed on it.}}) | January 20, 2003 | NV28 | 63 | 101 | AGP 8x | | 38 |
style="text-align:left" |GeForce4 Ti4600
| February 6, 2002 | NV25 | 63 | 142 | AGP 4x | rowspan="2" |300 | rowspan="2" |325 | rowspan="2" |10.4 | rowspan="2" |1,200 | rowspan="2" |1,200 | rowspan="2" |2,400 | rowspan="2" |150 |18.00 | ? |
style="text-align:left" |GeForce4 Ti4600 8x (Ti4800{{efn|name=geforce 4 3|GeForce4 Ti4600 8x: Card manufacturers utilizing this chip labeled the card as a Ti4600, and in some cases as a Ti4800. The surface of the chip has "Ti-8x" printed on it, as well as "4800" printed at the bottom.}}) | January 20, 2003 | NV28 | 63 | 101 | AGP 8x | | 43 |
{{notelist}}
{{Row hover highlight}}
class="mw-datatable wikitable sortable sort-under" style="font-size:85%; text-align:center;" |
rowspan="2" | Model
! colspan="2" | Features |
---|
nFiniteFX II Engine
! Video Processing Engine (VPE) |
style="text-align:left;" | GeForce4 MX420
| {{no}} | {{yes}} |
style="text-align:left;" | GeForce4 MX440 SE
| {{no}} | {{yes}} |
style="text-align:left;" | GeForce4 MX4000
| {{no}} | {{yes}} |
style="text-align:left;" | GeForce4 PCX4300
| {{no}} | {{yes}} |
style="text-align:left;" | GeForce4 MX440
| {{no}} | {{yes}} |
style="text-align:left;" | GeForce4 MX440 8X
| {{no}} | {{yes}} |
style="text-align:left;" | GeForce4 MX460
| {{no}} | {{yes}} |
style="text-align:left;" | GeForce4 Ti4200
| {{yes}} | {{no}} |
style="text-align:left;" | GeForce4 Ti4200 8x
| {{yes}} | {{no}} |
style="text-align:left;" | GeForce4 Ti4400
| {{yes}} | {{no}} |
style="text-align:left;" | GeForce4 Ti4400 8x
| {{yes}} | {{no}} |
style="text-align:left;" | GeForce4 Ti4600
| {{yes}} | {{no}} |
style="text-align:left;" | GeForce4 Ti4600 8x
| {{yes}} | {{no}} |
=GeForce FX (5xxx) series=
{{Further|GeForce FX series|Rankine (microarchitecture)}}
- All models support Direct3D 9.0a and OpenGL 1.5 (OpenGL 2.1 in software with the latest drivers)
- The GeForce FX series runs vertex shaders in an array
{{Row hover highlight}}
class="mw-datatable wikitable sortable sort-under" style="font-size:85%; text-align:center;" |
rowspan="2" |Model
! rowspan="2" |Launch ! rowspan="2" |Code name ! rowspan="2" |Transistors (million) ! rowspan="2" |Die size (mm2) ! rowspan="2" |Core clock (MHz) ! rowspan="2" |Memory clock (MHz) ! rowspan="2" |Core config{{efn|name=geforce fx 1|Pixel shaders: vertex shaders: texture mapping units: render output units}} ! colspan="4" |Memory ! colspan="4" |Fillrate ! rowspan="2" |Performance (GFLOPS ! rowspan="2" |TDP (Watts) |
---|
Size (MiB)
!Bandwidth (GB/s) !Bus type !Bus width (bit) !MOperations/s !MPixels/s !MTexels/s !MVertices/s |
style="text-align:left" |GeForce FX 5100
| rowspan="3" |March 2003 | rowspan="5" |NV34 | rowspan="6" |TSMC 150 nm | rowspan="5" |45{{cite web|title=NVIDIA NV34 GPU Specs|url=https://www.techpowerup.com/gpu-specs/nvidia-nv34.g21|access-date=2021-02-16|website=TechPowerUp|language=en}} | rowspan="5" |124 | rowspan="2" |AGP 8x |200 |rowspan="2" |166 |rowspan="10" |4:2:4:4 |64 |2.6 | rowspan="14" |DDR |64 |800 |800 |800 |100.0 |12.0 |? |
style="text-align:left" |GeForce FX 5200 LE
| rowspan="2" |250 | rowspan="3" |64 |2.6 |64 | rowspan="2" |1,000 | rowspan="2" |1,000 | rowspan="2" |1,000 | rowspan="2" |125.0 |15.0 |? |
style="text-align:left" |GeForce FX 5200
|AGP 8x |200 |3.2 |64 | |21 |
style="text-align:left" |GeForce FX 5200 Ultra
|March 6, 2003 |AGP 8x |325 |325 |10.4 |128 |1,300 |1,300 |1,300 |162.5 |19.5 |32 |
style="text-align:left" |GeForce PCX 5300
|March 17, 2004 |PCIe x16 |250 |166 |128 |2.6 |64 |1,000 |1,000 |1,000 |125.0 |15.0 |21 |
style="text-align:left" |GeForce FX 5500
|March 2004 |NV34B |91 |AGP 8x |270 |166 |64 |5.3 |128 |1,080 |1,080 |1,080 |135.0 |16.2 |? |
style="text-align:left" |GeForce FX 5600 XT
|October 2003 | rowspan="4" |NV31 | rowspan="19" |TSMC 130 nm | rowspan="4" |121 |AGP 8x |235 |200 |64 |3.2 |64 |940 |940 |940 |117.5 |14.1 |? |
style="text-align:left" |GeForce FX 5600
|March 2003 |AGP 8x |325 |275 |8.8 |rowspan="3" |128 |1,300 |1,300 |1,300 |162.5 |19.5 |25 |
style="text-align:left" |GeForce FX 5600 Ultra
| rowspan="2" |March 6, 2003 | rowspan="3" |AGP 8x |350 |350 |rowspan="2" |64 |11.2 |1,400 |1,400 |1,400 |175.0 |21.0 |27 |
style="text-align:left" |GeForce FX 5600 Ultra Rev.2
|400 |400 |12.8 |1,600 |1,600 |1,600 |200.0 |24.0 |31 |
style="text-align:left" |GeForce FX 5700 VE
|September 2004 | rowspan="6" |NV36 |rowspan="6"|133 |rowspan="2"|250 |rowspan="2"|200 |rowspan="6"|4:3:4:4 |rowspan="3" |128 |rowspan="2" |3.2 |rowspan="2" |64 | rowspan="2" |1000 | rowspan="2" |1000 | rowspan="2" |1000 | rowspan="2" |187.5 |17.5 |20 |
style="text-align:left" |GeForce FX 5700 LE
|March 2004 |AGP 8x | |21 |
style="text-align:left" |GeForce FX 5700
|2003 |AGP 8x | rowspan="2" |425 | rowspan="2" |250 | rowspan="2" |8.0 | rowspan="6"|128 | rowspan="2" |1,700 | rowspan="2" |1,700 | rowspan="2" |1,700 | rowspan="2" |318.7 |29.7 |20 |
style="text-align:left" |GeForce PCX 5750
|March 17, 2004 |PCIe x16 |128 | |25 |
style="text-align:left" |GeForce FX 5700 Ultra
|October 23, 2003 | rowspan="8" |AGP 8x | rowspan="2" |475 |453 | rowspan="2" |128 |14.4 |GDDR2 | rowspan="2" |1,900 | rowspan="2" |1,900 | rowspan="2" |1,900 | rowspan="2" |356.2 |33.2 |43 |
style="text-align:left" |GeForce FX 5700 Ultra GDDR3
|March 15, 2004 |475 |15.2 |GDDR3 | |38 |
style="text-align:left" |GeForce FX 5800
| rowspan="2" |January 27, 2003 | rowspan="2" |NV30 | rowspan="2" |199 |400 |400 | rowspan="2" |4:2:8:4 |rowspan="5" |128 |12.8 |rowspan="2" |GDDR2 |1,600 |1,600 |3,200 |300.0 |24.0 |55 |
style="text-align:left" |GeForce FX 5800 Ultra
|500 |500 |16.0 |2,000 |2,000 |4,000 |375.0 |30.0 |66 |
style="text-align:left" |GeForce FX 5900 ZT
|December 15, 2003 |rowspan="5" |NV35 |rowspan="5" |207 |325 |rowspan="2" |350 | rowspan="7" |4:3:8:4 |rowspan="2" |22.4 |rowspan="6" |DDR |rowspan="7" |256 |1,300 |1,300 |2,600 |243.7 |22.7 |? |
style="text-align:left" |GeForce FX 5900 XT
|390 | rowspan="2" |1,600 | rowspan="2" |1,600 | rowspan="2" |3,200 | rowspan="2" |300.0 |27.3 |48 |
style="text-align:left" |GeForce FX 5900
|May 2003 | 400 |rowspan="2" |425 |rowspan="2"|27.2 |28.0 |55 |
style="text-align:left" |GeForce FX 5900 Ultra
|May 12, 2003 |450 |rowspan="2"|128 |1,800 |1,800 |3,600 |337.5 |31.5 |65 |
style="text-align:left" |GeForce PCX 5900
|March 17, 2004 |PCIe x16 |350 |275 |17.6 |1,400 |1,400 |2,800 |262.5 |24.5 |49 |
style="text-align:left" |GeForce FX 5950 Ultra
|October 23, 2003 | rowspan="2" |NV38 | rowspan="2" |207 |AGP 8x | rowspan="2" |475 |475 | rowspan="2" |256 | 30.4 | rowspan="2" |1,900 | rowspan="2" |1,900 | rowspan="2" |3,800 | rowspan="2" |356.2 |33.2 |83 |
style="text-align:left" |GeForce PCX 5950
|February 17, 2004 |PCIe x16 |425 |27.2 |GDDR3 | |83 |
rowspan="2" |
! rowspan="2" |Launch ! rowspan="2" |Code name ! rowspan="2" |Fab (nm) ! rowspan="2" |Transistors (million) ! rowspan="2" |Die size (mm{{sup|2}}) ! rowspan="2" |Core clock (MHz) ! rowspan="2" |Memory clock (MHz) ! rowspan="2" |Core config{{efn|name=geforce fx 1|Pixel shaders: vertex shaders: texture mapping units: render output units}} ! colspan="4" |Memory ! colspan="4" |Fillrate ! rowspan="2" |Performance (GFLOPS ! rowspan="2" |TDP (Watts) |
Size (MiB)
!Bandwidth (GB/s) !Bus type !Bus width (bit) !MOperations/s !MPixels/s !MTexels/s !MVertices/s |
{{notelist}}
=GeForce 6 (6xxx) series=
{{Further|GeForce 6 series|Curie (microarchitecture)}}
- All models support Direct3D 9.0c and OpenGL 2.1
- All models support Transparency AA (starting with version 91.47 of the ForceWare drivers) and PureVideo
{{Row hover highlight}}
class="mw-datatable wikitable sortable sort-under" style="font-size:85%; text-align:center;" |
rowspan="2" |Model
! rowspan="2" |Launch ! rowspan="2" |Code name ! rowspan="2" |Transistors (million) ! rowspan="2" |Core clock (MHz) ! rowspan="2" |Memory clock (MHz) ! rowspan="2" |Core config{{efn|name=geforce 6 1|Pixel shaders: vertex shaders: texture mapping units: render output units}} ! colspan="4" |Memory ! colspan="4" |Fillrate ! rowspan="2" |Performance (GFLOPS) ! rowspan="2" |TDP (Watts) |
---|
Size (MiB)
! Bandwidth (GB/s) ! Bus type ! Bus width (bit) !MOperations/s !MPixels/s !MTexels/s !MVertices/s |
style="text-align:left" |GeForce 6100 + nForce 410
| October 20, 2005 | MCP51 | | rowspan="4" |HyperTransport | rowspan="3" |425 | 100–200 (DDR) | rowspan="5" |2:1:2:1 | rowspan="4" |Up to 256 system RAM | 1.6–6.4 (DDR) | DDR | rowspan="4" |64 | rowspan="3" |850 | rowspan="3" |425 | rowspan="3" |850 | rowspan="3" |106.25 | ? | ? |
style="text-align:left" |GeForce 6150 SE + nForce 430
| rowspan="2" |June 2006 | MCP61 | rowspan="2" | | 200 | 3.2 | DDR2 | ? | ? |
style="text-align:left" |GeForce 6150 LE + nForce 430
| MCP61 | rowspan="2" |100–200 (DDR) | 1.6–6.4 (DDR) | rowspan="2" |DDR | ? | ? |
style="text-align:left" |GeForce 6150 + nForce 430
| October 20, 2005 | MCP51 | | 475 | 1.6–6.4 (DDR) |950 |475 |950 |118.75 | ? | ? |
style="text-align:left" |GeForce 6200 LE
| April 4, 2005 | NV44 | rowspan="8" |TSMC 110 nm | AGP 8x | 350 | 266 | 128 | 4.256 | DDR | 64 |700 |700 |700 |87.5 | ? | ? |
style="text-align:left" |GeForce 6200A
| April 4, 2005 | NV44A | AGP 8x PCI | 300 | 4:3:4:2 | ? | ? |
style="text-align:left" |GeForce 6200
| October 12, 2004 (PCIe) | NV43 | AGP 8x | 300 | 275 | 4:3:4:4 | 128 | 8.8 | DDR | 128 |1,200 |1,200 |1,200 |225 | 1.2 | 20 |
style="text-align:left" |GeForce 6200 TurboCache
| December 15, 2004 | rowspan="2" |NV44 | rowspan="2" |PCIe x16 | 350 | 200 | rowspan="2" |4:3:4:2 | 128–256 System RAM incl.16/32–64/128 onboard | 3.2 | rowspan="3" |DDR | rowspan="2" |64 |1,400 |700 |1,400 |262.5 | 1.4 | 25 |
style="text-align:left" |GeForce 6500
| October 1, 2005 | 400 | 333 | rowspan="4" |128 | 5.328 |1,600 |800 |1,600 |300 | ? | ? |
style="text-align:left" |GeForce 6600 LE
| 2005 | rowspan="3" |NV43 | rowspan="6" |AGP 8x | rowspan="2" |300 | 200 | 4:3:4:4 | 6.4 | rowspan="3" |128 |1,200 | rowspan="2" |1,200 |1,200 | rowspan="2" |225 | 1.3 | ? |
style="text-align:left" |GeForce 6600
| August 12, 2004 | 275 | rowspan="2" |8:3:8:4 | 8.8 | DDR |2,400 |2,400 | 2.4 | 26 |
style="text-align:left" |GeForce 6600 GT
| August 12, 2004 (PCIe) | 500 | 475 (AGP) | GDDR3 |4,000 |2,000 |4,000 |375 | 4.0 | 47 |
style="text-align:left" |GeForce 6800 LE
| July 22, 2004 (AGP) | rowspan="3" |NV40 (AGP) | rowspan="4" |IBM 130 nm | rowspan="3" |222 | 320 (AGP) | 350 | rowspan="2" |8:4:8:8 | 128 | 22.4 | DDR | 256 |2,560 (AGP) |2,560 (AGP) |2,560 (AGP) |320 (AGP) | 2.6 | ? |
style="text-align:left" |GeForce 6800 XT
| September 30, 2005 | 300 (64 Bit) | 266 (64 Bit) | 256 | 4.256 | DDR | 64{{cite web|url=https://www.biostar.com.tw/app/cn/vga/content.php?S_ID=40|title=映泰集团 :: V6802XA16 :: 产品规格|website=www.biostar.com.tw|access-date=2019-12-12|archive-date=12 October 2022|archive-url=https://web.archive.org/web/20221012210343/https://www.biostar.com.tw/app/cn/vga/content.php?S_ID=40|url-status=live}} |2,400 |2,400 |2,400 |300 | 2.6 | 36 |
style="text-align:left" |GeForce 6800
| April 14, 2004 (AGP) | 325 | 350 | rowspan="3" |12:5:12:12 | 128 | 22.4 | DDR | rowspan="6" |256 |3,900 |3,900 |3,900 |406.25 | 3.9 | 40 |
style="text-align:left" |GeForce 6800 GTO
| April 14, 2004 | NV45 | PCIe x16 | | 450 | 256 | 28.8 | rowspan="5" |GDDR3 |4,200 |4,200 |4,200 |437.5 | ? | ? |
style="text-align:left" |GeForce 6800 GS
| December 8, 2005 (AGP) | NV40 (AGP) | TSMC 110 nm | AGP 8x | 350 (AGP) | rowspan="2" |500 | rowspan="2" |128 | rowspan="2" |32 |5,100 |5,100 |5,100 |531.25 | 4.2 | 59 |
style="text-align:left" |GeForce 6800 GT
| May 4, 2004 (AGP) | rowspan="2" |NV40 (AGP) | rowspan="3" |IBM 130 nm | rowspan="2" |222 | rowspan="2" |AGP 8x | 350 | rowspan="3" |16:6:16:16 |5,600 |5,600 |5,600 |525 | 5.6 | 67 |
style="text-align:left" |GeForce 6800 Ultra
| May 4, 2004 (AGP) | 400 | 525 (512 MiB) | 256 | 33.6 (512 MiB) |6,400 |6,400 |6,400 |600 | 6.4 | 105 |
style="text-align:left" |GeForce 6800 Ultra Extreme Edition
| May 4, 2004 | NV40 | AGP 8x | 450 | 600 | 256 | 35.2 |7,200 |7,200 |7,200 |675 | ? | ? |
rowspan="2" |Model
! rowspan="2" |Launch ! rowspan="2" |Code name ! rowspan="2" |Fab (nm) ! rowspan="2" |Transistors (million) ! rowspan="2" |Core clock (MHz) ! rowspan="2" |Memory clock (MHz) ! rowspan="2" |Core config{{efn|name=geforce 6 1|Pixel shaders: vertex shaders: texture mapping units: render output units}} ! colspan="4" |Memory ! colspan="4" |Fillrate ! rowspan="2" |Performance (GFLOPS) ! rowspan="2" |TDP (Watts) |
Size (MiB)
! Bandwidth (GB/s) ! Bus type ! Bus width (bit) !MOperations/s !MPixels/s !MTexels/s !MVertices/s |
{{notelist}}
==Features==
{{Row hover highlight}}
class="mw-datatable wikitable sortable sort-under" style="font-size:85%; text-align:center;" |
rowspan="2" | Model
! colspan="4" | Features |
---|
OpenEXR HDR
! Scalable Link Interface (SLI) ! PureVideo WMV9 Decoding |
style="text-align:left;" | GeForce 6100
| {{no}} | {{no}} | {{no}} | {{Partial|Limited}} |
style="text-align:left;" | GeForce 6150 SE
| {{no}} | {{no}} | {{Partial|Driver-Side Only}} | {{Partial|Limited}} |
style="text-align:left;" | GeForce 6150
| {{no}} | {{no}} | {{no}} | {{yes}} |
style="text-align:left;" | GeForce 6150 LE
| {{no}} | {{no}} | {{Partial|Driver-Side Only}} | {{yes}} |
style="text-align:left;" | GeForce 6200
| {{no}} | {{no}} | {{yes2}} Yes (PCIe only) | {{yes}} |
style="text-align:left;" | GeForce 6500
| {{no}} | {{yes}} | {{yes}} | {{yes}} |
style="text-align:left;" | GeForce 6600 LE
| {{yes}} | {{yes2}} Yes (No SLI Connector) | {{no}} | {{yes}} |
style="text-align:left;" | GeForce 6600
| {{yes}} | {{yes2}} Yes (SLI Connector or PCIe Interface) | {{no}} | {{yes}} |
style="text-align:left;" | GeForce 6600 DDR2
| {{yes}} | {{yes2}} Yes (SLI Connector or PCIe Interface) | {{no}} | {{yes}} |
style="text-align:left;" | GeForce 6600 GT
| {{yes}} | {{yes}} | {{no}} | {{yes}} |
style="text-align:left;" | GeForce 6800 LE
| {{yes}} | {{no}} | {{no}} | {{no}} |
style="text-align:left;" | GeForce 6800 XT
| {{yes}} | {{yes2}} Yes (PCIe only) | {{no}} | {{yes2}} Yes (NV42 only) |
style="text-align:left;" | GeForce 6800
| {{yes}} | {{yes2}} Yes (PCIe only) | {{no}} | {{yes2}} Yes (NV41, NV42 only) |
style="text-align:left;" | GeForce 6800 GTO
| {{yes}} | {{yes}} | {{no}} | {{no}} |
style="text-align:left;" | GeForce 6800 GS
| {{yes}} | {{yes2}} Yes (PCIe only) | {{no}} | {{yes2}} Yes (NV42 only) |
style="text-align:left;" | GeForce 6800 GT
| {{yes}} | {{yes2}} Yes (PCIe only) | {{no}} | {{no}} |
style="text-align:left;" | GeForce 6800 Ultra
| {{yes}} | {{yes2}} Yes (PCIe only) | {{no}} | {{no}} |
=GeForce 7 (7xxx) series=
{{Further|GeForce 7 series|Curie (microarchitecture)}}
- All models support Direct3D 9.0c and OpenGL 2.1
- All models support Transparency AA (starting with version 91.47 of the ForceWare drivers)
{{Row hover highlight}}
class="mw-datatable wikitable sortable sort-under" style="font-size:85%; text-align:center;" |
rowspan="2" |Model
! rowspan="2" |Launch ! rowspan="2" |Code name ! rowspan="2" |Transistors (million) ! rowspan="2" |Die size (mm2) ! rowspan="2" |Core clock (MHz) ! rowspan="2" |Memory clock (MHz) ! rowspan="2" |Core config{{efn|name=geforce 7 1|Pixel shaders: vertex shaders: texture mapping units: render output units}} ! colspan="4" |Memory ! colspan="4" |Fillrate ! rowspan="2" |Performance (GFLOPS) ! rowspan="2" |TDP (Watts) |
---|
Size (MiB)
!Bandwidth (GB/s) !Bus type !Bus width (bit) !MOperations/s !MPixels/s !MTexels/s !MVertices/s |
style="text-align:left" |GeForce 7025 + nForce 630a
| rowspan="5" |July 2007 |MCP68S | rowspan="2" |TSMC 110 nm | | | rowspan="2" |HyperTransport | rowspan="2" |425 | rowspan="2" |200 (DDR) | rowspan="5" |2:1:2:2 | rowspan="5" |Up to 256 system RAM | rowspan="2" |6.4 | rowspan="2" |DDR | rowspan="2" |64 | rowspan="2" |850 | rowspan="2" |850 | rowspan="2" |850 | rowspan="2" |106.25 | ? | ? |
style="text-align:left" |GeForce 7050PV + nForce 630a
|MCP67QV | | | ? | ? |
style="text-align:left" |GeForce 7050 + nForce 610i/630i
|MCP73 | rowspan="3" |TSMC 90 nm | | |HyperTransport/FSB |500 |333 |5.336 | rowspan="3" |DDR2 | rowspan="3" |64 |1,000 |1,000 |1,000 |125 | ? | ? |
style="text-align:left" |GeForce 7100 + nForce 630i
| rowspan="2" |MCP76 | | | rowspan="2" |FSB |600 | rowspan="2" |400 | rowspan="2" |6.4 |1,200 |1,200 |1,200 |150 | ? | ? |
style="text-align:left" |GeForce 7150 + nForce 630i
| | |630 |1,260 |1,260 |1,260 |157.5 | ? | ? |
style="text-align:left" |GeForce 7100 GS
|August 8, 2006 |NV44 |TSMC 110 nm |110 | rowspan="5" |PCIe x16 |350 |266 |4:3:4:2 | rowspan="2" |128 |2.4 |DDR2 | rowspan="4" |32 |1,400 |700 |1,400 |262.5 | ? | ? |
style="text-align:left" |GeForce 7200 GS
|January 18, 2006 | rowspan="4" |G72 | rowspan="8" |TSMC 90 nm | rowspan="4" |112{{cite web|title=NVIDIA G72 GPU Specs|url=https://www.techpowerup.com/gpu-specs/nvidia-g72.g46|access-date=2021-02-16|website=TechPowerUp|language=en}} | rowspan="4" |81 |450 |400 |2:2:4:2 |3.2 | rowspan="4" |DDR2 | rowspan="3" |1,800 | rowspan="3" |900 | rowspan="3" |1,800 | rowspan="3" |337.5 |0.9 |11 |
style="text-align:left" |GeForce 7300 SE
| rowspan="2" |March 22, 2006 | rowspan="2" |350 | rowspan="2" |333 | rowspan="3" |4:3:4:2 | rowspan="2" |128 | rowspan="2" |2.656 | ? | ? |
style="text-align:left" |GeForce 7300 LE
|1.4 |13 |
style="text-align:left" |GeForce 7300 GS
|January 18, 2006 |550 |400 | rowspan="2" |128 |6.4 |64 |2,200 |1,100 |2,200 |412.5 |2.2 |10 |
style="text-align:left" |GeForce 7300 GT
|May 15, 2006 |G73 |125 |AGP 8x |350 |325 (DDR2) |8:5:8:4 |10.4 |DDR2 |128 |2,800 |1,400 |2,800 |437.5 |2.8 |24 |
style="text-align:left" |GeForce 7500 LE
|2006 |G72 |81 |PCIe x16 |475 |405 |4:3:4:2 |64 |6.480 |DDR2 |64 |2,200 |1,100 |2,200 |593.8 | ? |10 |
style="text-align:left" |GeForce 7600 GS
|March 22, 2006 (PCIe) | rowspan="2" |G73 | rowspan="4" |125 | rowspan="3" |AGP 8x |400 | rowspan="3" |400 (DDR2) | rowspan="4" |12:5:12:8 | rowspan="6" |256 | rowspan="3" |12.8 | rowspan="3" |DDR2 | rowspan="4" |128 |4,800 |3,200 |4,800 |500 |4.8 |30 |
style="text-align:left" |GeForce 7600 GT
|March 9, 2006 (PCIe) | rowspan="2" |560 | rowspan="2" |6,720 | rowspan="2" |4,480 | rowspan="2" |6,720 | rowspan="2" |700 |6.7 | ? |
style="text-align:left" |GeForce 7600 GT 80 nm
|January 8, 2007 |G73-B1 | rowspan="2" |TSMC 80 nm | ? |48 |
style="text-align:left" |GeForce 7650 GS
|March 22, 2006 |G73 |PCIe x16 |450 |400 |12.7 |DDR2 |5,400 |3,600 |5,400 |562.5 | ? | ? |
style="text-align:left" |GeForce 7800 GS
|February 2, 2006 | rowspan="3" |G70 | rowspan="3" |TSMC 110 nm | rowspan="3" |302{{cite web|title=NVIDIA G70 GPU Specs|url=https://www.techpowerup.com/gpu-specs/nvidia-g70.g40|access-date=2021-02-16|website=TechPowerUp|language=en}} | rowspan="3" |333 |AGP 8x |375 |600 |16:8:16:8 |38.4 | rowspan="10" |GDDR3 | rowspan="10" |256 |6,000 |3,000 |6,000 |750 |6 |70 |
style="text-align:left" |GeForce 7800 GT
|August 11, 2005 | rowspan="2" |PCIe x16 |400 |500 |20:7:20:16 |32 |8,000 |6,400 |8,000 |700 |8 |59 |
style="text-align:left" |GeForce 7800 GTX
|June 22, 2005 (256 MiB) |430 (256 MiB) |600 (256 MiB) |24:8:24:16 |256 |38.4 (256 MiB) |10,320 (256 MiB) |6,880 (256 MiB) |10,320 (256 MiB) |860 (256 MiB) |10.3 |100 (256 MiB) |
style="text-align:left" |GeForce 7900 GS
|May 2006 (PCIe) | rowspan="4" |G71 | rowspan="7" |TSMC 90 nm | rowspan="7" |278{{cite web|title=NVIDIA G71 GPU Specs|url=https://www.techpowerup.com/gpu-specs/nvidia-g71.g47|access-date=2021-02-16|website=TechPowerUp|language=en}} | rowspan="7" |196 |AGP 8x | rowspan="2" |450 | rowspan="3" |660 |20:7:20:16 | rowspan="2" |256 | rowspan="3" |42.24 |9,000 | rowspan="2" |7,200 |9,000 |787.5 |9 |50 |
style="text-align:left" |GeForce 7900 GT
|March 9, 2006 | rowspan="4" |PCIe x16 | rowspan="3" |24:8:24:16 |10,800 |10,800 |900 |10.8 |65 |
style="text-align:left" |GeForce 7900 GTO
|October 1, 2006 | rowspan="2" |650 | rowspan="2" |512 | rowspan="2" |15,600 | rowspan="2" |10,400 | rowspan="2" |15,600 | rowspan="2" |1,300 |15.6 | ? |
style="text-align:left" |GeForce 7900 GTX
| rowspan="2" |March 9, 2006 |800 |51.2 |15.6 |105 |
style="text-align:left" |GeForce 7900 GX2
|2x G71 |500 |600 |2x 24:8:24:16 |2x 512 |2x 38.4 |24,000 |16,000 |24,000 |2,000 | ? | ? |
style="text-align:left" |GeForce 7950 GT
|September 6, 2006 (PCIe) |G71 |AGP 8x |550 |700 |24:8:24:16 |512 |44.8 |13,200 |8,800 |13,200 |1,100 |13.2 |90 |
style="text-align:left" |GeForce 7950 GX2
|June 5, 2006 |2x G71 |PCIe x16 |500 |600 |2x 24:8:24:16 |2x 512 |2x 38.4 |24,000 |16,000 |24,000 |2000 |24 |136 |
rowspan="2" |Model
! rowspan="2" |Launch ! rowspan="2" |Code name ! rowspan="2" |Fab (nm) ! rowspan="2" |Transistors (million) ! rowspan="2" |Die size (mm2) ! rowspan="2" |Core clock (MHz) ! rowspan="2" |Memory clock (MHz) ! rowspan="2" |Core config{{efn|name=geforce 7 1|Pixel shaders: vertex shaders: texture mapping units: render output units}} !Size (MiB) !Bandwidth (GB/s) !Bus type !Bus width (bit) !MOperations/s !MPixels/s !MTexels/s !MVertices/s ! rowspan="2" |Performance (GFLOPS) ! rowspan="2" |TDP (Watts) |
colspan="4" |Memory
! colspan="4" |Fillrate |
{{notelist}}
==Features==
{{Row hover highlight}}
class="mw-datatable wikitable sortable sort-under" style="font-size:85%; text-align:center;" |
rowspan="2" | Model
! colspan="5" | Features |
---|
Gamma-correct antialiasing
! 64-bit OpenEXR HDR ! Scalable Link Interface (SLI) ! TurboCache ! Dual Link DVI |
style="text-align:left;" | GeForce 7100 GS
| {{no}} | {{no}} | {{yes2}} Yes (PCIe only, No SLI bridge) | {{yes}} | {{no}} |
style="text-align:left;" | GeForce 7200 GS
| {{yes}} | {{yes}} | {{no}} | {{yes}} | {{no}} |
style="text-align:left;" | GeForce 7300 SE
| {{yes}} | {{yes}} | {{no}} | {{yes}} | {{no}} |
style="text-align:left;" | GeForce 7300 LE
| {{yes}} | {{yes}} | {{no}} | {{yes}} | {{no}} |
style="text-align:left;" | GeForce 7300 GS
| {{yes}} | {{yes}} | {{yes2}} Yes (PCIe only) | {{yes}} | {{no}} |
style="text-align:left;" | GeForce 7300 GT
| {{yes}} | {{yes}} | {{yes2}} Yes (PCIe only, No SLI bridge) | {{no}} | {{yes|One port}} |
style="text-align:left;" | GeForce 7600 GS
| {{yes}} | {{yes}} | {{yes2}} Yes (PCIe only) | {{no}} | {{yes|One port}} |
style="text-align:left;" | GeForce 7600 GT
| {{yes}} | {{yes}} | {{yes2}} Yes (PCIe only) | {{no}} | {{yes|One port}} |
style="text-align:left;" | GeForce 7600 GT (80 nm)
| {{yes}} | {{yes}} | {{yes}} | {{no}} | {{yes|One port}} |
style="text-align:left;" | GeForce 7650 GS (80 nm)
| {{yes}} | {{yes}} | {{yes2}} Yes (Depending on OEM Design) | {{no}} | {{yes|One port}} |
style="text-align:left;" | GeForce 7800 GS
| {{yes}} | {{yes}} | {{no}} | {{no}} | {{yes|One port}} |
style="text-align:left;" | GeForce 7800 GT
| {{yes}} | {{yes}} | {{yes}} | {{no}} | {{yes|One port}} |
style="text-align:left;" | GeForce 7800 GTX
| {{yes}} | {{yes}} | {{yes}} | {{no}} | {{yes|One port}} |
style="text-align:left;" | GeForce 7800 GTX 512
| {{yes}} | {{yes}} | {{yes}} | {{no}} | {{yes|One port}} |
style="text-align:left;" | GeForce 7900 GS
| {{yes}} | {{yes}} | {{yes2}} Yes (PCIe only) | {{no}} | {{yes|Two ports}} |
style="text-align:left;" | GeForce 7900 GT
| {{yes}} | {{yes}} | {{yes}} | {{no}} | {{yes|Two ports}} |
style="text-align:left;" | GeForce 7900 GTO
| {{yes}} | {{yes}} | {{yes}} | {{no}} | {{yes|Two ports}} |
style="text-align:left;" | GeForce 7900 GTX
| {{yes}} | {{yes}} | {{yes}} | {{no}} | {{yes|Two ports}} |
style="text-align:left;" | GeForce 7900 GX2 (GTX Duo)
| {{yes}} | {{yes}} | {{yes}} | {{no}} | {{yes|Two ports}} |
style="text-align:left;" | GeForce 7950 GT
| {{yes}} | {{yes}} | {{yes2}} Yes (PCIe only) | {{no}} | {{yes|Two ports}} |
style="text-align:left;" | GeForce 7950 GX2
| {{yes}} | {{yes}} | {{yes}} | {{no}} | {{yes|Two ports}} |
=GeForce 8 (8xxx) series=
{{Further|GeForce 8 series|Tesla (microarchitecture)}}
- All models support coverage sample anti-aliasing, angle-independent anisotropic filtering, and 128-bit OpenEXR HDR.
{{Row hover highlight}}
class="mw-datatable wikitable sortable sort-under" style="font-size:85%; text-align:center;" |
rowspan="2" | Model
! rowspan="2" | Launch ! rowspan="2" | Code name ! rowspan="2" | Transistors (million) ! rowspan="2" | Die size (mm2) ! colspan="3" | Clock rate ! rowspan="2" | Core config{{efn|name=geforce 8 1|Unified shaders: texture mapping units: render output units}} ! colspan="2" | Fillrate ! colspan="4" | Memory !Processing power (GFLOPS){{efn|name=geforce 8 3|To calculate the processing power, see Performance.}} ! colspan="4" | Supported API version ! rowspan="2" | TDP (Watts) ! rowspan="2" | Comments |
---|
Core (MHz)
! Shader (MHz) ! Memory (MHz) ! Pixel (GP/s) ! Texture (GT/s) ! Size (MiB) ! Bandwidth (GB/s) ! Bus type ! Bus width (bit) ! Direct3D ! OpenGL !CUDA |
style="text-align:left;" | GeForce 8100 mGPU{{cite web |url=http://www.tomshardware.com/reviews/amd-nvidia-chipset,1972-14.html |title=AMD and Nvidia Platforms Do Battle |website=Tomshardware.com |date=18 July 2008 |access-date=2015-12-11 |archive-date=27 January 2010 |archive-url=https://web.archive.org/web/20100127115556/http://www.tomshardware.com/reviews/amd-nvidia-chipset,1972-14.html |url-status=live }}
| rowspan="3" | 2008 | rowspan="3" | MCP78 | rowspan="5" |TSMC 80 nm | {{unk}} | {{unk}} | rowspan="3" | PCIe 2.0 x16 | rowspan="3" | 500 | rowspan="2" | 1200 | rowspan="3" | 400 | rowspan="4" | 8:8:4 | rowspan="3" | 2 | rowspan="3" | 4 | Up to 512 from system memory | rowspan="3" | 6.4 | rowspan="6" | DDR2 | rowspan="3" | 64 | rowspan="2" |28.8 | rowspan="6" | 10.0 | rowspan="18" | 3.3 | rowspan="3" |n/a | rowspan="3" |n/a | {{unk}} | The block of decoding of HD-video PureVideo HD is disconnected |
style="text-align:left;" | GeForce 8200 mGPU
| {{unk}} | {{unk}} | gt | {{unk}} | rowspan="2" | PureVideo 3 with VP3 |
style="text-align:left;" | GeForce 8300 mGPU
| {{unk}} | {{unk}} | 1500 | Up to 512 from system memory |36 | {{unk}} |
style="text-align:left;" | GeForce 8300 GS{{cite web|url=http://www.theinquirer.net/default.aspx?article=38884 |archive-url=https://web.archive.org/web/20070925073721/http://www.theinquirer.net/default.aspx?article=38884|date=April 12, 2007|archive-date=September 25, 2007|url-status=dead |title=Nvidia GF8600/8500/8300 details revealed}}
| July 2007 | rowspan="2" | G86 | rowspan="3" | 210 | rowspan="2" | 127 | PCIe 1.0 x16 | rowspan="2" | 450 | rowspan="2" | 900 | rowspan="3" | 400 | rowspan="2" | 1.8 | rowspan="2" | 3.6 | 128 | rowspan="3" | 6.4 | rowspan="3" | 64 | 14.4 | rowspan="15" |1.1 | rowspan="3" |1.1 | rowspan="2" | 40 | OEM only |
style="text-align:left;" | GeForce 8400 GS
| June 15, 2007 | PCIe 1.0 x16 | 16:8:4 | rowspan="2" | 128 | 28.8 | rowspan="4" | |
style="text-align:left;" | GeForce 8400 GS rev.2
| December 10, 2007 | G98 | TSMC 65 nm | 86 | PCIe 2.0 x16 | 567 | 1400 | 8:8:4 | 2.268 | 4.536 | 22.4 | rowspan="2" | 25 |
style="text-align:left;" | GeForce 8400 GS rev.3
| July 12, 2010 | GT218 | TSMC 40 nm | 260 | 57 | PCIe 2.0 x16 | 520 | 1230 | 400 (DDR2) | 8:4:4 | 2.08 | 2.08 | 512 | 4.8 | DDR2 | 32 |19.7 | 10.1 |1.2 |
style="text-align:left;" | GeForce 8500 GT
| April 17, 2007 | G86 | rowspan="4" | TSMC 80 nm | 210 | 127 | PCIe 1.0 x16 | 450 | 900 | rowspan="2" | 400 | 16:8:4 | 1.8 | 3.6 | 256 | rowspan="2" | 12.8 | rowspan="2" | DDR2 | rowspan="4" | 128 |28.8 | rowspan="11" | 10.0 | rowspan="5" |1.1 | 45 |
style="text-align:left;" | GeForce 8600 GS
| April 2007 | rowspan="3" | G84 | rowspan="3" | 289 | rowspan="3" | 169 | PCIe 1.0 x16 | rowspan="2" | 540 | 1180 | 16:8:8 | rowspan="2" | 4.32 | 4.32 | 256 |75.5 | rowspan="2" | 47 | OEM only |
style="text-align:left;" | GeForce 8600 GT
| rowspan="2" | April 17, 2007 | PCIe 1.0 x16 | 1188 | 400 | rowspan="2" | 32:16:8 | 8.64 | 256 | 12.8 | DDR2 |76 | rowspan="4" | |
style="text-align:left;" | GeForce 8600 GTS
| PCIe 1.0 x16 | 675 | 1450 | 1000 | 5.4 | 10.8 | 256 | 32 | rowspan="8" | GDDR3 |92.8 | 71 |
style="text-align:left;" | GeForce 8800 GS
| January 2008 | G92 | TSMC 65 nm | 754 | 324 | PCIe 2.0 x16 | 550 | 1375 | rowspan="3" | 800 | 96:48:12 | 6.6 | 26.4 | 384 | 38.4 | 192 |264 | 105 |
style="text-align:left;" | GeForce 8800 GTS (G80)
| February 12, 2007 (320) | rowspan="2" | G80 | rowspan="2" | TSMC 90 nm | rowspan="2" | 681 | rowspan="2" | 484 | rowspan="2" | PCIe 1.0 x16 | 513 | 1188 | 96:24:20 | 10.3 | 12.3 | 320 | rowspan="2" | 64 | rowspan="2" | 320 |228 | rowspan="2" |1.0 | 146 |
style="text-align:left;" | GeForce 8800 GTS 112 (G80)
| November 19, 2007 | 500 | 1200 | 112:28:{{efn|name=geforce 8 2|Full G80 contains 32 texture address units and 64 texture filtering units unlike G92 which contains 64 texture address units and 64 texture filtering units{{cite web |url=http://www.anandtech.com/show/2549/3 |title=Lots More Compute, a Leetle More Texturing - Nvidia's 1.4 Billion Transistor GPU: GT200 Arrives as the GeForce GTX 280 & 260 |website=Anandtech.com |access-date=2015-12-11 |archive-url=https://web.archive.org/web/20151222075904/http://www.anandtech.com/show/2549/3 |archive-date=2015-12-22 |url-status=live }}{{cite news |url=http://www.anandtech.com/show/2116/6 |title=Digging deeper into the shader core - Nvidia's GeForce 8800 (G80): GPUs Re-architected for Direct3D 10 |website=Anandtech.com |access-date=2015-12-11 |archive-url=https://web.archive.org/web/20151222104756/http://www.anandtech.com/show/2116/6 |archive-date=2015-12-22 |url-status=live }}}}20 | 10 | 14 | 640 |268.8 | 150 |
style="text-align:left;" | GeForce 8800 GT
| October 29, 2007 (512) | rowspan="2" | G92 | rowspan="2" | TSMC 65 nm | rowspan="2" | 754 | rowspan="2" | 324 | rowspan="2" | PCIe 2.0 x16 | 600 | 1500 | 700 (256) | 112:56:16 | 9.6 | 33.6 | 256 | 57.6 | rowspan="2" | 256 |336 | rowspan="2" |1.1 | 125 | rowspan="4" | |
style="text-align:left;" | GeForce 8800 GTS (G92)
| December 11, 2007 | 650 | 1625 | 970 | 128:64:16 | 10.4 | 41.6 | 512 | 62.1 |416 | 135 |
style="text-align:left;" | GeForce 8800 GTX
| November 8, 2006 | rowspan="2" | G80 | rowspan="2" | TSMC 90 nm | rowspan="2" | 681 | rowspan="2" | 484 | rowspan="2" | PCIe 1.0 x16 | 575 | 1350 | 900 | rowspan="2" | 128:32:{{efn|name=geforce 8 2}}24 | 13.8 | 18.4 | rowspan="2" | 768 | 86.4 | rowspan="2" | 384 |345.6 | rowspan="2" |1.0 | 145 |
style="text-align:left;" | GeForce 8800 Ultra
| May 2, 2007 | 612 | 1500 | 1080 | 14.7 | 19.6 | 103.7 |384 | 175 |
rowspan="2" | Model
! rowspan="2" | Launch ! rowspan="2" | Code name ! rowspan="2" | Fab (nm) ! rowspan="2" | Transistors (million) ! rowspan="2" | Die size (mm2) ! Core (MHz) ! Shader (MHz) ! Memory (MHz) ! rowspan="2" | Core config{{efn|name=geforce 8 1|Unified shaders: texture mapping units: render output units}} ! Pixel (GP/s) ! Texture (GT/s) ! Size (MiB) ! Bandwidth (GB/s) ! Bus type ! Bus width (bit) ! Direct3D ! OpenGL !CUDA ! rowspan="2" | TDP (Watts) ! rowspan="2" | Comments |
colspan="3" | Clock rate
! colspan="2" | Fillrate ! colspan="4" | Memory !Processing power (GFLOPS){{efn|name=geforce 8 3|To calculate the processing power, see Performance.}} ! colspan="4" | Supported API version |
{{notelist}}
==Features==
- Compute Capability 1.1: supports atomic functions, which are used to write thread-safe programs (an example follows this list).
- Compute Capability 1.2: for details, see CUDA.
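As a concrete illustration of what these atomic functions allow, the sketch below (kernel and variable names are arbitrary, not from Nvidia documentation) uses CUDA's atomicAdd() so that many threads can update a single counter without losing writes; atomicAdd() on 32-bit integers in global memory requires compute capability 1.1 or higher.

<syntaxhighlight lang="cuda">
// Minimal CUDA sketch: a thread-safe counter built with the atomicAdd()
// intrinsic available on compute capability 1.1 devices (32-bit integers
// in global memory). Names and sizes here are illustrative only.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void count_even(const int *data, int n, int *counter) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n && (data[i] % 2 == 0)) {
        // Without the atomic, concurrent read-modify-write updates could be lost.
        atomicAdd(counter, 1);
    }
}

int main() {
    const int n = 1 << 20;
    int *d_data, *d_counter, h_counter = 0;
    cudaMalloc(&d_data, n * sizeof(int));
    cudaMalloc(&d_counter, sizeof(int));
    cudaMemset(d_data, 0, n * sizeof(int));   // all zeros, so every element is "even"
    cudaMemset(d_counter, 0, sizeof(int));

    count_even<<<(n + 255) / 256, 256>>>(d_data, n, d_counter);
    cudaMemcpy(&h_counter, d_counter, sizeof(int), cudaMemcpyDeviceToHost);
    printf("even elements: %d\n", h_counter);  // expect 1048576

    cudaFree(d_data);
    cudaFree(d_counter);
    return 0;
}
</syntaxhighlight>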
{{Row hover highlight}}
class="mw-datatable wikitable sortable sort-under" style="font-size:85%; text-align:center;" |
rowspan="2" | Model
! colspan="7" | Features |
---|
Scalable Link Interface (SLI) ! 3-Way ! PureVideo HD ! PureVideo 2 with VP2 ! PureVideo 3 with VP3 ! PureVideo 4 with VP4 ! Compute |
style="text-align:left;" | GeForce 8300 GS (G86)
| {{no}} | {{no}} | {{no}} | {{yes}} | {{no}} | {{no}} | {{yes|1.1}} |
style="text-align:left;" | GeForce 8400 GS Rev. 2 (G98)
| {{no}} | {{no}} | {{no}} | {{no}} | {{yes}} | {{no}} | {{yes|1.1}} |
style="text-align:left;" | GeForce 8400 GS Rev. 3 (GT218)
| {{no}} | {{no}} | {{no}} | {{no}} | {{no}} | {{yes}} | {{yes|1.2}} |
style="text-align:left;" | GeForce 8500 GT
| {{yes}} | {{no}} | {{no}} | {{yes}} | {{no}} | {{no}} | {{yes|1.1}} |
style="text-align:left;" | GeForce 8600 GT
| {{yes}} | {{no}} | {{no}} | {{yes}} | {{no}} | {{no}} | {{yes|1.1}} |
style="text-align:left;" | GeForce 8600 GTS
| {{yes}} | {{no}} | {{no}} | {{yes}} | {{no}} | {{no}} | {{yes|1.1}} |
style="text-align:left;" | GeForce 8800 GS (G92)
| {{yes}} | {{no}} | {{no}} | {{yes}} | {{no}} | {{no}} | {{yes|1.1}} |
style="text-align:left;" | GeForce 8800 GTS (G80)
| {{yes}} | {{no}} | {{yes}} | {{no}} | {{no}} | {{no}} | {{no|1.0}} |
style="text-align:left;" | GeForce 8800 GTS Rev. 2 (G80)
| {{yes}} | {{no}} | {{yes}} | {{no}} | {{no}} | {{no}} | {{no|1.0}} |
style="text-align:left;" | GeForce 8800 GT (G92)
| {{yes}} | {{no}} | {{no}} | {{yes}} | {{no}} | {{no}} | {{yes|1.1}} |
style="text-align:left;" | GeForce 8800 GTS (G92)
| {{yes}} | {{no}} | {{no}} | {{yes}} | {{no}} | {{no}} | {{yes|1.1}} |
style="text-align:left;" | GeForce 8800 GTX
| {{yes}} | {{yes}} | {{yes}} | {{no}} | {{no}} | {{no}} | {{no|1.0}} |
style="text-align:left;" | GeForce 8800 Ultra
| {{yes}} | {{yes}} | {{yes}} | {{no}} | {{no}} | {{no}} | {{no|1.0}} |
=GeForce 9 (9xxx) series=
{{Further|GeForce 9 series|Tesla (microarchitecture)}}
- All models support Coverage Sample Anti-Aliasing, Angle-Independent Anisotropic Filtering, 128-bit OpenEXR HDR
{{Row hover highlight}}
class="mw-datatable wikitable sortable sort-under" style="font-size:85%; text-align:center;" |
rowspan="2" | Model
! rowspan="2" | Launch ! rowspan="2" | Code name ! rowspan="2" | Transistors (million) ! rowspan="2" | Die size (mm2) ! colspan="3" | Clock rate ! rowspan="2" | Core config{{efn|name=geforce 9 1|Unified shaders: texture mapping units: render output units}} ! colspan="4" | Memory ! colspan="2" |Fillrate ! Processing power (GFLOPS){{efn|name=geforce 9 2|To calculate the processing power see Tesla (microarchitecture)#Performance.}} ! colspan="2" | Supported API version ! rowspan="2" | TDP (Watts) ! rowspan="2" | Comments |
---|
Core (MHz)
! Shader (MHz) ! Memory (MHz) ! Size (MiB) ! Bandwidth (GB/s) ! Bus type ! Bus width (bit) !Pixel (GP/s) !Texture (GT/s) ! Direct3D ! OpenGL |
style="text-align:left;" | GeForce 9300 mGPU
| rowspan="2" | October 2008 | MCP7A-S | rowspan="2" | 65 nm | rowspan="2" | 282 | rowspan="2" | 162 | rowspan="4" | PCIe 2.0 x16 | 450 | 1200 | rowspan="2" | 400 | rowspan="2" | 16:8:4 | rowspan="2" | Up to 512 from system memory | rowspan="2" | 6.4/12.8 | rowspan="2" | DDR2 | rowspan="2" | 64 |1.8 |3.6 | 57.6 | rowspan="17" | 10.0 | rowspan="17" | 3.3 | {{unk}} | rowspan="2" | based on 8400 GS |
style="text-align:left;" | GeForce 9400 mGPU
| MCP7A-U | 580 | 1400 |2.32 |4.64 | 67.2 | 12 |
style="text-align:left;" | GeForce 9300 GE{{cite web |url=http://www.pcinpact.com/affichage/43963-NVIDIA-9600GT-9300GE-9300GS/58125.htm |title=Nvidia GeForce 9300 GE |year=2008 |author=Nvidia Corporation |access-date=2010-02-22 |archive-url=https://web.archive.org/web/20090416090530/http://www.pcinpact.com/affichage/43963-NVIDIA-9600GT-9300GE-9300GS/58125.htm |archive-date=2009-04-16 |url-status=live }}
| rowspan="2" | June 2008 | rowspan="2" | G98 | rowspan="2" |TSMC 65 nm | rowspan="2" | 210 | rowspan="2" | 86 | 540 | 1300 | rowspan="2" | 500 | rowspan="2" | 8:8:4 | rowspan="2" | 256 | rowspan="2" | DDR2 | rowspan="2" | 64 |2.16 |4.32 | 20.8 | rowspan="2" | 25 | |
style="text-align:left;" | GeForce 9300 GS
| 567 | rowspan="4" | 1400 |2.268 |4.536 | 22.4 | |
style="text-align:left;" | GeForce 9400 GT
| August 27, 2008 | G96-200-c1 | TSMC 55 nm | rowspan="3" | 314 | rowspan="3" | 144 | rowspan="3" | PCIe 2.0 x16 | rowspan="3" | 550 | 400 | 16:8:4 | rowspan="3" | 256 | 12.8 | DDR2 | rowspan="3" | 128 |2.2 |4.4 | 44.8 | rowspan="3" | 50 | |
style="text-align:left;" | GeForce 9500 GS
| | | |500 |24:12:4 |16.0 |DDR2 | | |60 |OEM |
style="text-align:left;" | GeForce 9500 GT
| rowspan="2" | July 29, 2008 | G96-300-C1 | UMC 65 nm | 500 | 32:16:8 | 16.0 | DDR2 |4.4 |8.8 | 89.6 | |
style="text-align:left;" | GeForce 9600 GS
| G94a | rowspan="2" | TSMC 65 nm | 505 | 240 | rowspan="10" |PCIe 2.0 x16 | 500 | 1250 | 500 | 48:24:12 | 768 | 24.0 | DDR2 | rowspan="2" | 192 |6 |12 | 120 | {{unk}} | OEM |
style="text-align:left;" | GeForce 9600 GSO
| May 2008 | G92-150-A2 | 754 | 324 | 550 | 1375 | 800 | 96:48:12 | 384 | 38.4 | rowspan="9" | GDDR3 |6.6 |26.4 | 264 | 84 | rowspan="2" | |
style="text-align:left;" | GeForce 9600 GSO 512
| October 2008 | G94a | TSMC 65 nm | rowspan="3" | 505 | 240 | 650 | 1625 | 900 | 48:24:16 | 512 | 57.6 | rowspan="7" | 256 |10.4 |15.6 | 156 | 90 |
style="text-align:left;" | GeForce 9600 GT Green Edition
| 2009 | G94b | TSMC 55 nm | 196?{{citation needed|date=September 2012}} | 600 | 1500 | 700/900 | rowspan="2" | 64:32:16 | rowspan="4" | 512 | 44.8/57.6 |9.6 |19.2 | 192 | 59 | Core Voltage = 1.00v |
style="text-align:left;" | GeForce 9600 GT
| February 21, 2008 | G94-300-A1 | TSMC 65 nm | 240 | 650 | 1625 | 900 | 57.6 |10.4 |20.8 | 208 | 95 | |
style="text-align:left;" | GeForce 9800 GT Green Edition
| 2009 | G92a2 | TSMC/UMC 65 nm | rowspan="4" | 754 | rowspan="2" | 324 | 550 | 1375 | 700 | rowspan="2" | 112:56:16 | 44.8 |8.8 |30.8 | 308 | 75 | Core Voltage = 1.00v |
style="text-align:left;" | GeForce 9800 GT
| July 2008 | G92a | 65 nm | 600 | 1500 | 900 | 57.6 |9.6 |33.6 | 336 | 125 | rowspan="4" | |
style="text-align:left;" | GeForce 9800 GTX
| April 1, 2008 | G92-420-A2 | TSMC 65 nm | 324 | 675 | 1688 | 1100 | rowspan="2" | 128:64:16 | 512 | rowspan="2" | 70.4 |10.8 |43.2 | 432 | 140 |
style="text-align:left;" | GeForce 9800 GTX+
| July 16, 2008 | G92b | TSMC 55 nm | 260 | 738 | 1836 | 1100 | 512 |11.808 |47.232 | 470 | 141 |
style="text-align:left;" | GeForce 9800 GX2
| March 18, 2008 | 2x G92 | TSMC/UMC 65 nm | 2x 754 | 2x 324 | 600 | 1500 | 1000 | 2x 128:64:16 | 2x 512 | 2x 64.0 | 2x 256 |2x 9.6 |2x 38.4 | 2x 384 | 197 |
rowspan="2" | Model
! rowspan="2" | Launch ! rowspan="2" | Code name ! rowspan="2" | Transistors (million) ! rowspan="2" | Die size (mm2) ! Core (MHz) ! Shader (MHz) ! Memory (MHz) ! rowspan="2" | Core config{{efn|name=geforce 9 1|Unified shaders: texture mapping units: render output units}} ! Size (MiB) ! Bandwidth (GB/s) ! Bus type ! Bus width (bit) !Pixel (GP/s) !Texture (GT/s) ! Direct3D ! OpenGL ! rowspan="2" | TDP (Watts) ! rowspan="2" | Comments |
colspan="3" | Clock rate
! colspan="4" | Memory ! colspan="2" |Fillrate ! Processing power (GFLOPS){{efn|name=geforce 9 2|To calculate the processing power see Tesla (microarchitecture)#Performance.}} ! colspan="2" | Supported API version |
{{notelist}}
==Features==
- Compute Capability 1.1: supports atomic functions, which are used to write thread-safe programs.
{{Row hover highlight}}
class="mw-datatable wikitable sortable sort-under" style="font-size:85%; text-align:center;" |
rowspan="2" | Model
! colspan="3" | Features |
---|
Scalable Link Interface (SLI)
! PureVideo 2 with VP2 ! PureVideo 3 with VP3 |
style="text-align:left;" | GeForce 9300 GE (G98)
| rowspan="7" {{yes}} | rowspan="2" {{no}} | rowspan="2" {{yes}} |
style="text-align:left;" | GeForce 9300 GS (G98) |
style="text-align:left;" | GeForce 9400 GT
| rowspan="8" {{yes}} | rowspan="8" {{no}} |
style="text-align:left;" | GeForce 9500 GT |
style="text-align:left;" | GeForce 9600 GSO |
style="text-align:left;" | GeForce 9600 GT |
style="text-align:left;" | GeForce 9800 GT |
style="text-align:left;" | GeForce 9800 GTX
| rowspan="2" {{yes}} |
style="text-align:left;" | GeForce 9800 GTX+ |
style="text-align:left;" | GeForce 9800 GX2
| {{yes}} |
=GeForce 100 series=
{{Further|GeForce 100 series|Tesla (microarchitecture)}}
{{Row hover highlight}}
class="mw-datatable wikitable sortable sort-under" style="font-size:85%; text-align:center;" |
rowspan="2" | Model
! rowspan="2" | Launch ! rowspan="2" | Code name ! rowspan="2" | Transistors (million) ! rowspan="2" | Die size (mm2) ! colspan="3" | Clock rate ! rowspan="2" | Core config{{efn|name=geforce 100 1|Unified shaders: texture mapping units: render output units}} ! colspan="4" | Memory configuration ! colspan="2" |Fillrate ! Processing power (GFLOPS){{efn|name=geforce 100 2|To calculate the processing power see Tesla (microarchitecture)#Performance.}} ! colspan="2" | Supported API version ! rowspan="2" | TDP (Watts) ! rowspan="2" | Comments |
---|
Core (MHz)
! Shader (MHz) ! Memory (MHz) ! Size (MiB) ! Bandwidth (GB/s) ! DRAM type ! Bus width (bit) !Pixel (GP/s) !Texture (GT/s) ! Direct3D ! OpenGL |
style="text-align:left;" | GeForce G 100
| rowspan="5" | March 10, 2009 | G98 | TSMC 65 nm | 210 | 86 | rowspan="5" | PCIe 2.0 x16 | 567 | rowspan="2" | 1400 | 500 | 8:8:4 | rowspan="2" | 512 | 8.0 | rowspan="3" | DDR2 | 64 |2.15 |4.3 | 22.4 | rowspan="5" | 10.0 | rowspan="5" | 3.3 | 35 | rowspan="5" | OEM products |
style="text-align:left;" | GeForce GT 120
| G96b | rowspan="4" | TSMC 55 nm | 314 | 121 | rowspan="2" | 500 | 800 | 32:16:8 | 16.0 | 128 |4.4 |8.8 | 89.6 | 50 |
style="text-align:left;" | GeForce GT 130
| rowspan="2" | G94b | rowspan="2" | 505 | rowspan="2" | 196 | 1250 | 500 | 48:24:12 | 1536 | 24.0 | 192 |6 |12 | 120 | 75 |
style="text-align:left;" | GeForce GT 140
| 650 | 1625 | 1800 | 64:32:16 | 512 1024 | 57.6 | rowspan="2" | GDDR3 | rowspan="2" | 256 |10.4 |20.8 | 208 | 105 |
style="text-align:left;" | GeForce GTS 150
| G92b | 754 | 260 | 738 | 1836 | 1000 | 128:64:16 | 1024 | 64.0 |11.808 |47.232 | 470 | 141 |
{{notelist}}
=GeForce 200 series=
{{Further|GeForce 200 series|Tesla (microarchitecture)}}
- All models support Coverage Sample Anti-Aliasing, Angle-Independent Anisotropic Filtering, 240-bit OpenEXR HDR
{{Row hover highlight}}
class="mw-datatable wikitable sortable sort-under" style="font-size:85%; text-align:center;" |
rowspan="2" | Model
! rowspan="2" | Launch ! rowspan="2" | Code name ! rowspan="2" | Transistors (million) ! rowspan="2" | Die size (mm2) ! colspan="3" | Clock rate ! rowspan="2" | Core config{{efn|name=geforce 200 1|Unified shaders: texture mapping units: render output units}} ! colspan="4" | Memory configuration ! colspan="2" |Fillrate ! Processing power (GFLOPS){{efn|name=geforce 200 2|To calculate the processing power see Tesla (microarchitecture)#Performance.}} ! colspan="2" | Supported API version ! rowspan="2" | TDP (Watts) ! rowspan="2" | Comments ! rowspan="2" | Release Price (USD) |
---|
Core (MHz)
! Shader (MHz) ! Memory (GT/s) ! Size (MiB) ! Bandwidth (GB/s) ! DRAM type ! Bus width (bit) !Pixel (GP/s) !Texture (GT/s) ! Direct3D ! OpenGL |
style="text-align:left;" | GeForce 205
| November 26, 2009 | GT218 | rowspan="2" | 260 | rowspan="2" | 57 | PCIe 2.0 x16 | 589 | 1402 | 1 | 8:4:4 | 512 | 8 | DDR2 | 64 |2.356 |2.356 | 22.4 | rowspan="3" | 10.1 | rowspan="15" | 3.3 | 30.5 | OEM only |
style="text-align:left;" | GeForce 210
| rowspan="2" | October 12, 2009 | GT218-325-B1 | PCIe 2.0 x16 | 520 | 1230 | 0.8 | 16:8:4 | 512 | 4.0 | DDR2 | 32 |2.356 |4.712 | 36.4 | 30.5 | rowspan="2" | |
style="text-align:left;" | GeForce GT 220
| GT216-300-A2 | TSMC 40 nm | 486 | 100 | rowspan="13" | PCIe 2.0 x16 | 615(OEM) | 1335(OEM) | 1 | 48:16:8 | 512 | 16.0 | DDR2 | 64 |5 |10 | 128.2(OEM) | 58 |
rowspan="2" style="text-align:left;" | GeForce GT 230
| G94b | rowspan="2" | TSMC/UMC 55 nm | 505 | 196? | 650 | 1625 | 1.8 | 48:24:16 | 512 | 57.6 | GDDR3 | 256 |10.4 |15.6 | 156 | rowspan="2" | 10 | rowspan="2" | 75 | rowspan="2" | OEM only |
April 27, 2009{{Cite web |date=2023-08-18 |title=Pegatron GT 230 Specs |url=https://www.techpowerup.com/gpu-specs/pegatron-gt-230.b4915 |access-date=2023-08-18 |website=TechPowerUp |language=en}}
| G92b | 754 | 260 | 500 | 1242 | 1 | 96:48:12 | 1536 | 24 | DDR2 | 192 |6 |24 | 238.5 |
style="text-align:left;" | GeForce GT 240
| November 17, 2009 | GT215-450-A2 | TSMC 40 nm | 727 | 139 | 550 | 1340 | 1.8 | 96:32:8 | 512 | 28.8(OEM) | DDR3 | 128 |4.4 |17.6 | 257.3 | 10.1 | 69 | |
style="text-align:left;" | GeForce GTS 240
| G92a | TSMC 65 nm | rowspan="3" | 754 | 324 | 675 | 1620 | 2.2 | 112:56:16 | 1024 | 70.4 | rowspan="9" | GDDR3 | rowspan="3" | 256 |10.8 |37.8 | 362.9 | rowspan="9" | 10.0 | 120 | OEM only |
rowspan="2" style="text-align:left;" | GeForce GTS 250
| 2009 | G92b | TSMC/UMC 55 nm | rowspan="2" | 260 | 702 | 1512 | 2 | rowspan="2" | 128:64:16 | 512 | 64.0 |11.2 |44.9 | 387 | 130 | |
March 3, 2009
| G92-428-B1 | TSMC 65 nm | 738 | 1836 | 2 | 512 | 64.0 |11.808 |47.232 | 470 | 150 | Some cards are rebranded GeForce 9800 GTX+ | $150 |
rowspan="2" style="text-align:left;" | GeForce GTX 260
| June 16, 2008 | GT200-100-A2 | 65 nm | rowspan="5" | 1400 | 576 | 576 | 1242 | 1.998 | 192:64:28 | 896 | 111.9 | rowspan="3" | 448 |16.128 |36.864 | 477 | 182 | Replaced by GTX 260 Core 216 |
September 16, 2008 November 27, 2008 (55 nm) | GT200-103-A2 | 65 nm | 576 | 576 | 1242 | 1.998 | 216:72:28 | 896 (1792) | 111.9 |16.128 |41.472 | 536.5 | 182 | 55 nm version has less TDP | $300 |
style="text-align:left;" | GeForce GTX 275
| April 9, 2009 | GT200-105-B3 | TSMC/UMC 55 nm | 470 | 633 | 1404 | 2.268 | 240:80:28 | 896 (1792) | 127.0 |17.724 |50.6 | 674 | 219 | Effectively one-half of the GTX 295 | $250 |
style="text-align:left;" | GeForce GTX 280
| June 17, 2008 | GT200-300-A2 | 65 nm | 576 | 602 | 1296 | 2.214 | rowspan="2" | 240:80:32 | 1024 | 141.7 | 512 |19.264 |48.16 | 622 | 236 | Replaced by GTX 285 |
style="text-align:left;" | GeForce GTX 285
| January 15, 2009 | GT200-350-B3 | rowspan="2" | TSMC/UMC 55 nm | 470 | 648 | 1476 | 2.484 | 1024 (2048) | 159.0 | 512 |20.736 |51.84 | 708.48 | 204 | EVGA GTX285 Classified supports 4-way SLI | $400 |
style="text-align:left;" | GeForce GTX 295
| January 8, 2009 | 2x GT200-400-B3 | 2x 1400 | 2x 470 | 576 | 1242 | 1.998 | 2x 240:80:28 | 2x 896 | 2x 111.9 | 2x 448 |2x 16.128 |2x 46.08 | 1192.3 | 289 | Dual PCB models were replaced with a single PCB model with 2 GPUs | $500 |
rowspan="2" | Model
! rowspan="2" | Launch ! rowspan="2" | Code name ! rowspan="2" | Fab (nm) ! rowspan="2" | Transistors (million) ! rowspan="2" | Die size (mm2) ! Core (MHz) ! Shader (MHz) ! Memory (GT/s) ! rowspan="2" | Core config{{efn|name=geforce 200 1|Unified shaders: texture mapping units: render output units}} ! Size (MiB) ! Bandwidth (GB/s) ! DRAM type ! Bus width (bit) !Pixel (GP/s) !Texture (GT/s) ! Direct3D ! OpenGL ! rowspan="2" | TDP (Watts) ! rowspan="2" | Comments ! rowspan="2" | Release Price (USD) |
colspan="3" | Clock rate
! colspan="4" | Memory configuration ! colspan="2" |Fillrate ! Processing power (GFLOPS){{efn|name=geforce 200 2|To calculate the processing power see Tesla (microarchitecture)#Performance.}} ! colspan="2" | Supported API version |
{{notelist}}
==Features==
- Compute Capability: 1.1 (G92 [GTS 250] GPU)
- Compute Capability: 1.2 (GT215, GT216, GT218 GPUs)
- Compute Capability: 1.3 has double precision support for use in GPGPU applications. (GT200a/b GPUs only)
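As an illustrative sketch of the double-precision support noted above (the kernel and variable names are invented for this example), the following CUDA program runs a double-precision AXPY on the GPU. Native <code>double</code> arithmetic requires compute capability 1.3 or later, i.e. the GT200a/b parts in this series; when built for earlier targets, the compiler historically demoted <code>double</code> to single precision:
<syntaxhighlight lang="cuda">
#include <cstdio>

// Double-precision AXPY: y = a*x + y.
// Requires compute capability 1.3+ for native double arithmetic (GT200a/b).
__global__ void daxpy(double a, const double *x, double *y, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i] + y[i];
}

int main() {
    const int n = 1024;
    const size_t bytes = n * sizeof(double);
    double hx[1024], hy[1024];
    for (int i = 0; i < n; ++i) { hx[i] = 1.0; hy[i] = 2.0; }

    double *dx, *dy;
    cudaMalloc((void **)&dx, bytes);
    cudaMalloc((void **)&dy, bytes);
    cudaMemcpy(dx, hx, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(dy, hy, bytes, cudaMemcpyHostToDevice);

    daxpy<<<(n + 255) / 256, 256>>>(3.0, dx, dy, n);      // expect every y[i] == 5.0

    cudaMemcpy(hy, dy, bytes, cudaMemcpyDeviceToHost);
    printf("y[0] = %f\n", hy[0]);
    cudaFree(dx);
    cudaFree(dy);
    return 0;
}
</syntaxhighlight>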
{{Row hover highlight}}
class="mw-datatable wikitable sortable sort-under" style="font-size:85%; text-align:center;" |
rowspan="2" | Model
! colspan="3" | Features |
---|
Scalable Link Interface (SLI)
! PureVideo 2 with VP2 ! PureVideo 4 with VP4 Engine |
style="text-align:left;" | GeForce 210
| colspan="1" rowspan="3" {{no}} | colspan="1" rowspan="3" {{no}} | colspan="3" rowspan="3" {{yes}} |
style="text-align:left;" | GeForce GT 220 |
style="text-align:left;" | GeForce GT 240 |
style="text-align:left;" | GeForce GTS 250
| rowspan="7" {{yes}} | colspan="1" rowspan="8" {{yes}} | colspan="1" rowspan="8" {{no}} |
style="text-align:left;" | GeForce GTX 260 |
style="text-align:left;" | GeForce GTX 260 Core 216 |
style="text-align:left;" | GeForce GTX 260 Core 216 (55 nm) |
style="text-align:left;" | GeForce GTX 275 |
style="text-align:left;" | GeForce GTX 280 |
style="text-align:left;" | GeForce GTX 285 |
style="text-align:left;" | GeForce GTX 295
| {{yes}} |
=GeForce 300 series=
{{Further|GeForce 300 series|Tesla (microarchitecture)}}
{{Row hover highlight}}
class="mw-datatable wikitable sortable sort-under" style="font-size:85%; text-align:center;" |
rowspan="2" | Model
! rowspan="2" | Launch ! rowspan="2" | Code name ! rowspan="2" | Transistors (million) ! rowspan="2" | Die size (mm2) ! colspan="3" | Clock rate ! rowspan="2" | Core config{{efn|name=geforce 300 1|Unified shaders: texture mapping units: render output units}} ! colspan="4" | Memory configuration ! colspan="2" |Fillrate ! Processing power (GFLOPS){{efn|name=geforce 300 2|To calculate the processing power see Tesla (microarchitecture)#Performance.}} ! rowspan="2" | TDP (Watts) ! rowspan="2" | Comments |
---|
Core (MHz)
! Shader (MHz) ! Memory (MHz) ! Size (MiB) ! Bandwidth (GB/s) ! DRAM type ! Bus width (bit) !Pixel (GP/s) !Texture (GT/s) |
style="text-align:left;" | GeForce 310
| November 27, 2009 | GT218 | rowspan="7" |TSMC 40 nm | 260 | 57 | rowspan="7" | PCIe 2.0 x16 | 589 | 1402 | 1000 | 16:8:4 | 512 | 8 | DDR2 | rowspan="2" | 64 |2.356 |4.712 | 44.8 | 30.5 | OEM Card, similar to GeForce 210 |
style="text-align:left;" | GeForce 315
| rowspan="6" | February 2010 | GT216 | 486 | 100 | 475 | 1100 | rowspan="2" | 1580 | 48:16:4 | 512 | 12.6 | DDR3 |3.8 |7.6 | 105.6 | 33 | OEM Card, similar to Geforce GT220 |
style="text-align:left;" | GeForce GT 320
| GT215 | rowspan="5" | 727 | rowspan="5" | 144 | 540 | 1302 | 72:24:8 | 1024 | 25.3 | rowspan="3" | GDDR3 | 128 |4.32 |12.96 | 187.5 | 43 | OEM Card |
rowspan="3" style="text-align:left;" | GeForce GT 330{{cite web |url=http://www.techpowerup.com/gpudb/1758/geforce-gt-330-oem.html |title=Nvidia GeForce GT 330 OEM {{pipe}} techPowerUp GPU Database |website=Techpowerup.com |access-date=2015-12-11 |archive-url=https://archive.today/20141018032011/http://www.techpowerup.com/gpudb/1758/geforce-gt-330-oem.html |archive-date=2014-10-18 |url-status=live }}
| 550 | 1350 | | rowspan="2" | 96:32:8 | 512 | 32.00 | 128 |4.40 |17.60 | 257.3 | rowspan="3" | 75 | rowspan="3" | Specifications vary depending on OEM, similar to GT230 v2. |
G92{{cite web|url=https://www.techpowerup.com/gpu-specs/geforce-gt-330-oem.c3314|title=NVIDIA GeForce GT 330 OEM Specs|website=TechPowerUp|language=en|access-date=2020-03-23}}
| rowspan="2" |500 | rowspan="2" |1250 | | 256 | 51.20 | 256 |4.000 | rowspan="2" |24.00 | rowspan="2" |240.0 |
G92B{{cite web|url=https://www.techpowerup.com/gpu-specs/geforce-gt-330-oem.c1757|title=NVIDIA GeForce GT 330 OEM Specs|website=TechPowerUp|language=en|access-date=2020-03-23}}
| | 96:32:16 | 1024 | 16.32 | DDR2 | 128 |8.000 |
style="text-align:left;" | GeForce GT 340
| GT215 | 550 | 1340 | 3400 | 96:32:8 | 512 | 54.4 | 128 | | | 257.3 | 69 | OEM Card, similar to GT240 |
{{notelist}}
=GeForce 400 series=
{{Further|GeForce 400 series|Fermi (microarchitecture)}}
- All cards have a PCIe 2.0 x16 bus interface.
- The base hardware requirement for Vulkan 1.0 was OpenGL ES 3.1, a subset of OpenGL 4.3, which is supported on all Fermi and newer cards.
- Memory bandwidths stated in the following table refer to Nvidia reference designs. Actual bandwidth can be higher or lower depending on the maker of the graphics board.
{{Row hover highlight}}
class="mw-datatable wikitable sortable sort-under" style="font-size:85%; text-align:center;" |
rowspan="2" | Model
! rowspan="2" | Launch ! rowspan="2" | Code name ! rowspan="2" | Transistors (million) ! rowspan="2" | Die size (mm2) ! colspan="3" | Clock rate ! rowspan="2" | SM count ! rowspan="2" | Core config{{efn|name=geforce 400 1|Unified shaders: texture mapping units: render output units}}{{efn|name=geforce 400 3|Each SM in the GF100 contains 4 texture filtering units for every texture address unit. The complete GF100 die contains 64 texture address units and 256 texture filtering units.{{cite web |url=http://anandtech.com/show/2977/nvidia-s-geforce-gtx-480-and-gtx-470-6-months-late-was-it-worth-the-wait-/3 |title=The GF100 Recap - Nvidia's GeForce GTX 480 and GTX 470: 6 Months Late, Was It Worth the Wait? |website=Anandtech.com |access-date=2015-12-11 |archive-url=https://web.archive.org/web/20110805164512/http://www.anandtech.com/show/2977/nvidia-s-geforce-gtx-480-and-gtx-470-6-months-late-was-it-worth-the-wait-/3 |archive-date=2011-08-05 |url-status=live }} Each SM in the GF104/106/108 architecture contains 8 texture filtering units for every texture address unit but has doubled both addressing and filtering units. The complete GF104 die also contains 64 texture address units and 512 texture filtering units despite the halved SM count, the complete GF106 die contains 32 texture address units and 256 texture filtering units and the complete GF108 die contains 16 texture address units and 128 texture filtering units.{{cite web |url=http://www.anandtech.com/show/3809/nvidias-geforce-gtx-460-the-200-king/2 |title=GF104: Nvidia Goes Superscalar - Nvidia's GeForce GTX 460: The $200 King |website=Anandtech.com |access-date=2015-12-11 |archive-url=https://web.archive.org/web/20151222100647/http://www.anandtech.com/show/3809/nvidias-geforce-gtx-460-the-200-king/2 |archive-date=2015-12-22 |url-status=live }}}} ! colspan="4" | Memory configuration ! colspan="2" |Fillrate ! colspan="2" | Processing power (GFLOPS){{efn|name=geforce 400 2|To calculate the processing power see Fermi (microarchitecture)#Performance.}} ! colspan="5" |Supported API version ! rowspan="2" | TDP (Watts){{efn|name=geforce 400 4|Note that while GTX 460's TDP is comparable to that of AMD's HD5000 series, GF100-based cards (GTX 480/470/465) are rated much lower but pull significantly more power, e.g. GTX 480 with 250W TDP consumes More power than an HD 5970 with 297W TDP.{{cite web |url=http://www.tomshardware.com/reviews/geforce-gtx-480,2585-15.html |title=GeForce GTX 480 And 470: From Fermi And GF100 To Actual Cards! |website=Tomshardware.com |date=27 March 2010 |access-date=2015-12-11 |archive-date=11 September 2013 |archive-url=https://web.archive.org/web/20130911085858/http://www.tomshardware.com/reviews/geforce-gtx-480,2585-15.html |url-status=live }}}} ! rowspan="2" | Release Price (USD) |
---|
Core (MHz)
! Shader (MHz) ! Memory (MHz) ! Size (GiB) ! Bandwidth (GB/s) ! DRAM type ! Bus width (bit) !Pixel (GP/s) !Texture (GT/s) !OpenCL{{efn|name=geforce 400 6|The 400 series is the only non-OEM family from GeForce 9 to 700 series not to include an official dual-GPU system. However, on March 18, 2011, EVGA released the first single-PCB card with dual 460s on board. The card came with 2048 MiB of memory at 3600 MHz and 672 shader processors at 1400 MHz and was offered at the MSRP of $429.}} !CUDA |
style="text-align:left;" | GeForce 405{{efn|name=geforce 400 7|The GeForce 405 card is a rebranded GeForce 310 which itself is a rebranded GeForce 210.}}
| September 16, 2011 | GT216 | 40 nm | 486 | 100 | 475 | 1100 | 800 | rowspan="2" | 1 | 48:16:8 | 0.5 | 12.6 | DDR3 | 64 |3.8 |7.6 | 105.6 | {{unk}} |10.1 |3.3 | rowspan="2" |1.1 |1.2 | 30.5 | rowspan="3" | OEM |
style="text-align:left;" | GeForce GT 420
| September 3, 2010 | GF108 | rowspan="16" | TSMC 40 nm | rowspan="5" | 585 | rowspan="5" | 116 | rowspan="4" | 700 | rowspan="4" | 1400 | 1800 | 48:4:4 | 0.5 | 28.8 | rowspan="4" | GDDR3 | rowspan="2" | 128 | rowspan="4" |2.8 |2.8 | 134.4 | {{unk}} | rowspan="16" |12 FL 11_1 | rowspan="16" |4.6 | rowspan="13" |2.1 | 50 |
rowspan="3" style="text-align:left;" | GeForce GT 430
| rowspan="3" | October 11, 2010 | rowspan="3" | GF108 | 1600 | rowspan="4" | 2 | rowspan="4" | 96:16:4 | 0.5 | 25.6 | rowspan="3" |11.2 | 268.8 | {{unk}} |1.2 | 60 |
1800
| rowspan="2" | 0.5 | 28.8 | 128 | 268.8 | rowspan="2" | Unknown | rowspan="11" |1.1 | 49 | rowspan="2" | $79 |
1300
| 10.4 | 64 | | |
rowspan="2" style="text-align:left;" | GeForce GT 440
| February 1, 2011 | GF108 | 810 | 1620 | 1800 | 0.5 | 28.8 | GDDR3 | 128 |3.2 |12.9 | 311.04 | {{unk}} | 65 | $100 |
rowspan="2" | October 11, 2010
| rowspan="2" | GF106 | rowspan="3" | 1170 | rowspan="3" | 238 | 594 | 1189 | 1600 | rowspan="2" | 3 | rowspan="2" | 144:24:24 | 1.5 | 43.2 | DDR3 | rowspan="2" | 192 |4.86 |19.44 | 342.43 | {{unk}} | 56 | rowspan="2" | OEM |
rowspan="2" style="text-align:left;" | GeForce GTS 450
| 790 | 1580 | 4000 | 1.5 | 96.0 | rowspan="10" | GDDR5 |4.7 |18.9 | 455.04 | {{unk}} | 106 |
September 13, 2010 March 15, 2011 | GF106-250 | 783 | 1566 | 1200-1600 (GDDR3) | 4 | 192:32:16 | 0.5 | 57.7 | 128 |6.2 |25.0 | 601.34 | {{unk}} | 106 | $129 |
style="text-align:left;" | GeForce GTX 460 SE
| November 15, 2010 | GF104-225-A1 | rowspan="5" | 1950 | rowspan="5" | 332 | rowspan="2" | 650 | rowspan="2" | 1300 | rowspan="2" | 3400 | 6 | 288:48:32 | 1 | 108.8 | rowspan="2" | 256 |7.8 |31.2 | 748.8 | {{unk}} | 150 | $160 |
rowspan="4" style="text-align:left;" | GeForce GTX 460
| October 11, 2010 | GF104 | rowspan="4" | 7 | 336:56:32 | 1 | 108.8 |9.1 |36.4 | 873.6 | {{unk}} | | OEM |
rowspan="2" | July 12, 2010
| rowspan="2" | GF104-300-KB-A1 | rowspan="2" | 675 | rowspan="2" | 1350 | rowspan="2" | 3600 | 336:56:24 | 0.75 | 86.4 | 192 | rowspan="2" |9.4 | rowspan="2" |37.8 | 907.2 | rowspan="2" | Unknown | | $199 |
336:56:32
| 1 | 115.2 | 256 | |160 | $229 |
September 24, 2011
| GF114 | 779 | 1557 | 4008 | 336:56:24 | 1 | 96.2 | 192 |10.9 |43.6 | 1045.6 | {{unk}} | | $199 |
style="text-align:left;" | GeForce GTX 465
| May 31, 2010 | GF100-030-A3 | rowspan="3" | 529 | rowspan="2" | 608 | rowspan="2" | 1215 | 3206 | 11 | 352:44:32 | 1 | 102.7 | 256 |13.3 |26.7 | 855.36 | 106.92 | rowspan="3" |1.2 | rowspan="3" |2.0 | 200{{efn|name=geforce 400 4}} | $279 |
style="text-align:left;" | GeForce GTX 470
| March 26, 2010 | GF100-275-A3 | 3348 | 14 | 448:56:40 | 1.25 | 133.9 | 320 |17.0 |34.0 | 1088.64 | 136.08 | 215{{efn|name=geforce 400 4}} | $349 |
style="text-align:left;" | GeForce GTX 480
| March 26, 2010 | GF100-375-A3 | 701 | 1401 | 3696 | 15 | 480:60:48 | 1.5 | 177.4 | 384 |21.0 |42.0 | 1344.96 | 168.12 | 250{{efn|name=geforce 400 4}} | $499 |
rowspan="2" | Model
! rowspan="2" | Launch ! rowspan="2" | Code name ! rowspan="2" | Fab (nm) ! rowspan="2" | Transistors (million) ! rowspan="2" | Die size (mm2) ! colspan="3" | Clock rate ! rowspan="2" | SM count ! rowspan="2" | Core config{{efn|name=geforce 400 1|Unified shaders: texture mapping units: render output units}}{{efn|name=geforce 400 3}} ! colspan="4" | Memory configuration ! colspan="2" |Fillrate ! colspan="2" | Processing power (GFLOPS){{efn|name=geforce 400 2|To calculate the processing power see Fermi (microarchitecture)#Performance.}} ! colspan="5" |Supported API version ! rowspan="2" | TDP (Watts){{efn|name=geforce 400 4}} ! rowspan="2" | Release Price (USD) |
Core (MHz)
! Shader (MHz) ! Memory (MHz) ! Size (GiB) ! Bandwidth (GB/s) ! DRAM type ! Bus width (bit) !Pixel (GP/s) !Texture (GT/s) !OpenCL{{efn|name=geforce 400 6|The 400 series is the only non-OEM family from GeForce 9 to 700 series not to include an official dual-GPU system. However, on March 18, 2011, EVGA released the first single-PCB card with dual 460s on board. The card came with 2048 MiB of memory at 3600 MHz and 672 shader processors at 1400 MHz and was offered at the MSRP of $429.}} !CUDA |
{{notelist}}
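For illustration, the processing-power columns above can be reproduced from the table's own figures: the GeForce GTX 480 row corresponds to
:<math>480 \times 1401\,\text{MHz} \times 2\ \text{FLOPS/clock} \approx 1344.96\ \text{GFLOPS}</math>
single precision, and the listed double-precision figure (168.12 GFLOPS) is one eighth of that single-precision value. The factors used here are inferred from the table values rather than taken from the linked microarchitecture article.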
=GeForce 500 series=
{{Further|GeForce 500 series|Fermi (microarchitecture)}}
{{Row hover highlight}}
class="mw-datatable wikitable sortable sort-under" style="font-size:85%; text-align:center;" |
rowspan="2" | Model
! rowspan="2" | Launch ! rowspan="2" | Code name ! rowspan="2" | Fab (nm) ! rowspan="2" | Transistors (million) ! rowspan="2" | Die size (mm2) ! colspan="3" | Clock rate ! rowspan="2" | SM count ! rowspan="2" | Core config{{efn|name=geforce 500 1|Unified shaders: texture mapping units: render output units}}{{efn|name=geforce 500 3|Each SM in the GF110 contains 4 texture filtering units for every texture address unit. The complete GF110 die contains 64 texture address units and 256 texture filtering units.{{cite web |url=http://www.anandtech.com/show/4008/nvidias-geforce-gtx-580/2 |title=GF110: Fermi Learns Some New Tricks - Nvidia's GeForce GTX 580: Fermi Refined |website=Anandtech.com |access-date=2015-12-11 |archive-url=https://web.archive.org/web/20160113203734/http://www.anandtech.com/show/4008/nvidias-geforce-gtx-580/2 |archive-date=2016-01-13 |url-status=live }} Each SM in the GF114/116/118 architecture contains 8 texture filtering units for every texture address unit but has doubled both addressing and filtering units.}} ! colspan="4" | Memory configuration ! colspan="2" |Fillrate ! colspan="2" | Processing power (GFLOPS){{efn|name=geforce 500 2|To calculate the processing power see Fermi (microarchitecture)#Performance.}} ! colspan="5" |Supported API version ! rowspan="2" | TDP (watts){{efn|name=geforce 500 6|Similar to previous generation, GTX 580 and most likely future GTX 570{{Update inline|date=April 2021}}, while reflecting its improvement over GF100, still have lower rated TDP and higher power consumption, e.g. GTX580 (243W TDP) is slightly less power hungry than GTX 480 (250W TDP). This is managed by clock throttling through drivers when a dedicated power hungry application is identified that could breach card TDP. Application name changing will disable throttling and enable full power consumption, which in some cases could be close to that of GTX480.{{cite web |url=http://www.anandtech.com/Show/Index/4008?cPage=14&all=False&sort=0&page=17&slug=nvidias-geforce-gtx-580 |title=Power, Temperature, and Noise - Nvidia's GeForce GTX 580: Fermi Refined |website=Anandtech.com |access-date=2015-12-11 |archive-url=https://web.archive.org/web/20160113203734/http://www.anandtech.com/Show/Index/4008?cPage=14&all=False&sort=0&page=17&slug=nvidias-geforce-gtx-580 |archive-date=2016-01-13 |url-status=live }}}} ! rowspan="2" | Release Price (USD) |
---|
Core (MHz)
! Shader (MHz) ! Memory (MHz) ! Size (GiB) ! Bandwidth (GB/s) ! DRAM type ! Bus width (bit) !Pixel (GP/s) !Texture (GT/s) !OpenCL8 !CUDA |
style="text-align:left;" | GeForce 510
| September 29, 2011 | rowspan="2" | GF119 | rowspan="2" | 292 | rowspan="2" | 79 | PCIe 2.0 x16 | 523 | 1046 | rowspan="4" | 1800 | rowspan="2" | 1 | rowspan="2" | 48:8:4 | rowspan="3" | 1 | 14.4 | rowspan="4" | DDR3 | rowspan="2" | 64 |2.1 |4.5 | 100.4 | {{unk}} | rowspan="15" |12 FL 11_1 | rowspan="15" |4.6 | rowspan="15" |1.1 | rowspan="12" |2.1 | 25 | OEM |
style="text-align:left;" | GeForce GT 520
| April 12, 2011 | PCIe 2.0 x16 | 810 | 1620 | 14.4 |3.25 |6.5 | 155.5 | {{unk}} | 29 | $59 |
style="text-align:left;" | GeForce GT 530{{cite web|title=NVIDIA GeForce GT 530 OEM Specs|url=https://www.techpowerup.com/gpu-specs/geforce-gt-530-oem.c630|access-date=September 25, 2022|website=TechPowerUp}}
| rowspan="3" | May 14, 2011 | GF108-220 | 585 | 116 | rowspan="13" | PCIe 2.0 x16 | 700 | 1400 | 2 | 96:16:4 | 28.8 | 128 |2.8 |11.2 | 268.8 | 22.40 | 50 | OEM |
rowspan="2" style="text-align:left;" | GeForce GT 545
| rowspan="2" | GF116 | rowspan="3" | ~1170 | rowspan="3" | ~238 | 720 | 1440 | rowspan="2" | 3 | rowspan="2" | 144:24:16 | 1.5 | 43 | 192 |11.52 |17.28 | 415.07 | {{unk}} | 70 | $149 |
870
| 1740 | 3996 | 1 | 64 | rowspan="11" | GDDR5 | 128 |13.92 |20.88 | 501.12 | {{unk}} | 105 | OEM |
style="text-align:left;" | GeForce GTX 550 Ti
| March 15, 2011 | GF116-400 | 900 | 1800 | 4104 | 4 | 192:32:24 | 0.75+0.25 | 65.7+32.8 | 128+64{{efn|name=geforce 500 9|1024 MiB RAM on 192-bit bus assemble with 4 x (128 MiB) + 2 x (256 MiB).}} |21.6 |28.8 | 691.2 | {{unk}} | 116 | $149 |
style="text-align:left;" | GeForce GTX 555
| May 14, 2011 | GF114 | rowspan="4" | 1950 | rowspan="4" | 332 | rowspan="2" | 736 | rowspan="2" | 1472 | rowspan="2" | 3828 | rowspan="2" | 6 | rowspan="2" | 288:48:24 | rowspan="2" | 1 | rowspan="2" | 91.9 | rowspan="2" | 128+64{{efn|name=geforce 500 9|1024 MiB RAM on 192-bit bus assemble with 4 x (128 MiB) + 2 x (256 MiB).}} | rowspan="2" |17.6 | rowspan="2" |35.3 | rowspan="2" |847.9 | {{unk}} | rowspan="3" |150 | rowspan="2" | OEM |
style="text-align:left;" | GeForce GTX 560 SE
| GF114-200-KB-A1{{efn|name=geforce 500 4|Internally referred to as GF104B{{cite web |url=http://www.gpu-tech.org/content.php/144-%E2%80%A6and-GF110s-real-name-is-GF100B-%28and-who-guesses-what-GF114-is-%29 |title=...and GF110s real name is: GF100B |website=GPU-Tech.org |access-date=2015-12-11 |archive-url=https://web.archive.org/web/20160113203736/http://www.gpu-tech.org/content.php/144-%E2%80%A6and-GF110s-real-name-is-GF100B-%28and-who-guesses-what-GF114-is-%29 |archive-date=2016-01-13 |url-status=live }}}} | {{unk}} | |
style="text-align:left;" | GeForce GTX 560
| May 17, 2011 | GF114-325-A1{{efn|name=geforce 500 4}} | 810 | 1620 | rowspan="2" | 4008 | 7 | 336:56:32 | rowspan="2" | 1 2 | 128.1 | rowspan="2" | 256 |25.92 |45.36 | 1088.6 | {{unk}} | $199 |
rowspan="2" style="text-align:left;" | GeForce GTX 560 Ti
| January 25, 2011 | GF114-400-A1{{efn|name=geforce 500 4}} | 822 | 1645 | 8 | 384:64:32 | 128.26 |26.3 |52.61 | 1263.4 | 110 | 170 | $249 |
May 30, 2011
| GF110{{efn|name=geforce 500 5|Internally referred to as GF100B}} | rowspan="4" | 3000{{cite web |url=http://www.anandtech.com/show/4008/nvidias-geforce-gtx-580 |title=Nvidia's GeForce GTX 580: Fermi Refined |date=November 9, 2010 |publisher=AnandTech |author=Ryan Smith |access-date=November 9, 2010 |archive-url=https://web.archive.org/web/20101110202636/http://www.anandtech.com/show/4008/nvidias-geforce-gtx-580 |archive-date=November 10, 2010 |url-status=live }} | rowspan="3" | 732 | rowspan="3" | 1464 | rowspan="3" | 3800 | 11 | 352:44:40 | 1.25 | rowspan="3" | 152 | rowspan="3" | 320 | rowspan="3" |29.28 |32.21 | 1030.7 | 128.83 | rowspan="2" | 210{{efn|name=geforce 500 6}} | OEM |
style="text-align:left;" | GeForce GTX 560 Ti 448 Cores
| November 29, 2011 | GF110-270-A1{{efn|name=geforce 500 5}} | 14 | 448:56:40 | 1.25 | 40.99 | 1311.7 | 163.97 | $289 |
style="text-align:left;" | GeForce GTX 570
| December 7, 2010 | GF110-275-A1{{efn|name=geforce 500 5}} | 15 | 480:60:40 | 1.25 2.5 |43.92 | 1405.4 | 175.68 | rowspan="3" |2.0 | 219{{efn|name=geforce 500 6}} | $349 |
style="text-align:left;" | GeForce GTX 580
| November 9, 2010 | GF110-375-A1{{efn|name=geforce 500 5}} | 772 | 1544 | 4008 | 16 | 512:64:48 | 1.5 | 192.384 | 384 |37.05 |49.41 | 1581.1 | 197.63 | $499 |
style="text-align:left;" | GeForce GTX 590
| March 24, 2011 | 2x GF110-351-A1 | 2x 3000 | 2x 520 | 607 | 1215 | 3414 | 2x16 | 2x 512:64:48 | 2x 1.5 | 2x163.87 | 2x384 |2x29.14 |2x38.85 | 2488.3 | 311.04 | 365 | $699 |
rowspan="2" | Model
! rowspan="2" | Launch ! rowspan="2" | Code name ! rowspan="2" | Fab (nm) ! rowspan="2" | Transistors (million) ! rowspan="2" | Die size (mm2) ! colspan="3" | Clock rate ! rowspan="2" | SM count ! rowspan="2" | Core config{{efn|name=geforce 500 1|Unified shaders: texture mapping units: render output units}}{{efn|name=geforce 500 3}} ! colspan="4" | Memory configuration ! colspan="2" |Fillrate ! colspan="2" | Processing power (GFLOPS){{efn|name=geforce 500 2|To calculate the processing power see Fermi (microarchitecture)#Performance.}} ! colspan="5" |Supported API version ! rowspan="2" | TDP (Watts){{efn|name=geforce 500 6}} ! rowspan="2" | Release Price (USD) |
Core (MHz)
! Shader (MHz) ! Memory (MHz) ! Size (GiB) ! Bandwidth (GB/s) ! DRAM type ! Bus width (bit) !Pixel (GP/s) !Texture (GT/s) !OpenCL8 !CUDA |
{{notelist}}
=GeForce 600 series=
{{Further|GeForce 600 series|Kepler (microarchitecture)}}
- Added NVENC on GTX cards
- Several 600 series cards are rebranded 400 or 500 series cards.
{{Row hover highlight}}
class="mw-datatable wikitable sortable sort-under" style="font-size:85%; text-align:center;" |
rowspan="2" | Model
! rowspan="2" | Launch ! rowspan="2" | Code name ! rowspan="2" | Fab (nm) ! rowspan="2" | Transistors (million) ! rowspan="2" | Die size (mm2) ! colspan="5" | Clock rate ! rowspan="2" | SM count ! rowspan="2" | Core config{{efn|name=geforce 600 1|Unified shaders: texture mapping units: render output units}} ! colspan="4" | Memory configuration ! colspan="2" |Fillrate ! colspan="2" | Processing power (GFLOPS){{efn|name=geforce 600 10|To calculate the processing power see Kepler (microarchitecture)#Performance, or Fermi (microarchitecture)#Performance.}} ! colspan="4" | Supported API version ! rowspan="2" | TDP (Watts) ! rowspan="2" | Release Price (USD) |
---|
Core (MHz)
! Average Boost (MHz) ! Max Boost (MHz) ! Shader (MHz) ! Memory (MHz) ! Size (GiB) ! Bandwidth (GB/s) ! DRAM type ! Bus width (bit) !Pixel (GP/s) !Texture (GT/s) ! Vulkan{{efn|name=geforce 600 11|Vulkan 1.2 is only supported on Kepler cards.}} ! Direct3D ! OpenGL ! OpenCL |
style="text-align:left;" | GeForce 605{{efn|name=geforce 600 2|The GeForce 605 (OEM) card is a rebranded GeForce 510.}}
| April 3, 2012 | GF119 | rowspan="3" | 292 | rowspan="3" | 79 | PCIe 2.0 x16 | 523 | {{N/a}} | {{N/a}} | 1046 | 898 | rowspan="3" | 1 | 48:8:4 | 0.5 1 | 14.4 | rowspan="7" | DDR3 | rowspan="5" | 64 |2.09 |4.2 | 100.4 | {{unk}} | rowspan="5" {{N/a}} | rowspan="27" | 12 | rowspan="27" | 4.6 | rowspan="27" | 1.2 | 25 | OEM |
style="text-align:left;" | GeForce GT 610{{efn|name=geforce 600 3|The GeForce GT 610 card is a rebranded GeForce GT 520.}}
| May 15, 2012 | GF119-300-A1 | PCIe 2.0 x16, PCIe x1, PCI | rowspan="2" | 810 | {{N/a}} | {{N/a}} | rowspan="2" | 1620 | 1000 | 48:8:4 | 0.5 | 8 | rowspan="2" |3.24 |6.5 | 155.5 | {{unk}} | 29 | Retail |
rowspan="2" style="text-align:left;" | GeForce GT 620{{efn|name=geforce 600 4|The GeForce GT 620 (OEM) card is a rebranded GeForce GT 520.}}
| April 3, 2012 | GF119 | rowspan="3" | PCIe 2.0 x16 |{{N/a}} |{{N/a}} | 898 | 48:8:4 | 0.5 | 14.4 |6.5 | 155.5 | {{unk}} | 30 | OEM |
May 15, 2012
| GF108-100-KB-A1 | 585 | 116 | 700 | {{N/a}} | {{N/a}} | 1400 | 1000–1800 | 2 | 96:16:4 | 1 | 8–14.4 |2.8 |11.2 | 268.8 | {{unk}} | 49 | Retail |
style="text-align:left;" | GeForce GT 625
| February 19, 2013 | GF119 | 292 | 79 | 810 | {{N/a}} | {{N/a}} | 1620 | 898 | rowspan="2" | 1 | 48:8:4 | 0.5 1 | 14.4 |3.24 |6.5 | 155.5 | {{unk}} | 30 | rowspan="2" | OEM |
rowspan="4" style="text-align:left;" | GeForce GT 630{{efn|name=geforce 600 6|The GeForce GT 630 (DDR3, 128-bit, retail) card is a rebranded GeForce GT 430 (DDR3, 128-bit).}}{{efn|name=geforce 600 7|The GeForce GT 630 (GDDR5) card is a rebranded GeForce GT 440 (GDDR5).}}
| April 24, 2012 | GK107 | TSMC 28 nm | 1300 | 118 | PCIe 3.0 x16 | 875 | {{N/a}} | {{N/a}} | 875 | 891 | 192:16:16 | 1 | 28.5 | rowspan="3" | 128 |14 |14 | 336 | 14 | 1.2 | 50 |
rowspan="2" | May 15, 2012
| GF108-400-A1 | rowspan="2" | TSMC 40 nm | rowspan="2" | 585 | rowspan="2" | 116 | rowspan="2" | PCIe 2.0 x16 | 700 | {{N/a}} | {{N/a}} | 1620 | 1600–1800 | rowspan="2" | 2 | 96:16:4 | 1 | 25.6–28.8 |2.8 |11.2 | 311 | {{unk}} | rowspan="2" {{N/a}} | 49 | rowspan="2" | Retail |
GF108
| 810 | {{N/a}} | {{N/a}} | 1620 | 800 | 96:16:4 | 1 | 51.2 | GDDR5 |3.2 |13 | 311 | {{unk}} | 65 |
May 29, 2013
| GK208-301-A1 | rowspan="2" | TSMC 28 nm | rowspan="2" | 1020 | rowspan="2" | 79 | PCIe 2.0 x8 | 902 | {{N/a}} | {{N/a}} | 902 | 900 | rowspan="2" | 1 | 384:16:8 | rowspan="2" | 1 | 14.4 | rowspan="5" | DDR3 | rowspan="2" | 64 |7.22 |14.44 | 692.7 | {{unk}} | rowspan="2" | 1.2 | 25 | |
style="text-align:left;" | GeForce GT 635
| February 19, 2013 | GK208 | PCIe 3.0 x8 | 967 | {{N/a}} | {{N/a}} | 967 | 1001 | 384:16:8 | 16 |7.74 |15.5 | 742.7 | {{unk}} | 35 | rowspan="3" | OEM |
rowspan="5" style="text-align:left;" | GeForce GT 640{{efn|name=geforce 600 8|The GeForce GT 640 (OEM) GF116 card is a rebranded GeForce GT 545 (DDR3).}}
| rowspan="2" | April 24, 2012 | GF116 | TSMC 40 nm | 1170 | 238 | PCIe 2.0 x16 | 720 | {{N/a}} | {{N/a}} | 1440 | 891 | 3 | 144:24:24 | 1.5 | 42.8 | 192 |17.3 |17.3 | 414.7 | {{unk}} | {{N/a}} | 75 |
rowspan="3" | GK107
| rowspan="3" | TSMC 28 nm | rowspan="3" | 1300 | rowspan="3" | 118 | rowspan="3" | PCIe 3.0 x16 | 797 | {{N/a}} | {{N/a}} | 797 | 891 | rowspan="4" | 2 | rowspan="3" | 384:32:16 | 1 | 28.5 | rowspan="3" | 128 |12.8 |25.5 | 612.1 | 25.50 | rowspan="4" | 1.2 | 50 |
June 5, 2012
| 900 | {{N/a}} | {{N/a}} | 900 | 891 | 2 | 28.5 |14.4 |28.8 | 691.2 | 28.8 | 65 | $100 |
April 24, 2012
| 950 | {{N/a}} | {{N/a}} | 950 | 1250 | 1 | 80 | rowspan="14" | GDDR5 |15.2 |30.4 | 729.6 | 30.40 | 75 | OEM |
May 29, 2013
| GK208-400-A1 | TSMC 28 nm | 1020 | 79 | PCIe 2.0 x8 | 1046 | {{N/a}} | {{N/a}} | 1046 | 1252 | 384:16:8 | rowspan="3" | 1 | 40.1 | 64 |8.37 |16.7 | 803.3 | {{unk}} | 49 | |
style="text-align:left;" | GeForce GT 645{{efn|name=geforce 600 9|The GeForce GT 645 (OEM) card is a rebranded GeForce GTX 560 SE.}}
| April 24, 2012 | GF114-400-A1 | TSMC 40 nm | 1950 | 332 | PCIe 2.0 x16 | 776 | {{N/a}} | {{N/a}} | 1552 | 1914 | 6 | 288:48:24 | 91.9 | 192 |18.6 |37.3 | 894 | {{unk}} | {{N/a}} | 140 | rowspan="2" | OEM |
style="text-align:left;" | GeForce GTX 645
| April 22, 2013 | GK106 | rowspan="11" | TSMC 28 nm | 2540 | 221 | rowspan="11" | PCIe 3.0 x16 | 823.5 | 888.5 | {{N/a}} | 823 | 1000 | 3 | 576:48:16 | 64 | rowspan="4" | 128 |14.16 |39.5 | 948.1 | 39.53 | rowspan="11" | 1.2 | rowspan="2" | 64 |
rowspan="2" style="text-align:left;" | GeForce GTX 650
| September 13, 2012 | GK107-450-A2 | 1300 | 118 | rowspan="2" | 1058 | {{N/a}} | {{N/a}} | rowspan="2" | 1058 | rowspan="2" | 1250 | rowspan="2" | 2 | rowspan="2" | 384:32:16 | rowspan="4" | 1 | rowspan="2" | 80 | rowspan="2" |16.9 | rowspan="2" |33.8 | 812.54 | rowspan="2" | 33.86 | $110 |
November 27, 2013 {{cite web|title=NVIDIA GeForce GTX 650 Specs|url=https://www.techpowerup.com/gpu-specs/geforce-gtx-650.c2445|access-date=2021-12-09|website=TechPowerUp|language=en}}
| GK106-400-A1 | rowspan="4" |2540 | rowspan="4" |221 | {{N/a}} | 65 | | ? |
style="text-align:left;" | GeForce GTX 650 Ti
| October 9, 2012 | GK106-220-A1 | 928 | {{N/a}} | {{N/a}} | 928 | 1350 | rowspan="2" | 4 | 768:64:16 | 86.4 |14.8 |59.4 | 1425.41 | 59.39 | 110 | $150 (130) |
style="text-align:left;" | GeForce GTX 650 Ti Boost
| March 26, 2013 | GK106-240-A1 | rowspan="2" | 980 | rowspan="2" | 1032 | {{N/a}} | rowspan="2" | 980 | 1502 | 768:64:24 | 144.2 | 192 |23.5 |62.7 | 1505.28 | 62.72 | 134 | $170 (150) |
rowspan="2" style="text-align:left;" | GeForce GTX 660
| September 13, 2012 | GK106-400-A1 | 1084 | 1502 | 5 | 960:80:24 | 1.5+0.5 | 96.1+48.1 | 128+64 |23.5 |78.4 | 1881.6 | 78.40 | 140 | $230 (180) |
August 22, 2012
| GK104-200-KD-A2 | rowspan="4" | 3540 | rowspan="4" | 294 | 823.5 | 888.5 | 899 | 823 | 1450 | 6 | 1152:96:24 | 1.5 | 139 | 192 |19.8 |79 | 2108.6 | 79.06 | 130 | OEM |
style="text-align:left;" | GeForce GTX 660 Ti
| August 16, 2012 | GK104-300-KD-A2 | rowspan="2" | 915 | rowspan="2" | 980 | 1058 | rowspan="2" | 915 | 1502 | rowspan="2" | 7 | 1344:112:24 | 2 | 96.1+48.1 | 128+64 |22.0 |102.5 | 2459.52 | 102.48 | 150 | $300 |
style="text-align:left;" | GeForce GTX 670
| May 10, 2012 | GK104-325-A2 | 1084 | 1502 | 1344:112:32 | rowspan="2" | 2 | 192.256 | rowspan="2" | 256 |29.3 |102.5 | 2459.52 | 102.48 | 170 | $400 |
style="text-align:left;" | GeForce GTX 680
| March 22, 2012 | GK104-400-A2 | 1058 | 1110 | 1006 | 1502 | 8 | 1536:128:32 | 192.256 |32.2 |128.8 | 3090.43 | 128.77 | 195 | $500 |
style="text-align:left;" | GeForce GTX 690
| April 29, 2012 | 2x GK104-355-A2 | 2x 3540 | 2x 294 | 915 | 1019 | 1058 | 915 | 1502 | 2x 8 | 2x 1536:128:32 | 2x 2 | 2x 192.256 | 2x 256 |2x 29.28 |2x 117.12 | 2x 2810.88 | 2x 117.12 | 300 | $1000 |
rowspan="2" | Model
! rowspan="2" | Launch ! rowspan="2" | Code name ! rowspan="2" | Fab (nm) ! rowspan="2" | Transistors (million) ! rowspan="2" | Die size (mm2) ! colspan="5" | Clock rate ! rowspan="2" | SM count ! rowspan="2" | Core config{{efn|name=geforce 600 1|Unified shaders: texture mapping units: render output units}} ! colspan="4" | Memory configuration ! colspan="2" |Fillrate ! colspan="2" | Processing power (GFLOPS){{efn|name=geforce 600 10|To calculate the processing power see Kepler (microarchitecture)#Performance, or Fermi (microarchitecture)#Performance.}} ! colspan="4" | Supported API version ! rowspan="2" | TDP (Watts) ! rowspan="2" | Release Price (USD) |
Core (MHz)
! Average Boost (MHz) ! Max Boost (MHz) ! Shader (MHz) ! Memory (MHz) ! Size (GiB) ! Bandwidth (GB/s) ! DRAM type ! Bus width (bit) !Pixel (GP/s) !Texture (GT/s) ! Vulkan{{efn|name=geforce 600 11}} ! Direct3D ! OpenGL ! OpenCL |
{{notelist}}
= GeForce 700 series =
{{Further|GeForce 700 series|Kepler (microarchitecture)}}
The GeForce 700 series for desktop: the GM107 chips are Maxwell-based, the GF1xx chips are Fermi-based, and the GKxxx chips are Kepler-based.
- Improved NVENC
{{Row hover highlight}}
class="mw-datatable wikitable sortable sort-under" style="font-size:85%; text-align:center;" |
rowspan="2" | Model
! rowspan="2" | Launch ! rowspan="2" | Code name ! rowspan="2" | Fab (nm) ! rowspan="2" | Transistors (million) ! rowspan="2" | Die size (mm2) ! colspan="4" | Clock rate ! rowspan="2" | SMX count ! rowspan="2" | Core config{{efn|name=geforce 700 1|Unified shaders: texture mapping units: render output units}} ! colspan="4" | Memory configuration ! colspan="2" |Fillrate ! colspan="2" |Processing power (GFLOPS){{efn|name=geforce 700 9|To calculate the processing power see Maxwell (microarchitecture)#Performance, or Kepler (microarchitecture)#Performance.}} ! colspan="4" | Supported API version ! rowspan="2" | TDP (Watts) ! rowspan="2" | Release Price (USD) |
---|
Base (MHz)
! Average Boost (MHz) ! Max Boost{{efn|name=geforce 700 2|Max Boost depends on ASIC quality. For example, some GTX TITAN with over 80% ASIC quality can hit 1019 MHz by default, lower ASIC quality will be 1006 MHz or 993 MHz.}} (MHz) ! Memory (MHz) ! Size (GiB) ! Bandwidth (GB/s) ! DRAM type ! Bus width (bit) !Pixel (GP/s) !Texture (GT/s) ! Vulkan{{efn|name=geforce 700 11|Maxwell supports Vulkan version 1.3, while Kepler only support Vulkan version 1.2, Fermi does not support the Vulkan API at all.}} ! Direct3D{{efn|name=geforce 700 3|Kepler supports some optional 11.1 features on feature level 11_0 through the Direct3D 11.1 API, however Nvidia did not enable four non-gaming features to qualify Kepler for level 11_1.{{cite web |url=http://www.guru3d.com/news_story/nvidia_kepler_not_fully_compliant_with_directx_11_1.html |title=Nvidia Kepler not fully compliant with Direct3D 11.1 |website=Guru3d.com |access-date=2015-12-11 |archive-url=https://web.archive.org/web/20151222113454/http://www.guru3d.com/news_story/nvidia_kepler_not_fully_compliant_with_directx_11_1.html |archive-date=2015-12-22 |url-status=live }}{{Cite web|url=https://brightsideofnews.com/blog/nvidia-doesnt-fully-support-directx-111-with-kepler-gpus2c-bute280a6/|archiveurl=https://web.archive.org/web/20130903174514/http://www.brightsideofnews.com/news/2012/11/21/nvidia-doesnt-fully-support-directx-111-with-kepler-gpus2c-bute280a6.aspx|url-status=dead|title=Nvidia Doesn't Fully Support DirectX 11.1 with Kepler GPUs, But...|date=November 21, 2012|archivedate=September 3, 2013}}}} ! OpenGL ! OpenCL |
style="text-align:left;" | GeForce GT 705{{cite web |url=http://www.techpowerup.com/gpudb/2578/geforce-gt-705.html |title=Nvidia GeForce GT 705 {{pipe}} techPowerUp GPU Database |website=Techpowerup.com |access-date=2015-12-11 |archive-url=https://archive.today/20140615091732/http://www.techpowerup.com/gpudb/2578/geforce-gt-705.html |archive-date=2014-06-15 |url-status=live }}{{efn|name=geforce 700 4|The GeForce GT 705 (OEM) is a rebranded GeForce GT 610, which itself is a rebranded GeForce GT 520.}}
| rowspan="2" | March 27, 2014 | GF119-300-A1 | TSMC 40 nm | 292 | 79 | PCIe 2.0 x16 | 810 | {{n/a}} | {{n/a}} | 898 | rowspan="4" | 1 | 48:8:4 | 0.5 | rowspan="2" | 14.4 | rowspan="2" | DDR3 | 64 |3.24 |6.5 |155.5 |19.4 | n/a | rowspan="21" | 12 | rowspan="20" | 4.6 | 1.1 | 29 | rowspan="2" | OEM |
rowspan="2" style="text-align:left;" | GeForce GT 710{{cite web |url=http://www.techpowerup.com/gpudb/1990/geforce-gt-710.html |title=Nvidia GeForce GT 710 {{pipe}} techPowerUp GPU Database |website=Techpowerup.com |access-date=2015-12-11 |archive-url=https://archive.today/20140615091722/http://www.techpowerup.com/gpudb/1990/geforce-gt-710.html |archive-date=2014-06-15 |url-status=live }}
| GK208-301-A1 | rowspan="5" | TSMC 28 nm | rowspan="5" | 1020 | rowspan="5" | 79 | PCIe 2.0 x8 | 823 | {{n/a}} | {{n/a}} | 900 (1800) | 192:16:8 | 0.5 | rowspan="5" | 64 |6.6 |13.2 |316.0 |13.2 | 1.2 | rowspan="5" | 1.2 | |
January 26, 2016
| GK208-203-B1 | PCIe 2.0 x8, PCIe x1 | 954 | {{n/a}} | {{n/a}} | 900 (1800) | 192:16:8 | 1 | 14.4 | rowspan="2" | DDR3 |7.6 |15.3 |366 |15.3 | | rowspan="2" | 19 | $35–45 |
style="text-align:left;" | GeForce GT 720{{cite web |url=http://www.techpowerup.com/gpudb/1989/geforce-gt-720.html |title=Nvidia GeForce GT 720 {{pipe}} techPowerUp GPU Database |website=Techpowerup.com |access-date=2015-12-11 |archive-url=https://archive.today/20140615091724/http://www.techpowerup.com/gpudb/1989/geforce-gt-720.html |archive-date=2014-06-15 |url-status=live }}
| March 27, 2014 | GK208-201-B1 | rowspan="3" | PCIe 2.0 x8 | 797 | {{n/a}} | {{n/a}} | 900 (1800) | 192:16:8 | 1 | 14.4 |6.4 |12.8 |306 |12.8 | | $49–59 |
rowspan="3" style="text-align:left;" | GeForce GT 730 {{cite web |url=http://www.geforce.com/hardware/desktop-gpus/geforce-gt-730/specifications |title=GT 730 {{pipe}} Specifications |website=GeForce.com |access-date=2015-12-11 |archive-url=https://web.archive.org/web/20151212232644/http://www.geforce.com/hardware/desktop-gpus/geforce-gt-730/specifications |archive-date=2015-12-12 |url-status=live }}{{efn|name=geforce 700 5|The GeForce GT 730 (DDR3, 64-bit) is a rebranded GeForce GT 630 (Rev. 2).}}{{efn|name=geforce 700 6|The GeForce GT 730 (DDR3, 128-bit) is a rebranded GeForce GT 630 (128-bit).}} | rowspan="3" | June 18, 2014 | GK208-301-A1 | 902 | {{n/a}} | {{n/a}} | 900 | rowspan="5" | 2 | 384:16:8 | 1{{cite web|url=https://www.zotac.com/product/graphics_card/gt-730-1gb|title=GeForce GT 730 1GB-ZOTAC|work=ZOTAC|access-date=July 12, 2017|archive-url=https://web.archive.org/web/20160719212714/https://www.zotac.com/product/graphics_card/gt-730-1gb|archive-date=July 19, 2016|url-status=live}} | 14.4 | DDR3 |7.22 |14.44 | rowspan="2" |692.7 | rowspan="2" |28.9 | | 23 | rowspan="3" | $69–79 |
GK208-400-A1
| 902 | {{n/a}} | {{n/a}} | 1250 | 384:16:8 | 40.0 | GDDR5 |7.22 |14.44 | | 25 |
GF108
| TSMC 40 nm | 585 | 116 | PCIe 2.0 x16 | 700 | {{n/a}} | {{n/a}} | 900 | 96:16:4 | rowspan="3" | 1 | 28.8 | rowspan="2" | DDR3 | 128 |2.8 |11.0 |268.8 |33.6 | n/a | 1.1 | 49 |
rowspan="2" style="text-align:left;" | GeForce GT 740{{efn|name=geforce 700 7|The GeForce GT 740 (OEM) is a rebranded GeForce GTX 650.}}
| rowspan="2" | May 29, 2014 | rowspan="2" | GK107-425-A2 | rowspan="2" | 1270 | rowspan="2" | 118 | rowspan="14" | PCIe 3.0 x16 | 993 | {{n/a}} | {{n/a}} | 891 | 384:32:16 | 28.5 | rowspan="5" | 128 |15.9 |31.8 | rowspan="2" |762.6 | rowspan="2" |31.8 | 1.2 | rowspan="14" | 1.2 | rowspan="2" | 64 | rowspan="2" | $89–99 |
993
| {{n/a}} | {{n/a}} | 1252 | 384:32:16 | 80.1 | GDDR5 |15.9 |31.8 | |
style="text-align:left;" | GeForce GTX 745
| rowspan="3" | February 18, 2014 | GM107-220-A2 | rowspan="3" | 1870 | rowspan="3" | 148 | 1033 | {{unk}} | {{unk}} | 900 | 3 | 384:24:16 | 1 | 28.8 | DDR3 |16.5 |24.8 |793.3 |24.8 | rowspan="3" | 1.3 | rowspan="2" | 55 | OEM |
style="text-align:left;" | GeForce GTX 750
| GM107-300-A2 | 1020 | 1085 | 1163 | 1250 | 4 | 512:32:16 | 80 | rowspan="11" | GDDR5 |16.3 |32.6 |1044.5 |32.6 | $119 |
style="text-align:left;" | GeForce GTX 750 Ti
| GM107-400-A2 | 1020 | 1085 | 1200 | 1350 | 5 | 640:40:16 | 1 | 86.4 |16.3 |40.8 |1305.6 |40.8 | 60 | $149 |
style="text-align:left;" | GeForce GTX 760 192-bit
| October 17, 2013 | GK104-200-KD-A2 | rowspan="4" | 3540 | rowspan="4" | 294 | 824 | 888 | 889 | 1450 | rowspan="2" | 6 | 1152:96:24 | 1.5 | 139.2 | 192 |19.8 |79.1 |1896.2 |79.0 | rowspan="9" | 1.2 | 130 | OEM |
style="text-align:left;" | GeForce GTX 760
| June 25, 2013 | GK104-225-A2 | 980 | 1033 | 1124 | 1502 | 1152:96:32 | 2 | 192.3 | rowspan="3" | 256 |31.4{{efn|name=geforce 700 10|As a Kepler GPC is able to rasterize 8 pixels per clock, fully enabled GK110 GPUs (780 Ti/TITAN Black) can only output 40 pixels per clock (5 GPCs), despite 48 ROPs and all SMX units being physically present. For GTX 780 and GTX 760, multiple GPC configurations with differing pixel fillrate are possible, depending on which SMXs were disabled in the chip: 5/4 GPCs, or 4/3 GPCs, respectively.}} |94 |2257.9 |94.1 | rowspan="2" | 170 | $249 ($219) |
style="text-align:left;" | GeForce GTX 760 Ti{{efn|name=geforce 700 8|The GeForce GTX 760 Ti (OEM) is a rebranded GeForce GTX 670.}}
| GK104 | 915 | 980 | 1084 | 1502 | 7 | 1344:112:32 | 2 | 192.3 |29.3 |102.5 |2459.5 |102.5 | OEM |
style="text-align:left;" | GeForce GTX 770
| May 30, 2013 | GK104-425-A2 | 1046 | 1085 | 1130 | 1752.5 | 8 | 1536:128:32 | 2 4 | 224 |33.5 |134 |3213.3 |133.9 | rowspan="5" | 230 | $399 ($329) |
style="text-align:left;" | GeForce GTX 780
| May 23, 2013 | GK110-300-A1 | rowspan="4" | 7080 | rowspan="4" | 561 | 863 | 900 | 1002 | 1502 | 12 | 2304:192:48 | 288.4 | rowspan="4" | 384 |41.4{{efn|name=geforce 700 10|As a Kepler GPC is able to rasterize 8 pixels per clock, fully enabled GK110 GPUs (780 Ti/TITAN Black) can only output 40 pixels per clock (5 GPCs), despite 48 ROPs and all SMX units being physically present. For GTX 780 and GTX 760, multiple GPC configurations with differing pixel fillrate are possible, depending on which SMXs were disabled in the chip: 5/4 GPCs, or 4/3 GPCs, respectively.}} |160.5 |3976.7 |165.7 | $649 ($499) |
style="text-align:left;" | GeForce GTX 780 Ti{{cite web |url=http://www.geforce.com/hardware/desktop-gpus/geforce-gtx-780-ti/specifications |title=GeForce GTX780 Ti. Specifications |website=Geforce.com |access-date=2015-12-11 |archive-url=https://web.archive.org/web/20151212021141/http://www.geforce.com/hardware/desktop-gpus/geforce-gtx-780-ti/specifications |archive-date=2015-12-12 |url-status=live }}{{cite web |url=http://videocardz.com/47508/videocardz-nvidia-geforce-gtx-780-ti-2880-cuda-cores |title=Nvidia GeForce GTX 780 Ti has 2880 CUDA cores |website=Videocardz.com |date=31 October 2013 |access-date=2015-12-11 |archive-url=https://web.archive.org/web/20151222120538/http://videocardz.com/47508/videocardz-nvidia-geforce-gtx-780-ti-2880-cuda-cores |archive-date=2015-12-22 |url-status=live }}{{cite web |url=http://web-engage.augure.com/pub/link/282593/04601926874847631383752919307-hl-com.com.html |title=PNY dévoile son nouveau foudre de guerre: la GeForce GTX 780 TI. |website=Web-engage.augure.com |access-date=2015-12-11 |url-status=dead |archive-url=https://web.archive.org/web/20131109211440/http://web-engage.augure.com/pub/link/282593/04601926874847631383752919307-hl-com.com.html |archive-date=November 9, 2013 }}
| November 7, 2013 | GK110-425-B1 | 876 | 928 | 1019 | 1752.5 | 15 | 2880:240:48 | 3 | 336.5 |42.0{{efn|name=geforce 700 10|As a Kepler GPC is able to rasterize 8 pixels per clock, fully enabled GK110 GPUs (780 Ti/TITAN Black) can only output 40 pixels per clock (5 GPCs), despite 48 ROPs and all SMX units being physically present. For GTX 780 and GTX 760, multiple GPC configurations with differing pixel fillrate are possible, depending on which SMXs were disabled in the chip: 5/4 GPCs, or 4/3 GPCs, respectively.}} |210.2 |5045.7 |210.2 | $699 |
style="text-align:left;" | GeForce GTX TITAN{{cite web |url=http://www.geforce.com/hardware/desktop-gpus/geforce-gtx-titan |title=GeForce GTX TITAN |website=Geforce.com |access-date=2015-12-11 |archive-url=https://web.archive.org/web/20151205173714/http://www.geforce.com/hardware/desktop-gpus/geforce-gtx-titan |archive-date=2015-12-05 |url-status=live }}{{cite web |url=http://www.nvidia.com/titan-graphics-card |title=TITAN Graphics Card |website=Nvidia.com |access-date=2015-12-11 |archive-url=https://web.archive.org/web/20130224082627/http://www.nvidia.com/titan-graphics-card |archive-date=2013-02-24 |url-status=live }}{{cite web |url=http://www.anandtech.com/show/6760/nvidias-geforce-gtx-titan-part-1 |title=Nvidia's GeForce GTX Titan, Part 1: Titan For Gaming, Titan For Compute |website=Anandtech.com |access-date=2015-12-11 |archive-url=https://web.archive.org/web/20151204225432/http://www.anandtech.com/show/6760/nvidias-geforce-gtx-titan-part-1 |archive-date=2015-12-04 |url-status=live}}
| February 21, 2013 | GK110-400-A1 | 837 | 876 | 993 | 1502 | 14 | 2688:224:48 | rowspan="2" | 6 | 288.4 |40.2 |187.5 |4499.7 | rowspan="2" | $999 |
style="text-align:left;" | GeForce GTX TITAN Black
| February 18, 2014 | GK110-430-B1 | 889 | 980 | 1058 | 1752.5 | 15 | 2880:240:48 | 336.5 |42.7 |213.4 |5120.6 |1706.9 |
style="text-align:left;" | GeForce GTX TITAN Z
| May 28, 2014 | 2x 7080 | 2x 561 | 705 | 876 | {{unk}} | 1752.5 | 2x 15 | 2x 2880:240:48 | 2x 6 | 2x 336.5 | 2x 384 |2x 33.8 |2x 169 |5046x2 | 4.5 | 375 | $2999 |
rowspan="2" | Model
! rowspan="2" | Launch ! rowspan="2" | Code name ! rowspan="2" | Fab (nm) ! rowspan="2" | Transistors (million) ! rowspan="2" | Die size (mm2) ! colspan="4" | Clock rate ! rowspan="2" | SMX count ! rowspan="2" | Core config{{efn|name=geforce 700 1|Unified shaders: texture mapping units: render output units}} ! colspan="4" | Memory configuration ! colspan="2" |Fillrate ! colspan="2" |Processing power (GFLOPS){{efn|name=geforce 700 9}} ! colspan="4" | Supported API version ! rowspan="2" | TDP (Watts) ! rowspan="2" | Release Price (USD) |
Base (MHz)
! Average Boost (MHz) ! Max Boost{{efn|name=geforce 700 2|Max Boost depends on ASIC quality. For example, some GTX TITAN with over 80% ASIC quality can hit 1019 MHz by default, lower ASIC quality will be 1006 MHz or 993 MHz.}} (MHz) ! Memory (MHz) ! Size (GiB) ! Bandwidth (GB/s) ! DRAM type ! Bus width (bit) !Pixel (GP/s) !Texture (GT/s) ! Vulkan{{efn|name=geforce 700 11}} ! Direct3D{{efn|name=geforce 700 3}} ! OpenGL ! OpenCL |
{{notelist}}
=GeForce 900 series=
{{Further|GeForce 900 series|Maxwell (microarchitecture)}}
- All models support the following APIs: Direct3D 12_1, OpenGL 4.6, OpenCL 3.0 and Vulkan 1.3{{cite web | url=https://www.khronos.org/conformance/adopters/conformant-products | title=The Khronos Group | date=31 May 2022 | access-date=11 June 2017 | archive-date=28 January 2017 | archive-url=https://web.archive.org/web/20170128195542/https://www.khronos.org/conformance/adopters/conformant-products | url-status=live }} and CUDA 5.2
- Improved NVENC (YUV 4:4:4, predictive lossless encoding).
- Added H.265 (HEVC) hardware support on GM20x
- GM108 does not have NVENC hardware encoder support.
{{Row hover highlight}}
{{notelist|refs=
{{efn|name=CoreConfig|Main shader processors : texture mapping units : render output units (streaming multiprocessors)}}
{{efn|name=PixelFillrate|Pixel fillrate is calculated as the number of ROPs multiplied by the respective core clock speed.}}
{{efn|name=TextureFillrate|Texture fillrate is calculated as the number of TMUs multiplied by the respective core clock speed.}}
{{efn|name=ProcessingPower|To calculate the processing power see Maxwell (microarchitecture)#Performance.}}
{{efn|name=PerfValues|Base clock, Boost clock}}
}}
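As a worked illustration of the fillrate footnotes above, using round hypothetical numbers rather than a specific card: a GPU with 32 ROPs, 64 TMUs and a 1000&nbsp;MHz core clock would have a pixel fillrate of <math>32 \times 1000\,\text{MHz} = 32\ \text{GP/s}</math> and a texture fillrate of <math>64 \times 1000\,\text{MHz} = 64\ \text{GT/s}</math>.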
=GeForce 10 series=
{{Further|GeForce 10 series|Pascal (microarchitecture)}}
- Supported display standards: DP 1.4 (no DSC), HDMI 2.0b, Dual-link DVI{{efn|The NVIDIA TITAN Xp and the Founders Edition GTX 1080 Ti do not have a dual-link DVI port, but a DisplayPort to single-link DVI adapter is included in the box.}}{{cite web|url=http://www.geforce.com/hardware/10series/geforce-gtx-1080|title=GTX 1080 Graphics Card|author1=Nvidia|access-date=May 7, 2016|archive-url=https://web.archive.org/web/20160507083310/http://www.geforce.com/hardware/10series/geforce-gtx-1080|archive-date=May 7, 2016|url-status=live}}
- Supported APIs: Direct3D 12 (12_1), OpenGL 4.6, OpenCL 3.0, Vulkan 1.3 and CUDA 6.1
- Improved NVENC (HEVC Main10, decode 8K30, etc.)
{{Row hover highlight}}
{{notelist|refs=
{{efn|name=CoreConfig|Main shader processors : texture mapping units : render output units (streaming multiprocessors) (graphics processing clusters)}}
{{efn|name=PixelFillrate|Pixel fillrate is calculated as the lowest of three numbers: number of ROPs multiplied by the base core clock speed, number of rasterizers multiplied by the number of fragments they can generate per rasterizer multiplied by the base core clock speed, and the number of streaming multiprocessors multiplied by the number of fragments per clock that they can output multiplied by the base clock rate.}}
{{efn|name=TextureFillrate|Texture fillrate is calculated as the number of TMUs multiplied by the base core clock speed.}}
{{efn|name=ProcessingPower|To calculate the processing power see Pascal (microarchitecture)#Performance.}}
{{efn|name=PerfValues|Base clock, Boost clock}}
}}
=Volta series=
{{Further|Volta (microarchitecture)}}
{{Row hover highlight}}
class="mw-datatable wikitable sortable sort-under" style="font-size:85%; text-align:center;" |
rowspan="2" | Model
! rowspan="2" | Launch ! rowspan="2" | Code name ! rowspan="2" | Process ! rowspan="2" | Transistors (billion) ! rowspan="2" | Die size (mm2) ! colspan="3" | Clock speeds ! rowspan="2" | Core config{{efn|name=CoreConfig}} ! colspan="4" | Memory ! colspan="2" |Fillrate{{efn|name=PerfValues}} ! colspan="4" |Processing power (GFLOPS){{efn|name=PerfValues}} ! rowspan="2" |TDP (Watts) ! rowspan="2" | NVLink Support ! colspan="2" | Release price (USD) |
---|
Base core clock (MHz)
! Boost core clock (MHz) ! Memory (GT/s) ! Size (GiB) ! Bandwidth (GB/s) ! Bus type ! Bus width (bit) ! Pixel (GP/s){{efn|name=PixelFillrate}} ! Texture (GT/s){{efn|name=TextureFillrate}} ! Tensor compute + Single precision ! MSRP ! Founders Edition |
style="text-align:left;" | Nvidia TITAN V{{cite web|url=https://www.nvidia.com/en-us/titan/titan-v|title=Nvidia TITAN V Graphics Card|author1=Nvidia|access-date=December 8, 2017|archive-url=https://web.archive.org/web/20171208093132/https://www.nvidia.com/en-us/titan/titan-v/|archive-date=December 8, 2017|url-status=live}}
| December 7, 2017 | GV100-400-A1 | rowspan="2" | 21.1 | rowspan="2" | 815 | rowspan="2" | PCIe 3.0 x16 | rowspan="2" | 1200 | rowspan="2" | 1455 | rowspan="2" | 1.7 | 5120:320:96:640 | 4.5 | 12 | 652.8 | rowspan="2" |HBM2 | 3072 | rowspan="2" | 153.6 | rowspan="2" | 384.0 | rowspan="2" | 24,576.0 | rowspan="2" | 12,288.0 | rowspan="2" | 6,144.0 | rowspan="2" | 110,592.0 | rowspan="2" | 250 | {{No}} | $2999 | {{N/a}} |
style="text-align:left;" | Nvidia TITAN V CEO Edition{{Cite news|url=https://www.anandtech.com/show/13004/nvidia-limited-edition-32gb-titan-v-ceo-edition|title=NVIDIA Unveils & Gives Away New Limited Edition 32GB Titan V "CEO Edition"|last=Smith|first=Ryan|access-date=2018-08-08|archive-url=https://web.archive.org/web/20180730215157/https://www.anandtech.com/show/13004/nvidia-limited-edition-32gb-titan-v-ceo-edition|archive-date=2018-07-30|url-status=live}}{{Cite news|url=https://www.techpowerup.com/gpudb/3277/titan-v-ceo-edition|title=NVIDIA TITAN V - CEO Edition|work=TechPowerUp|access-date=2018-08-08|language=en}}{{dead link|date=June 2022|bot=medic}}{{cbignore|bot=medic}} | June 21, 2018 | GV-100-???-A1 | 5120:320:128:640 | 6 | 32 | 870.4 | 4096 | {{No}} | colspan="2" {{N/a}} |
{{notelist|refs=
{{efn|name=CoreConfig|Main shader processors : texture mapping units : render output units : tensor cores (streaming multiprocessors) (graphics processing clusters)}}
{{efn|name=PixelFillrate|Pixel fillrate is calculated as the lowest of three numbers: number of ROPs multiplied by the base core clock speed, number of rasterizers multiplied by the number of fragments they can generate per rasterizer multiplied by the base core clock speed, and the number of streaming multiprocessors multiplied by the number of fragments per clock that they can output multiplied by the base clock rate.}}
{{efn|name=TextureFillrate|Texture fillrate is calculated as the number of TMUs multiplied by the base core clock speed.}}
{{efn|name=PerfValues|Base clock, Boost clock}}
}}
=GeForce 16 series=
{{Further|GeForce 16 series|Turing (microarchitecture)}}
- Supported APIs: Direct3D 12 (feature level 12_1), OpenGL 4.6, OpenCL 3.0, Vulkan 1.3 and CUDA 7.5
- NVENC 6th generation (B-frame, etc.)
- TU117 only supports Volta NVENC (5th generation)
{{Row hover highlight}}
class="mw-datatable wikitable sortable sort-under" style="font-size:85%; text-align:center;" |
rowspan="2" | Model
! rowspan="2" | Launch ! rowspan="2" | Code name ! rowspan="2" | Process ! rowspan="2" | Transistors (billion) ! rowspan="2" | Die size (mm2) ! colspan="3" | Clock speeds ! rowspan="2" | Core config{{efn|name=CoreConfig}} ! rowspan="2" | L2 Cache (MiB) ! colspan="4" | Memory ! colspan="2" | Fillrate{{efn|name=PerfValues}} ! colspan="3" |Processing power (GFLOPS){{efn|name=PerfValues}} ! rowspan="2" | TDP (Watts) ! rowspan="2" | NVLink support ! rowspan="2" | Release price (USD) |
---|
Base core clock (MHz)
! Boost core clock (MHz) ! Memory (GT/s) ! Size (GiB) ! Bandwidth (GB/s) ! Bus type ! Bus width (bit) ! Pixel (GP/s){{efn|name=PixelFillrate}} ! Texture (GT/s){{efn|name=TextureFillrate}} ! Half precision ! Single precision ! Double precision |
style="text-align:left;" |GeForce GTX 1630
| TU117-150-A1 | rowspan="3" |4.7 | rowspan="3" |200 | rowspan="8" |PCIe 3.0 x16 | 1740 | 1785 | 12 | 512:32:16:1024:0 | rowspan="4" |1 | rowspan="5" |4 | 96 | GDDR6 | 64 | 27.84 | 55.68 | 3,563.52 | 1,781.76 | 55.68 | rowspan="3" |75 | rowspan="8" {{No}} | {{Dunno}} |
rowspan="3" style="text-align:left;" |GeForce GTX 1650{{cite web|url=https://www.nvidia.com/en-us/geforce/graphics-cards/gtx-1650/|title=NVIDIA GeForce GTX 1650 Graphics Card|website=NVIDIA|access-date=2019-04-23|archive-url=https://web.archive.org/web/20190423131601/https://www.nvidia.com/en-us/geforce/graphics-cards/gtx-1650/|archive-date=2019-04-23|url-status=live}}
| April 23, 2019 | rowspan="2" |TU117-300-A1 | 1485 | 1665 | 8 | rowspan="3" |896:56:32:1792:0 | 128 | GDDR5 | rowspan="4" |128 | 47.52 | 83.16 | 5,322.00 | 2,661.00 | 83.16 | rowspan="3" |$149 |
April 3, 2020{{cite web|url=https://www.anandtech.com/show/15701/nvidias-geforce-gtx-1650-gddr6-released-gddr5-price-parity|title=NVIDIA's GeForce GTX 1650 GDDR6 Released: GDDR6 Reaching Price Parity With GDDR5|website=Anandtech|access-date=2020-04-06|archive-date=7 December 2023|archive-url=https://web.archive.org/web/20231207025426/http://www4.anandtech.com/show/15701/nvidias-geforce-gtx-1650-gddr6-released-gddr5-price-parity|url-status=live}}
| rowspan="2" |1410 | rowspan="2" |1590 | rowspan="3" |12 | rowspan="4" |192 | rowspan="3" |GDDR6 | rowspan="2" | 45.12 | rowspan="2" | 78.96 | rowspan="2" | 5,053.44 | rowspan="2" | 2,526.72 | rowspan="2" | 78.96 |
June 18, 2020{{cite web|title=NVIDIA GeForce GTX 1650 TU106 Specs|url=https://www.techpowerup.com/gpu-specs/geforce-gtx-1650-tu106.c3585|access-date=2021-12-14|website=TechPowerUp|language=en|archive-date=23 November 2020|archive-url=https://web.archive.org/web/20201123174607/https://www.techpowerup.com/gpu-specs/geforce-gtx-1650-tu106.c3585|url-status=live}}
| TU106-125-A1 | 10.8 | 445 | 90 |
style="text-align:left;" |GeForce GTX 1650 Super{{cite web|url=https://www.nvidia.com/en-gb/geforce/graphics-cards/gtx-1650-super/|title=NVIDIA GeForce GTX 1650 SUPER Graphics Card|website=NVIDIA|access-date=2019-10-29|archive-date=29 October 2019|archive-url=https://web.archive.org/web/20191029131835/https://www.nvidia.com/en-gb/geforce/graphics-cards/gtx-1650-super/|url-status=live}}
| rowspan="4" |6.6 | rowspan="4" |284 | rowspan="3" |1530 | 1725 | 1280:80:32:2560:0 | rowspan="4" |1.5 | 48.96 | 122.40 | 7,833.60 | 3,916.80 | 122.40 | 100 | $159 |
style="text-align:left;" |GeForce GTX 1660{{cite web|url=https://www.nvidia.com/en-us/geforce/graphics-cards/gtx-1660-ti/|title=The GeForce 16 Series Graphics Cards are Here|website=NVIDIA|language=en-us|access-date=2019-03-23|archive-url=https://web.archive.org/web/20190325235255/https://www.nvidia.com/en-us/geforce/graphics-cards/gtx-1660-ti/|archive-date=2019-03-25|url-status=live}}
| March 14, 2019 | TU116-300-A1 | rowspan="2" |1785 | 8 | rowspan="2" |1408:88:48:2816:0 | rowspan="3" |6 | GDDR5 | rowspan="3" |192 | rowspan="2" |73.44 | rowspan="2" |134.64 | rowspan="2" |8,616.00 | rowspan="2" |4,308.00 | rowspan="2" |134.64 | 120 | $219 |
style="text-align:left;" |GeForce GTX 1660 Super{{cite web|url=https://www.nvidia.com/en-gb/geforce/graphics-cards/gtx-1660-super/|title=NVIDIA GeForce GTX 1660 SUPER Graphics Card|website=NVIDIA|access-date=2019-10-29|archive-date=29 October 2019|archive-url=https://web.archive.org/web/20191029131837/https://www.nvidia.com/en-gb/geforce/graphics-cards/gtx-1660-super/|url-status=live}}
| October 29, 2019 | TU116-300-A1 | 14 | 336 | rowspan="2" |GDDR6 | 125 | $229 |
style="text-align:left;" |GeForce GTX 1660 Ti{{cite web|url=https://www.nvidia.com/en-us/geforce/graphics-cards/gtx-1660-ti/|title=NVIDIA GeForce RTX 1660 Graphics Card|website=NVIDIA|access-date=2019-02-22|archive-url=https://web.archive.org/web/20190222141328/https://www.nvidia.com/en-us/geforce/graphics-cards/gtx-1660-ti/|archive-date=2019-02-22|url-status=live}}
| February 21, 2019 | TU116-400-A1 | 1500 | 1770 | 12 | 1536:96:48:3072:0 | 288 | 72.0 | 144.0 | 9,216.00 | 4,608.00 | 144.00 | 120 | $279 |
=RTX 20 series=
{{Further|GeForce 20 series|Turing (microarchitecture)}}
- Supported APIs: Direct3D 12 Ultimate (12_2), OpenGL 4.6, OpenCL 3.0, Vulkan 1.3 and CUDA 7.5
- Unlike previous generations, the non-Super Founders Edition cards (RTX 2070, RTX 2080, RTX 2080 Ti) no longer use reference clocks but are factory-overclocked. The Super Founders Edition cards (RTX 2060 Super, RTX 2070 Super, and RTX 2080 Super) use reference clocks.
- NVENC 6th generation (B-frame, etc.)
{{Row hover highlight}}
{{notelist|refs=
{{efn|name=CoreConfig|Main shader processors : texture mapping units : render output units : tensor cores (or FP16 cores in GeForce 16 series) : ray-tracing cores (streaming multiprocessors) (graphics processing clusters)}}
{{efn|name=PixelFillrate|Pixel fillrate is calculated as the lowest of three numbers: number of ROPs multiplied by the base core clock speed, number of rasterizers multiplied by the number of fragments they can generate per rasterizer multiplied by the base core clock speed, and the number of streaming multiprocessors multiplied by the number of fragments per clock that they can output multiplied by the base clock rate.}}
{{efn|name=TextureFillrate|Texture fillrate is calculated as the number of TMUs multiplied by the base core clock speed.}}
{{efn|name=PerfValues|Base clock, Boost clock}}
{{efn|name=TitanRTXBoost|Boost of the Founders Editions, as there is no reference version of this card.}}
}}
= RTX 30 series =
{{Further|GeForce 30 series|Ampere (microarchitecture)}}
- Supported APIs: Direct3D 12 Ultimate (12_2), OpenGL 4.6, OpenCL 3.0, Vulkan 1.3 and CUDA 8.6
- Supported display connections: HDMI 2.1, DisplayPort 1.4a
- NVENC 7th generation
- Tensor core 3rd gen
- RT Core 2nd gen
- RTX IO
- Improved NVDEC with AV1 decode
- NVIDIA DLSS 2.0
{{Row hover highlight}}
class="mw-datatable wikitable sortable sort-under" style="font-size:85%; text-align:center;" |
rowspan="2" | Model
! rowspan="2" | Launch ! rowspan="2" | Code name ! rowspan="2" | Process ! rowspan="2" | Transistors (billion) ! rowspan="2" | Die size (mm2) ! colspan="3" | Clock speeds ! rowspan="2" | Core config{{efn|name=CoreConfig}} ! colspan="4" | Memory ! colspan="2" | Fillrate ! colspan="4" |Processing power (TFLOPS) ! rowspan="2" | Ray-tracing Performance (TFLOPS) ! rowspan="2" | TDP (Watts) ! rowspan="2" | NVLink support ! colspan="2" | Release price (USD) |
---|
Base core clock (MHz)
! Boost core clock (MHz) ! Memory (GT/s) ! Size (GiB) ! Bandwidth (GB/s) ! Bus type ! Bus width (bit) ! Pixel (GP/s) ! Texture (GT/s) ! Half precision ! Single precision ! Double precision ! Tensor compute (FP16) (2:1 sparse) ! MSRP ! Founders Edition |
style="text-align:left;" rowspan="4" |GeForce RTX 3050{{cite web |url=https://www.nvidia.com/fi-fi/geforce/graphics-cards/30-series/rtx-3050/ |access-date=2022-01-04 |title=NVIDIA GeForce RTX 3050 Graphics Card Announcement |website=NVIDIA |language=fi |archive-date=29 March 2024 |archive-url=https://web.archive.org/web/20240329045731/https://www.nvidia.com/fi-fi/geforce/graphics-cards/30-series/rtx-3050/ |url-status=live }}
|February 2, 2024 |GA107-325 | rowspan="2" | 8.7 | rowspan="2" | 200 | rowspan="4" | PCIe 4.0 | rowspan="1" | 1042 | rowspan="1" | 1470 | rowspan="4" | 14 | rowspan="1" | 2304:72:32:72:18 | rowspan="4" | 2 | rowspan="1" | 6 | rowspan="1" | 168.0 | rowspan="9" |GDDR6 | rowspan="1" | 96 | rowspan="1" | 33.34 | rowspan="1" | 75.02 | rowspan="1" | 4.802 | rowspan="1" | 4.802 | rowspan="1" | 0.075 | rowspan="1" | 30.1 | rowspan="1" | | 70 | rowspan="15" {{No}} | rowspan="1" | $169 | rowspan="8" {{N/a}} |
December 16, 2022
|GA107-150-A1 | rowspan="1" | 1552 | rowspan="1" | 1777 | rowspan="1" | 2560:80:32:80:20 | rowspan="4" | 8 | rowspan="3" | 224.0 | rowspan="4" | 128 | rowspan="1" | 49.6 | rowspan="1" | 124.2 | rowspan="1" | 7.95 | rowspan="1" | 7.95 | rowspan="1" | 0.124 | rowspan="1" | 63.6 | rowspan="1" | 18.2 | 115 | rowspan="1" | $249 |
July 18, 2022
|GA106-150 | rowspan="5" | 13.25 | rowspan="5" | 276 | rowspan="1" | 1515 | rowspan="1" | 1755 | rowspan="1" | 2304:72:32:72:18 | rowspan="1" | 48.48 | rowspan="1" | 109.1 | rowspan="1" | 6.981 | rowspan="1" | 6.981 | rowspan="1" | 0.109 | rowspan="1" | | rowspan="1" | | rowspan="2" | 130 |
January 27, 2022{{cite web |url=https://mugens-reviews.de/builds/pc/neueste-nvidia-grafikkarten/ |access-date=2022-01-06 |title=NVIDIA Announces the GeForce RTX 30 Series |website=Mugens-Reviews |language=de |archive-date=23 February 2024 |archive-url=https://web.archive.org/web/20240223155146/https://mugens-reviews.de/builds/pc/neueste-nvidia-grafikkarten/ |url-status=live }}
| GA106-150-A1 | rowspan="1" | 1552 | rowspan="1" | 1777 | rowspan="1" | 2560:80:32:80:20 | rowspan="1" | 49.6 | rowspan="1" | 124.2 | rowspan="1" | 7.95 | rowspan="1" | 7.95 | rowspan="1" | 0.124 | rowspan="1" | 63.6 | rowspan="1" | 18.2 | rowspan="1" | $249 |
style="text-align:left;" rowspan="4" |GeForce RTX 3060{{cite web |url=https://www.nvidia.com/en-us/geforce/news/geforce-rtx-3060/ |access-date=2021-01-12 |title=NVIDIA GeForce RTX 3060 Graphics Card Announcement |website=NVIDIA |archive-date=26 February 2022 |archive-url=https://web.archive.org/web/20220226224801/https://www.nvidia.com/en-us/geforce/news/geforce-rtx-3060/ |url-status=live }}
| rowspan="2" |GA106-302 | rowspan="13" |PCIe 4.0 | rowspan="4" |1320 | rowspan="4" |1777 | rowspan="4" |15 | rowspan="4" |3584:112:48:112:28 | rowspan="4" |3 | 240.0 | rowspan="4" |63.4 | rowspan="4" |147.8 | rowspan="4" |9.46 | rowspan="4" |9.46 | rowspan="4" |0.148 | rowspan="4" |75.7 | rowspan="4" |25 | rowspan="4" |170 | rowspan="4" |$329 |
| May 2021 | rowspan="3" |12 | rowspan="3" |360.0 | rowspan="3" |192 |
| February 25, 2021 |GA106-300-A1 |
September 1, 2021
| rowspan="5" |17.4 | rowspan="5" |392.5 |
! rowspan="2" style="text-align:left;" |GeForce RTX 3060 Ti{{cite web |url=https://www.nvidia.com/en-us/geforce/graphics-cards/30-series/rtx-3060-ti/ |access-date=2020-12-01 |title=NVIDIA GeForce RTX 3060 Ti Graphics Card |archive-date=12 January 2021 |archive-url=https://web.archive.org/web/20210112132428/https://www.nvidia.com/en-us/geforce/graphics-cards/30-series/rtx-3060-ti/ |url-status=live }} | December 2, 2020 | GA104-200-A1 | rowspan="2" | 1410 | rowspan="2" | 1665 |14 | rowspan="2" | 4864:152:80:152:38 | rowspan="4" |4 | rowspan="4" |8 |448.0 | rowspan="4" |256 | rowspan="2" | 112.8 | rowspan="2" | 214.3 | rowspan="2" | 13.70 | rowspan="2" | 13.72 | rowspan="2" | 0.214 | rowspan="2" | 109.7 | rowspan="2" | 32.4 | rowspan="2" | 200 | colspan="2" rowspan="2" | $399 |
October 27, 2022
|GA104-202 |19 |608.0 |
! style="text-align:left;" |GeForce RTX 3070{{cite web |url=https://www.nvidia.com/en-gb/geforce/graphics-cards/30-series/rtx-3070/ |access-date=2020-09-06 |title=NVIDIA GeForce RTX 3070 Graphics Card |website=NVIDIA |archive-date=14 May 2021 |archive-url=https://web.archive.org/web/20210514170715/https://www.nvidia.com/en-gb/geforce/graphics-cards/30-series/rtx-3070/ |url-status=live }}{{cite web|last=Smith|first=Ryan|date=September 1, 2020|title=NVIDIA Announces the GeForce RTX 30 Series: Ampere For Gaming, Starting With RTX 3080 & RTX 3090|url=https://www.anandtech.com/show/16057/nvidia-announces-the-geforce-rtx-30-series-ampere-for-gaming-starting-with-rtx-3080-rtx-3090|access-date=2020-09-02|website=AnandTech|archive-date=12 January 2022|archive-url=https://web.archive.org/web/20220112113553/https://www.anandtech.com/show/16057/nvidia-announces-the-geforce-rtx-30-series-ampere-for-gaming-starting-with-rtx-3080-rtx-3090|url-status=live}} | GA104-300-A1 | 1500 | 1725 |14 | 5888:184:96:184:46 |448.0 | 144.0 | 276.0 | 17.66 | 17.66 | 0.276 | 141.31 | 40.6 |220 | colspan="2" | $499 |
! style="text-align:left;" |GeForce RTX 3070 Ti{{cite web |url=https://www.nvidia.com/en-us/geforce/graphics-cards/30-series/rtx-3070-3070ti/ |access-date=2021-06-02 |title=NVIDIA GeForce RTX 3070 Family |website=NVIDIA |archive-date=26 February 2022 |archive-url=https://web.archive.org/web/20220226223558/https://www.nvidia.com/en-us/geforce/graphics-cards/30-series/rtx-3070-3070ti/ |url-status=live }} | June 10, 2021 | GA104-400-A1 | 1575 | 1770 | rowspan="4" |19 | 6144:192:96:192:48 | 608.3 | rowspan="6" |GDDR6X | 151.18 | 302.36 | 19.35 | 19.35 | 0.302 | 154.8 | 43.5 | 290 | colspan=2 | $599 |
! rowspan="2" style="text-align:left;" |GeForce RTX 3080{{cite web |url=https://www.nvidia.com/en-gb/geforce/graphics-cards/30-series/rtx-3080/ |access-date=2020-09-06 |title=NVIDIA GeForce RTX 3080 Graphics Card |website=NVIDIA |archive-date=19 May 2021 |archive-url=https://web.archive.org/web/20210519061929/https://www.nvidia.com/en-gb/geforce/graphics-cards/30-series/rtx-3080/ |url-status=live }} | September 17, 2020 | GA102-200-A1 | rowspan="5" |28.3 | rowspan="5" |628.4 | 1440 | rowspan="2" | 1710 | 8704:272:96:272:68 | 5 | 10 | 760.0 | 320 | 138.2 | 391.68 | 25.06 | 25.07 | 0.392 | 200.54 | 59.5 | 320 | colspan=2 | $699 |
January 27, 2022
| GA102-220-A1 | 1260 | 8960:280:96:280:70 | rowspan="4" |6 | rowspan="2" |12 | rowspan="2" |912.0 | rowspan="4" |384 | 131.0 | 352.8 | 22.6 | 22.6 | 0.353 | 180.6 | 61.3 | rowspan="3" | 350 | colspan=2 | $799 |
! style="text-align:left;" |GeForce RTX 3080 Ti{{cite web |url=https://www.nvidia.com/en-us/geforce/graphics-cards/30-series/rtx-3080-3080ti/ |access-date=2021-06-02 |title=NVIDIA GeForce RTX 3080 Family of Graphics Card |website=NVIDIA |archive-date=1 March 2022 |archive-url=https://web.archive.org/web/20220301194515/https://www.nvidia.com/en-us/geforce/graphics-cards/30-series/rtx-3080-3080ti/ |url-status=live }} | June 3, 2021 | GA102-225-A1 | 1365 | 1665 | 10240:320:112:320:80 | 153.5 | 438.5 | 28.06 | 28.57 | 0.438 | 228.6 | 68.2 | colspan=2 | $1199 |
! style="text-align:left;" |GeForce RTX 3090{{cite web |url=https://www.nvidia.com/en-gb/geforce/graphics-cards/30-series/rtx-3090/ |access-date=2020-09-06 |title=NVIDIA GeForce RTX 3090 Graphics Card |website=NVIDIA |archive-date=26 February 2022 |archive-url=https://web.archive.org/web/20220226220435/https://www.nvidia.com/en-gb/geforce/graphics-cards/30-series/rtx-3090/ |url-status=live }} | September 24, 2020 | GA102-300-A1 | 1395 | 1695 | 19.5 | 10496:328:112:328:82 | rowspan="2" |24 | 935.8 | 156.2 | 457.6 | 29.38 | 29.28 | 0.459 | 235.08 | 71.1 | rowspan="2" |2-way NVLink | colspan=2 | $1499 |
! style="text-align:left;" |GeForce RTX 3090 Ti{{cite web |title=GeForce RTX 3090 Ti Is Here: The Fastest GeForce GPU For The Most Demanding Creators & Gamers |url=https://www.nvidia.com/en-us/geforce/news/geforce-rtx-3090-ti-out-now/ |access-date=2022-04-02 |website=NVIDIA |language=en-us |archive-date=18 April 2024 |archive-url=https://web.archive.org/web/20240418002055/https://www.nvidia.com/en-us/geforce/news/geforce-rtx-3090-ti-out-now/ |url-status=live }}{{cite web |title=NVIDIA GeForce RTX 3090 Ti Specs |url=https://www.techpowerup.com/gpu-specs/geforce-rtx-3090-ti.c3829 |access-date=2022-04-02 |website=TechPowerUp |language=en |archive-date=23 January 2023 |archive-url=https://web.archive.org/web/20230123200947/https://www.techpowerup.com/gpu-specs/geforce-rtx-3090-ti.c3829 |url-status=live }} | March 29, 2022 | GA102-350-A1 | 1560 | 1860 | 21 | 10752:336:112:336:84 | 1008.3 | 174.7 | 524.2 | 33.5 | 33.5 | 0.524 | 269.1 | 79.9 | 450 | colspan=2 | $1999 |
{{notelist|refs=
{{efn|name=CoreConfig|Main shader processors : texture mapping unit : render output units : tensor cores : ray-tracing cores (streaming multiprocessors) (graphics processing clusters)}}
}}
= RTX 40 series =
{{Further|GeForce 40 series|Ada Lovelace (microarchitecture)}}
- Supported APIs: Direct3D 12 Ultimate (12_2), OpenGL 4.6, OpenCL 3.0, Vulkan 1.3 and CUDA 8.9{{Cite web |title=CUDA C++ Programming Guide |url=https://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html |access-date=2022-09-20 |website=docs.nvidia.com |language=en-us |archive-date=3 May 2021 |archive-url=https://web.archive.org/web/20210503160950/https://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html |url-status=live }}
- Supported display connections: HDMI 2.1, DisplayPort 1.4a
- Tensor core 4th gen
- RT core 3rd gen
- NVIDIA DLSS 3
- NVIDIA DLSS 3.5
- Shader Execution Reordering
- Dual NVENC with 8K 10-bit 60FPS AV1 fixed function hardware encoding{{Cite web |title=Creativity At The Speed of Light: GeForce RTX 40 Series Graphics Cards Unleash Up To 2X Performance in 3D Rendering, AI, and Video Exports For Gamers and Creators |url=https://www.nvidia.com/en-us/geforce/news/rtx-40-series-and-studio-updates-for-content-creation/ |website=NVIDIA |access-date=22 May 2023 |archive-date=1 June 2023 |archive-url=https://web.archive.org/web/20230601124541/https://www.nvidia.com/en-us/geforce/news/rtx-40-series-and-studio-updates-for-content-creation/ |url-status=live }}{{cite web |title=Nvidia Video Codec SDK |url=https://developer.nvidia.com/nvidia-video-codec-sdk |date=23 August 2013 |access-date=22 May 2023 |archive-date=17 March 2023 |archive-url=https://web.archive.org/web/20230317082326/https://developer.nvidia.com/nvidia-video-codec-sdk |url-status=live }}
- Opacity Micro-Maps (OMM)
- Displacement Micro-Meshes (DMM)
- No NVLink support, Multi-GPU over PCIe 5.0{{Cite web |author1=Chuong Nguyen |date=2022-09-21 |title=Nvidia kills off NVLink on RTX 4090 |url=https://www.windowscentral.com/hardware/computers-desktops/nvidia-kills-off-nvlink-on-rtx-4090 |access-date=2023-01-01 |website=Windows Central |language=en |archive-date=24 April 2024 |archive-url=https://web.archive.org/web/20240424172523/https://www.windowscentral.com/hardware/computers-desktops/nvidia-kills-off-nvlink-on-rtx-4090 |url-status=live }}{{Cite news |title=Jensen Confirms: NVLink Support in Ada Lovelace is Gone |url=https://www.techpowerup.com/299107/jensen-confirms-nvlink-support-in-ada-lovelace-is-gone |work=TechPowerUp |language=en-US |date=September 21, 2022 |access-date=November 21, 2022 |archive-date=18 October 2022 |archive-url=https://web.archive.org/web/20221018020318/https://www.techpowerup.com/299107/jensen-confirms-nvlink-support-in-ada-lovelace-is-gone |url-status=live }}
{{Row hover highlight}}
{{notelist|refs=
{{efn|name="CoreConfig"|Main shader processors : texture mapping unit : render output units : tensor cores : ray-tracing cores (streaming multiprocessors) (graphics processing clusters)}}
}}
=RTX 50 series=
{{Further|GeForce 50 series|Blackwell (microarchitecture)}}
GeForce RTX 50 series desktop GPUs are the first consumer GPUs to utilize a PCIe 5.0 interface and GDDR7 video memory.
{{Row hover highlight}}
class="mw-datatable wikitable sortable sort-under" style="font-size:85%; text-align:center;"
! rowspan="2" | Model ! rowspan="2" | Launch ! rowspan="2" | Code name ! rowspan="2" | Process ! rowspan="2" | Transistors (billion) ! rowspan="2" | Die size (mm2) ! colspan="3" | Clock speeds ! rowspan="2" | Core config{{efn|name="CoreConfig"}} ! rowspan="2" | L2 Cache (MiB) ! colspan="4" | Memory ! colspan="2" | Fillrate ! colspan="5" |Processing power (TFLOPS) ! rowspan="2" | Ray-tracing Performance (TFLOPS) ! rowspan="2" | TDP (Watts) ! colspan="2" | Release price (USD) |
Base core clock (MHz)
! Boost core clock (MHz) ! Memory (GT/s) ! Size (GiB) ! Bandwidth (GB/s) ! Bus type ! Bus width (bit) ! Pixel (GP/s) ! Texture (GT/s) ! Half precision ! Single precision ! Double precision ! Tensor compute FP16 (2:1 sparse) ! Tensor compute FP4 (2:1 sparse) ! MSRP ! Founders Edition |
---|
style="text-align:left;" | GeForce RTX 5060 |May, 2025 |GB206-250 | rowspan="3" |21.9 | rowspan="3" |181{{nbsp}}mm2 | rowspan="3" |PCIe 5.0 x8 |2280 |2497 | rowspan="5" |28 |3840:120:48:120:30 | rowspan="3" |32 | rowspan="2" |8 | rowspan="3" |448 | rowspan="7" |GDDR7 | rowspan="3" |128 |119.9 |299.6 |19.18 |19.18 |0.1996 |77 (154) |307 (614) |58 |145 W |$299 |N/A |
rowspan="2" style="text-align:left;" |GeForce RTX 5060 Ti | rowspan="2" |April 16, 2025 | rowspan="2" |GB206-300 | rowspan="2" |2407 | rowspan="2" |2572 | rowspan="2" |4608:144:48:144:36 | rowspan="2" |123.5 | rowspan="2" |370.4 | rowspan="2" |23.7 | rowspan="2" |23.7 | rowspan="2" |0.370 | rowspan="2" |95 (190) | rowspan="2" |380 (760) | rowspan="2" |72 | rowspan="2" |180 W |$379 |N/A |
16
|$429 |N/A |
style="text-align:left;" | GeForce RTX 5070 | rowspan="1" | {{dts|2025|March|5|format=mdy|abbr=on}} | GB205-300 | 31.1 | 263{{nbsp}}mm2 | rowspan="4" |PCIe 5.0 x16 | 2160 | 2512 | 6144:192:80:192:48 | 48 | 12 | 672 | 192 | 201 | 483.8 | 30.97 | 30.97 | 0.4838 |123 (247) |494 (988) |94 | 250{{nbsp}}W | colspan="2" |$549 |
style="text-align:left;" | GeForce RTX 5070 Ti | rowspan="1" | {{dts|2025|February|20|format=mdy|abbr=on}} | GB203-300 | rowspan="2" |45.6 | rowspan="2" | 378{{nbsp}}mm2 | 2300 | 2452 | 8960:280:96:280:70 | rowspan="2" | 64 | rowspan="2" | 16 | 896 | rowspan="2" | 256 | 235.4 | 693.0 | 44.35 | 44.35 | 0.693 |176 (352) |703 (1406) |133 | 300{{nbsp}}W |$749 |N/A |
style="text-align:left;" | GeForce RTX 5080{{Cite web |title=NVIDIA GeForce RTX 5080 Graphics Cards |url=https://www.nvidia.com/en-us/geforce/graphics-cards/50-series/rtx-5080/ |access-date=2025-01-14 |website=NVIDIA |language=en-us}} | rowspan="2" | {{dts|2025|January|30|format=mdy|abbr=on}} | GB203-400 | 2300 | 2617 | 30 | 10752:336:112:336:84 | 960 | 293.1 | 879.3 | 56.28 | 56.28 | 0.8793 |225 (450) |900 (1801) |171 | 360{{nbsp}}W |colspan="2" |$999 |
style="text-align:left;" | GeForce RTX 5090{{Cite web |title=NVIDIA GeForce RTX 5090 Graphics Cards |url=https://www.nvidia.com/en-us/geforce/graphics-cards/50-series/rtx-5090/ |access-date=2025-01-14 |website=NVIDIA |language=en-us}} | GB202-300 | 92.2 | 750{{nbsp}}mm2 | 2010 | 2407 | 28 | 21760:680:176:680:170 | 96 | 32 | 1,792 | 512 | 423.6 | 1637 | 104.8 | 104.8 | 1.637 |419 (838) |1676 (3352) | 318 | 575{{nbsp}}W |colspan="2" |$1,999 |
{{notelist|refs=
{{efn|name="CoreConfig"|Main shader processors : texture mapping unit : render output units : tensor cores : ray-tracing cores (streaming multiprocessors) (graphics processing clusters)}}
}}
Mobile GPUs
Mobile GPUs are either soldered to the mainboard or mounted on a Mobile PCI Express Module (MXM).
=GeForce2 Go series=
- All models are manufactured on a 180 nm process
- All models support Direct3D 7.0 and OpenGL 1.2
- Celsius (microarchitecture)
{{Row hover highlight}}
class="mw-datatable wikitable sortable sort-under" style="font-size:85%; text-align:center;" |
rowspan=2|Model
!rowspan=2|Launch !rowspan=2|Code name !rowspan=2|Bus interface !rowspan=2|Core clock (MHz) !rowspan=2|Memory clock (MHz) !rowspan=2|Core config{{efn|name=GF2GCoreConfig}} !colspan=4|Memory ! colspan="4" |Fillrate |
---|
Size (MiB)
!Bandwidth (GB/s) !Bus type !Bus width (bit) !MOperations/s !MPixels/s !MTexels/s !MVertices/s |
style="text-align:left"|GeForce2 Go 100
|February 6, 2001 | rowspan="3" |NV11M | rowspan="3" |AGP 4x |125 |332 | rowspan="3" |2:0:4:2 |8, 16 |1.328 |DDR |32 |250 |250 |500 | rowspan="3" |0 |
style="text-align:left"|GeForce2 Go
|November 11, 2000 | rowspan="2" |143 |166 | rowspan="2" |16, 32 | rowspan="2" |2.656 |SDR |128 | rowspan="2" |286 | rowspan="2" |286 | rowspan="2" |572 |
style="text-align:left"|GeForce2 Go 200
|February 6, 2001 |332 |DDR |64 |
{{notelist|refs=
{{efn|name=GF2GCoreConfig|Pixel shaders: vertex shaders: texture mapping units: render output units}}
}}
=GeForce4 Go series=
- All models are made on a 150 nm fabrication process
{{Row hover highlight}}
class="mw-datatable wikitable sortable sort-under" style="font-size:85%; text-align:center;" |
rowspan=2|Model
!rowspan=2|Launch !rowspan=2|Code name !rowspan=2|Bus interface !rowspan=2|Core clock (MHz) !rowspan=2|Memory clock (MHz) !rowspan=2|Core config{{efn|name=GF2GCoreConfig}} !colspan=4|Memory ! colspan="4" |Fillrate ! colspan="2" |API support |
---|
Size (MiB)
!Bandwidth (GB/s) !Bus type !Bus width (bit) !MOperations/s !MPixels/s !MTexels/s !MVertices/s !Direct3D !OpenGL |
style="text-align:left"|GeForce4 Go 410
| rowspan="3" |February 6, 2002 | rowspan="4" |NV17M | rowspan="6" |AGP 8x | rowspan="2" |200 |200 | rowspan="5" |2:0:4:2 |16 |1.6 |SDR | rowspan="2" |64 | rowspan="2" |400 | rowspan="2" |400 | rowspan="2" |800 | rowspan="5" |0 |8.0a | rowspan="6" |1.3 |
style="text-align:left"|GeForce4 Go 420
|400 |32 |3.2 | rowspan="5" |DDR | |
style="text-align:left"|GeForce4 Go 440
|220 |440 | rowspan="4" |64 |7.04 | rowspan="4" |128 |440 |440 |880 | |
style="text-align:left"|GeForce4 Go 460
|October 14, 2002 |250 |500 |8 |500 |500 |1000 | |
style="text-align:left"|GeForce4 Go 488
| |NV18M |300 |550 |8.8 |600 |600 |1200 | |
style="text-align:left"|GeForce4 Go 4200
|November 14, 2002 |NV28M |200 |400 |4:2:8:4 |6.4 |800 |800 |1600 |100 | |
{{notelist|refs=
{{efn|name=GF2GCoreConfig|Pixel shaders: vertex shaders: texture mapping units: render output units}}
}}
=GeForce FX Go 5 (Go 5xxx) series=
The GeForce FX Go 5 series is a series of graphics processors for notebooks.
- 1 Vertex shaders: pixel shaders: texture mapping units: render output units
- * The GeForce FX series runs vertex shaders in an array
- ** The GeForce FX series has limited OpenGL 2.1 support (with the last Windows XP driver released for it, 175.19).
- Rankine (microarchitecture)
{{Row hover highlight}}
class="mw-datatable wikitable sortable sort-under" style="font-size:85%; text-align:center;" |
rowspan=3 | Model
! rowspan=3 | Launch ! rowspan=3 | Code name ! rowspan=3 | Fab (nm) ! rowspan=3 | Bus interface ! rowspan=3 | Core clock (MHz) ! rowspan=3 | Memory clock (MHz) ! rowspan=3 | Core config1 ! colspan=4 | Memory ! colspan="2" |Fillrate ! colspan="3" | Supported API version ! rowspan=3 | TDP (Watts) |
---|
rowspan=2 | Size (MiB)
! rowspan=2 | Bandwidth (GB/s) ! rowspan=2 | Bus type ! rowspan=2 | Bus width (bit) ! rowspan="2" |Pixel (GP/s) ! rowspan="2" |Texture (GT/s) ! Direct3D ! colspan="2" | OpenGL |
! Hardware
! Drivers (Software) |
style="text-align:left;" | GeForce FX Go 5100*
| rowspan="4" | March 2003 | rowspan="2" | NV34M | rowspan="2" | 150 | rowspan="5" | AGP 8x | 200 | 400 | rowspan="4" | 4:2:4:4 | 64 | 3.2 | rowspan="5" | DDR | 64 | 0.8 | 0.8 | rowspan="5" | 9.0 | rowspan="5" | 1.5 | rowspan="5" | 2.1** | {{unk}} |
style="text-align:left;" | GeForce FX Go 5500*
| 300 | rowspan="3" | 600 | 32 | rowspan="3" | 9.6 | rowspan="4" | 128 |1.2 |1.2 | {{unk}} |
style="text-align:left;" | GeForce FX Go 5600*
| rowspan="2" | NV31M | rowspan="3" | 130 | 350 | rowspan="3" | 32 | rowspan="2" |1.4 | rowspan="2" |1.4 | {{unk}} |
style="text-align:left;" | GeForce FX Go 5650*
| 350 | {{unk}} |
style="text-align:left;" | GeForce FX Go 5700*
| February 1, 2005 | NV36M | 450 | 550 | 4:3:4:4 | 8.8 |1.8 |1.8 | {{unk}} |
=GeForce Go 6 (Go 6xxx) series=
- All models support Direct3D 9.0c and OpenGL 2.1
- Curie (microarchitecture)
{{Row hover highlight}}
class="mw-datatable wikitable sortable sort-under" style="font-size:85%; text-align:center;" |
rowspan=2|Model
!rowspan=2|Launch !rowspan=2|Code name !rowspan=2|Fab (nm) !rowspan=2|Bus interface !rowspan=2|Core clock (MHz) !rowspan=2|Memory clock (MHz) !rowspan=2|Core config1 !colspan=4|Memory ! colspan="4" |Fillrate |
---|
Size (MiB)
!Bandwidth (GB/s) !Bus type !Bus width (bit) !MOperations/s !MPixels/s !MTexels/s !MVertices/s |
style="text-align:left"|GeForce Go 6100 + nForce Go 430
|{{unk}} | rowspan="2" |C51M | rowspan="5" |110 | rowspan="2" |HyperTransport | rowspan="2" |425 | rowspan="2" |System memory | rowspan="2" |2:1:2:1 | rowspan="2" |Up to 128 MiB system | rowspan="2" |System memory | rowspan="2" |DDR2 | rowspan="2" |64/128 | rowspan="2" |850 | rowspan="2" |425 | rowspan="2" |850 | rowspan="2" |106.25 |
style="text-align:left"|GeForce Go 6150 + nForce Go 430
|February 1, 2006 |
style="text-align:left"|GeForce Go 6200
|February 1, 2006 | rowspan="2" |NV44M | rowspan="5" |PCIe x16 |300 |600 | rowspan="2" |4:3:4:2 | rowspan="2" |16 |2.4 | rowspan="3" |DDR |32 |1200 |600 |1200 |225 |
style="text-align:left"|GeForce Go 6400
|February 1, 2006 |400 | rowspan="2" |700 |5.6 |64 |1600 |800 |1600 |250 |
style="text-align:left"|GeForce Go 6600
|September 29, 2005 |NV43M | rowspan="2" |300 |8:3:8:4 | rowspan="2" |128 |11.2 |128 | rowspan="2" |3000 | rowspan="2" |1500 | rowspan="2" |3000 |281.25 |
style="text-align:left"|GeForce Go 6800
|November 8, 2004 | rowspan="2" |NV41M | rowspan="2" |130 | rowspan="2" |700 | rowspan="2" |12:5:12:12 | rowspan="2" |22.4 | rowspan="2" |DDR, DDR2 | rowspan="2" |256 |375 |
style="text-align:left"|GeForce Go 6800 Ultra
|February 24, 2005 |450 |256 |5400 |3600 |5400 |562.5 |
=GeForce Go 7 (Go 7xxx) series=
The GeForce Go 7 series is a series of graphics processors for notebooks.
- 1 Vertex shaders: pixel shaders: texture mapping units: render output units
- 2 The graphics card supports TurboCache; memory size entries in bold indicate total memory (graphics + system RAM), otherwise entries are graphics RAM only
- Curie (microarchitecture)
{{Row hover highlight}}
class="mw-datatable wikitable sortable sort-under" style="font-size:85%; text-align:center;" |
rowspan=2 | Model
! rowspan=2 | Launch ! rowspan=2 | Code name ! rowspan=2 | Fab (nm) ! rowspan=2 | Bus interface ! rowspan=2 | Core clock (MHz) ! rowspan=2 | Memory clock (MHz) ! rowspan=2 | Core config1 ! colspan=4 | Memory ! colspan="2" |Fillrate ! colspan="2" | Supported API version ! rowspan=2 | TDP (Watts) ! rowspan=2 | Features |
---|
Size (MiB)
! Bandwidth (GB/s) ! Bus type ! Bus width (bit) !Pixel (GP/s) !Texture (GT/s) ! Direct3D ! OpenGL |
style="text-align:left;" | GeForce 7000M
| rowspan="2" | February 1, 2006 | MCP67MV | rowspan="7" | 90 | rowspan="2" | Hyper Transport | 350 | rowspan="2" | System memory | rowspan="2" | 1:2:2:2 | rowspan="2" | Up to 256 from system memory | rowspan="2" | System memory | rowspan="2" | DDR2 | rowspan="2" | 64/128 | 0.7 | 0.7 | rowspan="13" | 9.0c | rowspan="13" | 2.1 | {{unk}} | rowspan="2" | |
style="text-align:left;" | GeForce 7150M
| MCP67M | 425 | 0.85 | 0.85 | {{unk}} |
style="text-align:left;" | GeForce Go 72002
| rowspan="3" | January 2006 | rowspan="3" | G72M | rowspan="11" | PCIe x16 | 450 | rowspan="2" | 700 | 3:4:4:1 | 64 | 2.8 | rowspan="11" | GDDR3 | 32 | 0.45 | 1.8 | {{unk}} | rowspan="3" | Transparency Anti-Aliasing |
style="text-align:left;" | GeForce Go 73002
| 350 | rowspan="2" | 3:4:4:2 | 128, 256, 512 | 5.60 | rowspan="2" | 64 | 0.7 | 1.4 | {{unk}} |
style="text-align:left;" | GeForce Go 74002
| rowspan="2" | 450 | 900 | 64, 256 | 7.20 | 0.9 | 1.8 | {{unk}} |
style="text-align:left;" | GeForce Go 7600
| March 2006 | rowspan="2" | G73M | 1000 | 5:8:8:8 | 256, 512 | 16 | rowspan="3" | 128 | 3.6 | 3.6 | {{unk}} | rowspan="8" | Scalable Link Interface (SLI), Transparency Anti-Aliasing |
style="text-align:left;" | GeForce Go 7600 GT
| rowspan="2" | 2006 | 500 | 1200 | rowspan="2" | 5:12:12:8 | 256 | 19.2 | 4 | 6 | {{unk}} |
style="text-align:left;" | GeForce Go 7700
| G73-N-B1 | 80 | 450 | 1000 | 512 | 16 | 3.6 | 5.4 | {{unk}} |
style="text-align:left;" | GeForce Go 7800
| March 3, 2006 | rowspan="2" | G70M | rowspan="2" | 110 | rowspan="2" | 400 | rowspan="2" | 1100 | 6:16:16:8 | rowspan="3" | 256 | rowspan="2" | 35.2 | rowspan="5" | 256 |3.2 |6.4 | 35 |
style="text-align:left;" | GeForce Go 7800 GTX
| October 2005 | 8:24:24:16 |6.4 |9.6 | 65 |
style="text-align:left;" | GeForce Go 7900 GS
| rowspan="2" | April 2006 | rowspan="3" | G71M | rowspan="3" | 90 | 375 | 1000 | 7:20:20:16 | 32.0 |6 |7.5 | 20 |
style="text-align:left;" | GeForce Go 7900 GTX
| 500 | 1200 | rowspan="2" | 8:24:24:16 | 256, 512 | 38.4 |8 |12 | rowspan="2" | 45 |
style="text-align:left;" | GeForce Go 7950 GTX
| October 2006 | 575 | 1400 | 512 | 44.8 |9.2 |13.8 |
=GeForce 8M (8xxxM) series=
The GeForce 8M series for notebooks is based on the Tesla microarchitecture.
{{Row hover highlight}}
class="mw-datatable wikitable sortable sort-under" style="font-size:85%; text-align:center;" |
rowspan=2 | Model
! rowspan=2 | Launch ! rowspan=2 | Code name ! rowspan=2 | Fab (nm) ! rowspan=2 | Bus interface ! colspan="3" | Clock speed ! rowspan="2" | Core config1 ! colspan=4 | Memory ! colspan="2" |Fillrate ! rowspan="2" | Processing power (GFLOPS) ! colspan="2" | Supported API version ! rowspan=2 | TDP (Watts) ! rowspan=2 | Notes |
---|
Core (MHz)
! Shader (MHz) ! Memory (MHz) ! Size (MiB) ! Bandwidth (GB/s) ! Bus type ! Bus width (bit) !Pixel (GP/s) !Texture (GT/s) ! Direct3D ! OpenGL |
style="text-align:left;" | GeForce 8200M G{{cite web |url=http://www.geforce.com/hardware/notebook-gpus/geforce-8200m-g-mgpu/specifications |title=GeForce 8200M G mGPU. Specifications |website=Geforce.com |access-date=2015-12-11 |archive-url=https://web.archive.org/web/20151222115936/http://www.geforce.com/hardware/notebook-gpus/geforce-8200m-g-mgpu/specifications |archive-date=2015-12-22 |url-status=live }}
| June 2008 | MCP77MV, MCP79MVL | rowspan="7" | 80 | Integrated (PCIe 2.0 x16) | rowspan="3" | 400 | rowspan="3" | 800 | 800 | rowspan="2" | 8:8:4 | Up to 256 from system memory | 12.8 | DDR2 | 128 | rowspan="3" |1.6 | rowspan="3" |3.2 | 19.2 | rowspan="9" | 10.0 | rowspan="9" | 3.3 | {{unk}} | PureVideo HD with VP3, Full H.264 / VC-1 / MPEG-2 HW Decode |
style="text-align:left;" | GeForce 8400M G
| rowspan="5" | May 2007 | rowspan="3" | NB8M(G86) | rowspan="6" | PCIe x16 | rowspan="2" | 800 | rowspan="2" | 128 / 256 | rowspan="2" | 6.4 | rowspan="5" | DDR2 / GDDR3 | rowspan="2" | 64 | | 10 | rowspan="5" | PureVideo HD with VP2, BSP Engine, and AES128 Engine |
style="text-align:left;" | GeForce 8400M GS
| rowspan="3" | 16:8:4 | 38.4 | 11 |
style="text-align:left;" | GeForce 8400M GT
| 450 | 900 | 1200 | rowspan="4" | 256 / 512 | 19.2 | rowspan="4" | 128 |1.8 |3.6 | 43.2 | 14 |
style="text-align:left;" | GeForce 8600M GS
| rowspan="3" | NB8P(G84) | 600 | 1200 | 1400 | 22.4 |2.4 |4.8 | 57.6 | rowspan="2" | 20 |
style="text-align:left;" | GeForce 8600M GT
| 475 | 950 | 800 / 1400 | rowspan="2" | 32:16:8 | 12.8 / 22.4 |3.8 |7.6 | 91.2 |
style="text-align:left;" | GeForce 8700M GT
| June 2007 | 625 | rowspan="3" | 1250 | rowspan="3" | 1600 | 25.6 | rowspan="3" | GDDR3 |5 |10 | 120 | 29 | rowspan="3" | Scalable Link Interface, PureVideo HD with VP2, BSP Engine, and AES128 Engine |
style="text-align:left;" | GeForce 8800M GTS
| rowspan="2" | November 2007 | rowspan="2" | NB8P(G92) | rowspan="2" | 65 | rowspan="2" | PCIe 2.0 x16 | rowspan="2" | 500 | 64:32:16 | rowspan="2" | 512 | rowspan="2" | 51.2 | rowspan="2" | 256 | rowspan="2" |8 |16 | 240 | 50 |
style="text-align:left;" | GeForce 8800M GTX
| 96:48:16 |24 | 360 | 65 |
=GeForce 9M (9xxxM) series=
The GeForce 9M series for notebooks is based on the Tesla microarchitecture.
{{Row hover highlight}}
class="mw-datatable wikitable sortable sort-under" style="font-size:85%; text-align:center;" |
rowspan=2 | Model
! rowspan=2 | Launch ! rowspan=2 | Code name ! rowspan=2 | Fab (nm) ! rowspan=2 | Bus interface ! colspan="3" | Clock speed ! rowspan="2" | Core config1 ! colspan=4 | Memory ! colspan="2" |Fillrate ! rowspan="2" | Processing power (GFLOPS) ! colspan="2" | Supported API version ! rowspan=2 | TDP (Watts) ! rowspan=2 | Notes |
---|
Core (MHz)
! Shader (MHz) ! Memory (MHz) ! Size (MiB) ! Bandwidth (GB/s) ! Bus type ! Bus width (bit) !Pixel (GP/s) !Texture (GT/s) ! Direct3D ! OpenGL |
style="text-align:left;" | GeForce 9100M G mGPU | rowspan="4" | 2008 | MCP77MH, MCP79MH | rowspan="2" | 65 | Integrated | 450 | 1100 | 1066 | rowspan="2" | 8:8:4 | Up to 256 from system memory | 17.056 | DDR3 | 128 |1.8 |3.6 | 26.4 | rowspan="17" | 10.0 | rowspan="17" | 3.3 | 12 | Similar to 8400M G |
style="text-align:left;" | GeForce 9200M GS
| NB9M-GE(G98) | rowspan="3" | PCIe 2.0 x16 | 550 | 1300 | 1400 | 256 | 11.2 | rowspan="3" | DDR2/GDDR3 | rowspan="3" | 64 |2.2 |4.4 | 31.2 | rowspan="3" | 13 | rowspan="3" | |
style="text-align:left;" | GeForce 9300M G
| NB9M-GE(G86) | 80 | 400 | 800 | 1200 | 16:8:4 | rowspan="2" | 256/512 | 9.6 |1.6 |3.2 | 38.4 |
style="text-align:left;" | GeForce 9300M GS
| NB9M-GE(G98) | rowspan="3" | 65 | 550 | 1400 | 1400 | 8:8:4 | 11.2 |2.2 |4.4 | 33.6 |
style="text-align:left;" | GeForce 9400M G
| October 15, 2008 | MCP79MX | Integrated(PCIe 2.0 x16) | 450 | 1100 | 800 | 16:8:4 | Up to 256 from system memory | 12.8 | DDR2 | rowspan="8" | 128 |1.8 |3.6 | 54 | 12 | PureVideo HD with VP3. Known as the GeForce 9400M in Apple systems{{cite web |url=http://www.nvidia.com/object/product_geforce_9400m_g_us.html |title=Hardware {{pipe}} GeForce |website=Nvidia.com |access-date=2015-12-11 |archive-url=https://web.archive.org/web/20120212161506/http://www.nvidia.com/object/product_geforce_9400m_g_us.html |archive-date=2012-02-12 |url-status=live }} and Nvidia ION based systems |
style="text-align:left;" | GeForce 9500M G
| rowspan="6" | 2008 | NB9P(G96) | PCIe 2.0 x16 | 500 | 1250 | 1600 | 16:8:8 | rowspan="2" | 512 | 25.6 | rowspan="4" | DDR2 / GDDR3 |4 |4 | 60 | rowspan="3" | 20 | |
style="text-align:left;" | GeForce 9500M GS
| NB9P-GV(G96) | 80 | PCIe x16 | 475 | 950 | 1400 | rowspan="6" | 32:16:8 | 22.4 |3.8 |7.6 | 91.2 |Rebranded 8600M GT |
style="text-align:left;" | GeForce 9600M GS
| NB9P-GE2(G96) | rowspan="2" | 65 | rowspan="4" | PCIe 2.0 x16 | 430 | 1075 | 800 | 1024 | 12.8 |3.44 |6.88 | 103.2 | rowspan="2" | |
style="text-align:left;" | GeForce 9600M GT
| NB9P-GS(G96) | 500 | rowspan="2" | 1250 | rowspan="9" | 1600 | 512/1024 | rowspan="4" | 25.6 |4 |8 | rowspan="2" | 120 | 23 |
style="text-align:left;" | GeForce 9650M GS
| NB9P-GS1(G84) | 80 | 625 | 512 | rowspan="8" | GDDR3 |5 |10 | 29 |Rebranded 8700M GT |
style="text-align:left;" | GeForce 9650M GT
| NB9P-GT(G96) | 65/55 | 550 | 1325 | 1024 |4.4 |8.8 | 127.2 | 23 | rowspan="3" | |
style="text-align:left;" | GeForce 9700M GT
| rowspan="2" | July 29, 2008 | NB9E-GE(G96) | rowspan="3" | 65 | PCIe x16 | 625 | 1550 | rowspan="3" | 512 |5 |10 | 148.8 | 45 |
style="text-align:left;" | GeForce 9700M GTS
| NB9E-GS(G94) | rowspan="5" | PCIe 2.0 x16 | rowspan="2" | 530 | rowspan="2" | 1325 | 48:24:16 | rowspan="5" | 51.2 | rowspan="5" | 256 |8.48 |12.7 | 190.8 | rowspan="2" | 60 |
style="text-align:left;" | GeForce 9800M GS
| 2008 | rowspan="2" | NB9E-GT(G94) | 64:32:16 |8.48 |16.96 | 254 |Down Clocked 9800M GTS Via Firmware |
style="text-align:left;" | GeForce 9800M GTS
| rowspan="3" | July 29, 2008 | rowspan="2" | 65/55 | 600 | 1500 | 64:32:16 | 512 / 1024 |9.6 |19.2 | 288 | 75 | |
style="text-align:left;" | GeForce 9800M GT
| NB9E-GT2(G92) | rowspan="2" | 500 | rowspan="2" | 1250 | 96:48:16 | 512 | rowspan="2" |8 |24 | 360 | 65 |Rebranded 8800M GTX |
style="text-align:left;" | GeForce 9800M GTX
| NB9E-GTX(G92) | 65 | 112:56:16 | 1024 |28 | 420 | 75 | |
rowspan=2 | Model
! rowspan=2 | Launch ! rowspan=2 | Code name ! rowspan=2 | Fab (nm) ! rowspan=2 | Bus interface ! Core (MHz) ! Shader (MHz) ! Memory (MHz) ! rowspan="2" | Core config1 ! Size (MiB) ! Bandwidth (GB/s) ! Bus type ! Bus width (bit) !Pixel (GP/s) !Texture (GT/s) ! rowspan="2" | Processing power (GFLOPS) ! Direct3D ! OpenGL ! rowspan=2 | TDP (Watts) ! rowspan=2 | Notes |
colspan=3 | Clock speed
! colspan=4 | Memory ! colspan="2" |Fillrate ! colspan="2" | Supported API version |
=GeForce 100M (1xxM) series=
The GeForce 100M series for notebooks is based on the Tesla microarchitecture. The 103M, 105M, 110M and 130M are rebranded GPUs, i.e. they use the same GPU cores as the previous generation (9M), with promised optimisations to other features.
{{Row hover highlight}}
class="mw-datatable wikitable sortable sort-under" style="font-size:85%; text-align:center;" |
rowspan=2 | Model
! rowspan=2 | Launch ! rowspan=2 | Code name ! rowspan=2 | Fab (nm) ! rowspan=2 | Bus interface ! colspan="3" | Clock speed ! rowspan="2" | Core config1 ! colspan=4 | Memory ! colspan="2" |Fillrate ! rowspan="2" | Processing power (GFLOPS) ! colspan="2" | Supported API version ! rowspan=2 | TDP (Watts) ! rowspan=2 | Notes |
---|
Core (MHz)
! Shader (MHz) ! Memory (MHz) ! Size (MiB) ! Bandwidth (GB/s) ! Bus type ! Bus width (bit) !Pixel (GP/s) !Texture (GT/s) ! Direct3D ! OpenGL |
style="text-align:left;" | GeForce G 102M
| January 8, 2009 | MCP79XT | rowspan="3" | 65 | Integrated | 450? | 1000 | 800 | 16:8:4 | Up to 512 from system memory | 6.4 | rowspan="2" | DDR2 | rowspan="4" | 64 |1.8 |3.6 | 48 | rowspan="8" | 10.0 | rowspan="8" | 3.3 | rowspan="4" | 14 | PureVideo HD, CUDA, Hybrid SLI, based on GeForce 9400M G |
style="text-align:left;" | GeForce G 103M
| January 1, 2009 | N10M-GE2(G98) | rowspan="7" | PCIe 2.0 x16 | rowspan="2" | 640 | rowspan="2" | 1600 | 1000 | rowspan="2" | 8:8:4 | 512 | 8 | rowspan="2" |2.56 | rowspan="2" |5.12 | 38 | rowspan="2" | PureVideo HD, CUDA, Hybrid SLI, comparable to the GeForce 9300M GS |
style="text-align:left;" | GeForce G 105M
| rowspan="2" | January 8, 2009 | N10M-GE1(G98) | 1000 | | 8 | GDDR2 | 38 |
style="text-align:left;" | GeForce G 110M
| N10M-GE1(G96b) | rowspan="5" | 55 | 400 | 1000 | 1000 | 16:8:4 | rowspan="5" | 1024 | 8 | DDR2 |1.6 |3.2 | 48 | PureVideo HD, CUDA, Hybrid SLI |
style="text-align:left;" | GeForce GT 120M
| February 11, 2009 | N10P-GV1(G96b) | 500 | 1250 | 1000 | rowspan="2" | 32:16:8 | 16 | DDR2 | rowspan="2" | 128 |4 |8 | 110 | rowspan="2" | 23 | PureVideo HD, CUDA, Hybrid SLI, Comparable to the 9500M GT and 9600M GT DDR2 (500/1250/400) |
style="text-align:left;" | GeForce GT 130M
| January 8, 2009 | N10P-GE1(G96b) | 600 | 1500 | 1000 | 16 | DDR2 |4.8 |9.6 | 144 | PureVideo HD, CUDA, Hybrid SLI, comparable to the 9650M GT |
style="text-align:left;" | GeForce GTS 150M
| rowspan="2" | March 3, 2009 | N10E-GE1(G94b) | 400 | 1000 | rowspan="2" | 1600 | rowspan="2" | 64:32:16 | rowspan="2" | 51.2 | rowspan="2" | GDDR3 | rowspan="2" | 256 |6.4 |12.8 | 192 | {{unk}} | rowspan="2" | PureVideo HD, CUDA, Hybrid SLI |
style="text-align:left;" | GeForce GTS 160M
| N10E-GS1(G94b) | 600 | 1500 |9.6 |19.2 | 288 | 60 |
=GeForce 200M (2xxM) series=
The GeForce 200M series is a graphics processor architecture for notebooks, based on the Tesla microarchitecture.
{{Row hover highlight}}
class="mw-datatable wikitable sortable sort-under" style="font-size:85%; text-align:center;" |
rowspan=2 | Model
! rowspan=2 | Launch ! rowspan=2 | Code name ! rowspan=2 | Fab (nm) ! rowspan=2 | Bus interface ! colspan="3" | Clock speed ! rowspan="2" | Core config1 ! colspan=4 | Memory ! colspan="2" |Fillrate ! rowspan="2" | Processing power (GFLOPS) ! colspan="2" | Supported API version ! rowspan=2 | TDP (Watts) ! rowspan=2 | Notes |
---|
Core (MHz)
! Shader (MHz) ! Memory (MHz) ! Size (MiB) ! Bandwidth (GB/s) ! Bus type ! Bus width (bit) !Pixel (GP/s) !Texture (GT/s) ! Direct3D ! OpenGL |
style="text-align:left;" | GeForce G210M
| June 15, 2009 | GT218 | 40 | rowspan="9" | PCIe 2.0 x16 | 625 | 1500 | 1600 | 16:8:4 | 512 | 12.8 | GDDR3 | 64 |2.5 |5 | 72 | rowspan="9" | 10.1 | rowspan="9" | 3.3 | rowspan="2" | 14 | Lower-clocked versions of the GT218 core are also known as Nvidia ION 2 |
style="text-align:left;" | GeForce GT 220M
| 2009 | G96b | 55 | rowspan="2" | 500 | 1250 | 1000 | 32:16:8 | rowspan="8" | 1024 | 16 | DDR2 | rowspan="5" | 128 | rowspan="2" |4 | rowspan="2" |8 | 120 | rebranded 9600M GT @55 nm node shrink |
style="text-align:left;" | GeForce GT 230M
| rowspan="4" | June 15, 2009 | rowspan="2" | GT216 | rowspan="4" | 40 | 1100 | rowspan="2" | 1600 | rowspan="2" | 48:16:8 | rowspan="2" | 25.6 | rowspan="2" | GDDR3 | 158 | rowspan="2" | 23 | rowspan="6" | |
style="text-align:left;" | GeForce GT 240M
| 550 | 1210 |4.4 |8.8 | 174 |
style="text-align:left;" | GeForce GTS 250M
| GT215 | 500 | 1250 | 3200 | rowspan="2" | 96:32:8 | 51.2 | rowspan="2" | GDDR5 |4 |16 | 360 | 28 |
style="text-align:left;" | GeForce GTS 260M
| GT215 | rowspan="2" | 550 | rowspan="2" | 1375 | 3600 | 57.6 |4.4 |17.6 | 396 | 38 |
style="text-align:left;" | GeForce GTX 260M
| rowspan="2" | March 3, 2009 | rowspan="3" | G92b | rowspan="3" | 55 | rowspan="2" | 1900 | 112:56:16 | rowspan="2" | 60.8 | rowspan="3" | GDDR3 | rowspan="3" | 256 |8.8 |30.8 | 462 | 65 |
style="text-align:left;" | GeForce GTX 280M
| 585 | 1463 | rowspan="2" | 128:64:16 |9.36 |37.44 | 562 | rowspan="2" | 75 |
style="text-align:left;" | GeForce GTX 285M
| February 2010 | 600 | 1500 | 2000 | 64.0 |9.6 |38.4 | 576 | Higher Clocked Version of GTX280M with new memory |
=GeForce 300M (3xxM) series=
The GeForce 300M series for notebooks is based on the Tesla microarchitecture.
- 1 Unified shaders: texture mapping units: render output units
- 2 To calculate the processing power see Tesla (microarchitecture)#Performance.
{{Row hover highlight}}
class="mw-datatable wikitable sortable sort-under" style="font-size:85%; text-align:center;" |
rowspan=2 | Model
! rowspan=2 | Launch ! rowspan=2 | Code name ! rowspan=2 | Fab (nm) ! rowspan=2 | Bus interface ! colspan="3" | Clock speed ! rowspan="2" | Core config1 ! colspan=4 | Memory ! colspan="2" |Fillrate ! rowspan="2" | Processing power (GFLOPS)2 ! colspan="2" | Supported API version ! rowspan=2 | TDP (Watts) |
---|
Core (MHz)
! Shader (MHz) ! Memory (MHz) ! Size (MiB) ! Bandwidth (GB/s) ! Bus type ! Bus width (bit) !Pixel (GP/s) !Texture (GT/s) ! Direct3D ! OpenGL |
style="text-align:left;" | GeForce 305M
| rowspan="2" | January 10, 2010 | rowspan="3" | GT218 | rowspan="10" | 40 | rowspan="10" | PCIe 2.0 x16 | 525 | 1150 | 1400 | rowspan="3" | 16:8:4 | rowspan="3" | 512 | 11.2 | rowspan="3" | DDR3 | rowspan="3" | 64 |2.1 |4.2 | 55 | rowspan="10" | 10.1 | rowspan="10" | 3.3 | rowspan="3" | 14 |
style="text-align:left;" | GeForce 310M
| 625 | 1530 | rowspan="2" | 1600 | rowspan="2" | 12.8 |2.5 |5 | 73 |
style="text-align:left;" | GeForce 315M
| January 5, 2011 | 606 | 1212 |2.42 |4.85 | 58.18 |
style="text-align:left;" | GeForce 320M
| April 1, 2010 | MCP89 | 450 | 950 | 1066 | 48:16:8 | 17.056 | DDR3 | rowspan="7" | 128 |3.6 |7.2 | 136.8 | 20 |
style="text-align:left;" | GeForce GT 320M
| January 21, 2010 | rowspan="3" | GT216 | 500 | 1100 | 1580 | 24:8:8 | rowspan="6" | 1024 | 25.3 | rowspan="4" | DDR3 |4 |4 | 90 | 14 |
style="text-align:left;" | GeForce GT 325M
| rowspan="2" | January 10, 2010 | 450 | 990 | rowspan="3" | 1600 | rowspan="2" | 48:16:8 | rowspan="3" | 25.6 |3.6 |7.2 | 142 | rowspan="2" | 23 |
style="text-align:left;" | GeForce GT 330M
| 575 | 1265 |4.6 |9.2 | 182 |
style="text-align:left;" | GeForce GT 335M
| rowspan="3" | January 7, 2010 | rowspan="3" | GT215 | 450 | 1080 | 72:24:8 |3.6 |10.8 | 233 | 28? |
style="text-align:left;" | GeForce GTS 350M
| rowspan="2" | 500 | 1249 | 3200 | rowspan="2" | 96:32:8 | 51.2 | rowspan="2" | DDR3 |4 |16 | 360 | 28 |
style="text-align:left;" | GeForce GTS 360M
| 1436 | 3600 | 57.6 |4.4 |17.6 | 413 | 38 |
=GeForce 400M (4xxM) series=
The GeForce 400M series for notebooks is based on the Fermi microarchitecture.
- 1 Unified shaders: texture mapping units: render output units
- 2 To calculate the processing power see Fermi (microarchitecture)#Performance.
- 3 Each SM in the GF100 also contains 4 texture address units and 16 texture filtering units. The full GF100 die contains a total of 64 texture address units and 256 texture filtering units. Each SM in the GF104/106/108 architecture contains 8 texture filtering units for every texture address unit. The complete GF104 die contains 64 texture address units and 512 texture filtering units, the complete GF106 die contains 32 texture address units and 256 texture filtering units, and the complete GF108 die contains 16 texture address units and 128 texture filtering units.
{{Row hover highlight}}
class="mw-datatable wikitable sortable sort-under" style="font-size:85%; text-align:center;" |
rowspan=2 | Model
! rowspan=2 | Launch ! rowspan=2 | Code name ! rowspan=2 | Fab (nm) ! rowspan=2 | Bus interface ! colspan="3" | Clock speed ! rowspan="2" | Core config1 ! colspan=4 | Memory ! colspan="2" |Fillrate ! rowspan="2" | Processing power (GFLOPS)2 ! colspan="2" | Supported API version ! rowspan=2 | TDP (Watts) ! rowspan=2 | Notes |
---|
Core (MHz)
! Shader (MHz) ! Memory (MHz) ! Size (GiB) ! Bandwidth (GB/s) ! Bus type ! Bus width (bit) !Pixel (GP/s) !Texture (GT/s) ! Direct3D ! OpenGL |
style="text-align:left;" | GeForce 410M
| January 5, 2011 | GF119 | rowspan="10" | 40 | rowspan="10" | PCIe 2.0 x16 | 575 | 1150 | rowspan="5" | 1600 | rowspan="2" | 48:8{{sup|3}}:4 | rowspan="3" | 0.5 and 1 | 12.8 | rowspan="5" | DDR3 | 64 |2.3 |4.6 | 110.4 | rowspan="10" | 12 | rowspan="10" | 4.5 | 12 | rowspan="2" |Similar to Desktop GT420 OEM |
style="text-align:left;" | GeForce GT 415M
| rowspan="7" | September 3, 2010 | rowspan="4" | GF108 | rowspan="2" | 500 | rowspan="2" | 1000 | rowspan="4" | 25.6 | rowspan="4" | 128 | rowspan="2" |2 |4 | 96 |
style="text-align:left;" | GeForce GT 420M
| rowspan="3" | 96:163:4 |8 | 192 | rowspan="2" |Similar to Desktop GT430 |
style="text-align:left;" | GeForce GT 425M
| 560 | 1120 | 1 |2.24 |8.96 | 215.04 |
style="text-align:left;" | GeForce GT 435M
| 650 | 1300 | 2 |2.6 |10.4 | 249.6 |Similar to Desktop GT430/440 |
style="text-align:left;" | GeForce GT 445M
| rowspan="2" | GF106 | 590 | 1180 | 1600 | 144:243:16 | 1 | 25.6 | DDR3 | 128 |9.44 |14.16 | 339.84 |Similar to Desktop GTS450 OEM) |
style="text-align:left;" | GeForce GTX 460M
| 675 | 1350 | rowspan="2" | 2500 | 192:32{{sup|3}}:24 | rowspan="2" | 1.5 | rowspan="2" | 60 | rowspan="4" | GDDR5 | rowspan="2" | 192 |16.2 |21.6 | 518.4 | rowspan="2" | 45–50 (GPU only) |Similar to Desktop GTX550 Ti |
style="text-align:left;" | GeForce GTX 470M
| GF104 | 550 | 1100 | 288:48{{sup|3}}:24 |13.2 |26.4 | 633.6 |Similar to Desktop GTX 460/560SE |
style="text-align:left;" | GeForce GTX 480M
| May 25, 2010 | GF100 | 425 | 850 | 2400 | 352:44{{sup|3}}:32 | rowspan="2" | 2 | 76.8 | rowspan="2" | 256 |13.6 |18.7 | 598.4 | rowspan="2" | 100 (MXM module) |Similar to Desktop GTX465 |
style="text-align:left;" | GeForce GTX 485M
| January 5, 2011 | GF104 | 575 | 1150 | 3000 | 384:64{{sup|3}}:32 | 96.0 |18.4 |36.8 | 883.2 |Similar to Desktop GTX560 Ti |
=GeForce 500M (5xxM) series=
The GeForce 500M series for notebooks is based on the Fermi microarchitecture.
- 1 Unified shaders: texture mapping units: render output units
- 2 On some Dell XPS 17
{{Row hover highlight}}
class="mw-datatable wikitable sortable sort-under" style="font-size:85%; text-align:center;" |
rowspan=2 | Model
! rowspan=2 | Launch ! rowspan=2 | Code name ! rowspan=2 | Fab (nm) ! rowspan=2 | Bus interface ! colspan="3" | Clock speed ! rowspan="2" | Core config1 ! colspan=4 | Memory ! colspan="2" |Fillrate ! rowspan="2" | Processing power (GFLOPS)2 ! colspan="2" | Supported API version ! rowspan=2 | TDP (Watts) ! rowspan=2 | Notes |
---|
Core (MHz)
! Shader (MHz) ! Memory (MHz) ! Size (GiB) ! Bandwidth (GB/s) ! Bus type ! Bus width (bit) !Pixel (GP/s) !Texture (GT/s) ! Direct3D ! OpenGL |
style="text-align:left;" | GeForce GT 520M
| January 5, 2011 | GF119 | rowspan="10" | 40 | rowspan="10" | PCIe 2.0 x16 | 740 | 1480 | rowspan="2" | 1600 | 48:8:4 | rowspan="4" | 1 | rowspan="2" | 12.8 | rowspan="6" | DDR3 | rowspan="3" | 64 |2.96 |5.92 | 142.08 | rowspan="10" | 12 | rowspan="10" | 4.6 | 12 |Similar to Desktop 510/520 |
style="text-align:left;" | GeForce GT 520M
| | GF108 | 515 | 1030 | 96:16:4 |2.06 |8.24 | 197.76 | rowspan="2" | 20 | Noticed in Lenovo laptops, similar to Desktop 530/430/440 |
style="text-align:left;" | GeForce GT 520MX
| May 30, 2011 | GF119 | 900 | 1800 | rowspan="3" | 1800 | 48:8:4 | 14.4 |3.6 |7.2 | 172.8 |Similar to Desktop 510 & GT520 |
style="text-align:left;" | GeForce GT 525M
| rowspan="4" | January 5, 2011 | rowspan="2" | GF108 | 600 | 1200 | rowspan="2" | 96:16:4 | rowspan="3" | 28.8 | rowspan="3" | 128 |2.4 |9.6 | 230.4 | 20–23 |Similar to Desktop GT 530/430/440 |
style="text-align:left;" | GeForce GT 540M
| 672 | 1344 | 2 |2.688 |10.752 | 258.048 | rowspan="2" | 32–35 | rowspan="2" |Similar to Desktop GT 530/440 |
style="text-align:left;" | GeForce GT 550M
| GF108 | 740 | 1480 | 1800 | 96:16:4 | 1 |2.96 |11.84 | 284.16 |
style="text-align:left;" | GeForce GT 555M
| GF106 | 590 | 1180 | 1800 | 144:24:24 | 1.5 | 43.2 | DDR3 | 192 |14.6 |14.6 | 339.84 | 30–35 |Similar to Desktop GT545 |
style="text-align:left;" | GeForce GTX 560M
| May 30, 2011 | GF116 | 775 | 1550 | 2500 | 192:32:16 | 2 | 40.0 | rowspan="3" | GDDR5 | 128 |18.6 |24.8 | 595.2 | rowspan="2" | 75 |Similar to Desktop GTX 550Ti |
style="text-align:left;" | GeForce GTX 570M{{cite web |url=http://www.geforce.com/#/Hardware/GPUs/geforce-gtx-570m/specifications |title=Graphics Cards, Gaming, Laptops, and Virtual Reality from NVIDIA GeForce |access-date=2011-07-01 |archive-url=https://web.archive.org/web/20110629034841/http://www.geforce.com/#/Hardware/GPUs/geforce-gtx-570m/specifications |archive-date=2011-06-29 |url-status=live }}
| rowspan="2" | June 28, 2011 | rowspan="2" | GF114 | 575 | 1150 | rowspan="2" | 3000 | 336:56:24 | 1.5 | 72.0 | 192 |13.8 |32.2 | 772.8 |Similar to Desktop GTX 560 |
style="text-align:left;" | GeForce GTX 580M
| 620 | 1240 | 384:64:32 | 2 | 96.0 | 256 |19.8 |39.7 | 952.3 | 100 |Similar to Desktop GTX 560 Ti |
=GeForce 600M (6xxM) series=
{{Further|GeForce 600 series}}
The GeForce 600M series for notebooks is based on the Fermi and Kepler microarchitectures. The processing power is obtained by multiplying the shader clock speed, the number of cores, and the number of instructions each core can perform per cycle; a worked example follows the notes below.
- 1 Unified shaders: texture mapping units: render output units
- Non-GTX models lack NVENC support
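As a worked example of the processing power formula above, using the GeForce GT 640M row below (384 unified shaders at a 625 MHz shader clock) and assuming two operations per core per cycle (one fused multiply-add), an assumption not stated in the table itself: 384 × 625 MHz × 2 = 480 GFLOPS, matching the listed value.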
{{Row hover highlight}}
class="mw-datatable wikitable sortable sort-under" style="font-size:85%; text-align:center;" |
rowspan=2 | Model
! rowspan=2 | Launch ! rowspan=2 | Code name ! rowspan=2 | Fab (nm) ! rowspan=2 | Bus interface ! colspan="3" | Clock speed ! rowspan="2" | Core config1 ! colspan=4 | Memory ! colspan="2" |Fillrate ! rowspan="2" | Processing power (GFLOPS)2 ! colspan="3" | Supported API version ! rowspan=2 | TDP (Watts) ! rowspan=2 | Notes |
---|
Core (MHz)
! Shader (MHz) ! Memory (GT/s) ! Size (GiB) ! Bandwidth (GB/s) ! Bus type ! Bus width (bit) !Pixel (GP/s) !Texture (GT/s) ! Vulkan ! Direct3D ! OpenGL |
style="text-align:left;" | GeForce 610M{{cite web |url=http://www.nvidia.in/object/geforce-610m-in.html#pdpContent=2 |title=GeForce 610M Graphics Card with Optimus technology {{pipe}} Nvidia |website=Nvidia.in |access-date=2015-12-11 |archive-url=https://web.archive.org/web/20151208092726/http://www.nvidia.in/object/geforce-610m-in.html#pdpContent=2 |archive-date=2015-12-08 |url-status=live }}
| December 2011 | GF119 (N13M-GE) | 40 | rowspan="5" | PCIe 2.0 x16 | 900 | 1800 | 1.8 | 48:8:4 | rowspan="4" | 1 | 14.4 | rowspan="3" | DDR3 | 64 |3.6 |7.2 | 142.08 | rowspan="5" | n/a | rowspan="16" | 12 | rowspan="16" | 4.5 | 12 | OEM. Rebadged GT 520MX |
style="text-align:left;" | GeForce GT 620M{{cite web |url=http://www.anandtech.com/show/5697/nvidias-geforce-600m-Series-keplers-and-fermis-and-die-shrinks-oh-my/2 |title=Nvidia's GeForce 600M series: Mobile Kepler and Fermi Die Shrinks |website=Anandtech.com |access-date=2015-12-11 |archive-url=https://web.archive.org/web/20150919060349/http://www.anandtech.com/show/5697/nvidias-geforce-600m-series-keplers-and-fermis-and-die-shrinks-oh-my/2 |archive-date=2015-09-19 |url-status=live }}
| April 2012 | rowspan="2" | GF117 (N13M-GS) | rowspan="2" | 28 | rowspan="2" | 625 | rowspan="2" | 1250 | rowspan="2" | 1.8 | rowspan="3" | 96:16:4 | 14.4 | 64 | rowspan="2" |2.5 | rowspan="2" |10 | 240 | rowspan="2" | 15 | rowspan="2" | OEM. Die-Shrink GF108 |
style="text-align:left;" | GeForce GT 625M
| October 2012 | 14.4 | 64 | |
style="text-align:left;" | GeForce GT 630M{{cite web |url=http://www.nvidia.in/object/geforce-gt-630m-in.html#pdpContent=2 |title=GeForce GT 630M Graphics Card with Optimus technology {{pipe}} Nvidia |website=Nvidia.in |access-date=2015-12-11 |archive-url=https://web.archive.org/web/20151204075305/http://www.nvidia.in/object/geforce-gt-630m-in.html#pdpContent=2 |archive-date=2015-12-04 |url-status=live }}{{cite web |url=http://www.geforce.com/hardware/notebook-gpus/geforce-gt-630m/specifications |title=GT 630M {{pipe}} Specifications |publisher=GeForce |access-date=2015-12-11 |archive-url=https://web.archive.org/web/20151209063648/http://www.geforce.com/hardware/notebook-gpus/geforce-gt-630m/specifications |archive-date=2015-12-09 |url-status=live }}
| rowspan="2" | April 2012 | GF108 (N13P-GL) | 40 | 660 | 1320 | 1.8 | 28.8 | DDR3 | 128 |2.6 |10.7 | 258.0 | 33 | GF108: OEM. Rebadged GT 540M |
style="text-align:left;" | GeForce GT 635M{{cite web |url=http://www.nvidia.in/object/geforce-gt-635m-in.html#pdpContent=2 |title=GeForce GT 635M GPU with Nvidia Optimus technology {{pipe}} Nvidia |website=Nvidia.in |access-date=2015-12-11 |archive-url=https://web.archive.org/web/20151222133158/http://www.nvidia.in/object/geforce-gt-635m-in.html#pdpContent=2 |archive-date=2015-12-22 |url-status=live }}{{cite web |url=http://www.geforce.com/hardware/notebook-gpus/geforce-gt-635m/specifications |title=GT 635M {{pipe}} Specifications |publisher=GeForce |access-date=2015-12-11 |archive-url=https://web.archive.org/web/20151219023301/http://www.geforce.com/hardware/notebook-gpus/geforce-gt-635m/specifications |archive-date=2015-12-19 |url-status=live }}
| GF106 (N12E-GE2) | 40 | 675 | 1350 | 1.8 | 144:24:24 | 2 | 28.8 | DDR3 | 128 |16.2 |16.2 | 289.2 | 35 | GF106: OEM. Rebadged GT 555M |
style="text-align:left;" | GeForce GT 640M LE
| rowspan="2" | March 22, 2012 | GF108 | 40 | PCIe 2.0 x16 | 762 | 1524 | 3.13 | 96:16:4 | rowspan="3" | 1 | 50.2 | rowspan="4" | DDR3 GDDR5 | rowspan="5" | 128 |3 |12.2 | 292.6 | rowspan="5" | 1.2 | 32 | GF108: 94% of desktop GT630{{Original research inline|date=June 2015}} |
style="text-align:left;" | GeForce GT 640M{{cite web |url=http://www.anandtech.com/show/5672/acer-aspire-timelineu-m3-life-on-the-kepler-verge |title=Acer Aspire TimelineU M3: Life on the Kepler Verge |website=Anandtech.com |access-date=2015-12-11 |archive-url=https://web.archive.org/web/20151222112445/http://www.anandtech.com/show/5672/acer-aspire-timelineu-m3-life-on-the-kepler-verge |archive-date=2015-12-22 |url-status=live }}
| rowspan="2" | GK107 (N13P-GS) | rowspan="4" | 28 | rowspan="4" | PCIe 3.0 x16 | 625 | 625 | 1.8 | rowspan="4" | 384:32:16 | rowspan="2" | 28.8 |10 |20 | 480 | rowspan="2" | 32 | 59% of desktop GTX650{{Original research inline|date=June 2015}} |
style="text-align:left;" | GeForce GT 645M
| October 2012 | 710 | 710 | 1.8 |11.36 |22.72 | 545 | 67% of desktop GTX650{{Original research inline|date=June 2015}} |
style="text-align:left;" | GeForce GT 650M{{cite web |author=RPL |url=http://www.laptopreviews.com/hp-lists-new-ivy-bridge-2012-mosaic-design-laptops-available-april-8th-2012-03 |title=HP Lists New Ivy Bridge 2012 Mosaic Design Laptops, Available April 8th |publisher=Laptop Reviews |date=2012-03-18 |access-date=2015-12-11 |url-status=dead |archive-url=https://web.archive.org/web/20130523012343/http://www.laptopreviews.com/hp-lists-new-ivy-bridge-2012-mosaic-design-laptops-available-april-8th-2012-03 |archive-date=May 23, 2013 }}{{cite web |url=http://content.dell.com/us/en/home/d/help-me-choose/hmc-aw-video-card-laptops |title=Help Me Choose: Video Cards {{pipe}} Dell |website=Content.dell.com |access-date=2015-12-11 |archive-url=https://web.archive.org/web/20121102092044/http://content.dell.com/us/en/home/d/help-me-choose/hmc-aw-video-card-laptops |archive-date=2012-11-02 |url-status=live }}
| rowspan="2" | March 22, 2012 | GK107 (N13P-GT) | 745 | 835 | 1.8 | 0.5 | 28.8 |11.9 |23.8 | 729.6 | 45 | 79% of desktop GTX650{{Original research inline|date=June 2015}} |
style="text-align:left;" | GeForce GTX 660M{{cite web |url=https://www.engadget.com/2012/01/08/lenovo-ideapad-laptops-CES-2012/ |title=Lenovo unveils six mainstream consumer laptops (and one desktop replacement) |website=Engadget.com |date=2012-01-08 |access-date=2015-12-11 |archive-url=https://web.archive.org/web/20151222110754/http://www.engadget.com/2012/01/08/lenovo-ideapad-laptops-CES-2012/ |archive-date=2015-12-22 |url-status=live }}{{cite web |url=http://forum.notebookreview.com/asus-reviews-owners-lounges/659534-asus-g75vw-ivy-bridge-660m-review-owners-lounge-4.html |title=Asus G75VW Ivy Bridge 660M Review! and owners lounge {{pipe}} Page 4 {{pipe}} NotebookReview |website=Forum.notebookreview.com |access-date=2015-12-11 |archive-url=https://web.archive.org/web/20150107131543/http://forum.notebookreview.com/asus-reviews-owners-lounges/659534-asus-g75vw-ivy-bridge-660m-review-owners-lounge-4.html |archive-date=2015-01-07 |url-status=live }}
| GK107 (N13E-GE) | 835 | 950 | 5 | 2 | 80.0 | rowspan="7" | GDDR5 |15.2 |30.4 | 729.6 | 50 | 79% of desktop GTX650{{Original research inline|date=June 2015}} |
style="text-align:left;" | GeForce GTX 670M
| April 2012 | GF114 (N13E-GS1-LP) | 40 | PCIe 2.0 x16 | 620 | 1240 | 3 | 336:56:24 | rowspan="2" | 1.5 | 72.0 | rowspan="2" | 192 |14.35 |33.5 | 833 | n/a | rowspan="2" | 75 | 73% of desktop GTX 560{{Original research inline|date=June 2015}} |
style="text-align:left;" | GeForce GTX [https://www.geforce.com/hardware/notebook-gpus/geforce-gtx-670mx/specifications 670MX]
| October 2012 | GK104 (N13E-GR) | 28 | PCIe 3.0 x16 | 615 | 615 | 2.8 | 960:80:24 | 67.2 |14.4 |48.0 | 1181 | 1.2 | 61% of desktop GTX 660{{Original research inline|date=June 2015}} |
style="text-align:left;" | GeForce GTX 675M
| April 2012 | GF114 (N13E-GS1) | 40 | PCIe 2.0 x16 | 632 | 1265 | 3 | 384:64:32 | 2 | 96.0 | rowspan="4" | 256 |19.8 |39.7 | 972 | n/a | rowspan="3" | 100 | 75% of desktop GTX 560Ti{{Original research inline|date=June 2015}} |
style="text-align:left;" | GeForce GTX [https://www.geforce.com/hardware/notebook-gpus/geforce-gtx-675mx/specifications 675MX]
| October 2012 | GK104 (N13E-GSR) | rowspan="3" | 28 | rowspan="3" | PCIe 3.0 x16 | 667 | 667 | rowspan="2" | 3.6 | 960:80:32 | rowspan="3" | 4 | rowspan="2" | 115.2 |19.2 |48.0 | 1281 | rowspan="3" | 1.2 | 61% of desktop GTX 660{{Original research inline|date=June 2015}} |
style="text-align:left;" | GeForce GTX 680M
| June 4, 2012 | GK104 (N13E-GTX) | rowspan="2" | 719 | rowspan="2" | 719 | 1344:112:32 | rowspan="2" |23 |80.6 | 1933 | 78% of desktop GTX 670{{Original research inline|date=June 2015}} |
style="text-align:left;" | GeForce GTX 680MX
| October 23, 2012 | GK104 | 5 | 1536:128:32 | 160 |92.2 | 2209 | 122 | 72% of desktop GTX 680{{Original research inline|date=June 2015}} |
rowspan=2 | Model
! rowspan=2 | Launch ! rowspan=2 | Code name ! rowspan=2 | Fab (nm) ! colspan="3" | Clock speed ! rowspan="2" | Core config1 ! colspan=4 | Memory ! colspan="2" |Fillrate ! rowspan="2" | Processing power (GFLOPS)2 ! colspan="3" | Supported API version ! rowspan=2 | TDP (Watts) ! rowspan=2 | Notes |
Core (MHz)
! Shader (MHz) ! Memory (GT/s) ! Size (GiB) ! Bandwidth (GB/s) ! Bus type ! Bus width (bit) !Pixel (GP/s) !Texture (GT/s) ! Vulkan ! Direct3D ! OpenGL |
=GeForce 700M (7xxM) series=
{{Further|GeForce 700 series|Kepler (microarchitecture)}}
The GeForce 700M series for notebooks is based on the Kepler architecture, with some low-end parts (GF117) being die-shrinks of Fermi-era chips. The processing power in GFLOPS is obtained by multiplying the shader clock speed, the number of shader cores, and the number of floating-point operations each core can perform per cycle (a worked example follows the notes below).
- 1 Unified shaders: texture mapping units: render output units
- Non GTX variants lack NVENC support
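For example, for the GeForce GT 730M in the table below (384 shader cores at a 719&nbsp;MHz shader clock, and two operations per core per clock, which is what the listed figure implies):

<math>719\ \text{MHz} \times 384 \times 2 \approx 552.2\ \text{GFLOPS}</math>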
{{Row hover highlight}}
class="mw-datatable wikitable sortable sort-under" style="font-size:85%; text-align:center;" |
rowspan=2 | Model
! rowspan=2 | Launch ! rowspan=2 | Code name ! rowspan=2 | Fab (nm) ! colspan="3" | Clock speed ! rowspan="2" | Core config1 ! colspan=4 | Memory ! colspan="2" |Fillrate ! rowspan="2" | Processing power (GFLOPS)2 ! colspan="4" | Supported API version ! rowspan=2 | TDP (Watts) ! rowspan=2 | Notes |
---|
Core (MHz)
! Shader (MHz) ! Memory (GT/s) ! Size (GiB) ! Bandwidth (GB/s) ! Bus type ! Bus width (bit) !Pixel (GP/s) !Texture (GT/s) ! Vulkan ! Direct3D ! OpenGL ! CUDA |
style="text-align:left;" | GeForce 710M
| January 2013 | GF117 | rowspan="15" | 28 | PCIe 2.0 x16 | 800 | 1600 | rowspan="2" | 1.8 | 96:16:4 | 1 | rowspan="2" | 14.4 | rowspan="7" | DDR3 | rowspan="4" | 64 |3.2 |12.8 | 307.2 | n/a | rowspan="15" | 12 | rowspan="15" | 4.5 | 12 | OEM. About 115% of Mobile 620 & Desktop 530{{Original research inline|date=June 2015}} |
style="text-align:left;" | GeForce 710M
| July 24, 2013 | GK208 | PCIe 3.0 x8 | 719 | ? | 192:16:8 | 1 |5.752 |11.5 | 276.1 | 1.2 | 15 |Kepler, similar to 730M with half of the cores disabled |
style="text-align:left;" | GeForce GT 720M
| April 1, 2013 | GF117 | PCIe 2.0 x16 | 938 | 1876 | rowspan="4" | 2 | 96:16:4 | rowspan="5" | 2 | 16.0 |3.8 |15.0 | 360.19 | n/a | ? | OEM. About 130% of Mobile 625/630 & Desktop 620{{Original research inline|date=June 2015}} |
style="text-align:left;" | GeForce GT 720M
|December 25, 2013 |GK208 |PCIe 2.0 x8 | colspan="2" |719 |192:16:8 |12.8 |3.032 |12.13 |291 | rowspan="12" | 1.2 |22 |Kepler, similar to 730M with half of the cores disabled |
style="text-align:left;" | GeForce GT 730M
| January 2013 | rowspan="3" | GK208 | rowspan="3" | PCIe 3.0 x8 | colspan="2" | 719 | rowspan="3" | 384:32:8 | rowspan="2" |16.0 | 128 |5.8 |23.0 | 552.2 | 33 | Kepler, similar to Desktop GT640 |
style="text-align:left;" | GeForce GT 735M
| rowspan="5" | April 1, 2013 | colspan="2" | 889 | rowspan="2" | 64 |7.11 |28.4 | 682.8 | rowspan="2" | ? | Kepler, similar to Desktop GT640 |
style="text-align:left;" | GeForce GT 740M
| colspan="2" | 980 | 1.8 | 14.4 |7.84 |31.4 | 752.6 | Kepler, similar to Desktop GT640. |
style="text-align:left;" | GeForce GT 740M
| rowspan="4" | GK107 | rowspan="8" | PCIe 3.0 x16 | 1.8 | rowspan="4" | 384:32:16 | 28.8 | rowspan="6" | 128 |12.96 |25.92 | 622.1 | rowspan="2" | 45 | about 76% of Desktop GTX650{{Original research inline|date=June 2015}} |
style="text-align:left;" | GeForce GT 745M
| colspan="2" | 837 | rowspan="2" | 2 | rowspan="5" | 2 | rowspan="2" | 32 | rowspan="2" | DDR3 |13.4 |26.8 | 642.8 | about 79% of Desktop GTX650{{Original research inline|date=June 2015}} |
style="text-align:left;" | GeForce GT 750M
| colspan="2" | 967 |15.5 |30.9 | 742.7 | rowspan="2" | 50 | about 91% of Desktop GTX650{{Original research inline|date=June 2015}} |
style="text-align:left;" | GeForce GT 755M{{cite web |url=http://www.geforce.com/hardware/notebook-gpus/geforce-gt-755m/specifications |title=GT 755M {{pipe}} Specifications |publisher=GeForce |access-date=2015-12-11 |archive-url=https://web.archive.org/web/20151208161237/http://www.geforce.com/hardware/notebook-gpus/geforce-gt-755m/specifications |archive-date=2015-12-08 |url-status=live }}
| ? | colspan="2" | 1020 | 5.4 | 86.4 | rowspan="5" | GDDR5 |15.7 |31.4 | 783 | about 93% of Desktop GTX650{{Original research inline|date=June 2015}} |
style="text-align:left;" | GeForce GTX 760M
| rowspan="4" | May 2013 | rowspan="3" | GK106 | colspan="2" | 719 | rowspan="3" | 4 | rowspan="2" | 768:64:16 | rowspan="2" | 64 |10.5 |42.1 | 1104 | 55 | about 71% of Desktop GTX 650Ti{{Original research inline|date=June 2015}} |
style="text-align:left;" | GeForce GTX 765M
| colspan="2" | 863 |13.6 |54.4 | 1326 | 65 | about 92% of Desktop GTX 650Ti{{Original research inline|date=June 2015}} |
style="text-align:left;" | GeForce GTX 770M
| colspan="2" rowspan="2" | 797 | 960:80:24 | 3 | 96 | 192 |19.5 |64.9 | 1530 | 75 | about 83% of Desktop GTX660{{Original research inline|date=June 2015}} |
style="text-align:left;" | GeForce GTX 780M
| GK104 | 5 | 1536:128:32 | 4 | 160 | 256 |26.3 |105.3 | 2448 | 122 | about 78% of Desktop GTX770{{Original research inline|date=June 2015}} |
rowspan=2 | Model
! rowspan=2 | Launch ! rowspan=2 | Code name ! rowspan=2 | Fab (nm) ! colspan="3" | Clock speed ! rowspan="2" | Core config1 ! colspan=4 | Memory ! colspan="2" |Fillrate ! rowspan="2" | Processing power (GFLOPS)2 ! colspan="4" | Supported API version ! rowspan=2 | TDP (Watts) ! rowspan=2 | Notes |
Core (MHz)
! Shader (MHz) ! Memory (GT/s) ! Size (GiB) ! Bandwidth (GB/s) ! Bus type ! Bus width (bit) !Pixel (GP/s) !Texture (GT/s) ! Vulkan ! Direct3D ! OpenGL ! CUDA |
=GeForce 800M (8xxM) series=
{{Further|GeForce 800M series}}
The GeForce 800M series for notebooks mixes Fermi-, Kepler- and Maxwell-based chips. The processing power in GFLOPS is obtained by multiplying the shader clock speed, the number of shader cores, and the number of floating-point operations each core can perform per cycle.
- 1 Unified shaders: texture mapping units: render output units
- 810M to 845M lack NVENC support
{{Row hover highlight}}
=GeForce 900M (9xxM) series=
{{Further|GeForce 900 series}}
The GeForce 900M series for notebooks is based primarily on the Maxwell architecture; the entry-level 910M and 920M reuse the Kepler-based GK208. The processing power in GFLOPS is obtained by multiplying the shader clock speed, the number of shader cores, and the number of floating-point operations each core can perform per cycle; the fillrate columns are similar per-clock products (see the worked example after the notes below).
- 1 Unified shaders: texture mapping units: render output units
- 920M to 940M lack NVENC support
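For the GeForce 920M below (core config 384:32:8, i.e. 32 texture mapping units and 8 render output units, at its 954&nbsp;MHz minimum clock), the listed fillrates correspond to:

<math>954\ \text{MHz} \times 32 \approx 30.5\ \text{GT/s} \qquad 954\ \text{MHz} \times 8 \approx 7.6\ \text{GP/s}</math>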
{{Row hover highlight}}
class="mw-datatable wikitable sortable sort-under" style="font-size:85%; text-align:center;" |
rowspan=2 | Model
! rowspan=2 | Launch ! rowspan=2 | Code name ! rowspan=2 | Fab (nm) ! colspan="3" | Clock speed ! rowspan="2" | Core config1 ! colspan=4 | Memory ! colspan="2" |Fillrate ! rowspan="2" | Processing power (GFLOPS)2 ! colspan="4" | Supported API version ! rowspan=2 | TDP (Watts) ! rowspan=2 | Notes |
---|
Min (MHz)
! Average (MHz) ! Memory (GT/s) ! Size (GiB) ! Bandwidth (GB/s) ! Bus type ! Bus width (bit) !Pixel (GP/s) !Texture (GT/s) ! Vulkan ! Direct3D ! OpenGL ! OpenCL |
GeForce 910M
|March 13, 2015 |GK208B |28 |PCIe 3.0 x8 |641 | | |384:32:8 (2 SMX) |2 |16.02 |DDR3 |64 |5.128 |20.51 |492.3 |1.2 |12 (11_0) |4.6 |1.2 |33 | |
style="text-align:left;" | GeForce 920M
| rowspan="3" | March 12, 2015 | GK208 | rowspan="12" | 28 | rowspan="12" | PCIe 3.0 x16 | 954 | {{unk}} | rowspan="2" | 1.8 | 384:32:8 | rowspan="5" | 2 | rowspan="2" | 14.4 | rowspan="4" | DDR3 | rowspan="5" | 64 |7.6 |30.5 | 733 | 1.2 | rowspan="8" | 12 (11_0) | rowspan="12" | 4.6 | rowspan="12" | 1.2 | 33 | rowspan="5" | |
style="text-align:left;" | GeForce 930M
| rowspan="3" | GM108 | 928 | 941 | rowspan="3" | 384:24:8 |7.4 |22.3 | 713 | rowspan="11" | 1.3 |
style="text-align:left;" | GeForce 940M
| 1072 | 1176 | rowspan="2" | 2 | rowspan="2" | 16 |8.6 |25.7 | 823 |
rowspan="2" style="text-align:left;" | GeForce 940MX
| 1004 | 1242 |9.9 |29.8 | 954 | rowspan="2" | 23 |
June 28, 2016{{cite web |url=https://www.techpowerup.com/gpu-specs/geforce-940mx.c2845 |title=Nvidia GeForce 940MX Specs |publisher=TechPowerUp |access-date=2020-07-22 }}
| rowspan="4" | GM107 | 795 | 861 | rowspan="2" | 5 | 512:32:8 | 40 | rowspan="2" | GDDR5 |6.9 |27.6 | 882 |
rowspan="2" style="text-align:left;" | GeForce GTX 950M
| rowspan="3" | March 12, 2015 | rowspan="2" | 914 | rowspan="2" {{unk}} | rowspan="3" | 640:40:16 | rowspan="4" | 2, 4 | 80 | rowspan="4" | 128 | rowspan="2" |14.6 | rowspan="2" |36.6 | 1170 | {{unk}} | rowspan="3" | Similar core config to GTX 750 Ti (GM107-400-A2) |
2
| 32 | DDR3 | |
style="text-align:left;" | GeForce GTX 960M
| 1097 | 1176 | rowspan="4" | 5 | rowspan="2" | 80 | rowspan="5" | GDDR5 |17.5 |43.8 | 1403 |
style="text-align:left;" | GeForce GTX 965M{{cite web |url=http://www.geforce.com/hardware/notebook-gpus/geforce-gtx-965m |title=GTX 965M |publisher=GeForce |access-date=2015-12-11 |archive-url=https://web.archive.org/web/20151208060211/http://www.geforce.com/hardware/notebook-gpus/geforce-gtx-965m |archive-date=2015-12-08 |url-status=live }}
| January 5, 2015 | rowspan="4" | GM204 | rowspan="2" | 944 | {{unk}} | 1024:64:32 |30.2 |60.4 | 1933 | rowspan="4" | 12 (12_1) | Similar core config to GTX 960 (GM206-300) |
style="text-align:left;" | GeForce GTX 970M{{cite web |url=http://www.geforce.com/hardware/notebook-gpus/geforce-gtx-970m/specifications |title=GTX 970M {{pipe}} Specifications |publisher=GeForce |access-date=2015-12-11 |archive-url=https://web.archive.org/web/20151209063713/http://www.geforce.com/hardware/notebook-gpus/geforce-gtx-970m/specifications |archive-date=2015-12-09 |url-status=live }}
| rowspan="2" | October 7, 2014 | 993 | 1280:80:48 | 3, 6 | 120 | 192 |44.4 |73.9 | 2365 | 75 | Similar core config to GTX 960 OEM (GM204) |
style="text-align:left;" | GeForce GTX 980M{{cite web |url=http://www.geforce.com/hardware/notebook-gpus/geforce-gtx-980m/specifications |title=GTX 980M {{pipe}} Specifications |publisher=GeForce |access-date=2015-12-11 |archive-url=https://web.archive.org/web/20151213131803/http://www.geforce.com/hardware/notebook-gpus/geforce-gtx-980m/specifications |archive-date=2015-12-13 |url-status=live }}
| 1038 | 1127 | 1536:96:64 | 4, 8 | 160 | rowspan="2" | 256 |66.4 |99.6 | 3189 | 100 | Similar core config to GTX 970 (GM204-200) with one SMM disabled |
style="text-align:left;" | GeForce GTX 980{{cite web |url=http://www.geforce.com/hardware/notebook-gpus/geforce-gtx-980/specifications |title=GTX 980 {{pipe}} Specifications |publisher=GeForce |access-date=2015-12-11 |archive-url=https://web.archive.org/web/20151205174022/http://www.geforce.com/hardware/notebook-gpus/geforce-gtx-980/specifications |archive-date=2015-12-05 |url-status=live }}
| September 22, 2015 | 1064 | {{unk}} | 7.01 | 2048:128:64 | 8 | 224 |68.1 |136.2 | 4358 | 165, oc to 200 | Similar to Desktop GTX 980 |
rowspan="2" | Model
! rowspan="2" | Launch ! rowspan="2" | Code name ! rowspan="2" | Fab (nm) ! colspan="3" | Clock speed ! rowspan="2" | Core config1 ! colspan="4" | Memory ! colspan="2" |Fillrate ! rowspan="2" | Processing power (GFLOPS)2 ! colspan="4" | Supported API version ! rowspan="2" | TDP (Watts) ! rowspan="2" | Notes |
Min (MHz)
! Average (MHz) ! Memory (GT/s) ! Size (GiB) ! Bandwidth (GB/s) ! Bus type ! Bus width (bit) !Pixel (GP/s) !Texture (GT/s) ! Vulkan ! Direct3D ! OpenGL ! OpenCL |
=GeForce 10 series=
{{Further|GeForce 10 series|Pascal (microarchitecture)}}
- Unified shaders: texture mapping units: render output units
- Improved NVENC (better support for H.265, VP9, and other formats)
- Supported APIs: Direct3D 12 (12_1), OpenGL 4.6, OpenCL 3.0, Vulkan 1.3 and CUDA 6.1
{{Row hover highlight}}
class="mw-datatable wikitable sortable sort-under" style="font-size:85%; text-align:center;" |
rowspan="2" | Model
! rowspan="2" | Launch ! rowspan="2" | Code name ! rowspan="2" | Fab (nm) ! rowspan="2" | Transistors (billion) ! rowspan="2" | Die size (mm2) ! colspan="3" | Clock speeds ! rowspan="2" | Core config ! colspan="4" | Memory ! colspan="2" |Fillrate ! colspan="3" |Processing power (GFLOPS) ! colspan="4" | Supported API version ! rowspan="2" | TDP (Watts) ! rowspan="2" | SLI support |
---|
Base core clock (MHz)
! Boost core clock (MHz) ! Memory (GT/s) ! Size (GiB) ! Bandwidth (GB/s) ! Bus type ! Bus width (bit) !Pixel (GP/s) !Texture (GT/s) !Single precision (Boost) ! DirectX ! OpenGL ! Vulkan ! OpenCL |
style="text-align:left;" | GeForce GTX 1050 (Notebook){{cite web|url=http://www.geforce.com/hardware/10series/notebook|title=GeForce GTX 10-Series Notebooks|work=geforce.com|access-date=2016-08-16|archive-url=https://web.archive.org/web/20161021192100/http://www.geforce.com/hardware/10series/notebook|archive-date=2016-10-21|url-status=live}}{{cite web |url=https://www.anandtech.com/show/10980/nvidia-launches-geforce-gtx-1050-ti-gtx-1050-for-laptops |title=NVIDIA Launches GeForce GTX 1050 TI & GTX 1050 For Laptops |website=anandtech.com |access-date=2018-12-27 |archive-url=https://web.archive.org/web/20181227230413/https://www.anandtech.com/show/10980/nvidia-launches-geforce-gtx-1050-ti-gtx-1050-for-laptops |archive-date=2018-12-27 |url-status=live }}
| rowspan="2" | January 3, 2017 | GP107(N17P-G0-A1) | rowspan="2" | 14 nm | rowspan="2" | 3.3 | rowspan="2" | 135 | rowspan="9" |PCIe 3.0 x16 | 1354 | 1493 | rowspan="2" | 7 | 640:40:16 | rowspan="2" | 4 | rowspan="2" | 112 | rowspan="7" | GDDR5 | rowspan="2" | 128 |21.7 |54.2 |14 |1733 (1911) |27 | 12 (12_1) | rowspan="9" | 4.5 | rowspan="9" | 1.3 | rowspan="9" |1.2 | 53 | rowspan="5" {{No}} |
style="text-align:left;" | GeForce GTX 1050 Ti (Notebook)
| GP107(N17P-G1-A1) | 1493 | 1620 | 768:48:32 |47.8 |71.7 |18 |2293 (2488) |36 | |64 |
style="text-align:left;" | GeForce GTX 1060 (Notebook)
| August 16, 2016 | rowspan="2" | GP106 | rowspan="7" | 16 | rowspan="3" | 4.4 | rowspan="3" | 200 | 1404 | 1670 | rowspan="5" | 8 | rowspan="3" | 1280:80:48 | rowspan="3" | 6 | rowspan="3" | 192 | rowspan="3" | 192 |67.4 |112 |56 |3594 (4275) |112 | | rowspan="3" | 80 |
rowspan="2" style="text-align:left;" | GeForce GTX 1060 Max-Q
| rowspan="2" |May 2017 |1063 | rowspan="2" |1480 | rowspan="2" |71.04 | rowspan="2" |118.4 |59.20 | rowspan="2" |3789 | rowspan="2" |118.4 | |
GP106B
|1265 | | |
style="text-align:left;" | GeForce GTX 1070 (Notebook)
|August 16, 2016 | rowspan="4" | 7.2 | rowspan="4" | 314 | 1442 | 1645 | rowspan="2" | 2048:128:64 | rowspan="4" | 8 | rowspan="2" | 256 | rowspan="4" | 256 |92.3 |185 |92 |5906 (6738) |185 | |115 | rowspan=1 {{Yes}} |
style="text-align:left;" | GeForce GTX 1070 Max-Q
|May 2017 |1101 |1379 |88.26 |176.5 |88.26 |5648 |176.5 | |? | rowspan=1 {{No}} |
style="text-align:left;" | GeForce GTX 1080 (Notebook)
|August 16, 2016 | 1556 | 1733 | rowspan="2" | 10 | rowspan="2" | 2560:160:64 | rowspan="2" | 320 | rowspan="2" |GDDR5X |99.6 |249 |124 |7967 (8873) |249 | | 150 |rowspan=1 {{Yes}} |
style="text-align:left;" | GeForce GTX 1080 Max-Q
|May 2017 |1101 |1468 |93.95 |234.9 |117.4 |7516 |234.9 | |? |rowspan=1 {{No}} |
=GeForce 16 series=
{{Further|GeForce 16 series|Turing (microarchitecture)}}
- Supported APIs: Direct3D 12 (12_1), OpenGL 4.6, OpenCL 3.0, Vulkan 1.3 and CUDA 7.5; improved NVENC
- No SLI, no Tensor Cores, and no ray-tracing hardware acceleration.
{{Row hover highlight}}
class="mw-datatable wikitable sortable sort-under" style="font-size:85%; text-align:center;"
|+ ! rowspan="2" |Model ! rowspan="2" |Launch ! rowspan="2" |Code name ! rowspan="2" |Process ! rowspan="2" |Transistors (billion) ! rowspan="2" |Die size (mm2) ! colspan="2" |Clock speeds ! rowspan="2" |Core config ! rowspan="2" |L2 !Memory (GT/s) ! colspan="4" |Memory ! colspan="2" |Fillrate ! colspan="3" |Processing power (GFLOPS) ! rowspan="2" |TDP (Watts) |
Base core clock (MHz)
!Boost core clock (MHz) ! !Size (GiB) !Bandwidth (GB/s) !Bus type !Bus width (bit) !Pixel (GP/s) !Texture (GT/s) |
---|
GeForce GTX 1630
|Jun 28, 2022 | rowspan="2" |TU117 | rowspan="8" |TSMC 12FFN | rowspan="5" |4.7 | rowspan="5" |200 | rowspan="8" |PCIe 3.0 x16 |1740 |1785 |512:32:16 | rowspan="5" |1.0 |12 |4 |96 |GDDR6 |64 |28.56 |57.12 |3.656 |1828 |57.12 |75 |
GeForce GTX 1650 (Laptop)
| rowspan="2" |April 23, 2019 |1395 |1560 | rowspan="4" |1024:64:32 | rowspan="2" |8 | rowspan="2" |4 |128 | rowspan="2" |GDDR5 | rowspan="4" |128 |49.92 |99.84 |6390 |3195 |50 |
GeForce GTX 1650 Max-Q
| TU117(N18P-G0-MP-A1) |1020 |1245 |112 |39.84 |79.68 |5100 |2550 |30 |
GeForce GTX 1650 Ti Max-Q
| rowspan="2" |April 2, 2020 | TU117 |1035 |1200 | rowspan="2" |12 | rowspan="2" |4 | rowspan="2" |192 | rowspan="5" |GDDR6 |38.4 |76.8 | 4915 | 2458 | 76.8 | 35 |
GeForce GTX 1650 Ti
| TU117(N18P-G62-A1) |1350 |1485 |47.52 |95.04 | 6083 | 3041 | 95.04 | 55 |
GeForce GTX 1660 (Laptop)
|? | rowspan="3" |TU116 | rowspan="3" |6.6 | rowspan="3" |284 |1455 |1599 |1408:88:48 | rowspan="3" |1.5 |16 | rowspan="3" |6 |384 | rowspan="3" |192 |76.32 |127.2 |8141 |4070 |? |
GeForce GTX 1660 Ti Max-Q
| rowspan="2" |April 23, 2019 |1140 |1335 |1536:96:46 | rowspan="2" |12 |288 |64.08 |128.2 |8202 |4101 |60 |
GeForce GTX 1660 Ti (Laptop){{cite web|url=https://www.techpowerup.com/gpu-specs/geforce-gtx-1660-ti-mobile.c3369|title=NVIDIA GeForce GTX 1660 Ti Mobile Specs|website=TechPowerUp|language=en|access-date=2020-02-04}}
|1455 |1590 |1536:96:48 |288 |76.32 |152.6 |9769 |4884 |80 |
=GeForce 20 series=
{{Further|GeForce 20 series|Turing (microarchitecture)}}
- Supported APIs: Direct3D 12 (12_2), OpenGL 4.6, OpenCL 3.0, Vulkan 1.3 and CUDA 7.5; improved NVENC (adds B-frame support for H.265, among other changes)
- MX graphics lack NVENC and are based on the Pascal architecture.{{cite news|url=https://www.dobreprogramy.pl/nvidia-geforce-mx250-i-mx230-dwie-nowe-grafiki-do-laptopow,6628559174182529a|title=NVIDIA GeForce MX250 i MX230 – dwie "nowe" grafiki do laptopów|work=Dobre Programy|date=2019-02-21|language=pl|access-date=18 February 2022|archive-date=18 February 2022|archive-url=https://web.archive.org/web/20220218175646/https://www.dobreprogramy.pl/nvidia-geforce-mx250-i-mx230-dwie-nowe-grafiki-do-laptopow,6628559174182529a|url-status=live}}
- Adds Tensor Cores, ray-tracing hardware acceleration, and RTX IO (RTX cards only)
- Nvidia DLSS
{{Row hover highlight}}
{{notelist|refs=
{{efn|name=CoreConfig|Main Shader Processors : Texture Mapping Units : Render Output Units : Tensor Cores (or FP16 Cores in GeForce 16 series) : Ray-tracing Cores (Streaming Multiprocessors) (Graphics Processing Clusters)}}
{{efn|name=PixelFillrate|Pixel fillrate is calculated as the lowest of three numbers: number of ROPs multiplied by the base core clock speed, number of rasterizers multiplied by the number of fragments they can generate per rasterizer multiplied by the base core clock speed, and the number of streaming multiprocessors multiplied by the number of fragments per clock that they can output multiplied by the base clock rate.}}
{{efn|name=TextureFillrate|Texture fillrate is calculated as the number of TMUs multiplied by the base core clock speed.}}
{{efn|name=PerfValues|Base clock, Boost clock}}
}}
=GeForce 30 series=
{{Further|GeForce 30 series|Ampere (microarchitecture)}}
- Supported APIs: Direct3D 12 Ultimate (12_2), OpenGL 4.6, OpenCL 3.0, Vulkan 1.3 and CUDA 8.6
- 3rd-generation Tensor Cores
- 2nd-generation RT cores
- RTX IO
- Improved NVDEC (adds AV1 decoding)
{{Row hover highlight}}
{{notelist|refs=
{{efn|name=CoreConfig|Main Shader Processors : Texture Mapping Units : Render Output Units : Tensor Cores (or FP16 Cores in GeForce 16 series) : Ray-tracing Cores (Streaming Multiprocessors) (Graphics Processing Clusters)}}
{{efn|name=ClockSpeed|Which base and boost core clockspeeds the GPU has depends on the TDP configuration set by the system builder}}
{{efn|name=PixelFillrate|Pixel fillrate is calculated as the lowest of three numbers: number of ROPs multiplied by the base core clock speed, number of rasterizers multiplied by the number of fragments they can generate per rasterizer multiplied by the base core clock speed, and the number of streaming multiprocessors multiplied by the number of fragments per clock that they can output multiplied by the base clock rate.}}
{{efn|name=TextureFillrate|Texture fillrate is calculated as the number of TMUs multiplied by the base core clock speed.}}
{{efn|name=PerfValues|Base clock, Boost clock.}}
}}
= GeForce 40 series =
{{Further|GeForce 40 series|Ada Lovelace (microarchitecture)}}
- Supported APIs: Direct3D 12 Ultimate (12_2), OpenGL 4.6, OpenCL 3.0, Vulkan 1.3 and CUDA 8.9
- 4th-generation Tensor Cores
- 3rd-generation RT cores
- DLSS 3 (Super Resolution + Frame Generation){{Cite web |title=Introducing NVIDIA DLSS 3 |url=https://www.nvidia.com/en-us/geforce/news/dlss3-ai-powered-neural-graphics-innovations/ |access-date=2023-01-04 |website=NVIDIA |language=en-us |archive-date=23 May 2024 |archive-url=https://web.archive.org/web/20240523145811/https://www.nvidia.com/en-us/geforce/news/dlss3-ai-powered-neural-graphics-innovations/ |url-status=live }}
- Shader Execution Reordering (SER) {{Cite web |date=2022-10-13 |title=Improve Shader Performance and In-Game Frame Rates with Shader Execution Reordering |url=https://developer.nvidia.com/blog/improve-shader-performance-and-in-game-frame-rates-with-shader-execution-reordering/ |access-date=2023-01-04 |website=NVIDIA Technical Blog |language=en-US |archive-date=25 May 2023 |archive-url=https://web.archive.org/web/20230525025659/https://developer.nvidia.com/blog/improve-shader-performance-and-in-game-frame-rates-with-shader-execution-reordering/ |url-status=live }}
{{Row hover highlight}}
class="mw-datatable wikitable sortable sort-under" style="font-size:85%; text-align:center;" |
rowspan="2" scope="col" | Model
! rowspan="2" scope="col" | Launch ! rowspan="2" scope="col" | Code name ! rowspan="2" scope="col" | Process ! rowspan="2" scope="col" {{vertical header|va=mid| Transistors (billion)}} ! rowspan="2" scope="col" | Die size (mm2) ! rowspan="2" scope="col" | Bus interface ! colspan="3" scope="colgroup" | Clock speeds{{efn|name=ClockSpeed|Which base and boost core clockspeeds the GPU has depends on the TDP configuration set by the system builder}} ! rowspan="2" scope="col" | Core config ! rowspan="2" scope="col" | L2 Cache (MiB) ! colspan="4" scope="colgroup" | Memory ! colspan="2" scope="colgroup" | Fillrate{{efn|name=PerfValues}} ! colspan="5" |Processing power (TFLOPS){{efn|name=PerfValues}} ! colspan="2" scope="colgroup" | Ray-tracing Performance ! rowspan="2" scope="col" | TDP (Watts) |
---|
scope="col" | Base core (MHz)
! scope="col" | Boost core (MHz) ! scope="col" | Memory (MHz) ! scope="col" | Size (GiB) ! scope="col" | Bandwidth (GB/s) ! scope="col" | Bus type ! scope="col" | Bus width (bit) ! scope="col" | Pixel (GP/s){{efn|name=PixelFillrate}} ! scope="col" | Texture (GT/s){{efn|name=TextureFillrate}} ! scope="col" | Half precision ! scope="col" | Single precision ! scope="col" | Double precision ! scope="col" | Tensor compute (FP16){{clarify|date=May 2024}} ! scope="col" | Tensor TOPS ! scope="col" | Rays/s (Billions) ! scope="col" | RTX OPS/s (Trillions) |
scope="row" style="text-align:left;" | GeForce RTX 4050 Mobile/ Laptop{{Cite web |title=GeForce RTX 40 Series Laptops: NVIDIA Ada Lovelace Breaks Energy-Efficiency Barrier, Supercharges 170+ Laptop Designs For Gamers & Creators |url=https://www.nvidia.com/en-us/geforce/news/geforce-rtx-40-series-laptops-available-february-8/ |access-date=2023-01-04 |website=NVIDIA |language=en-us |archive-date=21 March 2023 |archive-url=https://web.archive.org/web/20230321071930/https://www.nvidia.com/en-us/geforce/news/geforce-rtx-40-series-laptops-available-february-8/ |url-status=live }} |rowspan="3" | February 22, 2023 |AD107 |rowspan="2" | 18.9 |rowspan="2" | 146 | rowspan="3" |PCIe 4.0 x8 |1140-2370 |1605-2370 | rowspan="3" |2000 |2560:80: |12 |6 |168.0 |rowspan="5" | GDDR6 |96 |36.4-75.8 |91.2-189.6 |5.83-12.1 |5.83-12.1 |0.09-0.18 |46.6-97.0 |93.3-194.1 | | |rowspan="3" | 35–115 |
scope="row" style="text-align:left;" | GeForce RTX 4060 Mobile/ Laptop |AD107 |1140-2295 |1470-2370 |3072:96: | rowspan="2" | 32 |rowspan="2" | 8 |rowspan="2" | 224.0 |rowspan="2" | 128 |36.4-73.4 |109.4-220.3 |7.00-14.1 |7.00-14.1 |0.10-0.22 |56.0-112.8 |112.0-225.6 | | |
scope="row" style="text-align:left;" |GeForce RTX 4070 Mobile/ Laptop |AD106 |22.9 |190 |735-2070 |1230-2175 |4608:144: |35.2-99.3 |105.8-298.0 |6.77-19.0 |6.77-19.0 |0.10-0.29 |54.1-152.6 |108.3-305.2 | | |
style="text-align:left;" |GeForce RTX 4080 Mobile/ Laptop |rowspan="2" | February 8, 2023 |AD104 |35.8 |295 | rowspan="2" |PCIe 4.0 x16 |795-1860 |1350-2280 | rowspan="2" |2250 |7424:232: |48 |12 |336.0 |192 |63.6-148.8 108.0-182.4 |184.4-431.5 313.2-528.9 |11.8-27.6 20.0-33.8 |11.8-27.6 20.0-33.8 |0.18-0.43 0.31-0.52 |94.4-220.9 |188.8-441.8 | | |60-150 |
scope="row" style="text-align:left;" |GeForce RTX 4090 Mobile/ Laptop |AD103 |45.9 |379 |930-1620 |1455-2580 |9728:304: |64 |16 |448.0 |256 |104.1-181.4 |282.7-492.4 |18.0-31.5 |18.0-31.5 |0.28-0.49 |144.8-252.1 |289.5-504.2 | | |80-150 |
{{notelist|refs=
{{efn|name=CoreConfig| Main Shader Processors : Texture Mapping Units : Render Output Units : Tensor Cores : Ray-tracing Cores (Streaming Multiprocessors) (Graphics Processing Clusters)}}
{{efn|name=PerfValues|Base clock, Boost clock.}}
{{efn|name=PixelFillrate|Pixel fillrate is calculated as the lowest of three numbers: number of ROPs multiplied by the base core clock speed, number of rasterizers multiplied by the number of fragments they can generate per rasterizer multiplied by the base core clock speed, and the number of streaming multiprocessors multiplied by the number of fragments per clock that they can output multiplied by the clock rate.}}
{{efn|name=TextureFillrate|Texture fillrate is calculated as the number of TMUs multiplied by the core clock speed.}}
}}
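The core-config, fillrate and processing-power footnotes above describe simple per-clock products. The following is a minimal illustrative sketch of how the listed figures can be recomputed from a table row; it is not from Nvidia or from this list's sources, the function names are invented for the example, and the assumptions are stated in the comments.

<syntaxhighlight lang="python">
# Minimal illustrative sketch (not from Nvidia or this list's sources).
# Assumptions: the "Core config" column begins with shaders:TMUs:ROPs (any further
# fields are ignored), clocks are in MHz, and each shader performs two
# floating-point operations (one multiply-add) per clock.

def parse_core_config(config: str) -> tuple[int, int, int]:
    """Return (shaders, TMUs, ROPs) from a core-config string such as '1536:96:48'."""
    shaders, tmus, rops = (int(part) for part in config.split(":")[:3])
    return shaders, tmus, rops

def single_precision_gflops(shaders: int, clock_mhz: float, ops_per_clock: int = 2) -> float:
    """Processing power: shader count x clock x operations per clock."""
    return shaders * clock_mhz * ops_per_clock / 1000.0

def texture_fillrate_gt_s(tmus: int, clock_mhz: float) -> float:
    """Texture fillrate: TMUs x core clock, as in the footnote above."""
    return tmus * clock_mhz / 1000.0

def rop_pixel_fillrate_gp_s(rops: int, clock_mhz: float) -> float:
    """ROP-limited pixel fillrate: ROPs x core clock. The footnote above defines the
    full figure as the minimum of this and two further limits (rasterizer and SM
    output rates), which require data not shown in these tables."""
    return rops * clock_mhz / 1000.0

# Example: GeForce GTX 1660 Ti (Laptop) from the GeForce 16 series table above.
shaders, tmus, rops = parse_core_config("1536:96:48")
boost_clock = 1590.0  # MHz; which clock a table quotes varies by generation
print(single_precision_gflops(shaders, boost_clock))  # ~4884 GFLOPS
print(texture_fillrate_gt_s(tmus, boost_clock))       # ~152.6 GT/s
print(rop_pixel_fillrate_gp_s(rops, boost_clock))     # ~76.3 GP/s
</syntaxhighlight>

Run against the GeForce GTX 1660 Ti (Laptop) entry at its 1590&nbsp;MHz boost clock, this reproduces the 4884&nbsp;GFLOPS, 152.6&nbsp;GT/s and 76.32&nbsp;GP/s shown in the GeForce 16 series table above.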
=GeForce MX series=
{{Row hover highlight}}
{{notelist|refs=
{{efn|name=CoreConfig|Shader Processors : Texture mapping units : Render output units : Streaming Multiprocessors : Ray tracing cores : Tensor Cores}}
{{efn|name=PixelFillrate|Pixel fillrate is calculated as the lowest of three numbers: number of ROPs multiplied by the base core clock speed, number of rasterizers multiplied by the number of fragments they can generate per rasterizer multiplied by the base core clock speed, and the number of streaming multiprocessors multiplied by the number of fragments per clock that they can output multiplied by the base clock rate.}}
{{efn|name=TextureFillrate|Texture fillrate is calculated as the number of TMUs multiplied by the base core clock speed.}}
{{efn|name=PerfValues|Base clock, Boost clock}}
}}
Workstation GPUs
=Quadro=
{{Further|Quadro}}
{{Row hover highlight}}
class="mw-datatable wikitable sortable sort-under" style="font-size:85%; text-align:center;" |
rowspan=2 | Model
! rowspan=2 | Code name ! rowspan=2 | Fab (nm) ! rowspan=2 | Core clock (MHz) ! rowspan=2 | Memory clock (MHz) ! rowspan="2" | Core config1 ! colspan=4 | Memory ! colspan="2" |Fillrate ! colspan="2" | Supported API version ! rowspan=2 | TDP (Watts) ! rowspan=2 | Notes |
---|
Size (MiB)
! Bandwidth (GB/s) ! Bus type ! Bus width (bit) !Pixel (GP/s) !Texture (GT/s) ! Direct3D ! OpenGL |
Quadro
| rowspan="2" |NV10GL | rowspan="2" |220 | rowspan="6" |AGP 4x |135 |166 | rowspan="2" |0:4:4:4 | rowspan="5" |64 |2.66 |SDR | rowspan="14" |128 | rowspan="2" |0.54 | rowspan="2" |0.54 |7 | rowspan="5" |1.2 | rowspan="14" | | rowspan="11" | |
Quadro DDR
|135 |333 |5.312 |DDR | |
Quadro2 MXR
| rowspan="2" |NV11GL | rowspan="2" |180 |200 |183 | rowspan="2" |0:2:4:2 ||2.93 | rowspan="2" |SDR |0.4 |0.4 | |
Quadro2 EX
|175 |166 |2.7 |0.35 |0.35 | |
Quadro2 PRO
|NV15GL |150 |250 |400 |0:4:8:4 |6.4 | rowspan="3" |DDR |1 |2 | |
Quadro DCC
||NV20GL |180 |200 |460 |1:4:8:4 | rowspan="3" |128 |7.4 |0.8 |1.6 |8 | rowspan="6" |1.4 |
Quadro4 380XGL
|NV18GL | rowspan="8" |150 |AGP 8x |275 |513 | rowspan="4" |0:2:4:2 |8.2 |0.55 |1.1 | rowspan="4" |7 |
Quadro4 500XGL
| rowspan="2" |NV17GL | rowspan="2" |AGP 4x |250 |166 |2.7 |SDR |0.5 |1 |
Quadro4 550XGL
|270 | rowspan="2" |400 | rowspan="3" |64 | rowspan="2" |6.4 | rowspan="6" |DDR |0.59 |1.08 |
Quadro4 580XGL
|NV18GL |AGP 8x |300 |0.6 |1.2 |
Quadro4 700XGL
| rowspan="3" |NV25 | rowspan="3" |AGP 4x | rowspan="2" |275 | rowspan="2" |550 | rowspan="4" |2:4:8:4 | rowspan="2" |8.8 | rowspan="2" |1.1 | rowspan="2" |2.2 |8 |
Quadro4 750XGL
| rowspan="3" |128 | |1.5 | rowspan="3" |Stereo display |
Quadro4 900XGL
| rowspan="2" |300 | rowspan="2" |650 | rowspan="2" |10.4 | rowspan="2" |1.2 | rowspan="2" |2.4 | | rowspan="2" |1.4 |
Quadro4 980XGL
|NV28GL |AGP 8x | |
rowspan=2 | Model
! rowspan=2 | Code name ! rowspan=2 | Fab (nm) ! rowspan=2 | Core clock (MHz) ! rowspan=2 | Memory clock (MHz) ! rowspan="2" | Core config1 ! Size (MiB) ! Bandwidth (GB/s) ! Bus type ! Bus width (bit) !Pixel (GP/s) !Texture (GT/s) ! Direct3D ! OpenGL ! rowspan=2 | TDP (Watts) ! rowspan=2 | Features |
colspan=4 | Memory
! colspan="2" |Fillrate ! colspan="2" | Supported API version |
=Quadro FX series=
{{Further|Quadro}}
{{Row hover highlight}}
class="mw-datatable wikitable sortable sort-under" style="font-size:85%; text-align:center;" |
rowspan=2 | Model
! rowspan=2 | Code name ! rowspan=2 | Fab (nm) ! rowspan=2 | Core clock (MHz) ! rowspan=2 | Memory clock (MHz) ! rowspan="2" | Core config1* ! colspan=4 | Memory ! colspan="2" |Fillrate ! colspan="2" | Supported API version ! rowspan=2 | TDP (Watts) ! rowspan=2 | Notes |
---|
Size (MiB)
! Bandwidth (GB/s) ! Bus type ! Bus width (bit) !Pixel (GP/s) !Texture (GT/s) ! Direct3D ! OpenGL |
Quadro FX 500
|NV34GL | rowspan="3" |150 |AGP 8x |270 |480 | rowspan="2" |2:4:4:4 |128 |7.68 | rowspan="3" |DDR | rowspan="6" |128 |1.08 |1.08 |9.0 | rowspan="8" |2.0 | rowspan="8" | | rowspan="7" |Stereo display |
Quadro FX 600
|NV34GL |PCI |350 |480 |128 |7.68 |1.4 |1.4 | |
Quadro FX 700
|NV35GL | rowspan="8" |AGP 8x |275 |550 |3:4:8:4 | rowspan="4" |128 |8.8 |1.1 |2.2 | |
Quadro FX 1000
|NV30GL | rowspan="7" |130 |300 |600 |2:4:8:4 |9.6 |GDDR2 |1.2 |2.4 | |
Quadro FX 1100
|NV36GL |425 |650 |3:4:4:4 |10.4 |DDR2 |1.7 |1.7 | |
Quadro FX 2000
|NV30GL | rowspan="3" |400 |800 |2:4:8:4 |12.8 |GDDR2 | rowspan="3" |1.6 | rowspan="3" |3.2 | |
Quadro FX 3000
| rowspan="2" |NV35GL | rowspan="2" |850 | rowspan="2" |3:4:8:4 | rowspan="4" |256 |27.2 | rowspan="2" |DDR | rowspan="4" |256 | |
Quadro FX 3000G
|27.2 | |Stereo display, Genlock |
Quadro FX 4000
| rowspan="2" |NV40GL | rowspan="2" |375 | rowspan="2" |1000 | rowspan="2" |6:12:12:12 | rowspan="2" |32.0 | rowspan="2" |GDDR3 | rowspan="2" |4.5 | rowspan="2" |4.5 |9.0c | rowspan="2" |2.1 |142 |Stereo display |
Quadro FX 4000 SDI
| | |Stereo display, Genlock |
=Quadro FX (x300) series=
{{Further|Quadro}}
{{Row hover highlight}}
class="mw-datatable wikitable sortable sort-under" style="font-size:85%; text-align:center;" |
rowspan=2 | Model
! rowspan=2 | Code name ! rowspan=2 | Fab (nm) ! rowspan=2 | Core clock (MHz) ! rowspan=2 | Memory clock (MHz) ! rowspan="2" | Core config1 ! colspan=2 | Fillrate ! colspan=4 | Memory ! colspan=2 | Supported API version ! rowspan=2 | TDP (Watts) |
---|
Pixel (GP/s)
! Texture (GT/s) ! Size (MiB) ! Bandwidth (GB/s) ! Bus type ! Bus width (bit) ! Direct3D ! OpenGL |
Quadro FX 330
|NV37GL |150 | rowspan="2" |PCIe x16 |250 |400 |2:4:4:4 |1 |1 |64 |3.2 | rowspan="2" |DDR |128 | rowspan="2" |9.0 |2.0 |21 |
Quadro FX 1300
|NV38 |130 |350 |550 |3:4:8:4 |1.4 |2.8 |128 |17.6 |256 |2.1 |55 |
=Quadro FX (x400) series=
{{Further|Quadro}}
{{Row hover highlight}}
class="mw-datatable wikitable sortable sort-under" style="font-size:85%; text-align:center;" |
rowspan=2 | Model
! rowspan=2 | Code name ! rowspan=2 | Fab (nm) ! rowspan=2 | Core clock (MHz) ! rowspan=2 | Memory clock (MHz) ! rowspan="2" | Core config1 ! colspan=4 | Memory ! colspan="2" |Fillrate ! colspan="2" | Supported API version ! rowspan=2 | TDP (Watts) ! rowspan=2 | Notes |
---|
Size (MiB)
! Bandwidth (GB/s) ! Bus type ! Bus width (bit) !Pixel (GP/s) !Texture (GT/s) ! Direct3D ! OpenGL |
Quadro FX 540
|NV43GL |90 | rowspan="5" |PCIe x16 |300 |550 |4:8:8:8 | rowspan="2" |128 |8.8 |GDDR3 |128 |2.4 |2.4 | rowspan="5" |9.0c | rowspan="5" |2.1 |35 | |
Quadro FX 1400
|NV41 | rowspan="2" |130 | rowspan="2" |350 |600 |5:8:8:8 |19.2 |DDR | rowspan="4" |256 |2.8 |2.8 |75 | rowspan="4" |Stereo display, SLI |
Quadro FX 3400
|NV45GL/ NV40 |900 | rowspan="2" |5:12:12:12 | rowspan="2" |256 |28.8 | rowspan="3" |GDDR3 |4.2 |4.2 |101 |
Quadro FX 3450
|NV41 |110 |425 |1000 |32.0 |5.1 |5.1 |83 |
Quadro FX 4400
|NV45GL A3/ NV40 |130 |400 |1050 |6:16:16:16 |512 |33.7 |6.4 |6.4 |110 |
=Quadro FX (x500) series=
{{Further|Quadro}}
{{Row hover highlight}}
class="mw-datatable wikitable sortable sort-under" style="font-size:85%; text-align:center;" |
rowspan=2 | Model
! rowspan=2 | Code name ! rowspan=2 | Fab (nm) ! rowspan=2 | Core clock (MHz) ! rowspan=2 | Memory clock (MHz) ! rowspan="2" | Core config1 ! colspan=4 | Memory ! colspan="2" |Fillrate ! colspan="2" | Supported API version ! rowspan=2 | TDP (Watts) ! rowspan=2 | Notes |
---|
Size (MiB)
! Bandwidth (GB/s) ! Bus type ! Bus width (bit) !Pixel (GP/s) !Texture (GT/s) ! Direct3D ! OpenGL |
Quadro FX 350
|G72 |90 | rowspan="10" |PCIe x16 |550 | rowspan="2" |400 |3:4:4:2 | rowspan="2" |128 |6.4 |DDR2 |64 |1.1 |2.2 |9.0c | rowspan="10" |2.1 |21 | rowspan="3" | |
Quadro FX 550
|NV43 |110 |360 |4:8:8:8 |12.8 | rowspan="7" |GDDR3 |128 |2.88 |2.88 | |30 |
Quadro FX 1500
| rowspan="2" |G71 | rowspan="2" |90 |325 |625 |6:16:16:16 | rowspan="2" |256 |40.0 | rowspan="4" |256 |5.2 |5.2 | |65 |
Quadro FX 3500
|450 |660 |7:20:20:16 |42.2 |7.2 |9 | |80 | rowspan="2" |Stereo display, SLI |
Quadro FX 4500
| rowspan="2" |G70 | rowspan="2" |110 | rowspan="2" |430 | rowspan="2" |525 | rowspan="2" |8:24:24:16 | rowspan="2" |512 | rowspan="2" |33.6 | rowspan="2" |6.88 | rowspan="2" |10.3 | |109 |
Quadro FX 4500 SDI
| |116 |Stereo display, Genlock |
Quadro FX 4500X2
| rowspan="4" |G71 | rowspan="4" |90 |500 |605 |2x 8:24:24:16 |2x 512 |2x 38.7 |2x 256 |2x 8 |2x 12 | |145 | rowspan="4" |Stereo display, SLI, Genlock |
Quadro FX 4500
rev. A2 | rowspan="3" |650 |800 | rowspan="3" |8:24:24:16 |512 |51.2 | rowspan="3" |256 | rowspan="3" |10.4 | rowspan="3" |15.6 | |105 |
Quadro FX 5500
| rowspan="2" |505 | rowspan="2" |1024 | rowspan="2" |32.3 | rowspan="2" |DDR2 | |96 |
Quadro FX 5500 SDI
| |104 |
=Quadro FX (x600) series=
{{Further|Quadro}}
- 1 Vertex shaders: pixel shaders: texture mapping units: render output units
- 2 Unified shaders: texture mapping units: render output units
{{Row hover highlight}}
class="mw-datatable wikitable sortable sort-under" style="font-size:85%; text-align:center;" |
rowspan=2 | Model
! rowspan="2" |Launch ! rowspan=2 | Code name ! rowspan=2 | Fab (nm) ! rowspan=2 | Core clock (MHz) ! rowspan=2 | Shader clock (MHz) ! rowspan=2 | Memory clock (MHz) ! rowspan="2" | Core config12 ! colspan=4 | Memory ! colspan="2" |Fillrate ! colspan="2" | Processing power (GFLOPS){{cite web|title=Quadro-Powered All-In-One Workstations|url=http://www.nvidia.com/content/PDF/product-comparison/Quadro-Product-Comparison.pdf|url-status=live|archive-url=https://web.archive.org/web/20130624012038/http://www.nvidia.com/content/PDF/product-comparison/Quadro-Product-Comparison.pdf|archive-date=2013-06-24|access-date=2015-12-11|publisher=Nvidia}}{{cite web|date=2010-10-25|title=Quadro FX series|url=http://hgpu.org/?cat=92|url-status=live|archive-url=https://web.archive.org/web/20160113203735/http://hgpu.org/?cat=92|archive-date=2016-01-13|access-date=2015-12-11|website=Hgpu.org}} ! colspan="4" | Supported API version ! rowspan=2 | TDP (Watts) ! rowspan=2 | Notes |
---|
Size (MiB)
! Bandwidth (GB/s) ! Bus type ! Bus width (bit) !Pixel (GP/s) !Texture (GT/s) ! Direct3D ! OpenGL !CUDA |
Quadro FX 560
|Apr 20, 2006 |G73GL | rowspan="4" |90 | rowspan="3" |PCIe x16 |350 |350 |1200 |5:12:12:8 |128 |19.2 | rowspan="4" |GDDR3 |128 |2.8 |4.2 | | |9.0c |2.1 | - | - |30 | |
Quadro FX 4600{{sup|2}}
|Mar 5, 2007 | rowspan="2" |G80-850-A2 + NVIO-1-A3 | rowspan="2" |500 | rowspan="2" |1200 | rowspan="2" |1400 |96:24:24 | rowspan="2" |768 | rowspan="2" |67.2 | rowspan="3" |384 | rowspan="2" |12 | rowspan="2" |24 |345 | rowspan="3" | - | rowspan="3" |10.0 | rowspan="3" |3.3 | rowspan="2" |1.1 | rowspan="2" |1.0 |134 | rowspan="3" |Stereo display, SLI, Genlock |
Quadro FX 4600 SDI{{sup|2}}
|Mar 5, 2007 |96:24:24 |345 |154 |
Quadro FX 5600{{sup|2}}
|Mar 5, 2007 |G80-875-A2 + NVIO-1-A3 |PCIe 2.0 x16 |600 |1350 |1600 |128:32:24 |1536 |76.8 |14.4 |38.4 |518.4 | - | - |171 |
=Quadro FX (x700) series=
{{Further|Quadro}}
{{Row hover highlight}}
class="mw-datatable wikitable sortable sort-under" style="font-size:85%; text-align:center;" |
rowspan=2 | Model
! rowspan="2" |Launch ! rowspan=2 | Code name ! rowspan=2 | Fab (nm) ! rowspan=2 | Core clock (MHz) ! rowspan=2 | Shader clock (MHz) ! rowspan=2 | Memory clock (MHz) ! rowspan="2" | Core config1 ! colspan=4 | Memory ! colspan="2" |Fillrate ! colspan="2" | Processing power (GFLOPS) ! colspan="4" | Supported API version ! rowspan=2 | TDP (Watts) ! rowspan=2 | Notes |
---|
Size (MiB)
! Bandwidth (GB/s) ! Bus type ! Bus width (bit) !Pixel (GP/s) !Texture (GT/s) ! Direct3D ! OpenGL !CUDA |
Quadro FX 370
|Sep 12, 2007 |G84-825-A2 | rowspan="2" |80 | rowspan="2" |PCIe x16 |360 |720 |800 |16:8:4 | rowspan="2" |256 |6.4 | rowspan="5" |DDR2 | rowspan="2" |64 |1.44 |2.88 |34.56 | rowspan="8" | - | rowspan="8" |10.0 | rowspan="8" |3.3 | rowspan="2" |1.1 | rowspan="2" |1.1 |35 | |
Quadro FX 370 LP
|Nov 6, 2008 |G98 |540 |1300 |1000 |8:8:4 |8 |2.16 |4.32 |25.92 |25 |DMS-59 for two Single Link DVI |
Quadro FX 470
|Sep 12, 2007 |MCP7A-U |65 |PCIe 2.0 x16 |580 |1400 |800 |16:8:4 |Up to 256 MiB from system memory. | rowspan="3" |12.8 | rowspan="3" |128 |2.32 |4.64 |67.2 | - | - |30 |based on GeForce 9400 mGPU |
Quadro FX 570
|Sep 12, 2007 |G84-850-A2 | rowspan="2" |80 | rowspan="2" |PCIe x16 | rowspan="2" |460 | rowspan="2" |920 | rowspan="2" |800 |16:8:8 |256 | rowspan="2" |3.68 |3.68 |44.1 | rowspan="5" |1.1 | rowspan="5" |1.1 |38 | rowspan="2" | |
Quadro FX 1700
|Sep 12, 2007 |G84-875-A2 |32:16:8 | rowspan="2" |512 |7.36 |88.32 |42 |
Quadro FX 3700
|Jan 8, 2008 |G92-875-A2 | rowspan="3" |65 | rowspan="3" |PCIe 2.0 x16 |500 |1250 | rowspan="3" |1600 |112:56:16 |51.2 | rowspan="3" |GDDR3 |256 |8 |28 |420 |78 |Stereo display, SLI |
Quadro FX 4700X2
|Apr 18, 2008 |2x G92-880-A2 |600 |1500 |2x 128:64:16 |2x 1024 |2x 51.2 |2x 256 |2x 9.6 |2x 38.4 |2x 576 |226 |SLI |
Quadro VX 200
|Jan 8, 2008 |G92-851-A2 |450 |1125 |96:48:16 |512 |51.2 |256 |7.2 |21.6 |324 |75 |2x Dual-link DVI, S-Video, optimised for Autodesk AutoCAD |
=Quadro FX (x800) series=
{{Further|Quadro}}
{{Row hover highlight}}
class="mw-datatable wikitable sortable sort-under" style="font-size:85%; text-align:center;" |
rowspan=2 | Model
! rowspan="2" |Launch ! rowspan=2 | Code name ! rowspan=2 | Fab (nm) ! rowspan=2 | Core clock (MHz) ! rowspan=2 | Shader clock (MHz) ! rowspan=2 | Memory clock (MHz) ! rowspan="2" | Core config1 ! colspan=4 | Memory ! colspan="2" |Fillrate ! colspan="2" | Processing power (GFLOPS) ! colspan="4" | Supported API version ! rowspan=2 | TDP (Watts) ! rowspan=2 | Notes |
---|
Size (MiB)
! Bandwidth (GB/s) ! Bus type ! Bus width (bit) !Pixel (GP/s) !Texture (GT/s) ! Direct3D ! OpenGL !CUDA |
Quadro FX 380
|Mar 30, 2009 |G96-850-C1 |65 | rowspan="8" |PCIe 2.0 x16 |450 |1100 |1400 |16:8:8 |256 |22.4 | rowspan="8" |GDDR3 |128 |3.6 |3.6 |52.8 | rowspan="4" | - | rowspan="8" |10.0 | rowspan="8" |3.3 | rowspan="8" |1.1 |1.1 |34 |Two Dual Link DVI, no DisplayPort |
Quadro FX 380 LP
|Dec 1, 2009 |GT218GL |40 |589 |1402 | rowspan="7" |1600 |16:8:4 | rowspan="2" |512 |12.8 |64 |2.356 |4.712 |67.296 |1.2 |28 |DisplayPort, Dual Link DVI |
Quadro FX 580
|Apr 9, 2009 |G96-875-C1 | rowspan="2" |65 |450 |1125 |32:16:8 |25.6 |128 |3.6 |7.2 |108 | rowspan="2" |1.1 |40 |Dual DisplayPort, Dual Link DVI |
Quadro FX 1800
|Mar 30, 2009 |G94-876-B1 |550 |1375 |64:32:12 |768 |38.4 |192 |6.6 |17.6 |264 |59 | rowspan="4" |Stereo DP Dual Link DVI, Dual DisplayPort, SLI |
Quadro FX 3800
|Mar 30, 2009 |G200-835-B3 + NVIO2-A2 | rowspan="4" |55 |600 | rowspan="2" |1204 |192:64:16 |1024 |51.2 |256 |9.632 |38.528 |691.2 |86.4 | rowspan="4" |1.3 |108 |
Quadro FX 4800
|Nov 11, 2008 |G200-850-B3 + NVIO2-A2 |602 |192:64:24 |1536 |76.8 |384 |14.448 |38.528 |693.504 |86.688 |150 |
Quadro FX 5800
|Nov 11, 2008 |G200-875-B2 + NVIO2-A2 |610 |1296 |240:80:32 |4096 |102.4 |512 |20.736 |51.840 |878.4 |109.8 |189 |
Quadro CX{{cite web |url=http://www.nvidia.com/object/product_quadro_cx_us.html |title=Nvidia Quadro CX is the accelerator for Adobe Creative Suite 4 |website=Nvidia.com |access-date=2015-12-11 |archive-url=https://web.archive.org/web/20151222092543/http://www.nvidia.com/object/product_quadro_cx_us.html |archive-date=2015-12-22 |url-status=live }}
|Nov 11, 2008 |GT200GL + NVIO2 |602 |1204 |192:64:24 |1536 |76.8 |384 |14.448 |38.528 |693.504 |86.688 |150 |Display Port and dual-link DVI Output, optimised for Adobe Creative Suite 4 |
=Quadro x000 series=
{{Further|Quadro}}
- 1 Unified shaders: texture mapping units: render output units
- 4 Each SM in the Fermi architecture contains four texture filtering units for every texture address unit; the full GF100 has 64 texture address units and 256 texture filtering units in total
{{Row hover highlight}}
class="mw-datatable wikitable sortable sort-under" style="font-size:85%; text-align:center;" |
rowspan=2 | Model
! rowspan="2" |Launch ! rowspan=2 | Code name ! rowspan=2 | Fab (nm) ! rowspan=2 | Core clock (MHz) ! rowspan=2 | Shader clock (MHz) ! rowspan=2 | Memory clock (MHz) ! rowspan="2" | Core config1 ! colspan=4 | Memory ! colspan="2" |Fillrate ! colspan="2" | Processing power (GFLOPS) ! colspan="4" | Supported API version ! rowspan=2 | TDP (Watts) ! rowspan=2 | Notes |
---|
Size (GiB)
! Bandwidth (GB/s) ! Bus type ! Bus width (bit) !Pixel (GP/s) !Texture (GT/s) ! Direct3D ! OpenGL !CUDA |
Quadro 400
|Apr 5, 2011 |GT216GL | rowspan="8" |40 | rowspan="8" |PCIe 2.0 x16 |450 |1125 |1540 |48:16:4 |0.5 |12.3 | rowspan="2" |DDR3 |64 |1.8 |7.2 |108 | - |10.1 |4.5 | rowspan="8" |1.1 |1.2 |32 | rowspan="2" |DisplayPort, Dual Link DVI |
Quadro 600
|Dec 13, 2010 |GF108GL |640 |1280 |1600 |96:16{{sup|4}}:4 | rowspan="2" |1 |25.6 | rowspan="2" |128 |2.56 |10.24 |245.76 |15 | rowspan="7" |11.0 | rowspan="7" |4.6 | rowspan="2" |2.1 |40 |
Quadro 2000
|Dec 24, 2010 |GF106GL (GF106-875) |625 |1250 |2600 |192:32{{sup|4}}:16 |41.6 | rowspan="6" |GDDR5 |10 |20 |480 |30 |62 |Stereo DP Dual Link DVI, Dual DisplayPort |
Quadro 4000
|Nov 2, 2010 | rowspan="3" |GF100 |475 |950 |2800 |256:324:32 |2 |89.6 |256 |15.2 |15.2 |486.4 |243 | rowspan="5" |2.0 |142 | rowspan="5" | |
Quadro 5000
|Feb 23, 2011 |513 |1026 | rowspan="2" |3000 |352:44{{sup|4}}:40 |2.5 |120 |320 |20.53 |22.572 |722.304 |359 |152 |
Quadro 6000
|Dec 10, 2010 |574 |1148 |448:56{{sup|4}}:48 |6 |144 | rowspan="2" |384 |27.552 |32.144 |1028.608 |515 | rowspan="2" |204 |
Quadro 7000
|May 2, 2012 |GF110 |651 |1301 |3696 |512:64{{sup|4}}:48 |6 |177 |31.248 |41.7 |1332 |667 |
Quadro Plex 7000{{cite web |url=https://www.techpowerup.com/gpudb/902/quadro-plex-7000 |title=NVIDIA Quadro Plex 7000 Specs |access-date=2018-03-26 }}{{dead link|date=June 2022|bot=medic}}{{cbignore|bot=medic}}
|July 25, 2011 |2x GF100 |574 |1148 |3000 |2x 512:64{{sup|4}}:48 |2x 6 |2x 144 |2x 384 |2x 18.37 |2x 36.74 |2x 1176 |2x 588 |600 |
=Quadro Kxxx series=
{{Further|Quadro|Kepler (microarchitecture)}}
{{Row hover highlight}}
class="mw-datatable wikitable sortable sort-under" style="font-size:85%; text-align:center;" |
rowspan=2 | Model
! rowspan="2" |Launch ! rowspan=2 | Code name ! rowspan=2 | Fab (nm) ! rowspan=2 | Core clock (MHz) ! rowspan=2 | Shader clock (MHz) ! rowspan=2 | Memory clock (MHz) ! rowspan="2" | Core config1 ! colspan=4 | Memory ! colspan="2" |Fillrate ! colspan="2" | Processing power (GFLOPS) ! colspan=4 | Supported API version ! rowspan=2 | TDP (Watts) ! rowspan=2 | Notes |
---|
Size (GiB)
! Bandwidth (GB/s) ! Bus type ! Bus width (bit) !Pixel (GP/s) !Texture (GT/s) ! Direct3D ! OpenGL ! Vulkan ! CUDA |
Quadro 410
|Aug 7, 2012 | rowspan="4" |GK107 | rowspan="7" |28 | rowspan="6" |PCIe 2.0 x16 |706 |706 |1800 |192:16:8 |0.5 |14.4 |64 |5.65 |11.3 |271.10 | rowspan="5" | | rowspan="7" |11.0 | rowspan="7" |4.6 | rowspan="7" |1.2 | rowspan="6" |3.0 |38 | |
Quadro K600
|Mar 1, 2013 |876 |876 |891 |192:16:16 |1 |28.5 |DDR3 | rowspan="3" |128 |14.0 |14.0 |336.38 |41 | 6.3" Card |
Quadro K2000
|Mar 1, 2013 | rowspan="2" |954 | rowspan="2" |954 | rowspan="2" |1000 |384:32:16 |2 | rowspan="2" |64 | rowspan="5" |GDDR5 | rowspan="2" |15.2 | rowspan="2" |30.5 |732.67 | rowspan="2" |51 | rowspan="2" | 7.97" Card |
Quadro K2000D
|Mar 1, 2013 |384:32:16 |2 | |
Quadro K4000
|Mar 1, 2013 |GK106 |810.5 |810.5 |1404 |768:64:24 |3 |134.8 |192 |19.4 |51.9 |1244.93 |80 | 9.5" Card |
Quadro K5000
|Aug 17, 2012 |GK104 |706 |706 |1350 |1536:128:32 |4 |172.8 |256 |22.6 |90.4 |2168.83 |90.4 |122 | rowspan="2" | 10.5" Card |
Quadro K6000
|Jul 23, 2013 |GK110 |PCIe 3.0 x16 |901.5 |901.5 |1502 |2880:240:48 |12 |288 |384 |54.1 |216 |5196 |1732 |3.5 |225 |
{{Row hover highlight}}
class="mw-datatable wikitable sortable sort-under" style="font-size:85%; text-align:center;" |
rowspan=2 | Model
! rowspan="2" |Launch ! rowspan=2 | Code name ! rowspan=2 | Fab (nm) ! rowspan=2 | Core clock (MHz) ! rowspan=2 | Shader clock (MHz) ! rowspan=2 | Memory clock (MHz) ! rowspan="2" | Core config1 ! colspan=4 | Memory ! colspan="2" |Fillrate ! colspan="2" | Processing power (GFLOPS) ! colspan=4 | Supported API version ! rowspan=2 | TDP (Watts) ! rowspan=2 | Notes |
---|
Size (GiB)
! Bandwidth (GB/s) ! Bus type ! Bus width (bit) !Pixel (GP/s) !Texture (GT/s) ! Direct3D ! OpenGL ! Vulkan ! CUDA |
Quadro K420
|Jul 22, 2014 |GK107 | rowspan="6" |28 | rowspan="5" |PCIe 2.0 x16 |780 |780 |1800 |192:16:16 |29 | rowspan="2" |DDR3 | rowspan="4" |128 |12.48 |12.48 |299.52 |12.48 |11.0 | rowspan="6" |4.6 |1.2 |3.0 |41 | |
Quadro K620
|Jul 22, 2014 |GM107-850 |1000 |1000 |900 |384:24:16 |2 |28.8 |16.0 |24.0 |768.0 |24.0 | rowspan="3" |12.0 | rowspan="3" |1.3 | rowspan="3" |5.0 | rowspan="2" |45 | 6.3" Card |
Quadro K1200
|Jan 28, 2015 |GM107-860 |954 | |1253 |512:32:16 | rowspan="3" |4 |80.2 | rowspan="4" |GDDR5 |15.3 |30.5 |1083 | | rowspan="2" | 7.97" Card |
Quadro K2200
|Jul 22, 2014 |1046 |1046 |1253 |640:40:16 |80.2 |16.7 |41.8 |1338.9 |41.8 |68 |
Quadro K4200
|Jul 22, 2014 |GK104 |780 |780 |1350 |1344:112:32 |172.8 | rowspan="2" |256 |24.96 |87.36 |2096.64 |87.36 | rowspan="2" |11.0 | rowspan="2" |1.2 |3.0 |105 | 9.5" Card |
Quadro K5200
|Jul 22, 2014 |GK110B |PCIe 3.0 x16 |650 |650 |1500 |2304:192:32 |8 |192 |20.8 |124.8 |2995.2 |124.8 |3.5 |150 | 10.5" Card |
=Quadro Mxxx series=
{{Further|Quadro|Maxwell (microarchitecture)}}
- 1 Unified shaders: texture mapping units: render output units: streaming multiprocessors
{{Row hover highlight}}
class="mw-datatable wikitable sortable sort-under" style="font-size:85%; text-align:center;" |
rowspan=2 | Model
! rowspan="2" |Launch ! rowspan=2 | Code name ! rowspan=2 | Fab (nm) ! rowspan=2 | Core clock (MHz) ! rowspan=2 | Shader clock (MHz) ! rowspan=2 | Memory clock (MHz) ! rowspan="2" | Core config1 ! colspan="2" |Cache ! colspan="4" | Memory ! colspan="2" |Fillrate ! colspan="2" | Processing power (GFLOPS) ! colspan="5" | Supported API version ! rowspan=2 | TDP (Watts) ! rowspan="2" |Release price (USD) ! rowspan="2" | Notes |
---|
L1/SM (KiB)
!L2 ! Size (GiB) ! Bandwidth (GB/s) ! Bus type ! Bus width (bit) !Pixel (GP/s) !Texture (GT/s) ! Direct3D ! OpenGL !CUDA |
Quadro M2000
|Apr 8, 2016 | rowspan="4" |28 | rowspan="4" |PCIe 3.0 x16 |796 |1163 |1653 |768:48:32:6 | rowspan="4" |48 |1 |4 |105.8 | rowspan="4" |GDDR5 |128 |37.8 |56.6 |1812.5 |56.6 | rowspan="4" |12.1 | rowspan="4" |4.6 | rowspan="4" |1.3 | rowspan="4" |1.2 | rowspan="4" |5.2 |75 |$438 | rowspan="2" |Four DisplayPort 1.2a |
Quadro M4000
|Jun 29, 2015 |773 |773 |1503 |1664:104:64:13 | rowspan="2" |2 | rowspan="2" |8 |192.4 | rowspan="2" |256 |51.2 |83.2 |2662.4 |83.2 |120 |$791 |
Quadro M5000
|Jun 29, 2015 |861 |1038 |1653 |2048:128:64:16 |211.6 |67.2 |134.4 |4300.8 |134.4 |150 |$2857 | rowspan="2" |Four DisplayPort 1.2a, One DVI-I |
Quadro M6000
|Mar 21, 2015 |988 |1114 |1653 |3072:192:96:24 |3 |12 |317 |384 |106.9 |213.9 285.2 |6070 |190 |$4200 $4999 |
=Quadro Pxxx series=
{{Further|Quadro|Pascal (microarchitecture)}}
- 1 Unified shaders: texture mapping units: render output units: streaming multiprocessors
{{Row hover highlight}}
class="mw-datatable wikitable sortable sort-under" style="font-size:85%; text-align:center;" |
rowspan=2 | Model
! rowspan="2" |Launch ! rowspan=2 | Code name{{cite web |url=https://developer.nvidia.com/video-encode-decode-gpu-support-matrix |title=Video Encode and Decode GPU Support Matrix |publisher=Nvidia |access-date=2017-05-07 |archive-url=https://web.archive.org/web/20170710130550/https://developer.nvidia.com/video-encode-decode-gpu-support-matrix |archive-date=2017-07-10 |url-status=live }} ! rowspan=2 | Fab (nm) ! rowspan=2 | Core clock (MHz) ! rowspan=2 | Boost clock (MHz) ! rowspan=2 | Memory clock (MHz) ! rowspan="2" | Core config1 ! colspan="2" |Cache ! colspan="4" | Memory ! colspan="2" |Fillrate{{cite web |title=GPU Database |url=https://www.techpowerup.com/gpudb/ |access-date=2017-05-07 |publisher=techPowerUp}}{{dead link|date=June 2022|bot=medic}}{{cbignore|bot=medic}} ! colspan="2" | Processing power (GFLOPS) ! colspan="5" | Supported API version ! rowspan=2 | TDP (Watts) ! rowspan="2" |Release price (USD) ! rowspan="2" | Notes |
---|
L1/SM (KiB)
!L2 ! Size (GiB) ! Bandwidth (GB/s) ! Bus type ! Bus width (bit) !Pixel (GP/s) !Texture (GT/s) ! Direct3D ! OpenGL !CUDA |
Quadro P400
|Feb 7, 2017 |GP107-825 |rowspan=4 | 14 | rowspan="10" | PCIe 3.0 x16 |1228 |1252 |1003 |256:16:16:2 | rowspan="9" |48 |0.5 | rowspan="3" |2 |32 | rowspan="5" |GDDR5 |64 |17.1 |17.1 |641 |20.0 | rowspan="10" |12.1 | rowspan="10" |4.6 | rowspan="10" |1.3 | rowspan="10" |1.2 | rowspan="9" |6.1 |30 |$120 |Three Mini-DisplayPort 1.4 |
Quadro P600
|Feb 7, 2017 |GP107-850 |1329 |1557 |1003 |384:24:16:3 | rowspan="3" |1 | rowspan="2" |64 | rowspan="3" |128 |21.7 |32.5 |1195 |37.3 | rowspan="2" |40 |$178 | rowspan="3" |Four Mini-DisplayPort 1.4 |
Quadro P620
|Feb 1, 2018 |GP107-855 | rowspan="2" |1266 |1354 |1003 |512:32:16:4 |23.3 |46.6 |1490 |46.6 | |
Quadro P1000
|Feb 7, 2017 |GP107-860 |1481 |1752 |640:40:32:5 |4 |82 |43.3 |54.2 |1894 |59.2 |47 |$375 |
Quadro P2000
|Feb 6, 2017 |GP106-875 | rowspan="6" | 16 |1076 |1480 |2002 |1024:64:40:8 | rowspan="2" |1.25 |5 |140 | rowspan="2" |160 |54.8 |87.7 |3010 |93.8 | rowspan="2" |75 |$585 | rowspan="3" |Four DisplayPort 1.4 |
Quadro P2200
|Jun 10, 2019 |1000 |1493 |1251 (10008) |1280:80:40:9 |5 |200 |GDDR5X |59.7 |119.4 |3822 |121.3 | |
Quadro P4000
|Feb 6, 2017 |GP104-850-A1 |1202 |1480 |1901 |1792:112:64:14 | rowspan="2" |2 |8 |243 |GDDR5 | rowspan="2" |256 |78.5 |137.4 |5300 |165.6 |105 |$815 |
Quadro P5000
|Oct 1, 2016 |GP104-875-A1 |1607 |1733 |1126 |2560:160:64:20 |16 |288 | rowspan="2" |GDDR5X |102.8 |257.1 |8873 |277.3 |180 |$2499 | rowspan="2" |Four DisplayPort 1.4, One DVI-D |
Quadro P6000
|Oct 1, 2016 |GP102-875-A1 |1506 |1645 |1126 |3840:240:96:30 |3 |24 |432 |384 |136.0 |340.0 |10882 (11758) |~340 |250 |$5999 |
Quadro GP100
|Oct 1, 2016 |GP100-876-A1 |1304 |1442 |703 (1406) |3584:224:128:56 |24 |4 |16 |720 |HBM2 |4096 |184.7 |323 |10336 |5168 |6.0 |235 | |NVLINK support |
=Quadro GVxxx series=
{{Further|Quadro|Volta (microarchitecture)}}
- 1 Unified shaders: texture mapping units: render output units: streaming multiprocessors: tensor cores
{{Row hover highlight}}
class="mw-datatable wikitable sortable sort-under" style="font-size:85%; text-align:center;" |
rowspan="2" | Model
! rowspan="2" |Launch ! rowspan="2" |Code name ! rowspan="2" | Fab (nm) ! rowspan="2" | Core clock (MHz) ! rowspan="2" | Boost clock (MHz) ! rowspan="2" | Memory clock (MHz) ! rowspan="2" | Core config1 ! colspan="2" |Cache ! colspan="4" | Memory ! colspan="2" |Fillrate ! colspan="2" | Processing power (TFLOPS) ! colspan="5" | Supported API version ! rowspan="2" |TDP (Watts) ! rowspan="2" | Notes |
---|
L1/SM (KiB)
!L2 ! Size (GiB) ! Bandwidth (GB/s) ! Bus type ! Bus width (bit) !Pixel (GP/s) !Texture (GT/s) !CUDA |
Quadro GV100{{cite web|url=https://www.nvidia.com/content/dam/en-zz/Solutions/design-visualization/documents/quadro-volta-gv100-us-nv-623049-r10-hr.pdf|title=Nvidia Quadro GV100 Data Sheet|website=NVIDIA|language=en-us|access-date=2018-11-06|archive-url=https://web.archive.org/web/20180401003614/https://www.nvidia.com/content/dam/en-zz/Solutions/design-visualization/documents/quadro-volta-gv100-us-nv-623049-r10-hr.pdf|archive-date=2018-04-01|url-status=live}}
|Mar 27, 2018 |GV100-875-A1 |12 |PCIe 3.0 x16 |1132 |1627 |848 (1696) |5120:320:128:80:640 |128 |6 |32 |870 |HBM2 |4096 |208.4 |521 |14.8 |7.4 |12.1 |4.6 |1.3 |3.0 |7.0 |250 |4x DisplayPort, NVLINK support |
= Quadro Tx00/Tx000 series =
{{Further|Quadro|Turing (microarchitecture)}}
- 1 Unified shaders: texture mapping units: render output units: streaming multiprocessors
{{Row hover highlight}}
= Quadro RTX x000 series =
{{Further|Quadro|Turing (microarchitecture)}}
- 1 Unified shaders: texture mapping units: render output units: streaming multiprocessors: tensor cores
{{Row hover highlight}}
class="mw-datatable wikitable sortable sort-under" style="font-size:85%; text-align:center;" |
rowspan="2" | Model
! rowspan="2" |Launch ! rowspan="2" |Code name ! rowspan="2" | Fab (nm) ! rowspan="2" | Core clock (MHz) ! rowspan="2" | Boost clock (MHz) ! rowspan="2" | Memory clock (MHz) ! rowspan="2" | Core config1 ! colspan="2" |Cache ! colspan="4" | Memory ! colspan="2" |Fillrate ! colspan="2" | Processing power (TFLOPS) ! colspan="5" | Supported API version ! rowspan="2" |TDP (Watts) ! rowspan="2" |Release price (USD) ! rowspan="2" | Notes |
---|
L1/SM (KiB)
!L2 ! Size (GiB) ! Bandwidth (GB/s) ! Bus type ! Bus width (bit) !Pixel (GP/s) !Texture (GT/s) !CUDA |
Quadro RTX 4000
|Nov 13, 2018 |TU104-850-A1 | rowspan="4" | 12 | rowspan="4" | PCIe 3.0 x16 |1005 |1545 |1625 |2304:144:64:36:288 | rowspan="4" |64 | rowspan="2" |4 |8 |416 | rowspan="4" | GDDR6 | rowspan="2" | 256 |98.9 |222.5 |7.119 |0.2225 | rowspan="4" | 12.2 | rowspan="4" | 4.6 | rowspan="4" | 1.3 | rowspan="4" | 3.0 | rowspan="4" | 7.5 |100-125 |$899 | rowspan="4" | 3x DisplayPort 1x USB Type-C |
Quadro RTX 5000
|Aug 13, 2018 |TU104-875-A1 |1620 |1815 | rowspan="3" |1750 |3072:192:64:48:384 |16 |448 |116.2 |348.5 |11.15 |0.3485 |125-230 |$2299 |
Quadro RTX 6000
|Aug 13, 2018 | rowspan="2" |TU102-875-A1 |1440 | rowspan="2" |1770 | rowspan="2" |4608:288:96:72:576 | rowspan="2" |6 |24 | rowspan="2" | 672 | rowspan="2" | 384 | rowspan="2" |169.9 | rowspan="2" |509.8 | rowspan="2" |16.31 | rowspan="2" |0.5098 | rowspan="2" |100-260 |$6299 |
Quadro RTX 8000
|Aug 13, 2018 |1395 |48 |$9999 |
= RTX Ax000 series =
{{Further|Quadro|Ampere (microarchitecture)}}
- 1 Unified shaders: texture mapping units: render output units: streaming multiprocessors: tensor cores
{{Row hover highlight}}
= RTX Ada Generation =
{{Further|Quadro|Ada Lovelace (microarchitecture)}}
{{Row hover highlight}}
class="mw-datatable wikitable sortable sort-under" style="font-size:85%; text-align:center;" |
rowspan="2" | Model
! rowspan="2" |Launch ! rowspan="2" |Code name ! rowspan="2" | Fab (nm) ! rowspan="2" | Core clock (MHz) ! rowspan="2" | Boost clock (MHz) ! rowspan="2" | Memory clock (MHz) ! rowspan="2" | Core config1 ! colspan="2" |Cache ! colspan="4" | Memory ! colspan="3" |Fillrate ! colspan="4" | Processing power (TFLOPS) ! colspan="5" | Supported API version ! rowspan="2" |TDP (Watts) ! colspan="2" |Size ! rowspan="2" |Release price (USD) ! rowspan="2" |Notes |
---|
L1/SM (KiB)
!L2 ! Size (GiB) ! Bandwidth (GB/s) ! Bus type ! Bus width (bit) !Pixel (GP/s) !Texture (GT/s) !Ray Tracing (TFLOPS) !Tensor compute (FP16) (sparse) !CUDA !Profile !Slots |
RTX 2000 Ada Generation{{Cite web |title=NVIDIA RTX 2000 Ada Generation |url=https://resources.nvidia.com/en-us-design-viz-stories-ep/proviz-rtx-2000-ada-datasheet |access-date=2024-06-15 |website=NVIDIA |language=en}}{{Cite web |date=2024-06-15 |title=NVIDIA RTX 2000 Ada Generation Specs |url=https://www.techpowerup.com/gpu-specs/rtx-2000-ada-generation.c4199 |access-date=2024-06-15 |website=TechPowerUp |language=en}}
|{{Date table sorting|2024|Feb|12}} |AD107-875-A1 |PCIe 4.0 x8 |1620 |2130 |2000 |2816:88:48:22:88 | rowspan="7" |128 |12 |16 |224 | rowspan="7" |GDDR6 |128 |102.2 |187.4 |27.7 |12.0 |12.0 |0.1874 |191.9 | rowspan="7" |12.2 | rowspan="7" |4.6 | rowspan="7" |1.3 | rowspan="7" |3.0 | rowspan="7" |8.9 |70 |HHHL |Double |$649 |4x mini DisplayPort |
RTX 4000 SFF Ada Generation{{Cite web |title=NVIDIA RTX 4000 SFF Ada Generation Graphics Card |url=https://www.nvidia.com/content/dam/en-zz/Solutions/rtx-4000-sff/proviz-rtx-4000-sff-ada-datasheet-2616456-web.pdf |access-date=Oct 10, 2023 |website=Nvidia}}
|{{Date table sorting|2023|Mar|21}} |AD104 |PCIe 4.0 x16 |720 |1560 |1750 |6144:192:80:48:192 | rowspan="3" |48 |20 |280 |160 |125.2 |300.5 |44.3 |19.23 |19.23 |0.3 |306.8 |70 |HHHL |Double |$1250 |4x mini DisplayPort |
RTX 4000 Ada Generation{{Cite web |title=NVIDIA RTX 4000 Ada Generation Datasheet |url=https://resources.nvidia.com/en-us-design-viz-stories-ep/rtx-4000-ada-datashe |access-date=2023-10-10 |website=NVIDIA |language=en}}
|{{Date table sorting|2023|Aug|9}} |AD104 |PCIe 4.0 x16 |1500 |2175 |1750 |6144:192:80:48:192 |20 |360 |160 |174 |417.6 |61.8 |26.73 |26.73 |0.417 |427.6 |130 |FHFL |Single |$1250 |4x DisplayPort |
RTX 4500 Ada Generation{{Cite web |title=NVIDIA RTX 4500 Ada Generation Graphics Card |url=https://resources.nvidia.com/en-us-design-viz-stories-ep/print-nvidia-rtx-450?lx=CCKW39&contentType=data-sheet |access-date=2023-10-10 |website=NVIDIA |language=en-us}}
|{{Date table sorting|2023|Aug|9}} |AD103 |PCIe 4.0 x16 |2070 |2580 |2250 (18000) |7680:240:80:60:240 |24 |432 |192 |206 |620 |91.6 |39.63 |39.63 |0.619 |637.8 |210 |FHFL |Double |$2250 |4x DisplayPort |
RTX 5000 Ada Generation{{Cite web |title=NVIDIA RTX 5000 Ada Generation Datasheet |url=https://resources.nvidia.com/en-us-design-viz-stories-ep/rtx-5000-ada-datasheet |access-date=2023-10-10 |website=NVIDIA |language=en}}
|{{Date table sorting|2023|Aug|9}} |AD102-850-KAB-A1 |PCIe 4.0 x16 |1155 |2550 |2250 (18000) |12800:400:160:100:400 | rowspan="2" |72 |32 |576 |256 |408.0 |1020 |151.0 |65.28 |65.28 |1.02 |1044 |250 |FHFL |Double |$4000 |4x DisplayPort |
RTX 5880 Ada Generation{{Cite web |title=NVIDIA RTX 5880 Ada Generation |url=https://www.nvidia.com/en-us/design-visualization/rtx-5880/ |access-date=Jan 5, 2024 |website=NVIDIA}}
|{{Date table sorting|2024|Jan|5}} |AD102 |PCIe 4.0 x16 |975 |2460 |2500 |14080:440:176:110:440 |48 |960 |384 |433.0 |1082 |160.2 |69.27 |69.27 |1.082 |1108 |285 |FHFL |Double |$6999 |4x DisplayPort |
RTX 6000 Ada Generation{{Cite web |title=NVIDIA RTX 6000 Ada Generation |url=https://www.nvidia.com/content/dam/en-zz/Solutions/design-visualization/rtx-6000/proviz-print-rtx6000-datasheet-web-2504660.pdf |access-date=Oct 10, 2023 |website=NVIDIA}}
|{{Date table sorting|2022|Dec|3}} |AD102-870-A1 |PCIe 4.0 x16 |915 |2505 |2500 |18176:568:192:142:568 |96 |48 |960 |384 |481.0 |1423 |210.6 |91.06 |91.06 |1.423 |1457 |300 |FHFL |Double |$6799 |4x DisplayPort |
= RTX PRO Blackwell series =
{{Further|Blackwell (microarchitecture)}}
- 1 Unified shaders: texture mapping units: render output units: tensor cores: ray tracing cores
{{Row hover highlight}}
class="mw-datatable wikitable sortable sort-under" style="font-size:85%; text-align:center;" |
rowspan="2" | Model
! rowspan="2" |Launch ! rowspan="2" |Code name ! rowspan="2" | Fab (nm) ! rowspan="2" | Core clock (MHz) ! rowspan="2" | Boost clock (MHz) ! rowspan="2" | Memory clock (MHz) ! rowspan="2" | Core config1 ! colspan="2" |Cache ! colspan="4" | Memory ! colspan="3" |Fillrate ! colspan="4" | Processing power (TFLOPS) ! colspan="5" | Supported API version ! rowspan="2" |TDP (Watts) ! colspan="2" |Size ! rowspan="2" |Release price (USD) ! rowspan="2" |Notes |
---|
L1/SM (KiB)
!L2 ! Size (GiB) ! Bandwidth (GB/s) ! Bus type ! Bus width (bit) !Pixel (GP/s) !Texture (GT/s) !Ray Tracing (TFLOPS) !Tensor compute (FP16) (sparse) !CUDA !Profile !Slots |
RTX PRO 4000 Blackwell
|{{Date table sorting|2025|Mar|18}} |rowspan="2" |GB203 | rowspan="5" |PCIe 5.0 x16 |1590 |2617 |rowspan="5" |1750 |8960:280:96:280:70 | rowspan="5" |128 |48 |24 |672 | rowspan="5" |GDDR7 |192 |251.2 |732.8 |TBD |46.90 |46.90 |0.7328 |TBD | rowspan="5" |12.2 | rowspan="5" |4.63 | rowspan="5" |1.3 | rowspan="5" |3.0 | rowspan="5" |11.6 |140 |FHFL |Single |$1546 |rowspan="5" |4x DisplayPort |
RTX PRO 4500 Blackwell
|{{Date table sorting|2025|Mar|18}} |1590 |2617 |10496:328:112:328:82 |64 |32 |896 |256 |293.1 |858.4 |TBD |54.94 |54.94 |0.8584 |TBD |200 |FHFL |Double |$2623 |
RTX PRO 5000 Blackwell
|{{Date table sorting|2025|Mar|18}} |rowspan="3" |GB202 |1590 |2617 |14080:440:176:440:110 |96 |48 |1344 |384 |460.6 |1151 |TBD |73.69 |73.69 |1.151 |TBD |rowspan="2" |300 |FHFL |Double |$4569 |
RTX PRO 6000 Blackwell Max-Q Workstation Edition
|{{Date table sorting|2025|Mar|18}} | rowspan="2" |1590 |2288 |rowspan="2" |24064:752:192:752:188 |rowspan="2" |128 |rowspan="2" |96 |rowspan="2" |1792 |rowspan="2" |512 |439.3 |1721 |330 |110.1 |110.1 |1.721 |TBD |FHFL |Double |rowspan="2" |$8565 |
RTX PRO 6000 Blackwell Workstation Edition
|{{Date table sorting|2025|Mar|18}} |2617 |502.5 |1968 |380 |126.0 |126.0 |1.968 |TBD |600 |FHFL |Double |
=Quadro NVS=
{{Further|Quadro}}
- 1 Vertex shaders: pixel shaders: texture mapping units: render output units
- 2 Unified shaders: texture mapping units: render output units
- * NV31, NV34 and NV36 are 2x2 pipeline designs when running vertex shaders; otherwise they are 4x1 pipeline designs.
{{Row hover highlight}}
class="mw-datatable wikitable sortable sort-under" style="font-size:85%; text-align:center;" |
rowspan=2 | Model
! rowspan="2" |Launch ! rowspan=2 | Code name ! rowspan=2 | Fab (nm) ! rowspan=2 | Core clock (MHz) ! rowspan=2 | Shader clock (MHz) ! rowspan=2 | Memory clock (MHz) ! rowspan="2" | Core config12* ! colspan=4 | Memory ! colspan="2" |Fillrate ! Processing power (GFLOPS) ! colspan=3 | Supported API version ! rowspan=2 | TDP (Watts) ! rowspan=2 | Notes |
---|
Size (MiB)
! Bandwidth (GB/s) ! Bus type ! Bus width (bit) !Pixel (GP/s) !Texture (GT/s) ! Direct3D ! OpenGL ! Vulkan |
NVS 50
|May 31, 2005 |NV18 | rowspan="3" |150 | rowspan="3" |AGP 4x/PCI |250 |250 |200 |0:2:4:2 |64 |1.6 | rowspan="6" |DDR |32 |0.5 |1.0 | | rowspan="3" |7 | rowspan="4" |1.2 | rowspan="15" | n/a | rowspan="4" | |
NVS 100
|Dec 22, 2003 | rowspan="2" |NV17 |200 | |333 |0:2:4:2 |64 |2.664 |64 | | | |2x DVI-I, VGA, S-Video |
NVS 200
|Dec 22, 2003 |250 |250 |250 |0:2:4:2 |64 |8.0 | rowspan="4" |128 |0.5 |1.0 | |
NVS 210S
|Dec 22, 2003 |MCP51 |90 |Integrated |425 | | |1:2:2:1 |Up to 256 from system memory | |0.425 |0.850 | |9.0c |
NVS 280
|Oct 28, 2003 |NV34GL |150 |PCIe x16/AGP 8x / PCI | rowspan="2" |275 | rowspan="2" |275 |250 |0:2:4:2/ |64 |8.0 | rowspan="2" |0.55 | rowspan="2" |1.1 | | rowspan="2" |9.0 |1.5 |13 |
NVS 285
|Jun 6, 2006 |NV44 |110 | rowspan="4" |PCIe x1/x16 |275 |3:4:4:2 |128 |8.8 | |2.1 |18 |
NVS 290
|Oct 4, 2007 |G86-827-A2 |80 |460 |920 |800 |16:8:4 | rowspan="2" |256 |6.4 |DDR2 | rowspan="5" |64 |1.84 |3.68 |44.16 | rowspan="2" |10 | rowspan="3" |3.3 |21 |
NVS 295
|May 7, 2009 |G98 |65 |550 |1300 |1400 |8:8:4 |11.2 |GDDR3 |2.2 |4.4 |31.2 |23 |2x DisplayPort or 2x DVI-D |
NVS 300
|Jan 8, 2011 |GT218 | rowspan="3" |40 |589 |1402 |1580 |16:8:4 | rowspan="2" |512 |12.64 | rowspan="3" |DDR3 |2.356 |4.712 |67.3 |10.1 |17.5 |
NVS 310
|Jun 26, 2012 | rowspan="2" |GF119 | rowspan="2" |PCIe x16 | rowspan="2" |523 | rowspan="2" |1046 | rowspan="2" |1750 | rowspan="2" |48:8:4 | rowspan="2" |14 | rowspan="2" |2.092 | rowspan="2" |4.184 |100.4 | rowspan="2" |11.0 | rowspan="2" |4.1 | rowspan="2" |19.5 |2x DisplayPort |
NVS 315
|Mar 10, 2013 |1024 | |DMS-59; idle power consumption 7 W |
NVS 400
|Jul 16, 2004 |2x NV17 |150 |PCI |220 |220 |332 |2x 0:2:4:2 |2x 64 |2x 11.0 |DDR |2x 128 |2x 0.44 |2x 0.88 |2x 5.328 |7 |1.2 |18 |2x LFH-60 |
NVS 420
|Jan 20, 2009 |2xG98-850-U2 |65 | rowspan="3" |PCIe x1/x16 |550 |1300 |1400 |2x 8:8:4 |2x 256 |2x 11.2 |GDDR3 |2x 64 |2x 2.2 |2x 4.4 |2x 31.2 |10 |3.3 |40 |through VHDCI to (4x DisplayPort or 4x DVI-D) |
NVS 440
|Feb 14, 2006 |2xNV43 |110 |250 | |500 |2x 4:8:8:8 |2x 128 |2x 8.000 |DDR |2x 128 |2x 2.000 |2x 2.000 | |9.0 |2.1 |31 |2x DMS-59{{cite web |title=Nvidia Quadro NVS Technical Specifications |url=http://http.download.nvidia.com/ndemand/Quadro_extranet/Product_Overview/PO_QUADRO_NVS_MAY06_REV2.pdf |work=Nvidia press release |access-date=2014-05-15 |archive-url=https://web.archive.org/web/20120914092314/http://http.download.nvidia.com/ndemand/Quadro_extranet/Product_Overview/PO_QUADRO_NVS_MAY06_REV2.pdf |archive-date=2012-09-14 |url-status=live }} |
NVS 450
|Nov 11, 2008 |2xG98 |65 |550 |1300 |1400 |2x 8:8:4 |2x 256 |2x 11.2 |GDDR3 |2x 64 |2x 2.2 |2x 4.4 |2x 31.2 |10 |3.3 | rowspan="2" |35 |4x DisplayPort |
NVS 510
|Oct 23, 2012 |GK107 | rowspan="2" | 28 |PCIe 2.0 x16 |797 | |1782 |192:16:8 |2048 |28.5 | rowspan="2" |DDR3 |128 |3.188 |12.75 |306.0 | rowspan="2" |11.0 | rowspan="2" |4.6 | 1.2 |4x miniDisplayPort |
NVS 810
|Nov 4, 2015 |2x GM107 |PCIe 3.0 x16 |1033 | |1800 |2x 512:32:16 |2x 2048 |2x 14.4 |2x 64 |16.53 |33.06 |1058 |1.3 |68 |8x miniDisplayPort |
Mobile Workstation GPUs
=Quadro Go (GL) & Quadro FX Go series=
{{Further|Quadro}}
Early mobile Quadro chips, based on designs ranging from the GeForce2 Go up to the GeForce Go 6800. Precise specifications for these old mobile workstation chips are hard to find and often conflict between Nvidia press releases and product listings in GPU databases such as TechPowerUp's GPU Database.
- 1 Vertex shaders: pixel shaders: texture mapping units: render output units
- 2 Unified shaders: texture mapping units: render output units
{{Row hover highlight}}
=Quadro FX (x500M) series=
{{Further|Quadro}}
Based on the GeForce 7 series.
{{Row hover highlight}}
class="mw-datatable wikitable sortable sort-under" style="font-size:85%; text-align:center;" |
rowspan=2 | Model
! rowspan="2" |Launch ! rowspan=2 | Code name ! rowspan=2 | Fab (nm) ! rowspan=2 | Core clock (MHz) ! rowspan=2 | Memory clock (MHz) ! rowspan="2" | Core config1 ! colspan=4 | Memory ! colspan="2" |Fillrate ! colspan="2" | Supported API version ! rowspan=2 | TDP (Watts) |
---|
Size (MiB)
! Bandwidth (GB/s) ! Bus type ! Bus width (bit) !Pixel (GP/s) !Texture (GT/s) ! Direct3D ! OpenGL |
Quadro FX 350M
|Mar 13, 2006 |G72GLM | rowspan="4" |90 | rowspan="4" |PCIe 1.0 x16 |450 |900 |3:4:4:2 |256 |14.4 | rowspan="4" |GDDR3 |128 |0.9 |1.8 | rowspan="4" |9.0c | rowspan="4" |2.1 | rowspan="4" |15 |
Quadro FX 1500M
|Apr 18, 2006 | rowspan="3" |G71GLM |375 |1000 | rowspan="3" |8:24:24:16 | rowspan="3" |512 |32 | rowspan="3" |256 |6 |9 |
Quadro FX 2500M
|Sep 29, 2005 |500 | rowspan="2" |1200 | rowspan="2" |38.4 |8 |12 |
Quadro FX 3500M
|Mar 1, 2007 |575 |9.2 |13.8 |
=Quadro FX (x600M) series=
{{Further|Quadro}}
Based on the GeForce 8 series (except the FX 560M and FX 3600M). This was the first Quadro mobile line to support DirectX 10.
{{Row hover highlight}}
class="mw-datatable wikitable sortable sort-under" style="font-size:85%; text-align:center;" |
rowspan=2 | Model
! rowspan="2" |Launch ! rowspan=2 | Code name ! rowspan=2 | Fab (nm) ! rowspan=2 | Core clock (MHz) ! rowspan=2 | Shader clock (MHz) ! rowspan=2 | Memory clock (MHz) ! rowspan="2" | Core config1 ! colspan=4 | Memory ! colspan="2" |Fillrate ! Processing power (GFLOPS) ! colspan=2 | Supported API version ! rowspan=2 | TDP (Watts) ! rowspan=2 | Notes |
---|
Size (MiB)
! Bandwidth (GB/s) ! Bus type ! Bus width (bit) !Pixel (GP/s) !Texture (GT/s) ! Direct3D ! OpenGL |
Quadro FX 360M
|May 9, 2007 |G86M |80 | rowspan="4" |PCIe 1.0 x16 |400 |800 | rowspan="2" |1200 |16:8:4 |256 |9.6 |DDR2 |64 |1.6 |3.2 |38.4 |10.0 |3.3 |17 |Based on the GeForce 8400M GS |
Quadro FX 560M
|Apr 20, 2006 |G73GLM |90 |500 |500 |5:12:12:8 | rowspan="3" |512 |19.2 | rowspan="3" |GDDR3 | rowspan="2" |128 |4 |6 | |9.0c |2.1 |35? |7600GS based? |
Quadro FX 1600M
|Jun 1, 2007 |G84M |80 |625 | rowspan="2" |1250 | rowspan="2" |1600 |32:16:8 |25.6 |5 |10 |120 | rowspan="2" |10.0 | rowspan="2" |3.3 |50? | |
Quadro FX 3600M
|Feb 23, 2008 |G92M |65 |500 |64:32:16 |51.2 |256 |8 |16 |240 |70 |Based on the GeForce 8800M GTX. Dell Precision M6300 uses a 64-shader version of the FX 3600M |
=Quadro FX (x700M) series=
{{Further|Quadro}}
{{Row hover highlight}}
class="mw-datatable wikitable sortable sort-under" style="font-size:85%; text-align:center;" |
rowspan=2 | Model
! rowspan="2" |Launch ! rowspan=2 | Code name ! rowspan=2 | Fab (nm) ! rowspan=2 | Core clock (MHz) ! rowspan=2 | Shader clock (MHz) ! rowspan=2 | Memory clock (MHz) ! rowspan="2" | Core config1 ! colspan=4 | Memory ! colspan="2" |Fillrate ! Processing power (GFLOPS) ! colspan=2 | Supported API version ! rowspan=2 | TDP (Watts) |
---|
Size (MiB)
! Bandwidth (GB/s) ! Bus type ! Bus width (bit) !Pixel (GP/s) !Texture (GT/s) ! Direct3D ! OpenGL |
Quadro FX 370M
|Aug 15, 2008 |G98M |65 | rowspan="6" |PCIe 1.0 x16 |550 |1400 |1200 |8:4:4 |256 |9.6 | rowspan="6" |GDDR3 |64 |2.2 |2.2 |33.6 | rowspan="6" |10.0 | rowspan="6" |3.3 |20 |
Quadro FX 570M
|Jun 1, 2007 |G84M |80 |475 |950 |1400 | rowspan="3" |32:16:8 | rowspan="4" |512 |22.4 | rowspan="3" |128 |3.8 |7.6 |91.2 |45 |
Quadro FX 770M
|Aug 14, 2008 | rowspan="2" |G96M | rowspan="4" |65 |500 |1250 | rowspan="4" |1600 | rowspan="2" |25.6 |4 |8 |119.0 |35 |
Quadro FX 1700M
|Oct 1, 2008 |625 |1550 |5 |10 |148.8 |50 |
Quadro FX 2700M
|Aug 14, 2008 |G94M | rowspan="2" |530 |1325 |48:24:16 | rowspan="2" |51.2 | rowspan="2" |256 |8.48 |12.72 |190.8 |65 |
Quadro FX 3700M
|Aug 14, 2008 |G92M |1375 |128:64:16 |1024 |8.8 |35.2 |528 |75 |
=Quadro FX (x800M) series=
{{Further|Quadro}}
The last DirectX 10-based Quadro mobile cards.
{{Row hover highlight}}
class="mw-datatable wikitable sortable sort-under" style="font-size:85%; text-align:center;" |
rowspan=2 | Model
! rowspan="2" |Launch ! rowspan=2 | Code name ! rowspan=2 | Fab (nm) ! rowspan=2 | Core clock (MHz) ! rowspan=2 | Shader clock (MHz) ! rowspan=2 | Memory clock (MHz) ! rowspan="2" | Core config1 ! colspan=4 | Memory ! colspan="2" |Fillrate ! Processing power (GFLOPS) ! colspan=2 | Supported API version ! rowspan=2 | TDP (Watts) |
---|
Size (MiB)
! Bandwidth (GB/s) ! Bus type ! Bus width (bit) !Pixel (GP/s) !Texture (GT/s) ! Direct3D ! OpenGL |
Quadro FX 380M
|Jan 7, 2010 |GT218M | rowspan="3" |40 | rowspan="5" |PCIe 2.0 x16 |625 |1530 | rowspan="2" |1600 |16:8:4 |512 |12.8 | rowspan="2" |GDDR3 |64 |2.5 |5 |73.44 | rowspan="3" |10.1 | rowspan="5" |3.3 |25 |
Quadro FX 880M
|Jan 7, 2010 |GT216M |550 |1210 |48:16:8 | rowspan="4" |1024 |25.6 | rowspan="2" |128 |4.4 |8.8 |174.24 |35 |
Quadro FX 1800M
|Jun 15, 2009 |GT215M |450 |1080 |1600 |72:24:8 |25.6 |GDDR3 |3.6 |10.8 |233.28 |45 |
Quadro FX 2800M
|Dec 1, 2009 | rowspan="2" |G92M | rowspan="2" |55 |500 |1250 | rowspan="2" |2000 |96:48:16 | rowspan="2" |64 | rowspan="2" |GDDR3 | rowspan="2" |256 |8 |16 |360 | rowspan="2" |10.0 |75 |
Quadro FX 3800M
|Aug 14, 2008 |675 |1688 |128:64:16 |10.8 |43.2 |648.192 |100 |
=Quadro (xxxxM) series=
{{Further|Quadro|Fermi (microarchitecture)}}
- 1 Unified shaders: texture mapping units: render output units
- 2 Each SM in the Fermi architecture contains 4 texture filtering units for every texture address unit
{{Row hover highlight}}
class="mw-datatable wikitable sortable sort-under" style="font-size:85%; text-align:center;" |
rowspan=2 | Model
! rowspan="2" |Launch ! rowspan=2 | Code name ! rowspan=2 | Fab (nm) ! rowspan=2 | Core clock (MHz) ! rowspan=2 | Shader clock (MHz) ! rowspan=2 | Memory clock (MHz) ! rowspan="2" | Core config12 ! colspan=4 | Memory ! colspan="2" |Fillrate ! Processing power (GFLOPS) ! colspan=2 | Supported API version ! rowspan=2 | TDP (Watts) ! rowspan=2 | Notes |
---|
Size (GiB)
! Bandwidth (GB/s) ! Bus type ! Bus width (bit) !Pixel (GP/s) !Texture (GT/s) ! Direct3D ! OpenGL |
Quadro 500M
|Feb 22, 2011 | rowspan="2" |GF108 | rowspan="7" |40 | rowspan="7" |PCIe 2.0 x16 | rowspan="2" |700 | rowspan="2" |1400 | rowspan="3" |1800 | rowspan="2" |96:16:4 |1 | rowspan="3" |28.8 | rowspan="3" |DDR3 | rowspan="3" |128 | rowspan="2" |2.8 | rowspan="2" |11.2 |268.8 | rowspan="7" |11.0 | rowspan="7" |4.5 |35 | |
Quadro 1000M
|Jan 13, 2011 | rowspan="5" |2 | |45 |Dell Precision M4600 |
Quadro 2000M
|Jan 13, 2011 |GF106 |550 |1100 |192:32:16 |8.8 |17.6 |422.4 |55 |Dell Precision M4600 |
Quadro 3000M
|Feb 22, 2011 | rowspan="2" |GF104 |450 |900 | rowspan="2" |2500 |240:40:32 | rowspan="2" |80 | rowspan="4" |GDDR5 | rowspan="4" |256 |14.4 |18 |432 |75 |Dell Precision M6600 |
Quadro 4000M
|Feb 22, 2011 |475 |950 |336:56:32 |15.2 |26.6 |638.4 | rowspan="3" |100 |Dell Precision M6600 |
Quadro 5000M
|Jul 27, 2010 |GF100 |405 |810 |2400 |320:40:32 |76.8 |12.96 |16.2 |518.4 |Dell Precision M6500 |
Quadro 5010M
|Feb 22, 2011 |GF110 |450 |900 |2600 |384:48:32 |4 |83.2 |14.4 |21.6 |691.2 |Dell Precision M6600 |
=Quadro (Kx000M) series=
{{Further|Quadro|Kepler (microarchitecture)}}
{{Row hover highlight}}
class="mw-datatable wikitable sortable sort-under" style="font-size:85%; text-align:center;" |
rowspan=2 | Model
! rowspan="2" |Launch ! rowspan=2 | Code name ! rowspan=2 | Fab (nm) ! rowspan=2 | Core clock (MHz) ! rowspan=2 | Shader clock (MHz) ! rowspan=2 | Memory clock (MHz) ! rowspan="2" | Core config1 ! colspan=4 | Memory ! colspan="2" |Fillrate ! Processing power (GFLOPS) ! colspan=2 | Supported API version ! rowspan=2 | Nvidia Optimus ! rowspan=2 | TDP (Watts) ! rowspan=2 | Notes |
---|
Size (GiB)
! Bandwidth (GB/s) ! Bus type ! Bus width (bit) !Pixel (GP/s) !Texture (GT/s) ! Direct3D ! OpenGL |
Quadro K500M
|Jun 1, 2012 | rowspan="3" |GK107 | rowspan="6" |28 | rowspan="6" |PCIe 3.0 x16 | rowspan="2" |850 | rowspan="2" |850 |1600 |192:16:8 |1 |12.8 | rowspan="3" |DDR3 |64 |6.8 | rowspan="2" |13.6 |326.4 | rowspan="6" |11.0 | rowspan="6" |4.5 | rowspan="6" |Yes |35 | |
Quadro K1000M
|Jun 1, 2012 | rowspan="2" |1800 |192:16:16 | rowspan="3" |2 | rowspan="2" |28.8 | rowspan="2" |128 |13.6 |326.4 |45 |Dell Precision M4700 |
Quadro K2000M
|Jun 1, 2012 |745 |745 |384:32:16 |11.92 |23.84 |572.16 |55 |Dell Precision M4700 |
Quadro K3000M
|Jun 1, 2012 | rowspan="3" |GK104 |654 |654 | rowspan="2" |2800 |576:48:32 | rowspan="2" |89.6 | rowspan="3" |GDDR5 | rowspan="3" |256 |20.93 |31.39 |753.41 |75 |Dell Precision M6700 |
Quadro K4000M
|Jun 1, 2012 |600 |600 |960:80:32 | rowspan="2" |4 |19.2 |48 |1152 | rowspan="2" |100 |Dell Precision M6700 |
Quadro K5000M
|Aug 7, 2012 |706 |706 |3000 |1344:112:32 |96 |22.59 |79.07 |1897.73 |Dell Precision M6700 |
=Quadro (Kx100M) series=
{{Further|Quadro|Kepler (microarchitecture)}}
{{Row hover highlight}}
class="mw-datatable wikitable sortable sort-under" style="font-size:85%; text-align:center;" |
rowspan=2 | Model
! rowspan="2" |Launch ! rowspan=2 | Code name ! rowspan=2 | Fab (nm) ! rowspan=2 | Core clock (MHz) ! rowspan=2 | Shader clock (MHz) ! rowspan=2 | Memory clock (MHz) ! rowspan="2" | Core config1 ! colspan=4 | Memory ! colspan="2" |Fillrate ! Processing power (GFLOPS) ! colspan=2 | Supported API version ! rowspan=2 | Nvidia Optimus ! rowspan=2 | TDP (Watts) ! rowspan=2 | Notes |
---|
Size (GiB)
! Bandwidth (GB/s) ! Bus type ! Bus width (bit) !Pixel (GP/s) !Texture (GT/s) ! Direct3D ! OpenGL |
Quadro K510M
|Jul 23, 2013 | rowspan="2" |GK208 | rowspan="7" |28 |PCIe 3.0 x8 |850 |850 |1200 |192:16:8 | rowspan="2" |1 |19.2 | rowspan="7" |GDDR5 | rowspan="2" |64 |6.8 |13.6 |326.4 | rowspan="7" |11.0 | rowspan="7" |4.5 | rowspan="7" |Yes | rowspan="2" |30 | rowspan="2" | |
Quadro K610M
|Jul 23, 2013 | rowspan="6" |PCIe 3.0 x16 |980 |980 |1300 |192:16:8 |20.8 |7.84 |15.68 |376.32 |
Quadro K1100M
|Jul 23, 2013 |GK107 |716 |716 |1400 |384:32:16 | rowspan="2" |2 |44.8 | rowspan="2" |128 |11.45 |22.91 |549.89 |45 |Dell Precision M3800 and M4800 |
Quadro K2100M
|Jul 23, 2013 |GK106 |654 |654 |1500 |576:48:16 |48.0 |10.46 |31.39 |753.41 |55 |Dell Precision M4800 |
Quadro K3100M
|Jul 23, 2013 | rowspan="3" |GK104 |680 |680 | rowspan="2" |800 |768:64:32 | rowspan="2" |4 | rowspan="2" |102.4 | rowspan="3" |256 |21.76 |43.52 |1044.48 |75 |Dell Precision M6800 |
Quadro K4100M
|Jul 23, 2013 |706 |706 |1152:96:32 |22.59 |67.77 |1626.624 | rowspan="2" |100 |Dell Precision M6800 |
Quadro K5100M
|Jul 23, 2013 |771 |771 |900 |1536:128:32 |8 |115.2 |24.67 |98.68 |2368.51 |Dell Precision M6800 |
=Quadro (Kx200M) series=
{{Further|Quadro|Maxwell (microarchitecture)}}
{{Row hover highlight}}
class="mw-datatable wikitable sortable sort-under" style="font-size:85%; text-align:center;" |
rowspan=2 | Model
! rowspan="2" |Launch ! rowspan=2 | Code name ! rowspan=2 | Fab (nm) ! rowspan=2 | Core clock (MHz) ! rowspan="2" |Boost clock (MHz) ! rowspan=2 | Memory clock (MHz) ! rowspan="2" | Core config1 ! colspan=4 | Memory ! colspan="2" |Fillrate ! colspan="2" | Processing power (GFLOPS) ! colspan="5" | Supported API version ! rowspan=2 | Nvidia Optimus ! rowspan=2 | TDP (Watts) |
---|
Size (GiB)
! Bandwidth (GB/s) ! Bus type ! Bus width (bit) !Pixel (GP/s) !Texture (GT/s) ! Direct3D ! OpenGL !CUDA |
Quadro K2200M
|Jul 19, 2014 |GM107 |28 |PCIe 3.0 x16 |1150 |1150 |1253 |640:40:16 |2 |80.2 |GDDR5 |128 |18.4 |46 |1472 |46 |12.1 |4.6 |1.3 |3.0 |5.0 |Yes |65 |
=Quadro (Mx000M) series=
{{Further|Quadro|Maxwell (microarchitecture)}}
{{Row hover highlight}}
class="mw-datatable wikitable sortable sort-under" style="font-size:85%; text-align:center;" |
rowspan=2 | Model
! rowspan="2" |Launch ! rowspan=2 | Code name ! rowspan=2 | Fab (nm) ! rowspan=2 | Core clock (MHz) ! rowspan="2" |Boost clock (MHz) ! rowspan=2 | Memory clock (MHz) ! rowspan="2" | Core config1 ! colspan=4 | Memory ! colspan="2" |Fillrate ! colspan="2" | Processing power (GFLOPS) ! colspan="5" | Supported API version ! rowspan=2 | Nvidia Optimus ! rowspan=2 | TDP (Watts) |
---|
Size (GiB)
! Bandwidth (GB/s) ! Bus type ! Bus width (bit) !Pixel (GP/s) !Texture (GT/s) ! Direct3D ! OpenGL !CUDA |
Quadro M500M
|Apr 27, 2016 |GM108 | rowspan="7" |28 | rowspan="7" |PCIe 3.0 x16 |1029 |1124 |900 (1800) |384:24:8 | rowspan="3" |2 |14.40 |DDR3 |64 |8.992 |17.98 |863.2 |26.98 | rowspan="7" |12.1 | rowspan="7" |4.6 | rowspan="7" |1.3 | rowspan="7" |3.0 | rowspan="4" |5.0 | rowspan="7" |Yes |25 |
Quadro M600M
|Aug 18, 2015 | rowspan="3" |GM107 |837 |876 | rowspan="6" |1253 |384:24:16 | rowspan="3" |80.2 | rowspan="6" |GDDR5 | rowspan="3" |128 |7.008 |14.02 |672.8 |21.02 |30 |
Quadro M1000M
|Aug 18, 2015 |993 | |512:32:16 |15.89 |31.78 |1017 |31.78 |40 |
Quadro M2000M
|Dec 3, 2015 |1029 |1098 |640:40:16 | rowspan="3" |4 |17.57 |43.92 |1405 |43.92 |55 |
Quadro M3000M
|Aug 18, 2015 | rowspan="3" |GM204 |1050 | |1024:64:32 | rowspan="3" |160.4 | rowspan="3" |256 |33.60 |67.20 |2150 |67.20 | rowspan="3" |5.2 |75 |
Quadro M4000M
|Aug 18, 2015 |975 | |1280:80:48 |62.40 |78.00 |2496 |78.00 | rowspan="2" |100 |
Quadro M5000M
|Aug 18, 2015 |962 | |1536:96:64 |8 |62.40 |93.60 |2955.3 |93.60 |
=Quadro (Mx200) series=
{{Further|Quadro|Maxwell (microarchitecture)}}
Mobile version of the Quadro (Mx200) series.
{{Row hover highlight}}
class="mw-datatable wikitable sortable sort-under" style="font-size:85%; text-align:center;" |
rowspan=2 | Model
! rowspan="2" |Launch ! rowspan=2 | Code name ! rowspan=2 | Fab (nm) ! rowspan=2 | Core clock (MHz) ! rowspan=2 | Boost clock (MHz) ! rowspan=2 | Memory clock (MHz) ! rowspan="2" | Core config1 ! rowspan="2" |L2 ! colspan=4 | Memory ! colspan="2" |Fillrate ! Processing power (GFLOPS) ! colspan=3 | Supported API version ! rowspan=2 | Nvidia Optimus ! rowspan=2 | TDP (Watts) |
---|
Size (GiB)
! Bandwidth (GB/s) ! Bus type ! Bus width (bit) !Pixel (GP/s) !Texture (GT/s) ! Direct3D ! OpenGL ! CUDA |
Quadro M520 Mobile
|Jan 11, 2017 |GM108 | rowspan="4" |28 | rowspan="4" |PCIe 3.0 x16 |965 |1176 |5000 |384:24:8 |1 |1 |40 | rowspan="4" |GDDR5 |64 |9.4 |18.8 |840 | rowspan="4" |12.1 | rowspan="4" |4.5 | rowspan="3" |5.0 | rowspan="4" |Yes |25 |
Quadro M620 Mobile
|Jan 11, 2017 | rowspan="2" |GM107 |756 |1018 | rowspan="2" |5012 |512:32:16 |2 |2 | rowspan="2" |80.2 | rowspan="3" |128 |16.3 |32.6 |1000 |30 |
Quadro M1200 Mobile
|Jan 11, 2017 |991 |1148 |640:40:16 |2 | rowspan="2" |4 |18.4 |45.9 |1400 |45 |
Quadro M2200 Mobile
|Jan 11, 2017 |GM206 |695 |1037 |5508 |1024:64:32 |1 |88.1 |33.2 |66.3 |2100 |5.2 |55 |
=Quadro (Mx500) series=
{{Further|Quadro|Maxwell (microarchitecture)}}
Mobile version of the Quadro (Mx500) series.
{{Row hover highlight}}
class="mw-datatable wikitable sortable sort-under" style="font-size:85%; text-align:center;" |
rowspan=2 | Model
! rowspan="2" |Launch ! rowspan=2 | Code name ! rowspan=2 | Fab (nm) ! rowspan=2 | Core clock (MHz) ! rowspan=2 | Boost clock (MHz) ! rowspan=2 | Memory clock (MHz) ! rowspan="2" | Core config1 ! rowspan="2" |L2 ! colspan=4 | Memory ! colspan="2" |Fillrate ! Processing power (GFLOPS) ! colspan=3 | Supported API version ! rowspan=2 | Nvidia Optimus ! rowspan=2 | TDP (Watts) |
---|
Size (GiB)
! Bandwidth (GB/s) ! Bus type ! Bus width (bit) !Pixel (GP/s) !Texture (GT/s) ! Direct3D ! OpenGL ! CUDA |
Quadro M5500 Mobile
|Apr 8, 2016 |GM204 |28 |PCIe 3.0 x16 |861 |1140 |6606 |2048:128:64 |2 |8 |211.4 |GDDR5 |256 |73 |145.9 |4669 |12.1 |4.5 |5.2 |Yes |150 |
=Quadro (Px000) series=
{{Further|Quadro|Pascal (microarchitecture)}}
Mobile version of the Quadro (Px000) series.
- 1 Unified shaders: texture mapping units: render output units: streaming multiprocessors
{{Row hover highlight}}
class="mw-datatable wikitable sortable sort-under" style="font-size:85%; text-align:center;" |
rowspan=2 | Model
! rowspan="2" |Launch ! rowspan=2 | Code name ! rowspan=2 | Fab (nm) ! rowspan=2 | Core clock (MHz) ! rowspan=2 | Boost clock (MHz) ! rowspan=2 | Memory clock (MHz) ! rowspan="2" | Core config1 ! rowspan="2" |L2 ! colspan=4 | Memory ! colspan="2" |Fillrate ! Processing power (GFLOPS) ! colspan=3 | Supported API version ! rowspan=2 | Nvidia Optimus ! rowspan=2 | TDP (Watts) |
---|
Size (GiB)
! Bandwidth (GB/s) ! Bus type ! Bus width (bit) !Pixel (GP/s) !Texture (GT/s) ! Direct3D ! OpenGL ! CUDA |
Quadro P500 Mobile
|Jan 5, 2018 |GP108 | rowspan="4" |14 | rowspan="8" |PCIe 3.0 x16 |1455 |1519 |1253 |256:16:16:2 |0.5 |2 |40 | rowspan="8" |GDDR5 |64 |24.3 |24.3 |750 | rowspan="8" |12.1 | rowspan="8" |4.5 | rowspan="8" |6.1 | rowspan="8" |Yes |18 |
Quadro P600 Mobile
|Feb 7, 2017 |GP107 |1430 |1620 |1252 |384:24:16:3 | rowspan="3" |1 |2 |80 | rowspan="3" |128 |25.92 |38.88 |1200 |25 |
Quadro P1000 Mobile
|Feb 7, 2017 |GP107(N18P-Q1-A1) |1493 |1519 |1502 |512:32:16:4 | rowspan="2" |4 | rowspan="2" |96 |24.3 |48.61 |1600 |40 |
Quadro P2000 Mobile
|Feb 6, 2017 |GP107(N18P-Q3-A1) |1557 |1607 |1502 |768:64:32:6 |51.42 |77.14 |2400 |50 |
Quadro P3000 Mobile
|Jan 11, 2017 |GP106 | rowspan="4" |16 |1088 |1215 |1752 |1280:80:48:10 |1.5 |6 |168 |192 |58.32 |97.2 |3098 |75 |
Quadro P4000 Mobile
|Jan 11, 2017 | rowspan="3" |GP104 |1202 | rowspan="2" |1228 |1500 | rowspan="2" |1792:112:64:14 | rowspan="3" |2 | rowspan="2" |8 | rowspan="3" |192 | rowspan="3" |256 | rowspan="2" |78.59 | rowspan="2" |137.5 |4398 |100 |
Quadro P4000 Max-Q
|Jan 11, 2017 |1114 |1502 | |80 |
Quadro P5000 Mobile
|Jan 11, 2017 |1164 |1506 |1500 |2048:128:64:16 |16 |96.38 |192.8 |6197 |100 |
=Quadro (Px200) series=
{{Further|Quadro|Pascal (microarchitecture)}}
- 1 Unified shaders: texture mapping units: render output units: streaming multiprocessors
{{Row hover highlight}}
class="mw-datatable wikitable sortable sort-under" style="font-size:85%; text-align:center;" |
rowspan=2 | Model
! rowspan=2 |Launch ! rowspan=2 | Code name ! rowspan=2 | Fab (nm) ! rowspan=2 | Core clock (MHz) ! rowspan=2 | Boost clock (MHz) ! rowspan=2 | Memory clock (MHz) ! rowspan="2" | Core config1 ! rowspan="2" |L2 ! colspan=4 | Memory ! colspan=2 | Fillrate ! Processing power (GFLOPS) ! colspan=3 | Supported API version ! rowspan=2 | Nvidia Optimus ! rowspan=2 | TDP (Watts) |
---|
Size (GiB)
! Bandwidth (GB/s) ! Bus type ! Bus width (bit) ! Pixel (GP/s) ! Texture (GT/s) ! Direct3D ! OpenGL ! CUDA |
Quadro P3200 Mobile
|Feb 21, 2018 | rowspan="5" |GP104 | rowspan="5" |16 | rowspan="5" |PCIe 3.0 x16 |1328 |1543 |1752 |1792:112:64:14 |1.5 |6 |168.2 | rowspan="5" |GDDR5 |192 |98.75 |172.8 |5530 | rowspan="5" |12.1 | rowspan="5" |4.5 | rowspan="5" |6.1 | rowspan="5" |Yes |75 |
Quadro P4200 Mobile
|Feb 21, 2018 |1418 |1594 | rowspan="2" |1753 | rowspan="2" |2304:144:64:18 | rowspan="4" |2 | rowspan="2" |8 | rowspan="2" |224.4 | rowspan="4" |256 |102.0 |229.5 |7345 |100 |
Quadro P4200 Max-Q
|Feb 21, 2018 |1215 |1480 |94.72 |213.1 |6820 |100 |
Quadro P5200 Mobile
|Feb 21, 2018 |1582 |1759 | rowspan="2" |1804 | rowspan="2" |2560:160:64:20 | rowspan="2" |16 | rowspan="2" |230.9 |112.6 |281.4 |9006 |100 |
Quadro P5200 Max-Q
|Feb 21, 2018 |1240 |1480 |94.72 |236.8 |7578 |100 |
= Quadro RTX / T x000 series =
{{Further|Turing (microarchitecture)}}
Mobile version of the Quadro RTX / T x000 series.
- 1 Unified shaders: texture mapping units: render output units: streaming multiprocessors: tensor cores (or FP16 Cores in T x000 Series)
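The three processing power columns in the table below correspond to half, single, and double precision throughput. On Turing, FP16 runs at twice the FP32 rate and FP64 at 1/32 of it; the sketch below (an editorial illustration under that assumption, not taken from Nvidia documentation) reproduces the Quadro RTX 4000 Mobile row.
<syntaxhighlight lang="python">
# Sketch of the half / single / double precision columns, assuming Turing's
# 2:1 FP16 and 1:32 FP64 rates relative to FP32.

def turing_tflops(shaders, boost_mhz):
    # FP32: 2 FLOPs (fused multiply-add) per shader per cycle at the boost clock
    fp32 = 2 * shaders * boost_mhz / 1e6
    # Half precision at 2x and double precision at 1/32 of the FP32 rate
    return 2 * fp32, fp32, fp32 / 32

# Quadro RTX 4000 Mobile: 2560 shaders at 1560 MHz boost
print(turing_tflops(2560, 1560))  # ~(15.97, 7.99, 0.25) TFLOPS
</syntaxhighlight>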
{{Row hover highlight}}
class="mw-datatable wikitable sortable sort-under" style="font-size:85%; text-align:center;" |
rowspan="2" | Model
! rowspan="2" |Launch ! rowspan="2" |Code name ! rowspan="2" | Fab (nm) ! rowspan="2" | Core clock (MHz) ! rowspan="2" | Boost clock (MHz) ! rowspan="2" | Core config1 ! rowspan="2" |L2 ! colspan="5" | Memory ! colspan="2" |Fillrate ! colspan="3" |Processing power (TFLOPS) ! colspan="5" | Supported API version ! rowspan="2" |TDP (Watts) ! rowspan="2" | Notes |
---|
Size (GiB)
! Bandwidth (GB/s) !Memory clock (MHz) ! Bus type ! Bus width (bit) !Pixel (GP/s) !Texture (GT/s) !CUDA |
rowspan="2" |Quadro T500 Mobile
| rowspan="2" |Dec 2, 2020 | rowspan="2" |TU117(N19P-Q1-A1) | rowspan="15" |12 | rowspan="15" |PCIe 3.0 | rowspan="2" | 1365 | rowspan="2" | 1695 | rowspan="2" |896:56:32:14:56 | rowspan="8" |1 |2 | rowspan="2" |80 | rowspan="2" |1250 | rowspan="8" |GDDR5 | rowspan="2" |64 |54.24 |94.92 |5.591 |2.796 |0.08736 | rowspan="15" |12.1 | rowspan="15" |4.6 | rowspan="15" |1.2 | rowspan="15" |3.0 | rowspan="15" |7.5 |25 | rowspan="15" | |
rowspan="7" |4
|49.92 |87.36 |6.075 |3.037 |0.09492 |18 |
Quadro T550 Mobile
|May 2022 |TU117 |1065 |1665 |1024:64:32:26:64 |112 |1500 |64 |53.28 |106.6 |6.82 |3.41 |0.1066 |23 |
Quadro T600 Mobile
|Apr 12, 2021 |TU117 |780 |1410 |896:56:32:14:56 |192 |1500 |128 |45.12 |78.96 |5.053 |2.527 |0.07896 |40 |
Quadro T1000 Mobile{{cite web|url=https://www.nvidia.com/content/dam/en-zz/Solutions/design-visualization/documents/quadro-mobile-line-card-n18-11x8.5-r4-hr.pdf|title=NVIDIA PROFESSIONAL GRAPHICS SOLUTIONS|website=Nvidia.com}}
|May 27, 2019 |TU117(N19P-Q1-A1) |1395 |1455 |768:48:32:12:1536 |128 |2000 |128 |46.56 |69.84 |5.215 |2.607 |0.08148 |40-50 |
Quadro T1200 Mobile
|Apr 12, 2021 |TU117 |1515 |1785 |1024:64:32:16:64 |224 |1500 |128 |57.12 |114.2 |7.311 |3.656 |0.1142 |18 |
Quadro T2000 Mobile
|May 27, 2019 |TU117(N19P-Q3-A1) |1575 |1785 | rowspan="2" |1024:64:32:16:2048 | rowspan="2" |128 |2001 | rowspan="2" |128 |57.1 |114.2 |7.311 |3.656 |0.1142 |60 |
Quadro T2000 Max-Q{{Cite web |date=2024-09-29 |title=NVIDIA Quadro T2000 Max-Q Specs |url=https://www.techpowerup.com/gpu-specs/quadro-t2000-max-q.c3436 |access-date=2024-09-29 |website=TechPowerUp |language=en}}
|May 27, 2019 |TU117 |1035 |1395 |1250 |44.64 |89.28 |5.714 |2.857 |0.08928 |40 |
Quadro RTX 3000 Mobile
|May 27, 2019 |TU106(N19E-Q1-KA-K1) |945 |1380 | rowspan="2" |2304:144:48:36:288 | rowspan="6" |4 | rowspan="2" |6 |448 |1750 | rowspan="7" |GDDR6 | rowspan="6" |256 |88.32 |198.7 |12.72 |6.359 |0.1987 |60-80 |
Quadro RTX 3000 Max-Q{{cite web|url=https://www.techpowerup.com/gpu-specs/quadro-rtx-3000-max-q.c3429|title=NVIDIA Quadro RTX 3000 Max-Q Specs|website=TechPowerUp|language=en|access-date=2020-03-04}}
|May 27, 2019 |TU106 |600 |1215 |416 |1625 |77.76 |175.0 |11.2 |5.599 |0.175 |60 |
Quadro RTX 4000 Mobile
|May 27, 2019 |TU104(N19E-Q3-A1) |1110 |1560 | rowspan="2" |2560:160:64:40:320 | rowspan="2" |8 |448 |1750 |99.84 |249.6 |15.97 |7.987 |0.2496 |110 |
Quadro RTX 4000 Max-Q{{cite web|url=https://www.techpowerup.com/gpu-specs/quadro-rtx-4000-max-q.c3427|title=NVIDIA Quadro RTX 4000 Max-Q Specs|website=TechPowerUp|language=en|access-date=2020-03-04}}
|May 27, 2019 |TU104 |780 |1380 |416 |1625 |88.32 |220.8 |14.13 |7.066 |0.2208 |80 |
Quadro RTX 5000 Mobile
|May 27, 2019 |TU104(N19E-Q5-A1) |1035 |1530 | rowspan="2" |3072:192:64:48:384 | rowspan="2" |16 |448 |1750 |98.88 |296.6 |18.98 |9.492 |0.2966 |110 |
Quadro RTX 5000 Max-Q{{cite web|url=https://www.techpowerup.com/gpu-specs/quadro-rtx-5000-max-q.c3432|title=NVIDIA Quadro RTX 5000 Max-Q Specs|website=TechPowerUp|language=en|access-date=2025-04-16}}
|May 27, 2019 |TU104 |600 |1350 |384 |1500 |86.40 |259.2 |16.59 |8.294 |0.2592 |80 |
Quadro RTX 6000 Mobile{{cite web|url=https://www.techpowerup.com/gpu-specs/quadro-rtx-6000-mobile.c3497|title=NVIDIA Quadro RTX 6000 Specs|website=TechPowerUp|language=en|access-date=2025-04-16}}{{cite web|url=https://www.nvidia.com/en-us/design-visualization/quadro-in-laptops/asus-proart-studiobook-one/|title=NVIDIA Quadro RTX 6000 Mobile product page|website=Nvidia|language=en|access-date=2025-04-16}}
|Sep 4, 2019 |TU102 |1275 |1455 |4608:288:96:72:576 |6 |24 |672 |1750 |384 |139.7 |419.0 |26.82 |13.41 |0.419 |80 |
= RTX Ax000 series =
{{Further|Ampere (microarchitecture)}}
Mobile version of the RTX Ax000 series.
- 1 Unified shaders: texture mapping units: render output units: streaming multiprocessors: tensor cores
{{Row hover highlight}}
= RTX Ada Generation =
{{Further|Ada Lovelace (microarchitecture)}}
Mobile version of the RTX Ada Generation.
- 1 CUDA cores: RT cores: Tensor cores
{{Row hover highlight}}
class="mw-datatable wikitable sortable sort-under" style="font-size:85%; text-align:center;" |
rowspan="2" | Model
! rowspan="2" |Launch ! rowspan="2" |Code name ! rowspan="2" | Fab (nm) ! rowspan="2" | Core clock (MHz) ! rowspan="2" | Boost clock (MHz) ! rowspan="2" | Core config1 ! colspan="2" |Cache ! colspan="5" | Memory ! colspan="2" |Fillrate ! colspan="4" |Processing power (TFLOPS) ! colspan="5" | Supported API version ! rowspan="2" |TDP (Watts) ! rowspan="2" | Notes |
---|
L1/SM (KiB)
!L2 ! Size (GiB) ! Bandwidth (GB/s) !Memory clock (MHz) ! Bus type ! Bus width (bit) !Pixel (GP/s) !Texture (GT/s) !Tensor !CUDA |
RTX 500 Mobile Ada Generation{{Cite web |title=proviz-rtx-mobile-line-card.pdf |url=https://images.nvidia.com/aem-dam/en-zz/Solutions/design-visualization/documents/proviz-rtx-mobile-line-card.pdf |access-date=12 June 2024 |website=nvidia.com}}{{cite web |url=https://www.techpowerup.com/gpu-specs/rtx-500-mobile-ada-generation.c4207 |title=NVIDIA RTX 500 Mobile Ada Generation}}
| rowspan="2" |Feb, 2024 |AD107 | rowspan="7" |PCIe 4.0 |1485 |2025 |2048:16:64 | rowspan="7" |128 | rowspan="2" |12 |4 |128 | rowspan="4" |2000 | rowspan="7" |GDDR6 |64 |64.8 |129.6 |8.294 |8.294 |0.1296 |147.4 | rowspan="7" |12.2 | rowspan="7" |4.6 | rowspan="7" |1.3 | rowspan="7" |3.0 | rowspan="7" |8.9 |35–60 | |
RTX 1000 Mobile Ada Generation{{cite web |url=https://www.techpowerup.com/gpu-specs/rtx-1000-mobile-ada-generation.c4208 |title=NVIDIA RTX 1000 Mobile Ada Generation}}
|AD107 |1485 |2025 |2560:20:80 |6 |192 |96 |97.2 |162.0 |10.37 |10.37 |0.162 |193.0 |35–140 | |
RTX 2000 Mobile Ada Generation{{Cite web |title=proviz-mobile-linecard-update-2653183.pdf |url=https://nvdam.widen.net/s/dmdqnnwcmk/proviz-mobile-linecard-update-2653183 |access-date=2023-06-18 |website=nvdam.widen.net}}{{cite web |url=https://www.techpowerup.com/gpu-specs/rtx-2000-mobile-ada-generation.c4093 |title=NVIDIA RTX 2000 Mobile Ada Generation}}
| rowspan="5" |Mar, 2023 |AD107 |1635 |2115 |3072:24:96 |24 |8 |256 |128 |101.5 |203.0 |12.99 |12.99 |0.203 |231.6 |35–140 | |
RTX 3000 Mobile Ada Generation{{cite web |url=https://www.techpowerup.com/gpu-specs/rtx-3000-mobile-ada-generation.c4095 |title=NVIDIA RTX 3000 Mobile Ada Generation}}
|AD106 |1395 |1695 |4608:36:144 |32 |8 (ECC) |256 |128 |81.36 |244.1 |15.62 |15.62 |0.2441 |318.6 |35–140 | |
RTX 3500 Mobile Ada Generation{{cite web |url=https://www.techpowerup.com/gpu-specs/rtx-3500-mobile-ada-generation.c4098 |title=NVIDIA RTX 3500 Mobile Ada Generation}}
|AD104 |1110 |1545 |5120:40:160 | rowspan="2" |48 |12 (ECC) |432 | rowspan="3" |2250 |192 |98.88 |247.2 |15.82 |15.82 |0.2472 |368.6 |60–140 | |
RTX 4000 Mobile Ada Generation{{cite web |url=https://www.techpowerup.com/gpu-specs/rtx-4000-mobile-ada-generation.c4096 |title=NVIDIA RTX 4000 Mobile Ada Generation}}
|AD104 |1290 |1665 |7424:58:232 |12 (ECC) |432 |192 |133.2 |386.3 |24.72 |24.72 |0.3863 |538.0 |60–175 | |
RTX 5000 Mobile Ada Generation{{cite web |url=https://www.techpowerup.com/gpu-specs/rtx-5000-mobile-ada-generation.c4097 |title=NVIDIA RTX 5000 Mobile Ada Generation}}
|AD103 |1425 |2115 |9728:76:304 |64 |16 (ECC) |576 |256 |236.9 |643.0 |41.15 |41.15 |0.643 |681.8 |60–175 | |
= Mobility Quadro NVS series =
{{Further|Quadro}}
- 1 Vertex shaders: pixel shaders: texture mapping units: render output units
- 2 Unified shaders: texture mapping units: render output units
{{Row hover highlight}}
class="mw-datatable wikitable sortable sort-under" style="font-size:85%; text-align:center;" |
rowspan=2 | Model
! rowspan="2" |Launch ! rowspan=2 | Code name ! rowspan=2 | Fab (nm) ! rowspan=2 | Core clock (MHz) ! rowspan=2 | Shader clock (MHz) ! rowspan=2 | Memory clock (MHz) ! rowspan=2 | Core config12 ! colspan=4 | Memory ! colspan="2" |Fillrate ! Processing power (GFLOPS) ! colspan=2 | Supported API version ! rowspan=2 | TDP (Watts) ! rowspan=2 | Notes |
---|
Size (MiB)
! Bandwidth (GB/s) ! Bus type ! Bus width (bit) !Pixel (GP/s) !Texture (GT/s) ! Direct3D ! OpenGL |
Quadro NVS 110M
|Jun 1, 2006 |G72M | rowspan="2" |90 | rowspan="2" |PCIe 1.0 x16 |300 |300 |600 | rowspan="2" |3:4:4:2 | rowspan="2" |Up to 512 |4.8 |DDR | rowspan="8" |64 |0.6 |1.2 | rowspan="2" | |9.0c |2.1 | rowspan="6" |10 | rowspan="9" | |
Quadro NVS 120M
|Jun 1, 2006 |G72GLM |450 |450 | rowspan="2" |700 |5.6 | rowspan="2" |DDR2 |0.9 |1.8 |9.0c |2.1 |
Quadro NVS 130M
|May 9, 2007 | rowspan="3" |G86M | rowspan="3" |80 | rowspan="5" |PCIe 2.0 x16 |400? |800? |8:4:4 | rowspan="2" |Up to 256 |6.4? |1.6? |1.6? |19.2 | rowspan="5" |10.0 | rowspan="5" |3.3 |
Quadro NVS 135M
|May 9, 2007 | rowspan="2" |400 | rowspan="2" |800 |1188 |16:8:4 |9.504 | rowspan="4" |GDDR3 | rowspan="2" |1.6 | rowspan="2" |3.2 |38.4 |
Quadro NVS 140M
|May 9, 2007 |1200 |16:8:4 |Up to 512 |9.6 |38.4 |
Quadro NVS 150M
|Aug 15, 2008 | rowspan="2" |G98M | rowspan="2" |65 |530 |1300 | rowspan="2" |1400 |8:4:4 |Up to 256 | rowspan="2" |11.2 | rowspan="2" |2.12 |2.12 |31.2 |
Quadro NVS 160M
|Aug 15, 2008 |580 |1450 |8:8:4 |256 |4.24 |34.8 |12 |
Quadro NVS 300M
|May 24, 2006 |G72GLM |90 |PCIe 1.0 x16 |450 |450 |1000 |3:4:4:2 | rowspan="2" |Up to 512 |8 |DDR2 |0.9 |1.8 | |9.0c |2.1 |16 |
Quadro NVS 320M
|Jun 9, 2007 |G84M |65 |PCIe 2.0 x16 |575 |1150 |1400 |32:16:8 |22.4 | rowspan="2" |GDDR3 |128 |4.6 |9.2 |110.4 |10.0 |3.3 |20 |
Quadro NVS 510M
|Aug 21, 2006 |G72GLM |90 |PCIe 1.0 x16 |500 |500 |1200 |8:24:24:16 |Up to 1024 |38.4 |256 |8 |12 | |9.0c |2.1 |45? |based on Go 7900 GTX |
=Mobility NVS series=
{{Further|Quadro}}
{{Row hover highlight}}
class="mw-datatable wikitable sortable sort-under" style="font-size:85%; text-align:center;" |
rowspan=2 | Model
! rowspan="2" |Launch ! rowspan=2 | Code name ! rowspan=2 | Fab (nm) ! rowspan=2 | Core clock (MHz) ! rowspan=2 | Shader clock (MHz) ! rowspan=2 | Memory clock (MHz) ! rowspan="2" | Core config1 ! colspan=4 | Memory ! colspan="2" |Fillrate ! Processing power (GFLOPS) ! colspan="4" | Supported API version ! rowspan=2 | TDP (Watts) ! rowspan=2 | Notes |
---|
Size (MiB)
! Bandwidth (GB/s) ! Bus type ! Bus width (bit) !Pixel (GP/s) !Texture (GT/s) ! Direct3D ! OpenGL !CUDA |
NVS 2100M
|Jan 7, 2010 | rowspan="2" |GT218M | rowspan="4" |40 | rowspan="6" |PCIe 2.0 x16 |535 |1230 | rowspan="4" |1600 | rowspan="2" |16:8:4 | rowspan="2" |Up to 512 | rowspan="3" |12.8 | rowspan="6" |GDDR3 | rowspan="3" |64 |2.14 |4.28 |59.04 | rowspan="2" |10.1 | rowspan="2" |3.3 | rowspan="6" |1.1 | rowspan="2" |1.2 | rowspan="2" |14 | |
NVS 3100M
|Jan 7, 2010 |600 |1470 |2.4 |4.8 |70.56 |based on G210M/310M |
NVS 4200M
|Jan 7, 2010 |GF119 |810 |1620 |48:8:4 | rowspan="4" |Up to 1024 |3.24 |6.48 |155.52 |11 |4.5 |2.1 |25 |based on GT 520M |
NVS 5100M
|Feb 22, 2011 |GT216M |550 |1210 |48:16:8 |25.6 |128 |4.4 |8.8 |174.24 |10.1 |3.3 |1.2 | rowspan="3" |35 | rowspan="3" | |
NVS 5200M
|Jun 1, 2012 | rowspan="2" |GF108 | rowspan="2" |40/28 |625 |1250 | rowspan="2" |1800 | rowspan="2" |96:16:4 |14.4 |64 |2.5 |10 |240 | rowspan="2" |11 | rowspan="2" |4.5 | rowspan="2" |2.1 |
NVS 5400M
|Jun 1, 2012 |660 |1320 |28.8 |128 |2.64 |10.56 |253.44 |
Tegra GPU
{{main|Tegra}}
Data Center GPUs
=GRID=
{{Row hover highlight}}
class="mw-datatable wikitable sortable sort-under" style="font-size:85%; text-align:center;" |
rowspan=2 | Model
! rowspan=2 | Architecture ! rowspan=2 | Chips ! rowspan=2 | Thread processors ! rowspan=2 | Bus interface ! colspan=2 | Memory ! rowspan=2 | TDP (Watts) |
---|
Bus type
! Size (GiB) |
GRID K1
| rowspan="4" | Kepler |4x GK107 |4x 192 | rowspan="4" |PCIe 3.0 x16 |DDR3 |4x 4 GiB |130 |
GRID K2
|2x GK104-895 |2x 1536 | rowspan="3" |GDDR5 |2x 4 GiB | rowspan="3" |225 |
GRID K340
|4x GK107 |4x 384 |4x 1 GiB |
GRID K520
|2x GK104 |2x 1536 |2x 4 GiB |
=Tesla=
{{Further|Nvidia Tesla}}
{{Nvidia Tesla}}
{{anchor|Console_GPUs}}
Console/Handheld GPUs
{{Row hover highlight}}
- 1 Pixel shaders: vertex shaders: texture mapping units: render output units
- 2 Unified shaders: texture mapping units: render output units
- 3 Unified shaders: texture mapping units: render output units: ray tracing cores: tensor cores
See also
- nouveau (software)
- Scalable Link Interface (SLI)
- TurboCache
- Tegra
- Apple M1
- CUDA
- Nvidia NVDEC
- Nvidia NVENC
- Qualcomm Adreno
- ARM Mali
- Comparison of Nvidia nForce chipsets
- List of AMD graphics processing units
- List of Intel graphics processing units
- List of eponyms of Nvidia GPU microarchitectures
- Imageon by ATI (Now AMD)
References
{{Reflist|colwidth=30em}}
External links
- [http://download.nvidia.com/developer/Papers/2005/OpenGL_2.0/NVIDIA_OpenGL_2.0_Support.pdf OpenGL 2.0 support on Nvidia GPUs (PDF document)]
- [http://developer.download.nvidia.com/opengl/glsl/glsl_release_notes.pdf Release Notes for Nvidia OpenGL Shading Language Support (PDF document)]
{{Nvidia}}
{{Graphics Processing Unit}}
{{DEFAULTSORT:Nvidia graphics processing units}}