Radeon

ATI Radeon is a brand of graphics processing units (GPUs) manufactured since 2000 by ATI Technologies, and subsequently by AMD, as the successor to the Rage line. There are four different groups, which can be differentiated by the DirectX generation they support. More specific distinctions can also be drawn, such as the HyperZ version, the number of pixel pipelines, and, of course, the memory and processor clock speeds.

ATI Radeon Graphics

Invented by ATI

Contents

  • 1 ATI Radeon Processor Generations
  • 2 Radeon Card Brands
  • 3 Product naming scheme
  • 4 Drivers
  • 5 See also
  • 6 References
  • 7 External links

    ATI Radeon Processor Generations

  • R100 (DirectX 7.0, OpenGL 1.3): ATI's first graphics processor to be fully DirectX 7 compliant, introduced in 2000. R100 brought large gains in bandwidth and fill-rate efficiency through the new HyperZ technology. Initial models included the Radeon SDR, DDR and 7000/VE; the final release was the Radeon 7500.
  • R200 (DirectX 8.1, OpenGL 1.4): ATI's second-generation Radeon. This design included ATI's first programmable shader architecture and introduced the more advanced Pixel Shader 1.4. This line includes the Radeon 8500 - 9200 and 9250.
  • R300 (DirectX 9.0, OpenGL 2.0): ATI's DirectX 9.0 technology, released in 2002, incorporated Shader Model 2.0 pixel shaders. Included in this generation are the Radeon 9500 - 9800, X300 - X600 and X1050.
  • R420 (DirectX 9.0b): While heavily based on the previous generation, this line included extensions to the Shader Model 2 feature set. Shader Model 2.0b, the specification ATI and Microsoft defined with this generation, offered somewhat more shader program flexibility. This generation's technology is used in the Radeon X700 - X850.
  • R520 (DirectX 9.0c): ATI's DirectX 9.0c series of graphics cards, with complete Shader Model 3.0 support. Launched in October 2005, this series brought a number of enhancements, including the floating-point render-target technology necessary for HDR rendering with anti-aliasing. Cards released include the X1300 - X1950.
  • R600 (DirectX 10.0, DirectX 10.1 for RV670; OpenGL 3.0): ATI's first series of Radeon GPUs supporting the Direct3D 10.0 specification and the company's second graphics solution to employ unified shader technology. Releases of this platform include the HD 2400, HD 2600 and HD 2900. A die-shrunk derivative, the HD 3000 series, adds DirectX 10.1 support.
  • R700 (DirectX 10.1): Based on the R600 architecture, mostly a bolstered design with many more stream processors, along with improvements to power consumption and GDDR5 support on the high-end RV770 chip. It arrived in late June 2008. The HD 4850 and HD 4870 have 800 stream processors and use GDDR3 and GDDR5 memory respectively.
  • R800 (DirectX 11): ATI's upcoming series. The Radeon R800 is expected to launch on a 40 nm fabrication process, with 2000 stream processors and compatibility with the next major version of the DirectX API, DirectX 11, which is aimed at a launch in late July 2009.

    Radeon Card Brands

  • AMD no longer sells Radeon cards at the retail level. Instead, it sells Radeon GPUs to third-party board manufacturers, who build and sell Radeon-based boards to the OEM and retail channels. Board manufacturers of the Radeon include Diamond Multimedia, Sapphire Technology, AsusTek, HIS (Hightech Information System), MSI (Micro-Star International), PowerColor, Gigabyte, VisionTek and, more recently, XFX and Gainward.

    Product naming scheme

    Since ATI's first DirectX 9-class GPU, the company has followed a naming scheme that relates each product to a market segment.

    Product categories under this scheme are summarized below (a * in the card name denotes a wildcard digit):

  • Enthusiast/high-end: card names **9** and **8**; usual suffixes XTX, XT, XT PE, XL, Pro, GTO, GT; price range above $150; shader amount (VS/PS/SPU)1 75-100% (200% for dual-GPU X2 cards); GDDR3, GDDR4 or GDDR5 memory, 256-bit or 512-bit wide, in 256/512/1024 MiB sizes; dual DVI outputs, with HDMI via dongle on HD 2000 series cards. Example products: X800, X1950, HD 2900, HD 4870.
  • Mainstream: card names **7**, **6** and **5**; usual suffixes XT, XL, Pro, SE, GTO, GT; price range $100-$150; shader amount 37.5-75%; DDR2, GDDR3 or GDDR4 memory, 128-bit wide, in 128/256/512 MiB sizes; D-Sub and DVI or dual DVI outputs, with HDMI via dongle on HD 2000 series cards. Example products: X700, X1600, HD 2600.
  • Budget/value: card names **4**, **3**, **2**, **1**, **0**, plus 7x00, 9000, 9200 and 9250; usual suffixes SE, HM; price range under $99; shader amount 25-50%; DDR2 or GDDR3 memory, 64-bit wide, in 64/128 MiB sizes (HM: 768/1024 MiB); D-Sub and DVI outputs, with HDMI via dongle on HD 2000 series cards. Example products: X300, X1050, X1400, HD 2400.

  • 1 Stream processors are only applicable to Radeon HD 2000 series video cards.

Note: Suffixes indicate different levels of performance. See ATI Video Card Suffixes.

Since the release of the Radeon HD 3000 series products, the previous PRO, XT, GT, and XTX suffixes have been eliminated; products are instead differentiated by changing the last two digits of the model number (for instance, HD 3850 and HD 3870, giving the impression that the HD 3870 has higher performance than the HD 3850).[1] Similar changes to IGP naming were made as well: for the previously launched AMD M690T chipset with side-port memory, the IGP is named "Radeon X1270", while for the AMD 690G chipset the IGP is named "Radeon X1250"; the IGP of the AMD 690V chipset is clocked lower and has fewer functions, and is thus named "Radeon X1200". The new numbering scheme for video products is shown below:

Model number ranges below run in steps of 10.1

  • Enthusiast/high-end: model numbers 800-990; price above $150; shader amount (VS/PS/SPU)2 75-100%; GDDR3, GDDR4 or GDDR5 memory, 256-bit wide, in 256/512/1024 MiB sizes; outputs: 2 DVI, HDMI, DisplayPort (dongle). Products: HD 3850/3870, HD 4830/4850/4870.
  • Mainstream: model numbers 600-790; price $100-$150; shader amount 37.5-75%; DDR2, GDDR3 or GDDR4 memory, 128-bit wide, in 128/256/512/1024 MiB sizes; outputs: D-Sub and DVI, or DVI, 2 DisplayPort and HDMI (dongle). Products: HD 3650, HD 4650, HD 4670.
  • Budget/value: model numbers 350-590; price under $99; shader amount 25-50%; DDR2 or GDDR3 memory, 64-bit wide, in 64/128 MiB sizes (HM: 768/1024 MiB); outputs: D-Sub, DVI, HDMI, DisplayPort (dongle). Products: HD 3450/3470.
  • IGP: model numbers 000-300; price N/A; shader amount 25-50%; UMA plus optional side-port memory (GDDR2/GDDR3), UMA + 16-bit (side-port) wide, 64 MiB + UMA (OS dependent); outputs: D-Sub, DVI, HDMI, DisplayPort, Component (YCbCr). Products: X1270/X1250/X1200, HD 3200/HD 3100/2100.
  • 1 The last two digits denote the variant, similar to the previous suffixes: "70" is in essence the "XT" variant, while "50" is the "Pro" variant.[2]
  • 2 Stream Processors only applicable to Direct3D 10-class video components (Radeon HD 2000/3000 series).
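
As a rough illustration of how the digits map to market segments and variants, here is a minimal Python sketch; the function name and the numeric boundaries are merely conveniences lifted from the ranges listed above, not an official AMD mapping.

    # Hypothetical helper illustrating the HD 3000/4000-series numbering scheme above.
    # Segment boundaries follow the "model number range" figures; the last two digits
    # stand in for the old suffixes ("70" roughly equals XT, "50" roughly equals Pro).
    def classify_radeon_hd(model: int) -> str:
        variant = model % 100                 # e.g. 3870 -> 70, the "XT-like" variant
        position = model % 1000               # position within the generation, 000-990
        if position >= 800:
            segment = "enthusiast/high-end"
        elif position >= 600:
            segment = "mainstream"
        elif position >= 350:
            segment = "budget/value"
        else:
            segment = "IGP"
        return f"HD {model}: {segment}, variant {variant:02d}"

    if __name__ == "__main__":
        for m in (3870, 3850, 4670, 3450, 3200):
            print(classify_radeon_hd(m))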

    Drivers

    Windows

    The ATI Radeon graphics driver package for the Windows operating system is called ATI Catalyst.

    The official ATI Catalyst drivers refuse to work with the mobile versions of the Radeon series due to an agreement with OEM vendors.[3] An alternative is an application called Mobility Modder, a third-party utility which modifies recent desktop Radeon drivers to work with Mobility Radeon graphics cards.

    There are also unofficial modifications available such as Omega drivers or DNA drivers. These drivers typically consist of mixtures of various driver file versions with some registry variables altered and are advertised as offering superior performance or image quality. They are, of course, unsupported, and as such, are not guaranteed to function correctly. Some of them also provide modified system files for hardware enthusiasts to run specific graphics cards outside of their specifications.

    Windows XP Professional x64

    ATI has yet to produce mobile 64-bit drivers for the Windows XP Professional x64 Edition operating system. This may be due to a number of factors. One factor is that most people use the 32-bit version of Windows XP, due not only to video card driver issues but to other driver compatibility issues as well. Nonetheless, it is possible to obtain a working driver for this type of setup; doing so requires an unsupported application such as Modtool.

    Macintosh

    ATI used to offer driver updates only for its retail Mac video cards, but now also offers drivers for all ATI Mac products, including the GPUs in Apple's portable lines. Apple also includes ATI driver updates whenever it releases a new OS update. ATI provides a preference panel for use in Mac OS X called ATI Displays, which can be used with both retail and OEM versions of its cards. Though it gives more control over advanced features of the graphics chipset, ATI Displays has limited functionality compared to the Catalyst for Windows product.

    Linux

    Initially, ATI did not produce Radeon drivers for Linux, instead giving hardware specifications and documentation to Direct Rendering Infrastructure (DRI) developers under various non-disclosure agreements.

    In mid-2004, however, ATI started to support Linux (XFree86, X.Org), hiring a new Linux driver team to produce fglrx. The new proprietary Linux drivers were not a port of the Windows Catalyst drivers but were instead based on the Linux drivers for the FireGL, a card aimed at graphics professionals rather than gamers (the FireGL drivers had worked with Radeons before, but did not officially support them). Since version 4.x in late 2004, however, the display driver portion has been built from the same sources as the Windows Catalyst drivers. The proprietary Linux drivers support neither R100 (Radeon 7000-7500) nor R200 (Radeon 8500-9200, 9250) chips.[4]

    The frequency of driver updates increased in late 2004, with Linux drivers being released every two months, half as often as their Windows counterparts. Since late 2005 this has increased to monthly releases, in line with the Windows Catalyst releases.

    For information on alternative Open Source drivers, see below.

    FreeBSD

    FreeBSD systems have the same open-source support for Radeon hardware as Linux, including 2D and 3D acceleration for Radeon R100, R200, and R300-series chipsets. The R300 support, as with Linux, remains experimental due to being reverse-engineered from ATI's proprietary drivers.

    ATI does not support its proprietary fglrx driver on FreeBSD, though it has been partly ported by a third party as of January 2007. This is in contrast to its main rival, NVIDIA, which has periodically released its proprietary driver for FreeBSD since November 2002 (though 64-bit BSD systems are still not supported as of 2009). In the meantime, the open-source support is similar to that on Linux.

    MidnightBSD

    MidnightBSD supports 2D and 3D acceleration for Radeon R100, R200, and R300 chipsets. This support is similar to FreeBSD and Linux.

    AmigaOS

    With the introduction of AmigaOS 4, AmigaOS users officially gained support for R100/R200 Radeon cards, with support for the R300 chips being planned, although this depends on the availability of hardware documentation from ATI or of the open-source drivers from the Linux community.

    Hans de Ruiter is developing R5xx and R6xx drivers from AMD documentation. At present (7 March 2009) there is a basic P96 2D driver, which works with the PCI Radeon X1300, X1550 and HD 2400 cards that he is using for development and testing.

    BeOS

    Although ATI does not provide its own drivers for BeOS, it provides hardware and technical documentation to the Haiku Project, which provides drivers with full 2D and video in/out support. ATI is the sole graphics manufacturer still supporting BeOS in any way.

    MorphOS

    MorphOS supports 2D and 3D acceleration for Radeon R100 and R200 chipsets.[5]

    FOSS drivers

    On September 12, 2007, AMD released documentation for the RV630 (Radeon HD 2600 PRO and Radeon HD 2600 XT) and M56 (Radeon Mobility X1600) chips as part of its strategic open-source driver development initiative.[6] This initial "documentation drop" provided enough programming information for a skeleton display-detection and mode-setting driver to be released; this was version 1.0.0 of the "radeonhd" driver. Further documentation releases and a baseline open-source driver are expected to follow.[7] The register reference guides for the M76 (Mobility Radeon HD 2600/2700/3600/3800 series) and RS690 (AMD 690 chipset series) were also released on January 4, 2008, and are available from the ATI website.[8]

    All specs are available without an NDA. AMD is collaborating with Novell to build a new, free driver called RadeonHD based on these specifications. At present it is reasonably stable, and supports DRI for r500 series cards. Its development can be tracked using the git repository at the Freedesktop.org website. [9]

    Also available is a driver known variously as "ati", "xf86-video-ati", "video-ati" and "radeon". The main difference between video-ati and radeonhd used to be that video-ati uses AtomBIOS and radeonhd does not. AtomBIOS is an abstraction layer filled in by AMD to quickly add support for a new card or card series. AtomBIOS speeds up development of video-ati, but some have argued that it makes the open-source driver more dependent on legacy code and harder to modify.[10] In July 2008, development began on enabling radeonhd to use AtomBIOS too, which should greatly shorten the time needed to bring up initial support for new hardware. This work was started in a branch named atombios_support and, as of September 2008, had not yet been merged into the master branch.[11][12]

GeForce 200 Series

The GeForce 200 Series is the tenth generation of NVIDIA's GeForce graphics processing units. The series also represents the continuation of the company's unified shader architecture introduced with the GeForce 8 Series and the GeForce 9 Series.

The GeForce GTX 280 and 260 are based on the same processor core. During manufacturing, GTX chips are binned and separated through defect testing of the core's logic functionality. Those that fail to meet the GTX 280 hardware specification are re-tested and binned as GTX 260 (which is specified with fewer stream processors, fewer ROPs and a narrower memory bus). In late 2008, in order to create more parity between the GTX 260 and the competing HD 4870, Nvidia re-released the GTX 260 with 216 stream processors, up from 192. Effectively, there are two GTX 260 cards in production with non-trivial performance differences.

As of June 2008, the G200 is the largest commercial GPU ever constructed. It consists of 1.4 billion transistors covering a 576 mm² die surface area built on a 65 nm process.[1] To date, the G200 is the largest CMOS-logic chip that has been fabricated at the TSMC foundry.

NVIDIA GeForce 200 Series
Codename(s) G92a/b, G200a/b
Release date 2008/2009
Mid-Range GPU GTS 250, GTX 260, GTX 260 Core 216
High-end GPU GTX 275, GTX 280, GTX 285, GTX 295
Direct3D and Shader version D3D 10.0, Model 4.0

Contents

  • 1 Technical summary
  • 2 Future
  • 3 See also
  • 4 References
  • 5 External links

    Technical summary

    All figures below are for the reference design. Core config is given as stream processors : texture units : ROPs, and clocks as core/shader/memory in MHz (memory clocks are effective rates).

  • GeForce GTS 250 (March 3, 2009): G92a/b, 65/55 nm, 0.754 billion transistors, 324/230 mm² die, PCIe x16 2.0, 512 or 1024 MiB; core config 128:64:16; clocks 738/1836/2200 MHz; fill rate 11.808 GP/s pixel, 47.232 GT/s texture; memory 70.4 GiB/s, GDDR3, 256-bit; DirectX 10, OpenGL 3.1[2]; 705 GFLOPs (MADD+MUL); 145 W TDP. Some 512 MB cards are rebranded GeForce 9800 GTX and GTX+ cards.
  • GeForce GTX 260 (June 26, 2008): G200, 65 nm, 1.4 billion transistors, 576 mm², PCIe x16 2.0, 896 MiB; core config 192:64:28; clocks 576/1242/1998 MHz; fill rate 16.128 GP/s, 36.864 GT/s; memory 111.9 GiB/s, GDDR3, 448-bit (32x14); DirectX 10, OpenGL 3.1[2]; 715 GFLOPs; 182 W TDP.
  • GeForce GTX 260 216SP[3] (September 16, 2008): G200, 65 nm, 1.4 billion transistors, 576 mm², PCIe x16 2.0, 896 MiB; core config 216:72:28; clocks 576/1242/1998 MHz; fill rate 16.128 GP/s, 41.472 GT/s; memory 111.9 GiB/s, GDDR3, 448-bit (32x14); DirectX 10, OpenGL 3.1[2]; 805 GFLOPs; 182 W TDP.
  • GeForce GTX 260 216SP 55 nm[4] (December 22, 2008): G200b, 55 nm, 1.4 billion transistors, 470 mm², PCIe x16 2.0, 896 MiB; core config 216:72:28; clocks 576/1242/1998 MHz; fill rate 16.128 GP/s, 41.472 GT/s; memory 111.9 GiB/s, GDDR3, 448-bit (32x14); DirectX 10, OpenGL 3.1[2]; 805 GFLOPs; 171 W TDP.
  • GeForce GTX 275[5] (April 2, 2009): G200b, 55 nm, 1.4 billion transistors, 470 mm², PCIe x16 2.0, 896 MiB; core config 240:80:28; clocks 633/1404/2268 MHz; fill rate 17.724 GP/s, 50.6 GT/s; memory 127.0 GiB/s, GDDR3, 448-bit (32x14); DirectX 10, OpenGL 3.1[2]; 1010.88 GFLOPs; 219 W TDP.
  • GeForce GTX 280[6][7] (June 17, 2008): G200, 65 nm, 1.4 billion transistors, 576 mm², PCIe x16 2.0, 1024 MiB; core config 240:80:32; clocks 602/1296/2214 MHz; fill rate 19.264 GP/s, 48.16 GT/s; memory 141.7 GiB/s, GDDR3, 512-bit (32x16); DirectX 10, OpenGL 3.1[2]; 933 GFLOPs; 236 W TDP.
  • GeForce GTX 285[8][9] (January 15, 2009): G200b, 55 nm, 1.4 billion transistors, 470 mm², PCIe x16 2.0, 1024 MiB; core config 240:80:32; clocks 648/1476/2484 MHz; fill rate 20.736 GP/s, 51.84 GT/s; memory 159.0 GiB/s, GDDR3, 512-bit (32x16); DirectX 10, OpenGL 3.1[2]; 1062.72 GFLOPs; 183 W TDP.
  • GeForce GTX 295 (January 8, 2009): G200b, 55 nm, 2 x 1.4 billion transistors, 2 x 470 mm², PCIe x16 2.0, 2 x 896 MiB; core config 2 x 240:80:28; clocks 576/1242/1998 MHz; fill rate 2 x 16.128 GP/s, 2 x 46.08 GT/s; memory 2 x 111.9 GiB/s, GDDR3, 2 x 448-bit (32x14); DirectX 10, OpenGL 3.1[2]; 1788.48 GFLOPs; 289 W TDP.
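
    The fill-rate, bandwidth and GFLOPs figures above follow directly from each card's core configuration, reference clocks and bus width. Below is a minimal Python sketch of that arithmetic for the GTX 280; it assumes the usual conventions that the core config is stream processors : texture units : ROPs and that each stream processor is counted as a MADD plus a MUL (3 FLOPs) per shader clock.

        # Re-derive the GTX 280 entry from its core config and reference clocks.
        # Assumed conventions (not spelled out in the table): config = SP:TMU:ROP, and
        # GFLOPs (MADD+MUL) = SPs * shader clock * 3 FLOPs per clock.
        sps, tmus, rops = 240, 80, 32                      # GTX 280 core config
        core_mhz, shader_mhz, mem_mhz = 602, 1296, 2214    # reference clocks (memory is the effective rate)
        bus_bits = 512

        pixel_fill_gps   = rops * core_mhz / 1000          # GPixels/s
        texture_fill_gts = tmus * core_mhz / 1000          # GTexels/s
        gflops           = sps * shader_mhz * 3 / 1000     # GFLOPs (MADD+MUL)
        bandwidth_gbs    = mem_mhz * bus_bits / 8 / 1000   # GB/s

        print(pixel_fill_gps, texture_fill_gts, gflops, bandwidth_gbs)
        # 19.264 48.16 933.12 141.696 -- matching the 19.264 / 48.16 / 933 / 141.7 figures above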

    Future

    According to Expreview and other sources, Nvidia plans to release a single-PCB version of the GTX 295 graphics card. The performance specifications of the new card will be identical to the dual-PCB version; the speculated improvements are lower power consumption, better thermal performance and cheaper manufacturing. The single-PCB version of the GTX 295 is expected to be released in late May 2009. The first confirmed model will be from Inno3D and is named the "GTX 295 Platinum Edition". EVGA will also release a single-PCB model of the GTX 295 nicknamed the "GTX 295 Co-op Edition".[10]

    At CES 2009, Nvidia announced the low-end mobile version of the GTX 200 series, known as the G 100M series. Preliminary benchmarks have found that some chips are up to 50% faster than the GeForce 9 series parts that precede them. The first three chips are the G 105M, which replaces the 9300M GS; the G 110M, which compares to the recently announced 9400M G; and the GT 130M, which replaces the 9600M GT.

    Nvidia has also released OEM budget and mainstream cards under the 100 series with similar naming schemes to the mobile cards. Current cards include the G 100, GT 120, GT 130 and GTS 150. The GT 120 is a rebranded 9500 GT with improved thermal designs while the GT 130 is a modified version of the 9600 GSO. The GTS 150 is an OEM version of the GTS 250 with some slight changes.

    Nvidia has also announced plans to launch a revision and rebrand of the 9800 GT (8800 GT), which is based on the G92 chipset, slated for an April release and called the GTS 240. The GTS 240 was to have been an overclocked version of the 9800 GT, but there are reports that it has been cancelled. The GTS 250 is basically a 55 nm G92b-based 9800 GTX+ GPU on a new P361 PCB, which Nvidia internally calls D10P2. The differences are mainly in the power design; the core and RAM speeds are identical to the 9800 GTX+, but power consumption has been lowered. All of the GTX 200 series cards support OpenGL 3.0.

    Even more recently, Nvidia has launched the high-end mobile version of the GTX 200 series. The first two chips are the GTX 260M and the GTX 280M, which are fabricated on a smaller 55 nm process, allowing for 128 stream processors, up from the 9800M GTX's maximum of 112. Nvidia has also released more midrange mobile versions of the GT200: the GT 210M, GT 230M, GT 240M, GTS 250M, and GTS 260M. It is hard to place where these cards will slot in compared to the lower-end G 100M parts, but it is believed these mobile cards will eventually replace the 100M series entirely.

    Nvidia will be launching a GTX 300 series GPU as its flagship model. The GTX 300 series is projected to launch in Q4 2009, with the first products to be fabricated on TSMC's 45 nanometer manufacturing process. It is rumored to be DirectX 11 compatible and, like AMD's recent GPUs, to use GDDR5 RAM to reduce manufacturing cost. According to information released by The Bright Side of News on April 22, 2009, the G300 architecture will feature a cGPU design. The cGPU is much closer to traditional CPUs and will be radically different from previous generations. The G300 chipset will use MIMD instead of SIMD.

GeForce 9M Series

All graphics processing units in the GeForce 9M series feature:

  • Increased performance for similar power draw compared to GeForce 8M series for midrange and mid-high range notebooks,
  • DirectX 10.0 and OpenGL 2 compatibility,
  • 16X antialiasing, and
  • Full HD DVD / Blu-ray hardware decoding.

    9100M G[32]

  • 8 Stream Processors.
  • 450 MHz core clock.
  • 1100 MHz shader clock.
  • Memory clock depends on system memory.
  • Up to 256 MB shared memory, with TurboCache.
  • 64-bit memory interface (single-channel mode) / 128-bit memory interface (dual-channel mode).
  • Memory bandwidth depends on system memory.
  • 1.8 billion texels/s texture fill rate.
  • (specification based on Acer Aspire 4530 using GPU-Z v0.2.8 utility)

    9200M GS[33]

  • 8 Stream Processors.
  • 529 MHz core clock.
  • 1300 MHz shader clock.
  • 400 MHz memory clock.
  • Up to 256 MB memory.
  • 64-bit memory interface.
  • 6.4 GB/s memory bandwidth.
  • 2.1 GPixel/s pixel fill rate.
  • 4.2 GTexel/s texture fill rate.

    9300M G[34]

  • 16 Stream Processors.
  • 400 MHz core clock.
  • 800 MHz shader clock.
  • 600 MHz memory clock.
  • Up to 256 MB memory.
  • 64-bit memory interface.
  • 4.8 GB/s memory bandwidth.
  • 3.2 billion texels/s texture fill rate.

    9300M GS[35]

  • 16 Stream Processors.
  • 580 MHz core clock.
  • 1450 MHz shader clock.
  • 800 MHz memory clock.
  • Up to 512 MB memory.
  • 64-bit memory interface.
  • 6.4 GB/s memory bandwidth.
  • 4.6 billion texels/s texture fill rate.

    9400M G[36]

  • 16 Stream Processors.
  • Memory clock depends on system memory.
  • 64-bit memory interface (single-channel mode) / 128-bit memory interface (dual-channel mode).
  • Memory bandwidth depends on system memory.
  • 3.6 billion texels/s texture fill rate.

    9500M G[37]

  • 16 Stream Processors.
  • 500 MHz core clock.
  • 1200 MHz shader clock.
  • 800 MHz memory clock.
  • Up to 1024 MB memory.
  • 128-bit memory interface.
  • 12.8 GB/s memory bandwidth.
  • ? billion texels/s texture fill rate.

    9500M GS[38]

  • 32 Stream Processors.
  • 475 MHz core clock.
  • 950 MHz shader clock.
  • 700 MHz memory clock.
  • Up to 512 MB memory.
  • 128-bit memory interface.
  • 22.4 GB/s memory bandwidth.
  • 7.6 billion texels/s texture fill rate.

    9600M GS[39]

  • G96 core.
  • 32 Stream Processors.
  • 430 MHz core clock.
  • 1075 MHz shader clock.
  • 800/1600 MHz memory clock (effective).
  • Up to 1024 MB memory.
  • 128-bit memory interface.
  • 12.8 GB/s (with DDR2 type) or 25.6 GB/s (with GDDR3 type) memory bandwidth.
  • 6.8 billion texels/s texture fill rate.
  • 103 GigaFLOPS.

    9600M GT[40]

  • 32 Stream Processors.
  • 500 MHz core clock.
  • 1250 MHz shader clock.
  • 1000 MHz memory clock.
  • Up to 1024 MB memory.
  • 128-bit memory interface.
  • 25.6 GB/s memory bandwidth.
  • 8.0 billion texels/s texture fill rate.

    9650M GS [41]

  • G84 core
  • 32 Stream Processors.
  • 625 MHz core clock.
  • 1250 MHz shader clock.
  • 800 MHz memory clock.
  • Up to 512 MB memory.
  • 128 bit memory interface.
  • 25.6 GB/s memory bandwidth.
  • 10 billion texels/s texture fill rate.

    9650M GT [42]

  • G96 core (65/55nm).
  • 32 Stream Processors.
  • 550 MHz core clock.
  • 1325 MHz shader clock.
  • 800 MHz memory clock.
  • Up to 1024 MB memory.
  • 128 bit memory interface.
  • 25.6 GB/s memory bandwidth.
  • 8.8 billion texels/s texture fill rate.

    9700M GT [4]

  • G96 core.
  • 32 Stream Processors.
  • 625 MHz core clock.
  • 1550 MHz shader clock.
  • 800 MHz memory clock.
  • 128 bit memory interface.
  • 25.6 GB/s memory bandwidth.
  • 10.0 billion texels/s texture fill rate.
  • 148.8 GigaFLOPS.

    9700M GTS [5]

  • G94 core.
  • 48 Stream Processors.
  • 530 MHz core clock.
  • 1325 MHz shader clock.
  • 800 MHz memory clock.
  • 256 bit memory interface.
  • 51.2 GB/s memory bandwidth.
  • 12.7 billion texels/s texture fill rate.
  • 190.8 GigaFLOPS.

    9800M GS [6]

  • G94 core.
  • 64 Stream Processors.
  • 530 MHz core clock.
  • 1325 MHz shader clock.
  • 800 MHz memory clock.
  • 256 bit memory interface.
  • 51.2 GB/s memory bandwidth.
  • 17.0 billion texels/s texture fill rate.
  • 254 GigaFLOPS.

    9800M GTS [7]

  • G94 core.
  • 64 Stream Processors.
  • 600 MHz core clock.
  • 1500 MHz shader clock.
  • 800 MHz memory clock.
  • 256 bit memory interface.
  • 51.2 GB/s memory bandwidth.
  • 19.2 billion texels/s texture fill rate.
  • 288 GigaFLOPS.

    9800M GT [8]

  • G92 core.
  • 96 Stream Processors.
  • 500 MHz core clock.
  • 1250 MHz shader clock.
  • 800 MHz memory clock.
  • 256 bit memory interface.
  • 51.2 GB/s memory bandwidth.
  • 24.0 billion texels/s texture fill rate.
  • 360 GigaFLOPS.

    9800M GTX [9]

  • G92 core.
  • 112 Stream Processors.
  • 500 MHz core clock.
  • 1250 MHz shader clock.
  • 800 MHz memory clock.
  • 256 bit memory interface.
  • 51.2 GB/s memory bandwidth.
  • 28.0 billion texels/s texture fill rate.
  • 420 GigaFLOPS.

GeForce 9800 Series

The GeForce 9800 series contains the GX2 (dual GPU), GTX and GT variants.[15]

GeForce 9800 GX2

On March 18, 2008 the GeForce 9800 GX2 was officially launched.

The GeForce 9800 GX2 has the following specifications: [16][17].

  • Dual PCBs, dual GPU design
  • 197 W power consumption [18].
  • Two 65 nm process GPUs, with 256 total stream processors (128 per PCB)[19].
  • Supports Quad SLI.
  • Performance roughly equivalent to two underclocked GeForce 8800 GTS 512 (G92) video cards in SLI mode.
  • 1 GiB (512 MB per PCB) memory framebuffer.
  • Supports DirectX 10, Shader Model 4, OpenGL 2.1, and PCI-Express 2.0.
  • Outputs include two DVI ports, an HDMI output, and an on-board S/PDIF input connector for routing audio through the HDMI cable [20].
  • An 8-pin and a 6-pin power connector.
  • Clocks (Core/Shader/Memory): 600 MHz/1500 MHz/2000 MHz [21]
  • 256-bit memory interface[21]
  • 128 GB/s memory bandwidth[21]
  • Release date: March 18, 2008
  • Launch price of $599 [22].

    GeForce 9800 GTX

    On April 1, 2008 the GeForce 9800 GTX was officially launched. It was basically an 8800 GTS 512 MB with two SLI connectors, higher clock speeds, and support for Nvidia Hybrid Power, a technology that allows the discrete GPU to shut off during non-resource-intensive applications and use the integrated GPU instead. These extra features, however, came at a higher price.

    Taken from an eVGA specification sheet:[23]

  • 128 Stream Processors.
  • Clocks (Core/Shader/Memory): 675 MHz/1688 MHz/2200 MHz
  • 256-bit memory interface.
  • 512 MB of GDDR3 memory.
  • 70.4 GB/s memory bandwidth
  • Texture Fill Rate of 43.2 (billion/s).
  • DirectX 10, Shader Model 4.0, OpenGL 2.1, and PCI-Express 2.0.
  • Outputs include two DVI ports, an HDMI output (using the included nVidia DVI-to-HDMI adapter), and an on-board S/PDIF input connector for routing audio through the HDMI cable.
  • Release date was 2008-04-01. [24]
  • Launch Price of $349. [25]

In July 2008 nVidia released the 55 nm refresh of the 9800 GTX: the 9800 GTX+. It has faster core (738 MHz) and shader (1836 MHz) clocks.

GeForce 9800 GT

The 9800 GT is effectively a rebranded 8800 GT, although some are being manufactured using a newer 55 nm process instead of the older 65 nm process that debuted on the 8800 GT.[26]

ASUSTeK has released a 9800 GT with Tri-SLI support.[27]

Taken from the nVidia product detail page.[28]

  • 112 Stream Processors
  • 512-1024 MB standard memory
  • 256-bit memory interface width
  • 600 MHz graphics clock
  • 1500 MHz processor clock
  • 900 MHz memory clock
  • 33.6 billion/s texture fill rate
  • 57.6 GB/s memory bandwidth
  • DirectX 10, Shader Model 4.0, OpenGL 2.1, and PCI-Express 2.0.

GeForce 9600 Series

GeForce 9600 GT

On February 21, 2008 the GeForce 9600 GT was officially launched.

  • 55/65 nm G94 GPU.
  • 64 Stream Processors.
  • 16 Raster Operation(ROP) units, 32 Texture Address(TA)/Texture Filter(TF) units.
  • 20.8 billion texels/s fillrate.
  • 650 MHz core clock, with a 1625 MHz unified shader clock.
  • 1800 MHz memory, with a 256-bit memory interface.
  • 256 MB, 512 MB, or 1 GB of GDDR3 memory[10].
  • 57.6 GB/s memory bandwidth for boards configured with GDDR3 900 MHz memory.
  • 505M transistor count
  • DirectX 10, Shader Model 4.0, OpenGL 2.1, and PCI-Express 2.0[11].
  • Compatible with HDCP, though the implementation depends on the manufacturer.
  • Supports CUDA and the Quantum Effects physics processing engine.
  • Almost double the performance of the previous Nvidia mid-range card, the GeForce 8600GTS.
  • Estimated by NVIDIA to cost between $169–$189 MSRP.

Source: [12]
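
The quoted 57.6 GB/s bandwidth is simply the effective memory data rate multiplied by the bus width. A small sketch of that calculation, assuming the usual double-data-rate doubling of the 900 MHz GDDR3 clock:

    # Memory bandwidth of the GeForce 9600 GT reference board (900 MHz GDDR3, 256-bit bus).
    # Assumption: GDDR3 transfers on both clock edges, so the effective rate is 1800 MT/s
    # (the "1800 MHz memory" listed above).
    memory_clock_mhz = 900
    effective_mts = memory_clock_mhz * 2        # 1800 million transfers per second
    bus_width_bits = 256
    bytes_per_transfer = bus_width_bits // 8    # 32 bytes moved per transfer

    bandwidth_gbs = effective_mts * bytes_per_transfer / 1000
    print(bandwidth_gbs)                        # 57.6 GB/s, matching the figure above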

GeForce 9600 GSO

The GeForce 9600 GSO was originally essentially a renamed 8800 GS. This tactic has been seen before, in products such as the GeForce 7900 GTO, to clear unsold stock when it is made obsolete by the next generation. Just like the 8800 GS, the 9600 GSO features 96 stream processors, a 550 MHz core clock with shaders clocked at 1,375 MHz, and 384 MB of memory clocked at 800 MHz on a 192-bit memory bus.[13]

GeForce 9600 GSO 512

After clearing the old 8800 GS stock, Nvidia revised the specification with a new core and 512 MB of memory clocked at 900 MHz on a 256-bit bus.[14] For these cards the number of stream processors is halved to 48, with the core frequency increased to 650 MHz and the shader frequency increased to 1625 MHz.

GeForce 9500 Series

GeForce 9500 GT

On July 29, 2008 the Geforce 9500 GT was officially launched.

  • 65 nm G96 GPU
  • 32 Stream Processors.
  • 550 MHz core, with a 1400 MHz unified shader clock.
  • 8.8 billion texels/s fillrate.
  • 256 MB/512MB 1600 MHz GDDR3 memory or 256 MB 1000 MHz GDDR2 memory, both with a 128-bit memory bus.
  • 25.6 GB/s memory bandwidth for boards configured with GDDR3 800 MHz memory.
  • Supports DirectX 10, Shader Model 4.0, OpenGL 2.1, and PCI-Express 2.0.
  • Supports 2nd generation PureVideo 2 technology with partial VC1 decoding.[8][9]

GeForce 9500 GS

  • 65 nm G96 GPU
  • 32 Stream Processors.
  • 8 Raster Operations (ROP) units
  • 550 MHz Core, with a 1375 MHz unified shader clock.
  • 8.8 billion texels/s Fillrate.
  • 512MB 1000MHz DDR2 memory with a 128-bit memory bus.
  • 16.0 GB/s memory bandwidth.
  • Supports DirectX 10, Shader Model 4.0, OpenGL 2.1, and PCI-Express 2.0
  • Supports 2nd generation PureVideo 2 technology with partial VC1 decoding.

GeForce 9400 Series

GeForce 9400 GT

On August 27, 2008 the Geforce 9400 GT was officially launched.

  • 65 nm G96 GPU
  • 16 Stream Processors.
  • 550 MHz core, with a 1350 MHz unified shader clock.
  • 4.4 billion texels/s fillrate.
  • 256 MB/512 MB/1024 MB 800 MHz DDR2 or 256 MB 1600 MHz GDDR3[4] , both with a 128-bit memory bus
  • 12.8 GB/s memory bandwidth for boards configured with DDR2 800 MHz memory.
  • Supports DirectX 10, Shader Model 4.0, OpenGL 2.1, and PCI-Express 2.0.
  • Supports 2nd generation PureVideo technology and HybridPower technology.[5][6][7]

GeForce 9 Series

The GeForce 9 Series is the ninth generation of NVIDIA's GeForce series of graphics processing units, the first of which was released on February 21, 2008.

NVIDIA GeForce 9 Series
Codename(s) G92a/b, G94a/b, G96a/b
Release date 2008
Entry-level GPU 9100, 9200, 9300 (IGP), 9400 (both integrated and discrete variants), 9500
Mid-Range GPU 9600
High-end GPU 9800
Direct3D and Shader version D3D 10, Model 4


GeForce 8M Series

On May 10, 2007, NVIDIA announced the availability of their GeForce 8 notebook GPUs through select OEMs. So far the lineup consists of the 8400M, 8600M, 8700M and 8800M series chips.[36] NVIDIA has announced that some of its graphics chips have a higher-than-expected rate of failure due to overheating when used in particular notebook configurations. Some major laptop manufacturers are making adjustments to fan settings and issuing firmware updates to help delay the occurrence of any potential GPU failure. In late July, Dell released a set of BIOS updates that made the laptop fans spin more frequently.[37] As of mid-August, NVIDIA has yet to give further details publicly, though it has been heavily rumored that all or most of the 8400 and 8600 cards have this issue.

GeForce 8400M Series

The GeForce 8400M is the entry level series for the GeForce 8M chipset. Normally found on midrange laptops as an alternate solution to integrated graphics, the 8400M is designed for watching high definition video content rather than gaming. Versions include the 8400M G, 8400M GS, and 8400M GT. While these GPUs are not oriented for high-end gaming, the GDDR3-equipped 8400M-GT can handle most modern games at medium settings,[38] and is suitable for occasional gaming.

GeForce 8600M Series

The GeForce 8600M is offered in midrange laptops as a mid-range performance solution for enthusiasts who want to watch high-definition content such as Blu-ray Disc and HD DVD movies and play current and some future games with decent settings. Versions include the 8600M GS and 8600M GT, and provide decent gaming performance (due to the implementation of GDDR3 memory in the higher-end 8600M models) for current games.

GeForce 8700M Series

The GeForce 8700M was developed for the high-end market. Currently the only version is the 8700M GT. This chipset is available on high-end laptops such as the Dell XPS M1730, Sager NP5793, and Toshiba Satellite X205. While this card is considered by most in the field to be a decent mid-range card, it is hard to classify the 8700M GT as high-end due to its 128-bit memory bus; it is essentially an overclocked 8600M GT GDDR3 mid-range card.[39] However, it shows strong performance in a dual-card SLI configuration, and provides decent gaming performance in a single-card configuration.[40]

GeForce 8800M Series

The GeForce 8800M was developed to succeed the 8700M in the high-end market, and can be found in high-end gaming notebook computers.

Versions include the 8800M GTS and 8800M GTX. These were released as the first truly high-end mobile Geforce 8 Series GPUs, each with a 256-bit memory bus and a standard 512 megabytes of GDDR3 memory, and provide high-end gaming performance equivalent to many desktop GPUs. In SLI, these can produce 3DMark06 results in the high thousands.[40]

Laptop models which include the 8800M GPUs are: Sager NP5793, Sager NP9262, Alienware m15x and m17x, HP HDX9494NR and Dell M1730. Clevo also manufactures similar laptop models for CyberPower, Rock, and Sager (among others) - all with the 8800M GTX, while including the 8800M GTS in the Gateway P-6831 FX and P-6860 FX models.

GeForce 8800 Series

The 8800 series, codenamed G80, was launched on November 8, 2006 with the release of the GeForce 8800 GTX and GTS. A 320 MB GTS was released on February 12, and the Ultra was released on May 2, 2007. The cards are larger than their predecessors, with the 8800 GTX measuring 10.6 in (~26.9 cm) in length and the 8800 GTS measuring 9 in (~23 cm). Both cards have two dual-link DVI connectors and an HDTV/S-Video out connector. The 8800 GTX requires 2 PCIe power inputs to keep within the PCIe standard, while the GTS requires just one.


8800 GS

The 8800 GS is a trimmed-down 8800 GT with 96 stream processors and 384 MB of RAM on a 192-bit bus.[14] In May 2008, it was rebranded as the 9600 GSO in an attempt to spur sales.

On April 28, 2008, Apple announced an updated iMac line featuring an 8800 GS.[15] However, the GPU is actually a rebranded NVIDIA GeForce 8800M GTS.[citation needed] It features up to 512 MB of 800 MHz GDDR3 video memory, 64 unified stream processors, a 500 MHz core speed, a 256-bit memory bus width, and a 1250 MHz shader clock.[16]


8800 GTX / 8800 Ultra

The 8800 GTX is equipped with 768 MB GDDR3 RAM. The 8800 series replaced the GeForce 7950 Series as NVIDIA's top-performing consumer GPU. GeForce 8800 GTX and GTS use identical GPU cores, but the GTS model disables parts of the GPU and reduces RAM size and bus width to lower production cost.

As of September 2007, the G80 was the largest commercial GPU ever constructed. It consists of 681 million transistors covering a 480 mm² die surface area built on a 90 nm process. (In fact, the G80's total transistor count is ~686 million, but since the chip was made on a 90 nm process, and due to process limitations and yield feasibility, NVIDIA had to break the main design into two chips: the main shader core at 681 million transistors and the NV I/O core of about ~5 million transistors, bringing the entire G80 design to ~686 million transistors.)

A minor manufacturing defect related to a resistor of improper value caused a recall of the 8800 GTX models just two days before the product launch, though the launch itself was unaffected.[17]


The GeForce 8800 GTX was by far the fastest GPU when first released, and 13 months after its initial debut it still remained one of the fastest. The GTX has 128 stream processors clocked at 1.35 GHz, a core clock of 575 MHz, and 768 MB of 384-bit GDDR3 memory at 1.8 GHz, giving it a memory bandwidth of 86.4 GB/s. The card performs faster than a single Radeon HD 2900 XT, and faster than 2 Radeon X1950 XTXs in Crossfire[clarification needed] or 2 Geforce 7900 GTXs in SLI.[clarification needed] The 8800 GTX also supports HDCP, but one major flaw is its older NVIDIA Purevideo processor that uses more CPU resources. Originally retailing for around US$600, prices came down to under US$400 before it was discontinued. The 8800 GTX is also very power hungry, using up to 185 watts of power and requiring two PCI-E power connectors to operate. The 8800 GTX also has 2 SLI connector ports, allowing it to support NVIDIA 3-way SLI for users who run demanding games at extreme resolutions such as 2560x1600.

The 8800 Ultra, retailing at a higher price,[clarification needed] is identical to the GTX architecturally, but features higher-clocked shaders, core and memory. Nvidia later[when?] told the media the 8800 Ultra was a new stepping[clarification needed] that created less heat, allowing it to clock higher. Originally retailing from $800 to $1000, most users thought the card to be a poor value, offering only 10% more performance than the GTX but costing hundreds of dollars more. Prices dropped to as low as $500 before it was discontinued on January 23, 2008. The core clock of the Ultra runs at 612 MHz, the shaders at 1.5 GHz, and the memory at 2.16 GHz, giving the Ultra a theoretical memory bandwidth of 103.7 GB/s. It has 2 SLI connector ports, allowing it to support NVIDIA 3-way SLI. An updated dual-slot cooler was also implemented, allowing for quieter and cooler operation at higher clock speeds.[18]


8800 GT


The 8800 GT, codenamed G92, was released on October 29, 2007. The card is the first to transition to the 65 nm process, and supports PCI-Express 2.0.[19] It has a single-slot cooler, as opposed to the dual-slot cooler on the 8800 GTS and GTX, and uses less power than the GTS and GTX due to its 65 nm process. While its core processing power is comparable to that of the GTX, the 256-bit memory interface and the 512 MB of GDDR3 memory often hinder its performance at very high resolutions and graphics settings. The 8800 GT, unlike other 8800 cards, is equipped with the PureVideo 2 engine for GPU-assisted decoding of the H.264 and VC-1 codecs. Performance benchmarks at stock speeds place it above the 8800 GTS (640 MB and 320 MB versions) and slightly below the 8800 GTX. A 256 MB version of the 8800 GT with lower stock memory speeds (1.4 GHz as opposed to 1.8 GHz) but the same core is also available. Performance benchmarks have shown that the 256 MB version has a considerable disadvantage compared to its 512 MB counterpart, especially in newer games such as Crysis. Some manufacturers also make models with 1 GB of memory, and at large resolutions with big textures a performance difference can be seen in benchmarks. These models are also more likely to occupy two slots.

The release of this card presents an odd dynamic to the graphics processing industry. At an NVIDIA-projected initial street price of around $200, this card outperforms the ATI flagship HD 2900 XT and HD 3870 in most situations, and even NVIDIA's own 8800 GTS 640 MB (previously priced at an MSRP of $400). The card, only marginally slower in synthetic and gaming benchmarks than the 8800 GTX, also takes much of the value away from NVIDIA's own high-end card. This release was shortly followed by the (EVGA) 8800 GTS SSC (the original 8800 GTS re-released with 96+ (112) shader processor units) and ATI's counter, the HD 3800 series.

The remaining 8800 GT cards are now being re-branded and sold under the 9800 GT name. (Once stocks of G92 chips run out, 9800 GT will receive a 55 nm chip.)[citation needed]

Compatibility issue with PCI-E 1.0a

Shortly after the release, an incompatibility issue with older PCI Express 1.0a motherboards was unmasked. When using the PCI Express 2.0 compliant 8800 GT in some motherboards with PCI Express 1.0a slots, the card would not produce any display image, but the computer would often boot (with the fan spinning at a constant 100%). The incompatibility is mostly with motherboards with VIA PCI-E 1.0a chipsets.[citation needed]

Some graphics cards had a workaround, which was to re-flash the graphics card's BIOS with an older GEN1 BIOS. However, this effectively made it a PCI Express 1.0 card, unable to use PCIe 2.0 functions. This could be considered a non-issue, since the card itself could not saturate even a regular PCIe 1.0 slot, so there was no noticeable performance reduction. Flashing the video card BIOS also voided the warranties of most (if not all) video card manufacturers, making it a less-than-optimal way of getting the card to work properly. Another workaround is to flash the BIOS of the motherboard to the latest version, which, depending on the motherboard manufacturer, may contain a fix. In relation to this compatibility issue, the high numbers of cards reported as DOA (as much as 13-15%) were believed to be inaccurate. When it was revealed that the G92 8800 GT and 8800 GTS 512 MB were going to be designed with PCI Express 2.0 connections, NVIDIA claimed that all cards would have full backwards compatibility, but failed to mention that this was only true for PCI Express 1.1 motherboards. The source for the BIOS flash did not come from NVIDIA or any of its partners, but rather from ASRock, a motherboard producer, which mentioned the fix in one of its motherboard FAQs. ASUSTeK, which sells the 8800 GT under its own brand, posted a newer version of its 8800 GT BIOS on its website, but did not mention that it fixed this issue. EVGA also posted a new BIOS to fix this issue.[20]

8800 GTS

The first releases of the 8800 GTS line, in November 2006, came in 640 MB and 320 MB configurations of GDDR3 RAM and utilized NVIDIA's G80 GPU.[21] While the 8800 GTX has 128 stream processors and a 384-bit memory bus, these versions of the 8800 GTS feature 96 stream processors and a 320-bit bus. With respect to features, however, they are identical because they use the same GPU.[22]

Around the same release date as the 8800 GT, NVIDIA released a new 640 MB version of the 8800 GTS. While still based on the 90 nm G80 core, this version has 7 out of the 8 clusters of 16 stream processors enabled (as opposed to 6 out of 8 on the older GTS models), giving it a total of 112 stream processors instead of 96. Most other aspects of the card remain unchanged. However, because the only two add-in partners making this card (BFG and EVGA) have decided to overclock it, this version of the 8800 GTS actually runs slightly faster than a stock GTX in most scenarios, especially at higher resolutions, due to the increased clock speeds.[23]

NVIDIA released a new 8800 GTS 512 MB based on the 65 nm G92 GPU on December 10, 2007.[24] This 8800 GTS has 128 stream processors, compared to the 96 processors of the original GTS models. It is equipped with 512 MB of GDDR3 memory on a 256-bit bus. Combined with a 650 MHz core clock and architectural enhancements, this gives the card raw GPU performance exceeding that of the 8800 GTX, but it is constrained by the narrower 256-bit memory bus. Its performance can match the 8800 GTX in some situations, and it outperforms the older GTS cards in all situations.[24]