A graphics card (also called a video card, display card, graphics accelerator, graphics adapter, VGA card/VGA, video adapter, display adapter, or colloquially GPU) is a computer expansion card that generates a feed of graphics output to a display device such as a monitor.
Graphics cards are sometimes called discrete or dedicated graphics cards to emphasize their distinction from an integrated graphics processor on the motherboard or the central processing unit (CPU).
A graphics processing unit (GPU) that performs the necessary computations is the main component in a graphics card, but the acronym "GPU" is sometimes also used to erroneously refer to the graphics card as a whole.
Most graphics cards are not limited to simple display output. The graphics processing unit can be used for additional processing, which reduces the load from the CPU. Additionally, computing platforms such as OpenCL and CUDA allow using graphics cards for general-purpose computing.
Applications of general-purpose computing on graphics cards include AI training, cryptocurrency mining, and molecular simulation.

Usually, a graphics card comes in the form of a printed circuit board (expansion board) which is inserted into an expansion slot.
Others may have dedicated enclosures, and they are connected to the computer via a docking station or a cable. These are known as external GPUs (eGPUs).
History of graphics cards
Graphics cards, also known as video cards or graphics processing units (GPUs), have historically evolved alongside computer display standards to accommodate advancing technologies and user demands. In the realm of IBM PC compatibles,
the early standards included Monochrome Display Adapter (MDA), Color Graphics Adapter (CGA), Hercules Graphics Card, Enhanced Graphics Adapter (EGA), and Video Graphics Array (VGA).
Each of these standards represented a step forward in the ability of computers to display more colors, higher resolutions, and richer graphical interfaces, laying the foundation for the development of modern graphical capabilities.
In the late 1980s, advancements in personal computing led companies like Radius to develop specialized graphics cards for the Apple Macintosh II. These cards were unique in that they incorporated discrete 2D QuickDraw capabilities, enhancing the graphical output of Macintosh computers by accelerating 2D graphics rendering.
QuickDraw, a core part of the Macintosh graphical user interface, allowed for the rapid rendering of bitmapped graphics, fonts, and shapes, and the introduction of such hardware-based enhancements signaled an era of specialized graphics processing in consumer machines.
The evolution of graphics processing took a major leap forward in the mid-1990s with 3dfx Interactive's introduction of the Voodoo series, one of the earliest consumer-facing GPUs that supported 3D acceleration. These cards, however, were dedicated entirely to 3D processing and lacked 2D support, necessitating the use of a separate 2D graphics card in tandem.
The Voodoo's architecture marked a major shift in graphical computing by offloading the demanding task of 3D rendering from the CPU to the GPU, significantly improving gaming performance and graphical realism.

The development of fully integrated GPUs that could handle both 2D and 3D rendering came with the introduction of the NVIDIA RIVA 128. Released in 1997,
the RIVA 128 was one of the first consumer-facing GPUs to integrate both 3D and 2D processing units on a single chip. This innovation simplified the hardware requirements for end-users, as they no longer needed separate cards for 2D and 3D rendering, thus paving the way for the widespread adoption of more powerful and versatile GPUs in personal computers.
In contemporary times, the majority of graphics cards are built using chips sourced from two dominant manufacturers: AMD and Nvidia. These modern graphics cards are multifunctional and support various tasks beyond rendering 3D images for gaming. They also provide 2D graphics processing, video decoding, TV output, and multi-monitor setups.
Additionally, many graphics cards now have integrated sound capabilities, allowing them to transmit audio alongside video output to connected TVs or monitors with built-in speakers, further enhancing the multimedia experience.
Within the graphics industry, these products are often referred to as graphics add-in boards (AIBs). The term "AIB" emphasizes the modular nature of these components, as they are typically added to a computer's motherboard to enhance its graphical capabilities.
The evolution from the early days of separate 2D and 3D cards to today's integrated and multifunctional GPUs reflects the ongoing technological advancements and the increasing demand for high-quality visual and multimedia experiences in computing.
Discrete vs integrated graphics
As an alternative to the use of a graphics card, video hardware can be integrated into the motherboard, CPU, or a system-on-chip as integrated graphics. Motherboard-based implementations are sometimes called "on-board video". Some motherboards support using both integrated graphics and the graphics card simultaneously to feed separate displays. The main advantages of integrated graphics are low cost, compactness, simplicity, and low energy consumption.
Integrated graphics often offer lower performance than a graphics card because the graphics processing unit inside integrated graphics must share system resources with the CPU. A graphics card, on the other hand, has its own random access memory (RAM), cooling system, and dedicated power regulators. A graphics card can offload work and reduce memory-bus contention from the CPU and system RAM, so overall system performance may improve in addition to the gains in graphics processing. Such improvements can be seen in video gaming, 3D animation, and video editing.
Power demand
As the processing power of graphics cards increased, so did their demand for electrical power. Current high-performance graphics cards tend to consume large amounts of power. For example, the thermal design power (TDP) for the GeForce Titan RTX is 280 watts.[11] When tested with video games, the GeForce RTX 2080 Ti Founders Edition averaged 300 watts of power consumption.
While CPU and power supply manufacturers have recently aimed for higher efficiency, the power demands of graphics cards have continued to rise, making the graphics card the largest power consumer of any individual part in a computer. Although power supplies have also increased their output, a bottleneck occurs in the PCI Express slot connection, which is limited to supplying 75 watts.
Modern graphics cards with a power consumption of over 75 watts usually include a combination of six-pin (75 W) or eight-pin (150 W) sockets that connect directly to the power supply. Providing adequate cooling becomes a challenge in such computers. Computers with multiple graphics cards may require power supplies over 750 watts. Heat extraction becomes a major design consideration for computers with two or more high-end graphics cards.
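The wattage a card can draw follows directly from its slot and auxiliary connector combination. A minimal sketch of this arithmetic, using the standard limits above (the function and its names are illustrative, not part of any real API):

```python
# Standard PCI Express power limits, in watts.
PCIE_SLOT_W = 75                          # delivered through the expansion slot itself
CONNECTOR_W = {"6-pin": 75, "8-pin": 150}  # auxiliary power connectors


def power_budget(connectors):
    """Total wattage available from the slot plus auxiliary connectors."""
    return PCIE_SLOT_W + sum(CONNECTOR_W[c] for c in connectors)


# A card with one 8-pin and one 6-pin connector: 75 + 150 + 75 = 300 W.
print(power_budget(["8-pin", "6-pin"]))  # 300
```

By this accounting, a card with no auxiliary connectors is limited to the slot's 75 W, which is why higher-power cards must include the extra sockets.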
As of the Nvidia GeForce RTX 30 series, built on the Ampere architecture, a custom flashed RTX 3090 named "Hall of Fame" has been recorded reaching a peak power draw as high as 630 watts. A standard RTX 3090 can peak at up to 450 watts. The RTX 3080 can reach up to 350 watts, while the RTX 3070 peaks at a similar or slightly lower level. Ampere cards of the Founders Edition variant feature a "dual axial flow through" cooler design, which places fans above and below the card to dissipate as much heat as possible towards the rear of the computer case.
A similar design was used by the Sapphire Radeon RX Vega 56 Pulse graphics card.