A framebuffer (frame buffer, or sometimes framestore) is a portion of random-access memory (RAM) [1] containing a bitmap that drives a video display. It is a memory buffer containing data representing all the pixels in a complete video frame. [2] Modern video cards contain framebuffer circuitry in their cores. This circuitry converts an in-memory bitmap into a video signal that can be displayed on a computer monitor.
In computing, a screen buffer is a part of computer memory used by a computer application for the representation of the content to be shown on the computer display. [3] The screen buffer may also be called the video buffer, the regeneration buffer, or regen buffer for short. [4] Screen buffers should be distinguished from video memory. To this end, the term off-screen buffer is also used.
The information in the buffer typically consists of color values for every pixel to be shown on the display. Color values are commonly stored in 1-bit binary (monochrome), 4-bit palettized, 8-bit palettized, 16-bit high color and 24-bit true color formats. An additional alpha channel is sometimes used to retain information about pixel transparency. The total amount of memory required for the framebuffer depends on the resolution of the output signal, and on the color depth or palette size.
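As a rough illustration of that relationship, the following sketch (plain C, with an invented helper name) computes the framebuffer size as width × height × bytes per pixel for two common configurations.

```c
#include <stdio.h>

/* Illustrative helper: framebuffer size = width x height x bytes per pixel. */
static unsigned long framebuffer_bytes(unsigned w, unsigned h, unsigned bits_per_pixel)
{
    return (unsigned long)w * h * bits_per_pixel / 8;
}

int main(void)
{
    /* 640x480 at 8-bit indexed color: 307,200 bytes (300 KiB). */
    printf("%lu\n", framebuffer_bytes(640, 480, 8));
    /* 1920x1080 at 32 bits per pixel (24-bit color plus alpha): 8,294,400 bytes (~7.9 MiB). */
    printf("%lu\n", framebuffer_bytes(1920, 1080, 32));
    return 0;
}
```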
Computer researchers[who?] had long discussed the theoretical advantages of a framebuffer but were unable to produce a machine with sufficient memory at an economically practicable cost.[citation needed] [5] In 1947, the Manchester Baby computer used a Williams tube, later the Williams-Kilburn tube, to store 1,024 bits in cathode-ray tube (CRT) memory, which could then be displayed on a second CRT. [6] [7] Other research labs were exploring these techniques, with MIT Lincoln Laboratory achieving a 4,096-point display in 1950. [5]
In the late 1960s, a color scanned display called the Brookhaven RAster Display (BRAD) was implemented; it used drum memory and a television monitor. [8] In 1969, A. Michael Noll of Bell Labs implemented a scanned display with a frame buffer, using magnetic-core memory. [9] The Bell Labs system was later expanded to display an image with a color depth of three bits on a standard color TV monitor.
In the early 1970s, the development of MOS memory (metal–oxide–semiconductor memory) integrated-circuit chips, particularly high-density DRAM (dynamic random-access memory) chips with at least 1 kilobit of memory, made it practical to create, for the first time, a digital memory system with framebuffers capable of holding a standard video image. [10] [11] This led to the development of the SuperPaint system by Richard Shoup at Xerox PARC in 1972. [10] Shoup was able to use the SuperPaint framebuffer to create an early digital video-capture system. By synchronizing the output signal to the input signal, Shoup was able to overwrite each pixel of data as it shifted in. Shoup also experimented with modifying the output signal using color tables. These color tables allowed the SuperPaint system to produce a wide variety of colors outside the range of the limited 8-bit data it contained. This scheme would later become commonplace in computer framebuffers.
In 1974, Evans & Sutherland released the first commercial framebuffer, the Picture System, [12] costing about $15,000. It was capable of producing resolutions of up to 512 by 512 pixels in 8-bit grayscale, and became a boon for graphics researchers who did not have the resources to build their own framebuffer. The New York Institute of Technology would later create the first 24-bit color system using three of the Evans & Sutherland framebuffers. [13] Each framebuffer was connected to an RGB color output (one for red, one for green and one for blue), with a Digital Equipment Corporation PDP-11/04 minicomputer controlling the three devices as one.
In 1975, the UK company Quantel produced the first commercial full-color broadcast framebuffer, the Quantel DFS 3000. It was first used in TV coverage of the 1976 Montreal Olympics to generate a picture-in-picture inset of the Olympic flaming torch while the rest of the picture featured the runner entering the stadium.
The rapid improvement of integrated-circuit technology made it possible for many of the home computers of the late 1970s to contain low-color-depth framebuffers. Today, nearly all computers with graphical capabilities utilize a framebuffer for generating the video signal. Amiga computers, created in the 1980s, featured special design attention to graphics performance and included a unique Hold-And-Modify framebuffer capable of displaying 4096 colors.
Framebuffers also became popular in high-end workstations and arcade system boards throughout the 1980s. SGI, Sun Microsystems, HP, DEC and IBM all released framebuffers for their workstation computers in this period. These framebuffers were usually of a much higher quality than could be found in most home computers, and were regularly used in television, printing, computer modeling and 3D graphics. Framebuffers were also used by Sega for its high-end arcade boards, which likewise offered higher image quality than home computers could produce.
Framebuffers used in personal and home computing often had sets of defined modes under which the framebuffer could operate. These modes reconfigured the hardware to output different resolutions, color depths, memory layouts and refresh-rate timings.
In the world of Unix machines and operating systems, such conveniences were usually eschewed in favor of directly manipulating the hardware settings. This manipulation was far more flexible in that any combination of resolution, color depth and refresh rate was attainable, limited only by the memory available to the framebuffer.
An unfortunate side-effect of this method was that the display device could be driven beyond its capabilities. In some cases, this resulted in hardware damage to the display. [14] More commonly, it simply produced garbled and unusable output. Modern CRT monitors fix this problem through the introduction of protection circuitry. When the display mode is changed, the monitor attempts to obtain a signal lock on the new refresh frequency. If the monitor is unable to obtain a signal lock, or if the signal is outside the range of its design limitations, the monitor will ignore the framebuffer signal and possibly present the user with an error message.
LCD monitors tend to contain similar protection circuitry, but for different reasons. Since the LCD must digitally sample the display signal (thereby emulating an electron beam), any signal that is out of range cannot be physically displayed on the monitor.
Framebuffers have traditionally supported a wide variety of color modes. Due to the expense of memory, most early framebuffers used 1-bit (2 colors per pixel), 2-bit (4 colors), 4-bit (16 colors) or 8-bit (256 colors) color depths. The problem with such small color depths is that a full range of colors cannot be produced. The solution was indexed color, which adds a lookup table to the framebuffer. Each value stored in framebuffer memory acts as an index into this lookup table, which serves as a palette containing a limited number of distinct colors drawn from a much larger color space.
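A minimal sketch of this scheme, with invented names and a toy 4×2 image, showing how each framebuffer value is resolved through the palette as the display is generated:

```c
#include <stdint.h>
#include <stdio.h>

/* Indexed color sketch: the framebuffer stores one 8-bit index per pixel;
   a 256-entry lookup table (palette) maps each index to a full RGB color.
   All names and sizes here are illustrative, not a real API. */

typedef struct { uint8_t r, g, b; } rgb_t;

#define WIDTH  4
#define HEIGHT 2

static rgb_t palette[256];                    /* lookup table / palette */
static uint8_t framebuffer[WIDTH * HEIGHT];   /* one palette index per pixel */

int main(void)
{
    palette[0] = (rgb_t){  0,   0,   0};      /* index 0: black */
    palette[1] = (rgb_t){255, 255, 255};      /* index 1: white */

    framebuffer[0] = 1;                       /* top-left pixel uses palette entry 1 */

    /* "Scanout": resolve each index through the palette, as the display
       hardware would while generating the video signal. */
    for (int i = 0; i < WIDTH * HEIGHT; i++) {
        rgb_t c = palette[framebuffer[i]];
        printf("pixel %d -> (%u, %u, %u)\n", i, c.r, c.g, c.b);
    }
    return 0;
}
```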
A typical indexed 256-color image stores, alongside the pixel data, its own palette, which can be visualized as a rectangle of color swatches.
In some designs it was also possible to write data to the lookup table (or switch between existing palettes) on the fly, making it possible to divide the picture into horizontal bars, each with its own palette, and thus render an image with a far wider effective palette. For example, in an outdoor photograph, the picture could be divided into four bars: the top one with emphasis on sky tones, the next with foliage tones, the next with skin and clothing tones, and the bottom one with ground colors. This required each palette to contain some overlapping colors but, done carefully, allowed great flexibility.
While framebuffers are commonly accessed via a memory mapping directly to the CPU memory space, this is not the only method by which they may be accessed; framebuffers have varied widely in the methods used to access the underlying memory.
The framebuffer organization may be packed pixel or planar. The framebuffer may be all points addressable or have restrictions on how it can be updated.
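The difference between the two organizations can be sketched as follows; the 320×200, 4-bits-per-pixel layout and the nibble ordering in the packed case are illustrative assumptions, not a description of any particular hardware.

```c
#include <stdint.h>
#include <string.h>

#define WIDTH  320
#define HEIGHT 200
#define PLANES 4   /* 4 bitplanes -> 16 colors */

/* Packed: two 4-bit pixels share each byte. */
static uint8_t packed[WIDTH * HEIGHT / 2];

/* Planar: each bitplane holds one bit of every pixel. */
static uint8_t planes[PLANES][WIDTH * HEIGHT / 8];

static void set_pixel_packed(int x, int y, uint8_t color)
{
    int byte = (y * WIDTH + x) / 2;
    if (x & 1)
        packed[byte] = (packed[byte] & 0xF0) | (color & 0x0F);
    else
        packed[byte] = (packed[byte] & 0x0F) | ((color & 0x0F) << 4);
}

static void set_pixel_planar(int x, int y, uint8_t color)
{
    int bit  = 7 - (x & 7);
    int byte = (y * WIDTH + x) / 8;
    for (int p = 0; p < PLANES; p++) {
        if (color & (1 << p))
            planes[p][byte] |=  (uint8_t)(1 << bit);
        else
            planes[p][byte] &= (uint8_t)~(1 << bit);
    }
}

int main(void)
{
    memset(packed, 0, sizeof packed);
    memset(planes, 0, sizeof planes);
    set_pixel_packed(10, 5, 9);  /* color 9 at (10, 5), packed layout */
    set_pixel_planar(10, 5, 9);  /* same pixel spread across 4 bitplanes */
    return 0;
}
```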
Video cards always have a certain amount of RAM. A small portion of this RAM is where the bitmap of image data is "buffered" for display. The term frame buffer is thus often used loosely to refer to this RAM.
The CPU sends image updates to the video card. The video processor on the card forms a picture of the screen image and stores it in the frame buffer as a large bitmap in RAM. The bitmap in RAM is used by the card to continually refresh the screen image. [15]
Many systems attempt to emulate the function of a framebuffer device, often for reasons of compatibility. The two most common virtual framebuffers are the Linux framebuffer device (fbdev) and the X Virtual Framebuffer (Xvfb). Xvfb was added to the X Window System distribution to provide a method for running X without a graphical framebuffer. The Linux framebuffer device was developed to abstract the physical method for accessing the underlying framebuffer into a guaranteed memory map that is easy for programs to access. This increases portability, as programs are not required to deal with systems that have disjointed memory maps or require bank switching.
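As an illustration of the memory-map abstraction that fbdev provides, the following sketch maps /dev/fb0 into a process and writes a single pixel. Error handling is abbreviated, and it assumes the current video mode uses 32 bits per pixel; a real program would check the pixel format reported by the screen-info ioctls.

```c
#include <fcntl.h>
#include <linux/fb.h>
#include <stdint.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/dev/fb0", O_RDWR);
    if (fd < 0)
        return 1;

    struct fb_var_screeninfo vinfo;
    struct fb_fix_screeninfo finfo;
    ioctl(fd, FBIOGET_VSCREENINFO, &vinfo);  /* resolution, bits per pixel */
    ioctl(fd, FBIOGET_FSCREENINFO, &finfo);  /* line length, memory size   */

    /* Map the framebuffer into the process address space. */
    uint8_t *fb = mmap(NULL, finfo.smem_len, PROT_READ | PROT_WRITE,
                       MAP_SHARED, fd, 0);
    if (fb == MAP_FAILED)
        return 1;

    /* Write one white pixel at (100, 50), assuming a 32 bpp mode. */
    long offset = 50 * finfo.line_length + 100 * (vinfo.bits_per_pixel / 8);
    *(uint32_t *)(fb + offset) = 0x00FFFFFF;

    munmap(fb, finfo.smem_len);
    close(fd);
    return 0;
}
```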
A frame buffer may be designed with enough memory to store two frames' worth of video data. In a technique known generally as double buffering or, more specifically, as page flipping, the framebuffer uses half of its memory to display the current frame. While that memory is being displayed, the other half is filled with data for the next frame. Once the secondary buffer is filled, the framebuffer is instructed to display the secondary buffer instead. The primary buffer becomes the secondary buffer, and the secondary buffer becomes the primary. This switch is often done during the vertical blanking interval to avoid screen tearing, where half of the old frame and half of the new frame are shown together.
Page flipping has become a standard technique used by PC game programmers.
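A minimal sketch of the technique follows; the wait_for_vblank() and show_buffer() routines are stand-ins (here, stubs) for whatever the display hardware or driver actually provides.

```c
#include <stdint.h>
#include <stdio.h>

#define WIDTH  320
#define HEIGHT 240

static uint32_t buffers[2][WIDTH * HEIGHT];  /* memory for two full frames      */
static int front = 0;                        /* which buffer is being displayed */

static void wait_for_vblank(void)            /* stub: a real driver would block       */
{                                            /* until the vertical blanking interval  */
}

static void show_buffer(int index)           /* stub: a real driver would point */
{                                            /* scanout at buffers[index]       */
    printf("now displaying buffer %d\n", index);
}

static void render_frame(uint32_t *back, uint32_t color)
{
    for (int i = 0; i < WIDTH * HEIGHT; i++)
        back[i] = color;                     /* draw the next frame off-screen */
}

int main(void)
{
    for (int frame = 0; frame < 3; frame++) {
        int back = 1 - front;                /* the hidden buffer */
        render_frame(buffers[back], 0xFF000000u | (uint32_t)frame);

        wait_for_vblank();                   /* flip during vertical blanking to avoid tearing */
        show_buffer(back);
        front = back;                        /* the two buffers swap roles */
    }
    return 0;
}
```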
As the demand for better graphics increased, hardware manufacturers created a way to decrease the amount of CPU time required to fill the framebuffer. This is commonly called graphics acceleration. Common graphics drawing commands (many of them geometric) are sent to the graphics accelerator in their raw form. The accelerator then rasterizes the results of the command to the framebuffer. This method frees the CPU to do other work.
Early accelerators focused on improving the performance of 2D GUI systems. While retaining these 2D capabilities, most modern accelerators focus on producing 3D imagery in real time. A common design uses a graphics library such as OpenGL or Direct3D which interfaces with the graphics driver to translate received commands to instructions for the accelerator's graphics processing unit (GPU). The GPU uses those instructions to compute the rasterized results and the results are bit blitted to the framebuffer. The framebuffer's signal is then produced in combination with built-in video overlay devices (usually used to produce the mouse cursor without modifying the framebuffer's data) and any final special effects that are produced by modifying the output signal. An example of such final special effects was the spatial anti-aliasing technique used by the 3dfx Voodoo cards. These cards added a slight blur to the output signal that made aliasing of the rasterized graphics much less obvious.
At one time there were many manufacturers of graphics accelerators, including 3dfx Interactive, ATI, Hercules, Trident, Nvidia, Radius, S3 Graphics, SiS and Silicon Graphics. As of 2015, the market for graphics accelerators for x86-based systems is dominated by Nvidia (which acquired 3dfx in 2002), AMD (which acquired ATI in 2006), and Intel.
With a framebuffer, the electron beam (if the display technology uses one) is commanded to perform a raster scan, the way a television renders a broadcast signal. The color information for each point thus displayed on the screen is pulled directly from the framebuffer during the scan, creating a set of discrete picture elements, i.e. pixels.
Framebuffers differ significantly from the vector displays that were common prior to the advent of raster graphics (and, consequently, to the concept of a framebuffer). With a vector display, only the vertices of the graphics primitives are stored. The electron beam of the output display is then commanded to move from vertex to vertex, tracing a line across the area between these points.
Likewise, framebuffers differ from the technology used in early text mode displays, where a buffer holds codes for characters, not individual pixels. The video display device performs the same raster scan as with a framebuffer but generates the pixels of each character in the buffer as it directs the beam.
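The following sketch models that behavior in ordinary C: the buffer holds character codes only, and pixel rows are produced from a font glyph as each scanline of a character row is generated. The 8×8 glyph and the 40×25 cell grid are invented for the example.

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define COLS 40
#define ROWS 25

static uint8_t text_buffer[ROWS][COLS];     /* character codes, not pixels */

/* Glyph for 'A' as 8 rows of 8 bits (an invented bitmap for the example). */
static const uint8_t glyph_A[8]     = { 0x18, 0x24, 0x42, 0x7E, 0x42, 0x42, 0x42, 0x00 };
static const uint8_t glyph_blank[8] = { 0 };

/* Emit the pixels of one scanline of one character cell. */
static void scan_cell_row(uint8_t code, int glyph_row)
{
    const uint8_t *glyph = (code == 'A') ? glyph_A : glyph_blank;
    for (int bit = 7; bit >= 0; bit--)
        putchar((glyph[glyph_row] >> bit) & 1 ? '#' : '.');
}

int main(void)
{
    memset(text_buffer, ' ', sizeof text_buffer);
    text_buffer[0][0] = 'A';

    /* "Raster scan" of the first character row: 8 scanlines, each built
       on the fly from the character codes currently in the buffer. */
    for (int line = 0; line < 8; line++) {
        for (int col = 0; col < 4; col++)
            scan_cell_row(text_buffer[0][col], line);
        putchar('\n');
    }
    return 0;
}
```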
The Original Chip Set (OCS) is a chipset used in the earliest Commodore Amiga computers and defined the Amiga's graphics and sound capabilities. It was succeeded by the slightly improved Enhanced Chip Set (ECS) and the greatly improved Advanced Graphics Architecture (AGA).
In computer graphics and digital photography, a raster graphic represents a two-dimensional picture as a rectangular matrix or grid of pixels, viewable via a computer display, paper, or other display medium. A raster image is technically characterized by the width and height of the image in pixels and by the number of bits per pixel. Raster images are stored in image files with varying dissemination, production, generation, and acquisition formats.
Video Graphics Array (VGA) is a video display controller and accompanying de facto graphics standard, first introduced with the IBM PS/2 line of computers in 1987, which became ubiquitous in the IBM PC compatible industry within three years. The term can now refer to the computer display standard, the 15-pin D-subminiature VGA connector, or the 640 × 480 resolution characteristic of the VGA hardware.
In computer graphics, planar is the method of arranging pixel data into several bitplanes of RAM. Each bit in a bitplane is related to one pixel on the screen. Unlike packed, high color, or true color graphics, the whole dataset for an individual pixel is not in one specific location in RAM, but spread across the bitplanes that make up the display. Planar arrangement determines how pixel data is laid out in memory, not how the data for a pixel is interpreted; pixel data in a planar arrangement could encode either indexed or direct color.
A blitter is a circuit, sometimes implemented as a coprocessor or a logic block on a microprocessor, dedicated to the rapid movement and modification of data within a computer's memory. A blitter can copy large quantities of data from one memory area to another relatively quickly, and in parallel with the CPU, while freeing up the CPU's more complex capabilities for other operations. A typical use for a blitter is the movement of a bitmap, such as windows and icons in a graphical user interface or images and backgrounds in a 2D video game. The name comes from the bit blit operation of the 1973 Xerox Alto, which stands for bit-block transfer. A blit operation is more than a memory copy, because it can involve data that is not byte-aligned, the handling of transparent pixels, and various ways of combining the source and destination data.
Bit blit is a data operation commonly used in computer graphics in which several bitmaps are combined into one using a boolean function.
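A minimal sketch of such an operation, assuming one byte per pixel for simplicity (real blitters typically operate on bit-packed data and must handle unaligned edges):

```c
#include <stdint.h>

/* Raster operations for combining source and destination pixels. */
enum rop { ROP_COPY, ROP_AND, ROP_OR, ROP_XOR };

/* Combine a width x height source region into a destination bitmap. */
static void blit(uint8_t *dst, int dst_stride,
                 const uint8_t *src, int src_stride,
                 int width, int height, enum rop op)
{
    for (int y = 0; y < height; y++) {
        for (int x = 0; x < width; x++) {
            uint8_t s = src[y * src_stride + x];
            uint8_t *d = &dst[y * dst_stride + x];
            switch (op) {
            case ROP_COPY: *d = s;  break;
            case ROP_AND:  *d &= s; break;
            case ROP_OR:   *d |= s; break;
            case ROP_XOR:  *d ^= s; break;  /* e.g. reversible cursor drawing */
            }
        }
    }
}

int main(void)
{
    uint8_t screen[16 * 16] = {0};
    uint8_t sprite[8 * 8];
    for (int i = 0; i < 64; i++)
        sprite[i] = 1;
    /* OR an 8x8 sprite into the screen at its top-left corner. */
    blit(screen, 16, sprite, 8, 8, 8, ROP_OR);
    return 0;
}
```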
QuickDraw is the 2D graphics library and associated application programming interface (API) that is a core part of classic Mac OS. It was initially written by Bill Atkinson and Andy Hertzfeld. QuickDraw still existed as part of the libraries of macOS, but had been largely superseded by the more modern Quartz graphics system. In Mac OS X Tiger, QuickDraw was officially deprecated. In Mac OS X Leopard, applications using QuickDraw could not make use of the added 64-bit support. In OS X Mountain Lion, QuickDraw header support was removed from the operating system. Applications using QuickDraw still ran from OS X Mountain Lion through macOS High Sierra; however, current versions of Xcode and the macOS SDK do not contain the header files needed to compile such programs.
The VIC-II, specifically known as the MOS Technology 6567/6566/8562/8564 (NTSC) and 6569/8565/8566 (PAL), is the microchip tasked with generating Y/C video signals and DRAM refresh signals in the Commodore 64 and Commodore 128 home computers.
The Color Graphics Adapter (CGA), originally also called the Color/Graphics Adapter or IBM Color/Graphics Monitor Adapter, introduced in 1981, was IBM's first color graphics card for the IBM PC and established a de facto computer display standard.
Text mode is a computer display mode in which content is internally represented on a computer screen in terms of characters rather than individual pixels. Typically, the screen consists of a uniform rectangular grid of character cells, each of which contains one of the characters of a character set; this is in contrast to graphics mode and other kinds of computer graphics modes.
The Television Interface Adaptor (TIA) is the custom computer chip which, along with a variant of the MOS Technology 6502, constitutes the heart of the 1977 Atari Video Computer System game console. The TIA generates the screen display and sound effects and reads the controllers. At the time the Atari VCS was designed, even small amounts of RAM were expensive. The chip was therefore designed without the memory for a framebuffer, instead requiring detailed programming to create even a simple display.
The TMS9918 is a video display controller (VDC) manufactured by Texas Instruments, in manuals referenced as "Video Display Processor" (VDP) and introduced in 1979. The TMS9918 and its variants were used in the ColecoVision, CreatiVision, Memotech MTX, MSX, NABU Personal Computer, SG-1000/SC-3000, Spectravideo SV-318, SV-328, Sord M5, Tatung Einstein, TI-99/4, Casio PV-2000, Coleco Adam, Hanimex Pencil II, PECOS and Tomy Tutor.
The Motorola 6845, or MC6845, is a display controller that was widely used in 8-bit computers during the 1980s. Originally intended for designs based on the Motorola 6800 CPU and given a related part number, it was more widely used alongside various other processors, and was most commonly found in machines based on the Zilog Z80 and MOS 6502.
In computing, indexed color is a technique to manage digital images' colors in a limited fashion, in order to save computer memory and file storage, while speeding up display refresh and file transfers. It is a form of vector quantization compression.
SuperPaint was a pioneering graphics program and framebuffer computer system developed by Richard Shoup at Xerox PARC. The system was first conceptualized in late 1972 and produced its first stable image in April 1973. SuperPaint was among the earliest uses of computer technology for creative artworks, video editing, and computer animation, all of which would become major areas within the entertainment industry and major components of industrial design.
In computer graphics, color quantization or color image quantization is quantization applied to color spaces; it is a process that reduces the number of distinct colors used in an image, usually with the intention that the new image should be as visually similar as possible to the original image. Computer algorithms to perform color quantization on bitmaps have been studied since the 1970s. Color quantization is critical for displaying images with many colors on devices that can only display a limited number of colors, usually due to memory limitations, and enables efficient compression of certain types of images.
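The simplest form of the idea can be sketched as a nearest-color search against a small fixed palette; the palette below is invented, and practical algorithms (median cut, octrees, often combined with dithering) produce far better results.

```c
#include <stdint.h>
#include <stdio.h>

typedef struct { uint8_t r, g, b; } rgb_t;

/* A tiny fixed palette, invented for illustration. */
static const rgb_t palette[4] = {
    {  0,   0,   0}, {255,   0,   0}, {  0, 255,   0}, {  0,   0, 255}
};

/* Map a color to the nearest palette entry by squared distance in RGB. */
static int nearest_index(rgb_t c)
{
    long best = -1;
    int best_i = 0;
    for (int i = 0; i < 4; i++) {
        long dr = c.r - palette[i].r;
        long dg = c.g - palette[i].g;
        long db = c.b - palette[i].b;
        long d = dr * dr + dg * dg + db * db;
        if (best < 0 || d < best) { best = d; best_i = i; }
    }
    return best_i;
}

int main(void)
{
    rgb_t pixel = {200, 30, 40};  /* a reddish pixel */
    printf("maps to palette entry %d\n", nearest_index(pixel));  /* -> 1 (red) */
    return 0;
}
```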
In computing, a bitmap graphic is an image formed from rows of different colored pixels. A GIF is an example of a graphics image file that uses a bitmap.
VGA text mode was introduced in 1987 by IBM as part of the VGA standard for its IBM PS/2 computers. Its use on IBM PC compatibles was widespread through the 1990s and persists today for some applications on modern computers. The main features of VGA text mode are colored characters and their background, blinking, various shapes of the cursor, and loadable fonts. The Linux console traditionally uses hardware VGA text modes, and the Win32 console environment has an ability to switch the screen to text mode for some text window sizes.
Composite artifact colors is a designation commonly used for several graphics modes of some 1970s and 1980s home computers. On some machines, when connected to an NTSC TV or monitor over a composite video output, the way the video signal was encoded allowed extra colors to be displayed by manipulating the position of pixels on screen, going beyond each machine's hardware color palette.