Video capture
Video capture is the process of converting an incoming video signal (and often its accompanying audio) from a camera or other source into a digital form that a computer, cloud service, or AI system can use for processing, broadcasting, image recognition, or sharing.
A quick look at how it has evolved:
- Early 1990s: 16-bit ISA capture cards appeared, with VIDCAP as part of Video for Windows. Some setups used two cards to reach around 15 frames per second.
- Mid to late 1990s: PCI buses offered lower latency and higher frame rates, typically around 30 fps. Brands like Matrox and ATI bundled capture kits with graphics cards and other components.
- Around 2012–2013: PCI Express provided dedicated bandwidth per lane and much higher throughput, enabling capture of both analog SD and newer digital video like 1080p.
- Today: Modern capture cards use PCIe Gen 2 and newer, along with USB 3 or Thunderbolt, to capture video up to 4K at 30 or 60 fps. Machine-vision systems often push to higher frame rates, frequently in monochrome to save bandwidth (a rough bandwidth sketch follows this list). High-quality capture also depends on careful hardware design, including low-noise, low-jitter circuitry and good PCB layout.
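To give a feel for why bus bandwidth and monochrome capture matter, the following sketch estimates the raw, uncompressed data rate for a few resolution and frame-rate combinations. The function and the example figures are illustrative assumptions, not from this article; real capture pipelines usually apply chroma subsampling or compression, so treat these as upper bounds.

```python
# Rough, illustrative estimate of uncompressed capture bandwidth.
# Assumes raw frames with no compression or chroma subsampling,
# so the results are upper bounds, not what a real card transfers.

def raw_bandwidth_mbps(width, height, fps, bytes_per_pixel):
    """Return the raw data rate in megabits per second."""
    bytes_per_second = width * height * bytes_per_pixel * fps
    return bytes_per_second * 8 / 1_000_000

# 1080p colour at 60 fps (3 bytes/pixel, 8-bit RGB)
print(f"1080p60 RGB:  {raw_bandwidth_mbps(1920, 1080, 60, 3):,.0f} Mbit/s")

# 4K colour at 30 fps
print(f"4K30 RGB:     {raw_bandwidth_mbps(3840, 2160, 30, 3):,.0f} Mbit/s")

# Machine-vision style: 720p monochrome at 240 fps (1 byte/pixel)
print(f"720p240 mono: {raw_bandwidth_mbps(1280, 720, 240, 1):,.0f} Mbit/s")
```

The monochrome case carries roughly a third of the data of an equivalent colour stream, which is why high-frame-rate machine-vision capture often drops colour.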
How it works in a system:
A dedicated video capture device or card uses video decoders to convert signals to a standard digital video format and then sends the video to storage or to other hardware. The video stream can travel over PCIe, USB, Ethernet, or Wi‑Fi, or be stored directly on the device (as in a digital video recorder).
The captured video is then typically processed, broadcast, fed to image-recognition systems, or archived.
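To make that data flow concrete, here is a minimal capture-and-archive sketch using OpenCV's VideoCapture in Python. It assumes the capture card or webcam is exposed to the operating system as device index 0; production capture software would more likely use the vendor's SDK or an OS framework such as V4L2, DirectShow, or Media Foundation.

```python
# Minimal capture-and-archive sketch using OpenCV (illustrative only).
# Assumes the capture device is exposed to the OS as device index 0.
import cv2

def capture_to_file(device_index=0, output_path="capture.avi", seconds=10):
    cap = cv2.VideoCapture(device_index)
    if not cap.isOpened():
        raise RuntimeError("Could not open capture device")

    # Query the format the device/driver negotiated.
    width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
    height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0  # some drivers report 0

    # Write decoded frames to local storage (the "DVR" path).
    fourcc = cv2.VideoWriter_fourcc(*"MJPG")
    writer = cv2.VideoWriter(output_path, fourcc, fps, (width, height))

    for _ in range(int(fps * seconds)):
        ok, frame = cap.read()   # one decoded frame as a NumPy array
        if not ok:
            break                # signal lost or device unplugged
        writer.write(frame)      # archive; could also stream or analyse

    cap.release()
    writer.release()

if __name__ == "__main__":
    capture_to_file()
```

The same loop structure applies whether the frames are archived, pushed to a broadcast encoder, or handed to an image-recognition model; only the sink changes.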