To understand how a full-frame sensor works, it’s essential to know the basics of how an image sensor captures light. An image sensor is made up of millions of individual photosites, or pixels, each of which captures a small portion of the incoming light. The camera’s processor then reads out the values from these photosites and assembles them into the final image.
In a full-frame sensor, the surface area of the sensor is equivalent to a standard 35mm film frame, which is approximately 36mm x 24mm.
For a given resolution, the larger surface area of a full-frame sensor means that each individual photosite can be larger, allowing it to gather more light.
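To put a rough number on this, here is a small sketch that estimates the photosite pitch from sensor dimensions and megapixel count. It assumes square photosites packed edge-to-edge with no gaps (a simplification; real sensors have wiring and microlenses), and the APS-C dimensions and 24-megapixel figure are illustrative values, not taken from the text above.

```python
import math

def pixel_pitch_um(width_mm, height_mm, megapixels):
    """Approximate photosite pitch in microns, assuming square
    photosites packed edge-to-edge with no gaps (a simplification)."""
    area_mm2 = width_mm * height_mm
    pitch_mm = math.sqrt(area_mm2 / (megapixels * 1e6))
    return pitch_mm * 1000  # mm -> microns

# Full-frame (36 x 24 mm) vs. a typical APS-C sensor (~23.6 x 15.7 mm),
# both at an illustrative 24-megapixel resolution.
ff = pixel_pitch_um(36, 24, 24)      # ~6.0 microns
apsc = pixel_pitch_um(23.6, 15.7, 24)  # ~3.9 microns
print(f"Full-frame 24 MP pitch: {ff:.1f} um")
print(f"APS-C      24 MP pitch: {apsc:.1f} um")
```

At the same resolution, the full-frame photosite is roughly 1.5x wider, so its light-collecting area is more than double that of the APS-C photosite.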
Canon was among the first companies to introduce a full-frame sensor in a digital camera, with the EOS-1Ds in 2002. Since then, Canon has continued to develop full-frame sensors for its high-end professional cameras, including the EOS R5 and EOS R6 mirrorless cameras.
Sony is another major player in the full-frame camera market. Sony introduced their first full-frame mirrorless camera, the Alpha 7, in 2013.
Panasonic is a more recent entrant into the full-frame camera market, with their Lumix S series of cameras. The Lumix S1 and S1R both feature full-frame sensors, with the S1R offering a massive 47.3 megapixels of resolution.
In recent years, there has been a trend towards higher megapixel counts in full-frame sensors, with cameras like the Canon EOS R5 and Sony Alpha 7R IV offering resolutions of 45 and 61 megapixels, respectively. However, at a fixed sensor size, a higher megapixel count means smaller individual photosites, which can increase image noise and reduce low-light performance.
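The trade-off can be made concrete by comparing the average photosite area at the resolutions mentioned above. This is a simplified sketch: it divides the full-frame area evenly across the pixel count and assumes light gathering scales with photosite area, ignoring real-world factors such as microlenses and backside illumination.

```python
# Average photosite area on a 36 x 24 mm full-frame sensor at the
# resolutions of the Canon EOS R5 (45 MP) and Sony Alpha 7R IV (61 MP).
FULL_FRAME_MM2 = 36 * 24  # 864 mm^2

def photosite_area_um2(megapixels):
    """Average area per photosite in square microns (ignores gaps)."""
    return FULL_FRAME_MM2 * 1e6 / (megapixels * 1e6)

a45 = photosite_area_um2(45)  # ~19.2 um^2
a61 = photosite_area_um2(61)  # ~14.2 um^2
print(f"45 MP photosite: {a45:.1f} um^2")
print(f"61 MP photosite: {a61:.1f} um^2")
print(f"Each 61 MP photosite collects ~{a61 / a45:.0%} as much light")
```

Under this simple model, each 61-megapixel photosite collects only about three-quarters as much light as a 45-megapixel one, which is why denser sensors tend to show more noise per pixel in low light.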