Pixels on an image sensor are analogous to a bunch of red, green, and blue paint buckets placed side by side. (Red, green, and blue combine to create all colors.) The bigger the buckets, the more paint (electrons) they can capture.

Here's where it gets a little tricky, so it's best to explain with another analogy. Suppose you need to estimate how much rain falls on a farm, and you have only a minute's worth of rainfall to make your measurements. Imagine that you spread 100 empty soup cans around the property, each capped by a funnel 10 centimeters in diameter. You might collect only a few hundred drops in each. Now suppose you could double the diameter of each funnel: the collecting area, and with it the amount of water each can captures, quadruples. Estimating how much rain falls on the field by extrapolating from the water collected with, say, 1000-cm funnels will yield vastly more accurate results.

Here's why: If your raindrops are really photons, the noise in each measurement is equal to the square root of the number of photons collected, so the signal-to-noise ratio is the photon count divided by its square root, which is simply the square root of the count. Thus the more rain each soup can collects, the higher the signal-to-noise ratio.
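To make the shot-noise argument concrete, here is a minimal Python sketch (the photon counts and the NumPy-based simulation are illustrative assumptions, not from the article) that draws Poisson-distributed photon counts and confirms that the signal-to-noise ratio grows as the square root of the mean count, so quadrupling the light collected roughly doubles the SNR:

```python
import numpy as np

rng = np.random.default_rng(0)

# Photon arrivals follow Poisson statistics, so the shot noise on a mean
# count N is sqrt(N) and the signal-to-noise ratio is N / sqrt(N) = sqrt(N).
for mean_photons in (100, 400, 10_000):  # hypothetical per-pixel counts
    samples = rng.poisson(mean_photons, size=100_000)
    snr = samples.mean() / samples.std()
    print(f"mean count {mean_photons:>6}: measured SNR {snr:6.1f}, "
          f"sqrt(N) = {np.sqrt(mean_photons):6.1f}")
```

Running this prints SNRs of roughly 10, 20, and 100, matching the square-root rule: a pixel (or soup can) that gathers 100 times more photons delivers a measurement 10 times less noisy.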