Usually, it's a measurement of intensity and wavelength that determines the shade. How that's done depends on the imaging library (think of it as a licensed SDK from a company that builds optics) and how the software is written.
For IR and near-IR applications, the intensity is often used alone to determine the pixel shading. I'm guessing here (I really only ever worked with near-IR industrial machine-vision systems), but for full-color, consumer applications the wavelength also has a lot to do with it, simply because it's easy to equate wavelength with color spectrum.
EDIT: You've piqued my interest, and I'm trying to find out from some folks I know who still work in the industry at Matrox.
Last bump, I swear
Most non-specialized, full-color (in the human vision spectrum) digital computational cameras operate this way, per a contact at Matrox:
The intensity, wavelength, and on-time of the light hitting each pixel is sampled for X milliseconds (determined by the software) when the shutter is activated. If the on-time (the amount of time the pixel receives enough light to register as "on") is above a certain threshold (again, determined by the software), the intensity and wavelength readings are averaged, and the data is tagged with a reference point in a grid so the software knows what color it sees at that location. The software then takes all the information supplied by these pixels, assembles it into a single file, and applies any image-wide settings (like exposure compensation or sharpness) to that file. Afterwards, the software reads the image data back and does JPEG correction on individual pixels as it is programmed to do, sharpening and adjusting color information to produce the smoothest result.
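Just to make the per-pixel step concrete, here's a toy sketch of what my contact described: sample each pixel over a window, gate on the on-time threshold, and keep the averaged readings at their grid positions. All the names, thresholds, and the gain-free math are mine for illustration, not anything from the actual firmware.

```python
import numpy as np

def sample_pixels(intensity, wavelength, on_time_ms,
                  sample_window_ms=10.0, on_time_threshold=0.5):
    """Per-pixel arrays collected over the sample window while the
    shutter is open. Threshold and window are made-up values."""
    # A pixel only counts if it received enough light for long enough.
    lit = (on_time_ms / sample_window_ms) >= on_time_threshold
    # Keep the averaged readings for pixels that cleared the threshold;
    # everything else is treated as black (no color data).
    kept_intensity = np.where(lit, intensity, 0.0)
    kept_wavelength = np.where(lit, wavelength, 0.0)
    return kept_intensity, kept_wavelength

# Each pixel keeps its reference point in the grid simply by staying at
# the same (row, col) position; the "single file" is the stacked arrays.
intensity = np.array([[0.8, 0.1], [0.6, 0.9]])
wavelength = np.array([[550.0, 470.0], [620.0, 530.0]])  # nm
on_time = np.array([[9.0, 2.0], [7.0, 10.0]])            # ms lit

i_out, w_out = sample_pixels(intensity, wavelength, on_time)
# The pixel with only 2 ms of on-time (top right) is dropped;
# image-wide settings would then be applied to i_out/w_out as a whole.
```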
If auto ISO is selected, there is another step where the camera software uses the sensor data to determine the highest exposure value (i.e., the lowest ISO) that still provides enough image data. It does this while the image is focusing, before exposure is calculated and the shutter is activated.
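That auto-ISO step can be sketched as walking the ISO ladder from the lowest (least gain, cleanest image) upward and stopping at the first setting whose amplified signal clears an "enough image data" threshold. The linear gain model, the ladder, and the threshold here are invented for illustration; real metering is much more sophisticated.

```python
# Made-up ISO ladder and target for illustration only.
ISO_LADDER = [100, 200, 400, 800, 1600, 3200]

def pick_auto_iso(mean_sensor_signal, target_signal=0.5, base_iso=100):
    """Return the lowest ISO whose gain lifts the pre-shutter sensor
    reading to the target level (simplistic: gain scales with ISO)."""
    for iso in ISO_LADDER:
        gain = iso / base_iso
        if mean_sensor_signal * gain >= target_signal:
            return iso  # lowest ISO that still provides enough data
    return ISO_LADDER[-1]  # dim scene: clip at the top of the ladder

# A dim reading of 0.2 needs 4x gain to reach 0.5, so ISO 400 wins.
chosen = pick_auto_iso(0.2)
```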