By incorporating on-chip multiplication gain, the electron multiplying CCD achieves, in an all solid-state sensor, the single-photon detection sensitivity typical of intensified or electron-bombarded CCDs at much lower cost and without compromising the quantum efficiency and resolution characteristics of the conventional CCD structure.
For most of the twentieth century, a photosensitive chemical emulsion spread on film was used to reproduce images from the optical microscope. It has only been in the past decade that improvements in electronic camera and computer technology have made digital imaging faster, cheaper, and far more accurate than conventional photography. A wide range of new and exciting techniques has subsequently been developed, enabling researchers to probe deeper into tissues, observe extremely rapid biological processes in living cells, and obtain quantitative information about spatial and temporal events on a level approaching the single molecule.
The imaging device is one of the most critical components in optical microscopy because it determines at what level fine specimen detail may be detected, the relevant structures resolved, and/or the dynamics of a process visualized and recorded. The range of light detection methods and the wide variety of imaging devices currently available to the microscopist make the equipment selection process difficult and often confusing. This discussion is intended to aid in understanding the basics of light detection, the fundamental properties of digital images, and the criteria relevant to selecting a suitable detector for specific applications.
Historical Perspective
Recording images with the microscope dates back to the earliest days of microscopy. The first single lens instruments, developed by Dutch scientists Antoni van Leeuwenhoek and Jan Swammerdam in the late 1600s, were used by these pioneering investigators to produce highly detailed drawings of blood, microorganisms, and other minute specimens. British scientist Robert Hooke engineered one of the first compound microscopes and used it to write “Micrographia”, his hallmark volume on microscopy and imaging published in 1665. The microscopes developed during this period were incapable of projecting images, and observation was limited to close visualization of specimens through the eyepiece. True photographic images were first obtained with the microscope in 1835 when William Henry Fox Talbot applied a chemical emulsion process to capture photomicrographs at low magnification. Between 1830 and 1840 there was an explosive growth in the application of photographic emulsions to recording microscopic images. For the next 150 years, the art and science of capturing images through the microscope with photographic emulsions co-evolved with advancements in film technology. During the late 1800s and early 1900s, Carl Zeiss and Ernst Abbe perfected the manufacture of specialized optical glass and applied the new technology to many optical instruments, including compound microscopes.
The dynamic imaging of biological activity was introduced in 1909 by French doctoral student Jean Comandon, who presented one of the earliest time-lapse films of the spirochaete that causes syphilis. Comandon's technique enabled movie production of the microscopic world. Between 1970 and 1980, researchers coupled tube-based video cameras with microscopes to produce time-lapse image sequences and real-time videos. In the 1990s the tube camera gave way to solid-state technology and the area-array charge-coupled device (CCD), heralding a new era in photomicrography. The capture of electronic images with the microscope is now referred to as digital or electronic imaging.
Digital Image Acquisition: Analog to Digital Conversion
Regardless of whether light focused on a specimen ultimately impinges on the human retina, a film emulsion, a phosphorescent screen, or the photodiode array of a CCD, an analog image is produced. These images can contain a wide spectrum of intensities and colors. Images of this type are referred to as continuous tone because the various tonal shades and hues blend together without disruption to generate a diffraction-limited reproduction of the original specimen. Continuous tone images accurately record image data by using a sequence of electrical signal fluctuations that vary continuously throughout the image.
As we view them, images are generally square or rectangular, so each pixel is represented by a coordinate pair with specific x and y values arranged in a typical Cartesian coordinate system (Figure 1(c)). The x coordinate specifies the horizontal position or column location of the pixel, while the y coordinate indicates the row number or vertical position. Thus, a digital image is composed of a rectangular or square pixel array representing a series of intensity values ordered by an (x, y) coordinate system. In reality, the image exists only as a large serial array of data values that can be interpreted by a computer to produce a digital representation of the original scene.
The horizontal to vertical dimension ratio of a digital image is known as the aspect ratio and can be calculated by dividing the image width by the height. The aspect ratio defines the geometry of the image. By adhering to a standard aspect ratio for display of digital images, gross distortion of the image is avoided when the images are displayed on remote platforms. When a continuous tone image is sampled and quantized, the pixel dimensions of the resulting digital image acquire the aspect ratio of the original analog image. It is important that each pixel has a 1:1 aspect ratio (square pixels) to ensure compatibility with common digital image processing algorithms and to minimize distortion.
Spatial Resolution in Digital Images
The quality of a digital image, or image resolution, is determined by the total number of pixels and the range of brightness values available for each pixel. Image resolution is a measure of the degree to which the digital image represents the fine details of the analog image recorded by the microscope. The term spatial resolution is reserved to describe the number of pixels utilized in constructing and rendering a digital image. This quantity is dependent upon how finely the image is sampled during digitization, with higher spatial resolution images having a greater number of pixels within the same physical image dimensions. Thus, as the number of pixels acquired during sampling and quantization of a digital image increases, the spatial resolution of the image also increases.
The optimum sampling frequency, or number of pixels utilized to construct a digital image, is determined by matching the resolution of the imaging device and the computer system used to visualize the image. A sufficient number of pixels should be generated by sampling and quantization to dependably represent the original image. When analog images are inadequately sampled, a significant amount of detail can be lost or obscured, as illustrated by the diagrams in Figure 2. The analog signal presented in Figure 2(a) shows the continuous intensity distribution displayed by the original image, before sampling and digitization, when plotted as a function of sample position. When 32 digital samples are acquired (Figure 2(b)), the resulting image retains a majority of the characteristic intensities and spatial frequencies present in the original analog image.
When the sampling frequency is reduced, as in Figure 2(c) and 2(d), frequencies present in the original image are missed during analog-to-digital (A/D) conversion and a phenomenon known as aliasing develops. Figure 2(d) illustrates the digital image with the lowest number of samples, where aliasing has produced a loss of high spatial frequency data while simultaneously introducing spurious lower frequency data that do not actually exist.
The spatial resolution of a digital image is related to the spatial density of the analog image and the optical resolution of the microscope or other imaging device. The number of pixels and the distance between pixels (the sampling interval) in a digital image are functions of the accuracy of the digitizing device. The optical resolution is a measure of the ability of the optical lens system (microscope and camera) to resolve the details present in the original scene. Optical resolution is affected by the quality of the optics, image sensor, and supporting electronics. Spatial density and the optical resolution determine the spatial resolution of the image. Spatial resolution of the image is limited solely by spatial density when the optical resolution of the imaging system is superior to the spatial density.
All of the details contained in a digital image are composed of brightness transitions that cycle between various levels of light and dark. The cycle rate between brightness transitions is known as the spatial frequency of the image, with higher rates corresponding to higher spatial frequencies. Varying levels of brightness in minute specimens observed through the microscope are common, with the background usually consisting of a uniform intensity and the specimen exhibiting a larger range of brightness levels.
The numerical value of each pixel in the digital image represents the intensity of the optical image averaged over the sampling interval. Thus, background intensity will consist of a relatively uniform mixture of pixels, while the specimen will often contain pixels with values ranging from very dark to very light. Features seen in the microscope that are smaller than the digital sampling interval will not be represented accurately in the digital image. The Nyquist criterion requires sampling at twice the highest spatial frequency of the specimen, that is, a sampling interval equal to half the size of the smallest resolvable feature, to accurately preserve the spatial resolution in the resulting digital image. If sampling is performed at a rate below that required by the Nyquist criterion, details with high spatial frequency will not be accurately represented in the final digital image. The Abbe limit of resolution for optical images is approximately 0.22 micrometers (using visible light), meaning that a digitizer must be capable of sampling at intervals that correspond in the specimen space to 0.11 micrometers or less. A digitizer that samples the specimen at 512 pixels per horizontal scan line would therefore have to cover a maximum horizontal field of view of 56 micrometers (512 x 0.11 micrometers) in order to conform to the Nyquist criterion. A rate of 2.5 to 3 samples per smallest resolvable feature is suggested to ensure adequate sampling for high resolution imaging.
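As a worked illustration, the arithmetic above can be expressed as a short Python sketch (the 0.22-micrometer Abbe limit and the 512-pixel scan line are the values quoted in the text; the 2.75x safety factor is simply the midpoint of the suggested 2.5 to 3 sample range):

```python
# Minimal sketch of the Nyquist sampling arithmetic described above.
abbe_limit_um = 0.22                          # smallest resolvable feature (visible light)
nyquist_interval_um = abbe_limit_um / 2.0     # sample at least twice per resolvable feature
pixels_per_line = 512                         # horizontal samples per scan line

max_field_of_view_um = pixels_per_line * nyquist_interval_um
print(f"Required sampling interval: {nyquist_interval_um:.2f} um")          # 0.11 um
print(f"Maximum horizontal field of view: {max_field_of_view_um:.1f} um")   # ~56 um

# A margin of 2.5-3 samples per smallest feature is often recommended:
conservative_interval_um = abbe_limit_um / 2.75
print(f"Conservative sampling interval: {conservative_interval_um:.3f} um")
```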
A serious sampling artifact known as spatial aliasing (undersampling) occurs when details present in the analog image or actual specimen are sampled at a rate less than twice their spatial frequency. When the pixels in the digitizer are spaced too far apart compared to the high frequency detail present in the image, the highest frequency information masquerades as low spatial frequency features that are not actually present in the specimen. Aliasing usually occurs as an abrupt transition when the sampling frequency drops below a critical level, which is about 25 percent below the Nyquist resolution limit. Specimens containing regularly spaced, repetitive patterns often exhibit moiré fringes that result from aliasing artifacts induced by sampling at less than 1.5 times the repetitive pattern frequency.
The Contrast Transfer Function
Contrast can be understood as a measure of changes in image signal intensity (ΔI) in relation to the average image intensity (I) as expressed by the following equation:
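C = (ΔI / I) × 100 percent

where C is the percentage contrast, ΔI is the difference in intensity between a feature and its surroundings, and I is the average image intensity. The contrast transfer function (CTF) describes how this contrast is preserved, as a function of spatial frequency, when an image passes through the optical and detection system.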
Spatial frequency can be defined as the number of times a periodic feature recurs in a given unit space or interval. The intensity recorded at zero spatial frequency in the CTF is a quantification of the average brightness of the image. Because the imaging system is diffraction limited, spatial frequencies near zero are transferred with high contrast (approximately 100 percent), while frequencies near the diffraction limit are recorded with much lower contrast in the image. As the CTF graph in Figure 3 illustrates, the Rayleigh criterion is not a fixed limit but rather the spatial frequency at which the contrast has dropped to about 25 percent. The CTF can therefore provide information about how well an imaging system can represent small features in a specimen.
The CTF can be determined for any functional component of the imaging system, and the performance of the system as a whole is evaluated as the product of the CTF curves of its individual components; the overall CTF is therefore lower than that of any single component. Small features that have limited contrast to begin with become even less visible as the image passes through successive components of the system. The lowest CTFs are typically observed in the objective and the CCD. The analog signal produced by the CCD can be modified before being passed to the digitizer to increase contrast using the analog gain and offset controls. Once the image has been digitally encoded, changes in magnification and concomitant adjustments of pixel geometry can result in improvement of the overall CTF.
Image Brightness and Bit Depth
The brightness of a digital image is a measure of relative intensity values across the pixel array, after the image has been acquired with a digital camera or digitized by an A/D converter. Brightness should not be confused with radiant intensity, which refers to the magnitude or quantity of light energy actually reflected from or transmitted through the object being imaged. As concerns digital image processing, brightness is best described as the measured intensity of all the pixels comprising the digital image after it has been captured, digitized, and displayed. Pixel brightness is important to digital image processing because, other than color, it is the only variable that can be utilized by processing techniques to quantitatively adjust the image.
Regardless of capture method, an image must be digitized to convert the specimen's continuous tone intensity into a digital brightness value. The accuracy of the digital value is directly proportional to the bit depth of the digitizing device. If two bits are utilized, the image can only be represented by four brightness values or levels (2^2). Likewise, if three or four bits are processed, the corresponding images have eight (2^3) and 16 (2^4) brightness levels, respectively, as shown in Figure 4, which illustrates the correlation between bit depth and the number of gray levels in digital images. In all of these cases, level 0 represents black, the top level represents white, and each intermediate level is a different shade of gray.
The grayscale or brightness range of a digital image consists of gradations of black, white, and gray brightness levels. The greater the bit depth, the more gray levels are available to represent intensity changes in the image. For example, a 12-bit digitizer is capable of displaying 4,096 gray levels (2^12) when coupled to a sensor having a dynamic range of 72 decibels (dB). When applied in this sense, dynamic range refers to the maximum signal level with respect to noise that the CCD sensor can transfer for image display. It can be defined in terms of pixel signal capacity and sensor noise characteristics. Similar terminology is used to describe the range of gray levels utilized in creating and displaying a digital image; this usage is relevant to the intrascene dynamic range.
The term bit depth refers to the binary range of possible grayscale values used by the analog-to-digital converter to translate analog image information into discrete digital values capable of being read and analyzed by a computer. For example, the most popular 8-bit digitizing converters have a binary range of 256 (2^8) possible values and a 16-bit converter has 65,536 (2^16) possible values. The bit depth of the A/D converter determines the size of the gray scale increments, with higher bit depths corresponding to a greater range of useful image information available from the camera.
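The relationship between bit depth and the number of available gray levels follows directly from the powers of two (a minimal Python sketch):

```python
# Number of discrete gray levels available at a given bit depth.
for bits in (2, 3, 4, 8, 10, 12, 16):
    levels = 2 ** bits
    print(f"{bits:2d}-bit digitizer: {levels:,} gray levels")
# 8-bit -> 256, 12-bit -> 4,096, 16-bit -> 65,536
```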
The number of grayscale levels generated should be sufficient that the steps between individual gray values are not discernible to the human eye. The just-noticeable difference in intensity of a gray-level image for the average human eye is about two percent under ideal viewing conditions. At most, the human eye can distinguish about 50 discrete shades of gray within the intensity range of a video monitor, suggesting that the minimum bit depth of an image should be between 6 and 7 bits.
Digital images should have at least 8-bit to 10-bit resolution to avoid producing visually obvious gray level steps in the enhanced image when contrast is increased during image processing. The number of pixels and gray levels necessary to adequately describe an image is dictated by the physical properties of the specimen. Low contrast, high resolution images often require a significant number of gray levels and pixels to produce satisfactory results, while other high contrast and low resolution images (such as a line grating) can be adequately represented with a significantly lower pixel density and gray level range. Finally, there is a trade-off in computer performance between contrast, resolution, bit depth, and the speed of image processing algorithms.
Image Histograms
Grey-level or image histograms provide a variety of useful information about the intensity or brightness of a digital image. In a typical histogram, the pixels are quantified for each grey level of an 8-bit image. The horizontal axis is scaled from 0 to 255 and the number of pixels representing each grey level is graphed on the vertical axis. Statistical manipulation of the histogram data allows the comparison of images in terms of their contrast and intensity. The relative number of pixels at each grey level can be used to indicate the extent to which the grey level range is being utilized by a digital image. Pixel intensities are well distributed among grey levels in an image having normal contrast and indicate a large intrascene dynamic range. In low contrast images only a small portion of available grey levels are represented and intrascene dynamic range is limited. When pixel intensities are distributed among high and low grey levels, leaving the intermediate levels unpopulated, there is an excess of black and white pixels and contrast is typically high.
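As an illustration, such a histogram can be computed in a few lines (a Python sketch using NumPy; the synthetic array below simply stands in for an acquired 8-bit grayscale frame):

```python
import numpy as np

def gray_histogram(img):
    """Count the number of pixels at each gray level of an 8-bit image."""
    counts, _ = np.histogram(img, bins=256, range=(0, 256))
    return counts  # counts[k] = number of pixels with gray level k

# Synthetic low-contrast image: intensities clustered around gray level 128
img = np.clip(np.random.normal(128, 10, (512, 512)), 0, 255).astype(np.uint8)
hist = gray_histogram(img)
used_levels = np.count_nonzero(hist)
print(f"Gray levels represented: {used_levels} of 256")  # a small fraction -> low contrast
```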
Fundamental Properties of Charge Coupled Device Cameras
The fundamental processes involved in creating an image with a CCD camera include: exposure of the photodiode array elements to incident light, conversion of accumulated photons to electrons, organization of the resulting electronic charge in potential wells and, finally, transfer of charge packets through the shift registers to the output amplifier. Charge output from the shift registers is converted to voltage and amplified prior to digitization in the A/D converter. Different structural arrangements of the photodiodes and photocapacitors result in a variety of CCD architectures. Some of the more commonly used configurations include frame transfer (FT), full frame (FF), and interline (IL) types. Modifications to the basic architecture, such as electron multiplication, back thinning/illumination, and the use of microlenticular (lens) arrays, have helped to increase the sensitivity and quantum efficiency of CCD cameras. The basic structure of a single metal oxide semiconductor (MOS) element in a CCD array is illustrated in Figure 5. The substrate is a p/n type silicon wafer insulated with a thin layer of silicon dioxide (approximately 100 nanometers) that is applied to the surface of the wafer. A grid pattern of electrically conductive, optically transparent polysilicon squares, or gate electrodes, is used to control the collection and transfer of photoelectrons through the array elements.
Photoelectrons generated in the CCD during the exposure interval accumulate when a positive voltage (0-10 volts) is applied to an electrode. The applied voltage creates a hole-depleted region beneath the electrode known as a potential well. The number of electrons that can accumulate in the potential well before their charge exceeds the applied electric field is known as the full well capacity, which depends on pixel size. A typical full well capacity for CCDs used in fluorescence microscopy is between 20,000 and 40,000 electrons. Excessive exposure to light can lead to saturation of the pixels, in which charge spills over into adjacent pixels and causes the image to smear or bloom. In many modern CCDs, special "anti-blooming" channels are incorporated to prevent the excess electrons from affecting the surrounding pixels. The benefit of anti-blooming generally outweighs the decrease in full well capacity that is a side effect of the feature.
The length of time electrons are allowed to accumulate in a potential well is a specified integration time controlled by a computer program. When a voltage is applied at a gate, electrons are attracted to the electrode and move to the oxide/silicon interface, where they collect in a region approximately 10 nanometers thick until the voltages at the electrodes are cycled or clocked. Different bias voltages applied to the gate electrodes control whether a potential well or a barrier forms beneath a particular gate. During charge transfer, the charge packet held in the potential well is transferred from pixel to pixel in a cycling or clocking process often explained by analogy to a bucket brigade, as shown in Figure 6. In the bucket brigade analogy, raindrops are first collected in a parallel bucket array (Figure 6(a)) and then transferred in parallel to the serial output register (Figure 6(b)). The water accumulated in the serial register is output, one bucket at a time, to the output node (a calibrated measuring container). Depending on CCD type, various clocking circuit configurations may be used; three-phase clocking schemes are commonly used in scientific cameras.
The grid of electrodes forms a two dimensional, parallel register. When a programmed sequence of changing voltages is applied to the gate electrodes the electrons can be shifted across the parallel array. Each row in the parallel register is sequentially shifted into the serial register. The contents of the serial register are shifted one pixel at a time into the output amplifier where a signal proportional to each charge packet is produced. When the serial register is emptied the next row in the parallel register is shifted and the process continues until the parallel register has been emptied. This function of the CCD is known as charge transfer or readout and relies on the efficient transfer of charge from the photodiodes to the output amplifier. The rate at which image data are transferred depends on both the bandwidth of the output amplifier and the speed of the A/D converter.
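The row-by-row readout sequence can be visualized with a simple simulation (a hypothetical Python sketch; a real CCD controller also manages the multi-phase clocking and timing, which are omitted here):

```python
import numpy as np

def read_out(parallel_register):
    """Simulate full-frame readout: shift each row into the serial register,
    then shift the serial register one pixel at a time to the output node."""
    output = []
    for row in parallel_register:        # one row moves into the serial register
        serial_register = list(row)
        while serial_register:            # pixels shift out to the output amplifier
            output.append(serial_register.pop(0))
    return np.array(output)

frame = np.arange(12).reshape(3, 4)       # a tiny 3 x 4 "pixel array"
print(read_out(frame))                    # pixels emerge in raster order
```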
Charge coupled device cameras use a variety of architectures to accomplish the tasks of collecting photons and moving the charge out of the registers and into the readout amplifier. The simplest CCD architecture is known as Full Frame (FF) (see Figure 7a). This configuration consists of a parallel photodiode shift register and a serial shift register. Full frame CCDs use the entire pixel array to simultaneously detect incoming photons during exposure periods and thus have a 100-percent fill factor. Each row in the parallel register is shifted into the serial register. Pixels in the serial register are read out in discrete packets until all the information in the array has been transferred into the readout amplifier. The output amplifier then produces a signal proportional to that of each pixel in the array. Since the parallel array is used both to detect photons and transfer the electronic data, a mechanical shutter or synchronized strobe light must be used to prevent constant illumination of the photodiodes. Full frame CCDs typically produce high resolution, high density images but can be subject to significant readout noise.
Frame Transfer (FT) architecture (Figure 7b) divides the array into a photoactive area and a light shielded or masked array, where the electronic data are stored and transferred to the serial register. Transfer from the active area to the storage array depends upon the array size but can take less than half a millisecond. Data captured in the active image area are shifted quickly to the storage register where they are read out row by row into the serial register. This arrangement allows simultaneous readout of the initial frame and integration of the next frame. The main advantage of frame transfer architecture is that it eliminates the need to shutter during the charge transfer process and thus increases the frame rate of the CCD.
For every active row of pixels in an Interline (IL) array (Figure 7c) there is a corresponding masked transfer row. The exposed area collects image data and, following integration, each active pixel rapidly shifts its collected charge to the masked portion of the pixel. This allows the camera to acquire the next frame while the data are shifted through the charge transfer channels. Dividing the array into alternating rows of active and masked pixels permits simultaneous integration of charge and readout of the image data. This arrangement eliminates the need for external shuttering and increases the device speed and frame rate. The incorporation of a microlens on each pixel partially compensates for the reduced light-gathering ability caused by pixel masking. Each lens directs a portion of the light that would otherwise be reflected by the aluminum mask onto the active area of the pixel.
Readout speed can be enhanced by defining one or more sub-arrays that represent areas of interest in the specimen. The reduction in pixel count results in faster readout of the data. This increased readout rate comes without a corresponding increase in noise, unlike the situation of simply increasing the clock speed. In a clocking routine known as binning, charge is collected from a specified group of adjacent pixels and the combined signal is shifted into the serial register. The size of the binning array is usually selectable and can range from 2 x 2 pixels to most of the CCD array. The primary reasons for using binning are to improve the signal-to-noise ratio and dynamic range. These benefits come at the expense of spatial resolution; therefore, binning is commonly used in applications where resolution of the image is less important than rapid throughput and signal improvement.
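The effect of binning on the data can be illustrated off-chip (a Python sketch of 2 x 2 binning; on a real CCD the charge is combined in the registers before read noise is added, which is the source of the signal-to-noise benefit):

```python
import numpy as np

def bin2x2(img):
    """Sum each 2 x 2 block of pixels into a single super-pixel."""
    h, w = img.shape
    return img[:h//2*2, :w//2*2].reshape(h//2, 2, w//2, 2).sum(axis=(1, 3))

img = np.random.poisson(25, (512, 512)).astype(float)  # photon-limited frame
binned = bin2x2(img)                                    # 256 x 256, ~4x the signal per pixel
print(img.shape, "->", binned.shape)
```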
CCD Enhancing Technologies
In addition to microlens technology, a number of physical modifications have been made to CCDs to improve camera performance. Instruments used in contemporary biological research must be able to detect the weak signals typical of low fluorophore concentrations and tiny specimen volumes, cope with low excitation photon flux, and achieve the high speed and sensitivity required for imaging rapid cellular kinetics. The demands imposed on detectors can be considerable: ultra-low detection limits, rapid data acquisition, and generation of a signal that is distinguishable from the noise produced by the device.
Most contemporary CCD enhancement is a result of backthinning and/or gain register electron multiplication. Backthinning avoids the loss of photons that are either absorbed or reflected from the overlying films on the pixels in standard CCDs. Also, electrons created at the surface of the silicon by ultraviolet and blue wavelengths are often lost due to recombination at the oxide silicon interface, thus rendering traditional CCD chips less sensitive to high frequency incident light. Using an acid etching technique, the CCD silicon wafer can be uniformly thinned to about 10-15 micrometers. Incident light is directed onto the back side of the parallel register away from the gate structure. A potential accumulates on the surface and directs the generated charge to the potential wells. Back thinned CCDs exhibit photon sensitivity throughout a wide range of the electromagnetic spectrum, typically from ultraviolet to near infrared wavelengths. Back thinning can be used with FF or FT architectures, in combination with solid state electron multiplication devices, to increase quantum efficiency to above 90 percent.
The electron multiplying CCD (EMCCD) is a modification of the conventional CCD in which an electron multiplying register is inserted between the serial register output and the charge amplifier. This multiplication register, or gain register, is designed with an extra grounded phase that creates a high-field region at a higher voltage (35-45 volts) than the standard CCD horizontal register (5-15 volts). Electrons passing through the high-field region are multiplied because each transfer carries an approximately 1 percent probability that a collision will generate an additional electron. Each stage of the multiplication register consists of four gates that use clocking circuits to apply potential differences (35-40 volts) and generate secondary electrons by the process of impact ionization. Impact ionization occurs when an energetic charge carrier loses energy during the creation of other charge carriers. When this occurs in the presence of an applied electric field, an avalanche-like process produces a cascade of secondary electrons (gain) in the register. Despite the small (approximately 1 percent) probability of generating a secondary electron at each stage, the large number of pixels in the gain register can result in the production of secondary electrons numbering in the hundreds or thousands.
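Because each stage multiplies the charge by only about 1 percent, the overall gain grows geometrically with the number of stages. A minimal sketch of this relationship (the 500-stage register length is an assumed, typical value, not taken from the text):

```python
# EMCCD gain model: each gain-register stage has a small probability p of
# producing a secondary electron by impact ionization.
p_per_stage = 0.01      # ~1 percent probability per stage (from the text)
n_stages = 500          # assumed typical register length (hypothetical value)

mean_gain = (1 + p_per_stage) ** n_stages
print(f"Mean multiplication gain: {mean_gain:.0f}x")  # roughly 145x for these values
```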
Traditional slow-scan CCDs achieve high sensitivity and low noise, but do so at the expense of readout speed. Readout speed is constrained in these cameras by the charge amplifier. In order to attain high speed, the bandwidth of the charge amplifier must be as wide as possible; however, as the bandwidth increases, so too does the amplifier noise. The typically low bandwidth of slow-scan cameras means they can only be read out at lower speeds (approximately 1 MHz). EMCCDs sidestep this constraint by amplifying the signal before it reaches the charge amplifier, effectively reducing the relative readout noise to less than one electron and thus providing both a low detection limit and high speed. EMCCDs are therefore able to produce low-light images rapidly, with good resolution, a large intensity range, and a wide dynamic range.
CCD Performance Measures: Camera Sensitivity
The term sensitivity, with respect to CCD performance, can be interpreted differently depending on the incident light level used in a particular application. In imaging scenarios where signal levels are low, such as in fluorescence microscopy, sensitivity refers to the ability of the CCD to detect weak signals. In high light level applications (such as brightfield imaging of stained specimens), performance is better measured as the ability to detect small changes in a bright image. At low light levels the camera noise limits sensitivity, whereas at high light levels the shot noise of the signal itself becomes the limiting factor.
The signal-to-noise ratio (SNR) of a camera can express the ratio of signal levels that the camera can detect in a single exposure, but it cannot by itself indicate the sensitivity to weak light or to small changes in large signals unless the values making up the ratio are known. A camera with 2 electrons of camera noise and a 20,000-electron full well capacity will have the same SNR (10,000:1) as a camera with 20 electrons of camera noise and a 200,000-electron full well capacity. However, the first camera will be much more sensitive to low signals, and the second camera will offer much better sensitivity to small changes in a large signal. The difference lies in the type of noise that limits each application. The low-light camera is limited by the camera noise, 2 electrons in this case, which means a minimum of about 5 signal electrons would be detectable. High light level sensitivity is limited by the noise of the signal itself, which in this case is the square root of 200,000 (about 447 electrons), representing a detectable change of about 0.2 percent of the signal.
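The comparison above can be reproduced numerically (a Python sketch using the figures quoted in the text; the factor of 2.5 applied to the read noise for the detection limit is an assumption consistent with the "about 5 electrons" figure above):

```python
import math

cameras = {
    "low-light camera":  {"full_well_e": 20_000,  "read_noise_e": 2},
    "high-light camera": {"full_well_e": 200_000, "read_noise_e": 20},
}

for name, c in cameras.items():
    ratio = c["full_well_e"] / c["read_noise_e"]     # the 10,000:1 figure quoted above
    detection_limit = 2.5 * c["read_noise_e"]        # ~5 e- for 2 e- of read noise
    shot_noise = math.sqrt(c["full_well_e"])         # ~447 e- for a 200,000 e- signal
    smallest_change = shot_noise / c["full_well_e"]  # ~0.2 percent for the bright camera
    print(f"{name}: ratio {ratio:,.0f}:1, detection limit ~{detection_limit:.0f} e-, "
          f"smallest detectable change at full well ~{smallest_change:.2%}")
```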
Sensitivity therefore depends on the limiting noise factor, and in every situation a rough measure of CCD device performance is the ratio of the incident light signal to the combined noise of the camera. Signal (S) is determined as the product of the input light level (I), the quantum efficiency (QE), and the integration time (T), measured in seconds:
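S = I × QE × T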
There are numerous types and sources of noise generated throughout the digital imaging process, and their amount and significance often depend on the application and the type of CCD used to create the image. The primary sources of noise considered in determining the signal-to-noise ratio are statistical (shot) noise, thermal noise (dark current), and preamplification or readout noise, though other types of noise may be significant in some applications and camera types. Total noise is usually calculated as the sum of readout noise, dark noise, and statistical noise in quadrature, as follows:
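Total Noise = √(read noise² + dark noise² + shot noise²)

and the signal-to-noise ratio is then S divided by the total noise. Combining the two expressions gives a simple model that can be evaluated directly (a Python sketch; the example values are hypothetical, with dark current expressed in electrons per pixel per second):

```python
import math

def ccd_snr(photon_flux, qe, t_exp, dark_current, read_noise):
    """Single-pixel signal-to-noise ratio with noise terms combined in quadrature.

    photon_flux  - incident photons per pixel per second (I)
    qe           - quantum efficiency, 0 to 1 (QE)
    t_exp        - integration time in seconds (T)
    dark_current - thermal electrons per pixel per second
    read_noise   - readout noise in electrons (rms)
    """
    signal = photon_flux * qe * t_exp                 # S = I x QE x T
    shot_noise = math.sqrt(signal)                    # statistical (Poisson) noise
    dark_noise = math.sqrt(dark_current * t_exp)      # shot noise of the dark current
    total_noise = math.sqrt(shot_noise**2 + dark_noise**2 + read_noise**2)
    return signal / total_noise

# Hypothetical example: dim fluorescence imaged with a cooled camera
print(f"SNR = {ccd_snr(photon_flux=100, qe=0.6, t_exp=1.0, dark_current=1.0, read_noise=6.0):.1f}")
```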
Preamplification or read noise is produced by the readout electronics of the CCD. Read noise comprises two primary components related to the operation of the solid-state electrical components of the CCD. White noise originates in the metal oxide semiconductor field effect transistor (MOSFET) of the output amplifier, where the MOSFET resistance generates thermal noise. Flicker noise, also known as 1/f noise, is likewise a product of the output amplifier and originates in the material interface between the silicon and silicon dioxide layers of the array elements.
Thermal noise, or dark current, is generated similarly, as a result of impurities in the silicon that allow energetic states within the silicon band gap. Thermal electrons are generated at surface states, in the bulk silicon, and in the depletion region, though most are produced at surface states. Dark current is inherent to the operation of semiconductors, as thermal energy allows electrons to undergo a stepped transition from the valence band to the conduction band, where they are added to the signal electrons and measured by the detector. Thermal noise is most often reduced by cooling the CCD, which can be accomplished using liquid nitrogen or a thermoelectric (Peltier) cooler. The former method places the CCD in a nitrogen environment where the temperature is so low that significant thermal noise is eliminated. Thermoelectric cooling is more commonly used to reduce the contribution of thermal noise to total camera noise. A Peltier-type cooler uses a semiconductor sandwiched between two metal plates; when a current is applied, the device acts like a heat pump and transfers heat away from the CCD.
Amplification noise occurs in the gain registers of EMCCDs and is often represented by a quantity known as the Noise Factor. For low light imaging systems the noise introduced by the multiplicative process or gain can be an important performance parameter. The electron multiplication process amplifies weak signals above the noise floor enabling detection of signals as low as those produced by single photon events, in some cases. In any process in which a signal is amplified, noise added to the signal is also amplified. For this reason it is important to cool EMCCDs to reduce dark current and its associated shot noise.
Any time we undertake to quantify photons or photoelectric events there is inherent uncertainty in the measurement that is due to the quantum nature of light. The absorption of photons is a quantum mechanical event, and thus the number of photons absorbed varies according to a Poisson distribution. The accuracy of any determination of the number of photons absorbed by a particular pixel is fundamentally constrained by this inherent statistical error. This uncertainty is referred to as Poisson, statistical, or shot noise and is given by the square root of the signal, that is, of the number of photoelectrons detected. In a low light fluorescence application the mean value of the brightest pixels might be as low as 16 photons. Due to statistical uncertainty or Poisson noise, the actual number of photoelectrons collected in a potential well during an integration period could vary between 12 and 20 (16 ± 4). For mean values representing lower specimen signal levels the uncertainty becomes more significant; for example, if the mean value is only 4 photoelectrons, the portion of the signal representing statistical noise jumps to 50 percent (4 ± 2). Poisson or shot noise is an inherent physical limitation. Statistical noise decreases relative to the signal as the signal increases, and so can only be reduced by increasing the number of events counted. Although quantum efficiency is often considered separately from noise, a value indicating reduced numbers of quantum mechanical events implies an increase in statistical or Poisson noise.
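The square-root relationship makes these figures easy to reproduce (a minimal Python sketch using the values quoted above):

```python
import math

for mean_photons in (4, 16, 100, 10_000):
    shot_noise = math.sqrt(mean_photons)
    relative_error = shot_noise / mean_photons
    print(f"mean signal {mean_photons:>6}: shot noise ±{shot_noise:.0f} "
          f"({relative_error:.0%} of the signal)")
# 16 photons -> ±4 (25%), 4 photons -> ±2 (50%), as in the text above
```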
Quantum efficiency (QE) is a measure of camera performance that determines the percentage of photons that are detected by a CCD. It is a property of the photovoltaic response and summarized by the following equation:
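QE = n(e) / n(p)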
where the quantum efficiency (QE) is equal to the number of electron hole pairs generated as determined by the number of photoelectrons detected (n(e)) divided by the average number of photons (n(p)) incident on the pixel. Quantum efficiency will always be less than one.
The number of photoelectrons generated is contingent upon the photovoltaic response of the silicon element to the incident photons and depends on a number of conditions. The amount of charge created during a photon-silicon interaction depends on factors that include the absorption coefficient and the diffusion length. The absorption coefficient of silicon varies with wavelength: longer wavelengths penetrate farther into the silicon substrate than shorter wavelengths. Above a critical wavelength (greater than 1100 nanometers) photons are not energetic enough to induce the photoelectric effect. Photons in the 450 to 700 nanometer range are absorbed both in the region of the potential well and in the bulk silicon substrate. The QE for photons absorbed in the depletion region approaches 100 percent, while photons absorbed elsewhere in the substrate release electrons that migrate to the potential wells less efficiently.
The spectral sensitivity of a CCD depends on the QE of the photoactive elements over the range of near UV to near infrared wavelengths, as illustrated in Figure 8. Modifications made to CCDs to increase performance have led to high QE in the blue green portion of the spectrum. Back thinned CCDs can exhibit quantum efficiencies of greater than 90 percent, eliminating loss due to interaction with the charge transfer channels.
A measure of CCD performance proposed by James Pawley is known as the intensity spread function (ISF) and measures the amount of error due to statistical noise in an intensity measurement. The ISF relates the number measured by the A/D converter to the brightness of a single pixel. The ISF for a particular detector is determined first by making a series of measurements of a single pixel in which the source illumination is uniform and the integration periods identical. The data are then plotted as a histogram and the mean number of photons and the value at the full width half maximum (FWHM) point (the standard deviation) are determined.
The ISF is equal to the mean divided by the standard deviation (determined from the FWHM). The value is expressed in photons, meaning it has been corrected for QE and for the known proportional relationship between photoelectrons and the numbers stored in memory that represent them. The quantity that is detected and digitized is proportional to the number of photoelectrons rather than photons. The ISF is thus a measure of the amount of error in the output signal due to statistical noise, an error that increases as the QE (the ratio of photoelectrons to photons) decreases. This statistical error represents the minimum noise level attainable in an imaging system in which readout and thermal noise have been adequately reduced.
The conversion of incident photons to an electronic output signal is a fundamental process in the CCD. The ideal relationship between the light input and the final digitized output is linear. As a performance measure, linearity describes how well the final digital image represents the actual features of the specimen. The specimen features are well represented when the detected intensity value of a pixel is linearly related to the stored numerical value and to the brightness of the pixel in the image display. Linearity measures the consistency with which the CCD responds to photonic input over its well depth. Most modern CCDs exhibit a high degree of linear conformity, but deviation can occur as pixels near their full well capacity. As pixels become saturated and begin to bloom or spill over into adjacent pixels or charge transfer channels the signal is no longer affected by the addition of further photons and the system becomes non linear.
Quantitative evaluation of CCD linearity can be performed by generating sets of exposures with increasing exposure times using a uniform light source. The resulting data are plotted with the mean signal value as a function of exposure (integration) time. If the relationship is linear, a 1 second exposure that produces about 1000 electrons predicts that a 10-second exposure will produce about 10,000 electrons. Deviations from linearity are frequently measured in fractions of a percent but no system is perfectly linear throughout its entire dynamic range. Deviation from linearity is particularly important in low light, quantitative applications and for performing flat field corrections. Linearity measurements differ among manufacturers and may be reported as a percentage of conformance to or deviation from the ideal linear condition.
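Such a linearity check is straightforward to script (a Python sketch; the exposure series and mean signal values below are hypothetical measurements from a uniformly illuminated target):

```python
import numpy as np

# Hypothetical measured data: mean signal (electrons) versus exposure time (s)
exposure_s  = np.array([1, 2, 4, 6, 8, 10], dtype=float)
mean_signal = np.array([1010, 1995, 4020, 5950, 8100, 9900], dtype=float)

slope, intercept = np.polyfit(exposure_s, mean_signal, 1)        # best-fit line
predicted = slope * exposure_s + intercept
deviation_pct = 100 * np.max(np.abs(mean_signal - predicted) / predicted)
print(f"Response: {slope:.0f} e-/s, maximum deviation from linearity: {deviation_pct:.2f}%")
```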
In low-light imaging applications, the fluorescent signal is about a million times weaker than the excitation light. The signal is further limited in intensity by the need to minimize photobleaching and phototoxicity. When quantifying the small number of photons characteristic of biological fluorescent imaging, the process is photon starved but also subject to the statistical uncertainty associated with enumerating quantum mechanical events. The measurement of linearity is further complicated by the fact that the amount of uncertainty increases with the square root of the intensity. This means that the statistical error is largest in the dimmest regions of the image. Manipulating the data using a deconvolution algorithm is often the only way to address this problem in photon limited imaging applications.
Contributing Authors
Kristin L. Hazelwood, Scott G. Olenych, John D. Griffin, Christopher S. Murphy, Judith A. Cathcart, and Michael W. Davidson - National High Magnetic Field Laboratory, 1800 East Paul Dirac Dr., The Florida State University, Tallahassee, Florida, 32310.