How Does a CCD Array Work?

While this model is an oversimplification, an in-depth explanation is provided below. Photons striking a silicon surface create free electrons through the photoelectric effect.

Positively charged "holes" are generated simultaneously. If nothing is done, the holes and electrons recombine and release energy in the form of heat. Small thermal fluctuations are very difficult to measure, so it is preferable to gather the electrons in the place they were generated and count them to create an image.

This is accomplished by positively biasing discrete areas to attract the electrons generated while photons strike the surface. The substrate of a CCD is made of silicon, but photons arriving from above the gate strike the epitaxial layer (essentially silicon doped with other elements) and generate photoelectrons. The gate is held at a positive charge relative to the rest of the device, which attracts the electrons.

The figure to the right shows how electrons are held in place and moved to where they can be quantified. The top black line represents the potential experienced by the electrons (shown in blue); it is low, or "downhill," where the applied potential is high, since opposite charges attract. Electrons are shifted in two directions on a CCD, called the parallel and serial directions. One parallel shift, from right to left, is shown at left.

The serial shift is performed from top to bottom and directs the electron packets to the measurement electronics. In the example to the left, the image is split into two and then four sections and read out. The method of reading this voltage is called dual slope integration (DSI) and is used when the absolute lowest noise is required. Generally speaking, the faster a pixel is read, the more noise is introduced into the measurement.

If the gain of the measurement is known, the ADU (analog-to-digital unit) number generated for each pixel can be directly correlated to the number of electrons collected in that pixel. The charge, or image, can be transferred using different scanning architectures, such as full-frame readout, frame transfer, and interline transfer.
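As a rough illustration, a minimal Python sketch of this conversion follows; the gain value is a made-up example, since the real figure comes from the camera's calibration data or a photon-transfer measurement.

```python
# Convert a pixel's digitized value (ADU) back to an electron count.
# GAIN_E_PER_ADU is a hypothetical system gain, not a real camera spec.

GAIN_E_PER_ADU = 2.2  # assumed gain, electrons per ADU

def adu_to_electrons(adu: int, gain: float = GAIN_E_PER_ADU) -> float:
    """Estimate the number of electrons collected in a pixel."""
    return adu * gain

print(adu_to_electrons(1500))  # 3300.0 electrons at this example gain
```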

The charge-coupled-device principle can be understood through the following transfer schemes. Full-frame readout is the simplest scanning architecture; in many applications it requires a shutter to cut off the light input and avoid smearing while charges pass through the parallel-vertical registers (vertical CCD) and the parallel-horizontal registers (horizontal CCD) before being transferred serially to the output.

In the frame-transfer scheme, a bucket-brigade process transfers the image from the image array to an opaque frame-storage array. Because this parallel transfer does not pass through a serial register, it is fast compared to the other schemes.

In the interline-transfer scheme, each pixel consists of a photodiode and an opaque charge-storage cell. Because the image is hidden within a single transfer cycle, image smear is minimal and the fastest optical shuttering can be achieved. CCDs are frequently fabricated on a p-type substrate using buried-channel MOS capacitors; for this, a thin n-type region is formed on the surface.

A silicon dioxide layer is grown as an insulator on top of the n-region, and gates are formed by placing one or more electrodes on this insulating layer. Free electrons are generated by the photoelectric effect when photons strike the silicon surface, and a positively charged vacancy, or hole, is generated simultaneously. Rather than attempting the difficult task of measuring the heat released when holes and electrons recombine, it is preferable to collect and count the electrons to produce an image.

This is achieved by attracting the electrons generated by photons striking the silicon surface toward the positively biased discrete areas. Note that the silicon itself is not arranged to form individual pixels; rather, the pixels are defined by the positions of the electrodes above the CCD. If a positive voltage is applied to an electrode, this positive potential attracts all of the negatively charged electrons close to the area under the electrode.

In addition, any positively charged holes are repelled from the area around the electrode. Consequently, a "potential well" forms in which all the electrons produced by incoming photons are stored. As more and more light falls onto the CCD, the potential well surrounding the electrode attracts more and more electrons until the well is full; the number of electrons that can be stored under a pixel is known as the full well capacity.

To prevent this from happening, light must be blocked from falling onto the CCD, for example by using a shutter, as in a camera.

Thus, an image of an object can be made by opening the shutter, "integrating" for a length of time to accumulate electrons in the potential wells, and then closing the shutter to ensure that the full well capacity is not exceeded.
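To make the integration trade-off concrete, here is a minimal sketch of the saturation-time estimate, assuming illustrative values for full well capacity, photon flux, and quantum efficiency (none taken from a real sensor):

```python
# Longest exposure before a pixel's potential well fills: the well
# accumulates flux * QE electrons per second until it reaches full well.
# All parameter values below are illustrative assumptions.

FULL_WELL_E = 40_000   # assumed full well capacity, electrons
PHOTON_FLUX = 5_000.0  # assumed incident photons per pixel per second
QE = 0.6               # assumed fraction of photons yielding electrons

def max_exposure_s(full_well=FULL_WELL_E, flux=PHOTON_FLUX, qe=QE):
    """Seconds of integration before the well saturates."""
    return full_well / (flux * qe)

print(f"Saturation after ~{max_exposure_s():.1f} s")  # ~13.3 s here
```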

An actual CCD consists of a large number of pixels arranged in rows and columns, and the number of rows and columns defines the CCD size; typical arrays are hundreds to thousands of pixels on a side. The resolution of the CCD is defined by the size of the pixels and by their separation (the pixel pitch); a typical scientific CCD has a physical image area on the order of 10 mm x 10 mm. How is a CCD clocked out? The figure below shows a cross section through a row of a CCD. Only one of these electrodes is required to create the potential well, but additional electrodes are required to transfer the charge out of the CCD.

The upper section of the figure (section 1) shows charge being collected under one of the electrodes. As this process continues, the charge cloud progresses either down the column or across the row, depending upon the orientation of the electrodes.

The figure below, called a clocking diagram, shows the sequence in which each electrode is held high and low to ensure that charge is transferred through the CCD.
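As a toy numerical illustration of this clocking sequence (not a model of any real controller), the sketch below walks a charge packet along a row of three-phase wells, one phase per clock step:

```python
import numpy as np

# Toy model of three-phase clocking: each pixel has three gate phases,
# and charge sits under whichever phase is held high. Pulsing the phases
# in sequence moves a packet one phase per step, so three steps shift it
# one full pixel toward the output at index 0.

N_PIXELS, PHASES = 5, 3
wells = np.zeros(N_PIXELS * PHASES)
wells[2 * PHASES] = 120.0  # a 120-electron packet parked under pixel 2

def clock_step(w: np.ndarray) -> np.ndarray:
    """Advance every packet one phase toward the output."""
    out = np.zeros_like(w)
    out[:-1] = w[1:]
    return out

for _ in range(PHASES):  # one full pixel shift = three phase steps
    wells = clock_step(wells)

print(wells.reshape(N_PIXELS, PHASES).sum(axis=1))  # packet now in pixel 1
```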

This process is repeated in transfer 2 and transfer 3, after which the charge has been moved three pixels along.

Obtaining the best images within the constraints imposed by a particular specimen or experiment typically requires a compromise among competing image-quality criteria, which often exert contradictory demands. For example, capturing time-lapse sequences of live fluorescently-labeled specimens may require reducing the total exposure time to minimize photobleaching and phototoxicity.

Several methods can be utilized to accomplish this, although each involves a degradation of some aspect of imaging performance.

If the specimen is exposed less frequently, temporal resolution is reduced; applying pixel binning to allow shorter exposures reduces spatial resolution; and increasing electronic gain compromises dynamic range and signal-to-noise ratio. Different situations often require completely different imaging rationales for optimum results. In contrast to the previous example, in order to maximize dynamic range in a single image of a specimen that requires a short exposure time, the application of binning or a gain increase may accomplish the goal without significant negative effects on the image.

Performing efficient digital imaging requires the microscopist to be completely familiar with the crucial image quality criteria, and the practical aspects of balancing camera acquisition parameters to maximize the most significant factors in a particular situation.

A small number of CCD performance factors and camera operational parameters dominate the major aspects of digital image quality in microscopy, and their effects overlap to a great extent. Factors that are most significant in the context of practical CCD camera use, and discussed further in the following sections, include detector noise sources and signal-to-noise ratio, frame rate and temporal resolution, pixel size and spatial resolution, spectral range and quantum efficiency, and dynamic range.

Camera sensitivity, in terms of the minimum detectable signal, is determined by both the photon statistical shot noise and the electronic noise arising in the CCD. A conservative estimate is that a signal can only be discriminated from accompanying noise if it exceeds the noise by a factor of approximately 2. The minimum signal that can theoretically yield a given SNR value is determined by random variations of the photon flux, an inherent noise source associated with the signal even for an ideal noiseless detector.

This photon statistical noise is equal to the square root of the number of signal photons, and since it cannot be eliminated, it determines the maximum achievable SNR for a noise-free detector. Applying such an SNR criterion to photon statistics alone therefore yields the theoretical minimum detectable signal. In practice, other noise components, which are not associated with the specimen photon signal, are contributed by the CCD and camera system electronics and add to the inherent photon statistical noise.
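A short sketch of this noise arithmetic follows; the read noise and dark current figures are assumptions chosen for illustration, and the independent noise terms are combined in quadrature:

```python
import math

# Shot-noise-limited SNR: photon arrivals are Poisson-distributed, so a
# signal of S electrons carries sqrt(S) of shot noise. Read noise and
# dark noise (both assumed values here) add in quadrature.

def snr(signal_e, read_noise_e=6.0, dark_e_per_s=0.1, exposure_s=1.0):
    """Single-pixel signal-to-noise ratio."""
    dark_e = dark_e_per_s * exposure_s
    noise = math.sqrt(signal_e + dark_e + read_noise_e ** 2)
    return signal_e / noise

print(round(snr(100), 1))     # ~8.6: read noise still matters
print(round(snr(10_000), 1))  # ~99.8: nearly shot-noise limited (sqrt(1e4)=100)
```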

Once accumulated in collection wells, charge arising from noise sources cannot be distinguished from photon-derived signal. Most of the system noise results from readout amplifier noise and thermal electron generation in the silicon of the detector chip. The thermal noise is attributable to kinetic vibrations of silicon atoms in the CCD substrate that liberate electrons or holes even when the device is in total darkness, and which subsequently accumulate in the potential wells.

For this reason, the noise is referred to as dark noise, and represents the uncertainty in the magnitude of dark charge accumulation during a specified time interval. The rate of generation of dark charge, termed dark current, is unrelated to photon-induced signal but is highly temperature dependent.

Like photon noise, dark noise follows a statistical square-root relationship to dark current, and therefore it cannot simply be subtracted from the signal.

Cooling the CCD sharply reduces dark charge accumulation (dark current roughly halves with every few degrees Celsius of cooling), and high-performance cameras are usually cooled during use. Cooling even to 0 degrees Celsius is highly advantageous, and at sufficiently low temperatures dark noise is reduced to a negligible value for nearly any microscopy application.
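The sketch below models this temperature dependence under the common rule-of-thumb assumption that dark current halves for every 6 degrees Celsius of cooling; the actual coefficient varies from device to device:

```python
import math

# Dark charge grows linearly with exposure, and dark noise is its square
# root. The reference dark current and the 6-degree doubling interval are
# rule-of-thumb assumptions, not the specs of any particular CCD.

def dark_noise_e(temp_c, exposure_s, d0=1.0, t0_c=20.0, double_c=6.0):
    """Dark noise (e- rms) at temp_c, given dark current d0 e-/s at t0_c."""
    dark_current = d0 * 2 ** ((temp_c - t0_c) / double_c)
    return math.sqrt(dark_current * exposure_s)

for t in (20, 0, -20, -40):  # a 60 s exposure at progressively colder chips
    print(f"{t:>4} C: {dark_noise_e(t, 60.0):.2f} e- rms")
```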

Provided that the CCD is cooled, the remaining major electronic noise component is read noise, primarily originating in the on-chip preamplifier during the process of converting charge carriers into a voltage signal. Although the read noise is added uniformly to every pixel of the detector, its magnitude cannot be precisely determined, only approximated by an average value, in units of electrons root-mean-square (rms) per pixel.

Some types of readout amplifier noise are frequency dependent, and in general, read noise increases with the speed of measurement of the charge in each pixel. The increase in noise at high readout and frame rates is partially a result of the greater amplifier bandwidth required at higher pixel clock rates. Cooling the CCD reduces the readout amplifier noise to some extent, although not to an insignificant level.

A number of design enhancements are incorporated in current high-performance camera systems that greatly reduce the significance of read noise, however. One strategy for achieving high readout and frame rates without increasing noise is to electrically divide the CCD into two or more segments in order to shift charge in the parallel register toward multiple output amplifiers located at opposite edges or corners of the chip.

This procedure allows charge to be read out from the array at a greater overall speed without excessively increasing the read rate and noise of the individual amplifiers. Cooling the CCD in order to reduce dark noise provides the additional advantage of improving the charge transfer efficiency (CTE) of the device.

This performance factor has become increasingly important due to the large pixel-array sizes employed in many current CCD imagers, as well as the faster readout rates required for investigations of rapid dynamic processes. With each shift of a charge packet along the transfer channels during the CCD readout process, a small portion may be left behind.

While individual transfer losses at each pixel are minuscule in most cases, the large number of transfers required, especially in megapixel sensors, can result in significant losses for pixels at the greatest distance from the CCD readout amplifier(s) unless the charge transfer efficiency is extremely high.

The occurrence of incomplete charge transfer can lead to image blurring due to the intermixing of charge from adjacent pixels. In addition, cumulative charge loss at each pixel transfer, particularly with large arrays, can result in the phenomenon of image shading, in which regions of images farthest away from the CCD output amplifier appear dimmer than those adjacent to the serial register.

Charge transfer efficiency values in cooled CCDs can approach unity, with only a tiny fraction of charge lost at each transfer. Both hardware and software methods are available to compensate for image intensity shading.
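The cumulative effect is easy to estimate, since the fraction of a packet surviving N transfers is the CTE raised to the Nth power; the following sketch uses illustrative CTE figures:

```python
# Fraction of a charge packet that survives N transfers at a given CTE.
# A corner pixel of a 1000 x 1000 array needs about 1000 parallel plus
# 1000 serial shifts, i.e. ~2000 transfers. CTE values are illustrative.

def retained_fraction(cte: float, n_transfers: int) -> float:
    return cte ** n_transfers

for cte in (0.99999, 0.999999):
    print(cte, round(retained_fraction(cte, 2000), 4))
# 0.99999  -> 0.9802 (a 2% loss for the farthest pixels)
# 0.999999 -> 0.998
```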

A software correction is implemented by capturing an image of a uniform-intensity field, which is then utilized by the imaging system to generate a pixel-by-pixel correction map that can be applied to subsequent specimen images to eliminate nonuniformity due to shading. Software correction techniques are generally satisfactory in systems that do not require large correction factors relative to the local intensity.
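A minimal sketch of such a flat-field correction follows, with synthetic arrays standing in for real camera frames:

```python
import numpy as np

# Software shading correction: normalize a uniform-field (flat) frame
# into a per-pixel correction map, then divide specimen frames by it.
# The shading profile and intensities below are synthetic stand-ins.

rng = np.random.default_rng(seed=1)
shading = np.linspace(1.0, 0.7, 512)[None, :]           # dimmer far from the amp
flat = 1000.0 * shading + rng.normal(0, 5, (512, 512))  # uniform-field capture
raw = 2500.0 * shading + rng.normal(0, 5, (512, 512))   # shaded specimen image

gain_map = flat / flat.mean()   # pixel-by-pixel correction map
corrected = raw / gain_map      # shading removed

print(raw[:, 0].mean(), raw[:, -1].mean())              # ~2500 vs ~1750
print(corrected[:, 0].mean(), corrected[:, -1].mean())  # both ~2125 (uniform)
```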

Larger corrections, up to approximately fivefold, can be handled by hardware methods through the adjustment of gain factors for individual pixel rows. The required gain adjustment is determined by sampling signal intensities in five or six masked reference pixels located outside the image area at the end of each pixel row. Voltage values obtained from the columns of reference pixels at the parallel register edge serve as controls for charge transfer loss, and produce correction factors for each pixel row that are applied to voltages obtained from that row during readout.

Correction factors are large in regions of some sensors, such as areas distant from the output amplifier in video-rate cameras, and noise levels may be substantially increased for these image areas. Although the hardware correction process removes shading effects without apparent signal reduction, it should be realized that the resulting signal-to-noise ratio is not uniform over the entire image. In many applications, an image capture system capable of providing high temporal resolution is a primary requirement.

For example, if the kinetics of a process being studied necessitates video-rate imaging at moderate resolution, a camera capable of delivering superb resolution is, nevertheless, of no benefit if it only provides that performance at slow-scan rates, and performs marginally or not at all at high frame rates. Full-frame slow-scan cameras do not deliver high resolution at video rates, requiring approximately one second per frame for a large pixel array, depending upon the digitization rate of the electronics.

If specimen signal brightness is sufficiently high to allow short exposure times (on the order of 10 milliseconds), the use of binning and subarray selection makes it possible to acquire about 10 frames per second at reduced resolution and frame size with cameras having electromechanical shutters.

Faster frame rates generally necessitate the use of interline-transfer or frame-transfer cameras, which do not require shutters and typically can also operate at higher digitization rates. The latest generation of high-performance cameras of this design can capture full-frame, high-bit-depth images at near video rates. The excellent spatial resolution of current CCD imaging systems is coupled directly to pixel size, and has improved consistently due to technological advances that have allowed CCD pixels to be made increasingly smaller while maintaining the other performance characteristics of the imagers.

In comparison to typical film grain sizes (approximately 10 micrometers), the pixels of many CCD cameras employed in biological microscopy are smaller and provide more than adequate resolution when coupled with commonly used high-magnification objectives, which project relatively large-radii diffraction (Airy) disks onto the CCD surface.

Interline-transfer scientific-grade CCD cameras are now available having pixels smaller than 5 micrometers, making them suitable for high-resolution imaging even with low-magnification objectives.

The relationship of detector element size to relevant optical resolution criteria is an important consideration in choosing a digital camera if the spatial resolution of the optical system is to be maintained. The Nyquist sampling criterion is commonly utilized to determine the adequacy of detector pixel size with regard to the resolution capabilities of the microscope optics. Nyquist's theorem specifies that the smallest diffraction disk radius produced by the optical system must be sampled by at least two pixels in the imaging array in order to preserve the optical resolution and avoid aliasing.

As an example, consider a CCD having pixel dimensions of roughly 6 micrometers. At this sampling frequency, sufficient margin is available that the Nyquist criterion is nearly satisfied even with 2 x 2 pixel binning.
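A small sketch of this Nyquist check follows, using the textbook Airy radius formula (0.61 λ / NA, scaled by the objective magnification); the optics and pixel values are assumptions for illustration:

```python
# The projected Airy disk radius should span at least two pixels.
# Wavelength, NA, magnification, and pixel size are example values.

def projected_airy_radius_um(wavelength_um=0.55, na=1.4, magnification=100):
    """Airy disk radius at the sensor plane, in micrometers."""
    return 0.61 * wavelength_um / na * magnification

def satisfies_nyquist(pixel_um, **optics):
    return pixel_um <= projected_airy_radius_um(**optics) / 2.0

print(round(projected_airy_radius_um(), 1))  # ~24.0 um projected radius
print(satisfies_nyquist(6.45))       # True: comfortably under ~12 um
print(satisfies_nyquist(2 * 6.45))   # False, but only just (12.9 vs ~12 um)
```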

Detector quantum efficiency (QE) is a measure of the likelihood that a photon having a particular wavelength will be captured in the active region of the device, enabling the liberation of charge carriers. The parameter represents the effectiveness of a CCD imager in generating charge from incident photons, and is therefore a major determinant of the minimum detectable signal for a camera system, particularly when performing low-light-level imaging.

No charge is generated if a photon never reaches the semiconductor depletion layer or if it passes completely through without transferring significant energy. The nature of the interaction between a photon and the detector depends upon the photon's energy and corresponding wavelength, and is directly related to the detector's spectral sensitivity range. Although conventional front-illuminated CCD detectors are highly sensitive and efficient, none have 100 percent quantum efficiency at any wavelength.

Image sensors typically employed in fluorescence microscopy detect photons across the visible spectrum and into the near-infrared, with peak sensitivity normally in the green-to-red region. Maximum QE values of conventional sensors are modest, except in the newest designs, which may reach 80 percent efficiency. Figure 10 illustrates the spectral sensitivity of a number of popular CCDs in a graph that plots quantum efficiency as a function of incident light wavelength.

Most CCDs used in scientific imaging are of the interline-transfer type, and because the interline mask severely limits the photosensitive surface area, many older versions exhibit very low QE values.

With the advent of surface microlens technology, which directs more incident light to the photosensitive regions between transfer channels, newer interline sensors are much more efficient, and many have markedly higher quantum efficiency values. Sensor spectral range and quantum efficiency are further enhanced in the ultraviolet, visible, and near-infrared wavelength regions by various additional design strategies in several high-performance CCDs.

Because aluminum surface transfer gates absorb or reflect much of the blue and ultraviolet wavelengths, many newer designs employ other materials, such as indium-tin oxide, to improve transmission and quantum efficiency over a broader spectral range.

Even higher QE values can be obtained with specialized back-thinned CCDs, which are constructed to allow illumination from the rear side, avoiding the surface electrode structure entirely. To make this possible, most of the silicon substrate is removed by etching, and although the resulting device is delicate and relatively expensive, quantum efficiencies of approximately 90 percent can routinely be achieved.

Other surface treatments and construction materials may be utilized to gain additional spectral-range benefits. Performance of back-thinned CCDs in the ultraviolet wavelength region is enhanced by the application of specialized antireflection coatings.

Modified semiconductor materials are used in some detectors to improve quantum efficiency in the near-infrared. Sensitivity to wavelengths outside the normal spectral range of conventional front-illuminated CCDs can be achieved by the application of wavelength-conversion phosphors to the detector face. Phosphors for this purpose are chosen to absorb photon energy in the spectral region of interest and emit light within the spectral sensitivity region of the CCD.

As an example of this strategy, if a specimen or fluorophore of interest emits at a wavelength where the sensitivity of any CCD is minimal, a conversion phosphor can be employed on the detector surface that absorbs efficiently at that wavelength and emits within the peak sensitivity range of the CCD.

The dynamic range of a CCD detector expresses the maximum signal intensity variation that can be quantified by the sensor.

The quantity is specified numerically by most CCD camera manufacturers as the ratio of pixel full well capacity (FWC) to read noise, with the rationale that this value represents the limiting condition in which intrascene brightness ranges from regions that are just at pixel saturation level to regions that are barely lost in the noise.

The sensor dynamic range determines the maximum number of resolvable gray-level steps into which the detected signal can be divided. To take full advantage of a CCD's dynamic range, it is appropriate to match the analog-to-digital converter's bit depth to the dynamic range in order to allow discrimination of as many gray scale steps as possible.

Analog-to-digital converters with bit depths of 10 and 11 are capable of discriminating 1024 and 2048 gray levels, respectively. As stated previously, because a computer bit can only assume one of two possible states, the number of intensity steps that can be encoded by a digital processor (ADC) reflects its resolution (bit depth), and is equal to 2 raised to the power of the bit depth specification.

Therefore, 8-, 10-, and 12-bit processors can encode a maximum of 256, 1024, or 4096 gray levels, respectively, and so on for higher bit depths.
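Putting the last two ideas together, a short sketch (with assumed sensor numbers, not real specifications) computes the dynamic range and the minimum ADC bit depth needed to encode it:

```python
import math

# Dynamic range as full well capacity over read noise, plus the smallest
# ADC bit depth whose 2**bits gray levels cover that range. The sensor
# numbers are illustrative assumptions.

def dynamic_range(full_well_e: float, read_noise_e: float) -> float:
    return full_well_e / read_noise_e

def min_bit_depth(levels: float) -> int:
    """Smallest bit depth b such that 2**b >= levels."""
    return math.ceil(math.log2(levels))

dr = dynamic_range(full_well_e=40_000, read_noise_e=10.0)
print(dr)                          # 4000.0 -> a 4000:1 intensity range
print(min_bit_depth(dr))           # 12 bits (4096 gray levels)
print(round(20 * math.log10(dr)))  # 72 dB, the same ratio in decibels
```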


