Most often used in reference to the numerical aperture of a lens, also known as the f-stop.
The number of bits digitized by the A/D converter; that is, how many bits of data can be produced from the signal in a pixel.
A coaxial type of connector commonly used on professional video systems. The connector is used to couple coaxial cables to video and other high-frequency electronic equipment.
The CMOS (Complementary Metal Oxide Semiconductor) process is used extensively in the semiconductor business. CMOS offers a wide variety of macro cells and architectural advantages, enabling high-speed sensors with XY addressability as well as a high level of feature integration. CMOS has become the most widely used imaging technology due to the “camera-on-chip” approach.
Standard lens mount found on many industrial and scientific cameras. The thread of the lens and lens mount is 1 inch in diameter with 32 threads per inch pitch and with a back focal length of 17.52 mm or 0.684 inch.
Understanding color is difficult but necessary, even for monochrome imaging. The color of light is determined by its wavelength: longer wavelengths are warmer in color (red), shorter wavelengths cooler (blue). Color perception is a function of the human eye. The surface of an object either reflects or absorbs different light wavelengths, and the light that the human eye perceives produces a physiological effect in our brain, so what is red to one person may be perceived slightly differently by another. Terms that further describe the color of an object are hue, saturation and brightness. Hue is the base color, such as red, blue, violet or yellow. Saturation describes how far a shade varies from the pure base color; for example, the hue green has lime (light green) as a less saturated shade. Brightness, also known as luminance, is the intensity of the light. The subject of color would take an entire book to fully explain, but studying a color chart can give the user some insight into the composition of a color scene.
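The hue/saturation/brightness description above can be illustrated numerically. This short sketch uses the Python standard-library `colorsys` module; the RGB triples are illustrative assumptions chosen to show that green and lime share the same hue but differ in saturation:

```python
import colorsys

# Illustrative RGB values (normalized 0..1): a fully saturated green and a
# lighter "lime" shade of the same hue.
green = (0.0, 1.0, 0.0)
lime = (0.6, 1.0, 0.6)  # same hue, lower saturation

for name, rgb in [("green", green), ("lime", lime)]:
    # rgb_to_hsv returns hue, saturation and value (brightness), each 0..1.
    h, s, v = colorsys.rgb_to_hsv(*rgb)
    print(f"{name}: hue={h:.3f}, saturation={s:.2f}, brightness={v:.2f}")
```

Both colors report the same hue (about 0.333, i.e. green on a 0–1 hue wheel); only the saturation differs.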
Color temperature is a common way of describing a light source. The term originally derived its meaning from heating a theoretical black body to temperatures that caused the body to give off colors ranging from red hot to white hot. Lord Kelvin developed this concept, and his name became associated with the unit of measure. Some high-speed cameras have color-balancing circuitry that allows the camera’s sensor to be set to the color temperature of the light being used.
Color vs Monochrome
Most of the early high-speed filming was done in black and white. Once color film became available, the use of black-and-white film declined. The use of high-speed color film set the format standard that video has attempted to meet. For years, monochrome images were all that could be recorded on most high-speed cameras. Today’s high-speed cameras can produce images that replace color film for some high-speed applications, and full 24-bit color images are now possible. To understand the strengths and weaknesses of both color and monochrome in varying high-speed video applications, some background must be discussed.
There are various methods of producing color in high-speed video. The two most widely used techniques are beam splitters and color filter arrays. A beam splitter with three imaging sensors produces true color, meaning that the primary colors and all their saturations are possible. This technique is costly since the electronic circuitry is tripled by the need for three imaging sensors, and the alignment of the three sensors must be very precise; otherwise, mis-registration will occur. The second and most common technique is a cost-saving compromise. Color Filter Arrays (CFA) are more cost-effective because they use only one imaging device. Individual color filters are deposited on the surface of each pixel, in some combination of red, green and blue or a complementary color scheme, so each pixel is restricted to a certain color spectrum. Because the pixels are filtered, the raw data must be interpolated to fill in the missing pixels in each color plane.
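The interpolation step can be sketched as follows. This assumes an RGGB Bayer pattern and simple bilinear averaging; the function name and the raw values are illustrative, not any particular camera’s actual demosaicing algorithm:

```python
# Minimal sketch of color-filter-array interpolation (demosaicing) for an
# assumed RGGB Bayer mosaic. Each pixel records only one color; the missing
# color values are estimated from neighboring pixels.

def interpolate_green(raw, x, y):
    """Estimate green at (x, y) by averaging the 4 horizontal/vertical
    neighbors, which are always green sites at a red or blue pixel."""
    h, w = len(raw), len(raw[0])
    neighbors = [(x - 1, y), (x + 1, y), (x, y - 1), (x, y + 1)]
    values = [raw[j][i] for i, j in neighbors if 0 <= i < w and 0 <= j < h]
    return sum(values) / len(values)

# 4x4 raw mosaic (RGGB): even rows are R G R G, odd rows are G B G B.
raw = [
    [200, 100, 200, 100],
    [100,  50, 100,  50],
    [200, 100, 200, 100],
    [100,  50, 100,  50],
]
print(interpolate_green(raw, 1, 1))  # green estimate at the blue site (1, 1) -> 100.0
```

Real cameras use more sophisticated edge-aware interpolation, but the principle of reconstructing each color plane from its filtered neighbors is the same.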
Now that the main methods for producing color have been discussed, we need to review why a user would choose between recording in color vs. monochrome. Generally, monochrome images produce better image quality, and monochrome cameras are more sensitive because they do not have the Color Filter Array attenuating the light. The resolving capability of a monochrome sensor is also better than that of CFA image sensors, because no interpolation is involved. The one disadvantage of a monochrome image is the loss of color differentiation: a subtle change in gray levels is harder to observe than a change in hue or saturation. Color is valuable for differentiating shades, which may yield useful information. Most high-speed photography is done with monochrome cameras for the reasons listed above.
Depth of Field
Depth-of-field (DOF) is the range in which an object would be in focus within a scene. The largest DOF is achieved when a lens is set to infinity. The smaller the f-stop number (the larger the aperture), the smaller the DOF. If the object is moved closer to the lens, the DOF also decreases. Lenses of different focal lengths will not have the same DOF for a given f-stop.
Depth of Focus
The depth of focus depends upon the numerical aperture (NA) as well as the magnification, and is inversely proportional to both. The higher the magnification, the shorter the depth of focus for any given numerical aperture. Depth of focus is often confused with depth of field, which is measured on the object side of the lens.
Many factors influence the amount of light required to produce the best image possible. Without sufficient light, the image may be:
— under-exposed: detail is lost in dark areas
— unbalanced: poor color reproduction
— blurred: due to a lack of depth-of-field
The time that the imaging sensor is exposed to light depends on several factors. These factors include lens f-stop, frame rate, shutter speed, light levels, reflectance of surrounding material, the imaging sensor’s well capacity, and the sensor’s signal-to-noise ratio (SNR). All of these factors can significantly impact image quality. An often-overlooked factor is the exposure time, also known as the shutter speed.
Exposure time, shutter speed and shutter angle are interchangeable terms. The exposure time for mechanical shutters is set in terms of the number of degrees that the shutter is open. The exposure time for electronic sensors is either the inverse of the frame rate, if no electronic shutter exists, or the time in microseconds that an electronically shuttered sensor is exposed. Shown below are the relationships defining the exposure time:
mechanical shutter = (angle/360) / revolutions per second
no shutter = 1/frame rate
electronic shutter = period of time that the sensor is “integrating”
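These relationships can be expressed as a short sketch (Python; the 180-degree shutter and 1,000 fps figures are assumed example values):

```python
def exposure_mechanical(shutter_angle_deg, fps):
    """Rotating-shutter exposure time: the open fraction of one revolution."""
    return (shutter_angle_deg / 360.0) / fps

def exposure_no_shutter(fps):
    """Without a shutter, the sensor integrates for the whole frame period."""
    return 1.0 / fps

# A 180-degree shutter at 1,000 fps exposes for half the frame period:
print(exposure_mechanical(180, 1000))  # 0.0005 s (500 microseconds)
print(exposure_no_shutter(1000))       # 0.001 s
```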
The exposure time determines how sharp (blur-free) an image is, regardless of the frame rate. The exposure time needed to avoid blur depends on the subject’s velocity and direction, the amount of lens magnification, the shutter speed or frame rate (whichever is faster) and the resolution of the imaging system.
A high-velocity subject may be blurred in an image if it moves too far during the integration of light on the sensor. If a sharp edge of an object moves more than 2 pixels (a line pair) within one frame, the edge may be blurred, because multiple pixels then image an averaged value of the edge, creating a smear. As a rule of thumb, the shutter speed should have a 10x margin over the minimum dictated by the subject’s velocity.
The lens magnification can influence the relative velocity of the subject being imaged. The velocity of an object moving across a magnified field-of-view (FOV) increases linearly with the magnification level. Intuitively, if an object is viewed from far away, its relative velocity across the FOV is less than when it is viewed up close.
A proper shutter speed may be calculated as follows:
Exposure (shutter speed) ≤ (2 x pixel size) / Vr
Vr = object’s velocity x (sensor dimension / field-of-view)
If the object’s velocity, the field-of-view, the imaging sensor’s dimensions and pixel count are known, the shutter speed required to produce a sharp image can be calculated. The relative velocity (Vr) at the sensor is the subject’s velocity scaled by the optical reduction at the sensor. The pixel size is calculated by dividing the sensor size in the dimension of interest (x or y) by the pixel count in that dimension. Knowing that motion of less than 2 pixels (a line pair) at the sensor plane during the exposure will produce a good image, we multiply the pixel size by two. The exposure is therefore calculated by dividing the 2x pixel size by the relative velocity (Vr). Its inverse yields the minimum shutter speed or, for an imaging system without a shutter, the minimum frame rate for sharp images.
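The calculation above can be sketched in pixel units, which avoids converting the sensor dimension to millimeters (Python; the subject velocity, field-of-view width and resolution are assumed example values):

```python
def max_exposure(object_velocity, fov_width, pixels_across):
    """Longest exposure (seconds) that keeps subject motion under 2 pixels.

    object_velocity: subject speed across the field of view, in m/s
    fov_width:       width of the field of view, in meters
    pixels_across:   sensor resolution in the direction of motion
    """
    # Relative velocity (Vr) expressed in pixels per second at the sensor.
    pixels_per_second = object_velocity * pixels_across / fov_width
    return 2.0 / pixels_per_second

# Example (assumed): a part moving at 10 m/s across a 0.5 m wide
# field of view, imaged onto 1,000 pixels.
t = max_exposure(10.0, 0.5, 1000)
print(t)        # 0.0001 s -> a shutter of 1/10,000 s or faster
print(1.0 / t)  # minimum frame rate if the camera has no shutter (about 10,000 fps)
```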
The time during which the sensor is exposed. In the case of a sensor with an electronic shutter, the shutter speed is the time for which the shutter is held open during the taking of an image, which may be a shorter time than the frame period.
Field of View
Field of View (FOV) is the amount of the scene visible to the camera, determined by the sensor size and the lens focal length. Wide-angle lenses have a large field of view, while telephoto lenses have a small field of view.
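As an illustration, the angular field of view can be estimated from the sensor width and the focal length using a thin-lens approximation (a sketch, not taken from the original text; the sensor and focal-length figures are assumed examples):

```python
import math

def horizontal_fov_deg(sensor_width_mm, focal_length_mm):
    """Angular field of view in degrees, thin-lens approximation
    with the lens focused near infinity."""
    return 2 * math.degrees(math.atan(sensor_width_mm / (2 * focal_length_mm)))

# A 2/3-inch sensor (8.8 mm wide) behind a 25 mm lens:
print(round(horizontal_fov_deg(8.8, 25), 1))   # ~20.0 degrees
# The same sensor behind a 100 mm telephoto sees a much narrower field:
print(round(horizontal_fov_deg(8.8, 100), 1))  # ~5.0 degrees
```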
Standard lens mount found on many scientific cameras. The F-mount is a bayonet mount with a throat approximately 44 mm in diameter and a back focal (flange) distance of 46.5 mm. F-mount lenses are the preferred lens type when the diagonal of the sensor exceeds 11 mm, because they are designed to cover the much larger 35 mm film format, over which the image is constrained to be as flat as possible.
The distance between the focal plane on the sensor and the optical center of the lens when the lens is focused at infinity. The focal length of the lens is marked in millimeters on the lens mount. The principal focal point is the position of best focus for infinity.
Frame rate, sample rate, capture rate, record rate and camera speed are interchangeable terms, often shortened to the acronym “fps.” Measured in frames per second, a camera’s speed is one of the most important considerations in high-speed imaging. The frame rate to use when making a recording should be determined after considering the speed of the subject, the size of the area under study, the number of images needed to obtain all the event’s essential information, and the frame rates available from the particular high-speed camera. For example, at 1,000 fps a picture is taken once every millisecond. If an event takes place in 15 milliseconds, the camera will capture 15 frames of that event. If the frame rate is set too low, the camera will not capture enough images. If the frame rate is set higher than necessary, the camera’s on-board storage may not be able to store all the necessary frames.
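The 15-millisecond example can be verified with a trivial calculation (Python sketch):

```python
def frames_captured(event_duration_s, fps):
    """Number of frames the camera takes during an event."""
    return round(event_duration_s * fps)

# The example from the text: a 15-millisecond event recorded at 1,000 fps.
print(frames_captured(0.015, 1000))  # 15 frames
```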
In most high-speed cameras, higher frame rates result in lower resolutions, thus reducing the area of coverage. This happens when a camera’s frame rate is set higher than its ability to provide full-frame coverage. At the higher record rates, the height and/or width of the image is reduced; in return, the frame rate can be increased by ten to fifteen times the camera’s full-frame recording rate. When considering the frame-rate performance of a high-speed camera, be specific about your requirements, and look closely at a manufacturer’s specification sheet to see what the true resolution is at any given frame rate.
Also called the lens F-number or the speed of a lens. An f-stop is a designation that indicates a camera’s aperture opening; it is the numerical indication of how large the lens opening (aperture) is. Each f-stop lets in twice as much light as the f-stop before it and half as much light as the f-stop after it, so over a 5 f-stop range the admitted light ranges from full down to 1/32. The larger the f-stop number, the smaller the opening; for example, f/16 represents a smaller aperture than f/2. Some common f-stops on 35 mm cameras are f/2, f/2.8, f/4, f/5.6, f/8, f/11 and f/16. Smaller openings (like f/16) have greater depth of field.
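Since the light admitted scales as the inverse square of the f-number, the doubling/halving relationship can be checked numerically (Python sketch over the f-stop series listed above):

```python
# Light transmitted by a lens scales as 1/N^2, so each full stop
# (f-number multiplied by sqrt(2)) halves the light reaching the sensor.
stops = [2, 2.8, 4, 5.6, 8, 11, 16]
base = stops[0]
for n in stops:
    relative = (base / n) ** 2  # light relative to the widest stop, f/2
    print(f"f/{n}: {relative:.3f} x the light of f/{base}")
```

Each printed value is approximately half the previous one; f/16 admits about 1/64 of the light of f/2 (6 stops down).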
There are a number of lighting sources available for high-speed video. Some care must be taken in lighting selection due to several factors. The factors that need to be considered include the type of light, the uniformity of the light source, the intensity of the light, the color temperature, the amount of flicker, the size of the light, the beam focus and the handling requirements. All of these factors are important in matching the light to the application.
Lighting an application properly can produce significantly better results than poor light management. There are four fundamental directions for lighting high-speed video subjects: front, side, fill and backlight. Placing a light behind or adjacent to the lens is the most common method of illuminating a subject; it is advisable to keep the light behind the lens to avoid specular reflections off the lens. However, some fill lighting or side lighting may be needed to eliminate the shadows produced by the front lighting. Side lighting is the next most common lighting technique. As the name implies, the light is at an angle from the side, which can produce a very pleasing illumination; in fact, for low-contrast subjects, a low incident lighting angle from the side can enhance detail. Fill lighting, directed from the side or top of a scene, may be used to remove shadows or other dark areas, and may also lessen the flicker from lamps that have poor uniformity. Backlighting may be used to illuminate a translucent subject from behind. It is not used frequently in high-speed video, but certain applications such as microscopy, web analysis or flow visualization are well suited to it. Knowing all of these techniques, and using them when appropriate, is important for getting high-quality images.
The recording time of a high-speed video camera depends on the frame rate selected and the amount of storage medium available. Continuing advances in DRAM technology make higher storage levels affordable, but DRAM is still a limiting factor if more than approximately 10 seconds of full-frame recording at high speeds is required. However, most high-speed events occur in such a short duration that 2,000 frames is usually more than enough to capture an event. As memory chips get denser, the storage capacity of high-speed cameras will continue to increase.
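The trade-off between frame rate and recording time can be sketched as follows (Python; the 2,000-frame memory figure is taken from the text, the frame rates are example values):

```python
def record_time_s(memory_frames, fps):
    """Seconds of recording that fit in on-board memory at a given frame rate."""
    return memory_frames / fps

# 2,000 frames of storage at two different frame rates:
print(record_time_s(2000, 1000))   # 2.0 seconds at 1,000 fps
print(record_time_s(2000, 10000))  # 0.2 seconds at 10,000 fps
```

Doubling the frame rate halves the recording time for a fixed amount of memory.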
Resolution of a high-speed camera is generally expressed in terms of the number of pixels in the horizontal and vertical dimensions. A pixel is defined as the smallest unit of a picture that can be individually addressed and read. At present, high-speed camera resolutions range from 128 x 128 to approximately 2500 x 1600 pixels.
A rule of thumb for capturing high-speed events is that the smallest object or displacement to be detected by the camera should not be less than 2 pixels within the camera’s horizontal field of view.
The sensor resolution may also be expressed in terms of line pairs per millimeter (lp/mm), which expresses how many transitions from black to white (line pairs) can be resolved in one millimeter. To calculate a sensor’s theoretical limiting resolution in lp/mm, take the inverse of two times the pixel size. Shown below is the limiting resolution of a sensor with a 16-micron pixel.
Theoretical Limiting Resolution
= (1 / (2 x pixel size)) x 1000 = (1 / (2 x 16)) x 1000 = 31.25 lp/mm
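The same calculation as a sketch (Python):

```python
def limiting_resolution_lp_mm(pixel_size_um):
    """Theoretical limiting resolution in line pairs per millimeter.

    A line pair needs two pixels; the factor of 1000 converts the
    pixel size from microns to millimeters.
    """
    return 1000.0 / (2 * pixel_size_um)

print(limiting_resolution_lp_mm(16))  # 31.25 lp/mm, matching the text
```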
Region of Interest
Region of Interest (ROI) is a user-defined resolution, a rectangular exposure area on the sensor.
A measure of the ability to resolve an edge on an object.
Most high-speed image sensors have a sensitivity that is equivalent to a film Exposure Index value of between 125 ISO and 480 ISO in color and up to 3200 ISO in monochrome. The sensitivity is a very important factor for obtaining clear images. An inexperienced user may mistake a shallow depth-of-field for motion blur. If the sensitivity of the camera is not high enough for a given scene, the lens aperture must be opened up, which reduces the depth-of-field within which the object remains in focus. As the object moves, it can take a path outside the area that is in focus, giving the appearance of motion blur when, in reality, it is simply out of focus.
In practice, a single 600-watt incandescent lamp placed four feet from a typical subject provides sufficient illumination to make recordings at 1,000 fps with an exposure of one millisecond (1/1,000 of a second) at f/4. This level of performance is fine for many applications, although some demanding high-speed events have characteristics where greater light sensitivity may be preferred.
The size of the image sensor in a camera is important to know. Some common sensor sizes include 1/2 inch, 2/3 inch and 1 inch. The 1-inch sensor has an effective width of 12.8 millimeters, while the 2/3-inch sensor has an effective width of 8.8 millimeters. A lens that works properly on a camera with a small sensor may not produce a large enough image to work correctly on a camera with a large sensor, due to distortion in the fringe areas of the lens. Knowing the width of a sensor also helps prevent image blur, because users can calculate parameters such as the correct exposure time, as well as the depth of field for a given aperture.
The goal in using a high-speed camera is to obtain a series of pictures of a high-speed event that can be observed in slow motion after capture. Time magnification describes the degree of “slowing down” of motion that occurs during the playback of an event. To determine the amount of time magnification, divide the recording rate by the replay rate. For example, a recording made at 1,000 fps and replayed at 30 fps will show a time magnification of roughly 33:1: one second of real time will last for more than 33 seconds on the television or computer monitor. If the same recording were replayed at only 1 fps, that one-second event would take more than 16 minutes to play back! Most systems allow replay in forward or reverse with variable playback speeds. It is therefore important to capture only the information that is necessary; otherwise, long recordings can take hours to play back.
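The time-magnification arithmetic can be sketched as follows (Python, using the figures from the text):

```python
def time_magnification(record_fps, playback_fps):
    """How many times slower the event appears on playback."""
    return record_fps / playback_fps

def playback_seconds(event_seconds, record_fps, playback_fps):
    """Wall-clock playback duration of a recorded event."""
    return event_seconds * time_magnification(record_fps, playback_fps)

# Recorded at 1,000 fps, replayed at 30 fps:
print(round(time_magnification(1000, 30)))      # ~33 (a 33:1 magnification)
print(round(playback_seconds(1, 1000, 30), 1))  # one real second plays back in ~33.3 s
print(playback_seconds(1, 1000, 1) / 60)        # replayed at 1 fps: about 16.7 minutes
```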
Types of Lighting
Lighting types can be identified by two characteristics: the physical design and the method of producing the light. The physical characteristics include the lens, the reflector, the packaging and the bulb design. The methods of producing light include tungsten, carbon arc, fluorescent and HMI.
Tungsten lighting is also referred to as incandescent lighting. Tungsten color temperature is 3200K. A common type of tungsten lamp is the halogen lamp. Halogen is a hot light source, since the bulb must run hot enough to sustain the regenerative halogen cycle that redeposits tungsten on the filament. Tungsten lamps are efficient in their light output, but care must be taken when using them due to the high heat of the lamps and housings.
This type of lamp forms an arc between two carbon electrodes. The arc vaporizes the carbon, fueling a bright flame that burns from one electrode to the other. This type of lighting is expensive and rarely, if ever, used in high-speed photography.
The fluorescent tube is one type of gas discharge lamp. At the end of each tube are electrodes, and the tube is normally filled with argon and some mercury. As current is applied at the electrodes, the argon gas vaporizes the mercury. The mercury emits ultraviolet radiation, which strikes the phosphor coating on the side of the tube; the phosphor transforms the ultraviolet into visible light. Most fluorescent lamps emit a dominant green hue, which is not very suitable for a balanced light source. Additionally, the discharge produces a non-uniform light that is easily detected as 60-cycle flicker when playing back images from a high-speed camera.
HMI (hydrargyrum, or mercury, medium-arc iodide) is the most common lamp in this class of lighting. As current is passed through the HMI electrodes, an arc is generated and the gas in the lamp is excited to a light-emitting state. The spectrum of light emitted includes visible as well as ultraviolet, so this light source typically has a UV filter to block the harmful emissions. The HMI light is a balanced light source that generates an intense white light. If a switching ballast is used with the HMI, it produces a uniform light with very low flicker. Other types of ballast are not as well regulated and are not as useful for high-speed photography.