2-CCD HDR – A method of capturing high dynamic range (HDR) images using a beam-splitter prism to simultaneously send the identical high-contrast scene to two precisely aligned CCDs. By individually adjusting the exposure settings of the two CCDs, one imager can be set to properly expose the darker portions of the scene while the other properly captures the brighter areas. An image processing algorithm, either in the camera or on an external computer, can then “fuse” these two images together to extend the dynamic range of the image beyond that of a single imager.
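The fusion step can be sketched as follows. This is a minimal illustration, assuming a hypothetical `fuse_hdr` helper and a known exposure-time ratio between the two CCDs; real cameras use more sophisticated blending than this simple substitution:

```python
import numpy as np

def fuse_hdr(long_exp, short_exp, exposure_ratio, sat_level=4095):
    """Fuse two aligned exposures of the same scene into one HDR image.

    long_exp / short_exp: e.g. 12-bit images from the two CCDs.
    exposure_ratio: long exposure time divided by short exposure time.
    """
    long_exp = long_exp.astype(np.float64)
    short_exp = short_exp.astype(np.float64)
    # Where the long exposure is saturated, substitute the short
    # exposure scaled up to the same radiometric scale.
    saturated = long_exp >= sat_level
    return np.where(saturated, short_exp * exposure_ratio, long_exp)
```

The result spans a wider range of values than either sensor alone and can then be stored raw or compressed for display.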
2-CCD / 2-CMOS multi-imager – A camera containing two CCDs or two CMOS sensors affixed to a beam-splitter prism and precisely aligned to a common optical plane such that the same image is simultaneously passed to both imagers. By varying the sensors and the filter coatings on the prism, a 2-CCD/2-CMOS camera can be designed for a variety of multi-spectral configurations, such as simultaneous color images and near-infrared imaging of the same scene. Some older 2-CCD models were also designed for monochrome HDR, color HDR, and/or low-noise double-speed operation, though no current models are available with those capabilities.
3-CCD / 3-CMOS – Describes color CCD and CMOS cameras which have separate sensors for the Red, Green and Blue color bands. This is the typical construction of broadcast cameras, and this technology has been adopted for certain industrial and medical applications. In 4-CCD cameras, an extra chip has been added to simultaneously detect the near infrared light spectrum. The major advantage of this architecture is that the camera has full resolution in all 3 color bands.
AcquisitionTransferStart (Delayed Readout) – A camera setting used to make a camera output stored image data in response to an external trigger signal (delayed readout). The number of frames that can be acquired and held in the camera for delayed readout depends on the camera's storage capacity and the resolution/bit depth of the images.
Action Commands (and control) – A feature of the GigE Vision standard that enables cameras to execute a pre-configured action when they receive an action command. Action commands can be sent as unicast or broadcast messages; by broadcasting, a single command can instruct multiple cameras, even cameras of different types, to act simultaneously. Although this function is subject to jitter and delays, it is useful for controlling multiple cameras simultaneously.
Active Pixels – The pixels on a CCD or CMOS imager which contain image information when read out of the camera. This is typically less than the total number of pixels on an imager, as pixels around the edges of the sensor may be used as optical black (OB) pixels – used to establish black levels or to help with color interpolation – or may not be read out at all. The term “effective pixels” includes the active pixels plus the optical black pixels, i.e., all pixels that can be read out of the sensor, which may still be less than the total number of pixels on the chip. Please note that these terms are not always used consistently, especially in the consumer camera world, where “effective pixels” is often used in place of “active pixels.”
ALC (Automatic Level Control) – A JAI function that combines the automatic gain control (AGC/Auto Gain Control) and automatic exposure control (ASC/Auto Shutter Control) functions to handle changes in scene lighting. The function lets users define various parameters relating to the mix of shutter and gain adjustments that will be used when light levels change and how quickly the camera will react to such changes.
Analog Camera – Provides output in analog form, typically, but not necessarily, according to a video standard (CCIR / PAL for Europe and EIA/NTSC for USA and Japan).
AOI – Most commonly stands for "Automated Optical Inspection," which refers to any system that uses cameras and software programs to automatically search for specific defects in manufactured products or sub-components and generate pass/fail results. AOI systems are designed to replace manual inspections, providing both greater speed and accuracy than human inspectors. Systems can inspect for a single type of defect such as size, position, presence/absence, discoloration, scratches, etc., or can simultaneously inspect for multiple defects. In the past, the same abbreviation was also sometimes used for "Area of Interest" though today that usage has been almost completely replaced by ROI (Region of Interest).
Industry where JAI targets OEMs or integrators that make equipment to inspect and separate cotton for foreign materials.
Target customer group that includes OEMs for inspection and sorting of food products by grade, color, size, or other characteristics, and for removal of foreign material.
Life sciences industry – Industry focused on equipment and processes to study and examine living organisms. Life Sciences encompasses a wide array of fields including, but not limited to, microbiology, biotechnology, medical imaging, pathology, genomics, and ophthalmology.
Automated imaging of printed circuit boards or electronic subsystems to determine proper component placement, identify defects and evaluate overall quality
Industry where JAI targets OEMs or integrators that make equipment to identify and separate recyclable materials.
Area Scan – Denotes a camera (or imager) architecture in which images are captured in a square or rectangular form in a single cycle (similar to the images created by a DSLR or cell phone camera). This image is then read out in a single frame with its resolution being a function of its height and width in pixels. The alternative to area scan is Line Scan.
Automated Imaging – A term encompassing all uses of cameras in industrial applications where image processing (using in-camera or external computer algorithms) is involved. Subcategories of Automated Imaging include Machine Vision and Factory Automation.
Auto-iris lens – Cameras operating in outdoor environments are faced with varying light conditions. When light levels change, the images captured by the camera will be either too bright or too dark. An auto-iris lens provides a solution to this problem. The lens has an electric motor-driven iris which is opened or closed according to signals fed to it from the camera. A camera equipped with an auto-iris lens can produce a video signal of constant brightness by opening or closing the iris as the light level changes.
Binning – A process that combines the signal values from two or more adjacent pixels to create a single “virtual” pixel having a higher signal level. The result is an image with lower pixel resolution (pixel detail) but with higher sensitivity. Common binning schemes include combining every two adjacent pixels in each horizontal line (horizontal binning), combining every two adjacent pixels in each vertical column (vertical binning), or combining each group of four pixels – two horizontal and two vertical (2 x 2 binning) – to create an image with four times greater sensitivity but ¼ the resolution. Some JAI cameras offer the option to have pixel values averaged when they are combined instead of being added together.
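The 2 x 2 case described above can be sketched in a few lines. This is an illustrative example using a hypothetical `bin2x2` helper, not any particular camera's implementation; the `average` flag mirrors the averaging option mentioned for some JAI cameras:

```python
import numpy as np

def bin2x2(img, average=False):
    """Combine each 2x2 block of pixels into one 'virtual' pixel.

    Summing quadruples the signal level at 1/4 the resolution;
    averaging keeps the original value range instead.
    """
    h, w = img.shape
    # Trim odd edges, then group pixels into 2x2 blocks.
    blocks = img[:h // 2 * 2, :w // 2 * 2].reshape(h // 2, 2, w // 2, 2)
    summed = blocks.sum(axis=(1, 3))
    return summed // 4 if average else summed
```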
Blemish Compensation – Defective pixels can occur on image sensors over time. This camera feature substitutes values for defective pixels by interpolating using the surrounding pixels. The function works on defective (bright) pixels that are not adjacent to each other.
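The substitution described above can be illustrated with a simple neighbor interpolation. This is a sketch assuming a hypothetical `correct_blemishes` function and a list of known defect coordinates; it relies on the stated condition that defective pixels are not adjacent to each other:

```python
import numpy as np

def correct_blemishes(img, defect_coords):
    """Replace each listed defective pixel with the mean of its
    valid 4-connected neighbors (assumes defects are not adjacent)."""
    out = img.astype(np.float64).copy()
    h, w = img.shape
    for (y, x) in defect_coords:
        neighbors = [img[ny, nx]
                     for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1))
                     if 0 <= ny < h and 0 <= nx < w]
        out[y, x] = sum(neighbors) / len(neighbors)
    return out
```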
Blooming – The term used to describe when a set of pixels in an image is oversaturated by a bright spot (the sun, a light, a laser) and the charge contained in those pixels spills over into adjacent pixels causing the bright spot to “bloom” in a radiating pattern.
Brightness (Hue and Saturation) – Brightness is one aspect of color in the RGB color model. While hue defines the peak wavelength of the color, and saturation defines how “pure” the color is (how narrow or wide is the waveband), brightness defines the intensity or energy level of the color. This scheme, abbreviated HSB, is one of several similar (though not identical) color schemes used in machine vision.
Burst Mode (Burst Trigger Mode) – In this mode, a single external trigger signal causes the camera to acquire a "burst" of multiple images at, or close to, the sensor's maximum frame rate, which is faster than the camera's interface can support. Instead, the camera temporarily stores the images in memory, so they can then be read out at the slower speed of the interface. On a GigE Vision camera, for example, this enables the capture of image sets with interframe timing that is faster than could normally be handled over the 1 Gbps bandwidth limit of the interface. The number of frames that can be acquired in each burst depends on the camera's storage capacity and the resolution/bit-depth of the images.
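The relationship between storage capacity and burst length is simple arithmetic; a sketch (the `burst_capacity` helper and the example figures are illustrative, not a specification of any camera):

```python
def burst_capacity(memory_bytes, width, height, bits_per_pixel):
    """Number of whole frames a camera's internal buffer can hold
    for a single burst, given resolution and bit depth."""
    bytes_per_frame = width * height * bits_per_pixel / 8
    return int(memory_bytes // bytes_per_frame)
```

For example, a hypothetical 512 MB buffer holds 258 full frames of 8-bit 1920 x 1080 video, which can then be read out at whatever rate the interface supports.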
Camera Link – A serial communication protocol designed for computer vision applications, based on the National Semiconductor Channel Link interface. It was designed to standardize the digital communications (interface) between industrial video products such as cameras, cables, and frame grabbers. The standard is maintained and administered by the global machine vision organization, the Automated Imaging Association (AIA).
Cat5e and Cat6e cables – Standard categories of Ethernet cables. Both use four twisted pairs of copper wires, however Cat6e features more stringent specifications for crosstalk and system noise, and supports higher signal frequencies - up to 250 MHz compared to 100 MHz for Cat5e. For this reason, Cat6e is strongly recommended for use with GigE cameras, especially if longer cable lengths are to be used.
CCD sensor - CCD stands for Charge Coupled Device. This term is generally used for the imaging sensor used in CCD cameras. A CCD sensor is divided up into discrete picture elements, called pixels, in the form of a matrix. The sensor converts light into electric charge, proportional to the amount of incident light. The charge is then transformed to a voltage as each pixel is read out.
CCIR – Refers to an analog video and television standard published in 1982 by International Telecommunication Union - Radiocommunications sector. This became the dominant video standard used for monochrome video in Europe and several other regions around the world. It is characterized by interlaced video running at 25 frames per second (50 fields per second) with a standard screen resolution of 752 pixels by 582 lines. In parts of the world where the standard power frequency is 60 Hz, such as North America, a different standard is used. See EIA for a description.
Chromatic Aberration Correction (lateral) – This camera function corrects for the chromatic aberration (color shifts) in an image caused when a lens produces slightly different magnifications of different color wavelengths. This can cause some of the color components (R,G,B) for the same point on a target to spread onto adjacent pixels on the sensor. Distortion can be amplified in prism cameras due to the additional refractive element in the optical path (the prism). In cases where chromatic aberration exists, if it is not corrected it will result in color "fringes" appearing around objects towards the edges of the image.
Chunk Data – A camera feature that adds camera configuration information to the image data that is output from the camera. Embedding camera configuration information in the image data allows you to use the serial number of the camera as a search key and find specific image data from among large volumes of image data. In addition, when images are acquired with a single camera in sequence under multiple setting conditions, you can search for images by their setting conditions.
Clock Frequency – Refers to the frequency of a sine wave typically generated by an oscillator crystal that sets the pace for how fast operations can take place inside the camera. Most commonly, a “pixel clock” will guide the speed at which the internal electronics can read out pixel information from the imager (CCD or CMOS) and pass it to the camera interface. The higher the clock frequency – typically expressed in MHz (millions of cycles per second) – the faster data can be extracted from the sensor, enabling a faster frame rate. For some interfaces, a second clock frequency governs how fast the data can then be organized and sent out of the camera. This frequency (the Camera Link pixel clock, for example), may be different than the pixel clock used for the imager.
CMOS – Complementary Metal Oxide Semiconductor. Originally used for µ-processor or memory chips. Can also be used to design image sensors. In the past, image sensors using CMOS technology had major drawbacks in the areas of noise and shutter technology, thus making them less interesting to use than CCD sensors. Today, new generations of CMOS imagers have alleviated many of these issues enabling them to overtake CCDs as the dominant type of image sensor used in machine vision cameras, as well as many other types of cameras.
C-Mount – A standard type of lens mount using screw threads to attach the lens securely to the camera, even in high-vibration factory environments. Because of the diameter of the C-mount opening, these lenses typically cannot be used on imagers with a format larger than 4/3” in diameter.
CoaXPress Interface – A point-to-point serial digital interface standard for machine vision cameras. CoaXPress uses traditional coaxial cables, similar to those used for older analog cameras, but adds a high bandwidth chipset capable of operating at up to 12.5 Gbps per cable (more than 12X Gigabit Ethernet speeds). It supports cables in excess of 100 m in length without repeaters or hubs.
Color Enhancer – A function available in some JAI cameras that boosts the intensity of certain colors in the image as specified by the user. Three primary and three complementary colors can be selected for enhancing up to 2X their normal intensity.
Color Space Conversion – A process that changes the standard color space (RGB) that is used to define the colors in an image into other ways of specifying color information. In JAI cameras equipped with color space conversion capabilities, available color spaces include sRGB, AdobeRGB, UserCustom RGB, CIE XYZ, and HSI. (HSI is not supported on some camera models).
Counter Function – A camera function that counts change points (signal transitions) in the camera’s internal signals using the camera’s internal counter, allowing the count value to be read from the host side. This function is useful for verifying error conditions in internal camera operations via the count value.
CS-Mount – Similar to the screw-in C-mount, CS-mount has been used extensively in the security industry where smaller cameras and imagers are common. Due to focal length differences, adapters are available to enable C-mount lenses to be used on CS-mount cameras, however the reverse is not possible. A CS-mount lens cannot be used on a C-mount camera.
CXP Link Sharing – A feature of the CoaXPress interface standard (v2.0 and later) that allows cameras to be connected to multiple PCs. In Sharing Mode, the captured images can be divided and sent to each PC. In Duplicate Mode, the same captured image can be copied and sent to each PC.
Debayering – An interpolation function inside a camera that converts Bayer sensor data into an RGB pixel format for outputting (for example, RGB8, RGB10V1Packed, RGB10p32). In addition to alleviating the need for Debayering on a host computer, in-camera conversion to RGB format enables Color Enhancer and Color Space Conversion to be used on single-sensor (Bayer) cameras, instead of being limited exclusively to multi-sensor (prism) cameras.
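The Bayer-to-RGB mapping can be illustrated with a deliberately naive sketch. Actual in-camera debayering uses interpolation filters to preserve full resolution; this half-resolution "superpixel" version (hypothetical `demosaic_rggb` helper, RGGB pattern assumed) only shows how the mosaic's color sites map to RGB channels:

```python
import numpy as np

def demosaic_rggb(raw):
    """Naive demosaic of an RGGB Bayer mosaic: each 2x2 cell becomes
    one RGB pixel (half resolution, no spatial interpolation)."""
    r  = raw[0::2, 0::2]   # red sites
    g1 = raw[0::2, 1::2]   # green sites, red rows
    g2 = raw[1::2, 0::2]   # green sites, blue rows
    b  = raw[1::2, 1::2]   # blue sites
    g = (g1.astype(np.float64) + g2) / 2
    return np.dstack([r, g, b])
```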
Decimation – A camera function that performs downsampling of the image (typically by omitting every other pixel) in both the horizontal and vertical direction. This reduces the file size for processing or storage while maintaining the full field of view of the image.
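A sketch of the "omit every other pixel" form of downsampling described above (the `decimate` helper is illustrative; cameras implement this in readout logic, not host code):

```python
def decimate(img, factor=2):
    """Keep every `factor`-th pixel in both directions; the field of
    view is unchanged but the output has fewer pixels."""
    return img[::factor, ::factor]
```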
Dichroic Coating – A coating placed on the face of a prism or other piece of optical glass that allows specific wavelengths of light to pass through while reflecting the remaining wavelengths. Dichroic coatings are used in JAI’s multi-imager prism cameras to split light into red, green, and blue wavelengths for color imaging, and can be used to separate near-infrared light for multi-spectral imaging. They can also be customized for specific spectral analysis tasks.
Digital Camera – All camera sensors are based on analog technology, i.e., pixel wells convert captured photons into an electrical charge that represents an analog signal value. In a digital camera this electrical charge is converted to a digital signal, using an A/D converter, before it is transferred out of the camera, typically as an 8-bit, 10-bit, or 12-bit value. Modern CMOS image sensors perform the A/D conversion on the sensor itself, enabling data to be output from the camera in formats already suitable for computer processing. Older analog cameras sent analog signals out of the camera, which were convenient for connecting directly to analog TV monitors, but required analog-to-digital conversion inside the computer before image data could be analyzed by computer algorithms.
DSNU – Dark Signal Non-Uniformity. This refers to variations in individual pixel behavior that can be seen or measured even in the absence of any illumination. In simple terms, it refers to how different pixels perceive “black” or the absence of light. Most of these “dark signal” variations are affected by temperature and integration time. Other variances are driven more by electronic issues (on-chip amplifiers and converters) and remain fairly constant under different thermal conditions. These “fixed” non-uniformities are typically considered part of an imager’s “Fixed Pattern Noise” (see Fixed Pattern Noise/FPN). Compensation for DSNU issues is typically made at the factory as part of the camera testing process.
DSP – Digital Signal Processor. Some color cameras incorporate a DSP for the enhancement and correction of images in real time. Parameters, which are typically controlled, are: Gain, Shutter, White Balance, Gamma, and Aperture. DSPs can also be used for edge detection/enhancement, defective pixel correction, color interpolation, and other tasks. The output of a DSP camera is typically analog video.
Dual tap – This typically refers to a divide-and-conquer method used for reading information from a CCD image sensor whereby the sensor is divided into two regions – either left/right or top/bottom – and the pixels are read from both regions simultaneously. The frame rate of the sensor is effectively doubled, minus a little overhead, without resorting to overclocking, which increases noise. CMOS imagers, which have a more flexible readout architecture, may utilize many different taps to read out sections of the chip, producing high frame rates but also a phenomenon known as “fixed pattern noise.”
Edge Enhancer – A camera function that identifies boundaries within an image, such as lines or edges, and increases the contrast/sharpness of those boundaries.
EIA Interface – Also called RS-170, this refers to a standard for traditional monochrome television broadcasting in North America and other parts of the world where the typical power frequency is 60 Hz. The EIA standard calls for interlaced video at 30 frames per second (60 fields per second) with a standard screen resolution of 768 pixels by 494 lines. See CCIR for the European standard.
Encoder Control – A built-in feature available in some JAI line scan cameras. With encoder control, a camera can be directly connected to an encoder rather than receiving encoder signals via the frame grabber or other interface connection. Direct connection enables the camera to generate trigger signals or detect the scanning direction of the subject in response to signals output from the rotary encoder.
Event Control – A camera feature that outputs a signal change point inside the camera as information indicative of an event occurrence (event message). Events that can use the Event Control function include AcquisitionTrigger, FrameStart, etc. (varies depending on the camera model). You can specify whether or not to send an event message when an event occurs at each event.
Exposure Active (EEN) – A signal which can be output externally from the camera indicating the timing during which charge is being accumulated on the sensor (i.e., the exposure period).
Field of View (FOV) – Describes the area that the camera and lens are looking at. In machine vision inspection applications, this is typically expressed in a size measurement (e.g., 16 cm wide by 9 cm high). In traffic or surveillance applications, this can also be expressed in degrees (e.g., a 40-degree horizontal FOV).
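The angular form of FOV follows from simple lens geometry: FOV = 2 · atan(d / 2f), where d is the sensor dimension and f the lens focal length. A sketch (the `fov_degrees` helper and the example figures are illustrative; the thin-lens approximation is assumed, so results are approximate for real lenses):

```python
import math

def fov_degrees(sensor_size_mm, focal_length_mm):
    """Angular field of view for one sensor dimension and a given
    lens focal length: FOV = 2 * atan(d / (2f))."""
    return math.degrees(2 * math.atan(sensor_size_mm / (2 * focal_length_mm)))
```

For example, a sensor dimension of 8.8 mm paired with an 8.8 mm lens gives roughly a 53-degree field of view along that axis.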
FireWire – See IEEE 1394
Fixed Pattern Noise (FPN) – A non-random type of visible and/or measurable image “noise” that results from electrical signal differences, or “offsets,” that are not related to the amount of light striking the imager. This is most commonly seen in CMOS imagers where each pixel typically has its own amplifier and, in order to increase readout speed, “strips” of pixels are read out simultaneously through multiple amplifiers. The use of many different amplifiers, each with slight variations in electrical characteristics, can result in a “pattern” of slightly lighter and darker areas in the image. This is typically perceived as a vertically-oriented pattern that overlays the image. Because CCDs shift all pixels one row at a time through the same readout register, they are virtually immune to Fixed Pattern Noise, except in the case of multi-tap output where careful “tap balancing” is required to avoid a similar issue. FPN is considered a type of Dark-Signal Non-Uniformity (see DSNU) and can be compensated by measuring and mapping the pattern of amplifier differences and applying an image processing algorithm to adjust for these variances. This function is typically built into the camera and is not adjustable by the camera user.
Flat-field Correction – An in-camera technique that corrects for slight pixel-to-pixel sensitivity variations across an image sensor. Essentially, this calibration technique makes small adjustments in the gain applied to each pixel such that when the camera is pointed at a smooth, white card which has been evenly lit at less than 100% saturation, all pixels will have the same pixel value (see PRNU).
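The calibration can be sketched as a per-pixel gain map derived from a reference frame of the evenly lit white card. This is an illustrative host-side version (hypothetical `flat_field_correct` helper); in practice the correction runs inside the camera:

```python
import numpy as np

def flat_field_correct(img, flat_frame):
    """Apply per-pixel gains derived from a flat (evenly lit, sub-
    saturation) reference frame so that uniform scenes produce
    uniform output."""
    flat = flat_frame.astype(np.float64)
    gain = flat.mean() / flat   # brighter-than-average pixels get gain < 1
    return img * gain
```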
Four tap – Also called “quad-tap,” this is a divide-and-conquer method of reading information from a CCD image sensor whereby the CCD is divided into four regions and the pixels are read from all four regions simultaneously. The frame rate of the sensor is effectively increased by a factor of four, minus a little overhead, without resorting to overclocking, which increases noise. CMOS imagers, which have a more flexible readout architecture, may utilize many different taps to read out sections of the chip, producing high frame rates but also a phenomenon known as “fixed pattern noise.” See also dual-tap.
Frame Grabber (also sometimes called Acquisition Board) – A board inserted into a PC for the function of acquiring images from a camera directly into the memory of the PC, where the image processing takes place. Certain frame grabbers also have on-board processors for doing image processing independently of the host computer.
Frame Rate – The rate at which an area scan camera can capture and read out an image. This is usually expressed in “frames per second” with the frame rates of typical machine vision cameras ranging from a few frames per second up to more than 200. Frame rate can often be increased by using binning (though not in all cameras), and by using partial scanning or region of interest (ROI), whereby only a portion of the active pixels is read out of the camera during each frame period.
Frame Start Trigger (Line Scan) – A camera setting used to tell a camera to capture an image. In line scan cameras, the Frame Start Trigger setting tells the camera to consolidate a user-defined set of line data into a frame for outputting. Data Leader and Data Trailer are added in every frame. The number of lines in one frame is set by Offset Y and Height of [Image Format Control]. After receiving a Frame Start Trigger signal, the camera will skip the image data from the number of lines indicated by Offset Y and then send the data of Data Leader, the image data, and the Data Trailer. Upon completion of data transmission for one frame, no data will be sent until the next Frame Start Trigger is received.
Gain – An amplification of the signal collected by the pixels in the CCD or CMOS imager. Applying gain is like “turning up the brightness knob” on the image. However, gain also increases the noise in the image, which may make it unusable for some types of machine vision inspection or measurement tasks. In some cases, “negative gain” can be applied to “turn down” the brightness of an image, though usually this is done with the shutter or the lens iris.
Gamma Correction – Adjusts the relationship between the values recorded by each pixel and those that are output in the image for viewing or processing. In a strictly linear relationship (gamma=1.0), a half-filled pixel well is output in 8-bit mode at a pixel value of 127 or 128 (half of the full value of 255). But gamma correction uses a nonlinear function to map well values to a different curve of pixel values. Sometimes this is done to mimic the responsiveness of a computer monitor or the human eye, which prefers a brighter set of gray tones (gamma = 0.45). Other times, it can be done to correct for high or low contrast within the image (see also Look-up Table).
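The mapping is usually precomputed as a look-up table (see Look-up Table). A minimal sketch, assuming a hypothetical `gamma_lut` builder and normalized power-law gamma; real cameras may use different curve definitions:

```python
import numpy as np

def gamma_lut(gamma, bits=8):
    """Build a look-up table mapping linear pixel values to
    gamma-corrected output values via out = in**gamma (normalized)."""
    levels = 2 ** bits
    x = np.arange(levels) / (levels - 1)          # normalize to 0..1
    return np.round((x ** gamma) * (levels - 1)).astype(np.uint16)
```

With gamma = 1.0 the table is an identity mapping; with gamma = 0.45 mid-gray input values map to brighter outputs, matching the display/eye-oriented curve described above.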
General Imaging – The term collectively describing applications where typically no (or only limited) image processing is involved. Typically involves capturing an image and displaying it on a monitor, or recording images for later analysis. Surveillance is a type of General Imaging, as are surgical viewing applications.
GenICam – GenICam is a universal configuration interface across a wide range of standard interfaces, such as GigE Vision, Camera Link, and IEEE 1394-IIDC, regardless of the camera type and image format. It allows the user to easily identify the camera type and the features and functions available in the specific camera, and also to see what range of parameters is associated with each function. The core of the GenICam standard is the Descriptor File (in XML format) that resides in the camera, which maps the camera’s internal registers to a standardized function list. GenICam is owned and licensed by the EMVA (European Machine Vision Association).
Gigabit Ethernet – A computer networking standard for transmitting digital information in packets across a computer network at a speed of 1 billion bits of information per second (1 Gbps).
GigE Vision – An interface standard introduced in 2006 that uses Gigabit Ethernet data transmission (1000BASE-T) for outputting image data from industrial cameras. The GigE Vision standard is maintained and licensed by the Association for Advancing Automation (A3) and has become one of the most prevalent digital camera standards in the world. It utilizes standard Cat5e or Cat6e cables to transmit data up to 100 m at a rate of 1 Gbps (125 Mbytes/s). Because it is a networking standard, it also supports various multicasting and broadcast messaging capabilities that are not possible with point-to-point interfaces. Since its introduction, the standard has evolved to support additional Ethernet performance tiers. These include 2.5 Gbps (2.5GBASE-T), 5 Gbps (5GBASE-T), and 10 Gbps (10GBASE-T, also called 10 GigE). There are even a few machine vision cameras available in the market that can support 25 Gbps and 100 Gbps speeds.
GPIO – Stands for general purpose input and output. This typically refers to a set of functions and signal registers which can be accessed and programmed by users to perform a variety of fundamental camera tasks, such as triggering the camera, setting up a pulse generator or counter, and designating which inputs and outputs will be used for various tasks.
Gradation Compression (Mode) – A JAI camera mode that compresses the bit depth of captured images to enable scenes containing a wide range of pixel values to be output as a narrower set of intensity gradations. Maximum range of raw pixel values can be either 10 bits (0-1023) or 12 bits (0-4095). They are compressed using one or two user-defined knee points into 8-bit images for storage and display.
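A single-knee version of this compression can be sketched as a piecewise-linear mapping. The `compress_gradation` helper and the knee values shown are hypothetical illustrations, not JAI's actual curve parameters:

```python
import numpy as np

def compress_gradation(img12, knee_in=1024, knee_out=192, max_out=255):
    """Compress 12-bit pixel values (0-4095) to 8 bits using one
    user-defined knee point: values below knee_in get most of the
    output range; values above share what remains."""
    img = img12.astype(np.float64)
    low = img / knee_in * knee_out
    high = knee_out + (img - knee_in) / (4095 - knee_in) * (max_out - knee_out)
    return np.where(img < knee_in, low, high).astype(np.uint8)
```

Here the darkest quarter of the 12-bit range is spread across most of the 8-bit output, while the bright three-quarters are squeezed into the remainder, preserving shadow gradations in high-contrast scenes.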
Grey Scale – This is another term for black-and-white, or monochrome imaging. It refers to images where all pixel values represent the level of light intensity, without including any color information. Thus, all pixels are expressed as varying shades of grey. The number of possible grey values depends on how many bits are used to hold each pixel value. In an 8-bit image, 256 values are possible. For a 10-bit image, 1024 different shades are available, while a 12-bit image can support 4096 different grey values.
GUF / GenICam Firmware Update – Refers to a standardized method of updating firmware in GenICam-compliant devices. Cameras that support this standard can be updated using a GenICam Update File (GUF) and the JAI GenICam Firmware Update Tool.
HDR (High Dynamic Range) – Refers to methods that can be used to minimize saturated and/or black pixels in scenes where the range of pixel intensity values exceeds that of the camera's sensor. Most cameras have a maximum sensor range of 12 bits (some may have less). High dynamic range methods can be used to extend the effective range of pixel values to 14 bits or more, which can then be output as raw values or in a compressed image format.
HDTV – Refers to the high-definition television standard developed for broadcasting. HDTV has several different levels, but it is most commonly used to mean a progressive-scan image with a resolution of 1920 pixels wide by 1080 lines high and a minimum frame rate of 30 frames per second. This is sometimes abbreviated as 1080p30 or just 1080p. More recently, both consumers and machine vision customers have shown a growing interest in 1080p HDTV running at 60 frames per second, which produces sharper images of moving objects.
Hue (Saturation and Brightness) – Hue is one aspect of color in the RGB color model. Hue defines the peak wavelength of the color, in other words, where it fits within the visible spectrum. Meanwhile, saturation defines how “pure” the color is (essentially, how narrow or wide is the waveband), and brightness defines the intensity or energy level of the color. This scheme, abbreviated HSB, is one of several similar (though not identical) color schemes used in machine vision.
ICCD – Intensified CCD, also Low Light Level CCD. The intensifier tube collects faint incoming photons and converts them to electrons, which are accelerated onto a scintillation plate. This is in turn coupled to a CCD sensor via fiber optics or a lens system. This can produce useful image quality even at nighttime under starlit or overcast skies (in night vision cameras or goggles, for example).
IEEE 1394 – A standard for serial transmission of digital data, which can be used for digital output cameras. Sony launched a family of industrial products based on this standard, but with limited success in the market. The major reason was that, at the time, IEEE 1394 still had virtually no acceptance in the PC market (it was not yet included on motherboards). In the summer of 2001, there was a brief period of increased activity and acceptance, and a number of camera manufacturers launched IEEE 1394 models. The standard was initially launched by Apple Computer under the name FireWire. While FireWire is still in use for many peripherals in the consumer market, FireWire cameras in the machine vision market have largely disappeared.
Image Compression – A process applied to an image to minimize its size in bytes for transmission or storage. Lossy compression sacrifices some image quality, ideally without degrading it below an acceptable threshold. Lossless compression temporarily "rewrites" (encodes) the image data to make it smaller, while retaining the ability to restore it later to its original quality. See Xpress.
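The lossless case is easy to demonstrate with Python's standard zlib module (a minimal sketch using synthetic data, not real camera output):

```python
import zlib

# Lossless compression "rewrites" the data so the original can be
# restored exactly. Here raw 8-bit "image" data is compressed with
# zlib and then decompressed back to an identical copy.
raw = bytes([128] * 10000)         # a flat gray test pattern compresses well
packed = zlib.compress(raw)
restored = zlib.decompress(packed)
assert restored == raw             # lossless: bit-for-bit identical
print(len(raw), len(packed))       # packed size is far smaller than raw
```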
Image Flip (Reverse XY) – A camera function that outputs an image by inverting it horizontally and/or vertically. On color models, the Bayer array is changed by the Image Flip function. For example, BayerRG becomes BayerGB (ReverseY = 1), BayerGR (ReverseX = 1), or BayerBG (ReverseX = 1 and ReverseY = 1).
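The pattern change can be verified on a single 2x2 Bayer tile (a minimal Python sketch; the tile and color labels are illustrative, not camera output):

```python
# A 2x2 tile of a BayerRG mosaic; flipping the image also flips which
# color each pixel position samples, changing the pattern name.
tile = [["R", "G"],
        ["G", "B"]]

flip_y = tile[::-1]                            # vertical flip (ReverseY = 1)
flip_x = [row[::-1] for row in tile]           # horizontal flip (ReverseX = 1)
flip_xy = [row[::-1] for row in tile[::-1]]    # both flips

print(flip_y[0])   # first row now starts G, B: BayerGB
print(flip_x[0])   # first row now starts G, R: BayerGR
print(flip_xy[0])  # first row now starts B, G: BayerBG
```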
Image Processing – Refers to using the images captured by CCD/CMOS cameras for the purpose of Automated Imaging. The image is analyzed by special software to provide a singular result. Typically, the result is of a Pass/Fail or Go/No-Go type (for example: correct size of an object, correct position of an object, correct color of an object, or correct number of objects).
Image Scaling – Changing the number of pixels within a digital image without changing the area the image depicts, i.e., applying a larger or smaller effective pixel pitch to the image. See Xscale.
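A minimal Python sketch of the idea, using simple nearest-neighbor resampling on a single row of hypothetical pixel values (real scaling functions typically use more sophisticated interpolation):

```python
# Nearest-neighbor scaling: resample a row of pixels to a new width
# without changing the field of view it represents.
def scale_row(row, new_width):
    old_width = len(row)
    return [row[i * old_width // new_width] for i in range(new_width)]

row = [0, 64, 128, 255]
print(scale_row(row, 8))  # upscale: each source pixel is repeated
print(scale_row(row, 2))  # downscale: some source pixels are skipped
```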
Infrared light – Covers any light with a wavelength starting at the upper edge of the visible spectrum (700 nm) and extending all the way to 1 mm (the lower edge of the microwave spectrum). Within the infrared band there are several sub-bands. These include near infrared (from 700 to 1000 nm), short wavelength or SWIR (1000 to 3000 nm), mid wavelength or MWIR (3000 to 8000 nm), long wavelength or LWIR (8000 to 15,000 nm), and far infrared (15,000 nm to 1 mm). Because infrared wavelengths are longer than those of visible light, they are able to pass through the surface of some substances, especially organic materials and certain types of paints and plastics. This allows near infrared and SWIR cameras in particular to be used to see non-visible defects and to see through smoke and certain types of packaging. LWIR cameras are known as thermal-imaging cameras because they are able to “see” thermal emissions from both living creatures and factory machines.
Interlaced Scan – The basis for historical broadcast TV. It involved capturing the image in two fields (odd and even lines at separate time intervals). The major advantage of interlaced scan was that it conserved video bandwidth, as the persistence of vision of the human eye merged the two fields back together. Many early industrial cameras used interlaced scan, but with the resulting drawback of image artifacts if the object was moving while being captured.
JPEG – (also jpg) A method of compressing an image to reduce file size. The standard was developed by a group called the Joint Photographic Experts Group, hence the abbreviation JPEG. The level of compression can be adjusted by the user to determine the proper trade-off between file size and loss of image quality.
Knee Function – Has some similarities to gamma correction in that it refers to a way to change the relationship between actual pixel well values and their corresponding output value. In this case a different function is applied to a specific portion of the I/O graph starting at a “knee point” where the slope of the graph is changed. Knee functions are commonly used to “compress” the bright parts of an image so that they do not saturate when attempting to brighten the darker areas of an image.
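A minimal Python sketch of a single-knee transfer function; the knee point and slope values are hypothetical, not taken from any particular camera:

```python
# Below the knee point, output tracks input one-to-one; above it, a
# reduced slope compresses bright values so they saturate more slowly.
def knee(value, knee_point=180, slope_above=0.3, max_out=255):
    if value <= knee_point:
        return value
    out = knee_point + (value - knee_point) * slope_above
    return min(round(out), max_out)

print(knee(100))  # below the knee: unchanged
print(knee(250))  # above the knee: compressed toward the knee point
```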
Lens Control (Birger Mount) – This camera feature allows a camera to be connected to a Birger Mount Adapter via RS-232C. Lens control commands sent via the CoaXPress interface to the camera can be transferred to the servo-equipped lens mount, thus enabling control of functions like focus and aperture.
Light Spectrum – A range of wavelengths within the electromagnetic spectrum which are perceived as “light” by humans or instruments. These include visible light, with wavelengths from 400 to 700 nm, infrared light (700 nm to 1 mm) and ultraviolet light (wavelengths from 10 nm to 400 nm). Wavelengths shorter than 10 nm are considered x-rays and wavelengths longer than 1 mm are considered microwaves.
Line Scan – Denotes a camera (or imager) architecture that gathers an image on a line-by-line basis (requires that either the object is moving or the camera is moving). An image of arbitrary size (number of lines) is captured into the memory of the host computer. This operation can be compared to a fax machine. The alternative to line scan is Area Scan.
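The line-by-line build-up can be sketched in a few lines of Python (purely illustrative; each "line" here is synthetic data standing in for one exposed sensor row):

```python
# Line scan assembly: capture one line at a time as the object moves,
# appending lines until an image of arbitrary height is built up in
# host memory, much like a fax machine.
def acquire(num_lines, width=8):
    image = []
    for y in range(num_lines):
        line = [y] * width        # stand-in for one exposed sensor line
        image.append(line)
    return image

img = acquire(100)
print(len(img))                   # 100 lines captured into "host memory"
```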
Lookup Table (LUT) – A user-programmable method of modifying the relationship between the values recorded by each pixel and those that are output in the image for viewing or processing (see also Gamma Correction). While “pre-set” gamma correction lets the user adjust this input-output relationship using one of several pre-defined curves, a Lookup Table lets the user define a custom mapping of input values to output values. This is done by selecting an “index” and assigning it a “value.” For example, Index 0 would typically represent a pixel with an exposure value of 0 – a black pixel. But by assigning a value of 8 to Index 0 in the Lookup Table, any pixel with the value of 0 would be “boosted” to an output value of 8. By repeating this process across all “Indexes,” the user can define many different custom ways to boost or reduce the intensity of various pixel values within the image. The number of Indexes available is also referred to as “points,” so a “256-point Lookup Table” has 256 indexes that can be mapped to an adjusted output value. The number of values to which each index can be mapped is often different than the number of indexes. For example, 256 index points could each be mapped to a value between 0 and 4095. The Lookup Table function, in this case, would handle the task of calculating the proper input and output values based on whether the camera was operating at 8-bits, 10-bits, 12-bits, etc.
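A minimal Python sketch of applying a 256-point Lookup Table; the "+8 boost" mapping follows the example in the definition, with outputs capped at 255 for an 8-bit image:

```python
# Build a 256-point lookup table that boosts dark values: every input
# index maps to an output value, here input + 8, capped at 255.
lut = [min(i + 8, 255) for i in range(256)]

# Applying the LUT is a simple per-pixel indexing operation.
pixels = [0, 100, 250]
adjusted = [lut[p] for p in pixels]
print(adjusted)  # [8, 108, 255]
```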
Megapixel – Classifies cameras that have a resolution of 1 million pixels or more. Most machine vision cameras today are megapixel cameras, though some camera manufacturers still produce sub-megapixel cameras. JAI’s highest resolution camera is currently the SP-45000, which has a resolution of 45 million pixels.
M-52 mount – A large format screw-type lens mount designed to accommodate cameras with very large area scan or very long line-scan imagers.
Mini-Camera-Link – A part of the Camera Link standard that specifies smaller connectors than the original Camera Link standard. Except for the size of the camera and cable connectors, Mini Camera Link adheres to all other Camera Link electrical and physical specifications. Thus, it is possible to connect a camera with a Mini Camera Link connector to a frame grabber with a Camera Link connector, provided the user has a cable with the proper connectors on each end.
Multi-imager – A term for any camera that has more than one CCD or CMOS sensor inside. In most cases, this requires the use of a prism to split the light and direct it to the multiple imagers. However, in some line scan cameras, multiple linear sensors may be placed side-by-side without the use of a prism block. These dual-line, tri-linear, and quad-linear arrangements create both timing challenges and parallax issues, depending on the application.
Multi-ROI – Refers to a camera's ability to select several smaller scanning areas within the full sensor area from a single exposure. By skipping areas that are not specified as regions of interest when scanning a frame, the ROI function can output the specified regions in a combined state or as individual frames. For No-Overlap Multi ROI, the scanning areas cannot be overlapped. For Overlap Multi ROI, the scanning areas can be overlapped.
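A minimal Python sketch of non-overlapping Multi-ROI readout on a synthetic 10x10 frame (the coordinates and region sizes are hypothetical):

```python
# Extract two non-overlapping regions of interest from one "frame"
# (a list of rows) and stack them into a combined output, mimicking
# No-Overlap Multi ROI readout.
frame = [[y * 10 + x for x in range(10)] for y in range(10)]

def read_roi(frame, x, y, w, h):
    return [row[x:x + w] for row in frame[y:y + h]]

roi_a = read_roi(frame, 0, 0, 4, 2)   # top-left region
roi_b = read_roi(frame, 6, 8, 4, 2)   # bottom-right region
combined = roi_a + roi_b              # regions output in a combined state
print(len(combined))                  # 4 rows instead of the full 10
```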
Near Infrared Light (NIR) – The lowest band in the infrared light spectrum (see Infrared light) extending from 700 nm (the edge of the visible spectrum) to around 1000 nm. The longer wavelengths, though not visible to the naked eye, are able to penetrate through certain inks and plastics and penetrate below the surface of organic material like fruits and vegetables. By using CCDs or CMOS imagers that are sensitive to NIR light, cameras can be made to capture monochrome images showing various subsurface defects and hidden objects.
NTSC Standard – Similar to the EIA (RS-170) standard, except it refers to the analog color video imaging format in North America and other parts of the world. Basic characteristics are: interlaced color video, 30 frames per second (60 fields per second), standard resolution of 768 pixels by 494 lines.