
Glossary

2-CCD HDR – A method of capturing high dynamic range (HDR) images using a beam-splitter prism to simultaneously send the identical high contrast scene to two precisely aligned CCDs. By individually adjusting the exposure settings of the two CCDs, one imager can be set to properly expose the darker portions of the scene while the other can properly capture the brighter areas of the scene. An image processing algorithm, either in the camera or on an external computer, can then “fuse” these two images together to extend the dynamic range of the image beyond that of a single imager.
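
As a rough illustration of this fusion step, the following Python sketch blends two aligned 8-bit monochrome exposures into a single higher-dynamic-range frame. The function name, the 250-count saturation threshold, and the simple linear weighting are illustrative assumptions only, not JAI's actual algorithm.

    import numpy as np

    def fuse_hdr(short_exp, long_exp, exposure_ratio):
        """Fuse two aligned 8-bit frames of the same scene into one
        higher-dynamic-range image (returned as floating point)."""
        short_f = short_exp.astype(np.float64)
        long_f = long_exp.astype(np.float64)
        # Trust the long exposure except where it approaches saturation
        # (assumed here to be around 250 of 255 counts).
        w = np.clip((250.0 - long_f) / 250.0, 0.0, 1.0)
        # Scale the short exposure by the exposure ratio so the two
        # frames are radiometrically comparable before blending.
        return w * long_f + (1.0 - w) * short_f * exposure_ratio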

2-CCD multi-imager – A camera containing two CCDs affixed to a beam-splitter prism and precisely aligned to a common optical plane such that the same image is simultaneously passed to both imagers. By varying the CCDs and the filter coatings on the prism, a 2-CCD camera can be designed for monochrome HDR, color HDR, low-noise double-speed operation, or a variety of multi-spectral configurations, such as simultaneous color images and near-infrared imaging of the same scene.

3-CCD / 3-CMOS – Describes color CCD and CMOS cameras which have separate sensors for the Red, Green and Blue color bands. This is the typical construction of broadcast cameras, and this technology has been adopted for certain industrial and medical applications. In 4-CCD cameras, an extra chip has been added to simultaneously detect the near infrared light spectrum. The major advantage of this architecture is that the camera has full resolution in all 3 color bands.

A

Active pixels – The pixels on a CCD or CMOS imager which contain image information when read out of the camera. This is typically less than the total number of pixels on an imager, as pixels around the edges of the sensor may be used as optical black (OB) pixels – used to establish black levels or to help with color interpolation – or may not be read out at all. The term “effective pixels” includes the active pixels plus the optical black pixels, i.e., all pixels that can be read out of the sensor, which may still be less than the total number of pixels on the chip. Please note that these terms are not always used consistently, especially in the consumer camera world, where “effective pixels” is often used in place of “active pixels.”

Analog Camera – Provides output in analog form, typically, but not necessarily, according to a video standard (CCIR / PAL for Europe and EIA/NTSC for USA and Japan).

Applications (examples)

Cotton
Industry where JAI targets OEMs or integrators that make equipment to inspect and separate cotton for foreign materials.

Food industry
Target customer group that includes OEMs for inspection and sorting of food products by grade, color, size, or other characteristics, and for removal of foreign material.

Life science industry
Industry focused on equipment and processes to study and examine living organisms. Life Science encompasses a wide array of fields including, but not limited to, microbiology, biotechnology, medical imaging, pathology, genomics and optometry.

PCB inspection
Automated imaging of printed circuit boards or electronic subsystems to determine proper component placement, identify defects, and evaluate overall quality.

Recycling
Industry where JAI targets OEMs or integrators that make equipment to identify and separate recyclable materials.

Area Scan - Denotes a camera (or imager) architecture in which images are captured in a square or rectangular form in a single cycle (similar to film inside a normal camera). This image is then read out in a single frame with its resolution being a function of its height and width in pixels. The opposite of area scan is Line Scan.

Automated Imaging, A.I. – A term covering all uses of cameras in industrial applications where image processing (using in-camera or external computer algorithms) is involved. Machine Vision and Factory Automation are subcategories of A.I.

Auto-iris lens – Cameras operating in outdoor environments are faced with varying light conditions. When light levels change, the images captured by the camera will be either too bright or too dark. An auto-iris lens provides a solution to this problem. These lenses have an electric motor-driven iris which is opened or closed according to signals fed to it from the camera. A camera equipped with an auto-iris lens can produce a video signal of constant brightness by opening or closing the iris as the light level changes.

B

Binning – A process that combines the signal values from two or more adjacent pixels to create a single “virtual” pixel having a higher signal level. The result is an image with lower pixel resolution (pixel detail) but with higher sensitivity. Common binning schemes include combining every two adjacent pixels in each horizontal line (horizontal binning), combining every two adjacent pixels in each vertical column (vertical binning), or combining each group of four pixels – two horizontal and two vertical (2 x 2 binning) – to create an image with four times greater sensitivity but ¼ the resolution.
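
For readers who want to experiment, here is a minimal NumPy sketch of 2 x 2 binning performed in software; real cameras usually perform binning on the sensor or in camera firmware, so treat this purely as an illustration of the arithmetic.

    import numpy as np

    def bin_2x2(frame):
        """Sum each 2 x 2 block of pixels into one "virtual" pixel.
        The frame's height and width are assumed to be even."""
        h, w = frame.shape
        blocks = frame.astype(np.uint32).reshape(h // 2, 2, w // 2, 2)
        return blocks.sum(axis=(1, 3))  # 1/4 resolution, ~4x signal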

Blooming – The term used to describe when a set of pixels in an image is oversaturated by a bright spot (the sun, a light, a laser) and the charge contained in those pixels spills over into adjacent pixels causing the bright spot to “bloom” in a radiating pattern.

Brightness (Hue and Saturation) – Brightness is one aspect of color in the HSB color model. While hue defines what the mixture of the color is, and saturation defines how “pure” the color is, brightness defines the intensity or energy level of the light source. This scheme, abbreviated HSB, is one of several similar (though not identical) color schemes used in machine vision.

C

Camera Link - Camera Link is a serial communication protocol designed for computer vision applications. It is based on National Semiconductor's Channel Link interface and was designed to standardize the digital interface between industrial video products such as cameras, cables and frame grabbers. The standard is maintained and administered by the global machine vision organization, the Automated Imaging Association (AIA).

Cat5e and Cat6e cables – Standard categories of Ethernet cables. Both use four twisted pairs of copper wires; however, Cat6e features more stringent specifications for crosstalk and system noise, and supports higher signal frequencies – up to 250 MHz compared to 100 MHz for Cat5e. For this reason, Cat6e is strongly recommended for use with GigE cameras, especially if longer cable lengths are to be used.

CCD sensor - CCD stands for Charge Coupled Device. This term is generally used for the imaging sensor used in CCD cameras. A CCD sensor is divided into discrete picture elements, called pixels, in the form of a matrix. The sensor converts light into electric charge, proportional to the amount of incident light. The charge is then transformed to a voltage as each pixel is read out.

CCIR – Refers to an analog video and television standard published in 1982 by the International Telecommunication Union – Radiocommunication Sector. This became the dominant video standard used for monochrome video in Europe and several other regions around the world. It is characterized by interlaced video running at 25 frames per second (50 fields per second) with a standard screen resolution of 752 pixels by 582 lines. In parts of the world where the standard power frequency is 60 Hz, such as North America, a different standard is used. See EIA for a description.

Clock frequency – Refers to the frequency of a sine wave typically generated by an oscillator crystal that sets the pace for how fast operations can take place inside the camera. Most commonly, a “pixel clock” will guide the speed at which the internal electronics can read out pixel information from the imager (CCD or CMOS) and pass it to the camera interface. The higher the clock frequency – typically expressed in MHz (millions of cycles per second) – the faster data can be extracted from the sensor, enabling a faster frame rate. For some interfaces, a second clock frequency governs how fast the data can then be organized and sent out of the camera. This frequency (the Camera Link pixel clock, for example), may be different than the pixel clock used for the imager.
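
As a back-of-the-envelope illustration, the maximum readout-limited frame rate of a single-tap sensor can be estimated by dividing the pixel clock by the number of pixels per frame. The figures below are hypothetical, and a real sensor will be somewhat slower due to blanking and other per-line overhead.

    # Hypothetical example: 40 MHz pixel clock, 1920 x 1080 sensor
    pixel_clock_hz = 40_000_000
    width, height = 1920, 1080

    max_frame_rate = pixel_clock_hz / (width * height)
    print(f"~{max_frame_rate:.1f} frames per second")  # ~19.3 fps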

CMOS – Complementary Metal Oxide Semiconductor. Typically used for microprocessor or memory chips, but can also be used to make image sensors. In the past, image sensors using CMOS technology had major drawbacks in the areas of noise and shutter technology, making them less attractive than CCD sensors. Today, new generations of CMOS imagers have alleviated many of these issues, enabling them to serve as reasonable alternatives to CCDs for many applications.

C-Mount – A standard type of lens mount using screw threads to attach the lens securely to the camera, even in high-vibration factory environments. Because of the diameter of the C-mount opening, these lenses typically cannot be used on imagers with a format larger than 4/3” in diameter.

CoaXPress interface – A relatively new point-to-point serial digital interface standard for machine vision cameras. CoaXPress uses traditional coaxial cables, similar to those used for older analog cameras, but adds a high bandwidth chipset capable of operating at up to 6.25 Gbps per cable (over 6X Gigabit Ethernet speeds). It supports cables in excess of 100 m in length without repeaters or hubs.

CS-Mount - Similar to the screw-in C-mount, CS-mount has been used extensively in the security industry where smaller cameras and imagers are common. Due to differences in flange back distance, adapters are available to enable C-mount lenses to be used on CS-mount cameras; however, the reverse is not possible. A CS-mount lens cannot be used on a C-mount camera.

D

Dichroic coating – A coating placed on the face of a prism or other piece of optical glass that allows specific wavelengths of light to pass through while reflecting the remaining wavelengths. Dichroic coatings are used in JAI’s multi-imager prism cameras to split light into red, green, and blue wavelengths for color imaging, and can be used to separate near-infrared light for multi-spectral imaging. They can also be customized for specific spectral analysis tasks.

Digital Camera – All CCD cameras are based on analog technology. The CCD sensor itself is an analog component. In a digital camera the video signal is converted to a digital signal, using an A/D converter (typically 8 or 10 bit), before it is transferred to the image acquisition card. The major advantage of digital cameras is that the A/D converter is very close to the sensor, which results in a signal that is less susceptible to noise. The disadvantage is that the cable between the camera and the system is more complex and cable length is limited.

DSNU – Dark Signal Non-Uniformity. This refers to variations in individual pixel behavior that can be seen or measured even in the absence of any illumination. In simple terms, it refers to how different pixels perceive “black” or the absence of light. Most of these “dark signal” variations are affected by temperature and integration time. Other variances are driven more by electronic issues (on-chip amplifiers and converters) and remain fairly constant under different thermal conditions. These “fixed” non-uniformities are typically considered part of an imager’s “Fixed Pattern Noise” (see Fixed Pattern Noise/FPN). Compensation for DSNU issues is typically made at the factory as part of the camera testing process.

DSP – Digital Signal Processor. Modern color CCD cameras incorporate a DSP for the enhancement and correction of images in real time. Parameters typically controlled include: Gain, Shutter, White Balance, Gamma, and Aperture. DSPs can also be used for edge detection/enhancement, defective pixel correction, color interpolation, and other tasks. The output of a DSP camera is typically analog video.

Dual tap – This typically refers to a divide-and-conquer method of reading information from a CCD whereby the CCD is divided into two regions – either left/right or top/bottom – and the pixels are read from both regions simultaneously. The frame rate of the CCD is effectively doubled, minus a little overhead, without resorting to overclocking, which increases noise. CMOS imagers, which have a more flexible readout architecture, may utilize many different taps to read out sections of the chip, producing high frame rates but also a phenomenon known as “fixed pattern noise.”

E

EIA interface – Also called RS-170, this refers to a standard for traditional monochrome television broadcasting in North America and other parts of the world where the typical power frequency is 60 Hz. The EIA standard calls for interlaced video at 30 frames per second (60 fields per second) with a standard screen resolution of 768 pixels by 494 lines. See CCIR for the European standard.

F

Field of view (FOV) – Describes the area that the camera and lens are looking at. In machine vision inspection applications, this is typically expressed as a size measurement (e.g., 16 cm wide by 9 cm high). In traffic or surveillance applications, it can also be expressed in degrees (e.g., a 40-degree horizontal FOV).
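
Where the working geometry is known, the angular FOV can be estimated from the sensor size and lens focal length using the standard thin-lens relation; the sketch below uses assumed example values (a roughly 7.2 mm wide sensor and a 12 mm lens).

    import math

    def horizontal_fov_degrees(sensor_width_mm, focal_length_mm):
        """Angular horizontal field of view (thin-lens approximation,
        object at infinity): FOV = 2 * atan(sensor width / 2f)."""
        return 2 * math.degrees(
            math.atan(sensor_width_mm / (2 * focal_length_mm)))

    print(f"{horizontal_fov_degrees(7.2, 12):.1f} degrees")  # ~33.4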

FireWire – See IEEE 1394

Fixed Pattern Noise (FPN) – A non-random type of visible and/or measurable image “noise” that results from electrical signal differences, or “offsets,” that are not related to the amount of light striking the imager. This is most commonly seen in CMOS imagers where each pixel typically has its own amplifier and, in order to increase readout speed, “strips” of pixels are read out simultaneously through multiple amplifiers. The use of many different amplifiers, each with slight variations in electrical characteristics, can result in a “pattern” of slightly lighter and darker areas in the image. This is typically perceived as a vertically-oriented pattern that overlays the image. Because CCDs shift all pixels one row at a time through the same readout register, they are virtually immune to Fixed Pattern Noise, except in the case of multi-tap output where careful “tap balancing” is required to avoid a similar issue. FPN is considered a type of Dark-Signal Non-Uniformity (see DSNU) and can be compensated by measuring and mapping the pattern of amplifier differences and applying an image processing algorithm to adjust for these variances. This function is typically built into the camera and is not adjustable by the camera user.

Flat-field correction – An in-camera technique that corrects for slight pixel-to-pixel sensitivity variations across an image sensor. Essentially, this calibration technique makes small adjustments in the gain applied to each pixel such that when the camera is pointed at a smooth, white card which has been evenly lit at less than 100% saturation, all pixels will have the same pixel value (see PRNU).
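
A minimal software sketch of the idea, assuming 8-bit data and a stored calibration frame captured from the evenly lit white card (real cameras apply the correction internally with per-pixel gain tables):

    import numpy as np

    def flat_field_correct(frame, flat_frame):
        """Apply per-pixel gain so the stored calibration frame would
        come out perfectly uniform. Assumes the calibration frame
        contains no zero-valued pixels."""
        flat = flat_frame.astype(np.float64)
        gain = flat.mean() / flat        # low-reading pixels get more gain
        corrected = frame.astype(np.float64) * gain
        return np.clip(corrected, 0, 255).astype(np.uint8)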

Four tap – Also called “quad-tap,” this is a divide-and-conquer method of reading information from a CCD whereby the CCD is divided into four regions and the pixels are read from all four regions simultaneously. The frame rate of the CCD is effectively increased by a factor of four, minus a little overhead, without resorting to overclocking, which increases noise. See also dual tap.

Frame Grabber (also sometimes called Acquisition Board) – A board inserted into a PC for the function of acquiring images from a camera directly into the memory of the PC, where the image processing takes place. Certain frame grabbers also have on-board processors for doing image processing independently of the host computer.

Frame rate – The rate at which an area scan camera can capture and read out an image. This is usually expressed in “frames per second,” with the frame rates of typical machine vision cameras ranging from a few frames per second up to more than 200. Frame rate can often be increased by using binning (though not always), or by using partial scanning or region of interest (ROI), whereby only a portion of the active pixels is read out of the camera during each frame period.

G

Gain – An amplification of the signal collected by the pixels in the CCD or CMOS imager. Applying gain is like “turning up the brightness knob” on the image. However, gain also increases the noise in the image, which may make it unusable for some types of machine vision inspection or measurement tasks. In some cases, “negative gain” can be applied to “turn down” the brightness of an image, though usually this is done with the shutter or the lens iris.

Gamma correction – Adjusts the relationship between the values recorded by each pixel and those that are output in the image for viewing or processing. In a strictly linear relationship (gamma=1.0), a half-filled pixel well is output in 8-bit mode at a pixel value of 127 or 128 (half of the full value of 255). But gamma correction uses a nonlinear function to map well values to a different curve of pixel values. Sometimes this is done to mimic the responsiveness of a computer monitor or the human eye, which prefers a brighter set of gray tones (gamma = 0.45). Other times, it can be done to correct for high or low contrast within the image (see also Look-up Table).
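
In software terms, gamma correction is a simple nonlinear remapping of pixel values; the sketch below assumes 8-bit data and uses the commonly cited display-oriented value of 0.45.

    import numpy as np

    def apply_gamma(frame, gamma=0.45):
        """Map 8-bit values through a gamma curve. gamma < 1 brightens
        the mid-tones; gamma = 1.0 leaves the response linear."""
        normalized = frame.astype(np.float64) / 255.0
        return np.rint(255.0 * normalized ** gamma).astype(np.uint8)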

General Imaging, G.I. – A term collectively describing applications where typically no (or only limited) image processing is involved. These typically involve capturing an image and displaying it on a monitor, or recording images for later analysis. Surveillance is part of G.I., as are surgical viewing applications.

GenICam - GenICam is a universal configuration interface across a wide range of standard interfaces, such as GigE Vision, Camera Link and IEEE 1394-IIDC, regardless of the camera type and image format. It allows the user to easily identify the camera type, the features and functions available in the specific camera, and the range of parameters associated with each function. The core of the GenICam standard is the Descriptor File (in XML format) that resides in the camera, which maps the camera's internal registers to a standardized function list. GenICam is owned by the EMVA (European Machine Vision Association).

Gigabit Ethernet – A computer networking standard for transmitting digital information in packets across a computer network at a speed of 1 billion bits of information per second (1 Gbps).

GigE Vision – An interface standard introduced in 2006 that uses Gigabit Ethernet data transmission for outputting image data from industrial cameras. The GigE Vision standard is maintained and licensed by the Automated Imaging Association (AIA) and has become one of the most prevalent digital camera standards in the world. It utilizes standard Cat5e or Cat6e cables to transmit data up to 100 m at a rate of 1 Gbps (125 Mbytes/s). Because it is a networking standard, it also supports various multicasting and broadcast messaging capabilities that are not possible with point-to-point interfaces.

GPIO – Stands for general purpose input and output. This typically refers to a set of functions and signal registers which can be accessed and programmed by users to perform a variety of fundamental camera tasks, such as triggering the camera, setting up a pulse generator or counter, and designating which inputs and outputs will be used for various tasks.

Grey Scale – This is another term for black-and-white, or monochrome imaging. It refers to images where all pixel values represent the level of light intensity, without including any color information. Thus, all pixels are expressed as varying shades of grey. The number of possible grey values depends on how many bits are used to hold each pixel value. In an 8-bit image, 256 values are possible. For a 10-bit image, 1024 different shades are available, while a 12-bit image can support 4096 different grey values.

H

HDTV – Refers to the high-definition television standard developed for broadcasting. HDTV has several different levels, but it is most commonly used to mean a progressive-scan image with a resolution of 1920 pixels wide by 1080 lines high and a minimum frame rate of 30 frames per second. This is sometimes abbreviated as 1080p30 or just 1080p. More recently, both consumers and machine vision customers have shown a growing interest in 1080p HDTV running at 60 frames per second, which produces sharper images of moving objects.

Hue (saturation and brightness) - Hue is one aspect of color in the HSB color model. Hue defines what the mixture of the color is; in other words, how much red, green, and blue have been mixed together. Meanwhile, saturation defines how “pure” the color is (are there other colors mixed in?), and brightness defines the intensity or energy level of the light source. This scheme, abbreviated HSB, is one of several similar (though not identical) color schemes used in machine vision.

I

ICCD – Intensified CCD, also Low Light Level CCD. The intensifier tube collects faint photon information and converts it to electrons which are accelerated onto a scintillation plate. This is in turn coupled to a CCD sensor via fibre optics or a lens system. The result is useful image quality even under starlight or overcast skies.

IEEE 1394 – A standard for serial transmission of digital data which can be used for digital output cameras. Sony launched a family of industrial products based on this standard, but had little success with these products in the initial years. The major reason was that IEEE 1394 initially had virtually no acceptance in the market (it was not yet included on PC motherboards). As of summer 2001, there was increased activity and acceptance, and a number of camera manufacturers began launching IEEE 1394 models. One of the factors behind this change was the launch of a very affordable interface card by the Greek company Unibrain, which also launched an “industrial” IEEE 1394 camera. This standard was initially launched by Apple Computer under the name FireWire.

Image Processing – When using CCD cameras in Automated Imaging, the image is processed by special software to provide a single result. Typically the result should be of a Go / No-Go type (examples would be: correct size of an object, correct position of an object, correct color of an object, correct number of objects, etc.).

Infrared light – Covers any light with a wavelength starting at the upper edge of the visible spectrum (700 nm) and extending all the way to 1 mm (the lower edge of the microwave spectrum). Within the infrared band there are several sub-bands. These include near infrared (from 700 to 1400 nm), short wavelength or SWIR (1400 to 3000 nm), mid wavelength or MWIR (3000 to 8000 nm), long wavelength or LWIR (8000 to 15000 nm), and far infrared (15000 nm to 1 mm). Because infrared wavelengths are longer than those of visible light, they are able to pass through the surface of some substances, especially organic materials and certain types of paints and plastics. This allows near infrared and SWIR cameras in particular to be used to see non-visible defects and to see through smoke and certain types of packaging. LWIR cameras are known as thermal-imaging cameras, because they are able to “see” thermal emissions from both living creatures and factory machines.

Interlaced Scan – The basis for traditional broadcast TV is interlaced scan. It involves capturing the image in two fields (odd and even lines at separate time intervals). The major advantage of using interlaced scan is that it conserves video bandwidth, as the persistence of vision puts the two fields together again. Many industrial cameras still use interlaced scan, but with resulting drawbacks if the object is moving while being captured.

ITI - Imaging Technology Incorporated – US based frame grabber manufacturer, purchased by Coreco.

J

JPG (also JPEG) – A method of compressing an image to reduce file size. The standard was developed by the Joint Photographic Experts Group, hence the abbreviation JPEG. The level of compression can be adjusted by the user to determine the proper trade-off between file size and loss of image quality.

K

Knee function – Has some similarities to gamma correction in that it refers to a way to change the relationship between actual pixel well values and their corresponding output value. In this case a different function is applied to a specific portion of the I/O graph starting at a “knee point” where the slope of the graph is changed. Knee functions are commonly used to “compress” the bright parts of an image so that they do not saturate when attempting to brighten the darker areas of an image.
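
A minimal sketch of such a knee curve, assuming 8-bit data, a knee point of 180 counts, and a compression slope of 0.25 above it (all illustrative values):

    import numpy as np

    def apply_knee(frame, knee_point=180, slope=0.25):
        """Pass values below the knee point through unchanged and
        compress everything above it with a reduced slope."""
        x = frame.astype(np.float64)
        out = np.where(x <= knee_point,
                       x,
                       knee_point + (x - knee_point) * slope)
        return np.clip(out, 0, 255).astype(np.uint8)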

L

Light spectrum – A range of wavelengths within the electromagnetic spectrum which are perceived as “light” by humans or instruments. These include visible light, with wavelengths from 400 to 700 nm, infrared light (700 nm to 1 mm) and ultraviolet light (wavelengths from 10 nm to 400 nm). Wavelengths shorter than 10 nm are considered x-rays and wavelengths longer than 1 mm are considered microwaves.

Line Scan - Denotes a camera (or imager) architecture that gathers an image on a line-by-line basis (requires that either the object or the camera is moving). An image of arbitrary size (number of lines) is captured into the memory of the host computer. This operation can be compared to a fax machine. The opposite of line scan is Area Scan.

Lookup Table (LUT) – A user-programmable method of modifying the relationship between the values recorded by each pixel and those that are output in the image for viewing or processing (see also Gamma Correction). While “pre-set” gamma correction lets the user adjust this input-output relationship using one of several pre-defined curves, a Lookup Table lets the user define a custom mapping of input values to output values. This is done by selecting an “index” and assigning it a “value.” For example, Index 0 would typically represent a pixel with an exposure value of 0 – a black pixel. But by assigning a value of 8 to Index 0 in the Lookup Table, any pixel with the value of 0 would be “boosted” to an output value of 8. By repeating this process across all “Indexes,” the user can define many different custom ways to boost or reduce the intensity of various pixel values within the image. The number of Indexes available is also referred to as “points,” so a “256-point Lookup Table” has 256 indexes that can be mapped to an adjusted output value. The number of values to which each index can be mapped is often different than the number of indexes. For example, 256 index points could each be mapped to a value between 0 and 4095. The Lookup Table function, in this case, would handle the task of calculating the proper input and output values based on whether the camera was operating at 8-bits, 10-bits, 12-bits, etc.
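
The following sketch shows the mechanics for an 8-bit image, generalizing the “Index 0 maps to 8” example above to every index (a real camera LUT is loaded into hardware rather than applied in software):

    import numpy as np

    # 256-point lookup table: position = input value (index),
    # stored entry = output value. Here every input is boosted by 8.
    lut = np.clip(np.arange(256) + 8, 0, 255).astype(np.uint8)

    def apply_lut(frame):
        """Replace each 8-bit pixel value with its table entry."""
        return lut[frame]    # NumPy indexing performs the mapping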

M

Megapixel – Classifies cameras that have a resolution of 1 million pixels or more. The JAI CM-140GE and CM-200CL are examples of megapixel cameras. JAI’s highest resolution camera is currently the SP-20000, which has a resolution of 20 million pixels.

M-52 mount – A large format screw-type lens mount designed to accommodate cameras with very large area scan or very long line-scan imagers.

Mini-Camera-Link – A part of the Camera Link standard that specifies smaller connectors than the original Camera Link standard. Except for the size of the camera and cable connectors, Mini Camera Link adheres to all other Camera Link electrical and physical specifications. Thus, it is possible to connect a camera with a Mini Camera Link connector to a frame grabber with a Camera Link connector, provided the user has a cable with the proper connectors on each end.

Multi-imager – A term for any camera that has more than one CCD or CMOS sensor inside. In most cases, this requires the use of a prism to split the light and direct it to the multiple imagers. However, in some line scan cameras, multiple linear sensors may be placed side-by-side without the use of a prism block. These dual-line, tri-linear, and quad-linear arrangements create both timing challenges and parallax issues, depending on the application.

N

Near infrared light – The lowest band in the infrared light spectrum (see Infrared light) extending from 700 nm (the edge of the visible spectrum) to around 1400 nm. The longer wavelengths, though not visible to the naked eye, are able to penetrate through certain inks and plastics and penetrate below the surface of organic material like fruits and vegetables. By using CCDs or CMOS imagers that are sensitive to NIR light, cameras can be made to capture monochrome images showing various subsurface defects and hidden objects.

NTSC standard – Similar to the EIA (RS-170) standard, except it refers to the analog color video imaging format in North America and other parts of the world. Basic characteristics are: interlaced color video, 30 frames per second (60 fields per second), standard resolution of 768 pixels by 494 lines.

O

OEM – Original Equipment Manufacturer. A customer who builds machines for a specific task or specific market segment, in large volume, and purchases the JAI camera (and other components) from a distributor. The machines built by an OEM are usually produced for a period of 3 – 5 years and generally use standard products. (In Japan this term is sometimes used to describe a customer who buys a custom-designed, non-standard product.)

Optical black – The term given to pixels around the edges of a CCD or CMOS imager which are fully functional from an electrical perspective, but have a metal shielding over the photosensitive area. Because these pixels are shielded, they output only dark current and bias level, which can then be used as a black reference for the signals that are read out of the active pixel region. Because they do not appear as part of the main image, JAI does not include the optical black pixels when stating the resolution of a camera. However, some JAI cameras do allow the user to include optical black pixels in their full image readout (see also Active Pixels).

P

PAL standard - Similar to the CCIR standard, except it refers to the traditional analog color video imaging format used in Europe and other parts of the world. Basic characteristics are: interlaced color video, 25 frames per second (50 fields per second), standard resolution of 752 pixels by 582 lines.

Partial scan – A technique for reading out a designated subset of the full number of lines from an imager. Because the full image is not read out, the frame rate of the camera is typically increased. Partial scan may involve pre-defined subsets of the image, or may be fully programmable, letting the user select the starting line and the height of the partial image.

Pixels – Photosensitive sites that make up a CCD or CMOS imager. As a pixel is struck by photons, it produces a number of electrons which are stored as an electrical charge in a so-called “pixel well”. The more photons that strike a pixel, the more electrons are produced. After a specified exposure time, the charge from each pixel is read out as an analog signal value, which is then converted to a digital value corresponding to the intensity of light that struck that pixel. Together, all the pixel values create a digital image.

Pixel clock – See Clock frequency.

Power over Mini Camera Link – An extension of the original Camera Link standard that enables power to be supplied from a properly-equipped frame grabber to a Camera Link camera via the cable that is carrying data from the camera to the grabber. Power over Mini Camera Link specifies that mini-sized connectors are to be used, however the same approach can also be used with full-sized cables and connectors.

Prism – An optical element consisting of multiple polished glass pieces assembled in a way that refracts (bends) light as it passes through. By positioning the faces of the glass pieces in particular ways and applying various coatings to the surfaces, a prism can be used to split one scene into two identical images with half the intensity, or to send specific wavelengths (colors) of light to different sensors or imagers.

PRNU – Photo Response Non-Uniformity. This refers to pixel-specific differences in how an image sensor responds to equal amounts of light falling onto all pixels. In other words, when all pixels are exposed to the exact same shade of grey, they do not necessarily produce the exact same amount of signal. The small variations in response are called PRNU. This is a property of the sensor and is not related to lens properties which tend to distribute more light to the center of the imager and less towards the edges (see Shading Correction). A method called Flat Field Correction (FFC) is typically used with PRNU to make small adjustments in the gain applied to each pixel in order to “even out” the small pixel-to-pixel differences in response.

Progressive Scan – Captures an area scan image in a single line-by-line sequence, without dividing it up in odd / even lines. The major advantage of this is that sharp pictures of fast moving objects can be captured. The opposite of progressive scan is Interlaced Scan.

Q

Q.E. (Quantum Efficiency) - QE is a quantity defined for a photosensitive device, such as photographic film or a charge-coupled device (CCD), as the percentage of photons hitting the photoreactive surface that will produce an electron–hole pair. It is a key measure of the device's electrical sensitivity to light.

R

Remote Head Camera – Collective term for cameras where the CCD sensor is placed at a distance from the control circuit, connected via a cable with a length of around 2 – 5 meters. Examples are the CM-030GE-RH and CV-M53x series. Sometimes also referred to as Micro Head or Separate Head.

S

Sensitivity – A broad term that describes how readily a camera or imager responds to small amounts of illumination, whether at visible or non-visible wavelengths. Several factors impact sensitivity, including the size of the pixels, how well they collect light, how efficiently they convert light to electrical signals, and how much “noise” they generate during this process. For the output of a camera or imager to be useful, one must be able to distinguish between the image information (signal) and the noise component (see Signal-to-noise ratio). The lower the amount of illumination required to produce detectable image information, the more “sensitive” a camera or imager is said to be. Sensitivity specifications can be expressed in several ways, including the amount of “lux” (lumens per square meter) required to generate a meaningful signal; radiometric measurements (describing the “power” of the light in terms of watts per square meter); and “absolute sensitivity” measurements, which state the minimum number of photons that must strike a pixel before meaningful image information can be obtained.

Shading correction - This is a compensation method designed to produce a flat, equal response to light under the same conditions in which the calibration routine was run. It is generally thought of as a coarse correction to variations in image brightness which typically result from optics-related shading issues originating with the lens and/or prism. In a multi-imager color camera, shading correction may be used to equalize the response of the three color channels.

Shutter – In film cameras, the shutter is an opaque device that physically “opens” to allow light to strike the film and then “closes” when the exposure is complete. For a digital sensor, an electronic shutter achieves the same effect by transferring the charge collected in the pixels to a light-shielded buffer area (transfer register) at the end of the designated exposure time. If all pixels are transferred at the same time, the shutter is said to be “global.” If pixels are transferred in a sequential fashion, the shutter is said to be “rolling.”

Signal-to-noise ratio – CCD and CMOS cameras produce several forms of “noise,” that is, variations in pixel charges not generated by the light striking the imager. These can be caused by thermal conditions, electronics, or simply the fundamental physical laws of how photons are converted to electrons. Noise can appear as random graininess, horizontal or vertical lines that become visible in low signal areas of the image, blotchy gradients between darker and lighter regions, and other manifestations. The signal-to-noise ratio is a measure of how much a typical image is corrupted by these noise sources. It is generally expressed in decibels – the higher the number, the “cleaner” the image.
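
One common way to estimate this figure is the temporal method sketched below: capture a stack of frames of a static, evenly lit scene, then take the mean over time as the signal and the standard deviation as the noise. This is a simplified illustration, not a substitute for formal measurement standards such as EMVA 1288.

    import numpy as np

    def snr_db(frames):
        """Temporal SNR estimate from a stack of frames of a static,
        evenly lit scene, expressed in decibels."""
        stack = np.asarray(frames, dtype=np.float64)
        signal = stack.mean(axis=0).mean()   # average pixel level
        noise = stack.std(axis=0).mean()     # frame-to-frame variation
        return 20 * np.log10(signal / noise)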

Smear – Similar to blooming, smear is the result of one or more oversaturated pixels transferring some of their charge to adjacent pixels. In this case, however, the transfer occurs as the charges are being progressively moved down and out of the light-sensitive area, resulting in a vertical streak on the image. This is most commonly seen in CCD imagers. CMOS imagers use a different method for shifting the pixel charges out of the unshielded part of the imager and typically do not experience this issue. Thus, marketers often refer to CMOS technology as “smearless.”

SMT – Surface Mount Technology. Allows components to be mounted on a circuit board without the need for through-holes. Saves assembly time, as the process can be automated. JAI uses SMT for all products.

S/N ratio – See Signal-to-noise ratio

Solution - A combination of hardware and software that work in unison to solve a challenging customer problem.

Surveillance – See General Imaging

SVGA standard – One of several “standard” sensor resolutions defined and marketed by Sony. SVGA corresponds to a resolution of 776 x 582 pixels, or roughly 0.4 megapixels.

SXGA standard – One of several “standard” sensor resolutions defined and marketed by Sony. SXGA corresponds to a resolution of 1392 x 1040 pixels, or roughly 1.4 megapixels.

T

TIFF – Tagged Image File Format. This is a “raw” image format. Unlike JPEG, there is no potential loss of image information; however, there is also no compression. Therefore, TIFF images have much larger file sizes than JPEG images.

Tri-linear - A line scan camera with three independent line scan sensors arranged side-by-side. Each imager has a unique color filter (Red, Blue and Green) to produce color line scan images. Since they are side-by-side, the optical planes from the target to the imagers are slightly different. This can cause encoding challenges and parallax issues.

U

Ultraviolet light – The band containing the shortest wavelengths in the light spectrum. Ultraviolet ranges all the way from 10 nm (just above x-rays) to 400 nm, the bottom of the visible light range. Most ultraviolet imaging is performed at 300-400 nm, or in the so-called “solar blind region” of 230-290 nm. The short wavelengths of UV light allow for visualization of very small surface features, making it useful for inspecting microscopic details such as the surface of semiconductor chips.

USB – Universal Serial Bus. Used for connecting peripherals to a PC via serial communication. It is also widely used for connecting simple cameras (webcams) to PCs, and is being considered for higher-end cameras as USB speeds increase. USB eliminates the need for a frame grabber.

UXGA – One of several “standard” sensor resolutions defined and marketed by Sony. UXGA corresponds to a resolution of 1624 x 1236 pixels, or roughly 2 megapixels.

V

VGA - One of several “standard” sensor resolutions defined and marketed by Sony. VGA corresponds to a resolution of 640 x 480 pixels, or roughly 0.3 megapixels.

Vision Technology - Imaging products such as cameras, lighting, lenses, frame grabbers, cabling and software that are developed specifically for imaging applications.

W

White balance – The process of making sure that pixels with different color filters respond to the light source being used in the correct color proportions. Color cameras typically use Bayer filter arrays that place a mosaic of red, green, and blue filters over the imager’s pixels. However, because different light sources contain different mixes of these colors, the camera might perceive colors incorrectly. White balancing involves pointing the camera at a smooth white card or surface illuminated at a level below the saturation point, and then adding gain to pixels until all pixels have the same value as the color channel with the highest value (typically green). This calibration ensures that colors will now be rendered properly. White balancing can be done manually, in a one-push automatic fashion, or on a continuous automatic basis to account for changes in light sources.
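
A minimal sketch of the gain calculation, assuming the per-channel mean values have already been measured on an unsaturated white reference (the numbers in the example are hypothetical):

    def white_balance_gains(r_mean, g_mean, b_mean):
        """Per-channel gains that raise every channel to the level of
        the strongest one (typically green)."""
        reference = max(r_mean, g_mean, b_mean)
        return reference / r_mean, reference / g_mean, reference / b_mean

    # Example: under a warm light source, blue needs the most gain
    print(white_balance_gains(180.0, 200.0, 120.0))
    # -> (1.111..., 1.0, 1.666...)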

X

XGA - One of several “standard” sensor resolutions defined and marketed by Sony. XGA corresponds to a resolution of 1024 x 768 pixels, or roughly 0.8 megapixels.

Y

Z
