Choosing a Camera Technology to Work Best for Beam Profiling Applications

Back in 1997, Spiricon's founder and president, Dr. Carlos Roundy, presented a paper at the 4th International Workshop on Lasers and Optics Characterization, held in Munich, Germany. The paper was based on work performed at Spiricon during the mid-90s, when novel concepts for laser beam characterization were being developed. Earlier definitions had been rather simple and were usually driven by customers' preferences on how the beam should be measured.

Beam width and divergence were the two most frequently requested measurements, although others, such as Tophat, Gauss fit, Power in a Bucket, Peak Fluence, Ellipticity, Centroid, and Orientation, were asked for as well.

“Seeing” What the Laser is Doing

CCD and CID camera technologies came into use in the late 80s and early 90s, as they offered a viable method for recording the intensity profile of a laser beam in 2D and in real time. Most cameras offered variable exposure and external triggering and were therefore able to image both continuous wave (CW) and pulsed laser beam profiles. This new ability to "see" what the laser was doing became a widespread feature in the laser guru's toolkit. However, seeing alone was insufficient to meet customers' needs, so equal importance was given to measurement. The camera technologies gave rise to novel algorithms spanning a large number of the measurement areas discussed above, and manufacturers used different algorithms to make one or more of these measurements. Although specific beam width measurements were often requested, the algorithms used remained particular to each individual investigator and manufacturer.

Camera Limitations

Limited signal to noise, and the difficulty of establishing a baseline, or stable zero, from which to determine the beam intensity profile, were the two major concerns for all types of cameras. These limitations posed different challenges depending on the approach used to measure beam width. The most popular beam width measurement methods are listed below:

  • A. 13.5% of peak, carried out in varied ways
  • B. 86.5% of total power/energy beam widths
  • C. 1/e², also called second moment or D4σ (Dee-Four-Sigma)
  • D. Full Width Half Max, also called 50% of peak
  • E. 86.5% Power in a Bucket beam diameter, also known as encircled power

Methods A, B, and E will produce the same result as C, provided the percentages are set as listed above and the laser beam is a pure TEM00 mode. If higher-order modes are present in the beam, methods A, B, and E no longer agree with C, and the size of the error depends on the mix of higher-order modes.
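As a concrete check of this statement, the short sketch below (Python/NumPy, added here for illustration and not part of the original article) builds a noise-free TEM00 Gaussian of 1/e² radius w and verifies numerically that the 13.5%-of-peak width (method A), the D4σ second moment width (method C), and the 86.5% encircled-power diameter (method E) all come out at 2w. With higher-order mode content, these values would no longer coincide.

```python
# Numerical check (illustrative): for a noise-free TEM00 Gaussian of 1/e^2
# radius w, methods A, C, and E all return the same diameter, 2w.
import numpy as np

w = 1.0                                    # 1/e^2 beam radius (arbitrary units)
x = np.linspace(-5 * w, 5 * w, 2001)
X, Y = np.meshgrid(x, x)
R2 = X**2 + Y**2
I = np.exp(-2.0 * R2 / w**2)               # ideal TEM00 intensity profile

# Method C: second moment (D4sigma) width along x
total = I.sum()
xc = (I * X).sum() / total
d4sigma_x = 4.0 * np.sqrt((I * (X - xc)**2).sum() / total)

# Method A: width of the central x-profile at 13.5% of peak
profile = I[len(x) // 2, :]
above = x[profile >= 0.135 * profile.max()]
d_135 = above.max() - above.min()

# Method E: diameter of the circle enclosing 86.5% of the total power
r = np.sqrt(R2).ravel()
order = np.argsort(r)
cum = np.cumsum(I.ravel()[order]) / total
d_865 = 2.0 * r[order][np.searchsorted(cum, 0.865)]

print(d4sigma_x, d_135, d_865)             # all close to 2 * w = 2.0
```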

The Search for Improved Accuracy

In the past, customers would tolerate sizable errors in beam width measurements; however, the need for better accuracy and repeatability soon became more pressing, and the ability to make accurate second moment beam width measurements became critical. Because the second moment beam width is the width that satisfies laser beam propagation theory, it became the industry standard over time. It is, however, a difficult measurement to make with noisy cameras and unstable zero baselines.

Beam width methods A, B, D, and E all use a clip level technique, in which camera pixels above a given value are included in the beam width measurement and pixels below the clip level are ignored. Because the clip level sits above the noise floor, these measurements are relatively immune to noise, provided that a good baseline can be determined and the beam spot fills a reasonable portion of the camera imager.
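A clip-level width measurement can be sketched in a few lines. The snippet below (illustrative Python/NumPy only; it is not any manufacturer's actual algorithm) keeps pixels at or above a chosen fraction of the peak and takes the width from their extent along the x axis, which makes the dependence on a clean baseline easy to see.

```python
# Generic clip-level width sketch (illustrative, not a vendor algorithm):
# ignore pixels below the clip level and measure the extent of the rest.
import numpy as np

def clip_level_width(image: np.ndarray, clip_fraction: float = 0.135) -> float:
    """Width along x (in pixels) of all pixels at or above clip_fraction * peak."""
    clip = clip_fraction * image.max()
    rows, cols = np.nonzero(image >= clip)     # pixels that survive the clip
    if cols.size == 0:
        return 0.0
    return float(cols.max() - cols.min())

# Any residual baseline offset shifts both the measured peak and which pixels
# clear the clip level, which is why a stable zero matters even here.
```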

This is not the case when second moment (D4σ) beam widths are measured. The second moment calculation weights each pixel's intensity by the square of its distance from the beam's center. Hence, the farther a pixel lies from the center, the more heavily it counts; in theory this weighting grows without bound, and in practice it is limited only by the imager's area. If the laser beam is small with respect to the imager, pixels located at great distances from the beam can significantly distort the calculation.
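In formula form, the D4σ width along x is 4·sqrt( Σ I(x, y)·(x − x̄)² / Σ I(x, y) ), with x̄ the intensity-weighted centroid. The sketch below (illustrative assumptions: a small Gaussian spot on a 1000 × 1000 pixel frame and a 0.5-count residual offset) shows how strongly distant pixels can inflate the result when the baseline is not perfectly removed.

```python
# D4sigma along x, and a demonstration of its sensitivity to a residual
# baseline offset on a large imager (numbers are illustrative assumptions).
import numpy as np

def d4sigma_x(image: np.ndarray) -> float:
    total = image.sum()
    x = np.arange(image.shape[1])                  # column coordinates
    xc = (image * x).sum() / total                 # intensity-weighted centroid
    var = (image * (x - xc)**2).sum() / total      # second moment about the centroid
    return 4.0 * np.sqrt(var)

yy, xx = np.mgrid[0:1000, 0:1000]
beam = 200.0 * np.exp(-2.0 * ((xx - 500)**2 + (yy - 500)**2) / 20.0**2)

print(d4sigma_x(beam))        # ~40 pixels, i.e. 2 x the 1/e^2 radius of 20 px
print(d4sigma_x(beam + 0.5))  # far larger: the tiny offset, weighted by
                              # distance squared, dominates the calculation
```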

A range of constraints is used to keep the second moment result from running away. Part of the solution is to restrict the area over which the calculation is carried out. Such software apertures are essential for small beams on a large imager; rules were needed to establish the size of the aperture, and algorithms had to be characterized to generate apertures for beams of different sizes and shapes.
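One way such a software aperture can be implemented, sketched below with assumed parameters (an aperture of roughly three beam widths and a 1% convergence tolerance; this illustrates the general idea, not Spiricon's specific algorithm), is to recompute the centroid and D4σ inside a re-centered aperture until the width stops changing.

```python
# Illustrative auto-aperture iteration (assumed factor and tolerance):
# recompute centroid and D4sigma inside a rectangular aperture of roughly
# three beam widths until the x width stabilizes.
import numpy as np

def d4sigma_with_aperture(image, factor=3.0, iterations=10, tol=0.01):
    y = np.arange(image.shape[0])[:, None]
    x = np.arange(image.shape[1])[None, :]
    mask = np.ones(image.shape, dtype=bool)           # start with the full frame
    prev_wx = None
    for _ in range(iterations):
        roi = np.where(mask, image, 0.0)
        total = roi.sum()
        xc, yc = (roi * x).sum() / total, (roi * y).sum() / total
        wx = 4.0 * np.sqrt((roi * (x - xc)**2).sum() / total)
        wy = 4.0 * np.sqrt((roi * (y - yc)**2).sum() / total)
        if prev_wx is not None and abs(wx - prev_wx) < tol * prev_wx:
            break                                     # aperture has converged
        prev_wx = wx
        mask = (np.abs(x - xc) < factor * wx / 2) & (np.abs(y - yc) < factor * wy / 2)
    return wx, wy
```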

Establishing a Baseline

Establishing the baseline was an even more challenging prospect. Manufacturers of beam analyzers already knew how to use clip levels, and in early attempts a clip level was set near or above the noise floor to keep noise in the beam wings out of the calculation. A major drawback of this method was that the clip level had to be retuned frequently to obtain an accurate result: when the beam intensity varied, the measured beam width changed unless the clip level was adjusted, and the width could also change as the aperture grew or shrank, as described above. A given clip level setting tended to work properly only for the particular beam profile being measured. The quest for the magic clip level eventually reached a dead end, although the technique remained the working solution in other beam profilers.
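A short illustration of that sensitivity (the numbers are assumed, not from the article): with a clip level fixed in counts near the noise floor, the same Gaussian beam measured at two different intensities yields two different widths, so the clip level would have to be retuned whenever the power changed.

```python
# With a fixed absolute clip level, the measured extent of a Gaussian profile
# changes when only the beam's intensity changes (illustrative numbers).
import numpy as np

x = np.linspace(-3.0, 3.0, 601)             # position in units of the 1/e^2 radius
clip = 5.0                                  # fixed clip level, in counts

for peak in (100.0, 200.0):                 # same beam shape, two intensity levels
    profile = peak * np.exp(-2.0 * x**2)
    above = x[profile >= clip]
    print(peak, above.max() - above.min())  # the measured width differs
```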

During the 90s, Spiricon worked to find an appropriate solution to the aperture and baseline issues. In those days, there was no ISO standard. Analog cameras required external digitizers, which were set up for 640 x 480 pixels at 8 bits per pixel. The pixels were 13 to 20 µm in size, and the camera imagers were mostly CCD interlaced frame transfer architectures. Cameras of this type were used for the modeling in the first version of Dr. Roundy's paper.

New Camera Technologies

Present-day CCD cameras contain several million pixels rather than a couple of hundred thousand. While the pixels are relatively small, in the single-digit µm range, the larger imagers pose new challenges: the probability and number of bad pixels, and of pixels that twinkle, have increased, and output image shading can be greater and more frequent. Larger imagers are now available in sizes up to that of 35 mm format film. Over the years, a number of CMOS imagers were also studied; the earlier devices were mostly designed for low-end commercial applications. They were rolling shutter designs, which made them unsuitable for pulsed lasers, and were characterized by poor pixel-to-pixel uniformity, low signal to noise ratios, unstable background black levels, and poor linearity of response. Nevertheless, these early CMOS imagers had one positive feature: they bloomed less, or not at all, when used with YAG lasers. They were also more cost-effective than CCD imagers. The expectation was that an improved yet affordable CMOS imager would eventually be developed.

The current generation of CMOS imagers has effectively resolved most of the above-mentioned issues. These devices can be triggered and are therefore suitable for pulsed lasers, and their linearity and temporal signal to noise now approach those of CCDs. However, blooming at YAG wavelengths, while different in character from that of CCDs, is not completely absent; it may be that the blooming was always present but is now easier to detect, or that the new buffered designs have aggravated it. One remaining issue is baseline instability, which makes the baseline difficult to subtract out; another is pixel-to-pixel instability. As these devices have advanced, their cost has also increased.

Spiricon designed its first pyroelectric-based camera, the Pyrocam, in the early 90s. The device was based on the company's 1D and 2D pyroelectric imagers and paved the way for camera-quality imaging of FIR, NIR, and UV lasers. The newest generation, the Pyrocam IV (Figure 1), includes imagers up to 25 mm square with 320 x 320 pixels and uses a Gig-E interface.


Figure 1. Ophir’s newest pyroelectric camera, PyrocamIV.

Many imagers based on InGaAs focal plane arrays were developed in the intervening years, but all had uniformity issues. To resolve these problems, camera producers integrated advanced correction algorithms into the cameras or their drivers. For each camera, a non-uniformity correction (NUC) file is delivered separately or integrated in its firmware; the NUC flattens the dark field, performs bad pixel correction, and balances the uniformity. These cameras deliver reliable performance and keep temporal signal to noise in the 60 dB range. Moreover, they are sensitive in the NIR and thus handle Telecom and YAG wavelengths without any major blooming. Their major drawback is low resolution, with pixel sizes in the 30 µm range on lower resolution models and the 20 µm range on higher resolution models. They work best when thermally stabilized, and they are quite expensive, with cost scaling with resolution. Some imagers also show a non-linear response that differs with exposure time, so care must be taken when choosing these cameras for critical measurement applications.
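As a rough sketch of what such a correction does in software (the structure of a real vendor NUC file is not described in the article, and the arrays used below are assumptions for illustration), a NUC application step might subtract a stored dark frame, divide by a per-pixel gain map, and patch flagged bad pixels from their neighborhood:

```python
# Illustrative NUC application (assumed inputs, not any vendor's file format):
# flatten the dark field, balance per-pixel gain, and patch bad pixels.
import numpy as np
from scipy.ndimage import median_filter

def apply_nuc(raw: np.ndarray, dark: np.ndarray, gain: np.ndarray,
              bad_pixel_mask: np.ndarray) -> np.ndarray:
    corrected = (raw.astype(np.float64) - dark) / gain    # dark field and gain correction
    smoothed = median_filter(corrected, size=3)           # local neighborhood estimate
    corrected[bad_pixel_mask] = smoothed[bad_pixel_mask]  # replace bad/twinkling pixels
    return corrected
```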

Contemporary cameras no longer require a frame grabber. Later camera generations incorporated 1394A and then 1394B (FireWire) interfaces, which were eventually challenged by USB2, USB3, and then gigabit Ethernet. One important point remains: high quality cameras are what make good measurements possible. Stringent specifications and testing methods developed by Spiricon ensure that all cameras are assessed before they are integrated into the beam analyzers.

Ultracal and ISO

During the mid 90s, when Spiricon developed its baseline correction and auto-aperture techniques, ISO standards for laser beam measurements were not yet in place. Through extensive laser beam modeling and controlled analysis, Spiricon developed and patented the technique known as Ultracal™, and the company received two US patents covering these methods.

Conclusion

A decade after the patent was granted, the Ultracal baseline correction method was recognized by ISO 11146-3 (first edition, 2004-02-01) and specified in section 3.2. Further ISO methods and definitions were developed, and a number of techniques for making beam measurements soon became standardized. Spiricon was already making ISO-compliant measurements in some cases and implemented new techniques in others, giving its customers more options.

This information has been sourced, reviewed and adapted from materials provided by Ophir Photonics Group.

For more information on this source, please visit Ophir Photonics Group.

