Non-equidistant scanning approach for millimetre-sized SPM measurements
Petr Klapetek^{1}, Miroslav Valtr^{1} and Petr Buršík^{2}
https://doi.org/10.1186/1556-276X-7-213
© Klapetek et al.; licensee Springer. 2012
Received: 30 September 2011
Accepted: 11 April 2012
Published: 11 April 2012
Abstract
Long-range scanning probe microscope (SPM) measurements are usually extremely time consuming, as many data points need to be collected and the speed of the microscope probe is limited. In this article, we present an adaptive measurement method for a large-area SPM. In contrast to the typically used line-by-line scanning with constant pixel spacing, we use an algorithm based on several levels of local refinement in order to minimize the amount of information that would be useless in the data processing phase. The data obtained from the measurement are in general formed by xyz data sets that are triangulated back with a desired local resolution. This enables storing more relevant information from a single measurement, as the data are interpolated and regularized in the data processing phase instead of during the measurement. In this article, we also discuss the influence of thermal drifts on the measured data and compare the presented algorithm to the standard matrix-based measuring approach.
Background
Many novel industrial components, e.g. optoelectronic devices, microchips, diffraction gratings or photonic crystals, are typically formed by highly integrated microstructures arranged over very large areas. The inspection and control of such components are problematic, as we need to detect small imperfections on large areas or to evaluate small differences between many semiconductor mask critical dimensions. Scanning probe microscopy techniques (e.g. atomic force microscopy) can be used for this task if sufficiently large areas can be scanned. The bottleneck of scanning probe microscopes has always been the scanning range; today, however, specialized equipment can be found in the literature and even on the market [1–5] featuring very small uncertainties (less than 1 nm) over very large volumes. However, even if the range of microscopes has grown, the maximum speed of the tip is still nearly the same. Large-scale measurements are therefore very slow.
In this article, we discuss one possible realization of a non-equidistant measurement method for a long-range scanning probe microscope and discuss its applications for different samples typical of the microelectronics and solar cell industries.
Methods
Experimental arrangement
For illustrative measurements, three different samples were used: a calibration grating of the kind usually used for calibrating commercial scanning probe microscopes, a microchip surface and a solar cell surface with a typical pyramidal structure used for light trapping. These represent typical measurands requiring large-scale measurements combined with high-resolution details. Contact mode measurements were performed using standard contact tips from Nanosensors (PPP-CONTR series, NANOSENSORS, Rue Jaquet-Droz 1, Case Postale 216, CH-2002 Neuchatel, Switzerland).
Adaptive measurement algorithm
There are numerous ways to measure a rectangular surface area. The most straightforward and most commonly used approach is to measure in a rectangular grid, mapping pixel to pixel and directly creating the image. Even if this is the most convenient scanning approach and the data can be processed very easily after measurement, it leads to a large loss of data already during the scanning process. As the microscope needs to maintain the feedback loop during the motion of the tip, the amount of data collected in a single profile is usually much larger than the number of pixels in the resulting image (e.g. by an order of magnitude or more). The data are then resampled to the requested number of pixels, losing the high-resolution information already acquired between them. A natural improvement over the 'standard' square scanning probe microscope (SPM) image would therefore be a square set of data with a rectangular (non-square) pixel, where the fast-axis spacing would be much smaller than that of the slow axis. Implementing such a scanning method could already dramatically improve the amount of information collected in a single scan (in the same time), even if the method is still quite a regular one.
At the other extreme, adaptive sampling could be realized by forming a completely random distribution of points in the xy plane. Based on some surface property, such as local roughness, we could measure the data as a completely irregular set of points to be triangulated afterwards. However, as the microscope in principle measures high-resolution data along a continuous path, this would be ineffective. It is therefore desirable to measure a set of profiles (not necessarily forming a Cartesian grid). Our adaptive algorithm therefore proceeds in the following steps:
1. Measure a net formed by rows and columns with coarse xy resolution and interpolate the data to the final resolution.
2. Measure and add an interleaved net of rows and columns, forming a data set with twice as fine a resolution, and interpolate the data to the final resolution.
3. Identify subsets between rows and columns where the interpolated data from the last two iterations differ by more than a requested z precision criterion.
4. Where the z precision criterion failed, measure a net of rows and columns with twice as fine a resolution on those rectangles. Note that, in order to save measurement time, this process needs to be optimized so that the movement between different refinement areas is minimized. An optimum path for the SPM probe is therefore planned by merging all the necessary movements (including movements between different areas) under the criterion of the shortest total distance. This is in principle a traveling salesman problem, and here it is solved using the nearest neighbor algorithm.
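The refinement test (step 3) and the path planning (step 4) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the uniform-grid comparison, the array names and the example coordinates are all assumptions.

```python
import numpy as np

def refine_mask(coarse, fine, z_tol):
    """Step 3: flag cells where the interpolations from the last two
    refinement levels differ by more than the requested z precision."""
    return np.abs(np.asarray(fine) - np.asarray(coarse)) > z_tol

def nearest_neighbor_order(points, start=0):
    """Step 4: greedy traveling-salesman heuristic that orders the flagged
    refinement areas so each move goes to the closest unvisited one."""
    points = np.asarray(points, dtype=float)
    unvisited = list(range(len(points)))
    order = [unvisited.pop(start)]
    while unvisited:
        # distances from the last visited centre to all remaining centres
        d = np.linalg.norm(points[unvisited] - points[order[-1]], axis=1)
        order.append(unvisited.pop(int(np.argmin(d))))
    return order

# Toy example: two interpolation levels differing in one region by 5 nm
coarse = np.zeros((4, 4))
fine = np.zeros((4, 4))
fine[1, 2] = 5.0
mask = refine_mask(coarse, fine, z_tol=1.0)  # True only at (1, 2)

# Order hypothetical refinement-area centres (in micrometres) for the probe
tour = nearest_neighbor_order([(0, 0), (10, 0), (0, 1), (10, 1)])
```

The greedy ordering does not guarantee the globally shortest path, but it is cheap to compute and, for the scattered refinement areas produced by the criterion, typically avoids the long back-and-forth moves of a naive raster order.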
Once the data are measured, they cannot be saved as a regular matrix, so we use a general set of xyz values for storing them. However, most algorithms available for SPM data processing expect a regular matrix, as do the vast majority of software packages for SPM data processing and analysis. The easiest way to overcome this is to regularize the data before analysis, now with a desired resolution that can locally be much higher than that of the coarse image. We can therefore perform zooms into the measured data, and where the data are densely sampled, we obtain high-resolution details. For interpolation purposes, the data are triangulated by a fast divide-and-conquer routine (similar to the one described in [7]) reaching an optimal worst-case complexity of O(n log n), where n is the number of points used for triangulation. Recursive divisions alternate between horizontal and vertical cuts. A Delaunay triangulation and a Voronoi diagram are created using this approach. With the Delaunay triangulation and Voronoi diagram computed, a number of interpolation methods can be used; at the moment, a simple planar interpolation of the triangulated data is utilized.
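A minimal sketch of the regularization step. Here SciPy's Qhull-based Delaunay triangulation with planar (barycentric) interpolation stands in for the divide-and-conquer routine described above; the sample coordinates and heights are illustrative.

```python
import numpy as np
from scipy.interpolate import LinearNDInterpolator  # Delaunay + planar interpolation

# Hypothetical scattered xyz measurements (xy in micrometres, z in nanometres)
xy = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.5, 0.5]])
z = np.array([0.0, 1.0, 1.0, 2.0, 1.0])  # consistent with the plane z = x + y

# Triangulate once, then evaluate on a regular grid of the requested resolution
interp = LinearNDInterpolator(xy, z)
gx, gy = np.meshgrid(np.linspace(0.0, 1.0, 5), np.linspace(0.0, 1.0, 5))
grid = interp(gx, gy)  # regular matrix usable by standard SPM software
```

Because the interpolator is built once from the full xyz set, the same triangulation can be re-evaluated at different grid densities, which is what makes the offline zooming described above possible.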
The presented algorithm was implemented using the libraries of the Gwyddion open source software for SPM data analysis (http://gwyddion.net). While the scanning routines can hardly be shared with the public (they are tied to the microscope hardware), we expect that the tested triangulation and regularization approach will be made available to Gwyddion users as part of our future work.
Results and discussion
The presented adaptive approach has three main benefits:
1. It reduces the amount of information that must be measured while still measuring with very high resolution, thus shortening the measurement time and minimizing probe wear while preserving the necessary resolution on critical surface structures.
2. It allows the user to perform further local refinements easily while the sample is still in the microscope, based not only on the algorithm criteria but also on the user's requirements.
3. It can measure the data automatically, allowing the user to perform zooms offline, e.g. in the data processing phase after the measurement. This is useful notably for automated image processing or inspection purposes.
Performance of the measurement algorithm
Accuracy      20 nm   3 nm
Grating       18%     34%
Microchip     42%     54%
Solar cell    41%     68%
It should be noted that the obtained data have different statistical properties than a typical matrix-based SPM image. There is no difference between the fast and slow axes, as both the x and y directions are used for the measurement. It therefore makes no sense to distinguish between directions when evaluating direct or statistical quantities. All parasitic effects, such as drift or noise, influence the data almost isotropically. This could be understood as a drawback in some sense; on the other hand, it can simplify the treatment of uncertainties when obtaining profiles of arbitrary orientation with respect to the main axes or when evaluating 2D statistical properties.
An important issue is the influence of drift on the measurement process. Drift, notably of thermal origin, can be observed in many SPM systems [8, 9]. In a typical matrix-based SPM image, drift is mainly seen in the slow scanning axis, leading to image distortion as seen in Figure 6B. Note that the coordinate system origin in this simulation is in the top left corner of the image. Based on the analysis of known surface structures or on repetitive scans, we can determine the drift rate, which is usually a time-dependent decaying function with a maximum right after the instrument start-up or a sample exchange [10]. If we use the scanning approach presented in this work, the data in the image are not measured successively, so a straightforward drift determination from the AFM image is not possible, as the drift influences interleaved values significantly (see Figure 6C). However, drift can still be evaluated. As each refinement level measures over the same area as the previous iteration, and even yields data at exactly the same points (at row/column crossings), we can determine the drift rate already during the refinement process for the areas being refined. We can do this using the following approach:
1. Create an interpolated image from one refinement level (using the standard procedure from the previous section).
2. Create an interpolated image from the next level, skipping the data measured in the previous level.
3. Use cross-correlation to determine the shift between the two data sets (in all three axes); this gives the x and y drift values.
4. Shift one data set according to the cross-correlation result and subtract the two data sets; this gives the z drift value.
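The lateral part of this cross-correlation step can be illustrated with an FFT-based correlation. This is a sketch under stated assumptions: the function name, the synthetic random surface and the pixel units are not taken from the authors' implementation.

```python
import numpy as np

def xy_drift(img_a, img_b):
    """Estimate the lateral shift between two interpolated images of the
    same area from the peak of their FFT-based cross-correlation."""
    a = img_a - img_a.mean()
    b = img_b - img_b.mean()
    corr = np.fft.ifft2(np.fft.fft2(a) * np.conj(np.fft.fft2(b))).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # wrap shifts larger than half the image size back to negative values
    return [p - s if p > s // 2 else p for p, s in zip(peak, corr.shape)]

# Synthetic check: shift a random surface by (3, 5) pixels and recover it
rng = np.random.default_rng(1)
surf = rng.normal(size=(64, 64))
shifted = np.roll(surf, (3, 5), axis=(0, 1))
dy, dx = xy_drift(shifted, surf)
```

The z drift value would then follow by shifting one image back by (dy, dx) and subtracting the two data sets, as in the last step above.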
As an illustration of the process, we have simulated a data measurement with a constant drift vector of (3, 3, 0.3) nanometres per second in Figure 6. We used the part of the microchip surface seen in Figure 6A, taking already measured data (without observable drift) and adding the drift during the simulated measurement (all movements were performed at the same velocity). Drift was evaluated from levels 1 and 2 of the refinement process, where, for this sample, we still measure over nearly the whole area (the local refinement criterion still holds everywhere). Using the above-mentioned process, we evaluated the drift vector as (3.3 ± 0.5, 2.7 ± 0.3, 0.29 ± 0.15) nanometres per second, which is a good estimate of the drift rate. In Figure 6D, a simulated measurement with a correction based on this estimated drift rate is shown (the first two iterations were used for the drift estimation), obviously leading to a significant correction of the image. Of course, if the drift rate is not constant, the above-mentioned approach would not be optimal; however, the user could repeat it after several refinement iterations to correct the drift rate estimate. Generally, it can be seen that the drift leads to a much more evident image distortion when our adaptive refinement algorithm is used. On the other hand, even the data obtained with the regular matrix approach are influenced by the drift, and if we do not know the properties of the measured structure, we need to use a similar correlation technique to determine the drift. The data are therefore 'wrong' in both cases, but in the regular matrix case they look better, allowing the user to ignore the systematic errors caused by the drift. The large influence of the drift on the presented algorithm could therefore even be seen as a benefit for a metrologist: if the systematic error is quantitatively the same in both cases anyway, the adaptive approach prevents it from remaining hidden in the data.
Conclusion
We have implemented an adaptive refinement measuring algorithm in a long-range scanning probe microscope. For surfaces with large regular areas, the use of this adaptive scanning instead of regular matrix scanning can save measurement time and enable high-resolution zooming into the measured data. This can be interesting particularly when measuring large areas of different manufactured nano- and microstructures, such as electronic parts, photonic crystals, diffraction gratings, etc. Using the presented approach, we iteratively measure with higher and higher resolution on critical surface areas, determined by a simple and universal criterion. In the data processing phase, the set of xyz data obtained from the measurement is triangulated, and the necessary details are regularized in order to reach the requested resolution. This enables the user to perform zooming in the data processing phase, with no need for further measurements.
Declarations
Acknowledgements
This work was supported by the Ministry of Trade and Commerce under contract number FR-TI1/241.
References
 1. Werner C, Rosielle PCJN, Steinbuch M: Design of a long stroke translation stage for AFM. International Journal of Machine Tools & Manufacture 2010, 50: 183–190. doi:10.1016/j.ijmachtools.2009.10.012
 2. Manske E, Hausotte T, Mastylo R, Machleidt T, Franke KH, Jäger G: New applications of the nanopositioning and nanomeasuring machine by using advanced tactile and non-tactile probes. Meas Sci Technol 2007, 18(2):520. doi:10.1088/0957-0233/18/2/S27
 3. Eves BJ: Design of a large measurement-volume metrological atomic force microscope (AFM). Meas Sci Technol 2009, 20(8):084003. doi:10.1088/0957-0233/20/8/084003
 4. Dai G, Pohlenz F, Danzebrink HU, Xu M, Hasche K, Wilkening G: Metrological large range scanning probe microscope. Rev Sci Instrum 2004, 75(4):962–969. doi:10.1063/1.1651638
 5. Sturwald S, Schmitt R: Large scale atomic force microscopy for characterisation of optical surfaces and coatings. Int Journal of Precision Technology 2011, 2(2–3):136–152.
 6. Klapetek P, Valtr M, Matula M: A long-range scanning probe microscope for automotive reflector optical quality inspection. Meas Sci Technol 2011, 22: 094011. doi:10.1088/0957-0233/22/9/094011
 7. Guibas L, Stolfi J: Primitives for the manipulation of general subdivisions and the computation of Voronoi diagrams. ACM Transactions on Graphics 1985, 4(2):74–123. doi:10.1145/282918.282923
 8. Rahe P, Bechstein R, Kühnle A: Vertical and lateral drift corrections of scanning probe microscopy images. J Vac Sci Technol 2010, 28: C4E31. doi:10.1116/1.3360909
 9. Clifford CA, Seah MP: Simplified drift characterization in scanning probe microscopes using a simple two-point method. Meas Sci Technol 2009, 20: 095103. doi:10.1088/0957-0233/20/9/095103
 10. Marinello F, Balcon M, Schiavuta P, Carmignato S, Savio E: Thermal drift study on different commercial scanning probe microscopes during the initial warming-up phase. Meas Sci Technol 2011, 22: 094016. doi:10.1088/0957-0233/22/9/094016
Copyright
This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.