1. Introduction
Printed electronics and printing technologies have been receiving attention because they exhibit a balance between quality and cost. Printed circuit boards (PCBs) have been widely used in various applications. To ensure the fabrication quality of PCBs, researchers have proposed various methods to quantify pattern integrity by evaluating visual appearance using automatic optical inspection (AOI) procedures [1,2]. Line width roughness (LWR) [3] and line edge roughness (LER) [4] have been adopted, which facilitate the evaluation with scientific and quantified indicators. Process instability can thus be determined or suspected by identifying geometric changes [5]. In addition, some characterizations can be projected as a part of quality control, quality assurance, or early warning [6].
However, the aforementioned methods suffer from an inability to analyze complex patterns, low efficiency [3], and noticeable errors [7]. They also do not fully support dimension measurements and fail to connect geometric variations with electromagnetic (EM) properties [4]. Furthermore, most procedures in these methods require manual operations, and no unified standards are set for image differentiation, preprocessing, data induction and calling, and parameter adjustment through AOI; therefore, analyses based on these existing methods are laborious and time-consuming. Although data acquisition can be partially automated, the analysis of large quantities of samples remains challenging for users without an engineering background. For example, when the images of interest captured from the PCB samples are complicated, users have to refine parameters, such as focus and contrast, to fulfill specific requirements. Users also have to convert the images of interest with proper settings (such as grayscale) before the analyses can be performed. Moreover, the results can be erroneous when the alignments for correcting magnification, shift, rotation, and tilt are not properly conducted with a universal solution for different samples [4]. In addition, among current industrial applications that combine image processing and AOI, most merely skeletonize the PCB circuit and then judge whether the circuit is complete based on the image processing results; in those process steps, the precision requirements for image processing are relatively low. In short, existing methods cannot provide a single solution to support smart manufacturing with operational efficiency in handling related procedures in a standard and integrated manner.
Therefore, this paper proposes the development of an integrated AOI (iAOI) system and procedure to minimize manual handling steps and provide a turn-key, one-step solution. In the proposed system and procedure, several standardized steps for image processing and data analysis were designed, and a graphical user interface (GUI) was developed. The proposed procedure contains five analysis steps. It starts with image registration, in which the image of interest obtained from the PCB sample is imported to the computer from an existing file or a camera that operates instantly. Subsequently, proper threshold settings are applied as the second step to ensure good contrast between the target and non-target locations on the image of interest. To further enhance sharpness and remove noise, an image gradient is applied as the third step. The markers that are generally arranged close to the image of interest for alignment are correctly positioned in the fourth step before the geometric transformation of the image is executed by the computer in the fifth (and last) step. These steps ensure unified operation and offer operational simplicity to general users who are unfamiliar with engineering interfaces.
To demonstrate the effectiveness of the proposed iAOI system and procedure, self-complementary Archimedean spiral antennas (SCASAs) fabricated using standard PCB fabrication procedures were introduced as samples, in which pattern variations in terms of bulge amplitude (BA) appeared in both arms of the SCASAs (Figure 1) [8]. By analyzing the SCASAs through the proposed iAOI system and procedure, the image of interest and related data can be acquired more effectively and efficiently than with existing methods. These data can be used to establish a database for EM characterization and its prediction using artificial intelligence (AI) through machine learning, which has not been supported in similar solutions.
2. Workflow
In this study, an iAOI system with a camera and backlight unit (BU) hardware in an opaque box was used as a module. For the iAOI procedure, the GUI was created in Python with application programming interfaces (APIs), such as DigiCam [9], Tkinter [10], OpenCV [11], and NumPy [12], to control the camera in the demonstrative module, perform image processing, and evaluate features with several operations in the workflow shown in Figure 2. Because the reference Python interpreter is implemented in C, its stability and efficiency are reliable. Meanwhile, Python has become increasingly popular because of its short development cycle, so industries and academia are willing to develop APIs for it. Consequently, this work adopted Python together with the various APIs listed above, and we believe the developed Python scripts were reliable for this work.
From the GUI, users may initiate the iAOI system with a login page by performing the operation Login, which prompts for the password. (In this work, operations are described in italic for clarity.) In addition to Login, five other operations exist: Camera control, Image capture, Preprocessing, Feature collection, and Predict. The GUI (Figure 3) presents a work panel with five windows that indicate the condition of the corresponding image of interest or of the data after processing. The Optical Microscope (OM) image, Processed image, and Analyzed image windows display the image of interest captured through the camera, processed through the proposed procedures, and analyzed using features, respectively. (In this work, windows in the GUI panels are underlined for clarity.) The Analyzed feature and Predicted performance windows display the quantified data extracted from and projected onto the analyzed image, respectively.
To display the progress, a task panel with seven status indicators was designed in the GUI, which shows the progress of the corresponding iAOI procedures as a percentage. The seven indicators refer to (1) the readiness of the camera, (2) the readiness of the OM image window, (3) the progress of Preprocessing, (4) the readiness of the Processed image window, (5) the progress in Feature collection, (6) the readiness of the Analyzed feature window, and (7) the progress in operation Predict and readiness of the Predicted performance window.
In the GUI, the user may press one of the action buttons designed for the windows in the work panel. For example, in the OM image window, when “Shutter” is clicked, the operation Camera control starts, and signals are exchanged between the camera and computer through the USB interface. (In this work, actions attached to the windows in the work panel are “bracketed” for clarity.) The camera captures the image, sends it back to the computer, and displays it in the OM image window through the operation Image capture. If users are not satisfied with this image, it can be cleared by clicking “Redo”. Otherwise, the image is sent for adjustment by clicking “Next,” which initiates the operation Preprocessing.
In Preprocessing, several functions for gray scaling and noise cancellation are conducted to refine the image to the proper condition for further analyses [13], as explained later. Similar to the actions that appear in the OM image window, “Redo” provides a chance for users to remove the existing image and re-capture the image from the camera. Otherwise, the adjusted image is passed for analyses by pressing “Next,” which initiates the operation Feature collection with several functions for advanced adjustment and coordinate collection, also described later.
In the Analyzed image window, users may opt to display and focus on individual contours on the arms or gaps; thus, four action buttons, “Arm-1”, “Arm-2”, “Gap-1”, and “Gap-2”, are assigned. In this study, both the outer and inner LERs on the two arms of the SCASAs were collected (Figure 1a). These LERs can be used to obtain arm lengths, areas, and gaps between the two arms. Finally, the features are summarized and displayed in the Analyzed feature window. At this point, the physical characteristics are quantified, and comparisons can be made if the expected values (for example, the designed dimensions) exist.
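For the contour-derived quantities named above (lengths and areas), OpenCV offers direct utilities; the following minimal sketch is illustrative only, and the file name is hypothetical rather than taken from the authors' code.

import cv2

# Binarized image produced by the preprocessing steps (hypothetical file).
binary = cv2.imread("analyzed_scasa.png", cv2.IMREAD_GRAYSCALE)

contours, _ = cv2.findContours(binary, cv2.RETR_LIST, cv2.CHAIN_APPROX_NONE)
for c in contours:
    length = cv2.arcLength(c, True)  # contour length (e.g., an arm edge)
    area = cv2.contourArea(c)        # enclosed area of the contour
    print(length, area)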
In addition to feature analyses, this study aimed to predict the characteristics of SCASAs. Consequently, the quantified values of the features are fed into the developed AI model [4], and the predictions are shown in the Predicted performance window when the action “Predict,” designed for the Analyzed feature window, is clicked. In the demonstration, the capacitance between the two arms was studied, and the users exported the predictions or terminated the GUI by clicking “Save” or “End” in the task panel, respectively.
Regardless of the image, the aforementioned workflow supports the general operation of feature characterization, provided that the corresponding analysis algorithms and AI models are ready. Consequently, typical and universal steps are described below, which also support the demonstrative SCASAs adopted in the present study. In this study, five analysis steps were considered, namely image registration, threshold setting, image gradient, marker alignment, and geometric transformation, which sequentially lead the image of interest to an appropriate state before characteristics, such as LER, can be analyzed or quantified. The detailed operating procedure is described in the Supplementary File (composition of the GUI).
3. General Procedure
3.1. Image Registration
Image registration is the process in which the image of interest is captured by the camera and delivered to the computer [14]. However, various hardware settings and software operations may cause different image conditions, which in turn lead to different analysis results [15]. Consequently, hardware settings, such as BU contrast, should be considered. Additionally, because the camera is controlled through the GUI, the related functions Click shutter, Image transfer, and Display image are conducted in the operation Camera control. By realizing image registration through the automated and standardized steps proposed in this work, users can obtain the required image quality and avoid instabilities arising from manual operations and calibrations, regardless of the sample. Furthermore, these automated steps improve overall efficiency, as discussed later.
3.2. Threshold Setting
The image of interest underwent several functions in the operation Preprocessing to meet the specifications defined for the analyses. These functions are designed for image binarization and thresholding [16,17], which extract the target area (for example, the arm or gap shown in Figure 1a) from the image of interest by observing its grayscale.
In the OpenCV API, two categories of binarization exist: simple and adaptive. Simple binarization is a straightforward option because it removes major noise in most cases. However, appropriate sharpness of the pattern contours is also expected when performing the analyses. Accordingly, adaptive binarization, which offers additional control of the parameters (such as “block size” and “non-zero value”) in the functions, was also adopted. (In this work, API parameters are described in “bracketed italic” for clarity.) Among the various maneuverable parameters in these two binarizations, “maxValue” was the key to the present study, and it was set to integers between 0 and 255.
When “maxValue” was set to 0 to 56, most areas of the image showed excessive backlight because only limited grayscales were present (Figure 4). These results cannot effectively distinguish pattern contours from the background [18]. When “maxValue” was set to 57 to 255, some noise appeared, but reasonable and acceptable results were found. After several trials, only when “maxValue” was set to 83 to 93 were appropriate sharpness and clear contours obtainable for analysis [19]. Consequently, a “maxValue” of 85, which showed the most stable appearance regardless of the BA, was selected (Figure 5). At this point, simple binarization had already removed the noise in the image of interest. However, as aforementioned, adaptive binarization was subsequently applied to further enhance the appearance of the contour.
Applying adaptive binarization with a “maxValue” of 0 to 89 or 116 to 255 to the image after simple binarization (“maxValue” of 85) produced broken contours, which cannot be used for analyses. After several trials, a “maxValue” of 90 to 115 resulted in clear borderlines (sample frame shown in Figure 1a) outside the images of interest, indicating proper conditions for further analysis. Because the borderline should be sharp and the image around the markers should not display any noise, a “maxValue” of 90 was determined to be the best condition for adaptive binarization.
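The two-stage thresholding described above can be sketched with OpenCV as follows. Note that the mapping of the swept “maxValue” onto OpenCV's threshold arguments, the blur kernel, and the adaptive block size and offset are our assumptions for illustration, not parameters confirmed by this work.

import cv2

# Grayscale image of interest (hypothetical file name).
img = cv2.imread("scasa_om.png", cv2.IMREAD_GRAYSCALE)
img = cv2.GaussianBlur(img, (5, 5), 0)  # noise cancellation before thresholding

# Simple (global) binarization; the selected value of 85 is applied here
# as the global threshold -- an assumed parameter mapping.
_, simple = cv2.threshold(img, 85, 255, cv2.THRESH_BINARY)

# Adaptive binarization to sharpen contours; the block size (11) and the
# constant offset (2) are illustrative values for the tunable parameters.
adaptive = cv2.adaptiveThreshold(simple, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                                 cv2.THRESH_BINARY, 11, 2)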
3.3. Image Gradient
After the thresholds of the simple and adaptive binarizations were determined, a gradient was applied to obtain the boundary of the patterns because each pixel has a grayscale between 0 (black) and 255 (white). The greater the difference in grayscale between adjacent pixels, the more noticeable the boundary. Because the change in grayscale between adjacent pixels can serve as the judgment for the boundary, the Sobel [20], Scharr [20], Laplacian [21], and Canny [16] algorithms were considered, each of which has its own benefits. (In this study, the algorithms are marked in bold for clarity.)
Sobel was designed to extract thick and continuous contours; thus, it supports patterns with simple shapes and sharp edges. Scharr is an adjusted version of Sobel that aims to obtain a cleaner image with an extended kernel. However, both Sobel and Scharr sacrifice the resolution of complex patterns, showing deficiencies in the proposed iAOI procedure. Similarly, Laplacian cancels noise; however, it occasionally removes expected contours by erroneously judging them as noise, thereby displaying incorrect results.
In contrast, Canny is a composite edge-detection algorithm that combines a Gaussian filter, gradient detection, non-maximum suppression, and boundary judgment. From a tolerance standpoint, it exhibits the advantages of a low error rate (background and pattern can be separated with high contrast), accurate positioning (the marked edge is close to the actual edge), and high resolution (fine contours are generated) over the other three algorithms in this work. Consequently, this work adopted Canny as the algorithm for the image gradient; the resultant image (after threshold setting) is shown in Figure 6.
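A minimal call for this step is shown below; the two hysteresis thresholds (100 and 200) are illustrative because the values used in this work are not listed.

import cv2

binary = cv2.imread("scasa_binarized.png", cv2.IMREAD_GRAYSCALE)  # hypothetical file
edges = cv2.Canny(binary, 100, 200)  # low/high hysteresis thresholds (illustrative)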
3.4. Marker Alignment
To analyze the contours of the image, the image frame should be removed; thus, a function Clear frame is created and performed before alignment is conducted. Subsequently, a function Find mark is created to obtain the relative coordinates of the markers. (In this paper, functions are described and underlined in italic for clarity.) Because markers are key to alignment, effective and practical markers are required [17].
In this work, three square markers located at three corners (UL at the upper left, UR at the upper right, and BL at the bottom left) of the SCASA sample were originally adopted (Figure 7a). Each square marker had four corners, named A, B, C, and D, as shown in Figure 7b. Before alignment, the coordinates of any specific corner should be known through a raster scan, which starts from the upper left corner of the image in the format (x, y) (Figure 8) and has been widely adopted in similar studies [22]. In this work, Cartesian coordinates were used, where x represents the pixel number in the horizontal direction from left to right, and y represents the pixel number in the vertical direction from top to bottom.
In the function Find mark, the coordinates of the corners UL_A, UR_B, and BL_C were expected to be accurately obtained (Figure 8). Accordingly, the row-by-row raster scan for corner UL_A started from the left and proceeded to a point located at the middle of the image with y = 0, to avoid erroneous coordinate collection from marker UR. The function stopped at a point with a black pixel, that is, (R, G, B) = (0, 0, 0). If there were no black pixels in a row, an identical scan was repeated with y = 1. For corner UR_B, the raster scan also proceeded row by row, but from the right to a point located in the middle of the image, to avoid erroneous coordinate collection from marker UL. Similarly, the function stopped at the corresponding black pixel in marker UR. Because marker BL is located on the same side as marker UL, the procedure for collecting the corner BL_C coordinate was identical to that for corner UL_A, except that the row-by-row raster scan started from the bottom. These coordinates are used for rotation and shift alignment later, and the image theoretically fulfills the requirements for geometric transformation after alignment.
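The corner searches can be sketched as nested scans over the binarized array. The helper below follows the scan orders described above; the function name, file name, and the mid-image stopping points are written as assumptions for illustration.

import cv2

img = cv2.imread("scasa_aligned.png", cv2.IMREAD_GRAYSCALE)  # hypothetical file
h, w = img.shape

def first_black(rows, cols):
    # Scan the given rows in order; within each row, scan the given
    # columns in order; return the (x, y) of the first black pixel.
    for y in rows:
        for x in cols:
            if img[y, x] == 0:  # black pixel in the binarized image
                return (x, y)
    return None

ul_a = first_black(range(h), range(0, w // 2))              # UL_A: top-down, left-to-right
ur_b = first_black(range(h), range(w - 1, w // 2, -1))      # UR_B: top-down, right-to-left
bl_c = first_black(range(h - 1, -1, -1), range(0, w // 2))  # BL_C: bottom-up, left-to-right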
3.5. Geometric Transformation
After locating the markers with their coordinates, the target pattern should stand upright for analysis. Thus, a function Rotate that adjusts the image into the correct orientation was prepared in this work. By considering the center of the PCB sample as both the rotational center and the center of the adjusted image, and by selecting the coordinates of corners UL_A and UR_B, the rotation angle θ and the rotation direction (clockwise or counterclockwise) could be determined (Figure 9a). In addition to the rotation, the horizontal or vertical shift of the image should be known, which can also be obtained through trigonometry and θ. Finally, the function Rotate adjusts the image not only into the correct orientation but also to the correct location by rotating and shifting it, respectively. At this point, the image was ready for feature analysis, and demonstrations on SCASAs made on PCB samples were conducted as described below.
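A sketch of the rotation described above is given below, assuming OpenCV's y-down image coordinates and taking the image center as the rotational center; the function name is hypothetical.

import cv2
import numpy as np

def upright(img, ul_a, ur_b):
    # Rotate the image so that the UL_A -> UR_B line becomes horizontal.
    (ax, ay), (bx, by) = ul_a, ur_b
    theta = np.degrees(np.arctan2(by - ay, bx - ax))  # tilt of the top edge
    h, w = img.shape[:2]
    # A positive angle rotates counterclockwise on screen, which levels a
    # top edge whose right end sits lower (larger y) than its left end.
    M = cv2.getRotationMatrix2D((w / 2, h / 2), theta, 1.0)
    # The horizontal/vertical shift mentioned above can be added to M[:, 2].
    return cv2.warpAffine(img, M, (w, h))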
5. Results and Discussion
Because each sample for analysis contained two arms (or two gaps), and each arm (or gap) contained two contours, one analyzed image returned four LERs on the developed GUI regardless of the designed BA. Cumulatively, ten samples for each BA were analyzed in this study. Statistically, the results revealed that the average LER increased with enlarged BA, reflecting the expected tendency. The target LERs for BA sets of 000, 015, 030, 045, 060, and 075 on the inner contour of Arm-1 were 0, 1.655, 2.894, 4.304, 5.599, and 6.971 pixels from the design, respectively, whereas those analyzed from the samples were 1.439, 2.357, 3.198, 4.718, 5.984, and 7.093 pixels, respectively (Figure 17a). The difference between the designed and analyzed values (on average 0.564 pixels) can be attributed to reasonable fabrication tolerances. Similar statistics were also summarized for the LER on the outer contour of Arm-1 (on average 0.521 pixels (Figure 17b)), the inner contour of Arm-2 (on average 0.616 pixels (Figure 18a)), and the outer contour of Arm-2 (on average 0.460 pixels (Figure 18b)).
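The reported average difference can be spot-checked directly from the listed values; the small discrepancy with the quoted 0.564 pixels stems from rounding of the listed figures.

import numpy as np

# Inner contour of Arm-1, in pixels, for BA sets 000-075 (values from the text).
designed = np.array([0.0, 1.655, 2.894, 4.304, 5.599, 6.971])
analyzed = np.array([1.439, 2.357, 3.198, 4.718, 5.984, 7.093])

print(np.abs(analyzed - designed).mean())  # ~0.56, close to the reported 0.564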
However, in EM applications, the responses rely on conductors (arms with metal) and insulators (gaps with dielectric) for their capacitive behavior. The two arms and their corresponding gaps were thus considered the paired electrodes of a capacitor and the distance between those electrodes for a SCASA, respectively. Consequently, it is necessary to quantify the gap integrity, which is composed of two contours separately contributed by the different arms. Statistically, the results revealed that the average gap increased with enlarged BA, also reflecting the expected tendency (Figure 19a,b and Figure 20a,b for the LER on the inner contour of Gap-1, outer contour of Gap-1, inner contour of Gap-2, and outer contour of Gap-2, respectively). Additionally, the maximal LER in a specific BA set (e.g., 045) did not surpass the minimal LER in the adjacent BA set (e.g., 060), indicating the effectiveness of the proposed procedure.
In addition to the expected differences and tendencies, the contribution of the second alignment should be further studied. When the function Second alignment was removed from the operation Preprocessing, LER analyses of the arms and gaps could still be performed. By converting the results obtained with the function Second alignment (Figure 17, Figure 18, Figure 19 and Figure 20) into statistical expressions and comparing them with their counterparts obtained without the function, the importance of Second alignment was demonstrated.
When the second alignment was not performed, the LER distribution was diverse and the standard deviation was large for Arm-1 (Figure 21a). The target LERs for the BA sets of 000, 015, 030, 045, 060, and 075 on the inner contour of Arm-1 were 0, 1.655, 2.894, 4.304, 5.599, and 6.971 pixels, respectively, whereas those analyzed (without the second alignment) were 2.336, 2.281, 3.539, 4.599, 6.336, and 7.093 pixels, respectively. Although the LER differences between the designed and analyzed values (on average 0.794 pixels) may be negligible, an inappropriate LER tendency appeared in the BA sets of 000 and 015. Similarly, the statistics for the difference between the designed and analyzed (without the second alignment) LERs on the outer contour of Arm-1 (on average 0.633 pixels, Figure 22a), the inner contour of Arm-2 (on average 0.816 pixels, Figure 23a), and the outer contour of Arm-2 (on average 0.703 pixels, Figure 24a) were summarized, with an average tolerance for all four LERs of 0.737 pixels.
However, when the second alignment was conducted, the LER distribution and its standard deviation exhibited improved stability. The analyzed (with the second alignment) LERs were 1.439, 2.357, 3.198, 4.718, 5.984, and 7.093 pixels for BA sets of 000, 015, 030, 045, 060, and 075, respectively (Figure 21b). The differences between the designed and analyzed values (on average 0.564 pixels) were improved by 29.0% compared with the results shown in Figure 21a. Similar statistics were also summarized for the difference between the designed and analyzed (with the second alignment) LERs on the outer contour of Arm-1 (on average 0.521 pixels, Figure 22b, improved by 17.7%), the inner contour of Arm-2 (on average 0.616 pixels, Figure 23b, improved by 24.5%), and the outer contour of Arm-2 (on average 0.460 pixels, Figure 24b, improved by 34.6%). The overall average tolerance for all four LERs analyzed with the second alignment was 0.540 pixels, exhibiting a 26.7% improvement compared with the counterpart obtained without the second alignment.
Although these differences may appear minor, the standard deviation of the LER in each BA set without the second alignment was larger than that with the second alignment, implying the effectiveness of the proposed procedure. Considering that the cameras in the iAOI system can be upgraded for various applications, the second alignment, which led to precise analysis and unambiguous judgment, was crucial.
When the LERs were collected using the proposed iAOI procedure and displayed on the developed GUI, they were fed into the established AI model to predict the capacitances of the SCASAs. The results indicate that the prediction was efficient and correct, as summarized elsewhere [8], and many more features in addition to the LER may be included in the AI model in the future.
This work extended the research results of [7], which started with linear, symmetric, and nonlinear (but mathematically predictable) patterns. As a result, practical patterns applicable to devices such as antennas were developed for the first time. In addition, the designed antenna successfully imitated the wetting effect of inkjet printing. We believe that the proposed iAOI system is thus compatible with linear, symmetric, and nonlinear, but mathematically describable, patterns.
6. Conclusions
The spirit of this work is to provide integrated hardware and software to operators who may not have the professional or technical background to appropriately evaluate the results. The iAOI integrates the hardware of a camera, BU, and stage with the software of a control GUI that includes algorithms for image operation and AI-based judgment. Consequently, potential users (regardless of their background) can easily handle the production line without judgment bias. In summary, an iAOI system with automatic pattern-integrity analysis was built in this work, with the following highlights.
6.1. An Integrated iAOI System Was Realized
In this work, a lazy learning method was applied with the help of a MATLAB toolbox in which 19 models exist. We looked for the model that showed the smallest root-mean-square error and the largest coefficient of determination as the training result. Consequently, after a thorough evaluation of all the models, Gaussian process regression with an exponential covariance function was determined to be the most suitable AI model for the iAOI system [27]. Five analysis steps (image registration, threshold setting, image gradient, marker alignment, and geometric transformation) were set for this iAOI system, and several functions in each step were prepared using Python with APIs.
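The screening itself was performed in MATLAB; as a language-consistent illustration only, an analogous step in Python with scikit-learn is sketched below, where a Matern kernel with nu = 0.5 is mathematically equivalent to the exponential covariance function and the training data are synthetic stand-ins, not the measured features.

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

# Synthetic stand-in data: four LER features per sample -> capacitance.
rng = np.random.default_rng(0)
X = rng.uniform(0.0, 7.0, size=(30, 4))
y = 0.3 * X.sum(axis=1) + rng.normal(0.0, 0.05, size=30)

# Matern with nu = 0.5 is the exponential covariance function.
gpr = GaussianProcessRegressor(kernel=Matern(nu=0.5), normalize_y=True).fit(X, y)
rmse = np.sqrt(np.mean((gpr.predict(X) - y) ** 2))  # screening criterion 1: RMSE
r2 = gpr.score(X, y)                                # screening criterion 2: R^2
print(rmse, r2)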
6.2. A User-Friendly GUI Was Prepared for Easy Handling
Although some of these steps could be performed separately, as demonstrated in previous works, the proposed iAOI system additionally provides a user-friendly GUI that displays the step-by-step results of the image operations. This iAOI system and procedure provides a turn-key solution for PCB production lines, where pattern variations exist regardless of design and application.
6.3. Verification through a Potential PCB Production Line Was Performed
To demonstrate the effectiveness of the proposed iAOI system and procedure, proof-of-concept SCASA samples made with PCBs were introduced, and LER analyses were successfully performed regardless of geometric deformation in terms of BA. Considering the present capabilities of our facility, this was the most balanced way to perform a proof-of-concept study. In the present case, we aimed to set a reliable reference for reproducibility. Nevertheless, when one of the parameters changes, a dependable result can still be reproduced, provided that additional studies are thoroughly carried out.
6.4. Additional Studies Were Conducted on the Topic of BU, Marker, and Offset
In addition, the physical features of widths and gaps were quantified for the SCASA samples. Advances in sharpness enhancement, marker modification, and alignment refinement, achieved through BU improvement and diffuser optimization, triangle-marker introduction, and offset elimination, respectively, were proposed and performed to further refine the analyses.
6.5. Improved Accuracy and Efficiency Were Demonstrated
The results indicated that, on average, an LER accuracy improvement of 26.7% was achieved. Because the proposed iAOI system and procedure integrated the required operations into a single GUI and provided corresponding responses to the results of each operation, the analysis efficiency for complex patterns, together with handling simplicity, substantially improved. Under identical hardware and environmental conditions, the required operation time was reduced from 55 min [4] to 8 min for analyzing one SCASA sample, an 85% improvement in practice. The proposed iAOI system and procedure showed the capability of function extension for EM characterization and its prediction by linking the LERs to an AI model, implying its practicality in smart manufacturing.