Abstract

Information lithography in manufacturing is a broad set of techniques for encoding sequences of bits as physical or behavioral features in physical parts. It is an effective approach for part traceability and anti-counterfeiting. Several techniques have recently been proposed for embedding 2D codes in 3D printed parts by local control of geometry or material. This paper presents an approach to embed and retrieve information in additive manufacturing (AM) parts by controlling the printing process parameters. The approach leverages variations in printing speed to encode information on the surface of AM parts. Optical imaging devices, such as 2D scanners and optical profilometers, are employed to read the embedded information, enabling the capture of local height differences on the part surfaces that embody 2D codes such as QR codes. The retrieved information is processed using computer vision techniques such as morphological segmentation and binary classification. First, the impact of variations in the encoding parameters on the information retrieval accuracy is characterized. Then, the feasibility and effectiveness of the proposed scheme are demonstrated through experimental results, showcasing a high accuracy in retrieving encoded messages and successfully distinguishing subtle surface features resulting from varying printing speeds. The proposed approach offers an inexpensive and efficient method for information lithography, allowing for the secure embedding of information, e.g., serial numbers and watermarks, while addressing counterfeiting and security concerns in diverse industries.

Graphical Abstract Figure

1 Introduction

Additive manufacturing (AM) has seen rapid growth over the past few decades. It has revolutionized the product realization process by enabling the creation of products with complex shapes that were previously impossible. Compared to competing subtractive manufacturing methods, it offers significantly greater control over the geometric features and material properties of manufactured goods. Advances in AM have disrupted several key industries, including aerospace, automotive, and biomedicine [1–3]. However, with these advancements come new security concerns, ranging from theft of technical data to process sabotage and counterfeiting [4–6]. Manufacturing has also become one of the most targeted industries for security attacks [7], making it important to develop and adopt security measures in both the digital and physical domains.

Secure product development is a broad area that encompasses several security objectives, including confidentiality, integrity, availability, and accountability [8,9]. To achieve these objectives, various techniques have been developed, including fragile watermarking of 3D models, physically unclonable functions [10], and privacy-aware supply chain collaboration [11]. A promising technique is information lithography, which involves embedding information within parts to make them easier to track and verify authenticity [12].

Additive manufacturing is particularly well suited for information lithography due to its precise control over local structures and properties. Existing literature in AM presents methods to embed and read information from 3D printed parts for a range of printing processes and embedded signals. These signals include acoustic responses, cavities in the part volume, material changes, and others [13–16]. Embedding these signals requires manipulation of part geometry or printing process parameters (material extrusion, print head path, etc.) to locally alter parts to carry the desired signal. To be successful, information lithography requires secure transmission of information in a physical part that is accessible to the end users.

In this paper, we build on our information embedding scheme [17] by (i) experimentally analyzing the effect of the embedding parameters on the accuracy of retrieved information, specifically printing speed and bit width, (ii) reducing the time taken to extract surface data, and (iii) encrypting the data for added security. In this scheme, information is embedded by manipulating printing speed to create subtle differences in the surface characteristics that can be captured by optical imaging devices, such as cameras, 2D scanners, or profilometers. Embedding information on the surface provides advantages such as easy accessibility without the need for specialized equipment. Moreover, incorporating information during the manufacturing process streamlines the overall production steps, contributing to improving efficiency and reducing complexity. Since the encoding approach provides total control over the location of encoded bits, we can identify each of the bit instances in the captured image, extract their features, and decode them using computer vision techniques. Additionally, we investigate the accuracy of numerous decoding methods as a function of the variation of the encoding parameter, i.e., the difference in printing speed, and experimentally show how to securely encode information such that it is unintelligible by an adversary.

The paper is organized as follows: Sec. 2 provides an overview of information embedding methods, discusses techniques for information retrieval in AM, and outlines key research gaps. In Sec. 3, we present experimental results quantifying the effects of the encoding parameters on the information retrieval accuracy and determine the best combination of the parameters for the proposed information lithography scheme. Section 4 details our approach for embedding and retrieving 2D codes. This is followed by an analysis of the findings in Sec. 5. We conclude the paper and discuss avenues for future work in Sec. 6.

2 Related Work

Due to the rapid growth and adoption of AM technologies, the risk of counterfeiting AM-fabricated components is rising [6,18]. While there have been several efforts aimed at addressing this risk [6,13,18–20], several challenges remain [21]. Several techniques for embedding information in AM have been proposed in the literature, each with its own trade-offs in terms of cost, durability, and detectability. Some methods involve manipulating printing process parameters to create unique signatures [21–23], while others modify parts’ geometry [24–26] or incorporate multiple materials for tagging [14,16,27]. Information on the surface or near-surface of the parts can be read using relatively low-cost equipment, such as optical cameras or microscopes, making them a practical choice for many applications. Conversely, capturing internal information may require more expensive equipment, such as CT scanners, X-ray machines, or magnetic scanners, but can provide more robust and secure information embedding. In the following sections, we detail some of the methods used to embed and retrieve information in AM parts.

2.1 Information Embedding Schemes.

Geometry-Based Modification. Aliaga and Atallah [26] presented one of the earlier studies aimed at embedding signatures on the surface of 3D objects. Their approach adds subtle geometric noise to the part’s surface, which is then combined with the manufacturing noise inherent in the process to be used as a signature of the part. Retrieving these signatures requires a good estimation of the noise in the manufacturing process. Harrison et al. [15] embed acoustic barcodes in their parts. Their approach creates surface gaps that produce a unique sound when swiped, with each sound being mapped to a signature barcode. While some approaches explore modifying the surface geometry of the parts to incorporate obtrusive QR codes [25,28], Peng et al. [24] introduced new ways of embedding unobtrusive QR codes by creating minimal carvings on parts and using directional light at predefined angles. Other researchers explored placing air voids below the outer layers of parts as a way of embedding information [13,29–32]. Chen et al. [18] and Kubo et al. [33] modify a part’s internal geometry for identification. For instance, Chen et al. [18] demonstrate how QR codes can be divided into numerous segments and embedded at different depths and locations in AM parts. Kubo et al. [33] utilize the differences in parts’ acoustic resonance due to internal structural changes.

Process Parameters Control. Several information embedding approaches exploit the flexibility and variability that process parameter control offers. For instance, Dogan et al. [22] proposed G-ID, a method that creates unique signatures on the surface, or near-surface, of parts through varying process parameters, such as infill pattern, density, and layer height. Similarly, Delmotte et al. [21] demonstrate how inducing local variations in layer height (i.e., variations within the same layer) can embed large amounts of information in small regions. Global variations in layer height have also been exploited for embedding optical barcodes [27], while other researchers varied the starting location of each layer for object watermarking [34]. Kubo et al. [23] extended their previous work by utilizing process parameter variations (e.g., infill pattern or density), instead of manually modifying the parts’ internal geometries.

Use of Multiple Materials. Incorporating multiple materials (i.e., filaments or feedstock) is another common technique used to embed information in AM parts. These methods are generally employed to avoid surface variations or geometrical modifications. Maia et al. [27] encode optical barcodes by either using two filaments with different colors or by using invisible near-infrared (IR) dye. Another approach utilizes near-IR dyed filaments for embedding information inside objects [13,35]. In metal-based AM, researchers have explored the use of dissimilar materials during selective laser melting for tagging purposes. By using materials with different magnetic properties, such as 430 and 17–4PH ferromagnetic steels, Salas et al. [16] successfully embedded 2D tags within non-magnetic materials, whereas Wei et al. [14] used Cu10Sn (a copper alloy) to embed QR codes within 316L stainless steel.

2.2 Information Decoding Techniques.

There exist various methods for retrieving embedded information from AM parts. Some are more suitable for detecting surface-level data, whereas others are more appropriate for capturing internal information. In this section, we review different techniques employed to detect and decode embedded information in AM.

Surface Detection. Surface-level information embedding is more prevalent than sub-surface (internal) embedding in the AM literature. This is partly because acquiring internal information may necessitate costly equipment. Optical cameras, including phone and digital cameras, have been extensively used to capture surface-level data [22,24,25,27,36–38]. For instance, Dogan et al. [22] developed a phone application that allows users to take pictures of objects and detect embedded information via image processing. Similarly, researchers [21,34] utilized a 2D paper scanner to capture surface data for detection. Additionally, optical profilometers [17] and microphones [15] have also been employed to detect surface-level information. Once the information is captured, it can be decoded manually [34], through image processing [21,22], or with machine learning (ML) techniques [38].

Internal Detection. Specialized equipment is typically required to capture information embedded inside parts due to the nature of the AM process. Various tools have been used to acquire subsurface information, including ultrasonic sensors, magnetic sensors, X-rays, micro-CT, and IR cameras. For example, Kubo et al. [23,33] use ultrasonic sensing to capture the effects of changing the internal structure of a part, while IR imaging is utilized to capture surface response due to embedded air voids [13,31,32] or the use of near-IR fluorescent dyes inside parts [35]. Wei et al. [14] capture QR codes embedded in dissimilar materials using X-ray equipment, and Chen et al. [18] use a micro-CT scanner. Salas et al. [16] utilize a custom-built magnetic sensor to capture the magnetic response of 2D embedded tags.

Such specialized equipment can be costly and requires highly skilled operators. In our work, we focus on embedding surface-level information through printing speed control and on using optical imaging equipment, coupled with computer vision techniques, for information retrieval.

2.3 Research Gaps and Proposed Approach.

In summary, existing work focuses on creating novel techniques for surface or internal embedding, and retrieval of information. These studies have taken a solution-driven perspective, rather than a problem-driven perspective, for designing information lithography schemes. In a solution-driven perspective, the emphasis is on developing innovative approaches to a broad class of problems. A problem-driven perspective, on the other hand, places greater emphasis on tailoring solutions to the specific requirements (e.g., imperceptibility, robustness to external attacks, and secrecy), the constraints of the manufacturing process, and the capabilities of the adversary.

Given that the task of embedding information for security can be viewed as a design problem [12], this paper is a step toward developing a robust and secure information lithography scheme through determining favorable encoding and decoding approaches that successfully retrieve encoded information. To achieve this, we show that the design of such a scheme for a manufacturing process includes the design of two main components:

  1. an encoder that embeds information in/on the part and

  2. a decoder that retrieves it later.

The transmission of information occurs through a channel that encompasses all the processes impacting the encoded information from manufacturing to decoding. The task of an encoding scheme designer is to maximize the amount of information that can be transmitted in the part (the channel capacity) given constraints placed on the channel by the manufacturing process, part design specifications, expected damage to the part, and available data acquisition methods. The design variables for this channel may include the specific manufacturing process parameters to vary, the range over which those parameters are allowed to vary, the size of the manipulated regions of the part, and the part characteristics that are measured during decoding. The designer should also characterize the noise in the channel, which may be due to inaccuracies during encoding in manufacturing, damage to the part after manufacturing but before decoding, noise introduced during data acquisition during decoding, or some combination of all these factors.

In this paper, we focus on designing an embedding scheme that transmits information in AM parts by controlling printing process parameters. Specifically, we choose the difference in printing speed between different local regions, and bit size, as our encoding parameters. This is because variations in printing speed have been shown to influence the width of infill lines in extrusion-based AM [39–41], and this results in localized height differences that can be extracted and decoded [17]. This printing speed-based embedding approach offers distinct advantages over modifying the 3D surface geometry for encoding. First, unlike altering the computer-aided design (CAD) file, which requires access to the original design, our method enables manufacturers to generate the encoded part independently of the design file, facilitating seamless integration into the manufacturing process. Second, decoupling the encoding from the CAD design allows the production of multiple instances of the same part with different codes without the need to create a new CAD file for each variant. Third, changing the printing speed allows embedding information through features that are smaller than the layer thickness, thereby providing a higher information capacity. A schematic diagram of the influence of printing speed is shown in Fig. 1, and an example of a part with embedded information is illustrated in Fig. 2.

Fig. 1
Schematic diagram of the influence of printing speed on width of the infill line in a fused deposition process. The right side of the figure shows the scanning direction and an enlarged view of the cross section of infill lines with width, w, and layer height, h. The wider infill lines labeled with “1” have slower infill speeds than those labeled with “0.”
Fig. 2
An example of a printed test part with alternating “0” and “1” bits (i.e., message) embedded on the surface, and the height data extracted using an optical profilometer: (a) test part, (b) reshaped bit string, (c) surface height map, and (d) mean and standard deviation of height signals

For this paper, the channel is the surface modified during printing. The decoding is carried out by processing height measurements of this surface to read the embedded message, given any noise that might have been introduced during manufacturing, handling, and data collection. In the following section, we present an experimental study to determine the best values of the encoding parameters for the proposed information lithography scheme.

3 Determining the Best Encoding Parameters

Given an AM machine and imaging equipment, we first determine the best combination of the encoding parameters. We achieve this by investigating the effect of varying the printing speed and bit size on the surface profile, and thus the retrieved message, by encoding test parts with bit strings of alternating “1” and “0” bits. Then, we select the encoding parameters that achieve high information retrieval accuracy and use them for our embedding scheme in Sec. 4.1. The combinations of encoding parameters, namely the difference in speed, Δs (mm/s), and the bit width, bw (mm), used in this study are shown in Fig. 3.

Fig. 3
Encoding parameter combinations for test parts with alternating “0” and “1” bit string

3.1 Test Parts Design.

We begin by generating a bit string containing the desired message. To embed these bit strings on the test parts, we follow our previous work in Ref. [17] by first reshaping an N-size bit string into an array corresponding to the area on the surface of the part that would encode bits. The 2D array consists of R rows and C columns, where R×C=N. For example, a 40-bit string consisting of alternating “1” and “0” is embedded using a 2D array with R=20 and C=2. These bits are mapped to a region on the surface of the 3D model of the parts. Then, in the locations where the bits are to be embedded, we modify the printing speed in the g-code, such that the printing speed for a “0” bit is different from that of a “1” bit. After printing, we obtain the surface profile data (i.e., height data) from the part and analyze it to decode the embedded message. Figure 2 displays the 2D array in a test part, the preprocessed height data extracted using an optical profilometer, and surface height variations across a vertical line-sample from the embedded region.
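As a concrete illustration of this reshaping step, the following minimal NumPy sketch reshapes the 40-bit alternating message into the 20 × 2 array used for the test parts (the helper name `reshape_bits` is ours, not from the paper's code):

```python
import numpy as np

def reshape_bits(bit_string, rows, cols):
    """Reshape an N-bit string into an R x C array (R * C must equal N)."""
    bits = np.array([int(b) for b in bit_string], dtype=np.uint8)
    if bits.size != rows * cols:
        raise ValueError("R * C must equal the bit-string length N")
    return bits.reshape(rows, cols)

# The 40-bit alternating message used for the test parts, reshaped to 20 x 2
message = "10" * 20
bit_array = reshape_bits(message, rows=20, cols=2)
```

Each cell of `bit_array` then maps to one rectangular region on the part surface whose printing speed is set according to the bit value.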

The 12 combinations of encoding parameters shown in Fig. 3 were tested using an 8 × 12 × 3.6 mm rectangular part. The original g-code file was generated using Slic3r, an open-source slicer, with the following printing parameters: 0.16 mm layer height (h), 200 °C nozzle temperature, 100% infill density, 60 mm/s infill speed, and 30 mm/s external perimeter speed. All parts in this work were printed in 1.75 mm blue polylactic acid (PLA) filament on a Creality Ender 3, a commercial fused deposition modeling (FDM) printer. On these test parts, we encoded 40 bits of alternating “0” and “1” reshaped as a 20 × 2 array separated by a bit-free column (i.e., 20 vertical bits printed with the standard external perimeter speed). A sample printed test part with the alternating bit string embedded on the surface, along with the extracted height data, is shown in Fig. 2.
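The speed modification itself amounts to rewriting the feed-rate (F) words of the G1 moves that print each bit region. A minimal sketch, assuming standard RepRap-style g-code; the helper name and regex are illustrative, not from the paper's toolchain:

```python
import re

# Speeds in mm/s; g-code feed rates (F words) are in mm/min, hence the factor 60
SPEED_ZERO_MM_S = 60.0   # faster infill -> narrower line -> "0" bit
SPEED_ONE_MM_S = 6.0     # slower infill -> wider line -> "1" bit

def set_bit_speed(gcode_line, bit):
    """Rewrite (or append) the F word of a G1 move to encode one bit."""
    feed = (SPEED_ONE_MM_S if bit else SPEED_ZERO_MM_S) * 60.0
    if re.search(r"F[\d.]+", gcode_line):
        return re.sub(r"F[\d.]+", f"F{feed:g}", gcode_line)
    return f"{gcode_line} F{feed:g}"

slow = set_bit_speed("G1 X10.0 Y5.0 E0.42 F3600", bit=1)  # -> ...F360
fast = set_bit_speed("G1 X12.0 Y5.0 E0.47", bit=0)        # -> appends F3600
```

In practice, only the moves that fall inside the mapped bit region are rewritten; all other moves keep the slicer's original speeds.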

3.2 Evaluation of the Test Parts.

Given the local height differences induced during our encoding process, the next step is to capture the surface-level information needed to decode and evaluate the embedded message. We utilized an optical profiling microscope, the Zeta-20, to capture height data from parts with a small footprint [17], and we further utilize it here to study the effect of varying the encoding parameters, Δs and bw. We use the 5× lens, which has a field of view (FoV) of 4.69 mm × 3.52 mm. Given this limited FoV, we took nine scans covering each part and stitched them using MATLAB DIPimage 2.9 [42]. An example of the resulting height map is shown in Fig. 2(c).

After capturing the surface height data, we determine whether a given region of the surface encodes a “0” or “1” bit. We achieve this using binary ML classifiers. First, the height data, which are represented as an image whose pixel intensities are proportional to height, are equalized using the contrast limited adaptive histogram equalization (CLAHE) algorithm [43]. This allows better capture of local changes in height. The scikit-image implementation of the CLAHE algorithm was used for this study [44]. This is followed by labeling each bit region in the embedded area as a “0” or “1” bit.
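The equalization step can be sketched on a synthetic height map using the scikit-image implementation of CLAHE (the synthetic plateau data, noise level, and clip limit below are illustrative, not the paper's values):

```python
import numpy as np
from skimage import exposure

rng = np.random.default_rng(0)

# Synthetic height map: a 2 x 2 checker of bit-height plateaus plus measurement noise
plateaus = np.kron(np.array([[0.0, 1.0], [1.0, 0.0]]), np.ones((32, 32)))
height = 0.05 * plateaus + 0.01 * rng.standard_normal(plateaus.shape)

# Rescale to [0, 1] and apply CLAHE to amplify subtle local height differences
img = (height - height.min()) / (height.max() - height.min())
equalized = exposure.equalize_adapthist(img, clip_limit=0.02)
```

The equalized image is then cut into per-bit regions for labeling and classification.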

Two experiments are conducted to explore how adjusting the encoding parameters influences the accuracy of information retrieval. The first experiment focuses on assessing the impact of Δs, while the second one aims to evaluate how varying bw affects the accuracy of information retrieval. For the first experiment, the test parts are bucketed into three subsets with four parts each:

  • (1a) high {Δs ∈ (40, 55] mm/s},

  • (1b) medium {Δs ∈ (25, 40] mm/s}, and

  • (1c) low {Δs ∈ [10, 25] mm/s}.

Similarly, the test parts are divided into two subsets for the second experiment, with six parts each:
  • (2a) large {bw ∈ (2.0, 2.4] mm} and

  • (2b) small {bw ∈ [1.6, 2.0] mm}.

We implemented three classifiers that are widely used for image classification tasks, namely logistic regression, support vector machines (SVMs), and random forest. These are traditional ML classifiers, and we use their scikit-learn implementations within the same training framework. After preprocessing and labeling each subset of test parts according to the aforementioned criteria, stratified k-fold cross-validation with k=5 (i.e., an 80/20 training/validation split) was used, such that, after five cross-validation (CV) iterations, each bit is used for validation exactly once. Stratification ensures that each validation fold has a class distribution as close as possible to that of the training data. Two metrics are used to evaluate the classifiers, namely classification accuracy and the area under the receiver operating characteristic (ROC) curve (AUC). Accuracy is generally easier to interpret, while AUC is preferred when a single-number evaluation of ML algorithms is needed [45]. The AUC measures the ability of a classifier to distinguish between positive and negative instances across different threshold values, with values ranging between 0 and 1. For example, an AUC of 0.5 indicates performance equivalent to random chance, while a value of 1.0 means the classifier perfectly separates the positive (“1” bit) and negative (“0” bit) instances.
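The cross-validation loop described above can be sketched as follows with scikit-learn. The features here are a synthetic stand-in (the paper's actual features come from the labeled bit regions of the height images):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.metrics import accuracy_score, roc_auc_score
from sklearn.model_selection import StratifiedKFold
from sklearn.svm import SVC

# Synthetic stand-in for per-bit feature vectors and their "0"/"1" labels
X, y = make_classification(n_samples=200, n_features=16, random_state=0)

accs, aucs = [], []
skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
for train_idx, val_idx in skf.split(X, y):
    clf = SVC(kernel="rbf", C=1.0, probability=True)
    clf.fit(X[train_idx], y[train_idx])
    accs.append(accuracy_score(y[val_idx], clf.predict(X[val_idx])))
    aucs.append(roc_auc_score(y[val_idx], clf.predict_proba(X[val_idx])[:, 1]))

# Mean accuracy and AUC across the five folds
mean_acc, mean_auc = float(np.mean(accs)), float(np.mean(aucs))
```

Each fold's validation set is used exactly once, so averaging over folds gives the per-subset accuracy and AUC reported in Table 1.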

The best-performing classifier for this task is the SVM with the RBF (radial basis function, or Gaussian) kernel and regularization parameter C=1, which controls the trade-off between maximizing the margin and penalizing misclassified training points. The performance metrics for each data subset are reported in Table 1, and the mean ROC curves with ±1 standard deviation for both experiments are shown in Fig. 4.

Fig. 4
Mean ROC curves and AUC performance for (a) experiment 1 and (b) experiment 2
Table 1

Performance metrics of SVM classifier on test parts

            Experiment 1: Δs (mm/s)                    Experiment 2: bw (mm)
Subset      High (40–55]  Medium (25–40]  Low [10–25]  Large (2.0–2.4]  Small [1.6–2.0]
Accuracy    0.88          0.82            0.77         0.84             0.81
AUC         0.92          0.89            0.84         0.91             0.91

3.3 Conclusion.

The outcomes of these experiments reveal a clear trend: both the bit retrieval accuracy and the AUC increase as Δs increases. Furthermore, while adjustments to bw influence both the accuracy and AUC of the model, the impact is notably less pronounced than that of Δs. Therefore, we use a high Δs of 54 mm/s and the mean tested bw of 2 mm as our encoding parameters for the remainder of this paper. In the following section, we discuss our information lithography scheme design, which embeds information in the form of QR codes. This approach utilizes low-cost scanning equipment and fast image processing techniques for efficient information retrieval.

4 Embedding and Retrieval of QR Codes

We employ QR codes as our chosen medium for message representation. QR codes, which are two-dimensional barcodes, encode text strings along both horizontal and vertical axes. This selection is driven by their capacity to provide a space-efficient means of storing information. Their user-friendly nature, adaptability, and extensive range of applications, spanning from product authentication to tracking and beyond, underscore their utility. QR codes are designed to be highly fault-tolerant. This attribute renders them capable of withstanding damage, distortion, or interference while remaining readable, thus contributing to a substantially low error rate. Furthermore, the versatility of QR codes extends to their potential for encryption, ensuring a layer of data privacy and security [46].

4.1 Embedding QR Codes.

Our process for encoding any message into a 3D printed part consists of three parts: (1) generating and mapping the message of interest to the 3D part’s surface (without any surface manipulation), (2) modifying the printing speed in locations where the message is mapped, and (3) 3D printing the part. QR codes are embedded in the same manner and the overall approach to information embedding is shown in Figs. 5(a) and 5(b).

Fig. 5
The pipeline of embedding and retrieving information: (a) information encoding, (b) 3D printing of test parts, (c) extracting surface data, and (d) decoding with ML classifiers
Given a message, we generate an n × n bit QR code using the qrcode Python library. An example of a 29 × 29 bit code is shown in Fig. 6. This n × n bit code is then mapped to an area, Ae, on the part’s surface, which depends on the printing process, the information retrieval method, and the part’s use case. Generally, the surface area required for encoding is
Ae = n × n × bs  (1)
where n × n is the number of bits and bs is the size of one bit. To preserve the shape of the QR code, we use a square bit size such that the width of one bit, bw, equals its height. This is achieved by ensuring that bw = bnl × h, where bnl is the number of layers in a bit and h is the layer height printing parameter. To embed the QR codes, we first slice the CAD model, without any geometrical manipulation, and generate its g-code using Slic3r. This g-code file is created using a constant external printing speed. Then, at the locations where the bits are to be embedded, we modify the printing speed in the model’s g-code, such that the printing speed for a “0” bit differs from that of a “1” bit. For illustration, we map 21 × 21 bit QR codes onto the center of 46 × 46 × 5 mm rectangular parts such that bw = 2 mm, bnl = 8, and h = 0.25 mm (thus bw = bnl × h is satisfied). For these parts, we use 60 mm/s for encoding “0” bits and 6 mm/s for encoding “1” bits, giving Δs = 54 mm/s. These parameters are chosen based on the results of our earlier study (see Sec. 3 and Ref. [17]), and an example is shown in Fig. 6.
Fig. 6
(a) An example of a QR code (this code points to the website link: purdue.com) and (b) a 2D scan of a printed part with an embedded QR code highlighting the bs parameters

To evaluate the performance of our information embedding and retrieval schemes, we printed 20 parts with various 21 × 21 bit QR codes embedded on their surfaces, for a total of 8820 bits. These parts used the same 46 × 46 × 5 mm rectangular CAD model and the g-code file generated with the aforementioned printing parameters. The parts and encoding parameters are designed such that the 21 × 21 bit QR code occupies the entire front surface, with an excess 2 mm of padding on each of the four sides. In the following section, we discuss our information retrieval pipeline, from capturing surface-level data to the decoding process.

4.2 Retrieving Embedded QR Codes.

With the presence of localized height differences introduced during the information embedding process, the subsequent task involves capturing the surface-level characteristics essential for decoding and retrieving the embedded data. In Sec. 3, we employed an optical profiling microscope, the Zeta-20, to capture height data of parts. Though it provides height data with high resolution, it is constrained to imaging small parts due to its narrow field of view (4.69 mm × 3.52 mm for the 5× lens). This is a limitation, as it would take 14 separate scans to capture the parts with the 21 × 21 QR codes, which is costly and time-consuming. Thus, we explored consumer-grade optical imaging devices, such as smartphone cameras, 2D scanners, and digital single-lens reflex (DSLR) cameras, for imaging the larger parts. Though all of these devices quickly capture high-resolution images of the parts’ surfaces, we utilized the 2D scanner of an EPSON XP-430 printer for our experiments, as it provides more consistent lighting during imaging than the cameras. In Figs. 5(c) and 5(d), we show the overall information retrieval approach.

We demonstrate two approaches for processing surface data to decode and retrieve embedded QR codes: morphological segmentation and ML-based classification. For both, we work with grayscale images of parts and assume that we know a priori the bs used and which region of the surface contains the embedded QR code. In practice, this assumption is justified because manufacturers know where the codes are embedded on their parts’ surfaces and which embedding parameters were used. In the former method, we apply linear filters to extract textures, remove noise introduced during imaging, and binarize the image. This is simple to implement but can yield inconsistent accuracy across parts. On the other hand, ML classifiers label each bit region of size bs in the encoded area of an image, which includes noise introduced during embedding and extraction, as a “0” or “1” bit. They require training, but a large representative training set enables them to extract relevant features and patterns while ignoring noise, yielding high classification accuracy. In the following sections, we detail the implementation of both approaches for decoding imaged surface data to retrieve embedded QR codes.

4.2.1 Morphological Segmentation.

Given a grayscale image of a part’s surface, we first extract the texture by obtaining the magnitude of the real and imaginary responses of the image after applying a Gabor filter. We use the scikit-image implementation of the filter with a frequency, f, of 2 and an orientation, θ, of (2/3)π. We then apply a series of morphological image operations, namely dilation and erosion. The dilation expands the boundaries of the “1” bit regions, while the erosion removes pixels within the “0” bit regions that are due to imaging noise and remain after applying the Gabor filter, as shown in Fig. 7. The Gabor filter, dilation, and erosion parameters were selected after tuning on a part with an embedded QR code of the message “Purdue.” Finally, we downscale the image to 21×21 pixels (i.e., to the same size as the original code) by bilinear resizing, in which each pixel in the resized image is a weighted average of the four surrounding pixels in the original image.
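The pipeline above can be sketched end-to-end with plain NumPy (the paper uses scikit-image’s `gabor` filter; the kernel parameters, the wrap-around edge handling in the morphology, and the block-average downscale below are simplifying assumptions for illustration only):

```python
import numpy as np

def gabor_kernel(freq=0.25, theta=2 * np.pi / 3, sigma=1.5, size=9):
    """Complex Gabor kernel: Gaussian envelope times a complex carrier.
    (The paper uses scikit-image's gabor with f=2 in its own convention.)"""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)  # coordinate along theta
    return np.exp(-(x**2 + y**2) / (2 * sigma**2)) * np.exp(1j * 2 * np.pi * freq * xr)

def gabor_magnitude(img, kernel):
    """Magnitude of the complex filter response, via FFT convolution."""
    H, W = img.shape
    kh, kw = kernel.shape
    shape = (H + kh - 1, W + kw - 1)
    conv = np.fft.ifft2(np.fft.fft2(img, s=shape) * np.fft.fft2(kernel, s=shape))
    return np.abs(conv[kh // 2:kh // 2 + H, kw // 2:kw // 2 + W])

def dilate(mask):
    """Binary dilation with a 3x3 structuring element (edges wrap around)."""
    out = mask.copy()
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out |= np.roll(mask, (dy, dx), axis=(0, 1))
    return out

def erode(mask):
    """Binary erosion as the complement of dilating the complement."""
    return ~dilate(~mask)

def downscale(img, n=21):
    """Block-average downscale to n x n (simple stand-in for bilinear resizing)."""
    H, W = img.shape
    assert H % n == 0 and W % n == 0
    return img.reshape(n, H // n, n, W // n).mean(axis=(1, 3))
```

Thresholding the Gabor magnitude, applying `dilate` then `erode`, and calling `downscale(..., 21)` mirrors the sequence in Fig. 7.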

Fig. 7
Morphological segmentation pipeline: (a) scanned grayscale image, (b) after Gabor filtering, (c) after morphological dilation and erosion operations, and (d) after bilinear image scaling

4.2.2 Binary Classification.

Binary classification algorithms typically involve two main steps: (i) training a model on a dataset to optimize its parameters and (ii) using the trained model to predict the class membership of each data point in a testing dataset. We implement four different classifiers that have been widely used for image classification tasks, namely logistic regression, support vector machines (SVMs), random forest, and a deep convolutional neural network (DCNN). The former three are traditional ML classifiers, and we use their scikit-learn implementations and the same training framework. DCNNs, on the other hand, are deep learning models and are especially powerful for image classification tasks; we use the PyTorch framework for building, training, and testing the DCNN. In the following sections, we present the training details for both the traditional ML and deep learning classifiers.

Training Traditional ML Classifiers. A total of 19 parts with 21×21 bit embedded QR codes are used to create the training dataset, for a total of 8379 bits. First, 2D scans are captured for each part; these images are then converted to grayscale and normalized using min–max normalization. This is followed by labeling each bit region in the embedded area as a “0” or “1” bit, as shown in Fig. 8, and resizing it to 64×64 pixels. Each model’s parameters are selected based on results from a grid search cross-validation (CV) experiment [47], using the scikit-learn implementation. A stratified k-fold with k=5 and an 80/20 split of the training data is used, such that, after five CV iterations, each bit is used for validation exactly once. The stratification ensures that each fold’s validation set has a class distribution as close as possible to that of the training data.
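As a rough sketch of this tuning step, the snippet below runs a stratified 5-fold grid search with scikit-learn; the features and labels are synthetic stand-ins for the real 64×64 bit-region crops, and the hyperparameter grid is illustrative:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, StratifiedKFold

# Synthetic stand-in for the labeled bit-region features:
# in the paper these are flattened 64x64 grayscale crops.
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 32))
y = (X[:, 0] + 0.3 * rng.normal(size=300) > 0).astype(int)  # placeholder "0"/"1" bit labels

# Stratified folds keep the 0/1 bit ratio consistent across the 5 splits.
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
grid = GridSearchCV(
    LogisticRegression(max_iter=1000),
    param_grid={"C": [0.01, 0.1, 1.0, 10.0]},  # regularization strengths to try
    cv=cv,
    scoring="accuracy",
)
grid.fit(X, y)
best_C = grid.best_params_["C"]
best_acc = grid.best_score_
```

The same `GridSearchCV` setup applies unchanged to the SVM and random forest models by swapping the estimator and parameter grid.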

Fig. 8
An example of how bits embedded in a region are labeled for classification

The results from running grid search CV for the traditional ML classifiers are presented in Fig. 9. For logistic regression and SVM, the hyperparameter C controls the strength of the regularization (l1 or l2 norm) applied: high values of C lead to weaker regularization, while low values give more weight to the regularization at the expense of fitting the training data. Three different kernels are tested for the SVM classifier, and the RBF (or Gaussian) kernel slightly outperforms the linear and polynomial kernels. For the random forest classifier, the higher the number of trees (i.e., the number of estimators) and the larger the maximum depth, the better the performance. However, this increases training time and model complexity, and may lead to overfitting.

Fig. 9
Grid search CV results for ML classifiers: (a) logistic regression, (b) support vector machine (SVM), and (c) random forest

Training a Deep Convolutional Neural Network Classifier. Similar to the training process of the traditional ML classifiers, the same parts with the 21×21 bit embedded QR codes are used for training, for a total of 8379 bits. Their 2D scans are first converted to grayscale images in the [0, 1] range. This is followed by transforming them into tensors and normalizing them with mean, μ, and standard deviation, σ, equal to 0.5, such that each feature lies in the [−1, 1] range. After normalization, each bit region is labeled and resized to 64×64 pixels, as shown in Fig. 8. The architecture chosen for the DCNN is a ResNet-34 model [48], pre-trained on the ImageNet-1K dataset [49]. The final layer of this model is replaced with a fully connected layer with a single output feature, generating either a “0” or “1” classification for each corresponding bit region. To fine-tune the model for this new task, a multi-step approach is employed: initially, the pre-trained weights are frozen for several epochs, allowing the new final layer to adapt to the classification of the new dataset; these weights are then unfrozen for the remaining epochs of training, which refine the model’s performance. In the following section, we present the outcomes of training the four models, accompanied by an analysis of their performance.

5 Results

For the evaluation of the code retrieval techniques, specifically morphological segmentation and binary classification, a set of 20 components, each embedding a 21×21 bit QR code, was employed. These components encompassed QR codes representing five distinct messages. The rationale behind using some parts with the same QR codes for both training and testing lies in the approach adopted: training/inference instances are derived from individual “0” and “1” bits, rather than from complete components as singular training instances. Consequently, each component contains 441 discrete training/inference instances, entirely decoupled from the underlying message encoded by the QR code. The parameters used to embed these 441-bit codes while printing are Δs=54mm/s and bs=2mm×2mm. Thus, the total area of the encoded region is A_encode = 21 × 21 × 2 × 2 = 1764 mm², or 2.73 square inches. To reduce the surface noise due to the printer nozzle changing directions, we include 2 mm of padding on each of the four sides of the parts. In the following sections, we present the results and discuss the performance of each information retrieval algorithm.

5.1 Morphological Segmentation.

For testing, we used f=2 and θ=(2/3)π as the Gabor filter parameters. The segmentation of the part with the encoded message “Purdue” resulted in a bit-retrieval accuracy of 97.7% (431/441 bits), which was sufficient to make the retrieved code readable using a QR scanner, as shown in Fig. 7(d). However, the segmentation of the other parts resulted in an average bit-retrieval accuracy of 65.3%. This is expected because the parameters of the Gabor filter were tuned to extract texture features using the part with the encoded message “Purdue.” By tuning the segmentation algorithm for a part with a different QR code (e.g., “purdue.com”), we were able to successfully retrieve the encoded message and achieve a bit-retrieval accuracy of 94.3% (416/441 bits), compared to the 59.4% accuracy (262/441 bits) achieved with the previous tuning. The results before and after tuning are shown in Fig. 10. Thus, this algorithm can be accurate when tuned for specific parts, but it may be impractical for large-scale applications in its current state.

Fig. 10
Morphological segmentation results of a part with QR code linking to “purdue.com:” (a) Gabor filter before tuning, (b) retrieved code before tuning, (c) Gabor filter after tuning, and (d) retrieved code after tuning

5.2 Binary Classification.

To assess the effectiveness of each classifier, we employed leave-one-out cross-validation (LOOCV). This involved conducting 20 iterations; in each iteration, the model is trained on the images of 19 parts (8379 bits) and then classifies all 441 bits from the remaining imaged part. The logistic regression model was trained with C=1 and l2 regularization. The SVM was trained using the RBF kernel with C=1, and the random forest classifier was trained with a maximum tree depth of 4 and 100 estimators. The mean ROC curves and AUC values for these three models are displayed in Fig. 11. For the DCNN, we used a batch size of 6, binary cross entropy as the loss criterion, the Adam optimizer with a learning rate of 10−3 and an l2 penalty of 10−4, and 14 epochs of training, in which the weights are frozen for the first four epochs. The learning curve for a training instance is shown in Fig. 12.
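Holding out one whole part per iteration is a form of grouped cross-validation; a scaled-down sketch using scikit-learn’s `LeaveOneGroupOut`, with synthetic data standing in for the 20 parts × 441 bits, is:

```python
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut
from sklearn.svm import SVC

# Scaled-down stand-in for the dataset: each "group" is one printed part,
# and every iteration trains on all other parts and tests on the held-out one.
rng = np.random.default_rng(1)
n_parts, bits_per_part = 5, 40
X = rng.normal(size=(n_parts * bits_per_part, 16))   # placeholder bit-region features
y = (X[:, 0] > 0).astype(int)                        # placeholder bit labels
groups = np.repeat(np.arange(n_parts), bits_per_part)

accuracies = []
for train_idx, test_idx in LeaveOneGroupOut().split(X, y, groups):
    clf = SVC(kernel="rbf", C=1.0).fit(X[train_idx], y[train_idx])
    accuracies.append(clf.score(X[test_idx], y[test_idx]))
mean_acc = float(np.mean(accuracies))
```

With 20 parts, the loop runs 20 iterations, matching the LOOCV protocol described above.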

Fig. 11
Mean ROC curves and AUC performance of traditional ML classifiers on our QR code dataset
Fig. 12
Learning curve for the ResNet-34 model tuned on our QR code dataset

We report the average accuracy results for all four binary classifiers in Fig. 13. The logistic regression, random forest, and SVM models achieved average accuracies of 76%, 75%, and 91%, respectively. The discrepancy in performance between the LOOCV and grid search CV of the traditional ML models, which ranges between 8% and 23%, could be due to the low complexity of these models. Using a more complex model, such as a DCNN, greatly improves classification accuracy. The DCNN achieved the highest overall accuracy, correctly classifying the most bits, with an average accuracy of 98%, which, in combination with QR codes’ built-in error correction, leads to 100% accurate message retrieval. This is because deep learning models are particularly suited for image classification tasks due to their ability to learn complex features from data. With 100% message retrieval achievable, in the following section we discuss and show experimental results on how to enhance the security of the embedded information.

Fig. 13
Average accuracy results for binary classifiers using LOOCV with error bars

5.3 Secure QR Codes.

To strengthen our information lithography scheme, several methods can be employed before and during the embedding process. Before the embedding and printing stages, the message can be fortified through encryption with a robust key and cipher algorithm, such as the Advanced Encryption Standard (AES) [50]. This ensures that the embedded QR code is unintelligible without access to the key, substantially increasing the security of the transmitted information. Furthermore, security can be enhanced by manipulating the precise locations where each bit of information is embedded. This can be achieved through a predetermined algorithm, resulting in a printed code that is not directly readable as a QR code. Another potential method involves mapping the QR code onto non-square bit configurations and strategically distributing them across the parts’ surfaces. Each of these approaches makes the embedded information more secure against adversaries.
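As a minimal, hypothetical illustration of the location-manipulation idea (not the paper’s exact algorithm), a key-seeded permutation of the 441 bit positions can be applied before embedding and inverted after retrieval:

```python
import random

def scramble_bits(bits, key):
    """Permute bit positions with a key-seeded shuffle before embedding.
    Without the key, the printed pattern is not a readable QR code.
    (Hypothetical illustration, not the paper's algorithm.)"""
    perm = list(range(len(bits)))
    random.Random(key).shuffle(perm)  # deterministic permutation for a given key
    return [bits[i] for i in perm]

def unscramble_bits(scrambled, key):
    """Invert the key-seeded permutation after retrieval."""
    perm = list(range(len(scrambled)))
    random.Random(key).shuffle(perm)  # regenerate the same permutation
    original = [0] * len(scrambled)
    for dst, src in enumerate(perm):
        original[src] = scrambled[dst]
    return original
```

Anyone scanning the part recovers only the permuted bits; recovering the QR code requires the shared key.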

In our experimentation, we focused on the first method (i.e., encrypting the message using AES). We applied the Galois/counter mode (GCM) block cipher mode to the message “purdue.edu” [51]. This mode leverages universal hashing over a binary Galois field to provide authenticated encryption. Our results, spanning four different parts, consistently yielded 100% message retrieval with over 98% bit-retrieval accuracy, highlighting the robustness and security of our information lithography scheme.
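A minimal sketch of this encrypt-before-embed step, using the third-party `cryptography` package’s AES-GCM implementation (the key handling and message below are illustrative, not the paper’s exact setup):

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Encrypt the message before QR encoding; only the ciphertext (plus the
# nonce) is ever embedded in the printed part.
key = AESGCM.generate_key(bit_length=128)  # held by the manufacturer
aesgcm = AESGCM(key)
nonce = os.urandom(12)                     # 96-bit nonce, unique per message
ciphertext = aesgcm.encrypt(nonce, b"purdue.edu", None)

# After imaging the part and decoding the QR code, the key holder both
# decrypts and authenticates the message (GCM rejects tampered ciphertext).
recovered = aesgcm.decrypt(nonce, ciphertext, None)
```

The ciphertext, rather than the plaintext message, is what gets converted into the QR code and embedded via printing-speed control.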

6 Conclusion

In this study, we present a practical, low-cost approach to information lithography, which enables embedding and retrieval of information in additively manufactured parts. By controlling the printing process parameters, specifically printing speed and bit size, we designed a method for encoding information, in the form of QR codes, onto 3D printed parts using an FDM printer. Through our design of experiments and analyses, we evaluated the influence of these encoding parameters on the accuracy of information retrieval. Our findings demonstrate a clear relationship, with variations in printing speed, Δs, being positively correlated with bit-retrieval accuracy, while the influence of bit size is less pronounced. We use a low-cost commercial 2D scanner to capture the surface-level information, and to decode the embedded information, we utilize two computer vision-based methods: (i) morphological segmentation and (ii) ML-based binary classification. Our results highlight the remarkable performance of DCNNs in achieving 98% bit classification accuracy. On the other hand, morphological segmentation is capable of retrieving the embedded QR codes but requires tuning for each part instance. Furthermore, the use of QR codes as a medium for message storage, coupled with their inherent error correction capabilities, reinforces the practicality and reliability of our approach with a 100% message retrieval rate.

The results presented in this paper offer the potential for enhancing security measures in the manufacturing domain. Our findings demonstrate that, by leveraging AM’s precise control over printing parameters, local structures, and properties, it is possible to embed information within parts effectively. Furthermore, while we use printing speed and bit size as encoding parameters, our information lithography framework can be utilized with other process parameters such as nozzle temperature and filament extrusion rate. The high accuracy of information retrieval achieved through machine learning techniques, particularly DCNN, holds significant potential for real-world applications. Finally, we discuss how the proposed approach can be fortified with encryption methods, such as AES, to further increase data security.

Conflict of Interest

There are no conflicts of interest.

Data Availability Statement

The datasets generated and supporting the findings of this article are obtainable from the corresponding author upon reasonable request.

References

1. Chen, L., He, Y., Yang, Y., Niu, S., and Ren, H., 2017, “The Research Status and Development Trend of Additive Manufacturing Technology,” Int. J. Adv. Manuf. Technol., 89(9–12), pp. 3651–3660.
2. Liu, R., Wang, Z., Sparks, T., Liou, F., and Newkirk, J., 2017, “Aerospace Applications of Laser Additive Manufacturing,” Laser Additive Manufacturing, M. Brandt, ed., Elsevier, Duxford, UK, pp. 351–371.
3. Leal, R., Barreiros, F., Alves, L., Romeiro, F., Vasco, J., Santos, M., and Marto, C., 2017, “Additive Manufacturing Tooling for the Automotive Industry,” Int. J. Adv. Manuf. Technol., 92(5–8), pp. 1671–1676.
4. Pan, Y., White, J., Schmidt, D., Elhabashy, A., Sturm, L., Camelio, J., and Williams, C., 2017, “Taxonomies for Reasoning About Cyber-physical Attacks in IoT-based Manufacturing Systems,” International Journal of Interactive Multimedia and Artificial Intelligence, 4(3), pp. 45–54.
5. Wells, L. J., Camelio, J. A., Williams, C. B., and White, J., 2014, “Cyber-physical Security Challenges in Manufacturing Systems,” Manuf. Lett., 2(2), pp. 74–77.
6. Yampolskiy, M., King, W. E., Gatlin, J., Belikovetsky, S., Brown, A., Skjellum, A., and Elovici, Y., 2018, “Security of Additive Manufacturing: Attack Taxonomy and Survey,” Addit. Manuf., 21, pp. 431–457.
7. IBM, 2022, “X-Force Threat Intelligence Index 2022,” https://www.ibm.com/downloads/cas/ADLMYLAZ.
8. Chaduvula, S. C., Dachowicz, A., Atallah, M. J., and Panchal, J. H., 2018, “Security in Cyber-enabled Design and Manufacturing: A Survey,” ASME J. Comput. Inf. Sci. Eng., 18(4), p. 040802.
9. Von Solms, R., and Van Niekerk, J., 2013, “From Information Security to Cyber Security,” Comput. Secur., 38, pp. 97–102.
10. Dachowicz, A., Atallah, M., and Panchal, J. H., 2018, “Optical PUF Design for Anti-counterfeiting in Manufacturing of Metallic Goods,” ASME 2018 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference, Volume 1B: 38th Computers and Information in Engineering Conference, Quebec City, Canada, Aug. 26–29.
11. Hong, Y., Vaidya, J., and Wang, S., 2013, “A Survey of Privacy-Aware Supply Chain Collaboration: From Theory to Applications,” J. Inf. Syst., 28(1), pp. 243–268.
12. ElSayed, K. A., Dachowicz, A., Atallah, M. J., and Panchal, J. H., 2023, “Information Embedding for Secure Manufacturing: Challenges and Research Opportunities,” ASME J. Comput. Inf. Sci. Eng., 23(6), p. 060813.
13. Suzuki, M., Dechrueng, P., Techavichian, S., Silapasuphakornwong, P., Torii, H., and Uehira, K., 2017, “Embedding Information Into Objects Fabricated With 3-D Printers by Forming Fine Cavities Inside Them,” Electron. Imag., 2017(7), pp. 6–9.
14. Wei, C., Sun, Z., Huang, Y., and Li, L., 2018, “Embedding Anti-counterfeiting Features in Metallic Components Via Multiple Material Additive Manufacturing,” Addit. Manuf., 24, pp. 1–12.
15. Harrison, C., Xiao, R., and Hudson, S., 2012, “Acoustic Barcodes: Passive, Durable and Inexpensive Notched Identification Tags,” Proceedings of the 25th Annual ACM Symposium on User Interface Software and Technology, Cambridge, MA, Oct. 7–10, pp. 563–568.
16. Salas, D., Ebeperi, D., Elverud, M., Arróyave, R., Malak, R., and Karaman, I., 2022, “Embedding Hidden Information in Additively Manufactured Metals Via Magnetic Property Grading for Traceability,” Addit. Manuf., 60, p. 103261.
17. ElSayed, K. A., Dachowicz, A., and Panchal, J. H., 2021, “Information Embedding in Additive Manufacturing Through Printing Speed Control,” AMSec ’21, Virtual Event, South Korea, Nov. 19, Association for Computing Machinery, pp. 31–37.
18. Chen, F., Luo, Y., Tsoutsos, N. G., Maniatakos, M., Shahin, K., and Gupta, N., 2019, “Embedding Tracking Codes in Additive Manufactured Parts for Product Authentication,” Adv. Eng. Mater., 21(4), p. 1800495.
19. Li, Z., Rathore, A. S., Song, C., Wei, S., Wang, Y., and Xu, W., 2018, “PrinTracker: Fingerprinting 3D Printers Using Commodity Scanners,” Proceedings of the 2018 ACM SIGSAC Conference on Computer and Communications Security, Toronto, Canada, Oct. 15–19, pp. 1306–1323.
20. Delmotte, A., Tanaka, K., Kubo, H., Funatomi, T., and Mukaigawa, Y., 2018, “Blind Watermarking for 3-D Printed Objects Using Surface Norm Distribution,” 2018 Joint 7th International Conference on Informatics, Electronics & Vision (ICIEV) and 2018 2nd International Conference on Imaging, Vision & Pattern Recognition (icIVPR), Fukuoka, Japan, June 25–28, IEEE, pp. 282–288.
21. Delmotte, A., Tanaka, K., Kubo, H., Funatomi, T., and Mukaigawa, Y., 2019, “Blind Watermarking for 3D Printed Objects by Locally Modifying Layer Thickness,” IEEE Trans. Multimedia, 22(11), pp. 2780–2791.
22. Dogan, M. D., Faruqi, F., Churchill, A. D., Friedman, K., Cheng, L., Subramanian, S., and Mueller, S., 2020, “G-ID: Identifying 3D Prints Using Slicing Parameters,” Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, Honolulu, HI, Apr. 25–30, pp. 1–13.
23. Kubo, Y., Eguchi, K., and Aoki, R., 2020, “3D-Printed Object Identification Method Using Inner Structure Patterns Configured by Slicer Software,” Extended Abstracts of the 2020 CHI Conference on Human Factors in Computing Systems, Honolulu, HI, Apr. 25–30, pp. 1–7.
24. Peng, H., Liu, P., Lu, L., Sharf, A., Liu, L., Lischinski, D., and Chen, B., 2020, “Fabricable Unobtrusive 3D-QR-Codes With Directional Light,” Computer Graphics Forum, Vol. 39, Utrecht, The Netherlands, June 6–8, Wiley Online Library, pp. 15–27.
25. Song, C., Li, Z., Xu, W., Zhou, C., Jin, Z., and Ren, K., 2018, “My Smartphone Recognizes Genuine QR Codes! Practical Unclonable QR Code Via 3D Printing,” Proc. ACM Inter. Mob. Wearable Ubiquitous Technol., 2(2), pp. 1–20.
26. Aliaga, D. G., and Atallah, M. J., 2009, “Genuinity Signatures: Designing Signatures for Verifying 3D Object Genuinity,” Computer Graphics Forum, Vol. 28, Munich, Germany, Mar. 30–Apr. 3, Wiley Online Library, pp. 437–446.
27. Maia, H. T., Li, D., Yang, Y., and Zheng, C., 2019, “LayerCode: Optical Barcodes for 3D Printed Shapes,” ACM Trans. Graph., 38(4), pp. 1–14.
28. Kikuchi, R., Yoshikawa, S., Jayaraman, P. K., Zheng, J., and Maekawa, T., 2018, “Embedding QR Codes Onto B-Spline Surfaces for 3D Printing,” Comput. Aided Des., 102, pp. 215–223.
29. Willis, K. D., and Wilson, A. D., 2013, “InfraStructs: Fabricating Information Inside Physical Objects for Imaging in the Terahertz Region,” ACM Trans. Graph., 32(4), pp. 1–10.
30. Suzuki, M., Silapasuphakornwong, P., Uehira, K., Unno, H., and Takashima, Y., 2015, “Copyright Protection for 3D Printing by Embedding Information Inside Real Fabricated Objects,” VISAPP (3), Berlin, Germany, Mar. 11–14, pp. 180–185.
31. Okada, A., Silapasuphakornwong, P., Suzuki, M., Torii, H., Takashima, Y., and Uehira, K., 2015, “Non-destructively Reading Out Information Embedded Inside Real Objects by Using Far-Infrared Light,” Applications of Digital Image Processing XXXVIII, Vol. 9599, San Diego, CA, Aug. 9–13, International Society for Optics and Photonics, p. 95992V.
32. Li, D., Nair, A. S., Nayar, S. K., and Zheng, C., 2017, “AirCode: Unobtrusive Physical Tags for Digital Fabrication,” Proceedings of the 30th Annual ACM Symposium on User Interface Software and Technology, Québec City, QC, Canada, Oct. 22–25, pp. 449–460.
33. Kubo, Y., Eguchi, K., Aoki, R., Kondo, S., Azuma, S., and Indo, T., 2019, “FabAuth: Printed Objects Identification Using Resonant Properties of Their Inner Structures,” Extended Abstracts of the 2019 CHI Conference on Human Factors in Computing Systems, Glasgow, Scotland, UK, May 4–9, pp. 1–6.
34. Baumann, F. W., and Roller, D., 2017, “Watermarking for Fused Deposition Modeling by Seam Placement,” MATEC Web of Conferences, Kortrijk, Belgium, Feb. 24–26, Vol. 104, EDP Sciences, p. 02023.
35. Silapasuphakornwong, P., Torii, H., Uehira, K., Funsian, A., Asawapithulsert, K., and Sermpong, T., 2019, “Embedding Information in 3D Printed Objects Using Double Layered Near Infrared Fluorescent Dye,” Int. J. Mater. Manuf., 7(6), pp. 230–234.
36. Kennedy, Z. C., Stephenson, D. E., Christ, J. F., Pope, T. R., Arey, B. W., Barrett, C. A., and Warner, M. G., 2017, “Enhanced Anti-counterfeiting Measures for Additive Manufacturing: Coupling Lanthanide Nanomaterial Chemical Signatures With Blockchain Technology,” J. Mater. Chem. C, 5(37), pp. 9570–9578.
37. Gao, Y., Wang, W., Jin, Y., Zhou, C., Xu, W., and Jin, Z., 2021, “ThermoTag: A Hidden ID of 3D Printers for Fingerprinting and Watermarking,” IEEE Trans. Inf. Forensics Secur., 16, pp. 2805–2820.
38. Zhang, X., Wang, Q., and Ivrissimtzis, I., 2018, “Single Image Watermark Retrieval From 3D Printed Surfaces Via Convolutional Neural Networks,” Eurographics Association, Swansea, UK.
39. Lei, M., Wei, Q., Li, M., Zhang, J., Yang, R., and Wang, Y., 2022, “Numerical Simulation and Experimental Study the Effects of Process Parameters on Filament Morphology and Mechanical Properties of FDM 3D Printed PLA/GNPs Nanocomposite,” Polymers, 14(15), p. 3081.
40. Pibulchinda, P., Barocio, E., Favaloro, A. J., and Pipes, R. B., 2023, “Influence of Printing Conditions on the Extrudate Shape and Fiber Orientation in Extrusion Deposition Additive Manufacturing,” Compos. Part B: Eng., 261, p. 110793.
41. Ansari, A. A., and Kamil, M., 2021, “Effect of Print Speed and Extrusion Temperature on Properties of 3D Printed PLA Using Fused Deposition Modeling Process,” Mater. Today: Proc., 45(6), pp. 5462–5468.
42. van Kempen, G., van Ginkel, M., van Vliet, L., Luengo, C., and Rieger, B., 2021, “DIPimage,” https://diplib.org/DIPimage.html.
43. Pizer, S. M., Amburn, E. P., Austin, J. D., Cromartie, R., Geselowitz, A., Greer, T., Zimmerman, J. B., and Zuiderveld, K., 1987, “Adaptive Histogram Equalization and Its Variations,” Comput. Vis. Graph. Image Process., 39(3), pp. 355–368.
44. van der Walt, S., Schönberger, J. L., Nunez-Iglesias, J., Boulogne, F., Warner, J. D., Yager, N., Gouillart, E., and Yu, T., 2014, “scikit-image: Image Processing in Python,” PeerJ, 2, p. e453.
45. Bradley, A. P., 1997, “The Use of the Area Under the ROC Curve in the Evaluation of Machine Learning Algorithms,” Pattern Recogn., 30(7), pp. 1145–1159.
46. Rani, M. M. S., and Euphrasia, K. R., 2016, “Data Security Through QR Code Encryption and Steganography,” Adv. Comput.: Int. J., 7(1/2), pp. 1–7.
47. ElSayed, K. A., and Panchal, J. H., 2023, “Process Control-Based Embedding and Computer Vision-Based Retrieval of 2D Codes in Fused Deposition Modeling,” ASME 2023 International Design Engineering Technical Conferences & Computers and Information in Engineering Conference (IDETC/CIE 2023), No. 116880, Boston, MA, ASME.
48. He, K., Zhang, X., Ren, S., and Sun, J., 2016, “Deep Residual Learning for Image Recognition,” Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, June 27–30, pp. 770–778.
49. Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., and Berg, A. C., 2015, “ImageNet Large Scale Visual Recognition Challenge,” Int. J. Comput. Vis., 115, pp. 211–252.
50. Standard, N.-F., 2001, “Announcing the Advanced Encryption Standard (AES),” Federal Inf. Process. Standards Pub., 197(1–51), pp. 3–3.
51. McGrew, D. A., and Viega, J., 2004, “The Security and Performance of the Galois/Counter Mode (GCM) of Operation,” International Conference on Cryptology in India, Chennai, India, Dec. 20–22, Springer, pp. 343–355.