Abstract

This paper proposes a new parametric level set method for topology optimization based on deep neural networks (DNNs). In this method, fully connected DNNs are incorporated into conventional level set methods to construct an effective approach for structural topology optimization. The implicit level set function is described by a fully connected DNN. A DNN-based level set optimization method is proposed, in which the Hamilton–Jacobi partial differential equation (PDE) is transformed into parametrized ordinary differential equations (ODEs). The zero-level set of the implicit function is updated by updating the weights and biases of the network. A parametrized reinitialization is applied periodically to prevent the implicit function from becoming too steep or too flat in the vicinity of its zero-level set. The proposed method is implemented in the framework of minimum compliance, a well-known benchmark for topology optimization. In practice, designers desire multiple design options from which they can choose a better conceptual design based on their design experience. One of the major advantages of the DNN-based level set method is its ability to generate diverse and competitive designs with different network architectures. Several numerical examples are presented to verify the effectiveness of the proposed DNN-based level set method.

1 Introduction

Topology optimization has received great attention and development in recent years because it searches for the optimal material layout in a design domain using gradient-based algorithms. The first homogenization-based topology optimization method was published in 1988 [1], and research on topology optimization has experienced a boost in the past two decades [2]. The solid isotropic material with penalization (SIMP) method [3] is widely used in engineering because of its effectiveness and simplicity. In the SIMP method, an artificial density is applied to describe the material layout, and the optimal design is achieved through a gradient-based optimization algorithm. However, intermediate densities may exist in the optimal design, which blur the boundaries, so post-processing techniques are needed to remove the gray areas. In fact, it is hard for the standard density-based method to eliminate intermediate densities during optimization [4]. Building on the standard density-based method, several advanced schemes have been proposed in recent years to achieve feature control, robust design, and length scale control [5–17], which alleviate this phenomenon using various projection methods. Other robust formulations [18–21] have also been proposed in recent years to ensure manufacturability. In general, post-processing techniques are needed to obtain an optimal design with clear boundaries, although the performance of the post-processed design may degenerate compared with the original optimized design.

Compared with the SIMP method, the level set method is a “moving boundary” approach that evolves the boundaries of the design during optimization while keeping them distinct [22]. Osher and Sethian [23] originally proposed the level set method to track moving fluid fronts. Level set methods represent the design boundaries by the zero-level set of an implicit function, and shape sensitivity analysis is used to compute the velocity field, which is incorporated into the Hamilton–Jacobi partial differential equation (PDE) to evolve the level set function. Osher and Santosa [24] proposed a level set method for design optimization problems based on the projected gradient approach. This work was further developed by Allaire et al. [25] and Wang et al. [26]. Allaire et al. [25] proposed a numerical method combining the shape derivative with the level set method for front propagation, where weight and perimeter constraints are considered in the objective functions. Wang et al. [26] used a scalar function of a higher dimension to represent the structural boundary with the level set model, identifying a link between the velocity field and structural sensitivity analysis. Several parametrized level set methods have been proposed in recent years as alternatives to the conventional level set method. Wang and Wang [27] incorporated radial basis functions (RBFs) into the conventional level set method to construct a more efficient approach for topology optimization. Their RBF-level set optimization method transforms the Hamilton–Jacobi PDE into parametrized ordinary differential equations (ODEs), and the level set function is updated by updating the expansion coefficients. Wei and Wang [28] proposed a piecewise constant level set (PCLS) method to resolve the shape and topology optimization problem, where the boundary is described by PCLS functions. Jiang et al. [29] applied cardinal basis functions to parametrize the level set function with a unity collocation method, where a distance regularization energy functional is introduced to maintain the desired signed distance property during optimization. Recently, Guo et al. [30] and Zhang et al. [31] proposed a new computational framework named moving morphable components (MMCs), which embeds the MMCs into the level set scheme. This scheme incorporates geometric and mechanical information into topology optimization explicitly, so the structural complexity can be easily controlled. A related method, moving morphable voids, was proposed by Zhang et al. [32,33], which introduces a set of geometry parameters to describe the boundary of the structure explicitly. Several other advanced parametrized level set methods have been proposed in recent years, as described in Ref. [34]. Luo et al. [35] proposed an efficient non-gradient approach for topology optimization without any sensitivity information. In this method, a material-field series expansion is applied to parametrize the geometry, which achieves a considerable reduction in design variables, and a kriging-based optimization algorithm is implemented to resolve the optimization problem based on a surrogate model.

Machine learning [36] has experienced a huge increase in research interest in the past decade, since it is a powerful tool for constructing relationships between input and output sampling data. With the dramatic growth of available data and the development of new methods, machine learning has revolutionized our understanding of the physical world in areas such as image recognition [37] and drug discovery [38]. Several successful applications to physical problems have appeared in recent years. Raissi et al. [39] proposed physics-informed neural networks (PINNs) for solving PDEs. In this method, PINNs are trained to solve supervised learning tasks while being constrained by given laws of physics. Based on this concept, several advanced machine learning-based methods [40–43] have been proposed recently for forward and inverse PDE-based problems. Recent years have also witnessed several studies applying machine learning methods to topology optimization problems. Yu et al. [44] proposed a novel deep learning-based method to predict the optimal design for given boundary conditions without any iterative scheme. Lei et al. [45] proposed a method to achieve real-time structural topology optimization based on machine learning. Their method combines the MMC-based explicit framework with support vector regression to establish a mapping between design parameters and optimal designs. Oh et al. [46] proposed a framework integrating topology optimization and generative models (generative adversarial networks) in an iterative way to generate new designs. As described in Ref. [47], deep neural networks (DNNs) can represent a shape’s boundary by the zero-level set of a learned function; such a model can represent an entire class of shapes, reducing the model size by an order of magnitude compared with existing works. Within the computational framework of the parametrized level set method, a deep learning-based parametrized level set method is proposed in this paper for topology optimization. The core of the current work is to incorporate DNNs into level-set-based topology optimization. The implicit function is described by deep feedforward neural networks (NNs), so sufficient smoothness and continuity of the implicit function can be guaranteed.

At present, most topology optimization algorithms aim at finding an optimal material layout that minimizes or maximizes the objective function. In practice, designers desire multiple solutions that are diverse and competitive enough that they can make a choice based on their experience, from an aesthetic perspective or other functional requirements. Wang et al. [48] achieved diverse and competitive designs by incorporating graphic diversity constraints within the framework of the SIMP method. Based on different penalty methods, Yang et al. [49] presented five simple and effective strategies for achieving multiple solutions, which were demonstrated to provide the designer with structurally efficient and topologically different solutions. Recently, He et al. [50] proposed three stochastic approaches to generate diverse and competitive designs in the framework of bi-directional evolutionary structural optimization, where a series of random designs with distinctly different topologies is produced. Oh et al. [46] proposed an artificial intelligence-based deep generative design framework that is able to generate numerous design options. For the level set method, little literature exists in this field. Here, we propose a DNN-based level set method to effectively generate diverse and competitive designs with high structural performance.

The paper is organized as follows. In Sec. 2, the implicit modeling based on DNN is presented. Section 3 describes the DNN level set topology optimization formulation in detail. In Sec. 4, numerical examples are shown to illustrate the effectiveness of the proposed parameterized level set method, followed by conclusions in Sec. 5.

2 Deep Neural Network Implicit Modeling

To reconstruct the design domain with a single continuous and differentiable function, an implicit modeling method based on DNNs is presented here. Feedforward networks [51], with one or more layers between the input and output layers, are mainly used for function approximation. Typical DNN architectures are illustrated in Fig. 1, each containing an input layer, hidden layers, and an output layer. The mathematical formulation of a deep feedforward NN can be defined as
(1) Φ(x) = ℕ(x; θ)
where ℕ denotes the feedforward network and θ is the parameter vector of the network. With the hidden layers defined as h(l)(x), a network with L hidden layers can be expressed as
(2) ℕ(x; θ) = (a(L+1) ∘ h(L) ∘ a(L) ∘ ⋯ ∘ h(1) ∘ a(1))(x), a(l)(x) = W(l)x + b(l)
where a(l)(x) is a linear operation, W(l) is the weight matrix, and b(l) is the bias vector of the lth layer. The weight matrices W(l) (l = 1, 2, …, L) and biases b(l) (l = 1, 2, …, L) are collected into a single parameter vector θ. h(l) (l = 1, 2, …, L) are the hidden layer activation functions (kernel functions).
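For concreteness, the forward pass of Eqs. (1) and (2) can be sketched in a few lines of Python/NumPy. This is a minimal illustration assuming tanh activations and the eight-neuron layers used later in this paper; the function names are ours, not the authors' implementation.

```python
import numpy as np

def init_params(layer_sizes, rng=np.random.default_rng(0)):
    """Zero-mean random weights W(l) and biases b(l) for each layer."""
    return [(rng.normal(0.0, 0.1, (m, n)), rng.normal(0.0, 0.1, n))
            for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]

def network(params, x):
    """N(x; theta): affine map a(l) followed by tanh on each hidden layer,
    with a purely linear output layer."""
    h = x
    for W, b in params[:-1]:
        h = np.tanh(h @ W + b)        # h(l) = tanh(W(l) h(l-1) + b(l))
    W, b = params[-1]
    return h @ W + b

# Two inputs (x, y), two hidden layers of eight neurons, one scalar output
params = init_params([2, 8, 8, 1])
value = network(params, np.array([[0.5, 0.5]]))  # implicit function at a point
```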
Fig. 1: Architecture of DNNs: (a) one hidden layer, (b) two hidden layers, and (c) three hidden layers
In fact, a DNN is a universal approximator for nonlinear functions: it has been proven that a three-layer feedforward NN can approximate any continuous multivariate function to any accuracy [52]. DNNs are also very effective tools for function approximation in high-dimensional spaces. Besides that, DNN models are analytically differentiable, and graph-based automatic differentiation [53] can be applied to easily obtain gradient information. Compared with the conventional discrete level set method, another extraordinary merit of DNNs is model reduction. As described in the literature, DeepSDF [47], which uses DNNs to learn a signed distance function whose zero-level set represents a complex geometric shape, demonstrates extraordinary model reduction ability while maintaining high shape fidelity. The initial weights of the network are critical for the convergence of training. In practice, it is common to initialize all the weights and biases with random zero-mean values [54]. The initialization of an implicit level set function represented by a DNN can be formulated as follows:
(3) θ* = argmin over θ of ‖ℕ(x, y; θ) − Φ(x, y)‖₂
where ℕ is the feedforward NN and Φ is the target implicit function. The operator ‖·‖₂ denotes the two-norm, and (x, y) denotes the coordinates of a point. The backpropagation learning algorithm [54] is applied here to train the NNs, and the activation function is chosen as the hyperbolic tangent function. Each layer of the network contains eight neurons. The target implicit function is plotted in Fig. 2. The training results for a plate with five circular holes, using three different architectures, are presented in Fig. 3. Note that for the network with one hidden layer, the trained shape does not achieve high fidelity, as shown in Fig. 3(a), while a better training result is obtained in Fig. 3(c) for the network with three hidden layers.
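The fitting step of Eq. (3) can be sketched as plain gradient descent on the squared error between the network output and a target implicit function. The sketch below uses a single-hidden-layer tanh network and a hypothetical single-hole target field standing in for the plate geometry of Fig. 2; it is not the authors' training code.

```python
import numpy as np

def target_phi(x, y):
    """Hypothetical target field: one circular hole of radius 0.25."""
    return np.sqrt((x - 0.5) ** 2 + (y - 0.5) ** 2) - 0.25

# Sample points covering the design domain
xs, ys = np.meshgrid(np.linspace(0, 1, 50), np.linspace(0, 1, 50))
X = np.column_stack([xs.ravel(), ys.ravel()])
t = target_phi(X[:, 0], X[:, 1])[:, None]

rng = np.random.default_rng(0)
W1, b1 = rng.normal(0, 0.5, (2, 8)), np.zeros(8)   # hidden layer (tanh)
W2, b2 = rng.normal(0, 0.5, (8, 1)), np.zeros(1)   # linear output layer

lr, n = 0.05, len(X)
for _ in range(5000):
    H = np.tanh(X @ W1 + b1)            # forward pass
    err = (H @ W2 + b2) - t             # residual N(x, y; theta) - Phi(x, y)
    gW2, gb2 = H.T @ err / n, err.mean(0)
    dH = (err @ W2.T) * (1.0 - H ** 2)  # backpropagate through tanh
    gW1, gb1 = X.T @ dH / n, dH.mean(0)
    W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2
```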
Fig. 2: Target implicit function
Fig. 3: Training results: (a) one hidden layer, (b) two hidden layers, and (c) three hidden layers

3 Deep Neural Network Level Set Method for Structural Topology Optimization

3.1 Conventional Level Set-Based Topology Optimization.

The conventional level set method uses a zero contour (2D) or zero isosurface (3D) to represent the boundaries of a geometry; it was introduced by Osher and Sethian [23] to simulate the motion of dynamic interfaces. The interface is described by the zero-level set of an implicit function Φ(x), which is Lipschitz continuous in the design domain. In this paper, the level set function Φ(x) is defined as

(4) Φ(x, t) > 0 for x ∈ Ω\∂Ω (solid); Φ(x, t) = 0 for x ∈ ∂Ω ∩ D (boundary); Φ(x, t) < 0 for x ∈ D\Ω (void)
where D is the design domain, Ω represents all admissible shapes, ∂Ω denotes the boundary of the shape, and t is the pseudo time [25] of the shape’s dynamic evolution. The Hamilton–Jacobi PDE can be obtained by differentiating the zero-level set with respect to the pseudo time t as follows:
(5) ∂Φ(x, t)/∂t − Vn|∇Φ(x, t)| = 0
where Vn is the normal velocity computed through sensitivity analysis. The shape of the zero-level set evolves along the gradient direction by solving the above Hamilton–Jacobi equation. This equation can be solved using upwind schemes, where a reinitialization procedure is needed as an auxiliary step to prevent the implicit function from becoming too flat or too steep in the vicinity of its zero-level set. In this paper, the objective is chosen as minimizing the structural compliance J(Φ), which can be formulated as follows:
(6) minimize J(Φ) = ∫D ɛ(u):C:ɛ(u) H(Φ) dΩ, subject to the volume constraint ∫D H(Φ) dΩ ≤ V̄
(7) a(u, v, Φ) = l(v, Φ), for all v
where the notation in the above equations is
(8) a(u, v, Φ) = ∫D ɛ(u):C:ɛ(v) H(Φ) dΩ
(9) l(v, Φ) = ∫D b·v H(Φ) dΩ + ∫Γτ τ·v dΓ
where u is the displacement and ɛ(u) is the strain. H(·) denotes the Heaviside step function, and C is the elastic tensor. Γu and Γτ denote the displacement and traction boundaries, respectively. Note that the test function v should satisfy the prescribed displacement on the Dirichlet boundary. τ and b denote the traction on the boundary and the body force in the domain, respectively. a(u, v, Φ) is the energy bilinear form and l(v, Φ) is the load linear form, where v is a virtual displacement field. The operator (:) denotes tensor contraction. The Heaviside step function equals zero in the void area and one in the solid area. In practice, the Heaviside step function is approximated by a smooth function to ensure differentiability in the transition area. The smoothed Heaviside function can be formulated as follows:
(10) H(Φ) = δ if Φ < −Δ; H(Φ) = (3(1 − δ)/4)(Φ/Δ − Φ³/(3Δ³)) + (1 + δ)/2 if −Δ ≤ Φ ≤ Δ; H(Φ) = 1 if Φ > Δ

Here, δ is a small positive value, and Δ denotes half of the transition width. Detailed mathematical properties of the smoothed Heaviside function can be found in Ref. [26].
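A direct transcription of this smoothed Heaviside is sketched below; the piecewise-cubic transition follows the form given in Ref. [26], with δ and Δ as free parameters.

```python
import numpy as np

def smoothed_heaviside(phi, delta=1e-3, width=1.0):
    """Smoothed Heaviside H(phi): delta in the void (phi < -width),
    1 in the solid (phi > width), cubic blend in the transition band."""
    phi = np.asarray(phi, dtype=float)
    h = np.where(phi > width, 1.0, delta)
    band = np.abs(phi) <= width
    p = phi[band] / width
    h[band] = 0.75 * (1 - delta) * (p - p ** 3 / 3) + 0.5 * (1 + delta)
    return h
```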

3.2 Deep Neural Network Level Set Optimization Method.

A DNN level set method is proposed to convert the Hamilton–Jacobi PDE into a system of ODEs in the design domain for topology optimization. In the conventional level set method, the implicit function Φ(x) is updated by solving the Hamilton–Jacobi equation to obtain the optimal topology. In this paper, DNN implicit modeling is applied to represent the implicit function Φ(x), so that the evolution of Φ(x) is equivalent to updating the parameters of the network. As mentioned in Sec. 3.1, the implicit function represented by the time-dependent NN can be expressed as
(11) Φ(x, y, t) = ℕ(x, y, θ(t))
Substituting Eq. (11) into the Hamilton–Jacobi equation, the parametrized ODE can be written as
(12) (∂ℕ(x, y, θ)/∂θ)(dθ/dt) − Vn|∇ℕ(x, y, θ)| = 0
(13) M(dθ/dt) = Vn|∇ℕ(x, y, θ)|, with M = ∂ℕ(x, y, θ)/∂θ
The Moore–Penrose inverse M⁺ of the matrix ∂ℕ(x, y, θ)/∂θ is applied to obtain the least-squares solution of the above system. Note that the term ∂ℕ(x, y, θ)/∂θ is computed through the automatic differentiation tool CasADi [55].
(14) dθ/dt = M⁺Vn|∇ℕ(x, y, θ)|
where M⁺ can be expressed as
(15) M⁺ = (MᵀM)⁻¹Mᵀ

In Eq. (14), the DNN coefficients are time-dependent, and the initial value of the DNN parameters is obtained through backpropagation (BP) training. Therefore, a PDE problem is transformed into an ODE problem. It is worth mentioning that the derivative information of the DNN with respect to its parameters or inputs is readily obtained through graph-based automatic differentiation. To obtain high accuracy and stable solutions, Eq. (14) is solved by the Runge–Kutta–Fehlberg (RKF45) method [56], which is recommended in Ref. [57]. The RKF45 method is able to determine whether a proper step size is being used: two different approximations of the solution are made and compared, and if they do not agree to a specified accuracy, the step size is reduced. More details of this method can be found in Ref. [56]. In general, the time-step size should be sufficiently small to achieve numerical stability, owing to the Courant–Friedrichs–Lewy stability condition [58].
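To make the update concrete, a schematic single step of Eq. (14) is sketched below. For brevity it uses a forward Euler step, NumPy's pinv, and finite-difference derivatives in place of the adaptive RKF45 integrator and the exact CasADi derivatives used in the paper; `net` is any callable evaluating ℕ at a set of points and is our own placeholder.

```python
import numpy as np

def theta_rate(theta, pts, Vn, net, eps=1e-6):
    """dtheta/dt = M+ (Vn |grad N|), following Eqs. (13)-(15)."""
    base = net(theta, pts)
    # Jacobian M = dN/dtheta by finite differences (the paper computes
    # this exactly with automatic differentiation in CasADi).
    M = np.empty((len(pts), len(theta)))
    for j in range(len(theta)):
        tp = theta.copy()
        tp[j] += eps
        M[:, j] = (net(tp, pts) - base) / eps
    # Spatial gradient magnitude |grad N| at the collocation points
    gx = (net(theta, pts + np.array([eps, 0.0])) - base) / eps
    gy = (net(theta, pts + np.array([0.0, eps])) - base) / eps
    grad_norm = np.sqrt(gx ** 2 + gy ** 2)
    return np.linalg.pinv(M) @ (Vn * grad_norm)   # least-squares solution

# One explicit pseudo-time step (the paper uses adaptive RKF45 instead):
# theta = theta + dt * theta_rate(theta, pts, Vn, net)
```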

Based on the shape derivative, the normal velocity Vn along the free moving boundary for minimum compliance can be expressed as follows:
(16) Vn = ɛ(u):C:ɛ(u) − λ
where λ is the Lagrange multiplier enforcing the volume fraction constraint. The augmented Lagrangian updating scheme [59] is applied to update λ in this paper.
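A sketch of this velocity field and a simple multiplier update is given below; the pointwise strain energy density would come from the finite element solution, and the update constants are illustrative rather than the authors' settings (see Ref. [59]).

```python
import numpy as np

def normal_velocity(strain_energy_density, lam):
    """Eq. (16): Vn = eps(u):C:eps(u) - lambda at each collocation point."""
    return strain_energy_density - lam

def update_multiplier(lam, penalty, volume, volume_target):
    """Augmented Lagrangian style update: raise lambda when the volume
    constraint is violated, then tighten the penalty factor."""
    lam = lam + (volume - volume_target) / penalty
    penalty = max(0.7 * penalty, 1e-3)   # illustrative tightening schedule
    return lam, penalty
```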

3.3 Parametrized Reinitialization Scheme.

In the standard level set method, irregularities may occur during evolution, which inevitably lead to instability of the level set evolution. To overcome this difficulty, a reinitialization scheme was introduced to regularize the level set function (LSF) and maintain the stability of the boundary evolution. In general, reinitialization is implemented by periodically reshaping the LSF into a signed distance function [60]. A standard method for reinitialization is to obtain the steady-state solution of the following evolution equation:
(17) ∂Φ/∂t = sign(Φinitial)(1 − |∇Φ|)
where Φinitial is the level set function to be reinitialized, and sign(·) denotes the sign function. Substituting Eq. (11) into the reinitialization equation (Eq. (17)), the following equation can be obtained:
(18) dθ/dt = M⁺sign(ℕinitial)(1 − |∇ℕ(x, y, θ)|)
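On a grid, the right-hand side of the reinitialization equation (Eq. (17)) can be sketched as below; central differences are used for brevity, whereas a careful implementation would use an upwind scheme.

```python
import numpy as np

def reinit_rhs(phi, phi0, h=1.0):
    """RHS of Eq. (17): sign(phi_initial) * (1 - |grad phi|)."""
    gx, gy = np.gradient(phi, h)
    return np.sign(phi0) * (1.0 - np.sqrt(gx ** 2 + gy ** 2))

# Pseudo-time marching toward the signed distance property |grad phi| = 1:
# for _ in range(n_steps): phi = phi + dt * reinit_rhs(phi, phi0)
```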

The reinitialization procedure in Eq. (18) usually slightly moves the zero-level set contour, which may cause inconsistencies during the optimization process. In general, the reinitialization procedure needs to be implemented periodically to maintain the signed distance function [13,61]. Hartmann et al. [62] proposed a constrained reinitialization scheme, where a least-squares solution is obtained to keep the location of the zero-level contour. Jiang and Chen [34] proposed a new level set scheme using a double-well potential function to achieve distance regularization inside the topology optimization loop. In this paper, the regular reinitialization scheme is applied periodically to avoid the implicit function becoming too flat or too steep. More details of the implementation can be found in Ref. [25]. Equation (18) can be solved using the Runge–Kutta–Fehlberg (RKF45) method [58]. The proposed DNN-based level set algorithm is described as follows:

  1. Initialization of the parametrized level set function Φ(θ), corresponding to an initial guess Φ0. The initial weights and biases are determined through the backpropagation algorithm (Eq. (3)).

  2. Iterate until convergence, for k ≥ 0:

    • Based on the implicit function Φk, compute the material distribution in the design domain through the Heaviside function (Eq. (10)). Solve the equilibrium equation (Eq. (7)) using the finite element method to obtain the displacement field.

    • Compute the objective J(Φ) and the normal velocity Vn. Update the weights and biases by solving the parametrized Hamilton–Jacobi equation (Eq. (13)) using the RKF45 scheme.

    • For stability reasons, reinitialize the level set function Φ by solving Eq. (18) in every iteration.
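These steps condense into the schematic driver loop below; every helper named here (train_initial_network, fe_solve, and so on) is a hypothetical placeholder for the corresponding step above, not the authors' code.

```python
# Schematic sketch of the DNN-based level set loop; all helpers are
# hypothetical placeholders for the steps listed above.
theta = train_initial_network(phi0)             # step 1: BP fit, Eq. (3)
for k in range(max_iters):
    phi = evaluate_network(theta, grid_points)  # current implicit function
    density = smoothed_heaviside(phi)           # material layout, Eq. (10)
    u = fe_solve(density, loads, supports)      # equilibrium, Eq. (7)
    J, Vn = objective_and_velocity(u, lam)      # compliance and Eq. (16)
    theta = rkf45_step(theta, Vn)               # parametrized H-J, Eq. (14)
    theta = reinitialize(theta)                 # stability, Eq. (18)
    if converged(J):
        break
```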

The flowchart of the algorithm is shown in Fig. 4.

Fig. 4: Flowchart of the level set method based on DNN

4 Numerical Examples

In this section, several two-dimensional numerical examples are presented to verify the effectiveness of the proposed DNN level set method. Unless stated otherwise, the following parameters are used: elastic modulus E = 1 for solid material and E = 1 × 10−6 for void material. Poisson’s ratio is ν = 0.3, and the volume fraction constraint is set to 0.4. The DNNs are initialized using the backpropagation training algorithm, and the activation function is chosen as the hyperbolic tangent function [63]; the detailed initialization procedure is described in Sec. 2. For all numerical examples, the design domain is discretized by a rectilinear mesh with a grid size equal to 1. The finite element analysis is based on the “ersatz material” approach, a well-known method for level set topology optimization [25].

4.1 Messerschmitt–Bölkow–Blohm Beam.

The Messerschmitt–Bölkow–Blohm (MBB) beam is investigated in this example for the minimum compliance problem. The boundary conditions are plotted in Fig. 5, where a concentrated force P = 1 is applied at the middle of the top edge. The architecture of the network is presented in Fig. 6. Note that a fixed support is located at the bottom-left corner, while a roller is at the bottom-right corner. The design domain is meshed with 200 × 100 elements with a grid size equal to 1. The fixed Lagrange multiplier is chosen as λ = 5, and the time-step is chosen as τ = 3 × 10−3. For the first case, the implicit function is represented by the NN with one hidden layer containing eight neurons (Fig. 6); the total number of design variables is 33. To generate a symmetric design, only half of the design domain is used for optimization, with a symmetric boundary condition imposed. The final optimized design is shown in Fig. 8(a), and the initial design (training result) is plotted in Fig. 7(a). Note that the NN with one hidden layer is a shallow network, so the hole in the training result is not a perfect circle due to its limited fitting ability. A stable topology optimized solution is achieved by solving the ODEs in Eq. (14) after 150 iterations (Fig. 10(a)).

Fig. 5: Compliance design of an MBB beam
Fig. 6: Architecture of networks
Fig. 7: Initial designs of the MBB beam: (a) hidden layers: 8, (b) hidden layers: 8 × 8, and (c) hidden layers: 8 × 8 × 8
Fig. 8: Optimized designs of the MBB beam: (a) comp: 33.39 (8), (b) comp: 32.57 (8 × 8), and (c) comp: 36.02 (8 × 8 × 8)

For comparison, NNs with two hidden layers and three hidden layers are chosen to represent the implicit level set function. The optimization parameter settings are the same as in the previous case. The architectures of the networks are shown in Figs. 1(b) and 1(c). The inputs are the point coordinates (x, y) in the design domain, and the output is the value of the implicit function at that point. The initial training results are presented in Figs. 7(b) and 7(c), and the optimized designs are displayed in Figs. 8(b) and 8(c). The total numbers of design variables are 105 (two hidden layers) and 177 (three hidden layers). The implicit function of the optimized design is shown in Fig. 9. The compliance values of the two optimized designs are 32.57 and 39.42, respectively. Because the optimization results only converge to local minima, there is no guarantee that more design variables will result in better solutions. The convergence history is presented in Fig. 10. Since the level set function is updated via the parameters of the NNs, new holes can be generated freely from a mathematical point of view. This salient feature is verified in Fig. 8, where new small holes are generated during the optimization process. A benchmark design generated by the SIMP method using the top88 code provided in Ref. [64] is presented in Fig. 11; the filter radius for the standard SIMP method is chosen as r = 2. The compliance of the optimal design produced by the SIMP method is 33.48. The structural compliance values of the designs optimized by the DNN-based level set method (networks with one and two hidden layers) are quite close to the benchmark solution.

Fig. 9: Implicit function of optimized design
Fig. 10: Convergence history: (a) hidden layers: 8, (b) hidden layers: 8 × 8, and (c) hidden layers: 8 × 8 × 8
Fig. 11: Benchmark design (compliance: 33.48)

To further verify the effectiveness of the proposed DNN-based level set method for generating diverse and competitive designs, different network architectures are used to produce the diverse designs shown in Fig. 12. It is worth mentioning that different networks are capable of generating distinctly different solutions. For the single-layer network with five neurons, the optimized result is simple and shows no intricate geometric details (Fig. 12(a)). For a network with two layers (15 × 15), the optimized design is more complex, with several truss-like supporting components inside (Fig. 12(f)). For a given NN architecture, the implicit functions representable by the NN form a subspace of all possible designs; NNs with different architectures therefore describe different subspaces of solutions, which explains the diversity of the results.

Fig. 12: Diverse and competitive designs generated by the DNN-based level set method: (a) comp: 31.52 (5), (b) comp: 42.62 (10), (c) comp: 38.45 (15), (d) comp: 35.07 (5 × 5), (e) comp: 40.23 (10 × 10), (f) comp: 43.01 (15 × 15), (g) comp: 32.41 (5 × 5 × 5), (h) comp: 41.78 (10 × 10 × 10), and (i) comp: 44.32 (15 × 15 × 15)

4.2 Short Cantilever Beam.

The minimum compliance design of a short cantilever beam is presented in Fig. 13. The design domain is a square with a fixed boundary condition on the left side, and a vertical concentrated force F = 1 is applied at the midpoint of the right side. The design domain is meshed with 100 × 100 elements with a grid size equal to 1. The optimization formulation is described in Eqs. (6) and (7). A fixed Lagrange multiplier λ = 3 is applied for the volume constraint, and the time-step is chosen as τ = 3 × 10−3. For the first case, an NN with one hidden layer of eight neurons is chosen to represent the implicit function. Because of the limited fitting ability of the shallow NN, the initialized shape has some artifacts near the boundary; the training result is presented in Fig. 14(a). The final optimized design is displayed in Fig. 15(a), the implicit function of the optimized design is shown in Fig. 16(a), and the optimization process converges after 120 iterations (Fig. 17(a)). For comparison with the results generated by the shallow NN, networks with two and three hidden layers are examined here. The architectures are shown in Figs. 1(b) and 1(c), where each layer contains eight neurons; the total numbers of design variables are 105 (two hidden layers) and 177 (three hidden layers). The other optimization settings are the same as before. The optimized layouts and implicit functions are shown in Figs. 15(b), 15(c), 16(b), and 16(c). The optimized designs using DNNs with two or three hidden layers clearly have more intricate geometric features than the result obtained by the shallow NN. The convergence histories for the two designs are shown in Figs. 17(b) and 17(c), where a stable topology optimized design is achieved after 140 iterations. A benchmark design obtained with the SIMP method is shown in Fig. 18. It is worth mentioning that the compliance values of the designs produced by the NNs with two or three hidden layers are slightly lower than that of the benchmark design (around 3.5% difference).

Fig. 13: Compliance design of a short cantilever beam
Fig. 14: Initial design of a short cantilever beam: (a) hidden layers: 8, (b) hidden layers: 8 × 8, and (c) hidden layers: 8 × 8 × 8
Fig. 15: Optimized design of a short cantilever beam: (a) hidden layers: 8 (181,946), (b) hidden layers: 8 × 8 (176,893), and (c) hidden layers: 8 × 8 × 8 (177,848)
Fig. 16: Implicit function of optimized design: (a) hidden layers: 8, (b) hidden layers: 8 × 8, and (c) hidden layers: 8 × 8 × 8
Fig. 17: Convergence history: (a) hidden layers: 8, (b) hidden layers: 8 × 8, and (c) hidden layers: 8 × 8 × 8
Fig. 18: Benchmark design (compliance: 183,543)

To further generate multiple alternatives, nine different NN architectures are examined; the resulting solutions are shown in Fig. 19. Although these designs have distinctly different topologies, their compliance values are close to the benchmark (difference less than 5%). Unsymmetrical designs can also be easily obtained, as plotted in Figs. 19(b)–19(i).

Fig. 19: Diverse and competitive designs generated by the DNN-based level set method: (a) comp: 174,319 (5), (b) comp: 176,569 (10), (c) comp: 175,819 (15), (d) comp: 197,768 (5 × 5), (e) comp: 174,970 (10 × 10), (f) comp: 181,101 (15 × 15), (g) comp: 190,953 (5 × 5 × 5), (h) comp: 176,683 (10 × 10 × 10), and (i) comp: 178,011 (15 × 15 × 15)

5 Conclusion

In this paper, a DNN level set method is proposed for topology optimization. DNNs are popular function approximators, and here the implicit function is represented by a deep feedforward NN with the hyperbolic tangent activation function. Based on DNNs, a high level of smoothness in the gradient and curvature of the implicit function can be achieved. The Hamilton–Jacobi PDE is transformed into a parametrized ODE, and the implicit function is updated through updating the weights and biases of the network.

The major contribution of the proposed method is applying a DNN as a function approximator to describe the implicit function of the level set method. Different DNN architectures are capable of generating diverse and competitive designs with high structural performance, so the DNN-based level set method can provide designers with multiple conceptual alternatives instead of a single optimum that minimizes or maximizes the objective. The limitation of the present work is that the mathematical connection between network architecture and structural complexity or performance cannot be quantified explicitly. At present, applying mathematical tools to quantify this relation is extremely difficult, and further effort will be devoted to this issue in the future. Compared with the diverse competitive design methodology based on diversity constraints proposed by Wang et al. [48], quantifying the relationship between diversity and NN architectures remains a future research direction. The proposed DNN-based method provides an alternative way to produce diverse solutions within the framework of the level set method. In addition, using deep learning to represent the implicit function opens an opportunity toward a marriage of machine learning and topology optimization.

Conflict of Interest

There are no conflicts of interest.

Data Availability Statement

The datasets generated and supporting the findings of this article are available from the corresponding author upon reasonable request. The authors attest that all data for this study are included in the paper.

References

1. Bendsoe, M. P., and Sigmund, O., 2013, Topology Optimization: Theory, Methods, and Applications, Springer Science & Business Media.
2. Sigmund, O., and Maute, K., 2013, "Topology Optimization Approaches," Struct. Multidiscipl. Optim., 48(6), pp. 1031–1055. 10.1007/s00158-013-0978-6
3. Bendsøe, M. P., and Sigmund, O., 1999, "Material Interpolation Schemes in Topology Optimization," Arch. Appl. Mech., 69(9–10), pp. 635–654. 10.1007/s004190050248
4. van Dijk, N. P., Maute, K., Langelaar, M., and Van Keulen, F., 2013, "Level-Set Methods for Structural Topology Optimization: A Review," Struct. Multidiscipl. Optim., 48(3), pp. 437–472. 10.1007/s00158-013-0912-y
5. Wang, F., Lazarov, B. S., and Sigmund, O., 2011, "On Projection Methods, Convergence and Robust Formulations in Topology Optimization," Struct. Multidiscipl. Optim., 43(6), pp. 767–784. 10.1007/s00158-010-0602-y
6. Norato, J., Bell, B., and Tortorelli, D. A., 2015, "A Geometry Projection Method for Continuum-Based Topology Optimization With Discrete Elements," Comput. Methods Appl. Mech. Eng., 293, pp. 306–327. 10.1016/j.cma.2015.05.005
7. Watts, S., and Tortorelli, D. A., 2017, "A Geometric Projection Method for Designing Three Dimensional Open Lattices With Inverse Homogenization," Int. J. Numer. Methods Eng., 112(11), pp. 1564–1588. 10.1002/nme.5569
8. Lazarov, B. S., and Wang, F., 2017, "Maximum Length Scale in Density Based Topology Optimization," Comput. Methods Appl. Mech. Eng., 318, pp. 826–844. 10.1016/j.cma.2017.02.018
9. Zhou, M., Lazarov, B. S., Wang, F., and Sigmund, O., 2015, "Minimum Length Scale in Topology Optimization by Geometric Constraints," Comput. Methods Appl. Mech. Eng., 293, pp. 266–282. 10.1016/j.cma.2015.05.003
10. Lazarov, B. S., Wang, F., and Sigmund, O., 2016, "Length Scale and Manufacturability in Density-Based Topology Optimization," Arch. Appl. Mech., 86(1–2), pp. 189–218. 10.1007/s00419-015-1106-4
11. Lazarov, B. S., Schevenels, M., and Sigmund, O., 2011, "Robust Design of Large-Displacement Compliant Mechanisms," Mech. Sci., 2(2), pp. 175–182. 10.5194/ms-2-175-2011
12. Guest, J. K., 2009, "Topology Optimization With Multiple Phase Projection," Comput. Methods Appl. Mech. Eng., 199(1–4), pp. 123–135. 10.1016/j.cma.2009.09.023
13. Guest, J. K., 2009, "Imposing Maximum Length Scale in Topology Optimization," Struct. Multidiscipl. Optim., 37(5), pp. 463–473. 10.1007/s00158-008-0250-7
14. Guest, J. K., Prévost, J. H., and Belytschko, T., 2004, "Achieving Minimum Length Scale in Topology Optimization Using Nodal Design Variables and Projection Functions," Int. J. Numer. Methods Eng., 61(2), pp. 238–254. 10.1002/nme.1064
15. Asadpoure, A., Tootkaboni, M., and Guest, J. K., 2011, "Robust Topology Optimization of Structures With Uncertainties in Stiffness-Application to Truss Structures," Comput. Struct., 89(11–12), pp. 1131–1141. 10.1016/j.compstruc.2010.11.004
16. Guest, J. K., and Smith Genut, L. C., 2010, "Reducing Dimensionality in Topology Optimization Using Adaptive Design Variable Fields," Int. J. Numer. Methods Eng., 81(8), pp. 1019–1045. 10.1002/nme.2724
17. Carstensen, J. V., and Guest, J. K., 2018, "Projection-Based Two-Phase Minimum and Maximum Length Scale Control in Topology Optimization," Struct. Multidiscipl. Optim., 58(5), pp. 1845–1860. 10.1007/s00158-018-2066-4
18. Schevenels, M., Lazarov, B. S., and Sigmund, O., 2011, "Robust Topology Optimization Accounting for Spatially Varying Manufacturing Errors," Comput. Methods Appl. Mech. Eng., 200(49–52), pp. 3613–3627. 10.1016/j.cma.2011.08.006
19. Sigmund, O., 2009, "Manufacturing Tolerant Topology Optimization," Acta Mech. Sin., 25(2), pp. 227–239. 10.1007/s10409-009-0240-z
20. Lazarov, B. S., Schevenels, M., and Sigmund, O., 2012, "Topology Optimization With Geometric Uncertainties by Perturbation Techniques," Int. J. Numer. Methods Eng., 90(11), pp. 1321–1336. 10.1002/nme.3361
21. Sigmund, O., 2007, "Morphology-Based Black and White Filters for Topology Optimization," Struct. Multidiscipl. Optim., 33(4–5), pp. 401–424. 10.1007/s00158-006-0087-x
22. Sethian, J. A., 1996, "Theory, Algorithms, and Applications of Level Set Methods for Propagating Interfaces," Acta Numer., 5, pp. 309–395. 10.1017/S0962492900002671
23. Osher, S., and Sethian, J. A., 1988, "Fronts Propagating With Curvature-Dependent Speed: Algorithms Based on Hamilton–Jacobi Formulations," J. Comput. Phys., 79(1), pp. 12–49. 10.1016/0021-9991(88)90002-2
24. Osher, S. J., and Santosa, F., 2001, "Level Set Methods for Optimization Problems Involving Geometry and Constraints: I. Frequencies of a Two-Density Inhomogeneous Drum," J. Comput. Phys., 171(1), pp. 272–288. 10.1006/jcph.2001.6789
25. Allaire, G., Jouve, F., and Toader, A.-M., 2004, "Structural Optimization Using Sensitivity Analysis and a Level-Set Method," J. Comput. Phys., 194(1), pp. 363–393. 10.1016/j.jcp.2003.09.032
26. Wang, M. Y., Wang, X., and Guo, D., 2003, "A Level Set Method for Structural Topology Optimization," Comput. Methods Appl. Mech. Eng., 192(1–2), pp. 227–246. 10.1016/S0045-7825(02)00559-5
27. Wang, S., and Wang, M. Y., 2006, "Radial Basis Functions and Level Set Method for Structural Topology Optimization," Int. J. Numer. Methods Eng., 65(12), pp. 2060–2090. 10.1002/nme.1536
28. Wei, P., and Wang, M. Y., 2009, "Piecewise Constant Level Set Method for Structural Topology Optimization," Int. J. Numer. Methods Eng., 78(4), pp. 379–402. 10.1002/nme.2478
29. Jiang, L., Chen, S., and Jiao, X., 2018, "Parametric Shape and Topology Optimization: A New Level Set Approach Based on Cardinal Basis Functions," Int. J. Numer. Methods Eng., 114(1), pp. 66–87. 10.1002/nme.5733
30. Guo, X., Zhang, W., and Zhong, W., 2014, "Doing Topology Optimization Explicitly and Geometrically—A New Moving Morphable Components Based Framework," ASME J. Appl. Mech., 81(8), p. 081009. 10.1115/1.4027609
31. Zhang, W., Zhou, J., Zhu, Y., and Guo, X., 2017, "Structural Complexity Control in Topology Optimization Via Moving Morphable Component (MMC) Approach," Struct. Multidiscipl. Optim., 56(3), pp. 535–552. 10.1007/s00158-017-1736-y
32. Zhang, W., Chen, J., and Zhu, X., 2017, "Explicit Three Dimensional Topology Optimization Via Moving Morphable Void (MMV) Approach," Comput. Methods Appl. Mech. Eng., 322, pp. 590–614. 10.1016/j.cma.2017.05.002
33. Zhang, W., Li, D., Zhou, J., Du, Z., Li, B., and Guo, X., 2018, "A Moving Morphable Void (MMV)-Based Explicit Approach for Topology Optimization Considering Stress Constraints," Comput. Methods Appl. Mech. Eng., 334, pp. 381–413. 10.1016/j.cma.2018.01.050
34. Jiang, L., and Chen, S., 2017, "Parametric Structural Shape & Topology Optimization With a Variational Distance-Regularized Level Set Method," Comput. Methods Appl. Mech. Eng., 321, pp. 316–336. 10.1016/j.cma.2017.03.044
35. Luo, Y., Xing, J., and Kang, Z., 2020, "Topology Optimization Using Material-Field Series Expansion and Kriging-Based Algorithm: An Effective Non-Gradient Method," Comput. Methods Appl. Mech. Eng., 364, p. 112966. 10.1016/j.cma.2020.112966
36. Lison, P., 2015, An Introduction to Machine Learning, Language Technology Group, Edinburgh.
37. Rastegari, M., Ordonez, V., Redmon, J., and Farhadi, A., 2016, "Xnor-net: Imagenet Classification Using Binary Convolutional Neural Networks," European Conference on Computer Vision, Springer, pp. 525–542.
38. Gawehn, E., Hiss, J. A., and Schneider, G., 2016, "Deep Learning in Drug Discovery," Mol. Inform., 35(1), pp. 3–14. 10.1002/minf.201501008
39. Raissi, M., Perdikaris, P., and Karniadakis, G. E., 2019, "Physics-Informed Neural Networks: A Deep Learning Framework for Solving Forward and Inverse Problems Involving Nonlinear Partial Differential Equations," J. Comput. Phys., 378, pp. 686–707. 10.1016/j.jcp.2018.10.045
40. Raissi, M., Yazdani, A., and Karniadakis, G. E., 2020, "Hidden Fluid Mechanics: Learning Velocity and Pressure Fields From Flow Visualizations," Science, 367(6481), pp. 1026–1030. 10.1126/science.aaw4741
41. Iten, R., Metger, T., Wilming, H., Del Rio, L., and Renner, R., 2020, "Discovering Physical Concepts With Neural Networks," Phys. Rev. Lett., 124(1), p. 010508. 10.1103/PhysRevLett.124.010508
42. Brunton, S. L., Noack, B. R., and Koumoutsakos, P., 2020, "Machine Learning for Fluid Mechanics," Annu. Rev. Fluid Mech., 52(1), pp. 477–508. 10.1146/annurev-fluid-010719-060214
43. Raissi, M., Wang, Z., Triantafyllou, M. S., and Karniadakis, G. E., 2019, "Deep Learning of Vortex-Induced Vibrations," J. Fluid Mech., 861, pp. 119–137. 10.1017/jfm.2018.872
44. Yu, Y., Hur, T., Jung, J., and Jang, I. G., 2019, "Deep Learning for Determining a Near-Optimal Topological Design Without Any Iteration," Struct. Multidiscipl. Optim., 59(3), pp. 787–799. 10.1007/s00158-018-2101-5
45. Lei, X., Liu, C., Du, Z., Zhang, W., and Guo, X., 2019, "Machine Learning-Driven Real-Time Topology Optimization Under Moving Morphable Component-Based Framework," ASME J. Appl. Mech., 86(1), p. 011004. 10.1115/1.4041319
46. Oh, S., Jung, Y., Kim, S., Lee, I., and Kang, N., 2019, "Deep Generative Design: Integration of Topology Optimization and Generative Models," ASME J. Mech. Des., 141(11), p. 111405. 10.1115/1.4044229
47. Park, J. J., Florence, P., Straub, J., Newcombe, R., and Lovegrove, S., 2019, "DeepSDF: Learning Continuous Signed Distance Functions for Shape Representation," Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 165–174.
48. Wang, B., Zhou, Y., Zhou, Y., Xu, S., and Niu, B., 2018, "Diverse Competitive Design for Topology Optimization," Struct. Multidiscipl. Optim., 57(2), pp. 891–902. 10.1007/s00158-017-1762-9
49. Yang, K., Zhao, Z. L., and He, Y., 2019, "Simple and Effective Strategies for Achieving Diverse and Competitive Structural Designs," Extreme Mech. Lett., 30, p. 100481. 10.1016/j.eml.2019.100481
50. He, Y., Cai, K., Zhao, Z.-L., and Xie, Y. M., 2020, "Stochastic Approaches to Generating Diverse and Competitive Structural Designs in Topology Optimization," Finite Elem. Anal. Des., 173, p. 103399. 10.1016/j.finel.2020.103399
51. Goodfellow, I., Bengio, Y., and Courville, A., 2016, Deep Learning, MIT Press, Cambridge, MA.
52. Cybenko, G., 1989, "Approximation by Superpositions of a Sigmoidal Function," Math. Control Signals Syst., 2(4), pp. 303–314. 10.1007/BF02551274
53. Paszke, A., Gross, S., Chintala, S., Chanan, G., Yang, E., DeVito, Z., Lin, Z., Desmaison, A., Antiga, L., and Lerer, A., 2017, "Automatic Differentiation in PyTorch."
54. Rumelhart, D. E., Hinton, G. E., and Williams, R. J., 1985, Learning Internal Representations by Error Propagation, California Univ San Diego La Jolla Inst for Cognitive Science, San Diego, CA.
55. Andersson, J. A., Gillis, J., Horn, G., Rawlings, J. B., and Diehl, M., 2019, "CasADi: A Software Framework for Nonlinear Optimization and Optimal Control," Math. Program. Comput., 11(1), pp. 1–36. 10.1007/s12532-018-0139-4
56. Butcher, J. C., 1987, The Numerical Analysis of Ordinary Differential Equations: Runge–Kutta and General Linear Methods, Wiley-Interscience.
57. Wang, S., Lim, K. M., Khoo, B. C., and Wang, M. Y., 2007, "An Extended Level Set Method for Shape and Topology Optimization," J. Comput. Phys., 221(1), pp. 395–421. 10.1016/j.jcp.2006.06.029
58. Osher, S., Fedkiw, R., and Piechor, K., 2004, "Level Set Methods and Dynamic Implicit Surfaces," ASME Appl. Mech. Rev., 57(3), p. B15. 10.1115/1.1760520
59. Wang, M. Y., and Wang, P., 2006, "The Augmented Lagrangian Method in Structural Shape and Topology Optimization With RBF Based Level Set Method," CJK-OSM 4: The Fourth China–Japan–Korea Joint Symposium on Optimization of Structural and Mechanical Systems, p. 191.
60. Li, C., Xu, C., Gui, C., and Fox, M. D., 2010, "Distance Regularized Level Set Evolution and Its Application to Image Segmentation," IEEE Trans. Image Process., 19(12), pp. 3243–3254. 10.1109/TIP.2010.2069690
61. Challis, V. J., 2010, "A Discrete Level-Set Topology Optimization Code Written in Matlab," Struct. Multidiscipl. Optim., 41(3), pp. 453–464. 10.1007/s00158-009-0430-0
62. Hartmann, D., Meinke, M., and Schröder, W., 2010, "The Constrained Reinitialization Equation for Level Set Methods," J. Comput. Phys., 229(5), pp. 1514–1535. 10.1016/j.jcp.2009.10.042
63. Anastassiou, G. A., 2011, "Multivariate Hyperbolic Tangent Neural Network Approximation," Comput. Math. Appl., 61(4), pp. 809–821. 10.1016/j.camwa.2010.12.029
64. Andreassen, E., Clausen, A., Schevenels, M., Lazarov, B. S., and Sigmund, O., 2011, "Efficient Topology Optimization in MATLAB Using 88 Lines of Code," Struct. Multidiscipl. Optim., 43(1), pp. 1–16. 10.1007/s00158-010-0594-7