Abstract
Oscillating heat pipes are heat transfer devices with the potential to address some of the most pressing current thermal management problems, from the miniaturization of microchips to the development of hypersonic vehicles. Since their invention in the 1990s, numerous studies have attempted to develop predictive and inverse design models for oscillating heat pipe function. However, the field still lacks robust and flexible models that can be used to prescribe design specifications based on a target performance. The fundamental difficulty lies in the fact that, despite the simplicity of their design, the mechanisms behind the operation of oscillating heat pipes are complex and only partially understood. To circumvent this limitation, over the last several years, there has been increasing interest in the application of machine learning techniques to oscillating heat pipe modeling. Our survey of the literature has revealed that machine learning techniques have been used successfully to predict different aspects of the operation of these devices. However, many fundamental questions, such as which machine learning models are best suited for this task or whether their results can extrapolate to different experimental setups, remain unanswered. Moreover, the wealth of knowledge that the field has produced regarding the physical phenomena behind oscillating heat pipes is yet to be leveraged by machine learning techniques. Herein, we discuss these applications in detail, emphasizing their advantages and limitations, as well as potential paths forward.
1 Introduction
Oscillating heat pipes (OHPs) are a special type of heat pipe whose effective thermal conductivity can be several orders of magnitude higher than that of pure copper, while operating without external power inputs or wick structures. An OHP consists of a meandering capillary that is partially filled with a working fluid (Fig. 1). The filling ratio is defined as the ratio of the liquid volume to the total internal volume of the capillary. The hydraulic diameter of the capillary is made small enough for the surface tension of the fluid to produce a train of vapor bubbles and liquid plugs. As shown in Fig. 1, OHPs comprise three regions: evaporator, adiabatic section, and condenser. The evaporator is exposed to the heat source from which heat is to be removed. This heat evaporates the working fluid, increasing the vapor pressure and volume and pushing the fluid along the adiabatic section toward the condenser. In the condenser (which is kept at a lower temperature than the evaporator), vapor condenses and contracts. The interplay between vapor expansion and contraction and the resulting pressure differences induces an oscillatory motion of the working fluid [1].
Since their introduction in the early 1990s by Akachi [2,3], OHPs have elicited the interest of the scientific and industrial community due to their simple design (no moving parts, no external power inputs, and no wick structures), ease of manufacturing, and extraordinary effective thermal conductivity. Despite the great promise shown by OHPs, the lack of predictive tools for OHP function has hindered their widespread adoption in industry, as it is difficult to prescribe design specifications given a target performance [4]. For this reason, mathematical modeling of OHPs is a very active area of research in which significant progress has been made over the last 20 years. Nevertheless, the field is still far from a comprehensive model for prediction and inverse design. The fundamental difficulty lies in the fact that, despite their design simplicity, the function of OHPs arises from the delicate interplay of a multitude of physical phenomena: an OHP is not only a heat transfer device but also a mechanical vibration system and a thermodynamic engine that converts part of the thermal energy to be transported into the work that drives the thermally excited oscillating motion, while integrating evaporation and condensation heat transfer effectively and efficiently. As a result, the heat transfer performance of OHPs depends on many factors, including fluid properties (such as surface tension, viscosity, thermal conductivity, density, latent heat, and heat capacity), mechanical vibration properties [5–8] (spring constant, restoring force, drag force, and saturation pressure), surface wetting characteristics [9–13] (contact angle, surface tension, nucleation cavities, thin film thickness, and material properties), physical configuration [14–17] (channel size, channel length, channel layers, spatial layout, and number of turns), and thermal conditions [18,19] (heat flux level, total power input, and temperature uniformity), while being constrained by application and manufacturing limitations.
The complexity of the interactions from which OHP function arises poses a challenge to first-principles modeling. On the one hand, the mathematical representation of all these phenomena results in complicated systems of differential equations, which are very expensive to integrate. On the other hand, our mechanistic understanding of the relevant phenomena is at best partial, and reliable correlations are often unavailable. Due to these difficulties, in recent years, the OHP modeling community has turned toward data-driven machine learning (ML) techniques to try to leverage experimental and simulated data for the development of predictive and inverse design models. Although instances of this approach can be found as early as 2002 [20], the development and success of ML over the last 10 years have prompted a surge of efforts in this direction. In what follows, we review research articles that utilize ML techniques to predict the performance of OHPs, comment on the advantages and limitations of these methods, and discuss some perspectives for the future. Multiple survey articles regarding OHP modeling and experiments have been published over the years [4,21,22], including a recent and exhaustive review of the physical phenomena underlying OHP function [23]. In the present review, we survey the attempts that have been made to employ ML techniques for the study and prediction of OHP function (summarized in Table 1).
List of publications that employ ML techniques to study OHPs, including the algorithms used along with their corresponding inputs and outputs
Reference | Year | ML model | Inputs | Outputs | Data used | Remarks |
---|---|---|---|---|---|---|
Khandekar et al. [20] | 2002 | Artificial neural network | Power input, filling ratio | Thermal resistance | 76 data points from their own experiment | Provides an example of the difficulty neural networks have in extrapolating |
Lee and Chang [24] | 2009 | NARX neural networks | Evaporator temperature time series | Condenser temperature time series | Data from their own experiment | First attempt to predict time-dependent quantities |
Jiaqiang et al. [25] | 2011 | Artificial neural network | Filling ratio, pipe diameter, inclination angle, number of turns, and heat input | Heat transfer rate | Data from their own experiment | Uses grey-relation analysis to study the sensitivity of the output to the various input parameters |
Jokar et al. [26] | 2016 | Artificial neural network, genetic algorithm | Power input, filling ratio, inclination angle | Thermal resistance | Data from their own experiment | Utilizes the genetic algorithm to find parameters that minimize thermal resistance |
Patel and Mehta [27] | 2016 | 18 artificial neural network models | Power input, filling ratio | Thermal resistance | Experimental data from Ref. [28] | Best performance obtained by generalized regression neural network with Gaussian radial basis function |
Jalilian et al. [29] | 2016 | Artificial neural network | Total solar radiation, evaporator length, filling ratio, input water temperature, inclination angle | Gained heat | Data from their own experiment | The OHP studied was part of a solar collector |
Ahmadi et al. [30] | 2019 | Group method of data handling | Inner and outer diameter, number of turns, lengths of evaporator, condenser and adiabatic sections, inclination angle, filling ratio, power input, thermal conductivity of tube material | Thermal resistance and effective thermal conductivity | Collected from published studies | Explicit polynomial relations for the outputs as functions of the inputs are obtained |
Malekan et al. [31] | 2019 | Artificial neural network, neuro-fuzzy inference, group method of data handling | Heat input, thermal conductivity, inner diameter-to-length ratio | Thermal resistance | Data from their own experiment | Nanofluids were used as working fluids |
Wang et al. [32] | 2019 | Artificial neural network | Heat flux, number of turns, inner diameter, filling ratio | Thermal resistance | Collected from published studies | The artificial neural network performed better for a specific range of heat fluxes |
Wang et al. [33] | 2019 | Artificial neural network | The Kutateladze, Bond, Morton, Prandtl, and Jacob dimensionless numbers, along with number of turns and evaporator section length-to-inner diameter ratio | Thermal resistance | 22 data points collected from published studies | Dimensionless numbers were used to extend the range of applicability of the model |
Qian et al. [34] | 2020 | XGBoost model | The Kutateladze, Jacob, Prandtl, Bond, and Morton dimensionless numbers, along with heat flux, evaporator temperature, and various geometric parameters | Effective thermal conductivity | 70 data points from Ref. [35] | Discusses difficulties of artificial neural network with small data sets and combining dimensionless and dimensional numbers as inputs. Proposes the XGBoost algorithm to bypass these difficulties. |
Wen [36] | 2021 | Artificial neural network and group method of data handling | Heat input, filling ratio, number of turns, and geometric parameters | Thermal resistance | Collected from published studies | An algebraic relation between the various inputs and thermal resistance is explicitly obtained |
Yoon and Kim [37] | 2021 | Artificial neural network with long short-term memory encoder | Time series of menisci positions obtained through a visualization experiment | Time series of a given meniscus position | Data from their own experiment | Prediction results are used to calculate the volumetric fraction in the condenser section |
Loyola-Fuentes et al. [38] | 2022 | Artificial neural network, k-nearest neighbors, random forest | Reynolds, Weber, Froude, and Bond dimensionless numbers (calculated from visualization measurements) | Flow pattern classification | Over 17,000 data points from their own experiment | First study to use ML to perform a classification study for OHPs |
Prashanth et al. [39] | 2022 | Various artificial neural network architectures | Heat input, filling ratio, time taken to reach steady state | Temperature rise | 3500 data points with various working fluids from their own experiment | Optimal architecture depends on the working fluid, highlighting the difficulty of obtaining an ML general framework |
Koyama et al. [40] | 2022 | Artificial neural network with long short-term memory encoder | Time series for internal flow pattern, wall temperature difference, and heat transport rate | Successive time series for internal flow pattern, wall temperature difference, and heat transport rate | Data from their own visualization experiment | The agreement between time series was not optimal, but the predicted and measured averaged quantities were similar |
2 A Brief Overview of Physical Modeling of Oscillating Heat Pipes
One of the first modeling approaches employed to study OHPs was continuum wave propagation (CWP), by which the pressure oscillation inside an OHP was assumed to be dictated by the wave equation [41,42]. The application of CWP was limited to a specific operational regime because of the use of phenomenological representations [43]. Another common modeling method (1D modeling) consists of representing an OHP as a straight tube with specific locations for heat input (evaporators), heat output (condensers), and adiabatic sections [44]. In 1D modeling, as shown in Fig. 2, it is assumed that the tube is filled with liquid slugs and vapor bubbles, while the evaporator, condenser, and adiabatic sections are represented by periodic conditions [45]. Continuity, momentum, and energy equations are solved for each of the liquid slugs and vapor bubbles considering phase change. Initial 1D modeling attempts neglected the effects of thin films on OHP function [46–48], but this important aspect was included in later investigations [49–62].

Oscillating heat pipe represented as a straight tube in a 1D model. Here, $L_e$, $L_a$, and $L_c$ stand for the lengths of the evaporator, adiabatic, and condenser sections, respectively, while $x_{l,i}$ ($x_{r,i}$) represents the position of the left (right) end of the $i$th vapor bubble.
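For concreteness, a typical 1D model writes a momentum balance for each liquid slug driven by the pressures of its neighboring vapor bubbles. A minimal sketch of such a balance, with notation assumed here for illustration rather than taken from any particular study, is

$$ m_{l,i}\,\frac{d^2 x_i}{dt^2} = \left(p_{v,i} - p_{v,i+1}\right) A \; - \; F_{\tau,i} \; - \; m_{l,i}\, g \sin\theta, $$

where $m_{l,i}$ and $x_i$ are the mass and position of the $i$th liquid slug, $p_{v,i}$ and $p_{v,i+1}$ are the pressures of the adjacent vapor bubbles, $A$ is the channel cross-sectional area, $F_{\tau,i}$ is the viscous friction force on the slug, and $\theta$ is the inclination angle. Mass and energy balances with evaporation and condensation terms close the system; the exact forms vary across the 1D studies cited above.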
With the advancement of computing hardware, direct numerical simulation (DNS) of OHPs in 2D and 3D became possible. DNS solves the full Navier–Stokes equations and therefore allows the simulation of the fluid mixture, which is not possible with 1D modeling. As in the 1D case, 2D and 3D modeling involves the governing conservation equations, along with equations representing turbulent flow effects and heat and mass transfer via phase change [63–66]. The most popular model for predicting phase change phenomena in OHPs is the volume of fluid (VOF) method [67–79]. VOF is a surface-tracking technique that solves a single set of momentum equations for the working fluid and can be applied to fixed Eulerian meshes [80]. Although 2D and 3D approaches can describe the different flow regimes in OHPs accurately, they are far more computationally expensive than 1D models.
Mass-spring-damper (MSD) modeling is yet another method employed by researchers to predict the oscillating motion inside OHPs. As illustrated in Fig. 3, the train of liquid slugs and vapor bubbles can be represented by an MSD mechanical system, where the liquid slugs are modeled as masses connected by springs (the vapor bubbles). The expansion and contraction of the working fluid in the evaporator and condenser provide the force driving the oscillation [1]. Using this analogy, some researchers have developed correlations to predict the frequencies and damping ratios of the oscillating motion in OHPs [81–83].
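In its simplest form, and again with notation assumed here for illustration rather than taken from the cited studies, the MSD analogy reduces a liquid slug to the familiar forced oscillator

$$ m\,\ddot{x} + c\,\dot{x} + k\,x = F(t), $$

where $m$ is the mass of the liquid slug, $k$ is an effective spring constant arising from the compressibility of the adjacent vapor bubbles, $c$ is a damping coefficient representing viscous friction, and $F(t)$ is the thermally induced driving force produced by evaporation and condensation. Correlations such as those in Refs. [81–83] amount to estimates of the natural frequency and damping ratio of this oscillator.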
3 A General Framework For Implementation and Testing of Machine Learning Algorithms
Despite the great variety of ML algorithms and techniques available, a common general framework is typically used for implementing these algorithms and assessing their performance. We briefly outline this workflow here, as it is used by most of the research articles we review. Suppose that we have an input space $X$ and a target space $Y$, which are related by an unknown function $f: X \to Y$. For example, one could take the input space $X$ to be composed of filling ratios and heat inputs for a given OHP and the target space $Y$ to consist of the corresponding thermal resistance or effective thermal conductivity. One would expect that different heat inputs and filling ratios would induce different thermal resistances via a function $f$. (Here, we are assuming that all other factors, such as the working fluid, are kept fixed.) Machine learning algorithms can “learn” to approximate this function from a data set of examples in which both the input and output values are recorded. The output of an ML algorithm is a function $\hat{f}$ that approximates the unknown function $f$. In our running example, an experimental study would have to be conducted so that the thermal resistance or effective thermal conductivity of an OHP is calculated for different choices of filling ratio and heat input. The data collected in this study would normally be divided into a training set and a testing set. (There is no rule for the proportion of this splitting, but 80% training and 20% testing is a common choice.) The training set is fed to the chosen ML algorithm, which generates the approximating function $\hat{f}$. This process is usually referred to as “training the algorithm.” Once the algorithm is trained, the testing set (to which the algorithm did not have access during training) is used to assess the ability of the algorithm to correctly predict outputs given inputs that it did not see during training. This process is known as testing, and it is usually concerned with various statistical assessments of how far the outputs predicted by the algorithm are from the actual outputs in the data set. This general framework is schematically depicted in Fig. 4.
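As a concrete illustration of this workflow, the following minimal sketch trains a small neural network to map filling ratio and heat input to thermal resistance using an 80/20 train/test split. The file name, column names, and network size are hypothetical placeholders, not taken from any of the surveyed studies.

```python
# Minimal sketch of the train/test workflow described above (assumed,
# illustrative data file and column names).
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error

data = pd.read_csv("ohp_data.csv")                    # hypothetical data set
X = data[["filling_ratio", "heat_input_W"]]           # inputs
y = data["thermal_resistance_K_per_W"]                # target

# 80/20 split into training and testing sets (a common, not mandatory, choice)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2,
                                                    random_state=0)

# "Training the algorithm": fit an approximating function f_hat to the data
model = MLPRegressor(hidden_layer_sizes=(20, 20), max_iter=5000, random_state=0)
model.fit(X_train, y_train)

# "Testing": evaluate predictions on data unseen during training
mse = mean_squared_error(y_test, model.predict(X_test))
print(f"Test MSE: {mse:.4f}")
```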
Most applications of ML to OHPs utilize artificial neural networks (ANNs) as their predictive model. In these applications, the most common target variable is thermal resistance [20,26,27,31–33,36], while the set of inputs varies considerably across studies (see Fig. 5 for a schematic representation of a neural network model for OHPs). Naturally, the main factor influencing the choice of inputs is the data available for training. As the filling ratio and the heat input to the evaporator are easily measurable and adjustable operational conditions that are known to play a central role in OHP performance [84–88], these variables are frequent choices of inputs [20,26,27,32–34]. In cases where data from several OHPs were used [32,36], it is common to include design parameters, such as diameter or number of turns, to “inform” the ML algorithm that the data come from different devices. Other studies have used dimensionless numbers as inputs, as it is conceivable that this allows the ML algorithm to make accurate predictions for different OHPs with similar architectures. Finally, as opposed to thermal resistance, which is a continuous variable, some research groups [38,40] have used discrete variables, such as flow patterns, to perform classification studies.

Schematic representation of a neural network model that maps various design and operating parameters of OHPs to the corresponding thermal resistance
4 Variations on the Theme of Artificial Neural Networks
The first application of ML to the study of OHPs was conducted by Khandekar et al. [20], where an ANN was trained to predict thermal resistance given heat input and filling ratio. Accurate predictions were obtained when the ANN was trained on filling ratios for which the OHP was able to exhibit an oscillatory flow. However, when extreme filling ratios (outside the 20–80% range) were used, the accuracy of the ANN predictions was dramatically diminished. Khandekar et al. [20] warn us that data-driven techniques might fail to establish reasonable correlations for data obtained under different phenomenological regimes.
Jokar et al. [26] conducted an experimental study of a five-turn OHP in which the thermal resistance was obtained for various filling ratios, power inputs, and inclination angles in the ranges of 30–80%, 5 W–50 W, and 5 deg–90 deg, respectively. An ANN was trained to predict thermal resistance based on the other parameters. Recall that the trained ANN is simply a function of three variables that returns the predicted thermal resistance when given values of the filling ratio, power input, and inclination angle. This function is then passed to the genetic algorithm (GA), an optimization algorithm inspired by natural selection, to find the working conditions that result in the minimum thermal resistance. The genetic algorithm found the optimal conditions to be a filling ratio of 38.25%, a power input of 39.93 W, and an inclination angle of 55.6 deg, which the authors deemed to be in good agreement with their experimental results.
Jalilian et al. [29] experimentally studied an oscillating heat pipe flat-plate solar collector and used the collected data to train an artificial neural network. The inputs of the network were the total solar radiation, the length of the evaporator, the filling ratio (water was the working fluid), the temperature of the input water to the tank, and the inclination angle, while the output of the ANN was the total heat gained. Various architectures were tested, including two and three hidden layers, different activation functions, and different numbers of neurons per layer. The optimal architecture consisted of one hidden layer of 20 neurons, a unipolar activation function, and a learning rate of 0.04. For this architecture, over 87% of the predicted values had an error smaller than 15%. The trained neural network (a function from inputs to output) was then optimized via the GA to find the combination of inputs that yielded the highest possible efficiency (heat gained). The thermal efficiency of the optimal case was 4% higher than that of the case study considered in the article.
Patel and Mehta [27] used experimental data from the study by Shafii et al. [28] to train a family of 18 ANN models (radial basis, linear layers, and others) with various activation functions. As in the study by Khandekar et al. [20], the inputs to these ANNs were filling ratio and heat input, while the output was thermal resistance. The working fluids used by Shafii et al. [28] were water and ethanol. In this study, the best-performing ANN, based on the minimum mean absolute relative difference, was a generalized regression neural network with Gaussian RBF and a spread constant of 4.8, which could predict thermal resistance within an error of 1.81%.
Wang et al. [32] collected over 200 data points from the literature pertaining to water-filled OHPs. An ANN was trained on this data to predict the thermal resistance of the OHP. As the working fluid was constant in all studies, no properties of the working fluid were passed to the ANN as inputs. However, given that the data were collected on OHPs with different designs, geometric parameters (inner diameter, number of turns, and length ratio of the evaporator) were used as inputs for training the ANN, together with filling ratio and power input. The trained ANN achieved a mean-square error (MSE) of 0.0025 on the testing set, although the accuracy of the predictions decreased for data points with high heat fluxes (above 14,000 W/m2).
As most ML studies of OHPs have used water and ethanol as working fluids, Wen [36] trained an ANN and a group method of data handling (GMDH) model on 66 data points gathered from various experimental studies in which acetone was used as the working fluid. For a suitable choice of architecture, the ANN outperformed the GMDH model. Malekan et al. [31] reported on the effects that working fluids composed of water and nanoparticles have on the performance of OHPs. An experimental study was conducted on three 9-turn OHPs with diameters of 2 mm, 2.5 mm, and 3 mm. The OHPs were charged to a 50% filling ratio with various working fluids: water, an Fe2O3/water nanofluid (2 vol% of Fe2O3 nanoparticles with an average diameter of 20 nm in water), and an Fe3O4/water nanofluid (2 vol% of Fe3O4 nanoparticles with diameters between 10 nm and 30 nm in water). In the experiments, the thermal resistance of the OHP was measured for different heat inputs to the evaporator. Three different ML models were trained on the collected data (108 data points): a multilayer feed-forward neural network, an adaptive neuro-fuzzy inference system, and a GMDH model. The inputs to the models were the length-to-diameter ratio, the heat input, and the thermal conductivity of the working fluid. Several architectures were tested for each of the models, and their performances were studied and compared. The results show that all three models can make accurate predictions according to various statistical metrics, although the GMDH model exhibited poorer performance than the other two.
As explained in Sec. 3, a trained machine learning model is a function between input and output spaces. Standard mathematical techniques can be used to study this function and obtain further information about the system under study. For example, optimization routines can be employed to determine the filling ratio and the power input that would result in optimal heat transfer performance. Such is the direction pursued by Jokar et al. [26], where the genetic algorithm is applied to a trained neural network to determine the combination of filling ratio, inclination angle, and power input that minimizes the thermal resistance of the OHP. It is also possible to study the sensitivity of a trained ML model to its various inputs. For example, grey relational analysis was used by Jiaqiang et al. [25] to determine that, among several parameters (filling ratio, interior pipe diameter, inclination angle, heat input, and number of turns), the filling ratio had the strongest influence on the heat transfer rate, while the impact of varying the inner diameter was rather limited.
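The sketch below illustrates this idea: the regressor trained in the earlier sketch is treated as a plain function of (filling ratio, power input) and handed to an evolutionary optimizer. Differential evolution is used here as a readily available stand-in for the genetic algorithm of Ref. [26]; the bounds and the `model` object are illustrative assumptions.

```python
# Hedged sketch: minimize the thermal resistance predicted by a trained
# surrogate (`model` from the earlier sketch). The optimizer and bounds are
# assumptions for illustration, not the setup of Ref. [26].
import numpy as np
from scipy.optimize import differential_evolution

def predicted_resistance(x):
    # x = [filling_ratio, heat_input_W]
    return model.predict(np.atleast_2d(x))[0]

bounds = [(0.3, 0.8),    # filling ratio
          (5.0, 50.0)]   # power input (W)

result = differential_evolution(predicted_resistance, bounds, seed=0)
print("Inputs minimizing the predicted thermal resistance:", result.x)
```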
4.1 Dimensionless Numbers.
Wang et al. [33] remark that most ML models developed for predicting OHP performance are restricted to small parameter ranges, typically including only one or two working fluids. Moreover, usually only a small number of parameters are considered when developing these models. Wang et al. [33] proposed the use of dimensionless numbers as inputs to an ANN. In their study, 722 data points were collected from the published literature, obtained from different OHPs with various working fluids and a wide range of geometric and operational parameters (see Table 2).
Parameters used by Wang et al. [33] and their corresponding value ranges
Parameter | Range |
---|---|
Inner diameter (mm) | 0.8–2.45 |
Heat flux (W/m2) | 493.99–132760.40 |
Number of turns | 2–20 |
Coolant temperature (K) | 293.15–333.15 |
Charging ratio | 0.2–0.9 |
Working fluid | Water, ethanol, methanol, acetone, R123 |
Thermal resistance (K/W) | 0.048–4.47 |
For each data point, seven relevant dimensionless numbers were computed. These dimensionless numbers were passed as input to an ANN model designed to predict the corresponding thermal resistance. After training, the MSE on the testing set (15% of the entire data set) was found to be 0.0136, indicating a good agreement between predictions and experimental data.
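To make the role of such inputs concrete, the short sketch below computes a few of the dimensionless groups commonly cited in these studies from raw fluid properties. Standard textbook definitions are used; the exact forms adopted in Refs. [33,34] may differ, and the property values are placeholders.

```python
# Illustrative (assumed) definitions of a few dimensionless groups used as
# ML inputs; property values below are rough placeholders, not study data.
def bond_number(rho_l, rho_v, g, d, sigma):
    """Bond number: gravitational vs. surface tension forces in a channel of diameter d."""
    return (rho_l - rho_v) * g * d**2 / sigma

def prandtl_number(mu, cp, k):
    """Prandtl number of the liquid phase."""
    return mu * cp / k

def jacob_number(cp, dT, h_fg):
    """Jacob number: sensible-to-latent heat ratio for a superheat dT."""
    return cp * dT / h_fg

# Rough water-like properties near room temperature (illustrative only)
print(bond_number(rho_l=998.0, rho_v=0.02, g=9.81, d=2.0e-3, sigma=0.072))
print(prandtl_number(mu=1.0e-3, cp=4182.0, k=0.6))
print(jacob_number(cp=4182.0, dT=10.0, h_fg=2.45e6))
```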
4.2 Time-Series Analysis.
Most applications of ML to OHPs disregard the temporal variations of the variables of interest and focus on average quantities. Yoon and Kim [37] carried out an experimental study to obtain time-series data of the positions of liquid–vapor menisci (see Fig. 6) in a five-turn closed-loop OHP filled with ethanol at a 55% filling ratio.
An artificial neural network with a long short-term memory unit was employed to predict 30 ms of this time series based on experimental measurements of the previous 90 ms. To quantitatively assess the prediction results, the volumetric fraction in the condenser section (the ratio of the vapor plug volume in the condenser section to the total channel volume in the condenser) was computed from both the experimental data and the neural network predictions. The model estimate was within 30% of the experimental data.
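A minimal sketch of this kind of windowed sequence prediction is given below: an LSTM maps a window of past samples to the next window, in the spirit of the models used in Refs. [37,40]. The synthetic signal, window lengths, and layer sizes are assumptions for illustration, not the configurations reported in those studies.

```python
# Hedged sketch of window-to-window time-series prediction with an LSTM.
# The signal and all hyperparameters are illustrative assumptions.
import numpy as np
import tensorflow as tf

past, future = 90, 30                                  # past samples -> future samples
signal = np.sin(np.linspace(0.0, 100.0, 5000)).astype("float32")  # placeholder signal

# Build (past -> future) training windows from the 1D signal
n = len(signal) - past - future
X = np.stack([signal[i:i + past] for i in range(n)])[..., np.newaxis]
Y = np.stack([signal[i + past:i + past + future] for i in range(n)])

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(past, 1)),
    tf.keras.layers.LSTM(32),                          # encodes the past window
    tf.keras.layers.Dense(future),                     # predicts the future window
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, Y, epochs=5, batch_size=64, verbose=0)

prediction = model.predict(X[:1])                      # next `future` samples
```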
Koyama et al. [40] conducted a study aimed at predicting the temporal variation of three performance variables: the flow pattern, the temperature difference between evaporator and condenser, and the heat transport rate. A five-turn OHP filled at 50% with ethanol was experimentally studied in bottom heating mode. High-speed cameras were synchronized with K-type thermocouples to obtain flow pattern images and temperature readings. The data collected were used to train and validate an ANN model with long short-term memory layers. Although the experimental and predicted temporal and axial variations generally differed, the ML model was able to capture some characteristic behaviors of the flow, such as the appearance of large-amplitude flow followed by an almost steady flow with small oscillations (see Fig. 7). Similarly, the temporal variations of the heat transport rate and temperature difference generally differed between the experimental and predicted data, although a correlation between high-amplitude oscillations and low temperature difference was consistent across the experimental and predicted data sets. Finally, the time averages and standard deviations of the predicted heat transport rate, temperature difference, and volumetric liquid-phase ratio exhibited good agreement with those of the experimental data.
Temporal and axial flow variations observed in experimental data (right) and predicted by the model (left) at a power input of 62 W [37]. The symbol g indicates the direction of gravitational acceleration in each channel.
5 Other Machine Learning Models
As mentioned earlier, most attempts to employ ML techniques to study OHPs have focused on ANNs. However, Qian et al. [34], following the works of Krizhevsky et al. [89] and Silver et al. [90], argue that proper training of an ANN usually requires more data points than are typically available in OHP experimental studies. Moreover, data are often normalized to train an ANN, which can create difficulties when nondimensional numbers are used as inputs, as they usually lie in a range that is very different from that of other parameters, such as geometric quantities. To circumvent these difficulties, Qian et al. [34] trained an XGBoost model, which does not require data normalization and has been observed to outperform ANNs on small training sets [91]. The inputs for the model were the evaporator temperature, the heat flux, geometric parameters, and various dimensionless numbers, such as the Kutateladze number, which captures heat transfer properties of the OHP. The model was trained to predict the effective heat transfer coefficient. It was found that, for a given data set, the XGBoost model yields better statistical scores (root-mean-square error, for example) than an ANN and other ML models. Moreover, XGBoost exhibits some degree of interpretability, as it allows one to rank the various inputs of the model according to the sensitivity of the output to them.
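The following sketch shows a gradient-boosted-tree regressor of the kind used in Ref. [34], trained directly on unnormalized features; the placeholder data, feature count, and hyperparameters are assumptions for illustration.

```python
# Hedged sketch of an XGBoost regressor on a small, unnormalized data set.
# The random placeholder data and hyperparameters are illustrative assumptions.
import numpy as np
from xgboost import XGBRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
X = rng.random((70, 8))     # e.g., dimensionless numbers, heat flux, geometry
y = rng.random(70)          # e.g., effective thermal conductivity (placeholder)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2,
                                                    random_state=0)
model = XGBRegressor(n_estimators=300, max_depth=4, learning_rate=0.05)
model.fit(X_train, y_train)

rmse = mean_squared_error(y_test, model.predict(X_test)) ** 0.5
print("Test RMSE:", rmse)
print("Feature importances:", model.feature_importances_)  # coarse input ranking
```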
All the works reviewed thus far have been concerned with the prediction of performance indicators (such as thermal resistance) based on a given set of inputs. A different goal was pursued by Loyola-Fuentes et al. [38], where three different ML models (random forest, k-nearest neighbors, and an ANN) were used to classify flow patterns in an OHP. To this end, an experimental study was conducted. Two working fluids were used (ethanol and FC-32) under various heat inputs and gravity levels. The resulting wall temperature, heat flux, fluid velocity, and acceleration were recorded. Two data sets (one per working fluid) consisting of the controlled and measured variables, as well as the corresponding flow images, were manually classified and labeled as slug-plug, semi-annular, or annular flow (see, for example, Fig. 8).
Different flow patterns as observed and classified by Loyola-Fuentes et al. [38] (S, slug-plug; SA, semi-annular; A, annular).
The three algorithms were trained and validated on the collected data and compared via statistical measures of performance. The ANN model obtained the highest accuracy score for both working fluids. Finally, the classification results obtained by the ANN model were used to create flow pattern maps with respect to certain dimensionless numbers as shown in Fig. 9 in the case of ethanol, from which thresholds for flow transitions were also obtained.
Flow pattern map for ethanol developed in Ref. [38]. Fr, We, and Bo stand for the Froude, Weber, and Bond numbers, respectively.
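A minimal sketch of this type of flow-regime classification is shown below, using a random forest on dimensionless-number features; the synthetic data, labels, and model settings are illustrative assumptions rather than the data or setup of Ref. [38].

```python
# Hedged sketch of flow-pattern classification from dimensionless numbers.
# The random placeholder features and labels are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.random((1000, 4))            # e.g., [Re, We, Fr, Bo] per observation
y = rng.integers(0, 3, size=1000)    # 0: slug-plug, 1: semi-annular, 2: annular

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2,
                                                    random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)
print("Test accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```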
6 Limitations and Perspectives
The work surveyed in this article shows that ML techniques can be used successfully to predict important performance markers for OHPs. However, much remains to be done before our ability to model OHPs allows us to predict their performance based on design specifications. Despite the promise of ML techniques, major challenges still need to be addressed. One serious difficulty is that many design choices must be made to implement ML models. For example, in the case of ANNs, one must determine the architecture of the neural network, the activation functions, and the learning rates. There is no overarching theory that can guarantee the suitability of a given set of choices for a specific application, and hence expert knowledge and even time-consuming trial-and-error approaches are often required. As we have seen, many studies test and compare different ML models with various implementations. However, the findings of these studies are not consistent; for example, in the case of Prashanth et al. [39], the best-performing ML model varied depending on the working fluid used in the experiments.
Another common disadvantage of ML models is that they rely on data that are typically costly and difficult to obtain, and their predictions are often inaccurate for input values that lie outside of the range used for training. The use of dimensionless numbers as inputs to ML models has been proposed to increase the flexibility of these models, at least across similar experimental setups. Wang et al. [33] were able to use dimensionless numbers to train an ANN that made accurate predictions over a wider range of values than seen in most studies. However, the training data used in that article spanned a wide range of values, and it is hard to determine whether the flexibility of the resulting model was due to the use of dimensionless numbers or to the rich training set.
In the case of OHPs, the difficulty of ML algorithms in extrapolating outside of the ranges used for training might be due to the fact that these algorithms operate as black boxes that map inputs to outputs without knowledge of the underlying physics of the problem. Khandekar et al. [20] have provided a fine illustration of this point. In that article, an experimental study was conducted on a ten-turn OHP in which 76 data points were recorded containing information about the filling ratio, heat load, and overall thermal resistance of the device. An ANN was trained to predict thermal resistance based on heat input and filling ratio. Although OHPs typically operate with filling ratios between 20% and 85%, 24 of the 76 data points were recorded with filling ratios outside this range. OHPs exhibit very different modes of operation when the filling ratio is close to 0% or 100% (see Fig. 10). Indeed, when there is almost no working fluid inside the pipe, there is a tendency toward dry-out of the evaporator. On the other hand, for a filling ratio close to 100%, the lack of vapor bubbles impedes the onset of oscillations, and the OHP operates essentially as a single-phase thermosyphon. When the filling ratio is between 20% and 85%, the OHP operates in its typical oscillating mode. As different physical phenomena and heat transfer mechanisms take place in each mode of operation, the authors studied the ability of the ANN to make accurate predictions about a system that experiences distinct phenomenological regimes. To this end, an ANN was trained using only data with filling ratios between 20% and 85%, where the OHP operates in its typical oscillating mode. In this situation, the ANN was able to make accurate predictions. On the other hand, the ANN's predictions were unreasonable when the full set of experimental data, with filling ratios ranging from 0% to 100%, was used. These results highlight the importance of guiding ML algorithms with knowledge of the physics underlying the system one seeks to represent.
While ML algorithms represent a powerful tool for studying OHPs, given the limitations outlined earlier, it appears unlikely that they will replace classical mathematical models and a thorough phenomenological understanding of OHP function. On the contrary, there have been many recent research efforts across science and engineering to integrate physical modeling and ML techniques so that each addresses the other's shortcomings. To accomplish this, a variety of approaches are currently the focus of intense research in fields ranging from materials science to physiology. Although there is no unified terminology, these techniques are often referred to collectively as physics-informed or physics-guided machine learning. A fine example of this approach is found in physics-informed neural networks (PINNs), which were introduced by Raissi et al. [92]. Even though PINNs are structurally identical to traditional neural networks (such as the one represented in Fig. 5), they use a slightly different training process. Classical networks are trained through a process (minimization of a loss function) that penalizes the discrepancy between their predictions and observed data. In addition to this, PINNs are also penalized if their predictions fail to satisfy physical equations that are known to hold for the studied system (for example, its conservation laws). This added penalization term enforces the physical feasibility of the network's predictions, rather than only enforcing agreement with observed data. It is then expected that a PINN would perform better in physical regimes that are underrepresented in the training data. It would be interesting, for example, to determine whether this approach can be used to overcome the difficulties discussed by Khandekar et al. [20], which were described in the previous paragraph.
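To make the composite loss concrete, the sketch below applies the PINN idea to a toy damped oscillator (a crude stand-in for the mass-spring-damper picture of Sec. 2): the network is fit to a few sparse "measurements" while being penalized for violating the governing ODE at collocation points. The equation, values, and hyperparameters here are illustrative assumptions, not an OHP model.

```python
# Hedged PINN sketch on a toy ODE: loss = data misfit + physics residual.
# The ODE m*x'' + c*x' + k*x = 0 and all numbers are illustrative assumptions.
import numpy as np
import tensorflow as tf

m, c, k = 1.0, 0.2, 4.0

net = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(1,)),
    tf.keras.layers.Dense(32, activation="tanh"),
    tf.keras.layers.Dense(32, activation="tanh"),
    tf.keras.layers.Dense(1),
])

# Sparse synthetic "measurements" and dense collocation points for the physics term
t_data = tf.constant([[0.0], [1.0], [2.0]])
x_data = tf.constant([[1.0], [0.3], [-0.4]])
t_col = tf.constant(np.linspace(0.0, 5.0, 100)[:, None], dtype=tf.float32)

optimizer = tf.keras.optimizers.Adam(1e-3)

def physics_residual(t):
    """Residual of m*x'' + c*x' + k*x evaluated at the network's prediction."""
    with tf.GradientTape() as g2:
        g2.watch(t)
        with tf.GradientTape() as g1:
            g1.watch(t)
            x = net(t)
        dx = g1.gradient(x, t)
    d2x = g2.gradient(dx, t)
    return m * d2x + c * dx + k * net(t)

for step in range(2000):
    with tf.GradientTape() as tape:
        loss_data = tf.reduce_mean((net(t_data) - x_data) ** 2)        # fit observations
        loss_phys = tf.reduce_mean(physics_residual(t_col) ** 2)       # enforce the ODE
        loss = loss_data + loss_phys
    grads = tape.gradient(loss, net.trainable_variables)
    optimizer.apply_gradients(zip(grads, net.trainable_variables))
```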
A different example of the physics-guided approach consists of augmenting an experimental data set with synthetic data generated by a mathematical model and then using the augmented data set to train an ML algorithm [93]. The idea is that physical knowledge of the system, encoded in the mathematical model, is embedded into the data set itself through the synthetic variables. Yet another approach to leveraging physical information in the implementation of ML algorithms is to modify the architecture of the neural network itself (Fig. 5) so that it satisfies a priori relations that are known to hold for the system (for example, invariance with respect to the action of a given group of transformations). These techniques and ideas are exhaustively surveyed and discussed by Karniadakis et al. [94].
Even though a comprehensive and reliable model of OHP function is not yet available, a wealth of knowledge regarding the physical principles guiding the operation of OHPs, as well as partial mathematical models, has been developed over the last 20 years. It seems unlikely that disregarding this knowledge in favor of purely data-driven techniques is a suitable way forward for the OHP modeling community. On the contrary, the vast amount of work that has been done in the modeling of OHPs, together with ML techniques such as the ones surveyed in this article, makes the field ripe for the synergistic interaction of ML, physics, and mathematical modeling.
Acknowledgment
The work presented in this article was supported by the Office of Naval Research Grant No. N00014-19-1-2006, under the direction of Dr. Mark Spector.
Conflict of Interest
There are no conflicts of interest.
Data Availability Statement
No data, models, or code were generated or used for this paper.