We compare the performance of human players against that of the efficient global optimization (EGO) algorithm for an NP-complete powertrain design and control problem. Specifically, we cast this optimization problem as an online competition and received 2391 game plays from 124 anonymous players during the first month after launch. We found that while only a small portion of human players can outperform the algorithm in the long term, players tend to formulate good heuristics early on that can be used to constrain the solution space. Constraining the search in this way enhances algorithm efficiency, even under different game settings. These findings indicate that human-assisted computational search is promising for solving comprehensible yet computationally hard optimal design and control problems, provided that human players can outperform the algorithm in the short term.

Introduction

Optimal design problems can still challenge our computational ability to solve them. A case in point is the design of an electric vehicle combining topology (configuration) design, proportional design, and control design for overall system optimization [1]. This problem is nondeterministic polynomial time (NP) complete, is not amenable to an all-in-one solution, and has a disjoint feasible domain. Other computationally hard design problems include material synthesis, drug design, and mechanical, electrical, and structural topology design [2–7].

More recently, “human computation” [8] has been reported as a promising alternative for solving tough optimization problems. A seminal experiment was the Foldit [9] game, which gathers large numbers of nonexpert players (a “crowd”) to perform protein structure prediction by minimizing the protein's energy through spatial rearrangement of its structure. Since its launch in 2009, Foldit has attracted 300,000 players and has shown the advantage of human spatial reasoning in pursuing near-optimal solutions [10]. A follow-up study showed that the most popular folding strategy derived from the crowd is comparable in performance to an expert-developed computer algorithm for protein folding [11]. Inspired by this success, Lee et al. tackled the challenge of ribonucleic acid (RNA) synthesis, a combinatorial optimization problem [12], with the eteRNA experiment [13], showing that a human crowd with no knowledge of the underlying science improved their problem-solving skills, outperformed existing computer algorithms, and even contributed knowledge to the creation of a more effective algorithm. Le Bras et al. proposed a way to expedite search in combinatorial problems by identifying the optimal values for a subset of variables through human–computer interactions [14]. They demonstrated that a material discovery problem can be cast as an online game solved visually by human players. The Phylo game is another example that utilized human pattern recognition capability in solving an NP-hard optimization problem called multiple sequence alignment, where players identify similar genome sequences of animals by moving colored blocks around [15].

These results indicate the potential of using human intuition (sampling good design solutions [10,12] and recognizing visual patterns [14,15]) and human intelligence (learning and generating design rules [11,12]), for solving challenging optimization problems or enhancing numerical optimization solvers. Gamification, i.e., using games purposefully [16], is one way to engage large numbers of human participants in such tasks. The above studies do not provide analysis on whether investment in gamification is more cost-effective than improving existing computer algorithms. However, it appears that such “citizen science” can be valuable when (a) a group of trained human solvers can be gathered and maintained, and (b) efficient solution of a large number of problems of similar mathematical nature is desired. At this point, the implementations of these existing games are case-dependent, and whether their individual success can be replicated in solving other problems remains an open question [17].

These successes and doubts regarding human computation motivated this paper. How much can gamification offer in solving computationally hard design optimization problems?

This paper investigates the potential benefits of using human crowd computation through games designed to address the aforementioned electric vehicle design problem. The basic idea is to “learn” (in computer science terminology) the problem's constrained solution space from successful and failed player attempts, and to search adaptively for a near-optimal solution within this constrained space. This idea is an extension of apprenticeship learning (see Ref. [18] for an example), where the machine apprentice follows or explores similar actions as the expert master.

The rest of the paper is structured as follows: We introduce the motivating problem of vehicle powertrain design and control in Sec. 2 and its gamification in Sec. 3. Section 4 presents computational solutions to the game problem without using information from human players, and Sec. 5 describes how the search is augmented with human plays. Section 6 compares the performance of human players against the computational solutions, and discusses the use of human computation as well as some technical details of the experiment. We conclude with Sec. 7.

Motivating Problem

A growing number of civil [19–22] and military [23–25] vehicle applications are being considered for electrification due to ever more restrictive emission-reduction and energy-security requirements. With different driving conditions (speed and power demands), the optimal powertrain design and control strategies of a vehicle can be quite different [26]. Therefore, efficient design automation is desired to identify optimal solutions (i.e., the powertrain design and the control strategy) given the input vehicle specifications and driving conditions.

Fathy et al. [27] and Peters et al. [28] investigated the combined design and control problem and showed that ignoring the coupling between design and control yields suboptimal solutions. One approach to handle the coupling is to formulate an all-in-one optimization problem that includes both design and control in a single formulation [29,30]; another is to nest optimal control within the design problem. The latter approach is more commonly used in hybrid electric vehicle configuration studies [1,31–33], where existing optimal control algorithms such as dynamic programming (DP) [21,34], Pontryagin's minimum principle (PMP) [35,36], and the equivalent consumption minimization strategy (ECMS) [37] can be applied. The DP method finds the control strategy with globally optimal energy efficiency by discretizing the state space and using optimal control to find a shortest path; PMP and ECMS have lower computational cost but find only locally optimal solutions, and they require model simplifications to guarantee global optimality [36]. The computational cost of DP grows exponentially with the size of the state space and the number of time steps [38]. The search for the optimal powertrain design and control policy can therefore become expensive, especially when a large variety of topology design candidates exists.

Optimal control algorithms commonly used for vehicle control require full information about the mechanical and electrical behavior (models), as well as future driving conditions, i.e., probability distributions of speed and power demand. When this information is (partially) unknown, learning mechanisms are required to guide the control strategy. In particular, when repeated trials are allowed, such as during the design phase, apprenticeship (imitation) learning algorithms [18,39] have been developed for the controller to follow sample trajectories demonstrated by human experts. This paper differs from existing reinforcement learning studies in two aspects: (1) While existing work either utilizes an existing human expert [39] or learns a strategy (perhaps implicitly through a reward function) natural to human beings [18], here we examine a problem that is hard for individual players and is solved through crowd competition; (2) rather than developing a learning algorithm, here we focus on understanding the efficacy of human computation.

The EcoRacer Game

We introduce the ecoRacer game [40] that translates the powertrain design and control problem into a crowd competition. The game asks players to drive an electric car through a given track. Unlike traditional racer games that compete on speed, players must spend as little energy as possible to complete the track within 36 s (see footnote 2). To achieve this goal, players must employ good control strategies in combination with selection of a final drive ratio, a design variable. The final drive ratio range is set between 10 and 40. A higher ratio results in a larger output torque to the front wheels but a lower maximum vehicle speed. After each completion of the track, the battery energy consumption is submitted as the player's score, and players are notified about their current ranking among all competitors. The leading scores, along with their final drive ratios, are also shown to the players. These features follow popular existing games, and we hypothesize that they trigger competition among players, leading to more plays of the game; this hypothesis is not tested in this paper. Game features are summarized in Fig. 1.

Fig. 1: The ecoRacer game interface (a)–(d) and settings (e). The game interface (a) is designed to work for computer and mobile screens. Touching the gear icon to the right enters the design screen (b), where the final drive ratio can be tuned. After each play, the score board (c) is shown, and the player can review speed and motor efficiency profiles of the current play in (d). Vehicle specifications, motor efficiency map, and maximum torque curves are shown in (e).

The control involves only acceleration and braking to ensure that the game is intuitive and easy to play on mobile platforms and for all audiences, including those with no driving experience. The controls are presented to the player at the beginning of the game. Energy consumption (regeneration) by acceleration (braking) is visualized on a battery bar. The Chipmunk 2D physics engine [41] models the track, car body, motor, wheels, and suspensions, with the supplied motor efficiency map, maximum and minimum torque curves, and other vehicle parameters. The physics engine simulates the scene once every 1/48 s, allowing precise control of the car by the player. On mobile devices with lower computing capability, this high-frequency simulation may lead to slower car movement; if the clock counted in real time, players on these devices would become less competitive. To address this issue, we defined “one second” in the game as the time spent on completing 48 simulation calls.
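As a minimal sketch of this simulation-step clock, the loop below counts physics steps rather than wall time; step_physics and controller are hypothetical stand-ins for the Chipmunk 2D engine call and the player's input handler.

```python
# Sketch of the game clock: "one second" is defined as 48 physics steps,
# so devices that simulate slowly are not penalized in game time.
SIM_STEPS_PER_SECOND = 48
DT = 1.0 / SIM_STEPS_PER_SECOND  # physics step of 1/48 s

def run_race(step_physics, controller, time_limit_s=36):
    """Advance the race until the in-game time limit; returns elapsed game time."""
    steps = 0
    while steps < time_limit_s * SIM_STEPS_PER_SECOND:
        action = controller()       # accelerate, brake, or no action
        step_physics(DT, action)    # hypothetical engine call: advance 1/48 s
        steps += 1                  # the game clock counts steps, not wall time
    return steps / SIM_STEPS_PER_SECOND
```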

Computational Solutions of the EcoRacer Game

In order to benchmark the performance of human players, this section provides two computational solutions to the ecoRacer game. The first considers the game as a nested optimization problem: the outer loop searches for an optimal final drive ratio, while the inner loop solves a DP problem to find the optimal control policy for the given design. Through a nearly exhaustive search of the discrete control and design space, we can obtain a solution close to the true global optimum. We emphasize that this approach requires all models and parameters that constitute the game to be known. In contrast, the second approach treats the game as a black-box function that takes control and design variables as inputs and outputs a score. The black-box function is learned from existing trials through metamodeling (response surfaces). This latter approach is closer to how we usually solve optimization problems involving simulations or experiments; it is also a fair comparison to human players, as both the machine and humans learn and improve through playing the game.

Optimal Design and Planning With Full Knowledge.

Let $P_{batt}$ be the instantaneous battery power consumption, $\omega_{mot}(t)$ and $T_{mot}(t)$ the motor speed and torque at time $t$, $v$ the vehicle speed, and $a$ the vehicle acceleration. Also, denote by $t_f$ the time spent on finishing the race, $x(t_f)$ the distance covered at $t_f$, $x_{final}$ the total track length, $v_{max}$ the maximum speed limit, and $a_{max}$ and $a_{min}$ the limits on the vehicle acceleration. The objective is to minimize the battery consumption, denoted by $E_{batt}$, with respect to the final drive ratio $\rho$ and the control $T_{mot}(t)$, while completing the race within the time limit $t_{max}$. Note that $t_f$ is not a free variable; it depends on the decision variables $\rho$ and $T_{mot}(t)$. This minimization problem can be formulated as follows:

$$
\begin{aligned}
\min_{\rho,\; T_{mot}(t)} \quad & E_{batt} = \int_{0}^{t_f} P_{batt}\, dt \\
\text{subject to} \quad & x(t_f) = x_{final}, \qquad t_f \le t_{max}, \\
& 0 \le v(t) \le v_{max}, \qquad a_{min} \le a(t) \le a_{max}
\end{aligned}
\tag{1}
$$
The vehicle speed and acceleration are determined by the motor speed $\omega_{mot}$ and motor torque $T_{mot}$ following

$$
v = \frac{\omega_{mot} R_{tire}}{\rho}, \qquad
\left( M_{veh} + \frac{J_{wheel}}{R_{tire}^2} \right) a = \frac{\rho\, T_{mot}}{R_{tire}} - F_{res}
\tag{2}
$$

where Rtire is the wheel radius, Jwheel is the wheel inertia, Mveh is the vehicle mass, and Fres is the resistive force, including both road resistance and extra load from road inclination.

We solve the energy minimization problem in Eq. (1) using a nested formulation described below.

Inner Loop Control Problem.

The inner loop control problem for a given design is solved by DP. Specifically, we discretize the track into N = 180 equal steps, each corresponding to 5 m. This setting provides a sufficiently accurate solution while keeping a single call of the DP algorithm computationally affordable. We use the horizontal position and speed of the vehicle as state variables, and the change of speed, $\Delta v$, as the decision variable. At each step $i$, $\Delta v$ can take three values that correspond to the three actions players can take during the game: $\Delta v_{max,i}$ for acceleration, $\Delta v_{min,i}$ for braking, and 0 for no action. The values of $\Delta v_{max,i}$ and $\Delta v_{min,i}$ are determined by the motor torque limits based on the current state. To incorporate the time constraint, we first calculate the time cost along with the energy consumption for each decision at every state, and then treat the violation in the total time span as a penalty in the DP objective. With these assumptions, the DP formulation in Eq. (1) for given $\rho$ becomes

$$
\min_{\{\Delta v_i\}} \; \sum_{i=1}^{N} \Delta E_{batt,i} \;+\; \lambda\, \max\!\left( \sum_{i=1}^{N} \Delta t_i - t_{max},\; 0 \right)
\tag{3}
$$

where $\lambda \ge 0$ is the penalty weight; a larger $\lambda$ results in a smaller $t_f$. We use the bisection method to find the minimum $\lambda$ that satisfies the time constraint. Note that this requires solving the DP problem multiple times.
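As an illustration of this nested procedure, the sketch below pairs a backward value iteration over (position, speed) states with a bisection on λ. The functions energy_cost, time_cost, actions, and total_time are hypothetical stand-ins for the vehicle model, the speed grid is assumed for illustration, and the time penalty is folded into the stage cost here, which is one convenient variant of Eq. (3).

```python
import numpy as np

# A minimal sketch of the inner loop, assuming hypothetical model functions:
# energy_cost(i, v, dv) and time_cost(i, v, dv) for one 5 m track step, and
# actions(i, v) returning the admissible {dv_max, dv_min, 0} at state (i, v).

N = 180                      # track discretized into 180 steps of 5 m
V = np.linspace(0, 30, 61)   # assumed speed grid, m/s

def dp_cost(lam, energy_cost, time_cost, actions):
    """Backward value iteration minimizing energy + lam * time."""
    J = np.zeros(len(V))                              # terminal cost-to-go
    for i in range(N - 1, -1, -1):
        J_new = np.full(len(V), np.inf)
        for k, v in enumerate(V):
            for dv in actions(i, v):
                k_next = np.abs(V - (v + dv)).argmin()    # nearest grid speed
                stage = energy_cost(i, v, dv) + lam * time_cost(i, v, dv)
                J_new[k] = min(J_new[k], stage + J[k_next])
        J = J_new
    return J[0]                                       # race starts from rest

def min_feasible_lambda(total_time, t_max=36.0, lo=0.0, hi=100.0, tol=1e-3):
    """Bisection for the smallest lambda whose DP policy meets the time limit;
    total_time(lam) is assumed to return t_f of the lam-optimal policy."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (lo, mid) if total_time(mid) <= t_max else (mid, hi)
    return hi
```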

Outer Loop Design Problem.

The outer loop design problem searches for an optimal final drive ratio that minimizes battery energy consumption. The problem is discretized since players are only allowed to choose from a discrete set of final drive ratios. Starting with three initial $\rho$ values, we fit a quadratic function to approximate the energy consumption and iteratively minimize this function. Convergence is declared when two subsequent iterations give the same optimal solution. For $\rho > 20$, no $\lambda$ value yields a control solution that satisfies the time constraint, because a high final drive ratio limits the maximum vehicle speed. The optimal design $\rho^* = 18$ is identified, with 47.8% battery state of charge (SOC) upon finishing the track. Depending on the initial set of final drive ratios selected for the outer loop, the inner loop is called four to seven times, with each call taking 1.7 to 2 hrs to converge to a $\lambda$ that satisfies the time constraint for the given final drive ratio (see footnote 3). Figure 2 summarizes the optimization results. As validation, we tested the same strategy using the game engine and obtained a final battery SOC of 43.4%. The difference between the game engine and the DP calculation is due to (1) the discrepancy between the physics engine and the DP model, e.g., vehicle jumping can happen in the game but is not modeled in DP, and (2) the discretization scheme involved in the DP solution. A finer discretization would reduce the difference at the cost of increased computation time.
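A sketch of this outer loop is below. Here inner_loop(rho) stands in for the full DP-plus-bisection call above (a hypothetical name), the initial ratio set is assumed for illustration, and the fit is assumed to be convex.

```python
import numpy as np

# Sketch of the discrete outer-loop search over the final drive ratio.
# inner_loop(rho) is assumed to return the DP-optimal energy use for rho;
# each call is expensive (roughly 1.7-2 h in the reported runs).

def outer_loop(inner_loop, init_rhos=(12, 15, 18), lo=10, hi=20):
    evaluated = {r: inner_loop(r) for r in init_rhos}
    while True:
        r, e = zip(*sorted(evaluated.items()))
        a, b, _ = np.polyfit(r, e, 2)         # quadratic fit of energy vs. rho
        rho_next = int(round(np.clip(-b / (2 * a), lo, hi)))  # fitted minimizer
        if rho_next in evaluated:             # same minimizer twice: converged
            return rho_next, evaluated[rho_next]
        evaluated[rho_next] = inner_loop(rho_next)
```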

Fig. 2: DP results for the optimal final drive ratio. Top to bottom: road profile, optimal speed profile corresponding to $\rho^* = 18$, and optimal control decisions in terms of braking and acceleration.

Optimal Design and Control Through EGO.

We now discuss the EGO algorithm that iteratively learns a good design and control strategy without relying on the settings of the game.

Control Parameters.

For each play by the computer (or human player), we keep track of the control signals for acceleration and braking, denoted by c, along with the following four states: (1) track slope s (“1” for uphill, “−1” for downhill and “0” for flat ground), (2) remaining distance d, (3) remaining time t¯, and (4) vehicle speed v. We assume that the control strategy can be parameterized by some vector w, so that, given w, the control signal can be determined by the states
$$
c = u(s, d, \bar{t}, v; \mathbf{w})
\tag{4}
$$

where u(·) is a mapping from the joint space of states and control parameters to the control signal. Denote the space of w as W, which represents a subset of all control strategies that a human player can deploy. The formal determination of u(·) and W using human data will be discussed in Sec. 5. For elaboration on the search algorithm, it suffices to consider W as a bounded vector space and u(·) as a bounded function defined on W.

The Score.

Given a final drive ratio $\rho$ and control parameters $\mathbf{w}$, the game can be simulated to output the battery charge consumed, denoted by $e$ with $0 \le e \le 1$, and the remaining distance from the terminal, denoted by $d_{end}$. We define the following score as the objective to maximize:

$$
\text{score} = (1 - e)\,\mathbf{1}(d_{end} = 0) - d_{end}\,\mathbf{1}(d_{end} > 0)
\tag{5}
$$

where $\mathbf{1}(\cdot)$ is an indicator function that returns 1 when its argument is true and 0 otherwise. A successful play, where $d_{end} = 0$, outputs the final SOC, $(1 - e)$, while a failed play outputs the negative remaining distance. This objective favors completion of the track but also avoids scoring all failures identically.
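For concreteness, a direct transcription of Eq. (5) is sketched below.

```python
def score(e, d_end):
    """Score of a play per Eq. (5): the final SOC (1 - e) if the track is
    completed (d_end == 0), otherwise the negative remaining distance."""
    return (1.0 - e) if d_end == 0 else -d_end
```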

The EGO Algorithm.

We can now employ the EGO algorithm, a search routine suitable for optimizing a black-box function defined on a continuous and bounded design space [42]. Denote the solution space as S:=W×D, where D=[10,40] is the one-dimensional design space. The algorithm starts by sampling S. It then creates a kriging model using the initial samples and their responses. The next sample is chosen to maximize the expected improvement of the objective. Then, the kriging model is updated by incorporating the new sample and its response. The modeling and sampling procedure is repeated until some termination criterion is met. Figure 3 summarizes the EGO procedure. The nested maximum expected improvement problem is solved using a genetic algorithm.
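A minimal sketch of this loop follows. It uses a Gaussian process in place of the paper's kriging implementation and random candidate sampling in place of the genetic algorithm for the expected improvement subproblem; play, lower, and upper are assumed interfaces to the game and to the bounds of S.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor

# A minimal EGO sketch over S = W x D: fit a surrogate (standing in for the
# kriging model), sample where expected improvement is largest, and repeat.

def expected_improvement(gp, X_cand, y_best):
    mu, sd = gp.predict(X_cand, return_std=True)
    sd = np.maximum(sd, 1e-12)               # guard against zero variance
    z = (mu - y_best) / sd                   # we are maximizing the score
    return (mu - y_best) * norm.cdf(z) + sd * norm.pdf(z)

def ego(play, lower, upper, n_init=5, n_total=200, seed=0):
    rng = np.random.default_rng(seed)
    lower, upper = np.asarray(lower), np.asarray(upper)
    X = rng.uniform(lower, upper, size=(n_init, len(lower)))  # initial samples
    y = np.array([play(x) for x in X])                        # black-box scores
    while len(X) < n_total:
        gp = GaussianProcessRegressor(normalize_y=True).fit(X, y)
        cand = rng.uniform(lower, upper, size=(2000, len(lower)))
        x_next = cand[expected_improvement(gp, cand, y.max()).argmax()]
        X = np.vstack([X, x_next])
        y = np.append(y, play(x_next))
    return X[y.argmax()], y.max()
```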

Fig. 3: A one-dimensional example of EGO search (a)–(d). At each iteration, the algorithm updates the kriging model (solid line) and chooses the next sample (triangle) according to the maximum expected improvement (merit) function.

Augmented EGO Search Using Human Plays

We now discuss parameterization of human plays and how the EGO algorithm can be improved based on these plays.

Parameterization of Human Plays.

Recall that each play can be represented as a sequence of five-tuples containing the control signal and the corresponding states. To avoid over-burdening the server, we collect these five-tuples at each unit distance during the play. We denote the total number of unit distances by $N$ and each recorded play as $\{c_i, s_i, d_i, \bar{t}_i, v_i\}$, for $i = 1, \ldots, N$ (see footnote 4).

Note that $\mathbf{w}$ represents the control strategy to be optimized along with the final drive ratio. For optimization purposes, it is ideal to find a low-dimensional $\mathcal{W}$ (and thus $S$) in which parameter vectors $\mathbf{w}$ can be found that preserve the recorded control signals according to Eq. (4). In practice, we manually tune the definition of $u(\cdot)$ so that replaying the best human play, i.e., the one with the highest recorded score, using the converted control parameters yields a score close to the original one. This manual process leads to the following choice of $u(\cdot)$:
(6)
For a recorded play, the control parameters $\mathbf{w}$ are chosen by minimizing the discrepancy between the true control signals and those derived from Eq. (4):

$$
\mathbf{w} = \arg\min_{\mathbf{w}'} \sum_{i=1}^{N} \left( c_i - u(s_i, d_i, \bar{t}_i, v_i; \mathbf{w}') \right)^2
\tag{7}
$$

We then replay all human plays using the converted control parameters along with their corresponding final drive ratios to update scores.
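A sketch of this conversion is below, assuming a linear-in-features form of u(·); the features() basis is a hypothetical stand-in for the manually tuned bases of Eq. (6), chosen here only to match the nine-dimensional parameter space.

```python
import numpy as np

# Sketch of Eq. (7): least-squares fit of w to one recorded play, assuming
# a hypothetical nine-term basis in place of the hand-tuned bases of Eq. (6).

def features(s, d, t_bar, v):
    return np.array([1.0, s, d, t_bar, v, s * v, d * v, t_bar * v, v ** 2])

def fit_control_params(play):
    """play: list of recorded tuples (c_i, s_i, d_i, t_bar_i, v_i)."""
    Phi = np.array([features(s, d, t, v) for _, s, d, t, v in play])
    c = np.array([rec[0] for rec in play])
    w, *_ = np.linalg.lstsq(Phi, c, rcond=None)  # minimizes sum (c_i - Phi w)^2
    return w
```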

Note that instead of handcrafting the function $u(\cdot)$, we could use a neural network to learn the mapping between states and control signals, and then treat the network parameters as control parameters. However, an arbitrarily defined network could require a larger number of control parameters to achieve a good fit to the data, thereby creating a larger search space for the EGO algorithm. For the same reason, a nonlinear kernel (with an infinite-dimensional feature space) is not used.

In addition, the EGO algorithm requires bounds to be specified for the space of control parameters. Here we identified that the control parameters of the best human play are bounded within [−3, 3]. Therefore, setting $\mathcal{W} := [-3, 3]^9$ ensures that the best human solution is available to EGO.

Classification of Human Solutions.

The parameterization step described above leads to a data set describing all human plays and their corresponding scores. To extract knowledge from these data, we create a one-class classifier for plays with positive scores. The classifier, denoted as $\phi([\rho, \mathbf{w}])$, predicts whether the input solution is likely to succeed ($\phi > 0$) or fail ($\phi \le 0$), and can be used as a constraint during the search. The classifier is built using LIBSVM [43] with a Gaussian kernel; the kernel parameter $\gamma$ is set to 0.1 and the training error parameter $\nu$ to $10^{-6}$. Our rationale for training the classifier with all plays with positive scores, rather than only the few with top scores, is that the former represent a broader range of strategies that finish the track. The resultant classifier thus represents a more relaxed constraint on the solution space than one derived from only the top plays, offering solution strategies that could generalize to versions of the ecoRacer game with settings different from the current one. To empirically demonstrate the knowledge learned by the classifier from human plays, we examine two basic control policies that should apply universally to all game settings: (A) if the vehicle has zero speed when going uphill, it should accelerate in order to complete the track; and (B) if the vehicle is approaching the terminal at a high speed, it should restore energy by braking.
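As a sketch, the classifier can be reproduced with scikit-learn's OneClassSVM (which wraps LIBSVM) under the reported settings; the data layout is assumed.

```python
import numpy as np
from sklearn.svm import OneClassSVM

# Sketch of the feasibility classifier phi([rho, w]) trained only on plays
# with positive scores, mirroring the reported settings (Gaussian kernel,
# gamma = 0.1, nu = 1e-6). scikit-learn's OneClassSVM wraps LIBSVM.

def train_phi(solutions, scores):
    X = np.asarray(solutions)               # rows: [rho, w_1, ..., w_9]
    positives = X[np.asarray(scores) > 0]   # plays that finished the track
    return OneClassSVM(kernel="rbf", gamma=0.1, nu=1e-6).fit(positives)

# phi > 0 marks a solution as likely to succeed:
#   phi = train_phi(X, y)
#   feasible = phi.decision_function(candidates) > 0
```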

To verify that $\phi > 0$ captures these two rules, we uniformly draw $10^6$ samples from $S$. For each sampled solution, we calculate its control signals under the variety of states specified in Fig. 4. For instance, for rule A, we fix the vehicle speed to zero and the slope to “uphill” while sweeping the other two states across all their feasible values. We consider each solution–state pair as a test point. The percentage of test points that give the correct control signal according to the two rules can be calculated among all samples, or among only those that satisfy $\phi > 0$. Results summarized in Fig. 4 show that incorporating $\phi > 0$ into the search offers a significantly higher chance of sampling reasonable solutions. The set $S$ contains a large number of irrational control strategies, e.g., braking while running out of time, or depleting energy to rush through the track. According to the one million samples, the subspace with $\phi > 0$ accounts for only 0.02% of the entire $S$. We note that while certain heuristics (e.g., rule A) can be manually coded into the search by the algorithm designer, others cannot. Taking rule B as an example, while human players would decelerate toward the terminal to regenerate energy, the relationship between the timing of braking and the vehicle states (e.g., speed and distance from the terminal) is difficult to formulate explicitly, especially when neither the environment nor the vehicle model is known.
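The rule-A check can be sketched as follows; u_signal is a hypothetical wrapper that evaluates Eq. (6) for a sampled solution, with positive output taken to mean acceleration.

```python
import numpy as np

# Sketch of the rule-A compliance test: fix speed at zero and slope to
# "uphill" (s = 1), sweep remaining distance and time, and count how often
# a sampled solution commands acceleration. u_signal is hypothetical.

def rule_a_compliance(samples, u_signal, d_grid, t_grid):
    hits, total = 0, 0
    for rho_w in samples:                    # solutions drawn uniformly from S
        for d in d_grid:
            for t_bar in t_grid:
                c = u_signal(rho_w, s=1, d=d, t_bar=t_bar, v=0.0)
                hits += int(c > 0)           # rule A says: accelerate
                total += 1
    return hits / total                      # fraction of compliant test points
```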

Fig. 4: The “learned” subspace of solutions provides more rational control strategies. (a) Percentage of test points that follow rules (A) and (B) for a variety of states; the percentage is calculated for all uniformly sampled solutions in S, and for those that satisfy ϕ > 0. (b) A summary of tested states and percentages averaged across all state values.

Experiment and Results

We summarize the experiment with human players and compare their performance with that of the EGO algorithm. We then apply the classifier learned from human plays to the same energy optimization problems with different track settings.

Comparison Between Human Players and the Algorithm.

The ecoRacer game was announced on Facebook and WeChat on October 14th, 2014, and introduced to a sophomore engineering design class at the University of Michigan on November 4th, 2014. The statistics and analysis in this paper are based on data received before November 26th, 2014. The development team was advised not to play the game during this period. A total of 124 unique participants registered and played the game, with 2391 plays recorded. The best play (by user “ikalyoncu” on his or her 150th trial) reached a score of 43.2%, only marginally worse than the DP solution of 43.8%. In addition, this player identified the actual optimal final drive ratio of $\rho^* = 18$. Statistics of the game are summarized in Fig. 5. Raw player inputs and corresponding scores can be accessed online (see footnote 5).

Fig. 5: Statistics of the ecoRacer game. (a) Distribution of players by the number of games they played, and distribution of plays by their score. (b) Performance comparison of human players and the EGO algorithm for the first 200 plays.

In the same figure, we compare human performance, colored by player IDs, against that of EGO in the first 200 plays. The EGO implementation starts with five initial samples uniformly drawn from $S$ in the first iteration. Data from five independent EGO runs are collected due to the probabilistic nature of the initial sampling and of the genetic algorithm employed to solve the maximum expected improvement problem. For both human and EGO plays, we also plot the averaged best scores. For human players, the calculation at each iteration only considers those who reach that iteration. Lastly, since all failed human plays are stored with scores of zero, we assign failed EGO plays scores of zero for a fair comparison.

Major findings from the experiment are as follows: (1) Most players quit early and are thus outperformed by the computer in the long run; only three players have higher scores than the computer after 200 plays. In addition, the fact that only 41% of players (51 out of 124) played more than 10 times suggests that the game is either too hard or not interesting enough for the participants. (2) For dedicated players who kept playing, however, progress across trials is evident, especially when compared with the EGO results: While the EGO algorithm is designed to balance exploration of the space and exploitation of observations, the ten-dimensional solution space and the rugged objective function prevented the algorithm from converging even to a local solution within 200 iterations. Although the algorithmic parameters (such as the Gaussian parameter and the merit function form) could be tuned to improve EGO's performance on a specific problem, dedicated human players provided an alternative way to search with no parameter tuning. Note that the bimodal distribution of scores toward 200 plays, i.e., high scores and zeros, is due to the nature of the game: high scores are achieved by finishing the track nearly on time while spending just enough energy. Our interpretation is that human players gradually learned this rule, trying to improve their scores while risking failing the game.

These data, along with feedback from some of the players, provide us the qualitative understanding that human players are capable of learning quickly and creating good solutions, but only a few can optimize their solution by precisely executing near-optimal control strategies.

Search With and Without Crowdsourced Knowledge.

The comparison between the EGO algorithm and the players alone showed limited advantage of the latter in finding a good solution in the long run, largely because only a few players had the persistence needed to fine-tune their strategy through repeated play. However, as we show in Fig. 6, the method introduced in Sec. 5.2 allows us to turn player data into a constraint on the solution space and to enhance the search for games that were never played by humans.

Fig. 6: Comparison between EGO with and without human knowledge, using the inverse, hill, zigzag, and long tracks. The long track is five times as long as the others.

Specifically, we tested four different tracks to investigate whether the learned knowledge is transferable or scalable: the “inverse,” “hill,” “zigzag,” and “long” tracks. The inverse track is a flipped version of the original track; the long track duplicates the original track, and the energy capacity, five times. For each track, we run the EGO algorithm with and without enforcing the constraint $\phi > 0$ during the search. In the computation, a penalty of $10^6 \phi$ is added to the score if the solution is infeasible ($\phi < 0$). In each case, the algorithm starts with five initial samples and terminates after a total of 200 samples. Since $\phi$ is created using a support vector machine, the initial samples are randomly chosen among the support vectors when the constraint is incorporated, or otherwise randomly sampled from $S$.
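Folding the constraint into the black-box objective can be sketched as below, reusing the train_phi classifier sketched in Sec. 5.2; play is the assumed game interface.

```python
# Sketch of the penalized objective used in the constrained EGO runs:
# infeasible solutions (phi < 0) receive a large negative penalty, steering
# the search toward the subspace learned from successful plays.

def penalized_score(play, phi, x):
    s = play(x)                             # black-box game score
    f = phi.decision_function([x])[0]       # signed distance to the boundary
    return s if f >= 0 else s + 1e6 * f     # f < 0 here, so this is a penalty
```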

Figure 6 compares performance from the two algorithm settings, using the average values and standard errors from five independent EGO runs for each case. The result shows that while EGO without human plays can identify good solutions in the long run, the algorithm can gain a significant early advantage using heuristics extracted from human plays. Further, knowledge learned from human players is transferable to different track settings and scalable to longer tracks, offering a promising solution to the “curse of dimensionality” [38].

One may naturally question whether the EGO algorithm could learn from its own plays for future improvement, and whether collecting all 2391 human plays is necessary to achieve good heuristics. While this paper does not address these questions theoretically, we used the long track to show that, for the same small set of human and EGO plays, the former leads to better future search; with enough plays, the two contribute similarly to future search. Figure 7 compares the EGO performance on the long track under four different constraints, each generated from plays on the original track: (1) all human plays, (2) all EGO plays, (3) the first 500 human plays, and (4) the first 500 EGO plays. The results show significantly worse performance with the constraint learned from the first 500 EGO plays, while learning from all EGO plays achieves performance comparable to that from all human plays.

Fig. 7: EGO performance on the long track with constraints from (1) all human plays, (2) all EGO plays, (3) the first 500 human plays, and (4) the first 500 EGO plays.

Lastly, learning from too small a set of human plays can diminish search performance. Applying a constraint learned from the first 100 human plays, the averaged score for the long track after 200 iterations is 39, significantly lower than that from the previous experiment (Fig. 6) with no human plays (60). A closer investigation shows that this constraint classifies all best solutions discovered by unconstrained EGO runs as infeasible. In other words, a small number of plays with a low success rate can lead to a biased conclusion and reject potential solutions, yielding a search algorithm that is worse than one with no prior knowledge. To verify this further, we created five classifiers $\hat{\phi}_{100}, \hat{\phi}_{200}, \ldots, \hat{\phi}_{500}$ from the first 100 to 500 plays and checked whether they correctly identify the best solutions (obtained from unconstrained EGO runs) for all tracks as feasible. Results are consistent across all tracks: the best solutions discovered by unconstrained EGO runs are identified as feasible only after 400 or more plays are considered.

To understand this finding better, we calculated the success rate of human players, i.e., the rate of finishing the track with nonzero remaining energy, for the first 100 to 500 plays. These rates are 10.0%, 19.0%, 29.3%, 42.3%, and 38.0%, respectively. We can then hypothesize that human plays become useful once players achieve a certain success rate, i.e., discover good but not necessarily optimal solutions; however, the rate drops when new players enter with unsuccessful plays. With an increasing number of plays, we observe an improvement not only in the success rate in general, but also in the score. Therefore, determining the number of plays needed to generate a useful EGO constraint based on success rate alone could be conservative.

Discussion.

The experiment and simulation studies presented here help to answer questions regarding the use of human computation for engineering design.

Why would humans be useful in an optimization task? Human computation, often in the form of crowdsourcing, has mainly been successful in batch tasks that rely on human intuition rather than computational resources; see Ref. [8] for examples. Nonetheless, human computation becomes especially beneficial when intuitive tasks are actually computationally expensive. As demonstrated by Foldit and eteRNA, some human players have an excellent ability to solve spatial optimization problems, with search efficiency that outperforms computer algorithms. The ecoRacer game, similarly, is based on the hypothesis that some players are well trained at tuning their control strategy and design through trial and error. The hypothesis is reasonable considering that people around the world spend $3 \times 10^9$ hrs every week training themselves in online games [17], and that a large portion of these games requires players to perfect their control. As shown in Fig. 5, some players learned good strategies much earlier than the algorithm did. However, the underlying optimization problem must be well translated (and camouflaged) by the game so that people can appreciate the problem and enjoy the fun without facing the actual engineering problem. This requirement leads to the next question.

Is it worth expending the effort of translating a particular problem, and even building a game for it, in the hope that some talented people could solve it? Human computation experiments require significant effort to create the proper game environment and to establish sophisticated competition, collaboration, and rewarding mechanisms. These efforts are necessary for continuously growing the player population in order to increase the chance of finding talented players. The efforts are not likely to be worthwhile if the underlying problem needs to be solved only once: given sufficient computational resources, a computer algorithm is likely to outperform human players either by iterating long enough (see Fig. 7 in Ref. [11] for an example) or by solving the problem via brute force, e.g., generating the entire tree of states and costs for DP (see footnote 6). Protein folding [10], RNA synthesis [12], and powertrain design for vehicles of particular usage may be tough enough problems requiring repeated solution, and thus worthy of human computation investment. In addition, we should emphasize that extracting heuristics from human players and applying them to new problem settings, as demonstrated in Refs. [11] and [12] and in this study, could be of significant importance in solving problems facing the curse of dimensionality. As noted in Sec. 6.2, the machine can also learn from its own experience to effectively constrain its search in the long run. Therefore, human computation has an advantage when human players can outperform the algorithm in the short run. The short-term advantage of human search is that it may capture prior knowledge about the problem that a machine does not possess. Future work should therefore investigate the extraction of such human prior knowledge from a limited number of effective player solutions.

In summary, while we cannot yet compare the cost effectiveness of human computation versus optimization algorithms, human computation can be valuable for (i) problems comprehensible to people without specialized knowledge, with evidence suggesting that humans find solutions based on their experience that are not discovered by an algorithm; and (ii) problems that must be solved many times with different parameters, so that long-term gains can compensate for the initial game infrastructure investment.

Given that players are more likely to spend their spare time on playing regular games than solving scientific or engineering puzzles, is human computation a realistic alternative? For example, while Foldit attracted 300,000 players, there are hundreds of millions of active users of Angry Birds [17]. Given this reality, even if human computation is proven viable for solving computationally expensive design optimization problems, would there be enough players per problem for the strategy to be effective? Our experiment, as well as existing reports on human computation, showed that the problem can be efficiently solved by a small population of core players. In the case of ecoRacer, it was those players who contributed the successful plays. This finding implies that while the crowd attracted by any particular problem will not necessarily be large, these self-motivated participants may suffice to produce valuable data to expedite problem solving.

A related question is how challenges of translating and camouflaging the problem would be addressed for design problems more complicated than the ecoRacer. We believe that these challenges may not relate to complexity of the design problem, e.g., large problem size, nonconvexity, or lack of models. Existing problems, such as in Foldit, eteRNA, and in this paper, are complicated (NP-complete or NP-hard), yet easily understood by most players. Rather, the challenge is to make the design problem naturally fun to play. Therefore, a significant amount of effort may be needed to calibrate game mechanisms so that the crowd is incentivized, if the problem does not by itself attract players. We have not yet studied whether such mechanisms are effective across problems, regardless of problem complexity.

Some technical issues of the presented ecoRacer experiment are discussed below:

(1) From a reinforcement learning perspective, the EGO approach can be categorized as “direct policy search,” where the entire simulation is treated as a black box. All comparisons between EGO and human plays are therefore limited to our current algorithm design. Other popular direct methods such as NEAT [44], as well as indirect algorithms such as actor-critic [45], should be tested, and their integration with human play data should be investigated.

(2) Converting user control signals to control parameters w caused discrepancies in the resultant scores. This is because the bases in the mapping u(·) are chosen so that the score derived from the best play's control parameters is close to its actual score; therefore, the classifier ϕ is not derived directly from the genuine player data. This discrepancy can be reduced by choosing a better set of bases for u(·) that closes the gap between each play's original score and that from the converted control parameters.

(3) We used a one-class classification that extracts knowledge solely from successful plays. An alternative approach would be to take failed plays into account and derive a binary classifier. It is also possible to create the classifier by utilizing the actual scores, as opposed to the binary “failed” and “successful” labels.

(4) The motivation for showing the best scores as well as the corresponding drive ratios is to give players an incentive and some hints to further improve their solutions. We believe that such a game mechanism is necessary to motivate enough players to participate. However, this mechanism makes the human computation (the game) differ from the EGO algorithm: (i) the best scores allow players to measure how close their solutions are, based on which they may either fine-tune their strategies or choose to explore new ones; (ii) the displayed drive ratio allows newcomers to quickly converge to a good drive ratio without much exploration, providing them a good initial drive ratio for their search. Nevertheless, even with this information sharing, most players are still outplayed by the computer in the long run, and we observe no convergence to the optimal drive ratio in the first 500 game plays from the crowd. Therefore, the conclusions in Sec. 6.2 remain valid even though the mechanisms of the game and the EGO algorithm differ slightly.

Conclusion

We examined the value of incorporating game-based human computation in solving optimization problems in the context of an optimal powertrain design and control problem. The results showed that while only a small portion of human players outperformed the EGO algorithm, useful heuristics can be extracted from the recorded plays and effectively applied to problem settings that were never presented to the players. This indicates the promise of human computation in transferring knowledge learned from human-comprehensible problems to similar ones of larger scale or difficulty, to achieve scalable and effective search. The findings from this paper offer useful insights for future attempts at solving computationally expensive engineering optimization problems through human computation. In future work, it would be interesting to investigate whether useful information can be extracted from the evolution of human plays, and how decomposition strategies involving problem partitioning and coordination, which have been effectively applied to numerical solvers, can be incorporated within a human computation framework. With a larger crowd of participants, we could also investigate the relationship between crowd size and performance. Lastly, it is valuable to explore game mechanisms that encourage players to improve their strategies in more efficient ways.

Acknowledgment

This work has been supported by the National Science Foundation under Grant No. CMMI-1266184. This support is gratefully acknowledged. We thank anonymous reviewers for insightful critique that helped improve this presentation significantly. We also thank members of the Optimal Design Laboratory at the University of Michigan for suggestions and advice on game design and experimentation, and all our players for making this paper possible.

2. This time limit is calibrated empirically so that the game is challenging enough to encourage replays, but not so difficult as to drive new players away.

3. This simulation is performed using a desktop computer with an Intel Xeon E5-2620 CPU clocked at 2.10 GHz and 128 GB RAM.

4. The scores of replays using these recorded data are not the same as the original scores from players. This is because, while the game takes player inputs once every 1/48 s, we only record a subset of these signals at discrete distance values.

6. While finding a shortest path can be done in polynomial time, the cost of generating the graph could grow exponentially with respect to the track length.

References

1. Bayrak, A. E., Ren, Y., and Papalambros, P. Y., 2013, “Design of Hybrid-Electric Vehicle Architecture Using Auto-Generation of Feasible Driving Modes,” ASME Paper No. DETC2013-13043.
2. Larsen, U. D., Sigmund, O., and Bouwstra, S., 1997, “Design and Fabrication of Compliant Micromechanisms and Structures With Negative Poisson's Ratio,” J. Microelectromech. Syst., 6(2), pp. 99–106.
3. Miskin, M. Z., and Jaeger, H. M., 2013, “Adapting Granular Materials Through Artificial Evolution,” Nat. Mater., 12(4), pp. 326–331.
4. Schneider, G., and Fechner, U., 2005, “Computer-Based De Novo Design of Drug-Like Molecules,” Nat. Rev. Drug Discovery, 4(8), pp. 649–663.
5. Koza, J. R., Bennett, F. H., III, Andre, D., Keane, M. A., and Dunlap, F., 1997, “Automated Synthesis of Analog Electrical Circuits by Means of Genetic Programming,” IEEE Trans. Evol. Comput., 1(2), pp. 109–128.
6. Bendsøe, M. P., and Kikuchi, N., 1988, “Generating Optimal Topologies in Structural Design Using a Homogenization Method,” Comput. Methods Appl. Mech. Eng., 71(2), pp. 197–224.
7. Kicinger, R., Arciszewski, T., and Jong, K. D., 2005, “Evolutionary Computation and Structural Design: A Survey of the State-of-the-Art,” Comput. Struct., 83(23), pp. 1943–1978.
8. von Ahn, L., 2009, “Human Computation,” 46th ACM/IEEE Design Automation Conference, San Francisco, CA, July 26–31, pp. 418–419.
9. Cooper, S., Khatib, F., Treuille, A., Barbero, J., Lee, J., Beenen, M., Leaver-Fay, A., Baker, D., Popović, Z., and >57,000 Foldit Players, 2010, “The Challenge of Designing Scientific Discovery Games,” International Conference on the Foundations of Digital Games, Monterey, CA, pp. 40–47.
10. Cooper, S., Khatib, F., Treuille, A., Barbero, J., Lee, J., Beenen, M., Leaver-Fay, A., Baker, D., and Popović, Z., 2010, “Predicting Protein Structures With a Multiplayer Online Game,” Nature, 466(7307), pp. 756–760.
11. Khatib, F., Cooper, S., Tyka, M. D., Xu, K., Makedon, I., Popović, Z., Baker, D., and Players, F., 2011, “Algorithm Discovery by Protein Folding Game Players,” Proc. Natl. Acad. Sci., 108(47), pp. 18949–18953.
12. Lee, J., Kladwang, W., Lee, M., Cantu, D., Azizyan, M., Kim, H., Limpaecher, A., Yoon, S., Treuille, A., and Das, R., 2014, “RNA Design Rules From a Massive Open Laboratory,” Proc. Natl. Acad. Sci., 111(6), pp. 2122–2127.
13. Lee, J., Kladwang, W., Lee, M., Cantu, D., Azizyan, M., Kim, H., Limpaecher, A., Yoon, S., Treuille, A., and Das, R., 2014, “Eterna,” http://eterna.cmu.edu
14. Le Bras, R., Bernstein, R., Gomes, C. P., Selman, B., and Van Dover, R. B., 2013, “Crowdsourcing Backdoor Identification for Combinatorial Optimization,” Twenty-Third International Joint Conference on Artificial Intelligence, Beijing, Aug. 3–9, AAAI Press, pp. 2840–2847.
15. Kawrykow, A., Roumanis, G., Kam, A., Kwak, D., Leung, C., Wu, C., Zarour, E., Sarmenta, L., Blanchette, M., and Waldispühl, J., 2012, “Phylo: A Citizen Science Approach for Improving Multiple Sequence Alignment,” PLoS One, 7(3), p. e31362.
16. Von Ahn, L., 2006, “Games With a Purpose,” Computer, 39(6), pp. 92–94.
17. Schrope, M., 2013, “Solving Tough Problems With Games,” Proc. Natl. Acad. Sci., 110(18), pp. 7104–7106.
18. Abbeel, P., and Ng, A. Y., 2004, “Apprenticeship Learning Via Inverse Reinforcement Learning,” Twenty-First International Conference on Machine Learning, Banff, AB, Canada, July 4–9, p. 1.
19. Folkesson, A., Andersson, C., Alvfors, P., Alaküla, M., and Overgaard, L., 2003, “Real Life Testing of a Hybrid PEM Fuel Cell Bus,” J. Power Sources, 118(1), pp. 349–357.
20. Gao, D., Jin, Z., and Lu, Q., 2008, “Energy Management Strategy Based on Fuzzy Logic for a Fuel Cell Hybrid Bus,” J. Power Sources, 185(1), pp. 311–317.
21. Lin, C.-C., Peng, H., Grizzle, J. W., and Kang, J.-M., 2003, “Power Management Strategy for a Parallel Hybrid Electric Truck,” IEEE Trans. Control Syst. Technol., 11(6), pp. 839–849.
22. Evans, D. G., Polom, M. E., Poulos, S. G., Van Maanen, K. D., and Zarger, T. H., 2003, “Powertrain Architecture and Controls Integration for GM's Hybrid Full-Size Pickup Truck,” SAE Technical Paper No. 2003-01-0085.
23. Fish, S., Savoie, T., and Vanicek, H., 2001, “Modeling Hybrid Electric HMMWV Power System Performance,” IEEE Trans. Magn., 37(1), pp. 480–484.
24. Antoniou, A., Komyathy, J., Bench, J., and Emadi, A., 2005, “Modeling and Simulation of Various Hybrid Electric Configurations of the High-Mobility Multipurpose Wheeled Vehicle (HMMWV),” IEEE Vehicle Power and Propulsion Conference, Chicago, IL, Sept. 7–9, pp. 507–514.
25. Ed Bargar, H., Li, J., Goering, D. J., and Lee, J. H., 2003, “Modeling and Verification of Hybrid Electric HMMWV Performance,” 29th Annual Conference of the IEEE Industrial Electronics Society, Roanoke, VA, Nov. 2–6, Vol. 1, pp. 939–944.
26. Liu, J., 2007, “Modeling, Configuration and Control Optimization of Power-Split Hybrid Vehicles,” Ph.D. thesis, The University of Michigan, Ann Arbor, MI.
27. Fathy, H. K., Reyer, J. A., Papalambros, P. Y., and Ulsoy, A., 2001, “On the Coupling Between the Plant and Controller Optimization Problems,” 2001 American Control Conference, Arlington, VA, June 25–27, Vol. 3, pp. 1864–1869.
28. Peters, D., Papalambros, P., and Ulsoy, A., 2010, “Relationship Between Coupling and the Controllability Grammian in Co-Design Problems,” 2010 American Control Conference, Baltimore, MD, June 30–July 2, pp. 623–628.
29. Allison, J. T., Guo, T., and Han, Z., 2014, “Co-Design of an Active Suspension Using Simultaneous Dynamic Optimization,” ASME J. Mech. Des., 136(8), p. 081003.
30. Allison, J. T., 2013, “Plant-Limited Co-Design of an Energy-Efficient Counterbalanced Robotic Manipulator,” ASME J. Mech. Des., 135(10), p. 101003.
31. Liu, J., and Peng, H., 2010, “A Systematic Design Approach for Two Planetary Gear Split Hybrid Vehicles,” Veh. Syst. Dyn., 48(11), pp. 1395–1412.
32. Zhang, X., Li, C.-T., Kum, D., and Peng, H., 2012, “Prius+ and Volt−: Configuration Analysis of Power-Split Hybrid Vehicles With a Single Planetary Gear,” IEEE Trans. Veh. Technol., 61(8), pp. 3544–3552.
33. Cheong, K. L., Li, P. Y., and Chase, T. R., 2011, “Optimal Design of Power-Split Transmissions for Hydraulic Hybrid Passenger Vehicles,” 2011 American Control Conference, San Francisco, CA, June 29–July 1, pp. 3295–3300.
34. Liu, J., and Peng, H., 2006, “Control Optimization for a Power-Split Hybrid Vehicle,” 2006 American Control Conference, Minneapolis, MN, June 14–16.
35. Delprat, S., Guerra, T., and Rimaux, J., 2002, “Control Strategies for Hybrid Vehicles: Optimal Control,” 56th IEEE Vehicular Technology Conference, Vancouver, BC, Canada, Sept. 24–28, Vol. 3, pp. 1681–1685.
36. Kim, N., Cha, S., and Peng, H., 2011, “Optimal Control of Hybrid Electric Vehicles Based on Pontryagin's Minimum Principle,” IEEE Trans. Control Syst. Technol., 19(5), pp. 1279–1287.
37. Paganelli, G., Delprat, S., Guerra, T.-M., Rimaux, J., and Santin, J.-J., 2002, “Equivalent Consumption Minimization Strategy for Parallel Hybrid Powertrains,” 55th IEEE Vehicular Technology Conference, Birmingham, AL, May 6–9, Vol. 4, pp. 2076–2081.
38. Bellman, R., 1957, Dynamic Programming, 1st ed., Princeton University Press, Princeton, NJ.
39. Abbeel, P., Coates, A., and Ng, A. Y., 2010, “Autonomous Helicopter Aerobatics Through Apprenticeship Learning,” Int. J. Rob. Res., 29(13), pp. 1608–1639.
40. Ren, Y., and Bayrak, A. E., 2014, “EcoRacer,” http://ecoracer.herokuapp.com
41. Lembcke, S., and Gentle, J., 2007, “Chipmunk Physics JS,” https://github.com/josephg/Chipmunk-js
42. Jones, D., Schonlau, M., and Welch, W., 1998, “Efficient Global Optimization of Expensive Black-Box Functions,” J. Global Optim., 13(4), pp. 455–492.
43. Chang, C.-C., and Lin, C.-J., 2011, “LIBSVM: A Library for Support Vector Machines,” ACM Trans. Intell. Syst. Technol., 2(3), p. 27.
44. Stanley, K. O., and Miikkulainen, R., 2002, “Efficient Evolution of Neural Network Topologies,” 2002 Congress on Evolutionary Computation, Honolulu, HI, May 12–17, Vol. 2, pp. 1757–1762.
45. Konda, V. R., and Tsitsiklis, J. N., 1999, “Actor-Critic Algorithms,” NIPS, 13, pp. 1008–1014.