This case study describes the application of Design and Analysis of Computer Experiments in a real-life customer project. Computer experiments differ somewhat from traditional experiments, since there is no measurement uncertainty. This implies that any difference between a prediction from a transfer function and the corresponding experimental observation is caused by imperfection in the transfer function. Transfer functions are nevertheless very useful in these situations.

## 1 Problem description

A manufacturer of high voltage tubes wants to develop a next-generation tube based on a current design. The developers have experience of how design choices affect the outcome. The goal is to design a tube with better performance, expressed in terms of its electrical properties. The team is free to choose the geometry of the tube and wishes to explore the possibilities systematically.

The part of the tube that is of interest is a flattened cone on top of a cylinder. Figure 1 shows the flattened cone; Figure 2 is the cross-section, where the green rectangles in the two figures correspond to each other.

The geometry of the tube is still open to choice and is described by 11 design parameters, indicated by *x _{1}*–*x _{11}*.

The developers can use a simulation tool that takes the geometry and tube properties as input and predicts the electrical behavior, *e.g.* the electric field at each point. Ultimately these were summarized in 14 response parameters, described here as *y _{1}*–*y _{14}*. Note that one simulation takes about two hours.

There are several constraints on the *x*'s and *y*'s, based on geometric restrictions and electrical properties of the tube. The objective varied during the course of the project, but always remained quite simple: minimize or maximize a single *x* or *y*.

To give the general idea, in Section 2 we will first explain the idealized way of working. Of course, in the real project we ran into some issues, which we will describe in Section 3.

## 2 Design and analysis of computer experiments

We used CQM’s software tool Compact. This gives a convenient environment that supports each of the steps. The ideal workflow consists of the following steps:

**Step 1 – generate a simulation scheme.** A set of experiments is proposed using techniques from DoCE (Design of [Computer] Experiments). Each experiment is one possible tube design. Figure 3 shows the design space, whose dimensions are the design parameters *x _{i}*. The blue dots represent the proposed simulation runs. The vertical axis corresponds to one of the response variables *y _{j}*.

**Step 2 – computer simulations.** The table containing the proposed runs is emailed to the developers, who run the simulation for each of the design points over the weekend. The values of *y _{j}* are added and the results emailed back.

**Step 3 – transfer functions.** We create transfer functions *y _{j} = f _{j}(x _{1},…,x _{n})*, so that all designs *(x _{1},…,x _{n})* can be evaluated, even ones that were not on the original list. A transfer function can be evaluated very quickly, as opposed to the original simulation, which can take hours.

**Step 4 – optimization.** The optimization problem is given by the constraints and the objective function, and now that the functions can be evaluated quickly, the optimal design can also be found quickly. In fact, a set of "local optima" is found, which may all prove interesting.

## 3 Practical workflow

##### 3.1 Problem formulation

The project contained a number of complex design constraints regarding geometry. These relate to the positioning of parts in the geometry, thereby determining the permitted values for *x _{i}*. The coordinates of the relevant points in the design are simple functions of the *x _{i}*'s (a constant plus a linear combination of the *x _{i}*'s). From there, we can work out the length of the blue line (Figure 4). This expression contains a square root from Pythagoras' theorem. Squaring the expression gets rid of the square root, and rewriting then results in a constraint in the form of a 4^{th}-order polynomial: *F(x _{1},…,x _{11}) ≥ 0*, describing that the perpendicular distance should be at least a certain amount.

At the time, the Compact software could only handle polynomials up to 2^{nd} order (note: Compact can now handle polynomials of any order). We therefore opted for a practical solution: we replaced the single constraint "perpendicular distance > LB" with "all the distances measured at several fixed angles > LB", which is almost equivalent. By choosing an angle step of 90/4 = 22.5 degrees, the maximum error in the perpendicular distance is about 2%, which is acceptable. Because the angles in these expressions are fixed and no longer depend on the *x _{i}*'s, the resulting polynomials are of 2^{nd} order, as desired.
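As a quick numerical check of the 2% figure, the sketch below (a standalone illustration, not part of the Compact workflow) samples directions on a 22.5-degree grid and compares the smallest sampled distance with the true perpendicular distance to a line:

```python
import numpy as np

# Worst-case overestimate of the perpendicular distance when only directions
# on a 22.5-degree grid are checked: 1/cos(11.25 deg) - 1, about 1.96%.
MAX_ERR = 1.0 / np.cos(np.radians(22.5 / 2.0)) - 1.0

angles = np.radians(np.arange(0.0, 180.0, 22.5))   # 8 sampled directions
rng = np.random.default_rng(0)

worst = 0.0
for _ in range(1000):
    phi = rng.uniform(0.0, np.pi)                  # random line direction
    # |sin| of the angle between each sampled direction and the line; the
    # distance measured along direction a equals d_perp / |sin(a - phi)|,
    # so (min sampled distance) / d_perp = 1 / max |sin(a - phi)|.
    s = np.abs(np.sin(angles - phi))
    ratio = 1.0 / s.max()
    worst = max(worst, ratio - 1.0)

assert worst <= MAX_ERR + 1e-12
```

The worst case occurs when the perpendicular direction falls exactly halfway between two sampled angles, giving an overestimate of 1/cos(11.25°) − 1 ≈ 1.96%.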

##### 3.2 Design of Computer Experiments

In generating the scheme, we used space-filling Latin Hypercube Designs (LHDs). The design space is 11-dimensional and huge compared to the familiar 2- and 3-dimensional spaces that we can imagine. For example, a 2D square has 4 corner points and a 3D cube has 8, but an 11D cube has 2^{11} = 2048 corner points. So our 500 points in 11-dimensional space form only a very modest set.
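To give an impression of such a scheme, the sketch below generates a 500-point space-filling LHD in 11 dimensions using `scipy.stats.qmc`. This is an illustration only: the project's designs were generated with Compact, and the parameter bounds here are placeholders.

```python
import numpy as np
from scipy.stats import qmc

n_dims, n_points = 11, 500
sampler = qmc.LatinHypercube(d=n_dims, seed=42)
unit_points = sampler.random(n=n_points)          # points in [0, 1)^11

# Scale to hypothetical engineering bounds for each design parameter x_i.
lower = np.zeros(n_dims)
upper = np.full(n_dims, 10.0)
design = qmc.scale(unit_points, lower, upper)

# LHD property: in every dimension, each of the 500 equal strata is hit
# exactly once, so no two points share a coordinate stratum.
for col in unit_points.T:
    strata = np.floor(col * n_points).astype(int)
    assert sorted(strata) == list(range(n_points))
```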

It transpires that about half of the 500 points satisfy the geometric constraints. The scatter plot matrix in Figure 5 shows *x _{i}* vs. *x _{j}* (the values of *i* and *j* can be read from the diagonal) and gives an impression of the design points within the space. One panel is enlarged and shows that points at the top left are missing, apparently as a result of the design constraints.

Note that this design is intended for use with a computer experiment, in which the outcome is not usually subject to stochastic noise; as opposed to an experiment with physical objects and measurement errors. The considerations for suitable designs are therefore somewhat different in DoCE compared with DoE. For example, in DoE one typically tries to choose the design to minimize the sampling uncertainty on some effect, leading to designs where the points are pushed into the corners. However, here we primarily want to explore the entire design space. Moreover, it is likely that only a small subset of the 11 parameters will explain most of the effects, and repetitions of points with identical coordinates on that small subset provide minimal additional information. The space-filling LHDs avoid such repetitions and have better properties than choosing points at random.

##### 3.3 Simulations

The “response” parameters are derived from the output of the simulation tool. The input consists of a general set-up and a geometry file, which is determined by the design parameters *x _{i}*. The simulation generates results in two parts, see Figure 6.

The first part is the electric field, and each run (i.e. evaluation of a specific design) is calculated in about 10 minutes. The electric field is calculated at each point in the tube, as Figure 6 illustrates (for a different application). This field is summarized in *y _{1}…y_{11}* (e.g., the strength of a field at a key point of the tube).

The second part of the output concerns the current flow within the tube. This is summarized in *y _{12}-y_{14}*. This second part takes about 2 hours per run.

In choosing the summaries *y _{1}-y_{11}*, CQM and the developers worked together to ensure the right choices were made. Ideally, a choice of y results in a “smooth” function of the design parameters. An example of influencing the smoothness by using a suitable parameterization is the following:

Suppose *y _{1}, y _{2}, y _{3}* are the field strengths at three locations in the tube, and that only the maximum of these is of interest, *max(y _{1}, y _{2}, y _{3})*. One might be tempted to define *y _{4} = max(y _{1}, y _{2}, y _{3})*, drop *y _{1}*–*y _{3}*, and use *y _{4}* instead. However, the individual *y _{1}, y _{2}, y _{3}* are likely to be smoother functions of the design parameters than *y _{4}*, due to the maximum operator. Therefore, it is better to retain *y _{1}, y _{2}, y _{3}* as response parameters; a bound on their maximum can be enforced by bounds on *y _{1}*, *y _{2}* and *y _{3}*.

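The effect can be illustrated with a small toy example (the functions below are invented for illustration): a 2nd-order polynomial fits two smooth component responses exactly, while the same fit to their pointwise maximum leaves a clear residual around the kink.

```python
import numpy as np

# Toy 1-D illustration: y1 and y2 are smooth in x, but their pointwise
# maximum has a kink where the two curves cross.
x = np.linspace(-1.0, 1.0, 201)
y1 = 1.0 - x ** 2          # smooth (quadratic)
y2 = 0.5 + x               # smooth (linear)
y_max = np.maximum(y1, y2) # non-smooth at the crossing point

def quad_fit_rmse(x, y):
    # Fit a 2nd-order polynomial and return the root-mean-square residual.
    coeffs = np.polyfit(x, y, deg=2)
    resid = y - np.polyval(coeffs, x)
    return float(np.sqrt(np.mean(resid ** 2)))

rmse_components = max(quad_fit_rmse(x, y1), quad_fit_rmse(x, y2))
rmse_max = quad_fit_rmse(x, y_max)

assert rmse_components < 1e-8     # components are fit (essentially) exactly
assert rmse_max > 1e-3            # the max response resists a quadratic fit
```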
##### 3.4 Transfer functions

After the design points have been simulated, the responses are approximated by transfer functions. Here the goal is to get models that describe the physical trends and not just mimic the data: this is a trade-off between model complexity and data fit. Compact features two model flavors adhering to these principles:

- Adapted stepwise (polynomial) regression, selected using leave-one-point-out cross-validation
- Kriging models: a Maximum Likelihood-based method resulting in an interpolating model on all data. These models can take some minutes to build.
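The following sketch illustrates both flavors on invented data, using scikit-learn as a stand-in for Compact's internals: leave-one-point-out cross-validation for a 2nd-order polynomial model, and a Kriging (Gaussian process) model that interpolates the noise-free responses.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF
from sklearn.model_selection import cross_val_score, LeaveOneOut

rng = np.random.default_rng(1)
X = rng.uniform(size=(40, 3))                # 40 simulated designs, 3 inputs
y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2       # deterministic toy "simulator"

# Flavor 1: polynomial regression, scored by leave-one-point-out CV.
poly = make_pipeline(PolynomialFeatures(degree=2), LinearRegression())
loo_scores = cross_val_score(poly, X, y, cv=LeaveOneOut(),
                             scoring="neg_mean_squared_error")
loo_rmse = float(np.sqrt(-loo_scores.mean()))

# Flavor 2: Kriging; a near-zero nugget (alpha) makes the model interpolate
# the noise-free data, matching the deterministic computer experiment.
gp = GaussianProcessRegressor(kernel=RBF(), alpha=1e-10, normalize_y=True)
gp.fit(X, y)
assert np.allclose(gp.predict(X), y, atol=1e-3)  # exact at the design points
```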

Polynomial regression models are preferred over Kriging models because of their simplicity. The models were initially validated using leave-one-out cross-validation, and at a later stage in the project also using independent test sets. In a few cases, we needed to use Kriging models and in some of these cases cross-validation was not possible because it would take too long.

Note that “validation” here refers to the validation of the transfer function against the simulation results. However, the ultimate test is to measure real-life products to see if these results correspond to the simulation model.

The total structure of this problem results in several layers of the design space, see Figure 8. The red parts of the design space correspond to impossible designs because of geometry (e.g. some components lying outside the main body of the tube). Within the geometrically possible areas, the electric-field responses *y _{1}*–*y _{11}* are simulated relatively quickly. Only if the electric-field parameters satisfy their constraints can the simulation calculate the electron-related responses *y _{12}*–*y _{14}* (the blue region), and the models for these responses should only be evaluated on this limited part of the design space. A part of that space is composed of the designs that satisfy all constraints (i.e. the feasible region). To illustrate that the models predicting the *y _{j}* values are uncertain, the green boundaries of the design space are drawn with very thick lines.

The two responses that are less accurately modeled are handled in the optimization in slightly different ways. One way to deal with the inaccuracy is, over several steps, to set the bounds more strictly than is actually required. Another way is to use not only the stepwise regression models but also Kriging models, and require that the predictions from both types satisfy the bounds.

Using Compact or Compact CO, the extensive underlying technicalities are taken care of automatically. Note that the response variables, which are used in the constraints and the objective, are highly non-linear. The strategy therefore is to use multiple starting points in the search for optimal designs. A starting point usually results in a local optimum, and typically a small set of different local optima remains. The best among these is considered to be the global optimum. The points of the original DoCE scheme were taken as the set of starting points. Since quite a number of starting points are used, it is unlikely that the true global optimum will be missed.
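A minimal multi-start sketch of this strategy, with an invented multimodal stand-in for the fitted transfer functions (the real objective is a fitted model of a response, not this formula), could look as follows:

```python
import numpy as np
from scipy.optimize import minimize

# Toy multimodal objective on [0, 1]^2, standing in for a fitted model.
def objective(x):
    return np.sin(5 * x[0]) * np.cos(5 * x[1]) + 0.5 * np.sum((x - 0.5) ** 2)

rng = np.random.default_rng(0)
starts = rng.uniform(size=(50, 2))   # e.g. the DoCE points reused as starts

optima = []
for x0 in starts:
    res = minimize(objective, x0, bounds=[(0, 1), (0, 1)])
    if res.success:
        # Keep only distinct local optima.
        if not any(np.linalg.norm(res.x - o.x) < 1e-3 for o in optima):
            optima.append(res)

# The best local optimum found is taken as the candidate global optimum.
best = min(optima, key=lambda r: r.fun)
```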

Once the models are built, we arrive at the optimization step.

##### 3.5 Optimal design

When building and validating the models, it turned out that for many of the responses we could choose the simpler stepwise regression models; only for some did we need Kriging models. Fortunately, the model quality was OK-to-good for most of the responses, and only moderate for two. It is remarkable that only a few hundred points in the huge 11-dimensional design space apparently convey sufficient information to describe most of the physics, see Table 1.

The next steps are to evaluate the fit by graphical means and summary statistics. In this situation, there are models on different parts of the design space and actually a different, experimental tool (Compact-CO) was used to “glue” the models together to create one environment suitable for optimization.

Building the models using the Compact tool is done at the touch of a few buttons, mainly when choosing between stepwise regression and Kriging (in other software, term selection would require considerable work).

In the project, the underlying dataset of simulations was extended several times, and these optimization steps were followed in several iterations of the project. In the earlier iterations, the local optima looked unrealistic to the experienced eyes of the developers, which often led to tighter or additional constraints.

During the project, there were several iterations of new simulations leading to new models, new optimal designs, new insights, and more data from simulations. In later iterations, part of the extra simulation data could be used as independent test sets.

The columns in Table 2 indicate the layers of the design space (left to right from fewer to more constraints), see Figure 8. Rows indicate extra data sets. The table illustrates how much more complex a real project can be than a streamlined workflow.

##### 3.6 Communicating the results

One great spin-off from the project was an elaborate Excel tool that could visualize sets of designs, such as a simulated set or a set of local optima, see Figure 9. A simple wire frame model visualizes the resulting tube. The portion on the lower right states the model predictions, while the lower left lists the constraints. Any constraint that is violated shows up as a red number (none in the specific visualized setting). It is possible to scroll through a list of pre-defined designs, such as local optima, using the slider bar on the top. In addition, the individual *x _{i}* values can be altered using the slider bars on the top left. This allows one to gain insight into how the design and electric properties depend on the design parameters. Remember that using the simulation tool, one evaluation takes about two hours. The Excel tool is quite elaborate and uses some VBA code, and involves many extra tables for auxiliary computations to evaluate the regression and Kriging models.

In the final iterations, the development team formulated several scenarios, explicitly in terms of constraints and objectives. Using relatively conservative versions of the constraints, three optimal designs were chosen and simulated, and verification models built. After testing these, it turned out that another response parameter was of interest. This new response could be extracted from the available results and was added to the models. This led to a new optimal design and new physical model, which is being tested at the time of writing.

## 4 Conclusion

This project illustrates many interesting points in respect of Design of Computer Experiments/compact modeling. However, there are some other important takeaways.

First, it is important during the project to switch frequently between the developer’s world and the mathematical model. Each time, insights from mathematics lead to new insights for the developers, which in turn lead to more appropriate mathematical formulations, and so on. This calls for good communication in many relatively short (agile) feedback loops.

A second point is that it is often possible to work around practical constraints in order to satisfy time and budget constraints. One example in this project was replacing the 4^{th}-order geometric constraint by several constraints of 2^{nd} order, thus allowing us to keep using a standard environment.

Following a DoCE approach, rather than trial-and-error, resulted in a deeper exploration of possible tube designs that would otherwise probably not have been considered. Furthermore, the effect of design choices on the electrical properties is now better understood. The Excel tool summarizes the ways of working, and reportedly made communication with other development departments easier.

Figure 10. Mathematical modeling of the real-life world asks for short feedback loops

A possible follow-up here would be a tolerance study, in which the impact of (production- or use-case-related) variation of the design parameters on the electrical properties is evaluated. This can be done relatively easily using the transfer functions. Note that there is one technical hurdle: modeling the appropriate dependence structure of the *x _{i}*'s, since product- or use-case-related variation may not be related to individual *x _{i}*'s.
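Assuming independent variation (the simple case; correlated *x*'s would need a covariance matrix), such a tolerance study amounts to Monte Carlo propagation through a transfer function. The transfer function and tolerances below are invented placeholders:

```python
import numpy as np

# Hypothetical transfer function for one response (stand-in for a fitted model).
def transfer(x):
    return 2.0 + 0.8 * x[..., 0] - 0.3 * x[..., 1] + 0.1 * x[..., 0] * x[..., 1]

rng = np.random.default_rng(7)
nominal = np.array([1.0, 2.0])          # nominal design parameter values
sigma = np.array([0.02, 0.05])          # assumed production tolerances (std dev)

# Independent normal variation around the nominal design; a dependence
# structure between the x_i's would require sampling from a joint distribution.
samples = nominal + sigma * rng.normal(size=(100_000, 2))
y = transfer(samples)
spread = float(y.std())                 # resulting spread of the response
```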

**References**

[1] https://commons.wikimedia.org/w/index.php?title=File:VFPt_metal_ball_grounded.svg&oldid=115249700