The review
This review looks at CORTIME, from APISOFT, a Danish company. CORTIME is a third-party add-in to SOLIDWORKS which provides parametric CAD optimization by driving SOLIDWORKS dimensions and feature controls. CORTIME uses both SOLIDWORKS geometry responses (known as sensors), such as mass properties, and a wide range of analysis results (also defined as sensors) from SOLIDWORKS Simulation. This includes all available structural, thermal and CFD analyses. CORTIME came into being in 2013 and its technology is based on recent research work by the founders.
Parametric Design Optimization
Parametric CAD Design Optimization assumes the designer has already created some level of formal design within SOLIDWORKS and is on the path to a viable design product. The job of Parametric Design Optimization is to explore design space by varying design parameters to improve the performance of this embryonic design. Changes can be limited to small improvements, effectively polishing the design, or the configuration can be defined in a more generic way, with a wide scope for new and radical configurations to evolve. The accuracy of the responses, such as local stresses, is at the full fidelity of the FEA or CFD model.
In my view, Parametric Design Optimization has been somewhat overshadowed by Topology Optimization. Many designers already have some notion of what structure to put into available design space to carry load, to support vibration, buckling or whatever the criteria might be. Starting with a ‘blank canvas’, as in Topology Optimization, is not always appropriate. In this scenario Parametric Design Optimization is a powerful tool. However, the two technologies should not be thought of as competing. Each can produce useful complementary design ideas and information at different stages in the design evolution. Topology Optimization favors the more radical concept stage, or very organic structures, whereas Parametric Design Optimization, by its nature, reflects a more mature design point. The design is already fully defined in CAD, with no conversion stage required. The implicit accuracy of stresses, displacements and CFD results also provides well qualified designs.
CORTIME Multi-Objective Optimization approach
A traditional single objective optimization approach, such as minimizing weight, drives towards an optimum solution, subject to design constraints. For example, an optimum cantilever beam design could seek a minimum weight subject to a limit of tip deflection of 0.1 inches and maximum stress below yield. This represents a single optimum design point.
By contrast, a Multi-Objective approach could minimize weight, but also attempt to minimize tip deflection. The CORTIME product is built around this more general multi-objective approach.
The motivation for this approach is that most designs are a trade-off. Why was there a hard limit on tip deflection of 0.1 inches? What would happen to the weight if we relaxed this a bit?
Imagine a design committee reviewing the beam. Being a committee, one group wants to minimize weight at all costs (these may be the performance engineers). The other group’s sole focus is to minimize tip deflection (these may be the aeroelastic gurus). The two objectives are competing; a stiff design will be heavy; a flexible design will be light. Instead of choosing a single optimum, the Multi-Objective method gives the committee a series of design points which are each the best in class. Any trade-off between the performance engineers and aeroelasticians will be an optimum trade-off. Inferior designs which don’t best satisfy either group are avoided. The trade-off curve is called a Pareto frontier, named after Vilfredo Pareto, an Italian economist who used it in decision-making in his field. The number of objectives is not limited to two, but higher numbers come at a computational cost. The other issue is that, faced with a three-way or four-way split, a committee-based compromise is much less likely! From the design perspective there can be too many decisions to make.
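The ‘best in class’ idea has a precise definition: a design lies on the Pareto frontier if no other design is at least as good in every objective and strictly better in at least one. A minimal Python sketch illustrates the filtering; the (weight, tip deflection) pairs below are made up for illustration, not taken from CORTIME:

```python
# A minimal Pareto-frontier sketch for two competing objectives,
# both to be minimized: (weight, tip_deflection).
# The design points below are illustrative only.

def dominates(a, b):
    """True if design a is at least as good as b in every
    objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Keep only the non-dominated (best-in-class) designs."""
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

designs = [(10.0, 0.05), (12.0, 0.03), (15.0, 0.02), (14.0, 0.06), (11.0, 0.07)]
print(pareto_front(designs))  # the three non-dominated trade-off points
```

The dominated points, such as (14.0, 0.06), are exactly the ‘inferior designs which don’t best satisfy either group’.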
So, the CORTIME approach is all about compromise and trade-off between good solutions, rather than striving to achieve a single optimum.
Simulated Annealing
The optimization technology behind CORTIME is the Simulated Annealing method.
Simulated Annealing is an analogy of the annealing process in metallurgy. A metal is raised to a high temperature, which increases energy at the molecular level. The increased energy allows more movement of the molecules, which tend to find improved equilibrium states. As the metal is cooled slowly the energy available for molecular re-arrangement is reduced and the metal stabilizes to a more even molecular structure. This improves the material properties.
In the Simulated Annealing method, the ‘temperature’ is a control parameter for the search strategy. At high temperatures the search in design space for a better solution allows for a robust approach. Better solutions will be accepted, but inferior solutions are also explored. This allows a randomness of search and avoids being trapped in a local optimum. Unpromising outliers may in fact provide a path to a global optimum. As the temperature decreases, the search becomes more selective, and eventually will only accept better local solutions. At this stage it is assumed the global optimum general region has been found and the search is more focused.
Because CORTIME is a multi-objective method a set of best points are established using this approach, rather than a single point.
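To make the temperature-controlled acceptance rule concrete, here is a textbook single-objective Simulated Annealing sketch in Python. This is purely an illustration of the idea described above, not CORTIME’s actual multi-objective implementation; the objective function and all parameter values are invented for the example:

```python
import math
import random

def simulated_annealing(f, x0, step=0.5, t0=1.0, cooling=0.95, iters=500, seed=1):
    """Textbook simulated-annealing sketch for 1-D minimization.
    At high temperature, worse candidates are accepted with probability
    exp(-delta/T), which lets the search escape local optima; as T
    cools, only improvements survive and the search becomes selective."""
    random.seed(seed)
    x, fx, t = x0, f(x0), t0
    for _ in range(iters):
        cand = x + random.uniform(-step, step)   # random neighbour
        delta = f(cand) - fx
        if delta < 0 or random.random() < math.exp(-delta / t):
            x, fx = cand, fx + delta             # accept the move
        t *= cooling                             # cooling schedule
    return x, fx

# A toy convex objective: the search settles close to x = 3.
best_x, best_f = simulated_annealing(lambda x: (x - 3.0) ** 2, 0.0)
```

The early high-temperature iterations correspond to the ‘random walk’ phase visible in CORTIME’s progress graphs; the late, near-greedy iterations correspond to the tightening search.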
The sample study – panel buckling
I have been experimenting with CORTIME for several months now, exploring how it interacts with SOLIDWORKS Simulation using conventional design variables and feature controls in optimization problems. For this review, I decided to look at linear buckling, which is a challenging optimization problem as mode shapes and load factors change in a complex way across design space. I also wanted to explore the use of a more generalized geometry forming tool, which is analogous to a manufacturing process. The SOLIDWORKS Deform feature opens up interesting possibilities for generic shape changes. Figure 1 shows the setup.
Figure 1. The Deform feature pushing a tool body into a plate body
The Deform feature takes the top body, which is the tool, and deforms the lower target body, which starts as a flat plate. This gives a form of swaging in the plate. Two design variables are defined in the tool: the depth of the indenters and the offset between them. Together these give a wide variety of candidate shapes. A sample of these is shown in figure 2.
Figure 2. Design variations of the stiffened panel
The Deform feature is interesting because it attempts to maintain a constant volume of material. This means that under the pseudo-forming process, the skins will thin down. Now this is not an exact forming simulation, but I thought it to be a useful indication of thickness variation with the stamped design, rather than using a folded sheet design. The original plate thickness is 0.05 inches and under the deform action this can reduce to 0.03 inches or less in the sloped regions. It was important to use the high-fidelity settings, otherwise the geometry build would fail in the thin regions. I also meshed the panel using high-order tetrahedral solid elements, so that thickness variation could be represented. The buckling load is sensitive to the local wall thickness, so this was important to include. Using shell elements is not feasible without complicated mapping to represent the varying thickness. I did a series of benchmarks comparing solid elements against shells under buckling for constant thickness, and the solid elements performed well.
Setting up the problem in CORTIME
Figure 3 shows the layout of the CORTIME user interface, installed as an add-in with the SOLIDWORKS Simulation buckling study also set up.
Figure 3. CORTIME user interface
The main interface appears as a task pane on the right-hand side in the figure, with a ribbon above the graphics window. The four tabs at the bottom of the task pane access the initial setup phase, launching and monitoring the optimization phase, additional options and an attractive online help guide.
Much of the functionality can also be accessed directly from the ribbon bar, including setting up the optimization, running the optimization and viewing the optimization run using a variety of graphical display methods.
The Setup tab is shown highlighted in figure 3, and the task pane contains forms for accessing the Optimization Resources passed through from SOLIDWORKS, the Optimization Variables defined in CORTIME and the Objective functions defined in CORTIME.
Figure 4 shows part of the Optimization Resources form.
Figure 4. A montage of the optimization resources, showing sensors and equations
I have defined three sensors in SOLIDWORKS; the Mass, Surface Area of the bottom plate face and the Buckling Factor of Safety. The latter is generated by each SOLIDWORKS Simulation buckling analysis which is spawned. The Mass is only monitored to check it stays constant under the Deform feature action. I am using the Surface Area as a gauge of the plate thickness being developed. As the volume is constant, the surface area is an inverse measure of the thickness.
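The inverse relationship is simple to state: with the volume held fixed by the Deform feature, the mean thickness of a thin panel is roughly volume divided by surface area. A small sketch makes the point; the face area values are hypothetical, and only the 0.05 inch starting thickness comes from this study:

```python
# With the Deform feature (approximately) preserving volume, the
# average wall thickness can be estimated from the tracked sensors:
# thickness ~ volume / surface_area. Area values are illustrative.

def mean_thickness(volume_in3, area_in2):
    """Average thickness of a thin panel of fixed material volume."""
    return volume_in3 / area_in2

flat_area = 200.0            # in^2, hypothetical initial face area
volume = flat_area * 0.05    # in^3, fixed by the Deform feature

# As the swage stretches the surface, area rises and thickness falls:
print(mean_thickness(volume, 200.0))  # 0.05 in (flat plate)
print(mean_thickness(volume, 230.0))  # roughly 0.043 in (stretched)
```

This is why minimizing the Surface Area sensor acts as a proxy for limiting the thinning of the panel.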
I have used the Equations method to define the depth and offset as global variables within SOLIDWORKS. These are then available to CORTIME. This is slight overkill, as I could have created the dimensions directly as CORTIME variables on the fly. CORTIME appends icons to the menu actions within SOLIDWORKS to set dimensions or feature parameters as variables or objectives. I found the global variable approach useful as I could easily replicate a design configuration, using CORTIME variable results, on a different installation without CORTIME present. However, as will be seen shortly, when working within CORTIME any design configuration can easily be rebuilt from any of the results graphics windows.
Figure 5 shows the Optimization Variables window.
Figure 5. The optimization variables window
I have dragged the two global design variables into this window: depth and offset. The current value is shown in the far-right column. The Type defines the Range method. The default is to define an upper and lower bound – I like the fact that both values are in one dialog box. Other Range methods include Range with Step, or Range with N-Steps, so you can select specific increments. Range Integer with Step only allows integer values, and Discrete Values allows an explicitly defined subset of values. I experimented with trial solutions to establish sensible ranges. It is a good idea to give the ranges and increments some thought – how do they reflect real-world manufacturing gauges, dimensions etc.? The fewer the design variations, the more efficient the optimization run will be. If you are too generous with the limits, then resources are wasted exploring the additional design space.
CORTIME has a very useful preview feature called Build-Checker, to help with assessing ranges. Build-Checker takes the extreme combinations of design variables and creates the corresponding set of geometry configurations automatically. A table of results is presented indicating successful and failed geometry builds. The failed builds can be clicked on and the geometry failure investigated. As an example, I did a simple plate with a hole stress concentration optimization. The variables were plate width and hole diameter. My variable range allowed the hole to be bigger than the plate width! More complex dimensional interactions are difficult to predict, and Build-Checker does a great job of allowing you to interactively explore and correct design variable ranges.
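The underlying idea can be sketched in a few lines of Python: enumerate the extreme corners of the variable ranges and test each for a feasible build. The assumption that Build-Checker tests the min/max corner combinations is my reading of its behavior, and the variable names are mine; the plate-with-hole rule mirrors the example above:

```python
from itertools import product

# A rough analogue of the Build-Checker idea (an assumption on my
# part): enumerate every combination of range extremes and flag
# geometrically impossible builds before any optimization run.

ranges = {"plate_width": (2.0, 6.0), "hole_diameter": (1.0, 8.0)}

def corner_builds(ranges):
    """Yield one candidate design per min/max corner combination."""
    names = list(ranges)
    for combo in product(*(ranges[n] for n in names)):
        yield dict(zip(names, combo))

for design in corner_builds(ranges):
    ok = design["hole_diameter"] < design["plate_width"]
    print(design, "builds" if ok else "FAILS: hole wider than plate")
```

With these illustrative ranges, half of the four corner designs fail, flagging exactly the kind of range error described above.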
Figure 6 shows the objectives created for this optimization study.
Figure 6. Optimization Objectives form
The Mass is set to Type Observe – I simply want to monitor the value. It should remain constant under the Deform feature operation. The Buckling Factor of Safety is set to Maximize, as I want the panel to be resistant to buckling. I have set the Surface Area to Minimize – I want to reduce the stretching, and hence thinning, of the panel. Now these are conflicting requirements, so the aim is to produce a set of efficient alternative designs which satisfy these objectives to a greater or lesser extent. I then make an engineering judgement!
A variation exists on this theme, as I can bias an Objective using a priority. The default number of priority levels is 3, but you can extend up to 7, which I have done. I have set the Factor of Safety to be the dominant objective (High) and the Surface Area as a lesser priority (Medium). There is a weighting factor of 2 between each level, so in my case the Buckling Factor is twice as important (don’t tell the committee!). Equal Priorities give an unbiased search.
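The effect of the factor-of-2 rule can be shown with a tiny sketch. The mapping of level names to powers of two is my own reading of the behavior described above, not CORTIME’s documented internals:

```python
# Sketch of the priority weighting described in the review: each
# priority level carries twice the weight of the level below it.
# Level names and the power-of-two mapping are my assumption.

LEVELS = {"Low": 0, "Medium": 1, "High": 2, "Ultra High": 3}

def weight(priority):
    """Relative weight of an objective at a given priority level."""
    return 2 ** LEVELS[priority]

# High vs Medium -> a factor of 2, matching the study's setup:
print(weight("High") / weight("Medium"))  # 2.0
```

Under this reading, the Buckling Factor (High) counts twice as much as the Surface Area (Medium) in the search.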
Other Objective Types include Target, Avoid Target, Keep Below, Keep Above and Range. Combined with the Priority level this can give a fascinating range of possibilities. I found them particularly useful when doing a Normal Modes study with frequency band requirements.
It is interesting to note that using Keep Above, or Keep Below, with High or Ultra High Priority is effectively providing a constraint boundary.
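One way to picture why a high-priority Keep Below behaves like a constraint is as a heavily weighted penalty term. This is an illustrative interpretation only, not CORTIME’s documented internals; the numbers are invented:

```python
# Illustrative sketch: a high-priority 'Keep Below' objective acts
# like a soft constraint via a heavily weighted penalty. All values
# here are invented for the example.

def penalized(objective, value, limit, weight=100.0):
    """Add a large penalty once the monitored value exceeds the limit.
    With a big enough weight the optimizer is effectively forbidden
    from crossing the boundary, mimicking a hard constraint."""
    violation = max(0.0, value - limit)
    return objective + weight * violation

# Inside the limit the objective is untouched; outside it is swamped:
print(penalized(5.0, value=1.5, limit=2.0))  # 5.0 (within limit)
print(penalized(5.0, value=3.0, limit=2.0))  # 105.0 (limit exceeded)
```

The higher the priority (weight), the closer this behaves to the hard deflection limit in the single-objective cantilever example earlier.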
Figure 7 shows the Optimize Task Pane.
Figure 7. The Optimize Task Pane.
The Optimization Algorithm Settings are controlled here. I have selected the default Global Optimization with Initial Randomization. This is recommended by CORTIME as a starting point for a study. The Initial Randomization explores design space, trying to cover the region with a minimum number of exploratory steps. The number of random steps is a function of the Number of Iterations, which CORTIME calculates based on the number of design variables and their ranges. A very good total time estimate can be made by clicking on the calculator icon which will spawn the current geometry build and its FEA simulation and capture that time.
Other Optimization options include carrying out a fixed Data Sweep, which explores design space in a non-adaptive way. This can be very expensive, but can give insight into what design space looks like. Alternatively, a Response Surface can be generated in an initial session and then used as a surrogate model in a second session. This can be useful for very expensive analyses, such as nonlinear analysis.
Investigating the results
Various types of graph are available to review the results; these are selected at the top of the Optimize Task Pane, or from the ribbon bar. In figure 8 I have used the Progress Graph to check the history of the two design variables.
Figure 8. Progress Graph showing history of design variables
The vertical axes for each variable can be scaled, which is useful for varying orders of the parameter values. Any design variable or objective can be plotted; I have only selected the design variables. The initial random walk can be seen over the first 20 design iterations. After that the annealing algorithm takes over and the search gradually tightens up as the ‘temperature’ control parameter is decreased. The fluctuation in variable values can be seen to decay as the steps progress. I terminated the optimization as the solution focused around one area, but continuing would have reduced the fluctuations even further. For a multi-objective optimization, the solution is not converging to a single ‘best’ solution – it is still exploring the trade-off curve. However, in my case, because of the priority given to the Buckling Factor, the results tend to ‘bunch’ in a particular design space region.
A companion plot is shown in figure 9, with the objective function histories shown.
Figure 9. Progress Graph showing Objective Functions
The same trend is seen with an initial random walk and then steadily decreasing fluctuations. It is interesting to see that the objective functions show quite significant fluctuations popping up at the final iteration (170), because I have terminated the run before the planned number of iterations – using the ‘temperature’ analogy the system has not fully cooled! In some cases, this would not be wise, if there were likely to be many local optima. In this case, looking at other metrics I am happy with the data produced.
Figure 10 shows a 2D scatter plot with both objectives plotted in 2D objective space; Surface Area on the vertical axis, Load Factor on the horizontal axis.
Figure 10. Scatter plot showing objective space with buckling mode overlay
The dots in the scatter plots represent each design and analysis iteration. I have overlaid some representative buckling analysis results. Because of the Buckling Factor emphasis there are many points clustering towards a high Buckling Factor and corresponding Surface Area region. The ‘cusp’ at the right-hand side of the graph is at a Buckling Factor of 9.55 and a surface area of 230 in^2. There is a line of results running from this towards the bottom left of the plot. These represent points around the Pareto frontier. I have shown 3 buckling mode shape results from this line. In contrast, the upper line of results running from the cusp towards the top left are the most inferior results. They have larger surface areas than designs on the corresponding Pareto curve. I have shown 3 sample mode shapes from this set.
Between the Pareto frontier (the lower set) and the worst cases (upper set) lie a set of intermediate designs. The strength of the method is that all of these inferior designs can be discarded from trade-off decisions.
The most inferior (upper) set show evidence of local buckling of the deep free edge regions. The lower, superior set show varying mode shapes, from local edge buckling combined with overall buckling (left), through local edge buckling (center), and back to combined overall and local buckling (right).
This set can also be thought of as a design trade-off curve – low axial loads (Factor 3) can be resisted by a shallow swage with a wide offset (left), medium axial loads (Factor 6) by a deeper swage with a smaller offset, and high axial loads (Factor 7.5) by a deeper swage and a bigger offset.
I have ignored the rich set of results around the cusp (Factor 9.5) so far. These are explored in figure 11.
Figure 11. Scatter plot showing objective space at high Load Factors
Because I biased the Buckling Load Factor over the Surface Area, results have congregated here and allowed a much better Pareto curve definition. I have overlaid the plot with some of the best candidate designs and their buckling responses. There is a slight tendency towards a non-symmetric free edge buckle at the highest Load Factor, but all designs in this region show a tendency to local edge buckling. There is also a trend towards a deeper and sharper swage to achieve higher Load Factors.
The results can also be filtered to display only the ‘Best’ values as defined by CORTIME. These are designs evaluated as best found to date as the algorithm progresses. This filter is shown applied to objectives and variables in figure 12.
Figure 12. History of best Objectives (top) and Variables (bottom)
These results are those clustered mainly around the right-hand cusp of figures 10 and 11. They show the relative insensitivity of the Load Factor to quite large changes in offset. The depth seems to have stabilized at around 0.75 to 0.8 inches.
Another way to investigate the interaction between design variables and objectives is to plot them all together in a Parallel Coordinate Graph, as shown in figure 13.
Figure 13. Parallel Coordinate Graphs of two group selections from the Best set
Each of the graph types allows design points to be marked; these points then appear as marked in each of the graphs and are highlighted in the Parallel Coordinate Graphs. Figure 13 contains only the Best set, and the vertical axes can be scaled to emphasize the variations. The top graph focuses on the subset of 4 designs, all with depth 0.885 inches, and shows how they all give very similar objective values. As noted earlier, the result is insensitive to the offset value here. The bottom graph focuses on the subset of 6 designs with the highest Load Factor. Most follow the trend that a larger depth and offset gives a higher Load Factor and Surface Area. The steeper the slope between Surface Area and Buckling Factor, the more efficient the design, and this could allow a ranking within this subset.
Redesign using a flange
Most of the best designs on the Pareto curve showed local buckling of the free edge, rather than overall buckling. The SOLIDWORKS geometry was modified to include an edge flange of varying height. This kept two objectives but increased the design variables to three. Figure 14 shows a typical flange candidate design.
Figure 14. Modified design with flange
The results of the second design study are shown superimposed over the original design in figure 15.
Figure 15. Comparison of objectives between the two designs
Figure 15 shows the increased effectiveness of the flange, with Load Factors increased up to 26.5. However, as expected, the surface area increases. The mass also jumps to a higher value. A best design curve (Pareto frontier) can be drawn between these two designs.
Finally figure 16 shows the change in buckling mode with the introduction of the edge flanges for the Best set of new designs.
Figure 16. Objective values, mode shapes and design configurations for Best subset
The transition from local edge buckling to overall buckling is shown.
Conclusion
CORTIME provides a well-integrated workflow for carrying out Parametric Design Optimization studies across a wide range of analysis types. I have not had space to describe all the possibilities in this article, but these include multiple load cases and multi-disciplinary analysis, for example linking Structural, Thermal and CFD. I plan to look at nonlinear analysis using the surrogate model approach in more detail in a future article.
The results reviewing and interpretation tools allowed for a deeper insight into the exploration of the design and objective space. The ability of the multi-objective approach to evolve design curves or trade-off studies is very powerful.