Weaponizing Favorite Test Functions for Testing Global Optimization Algorithms: An illustration with the Branin-Hoo Function

AIAA 2020-3132
Session: Emerging Methods, Algorithms, and Software Development in MDO I
Published Online:

Some popular functions used to test global optimization algorithms, such as the Branin-Hoo and Himmelblau functions, have multiple local optima that all share the same value of the objective function. That is, every local optimum is also a global optimum. This makes them easy to optimize, because the algorithm cannot get stuck in a local optimum that is not global. Such functions actually present an opportunity to create challenging problems for optimization algorithms because, as illustrated here, they are easily converted to functions with competitive local optima by adding a localized bump at the location of one of the optima. This process is illustrated here for the Branin-Hoo function, which has three global optima. We use the popular Python SciPy differential evolution (DE) optimizer for the illustration, because its wide use is likely to imply well-written code. DE also allows the use of the gradient-based BFGS local optimizer for final convergence. By making a large number of replicate runs, we establish the probability of reaching a global optimum with the original and weaponized Branin-Hoo functions. With the original function, we found a 100% probability of success with a moderate number of function evaluations. With the weaponized version, we found that the probability of getting trapped in a non-global optimum can be made small only with a much larger number of function evaluations.
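The idea in the abstract can be sketched in a few lines of Python. The standard Branin-Hoo function has three global optima of equal value; subtracting a narrow Gaussian bump at one of them deepens that optimum, turning the other two into competitive non-global local optima. The bump shape and the depth/width values below are illustrative assumptions, not the paper's exact construction, and the small number of replicate seeds is only a stand-in for the paper's large-scale experiment. SciPy's `differential_evolution` is the optimizer the abstract names; note that its `polish=True` option finishes with a gradient-based local refinement (L-BFGS-B in current SciPy).

```python
import math

from scipy.optimize import differential_evolution

# Branin-Hoo function (standard form); all three global optima share
# f* = 10/(8*pi) ~= 0.397887, at (-pi, 12.275), (pi, 2.275), (3*pi, 2.475).
def branin(v):
    x, y = v
    a = 1.0
    b = 5.1 / (4.0 * math.pi ** 2)
    c = 5.0 / math.pi
    r, s, t = 6.0, 10.0, 1.0 / (8.0 * math.pi)
    return a * (y - b * x ** 2 + c * x - r) ** 2 + s * (1.0 - t) * math.cos(x) + s

# Illustrative "weaponized" variant: a narrow negative Gaussian bump deepens
# the optimum at (-pi, 12.275), so the other two optima become competitive
# but non-global local optima. The Gaussian form and the depth/width values
# are assumptions made here for illustration.
def branin_weaponized(v, depth=0.5, width=0.5):
    x, y = v
    bump = depth * math.exp(-((x + math.pi) ** 2 + (y - 12.275) ** 2)
                            / (2.0 * width ** 2))
    return branin(v) - bump

bounds = [(-5.0, 10.0), (0.0, 15.0)]  # conventional Branin-Hoo domain

# One DE run per seed; polish=True ends with a gradient-based local optimizer
# (L-BFGS-B in current SciPy). Counting successes over replicate seeds
# estimates the probability of reaching the (now unique) global optimum.
f_star = branin_weaponized([-math.pi, 12.275])
n_runs = 10
successes = 0
for seed in range(n_runs):
    res = differential_evolution(branin_weaponized, bounds, seed=seed)
    if res.fun < f_star + 1e-3:
        successes += 1
print(f"estimated success probability: {successes}/{n_runs}")
```

Repeating this with far more seeds, and sweeping DE's budget parameters (`popsize`, `maxiter`), is the kind of study the abstract describes: on the original function every run succeeds cheaply, while on the bumped version the trapping probability shrinks only as the evaluation budget grows.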