Ten New Benchmarks for Optimization

08/30/2023
by Xin-She Yang, et al.

Benchmarks are used to test new optimization algorithms and their variants and to evaluate their performance. Most existing benchmarks are smooth functions. This chapter introduces ten new benchmarks with different properties, including noise, discontinuity, parameter estimation, and unknown paths.
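The chapter's actual ten benchmarks are not reproduced in this abstract, but the properties it names (noise, discontinuity) can be illustrated with a minimal sketch. The two functions below are hypothetical stand-ins, not the paper's benchmarks: one adds random noise to the classic sphere function, the other creates a discontinuous, plateau-filled landscape by rounding each coordinate before summing squares.

```python
import numpy as np

def noisy_sphere(x, noise_level=0.1, rng=None):
    """Sphere function with additive uniform noise.

    Illustrative only: noise makes repeated evaluations at the same
    point return different values, which stresses algorithms that
    assume deterministic objectives.
    """
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x, dtype=float)
    return float(np.sum(x**2) + noise_level * rng.uniform())

def step_discontinuous(x):
    """Discontinuous step-like variant: rounding each coordinate
    creates flat plateaus separated by jumps, so gradient
    information is zero almost everywhere.
    """
    x = np.asarray(x, dtype=float)
    return float(np.sum(np.floor(x + 0.5) ** 2))
```

Smooth-function optimizers that rely on gradients struggle on both: the noisy function gives inconsistent readings, and the step function gives no local slope to follow.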

