Dimensionality Reduction Techniques for Global Optimization
August 15, 2018, 2:00 PM - 2:30 PM
Coralia Cartis, University of Oxford
We show that the scalability challenges of Global Optimisation (GO) algorithms can be overcome for functions with low effective dimensionality, which are constant along certain linear subspaces. Such functions arise frequently in applications, for example, in hyper-parameter optimisation for neural networks, heuristic algorithms for combinatorial optimisation problems and complex engineering simulations. We propose the use of random subspace embeddings within a(ny) global minimisation algorithm, extending the approach in Wang et al. (2013). We introduce two new frameworks, REGO (Random Embeddings for GO) and AREGO (Adaptive REGO), which transform the high-dimensional optimisation problem into a low-dimensional one.

In REGO, a new low-dimensional problem is formulated with bound constraints in the reduced space and solved with any GO solver. Using random matrix theory, we provide probabilistic bounds for the success of REGO, which show that success depends on the dimension of the embedded subspace and the intrinsic dimension of the function, but is independent of the ambient dimension. Numerical results show that high success rates can be achieved with only one embedding and that these rates are independent of the ambient dimension of the problem.

AREGO repeatedly solves a low-dimensional problem, each time with a different random subspace chosen using past information. Using results from conic integral geometry, we derive probabilistic bounds on the success of the reduced problem and show that AREGO is globally convergent with probability one for any Lipschitz function. In our numerical tests, we investigate the numerical efficiency of this adaptive approach, as well as its invariance to the ambient dimension of the problem.
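The core reduction in REGO can be sketched in a few lines: draw a random Gaussian embedding matrix, minimise the composed function over a box in the reduced space with an off-the-shelf global solver, and map the solution back. The test function, dimensions, box bounds, and choice of SciPy's differential evolution as the GO solver below are all illustrative assumptions, not the authors' exact setup.

```python
import numpy as np
from scipy.optimize import differential_evolution

# Illustrative objective with low effective dimensionality: a 100-dim
# function that only depends on its first two coordinates (it is
# constant along the complementary 98-dimensional subspace).
D = 100  # ambient dimension (assumed for illustration)

def f(x):
    return (x[0] - 1.0) ** 2 + (x[1] + 0.5) ** 2

# REGO-style reduction: draw a random Gaussian embedding A in R^{D x d}
# and minimise the reduced objective g(y) = f(A y) over a box in R^d.
d = 3  # embedding dimension, taken slightly above the intrinsic dimension
rng = np.random.default_rng(0)
A = rng.standard_normal((D, d))

def g(y):
    return f(A @ y)

# Any GO solver can be plugged in here; differential evolution is one choice.
result = differential_evolution(g, bounds=[(-5.0, 5.0)] * d, seed=0)

# Map the reduced-space minimiser back to the ambient space.
x_star = A @ result.x
print(f"reduced minimum: {result.fun:.6f}")
print(f"recovered (x1, x2): ({x_star[0]:.3f}, {x_star[1]:.3f})")
```

Note that the solver only ever works in 3 dimensions, regardless of the ambient dimension D, which is the source of the scalability gain discussed in the abstract.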
This work is joint with Adilet Otemissov (Turing Institute, London and Oxford University).