Scalable Bayesian Optimization: Comparison of Various Methods
Bayesian Optimization is one of the most popular methods for optimizing expensive black-box functions. In this project, we study some of the recent techniques for scaling Bayesian Optimization to large numbers of input data points, and we try a few novel ideas and evaluations of our own. Stay tuned for results and cleaned-up code!
Here is the directory structure. The code lives in the /code folder. Note that only the code files are tracked in Git to keep the repository small; other files can be made available on request.
BayesOpt/
├── code
│ ├── pybnn
│ │ ├── build
│ │ │ ├── bdist.linux-x86_64
│ │ │ └── lib
│ │ │ ├── pybnn
│ │ │ │ ├── sampler
│ │ │ │ └── util
│ │ │ └── test
│ │ ├── dist
│ │ ├── notebooks
│ │ ├── pybnn
│ │ │ ├── sampler
│ │ │ └── util
│ │ ├── pybnn.egg-info
│ │ └── test
│ ├── __pycache__
│ └── util
│ └── __pycache__
├── experiments
│ └── src
├── latex
└── papers
├── hyp_LDA
└── hyp_LogReg
We use the pybnn implementation as the base model for Bayesian linear regression, following J. Snoek et al. [1]. On top of this neural-network surrogate, we have implemented the standard Bayesian Optimization loop ourselves.
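For a rough idea of what that loop does, here is a minimal sketch, assuming pybnn's DNGO model (a neural network whose last layer is treated as Bayesian linear regression on learned basis functions) and its train/predict interface. The expected-improvement acquisition and the random candidate search are illustrative choices for this sketch, not a description of the exact routine in this repository.

```python
import numpy as np
from scipy.stats import norm
from pybnn import DNGO  # neural-network surrogate from pybnn (interface assumed)

def expected_improvement(mu, var, y_best):
    """Expected improvement for minimization, from predictive mean/variance."""
    sigma = np.sqrt(np.maximum(var, 1e-12))
    z = (y_best - mu) / sigma
    return (y_best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

def bayes_opt(objective, bounds, n_init=5, n_iter=20, n_candidates=1000, seed=0):
    """Minimize `objective` over a box given as a (dim, 2) array of bounds."""
    rng = np.random.default_rng(seed)
    dim = bounds.shape[0]
    # Initial design: uniform random points in the box.
    X = rng.uniform(bounds[:, 0], bounds[:, 1], size=(n_init, dim))
    y = np.array([objective(x) for x in X])
    for _ in range(n_iter):
        # Refit the surrogate: Bayesian linear regression on features
        # learned by the network (Snoek et al. [1]).
        model = DNGO(do_mcmc=False)
        model.train(X, y, do_optimize=True)
        # Maximize EI by random search over candidate points (illustrative).
        cand = rng.uniform(bounds[:, 0], bounds[:, 1], size=(n_candidates, dim))
        mu, var = model.predict(cand)
        x_next = cand[np.argmax(expected_improvement(mu, var, y.min()))]
        X = np.vstack([X, x_next])
        y = np.append(y, objective(x_next))
    best = np.argmin(y)
    return X[best], y[best]
```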
We test our implementation on a simple synthetic mathematical dataset; a plot of the data and the optimization results will be added here.
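In the meantime, a hypothetical 1-D toy objective of this kind (for illustration only; not necessarily the dataset used in this project) could be plugged into the sketch above like so:

```python
import numpy as np

# Hypothetical toy objective: a damped sinusoid over a single input.
def toy_objective(x):
    return float(np.sin(3.0 * x[0]) * 4.0 * (x[0] - 1.0) * (x[0] + 2.0))

bounds = np.array([[-2.0, 2.0]])  # box constraints for the single input
x_best, y_best = bayes_opt(toy_objective, bounds, n_init=5, n_iter=20)
print(f"best x = {x_best}, f(x) = {y_best:.4f}")
```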
Stay tuned!