basdocu.blogg.se

Optimal latin hypercube sampling

This study compares eight surrogate-model construction approaches through computational experiments. The approaches considered are Artificial Neural Networks (ANNs), Automated Learning of Algebraic Models using Optimization (ALAMO), Radial Basis Networks (RBNs), Extreme Learning Machines (ELMs), Gaussian Process Regression (GPR), Random Forests (RFs), Support Vector Regression (SVR), and Multivariate Adaptive Regression Splines (MARS). Each approach was used to construct surrogate models for predicting the outputs of thirty-four test functions. The data used to build the models were generated with Latin Hypercube Sampling (LHS) and with the Halton and Sobol sampling methods, which allowed us to study the impact of sampling method, sample size, and test-function characteristics on the accuracy of surrogate-model predictions. For smaller sample sizes, surrogate models trained on data points generated by Sobol sequences provided, on average, the best estimates across all construction approaches; as the sample size increased, the impact of the sampling method diminished. At large sample sizes, the comparison revealed that models trained using ANN, ALAMO, and ELM yielded smaller root mean squared errors and higher adjusted R-squared values than models trained using the remaining approaches.
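The three sampling methods mentioned above can all be generated with off-the-shelf tooling. As a minimal sketch (assuming SciPy's `scipy.stats.qmc` module, which is not part of the original study's described setup), the following draws LHS, Halton, and Sobol point sets on the unit hypercube and compares their uniformity via the discrepancy measure:

```python
import numpy as np
from scipy.stats import qmc

n, d = 64, 2  # sample size and dimension; n a power of 2 suits Sobol sequences

# Draw n points in [0, 1)^d with each sampler (seeds fixed for repeatability)
lhs = qmc.LatinHypercube(d=d, seed=0).random(n)
halton = qmc.Halton(d=d, seed=0).random(n)
sobol = qmc.Sobol(d=d, seed=0).random(n)

# Lower discrepancy indicates more uniform coverage of the hypercube
for name, pts in [("LHS", lhs), ("Halton", halton), ("Sobol", sobol)]:
    print(f"{name}: shape={pts.shape}, discrepancy={qmc.discrepancy(pts):.5f}")
```

In a surrogate-modeling workflow, these unit-cube points would then be rescaled to the test function's input bounds (e.g. with `qmc.scale`) before evaluating the function to produce training data.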







