| Title: | Analysis-of-Marginal-Tail-Means |
|---|---|
| Description: | Provides functions for implementing the Analysis-of-marginal-Tail-Means (ATM) method, a robust optimization method for discrete black-box optimization. Technical details can be found in Mak and Wu (2018+) <arXiv:1712.03589>. This work was supported by USARO grant W911NF-17-1-0007. |
| Authors: | Simon Mak |
| Maintainer: | Simon Mak <[email protected]> |
| License: | GPL (>= 2) |
| Version: | 0.1.0 |
| Built: | 2024-11-11 03:43:53 UTC |
| Source: | https://github.com/cran/atmopt |
The 'atmopt' package provides functions for implementing the ATM optimization method in Mak and Wu (2018+) <arXiv:1712.03589>.
| Package: | atmopt |
|---|---|
| Type: | Package |
| Version: | 1.0 |
| Date: | 2018-10-19 |
| License: | GPL (>= 2) |
The 'atmopt' package provides functions for implementing the Analysis-of-marginal-Tail-Means (ATM) method in Mak and Wu (2018+). ATM is a robust method for discrete, black-box optimization, where function evaluations are expensive and limited.
Simon Mak
Maintainer: Simon Mak <[email protected]>
Mak, S. and Wu, C. F. J. (2018+). Analysis-of-marginal-Tail-Means (ATM): a robust method for discrete black-box optimization. Under review.
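The tail-mean statistic that gives ATM its name can be illustrated with a short base-R sketch. This is not code from the package, and `tail.mean` is a hypothetical helper name: for a given factor level, the marginal tail mean averages the lower 100*alpha% of the observations recorded at that level. alpha = 1 recovers the ordinary marginal mean; small alpha approaches pick-the-winner (the observed minimum).

```r
# Illustrative sketch (hypothetical helper, not part of 'atmopt'):
# the marginal tail mean at percentage alpha averages the lower
# 100*alpha% of the observations recorded at a given factor level.
tail.mean <- function(obs, alpha){
  obs <- sort(obs)                        #order observations
  k <- max(1, ceiling(alpha*length(obs))) #keep at least one point
  mean(obs[1:k])                          #average the lower tail
}

obs <- c(5.2, 1.1, 3.4, 0.9, 2.8, 4.0) #toy observations at one level
tail.mean(obs, 1.0) #alpha = 1: ordinary marginal mean
tail.mean(obs, 0.1) #small alpha: averages only the best observation (0.9)
```

ATM tunes alpha from data, interpolating between these two extremes; calling atm.predict with alphas=rep(0,nfact), as in the package examples, corresponds to the pick-the-winner end.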
atm.addpts
Adds new evaluation data to an ATM object.
atm.addpts(atm.obj,des.new,obs.new)
atm.obj | Current ATM object.
des.new | Design matrix for new evaluations.
obs.new | Observations for new evaluations.
## Not run: 
####################################################
# Example 1: detpep10exp (9-D)
####################################################
nfact <- 9 #number of factors
ntimes <- floor(nfact/3) #number of "repeats" for detpep10exp
lev <- 4 #number of levels
nlev <- rep(lev,nfact) #number of levels for each factor
nelim <- 3 #number of level eliminations
fn <- function(xx){detpep10exp(xx,ntimes,nlev)} #objective to minimize (assumed expensive)

#initialize ATM object
# (predicts & removes levels based on tuned ATM percentages)
fit.atm <- atm.init(nfact,nlev)
#initialize sel.min object
# (predicts minimum using smallest observed value & removes levels with largest minima)
fit.min <- atm.init(nfact,nlev)

#Run for nelim eliminations:
res.atm <- rep(NA,nelim) #for ATM results
res.min <- rep(NA,nelim) #for sel.min results
for (i in 1:nelim){
  # ATM updates:
  new.des <- atm.nextpts(fit.atm) #get design points
  new.obs <- apply(new.des,1,fn) #sample function
  fit.atm <- atm.addpts(fit.atm,new.des,new.obs) #add data to object
  fit.atm <- atm.predict(fit.atm) #predict minimum setting
  idx.atm <- fit.atm$idx.opt
  res.atm[i] <- fn(idx.atm)
  fit.atm <- atm.remlev(fit.atm) #removes worst performing level
  # sel.min updates:
  new.des <- atm.nextpts(fit.min) #get design points
  new.obs <- apply(new.des,1,fn) #sample function
  fit.min <- atm.addpts(fit.min,new.des,new.obs) #add data to object
  fit.min <- atm.predict(fit.min, alphas=rep(0,nfact)) #find setting with smallest observation
  idx.min <- fit.min$idx.opt
  res.min[i] <- fn(idx.min) #check: min(fit.min$obs.all)
  fit.min <- atm.remlev(fit.min) #removes worst performing level
}
res.atm
res.min
#conclusion: ATM finds better solutions by learning & exploiting additive structure

####################################################
# Example 2: camel6 (24-D)
####################################################
nfact <- 24 #number of factors
ntimes <- floor(nfact/2.0) #number of "repeats" for camel6
lev <- 4 #number of levels
nlev <- rep(lev,nfact) #number of levels for each factor
nelim <- 3 #number of level eliminations
fn <- function(xx){camel6(xx,ntimes,nlev)} #objective to minimize (assumed expensive)

#initialize ATM object
# (predicts & removes levels based on tuned ATM percentages)
fit.atm <- atm.init(nfact,nlev)
#initialize sel.min object
# (predicts minimum using smallest observed value & removes levels with largest minima)
fit.min <- atm.init(nfact,nlev)

#Run for nelim eliminations:
res.atm <- rep(NA,nelim) #for ATM results
res.min <- rep(NA,nelim) #for sel.min results
for (i in 1:nelim){
  # ATM updates:
  new.des <- atm.nextpts(fit.atm) #get design points
  new.obs <- apply(new.des,1,fn) #sample function
  fit.atm <- atm.addpts(fit.atm,new.des,new.obs) #add data to object
  fit.atm <- atm.predict(fit.atm) #predict minimum setting
  idx.atm <- fit.atm$idx.opt
  res.atm[i] <- fn(idx.atm)
  fit.atm <- atm.remlev(fit.atm) #removes worst performing level
  # sel.min updates:
  new.des <- atm.nextpts(fit.min) #get design points
  new.obs <- apply(new.des,1,fn) #sample function
  fit.min <- atm.addpts(fit.min,new.des,new.obs) #add data to object
  fit.min <- atm.predict(fit.min, alphas=rep(0,nfact)) #find setting with smallest observation
  idx.min <- fit.min$idx.opt
  res.min[i] <- fn(idx.min) #check: min(fit.min$obs.all)
  fit.min <- atm.remlev(fit.min) #removes worst performing level
}
res.atm
res.min
#conclusion: ATM finds better solutions by learning & exploiting additive structure
## End(Not run)
atm.init
Initializes an ATM object to use for optimization.
atm.init(nfact, nlev)
nfact | Number of factors to optimize.
nlev | A vector containing the number of levels for each factor.
nfact <- 9 #number of factors
lev <- 4 #number of levels
nlev <- rep(lev,nfact) #number of levels for each factor
fit <- atm.init(nfact,nlev) #initialize ATM object
atm.nextpts
Computes new design points to evaluate, using a randomized orthogonal array (OA).
atm.nextpts(atm.obj,reps=NULL)
atm.obj | Current ATM object.
reps | Number of desired replications for the OA.
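As a rough intuition for the kind of design returned here, the sketch below (not package code; `rand.balanced.design` is a hypothetical name) draws a random design whose one-dimensional projections are balanced, i.e., each factor's levels appear equally often, as in an orthogonal array of strength 1. The randomized OA used by the package has stronger balance properties than this stand-in.

```r
# Hedged sketch: a randomized design in which each factor's levels
# occur equally often (balanced 1-D projections). 'atmopt' uses a
# proper randomized orthogonal array; this only illustrates balance.
# Assumes n is divisible by each entry of nlev for exact balance.
rand.balanced.design <- function(n, nlev){
  sapply(nlev, function(m) sample(rep(1:m, length.out = n)))
}

set.seed(1)
des <- rand.balanced.design(8, rep(4,3)) #8 runs, 3 factors, 4 levels each
table(des[,1]) #each level of factor 1 appears twice
```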
## Not run: 
####################################################
# Example 1: detpep10exp (9-D)
####################################################
nfact <- 9 #number of factors
ntimes <- floor(nfact/3) #number of "repeats" for detpep10exp
lev <- 4 #number of levels
nlev <- rep(lev,nfact) #number of levels for each factor
nelim <- 3 #number of level eliminations
fn <- function(xx){detpep10exp(xx,ntimes,nlev)} #objective to minimize (assumed expensive)

#initialize ATM object
# (predicts & removes levels based on tuned ATM percentages)
fit.atm <- atm.init(nfact,nlev)
#initialize sel.min object
# (predicts minimum using smallest observed value & removes levels with largest minima)
fit.min <- atm.init(nfact,nlev)

#Run for nelim eliminations:
res.atm <- rep(NA,nelim) #for ATM results
res.min <- rep(NA,nelim) #for sel.min results
for (i in 1:nelim){
  # ATM updates:
  new.des <- atm.nextpts(fit.atm) #get design points
  new.obs <- apply(new.des,1,fn) #sample function
  fit.atm <- atm.addpts(fit.atm,new.des,new.obs) #add data to object
  fit.atm <- atm.predict(fit.atm) #predict minimum setting
  idx.atm <- fit.atm$idx.opt
  res.atm[i] <- fn(idx.atm)
  fit.atm <- atm.remlev(fit.atm) #removes worst performing level
  # sel.min updates:
  new.des <- atm.nextpts(fit.min) #get design points
  new.obs <- apply(new.des,1,fn) #sample function
  fit.min <- atm.addpts(fit.min,new.des,new.obs) #add data to object
  fit.min <- atm.predict(fit.min, alphas=rep(0,nfact)) #find setting with smallest observation
  idx.min <- fit.min$idx.opt
  res.min[i] <- fn(idx.min) #check: min(fit.min$obs.all)
  fit.min <- atm.remlev(fit.min) #removes worst performing level
}
res.atm
res.min
#conclusion: ATM finds better solutions by learning & exploiting additive structure

####################################################
# Example 2: camel6 (24-D)
####################################################
nfact <- 24 #number of factors
ntimes <- floor(nfact/2.0) #number of "repeats" for camel6
lev <- 4 #number of levels
nlev <- rep(lev,nfact) #number of levels for each factor
nelim <- 3 #number of level eliminations
fn <- function(xx){camel6(xx,ntimes,nlev)} #objective to minimize (assumed expensive)

#initialize ATM object
# (predicts & removes levels based on tuned ATM percentages)
fit.atm <- atm.init(nfact,nlev)
#initialize sel.min object
# (predicts minimum using smallest observed value & removes levels with largest minima)
fit.min <- atm.init(nfact,nlev)

#Run for nelim eliminations:
res.atm <- rep(NA,nelim) #for ATM results
res.min <- rep(NA,nelim) #for sel.min results
for (i in 1:nelim){
  # ATM updates:
  new.des <- atm.nextpts(fit.atm) #get design points
  new.obs <- apply(new.des,1,fn) #sample function
  fit.atm <- atm.addpts(fit.atm,new.des,new.obs) #add data to object
  fit.atm <- atm.predict(fit.atm) #predict minimum setting
  idx.atm <- fit.atm$idx.opt
  res.atm[i] <- fn(idx.atm)
  fit.atm <- atm.remlev(fit.atm) #removes worst performing level
  # sel.min updates:
  new.des <- atm.nextpts(fit.min) #get design points
  new.obs <- apply(new.des,1,fn) #sample function
  fit.min <- atm.addpts(fit.min,new.des,new.obs) #add data to object
  fit.min <- atm.predict(fit.min, alphas=rep(0,nfact)) #find setting with smallest observation
  idx.min <- fit.min$idx.opt
  res.min[i] <- fn(idx.min) #check: min(fit.min$obs.all)
  fit.min <- atm.remlev(fit.min) #removes worst performing level
}
res.atm
res.min
#conclusion: ATM finds better solutions by learning & exploiting additive structure
## End(Not run)
atm.predict
Predicts the minimum setting for an ATM object.
atm.predict(atm.obj,alphas=NULL,ntimes=1,nsub=100,prob.am=0.5,prob.pw=1.0,reps=NULL)
atm.obj | Current ATM object.
alphas | A vector of ATM percentages, one for each factor; if NULL, these are tuned from data.
ntimes | Number of resamples for tuning ATM percentages.
nsub | Number of candidate percentiles to consider.
prob.am | In case of ties in percentage estimation, the probability of choosing marginal means (if optimal) for minimization.
prob.pw | In case of ties in percentage estimation, the probability of picking-the-winner (if optimal) for minimization.
reps | Number of replications for the internal OA used in tuning ATM percentages.
## Not run: 
####################################################
# Example 1: detpep10exp (9-D)
####################################################
nfact <- 9 #number of factors
ntimes <- floor(nfact/3) #number of "repeats" for detpep10exp
lev <- 4 #number of levels
nlev <- rep(lev,nfact) #number of levels for each factor
nelim <- 3 #number of level eliminations
fn <- function(xx){detpep10exp(xx,ntimes,nlev)} #objective to minimize (assumed expensive)

#initialize ATM object
# (predicts & removes levels based on tuned ATM percentages)
fit.atm <- atm.init(nfact,nlev)
#initialize sel.min object
# (predicts minimum using smallest observed value & removes levels with largest minima)
fit.min <- atm.init(nfact,nlev)

#Run for nelim eliminations:
res.atm <- rep(NA,nelim) #for ATM results
res.min <- rep(NA,nelim) #for sel.min results
for (i in 1:nelim){
  # ATM updates:
  new.des <- atm.nextpts(fit.atm) #get design points
  new.obs <- apply(new.des,1,fn) #sample function
  fit.atm <- atm.addpts(fit.atm,new.des,new.obs) #add data to object
  fit.atm <- atm.predict(fit.atm) #predict minimum setting
  idx.atm <- fit.atm$idx.opt
  res.atm[i] <- fn(idx.atm)
  fit.atm <- atm.remlev(fit.atm) #removes worst performing level
  # sel.min updates:
  new.des <- atm.nextpts(fit.min) #get design points
  new.obs <- apply(new.des,1,fn) #sample function
  fit.min <- atm.addpts(fit.min,new.des,new.obs) #add data to object
  fit.min <- atm.predict(fit.min, alphas=rep(0,nfact)) #find setting with smallest observation
  idx.min <- fit.min$idx.opt
  res.min[i] <- fn(idx.min) #check: min(fit.min$obs.all)
  fit.min <- atm.remlev(fit.min) #removes worst performing level
}
res.atm
res.min
#conclusion: ATM finds better solutions by learning & exploiting additive structure

####################################################
# Example 2: camel6 (24-D)
####################################################
nfact <- 24 #number of factors
ntimes <- floor(nfact/2.0) #number of "repeats" for camel6
lev <- 4 #number of levels
nlev <- rep(lev,nfact) #number of levels for each factor
nelim <- 3 #number of level eliminations
fn <- function(xx){camel6(xx,ntimes,nlev)} #objective to minimize (assumed expensive)

#initialize ATM object
# (predicts & removes levels based on tuned ATM percentages)
fit.atm <- atm.init(nfact,nlev)
#initialize sel.min object
# (predicts minimum using smallest observed value & removes levels with largest minima)
fit.min <- atm.init(nfact,nlev)

#Run for nelim eliminations:
res.atm <- rep(NA,nelim) #for ATM results
res.min <- rep(NA,nelim) #for sel.min results
for (i in 1:nelim){
  # ATM updates:
  new.des <- atm.nextpts(fit.atm) #get design points
  new.obs <- apply(new.des,1,fn) #sample function
  fit.atm <- atm.addpts(fit.atm,new.des,new.obs) #add data to object
  fit.atm <- atm.predict(fit.atm) #predict minimum setting
  idx.atm <- fit.atm$idx.opt
  res.atm[i] <- fn(idx.atm)
  fit.atm <- atm.remlev(fit.atm) #removes worst performing level
  # sel.min updates:
  new.des <- atm.nextpts(fit.min) #get design points
  new.obs <- apply(new.des,1,fn) #sample function
  fit.min <- atm.addpts(fit.min,new.des,new.obs) #add data to object
  fit.min <- atm.predict(fit.min, alphas=rep(0,nfact)) #find setting with smallest observation
  idx.min <- fit.min$idx.opt
  res.min[i] <- fn(idx.min) #check: min(fit.min$obs.all)
  fit.min <- atm.remlev(fit.min) #removes worst performing level
}
res.atm
res.min
#conclusion: ATM finds better solutions by learning & exploiting additive structure
## End(Not run)
atm.remlev
Removes the worst (i.e., highest) level from each factor in ATM.
atm.remlev(atm.obj)
atm.obj | Current ATM object.
## Not run: 
####################################################
# Example 1: detpep10exp (9-D)
####################################################
nfact <- 9 #number of factors
ntimes <- floor(nfact/3) #number of "repeats" for detpep10exp
lev <- 4 #number of levels
nlev <- rep(lev,nfact) #number of levels for each factor
nelim <- 3 #number of level eliminations
fn <- function(xx){detpep10exp(xx,ntimes,nlev)} #objective to minimize (assumed expensive)

#initialize ATM object
# (predicts & removes levels based on tuned ATM percentages)
fit.atm <- atm.init(nfact,nlev)
#initialize sel.min object
# (predicts minimum using smallest observed value & removes levels with largest minima)
fit.min <- atm.init(nfact,nlev)

#Run for nelim eliminations:
res.atm <- rep(NA,nelim) #for ATM results
res.min <- rep(NA,nelim) #for sel.min results
for (i in 1:nelim){
  # ATM updates:
  new.des <- atm.nextpts(fit.atm) #get design points
  new.obs <- apply(new.des,1,fn) #sample function
  fit.atm <- atm.addpts(fit.atm,new.des,new.obs) #add data to object
  fit.atm <- atm.predict(fit.atm) #predict minimum setting
  idx.atm <- fit.atm$idx.opt
  res.atm[i] <- fn(idx.atm)
  fit.atm <- atm.remlev(fit.atm) #removes worst performing level
  # sel.min updates:
  new.des <- atm.nextpts(fit.min) #get design points
  new.obs <- apply(new.des,1,fn) #sample function
  fit.min <- atm.addpts(fit.min,new.des,new.obs) #add data to object
  fit.min <- atm.predict(fit.min, alphas=rep(0,nfact)) #find setting with smallest observation
  idx.min <- fit.min$idx.opt
  res.min[i] <- fn(idx.min) #check: min(fit.min$obs.all)
  fit.min <- atm.remlev(fit.min) #removes worst performing level
}
res.atm
res.min
#conclusion: ATM finds better solutions by learning & exploiting additive structure

####################################################
# Example 2: camel6 (24-D)
####################################################
nfact <- 24 #number of factors
ntimes <- floor(nfact/2.0) #number of "repeats" for camel6
lev <- 4 #number of levels
nlev <- rep(lev,nfact) #number of levels for each factor
nelim <- 3 #number of level eliminations
fn <- function(xx){camel6(xx,ntimes,nlev)} #objective to minimize (assumed expensive)

#initialize ATM object
# (predicts & removes levels based on tuned ATM percentages)
fit.atm <- atm.init(nfact,nlev)
#initialize sel.min object
# (predicts minimum using smallest observed value & removes levels with largest minima)
fit.min <- atm.init(nfact,nlev)

#Run for nelim eliminations:
res.atm <- rep(NA,nelim) #for ATM results
res.min <- rep(NA,nelim) #for sel.min results
for (i in 1:nelim){
  # ATM updates:
  new.des <- atm.nextpts(fit.atm) #get design points
  new.obs <- apply(new.des,1,fn) #sample function
  fit.atm <- atm.addpts(fit.atm,new.des,new.obs) #add data to object
  fit.atm <- atm.predict(fit.atm) #predict minimum setting
  idx.atm <- fit.atm$idx.opt
  res.atm[i] <- fn(idx.atm)
  fit.atm <- atm.remlev(fit.atm) #removes worst performing level
  # sel.min updates:
  new.des <- atm.nextpts(fit.min) #get design points
  new.obs <- apply(new.des,1,fn) #sample function
  fit.min <- atm.addpts(fit.min,new.des,new.obs) #add data to object
  fit.min <- atm.predict(fit.min, alphas=rep(0,nfact)) #find setting with smallest observation
  idx.min <- fit.min$idx.opt
  res.min[i] <- fn(idx.min) #check: min(fit.min$obs.all)
  fit.min <- atm.remlev(fit.min) #removes worst performing level
}
res.atm
res.min
#conclusion: ATM finds better solutions by learning & exploiting additive structure
## End(Not run)
A discrete test function constructed from the six-hump camel function in Ali et al. (2005).
camel6(xx,ntimes,nlev)
xx | A vector of factor levels, one entry per factor.
ntimes | Number of duplications of the base function (base function is 2-D).
nlev | A vector containing the number of levels for each factor.
xx <- c(1,2,1,2,1,2) #input factors
nlev <- rep(4,length(xx)) #number of levels for each factor
ntimes <- length(xx)/2 #base function is 2-D, so duplicate 3 times
camel6(xx,ntimes,nlev)
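The construction can be sketched in a few lines of base R. The snippet below is an illustration, not the package's implementation: it maps level k of m equispaced levels onto an assumed domain ([-2,2] x [-1,1] per 2-D block, a common restriction for the six-hump camel function) and sums the base function over the ntimes blocks; `camel6.sketch` and `lev2x` are hypothetical names, and the exact domain and scaling in 'atmopt' may differ.

```r
# Six-hump camel base function in its standard 2-D form.
camel6.base <- function(x1, x2){
  (4 - 2.1*x1^2 + x1^4/3)*x1^2 + x1*x2 + (-4 + 4*x2^2)*x2^2
}
# Map level k of m equispaced levels onto [lo, hi].
lev2x <- function(k, m, lo, hi) lo + (k - 1)/(m - 1)*(hi - lo)
# Hypothetical sketch of the discretized, duplicated function;
# the per-block domain [-2,2] x [-1,1] is an assumption.
camel6.sketch <- function(xx, ntimes, nlev){
  val <- 0
  for (i in 1:ntimes){
    x1 <- lev2x(xx[2*i-1], nlev[2*i-1], -2, 2)
    x2 <- lev2x(xx[2*i],   nlev[2*i],   -1, 1)
    val <- val + camel6.base(x1, x2) #sum over repeated 2-D blocks
  }
  val
}
camel6.sketch(c(1,2,1,2,1,2), ntimes=3, nlev=rep(4,6))
```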
A discrete test function constructed from the modified exponential function in Dette and Pepelyshev (2010).
detpep10exp(xx,ntimes,nlev)
xx | A vector of factor levels, one entry per factor.
ntimes | Number of duplications of the base function (base function is 3-D).
nlev | A vector containing the number of levels for each factor.
xx <- c(1,2,1,2,1,2) #input factors
nlev <- rep(4,length(xx)) #number of levels for each factor
ntimes <- length(xx)/3 #base function is 3-D, so duplicate 2 times
detpep10exp(xx,ntimes,nlev)
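For reference, the continuous base function from Dette and Pepelyshev (2010) is commonly written as f(x) = 100*(exp(-2/x1^1.75) + exp(-2/x2^1.5) + exp(-2/x3^1.25)) on the unit cube; the sketch below evaluates this commonly cited form directly. This form, and the name `detpep10.base`, are assumptions for illustration; the level-to-coordinate mapping and scaling used inside 'atmopt' may differ.

```r
# Continuous 3-D exponential base function of Dette & Pepelyshev (2010),
# in its commonly cited form on (0,1]^3 (stated here as an assumption).
detpep10.base <- function(x1, x2, x3){
  100*(exp(-2/x1^1.75) + exp(-2/x2^1.5) + exp(-2/x3^1.25))
}
detpep10.base(1, 1, 1) #equals 300*exp(-2)
```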