Hello, everyone. I'm new to the community, so please let me know if there's anything wrong with my question.

I'm using Bayesian optimization with BoTorch, but I'm not sure I'm using it properly. I know that Bayesian optimization assumes a continuous search space, yet I'm feeding it inputs drawn from discrete sets.

In detail, my data consists of a 6-dimensional input and a 1-dimensional output, and each input parameter takes values from a discrete set (multiples of 50 between 0 and 150).

## Like this

```python
import torch
from botorch.models import SingleTaskGP
from botorch.fit import fit_gpytorch_model
from botorch.acquisition import UpperConfidenceBound, ExpectedImprovement
from botorch.optim import optimize_acqf
from gpytorch.kernels import MaternKernel
from gpytorch.mlls import ExactMarginalLogLikelihood

train_X = torch.tensor([
    [150.,  50.,   0., 150., 150., 150.],
    [150., 150., 150., 100.,   0.,  50.],
    [100., 100., 100., 100., 150., 150.],
    [100.,  50.,  50., 150., 150., 100.],
    [100., 150., 150., 100., 150., 100.],
    [ 50., 100., 150., 150.,  50., 150.]])

train_Y = torch.tensor([
    [280.17],
    [281.07],
    [281.79],
    [283.07],
    [283.16],
    [283.68]])

gp = SingleTaskGP(train_X, train_Y, covar_module=MaternKernel(nu=0.5))
mll = ExactMarginalLogLikelihood(gp.likelihood, gp)
fit_gpytorch_model(mll)

UCB = UpperConfidenceBound(gp, beta=0.1)
EI = ExpectedImprovement(gp, best_f=0.2)

set_1 = torch.tensor([150., 150., 150., 150., 150., 150.])  # float, so it stacks with torch.zeros
bounds = torch.stack([torch.zeros(6), set_1])

i = 30
st = []
while i != 0:
    candidate, acq_value = optimize_acqf(
        EI, bounds=bounds, q=1, num_restarts=20, raw_samples=200)
    if acq_value >= 2:
        i -= 1
        # snap the continuous candidate onto the 50-spaced grid
        a = torch.round(candidate / 50) * 50
        st.append(a)

st = torch.stack(st, 1)
st = torch.squeeze(st) * 10
print(st.numpy())
```

When I iterate like this, I run into a problem: the acquisition keeps returning the parameter set that is already the maximum of the data I've used (train_X, train_Y). I want a new candidate parameter set, but this way the same values are printed over and over.
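To show what I mean, here is a minimal pure-Python sketch of the deduplication I'd want. The helper names (`snap_to_grid`, `dedup_candidates`) and the example proposals are made up; in the real loop the proposals would come from `optimize_acqf`:

```python
# Sketch: keep only proposals whose grid-snapped version has not been
# evaluated yet. In my real loop, `proposals` would be the rounded
# candidates coming out of optimize_acqf.

def snap_to_grid(x, step=50):
    """Round each coordinate to the nearest multiple of `step`."""
    return tuple(round(v / step) * step for v in x)

def dedup_candidates(proposals, seen):
    """Drop proposals that snap onto an already-seen grid point."""
    seen = {snap_to_grid(p) for p in seen}
    fresh = []
    for p in proposals:
        g = snap_to_grid(p)
        if g not in seen:
            seen.add(g)     # also dedup within this batch
            fresh.append(g)
    return fresh

train_X = [
    (150., 50., 0., 150., 150., 150.),
    (150., 150., 150., 100., 0., 50.),
]
proposals = [
    (149.2, 51.0, 1.3, 148.8, 150.4, 149.9),  # snaps to a seen point
    (101.1, 99.0, 48.7, 0.5, 52.0, 147.6),    # new grid point
    (100.4, 100.9, 50.2, 0.1, 49.8, 150.3),   # snaps to the same new point
]
print(dedup_candidates(proposals, train_X))
```

Of course this only filters duplicates after the fact; it doesn't make the optimizer propose something new.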

I think the problem comes from deriving the discrete set by simply rounding the continuous candidates. So any comments on how to use discrete sets properly in BoTorch would be appreciated.
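For example, I've seen that BoTorch has `optimize_acqf_discrete`, which scores an explicit tensor of `choices` rather than optimizing over a continuous box — is that the right tool here? A toy sketch of the idea, with a dummy `acq_score` standing in for a real acquisition function like `ExpectedImprovement`:

```python
from itertools import product

# Sketch of the discrete approach: enumerate every allowed 6-d
# combination (4^6 = 4096 here), drop the ones already evaluated, and
# take the choice with the best acquisition score. `acq_score` is a
# dummy stand-in for a real acquisition function.

LEVELS = (0., 50., 100., 150.)  # allowed values per dimension

def acq_score(x):
    # dummy score, just for the sketch: prefer large coordinate sums
    return sum(x)

def best_unseen_choice(seen):
    """Best-scoring grid point that is not already in `seen`."""
    seen = set(seen)
    choices = (c for c in product(LEVELS, repeat=6) if c not in seen)
    return max(choices, key=acq_score)

seen = {(150., 150., 150., 150., 150., 150.)}  # best point, already tried
print(best_unseen_choice(seen))
```

Enumerating like this guarantees the candidate lies on the grid and is new, since the already-evaluated points are excluded before scoring.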

Thanks for reading.