In jDE, each individual has its own F and CR values. How do I assign these values to each individual programmatically, and how do I update them? Pseudo-code would help.
If you want each individual to have its own F and CR values, you can simply store them alongside the solution in a list. (Pseudo-code: Python)
ID_POS = 0
ID_FIT = 1
ID_F = 2
ID_CR = 3

def create_solution(problem_size):
    pos = np.random.uniform(lower_bound, upper_bound, problem_size)
    fit = fitness_function(pos)
    F = your_value
    CR = your_value
    return [pos, fit, F, CR]

def training(problem_size, pop_size, max_iteration):
    # Initialization
    pop = [create_solution(problem_size) for _ in range(0, pop_size)]

    # Evolution process
    for iteration in range(0, max_iteration):
        for i in range(0, pop_size):
            # Do your stuff here
            pos_new = ....
            fit_new = ....
            F_new = ...
            CR_new = ...
            if pop[i][ID_FIT] < fit_new:  # the new solution has better fitness than the old one
                pop[i][ID_F] = F_new
                pop[i][ID_CR] = CR_new  # this is how you update F and CR for every individual
            ...
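For the update step itself, the original jDE paper (Brest et al., 2006) self-adapts F and CR before creating the trial vector: with probability tau1 = 0.1 a fresh F is drawn uniformly from [0.1, 1.0), and with probability tau2 = 0.1 a fresh CR is drawn uniformly from [0, 1); otherwise the individual keeps its current values. A minimal sketch of that rule, reusing the list layout above:

import numpy as np

TAU1, TAU2 = 0.1, 0.1  # regeneration probabilities from the jDE paper
F_L, F_U = 0.1, 0.9    # a fresh F is drawn from [F_L, F_L + F_U)

def adapt_parameters(individual):
    # Return the (possibly regenerated) F and CR for one individual.
    F_new = F_L + F_U * np.random.rand() if np.random.rand() < TAU1 else individual[ID_F]
    CR_new = np.random.rand() if np.random.rand() < TAU2 else individual[ID_CR]
    return F_new, CR_new

The regenerated F_new and CR_new are used to build the trial vector, and (as in the loop above) they overwrite the stored values only if the trial vector wins the selection step.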
You can check out my repo, which contains most of the state-of-the-art meta-heuristics, here:
https://github.com/thieunguyen5991/metaheuristics
Hi everyone. I have a DataFrame of Pokemon data:
data = pd.read_csv('pokemon.csv')
And I'm only interested in two columns, 'type1' and 'type2' (type2 can be null, the same way the original video game does it). What I need is a single DataFrame, built from those two columns, that counts how many Pokemon have each type.
I've coded the following, trying to get two DataFrames that I can turn into the final one I am asked to produce:
tabla = {}
def contar(tipo):
    buscando = tipo
    if tipo == np.NaN:  # note: this comparison is always False, because NaN != NaN
        pass
    else:
        if tipo in tabla:
            tabla[tipo] += 1
        else:
            tabla[tipo] = 1
tabla2 = {}
def contar2(tipo):
    buscando = tipo
    if tipo == np.NaN:
        pass
    else:
        if tipo in tabla2:
            tabla2[tipo] += 1
        else:
            tabla2[tipo] = 1
def reset_tabla():
    tabla = {}   # note: without `global`, these assignments only create new local dicts
    tabla2 = {}
data['type1'].apply(contar)
df_type1 = pd.DataFrame.from_dict(tabla, orient='index')
reset_tabla()
data['type2'].apply(contar2)
df_type2 = pd.DataFrame.from_dict(tabla2, orient='index')
df_types = pd.concat([df_type1, df_type2])
df_type1
So with the above code I get the data I want, but not in the shape I need.
I expected a single table of counts, but instead the output repeats the data twice because of the two type columns. I think what I am doing wrong is the concat, because type1 and type2 look fine separately.
Finally, if you know how to combine these two DataFrames, or you think you can solve this problem better, let me know.
Thank you all :).
I've solved this issue, so in case it's useful for somebody, the solution is here:
tabla = {}
def contar(tipo):
    buscando = tipo
    if tipo in tabla:
        tabla[tipo] += 1
    else:
        tabla[tipo] = 1
tabla2 = {}
def contar2(tipo):
    buscando = tipo
    if tipo == np.NaN:
        pass
    else:
        if tipo in tabla2:
            tabla2[tipo] += 1
        else:
            tabla2[tipo] = 1

def reset_tabla():
    tabla = {}
    tabla2 = {}
reset_tabla()
data['type1'].apply(contar)
data['type2'].apply(contar2)
for x in list(tabla2.keys()):  # iterate over a copy so deletion is safe
    if type(x) == float:       # the only float key here is NaN
        del tabla2[x]

types = {"type1": tabla,
         "type2": tabla2}
df_types = pd.DataFrame(types)
df_types
So I get the single DataFrame of counts I was after.
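For anyone who wants a shorter route, the same table can probably be built without the counting helpers at all: pandas' value_counts already tallies occurrences and skips NaN by default. A minimal sketch (assuming the same pokemon.csv columns as above):

import pandas as pd

data = pd.read_csv('pokemon.csv')

# value_counts tallies each type and drops NaN automatically
df_types = pd.concat(
    [data['type1'].value_counts(), data['type2'].value_counts()],
    axis=1, keys=['type1', 'type2'])

Types that occur in only one of the two columns show up with NaN in the other column, which matches the concat-by-index behaviour above.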
I am trying to solve a MINLP problem using GEKKO. My code is the following:
m = GEKKO(remote = True)
m.options.SOLVER = 3
m.solver_options = ['minlp_maximum_iterations 500', \
                    # minlp iterations with integer solution
                    'minlp_max_iter_with_int_sol 10', \
                    # treat minlp as nlp
                    'minlp_as_nlp 0', \
                    # nlp sub-problem max iterations
                    'nlp_maximum_iterations 50', \
                    # 1 = depth first, 2 = breadth first
                    'minlp_branch_method 1', \
                    # maximum deviation from whole number
                    'minlp_integer_tol 0.05', \
                    # convergence tolerance
                    'minlp_gap_tol 0.01']
# Array Variable
rows = nb_phases + 3*b_max*(nb_phases+1)  # 48
columns = 1
x = np.empty((rows,columns),dtype=object)
for i in range(3*nb_phases*b_max+nb_phases+1):
    for j in range(columns):
        x[i,j] = m.Var(value = xinit[i,j], lb = LB[i,j], ub = UB[i,j], integer = False)
for i in range(3*nb_phases*b_max+nb_phases+1, (3*nb_phases+3)*b_max+nb_phases):
    for j in range(columns):
        x[i,j] = m.Var(value = xinit[i,j], lb = LB[i,j], ub = UB[i,j], integer = True)
# Constraints
#m.axb(A = A,b = B, x = x, etype = '<=', sparse = False)
m.axb(A,B, etype = '<=',sparse=False)
#m.axb(A = A_eq,b = B_eq, x = x, etype = '=', sparse = False)
m.axb(A_eq,B_eq, etype = '=',sparse=False)
for i in range(rows):
    for j in range(columns):
        m.Minimize((x[i,j]-i*j)**2)
#Solver
m.solve(disp = True)
When calling the axb function, if I declare the variable x in the arguments as follows:
m.axb(A = A,b = B, x = x, etype = '<=', sparse = False)
I get the error: List x must be composed of GEKKO parameters or variables. I don't really understand why I get this error, since x is a Gekko variable.
If I don't declare the variable x in the arguments of the axb function:
m.axb(A,B, etype = '<=',sparse=False)
I get the following error: AXB Missing Configuration File, Error: AXB object missing: axb1.txt, Example config file: axb1.txt
I was thinking maybe the issue is that x is not defined as an array. Therefore, considering x[i,j], I tried to write the equation Ax <= b explicitly, coding the matrix product A.x in a loop to avoid calling m.axb, but I am not sure how to declare the equations afterwards. My code is the following:
Ax = []
for i in range(rows):
    temp = []
    for j in range(columns):
        temp.append(A[i,j]*x[j,0])
    Ax.append(sum(temp))
for i in range(rows):
    m.Equations(Ax[i] <= B[i])
I get the error: 'int' object is not subscriptable
Is anyone able to help me figure out how to solve this problem?
Is there a way of defining x as an array? (Since some of its elements are integers and some aren't)
Thanks a lot!
Here is a solution that works with the newer version of Gekko that is not yet released but is available on GitHub. You'll need to put the newest version of gekko.py (v1.0) in the Lib/site-packages/gekko folder and the local executable (apm.exe for Windows, apm_mac for MacOS, apm for Linux) in the Lib/site-packages/gekko/bin folder to use remote=False.
from gekko import GEKKO
import numpy as np
m = GEKKO(remote = False)
m.options.SOLVER = 3
nb_phases = 2
b_max = 3
m.solver_options = ['minlp_maximum_iterations 500', \
                    # minlp iterations with integer solution
                    'minlp_max_iter_with_int_sol 10', \
                    # treat minlp as nlp
                    'minlp_as_nlp 0', \
                    # nlp sub-problem max iterations
                    'nlp_maximum_iterations 50', \
                    # 1 = depth first, 2 = breadth first
                    'minlp_branch_method 1', \
                    # maximum deviation from whole number
                    'minlp_integer_tol 0.05', \
                    # convergence tolerance
                    'minlp_gap_tol 0.01']
# Array Variable
rows = nb_phases + 3*b_max*(nb_phases+1)  # 48
columns = 1
xinit = np.ones(rows)
LB = np.zeros(rows)
UB = np.ones(rows)*10.0
#x = m.Array(m.Var,(rows))
x = np.empty(rows,dtype=object)
for i in range(3*nb_phases*b_max+nb_phases+1):
    x[i] = m.Var(value = xinit[i], lb = LB[i], ub = UB[i], integer = False)
for i in range(3*nb_phases*b_max+nb_phases+1, (3*nb_phases+3)*b_max+nb_phases):
    x[i] = m.Var(value = xinit[i], lb = LB[i], ub = UB[i], integer = True)
# Constraints
#m.axb(A = A,b = B, x = x, etype = '<=', sparse = False)
A = np.ones((1,rows)); B = np.zeros(1)
m.axb(A,B,x,etype = '<=',sparse=False)
#m.axb(A = A_eq,b = B_eq, x = x, etype = '=', sparse = False)
m.axb(A,B,x,etype = '=',sparse=False)
for i in range(rows):
    m.Minimize((x[i]-i)**2)
#Solver
m.options.SOLVER = 1
m.solve(disp = True)
This produces the solution:
----------------------------------------------------------------
APMonitor, Version 1.0.0
APMonitor Optimization Suite
----------------------------------------------------------------
--------- APM Model Size ------------
Each time step contains
Objects : 2
Constants : 0
Variables : 29
Intermediates: 0
Connections : 58
Equations : 29
Residuals : 29
Number of state variables: 29
Number of total equations: - 2
Number of slack variables: - 0
---------------------------------------
Degrees of freedom : 27
----------------------------------------------
Steady State Optimization with APOPT Solver
----------------------------------------------
Iter: 1 I: 0 Tm: -0.00 NLPi: 2 Dpth: 0 Lvs: 0 Obj: 7.71E+03 Gap: 0.00E+00
Successful solution
---------------------------------------------------
Solver : APOPT (v1.0)
Solution time : 0.019000000000000003 sec
Objective : 7714.
Successful solution
---------------------------------------------------
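If you would rather avoid m.axb and build the inequality constraints by hand, as attempted in the question, the loop needs m.Equation (singular, one equation at a time) rather than m.Equations, and the inner sum has to run over the columns of A. A sketch of that approach, assuming A has shape (n_con, n_var), B has length n_con, and x is the flat array of Gekko variables from the answer above:

# one scalar constraint per row of A
n_con, n_var = A.shape
for i in range(n_con):
    m.Equation(m.sum([A[i, j] * x[j] for j in range(n_var)]) <= B[i])

m.Equations (plural) expects a list of equations, so passing it a single expression is one likely source of the error reported in the question.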
I tried a sudoku solver using backtracking, but it took a long time, around 12 seconds, to produce output. So I tried to implement a multiprocessing version, but it takes far longer than the backtracking one; I have never seen it run to completion, it is that slow. Please suggest what I am missing. Even better if someone can also tell me how to run this on my GPU (using CUDA).
import concurrent.futures
import copy
A = [[0]*9 for _ in range(9)]
A[0][6] = 2
A[1][1] = 8
A[1][5] = 7
A[1][7] = 9
A[2][0] = 6
A[2][2] = 2
A[2][6] = 5
A[3][1] = 7
A[3][4] = 6
A[4][3] = 9
A[4][5] = 1
A[5][4] = 2
A[5][7] = 4
A[6][2] = 5
A[6][6] = 6
A[6][8] = 3
A[7][1] = 9
A[7][3] = 4
A[7][7] = 7
A[8][2] = 6
Boards = [A]
L = []
for i in range(9):
    for j in range(9):
        if A[i][j] == 0:
            L.append([i,j])

def RC_Check(A,Value,N):
    global L
    i,j = L[N]
    for x in range(9):
        if A[x][j] == Value:
            return False
        if A[i][x] == Value:
            return False
    return True

def Square_Check(A,Value,N):
    global L
    i,j = L[N]
    X, Y = int(i/3)*3, int(j/3)*3
    for x in range(X,X+3):
        for y in range(Y,Y+3):
            if A[x][y] == Value:
                return False
    return True

def New_Boards(Board,N):
    global L
    i,j = L[N]
    Boards = []
    with concurrent.futures.ProcessPoolExecutor() as executor:
        RC_Process = executor.map(RC_Check,[Board]*10,list(range(1,10)),[N]*10)
        Square_Process = executor.map(Square_Check,[Board]*10,list(range(1,10)),[N]*10)
        for Value, (RC_Process, Square_Process) in enumerate(zip(RC_Process,Square_Process)):
            if RC_Process and Square_Process:
                Board[i][j] = Value+1
                Boards.append(copy.deepcopy(Board))
    return Boards

def Solve_Boards(Boards,N):
    Results = []
    with concurrent.futures.ProcessPoolExecutor() as executor:
        Process = executor.map(New_Boards,Boards,[N]*len(Boards))
        for new_boards in Process:
            if len(new_boards):
                Results.extend(new_boards)
    return Results

if __name__ == "__main__":
    N = 0
    while N < len(L):
        Boards = Solve_Boards(Boards,N)
        N += 1
        print(len(Boards),N)
    print(Boards)
Multiprocessing is NOT a silver bullet. Backtracking is far more efficient than a parallel exhaustive search like this in most cases; I tried running this code on my PC, which has 32 cores / 64 threads, and it still takes a very long time. You also seem to want to use GPGPU to solve this problem, but it is not a good fit: each board state depends on the previous state, so the computation cannot be split efficiently.
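To make the comparison concrete, here is a minimal single-threaded backtracking sketch that reuses the question's board A and empty-cell list L; a solver along these lines typically finishes a puzzle like this in well under a second:

def valid(board, r, c, v):
    # v must not already appear in row r, column c, or the 3x3 box
    if any(board[r][x] == v or board[x][c] == v for x in range(9)):
        return False
    br, bc = 3 * (r // 3), 3 * (c // 3)
    return all(board[x][y] != v
               for x in range(br, br + 3) for y in range(bc, bc + 3))

def solve(board, cells, k=0):
    if k == len(cells):  # every empty cell has been filled
        return True
    r, c = cells[k]
    for v in range(1, 10):
        if valid(board, r, c, v):
            board[r][c] = v
            if solve(board, cells, k + 1):
                return True
            board[r][c] = 0  # undo and backtrack
    return False

solve(A, L)
print(A)

Because each recursive call abandons a branch as soon as one cell has no legal value, the search visits a tiny fraction of the boards that the breadth-first multiprocessing version has to copy and distribute.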
Suppose I have the following:
# in pseudo code
# function input 1
chord = [0,1,17,35,47,0]
dims = [0,1,2,4,5,6]
x_axis = 3
t_axis = 7
# what I'd like to return
np.squeeze(arr[0,1,17,:,35,47,0,:])
# function input 2
chord = [0,3,4,5,6,7]
dims = [0,2,3,4,5,6]
x_axis = 1
t_axis = 7
# desired return
np.squeeze(arr[0,:,3,4,5,6,7,:])
How do I construct these numpy slices given input that I can arbitrarily specify a pair of axes and a chord coordinate?
I implemented a reflection-based solution:
def reflection_window(arr: np.ndarray, chord: list, dim0, dim1):
    var = "arr"
    bra = "["
    ket = "]"
    coord = [str(int(i)) for i in chord]
    coord.insert(dim0, ':')
    coord.insert(dim1, ':')
    chordstr = ','.join(coord)
    slicer = var + bra + chordstr + ket
    return eval(slicer)
Staying native to numpy is probably better, but since Python doubles as a scripting language, it can make sense to treat it that way when necessary.
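For reference, the same windowing can be done natively, without eval, by building an index tuple of integers and slice(None) objects; a minimal sketch matching the inputs above (the function name window is mine):

import numpy as np

def window(arr, chord, dims, x_axis, t_axis):
    # start with a full slice on every axis ...
    idx = [slice(None)] * arr.ndim
    # ... then pin the given coordinate on each fixed axis
    for d, c in zip(dims, chord):
        idx[d] = c
    # integer indices drop their axes, so only x_axis and t_axis survive
    return np.squeeze(arr[tuple(idx)])

# function input 1 from above:
# window(arr, [0,1,17,35,47,0], [0,1,2,4,5,6], 3, 7)
# is equivalent to np.squeeze(arr[0,1,17,:,35,47,0,:])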
I have attempted to assess the relevance of some predictions based on a dataset (n * 6), but I am wondering about the causes of the strange results I am currently facing with svm.SVR.predict. The code below illustrates my point:
d = DataReader(...)
a = d.iloc[:,0:5]
b = d.iloc[:,5]
cut = 10
z = d.iloc[len(d.index) - cut :,0:5]
X,y = np.asarray(a[:-10]), np.asarray(b[:-10]) # train set
XT,yT = np.asarray(z), np.asarray(b[-10:]) # test set
clf = svm.SVR(kernel = 'rbf', gamma=0.1, C=1e3)
y_hat = clf.fit(X,y).predict(XT[i]) #, i = 0,1...
yields identical, static values for all i, despite the differences in XT[i] (PS: XT[i].shape = (5,)).
In a nutshell, the goal is to compare y_hat vs yT.
Best
You need to normalize the features before fitting an SVM: with unscaled inputs the RBF kernel distances are dominated by the largest-scale feature, and the model can collapse to a near-constant prediction, which matches what you are seeing. Try the following:
from sklearn.preprocessing import StandardScaler
d = DataReader(...)
a = d.iloc[:,0:5]
b = d.iloc[:,5]
cut = 10
z = d.iloc[len(d.index) - cut :,0:5]
X,y = np.asarray(a[:-10]), np.asarray(b[:-10]) # train set
XT,yT = np.asarray(z), np.asarray(b[-10:]) # test set
scl = StandardScaler()
X = scl.fit_transform(X)
XT = scl.transform(XT)
clf = svm.SVR(kernel = 'rbf', gamma=0.1, C=1e3)
y_hat = clf.fit(X,y).predict(XT[i]) #, i = 0,1...
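As a side note, recent scikit-learn versions require 2-D input to predict, so passing the whole test matrix (clf.predict(XT)) or reshaping a single row (XT[i].reshape(1, -1)) may be needed.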