Top % by Group in Access - ms-access-2007

I want to preface this by saying I use the Access front end, not SQL.
I have a set of data that I'm trying to build a scorecard for. I have Region, Store, SKU, and SKU Count. I need to give a score of 6 to SKUs within the top 15% of their region, a score of 3 for 50-85%, and zero for anything below 50% of the region.
This is my data (a sketch of the percentile logic follows it). I cannot figure out how to do this. Am I missing a piece of data?
Region Store SKU SKU Cnt
24 4969 14260 -942
24 7660 14162 -54
24 78 12486 -174
24 4881 14160 -376
24 78 14161 -162
24 4905 14161 -1179
24 4901 14162 -114
24 4941 14160 -312
24 4815 17912 544
24 4926 18974 112
24 4845 14160 -160
24 4828 14162 -142
28 3004 14261 -3064
28 4813 70339 -229
28 4941 14161 -382
28 4905 12486 -161
28 4907 17912 -1150
28 4907 12486 152
28 4884 14160 -62
28 7217 14261 420
28 4845 14161 -1123
28 4853 20692 -50
28 4899 20692 40
28 4872 14261 -225
32 78 17470 993
32 4956 13978 -374
32 4948 70330 -2454
32 4815 14162 167
32 4972 14161 -242
32 4372 14261 -3613
32 5413 20692 50
32 7155 64673 60
32 78 17471 484
32 4870 20713 47
32 4926 17472 -310
32 4844 12486 -66
32 4863 17471 517
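To illustrate the scoring logic outside of Access, here is a minimal pandas sketch (not an Access solution), assuming the metric being ranked is SKU Cnt and that a higher count ranks higher within its region; the same percent-rank-within-region idea would have to be rebuilt with Access queries.

import pandas as pd

# A few rows in the shape of the data above (a small subset, for illustration).
df = pd.DataFrame({
    "Region":  [24, 24, 24, 24, 28, 28, 32, 32],
    "Store":   [4969, 7660, 78, 4815, 3004, 4907, 78, 4863],
    "SKU":     [14260, 14162, 12486, 17912, 14261, 12486, 17470, 17471],
    "SKU_Cnt": [-942, -54, -174, 544, -3064, 152, 993, 517],
})

# Percentile rank of each SKU's count within its own region (1.0 = top of the region).
df["pct_in_region"] = df.groupby("Region")["SKU_Cnt"].rank(pct=True)

def score(p):
    if p >= 0.85:    # top 15% of the region
        return 6
    if p >= 0.50:    # 50% up to 85%
        return 3
    return 0         # below 50% of the region

df["score"] = df["pct_in_region"].apply(score)
print(df)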

Related

pandas df add new column based on proportion of two other columns from another dataframe

I have df1, which has three columns (loadgroup, cartons, blocks) plus two derived percentage columns, like this:
loadgroup             cartons  blocks  cartonsPercent  blocksPercent
1                     2269     14      26%             21%
2                     1168     13      13%             19%
3                     937      8       11%             12%
4                     2753     24      31%             35%
5                     1686     9       19%             13%
total(sum of column)  8813     68      100%            100%
The interpretation is: 26% of df1's cartons, which is also 21% of its blocks, are assigned to loadgroup 1, and so on. We can assume blocks are numbered 1 to 68 and cartons 1 to 8813.
I also have df2, which has cartons and blocks columns but no loadgroup.
My goal is to assign a loadgroup (also 1-5) to df2 (100 blocks and 29,608 cartons in total) while keeping the proportions: for df2, 26% of cartons and 21% of blocks should get loadgroup 1, 13% of cartons and 19% of blocks loadgroup 2, and so on.
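For reference, the two percentage columns can be derived from the raw counts; a minimal sketch, assuming df1 holds only the five loadgroup rows (no total row) and uses the column names shown in the table (the code further down uses CartonsPercent with a capital C, so adjust names to match your frame). Keeping them as fractions rather than "26%" strings also lets them be passed directly as the p= weights to np.random.choice below.

# Derive the proportion columns from the raw counts; keep them as fractions
# so they sum to 1 and can be used as sampling weights.
df1["cartonsPercent"] = df1["cartons"] / df1["cartons"].sum()
df1["blocksPercent"] = df1["blocks"] / df1["blocks"].sum()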
df2 is like this:
block  cartons
0      533
1      257
2      96
3      104
4      130
5      71
6      68
7      87
8      99
9      51
10     291
11     119
12     274
13     316
14     87
15     149
16     120
17     222
18     100
19     148
20     192
21     188
22     293
23     120
24     224
25     449
26     385
27     395
28     418
29     423
30     244
31     327
32     337
33     249
34     528
35     528
36     494
37     540
38     368
39     533
40     614
41     462
42     350
43     618
44     463
45     552
46     397
47     401
48     397
49     365
50     475
51     379
52     541
53     488
54     383
55     354
56     760
57     327
58     211
59     356
60     552
61     401
62     320
63     368
64     311
65     421
66     458
67     278
68     504
69     385
70     242
71     413
72     246
73     465
74     386
75     231
76     154
77     294
78     275
79     169
80     398
81     227
82     273
83     319
84     177
85     272
86     204
87     139
88     187
89     263
90     90
91     134
92     67
93     115
94     45
95     65
96     40
97     108
98     60
99     102
total: 100 blocks, 29608 cartons
I want to add a loadgroup column to df2 that keeps those proportions as close as possible. How can I do this? Thank you very much for the help.
I don't know how to assign the loadgroup based on both the cartons percentages and the blocks percentages, but generating a random loadgroup based on either one alone is easy.
Here is what I did: I generate 100,000 seeds first; for each seed I add a column loadGroup1 sampled by the cartons percentages and a column loadGroup2 sampled by the blocks percentages, recompute both percentages, compare them with the df1 percentages, and record the total absolute difference. Across the 100,000 seeds I take the one with the minimum difference as my solution, which is sufficient for my job.
But this is not an optimal solution, and I am looking for a quick and easy way to do it. I hope somebody can help.
Here is my code.
import numpy as np
import pandas as pd

df = pd.DataFrame()                 # one result row per (seed, weighting) tried
np.random.seed(10000)
seeds = np.random.randint(1, 1000000, size=100000)

# block/cartons must be ordinary columns for the groupby below,
# so reset df2's index once, before the search loop.
df2 = df2.reset_index()

# The full search loops over all 100,000 seeds; the range is narrowed here.
for i in range(46530, 46537):
    print(seeds[i])
    np.random.seed(seeds[i])
    # Sample a loadgroup for every df2 row, weighted by df1's proportions
    # (the percent columns must be fractions summing to 1, not '26%' strings).
    df2['loadGroup1'] = np.random.choice(df1.loadgroup, len(df2), p=df1.CartonsPercent)
    df2['loadGroup2'] = np.random.choice(df1.loadgroup, len(df2), p=df1.blocksPercent)

    # Score the cartons-weighted assignment: recompute the proportions and compare
    # with the targets (df1 should be indexed by loadgroup so the subtraction aligns).
    three = df2.groupby('loadGroup1').agg(Cartons=('cartons', 'sum'), blocks=('block', 'count'))
    three['CartonsPercent'] = three.Cartons / three.Cartons.sum()
    three['blocksPercent'] = three.blocks / three.blocks.sum()
    four = (df1[['CartonsPercent', 'blocksPercent']] - three[['CartonsPercent', 'blocksPercent']]).abs()
    subdf = pd.DataFrame({'i': [i], 'Seed': [seeds[i]],
                          'Percent': ['CartonsPercent'], 'AbsDiff': [four.sum().sum()]})
    df = pd.concat([df, subdf])

    # Score the blocks-weighted assignment the same way.
    three = df2.groupby('loadGroup2').agg(Cartons=('cartons', 'sum'), blocks=('block', 'count'))
    three['CartonsPercent'] = three.Cartons / three.Cartons.sum()
    three['blocksPercent'] = three.blocks / three.blocks.sum()
    four = (df1[['CartonsPercent', 'blocksPercent']] - three[['CartonsPercent', 'blocksPercent']]).abs()
    subdf = pd.DataFrame({'i': [i], 'Seed': [seeds[i]],
                          'Percent': ['blocksPercent'], 'AbsDiff': [four.sum().sum()]})
    df = pd.concat([df, subdf])

# The best seed is the one with the smallest total absolute difference.
df.sort_values(by='AbsDiff', ascending=True, inplace=True)
df = df.head(10)
The first row of df tells me the seed I am looking for; I kept 10 rows just out of curiosity.
Here is my solution.
block  cartons  loadgroup
0      533      4
1      257      1
2      96       4
3      104      4
4      130      4
5      71       2
6      68       1
7      87       4
8      99       4
9      51       4
10     291      4
11     119      2
12     274      2
13     316      4
14     87       4
15     149      5
16     120      3
17     222      2
18     100      2
19     148      2
20     192      3
21     188      4
22     293      1
23     120      2
24     224      4
25     449      1
26     385      5
27     395      3
28     418      1
29     423      4
30     244      5
31     327      1
32     337      5
33     249      4
34     528      1
35     528      1
36     494      5
37     540      3
38     368      2
39     533      4
40     614      5
41     462      4
42     350      5
43     618      4
44     463      2
45     552      1
46     397      3
47     401      3
48     397      1
49     365      1
50     475      4
51     379      1
52     541      1
53     488      2
54     383      2
55     354      1
56     760      5
57     327      4
58     211      2
59     356      5
60     552      4
61     401      1
62     320      1
63     368      3
64     311      3
65     421      2
66     458      5
67     278      4
68     504      5
69     385      4
70     242      4
71     413      1
72     246      2
73     465      5
74     386      4
75     231      1
76     154      4
77     294      4
78     275      1
79     169      4
80     398      4
81     227      4
82     273      1
83     319      3
84     177      4
85     272      5
86     204      3
87     139      1
88     187      4
89     263      4
90     90       4
91     134      4
92     67       3
93     115      3
94     45       2
95     65       2
96     40       4
97     108      2
98     60       2
99     102      1
Here are the summaries.
loadgroup  cartons  blocks  cartonsPercent  blocksPercent
1          7610     22      26%             22%
2          3912     18      13%             18%
3          3429     12      12%             12%
4          9269     35      31%             35%
5          5388     13      18%             13%
It's very close to my target though.

Pandas - cross columns reference

My data is a bit complicated, so I split it into two sections: (A) explaining the data, (B) the desired output.
(A) - Explaining the data:
My data is as follows:
comp date adj_date val
0 a 1999-12-31 NaT 50
1 a 2000-01-31 NaT 51
2 a 2000-02-29 NaT 52
3 a 2000-03-31 NaT 53
4 a 2000-04-30 NaT 54
5 a 2000-05-31 NaT 55
6 a 2000-06-30 NaT 56
----------------------------------
7 a 2000-07-31 2000-01-31 57
8 a 2000-08-31 2000-02-29 58
9 a 2000-09-30 2000-03-31 59
10 a 2000-10-31 2000-04-30 60
11 a 2000-11-30 2000-05-31 61
12 a 2000-12-31 2000-06-30 62
13 a 2001-01-31 2000-07-31 63
14 a 2001-02-28 2000-08-31 64
15 a 2001-03-31 2000-09-30 65
16 a 2001-04-30 2000-10-31 66
17 a 2001-05-31 2000-11-30 67
18 a 2001-06-30 2000-12-31 68
----------------------------------
19 a 2001-07-31 2001-01-31 69
20 a 2001-08-31 2001-02-28 70
21 a 2001-09-30 2001-03-31 71
22 a 2001-10-31 2001-04-30 72
23 a 2001-11-30 2001-05-31 73
24 a 2001-12-31 2001-06-30 74
25 a 2002-01-31 2001-07-31 75
26 a 2002-02-28 2001-08-31 76
27 a 2002-03-31 2001-09-30 77
28 a 2002-04-30 2001-10-31 78
29 a 2002-05-31 2001-11-30 79
30 a 2002-06-30 2001-12-31 80
----------------------------------
31 a 2002-07-31 2002-01-31 81
32 a 2002-08-31 2002-02-28 82
33 a 2002-09-30 2002-03-31 83
34 a 2002-10-31 2002-04-30 84
35 a 2002-11-30 2002-05-31 85
36 a 2002-12-31 2002-06-30 86
37 a 2003-01-31 2002-07-31 87
38 a 2003-02-28 2002-08-31 88
39 a 2003-03-31 2002-09-30 89
40 a 2003-04-30 2002-10-31 90
41 a 2003-05-31 2002-11-30 91
42 a 2003-06-30 2002-12-31 92
----------------------------------
date: the actual date, as a month end.
adj_date = date + MonthEnd(-6)
val: a given value.
I want to create a new column val_new where:
it references val of the previous year's December, and
that val_new applies to date from July through June of the following year; equivalently, in terms of adj_date, from January through December.
(B) - Desired output:
comp date adj_date val val_new
0 a 1999-12-31 NaT 50 NaN
1 a 2000-01-31 NaT 51 NaN
2 a 2000-02-29 NaT 52 NaN
3 a 2000-03-31 NaT 53 NaN
4 a 2000-04-30 NaT 54 NaN
5 a 2000-05-31 NaT 55 NaN
6 a 2000-06-30 NaT 56 NaN
-------------------------------------------
7 a 2000-07-31 2000-01-31 57 50.0
8 a 2000-08-31 2000-02-29 58 50.0
9 a 2000-09-30 2000-03-31 59 50.0
10 a 2000-10-31 2000-04-30 60 50.0
11 a 2000-11-30 2000-05-31 61 50.0
12 a 2000-12-31 2000-06-30 62 50.0
13 a 2001-01-31 2000-07-31 63 50.0
14 a 2001-02-28 2000-08-31 64 50.0
15 a 2001-03-31 2000-09-30 65 50.0
16 a 2001-04-30 2000-10-31 66 50.0
17 a 2001-05-31 2000-11-30 67 50.0
18 a 2001-06-30 2000-12-31 68 50.0
-------------------------------------------
19 a 2001-07-31 2001-01-31 69 62.0
20 a 2001-08-31 2001-02-28 70 62.0
21 a 2001-09-30 2001-03-31 71 62.0
22 a 2001-10-31 2001-04-30 72 62.0
23 a 2001-11-30 2001-05-31 73 62.0
24 a 2001-12-31 2001-06-30 74 62.0
25 a 2002-01-31 2001-07-31 75 62.0
26 a 2002-02-28 2001-08-31 76 62.0
27 a 2002-03-31 2001-09-30 77 62.0
28 a 2002-04-30 2001-10-31 78 62.0
29 a 2002-05-31 2001-11-30 79 62.0
30 a 2002-06-30 2001-12-31 80 62.0
-------------------------------------------
31 a 2002-07-31 2002-01-31 81 74.0
32 a 2002-08-31 2002-02-28 82 74.0
33 a 2002-09-30 2002-03-31 83 74.0
34 a 2002-10-31 2002-04-30 84 74.0
35 a 2002-11-30 2002-05-31 85 74.0
36 a 2002-12-31 2002-06-30 86 74.0
37 a 2003-01-31 2002-07-31 87 74.0
38 a 2003-02-28 2002-08-31 88 74.0
39 a 2003-03-31 2002-09-30 89 74.0
40 a 2003-04-30 2002-10-31 90 74.0
41 a 2003-05-31 2002-11-30 91 74.0
42 a 2003-06-30 2002-12-31 92 74.0
-------------------------------------------
I have two solutions, but both come at a cost:
Solution 1: create a sub_dec dataframe that takes val for December of each year, then merge it back into the main data. This works fine, but I don't like it because our actual data would involve a lot of merges, and it is not easy or convenient to keep track of them all.
Solution 2: (1) create a lag with shift(7), (2) set val_new to None for every adj_date except January, (3) then use groupby with ffill. This works nicely, but if any rows are missing or the dates are not continuous, the entire output is wrong.
create adj_year:
data['adj_year'] = data['adj_date'].dt.year
cross-referencing by shift(7):
data['val_new'] = data.groupby('comp')['val'].shift(7)
setting val_new to None for every adj_date except January:
data.loc[data['adj_date'].dt.month != 1, 'val_new'] = None
using ffill to fill in the Nones within each ['comp', 'adj_year'] group:
data['val_new'] = data.groupby(['comp', 'adj_year'])['val_new'].ffill()
Any suggestion to overcome the drawback of Solution 2, or any other new solution, is appreciated.
Thank you
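One way to avoid Solution 2's dependence on continuous rows is to look the December value up explicitly by company and year instead of shifting by position. A minimal sketch, assuming data holds the comp, date, adj_date, and val columns shown above:

import pandas as pd

# December month-end value for each company, keyed by (comp, calendar year).
dec_rows = data[data["date"].dt.month == 12]
dec_val = {(c, d.year): v
           for c, d, v in zip(dec_rows["comp"], dec_rows["date"], dec_rows["val"])}

# val_new = the previous year's December value, looked up via adj_date's year;
# rows where adj_date is NaT simply get NaN.
data["val_new"] = pd.Series(
    [dec_val.get((c, d.year - 1)) if pd.notna(d) else None
     for c, d in zip(data["comp"], data["adj_date"])],
    index=data.index, dtype="float64")

Because the lookup is keyed by year rather than by row position, missing or non-contiguous months only affect the rows that are actually missing.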
You can use Timedelta with the correct conversion from seconds to months, according to your needs.
Check these two resources for more info:
https://docs.python.org/3/library/datetime.html
pandas: function equivalent to SQL's datediff()?

Change value into int

My question is: I run a SQL query and my result is:
week sale sale override sell through(%)
Week-1 42 29 3.94804504619449
Week-2 46 36 3.39242402149418
Week-3 53 44 3.91839149445099
Week-4 44 33 3.53663152826439
Week-5 45 20 4.12465416879239
Week-6 45 24 3.9861284151902
Week-7 47 10 3.93148317015786
Week-8 27 14 4.96932263953541
Week-9 49 18 3.6518835739424
Week-10 56 35 3.54296186103223
Week-11 44 23 3.42675072960917
Week-12 42 28 3.73042394308072
Now I want to change sell through(%) into this:
week sale sale override sell through(%)
Week-1 42 29 39
Week-2 46 36 33
Week-3 53 44 39
Week-4 44 33 35
Week-5 45 20 41
Week-6 45 24 39
Week-7 47 10 39
Week-8 27 14 49
Week-9 49 18 36
Week-10 56 35 35
Week-11 44 23 34
Week-12 42 28 37
Is there a way to change the value?
Based on the data you have shown, it looks like you need to wrap the generation of sell through(%) in FLOOR(10.0 * ...). For example, if your existing query were used as a subquery, you could write:
SELECT [week], [sale], [sale override],
       FLOOR(10.0 * [sell through(%)]) AS [sell through(%)]
FROM (
    -- your existing query
) d
You can try something like this in your query:
declare @float float = '3.94804504619449'
select cast(@float * 10 as int)
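If the result set is instead post-processed in Python, the same FLOOR(10.0 * x) idea carries over directly; a minimal sketch, assuming the query result has been loaded into a pandas DataFrame (the frame below is a hypothetical stand-in for it):

import numpy as np
import pandas as pd

# Hypothetical stand-in for the query result shown above.
df = pd.DataFrame({
    "week": ["Week-1", "Week-2", "Week-3"],
    "sell through(%)": [3.94804504619449, 3.39242402149418, 3.91839149445099],
})

# Same idea as FLOOR(10.0 * x): scale by ten, then truncate to an integer.
df["sell through(%)"] = np.floor(df["sell through(%)"] * 10).astype(int)
print(df)   # 39, 33, 39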

How can I translate this nested query into R dplyr?

I'm a newbie in R and I'm trying to translate the following nested query using dplyr:
SELECT * FROM DAT
where concat(code, datcomp) IN
(SELECT concat(code, max(datcomp)) from DAT group by code)
DAT is a data frame containing several hundred columns.
code is a non-unique numeric field.
datcomp is a string like 'YYYY-MM-DDTHH24:MI:SS'.
What I'm trying to do is extract, for each code, the most recent timestamp from the data frame.
E.g., given
code datcomp
1 1005 2019-06-12T09:13:47
2 1005 2019-06-19T16:15:46
3 1005 2019-06-17T21:46:02
4 1005 2019-06-17T17:52:01
5 1005 2019-06-24T13:10:05
6 1015 2019-05-02T10:33:13
7 1030 2019-06-11T14:58:16
8 1030 2019-06-20T09:50:20
9 2008 2019-05-17T18:43:34
10 2008 2019-05-28T15:16:50
11 3030 2019-05-24T09:51:30
12 3032 2019-05-30T16:40:03
13 3032 2019-05-21T09:34:27
14 3062 2019-05-17T16:10:53
15 3062 2019-06-20T16:45:51
16 3069 2019-07-01T17:54:59
17 3069 2019-07-09T12:39:56
18 3069 2019-07-09T17:45:09
19 3069 2019-07-17T14:31:01
20 3069 2019-06-24T13:42:27
21 3104 2019-06-05T14:47:38
22 3104 2019-05-17T15:18:47
23 3111 2019-06-06T15:52:51
24 3111 2019-07-01T09:50:33
25 3127 2019-04-16T16:04:59
26 3127 2019-05-15T11:49:29
27 3249 2019-06-21T18:24:14
28 3296 2019-07-01T17:44:54
29 3311 2019-06-10T11:05:20
30 3311 2019-06-21T12:11:05
31 3311 2019-06-19T11:36:47
32 3332 2019-05-13T09:38:21
33 3440 2019-06-11T12:53:07
34 3440 2019-05-17T17:40:19
35 3493 2019-04-18T11:18:37
36 5034 2019-06-06T15:24:04
37 5034 2019-05-31T11:39:17
38 5216 2019-05-20T17:16:07
39 5216 2019-05-14T15:08:15
40 5385 2019-05-17T13:19:54
41 5387 2019-05-13T09:33:31
42 5387 2019-05-07T10:49:14
43 5387 2019-05-15T10:38:25
44 5696 2019-06-10T16:16:49
45 5696 2019-06-11T14:47:00
46 5696 2019-06-13T17:10:36
47 6085 2019-05-21T10:15:58
48 6085 2019-06-03T11:22:34
49 6085 2019-05-29T11:25:37
50 6085 2019-05-31T12:52:42
51 6175 2019-05-13T17:17:48
52 6175 2019-05-27T09:58:04
53 6175 2019-05-23T10:32:21
54 6230 2019-06-21T14:28:11
55 6230 2019-06-11T16:00:48
56 6270 2019-05-28T08:57:38
57 6270 2019-05-17T16:17:04
58 10631 2019-05-22T09:46:51
59 10631 2019-07-03T10:41:41
60 10631 2019-06-06T11:52:42
What I need is
code datcomp
1 1005 2019-06-24T13:10:05
2 1015 2019-05-02T10:33:13
3 1030 2019-06-20T09:50:20
4 2008 2019-05-28T15:16:50
5 3030 2019-05-24T09:51:30
6 3032 2019-05-30T16:40:03
7 3062 2019-06-20T16:45:51
8 3069 2019-07-17T14:31:01
9 3104 2019-06-05T14:47:38
10 3111 2019-07-01T09:50:33
11 3127 2019-05-15T11:49:29
12 3249 2019-06-21T18:24:14
13 3296 2019-07-01T17:44:54
14 3311 2019-06-21T12:11:05
15 3332 2019-05-13T09:38:21
16 3440 2019-06-11T12:53:07
17 3493 2019-04-18T11:18:37
18 5034 2019-06-06T15:24:04
19 5216 2019-05-20T17:16:07
20 5385 2019-05-17T13:19:54
21 5387 2019-05-15T10:38:25
22 5696 2019-06-13T17:10:36
23 6085 2019-06-03T11:22:34
24 6175 2019-05-27T09:58:04
25 6230 2019-06-21T14:28:11
26 6270 2019-05-28T08:57:38
27 10631 2019-07-03T10:41:41
Thank you in advance.
A more generalized version: group, then sort so that whatever you want comes first, then slice (which lets you take the nth value from each group as sorted):
dati %>%
  group_by(code) %>%
  arrange(desc(datcomp)) %>%
  slice(1) %>%
  ungroup()

Evaluation stuck at local optima in genetic programming (DEAP). How to prevent GP from converging on local optima?

I'm trying to do symbolic regression of a geometric model, and most of the time it gets stuck with a fitness score that is not near 0. I did some research and found out this is the local-optima problem. Some people try to prioritize population diversity over fitness, but that's not what I want.
So what I did is reconfigure algorithms.eaSimple by adding a block that resets the population when the last n=50 generations have the same fitness.
I don't have any ideas other than that, as I'm very new to this.
Is there any better way to do this?
I'm using a minimizing fitness: creator.create("FitnessMin", base.Fitness, weights=(-1.0,))
import numpy
from deap import algorithms, tools


def my_eaSimple(population, toolbox, cxpb, mutpb, ngen, stats=None,
                halloffame: tools.HallOfFame = None, verbose=True):
    logbook = tools.Logbook()
    logbook.header = ['gen', 'nevals'] + (stats.fields if stats else [])

    # Evaluate the individuals with an invalid fitness
    invalid_ind = [ind for ind in population if not ind.fitness.valid]
    fitnesses = toolbox.map(toolbox.evaluate, invalid_ind)
    for ind, fit in zip(invalid_ind, fitnesses):
        ind.fitness.values = fit
    if halloffame is not None:
        halloffame.update(population)
    record = stats.compile(population) if stats else {}
    logbook.record(gen=0, nevals=len(invalid_ind), **record)
    if verbose:
        print(logbook.stream)

    # Begin the generational process
    gen = 1
    last_few_pop_to_consider = 50
    starting_condition = last_few_pop_to_consider
    # Stagnation test: the last N minimum fitnesses barely differ from the oldest one.
    is_last_few_fitness_same = lambda stats_array: abs(numpy.mean(stats_array) - stats_array[0]) < 0.1

    while gen < ngen + 1:
        # Select the next generation individuals
        offspring = toolbox.select(population, len(population))
        # Vary the pool of individuals
        offspring = algorithms.varAnd(offspring, toolbox, cxpb, mutpb)
        # Evaluate the individuals with an invalid fitness
        invalid_ind = [ind for ind in offspring if not ind.fitness.valid]
        fitnesses = toolbox.map(toolbox.evaluate, invalid_ind)
        for ind, fit in zip(invalid_ind, fitnesses):
            ind.fitness.values = fit
        # Update the hall of fame with the generated individuals
        if halloffame is not None:
            halloffame.update(offspring)
        # Replace the current population by the offspring
        population[:] = offspring
        # Append the current generation statistics to the logbook
        record = stats.compile(population) if stats else {}
        logbook.record(gen=gen, nevals=len(invalid_ind), **record)
        if verbose:
            print(logbook.stream)
        gen += 1

        # Stopping criterion: good enough fitness reached.
        # (The 'min\t' key matches the name the statistic was registered under.)
        min_fitness = record['fitness']['min\t']
        # max_fitness = record['fitness']['max\t']
        if min_fitness < 0.1:
            print('Reached desired fitness')
            break

        # Restart criterion: the minimum fitness has stagnated over the last 50
        # generations, so discard the population and start from a fresh random one.
        if gen > starting_condition:
            min_stats = logbook.chapters['fitness'].select('min\t')[-last_few_pop_to_consider:]
            if is_last_few_fitness_same(min_stats):
                print('Defining new population')
                population = toolbox.population(n=500)
                starting_condition = gen + last_few_pop_to_consider

    return population, logbook
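For context, here is a hypothetical driver showing how my_eaSimple could be wired up. The primitive set, evaluation function, and statistics below are assumptions rather than the author's actual setup; only the 'min\t' statistic name is taken from the code above, which reads it back from the fitness chapter. The logbook output that follows is from the author's own run.

import operator
import numpy
from deap import base, creator, gp, tools

# Hypothetical symbolic-regression setup (placeholder target f(x) = x**2).
pset = gp.PrimitiveSet("MAIN", 1)
pset.addPrimitive(operator.add, 2)
pset.addPrimitive(operator.sub, 2)
pset.addPrimitive(operator.mul, 2)

creator.create("FitnessMin", base.Fitness, weights=(-1.0,))
creator.create("Individual", gp.PrimitiveTree, fitness=creator.FitnessMin)

toolbox = base.Toolbox()
toolbox.register("expr", gp.genHalfAndHalf, pset=pset, min_=1, max_=3)
toolbox.register("individual", tools.initIterate, creator.Individual, toolbox.expr)
toolbox.register("population", tools.initRepeat, list, toolbox.individual)
toolbox.register("compile", gp.compile, pset=pset)

def eval_symbreg(individual):
    # Mean squared error against the placeholder target.
    func = toolbox.compile(expr=individual)
    errors = [(func(x) - x ** 2) ** 2 for x in range(-5, 6)]
    return (sum(errors) / len(errors),)

toolbox.register("evaluate", eval_symbreg)
toolbox.register("select", tools.selTournament, tournsize=3)
toolbox.register("mate", gp.cxOnePoint)
toolbox.register("expr_mut", gp.genFull, min_=0, max_=2)
toolbox.register("mutate", gp.mutUniform, expr=toolbox.expr_mut, pset=pset)

# Two statistics chapters; note the 'min\t' name, which is what
# my_eaSimple reads out of record['fitness'] and the logbook.
stats_fit = tools.Statistics(lambda ind: ind.fitness.values)
stats_size = tools.Statistics(len)
mstats = tools.MultiStatistics(fitness=stats_fit, size=stats_size)
mstats.register("avg", numpy.mean)
mstats.register("max", numpy.max)
mstats.register("min\t", numpy.min)
mstats.register("std", numpy.std)

pop = toolbox.population(n=500)
hof = tools.HallOfFame(1)
pop, logbook = my_eaSimple(pop, toolbox, cxpb=0.5, mutpb=0.2, ngen=100,
                           stats=mstats, halloffame=hof)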
Output
gen nevals avg max min std avg max min std
0 500 2.86566e+23 1.41421e+26 112.825 6.31856e+24 10.898 38 3 9.50282
1 451 2.82914e+18 1.41421e+21 90.113 6.31822e+19 6.226 38 1 5.63231
2 458 2.84849e+18 1.41421e+21 89.1206 6.3183e+19 5.602 36 1 5.18417
3 459 4.24902e+14 2.01509e+17 75.1408 9.01321e+15 5.456 35 1 4.05167
4 463 4.23166e+14 2.03904e+17 74.3624 9.11548e+15 6.604 36 1 3.61762
5 462 2.8693e+11 1.25158e+14 65.9366 5.60408e+12 7.464 34 1 3.00478
6 467 2.82843e+18 1.41421e+21 65.9366 6.31823e+19 8.144 37 1 3.51216
7 463 5.40289e+13 2.65992e+16 65.9366 1.1884e+15 8.322 22 1 2.88276
8 450 6.59849e+14 3.29754e+17 59.1286 1.47323e+16 8.744 34 1 3.03685
9 458 1.8128e+11 8.17261e+13 54.4395 3.65075e+12 9.148 23 1 2.69557
10 459 6.59851e+14 3.29754e+17 54.4395 1.47323e+16 9.724 35 1 3.02255
11 458 2.34825e+10 1.41421e+11 54.4395 5.26173e+10 9.842 18 1 2.32057
12 459 3.52996e+11 1.60442e+14 54.4395 7.1693e+12 10.56 33 1 2.63788
13 457 3.81044e+11 1.60442e+14 54.4395 7.18851e+12 11.306 35 1 2.84611
14 457 2.30681e+13 1.15217e+16 54.4395 5.14751e+14 11.724 24 1 2.6495
15 463 2.65947e+10 1.41421e+11 54.4395 5.52515e+10 12.072 29 1 2.63036
16 469 4.54286e+10 9.13693e+12 54.4395 4.10784e+11 12.104 34 1 3.00752
17 461 6.58255e+11 1.74848e+14 54.4395 9.76474e+12 12.738 36 4 3.10956
18 450 2.03669e+10 1.41421e+11 54.4395 4.96374e+10 13.062 30 4 3.01963
19 465 1.75385e+10 2.82843e+11 54.4395 4.74595e+10 13.356 24 1 2.82157
20 458 1.83887e+10 1.41421e+11 54.4395 4.7559e+10 13.282 23 1 3.03949
21 455 3.67899e+10 8.36173e+12 54.4395 4.04044e+11 13.284 34 4 3.03106
22 461 1.36372e+10 1.41422e+11 54.4395 4.16569e+10 13.06 35 3 3.01005
23 471 2.00634e+26 1.00317e+29 54.3658 4.48181e+27 12.798 36 1 3.17698
24 466 2.82843e+18 1.41421e+21 54.3658 6.31823e+19 12.706 36 3 3.07043
25 464 3.00384e+10 8.36174e+12 54.3658 3.75254e+11 12.612 34 5 2.89231
26 474 2.00925e+10 1.41421e+11 54.3658 4.93588e+10 12.594 34 3 2.60253
27 452 2.9528e+11 1.41626e+14 54.3658 6.32694e+12 12.43 25 1 2.49822
28 453 1.23899e+10 1.41421e+11 54.3658 3.98511e+10 12.41 20 5 2.45721
29 456 5.98529e+14 2.99256e+17 54.3658 1.33697e+16 12.57 37 1 2.6346
30 474 1.35672e+13 6.69898e+15 54.3658 2.99297e+14 12.526 35 1 2.94029
31 446 6.92755e+22 3.46377e+25 54.3658 1.5475e+24 12.55 36 1 2.62517
32 462 4.02525e+10 8.16482e+12 54.3658 3.92769e+11 12.764 34 5 2.77061
33 449 1.53268e+13 7.65519e+15 54.3658 3.42007e+14 12.628 35 1 2.76218
34 466 3.13214e+16 1.54388e+19 54.3658 6.89799e+17 12.626 35 1 2.97626
35 464 2.82845e+18 1.41421e+21 54.3658 6.31823e+19 12.806 36 5 2.74597
36 460 2.93493e+11 1.32308e+14 54.3658 5.91505e+12 12.734 35 5 2.88084
37 456 2.93491e+10 8.29826e+12 54.3658 3.72372e+11 12.614 37 1 2.80517
38 449 3.44519e+10 8.16482e+12 54.3658 3.67344e+11 12.742 34 3 2.91881
39 466 1.53217e+13 7.65519e+15 54.3658 3.42008e+14 12.502 35 3 2.70296
40 454 2.82843e+18 1.41421e+21 54.3658 6.31823e+19 12.51 36 1 2.81103
41 453 9.66059e+24 4.68888e+27 54.3658 2.09566e+26 12.554 33 1 2.47691
42 448 2.2287e+10 3.38289e+12 54.3658 1.58629e+11 12.576 26 1 2.50763
43 460 5.47399e+12 2.73042e+15 54.3658 1.21985e+14 12.584 34 1 2.80053
44 460 2.82843e+18 1.41421e+21 54.3658 6.31823e+19 12.692 27 1 2.86516
45 464 2.829e+18 1.41421e+21 54.3658 6.31823e+19 12.57 34 1 3.15549
46 460 2.92607e+11 1.31556e+14 54.3658 5.88776e+12 12.61 37 3 2.78817
47 465 2.82843e+18 1.41421e+21 54.3658 6.31823e+19 12.622 36 1 3.04616
48 461 1.64306e+10 2.97245e+12 54.3658 1.37408e+11 12.468 26 1 2.57856
49 463 1.54834e+10 1.41421e+11 54.3658 4.4029e+10 12.464 20 1 2.4529
50 451 1.59239e+10 1.41421e+11 54.3658 4.44609e+10 12.63 33 1 2.76281
51 455 5.40036e+19 2.70018e+22 54.3658 1.20635e+21 12.78 37 1 2.84668
52 478 2.82843e+18 1.41421e+21 54.3658 6.31823e+19 12.712 36 3 2.84694
53 461 2.78669e+21 1.39193e+24 54.3658 6.21866e+22 12.714 36 1 3.23546
54 471 7.41272e+12 3.70045e+15 54.3658 1.65323e+14 12.336 34 3 2.848
55 465 2.83036e+18 1.41421e+21 54.3658 6.31822e+19 12.74 36 1 3.62662
56 459 2.82843e+18 1.41421e+21 54.3658 6.31823e+19 12.606 29 1 2.60437
57 453 5.98308e+24 2.99154e+27 54.3658 1.33652e+26 12.722 34 1 2.62311
58 460 3.62463e+21 1.8109e+24 54.3658 8.09047e+22 12.65 37 1 2.92361
Defining new population
59 500 5.83025e+48 2.91513e+51 109.953 1.30238e+50 10.846 38 1 8.89889
60 464 2.93632e+15 8.87105e+17 165.988 4.38882e+16 5.778 36 1 4.79173
61 444 5.54852e+19 2.70018e+22 93.5182 1.20674e+21 4.992 37 1 4.648
62 463 4.28647e+14 2.14148e+17 82.0774 9.56741e+15 5.468 34 1 4.34891
63 464 2.82843e+18 1.41421e+21 78.8184 6.31823e+19 6.624 35 1 4.25989
64 453 3.40035e+11 1.60954e+14 68.7629 7.19022e+12 7.356 36 1 3.77694
65 456 5.65762e+18 2.82851e+21 68.7629 1.26368e+20 7.606 35 1 4.15966
66 461 2.82843e+18 1.41421e+21 68.7629 6.31823e+19 7.906 35 1 3.81171
67 447 1.63302e+10 1.41421e+11 68.7629 4.51102e+10 7.802 33 1 3.47258
68 463 6.59552e+14 3.29754e+17 68.7629 1.47323e+16 8.37 34 3 3.80698
69 460 1.53579e+13 7.65512e+15 68.7629 3.42003e+14 8.646 35 1 3.64042
70 461 2.80014e+10 1.41421e+11 68.7629 5.63553e+10 9.212 38 1 3.69582
71 453 1.97446e+11 7.80484e+13 68.7629 3.50764e+12 9.84 34 1 3.74785
72 459 9.98853e+11 1.75397e+14 68.7629 1.25317e+13 10.284 35 3 3.61764
73 453 5.6863e+16 2.84218e+19 68.7629 1.26979e+18 10.796 36 1 3.86864
74 466 2.57445e+10 1.41434e+11 68.7629 5.4564e+10 10.806 35 1 3.2949
75 453 2.82849e+18 1.41421e+21 68.7629 6.31823e+19 10.876 34 1 3.27301
76 433 1.67235e+20 8.36174e+22 68.7629 3.73574e+21 10.868 35 1 2.94051
77 457 3.6663e+21 1.83315e+24 68.7629 8.1899e+22 10.964 37 3 3.21476
78 461 1.80829e+14 9.04015e+16 68.7629 4.03883e+15 10.992 35 3 3.26985
79 450 3.21984e+11 1.41626e+14 68.7629 6.32593e+12 11.17 28 1 2.77941
80 460 2.82843e+18 1.41421e+21 68.7629 6.31823e+19 11.044 35 1 3.25362
81 455 6.46751e+14 2.99308e+17 68.7629 1.34123e+16 11.06 34 1 3.51061
82 463 3.21908e+21 1.60954e+24 68.7629 7.19088e+22 11.112 34 1 3.58433
83 473 2.82843e+18 1.41421e+21 68.7629 6.31823e+19 10.946 38 3 3.70663
84 460 3.14081e+11 1.41626e+14 68.7629 6.32625e+12 10.896 35 1 3.4976
85 456 1.53419e+13 7.65526e+15 68.7629 3.4201e+14 11.156 36 1 3.23661
The population got reset in the 59th generation, after the minimum fitness had stayed at 54.4395 for 50 generations.