Error: Indexed components can only be indexed with simple slices: start and stop values are not allowed

I am running a multi-period mixed-integer linear optimization problem in Pyomo. The model has a set of 2 technologies for which I want to determine the capacity for each month of the year, so in total I have 24 decision variables.
technologies = [
'y11', 'y12', 'y21', 'y22', 'y31','y32', 'y41', 'y42', 'y51', 'y52','y61',
'y62', 'y71', 'y72', 'y81', 'y82', 'y91', 'y92', 'y101', 'y102', 'y111',
'y112', 'y121', 'y122'
]
model.technologies = pyo.Var(technologies, within=pyo.NonNegativeReals)
The capacity per month needs to be equal to or higher than the demand of that month
demand = [400, 385, 200, 350, 345, 415, 425, 380, 230, 421, 239, 450]
To formulate this constraint of capacity and demand, I wrote the following constraint for each month:
model.constraint_1 = pyo.Constraint(expr=sum(model.technologies[0:3]) >= demand[0])
model.constraint_2 = pyo.Constraint(expr=sum(model.technologies[4:7]) >= demand[1])
.........
However, when I do this, I get the error: IndexError: Indexed components can only be indexed with simple slices: start and stop values are not allowed.
Can anyone explain this error and how to fix it?
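The error means that Pyomo indexed components are indexed by the members of their index set (here the strings 'y11', 'y12', ...), not by integer positions, so a positional slice like model.technologies[0:3] is rejected. You can sum over explicit keys instead, e.g. sum(model.technologies[k] for k in ['y11', 'y12']). A cleaner fix is to index the variable by (month, technology) pairs and build one constraint per month with a rule. A minimal sketch, with my own names capacity, demand_rule and demand_con (adapt to your model):

import pyomo.environ as pyo

model = pyo.ConcreteModel()

months = range(1, 13)  # 12 months
techs = [1, 2]         # 2 technologies
demand = [400, 385, 200, 350, 345, 415, 425, 380, 230, 421, 239, 450]

# index the variable by (month, technology) instead of 24 flat string names
model.capacity = pyo.Var(months, techs, within=pyo.NonNegativeReals)

# one constraint per month: combined capacity of both technologies >= demand
def demand_rule(m, t):
    return sum(m.capacity[t, j] for j in techs) >= demand[t - 1]

model.demand_con = pyo.Constraint(months, rule=demand_rule)

This scales to any number of months and technologies without writing twelve constraints by hand.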

Related

Code to obtain the maximum value out of a list and the three consecutive values after the maximum value

I am writing a model to calculate the maximum production capacity for a machine in a year based on 15-min data. As the maximum capacity is not the sum of the required capacity for all 15-min intervals over the year, I want to write a piece of code that determines the maximum value in the list and then adds this maximum value and the next three consecutive values after it to a new variable. A simplified example would be:
fifteen_min_capacity = [10, 12, 3, 4, 8, 12, 10, 9, 2, 10, 4, 3, 15, 8, 9, 3, 4, 10]
The piece of code I want to write would be able to determine the maximum capacity in this list (15) and then add this capacity plus the three consecutive ones (8, 9, 3) to a new variable:
hourly_capacity = 35
Does anyone know the code that would give this output?
I have tried using max(), sum() and a combination of both. However, I have not managed to get working code. Any help would be much appreciated!
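One way to get this, assuming you want the first occurrence of the maximum and up to three values after it:

fifteen_min_capacity = [10, 12, 3, 4, 8, 12, 10, 9, 2, 10, 4, 3, 15, 8, 9, 3, 4, 10]

# position of the maximum value (first occurrence if there are ties)
i = fifteen_min_capacity.index(max(fifteen_min_capacity))

# sum the maximum and the next three values; slicing past the end of
# a list is safe in Python and simply returns fewer elements
hourly_capacity = sum(fifteen_min_capacity[i:i + 4])

print(hourly_capacity)  # 15 + 8 + 9 + 3 = 35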

How to calculate Type I error and Type II error by varying the sample sizes in (5, 10, 15, ..., 195, 200)? And plot these on a graph?

I have calculated the Type I and Type II error for the following data:
np.random.seed(1005)
mean = 5 # Population mean
std = 4 # Population std
n = 50 # Sample size
samples = np.random.normal(loc=mean, scale=std, size=n) # Generate the data
print(samples)
Type I error:
X = (sample_mean - 6) / (np.std(samples)/np.sqrt(n))
Type II error:
CI_lower = sample_mean - 1.96*(np.std(samples)/np.sqrt(n))
CI_upper = sample_mean + 1.96*(np.std(samples)/np.sqrt(n))
How would I use these to calculate Type I error and Type II error by varying the sample sizes in {5, 10, 15, ..., 195, 200}? I've tried increasing the sample size in a range like this but I'm not sure if this is the correct way to go:
TT1 = []
for i in range(5, 201, 5):
    p1 = 6*norm.cdf(-np.abs(X))

q1 = 6 - 1.96*(np.std(samples)/np.sqrt(range(5, 201, 5)))
q2 = 6 + 1.96*(np.std(samples)/np.sqrt(range(5, 201, 5)))
TT2 = norm.cdf(q2, loc=5.8, scale=np.std(samples)/np.sqrt(range(5, 201, 5))) - norm.cdf(q1, loc=5.8, scale=np.std(samples)/np.sqrt(range(5, 201, 5)))
The code runs, but I'm not sure whether this is the correct way to apply the intervals, or whether I need to regenerate the values in the samples variable for each sample size.
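One way to make this concrete is a Monte Carlo simulation that regenerates the samples for every sample size, rather than reusing the single n = 50 draw. The sketch below assumes a two-sided z-test of H0: mu = 6 at the 5% level with known population std 4, and a true mean of 5.8 under the alternative (the value used in the TT2 line above); adjust these to match your actual hypotheses:

import numpy as np
from scipy.stats import norm
import matplotlib.pyplot as plt

np.random.seed(1005)

mu0, mu_alt, std, alpha = 6, 5.8, 4, 0.05
z_crit = norm.ppf(1 - alpha / 2)  # 1.96 for alpha = 0.05
n_sims = 2000
sizes = range(5, 201, 5)

type1, type2 = [], []
for n in sizes:
    # Type I error: reject H0 although it is true (data drawn with mean mu0)
    samples = np.random.normal(loc=mu0, scale=std, size=(n_sims, n))
    z = (samples.mean(axis=1) - mu0) / (std / np.sqrt(n))
    type1.append(np.mean(np.abs(z) > z_crit))

    # Type II error: fail to reject H0 although the true mean is mu_alt
    samples = np.random.normal(loc=mu_alt, scale=std, size=(n_sims, n))
    z = (samples.mean(axis=1) - mu0) / (std / np.sqrt(n))
    type2.append(np.mean(np.abs(z) <= z_crit))

plt.plot(list(sizes), type1, label="Type I error rate")
plt.plot(list(sizes), type2, label="Type II error rate")
plt.xlabel("Sample size n")
plt.ylabel("Error rate")
plt.legend()
plt.show()

By construction the simulated Type I error rate should hover around alpha for every n, while the Type II error rate falls as n grows.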

Understanding Pandas Series Data Structure

I am trying to get my head around the Pandas module and started learning about the Series data structure.
I have created the following Series in Spyder :-
songs = pd.Series(data = [145,142,38,13], name = "Count")
I can obtain information about the Series index using the code:-
songs.index
The output of the above code is as follows:-
RangeIndex(start=0, stop=4, step=1)
My question is: where it states start=0 and stop=4, what are these referring to?
I have interpreted start=0 as meaning the first element in the Series is in row 0.
But I am not sure what the stop value refers to, as there is no element in row 4 of the Series.
Can someone explain?
Thank you.
This concept, as already explained adequately in the comments (the highest valid index is one less than the count of items), is prevalent in many places.
For instance, take the list data structure:
z = songs.to_list()
# [145, 142, 38, 13]
len(z)
# 4 -- the length is four
# however, indexing stops at position i - 1, 'i' being the length/count of items in the list
z[4]  # this will raise an IndexError
# valid indices start at 0 and go up to index 3 (i.e. 4 items)
z[0], z[1], z[2], z[-1]  # notice how -1 can be used to directly access the last element
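The same convention applies to the RangeIndex above: stop is exclusive, so start=0, stop=4 describes rows 0, 1, 2 and 3, exactly like Python's range(0, 4):

list(songs.index)
# [0, 1, 2, 3] -- stop=4 itself is never reached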

On running 'examples/sumo/grid.py': FatalFlowError: 'Not enough vehicles have spawned! Bad start?'

I want to simulate a traffic jam on the grid example, so I tried increasing the number of rows and columns, or increasing num_cars_left/num_cars_right/num_cars_top/num_cars_bot.
For example:
n_rows = 5
n_columns = 5
num_cars_left = 50
num_cars_right = 50
num_cars_top = 50
num_cars_bot = 50
Then, when I run it from the command line, there is this error:
Loading configuration... done.
Success.
Loading configuration... done.
Traceback (most recent call last):
  File "examples/sumo/grid.py", line 237, in <module>
    exp.run(1, 1500)
  File "/home/dnl/flow/flow/core/experiment.py", line 118, in run
    state = self.env.reset()
  File "/home/dnl/flow/flow/envs/loop/loop_accel.py", line 167, in reset
    obs = super().reset()
  File "/home/dnl/flow/flow/envs/base_env.py", line 520, in reset
    raise FatalFlowError(msg=msg)
flow.utils.exceptions.FatalFlowError:
Not enough vehicles have spawned! Bad start?
Missing vehicles / initial state:
- human_994: ('human', 'bot4_0', 0, 446, 0)
- human_546: ('human', 'top0_5', 0, 466, 0)
- human_886: ('human', 'bot3_0', 0, 366, 0)
- human_689: ('human', 'bot1_0', 0, 396, 0)
.....
I then checked 'flow/flow/envs/base_env.py', which contains the check that raises this error:
# check to make sure all vehicles have been spawned
if len(self.initial_ids) > len(initial_ids):
    missing_vehicles = list(set(self.initial_ids) - set(initial_ids))
    msg = '\nNot enough vehicles have spawned! Bad start?\n' \
          'Missing vehicles / initial state:\n'
    for veh_id in missing_vehicles:
        msg += '- {}: {}\n'.format(veh_id, self.initial_state[veh_id])
    raise FatalFlowError(msg=msg)
So my question is: is there a limit on the number of rows, columns, and num_cars_left (and right/bot/top)? If I want to simulate a traffic jam on the grid, how should I do it?
The grid example examples/sumo/grid.py doesn't use inflows by default; instead, it spawns the vehicles directly on the input edges. So if you increase the number of vehicles, you have to increase the size of the edges they spawn on. I tried your example and this setting works for me:
inner_length = 300
long_length = 500
short_length = 500
n_rows = 5
n_columns = 5
num_cars_left = 50
num_cars_right = 50
num_cars_top = 50
num_cars_bot = 50
The length of the edges the vehicles spawn on is short_length; it is the one you want to increase if the vehicles don't have enough room to be added.
Also, changing the number of rows and columns doesn't change anything here, because 50 vehicles will be added to each input edge; in this case you will have 20 input edges with 50 vehicles each, 1000 vehicles in total, which will be quite laggy.
If you want to use continuous inflows instead of one-time spawning, have a look at the use_inflows parameter in the grid_example function in examples/sumo/grid.py, and what this parameter does when it's set to True.
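For reference, assuming grid_example accepts that flag directly (the parameter name above suggests so; this is a sketch, not verified against your Flow version), the bottom of the script would become:

# build the grid experiment with continuous inflows instead of one-time spawning
exp = grid_example(use_inflows=True)
exp.run(1, 1500)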

torch7: Unexpected 'counts' in k-Means Clustering

I am trying to apply k-means clustering on a set of images (images are loaded as float torch.Tensors) using the following segment of code:
print('[Clustering all samples...]')
local points = torch.Tensor(trsize, 3, 221, 221)
for i = 1,trsize do
    points[i] = trainData.data[i]:clone() -- don't want to modify the original tensors
end
points:resize(trsize, 3*221*221) -- to convert it to a 2-D tensor
local centroids, counts = unsup.kmeans(points, total_classes, 40, total_classes, nil, true)
print(counts)
When I observe the values in the counts tensor, I see unexpected values: some entries are larger than trsize, whereas the documentation says that counts stores the counts per centroid. I took that to mean counts[i] equals the number of samples out of trsize belonging to the cluster with centroid centroids[i]. Am I wrong in assuming so?
If that is indeed the case, shouldn't sample-to-centroid assignment be a hard assignment (i.e. shouldn't the entries of counts sum to trsize, which is clearly not the case with my clustering)? Am I missing something here?
Thanks in advance.
In the current version of the code, counts are accumulated after each iteration:
for i = 1,niter do
    -- k-means computations...
    -- accumulate this iteration's counts into the running total
    totalcounts:add(counts)
end
So in the end counts:sum() is a multiple of niter.
As a workaround you can use the callback to obtain the final counts (non-accumulated):
local maxiter = 40
local centroids, counts = unsup.kmeans(
    points,
    total_classes,
    maxiter,
    total_classes,
    function(i, _, totalcounts) if i < maxiter then totalcounts:zero() end end,
    true
)
As an alternative you can use vlfeat.torch and explicitly quantize your input points after kmeans to obtain these counts:
local assignments = kmeans:quantize(points)
local counts = torch.zeros(total_classes):int()
for i=1,total_classes do
    counts[i] = assignments:eq(i):sum()
end