ode45 for a second-order system

I have written this program, but it's not working:
function xprime = exh(t,z)
xprime = [z(2);
          30*z(3) - 40*z(1);
          z(4);
          30*z(1) + 30*z(5) - 60*z(3);
          z(6);
          30*z(3) - 40*z(5)];

tspan = [0,200];
Z0 = [0.05 0.04 0.03];
[t,z] = ode45('exh',tspan,Z0);
plot(t,z)

Your system has dimension 6 but your initial value has dimension 3; this cannot work, the dimensions have to be the same.
In other words, for a system of second-order equations you need not only the initial positions but also the initial velocities. With the state ordering above (z(1), z(3), z(5) are the positions and z(2), z(4), z(6) their velocities), an initial vector such as Z0 = [0.05 0 0.04 0 0.03 0] would have the right size, assuming the velocities start at zero.

Why do my arrays display missing values when identifying a bandwidth? (geopandas)

I'm trying to identify a suitable bandwidth to use for a geographically weighted regression, but every time I search for the bandwidth it reports that there are missing (NaN) values within the arrays of the dataset, even though every row appears to contain all of its values.
g_y = df_ct2008xy['2008 HP'].values.reshape((-1,1))
g_X = df_ct2008xy[['2008 AF', '2008 MI', '2008 MP', '2008 EB']].values
u = df_ct2008xy['X']
v = df_ct2008xy['Y']
g_coords = list(zip(u,v))
g_X = (g_X - g_X.mean(axis=0)) / g_X.std(axis=0)
g_y = g_y.reshape((-1,1))
g_y = (g_y - g_y.mean(axis=0)) / g_y.std(axis=0)
bw = mgwr.sel_bw.Sel_BW(g_coords,
                        g_y,             # Dependent variable
                        g_X,             # Independent variables
                        fixed=True,      # True for fixed bandwidth, False for adaptive bandwidth
                        spherical=True)  # Spherical coordinates (long-lat) or projected coordinates
I used numpy to check whether only individual values were missing, using
np.isnan(g_y).any()
and
np.isnan(g_X)
but apparently every value is 'missing' and returns True.
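A quick way to narrow this down is to check where the NaNs actually appear and whether the standardisation step introduces them (a column with zero standard deviation becomes all NaN after (x - mean) / std). A minimal diagnostic sketch, assuming the same df_ct2008xy, g_X and g_y as above:
import numpy as np

# Count NaNs per column of the standardized design matrix
print(np.isnan(g_X).sum(axis=0))

# (row, col) positions of the first few NaNs in the dependent variable
print(np.argwhere(np.isnan(g_y))[:5])

# A zero standard deviation turns an entire column into NaN after standardisation
print(df_ct2008xy[['2008 AF', '2008 MI', '2008 MP', '2008 EB']].std())

# Non-numeric (object) columns are another common cause; check the dtypes
print(df_ct2008xy[['2008 HP', '2008 AF', '2008 MI', '2008 MP', '2008 EB']].dtypes)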

Understanding Pandas Series Data Structure

I am trying to get my head around the Pandas module and started learning about the Series data structure.
I have created the following Series in Spyder:
songs = pd.Series(data = [145,142,38,13], name = "Count")
I can obtain information about the Series index using the code:
songs.index
The output of the above code is as follows:
RangeIndex(start=0, stop=4, step=1)
My question is: where it states start=0 and stop=4, what are these referring to?
I have interpreted start=0 as meaning that the first element in the Series is in row 0.
But I am not sure what the stop value refers to, as there are no elements in row 4 of the Series.
Can some one explain?
Thank you.
This concept, as already explained in the comments (the last valid index is one less than the count of items), is prevalent in many places.
For instance, take the list data structure:
z = songs.to_list()
z                        # [145, 142, 38, 13]
len(z)                   # 4, the length is four
# However, indexing stops at position i-1, 'i' being the length/count of items in the list.
z[4]                     # this will raise an IndexError
# You have to start at index 0 and can go only up to index 3 (i.e. 4 items).
z[0], z[1], z[2], z[-1]  # notice how -1 can be used to directly access the last element
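The stop value follows the same convention as the exclusive upper bound of Python's built-in range: the index covers positions start through stop - 1. A small sketch illustrating this with the same songs Series:
import pandas as pd

songs = pd.Series(data=[145, 142, 38, 13], name="Count")

songs.index        # RangeIndex(start=0, stop=4, step=1)
list(songs.index)  # [0, 1, 2, 3] -- stop=4 itself is excluded
list(range(0, 4))  # [0, 1, 2, 3] -- the same convention as range()
songs[3]           # 13, the last element; songs[4] would raise a KeyError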

Why even though I sliced my original DataFrame and assigned it to another variable, my original DataFrame still changed values?

I am trying to calculate a portfolio's daily total price by multiplying the weight of each asset by that asset's daily price.
Currently I have a DataFrame tw which holds my asset weights and is all zeros except on the dates when I want to re-balance. What I would like to do is, for each month, populate the zeros with the weights I am re-balancing with, until the next re-balancing date, and so on.
My code:
df_of_weights = tw.loc[dates_to_rebalance[13]:]
temp_date = dates_to_rebalance[13]
counter = 0
for date in df_of_weights.index:
    if date.year == temp_date.year and date.month == temp_date.month:
        if date.day == temp_date.day:
            pass
        else:
            df_of_weights.loc[date] = df_of_weights.loc[temp_date].values
    else:
        counter += 1
        temp_date = dates_to_rebalance[13 + counter]
I understood that if you slice your DataFrame and assign the slice to a variable (df_of_weights), changing the values of that variable would not affect the original DataFrame. However, the values in tw changed. I have been searching for an answer online for a while now and am really confused.
You should use copy to fix the problem:
df_of_weights = tw.loc[dates_to_rebalance[13]:].copy()
The problem is that slicing provides a view instead of a copy. The issue is still open:
https://github.com/pandas-dev/pandas/issues/15631
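To see the difference, here is a minimal sketch (using a made-up two-column frame, not the original tw) of how writing through a slice can propagate back to the source frame, while an explicit .copy() stays independent; whether a plain slice is a view is version-dependent in pandas, so treat this as illustrative:
import pandas as pd

tw_demo = pd.DataFrame({'A': [0.0, 0.0, 0.0], 'B': [0.0, 0.0, 0.0]})

view_like = tw_demo.loc[1:]           # may share data with tw_demo
view_like.loc[1] = 0.5                # may emit SettingWithCopyWarning; if it is a view, tw_demo changes too

independent = tw_demo.loc[1:].copy()  # explicit copy, guaranteed independent
independent.loc[2] = 0.9              # tw_demo is not affected

print(tw_demo)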

Defining an RDLC chart axis with an aggregate function

The autoaxis for one of my embedded charts isn't behaving well, sometimes only showing one other major value besides top and bottom. So I thought I'd set my own boundaries, which seemed pretty easy given that one of the columns on the chart is always going to be larger than any of the others.
<Maximum>=(((Max(Fields!Entered.Value, "Chart1") + 10) \ 50) + 1) * 50</Maximum>
(the other columns detail what happened to the things that entered this process)
Round up to the nearest 50 with a little overage to put the label on top. Then I can put the intervals at this divided by 5 and I'm gold.
Except I'm not gold. The chart groups records by date and the individual bars are Sum(Fields!Entered.Value) et cetera, so it's drastically underscaling when multiple batches get processed on a single date. But hey, it groups records by date, I can use that:
<ChartCategoryHierarchy>
  <ChartMembers>
    <ChartMember>
      <Group Name="Chart1_CategoryGroup">
        <GroupExpressions>
          <GroupExpression>=Fields!Date.Value</GroupExpression>
        </GroupExpressions>
      </Group>
    </ChartMember>
  </ChartMembers>
</ChartCategoryHierarchy>
as:
<Maximum>=(((Max(Fields!Entered.Value, "Chart1_CategoryGroup") + 10) \ 50) + 1) * 50</Maximum>
and it'll aggregate over the group just fine. Right?
The ValueAxis_Primary.Maximum expression for the chart 'Chart1' has a scope parameter that is not valid for an aggregate function. The scope parameter must be set to a string constant that is equal to either the name of a containing group, the name of a containing data region, or the name of a dataset.
Nope! It works just fine for "Chart1" but not for "Chart1_CategoryGroup"!
So, uh:
what scope are the axis calculations operating in, 'cause it ain't the category scope?
is there some way to provide them an aggregate scope that groups the data by date so they can do their calculations proper?
You Have To Nest The Scope
A little extra work gave me this insight:
Max(Fields!Entered.Value, "Chart1_CategoryGroup") returns the maximum of the entered fields within one single category group, which is not the level the Y axis is concerned with. What you're interested in is the maximum value of the summed calculation (within a group) for the whole chart, so specify the scopes to do that:
<Maximum>
=(((Max(
Sum(Fields!Entered.Value, "Chart1_CategoryGroup")
, "Chart1") + 10) \ 50) + 1) * 50
</Maximum>

torch7: Unexpected 'counts' in k-Means Clustering

I am trying to apply k-means clustering on a set of images (images are loaded as float torch.Tensors) using the following segment of code:
print('[Clustering all samples...]')
local points = torch.Tensor(trsize, 3, 221, 221)
for i = 1,trsize do
    points[i] = trainData.data[i]:clone() -- don't want to modify the original tensors
end
points:resize(trsize, 3*221*221) -- convert it to a 2-D tensor
local centroids, counts = unsup.kmeans(points, total_classes, 40, total_classes, nil, true)
print(counts)
When I look at the counts tensor, it contains unexpected values: some entries are larger than trsize, whereas the documentation says that counts stores the counts per centroid. I expected this to mean that counts[i] equals the number of samples out of trsize belonging to the cluster with centroid centroids[i]. Am I wrong in assuming so?
If that is indeed the case, shouldn't the sample-to-centroid mapping be a hard assignment (i.e. shouldn't the counts[i] sum to trsize, which is clearly not the case with my clustering)? Am I missing something here?
Thanks in advance.
In the current version of the code, counts are accumulated after each iteration:
for i = 1,niter do
    -- k-means computations...
    -- total counts
    totalcounts:add(counts)
end
So in the end counts:sum() is a multiple of niter.
As a workaround you can use the callback to obtain the final counts (non-accumulated):
local maxiter = 40
local centroids, counts = unsup.kmeans(
    points,
    total_classes,
    maxiter,
    total_classes,
    function(i, _, totalcounts) if i < maxiter then totalcounts:zero() end end,
    true
)
As an alternative you can use vlfeat.torch and explicitly quantize your input points after kmeans to obtain these counts:
local assignments = kmeans:quantize(points)
local counts = torch.zeros(total_classes):int()
for i=1,total_classes do
    counts[i] = assignments:eq(i):sum()
end