Logarithms in MARIE - operators

How would I write MARIE assembly code that calculates a logarithm? The program should input A and store A, then input B and store B. Entering 2 for A and 16 for B should print the result 4, since log base 2 of 16 = 4.
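MARIE has no divide instruction, so the usual approach is a counting loop: repeatedly divide (or repeatedly subtract) the base out of B and count the iterations. A sketch of that algorithm in Python, which a MARIE program would express with Subt / Skipcond / Jump instructions:

```python
def int_log(base, value):
    """Integer logarithm by repeated division - the same loop a MARIE
    program would build from Subt / Skipcond / Jump (MARIE has no divide,
    so the division itself would be another repeated-subtraction loop)."""
    count = 0
    while value > 1:
        value //= base   # divide out one factor of the base
        count += 1       # count how many factors were removed
    return count

print(int_log(2, 16))  # prints 4
```

This only gives exact results when B is a whole power of A, which matches the example in the question (2 and 16 giving 4).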


How to pick up items whose value is greater than a number every month, and create a new column

I have the dataset below and want to pick out the names that applied at least 3 times in every month.
Those names are "passion"; otherwise "n-passion".
Month Name Applied
4 a 3
4 b 2
4 c 4
5 a 3
5 b 4
5 c 2
6 a 5
6 b 7
6 c 0
Wanted output as below
Name Status
a passion
b n-passion
c n-passion
How to achieve this?
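The desired output (only a is "passion") implies the threshold is "at least 3 in every month"; a pandas sketch under that assumption:

```python
import pandas as pd

df = pd.DataFrame({
    'Month':   [4, 4, 4, 5, 5, 5, 6, 6, 6],
    'Name':    ['a', 'b', 'c'] * 3,
    'Applied': [3, 2, 4, 3, 4, 2, 5, 7, 0],
})

# A name is "passion" only if it applied at least 3 times in *every* month:
# test the threshold per row, then require all() within each name.
status = (
    (df['Applied'] >= 3)
    .groupby(df['Name']).all()
    .map({True: 'passion', False: 'n-passion'})
    .rename('Status')
    .reset_index()
)
print(status)
```

If the rule really is strictly "more than 3", change `>= 3` to `> 3` (then no name in this sample qualifies).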

How do you copy data from a dataframe to another

I am having a difficult time getting the correct data from a reference CSV file into the one I am working on.
I have a CSV file that has over 6 million rows and 19 columns. It looks something like this:
For each row there is a brand and a model of a car amongst other information.
I want to add to this file the fuel consumption per 100km traveled and the type of fuel that is used.
I have another csv file that has the fuel consumption of every model of car that looks something like this:
What I want to ultimately do is add the matching values of G,H, I and J columns from the second file to the first one.
Because of the size of the file I was wondering if there is another way to do it other than with a "for" or a "while" loop?
EDIT :
For example...
The first df would look something like this:

ID  Brand   Model   Other_columns  Fuel_consu_1  Fuel_consu_2
1   Toyota  Rav4    a              NaN           NaN
2   Honda   Civic   b              NaN           NaN
3   GMC     Sierra  c              NaN           NaN
4   Toyota  Rav4    d              NaN           NaN

The second df would be something like this:

ID  Brand   Model    Fuel_consu_1  Fuel_consu_2
1   Toyota  Corrola  100           120
2   Toyota  Rav4     80            84
3   GMC     Sierra   91            105
4   Honda   Civic    112           125

The output should be:

ID  Brand   Model   Other_columns  Fuel_consu_1  Fuel_consu_2
1   Toyota  Rav4    a              80            84
2   Honda   Civic   b              112           125
3   GMC     Sierra  c              91            105
4   Toyota  Rav4    d              80            84
The first df may repeat the same brand and model under many different IDs; the order is completely random.
Thank you for providing updates; I was able to put something together that should help:
# Drop these two (empty) columns from the first frame; the join below will
# bring the real values in from df1 (your 2nd table provided)
df.drop(['Fuel_consu_1', 'Fuel_consu_2'], axis=1, inplace=True)
# Join the two frames to each other on the Brand and Model columns
df_merge = pd.merge(df, df1, on=['Brand', 'Model'])
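For reference, a self-contained sketch of that merge using the example frames from the EDIT (`how='left'` keeps every row of the big frame, and the merge is vectorized, so no Python for/while loop is needed even at 6 million rows):

```python
import pandas as pd

# First (big) frame: fuel columns are empty placeholders
df = pd.DataFrame({
    'ID': [1, 2, 3, 4],
    'Brand': ['Toyota', 'Honda', 'GMC', 'Toyota'],
    'Model': ['Rav4', 'Civic', 'Sierra', 'Rav4'],
    'Other_columns': ['a', 'b', 'c', 'd'],
    'Fuel_consu_1': [float('nan')] * 4,
    'Fuel_consu_2': [float('nan')] * 4,
})
# Second (lookup) frame: one row per model with the real fuel values
df1 = pd.DataFrame({
    'ID': [1, 2, 3, 4],
    'Brand': ['Toyota', 'Toyota', 'GMC', 'Honda'],
    'Model': ['Corrola', 'Rav4', 'Sierra', 'Civic'],
    'Fuel_consu_1': [100, 80, 91, 112],
    'Fuel_consu_2': [120, 84, 105, 125],
})

# Drop the empty fuel columns, then pull the real values in by Brand+Model;
# the lookup's ID column is dropped so it doesn't collide with df's ID.
out = df.drop(columns=['Fuel_consu_1', 'Fuel_consu_2']).merge(
    df1.drop(columns=['ID']), on=['Brand', 'Model'], how='left')
print(out)
```

With `how='left'`, models missing from the lookup simply come back as NaN instead of dropping rows from the big frame.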

Extract a summary of the data using groupby and optimise inspector utilisation - pandas and other optimisation packages in Python

I have accident record data as shown below across the places
Inspector_ID Place Date
0 1 A 1-09-2019
1 2 A 1-09-2019
2 1 A 1-09-2019
3 1 B 1-09-2019
4 3 A 1-09-2019
5 3 A 1-09-2019
6 1 A 2-09-2019
7 3 A 2-09-2019
8 2 B 2-09-2019
9 3 A 3-09-2019
10 1 C 3-09-2019
11 1 D 3-09-2019
12 1 A 3-09-2019
13 1 E 3-09-2019
14 1 A 3-09-2019
15 1 A 3-09-2019
16 3 A 4-09-2019
17 3 B 5-09-2019
18 4 B 5-09-2019
19 3 A 5-09-2019
20 3 C 5-09-2019
21 3 A 5-09-2019
22 3 D 5-09-2019
23 3 C 5-09-2019
From the above data, I want to optimise inspector utilisation.
For that I tried the code below to get the objective function of the optimisation.
c = df.groupby('Place').Inspector_ID.agg(
Total_Number_of_accidents='count',
Number_unique_Inspector='nunique',
Unique_Inspector='unique').reset_index().sort_values(['Total_Number_of_accidents'], ascending=False)
Below is the output of the above code:
Place Total_Number_of_accidents Number_unique_Inspector Unique_Inspector
0 A 14 3 [1, 2, 3]
1 B 4 4 [1, 2, 3, 4]
2 C 3 2 [1, 3]
3 D 2 2 [1, 3]
4 E 1 1 [1]
And then
f = df.groupby('Inspector_ID').Place.agg(
Total_Number_of_accidents='count',
Number_unique_Place='nunique',
Unique_Place='unique').reset_index().sort_values(['Total_Number_of_accidents'], ascending=False)
Output:
Inspector_ID Total_Number_of_accidents Number_unique_Place Unique_Place
2 3 11 4 [A, B, C, D]
0 1 10 5 [A, B, C, D, E]
1 2 2 2 [A, B]
3 4 1 1 [B]
From the above we have 4 inspectors, 5 places and 24 accidents. I want to optimise the allocation of inspectors based on this data.
Condition 1 - There should be at least 1 inspector in each place.
Condition 2 - Every inspector should be assigned at least one place.
Condition 3 - Identify the places that are over-utilised based on the number of accidents (e.g. Place B has only 4 accidents but four inspectors, so some inspectors from Place B could be reassigned to Place A - and then which inspectors, and how many?).
Is it possible to do this in Python? If so, with which algorithm, and how?
This is an assignment problem (https://en.wikipedia.org/wiki/Assignment_problem); it can be reduced to a max-flow problem, with some balancing of the flow, using a graph package such as NetworkX.
How to build the di-graph:
Vertex s is the source of the flow (of accidents). The set S contains all places that have accidents, and X_s is the set of edges (s, x) for x in S; the capacity of each edge in X_s comes from the Total_Number_of_accidents column. Vertex t is the sink, with the analogous inspector set T and edge set X_t; the capacity of each edge in X_t is the maximum number of accidents one inspector can process (we will come back to that). Finally, add edges (x, y) from each place x in S to each inspector y in T with a high capacity (e.g. 1e6) and call this set X_c; the flow on these edges tells you how much load inspector y takes from place x.
Now solve the max-flow problem. When some edge in X_t carries too much flow, decrease its capacity (to reduce the load on that inspector); when some edge in X_c carries very little flow, simply remove it to reduce the complexity of the work organisation. After a few iterations you should have the desired solution.
You could code some clever algorithm, but if this is a real-life problem you want to avoid situations like assigning one inspector to every place to process 0.38234 accidents at each...
There should probably also be constraints on how many accidents an inspector can process in a given time, but you didn't mention any.
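A minimal sketch of that construction with NetworkX, using the accident counts from the groupby output above; the per-inspector capacity of 7 is an assumed starting point, the value you would tune between iterations:

```python
import networkx as nx

# Accident counts per place (from the Total_Number_of_accidents column)
accidents = {'A': 14, 'B': 4, 'C': 3, 'D': 2, 'E': 1}
inspectors = [1, 2, 3, 4]
max_load = 7  # assumed cap on accidents per inspector; tune and re-solve

G = nx.DiGraph()
for place, n in accidents.items():
    # X_s: source -> place, capacity = accidents at that place
    G.add_edge('s', f'place_{place}', capacity=n)
    for insp in inspectors:
        # X_c: place -> inspector, effectively unbounded capacity
        G.add_edge(f'place_{place}', f'insp_{insp}', capacity=10**6)
for insp in inspectors:
    # X_t: inspector -> sink, capacity = inspector's max load
    G.add_edge(f'insp_{insp}', 't', capacity=max_load)

flow_value, flow = nx.maximum_flow(G, 's', 't')
print(flow_value)  # total accidents covered
for place in accidents:
    for insp in inspectors:
        load = flow[f'place_{place}'][f'insp_{insp}']
        if load:
            print(f'inspector {insp} takes {load} accidents at {place}')
```

With these numbers the sink capacity (4 x 7 = 28) exceeds the supply (24), so all 24 accidents are covered; lowering `max_load` or deleting thin X_c edges is the iteration step described above.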

Aggregate based on flag (with qualifiers)

I am trying to convert some calculated fields in Hyperion Studio to SQL. I got stuck trying to aggregate enrollment counts based on a shared course location/date/time. I have a flag created using LEAD to mark rows where the course location is identical to the row below. I need to roll up those consecutive rows, based on the flag, to get a total enrollment count for each location. The flag calculation includes exceptions (seen in example below, rows 2 and 3) where I specifically command it not to flag despite a shared location.
This is an example of base data that includes the flag (Roll Up is the field I'm trying to calculate):
SectionID Course Name Title Instructor Location Enrollment Flag Roll Up
1 EN.100.201 Title1 Prof. W Building 1 16
2 EN.550.365 Title2 Prof. X Building 2 5
3 EN.530.403 Title3 Prof. Y Building 2 30
4 EN.400.401 Title4 Prof. Z Building 3 25 Y
5 EN.400.601 Title4 Prof. Z Building 3 10
Here is the output I'm trying to achieve
SectionID Course Name Title Instructor Location Enrollment Flag Roll Up
1 EN.100.201 Title1 Prof. W Building 1 16 16
2 EN.550.365 Title2 Prof. X Building 2 5 5
3 EN.530.403 Title3 Prof. Y Building 2 30 30
5 EN.400.601 Title4 Prof. Z Building 3 10 35
Thanks in advance! Edit: made this easier to read; sorry, I'm new here.
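The question is about SQL, but the rollup logic can be sketched in pandas to make it concrete: a 'Y' flag means the row folds into the row below it, so a run of flagged rows plus the row that ends the run form one group, and the group total lands on that last row (in SQL, a SUM() OVER the same derived group key would do the equivalent):

```python
import pandas as pd

df = pd.DataFrame({
    'SectionID': [1, 2, 3, 4, 5],
    'Location': ['Building 1', 'Building 2', 'Building 2',
                 'Building 3', 'Building 3'],
    'Enrollment': [16, 5, 30, 25, 10],
    'Flag': [None, None, None, 'Y', None],  # rows 2/3 are the exception
})

# A row starts a new group whenever the *previous* row was not flagged;
# the cumulative sum of those starts is a group id.
grp = (df['Flag'].shift() != 'Y').cumsum()
df['Roll Up'] = df.groupby(grp)['Enrollment'].transform('sum')
# Keep only the final row of each run, which carries the rolled-up total.
out = df[df['Flag'] != 'Y']
print(out)
```

Section 4 (flagged) disappears and section 5 shows the combined total of 35, matching the desired output; unflagged rows like 2 and 3 stay separate even though they share a location.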

How to count the ID with the same prefix and store the total number in another column

I have a dataset in which the ID carries classification info: the last 2 digits of the ID are the sub-ID (01, 02, 03, etc.) within a family. Below is an example. I am trying to add another column (the 2nd column) that stores how many sub-IDs the same family has. E.g., 22302 belongs to family 223, which has 3 members: 22301, 22302 and 22303. That gives me a new feature for classification modelling. I'm not sure if there is a better way to extract this information; in any case, can someone show me how to compute the count for each class (as shown in the 2nd column)?
ID Same class
23401 1
22302 3
43201 1
144501 2
144502 2
22301 3
22303 3
You can do it with a str slice (to strip the sub-ID) plus groupby/transform:
df['New']=df.groupby(df.ID.astype(str).str[:-2]).ID.transform('size')
df
Out[223]:
ID Sameclass New
0 23401 1 1
1 22302 3 3
2 43201 1 1
3 144501 2 2
4 144502 2 2
5 22301 3 3
6 22303 3 3