RODBC: Columns and values don't match

I came across this behavior in RODBC (using SQL Server driver):
df1 = data.frame(matrix(c(1:20), nrow=10))
df1
which outputs
X1 X2
1 1 11
2 2 12
3 3 13
4 4 14
5 5 15
6 6 16
7 7 17
8 8 18
9 9 19
10 10 20
which makes sense. Then I save the table using RODBC
sqlSave(conout, df1, 'TEST')
Then I switch the two created columns:
df2 = df1[,c(2,1)]
df2
which outputs
X2 X1
1 11 1
2 12 2
3 13 3
4 14 4
5 15 5
6 16 6
7 17 7
8 18 8
9 19 9
10 20 10
which also makes sense.
In both data frames, X1 contains only 1:10 and X2 contains only 11:20. Now, when I do
sqlSave(conout, df2, 'TEST', append=TRUE, fast=FALSE)
sqlQuery(conout, 'SELECT * FROM TEST')
rownames X1 X2
1 1 1 11
2 2 2 12
3 3 3 13
4 4 4 14
5 5 5 15
6 6 6 16
7 7 7 17
8 8 8 18
9 9 9 19
10 10 10 20
11 1 11 1
12 2 12 2
13 3 13 3
14 4 14 4
15 5 15 5
16 6 16 6
17 7 17 7
18 8 18 8
19 9 19 9
20 10 20 10
which is definitely not what I saved. Now three questions:
How is this possible?
Where is this behavior explained in the RODBC manual?
How can I prevent this behavior without reordering my columns (the real case behind this example has > 300 columns)?

How can I plot two lines in one graph where values of the lines do not exist for the same x axis?

I would like to plot SupDem (variable) where e_boix_regime==1 and SupDem where e_boix_regime==0.
My data:
year  SupDem  e_boix_regime
1997    0.98              1
1998    0.75              0
My code:
dem = dem_aut[dem_aut["e_boix_regime"]==1].SupDem
aut = dem_aut[dem_aut["e_boix_regime"]==0].SupDem
year = dem_aut["year"]
plt.plot(year, dem, label="Support for Democracy in Democracies")
plt.plot(year, aut, label="Support for Democracy in Autocracies")
plt.show()
The error is the following: x and y must have same first dimension, but have shapes (53,) and (28,)
I just wanted to plot two lines together.
This can help you solve the problem; I hope you can reproduce the code with it:
two (or more) graphs in one plot with different x-axis AND y-axis scales in python
Issue
Your issue is with the shapes of x and y. To plot a line, the x-values and y-values must have the same number of data points (the same shape).
Solution
Filter year with the same dem_aut["e_boix_regime"] condition that you apply to SupDem (the demo below uses regime values 1 and 2).
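Applied directly to the question's own dem_aut frame, a minimal sketch (assuming dem_aut has the year, SupDem and e_boix_regime columns shown in the question) could look like this:
import matplotlib.pyplot as plt
# build one boolean mask per regime and reuse it for both year and SupDem,
# so every plotted x/y pair has the same length
dem_mask = dem_aut["e_boix_regime"] == 1
aut_mask = dem_aut["e_boix_regime"] == 0
plt.plot(dem_aut.loc[dem_mask, "year"], dem_aut.loc[dem_mask, "SupDem"], label="Support for Democracy in Democracies")
plt.plot(dem_aut.loc[aut_mask, "year"], dem_aut.loc[aut_mask, "SupDem"], label="Support for Democracy in Autocracies")
plt.legend()
plt.show()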
Source Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
df = pd.DataFrame(
    {
        "SupDem": np.random.randint(1, 11, 30),
        "year": np.random.randint(10, 21, 30),
        "e_boix_regime": np.random.randint(1, 3, 30),
    }
)  # see DataFrame below
df["e_boix_regime"].value_counts() # 1 = 18, 2 = 12
df[df["e_boix_regime"] == 2][["SupDem", "year"]] # see below
# you need same no. of data points for both x/y axis i.e. `year` and `SupDem`
plt.plot(
df[df["e_boix_regime"] == 1]["year"], df[df["e_boix_regime"] == 1]["SupDem"], marker="o", label="e_boix_regime==1"
)
# hence applying same condition for grabbing year which is applied for SupDem
plt.plot(
df[df["e_boix_regime"] == 2]["year"], df[df["e_boix_regime"] == 2]["SupDem"], marker="o", label="e_boix_regime==2"
)
plt.xlabel("Year")
plt.ylabel("SupDem")
plt.legend()
plt.show()
Output
PS: Ignore the plotted data points; they are generated from random values.
DataFrame Outputs
SupDem year e_boix_regime
0 1 12 2
1 10 10 1
2 5 19 2
3 4 14 2
4 8 14 2
5 4 17 2
6 2 15 2
7 10 11 1
8 8 11 2
9 6 19 2
10 5 15 1
11 8 17 1
12 9 10 2
13 1 14 2
14 8 18 1
15 3 13 2
16 6 16 2
17 1 16 1
18 7 13 1
19 8 15 2
20 2 17 2
21 5 10 2
22 1 19 2
23 5 20 2
24 7 16 1
25 10 14 1
26 2 11 2
27 1 18 1
28 5 16 1
29 10 18 2
df[df["e_boix_regime"] == 2][["SupDem", "year"]]
SupDem year
0 1 12
2 5 19
3 4 14
4 8 14
5 4 17
6 2 15
8 8 11
9 6 19
12 9 10
13 1 14
15 3 13
16 6 16
19 8 15
20 2 17
21 5 10
22 1 19
23 5 20
26 2 11
29 10 18

Unpivot a data-frame that has information of two teams in one row?

I have some data that holds information about two opposing teams
home_x away_x
0 7 28
1 11 10
2 11 20
3 12 15
4 12 16
I know about .melt(), which returns something like this:
variable value
0 home_x 7
1 home_x 11
2 home_x 11
3 home_x 12
4 home_x 12
So each value is a row here.
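For reference, a minimal sketch of the melt() call that produces that long format (assuming df is the small home_x/away_x frame above):
import pandas as pd
df = pd.DataFrame({"home_x": [7, 11, 11, 12, 12],
                   "away_x": [28, 10, 20, 15, 16]})
long_df = df.melt()  # one row per (variable, value) pair
print(long_df.head())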
There are several attributes for each team.
I want each row to have all the attributes for the respective team (home or away).
The ultimate goal is to have all the attributes of both teams in one row, once from each team's perspective, which doubles the number of rows.
home_x away_x
0 7 28
would be transformed into:
team1_x team2_x
0 7 28
0 28 7
sample df:
   home_x  away_x  home_y  away_y
0       7      28       7      20
1      28       7      28      13
2      28       7      28       4
3       7      28       7      58
4      11      10      11      10
try:
import pandas as pd
res = pd.DataFrame()
for c in df.columns.str.split("_").str[1].unique():
    p1 = df.filter(regex=f"{c}$")          # the home_/away_ pair for suffix c
    c1, c2 = p1.columns
    df_map = {c1: c2, c2: c1}
    swap = p1.rename(columns={**df_map})   # same pair with the columns swapped
    # DataFrame.append is deprecated, so stack the two frames with pd.concat
    res = pd.concat([res, pd.concat([p1, swap]).sort_index(ignore_index=True)], axis=1)
then rename the columns.
import re
repl = {'home': 'team1', 'away': 'team2'}
res.columns = [re.sub('|'.join(repl.keys()), lambda x: repl[x.group()], i) for i in res.columns]
   team1_x  team2_x  team1_y  team2_y
0        7       28        7       20
1       28        7       20        7
2       28        7       28       13
3        7       28       13       28
4       28        7       28        4
5        7       28        4       28
6        7       28        7       58
7       28        7       58        7
8       11       10       11       10
9       10       11       10       11
Here is an approach:
Group the columns on axis=1 by the last part of the split column names; then, for each group, rename the original and the column-reversed copy to team1/team2, stack them, and re-add the suffix:
def myinfo(data):
    c = data.columns.str.split("_").str[-1]
    f = lambda x: pd.DataFrame.set_axis(x, ["team1", "team2"], axis=1)
    l = [pd.concat([*map(f, (v, v.iloc[:, ::-1]))]).add_suffix(f"_{k}")
         for k, v in data.groupby(c, axis=1)]
    return pd.concat(l, axis=1).sort_index()
print(myinfo(df))
team1_x team2_x
0 7 28
0 28 7
1 11 10
1 10 11
2 11 20
2 20 11
3 12 15
3 15 12
4 12 16
4 16 12

Keep only the first value on duplicated column (set 0 to others)

Supposing I have the following situation:
A dataframe where the first column ['ID'] will eventually have duplicated values.
import pandas as pd
df = pd.DataFrame({"ID": [1,2,3,4,4,5,5,5,6,6],
"l_1": [10,12,32,45,45,20,20,20,20,20],
"l_2": [11,12,32,11,21,27,38,12,9,6],
"l_3": [5,9,32,12,21,21,18,12,8,1],
"l_4": [6,21,12,77,77,2,2,2,8,8]})
ID l_1 l_2 l_3 l_4
1 10 11 5 6
2 12 12 9 21
3 32 32 32 12
4 45 11 12 77
4 45 21 21 77
5 20 27 21 2
5 20 38 18 2
5 20 12 12 2
6 20 9 8 8
6 20 6 1 8
When duplicated IDs occur:
I need to keep only the first values for columns l_1 and l_4 (the other duplicated rows must be set to zero).
Columns 'l_2' and 'l_3' must stay the same.
When IDs are duplicated, the values in columns l_1 and l_4 are also duplicated on those rows.
Expected output:
ID l_1 l_2 l_3 l_4
1 10 11 5 6
2 12 12 9 21
3 32 32 32 12
4 45 11 12 77
4 0 21 21 0
5 20 27 21 2
5 0 38 18 0
5 0 12 12 0
6 20 9 8 8
6 0 6 1 0
Is there a straightforward way to accomplish this using pandas or numpy?
I could accomplish it with all these steps:
x1 = df[df.duplicated(subset=['ID'], keep=False)].copy()
x1.loc[x1.groupby('ID')['l_1'].apply(lambda x: (x.shift(1) == x)), 'l_1'] = 0
x1.loc[x1.groupby('ID')['l_4'].apply(lambda x: (x.shift(1) == x)), 'l_4'] = 0
df = df.drop_duplicates(subset=['ID'], keep=False)
df = pd.concat([df, x1])
Isn't this just:
df.loc[df.duplicated('ID'), ['l_1','l_4']] = 0
Output:
ID l_1 l_2 l_3 l_4
0 1 10 11 5 6
1 2 12 12 9 21
2 3 32 32 32 12
3 4 45 11 12 77
4 4 0 21 21 0
5 5 20 27 21 2
6 5 0 38 18 0
7 5 0 12 12 0
8 6 20 9 8 8
9 6 0 6 1 0
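This works because duplicated() with its default keep='first' flags every occurrence after the first one per value; a minimal illustration:
import pandas as pd
ids = pd.Series([1, 2, 3, 4, 4, 5, 5, 5, 6, 6])
print(ids.duplicated().tolist())
# [False, False, False, False, True, False, True, True, False, True]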

Replace the outlier values in multiple columns based on different conditions using pandas?

I want to find the outliers in multiple columns at once and replace each outlier value with some other value based on two conditions.
sample dataset:
day phone_calls received
1 11 11
2 12 12
3 10 0
4 13 12
5 170 2
6 9 9
7 67 1
8 180 150
9 8 1
10 10 10
Find the outlier range; let's say the range is 8-50. Then replace the values: if a column value is less than 8, replace it with 8, and if it is greater than 50, replace it with 50.
Please help, I am new to pandas.
I think you need set_index with clip:
df = df.set_index('day').clip(8,50)
print (df)
phone_calls received
day
1 11 11
2 12 12
3 10 8
4 13 12
5 50 8
6 9 9
7 50 8
8 50 50
9 8 8
10 10 10
Or, similarly, use iloc to select all columns except the first:
df.iloc[:, 1:] = df.iloc[:, 1:].clip(8,50)
print (df)
day phone_calls received
0 1 11 11
1 2 12 12
2 3 10 8
3 4 13 12
4 5 50 8
5 6 9 9
6 7 50 8
7 8 50 50
8 9 8 8
9 10 10 10
EDIT: You can specify the columns in a list:
cols = ['phone_calls','received']
df[cols] = df[cols].clip(8,50)
print (df)
day phone_calls received
0 1 11 11
1 2 12 12
2 3 10 8
3 4 13 12
4 5 50 8
5 6 9 9
6 7 50 8
7 8 50 50
8 9 8 8
9 10 10 10
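If the 8-50 range should come from the data rather than be hard-coded, one option (an assumption on my part, not part of the original answer) is the usual 1.5*IQR rule:
cols = ['phone_calls', 'received']
q1 = df[cols].quantile(0.25)
q3 = df[cols].quantile(0.75)
iqr = q3 - q1
# clip each column to its own IQR-based bounds; axis=1 aligns the
# lower/upper Series with the column labels
df[cols] = df[cols].clip(q1 - 1.5 * iqr, q3 + 1.5 * iqr, axis=1)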

Update Query in SQL with numeric pattern in MS Access

Good Day All,
I need assistance in creating an update query that groups my data.
The data in my table is spatial in nature and can be thought of as a matrix that is 10 columns by 5 rows. I have the ObjectID, Row and Column, but I want the column DesiredResult, which is a 2x2 grouping of the rows and columns.
So the (R,C) pairs (1,1), (1,2), (2,1) and (2,2) will have a DesiredResult of 1, while (1,3), (1,4), (2,3) and (2,4) will have a DesiredResult of 2, and so on (see below for an example).
I was able to create the R and C columns using a combination of Quotient & Mod, so I assume I would do something similar, but I am stuck. How would I go about this query in MS Access?
ObjectID R C DesiredResult
1 1 1 1
2 1 2 1
3 1 3 2
4 1 4 2
5 1 5 3
6 1 6 3
7 1 7 4
8 1 8 4
9 1 9 5
10 1 10 5
11 2 1 1
12 2 2 1
13 2 3 2
14 2 4 2
15 2 5 3
16 2 6 3
17 2 7 4
18 2 8 4
19 2 9 5
20 2 10 5
21 3 1 6
22 3 2 6
23 3 3 7
24 3 4 7
25 3 5 8
26 3 6 8
27 3 7 9
28 3 8 9
29 3 9 10
30 3 10 10
31 4 1 6
32 4 2 6
33 4 3 7
34 4 4 7
35 4 5 8
36 4 6 8
37 4 7 9
38 4 8 9
39 4 9 10
40 4 10 10
41 5 1 11
42 5 2 11
43 5 3 12
44 5 4 12
45 5 5 13
46 5 6 13
47 5 7 14
48 5 8 14
49 5 9 15
50 5 10 15
Something like ... ?
SELECT a.Row, a.Col, Col\2 AS D1, Col Mod 2 AS D2, [D1]+[D2] AS Desired
FROM table AS a
ORDER BY a.Row, a.Col;
Remou had a close approximation but it turns out this gives me what I need. I needed both a row and a column index.
SELECT ObjectID, R, C,
Int(([C]-1)/2) AS ColIndex,
Int(([R]-1)/2) AS RowIndex,
[RowIndex]*5+[ColIndex]+1 AS DesiredResult
FROM Testing
ORDER BY ObjectID
The key to the query is the number 2 in both the ColIndex and RowIndex expressions (the grouping size), and the number 5 in DesiredResult, which is the number of 2-wide column groups per row (10 columns divided by 2). For example, ObjectID 23 (R=3, C=3) gives RowIndex = 1 and ColIndex = 1, so DesiredResult = 1*5 + 1 + 1 = 7, matching the table above.
Thanks!
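As a quick cross-check outside Access, the accepted formula can be reproduced in Python, where integer division // plays the role of Int(.../2):
import pandas as pd
# rebuild the 5x10 grid and apply DesiredResult = RowIndex*5 + ColIndex + 1
grid = pd.DataFrame([(r, c) for r in range(1, 6) for c in range(1, 11)], columns=["R", "C"])
grid["DesiredResult"] = (grid["R"] - 1) // 2 * 5 + (grid["C"] - 1) // 2 + 1
print(grid.head(12))  # matches the DesiredResult column in the table above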