Chi-squared test: how to?

I'm not sure whether what I'm doing is correct: I want to perform a chi-squared test on my dataset, but I have doubts about the result.
This is my dataset:
> dput(chi)
structure(list(`Age.(days)` = c("< 7 days", "7-10 days", "10-12 days",
"12-15 days", "15-20 days", "20-25 days"), Broods = c(6, 9, 10,
6, 14, 5), N.Carnus = c(92, 74, 48, 17, 37, 10)), row.names = c(NA,
6L), class = "data.frame")
And this is the test:
chi$"Age.(days)" <- as.character(chi$"Age.(days)")
chisq.test(table(chi$"Age.(days)", chi$"N.Carnus"))
What I want to find out is whether there is a significant association between the age of the host and the number of parasites (coming from a certain number of broods).
Thank you :)
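A note on the code above: table(chi$"Age.(days)", chi$"N.Carnus") cross-tabulates the six age labels against the six distinct parasite counts, which yields a 6x6 incidence table of zeros and ones rather than the comparison described. A minimal sketch of one plausible alternative (an assumption about the intent, not a confirmed answer): treat N.Carnus as observed parasite counts per age class and run a chi-squared goodness-of-fit test against expected proportions, here taken proportional to the number of broods sampled in each class:
# Sketch: are parasites spread across age classes in proportion to
# sampling effort (number of broods)? Assumes that is the question of interest.
chisq.test(x = chi$N.Carnus, p = chi$Broods / sum(chi$Broods))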

Related

How to convert summary.plm object to LaTeX code output?

Reproducible code in R is below:
library(plm)
data2 <- structure(list(year = c(1990, 1991, 1992, 1993, 1990, 1991, 1992,
1993, 1990, 1991, 1992, 1993, 1990, 1991, 1992, 1993), id = c(1,
1, 1, 1, 2, 2, 2, 2, 3, 3, 3, 3, 4, 4, 4, 4), value = c(10, 5,
9, 2, 3, 7, -1, -2, -3, -4, 10, 200, 5, 6, 1, 7), outcome = c(400,
3000, 2000, 1000, 700, 600, 500, 400, 1350, 20000, 5, 17, 3,
2, 5, 7)), class = "data.frame", row.names = c(NA, -16L))
data3 <- pdata.frame(data2, index = c("id", "year"))
model1 <- plm(outcome ~ value, data = data3, effect = "twoways")
sum_mod1 <- summary(model1, vcov = function(x) vcovHC(x, method = "arellano"))
What I want to do: I basically want to use stargazer, kable, or texreg on sum_mod1 to turn R's output into LaTeX. The reason I can't just use model1 for the LaTeX export is that sum_mod1 uses the specific standard errors I need. However, I can't run stargazer or texreg on sum_mod1 directly because it is a summary.plm object, which apparently neither package accepts. I was wondering if anyone has a workaround, because it's pretty critical that I get these standard errors into my output!
Essentially, I just need to extract the output in sum_mod1 in such a way that the LaTeX code R spits out has the same format as stargazer(model1), but with the standard errors/significance values/etc. from sum_mod1. I just don't know where to start, so if anyone has any ideas, I'd really appreciate it!
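One common workaround (a sketch, not tested against your exact package versions): build the table from model1 but hand stargazer the robust standard errors yourself via its se argument; texreg has the analogous override.se and override.pvalues arguments.
library(stargazer)
# Arellano robust standard errors, matching those used in sum_mod1
rob_se <- sqrt(diag(vcovHC(model1, method = "arellano")))
# The table is built from model1, but printed with the robust SEs
stargazer(model1, se = list(rob_se), type = "latex")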

Creating multiple columns in pandas with lambda function

I'm trying to create a set of new columns with growth rates within my df in a more efficient way than computing them manually one by one.
My df has over 100 variables but, for simplicity, assume the following:
import pandas as pd

consumption = [5, 10, 15, 20, 25, 30, 35, 40]
wage = [10, 20, 30, 40, 50, 60, 70, 80]
period = [1, 2, 3, 4, 5, 6, 7, 8]
id = [1, 1, 1, 1, 1, 1, 1, 1]
tup = list(zip(id, period, wage, consumption))
df = pd.DataFrame(tup, columns=['id', 'period', 'wage', 'consumption'])
With two variables I could simply do this:
df['wage_chg'] = df.sort_values(by=['id', 'period']).groupby(['id'])['wage'].apply(lambda x: (x / x.shift(4) - 1)).fillna(0)
df['consumption_chg'] = df.sort_values(by=['id', 'period']).groupby(['id'])['consumption'].apply(lambda x: (x / x.shift(4) - 1)).fillna(0)
But maybe with a for loop or something I could iterate over my column names, creating new growth-rate columns named columnname_chg as in the example above.
Any ideas?
Thanks
You can apply the operation to the whole DataFrame rather than one Series at a time in groupby.apply:
cols = ['wage', 'consumption']  # or any list of columns to transform
out = df.join(df.sort_values(by=['id', 'period'])
                .groupby(['id'])[cols]
                .apply(lambda g: (g / g.shift(4) - 1))
                .fillna(0)
                .add_suffix('_chg'))
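An equivalent sketch (assumes a reasonably recent pandas): x / x.shift(4) - 1 is exactly a 4-period percentage change, so GroupBy.pct_change can replace the lambda:
cols = ['wage', 'consumption']
chg = (df.sort_values(['id', 'period'])
         .groupby('id')[cols]
         .pct_change(periods=4)   # same as x / x.shift(4) - 1 within each id
         .fillna(0)
         .add_suffix('_chg'))
out = df.join(chg)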

Converting from Spell Format to STS when each individual has multiple, separate spells

I am trying to convert data of this form to STS format in order to perform sequence analysis:
| Person ID | Spell | Start Month | End Month | Status (Economic Activity) |
|-----------|-------|-------------|-----------|----------------------------|
| 1         | 1     | 300         | 320       | 4                          |
| 1         | 2     | 320         | 360       | 4                          |
| 2         | 1     | 330         | 360       | 4                          |
| 3         | 1     | 270         | 360       | 7                          |
| 4         | 1     | 280         | 312       | 4                          |
| 4         | 2     | 312         | 325       | 4                          |
| 4         | 3     | 325         | 360       | 6                          |
Does anyone know how I can deal with the issue of multiple spells per person and somehow combine each spell for a given individual?
You should have a look at TraMineR's excellent documentation. In particular, the user's guide is very helpful. There you will find a section on the seqformat function, which is exactly what you are looking for.
library(TraMineR)

## Create spell data
data <- as.data.frame(
  matrix(c(1, 1, 300, 320, 4,
           1, 2, 320, 360, 4,
           2, 1, 330, 360, 4,
           3, 1, 270, 360, 7,
           4, 1, 280, 312, 4,
           4, 2, 312, 325, 4,
           4, 3, 325, 360, 6),
         ncol = 5, byrow = TRUE)
)
names(data) <- c("id", "spell", "start", "end", "status")

## Convert from SPELL to STS format with TraMineR::seqformat
data.sts <-
  seqformat(data, from = "SPELL", to = "STS",
            id = "id", begin = "start", end = "end", status = "status",
            process = FALSE)
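A quick check of the result (a sketch, assuming default settings): in STS format each row holds one person's status per time unit, and for the actual sequence analysis you would typically wrap the result in a state sequence object with seqdef:
head(data.sts)               # one row per person, one column per month
seq.obj <- seqdef(data.sts)  # state sequence object for TraMineR analyses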

Inserting new fields (columns) into MongoDB with pandas

I have existing data in MongoDB where the primary key is set on 'date', with a few fields in it.
I want to insert a new pandas dataframe with new fields (columns) into the existing data in MongoDB, joining on the 'date' field, which exists in both dataframes.
For example, let's say this is dataframe A that I have in MongoDB (I set the index to the 'date' field when reading the data from MongoDB).
And this is the new dataframe B I want to insert into MongoDB.
And this is the final dataframe C, with the new fields ('std_50_3000window', 'std_50_300window', 'std_50_500window' added on the 'date' index), which I want to end up with in MongoDB.
Is there any way to do this? (Maybe with the insert_many method?)
The method you need is update_one() with upsert=True, called in a loop. You can't use insert_many() for two reasons: first, you're not always inserting, sometimes you're updating; second, update_many() (and insert_many()) only work with a single filter, while in your case each update needs a different filter, as each relates to a different time.
This is a generic solution that will combine dataframes (df_a and df_b in this case; you can have as many as you like) in the manner that you need. It uses iterrows to get each row of the dataframe, filters on the date, and sets the values to those in the row. The $set operator overwrites values that are already present and sets them otherwise; upsert=True performs an insert if there's no match on the date.
for df in [df_a, df_b]:
    for _, row in df.iterrows():
        db.mycollection.update_one({'date': row.get('date')},
                                   {'$set': row.to_dict()},
                                   upsert=True)
Full worked example:
from pymongo import MongoClient
from pprint import pprint
import datetime
import pandas as pd
# Sample data setup
db = MongoClient()['mydatabase']
data_a = [[datetime.datetime(2017, 5, 19, 21, 20), 96, 8, 98],
          [datetime.datetime(2017, 5, 19, 21, 21), 95, 8, 97],
          [datetime.datetime(2017, 5, 19, 21, 22), 95, 8, 97]]
df_a = pd.DataFrame(data_a, columns=['date', 'std_500_1000window', 'std_50_100window', 'std_50_2000window'])
data_b = [[datetime.datetime(2017, 5, 19, 21, 20), 98, 9, 10],
          [datetime.datetime(2017, 5, 19, 21, 21), 98, 9, 10],
          [datetime.datetime(2017, 5, 19, 21, 22), 98, 9, 10]]
df_b = pd.DataFrame(data_b, columns=['date', 'std_50_3000window', 'std_50_300window', 'std_50_500window'])

# Perform the upserts
for df in [df_a, df_b]:
    for _, row in df.iterrows():
        db.mycollection.update_one({'date': row.get('date')},
                                   {'$set': row.to_dict()},
                                   upsert=True)

# Print the results
for record in db.mycollection.find():
    pprint(record)
Result:
{'_id': ObjectId('5f0ae909df5531ac655ce528'),
'date': datetime.datetime(2017, 5, 19, 21, 20),
'std_500_1000window': 96,
'std_50_100window': 8,
'std_50_2000window': 98,
'std_50_3000window': 98,
'std_50_300window': 9,
'std_50_500window': 10}
{'_id': ObjectId('5f0ae909df5531ac655ce52a'),
'date': datetime.datetime(2017, 5, 19, 21, 21),
'std_500_1000window': 95,
'std_50_100window': 8,
'std_50_2000window': 97,
'std_50_3000window': 98,
'std_50_300window': 9,
'std_50_500window': 10}
{'_id': ObjectId('5f0ae909df5531ac655ce52c'),
'date': datetime.datetime(2017, 5, 19, 21, 22),
'std_500_1000window': 95,
'std_50_100window': 8,
'std_50_2000window': 97,
'std_50_3000window': 98,
'std_50_300window': 9,
'std_50_500window': 10}
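A design note: with large dataframes, one round trip per row gets slow. A sketch of the same upsert logic batched into a single request per dataframe with bulk_write (assumes pymongo 3 or later):
from pymongo import UpdateOne

for df in [df_a, df_b]:
    ops = [UpdateOne({'date': row.get('date')},
                     {'$set': row.to_dict()},
                     upsert=True)
           for _, row in df.iterrows()]
    db.mycollection.bulk_write(ops)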

Extracting the indices of outliers in Linear Regression

The following script computes the R-squared value between two numpy arrays (x and y).
The R-squared value is very low due to outliers in the data. How can I extract the indices of those outliers?
import numpy as np, matplotlib.pyplot as plt, scipy.stats as stats

x = np.random.randint(1, 51, 50)  # random_integers(1, 50, 50) is deprecated
y = np.random.randint(1, 51, 50)
r2 = stats.linregress(x, y)[3] ** 2  # note: linregress()[3] is the p-value; the correlation r is index [2]
print(r2)
plt.scatter(x, y)
plt.show()
An outlier is defined as: abs(value - mean) > 2 * standard deviation.
You can do this with the line
[i for i in range(len(x)) if (abs(x[i] - np.mean(x)) > 2*np.std(x))]
What it does: it constructs a list of the indices of x whose elements satisfy the condition described above.
A quick test:
x = np.random.random_integers(1,50,50)
this gives me the array:
array([16, 6, 13, 18, 21, 37, 31, 8, 1, 48, 4, 40, 9, 14, 6, 45, 20,
15, 14, 32, 30, 8, 19, 8, 34, 22, 49, 5, 22, 23, 39, 29, 37, 24,
45, 47, 21, 5, 4, 27, 48, 2, 22, 8, 12, 8, 49, 12, 15, 18])
Now I add some outliers manually as there are none initially:
x[4] = 200
x[15] = 178
Let's test:
[i for i in range(len(x)) if (abs(x[i] - np.mean(x)) > 2*np.std(x))]
result:
[4, 15]
Is this what you were looking for?
EDIT:
I added the abs() function to the line above, because when you are working with negative numbers this can end badly. The abs() function takes the absolute value.
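The same check in vectorized form (a sketch; x as defined above):
outlier_idx = np.where(np.abs(x - x.mean()) > 2 * x.std())[0]
print(outlier_idx)  # array([ 4, 15]) for the x above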
I think Sander's approach is the correct one, but if you must see the R2 without those outliers before making a decision, here is a way to do it.
Setup data and introduce outlier:
In [1]:
import numpy as np, scipy.stats as stats

np.random.seed(123)
x = np.random.randint(1, 51, 50)  # equivalent to the deprecated random_integers(1, 50, 50)
y = np.random.randint(1, 51, 50)
y[5] = 100
Calculate R2 taking out one y value at a time (along with matching x value):
# Each column of the identity matrix flags one index; a.argmax() recovers it,
# so each call refits the regression with that one (x, y) pair deleted.
m = np.eye(y.shape[0])
r2 = np.apply_along_axis(
    lambda a: stats.linregress(np.delete(x, a.argmax()),
                               np.delete(y, a.argmax()))[3] ** 2,
    0, m)
Get index of the biggest outlier:
r2.argmax()
Out[1]:
5
Get R2 when this outlier is taken out:
In [2]:
r2[r2.argmax()]
Out[2]:
0.85892084723588935
Get the value of the outlier:
In [3]:
y[r2.argmax()]
Out[3]:
100
To get top n outliers:
In [4]:
n = 5
sorted_index = r2.argsort()[::-1]
sorted_index[:n]
Out[4]:
array([ 5, 27, 34, 0, 17], dtype=int64)