Group dataframes by percentile level - pandas

I have a dataframe:
('U', 'OLHC', '+') counts: 127
Date Open High Low Close Sign Struct Trend OH HL LC OL LH HC
1997-06-17 00:00:00+00:00 812.97 897.60 811.80 894.42 + OLHC U 84.63 85.80 82.62 1.17 85.80 3.18
1998-03-08 00:00:00+00:00 957.59 1055.69 954.24 1055.69 + OLHC U 98.10 101.45 101.45 3.35 101.45 0.00
1998-10-14 00:00:00+00:00 957.28 1066.11 923.32 1005.53 + OLHC U 108.83 142.79 82.21 33.96 142.79 60.58
1998-11-27 00:00:00+00:00 1005.53 1192.97 1000.12 1192.33 + OLHC U 187.44 192.85 192.21 5.41 192.85 0.64
1999-01-10 00:00:00+00:00 1192.33 1278.24 1136.89 1275.09 + OLHC U 85.91 141.35 138.20 55.44 141.35 3.15
1999-04-08 00:00:00+00:00 1271.18 1344.08 1216.03 1343.98 + OLHC U 72.90 128.05 127.95 55.15 128.05 0.10
1999-11-14 00:00:00+00:00 1282.81 1396.12 1233.70 1396.06 + OLHC U 113.31 162.42 162.36 49.11 162.42 0.06
2001-04-25 00:00:00+00:00 1182.91 1253.76 1081.19 1228.75 + OLHC U 70.85 172.57 147.56 101.72 172.57 25.01
2001-12-01 00:00:00+00:00 1066.98 1163.38 1052.83 1137.88 + OLHC U 96.40 110.55 85.05 14.15 110.55 25.50
2003-03-30 00:00:00+00:00 836.25 895.78 788.90 863.50 + OLHC U 59.53 106.88 74.60 47.35 106.88 32.28
2003-05-13 00:00:00+00:00 863.50 947.51 843.68 942.30 + OLHC U 84.01 103.83 98.62 19.82 103.83 5.21
2003-09-22 00:00:00+00:00 977.59 1040.18 974.21 1022.82 + OLHC U 62.59 65.97 48.61 3.38 65.97 17.36
2003-11-05 00:00:00+00:00 1022.82 1061.44 990.34 1051.81 + OLHC U 38.62 71.10 61.47 32.48 71.10 9.63
2003-12-19 00:00:00+00:00 1053.14 1091.03 1031.24 1088.67 + OLHC U 37.89 59.79 57.43 21.90 59.79 2.36
2004-12-05 00:00:00+00:00 1095.74 1197.11 1090.23 1191.17 + OLHC U 101.37 106.88 100.94 5.51 106.88 5.94
2005-01-18 00:00:00+00:00 1190.84 1217.90 1173.76 1195.98 + OLHC U 27.06 44.14 22.22 17.08 44.14 21.92
2005-05-30 00:00:00+00:00 1142.40 1199.56 1136.22 1198.78 + OLHC U 57.16 63.34 62.56 6.18 63.34 0.78
2006-02-18 00:00:00+00:00 1274.61 1294.90 1253.61 1287.24 + OLHC U 20.29 41.29 33.63 21.00 41.29 7.66
2006-04-03 00:00:00+00:00 1287.14 1310.88 1268.42 1297.81 + OLHC U 23.74 42.46 29.39 18.72 42.46 13.07
2006-09-26 00:00:00+00:00 1267.60 1336.60 1266.67 1336.34 + OLHC U 69.00 69.93 69.67 0.93 69.93 0.26
2006-11-09 00:00:00+00:00 1335.37 1389.45 1327.10 1378.33 + OLHC U 54.08 62.35 51.23 8.27 62.35 11.12
2006-12-23 00:00:00+00:00 1378.35 1431.81 1375.60 1410.76 + OLHC U 53.46 56.21 35.16 2.75 56.21 21.05
2007-09-13 00:00:00+00:00 1455.27 1503.41 1370.60 1483.95 + OLHC U 48.14 132.81 113.35 84.67 132.81 19.46
2008-04-20 00:00:00+00:00 1293.37 1395.90 1256.98 1390.33 + OLHC U 102.53 138.92 133.35 36.39 138.92 5.57
2009-04-07 00:00:00+00:00 770.05 845.61 666.79 815.55 + OLHC U 75.56 178.82 148.76 103.26 178.82 30.06
2009-05-21 00:00:00+00:00 815.55 930.17 814.84 888.33 + OLHC U 114.62 115.33 73.49 0.71 115.33 41.84
2009-07-04 00:00:00+00:00 888.33 956.23 881.46 896.42 + OLHC U 67.90 74.77 14.96 6.87 74.77 59.81
2009-08-17 00:00:00+00:00 896.42 1018.00 869.32 979.73 + OLHC U 121.58 148.68 110.41 27.10 148.68 38.27
2009-09-30 00:00:00+00:00 979.73 1080.15 979.73 1057.08 + OLHC U 100.42 100.42 77.35 0.00 100.42 23.07
2009-11-13 00:00:00+00:00 1057.08 1105.37 1020.18 1093.48 + OLHC U 48.29 85.19 73.30 36.90 85.19 11.89
2010-10-31 00:00:00+00:00 1126.57 1196.14 1122.79 1183.26 + OLHC U 69.57 73.35 60.47 3.78 73.35 12.88
2010-12-14 00:00:00+00:00 1185.71 1246.73 1173.00 1241.59 + OLHC U 61.02 73.73 68.59 12.71 73.73 5.14
2011-01-27 00:00:00+00:00 1241.58 1301.29 1232.85 1299.54 + OLHC U 59.71 68.44 66.69 8.73 68.44 1.75
2011-03-12 00:00:00+00:00 1299.63 1344.07 1275.10 1304.28 + OLHC U 44.44 68.97 29.18 24.53 68.97 39.79
2011-12-01 00:00:00+00:00 1223.46 1292.66 1158.66 1244.58 + OLHC U 69.20 134.00 85.92 64.80 134.00 48.08
2012-01-14 00:00:00+00:00 1246.03 1296.82 1202.37 1289.09 + OLHC U 50.79 94.45 86.72 43.66 94.45 7.73
2012-02-27 00:00:00+00:00 1290.22 1371.94 1290.22 1367.59 + OLHC U 81.72 81.72 77.37 0.00 81.72 4.35
2012-07-08 00:00:00+00:00 1318.90 1374.81 1266.74 1354.68 + OLHC U 55.91 108.07 87.94 52.16 108.07 20.13
2012-08-21 00:00:00+00:00 1354.66 1426.68 1325.41 1413.17 + OLHC U 72.02 101.27 87.76 29.25 101.27 13.51
2012-12-31 00:00:00+00:00 1366.42 1448.00 1366.42 1426.19 + OLHC U 81.58 81.58 59.77 0.00 81.58 21.81
2013-02-13 00:00:00+00:00 1426.19 1524.69 1426.19 1520.33 + OLHC U 98.50 98.50 94.14 0.00 98.50 4.36
Then I select only the events above a given percentile value:
col_name = 'HC'
group_name = col_name + '_lev'
level_value = .95
hc_group = df[df[col_name] > df[col_name].quantile(level_value)]
hc_group.loc[:, group_name] = col_name
I get the result:
Date Open High Low Close Sign Struct Trend OH HL LC OL LH HC HC_lev
1998-10-14 00:00:00+00:00 957.28 1066.11 923.32 1005.53 + OLHC U 108.83 142.79 82.21 33.96 142.79 60.58 HC
2009-05-21 00:00:00+00:00 815.55 930.17 814.84 888.33 + OLHC U 114.62 115.33 73.49 0.71 115.33 41.84 HC
2009-07-04 00:00:00+00:00 888.33 956.23 881.46 896.42 + OLHC U 67.90 74.77 14.96 6.87 74.77 59.81 HC
2009-08-17 00:00:00+00:00 896.42 1018.00 869.32 979.73 + OLHC U 121.58 148.68 110.41 27.10 148.68 38.27 HC
2011-03-12 00:00:00+00:00 1299.63 1344.07 1275.10 1304.28 + OLHC U 44.44 68.97 29.18 24.53 68.97 39.79 HC
2011-12-01 00:00:00+00:00 1223.46 1292.66 1158.66 1244.58 + OLHC U 69.20 134.00 85.92 64.80 134.00 48.08 HC
2016-06-29 00:00:00+00:00 2065.04 2120.55 1991.68 2070.77 + OLHC U 55.51 128.87 79.09 73.36 128.87 49.78 HC
This code works fine.
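(Side note, not in the original post: assigning via .loc on a filtered slice can raise pandas' SettingWithCopyWarning; taking an explicit copy sidesteps it, a small tweak to the snippet above.)
hc_group = df[df[col_name] > df[col_name].quantile(level_value)].copy()
hc_group[group_name] = col_name  # plain assignment is safe on an owned copy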
I would like to do the same for each column in the list ['OH', 'HL', 'LC', 'OL', 'LH', 'HC'] and
return a groupby object, grouped by these columns.
In other words, I need one big object consisting of, say, oh_group, hl_group, ..., hc_group.
Could you advise me how to deal with this?

Finally, I found a solution. Maybe it will be useful for someone else.
names = ['OH', 'HL', 'LC', 'OL', 'LH', 'HC']
percentiles = [.75, .90, .95, .98]
total_df = pd.DataFrame()  # accumulator for all filtered rows
for col_name in names:
    for perc in percentiles:
        k = df[df[col_name] > df[col_name].quantile(perc)]
        k.loc[:, 'Level'] = str(perc)
        total_df = pd.concat([total_df, k], sort=False)
    print(col_name + ' events:')
    print('----------')
    print_groups(total_df.groupby('Level'))
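An alternative sketch that builds one big object directly (my variant, not the original solution; it assumes the same df, and print_groups is the asker's own helper):
import pandas as pd

names = ['OH', 'HL', 'LC', 'OL', 'LH', 'HC']
percentiles = [.75, .90, .95, .98]

# One filtered frame per (column, percentile) pair; the dict keys become
# extra index levels after the concat.
pieces = {(col, perc): df[df[col] > df[col].quantile(perc)]
          for col in names for perc in percentiles}
total_df = pd.concat(pieces)
total_df.index = total_df.index.set_names(['Column', 'Level'], level=[0, 1])

# A single groupby object over every column/level combination:
for (col, level), sub in total_df.groupby(level=['Column', 'Level']):
    print(f'{col} events above the {level} quantile: {len(sub)}')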

Related

Pandas - Add a new calculated column to a MultiIndex column dataframe

I have a Dataframe with the following structure:
import numpy as np
import pandas as pd

np.random.seed(1)
mi = pd.MultiIndex.from_product([[3, 5], ["X", "Y", "V", "T"]], names=["Node", "Parameter"])
df = pd.DataFrame(index=pd.DatetimeIndex(['2022-07-07 12:00:00', '2022-07-07 13:00:00',
                                          '2022-07-07 14:00:00', '2022-07-07 15:00:00',
                                          '2022-07-07 16:00:00'],
                                         dtype='datetime64[ns]', name='Date', freq=None),
                  columns=mi, data=np.random.rand(5, 8))
print(df)
Node 3 5
Parameter X Y V T X Y V T
Date
2022-07-07 12:00:00 0.417022 0.720324 0.000114 0.302333 0.146756 0.092339 0.186260 0.345561
2022-07-07 13:00:00 0.396767 0.538817 0.419195 0.685220 0.204452 0.878117 0.027388 0.670468
2022-07-07 14:00:00 0.417305 0.558690 0.140387 0.198101 0.800745 0.968262 0.313424 0.692323
2022-07-07 15:00:00 0.876389 0.894607 0.085044 0.039055 0.169830 0.878143 0.098347 0.421108
2022-07-07 16:00:00 0.957890 0.533165 0.691877 0.315516 0.686501 0.834626 0.018288 0.750144
I would like to add a new calculated column "Z" for each Node, based on the value "X" ** 2 + "Y" ** 2.
The following achieves the desired result:
x = df.loc[:,(slice(None),"X")]
y = df.loc[:,(slice(None),"Y")]
z = (x**2).rename(columns={"X":"Z"}) + (y ** 2).rename(columns={"Y":"Z"})
result = df.join(z).sort_index(axis=1)
Is there a more straightforward way to achieve this?
For example, using df.xs to select the desired column data, e.g. df.xs("X", axis=1, level=1) ** 2 + df.xs("Y", axis=1, level=1) ** 2, how can I then assign the result to the original dataframe?
You can use groupby.apply:
(df.groupby(level='Node', axis=1)
   .apply(lambda g: g.droplevel('Node', axis=1).eval('Z = X**2 + Y**2'))
)
Or, with xs and drop_level=False on one of the values:
(df.join((df.xs('X', axis=1, level=1, drop_level=False)**2
          + df.xs('Y', axis=1, level=1)**2
          ).rename(columns={'X': 'Z'}, level=1))
   .sort_index(axis=1, level=0, sort_remaining=False)
)
Output:
Node 3 5
Parameter X Y V T Z X Y V T Z
Date
2022-07-07 12:00:00 0.417022 0.720324 0.000114 0.302333 0.692775 0.146756 0.092339 0.186260 0.345561 0.030064
2022-07-07 13:00:00 0.396767 0.538817 0.419195 0.685220 0.447748 0.204452 0.878117 0.027388 0.670468 0.812891
2022-07-07 14:00:00 0.417305 0.558690 0.140387 0.198101 0.486278 0.800745 0.968262 0.313424 0.692323 1.578722
2022-07-07 15:00:00 0.876389 0.894607 0.085044 0.039055 1.568379 0.169830 0.878143 0.098347 0.421108 0.799977
2022-07-07 16:00:00 0.957890 0.533165 0.691877 0.315516 1.201818 0.686501 0.834626 0.018288 0.750144 1.167884
One option is with DataFrame.xs:
out = df.xs('X', axis=1, level=1).pow(2).add(df.xs('Y', axis=1, level=1).pow(2))
out.columns = [out.columns, np.repeat(['Z'], 2)]
pd.concat([df, out], axis=1).sort_index(axis=1)
Node 3 5
Parameter T V X Y Z T V X Y Z
Date
2022-07-07 12:00:00 0.302333 0.000114 0.417022 0.720324 0.692775 0.345561 0.186260 0.146756 0.092339 0.030064
2022-07-07 13:00:00 0.685220 0.419195 0.396767 0.538817 0.447748 0.670468 0.027388 0.204452 0.878117 0.812891
2022-07-07 14:00:00 0.198101 0.140387 0.417305 0.558690 0.486278 0.692323 0.313424 0.800745 0.968262 1.578722
2022-07-07 15:00:00 0.039055 0.085044 0.876389 0.894607 1.568379 0.421108 0.098347 0.169830 0.878143 0.799977
2022-07-07 16:00:00 0.315516 0.691877 0.957890 0.533165 1.201818 0.750144 0.018288 0.686501 0.834626 1.167884
Another option is to select the columns, run pow across all of them in one go, then group and concatenate:
out = (df
       .loc(axis=1)[:, ['X', 'Y']]
       .pow(2)
       .groupby(level='Node', axis=1)
       .agg(np.add.reduce, axis=1))
out.columns = [out.columns, np.repeat(['Z'], 2)]
pd.concat([df, out], axis=1).sort_index(axis=1)
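For completeness, a stack/assign/unstack sketch also gets there (my variant, not from the answers above; it assumes the same df):
result = (df.stack('Node')                    # move Node into the row index
            .assign(Z=lambda d: d['X']**2 + d['Y']**2)
            .unstack('Node')                  # columns come back as (Parameter, Node)
            .swaplevel(axis=1)                # restore (Node, Parameter) order
            .sort_index(axis=1))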

Extract a time and space variable from a moving ship from the ERA5 reanalysis

I want to extract the measured wind at a station aboard a moving ship. I have the latitude, longitude and time values of the ship, and the wind value for each time step in space. I can extract a fixed point in space for all time steps, but I would like to extract, for example, the wind at time step x for a given longitude and latitude as the ship moves. How can I do this with the code below?
data = xr.open_dataset('C:/Users/William Jacondino/Desktop/Dados/ERA5\\ERA5_2017.nc', decode_times=False)
dir_out = 'C:/Users/William Jacondino/Desktop/MovingShip'
if not os.path.exists(dir_out):
    os.makedirs(dir_out)
print("\nReading the observation station names:\n")
stations = pd.read_csv(r"C:/Users/William Jacondino/Desktop/MovingShip/Date-TIME.csv", index_col=0, sep='\;')
print(stations)
Reading the observation station names:
Latitude Longitude
Date-Time
16/11/2017 00:00 0.219547 -38.247914
16/11/2017 06:00 0.861717 -38.188858
16/11/2017 12:00 1.529534 -38.131039
16/11/2017 18:00 2.243760 -38.067467
17/11/2017 00:00 2.961202 -38.009050
... ... ...
10/12/2017 00:00 -5.775127 -35.206581
10/12/2017 06:00 -5.775120 -35.206598
10/12/2017 12:00 -5.775119 -35.206583
10/12/2017 18:00 -5.775122 -35.206584
11/12/2017 00:00 -5.775115 -35.206590
# time variable and its unit
times = data.variables['time'][:]
unit = data.time.units
# latitude (lat) and longitude (lon) variables
lon = data.variables['longitude'][:]
lat = data.variables['latitude'][:]
# 2-metre temperature in Celsius
temp = data.variables['t2m'][:]-273.15
# 2-metre dew point temperature in Celsius
tempdw = data.variables['d2m'][:]-273.15
# sea surface temperature (sst) in Celsius
sst = data.variables['sst'][:]-273.15
# surface sensible heat flux (sshf)
sshf = data.variables['sshf'][:]
unitsshf = data.sshf.units
# surface latent heat flux
slhf = data.variables['slhf'][:]
unitslhf = data.slhf.units
# mean sea level pressure
msl = data.variables['msl'][:]/100
unitmsl = data.msl.units
# total precipitation in mm/h
tp = data.variables['tp'][:]*1000
# zonal wind component at 100 metres
uten100 = data.variables['u100'][:]
unitu100 = data.u100.units
# meridional wind component at 100 metres
vten100 = data.variables['v100'][:]
unitv100 = data.v100.units
# zonal wind component at 10 metres
uten = data.variables['u10'][:]
unitu = data.u10.units
# meridional wind component at 10 metres
vten = data.variables['v10'][:]
unitv = data.v10.units
# computing the 10-metre wind speed
ws = (uten**2 + vten**2)**0.5
# computing the 100-metre wind speed
ws100 = (uten100**2 + vten100**2)**0.5
# computing the angles of U and V to obtain the 10-metre wind direction
wdir = (180 + (np.degrees(np.arctan2(uten, vten)))) % 360
# computing the angles of U and V to obtain the 100-metre wind direction
wdir100 = (180 + (np.degrees(np.arctan2(uten100, vten100)))) % 360
for key, value in stations.iterrows():
    #print(key, value[0], value[1], value[2])
    station = value[0]
    file_name = "{}{}".format(station + '_1991', ".csv")
    #print(file_name)
    lon_point = value[1]
    lat_point = value[2]
    ########################################
    # Finding the latitude and longitude grid point closest to the station
    # Squared difference of lat and lon
    sq_diff_lat = (lat - lat_point)**2
    sq_diff_lon = (lon - lon_point)**2
    # Identifying the index of the minimum value for lat and lon
    min_index_lat = sq_diff_lat.argmin()
    min_index_lon = sq_diff_lon.argmin()
    print("Generating time series for station {}".format(station))
    ref_date = datetime.datetime(int(unit[12:16]), int(unit[17:19]), int(unit[20:22]))
    date_range = list()
    temp_data = list()
    tempdw_data = list()
    sst_data = list()
    sshf_data = list()
    slhf_data = list()
    msl_data = list()
    tp_data = list()
    uten100_data = list()
    vten100_data = list()
    uten_data = list()
    vten_data = list()
    ws_data = list()
    ws100_data = list()
    wdir_data = list()
    wdir100_data = list()
    for index, time in enumerate(times):
        date_time = ref_date + datetime.timedelta(hours=int(time))
        date_range.append(date_time)
        temp_data.append(temp[index, min_index_lat, min_index_lon].values)
        tempdw_data.append(tempdw[index, min_index_lat, min_index_lon].values)
        sst_data.append(sst[index, min_index_lat, min_index_lon].values)
        sshf_data.append(sshf[index, min_index_lat, min_index_lon].values)
        slhf_data.append(slhf[index, min_index_lat, min_index_lon].values)
        msl_data.append(msl[index, min_index_lat, min_index_lon].values)
        tp_data.append(tp[index, min_index_lat, min_index_lon].values)
        uten100_data.append(uten100[index, min_index_lat, min_index_lon].values)
        vten100_data.append(vten100[index, min_index_lat, min_index_lon].values)
        uten_data.append(uten[index, min_index_lat, min_index_lon].values)
        vten_data.append(vten[index, min_index_lat, min_index_lon].values)
        ws_data.append(ws[index, min_index_lat, min_index_lon].values)
        ws100_data.append(ws100[index, min_index_lat, min_index_lon].values)
        wdir_data.append(wdir[index, min_index_lat, min_index_lon].values)
        wdir100_data.append(wdir100[index, min_index_lat, min_index_lon].values)
    ################################################################################
    #print(date_range)
    df = pd.DataFrame(date_range, columns=["Date-Time"])
    df["Date-Time"] = date_range
    df = df.set_index(["Date-Time"])
    df["WS10 ({})".format(unitu)] = ws_data
    df["WDIR10 ({})".format(units.deg)] = wdir_data
    df["WS100 ({})".format(unitu)] = ws100_data
    df["WDIR100 ({})".format(units.deg)] = wdir100_data
    df["Chuva({})".format(units.mm)] = tp_data
    df["MSLP ({})".format(units.hPa)] = msl_data
    df["T2M ({})".format(units.degC)] = temp_data
    df["Td2M ({})".format(units.degC)] = tempdw_data
    df["Surface Sensible Heat Flux ({})".format(unitsshf)] = sshf_data
    df["Surface latent heat flux ({})".format(unitslhf)] = slhf_data
    df["U10 ({})".format(unitu)] = uten_data
    df["V10 ({})".format(unitv)] = vten_data
    df["U100 ({})".format(unitu100)] = uten100_data
    df["V100 ({})".format(unitv100)] = vten100_data
    df["TSM ({})".format(units.degC)] = sst_data
    print("The following time series is being saved as .csv files")
    df.to_csv(os.path.join(dir_out, file_name), sep=';', encoding="utf-8", index=True)
print("\n!!Successfully saved all the Time Series to the output directory!!\n{}".format(dir_out))
My code extracts a fixed variable at a given point in space, but I would like to extract along the ship's track: for example, at time 11/12/2017 00:00, latitude -5.775115 and longitude -35.206590 I have one wind value, and at the next time step, at another latitude and longitude, I have another value. How can I adapt my code for this?
This is another perfect use case for xarray's advanced indexing! I feel like this part of the user guide needs a cape and a theme song :)
I'll use a made up dataset and set of stations which are similar (I think) to yours. First step is to reset your Date-Time index, so you can use it in pulling the nearest time value from the xarray.Dataset, since you want a common index for the time, lat, and lon:
In [14]: stations = stations.reset_index(drop=False)
...: stations
Out[14]:
Date-Time Latitude Longitude
0 2017-11-16 00:00:00 0.219547 -38.247914
1 2017-11-16 06:00:00 0.861717 -38.188858
2 2017-11-16 12:00:00 1.529534 -38.131039
3 2017-11-16 18:00:00 2.243760 -38.067467
4 2017-11-17 00:00:00 2.961202 -38.009050
5 2017-12-10 00:00:00 -5.775127 -35.206581
6 2017-12-10 06:00:00 -5.775120 -35.206598
7 2017-12-10 12:00:00 -5.775119 -35.206583
8 2017-12-10 18:00:00 -5.775122 -35.206584
9 2017-12-11 00:00:00 -5.775115 -35.206590
In [15]: ds
Out[15]:
<xarray.Dataset>
Dimensions: (lat: 40, lon: 40, time: 241)
Coordinates:
* lat (lat) float64 -9.75 -9.25 -8.75 -8.25 -7.75 ... 8.25 8.75 9.25 9.75
* lon (lon) float64 -44.75 -44.25 -43.75 -43.25 ... -26.25 -25.75 -25.25
* time (time) datetime64[ns] 2017-11-01 2017-11-01T06:00:00 ... 2017-12-31
Data variables:
temp (lat, lon, time) float64 0.07366 0.3448 0.2456 ... 0.3081 0.4236
tempdw (lat, lon, time) float64 0.07366 0.3448 0.2456 ... 0.3081 0.4236
sst (lat, lon, time) float64 0.07366 0.3448 0.2456 ... 0.3081 0.4236
ws (lat, lon, time) float64 0.07366 0.3448 0.2456 ... 0.3081 0.4236
ws100 (lat, lon, time) float64 0.07366 0.3448 0.2456 ... 0.3081 0.4236
wdir (lat, lon, time) float64 0.07366 0.3448 0.2456 ... 0.3081 0.4236
wdir100 (lat, lon, time) float64 0.07366 0.3448 0.2456 ... 0.3081 0.4236
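(For reference, a dataset with this shape can be mocked up as below; the construction is my guess, not the answerer's original code.)
import numpy as np
import pandas as pd
import xarray as xr

rng = np.random.default_rng(0)
coords = {
    "lat": np.arange(-9.75, 10, 0.5),                              # 40 points
    "lon": np.arange(-44.75, -25, 0.5),                            # 40 points
    "time": pd.date_range("2017-11-01", "2017-12-31", freq="6H"),  # 241 steps
}
shape = tuple(len(v) for v in coords.values())
ds = xr.Dataset(
    {name: (("lat", "lon", "time"), rng.random(shape))
     for name in ["temp", "tempdw", "sst", "ws", "ws100", "wdir", "wdir100"]},
    coords=coords,
)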
Using the advanced indexing rules, if we select from the dataset using DataArrays as indexers, the result will be reshaped to match the indexer. What this means is that we can take your stations dataframe, which has the values time, lat, and lon, and pull the nearest indices from the xarray dataset:
In [16]: ds_over_observations = ds.sel(
...: time=stations["Date-Time"].to_xarray(),
...: lat=stations["Latitude"].to_xarray(),
...: lon=stations["Longitude"].to_xarray(),
...: method="nearest",
...: )
Now, our data has the same index as your dataframe!
In [17]: ds_over_observations
Out[17]:
<xarray.Dataset>
Dimensions: (index: 10)
Coordinates:
lat (index) float64 0.25 0.75 1.75 2.25 ... -5.75 -5.75 -5.75 -5.75
lon (index) float64 -38.25 -38.25 -38.25 ... -35.25 -35.25 -35.25
time (index) datetime64[ns] 2017-11-16 ... 2017-12-11
* index (index) int64 0 1 2 3 4 5 6 7 8 9
Data variables:
temp (index) float64 0.1887 0.222 0.6754 0.919 ... 0.1134 0.9231 0.6095
tempdw (index) float64 0.1887 0.222 0.6754 0.919 ... 0.1134 0.9231 0.6095
sst (index) float64 0.1887 0.222 0.6754 0.919 ... 0.1134 0.9231 0.6095
ws (index) float64 0.1887 0.222 0.6754 0.919 ... 0.1134 0.9231 0.6095
ws100 (index) float64 0.1887 0.222 0.6754 0.919 ... 0.1134 0.9231 0.6095
wdir (index) float64 0.1887 0.222 0.6754 0.919 ... 0.1134 0.9231 0.6095
wdir100 (index) float64 0.1887 0.222 0.6754 0.919 ... 0.1134 0.9231 0.6095
You can dump this into pandas with .to_dataframe:
In [18]: df = ds_over_observations.to_dataframe()
In [19]: df
Out[19]:
lat lon time temp tempdw sst ws ws100 wdir wdir100
index
0 0.25 -38.25 2017-11-16 00:00:00 0.188724 0.188724 0.188724 0.188724 0.188724 0.188724 0.188724
1 0.75 -38.25 2017-11-16 06:00:00 0.222025 0.222025 0.222025 0.222025 0.222025 0.222025 0.222025
2 1.75 -38.25 2017-11-16 12:00:00 0.675417 0.675417 0.675417 0.675417 0.675417 0.675417 0.675417
3 2.25 -38.25 2017-11-16 18:00:00 0.919019 0.919019 0.919019 0.919019 0.919019 0.919019 0.919019
4 2.75 -38.25 2017-11-17 00:00:00 0.566266 0.566266 0.566266 0.566266 0.566266 0.566266 0.566266
5 -5.75 -35.25 2017-12-10 00:00:00 0.652490 0.652490 0.652490 0.652490 0.652490 0.652490 0.652490
6 -5.75 -35.25 2017-12-10 06:00:00 0.429541 0.429541 0.429541 0.429541 0.429541 0.429541 0.429541
7 -5.75 -35.25 2017-12-10 12:00:00 0.113352 0.113352 0.113352 0.113352 0.113352 0.113352 0.113352
8 -5.75 -35.25 2017-12-10 18:00:00 0.923058 0.923058 0.923058 0.923058 0.923058 0.923058 0.923058
9 -5.75 -35.25 2017-12-11 00:00:00 0.609493 0.609493 0.609493 0.609493 0.609493 0.609493 0.609493
The index in the result is the same one as the stations data. If you'd like, you could merge in the original values using pd.concat([stations, df], axis=1).set_index("Date-Time") to get your original index back, alongside all the weather data:
In [20]: pd.concat([stations, df], axis=1).set_index("Date-Time")
Out[20]:
Latitude Longitude lat lon time temp tempdw sst ws ws100 wdir wdir100
Date-Time
2017-11-16 00:00:00 0.219547 -38.247914 0.25 -38.25 2017-11-16 00:00:00 0.188724 0.188724 0.188724 0.188724 0.188724 0.188724 0.188724
2017-11-16 06:00:00 0.861717 -38.188858 0.75 -38.25 2017-11-16 06:00:00 0.222025 0.222025 0.222025 0.222025 0.222025 0.222025 0.222025
2017-11-16 12:00:00 1.529534 -38.131039 1.75 -38.25 2017-11-16 12:00:00 0.675417 0.675417 0.675417 0.675417 0.675417 0.675417 0.675417
2017-11-16 18:00:00 2.243760 -38.067467 2.25 -38.25 2017-11-16 18:00:00 0.919019 0.919019 0.919019 0.919019 0.919019 0.919019 0.919019
2017-11-17 00:00:00 2.961202 -38.009050 2.75 -38.25 2017-11-17 00:00:00 0.566266 0.566266 0.566266 0.566266 0.566266 0.566266 0.566266
2017-12-10 00:00:00 -5.775127 -35.206581 -5.75 -35.25 2017-12-10 00:00:00 0.652490 0.652490 0.652490 0.652490 0.652490 0.652490 0.652490
2017-12-10 06:00:00 -5.775120 -35.206598 -5.75 -35.25 2017-12-10 06:00:00 0.429541 0.429541 0.429541 0.429541 0.429541 0.429541 0.429541
2017-12-10 12:00:00 -5.775119 -35.206583 -5.75 -35.25 2017-12-10 12:00:00 0.113352 0.113352 0.113352 0.113352 0.113352 0.113352 0.113352
2017-12-10 18:00:00 -5.775122 -35.206584 -5.75 -35.25 2017-12-10 18:00:00 0.923058 0.923058 0.923058 0.923058 0.923058 0.923058 0.923058
2017-12-11 00:00:00 -5.775115 -35.206590 -5.75 -35.25 2017-12-11 00:00:00 0.609493 0.609493 0.609493 0.609493 0.609493 0.609493 0.609493
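Adapted to the question's variable names, the whole per-station loop can collapse to something like this (a sketch under assumptions: times are decoded rather than opened with decode_times=False, the Date-Time strings parse as day-first dates, and the file path is shortened):
import numpy as np
import pandas as pd
import xarray as xr

data = xr.open_dataset('ERA5_2017.nc')  # decoded times assumed
track = stations.reset_index()
track["Date-Time"] = pd.to_datetime(track["Date-Time"], dayfirst=True)

# Nearest grid cell for each (time, lat, lon) triple along the ship track
ship = data.sel(
    time=track["Date-Time"].to_xarray(),
    latitude=track["Latitude"].to_xarray(),
    longitude=track["Longitude"].to_xarray(),
    method="nearest",
)

out = ship.to_dataframe()
# Derived 10 m wind speed and direction, following the question's formulas
out["WS10"] = np.hypot(out["u10"], out["v10"])
out["WDIR10"] = (180 + np.degrees(np.arctan2(out["u10"], out["v10"]))) % 360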

Plot secondary x_axis in ggplot

Dear all seniors and members,
Hope you are doing great. I have a data set for which I would like to plot a secondary x-axis in ggplot. I could not make it work for the last 4 hours. Below is my dataset.
Pathway ES NES p_value q_value Group
1 HALLMARK_HYPOXIA 0.49 2.25 0.000 0.000 Top
2 HALLMARK_EPITHELIAL_MESENCHYMAL_TRANSITION 0.44 2.00 0.000 0.000 Top
3 HALLMARK_UV_RESPONSE_DN 0.45 1.98 0.000 0.000 Top
4 HALLMARK_TGF_BETA_SIGNALING 0.48 1.77 0.003 0.004 Top
5 HALLMARK_HEDGEHOG_SIGNALING 0.52 1.76 0.003 0.003 Top
6 HALLMARK_ESTROGEN_RESPONSE_EARLY 0.38 1.73 0.000 0.004 Top
7 HALLMARK_KRAS_SIGNALING_DN 0.37 1.69 0.000 0.005 Top
8 HALLMARK_INTERFERON_ALPHA_RESPONSE 0.37 1.54 0.009 0.021 Top
9 HALLMARK_TNFA_SIGNALING_VIA_NFKB 0.32 1.45 0.005 0.048 Top
10 HALLMARK_NOTCH_SIGNALING 0.42 1.42 0.070 0.059 Top
11 HALLMARK_COAGULATION 0.32 1.39 0.031 0.067 Top
12 HALLMARK_MITOTIC_SPINDLE 0.30 1.37 0.025 0.078 Top
13 HALLMARK_ANGIOGENESIS 0.40 1.37 0.088 0.074 Top
14 HALLMARK_WNT_BETA_CATENIN_SIGNALING 0.35 1.23 0.173 0.216 Top
15 HALLMARK_OXIDATIVE_PHOSPHORYLATION -0.65 -3.43 0.000 0.000 Bottom
16 HALLMARK_MYC_TARGETS_V1 -0.49 -2.56 0.000 0.000 Bottom
17 HALLMARK_E2F_TARGETS -0.45 -2.37 0.000 0.000 Bottom
18 HALLMARK_DNA_REPAIR -0.46 -2.33 0.000 0.000 Bottom
19 HALLMARK_ADIPOGENESIS -0.42 -2.26 0.000 0.000 Bottom
20 HALLMARK_FATTY_ACID_METABOLISM -0.41 -2.06 0.000 0.000 Bottom
21 HALLMARK_PEROXISOME -0.43 -2.01 0.000 0.000 Bottom
22 HALLMARK_MYC_TARGETS_V2 -0.43 -1.84 0.003 0.001 Bottom
23 HALLMARK_CHOLESTEROL_HOMEOSTASIS -0.42 -1.83 0.003 0.001 Bottom
24 HALLMARK_ALLOGRAFT_REJECTION -0.34 -1.78 0.000 0.003 Bottom
25 HALLMARK_MTORC1_SIGNALING -0.32 -1.67 0.000 0.004 Bottom
26 HALLMARK_P53_PATHWAY -0.29 -1.52 0.000 0.015 Bottom
27 HALLMARK_UV_RESPONSE_UP -0.28 -1.41 0.013 0.036 Bottom
28 HALLMARK_REACTIVE_OXYGEN_SPECIES_PATHWAY -0.35 -1.39 0.057 0.040 Bottom
29 HALLMARK_HEME_METABOLISM -0.26 -1.34 0.014 0.061 Bottom
30 HALLMARK_G2M_CHECKPOINT -0.23 -1.20 0.080 0.172 Bottom
I would like to plot something like the following (plot #1).
Here is my current code chunk:
ggplot(data, aes(reorder(Pathway, NES), NES, fill = Group)) +
  theme_classic() +
  geom_col() +
  theme(axis.text.x = element_text(angle = 90, vjust = 0.5, hjust = 1, size = 8),
        axis.title = element_text(face = "bold", size = 12),
        axis.text = element_text(face = "bold", size = 8),
        plot.title = element_text(hjust = 0.5)) +
  labs(x = "Pathway", y = "Normalized Enrichment Score", title = "2Gy_5f vs. 0Gy") +
  coord_flip()
This code produces the following plot (plot #2).
So I would like to generate the plot with a secondary axis showing q_value (as in the first bar plot I attached). Any help is greatly appreciated. Note: I used coord_flip, so the x-axis is rotated.
Kind Regards,
synat
[1]: https://i.stack.imgur.com/dBFIS.jpg
[2]: https://i.stack.imgur.com/yDbC5.jpg
Maybe you don't need a secondary axis per se to get the plot style you seek.
library(tidyverse)
ggplot(data, aes(x = NES, y = reorder(Pathway, NES), fill = Group)) +
  theme_classic() +
  geom_col() +
  geom_text(aes(x = 2.5, y = reorder(Pathway, NES), label = q_value), hjust = 0) +
  annotate("text", x = 2.5, y = length(data$Pathway) + 1, hjust = 0,
           fontface = "bold", label = "q_value") +
  coord_cartesian(xlim = c(NA, 3),
                  ylim = c(NA, length(data$Pathway) + 1),
                  clip = "off") +
  theme(axis.text.x = element_text(angle = 90, vjust = 0.5, hjust = 1, size = 8),
        axis.title = element_text(face = "bold", size = 12),
        axis.text = element_text(face = "bold", size = 8),
        plot.title = element_text(hjust = 0.5)) +
  labs(x = "Pathway", y = "Normalized Enrichment Score",
       title = "2Gy_5f vs. 0Gy")
And for future reference you can read in data in the format you pasted like so:
data <- read_table(
"
Pathway ES NES p_value q_value Group
HALLMARK_HYPOXIA 0.49 2.25 0.000 0.000 Top
HALLMARK_EPITHELIAL_MESENCHYMAL_TRANSITION 0.44 2.00 0.000 0.000 Top
HALLMARK_UV_RESPONSE_DN 0.45 1.98 0.000 0.000 Top
HALLMARK_TGF_BETA_SIGNALING 0.48 1.77 0.003 0.004 Top
HALLMARK_HEDGEHOG_SIGNALING 0.52 1.76 0.003 0.003 Top
HALLMARK_ESTROGEN_RESPONSE_EARLY 0.38 1.73 0.000 0.004 Top
HALLMARK_KRAS_SIGNALING_DN 0.37 1.69 0.000 0.005 Top
HALLMARK_INTERFERON_ALPHA_RESPONSE 0.37 1.54 0.009 0.021 Top
HALLMARK_TNFA_SIGNALING_VIA_NFKB 0.32 1.45 0.005 0.048 Top
HALLMARK_NOTCH_SIGNALING 0.42 1.42 0.070 0.059 Top
HALLMARK_COAGULATION 0.32 1.39 0.031 0.067 Top
HALLMARK_MITOTIC_SPINDLE 0.30 1.37 0.025 0.078 Top
HALLMARK_ANGIOGENESIS 0.40 1.37 0.088 0.074 Top
HALLMARK_WNT_BETA_CATENIN_SIGNALING 0.35 1.23 0.173 0.216 Top
HALLMARK_OXIDATIVE_PHOSPHORYLATION -0.65 -3.43 0.000 0.000 Bottom
HALLMARK_MYC_TARGETS_V1 -0.49 -2.56 0.000 0.000 Bottom
HALLMARK_E2F_TARGETS -0.45 -2.37 0.000 0.000 Bottom
HALLMARK_DNA_REPAIR -0.46 -2.33 0.000 0.000 Bottom
HALLMARK_ADIPOGENESIS -0.42 -2.26 0.000 0.000 Bottom
HALLMARK_FATTY_ACID_METABOLISM -0.41 -2.06 0.000 0.000 Bottom
HALLMARK_PEROXISOME -0.43 -2.01 0.000 0.000 Bottom
HALLMARK_MYC_TARGETS_V2 -0.43 -1.84 0.003 0.001 Bottom
HALLMARK_CHOLESTEROL_HOMEOSTASIS -0.42 -1.83 0.003 0.001 Bottom
HALLMARK_ALLOGRAFT_REJECTION -0.34 -1.78 0.000 0.003 Bottom
HALLMARK_MTORC1_SIGNALING -0.32 -1.67 0.000 0.004 Bottom
HALLMARK_P53_PATHWAY -0.29 -1.52 0.000 0.015 Bottom
HALLMARK_UV_RESPONSE_UP -0.28 -1.41 0.013 0.036 Bottom
HALLMARK_REACTIVE_OXYGEN_SPECIES_PATHWAY -0.35 -1.39 0.057 0.040 Bottom
HALLMARK_HEME_METABOLISM -0.26 -1.34 0.014 0.061 Bottom
HALLMARK_G2M_CHECKPOINT -0.23 -1.20 0.080 0.172 Bottom")
Created on 2021-11-23 by the reprex package (v2.0.1)

create multi-indexed dataframe

I do not know how to create a multi-indexed df (one with an unequal number of second-level indices). Here is a sample:
data = [{'caterpillar': [('Сatérpillar',
{'fuzz': 0.82,
'levenshtein': 0.98,
'jaro_winkler': 0.9192,
'hamming': 0.98}),
('caterpiⅼⅼaʀ',
{'fuzz': 0.73,
'levenshtein': 0.97,
'jaro_winkler': 0.9114,
'hamming': 0.97}),
('cÂteԻpillÂr',
{'fuzz': 0.73,
'levenshtein': 0.97,
'jaro_winkler': 0.881,
'hamming': 0.97})]},
{'elementis': [('elEmENtis',
{'fuzz': 1.0, 'levenshtein': 1.0, 'jaro_winkler': 1.0, 'hamming': 1.0}),
('ÊlemĚntis',
{'fuzz': 0.78,
'levenshtein': 0.98,
'jaro_winkler': 0.863,
'hamming': 0.98}),
('еlÈmÈntis',
{'fuzz': 0.67,
'levenshtein': 0.97,
'jaro_winkler': 0.8333,
'hamming': 0.97})]},
{'gibson': [('giBᏚon',
{'fuzz': 0.83,
'levenshtein': 0.99,
'jaro_winkler': 0.9319,
'hamming': 0.99}),
('ɡibsoN',
{'fuzz': 0.83,
'levenshtein': 0.99,
'jaro_winkler': 0.9206,
'hamming': 0.99}),
('giЬႽon',
{'fuzz': 0.67,
'levenshtein': 0.98,
'jaro_winkler': 0.84,
'hamming': 0.98}),
('glbsՕn',
{'fuzz': 0.67,
'levenshtein': 0.98,
'jaro_winkler': 0.8333,
'hamming': 0.98})]}]
I want a df like this (note: 'Other Name' has a differing number of values for each 'Orig Name'):
Orig Name| Other Name| fuzz| levenstein| Jaro-Winkler| Hamming
------------------------------------------------------------------------
caterpillar Сatérpillar 0.82 0.98. 0.9192 0.98
caterpiⅼⅼaʀ 0.73 0.97 0.9114 0.97
cÂteԻpillÂr 0.73 0.97 0.881 0.97
gibson giBᏚon 0.83. 0.99 0.9319 0.99
ɡibsoN 0.83 0.99. 0.9206 0.99
giЬႽon 0.67. 0.98 0.84 0.98
glbsՕn 0.67. 0.98. 0.8333 0.98
elementis .........
--------------------------------------------------------------------------
I tried:
orig_name_list = [x for d in data for x, v in d.items()]
value_list = [v for d in data for x, v in d.items()]
other_names = [tup[0] for tup_list in value_list for tup in tup_list]
algos = ['fuzz', 'levenshtein', 'jaro_winkler', 'hamming']
Not sure how to proceed from there. Suggestions are appreciated.
Let's try concat:
pd.concat([pd.DataFrame([x[1]]).assign(OrigName=k, OtherName=x[0])
           for df in data for k, d in df.items() for x in d])
Output:
fuzz levenshtein jaro_winkler hamming OrigName OtherName
0 0.82 0.98 0.9192 0.98 caterpillar Сatérpillar
0 0.73 0.97 0.9114 0.97 caterpillar caterpiⅼⅼaʀ
0 0.73 0.97 0.8810 0.97 caterpillar cÂteԻpillÂr
0 1.00 1.00 1.0000 1.00 elementis elEmENtis
0 0.78 0.98 0.8630 0.98 elementis ÊlemĚntis
0 0.67 0.97 0.8333 0.97 elementis еlÈmÈntis
0 0.83 0.99 0.9319 0.99 gibson giBᏚon
0 0.83 0.99 0.9206 0.99 gibson ɡibsoN
0 0.67 0.98 0.8400 0.98 gibson giЬႽon
0 0.67 0.98 0.8333 0.98 gibson glbsՕn
One way to do this is to reformat your data for JSON record consumption via the pd.json_normalize function. Your JSON is currently not shaped for easy loading into a dataframe:
new_data = []
for entry in data:
    new_entry = {}
    for name, matches in entry.items():
        new_entry["name"] = name
        new_entry["matches"] = []
        for match in matches:
            match[1]["match"] = match[0]
            new_entry["matches"].append(match[1])
    new_data.append(new_entry)

df = pd.json_normalize(new_data, "matches", ["name"]).set_index(["name", "match"])
print(df)
fuzz levenshtein jaro_winkler hamming
name match
caterpillar Сatérpillar 0.82 0.98 0.9192 0.98
caterpiⅼⅼaʀ 0.73 0.97 0.9114 0.97
cÂteԻpillÂr 0.73 0.97 0.8810 0.97
elementis elEmENtis 1.00 1.00 1.0000 1.00
ÊlemĚntis 0.78 0.98 0.8630 0.98
еlÈmÈntis 0.67 0.97 0.8333 0.97
gibson giBᏚon 0.83 0.99 0.9319 0.99
ɡibsoN 0.83 0.99 0.9206 0.99
giЬႽon 0.67 0.98 0.8400 0.98
glbsՕn 0.67 0.98 0.8333 0.98
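A third sketch (mine, not from the answers above) skips the JSON reshaping and builds the MultiIndex directly from (orig, other) tuples:
import pandas as pd

# Flatten to {(orig_name, other_name): metrics_dict}
flat = {(orig, other): metrics
        for entry in data
        for orig, pairs in entry.items()
        for other, metrics in pairs}
idx = pd.MultiIndex.from_tuples(flat.keys(), names=["Orig Name", "Other Name"])
df = pd.DataFrame(list(flat.values()), index=idx)
print(df)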

SQL query is not working (Error in rsqlite_send_query)

This is what the head of my data frame looks like:
> head(d19_1)
SMZ SIZ1_diff SIZ1_base SIZ2_diff SIZ2_base SIZ3_diff SIZ3_base SIZ4_diff SIZ4_base SIZ5_diff SIZ5_base
1 1 -620 4170 -189 1347 -35 2040 82 1437 244 1533
2 2 -219 831 -57 255 -4 392 8 282 14 297
3 3 -426 834 -162 294 -134 379 -81 241 -22 221
4 4 -481 676 -142 216 -114 267 -50 158 -43 166
5 5 -233 1711 -109 584 54 913 71 624 74 707
6 6 -322 1539 -79 512 -50 799 23 532 63 576
Total_og Total_base %_SIZ1 %_SIZ2 %_SIZ3 %_SIZ4 %_SIZ5 Total_og Total_base
1 11980 12648 14.86811 14.03118 1.715686 5.706333 15.916504 11980 12648
2 2156 2415 26.35379 22.35294 1.020408 2.836879 4.713805 2156 2415
3 1367 2314 51.07914 55.10204 35.356201 33.609959 9.954751 1367 2314
4 790 1736 71.15385 65.74074 42.696629 31.645570 25.903614 790 1736
5 5339 5496 13.61777 18.66438 5.914567 11.378205 10.466761 5339 5496
6 4362 4747 20.92268 15.42969 6.257822 4.323308 10.937500 4362 4747
The datatypes of the data frame, per str(d19_1), are as below:
> str(d19_1)
'data.frame': 1588 obs. of 20 variables:
$ SMZ : int 1 2 3 4 5 6 7 8 9 10 ...
$ SIZ1_diff : int -620 -219 -426 -481 -233 -322 -176 -112 -34 -103 ...
$ SIZ1_base : int 4170 831 834 676 1711 1539 720 1396 998 1392 ...
$ SIZ2_diff : int -189 -57 -162 -142 -109 -79 -12 72 -36 -33 ...
$ SIZ2_base : int 1347 255 294 216 584 512 196 437 343 479 ...
$ SIZ3_diff : int -35 -4 -134 -114 54 -50 16 4 26 83 ...
$ SIZ3_base : int 2040 392 379 267 913 799 361 804 566 725 ...
$ SIZ4_diff : int 82 8 -81 -50 71 23 36 127 46 75 ...
$ SIZ4_base : int 1437 282 241 158 624 532 242 471 363 509 ...
$ SIZ5_diff : int 244 14 -22 -43 74 63 11 143 79 125 ...
$ SIZ5_base : int 1533 297 221 166 707 576 263 582 429 536 ...
$ Total_og : int 11980 2156 1367 790 5339 4362 2027 4715 3465 4561 ...
$ Total_base: int 12648 2415 2314 1736 5496 4747 2168 4464 3278 4375 ...
$ %_SIZ1 : num 14.9 26.4 51.1 71.2 13.6 ...
$ %_SIZ2 : num 14 22.4 55.1 65.7 18.7 ...
$ %_SIZ3 : num 1.72 1.02 35.36 42.7 5.91 ...
$ %_SIZ4 : num 5.71 2.84 33.61 31.65 11.38 ...
$ %_SIZ5 : num 15.92 4.71 9.95 25.9 10.47 ...
$ Total_og : int 11980 2156 1367 790 5339 4362 2027 4715 3465 4561 ...
$ Total_base: int 12648 2415 2314 1736 5496 4747 2168 4464 3278 4375 ...
When I run the query below, it returns the following error and I don't know why. I don't have any such column in my table.
Query
d20_1 <- sqldf('SELECT *, CASE
WHEN SMZ BETWEEN 1 AND 110 THEN "Baltimore City"
WHEN SMZ BETWEEN 111 AND 217 THEN "Anne Arundel County"
WHEN SMZ BETWEEN 218 AND 405 THEN "Baltimore County"
WHEN SMZ BETWEEN 406 AND 453 THEN "Carroll County"
WHEN SMZ BETWEEN 454 AND 524 THEN "Harford County"
WHEN SMZ BETWEEN 1667 AND 1674 THEN "York County"
ELSE 0
END Jurisdiction
FROM d19_1')
Error:
Error in rsqlite_send_query(conn@ptr, statement) :
  table d19_1 has no column named <NA>
Your code works correctly for me:
d19_1 <- structure(list(SMZ = 1:6, SIZ1_diff = c(-620L, -219L, -426L,
-481L, -233L, -322L), SIZ1_base = c(4170L, 831L, 834L, 676L,
1711L, 1539L), SIZ2_diff = c(-189L, -57L, -162L, -142L, -109L,
-79L), SIZ2_base = c(1347L, 255L, 294L, 216L, 584L, 512L), SIZ3_diff = c(-35L,
-4L, -134L, -114L, 54L, -50L), SIZ3_base = c(2040L, 392L, 379L,
267L, 913L, 799L), SIZ4_diff = c(82L, 8L, -81L, -50L, 71L, 23L
), SIZ4_base = c(1437L, 282L, 241L, 158L, 624L, 532L), SIZ5_diff = c(244L,
14L, -22L, -43L, 74L, 63L), SIZ5_base = c(1533L, 297L, 221L,
166L, 707L, 576L), Total_og = c(11980L, 2156L, 1367L, 790L, 5339L,
4362L), Total_base = c(12648L, 2415L, 2314L, 1736L, 5496L, 4747L
), X._SIZ1 = c(14.86811, 26.35379, 51.07914, 71.15385, 13.61777,
20.92268), X._SIZ2 = c(14.03118, 22.35294, 55.10204, 65.74074,
18.66438, 15.42969), X._SIZ3 = c(1.715686, 1.020408, 35.356201,
42.696629, 5.914567, 6.257822), X._SIZ4 = c(5.706333, 2.836879,
33.609959, 31.64557, 11.378205, 4.323308), X._SIZ5 = c(15.916504,
4.713805, 9.954751, 25.903614, 10.466761, 10.9375), Total_og.1 = c(11980L,
2156L, 1367L, 790L, 5339L, 4362L), Total_base.1 = c(12648L, 2415L,
2314L, 1736L, 5496L, 4747L)), .Names = c("SMZ", "SIZ1_diff",
"SIZ1_base", "SIZ2_diff", "SIZ2_base", "SIZ3_diff", "SIZ3_base",
"SIZ4_diff", "SIZ4_base", "SIZ5_diff", "SIZ5_base", "Total_og",
"Total_base", "X._SIZ1", "X._SIZ2", "X._SIZ3", "X._SIZ4", "X._SIZ5",
"Total_og.1", "Total_base.1"), row.names = c(NA, -6L), class = "data.frame")
library(sqldf)
sqldf('SELECT *, CASE
WHEN SMZ BETWEEN 1 AND 110 THEN "Baltimore City"
WHEN SMZ BETWEEN 111 AND 217 THEN "Anne Arundel County"
WHEN SMZ BETWEEN 218 AND 405 THEN "Baltimore County"
WHEN SMZ BETWEEN 406 AND 453 THEN "Carroll County"
WHEN SMZ BETWEEN 454 AND 524 THEN "Harford County"
WHEN SMZ BETWEEN 1667 AND 1674 THEN "York County"
ELSE 0
END Jurisdiction
FROM d19_1')