Need correct x&y labels in for loop - matplotlib

I'm trying to create 21 scatter plots with data I have. These 21 plots have different combinations of data, and I have succeeded at creating the right plots. However, I cannot for the life of me figure out how to correctly label the plots. Here is my code:
F225W = np.loadtxt('path/phot_F225W.dat',usecols=[0], unpack=True)
F275W = np.loadtxt('path/phot_F275W.dat',usecols=[0], unpack=True)
... I did this for all filters
Filters = [F225W, F275W, F336W, F438W, F606W, F814W, F850L]
for i in range(len(Filters)):
    for j in range(len(Filters)):
        B = Filters[i]
        R = Filters[j]
        BR = B - R
        if j <= i:
            pass
        else:
            plt.figure()
            plt.gca().invert_yaxis()
            plt.xlim(-6, 6)
            plt.ylim(-4, -15)
            plt.xlabel(str(Filters[i]) + '-' + str(Filters[j]))
            plt.ylabel(str(Filters[j]))
            plt.plot(BR, R, 'k.', markersize=1)
            plt.show()
The code is supposed to iterate through the different combinations of filters and plot B-R vs. R, but instead of just labeling them B-R and R, I need it to show the filters that were used in creating the plot. At the moment it creates the correct plots, but the labels don't show up.

To expand on the comment: does this work as a solution? The loop will pause until you close each figure that pops up on each iteration (if you keep the plt.show()). Alternatively, you can save each figure and look at them separately, as shown in the code below:
Filters = [F225W,F275W,F336W,F438W,F606W,F814W,F850L]
Filter_names = ['F225W','F275W','F336W','F438W','F606W','F814W','F850L']
for i in range(len(Filters)):
    for j in range(len(Filters)):
        B = Filters[i]
        BB = Filter_names[i]
        R = Filters[j]
        RR = Filter_names[j]
        BR = B - R
        if j <= i:
            pass
        else:
            plt.figure()
            plt.gca().invert_yaxis()
            plt.xlim(-6, 6)
            plt.ylim(-4, -15)
            plt.xlabel(Filter_names[i] + '-' + Filter_names[j])
            plt.ylabel(Filter_names[j])
            plt.title('B filter:' + BB + '\tR Filter:' + RR)
            plt.plot(BR, R, 'k.', markersize=1)
            os.chdir(path_you_want_to_save_to)
            plt.savefig('B_' + BB + '_R_' + RR + '.png')
            # uncomment line to see graph and pause loop.
            # also note the indentation has changed
            # plt.show()
            plt.close()
After looking again, I'm guessing the Filters are arrays of some sort? So you need another list, Filter_names, with the strings representing their names. I think that fixes your problem, as you were trying to label the plots with the array data before.
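
If you prefer, the pair bookkeeping can also be done with itertools.combinations over the zipped arrays and names; this is just a sketch, assuming the same Filters and Filter_names lists as above. Since combinations() yields each unordered pair once, the j <= i check is no longer needed, and zip() keeps each data array paired with its name:
from itertools import combinations

for (B, BB), (R, RR) in combinations(zip(Filters, Filter_names), 2):
    BR = B - R
    plt.figure()
    plt.gca().invert_yaxis()
    plt.xlim(-6, 6)
    plt.ylim(-4, -15)
    plt.xlabel(BB + '-' + RR)
    plt.ylabel(RR)
    plt.title('B filter: ' + BB + '    R filter: ' + RR)
    plt.plot(BR, R, 'k.', markersize=1)
    plt.savefig('B_' + BB + '_R_' + RR + '.png')
    plt.close()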

Related

How to use a loop to make a plot of 3 columns at the time?

I have a dataframe which contains the 3 columns of data (P, EP and Q) for each of the three catchment areas. I need to make a subplot for each catchment area showing the 3 columns of data that belong to it, using one loop.
I did manage to make the three subplots without using a loop, but I don't understand how I am supposed to use a loop.
df = pd.read_excel('catchment_water_balance_data_ex2.xlsx', index_col=0, parse_dates=[0], skiprows=4)
df_monthly = df.resample('M').mean()
fig, axs = plt.subplots(3)
catchment_1 = df_monthly[['P1', 'EP1', 'Q1']]
catchment_2 = df_monthly[['P2', 'EP2', 'Q2']]
catchment_3 = df_monthly[['P3', 'EP3', 'Q3']]
axs[0].plot(catchment_1)
axs[1].plot(catchment_2)
axs[2].plot(catchment_3)
fig.suptitle('Water data of 3 catchments')
fig.supylabel('mm/day');
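One way to fold this into a single loop is sketched below; this is only a sketch, assuming the df_monthly frame built above and the P1/EP1/Q1, P2/EP2/Q2, P3/EP3/Q3 column naming:
import matplotlib.pyplot as plt

fig, axs = plt.subplots(3)
for i, ax in enumerate(axs, start=1):
    # select the three columns belonging to catchment i and plot them on its axis
    ax.plot(df_monthly[[f'P{i}', f'EP{i}', f'Q{i}']])
fig.suptitle('Water data of 3 catchments')
fig.supylabel('mm/day')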

How can I define and add a legend to this ggplot2 script?

I came up with the following script to bin my data on X values, and plot the means of those bins in overlapping bar graphs. It works fine, but I can't seem to get a legend to generate, probably due to poor understanding of aesthetic mapping.
Here is the script, note that "MOI" and "T_cell_contacts" are two data columns in each DF.
ggplot(mapping = aes(MOI, T_cell_contacts)) +
  stat_summary_bin(data = Cleaned24hr4, fun = "mean", geom = "bar", bins = 100, fill = "#FF6666", alpha = 0.3) +
  stat_summary_bin(data = cleaned24hr8, fun = "mean", geom = "bar", bins = 100, fill = "#3733FF", alpha = 0.3) +
  ylab("mean")
I also added the graph that it plots.
Full disclosure: I was in the middle of writing this when @schumacher posted their response :). Decided to finish anyway.
There are two ways to approach this. One way (more complicated) is to keep the dataframes separate and ask ggplot2 to create a legend via mapping, and the second (simpler) way is to combine the data into one dataset, similar to what @schumacher posted, and map the fill color to the extra id column created.
I'll show you both, but first, here's a sample dataset:
library(ggplot2)
set.seed(8675309)
df1 <- data.frame(my_x=rep(1:100, 3), my_y=rnorm(300, 40, 4))
df2 <- data.frame(my_x=rep(11:110, 3), my_y=rnorm(300, 110, 10))
# and the plot code similar to OP's question
ggplot(mapping=aes(x = my_x, y = my_y)) +
stat_summary_bin(data=df1, fun="mean", geom="bar", bins=40, fill="blue", alpha=0.3) +
stat_summary_bin(data=df2, fun="mean", geom="bar", bins=40, fill="red", alpha=0.3)
Method 1 : Combine Dataframes
This is the preferred method for a variety of reasons I can't list completely here. There are a lot of options you can use for combining datasets. One is using union() or rbind() after adding some sort of ID column to your data, but you can do it all in one shot using bind_rows() from dplyr:
df <- dplyr::bind_rows(list(dataset1 = df1, dataset2 = df2), .id="id")
The result will bind the rows together and, by specifying the .id argument, create a new column in the dataset called "id" that uses the names of the datasets in the list as its values. In this case, the value in the df$id column is either "dataset1" if it originated from df1 or "dataset2" if it originated from df2.
Then you use aes(fill=...) to map the fill color to the column "id" in the combined dataset.
p <- ggplot(df, aes(x=my_x, y=my_y)) +
stat_summary_bin(aes(fill=id), fun="mean", geom="bar", bins=40, alpha=0.3)
p
This creates a plot with the default fill colors, so if you want to supply your own, just use scale_fill_manual(values=...) to specify the particular colors. Using a named vector for values= ensures that each color is applied the way you want it to be, although you can also supply an unnamed vector of color names.
p + scale_fill_manual(values = c("dataset1" = "blue", "dataset2" = "red"))
Method 2 : Use mapping to add the legend
While Method 1 is preferred, there is another way that does not force you to combine your dataframes. This is also useful to illustrate a bit about how ggplot2 decides to create and draw legends. The legend is created automatically via the mapping= argument, specifically via aes(). If you put any aesthetic inside of aes() that would normally impart a different appearance rather than a location (with some exceptions like x, y, and label), this initiates the creation of a legend. You can map either a column in your dataset (like above), or you can just supply a single value, and that will be applied to the entire dataset used for the geom. In this case, see what happens when you change the fill= argument for each geom call to be within aes() and assign it to a character value:
p1 <- ggplot(mapping = aes(x=my_x, y=my_y)) +
stat_summary_bin(aes(fill="first"), data=df1, fun="mean", geom="bar", bins=40, alpha=0.3) +
stat_summary_bin(aes(fill="second"), data=df2, fun="mean", geom="bar", bins=40, alpha=0.3) +
scale_fill_manual(values = c("first" = "blue", "second" = "red"))
p1
It works! When you provide a character value for the fill= aesthetic inside aes(), it's basically labeling every observation in that data to have the value "first" or "second" and using that to make the legend. Cool, right?
You notice a problem though: the alpha value in the legend is not correct. This is because of overplotting, and it's one of the reasons why you shouldn't really do it this way, even though it sort of works. It is only noticeable if you have an alpha value. You can get the legend to look normal, but you need to use guide_legend() to override the aesthetics. Since the code effectively causes the legend to be drawn completely for each geom, you have to cut the alpha value in half for it to display correctly.
p1 + guides(fill=guide_legend(override.aes = list(alpha=0.15)))
Oh, and the real reason why not to use Method 2 is.... just think about doing that again for 5 datasets... how about 10?... how about 20?.....
I think the difficulty has to do with building a single legend out of two different geoms. My approach was to combine your data into a single data frame, with the records from each set apart by a new category column, which I'll call "cat" for short.
With the popular dplyr package:
Cleaned24hr4 <- mutate(Cleaned24hr4, cat = "hr4")
Cleaned24hr8 <- mutate(Cleaned24hr8, cat = "hr8")
Then put them together:
Cleaned <- union(Cleaned24hr4,Cleaned24hr8)
Define your colors:
colorcode <- c("hr4" = "#FF6666", "hr8" = "#3733FF")
Here's my ggplot statement:
ggplot(Cleaned, mapping=aes(MOI, T_cell_contacts)) +
stat_summary_bin(fun = "mean", geom="bar", bins= 100, aes(fill = cat), alpha = 0.3) +
scale_fill_manual(values = colorcode) +
ylab("mean")
Output using some dummy data.

Adding error_y from two columns in a stacked bar graph, plotly express

I have created a stacked bar plot using plotly.express. Each X-axis category has two correspondent Y-values that are stacked to give the total value of the two combined.
How can I add an individual error bar for each Y-value?
I have tried several options that all yield the same result: the same value is added to both stacked bars. The error_y values are found in two separate columns in the dataframe, "st_dev_PHB_%" and "st_dev_PHV_%" respectively, which correspond to 6 categorical values (x="C").
My intuition tells me it's best to merge them into a new column in the dataframe, since I load the dataframe into the bar plot. However, each solution I try gives an error or adds the same value to each pair of Y-values.
What would be nice is if it were possible to have X error_y values corresponding to the X number of variables loaded in y=[...,...]. But that would of course be too easy.
data_MM = read_csv(....)
#data_MM["error_bar"] = data_MM[['st_dev_PHB_%', 'st_dev_PHV_%']].apply(tuple, axis=1).tolist()
#This one adds the values together instead of adding them to same list.
#data_MM["error_bar"] = data_MM['st_dev_PHB_%'] + data_MM['st_dev_PHV_%']
#data_MM["error_bar"] = data_MM[["st_dev_PHB_%", "st_dev_PHV_%"]].values.tolist()
#data_MM["error_bar"] = list(zip(data_MM['st_dev_PHB_%'],data_MM['st_dev_PHV_%']))
bar_plot = px.bar(data_MM, x="C", y=["PHB_wt%", "PHV_wt%"], hover_data =["PHA_total_wt%"], error_y="error_bar")
bar_plot.show()
The error message I most commonly run into:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
I see your problem with the same error bar being used in both bars in the stack. However, I got a working solution with plotly.graph_objects. The only downside is that the second bar is plotted in front, so the upper half of the lower error bar is covered. At least you can still read off the error value from the hover data.
Here is the full code:
import numpy as np
import plotly.graph_objects as go

n = 20
x = list(range(1, n + 1))
y1 = np.random.random(n)
y2 = y1 + np.random.random(n)
e1 = y1 * 0.2
e2 = y2 * 0.05
trace1 = go.Bar(x=x, y=y1, error_y=dict(type='data', array=e1), name="Trace 1")
trace2 = go.Bar(x=x, y=y2, error_y=dict(type='data', array=e2), name="Trace 2")
fig = go.Figure(data=[trace1, trace2])
fig.update_layout(title="Test Plot", xaxis_title="X axis", yaxis_title="Y axis", barmode="stack")
fig.show()
Here is a resulting plot (top plot showing one error value, bottom plot showing a different error value for the same bar stack):
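If you would rather stay with plotly.express, one possible route is to build the figure with px.bar and then attach a separate error array to each generated trace. This is only a sketch, assuming the data_MM dataframe and column names from the question, and that px.bar creates the wide-form traces in the order of the y list:
import plotly.express as px

fig = px.bar(data_MM, x="C", y=["PHB_wt%", "PHV_wt%"],
             hover_data=["PHA_total_wt%"])
# pair each generated bar trace with its own standard-deviation column
error_cols = ["st_dev_PHB_%", "st_dev_PHV_%"]
for trace, err_col in zip(fig.data, error_cols):
    trace.update(error_y=dict(type="data", array=data_MM[err_col]))
fig.show()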

Item Wrong Length 1 Instead of 50 Pandas

I'm dealing with a csv file that consists of 2 columns and 51 rows in total.
data = pd.read_csv("data.csv", sep = ',')
data.columns=['x_column', 'y_column']
Then I perform linear regression:
X = data.iloc[:, 0].values.reshape(-1, 1)
y = data.iloc[:, 1].values.reshape(-1, 1)
lr = LinearRegression()
The next thing I need to perform is the Tukey method.
X = data.iloc[[0], :].values
y = data.iloc[[1], :].values
Then I plotted the boxes and found out my range is between -40 and 10.
data.boxplot(return_type='dict')
plt.plot()
I need to assign my outliers to a value in order to remove them before training my dataset again. And this is where I have a problem.
y_column = X[:, 1]
data_outliers = (y_column > 0.0)
data[data_outliers]
When I run this last part I get the error "Item wrong length 1 instead of 50." and I don't know how to solve it. Any help is appreciated.
Try:
data_outliers = (y_column > 0.0).ravel()
The problem was that your data_outliers was a two-dimensional numpy array (shape (1, 50)), which cannot be used to mask the DataFrame like that; ravel() just flattens it to one dimension.
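For illustration, here is a minimal self-contained example with made-up data showing why the flattening matters: the (1, 50) mask cannot index a 50-row DataFrame, while the flattened (50,) mask can.
import numpy as np
import pandas as pd

data = pd.DataFrame({"x_column": np.arange(50), "y_column": np.random.randn(50)})
mask_2d = data["y_column"].values.reshape(1, -1) > 0.0   # shape (1, 50)
mask_1d = mask_2d.ravel()                                # shape (50,)
outliers = data[mask_1d]   # works; data[mask_2d] raises the "Item wrong length 1 instead of 50" error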

Grouping the factors in ggplot

I am trying to create a graph based on a matrix similar to the one below. I am trying to group the Erosion values based on "Slope".
library(ggplot2)
new_mat<-matrix(,nrow = 135, ncol = 7)
colnames(new_mat)<-c("Scenario","Runoff (mm)","Erosion (t/ac)","Slope","Soil","Tillage","Rotation")
for (i in 1:nrow(new_mat)){
  new_mat[i,2]<-sample(10:50, 1)
  new_mat[i,3]<-sample(0.1:20, 1)
  new_mat[i,4]<-sample(c("S2","S3","S4","S5","S1"),1)
  new_mat[i,5]<-sample(c("Deep","Moderate","Shallow"),1)
  new_mat[i,7]<-sample(c("WBP","WBF","WF"),1)
  new_mat[i,6]<-sample(c("Intense","Reduced","Notill"),1)
  new_mat[i,1]<-paste0(new_mat[i,4],"_",new_mat[i,5],"_",new_mat[i,6],"_",new_mat[i,7],"_")
}
#### Graph part ########
grphs_mat<-as.data.frame(new_mat)
grphs_mat$`Runoff (mm)`<-as.numeric(as.character(grphs_mat$`Runoff (mm)`))
grphs_mat$`Erosion (t/ac)`<-as.numeric(as.character(grphs_mat$`Erosion (t/ac)`))
ggplot(grphs_mat, aes(Scenario, `Erosion (t/ac)`,group=Slope, colour = Slope))+
scale_y_continuous(limits=c(0,max(as.numeric((grphs_mat$`Erosion (t/ac)`)))))+
geom_point()+geom_line()
But when I run this code, the values are distributed along the x-axis for all 135 scenarios. What I want is for the grouping to be done in terms of Slope, while the plot also recognizes the other common factors such as Soil+Rotation+Tillage and places them on the x-axis. For example:
For these five scenarios:
S1_Deep_Intense_WBF_
S2_Deep_Intense_WBF_
S3_Deep_Intense_WBF_
S4_Deep_Intense_WBF_
S5_Deep_Intense_WBF_
It should separate S1, S2, S3, S4, S5 but also recognize that the other factors are the same and place them on the x-axis, so that the slope lines are stacked on top of each other at 135/5 = 27 x-axis points. The final figure should look like this (refer to the image). Apologies for not being able to explain it better.
I think I am making a mistake in the grouping or in assigning the x-axis values.
I will appreciate your suggestions.
In the example you give, I didn't get every possible factor combination represented so the plots looked a bit weird. What I did instead was start with the following:
set.seed(42)
new_mat <- matrix(,nrow = 1000, ncol = 7)
And then deduplicated this by summarising the values. A possibly relevant step here for your analysis is that I made a new variable with the interaction() function, which is the combination of three other factors.
library(tidyverse)
df <- grphs_mat
df$x <- with(df, interaction(Rotation, Soil, Tillage))
# The simulation did not yield unique combinations
df <- df %>% group_by(x, Slope) %>%
summarise(n = sum(`Erosion (t/ac)`))
Next, I plotted this new x variable on the x-axis and used "stack" positions for the lines and points.
g <- ggplot(df, aes(x, y = n, colour = Slope, group = Slope)) +
geom_line(position = "stack") +
geom_point(position = "stack")
To make the x-axis slightly more readable, you can replace the "." separators that the interaction() function inserted with newlines.
g + scale_x_discrete(labels = function(x){gsub("\\.", "\n", x)})
Another option is to simply rotate the x axis labels:
g + theme(axis.text.x.bottom = element_text(angle = 90))
There are a few additional options for the x-axis if you go into ggplot2 extension packages.