can I make "number at risk "table for cox plot if I have more than one independent variable?
if it possible where can I find the relevant code (I searched but couldn't find)
the code I used on my data:
fit <- coxph(Surv(time, event) ~ chr1q21_status + CCND1 + CRTM1 + IRF4, data = myeloma)
ggsurvplot(fit, data = myeloma,
           risk.table = TRUE, break.time.by = 365, xlim = c(0, 4000),
           risk.table.y.text = FALSE, legend.labs = c("2", "3", "4+"))
I got this message: "object 'ggsurv' not found", although with only one variable and the survfit function it worked.
"number at risk "table for cox plot
It's not a Cox plot, it's a Kaplan-Meier plot. You're trying to plot a Cox model, when what you want is to fit KM curves using survfit and then to plot the resulting fit:
library("survival")
library("survminer")
fit <- survfit(Surv(time, status) ~ ph.ecog + sex, data = lung)
ggsurvplot(fit, data = lung, risk.table = TRUE)
Since you now mention that you have continuous predictors, perhaps you could think about what you expect an at-risk table or KM plot to show.
Here's an example of binning a continuous measure (age):
library("survival")
library("survminer")
#> Loading required package: ggplot2
#> Loading required package: ggpubr
#> Loading required package: magrittr
# include.lowest = TRUE keeps the youngest patient in the first quartile bin
lung$age_bin <- cut(lung$age, quantile(lung$age), include.lowest = TRUE)
fit <- survfit(Surv(time, status) ~ age_bin + sex, data = lung)
ggsurvplot(fit, data = lung, risk.table = TRUE)
I am creating a map to depict density of datapoints at different locations. At some locations, there is a high density of data available, and at others, there is a low density of data available. I would like to present the map with each data point shown but with each point a certain size to represent the density.
In my data table I have the location, and each location is assigned 'A', 'B', or 'C' to depict 'Low', 'Medium', and 'High' density. When plotting using geom_sf, I am able to get the points on the map, but I would like each category to be represented by a different size circle. I.e. 'Low density' locations with a small circle, and 'High density' locations with a larger circle.
I have been approaching the aesthetics of this map in the same way I would approach it as if it were a normal ggplot situation, but have not had any luck. I feel like I must be missing something obvious related to the fact that I am using geom_sf(), so any advice would be appreciated!
I'm using very simple code:
ggplot() +
  geom_sf(data = stc_land, color = "grey40", fill = "grey80") +
  geom_sf(data = stcdens, aes(shape = Density)) +
  theme_classic()
I know that the aes() call should go in with the 'stcdens' data, and I got close with the 'shape = Density', but I am not sure how to move forward with assigning what shapes I want to each category.
You probably want to swap shape = Density for size = Density; then the plot should behave itself (and yes, it is a standard ggplot behavior, nothing sf specific :)
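Applied directly to your snippet, that would look roughly like this (a sketch only, since stc_land and stcdens are not reproducible here):
library(sf)
library(ggplot2)

ggplot() +
  geom_sf(data = stc_land, color = "grey40", fill = "grey80") +
  geom_sf(data = stcdens, aes(size = Density)) +  # size instead of shape
  theme_classic()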
As your code is not exactly reproducible, allow me to use my favorite example of 3 cities in NC:
library(sf)
library(ggplot2)
library(dplyr)  # for the %>% pipe and mutate()

shape <- st_read(system.file("shape/nc.shp", package = "sf"))  # included with the sf package

cities <- data.frame(name = c("Raleigh", "Greensboro", "Wilmington"),
                     x = c(-78.633333, -79.819444, -77.912222),
                     y = c(35.766667, 36.08, 34.223333),
                     population = c("high", "medium", "low")) %>%
  st_as_sf(coords = c("x", "y"), crs = 4326) %>%
  mutate(population = ordered(population,
                              levels = c("low", "medium", "high")))

ggplot() +
  geom_sf(data = shape, fill = NA) +
  geom_sf(data = cities, aes(size = population))
Note that I turned the population from a character variable into an ordered factor, where high > medium > low (so that the circles follow the expected order).
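If you also want to control exactly how large each category is drawn, you can add a manual size scale; the values below are just placeholders to illustrate:
ggplot() +
  geom_sf(data = shape, fill = NA) +
  geom_sf(data = cities, aes(size = population)) +
  scale_size_manual(values = c("low" = 1, "medium" = 3, "high" = 6))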
I came up with the following script to bin my data on X values and plot the means of those bins in overlapping bar graphs. It works fine, but I can't seem to get a legend to generate, probably due to my poor understanding of aesthetic mapping.
Here is the script; note that "MOI" and "T_cell_contacts" are two data columns in each data frame.
ggplot(mapping = aes(MOI, T_cell_contacts)) +
  stat_summary_bin(data = Cleaned24hr4, fun = "mean", geom = "bar",
                   bins = 100, fill = "#FF6666", alpha = 0.3) +
  stat_summary_bin(data = cleaned24hr8, fun = "mean", geom = "bar",
                   bins = 100, fill = "#3733FF", alpha = 0.3) +
  ylab("mean")
I also added the graph that it plots.
Full disclosure: I was in the middle of writing this when @schumacher posted their response :). Decided to finish anyway.
There are two ways to approach this. One way (more complicated) is to keep the data frames separate and ask ggplot2 to create a legend via mapping; the second (simpler) way is to combine them into one dataset, similar to what @schumacher posted, and map the fill color to the extra id column created.
I'll show you both, but first, here's a sample dataset:
library(ggplot2)
set.seed(8675309)
df1 <- data.frame(my_x=rep(1:100, 3), my_y=rnorm(300, 40, 4))
df2 <- data.frame(my_x=rep(11:110, 3), my_y=rnorm(300, 110, 10))
# and the plot code similar to OP's question
ggplot(mapping = aes(x = my_x, y = my_y)) +
  stat_summary_bin(data = df1, fun = "mean", geom = "bar", bins = 40, fill = "blue", alpha = 0.3) +
  stat_summary_bin(data = df2, fun = "mean", geom = "bar", bins = 40, fill = "red", alpha = 0.3)
Method 1: Combine Dataframes
This is the preferred method for a variety of reasons I can't list completely here. There are a lot of options you can use for combining datasets. One is using union() or rbind() after adding some sort of ID column to your data, but you can do it all in one shot using bind_rows() from dplyr:
df <- dplyr::bind_rows(list(dataset1 = df1, dataset2 = df2), .id="id")
The result will bind the rows together and, by specifying the .id argument, create a new column in the dataset called "id" that uses the names of the datasets in the list as its values. In this case, the value in the df$id column is either "dataset1" if the row originated from df1 or "dataset2" if it originated from df2.
Then you use aes(fill=...) to map the fill color to the column "id" in the combined dataset.
p <- ggplot(df, aes(x=my_x, y=my_y)) +
stat_summary_bin(aes(fill=id), fun="mean", geom="bar", bins=40, alpha=0.3)
p
This creates a plot with the default colors for fill, so if you want to supply your own, just use scale_fill_manual(values=...) to specify the particular colors. Using a named vector for values= ensures that each color is applied the way you want it to be, but you can also just supply an unnamed vector of color names.
p + scale_fill_manual(values = c("dataset1" = "blue", "dataset2" = "red"))
Method 2: Use mapping to add the legend
While Method 1 is preferred, there is another way that does not force you to combine your dataframes. This is also useful to illustrate a bit about how ggplot2 decides to create and draw legends. The legend is created automatically via the mapping= argument, specifically via aes(). If you put any aesthetic inside aes() that would normally impart a different appearance rather than a location (with some exceptions like x, y, and label), this initiates the creation of a legend. You can map either a column in your dataset (like above), or you can supply a single value, which will be applied to the entire dataset used for that geom. In this case, see what happens when you change the fill= argument for each geom call to be within aes() and assign it a character value:
p1 <- ggplot(mapping = aes(x=my_x, y=my_y)) +
stat_summary_bin(aes(fill="first"), data=df1, fun="mean", geom="bar", bins=40, alpha=0.3) +
stat_summary_bin(aes(fill="second"), data=df2, fun="mean", geom="bar", bins=40, alpha=0.3) +
scale_fill_manual(values = c("first" = "blue", "second" = "red"))
p1
It works! When you provide a character value for the fill= aesthetic inside aes(), it's basically labeling every observation in that data to have the value "first" or "second" and using that to make the legend. Cool, right?
You notice a problem though, which is that the alpha value for the legend is not correct. This is because you get overplotting. It's just one of the reasons why you shouldn't really do it this way, but... it sort of works. It is only noticeable if you have an alpha value. You can get the legend to look normal, but you need to use guide_legend() to override the aesthetics. Since the code effectively causes the legend to be drawn completely for each geom, you have to cut the alpha value in half for it to display correctly.
p1 + guides(fill=guide_legend(override.aes = list(alpha=0.15)))
Oh, and the real reason why not to use Method 2 is.... just think about doing that again for 5 datasets... how about 10?... how about 20?.....
I think the difficulty has to do with building a single legend out of two different geoms. My approach was to combine your data into a single data frame, with the records from each set apart by a new category column, which I'll call "cat" for short.
With the popular dplyr package:
library(dplyr)

Cleaned24hr4 <- mutate(Cleaned24hr4, cat = "hr4")
Cleaned24hr8 <- mutate(Cleaned24hr8, cat = "hr8")
Then put them together:
Cleaned <- union(Cleaned24hr4, Cleaned24hr8)
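Note that dplyr's union() also drops duplicate rows. If your data could contain exact duplicates that you want to keep, a plain row bind (shown here only as an alternative sketch) preserves them:
Cleaned <- bind_rows(Cleaned24hr4, Cleaned24hr8)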
Define your colors:
colorcode <- c("hr4" = "#FF6666", "hr8" = "#3733FF")
Here's my ggplot statement:
ggplot(Cleaned, mapping = aes(MOI, T_cell_contacts)) +
  stat_summary_bin(fun = "mean", geom = "bar", bins = 100, aes(fill = cat), alpha = 0.3) +
  scale_fill_manual(values = colorcode) +
  ylab("mean")
Output using some dummy data.
I am trying to create a graph based on a matrix similar to the one below. I am trying to group the Erosion values based on "Slope".
library(ggplot2)
new_mat <- matrix(NA, nrow = 135, ncol = 7)
colnames(new_mat) <- c("Scenario", "Runoff (mm)", "Erosion (t/ac)", "Slope", "Soil", "Tillage", "Rotation")
for (i in 1:nrow(new_mat)) {
  new_mat[i, 2] <- sample(10:50, 1)
  new_mat[i, 3] <- sample(0.1:20, 1)
  new_mat[i, 4] <- sample(c("S2", "S3", "S4", "S5", "S1"), 1)
  new_mat[i, 5] <- sample(c("Deep", "Moderate", "Shallow"), 1)
  new_mat[i, 7] <- sample(c("WBP", "WBF", "WF"), 1)
  new_mat[i, 6] <- sample(c("Intense", "Reduced", "Notill"), 1)
  new_mat[i, 1] <- paste0(new_mat[i, 4], "_", new_mat[i, 5], "_", new_mat[i, 6], "_", new_mat[i, 7], "_")
}
#### Graph part ########
grphs_mat <- as.data.frame(new_mat)
grphs_mat$`Runoff (mm)` <- as.numeric(as.character(grphs_mat$`Runoff (mm)`))
grphs_mat$`Erosion (t/ac)` <- as.numeric(as.character(grphs_mat$`Erosion (t/ac)`))
ggplot(grphs_mat, aes(Scenario, `Erosion (t/ac)`, group = Slope, colour = Slope)) +
  scale_y_continuous(limits = c(0, max(as.numeric(grphs_mat$`Erosion (t/ac)`)))) +
  geom_point() + geom_line()
But when I run this code, the values are spread along the x-axis for all 135 scenarios. What I want is for the grouping to be done in terms of slope, while the other common factors such as Soil + Rotation + Tillage are combined and placed on the x-axis. For example:
For these five scenarios:
S1_Deep_Intense_WBF_
S2_Deep_Intense_WBF_
S3_Deep_Intense_WBF_
S4_Deep_Intense_WBF_
S5_Deep_Intense_WBF_
It should separate S1, S2, S3, S4, S5 but also recognize that the other factors are the same and place them on the x-axis, so that the slope lines are stacked on top of each other at 135/5 = 27 x-axis points. The final figure should look like this (refer to the image). Apologies for not being able to explain it better.
I think I am making a mistake in the grouping or in assigning the x-axis values.
I would appreciate your suggestions.
In the example you give, I didn't get every possible factor combination represented, so the plots looked a bit weird. What I did instead was start with the following:
set.seed(42)
new_mat <- matrix(NA, nrow = 1000, ncol = 7)
I then deduplicated this by summarising the values. A possibly relevant step for your analysis is that I made a new variable with the interaction() function that is the combination of three other factors.
library(tidyverse)
df <- grphs_mat
df$x <- with(df, interaction(Rotation, Soil, Tillage))
# The simulation did not yield unique combinations
df <- df %>%
  group_by(x, Slope) %>%
  summarise(n = sum(`Erosion (t/ac)`))
Next, I plotted this new x variable on the x-axis and used "stack" positions for the lines and points.
g <- ggplot(df, aes(x, y = n, colour = Slope, group = Slope)) +
  geom_line(position = "stack") +
  geom_point(position = "stack")
To make the x-axis slightly more readable, you can replace the . separators that the interaction() function inserted with newlines.
g + scale_x_discrete(labels = function(x){gsub("\\.", "\n", x)})
Another option is to simply rotate the x axis labels:
g + theme(axis.text.x.bottom = element_text(angle = 90))
There are a few additional options for the x-axis if you go into ggplot2 extension packages.
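Even within ggplot2 itself (version 3.3.0 or later), guide_axis() can dodge overlapping labels into multiple rows; a minimal sketch:
g + scale_x_discrete(guide = guide_axis(n.dodge = 2))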
I have plotted a heatmap in ggplot2. I want to add a curved line to the plot to show where z = 0 (i.e. where the value of the data used for the fill is zero). How can I do this?
Thanks
Since no example data or code is provided, I'll illustrate with the volcano dataset, representing heights of a volcano in a matrix. Since the data doesn't contain a zero point, we'll draw the line at the arbitrarily chosen 125 mark.
library(ggplot2)
# Convert matrix to data.frame
df <- data.frame(
  row = as.vector(row(volcano)),
  col = as.vector(col(volcano)),
  value = as.vector(volcano)
)
# Set contour breaks at desired level
ggplot(df, aes(col, row, fill = value)) +
  geom_raster() +
  geom_contour(aes(z = value),
               breaks = 125, col = 'red')
Created on 2020-04-06 by the reprex package (v0.3.0)
If this isn't a good approximation of your problem, I'd suggest to include example data and code in your question.
I'm new here and would really appreciate some help! I've got a simple R script to plot log-transformed data using ggplot and also plot the 95% confidence and prediction intervals. However, I'm stuck on how to format the axes: I'd like them to be in log scale. I learned this from a tutorial and have tried to go through the script and then transform the axes, but it messes up the confidence intervals.
I've tried using:
scale_y_continuous(trans=log2_trans())+
scale_x_continuous(trans=log2_trans())
but that doesn't transform the 95% confidence interval... Any suggestions would be appreciated! Basically, I'm just looking for an easy way to get log scales that look nice on the already nice graph.
Here's my code (I didn't include all the data just a bit for reference):
x <- c(6135.0613509, 945.2650501, 1927.8260200, 110.0000000,
       3812.9674276, 3.2991626, 1173.4923354, 945.2650501,
       114.2114798, 11.2463797)
y <- c(370.00, 32.00, 2900.00, 52.00, 1500.00, 0.06, 16.00, 50.00, 5.00, 11.00)
df <- data.frame(x, y)
# Log transformation
log_x <- log(x)
log_y <- log(y)
# Plot
plot(log_x,log_y)
# Linear Regression - linear model function
model1 <- lm(log_y ~ log_x, data = df)
summary(model1)
abline(model1, col = "lightblue")  # Add trendline to plot of log-transformed data
# load ggplot2
library(ggplot2)
# Confidence and Prediction intervals
temp_var <- predict(model1, interval = "prediction")
## Warning in predict.lm(model1, interval = "prediction"): predictions on
## current data refer to _future_ responses
new_df <- cbind(df, temp_var)
ggplot(new_df, aes(log_x, log_y)) +
  geom_point() +
  geom_line(aes(y = lwr), color = "red", linetype = "dashed") +
  geom_line(aes(y = upr), color = "red", linetype = "dashed") +
  geom_smooth(method = lm, se = TRUE)
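One possible direction to explore (a sketch only, assuming lwr and upr come from the log-scale model above): plot the raw x and y, back-transform the interval limits with exp(), and let log10 scales handle the axes. ggplot2 applies scale transformations before computing statistics, so geom_smooth() still fits the log-log regression:
library(ggplot2)

ggplot(new_df, aes(x = x, y = y)) +
  geom_point() +
  # lwr/upr were predicted on the log(y) scale, so back-transform with exp()
  geom_line(aes(y = exp(lwr)), color = "red", linetype = "dashed") +
  geom_line(aes(y = exp(upr)), color = "red", linetype = "dashed") +
  geom_smooth(method = lm, se = TRUE) +
  scale_x_log10() +
  scale_y_log10()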