How to organize facet_grid with too many panels - ggplot2

I am trying to plot gene expression values from a clinical trial. I have reduced each measurement to a single scalar representing relative quantification (RQ), and I have 3 intervention groups. I am trying to plot it as a bar graph (I am open to alternatives). My database is made up of 55 genes measured in 151 samples.
The plot design is not very fancy yet; I would also like to distinguish the groups by colours:
library(ggplot2)

ggplot(genes[genes$time == 3, ], aes(grup_int, RQ)) +
  stat_summary(fun = mean, geom = "point") +
  stat_summary(fun.data = "mean_cl_boot", geom = "errorbar", width = 0.25) +
  facet_grid(. ~ gen)
As you can see, with 55 facets in a single row the resolution is low. I was wondering if there is another approach, maybe rearranging the panels or even dividing the figure into 2 plots...
Thanks in advance!
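A rough sketch of the two ideas mentioned in the question (rearranging the panels, or splitting into two plots), assuming genes really has the gen, grup_int, RQ and time columns used above: facet_wrap() lets the 55 panels flow over several rows instead of one long strip, and splitting the genes into halves gives two separate, more readable figures.

library(ggplot2)

# Sketch only: the column names (gen, grup_int, RQ, time) are assumed from the
# question, and time == 3 is assumed to select the visit of interest
genes3 <- genes[genes$time == 3, ]

# Option 1: wrap the 55 facets over several rows, colouring by group
ggplot(genes3, aes(grup_int, RQ, colour = grup_int)) +
  stat_summary(fun = mean, geom = "point") +
  stat_summary(fun.data = "mean_cl_boot", geom = "errorbar", width = 0.25) +  # needs Hmisc installed
  facet_wrap(~ gen, ncol = 8)

# Option 2: split the genes into two halves and draw two separate plots
gene_levels <- unique(genes3$gen)
halves <- split(gene_levels, rep(1:2, length.out = length(gene_levels)))
ggplot(genes3[genes3$gen %in% halves[[1]], ], aes(grup_int, RQ, colour = grup_int)) +
  stat_summary(fun = mean, geom = "point") +
  stat_summary(fun.data = "mean_cl_boot", geom = "errorbar", width = 0.25) +
  facet_wrap(~ gen, ncol = 7)

Repeating the last call with halves[[2]] gives the second figure.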

Related

Creating bar charts with binary data

I have the following data, which I am trying to use to create a bar chart showing how preference for fruit varies with country:
see data table here
I want to create a bar chart that shows preference for apples, oranges, grapes, and bananas based on survey location (i.e. x = surveyloc and y = preference frequency for each fruit). I am not quite sure how to do this when dealing with binary data and am hoping to get some assistance.
If you want to show preference for multiple variables (e.g. fruits) across multiple locations when you only have binary data ("yes"/"no", or 0 vs 1), a bar chart is probably not the best option. My recommendation would be something like a tile plot, which conveys preferences across all locations at a glance. Here's an example using some dummy data: first a bar plot (column plot), then my recommendation, a tile plot.
Example Dataset
library(ggplot2)
library(dplyr)
library(tidyr)
set.seed(8675309)
df <- data.frame(
  location = state.name[1:10],
  apples  = rbinom(10, 1, 0.3),
  oranges = rbinom(10, 1, 0.1),
  pears   = rbinom(10, 1, 0.25),
  grapes  = rbinom(10, 1, 0.6),
  mangos  = rbinom(10, 1, 0.65)
)

# tidy data
df <- df %>%
  pivot_longer(cols = -location) %>%
  mutate(value = factor(value))
I created df above initially in the same format you have for your dataset (location | pref1 | pref2 | pref3 | ...). It's difficult to plot this type of data directly with ggplot2, since it is designed to handle what is referred to as Tidy Data. Tidy data is a better strategy for data management overall and adapts to whatever output you want; I'd recommend reading the tidy data vignette for more info. In any case, after the code above we have df formatted as a "tidy" table.
Note I've also turned the binary "value" column into a factor (since it only holds "0" or "1", and intermediate values like "0.5" don't make sense for this data).
"Bar Chart"
I put "bar chart" in quotes, because as we are plotting the value (0 or 1) on the y axis and location on the x axis, we are creating a "column chart". "Bar charts" formally only need a list of values and plot count, density, or probability on the y axis. Regardless, here's an example:
bar_plot <-
  df %>%
  ggplot(aes(x = location, y = value, fill = name)) +
  geom_col(position = "dodge", color = 'gray50', width = 0.7) +
  scale_fill_viridis_d()

bar_plot
We could think about just showing where value==1, but that's probably not going to make things clearer.
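If you did want to show only the value == 1 rows, a quick sketch (reusing the tidy df from above) just filters first; every remaining bar then has height 1, which is why it doesn't add much:

# Keep only the "yes" responses and count them per location
df %>%
  filter(value == "1") %>%
  ggplot(aes(x = location, fill = name)) +
  geom_bar(position = "dodge") +
  scale_fill_viridis_d()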
Example of Tile Plot
What I think works better here is a tile plot. The idea is that you spread location along the x axis and name (of the fruit) along the y axis, and then show the value field as the color of the resulting tiles. I think it makes things a bit easier to view, and it works pretty much the same whether your data is binary or probabilistic. For probability data, you just don't need to convert to a factor first.
tile_plot <-
  df %>%
  ggplot(aes(x = location, y = name, fill = value)) +
  geom_tile(color = 'black') +
  scale_fill_manual(values = c(`0` = "gray90", `1` = "skyblue")) +
  coord_fixed() +
  scale_x_discrete(expand = expansion(0)) +
  scale_y_discrete(expand = expansion(0))

tile_plot
To explain a little of what's going on here: we set up the aesthetics in ggplot(...) as indicated above. Then we draw the tiles with geom_tile(), where color= controls the line drawn around each tile. The actual fill colors are defined in scale_fill_manual(). The tiles are forced to be square via coord_fixed(), and the excess area around the tiles is removed via the scale_x_*() and scale_y_*() calls.
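And as a sketch of the probability case mentioned above (with made-up probabilities, purely for illustration), you would keep value numeric and use a continuous fill scale instead of scale_fill_manual():

# Hypothetical probabilities instead of 0/1 values
df_prob <- df %>% mutate(value = runif(n()))

ggplot(df_prob, aes(x = location, y = name, fill = value)) +
  geom_tile(color = 'black') +
  scale_fill_gradient(low = "gray90", high = "skyblue", limits = c(0, 1)) +
  coord_fixed()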

Vega / Vega-Lite multi-level / hierarchy axes

I have seen view composition techniques like facets, etc.
However, I am struggling to create a plot that features multiple view cells / subplots for different combinations of dimensions/groups, as seen in the image for the x axis. It should be possible to do this not only for year-month-day but also for something like "country, year", and then to plot e.g. a continuous x vs. y scatter as the subplot within each individual cell for a specific country and year.
Could you provide an example JSON spec for such a plot, so that I can have multiple groups applied to both the x and y axes?
Please see the following Vega spec for what I've tried already. I would like to add a graph on top of it; the colored boxes should be the scales.
https://vega.github.io/editor/#/url/vega/N4KABGBEAkDODGALApgWwIaQFxUQFzwAdYsB6UgN2QHN0A6agSz0QFcAjOxge1IRQyUa6SgBY6AK1jcAdpAA04KAHdGAExbYwogAw7FESCkbV8WgMx6DUQujVrGM6lv0glkWCZnoANrC0A2koQwFDeqMhakGC29o7O8lAUvqyROK4QmVDsjmpawJCOhKx4UQBO6E6RiZCoji41GAAeWgCM+lCweMiEbQC+YH3WIWHoEeXcrDJ5Nck+qVoAZr6wyDU50-mFMsWlOJBIyPAA1uzcLQNDwWChkOFpSchleIzwvgpJKQ-LfmvZuVsiiUoocTmcLoMlABdRTuNToPCYHBBLKgLKGe5RPBlZDVa6GOapfzI-GZArqKIAQQ+kFsOJkezAMlYPh8sy+Liu6LJhTy+wAQjS6cgGVSaYSHq0udybryogBhIXoemMyDU9nzB46aXc8l8qAAESVKoV4o5OClw3ReqiAFFjSLVYqNQsLX1STDSZBsZVYItuGVUIFSSEQ5lvQBPQgPDw+l6LCMKMOGY7IRP7ClW7m05WOgDSaaiwtFYZ11uTUDwUZjdJeL1kSZl4cWjGQPn1kAljabNjsDicW081G8PiiMT78UgZezZUmmxwBSHI4mUzy0-RHkYAC8HgFF153vsqM9Xr4wAB+MDGUx4MA4VQaRBTxL74eHx4nt4+C9gB8sO9XsgJhmH0UKlh61zuhAnruAgvjIMSYAomSXqYvs8DcD4AbdlkkbRlEAb9u8WZQBUVSDoc4zoQiNABhGABMOhTpB0KwoYGBlMciHIaG2ZVvh+w4vApQkZAiyzkGC6QPCiJYjikTrpAIoYWoDxojKSkMk8+QVmJjCshR8Egph2E1C2bYdhSUEyuuhisIQMlqbp6ayku76dk8Lxfj+MmsKgdBNDoAHXvgAC0vn+RGTG2eGDGDgeo5Hp5p7fpeEUBa0wVATe4UIn5dARq0z66S0UluYlH5eWeaV5ZFQU4OlgVQDFhhNPR8VvhVHmftVEDpYVAGNUV1nciNmRjdZYHukAA
There are a number of faceting examples in the Vega-Lite gallery.

Visualizing randomized four dimensional data set

I have a four-dimensional data set. None of the four variables is evenly spaced. Right now, I visualize the data using a 3D scatter plot (with the color of the dots indicating the fourth dimension), but this becomes extremely unwieldy when printed. Had the variables been evenly spaced, a series of pcolor plots would have been an option. Is there some way I can represent such data using a series of 2D plots? My data set looks something like this:
x = [3.67, 3.89, 25.6]
y = [4.88, 4.88, 322.9]
z = [1.0, 2.0, 3.0]
b = [300.0,411.0,414.5]
A scatter plot matrix is a common way to plot multiple dimensions: for example, four continuous variables colored by a fifth categorical variable.
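The question doesn't name a language, but as a minimal R sketch using the built-in iris data (four continuous measurements colored by species):

# Scatter plot matrix of four continuous variables, colored by a
# fifth categorical variable
pairs(iris[, 1:4], col = iris$Species, pch = 19)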
How you deal with the uneven spacing depends on the nature of the unevenness:
You might plot it as-is if the unevenness is significant.
You might make a second plot with the extreme values excluded.
You might apply a transformation (such as log or quantile) if the data justifies it.
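As a sketch of the transformation option, using the small sample from the question (purely illustrative; here x and y are treated as the skewed variables):

# Log-transform the unevenly spaced variables before the scatter plot matrix
dat <- data.frame(x = c(3.67, 3.89, 25.6),
                  y = c(4.88, 4.88, 322.9),
                  z = c(1.0, 2.0, 3.0),
                  b = c(300.0, 411.0, 414.5))
dat$log_x <- log10(dat$x)
dat$log_y <- log10(dat$y)
pairs(dat[, c("log_x", "log_y", "z", "b")], pch = 19)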

Constructing a bubble trellis plot with lattice in R

First off, this is a homework question. The problem is ex. 2.6 from p. 26 of An Introduction to Applied Multivariate Analysis. It is laid out as:
Construct a bubble plot of the earthquake data using latitude and longitude as the scatterplot and depth as the circles, with greater depths giving smaller circles. In addition, divide the magnitudes into three equal ranges and label the points in your bubble plot with a different symbol depending on the magnitude group into which the point falls.
I have figured out that symbols(), which is in base graphics, does not work well with lattice. Also, I haven't figured out whether lattice has the functionality to change symbol size (i.e. bubble size). I bought the lattice book in a fit of desperation last night, and as I see in some of the examples, it is possible to change symbol color and shape for each "cut" or panel. I am working under the assumption that symbol size could also be manipulated, but I haven't been able to figure out how.
My code looks like:
library(lattice)
library(grid)   # for grid.circle() inside the panel function

plot(xyplot(lat ~ long | cut(mag, 3), data = quakes,
            layout = c(3, 1), xlab = "Longitude", ylab = "Latitude",
            panel = function(x, y) {
              grid.circle(x, y, r = sqrt(quakes$depth), draw = TRUE)
            }))
Where I attempt to use the grid package to draw the circles, but when this executes, I just get a blank plot. Could anyone please point me in the right direction? I would be very grateful!
Here is some code for creating the plot you need without using the lattice package. I obviously had to generate my own fake data, so you can disregard all of that and go straight to the plotting commands if you want.
####################################################################
#Pseudo Data
n = 20
latitude  = sample(1:100, n)
longitude = sample(1:100, n)
depth     = runif(n, 0, .5)
magnitude = sample(1:100, n)
groups    = rep(NA, n)
for (i in 1:n) {
  if (magnitude[i] <= 33) {
    groups[i] = 1
  } else if (magnitude[i] > 33 & magnitude[i] <= 66) {
    groups[i] = 2
  } else {
    groups[i] = 3
  }
}
####################################################################
# The actual code for generating the plot
plot(latitude[groups == 1], longitude[groups == 1], col = "blue", pch = 19,
     ylim = c(0, 100), xlim = c(0, 100), xlab = "Latitude", ylab = "Longitude")
points(latitude[groups == 2], longitude[groups == 2], col = "red", pch = 15)
points(latitude[groups == 3], longitude[groups == 3], col = "green", pch = 17)
points(latitude[groups == 1], longitude[groups == 1], col = "blue",  cex = 1/depth[groups == 1])
points(latitude[groups == 2], longitude[groups == 2], col = "red",   cex = 1/depth[groups == 2])
points(latitude[groups == 3], longitude[groups == 3], col = "green", cex = 1/depth[groups == 3])
You just need to add default.units = "native" to grid.circle()
plot(xyplot(lat ~ long | cut(mag, 3), data = quakes,
            layout = c(3, 1), xlab = "Longitude", ylab = "Latitude",
            panel = function(x, y) {
              grid.circle(x, y, r = sqrt(quakes$depth), draw = TRUE,
                          default.units = "native")
            }))
Obviously you need to tinker with some of the settings to get what you want.
I have written a package called tactile that adds a function for producing bubbleplots using lattice.
tactile::bubbleplot(depth ~ lat * long | cut(mag, 3), data = quakes,
                    layout = c(3, 1), xlab = "Longitude", ylab = "Latitude")

Creating grid and interpolating (x,y,z) for contour plot sagemath

I have values in the form of (x, y, z). By creating a list_plot3d plot I can clearly see that they are not quite evenly spaced; they usually form little "blobs" of 3 to 5 points on the xy plane. So for the interpolation, and for the final "contour" plot, to be better, or should I say smoother(?), do I have to create a rectangular grid (like the squares on a chessboard) so that the blobs of data are somehow "smoothed"? I understand that this might be trivial to some people, but I am trying this for the first time and I am struggling a bit. I have been looking at scipy functions like scipy.interpolate.interp2d, but the graphs produced at the end are really bad. Maybe a brief tutorial on 2D interpolation in SageMath for an amateur like me? Some advice? Thank you.
EDIT:
https://docs.google.com/file/d/0Bxv8ab9PeMQVUFhBYWlldU9ib0E/edit?pli=1
This is typical of the kind of graph it produces, along with this message:
Warning: No more knots can be added because the number of B-spline
coefficients already exceeds the number of data points m. Probably
causes: either s or m too small. (fp>s)
    kx,ky=3,3  nx,ny=17,20  m=200  fp=4696.972223  s=0.000000
To get this graph I just run this command:
f_interpolation = scipy.interpolate.interp2d(*zip(*matrix(C)), kind='cubic')
plot_interpolation = contour_plot(lambda x, y: f_interpolation(x, y)[0],
                                  (22.419, 22.439), (37.06, 37.08),
                                  cmap='jet', contours=numpy.arange(0, 1400, 100),
                                  colorbar=True)
plot_all = plot_interpolation
plot_all.show(axes_labels=["m", "m"])
Here matrix(C) can be a huge matrix like 10000 x 3 or even much larger, like 1000000 x 3. The problem of bad graphs persists even with less data, as in the picture I attached, where matrix(C) was only 200 x 3. That's why I'm beginning to think that, apart from a possible glitch in the program, my approach to using this command might be totally wrong, hence my asking for advice about using a grid rather than just "throwing" my data at a command.
I've had a similar problem using the scipy.interpolate.interp2d function. My understanding is that the issue arises because the interp1d/interp2d and related functions use an older wrapping of FITPACK for the underlying calculations. I was able to get a problem similar to yours to work using the spline functions, which rely on a newer wrapping of FITPACK. The spline functions can be identified because they all seem to have capital letters in their names; see http://docs.scipy.org/doc/scipy/reference/interpolate.html. Within the scipy installation, these newer functions appear to be located in scipy/interpolate/fitpack2.py, while the functions using the older wrappings are in fitpack.py.
For your purposes, RectBivariateSpline is what I believe you want. Here is some sample code for implementing RectBivariateSpline:
import numpy as np
from scipy import interpolate
# Generate unevenly spaced x/y data for axes
npoints = 25
maxaxis = 100
x = (np.random.rand(npoints)*maxaxis) - maxaxis/2.
y = (np.random.rand(npoints)*maxaxis) - maxaxis/2.
xsort = np.sort(x)
ysort = np.sort(y)
# Generate the z-data, which first requires converting the x/y data
# into grids; indexing='ij' keeps z[i, j] aligned with (xsort[i], ysort[j]),
# which is the layout RectBivariateSpline expects
xg, yg = np.meshgrid(xsort, ysort, indexing='ij')
z = xg**2 - yg**2
# Generate the interpolated, evenly spaced data
# Note that the min/max of x/y won't be exactly -50 and 50 since
# randomly chosen points were used. If we want to avoid extrapolation,
# the explicit min/max must be found
interppoints = 100
xinterp = np.linspace(xsort[0],xsort[-1],interppoints)
yinterp = np.linspace(ysort[0],ysort[-1],interppoints)
# Generate the kernel that will be used for interpolation
# Note that the default spline degree is 3 in each direction (cubic
# splines). Higher- or lower-order interpolation can be used by setting
# kx and ky (each between 1 and 5), e.g.
# interpolate.RectBivariateSpline(xsort, ysort, z, kx=5, ky=5)
kernel = interpolate.RectBivariateSpline(xsort, ysort, z)
# Now evaluate the spline on the evenly spaced grid to get the
# interpolated data
zinterp = kernel(xinterp, yinterp)