I have ~40 points in UTM zone 19, taken in Peru, that I would like to convert to lat/long so I can display them in Google Earth. I am having some problems with PBSmapping and can't seem to figure out the solution. I have searched through the forums and tried several different methods, including the project command in proj4, but still can't get this to work. Here is the code I currently have:
library(PBSmapping)
#just two example UTM coordinates
data<-as.data.frame(matrix(c(214012,197036,8545520,8567292),nrow=2))
attr(data,"projection") <- "UTM"
attr(data, "zone") <- 19
colnames(data)<-c("X","Y")
convUL(data,km=FALSE)
The corresponding lat/longs should have latitudes between -12.9XXXXX and -13.0XXXXX and longitudes between -71.8XXXX and -71.4XXXX. The values given by convUL seem to be way off.
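For reference, the expected ranges can be sanity-checked outside R. A minimal Python sketch with pyproj, assuming the points are on WGS84 / UTM zone 19 south (EPSG:32719) — a PSAD56 datum would shift the result by a few hundred metres, not by whole degrees:

```python
# Convert the two example UTM zone 19S points to lat/long with pyproj.
# EPSG:32719 (WGS84 / UTM zone 19S) is an assumption here; swap in the
# appropriate PSAD56 code if your data uses that datum.
from pyproj import Transformer

transformer = Transformer.from_crs("EPSG:32719", "EPSG:4326", always_xy=True)

for easting, northing in [(214012, 8545520), (197036, 8567292)]:
    lon, lat = transformer.transform(easting, northing)
    print(f"lat={lat:.5f}, lon={lon:.5f}")  # lats near -13, longs near -71.6 to -71.8
```

Both points land in the stated latitude/longitude ranges, so the coordinates themselves are consistent with UTM zone 19 south.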
Once you get the valid pairs of coordinates you could do something like this:
library(rgdal)
data <- data.frame(id = c(1:2), x = c(214012,197036) , y = c(8545520,8567292))
coordinates(data) = ~x+y
Assign the projection
# Use the appropriate EPSG Code
proj4string(data) = CRS('+init=epsg:24879') # PSAD56 / UTM zone 19S; use 32719 if the datum is WGS84
Transform to geographic coordinates
data_wgs84 <- spTransform(data, CRS('+init=epsg:4326'))
Get some valid background data to plot it against
# Country data
library(raster) # getData() comes from the raster package
peru <- getData('GADM', country='PER', level=0)
plot(peru, axes = T)
plot(data_wgs84, add = T)
Write your KML file
# Export kml
tmpd <- 'D:'
writeOGR(data_wgs84, paste(tmpd, "peru_data.kml", sep="/"), 'id', driver="KML")
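Since rgdal has since been retired, here is a rough geopandas sketch of the same assign-CRS / transform / export pipeline. The EPSG code is the same assumption as above, and it writes GeoJSON rather than KML because KML driver availability depends on your GDAL build:

```python
# Build points, declare their projected CRS, reproject to WGS84, export.
import geopandas as gpd
from shapely.geometry import Point

data = gpd.GeoDataFrame(
    {"id": [1, 2]},
    geometry=[Point(214012, 8545520), Point(197036, 8567292)],
    crs="EPSG:32719",  # assumption: WGS84 / UTM zone 19S; use a PSAD56 code if applicable
)

data_wgs84 = data.to_crs("EPSG:4326")          # reproject to geographic coordinates
data_wgs84.to_file("peru_data.geojson", driver="GeoJSON")
```

Google Earth reads GeoJSON directly in recent versions; if you need KML specifically, `fiona` with the KML driver enabled can write it.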
I am trying to extract the smoothing function from a ggplot and save it as a dataframe (hourly datapoints). Plot shown here.
What I have tried:
I have already tried different interpolation techniques, but the results are not satisfying.
Linear interpolation causes a zig-zag pattern.
na_spline causes a weird curved pattern.
The real data behaves more like the geom_smooth of ggplot. I have tried to reproduce it with the following functions:
loess.data <- stats::loess(Hallwil2018_2019$Avgstemp~as.numeric(Hallwil2018_2019$datetime), span = 0.5)
loess.predict <- predict(loess.data, se = T)
But it creates a list that drops the NA values and is therefore much shorter than the original data.
You can pass a newdata argument to predict() to get it to predict a value for every time period you give it. For example (using randomly generated data):
df <- data.frame(date = sample(seq(as.Date('2021/01/01'),
as.Date('2022/01/01'),
by="day"), 40),
var = rnorm(40, 100, 10))
mod <- loess(df$var ~ as.numeric(df$date), span = 0.5)
# the model was fit on numeric dates, so newdata must be numeric too
predict(mod, newdata = as.numeric(seq(as.Date('2021/01/01'), as.Date('2022/01/01'), by="day")))
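The same fit-once, evaluate-on-a-grid idea can be sketched in Python. Here numpy's polynomial fit stands in for loess (it is a global fit, not a local smoother, but the predict-at-new-x mechanics are identical):

```python
# Fit a smoother to irregularly sampled data, then evaluate it on a dense,
# regular grid -- the equivalent of passing newdata to predict().
import numpy as np

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0, 365, 40))              # 40 random "days" in a year
y = 100 + 10 * np.sin(x / 58) + rng.normal(0, 1, 40)

fit = np.polynomial.Polynomial.fit(x, y, deg=5)   # fit once on the samples

grid = np.arange(0, 366)                          # one value per day, like newdata
smoothed = fit(grid)                              # evaluate the fit on the grid
print(smoothed.shape)                             # (366,)
```

The key point in both languages is that the model object stores the fit, so you can ask for predictions at any x values you like, independent of where the original samples fell.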
I'm working on a project in which I need to plot several coordinates from a MongoDB query. I can plot a single result and it works great, but I need to figure out how to plot several coordinates from the query. The query contains the info needed, but I don't know how to iterate correctly over the result cursor; I think that's my problem. I'm running in circles now, so I need some help.
Code that works for a single result :
countE2 = mydb.results.find({"d.latitude":{"$gt":0},"d.longitude":{"$gt":0}},{"d.latitude":1, "d.longitude": 1,"a":1,"u.Server":1}).limit(10).sort('t', pymongo.DESCENDING)
for resultado in countE2:
    lat = resultado["d"]["latitude"]
    lon = resultado["d"]["longitude"]
    ip = resultado["a"]
    server = resultado["u"]["Server"]
    us_cities = [lat, lon, ip, server]
    y = us_cities
    print(y)

fig = px.scatter_mapbox(y, lat=[y[0]], lon=[y[1]], hover_name=[y[2]], hover_data=[[y[3]], [y[3]]], color_discrete_sequence=["green"], zoom=3, height=1000)
print(fig)
fig.update_layout(mapbox_style="dark", mapbox_accesstoken=token)
fig.update_layout(margin={"r": 0, "t": 0, "l": 0, "b": 0})
Please consider that the query returns several results, in this case in countE2.
Thanks
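One way to handle this, sketched below: accumulate each field into parallel lists across the whole cursor, then make a single scatter_mapbox call with the full lists. The sample documents are made up to mimic the query's shape, and the plotly call is guarded so the data-collection part runs even without plotly installed:

```python
# Collect every result into parallel lists, then plot them all at once.
lats, lons, ips, servers = [], [], [], []

# stand-in for the Mongo cursor; real code would iterate countE2 instead
resultados = [
    {"d": {"latitude": 19.4, "longitude": -99.1}, "a": "1.2.3.4", "u": {"Server": "mx-1"}},
    {"d": {"latitude": 40.7, "longitude": -74.0}, "a": "5.6.7.8", "u": {"Server": "us-1"}},
]

for resultado in resultados:
    lats.append(resultado["d"]["latitude"])
    lons.append(resultado["d"]["longitude"])
    ips.append(resultado["a"])
    servers.append(resultado["u"]["Server"])

try:
    import plotly.express as px
    # one call with the full lists plots every point, not just the last one
    fig = px.scatter_mapbox(
        lat=lats, lon=lons, hover_name=ips, hover_data=[servers],
        color_discrete_sequence=["green"], zoom=3, height=1000,
    )
except ImportError:
    fig = None  # plotly not installed; the collected lists are still usable
```

The original loop overwrites y on every iteration, so only the last document survives; moving the accumulation into lists and plotting once after the loop is the fix.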
I really can’t figure out how to display just the centroids for my categorical variables using the function ggord. If anybody could help me, that would be great.
Here is an example of what I’m trying to achieve using the dune data set:
library(vegan)
library (ggord)
library(ggplot2)
ord <- rda(dune ~ Moisture + Management + A1, dune.env)
#first plot
plot(ord)
# second plot
ggord(ord)
#I tried to add the centroids, but somehow the whole plot seems to be differently scaled?
centroids<-ord$CCA$centroids
ggord(ord) + geom_point(aes(centroids[,1], centroids[,2]), pch = 4, cex = 5, col = "black", data = as.data.frame(centroids))
In the first plot, only the centroids (instead of arrows) for Moisture and Management are displayed. In the ggord plot, every variable is displayed with an arrow.
And why do these plots look so different? The scales of the axes are totally different.
Something like this could work - you can use the var_sub argument to retain specific predictors (e.g., continuous), then plot the others on top of the ggord object.
library(vegan)
library(ggord)
library(ggplot2)
data(dune)
data(dune.env)
ord <- rda(dune ~ Moisture + Management + A1, dune.env)
# get centroids for factors
centroids <- data.frame(ord$CCA$centroids)
centroids$labs <- row.names(centroids)
# retain only continuous predictors, then add factor centroids
ggord(ord, var_sub = 'A1') +
geom_text(data = centroids, aes(x = RDA1, y = RDA2, label = labs))
I want to figure out which state a lat/long belongs to, and for this I am using the shapefiles provided by the US Census and the shapely library. This is what I have tried so far:
import pandas as pd
import geopandas as gpd
from shapely.geometry import Point
df_poly = gpd.read_file("data/tl_2019_us_state.shp")
df_poly = df_poly[['GEOID', 'geometry']].set_index('GEOID')
display(df_poly.head(5))
geometry
GEOID
54 POLYGON ((-81.74725 39.09538, -81.74635 39.096...
12 MULTIPOLYGON (((-86.38865 30.99418, -86.38385 ...
17 POLYGON ((-91.18529 40.63780, -91.17510 40.643...
27 POLYGON ((-96.78438 46.63050, -96.78434 46.630...
24 POLYGON ((-77.45881 39.22027, -77.45866 39.220...
p1 = Point(map(float, (29.65, -95.17)))
any(df_poly['geometry'].contains(p1))
False
But it is somehow returning False for every coordinate I try. For example, the above coordinate is in Texas, but it still returns False. What am I missing here?
Here are a few things you should check:
Did you use the correct order for the point? Shapely points use (x, y) coordinates, which are in the opposite order of (lat, lon) coordinates. I'd try flipping the coordinates and seeing if that works.
For example, I see one of your coordinates is this: "-81.74725 39.09538". If you interpret that in (lat, lon) order, it is in Antarctica. If you interpret it in (x, y) order, it is in Ohio.
Are you using the correct SRID? The census data usually uses NAD83, but this is a good thing to check:
print(df_poly.crs)
Another good sanity check is to look at the centroid of each polygon, and verify that it's reasonable:
df_poly.geometry.centroid
In the past, I've seen people who had data which was in the wrong SRID, and had to convert it.
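The (x, y) vs (lat, lon) point from the first check can be demonstrated with shapely alone, using a crude bounding box around Texas (the box coordinates are illustrative, not an exact state boundary):

```python
# Shapely points are (x, y) = (longitude, latitude); flipping the order
# puts the point far outside the polygon.
from shapely.geometry import Point, box

texas_bbox = box(-106.7, 25.8, -93.5, 36.5)   # rough lon/lat bounds of Texas

right_order = Point(-95.17, 29.65)            # (lon, lat): the Houston area
wrong_order = Point(29.65, -95.17)            # (lat, lon): nowhere near Texas

print(texas_bbox.contains(right_order))       # True
print(texas_bbox.contains(wrong_order))       # False
```

So for the coordinate in the question, `Point(-95.17, 29.65)` is the order the contains() check expects.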
I need to display a catalogue of galaxies projected on the sky. Not all of the sky is relevant here, so I need to center and zoom on the relevant part. I am OK with more or less any projection, like Lambert, Mollweide, etc. Here are mock data and a code sample, using Mollweide:
import numpy as np
import matplotlib.pyplot as plt

# Generating mock data
np.random.seed(1234)
RA, Dec = (np.random.rand(100) * 60 for _ in range(2))
# Creating projection
projection='mollweide'
fig = plt.figure(figsize=(20, 10));
ax = fig.add_subplot(111, projection=projection);
ax.scatter(np.radians(RA),np.radians(Dec));
# Creating axes
xtick_labels = ["$150^{\circ}$", "$120^{\circ}$", "$90^{\circ}$", "$60^{\circ}$", "$30^{\circ}$", "$0^{\circ}$",
"$330^{\circ}$", "$300^{\circ}$", "$270^{\circ}$", "$240^{\circ}$", "$210^{\circ}$"]
labels = ax.set_xticklabels(xtick_labels, fontsize=15);
ytick_labels = ["$-75^{\circ}$", "$-60^{\circ}$", "$-45^{\circ}$", "$-30^{\circ}$", "$-15^{\circ}$",
"$0^{\circ}$","$15^{\circ}$", "$30^{\circ}$", "$45^{\circ}$", "$60^{\circ}$",
"$75^{\circ}$", "$90^{\circ}$"]
ax.set_yticklabels(ytick_labels,fontsize=15);
ax.set_xlabel("RA");
ax.xaxis.label.set_fontsize(20);
ax.set_ylabel("Dec");
ax.yaxis.label.set_fontsize(20);
ax.grid(True);
The result is the following:
I have tried various set_whateverlim, set_extent, clip_box and so on, as well as importing cartopy and passing ccrs.LambertConformal(central_longitude=..., central_latitude=...) as an argument. I was unable to get a result.
Furthermore, I would like to shift the RA tick labels down, as they are difficult to read with real data. Unfortunately, ax.tick_params(pad=-5) doesn't do anything.