Trying to calculate the direction in the sky of planets at a given time, but getting incorrect answers - skyfield

I'm trying to write a program to calculate which planets are above the horizon at a given time and where in the sky they are, but I'm getting incorrect results. My code is below; does anyone have any idea what the issue might be?
import numpy as np
import skyfield
from skyfield.api import load

planets = load('de441.bsp')
ts = load.timescale()

earth = planets['earth']
other_planets = [planets['mercury barycenter'], planets['venus barycenter'],
                 planets['mars barycenter'], planets['jupiter barycenter'],
                 planets['saturn barycenter']]
other_planet_names = ["Mercury", "Venus", "Mars", "Jupiter", "Saturn"]
directions = ["North", "Northeast", "East", "Southeast", "South", "Southwest", "West", "Northwest"]

def planets_visible(time):
    numPlanetsViewed = 0
    for i in range(0, 5):
        az, dec, distance = earth.at(time).observe(other_planets[i]).radec()
        if dec._degrees > 0:
            numPlanetsViewed = numPlanetsViewed + 1
            string = other_planet_names[i] + " is " + str(int(round(dec._degrees, 0))) + " degrees above the horizon!"
            print(string)
            direction = int(np.floor(((az._degrees - 22.5) % 360) / 45))
            string2 = " It is visible to the " + directions[direction] + "."
            print(string2)
            print(" ")
    if numPlanetsViewed == 0:
        print("Sorry, none of the four major planets are visible at this time!")

now = ts.now()
planets_visible(now)
I keep getting that, say, right now (4/7/2021, ~4:00 PM), the right planets are visible – yay! – but the azimuths are all off, saying Mars is NE when it's SE, for example (and I can't speak for the right ascensions, since I don't know them at the moment, but I think those are off too). Does anyone know what's up?

You will want to double-check the documentation: radec() does not return azimuth and altitude; it returns right ascension and declination. Yes, Python lets you call the return value az, but that's only because Python does not check whether the names you assign to are appropriate. The value is not an azimuth, which is why it does not match the value you expect. Try reading through the documentation again and follow its instructions for producing an azimuth; hopefully the numbers will then agree better.
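For reference, here is a minimal sketch of how an altitude/azimuth could be produced with Skyfield's wgs84.latlon observer and apparent().altaz(); the observer coordinates and the compass-bin helper are illustrative assumptions, not part of the original question:

from skyfield.api import load, wgs84

planets = load('de441.bsp')   # same ephemeris as the question; a smaller file such as de421.bsp also works
ts = load.timescale()
earth = planets['earth']
mars = planets['mars barycenter']

# Altitude and azimuth depend on where on Earth you stand, so observe from a
# topocentric position rather than from the Earth's center.
observer = earth + wgs84.latlon(40.0, -75.0)   # example latitude/longitude (assumption)

t = ts.now()
alt, az, distance = observer.at(t).observe(mars).apparent().altaz()

directions = ["North", "Northeast", "East", "Southeast",
              "South", "Southwest", "West", "Northwest"]
compass = directions[int(((az.degrees + 22.5) % 360) // 45)]

if alt.degrees > 0:
    print("Mars is", round(alt.degrees), "degrees above the horizon, to the", compass + ".")

Note the +22.5 offset in the compass lookup: it centers the 45-degree "North" bin on an azimuth of 0 degrees, so an azimuth of, say, 350 degrees still maps to "North".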

Related

Create line network from closest points with boundaries

I have a set of points and I want to create a line / road network from those points. First, I need to determine the closest point to each of the points. For that, I used a KD-tree and developed code like this:
def closestPoint(source, X = None, Y = None):
    df = pd.DataFrame(source).copy(deep = True) #Ensure source is a dataframe, working on a copy to keep the datasource
    if(X is None and Y is None):
        raise ValueError("Please specify coordinate")
    elif(not X in df.keys() and not Y in df.keys()):
        raise ValueError("X and/or Y is/are not in column names")
    else:
        df["coord"] = tuple(zip(df[X],df[Y])) #create a coordinate
        if (df["coord"].duplicated):
            uniq = df.drop_duplicates("coord")["coord"]
            uniqval = list(uniq.get_values())
            dupl = df[df["coord"].duplicated()]["coord"]
            duplval = list(dupl.get_values())
            for kq,vq in uniq.items():
                clstu = spatial.KDTree(uniqval).query(vq, k = 3)[1]
                df.at[kq,"coord"] = [vq,uniqval[clstu[1]]]
                if([uniqval[clstu[1]],vq] in list(df["coord"])):
                    df.at[kq,"coord"] = [vq,uniqval[clstu[2]]]
            for kd,vd in dupl.items():
                clstd = spatial.KDTree(duplval).query(vd, k = 1)[1]
                df.at[kd,"coord"] = [vd,duplval[clstd]]
        else:
            val = df["coord"].get_values()
            for k,v in df["coord"].items():
                clst = spatial.KDTree(val).query(vd, k = 3)[1]
                df.at[k,"coord"] = [v,val[clst[1]]]
                if([val[clst[1]],v] in list(df["coord"])):
                    df.at[k,"coord"] = [v,val[clst[2]]]
    return df["coord"]
The code can return the closest points. However, I need to ensure that no duplicate lines are created (e.g. (x,y) to (x1,y1) and also (x1,y1) to (x,y)), and that each point is used only once as the starting point of a line and once as the end point of a line, even if it is the closest point to several other points.
Below are two images (omitted here): one showing the result of the code, and one showing what I want instead.
I've also tried to separate the origin and target coordinate and do it like this:
df["coord"] = tuple(zip(df[X],df[Y])) #create a coordinate
df["target"] = "" #create a column for target points
count = 2 # create a count iteration
if (df["coord"].duplicated):
uniq = df.drop_duplicates("coord")["coord"]
uniqval = list(uniq.get_values())
for kq,vq in uniq.items():
clstu = spatial.KDTree(uniqval).query(vq, k = count)[1]
while not vq in (list(df["target"]) and list(df["coord"])):
clstu = spatial.KDTree(uniqval).query(vq, k = count)[1]
df.set_value(kq, "target", uniqval[clstu[count-1]])
else:
count += 1
clstu = spatial.KDTree(uniqval).query(vq, k = count)[1]
df.set_value(kq, "target", uniqval[clstu[count-1]])
but this returns an error:
IndexError: list index out of range
Can anyone help me with this? Many thanks!
Regarding the global strategy, here is what I would do (a rough pseudo-algorithm):
current_point = one starting point in uniqval
while (uniqval not empty)
    construct KDTree from uniqval and use it for the next line
    next_point = point in uniqval closest to current_point
    record next_point as target for current_point
    remove current_point from uniqval
    current_point = next_point
What you will obtain is a linear graph joining all your points, using closest neighbors "in some way". I don't know if it will fit your needs. You would also obtain a linear graph by taking next_point at random...
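A minimal Python sketch of that greedy chaining strategy, assuming uniqval is a list of unique (x, y) tuples (the function and variable names here are illustrative, not taken from the original code):

from scipy import spatial

def nearest_neighbor_chain(uniqval):
    # Greedily link each point to its nearest not-yet-visited neighbor.
    # (Illustrative helper, not part of the question's code.)
    remaining = list(uniqval)
    current = remaining.pop(0)          # pick an arbitrary starting point
    edges = []                          # list of (start, end) segments
    while remaining:
        # Rebuild the KD-tree on the points that are still unvisited.
        tree = spatial.KDTree(remaining)
        _, idx = tree.query(current, k=1)
        nxt = remaining.pop(idx)
        edges.append((current, nxt))
        current = nxt
    return edges

Because every point is removed from remaining once it has been used, each point appears at most once as a start and once as an end, and no reversed duplicate segment can be produced.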
It is hard to comment on your global strategy without further detail about the kind of road network you want to obtain. So let me just comment on your specific code and explain why the "out of range" error happens. I hope this can help.
First, are you aware that (list_a and list_b) returns list_a if list_a is empty, and list_b otherwise? Second, isn't the condition vq in list(df["coord"]) always True? If so, your while loop only ever executes its else clause, and at the last iteration of the for loop (count - 1) becomes greater than the total number of (unique) points. The KDTree query therefore does not return enough points, and clstu[count-1] is out of range.
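A quick illustration of that behavior of and (toy values only):

list_a, list_b = [], [1, 2]
print(list_a and list_b)   # []    -> an empty list_a short-circuits the expression
list_a = [0]
print(list_a and list_b)   # [1, 2] -> a non-empty list_a yields list_b
# So `x in (list_a and list_b)` only ever tests membership in ONE of the two lists.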

Reflecting boundary conditions in FiPy

I am attempting to solve the convection diffusion equation in FiPy. For the moment, all I am trying to achieve is a Neumann boundary condition, so that the wave reflects back at the right-hand boundary rather than travelling out of the domain.
I have added the following line:
phi.faceGrad.constrain(0, mesh.exteriorFaces)
But this doesn't seem to change anything.
Am I imposing the wrong boundary condition? Am I imposing it incorrectly? I have searched for this, but can't seem to find an example which has the simple property of a wave reflecting off a boundary! My code is below. Thanks so much.
from fipy import *

nx = 100
L = 1.
dx = L/nx
steps = 160
dt = 0.1
t = dt * steps

mesh = Grid1D(nx=nx, dx=dx)
x = mesh.cellCenters[0]

phi = CellVariable(name="solution variable", mesh=mesh, value=0.)
phi.setValue(1., where=(x > 0.03) & (x < 0.09))

# Diffusion and convection coefficients
D = FaceVariable(name='diffusion coefficient', mesh=mesh, value=1.*10**(-4.))
C = (0.1,)

# Boundary conditions
phi.faceGrad.constrain(0, mesh.exteriorFaces)

eq = TransientTerm() == DiffusionTerm(coeff=D) - ConvectionTerm(coeff=C)

for step in range(steps):
    eq.solve(var=phi, dt=dt)
    if step % 20 == 0:
        viewer = Viewer(vars=phi, datamin=0., datamax=1.)
        viewer.plot()

How to change colors in stat_summary()

I am trying to plot two columns of raw data (I used melt to combine them into one data frame) and then add separate error bars for each. I want the raw data for each column to use one pair of colors and the error bars to use another pair, but I can't seem to get it to work. The plot I am getting is at the link below. A simple reproducible example is coded below, for illustrative purposes.
dat2.m <- data.frame(obs = c(2,4,6,8,12,16,2,4,6),
                     variable = c("raw","raw","raw","ip","raw","ip","raw","ip","ip"),
                     value = runif(9, 0, 10))
c <- ggplot(dat2.m, aes(x = obs, y = value, color = variable, fill = variable, size = 0.02)) +
  geom_jitter(size = 1.25) +
  scale_colour_manual(values = c("blue", "Red"))
c <- c + stat_summary(fun.data = "median_hilow", fun.args = (conf.int = 0.95), aes(color = variable),
                      position = "dodge", geom = "errorbar", size = 0.5, lty = 1)
print(c)
[1]: http://i.stack.imgur.com/A5KHk.jpg
For the record: I think that this is a really, really bad idea. Unless you have a use case where this is crucial, I think you should re-examine your plan.
However, you can get around it by adding a new set of variables, padded with a space at the end. You will want/need to play around with the legends, but this should work (though it is definitely ugly):
dat2.m <- data.frame(obs = c(2,4,6,8,12,16,2,4,6),
                     variable = c("raw","raw","raw","ip","raw","ip","raw","ip","ip"),
                     value = runif(9, 0, 10))
c <- ggplot(dat2.m, aes(x = obs, y = value, color = variable, fill = variable, size = 0.02)) +
  geom_jitter(size = 1.25) +
  scale_colour_manual(values = c("blue", "Red", "green", "purple"))
c <- c + stat_summary(fun.data = "median_hilow", fun.args = (conf.int = 0.95), aes(color = paste(variable, " ")),
                      position = "dodge", geom = "errorbar", size = 0.5, lty = 1)
print(c)
One way around this would be to use repeated calls to geom_jitter and stat_summary. Use the data argument of those functions to feed subsets of your dataset into each call, and set the color attribute outside of aes(). It's repetitive and somewhat defeats the compactness of ggplot, but it would do.
c <- ggplot(dat2.m, aes(x = obs, y = value, size = 0.02)) +
  geom_jitter(data = subset(dat2.m, variable == 'raw'), color = 'blue', size = 1.25) +
  geom_jitter(data = subset(dat2.m, variable == 'ip'), color = 'red', size = 1.25) +
  stat_summary(data = subset(dat2.m, variable == 'raw'), fun.data = "median_hilow", fun.args = (conf.int = 0.95),
               color = 'pink', position = "dodge", geom = "errorbar", size = 0.5, lty = 1) +
  stat_summary(data = subset(dat2.m, variable == 'ip'), fun.data = "median_hilow", fun.args = (conf.int = 0.95),
               color = 'green', position = "dodge", geom = "errorbar", size = 0.5, lty = 1)
print(c)

Description of parameters of GDAL SetGeoTransform

Can anyone help me with the parameters for SetGeoTransform? I'm creating raster layers with GDAL, but I can't find a description of the 3rd and 5th parameters of SetGeoTransform. They should define the x and y axes for the cells. I tried to find something about it here and here, but found nothing.
I need a description of these two parameters... Are the values in degrees, radians, meters, or something else?
The geotransform is used to convert between map coordinates and pixel coordinates using an affine transformation. The 3rd and 5th parameters are used (together with the 2nd and 4th) to define the rotation if your image doesn't have 'north up'.
But most images are north up, and then both the 3rd and 5th parameters are zero.
The affine transform consists of six coefficients returned by
GDALDataset::GetGeoTransform() which map pixel/line coordinates into
georeferenced space using the following relationship:
Xgeo = GT(0) + Xpixel*GT(1) + Yline*GT(2)
Ygeo = GT(3) + Xpixel*GT(4) + Yline*GT(5)
See the section on affine geotransform at:
https://gdal.org/tutorials/geotransforms_tut.html
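As a concrete illustration, here is a small Python sketch applying that relationship to a typical north-up geotransform (the numeric values are made up for the example):

# A north-up geotransform: origin at (top-left x, top-left y), pixel width GT(1),
# pixel height GT(5) (negative because y decreases downward), and zero
# rotation/skew terms GT(2) and GT(4).
gt = (444720.0, 30.0, 0.0, 3751320.0, 0.0, -30.0)   # example values, not from the question

def pixel_to_geo(gt, xpixel, yline):
    # Xgeo = GT(0) + Xpixel*GT(1) + Yline*GT(2);  Ygeo = GT(3) + Xpixel*GT(4) + Yline*GT(5)
    xgeo = gt[0] + xpixel * gt[1] + yline * gt[2]
    ygeo = gt[3] + xpixel * gt[4] + yline * gt[5]
    return xgeo, ygeo

print(pixel_to_geo(gt, 0, 0))      # (444720.0, 3751320.0) -> the top-left corner
print(pixel_to_geo(gt, 100, 50))   # 100 pixels east and 50 lines south of the origin

With GT(2) and GT(4) both zero the image is north up; making them non-zero shears/rotates the pixel grid, which is exactly what the 3rd and 5th parameters control.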
I did it like the code below; as a result, I was able to achieve the same thing as with SetGeoTransform.
# new file
dst = gdal.GetDriverByName('GTiff').Create(OUT_PATH, xsize, ysize, band_num, dtype)

# old file
ds = gdal.Open(fpath)
wkt = ds.GetProjection()
gcps = ds.GetGCPs()

dst.SetGCPs(gcps, wkt)
...
dst.FlushCache()
dst = None
Given the information from the aforementioned GDAL data model docs, the 3rd & 5th parameters of SetGeoTransform (x_skew and y_skew respectively) can be calculated from two control points (p1, p2) with known x and y in both "geo" and "pixel" coordinate spaces. p1 should be above-left of p2 in pixel space.
x_skew = sqrt((p1.geox-p2.geox)**2 + (p1.geoy-p2.geoy)**2) / (p1.pixely - p2.pixely)
y_skew = sqrt((p1.geox-p2.geox)**2 + (p1.geoy-p2.geoy)**2) / (p1.pixelx - p2.pixelx)
In short, this is the ratio of the Euclidean distance between the points in geo-space to the height (or width) of the image in pixel space.
The units of the parameters are "geo" length / "pixel" length.
Here is a demonstration using the corners of the image stored as control points (gcps):
import gdal
from math import sqrt

ds = gdal.Open(fpath)
gcps = ds.GetGCPs()

assert gcps[0].Id == 'UpperLeft'
p1 = gcps[0]
assert gcps[2].Id == 'LowerRight'
p2 = gcps[2]

y_skew = (
    sqrt((p1.GCPX - p2.GCPX)**2 + (p1.GCPY - p2.GCPY)**2) /
    (p1.GCPPixel - p2.GCPPixel)
)
x_skew = (
    sqrt((p1.GCPX - p2.GCPX)**2 + (p1.GCPY - p2.GCPY)**2) /
    (p1.GCPLine - p2.GCPLine)
)

x_res = (p2.GCPX - p1.GCPX) / ds.RasterXSize
y_res = (p2.GCPY - p1.GCPY) / ds.RasterYSize

ds.SetGeoTransform([
    p1.GCPX,
    x_res,
    x_skew,
    p1.GCPY,
    y_skew,
    y_res,
])

The math behind Apple's Speak here example

I have a question regarding the math that Apple uses in its Speak Here example.
A little background: I know that the average power and peak power returned by AVAudioRecorder and AVAudioPlayer are in dB. I also understand why the RMS power is in dB and that it needs to be converted into amplitude using pow(10, (0.05 * avgPower)).
My question: Apple uses this code to create its "Meter Table":
MeterTable::MeterTable(float inMinDecibels, size_t inTableSize, float inRoot)
    : mMinDecibels(inMinDecibels),
      mDecibelResolution(mMinDecibels / (inTableSize - 1)),
      mScaleFactor(1. / mDecibelResolution)
{
    if (inMinDecibels >= 0.)
    {
        printf("MeterTable inMinDecibels must be negative");
        return;
    }

    mTable = (float*)malloc(inTableSize*sizeof(float));

    double minAmp = DbToAmp(inMinDecibels);
    double ampRange = 1. - minAmp;
    double invAmpRange = 1. / ampRange;

    double rroot = 1. / inRoot;
    for (size_t i = 0; i < inTableSize; ++i) {
        double decibels = i * mDecibelResolution;
        double amp = DbToAmp(decibels);
        double adjAmp = (amp - minAmp) * invAmpRange;
        mTable[i] = pow(adjAmp, rroot);
    }
}
What do all these calculations do; or rather, what does each of these steps do? I think mDecibelResolution and mScaleFactor are used to spread an 80 dB range over 400 values (unless I'm mistaken). However, what is the significance of inRoot, ampRange, invAmpRange and adjAmp? And why is the i-th entry in the meter table mTable[i] = pow(adjAmp, rroot)?
Any help is much appreciated! :)
Thanks in advance and cheers!
It's been a month since I asked this question, and thanks, Geebs, for your response! :)
So, this is related to a project that I've been working on, and the feature based on this was implemented about 2 days after asking the question. Clearly, I've slacked off on posting a closing response (sorry about that). I posted a comment on Jan 7 as well, but, circling back, it seems I had confused some variable names. >_< Thought I'd give a full, line-by-line answer to this question (with pictures). :)
So, here goes:
//mDecibelResolution is the "weight" factor of each of the values in the meterTable.
//Here, the table is of size 400, and we're looking at values 0 to 399.
//Thus, the "weight" factor of each value is minValue / 399.

MeterTable::MeterTable(float inMinDecibels, size_t inTableSize, float inRoot)
    : mMinDecibels(inMinDecibels),
      mDecibelResolution(mMinDecibels / (inTableSize - 1)),
      mScaleFactor(1. / mDecibelResolution)
{
    if (inMinDecibels >= 0.)
    {
        printf("MeterTable inMinDecibels must be negative");
        return;
    }

    //Allocate a table to store the 400 values
    mTable = (float*)malloc(inTableSize*sizeof(float));

    //Remember, "dB" is a logarithmic scale.
    //If we have a range of -160dB to 0dB, -80dB is NOT 50% power!!!
    //We need to convert it to a linear scale. Thus, we do pow(10, (0.05 * dbValue)), as stated in my question.
    double minAmp = DbToAmp(inMinDecibels);

    //For the next couple of steps, you need to know linear interpolation.
    //Again, remember that all calculations are on a LINEAR scale.
    //Attached is an image of the basic linear interpolation formula, and some simple equation solving.

    //As per the image, and the following line, (y1 - y0) is the ampRange -
    //where y1 = maxAmp and y0 = minAmp.
    //In this case, maxAmp = 1 amp, as our maxDB is 0dB - FYI: 0dB = 1 amp.
    //Thus, ampRange = (maxAmp - minAmp) = 1. - minAmp
    double ampRange = 1. - minAmp;

    //As you can see, invAmpRange is the extreme right-hand-side fraction in our image's "Step 3"
    double invAmpRange = 1. / ampRange;

    //Now, if we were looking for different values of x0, x1, y0 or y1, simply substitute them in that equation and you're good to go. :)
    //The only reason we were able to get rid of x0 was because our minInterpolatedValue was 0.
    //I'll come to this later.

    double rroot = 1. / inRoot;
    for (size_t i = 0; i < inTableSize; ++i) {
        //Thus, for each entry in the table, multiply that entry with its "weight" factor.
        double decibels = i * mDecibelResolution;

        //Convert the "weighted" value to amplitude using pow(10, (0.05 * decibelValue));
        double amp = DbToAmp(decibels);

        //This is linear interpolation - based on our image, this is the same as "Step 3" of the image.
        double adjAmp = (amp - minAmp) * invAmpRange;

        //This is where inRoot and rroot come into the picture.
        //Linear interpolation gives you a "straight line" between 2 end-points.
        //rroot = 0.5
        //If I raise a variable, say myValue, to the power 0.5, that is essentially taking the square root of myValue.
        //So, instead of getting a "straight line" response, by storing the square root of the value,
        //we get a curved response that is similar to the one drawn in the image (note: not to scale).
        mTable[i] = pow(adjAmp, rroot);
    }
}
Response curve image (omitted here): as you can see, the "Linear curve" is not exactly a curve. >_<
Hope this helps the community in some way. :)
No expert, but based on physics and math:
Assume the maximum amplitude is 1 and the minimum is 0.0001 [corresponding to -80 dB, which is what the minimum dB value is set to in the Apple example: #define kMinDBvalue -80.0 in AQLevelMeter.h].
minAmp is the minimum amplitude = 0.0001 for this example
Now, all that is being done is that the amplitudes at multiples of the decibel resolution are rescaled against the minimum amplitude:
adjusted amplitude = (amp - minAmp) / (1 - minAmp)
This makes the range of the adjusted amplitude 0 to 1 instead of 0.0001 to 1 (if that was desired).
inRoot is set to 2 here, so rroot = 1/2, and raising to the power 1/2 is a square root. From Apple's file:
// inRoot - this controls the curvature of the response. 2.0 is square root, 3.0 is cube root. But inRoot doesn't have to be integer valued, it could be 1.8 or 2.5, etc.
This essentially gives you a response between 0 and 1 again, with a curvature that varies based on the value you set for inRoot.
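To make the numbers concrete, here is a small Python sketch of the same table construction (a re-implementation for illustration only, not Apple's code; the table size of 400 and the -80 dB minimum follow the values discussed above):

def db_to_amp(db):
    # Convert decibels to linear amplitude: 0 dB -> 1.0, -80 dB -> 0.0001.
    return 10.0 ** (0.05 * db)

def build_meter_table(min_db=-80.0, table_size=400, in_root=2.0):
    # Re-implementation for illustration (not Apple's code).
    resolution = min_db / (table_size - 1)   # dB "weight" of each table slot
    min_amp = db_to_amp(min_db)              # 0.0001 for -80 dB
    amp_range = 1.0 - min_amp
    rroot = 1.0 / in_root                    # 0.5 -> square-root response curve
    table = []
    for i in range(table_size):
        amp = db_to_amp(i * resolution)      # linear amplitude for this slot
        adj = (amp - min_amp) / amp_range    # rescale [min_amp, 1] onto [0, 1]
        table.append(adj ** rroot)           # bend the straight line into a curve
    return table

table = build_meter_table()
print(table[0], table[200], table[399])   # 1.0 at the loud end, shrinking toward 0.0 at -80 dB

The scale factor from the constructor (1 / mDecibelResolution) is presumably what maps a dB reading back onto a table index, so that 0 dB lands in slot 0 and inMinDecibels in the last slot.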