ECEF from Azimuth, Elevation, Range and Observer Lat,Lon,Alt - coordinate-transformation

I'm trying to write a basic Python script that will track a given satellite, defined by TLEs, from a given location. I'm not an astro/orbital person but am trying to become smarter on it.
I am running into a problem when I try to convert the azimuth, elevation, and range values to an ECEF position. I'm using PyEphem to get the observation values and sgp4 to get the true location for verification. I'm also using the website http://www.n2yo.com/?s=25544 to verify the values.
I'm getting the observed azimuth, elevation and range with:
def get_ob(epoch, sv, obsLoc):
    site = ephem.Observer()
    site.lon = str(obsLoc.lat)   # +E -104.77 here
    site.lat = str(obsLoc.lon)   # +N 38.95 here
    site.elevation = obsLoc.alt  # meters, 0 here
    #epoch = time.time()
    site.date = datetime.datetime.utcfromtimestamp(epoch)

    sat = ephem.readtle(sv.name, sv.tle1, sv.tle2)
    sat.compute(site)

    az = degrees(sat.az)
    el = degrees(sat.alt)
    # range in m
    range = sat.range
    sat_lat = degrees(sat.sublat)
    sat_long = degrees(sat.sublong)
    # elevation of sat in m
    sat_elev = sat.elevation

    x, y, z = aer2ecef(az, el, range, 38.95, -104.77, 80 / 1000)
The reported azimuth, elevation and range match the website. I'm converting to ECEF positions with:
def aer2ecef(azimuthDeg, elevationDeg, slantRange, obs_lat, obs_long, obs_alt):
    # site ECEF in meters
    sitex, sitey, sitez = llh2ecef(obs_lat, obs_long, obs_alt)

    # some needed calculations
    slat = sin(radians(obs_lat))
    slon = sin(radians(obs_long))
    clat = cos(radians(obs_lat))
    clon = cos(radians(obs_long))

    azRad = radians(azimuthDeg)
    elRad = radians(elevationDeg)

    # az, el, range to SEZ conversion
    south  = -slantRange * cos(elRad) * cos(azRad)
    east   =  slantRange * cos(elRad) * sin(azRad)
    zenith =  slantRange * sin(elRad)

    # rotate SEZ into ECEF, then translate by the site position
    x = ( slat * clon * south) + (-slon * east) + (clat * clon * zenith) + sitex
    y = ( slat * slon * south) + ( clon * east) + (clat * slon * zenith) + sitey
    z = (-clat * south) + ( slat * zenith) + sitez

    return x, y, z
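(llh2ecef isn't shown in the question; for reference, here is a minimal sketch of what it's assumed to do, using the standard WGS84 geodetic-to-ECEF conversion:)

from math import sin, cos, sqrt, radians

def llh2ecef(lat_deg, lon_deg, alt_m):
    # assumed WGS84 implementation of the helper called above
    a = 6378137.0            # semi-major axis, meters
    f = 1 / 298.257223563    # flattening
    e2 = f * (2 - f)         # first eccentricity squared

    lat = radians(lat_deg)
    lon = radians(lon_deg)

    # radius of curvature in the prime vertical
    N = a / sqrt(1 - e2 * sin(lat) ** 2)

    x = (N + alt_m) * cos(lat) * cos(lon)
    y = (N + alt_m) * cos(lat) * sin(lon)
    z = (N * (1 - e2) + alt_m) * sin(lat)
    return x, y, z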
When I plot that, though, the position is way off (wrong side of the globe). The position I get from the website and from sgp4 match, so I believe those to be correct.
I'm not sure if the error is in my conversion method or if I'm using the wrong data for the conversion. I found the method in an answer here: Get ECEF XYZ given starting coordinates, range, azimuth, and elevation
Any advice or suggestions for where I'm going off would be much appreciated. Below are test inputs/outputs:
The satellites I'm testing with are the ISS and Direct10 (one fixed, one moving, both with internet tracking available for verification):
0 Direct10
1 31862U 07032A 13099.15996183 -.00000126 00000-0 10000-3 0 1194
2 31862 000.0489 046.9646 0000388 001.7833 103.5813 01.00271667 21104
0 ISS
1 25544U 98067A 13112.50724749 .00016717 00000-0 10270-3 0 9148
2 25544 51.6465 24.5919 0009906 171.1474 188.9854 15.52429950 26067
Observer site lla:
[38.95 -104.77 0.0]
results:
sv: ISS ephem observed response(km) # epoch: 1365630559.000000 : [344.067992722211, -72.38297754053431, 12587.123][degrees(sat.az), degrees(sat.alt), sat.range]
sv: ISS ephem reported llh location(km) # epoch: 1365630559.000000 : [-41.678271938092195, -129.16682754513502, 421.06290625][degrees(sat.sublat), degrees(sat.sublong), sat.elevation]
sv: ISS ephem calculated xyz location(km) # epoch: 1365630559.000000 : [688.24385373837845, 6712.2004971137103, -704.83633267710866][aer2ecef(az,el,range,obsLoc.lat,obsLoc.lon,obsLoc.alt)]
sv: ISS ephem llh from calc xyz location(km) # epoch: 1365630559.000000 : [-6.001014287867631, 84.1455657632957, 12587.123][ecef2llh()]
sv: ISS ephem xyz from reported llh location(km) # epoch: 1365630559.000000 :[-3211.7910504146325, -3942.7032969856118, -4498.9656030253745][llh2ecef(lat,long,elev)]
sv: ISS sgp4 ecef position(m) # epoch: 1365630559.000000 : [-3207667.3380003194, -3936704.823960199, -4521293.5388663234]
sv: ISS sgp4 ecef2llh(m) # epoch: 1365630559.000000 : [-41.68067424524357, -129.17349987675482, 6792812.8704163525]
sv: Direct10 ephem observed response(km) # epoch: 1365630559.000000 : [320.8276456938389, -19.703680198781303, 43887.572][degrees(sat.az), degrees(sat.alt), sat.range]
sv: Direct10 ephem reported llh location(km) # epoch: 1365630559.000000 : [0.004647324660923812, -102.8070784813048, 35784.688][degrees(sat.sublat), degrees(sat.sublong), sat.elevation]
sv: Direct10 ephem calculated xyz location(km) # epoch: 1365630559.000000 : [-18435.237655222769, 32449.238763035213, 19596.893001978762][aer2ecef(az,el,range,obsLoc.lat,obsLoc.lon,obsLoc.alt)]
sv: Direct10 ephem llh from calc xyz location(km) # epoch: 1365630559.000000 : [27.727834453026748, 119.60200825103102, 43887.572][ecef2llh()]
sv: Direct10 ephem xyz from reported llh location(km) # epoch: 1365630559.000000 :[-9346.1899009219123, -41113.897098582587, 3.4164105611003754][llh2ecef(lat,long,elev)]
sv: Direct10 sgp4 ecef position(m) # epoch: 1365630559.000000 : [-9348605.9260040354, -41113193.982686974, -14060.29781505302]
sv: Direct10 sgp4 ecef2llh(m) # epoch: 1365630559.000000 : [-0.019106864351793953, -102.81049179145006, 42156299.077687651]

I feel really dumb, but I found the issue...
I was transposing the latitude and longitude from the site when feeding the PyEphem model (look at the site.lon/site.lat lines in get_ob above). The conversion works correctly.
Let that be a lesson, kids: USE GOOD VARIABLE NAMES. Don't be lazy like me and lose time hunting for a nonexistent math bug...
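For clarity, the corrected observer setup (the only change is un-swapping the two assignments in get_ob):

site = ephem.Observer()
site.lon = str(obsLoc.lon)   # +E -104.77 here
site.lat = str(obsLoc.lat)   # +N 38.95 here
site.elevation = obsLoc.alt  # meters, 0 here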

Related

Transfer learning using VGG in PyTorch

I am using vgg16 for image classification. I want to test my transferred model with the following code:
classes = ['A', 'B', 'C']
len(classes)   # 3
len(test_data) # 171
batch_size = 10

# Testing
test_loss = 0.0
class_correct = list(0. for i in range(len(classes)))
class_total = list(0. for i in range(len(classes)))

vgg16.eval()
for data, target in test_loader:
    output = vgg16(data)
    loss = criterion(output, target)
    test_loss += loss.item() * data.size(0)
    _, pred = torch.max(output, 1)
    correct_tensor = pred.eq(target.data.view_as(pred))
    correct = np.squeeze(correct_tensor.numpy())
    for i in range(batch_size):
        label = target.data[i]
        class_correct[label] += correct[i].item()
        class_total[label] += 1

test_loss = test_loss / len(test_loader.dataset)
print('Test Loss: {:.6f}\n'.format(test_loss))

for i in range(len(classes)):
    if class_total[i] > 0:
        print('Test Accuracy of %5s: %2d%% (%2d/%2d)' % (
            classes[i], 100 * class_correct[i] / class_total[i],
            np.sum(class_correct[i]), np.sum(class_total[i])))
    else:
        print('Test Accuracy of %5s: N/A (no training examples)' % (classes[i]))

print('\nTest Accuracy (Overall): %2d%% (%2d/%2d)' % (
    100. * np.sum(class_correct) / np.sum(class_total),
    np.sum(class_correct), np.sum(class_total)))
I receive the following error:
15 for i in range(batch_size):
16 label = target.data[i]
---> 17 class_correct[label] += correct[i].item()
18 class_total[label] += 1
19
IndexError: too many indices for array
I do not know why I am getting this error or how I can solve it. I would be grateful if you could help me.
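A likely cause, offered as a hedged note rather than a confirmed diagnosis: with 171 test images and batch_size = 10, the final batch holds a single sample, so np.squeeze collapses correct to a 0-d array and correct[i] raises "too many indices for array" (the fixed range(batch_size) would also overrun that 1-sample batch). A sketch of a guard against both problems:

correct = np.atleast_1d(np.squeeze(correct_tensor.numpy()))  # keep at least 1-d
for i in range(data.size(0)):  # actual batch size, not the nominal one
    label = target.data[i]
    class_correct[label] += correct[i].item()
    class_total[label] += 1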

How to obtain 2D Cutout of an image from a SkyCoord position?

I am following the example from the Astropy docs for 2D Cutout.
The header of my FITS file:
SIMPLE = T / file does conform to FITS standard
BITPIX = -32 / number of bits per data pixel
NAXIS = 3 / number of data axes
NAXIS1 = 512 / length of data axis 1
NAXIS2 = 512 / length of data axis 2
NAXIS3 = 3 / length of data axis 3
EXTEND = T / FITS dataset may contain extensions
COMMENT FITS (Flexible Image Transport System) format is defined in 'Astronomy
COMMENT and Astrophysics', volume 376, page 359; bibcode: 2001A&A...376..359H
SURVEY = 'DECaLS '
VERSION = 'DR8-south'
IMAGETYP= 'image '
BANDS = 'grz '
BAND0 = 'g '
BAND1 = 'r '
BAND2 = 'z '
CTYPE1 = 'RA---TAN' / TANgent plane
CTYPE2 = 'DEC--TAN' / TANgent plane
CRVAL1 = 186.11382 / Reference RA
CRVAL2 = 0.15285422 / Reference Dec
CRPIX1 = 256.5 / Reference x
CRPIX2 = 256.5 / Reference y
CD1_1 = -7.27777777777778E-05 / CD matrix
CD1_2 = 0. / CD matrix
CD2_1 = 0. / CD matrix
CD2_2 = 7.27777777777778E-05 / CD matrix
IMAGEW = 512. / Image width
IMAGEH = 512. / Image height
So far, what I have tried:
import astropy.units as u
from astropy.coordinates import SkyCoord
from astropy.wcs import WCS
from astropy.nddata import Cutout2D

position = SkyCoord(hdu[0].header['CRVAL1'] * u.deg, hdu[0].header['CRVAL2'] * u.deg)
size = 200 * u.pixel
wcs1 = WCS(hdu[0].header)
cutout = Cutout2D(hdu[0].data[0], position, size, wcs=wcs1)
I run into an error on the last line.
Error :
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-142-7cc21a13e941> in <module>
----> 1 cutout = Cutout2D(hdu[0].data[0], position ,size, wcs = wcs1 )
/Applications/anaconda3/lib/python3.7/site-packages/astropy/nddata/utils.py in __init__(self, data, position, size, wcs, mode, fill_value, copy)
714 if wcs is not None:
715 self.wcs = deepcopy(wcs)
--> 716 self.wcs.wcs.crpix -= self._origin_original_true
717 self.wcs.array_shape = self.data.shape
718 if wcs.sip is not None:
ValueError: operands could not be broadcast together with shapes (3,) (2,) (3,)
My guess is that it is because NAXIS = 3 in my file while the documentation assumes NAXIS = 2, though I am not sure that is the actual issue here. Can anybody help fix this error?
Since your WCS is 3D but you're taking a 2D cutout, you need to drop the third dimension. Try cutout = Cutout2D(hdu[0].data[0], position, size, wcs=wcs1.celestial), where .celestial is a convenience attribute that keeps only the celestial (RA/Dec) axes of the WCS.
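Putting it together, a minimal sketch (assuming hdu is the open HDU list from the question, and that the first plane along NAXIS3 is the g band per the header):

import astropy.units as u
from astropy.coordinates import SkyCoord
from astropy.wcs import WCS
from astropy.nddata import Cutout2D

wcs2d = WCS(hdu[0].header).celestial  # keep only the RA/Dec axes
position = SkyCoord(hdu[0].header['CRVAL1'] * u.deg,
                    hdu[0].header['CRVAL2'] * u.deg)
# cut a 200x200-pixel box out of the g-band plane using the 2D WCS
cutout = Cutout2D(hdu[0].data[0], position, 200 * u.pixel, wcs=wcs2d)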

Speeding up data pipeline with tf.data.dataset API

I am having trouble speeding up a training data pipeline built with tf.data.Dataset, and I think I am missing something here. Even with the dataset options for pre-loading data, generation is still slow.
I have a complex data pipeline, but I simplified it to the small example below. I tried fine-tuning num_parallel_calls, cycle_length, prefetch, etc., but I cannot seem to get smooth dataset generation. What am I missing? Any suggestions?
import tensorflow as tf
tf.enable_eager_execution()
from timeit import default_timer as timer

feature_count = 400
batch_size = 1024
look_back = 100
target_groups = 21

def random_data_generator(x=0):
    while True:
        x_data = tf.random.uniform(
            shape=(batch_size, look_back, feature_count),
            minval=-1.0,
            maxval=5,
            dtype=tf.dtypes.float32)
        Y_data = tf.random.uniform(
            shape=(batch_size, target_groups),
            minval=1,
            maxval=21,
            dtype=tf.dtypes.int32)
        yield x_data, Y_data

def get_simple_Dataset_generator():
    dataset = tf.data.Dataset.from_tensor_slices([0, 1, 2])
    dataset = dataset.interleave(
        lambda x: tf.data.Dataset.from_generator(
            random_data_generator,
            output_types=(tf.float32, tf.float32), args=(x,)),
        cycle_length=3,
        block_length=3,
        num_parallel_calls=tf.data.experimental.AUTOTUNE)
    #dataset = dataset.prefetch(2)
    while True:
        for x, Y in dataset:
            yield x, Y

def test_speed():
    generator = get_simple_Dataset_generator()
    print("Testing generator speed ")
    for i in range(1, 100):
        start_time = timer()
        next(generator)
        lap_time = timer() - start_time
        print("%s Time - %fsec " % (i, lap_time))

if __name__ == '__main__':
    test_speed()
I was hoping to see consistent generator speed, but it is still very erratic.
Output:
1 Time - 3.417578sec
2 Time - 1.257846sec
3 Time - 1.286210sec
4 Time - 0.000456sec
5 Time - 0.027772sec
6 Time - 0.058985sec
7 Time - 0.000416sec
8 Time - 0.026721sec
9 Time - 0.027316sec
10 Time - 0.777332sec
11 Time - 1.379266sec
12 Time - 1.172304sec
13 Time - 0.000365sec
14 Time - 0.026909sec
15 Time - 0.045409sec
16 Time - 0.000708sec
17 Time - 0.025682sec
18 Time - 0.027223sec
19 Time - 0.577131sec
20 Time - 1.220682sec
21 Time - 1.189601sec
22 Time - 0.000573sec
23 Time - 0.079531sec
24 Time - 0.624080sec
25 Time - 0.038932sec
I believe that eager execution has some kinks that are still being worked out. Changing your code to graph-based execution, I get about 0.06 seconds per iteration instead of the roughly 1-second times from the eager version.
Here is the code snippet:
import tensorflow as tf
from timeit import default_timer as timer

feature_count = 400
batch_size = 1024
look_back = 100
target_groups = 21

def random_data(x=0):
    x_data = tf.random.uniform(
        shape=(batch_size, look_back, feature_count),
        minval=-1.0,
        maxval=5,
        dtype=tf.dtypes.float32)
    Y_data = tf.random.uniform(
        shape=(batch_size, target_groups),
        minval=1,
        maxval=21,
        dtype=tf.dtypes.int32)
    return x_data, Y_data

def get_simple_Dataset():
    dataset = tf.data.Dataset.from_tensor_slices(tf.zeros(100))
    dataset = dataset.map(random_data)
    dataset = dataset.prefetch(10)
    return dataset

def test_speed():
    dataset = get_simple_Dataset()
    iterator = dataset.make_one_shot_iterator()
    fetch = iterator.get_next()
    with tf.Session() as sess:
        for i in range(1, 100):
            start_time = timer()
            sess.run(fetch)
            lap_time = timer() - start_time
            print("%s Time - %fsec " % (i, lap_time))

if __name__ == '__main__':
    test_speed()
1 Time - 0.108968sec
2 Time - 0.071986sec
3 Time - 0.068198sec
4 Time - 0.065433sec
5 Time - 0.066582sec
6 Time - 0.064175sec
7 Time - 0.067372sec
8 Time - 0.064265sec
9 Time - 0.065510sec
10 Time - 0.068043sec
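(As a side note, and an untested assumption relative to the 1.x answer above: on TF 2.x, where Session and make_one_shot_iterator are gone, the same idea would look roughly like this, with tf.data tuning the parallelism itself:)

import tensorflow as tf  # assumes TF 2.x; the answer above targets 1.x

def get_dataset():
    ds = tf.data.Dataset.from_tensor_slices(tf.zeros(100))
    # map in parallel and prefetch; AUTOTUNE picks the buffer sizes
    ds = ds.map(random_data, num_parallel_calls=tf.data.experimental.AUTOTUNE)
    return ds.prefetch(tf.data.experimental.AUTOTUNE)

for x, Y in get_dataset().take(5):
    print(x.shape, Y.shape)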

tensorflow object detection API: training is very slow

I am currently studying the Google TensorFlow object detection API. When I try to retrain the model with the Oxford-IIIT Pet dataset, the training process is very slow.
Here is what I found so far:
Most of the time only 2% of the GPU is utilized,
but CPU utilization is 60%, so it seems the GPU is not starved by the input; otherwise the CPU should be near 100% utilization.
I am trying to profile it with the TensorFlow profiler, but I am in a bit of a hurry now; any idea or suggestion would be helpful.
I found the problem. It's an issue with the input: my tfrecord file was corrupted somehow, so the input thread hangs up sometimes.
There are many reasons for this to happen, the most common being a problem with your record file. Some testing needs to be done before adding an image and its contour to the record file. For example:
First, check the image before sending it to the record:
def checkJPG(fn):
    with tf.Graph().as_default():
        try:
            image_contents = tf.read_file(fn)
            image = tf.image.decode_jpeg(image_contents, channels=3)
            init_op = tf.initialize_all_tables()
            with tf.Session() as sess:
                sess.run(init_op)
                tmp = sess.run(image)
        except:
            print("Corrupted file: ", fn)
            return False
    return True
Also, check the height and width of the contour, and that no contour crosses the image borders:
boxW = xmax - xmin
boxH = ymax - ymin
if boxW == 0 or boxH == 0:
    print("...ONE CONTOUR SKIPPED... (boxW | boxH) = 0")
    continue
if boxW * boxH < 100:
    print("...ONE CONTOUR SKIPPED... (boxW*boxH) < 100")
    continue
if xmin / width <= 0 or xmax / width <= 0 or ymin / height <= 0 or ymax / height <= 0:
    print("...ONE CONTOUR SKIPPED... (x | y) <= 0")
    continue
if xmin / width >= 1 or xmax / width >= 1 or ymin / height >= 1 or ymax / height >= 1:
    print("...ONE CONTOUR SKIPPED... (x | y) >= 1")
    continue
Another reason is that there is too much data in the evaluation record file. It's better to add only 10 images to your evaluation record file and change the evaluation config like this:
eval_config {
  num_visualizations: 10
  num_examples: 10
  eval_interval_secs: 3000
  max_evals: 1
  use_moving_averages: false
}
As I can see, it is not utilizing the GPU for now.
Have you tried optimizing for GPU using the TensorFlow performance guide?
https://www.tensorflow.org/performance/performance_guide#optimizing_for_gpu

How does Tensorflow Batch Normalization work?

I'm using tensorflow batch normalization in my deep neural network successfully. I'm doing it the following way:
if apply_bn:
    with tf.variable_scope('bn'):
        beta = tf.Variable(tf.constant(0.0, shape=[out_size]), name='beta', trainable=True)
        gamma = tf.Variable(tf.constant(1.0, shape=[out_size]), name='gamma', trainable=True)
        batch_mean, batch_var = tf.nn.moments(z, [0], name='moments')
        ema = tf.train.ExponentialMovingAverage(decay=0.5)

        def mean_var_with_update():
            ema_apply_op = ema.apply([batch_mean, batch_var])
            with tf.control_dependencies([ema_apply_op]):
                return tf.identity(batch_mean), tf.identity(batch_var)

        mean, var = tf.cond(self.phase_train,
                            mean_var_with_update,
                            lambda: (ema.average(batch_mean), ema.average(batch_var)))

        self.z_prebn.append(z)
        z = tf.nn.batch_normalization(z, mean, var, beta, gamma, 1e-3)
        self.z.append(z)
        self.bn.append((mean, var, beta, gamma))
And it works fine for both the training and testing phases.
However, I encounter problems when I try to use the computed neural network parameters in another project, where I need to compute all the matrix multiplications and such myself. The problem is that I can't reproduce the behavior of the tf.nn.batch_normalization function:
feed_dict = {
    self.tf_x: np.array([range(self.x_cnt)]) / 100,
    self.keep_prob: 1,
    self.phase_train: False
}
for i in range(len(self.z)):
    # for each layer, print element [0][1] of each array
    print(self.sess.run([
        self.z_prebn[i][0][1],  # before bn
        self.bn[i][0][1],       # mean
        self.bn[i][1][1],       # var
        self.bn[i][2][1],       # offset
        self.bn[i][3][1],       # scale
        self.z[i][0][1],        # after bn
    ], feed_dict=feed_dict))
# prints
# [-0.077417567, -0.089603029, 0.000436493, -0.016652612, 1.0055743, 0.30664611]
According to the formula on the page https://www.tensorflow.org/versions/r1.2/api_docs/python/tf/nn/batch_normalization:
bn = scale * (x - mean) / (sqrt(var) + 1e-3) + offset
But as we can see,
1.0055743 * (-0.077417567 - (-0.089603029)) / (0.000436493**0.5 + 1e-3) + (-0.016652612)
= 0.543057
which differs from the value 0.30664611 computed by TensorFlow itself.
So what am I doing wrong here, and why can't I just calculate the batch-normalized value myself?
Thanks in advance!
The formula used is slightly different. Instead of:
bn = scale * (x - mean) / (sqrt(var) + 1e-3) + offset
it should be:
bn = scale * (x - mean) / sqrt(var + 1e-3) + offset
The variance_epsilon argument is supposed to be added to the variance, not to sigma, the square root of the variance.
After the correction, the formula yields the correct value:
1.0055743 * (-0.077417567 - -0.089603029)/((0.000436493 + 1e-3)**0.5) + -0.016652612
# 0.30664642276945747
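A quick sanity check in plain Python, using the values printed in the question:

from math import sqrt

x, mean, var = -0.077417567, -0.089603029, 0.000436493
offset, scale = -0.016652612, 1.0055743
eps = 1e-3

wrong = scale * (x - mean) / (sqrt(var) + eps) + offset  # epsilon added to sigma
right = scale * (x - mean) / sqrt(var + eps) + offset    # epsilon added to the variance

print(wrong)  # ~0.543057  (does not match TF)
print(right)  # ~0.306646  (matches TF's 0.30664611)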