GPS conversion of Time object - astropy

I am trying to figure out why, when a Time object is converted to GPS time, it becomes an array rather than remaining a Time object. Example:
from astropy.time import Time
times = Time([56701, 56702], format="mjd", scale="tdb")
times.gps
Out: array([ 1.07628475e+09, 1.07637115e+09])
While conversion to other times gives:
times.utc
Out: <Time object: scale='utc' format='mjd' value=[ 56700.9992224 56701.9992224]>
Which I believe is the intended behaviour.

GPS is defined in astropy as a time format, not a scale. There is some ambiguity here, as discussed in #1879. In the end we decided to keep GPS as just a format, in which case the output of times.gps as a numpy array is the expected, correct behavior.
There is a pull request #2091 which clarifies the situation with GPS time.

Related

In MediaPipe, is it possible to see augmented landmarks rendered in real time?

So I am using the MediaPipe Holistic solution to extract keypoints from the body, hands and face, and I am using the data from this extraction for my calculations just fine. The problem is, I want to see whether my data augmentation works, but I am unable to see it in real time. An example of how the keypoints are extracted:
lh_arr = np.array([[result.x, result.y, result.z] for result in results.left_hand_landmarks.landmark]).flatten()
If I then do, let's say, lh_arr[10:15] * 2, I can't use this new data in the draw_landmarks function, as lh_arr is not of class 'mediapipe.python.solution_base.SolutionOutputs'. Is there a way to get draw_landmarks() to use an np array instead, or can I convert the np array back into the correct format? I have tried to get the flattened array back into a dictionary with the same format as results, but it did not work. Nor can I augment the results directly, as they are unsupported operand types.

Grouping xarray daily data into monthly means

I am hoping to plot a graph representing monthly temperature from 1981-01-01 to 2016-12-31.
I would like the months "Jan Feb Mar Apr May...Dec" on the x-axis and the temperature record as the y-axis as my plan is to compare monthly temperature record of 1981 - 1999 with 2000 - 2016.
I have read in the data no problem.
temp1 = xr.open_dataarray('temp1981-1999.nc')
temp2 = xr.open_dataarray('temp2000-2016.nc')
and have got rid of the lat and lon dimensions
temp1mean = temp1.mean(dim=['latitude','longitude'])
temp2mean = temp2.mean(dim=['latitude','longitude'])
I tried to convert it into a dataframe so that I could carry out the next step, such as averaging the months using groupby:
temp1.cftime_range(start=None, end=None, periods=None, freq='M', normalize=False, name=None, closed=None, calendar='standard')
                   t2m
time
1981-01-01  276.033295
1981-02-01  278.882935
1981-03-01  282.905579
1981-04-01  289.908936
1981-05-01  294.862457
...                ...
1999-08-01  295.841553
1999-09-01  294.598053
1999-10-01  289.514771
1999-11-01  283.360687
1999-12-01  278.854431
monthly = temp1mean.groupby(temp1mean.index.month).mean()
However, I got the following error:
"'DataArray' object has no attribute 'index'"
Therefore, I am wondering if there's any way to group all the monthly means and create a graph as follows.
In addition to the main question, I would greatly appreciate if you could also suggest ways to convert the unit kelvin into celsius when plotting the graph.
As I have tried the command
celsius = temp1mean.attrs['units'] = 'kelvin'
but the output is merely
"'air_temperature"
I greatly appreciate any suggestions you may have for plotting this graph! Thank you so much, and if you need any further information please do not hesitate to ask; I will reply as soon as possible.
Computing monthly means
The xarray docs have a helpful section on using the datetime accessor on any datetime dimensions:
Similar to pandas, the components of datetime objects contained in a given DataArray can be quickly computed using a special .dt accessor.
...
The .dt accessor works on both coordinate dimensions as well as multi-dimensional data.
xarray also supports a notion of “virtual” or “derived” coordinates for datetime components implemented by pandas, including “year”, “month”, “day”, “hour”, “minute”, “second”, “dayofyear”, “week”, “dayofweek”, “weekday” and “quarter”
In your case, you need to use the name of the datetime coordinate (whatever it is named) along with the .dt.month reference in your groupby. If your datetime coordinate is named "time", the groupby operation would be:
monthly_means = temp1mean.groupby(temp1mean.time.dt.month).mean()
or, using the string shorthand:
monthly_means = temp1mean.groupby('time.month').mean()
Units in xarray
As for units, you should know that xarray does not interpret or use attributes or metadata in any way, with the exception of plotting and display.
The following assignment:
temp1mean.attrs['units'] = 'kelvin'
simply assigns the string "kelvin" to the user-defined attribute "units" - nothing else. This may show up as the data's units in plots, but that doesn't mean the data isn't in Fahrenheit or dollars or m/s. It's just a string you put there.
If the data is in fact in kelvin, the best way to convert it to Celsius that I know of is temp1mean - 273.15 :)
If you do want to work with units explicitly, check out the pint-xarray extension project. It's currently in early stages and is experimental, but it does what I think you're looking for.
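Putting the groupby and the Celsius conversion together on a small synthetic DataArray (the coordinate name "time", variable name "t2m", and the values are all illustrative):

```python
import numpy as np
import pandas as pd
import xarray as xr

# Synthetic daily 2 m temperature in kelvin over two years
time = pd.date_range("1981-01-01", "1982-12-31", freq="D")
temp = xr.DataArray(
    280.0 + 10.0 * np.sin(2 * np.pi * time.dayofyear / 365.0),
    coords={"time": time}, dims="time", name="t2m",
)

# Convert to Celsius first, then average all Januaries together, etc.
monthly_means = (temp - 273.15).groupby("time.month").mean()
print(monthly_means["month"].values)  # months 1 through 12
```

The result has a new "month" dimension of length 12, which is exactly what you want on the x-axis; calling monthly_means.plot() will label it accordingly.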

Issues with Decomposing Trend, Seasonal, and Residual Time Series Elements

I am quite a newbie to Time Series Analysis and this might be a stupid question.
I am trying to generate the trend, seasonal, and residual time series elements; however, my timestamp index actually consists of strings (let's say 'window1', 'window2', 'window3'). Now, when I try to apply seasonal_decompose(data, model='multiplicative'), it returns the error "'Index' object has no attribute 'inferred_freq'", which is pretty understandable.
However, how do I get around this issue while keeping strings as the time series index?
Basically, here you need to specify the freq parameter.
Suppose you have the following dataset:
import pandas as pd

s = pd.Series([102, 200, 322, 420], index=['window1', 'window2', 'window3', 'window4'])
s
>>>window1 102
window2 200
window3 322
window4 420
dtype: int64
Now specify the freq parameter; in this case I used a value of 1. (Note that freq has been renamed period in recent statsmodels versions.)
import matplotlib.pyplot as plt
import statsmodels.api as sm

plt.style.use('default')
plt.figure(figsize=(16, 8))
sm.tsa.seasonal_decompose(s.values, period=1).plot()
result = sm.tsa.stattools.adfuller(s, maxlag=1)
plt.show()
I am not allowed to post images, but I hope this code will solve your problem. Also, the default maxlag gave an error for my dataset, therefore I used maxlag=1. If you are not sure about its value, use the default value for maxlag.

How to get the time crossing from amplitude of a signal by LabVIEW

I am a LabVIEW new starter.
Here is my problem: I am working on a data-processing LabVIEW program. I use the Threshold Detector VI to find the signals that cross the threshold, and I have the indices of those crossings. But the indices refer to positions in the signal's amplitude array, whereas what I want is the times at which the crossings occur.
What can I do?
Thank you very much!
You can use Get Waveform Time Array to get an array of timestamps, then use your threshold indices to select from the timestamp array.

How to plot a Pearson correlation given a time series?

I am using the code in this website http://blog.chrislowis.co.uk/2008/11/24/ruby-gsl-pearson.html to implement a Pearson Correlation given two time series data like so:
require 'gsl'
pearson_correlation = GSL::Stats::correlation(
  GSL::Vector.alloc(first_metrics), GSL::Vector.alloc(second_metrics)
)
)
This returns a number such as -0.2352461593569471.
I'm currently using the Highcharts library and am feeding it two sets of time series data. Given that I have a finite time series for both sets, can I do something with this number (-0.2352461593569471) to create a third time series showing the slope of this curve? If anyone can point me in the right direction I'd really appreciate it!
No, correlation doesn't tell you anything about the slope of the line of best fit. It just tells you approximately how much of the variability in one variable (or one time series, in this case) can be explained by the other. There is a reasonably good description here: http://www.graphpad.com/support/faqid/1141/.
How you deal with the data in your specific case is highly dependent on what you're trying to achieve. Are you trying to show that variable X causes variable Y? If so, you could start by dropping the time-series-ness, and just treat the data as paired values, and use linear regression. If you're trying to find a model of how X and Y vary together over time, you could look at multivariate linear regression (I'm not very familiar with this, though).
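To see concretely why the correlation alone cannot give you a slope, here is a quick sketch (in Python/NumPy rather than Ruby, with synthetic data): the least-squares slope equals r scaled by the ratio of the two standard deviations, so r on its own is missing the scale information.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=200)
y = 0.5 * x + rng.normal(scale=0.5, size=200)   # true slope ~0.5

r = np.corrcoef(x, y)[0, 1]             # Pearson correlation (dimensionless)
slope, intercept = np.polyfit(x, y, 1)  # least-squares line of best fit

# slope = r * std(y) / std(x): the correlation only becomes a slope
# once the spread (units) of the data is brought back in
print(np.isclose(slope, r * y.std() / x.std()))  # True
```

In other words, rescaling y by any constant changes the slope but leaves r untouched, which is why the same r value is compatible with infinitely many slopes.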