Convert variable from datetime.timedelta to numpy.timedelta64 - numpy

How to convert a variable from Python's datetime.timedelta to numpy.timedelta64?

np.array([datetime.timedelta(1)], dtype="timedelta64[ms]")[0]  # numpy.timedelta64(86400000,'ms')
The NumPy datetime documentation explains many things about datetime64 and timedelta64.
It is also relevant for converting datetime.datetime to datetime64.

You can do this without creating an np.array by mapping the fundamental integer representations in datetime.timedelta (days, seconds, and microseconds) to corresponding np.timedelta64 representations, and then summing.
The downside of this approach is that, while you will get the same delta duration, you will not always get the same units. The upside of this approach is that, if you are converting single values rather than large arrays of values, it will generally be faster than creating an array.
You can also just call np.timedelta64() with a datetime.timedelta, but that approach always returns an np.timedelta64 with microsecond units.
import datetime
import operator
from functools import reduce

import numpy as np

# datetime.timedelta attributes and their matching numpy unit codes.
TIME_DELTA_ATTR_MAP = (
    ('days', 'D'),
    ('seconds', 's'),
    ('microseconds', 'us'),
)

def to_timedelta64(value: datetime.timedelta) -> np.timedelta64:
    # Sum the non-zero components; numpy promotes the result to the finest
    # unit involved. The explicit zero start value keeps this working for
    # zero and negative timedeltas as well.
    return reduce(operator.add,
                  (np.timedelta64(getattr(value, attr), code)
                   for attr, code in TIME_DELTA_ATTR_MAP
                   if getattr(value, attr) != 0),
                  np.timedelta64(0, 's'))
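A quick usage sketch (the values are illustrative), which also shows the plain np.timedelta64() constructor call mentioned above:
td = datetime.timedelta(days=1, seconds=30)
print(to_timedelta64(td))   # 86430 seconds
print(np.timedelta64(td))   # 86430000000 microseconds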

Related

How do you deal with datetime obj when applying ANN models?

How do you deal with datetime obj when applying ANN models? I have thought of writing a function which iterates through the column, but there has to be a cleaner way to do so, right?
dataset.info()
 #   Column      Non-Null Count  Dtype
---  ------      --------------  -----
 0   Unnamed: 0  299 non-null    int64
 1   ZIP         299 non-null    int64
 2   START_TIME  299 non-null    datetime64[ns]
from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
x = sc.fit_transform(x)
This raises:
TypeError: float() argument must be a string or a number, not 'Timestamp'
Other attempts fail similarly:
TypeError: float() argument must be a string or a number, not 'datetime.time' (in relation to a scatter plot)
could not convert string to float: '2022-03-16 11:55:00'
I would suggest doing the following steps:
Convert the string to a datetime.datetime object:
from datetime import datetime
t = datetime.strptime("2022-03-16 11:55:00","%Y-%m-%d %H:%M:%S")
Then extract the necessary components to pass as inputs to the network:
x1,x2,x3 = t.month, t.hour, t.minute
As an aside, I noticed you are directly scaling the time components. Rather, consider pre-processing that depends on the problem: for example, extract sine and cosine encodings of the time components rather than using or scaling them directly. Sine and cosine components preserve the cyclical distance between time points (e.g. hour 23 and hour 0 end up close together).
import numpy as np

# Scale the hour to one full period (2*pi over 24 hours) before taking sin/cos.
hour_cos = np.cos(2 * np.pi * t.hour / 24)
hour_sin = np.sin(2 * np.pi * t.hour / 24)
# Extract other periodic components (month, day of week, ...) as necessary for the problem.
For example, if you are looking at a weather variable, sine and cosine of hour and month are typically useful; if you are looking at sales, sine and cosine of day of month, month, and day of week are useful. A sketch applied to the DataFrame from the question follows below.
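Applied to the DataFrame in the question (the column name is taken from the info() output above; treat this as an illustrative sketch rather than a drop-in recipe):
import numpy as np
import pandas as pd

dataset['START_TIME'] = pd.to_datetime(dataset['START_TIME'])

hour = dataset['START_TIME'].dt.hour
month = dataset['START_TIME'].dt.month

# Encode each periodic component as a point on the unit circle.
dataset['hour_sin'] = np.sin(2 * np.pi * hour / 24)
dataset['hour_cos'] = np.cos(2 * np.pi * hour / 24)
dataset['month_sin'] = np.sin(2 * np.pi * (month - 1) / 12)
dataset['month_cos'] = np.cos(2 * np.pi * (month - 1) / 12)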
Update: from the comments I noticed you mentioned that you are predicting decibel levels. Assuming you are already factoring in spatial input variables, you should definitely try something like a sine/cosine transformation, assuming the events generating the sounds exhibit a periodic pattern. Again, this is an assumption and might not be completely true.
dataset['START_TIME'] = pd.to_datetime(dataset['START_TIME']).apply(lambda x: x.value)
Seems like a clean way of doing so, but I'm still open to alternatives.

How can I create a column containing only Nones but of a specific type in Pandas? [duplicate]

Is there a preferred way to keep the data type of a numpy array fixed as int (or int64 or whatever), while still having an element inside listed as numpy.NaN?
In particular, I am converting an in-house data structure to a Pandas DataFrame. In our structure, we have integer-type columns that still have NaN's (but the dtype of the column is int). It seems to recast everything as a float if we make this a DataFrame, but we'd really like to be int.
Thoughts?
Things tried:
I tried using the from_records() function under pandas.DataFrame, with coerce_float=False and this did not help. I also tried using NumPy masked arrays, with NaN fill_value, which also did not work. All of these caused the column data type to become a float.
NaN can't be stored in an integer array. This is a known limitation of pandas at the moment; I have been waiting for progress to be made with NA values in NumPy (similar to NAs in R), but it will be at least 6 months to a year before NumPy gets these features, it seems:
http://pandas.pydata.org/pandas-docs/stable/gotchas.html#support-for-integer-na
(This feature has since been added, beginning with pandas version 0.24. Note that it requires the extension dtype Int64 (capitalized), rather than the default dtype int64 (lowercase):
https://pandas.pydata.org/pandas-docs/version/0.24/whatsnew/v0.24.0.html#optional-integer-na-support
)
If performance is not the main issue, you can store strings instead.
df.col = df.col.dropna().apply(lambda x: str(int(x)))
Then you can mix them with NaN as much as you want. If you really want to have integers, depending on your application, you can use -1, or 0, or 1234567890, or some other dedicated value to represent NaN.
You can also temporarily duplicate the columns: one as you have, with floats; the other one experimental, with ints or strings. Then insert asserts in every reasonable place checking that the two stay in sync. After enough testing you can let go of the floats.
In case you are trying to convert a float vector (e.g. 1.143) to integer (1), and that vector has NAs, converting it to the new 'Int64' dtype will give you an error. To solve this, round the numbers first and then call .astype('Int64'):
s1 = pd.Series([1.434, 2.343, np.nan])

# without round() the next line raises an error:
# cannot safely cast non-equivalent float64 to int64
s1.astype('Int64')

# with round() it works
s1.round().astype('Int64')
0      1
1      2
2    NaN
dtype: Int64
My use case is a float series that I want to round to int; .round() alone still leaves a float dtype (1.0, 2.0, ...), so the .astype('Int64') is needed to get actual integers.
This is not a solution for all cases, but in mine (genomic coordinates) I've resorted to using 0 as NaN:
a3['MapInfo'] = a3['MapInfo'].fillna(0).astype(int)
This at least allows the proper 'native' column type to be used; operations like subtraction and comparison work as expected.
Pandas v0.24+
Functionality to support NaN in integer series will be available in v0.24 upwards. There's information on this in the v0.24 "What's New" section, and more details under Nullable Integer Data Type.
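A minimal illustration of the nullable integer dtype (values made up):
import pandas as pd

# Nullable integer series: the missing entry stays NA and the dtype stays Int64.
s = pd.Series([1, 2, None], dtype='Int64')
print(s.dtype)  # Int64
print(s)        # 0       1
                # 1       2
                # 2    <NA>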
Pandas v0.23 and earlier
In general, it's best to work with float series where possible, even when the series is upcast from int to float due to inclusion of NaN values. This enables vectorised NumPy-based calculations where, otherwise, Python-level loops would be required.
The docs do suggest: "One possibility is to use dtype=object arrays instead." For example:
s = pd.Series([1, 2, 3, np.nan])
print(s.astype(object))
0      1
1      2
2      3
3    NaN
dtype: object
For cosmetic reasons, e.g. output to a file, this may be preferable.
Pandas v0.23 and earlier: background
NaN is considered a float. The docs currently (as of v0.23) specify the reason why integer series are upcasted to float:
In the absence of high performance NA support being built into NumPy from the ground up, the primary casualty is the ability to represent NAs in integer arrays. This trade-off is made largely for memory and performance reasons, and also so that the resulting Series continues to be “numeric”.
The docs also provide rules for upcasting due to NaN inclusion:
Typeclass   Promotion dtype for storing NAs
floating    no change
object      no change
integer     cast to float64
boolean     cast to object
New for pandas v1.0+
You do not (and cannot) use numpy.nan any more for nullable integer columns.
Now you have pandas.NA.
Please read: https://pandas.pydata.org/pandas-docs/stable/user_guide/integer_na.html
IntegerArray is currently experimental. Its API or implementation may
change without warning.
Changed in version 1.0.0: Now uses pandas.NA as the missing value
rather than numpy.nan.
In Working with missing data, we saw that pandas primarily uses NaN to represent missing data. Because NaN is a float, this forces an array of integers with any missing values to become floating point. In some cases, this may not matter much. But if your integer column is, say, an identifier, casting to float can be problematic. Some integers cannot even be represented as floating point numbers.
If there are blanks in the text data, columns that would normally be integers will be cast to float64, because int64 dtype cannot handle nulls. This can cause an inconsistent schema if you are loading multiple files, some with blanks (which will end up as float64) and others without (which will end up as int64).
This code will attempt to convert any numeric column to Int64 (as opposed to int64), since Int64 can handle nulls:
import pandas as pd
import numpy as np

# show datatypes before transformation
mydf.dtypes

for c in mydf.select_dtypes(np.number).columns:
    try:
        mydf[c] = mydf[c].astype('Int64')
        print('casted {} as Int64'.format(c))
    except Exception:
        # broad on purpose: skip any column that cannot be cast cleanly
        print('could not cast {} to Int64'.format(c))

# show datatypes after transformation
mydf.dtypes
This is now possible, since pandas v0.24.0:
pandas 0.24.x release notes
Quote: "Pandas has gained the ability to hold integer dtypes with missing values."
I know the OP asked about NumPy or pandas only, but I think it is worth mentioning Polars as an alternative that supports the requested feature.
In Polars any missing values in an integer column are simply null values and the column remains an integer column.
See Polars - User Guide > Coming from Pandas for more info.
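A quick sketch of that behaviour (values made up):
import polars as pl

s = pl.Series("ints", [1, 2, None])

print(s.dtype)         # Int64 -- the column stays an integer column
print(s.null_count())  # 1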

Emulating fixed precision in python

For a university course in numerical analysis we are transitioning from Maple to a combination of Numpy and Sympy for various illustrations of the course material. This is because the students already learn Python the semester before.
One of the difficulties we have is in emulating fixed precision in Python. Maple allows the user to specify a decimal precision (say 10 or 20 digits) and from then on every calculation is made with that precision so you can see the effect of rounding errors. In Python we tried some ways to achieve this:
Sympy has a rounding function to a specified number of digits.
Mpmath supports custom precision.
This is however not what we're looking for. These options calculate the exact result and round the exact result to the specified number of digits. We are looking for a solution that does every intermediate calculation in the specified precision. Something that can show, for example, the rounding errors that can happen when dividing two very small numbers.
The best solution so far seems to be the custom data types in NumPy. Using float16, float32 and float64 we were able to at least give an indication of what could go wrong. The problem here is that we always need to use arrays of one element and that we are limited to these three data types.
Does anything more flexible exist for our purpose? Or is the very thing we're looking for hidden somewhere in the mpmath documentation? Of course there are workarounds by wrapping every element of a calculation in a rounding function but this obscures the code to the students.
You can use the decimal module. There are several ways to use it, for example via localcontext or getcontext.
Example with getcontext from documentation:
>>> from decimal import *
>>> getcontext().prec = 6
>>> Decimal(1) / Decimal(7)
Decimal('0.142857')
Example of localcontext usage:
>>> from decimal import Decimal, localcontext
>>> with localcontext() as ctx:
...     ctx.prec = 4
...     print(Decimal(1) / Decimal(3))
...
0.3333
To reduce typing you can abbreviate the constructor (example from the documentation):
>>> import decimal
>>> D = decimal.Decimal
>>> D('1.23') + D('3.45')
Decimal('4.68')
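To make the point that every intermediate operation is carried out (and rounded) at the context precision, a small illustration:
from decimal import Decimal, getcontext

getcontext().prec = 6

third = Decimal(1) / Decimal(3)   # already rounded to 6 significant digits
print(third)       # 0.333333
print(third * 3)   # 0.999999 -- the intermediate rounding error shows up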

What is the equivalent of numpy.allclose for structured numpy arrays?

Running numpy.allclose(a, b) throws TypeError: invalid type promotion on structured arrays. What would be the correct way of checking whether the contents of two structured arrays are almost equal?
np.allclose does an np.isclose followed by all(). isclose tests abs(x-y) against tolerances, with accommodations for np.nan and np.inf. So it is designed primarily to work with floats, and by extension ints.
The arrays have to work with np.isfinite(a), as well as with a - b and np.abs. In short, a.astype(float) should work on your arrays.
None of this works with the compound dtype of a structured array. You could, though, iterate over the fields of the array, and compare those with isclose (or allclose). But you will have to ensure that the two arrays have matching dtypes, and use some other test on fields that don't work with isclose (e.g. string fields).
So in the simple case
all([np.allclose(a[name], b[name]) for name in a.dtype.names])
should work.
If the fields of the arrays are all the same numeric dtype, you could view the arrays as 2d arrays, and do allclose on those. But usually structured arrays are used when the fields are a mix of string, int and float. And in the most general case, there are compound dtypes within dtypes, requiring some sort of recursive testing.
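For the usual mixed-dtype case (no nested compound dtypes), a field-wise helper along these lines is one way to go; the function name and the sample dtype are made up for illustration:
import numpy as np

def structured_allclose(a, b, **kwargs):
    # Numeric fields get the tolerance-based test, everything else exact equality.
    return all(
        np.allclose(a[name], b[name], **kwargs)
        if np.issubdtype(a.dtype[name], np.number)
        else np.array_equal(a[name], b[name])
        for name in a.dtype.names
    )

dt = np.dtype([('name', 'U10'), ('x', 'f8'), ('n', 'i4')])
a = np.array([('a', 1.0, 3), ('b', 2.0, 4)], dtype=dt)
b = np.array([('a', 1.0 + 1e-9, 3), ('b', 2.0, 4)], dtype=dt)

print(structured_allclose(a, b))  # True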
import numpy.lib.recfunctions as rf
has functions to help with complex structured array operations.
Assuming b is a scalar, you can just iterate over the fields of a:
all(np.allclose(a[field], b) for field in a.dtype.names)

Motivation behind numpy's datetime64 type?

I noticed recently that numpy includes a datetime64 data type beginning in numpy 1.7:
http://www.compsci.wm.edu/SciClone/documentation/software/math/NumPy/html1.7/reference/arrays.datetime.html
I am wondering what is the motivation behind including this as a separate type within the numpy package rather than using the builtin datetime.datetime provided by Python?
Some of the reasons I am interested in understanding this better include:
I want to know when it is appropriate to use datetime.datetime vs when to use numpy.datetime64
Since numpy includes no date type analogous to datetime.date, should I use numpy.datetime64 for dates when I need to interact with numpy.datetime64 objects? Or should I intermingle datetime.date and numpy.datetime64 in my code?
The reason is the same as why NumPy has its own numeric scalar types (e.g. np.int64 and np.float64) instead of just Python's int and float. These NumPy types get stored by value in an array, rather than as boxed references the way generic Python objects are. The latter takes far more memory, has allocation overhead, and is much less cache friendly to traverse.
I avoid intermingling datetime64 and Python's built-in datetime objects. The reason is that code written to work with a datetime.datetime will not work with a numpy.datetime64 scalar; for example, none of the methods or properties of a datetime.datetime are available on a numpy.datetime64 object.
To avoid the intermingling, what I tend to do is: when I am dealing with scalars, I use Python's datetime.datetime or datetime.date; when I am dealing with NumPy arrays, I use datetime64. This means that when I am extracting or iterating over single values from a NumPy datetime64 array, I convert them into datetime objects first, before letting them propagate into other parts of the codebase.
You can also read about the different units of datetime64, which let you use a datetime64 like a datetime.date or a datetime.datetime, here:
http://docs.scipy.org/doc/numpy-dev/reference/arrays.datetime.html#arrays-dtypes-dateunits
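A small illustration of that conversion step and of how the unit determines what you get back (the dates are made up):
import numpy as np

d = np.datetime64('2024-01-01', 'D')            # day unit
t = np.datetime64('2024-01-01T12:30:00', 's')   # second unit

# .item() (or .astype(object)) converts back to the stdlib types,
# which is handy when pulling single values out of a datetime64 array.
print(type(d.item()))  # <class 'datetime.date'>
print(type(t.item()))  # <class 'datetime.datetime'>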