I have my Postgres table set up as such:
CREATE TABLE "Friendships" (
id integer PRIMARY KEY,
"fromUserId" integer NOT NULL REFERENCES "Users"(id),
"toUserId" integer NOT NULL REFERENCES "Users"(id)
);
When a user fetches their friendships, I run: SELECT * FROM "Friendships" WHERE "fromUserId" = XXXX.
How do I modify the query so that additional data is added to the results (true/false) based on whether toUserId also added that user?
Example result:
[
{ id: 444, fromUserId: 1, toUserId: 22, addedBack: false },
{ id: 445, fromUserId: 1, toUserId: 67, addedBack: true },
{ id: 446, fromUserId: 1, toUserId: 599, addedBack: true },
{ id: 447, fromUserId: 1, toUserId: 733, addedBack: false },
]
With EXISTS:
select f.*,
exists (
select 0 from "Friendships"
where "toUserId" = f."fromUserId" and "fromUserId" = f."toUserId"
) addedBack
from "Friendships" f
where f."fromUserId" = 1
For this sample data:
INSERT INTO "Friendships"(id, "fromUserId", "toUserId") VALUES
(444, 1, 22), (445, 1, 67), (446, 1, 599), (447, 1, 733),
(448, 67, 1), (449, 599, 1);
Results:
> id | fromUserId | toUserId | addedback
> --: | ---------: | -------: | :--------
> 444 | 1 | 22 | f
> 445 | 1 | 67 | t
> 446 | 1 | 599 | t
> 447 | 1 | 733 | f
You can use a left outer join to check for the reciprocal row:
select
f.id,
f."fromUserId",
f."toUserId",
(r.id is not null) as "addedBack"
from "Friendships" f
left join "Friendships" r on f."toUserId" = r."fromUserId"
and r."toUserId" = f."fromUserId"
where f."fromUserId" = XXXX
Within the FROM clause I want to use a subset built from 2 tables with a RIGHT JOIN (I want that subset to contain all the rows of ITV2_VEHICULOS whose ID is not in ITV2_HIST_VEHICULOS), so that the SELECT "takes" its data from there and the WHERE can then filter it.
My query:
SELECT
*
FROM
ITV2_INSPECCIONES I,
ITV2_HORAS_INSPECCION HI_FIN,
ITV2_INSPECCIONES I_SIG,
ITV2_HORAS_INSPECCION HI_SIG_INI,
ITV2_HIST_VEHICULOS VH,
ITV2_CATEGORIAS_VEHICULO CAT,
ITV2_CLASIF_VEH_CONS CVC,
ITV2_CLASIF_VEH_USO CVU,
(
SELECT
*
FROM
ITV2_HIST_VEHICULOS VH
RIGHT JOIN ITV2_VEHICULOS V ON
VH.C_VEHICULO_ID = V.C_VEHICULO_ID
) VI
WHERE
I.C_TIPO_INSPECCION = 1
AND I.F_DESFAVORABLE IS NOT NULL
AND I.C_RESULTADO IN (3, 4)
AND I.C_VEHICULO_ID = VI.C_VEHICULO_ID
AND VI.C_CATEGORIA_ID = CAT.C_CATEGORIA_ID
AND VI.C_CLASIF_VEH_CONS_ID = CVC.C_CLASIF_VEH_CONS_ID
AND VI.C_CLASIF_VEH_USO_ID = CVU.C_CLASIF_VEH_USO_ID -- HOURS
AND I.C_ESTACION_ID = HI_FIN.C_ESTACION_ID
AND I.C_INSPECCION_ID = HI_FIN.C_INSPECCION_ID
AND I.N_ANNO = HI_FIN.N_ANNO
AND HI_FIN.C_TIPO_HORA_ID = 6 -- NEXT INSPECTION
AND I.C_ESTACION_ID = I_SIG.C_ESTACION_ID_FASE_ANT
AND I.C_INSPECCION_ID = I_SIG.C_INSPECCION_ID_FASE_ANT
AND I.N_ANNO = I_SIG.N_ANNO_FASE_ANT --
AND I_SIG.N_ANNO IN (2013, 2014, 2015, 2016, 2017, 2018)
AND I_SIG.C_ESTACION_ID IN (3, 21, 22, 26, 28, 32, 34, 37, 41, 47, 53, 59, 60)
AND I_SIG.F_INSPECCION >= '01/09/2015'
AND I_SIG.F_INSPECCION <= '30/09/2018' --
AND I_SIG.F_DESFAVORABLE IS NULL
AND I_SIG.C_RESULTADO IN (1, 2) -- AND HOURS
AND I_SIG.C_ESTACION_ID = HI_SIG_INI.C_ESTACION_ID
AND I_SIG.C_INSPECCION_ID = HI_SIG_INI.C_INSPECCION_ID
AND I_SIG.N_ANNO = HI_SIG_INI.N_ANNO
AND HI_SIG_INI.C_TIPO_HORA_ID = 1
--GROUP BY...
I expect in the output:
- C_ESTACION_ID (from I)
- C_VEHICULO_ID (from I)
- C_TIPO_HORA_ID (from HI_FIN)
- F_HORA (from HI_FIN)
- A_MATRICULA (from V)
- F_CAMBIO (from VH, if history data exists for that row of V)
This is what your query would look like if you use "explicit join syntax" instead of just some commas between table names:
SELECT *
FROM ITV2_INSPECCIONES I
INNER JOIN ITV2_HORAS_INSPECCION HI_FIN ON I.C_ESTACION_ID = HI_FIN.C_ESTACION_ID
AND I.C_INSPECCION_ID = HI_FIN.C_INSPECCION_ID
AND I.N_ANNO = HI_FIN.N_ANNO
INNER JOIN ITV2_INSPECCIONES I_SIG ON I.C_ESTACION_ID = I_SIG.C_ESTACION_ID_FASE_ANT
AND I.C_INSPECCION_ID = I_SIG.C_INSPECCION_ID_FASE_ANT
AND I.N_ANNO = I_SIG.N_ANNO_FASE_ANT
INNER JOIN ITV2_HORAS_INSPECCION HI_SIG_INI ON I_SIG.C_ESTACION_ID = HI_SIG_INI.C_ESTACION_ID
AND I_SIG.C_INSPECCION_ID = HI_SIG_INI.C_INSPECCION_ID
AND I_SIG.N_ANNO = HI_SIG_INI.N_ANNO
WHERE I.C_TIPO_INSPECCION = 1
AND I.F_DESFAVORABLE IS NOT NULL
AND I.C_RESULTADO IN (3, 4)
AND HI_FIN.C_TIPO_HORA_ID = 6 -- NEXT INSPECTION
AND HI_SIG_INI.C_TIPO_HORA_ID = 1
AND I_SIG.F_INSPECCION >= '01/09/2015'
AND I_SIG.F_INSPECCION <= '30/09/2018'
AND I_SIG.F_DESFAVORABLE IS NULL
AND I_SIG.N_ANNO IN (2013, 2014, 2015, 2016, 2017, 2018)
AND I_SIG.C_ESTACION_ID IN (3, 21, 22, 26, 28, 32, 34, 37, 41, 47, 53, 59, 60)
AND I_SIG.C_RESULTADO IN (1, 2) -- AND HOURS
Now I had to pull out several tables and the subquery from that because, frankly, they don't make much sense to me:
ITV2_HIST_VEHICULOS VH, << no join conditions to preceding tables
ITV2_CATEGORIAS_VEHICULO CAT, << no join conditions to preceding tables
ITV2_CLASIF_VEH_CONS CVC, << no join conditions to preceding tables
ITV2_CLASIF_VEH_USO CVU, << no join conditions to preceding tables
(
SELECT
*
FROM ITV2_VEHICULOS V
LEFT JOIN ITV2_HIST_VEHICULOS VH ON
VH.C_VEHICULO_ID = V.C_VEHICULO_ID
) VI
AND I.C_VEHICULO_ID = VI.C_VEHICULO_ID
AND VI.C_CATEGORIA_ID = CAT.C_CATEGORIA_ID
AND VI.C_CLASIF_VEH_CONS_ID = CVC.C_CLASIF_VEH_CONS_ID
AND VI.C_CLASIF_VEH_USO_ID = CVU.C_CLASIF_VEH_USO_ID
How can I convert rows into new columns, like this?
Original dataframe:
attr_0 attr_1 attr_2 attr_3
0 day_0 -0.032546 0.161111 -0.488420 -0.811738
1 day_1 -0.341992 0.779818 -2.937992 -0.236757
2 day_2 0.592365 0.729467 0.421381 0.571941
3 day_3 -0.418947 2.022934 -1.349382 1.411210
4 day_4 -0.726380 0.287871 -1.153566 -2.275976
...
After conversion:
day_0_attr_0 day_0_attr_1 day_0_attr_2 day_0_attr_3 day_1_attr_0 \
0 -0.032546 0.144388 -0.992263 0.734864 -0.936625
day_1_attr_1 day_1_attr_2 day_1_attr_3 day_2_attr_0 day_2_attr_1 \
0 -1.717135 -0.228005 -0.330573 -0.28034 0.834345
day_2_attr_2 day_2_attr_3 day_3_attr_0 day_3_attr_1 day_3_attr_2 \
0 1.161089 0.385277 -0.014138 -1.05523 -0.618873
day_3_attr_3 day_4_attr_0 day_4_attr_1 day_4_attr_2 day_4_attr_3
0 0.724463 0.137691 -1.188638 -2.457449 -0.171268
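For reference, here is one hypothetical way to rebuild a frame shaped like the one above (random values rather than the exact numbers shown), with the (position, day) MultiIndex that the first solution below assumes:
import numpy as np
import pandas as pd

# hypothetical sample data: 5 "day" rows x 4 "attr" columns of random values
np.random.seed(0)
idx = pd.MultiIndex.from_arrays([list(range(5)),
                                 ['day_%d' % i for i in range(5)]])
df = pd.DataFrame(np.random.randn(5, 4), index=idx,
                  columns=['attr_%d' % i for i in range(4)])
print(df)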
If the DataFrame has a MultiIndex, use:
print (df.index)
MultiIndex(levels=[[0, 1, 2, 3, 4], ['day_0', 'day_1', 'day_2', 'day_3', 'day_4']],
labels=[[0, 1, 2, 3, 4], [0, 1, 2, 3, 4]])
df = df.reset_index(level=0, drop=True).stack().reset_index()
level_0 level_1 0
0 day_0 attr_0 -0.032546
1 day_0 attr_1 0.161111
2 day_0 attr_2 -0.488420
3 day_0 attr_3 -0.811738
4 day_1 attr_0 -0.341992
5 day_1 attr_1 0.779818
6 day_1 attr_2 -2.937992
7 day_1 attr_3 -0.236757
8 day_2 attr_0 0.592365
9 day_2 attr_1 0.729467
10 day_2 attr_2 0.421381
11 day_2 attr_3 0.571941
12 day_3 attr_0 -0.418947
13 day_3 attr_1 2.022934
14 day_3 attr_2 -1.349382
15 day_3 attr_3 1.411210
16 day_4 attr_0 -0.726380
17 day_4 attr_1 0.287871
18 day_4 attr_2 -1.153566
19 day_4 attr_3 -2.275976
df = pd.DataFrame([df[0].values], columns = df['level_0'] + '_' + df['level_1'])
print (df)
day_0_attr_0 day_0_attr_1 ... day_4_attr_2 day_4_attr_3
0 -0.032546 0.161111 ... -1.153566 -2.275976
[1 rows x 20 columns]
Another solution, with itertools.product (starting again from the original df):
from itertools import product
cols = ['{}_{}'.format(a,b) for a, b in product(df.index.get_level_values(1), df.columns)]
print (cols)
['day_0_attr_0', 'day_0_attr_1', 'day_0_attr_2', 'day_0_attr_3',
'day_1_attr_0', 'day_1_attr_1', 'day_1_attr_2', 'day_1_attr_3',
'day_2_attr_0', 'day_2_attr_1', 'day_2_attr_2', 'day_2_attr_3',
'day_3_attr_0', 'day_3_attr_1', 'day_3_attr_2', 'day_3_attr_3',
'day_4_attr_0', 'day_4_attr_1', 'day_4_attr_2', 'day_4_attr_3']
df = pd.DataFrame([df.values.ravel()], columns=cols)
print (df)
day_0_attr_0 day_0_attr_1 ... day_4_attr_2 day_4_attr_3
0 -0.032546 0.161111 ... -1.153566 -2.275976
[1 rows x 20 columns]
If there is no MultiIndex, the solutions change a bit:
print (df.index)
Index(['day_0', 'day_1', 'day_2', 'day_3', 'day_4'], dtype='object')
# first solution, with stack:
df1 = df.stack().reset_index()
df1 = pd.DataFrame([df1[0].values], columns = df1['level_0'] + '_' + df1['level_1'])
print (df1)

# second solution, with product (note it starts again from the original df):
from itertools import product
cols = ['{}_{}'.format(a,b) for a, b in product(df.index, df.columns)]
df2 = pd.DataFrame([df.values.ravel()], columns=cols)
print (df2)
You can use a melt and string-concatenation approach, i.e.
import numpy as np

idx = df.index
temp = df.melt()
# Repeat the index
temp['variable'] = pd.Series(np.concatenate([idx]*len(df.columns))) + '_' + temp['variable']
# Set index and transpose
temp.set_index('variable').T
variable day_0_attr_0 day_1_attr_0 day_2_attr_0 day_3_attr_0 day_4_attr_0 . . . .
value -0.032546 -0.341992 0.592365 -0.418947 -0.72638 . . . .
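A minimal end-to-end sketch of the melt approach on hypothetical sample data (assuming a plain, non-MultiIndex day index); note that the resulting column order is attribute-major (day_0_attr_0, day_1_attr_0, ...), as in the output above, rather than day-major as in the stack-based solutions:
import numpy as np
import pandas as pd

# hypothetical sample frame with a plain 'day' index
np.random.seed(1)
df = pd.DataFrame(np.random.randn(5, 4),
                  index=['day_%d' % i for i in range(5)],
                  columns=['attr_%d' % i for i in range(4)])

idx = df.index
temp = df.melt()
# melt stacks the frame column by column, so repeating the index
# len(df.columns) times lines each value up with the day it came from
temp['variable'] = pd.Series(np.concatenate([idx] * len(df.columns))) + '_' + temp['variable']
print(temp.set_index('variable').T)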
According to this question, I implemented the horizontal addition, this time 5 by 5 and 7 by 7. It does the job correctly, but it is not fast enough.
Can it be faster than it is? I tried to use hadd and other instructions, but the improvement is limited. For example, when I use _mm256_bsrli_epi128 it is slightly better, but it needs some extra permutation that ruins the benefit because of the lanes. So the question is how it should be implemented to gain more performance. The same story holds for 9 elements, etc.
This adds 5 elements horizontally and puts the results in places 0, 5, and 10:
// puts the results in places 0, 5, and 10
inline __m256i _mm256_hadd5x5_epi16(__m256i a )
{
__m256i a1, a2, a3, a4;
a1 = _mm256_alignr_epi8(_mm256_permute2x128_si256(a, _mm256_setzero_si256(), 0x31), a, 1 * 2);
a2 = _mm256_alignr_epi8(_mm256_permute2x128_si256(a, _mm256_setzero_si256(), 0x31), a, 2 * 2);
a3 = _mm256_bsrli_epi128(a2, 2);
a4 = _mm256_bsrli_epi128(a3, 2);
return _mm256_add_epi16(_mm256_add_epi16(_mm256_add_epi16(a1, a2), _mm256_add_epi16(a3, a4)) , a );
}
And this adds 7 elements horizontally and puts the results in places 0 and 7:
inline __m256i _mm256_hadd7x7_epi16(__m256i a )
{
__m256i a1, a2, a3, a4, a5, a6;
a1 = _mm256_alignr_epi8(_mm256_permute2x128_si256(a, _mm256_setzero_si256(), 0x31), a, 1 * 2);
a2 = _mm256_alignr_epi8(_mm256_permute2x128_si256(a, _mm256_setzero_si256(), 0x31), a, 2 * 2);
a3 = _mm256_alignr_epi8(_mm256_permute2x128_si256(a, _mm256_setzero_si256(), 0x31), a, 3 * 2);
a4 = _mm256_alignr_epi8(_mm256_permute2x128_si256(a, _mm256_setzero_si256(), 0x31), a, 4 * 2);
a5 = _mm256_alignr_epi8(_mm256_permute2x128_si256(a, _mm256_setzero_si256(), 0x31), a, 5 * 2);
a6 = _mm256_alignr_epi8(_mm256_permute2x128_si256(a, _mm256_setzero_si256(), 0x31), a, 6 * 2);
return _mm256_add_epi16(_mm256_add_epi16(_mm256_add_epi16(a1, a2), _mm256_add_epi16(a3, a4)) , _mm256_add_epi16(_mm256_add_epi16(a5, a6), a ));
}
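For reference, here is a plain scalar sketch (my reading of the description above, not part of the original code) of what these routines compute at the positions of interest: each result is simply the sum of 5 or 7 consecutive 16-bit elements.
import numpy as np

def hadd_ref(x, width, positions):
    # Scalar reference: out[p] = x[p] + x[p+1] + ... + x[p+width-1]
    # for each position p of interest; other lanes are left unspecified here.
    x = np.asarray(x, dtype=np.int64)
    return {p: int(x[p:p + width].sum()) for p in positions}

# With the test input 1..16 used in the demo program below, this gives
# {0: 15, 5: 40, 10: 65} for the 5-wide case and {0: 28, 7: 77} for the
# 7-wide case, matching the printed results of the intrinsics versions.
print(hadd_ref(range(1, 17), 5, (0, 5, 10)))
print(hadd_ref(range(1, 17), 7, (0, 7)))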
Indeed it is possible to calculate these sums with fewer instructions. The idea is to accumulate
the partial sums not only in columns 10, 5, and 0, but also in other columns. This reduces the number of
vpaddw instructions and the number of shuffles compared to your solution.
#include <stdio.h>
#include <x86intrin.h>
/* gcc -O3 -Wall -m64 -march=haswell hor_sum5x5.c */
int print_vec_short(__m256i x);
int print_10_5_0_short(__m256i x);
__m256i _mm256_hadd5x5_epi16(__m256i a );
__m256i _mm256_hadd7x7_epi16(__m256i a );
int main() {
short x[16];
for(int i=0; i<16; i++) x[i] = i+1; /* arbitrary initial values */
__m256i t0 = _mm256_loadu_si256((__m256i*)x);
__m256i t2 = _mm256_permutevar8x32_epi32(t0,_mm256_set_epi32(0,7,6,5,4,3,2,1));
__m256i t02 = _mm256_add_epi16(t0,t2);
__m256i t3 = _mm256_bsrli_epi128(t2,4); /* byte shift right */
__m256i t023 = _mm256_add_epi16(t02,t3);
__m256i t13 = _mm256_srli_epi64(t02,16); /* bit shift right */
__m256i sum = _mm256_add_epi16(t023,t13);
printf("t0 = ");print_vec_short(t0 );
printf("t2 = ");print_vec_short(t2 );
printf("t02 = ");print_vec_short(t02 );
printf("t3 = ");print_vec_short(t3 );
printf("t023= ");print_vec_short(t023);
printf("t13 = ");print_vec_short(t13 );
printf("sum = ");print_vec_short(sum );
printf("\nVector elements of interest: columns 10, 5, 0:\n");
printf("t0 [10, 5, 0] = ");print_10_5_0_short(t0 );
printf("t2 [10, 5, 0] = ");print_10_5_0_short(t2 );
printf("t02 [10, 5, 0] = ");print_10_5_0_short(t02 );
printf("t3 [10, 5, 0] = ");print_10_5_0_short(t3 );
printf("t023[10, 5, 0] = ");print_10_5_0_short(t023);
printf("t13 [10, 5, 0] = ");print_10_5_0_short(t13 );
printf("sum [10, 5, 0] = ");print_10_5_0_short(sum );
printf("\nSum with _mm256_hadd5x5_epi16(t0)\n");
sum = _mm256_hadd5x5_epi16(t0);
printf("sum [10, 5, 0] = ");print_10_5_0_short(sum );
/* now the sum of 7 elements: */
printf("\n\nSum of short ints 13...7 and short ints 6...0:\n");
__m256i t = _mm256_loadu_si256((__m256i*)x);
t0 = _mm256_permutevar8x32_epi32(t0,_mm256_set_epi32(3,6,5,4,3,2,1,0));
t0 = _mm256_and_si256(t0,_mm256_set_epi16(0xFFFF,0,0xFFFF,0xFFFF,0xFFFF,0xFFFF,0xFFFF,0xFFFF, 0,0xFFFF,0xFFFF,0xFFFF,0xFFFF,0xFFFF,0xFFFF,0xFFFF));
__m256i t1 = _mm256_alignr_epi8(t0,t0,2);
__m256i t01 = _mm256_add_epi16(t0,t1);
__m256i t23 = _mm256_alignr_epi8(t01,t01,4);
__m256i t0123 = _mm256_add_epi16(t01,t23);
__m256i t4567 = _mm256_alignr_epi8(t0123,t0123,8);
__m256i sum08 = _mm256_add_epi16(t0123,t4567); /* all elements are summed, but another permutation is needed to get the answer at position 7 */
sum = _mm256_permutevar8x32_epi32(sum08,_mm256_set_epi32(4,4,4,4,4,0,0,0));
printf("t = ");print_vec_short(t );
printf("t0 = ");print_vec_short(t0 );
printf("t1 = ");print_vec_short(t1 );
printf("t01 = ");print_vec_short(t01 );
printf("t23 = ");print_vec_short(t23 );
printf("t0123 = ");print_vec_short(t0123 );
printf("t4567 = ");print_vec_short(t4567 );
printf("sum08 = ");print_vec_short(sum08 );
printf("sum = ");print_vec_short(sum );
printf("\nSum with _mm256_hadd7x7_epi16(t) (the answer is in column 0 and in column 7)\n");
sum = _mm256_hadd7x7_epi16(t);
printf("sum = ");print_vec_short(sum );
return 0;
}
inline __m256i _mm256_hadd5x5_epi16(__m256i a )
{
__m256i a1, a2, a3, a4;
a1 = _mm256_alignr_epi8(_mm256_permute2x128_si256(a, _mm256_setzero_si256(), 0x31), a, 1 * 2);
a2 = _mm256_alignr_epi8(_mm256_permute2x128_si256(a, _mm256_setzero_si256(), 0x31), a, 2 * 2);
a3 = _mm256_bsrli_epi128(a2, 2);
a4 = _mm256_bsrli_epi128(a3, 2);
return _mm256_add_epi16(_mm256_add_epi16(_mm256_add_epi16(a1, a2), _mm256_add_epi16(a3, a4)) , a );
}
inline __m256i _mm256_hadd7x7_epi16(__m256i a )
{
__m256i a1, a2, a3, a4, a5, a6;
a1 = _mm256_alignr_epi8(_mm256_permute2x128_si256(a, _mm256_setzero_si256(), 0x31), a, 1 * 2);
a2 = _mm256_alignr_epi8(_mm256_permute2x128_si256(a, _mm256_setzero_si256(), 0x31), a, 2 * 2);
a3 = _mm256_alignr_epi8(_mm256_permute2x128_si256(a, _mm256_setzero_si256(), 0x31), a, 3 * 2);
a4 = _mm256_alignr_epi8(_mm256_permute2x128_si256(a, _mm256_setzero_si256(), 0x31), a, 4 * 2);
a5 = _mm256_alignr_epi8(_mm256_permute2x128_si256(a, _mm256_setzero_si256(), 0x31), a, 5 * 2);
a6 = _mm256_alignr_epi8(_mm256_permute2x128_si256(a, _mm256_setzero_si256(), 0x31), a, 6 * 2);
return _mm256_add_epi16(_mm256_add_epi16(_mm256_add_epi16(a1, a2), _mm256_add_epi16(a3, a4)) , _mm256_add_epi16(_mm256_add_epi16(a5, a6), a ));
}
int print_vec_short(__m256i x){
short int v[16];
_mm256_storeu_si256((__m256i *)v,x);
printf("%4hi %4hi %4hi %4hi | %4hi %4hi %4hi %4hi | %4hi %4hi %4hi %4hi | %4hi %4hi %4hi %4hi \n",
v[15],v[14],v[13],v[12],v[11],v[10],v[9],v[8],v[7],v[6],v[5],v[4],v[3],v[2],v[1],v[0]);
return 0;
}
int print_10_5_0_short(__m256i x){
short int v[16];
_mm256_storeu_si256((__m256i *)v,x);
printf("%4hi %4hi %4hi \n",v[10],v[5],v[0]);
return 0;
}
The output is:
$ ./a.out
t0 = 16 15 14 13 | 12 11 10 9 | 8 7 6 5 | 4 3 2 1
t2 = 2 1 16 15 | 14 13 12 11 | 10 9 8 7 | 6 5 4 3
t02 = 18 16 30 28 | 26 24 22 20 | 18 16 14 12 | 10 8 6 4
t3 = 0 0 2 1 | 16 15 14 13 | 0 0 10 9 | 8 7 6 5
t023= 18 16 32 29 | 42 39 36 33 | 18 16 24 21 | 18 15 12 9
t13 = 0 18 16 30 | 0 26 24 22 | 0 18 16 14 | 0 10 8 6
sum = 18 34 48 59 | 42 65 60 55 | 18 34 40 35 | 18 25 20 15
Vector elements of interest: columns 10, 5, 0:
t0 [10, 5, 0] = 11 6 1
t2 [10, 5, 0] = 13 8 3
t02 [10, 5, 0] = 24 14 4
t3 [10, 5, 0] = 15 10 5
t023[10, 5, 0] = 39 24 9
t13 [10, 5, 0] = 26 16 6
sum [10, 5, 0] = 65 40 15
Sum with _mm256_hadd5x5_epi16(t0)
sum [10, 5, 0] = 65 40 15
Sum of short ints 13...7 and short ints 6...0:
t = 16 15 14 13 | 12 11 10 9 | 8 7 6 5 | 4 3 2 1
t0 = 8 0 14 13 | 12 11 10 9 | 0 7 6 5 | 4 3 2 1
t1 = 9 8 0 14 | 13 12 11 10 | 1 0 7 6 | 5 4 3 2
t01 = 17 8 14 27 | 25 23 21 19 | 1 7 13 11 | 9 7 5 3
t23 = 21 19 17 8 | 14 27 25 23 | 5 3 1 7 | 13 11 9 7
t0123 = 38 27 31 35 | 39 50 46 42 | 6 10 14 18 | 22 18 14 10
t4567 = 39 50 46 42 | 38 27 31 35 | 22 18 14 10 | 6 10 14 18
sum08 = 77 77 77 77 | 77 77 77 77 | 28 28 28 28 | 28 28 28 28
sum = 77 77 77 77 | 77 77 77 77 | 77 77 28 28 | 28 28 28 28
Sum with _mm256_hadd7x7_epi16(t) (the answer is in column 0 and in column 7)
sum = 16 31 45 58 | 70 81 91 84 | 77 70 63 56 | 49 42 35 28
As the following code makes apparent, the string representation of numpy.timedelta64 falls victim to overflow well before the objects themselves fail to work.
import numpy as np
import datetime
def showMeDifference( t1, t2 ):
dt = t2-t1
dt64_ms = np.array( [ dt ], dtype = "timedelta64[ms]" )[0]
dt64_us = np.array( [ dt ], dtype = "timedelta64[us]" )[0]
dt64_ns = np.array( [ dt ], dtype = "timedelta64[ns]" )[0]
assert( dt64_ms / dt64_ns == 1.0 )
assert( dt64_us / dt64_ms == 1.0 )
assert( dt64_ms / dt64_us == 1.0 )
print str( dt64_ms )
print str( dt64_us )
print str( dt64_ns )
t1 = datetime.datetime( 2014, 4, 1, 12, 0, 0 )
t2 = datetime.datetime( 2014, 4, 1, 12, 0, 1 )
showMeDifference( t1, t2 )
t1 = datetime.datetime( 2014, 4, 1, 12, 0, 0 )
t2 = datetime.datetime( 2014, 4, 1, 12, 1, 0 )
showMeDifference( t1, t2 )
t1 = datetime.datetime( 2014, 4, 1, 12, 0, 0 )
t2 = datetime.datetime( 2014, 4, 1, 13, 0, 0 )
showMeDifference( t1, t2 )
print "These are for " + np.__version__
1000 milliseconds
1000000 microseconds
1000000000 nanoseconds
60000 milliseconds
60000000 microseconds
-129542144 nanoseconds
3600000 milliseconds
-694967296 microseconds
817405952 nanoseconds
These are for 1.7.1
Is this just a bug in np.timedelta64? If so, what idioms / workarounds have people used when working with np.timedelta64?
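For what it's worth, the asserts above pass, which suggests the stored values are fine and only the string representation overflows. A hedged workaround sketch, assuming you only need the raw counts: read the underlying integer, or divide by a unit timedelta, instead of relying on str():
import datetime
import numpy as np

dt = datetime.datetime(2014, 4, 1, 13, 0, 0) - datetime.datetime(2014, 4, 1, 12, 0, 0)
dt64_us = np.array([dt], dtype="timedelta64[us]")[0]   # same construction as above

# raw tick count in the timedelta64's own unit (microseconds here)
print(dt64_us.astype(np.int64))              # 3600000000

# or express the duration in an explicit unit by dividing by a unit timedelta
print(dt64_us / np.timedelta64(1, 's'))      # 3600.0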