Angular 8 issue - npm start

I recently started working on Angular 8. Everything went well, but for the last 2 days I have been randomly receiving the following error:

Fatal error in , line 0
unreachable code
FailureMessage Object: 000000C1040FEB20
 1: 00007FF67F1C412F napi_wrap+133311
 2: 00007FF67F0ED13F std::basic_ostream<char,std::char_traits<char> >::operator<<+57583
 3: 00007FF67FD5B6A2 V8_Fatal+162
 4: 00007FF67F89DF5C v8::internal::MarkingWorklists::SwitchToContextSlow+49356
 5: 00007FF67F8C4FE7 v8::internal::IncrementalMarking::Step+807
 6: 00007FF67F8C2457 v8::internal::IncrementalMarking::AdvanceWithDeadline+663
 7: 00007FF67F8C5930 v8::internal::IncrementalMarking::WasActivated+448
 8: 00007FF67F0F0B7F std::basic_ostream<char,std::char_traits<char> >::operator<<+72495
 9: 00007FF67F0EF456 std::basic_ostream<char,std::char_traits<char> >::operator<<+66566
10: 00007FF67F22141B uv_async_send+331
11: 00007FF67F220BAC uv_loop_init+1292
12: 00007FF67F220D4A uv_run+202
13: 00007FF67F118A45 v8::internal::AsmJsScanner::GetIdentifierString+51749
14: 00007FF67F191227 node::Start+311
15: 00007FF67EFE685C RC4_options+339804
16: 00007FF67FFA14F8 v8::internal::compiler::RepresentationChanger::Uint32OverflowOperatorFor+14424
17: 00007FFDB3967974 BaseThreadInitThunk+20
18: 00007FFDB436A271 RtlUserThreadStart+33

I am using Node v14.18.1, and the output of ng --version is:

@angular-devkit/architect        0.800.6
@angular-devkit/build-angular    0.800.6
@angular-devkit/build-optimizer  0.800.6
@angular-devkit/build-webpack    0.800.6
@angular-devkit/core             8.0.6
@angular-devkit/schematics       8.3.26
@angular/cdk                     8.2.3
@angular/cli                     8.3.26
@angular/material                8.2.3
@angular/material-moment-adapter 8.2.3
@ngtools/webpack                 8.0.6
@schematics/angular              8.3.26
@schematics/update               0.803.26
rxjs                             6.5.5
typescript                       3.4.5
webpack                          4.30.0

Any idea why this is happening?

Related

NextJS error on start and build: Assertion `(insertion_info.second) == (true)' failed

All of a sudden my NextJS app won't start or build; I get the following message:
npm[213732]: c:\ws\src\env-inl.h:1041: Assertion `(insertion_info.second) == (true)' failed.
1: 00007FF7A493052F napi_wrap+109311
2: 00007FF7A48D5256 v8::internal::OrderedHashTable<v8::internal::OrderedHashSet,1>::NumberOfElementsOffset+33302
3: 00007FF7A48D55D1 v8::internal::OrderedHashTable<v8::internal::OrderedHashSet,1>::NumberOfElementsOffset+34193
4: 00007FF7A494E69F node::AddEnvironmentCleanupHook+127
5: 00007FF7A48FD5A1 napi_add_env_cleanup_hook+49
6: 00007FFF90FE5057 napi_register_module_v1+8071
7: 00007FF7A48FD25B node_module_register+7275
8: 00007FF7A48F9AE0 node::Buffer::New+6352
9: 00007FF7A48FA873 node::Buffer::New+9827
10: 00007FF7A514E85F v8::internal::Builtins::builtin_handle+321471
11: 00007FF7A514DDF4 v8::internal::Builtins::builtin_handle+318804
12: 00007FF7A514E0E7 v8::internal::Builtins::builtin_handle+319559
13: 00007FF7A514DF33 v8::internal::Builtins::builtin_handle+319123
14: 00007FF7A522A0CD v8::internal::SetupIsolateDelegate::SetupHeap+464173
15: 00007FF7A51C29D2 v8::internal::SetupIsolateDelegate::SetupHeap+40498
16: 00007FF7A51C29D2 v8::internal::SetupIsolateDelegate::SetupHeap+40498
17: 00007FF7A51C29D2 v8::internal::SetupIsolateDelegate::SetupHeap+40498
18: 00007FF7A51C29D2 v8::internal::SetupIsolateDelegate::SetupHeap+40498
19: 00007FF7A51C29D2 v8::internal::SetupIsolateDelegate::SetupHeap+40498
20: 00007FF7A51C29D2 v8::internal::SetupIsolateDelegate::SetupHeap+40498
21: 00007FF7A51C29D2 v8::internal::SetupIsolateDelegate::SetupHeap+40498
22: 00007FF7A526A190 v8::internal::SetupIsolateDelegate::SetupHeap+726512
23: 00007FF7A51BE8DA v8::internal::SetupIsolateDelegate::SetupHeap+23866
24: 00007FF7A52A8473 v8::internal::SetupIsolateDelegate::SetupHeap+981203
25: 00007FF7A51C29D2 v8::internal::SetupIsolateDelegate::SetupHeap+40498
26: 00007FF7A51EF7C0 v8::internal::SetupIsolateDelegate::SetupHeap+224288
27: 00007FF7A526BBBE v8::internal::SetupIsolateDelegate::SetupHeap+733214
28: 00007FF7A51E293D v8::internal::SetupIsolateDelegate::SetupHeap+171421
29: 00007FF7A51C057C v8::internal::SetupIsolateDelegate::SetupHeap+31196
30: 00007FF7A509081F v8::internal::Execution::CallWasm+1839
31: 00007FF7A509092B v8::internal::Execution::CallWasm+2107
32: 00007FF7A509136A v8::internal::Execution::TryCall+378
33: 00007FF7A5071B85 v8::internal::MicrotaskQueue::RunMicrotasks+501
34: 00007FF7A50718E0 v8::internal::MicrotaskQueue::PerformCheckpoint+32
35: 00007FF7A49541C0 node::CallbackScope::~CallbackScope+672
36: 00007FF7A49545BB node::CallbackScope::~CallbackScope+1691
37: 00007FF7A4954A01 node::MakeCallback+209
38: 00007FF7A491F04E napi_wrap+38430
39: 00007FF7A49796C8 uv_check_init+120
40: 00007FF7A49842E8 uv_run+664
41: 00007FF7A4890255 v8::internal::OrderedHashTable<v8::internal::OrderedHashSet,1>::NumberOfBucketsOffset+9365
42: 00007FF7A49039B7 node::Start+311
43: 00007FF7A476686C RC4_options+339820
44: 00007FF7A570619C v8::internal::compiler::RepresentationChanger::Uint32OverflowOperatorFor+153532
45: 00007FF8741626BD BaseThreadInitThunk+29
46: 00007FF8759EDFB8 RtlUserThreadStart+40
npm ERR! code ELIFECYCLE
npm ERR! errno 1
npm ERR! my-app@0.1.0 dev: `next dev -p 5000`
npm ERR! Exit status 1
I even checked out the main branch and removed all local changes (a few content pages and one form that hits an API to send data) to get back to the previous working state, but it still doesn't work. I am on a Windows 11 machine and the Node server otherwise runs normally.
I had the same problem. Just delete the ".next" directory and run "npm run dev"; it will work again.
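For completeness, a minimal sketch of that fix run from the project root, assuming the standard Next.js layout where the build cache lives in the .next directory:

rm -rf .next        # on Windows cmd: rmdir /s /q .next
npm run dev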

What exactly does set(CMAKE_CXX_VISIBILITY_PRESET hidden) do in CMake?

I know that it's supposed to default everything that isn't explicitly exported as "default" to hidden, but when I use it (on a debug build) and compare builds of the shared library with and without it, the output of
readelf -Ws --dyn-syms libLibrary.so
is exactly the same, with everything having default visibility. What could I be missing?
This option comes from the example in the GenerateExportHeader module, but I don't want to just mindlessly set flags because the (often incorrect) documentation tells me to. And in this case I cannot understand what this option even does, if anything.
It adds -fvisibility=hidden to the command-line arguments of GCC and Clang.
It is added here: https://gitlab.kitware.com/cmake/cmake/-/blob/master/Source/cmLocalGenerator.cxx#L2200
The option comes from the CMAKE_${LANG}_COMPILE_OPTIONS_VISIBILITY variable:
/usr/share/cmake $ ag _COMPILE_OPTIONS_VISIBILITY
Modules/Compiler/AppleClang-CXX.cmake
15: set(CMAKE_CXX_COMPILE_OPTIONS_VISIBILITY_INLINES_HIDDEN "-fvisibility-inlines-hidden")
Modules/Compiler/AppleClang-OBJCXX.cmake
13:set(CMAKE_OBJCXX_COMPILE_OPTIONS_VISIBILITY_INLINES_HIDDEN "-fvisibility-inlines-hidden")
Modules/Compiler/Clang-CXX.cmake
15: set(CMAKE_CXX_COMPILE_OPTIONS_VISIBILITY_INLINES_HIDDEN "-fvisibility-inlines-hidden")
Modules/Compiler/Clang-HIP.cmake
9: set(CMAKE_HIP_COMPILE_OPTIONS_VISIBILITY_INLINES_HIDDEN "-fvisibility-inlines-hidden")
Modules/Compiler/Clang.cmake
36: set(CMAKE_${lang}_COMPILE_OPTIONS_VISIBILITY "-fvisibility=")
Modules/Compiler/GNU-CXX.cmake
17: set(CMAKE_CXX_COMPILE_OPTIONS_VISIBILITY_INLINES_HIDDEN "-fno-keep-inline-dllexport")
21: set(CMAKE_CXX_COMPILE_OPTIONS_VISIBILITY_INLINES_HIDDEN "-fvisibility-inlines-hidden")
Modules/Compiler/GNU-OBJCXX.cmake
14: set(CMAKE_OBJCXX_COMPILE_OPTIONS_VISIBILITY_INLINES_HIDDEN "-fvisibility-inlines-hidden")
Modules/Compiler/GNU.cmake
32: set(CMAKE_${lang}_COMPILE_OPTIONS_VISIBILITY "-fvisibility=")
Modules/Compiler/IntelLLVM-CXX.cmake
23: set(CMAKE_CXX_COMPILE_OPTIONS_VISIBILITY_INLINES_HIDDEN "-fvisibility-inlines-hidden")
Modules/Compiler/IntelLLVM.cmake
54: set(CMAKE_${lang}_COMPILE_OPTIONS_VISIBILITY "-fvisibility=")
Modules/Compiler/NVIDIA-CUDA.cmake
48: set(CMAKE_CUDA_COMPILE_OPTIONS_VISIBILITY -Xcompiler=-fvisibility=)
Modules/Compiler/QCC-CXX.cmake
15:set(CMAKE_CXX_COMPILE_OPTIONS_VISIBILITY_INLINES_HIDDEN "-fvisibility-inlines-hidden")
Modules/Platform/AIX-GNU-CXX.cmake
3:unset(CMAKE_CXX_COMPILE_OPTIONS_VISIBILITY_INLINES_HIDDEN)
Modules/Platform/AIX-GNU.cmake
20: unset(CMAKE_${lang}_COMPILE_OPTIONS_VISIBILITY)
Modules/Platform/Apple-Intel.cmake
17: set(CMAKE_${lang}_COMPILE_OPTIONS_VISIBILITY "-fvisibility=")
Modules/Platform/Apple-IntelLLVM.cmake
16: set(CMAKE_${lang}_COMPILE_OPTIONS_VISIBILITY "-fvisibility=")
Modules/Platform/HP-UX-GNU-CXX.cmake
3:unset(CMAKE_CXX_COMPILE_OPTIONS_VISIBILITY_INLINES_HIDDEN)
Modules/Platform/HP-UX-GNU.cmake
19: unset(CMAKE_${lang}_COMPILE_OPTIONS_VISIBILITY)
Modules/Platform/Linux-Fujitsu.cmake
14: set(CMAKE_${lang}_COMPILE_OPTIONS_VISIBILITY "-fvisibility=")
Modules/Platform/Linux-Intel.cmake
57: set(CMAKE_${lang}_COMPILE_OPTIONS_VISIBILITY "-fvisibility=")
Modules/Platform/Linux-IntelLLVM.cmake
54: set(CMAKE_${lang}_COMPILE_OPTIONS_VISIBILITY "-fvisibility=")
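For reference, a minimal sketch of how the preset is typically combined with an export header. The project and target names here (example, mylib) are placeholders, not taken from the question; generate_export_header() is the standard module the question refers to.

# hypothetical CMakeLists.txt fragment
cmake_minimum_required(VERSION 3.12)
project(example CXX)

# Initializes the CXX_VISIBILITY_PRESET property of targets defined below,
# which makes CMake pass -fvisibility=hidden to GCC/Clang when compiling them.
set(CMAKE_CXX_VISIBILITY_PRESET hidden)
set(CMAKE_VISIBILITY_INLINES_HIDDEN ON)

add_library(mylib SHARED mylib.cpp)

# generate_export_header() writes mylib_export.h, which defines MYLIB_EXPORT
# as __attribute__((visibility("default"))) on GCC/Clang, so only symbols
# annotated with MYLIB_EXPORT remain visible in the resulting .so.
include(GenerateExportHeader)
generate_export_header(mylib)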

Using the shift function along with the max function in Pandas

I am attempting to create a technical indicator ('Supertrend') using Pandas. The formula for this column is recursive.
(For people familiar with Pine Script, this column is meant to replicate the result of the corresponding Pine Script function.)
df['st_trendup'] = np.select(df['Close'].shift() > df['st_trendup'].shift(), df[['st_up', 'st_trendup'.shift()]].max(axis=1), df['st_up'])
The problem occurs in the true part of np.select() because I cannot call .shift() on a string.
Normally I would make a new column that uses .shift() beforehand, but since this is recursive, I have to do it all in one line.
If possible I'd like to avoid loops for speed; I prefer solutions using native pandas or NumPy functions.
What I am looking for: a way to take the row-wise max that can accommodate a .shift() call.
The columns that are used:
def tr(high, low, close1):
    # true range: the largest of the three candidate ranges
    return max(high - low, abs(high - close1), abs(low - close1))

df['st_closeprev'] = df['Close'].shift()
df['st_hl2'] = (df['High'] + df['Low']) / 2
df['st_tr'] = df.apply(lambda row: tr(row['High'], row['Low'], row['st_closeprev']), axis=1)
df['st_atr'] = df['st_tr'].ewm(alpha=1/pd, adjust=False, min_periods=pd).mean()  # `pd` here is the ATR period, not the pandas alias
df['st_up'] = df['st_hl2'] - factor * df['st_atr']
df['st_dn'] = df['st_hl2'] + factor * df['st_atr']
# the problematic line:
df['st_trendup'] = np.select(df['Close'].shift() > df['st_trendup'].shift(), df[['st_up', 'st_trendup'.shift()]].max(axis=1), df['st_up'])
Sample data, obtained via df.to_dict():
{'Date': {0: Timestamp('2021-01-01 09:15:00'),
1: Timestamp('2021-01-01 09:30:00'),
2: Timestamp('2021-01-01 09:45:00'),
3: Timestamp('2021-01-01 10:00:00'),
4: Timestamp('2021-01-01 10:15:00'),
5: Timestamp('2021-01-01 10:30:00'),
6: Timestamp('2021-01-01 10:45:00'),
7: Timestamp('2021-01-01 11:00:00'),
8: Timestamp('2021-01-01 11:15:00'),
9: Timestamp('2021-01-01 11:30:00'),
10: Timestamp('2021-01-01 11:45:00'),
11: Timestamp('2021-01-01 12:00:00'),
12: Timestamp('2021-01-01 12:15:00'),
13: Timestamp('2021-01-01 12:30:00'),
14: Timestamp('2021-01-01 12:45:00'),
15: Timestamp('2021-01-01 13:00:00'),
16: Timestamp('2021-01-01 13:15:00'),
17: Timestamp('2021-01-01 13:30:00'),
18: Timestamp('2021-01-01 13:45:00'),
19: Timestamp('2021-01-01 14:00:00'),
20: Timestamp('2021-01-01 14:15:00'),
21: Timestamp('2021-01-01 14:30:00'),
22: Timestamp('2021-01-01 14:45:00'),
23: Timestamp('2021-01-01 15:00:00'),
24: Timestamp('2021-01-01 15:15:00'),
25: Timestamp('2021-01-04 09:15:00')},
'Open': {0: 31250.0,
1: 31376.0,
2: 31405.0,
3: 31389.4,
4: 31377.5,
5: 31347.8,
6: 31310.8,
7: 31343.4,
8: 31349.5,
9: 31349.9,
10: 31325.1,
11: 31310.9,
12: 31329.0,
13: 31376.0,
14: 31375.5,
15: 31357.4,
16: 31325.0,
17: 31341.1,
18: 31300.0,
19: 31324.5,
20: 31353.3,
21: 31350.0,
22: 31346.9,
23: 31330.0,
24: 31314.3,
25: 31450.2},
'High': {0: 31407.0,
1: 31425.0,
2: 31411.95,
3: 31389.45,
4: 31382.0,
5: 31350.0,
6: 31354.6,
7: 31359.0,
8: 31370.0,
9: 31364.7,
10: 31350.0,
11: 31337.9,
12: 31378.9,
13: 31419.5,
14: 31377.75,
15: 31360.0,
16: 31367.15,
17: 31345.2,
18: 31340.0,
19: 31367.0,
20: 31375.0,
21: 31370.0,
22: 31350.0,
23: 31334.6,
24: 31329.6,
25: 31599.0},
'Low': {0: 31250.0,
1: 31367.95,
2: 31352.5,
3: 31331.65,
4: 31301.4,
5: 31303.05,
6: 31310.0,
7: 31325.05,
8: 31335.35,
9: 31315.35,
10: 31281.9,
11: 31292.0,
12: 31316.25,
13: 31352.05,
14: 31335.0,
15: 31322.0,
16: 31318.25,
17: 31261.55,
18: 31283.3,
19: 31324.5,
20: 31322.0,
21: 31332.15,
22: 31324.1,
23: 31300.15,
24: 31280.0,
25: 31430.0},
'Close': {0: 31375.0,
1: 31398.3,
2: 31386.0,
3: 31377.0,
4: 31342.3,
5: 31311.7,
6: 31345.0,
7: 31349.0,
8: 31344.2,
9: 31327.6,
10: 31311.3,
11: 31325.6,
12: 31373.0,
13: 31375.0,
14: 31357.4,
15: 31326.0,
16: 31345.9,
17: 31300.6,
18: 31324.4,
19: 31353.8,
20: 31345.6,
21: 31341.6,
22: 31332.5,
23: 31311.0,
24: 31285.0,
25: 31558.4},
'Volume': {0: 259952,
1: 163775,
2: 105900,
3: 99725,
4: 115175,
5: 78625,
6: 67675,
7: 46575,
8: 53350,
9: 54175,
10: 96975,
11: 80925,
12: 79475,
13: 147775,
14: 38900,
15: 64925,
16: 52425,
17: 142175,
18: 81800,
19: 74950,
20: 68550,
21: 40350,
22: 47150,
23: 119200,
24: 222875,
25: 524625}}
Change:
df[['st_up','st_trendup'.shift()]].max(axis=1)
to:
df[['st_up','st_trendup']].assign(st_trendup = df['st_trendup'].shift()).max(axis=1)
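For illustration, a minimal runnable sketch of that assign-then-max idiom on a made-up frame (the values below are not from the question's data). The point is that .assign() substitutes the shifted series under the same column name just for this expression, so .max(axis=1) can use it without a pre-computed helper column:

import pandas as pd

df = pd.DataFrame({'st_up': [1.0, 4.0, 2.0], 'st_trendup': [3.0, 1.0, 5.0]})

# row-wise max of st_up and the previous row's st_trendup
rowmax = df[['st_up', 'st_trendup']].assign(st_trendup=df['st_trendup'].shift()).max(axis=1)
print(rowmax)  # 1.0, 4.0, 2.0 (row 0 falls back to st_up because the shifted value is NaN)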

How to plot the loss from the log file

Below are the loss values generated in the file 'log' (the actual run has more iterations than I list here). How do I plot Iteration (x-axis) vs Loss (y-axis) from the contents of the 'log' file?
0: combined_hm_loss: 0.17613089
1: combined_hm_loss: 0.20243575
2: combined_hm_loss: 0.07203530
3: combined_hm_loss: 0.03444689
4: combined_hm_loss: 0.02623464
5: combined_hm_loss: 0.02061908
6: combined_hm_loss: 0.01562270
7: combined_hm_loss: 0.01253260
8: combined_hm_loss: 0.01102418
9: combined_hm_loss: 0.00958306
10: combined_hm_loss: 0.00824807
11: combined_hm_loss: 0.00694697
12: combined_hm_loss: 0.00640630
13: combined_hm_loss: 0.00593691
14: combined_hm_loss: 0.00521284
15: combined_hm_loss: 0.00445185
16: combined_hm_loss: 0.00408901
17: combined_hm_loss: 0.00377806
18: combined_hm_loss: 0.00314004
19: combined_hm_loss: 0.00287649
Try this:
import io

import pandas as pd
import matplotlib.pyplot as plt
data = '''
index combined_hm_loss
0: 0.17613089
1: 0.20243575
2: 0.07203530
3: 0.03444689
4: 0.02623464
5: 0.02061908
6: 0.01562270
7: 0.01253260
8: 0.01102418
9: 0.00958306
10: 0.00824807
11: 0.00694697
12: 0.00640630
13: 0.00593691
14: 0.00521284
15: 0.00445185
16: 0.00408901
17: 0.00377806
18: 0.00314004
19: 0.00287649
'''
# parse the whitespace-separated table pasted above
df = pd.read_csv(io.StringIO(data), delim_whitespace=True)
ax = df.plot.area(y='combined_hm_loss')
ax.invert_yaxis()
plt.show()
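Alternatively, a minimal sketch that reads the original log lines of the form "N: combined_hm_loss: value" directly, assuming the file is literally named log and every relevant line matches that pattern:

import re
import matplotlib.pyplot as plt

iterations, losses = [], []
with open('log') as f:
    for line in f:
        # e.g. "0: combined_hm_loss: 0.17613089"
        m = re.match(r'\s*(\d+):\s*combined_hm_loss:\s*([0-9.eE+-]+)', line)
        if m:
            iterations.append(int(m.group(1)))
            losses.append(float(m.group(2)))

plt.plot(iterations, losses)
plt.xlabel('Iteration')
plt.ylabel('Loss')
plt.show()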

How to create a multiple line graph using seaborn and find a rate?

I need help creating a multiple line graph from the DataFrame below:
num user_id first_result second_result result date point1 point2 point3 point4
0 0 1480R clear clear pass 9/19/2016 clear consider clear consider
1 1 419M consider consider fail 5/18/2016 consider consider clear clear
2 2 416N consider consider fail 11/15/2016 consider consider consider consider
3 3 1913I consider consider fail 11/25/2016 consider consider consider clear
4 4 1938T clear clear pass 8/1/2016 clear consider clear clear
5 5 1530C clear clear pass 6/22/2016 clear clear consider clear
6 6 1075L consider consider fail 9/13/2016 consider consider clear consider
7 7 1466N consider clear fail 6/21/2016 consider clear clear consider
8 8 662V consider consider fail 11/1/2016 consider consider clear consider
9 9 1187Y consider consider fail 9/13/2016 consider consider clear clear
10 10 138T consider consider fail 9/19/2016 consider clear consider consider
11 11 1461Z consider clear fail 7/18/2016 consider consider clear consider
12 12 807N consider clear fail 8/16/2016 consider consider clear clear
13 13 416Y consider consider fail 10/2/2016 consider clear clear clear
14 14 638A consider clear fail 6/21/2016 consider clear consider clear
Data file link: data.xlsx, or the data as a dict:
data = {'num': {0: 0,
1: 1,
2: 2,
3: 3,
4: 4,
5: 5,
6: 6,
7: 7,
8: 8,
9: 9,
10: 10,
11: 11,
12: 12,
13: 13,
14: 14},
'user_id': {0: '1480R',
1: '419M',
2: '416N',
3: '1913I',
4: '1938T',
5: '1530C',
6: '1075L',
7: '1466N',
8: '662V',
9: '1187Y',
10: '138T',
11: '1461Z',
12: '807N',
13: '416Y',
14: '638A'},
'first_result': {0: 'clear',
1: 'consider',
2: 'consider',
3: 'consider',
4: 'clear',
5: 'clear',
6: 'consider',
7: 'consider',
8: 'consider',
9: 'consider',
10: 'consider',
11: 'consider',
12: 'consider',
13: 'consider',
14: 'consider'},
'second_result': {0: 'clear',
1: 'consider',
2: 'consider',
3: 'consider',
4: 'clear',
5: 'clear',
6: 'consider',
7: 'clear',
8: 'consider',
9: 'consider',
10: 'consider',
11: 'clear',
12: 'clear',
13: 'consider',
14: 'clear'},
'result': {0: 'pass',
1: 'fail',
2: 'fail',
3: 'fail',
4: 'pass',
5: 'pass',
6: 'fail',
7: 'fail',
8: 'fail',
9: 'fail',
10: 'fail',
11: 'fail',
12: 'fail',
13: 'fail',
14: 'fail'},
'date': {0: '9/19/2016',
1: '5/18/2016',
2: '11/15/2016',
3: '11/25/2016',
4: '8/1/2016',
5: '6/22/2016',
6: '9/13/2016',
7: '6/21/2016',
8: '11/1/2016',
9: '9/13/2016',
10: '9/19/2016',
11: '7/18/2016',
12: '8/16/2016',
13: '10/2/2016',
14: '6/21/2016'},
'point1': {0: 'clear',
1: 'consider',
2: 'consider',
3: 'consider',
4: 'clear',
5: 'clear',
6: 'consider',
7: 'consider',
8: 'consider',
9: 'consider',
10: 'consider',
11: 'consider',
12: 'consider',
13: 'consider',
14: 'consider'},
'point2': {0: 'consider',
1: 'consider',
2: 'consider',
3: 'consider',
4: 'consider',
5: 'clear',
6: 'consider',
7: 'clear',
8: 'consider',
9: 'consider',
10: 'clear',
11: 'consider',
12: 'consider',
13: 'clear',
14: 'clear'},
'point3': {0: 'clear',
1: 'clear',
2: 'consider',
3: 'consider',
4: 'clear',
5: 'consider',
6: 'clear',
7: 'clear',
8: 'clear',
9: 'clear',
10: 'consider',
11: 'clear',
12: 'clear',
13: 'clear',
14: 'consider'},
'point4': {0: 'consider',
1: 'clear',
2: 'consider',
3: 'clear',
4: 'clear',
5: 'clear',
6: 'consider',
7: 'consider',
8: 'consider',
9: 'clear',
10: 'consider',
11: 'consider',
12: 'clear',
13: 'clear',
14: 'clear'}
}
I need to create a bar graph and a line graph. I have created the bar graph using point1, where x = consider, clear and y = the count of consider and clear, but I have no idea how to create a line graph for this scenario:
x = date
y = pass rate (%)
The pass rate is the number of clear / (consider + clear).
Graph the rate for first_result, second_result, and result, all on the same graph. (The example image of the desired graph is not reproduced here.)
Please comment or answer on how I can do it. If I can just get an idea of how to group the dates and compute the ratio, that would also be great.
Here's my idea of how to do it:
# first convert all `clear`, `consider` to 1,0
tmp_df = df[['first_result', 'second_result']].apply(lambda x: x.eq('clear').astype(int))
# convert `pass`, `fail` to 1,0
tmp_df['result'] = df.result.eq('pass').astype(int)
# copy the date
tmp_df['date'] = df['date']
# groupby and compute mean, i.e. number_pass/total_count
tmp_df = tmp_df.groupby('date').mean()
tmp_df.plot()
(Output plot for this dataset not reproduced here.)
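One caveat worth adding to the approach above: date is a string column, so groupby orders the dates lexicographically ('10/2/2016' sorts before '5/18/2016'). Parsing the dates first keeps the x-axis chronological. A minimal sketch under that assumption, where data is the dict from the question:

import pandas as pd
import matplotlib.pyplot as plt

df = pd.DataFrame(data)

# 1 for clear/pass, 0 for consider/fail
tmp_df = df[['first_result', 'second_result']].apply(lambda col: col.eq('clear').astype(int))
tmp_df['result'] = df['result'].eq('pass').astype(int)

# parse the date strings so grouping and plotting follow calendar order
tmp_df['date'] = pd.to_datetime(df['date'])

# mean of the 0/1 columns per date = clear / (clear + consider), i.e. the pass rate
tmp_df.groupby('date').mean().plot()
plt.ylabel('pass rate')
plt.show()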