Hive Runtime Error: Map local work exhausted memory

I am trying to join two ORC tables in Hive but I get an error. Here is the query:
select t1.num as num, t1.product as Product, t2.value as OldValue, t1.value as NewValue
from test_new t1
left outer join test_old t2
  on t1.num = t2.num and t1.product = t2.product
where (t2.value is null and t1.value is not null) or t1.value <> t2.value;
Error:
2017-05-29 11:19:27,157 INFO [main]: mr.ExecDriver (SessionState.java:printInfo(911)) - Execution log at: /tmp/alex/kaliamoorthya_20170529111919_6621dd64-7a5e-4411-abda-b28fddab8bdc.log
2017-05-29 11:19:27,320 INFO [main]: log.PerfLogger (PerfLogger.java:PerfLogBegin(118)) - <PERFLOG method=deserializePlan from=org.apache.hadoop.hive.ql.exec.Utilities>
2017-05-29 11:19:27,321 INFO [main]: exec.Utilities (Utilities.java:deserializePlan(953)) - Deserializing MapredLocalWork via kryo
2017-05-29 11:19:27,462 INFO [main]: log.PerfLogger (PerfLogger.java:PerfLogEnd(158)) - </PERFLOG method=deserializePlan start=1496056767320 end=1496056767462 duration=142 from=org.apache.hadoop.hive.ql.exec.Utilities>
2017-05-29 11:19:27,472 INFO [main]: mr.MapredLocalTask (SessionState.java:printInfo(911)) - 2017-05-29 11:19:27 Starting to launch local task to process map join; maximum memory = 1908932608
2017-05-29 11:19:27,549 INFO [main]: mr.MapredLocalTask (MapredLocalTask.java:initializeOperators(441)) - fetchoperator for t2 created
2017-05-29 11:19:27,550 INFO [main]: exec.TableScanOperator (Operator.java:initialize(346)) - Initializing Self TS[0]
2017-05-29 11:19:27,550 INFO [main]: exec.TableScanOperator (Operator.java:initializeChildren(419)) - Operator 0 TS initialized
2017-05-29 11:19:27,550 INFO [main]: exec.TableScanOperator (Operator.java:initializeChildren(423)) - Initializing children of 0 TS
2017-05-29 11:19:27,550 INFO [main]: exec.HashTableSinkOperator (Operator.java:initialize(458)) - Initializing child 1 HASHTABLESINK
2017-05-29 11:19:27,550 INFO [main]: exec.HashTableSinkOperator (Operator.java:initialize(346)) - Initializing Self HASHTABLESINK[1]
2017-05-29 11:19:27,551 INFO [main]: mapjoin.MapJoinMemoryExhaustionHandler (MapJoinMemoryExhaustionHandler.java:<init>(61)) - JVM Max Heap Size: 1908932608
2017-05-29 11:19:27,582 INFO [main]: persistence.HashMapWrapper (HashMapWrapper.java:calculateTableSize(94)) - Key count from statistics is -1; setting map size to 100000
2017-05-29 11:19:27,582 INFO [main]: exec.HashTableSinkOperator (Operator.java:initialize(394)) - Initialization Done 1 HASHTABLESINK
2017-05-29 11:19:27,582 INFO [main]: exec.TableScanOperator (Operator.java:initialize(394)) - Initialization Done 0 TS
2017-05-29 11:19:27,582 INFO [main]: mr.MapredLocalTask (MapredLocalTask.java:initializeOperators(461)) - fetchoperator for t2 initialized
2017-05-29 11:19:28,059 INFO [main]: Configuration.deprecation (Configuration.java:warnOnceIfDeprecated(1174)) - mapred.input.dir is deprecated. Instead, use mapreduce.input.fileinputformat.inputdir
2017-05-29 11:19:28,062 INFO [main]: log.PerfLogger (PerfLogger.java:PerfLogBegin(118)) - <PERFLOG method=OrcGetSplits from=org.apache.hadoop.hive.ql.io.orc.ReaderImpl>
2017-05-29 11:19:28,098 INFO [main]: orc.OrcInputFormat (OrcInputFormat.java:generateSplitsInfo(961)) - FooterCacheHitRatio: 0/4
2017-05-29 11:19:28,098 INFO [main]: log.PerfLogger (PerfLogger.java:PerfLogEnd(158)) - </PERFLOG method=OrcGetSplits start=1496056768062 end=1496056768098 duration=36 from=org.apache.hadoop.hive.ql.io.orc.ReaderImpl>
2017-05-29 11:19:28,209 INFO [main]: orc.OrcRawRecordMerger (OrcRawRecordMerger.java:<init>(430)) - min key = null, max key = null
2017-05-29 11:19:28,209 INFO [main]: orc.ReaderImpl (ReaderImpl.java:rowsOptions(526)) - Reading ORC rows from hdfs://nameservice1/user/hive/warehouse/alex_tmp.db/test_old/000000_0 with {include: [true, true, true, true], offset: 0, length: 9223372036854775807}
2017-05-29 11:19:28,646 INFO [main]: exec.HashTableSinkOperator (SessionState.java:printInfo(911)) - 2017-05-29 11:19:28 Processing rows: 200000 Hashtable size: 199999 Memory usage: 130784248 percentage: 0.069
2017-05-29 11:19:28,708 INFO [main]: exec.HashTableSinkOperator (SessionState.java:printInfo(911)) - 2017-05-29 11:19:28 Processing rows: 300000 Hashtable size: 299999 Memory usage: 159462144 percentage: 0.084
2017-05-29 11:19:28,784 INFO [main]: exec.HashTableSinkOperator (SessionState.java:printInfo(911)) - 2017-05-29 11:19:28 Processing rows: 400000 Hashtable size: 399999 Memory usage: 207258624 percentage: 0.109
2017-05-29 11:19:28,843 INFO [main]: exec.HashTableSinkOperator (SessionState.java:printInfo(911)) - 2017-05-29 11:19:28 Processing rows: 500000 Hashtable size: 499999 Memory usage: 235936520 percentage: 0.124
2017-05-29 11:19:28,903 INFO [main]: exec.HashTableSinkOperator (SessionState.java:printInfo(911)) - 2017-05-29 11:19:28 Processing rows: 600000 Hashtable size: 599999 Memory usage: 274173712 percentage: 0.144
2017-05-29 11:19:28,965 INFO [main]: exec.HashTableSinkOperator (SessionState.java:printInfo(911)) - 2017-05-29 11:19:28 Processing rows: 700000 Hashtable size: 699999 Memory usage: 312410896 percentage: 0.164
2017-05-29 11:19:29,059 INFO [main]: exec.HashTableSinkOperator (SessionState.java:printInfo(911)) - 2017-05-29 11:19:29 Processing rows: 800000 Hashtable size: 799999 Memory usage: 359036720 percentage: 0.188
2017-05-29 11:19:29,126 INFO [main]: exec.HashTableSinkOperator (SessionState.java:printInfo(911)) - 2017-05-29 11:19:29 Processing rows: 900000 Hashtable size: 899999 Memory usage: 397273912 percentage: 0.208
2017-05-29 11:19:29,196 INFO [main]: exec.HashTableSinkOperator (SessionState.java:printInfo(911)) - 2017-05-29 11:19:29 Processing rows: 1000000 Hashtable size: 999999 Memory usage: 425951800 percentage: 0.223
2017-05-29 11:19:29,263 INFO [main]: exec.HashTableSinkOperator (SessionState.java:printInfo(911)) - 2017-05-29 11:19:29 Processing rows: 1100000 Hashtable size: 1099999 Memory usage: 464188992 percentage: 0.243
2017-05-29 11:19:29,333 INFO [main]: exec.HashTableSinkOperator (SessionState.java:printInfo(911)) - 2017-05-29 11:19:29 Processing rows: 1200000 Hashtable size: 1199999 Memory usage: 502426176 percentage: 0.263
2017-05-29 11:19:29,401 INFO [main]: exec.HashTableSinkOperator (SessionState.java:printInfo(911)) - 2017-05-29 11:19:29 Processing rows: 1300000 Hashtable size: 1299999 Memory usage: 540663360 percentage: 0.283
2017-05-29 11:19:32,752 INFO [main]: exec.HashTableSinkOperator (SessionState.java:printInfo(911)) - 2017-05-29 11:19:32 Processing rows: 1400000 Hashtable size: 1399999 Memory usage: 485809696 percentage: 0.254
2017-05-29 11:19:32,817 INFO [main]: exec.HashTableSinkOperator (SessionState.java:printInfo(911)) - 2017-05-29 11:19:32 Processing rows: 1500000 Hashtable size: 1499999 Memory usage: 524582216 percentage: 0.275
2017-05-29 11:19:32,937 INFO [main]: exec.HashTableSinkOperator (SessionState.java:printInfo(911)) - 2017-05-29 11:19:32 Processing rows: 1600000 Hashtable size: 1599999 Memory usage: 580131976 percentage: 0.304
2017-05-29 11:19:32,998 INFO [main]: exec.HashTableSinkOperator (SessionState.java:printInfo(911)) - 2017-05-29 11:19:32 Processing rows: 1700000 Hashtable size: 1699999 Memory usage: 618904496 percentage: 0.324
2017-05-29 11:19:33,061 INFO [main]: exec.HashTableSinkOperator (SessionState.java:printInfo(911)) - 2017-05-29 11:19:33 Processing rows: 1800000 Hashtable size: 1799999 Memory usage: 647983888 percentage: 0.339
2017-05-29 11:19:33,124 INFO [main]: exec.HashTableSinkOperator (SessionState.java:printInfo(911)) - 2017-05-29 11:19:33 Processing rows: 1900000 Hashtable size: 1899999 Memory usage: 686756400 percentage: 0.36
2017-05-29 11:19:33,188 INFO [main]: exec.HashTableSinkOperator (SessionState.java:printInfo(911)) - 2017-05-29 11:19:33 Processing rows: 2000000 Hashtable size: 1999999 Memory usage: 725528920 percentage: 0.38
2017-05-29 11:19:33,253 INFO [main]: exec.HashTableSinkOperator (SessionState.java:printInfo(911)) - 2017-05-29 11:19:33 Processing rows: 2100000 Hashtable size: 2099999 Memory usage: 764301440 percentage: 0.40
2017-05-29 11:19:33,316 INFO [main]: exec.HashTableSinkOperator (SessionState.java:printInfo(911)) - 2017-05-29 11:19:33 Processing rows: 2200000 Hashtable size: 2199999 Memory usage: 793380824 percentage: 0.416
2017-05-29 11:19:33,380 INFO [main]: exec.HashTableSinkOperator (SessionState.java:printInfo(911)) - 2017-05-29 11:19:33 Processing rows: 2300000 Hashtable size: 2299999 Memory usage: 832153336 percentage: 0.436
2017-05-29 11:19:33,445 INFO [main]: exec.HashTableSinkOperator (SessionState.java:printInfo(911)) - 2017-05-29 11:19:33 Processing rows: 2400000 Hashtable size: 2399999 Memory usage: 870925856 percentage: 0.456
2017-05-29 11:19:33,510 INFO [main]: exec.HashTableSinkOperator (SessionState.java:printInfo(911)) - 2017-05-29 11:19:33 Processing rows: 2500000 Hashtable size: 2499999 Memory usage: 909698376 percentage: 0.477
2017-05-29 11:19:33,574 INFO [main]: exec.HashTableSinkOperator (SessionState.java:printInfo(911)) - 2017-05-29 11:19:33 Processing rows: 2600000 Hashtable size: 2599999 Memory usage: 938777776 percentage: 0.492
2017-05-29 11:19:38,930 INFO [main]: exec.HashTableSinkOperator (SessionState.java:printInfo(911)) - 2017-05-29 11:19:38 Processing rows: 2700000 Hashtable size: 2699999 Memory usage: 924140056 percentage: 0.484
2017-05-29 11:19:38,996 INFO [main]: exec.HashTableSinkOperator (SessionState.java:printInfo(911)) - 2017-05-29 11:19:38 Processing rows: 2800000 Hashtable size: 2799999 Memory usage: 960610440 percentage: 0.503
2017-05-29 11:19:39,063 INFO [main]: exec.HashTableSinkOperator (SessionState.java:printInfo(911)) - 2017-05-29 11:19:39 Processing rows: 2900000 Hashtable size: 2899999 Memory usage: 997080808 percentage: 0.522
2017-05-29 11:19:39,134 INFO [main]: exec.HashTableSinkOperator (SessionState.java:printInfo(911)) - 2017-05-29 11:19:39 Processing rows: 3000000 Hashtable size: 2999999 Memory usage: 1033551200 percentage: 0.541
2017-05-29 11:19:39,203 INFO [main]: exec.HashTableSinkOperator (SessionState.java:printInfo(911)) - 2017-05-29 11:19:39 Processing rows: 3100000 Hashtable size: 3099999 Memory usage: 1070021576 percentage: 0.561
2017-05-29 11:19:39,392 INFO [main]: exec.HashTableSinkOperator (SessionState.java:printInfo(911)) - 2017-05-29 11:19:39 Processing rows: 3200000 Hashtable size: 3199999 Memory usage: 1140046400 percentage: 0.597
2017-05-29 11:19:39,456 INFO [main]: exec.HashTableSinkOperator (SessionState.java:printInfo(911)) - 2017-05-29 11:19:39 Processing rows: 3300000 Hashtable size: 3299999 Memory usage: 1176516784 percentage: 0.616
2017-05-29 11:19:39,519 INFO [main]: exec.HashTableSinkOperator (SessionState.java:printInfo(911)) - 2017-05-29 11:19:39 Processing rows: 3400000 Hashtable size: 3399999 Memory usage: 1212987168 percentage: 0.635
2017-05-29 11:19:39,583 INFO [main]: exec.HashTableSinkOperator (SessionState.java:printInfo(911)) - 2017-05-29 11:19:39 Processing rows: 3500000 Hashtable size: 3499999 Memory usage: 1249457552 percentage: 0.655
2017-05-29 11:19:39,646 INFO [main]: exec.HashTableSinkOperator (SessionState.java:printInfo(911)) - 2017-05-29 11:19:39 Processing rows: 3600000 Hashtable size: 3599999 Memory usage: 1285927936 percentage: 0.674
2017-05-29 11:19:39,710 INFO [main]: exec.HashTableSinkOperator (SessionState.java:printInfo(911)) - 2017-05-29 11:19:39 Processing rows: 3700000 Hashtable size: 3699999 Memory usage: 1322398320 percentage: 0.693
2017-05-29 11:19:39,774 INFO [main]: exec.HashTableSinkOperator (SessionState.java:printInfo(911)) - 2017-05-29 11:19:39 Processing rows: 3800000 Hashtable size: 3799999 Memory usage: 1358868704 percentage: 0.712
2017-05-29 11:19:39,837 INFO [main]: exec.HashTableSinkOperator (SessionState.java:printInfo(911)) - 2017-05-29 11:19:39 Processing rows: 3900000 Hashtable size: 3899999 Memory usage: 1395339088 percentage: 0.731
2017-05-29 11:19:39,904 INFO [main]: exec.HashTableSinkOperator (SessionState.java:printInfo(911)) - 2017-05-29 11:19:39 Processing rows: 4000000 Hashtable size: 3999999 Memory usage: 1431809456 percentage: 0.75
2017-05-29 11:19:39,973 INFO [main]: exec.HashTableSinkOperator (SessionState.java:printInfo(911)) - 2017-05-29 11:19:39 Processing rows: 4100000 Hashtable size: 4099999 Memory usage: 1468279832 percentage: 0.769
2017-05-29 11:19:40,041 INFO [main]: exec.HashTableSinkOperator (SessionState.java:printInfo(911)) - 2017-05-29 11:19:40 Processing rows: 4200000 Hashtable size: 4199999 Memory usage: 1504750200 percentage: 0.788
2017-05-29 11:19:40,113 INFO [main]: exec.HashTableSinkOperator (SessionState.java:printInfo(911)) - 2017-05-29 11:19:40 Processing rows: 4300000 Hashtable size: 4299999 Memory usage: 1538933512 percentage: 0.806
2017-05-29 11:19:48,786 INFO [main]: exec.HashTableSinkOperator (SessionState.java:printInfo(911)) - 2017-05-29 11:19:48 Processing rows: 4400000 Hashtable size: 4399999 Memory usage: 1496365384 percentage: 0.784
2017-05-29 11:19:48,850 INFO [main]: exec.HashTableSinkOperator (SessionState.java:printInfo(911)) - 2017-05-29 11:19:48 Processing rows: 4500000 Hashtable size: 4499999 Memory usage: 1532580448 percentage: 0.803
2017-05-29 11:19:48,915 INFO [main]: exec.HashTableSinkOperator (SessionState.java:printInfo(911)) - 2017-05-29 11:19:48 Processing rows: 4600000 Hashtable size: 4599999 Memory usage: 1568795512 percentage: 0.822
2017-05-29 11:19:48,979 INFO [main]: exec.HashTableSinkOperator (SessionState.java:printInfo(911)) - 2017-05-29 11:19:48 Processing rows: 4700000 Hashtable size: 4699999 Memory usage: 1605010584 percentage: 0.841
2017-05-29 11:19:49,044 INFO [main]: exec.HashTableSinkOperator (SessionState.java:printInfo(911)) - 2017-05-29 11:19:49 Processing rows: 4800000 Hashtable size: 4799999 Memory usage: 1641225648 percentage: 0.86
2017-05-29 11:19:49,108 INFO [main]: exec.HashTableSinkOperator (SessionState.java:printInfo(911)) - 2017-05-29 11:19:49 Processing rows: 4900000 Hashtable size: 4899999 Memory usage: 1677440712 percentage: 0.879
2017-05-29 11:19:49,171 INFO [main]: exec.HashTableSinkOperator (SessionState.java:printInfo(911)) - 2017-05-29 11:19:49 Processing rows: 5000000 Hashtable size: 4999999 Memory usage: 1713655784 percentage: 0.898
2017-05-29 11:19:49,235 INFO [main]: exec.HashTableSinkOperator (SessionState.java:printInfo(911)) - 2017-05-29 11:19:49 Processing rows: 5100000 Hashtable size: 5099999 Memory usage: 1749870856 percentage: 0.917
2017-05-29 11:19:49,246 ERROR [main]: mr.MapredLocalTask (MapredLocalTask.java:executeInProcess(354)) - Hive Runtime Error: Map local work exhausted memory
org.apache.hadoop.hive.ql.exec.mapjoin.MapJoinMemoryExhaustionException: 2017-05-29 11:19:49 Processing rows: 5100000 Hashtable size: 5099999 Memory usage: 1749870856 percentage: 0.917
at org.apache.hadoop.hive.ql.exec.mapjoin.MapJoinMemoryExhaustionHandler.checkMemoryStatus(MapJoinMemoryExhaustionHandler.java:99)
at org.apache.hadoop.hive.ql.exec.HashTableSinkOperator.processOp(HashTableSinkOperator.java:249)
at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:815)
at org.apache.hadoop.hive.ql.exec.TableScanOperator.processOp(TableScanOperator.java:95)
at org.apache.hadoop.hive.ql.exec.mr.MapredLocalTask.startForward(MapredLocalTask.java:409)
at org.apache.hadoop.hive.ql.exec.mr.MapredLocalTask.startForward(MapredLocalTask.java:380)
at org.apache.hadoop.hive.ql.exec.mr.MapredLocalTask.executeInProcess(MapredLocalTask.java:346)
at org.apache.hadoop.hive.ql.exec.mr.ExecDriver.main(ExecDriver.java:743)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
I have also tried setting the map memory and reduce memory to 22000, but still no luck.
Searching online, I found a suggestion to set the property hive.auto.convert.join = false in Hive to overcome this error, and with that my query started to run.
I am not sure whether running my query this way sacrifices performance. Would the performance still be the same? Is there any other way to fix the problem? Please suggest some ideas for improving the performance of the query.

Your first and safest option is to set hive.auto.convert.join = false. You give up some performance because you won't benefit from map joins, but how big a compromise that is depends entirely on your use case and your data size.
The other option is to tune hive.auto.convert.join.noconditionaltask.size, which according to https://cwiki.apache.org/confluence/display/Hive/LanguageManual+JoinOptimization "enables the user to control what size table can fit in memory". Finding the right threshold can be challenging, though.
P.S. Just keep in mind that for hive.auto.convert.join.noconditionaltask.size to take effect, hive.auto.convert.join.noconditionaltask needs to be true (which it is by default).
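A sketch of the two options, to be run in the Hive session before the query. The property names are from the Hive wiki linked above; the size value is purely illustrative, not a recommendation:

```sql
-- Option 1: disable map-join conversion entirely (safe, but slower)
set hive.auto.convert.join = false;

-- Option 2: keep map joins but cap the small-table size.
-- The threshold only takes effect while
-- hive.auto.convert.join.noconditionaltask = true (the default).
set hive.auto.convert.join = true;
set hive.auto.convert.join.noconditionaltask = true;
set hive.auto.convert.join.noconditionaltask.size = 200000000; -- bytes, illustrative value
```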

Related

I want to present bank transactions as a passbook from the given data

TR_DATE    | ACC_NAME | TYPE | AMOUNT
-----------|----------|------|-------
01-01-2017 | AVNEESH  | CR   | 60000
02-01-2017 | AVNEESH  | DB   | 8000
03-01-2017 | AVNEESH  | CR   | 8000
04-01-2017 | AVNEESH  | DB   | 5000
01-01-2017 | NUPUR    | CR   | 10000
02-01-2017 | NUPUR    | DB   | 8000
03-01-2017 | NUPUR    | CR   | 8000
And the expected result for the above data is:
TR_DATE    | ACC_NAME | TYPE | AMOUNT | BALANCE
-----------|----------|------|--------|--------
01-01-2017 | AVNEESH  | CR   | 60000  | 60000
02-01-2017 | AVNEESH  | DB   | 8000   | 52000
03-01-2017 | AVNEESH  | CR   | 8000   | 60000
04-01-2017 | AVNEESH  | DB   | 5000   | 55000
01-01-2017 | NUPUR    | CR   | 10000  | 10000
02-01-2017 | NUPUR    | DB   | 8000   | 2000
03-01-2017 | NUPUR    | CR   | 8000   | 10000
You can use the analytic version of the sum() function, with a case expression to turn debits into negative amounts, and a window clause to apply the sum to amounts up to the current row's date:
select tr_date, acc_name, type, amount,
sum(case when type = 'DB' then -1 else 1 end * amount)
over (partition by acc_name order by tr_date) as balance
from passbook
order by acc_name, tr_date
TR_DATE    | ACC_NAME | TYPE | AMOUNT | BALANCE
-----------|----------|------|--------|--------
2017-01-01 | AVNEESH  | CR   | 60000  | 60000
2017-01-02 | AVNEESH  | DB   | 8000   | 52000
2017-01-03 | AVNEESH  | CR   | 8000   | 60000
2017-01-04 | AVNEESH  | DB   | 5000   | 55000
2017-01-01 | NUPUR    | CR   | 10000  | 10000
2017-01-02 | NUPUR    | DB   | 8000   | 2000
2017-01-03 | NUPUR    | CR   | 8000   | 10000

How do I take multiple slices of a dataframe based on the value in single cells (row/column intersection)?

I am trying to analyse a time series of stock prices and want to take slices of the dataframe where the value in a column meets certain criteria.
For the purpose of illustration, consider the following:
I have a time series dataframe of the stock price of Musgrave Minerals and am looking for occasions on which the share price increased by 20% or more within three days. That part is simple, and I have the following code to achieve it:
import pandas_datareader.data as web
df = web.DataReader('MGV.AX', 'yahoo' )
df_20 = df[df.pct_change(periods = 3)['Adj Close'] >= 0.2]
df_20
High Low Open Close Volume Adj Close
Date
2018-10-28 0.081 0.064 0.064 0.075 11177031 0.075
2018-10-29 0.081 0.074 0.074 0.079 4327736 0.079
2018-10-30 0.079 0.074 0.078 0.078 1273789 0.078
2019-07-04 0.068 0.066 0.067 0.068 3431060 0.068
2019-12-02 0.100 0.083 0.086 0.096 12055640 0.096
2020-05-19 0.135 0.125 0.125 0.130 1947179 0.130
2020-05-20 0.160 0.130 0.130 0.155 8608597 0.155
2020-05-21 0.180 0.150 0.150 0.175 8580197 0.175
2020-05-22 0.170 0.150 0.165 0.165 10210411 0.165
2020-06-03 0.290 0.215 0.270 0.260 36446028 0.260
2020-06-04 0.270 0.225 0.245 0.235 14611718 0.235
2020-06-05 0.265 0.235 0.250 0.240 10751744 0.240
2020-06-10 0.435 0.330 0.345 0.415 37203724 0.415
2020-06-11 0.465 0.380 0.420 0.380 42022170 0.380
2020-06-12 0.425 0.350 0.350 0.420 15112508 0.420
2021-05-19 0.430 0.390 0.390 0.420 5093006 0.420
So there were 16 occasions on which the criteria were met.
Now for each occurrence, I want to take a slice that extends 5 rows (trading days) before the event and 5 rows after the event. I can't find anything online that would assist me. Is there a way of doing this using Pandas?
Whilst I could export the dataframe to Excel and perform the analysis there, I'm sure that Python can provide a quicker and better solution.
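One possible sketch (not from the original post): map each event's date back to its integer position in the full frame with `Index.get_indexer`, then slice 5 rows either side with `iloc`. The toy `df`/`df_20` below stand in for the question's Yahoo data:

```python
import pandas as pd

# Toy stand-in for the price history (the question builds df from pandas_datareader)
df = pd.DataFrame(
    {"Adj Close": [float(i) for i in range(20)]},
    index=pd.date_range("2020-05-01", periods=20),
)
# Rows meeting some criterion, standing in for df_20 from the question
df_20 = df[df["Adj Close"] % 7 == 0]

# Integer positions of the matching rows within the full frame
positions = df.index.get_indexer(df_20.index)

# One slice per event: 5 rows before through 5 rows after (clipped at the edges)
slices = [df.iloc[max(p - 5, 0) : p + 6] for p in positions]
```

Each element of `slices` is a small dataframe of up to 11 rows centred on one event; events near the start or end of the series simply get shorter slices.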

Why does my calculation not match the MetaTrader 4 backtest loss?

I am writing an expert advisor and ran a test, and I cannot understand, or find information on, why the loss amount does not match my calculations. To do backtesting I need correct results.
So in the image we see there was a 0.79 lot position opened at 109.87 and closed at 109.939.
Here is the position size calculator showing the swap.
But when backtesting I am getting an even lower swap. Maybe the calculator indicator is showing the wrong value. With a lower swap the gap between expected loss and actual loss is even bigger.
So the daily swap eats 6.15 USD. The account is in USD. From Jan 22 to Feb 6 there have been 17 swaps. The commission, as you will see below in the log, is 3.16.
So my math is 6.15 * 17 + 52.5 + 3.16 = 160.21,
but the position loss is 175.48.
I also have printed swap each day:
2020.11.30 20:55:10.728 2020.02.06 00:00:00 adx USDJPY,H1: swap: -98.7114401
2020.11.30 20:55:10.728 2020.02.05 00:00:00 adx USDJPY,H1: swap: -81.29177420000001
2020.11.30 20:55:10.727 2020.02.04 00:00:00 adx USDJPY,H1: swap: -75.48521890000001
2020.11.30 20:55:10.727 2020.02.03 00:00:00 adx USDJPY,H1: swap: -69.67866360000001
2020.11.30 20:55:10.727 2020.01.31 00:00:00 adx USDJPY,H1: swap: -63.87210830000001
2020.11.30 20:55:10.721 2020.01.30 00:00:00 adx USDJPY,H1: swap: -58.06555300000001
2020.11.30 20:55:10.721 2020.01.29 00:00:00 adx USDJPY,H1: swap: -40.6458871
2020.11.30 20:55:10.721 2020.01.28 00:00:00 adx USDJPY,H1: swap: -34.8393318
2020.11.30 20:55:10.720 2020.01.27 00:00:00 adx USDJPY,H1: swap: -29.0327765
2020.11.30 20:55:10.720 2020.01.24 00:00:00 adx USDJPY,H1: swap: -23.2262212
2020.11.30 20:55:10.720 2020.01.23 00:00:00 adx USDJPY,H1: swap: -17.41966590000001
2020.11.30 20:55:10.719 2020.01.22 00:00:00 adx USDJPY,H1: commission -3.16
and it shows about 5.8 per day, which would make the gap even bigger if I used it in my calculation.
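Not from the original post, but the "about 5.8" figure can be recomputed from the cumulative values printed above. Two of the increments are triple-sized (presumably triple-swap days), so the final cumulative swap of -98.71 corresponds to 17 units of about 5.8066, consistent with the observation that using 5.8 instead of 6.15 widens the gap:

```python
# Cumulative swap values printed by the EA, oldest (2020.01.23) first,
# copied from the log above
swaps = [-17.41966590000001, -23.2262212, -29.0327765, -34.8393318,
         -40.6458871, -58.06555300000001, -63.87210830000001,
         -69.67866360000001, -75.48521890000001, -81.29177420000001,
         -98.7114401]

# Day-to-day increments between consecutive printed values;
# most are ~-5.8066, two are triple-sized (~-17.4197)
deltas = [b - a for a, b in zip(swaps, swaps[1:])]
```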

Rolling average of pandas timeseries with fixed duration window

In the docs it says
df.rolling('2s').sum()
is an option. However
ts = ts.rolling('900s').mean()
returns the error ValueError: window must be an integer
What's up with that? Can't share the whole timeseries, but this is a snippet:
2019-09-20 00:30:51.234260+00:00 4.63
2019-09-20 00:50:51.292892+00:00 4.40
2019-09-20 01:30:51.273058+00:00 4.54
2019-09-20 01:50:51.270876+00:00 4.44
2019-09-20 02:30:51.267385+00:00 4.55
...
2019-10-30 22:57:35.003066+00:00 1.71
2019-10-30 23:12:34.965801+00:00 1.61
2019-10-30 23:27:34.976495+00:00 1.56
2019-10-30 23:42:34.984976+00:00 1.26
2019-10-30 23:57:34.965543+00:00 1.05
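A likely cause (an assumption, not stated in the post): duration-based windows like '900s' require a datetime-like index (or, on a DataFrame, an `on=` column); with a default RangeIndex pandas insists on an integer window, hence the ValueError. A minimal sketch using a few of the timestamps above:

```python
import pandas as pd

vals = [4.63, 4.40, 4.54, 4.44, 4.55]

# On a plain RangeIndex a duration window is rejected
try:
    pd.Series(vals).rolling("900s").mean()
except ValueError as e:
    print(e)  # e.g. "window must be an integer" (wording varies by pandas version)

# With a DatetimeIndex the same duration window works
idx = pd.to_datetime([
    "2019-09-20 00:30:51", "2019-09-20 00:50:51", "2019-09-20 01:30:51",
    "2019-09-20 01:50:51", "2019-09-20 02:30:51",
])
ts = pd.Series(vals, index=idx)
smoothed = ts.rolling("900s").mean()  # 15-minute window ending at each point
```

If the timestamps live in a column rather than the index, `df.rolling('900s', on='timestamp_column')` is an alternative.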

WCF : Moving from IIS 7 to IIS 8

I have moved my WCF service from IIS 7 to IIS 8. I can browse to the .svc file, but I cannot reach any of the other methods through GET or POST; it shows the error below:
The server encountered an error processing the request. See server logs for more details.
The log file is shown below:
Software: Microsoft Internet Information Services 8.5
Version: 1.0
Date: 2014-12-17 04:25:48
Fields: date time s-ip cs-method cs-uri-stem cs-uri-query s-port cs-username c-ip cs(User-Agent) cs(Referer) sc-status sc-substatus sc-win32-status time-taken
2014-12-17 04:25:48 (ipaddress) GET /service - 786 - (ipaddress) Mozilla/5.0+(Windows+NT+6.3;+WOW64;+Trident/7.0;+rv:11.0)+like+Gecko - 301 0 0 120
2014-12-17 04:25:48 (ipaddress) GET /service/ - 786 - (ipaddress) Mozilla/5.0+(Windows+NT+6.3;+WOW64;+Trident/7.0;+rv:11.0)+like+Gecko - 200 0 0 3
2014-12-17 04:25:53 (ipaddress) GET /service/MposService.svc - 786 - (ipaddress) Mozilla/5.0+(Windows+NT+6.3;+WOW64;+Trident/7.0;+rv:11.0)+like+Gecko (ipaddress):786/service/ 200 0 0 904
2014-12-17 04:27:42 (ipaddress) GET /service/MposService.svc - 786 - publicip Mozilla/5.0+(Windows+NT+6.1;+WOW64)+AppleWebKit/537.36+(KHTML,+like+Gecko)+Chrome/39.0.2171.95+Safari/537.36 - 200 0 0 628
2014-12-17 04:27:42 (ipaddress) GET /favicon.ico - 786 - public ip Mozilla/5.0+(Windows+NT+6.1;+WOW64)+AppleWebKit/537.36+(KHTML,+like+Gecko)+Chrome/39.0.2171.95+Safari/537.36 - 404 0 2 470
2014-12-17 04:28:24 (ipaddress) GET /service/MposService.svc/getCustomer section=s1 786 - 117.213.26.161 Mozilla/5.0+(Windows+NT+6.1;+WOW64)+AppleWebKit/537.36+(KHTML,+like+Gecko)+Chrome/39.0.2171.95+Safari/537.36 - 400 0 0 640