Kafka Streams state store restoration problem (Kotlin)

We are using Kafka Streams for data aggregation. We aggregate about 1,000K records per day, our source and state-store changelog topics have 6 partitions, and the state store is about 50 GB in size.
To avoid long restoration times after restarting services, we use StatefulSet pods with 2 replicas.
But we are facing 2 issues:
1. During the restoration phase, the stream thread goes to the PENDING_SHUTDOWN state and then shuts down completely. After the pod restarts it begins restoration again, but the same thing happens and restoration fails. The final state is "State transition from PENDING_SHUTDOWN to DEAD", and after that the service does nothing.
Here is the full log:
10:49:20.001 | INFO | o.a.k.s.p.i.StreamThread | stream-thread [aggregator-7034b814-b6f5-4784-8ea5-01d2c402a4b5-StreamThread-1] State transition from PARTITIONS_ASSIGNED to PENDING_SHUTDOWN
10:49:20.001 | INFO | o.a.k.s.p.i.StreamThread | stream-thread [aggregator-7034b814-b6f5-4784-8ea5-01d2c402a4b5-StreamThread-1] Shutting down
10:49:20.003 | INFO | o.a.k.s.p.i.StandbyTask | stream-thread [aggregator-7034b814-b6f5-4784-8ea5-01d2c402a4b5-StreamThread-1] standby-task [0_5] Suspended running
10:49:20.008 | INFO | o.a.k.c.c.KafkaConsumer | [Consumer clientId=aggregator-7034b814-b6f5-4784-8ea5-01d2c402a4b5-StreamThread-1-restore-consumer, groupId=null] Subscribed to partition(s): aggregator-Stats-changelog-2
10:49:50.113 | INFO | o.a.k.s.p.i.StandbyTask | stream-thread [aggregator-7034b814-b6f5-4784-8ea5-01d2c402a4b5-StreamThread-1] standby-task [0_5] Closed dirty
10:49:50.113 | INFO | o.a.k.s.p.i.StreamTask | stream-thread [aggregator-7034b814-b6f5-4784-8ea5-01d2c402a4b5-StreamThread-1] task [0_2] Suspended RESTORING
10:49:50.115 | INFO | o.a.k.c.c.KafkaConsumer | [Consumer clientId=aggregator-7034b814-b6f5-4784-8ea5-01d2c402a4b5-StreamThread-1-restore-consumer, groupId=null] Unsubscribed all topics or patterns and assigned partitions
10:50:13.051 | INFO | o.a.k.s.p.i.RecordCollectorImpl | stream-thread [aggregator-7034b814-b6f5-4784-8ea5-01d2c402a4b5-StreamThread-1] task [0_2] Closing record collector dirty
10:50:13.052 | INFO | o.a.k.s.p.i.StreamTask | stream-thread [aggregator-7034b814-b6f5-4784-8ea5-01d2c402a4b5-StreamThread-1] task [0_2] Closed dirty
10:50:13.055 | INFO | o.a.k.c.p.KafkaProducer | [Producer clientId=aggregator-7034b814-b6f5-4784-8ea5-01d2c402a4b5-StreamThread-1-producer] Closing the Kafka producer with timeoutMillis = 9223372036854775807 ms.
10:50:13.061 | INFO | o.a.k.c.m.Metrics | Metrics scheduler closed
10:50:13.062 | INFO | o.a.k.c.m.Metrics | Closing reporter org.apache.kafka.common.metrics.JmxReporter
10:50:13.062 | INFO | o.a.k.c.m.Metrics | Metrics reporters closed
10:50:13.063 | INFO | o.a.k.c.u.AppInfoParser | App info kafka.producer for aggregator-7034b814-b6f5-4784-8ea5-01d2c402a4b5-StreamThread-1-producer unregistered
10:50:13.064 | INFO | o.a.k.c.c.KafkaConsumer | [Consumer clientId=aggregator-7034b814-b6f5-4784-8ea5-01d2c402a4b5-StreamThread-1-restore-consumer, groupId=null] Unsubscribed all topics or patterns and assigned partitions
10:50:13.064 | INFO | o.a.k.c.m.Metrics | Metrics scheduler closed
10:50:13.064 | INFO | o.a.k.c.m.Metrics | Closing reporter org.apache.kafka.common.metrics.JmxReporter
10:50:13.064 | INFO | o.a.k.c.m.Metrics | Metrics reporters closed
10:50:13.067 | INFO | o.a.k.c.u.AppInfoParser | App info kafka.consumer for aggregator-7034b814-b6f5-4784-8ea5-01d2c402a4b5-StreamThread-1-consumer unregistered
10:50:13.067 | INFO | o.a.k.c.m.Metrics | Metrics scheduler closed
10:50:13.067 | INFO | o.a.k.c.m.Metrics | Closing reporter org.apache.kafka.common.metrics.JmxReporter
10:50:13.067 | INFO | o.a.k.c.m.Metrics | Metrics reporters closed
10:50:13.070 | INFO | o.a.k.c.u.AppInfoParser | App info kafka.consumer for aggregator-7034b814-b6f5-4784-8ea5-01d2c402a4b5-StreamThread-1-restore-consumer unregistered
10:50:13.071 | INFO | o.a.k.s.p.i.StreamThread | stream-thread [aggregator-7034b814-b6f5-4784-8ea5-01d2c402a4b5-StreamThread-1] State transition from PENDING_SHUTDOWN to DEAD
10:50:13.071 | INFO | o.a.k.s.p.i.StreamThread | stream-thread [aggregator-7034b814-b6f5-4784-8ea5-01d2c402a4b5-StreamThread-1] Shutdown complete
2. During the restoration phase I check the state store size, but sometimes I see the size decrease. For example, partition 1's state store is 20 GB, but 30 seconds later, while restoration is still in progress, it is 17 GB! Is there any reason for this? (My state-store changelog topic has the compact cleanup policy. I think this is irrelevant, but it might be worth mentioning.)
I tried to fix issue 1 by setting REQUEST_TIMEOUT_MS_CONFIG to 2 minutes, decreasing MAX_POLL_RECORDS_CONFIG to 100, and raising MAX_POLL_INTERVAL_MS_CONFIG to 9 minutes, but I'm still facing this issue and restoration cannot be completed.
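For reference, a minimal sketch of the consumer overrides I described (plain property-name strings and illustrative values only; the helper name is just for this sketch, and in the real application the values are merged into the full Streams configuration):

```kotlin
import java.util.Properties

// Sketch of the consumer tuning mentioned above. Plain property names are
// used instead of the ConsumerConfig/StreamsConfig constants so the snippet
// stands alone; the values mirror the ones tried in the question.
fun restorationTuningProps(): Properties {
    val props = Properties()
    props["request.timeout.ms"] = 120_000   // 2 minutes
    props["max.poll.records"] = 100         // fewer records fetched per poll
    props["max.poll.interval.ms"] = 540_000 // 9 minutes allowed between polls
    return props
}
```

(When passing these to Kafka Streams, consumer-level settings can also be scoped with the `main.consumer.` / `restore.consumer.` prefixes.)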
thanks for your help :)
Edit: I enabled Kafka debug logging; here are the logs before shutting down:
23:52:33.422 | DEBUG | o.a.k.s.p.i.StreamThread | stream-thread [-aggregator-7a670705-853c-4ce9-a223-78ff5fe0a9be-StreamThread-1] Invoking poll on main Consumer
23:52:33.422 | DEBUG | o.a.k.s.p.i.StreamThread | stream-thread [-aggregator-7a670705-853c-4ce9-a223-78ff5fe0a9be-StreamThread-1] Main Consumer poll completed in 0 ms and fetched 0 records
23:52:33.422 | DEBUG | o.a.k.s.p.i.StreamThread | stream-thread [-aggregator-7a670705-853c-4ce9-a223-78ff5fe0a9be-StreamThread-1] State is PARTITIONS_ASSIGNED; initializing tasks if necessary
23:52:33. | DEBUG | o.a.k.s.p.i.StreamThread | stream-thread [-aggregator-7a670705-853c-4ce9-a223-78ff5fe0a9be-StreamThread-1] Initialization call done. State is PARTITIONS_ASSIGNED
23:52:33.422 | DEBUG | o.a.k.s.p.i.StreamThread | stream-thread [-aggregator-7a670705-853c-4ce9-a223-78ff5fe0a9be-StreamThread-1] Idempotently invoking restoration logic in state PARTITIONS_ASSIGNED
23:52:33.423 | INFO  | o.a.k.s.p.i.StreamThread | stream-thread [-aggregator-7a670705-853c-4ce9-a223-78ff5fe0a9be-StreamThread-1] State transition from PARTITIONS_ASSIGNED to PENDING_SHUTDOWN
23:52:33.423 | INFO  | o.a.k.s.p.i.StreamThread | stream-thread [-aggregator-7a670705-853c-4ce9-a223-78ff5fe0a9be-StreamThread-1] Shutting down
23:52:33.425 | DEBUG | o.a.k.s.p.i.StreamTask | stream-thread [-aggregator-7a670705-853c-4ce9-a223-78ff5fe0a9be-StreamThread-1] task [0_3] Skipped preparing RESTORING task for commit since there is nothing to commit
23:52:33.425 | INFO  | o.a.k.s.p.i.StreamTask | stream-thread [-aggregator-7a670705-853c-4ce9-a223-78ff5fe0a9be-StreamThread-1] task [0_3] Suspended RESTORING
23:52:33.431 | DEBUG | o.a.k.s.p.i.ProcessorStateManager | stream-thread [-aggregator-7a670705-853c-4ce9-a223-78ff5fe0a9be-StreamThread-1] task [0_3] Closing its state manager and all the registered state stores: {Stats=StateStoreMetadata (Stats : -aggregator-Stats-changelog-3 # 148773145}
23:52:33.432 | INFO  | o.a.k.c.c.KafkaConsumer | [Consumer clientId=-aggregator-7a670705-853c-4ce9-a223-78ff5fe0a9be-StreamThread-1-restore-consumer, groupId=null] Subscribed to partition(s): -aggregator-Stats-changelog-4, -aggregator-Stats-changelog-5, -aggregator-Stats-changelog-0, -aggregator-Stats-changelog-2, -aggregator-Stats-changelog-1
23:52:33.435 | DEBUG | o.a.k.s.s.i.m.RocksDBMetricsRecorder | [RocksDB Metrics Recorder for Stats] Removing value providers for store Stats of task 0_3
23:52:33.435 | DEBUG | o.a.k.s.s.i.m.RocksDBMetricsRecorder | [RocksDB Metrics Recorder for Stats] Removing metrics recorder for store Stats of task 0_3 from metrics recording trigger
23:52:35.193 | DEBUG | o.a.k.c.c.i.ConsumerCoordinator | [Consumer clientId=-aggregator-7a670705-853c-4ce9-a223-78ff5fe0a9be-StreamThread-1-consumer, groupId=-aggregator] Sending Heartbeat request with generation 111 and member id -aggregator-7a670705-853c-4ce9-a223-78ff5fe0a9be-StreamThread-1-consumer-e81c4fa1-8980-4e07-af8f-6de7ddeb2801 to coordinator 172.16.10.110:9092 (id: 2147483637 rack: null)
23:52:35.194 | DEBUG |
.
.
.
23:59:06.017 | DEBUG | o.a.k.c.c.i.ConsumerCoordinator | [Consumer clientId=-aggregator-7a670705-853c-4ce9-a223-78ff5fe0a9be-StreamThread-1-consumer, groupId=-aggregator] Received successful Heartbeat response
23:59:07.122 | DEBUG | o.a.k.s.p.i.StateDirectory | stream-thread [-aggregator-7a670705-853c-4ce9-a223-78ff5fe0a9be-StreamThread-1] Released state dir lock for task 0_2
23:59:07.122 | INFO  | o.a.k.s.p.i.RecordCollectorImpl | stream-thread [-aggregator-7a670705-853c-4ce9-a223-78ff5fe0a9be-StreamThread-1] task [0_2] Closing record collector dirty
23:59:07.122 | INFO  | o.a.k.s.p.i.StreamTask | stream-thread [-aggregator-7a670705-853c-4ce9-a223-78ff5fe0a9be-StreamThread-1] task [0_2] Closed dirty
23:59:07.122 | DEBUG | o.a.k.s.p.i.StreamTask | stream-thread [-aggregator-7a670705-853c-4ce9-a223-78ff5fe0a9be-StreamThread-1] task [0_0] Skipped preparing RESTORING task for commit since there is nothing to commit
23:59:07.122 | INFO  | o.a.k.s.p.i.StreamTask | stream-thread [-aggregator-7a670705-853c-4ce9-a223-78ff5fe0a9be-StreamThread-1] task [0_0] Suspended RESTORING
23:59:07.124 | DEBUG | o.a.k.s.p.i.ProcessorStateManager | stream-thread [-aggregator-7a670705-853c-4ce9-a223-78ff5fe0a9be-StreamThread-1] task [0_0] Closing its state manager and all the registered state stores: {Stats=StateStoreMetadata (Stats : -aggregator-Stats-changelog-0 # 214094739}
23:59:07.124 | INFO  | o.a.k.c.c.KafkaConsumer | [Consumer clientId=-aggregator-7a670705-853c-4ce9-a223-78ff5fe0a9be-StreamThread-1-restore-consumer, groupId=null] Subscribed to partition(s): -aggregator-Stats-changelog-4, -aggregator-Stats-changelog-1
23:59:07.124 | DEBUG | o.a.k.s.s.i.m.RocksDBMetricsRecorder | [RocksDB Metrics Recorder for Stats] Removing value providers for store Stats of task 0_0
23:59:07.124 | DEBUG | o.a.k.s.s.i.m.RocksDBMetricsRecorder | [RocksDB Metrics Recorder for Stats] Removing metrics recorder for store Stats of task 0_0 from metrics recording trigger
23:59:07.342 | DEBUG | o.a.k.s.p.i.StateDirectory | stream-thread [-aggregator-7a670705-853c-4ce9-a223-78ff5fe0a9be-StreamThread-1] Released state dir lock for task 0_0
23:59:07.342 | INFO  | o.a.k.s.p.i.RecordCollectorImpl | stream-thread [-aggregator-7a670705-853c-4ce9-a223-78ff5fe0a9be-StreamThread-1] task [0_0] Closing record collector dirty
23:59:07.342 | INFO  | o.a.k.s.p.i.StreamTask | stream-thread [-aggregator-7a670705-853c-4ce9-a223-78ff5fe0a9be-StreamThread-1] task [0_0] Closed dirty
23:59:07.342 | DEBUG | o.a.k.s.p.i.StreamTask | stream-thread [-aggregator-7a670705-853c-4ce9-a223-78ff5fe0a9be-StreamThread-1] task [0_4] Skipped preparing RESTORING task for commit since there is nothing to commit
23:59:07.343 | INFO  | o.a.k.s.p.i.StreamTask | stream-thread [-aggregator-7a670705-853c-4ce9-a223-78ff5fe0a9be-StreamThread-1] task [0_4] Suspended RESTORING
23:59:07.343 | DEBUG | o.a.k.s.p.i.ProcessorStateManager | stream-thread [-aggregator-7a670705-853c-4ce9-a223-78ff5fe0a9be-StreamThread-1] task [0_4] Closing its state manager and all the registered state stores: {Stats=StateStoreMetadata (Stats : -aggregator-Stats-changelog-4 # 196360274}
23:59:07.343 | INFO  | o.a.k.c.c.KafkaConsumer | [Consumer clientId=-aggregator-7a670705-853c-4ce9-a223-78ff5fe0a9be-StreamThread-1-restore-consumer, groupId=null] Subscribed to partition(s): -aggregator-Stats-changelog-1
23:59:07.344 | DEBUG | o.a.k.s.s.i.m.RocksDBMetricsRecorder | [RocksDB Metrics Recorder for Stats] Removing value providers for store Stats of task 0_4
23:59:07.344 | DEBUG | o.a.k.s.s.i.m.RocksDBMetricsRecorder | [RocksDB Metrics Recorder for Stats] Removing metrics recorder for store Stats of task 0_4 from metrics recording trigger
23:59:07.541 | DEBUG | o.a.k.s.p.i.StateDirectory | stream-thread [-aggregator-7a670705-853c-4ce9-a223-78ff5fe0a9be-StreamThread-1] Released state dir lock for task 0_4
23:59:07.541 | INFO  | o.a.k.s.p.i.RecordCollectorImpl | stream-thread [-aggregator-7a670705-853c-4ce9-a223-78ff5fe0a9be-StreamThread-1] task [0_4] Closing record collector dirty
23:59:07.541 | INFO  | o.a.k.s.p.i.StreamTask | stream-thread [-aggregator-7a670705-853c-4ce9-a223-78ff5fe0a9be-StreamThread-1] task [0_4] Closed dirty
23:59:07.541 | DEBUG | o.a.k.s.p.i.StreamTask | stream-thread [-aggregator-7a670705-853c-4ce9-a223-78ff5fe0a9be-StreamThread-1] task [0_1] Skipped preparing RESTORING task for commit since there is nothing to commit
23:59:07.541 | INFO  | o.a.k.s.p.i.StreamTask | stream-thread [-aggregator-7a670705-853c-4ce9-a223-78ff5fe0a9be-StreamThread-1] task [0_1] Suspended RESTORING
23:59:07.542 | DEBUG | o.a.k.s.p.i.ProcessorStateManager | stream-thread [-aggregator-7a670705-853c-4ce9-a223-78ff5fe0a9be-StreamThread-1] task [0_1] Closing its state manager and all the registered state stores: {Stats=StateStoreMetadata (Stats : -aggregator-Stats-changelog-1 # 131295699}
23:59:07.542 | INFO  | o.a.k.c.c.KafkaConsumer | [Consumer clientId=-aggregator-7a670705-853c-4ce9-a223-78ff5fe0a9be-StreamThread-1-restore-consumer, groupId=null] Unsubscribed all topics or patterns and assigned partitions
23:59:07.542 | DEBUG | o.a.k.s.s.i.m.RocksDBMetricsRecorder | [RocksDB Metrics Recorder for Stats] Removing value providers for store Stats of task 0_1
23:59:07.542 | DEBUG | o.a.k.s.s.i.m.RocksDBMetricsRecorder | [RocksDB Metrics Recorder for Stats] Removing metrics recorder for store Stats of task 0_1 from metrics recording trigger
23:59:07.799 | DEBUG | o.a.k.s.p.i.StateDirectory | stream-thread [-aggregator-7a670705-853c-4ce9-a223-78ff5fe0a9be-StreamThread-1] Released state dir lock for task 0_1
23:59:07.799 | INFO  | o.a.k.s.p.i.RecordCollectorImpl | stream-thread [-aggregator-7a670705-853c-4ce9-a223-78ff5fe0a9be-StreamThread-1] task [0_1] Closing record collector dirty
23:59:07.799 | INFO  | o.a.k.s.p.i.StreamTask | stream-thread [-aggregator-7a670705-853c-4ce9-a223-78ff5fe0a9be-StreamThread-1] task [0_1] Closed dirty
23:59:07.803 | INFO  | o.a.k.c.p.KafkaProducer | [Producer clientId=-aggregator-7a670705-853c-4ce9-a223-78ff5fe0a9be-StreamThread-1-producer] Closing the Kafka producer with timeoutMillis = 9223372036854775807 ms.
23:59:07.804 | DEBUG | o.a.k.c.p.i.Sender | [Producer clientId=-aggregator-7a670705-853c-4ce9-a223-78ff5fe0a9be-StreamThread-1-producer] Beginning shutdown of Kafka producer I/O thread, sending remaining records.
23:59:07.809 | DEBUG | o.a.k.c.p.i.Sender | [Producer clientId=-aggregator-7a670705-853c-4ce9-a223-78ff5fe0a9be-StreamThread-1-producer] Shutdown of Kafka producer I/O thread has completed.
23:59:07.809 | INFO  | o.a.k.c.m.Metrics | Metrics scheduler closed
23:59:07.809 | INFO  | o.a.k.c.m.Metrics | Closing reporter org.apache.kafka.common.metrics.JmxReporter
23:59:07.809 | INFO  | o.a.k.c.m.Metrics | Metrics reporters closed
23:59:07.810 | INFO  | o.a.k.c.u.AppInfoParser | App info kafka.producer for -aggregator-7a670705-853c-4ce9-a223-78ff5fe0a9be-StreamThread-1-producer unregistered
23:59:07.810 | DEBUG | o.a.k.c.p.KafkaProducer | [Producer clientId=-aggregator-7a670705-853c-4ce9-a223-78ff5fe0a9be-StreamThread-1-producer] Kafka producer has been closed
23:59:07.811 | INFO  | o.a.k.c.c.KafkaConsumer | [Consumer clientId=-aggregator-7a670705-853c-4ce9-a223-78ff5fe0a9be-StreamThread-1-restore-consumer, groupId=null] Unsubscribed all topics or patterns and assigned partitions
23:59:07.811 | DEBUG | o.a.k.c.c.i.ConsumerCoordinator | [Consumer clientId=-aggregator-7a670705-853c-4ce9-a223-78ff5fe0a9be-StreamThread-1-consumer, groupId=-aggregator] Heartbeat thread has closed
23:59:07.812 | INFO  | o.a.k.c.m.Metrics | Metrics scheduler closed
23:59:07.812 | INFO  | o.a.k.c.m.Metrics | Closing reporter org.apache.kafka.common.metrics.JmxReporter
23:59:07.812 | INFO  | o.a.k.c.m.Metrics | Metrics reporters closed
23:59:07.815 | INFO  | o.a.k.c.u.AppInfoParser | App info kafka.consumer for -aggregator-7a670705-853c-4ce9-a223-78ff5fe0a9be-StreamThread-1-consumer unregistered
23:59:07.815 | DEBUG | o.a.k.c.c.KafkaConsumer | [Consumer clientId=-aggregator-7a670705-853c-4ce9-a223-78ff5fe0a9be-StreamThread-1-consumer, groupId=-aggregator] Kafka consumer has been closed
23:59:07.815 | INFO  | o.a.k.c.m.Metrics | Metrics scheduler closed
23:59:07.815 | INFO  | o.a.k.c.m.Metrics | Closing reporter org.apache.kafka.common.metrics.JmxReporter
23:59:07.815 | INFO  | o.a.k.c.m.Metrics | Metrics reporters closed
23:59:07.819 | INFO  | o.a.k.c.u.AppInfoParser | App info kafka.consumer for -aggregator-7a670705-853c-4ce9-a223-78ff5fe0a9be-StreamThread-1-restore-consumer unregistered
23:59:07.819 | DEBUG | o.a.k.c.c.KafkaConsumer | [Consumer clientId=-aggregator-7a670705-853c-4ce9-a223-78ff5fe0a9be-StreamThread-1-restore-consumer, groupId=null] Kafka consumer has been closed
23:59:07.820 | INFO  | o.a.k.s.p.i.StreamThread | stream-thread [-aggregator-7a670705-853c-4ce9-a223-78ff5fe0a9be-StreamThread-1] State transition from PENDING_SHUTDOWN to DEAD
23:59:07.820 | INFO  | o.a.k.s.p.i.StreamThread | stream-thread [-aggregator-7a670705-853c-4ce9-a223-78ff5fe0a9be-StreamThread-1] Shutdown complete


Sharing a temporary client certificate store between multiple processes

Suppose Process A creates a temporary client certificate store, then launches Process B, passing the inherited cert store handle via some kind of inter-process communication, and then Process A exits (see below).
When Process B starts running, it fetches the cert store handle and tries processing the temporary client store.
The question is: "Can a temporary cert store created in parent Process A (which exits) still be accessible by child Process B?" Thanks!
Process A
+--------------------------------------------------+
| CreateFile("certificate.pfx",...) |
| ReadFile(hFile,...) |
| Create CRYPT_DATA_BLOB |
| PFXImportCertStore(&cryptBlob,...) |
| CreateProcess(Process B hCertStore,...,TRUE,...) |
| (TRUE indicates new process inherits hCertStore) |
| Process Exits |
+--------------------------------------------------+
Process B
+--------------------------------------------------+
| Get handle hCertStore using Inter-Process Comms |
| CertFindCertificateInStore(hCertStore,...) |
| Process the temporary cert store... |
+--------------------------------------------------+
I ran a test, passing the temp cert store handle from Process A to Process B, and was greeted with a crash dump:
STACK_TEXT:
00000017`7df4fcc0 00007ffa`655574c6 : 00000017`7df4fd30 00000017`7df4fd30 00000017`7df4fd10 00000000`ffffffff : crypt32!AutoResyncStore+0x10
00000017`7df4fd20 00007ff7`23c943cd : 0000016d`0dc53a2e 00000000`00000000 00007ff7`23cac7c2 00000000`0118b2d8 : crypt32!CertFindCertificateInStore+0x56
00000017`7df4fd70 0000016d`0dc53a2e : 00000000`00000000 00007ff7`23cac7c2 00000000`0118b2d8 00000000`00000000 : HelloWorld64!WinMain+0x121
00000017`7df4fd78 00000000`00000000 : 00007ff7`23cac7c2 00000000`0118b2d8 00000000`00000000 00000000`00000000 : 0x0000016d`0dc53a2e
SYMBOL_NAME: crypt32!AutoResyncStore+10
MODULE_NAME: crypt32
IMAGE_NAME: crypt32.dll
STACK_COMMAND: ~0s ; .ecxr ; kb
FAILURE_BUCKET_ID: INVALID_POINTER_READ_c0000005_crypt32.dll!AutoResyncStore
OS_VERSION: 10.0.18362.1
BUILDLAB_STR: 19h1_release
OSPLATFORM_TYPE: x64
OSNAME: Windows 10
Looks like it's not possible to access a temporary cert store using a shared handle from another process. As a workaround, I'll look into passing the PFX BLOB as follows:
1. Pass the PFX BLOB (i.e., the pfx file bytes) via IPC to another process.
2. Create a global shared named memory map for the BLOB, so any process with that memory handle can access it and process the cert.
Thanks.

verifyAlertNotPresent() in selenium ide halts the execution on failure

I have written a script to verify that an alert is not present, using the verifyAlertNotPresent command. As I understand it, with this command execution should not stop even if an alert is present, and should continue, but it halts the execution. My script is given below.
forXml | file:///E:/XML files/NAA_StateZip.xml
open |
clickAndWait | link=NAA Click & Lease
type | name=_OWNER | sdfsdf
type | name=OWNERFAX | 1234556
fireEvent | name=OWNERFAX | blur
verifyAlertNotPresent | Error: Invalid input format Please re-enter area code and phone number |
close
selectWindow | null
endForXml
When I run this script, the log shows this:
[info] Executing: |type | name=OWNERFAX | 1234556 |
[info] Executing: |fireEvent | name=OWNERFAX | blur |
[info] Executing: |verifyAlertNotPresent | Error: Invalid input format Please re-enter area code and phone number | |
[error] true
[info] Executing: |close | | |
[error] There was an unexpected Alert! [Error: Invalid input format Please re-enter area code and phone number]
[info] Test case fail
Please provide a solution to this, as I want to run this script for a set of data.
To handle alerts using Selenium IDE, you will have to use the 'assertFoo(pattern)' commands. If you fail to assert the presence of a pop-up, your next command will be blocked and you will get an error. Look at the docs.

MSBuild copy task for all solution outputs without project\bin\configuration

I have an MSBuild script that builds all solutions in my repository, but now I need a way to copy all of the output to a build directory. I was trying to use the Output parameter of the build task to know which files to copy, but RecursiveDir can't be used with that parameter: MSBuild RecursiveDir is empty (you can see my build script there too). Anyway, I have this folder structure:
Repository
|
+- Solution1
| |
| +- ProjectA
| | |
| | +- bin
| | |
| | +- Release
| |
| +- ProjectB
| |
| +- bin
| |
| +- Release
|
+- Solution2
| |
| +- ProjectA
| |
| +- bin
| |
| +- x86
| |
| +-Release
| |
| +- images
|
...etc
Basically, I just want to copy the contents of each Release folder, including subfolder structure and contents, into the following structure:
Build
|
+- Solution1
|
+- Solution2
| |
| +- images
|
...etc
I.e., I want to strip the Project\bin\platform\configuration part of the path. I don't want to have to manually include each project, because new ones pop up every so often, and it would be nice not to have to update the build script every time. It seems simple enough, but I can't figure it out...
I've seen MsBuild Copy output and remove part of path, but I don't really understand it, so I don't know how to apply it here.
Have you tried overriding the output path property?
For example, if you call msbuild on each solution:
C:\Windows\Microsoft.NET\Framework64\v4.0.30319\msbuild.exe $(SolutionName) /p:OutputPath="%CD%\Build"
This will redirect the output from your projects, with no configuration to deal with.
Just look in \Build for your output (its contents depend on your project type).

Objective C: Accelerometer directions

I'm trying to detect a shaking motion in an up and down direction like this:
/ \
|
________________________________
| |
| |
| |
|o |
| |
| |
|______________________________|
|
\ /
-(void)bounce:(UIAcceleration *)acceleration {
    NSLog(@"%f", acceleration.x);
}
I was thinking it's the x axis, but that responds when you turn the device to be parallel to the floor. How do I detect this?
I don't have a direct answer for you, but a great experiment is to make a quick app which spews the numbers out onto the screen or into a file (or both). Then shake it the way you want to detect, and see which axis shows the greatest change.

Understanding a malloc_history dump

If you have ever asked how to debug release/alloc issues in Objective-C, you will have come across these environment settings that can help track the problem down:
NSZombieEnabled - keeps objects around after release, so you can still get pointers, etc.
MallocStackLogging - keeps object history for later reference
NSDebugEnabled
You set all of these to YES in the 'Environment' section of the 'Arguments' tab of the executable's info window (found in the group tree).
So, I'm getting this console output:
MyApp [4413:40b] -[CALayer retainCount]: message sent to deallocated instance 0x4dbb170
I then open Terminal while the debugger is stopped at the break, and type:
malloc_history 4413 0x4dbb170
Then I get a big text dump, and as far as I understand, the important bit is this:
1
ALLOC 0x4dbb160-0x4dbb171 [size=18]:
thread_a0375540 |start | main |
UIApplicationMain | GSEventRun |
GSEventRunModal | CFRunLoopRunInMode |
CFRunLoopRunSpecific | __CFRunLoopRun
| __CFRunLoopDoTimer |
__CFRUNLOOP_IS_CALLING_OUT_TO_A_TIMER_CALLBACK_FUNCTION__
| __NSFireDelayedPerform |
-[todoListViewController drillDocumentMenu:] |
-[documentListViewController drillIntoDocumentWithToDoRecord:] |
-[documentViewController OpenTodoDocument:OfType:WithPath:] |
-[documentViewController OpenDocumentOfType:WithPath:] |
-[documentViewController managePDFDocumentWithPath:] |
-[PDFDocument loadPDFDocumentWithPath:andTitle:] |
-[PDFDocument getMetaData] | CGPDFDictionaryApplyFunction |
ListDictionaryObjects(char const*,
CGPDFObject*, void*) | NSLog | NSLogv
| _CFLogvEx | __CFLogCString |
asl_send | _asl_send_level_message |
asl_set_query | strdup | malloc |
malloc_zone_malloc
2
FREE 0x4dbb160-0x4dbb171 [size=18]:
thread_a0375540 |start | main |
UIApplicationMain | GSEventRun |
GSEventRunModal | CFRunLoopRunInMode |
CFRunLoopRunSpecific | __CFRunLoopRun
| __CFRunLoopDoTimer |
__CFRUNLOOP_IS_CALLING_OUT_TO_A_TIMER_CALLBACK_FUNCTION__
| __NSFireDelayedPerform |
-[todoListViewController drillDocumentMenu:] |
-[documentListViewController drillIntoDocumentWithToDoRecord:] |
-[documentViewController OpenTodoDocument:OfType:WithPath:] |
-[documentViewController OpenDocumentOfType:WithPath:] |
-[documentViewController managePDFDocumentWithPath:] |
-[PDFDocument loadPDFDocumentWithPath:andTitle:] |
-[PDFDocument getMetaData] | CGPDFDictionaryApplyFunction |
ListDictionaryObjects(char const*,
CGPDFObject*, void*) | NSLog | NSLogv
| _CFLogvEx | __CFLogCString |
asl_send | _asl_send_level_message |
asl_free | free
3
ALLOC 0x4dbb170-0x4dbb19f [size=48]:
thread_a0375540 |start | main |
UIApplicationMain | GSEventRun |
GSEventRunModal | CFRunLoopRunInMode |
CFRunLoopRunSpecific | __CFRunLoopRun
| __CFRunLoopDoTimer |
__CFRUNLOOP_IS_CALLING_OUT_TO_A_TIMER_CALLBACK_FUNCTION__
| __NSFireDelayedPerform |
-[todoListViewController drillDocumentMenu:] |
-[documentListViewController drillIntoDocumentWithToDoRecord:] |
-[documentViewController OpenTodoDocument:OfType:WithPath:] |
-[documentViewController OpenDocumentOfType:WithPath:] |
-[documentViewController managePDFDocumentWithPath:] |
-[ScrollViewWithPagingViewController init] | -[UIView init] |
-[UIScrollView initWithFrame:] | -[UIView initWithFrame:] | UIViewCommonInitWithFrame | -[UIView
_createLayerWithFrame:] | +[NSObject(NSObject) alloc] | +[NSObject(NSObject) allocWithZone:] | class_createInstance |
_internal_class_createInstanceFromZone | calloc | malloc_zone_calloc
What I don't understand, though, is: if the history was ALLOC, FREE, ALLOC, then why does the error indicate that the object was released (net +1 alloc)?
Or is my understanding of the dump wrong?
EDIT (fresh run = different object pointers):
Zombie detection with Instruments:
Why, and how, does the retain count jump from 1 to -1?
Looking at the backtrace of the zombie, it looks like retainCount is being called by Quartz through release_root_if_unused.
Edit: Solved - I was removing a view from its superview and then releasing it. Fixed by just releasing it.
@Kay is correct; the malloc history is showing two allocations at the specified address: one that has been allocated and freed, and one that is still in play.
What you need is the backtrace of the call to retainCount on the CALayer that has already been released. Because you have zombie detection enabled, amongst other memory-debugging things, it may be that the deallocation simply has not and will not happen.
Mixing malloc history with zombie detection changes the runtime behavior significantly.
I'd suggest running with zombie detection in Instruments. Hopefully, that'll pinpoint the exact problem.
If not, then there is a breakpoint you can set to break when a zombie is messaged. Set that breakpoint and see where you stop.
OK -- so, CoreAnimation is using the retain count for internal purposes (the system frameworks can get away with this, fragile though it is).
I think the -1 is a red herring; it is likely that zombies return 0xFF....FFFF as the retain count and this is rendered as -1 in Instruments.
Next best guess; since this is happening in a timer, the over-release is probably happening during animation. The CoreAnimation layers should handle this correctly. There is an over-release of a view or animation layer container in your code that is causing the layer to go away prematurely.
Have you tried "Build and Analyze"? Off-chance it might catch the mismanagement of a view somewhere.
In any case and as an experiment, try retaining your view(s) an extra time and see if that makes this problem stop. If it does, that is, at least, a clue.
(Or it might be a bug in the system frameworks... maybe... but doubtful.)
Finally, who the heck is calling retainCount?!?!? In the case of CoreAnimation, it is possible that the retainCount is being used internally as an implementation detail.
If it is your code, though, then the location of the zombie call should be pretty apparent.
I am no expert, but take a look at the first line in block 3:
ALLOC 0x4dbb170-0x4dbb19f [size=48]:
In the other two outputs, a memory block of size 18 at 0x4dbb160-0x4dbb171 was allocated and then freed. I assume the old object was freed and there is a new object residing at this memory address; thus the old instance at 0x...b160 is no longer valid.