Fabric Answers crash when restarting the app after a crash - Crashlytics

In my app I have both Fabric and a default uncaught exception handler set up. The order is: Fabric is initialised and started first, then the DefaultExceptionHandler is set.
When any uncaught exception is thrown, it is caught by the default exception handler and the app is restarted after 100 ms. At that point I get the exception below from Fabric when the activity is resumed.
E/Answers: Failed to submit events task
java.util.concurrent.RejectedExecutionException: Task java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask#b716997 rejected from java.util.concurrent.ScheduledThreadPoolExecutor#f402784[Shutting down, pool size = 1, active threads = 0, queued tasks = 1, completed tasks = 8]
at java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2049)
at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:814)
at java.util.concurrent.ScheduledThreadPoolExecutor.delayedExecute(ScheduledThreadPoolExecutor.java:305)
at java.util.concurrent.ScheduledThreadPoolExecutor.schedule(ScheduledThreadPoolExecutor.java:533)
at java.util.concurrent.ScheduledThreadPoolExecutor.submit(ScheduledThreadPoolExecutor.java:635)
at java.util.concurrent.Executors$DelegatedExecutorService.submit(Executors.java:601)
at com.crashlytics.android.answers.AnswersEventsHandler.executeAsync(AnswersEventsHandler.java:185)
at com.crashlytics.android.answers.AnswersEventsHandler.processEvent(AnswersEventsHandler.java:171)
at com.crashlytics.android.answers.AnswersEventsHandler.processEventAsync(AnswersEventsHandler.java:47)
at com.crashlytics.android.answers.SessionAnalyticsManager.onLifecycle(SessionAnalyticsManager.java:129)
at com.crashlytics.android.answers.AnswersLifecycleCallbacks.onActivityStarted(AnswersLifecycleCallbacks.java:26)
at io.fabric.sdk.android.ActivityLifecycleManager$ActivityLifecycleCallbacksWrapper$1.onActivityStarted(ActivityLifecycleManager.java:111)
at android.app.Application.dispatchActivityStarted(Application.java:207)
at android.app.Activity.onStart(Activity.java:1194)
at android.support.v4.app.FragmentActivity.onStart(FragmentActivity.java:595)
at android.app.Instrumentation.callActivityOnStart(Instrumentation.java:1270)
at android.app.Activity.performStart(Activity.java:6689)
at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:2622)
at android.app.ActivityThread.handleLaunchActivity(ActivityThread.java:2724)
at android.app.ActivityThread.-wrap12(ActivityThread.java)
at android.app.ActivityThread$H.handleMessage(ActivityThread.java:1473)
at android.os.Handler.dispatchMessage(Handler.java:102)
at android.os.Looper.loop(Looper.java:154)
at android.app.ActivityThread.main(ActivityThread.java:6123)
at java.lang.reflect.Method.invoke(Native Method)
at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:867)
at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:757)
E/Answers: Failed to submit events task
java.util.concurrent.RejectedExecutionException: Task java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask#7d90833 rejected from java.util.concurrent.ScheduledThreadPoolExecutor#f402784[Shutting down, pool size = 1, active threads = 0, queued tasks = 1, completed tasks = 8]
at java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2049)
at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:814)
at java.util.concurrent.ScheduledThreadPoolExecutor.delayedExecute(ScheduledThreadPoolExecutor.java:305)
at java.util.concurrent.ScheduledThreadPoolExecutor.schedule(ScheduledThreadPoolExecutor.java:533)
at java.util.concurrent.ScheduledThreadPoolExecutor.submit(ScheduledThreadPoolExecutor.java:635)
at java.util.concurrent.Executors$DelegatedExecutorService.submit(Executors.java:601)
at com.crashlytics.android.answers.AnswersEventsHandler.executeAsync(AnswersEventsHandler.java:185)
at com.crashlytics.android.answers.AnswersEventsHandler.processEvent(AnswersEventsHandler.java:171)
at com.crashlytics.android.answers.AnswersEventsHandler.processEventAsync(AnswersEventsHandler.java:47)
at com.crashlytics.android.answers.SessionAnalyticsManager.onLifecycle(SessionAnalyticsManager.java:129)
at com.crashlytics.android.answers.AnswersLifecycleCallbacks.onActivityResumed(AnswersLifecycleCallbacks.java:31)
at io.fabric.sdk.android.ActivityLifecycleManager$ActivityLifecycleCallbacksWrapper$1.onActivityResumed(ActivityLifecycleManager.java:116)
at android.app.Application.dispatchActivityResumed(Application.java:216)
at android.app.Activity.onResume(Activity.java:1255)
at android.support.v4.app.FragmentActivity.onResume(FragmentActivity.java:485)
at com.my.package.BaseActivity.onResume(BaseActivity.java:105)
at android.app.Instrumentation.callActivityOnResume(Instrumentation.java:1291)
at android.app.Activity.performResume(Activity.java:6776)
at android.app.ActivityThread.performResumeActivity(ActivityThread.java:3398)
at android.app.ActivityThread.handleResumeActivity(ActivityThread.java:3461)
at android.app.ActivityThread.handleLaunchActivity(ActivityThread.java:2730)
at android.app.ActivityThread.-wrap12(ActivityThread.java)
at android.app.ActivityThread$H.handleMessage(ActivityThread.java:1473)
at android.os.Handler.dispatchMessage(Handler.java:102)
at android.os.Looper.loop(Looper.java:154)
at android.app.ActivityThread.main(ActivityThread.java:6123)
at java.lang.reflect.Method.invoke(Native Method)
at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:867)
at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:757)
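For reference, here is a rough sketch of the setup described above. The class names, the target activity, and the AlarmManager-based restart are assumptions for illustration, not code taken from the question.

import android.app.AlarmManager;
import android.app.Application;
import android.app.PendingIntent;
import android.content.Intent;
import android.os.SystemClock;

import com.crashlytics.android.Crashlytics;

import io.fabric.sdk.android.Fabric;

public class MyApplication extends Application {

    @Override
    public void onCreate() {
        super.onCreate();

        // 1. Initialise and start Fabric/Crashlytics first
        Fabric.with(this, new Crashlytics());

        // 2. Then install the custom default uncaught exception handler
        final Thread.UncaughtExceptionHandler fabricHandler =
                Thread.getDefaultUncaughtExceptionHandler();

        Thread.setDefaultUncaughtExceptionHandler(new Thread.UncaughtExceptionHandler() {
            @Override
            public void uncaughtException(Thread thread, Throwable throwable) {
                // Schedule a restart of the app roughly 100 ms from now
                // (MainActivity is a placeholder for the launch activity).
                Intent restart = new Intent(MyApplication.this, MainActivity.class);
                restart.addFlags(Intent.FLAG_ACTIVITY_NEW_TASK | Intent.FLAG_ACTIVITY_CLEAR_TASK);
                PendingIntent pending = PendingIntent.getActivity(
                        MyApplication.this, 0, restart, PendingIntent.FLAG_ONE_SHOT);
                AlarmManager alarmManager = (AlarmManager) getSystemService(ALARM_SERVICE);
                alarmManager.set(AlarmManager.ELAPSED_REALTIME,
                        SystemClock.elapsedRealtime() + 100, pending);

                // Hand the exception to the handler Fabric installed so the crash
                // is still reported; it terminates the process afterwards.
                if (fabricHandler != null) {
                    fabricHandler.uncaughtException(thread, throwable);
                } else {
                    android.os.Process.killProcess(android.os.Process.myPid());
                }
            }
        });
    }
}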

Related

How to design a worker verticle with an infinite blocking loop?

I am trying to write a worker verticle that bridges a Google Cloud Pub/Sub topic subscription to the Vert.x event bus, by adapting the Kotlin Pub/Sub example combined with this answer about a worker with an infinite blocking loop.
It does work, but Vert.x keeps nagging in the log that the thread is blocked, throwing an exception some time after a message is received from Pub/Sub (please ignore the blocking initialization for now):
9:15:12 AM: Executing task 'run'...
WARNING: You are a using release candidate 2.0.0-rc5. Behavior of this plugin has changed since 1.3.5. Please see release notes at: https://github.com/GoogleCloudPlatform/app-gradle-plugin.
Missing a feature? Can't get it to work?, please file a bug at: https://github.com/GoogleCloudPlatform/app-gradle-plugin/issues.
:compileKotlin UP-TO-DATE
:compileJava NO-SOURCE
:processResources NO-SOURCE
:classes UP-TO-DATE
Mar 10, 2019 9:15:18 AM io.vertx.core.impl.launcher.commands.Watcher
INFO: Watched paths: [/home/username/IdeaProjects/project_name/./src]
Mar 10, 2019 9:15:18 AM io.vertx.core.impl.launcher.commands.Watcher
INFO: Starting the vert.x application in redeploy mode
:run
Starting vert.x application...
f48ba7fd-a52b-487f-b553-2b74473e58ba-redeploy
Creating topic gcs-project-id:vertx.
Mar 10, 2019 9:15:18 AM com.google.auth.oauth2.DefaultCredentialsProvider warnAboutProblematicCredentials
WARNING: Your application has authenticated using end user credentials from Google Cloud SDK. We recommend that most server applications use service accounts instead. If your application continues to use end user credentials from Cloud SDK, you might receive a "quota exceeded" or "API not enabled" error. For more information about service accounts, see https://cloud.google.com/docs/authentication/.
Mar 10, 2019 9:15:21 AM io.vertx.core.impl.BlockedThreadChecker
WARNING: Thread Thread[vert.x-eventloop-thread-0,5,main] has been blocked for 2759 ms, time limit is 2000 ms
Topic gcs-project-id:vertx successfully created.
Creating subscription gcs-project-id:kotlin.
Mar 10, 2019 9:15:22 AM io.vertx.core.impl.BlockedThreadChecker
WARNING: Thread Thread[vert.x-eventloop-thread-0,5,main] has been blocked for 3759 ms, time limit is 2000 ms
Mar 10, 2019 9:15:23 AM io.vertx.core.impl.BlockedThreadChecker
WARNING: Thread Thread[vert.x-eventloop-thread-0,5,main] has been blocked for 4758 ms, time limit is 2000 ms
Mar 10, 2019 9:15:24 AM io.vertx.core.impl.BlockedThreadChecker
WARNING: Thread Thread[vert.x-eventloop-thread-0,5,main] has been blocked for 5759 ms, time limit is 2000 ms
io.vertx.core.VertxException: Thread blocked
at sun.misc.Unsafe.park(Native Method)
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
at com.google.common.util.concurrent.AbstractFuture.get(AbstractFuture.java:469)
at com.google.common.util.concurrent.Uninterruptibles.getUninterruptibly(Uninterruptibles.java:142)
at com.google.common.util.concurrent.Futures.getUnchecked(Futures.java:1309)
at com.google.api.gax.rpc.ApiExceptions.callAndTranslateApiException(ApiExceptions.java:52)
at com.google.api.gax.rpc.UnaryCallable.call(UnaryCallable.java:112)
at com.google.cloud.pubsub.v1.SubscriptionAdminClient.createSubscription(SubscriptionAdminClient.java:359)
at com.google.cloud.pubsub.v1.SubscriptionAdminClient.createSubscription(SubscriptionAdminClient.java:260)
at com.example.project.MainVerticle.subscribeTopic(MainVerticle.kt:76)
at com.example.project.MainVerticle.init(MainVerticle.kt:46)
at io.vertx.core.impl.DeploymentManager.lambda$doDeploy$8(DeploymentManager.java:492)
at io.vertx.core.impl.DeploymentManager$$Lambda$28/1902260856.handle(Unknown Source)
at io.vertx.core.impl.ContextImpl.executeTask(ContextImpl.java:320)
at io.vertx.core.impl.EventLoopContext.lambda$executeAsync$0(EventLoopContext.java:38)
at io.vertx.core.impl.EventLoopContext$$Lambda$29/1640639994.run(Unknown Source)
at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:163)
at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:404)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:462)
at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:897)
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.lang.Thread.run(Thread.java:748)
Subscription gcs-project-id:kotlin successfully created.
Listening to messages on kotlin:
Mar 10, 2019 9:15:25 AM io.vertx.core.impl.launcher.commands.VertxIsolatedDeployer
INFO: Succeeded in deploying verticle
Message Id: 462746807438186 Data: Bazinga
Message Id: 462746750387788 Data: Another message
Mar 10, 2019 9:16:25 AM io.vertx.core.impl.BlockedThreadChecker
WARNING: Thread Thread[vert.x-worker-thread-0,5,main] has been blocked for 60171 ms, time limit is 60000 ms
io.vertx.core.VertxException: Thread blocked
at sun.misc.Unsafe.park(Native Method)
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
at java.util.concurrent.LinkedBlockingDeque.takeFirst(LinkedBlockingDeque.java:492)
at java.util.concurrent.LinkedBlockingDeque.take(LinkedBlockingDeque.java:680)
at com.example.project.MainVerticle$start$1.handle(MainVerticle.kt:32)
at com.example.project.MainVerticle$start$1.handle(MainVerticle.kt:13)
at io.vertx.core.impl.ContextImpl.lambda$executeBlocking$2(ContextImpl.java:272)
at io.vertx.core.impl.ContextImpl$$Lambda$33/1101004004.run(Unknown Source)
at io.vertx.core.impl.TaskQueue.run(TaskQueue.java:76)
at io.vertx.core.impl.TaskQueue$$Lambda$26/1213216872.run(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.lang.Thread.run(Thread.java:748)
Mar 10, 2019 9:16:26 AM io.vertx.core.impl.BlockedThreadChecker
WARNING: Thread Thread[vert.x-worker-thread-0,5,main] has been blocked for 61172 ms, time limit is 60000 ms
io.vertx.core.VertxException: Thread blocked
at sun.misc.Unsafe.park(Native Method)
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
at java.util.concurrent.LinkedBlockingDeque.takeFirst(LinkedBlockingDeque.java:492)
at java.util.concurrent.LinkedBlockingDeque.take(LinkedBlockingDeque.java:680)
at com.example.project.MainVerticle$start$1.handle(MainVerticle.kt:32)
at com.example.project.MainVerticle$start$1.handle(MainVerticle.kt:13)
at io.vertx.core.impl.ContextImpl.lambda$executeBlocking$2(ContextImpl.java:272)
at io.vertx.core.impl.ContextImpl$$Lambda$33/1101004004.run(Unknown Source)
at io.vertx.core.impl.TaskQueue.run(TaskQueue.java:76)
at io.vertx.core.impl.TaskQueue$$Lambda$26/1213216872.run(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.lang.Thread.run(Thread.java:748)
And here is the source code:
package com.example.project_name
import com.google.api.gax.rpc.ApiException
import com.google.cloud.pubsub.v1.*
import com.google.pubsub.v1.ProjectSubscriptionName
import com.google.pubsub.v1.ProjectTopicName
import com.google.pubsub.v1.PubsubMessage
import com.google.pubsub.v1.PushConfig
import io.vertx.core.*
import java.util.concurrent.LinkedBlockingDeque
class MainVerticle : MessageReceiver, AbstractVerticle() {
    private val projectId = "gcs-project-id"
    private val topicId = "vertx"
    private val topic: ProjectTopicName = ProjectTopicName.of(projectId, topicId)
    private val subscriptionId = "kotlin"
    private val subscription = ProjectSubscriptionName.of(projectId, subscriptionId)
    private val messages = LinkedBlockingDeque<PubsubMessage>()
    private lateinit var subscriber: Subscriber

    override fun receiveMessage(message: PubsubMessage, consumer: AckReplyConsumer) {
        messages.offer(message)
        consumer.ack()
    }

    override fun start() {
        vertx.executeBlocking<Void>({
            try {
                println("Listening to messages on $subscriptionId:")
                subscriber.awaitRunning()
                while (true) {
                    val message = messages.take()
                    println("Message Id: ${message.messageId} Data: ${message.data.toStringUtf8()}")
                }
            } finally {
                subscriber.stopAsync()
                it.complete()
            }
        }, { println("done, ${it.cause()}") })
    }

    override fun init(vertx: Vertx?, context: Context?) {
        super.init(vertx, context)
        try {
            createTopic()
            subscribeTopic()
            subscriber = Subscriber.newBuilder(subscription, this).build()
            subscriber.startAsync()
        } catch (e: ApiException) {
            // example : code = ALREADY_EXISTS(409) implies topic already exists
            println("Failed: $e")
        }
    }

    override fun stop(stopFuture: Future<Void>?) {
        super.stop(stopFuture)
        try {
            deleteSub()
            deleteTopic()
        } catch (e: ApiException) {
            println("Failed: $e")
        } finally {
            subscriber.stopAsync()
            stopFuture!!.complete()
        }
    }

    private fun createTopic() { // expects 1 arg: <topic> to create
        println("Creating topic ${topic.project}:${topic.topic}.")
        TopicAdminClient.create().use { topicAdminClient -> topicAdminClient.createTopic(topic) }
        println("Topic ${topic.project}:${topic.topic} successfully created.")
    }

    private fun subscribeTopic() { // expects 2 args: <topic> and <subscription>
        println("Creating subscription ${subscription.project}:${subscription.subscription}.")
        SubscriptionAdminClient.create().use { it.createSubscription(subscription, topic, PushConfig.getDefaultInstance(), 0) }
        println("Subscription ${subscription.project}:${subscription.subscription} successfully created.")
    }

    private fun deleteTopic() {
        println("Deleting topic ${topic.project}:${topic.topic}.")
        TopicAdminClient.create().use { it.deleteTopic(topic) }
        println("Topic ${topic.project}:${topic.topic} successfully deleted.")
    }

    private fun deleteSub() { // expects 1 arg: <subscription> to delete
        println("Deleting subscription ${subscription.project}:${subscription.subscription}.")
        SubscriptionAdminClient.create().use { it.deleteSubscription(subscription) }
        println("Subscription ${subscription.project}:${subscription.subscription} successfully deleted.")
    }
}

fun main(vararg args: String) {
    Vertx.vertx().deployVerticle(MainVerticle(), DeploymentOptions().apply {
        isWorker = true
    })
}
I am clearly missing something. Also, if you have a better approach that can integrate Google's Pub/Sub library (which has its own async loop) with Vert.x, I'd be happy to hear it instead of my primitive example approach.
The problem is with your while loop.
"Blocking" in this case does not mean that you can just keep running forever.
Your call to it.complete() is never reached, and at some point Vert.x will complain about that.
See the manual on Running blocking code, specifically the WARNING section.
To solve your problem, you will need to schedule your calls to messages.take() in one way or another, for example using setPeriodic. Inside the interval handler, empty your queue with executeBlocking, then give back control by calling complete(), either before or after you have scheduled the processing of the messages, depending on whether you care about the result.
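A rough sketch of that suggestion, written against Vert.x's plain Java API (the queue and message types mirror the Kotlin verticle above, and the 500 ms poll interval is an arbitrary choice):

@Override
public void start() {
    // Poll the queue on a timer instead of blocking a worker thread forever.
    vertx.setPeriodic(500, timerId ->
        vertx.<Void>executeBlocking(fut -> {
            // Drain whatever is queued right now, then give the thread back.
            PubsubMessage message;
            while ((message = messages.poll()) != null) {
                System.out.println("Message Id: " + message.getMessageId()
                        + " Data: " + message.getData().toStringUtf8());
            }
            fut.complete();
        }, res -> {
            if (res.failed()) {
                res.cause().printStackTrace();
            }
        })
    );
}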

Unable to join Akka.NET cluster

I am having a problem joining, and debugging joining, an Akka.NET cluster. I am using version 1.3.8. My setup is as follows:
Lighthouse
Almost default code from GitHub. It runs in a console; akka.hocon is as follows:
lighthouse {
  actorsystem: "sng"
}
petabridge.cmd {
  host = "0.0.0.0"
  port = 9110
}
akka {
  loglevel = DEBUG
  loggers = ["Akka.Logger.Serilog.SerilogLogger, Akka.Logger.Serilog"]
  actor {
    provider = "Akka.Cluster.ClusterActorRefProvider, Akka.Cluster"
    debug {
      receive = on
      autoreceive = on
      lifecycle = on
      event-stream = on
      unhandled = on
    }
  }
  remote {
    log-sent-messages = on
    log-received-messages = on
    log-remote-lifecycle-events = on
    enabled-transports = ["akka.remote.dot-netty.tcp"]
    dot-netty.tcp {
      transport-class = "Akka.Remote.Transport.DotNetty.TcpTransport, Akka.Remote"
      applied-adapters = []
      transport-protocol = tcp
      hostname = "0.0.0.0"
      port = 4053
    }
    log-remote-lifecycle-events = DEBUG
  }
  cluster {
    auto-down-unreachable-after = 5s
    seed-nodes = []
    roles = [lighthouse]
  }
}
Working node
Also a console (net461) application, with the simplest possible startup and joining. It works as expected. akka.hocon:
akka {
  loglevel = DEBUG
  loggers = ["Akka.Logger.Serilog.SerilogLogger, Akka.Logger.Serilog"]
  actor {
    provider = "Akka.Cluster.ClusterActorRefProvider, Akka.Cluster"
  }
  remote {
    log-sent-messages = on
    log-received-messages = on
    log-remote-lifecycle-events = on
    dot-netty.tcp {
      transport-class = "Akka.Remote.Transport.DotNetty.TcpTransport, Akka.Remote"
      applied-adapters = []
      transport-protocol = tcp
      hostname = "0.0.0.0"
      port = 0
    }
  }
  cluster {
    auto-down-unreachable-after = 5s
    seed-nodes = ["akka.tcp://sng#127.0.0.1:4053"]
    roles = [monitor]
  }
}
Not working node
A .NET 4.6.1 library, registered as COM and started in another application (MediaMonkey) with VBA code:
Sub OnStartup
  Set o = CreateObject("MediaMonkey.Akka.Agent.MediaMonkeyAkkaProxy")
  o.Init(SDB)
End Sub
The Akka system is created, as in the console application, with the standard ActorSystem.Create("sng", config);
akka.hocon:
akka {
  loglevel = DEBUG
  loggers = ["Akka.Logger.Serilog.SerilogLogger, Akka.Logger.Serilog"]
  actor {
    provider = "Akka.Cluster.ClusterActorRefProvider, Akka.Cluster"
  }
  remote {
    log-sent-messages = on
    log-received-messages = on
    log-remote-lifecycle-events = on
    dot-netty.tcp {
      transport-class = "Akka.Remote.Transport.DotNetty.TcpTransport, Akka.Remote"
      applied-adapters = []
      transport-protocol = tcp
      hostname = "0.0.0.0"
      port = 0
    }
  }
  cluster {
    auto-down-unreachable-after = 5s
    seed-nodes = ["akka.tcp://sng#127.0.0.1:4053"]
    roles = [mediamonkey]
  }
}
Debugging workflow
Start the Lighthouse application:
Configuration Result:
[Success] Name sng.Lighthouse
[Success] ServiceName sng.Lighthouse
Topshelf v4.0.0.0, .NET Framework v4.0.30319.42000
[Lighthouse] ActorSystem: sng; IP: 127.0.0.1; PORT: 4053
[Lighthouse] Performing pre-boot sanity check. Should be able to parse address [akka.tcp://sng#127.0.0.1:4053]
[Lighthouse] Parse successful.
[21:01:35 INF] Starting remoting
[21:01:35 INF] Remoting started; listening on addresses : [akka.tcp://sng#127.0.0.1:4053]
[21:01:35 INF] Remoting now listens on addresses: [akka.tcp://sng#127.0.0.1:4053]
[21:01:35 INF] Cluster Node [akka.tcp://sng#127.0.0.1:4053] - Starting up...
[21:01:35 INF] Cluster Node [akka.tcp://sng#127.0.0.1:4053] - Started up successfully
The sng.Lighthouse service is now running, press Control+C to exit.
[21:01:35 INF] petabridge.cmd host bound to [0.0.0.0:9110]
[21:01:35 INF] Node [akka.tcp://sng#127.0.0.1:4053] is JOINING, roles [lighthouse]
[21:01:35 INF] Leader is moving node [akka.tcp://sng#127.0.0.1:4053] to [Up]
Started and stopped working console node
Lighthouse logs:
[21:05:40 INF] Node [akka.tcp://sng#0.0.0.0:37516] is JOINING, roles [monitor]
[21:05:40 INF] Leader is moving node [akka.tcp://sng#0.0.0.0:37516] to [Up]
[21:05:54 INF] Connection was reset by the remote peer. Channel [[::ffff:127.0.0.1]:4053->[::ffff:127.0.0.1]:37517](Id=1293c63a)
[21:05:54 INF] Message AckIdleCheckTimer from akka://sng/system/endpointManager/reliableEndpointWriter-akka.tcp%3A%2F%2Fsng%400.0.0.0%3A37516-1/endpointWriter to akka://sng/system/endpointManager/reliableEndpointWriter-akka.tcp%3A%2F%2Fsng%400.0.0.0%3A37516-1/endpointWriter was not delivered. 1 dead letters encountered.
[21:05:55 INF] Message GossipStatus from akka://sng/system/cluster/core/daemon to akka://sng/deadLetters was not delivered. 2 dead letters encountered.
[21:05:55 INF] Message Heartbeat from akka://sng/system/cluster/core/daemon/heartbeatSender to akka://sng/deadLetters was not delivered. 3 dead letters encountered.
[21:05:56 INF] Message GossipStatus from akka://sng/system/cluster/core/daemon to akka://sng/deadLetters was not delivered. 4 dead letters encountered.
[21:05:56 INF] Message Heartbeat from akka://sng/system/cluster/core/daemon/heartbeatSender to akka://sng/deadLetters was not delivered. 5 dead letters encountered.
[21:05:57 INF] Message GossipStatus from akka://sng/system/cluster/core/daemon to akka://sng/deadLetters was not delivered. 6 dead letters encountered.
[21:05:57 INF] Message Heartbeat from akka://sng/system/cluster/core/daemon/heartbeatSender to akka://sng/deadLetters was not delivered. 7 dead letters encountered.
[21:05:58 INF] Message GossipStatus from akka://sng/system/cluster/core/daemon to akka://sng/deadLetters was not delivered. 8 dead letters encountered.
[21:05:58 INF] Message Heartbeat from akka://sng/system/cluster/core/daemon/heartbeatSender to akka://sng/deadLetters was not delivered. 9 dead letters encountered.
[21:05:59 WRN] Cluster Node [akka.tcp://sng#127.0.0.1:4053] - Marking node(s) as UNREACHABLE [Member(address = akka.tcp://sng#0.0.0.0:37516, Uid=1060233119 status = Up, role=[monitor], upNumber=2)]. Node roles [lighthouse]
[21:06:01 WRN] AssociationError [akka.tcp://sng#127.0.0.1:4053] -> akka.tcp://sng#0.0.0.0:37516: Error [Association failed with akka.tcp://sng#0.0.0.0:37516] []
[21:06:01 WRN] Tried to associate with unreachable remote address [akka.tcp://sng#0.0.0.0:37516]. Address is now gated for 5000 ms, all messages to this address will be delivered to dead letters. Reason: [Association failed with akka.tcp://sng#0.0.0.0:37516] Caused by: [System.AggregateException: One or more errors occurred. ---> Akka.Remote.Transport.InvalidAssociationException: No connection could be made because the target machine actively refused it tcp://sng#0.0.0.0:37516
at Akka.Remote.Transport.DotNetty.TcpTransport.<AssociateInternal>d__1.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at Akka.Remote.Transport.DotNetty.DotNettyTransport.<Associate>d__22.MoveNext()
--- End of inner exception stack trace ---
at System.Threading.Tasks.Task`1.GetResultCore(Boolean waitCompletionNotification)
at Akka.Remote.Transport.ProtocolStateActor.<>c.<InitializeFSM>b__11_54(Task`1 result)
at System.Threading.Tasks.ContinuationResultTaskFromResultTask`2.InnerInvoke()
at System.Threading.Tasks.Task.Execute()
---> (Inner Exception #0) Akka.Remote.Transport.InvalidAssociationException: No connection could be made because the target machine actively refused it tcp://sng#0.0.0.0:37516
at Akka.Remote.Transport.DotNetty.TcpTransport.<AssociateInternal>d__1.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at Akka.Remote.Transport.DotNetty.DotNettyTransport.<Associate>d__22.MoveNext()<---
]
[21:06:04 INF] Cluster Node [akka.tcp://sng#127.0.0.1:4053] - Leader is auto-downing unreachable node [akka.tcp://sng#127.0.0.1:4053]
[21:06:04 INF] Marking unreachable node [akka.tcp://sng#0.0.0.0:37516] as [Down]
[21:06:05 INF] Leader is removing unreachable node [akka.tcp://sng#0.0.0.0:37516]
[21:06:05 WRN] Association to [akka.tcp://sng#0.0.0.0:37516] having UID [1060233119] is irrecoverably failed. UID is now quarantined and all messages to this UID will be delivered to dead letters. Remote actorsystem must be restarted to recover from this situation.
Working node logs:
[21:05:38 INF] Starting remoting
[21:05:38 INF] Remoting started; listening on addresses : [akka.tcp://sng#0.0.0.0:37516]
[21:05:38 INF] Remoting now listens on addresses: [akka.tcp://sng#0.0.0.0:37516]
[21:05:38 INF] Cluster Node [akka.tcp://sng#0.0.0.0:37516] - Starting up...
[21:05:38 INF] Cluster Node [akka.tcp://sng#0.0.0.0:37516] - Started up successfully
[21:05:40 INF] Welcome from [akka.tcp://sng#127.0.0.1:4053]
[21:05:40 INF] Member is Up: Member(address = akka.tcp://sng#127.0.0.1:4053, Uid=439782041 status = Up, role=[lighthouse], upNumber=1)
[21:05:40 INF] Member is Up: Member(address = akka.tcp://sng#0.0.0.0:37516, Uid=1060233119 status = Up, role=[monitor], upNumber=2)
//shutdown logs are missing
Started and stopped COM node
Lighthouse logs:
[21:12:02 INF] Connection was reset by the remote peer. Channel [::ffff:127.0.0.1]:4053->[::ffff:127.0.0.1]:37546](Id=4ca91e15)
COM node logs:
[WARNING][18. 07. 2018 19:11:15][Thread 0001][ActorSystem(sng)] The type name for serializer 'hyperion' did not resolve to an actual Type: 'Akka.Serialization.HyperionSerializer, Akka.Serialization.Hyperion'
[WARNING][18. 07. 2018 19:11:15][Thread 0001][ActorSystem(sng)] Serialization binding to non existing serializer: 'hyperion'
[21:11:15 DBG] Logger log1-SerilogLogger [SerilogLogger] started
[21:11:15 DBG] StandardOutLogger being removed
[21:11:15 DBG] Default Loggers started
[21:11:15 INF] Starting remoting
[21:11:15 DBG] Starting prune timer for endpoint manager...
[21:11:15 INF] Remoting started; listening on addresses : [akka.tcp://sng#0.0.0.0:37543]
[21:11:15 INF] Remoting now listens on addresses: [akka.tcp://sng#0.0.0.0:37543]
[21:11:15 INF] Cluster Node [akka.tcp://sng#0.0.0.0:37543] - Starting up...
[21:11:15 INF] Cluster Node [akka.tcp://sng#0.0.0.0:37543] - Started up successfully
[21:11:15 DBG] [Uninitialized] Received Akka.Cluster.InternalClusterAction+Subscribe
[21:11:15 DBG] [Uninitialized] Received Akka.Cluster.InternalClusterAction+Subscribe
[21:11:16 DBG] [Uninitialized] Received Akka.Cluster.InternalClusterAction+JoinSeedNodes
[21:11:16 DBG] [Uninitialized] Received Akka.Cluster.InternalClusterAction+Subscribe
[21:11:26 WRN] Couldn't join seed nodes after [2] attempts, will try again. seed-nodes=[akka.tcp://sng#127.0.0.1:4053]
[21:11:31 WRN] Couldn't join seed nodes after [3] attempts, will try again. seed-nodes=[akka.tcp://sng#127.0.0.1:4053]
[21:11:36 WRN] Couldn't join seed nodes after [4] attempts, will try again. seed-nodes=[akka.tcp://sng#127.0.0.1:4053]
[21:11:40 ERR] No response from remote. Handshake timed out or transport failure detector triggered.
[21:11:40 WRN] AssociationError [akka.tcp://sng#0.0.0.0:37543] -> akka.tcp://sng#127.0.0.1:4053: Error [Association failed with akka.tcp://sng#127.0.0.1:4053] []
[21:11:40 WRN] Tried to associate with unreachable remote address [akka.tcp://sng#127.0.0.1:4053]. Address is now gated for 5000 ms, all messages to this address will be delivered to dead letters. Reason: [Association failed with akka.tcp://sng#127.0.0.1:4053] Caused by: [Akka.Remote.Transport.AkkaProtocolException: No response from remote. Handshake timed out or transport failure detector triggered.
at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at Akka.Remote.Transport.AkkaProtocolTransport.<Associate>d__19.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at Akka.Remote.EndpointWriter.<AssociateAsync>d__23.MoveNext()]
[21:11:40 DBG] Disassociated [akka.tcp://sng#0.0.0.0:37543] -> akka.tcp://sng#127.0.0.1:4053
[21:11:40 INF] Message InitJoin from akka://sng/system/cluster/core/daemon/joinSeedNodeProcess-1 to akka://sng/deadLetters was not delivered. 1 dead letters encountered.
[21:11:40 INF] Message InitJoin from akka://sng/system/cluster/core/daemon/joinSeedNodeProcess-1 to akka://sng/deadLetters was not delivered. 2 dead letters encountered.
[21:11:40 INF] Message InitJoin from akka://sng/system/cluster/core/daemon/joinSeedNodeProcess-1 to akka://sng/deadLetters was not delivered. 3 dead letters encountered.
[21:11:40 INF] Message InitJoin from akka://sng/system/cluster/core/daemon/joinSeedNodeProcess-1 to akka://sng/deadLetters was not delivered. 4 dead letters encountered.
[21:11:40 INF] Message InitJoin from akka://sng/system/cluster/core/daemon/joinSeedNodeProcess-1 to akka://sng/deadLetters was not delivered. 5 dead letters encountered.
[21:11:40 INF] Message AckIdleCheckTimer from akka://sng/system/endpointManager/reliableEndpointWriter-akka.tcp%3A%2F%2Fsng%40127.0.0.1%3A4053-1/endpointWriter to akka://sng/system/endpointManager/reliableEndpointWriter-akka.tcp%3A%2F%2Fsng%40127.0.0.1%3A4053-1/endpointWriter was not delivered. 6 dead letters encountered.
[21:11:41 WRN] Couldn't join seed nodes after [5] attempts, will try again. seed-nodes=[akka.tcp://sng#127.0.0.1:4053]
[21:11:41 INF] Message InitJoin from akka://sng/system/cluster/core/daemon/joinSeedNodeProcess-1 to akka://sng/deadLetters was not delivered. 7 dead letters encountered.
[21:11:46 WRN] Couldn't join seed nodes after [6] attempts, will try again. seed-nodes=[akka.tcp://sng#127.0.0.1:4053]
[21:11:51 WRN] Couldn't join seed nodes after [7] attempts, will try again. seed-nodes=[akka.tcp://sng#127.0.0.1:4053]
Do you have any idea how to debug and/or resolve this?
The first thing I notice is that the HOCON configuration of the non-working node contains a different "seed-nodes" address from the working node.
IMHO the "seed-nodes" in all the applications (the nodes, as they are called in a cluster) within the cluster need to be the same. So in the non-working node, instead of
seed-nodes = ["akka.tcp://songoulash#127.0.0.1:4053"]
use the value from the working node:
seed-nodes = ["akka.tcp://sng#127.0.0.1:4053"]
Also, please check this GitHub repository for a sample: https://github.com/AJEETX/Akka.Cluster
and another one: https://github.com/AJEETX/AkkaNet.Cluster.RoundRobinGroup
@Rok, kindly let me know if this was helpful, or I can investigate further.

How to handle Spark with multiple Cassandra servers with different SSL policies

One Cassandra cluster doesn't have SSL enabled and another Cassandra cluster has SSL enabled. How do I interact with both Cassandra clusters from a single Spark job? I have to copy a table from one server (without SSL) and put it into the other server (with SSL).
Spark job:
object TwoClusterExample extends App {
  val conf = new SparkConf(true).setAppName("SparkCassandraTwoClusterExample")
  println("Starting the SparkCassandraLocalJob....")
  val sc = new SparkContext(conf)

  val connectorToClusterOne = CassandraConnector(sc.getConf.set("spark.cassandra.connection.host", "localhost"))
  val connectorToClusterTwo = CassandraConnector(sc.getConf.set("spark.cassandra.connection.host", "remote"))

  val rddFromClusterOne = {
    implicit val c = connectorToClusterOne
    sc.cassandraTable("test", "one")
  }

  {
    implicit val c = connectorToClusterTwo
    rddFromClusterOne.saveToCassandra("test", "one")
  }
}
Cassandra conf:
spark.master spark://ip:6066
spark.executor.memory 1g
spark.cassandra.connection.host remote
spark.cassandra.auth.username iccassandra
spark.cassandra.auth.password pwd1
spark.serializer org.apache.spark.serializer.KryoSerializer
spark.eventLog.enabled true
spark.eventLog.dir /Users/test/logs/spark
spark.cassandra.connection.ssl.enabled true
spark.cassandra.connection.ssl.trustStore.password pwd2
spark.cassandra.connection.ssl.trustStore.path truststore.jks
Submitting the job:
spark-submit --deploy-mode cluster --master spark://ip:6066 --properties-file cassandra-count.conf --class TwoClusterExample target/scala-2.10/cassandra-table-assembly-1.0.jar
Below is the error I am getting:
17/10/26 16:27:20 DEBUG STATES: [/remote:9042] preventing new connections for the next 1000 ms
17/10/26 16:27:20 DEBUG STATES: [/remote:9042] Connection[/remote:9042-1, inFlight=0, closed=true] failed, remaining = 0
17/10/26 16:27:20 DEBUG ControlConnection: [Control connection] error on /remote:9042 connection, no more host to try
com.datastax.driver.core.exceptions.TransportException: [/remote] Cannot connect
at com.datastax.driver.core.Connection$1.operationComplete(Connection.java:157)
at com.datastax.driver.core.Connection$1.operationComplete(Connection.java:140)
at io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:680)
at io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:603)
at io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:563)
at io.netty.util.concurrent.DefaultPromise.tryFailure(DefaultPromise.java:424)
at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe$1.run(AbstractNioChannel.java:222)
at io.netty.util.concurrent.PromiseTask$RunnableAdapter.call(PromiseTask.java:38)
at io.netty.util.concurrent.ScheduledFutureTask.run(ScheduledFutureTask.java:120)
at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:357)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:357)
at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111)
at java.lang.Thread.run(Thread.java:748)
Caused by: io.netty.channel.ConnectTimeoutException: connection timed out: /remote:9042
at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe$1.run(AbstractNioChannel.java:220)
... 6 more
17/10/26 16:27:20 DEBUG Cluster: Shutting down
Exception in thread "main" java.lang.reflect.InvocationTargetException
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.spark.deploy.worker.DriverWrapper$.main(DriverWrapper.scala:58)
at org.apache.spark.deploy.worker.DriverWrapper.main(DriverWrapper.scala)
Caused by: java.io.IOException: Failed to open native connection to Cassandra at {remote}:9042
at com.datastax.spark.connector.cql.CassandraConnector$.com$datastax$spark$connector$cql$CassandraConnector$$createSession(CassandraConnector.scala:162)
at com.datastax.spark.connector.cql.CassandraConnector$$anonfun$2.apply(CassandraConnector.scala:148)
at com.datastax.spark.connector.cql.CassandraConnector$$anonfun$2.apply(CassandraConnector.scala:148)
at com.datastax.spark.connector.cql.RefCountedCache.createNewValueAndKeys(RefCountedCache.scala:31)
at com.datastax.spark.connector.cql.RefCountedCache.acquire(RefCountedCache.scala:56)
at com.datastax.spark.connector.cql.CassandraConnector.openSession(CassandraConnector.scala:81)
at com.datastax.spark.connector.cql.CassandraConnector.withSessionDo(CassandraConnector.scala:109)
at com.datastax.spark.connector.cql.CassandraConnector.withClusterDo(CassandraConnector.scala:120)
at com.datastax.spark.connector.cql.Schema$.fromCassandra(Schema.scala:304)
at com.datastax.spark.connector.rdd.CassandraTableRowReaderProvider$class.tableDef(CassandraTableRowReaderProvider.scala:51)
at com.datastax.spark.connector.rdd.CassandraTableScanRDD.tableDef$lzycompute(CassandraTableScanRDD.scala:59)
at com.datastax.spark.connector.rdd.CassandraTableScanRDD.tableDef(CassandraTableScanRDD.scala:59)
at com.datastax.spark.connector.rdd.CassandraTableRowReaderProvider$class.verify(CassandraTableRowReaderProvider.scala:146)
at com.datastax.spark.connector.rdd.CassandraTableScanRDD.verify(CassandraTableScanRDD.scala:59)
at com.datastax.spark.connector.rdd.CassandraTableScanRDD.getPartitions(CassandraTableScanRDD.scala:143)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:239)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:237)
at scala.Option.getOrElse(Option.scala:120)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:237)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1929)
at org.apache.spark.rdd.RDD.count(RDD.scala:1143)
at cassandraCount$.runJob(cassandraCount.scala:27)
at cassandraCount$delayedInit$body.apply(cassandraCount.scala:22)
at scala.Function0$class.apply$mcV$sp(Function0.scala:40)
at scala.runtime.AbstractFunction0.apply$mcV$sp(AbstractFunction0.scala:12)
at scala.App$$anonfun$main$1.apply(App.scala:71)
at scala.App$$anonfun$main$1.apply(App.scala:71)
at scala.collection.immutable.List.foreach(List.scala:318)
at scala.collection.generic.TraversableForwarder$class.foreach(TraversableForwarder.scala:32)
at scala.App$class.main(App.scala:71)
at cassandraCount$.main(cassandraCount.scala:10)
at cassandraCount.main(cassandraCount.scala)
... 6 more
Caused by: com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried for query failed (tried: /remote:9042 (com.datastax.driver.core.exceptions.TransportException: [/remote] Cannot connect))
at com.datastax.driver.core.ControlConnection.reconnectInternal(ControlConnection.java:231)
at com.datastax.driver.core.ControlConnection.connect(ControlConnection.java:77)
at com.datastax.driver.core.Cluster$Manager.init(Cluster.java:1414)
at com.datastax.driver.core.Cluster.getMetadata(Cluster.java:393)
at com.datastax.spark.connector.cql.CassandraConnector$.com$datastax$spark$connector$cql$CassandraConnector$$createSession(CassandraConnector.scala:155)
... 37 more
17/10/26 16:27:23 INFO SparkContext: Invoking stop() from shutdown hook
17/10/26 16:27:23 INFO SparkUI: Stopped Spark web UI at http://10.7.10.138:4040
Working code:
object ClusterSSLTest extends App {
  val conf = new SparkConf(true).setAppName("sparkCassandraLocalJob")
  println("Starting the ClusterSSLTest....")
  val sc = new SparkContext(conf)

  val sourceCluster = CassandraConnector(
    sc.getConf.set("spark.cassandra.connection.host", "localhost"))

  val destinationCluster = CassandraConnector(
    sc.getConf.set("spark.cassandra.connection.host", "remoteip1,remoteip2")
      .set("spark.cassandra.auth.username", "uname")
      .set("spark.cassandra.auth.password", "pwd")
      .set("spark.cassandra.connection.ssl.enabled", "true")
      .set("spark.cassandra.connection.timeout_ms", "10000")
      .set("spark.cassandra.connection.ssl.trustStore.path", "../truststore.jks")
      .set("spark.cassandra.connection.ssl.trustStore.password", "pwd")
      .set("spark.cassandra.connection.ssl.trustStore.type", "JKS")
      .set("spark.cassandra.connection.ssl.protocol", "TLS")
      .set("spark.cassandra.connection.ssl.enabledAlgorithms", "TLS_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_128_CBC_SHA")
  )

  val rddFromSourceCluster = {
    implicit val c = sourceCluster
    val tbRdd = sc.cassandraTable("analytics", "products")
    println(s"no of rows ${tbRdd.count()}")
    tbRdd
  }

  val rddToDestinationCluster = {
    implicit val c = destinationCluster // connect to the destination cluster in this code block
    rddFromSourceCluster.saveToCassandra("analytics", "products")
  }

  sc.stop()
}

HtmlUnit browser automation causes SSL error

I am writing a simple piece of Java code to check my external IP and update my DNS entries if it has changed.
I use NetBeans and HtmlUnit. The first part of the code works OK.
My code gets the proper external IP address from WhatIsMyIp.com.
The second part, which involves logging into my DNS provider's webpage (fastname.no), is attached below.
public void LoginToDNS(HtmlPage LogInPage) throws Exception {
    try (final WebClient webClient = new WebClient()) {

        // get the webpage data we're looking for
        // also get the buttons and textfields
        final List<?> forms = LogInPage.getForms();
        final HtmlForm form = LogInPage.getFormByName("form1");
        final HtmlTextInput UsernameField = form.getInputByName("username");
        final HtmlPasswordInput PasswordField = form.getInputByName("password");
        final HtmlButton submit_button = form.getButtonByName("submit");

        // Set the username and password
        UsernameField.setValueAttribute("myusername");
        PasswordField.setValueAttribute("mypassword");

        // submit the form by clicking the button and get back the control panel page
        final HtmlPage DNSPage = submit_button.click();
        CheckDNSEntries(DNSPage);

    }
}
Although I can see that I have logged in properly (I know because the page title changes to the correct one), I see an SSL error and the resulting webpage is not what I want.
java.util.concurrent.ExecutionException: java.io.IOException: Cannot init SSL
....
Caused by: java.io.IOException: Cannot init SSL
......
at org.eclipse.jetty.websocket.client.io.WebSocketClientSelectorManager.newConnection(WebSocketClientSelectorManager.java:96)
... 3 more
Right when I press the submit button, I see a lot of errors in the debugger coming from HtmlUnit (see below).
Apr 12, 2016 10:36:49 AM com.gargoylesoftware.htmlunit.javascript.StrictErrorReporter runtimeError
SEVERE: runtimeError: message=[An invalid or illegal selector was specified (selector: '*,:x' error: Invalid selector: :x).] sourceName=[https://www.fastname.no/panel/js/custom/bootstrap.js?bust=?v20160408110428] line=[66] lineSource=[null] lineOffset=[0]
Apr 12, 2016 10:36:49 AM com.gargoylesoftware.htmlunit.javascript.StrictErrorReporter runtimeError
SEVERE: runtimeError: message=[An invalid or illegal selector was specified (selector: '*,:x' error: Invalid selector: :x).] sourceName=[https://www.fastname.no/panel/js/custom/bootstrap.js?bust=?v20160408110428] line=[66] lineSource=[null] lineOffset=[0]
Apr 12, 2016 10:36:51 AM com.gargoylesoftware.htmlunit.javascript.background.JavaScriptJobManagerImpl runSingleJob
SEVERE: Job run failed with unexpected RuntimeException: [object Object] (https://www.googletagmanager.com/gtm.js?id=GTM-TWH4RS&bust=?v20160408110428#65)
======= EXCEPTION START ========
Exception class=[net.sourceforge.htmlunit.corejs.javascript.JavaScriptException]
com.gargoylesoftware.htmlunit.ScriptException: [object Object] (https://www.googletagmanager.com/gtm.js?id=GTM-TWH4RS&bust=?v20160408110428#65)
Apr 12, 2016 10:36:52 AM com.gargoylesoftware.htmlunit.javascript.background.JavaScriptJobManagerImpl runSingleJob
SEVERE: Job run failed with unexpected RuntimeException: [object Object] (https://www.googletagmanager.com/gtm.js?id=GTM-TWH4RS&bust=?v20160408110428#65)
======= EXCEPTION START ========
Exception class=[net.sourceforge.htmlunit.corejs.javascript.JavaScriptException]
com.gargoylesoftware.htmlunit.ScriptException: [object Object] (https://www.googletagmanager.com/gtm.js?id=GTM-TWH4RS&bust=?v20160408110428#65)
at com.gargoylesoftware.htmlunit.javascript.JavaScriptEngine.handleJavaScriptException(JavaScriptEngine.java:982)
at com.gargoylesoftware.htmlunit.javascript.JavaScriptEngine$HtmlUnitContextAction.run(JavaScriptEngine.java:894)
2016-04-12 10:36:55.106:INFO::JS executor for com.gargoylesoftware.htmlunit.WebClient#d1a10ac: Logging initialized #10613ms
Apr 12, 2016 10:36:55 AM com.gargoylesoftware.htmlunit.javascript.host.WebSocket run
SEVERE: WS connect error
java.util.concurrent.ExecutionException: java.io.IOException: Cannot init SSL
at org.eclipse.jetty.util.FuturePromise.get(FuturePromise.java:123)
at com.gargoylesoftware.htmlunit.javascript.host.WebSocket$1.run(WebSocket.java:130)
I am not quite sure whether this is an HtmlUnit problem handling JavaScript or just my code. Is there a better solution I can use instead of HtmlUnit that will get me where I want to go faster? After all, I think the task I want to accomplish is relatively simple.
Try allowing insecure SSL certificates by adding:
webClient.getOptions().setUseInsecureSSL(true);
You can find more documentation here.
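For illustration, a minimal, self-contained sketch of where that option goes; the URL and the extra script-error option are assumptions, and the rest of the login flow stays as in the question:

import com.gargoylesoftware.htmlunit.WebClient;
import com.gargoylesoftware.htmlunit.html.HtmlPage;

public final class InsecureSslExample {
    public static void main(String[] args) throws Exception {
        try (final WebClient webClient = new WebClient()) {
            // Accept certificates that cannot be validated locally,
            // which avoids the "Cannot init SSL" failures shown above.
            webClient.getOptions().setUseInsecureSSL(true);
            // Optional: keep noisy JavaScript errors from aborting the run.
            webClient.getOptions().setThrowExceptionOnScriptError(false);

            final HtmlPage loginPage = webClient.getPage("https://www.fastname.no/");
            System.out.println(loginPage.getTitleText());
        }
    }
}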
EDIT:
In order to hide the HtmlUnit logs, you have to add this:
LogFactory.getFactory().setAttribute("org.apache.commons.logging.Log", "org.apache.commons.logging.impl.NoOpLog");
java.util.logging.Logger.getLogger("com.gargoylesoftware").setLevel(Level.OFF);
java.util.logging.Logger.getLogger("org.apache.commons.httpclient").setLevel(Level.OFF);

Xcode 6.3 - Crash on Archive, NSOperationQueue

Xcode version: 6.3.2
The project is coded in Objective-C.
I cannot submit the application to Apple; I get this error when uploading.
Can you help me correct the error?
Crashed Thread: 15 Dispatch queue: NSOperationQueue 0x7fb2c8a0bf50 :: NSOperation 0x7fb2c8bf5480 (QOS: USER_INITIATED)
Exception Type: EXC_CRASH (SIGABRT)
Exception Codes: 0x0000000000000000, 0x0000000000000000
Application Specific Information:
ProductBuildVersion: 6D2105
ASSERTION FAILURE in /SourceCache/IDEFrameworks/IDEFrameworks-7718/IDEFoundation/Issues/IDEIssueManager.m:457
Details: This method must only be called on the main thread
Object: <IDEIssueManager>
Method: +_issueProviderInfo
Thread: <NSThread: 0x7fb2c75fbc90>{number = 21, name = (null)}
Hints: None
Close "workspace/project view", leave only "Organiser View" opened and submit your app.