I am trying to compose a worker verticle that bridges a Google Cloud Pub/Sub topic subscription with the Vert.x event bus, by adapting the Kotlin Pub/Sub example combined with this answer about a worker running an infinite blocking processing loop.
It does work, but Vert.x keeps complaining in the log that the thread is blocked, throwing an exception some time after a message is received from Pub/Sub (please ignore the blocking initialization for now):
9:15:12 AM: Executing task 'run'...
WARNING: You are a using release candidate 2.0.0-rc5. Behavior of this plugin has changed since 1.3.5. Please see release notes at: https://github.com/GoogleCloudPlatform/app-gradle-plugin.
Missing a feature? Can't get it to work?, please file a bug at: https://github.com/GoogleCloudPlatform/app-gradle-plugin/issues.
:compileKotlin UP-TO-DATE
:compileJava NO-SOURCE
:processResources NO-SOURCE
:classes UP-TO-DATE
Mar 10, 2019 9:15:18 AM io.vertx.core.impl.launcher.commands.Watcher
INFO: Watched paths: [/home/username/IdeaProjects/project_name/./src]
Mar 10, 2019 9:15:18 AM io.vertx.core.impl.launcher.commands.Watcher
INFO: Starting the vert.x application in redeploy mode
:run
Starting vert.x application...
f48ba7fd-a52b-487f-b553-2b74473e58ba-redeploy
Creating topic gcs-project-id:vertx.
Mar 10, 2019 9:15:18 AM com.google.auth.oauth2.DefaultCredentialsProvider warnAboutProblematicCredentials
WARNING: Your application has authenticated using end user credentials from Google Cloud SDK. We recommend that most server applications use service accounts instead. If your application continues to use end user credentials from Cloud SDK, you might receive a "quota exceeded" or "API not enabled" error. For more information about service accounts, see https://cloud.google.com/docs/authentication/.
Mar 10, 2019 9:15:21 AM io.vertx.core.impl.BlockedThreadChecker
WARNING: Thread Thread[vert.x-eventloop-thread-0,5,main] has been blocked for 2759 ms, time limit is 2000 ms
Topic gcs-project-id:vertx successfully created.
Creating subscription gcs-project-id:kotlin.
Mar 10, 2019 9:15:22 AM io.vertx.core.impl.BlockedThreadChecker
WARNING: Thread Thread[vert.x-eventloop-thread-0,5,main] has been blocked for 3759 ms, time limit is 2000 ms
Mar 10, 2019 9:15:23 AM io.vertx.core.impl.BlockedThreadChecker
WARNING: Thread Thread[vert.x-eventloop-thread-0,5,main] has been blocked for 4758 ms, time limit is 2000 ms
Mar 10, 2019 9:15:24 AM io.vertx.core.impl.BlockedThreadChecker
WARNING: Thread Thread[vert.x-eventloop-thread-0,5,main] has been blocked for 5759 ms, time limit is 2000 ms
io.vertx.core.VertxException: Thread blocked
at sun.misc.Unsafe.park(Native Method)
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
at com.google.common.util.concurrent.AbstractFuture.get(AbstractFuture.java:469)
at com.google.common.util.concurrent.Uninterruptibles.getUninterruptibly(Uninterruptibles.java:142)
at com.google.common.util.concurrent.Futures.getUnchecked(Futures.java:1309)
at com.google.api.gax.rpc.ApiExceptions.callAndTranslateApiException(ApiExceptions.java:52)
at com.google.api.gax.rpc.UnaryCallable.call(UnaryCallable.java:112)
at com.google.cloud.pubsub.v1.SubscriptionAdminClient.createSubscription(SubscriptionAdminClient.java:359)
at com.google.cloud.pubsub.v1.SubscriptionAdminClient.createSubscription(SubscriptionAdminClient.java:260)
at com.example.project.MainVerticle.subscribeTopic(MainVerticle.kt:76)
at com.example.project.MainVerticle.init(MainVerticle.kt:46)
at io.vertx.core.impl.DeploymentManager.lambda$doDeploy$8(DeploymentManager.java:492)
at io.vertx.core.impl.DeploymentManager$$Lambda$28/1902260856.handle(Unknown Source)
at io.vertx.core.impl.ContextImpl.executeTask(ContextImpl.java:320)
at io.vertx.core.impl.EventLoopContext.lambda$executeAsync$0(EventLoopContext.java:38)
at io.vertx.core.impl.EventLoopContext$$Lambda$29/1640639994.run(Unknown Source)
at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:163)
at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:404)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:462)
at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:897)
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.lang.Thread.run(Thread.java:748)
Subscription gcs-project-id:kotlin successfully created.
Listening to messages on kotlin:
Mar 10, 2019 9:15:25 AM io.vertx.core.impl.launcher.commands.VertxIsolatedDeployer
INFO: Succeeded in deploying verticle
Message Id: 462746807438186 Data: Bazinga
Message Id: 462746750387788 Data: Another message
Mar 10, 2019 9:16:25 AM io.vertx.core.impl.BlockedThreadChecker
WARNING: Thread Thread[vert.x-worker-thread-0,5,main] has been blocked for 60171 ms, time limit is 60000 ms
io.vertx.core.VertxException: Thread blocked
at sun.misc.Unsafe.park(Native Method)
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
at java.util.concurrent.LinkedBlockingDeque.takeFirst(LinkedBlockingDeque.java:492)
at java.util.concurrent.LinkedBlockingDeque.take(LinkedBlockingDeque.java:680)
at com.example.project.MainVerticle$start$1.handle(MainVerticle.kt:32)
at com.example.project.MainVerticle$start$1.handle(MainVerticle.kt:13)
at io.vertx.core.impl.ContextImpl.lambda$executeBlocking$2(ContextImpl.java:272)
at io.vertx.core.impl.ContextImpl$$Lambda$33/1101004004.run(Unknown Source)
at io.vertx.core.impl.TaskQueue.run(TaskQueue.java:76)
at io.vertx.core.impl.TaskQueue$$Lambda$26/1213216872.run(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.lang.Thread.run(Thread.java:748)
Mar 10, 2019 9:16:26 AM io.vertx.core.impl.BlockedThreadChecker
WARNING: Thread Thread[vert.x-worker-thread-0,5,main] has been blocked for 61172 ms, time limit is 60000 ms
io.vertx.core.VertxException: Thread blocked
at sun.misc.Unsafe.park(Native Method)
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
at java.util.concurrent.LinkedBlockingDeque.takeFirst(LinkedBlockingDeque.java:492)
at java.util.concurrent.LinkedBlockingDeque.take(LinkedBlockingDeque.java:680)
at com.example.project.MainVerticle$start$1.handle(MainVerticle.kt:32)
at com.example.project.MainVerticle$start$1.handle(MainVerticle.kt:13)
at io.vertx.core.impl.ContextImpl.lambda$executeBlocking$2(ContextImpl.java:272)
at io.vertx.core.impl.ContextImpl$$Lambda$33/1101004004.run(Unknown Source)
at io.vertx.core.impl.TaskQueue.run(TaskQueue.java:76)
at io.vertx.core.impl.TaskQueue$$Lambda$26/1213216872.run(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.lang.Thread.run(Thread.java:748)
And here is the source code:
package com.example.project_name
import com.google.api.gax.rpc.ApiException
import com.google.cloud.pubsub.v1.*
import com.google.pubsub.v1.ProjectSubscriptionName
import com.google.pubsub.v1.ProjectTopicName
import com.google.pubsub.v1.PubsubMessage
import com.google.pubsub.v1.PushConfig
import io.vertx.core.*
import java.util.concurrent.LinkedBlockingDeque
class MainVerticle : MessageReceiver, AbstractVerticle() {

    private val projectId = "gcs-project-id"
    private val topicId = "vertx"
    private val topic: ProjectTopicName = ProjectTopicName.of(projectId, topicId)

    private val subscriptionId = "kotlin"
    private val subscription = ProjectSubscriptionName.of(projectId, subscriptionId)

    private val messages = LinkedBlockingDeque<PubsubMessage>()

    private lateinit var subscriber: Subscriber

    override fun receiveMessage(message: PubsubMessage, consumer: AckReplyConsumer) {
        messages.offer(message)
        consumer.ack()
    }

    override fun start() {
        vertx.executeBlocking<Void>({
            try {
                println("Listening to messages on $subscriptionId:")
                subscriber.awaitRunning()
                while (true) {
                    val message = messages.take()
                    println("Message Id: ${message.messageId} Data: ${message.data.toStringUtf8()}")
                }
            } finally {
                subscriber.stopAsync()
                it.complete()
            }
        }, { println("done, ${it.cause()}") })
    }

    override fun init(vertx: Vertx?, context: Context?) {
        super.init(vertx, context)
        try {
            createTopic()
            subscribeTopic()
            subscriber = Subscriber.newBuilder(subscription, this).build()
            subscriber.startAsync()
        } catch (e: ApiException) {
            // example : code = ALREADY_EXISTS(409) implies topic already exists
            println("Failed: $e")
        }
    }

    override fun stop(stopFuture: Future<Void>?) {
        super.stop(stopFuture)
        try {
            deleteSub()
            deleteTopic()
        } catch (e: ApiException) {
            println("Failed: $e")
        } finally {
            subscriber.stopAsync()
            stopFuture!!.complete()
        }
    }

    private fun createTopic() { // expects 1 arg: <topic> to create
        println("Creating topic ${topic.project}:${topic.topic}.")
        TopicAdminClient.create().use { topicAdminClient -> topicAdminClient.createTopic(topic) }
        println("Topic ${topic.project}:${topic.topic} successfully created.")
    }

    private fun subscribeTopic() { // expects 2 args: <topic> and <subscription>
        println("Creating subscription ${subscription.project}:${subscription.subscription}.")
        SubscriptionAdminClient.create().use { it.createSubscription(subscription, topic, PushConfig.getDefaultInstance(), 0) }
        println("Subscription ${subscription.project}:${subscription.subscription} successfully created.")
    }

    private fun deleteTopic() {
        println("Deleting topic ${topic.project}:${topic.topic}.")
        TopicAdminClient.create().use { it.deleteTopic(topic) }
        println("Topic ${topic.project}:${topic.topic} successfully deleted.")
    }

    private fun deleteSub() { // expects 1 arg: <subscription> to delete
        println("Deleting subscription ${subscription.project}:${subscription.subscription}.")
        SubscriptionAdminClient.create().use { it.deleteSubscription(subscription) }
        println("Subscription ${subscription.project}:${subscription.subscription} successfully deleted.")
    }
}

fun main(vararg args: String) {
    Vertx.vertx().deployVerticle(MainVerticle(), DeploymentOptions().apply {
        isWorker = true
    })
}
I am clearly missing something. Also, if you have a better approach to integrating Google’s Pub/Sub library (which has its own async loop) with Vert.x, I’d be happy to hear about it instead of my primitive example approach.
The problem is with your while loop.
"Blocking" in this case does not mean that you can just keep running forever.
Your call to it.complete() is never reached, and at some point Vert.x will complain about that.
See the manual on Running blocking code, specifically the WARNING section.
To solve your problem, you will need to schedule your calls to messages.take() in one way or another, for example using setPeriodic. Inside the interval handler, empty your queue with executeBlocking, then give back control by calling complete(), either before or after you have scheduled the processing of the messages, depending on whether you care about the result.
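For what it's worth, a minimal Kotlin sketch of that suggestion, reusing the messages deque and the println from the question, might look like this; the 500 ms interval and the poll()-based drain loop are arbitrary choices, not something the answer or Vert.x prescribes:

override fun start() {
    // Check the queue periodically instead of blocking a worker thread forever.
    vertx.setPeriodic(500L) { _ ->
        vertx.executeBlocking<Void>({ future ->
            // Drain whatever has arrived so far, then hand the thread back.
            var message = messages.poll()
            while (message != null) {
                println("Message Id: ${message.messageId} Data: ${message.data.toStringUtf8()}")
                message = messages.poll()
            }
            future.complete()
        }, { /* result handler; nothing to do in this sketch */ })
    }
}

Because poll() returns immediately instead of parking the thread like take(), the worker thread is handed back to Vert.x on every tick and the BlockedThreadChecker stays quiet.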
Related
I'm working on a web service using Ktor 1.6.8 and Exposed 0.39.2.
My application module and database are set up as follows:
fun Application.module(testing: Boolean = false) {
    val hikariConfig =
        HikariConfig().apply {
            driverClassName = "org.postgresql.Driver"
            jdbcUrl = environment.config.propertyOrNull("ktor.database.url")?.getString()
            username = environment.config.propertyOrNull("ktor.database.username")?.getString()
            password = environment.config.propertyOrNull("ktor.database.password")?.getString()
            maximumPoolSize = 10
            isAutoCommit = false
            transactionIsolation = "TRANSACTION_REPEATABLE_READ"
            validate()
        }
    val pool = HikariDataSource(hikariConfig)
    val db = Database.connect(pool, {}, DatabaseConfig { useNestedTransactions = true })
}
I use ktor-server-test-host, Testcontainers and JUnit 5 to test the service. My test looks similar to the one below:
@Testcontainers
class SampleApplicationTest {

    companion object {
        @Container
        val postgreSQLContainer = PostgreSQLContainer<Nothing>(DockerImageName.parse("postgres:13.4-alpine")).apply {
            withDatabaseName("database_test")
        }
    }

    @Test
    internal fun `should make request successfully`() {
        withTestApplication({
            (environment.config as MapApplicationConfig).apply {
                put("ktor.database.url", postgreSQLContainer.jdbcUrl)
                put("ktor.database.user", postgreSQLContainer.username)
                put("ktor.database.password", postgreSQLContainer.password)
            }
            module(testing = true)
        }) {
            handleRequest(...)
        }
    }
}
I observed an issue where, if I ran multiple test classes together, some requests ended up using an old Exposed db instance that was set up in a previous test class, causing the test case to fail because the underlying database was already stopped.
When I ran one test class at a time, all were running fine.
Please refer to the log below for the error stack trace:
2022-10-01 08:00:36.102 [DefaultDispatcher-worker-5 #request#103] WARN Exposed - Transaction attempt #1 failed: java.sql.SQLTransientConnectionException: HikariPool-4 - Connection is not available, request timed out after 30001ms.. Statement(s): INSERT INTO cards (...)
org.jetbrains.exposed.exceptions.ExposedSQLException: java.sql.SQLTransientConnectionException: HikariPool-4 - Connection is not available, request timed out after 30001ms.
at org.jetbrains.exposed.sql.statements.Statement.executeIn$exposed_core(Statement.kt:49)
at org.jetbrains.exposed.sql.Transaction.exec(Transaction.kt:143)
at org.jetbrains.exposed.sql.Transaction.exec(Transaction.kt:128)
at org.jetbrains.exposed.sql.statements.Statement.execute(Statement.kt:28)
at org.jetbrains.exposed.sql.QueriesKt.insert(Queries.kt:73)
at com.example.application.services.CardService$createCard$row$1.invokeSuspend(CardService.kt:53)
at org.jetbrains.exposed.sql.transactions.experimental.SuspendedKt$suspendedTransactionAsyncInternal$1.invokeSuspend(Suspended.kt:127)
at kotlin.coroutines.jvm.internal.BaseContinuationImpl.resumeWith(ContinuationImpl.kt:33)
at kotlinx.coroutines.DispatchedTask.run(DispatchedTask.kt:106)
at kotlinx.coroutines.internal.LimitedDispatcher.run(LimitedDispatcher.kt:42)
at kotlinx.coroutines.scheduling.TaskImpl.run(Tasks.kt:95)
at kotlinx.coroutines.scheduling.CoroutineScheduler.runSafely(CoroutineScheduler.kt:570)
at kotlinx.coroutines.scheduling.CoroutineScheduler$Worker.executeTask(CoroutineScheduler.kt:750)
at kotlinx.coroutines.scheduling.CoroutineScheduler$Worker.runWorker(CoroutineScheduler.kt:677)
at kotlinx.coroutines.scheduling.CoroutineScheduler$Worker.run(CoroutineScheduler.kt:664)
Caused by: java.sql.SQLTransientConnectionException: HikariPool-4 - Connection is not available, request timed out after 30001ms.
at com.zaxxer.hikari.pool.HikariPool.createTimeoutException(HikariPool.java:695)
at com.zaxxer.hikari.pool.HikariPool.getConnection(HikariPool.java:197)
at com.zaxxer.hikari.pool.HikariPool.getConnection(HikariPool.java:162)
at com.zaxxer.hikari.HikariDataSource.getConnection(HikariDataSource.java:100)
at org.jetbrains.exposed.sql.Database$Companion$connect$3.invoke(Database.kt:142)
at org.jetbrains.exposed.sql.Database$Companion$connect$3.invoke(Database.kt:139)
at org.jetbrains.exposed.sql.Database$Companion$doConnect$3.invoke(Database.kt:127)
at org.jetbrains.exposed.sql.Database$Companion$doConnect$3.invoke(Database.kt:128)
at org.jetbrains.exposed.sql.transactions.ThreadLocalTransactionManager$ThreadLocalTransaction$connectionLazy$1.invoke(ThreadLocalTransactionManager.kt:69)
at org.jetbrains.exposed.sql.transactions.ThreadLocalTransactionManager$ThreadLocalTransaction$connectionLazy$1.invoke(ThreadLocalTransactionManager.kt:68)
at kotlin.UnsafeLazyImpl.getValue(Lazy.kt:81)
at org.jetbrains.exposed.sql.transactions.ThreadLocalTransactionManager$ThreadLocalTransaction.getConnection(ThreadLocalTransactionManager.kt:75)
at org.jetbrains.exposed.sql.Transaction.getConnection(Transaction.kt)
at org.jetbrains.exposed.sql.statements.InsertStatement.prepared(InsertStatement.kt:157)
at org.jetbrains.exposed.sql.statements.Statement.executeIn$exposed_core(Statement.kt:47)
... 19 common frames omitted
Caused by: org.postgresql.util.PSQLException: Connection to localhost:49544 refused. Check that the hostname and port are correct and that the postmaster is accepting TCP/IP connections.
at org.postgresql.core.v3.ConnectionFactoryImpl.openConnectionImpl(ConnectionFactoryImpl.java:303)
at org.postgresql.core.ConnectionFactory.openConnection(ConnectionFactory.java:51)
at org.postgresql.jdbc.PgConnection.<init>(PgConnection.java:223)
at org.postgresql.Driver.makeConnection(Driver.java:465)
at org.postgresql.Driver.connect(Driver.java:264)
at com.zaxxer.hikari.util.DriverDataSource.getConnection(DriverDataSource.java:138)
at com.zaxxer.hikari.pool.PoolBase.newConnection(PoolBase.java:358)
at com.zaxxer.hikari.pool.PoolBase.newPoolEntry(PoolBase.java:206)
at com.zaxxer.hikari.pool.HikariPool.createPoolEntry(HikariPool.java:477)
at com.zaxxer.hikari.pool.HikariPool.access$100(HikariPool.java:71)
at com.zaxxer.hikari.pool.HikariPool$PoolEntryCreator.call(HikariPool.java:725)
at com.zaxxer.hikari.pool.HikariPool$PoolEntryCreator.call(HikariPool.java:711)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
at java.base/java.lang.Thread.run(Thread.java:833)
Caused by: java.net.ConnectException: Connection refused
at java.base/sun.nio.ch.Net.pollConnect(Native Method)
at java.base/sun.nio.ch.Net.pollConnectNow(Net.java:672)
at java.base/sun.nio.ch.NioSocketImpl.timedFinishConnect(NioSocketImpl.java:542)
at java.base/sun.nio.ch.NioSocketImpl.connect(NioSocketImpl.java:597)
at java.base/java.net.SocksSocketImpl.connect(SocksSocketImpl.java:327)
I tried to add some cleanup code for Exposed’s TransactionManager in my application module as follows:
fun Application.module(testing: Boolean = false) {
    // ...
    val db = Database.connect(pool, {}, DatabaseConfig { useNestedTransactions = true })
    if (testing) {
        environment.monitor.subscribe(ApplicationStopped) {
            TransactionManager.closeAndUnregister(db)
        }
    }
}
However, the issue still happened, and I also observed an additional error, as follows:
2022-10-01 08:00:36.109 [DefaultDispatcher-worker-5 #request#93] ERROR Application - Unexpected error
java.lang.RuntimeException: database org.jetbrains.exposed.sql.Database#3bf4644c don't have any transaction manager
at org.jetbrains.exposed.sql.transactions.TransactionApiKt.getTransactionManager(TransactionApi.kt:149)
at org.jetbrains.exposed.sql.transactions.experimental.SuspendedKt.closeAsync(Suspended.kt:85)
at org.jetbrains.exposed.sql.transactions.experimental.SuspendedKt.access$closeAsync(Suspended.kt:1)
at org.jetbrains.exposed.sql.transactions.experimental.SuspendedKt$suspendedTransactionAsyncInternal$1.invokeSuspend(Suspended.kt:138)
(Coroutine boundary)
at org.mpierce.ktor.newrelic.KtorNewRelicKt$runPipelineInTransaction$2.invokeSuspend(KtorNewRelic.kt:178)
at org.mpierce.ktor.newrelic.KtorNewRelicKt$setUpNewRelic$2.invokeSuspend(KtorNewRelic.kt:104)
at io.ktor.routing.Routing.executeResult(Routing.kt:154)
at io.ktor.routing.Routing$Feature$install$1.invokeSuspend(Routing.kt:107)
at io.ktor.features.ContentNegotiation$Feature$install$1.invokeSuspend(ContentNegotiation.kt:145)
at io.ktor.features.StatusPages$interceptCall$2.invokeSuspend(StatusPages.kt:102)
at io.ktor.features.StatusPages.interceptCall(StatusPages.kt:101)
at io.ktor.features.StatusPages$Feature$install$2.invokeSuspend(StatusPages.kt:142)
at io.ktor.features.CallLogging$Feature$install$2.invokeSuspend(CallLogging.kt:188)
at io.ktor.server.testing.TestApplicationEngine$callInterceptor$1.invokeSuspend(TestApplicationEngine.kt:296)
at io.ktor.server.testing.TestApplicationEngine$2.invokeSuspend(TestApplicationEngine.kt:50)
Caused by: java.lang.RuntimeException: database org.jetbrains.exposed.sql.Database#3bf4644c don't have any transaction manager
at org.jetbrains.exposed.sql.transactions.TransactionApiKt.getTransactionManager(TransactionApi.kt:149)
at org.jetbrains.exposed.sql.transactions.experimental.SuspendedKt.closeAsync(Suspended.kt:85)
at org.jetbrains.exposed.sql.transactions.experimental.SuspendedKt.access$closeAsync(Suspended.kt:1)
at org.jetbrains.exposed.sql.transactions.experimental.SuspendedKt$suspendedTransactionAsyncInternal$1.invokeSuspend(Suspended.kt:138)
at kotlin.coroutines.jvm.internal.BaseContinuationImpl.resumeWith(ContinuationImpl.kt:33)
at kotlinx.coroutines.DispatchedTask.run(DispatchedTask.kt:106)
at kotlinx.coroutines.internal.LimitedDispatcher.run(LimitedDispatcher.kt:42)
at kotlinx.coroutines.scheduling.TaskImpl.run(Tasks.kt:95)
at kotlinx.coroutines.scheduling.CoroutineScheduler.runSafely(CoroutineScheduler.kt:570)
at kotlinx.coroutines.scheduling.CoroutineScheduler$Worker.executeTask(CoroutineScheduler.kt:750)
at kotlinx.coroutines.scheduling.CoroutineScheduler$Worker.runWorker(CoroutineScheduler.kt:677)
at kotlinx.coroutines.scheduling.CoroutineScheduler$Worker.run(CoroutineScheduler.kt:664)
Could someone show me what could be the issue here with my application code & test setup?
Thanks and regards.
I'm trying to write a sample for file upload and download, and I'm getting an error on the server.
Although the file is downloaded on the frontend, I get an error on the server.
For routing:
WebServer.builder(getRouting()).port(8080).build().start();
private static Routing getRouting() throws Exception{
return Routing.builder().register("/filetest", JerseySupport.builder().register(FileController.class).build())
.build();
}
@RequestScoped
public class FileController {

    @Context
    ServerRequest req;

    @Context
    ServerResponse res;

    @GET
    @Path("/{fname}")
    public void download(@PathParam("fname") String fname) {
        try {
            // Getting the file
            java.nio.file.Path filepath = Paths.get("c:/" + fname + ".txt");
            ResponseHeaders headers = res.headers();
            headers.contentType(io.helidon.common.http.MediaType.APPLICATION_OCTET_STREAM);
            headers.put(Http.Header.CONTENT_DISPOSITION, ContentDisposition.builder()
                    .filename(filepath.getFileName().toString())
                    .build()
                    .toString());
            res.send(filepath);
        } catch (Exception e) {
        }
    }
}
Jul 29, 2021 6:20:36 PM io.helidon.webserver.RequestRouting$RoutedRequest defaultHandler
WARNING: Default error handler: Unhandled exception encountered.
java.util.concurrent.ExecutionException: Unhandled 'cause' of this exception encountered.
    at io.helidon.webserver.RequestRouting$RoutedRequest.defaultHandler(RequestRouting.java:397)
    at io.helidon.webserver.RequestRouting$RoutedRequest.nextNoCheck(RequestRouting.java:377)
    at io.helidon.webserver.RequestRouting$RoutedRequest.next(RequestRouting.java:420)
    at io.helidon.webserver.jersey.ResponseWriter.failure(ResponseWriter.java:133)
    at org.glassfish.jersey.server.ServerRuntime$Responder.process(ServerRuntime.java:438)
    at org.glassfish.jersey.server.ServerRuntime$1.run(ServerRuntime.java:263)
    at org.glassfish.jersey.internal.Errors$1.call(Errors.java:248)
    at org.glassfish.jersey.internal.Errors$1.call(Errors.java:244)
    at org.glassfish.jersey.internal.Errors.process(Errors.java:292)
    at org.glassfish.jersey.internal.Errors.process(Errors.java:274)
    at org.glassfish.jersey.internal.Errors.process(Errors.java:244)
    at org.glassfish.jersey.process.internal.RequestScope.runInScope(RequestScope.java:265)
    at org.glassfish.jersey.server.ServerRuntime.process(ServerRuntime.java:234)
    at org.glassfish.jersey.server.ApplicationHandler.handle(ApplicationHandler.java:680)
    at io.helidon.webserver.jersey.JerseySupport$JerseyHandler.lambda$doAccept$3(JerseySupport.java:299)
    at io.helidon.common.context.Contexts.runInContext(Contexts.java:117)
    at io.helidon.common.context.ContextAwareExecutorImpl.lambda$wrap$5(ContextAwareExecutorImpl.java:154)
    at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
    at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
    at java.base/java.lang.Thread.run(Thread.java:834)
Caused by: io.helidon.common.http.AlreadyCompletedException: Response status code and headers are already completed (sent to the client)!
    at io.helidon.webserver.HashResponseHeaders$CompletionSupport.runIfNotCompleted(HashResponseHeaders.java:384)
    at io.helidon.webserver.HashResponseHeaders.httpStatus(HashResponseHeaders.java:251)
    at io.helidon.webserver.Response.status(Response.java:122)
    at io.helidon.webserver.Response.status(Response.java:48)
    at io.helidon.webserver.jersey.ResponseWriter.writeResponseStatusAndHeaders(ResponseWriter.java:81)
    at org.glassfish.jersey.server.ServerRuntime$Responder.writeResponse(ServerRuntime.java:607)
    at org.glassfish.jersey.server.ServerRuntime$Responder.processResponse(ServerRuntime.java:373)
    at org.glassfish.jersey.server.ServerRuntime$Responder.process(ServerRuntime.java:363)
    at org.glassfish.jersey.server.ServerRuntime$1.run(ServerRuntime.java:258)
    ... 14 more

Jul 29, 2021 6:20:36 PM io.helidon.webserver.RequestRouting$RoutedRequest defaultHandler
WARNING: Cannot perform error handling of the throwable (see cause of this exception) because headers were already sent
java.lang.IllegalStateException: Headers already sent. Cannot handle the cause of this exception.
    at io.helidon.webserver.RequestRouting$RoutedRequest.defaultHandler(RequestRouting.java:405)
    at io.helidon.webserver.RequestRouting$RoutedRequest.nextNoCheck(RequestRouting.java:377)
    at io.helidon.webserver.RequestRouting$RoutedRequest.next(RequestRouting.java:420)
    at io.helidon.webserver.jersey.ResponseWriter.failure(ResponseWriter.java:133)
    at org.glassfish.jersey.server.ServerRuntime$Responder.process(ServerRuntime.java:438)
    at org.glassfish.jersey.server.ServerRuntime$1.run(ServerRuntime.java:263)
    at org.glassfish.jersey.internal.Errors$1.call(Errors.java:248)
    at org.glassfish.jersey.internal.Errors$1.call(Errors.java:244)
    at org.glassfish.jersey.internal.Errors.process(Errors.java:292)
    at org.glassfish.jersey.internal.Errors.process(Errors.java:274)
    at org.glassfish.jersey.internal.Errors.process(Errors.java:244)
    at org.glassfish.jersey.process.internal.RequestScope.runInScope(RequestScope.java:265)
    at org.glassfish.jersey.server.ServerRuntime.process(ServerRuntime.java:234)
    at org.glassfish.jersey.server.ApplicationHandler.handle(ApplicationHandler.java:680)
    at io.helidon.webserver.jersey.JerseySupport$JerseyHandler.lambda$doAccept$3(JerseySupport.java:299)
    at io.helidon.common.context.Contexts.runInContext(Contexts.java:117)
    at io.helidon.common.context.ContextAwareExecutorImpl.lambda$wrap$5(ContextAwareExecutorImpl.java:154)
    at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
    at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
    at java.base/java.lang.Thread.run(Thread.java:834)
Caused by: io.helidon.common.http.AlreadyCompletedException: Response status code and headers are already completed (sent to the client)!
    at io.helidon.webserver.HashResponseHeaders$CompletionSupport.runIfNotCompleted(HashResponseHeaders.java:384)
    at io.helidon.webserver.HashResponseHeaders.httpStatus(HashResponseHeaders.java:251)
    at io.helidon.webserver.Response.status(Response.java:122)
    at io.helidon.webserver.Response.status(Response.java:48)
    at io.helidon.webserver.jersey.ResponseWriter.writeResponseStatusAndHeaders(ResponseWriter.java:81)
    at org.glassfish.jersey.server.ServerRuntime$Responder.writeResponse(ServerRuntime.java:607)
    at org.glassfish.jersey.server.ServerRuntime$Responder.processResponse(ServerRuntime.java:373)
    at org.glassfish.jersey.server.ServerRuntime$Responder.process(ServerRuntime.java:363)
    at org.glassfish.jersey.server.ServerRuntime$1.run(ServerRuntime.java:258)
    ... 14 more
When using Helidon MP, Jersey is responsible for sending the response.
In the code above you are using the underlying WebServer to send a response inside the body of a JAX-RS resource method; when Jersey then sends the response, the exception is thrown because it was already sent.
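To make that concrete, here is a minimal sketch of the alternative: build the response through JAX-RS and let Jersey write it, instead of injecting ServerResponse and calling res.send(). It is shown in Kotlin to keep a single example language in this thread; the same shape works in Java, and the StreamingOutput body plus the hand-written Content-Disposition header are just one way to do it, assuming the resource stays registered through JerseySupport as in the question:

import java.nio.file.Files
import java.nio.file.Paths
import javax.ws.rs.GET
import javax.ws.rs.Path
import javax.ws.rs.PathParam
import javax.ws.rs.Produces
import javax.ws.rs.core.MediaType
import javax.ws.rs.core.Response
import javax.ws.rs.core.StreamingOutput

// Registered via JerseySupport at /filetest, as in the question.
class FileController {

    @GET
    @Path("/{fname}")
    @Produces(MediaType.APPLICATION_OCTET_STREAM)
    fun download(@PathParam("fname") fname: String): Response {
        val filepath = Paths.get("c:/$fname.txt")
        // Let Jersey write the status, headers and body exactly once.
        val body = StreamingOutput { out -> Files.copy(filepath, out) }
        return Response.ok(body)
            .header("Content-Disposition", "attachment; filename=\"${filepath.fileName}\"")
            .build()
    }
}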
Helidon MP Multipart example
Jersey Multipart documentation
I'm trying to upload a batch of XML files (1,000 or so) from my computer to Firestore. The users of my Android app need to be able to search for these files, and because Firebase Storage doesn't provide that feature I thought I'd just dump them in Firestore.
To test it I'm trying to upload 10 files for a start. The code I have looks something like this:
fun main() {
    val db = getDatabase()
    getXMLFiles().take(10).forEach { file ->
        db.collection("xml").add(XMLFile(file.name, file))
    }
}

private fun getDatabase(): Firestore {
    val serviceAccount = FileInputStream("<project name>-firebase-adminsdk-<some other stuff>.json")
    val firestoreOptions = FirestoreOptions.newBuilder()
        .setTimestampsInSnapshotsEnabled(true).build()
    val options = FirebaseOptions.Builder()
        .setCredentials(GoogleCredentials.fromStream(serviceAccount))
        .setDatabaseUrl("https://<project name>.firebaseio.com")
        .setFirestoreOptions(firestoreOptions)
        .build()
    FirebaseApp.initializeApp(options)
    return FirestoreClient.getFirestore()
}

fun getXMLFiles() = File("/XML").walk().asIterable()

data class XMLFile(val name: String, val file: File)
Nothing gets uploaded and half the time (weirdly inconsistently) I get this error message:
Apr 15, 2019 2:05:53 PM com.google.auth.oauth2.ComputeEngineCredentials runningOnComputeEngine
WARNING: Failed to detect whether we are running on Google Compute Engine.
java.net.ConnectException: No route to host (connect failed)
at java.base/java.net.PlainSocketImpl.socketConnect(Native Method)
at java.base/java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:399)
at java.base/java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:242)
at java.base/java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:224)
at java.base/java.net.Socket.connect(Socket.java:591)
at java.base/sun.net.NetworkClient.doConnect(NetworkClient.java:177)
at java.base/sun.net.www.http.HttpClient.openServer(HttpClient.java:474)
at java.base/sun.net.www.http.HttpClient.openServer(HttpClient.java:569)
at java.base/sun.net.www.http.HttpClient.<init>(HttpClient.java:242)
at java.base/sun.net.www.http.HttpClient.New(HttpClient.java:341)
at java.base/sun.net.www.http.HttpClient.New(HttpClient.java:362)
at java.base/sun.net.www.protocol.http.HttpURLConnection.getNewHttpClient(HttpURLConnection.java:1242)
at java.base/sun.net.www.protocol.http.HttpURLConnection.plainConnect0(HttpURLConnection.java:1181)
at java.base/sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:1075)
at java.base/sun.net.www.protocol.http.HttpURLConnection.connect(HttpURLConnection.java:1009)
at com.google.api.client.http.javanet.NetHttpRequest.execute(NetHttpRequest.java:104)
at com.google.api.client.http.HttpRequest.execute(HttpRequest.java:981)
at com.google.auth.oauth2.ComputeEngineCredentials.runningOnComputeEngine(ComputeEngineCredentials.java:210)
at com.google.auth.oauth2.DefaultCredentialsProvider.tryGetComputeCredentials(DefaultCredentialsProvider.java:290)
at com.google.auth.oauth2.DefaultCredentialsProvider.getDefaultCredentialsUnsynchronized(DefaultCredentialsProvider.java:207)
at com.google.auth.oauth2.DefaultCredentialsProvider.getDefaultCredentials(DefaultCredentialsProvider.java:124)
at com.google.auth.oauth2.GoogleCredentials.getApplicationDefault(GoogleCredentials.java:127)
at com.google.auth.oauth2.GoogleCredentials.getApplicationDefault(GoogleCredentials.java:100)
at com.google.cloud.ServiceOptions.defaultCredentials(ServiceOptions.java:304)
at com.google.cloud.ServiceOptions.<init>(ServiceOptions.java:278)
at com.google.cloud.firestore.FirestoreOptions.<init>(FirestoreOptions.java:225)
at com.google.cloud.firestore.FirestoreOptions$Builder.build(FirestoreOptions.java:219)
at UploaderKt.getDatabase(Uploader.kt:32)
at UploaderKt.main(Uploader.kt:13)
at UploaderKt.main(Uploader.kt)
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
Process finished with exit code 0
I checked, and it does find the .json with the credentials, so that is not the problem. I can't find anything anywhere about the error message
Apr 15, 2019 2:05:53 PM com.google.auth.oauth2.ComputeEngineCredentials runningOnComputeEngine
WARNING: Failed to detect whether we are running on Google Compute Engine.
So I'm hoping someone can help.
Btw, without showing too much, my .json looks like this:
{
"type": "service_account",
"project_id": "XXX",
"private_key_id": "XXX",
"private_key": "-----BEGIN PRIVATE KEY-----XXX-----END PRIVATE KEY-----\n",
"client_email": "firebase-adminsdk-XXX#XXX.iam.gserviceaccount.com",
"client_id": "XXX",
"auth_uri": "https://accounts.google.com/o/oauth2/auth",
"token_uri": "https://oauth2.googleapis.com/token",
"auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs",
"client_x509_cert_url": "https://www.googleapis.com/robot/v1/metadata/x509/firebase-adminsdk-XXX%40XXX.iam.gserviceaccount.com"
}
Edit: might be stating the obvious but I have no problems connecting to the internet otherwise
Can anybody please help me fix this issue?
**I am getting the error [client] - AMQ214008: Failed to handle packet (java.lang.UnsupportedOperationException) while processing command data in JavaLite Async.**
[2018-03-30 10:27:16,303] - [DEBUG] [client] - Calling close on session ClientSessionImpl [name=d13aa760-33d6-11e8-b4fb-844bf530b8f3, username=null, closed=false, factory = org.apache.activemq.artemis.core.client.impl.ClientSessionFactoryImpl#58e64301, metaData=(jms-session=,)]#6a6c5fb3
[2018-03-30 10:27:16,306] - [DEBUG] [server] - QueueImpl[name=jms.queue.eventQueue, postOffice=PostOfficeImpl [server=ActiveMQServerImpl::serverUUID=70d74287-3283-11e8-8a66-844bf530b8f3]]#39533a61 doing deliver. messageReferences=0
[2018-03-30 10:27:16,308] - [DEBUG] [client] - calling cleanup on ClientSessionImpl [name=d13aa760-33d6-11e8-b4fb-844bf530b8f3, username=null, closed=false, factory = org.apache.activemq.artemis.core.client.impl.ClientSessionFactoryImpl#58e64301, metaData=(jms-session=,)]#6a6c5fb3
[2018-03-30 10:27:16,335] - [DEBUG] [HttpAsyncRequestExecutor] - http-outgoing-0 [ACTIVE] [content length: 42355; pos: 42355; completed: true]
[2018-03-30 10:27:16,336] - [DEBUG] [ThreadLocalRandom] - -Dio.netty.initialSeedUniquifier: 0xad1a1d5891abf66a
[2018-03-30 10:27:16,337] - [ERROR] [client] - AMQ214008: Failed to handle packet
java.lang.UnsupportedOperationException
    at java.nio.ByteBuffer.array(Unknown Source)
    at org.apache.activemq.artemis.core.client.impl.ClientConsumerImpl.handleCompressedMessage(ClientConsumerImpl.java:600)
    at org.apache.activemq.artemis.core.client.impl.ClientConsumerImpl.handleMessage(ClientConsumerImpl.java:532)
    at org.apache.activemq.artemis.core.client.impl.ClientSessionImpl.handleReceiveMessage(ClientSessionImpl.java:824)
    at org.apache.activemq.artemis.spi.core.remoting.SessionContext.handleReceiveMessage(SessionContext.java:97)
    at org.apache.activemq.artemis.core.protocol.core.impl.ActiveMQSessionContext.handleReceivedMessagePacket(ActiveMQSessionContext.java:712)
    at org.apache.activemq.artemis.core.protocol.core.impl.ActiveMQSessionContext.access$400(ActiveMQSessionContext.java:111)
    at org.apache.activemq.artemis.core.protocol.core.impl.ActiveMQSessionContext$ClientSessionPacketHandler.handlePacket(ActiveMQSessionContext.java:755)
    at org.apache.activemq.artemis.core.protocol.core.impl.ChannelImpl.handlePacket(ChannelImpl.java:594)
    at org.apache.activemq.artemis.core.protocol.core.impl.RemotingConnectionImpl.doBufferReceived(RemotingConnectionImpl.java:368)
    at org.apache.activemq.artemis.core.protocol.core.impl.RemotingConnectionImpl.bufferReceived(RemotingConnectionImpl.java:350)
    at org.apache.activemq.artemis.core.client.impl.ClientSessionFactoryImpl$DelegatingBufferHandler.bufferReceived(ClientSessionFactoryImpl.java:1140)
    at org.apache.activemq.artemis.core.remoting.impl.invm.InVMConnection$1.run(InVMConnection.java:183)
    at org.apache.activemq.artemis.utils.OrderedExecutorFactory$OrderedExecutor$ExecutorTask.run(OrderedExecutorFactory.java:100)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
    at java.lang.Thread.run(Unknown Source)
When I execute the code in a standalone project, it works fine. But when running the same code in a Tomcat server, it throws the above error.
The source code is below:
public class TestCommand extends Command
{
    private TestEvent event;

    public TestCommand(MsgEvent event)
    {
        this.event = (TestEvent) event;
    }

    public TestCommand()
    {
    }

    @Override
    public void execute()
    {
        //code stuff
    }
}
async = new Async(filePath, false, new QueueConfig("eventQueue", new CommandListener(), threadCount));
async.start();

public void test(EventCommand ev)
{
    async.send("eventQueue", ev);
}
The following libraries are loaded into the classpath.
The evidence suggests to me that when this code is executed in Tomcat it is using a different java.nio.ByteBuffer implementation than when it is run standalone (perhaps due to different versions of Netty). The code causing the exception is calling java.nio.ByteBuffer.array() which is not required to be implemented (i.e. throwing an UnsupportedOperationException is valid here). This was dealt with in Artemis via this commit which is available in Artemis 1.4. That said, there's no reason to use such an old version of Artemis. I would recommend you upgrade to the latest 2.5 release as soon as possible.
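To illustrate the mechanism described above, independent of Artemis or Tomcat: array() is an optional operation on java.nio.ByteBuffer, so whether the call succeeds depends entirely on which buffer implementation ends up being handed to the client. A tiny Kotlin check:

import java.nio.ByteBuffer

fun main() {
    val heap = ByteBuffer.allocate(16)
    println(heap.hasArray())       // true: heap.array() is safe to call

    val direct = ByteBuffer.allocateDirect(16)
    println(direct.hasArray())     // false: direct.array() throws UnsupportedOperationException
}

Heap buffers expose a backing array; direct buffers (which Netty frequently allocates) do not, which is why the same client code can work in one deployment and throw in another.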
One Cassandra cluster doesn't have SSL enabled and another Cassandra cluster has SSL enabled. How can I interact with both Cassandra clusters from a single Spark job? I have to copy a table from one server (without SSL) to another server (with SSL).
Spark job:
object TwoClusterExample extends App {
  val conf = new SparkConf(true).setAppName("SparkCassandraTwoClusterExample")
  println("Starting the SparkCassandraLocalJob....")
  val sc = new SparkContext(conf)

  val connectorToClusterOne = CassandraConnector(sc.getConf.set("spark.cassandra.connection.host", "localhost"))
  val connectorToClusterTwo = CassandraConnector(sc.getConf.set("spark.cassandra.connection.host", "remote"))

  val rddFromClusterOne = {
    implicit val c = connectorToClusterOne
    sc.cassandraTable("test", "one")
  }

  {
    implicit val c = connectorToClusterTwo
    rddFromClusterOne.saveToCassandra("test", "one")
  }
}
Cassandra conf:
spark.master spark://ip:6066
spark.executor.memory 1g
spark.cassandra.connection.host remote
spark.cassandra.auth.username iccassandra
spark.cassandra.auth.password pwd1
spark.serializer org.apache.spark.serializer.KryoSerializer
spark.eventLog.enabled true
spark.eventLog.dir /Users/test/logs/spark
spark.cassandra.connection.ssl.enabled true
spark.cassandra.connection.ssl.trustStore.password pwd2
spark.cassandra.connection.ssl.trustStore.path truststore.jks
Submitting the job:
spark-submit --deploy-mode cluster --master spark://ip:6066 --properties-file cassandra-count.conf --class TwoClusterExample target/scala-2.10/cassandra-table-assembly-1.0.jar
Below is the error I am getting:
17/10/26 16:27:20 DEBUG STATES: [/remote:9042] preventing new connections for the next 1000 ms
17/10/26 16:27:20 DEBUG STATES: [/remote:9042] Connection[/remote:9042-1, inFlight=0, closed=true] failed, remaining = 0
17/10/26 16:27:20 DEBUG ControlConnection: [Control connection] error on /remote:9042 connection, no more host to try
com.datastax.driver.core.exceptions.TransportException: [/remote] Cannot connect
at com.datastax.driver.core.Connection$1.operationComplete(Connection.java:157)
at com.datastax.driver.core.Connection$1.operationComplete(Connection.java:140)
at io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:680)
at io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:603)
at io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:563)
at io.netty.util.concurrent.DefaultPromise.tryFailure(DefaultPromise.java:424)
at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe$1.run(AbstractNioChannel.java:222)
at io.netty.util.concurrent.PromiseTask$RunnableAdapter.call(PromiseTask.java:38)
at io.netty.util.concurrent.ScheduledFutureTask.run(ScheduledFutureTask.java:120)
at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:357)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:357)
at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111)
at java.lang.Thread.run(Thread.java:748)
Caused by: io.netty.channel.ConnectTimeoutException: connection timed out: /remote:9042
at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe$1.run(AbstractNioChannel.java:220)
... 6 more
17/10/26 16:27:20 DEBUG Cluster: Shutting down
Exception in thread "main" java.lang.reflect.InvocationTargetException
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.spark.deploy.worker.DriverWrapper$.main(DriverWrapper.scala:58)
at org.apache.spark.deploy.worker.DriverWrapper.main(DriverWrapper.scala)
Caused by: java.io.IOException: Failed to open native connection to Cassandra at {remote}:9042
at com.datastax.spark.connector.cql.CassandraConnector$.com$datastax$spark$connector$cql$CassandraConnector$$createSession(CassandraConnector.scala:162)
at com.datastax.spark.connector.cql.CassandraConnector$$anonfun$2.apply(CassandraConnector.scala:148)
at com.datastax.spark.connector.cql.CassandraConnector$$anonfun$2.apply(CassandraConnector.scala:148)
at com.datastax.spark.connector.cql.RefCountedCache.createNewValueAndKeys(RefCountedCache.scala:31)
at com.datastax.spark.connector.cql.RefCountedCache.acquire(RefCountedCache.scala:56)
at com.datastax.spark.connector.cql.CassandraConnector.openSession(CassandraConnector.scala:81)
at com.datastax.spark.connector.cql.CassandraConnector.withSessionDo(CassandraConnector.scala:109)
at com.datastax.spark.connector.cql.CassandraConnector.withClusterDo(CassandraConnector.scala:120)
at com.datastax.spark.connector.cql.Schema$.fromCassandra(Schema.scala:304)
at com.datastax.spark.connector.rdd.CassandraTableRowReaderProvider$class.tableDef(CassandraTableRowReaderProvider.scala:51)
at com.datastax.spark.connector.rdd.CassandraTableScanRDD.tableDef$lzycompute(CassandraTableScanRDD.scala:59)
at com.datastax.spark.connector.rdd.CassandraTableScanRDD.tableDef(CassandraTableScanRDD.scala:59)
at com.datastax.spark.connector.rdd.CassandraTableRowReaderProvider$class.verify(CassandraTableRowReaderProvider.scala:146)
at com.datastax.spark.connector.rdd.CassandraTableScanRDD.verify(CassandraTableScanRDD.scala:59)
at com.datastax.spark.connector.rdd.CassandraTableScanRDD.getPartitions(CassandraTableScanRDD.scala:143)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:239)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:237)
at scala.Option.getOrElse(Option.scala:120)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:237)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1929)
at org.apache.spark.rdd.RDD.count(RDD.scala:1143)
at cassandraCount$.runJob(cassandraCount.scala:27)
at cassandraCount$delayedInit$body.apply(cassandraCount.scala:22)
at scala.Function0$class.apply$mcV$sp(Function0.scala:40)
at scala.runtime.AbstractFunction0.apply$mcV$sp(AbstractFunction0.scala:12)
at scala.App$$anonfun$main$1.apply(App.scala:71)
at scala.App$$anonfun$main$1.apply(App.scala:71)
at scala.collection.immutable.List.foreach(List.scala:318)
at scala.collection.generic.TraversableForwarder$class.foreach(TraversableForwarder.scala:32)
at scala.App$class.main(App.scala:71)
at cassandraCount$.main(cassandraCount.scala:10)
at cassandraCount.main(cassandraCount.scala)
... 6 more
Caused by: com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried for query failed (tried: /remote:9042 (com.datastax.driver.core.exceptions.TransportException: [/remote] Cannot connect))
at com.datastax.driver.core.ControlConnection.reconnectInternal(ControlConnection.java:231)
at com.datastax.driver.core.ControlConnection.connect(ControlConnection.java:77)
at com.datastax.driver.core.Cluster$Manager.init(Cluster.java:1414)
at com.datastax.driver.core.Cluster.getMetadata(Cluster.java:393)
at com.datastax.spark.connector.cql.CassandraConnector$.com$datastax$spark$connector$cql$CassandraConnector$$createSession(CassandraConnector.scala:155)
... 37 more
17/10/26 16:27:23 INFO SparkContext: Invoking stop() from shutdown hook
17/10/26 16:27:23 INFO SparkUI: Stopped Spark web UI at http://10.7.10.138:4040
Working code:
object ClusterSSLTest extends App {
  val conf = new SparkConf(true).setAppName("sparkCassandraLocalJob")
  println("Starting the ClusterSSLTest....")
  val sc = new SparkContext(conf)

  val sourceCluster = CassandraConnector(
    sc.getConf.set("spark.cassandra.connection.host", "localhost"))

  val destinationCluster = CassandraConnector(
    sc.getConf.set("spark.cassandra.connection.host", "remoteip1,remoteip2")
      .set("spark.cassandra.auth.username", "uname")
      .set("spark.cassandra.auth.password", "pwd")
      .set("spark.cassandra.connection.ssl.enabled", "true")
      .set("spark.cassandra.connection.timeout_ms", "10000")
      .set("spark.cassandra.connection.ssl.trustStore.path", "../truststore.jks")
      .set("spark.cassandra.connection.ssl.trustStore.password", "pwd")
      .set("spark.cassandra.connection.ssl.trustStore.type", "JKS")
      .set("spark.cassandra.connection.ssl.protocol", "TLS")
      .set("spark.cassandra.connection.ssl.enabledAlgorithms", "TLS_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_128_CBC_SHA")
  )

  val rddFromSourceCluster = {
    implicit val c = sourceCluster
    val tbRdd = sc.cassandraTable("analytics", "products")
    println(s"no of rows ${tbRdd.count()}")
    tbRdd
  }

  val rddToDestinationCluster = {
    implicit val c = destinationCluster // connect to the destination cluster in this code block.
    rddFromSourceCluster.saveToCassandra("analytics", "products")
  }

  sc.stop()
}