AbstractChannel$AnnotatedConnectException: Conexión rehusada (Connection refused) - Kotlin

I am trying to connect to an ActiveMQ host using Vert.x (the Vert.x AMQP client does not support failover, so I am trying to do it manually).
If the connection fails, I try to connect to a second host.
The connection method is:
private fun connectClient(vertx: Vertx, host: String, options: AmqpClientOptions,
                          address: String): Single<MQServerConnection> =
    AmqpClient.create(vertx, options).rxConnect().flatMap { amqpConnection ->
        amqpConnection.rxCreateDynamicReceiver().flatMap { receiver ->
            amqpConnection.rxCreateSender(address).map { sender ->
                MQServerConnection(amqpConnection, sender, receiver, host)
            }
        }
    }
And I am calling it with:
connectClient(vertx, host1, options, address).doOnError {
    logger.warn("CONNECTION_REFUSED_WITH_HOST_[$host1]")
    options = getConnectionOptions(host2)
    connectClient(vertx, host2, options, address).doOnError {
        Single.error<Connection>(Exception("Error"))
    }
}
But when the connection to host1 fails, I still get the following error, even though I am trying to connect to host2:
Caused by: io.netty.channel.AbstractChannel$AnnotatedConnectException: Conexión rehusada: HOSTXXX/54.166.103.68:5671
Caused by: java.net.ConnectException: Conexión rehusada
How can I avoid this error and actually fall back to the second host when the first connection fails?
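For reference, here is a minimal sketch of how the fallback could be chained (assuming RxJava 2's Single as returned by the Vert.x Rx bindings, and reusing the connectClient and getConnectionOptions helpers plus host1, host2 and logger from above): doOnError only observes the error and never subscribes to the inner connectClient call, so the second attempt would have to be chained with onErrorResumeNext instead.
// Illustrative sketch only, not a confirmed fix: onErrorResumeNext replaces the
// failed Single with the fallback connection attempt against host2.
fun connectWithFailover(vertx: Vertx, address: String): Single<MQServerConnection> =
    connectClient(vertx, host1, getConnectionOptions(host1), address)
        .onErrorResumeNext { error ->
            logger.warn("CONNECTION_REFUSED_WITH_HOST_[$host1]", error)
            connectClient(vertx, host2, getConnectionOptions(host2), address)
        }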


Ktor-server-test-host does not clean up Exposed database instance across tests

I'm working on a web service using Ktor 1.6.8 and Exposed 0.39.2.
My application module and database are set up as follows:
fun Application.module(testing: Boolean = false) {
    val hikariConfig =
        HikariConfig().apply {
            driverClassName = "org.postgresql.Driver"
            jdbcUrl = environment.config.propertyOrNull("ktor.database.url")?.getString()
            username = environment.config.propertyOrNull("ktor.database.username")?.getString()
            password = environment.config.propertyOrNull("ktor.database.password")?.getString()
            maximumPoolSize = 10
            isAutoCommit = false
            transactionIsolation = "TRANSACTION_REPEATABLE_READ"
            validate()
        }
    val pool = HikariDataSource(hikariConfig)
    val db = Database.connect(pool, {}, DatabaseConfig { useNestedTransactions = true })
}
I use ktor-server-test-host, Testcontainers and JUnit 5 to test the service. My test looks similar to the one below:
@Testcontainers
class SampleApplicationTest {
    companion object {
        @Container
        val postgreSQLContainer = PostgreSQLContainer<Nothing>(DockerImageName.parse("postgres:13.4-alpine")).apply {
            withDatabaseName("database_test")
        }
    }

    @Test
    internal fun `should make request successfully`() {
        withTestApplication({
            (environment.config as MapApplicationConfig).apply {
                put("ktor.database.url", postgreSQLContainer.jdbcUrl)
                put("ktor.database.user", postgreSQLContainer.username)
                put("ktor.database.password", postgreSQLContainer.password)
            }
            module(testing = true)
        }) {
            handleRequest(...)
        }
    }
}
I observed an issue: if I ran multiple test classes together, some requests ended up using an old Exposed Database instance that was set up in a previous test class, causing the test case to fail because the underlying database had already been stopped.
When I ran one test class at a time, everything ran fine.
Please refer to the log below for the error stack trace:
2022-10-01 08:00:36.102 [DefaultDispatcher-worker-5 #request#103] WARN Exposed - Transaction attempt #1 failed: java.sql.SQLTransientConnectionException: HikariPool-4 - Connection is not available, request timed out after 30001ms.. Statement(s): INSERT INTO cards (...)
org.jetbrains.exposed.exceptions.ExposedSQLException: java.sql.SQLTransientConnectionException: HikariPool-4 - Connection is not available, request timed out after 30001ms.
at org.jetbrains.exposed.sql.statements.Statement.executeIn$exposed_core(Statement.kt:49)
at org.jetbrains.exposed.sql.Transaction.exec(Transaction.kt:143)
at org.jetbrains.exposed.sql.Transaction.exec(Transaction.kt:128)
at org.jetbrains.exposed.sql.statements.Statement.execute(Statement.kt:28)
at org.jetbrains.exposed.sql.QueriesKt.insert(Queries.kt:73)
at com.example.application.services.CardService$createCard$row$1.invokeSuspend(CardService.kt:53)
at org.jetbrains.exposed.sql.transactions.experimental.SuspendedKt$suspendedTransactionAsyncInternal$1.invokeSuspend(Suspended.kt:127)
at kotlin.coroutines.jvm.internal.BaseContinuationImpl.resumeWith(ContinuationImpl.kt:33)
at kotlinx.coroutines.DispatchedTask.run(DispatchedTask.kt:106)
at kotlinx.coroutines.internal.LimitedDispatcher.run(LimitedDispatcher.kt:42)
at kotlinx.coroutines.scheduling.TaskImpl.run(Tasks.kt:95)
at kotlinx.coroutines.scheduling.CoroutineScheduler.runSafely(CoroutineScheduler.kt:570)
at kotlinx.coroutines.scheduling.CoroutineScheduler$Worker.executeTask(CoroutineScheduler.kt:750)
at kotlinx.coroutines.scheduling.CoroutineScheduler$Worker.runWorker(CoroutineScheduler.kt:677)
at kotlinx.coroutines.scheduling.CoroutineScheduler$Worker.run(CoroutineScheduler.kt:664)
Caused by: java.sql.SQLTransientConnectionException: HikariPool-4 - Connection is not available, request timed out after 30001ms.
at com.zaxxer.hikari.pool.HikariPool.createTimeoutException(HikariPool.java:695)
at com.zaxxer.hikari.pool.HikariPool.getConnection(HikariPool.java:197)
at com.zaxxer.hikari.pool.HikariPool.getConnection(HikariPool.java:162)
at com.zaxxer.hikari.HikariDataSource.getConnection(HikariDataSource.java:100)
at org.jetbrains.exposed.sql.Database$Companion$connect$3.invoke(Database.kt:142)
at org.jetbrains.exposed.sql.Database$Companion$connect$3.invoke(Database.kt:139)
at org.jetbrains.exposed.sql.Database$Companion$doConnect$3.invoke(Database.kt:127)
at org.jetbrains.exposed.sql.Database$Companion$doConnect$3.invoke(Database.kt:128)
at org.jetbrains.exposed.sql.transactions.ThreadLocalTransactionManager$ThreadLocalTransaction$connectionLazy$1.invoke(ThreadLocalTransactionManager.kt:69)
at org.jetbrains.exposed.sql.transactions.ThreadLocalTransactionManager$ThreadLocalTransaction$connectionLazy$1.invoke(ThreadLocalTransactionManager.kt:68)
at kotlin.UnsafeLazyImpl.getValue(Lazy.kt:81)
at org.jetbrains.exposed.sql.transactions.ThreadLocalTransactionManager$ThreadLocalTransaction.getConnection(ThreadLocalTransactionManager.kt:75)
at org.jetbrains.exposed.sql.Transaction.getConnection(Transaction.kt)
at org.jetbrains.exposed.sql.statements.InsertStatement.prepared(InsertStatement.kt:157)
at org.jetbrains.exposed.sql.statements.Statement.executeIn$exposed_core(Statement.kt:47)
... 19 common frames omitted
Caused by: org.postgresql.util.PSQLException: Connection to localhost:49544 refused. Check that the hostname and port are correct and that the postmaster is accepting TCP/IP connections.
at org.postgresql.core.v3.ConnectionFactoryImpl.openConnectionImpl(ConnectionFactoryImpl.java:303)
at org.postgresql.core.ConnectionFactory.openConnection(ConnectionFactory.java:51)
at org.postgresql.jdbc.PgConnection.<init>(PgConnection.java:223)
at org.postgresql.Driver.makeConnection(Driver.java:465)
at org.postgresql.Driver.connect(Driver.java:264)
at com.zaxxer.hikari.util.DriverDataSource.getConnection(DriverDataSource.java:138)
at com.zaxxer.hikari.pool.PoolBase.newConnection(PoolBase.java:358)
at com.zaxxer.hikari.pool.PoolBase.newPoolEntry(PoolBase.java:206)
at com.zaxxer.hikari.pool.HikariPool.createPoolEntry(HikariPool.java:477)
at com.zaxxer.hikari.pool.HikariPool.access$100(HikariPool.java:71)
at com.zaxxer.hikari.pool.HikariPool$PoolEntryCreator.call(HikariPool.java:725)
at com.zaxxer.hikari.pool.HikariPool$PoolEntryCreator.call(HikariPool.java:711)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
at java.base/java.lang.Thread.run(Thread.java:833)
Caused by: java.net.ConnectException: Connection refused
at java.base/sun.nio.ch.Net.pollConnect(Native Method)
at java.base/sun.nio.ch.Net.pollConnectNow(Net.java:672)
at java.base/sun.nio.ch.NioSocketImpl.timedFinishConnect(NioSocketImpl.java:542)
at java.base/sun.nio.ch.NioSocketImpl.connect(NioSocketImpl.java:597)
at java.base/java.net.SocksSocketImpl.connect(SocksSocketImpl.java:327)
I tried to add some cleanup code for Exposed's TransactionManager in my application module, as follows:
fun Application.module(testing: Boolean = false) {
    // ...
    val db = Database.connect(pool, {}, DatabaseConfig { useNestedTransactions = true })
    if (testing) {
        environment.monitor.subscribe(ApplicationStopped) {
            TransactionManager.closeAndUnregister(db)
        }
    }
}
However, the issue still happened, and I also observed an additional error, as follows:
2022-10-01 08:00:36.109 [DefaultDispatcher-worker-5 #request#93] ERROR Application - Unexpected error
java.lang.RuntimeException: database org.jetbrains.exposed.sql.Database#3bf4644c don't have any transaction manager
at org.jetbrains.exposed.sql.transactions.TransactionApiKt.getTransactionManager(TransactionApi.kt:149)
at org.jetbrains.exposed.sql.transactions.experimental.SuspendedKt.closeAsync(Suspended.kt:85)
at org.jetbrains.exposed.sql.transactions.experimental.SuspendedKt.access$closeAsync(Suspended.kt:1)
at org.jetbrains.exposed.sql.transactions.experimental.SuspendedKt$suspendedTransactionAsyncInternal$1.invokeSuspend(Suspended.kt:138)
(Coroutine boundary)
at org.mpierce.ktor.newrelic.KtorNewRelicKt$runPipelineInTransaction$2.invokeSuspend(KtorNewRelic.kt:178)
at org.mpierce.ktor.newrelic.KtorNewRelicKt$setUpNewRelic$2.invokeSuspend(KtorNewRelic.kt:104)
at io.ktor.routing.Routing.executeResult(Routing.kt:154)
at io.ktor.routing.Routing$Feature$install$1.invokeSuspend(Routing.kt:107)
at io.ktor.features.ContentNegotiation$Feature$install$1.invokeSuspend(ContentNegotiation.kt:145)
at io.ktor.features.StatusPages$interceptCall$2.invokeSuspend(StatusPages.kt:102)
at io.ktor.features.StatusPages.interceptCall(StatusPages.kt:101)
at io.ktor.features.StatusPages$Feature$install$2.invokeSuspend(StatusPages.kt:142)
at io.ktor.features.CallLogging$Feature$install$2.invokeSuspend(CallLogging.kt:188)
at io.ktor.server.testing.TestApplicationEngine$callInterceptor$1.invokeSuspend(TestApplicationEngine.kt:296)
at io.ktor.server.testing.TestApplicationEngine$2.invokeSuspend(TestApplicationEngine.kt:50)
Caused by: java.lang.RuntimeException: database org.jetbrains.exposed.sql.Database#3bf4644c don't have any transaction manager
at org.jetbrains.exposed.sql.transactions.TransactionApiKt.getTransactionManager(TransactionApi.kt:149)
at org.jetbrains.exposed.sql.transactions.experimental.SuspendedKt.closeAsync(Suspended.kt:85)
at org.jetbrains.exposed.sql.transactions.experimental.SuspendedKt.access$closeAsync(Suspended.kt:1)
at org.jetbrains.exposed.sql.transactions.experimental.SuspendedKt$suspendedTransactionAsyncInternal$1.invokeSuspend(Suspended.kt:138)
at kotlin.coroutines.jvm.internal.BaseContinuationImpl.resumeWith(ContinuationImpl.kt:33)
at kotlinx.coroutines.DispatchedTask.run(DispatchedTask.kt:106)
at kotlinx.coroutines.internal.LimitedDispatcher.run(LimitedDispatcher.kt:42)
at kotlinx.coroutines.scheduling.TaskImpl.run(Tasks.kt:95)
at kotlinx.coroutines.scheduling.CoroutineScheduler.runSafely(CoroutineScheduler.kt:570)
at kotlinx.coroutines.scheduling.CoroutineScheduler$Worker.executeTask(CoroutineScheduler.kt:750)
at kotlinx.coroutines.scheduling.CoroutineScheduler$Worker.runWorker(CoroutineScheduler.kt:677)
at kotlinx.coroutines.scheduling.CoroutineScheduler$Worker.run(CoroutineScheduler.kt:664)
Could someone show me what the issue could be here with my application code and test setup?
Thanks and regards.
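One detail worth noting about the module above: the HikariDataSource is never closed when the test application stops, so each withTestApplication run leaves its connection pool behind. A minimal, purely illustrative sketch of a stop hook that unregisters the Exposed database and then closes the pool (reusing the hikariConfig, pool and db values from the module above; not a confirmed fix for the stale-instance problem) would be:
fun Application.module(testing: Boolean = false) {
    // ... HikariConfig built from environment.config exactly as in the module above ...
    val pool = HikariDataSource(hikariConfig)
    val db = Database.connect(pool, {}, DatabaseConfig { useNestedTransactions = true })

    if (testing) {
        environment.monitor.subscribe(ApplicationStopped) {
            // Unregister the Exposed database first, then close the pool so a later
            // test class cannot pick up connections to a container that was stopped.
            TransactionManager.closeAndUnregister(db)
            pool.close()
        }
    }
}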

Timeout error while connecting to SQL Server: Connection Error: Failed to connect to servername\instancename in 15000ms

The error code is ETIMEOUT. I am using SQL Server 2014 and Node.js v12.16.2, and I am running this code in Visual Studio Code.
I have created the database, and the table has also been created with some records. For the server name, I have also tried giving the FQDN, but it didn't work.
This is the code snippet:
const express = require('express');
const app = express();
var Connection = require('tedious').Connection;
var Request = require('tedious').Request;

app.get('/', function(req, res) {
    res.send('<h1>Hii</h1>');
});

const sql = require('mssql');

var config = {
    user: 'domain\\username', // backslash escaped so the literal contains domain\username
    password: 'mypwd',
    server: 'servername',
    host: 'hostname',
    port: '1433',
    driver: 'tedious',
    database: 'DBname',
    options: {
        instanceName: 'instancename'
    }
};

sql.connect(config, function(err) {
    if (err)
        console.log(err);
    let sqlRequest = new sql.Request();
    //var sqlRequest = new sql.Request(connection)
    let sqlQuery = 'SELECT * from TestTable';
    sqlRequest.query(sqlQuery, function(err, data) {
        if (err) console.log(err);
        console.log(data);
        //console.table(data.recordset);
        // console.log(data.rowAffected);
        //console.log(data.recordset[0]);
        sql.close()
    });
});

const webserver = app.listen(5000, function() {
    console.log('server is up and running....');
});
Error:
tedious deprecated The default value for `config.options.enableArithAbort` will change from `false` to `true` in the next major version of `tedious`. Set the value to `true` or `false` explicitly to silence this message.
node_modules\mssql\lib\tedious\connection-pool.js:61:23
server is up and running....
ConnectionError: Failed to connect to servername\instantname in 15000ms
at Connection.<anonymous> (..\SQL\Sample\sample\node_modules\mssql\lib\tedious\connection-pool.js:68:17)
at Object.onceWrapper (events.js:417:26)
at Connection.emit (events.js:310:20)
at Connection.connectTimeout (..\SQL\Sample\sample\node_modules\tedious\lib\connection.js:1195:10)
at Timeout._onTimeout (..\SQL\Sample\sample\node_modules\tedious\lib\connection.js:1157:12)
at listOnTimeout (internal/timers.js:549:17)
at processTimers (internal/timers.js:492:7) {
code: 'ETIMEOUT',
originalError: ConnectionError: Failed to connect to INPUNPSURWADE\DA in 15000ms
at ConnectionError (..\SQL\Sample\sample\node_modules\tedious\lib\errors.js:13:12)
at Connection.connectTimeout (..\SQL\Sample\sample\node_modules\tedious\lib\connection.js:1195:54)
at Timeout._onTimeout (..\SQL\Sample\sample\node_modules\tedious\lib\connection.js:1157:12)
at listOnTimeout (internal/timers.js:549:17)
at processTimers (internal/timers.js:492:7) {
message: 'Failed to connect to servername\instantname in 15000ms',
code: 'ETIMEOUT'
},
name: 'ConnectionError'
}
ConnectionError: Connection is closed.
at Request._query (..\SQL\Sample\sample\node_modules\mssql\lib\base\request.js:462:37)
at Request._query (..\SQL\Sample\sample\node_modules\mssql\lib\tedious\request.js:346:11)
at Request.query (..\SQL\Sample\sample\node_modules\mssql\lib\base\request.js:398:12)
at Immediate.<anonymous> (..\SQL\Sample\sample\index.js:43:12) {
at processImmediate (internal/timers.js:458:21) {
code: 'ECONNCLOSED',
name: 'ConnectionError'
}
Go to Start menu in Windows, search for "Services" and open it.
Look for "SQL Server Browser"
Right click on it and go to Properties.
Switch "Start up type" to "Automatic"
Click Ok
Right click on it again and Start the service.
It should work after that!
You might have made a typo in the host/port, or the server doesn't listen on external interfaces (it might be configured to listen on 127.0.0.1 only).
To check that the server is listening for incoming connections, run the following in a terminal:
telnet <hostname> <port>
(mac/linux only)
If it says "Unable to connect to remote host: Connection refused", it means the host is there but it doesn't listen on that port.
If it says "Unable to connect to remote host: Connection timed out", then the host might not be there, or it doesn't listen on the port.
You may also want to check if your firewall allows connection to that server.

How to handle Spark with multiple Cassandra servers with different SSL policies

One Cassandra cluster doesn't have SSL enabled and another Cassandra cluster has SSL enabled. How do I interact with both Cassandra clusters from a single Spark job? I have to copy a table from one server (without SSL) and put it into the other server (with SSL).
Spark job:-
object TwoClusterExample extends App {
  val conf = new SparkConf(true).setAppName("SparkCassandraTwoClusterExample")
  println("Starting the SparkCassandraLocalJob....")

  val sc = new SparkContext(conf)

  val connectorToClusterOne = CassandraConnector(sc.getConf.set("spark.cassandra.connection.host", "localhost"))
  val connectorToClusterTwo = CassandraConnector(sc.getConf.set("spark.cassandra.connection.host", "remote"))

  val rddFromClusterOne = {
    implicit val c = connectorToClusterOne
    sc.cassandraTable("test", "one")
  }

  {
    implicit val c = connectorToClusterTwo
    rddFromClusterOne.saveToCassandra("test", "one")
  }
}
Cassandra conf:-
spark.master spark://ip:6066
spark.executor.memory 1g
spark.cassandra.connection.host remote
spark.cassandra.auth.username iccassandra
spark.cassandra.auth.password pwd1
spark.serializer org.apache.spark.serializer.KryoSerializer
spark.eventLog.enabled true
spark.eventLog.dir /Users/test/logs/spark
spark.cassandra.connection.ssl.enabled true
spark.cassandra.connection.ssl.trustStore.password pwd2
spark.cassandra.connection.ssl.trustStore.path truststore.jks
Submitting the job:-
spark-submit --deploy-mode cluster --master spark://ip:6066 --properties-file cassandra-count.conf --class TwoClusterExample target/scala-2.10/cassandra-table-assembly-1.0.jar
Below is the error I am getting:
17/10/26 16:27:20 DEBUG STATES: [/remote:9042] preventing new connections for the next 1000 ms
17/10/26 16:27:20 DEBUG STATES: [/remote:9042] Connection[/remote:9042-1, inFlight=0, closed=true] failed, remaining = 0
17/10/26 16:27:20 DEBUG ControlConnection: [Control connection] error on /remote:9042 connection, no more host to try
com.datastax.driver.core.exceptions.TransportException: [/remote] Cannot connect
at com.datastax.driver.core.Connection$1.operationComplete(Connection.java:157)
at com.datastax.driver.core.Connection$1.operationComplete(Connection.java:140)
at io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:680)
at io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:603)
at io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:563)
at io.netty.util.concurrent.DefaultPromise.tryFailure(DefaultPromise.java:424)
at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe$1.run(AbstractNioChannel.java:222)
at io.netty.util.concurrent.PromiseTask$RunnableAdapter.call(PromiseTask.java:38)
at io.netty.util.concurrent.ScheduledFutureTask.run(ScheduledFutureTask.java:120)
at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:357)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:357)
at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111)
at java.lang.Thread.run(Thread.java:748)
Caused by: io.netty.channel.ConnectTimeoutException: connection timed out: /remote:9042
at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe$1.run(AbstractNioChannel.java:220)
... 6 more
17/10/26 16:27:20 DEBUG Cluster: Shutting down
Exception in thread "main" java.lang.reflect.InvocationTargetException
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.spark.deploy.worker.DriverWrapper$.main(DriverWrapper.scala:58)
at org.apache.spark.deploy.worker.DriverWrapper.main(DriverWrapper.scala)
Caused by: java.io.IOException: Failed to open native connection to Cassandra at {remote}:9042
at com.datastax.spark.connector.cql.CassandraConnector$.com$datastax$spark$connector$cql$CassandraConnector$$createSession(CassandraConnector.scala:162)
at com.datastax.spark.connector.cql.CassandraConnector$$anonfun$2.apply(CassandraConnector.scala:148)
at com.datastax.spark.connector.cql.CassandraConnector$$anonfun$2.apply(CassandraConnector.scala:148)
at com.datastax.spark.connector.cql.RefCountedCache.createNewValueAndKeys(RefCountedCache.scala:31)
at com.datastax.spark.connector.cql.RefCountedCache.acquire(RefCountedCache.scala:56)
at com.datastax.spark.connector.cql.CassandraConnector.openSession(CassandraConnector.scala:81)
at com.datastax.spark.connector.cql.CassandraConnector.withSessionDo(CassandraConnector.scala:109)
at com.datastax.spark.connector.cql.CassandraConnector.withClusterDo(CassandraConnector.scala:120)
at com.datastax.spark.connector.cql.Schema$.fromCassandra(Schema.scala:304)
at com.datastax.spark.connector.rdd.CassandraTableRowReaderProvider$class.tableDef(CassandraTableRowReaderProvider.scala:51)
at com.datastax.spark.connector.rdd.CassandraTableScanRDD.tableDef$lzycompute(CassandraTableScanRDD.scala:59)
at com.datastax.spark.connector.rdd.CassandraTableScanRDD.tableDef(CassandraTableScanRDD.scala:59)
at com.datastax.spark.connector.rdd.CassandraTableRowReaderProvider$class.verify(CassandraTableRowReaderProvider.scala:146)
at com.datastax.spark.connector.rdd.CassandraTableScanRDD.verify(CassandraTableScanRDD.scala:59)
at com.datastax.spark.connector.rdd.CassandraTableScanRDD.getPartitions(CassandraTableScanRDD.scala:143)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:239)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:237)
at scala.Option.getOrElse(Option.scala:120)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:237)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1929)
at org.apache.spark.rdd.RDD.count(RDD.scala:1143)
at cassandraCount$.runJob(cassandraCount.scala:27)
at cassandraCount$delayedInit$body.apply(cassandraCount.scala:22)
at scala.Function0$class.apply$mcV$sp(Function0.scala:40)
at scala.runtime.AbstractFunction0.apply$mcV$sp(AbstractFunction0.scala:12)
at scala.App$$anonfun$main$1.apply(App.scala:71)
at scala.App$$anonfun$main$1.apply(App.scala:71)
at scala.collection.immutable.List.foreach(List.scala:318)
at scala.collection.generic.TraversableForwarder$class.foreach(TraversableForwarder.scala:32)
at scala.App$class.main(App.scala:71)
at cassandraCount$.main(cassandraCount.scala:10)
at cassandraCount.main(cassandraCount.scala)
... 6 more
Caused by: com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried for query failed (tried: /remote:9042 (com.datastax.driver.core.exceptions.TransportException: [/remote] Cannot connect))
at com.datastax.driver.core.ControlConnection.reconnectInternal(ControlConnection.java:231)
at com.datastax.driver.core.ControlConnection.connect(ControlConnection.java:77)
at com.datastax.driver.core.Cluster$Manager.init(Cluster.java:1414)
at com.datastax.driver.core.Cluster.getMetadata(Cluster.java:393)
at com.datastax.spark.connector.cql.CassandraConnector$.com$datastax$spark$connector$cql$CassandraConnector$$createSession(CassandraConnector.scala:155)
... 37 more
17/10/26 16:27:23 INFO SparkContext: Invoking stop() from shutdown hook
17/10/26 16:27:23 INFO SparkUI: Stopped Spark web UI at http://10.7.10.138:4040
Working code:-
object ClusterSSLTest extends App {
  val conf = new SparkConf(true).setAppName("sparkCassandraLocalJob")
  println("Starting the ClusterSSLTest....")

  val sc = new SparkContext(conf)

  val sourceCluster = CassandraConnector(
    sc.getConf.set("spark.cassandra.connection.host", "localhost"))

  val destinationCluster = CassandraConnector(
    sc.getConf.set("spark.cassandra.connection.host", "remoteip1,remoteip2")
      .set("spark.cassandra.auth.username", "uname")
      .set("spark.cassandra.auth.password", "pwd")
      .set("spark.cassandra.connection.ssl.enabled", "true")
      .set("spark.cassandra.connection.timeout_ms", "10000")
      .set("spark.cassandra.connection.ssl.trustStore.path", "../truststore.jks")
      .set("spark.cassandra.connection.ssl.trustStore.password", "pwd")
      .set("spark.cassandra.connection.ssl.trustStore.type", "JKS")
      .set("spark.cassandra.connection.ssl.protocol", "TLS")
      .set("spark.cassandra.connection.ssl.enabledAlgorithms", "TLS_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_128_CBC_SHA")
  )

  val rddFromSourceCluster = {
    implicit val c = sourceCluster
    val tbRdd = sc.cassandraTable("analytics", "products")
    println(s"no of rows ${tbRdd.count()}")
    tbRdd
  }

  val rddToDestinationCluster = {
    implicit val c = destinationCluster // connect to the destination cluster in this code block
    rddFromSourceCluster.saveToCassandra("analytics", "products")
  }

  sc.stop()
}

Unable to configure and run pithos.io using AWS Java SDK

I am trying to configure pithos.io on my server testmbr1.kabuter.com:8081:
Here is how I start pithos.io:
java -jar pithos-0.7.5-standalone.jar -f pithos.yaml
My pithos.yaml:
service:
  host: "0.0.0.0"
  port: 8081

logging:
  level: info
  console: true
  overrides:
    io.pithos: debug

options:
  service-uri: testmbr1.kabuter.com
  default-region: myregion

keystore:
  keys:
    AKIAIOSFODNN7EXAMPLE:
      master: true
      tenant: test@example.com
      secret: 'wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY'

bucketstore:
  default-region: myregion
  cluster: "45.33.37.148"
  keyspace: storage

regions:
  myregion:
    metastore:
      cluster: "45.33.37.148"
      keyspace: storage
    storage-classes:
      standard:
        cluster: "45.33.37.148"
        keyspace: storage
        max-chunk: "128k"
        max-block-chunk: 1024

cassandra:
  saved_caches_directory: "target/db/saved_caches"
  data_file_directories:
    - "target/db/data"
  commitlog_directory: "target/db/commitlog"
I am using the AWS Java SDK to connect. Below is my JUnit test:
@Test
public void testPithosIO() {
    try {
        ClientConfiguration config = new ClientConfiguration();
        config.setSignerOverride("S3SignerType");
        EndpointConfiguration endpointConfiguration = new EndpointConfiguration("http://testmbr1.kabuter.com:8081",
                "myregion");
        BasicAWSCredentials awsCreds = new BasicAWSCredentials("AKIAIOSFODNN7EXAMPLE",
                "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY");
        AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
                .withRegion("myregion")
                .withClientConfiguration(config)
                .withCredentials(new AWSStaticCredentialsProvider(awsCreds))
                .withEndpointConfiguration(endpointConfiguration).build();
        s3Client.createBucket("mybucket1");
        System.out.println(s3Client.getRegionName());
        System.out.println(s3Client.listBuckets());
    } catch (Exception e) {
        e.printStackTrace();
    }
}
My problems are: 1) I was getting: com.amazonaws.SdkClientException: Unable to execute HTTP request: Connect to mybucket1.testmbr1.kabuter.com:8081 [mybucket1.testmbr1.kabuter.com/198.105.254.130, mybucket1.testmbr1.kabuter.com/104.239.207.44] failed: connect timed out
This was fixed by adding a mybucket1.testmbr1 CNAME record pointing to testmbr1.kabuter.com.
2) While trying to create a bucket with s3Client.createBucket("mybucket1"), I am getting:
com.amazonaws.services.s3.model.AmazonS3Exception: The request signature we calculated does not match the signature you provided. Check your key and signing method. (Service: Amazon S3; Status Code: 403; Error Code: SignatureDoesNotMatch; Request ID: d98b7908-d11e-458a-be27-254b136f344a), S3 Extended Request ID: d98b7908-d11e-458a-be27-254b136f344a
How do I get it working? pithos.io seems to have limited documentation.
Any pointers?
Since my endpoint was using a non-standard port:
http://testmbr1.kabuter.com:8081
I had to define server-uri in pithos.yaml with the port as well:
server-uri : testmbr1.kabuter.com:8081

How to get authenticated to Redis Cloud Memcached using Spymemcached?

I am trying to connect to Redis Cloud Memcached but get an error (below). I have checked that the username, password, host, and port are correct in the apps.redislabs.com interface. I am able to connect if I disable SASL and connect unauthenticated.
How can I diagnose this?
(Using spymemcached 2.11.6.)
import net.spy.memcached.auth.*;
import net.spy.memcached.*;
...
List<InetSocketAddress> addresses = Collections.singletonList(addr);

AuthDescriptor ad = new AuthDescriptor(new String[] { "CRAM-MD5", "PLAIN" },
        new PlainCallbackHandler(user, password));

MemcachedClient mc = new MemcachedClient(new ConnectionFactoryBuilder()
        .setProtocol(ConnectionFactoryBuilder.Protocol.BINARY)
        .setAuthDescriptor(ad).build(),
        AddrUtil.getAddresses(host + ":" + port));
The stacktrace:
net.spy.memcached.MemcachedConnection: Added {QA sa=pub-memcache-14154.us-central1-1-1.gce.garantiadata.com/104.197.191.74:14514, #Rops=0, #Wops=0, #iq=0, topRop=null, topWop=null, toWrite=0, interested=0} to connect queue
net.spy.memcached.protocol.binary.BinaryMemcachedNodeImpl: Discarding partially completed op: SASL auth operation
net.spy.memcached.MemcachedConnection: Reconnecting due to exception on {QA sa=pub-memcache-14154.us-central1-1-1.gce.garantiadata.com/104.197.191.74:14514, #Rops=0, #Wops=0, #iq=0, topRop=null, topWop=null, toWrite=0, interested=1}
java.io.IOException: An existing connection was forcibly closed by the remote host
at sun.nio.ch.SocketDispatcher.read0(Native Method)
at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:43)
at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223)
at sun.nio.ch.IOUtil.read(IOUtil.java:192)
at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:380)
at net.spy.memcached.MemcachedConnection.handleReads(MemcachedConnection.java:820)
at net.spy.memcached.MemcachedConnection.handleReadsAndWrites(MemcachedConnection.java:720)
at net.spy.memcached.MemcachedConnection.handleIO(MemcachedConnection.java:683)
at net.spy.memcached.MemcachedConnection.handleIO(MemcachedConnection.java:436)
at net.spy.memcached.MemcachedConnection.run(MemcachedConnection.java:1446)
net.spy.memcached.MemcachedConnection: Closing, and reopening {QA sa=pub-memcache-14154.us-central1-1-1.gce.garantiadata.com/104.197.191.74:14514, #Rops=0, #Wops=0, #iq=0, topRop=null, topWop=null, toWrite=0, interested=1}, attempt 0.
net.spy.memcached.MemcachedConnection: Could not redistribute to another node, retrying primary node for foo.
Lose the "CRAM-MD5" in your AuthDescriptior declaration.
The following works in my tests (user, pass, and url removed):
AuthDescriptor ad = new AuthDescriptor(new String[] { "PLAIN" }, new PlainCallbackHandler(user, pass));

MemcachedClient mc = null;
try {
    mc = new MemcachedClient(
            new ConnectionFactoryBuilder()
                    .setProtocol(ConnectionFactoryBuilder.Protocol.BINARY)
                    .setAuthDescriptor(ad).build(),
            AddrUtil.getAddresses(host + ":" + port));
} catch (IOException e) {
    // handle exception
}

mc.set("foo", 0, "bar");
String value = (String) mc.get("foo");
System.out.println(value);