I am using the Karate framework for API testing in our organization. I can execute my project locally, where the DB connections succeed, but when I execute it on our cloud Jenkins I get the error below:
Error: Failed to obtain JDBC Connection; nested exception is java.sql.SQLRecoverableException: IO Error: The Network Adapter could not establish the connection
DB class used: https://github.com/intuit/karate/blob/master/karate-demo/src/main/java/com/intuit/karate/demo/util/DbUtils.java
Is there any option to set a proxy for the DB only? I have also gone through the proxy setup in karate-config.js, e.g. karate.configure('proxy', { uri: 'http://my.proxy.host:8080', username: 'john', password: 'secret', nonProxyHosts: ['http://example.com'] }). This sets the proxy for my API calls, not for the DB instance.
I am also trying to check whether my Jenkins server's firewall is blocking the connection to my DB.
Any help from the Karate framework creators or implementers?
whether my jenkins server firewall is blocking
That is most likely the case; there is nothing Karate (or anyone associated with it) can do here to help.
Also please read this: https://stackoverflow.com/a/52078427/143475
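To confirm whether the Jenkins host can reach the database at the network level, independent of JDBC and Karate, a plain TCP probe can be run on that node. This is a minimal sketch; the host name and port below are placeholders to substitute with your actual DB listener (e.g. 1521 for Oracle):

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

public class DbReachability {

    // Returns true if a TCP connection to host:port succeeds within timeoutMs.
    static boolean isReachable(String host, int port, int timeoutMs) {
        try (Socket socket = new Socket()) {
            socket.connect(new InetSocketAddress(host, port), timeoutMs);
            return true;
        } catch (IOException e) {
            // DNS failure, refused connection, or timeout all land here.
            return false;
        }
    }

    public static void main(String[] args) {
        // Placeholder host/port: substitute your DB listener address.
        System.out.println(isReachable("db.example.internal", 1521, 5000));
    }
}
```

If this prints false on the Jenkins node but true from your local machine, the firewall (or routing) between Jenkins and the DB is blocking, and no Karate or JDBC setting will fix it.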
Need some help. I am trying to import data from BigQuery using the spark-bigquery-connector: Spark 2.4.0, Scala 2.11.12, Hadoop 2.7, spark-bigquery-with-dependencies_2.11-0.24.2.
The corporate firewall blocks access to external services. Which URLs need to be allowed through for the spark-bigquery-connector to work?
I get this error:
Exception in thread "main" com.google.cloud.spark.bigquery.repackaged.com.google.cloud.bigquery.BigQueryException: Error getting access token for service account: Connection refused: connect, iss:
I have tried the following:
Increased the connection timeout through code:
RestAssuredConfig newConfig = RestAssured.config()
        .httpClient(HttpClientConfig.httpClientConfig()
                .setParam(CoreConnectionPNames.CONNECTION_TIMEOUT, 12000)
                .setParam(CoreConnectionPNames.SO_TIMEOUT, 12000));
Added a User-Agent in the request header
Checked the same API call manually from the local system and from the remote agent - both give the same result.
But I am getting java.net.SocketTimeoutException: connect timed out from the TeamCity agent execution, while the same call works from my local system.
Please help
The problem is that TeamCity has a timeout of its own, which is not affected by your code change. You can change it by setting an internal property under Administration > Diagnostics > Internal Properties.
Here is the property to add:
teamcity.agentServer.connectionTimeout=12000
Here is the documentation for TeamCity startup properties.
I also read through this YouTrack ticket with a similar issue.
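As an aside, the two parameters set in the question map to two distinct phases of an HTTP call, and the distinction matters for reading the exception. A minimal sketch of the same idea in plain JDK terms (the method name is illustrative):

```java
import java.net.HttpURLConnection;
import java.net.URL;

public class TimeoutDemo {

    // Fetches the HTTP status code, failing fast if the TCP handshake
    // (connect) or the server's response (read) exceeds the given limits.
    static int fetchStatus(String url, int connectMs, int readMs) throws Exception {
        HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();
        conn.setConnectTimeout(connectMs); // like CONNECTION_TIMEOUT: establishing TCP
        conn.setReadTimeout(readMs);       // like SO_TIMEOUT: waiting for response data
        try {
            return conn.getResponseCode();
        } finally {
            conn.disconnect();
        }
    }
}
```

"java.net.SocketTimeoutException: connect timed out" means the first phase failed: the TCP connection itself never completed. That points at the network path or firewall between the agent and the API, not at a slow server response.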
I am using Google Cloud PostgreSQL with my Django REST API, developed locally. To be able to connect to the database, you are required to enter the IP address you want to connect from. My team and I are using dynamic IP addresses, so we have to change the IP address in the cloud interface every time in order to connect. Is there any other way? I wanted to try the SSL route but it seems too complicated. Any thoughts?
Thanks
Edit:
I am trying to use SSL, and this is what I added to my settings.py, but I am getting an error:
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql_psycopg2',
        'HOST': '00.000.00.000',
        'NAME': 'dbname',
        'USER': 'username',
        'PASSWORD': 'mypassword',
        'OPTIONS': {
            'sslmode': 'require',
            'ssl': {
                'ca': 'certs/server-ca.pem',
                'cert': 'certs/client-cert.pem',
                'key': 'certs/client-key.pem'
            }
        },
    }
}
The SSL files are located in a folder called certs, which is in the same directory as the settings.py file.
This is the error I get when running the server:
django.db.utils.ProgrammingError: invalid dsn: invalid connection option "ssl"
Try using the Cloud SQL proxy. This enables an authenticated connection without you having to worry about the IP or authentication aspect.
The Cloud SQL proxy provides a dedicated, authenticated connection to your Cloud SQL instance. Once it is set up correctly, simply point your application at the proxy and all traffic will be sent to the back-end Cloud SQL instance.
For bonus points, the proxy can be configured dynamically using the VM metadata to set up the environment. I have used it like this previously with Terraform, where it pointed to a specific Cloud SQL instance, and it saves a lot of effort.
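If SSL is still preferred over the proxy, note that the nested 'ssl' dict in the question is MySQL-backend syntax, which is why libpq rejects it as an "invalid connection option". psycopg2 passes OPTIONS straight through as flat libpq keywords. A sketch of the corrected block, reusing the paths from the question (the choice of 'verify-ca' as the mode is an assumption; pick the mode that fits your setup):

```python
# settings.py -- libpq expects flat SSL options, not a nested 'ssl' dict
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql_psycopg2',
        'HOST': '00.000.00.000',
        'NAME': 'dbname',
        'USER': 'username',
        'PASSWORD': 'mypassword',
        'OPTIONS': {
            'sslmode': 'verify-ca',
            'sslrootcert': 'certs/server-ca.pem',
            'sslcert': 'certs/client-cert.pem',
            'sslkey': 'certs/client-key.pem',
        },
    }
}
```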
I have the following configuration: a remote Gremlin Server (TinkerPop 3.2.6) with JanusGraph DB.
I have the gremlin-console (with the Janus plugin) plus this conf in remote.yaml:
hosts: [10.1.3.2] # IP of gremlin-server host
port: 8182
serializer: { className: org.apache.tinkerpop.gremlin.driver.ser.GryoMessageSerializerV1d0, config: { serializeResultToString: true }}
So I want to make the connection through the gremlin-server (not to JanusGraph directly via graph = JanusGraphFactory.build().set("storage.backend", "cassandra").set("storage.hostname", "127.0.0.1").open();) and get a graph which supports transactions.
Is it possible? Because as far as I can see, all TinkerFactory graphs do not support transactions.
As I understand it, to use JanusGraph through the Gremlin Server you should:
Define the IP and port in the config file of the gremlin-console:
conf/remote.yaml
Connect from the gremlin-console to the Gremlin Server:
:remote connect tinkerpop.server conf/remote.yaml
==>Configured localhost/10.1.23.113:8182
...and work in remote mode (using :> or :remote console), i.e. send ALL commands (or #script) to the Gremlin Server.
:> graph.addVertex(...)
or
:remote console
==>All scripts will now be sent to Gremlin Server - [10.1.2.222/10.1.2.222:818]
graph.addVertex(...)
You don't need to define variables for the graph and the traversal; rather, use
graph. - for the graph
g. - for the traversal
In this case, you can use all the graph features that JanusGraph provides.
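If explicit transaction control over the remote connection is needed, Gremlin Server also supports sessioned connections, where variables and the open transaction persist across requests until explicitly committed or rolled back. A sketch of such a console session (the trailing session keyword is the stock TinkerPop console syntax):

```
gremlin> :remote connect tinkerpop.server conf/remote.yaml session
gremlin> :remote console
gremlin> v1 = graph.addVertex("name", "alice")
gremlin> graph.tx().commit()
```

Without a session, each request is treated as its own transaction and is committed automatically when it succeeds.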
TinkerPop provides a Cluster object to hold the connection config. Using the Cluster object, a GraphTraversalSource object can be spawned:
this.cluster = Cluster.build()
        .addContactPoints("192.168.0.2", "192.168.0.1")
        .port(8082)
        .credentials(username, password)
        .serializer(new GryoMessageSerializerV1d0(
                GryoMapper.build().addRegistry(JanusGraphIoRegistry.getInstance())))
        .maxConnectionPoolSize(8)
        .maxContentLength(10000000)
        .create();

this.gts = AnonymousTraversalSource
        .traversal()
        .withRemote(DriverRemoteConnection.using(cluster));
The gts object is thread-safe. With withRemote(), each query is executed in a separate transaction. Ideally gts should be a singleton.
Make sure to call gts.close() and cluster.close() on application shutdown, otherwise it may lead to a connection leak.
I believe that connecting a Java application to a running Gremlin Server using withRemote() will not support transactions. I have had trouble finding information on this as well, but as far as I can tell, if you want to do anything but read the graph, you need to use embedded JanusGraph and have your remotely hosted persistent data stored in a storage backend that you connect to from your application, as you describe in the second half of your question.
https://groups.google.com/forum/#!topic/janusgraph-users/t7gNBeWC844
Some discussion I found around it here ^^ mentions auto-committing single transactions in remote mode, but it doesn't seem to do that when I try.
We are using WebSphere Application Server 8.5.0.0. We have a requirement to query an LDAP server to get customer details. I tried to configure the connection pool as described here and here.
I passed the JVM arguments below:
-Dcom.sun.jndi.ldap.connect.pool.maxsize=5
-Dcom.sun.jndi.ldap.connect.pool.timeout=60000
-Dcom.sun.jndi.ldap.connect.pool.debug=all
Below is a sample code snippet
Hashtable<String,String> env = new Hashtable<String,String>();
...
...
env.put("com.sun.jndi.ldap.connect.pool", "true");
env.put("com.sun.jndi.ldap.connect.timeout", "5000");
InitialDirContext c = new InitialDirContext(env);
...
...
c.close();
I have two issues here
When I call the service for the 6th time, I get javax.naming.ConnectionException: Timeout exceeded while waiting for a connection: 5000ms. In the connection pool debug logs I noticed that the connections do not return to the pool immediately, despite the context being closed safely in a finally block. The connections are released after some time, and expire some time after the release. If I call the service again after that, it connects to the LDAP server, but new connections are created.
When I execute the code I can see the connection pool debug logs, but they are written to the SystemErr log. Is this an issue? Can I ignore it?
However, when I run the code as a standalone application (multithreaded, with a loop of 50 iterations), the connections are returned/released immediately.
Can anyone please let me know what I am doing wrong?
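One thing worth checking in a situation like this: the JDK's JNDI/LDAP pool only pools connections whose protocol and authentication types appear in the configured lists, and reuse requires an exact match on the connection's identity (protocol, host, port, authentication type and principal). These are the standard com.sun.jndi.ldap.connect.pool.* system properties; the values below are illustrative, and per the JDK docs the defaults are protocol "plain" and authentication "none simple":

```
-Dcom.sun.jndi.ldap.connect.pool.initsize=1
-Dcom.sun.jndi.ldap.connect.pool.prefsize=5
-Dcom.sun.jndi.ldap.connect.pool.protocol="plain ssl"
-Dcom.sun.jndi.ldap.connect.pool.authentication="none simple"
```

If the WebSphere deployment connects over ldaps:// (or varies the environment per request) while the standalone test does not, that difference alone can explain why the server keeps creating new connections instead of reusing pooled ones.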