I am using the Karate framework for API testing in our organization. I can run my project locally, where DB connections succeed, but when I run it on our cloud Jenkins I get the error below:
Error : Failed to obtain JDBC Connection; nested exception is java.sql.SQLRecoverableException: IO Error: The Network Adapter could not establish the connection
DB class used: https://github.com/intuit/karate/blob/master/karate-demo/src/main/java/com/intuit/karate/demo/util/DbUtils.java
Is there any option to set a proxy for the DB only? I have also gone through the proxy setup in karate-config.js, i.e. karate.configure('proxy', { uri: 'http://my.proxy.host:8080', username: 'john', password: 'secret', nonProxyHosts: ['http://example.com'] }), but this sets up the proxy for my API calls, not for the DB instance.
I am also trying to check whether my Jenkins server's firewall is blocking the connection to my DB.
Any help from the Karate framework creators or implementers?
"whether my Jenkins server's firewall is blocking"
That is most likely the case; there is nothing Karate (or anyone associated with it) can do here to help.
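One way to confirm that, independent of Karate and JDBC, is a plain TCP socket test run from the Jenkins agent. A minimal sketch, where the host and port are placeholders for your DB listener's actual coordinates:

import java.net.InetSocketAddress;
import java.net.Socket;

public class DbReachability {
    public static void main(String[] args) throws Exception {
        try (Socket s = new Socket()) {
            // 5-second timeout: a ConnectException or SocketTimeoutException here
            // points at the network/firewall, not at Karate or the JDBC driver
            s.connect(new InetSocketAddress("db.host.example", 1521), 5000);
            System.out.println("TCP connection to the DB listener succeeded");
        }
    }
}

If this fails from the Jenkins node but succeeds from your local machine, the fix belongs in the network configuration, not in the test framework.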
Also please read this: https://stackoverflow.com/a/52078427/143475
I am trying to use the Geode Redis Adapter as my server for rate limiting with Spring Cloud Gateway. If I use a real Redis server, everything works perfectly, but with the Geode Redis Adapter it doesn't.
I am not too sure if this functionality is supported.
I started a Geode image (https://hub.docker.com/r/apachegeode/geode/) exposing the default Redis port, 6379. After starting the container, I executed the following command using gfsh:
start server --name=redis --redis-port=6379 --J=-Dgemfireredis.regiontype=PARTITION_PERSISTENT
When I connect from my local machine with redis-cli -h localhost -p 6379, I can get connected.
My implementation is simple:
application.yaml
- id: rate-limitter
  predicates:
  - Path=${GUI_CONTEXT_PATH:/rate-limit}
  - Host=${APP_HOST:localhost:8080}
  filters:
  - name: RequestRateLimiter
    args:
      key-resolver: "#{@remoteAddrKeyResolve}"
      redis-rate-limiter:
        replenishRate: ${rate.limit.replenishRate:1}
        burstCapacity: ${rate.limit.burstCapacity:2}
  uri: ${APP_HOST:localhost:8080}
Application.java
@Bean
KeyResolver remoteAddrKeyResolve() {
    // key each request by the caller's remote IP address, matching the bean name
    return exchange -> Mono.just(
            exchange.getRequest().getRemoteAddress().getAddress().getHostAddress());
}
When my application starts and I access /rate-limit, I expect it to connect to Redis and display my page.
However, my Spring application keeps trying to reconnect, repeatedly logging i.l.c.p.ReconnectionHandler: Reconnected to localhost:6379, so the page is never displayed and keeps loading. FIXED in Edit 1 below.
The problem now is that, using RedisRateLimiter and simulating access with a for loop, the value of RedisRateLimiter.REMAINING_HEADER is always -1. That doesn't seem right, because I don't have this issue with Redis itself.
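For reference, the check loop looks roughly like this (a sketch; the URL is a placeholder, and X-RateLimit-Remaining is the header name that RedisRateLimiter.REMAINING_HEADER resolves to):

import java.net.HttpURLConnection;
import java.net.URL;

public class RateLimitProbe {
    public static void main(String[] args) throws Exception {
        for (int i = 0; i < 10; i++) {
            HttpURLConnection con = (HttpURLConnection)
                    new URL("http://localhost:8080/rate-limit").openConnection();
            // with replenishRate 1 and burstCapacity 2, some of these requests
            // should be rejected with 429 and a decreasing remaining count
            System.out.println(con.getResponseCode()
                    + " remaining=" + con.getHeaderField("X-RateLimit-Remaining"));
            con.disconnect();
        }
    }
}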
During the start of the application, I also receive these messages on connection to Geode Redis Adapter:
Starting without optional epoll library
Starting without optional kqueue library
Is anything missing in my Geode Redis Adapter setup, or anything else in Spring?
Thank you
Edit 1: I had not started the locator or created the region; that's why I wasn't able to connect.
start locator --name=locator
start server --name=redis --redis-port=6379 --J=-Dgemfireredis.regiontype=PARTITION_PERSISTENT
create region --name=redis-region --type=REPLICATE_PERSISTENT
I have set up Spark SQL on JupyterHub using the Apache Toree SQL kernel. I wrote a Python function that updates Spark configuration options in the kernel.json file, so my team can change the configuration based on their queries and cluster setup. But after running the Python function I have to shut down the running notebook and re-open it, or restart the kernel, to force the Toree kernel to re-read the JSON file and pick up the new configuration.
I thought of implementing this shutdown and restart of the kernel programmatically. I found the JupyterHub REST API documentation and was able to implement it by invoking the related APIs, roughly as sketched below. The problem is that the single-user server's API port is set randomly by JupyterHub's Spawner object and changes every time I spin up a cluster. I want it to be fixed before the JupyterHub service is launched.
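The restart itself looks roughly like this, a minimal sketch against the JupyterHub REST API; the hub URL, token, and username are placeholders:

import requests

# placeholders: point these at your hub and an admin API token
HUB_API = "http://127.0.0.1:8081/hub/api"
TOKEN = "replace-with-admin-token"
USER = "some-user"

headers = {"Authorization": f"token {TOKEN}"}

# stop the user's single-user server, then start it again; the freshly
# launched kernel re-reads kernel.json and picks up the new configuration
requests.delete(f"{HUB_API}/users/{USER}/server", headers=headers)
requests.post(f"{HUB_API}/users/{USER}/server", headers=headers)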
Here is a solution I tried based on the JupyterHub docs:
echo "c.Spawner.port = 35289
c.Spawner.ip = '127.0.0.1'" | sudo tee -a /etc/jupyterhub/jupyterhub_config.py
But this did not work: the port was again set randomly by the Spawner. I think there is a way to fix this. Any help would be greatly appreciated. Thanks!
I am using the gremlin-console (v3.2.7) bundled with DataStax Enterprise. On start, it automatically connects to a remote Gremlin Server. Next, I create an alias to access the right graph: :remote config alias g graph.g. Then I connect to Gephi (v0.9.2): :remote connect tinkerpop.gephi. From this moment on, however, I cannot traverse graph g, so :> g consequently fails with java.lang.StackOverflowError as well. These are the two connections:
gremlin> :remote list
==>0 - Gremlin Server - [localhost/127.0.0.1:8182]-[<uuid>]
==>*1 - Gephi - [workspace1]
My question is whether there is a way to stream data from one remote connection to another using the setup outlined above (DataStax -> Gephi), and if so, how? If not, is there a workaround?
Note: all connections are successful, and local Gephi streaming tested with TinkerFactory.createModern() works flawlessly.
The Gephi plugin requires a local Graph instance. When you connect the Gremlin Console using :remote, you no longer have that (i.e. the Graph instance is on a server somewhere, and via :> you're sending the request to the server to be processed over there).
For DSE Graph, Neptune, CosmosDB and similar graphs that only offer a remote Graph instance, the only way to make the Gephi plugin work is to take a subgraph and bring it down to your Gremlin Console. Then, as you've found, TinkerGraph (i.e. the holder of the subgraph) works just fine with the Gephi plugin.
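A rough sketch of that workflow, assuming the server can serialize the subgraph back to the console (the traversal and edge label are illustrative; result is the console variable that holds the last remote response):

gremlin> :> g.E().hasLabel('created').subgraph('sg').cap('sg').next()
gremlin> sg = result.get(0).getObject()
gremlin> :remote connect tinkerpop.gephi
gremlin> :> sg

The first line materializes the subgraph on the server and returns it; the second pulls the TinkerGraph out of the remote result, which is then a local Graph instance the Gephi plugin can visualize.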
I have a problem when using the Erlang testing framework riak_test to simulate connections among remote nodes.
Within a test it is possible to connect remote nodes to the local nodes deployed by rt:deploy_nodes, but it is impossible to call functions of the rt module, in particular to add interceptors on the remote nodes, without an error.
Is there some solution or method to also intercept remote nodes using the Riak testing module?
I need to use interceptors on remote nodes to retrieve some information about Riak node states.
More specifically: 'riak@10.X.X.X' is my remote referenced node. In the test it is possible to connect this node to the local devX@127.0.0.1 nodes deployed by the test, but when my test program calls rt_intercept:add('riak@10.X.X.X', {}) I get this error:
{{badmatch,
  {badrpc,
   {'EXIT',
    {undef,
     [{intercept,add,
       [riak_kv_get_fsm,riak_kv_get_fsm_intercepts,
        [{{waiting_vnode_r,2},waiting_vnode_r_tracing},
         {{client_info,3},client_info_tracing},
         {{execute,2},execute_preflist}]],
       []},
      {rpc,'-handle_call_call/6-fun-0-',5,
       [{file,"rpc.erl"},{line,203}]}]}}}},
 [{rt_intercept,add,2,[{file,"src/rt_intercept.erl"},{line,57}]},
  {remoteRiak,'-confirm/0-lc$^2/1-2-',1,
   [{file,"tests/remoteRiak.erl"},{line,49}]},
  {remoteRiak,'-confirm/0-lc$^2/1-2-',1,
   [{file,"tests/remoteRiak.erl"},{line,49}]},
  {remoteRiak,confirm,0,[{file,"tests/remoteRiak.erl"},{line,49}]}]}
The rt_intercept:add function uses rpc:call to run the intercept:add function in the target node's VM. This means the target node must either have the intercept module loaded or have it on its code path. You can add a path using add_paths in the config for the target node.
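A minimal sketch of that config entry, assuming the usual ~/.riak_test.config layout; the config name and path are placeholders for wherever riak_test's compiled intercept modules (intercept.beam and the *_intercepts modules) live:

%% in ~/.riak_test.config, under the config you run the test with
{my_remote_config, [
    {rt_harness, rtdev},
    %% make the intercept modules loadable on the target node's code path
    {add_paths, ["/path/to/riak_test/ebin"]}
]}.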