Increase idle timeout for Gremlin Console with AWS Neptune - gremlin-server

I am on Ubuntu 18 and keep the Gremlin Console open. After I run :remote console and some queries, if I stay idle for about 3 minutes the connection gets dropped, and I have to exit the current connection and reconnect, which is quite annoying.
Is there a way to increase the idle timeout?
Basically I need to type these commands again and again:
bin/gremlin.sh
:remote connect tinkerpop.server conf/neptune-remote.yaml
:remote console

It looks like what you are looking for is the keepAliveInterval in Settings.
https://github.com/apache/tinkerpop/blob/6083dc4fcb214df64be72483f8779d81e73c0fac/gremlin-driver/src/main/java/org/apache/tinkerpop/gremlin/driver/Settings.java#L319-L324
This key is wired inside connectionPool:
https://github.com/apache/tinkerpop/blob/6083dc4fcb214df64be72483f8779d81e73c0fac/gremlin-driver/src/main/java/org/apache/tinkerpop/gremlin/driver/Settings.java#L236-L237
So, as @ashoksl pointed out in a comment already, you need to include this setting inside your remote YAML conf under connectionPool.
Something like:
hosts: [localhost]
port: 8182
connectionPool: {
  enableSsl: true,
  ..
  ..
  keepAliveInterval: 30000
}
You might also want to look at this logic of server-client ping-pong for additional knobs.
https://github.com/apache/tinkerpop/blob/ad8d663ffd957df3724c7aa8fe8bb4f893d76557/gremlin-server/src/main/java/org/apache/tinkerpop/gremlin/server/handler/OpSelectorHandler.java#L101-L110
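For what it's worth, a self-hosted Gremlin Server exposes matching server-side knobs in gremlin-server.yaml. This is a sketch on my part (setting names from the server's Settings class; not applicable to managed Neptune, which does not expose server config):
idleConnectionTimeout: 0    # 0 disables server-initiated closing of idle channels
keepAliveInterval: 30000    # how often the server pings otherwise-idle clients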
Hope this helps.

Related

Cannot run Redis commands using StackExchange.Redis

Hello, I am trying to connect to a Redis database from an ASP.NET Core 3.1 application, and I keep getting this error when I issue a command.
> 'No connection is active/available to service this operation: SET a; A
> blocking operation was interrupted by a call to
> WSACancelBlockingCall., mc: 1/1/0, mgr: 10 of 10 available,
> clientName: [ClientName], IOCP: (Busy=2,Free=998,Min=8,Max=1000),
> WORKER:
I think it has something to do with the StackExchange.Redis library, since it worked up until it randomly stopped working. I have updated to the latest version, restarted the PC, and so on, and nothing helped.
I can connect to my local Redis and issue commands with both redis-cli and telnet 127.0.0.1 6379, which is why I think the culprit is the library.
ConnectionString
localhost:6379,ssl=True,allowAdmin=True,abortConnect=False,defaultDatabase=0
How I use it:
var con = ConnectionMultiplexer.Connect(connectionString); // passes
con.GetDatabase().StringSet("a", "a"); // throws
If you are just using it for localhost development purposes, you can try disabling SSL: localhost:6379,ssl=false,allowAdmin=True,abortConnect=False,defaultDatabase=0
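A minimal sketch of the same fix expressed through ConfigurationOptions, so the SSL flag is explicit in code rather than buried in the string (values taken from the question's connection string):
var options = ConfigurationOptions.Parse("localhost:6379,allowAdmin=True,abortConnect=False,defaultDatabase=0");
options.Ssl = false; // a local redis-server does not speak TLS by default
var con = ConnectionMultiplexer.Connect(options);
con.GetDatabase().StringSet("a", "a"); // should now succeed against plain-TCP Redis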

Modify Hikari properties at runtime

Where can I find information about Hikari properties that can be modified at runtime?
I tried to modify connectionTimeout. I can set it on the HikariDataSource without an exception (verified by setting and then getting the property), but it has no effect.
If I initially do:
HikariConfig config = new HikariConfig();
config.setConnectionTimeout(12000);
HikariDataSource pool = new HikariDataSource(config);
and later on I do
config.setConnectionTimeout(5000);
Hikari tries to get a new connection for 12 seconds instead of 5 seconds.
Or is there a way to change the value so that it takes effect?
Are there other properties with the same behaviour?
You can do this through the MXBean, but you don't need to use JMX:
public void updateTimeout(final long connectionTimeoutMs, final HikariDataSource ds) {
    var poolBean = ds.getHikariPoolMXBean();
    var configBean = ds.getHikariConfigMXBean();
    poolBean.suspendPool(); // Block new connections from being leased
    configBean.setConnectionTimeout(connectionTimeoutMs);
    poolBean.softEvictConnections(); // Close idle connections and mark open ones for disposal
    poolBean.resumePool(); // Re-enable leasing
}
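For example, called against the pool from the question:
updateTimeout(5000L, pool); // subsequent getConnection() calls now wait up to 5s instead of 12s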
Bear in mind you will need to enable pool suspension in your initial config:
var config = new HikariConfig();
...
config.setAllowPoolSuspension(true);
You can't dynamically update the property values by resetting them on the config object - the config object is ultimately read once when the Hikari pool is instantiated (have a look at the source code in PoolBase.java to see how this works).
You can, however, do what you want and update the connection timeout value at runtime via JMX. How to do this is explained in the HikariCP documentation.
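A sketch of the JMX route from code, assuming MBean registration was enabled (config.setRegisterMbeans(true)) and a pool named "myPool"; the service URL and pool name are placeholders:
import javax.management.JMX;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;
import com.zaxxer.hikari.HikariConfigMXBean;

public class HikariJmxUpdate {
    public static void main(String[] args) throws Exception {
        // Connect to the JVM's JMX endpoint (e.g. through an SSH tunnel)
        JMXServiceURL url = new JMXServiceURL("service:jmx:rmi:///jndi/rmi://localhost:9000/jmxrmi");
        try (JMXConnector connector = JMXConnectorFactory.connect(url)) {
            // HikariCP registers its config MXBean under "type=PoolConfig (<pool name>)"
            ObjectName name = new ObjectName("com.zaxxer.hikari:type=PoolConfig (myPool)");
            HikariConfigMXBean config = JMX.newMXBeanProxy(
                    connector.getMBeanServerConnection(), name, HikariConfigMXBean.class);
            config.setConnectionTimeout(5000); // applies to subsequent lease attempts
        }
    }
}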
If your JVM has JMX enabled (which I recommend for every prod environment), you could:
SSH-tunnel JMX port to your local machine
Connect to the VM in a JMX client like JConsole
Operate pool MBean as needed
Note: the JMX port must never be exposed to the public internet; make sure a firewall protects it.
SSH Tunnel command example:
ssh -i ${key_path} -N -L 9000:localhost:9000 -L 9001:localhost:9001 ${user}@${address}

Openshift online v3 - Timeout when reading response headers from daemon process

I created a Python API on OpenShift Online with the Python image. If you request all the data, it takes more than 30 seconds to respond, and the server gives a 504 Gateway Timeout HTTP response. How do you configure how long a response may take? I created an annotation on the route, which seems to set the proxy timeout.
haproxy.router.openshift.io/timeout: 600s
The problem remains, but now I have logging. It looks like the message comes from mod_wsgi.
I want to try altering the configuration of httpd (the mod_wsgi-express process) from request-timeout 60 to request-timeout 600. Where do you configure this? I am using the base image https://github.com/sclorg/s2i-python-container/tree/master/2.7
Logging:
Timeout when reading response headers from daemon process 'localhost:8080':/tmp/mod_wsgi-localhost:8080:1000430000/htdocs
Does someone know how to fix this error on OpenShift Online?
In addition to altering the HAProxy timeout on the route of my app:
haproxy.router.openshift.io/timeout: 600s
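For reference, the annotation can be applied with the oc CLI (the route name here is a placeholder):
oc annotate route myapp --overwrite haproxy.router.openshift.io/timeout=600s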
I altered the request-timeout and socket-timeout in the app.sh of my Python application, so the mod_wsgi-express server is configured with a higher timeout:
ARGS="$ARGS --request-timeout 600"
ARGS="$ARGS --socket-timeout 600"
My application now waits 10 minutes before cancelling a request.

logstash and centralised redis problems

I'm trying to get logstash working in a centralised setup using the docs as an example:
http://logstash.net/docs/1.2.2/tutorials/getting-started-centralized
I've got logstash (as indexer), redis, elasticsearch and standalone kibana3 running on my web server. I then need to run logstash as an agent on another server to collect apache logs and send them to the web server via redis. The number of agents will increase and the logs will vary, but for now I just want to get this working!
I need everything to run as a service so that all is well after reboots etc. All servers are running Ubuntu.
For all logstash instances (indexer and agent), I'm using the following init script (Ubuntu version, second gist):
https://gist.github.com/shadabahmed/5486949#file-logstash-ubuntu
For running redis as a service, I followed the instructions here:
http://redis.io/topics/quickstart (Installing redis more properly)
Elasticsearch is also running as a service.
On the web server, running redis-cli returns PONG correctly. Navigating to the correct Elasticsearch URL returns the correct JSON response. Navigating to the Kibana3 url gives me the dashboard, but no data. UFW is set to allow the redis port (at the moment from everywhere).
On the web server, my logstash.conf is:
input {
  file {
    path => "/var/log/apache2/access.log"
    type => "apache-access"
    sincedb_path => "/etc/logstash/.sincedb"
  }
  redis {
    host => "127.0.0.1"
    data_type => "list"
    key => "logstash"
    codec => json
  }
}
filter {
  grok {
    type => "apache-access"
    pattern => "%{COMBINEDAPACHELOG}"
  }
}
output {
  elasticsearch {
    embedded => true
  }
  statsd {
    # Count one hit every event by response
    increment => "apache.response.%{response}"
  }
}
From the agent server, I can telnet successfully to the web server IP and redis port. logstash is running. The logstash.conf file is:
input {
  file {
    path => "/var/log/apache2/shift.access.log"
    type => "apache"
    sincedb_path => "/etc/logstash/since_db"
  }
  stdin {
    type => "example"
  }
}
filter {
  if [type] == "apache" {
    grok {
      pattern => "%{COMBINEDAPACHELOG}"
    }
  }
}
output {
  stdout { codec => rubydebug }
  redis { host => ["xx.xx.xx.xx"] data_type => "list" key => "logstash" }
}
If I comment out the stdin and stdout lines, I still don't get a result. The logstash logs do not give me any connection errors - only warnings about the deprecated grok settings format.
I have also tried running logstash from the command line (making sure to stop the daemonised service first). The apache log file is correctly output in the terminal, so I know that logstash is accessing the log correctly. And I can write random strings and they are output in the correct logstash format.
The redis logs on the web server show no sign of trouble......
The frustrating thing is that this has worked once. One message from stdin made it all the way through to elastic search. That was this morning just after getting everything setup. Since then, I have had no luck and I have no idea why!
Any tips/pointers gratefully received... Solving my problem will stop me tearing out more of my hair which will also make my wife happy......
UPDATE
Rather than filling the comments....
Thanks to @Vor and @rutter, I've confirmed that the user running logstash can read/write to the logstash.log file.
I've run the agent with -vv and the logs are populated with e.g.:
{:timestamp=>"2013-12-12T06:27:59.754000+0100", :message=>"config LogStash::Outputs::Redis/#host = [\"XX.XX.XX.XX\"]", :level=>:debug, :file=>"/opt/logstash/logstash.jar!/logstash/config/mixin.rb", :line=>"104"}
I then input random text into the terminal and get stdout results. However, I do not see anything in the logs until AFTER terminating the logstash agent. After the agent is terminated, I get lines like these in the logstash.log:
{:timestamp=>"2013-12-12T06:27:59.835000+0100", :message=>"Pipeline started", :level=>:info, :file=>"/opt/logstash/logstash.jar!/logstash/pipeline.rb", :line=>"69"}
{:timestamp=>"2013-12-12T06:29:22.429000+0100", :message=>"output received", :event=>#<LogStash::Event:0x77962b4d #cancelled=false, #data={"message"=>"test", "#timestamp"=>"2013-12-12T05:29:22.420Z", "#version"=>"1", "type"=>"example", "host"=>"Ubuntu-1204-precise-64-minimal"}>, :level=>:info, :file=>"(eval)", :line=>"16"}
{:timestamp=>"2013-12-12T06:29:22.461000+0100", :level=>:debug, :host=>"XX.XX.XX.XX", :port=>6379, :timeout=>5, :db=>0, :file=>"/opt/logstash/logstash.jar!/logstash/outputs/redis.rb", :line=>"230"}
But while I do get messages in stdout, I get nothing in redis on the other server. I can however telnet to the correct port on the other server, and I get "ping/PONG" in telnet, so redis on the other server is working..... And there are no errors etc in the redis logs.
It looks to me very much like the redis plugin on the logstash shipper agent is not working as expected, but for the life of me, I can't see where the breakdown is coming from.....
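One way to check whether events ever reach Redis (a suggestion on my part; the host and key match the configs above, run on the web server) is to inspect the list directly, since a working shipper should make its length grow until the indexer pops entries:
redis-cli -h 127.0.0.1 llen logstash
redis-cli -h 127.0.0.1 lrange logstash 0 0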

How to forward syslog-ng logs to a remote facility

I have a host running syslog-ng. It does all its stuff locally fine (creating log files etc.). However, I would like to forward ALL of its logs to a remote machine, specifically to one facility on the remote machine (local4). I tried playing around with rewrite (set-facility) and templates within the destination (syntax errors), but to no avail.
destination remote_server {
  udp("172.18.192.8" port(514));
  udp("172.18.192.9" port(514));
};
rewrite r_local4 {
  set-facility(local4);
};
filter f_alllogs {
  level(debug..emerg);
};
log {
  source(local);
  filter(f_alllogs);
  rewrite(r_local4);
  destination(remote_server);
};
AFAIK, currently it is not possible to modify the facility of a message in syslog-ng.
Is there a special reason you want to do it?