Backstory:
We decided to migrate our SVN server from on-prem to the cloud.
Both servers run CentOS 7; the on-prem SVN version is 1.8.15, while the cloud one is 1.8.19.
The access protocol changed from SVN (port 3690) to HTTPS (443), so the httpd setup is new to us.
To migrate the repository, I first tried a plain old 'rsync' between the servers to move the whole repository. Functionally it worked (all the revisions were there), but I still got the same error.
I thought it might be some kind of DB issue, so I then used the SVN-native 'svnadmin dump' and 'svnadmin load' commands to import the repository instead. The issue still persists.
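For reference, this is roughly the dump/load cycle I used (paths are placeholders, not the real repository location):
# On-Prem: dump the full history into a portable file
svnadmin dump /var/svn/repo > repo.dump
# Cloud: create a fresh repository and load the dump into it
svnadmin create /var/svn/repo
svnadmin load /var/svn/repo < repo.dump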
I am accessing SVN over HTTPS through Apache HTTPD.
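For context, the Apache side looks roughly like this (a sketch; the location, paths, and auth setup are placeholders rather than my exact config):
<Location /svn>
    DAV svn
    SVNParentPath /var/svn
    AuthType Basic
    AuthName "SVN Repository"
    AuthUserFile /etc/httpd/svn-auth-users
    Require valid-user
</Location>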
Everything seems to work fine and all the functionality is there, but after several checkouts I start getting a 500 Internal Server Error.
Currently, the issue is triggered by a Jenkins pipeline that checks out from SVN; here is the error output:
ERROR: Failed to check out https://svn-repo/path/to/files
org.tmatesoft.svn.core.SVNException: svn: E175002: OPTIONS of '/path/to/files': 500 Internal Server Error (https://svn-repo)
svn: E175002: OPTIONS request failed on '/path/to/files'
The reason I don't currently suspect the client (Jenkins) side is that the same error happened to me when checking out with the SVN client on my PC.
Here are the logs from HTTPD:
10.10.10.16 - - [17/Aug/2020:12:45:21 +0300] "OPTIONS /path/to/files HTTP/1.1" 401 381
10.10.10.16 - user [17/Aug/2020:12:45:21 +0300] "OPTIONS /path/to/files HTTP/1.1" 500 541
As you can see, I receive a 401 before getting the 500. But as I said, the checkouts run one after another, so the earlier ones could not have succeeded if the authorization were invalid (the permissions for the whole repo are identical, not path-based).
Side note: the 401 is expected from how WebDAV authentication works: the client first tries the request unauthenticated, and only when it gets back a 401 does it resend the request with credentials.
---- Progress Report ----
It's been brought to my attention that 'SVNAllowBulkUpdates On' could be the cause of this issue.
I re-ran the pipeline with both 'Prefer' and 'Off', but that did not fix the issue.
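For reference, the directive lives in the httpd config inside the same <Location> block as the rest of the mod_dav_svn setup (a sketch, reusing the placeholder block from above):
<Location /svn>
    DAV svn
    SVNParentPath /var/svn
    SVNAllowBulkUpdates Prefer
</Location>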
Possibly related issue:
Large SVN checkout fails sporadically
I upgraded the SVN to version 1.10 successfully.
After upgrading and running the pipeline once more, I saw the following in the SVN error log:
[Thu Oct 01 17:25:55.268333 2020] [dav:error] [pid 9465] [client 11.11.11.11:39580] Provider encountered an error while streaming a REPORT response. [500, #0]
[Thu Oct 01 17:25:55.268355 2020] [dav:error] [pid 9465] [client 11.11.11.11:39580] A failure occurred while driving the update report editor [500, #104]
[Thu Oct 01 17:25:55.268360 2020] [dav:error] [pid 9465] [client 11.11.11.11:39580] Connection reset by peer [500, #104]
Since the log points to a client-side issue ('Connection reset by peer'), I started looking for client-side configuration changes and added the following to "~/.subversion/servers", under its [global] section:
http-timeout = 259200
Source: https://confluence.atlassian.com/fishkb/svn-operations-taking-longer-than-an-hour-time-out-229180362.html
Unfortunately, this still did not help.
Later, I ran 'tcpdump' on port 443 (tcpdump -nnS -i ens5 port 443) to inspect the incoming/outgoing packets. I ran the command on the Jenkins slave and on the SVN server simultaneously, and found that at a certain point they stopped exchanging data for exactly one minute, after which the SVN server sent a FIN to the Jenkins slave; the slave later tried to send more data and then reset the connection:
17:14:56.976631 IP SVN > Jenkins-Slave: Flags [.], ack 4264260017, win 235, options [nop,nop,TS val 1054806523 ecr 1461582542], length 0
17:14:56.976961 IP SVN > Jenkins-Slave: Flags [P.], seq 394455454:394456190, ack 4264260017, win 235, options [nop,nop,TS val 1054806523 ecr 1461582542], length 736
17:14:56.983612 IP Jenkins-Slave > SVN: Flags [P.], seq 4264260017:4264260557, ack 394456190, win 279, options [nop,nop,TS val 1461582631 ecr 1054806523], length 540
17:14:56.983688 IP Jenkins-Slave > SVN: Flags [P.], seq 4264260557:4264260693, ack 394456190, win 279, options [nop,nop,TS val 1461582631 ecr 1054806523], length 136
17:14:57.065351 IP SVN > Jenkins-Slave: Flags [.], ack 4264260693, win 252, options [nop,nop,TS val 1054806611 ecr 1461582631], length 0
17:15:57.124806 IP SVN > Jenkins-Slave: Flags [P.], seq 394456190:394457011, ack 4264260693, win 252, options [nop,nop,TS val 1054866672 ecr 1461582631], length 821
17:15:57.124832 IP SVN > Jenkins-Slave: Flags [F.], seq 394457011, ack 4264260693, win 252, options [nop,nop,TS val 1054866672 ecr 1461582631], length 0
17:15:57.125768 IP Jenkins-Slave > SVN: Flags [P.], seq 4264260693:4264260724, ack 394457012, win 300, options [nop,nop,TS val 1461642773 ecr 1054866672], length 31
17:15:57.125804 IP Jenkins-Slave > SVN: Flags [R.], seq 4264260724, ack 394457012, win 300, options [nop,nop,TS val 1461642774 ecr 1054866672], length 0
I obfuscated the IPs for obvious reasons.
I'm using the RPC pattern to process my objects with RabbitMQ.
Suppose I have an object, and I want the processing to finish before the ack is sent back to the RPC client.
The ack has a default timeout of about 3 minutes, but my processing takes longer than that.
How can I change this timeout for the ack of each object, or how should I handle long-running processes like these?
Modern versions of RabbitMQ have a delivery acknowledgement timeout:
In modern RabbitMQ versions, a timeout is enforced on consumer delivery acknowledgement. This helps detect buggy (stuck) consumers that never acknowledge deliveries. Such consumers can affect a node's on-disk data compaction and potentially drive nodes out of disk space.
If a consumer does not ack its delivery for more than the timeout value (30 minutes by default), its channel will be closed with a PRECONDITION_FAILED channel exception. The error will be logged by the node that the consumer was connected to.
The error message will be:
Channel error on connection <####> :
operation none caused a channel exception precondition_failed: consumer ack timed out on channel 1
The timeout is 30 minutes (1,800,000 ms) by default (see note 1) and is configured by the consumer_timeout parameter in rabbitmq.conf.
Note 1: the timeout was 15 minutes (900,000 ms) before RabbitMQ 3.8.17.
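For context, here is a minimal Java consumer sketch (the queue name, host, and process() helper are hypothetical placeholders). With manual acknowledgements, the work inside the delivery callback must finish and the ack must be sent within consumer_timeout, or the channel is closed as described above:

import com.rabbitmq.client.*;

public class LongTaskConsumer {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost"); // placeholder host
        Connection connection = factory.newConnection();
        Channel channel = connection.createChannel();
        channel.basicQos(1); // at most one unacknowledged delivery at a time

        boolean autoAck = false; // manual acks: the consumer_timeout clock applies
        channel.basicConsume("rpc_queue", autoAck, (consumerTag, delivery) -> {
            // the long-running processing happens here; it must complete
            // (and the ack below must be sent) before consumer_timeout expires
            process(delivery.getBody());
            channel.basicAck(delivery.getEnvelope().getDeliveryTag(), false);
        }, consumerTag -> { });
    }

    private static void process(byte[] body) { /* placeholder for the long task */ }
}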
If you run RabbitMQ in Docker, you can mount a volume with a rabbitmq.conf file, create the file inside that volume, and set consumer_timeout there.
For example, with docker-compose:
version: "2.4"
services:
  rabbitmq:
    image: rabbitmq:3.9.13-management-alpine
    # network_mode: host   # use either host networking or the port mappings below, not both
    container_name: 'your-name'
    ports:
      - "5672:5672"
      - "15672:15672"   # only needed if you use the management UI
    volumes:
      - /etc/rabbitmq/rabbitmq.conf:/etc/rabbitmq/rabbitmq.conf
You also need to create the rabbitmq.conf file on your server, under /etc/rabbitmq/.
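For example, a minimal rabbitmq.conf (the one-hour value is only an illustration; the timeout is given in milliseconds):
# raise the consumer ack timeout from 30 minutes to 1 hour
consumer_timeout = 3600000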
Documentation with the available parameters: https://github.com/rabbitmq/rabbitmq-server/blob/v3.8.x/deps/rabbit/docs/rabbitmq.conf.example
We have defined a Lettuce client connection factory to connect to Redis, with custom socket and command timeouts:
@Bean
LettuceConnectionFactory lettuceConnectionFactory() {
    final SocketOptions socketOptions = SocketOptions.builder()
            .connectTimeout(socketTimeout)
            .build();
    final ClientOptions clientOptions = ClientOptions.builder()
            .socketOptions(socketOptions)
            .build();
    LettuceClientConfiguration clientConfig = LettuceClientConfiguration.builder()
            .commandTimeout(redisCommandTimeout)
            .clientOptions(clientOptions)
            .build();
    RedisStandaloneConfiguration serverConfig =
            new RedisStandaloneConfiguration(redisHost, redisPort);
    final LettuceConnectionFactory lettuceConnectionFactory =
            new LettuceConnectionFactory(serverConfig, clientConfig);
    lettuceConnectionFactory.setValidateConnection(true);
    // return the configured factory; returning a fresh instance here
    // would silently drop setValidateConnection(true)
    return lettuceConnectionFactory;
}
The Lettuce documentation defines these default values:
Default socket (connect) timeout: 10 seconds
Default command timeout: 60 seconds
If the Redis service is down, the application must get a timeout within 300 ms. Which of the two values should be set to the greater one?
Github example project:
https://github.com/cristianprofile/spring-data-redis-lettuce
In the socket options you specify the connect timeout. This is the maximum time allowed for the Redis client (Lettuce) to establish a TCP/IP connection to the Redis server. This value should be relatively small (e.g. up to 1 minute).
If the client cannot establish a connection to the server within 1 minute, it's safe to say the server is not available (the server is down, the address/port is wrong, network security such as a firewall prohibits the connection, and so on).
The command timeout is completely different. Once the connection is established, the client can send commands to the server and expects responses to them. This timeout configures how long the client will wait for a response to a command from the server.
This timeout can therefore be set to a bigger value (e.g. a few minutes), for cases where a command sends a lot of data to the server and transferring and storing that much data takes time.
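As a sketch, that split could be wired into the question's builder like this (the concrete values are illustrative only): a short connect timeout so a down or unreachable server is detected quickly, and a more generous command timeout for slow commands:

import java.time.Duration;
import io.lettuce.core.ClientOptions;
import io.lettuce.core.SocketOptions;
import org.springframework.data.redis.connection.lettuce.LettuceClientConfiguration;

SocketOptions socketOptions = SocketOptions.builder()
        .connectTimeout(Duration.ofMillis(300)) // fail fast when the server is down
        .build();
LettuceClientConfiguration clientConfig = LettuceClientConfiguration.builder()
        .commandTimeout(Duration.ofMinutes(2))  // the greater value: allow slow commands
        .clientOptions(ClientOptions.builder().socketOptions(socketOptions).build())
        .build();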
I sent messages over an SMPP connection from Kannel (using Selenium's SMPPSim) and it worked.
But somehow, when I try to receive messages, in other words when I try to send messages from SMPPSim, it doesn't work: the MO messages from SMPPSim just queue up in its MO queue.
I tried these things:
Using the same port for send and receive (Kannel/SMPPSim).
Using different ports for send and receive (Kannel/SMPPSim).
Defining two groups for the same smsc-smpp, one for send and one for receive (this may be wrong).
Now I'm using port 2775 for sending and port 2776 for receiving.
#kannel.conf
group=smsc
smsc=smpp
....
port = 2775
receive-port = 2776
transceiver-mode = true
....
In SmppSim
#smppsim.props
SMPP_PORT=2775
....
SYSTEM_IDS=smppclient
PASSWORDS=password
OUTBIND_ENABLED=true
OUTBIND_ESME_IP_ADDRESS=127.0.0.1
OUTBIND_ESME_PORT=2776
OUTBIND_ESME_SYSTEMID=smppclient
OUTBIND_ESME_PASSWORD=password
....
When I run the bearerbox, it shows the following (SMS sending works):
....
connect failed
System error 111: Connection refused
ERROR: error connecting to server `localhost' at port `2776'
SMPP[SMPPSim]: Couldn't connect to server.
SMPP[SMPPSim]: Couldn't connect to SMS center (retrying in 10 seconds).
....
How do I configure this?
Thank you!
Please read the SMPP v3.4 specification, section 2.2.1:
The purpose of the outbind operation is to allow the SMSC signal an ESME to originate a
bind_receiver request to the SMSC.
So outbind is used by the SMSC (SMPPSim) to connect to the ESME (Kannel) and request a callback connection.
However, you can run several SMPPSim instances listening on different ports; in that case each instance should use its own configuration file.
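In other words, with transceiver-mode=true the single SMPP session on port 2775 already carries both MT and MO traffic, so the separate receive-port / outbind pair should not be needed. A sketch of such a kannel.conf (the credentials mirror the smppsim.props above; this is an assumption, not a verified config):
group = smsc
smsc = smpp
host = 127.0.0.1
port = 2775
transceiver-mode = true
smsc-username = smppclient
smsc-password = password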
Is there any way to set keepalive for an individual socket descriptor in VxWorks? I read in some documents that the SOL_TCP option of the setsockopt function does this on Linux. Is such a facility available in VxWorks too? If so, please provide the related details, such as which header files need to be included and how to use the option.
From the VxWorks "Library Reference" manual (available for download):
OPTIONS FOR STREAM SOCKETS
The following sections discuss the socket options available for stream (TCP) sockets.
SO_KEEPALIVE -- Detecting a Dead Connection
Specify the SO_KEEPALIVE option to make the transport protocol (TCP) initiate a timer to detect a dead connection:
setsockopt (sock, SOL_SOCKET, SO_KEEPALIVE, &optval, sizeof (optval));
This prevents an application from hanging on an invalid connection. The value at optval for this option is an integer (type int), either 1 (on) or 0 (off).
The integrity of a connection is verified by transmitting zero-length TCP segments triggered by a timer, to force a response from a peer node. If the peer does not respond after repeated transmissions of the KEEPALIVE segments, the connection is dropped, all protocol data structures are reclaimed, and processes sleeping on the connection are awakened with an ETIMEDOUT error.
The ETIMEDOUT timeout can happen in two ways. If the connection is not yet established, the KEEPALIVE timer expires after idling for TCPTV_KEEP_INIT. If the connection is established, the KEEPALIVE timer starts up when there is no traffic for TCPTV_KEEP_IDLE. If no response is received from the peer after sending the KEEPALIVE segment TCPTV_KEEPCNT times with interval TCPTV_KEEPINTVL, TCP assumes that the connection is invalid. The parameters TCPTV_KEEP_INIT, TCPTV_KEEP_IDLE, TCPTV_KEEPCNT, and TCPTV_KEEPINTVL are defined in the file target/h/net/tcp_timer.h.
The IP_TCP_KEEPINTVL, TCP_KEEPIDLE, and TCP_KEEPCNT options are supported by setsockopt from VxWorks 6.8 onwards. In earlier releases of VxWorks you can only change these values globally, which affects all sockets created.
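A sketch of what that per-socket tuning could look like on VxWorks 6.8+, assuming the option names quoted above (the exact spellings, e.g. TCP_KEEPINTVL vs. IP_TCP_KEEPINTVL, should be checked against your target's netinet/tcp.h):

/* sketch, not verified on a target: per-socket keepalive tuning */
#include <vxWorks.h>
#include <sockLib.h>
#include <sys/socket.h>
#include <netinet/tcp.h>

void enableKeepalive (int sock)
{
    int on    = 1;
    int idle  = 60;  /* seconds of idle time before the first probe */
    int intvl = 10;  /* seconds between probes */
    int cnt   = 5;   /* failed probes before the connection is dropped */

    setsockopt (sock, SOL_SOCKET,  SO_KEEPALIVE,  (char *)&on,    sizeof (on));
    setsockopt (sock, IPPROTO_TCP, TCP_KEEPIDLE,  (char *)&idle,  sizeof (idle));
    setsockopt (sock, IPPROTO_TCP, TCP_KEEPINTVL, (char *)&intvl, sizeof (intvl));
    setsockopt (sock, IPPROTO_TCP, TCP_KEEPCNT,   (char *)&cnt,   sizeof (cnt));
}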
The question below shows how this can be done:
How to set TCP keep alive interval for a specific socket fd (Not system wide) in VxWorks?