We are using node-smpp and struggling to receive long (over 160 characters) messages from an SMPP server. Does anyone know if we should enable some option to get the full message payload?
Using Mule 4.4 Community Edition, running on premises.
While configuring an HTTP listener, I came across this property:
I checked online and the documentation here:
Maximum time in milliseconds that the listener must wait while receiving a message.
I tried changing it to 5000 (5 seconds) and then waited for more than a minute without making a request.
Then I invoked the listener and it worked fine, so I am confused about the significance of this attribute.
When should we use this value? Is it meant to act as a response timeout that the consumer of the HTTP listener would see?
Thanks
Read Timeout: (Number) Maximum time in milliseconds that the listener must wait while receiving a message. Default Value: 30000. Documentation is here
Read Timeout indicates how long, once a TCP connection is opened, the listener will wait to receive the body.
From my understanding, clients sometimes open a connection early and send the body later, to keep the connection alive and to mitigate situations where too many connections are opened and closed. Refer this.
You can keep the default value and it won't impact your implementation, provided you aren't sending GBs of data to your endpoints or uploading files via multipart HTML upload.
And if you are, then you'll need to tweak it according to your needs.
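For reference, a minimal sketch of where this property lives in the Mule 4 XML config. I'm assuming the `readTimeout` attribute sits on the listener connection; check the HTTP connector docs for your version:

```xml
<!-- Assumed placement: readTimeout (in ms) on the listener connection.
     30000 is the documented default; it only matters for clients that
     open a connection and then send the request body slowly. -->
<http:listener-config name="HTTP_Listener_config">
    <http:listener-connection host="0.0.0.0" port="8081" readTimeout="30000" />
</http:listener-config>
```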
I want to write a plugin for OpenFire to inspect messages sent between users and possibly stop a message from going on to the target recipient.
I can write a plugin that implements PacketInterceptor, but is there an API that supports blocking the packet from being sent, or possibly modifying its body?
The rationale for this is possibly offensive or illegal content. The times we live in :(
I found a packet filter created to do exactly this.
It can be found at https://www.igniterealtime.org/projects/openfire/plugins/packetfilter/readme.html
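If you would rather roll your own, the Openfire plugin API lets an interceptor throw PacketRejectedException from interceptPacket(...) to drop a packet, registered via InterceptorManager. The content check itself can live in a plain helper; a minimal sketch (class name and the banned-word list are made-up examples, not part of the Openfire API):

```java
import java.util.Arrays;
import java.util.List;

// Hypothetical content check that an Openfire PacketInterceptor could call
// from interceptPacket(...) before deciding to throw PacketRejectedException.
public class MessageFilter {
    // Example word list; in a real plugin this would come from configuration.
    private static final List<String> BANNED = Arrays.asList("forbidden1", "forbidden2");

    // Returns true if the message body should be blocked.
    public static boolean shouldBlock(String body) {
        if (body == null) {
            return false; // packets without a body pass through
        }
        String lower = body.toLowerCase();
        return BANNED.stream().anyMatch(lower::contains);
    }
}
```

In the interceptor you would run this against the message body of incoming, unprocessed Message packets and reject the packet when it returns true.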
We have built REST web service and deployed on Websphere application server and IBM HTTP Web server 8.5.
What is happening is that for some POST requests with a quite large response (more than 64 KB), we are not getting the complete response data.
The application generates good JSON, but the JSON gets truncated somewhere in between. The same request is fired multiple times, but the response gets truncated for a few requests at random.
Our analysis shows that whenever we get a truncated response, its size is a multiple of 32 KB, i.e. the actual response may be, say, 105 KB but we receive only 64 KB or 96 KB of it.
Any idea what can be the reason? Any configuration which can help us resolve the issue?
Thanks
Narinder
You may want to increase the size of the write buffer on the Web Container to stop it splitting the response across multiple writes. The default size of the write buffer is 32 KB, which corresponds to the multiple you are seeing.
To change this setting:
Application servers > -serverName- > Ports > Transport Chain > HttpQueueInboundDefault
Click on the Web Container and set the Write buffer size to an appropriate value. In most cases you want the buffer to be large enough to write all (or most) responses in a single write rather than multiple writes.
See also WebSphere Application Server 8.5 tuning
I'm sending SMS messages via the EasySmpp library (http://sourceforge.net/projects/easysmpp/). For messages over 160 characters it internally uses the data_sm command.
The question is: how can I determine the number of parts of an SMS message that arrives at the phone? My operator charges me per part, so obviously I need that information.
Of course I can count it myself (message length / 160, or message length / 70 in the case of UTF-8), but is there a better solution? Maybe SMPP has a field for this?
m.
I had a similar discussion with our SMS gateway provider yesterday. This is explained in detail here: http://en.wikipedia.org/wiki/Short_Message_Service#Message_size
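As that article notes, concatenated messages lose some capacity to the UDH header, so the per-part sizes are 153 characters (GSM-7) and 67 (UCS-2) rather than the 160 and 70 you'd get by simple division. A small sketch of the arithmetic (helper name is mine):

```java
public class SmsParts {
    // Segment sizes per the SMS concatenation rules: a single GSM-7 message
    // holds 160 characters, but each concatenated part holds only 153
    // because the UDH header consumes 7 septets; for UCS-2 (non-Latin
    // alphabets) it is 70 single / 67 per part.
    public static int parts(int length, boolean ucs2) {
        int single = ucs2 ? 70 : 160;
        int perPart = ucs2 ? 67 : 153;
        if (length <= single) {
            return 1;
        }
        return (length + perPart - 1) / perPart; // ceiling division
    }
}
```

So a 161-character GSM-7 message is billed as 2 parts, and a 400-character one as 3, which is what most operators charge for.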
I have two applications talking to each other over SSL. The client runs on a Windows machine; the server is a Linux-based application. The client sends a large amount of data to the server on startup. The data is sent to the server in ~4000-byte chunks, each containing 30 entries; about 50,000 entries have to be sent in total.
During that transmission the server sends a message to the client; that message is ~4000 bytes. After that happens, SSL_write() on the client side begins to return the error SSL_ERROR_WANT_WRITE. The client sleeps for 10 ms and retries SSL_write with the exact same parameters; however, SSL_write keeps failing indefinitely, so the client eventually aborts. If it then tries to send a new message, I get an error indicating that I am not retrying the same aborted message from earlier:
error:1409F07F:SSL routines:SSL3_WRITE_PENDING: bad write retry
The server eventually kills the connection, since it has not heard from the client for 60 s, and establishes a new one. That is just an FYI; the real issue is how I can get SSL_write to resume.
If the server does not send a request during the receive the problem goes away. If I shrink the size of the request from 16K to 100 bytes the problem does not happen.
The SSL context mode is set to SSL_MODE_AUTO_RETRY and SSL_MODE_ACCEPT_MOVING_WRITE_BUFFER.
Does anyone have an idea why simultaneous transmission of large messages from both sides can cause this failure? If this is a limitation, what can I do to prevent it, other than capping the size of what the server sends to the client? My concern is that if the client is not sending anything, the throttling I applied to avoid this issue is wasted.
On the client side I tried to perform an SSL_read to see if I need to read during a write, even though I never receive SSL_ERROR_WANT_READ; but the read buffer is not that big anyway, ~1000 bytes in size.
Any insight on this would be appreciated.
SSL_ERROR_WANT_WRITE - this error is returned by OpenSSL (I am assuming you are using OpenSSL) only when the socket send gives it an EWOULDBLOCK or EAGAIN error. The socket send will give an EWOULDBLOCK error when the send-side buffer is full, which in turn means that your server is not reading the messages sent from the client.
So, essentially, the problem lies with your server, which is not reading the messages sent to it. You need to check your server and fix it, which will automatically fix your client problem.
Also, why have you set the option "SSL_MODE_ACCEPT_MOVING_WRITE_BUFFER"? SSL always expects that the record which it is trying to send should be sent completely before the next record can be sent.
As it turns out, in both the client and the server apps the reads and writes are processed in one thread. In the perfect storm I described above, the client is busy writing (non-blocking). The server then decides to write a large set of messages of its own in between processing its rx buffers. The server tx is a blocking call. The server gets stuck writing, starves the read, the buffers fill up, and we have a deadlock scenario.
The default Windows buffer is 8 KB, so it doesn't take much to fill it up.
The architecture should have a separate thread for rx and tx processing on both sides. As a short-term fix, one can increase the rx buffers and rate-limit the tx side to prevent the deadlock.
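The fix above can be sketched with plain TCP (SSL and the real protocol omitted; all names are mine): a dedicated tx thread keeps writing while another thread keeps draining rx, so a peer that suddenly sends a large burst mid-transfer can never stall both directions at once.

```java
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.ServerSocket;
import java.net.Socket;

public class SplitRxTx {
    static final int TOTAL = 4000 * 256; // ~1 MB, far beyond an 8 KB socket buffer

    public static long run() throws Exception {
        // Echo peer: writes back everything it receives, like the server in
        // the story sending its own burst while the client is mid-transmission.
        ServerSocket server = new ServerSocket(0);
        Thread echo = new Thread(() -> {
            try (Socket s = server.accept()) {
                InputStream in = s.getInputStream();
                OutputStream out = s.getOutputStream();
                byte[] buf = new byte[4096];
                int n;
                while ((n = in.read(buf)) != -1) {
                    out.write(buf, 0, n); // blocking write back at the sender
                }
            } catch (IOException ignored) {
            }
        });
        echo.start();

        Socket client = new Socket("127.0.0.1", server.getLocalPort());
        // Dedicated tx thread: if this blocking write ran on the rx thread,
        // nobody would drain the echoed data and both sides would deadlock.
        Thread tx = new Thread(() -> {
            try {
                OutputStream out = client.getOutputStream();
                byte[] chunk = new byte[4000];
                for (int sent = 0; sent < TOTAL; sent += chunk.length) {
                    out.write(chunk);
                }
                client.shutdownOutput(); // EOF so the echo loop can finish
            } catch (IOException ignored) {
            }
        });
        tx.start();

        // This thread is the rx side: keep draining so buffers never fill up.
        InputStream in = client.getInputStream();
        byte[] buf = new byte[4096];
        long received = 0;
        int n;
        while ((n = in.read(buf)) != -1) {
            received += n;
        }
        tx.join();
        echo.join();
        client.close();
        server.close();
        return received;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(run() == TOTAL); // all echoed bytes drained, no deadlock
    }
}
```

Collapse the tx thread into the rx loop and the same program wedges exactly as described: each side blocks in a write the other side is not reading.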