MassTransit queue configuration when the internet disconnects - RabbitMQ

I am trying to hold data that is to be sent to a DB over the internet. One requirement is to not lose information that is bound for the DB if we somehow disconnect from the internet. From my testing, MassTransit will only hold one packet of information, so when the internet is reconnected, only that one packet gets resent. Optimally, I would like to be able to edit the queue size so that it can hold more than one packet of info. Am I able to do this, or will I have to create separate methods to correct for this? Thanks

Related

Zeek/Bro IDS - SumStats - quantity of similarly sized TCP segments?

I'm trying to write my first script in Zeek, which should produce statistics on the TLS packet segments sent and received by clients on the local network (the number of packets of the same size, and a list of destination IPs by packets sent). Unfortunately, I'm unable to find the proper event or a guide that would help me find a solution for this. May I get some advice on this one?
Zeek has a few packet-level events that might get you started:
https://docs.zeek.org/en/current/scripts/base/bif/event.bif.zeek.html#id-new_packet
https://docs.zeek.org/en/current/scripts/base/bif/event.bif.zeek.html#id-raw_packet
https://docs.zeek.org/en/current/scripts/base/bif/event.bif.zeek.html#id-packet_contents
Note the warning that comes with these events: they incur high per-event overhead since they'll be generated for every packet, so they're most likely not suitable for deployment on live traffic.

Using MSMQ to Store Data before Processing

I am new to the forum and new to MSMQ.
I have been asked to research it, to see if it will help our business, but I'm still not sure if and how it works. Here is a short summary.
We have a service provider that receives messages via mobile phone and passes certain info (such as the cell number, the text of the message, etc.) to a URL that we have given them. That URL points to an app we created, which processes the data and stores it in our database.
However, as we at times receive between a few hundred and a few thousand messages at any given time (spread out or all at once), we get timeouts.
What I would like to know is: is it possible to get this info stored in a queue using MSMQ before it hits our URL (the one we provided to our service provider), so that we can avoid timeouts?
I hope this makes sense and that someone can help!
Thank you!
MSMQ is a transport protocol. It is designed to get data from A to B as fast and as reliably as possible. As it uses store-and-forward, it can be used as a data buffer at the sender. In your case, though, that's not important. I would recommend:
1. ISP sends MSMQ message over HTTP(S) to your server.
2. Message is stored on the server's local queue.
3. Your app reads messages from the queue.
4. Your app writes to the database.
5. Go to 3.
So you would be buffering the messages AFTER they reach your server.
The messages would also be buffered at the ISP in case of network outage between them and you.
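
If it helps to make steps 3-5 concrete, here is a minimal sketch of the drain loop. MSMQ is normally driven from .NET rather than TypeScript, so treat this purely as a shape-of-the-loop sketch: QueueClient, receiveNextMessage, acknowledge, and insertSms are all hypothetical stand-ins for whatever queue binding and database driver you actually use.

```typescript
// Hypothetical interfaces standing in for an MSMQ binding and a DB driver.
interface QueueMessage { id: string; body: string; }

interface QueueClient {
  // Blocks until a message arrives on the server's local queue (step 2).
  receiveNextMessage(): Promise<QueueMessage>;
  // Removes the message from the queue once it is safely persisted.
  acknowledge(message: QueueMessage): Promise<void>;
}

interface Database {
  insertSms(body: string): Promise<void>;
}

// Steps 3-5: read from the local queue, write to the database, repeat.
async function drainQueue(queue: QueueClient, db: Database): Promise<void> {
  while (true) {                                      // "Go to 3"
    const message = await queue.receiveNextMessage(); // step 3
    await db.insertSms(message.body);                 // step 4
    // Only remove the message after the DB write succeeds, so a crash or
    // outage between steps 3 and 4 cannot lose data.
    await queue.acknowledge(message);
  }
}
```

Because the queue absorbs the bursts, your URL can return immediately after enqueueing, and the drain loop catches up at whatever pace the database allows - which is what makes the timeouts go away.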

When do the events emitted by a NoFlo Port fire? And what do they mean?

As far as I can tell there are 7 events dispatched by a NoFlo port:
attach,
connect,
begingroup,
data,
endgroup,
disconnect,
detach
To me, some of these events sound very similar, such as attach + connect and disconnect + detach. What is the difference?
What do begingroup and endgroup mean?
When do these events get emitted and when are they generally used?
I've seen the documentation at: http://noflojs.org/documentation/components/#portevents
Would I be correct to assume that attach and detach are for handling NoFlo UI cases, e.g. changing the component's visual state?
Another assumption would be that connect gets fired every time before data is sent, then data gets fired, then disconnect? That seems a bit odd to me...
I'm completely in the dark when it comes to groups.
attach and detach happen when the NoFlo Network attaches a socket to the port (or removes one from it). So usually they happen at network start-up time, before IIPs get sent.
The exception to this is when you're live-editing the graph with a tool like Flowhub. In that situation attach/detach can happen whenever you connect or remove wires.
Most components don't need to care about the attachment events.
connect happens before the upstream connection sends data, and disconnect when the upstream connection says that it has sent everything it is intending to send. So in effect they're beginning of transmission and end of transmission events. An upstream component may choose to connect again after a disconnect if it has a new batch of data to send.
data is the event for actual payload-containing packets.
begingroup and endgroup are the "bracket IPs" containing metadata about the data being sent. They can be used for creating tree structures with packet data.
For example, filesystem/ReadFile will send the file contents as a data packet, but the filename is sent as a bracket IP, using begingroup/endgroup packets around the actual file contents.
The noflo-groups library provides lots of components for utilizing group information for synchronization, routing, etc.
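
To make the sequence concrete, here is a minimal TypeScript sketch of a handler consuming the stream that ReadFile produces. The LegacyInPort interface is a hypothetical typing of the legacy event-based port API described above; exact names and signatures vary between NoFlo versions, so check the documentation for yours.

```typescript
// Hypothetical minimal typing for a legacy event-based NoFlo input port.
interface LegacyInPort {
  on(event: 'connect' | 'disconnect', handler: () => void): void;
  on(event: 'begingroup' | 'endgroup', handler: (group: string) => void): void;
  on(event: 'data', handler: (payload: unknown) => void): void;
}

function listenToReadFile(inPort: LegacyInPort): void {
  let currentFile: string | null = null;

  inPort.on('connect', () => {
    // Beginning of transmission: upstream is about to send a batch.
  });
  inPort.on('begingroup', (group) => {
    currentFile = group; // ReadFile sends the filename as a group
  });
  inPort.on('data', (contents) => {
    // The actual payload-containing packet: the file contents.
    console.log(`got ${String(contents).length} bytes of ${currentFile}`);
  });
  inPort.on('endgroup', () => {
    currentFile = null; // closing bracket of the tree structure
  });
  inPort.on('disconnect', () => {
    // End of transmission: upstream has sent everything it intends to.
  });
}
```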

Multicast packet loss - running two instances of the same application

On Red Hat Linux, I have a multicast listener listening to a very busy multicast data source. It runs perfectly by itself, with no packet loss. However, once I start a second instance of the same application with exactly the same settings (same src/dst IP address, socket buffer size, user buffer size, etc.), I start to see very frequent packet loss from both instances, and they lose exactly the same packets. If I stop one of the instances, the remaining one returns to normal without any packet loss.
Initially, I thought it was a CPU/kernel load issue; maybe the application could not get the packets out of the buffer quickly enough. So I did another test. I kept one instance of the application running, but started a totally different multicast listener on the same computer, using the second NIC and listening to a different but even busier multicast source. Both applications ran fine without any packet loss.
So it looks like one NIC is not powerful enough to support two multicast applications, even when they listen to exactly the same thing. A possible cause of the packet loss might be that, in this scenario, the NIC driver needs to copy the incoming data into two socket buffers, and this extra copy is too much for the card to handle, so it drops packets. Any deeper analysis of this issue, and any possible solutions?
Thanks
You are basically finding out that the kernel is inefficient at fanning out multicast packets. In the worst-case scenario, for every incoming packet the kernel allocates two new buffers (the SKB object and the packet payload) and copies the NIC buffer twice.
Now take the best-case scenario: for every incoming packet a new SKB is allocated, but the packet payload is shared between the two sockets with reference counting. Imagine what happens with two applications, each on its own core, reading from separate sockets. Every reference to the packet payload causes the memory bus to stall while both cores' caches flush and reload, and on top of that each application has to context-switch into the kernel and back to pull the payload off its socket. The result is terrible performance.
You aren't the first to encounter this problem, and many vendors have created solutions for it. The basic design is to limit the incoming data to one thread on one core reading one socket, then have that thread distribute the data to all other interested threads, preferably using user-space code built on shared memory and lock-free data structures. The sketch below illustrates the idea.
Examples are TIBCO Rendezvous and 29West's Ultra Messaging, the latter showing a 660 ns IPC bus:
http://www.globenewswire.com/newsroom/news.html?d=194703
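
As a rough illustration of that single-receiver design, here is a sketch using Node.js's dgram module: exactly one socket joins the multicast group, and user space fans each datagram out to any number of in-process subscribers, so the kernel delivers each packet only once. The group address and port are placeholders, and a production version would hand payloads to other threads over shared memory rather than through plain callbacks.

```typescript
import * as dgram from 'node:dgram';

const GROUP = '239.1.1.1'; // placeholder multicast group
const PORT = 5000;         // placeholder port

type Subscriber = (payload: Buffer) => void;
const subscribers: Subscriber[] = [];

export function subscribe(fn: Subscriber): void {
  subscribers.push(fn);
}

// Exactly one socket receives from the kernel...
const socket = dgram.createSocket({ type: 'udp4', reuseAddr: true });

socket.on('message', (msg) => {
  // ...and user space does the fan-out, one kernel delivery per packet.
  for (const fn of subscribers) fn(msg);
});

socket.bind(PORT, () => {
  socket.addMembership(GROUP);
});

// Usage: both former "instances" become subscribers in a single process.
subscribe((payload) => console.log('consumer A got', payload.length, 'bytes'));
subscribe((payload) => console.log('consumer B got', payload.length, 'bytes'));
```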

iPhone: Sending large data with GameKit

I am trying to write an app that exchanges data with other iPhones running the app through the GameKit framework. The iPhones discover each other and connect fine, but the problems happen when I send the data. I know the iPhones are connected properly, because when I serialize an NSString and send it through the connection, it comes out fine on the other end. But when I try to archive a larger object (using NSKeyedArchiver), I get the error message "AGPSessionBroadcast failed (801c0001)".
I am assuming this is because the data I am sending is too large (my files are about 500k in size; Apple seems to recommend a max of 95k). I have tried splitting the data into several transfers, but I can never get it to unarchive properly at the other end. I'm wondering if anyone else has come up against this problem, and how you solved it.
I had the same problem with files around 300K. The trouble is that the sender needs to know when the receiver has emptied the pipe before sending the next chunk.
I ended up with a simple state engine that ran on both sides. The sender transmits a header with the total number of bytes that will be sent and the packet size, then waits for an acknowledgement from the other side. Once it gets the handshake, it proceeds to send fixed-size packets, each stamped with a sequence number.
The receiver gets each one, reads it, appends it to a buffer, then writes back to the pipe that it got the packet with that sequence number. The sender reads the packet number, slices out another buffer's worth, and so on. Each side keeps track of the state it's in (idle, sending header, receiving header, sending data, receiving data, error, done, etc.). The two sides have to keep track of when to read/write the last fragment, since it's likely to be smaller than the full buffer size.
This works fine (albeit a bit slowly) and it can scale to any size. I started with 5K packet sizes, but that ran pretty slow. I pushed it to 10K, but that started causing problems, so I backed off and held it at 8096. It works fine for both binary and text data.
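The framing logic in this scheme is independent of GameKit itself, so here is a rough TypeScript sketch of it; all names are illustrative, the actual transmission would go through your session's send/receive calls, and 8096 is simply the chunk size quoted above.

```typescript
const CHUNK_SIZE = 8096; // the packet size that worked in practice above

interface Header { totalBytes: number; chunkSize: number; }
interface Chunk { seq: number; payload: Uint8Array; }

// Sender side: slice the archive into fixed-size, sequence-stamped chunks.
function* makeChunks(data: Uint8Array): Generator<Chunk> {
  for (let seq = 0, off = 0; off < data.length; seq++, off += CHUNK_SIZE) {
    // The last fragment is likely smaller than the full buffer size.
    yield { seq, payload: data.subarray(off, off + CHUNK_SIZE) };
  }
}

// Receiver side: place each chunk at its sequence offset, then ACK it.
class Receiver {
  private buffer: Uint8Array;
  private received = 0;

  constructor(header: Header) {
    this.buffer = new Uint8Array(header.totalBytes);
  }

  // Returns the sequence number to write back to the pipe as the ACK.
  accept(chunk: Chunk): number {
    this.buffer.set(chunk.payload, chunk.seq * CHUNK_SIZE);
    this.received += chunk.payload.length;
    return chunk.seq;
  }

  get done(): boolean {
    return this.received === this.buffer.length;
  }
}
```

The sender would pull the next chunk from makeChunks only after the previous ACK arrives, which is exactly the "wait for the receiver to empty the pipe" behaviour described above.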
Bear in mind that GameKit isn't a general file-transfer API; it's meant more for updates of where the player is, what the current locations of other objects are, etc. So sending 300k for a game doesn't seem that sensible, though I can understand hijacking the API for general sharing mechanisms.
The problem is that it isn't a TCP connection; it's more like a UDP (datagram) connection. In that case, the data isn't a stream (which TCP would split into packets) but rather one giant chunk of data. (Technically, UDP can be fragmented into multiple IP packets - but lose one of those, and the entire UDP datagram is lost, as opposed to TCP, which will retry.)
The MTU for most wired networks is ~1.5k; for Bluetooth, it's around ~0.5k. So any UDP packet that you send (a) may get lost, (b) may be split into multiple MTU-sized IP packets, and (c) if one of those packets is lost, you automatically lose the entire set.
Your best strategy is to emulate TCP - send packets with a sequence number, so the receiving end can request re-transmission of any packets that went missing. If you're using the equivalent of an NSKeyedArchiver, then one suggestion is to iterate through the keys and write those out as individual keys (assuming no keyed value is that big on its own). You'll need some kind of ACK for each packet that gets sent back, and a total ACK when you're done, so the sender knows it's OK to drop the data from memory.
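As a sketch of that per-key idea (in TypeScript for illustration, since the packetizing logic is platform-neutral): each key/value pair becomes its own small sequence-numbered packet, and the sender holds everything in memory until the ACKs come back. The send callback is a hypothetical stand-in for whatever transmit call your session exposes.

```typescript
// Each key/value pair from the archive travels as its own small packet.
interface KeyPacket { seq: number; key: string; value: unknown; }

function packetizeByKey(
  entries: Record<string, unknown>,
  send: (packet: KeyPacket) => void, // hypothetical transport call
): Map<number, KeyPacket> {
  const inFlight = new Map<number, KeyPacket>(); // held until ACKed
  let seq = 0;
  for (const [key, value] of Object.entries(entries)) {
    const packet: KeyPacket = { seq: seq++, key, value };
    inFlight.set(packet.seq, packet);
    send(packet);
  }
  return inFlight;
}

// On each per-packet ACK, drop that entry; once the map is empty and the
// final "all done" ACK arrives, the sender can safely release the data.
function onAck(inFlight: Map<number, KeyPacket>, seq: number): void {
  inFlight.delete(seq);
}
```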