Is there a way to send two UDP streams over a Xen paravirtualized system, taking separate routes through 2 VMs, to the same destination computer? - udp

I'm working on the second experiment for my master's thesis. My supervisor has some requirements for this experiment, and I don't really know how to proceed. Earlier I thought it would be enough to just forward the packets, but now, facing the problem hands-on, I don't know where to begin.
The setup is: Computer 1 -> Measurement point -> System Under Test -> Measurement point -> Computer 2.
The System Under Test (SUT) consists of 2 VMs created with the Xen management tool xl.
There is a bridge from computer 1 on the interface "eth0" to the SUT, which is connected to the VM, and the same on the other side to computer 2.
I'm going to send 2 UDP streams and compare the timestamps over the measurement points with two servers on computer 2. The streams are going to be separated by port number and keyid for the stream.
My question is: how do I make one of the UDP streams take the route through one of the VMs, and the other stream take the route through the other VM?
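Generating the two streams themselves, separated by destination port and a key id as described above, could look like the sketch below. The destination address and port numbers are placeholder assumptions; actually steering each port through a different VM would then have to happen inside the SUT (e.g. with per-port forwarding rules on the bridge), which this sketch does not do.

```python
import socket

# Assumptions: stand-in address for Computer 2 and one port per stream,
# matching the two servers that will compare timestamps.
DEST = "127.0.0.1"
STREAM_PORTS = {1: 56001, 2: 56002}

def send_stream(stream_id, payloads):
    """Send one UDP stream; each datagram is prefixed with the key id
    and a sequence number so the servers on Computer 2 can tell the
    streams apart and detect loss/reordering."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        for seq, payload in enumerate(payloads):
            datagram = (stream_id.to_bytes(1, "big")
                        + seq.to_bytes(4, "big")
                        + payload)
            sock.sendto(datagram, (DEST, STREAM_PORTS[stream_id]))
    finally:
        sock.close()
```

Both streams can then be launched from Computer 1 (e.g. one `send_stream` call per thread), while the per-port routing decision stays entirely inside the SUT.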

Related

Measuring bandwidth usage between several local WebRTC instances

I have a prototype WebRTC application and I want to measure the bandwidth usage for a given peer at different scenarios. For example, measuring the up/down bandwidth usage when connected to 4, 8, 12 other peers.
I only have a few machines available, so my first thought was to just launch multiple instances per machine. But how do I measure the bandwidth usage correctly? I tried using Wireshark and NetLimiter, but I started getting weird results.
The problem was that I only measured bandwidth usage between the machines and not between the actual peers themselves.
For example, if I have 2 machines with 4 peers each, I want to measure a given peer's bandwidth usage with the other 7 peers, even though 3 of the peers are actually on the same machine. Any ideas on how to go about this?
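Since each peer binds its own local port even when several peers share a machine, one approach is to attribute captured bytes per endpoint pair (IP and port) rather than per machine. A minimal sketch, assuming packet records have already been extracted from a capture (the tuple format is an assumption, e.g. exported from Wireshark):

```python
from collections import defaultdict

def bandwidth_per_peer_pair(packets):
    """Aggregate bytes per (src, dst) endpoint pair.

    `packets` is an iterable of (src_ip, src_port, dst_ip, dst_port, length)
    tuples. Keying on the full (ip, port) endpoint separates peers that
    happen to run on the same machine.
    """
    totals = defaultdict(int)
    for src_ip, src_port, dst_ip, dst_port, length in packets:
        totals[((src_ip, src_port), (dst_ip, dst_port))] += length
    return dict(totals)
```

Summing a given peer's entries across all 7 counterpart endpoints then gives its total up/down usage regardless of machine placement.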

Ganglia gmetad failover

I want to know if it possible to have gmetad in a failover/replica scenario. My problem is the following:
I have 100 nodes that speak to each other over multicast and sync their gmond info. I have a separate machine running gmetad (let's call it master1) that polls metrics from various gmonds (so far so good).
Now I want to be sure that if master1 dies, I will have a second gmetad (master2) with the same data. So I configured a second gmetad that reads the same gmonds. If master1 dies and comes back up after (let's say) 3 days, is there any way to get all the missed data from master2 and have a complete timeline in master1?
If there is no way to do that, can I use an NFS directory and point both the gmetads to write the rrds on the same directory?
If you are working in a multicast environment, your rrd files will be saved in multiple places. So if you want master1 to have the complete timeline data, you can back up the rrds and restart the gmond and gmetad processes; Ganglia will then copy all the rrds from the multicast nodes again.
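For reference, pointing master2 at the same gmonds is just a matter of giving it the same data_source line as master1; a sketch of what both gmetad.conf files might contain (cluster name, hostnames, and paths are placeholders):

```
# /etc/ganglia/gmetad.conf on both master1 and master2
data_source "my-cluster" 15 node01:8649 node02:8649
rrd_rootdir "/var/lib/ganglia/rrds"
```

Note that if both daemons wrote to one shared NFS rrd_rootdir at the same time, the concurrent writes could corrupt the rrds; the backup-and-restart approach above avoids that.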

Redis Read-Replicas On Web Servers

I am currently developing a system that makes heavy use of redis for a series of web services.
One of the key criteria of this system is fast responses.
At present the layout (ignoring load balancers etc) is as follows:
2 x Front End Play Framework 2.x Servers
2 x Job Handling/Persistence Play Framework 2.x Servers
1 x MySQL Server
2 x Redis Servers, 1 master, 1 slave
In this setup, redis serves 2 tasks - as a shared cache and also as a message bus.
Currently the front end servers host a service which interacts in its entirety with Redis.
The front end servers try to balance reads across the pool of read servers (currently the master and 1 slave), but being Redis they need to make their writes to the master server. They handle cache updates etc by sending messages on the queues, which are picked up by the job handling servers.
The job handling servers do blocking listens (BLPOP) to the Redis write server and process tasks when necessary. They have the only connection to MySQL.
At present the read replica server is a dedicated server - more there to be able to switch it to write master if the current master fails.
I was thinking of putting a read replica slave of redis on each of the front end servers which means that read latency would be even less, and writes (messages for queues) get pushed to the write server on a separate connection.
If I need to scale, I could just add more front end servers with read slaves.
It sounds like a win/win to me as even if the write server temporarily drops out, the front end servers can still read data at least from their local slave and act accordingly.
Can anyone think of reasons why this might not be such a good idea?
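The read-local/write-master split described in the question can be sketched as a thin routing wrapper. The class name, method names, and queue key below are illustrative assumptions; the two client arguments are expected to expose a redis-py-like interface (get/set/rpush), so any such objects will do:

```python
class ReadLocalWriteMaster:
    """Route reads to a local replica, writes and queue pushes to the master."""

    def __init__(self, local_replica, master, fallback_to_master=True):
        self.local = local_replica
        self.master = master
        self.fallback = fallback_to_master

    def get(self, key):
        try:
            return self.local.get(key)       # low-latency read from local slave
        except ConnectionError:
            if not self.fallback:
                raise
            return self.master.get(key)      # replica down: fall back to master

    def set(self, key, value):
        return self.master.set(key, value)   # writes always go to the master

    def enqueue_job(self, queue, payload):
        # Messages for the job-handling servers go on a master-side list,
        # matching the BLPOP consumers described in the question.
        return self.master.rpush(queue, payload)
```

In this setup each front-end server would construct the wrapper with a client pointed at localhost for the replica and one pointed at the master for writes; scaling out then means adding another front-end with its own local slave, as the question proposes.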
I understand the advantages of this approach... but consider this: what happens when you need to scale just one component (i.e. FE server or Redis) but not the other? For example, more traffic could mean you'll need more app servers to handle it while the Redises will be considerably less loaded. On the other hand, if your dataset grows and/or more load is put on the Redises - you'll need to scale these and not the app.
The design should fit your requirements, and the simplicity of your suggested setup has a definite appeal (i.e. to scale, just add another identical lego block), but from my meager experience, anything that sounds too good to be true usually is. In the longer run, even if this works for you now, you may find yourself in a jam down the road. My advice: separate your Redis(es) from your app servers, deal with and/or work around the network, and make sure each layer is available and scalable in its own right.

Is simultaneous I2C, SPI and USB communication between multiple MSP430s possible?

I have programmed a couple of MSP430x6xx microcontrollers to serve as masters for some I2C slave devices. One of the MSP430s transfers the data received from its I2C slaves to a PC using its built-in USB module. I want to extend this so that all microcontrollers can send data received from their respective I2C slaves to the PC over a common bus system. Would it be feasible to use SPI to transfer the data from all MSP430s to a single MSP430 master (already serving as I2C master and USB device simultaneously), which then transfers it to the PC? I would appreciate any other suggestions. Thanks
Yes, it is feasible, but you will have to write your firmware to handle this. On the PC you somehow have to identify which SPI/I2C slave the data came from. Your main MSP430xxx can do this by adding some kind of header to the data that carries the id of the slave device.
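The "header with a slave id" idea can be sketched as follows; the field layout (1-byte slave id, 2-byte payload length) is an assumption, and the real firmware would do the equivalent packing in C on the MSP430, with this parsing running on the PC side:

```python
import struct

HEADER = struct.Struct(">BH")  # 1-byte slave id, 2-byte length, big-endian

def frame(slave_id, payload):
    """Prepend a header so the PC knows which I2C/SPI slave sent the data."""
    return HEADER.pack(slave_id, len(payload)) + payload

def parse_frames(buffer):
    """Split a byte stream (as received over USB) into (slave_id, payload)."""
    frames = []
    offset = 0
    while offset + HEADER.size <= len(buffer):
        slave_id, length = HEADER.unpack_from(buffer, offset)
        offset += HEADER.size
        frames.append((slave_id, buffer[offset:offset + length]))
        offset += length
    return frames
```

The length field lets the PC reassemble frames even when several slaves' data is interleaved on the single USB connection.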

Question about Process communication over USB cable

I have some questions regarding communication over a USB cable in Linux, in a host-target device environment (USB 2.0). Please help, as we are stuck on the implementation below.
We have a host PC connected to a target device (Linux OS) through USB cable.
On the target device we need to spawn 3 or 4 child processes. [Using fork() or some equivalent system call]
All the child processes should communicate with the host PC independently through their own source and sink file descriptors.
As per our experimentation, one process communicates with the PC at a time, and then control is given to another process. But our requirement is simultaneous communication. We are not sure whether the USB driver (2.0/3.0) supports this.
Any pointers regarding this will be helpful.
Thank you.
-AD
As per our experimentation, one process communicates to the PC at a time then the control is given to another process.
This is how computers work. Only one thread at a time has control of a particular CPU - when it blocks for I/O or exhausts its quantum, control is given to another thread.
What do you need simultaneity for that you can't manage with sending data one after the other?
USB is a serial bus protocol with a single data bus, which means what you are looking for is not possible.
However, you can have 4 different USB communication pipes that provide different paths - just not simultaneously.
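Those pipes amount to multiplexing: each child process's data is tagged with a channel id on the target and sorted back out on the host, which gives the appearance of simultaneous streams over the single bus. A minimal sketch of the host-side demultiplexer (the channel ids and the chunk format are assumptions):

```python
from collections import defaultdict

def demultiplex(chunks):
    """Sort interleaved (channel_id, data) chunks, one channel per child
    process on the target, into per-channel streams on the host.

    The bus delivers the chunks strictly one after another, but each
    consumer only ever sees its own reassembled stream."""
    streams = defaultdict(bytes)
    for channel_id, data in chunks:
        streams[channel_id] += data
    return dict(streams)
```

With this pattern the 3-4 child processes each get a logical pipe while the underlying transfers remain sequential, which matches what the answers above describe.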