Spread Toolkit, how does it work?

I'm trying to read up on and understand how the Spread toolkit works. Maybe I'm not really good at understanding some of these documents.
How does a machine send a message out to other machines? Is there a central machine controlling and directing the messages? The document says that there are three link protocols (link/TCP/HOP), so if it's a ring protocol, does a broadcast mean that a message from one machine is passed from one machine to the next around the ring?
If most of the time I simply wish to broadcast messages to everyone, but in certain situations I wish to send messages to only a few machines, do I just join those machines to a group, or are there other ways to do it?
Thanks

Related

How can I know which instance of BizTalk has been used by a message?

We have an environment with three instances of BizTalk 2016 that are used randomly by messages.
I need to know which instance was used when I send a message. At the moment I can only tell when the message has had an error: the Windows Logs --> Application log says which computer was used.
But I'd also need to know which computer was used when the message was fine; in the 'Tracked Message Events' this information is not shown, and the flow of the message is displayed across the three instances.
Any idea how I could get this information?
The fact of the matter is that it could be any one of the three servers within your BizTalk Group. What you could do is track the properties and, in the send pipeline, promote the machine name. Although I'm not sure why you would need to do this.
Please remember that a send port can be triggered by any server with a particular host instance configured and running, and that retry attempts may even be done on other server(s) as long as the triggered host instance is running there.

Redirect NServiceBus message based on Endpoint availability

I'm new to NServiceBus, but currently using it with SQL Server Transport to send messages between three machines: one belongs to an endpoint called Server, and two belong to an endpoint called Agent. This is working as expected, with messages sent to the Agent endpoint distributed to one of the two machines via the default round-robin.
I now want to add a new endpoint called PriorityAgent with a different queue and two additional machines. While all endpoints use the same message type, I know where each message should be handled prior to sending it, so normally I can just choose the correct destination endpoint and the message will be processed accordingly.
However, I need to build in a special case: if all machines on the PriorityAgent endpoint are currently down, messages that ordinarily should be sent there should be sent to the Agent endpoint instead, so they can be processed without delay. On the other hand, if all machines on the Agent endpoint are currently down, any Agent messages should not be sent to PriorityAgent, they can simply wait for an Agent machine to return.
I've been researching the proper way to implement this and haven't seen many results. I imagine this isn't an unheard-of scenario, so my assumption is that I'm searching for the wrong things or thinking about this problem in the wrong way. Still, I came up with a couple of potential solutions:
Separately track heartbeats of PriorityAgent machines, and add a mutator or behavior to change the destination of outgoing PriorityAgent messages to the Agent endpoint if those heartbeats stop (roughly the idea sketched below).
Give PriorityAgent messages a short expiration, and somehow handle the expiration to redirect messages to the Agent endpoint. I'm not sure if this is actually possible.
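Something like this is what I have in mind for the first option - a minimal Python sketch rather than actual NServiceBus code; the machine names, the heartbeat registry, and the timeout are all made up for illustration:

```python
import time

# Hypothetical heartbeat registry: machine name -> last heartbeat timestamp.
# In a real system these would be updated as heartbeat messages arrive.
last_heartbeat = {"priority-agent-1": 0.0, "priority-agent-2": 0.0}

HEARTBEAT_TIMEOUT = 30.0  # seconds of silence before a machine counts as down

def priority_agent_available() -> bool:
    """True if at least one PriorityAgent machine has a recent heartbeat."""
    now = time.time()
    return any(now - ts < HEARTBEAT_TIMEOUT for ts in last_heartbeat.values())

def destination_for(message_kind: str) -> str:
    """Pick a queue: PriorityAgent work falls back to Agent when every
    PriorityAgent machine appears down; Agent work never fails over."""
    if message_kind == "priority" and not priority_agent_available():
        return "Agent"
    return "PriorityAgent" if message_kind == "priority" else "Agent"
```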
Is one of these solutions on the right track, or am I off-base entirely?
You have not seen many do this because it's considered an antipattern. Or rather one of two antipatterns.
1) Either you are sending a command, in which case the RECEIVER of the command defines the contract. Why are you sending a command defined by PriorityAgent to Agent? There should be no coupling there. A command belongs to ONE logical endpoint/queue.
2) Or you are publishing an event defined by whoever publishes, with both PriorityAgent and Agent as subscribers. The two subscribers should be 100% autonomous and share nothing. Checking heartbeats/sharing info between these two logically separate entities is a bad thing. Why have them separate in the first place then? If they know each other's "dirty secrets", they should be the same thing.
If your primary concern is that the PriorityAgent messages will not be handled if the machines hosting it are down, and want to use the machines hosting Agent as a backup, simply deploy PriorityAgent there as well. One machine can run more than one endpoint just fine.
That way you can leverage the additional machines, but don't have to get dirty with sending the same command to a different logical endpoint or coupling two different logical endpoints together through some back channel.
I'm Dennis van der Stelt and I work for Particular Software, makers of NServiceBus.
From what I understand, both PriorityAgent and Agent are already scaled out over multiple machines? Then they both work according to the competing consumers pattern. In other words, the machines behind an endpoint try to pick up messages from the same queue, and only one will win and start processing each message.
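Competing consumers in miniature - a plain Python sketch with a shared in-memory queue, not SQL Server Transport; the names are made up:

```python
import queue
import threading

work = queue.Queue()  # one logical queue, like the Agent endpoint's queue

def consumer(name: str) -> None:
    # Both consumers pull from the same queue; each message is
    # received by exactly one of them -- whoever gets there first.
    while True:
        message = work.get()
        if message is None:  # sentinel: shut down
            break
        print(f"{name} handles {message}")
        work.task_done()

threads = [threading.Thread(target=consumer, args=(f"agent-{i}",)) for i in (1, 2)]
for t in threads:
    t.start()
for n in range(6):
    work.put(f"message-{n}")
work.join()            # wait until all six messages are processed
for _ in threads:
    work.put(None)     # stop both consumers
```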
You're also talking about high availability. So when PriorityAgent goes down, another machine will pick it up. That's what I don't understand. Why fail over to Agent, which seems to me to be a logically different endpoint? If it is logically different, how can it handle PriorityAgent messages? If it can handle the same message, it seems logically the same endpoint. Then why make the difference between PriorityAgent and Agent?
Besides that, SQL Server has all kinds of features (like Always On) to make sure it does not (completely) go down. Why try to solve difficult scenarios with custom-built solutions, when SQL Server can already solve this for you?
Another scenario could be that PriorityAgent should handle priority cases. Something like preferred customers, or high-value customers. That is sometimes used when (for example) a lot of orders (read: messages) come in, but we want to deal with high-value customers sooner than regular customers. Due to the number of messages coming in, high-value customers would otherwise end up at the back of the queue, together with regular customers. A solution could be to publish these messages and have two different endpoints (with different queues) both subscribed to this message. Both receive each unique message, but check whether it's a message they should handle. The Agent will ignore high-value customers; the PriorityAgent will ignore regular customers.
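A minimal sketch of that publish-and-filter idea (plain Python, not NServiceBus; the high_value flag is a made-up predicate):

```python
# Every subscriber gets its own copy of each published event;
# each endpoint handles only the messages meant for it.
subscribers = []

def subscribe(handler):
    subscribers.append(handler)

def publish(order):
    for handler in subscribers:
        handler(order)

def agent(order):
    if not order["high_value"]:      # Agent ignores high-value customers
        print("Agent processes", order["id"])

def priority_agent(order):
    if order["high_value"]:          # PriorityAgent ignores regular customers
        print("PriorityAgent processes", order["id"])

subscribe(agent)
subscribe(priority_agent)
publish({"id": 1, "high_value": False})  # handled by Agent only
publish({"id": 2, "high_value": True})   # handled by PriorityAgent only
```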
These are some of the solutions available as standard messaging patterns, or infrastructural solutions to your issue. Again, it's not completely clear to me what it is you're looking for. If you'd like to continue the discussion, perhaps you can email support#particular.net and we can pick it up there.

Is GameKit's communication reliable with GKMatchSendDataReliable?

I'm working with GameKit.framework and I'm trying to create reliable communication between two iPhones.
I'm sending packets with the GKMatchSendDataReliable mode.
The documentation says:
GKMatchSendDataReliable
The data is sent continuously until it is successfully received by the intended recipients or the connection times out.
Reliable transmissions are delivered in the order they were sent. Use this when you need to guarantee delivery.
Available in iOS 4.1 and later. Declared in GKMatch.h.
I have experienced some problems on a bad WiFi connection. GameKit does not report the connection as lost, but some packets never arrive.
Can I count on a 100% reliable communication when using GKMatchSendDataReliable or is Apple just using fancy names for something they didn't implement?
My users also complain that some data may be accidentally lost during the game. I wrote a test app and figured out that GKMatchSendDataReliable is not really reliable. On a weak internet connection (e.g. EDGE), some packets are regularly lost without any error from the Game Center API.
So the only option is to add an extra transport layer for truly reliable delivery.
I wrote a simple lib for this purpose: RoUTP. It saves all sent messages until an acknowledgement for each is received, resends lost messages, and buffers received messages in case of a broken sequence.
In my tests the combination "RoUTP + GKMatchSendDataUnreliable" works even better than "RoUTP + GKMatchSendDataReliable" (and of course better than pure GKMatchSendDataReliable, which is not really reliable).
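To give an idea of the technique, here is a simplified Python sketch of such an ack/resend/reorder layer - the general approach, not RoUTP's actual code:

```python
class ReliableLayer:
    """Sketch of a reliability layer over an unreliable channel:
    number outgoing messages, keep them until acknowledged, resend on
    timeout, and reorder incoming messages by sequence number."""

    def __init__(self, send_raw):
        self.send_raw = send_raw  # e.g. wraps GKMatchSendDataUnreliable
        self.next_seq = 0         # next outgoing sequence number
        self.unacked = {}         # seq -> payload, kept until acked
        self.expected = 0         # next incoming seq we can deliver
        self.pending = {}         # out-of-order messages, by seq

    def send(self, payload):
        seq = self.next_seq
        self.next_seq += 1
        self.unacked[seq] = payload     # retain for possible resend
        self.send_raw((seq, payload))

    def on_ack(self, seq):
        self.unacked.pop(seq, None)     # peer got it; stop retaining it

    def resend_unacked(self):
        # call periodically (e.g. from a timer) to retry lost messages
        for seq, payload in self.unacked.items():
            self.send_raw((seq, payload))

    def on_receive(self, seq, payload, deliver, send_ack):
        send_ack(seq)                   # always ack, even duplicates
        if seq >= self.expected:
            self.pending.setdefault(seq, payload)
        # deliver any contiguous run starting at `expected`, in order
        while self.expected in self.pending:
            deliver(self.pending.pop(self.expected))
            self.expected += 1
```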
It's nearly 100% reliable, but maybe not what you need sometimes… For example, if you drop out of the network, all the data you sent via GKMatchSendDataReliable will still be delivered, in the order you sent it.
This is brilliant for turn-based games, for example, but if fast reactions are necessary, a client that drops out of the network won't just skip the missed packets: it will receive all the now-late packets until it catches up to real time again.
The only case in which GKMatchSendDataReliable doesn't deliver the data is a connection timeout.
I think this would also be the case when you close the app.

Exactly how many users can the BlazeDS messaging service support? To support more users, what do we need to do (polling)?

I designed an online trading application, which uses BlazeDS & Jetty.
In it I used AMF long-polling as the channel, with the following parameters.
Here is the problem: each message is not reaching all the users who are connected; messages are missed by a few users (300 receiving out of 600)...
What do we need to do to provide instant messages to everyone online?
Please help me with this one.
Your question is too generic; it's not possible to give an answer because it depends on too many things: network, size of the messages, your system architecture, etc. My suggestion is to invest heavily in reading the BlazeDS developer guide and to turn the debug messages on (there is a lot of useful information displayed by BlazeDS). It would also help to study the BlazeDS source code.
In the case of AMF long-polling, the request is parked on the server; if too many requests are parked at a time, they will consume all of the server's available threads, and the next client won't be able to connect.
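To make the thread math concrete, here is a generic sketch of blocking long-polling (plain Python, not BlazeDS internals; the timeout and pool size are made-up figures):

```python
import queue

# One parked long-poll request ties up one worker thread in a blocking
# (pre-Servlet-3.0 style) server: the handler just waits for a message.
def handle_long_poll(client_inbox: "queue.Queue[str]", timeout: float = 30.0) -> str:
    try:
        # The worker thread blocks here for up to `timeout` seconds.
        return client_inbox.get(timeout=timeout)
    except queue.Empty:
        return ""  # empty response; the client immediately re-polls

# With a fixed pool of, say, 500 worker threads, 600 parked clients
# cannot all be served: requests beyond the pool size queue up or fail.
```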
In your case I am assuming the message size is not very big. The solution can then be one of the following:
Increase the number of available threads. For that, you can run multiple server instances and distribute your clients over them.
Make use of LCDS.
You don't get that problem in LCDS, as it makes use of NIO endpoints that don't block a thread. I have come to know that this thread restriction is also not a problem with Servlet 3.0, in which case you can support more clients with BlazeDS itself.

What is an MQ and why do I want to use it?

On my team at work, we use IBM MQ a lot for cross-application communication. I've seen lately on Hacker News and other places mentions of other MQ technologies like RabbitMQ. I have a basic understanding of what an MQ is (a commonly checked place to put and get messages), but what I want to know is: what exactly is it good at? How will I know where and when I want to use it? Why not just stick with more rudimentary forms of interprocess messaging?
All the explanations so far are accurate and to the point - but they might be missing something: one of the main benefits of message queueing, resilience.
Imagine this: you need to communicate with two or three other systems. A common approach these days is web services, which is fine if you need an answer right away.
However, web services can be down and unavailable - what do you do then? Putting your message into a message queue (which has a component on your machine/server, too) typically works in this scenario: your message just doesn't get delivered and processed right now, but it will be later on, when the other side of the service comes back online.
So in many cases, using message queues to connect disparate systems is a more reliable, more robust way of sending messages back and forth. It doesn't work well for everything (if you want to know the current stock price for MSFT, putting that request into a queue might not be the best of ideas) - but in lots of cases, like putting an order into your supplier's message queue, it works really well and can help ease some of the reliability issues with other technologies.
MQ stands for message queue.
It's an abstraction layer that allows multiple processes (likely on different machines) to communicate via various models (e.g., point-to-point, publish/subscribe, etc.). Depending on the implementation, it can be configured for things like guaranteed reliability, error reporting, security, discovery, performance, etc.
You can do all this manually with sockets, but it's very difficult.
For example: suppose you want two processes to communicate, but one of them can die in the middle and later reconnect. How would you ensure that the interim messages were not lost? MQ solutions can do that for you.
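A toy version of that guarantee, assuming a generic broker rather than any particular product's API: the queue keeps each message "in flight" until the consumer acknowledges it, so a consumer that dies mid-processing gets the message again after reconnecting.

```python
import uuid

class AckQueue:
    """Sketch of at-least-once delivery: messages stay 'in flight'
    until acked, and unacked messages are redelivered."""

    def __init__(self):
        self.ready = []       # messages waiting to be consumed
        self.in_flight = {}   # delivery_id -> message, awaiting ack

    def put(self, message):
        self.ready.append(message)

    def get(self):
        message = self.ready.pop(0)
        delivery_id = str(uuid.uuid4())
        self.in_flight[delivery_id] = message
        return delivery_id, message

    def ack(self, delivery_id):
        del self.in_flight[delivery_id]   # done; safe to forget

    def recover(self):
        # consumer died without acking: make its messages ready again
        self.ready = list(self.in_flight.values()) + self.ready
        self.in_flight.clear()

q = AckQueue()
q.put("withdraw $20")
tag, msg = q.get()   # consumer crashes before calling q.ack(tag)...
q.recover()          # ...so the message is redelivered, not lost
```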
Message queueing systems are supposed to give you several bonuses. Among the most important ones are monitoring and transactional behavior.
Transactional design is important if you want to be immune to failures, such as a power failure. Imagine that you want to notify a bank system of an ATM money withdrawal, and it has to be done exactly once per request, no matter which servers failed temporarily in the middle. MQ systems allow you to coordinate transactions across multiple databases, MQ, and other systems.
Needless to say, such systems are very slow compared to named pipes, TCP, or other non-transactional tools. If high performance is required, you would not allow your messages to be written through to disk. That, however, complicates your design: achieving communication that is both reliable AND fast pushes the designer into really non-trivial tricks.
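One common building block for that exactly-once behavior is consumer-side deduplication - a sketch under the assumption that the dedup record and the business change are committed together; the names are made up:

```python
processed_ids = set()  # in practice, a table in the same database transaction

def handle_withdrawal(request_id: str, amount: int) -> None:
    # Redelivery after a crash means the same request can arrive twice;
    # recording the request id makes processing it idempotent.
    if request_id in processed_ids:
        return  # duplicate delivery: already handled, do nothing
    debit_account(amount)          # hypothetical business operation
    processed_ids.add(request_id)  # commit together with the debit

def debit_account(amount: int) -> None:
    print(f"debited {amount}")
```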
MQ systems normally allow users to watch the queue contents, write plugins, clear queues, etc.
MQ simply stands for Message Queue.
You would use one when you need to reliably send an inter-process/cross-platform/cross-application message that isn't time-dependent.
The message queue receives the message, places it in the proper queue, and waits for the application to retrieve the message when it's ready.
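For instance, with RabbitMQ (mentioned in the question) the put/retrieve cycle looks roughly like this using the pika Python client; the queue name and message are made up:

```python
import pika

# Producer: put a message on a named queue.
connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()
channel.queue_declare(queue="orders", durable=True)  # survive broker restart
channel.basic_publish(
    exchange="",
    routing_key="orders",               # default exchange routes by queue name
    body=b"order #42",
    properties=pika.BasicProperties(delivery_mode=2),  # persistent message
)
connection.close()

# Consumer (typically a separate process): retrieve when ready.
connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()
channel.queue_declare(queue="orders", durable=True)
method, properties, body = channel.basic_get(queue="orders", auto_ack=False)
if method:
    print("got:", body)
    channel.basic_ack(method.delivery_tag)  # ack only after successful handling
connection.close()
```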
Reference: "web services can be down and not available - what do you do then?"
As an extension to that: what if your local network and your local PC are down as well? While you wait for the system to recover, the dependent systems deployed elsewhere that are waiting for that data need to see an alternative data stream.
Otherwise, the response might not be a good enough "real-time" response for today's, and very soon the future's, Internet of Things (IoT) requirements.
If you want truly parallel, non-volatile storage of various FIFO streams (at least at some point along the signal chain), use an FPGA and FRAM memory. FRAM runs at clock speed, and FPGA devices can be reprogrammed on the fly, adding and removing however many independent parallel data streams are needed (within established constraints, of course).