How can I tell if my SignalR Backplane (Redis) is really working as it should?

I'm currently playing with SignalR 2.0.3, scaling out with a backplane that utilizes Redis for Windows:
http://msopentech.com/blog/2013/04/22/redis-on-windows-stable-and-reliable/
I've integrated the appropriate SignalR.Redis package in VS.
I made the following changes to my Startup class:
public void Configuration(IAppBuilder app)
{
    // Configure the Redis backplane before mapping SignalR.
    GlobalHost.DependencyResolver.UseRedis(
        server: "localhost",
        port: 6379,
        password: string.Empty,
        eventKey: "BroadcasterExample"
    );
    var hubConfiguration = new HubConfiguration();
    app.MapSignalR(hubConfiguration);
}
It builds fine.
My clients appear to connect OK.
I can send notifications between client and server and vice versa.
From the Redis client, I can enter:
get BroadcasterExample
which returns: "3"
I assume that things are working, but...
A couple of questions:
1) How can I tell that it is actually working?
2) What can I examine on the Redis server (through the Redis client)?
3) What is getting stored in what data structures (key/value pairs, lists, hashes, sets)?
I would like a little more in-depth view of what is going on.
I've looked at the commands on: http://redis.io/commands
Nothing jumps out at me that will help me map what's really going on.
Can someone point me in the right direction here?
Thanks,
JohnB

1) I believe you have already verified that it was working when you ran "get BroadcasterExample" and it returned "3". BroadcasterExample is the name of the channel that SignalR will send messages over and I believe the 3 represents the number of messages that have been processed. As you send more messages with SignalR you should see that number increment.
2) A good way to tell that things are working is to subscribe to the BroadcasterExample channel with the redis client and watch the messages come through. From the client, run:
subscribe BroadcasterExample
3) SignalR will probably just store that one key, the "BroadcasterExample" key. SignalR is really just using the publish/subscribe functionality of Redis, not storing any data.
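If you'd rather watch from code than from the Redis client, a small sketch along these lines should also work (this assumes the StackExchange.Redis client package, which is not what SignalR 2.0.3 uses internally; the payloads are binary-serialized, so you will mostly just confirm that traffic is flowing):
using System;
using StackExchange.Redis;

class BackplaneWatcher
{
    static void Main()
    {
        // Connect to the same Redis instance the backplane points at.
        var redis = ConnectionMultiplexer.Connect("localhost:6379");
        var sub = redis.GetSubscriber();

        // Watch the SignalR eventKey channel; each backplane message shows up here.
        sub.Subscribe("BroadcasterExample", (channel, message) =>
            Console.WriteLine("Backplane message on {0}: {1} bytes", channel, ((byte[])message).Length));

        Console.WriteLine("Watching. Press Enter to quit.");
        Console.ReadLine();
    }
}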

The answer from jaggedaz has useful info. I would also add that you can run a different type of test quite quickly by hosting your application twice, under two different ports, using IIS Express. If you then connect two browser windows to these two different instances and start exchanging messages (like broadcasts to All), you will see them flow across both clients, which is possible only when the backplane is actually working.
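For example (assuming the site lives at C:\MySignalRApp; adjust the path to your project), you can launch the two instances from a command prompt with:
iisexpress.exe /path:C:\MySignalRApp /port:8080
iisexpress.exe /path:C:\MySignalRApp /port:8081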

Related

How do you handle newcomers efficiently in WebRTC signaling?

Signaling is not addressed by WebRTC (even if we do have JSEP as a starting point), but from what I understand, it works this way (a rough sketch of the server-side bookkeeping follows the list):
client tells the server it's available at X
server holds that information and maps it to an identifier
other client comes and sends an identifier to get connection information from the first client
other client uses it to create its own connection information and sends it to the server
server sends this to the first client
both clients can now talk
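To make that flow concrete, here is a minimal sketch of the server-side bookkeeping it implies (hypothetical names; the transport that carries these payloads, WebSocket, long polling, etc., is deliberately left out):
using System.Collections.Concurrent;

// Hypothetical in-memory signaling registry: maps a client identifier to the
// connection information (SDP offer/answer, ICE candidates) that client published.
public class SignalingRegistry
{
    private readonly ConcurrentDictionary<string, string> _offers =
        new ConcurrentDictionary<string, string>();

    // Steps 1-2: a client announces itself and the server stores its info under an id.
    public void Announce(string clientId, string connectionInfo)
    {
        _offers[clientId] = connectionInfo;
    }

    // Step 3: another client asks for the first client's connection information.
    public string Lookup(string clientId)
    {
        string info;
        return _offers.TryGetValue(clientId, out info) ? info : null;
    }
}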
This is all nice and well, but what happens if a 3rd client arrives?
You have to redo the whole thing, which supposes the first two clients are STILL connected to the server, waiting for a 3rd client to signal itself, so the exchange process can start again and they can get the 3rd client's connection information.
So does it mean you are required to have some sort of permanent link to the server for each client (long polling, WebSocket, etc.)? If yes, is there a way to do that efficiently?
Because I don't see the point of having WebRTC if I have to set up Node.js or Tornado and make it scale to the number of my users. It doesn't sound very P2P-ish to me.
Please tell me I missed something.
What about a chat system? Do you really need to keep a permanent link to the server for each client? Of course, because otherwise you have no way of keeping track of a user's status. This "permanent" link can be done different ways: you mentioned WebSocket and long polling, but simple periodic XHR polling works too (although this will affect the UX, depending on the interval).
So view it like a chat system, except that the media stream is P2P for reduced latency. Once a P2P WebRTC connection is established, the server may die and, of course, the P2P connection will be kept between the two clients. What I mean is: both users may always block your server once the P2P connection is established and still be connected together in the wild Internets.
Understand me well: once the P2P connection is established, your server will not be doing any more WebRTC signalling. The connection is only needed to keep track of the statuses.
So it depends on your application. If you want to keep the statuses of users and make them visible to others, then you're in the same situation as a chat system: you need to keep a certain link, somehow, to make sure their statuses are synced. Otherwise, your server exists to connect them together and is not needed afterwards. An example of the latter situation: a user goes to a webpage, the webpage provides him with a new room URL, the user shares this URL with another peer by some other means, the other peer joins the room, the server connects them together (manages the WebRTC signalling) and then forgets them. They are now connected until one of them breaks the link. Just like this reference app.
Instead of a central server keeping one connection per client, a mesh network could also be considered, albeit difficult to implement.

WCF Session Instancing Mode Hosting Issue

I am facing a situation regarding hosting WCF in Session instancing mode. I am encapsulating the actual situation and proposing an example to replicate it, as below.
The service to be hosted is "MyService". I am using a Windows service to host it, with an HTTP endpoint.
It will need to support 500 concurrent sessions. (Singleton and PerCall cannot be used because the contract is workflow based: Login, Function1, Function2, Logout.)
I have 4 servers, each with the hardware capability of supporting 200 concurrent sessions.
So I configured the service on one server as a router (ServiceHost S = new ServiceHost(typeof(RouterService))) with a hosting path such as "http://myserver/MyService". I have set up a simple load balancing mechanism and applied the router table to redirect incoming requests to the other three servers where the actual service copies are hosted ("http://myserver/MyService1", "http://myserver/MyService2", "http://myserver/MyService3").
It is still not working. As soon as hits go above 200, communication errors start. I suppose this is because when 500 concurrent calls are made, the router (capacity 200) is also required to stay connected to the client along with the actual service server (in session call mode). Is my thinking correct?
My questions are:
1) Is my approach correct, or conceptually flawed? Should I ask the hardware team to set up NLB?
2) Should we redesign the contract specifically to ensure that the requests can somehow be made PerCall?
3) Someone suggested that such systems should be hosted in the cloud (Windows Azure). I will need to look at the costs involved, but is that correct?
4) What are the best practices for hosting WCF to handle session-based calls?
I understand that my question is complex and there would not be one "Correct" answer...but any help and insight will be really appreciated.
Thanks
"Should I ask the Hardware team to set up NLB..." as per you & "Sticky IP cluster" by Shiraz are the closest one can get to host the given scnerio.
The thing is that WCF sessions are transport based.hence we cannot store these "sessions" on a state server/db like a traditional aspnet.
WCF4.0 has come up with new bindings such as NetTcpContextBinding, BasicHttpContextBinding, WSHttpContextBinding which could help context re-creation on cross machine environment.But I have no production implementation knowledge to provide example.
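Just to show what the wiring looks like (a minimal self-hosting sketch with made-up names, not production guidance): a context binding is added like any other binding, and its point is that the logical session is identified by a context token (an HTTP cookie by default for BasicHttpContextBinding) rather than by the transport connection, while the session state itself would still need to live somewhere shared such as a DB or distributed cache.
using System;
using System.ServiceModel;

[ServiceContract]
public interface ILoginWorkflow
{
    [OperationContract]
    string Login(string user);
}

public class LoginWorkflow : ILoginWorkflow
{
    public string Login(string user) { return "session started for " + user; }
}

class Program
{
    static void Main()
    {
        using (var host = new ServiceHost(typeof(LoginWorkflow),
            new Uri("http://localhost:8000/MyService")))
        {
            // The context binding flows a context token instead of tying the
            // "session" to one transport connection.
            host.AddServiceEndpoint(typeof(ILoginWorkflow), new BasicHttpContextBinding(), "");
            host.Open();
            Console.WriteLine("Listening. Press Enter to exit.");
            Console.ReadLine();
        }
    }
}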
This article should help you to know more...
There are three separate but connected issues here:
Your design requires that you maintain state between calls
You are dependent upon getting to the same server each time (since you store state in memory)
You have a limit of 200 connections per server
A solution where you are dependent on coming back to the same server will not work (well) on Windows Azure.
You could implement a sticky IP cluster; that would solve most of your problems, but it would not guarantee that no more than 200 connections land on one server. For the most part this would be OK.
You could store the state in AppFabric Cache; then it would not matter which server you returned to.
You could redesign your system so that all state is stored in the database.

Dynamic server discovery list

I'd like to create a web service that an application server can contact to add itself to a list of servers implementing the application. Clients could then contact the service to get a list of servers. Something similar to how minecraft's heartbeats work for adding your server to the main server list.
I could implement it myself pretty easily, but I'm hoping someone has already created something like this.
Advanced features would be useful. Things like:
Allowing a client to perform queries on application-specific properties like the number of users currently connected to the server
Distributing the server list across more than one machine
Timing out a server's entry in the list if it hasn't sent a heartbeat within some amount of time
Does anyone know of a service like this? I know there are open protocols and servers for doing local-LAN service discovery, but this would be a WAN service.
The protocols I could find that had any relevance to your intended application are these:
XRDS (eXtensible Resource Descriptor Sequence).
XMPP Service Discovery protocol.
The XRDS documentation is obtuse, but you may be able to push service descriptions in XML format. The service type specification might be generic, but I get a headache from trying to decipher committee-speak.
The XMPP Service Discovery protocol (part of the protocol Formerly Known As Jabber) also looked promising, but it seems that even though you could push your service description, they expect it to be one of the services mentioned on this list. Extending it would make it nonstandard.
Finally, I found something called seap (SErvice Announcement Protocol). It's old, it's rickety, the source may be proprietary, it's written in C and Perl, it's a kludge, but it seems to do what you want, kind of.
It seems like pushing a service announcement pulse is such an application-specific and trivial problem, that almost nobody has considered solving the general case.
My advice? Read the protocols and sources mentioned above for inspiration (I'd start with seap), and then write, implement, and publish a generic (probably xml-based) protocol yourself. All the existing ones seem to be either application-specific, incomprehensible, or a kludge.
Basically, you can write it yourself, though I am not aware of anything like it being publicly available (I wrote one over 10 years ago, but for a company). You need:
database (TableCols: auto-counter, svr_name, svr_ip, check_in_time, any-other-data)
code to receive a heartbeat (http://<your-app.com>?svr_name=XYZ&svr_ip=P.Q.R.S)
code to list out servers within certain check_in_time
code to do some housecleaning once a while (eg: purge old records)
To send a heartbeat out, you only need to make an http:// call; on Linux use wget with crontab, on Windows use wget.exe with Task Scheduler. (A rough sketch of the receiving side is below.)
It is application specific, so even if you wrote one yourself, others can't use it without modifying the source code.
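For illustration only, here is a rough in-memory sketch of the receiving/listing side (all names and the port are made up; a real version would write to the database table above instead of a dictionary):
using System;
using System.Collections.Concurrent;
using System.Linq;
using System.Net;
using System.Text;

// Hypothetical heartbeat registry: receives http://localhost:8080/?svr_name=XYZ&svr_ip=P.Q.R.S
// and answers with the servers that have checked in during the last 5 minutes.
class HeartbeatRegistry
{
    static readonly ConcurrentDictionary<string, Tuple<string, DateTime>> Servers =
        new ConcurrentDictionary<string, Tuple<string, DateTime>>();

    static void Main()
    {
        var listener = new HttpListener();
        listener.Prefixes.Add("http://localhost:8080/");
        listener.Start();
        while (true)
        {
            var ctx = listener.GetContext();
            var name = ctx.Request.QueryString["svr_name"];
            var ip = ctx.Request.QueryString["svr_ip"] ?? ctx.Request.RemoteEndPoint.Address.ToString();
            if (name != null)
                Servers[name] = Tuple.Create(ip, DateTime.UtcNow);   // record the check-in time

            // Housecleaning on read: only list servers seen within the time-out window.
            var live = Servers.Where(s => s.Value.Item2 > DateTime.UtcNow.AddMinutes(-5))
                              .Select(s => s.Key + "=" + s.Value.Item1);
            var body = Encoding.UTF8.GetBytes(string.Join("\n", live));
            ctx.Response.OutputStream.Write(body, 0, body.Length);
            ctx.Response.Close();
        }
    }
}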

client to server communication in VB.net

I made some code in VB.NET which checks whether a certain process is running and returns a 1 if it is, or a 0 if it isn't. Now I want it to send a packet to my server, or something similar, which would log the IP of the client.
What would be the easiest way to approach this?
There are a lot of different solutions to this task. The first that comes to my mind is WCF, maybe the easiest one, as you do not have to think about opening ports, establishing connections, parsing the raw socket input and so on.
Here is one more link:
Introducing Windows Communication Foundation in .NET Framework 4
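To make that concrete, here is a minimal self-hosted WCF sketch in C# (the VB.NET equivalent is analogous; the contract, address and names are made up). The client calls Report with its 1/0 result, and the service reads the caller's address from the incoming message properties:
using System;
using System.ServiceModel;
using System.ServiceModel.Channels;

[ServiceContract]
public interface IProcessReport
{
    [OperationContract]
    void Report(int isRunning);   // 1 = process found, 0 = not found
}

public class ProcessReportService : IProcessReport
{
    public void Report(int isRunning)
    {
        // Read the caller's address from the incoming message properties.
        var endpoint = (RemoteEndpointMessageProperty)
            OperationContext.Current.IncomingMessageProperties[RemoteEndpointMessageProperty.Name];
        Console.WriteLine("{0} reported {1} at {2}", endpoint.Address, isRunning, DateTime.UtcNow);
    }
}

class Program
{
    static void Main()
    {
        using (var host = new ServiceHost(typeof(ProcessReportService)))
        {
            host.AddServiceEndpoint(typeof(IProcessReport),
                new BasicHttpBinding(), "http://localhost:8733/ProcessReport");
            host.Open();
            Console.WriteLine("Listening. Press Enter to stop.");
            Console.ReadLine();
        }
    }
}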

Real-time application newbie - Node.JS + Redis or RabbitMQ -> client/server how?

I am a newbie to real-time application development and am trying to wrap my head around the myriad options out there. I have read as many blog posts, notes and essays out there that people have been kind enough to share. Yet, a simple problem seems unanswered in my tiny brain. I thought a number of other people might have the same issues, so I might as well sign up and post here on SO. Here goes:
I am building a tiny real-time app which is asynchronous chat + another fun feature. I boiled my choices down to the following two options:
LAMP + RabbitMQ
Node.JS + Redis + Pub-Sub
I believe that I get the basics to start learning and building this out. However, my (seriously n00b) questions are:
How do I communicate with the end-user -> Client to/from Server in both of those? Would that be simple Javascript long/infinite polling?
Of the two, which might be more efficient to build out and manage from a single Slice (assuming 100 - 1,000 users)?
Should I just build everything out with jQuery in the 'old school' paradigm and then identify which stack might make more sense? Just so that I can get the product fleshed out as a prototype and then 'optimize' it. Or is writing in one over the other more than mere optimization? ( I feel so, but I am not 100% on this personally )
I hope this isn't a crazy question and won't get flamed right away. Would love some constructive feedback, love this community!
Thank you.
Architecturally, both of your choices are the same as storing data in an Oracle database server for another application to retrieve.
Both the RabbitMQ and the Redis solution require your apps to connect to an intermediary server that handles the data communications. Redis is most like Oracle, because it can be used simply as a persistent database with a network API. But RabbitMQ is a little different because the MQ Broker is not really responsible for persisting data. If you configure it right and use the right options when publishing a message, then RabbitMQ will actually persist the data for you but you can't get the data out except as part of the normal message queueing process. In other words, RabbitMQ is for communicating messages and only offers persistence as a way of recovering from network problems or system crashes.
I would suggest using RabbitMQ and whatever programming languages you are already familiar with. Since the M in LAMP is usually interpreted as MySQL, this means that you would either not use MySQL at all, or only use it for long term storage of data, not for the realtime communications.
The RabbitMQ site has a huge amount of documentation about building apps with AMQP. I suggest that after you install RabbitMQ, you read through the docs for rabbitmqctl and then create a vhost to experiment in. That way it is easy to clean up your experiments without resetting everything. I also suggest using only topic exchanges because you can emulate the behavior of direct and fanout exchanges by using wildcards in the routing_key.
Remember, you only publish messages to exchanges, and you only receive messages from queues. The exchange is responsible for pattern matching the message's routing_key to the queue's binding_key to determine which queues should receive a copy of the message. It is worthwhile learning the whole AMQP model even if you only plan to send messages to one queue with the same name as the routing_key.
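To illustrate that exchange/queue/binding model, here is a small sketch using the RabbitMQ .NET client (the names are made up, and the same concepts apply from whichever AMQP client library your language offers):
using System.Text;
using RabbitMQ.Client;

class AmqpTopicDemo
{
    static void Main()
    {
        var factory = new ConnectionFactory { HostName = "localhost" };   // assumed local broker, default vhost
        using (var connection = factory.CreateConnection())
        using (var channel = connection.CreateModel())
        {
            // You publish to an exchange, never to a queue directly.
            channel.ExchangeDeclare("chat-exchange", "topic", durable: false, autoDelete: false, arguments: null);

            // You consume from a queue; the binding_key decides what the queue receives.
            channel.QueueDeclare("room1-queue", durable: false, exclusive: false, autoDelete: false, arguments: null);
            channel.QueueBind("room1-queue", "chat-exchange", "chat.*", null);   // wildcard binding

            // The exchange matches this routing_key against the binding above.
            var body = Encoding.UTF8.GetBytes("hello room 1");
            channel.BasicPublish("chat-exchange", "chat.room1", null, body);
        }
    }
}
A consumer bound to room1-queue receives the message because the routing_key "chat.room1" matches the binding_key "chat.*".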
If you are building your client in the browser, and you want to build a prototype, then you should consider just using XHR today, and then move to something like Kamaloka-js, a pure JavaScript implementation of AMQP (the AMQ Protocol), the standard protocol used to communicate with a RabbitMQ message broker. In other words, build it with what you know today, and then speed it up later with something (AMQP) that has a long-term future in your toolbox.
Should I just build everything out with jQuery in the 'old school' paradigm and then identify which stack might make more sense? Just so that I can get the product fleshed out as a prototype and then 'optimize' it. Or is writing in one over the other more than mere optimization? ( I feel so, but I am not 100% on this personally )
This is usually called RAD (rapid application design/development) and it is what I would recommend right now. This lets you build the proof of concept that you can use to work off of later to get what you want to happen.
As for how to talk to the clients from the server, and vice versa, have you read at all on websockets?
Given the choice between LAMP or event-based programming, for what you're suggesting, I would tell you to go with the event-based programming, so Node.js. But that's just one man's opinion.
Well,
LAMP - Apache creates a new process for every request. RabbitMQ can be useful, with many features.
Node.js - uses a single process to handle all requests asynchronously with the help of an event loop, so there is no extra process-creation overhead like Apache.
For an asynchronous chat application,
socket.io + Node.js + Redis pub-sub is the best stack.
I have already implemented real-time notifications using the above stack.