I want to create something similar to the Twitter streaming API using WCF. We need to stream data to clients as quickly as possible (within 1/4 second). Our clients are diverse (Java, C++, etc.) and we're a .NET shop. Does anyone know if WCF can do this?
Check out the Streaming Data section here http://msdn.microsoft.com/en-us/library/ms733742.aspx
I haven't tested this, but here's an example: http://shevaspace.blogspot.com/2009/01/streaming-media-content-over-wcf.html
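Since the clients are written in many languages, the simplest cross-platform approach is to push newline-delimited JSON over a plain HTTP stream (which WCF can produce in streamed transfer mode, and which any client can parse with a JSON library and a line reader). A minimal sketch of that framing, in JavaScript purely for illustration; the message shape is invented:

```javascript
// Frame messages as newline-delimited JSON (NDJSON): one JSON object per line.
// Any client (Java, C++, ...) can consume this without a WCF-specific stack.
function encodeFrame(message) {
  return JSON.stringify(message) + '\n';
}

// Decode a received buffer back into messages (assumes complete lines;
// a real consumer also buffers partial lines split across network reads).
function decodeFrames(buffer) {
  return buffer
    .split('\n')
    .filter((line) => line.length > 0)
    .map((line) => JSON.parse(line));
}
```

The appeal of this format is that it keeps the wire protocol independent of WCF, so the .NET server and the Java/C++ clients only have to agree on JSON and a delimiter.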
I'm new to Redis. I'm trying to find out whether we can build a REST API for Redis cache elements so that they can be consumed by different clients (Node, C#, etc.). Is it possible to do it? If so, can I get some guidance?
You can use Webdis, which gives you a REST-oriented interface that wraps around Redis. Because it is REST-based, you can connect to it from microservices written in just about any language.
It supports most of the major Redis features.
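Webdis maps Redis commands onto URLs: the command and its arguments become path segments, and the reply comes back as JSON keyed by the command name. A hedged sketch of client helpers, assuming Webdis's default JSON reply format and its default port 7379:

```javascript
// Build a Webdis URL: GET http://host:7379/SET/mykey/myvalue runs SET mykey myvalue.
// Arguments are URL-encoded so values with spaces or slashes survive the trip.
function webdisUrl(base, command, ...args) {
  const parts = [command, ...args].map(encodeURIComponent);
  return base + '/' + parts.join('/');
}

// Webdis replies are JSON objects keyed by the command name,
// e.g. {"GET":"myvalue"} for a GET, so extract the payload by command.
function parseWebdisReply(command, body) {
  return JSON.parse(body)[command];
}
```

Any HTTP-capable client (Node, C#, even curl) can use this scheme, which is exactly why it suits a mixed-language environment.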
Hello, I'm currently working on a web application and I would like some of your help in choosing the correct way to implement my API.
RPC is the way I started implementing it because that was the most logical thing to do as a new web developer, but I've been eyeing REST and WCF because they have been mentioned so many times in my research.
Is it common to have an RPC interface for more complex, business-logic-intensive data manipulation and a REST-like interface for the "rest"?
RPC is the way I started implementing it because that was the most logical thing to do as a new web developer, but I've been eyeing REST and WCF because they have been mentioned so many times in my research.
Let's de-tangle a bit:
RPC is a style of web service composition.
REST is a style of web service composition.
WCF is a technology stack which supports both RPC and REST styles
Is it common to have an RPC interface for more complex, business-logic-intensive data manipulation and a REST-like interface for the "rest"?
At best, you could argue it's common to take complex and long-running processes offline. Whether you do this using RPC or REST makes no difference. However, web services are generally a synchronous technology; although one-way calls are supported, this kind of semantic is better served by a truly asynchronous transport such as message queues (which WCF also supports).
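To make the stylistic difference concrete, here is a sketch (the endpoint names are invented) of the same operation expressed both ways: RPC names the action in the URL, while REST names the resource and lets the HTTP verb carry the action:

```javascript
// RPC style: the URL names an action; typically everything is a POST.
function rpcRequest(action, params) {
  return { method: 'POST', url: '/api/' + action, body: params };
}

// REST style: the URL names a resource; the HTTP verb names the action.
function restRequest(verb, resource, id) {
  return { method: verb, url: '/' + resource + (id ? '/' + id : '') };
}
```

In practice many applications mix the two, using resource-shaped URLs for CRUD and action-shaped URLs for operations that don't map cleanly onto a resource.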
I have been reading a lot about WCF lately and whenever the subject of implementing a subscriber-broadcast mechanism comes up (as in an instant messaging system), the solution invariably is to use a static dictionary to hold your subscriber channels.
An example can be found in the answer to the following question, but it is a common practice.
Making a list of subscribers available across calls to a service
This seems like a very good solution for "traditional" web programming, but how is this handled in the cloud? Specifically, how do we get around the fact that every computer in the grid has different "static" variables?
I know very little about the different cloud platforms. Are there different solutions for Azure, Amazon Web Services, and VMware?
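For reference, the static-dictionary pattern from those answers looks roughly like this (sketched in JavaScript rather than WCF for brevity): a per-process map of subscriber callbacks. The cloud problem is visible in the sketch itself: the map lives in one process's memory, so subscribers registered on server A are invisible to a broadcast initiated on server B.

```javascript
// Per-process subscriber registry: the moral equivalent of the
// static Dictionary<string, IClientCallback> in the WCF examples.
const subscribers = new Map();

function subscribe(id, callback) {
  subscribers.set(id, callback);
}

function unsubscribe(id) {
  subscribers.delete(id);
}

// Broadcast only reaches subscribers registered in THIS process;
// on a multi-instance cloud deployment, each server has its own map.
function broadcast(message) {
  for (const callback of subscribers.values()) callback(message);
}
```

Scaling this out requires moving the shared state (or the message routing) out of the process, which is what backplanes and brokered messaging services exist for.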
For broadcasting/push-type notifications, please look at SignalR (http://signalr.net/). Microsoft is making that part of the ASP.NET platform:
http://channel9.msdn.com/Events/Build/2012/3-034
It has some really nice functionality, like gracefully falling back to other mechanisms when advanced transports such as WebSockets are not supported by the server or client. While it is doable, you would have to code all of that yourself in WCF.
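The fallback idea amounts to simple transport negotiation: try the richest transport both ends support, and degrade from there. SignalR's actual transport order is WebSockets, Server-Sent Events, Forever Frame, then Long Polling; the negotiation function below is a made-up illustration of the concept, not SignalR's API.

```javascript
// Transports in order of preference, richest first (SignalR's ordering).
const TRANSPORTS = ['webSockets', 'serverSentEvents', 'foreverFrame', 'longPolling'];

// Pick the first transport both the client and the server support,
// or null if they have nothing in common.
function negotiateTransport(clientSupports, serverSupports) {
  return TRANSPORTS.find(
    (t) => clientSupports.includes(t) && serverSupports.includes(t)
  ) || null;
}
```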
There are pretty big differences between the cloud vendor platforms. I could post multiple links, but the cloud vendors you mention are changing VERY rapidly in what they offer. Your commitment to a particular cloud vendor is for the long term, so don't reduce the decision to "Vendor A has something Vendor B doesn't." There are differences like that, by the way: Amazon, for example, has specialized VMs (high I/O, high memory, high CPU), while Azure has a much better designed VM layer.
I think of it this way (my opinion): Microsoft is a company that owns .NET, ASP.NET, and server platforms such as SQL Server, Windows Server, SharePoint, Office services, etc. They are very well positioned against someone like Amazon or VMware, who do not have rich product portfolios like this. Plus, Microsoft can price those servers into its cloud, while Amazon/RackSpace/VMware have to pay Microsoft a premium for them. You seem to be talking about WCF/.NET, which would favor the Microsoft Azure platform.
On Azure you can run Linux VMs and code in Python, Java, etc., but it favors the Microsoft stack. Conversely, on AWS you can run .NET/Microsoft workloads, but it favors the Linux/open-source stack. Think of it long term, because in two years both major cloud vendors will be making commitments in those areas. For example, RackSpace is going all-in on their OpenStack platform; they have no choice.
The Windows Azure Service Bus has a couple of options that you can use for broadcasting WCF events.
The Relay service has a netEventRelayBinding, which will allow subscribing service instances to receive a one-way service call when a client calls an endpoint. If the clients are disconnected, they will not receive any messages.
http://msdn.microsoft.com/en-us/wazplatformtrainingcourse_eventingonservicebusvs2010_topic2.aspx
Brokered Messaging has topics and subscriptions, where a message can be broadcast to up to 2,000 subscribers. The messages are stored durably, so if a client is disconnected it will receive all the messages when it reconnects.
http://www.cloudcasts.net/devguide/Default.aspx?id=12044
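The topic/subscription semantics can be sketched in miniature: every subscription gets its own durable queue, a publish fans a copy of the message out to each queue, and a subscriber that was offline drains its backlog when it reconnects. This is an in-memory illustration of the model, not the Service Bus API:

```javascript
// A topic fans each published message out to every subscription's own queue.
class Topic {
  constructor() {
    this.subscriptions = new Map(); // subscription name -> pending messages
  }
  createSubscription(name) {
    this.subscriptions.set(name, []);
  }
  publish(message) {
    for (const queue of this.subscriptions.values()) queue.push(message);
  }
  // Messages wait in the queue, so a subscriber that was disconnected
  // still receives everything published while it was away.
  receiveAll(name) {
    const queue = this.subscriptions.get(name);
    return queue.splice(0, queue.length);
  }
}
```

This durable-queue-per-subscription model is what distinguishes Brokered Messaging from the Relay, where disconnected clients simply miss the call.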
Regards,
Alan
It might be worth looking into something like RabbitMQ on AppHarbor. It's something I keep meaning to look at but can't find the time for. I only mention it because nobody else has ;)
I have a need for an application to access reporting data from a remote database. We currently have a WCF service that handles the I/O for this database. Normally the application just sends small messages back and forth between itself and the WCF service, but now we need to run some historical reports on that activity. The result could be several hundred to a few thousand records. I came across http://msdn.microsoft.com/en-us/library/ms733742.aspx, which talks about streaming, but it also mentions segmenting messages, which I couldn't find any more information on. What is the best way to send large amounts of data like this from a WCF service?
It seems my options are streaming or chunking. Streaming restricts other WCF features, message security being one (http://msdn.microsoft.com/en-us/library/ms733742.aspx). Chunking means breaking a message into pieces and then putting those pieces back together at the client. This can be done by implementing a custom channel, which MS has provided an example of here: http://msdn.microsoft.com/en-us/library/aa717050.aspx. It is implemented below the security layer, so security can still be used.
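The chunking idea can be sketched independently of WCF: split a large payload into numbered pieces, send them as ordinary (small, securable) messages, and reassemble by sequence number at the client. The chunk size and message shape here are invented for illustration:

```javascript
// Split a large payload into numbered chunks small enough to send
// as ordinary messages, so features like message security still apply.
function chunk(payload, size) {
  const chunks = [];
  for (let i = 0; i * size < payload.length; i++) {
    chunks.push({ seq: i, data: payload.slice(i * size, (i + 1) * size) });
  }
  return chunks;
}

// Reassemble at the client; sorting by seq tolerates out-of-order arrival.
function reassemble(chunks) {
  return chunks
    .slice()
    .sort((a, b) => a.seq - b.seq)
    .map((c) => c.data)
    .join('');
}
```

The MSDN chunking-channel sample does essentially this inside a custom channel, so the service and client code never see the individual pieces.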
The canonical example here is Twitter's API. I understand conceptually how the REST API works; essentially it's just a query to their server for your particular request, to which you then receive a response (JSON, XML, etc.). Great.
However, I'm not exactly sure how a streaming API works behind the scenes. I understand how to consume it. For example, with Twitter you listen for a response; from the response you listen for data, and the tweets come in chunks. You build up the chunks in a string buffer and wait for a line feed, which signifies the end of a tweet. But what are they doing to make this work?
Let's say I had a bunch of data and I wanted to set up a streaming API locally for other people on the net to consume (just like Twitter). How is this done, and with what technologies? Is this something Node.js could handle? I'm just trying to wrap my head around what they are doing to make this thing work.
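The consuming side described in the question (buffer the chunks, split on the line feed) can be sketched like this; note that the network chunks can break anywhere, even mid-tweet:

```javascript
// Accumulates raw network chunks and emits one complete tweet per
// line feed, however the chunk boundaries happen to fall.
function createLineParser(onTweet) {
  let buffer = '';
  return function feed(chunkText) {
    buffer += chunkText;
    let newline;
    while ((newline = buffer.indexOf('\n')) !== -1) {
      const line = buffer.slice(0, newline);
      buffer = buffer.slice(newline + 1);
      if (line.length > 0) onTweet(line);
    }
  };
}
```

Anything left in the buffer after a read is simply the start of the next tweet, which is why the delimiter makes the protocol robust to arbitrary chunking.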
The key to Twitter's streaming API is that it's essentially a long-running request that's left open; data is pushed into it as and when it becomes available.
The repercussion of that is that the server has to be able to deal with lots of concurrent open HTTP connections (one per client). A lot of existing servers don't manage that well; for example, Java servlet engines assign one thread per request, which can (a) get quite expensive and (b) quickly hit the normal max-threads setting, preventing subsequent connections.
As you guessed, the Node.js model fits the idea of a streaming connection much better than, say, a servlet model does. Both requests and responses are exposed as streams in Node.js, and they don't each occupy an entire thread or process, which means you can keep pushing data into a stream for as long as it remains open without tying up excessive resources (although this is subjective). In theory you could have a lot of concurrent open responses connected to a single process and only write to each one when necessary.
If you haven't looked at it already the HTTP docs for Node.js might be useful.
I'd also take a look at technoweenie's Twitter client to see what the consumer end of that API looks like with Node.js, the stream() function in particular.