trigger a function when changes in the DB are being committed - sql

I've always wondered whether it's possible to create a block of code (probably PHP) that executes when a certain change is committed to the database.
For instance, a chat application: when a user sends a message, it is inserted into a table, and I would then like to force all of the other users' browsers to make an AJAX request to read the new value (rather than sending an AJAX request every 100 ms to check whether there is a new message).
I remember something that involved Node.js and some other type of DB other than MySQL. If that's the only solution, can it work alongside a normal MySQL database?
Thanks in advance!

Yes, MySQL supports triggers, but they are pretty much limited to performing other data operations. So you'd still need some way to get a notification sent to your JavaScript client.
A better way of doing client notifications is with WebSockets or Comet, which allow the server to push notifications from a message queue.
You didn't give much detail about your programming environment, so I'll leave it to you to follow the tag links I gave above, and research the appropriate tools and frameworks for using these general methods.
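To give a flavour of the push approach, here's a minimal Node.js sketch using the ws package; the /notify endpoint and port are my own invention, standing in for however your app signals "a new row was inserted":

```js
// Minimal push-notification sketch using the "ws" package (npm install ws).
// Assumption: your PHP app POSTs to /notify after inserting a chat row in MySQL.
const http = require('http');
const WebSocket = require('ws');

const server = http.createServer((req, res) => {
  if (req.method === 'POST' && req.url === '/notify') {
    let body = '';
    req.on('data', (chunk) => { body += chunk; });
    req.on('end', () => {
      // Broadcast the new message to every connected browser.
      for (const client of wss.clients) {
        if (client.readyState === WebSocket.OPEN) client.send(body);
      }
      res.end('ok');
    });
  } else {
    res.statusCode = 404;
    res.end();
  }
});

const wss = new WebSocket.Server({ server });
server.listen(8080); // browsers connect with: new WebSocket('ws://host:8080')
```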
Re your comment:
For PHP, here's an example "push" chat application:
http://www.aljtmedia.com/blog/websockets-for-php-ratchet-push-chat-application/
Here's a primer on using message queues in general:
http://blog.thecodepath.com/2013/01/06/asynchronous-processing-in-web-applications-part-2-developers-need-to-understand-message-queues/
And here are tutorials for RabbitMQ (one simple option among many MQ solutions usable by PHP), including PHP examples: https://www.rabbitmq.com/getstarted.html

Related

How can I use Reactive Extensions and WCF to process information on a remote server and monitor progress?

I am experimenting with using Reactive Extensions to create a Windows Service.
Essentially what I want is for the observer to sit on the server: the clients create observables and push them to the server, the server informs the client of the progress of the job (I'm not sure how to do this or what mechanism to use), and when it's done, the server sends the client the return code and output of the program it called. Can this be done? Is it the best way to do what I'm trying to do? If you need any more information, what would you need to know to help me?
This seems back to front. Generally clients know about servers (how to find them and connect). In contrast, the Observer pattern (and therefore Rx) is about allowing something to call back to another observer that it does not know about.
In your case I think you simply want to have clients call methods on a server, potentially bound to a single connection/session. The client, however, may be an observer of the progress from the server and of the final result.
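To illustrate that shape (in RxJS rather than .NET Rx, purely because it's compact; submitJob and the fake progress timer are my own stand-ins for the real server call):

```js
// Illustration in RxJS (npm install rxjs) of "client calls the server,
// then observes progress" -- the server/transport is faked with a Subject.
const { Subject } = require('rxjs');

// Pretend server: submitting a job returns an observable of progress events.
function submitJob(command) {
  const progress = new Subject();
  let pct = 0;
  const timer = setInterval(() => {
    pct += 25;
    progress.next({ pct });
    if (pct >= 100) {
      clearInterval(timer);
      progress.next({ exitCode: 0, output: `ran: ${command}` });
      progress.complete();
    }
  }, 500);
  return progress.asObservable();
}

// Client side: an ordinary request/response call, then observe the results.
submitJob('build.sh').subscribe({
  next: (evt) => console.log('progress', evt),
  complete: () => console.log('job finished'),
});
```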
See the Reactive Trader project by the team at Adaptive to see a .NET client server app using Rx.

Why should I build an API with an asynchronous/non-blocking framework?

I have been looking into the Play Framework as a possible candidate for helping me to build a simple API. However, the Django Rest Framework (DRF) also seems to be a pretty strong contender.
As far as I can tell, the DRF does not advertise itself to be an asynchronous (or non-blocking) framework like the Play Framework does, but I am interested in whether or not this even matters. The situation that I keep thinking of is sending an email to a user via Mandrill -- I do not want my API to get bogged down waiting for the Mandrill API to tell it whether or not the email was sent.
Thus, I think the question can be summarized like this: is there a benefit from the client's perspective that will result from my building an API with an asynchronous/non-blocking framework like Play over the DRF, or am I missing the point?
I'm a Django REST framework contributor (and user), so my perspective is biased towards that.
Django REST framework is built on Django, which is a synchronous framework for web applications. If you're already using a synchronous framework like Django, having a synchronous API is less of an issue.
Now, just because it is synchronous, that doesn't mean that only a single request can ever be handled at a time. Most web servers that handle Django applications can handle multiple requests, and some of them even do it somewhat asynchronously across multiple threads. Usually this isn't actually an issue, as your web server can typically handle many concurrent requests, even if some of them are blocking. And when you have long, blocking calls, you usually don't want them done within the API - you should be delegating that to background workers like Celery or Resque.
This isn't just specific to Django, many of the same principles apply to other synchronous frameworks like Rails and ASP.NET MVC. If you have long-running requests, you generally should be delegating work to other processes instead of holding up the request. It's common to use the 202 response code for these cases.
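As a sketch of that accept-now-work-later pattern (shown with Node/Express purely for illustration; in Django you'd enqueue a Celery task from the view and return the 202; enqueueEmail here is a hypothetical stand-in for your queue):

```js
// Sketch of the "accept now, work later" pattern with Express (npm install express).
const express = require('express');
const app = express();
app.use(express.json());

function enqueueEmail(job) {
  // Hypothetical: push the job onto a queue consumed by a background worker
  // (Celery, Resque, Bull, ...). Faked here with a deferred callback.
  setImmediate(() => console.log('worker would send:', job));
}

app.post('/signup', (req, res) => {
  // Do the quick, essential work synchronously...
  const user = { email: req.body.email };
  // ...and hand the slow part (calling Mandrill, etc.) to a worker.
  enqueueEmail({ to: user.email, template: 'welcome' });
  // 202 Accepted: "I've got it, it'll be processed shortly."
  res.status(202).json({ status: 'queued' });
});

app.listen(3000);
```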
Now, that doesn't necessarily mean that asynchronous frameworks are useless. In runtimes such as Node.js, most frameworks handle requests asynchronously. It doesn't make sense to use a synchronous framework in these languages, so most libraries are built to be asynchronous.
What you choose very much depends on the tools that you are already using.
Regarding the clients connecting to your app, there should be no difference at all whether your server uses asynchronous/non-blocking (ANB) technologies or not. But it may make a lot of difference in the number of requests your app can handle.
Suppose the following scenario: a request that checks whether a FB/Google/etc. access token is valid, then uses it to get your user's social profile, and then returns something back.
If you are using a blocking HTTP client on your server, then during each of those two HTTP requests the thread serving the request can be blocked for a long time doing nothing.
If you are using a non-blocking HTTP client (like the one Play ships with), then while the HTTP request is in flight the thread can be used to do something else (e.g. process part of another request).
Note that to solve this "problem" you wouldn't need an ANB framework, just an ANB HTTP client. So you should look more at the kinds of operations your app will perform and check how your chosen framework deals with them. For example: if your app consists almost entirely of DB CRUD operations and the DB driver is blocking (like JDBC in Java, and probably the ones used by Django), it really does not matter much whether the framework is asynchronous or not; you will be blocking most of the time on that specific component.
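As a concrete illustration of the difference, here's the token-then-profile scenario written with a non-blocking client in Node.js (assuming Node 18+, where fetch is global; the provider URLs are placeholders, not a real API):

```js
// Non-blocking version of the "validate token, then fetch profile" request.
// While each await is in flight, the event loop is free to serve other requests.
async function socialProfile(accessToken) {
  const check = await fetch(`https://provider.example/tokeninfo?token=${accessToken}`);
  if (!check.ok) throw new Error('invalid token');

  const profile = await fetch('https://provider.example/me', {
    headers: { Authorization: `Bearer ${accessToken}` },
  });
  return profile.json();
}
```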
In your email example, Django+Celery will probably do just as well as Play/Akka.
Non-async frameworks usually handle long-running tasks by passing them to an external process (e.g. Resque/DelayedJob/Sidekiq in the Rails world).
Just wanted to add that the Mandrill API supports an async parameter for sending emails.
Here is what their docs say:
enable a background sending mode that is optimized for bulk sending. In async mode, messages/send will immediately return a status of "queued" for every recipient. To handle rejections when sending in async mode, set up a webhook for the 'reject' event.
So with async set to true, you'll get a handle back immediately after making the call to the API, without waiting for all the emails to be sent.
https://mandrillapp.com/api/docs/messages.JSON.html#method-send
(I took the JSON version of the API just as an example.)
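A rough sketch of such a call (the endpoint and payload shape follow the docs linked above, but verify against the current documentation; MANDRILL_KEY is a placeholder; assumes Node 18+ for global fetch):

```js
// Sketch of calling Mandrill's messages/send with async mode enabled.
async function sendAsync(message) {
  const res = await fetch('https://mandrillapp.com/api/1.0/messages/send.json', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      key: process.env.MANDRILL_KEY, // placeholder API key
      message,     // { from_email, to, subject, text, ... }
      async: true, // return "queued" immediately instead of waiting
    }),
  });
  return res.json(); // one status object per recipient, e.g. "queued"
}
```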
The Django community is working on this; for now you can utilise the sync_to_async() adapter.
It comes with some limitations and performance penalties, but the community is still working on it.
The link below will help you work with the sync_to_async() adapter:
https://docs.djangoproject.com/en/3.2/topics/async/

Event notification on new e-mail in IBM Domino

Is it possible to subscribe to mail events on an IBM Domino server?
I need a service similar to the one provided by Microsoft Exchange Event Notification, where you can subscribe to events and get notified when there are changes - eg. arrival of a new e-mail. I need the solution to be server side, since I can't rely on users having their client running.
Unfortunately, as per my comment above, there is no pre-packaged equivalent to the push, pull and streaming subscription services that EWS supports. A Notes client can get notifications via the Notes RPC protocol, and there's also obviously some technology in IBM's Notes Traveler mobile product, but nothing that I'm aware of as a pre-packaged web service or even as a notifications API. You would have to build it. There are a variety of ways you could go about it.
For push or streaming subscriptions, one way would be a Notes C API plugin using the Extension Manager, running on the server and monitoring the mailboxes. You might be able to use a DSAPI plugin into Domino's HTTP stack to manage the incoming connections and feed the data out to subscribers, but honestly I have no idea whether Domino's HTTP stack can handle the persistent connections that are implied by the subscription model. Alternatively, the Extension Manager plugin could quickly send the data over to code written in any other language you want, running on whatever web stack you prefer. Of course, you'll have to deal with security through all the linked-together parts.
For pull subscriptions, I guess it's really more of a polling architecture, with state saved somewhere so that only changes since the last call are delivered. You have any number of options for that. You could use Domino's built-in HTTP server, obviously, and write your own Domino-hosted web service for this. You could also use the Domino Data Service, which is a REST API, with all the necessary state information being stored on the client side. (On a quick look, I don't see a good option for getting all new docs since a specified date-time via the Domino Data Service, but it might be possible.)
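As a very rough sketch of what the polling side of such a service might look like (the URL, query parameter, and delivery mechanism here are hypothetical illustrations, not the actual Domino Data Service contract; assumes Node 18+ for global fetch):

```js
// Hypothetical polling loop against a REST mail API. Consult the Domino
// Data Service documentation for the real endpoints and parameters.
let lastSeen = new Date(0).toISOString();

async function poll() {
  const res = await fetch(
    `https://domino.example/mail/user.nsf/api/data/documents?since=${lastSeen}`,
    { headers: { Accept: 'application/json' } }
  );
  const docs = await res.json();
  for (const doc of docs) notifySubscriber(doc); // your delivery mechanism
  lastSeen = new Date().toISOString();
}

function notifySubscriber(doc) {
  console.log('new document:', doc);
}

setInterval(poll, 30 * 1000); // poll every 30 seconds
```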
I do worry a bit about scalability of any custom solution for this. My understanding is that Microsoft has quite a bit of caching and optimization in their services in order to address scale. Obviously, you can build whatever you need for that into your own web service, but it will likely add a lot of effort.

Real-time application newbie - Node.JS + Redis or RabbitMQ -> client/server how?

I am a newbie to real-time application development and am trying to wrap my head around the myriad options out there. I have read as many of the blog posts, notes, and essays that people have been kind enough to share as I could. Yet a simple problem seems unanswered in my tiny brain. I thought a number of other people might have the same issues, so I might as well sign up and post here on SO. Here goes:
I am building a tiny real-time app which is asynchronous chat + another fun feature. I boiled my choices down to the following two options:
LAMP + RabbitMQ
Node.JS + Redis + Pub-Sub
I believe that I get the basics to start learning and building this out. However, my (seriously n00b) questions are:
How do I communicate with the end user -> client to/from server in both of those? Would that be simple JavaScript long/infinite polling?
Of the two, which might be more efficient to build out and manage from a single Slice (assuming 100 - 1,000 users)?
Should I just build everything out with jQuery in the 'old school' paradigm and then identify which stack might make more sense? Just so that I can get the product fleshed out as a prototype and then 'optimize' it. Or is writing in one over the other more than mere optimization? ( I feel so, but I am not 100% on this personally )
I hope this isn't a crazy question and won't get flamed right away. Would love some constructive feedback, love this community!
Thank you.
Architecturally, both of your choices are the same as storing data in an Oracle database server for another application to retrieve.
Both the RabbitMQ and Redis solutions require your apps to connect to an intermediary server that handles the data communications. Redis is most like Oracle, because it can be used simply as a persistent database with a network API. RabbitMQ is a little different, though, because the MQ broker is not really responsible for persisting data. If you configure it right and use the right options when publishing a message, RabbitMQ will actually persist the data for you, but you can't get the data out except as part of the normal message-queueing process. In other words, RabbitMQ is for communicating messages and only offers persistence as a way of recovering from network problems or system crashes.
I would suggest using RabbitMQ and whatever programming languages you are already familiar with. Since the M in LAMP is usually interpreted as MySQL, this means that you would either not use MySQL at all, or only use it for long term storage of data, not for the realtime communications.
The RabbitMQ site has a huge amount of documentation about building apps with AMQP. I suggest that after you install RabbitMQ, you read through the docs for rabbitmqctl and then create a vhost to experiment in. That way it is easy to clean up your experiments without resetting everything. I also suggest using only topic exchanges because you can emulate the behavior of direct and fanout exchanges by using wildcards in the routing_key.
Remember, you only publish messages to exchanges, and you only receive messages from queues. The exchange is responsible for pattern matching the message's routing_key to the queue's binding_key to determine which queues should receive a copy of the message. It is worthwhile learning the whole AMQP model even if you only plan to send messages to one queue with the same name as the routing_key.
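For a feel of the publish/bind/consume flow described above, here's a small sketch using the amqplib Node.js client (the exchange, vhost, and key names are my own examples):

```js
// Topic-exchange sketch using amqplib (npm install amqplib).
const amqp = require('amqplib');

async function main() {
  const conn = await amqp.connect('amqp://localhost/myvhost');
  const ch = await conn.createChannel();

  await ch.assertExchange('chat', 'topic', { durable: false });

  // Consumer: bind an anonymous queue with a wildcard binding_key.
  const { queue } = await ch.assertQueue('', { exclusive: true });
  await ch.bindQueue(queue, 'chat', 'room.*');
  ch.consume(queue, (msg) => console.log(msg.content.toString()), { noAck: true });

  // Producer: publish to the exchange with a routing_key the pattern matches.
  ch.publish('chat', 'room.lobby', Buffer.from('hello'));
}

main();
```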
If you are building your client in the browser, and you want to build a prototype, then you should consider just using XHR today, and then move to something like Kamaloka-js, a pure JavaScript implementation of AMQP (the AMQ Protocol), the standard protocol used to communicate with a RabbitMQ message broker. In other words, build it with what you know today, and then speed it up later with something (AMQP) that has a long-term future in your toolbox.
Should I just build everything out with jQuery in the 'old school' paradigm and then identify which stack might make more sense? Just so that I can get the product fleshed out as a prototype and then 'optimize' it. Or is writing in one over the other more than mere optimization? ( I feel so, but I am not 100% on this personally )
This is usually called RAD (rapid application design/development) and it is what I would recommend right now. This lets you build the proof of concept that you can use to work off of later to get what you want to happen.
As for how to talk to the clients from the server, and vice versa, have you read up at all on WebSockets?
Given the choice between LAMP or event based programming, for what you're suggesting, I would tell you to go with the event based programming, so nodejs. But that's just one man's opinion.
Well,
LAMP - Apache creates a new process for every request. RabbitMQ can be useful, and has many features.
Node.js - uses a single process to handle all requests asynchronously, with the help of an event loop. So there is no extra process-creation overhead as with Apache.
For asynchronous chat application,
socket.io + Node.js + Redis pub-sub is the best stack.
I have already implemented real-time notifications using the above stack.
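A minimal sketch of that stack (using the current socket.io and node-redis packages; the channel name is illustrative):

```js
// Sketch: Socket.IO server relaying messages published on a Redis channel.
// npm install socket.io redis
const { Server } = require('socket.io');
const { createClient } = require('redis');

const io = new Server(3000, { cors: { origin: '*' } });

async function main() {
  const sub = createClient();  // subscriber connection
  const pub = sub.duplicate(); // separate connection for publishing
  await sub.connect();
  await pub.connect();

  // Fan messages from Redis out to every connected browser.
  await sub.subscribe('chat', (message) => io.emit('chat', message));

  // And push messages from browsers into Redis.
  io.on('connection', (socket) => {
    socket.on('chat', (msg) => pub.publish('chat', msg));
  });
}

main();
```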

Streaming API vs Rest API?

The canonical example here is Twitter's API. I understand conceptually how the REST API works; essentially it's just a query to their server for your particular request, to which you then receive a response (JSON, XML, etc.). Great.
However, I'm not exactly sure how a streaming API works behind the scenes. I understand how to consume it: for example, with Twitter you listen for a response, and from the response you listen for data, in which the tweets come in chunks. You build the chunks up in a string buffer and wait for a line feed, which signifies the end of a tweet. But what are they doing to make this work?
Let's say I had a bunch of data and I wanted to setup a streaming API locally for other people on the net to consume (just like Twitter). How is this done, what technologies? Is this something Node JS could handle? I'm just trying to wrap my head around what they are doing to make this thing work.
Twitter's streaming API is essentially a long-running request that's left open; data is pushed into it as and when it becomes available.
The repercussion of that is that the server has to be able to deal with lots of concurrent open HTTP connections (one per client). A lot of existing servers don't manage that well; for example, Java servlet engines assign one thread per request, which can (a) get quite expensive and (b) quickly hit the normal max-threads setting, preventing subsequent connections.
As you guessed the Node.js model fits the idea of a streaming connection much better than say a servlet model does. Both requests and responses are exposed as streams in Node.js, but don't occupy an entire thread or process, which means that you could continue pushing data into the stream for as long as it remained open without tying up excessive resources (although this is subjective). In theory you could have a lot of concurrent open responses connected to a single process and only write to each one when necessary.
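To make that concrete, here's a tiny sketch of a streaming endpoint in plain Node.js, with the data source faked by a timer (in a real system you'd write whenever new data arrives):

```js
// Minimal streaming endpoint: the response is never ended, and each new
// item is written as a newline-delimited chunk (as Twitter's stream does).
const http = require('http');

http.createServer((req, res) => {
  res.writeHead(200, { 'Content-Type': 'application/json' });

  // Fake data source: emit one JSON line per second.
  const timer = setInterval(() => {
    res.write(JSON.stringify({ text: 'hello', at: Date.now() }) + '\n');
  }, 1000);

  // Stop writing when the client disconnects.
  req.on('close', () => clearInterval(timer));
}).listen(8000);
```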
If you haven't looked at it already the HTTP docs for Node.js might be useful.
I'd also take a look at technoweenie's Twitter client to see what the consumer end of that API looks like with Node.js, the stream() function in particular.