I have built a "Large" application using Flask-AppBuilder and have 2 questions I have not seen the answer to.
Is there any way to "split" a large application into multiple components (similar to what Blueprints do)?
My business logic has mostly ended up in the Views, but some of it does not feel right there. A few things I have added to the models, which again does not feel right. This logic tends to create long-running processes, so I have been testing out Celery.
Any examples of either of these would be lovely.
It does not really matter what framework you use: as soon as the application grows you may want to isolate critical logic, both for the reasons you described above and to be future-proof (you may want to move to a new frontend in the future without rewriting the heavy lifting).
I usually set up a Redis worker for this, and use Flask only to trigger the queue with function calls. That also makes the application more scalable (concurrent users, more data), as you can simply start more workers listening to your queue if needed.
In essence:
from redis import Redis
from rq import Queue
from rq.job import Job  # useful later for fetching a job's status by id

# connect to the Redis server and create a queue on top of it
conn = Redis()
q = Queue(connection=conn)
Then, as an example, in the Flask routes (for AppBuilder, use the views, or create your own lib) call:
result = q.enqueue('utils.your_function_name', args=(id,))
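In context, a minimal Flask view might look like this (the route, module, and function names are just placeholders for your own):

from flask import Flask, jsonify

app = Flask(__name__)

@app.route('/process/<int:id>')
def process(id):
    # q is the Queue created above; hand off the slow work and return at once
    job = q.enqueue('utils.your_function_name', args=(id,))
    return jsonify(job_id=job.get_id()), 202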
Have a look at RQ here for more examples, including how to monitor the status of your jobs:
https://python-rq.org/
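For completeness, a minimal sketch of the worker side, assuming the placeholder names from above (the function lives in utils.py so the worker can import it):

# utils.py -- imported by the RQ worker process, not by Flask
import time

def your_function_name(id):
    # stand-in for the long-running business logic
    time.sleep(10)
    return {'id': id, 'status': 'done'}

Start one or more workers on the default queue (each extra worker adds concurrency):

rq worker

From Flask you can then poll the job you enqueued earlier:

from rq.job import Job
job = Job.fetch(result.id, connection=conn)
print(job.get_status())  # queued / started / finished / failed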
Is there an upper limit to the number of unique IEndpointInstances that can be hosted within a single process?
I'm considering a design that will see up to 100 unique IEndpointInstances, all listening on separate queues, active simultaneously.
Will this cause a problem for NServiceBus? Could the process deadlock or spin up so many threads as to be unresponsive and useless?
The question NServiceBus - How to get separate queue for each message type receiver subscribes to? seems to suggest that you cannot have multiple endpoints in a process, but this is an older post. I have built a small sample against NServiceBus 6 beta4 that does work.
There is a similar question NServiceBus Single Process, but Multiple Input queues that concluded, based on the OP's context, that using Satellite Features was the recommended approach. However, in my case, I have 100 (functionally different) sagas (1 per queue), where each saga could need to receive similar messages, but I need to make sure that only the correct saga receives the message. Therefore, I don't think implementing a custom feature will meet my requirements. Or will Satellite Features support Sagas?
One of the options is to use self multi-hosting. With this approach, you host the endpoints yourself in the same process. There are a few things to take into consideration, such as:
Assembly scanning (might require custom scanning logic per endpoint).
Throughput (for heavy throughput endpoints I'd recommend a separate hosting process).
To update/redeploy a single endpoint, you'll be taking all of the other 99 endpoints down as well.
While there's no hard limit on how many endpoints can be co-hosted, 100 sounds like a lot. That said, it also depends on how heavy the load on those endpoints is. Whether you process 1 msg/sec or 1K msg/sec largely determines whether this is a viable option or not.
Have a look at the sample that does exactly that.
FYI: This will be my first real foray into Async/Await; for too long I've been settling for the familiar territory of BackgroundWorker. It's time to move on.
I wish to build a WCF service, self-hosted in a Windows service running on a remote machine in the same LAN, that does this:
Accepts a request for a single .ZIP archive
Creates the archive and packages several files
Returns the archive as its response to the request
I have to support archives as large as 10GB. Needless to say, this scenario isn't covered by basic WCF designs; we must take additional steps to meet the requirement. We must eliminate timeouts while the archive is building and memory errors while it's being sent. Both of these occur under basic WCF designs, depending on the size of the file returned.
My plan is to proceed using task-based asynchronous WCF calls and streaming mode.
I have two concerns:
Is this the proper approach to the problem?
Microsoft has done a nice job at abstracting all of this, but what of the underlying protocols? What goes on 'under the hood?' Does the server keep the connection alive while the archive is building (could be several minutes) or instead does it close the connection and initiate a new one once the operation is complete, thereby requiring me to properly route the request through the client machine firewall?
For #2, clearly I'm hoping for the former (keep-alive). But after some searching I'm not easily finding an answer. Perhaps you know.
You need streaming for big payloads. That is the right approach. This has nothing at all to do with asynchronous IO. The two are independent. The client cannot even tell that the server is async internally.
I'll add my standard answers for whether to use async IO or not:
https://stackoverflow.com/a/25087273/122718 Why does the EF 6 tutorial use asynchronous calls?
https://stackoverflow.com/a/12796711/122718 Should we switch to use async I/O by default?
Each request runs over a single connection that is kept alive. This goes both for streaming big amounts of data and for big initial delays. I'm not sure why you are concerned about routing; does your router kill such connections? If so, that's a problem.
Regarding keep-alive: there is nothing going over the wire to do that. TCP sessions can stay open indefinitely without any kind of wire traffic.
The Heroku Dev Center, on the page about using worker dynos and background jobs, states that you need to use workers + queues to handle API calls, such as fetching an RSS feed, as the operation may take some time if the server is slow, and doing this on a web dyno would result in it being blocked from receiving additional requests.
However, from what I've read, it seems to me that one of the major points of Node.js is that it doesn't suffer from blocking under these conditions due to its asynchronous event-based runtime model.
I'm confused because wouldn't this imply that it would be ok to do API calls (asynchronously) in the web dynos? Perhaps the docs were written more for the Ruby/Python/etc use cases where a synchronous model was more prevalent?
Node.js is an implementation of the reactor pattern: a single-threaded event loop, backed by a small worker thread pool (libuv's default is 4 threads) for operations that have no non-blocking OS equivalent, such as file I/O and DNS lookups. Once that pool is saturated, further such operations queue up behind it.
A common misconception about Node.js is that it is a system that allows you to do many things at once. That is not quite the case; it allows you to make progress on other work while waiting on I/O-bound tasks.
Any CPU-bound work, however, executes on the main event loop thread, so it blocks everything else.
This means that if your "job" is I/O bound, like putting things in a database, you can probably get away without worker dynos. This of course depends on how much you plan on having going on at once. Remember, any task you run in your main app takes resources away from other incoming requests.
Generally, though, it is not recommended for things like this: if you have a job that does real processing, it belongs in a queue whose work is executed in its own process or thread.
I am a newbie to real-time application development and am trying to wrap my head around the myriad options out there. I have read as many of the blog posts, notes, and essays that people have been kind enough to share as I could find. Yet, a simple problem seems unanswered in my tiny brain. I thought a number of other people might have the same issues, so I might as well sign up and post here on SO. Here goes:
I am building a tiny real-time app which is asynchronous chat + another fun feature. I boiled my choices down to the following two options:
LAMP + RabbitMQ
Node.JS + Redis + Pub-Sub
I believe that I get the basics to start learning and building this out. However, my (seriously n00b) questions are:
How do I communicate with the end user (client to/from server) in both of those? Would that be simple JavaScript long/infinite polling?
Of the two, which might be more efficient to build out and manage from a single Slice (assuming 100 - 1,000 users)?
Should I just build everything out with jQuery in the 'old school' paradigm and then identify which stack might make more sense? Just so that I can get the product fleshed out as a prototype and then 'optimize' it. Or is writing in one over the other more than mere optimization? (I feel so, but I am not 100% on this personally)
I hope this isn't a crazy question and won't get flamed right away. Would love some constructive feedback, love this community!
Thank you.
Architecturally, both of your choices are the same as storing data in an Oracle database server for another application to retrieve.
Both the RabbitMQ and the Redis solution require your apps to connect to an intermediary server that handles the data communications. Redis is most like Oracle, because it can be used simply as a persistent database with a network API. But RabbitMQ is a little different because the MQ Broker is not really responsible for persisting data. If you configure it right and use the right options when publishing a message, then RabbitMQ will actually persist the data for you but you can't get the data out except as part of the normal message queueing process. In other words, RabbitMQ is for communicating messages and only offers persistence as a way of recovering from network problems or system crashes.
I would suggest using RabbitMQ and whatever programming languages you are already familiar with. Since the M in LAMP is usually interpreted as MySQL, this means that you would either not use MySQL at all, or only use it for long term storage of data, not for the realtime communications.
The RabbitMQ site has a huge amount of documentation about building apps with AMQP. I suggest that after you install RabbitMQ, you read through the docs for rabbitmqctl and then create a vhost to experiment in. That way it is easy to clean up your experiments without resetting everything. I also suggest using only topic exchanges because you can emulate the behavior of direct and fanout exchanges by using wildcards in the routing_key.
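For example, a throwaway vhost can be created and torn down like this (standard rabbitmqctl commands; "playground" is just an arbitrary name):

rabbitmqctl add_vhost playground
rabbitmqctl set_permissions -p playground guest ".*" ".*" ".*"
# ... experiment ...
rabbitmqctl delete_vhost playground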
Remember, you only publish messages to exchanges, and you only receive messages from queues. The exchange is responsible for pattern matching the message's routing_key to the queue's binding_key to determine which queues should receive a copy of the message. It is worthwhile learning the whole AMQP model even if you only plan to send messages to one queue with the same name as the routing_key.
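To make that model concrete, here is a minimal sketch in Python with the pika client (the exchange, queue, and key names are just placeholders):

import pika

# connect to a local broker (point this at your vhost/credentials)
conn = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
ch = conn.channel()

# a topic exchange; wildcard binding keys let it emulate direct
# (exact match) and fanout (bind with '#') exchanges
ch.exchange_declare(exchange='chat', exchange_type='topic')

# a durable queue that receives every message for room 42
ch.queue_declare(queue='room-42', durable=True)
ch.queue_bind(queue='room-42', exchange='chat', routing_key='room.42.*')

# publish; delivery_mode=2 marks the message persistent, which is the
# "right option" mentioned above for surviving a broker restart
ch.basic_publish(exchange='chat', routing_key='room.42.message',
                 body=b'hello',
                 properties=pika.BasicProperties(delivery_mode=2))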
If you are building your client in the browser and you want to build a prototype, then you should consider just using XHR today, and then move to something like Kamaloka-js, a pure JavaScript implementation of AMQP (the AMQ Protocol), the standard protocol used to communicate with a RabbitMQ message broker. In other words, build it with what you know today, and then speed it up later with something (AMQP) that has a long-term future in your toolbox.
Should I just build everything out with jQuery in the 'old school' paradigm and then identify which stack might make more sense? Just so that I can get the product fleshed out as a prototype and then 'optimize' it. Or is writing in one over the other more than mere optimization? (I feel so, but I am not 100% on this personally)
This is usually called RAD (rapid application design/development), and it is what I would recommend right now. It lets you build a proof of concept that you can work from later to get where you want to go.
As for how to talk to the clients from the server, and vice versa, have you read up on WebSockets at all?
Given the choice between LAMP and event-based programming, for what you're suggesting, I would tell you to go with the event-based programming, so Node.js. But that's just one man's opinion.
Well,
LAMP - Apache creates a new process for every request. RabbitMQ can be useful here and has many features.
Node.js - uses a single process to handle all requests asynchronously with the help of an event loop, so there is no per-request process-creation overhead as with Apache.
For an asynchronous chat application, socket.io + Node.js + Redis pub-sub is the best stack.
I have already implemented real-time notifications using the above stack.
I'd like to have a real-time 'system status'/'activity monitor' console for my Twisted application.
The app is basically a protocol.ServerFactory which accepts connections and performs different jobs.
Kind of like twisted.manhole, I'm looking for the simplest way to create an admin application where I can check the current stats of my app.
The admin console can be a simple ASCII-based shell or an HTML/JSON setup.
I'm aware that I could build this with a bunch of counters plus a separate protocol for authenticating and monitoring these, but I'm thinking Twisted might already have such a thing, since it at least knows the number of connections, protocol types, etc.
Tips?
There's the unmaintained, slowly rotting twisted.internet.gladereactor. If you're using twistd, then you can use this trivially:
twistd --reactor debug-gui ...
If you're running the reactor directly yourself, then it's only slightly more effort:
from twisted.manhole import gladereactor
gladereactor.install()  # install must happen before the reactor is imported
from twisted.internet import reactor
...
The Inspect feature appears to have been broken for some time, but it will still show you a list of established connections and what state they are in, and it will also apparently give you a traffic log for each connection. Fixing Inspect may also be a fairly straightforward effort, in case you're looking for a little project. :)
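If you do end up rolling your own counters instead, a minimal sketch might look like this (the class names and the plain-text status protocol are my own illustration, not a Twisted facility):

from twisted.internet import protocol, reactor

class MonitoredProtocol(protocol.Protocol):
    def connectionMade(self):
        self.factory.active += 1
        self.factory.total += 1

    def connectionLost(self, reason):
        self.factory.active -= 1

class MonitoredFactory(protocol.ServerFactory):
    protocol = MonitoredProtocol
    active = 0   # connections open right now
    total = 0    # connections accepted since start

class StatsProtocol(protocol.Protocol):
    # dead-simple ASCII console: dump the counters, then hang up
    def connectionMade(self):
        watched = self.factory.watched
        self.transport.write(
            b'active=%d total=%d\r\n' % (watched.active, watched.total))
        self.transport.loseConnection()

class StatsFactory(protocol.ServerFactory):
    protocol = StatsProtocol

    def __init__(self, watched):
        self.watched = watched

main = MonitoredFactory()
reactor.listenTCP(8000, main)                 # the real service
reactor.listenTCP(8001, StatsFactory(main))   # 'nc localhost 8001' to view
reactor.run()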