How to do asynchronous inserts in Aerospike using the Python client

I am using Aerospike 3.4 and Python Client 1.0.41.
I am able to achieve only around 1,400 writes per second, using synchronous writes on a single thread. Can anyone suggest how to improve the write speed on a single thread? I didn't find an asynchronous write feature in the Python client.
I have seen benchmark results on the web claiming around 800,000 (8 lakh) writes per second on SSD.
My configuration:
Number of nodes: 2
CPUs: 16 per node
Replication factor: 2
Data persistence: SSD
Thanks,
Dhanasekaran

Updated 2015-07-29:
(1) The Python Aerospike client is fully synchronous at the moment. There appeared to be no firm plans for async support in the discussion at https://discuss.aerospike.com/t/gevent-compatibility-or-async-api/1001, but Ronen has since confirmed below (see comments) that async support is planned for all clients in the future.
(2) Regarding the 1.4k TPS: I saw very similar numbers when hosting Aerospike in a VirtualBox VM and connecting from the physical host, which may be down to VirtualBox's networking issues. When the client (the Java benchmark tool) was run on the same VM as the database, my throughput went up to about 8k TPS.

The good news here is that C client 4.0 has been released with asynchronous support: http://www.aerospike.com/download/client/c/notes.html
Since the Python client wraps the C client, there is a very good chance the Python client will gain this feature soon.
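
Until that happens, one stopgap with the current synchronous client is to overlap blocking writes across a small thread pool, which hides per-request network latency. A minimal sketch, assuming a local node at 127.0.0.1:3000, a test namespace, and that a single client instance is shared across threads; the key pattern and worker count are illustrative:

    # Overlap synchronous puts across threads to hide per-request latency.
    # Host, namespace, set, and counts below are placeholders to adjust.
    from concurrent.futures import ThreadPoolExecutor  # 'futures' backport on Python 2.7

    import aerospike

    client = aerospike.client({'hosts': [('127.0.0.1', 3000)]}).connect()

    def write_record(i):
        # The key tuple is (namespace, set, primary key)
        client.put(('test', 'demo', 'key-%d' % i), {'value': i})

    # 8 workers issuing blocking puts concurrently; tune to your latency and CPUs.
    with ThreadPoolExecutor(max_workers=8) as pool:
        list(pool.map(write_record, range(100000)))

    client.close()

This won't match true async throughput, but it usually multiplies single-threaded TPS by close to the worker count until the network or server saturates.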

I've implemented an Aerospike asynchronous client as an open-source project; the source code is at https://github.com/sean-tan-columbia/aerospike-async-client
It has been tested against Aerospike 3.3 with Aerospike Python Client 1.0.38 and Python 2.7.
I only started it recently, so it's not yet mature; you are welcome to improve it!

Related

very slow processing and converting videos to m3u8 using ffmpeg library

We are experiencing very slow processing when converting videos to m3u8 (HLS) using the ffmpeg library.
Note that the operating system used is Ubuntu Server and that the server has ample RAM and CPU, yet we noticed that the processing does not consume the available resources significantly.
Average video size: 1 GB
Average video processing time: 2 - 3 hours
The programming language used: ASP.NET Core 3.1
We need to get processing time down to at most 20 minutes; is that possible?
This problem does not seem to be related to programming. I recommend taking the following steps to test; the results should help you.
1. Use a local computer running Ubuntu with a higher configuration for testing; installing a solid-state drive is recommended.
2. If the Ubuntu server you mentioned is a cloud server, upgrade it to a higher-performance tier for testing. It is best to test locally before deciding whether such an upgrade is needed, to save money.
3. If the above two points are difficult (for example, no such Ubuntu machine is available), test on a personal PC running Windows 10/11 with a solid-state drive: install the ffmpeg environment, make sure other software is closed, and keep only IIS, ffmpeg, and other related services running for the test.
I personally recommend trying the third suggestion first, so that you get test results for the exact problem you care about; a timing sketch follows below. If the 1 GB video can be processed within 20 minutes, then you can consider upgrading the relevant configuration of the Ubuntu server.
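
For step 3, it may help to time one representative conversion directly. A minimal sketch in Python (used only as a convenient timer; the same ffmpeg arguments apply when launched from ASP.NET Core via Process.Start), where the input path, preset, and segment length are placeholders:

    # Time an HLS (m3u8) conversion by invoking ffmpeg directly.
    import subprocess
    import time

    cmd = [
        "ffmpeg", "-y",
        "-i", "input.mp4",          # placeholder input file
        "-c:v", "libx264",
        "-preset", "veryfast",      # faster presets trade output size for encode speed
        "-threads", "0",            # let ffmpeg use all available cores
        "-hls_time", "10",          # 10-second segments
        "-hls_list_size", "0",      # keep every segment in the playlist
        "-f", "hls",
        "output.m3u8",
    ]

    start = time.time()
    subprocess.run(cmd, check=True)
    print("Conversion took %.1f minutes" % ((time.time() - start) / 60))

Also worth checking: if the source streams are already H.264/AAC, replacing the encode options with -codec copy segments the file without re-encoding at all, which is often the difference between hours and minutes.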

Restcomm SMSC Gateway: giving only 50 TPS on a single connection

I am using Restcomm SMSC Gateway 7.3.135. With both client and server running on the same machine, I am getting only 50 TPS on a single connection.
The documentation I have read says it can reach up to 1000 TPS. Please guide me on how I can achieve this.
Thanks
The community edition of Restcomm projects is not performance tested; only the product goes through those performance tests, as it requires a lot of fine tuning in the project itself, in logging, in the OS, and in JVM options, and it also depends on the hardware you're running. You may want to contact Telestax to get help with that, as it's usually pretty involved.

How to run Orion ContextBroker on Raspbian OS?

Is there any possibility of running Orion ContextBroker on Raspberry Pi with Raspbian OS?
The requirements recommended in the Orion documentation are:
Although we haven't done yet a precise profiling on Orion Context Broker, tests done in our development and testing environment show that a host with 2 CPU cores and 4 GB RAM is fine to run the ContextBroker and MongoDB server. In fact, this is a rather conservative estimation, Orion Context Broker could run fine also in systems with a lower resources profile. The critical resource here is RAM memory, as MongoDB performance is related to the amount of available RAM to map database files into memory.
Besides the board's constrained resources, you will have to look for the equivalent required libraries for Raspbian OS.
There is a discussion about it here:
https://github.com/telefonicaid/fiware-orion/issues/15

Difference between Twisted and Tornado when deploying?

I only have a little knowledge about Tornado: when it comes to deployment, the advice is to use Nginx as a load balancer in front of a number of Tornado processes.
How about Twisted? Is it deployed the same way?
If I'm tracking your question right, you seem to be asking: "Should Tornado be front-ended with Nginx, and how about Twisted?"
If that's really where the question is going, then it's got an "it depends" answer, but perhaps in a way that you might not expect. In a runtime sense, Twisted, Tornado, and Nginx are in more ways than not the same thing.
All three of these systems use the same programming pattern at their core: something the OO people call a Reactor pattern, which is often also known as asynchronous I/O event programming, and which old-school Unix people would call a select-style event loop (done via select / epoll / kqueue / WaitForMultipleObjects / etc.).
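To make that pattern concrete, here is a toy select-style echo loop in Python; the port is arbitrary, and real reactors layer buffering, timers, and error handling on top of this:

    # A toy select-style event loop: one thread watches many sockets
    # and reacts as each becomes readable.
    import select
    import socket

    server = socket.socket()
    server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    server.bind(('127.0.0.1', 8000))
    server.listen(100)
    server.setblocking(False)

    sockets = [server]
    while True:
        readable, _, _ = select.select(sockets, [], [])  # block until something is ready
        for sock in readable:
            if sock is server:
                conn, _ = server.accept()   # new client: start watching it
                conn.setblocking(False)
                sockets.append(conn)
            else:
                data = sock.recv(4096)
                if data:
                    sock.send(data)         # echo back
                else:
                    sockets.remove(sock)    # client closed: stop watching
                    sock.close()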
To build up to the answer, some background:
Twisted is a Reactor-based framework that was created for writing Python-based asynchronous I/O projects in their most generic form. So while it's extremely good for web applications (via the Twisted Web sub-module), it's equally good for dealing with serial data (via the SerialPort sub-module) or for implementing a totally different networking protocol like SSH.
Twisted excels in flexibility. If you need to glue many I/O types together, and particularly if you want to connect I/O to the web, it is fantastic. As noted in remudada's answer, it also has an application tool built in (twistd).
As an async I/O framework, Twisted has very few weaknesses. As a web framework, though (while it is actively continuing to evolve), it feels decidedly behind the curve, particularly compared to plugin-rich frameworks like Flask, and Tornado definitely has many web conveniences over Twisted Web.
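For a feel of Twisted Web, a minimal "hello world" under the reactor looks roughly like this (the port and resource are arbitrary):

    # A minimal Twisted Web server: one Resource served by the reactor.
    from twisted.internet import reactor
    from twisted.web.resource import Resource
    from twisted.web.server import Site

    class Hello(Resource):
        isLeaf = True  # handle every path here; no child resources

        def render_GET(self, request):
            return b"Hello from Twisted Web"

    reactor.listenTCP(8080, Site(Hello()))
    reactor.run()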
Tornado is a Reactor-based framework in Python that was created for serving webpages/webapps extremely fast (basically as fast as Python can be made to serve up webpages). The folks who wrote it were aiming for a system so fast that the production code itself could be Python.
The Tornado core is nearly a logical match to the core of Twisted. The cores of these projects are so similar that on modern releases you can run Twisted inside of Tornado or run a Tornado port inside of Twisted.
Tornado is single-mindedly focused on serving webpages/webapps fast. In most applications it will be 20%-ish faster than Twisted Web, but it is nowhere near as flexible in supporting other async I/O uses. It (like Twisted) is still Python-based, so if a given webapp does too much CPU-bound work its performance will fall quickly.
Nginx is a Reactor-based application, written in C, that was created for serving webpages and connection redirection. While Twisted and Tornado use the Reactor pattern to make Python go fast, Nginx takes things to the next step and applies that logic in C.
When people compare Python and C they often talk about Python being 1.2 to 100 times slower. However, in the Reactor pattern (which, when done right, spends most of its time in the operating system), language inefficiency is minimized, as long as not too much logic happens outside of the reactor.
I don't have hard data to back this up, but my expectation is that you would find the simplest "hello world" (i.e. serving a static test page) running no more than 50% slower on Tornado than on Nginx (with Twisted Web being 20% slower than Tornado on average).
Different speeds for the same thing: where does that leave us?
You asked whether "it is better to use Nginx as a load balancer with a number of Tornado processes", so to answer that I need to ask a question back:
Are you deploying in a way where it's critical that you take advantage of multiple cores?
In exchange for blazing async I/O speed, the Reactor pattern has a weakness:
Each Reactor can only take advantage of one processor core.
As you might guess, the flip side of that weakness is that the Reactor pattern uses that core extremely efficiently, and given enough load it should be able to drive that core to near 100% utilization.
This leads back to the type of design you're asking about: the reason for all the background in this answer is that layering these services (Nginx in front of Tornado or Twisted) should only be done to take advantage of multi-core machines.
If you're running on a single-core system (the lowest class of cloud server, or an embedded platform like a Raspberry Pi) you SHOULD NOT front-end a reactor. Doing so will simply slow the whole system down.
If you run more (heavily loaded) reactor services than you have CPU cores, you're also going to be working against yourself.
So:
If you're deploying on a single-core system:
Run one instance of either Tornado or Twisted (or Nginx alone if it's static pages).
If you're trying to fully exploit multiple cores (see the sketch below):
Run Nginx (or Twisted) on the application port, and run one instance of Tornado or Twisted for each remaining processor core.
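
To illustrate the multi-core case, Tornado's HTTPServer can pre-fork one process per core behind a single listening socket (a minimal sketch; in a real deployment Nginx would typically sit in front on port 80):

    # One reactor per core with Tornado's pre-forking server.
    import tornado.httpserver
    import tornado.ioloop
    import tornado.web

    class MainHandler(tornado.web.RequestHandler):
        def get(self):
            self.write("Hello from one of several Tornado processes")

    app = tornado.web.Application([(r"/", MainHandler)])
    server = tornado.httpserver.HTTPServer(app)
    server.bind(8888)
    server.start(0)  # 0 = fork one process per available CPU core
    tornado.ioloop.IOLoop.current().start()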
Twisted is a good enough application server on its own, and I would rather use it as it is (unlike Tornado).
If you look at the official guide http://twistedmatrix.com/documents/11.1.0/core/howto/application.html
you can see how it is set up. Of course you can use uwsgi / nginx / emperor with Twisted, since it can be run as a standard application, but I would suggest doing that only when you really need the scaling and load balancing.
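
As a concrete example of running Twisted "as it is", the guide's approach boils down to a .tac file started with twistd. A minimal sketch (the file name, service name, and port are placeholders):

    # hello.tac -- run with: twistd -y hello.tac
    from twisted.application import internet, service
    from twisted.web.resource import Resource
    from twisted.web.server import Site

    class Hello(Resource):
        isLeaf = True

        def render_GET(self, request):
            return b"Hello from twistd"

    # twistd looks for a module-level variable named "application"
    application = service.Application("hello-web")
    internet.TCPServer(8080, Site(Hello())).setServiceParent(application)

Running it through twistd gives you daemonization, logging, and PID-file handling without any extra code.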

Is Redis a better option for SignalR scale-out than SQL Server, and does each support failover?

In David Fowler's blog, SQL Server has been added to the list of scale-out providers for SignalR (alongside Service Bus).
I am in the process of implementing Redis on our Windows servers. Based on what I know about Redis, I'm guessing it will be significantly faster than using SQL Server; is that a fair assumption?
If so, how does the Windows version of Redis implement fail-over?
Redis is ~200x faster than SQL, mainly because it's in-memory and the protocol is designed for speed.
If that helps, Redis Cloud is now offered on Windows Azure, and HA is a built-in capability of the service.
Disclosure - I'm the Co-Founder & CTO of Garantia Data, the company behind the Redis Cloud service.
Based on what I know about Redis, I'm guessing it will be significantly faster than using SQL Server; is that a fair assumption?
It will be faster than SQL Server since it's optimized for in-memory operations; however, its speed isn't the only advantage. Support for advanced data structures offers a great deal of flexibility when dealing with various scenarios.
If so, how does the Windows version of Redis implement fail-over?
There is a link in the download section to an unofficial Windows-based port of Redis, which, however, isn't meant to be used for production purposes. The official version of Redis supports replication, and Sentinel provides automatic failover, but it's hard to say what the state of these features is in the Windows port. In general I wouldn't recommend running Redis on a Windows machine; rather, use a virtual machine with a Linux distro and run it there.
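
On Linux, failover with the official Redis is usually consumed through Sentinel. A minimal redis-py sketch, assuming Sentinel instances at the addresses below monitoring a master named "mymaster" (both are placeholders):

    # Failover-aware access via Redis Sentinel with redis-py.
    from redis.sentinel import Sentinel

    sentinel = Sentinel([('10.0.0.1', 26379), ('10.0.0.2', 26379)],
                        socket_timeout=0.5)

    master = sentinel.master_for('mymaster', socket_timeout=0.5)   # writes
    replica = sentinel.slave_for('mymaster', socket_timeout=0.5)   # reads

    master.set('counter', 1)   # always routed to whichever node is master now
    print(replica.get('counter'))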