Redis performance: HINCRBYFLOAT or a custom Lua script?

Let's say I have hash data stored in Redis:
{"fee":0.11,"name":"scott"}
Now I want to add some value to the field 'fee'. Should I use the HINCRBYFLOAT command, or write a Lua script to implement it? Please advise from a performance point of view, thanks!

Use HINCRBYFLOAT.
Core commands are more performant than Lua scripts in (probably) every scenario. Use Lua to compose flows that consist of core commands and server-side logic, but not to replace a single core command.
You can, and should, test performance yourself - redis-benchmark can be used for that.
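For illustration, a minimal sketch with the Python redis-py client (the key name 'user:scott' and the 0.04 increment are made up for the example):

import redis

r = redis.Redis(host='localhost', port=6379, decode_responses=True)
# Store the hash from the question under a hypothetical key.
r.hset('user:scott', mapping={'fee': 0.11, 'name': 'scott'})
# HINCRBYFLOAT adds to the field atomically and returns the new value,
# so there is no read-modify-write race and no script to maintain.
new_fee = r.hincrbyfloat('user:scott', 'fee', 0.04)
print(new_fee)  # 0.15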

Related

sails-redis: increment attribute values

I'd like to use sails-redis to track all kinds of events.
Therefore I need the ability to increment model attributes in a performant way.
I already found the Model.native function, which allows me to access the native Redis methods.
But since sails-redis is based on strings and not on hashes, I cannot use any native increment methods (as far as I know).
Is there any clean and performant way to solve this issue?
What sails-redis does is create a database with CRUD methods by using the Redis key-value store, based on strings.
Therefore do not see sails-redis as a wrapper for Redis. Forget about that. It is just another database which has almost nothing to do with Redis.
Use the right tool for the right job!
If you have a job like event tracking where you want to use Redis because of its speed, use node-redis and implement it yourself. sails-redis is just not made for such things.
I simply created a new service and used node-redis. There might be a more elegant way, but mine works and improved performance a whole lot.
https://github.com/balderdashy/sails-redis/issues/34
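The answer above keeps its node-redis service code out of the post; purely as an illustration of the underlying idea (tracking events with atomic hash increments instead of ORM string records), a sketch with Python and redis-py might look like this (the key layout and names are invented):

import datetime

import redis

r = redis.Redis(host='localhost', port=6379)

def track_event(event_name):
    # One hash per day keeps related counters together; HINCRBY is atomic,
    # so concurrent trackers never lose increments.
    key = 'events:%s' % datetime.date.today().isoformat()
    r.hincrby(key, event_name, 1)

track_event('signup')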

How to supply SQL functions and views required for testing Django app

I've created a file <APP>/<MODEL>.sql according to the Django docs in order to make use of a hook that passes arbitrary SQL after syncdb is run. Inside this file are two function declarations for PostgreSQL and a statement that creates a database view. This runs fine in production, but, just as the docs say, the code is not run for the test database, because the docs suggest using fixtures instead. Now my unit tests are missing the crucial database views and functions and thus fail.
How do I test code that relies on raw sql functions / views?
UPDATE
I dug up this ticket which concerns this question directly and also presents a small workaround.
I found the best way to handle this is to put the custom SQL code into Django's migrations.
Django and South (which is the predecessor to Django's own migration framework) both provide commands to create custom (i.e. empty) migrations. The code for creating database views or functions can be put into an empty migration and will be run whenever a new installation of the project is migrated or the test suite is run.
A tutorial on how to use custom migrations for database views with South can be found here. The syntax is a bit different in Django's own migration framework but the documentation about RunSQL explains it all.
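To make this concrete, here is a minimal sketch of such an empty migration using RunSQL (the app label, migration name and view definition are placeholders):

from django.db import migrations

class Migration(migrations.Migration):

    dependencies = [
        ('myapp', '0001_initial'),  # hypothetical preceding migration
    ]

    operations = [
        # RunSQL executes the raw SQL on migrate; reverse_sql makes the
        # migration reversible. Because the test runner migrates the test
        # database, the view exists in unit tests as well.
        migrations.RunSQL(
            sql='CREATE VIEW myapp_summary AS SELECT id, fee FROM myapp_mymodel;',
            reverse_sql='DROP VIEW myapp_summary;',
        ),
    ]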
Just run them natively, as the SQL they are.
or
Use sqlcustom.
or
Don't bother with them; you might find yourself swimming upstream trying to make good use of these functions and views via the ORM.
or
Consider another Python framework (dare I say it) which is more attuned to using native SQL.

PigServer or PigRunner? Which is better?

I have written an embedded Pig program using the PigServer class, but I have come to know that we can also execute queries using the PigRunner class.
Can anyone tell me which one is better? Please explain the reason as well.
PigRunner essentially presents the same interface as the command-line program "pig", with the advantage that it can be called without going to the system shell and that it returns a PigStats object. It is therefore convenient for running complete user-supplied scripts.
PigServer, however, allows on-the-fly creation and registration of queries, and then programmatic iteration over the results. It therefore provides a much more flexible and complete interface to Pig.

Django: run queries depending on SQL engine

In Django 1.2.3 I need to perform some queries that are not feasible with pure Django ORM functions. E.g.
result = MyModel.objects.extra(select={'stddev': 'STDDEV_SAMP(value)'}).values()
But I need to run this code on several SQL engines (SQLite, MySQL and MSSQL), so I would have to test settings.DATABASES['default']['ENGINE'] and run engine-specific code.
Is there a more Django-like approach to this problem? (e.g. a user-defined function to put somewhere so that Django runs it according to the default database engine)
Thank you
The proper place to store the code for accessing data is in a method in the model layer. That way, the model can:
be environment-aware
construct custom queries
use built-in ORM functions
These can be swapped around, optimized, and tweaked, without the rest of your application having to change a bit, because the rest of your application only manipulates data through your model.
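As an illustration of such an environment-aware model method, here is a hedged sketch (the model, field and table names are invented; on recent Django versions connection.vendor identifies the backend, while on 1.2 you would inspect settings.DATABASES['default']['ENGINE'] instead):

import statistics

from django.db import connection, models

class MyModel(models.Model):
    value = models.FloatField()

    @classmethod
    def stddev_of_values(cls):
        # The engine check lives in one place; callers never see it.
        if connection.vendor == 'sqlite':
            # SQLite ships without STDDEV_SAMP, so fall back to Python.
            values = list(cls.objects.values_list('value', flat=True))
            return statistics.stdev(values) if len(values) > 1 else 0.0
        cursor = connection.cursor()
        cursor.execute('SELECT STDDEV_SAMP(value) FROM %s' % cls._meta.db_table)
        return cursor.fetchone()[0]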

What's the best build system for building a database?

This is a problem that I come to on occasion and have yet to work out an answer that I'm happy with. I'm looking for a build system that works well for building a database - that is, running all of the SQL files against the correct database instance, as the correct user, and in the correct order, and handling dependencies and the like properly.
I have a system that I hacked together using GNU Make and it works, but it's not especially flexible and frankly can be a bit of a pain to work with in some situations. I've considered looking at things like SCons and CMake too, but I don't know how much better they are likely to be, or whether a better system already exists out there...
Just a shell script that runs all the create statements and imports in the proper order. You may also find migrations (which come with Rails) interesting. They provide a make-like infrastructure that lets you maintain a database whose structure evolves over time.
Say you add a new column to some table. In migrations you'd write a snippet of code which describes the requirements for adding the column and also how to roll back the change, so you can switch between different versions of your schema automatically.
I'm not a big fan of the tight integration with Rails, though, but the principles behind it are very interesting.
For SQL Server, I just use a batch file with SQLCMD.EXE and a bunch of .SQL files. It's not perfect, but it seems to work.
For my database, I use Migrator.NET.
This is a .NET framework which allows you to create classes in which you define your DDL statements.
The framework comes with a command-line tool with which you can execute your 'migrations' in the correct order.
It also has an MSBuild task, so you can integrate it into a continuous integration build as well.
First, export full DDL files describing all tables, views, source code (procedures, functions, packages), sequences, and grants of a DB schema.
See
Is there a tool to generate a full database DDL for SQL Server? What about Postgres and MySQL?
I created a database build system (part SQL parser, part makefile) to put these files together in a DB creation script using Python.
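The answer doesn't include the script itself; the core of such a build step can be as small as this sketch (the file layout, naming scheme and SQLite driver are assumptions; any DB-API driver would do):

import glob
import os
import sqlite3  # stand-in engine; swap in the DB-API driver for your database

def build_database(db_path, sql_dir):
    # Files are named like 001_tables.sql, 002_views.sql, ... so that
    # lexicographic order is also dependency order.
    conn = sqlite3.connect(db_path)
    try:
        for path in sorted(glob.glob(os.path.join(sql_dir, '*.sql'))):
            with open(path) as f:
                conn.executescript(f.read())  # run every statement in the file
        conn.commit()
    finally:
        conn.close()

if __name__ == '__main__':
    build_database('app.db', 'sql')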