In the documentation about scripting in Redis the following is stated:
Redis guarantees the script's atomic execution. While executing the script, all server activities are blocked during its entire runtime.
Do I understand correctly that really all server activities are blocked? In the call to EVAL you state the keys that are modified, so why doesn't Redis block activity only for those keys?
Thanks in advance for any clarification!
Because Redis runs commands one by one in a single thread. When it runs your script, i.e. EVAL, it cannot run other commands until it finishes the script.
So never run a complicated Lua script, since it might block Redis for a long time and hurt performance.
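The single-threaded execution model can be pictured with a toy sketch (plain Python, purely illustrative; real Redis is a C event loop and none of these names come from its codebase):

```python
import queue

# Toy model of Redis's single-threaded command loop (illustrative only).
commands = queue.Queue()
commands.put(("SET", "a", "1"))
commands.put(("EVAL", "long-script"))   # blocks everything queued behind it
commands.put(("GET", "a"))

processed = []
while not commands.empty():
    cmd = commands.get()
    # Every command, including EVAL, runs to completion before the
    # next one is dequeued -- there is no interleaving.
    processed.append(cmd[0])

print(processed)  # ['SET', 'EVAL', 'GET'] in strict arrival order
```

Because EVAL occupies that one loop for the script's entire runtime, everything queued behind it waits, including commands on unrelated keys.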
I have some scripts that touch a handful of keys. What happens if ElastiCache decides to reshard while my script is running? Will it wait for my script to complete before it moves the underlying keys? Or should I assume that this is not the case and design my application with this edge case in mind?
One example would be a script that increments two keys at once. I could receive a "cluster error", which means something went wrong and I have to execute my script again (and potentially end up with one key being incremented twice and the other once).
Assuming you are talking about a Lua script: as long as you pass the keys as arguments (rather than hardcoding them in the script), you should be good. It will be all or nothing. If you are not using a Lua script, consider doing so.
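The all-or-nothing behavior can be modeled with a toy sketch (plain Python against a dict, purely illustrative; `incr_both_script` stands in for an EVAL'd script and is not real client code):

```python
# Toy model of why an atomic script avoids partial updates.
store = {"a": 0, "b": 0}

def incr_both_non_atomic(fail_between=False):
    store["a"] += 1
    if fail_between:
        raise RuntimeError("cluster error")  # a blind retry re-runs INCR a
    store["b"] += 1

def incr_both_script():
    # Models EVAL: the whole body executes as one atomic unit,
    # so either both increments apply or neither does.
    staged = dict(store)
    staged["a"] += 1
    staged["b"] += 1
    store.update(staged)          # all-or-nothing commit

try:
    incr_both_non_atomic(fail_between=True)
except RuntimeError:
    pass
print(store)   # {'a': 1, 'b': 0} -- partial update after the failure

store.update({"a": 0, "b": 0})
incr_both_script()
print(store)   # {'a': 1, 'b': 1} -- both applied together
```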
From EVAL command:
All Redis commands must be analyzed before execution to determine
which keys the command will operate on. In order for this to be true
for EVAL, keys must be passed explicitly. This is useful in many ways,
but especially to make sure Redis Cluster can forward your request to
the appropriate cluster node.
From AWS ElastiCache - Best Practices: Online Cluster Resizing:
During resharding, we recommend the following:
Avoid expensive commands – Avoid running any computationally and I/O
intensive operations, such as the KEYS and SMEMBERS commands. We
suggest this approach because these operations increase the load on
the cluster and have an impact on the performance of the cluster.
Instead, use the SCAN and SSCAN commands.
Follow Lua best practices – Avoid long running Lua scripts, and always
declare keys used in Lua scripts up front. We recommend this approach
to determine that the Lua script is not using cross slot commands.
Ensure that the keys used in Lua scripts belong to the same slot.
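The "same slot" requirement is usually met with hash tags. Here is a sketch of the hash-tag rule from the Redis Cluster specification (plain Python; `same_slot_guaranteed` is a hypothetical helper name, not a client API):

```python
# Redis Cluster hash-tag rule: if a key contains a non-empty "{...}"
# section, only that section is hashed, so keys sharing the same tag
# are guaranteed to land in the same slot.
def hash_tag(key: str) -> str:
    start = key.find("{")
    if start != -1:
        end = key.find("}", start + 1)
        if end > start + 1:          # tag must be non-empty
            return key[start + 1:end]
    return key                       # otherwise the whole key is hashed

def same_slot_guaranteed(*keys):
    return len({hash_tag(k) for k in keys}) == 1

print(same_slot_guaranteed("{user:1}:count", "{user:1}:total"))  # True
print(same_slot_guaranteed("user:1:count", "user:2:total"))      # False
```

Note that a False result only means the same slot is not guaranteed; two untagged keys may still happen to hash to the same slot.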
As far as I know, Redis is a single-threaded solution from the client's point of view.
But what about the general architecture?
Assume we have some Lua script that is going to execute several commands on keys that have a TTL.
How does Redis garbage collection work? Could it interrupt the EVAL execution and evict some value, or do internal tasks share the single thread with user tasks?
Lua is magic, and because that is the case, time stops while Redis is doing Lua. Put differently, expiration stops once you start running the script, in the sense that time does not advance. However, if a key expired before the script started, it will not be available for the script to use.
I want to be able to access a very recent copy of my master Redis server's keys. It doesn't have to be completely up to date, as I will be polling the read-only copy, but I don't want the transactions and Lua scripts I run on the master instance to block the read-only instance while I SCAN through its keys.
Can anyone confirm/deny this behaviour?
It won't block the slaves from anything, but while the master is busy processing your logic, replication will be stopped. Once the logic ends (possibly generating writes), replication will resume with the previously buffered contents and the new ones (if any).
I read about VoltDB's command log. The command log records the transaction invocations instead of each row change as in a write-ahead log. By recording only the invocation, the command logs are kept to a bare minimum, limiting the impact the disk I/O will have on performance.
Can anyone explain the database theory behind why VoltDB uses a command log and why standard SQL databases such as Postgres, MySQL, SQL Server, and Oracle use a write-ahead log?
I think it is better to rephrase:
Why does the new distributed VoltDB use a command log over a write-ahead log?
Let's do an experiment and imagine you are going to write your own storage/database implementation. Undoubtedly you are advanced enough to abstract a file system and use block storage along with some additional optimizations.
Some basic terminology:
State: stored information at a given point in time
Command: a directive to the storage to change its state
So your database state lives in a set of storage blocks. The next step is to execute some command against that state.
Please note several important aspects:
A command may affect many stored entities, so many blocks will get dirty
The next state is a function of the current state and the command
Some intermediate states can be skipped, because it is enough to keep a chain of commands instead
Finally, you need to guarantee data integrity.
Write-Ahead Logging - the central concept is that state changes should be logged before any heavy update to permanent storage. Following our idea, we can log incremental changes for each block.
Command Logging - the central concept is to log only the command, which is used to produce the state.
There are pros and cons to both approaches. A write-ahead log contains all the changed data; a command log requires additional processing on replay, but is fast and lightweight to write.
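The difference can be made concrete with a toy sketch (plain Python, illustrative only; real WALs log physical page changes, not dict entries):

```python
# Toy contrast of the two logging schemes. State is a dict of "blocks";
# a command is (op, key, arg). The WAL records resulting block values
# (the effects); the command log records just the invocation.
state = {}
wal, command_log = [], []

def apply(cmd, target):
    op, key, arg = cmd
    if op == "set":
        target[key] = arg
    elif op == "incr":
        target[key] = target.get(key, 0) + arg

for cmd in [("set", "a", 1), ("incr", "a", 5), ("set", "b", 2)]:
    command_log.append(cmd)              # log the invocation only (tiny)
    apply(cmd, state)
    wal.append((cmd[1], state[cmd[1]]))  # log the resulting block value

# Recovery from WAL: blindly re-write the logged effects.
from_wal = {}
for key, value in wal:
    from_wal[key] = value

# Recovery from command log: re-execute every command in order.
from_cmd = {}
for cmd in command_log:
    apply(cmd, from_cmd)

print(from_wal == from_cmd == state)  # True: both reach the same state
```

WAL recovery is dumb and fast (pure writes); command-log recovery must re-run the actual logic, which is the "additional processing" mentioned above.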
VoltDB: Command Logging and Recovery
The key to command logging is that it logs the invocations, not the
consequences, of the transactions. By recording only the invocation,
the command logs are kept to a bare minimum, limiting the impact the disk I/O will
have on performance.
Additional notes
SQLite: Write-Ahead Logging
The traditional rollback journal works by writing a copy of the
original unchanged database content into a separate rollback journal
file and then writing changes directly into the database file.
A COMMIT occurs when a special record indicating a commit is appended
to the WAL. Thus a COMMIT can happen without ever writing to the
original database, which allows readers to continue operating from the
original unaltered database while changes are simultaneously being
committed into the WAL.
PostgreSQL: Write-Ahead Logging (WAL)
Using WAL results in a significantly reduced number of disk writes,
because only the log file needs to be flushed to disk to guarantee
that a transaction is committed, rather than every data file changed
by the transaction.
The log file is written sequentially, and so the
cost of syncing the log is much less than the cost of flushing the
data pages. This is especially true for servers handling many small
transactions touching different parts of the data store. Furthermore,
when the server is processing many small concurrent transactions, one
fsync of the log file may suffice to commit many transactions.
Conclusion
Command Logging:
is faster
has lower footprint
has a heavier "replay" procedure
requires frequent snapshots
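The last two points go together: replay cost grows with the number of commands logged since the last snapshot, which is why command logging needs frequent snapshots. A toy sketch (plain Python, illustrative only; no VoltDB APIs involved):

```python
# Toy model of command-log recovery with periodic snapshots.
state = {"counter": 0}
command_log = []
snapshot, snapshot_len = dict(state), 0

def apply(target, amount):
    target["counter"] += amount

for amount in range(1, 91):
    command_log.append(amount)            # log the invocation (tiny)
    apply(state, amount)
    if len(command_log) % 25 == 0:        # periodic snapshot
        snapshot, snapshot_len = dict(state), len(command_log)

# Recovery: restore the last snapshot, then replay only the log tail.
recovered = dict(snapshot)
for amount in command_log[snapshot_len:]:
    apply(recovered, amount)

print(recovered == state)                  # True
print(len(command_log) - snapshot_len)     # 15 commands had to be replayed
```

Less frequent snapshots would shrink snapshot I/O but lengthen that replay tail, and with it the recovery time.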
Write-ahead logging is a technique to provide atomicity and durability. Command logging's better performance should also improve transaction processing.
Confirmation
VoltDB Blog: Intro to VoltDB Command Logging
One advantage of command logging over ARIES style logging is that a
transaction can be logged before execution begins instead of executing
the transaction and waiting for the log data to flush to disk. Another
advantage is that the IO throughput necessary for a command log is
bounded by the network used to relay commands and, in the case of
Gig-E, this throughput can be satisfied by cheap commodity disks.
It is important to remember that VoltDB is distributed by nature, so transactions are a little tricky to handle and the performance impact is noticeable.
VoltDB Blog: VoltDB’s New Command Logging Feature
The command log in VoltDB consists of stored procedure invocations and
their parameters. A log is created at each node, and each log is
replicated because all work is replicated to multiple nodes. This
results in a replicated command log that can be de-duped at replay
time. Because VoltDB transactions are strongly ordered, the command
log contains ordering information as well. Thus the replay can occur
in the exact order the original transactions ran in, with the full
transaction isolation VoltDB offers. Since the invocations themselves
are often smaller than the modified data, and can be logged before
they are committed, this approach has a very modest effect on
performance. This means VoltDB users can achieve the same kind of
stratospheric performance numbers, with additional durability
assurances.
From the description of Postgres's write-ahead log (http://www.postgresql.org/docs/9.1/static/wal-intro.html) and VoltDB's command log (which you referenced), I can't see much difference at all. It appears to be the identical concept with a different name.
Both sync only the log file to disk, not the data, so that the data can be recovered by replaying the log file.
Section 10.4 of the VoltDB documentation explains that their community version does not have a command log, so it would not pass the ACID test. Even in the enterprise edition, I don't see the details of their transaction isolation (e.g. http://www.postgresql.org/docs/9.1/static/transaction-iso.html) needed to make me comfortable that VoltDB is as serious as Postgres.
With WAL, readers read pages from unflushed logs; no modification is made to the main DB. With command logging, you have no ability to read from the command log.
Command logging is therefore vastly different. VoltDB uses command logging to create recovery points and ensure durability, sure - but it is writing to the main db store (RAM) in real time - with all the attendant locking issues, etc.
The way I read it is as follows (my own opinion):
Command logging as described here logs only transactions as they occur, not what happens in or to them. OK, so here is the magic piece... If you want to roll back, you need to restore the last snapshot and then replay all the transactions that were applied after it (described in the link above). So effectively you are restoring a backup and re-applying all your scripts, only VoltDB has now automated it for you.
The real difference I see is that you cannot roll back to a point in time logically, as with a normal transaction log. Normal transaction logs (MSSQL, MySQL, etc.) can easily roll back to a point in time (in the correct setup), since the transactions can be 'reversed'.
An interesting question comes up - referring to the post by pedz: will it always pass the ACID test, even with the command log? Will do some more reading...
Add: I did more reading, and I don't think this is a good idea for very big and busy transactional databases. A DB snapshot is automatically created when the command logs fill up, to save you from big transaction logs and the I/O they use. But you are going to incur large amounts of I/O with snapshots being taken at regular intervals, and you are also using your memory to the brink. Also, in my view you lose the ability to easily roll back to a point in time before the last automatic snapshot - I think this will get very tricky to manage.
I'll rather stick to Transaction Logs for Transactional systems. It's proven and it works.
It's really just a matter of granularity. They log operations at the level of stored procedures; most RDBMSs log at the level of individual statements (and 'lower'). Also, their blurb regarding advantages is a bit of a red herring:
One advantage of command logging over ARIES style logging is that a
transaction can be logged before execution begins instead of executing
the transaction and waiting for the log data to flush to disk.
They have to wait for the command to be logged too; it's just a much smaller record.
If I'm not mistaken, VoltDB's unit of transaction is a stored procedure. Traditional RDBMSs usually need to support ad-hoc transactions containing any number of statements, so procedure-level logging is out of the question. Furthermore, stored procedures are often not truly deterministic in traditional RDBMSs (i.e. given params + log + data, they always produce the same output), which they would have to be for this to work.
Nevertheless, the performance improvements would be substantial for this constrained RDBMS model.
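The determinism requirement is easy to see in a toy sketch (plain Python, illustrative only): replaying a command log reproduces the state only if each procedure is a pure function of its parameters and the prior state.

```python
import random

# Command-log replay only works for deterministic procedures:
# same parameters + same starting state => same result.
def set_to_param(state, n):        # deterministic
    state["x"] = n * 2

def set_to_random(state, n):       # nondeterministic: breaks replay
    state["x"] = random.randint(0, n)

def run_and_replay(proc):
    original, log = {}, []
    for n in (10, 20):
        log.append(n)              # the command log stores parameters only
        proc(original, n)
    replayed = {}
    for n in log:                  # recovery: re-execute from the log
        proc(replayed, n)
    return original == replayed

print(run_and_replay(set_to_param))   # True: replay reproduces the state
print(run_and_replay(set_to_random))  # usually False: replay diverges
```

A write-ahead log has no such constraint, because it records the effects rather than the invocations.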
A few terms before I start explaining:
Logging schemes: databases use logging schemes such as shadow paging and write-ahead logging (WAL) to implement concurrency, isolation, and durability (how is a different topic).
In order to understand why WAL is better, let's look at an issue with shadow paging. In shadow paging, the database keeps a master version and a shadow version of the database, so that if the table has a billion rows and the buffer pool manager does not have enough memory to hold all the tuples (records) in memory, the dirty pages are not written to the master version until the transaction(s) commit.
Once all the transactions are committed, a flag is switched and the shadow version becomes the master version. The pages belonging only to the old master version become stale and can be garbage collected.
The issue with this approach is the large number of fragmented, randomly located tuples left behind; writing them is slower than accessing dirty pages sequentially, which is what a write-ahead log does.
The other advantage of using WAL is runtime performance (you are not doing random I/O to flush out the pages), at the cost of slower recovery. With shadow paging, recovery is faster, but recovery is only needed occasionally.
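Shadow paging can be sketched like this (plain Python, illustrative only; page contents are strings standing in for whole pages):

```python
# Toy model of shadow paging: writes go to freshly copied "shadow"
# pages; commit atomically switches the page table, leaving the old
# master pages as garbage.
pages = {1: "old-A", 2: "old-B"}          # physical page store
next_page_id = 3
master_table = {"A": 1, "B": 2}           # logical name -> page id

def write(shadow_table, name, value):
    global next_page_id
    pages[next_page_id] = value            # copy-on-write to a new page
    shadow_table[name] = next_page_id
    next_page_id += 1

# A transaction works on a shadow copy of the page table.
shadow = dict(master_table)
write(shadow, "A", "new-A")
write(shadow, "B", "new-B")

master_table = shadow                      # commit: atomic pointer switch
garbage = [pid for pid in pages if pid not in master_table.values()]
print([pages[master_table[n]] for n in ("A", "B")])  # ['new-A', 'new-B']
print(garbage)  # [1, 2] -- the old pages, now reclaimable
```

Commit is a single pointer switch, which is what makes it atomic; the cost is the scattered old pages that must later be reclaimed, and the random I/O of writing new pages wherever free space happens to be.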