WL commands seem to execute on their own schedule, independently of the rest of the running script. While a WL command is still being processed, the script following it continues to run, which causes problems because the logic executes out of order.
The same thing happens between WL commands themselves. If you have consecutive WL commands that need to run in a specific order, they instead run concurrently, each at its own pace, so a later WL command can finish before an earlier one, producing a sequencing error, which is obviously problematic.
Is there an awareness of this problem, and are there any known solutions? Having WL commands execute at their own pace and getting ahead of the code that follows them just won't work well, and results in too many difficulties.
WL calls are asynchronous... you need to chain your code to achieve the ordering you require, for example using the .then()/.fail() callbacks on the promises returned by the JSONStore API.
I have an issue with Lua scripts running on Redis.
When running the INFO command, I see:
used_memory_lua_human:1.08G
Our usage of Lua is not extensive (single SET and GET commands).
How can I reduce this value?
Apparently Redis caches every Lua script that runs on it, to avoid having to load it again.
This is a good feature as long as the set of scripts is limited.
The problem is caused by the fact that we format the script with a different variable on every execution, so it gets a different identifier (and a new cache entry) every time.
The solution I found is to run the SCRIPT FLUSH command after every execution, to remove the script from the cache.
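For illustration, here is a minimal sketch of that workaround using the Jedis client (the client choice, host, and key names are my own, not from the question):

```java
import redis.clients.jedis.Jedis;

import java.util.Collections;

public class LuaCacheWorkaround {
    public static void main(String[] args) {
        try (Jedis jedis = new Jedis("localhost", 6379)) {
            // A script whose text changes on every execution:
            // this is exactly what makes the server-side cache grow.
            String script =
                "return redis.call('SET', KEYS[1], '" + System.nanoTime() + "')";

            // EVAL caches the script body under its SHA1 on the server.
            jedis.eval(script, Collections.singletonList("mykey"),
                       Collections.emptyList());

            // Drop the whole script cache so used_memory_lua stops growing.
            jedis.scriptFlush();
        }
    }
}
```

Note that passing the changing value via ARGV instead of formatting it into the script body would keep the script text constant and avoid the cache growth in the first place.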
Why this happens is explained very well by gCoh's answer.
The only problem is that running SCRIPT FLUSH can lead to a lot of errors, and even lost work, if you use EVALSHA to run Lua scripts. (For example, a library like scripto runs Lua scripts using EVALSHA.) Read more about EVALSHA here.
EVALSHA keeps the script cached on the Redis server and hands out a hash when you load the script for the first time. When you run the script a second time, rather than sending the whole script to the server, the client just sends the hash, and Redis executes the cached script.
Now if you run SCRIPT FLUSH, it removes all the cached scripts, and your SHA hashes become stale, which leads to runtime errors when a script is executed but no longer found.
For me it was this: NOSCRIPT No matching script. Please use EVAL.
This should be handled in the scripto library: if the script isn't found, it should be loaded again. Until then, please take care that this doesn't happen to you. A sketch of that fallback pattern follows.
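Here is a hedged sketch of such a fallback using the Jedis client (the helper name is hypothetical):

```java
import redis.clients.jedis.Jedis;
import redis.clients.jedis.exceptions.JedisDataException;

import java.util.List;

public class EvalshaWithFallback {
    // Runs a script via EVALSHA, reloading it if the server's
    // script cache was flushed in the meantime.
    static Object safeEvalsha(Jedis jedis, String script, String sha,
                              List<String> keys, List<String> args) {
        try {
            return jedis.evalsha(sha, keys, args);
        } catch (JedisDataException e) {
            // "NOSCRIPT No matching script. Please use EVAL."
            if (e.getMessage() != null && e.getMessage().startsWith("NOSCRIPT")) {
                String newSha = jedis.scriptLoad(script); // re-cache and retry
                return jedis.evalsha(newSha, keys, args);
            }
            throw e;
        }
    }
}
```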
First of all, my understanding is: Redis is a single-process program, and all commands are executed in first-in-first-out order. If that were the whole story we wouldn't need the WATCH command, but WATCH exists, so it can't be.
I want to find out more about the order of execution of Redis commands. Thanks in advance.
You are correct: the Redis server executes commands in the order they are received, independently of which client sent them.
That said, it is interesting to know about features like transactions and pipelining, which do not directly change the execution order (not entirely true for transactions, as you will see below).
Transactions
In a transaction, "all the commands in a transaction are serialized and executed sequentially". All the commands are executed as a single isolated operation.
So while the commands of a transaction are running, it is not possible for a command from another client to be executed before the end of the transaction.
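A minimal sketch with the Jedis client (assuming a local Redis; the key names are illustrative):

```java
import redis.clients.jedis.Jedis;
import redis.clients.jedis.Transaction;

public class TransactionDemo {
    public static void main(String[] args) {
        try (Jedis jedis = new Jedis("localhost", 6379)) {
            Transaction tx = jedis.multi(); // MULTI: start queuing commands
            tx.set("key1", "value1");
            tx.incr("counter");
            // EXEC: run all queued commands as one isolated, sequential unit;
            // no other client's command is interleaved.
            tx.exec();
        }
    }
}
```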
Pipelining
As described above, operations are executed in order (FIFO); using pipelining does not change that. What is different is that the client can send multiple commands without waiting for each response.
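A minimal pipelining sketch, again with Jedis (assumed client; keys are illustrative):

```java
import redis.clients.jedis.Jedis;
import redis.clients.jedis.Pipeline;
import redis.clients.jedis.Response;

public class PipelineDemo {
    public static void main(String[] args) {
        try (Jedis jedis = new Jedis("localhost", 6379)) {
            Pipeline p = jedis.pipelined();
            // Commands are sent without waiting for the individual replies...
            p.set("a", "1");
            p.set("b", "2");
            Response<String> b = p.get("b");
            // ...and all replies are read back together here.
            p.sync();
            System.out.println(b.get()); // "2": commands still ran in FIFO order
        }
    }
}
```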
I'll let you look into the details of all this and test it in your application if needed.
We have TIMEOUT 3600 used across many execute_process calls.
Some of our slow servers often exceed the timeout and builds fail as expected.
However, this often causes issues, and we keep increasing the timeout intervals.
Now I'm thinking of removing the timeout completely.
Is there a reason for using the TIMEOUT option other than making the build fail?
Why should we have TIMEOUT at all (e.g., none of the add_custom_target commands has this feature)? Why use it only in execute_process? (This request is not about CTest.)
Turning my comment into an answer
Generally speaking, I would remove the individual timeouts.
I have a timeout for the complete build process running on a Jenkins server. There it calculates an average over the last builds, and you can specify by what percentage your build may deviate (I use "Timeout as a percentage of recent non-failing builds" at 400%, so only the really hanging tasks get terminated).
In our environment we have quite a few long-running functional tests which currently tie up build agents and force other builds to queue. Since these agents are only waiting on test results, they could theoretically just hand the tests off to other machines (test agents) and then run queued builds until the test results are available.
For CI builds (including unit tests) this should remain inline, as we want instant feedback on failures, but it would be great to get a better balance between the time taken to run functional tests, the lead time of their results, and the throughput of our collective builds.
As far as I can tell, TeamCity does not natively support this scenario so I'm thinking there are a few options:
Spin up more agents and assign them to a 'Test' pool. Trigger functional build configs to run on these agents (triggered by successful CI builds). While this seems the cleanest, it doesn't scale very well: we then have a lead time for purchasing licenses, and we will often need to run tests in alternate environments, which would temporarily double (or more) the required number of test agents.
Add builds or build steps to launch tests on external machines, then immediately mark the build as successful so queued builds can be processed; then, when the tests are complete, mark the build as succeeded/failed. This relies on being able to update the results of a previous build (via the REST API, perhaps?). It also feels ugly to mark something as successful and update it to failed later, but we could always be selective in what we monitor so we only see the final result.
Just keep spinning up agents until we no longer have builds queueing. The problem with this is that it's a moving target. If we knew where the plateau was (or whether it existed), this would be the way to go, but our usage pattern means this isn't viable.
Has anyone had success with a similar scenario, or knows pros/cons of any of the above I haven't thought of?
Your description of the available options seems to be pretty accurate.
If you want live update of the builds progress you will need to have one TeamCity agent "busy" for each running build.
The only downside here seems to be the agent licenses cost.
If the testing builds just launch processes on other machines, the TeamCity agent processes themselves can run on a low-end machine, and you can even run many agents on a single computer.
An extension to your second scenario could be two build configurations instead of a single one: one would start the external process, and the other would be triggered on the external process's completion and then publish all the external process's results as its own. It could also have a snapshot dependency on the starting build to maintain the relation.
For anyone curious, we ended up buying more agents and assigning them to a test pool. Investigation proved that it isn't possible to update build results (I can definitely understand why this ugliness isn't supported out of the box).
I have a Java JDBC application running against an Oracle 10g database. I set up a PreparedStatement to execute a query and then call ps.executeQuery() to run it. Occasionally the query takes a long time and I need to kill it. I have another thread access that PreparedStatement object and call cancel() on it. Roughly, the setup looks like the sketch below.
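(A simplified sketch; the connection details, query, and timeout are placeholders.)

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class CancelDemo {
    public static void main(String[] args) throws Exception {
        Connection conn = DriverManager.getConnection(
                "jdbc:oracle:thin:@//dbhost:1521/ORCL", "user", "password");
        PreparedStatement ps = conn.prepareStatement(
                "SELECT * FROM some_big_table WHERE some_col = ?");
        ps.setString(1, "value");

        // Watchdog thread: cancel the statement if it runs too long.
        Thread watchdog = new Thread(() -> {
            try {
                Thread.sleep(30_000);
                ps.cancel(); // ask the driver/DB to abort the statement
            } catch (Exception ignored) {
            }
        });
        watchdog.start();

        try (ResultSet rs = ps.executeQuery()) { // blocks until done or cancelled
            while (rs.next()) {
                // process rows...
            }
        }
    }
}
```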
My question is, does this actually kill the query in the database? Or does it just sever it from the client, and the query is still running somewhere in the bowels of Oracle?
Thanks!
Please note that what I say below is based on observations and inferences of Oracle in use, and is not based on any deep understanding of Oracle's internals. None of it should be considered authoritative.
What ninesided said in their first paragraph is correct. However, beware the suggested test: not all long-running Oracle queries are the same. Queries seem to be evaluated in two phases: a first phase that gathers enough data to know how to return the rows in the right order, and a second phase that returns the rows, filling in the gaps it didn't compute in the first phase. The division of work between the two phases is also affected by the settings of the cost-based optimizer, e.g. first-rows vs. all-rows.
Now, if the query is in phase 1, the cancel request seems at best to be queued up and applied at the end of phase 1, which means the query keeps running in the meantime.
In phase 2, rows are returned in batches, and after each batch the cancel command can take effect; so, assuming the driver supports the command, the cancel request will result in the query being killed.
The specification for the JDBC cancel command does not seem to say what should happen if the query does not stop running, so the command may wait for confirmation of the kill, or may time out and return with the query still running.
The answer is that it's a quality-of-implementation issue. If you look at the javadoc for Statement.cancel(), it says it'll happen "if both the DBMS and driver support aborting an SQL statement".
In my experience with various versions of Oracle JDBC drivers, Statement.cancel() seems to do what you'd want. The query seems to stop executing promptly when cancelled.
It depends on the driver you are using: both the driver and the database need to support the cancellation of a statement for it to work as you want. Oracle does support this feature, but you'd need to know more about the specific driver you are using to know for sure.
Alternatively, you could run a simple test: initiate a long-running query, cancel it, and look at what the database is doing.
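One hedged way to run such a check from JDBC is to watch the session's status in Oracle's V$SESSION view after the cancel (a sketch; it assumes you have SELECT privilege on V$SESSION, and sessionUser is a placeholder):

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class SessionCheck {
    // Prints the status of the sessions for a given user so you can
    // see whether the cancelled query's session is still ACTIVE.
    static void printSessionStatus(Connection conn, String sessionUser)
            throws Exception {
        String sql = "SELECT sid, serial#, status, sql_id "
                   + "FROM v$session WHERE username = ?";
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            // Note: Oracle stores usernames in upper case.
            ps.setString(1, sessionUser);
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    System.out.printf("sid=%d serial#=%d status=%s sql_id=%s%n",
                            rs.getInt(1), rs.getInt(2),
                            rs.getString(3), rs.getString(4));
                }
            }
        }
    }
}
```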