I have a series of processes in a Nextflow pipeline, employing multiple heavy computing steps and database (SQL) insertion/fetch steps. I need to insert certain (intermediate) process results into the DB and fetch them later for further processing (within the same pipeline). In the most simplified form it would look something like:
process1 (fetch data from DB)
process2 (analyze process1.out)
process3 (inserts process2.out to DB)
The problem is that when any values change in the DB, the output from process1 is still cached (when using the -resume flag), so changes in the DB are not reflected there at all.
Is there any way to force process1 to be re-executed when using -resume, ignoring its cache?
So far I have been manually deleting the respective work folder, or adding a dummy line to process1, but that is an extremely inefficient solution.
Thanks for any help here.
Result caching is enabled by default, but this feature can be disabled using the cache directive by setting its value to false. For example:
process process1 {
    cache false
    ...
}
Not sure if we have the full picture here, but updating a database with some set of process results just to fetch them again later on seems wasteful. Or maybe I've just misunderstood. I would instead try to separate the heavy computational work (hours) from the database transactions (minutes) if at all possible. Note that if you need to make per-process database transactions, you might be able to achieve this using the beforeScript and afterScript directives (which can be enabled/disabled using a nextflow.config profile, for example). For example, a beforeScript could be used to create a database object that is then updated (using an afterScript) once the process has completed. Since both of these scripts are run from inside the workDir, you could use the basename of the current working directory (i.e. the task UUID) as a key; a sketch of this idea follows.
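A minimal sketch of that idea; log_task_start.sh, log_task_finish.sh, and my_heavy_analysis.py are hypothetical placeholders for your own database helpers and analysis command:

process process2 {
    // both directives run inside the task workDir, so the basename of
    // $PWD (the unique task hash) can serve as the database key
    beforeScript 'log_task_start.sh --key $(basename $PWD)'
    afterScript 'log_task_finish.sh --key $(basename $PWD)'

    input:
    path results

    output:
    path 'analyzed.txt'

    script:
    """
    my_heavy_analysis.py ${results} > analyzed.txt
    """
}

Setting the two directives from a profile-guarded block in nextflow.config would then let you switch the database bookkeeping on and off per run.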
I have a question about the Snowflake COPY INTO command; I searched but did not find my answers.
Suppose I want to push data from Snowflake to an S3 bucket using the COPY INTO command in my code. How will I know the file is ready or the command is completed, so that I can read the file from the S3 location?
You can do the following things to check whether your COPY INTO was successful, or at least to retrieve some useful information about your command:
Set DETAILED_OUTPUT = TRUE and check the result (this way you get information about every single unloaded file in the output; if set to FALSE, you only receive information about the whole unload process); see the example after this list.
Query your stage using the syntax described here: https://docs.snowflake.com/en/user-guide/querying-stage.html
Query the metadata of your staged data using metadata$filename and metadata$file_row_number: https://docs.snowflake.com/en/user-guide/querying-metadata.html
Keep in mind that even a failed COPY command can result in some unloaded files on your stage.
More information can also be found at https://docs.snowflake.com/en/sql-reference/sql/copy-into-location.html#validating-data-to-be-unloaded-from-a-query
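A minimal SQL sketch of the first and third points; the stage, table, and file-format names are hypothetical:

-- one result row per unloaded file, thanks to DETAILED_OUTPUT = TRUE
COPY INTO @my_stage/unload/
FROM my_table
FILE_FORMAT = (TYPE = CSV)
DETAILED_OUTPUT = TRUE;

-- afterwards, inspect what actually landed on the stage
SELECT metadata$filename, metadata$file_row_number, t.$1
FROM @my_stage/unload/ (FILE_FORMAT => 'my_csv_format') t;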
It depends on how you're actually running this:
Any Snowflake interface will run the command synchronously, so the query will just spin until it's complete.
Any async call would need extra checks; the easiest one is the web interface (it will show the status of the query, and when the query completes the unload is complete).
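For example, with the Snowflake Python connector the execute() call blocks until the COPY INTO finishes, so once it returns the files are on the stage and safe to read from S3. A minimal sketch; connection parameters and object names are placeholders:

import snowflake.connector

conn = snowflake.connector.connect(
    account='my_account', user='my_user', password='...',
    warehouse='my_wh', database='my_db', schema='public',
)
cur = conn.cursor()
cur.execute("""
    COPY INTO @my_stage/unload/
    FROM my_table
    FILE_FORMAT = (TYPE = CSV)
    DETAILED_OUTPUT = TRUE
""")
# with DETAILED_OUTPUT = TRUE, each result row describes one unloaded file
for file_name, file_size, row_count in cur.fetchall():
    print(file_name, file_size, row_count)
conn.close()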
I'm using the C API of the SQLite3 session extension, and I'm wondering whether the session extension can be used to merge sessions that have already been written to file.
Following the tutorial referenced above, I was able to record SQLite3 sessions by writing them to file one by one; e.g. for an UPDATE call I end up with one session file, with another INSERT call I get another file, and so on. These transactions are triggered by UI button callbacks. I wonder whether the session files could somehow be merged afterwards into one single session file, so that calling sqlite3changeset_apply() with this merged file as its parameter would produce the same result as calling sqlite3changeset_apply() on the list of session files. The reason I would like to do this is that I'd like to transfer only one session file instead of a folder of session files.
I tried iterating over the session list, calling sqlite3changeset_apply() on a copy of the original database while recording a session, but in that case I eventually get a session file of size zero (although the copy database contains all the expected changes).
I could not find anything on this in the official documentation or on the web.
🧟♀️🧟🧟♂️ necromancy alert 🧟♂️🧟🧟♀️
You're probably looking for the sqlite3changeset_concat function:
This function is used to concatenate two changesets, A and B, into a single changeset. The result is a changeset equivalent to applying changeset A followed by changeset B.
There is also a streaming version, sqlite3changeset_concat_strm.
If you need to combine many changesets, you can use the sqlite3_changegroup object and its associated functions; for the simple two-file case, see the sketch below.
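A minimal C sketch of the two-file case, assuming SQLite is built with SQLITE_ENABLE_SESSION; the read_file helper and the file names are ours, not part of the SQLite API:

#include <sqlite3.h>
#include <stdio.h>
#include <stdlib.h>

/* Read a whole file into a malloc'd buffer; caller frees. */
static void *read_file(const char *path, int *pn)
{
    FILE *f = fopen(path, "rb");
    if (!f) return NULL;
    fseek(f, 0, SEEK_END);
    long n = ftell(f);
    rewind(f);
    void *p = malloc(n);
    if (p && fread(p, 1, n, f) != (size_t)n) { free(p); p = NULL; }
    fclose(f);
    *pn = (int)n;
    return p;
}

/* Concatenate changesets A and B into OUT; applying OUT is equivalent
 * to applying A followed by B. */
int merge_changesets(const char *pathA, const char *pathB, const char *pathOut)
{
    int nA = 0, nB = 0, nOut = 0;
    void *pA = read_file(pathA, &nA);
    void *pB = read_file(pathB, &nB);
    void *pOut = NULL;
    if (!pA || !pB) { free(pA); free(pB); return SQLITE_ERROR; }

    int rc = sqlite3changeset_concat(nA, pA, nB, pB, &nOut, &pOut);
    if (rc == SQLITE_OK) {
        FILE *f = fopen(pathOut, "wb");
        if (f) { fwrite(pOut, 1, nOut, f); fclose(f); }
        sqlite3_free(pOut);   /* output buffer is allocated by SQLite */
    }
    free(pA);
    free(pB);
    return rc;
}

For many files, feeding each changeset into an sqlite3_changegroup with sqlite3changegroup_add() and extracting the merged result once via sqlite3changegroup_output() avoids repeatedly reallocating the growing buffer.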
I am copying 50 million records to Amazon DynamoDB using a Hive script. The script failed after running for 2 days with an item-size-exceeded exception.
Now if I restart the script, it will start the insertions again from the first record. Is there a way to say something like "insert only those records which are not in DynamoDB"?
You can use conditional writes to only write the item if the specified attributes are not equal to the values you provide. This is done by using a ConditionExpression in a PutItem request (see the sketch after the quoted note below). However, a conditional write still uses write capacity even if it fails (emphasis mine), so this may not even be the best option for you:
If a ConditionExpression fails during a conditional write, DynamoDB will still consume one write capacity unit from the table. A failed conditional write will return a ConditionalCheckFailedException instead of the expected response from the write operation. For this reason, you will not receive any information about the write capacity unit that was consumed. However, you can view the ConsumedWriteCapacityUnits metric for the table in Amazon CloudWatch to determine the provisioned write capacity that was consumed from the table.
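A minimal sketch of such a conditional put using the AWS SDK for Java v1; the table name, key attribute, and values are hypothetical placeholders:

import com.amazonaws.services.dynamodbv2.AmazonDynamoDB;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClientBuilder;
import com.amazonaws.services.dynamodbv2.model.AttributeValue;
import com.amazonaws.services.dynamodbv2.model.ConditionalCheckFailedException;
import com.amazonaws.services.dynamodbv2.model.PutItemRequest;
import java.util.HashMap;
import java.util.Map;

public class ConditionalPut {
    public static void main(String[] args) {
        AmazonDynamoDB client = AmazonDynamoDBClientBuilder.defaultClient();

        Map<String, AttributeValue> item = new HashMap<>();
        item.put("id", new AttributeValue().withS("record-42"));
        item.put("payload", new AttributeValue().withS("..."));

        PutItemRequest request = new PutItemRequest()
                .withTableName("myTable")
                .withItem(item)
                // write only if no item with this key exists yet
                .withConditionExpression("attribute_not_exists(id)");
        try {
            client.putItem(request);
        } catch (ConditionalCheckFailedException e) {
            // item already present: skip it (this still consumes one WCU)
        }
    }
}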
Is there any way to delete a set from a namespace (Aerospike) from aql or the CLI?
My set also contains LDTs.
Please suggest a way to delete the whole set, including its LDT data.
You can delete a set by using:
asinfo -v "set-config:context=namespace;id=namespace_name;set=set_name;set-delete=true;"
This link explains more about how the set is deleted:
http://www.aerospike.com/docs/operations/manage/sets/#deleting-a-set-in-a-namespace
There is a new and better way to do this as of Aerospike Server version 3.12.0, released in March 2017:
asinfo -v "truncate:namespace=namespace_name;set=set_name"
This previous command has been DEPRECATED, and works only up to Aerospike 3.12.1, released in April 2017:
asinfo -v "set-config:context=namespace;id=namespace_name;set=set_name;set-delete=true;"
The new command is better in several ways:
Can be issued during migrations
Can be issued while data is being written to the set
It is sufficient to run it on just one node of the cluster
I used it under those conditions (during migration, while data was being written, on 1 node) and it ran very quickly. A set with 30 million records was reduced to 1000 records in about 6 seconds. (Those 1000 records were presumably the ones written during those 6 seconds)
Details here
As of Aerospike 3.12, which was released in March 2017, the feature of deleting all data in a set or namespace is now supported in the database.
Please see the following documentation:
http://www.aerospike.com/docs/reference/info#truncate
This provides the command-line invocation, which looks like:
asinfo -v "truncate:namespace=<namespace_name>;set=<set_name>;lut=<lut>"
It can also be truncated from the client APIs; here is the Java documentation:
http://www.aerospike.com/apidocs/java/com/aerospike/client/AerospikeClient.html
(scroll down to the "truncate" method).
Please note that this feature can, optionally, take a time specification. This allows you to delete records and then start inserting new records. As timestamps are used, please make certain your server clocks are well synchronized. A minimal sketch of the Java call follows.
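A minimal sketch with the Aerospike Java client; the host, namespace, and set names are placeholders:

import com.aerospike.client.AerospikeClient;

public class TruncateSet {
    public static void main(String[] args) {
        AerospikeClient client = new AerospikeClient("127.0.0.1", 3000);
        // null policy uses defaults; a null Calendar truncates everything
        // written before "now" (the optional time specification)
        client.truncate(null, "my_namespace", "my_set", null);
        client.close();
    }
}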
You can also delete a set with the Java client as follows:
(1) Use the client "execute" method, which applies a UDF to all queried rows:
AerospikeClient.execute(WritePolicy policy, Statement statement, String packageName, String functionName, Value... functionArgs) throws AerospikeException
(2) Define the statement to include all rows of the given set:
Statement statement = new Statement();
statement.setNamespace("my_namespace");
statement.setSetName("my_set");
(3) Specify a UDF that deletes the given record (the Lua module must first be registered on the server, e.g. as myUdf.lua):
function delete_rec(rec)
aerospike:remove(rec)
end
(4) Call the method:
ExecuteTask task = client.execute(null, statement, "myUdf", "delete_rec");
task.waitTillComplete(timeout);
Is it performant? Unclear, but my guess is that asinfo is better. It is, however, very convenient for testing/debugging/setup.
You can't delete a set directly, but you can delete all records that exist in the set by scanning all the records and deleting them one by one. Sample C# code that will do the trick (client is a connected AerospikeClient instance):
client.ScanAll(null, AerospikeNameSpace, category, DeleteAllRecordsCallBack);
Here DeleteAllRecordsCallBack is a callback function in which you can delete records one by one. This callback function gets called for all records.
private void DeleteAllRecordsCallBack(Key key, Record record)
{
    client.Delete(null, key);
}
Using AQL:
TRUNCATE namespace_name.set_name
https://www.aerospike.com/docs/tools/aql/aql-help.html
Take care: you may need to restart your nodes (one by one to avoid downtime), because the space used by bin names is not reclaimed otherwise.
What I mean is that the maximum number of unique bin names in an Aerospike namespace is 32,767, and the counter is kept in RAM. If you delete and recreate your set several times, and each recreation creates, for example, 10,000 bins, you will not be able to create more than 2,767 bins the fourth time.
If you restart your cluster, the counter is released.
You can't dynamically delete a set from a namespace like "DROP TABLE" in an RDBMS.
The following asinfo command only lazily deletes data inside a set:
asinfo -v "set-config:context=namespace;id=namespace_name;set=set_name;set-delete=true;"
There is a post about this on the Aerospike discussion site, and I haven't seen progress on the issue yet.
In our production experience, the special Java deletion utility works quite well, even without the recently introduced durable deletes. You build it from sources, put it somewhere near the cluster, and run it this way:
java -jar delete-set-1.0.0-jar-with-dependencies.jar -h <aerospike_host> -p 3000 -s <set_to_delete> -n <namespace_name>
In our prod environment, cold restarts are quite rare events, basically only when Aerospike crashes. And the data flow is quite high, so defragmentation kicks in early and we don't even have the zombie-record issue.
BTW, the asinfo approach mentioned earlier didn't work for us: the records stayed there for a couple of days, so we use the delete-set utility, which worked right away.