Snakemake: Combine cluster profile with resources: attempt

Let's say I have a rule that for 90% of my data needs 1h, but occasionally needs 3h. In a busy cluster environment, however, I do not want to submit all jobs with a time limit of 3h just to be safe, as this would slow down the scheduling of my jobs.
Hence, I played around with the attempt variable:
resources:
    # Increase the time limit in multiples of 1h if the job fails due to the time limit.
    time = lambda wildcards, input, threads, attempt: 60 * attempt
(one could be even smarter and use powers of 2 to amortize better...).
But this approach forces me to put the base time (1h) directly into the rule. How can I combine this approach with cluster profiles, where the base time lives in some cluster_config.yaml file?
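To illustrate what I am after, here is a sketch that loads the YAML by hand (the file layout, rule name, and command are hypothetical):

import yaml

# hypothetical cluster_config.yaml layout:
#   myrule:
#     time: 60   # base time in minutes
with open("cluster_config.yaml") as f:
    cluster_cfg = yaml.safe_load(f)

rule myrule:
    input: "data/{sample}.txt"
    output: "results/{sample}.txt"
    resources:
        # base time from the cluster config, scaled by the attempt number
        time = lambda wildcards, attempt: cluster_cfg["myrule"]["time"] * attempt
    shell: "mytool {input} > {output}"   # placeholder command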
Thanks and so long
Lucas

How are reserved slots re-allocated between reservation/projects if idle slots are used?

The documentation on Introduction to Reservations: Idle Slots states that idle slots from reservations can be used by other reservations if required:
By default, queries running in a reservation automatically use idle slots from other reservations. That means a job can always run as long as there's capacity. Idle capacity is immediately preemptible back to the original assigned reservation as needed, regardless of the priority of the query that needs the resources. This happens automatically in real time.
However, I'm wondering if this can have a negative effect on other reservations in a scenario where idle slots are used but are shortly after required by the "owning" reservation.
To be concrete, I would like to understand whether I can regard assigned slots as a guarantee OR as best effort.
Example:
Reserved slots: 100
Reservation A: 50 Slots
Reservation B: 50 Slots
"A" starts a query at 14:00:00 and the computation takes 300 seconds if 100 slots are used.
All slots are idle at the start of the query, thus all 100 slots are made available to A.
5 seconds later at 14:00:05 "B" starts a query that takes 30 seconds if 50 slots are used.
Note:
For the sake of simplicity, let's assume that both queries have exactly 1 stage and each computation unit ("job") in the stage takes the full time of the query. I.e. the stage is divided into 100 jobs, and once a slot starts the computation, it takes the full 300 seconds to finish successfully.
I'm fairly certain that with multiple stages or shorter computation times (e.g. if the computation can be broken down into 1000 jobs) GBQ would be smart enough to dynamically re-assign a freed-up slot to the reservation it belongs to.
Questions:
does "B" now have to wait until a slot in "A" finishes?
this would mean ~5 min waiting time
I'm not sure how "realistic" the 5 min are, but I feel this is an important variable since I wouldn't worry about a couple of seconds - but I would worry about a couple of minutes!
Or might an already started computation of "A" also be killed mid-flight?
The documentation on Introduction to Reservations: Slot Scheduling seems to suggest something like this:
The goal of the scheduler is to find a medium between being too aggressive with evicting running tasks (which results in wasting slot time) and being too lenient (which results in jobs with long running tasks getting a disproportionate share of the slot time).
Answer via Reddit
A stage may run for quite some time (minutes, even hours in really bad cases), but a stage is run by many workers, and most workers complete their work within a very short time, e.g. milliseconds or seconds. Hence rebalancing, i.e. reallocating slots from one job to another, is very fast.
So if a rebalancing happens and a job loses a large part of slots, then it will run a lot slower. And the one that gains slots will run fast. And this change is quick.
So in the above example: as job B starts 5 seconds in, within a second or so it would have acquired most of its slots.
So bottom line:
a query is broken up into "a lot" of units of work
each unit of work finishes pretty fast
this gives GBQ the opportunity to re-assign slots
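A toy calculation of the two regimes described above (all numbers hypothetical): with monolithic 300 s jobs, a slot only frees up at a job boundary, while with very short jobs the next boundary is almost immediate.

import math

def wait_for_first_slot(job_runtime_s, b_arrival_s=5.0):
    # All of A's slots start jobs back to back at t=0; one frees up at the
    # next job boundary at or after B's arrival.
    next_boundary = math.ceil(b_arrival_s / job_runtime_s) * job_runtime_s
    return next_boundary - b_arrival_s

print(wait_for_first_slot(300.0))  # monolithic 300 s jobs: 295 s wait
print(wait_for_first_slot(0.1))    # 100 ms jobs: ~0 s wait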

Snakemake 200000 job submission

I have 200,000 fasta sequences. I am running GATK to call variants and created a wildcard for every sequence. Now I would like to submit 200,000 jobs using Snakemake. Will this cause a problem for the cluster? Is there a way to submit jobs in sets of 10-20?
First off, it might take some time to calculate the DAG, but I have been told DAG calculation has recently been greatly improved. Anyway, it might be wise to split the work up in batches.
Most clusters won't allow you to submit more than X jobs at the same time, usually in the range of 100-1000. I believe the documentation is not fully correct here, but when using --cluster, the --jobs argument controls the number of submitted jobs at the same time, so by using snakemake --jobs 20 --cluster "myclustercommand" you should be able to control this. Note that this controls the number of submitted jobs, not the number of running jobs. It might be that all your jobs end up sitting in the queue, so it is probably best to check with your cluster administrator, ask what the maximum number of submitted jobs is, and get as close to that number as possible.
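If even that submission count is a concern, a common workaround is to make each cluster job process a chunk of sequences instead of a single one; a sketch (paths, chunk size, and the wrapper script are hypothetical):

SEQS = [line.strip() for line in open("sequence_ids.txt")]  # the 200,000 ids
CHUNK = 100
BATCHES = [SEQS[i:i + CHUNK] for i in range(0, len(SEQS), CHUNK)]

rule all:
    input: expand("calls/batch{b}.vcf", b=range(len(BATCHES)))

rule call_batch:
    input: lambda wc: [f"fasta/{s}.fa" for s in BATCHES[int(wc.b)]]
    output: "calls/batch{b}.vcf"
    # one cluster job per 100 sequences: 2,000 submissions instead of 200,000
    shell: "run_gatk_on_batch.sh {input} > {output}"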

Waiting time of SUMO

I am using SUMO for traffic signal control and want to optimize the signal phases to reduce some objectives. During the process, I use the traci module to read out states at a traffic junction. The confusing part is traci.lane.getWaitingTime.
I don't know how the waiting time is calculated, and after comparing against the output of two detectors, I think it is too large.
Can someone explain how the waiting time is calculated in SUMO?
The waiting time essentially counts the number of seconds a vehicle has a speed of less than 0.1 m/s. In the case of traci.lane, this means it is the number of (nearly) standing vehicles multiplied by the time step length (since traci.lane returns the values for the last step).
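A sketch that makes this relationship visible at runtime, assuming a working scenario (the config file and lane id are hypothetical):

import traci

traci.start(["sumo", "-c", "scenario.sumocfg"])
step_length = traci.simulation.getDeltaT()  # step length in seconds (recent SUMO releases)
for _ in range(100):
    traci.simulationStep()
    waiting = traci.lane.getWaitingTime("edge0_0")
    halting = traci.lane.getLastStepHaltingNumber("edge0_0")  # vehicles below 0.1 m/s
    # per the explanation above, waiting should track halting * step_length
    print(waiting, halting * step_length)
traci.close()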

Cloud DataFlow performance - are our times to be expected?

Looking for some advice on how best to architect/design and build our pipeline.
After some initial testing, we're not getting the results that we were expecting. Maybe we're just doing something stupid, or our expectations are too high.
Our data/workflow:
Google DFP writes our adserver logs (compressed CSV) directly to GCS (hourly).
A day's worth of these logs has in the region of 30-70 million records, and about 1.5-2 billion for the month.
Perform transformation on 2 of the fields, and write the row to BigQuery.
The transformation involves performing 3 REGEX operations (which is due to increase to 50 operations) on 2 of the fields, which produces new fields/columns.
What we've got running so far:
Built a pipeline that reads the files from GCS for a single day (31.3m records) and uses a ParDo to perform the transformation (we thought we'd start with just a day, but our requirements are to process months and years too).
DoFn input is a String, and its output is a BigQuery TableRow.
The pipeline is executed in the cloud with instance type "n1-standard-1" (1vCPU), as we think 1 vCPU per worker is adequate given that the transformation is not overly complex, nor CPU intensive i.e. just a mapping of Strings to Strings.
We've run the job using a few different worker configurations to see how it performs:
5 workers (5 vCPUs) took ~17 mins
5 workers (10 vCPUs) took ~16 mins (in this run we bumped up the instance to "n1-standard-2" to get double the cores to see if it improved performance)
50 min and 100 max workers with autoscale set to "BASIC" (50-100 vCPUs) took ~13 mins
100 min and 150 max workers with autoscale set to "BASIC" (100-150 vCPUs) took ~14 mins
Would those times be in line with what you would expect for our use case and pipeline?
You can also write the output to files and then load them into BigQuery using the command line/console. You'd probably save some dollars on instance uptime. This is what I've been doing after running into issues with the Dataflow/BigQuery interface. Also, from my experience there is some overhead in bringing instances up and tearing them down (could be 3-5 minutes). Do you include this time in your measurements as well?
BigQuery has a write limit of 100,000 rows per second per table, or 6M rows per minute. At 31M rows of input, that would take ~5 minutes of just flat-out writes. When you add back the discrete processing time per element and then the synchronization time (read from GCS->dispatch->...) of the graph, this looks about right.
We are working on a table sharding model so you can write across a set of tables and then use table wildcards within BigQuery to aggregate across the tables (common model for typical BigQuery streaming use case). I know the BigQuery folks are also looking at increased table streaming limits, but nothing official to share.
Net-net, increasing instances is not going to get you much more throughput right now.
Another approach - in the mean time while we work on improving the BigQuery sync - would be to shard your reads using pattern matching via TextIO and then run X separate pipelines targeting X number of tables. Might be a fun experiment. :-)
Make sense?
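For concreteness, a sketch of that sharded-read idea, written against the present-day Apache Beam Python SDK rather than the Java SDK used above (bucket paths, table names, and the row transformation are hypothetical placeholders):

import apache_beam as beam

def transform_row(line):
    # placeholder for the 3-regex field transformation described above
    fields = line.split(",")
    return {"field1": fields[0], "field2": fields[1]}

def run_shard(pattern, table):
    with beam.Pipeline() as p:
        (p
         | beam.io.ReadFromText(pattern)  # glob pattern, e.g. gs://bucket/logs/2015-10-0*
         | beam.Map(transform_row)
         | beam.io.WriteToBigQuery(table, schema="field1:STRING,field2:STRING"))

# X separate pipelines targeting X tables:
for i in range(4):
    run_shard(f"gs://bucket/logs/shard-{i}-*.csv", f"project:dataset.logs_shard_{i}")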

How to reduce time allotted for a batch of HITs?

Today I created a small batch of 20 categorization HITs with the name "Grammatical or Ungrammatical" using the web UI. Can you tell me the easiest way to manage this batch so that I can reduce its time allotted from 1 hour to 15 minutes and also remove the Masters qualification? This is a very simple task that's set to auto-approve within 1 hour, and I am fine with that. I just need to make it more lucrative for people to attempt this at the penny rate.
You need to register a new HITType with the relevant properties (reduced time and no Masters qualification) and then perform a ChangeHITTypeOfHIT operation on all of the HITs in the batch.
API documentation here: http://docs.aws.amazon.com/AWSMechTurk/latest/AWSMturkAPI/ApiReference_ChangeHITTypeOfHITOperation.html
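A sketch of those two steps using the current boto3 MTurk client, where ChangeHITTypeOfHIT is exposed as update_hit_type_of_hit (description, keywords, and batch filtering are hypothetical):

import boto3

mturk = boto3.client("mturk", region_name="us-east-1")

# 1. Register a new HITType: 15-minute duration, 1-hour auto-approval,
#    and no Masters entry in the qualification requirements.
hit_type_id = mturk.create_hit_type(
    Title="Grammatical or Ungrammatical",
    Description="Categorize the sentence.",  # hypothetical
    Keywords="categorization,grammar",       # hypothetical
    Reward="0.01",
    AssignmentDurationInSeconds=15 * 60,
    AutoApprovalDelayInSeconds=60 * 60,
    QualificationRequirements=[],            # Masters requirement omitted
)["HITTypeId"]

# 2. Move every HIT in the batch over to the new HITType.
for hit in mturk.list_hits()["HITs"]:        # filter to your batch as needed
    mturk.update_hit_type_of_hit(HITId=hit["HITId"], HITTypeId=hit_type_id)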