Synchronize Bamboo artifacts

I have 2 Bamboo plans:
The build plan has 2 jobs:
"compile code", which takes around 2 minutes
"generate data", which takes around 10 minutes
Each job generates a different artifact: "exes" and "data" respectively.
The second plan has one job, "create installer". It takes both the "exes" and "data" shared artifacts and creates an installer.
If I trigger the second plan right after the first one finishes, everything works fine. But what happens if I start the first plan, wait 3 minutes (the first artifact, "exes", has been created), and then start the second plan?
How can I ensure that the installer is created with artifacts from a single run of the first plan only?

Related

Why does the query behind the Scheduled Query run on its own, but not as a Scheduled Query in BigQuery?

I have a query which runs fine if I simply run it through the console or from code.
When I created a Scheduled Query for it, it would not run. The Scheduled Query is created successfully, and the interval I set (every 2 hours) is applied correctly, but no jobs are created (I can see in the Scheduled Query that the next run time is incremented by 2 hours every time it is supposed to run).
These are the properties when running query from Scheduled query:
Overwrite table, Processing location: US, Allow large results, Batch priority
If I do a Schedule Backfill, it creates 12 jobs, which fail with error messages similar to the following:
Exceeded CPU limit 125%
Exceeded memory
If I cancel all the created jobs and leave only one to run, it runs successfully. The Scheduled Query itself still does not create any jobs.
I started the Scheduled Query at 12:00 and set it to repeat every 2 hours.
I assumed the jobs would run starting at the start time, but apparently that is not the case: the Scheduled Query ran perfectly as intended from 14:00 onward, then 16:00 and so on.
The errors regarding maximum CPU/memory usage were caused by the ORDER BY clause in my query. Removing it cleared the issue.
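To illustrate the fix (the original query isn't shown above, so the project, dataset, and column names below are invented), the change amounts to dropping the ORDER BY from a query whose results are written to a destination table with "Overwrite table" anyway:

    -- Hypothetical scheduled query; table and column names are placeholders.
    SELECT user_id, event_ts, event_name
    FROM `my_project.my_dataset.events`
    WHERE event_ts >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 2 HOUR)
    -- ORDER BY event_ts  -- removed: a top-level ORDER BY forces a full sort of the
    --                       result on a single worker, a common cause of the
    --                       "Exceeded CPU/memory" failures seen here

Since the destination table has no inherent row order, the ORDER BY added nothing and only made the query more expensive.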

Running Jobs in Parallel for same project and different branches

What do I need to change in order to run these jobs in parallel?
There is one more runner available on the server, but it's not picking up the "pending" job until the "running" one is finished.
UPDATE
The jobs are picked up by different runners (ci-runner-1 and ci-runner-2), but only sequentially; see the screenshots.
The problem was that in config.toml (/etc/gitlab-runner/config.toml in my case) I had:
concurrent = 1
Changing this to 0 or a value > 1 and restarting gitlab-runner fixed it.
Reference:
https://docs.gitlab.com/runner/configuration/advanced-configuration.html#the-global-section
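For reference, a minimal sketch of what the relevant part of /etc/gitlab-runner/config.toml looks like after the change (runner names, URL, and tokens are placeholders):

    # Global section: "concurrent" limits how many jobs can run at the same time
    # across all runners registered in this file.
    concurrent = 2

    [[runners]]
      name = "ci-runner-1"
      url = "https://gitlab.example.com/"
      token = "REDACTED"
      executor = "shell"

    [[runners]]
      name = "ci-runner-2"
      url = "https://gitlab.example.com/"
      token = "REDACTED"
      executor = "shell"

After editing the file, restart the runner (gitlab-runner restart) so the new value is picked up.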

Run a SQL Server job until it succeeds

I have a SQL Server job that has run for almost 2 years.
It connects to an unreliable Oracle database that keeps disconnecting, so the job always fails because of that. When I run it again after 10 or 15 minutes, it works successfully. I'm getting tired of checking it every day...
Is there a way to make the job keep retrying the connection to that Oracle source until it succeeds, or to have another job watch this job's status and, if it failed, run it again until it succeeds?
A solution we are using is something like this:
Wrap your Oracle query in an SSIS package, and after the query, have the package update a SQL table that keeps either a history of executions, or just a single row that tracks the last time the job ran successfully. In short, if the Oracle query was successful, then put something in a table saying the query ran successfully today. If it was not successful, then don't put anything in the table for today.
Then at the beginning of the package, BEFORE the Oracle query, check to see if the query has been run successfully today. If it has already run successfully, then do nothing and exit the package. If it has not run successfully today, then go ahead and try to run it, following the post-query steps described above. If you have any other conditions about when the package should run (like "only after 10 am" or anything like that) you would include that logic here.
Finally, create a job that calls the package and schedule it to run every 15 minutes, or however often you like. It will try every 15 minutes until it runs successfully, and after that it will do nothing until the next day.
As a bonus, you can use this same package and job to initiate all tasks that you want handled the same way. You just need to keep metadata about all these tasks in your history/metadata table.
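A minimal T-SQL sketch of the tracking piece described above (the table and task names are invented for illustration; in the package these statements would typically live in Execute SQL Tasks):

    -- Hypothetical success-tracking table
    CREATE TABLE dbo.JobRunHistory (
        TaskName    sysname NOT NULL,
        SucceededOn date    NOT NULL,
        CONSTRAINT PK_JobRunHistory PRIMARY KEY (TaskName, SucceededOn)
    );

    -- First step of the package: has the Oracle pull already succeeded today?
    -- The result can drive a precedence constraint that skips the rest of the package.
    SELECT CASE WHEN EXISTS (
               SELECT 1 FROM dbo.JobRunHistory
               WHERE TaskName = 'OracleExtract'
                 AND SucceededOn = CAST(GETDATE() AS date)
           ) THEN 1 ELSE 0 END AS AlreadyRanToday;

    -- Last step, reached only if the Oracle query succeeded: record today's success.
    INSERT INTO dbo.JobRunHistory (TaskName, SucceededOn)
    VALUES ('OracleExtract', CAST(GETDATE() AS date));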
An alternative is to create the job step and leave it unscheduled, and create an SSIS job that acts as the master of all your jobs: it runs every minute, checks a config table for job steps that have yet to succeed today, and executes any it finds using sp_start_job.
If they run successfully, it logs the stats to a log table, which prevents them from being launched again until the next day. This saves all your jobs from needing to be scheduled every 15 minutes; they launch as soon as possible, and you can add extra logic to handle dependencies, the number running in parallel, importance level, start time, latest start time, the maximum number of retries, etc.
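A rough T-SQL sketch of such a master step (the JobConfig and JobRunLog tables are hypothetical; msdb.dbo.sp_start_job is the real system procedure used to launch the unscheduled jobs):

    -- dbo.JobConfig(JobName)         : jobs managed by the master job
    -- dbo.JobRunLog(JobName, RanOn)  : one row per successful run per day
    DECLARE @JobName sysname;

    DECLARE job_cursor CURSOR LOCAL FAST_FORWARD FOR
        SELECT c.JobName
        FROM dbo.JobConfig AS c
        WHERE NOT EXISTS (SELECT 1
                          FROM dbo.JobRunLog AS l
                          WHERE l.JobName = c.JobName
                            AND l.RanOn = CAST(GETDATE() AS date));

    OPEN job_cursor;
    FETCH NEXT FROM job_cursor INTO @JobName;
    WHILE @@FETCH_STATUS = 0
    BEGIN
        -- Start the unscheduled job; the job itself logs to dbo.JobRunLog on success,
        -- which keeps the master from launching it again until tomorrow.
        EXEC msdb.dbo.sp_start_job @job_name = @JobName;
        FETCH NEXT FROM job_cursor INTO @JobName;
    END
    CLOSE job_cursor;
    DEALLOCATE job_cursor;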

How to execute a Job Executor step X times

Introduction
To keep it simple, let's imagine a simple transformation.
This transformation gets an input of 4 rows, from a Data Grid step.
The stream passes through a Job Executor, referencing a simple job with a Write Log component.
Expectations
I would like the simple job to execute 4 times, which means 4 log messages.
Results
It turns out that the Job Executor step launches the simple job only once, instead of 4 times: I only get one log message.
Hints
The documentation of the Job Executor component specifies the following :
By default the specified job will be executed once for each input row.
This is parametrized in the "Row grouping" tab, with the following field:
The number of rows to send to the job: after every X rows the job will be executed and these X rows will be passed to the job.
Answer
The step actually works correctly: an input of X rows executes the "Job Executor" step X times. The fact is that I wasn't able to see it in the logs.
To verify it, I added a simple transformation inside the "Job Executor" step which writes to a text file. After checking this file, it was clear that the "Job Executor" had indeed been executed X times.
Research
Trying to understand why I didn't get X log messages after the X executions of the "Job Executor", I added a "Wait for" component inside the initial simple job. Adding a two-second wait allowed me to see the X log messages appear during the execution.
Hope this helps because it's pretty tricky. Please feel free to provide further details.
A little late to the party, as a side note:
Pentaho is a set of programs (Spoon, Kettle, Chef, Pan, Kitchen). The engine is Kettle, and everything inside transformations is started in parallel, which makes log retrieval a challenging task for Spoon (the UI). You don't actually need a "Wait for" entry: try outputting the logs to a file (by specifying a log file in the Job Executor entry properties) and you'll see everything in place.
Sometimes we need to give Spoon a little bit of time to get everything in place; personally, that's why I recommend not relying on Spoon's Execution Results logging tab. It is better to output the logs to a DB or to files.

How do I troubleshoot a hanging SQL Server Agent Job

I have a SQL Server Agent Job with 4 steps. If I run it, it displays as "Executing" indefinitely. If I run the code from the four steps sequentially, directly in SSMS, it takes ~7 seconds to execute. No configuration (owner, run as, database, etc.) differs from any other job that runs normally. What else can I examine?
As with any problem that comes as a group, break it down into its individual parts. You have run each step separately and you know each individual step works. Next, run steps 1 and 2 together and see if the job completes. Then run steps 1, 2 and 3 and see what happens. Eliminate the possible issues step by step. My guess is that one step is not returning success, and its error logic does not say to fail or to move to the next step.
Check the properties of each step under Advanced and review the On success and On failure actions.
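A quick way to review those settings for every step at once is to query msdb (the job name below is a placeholder):

    -- Action codes: 1 = quit reporting success, 2 = quit reporting failure,
    -- 3 = go to the next step, 4 = go to the step given in *_step_id.
    SELECT s.step_id,
           s.step_name,
           s.on_success_action,
           s.on_success_step_id,
           s.on_fail_action,
           s.on_fail_step_id
    FROM msdb.dbo.sysjobsteps AS s
    JOIN msdb.dbo.sysjobs AS j ON j.job_id = s.job_id
    WHERE j.name = 'MyHangingJob'   -- replace with the actual job name
    ORDER BY s.step_id;

A step whose On success action quits the job early, or jumps back to an earlier step, could explain a job that appears to execute indefinitely.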