AWS CodeBuild only when a repo has updates and at a scheduled time - aws-codebuild

I currently have an AWS CodeBuild pipeline set up to execute on a schedule using Amazon EventBridge.
However, this doesn't check whether there are actually any pending changes in the connected repo, so it just builds regardless.
Is there an approach to:
Schedule a pipeline to execute on a specific cron schedule,
then check and run the build only if the repo has updates?
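One way to get this behaviour (a sketch, not a built-in feature) is to point the EventBridge schedule at a small Lambda instead of the build project. The Lambda compares the branch head with the last commit it built, remembered in SSM Parameter Store, and starts a build only when they differ. The repo, branch, project, and parameter names below are placeholders, and the example assumes a CodeCommit repo; for a GitHub source you would fetch the head SHA differently.

```python
def needs_build(last_built_sha, head_sha):
    """Build only when the branch head has moved since the last build."""
    return head_sha is not None and head_sha != last_built_sha

def handler(event, context):
    import boto3  # boto3 ships with the Lambda Python runtime

    # Placeholder names -- substitute your own repo/branch/project/parameter.
    repo, branch = "my-repo", "main"
    project, param = "my-build-project", "/codebuild/my-repo/last-built-sha"

    codecommit = boto3.client("codecommit")
    ssm = boto3.client("ssm")
    codebuild = boto3.client("codebuild")

    head = codecommit.get_branch(
        repositoryName=repo, branchName=branch
    )["branch"]["commitId"]
    try:
        last = ssm.get_parameter(Name=param)["Parameter"]["Value"]
    except ssm.exceptions.ParameterNotFound:
        last = None  # first run: nothing recorded yet, so build

    if needs_build(last, head):
        codebuild.start_build(projectName=project, sourceVersion=head)
        ssm.put_parameter(Name=param, Value=head, Type="String", Overwrite=True)
        return {"started": True, "commit": head}
    return {"started": False, "commit": head}
```

With this in place, the EventBridge rule's cron expression stays the same; only its target changes from the build project to the Lambda.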

Related

Trigger npm scripts as cron jobs from gitlab

I have a number of Node scripts collected in a git repo. These scripts run against the Airtable API for various forms of table maintenance and updates. The repo is on GitLab, and until now the scripts have been deployed on Heroku with a scheduler add-on. I need to trigger these scripts at intervals, such as daily and weekly. Is there any way to trigger these cron jobs directly from the repo in GitLab, through some kind of service there? I want to phase out Heroku for some other, simpler service to trigger these cron job scripts, primarily from GitLab. Grateful for suggestions on possible solutions.
I have found my own solution to the problem I described earlier. In GitLab I have set up scheduled pipelines against the different cron intervals that run the scripts. Initially I separated them into different scheduler branches, with a different gitlab-ci.yml for each script in each branch. It would have been better to collect all the scheduler scripts in a single branch and select which script runs at which time; I suppose this can be solved via different flags and variables when the scripts are triggered.
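A minimal sketch of that single-branch idea (job names, script paths, and the SCHEDULED_TASK variable are assumptions): each pipeline schedule under CI/CD > Schedules sets a variable that selects which job runs, so all scripts live in one branch with one gitlab-ci.yml.

```yaml
# .gitlab-ci.yml -- one branch, several scheduled jobs.
# Each schedule sets SCHEDULED_TASK (e.g. "daily" or "weekly") to pick its job.
daily-maintenance:
  image: node:18
  script:
    - npm ci
    - node scripts/daily-maintenance.js   # hypothetical script path
  rules:
    - if: '$CI_PIPELINE_SOURCE == "schedule" && $SCHEDULED_TASK == "daily"'

weekly-update:
  image: node:18
  script:
    - npm ci
    - node scripts/weekly-update.js       # hypothetical script path
  rules:
    - if: '$CI_PIPELINE_SOURCE == "schedule" && $SCHEDULED_TASK == "weekly"'
```

The `rules` guard on `$CI_PIPELINE_SOURCE == "schedule"` also keeps these jobs out of ordinary push pipelines.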

hive application shows running even after killing from command line

I ran a Hive query on a decently large dataset and the query was taking too much time, so I decided to kill the application with:
yarn application -kill <application_id>
Now when I check from the CLI with:
yarn application -list
then the above mentioned application does not show up in the list.
However, when I log into the Tez view from Ambari, the application still shows as being in the running state (it has been almost 24 hours since I started it).
I tried killing it again from the command line but it says that the application has already finished.
I also checked in the resource manager UI and the status for that job shows that it was killed.
Because of this, whenever I try to run any new Hive job, it just gets queued up and I am unable to run any other jobs.
Please help!
The Tez view is an export of the Application Timeline Server (ATS) info. If you use yarn kill, Hive does not properly inform the YARN Application Timeline Server that the query has been terminated. You therefore still see these queries as running in the Tez view, because ATS never received an update that they entered a stopped/failed state. If you are unable to run new Hive jobs, that will not be related to killed applications still showing as running in the Tez view, and you should troubleshoot it separately. The bug you described is purely cosmetic and is documented in the following places:
https://issues.apache.org/jira/browse/HIVE-16429
https://community.hortonworks.com/content/supportkb/196542/tez-ui-displays-query-as-running-even-after-a-succ.html
The workaround I found to clear the queue so that I could run other queries was to go to /hadoop/yarn/timeline, back up the files, and restart YARN. The Tez queue was cleared and I could start running my queries from the Hive view again.
I should mention, however, that this clears all queries (for all users).
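The backup-and-restart steps above, written out as a dry-run sketch. The daemon commands and paths are assumptions for an HDP-style cluster; verify them against your setup before running, and remember this wipes the Tez view history for all users.

```shell
#!/bin/sh
# Dry run by default: only prints the commands. Set DRY_RUN=0 to execute.
TIMELINE_DIR="${TIMELINE_DIR:-/hadoop/yarn/timeline}"
BACKUP_DIR="$TIMELINE_DIR.bak.$(date +%Y%m%d%H%M%S)"

run() {
  if [ "${DRY_RUN:-1}" = "1" ]; then echo "would run: $*"; else "$@"; fi
}

# 1. Stop the YARN Timeline Server (or stop YARN from the Ambari UI).
#    Hadoop 2.x syntax shown; on Hadoop 3 it is "yarn --daemon stop timelineserver".
run yarn-daemon.sh stop timelineserver
# 2. Move the timeline store aside rather than deleting it, so it can be restored.
run mv "$TIMELINE_DIR" "$BACKUP_DIR"
# 3. Start the Timeline Server again; it recreates an empty store on startup.
run yarn-daemon.sh start timelineserver
```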

GitLab CI Pipeline not triggered by push from runner

We use GitLab CE. We have two repos/projects: one to store the source code, and the other to build and store the package that we'll deploy. We use a runner to push changes from the former into the latter. This used to trigger the pipeline of the latter repo. Recently, a change was pushed to the latter repo manually, and since then the push from the runner no longer triggers the pipeline in the target repo (manual pushes still trigger the pipeline; the push from the runner also runs flawlessly, and the commit appears in the target repo). I was not the one who created the setup, so I don't know how to make the push from the runner trigger the pipeline (or, rather, why it doesn't do so automatically).
As far as I understand, the push should trigger the pipeline wherever it comes from. So why doesn't it?
So apparently the issue appeared because the user account to which the deploy key used in the target repo belonged had been disabled. Creating a new key with an active user account solved the problem; the pipeline is triggered properly now.

Spinnaker Deploying the last build from Jenkins

I am using Spinnaker with Jenkins. When creating a server group, we have to specify the image that needs to be deployed, but my Jenkins job creates the image I want to deploy using Spinnaker. My Jenkins job number is the image tag, but I am not able to find a way to set the tag dynamically. I have figured out that the build number is available in the parameter "context.buildInfo.number", but how to use it as the tag number is something I have not been able to figure out.
Thanks in advance
Amol
Just as an update: I was able to resolve the problem by creating a trigger for the Docker registry. Now one of my pipelines starts the Jenkins job and pushes the build to the Docker registry, and another pipeline monitors the images there; it can pick up the latest build from the registry and deploy it.

How/Where to run sequelize migrations in a serverless project?

I am trying to use Sequelize.js with Serverless. Coming from a traditional server background, I am confused about where/how to run database migrations.
Should I create a dedicated function for running migration or is there any other way of running migrations?
I found myself with this same question some days ago while structuring a serverless project, so I've decided to develop a simple serverless plugin to manage sequelize migrations through CLI.
With the plugin you can:
Create a migration file
List pending and executed migrations
Apply pending migrations
Revert applied migrations
Reset all applied migrations
I know this question was posted about two years ago, but for those who keep coming here looking for answers, the plugin can be helpful.
The code and the instructions to use it are on the plugin repository on github and plugin page on npm.
To install the plugin directly on your project via npm, you can run:
npm install --save serverless-sequelize-migrations
Lambda functions were designed to be available to run whenever necessary. You deploy them when you expect multiple executions.
Why would you create a Lambda function for a migration task? Applying a database migration is a maintenance task that you should execute just one time per migration ID. If you don't want to execute the same SQL script multiple times, I think that you should avoid creating a Lambda function for that purpose.
In this case, I would use a command-line tool to connect to the database and execute the appropriate task. You could also run a Node.js script for this, but creating a Lambda to execute the script and later removing that Lambda sounds strange; do that only if you don't have direct access to the database.
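Concretely, the command-line route can be as simple as running sequelize-cli against the database once per migration batch. A dry-run sketch follows; the connection string is a placeholder, and it assumes sequelize-cli is installed in the project (its `db:migrate` command accepts a `--url` option).

```shell
#!/bin/sh
# Dry run by default: only prints the command. Set DRY_RUN=0 to execute.
DB_URL="${DATABASE_URL:-postgres://user:pass@db-host:5432/appdb}"  # placeholder
MIGRATE_CMD="npx sequelize-cli db:migrate --url $DB_URL"

if [ "${DRY_RUN:-1}" = "1" ]; then
  echo "would run: $MIGRATE_CMD"
else
  $MIGRATE_CMD
fi
```

Run it from any machine with network access to the database; `db:migrate:undo` reverts the most recent migration if something goes wrong.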