I have a number of Node scripts collected in a Git repo. These scripts run against the Airtable API for various forms of table maintenance and updates. The repo is on GitLab, and until now the scripts have been deployed on Heroku with a scheduler add-on. I need to trigger these scripts at intervals, such as daily and weekly. Is there any way to trigger these cron jobs directly from the repo in GitLab, or through some kind of service from there? I want to phase out Heroku in favor of some simpler service that triggers these cron-job scripts, primarily from GitLab. Grateful for suggestions on possible solutions.
I have found my own solution to the problem I described above. In GitLab I have set up scheduled pipelines for the different cron intervals that run the scripts. Initially I separated them into different scheduler branches, each with its own .gitlab-ci.yml for the script in that branch. It would be better to collect all the scheduled scripts in a single branch and specify which script runs at which time; I suppose this can be solved via flags and variables passed when the pipelines are triggered.
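A rough sketch of what that single-branch setup could look like (job names, script paths, and the SCHEDULE_TARGET variable are hypothetical; SCHEDULE_TARGET would be set on each pipeline schedule in GitLab's schedules UI, alongside its cron expression):

# .gitlab-ci.yml sketch: one branch, one job per scheduled script,
# selected by a SCHEDULE_TARGET variable defined on each pipeline schedule.
image: node:18

daily-maintenance:
  script:
    - npm ci
    - node scripts/daily-maintenance.js   # hypothetical script path
  rules:
    - if: '$CI_PIPELINE_SOURCE == "schedule" && $SCHEDULE_TARGET == "daily"'

weekly-cleanup:
  script:
    - npm ci
    - node scripts/weekly-cleanup.js      # hypothetical script path
  rules:
    - if: '$CI_PIPELINE_SOURCE == "schedule" && $SCHEDULE_TARGET == "weekly"'

Each schedule in the UI then only needs its cron expression and the matching SCHEDULE_TARGET value.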
I currently have an AWS CodeBuild pipeline set up to execute on a fixed schedule using AWS EventBridge.
However, it doesn't check whether there are actually any pending changes in the connected repo, so it just builds regardless.
Is there an approach to:
Schedule a pipeline to execute based on a specific cron schedule,
then check and only execute the build if the repo has updates?
I'm writing an API that consists of several microservices. I have the code in a private GitLab repo. I have a custom CI/CD pipeline configured to run a couple of different steps automatically on every commit to master (e.g. build, test, deploy to a dev environment). Deploying to prod is manual.
I have written some unit tests around this code, which naturally test only small units of the code. These, of course, are run with every commit, because if they fail, that means something in the code has broken.
I also have regression tests which we run after deploying. One of these is actually a bash script that uses curl to hit my production endpoint with certain parameters and checks to make sure that I'm getting 200 responses. I have parameterized this script so I can easily point it at my dev environment (instead of prod).
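A minimal sketch of that kind of script, assuming hypothetical endpoint paths and a BASE_URL parameter:

#!/usr/bin/env bash
# Smoke/regression check: hit a few endpoints and fail on any non-200 response.
set -euo pipefail

BASE_URL="${1:-https://api.example.com}"   # pass the dev or prod base URL

for path in /health /v1/items; do          # illustrative endpoints
  status=$(curl -s -o /dev/null -w "%{http_code}" "${BASE_URL}${path}")
  if [ "$status" -ne 200 ]; then
    echo "FAIL: ${BASE_URL}${path} returned ${status}" >&2
    exit 1
  fi
  echo "OK: ${BASE_URL}${path}"
done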
I use this regression test (and others like it) to check that my already-deployed service is functioning properly, and I run it right after deploying as a final double-check to confirm that everything is working. But I want to automate that.
My question is where does this fit in a CI/CD workflow? It wouldn't make sense to run this kind of regression test on a commit, because that commit is not necessarily coupled with a deploy. And because there are any number of reasons why the service might be down that are unrelated to whatever code changes went into the most recent commit. In other words, the pipeline should not fail because of external circumstances.
Are there any best practices for running and automating regressions tests?
Great question. There are a couple of interesting points here.
When to run the regression tests (as they exist today) in your CI/CD environment.
The obvious answer is to run them as a post-deploy step. Using the same approach you currently use to restrict the deploy step to the master branch, you can restrict this post-deploy step to master as well.
If you add more details about your environment (for example, which CI/CD system you are using and your current configuration), I would be happy to provide more concrete suggestions on how to achieve this.
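For example, since the question mentions GitLab, a hedged sketch of such a post-deploy job limited to master (smoke-test.sh and DEV_URL are hypothetical names) might look like this:

stages:
  - build
  - test
  - deploy
  - post-deploy

smoke-test:
  stage: post-deploy
  script:
    - ./smoke-test.sh "$DEV_URL"   # the parameterized regression script
  rules:
    - if: '$CI_COMMIT_BRANCH == "master"'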
"It wouldn't make sense to run this kind of regression test on a commit"
An interesting approach I have seen a couple of times is using a cloud service (AWS, GCloud, etc.) to spin up an environment on each CI run. This means the full pipeline can be run for every commit. It takes more resources, but it lets you find issues before merging to master. Whether the ROI adds up in your environment is of course up to you.
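To illustrate that idea in GitLab CI terms (a sketch only; deploy-review.sh, smoke-test.sh, and REVIEW_URL are hypothetical names for provisioning and testing the temporary environment):

review:
  stage: deploy
  script:
    - ./deploy-review.sh "$CI_COMMIT_REF_SLUG"   # spin up a per-branch environment
    - ./smoke-test.sh "$REVIEW_URL"              # then run the regression checks against it
  environment:
    name: review/$CI_COMMIT_REF_SLUG
  rules:
    - if: '$CI_COMMIT_BRANCH != "master"'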
I am trying to use Sequelize.js with Serverless. Coming from a traditional server background, I am confused about where/how to run database migrations.
Should I create a dedicated function for running migrations, or is there some other way to run them?
I found myself with this same question a few days ago while structuring a serverless project, so I decided to develop a simple Serverless plugin to manage Sequelize migrations through the CLI.
With the plugin you can:
Create a migration file
List pending and executed migrations
Apply pending migrations
Revert applied migrations
Reset all applied migrations
I know this question was posted about two years ago but, for those who keep coming here looking for answers, the plugin can be helpful.
The code and instructions for using it are in the plugin repository on GitHub and on the plugin page on npm.
To install the plugin directly in your project via npm, you can run:
npm install --save serverless-sequelize-migrations
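Then register it in your serverless.yml so Serverless picks it up, for example (the service name below is illustrative):

# serverless.yml
service: my-service   # illustrative

plugins:
  - serverless-sequelize-migrations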
Lambda functions were designed to be available to run whenever necessary. You deploy them when you expect multiple executions.
Why would you create a Lambda function for a migration task? Applying a database migration is a maintenance task that you should execute just one time per migration ID. If you don't want to execute the same SQL script multiple times, I think that you should avoid creating a Lambda function for that purpose.
In this case, I would use a command-line tool to connect to the database and execute the appropriate task. You could also run a Node.js script for this, but creating a Lambda to execute the script and later removing that Lambda sounds strange; it should be done only if you don't have direct access to the database.
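For example, if the project already uses sequelize-cli, a one-off run from any machine that can reach the database could look like this (the connection string is a placeholder):

# Apply all pending migrations once, from the command line.
npx sequelize-cli db:migrate --url "postgres://user:pass@db-host:5432/mydb"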
I want to be able to automate Jenkins server installation using a script.
Given a Jenkins release version and a list of (plugin, version) pairs, I want to run a script that deploys a new Jenkins server for me and starts it using Jetty or Tomcat.
It sounds like a common thing to do (the need to replicate a Jenkins master environment or create a clean one). Do you know what the best practice is in this case?
Searching Google only gives me examples of how to deploy products with Jenkins, but I want to actually deploy Jenkins itself.
Thanks!
This may require some additional setup at the beginning, but it could save you time in the long run. You could use a product called Puppet (puppetlabs.com) to trigger the script automatically whenever you want. I'm basically using it to trigger build-outs of my development environments. As I find new things that need to be modified, I simply update my Puppet modules and don't need to worry about what needs to be done to recreate the environments for the next round of testing.
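As a rough sketch of how that could look for the Jenkins question, assuming the community puppet-jenkins module is available (version numbers are purely illustrative):

# Puppet manifest sketch: install a specific Jenkins release plus pinned plugins.
class { 'jenkins':
  version => '2.426.1',   # illustrative Jenkins version
}

jenkins::plugin { 'git':
  version => '5.2.0',     # illustrative plugin version
}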
I have some Pig batch jobs in .pig files that I'd love to run automatically on EMR once every hour or so. I found a tutorial for doing that here, but it requires using Amazon's GUI for every job I set up, which I'd really rather avoid. Is there a good way to do this using Whirr, or the Ruby elastic-mapreduce client? I have all my files in S3, along with a couple of Pig jars containing functions I need to use.
Though I don't know how to run Pig scripts with the tools that you mention, I know of two possible ways:
To run files locally: you can use cron
To run files on the cluster: you can use Oozie
That being said, most tools with a GUI can also be controlled via the command line (though setup may be easier if you have the GUI available).
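For the local cron option, the entry itself is simple; run-pig-job.sh below is a hypothetical wrapper around whichever EMR client you end up using (e.g. the Ruby elastic-mapreduce CLI):

# crontab entry: submit the Pig job at the top of every hour
0 * * * * /home/me/run-pig-job.sh >> /var/log/pig-job.log 2>&1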