Does gitlab-ci have a way to download a script and store it on the local file system so it could be run? - gitlab-ci

Does gitlab-ci have a way to download a script and store it on the local file system so it can be run? It looks like others have asked similar questions (see below).
One way to do it would be to use curl (but curl has to exist in the CI runner):
curl -o ./myscript -k https://example.com/myscript.sh
This was from https://stackoverflow.com/a/22800194/3281336.
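In a .gitlab-ci.yml, that approach would look something like the following sketch (the URL and script name are hypothetical, and it assumes curl exists in the runner image):
download-and-run:
  before_script:
    # fetch the shared script before the job's own commands run
    - curl -o ./myscript.sh -k https://example.com/myscript.sh
    - chmod +x ./myscript.sh
  script:
    - ./myscript.sh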
If I have a script that I would like to use in multiple CI pipelines, I'd like to have a way to download the script to the local file system to use in the pipeline. NOTE: I don't have the ability to create a custom runner or Docker image in my situation.
If the script were available via git or an https website, what are my alternatives?
Some search results
https://docs.gitlab.com/ee/ci/yaml/includes.html - GitLab supports a way to include files, even from Git repos. This might work; I just haven't read how yet (a sketch follows below this list).
How to run a script from file in another project using include in GitLab CI? - Similar, but the answer uses a multi-project pipeline and a trigger, which is really (I think) a different approach.
.gitlab-ci.yml to include multiple shell functions from multiple yml files - Similar, but that question deals with scripts embedded in YAML files, whereas I'm dealing with a standalone script.
How to include a PowerShell script file in a GitLab CI YAML file - So far this is the closest to my question, and some might consider it the same even though it asks about a PowerShell script. The answer said including a script this way wasn't possible (so maybe this cannot be done with the GitLab CI syntax).
If it is possible, please let me know how to do this.
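For reference, the include: keyword pulls in remote YAML configuration rather than arbitrary scripts, so it would only help if the shared logic were wrapped in YAML. A minimal sketch, with a hypothetical URL:
include:
  - remote: 'https://example.com/ci/shared-jobs.yml'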

Related

Where can I read about available commands I can use as a script in my GitLab CI yml file?

Basically, I'm getting started with GitLab's continuous integration, but I'm having a hard time finding a guide or documentation that would help me write scripts for the file.
Where can I read about the operators, conditionals, and instructions I can use for this?
Thanks!
First of all, read the Quick start.
Here is everything about the .gitlab-ci.yml file: Configuration of your jobs with .gitlab-ci.yml
I can also recommend the CI Lint tool, which validates your YAML syntax. It can save you time ;-)
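As a starting point, a minimal .gitlab-ci.yml is just jobs whose script sections are lists of shell commands. A sketch (the job name and commands are placeholders):
stages:
  - test

run-tests:
  stage: test
  script:
    # each line is executed as a shell command on the runner
    - echo "Running tests"
    - ./run_tests.sh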

.sh script does not run on SSH instance on Google Cloud

When I try to run a .sh script on an SSH instance on Google cloud, I get this error:
bash: abc.sh: command not found
This runs fine when I run it in the Google Cloud Shell. I tried setting 'PermitUserEnvironment yes' in the sshd_config file, but this did not change the output.
From my reading on similar issues, it seems as though I should be setting some PATH variables, but I'm not sure which ones.
The issue was solved by running the command with its full path:
/path/to/file.sh
You were not able to run the command because you did not specify a path, so bash tried to locate the command in the directories listed in the PATH environment variable.
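For example (a sketch; the script location is hypothetical):
# Option 1: call the script with an explicit path
chmod +x /path/to/abc.sh   # make sure it is executable
/path/to/abc.sh
# Option 2: add the script's directory to PATH for this session
export PATH="$PATH:/path/to"
abc.sh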
Future readers may also find useful information here regarding possible causes.

What is the optimal way to store data-files for testing using travis-ci + Docker?

I am trying to set up testing of the repository using travis-ci.org and Docker. However, I couldn't find any documentation about the policy on storage usage.
To perform a set of tests (test.sh) I need a set of input files to run on, which are very big (up to 1 GB, around 500 MB on average).
One idea is to wget the files directly in the test.sh script, but downloading the input files again and again for every test run would be inefficient.
The other idea is to create a separate Docker image containing the test files and mount it as a volume, but pushing such a big image to the public registry would not be nice.
Is there a general prescription for such tests?
Have you considered using the Travis file cache?
You can write your test.sh script so that it only downloads a test file if it is not already present on the local file system.
In your .travis.yml file, you specify which directories should be cached after a successful build. Travis will automatically restore that directory and the files in it at the beginning of the next build. Since your test.sh script will then notice the file already exists, it will simply skip the download, and your build should be a little faster.
Note that the Travis cache works by creating an archive file and putting it on some cloud storage, from which it will need to be downloaded again later. However, the assumption is that this network traffic will likely stay inside that "cloud", potentially within the same data center. This should still give you some benefit in build time and lower use of resources in your own infrastructure.
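A rough sketch of both pieces together (the directory name, file name, and URL are hypothetical):
# .travis.yml
cache:
  directories:
    - test-data   # Travis restores this directory at the start of each build

before_script:
  # download the large input only when the cache does not already contain it
  - mkdir -p test-data
  - '[ -f test-data/input.dat ] || wget -O test-data/input.dat https://example.com/input.dat'

script:
  - ./test.sh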

How to restrict runners to a specific branch and lock the .gitlab-ci.yml from changes?

Right now, anyone that creates a branch in my project and adds a .gitlab-ci.yml file to it, can execute commands on my server using the runner. How can I make it so that only masters or owners can upload CI config files and make changes to them?
I'm using https://gitlab.com/gitlab-org/gitlab-ci-multi-runner running on bash.
The GitLab runner wasn't really designed for this scenario, so you are unable to do this. What you could do instead is create a new project containing just your .gitlab-ci.yml file and configure it to pull the original repository. From there you can do all the other things you want to do with your repository.
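A sketch of that setup, in the .gitlab-ci.yml of the new, locked-down project (the repository URL and script name are hypothetical, and it assumes the runner is allowed to clone the original repository):
build:
  script:
    - git clone https://gitlab.example.com/group/original-project.git
    - cd original-project
    - ./build.sh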

How to update my server files from a git repo automatically everyday

I am a noob at this server-related work. I write PHP code on my local system and have been updating my repo on GitHub regularly. Each time I want to test my application, I copy all the files from my local system onto my server through FTP and then test there. Now I want to know whether there is a way to automatically make the commits I push show up in the files on the server. Is there a way to make the server get the files from the repo periodically (say, once a day)?
Or can it be done the other way around: when I push from my local machine, the repo gets updated and, in turn, the files on the server also get updated?
My Server Details: Apache 2.2.15, Architecture i686 with Linux Kernel 2.6.18-194.32.1.el5
In addition to cronjobs, you can use a post-receive hook: http://help.github.com/post-receive-hooks/
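If you push to a bare repository on your own server instead of GitHub, a server-side hook can check the files out automatically. A sketch of hooks/post-receive in the bare repo (the work-tree path is hypothetical):
#!/bin/sh
# deploy the pushed master branch into the web root
GIT_WORK_TREE=/var/www/html git checkout -f master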
If you have cron available, you can use it. First set up the repository on your server. Then create a cronjob, choose a time at which it should run, and have it execute the following command:
cd your/repository/folder; git pull origin master
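The crontab entry could look like this sketch (the time and repository path are placeholders):
# pull from origin every day at 03:00
0 3 * * * cd /path/to/your/repository && git pull origin master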
Perhaps explore the git archive command, which you can use to get a zip file (or similar) of your code. You could then use a script to copy that to your (other) server.
git archive --format=zip --output=./src.zip HEAD
This will create a zip file called src.zip from the HEAD of your repo.
More info:
http://www.kernel.org/pub/software/scm/git/docs/git-archive.html
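Combined with a copy step, a sketch might look like this (the host and destination path are hypothetical):
# archive HEAD and copy the zip to the other server
git archive --format=zip --output=./src.zip HEAD
scp ./src.zip user@yourserver.example.com:/var/www/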
Do a "git export" (like "svn export")?