Can you schedule a GitLab Runner? - gitlab-ci

Is it possible to schedule the uptime of a runner?
We have a machine that is used heavily for other non-GitLab jobs during the day, so we would like it to accept jobs only overnight, as we used to have on Jenkins. Is this possible? If so, how?

You can use the GitLab API to activate or disable runners; see the documentation.
For example:
PUT /runners/:id
active: false
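A minimal sketch of automating this with cron, assuming runner ID 42, a GitLab host of gitlab.example.com, and an admin-level token (all placeholders):
# Pause the runner at 08:00 and resume it at 20:00 via the Runners API
0 8 * * * curl --request PUT --header "PRIVATE-TOKEN: <admin-token>" --form "active=false" "https://gitlab.example.com/api/v4/runners/42"
0 20 * * * curl --request PUT --header "PRIVATE-TOKEN: <admin-token>" --form "active=true" "https://gitlab.example.com/api/v4/runners/42"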

Related

How to execute Jenkins jobs in parallel for TestCafe test execution

I have created a Jenkins job to execute the TestCafe tests, and the job works fine.
But I want to execute multiple jobs in parallel.
As TestCafe runs tests on its default port, it does not allow me to execute jobs in parallel.
Can anyone please suggest how to achieve this?
I tried to execute the jobs in parallel, but it throws a "port already in use" exception.
I am not able to change the port in the Jenkins job.
When starting TestCafe, you can set ports:
https://testcafe.io/documentation/402644/reference/testcafe-api/testcafe/createrunner
https://testcafe.io/documentation/402639/reference/command-line-interface#--ports-port1port2
Use different ports for each parallel task.
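For example, with the TestCafe command-line interface each parallel Jenkins job can pin its own port pair (the port numbers and test path here are arbitrary):
# Job 1
testcafe chrome:headless tests/ --ports 1337,1338
# Job 2, on a different pair so the runs don't collide
testcafe chrome:headless tests/ --ports 1339,1340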

GitLab CI stuck at "Waiting Fargate task to be ready" - the Fargate task is in fact running, but the job never completes

Having set up GitLab CI and AWS Fargate resources as described in the documentation, we have a situation where the runner can trigger the Fargate task, which goes into RUNNING state, but the master runner never seems to realize this.
Running with gitlab-runner 14.7.0 (98daeee0)
on gitlab-fargate-master DyE5BsVA
Preparing the "custom" executor
INFO[2022-01-27T13:54:49Z] Starting fargate PID=1447 version="0.2.0 (933d940)"
INFO[2022-01-27T13:54:49Z] Executing the command PID=1447 command=config_exec
Using Custom executor with driver fargate 0.2.0 (933d940)...
INFO[2022-01-27T13:54:49Z] Starting fargate PID=1452 version="0.2.0 (933d940)"
INFO[2022-01-27T13:54:49Z] Executing the command PID=1452 command=prepare_exec
INFO[2022-01-27T13:54:56Z] Starting new Fargate task PID=1452 command=prepare_exec
INFO[2022-01-27T13:54:58Z] Persisting data that will be used by other commands PID=1452 command=prepare_exec taskARN="arn:aws:ecs:us-east-1:558517226390:task/gitlab-ci-cluster/ee488fa1d7d7475fab9be01d5bad180e"
INFO[2022-01-27T13:54:58Z] Waiting Fargate task to be ready PID=1452 command=prepare_exec taskARN="arn:aws:ecs:us-east-1:558517226390:task/gitlab-ci-cluster/ee488fa1d7d7475fab9be01d5bad180e"
Within AWS, the task has created its log stream in CloudWatch, but there are no events in that log. It's unclear what is actually happening.
What can be done to find out?
We have reverted to using a vanilla Docker container from the GitLab documentation, registry.gitlab.com/tmaczukin-test-projects/fargate-driver-debian:latest, but exactly the same happens.
Solved - the problem was a missing AWS permission, ecs:DescribeTasks, which for some reason was not causing an error message in the Runner.
(I had mistakenly added AmazonEC2_FullAccess, not AmazonECS_FullAccess as described in the docs.)
Having run a "Generate Policy" in AWS based on CloudTrail Events (awesome new feature!), I can now confirm the permissions actually being used are:
EC2: DescribeNetworkInterfaces
ECS: StopTask, DescribeTasks, RunTask
Note the EC2 permission, which is missing from the docs.
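A quick way to check whether the role can actually poll task state is to call the same API the driver relies on; a sketch with the cluster name from the logs above and a placeholder task ARN:
# If this fails with an AccessDenied error, the role lacks ecs:DescribeTasks
# and the driver will wait for the task forever
aws ecs describe-tasks \
  --cluster gitlab-ci-cluster \
  --tasks <task-arn>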
Not sure if you have solved your problem, but I noticed this question as I had the exact same issue yesterday. For me this was caused by my GitLab manager task using an IAM role that was limited to starting and stopping tasks but was apparently missing the permission to check whether a task is in the RUNNING state. So I fixed my ECS execution role and then it started working for me.

How to control job scheduling in a better way in gitlab-ci?

I have jobs in my GitLab projects defined and executed via gitlab-ci. However, it doesn't handle interdependent jobs well, as there's no management of this case except doing it manually.
The case I have is a service, part of the overall app, that takes a long time to start. Starting this service is done within one job, while another job runs a second service, also part of the overall app, that queries the former. Due to this interdependence, I have simply delayed the execution of the later job so that the former job's service is most probably up and running by then.
I wanted to use Rundeck as a job scheduler, but I am not sure if this can be done with GitLab. Maybe I am wrong about GitLab, so does GitLab allow better job scheduling?
Here's an example of what I am doing:
.gitlab-ci.yml
deploy:
  environment:
    name: $CI_ENVIRONMENT
    url: http://$CI_ENVIRONMENT.local.net:4999/
  allow_failure: true
  script:
    - sudo dpkg -i myapp.deb
    - sleep 30m  # here I wait for the service to be ready for later jobs to run successfully
    - RESULT=`curl http://localhost:9999/api/test | grep Success`
This looks like a typical use case for the trigger feature inside gitlab-ci;
see gitlab-ci triggers.
Mostly, at the end of the long start-up job for service A, use curl to trigger another pipeline:
deploy_service_a:
  stage: deploy
  script:
    - "curl --request POST --form token=TOKEN --form ref=master https://gitlab.example.com/api/v4/projects/9/trigger/pipeline"
  only:
    - tags
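To avoid the fixed 30-minute sleep in the original job, the deploy script could instead poll the service's health endpoint until it answers; a minimal sketch reusing the question's example URL (interval and retry count are arbitrary):
# Poll every 10s for up to 30 minutes instead of sleeping blindly
for i in $(seq 1 180); do
  curl --silent http://localhost:9999/api/test | grep -q Success && break
  sleep 10
done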

How to submit code to a remote Spark cluster from IntelliJ IDEA

I have two clusters, one in a local virtual machine and another in a remote cloud. Both clusters are in standalone mode.
My Environment:
Scala: 2.10.4
Spark: 1.5.1
JDK: 1.8.40
OS: CentOS Linux release 7.1.1503 (Core)
The local cluster:
Spark Master: spark://local1:7077
The remote cluster:
Spark Master: spark://remote1:7077
I want to finish this:
Write code (just a simple word count) in IntelliJ IDEA locally (on my laptop), set the Spark Master URL to spark://local1:7077 or spark://remote1:7077, and then run my code in IntelliJ IDEA. That is, I don't want to use spark-submit to submit a job.
But I got some problem:
When I use the local cluster, everything goes well. Running the code in IntelliJ IDEA or using spark-submit can submit the job to the cluster and finish it.
But when I use the remote cluster, I get a warning log:
TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources
Note that it says sufficient resources, not sufficient memory!
And this log keeps printing with no further actions. Both spark-submit and running the code in IntelliJ IDEA give the same result.
I want to know:
Is it possible to submit codes from IntelliJ IDEA to remote cluster?
If it's OK, does it need configuration?
What are the possible reasons that can cause my problem?
How can I handle this problem?
Thanks a lot!
Update
There is a similar question here, but I think my scenario is different. When I run my code in IntelliJ IDEA and set the Spark Master to the local virtual machine cluster, it works. But when I set it to the remote cluster, I get the Initial job has not accepted any resources;... warning instead.
I want to know whether a security policy or firewall could cause this?
Submitting code programmatically (e.g. via SparkSubmit) is quite tricky. At the least, there is a variety of environment settings and considerations, handled by the spark-submit script, that are quite difficult to replicate within a Scala program. I am still uncertain how to achieve it, and there have been a number of long-running threads within the Spark developer community on the topic.
My answer here is about a portion of your post: specifically the
TaskSchedulerImpl: Initial job has not accepted any resources; check
your cluster UI to ensure that workers are registered and have
sufficient resources
The reason is typically a mismatch between the memory and/or number of cores your job requests versus what is available on the cluster. Possibly, when submitting from IntelliJ, the settings in
$SPARK_HOME/conf/spark-defaults.conf
did not properly match the parameters required for your task on the existing cluster. You may need to update:
spark.driver.memory 4g
spark.executor.memory 8g
spark.executor.cores 8
You can check the Spark UI on port 8080 to verify that the parameters you requested are actually available on the cluster.
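The same values can also be passed per submission; for comparison, a sketch of the equivalent spark-submit invocation against the remote master (the class name and jar path are placeholders):
spark-submit \
  --master spark://remote1:7077 \
  --driver-memory 4g \
  --executor-memory 8g \
  --executor-cores 8 \
  --class WordCount target/wordcount.jar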

Jenkins: Restricting test jobs to run on the same slave as the job they're testing

I've recently started to change my Jenkins jobs from being restricted to a certain slave to being restricted to a slave group identified by a label. However I have test jobs that I need to run on the same slave as the job that they're testing.
I need a way to tie two jobs together such that they can only be run on the same slave, but the slave is still chosen by Jenkins based on availability, etc.
Anyone know how to do this, or even if it's possible? Thanks in advance!
Couldn't you use
https://wiki.jenkins-ci.org/display/JENKINS/Parameterized+Trigger+Plugin
to pass the node name ${NODE_NAME} (see https://wiki.jenkins-ci.org/display/JENKINS/Building+a+software+project#Buildingasoftwareproject-JenkinsSetEnvironmentVariables) to the next build, which should be parameterized on the node label (which can be a node name) using
https://wiki.jenkins-ci.org/display/JENKINS/NodeLabel+Parameter+Plugin
I need a way to tie two jobs together such that they can only be run on the same slave, but the slave is still chosen by Jenkins based on availability, etc.
I have the same problem, and I found the Node Stalker Plugin.
Right now the plugin can be found at the following URL:
https://wiki.jenkins-ci.org/display/JENKINS/Node+Stalker+Plugin
Jenkins lists this plugin as
Job Node Stalker
on the plugin management page. It will be part of Jenkins.