Multi-device management in GitLab CI/CD pipelines - testing

There are several embedded Linux devices connected over Ethernet. The goal is to install a software image, built by a previous pipeline, on one of the available devices. After the image has been uploaded successfully, a set of Python tests shall be executed.
To manage the different IP addresses I have used the "one runner, multiple workers" concept. For each worker I have defined a variable DUT_IP that points to a device. For a single job in the pipeline this works as intended:
concurrent = 2
check_interval = 0

[session_server]
  session_timeout = 1800

[[runners]]
  name = "Laboratory Tests runner #2"
  url = "https://gitlab-ee.dev.ifm/"
  token = "XXXXX"
  executor = "docker"
  environment = ["DUT_IP=192.168.0.70"]

[[runners]]
  name = "Laboratory Tests runner #1"
  url = "https://gitlab-ee.dev.ifm/"
  token = "XXXXXX"
  executor = "docker"
  environment = ["DUT_IP=192.168.0.71"]
By using resource groups I can prevent multiple jobs from being executed on the same device at the same time. As I understand it, however, this does not ensure that jobs are executed in a specific order. What I need to ensure is that each pipeline performs the following steps, in this order:
Firmware update
Python Tests
Test result upload
Of course I could put everything into a single job, but that job would become quite big. Are there other, more general solutions, maybe with dynamic environments? Unfortunately I was not able to find a good blueprint for this kind of setup online.
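A rough sketch of what this could look like (job names, script commands, and the resource group name below are placeholders, not a verified setup): the three steps become stages, which GitLab runs in order within a pipeline, and all jobs share a resource_group so that only one of them runs at a time. Note that this alone does not pin all three jobs of a pipeline to the same worker/DUT_IP, which is part of the open question.

stages:
  - firmware_update
  - test
  - upload

firmware_update:
  stage: firmware_update
  resource_group: lab-dut                  # placeholder resource group name
  script:
    - ./flash_image.sh "$DUT_IP"           # hypothetical firmware update script

python_tests:
  stage: test
  resource_group: lab-dut
  script:
    - pytest tests/ --dut-ip "$DUT_IP"     # hypothetical test invocation

upload_results:
  stage: upload
  resource_group: lab-dut
  script:
    - ./upload_results.sh                  # hypothetical result upload step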

Related

CloudFlare worker environment creates separate services with 1 env each, not 1 service with 2 environments

I'm confused about the intended function of Cloudflare's Worker Environments.
In the Cloudflare dashboard, a worker has an environment dropdown, which defaults to "Production". I thought that by leveraging the environments in my wrangler file I would have a single worker with multiple environments. However, what ended up happening was that I just had two workers, with the environment name appended at the end (my-worker-dev and my-worker-prod). Each of these workers has one environment (the Production environment).
I'm not sure if I'm doing something wrong or just misunderstanding the intended behavior.
Can someone help me understand the difference between wrangler simply appending a different name and the "Environment" dropdown within a single worker/service?
My wrangler.toml file
name = "my-worker"
type = "javascript"
account_id = "<redacted>"
workers_dev = true
compatibility_date = "2021-12-10"
[env.dev]
vars = { ENVIRONMENT = "dev" }
kv_namespaces = [
{ binding = "TASKS", id = "<redacted>", preview_id = "<redacted>" }
]
[env.prod]
vars = { ENVIRONMENT = "prod" }
kv_namespaces = [
{ binding = "TASKS", id = "<redacted>", preview_id = "<redacted>" },
]
[build]
command = "npm install && npm run build"
[build.upload]
format = "modules"
dir = "dist"
main = "./worker.mjs"
I think there is currently some disconnect / confusion between the meaning of "environments" as defined in the new Dashboard functionality, and the pre-existing wrangler "environment" support.
For the Dashboard / Web UI, you define a "Service" which has multiple workers grouped under it (one per environment). This allows "promoting" a worker from one environment to another (essentially copying the script, but having separate variables and routes).
There is separate documentation for this functionality - https://developers.cloudflare.com/workers/learning/using-services#service-environments
Wrangler "environments", as you've seen, work differently. Simply creating one top-level "production" worker / service (named for the environment). The good news is (according to the docs above) it sounds like Cloudflare will be updating wrangler to support the new Dashboard type environments:
As of January 2022, the dashboard is the only way to interact with Workers Service environments. Support in Wrangler is coming in v2.1
https://github.com/cloudflare/wrangler2/issues/27
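For illustration: with wrangler v1 (which the wrangler.toml above targets), the environment name is appended to the worker name at publish time, which is exactly the my-worker-dev / my-worker-prod behaviour described in the question:

# publishes a worker named "my-worker-dev"
wrangler publish --env dev

# publishes a worker named "my-worker-prod"
wrangler publish --env prod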

Running a VASP job in an HPC cluster

Using pyiron, I have built up my script and I would like to submit it to the cluster to run. I was wondering, how can I do that?
Note: VASP is already installed on my cluster.
pyiron uses pysqa to submit jobs to a queuing system:
https://github.com/pyiron/pysqa
With sample queuing configurations available at:
https://github.com/pyiron/pysqa/tree/master/tests/config
So in your pyiron resources directory you create a folder named queues which contains the pysqa queuing system configuration.
Once this is done you can use:
job.server.list_queues()
to view the available queues and:
job.server.view_queues()
to get more information about the individual queue and finally submit the job using:
job.server.queue = 'queue_name'
where queue_name is the name of the queue you want to select and then specify the cores and run_time using:
job.server.cores = 8
job.server.run_time = 30000
Finally, when you call job.run() the job is automatically submitted to the queue.
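Putting the steps above together, a minimal sketch might look like the following (project name, job name, and structure are placeholders; the queues folder in the pyiron resources directory is assumed to be configured as described):

from pyiron import Project

pr = Project("vasp_on_cluster")                    # placeholder project
job = pr.create_job(pr.job_type.Vasp, "vasp_job")  # placeholder job name
job.structure = pr.create_ase_bulk("Al")           # placeholder structure

print(job.server.list_queues())   # queues defined in the pysqa configuration
print(job.server.view_queues())   # per-queue details (cores, run time limits)

job.server.queue = "queue_name"   # one of the listed queues
job.server.cores = 8
job.server.run_time = 30000

job.run()                         # submitted to the queuing system automatically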

Dynamic or Common bitbucket-pipelines.yml file

I am trying to set up automated deployment through Bitbucket Pipelines, but I have not succeeded yet; maybe my business requirement cannot be fulfilled by Bitbucket Pipelines.
Current setup:
The dev team pushes code to the default branch from their local machines. The team lead reviews the code and then updates the UAT and production servers manually by running the following commands directly on the server CLI:
#hg branch
#hg pull
#hg update
Automated deployment we want:
We have three environments: DEV, UAT/Staging, and production.
Based on these environments I have created release branches: DEV-Release, UAT-Release, and PROD-Release respectively.
The dev team pushes code directly to the default branch. The dev lead checks the changes and then creates a pull request from default to the UAT-Release branch; after a successful deployment on the UAT server, they again create a pull request from default to the production branch. The pipeline should be executed on the pull request, copy bundle.zip to AWS S3, and then deploy it to the AWS EC2 instance.
Issues:
The issue I am facing is that bitbucket-pipelines.yml is not the same on all release branches, because the branch names differ; as a result, when we create a pull request for any release branch we get a conflict on that file.
Is there a way I can use the same bitbucket-pipelines.yml file for all the branches, so that the deployment happens for the particular branch the pull request was created for?
Can we make that file dynamic for all branches with environment variables?
If Bitbucket Pipelines cannot fulfill my business requirement, what would be an alternative solution?
If you think my business requirement is not reasonable or justifiable, just let me know which step I would have to change to achieve the final result of automated deployments.
Flow:
Developer machine pushes to --> Bitbucket default branch --> lead reviews the code, then creates a pull request to a release branch (UAT, PROD) --> pipeline is executed and pushes the code to the S3 bucket --> AWS CodeDeploy --> EC2 application server.
Thanks in advance for a prompt response.
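For what it is worth, one common pattern for keeping a single bitbucket-pipelines.yml across all branches is to define the deploy step once as a YAML anchor and reuse it per release branch, with per-environment values coming from Bitbucket deployment variables instead of branch-specific files. A rough sketch (packaging command, variable names, and CodeDeploy identifiers are placeholders; it assumes an image with the AWS CLI available and AWS credentials set as deployment variables):

definitions:
  steps:
    - step: &deploy
        name: Package and deploy
        script:
          - zip -r bundle.zip .                                   # placeholder packaging
          - aws s3 cp bundle.zip "s3://$S3_BUCKET/bundle.zip"
          - aws deploy create-deployment --application-name "$CD_APP" --deployment-group-name "$CD_GROUP" --s3-location bucket="$S3_BUCKET",key=bundle.zip,bundleType=zip

pipelines:
  branches:
    UAT-Release:
      - step:
          <<: *deploy
          deployment: staging       # staging deployment variables apply here
    PROD-Release:
      - step:
          <<: *deploy
          deployment: production    # production deployment variables apply here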

Testing site with IP addr whitelist using BrowserStack automate + cloud hosted CI

I have a test system (various web pages / web applications) that is hosted in an environment accessible only from machines with whitelisted IP addresses. I control the whitelist.
Our CI system is cloud hosted (GitLab), so VMs are spun up dynamically as needed to run automated integration tests as part of the build pipeline.
The tests in question use BrowserStack Automate to run Selenium-based tests, which means the source IP addresses of the BrowserStack-driven requests that hit the test environment are dynamic, as BrowserStack is cloud hosted. The IP addresses of our test runner machines that invoke the BrowserStack automation are dynamic as well.
The whole system worked fine before the introduction of IP whitelisting on the test environment. Since whitelisting was enabled, the BrowserStack tests can no longer access the environment URLs (because the dynamic IPs cannot be whitelisted).
I have been trying to get the CI-driven tests working again using the BrowserStack "Local Testing" feature, outlined here: https://www.browserstack.com/local-testing.
I have set up a dedicated Linux VM with a static IP address (cloud hosted). I have installed and am running the BrowserStackLocal.exe binary, using our BrowserStack key. It starts up fine and says it has connected to BrowserStack via a web socket. My understanding is that this should cause all HTTP(S) requests coming from my CI / BrowserStack-driven tests to be routed through that stand-alone machine (via the BrowserStack cloud), resulting in its static IP address being the source of the requests seen at the test environment. This IP address is whitelisted.
This is the command that is running on the dedicated / static IP machine:
BrowserStackLocal.exe --{access key} --verbose 3
I have also tried the below, but it made no apparent difference:
BrowserStackLocal.exe --{access key} --force-local --verbose 3
However, this does not seem to work, either through "Live" testing (if I try to access the test env directly through BrowserStack) or through BrowserStack Automate. In both cases the HTTP(S) requests all time out and cannot reach our test environment URLs. Also, even with the --verbose 3 logging level enabled on the BrowserStackLocal.exe process, I never see any request being logged on the stand-alone / static IP machine when I try to run the tests in various ways.
So I am wondering if this is the correct way to solve this problem? Am I misunderstanding how to do this? Do I need to run the BrowserStackLocal.exe perhaps on the same CI runner machine that is invoking the BS automation? This would be problematic as these have dynamic IPs as well (currently).
Thanks in advance for any help!
EDIT/UPDATE: I managed to get this to work!! (Sort of) - it's just a bit slow. If I run the following command on my existing dedicated / static IP server:
BrowserStackLocal.exe --key {mykey} --force-local --verbose 3
Then, from another machine (like my dev laptop), if I hit the BrowserStack WebDriver hub at http://hub-cloud.browserstack.com/wd/hub and access the site http://www.whatsmyip.org/ to see what IP address comes back, it did (eventually) come back with my static IP machine's address! The problem, though, is that it was quite slow - 20-30 secs for that one site hit - so I am still looking at alternative solutions. Note that for this to work your test code must set the "local" BrowserStack capability flag to 'true' - e.g. for Node.js:
// Input capabilities
var capabilities = {
'browserstack.local' : 'true'
}
UPDATE 2: Turning down the --verbose logging level on the local binary (or leaving that flag off completely) seemed to improve things - I am getting 5-10 sec response times now for each request. That might have to do. But this does work as described.
SOLUTION: I managed to get this to work - it's just a bit slow. If I run the following command on my existing dedicated / static IP server (note adding verbose logging seems to slow things down more, so no --verbose flag used now):
BrowserStackLocal.exe --key {mykey} --force-local
Then, from another machine (like my dev laptop), if I hit the BrowserStack WebDriver hub at http://hub-cloud.browserstack.com/wd/hub and access the site http://www.whatsmyip.org/ to see what IP address comes back, it did come back with my static IP machine's address. Note that for this to work your test code must set the "local" BrowserStack capability flag to 'true' - e.g. for Node.js:
// Input capabilities
var capabilities = {
'browserstack.local' : 'true'
}
So while a little slow, that might have to do. But this does work as described.
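For reference, a fuller Node.js sketch of the client side (placeholder credentials and browser choice; assumes the selenium-webdriver package) that points the test at the BrowserStack hub with the local flag enabled:

// Sketch only: placeholder credentials, using the selenium-webdriver package
const webdriver = require('selenium-webdriver');

const capabilities = {
  'browserstack.user': 'YOUR_USERNAME',   // placeholder BrowserStack credentials
  'browserstack.key': 'YOUR_ACCESS_KEY',
  'browserstack.local': 'true',           // route traffic through the BrowserStackLocal binary
  'browserName': 'chrome'
};

const driver = new webdriver.Builder()
  .usingServer('http://hub-cloud.browserstack.com/wd/hub')
  .withCapabilities(capabilities)
  .build();

driver.get('http://www.whatsmyip.org/')   // should now report the static IP of the Local machine
  .then(() => driver.quit());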

How to configure and run remote celery worker correctly?

I'm new to Celery and may be doing something wrong, but I have already spent a lot of time trying to figure out how to configure Celery correctly.
So, in my environment I have two remote servers; one is the main one (it has a public IP address and hosts most of the stuff: the database server, the RabbitMQ server, and the web server running my web application) and the other is used for specific tasks which I want to invoke asynchronously from the main server using Celery.
I was planning to use RabbitMQ as the broker and as the result backend. The Celery config is very basic:
CELERY_IMPORTS = ("main.tasks", )
BROKER_HOST = "Public IP of my main server"
BROKER_PORT = 5672
BROKER_USER = "guest"
BROKER_PASSWORD = "guest"
BROKER_VHOST = "/"
CELERY_RESULT_BACKEND = "amqp"
When I run a worker on the main server, tasks are executed just fine, but when I run it on the remote server only a few tasks are executed and then the worker gets stuck, unable to execute any further task. When I restart the worker it executes a few more tasks and gets stuck again. There is nothing special inside the task; I even tried a test task that just adds two numbers. I tried running the worker differently (daemonized and not, with different concurrency settings, and using celeryd_multi), but nothing really helped.
What could be the reason? Did I miss something? Do I have to run something on the main server other than the broker (RabbitMQ)? Or is it a bug in Celery (I tried a few versions: 2.2.4, 2.3.3 and dev, but none of them worked)?
Hm... I've just reproduced the same problem with a local worker, so I don't really know what it is... Is it required to restart the Celery worker after every N tasks executed?
Any help will be very much appreciated :)
I don't know if you ended up solving the problem, but I had similar symptoms. It turned out that (for whatever reason) print statements from within tasks were causing tasks not to complete (maybe some sort of deadlock situation?). Only some of the tasks had print statements, so when these tasks executed, eventually all of the worker processes (the number set by the concurrency option) were exhausted, which caused tasks to stop executing.
Try setting your Celery config to:
CELERYD_PREFETCH_MULTIPLIER = 1
CELERYD_MAX_TASKS_PER_CHILD = 1
(See the Celery configuration docs for more on these settings.)
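To illustrate the kind of change this implies, a sketch only (using the Celery 2.x-style task decorator and the standard logging module): log from inside the task instead of printing to stdout.

# tasks.py - sketch: use a logger instead of print() inside tasks
import logging
from celery.task import task

logger = logging.getLogger(__name__)

@task
def add(x, y):
    logger.info("adding %s + %s", x, y)   # log instead of print
    return x + y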