This is most probably a terribly simple issue, but I have not been able to find an answer.
I am trying to use Ansible and have written some YAML that I now want to test.
Google suggests running ansible-playbook with parameters, but my dev machine doesn't have that command installed, and it may not have the files that the playbook reads from either.
So the question is: how do you run Ansible while you are developing the YAML, so you can test what it is doing?
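For what it's worth, a common way to test while developing is to install Ansible locally (it's just a Python package) and dry-run the playbook against localhost. A minimal sketch, assuming your playbook is called site.yml and targets localhost or all:

```sh
# Install Ansible on the dev machine
pip install --user ansible

# Cheap first check: does the playbook even parse?
ansible-playbook --syntax-check site.yml

# Dry run against localhost with an inline inventory (note the trailing
# comma) -- reports what WOULD change without changing anything
ansible-playbook -i localhost, -c local --check --diff site.yml
```

--check can't catch everything (tasks that depend on the results of earlier tasks may behave differently), but it flags most typos and logic errors before you touch a real host.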
I have a React frontend and a Node backend. I've made several E2E tests with Cypress for the frontend and run them locally. I love how end-to-end testing lets me catch errors in the frontend as well as in the backend, so I'd like a way to run the tests automatically when sending a PR.
I'm using Bitbucket Pipelines, and I've configured it to run the npm test command, which works perfectly for my unit tests. What I still don't understand is how to run my Cypress tests automatically, because I'd need access to the backend repository from the pipeline that runs on the frontend repo.
What I've tried
I've read the documentation and played with the example repo, but I still don't understand how I could automate running my tests; in the example, both backend and frontend are in the same repository.
I'm sorry if this is a vague question. I just can't tell whether this is even possible with Bitbucket Pipelines, and if it's not, what other tool could help me run my Cypress tests the way I do locally (running both backend and frontend)?
I've really tried to search for an answer to this. Maybe it's too obvious and I'm just missing something, but I can't find anything on the internet about it. Any help will be much appreciated!
When your frontend and backend are versioned in different repositories, you have to check out at least one of them (the one whose pipeline is not currently executing) during the pipeline run. That gives you access to the code, so you can start frontend and backend together and run your tests against both.
This question has also already been asked and answered here:
https://community.atlassian.com/t5/Bitbucket-questions/Access-multiple-Bitbucket-repositories-from-a-single-Pipeline/qaq-p/1783419
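As a rough illustration, a pipeline step in the frontend repository could clone the backend over SSH, start both apps, and then run Cypress. This is only a sketch: the repo URL, ports, start commands, and Docker image are assumptions, and the clone requires an SSH key with read access to the backend repo (configured in the pipeline's SSH keys settings):

```yaml
# bitbucket-pipelines.yml (sketch, in the frontend repo)
image: cypress/base:14
pipelines:
  pull-requests:
    '**':
      - step:
          name: E2E tests
          script:
            # Check out the backend repo as well (needs a read-access SSH key)
            - git clone git@bitbucket.org:myteam/backend.git ../backend
            # Install and start the backend in the background
            - (cd ../backend && npm ci)
            - (cd ../backend && npm start &)
            # Install and start the frontend in the background
            - npm ci
            - (npm start &)
            # Wait until both servers answer, then run Cypress headlessly
            - npx wait-on http://localhost:4000 http://localhost:3000
            - npx cypress run
```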
I am working on a college project along with a group of people. Our goal is to add features to an already existing application that runs on the web. Currently, I'm in the process of getting the source code to run on my machine. This consists of cloning a bunch of repos, installing MySQL and some (very old and outdated :-| ) versions of Python, and running some scripts. The process sounds straightforward, but it isn't: there are a lot of dependencies that need to be met for the code to run, which means I spend a lot of time looking at error logs trying to figure out which package is missing and needs to be installed or downgraded. But that's not the point of this question.
I'd like to make it easier for people to pick up the project in the future and work on it without having to spend hours just to get the code to compile. I'd like to get the project set up on a Linux VM (something I know how to do using VirtualBox) and then somehow share (?) that VM so that other people can simply set it up and be able to immediately have the code compiling (something that I don't know how to do, or if it is even possible).
Additionally, I'd like to be able to do all the coding on the host OS if possible, and only do the compiling/running on the VM (something I also don't know how to do). I would like some help/pointers with all the "I don't know"s, as I don't know much about VMs other than how to set one up using VirtualBox.
You can use Vagrant to automate provisioning of the VM, and set up all your tools and dependencies using Docker.
There are many good tutorials and sample Vagrantfiles online to get you started. There is a learning curve involved, but it is well worth the effort. Many companies use Vagrant to quickly provision dev environments.
Vagrant can automatically download a specific distro/version of a VM from the web if one is not already installed locally. It can also provision a Docker container, in which you can install any required dependencies, tools, etc. You can store the Vagrantfile, Dockerfile, scripts, etc. in GitHub for easy access by your colleagues. All they would have to do is install Vagrant and run vagrant up from the command line.
If you want to write code on the host machine and compile/test it on the VM, you will need to set up a shared folder in the VM using Guest Additions (see here). Be VERY careful with line endings if you are working in Windows and running in Linux. You can set up the shared folder with Vagrant as well (see here).
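To make that concrete, a minimal Vagrantfile along these lines might look like the sketch below; the box name, forwarded port, and package list are assumptions to adapt to the project:

```ruby
# Vagrantfile (sketch)
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/focal64"           # downloaded automatically if missing
  config.vm.synced_folder ".", "/vagrant"    # edit on the host, build in the VM
  config.vm.network "forwarded_port", guest: 8000, host: 8000

  # Runs once, when the VM is first created
  config.vm.provision "shell", inline: <<-SHELL
    apt-get update
    apt-get install -y mysql-server python2 docker.io
  SHELL
end
```

With this checked into the repo, a colleague only needs to install VirtualBox and Vagrant, run vagrant up, and then vagrant ssh to compile and run inside the VM while editing the files on the host.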
I am working on a repository where I have to extract some features from millions of files.
For me, the current workflow is:
Write code in IntelliJ
Run unit tests
Dry run with small data
ssh to the remote machine
sftp the current code to the remote machine
Run on the server against all the millions of files
Look at the logged exceptions to find where the code fails on edge cases
Fix those issues and repeat from step 1
My question is two-fold:
Is there an easy way to sync code with the remote machine automatically? (I know I can commit to git and then pull the changes on the machine, but is there some way other than setting up rsync etc.?)
Can I run code directly on the remote machine from the IDE and debug it that way?
1) There are a lot of ways to sync your code. Sometimes the best option is to write a small deploy script in Python or sh, if you don't want to commit and push every change you are going to test on the server. You can use sftp or scp with a bit of automation (compress with gzip and so on). Git and rsync are the more mature solutions here, and with a VCS your problems will be more reproducible and easier to track down.
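A bare-bones version of such a deploy script, with the host, paths, and run command as placeholders, could be as small as:

```sh
#!/bin/sh
# deploy.sh (sketch): push the working tree to the server and start the job
HOST=user@remote-machine
DEST=/home/user/feature-extractor

# rsync only transfers changed files; exclude VCS and build directories
rsync -avz --delete --exclude '.git' --exclude 'target/' ./ "$HOST:$DEST/"
ssh "$HOST" "cd $DEST && ./run_extraction.sh 2>&1 | tee run.log"
```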
2) You can connect to the remote process and debug it directly from IntelliJ. There is an official tutorial for that: https://www.jetbrains.com/help/idea/tutorial-remote-debug.html (but it depends on your security settings: the Java debug protocol is not itself secure, so you may need to set up an SSH tunnel for it).
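Concretely, remote debugging means starting the JVM on the server with the JDWP agent enabled and attaching IntelliJ's "Remote JVM Debug" run configuration to that port. The port and jar name below are just examples:

```sh
# On the server: start the app with the debug agent listening on port 5005
java -agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=*:5005 \
     -jar feature-extractor.jar

# On your laptop: forward the debug port over SSH, then attach IntelliJ
# to localhost:5005
ssh -L 5005:localhost:5005 user@remote-machine
```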
3) Another, a bit more radical option: you might run IntelliJ IDEA itself on the server and debug directly. You can use Projector, an open-source project which lets you run IntelliJ on the server with the UI in the browser (no X11 required). I recommend you look at this repo first: https://github.com/JetBrains/projector-docker, or configure IntelliJ on the server following the instructions here: https://github.com/JetBrains/projector-server.
I really don't know how to phrase this question for Google, so excuse me if it is naive.
Our team is developing an SPA application in ReactJS. We also do back-end programming for NodeJS. Our project recently gained more e2e tests. They are written using webdriver.io packages. Everything works as expected, but roughly 30 tests take about 50 minutes to run. That is too long to pause a developer's work and force them to run the tests.
We came up with the idea that now that we have so many tests, we need to run them on a separate computer (other than a developer's laptop; below I call it the e2e-laptop).
So I wrote a bash script and installed Ubuntu on the e2e-laptop. My idea is that a developer who wants to run the e2e tests logs in to the e2e-laptop with ssh, runs the script with arguments (e.g. --rev= the specific git revision the tests should run on, --email= where to send the Allure report), and logs out. After the tests are done, he gets the Allure report in his mailbox.
This all works, but it feels like a dirty MVP. What I would really like to give my team is a web-browser-based UI with the features my script has. I can imagine this software hosted on the e2e-laptop, with every developer able to open its web page in a local browser. Then, after authorization, there are options: run all specs, run chosen specs, send the report, and more. Ideally the software would also allow simultaneous test runs commissioned by multiple developers.
What software do I need?
You need a continuous integration tool. https://stackify.com/top-continuous-integration-tools/
I recommend Jenkins.
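To illustrate why a CI server fits here: a parameterized Jenkins pipeline exposes exactly the options the bash script takes (revision, which specs, where to mail the Allure report), with a web UI, authorization, and a build queue for free. A rough declarative sketch, with all names assumed and mailing done via the Email Extension plugin:

```groovy
// Jenkinsfile (sketch)
pipeline {
    agent any
    parameters {
        string(name: 'REV',   defaultValue: 'master', description: 'Git revision to test')
        string(name: 'SPECS', defaultValue: 'all',    description: 'Which specs to run')
        string(name: 'EMAIL', defaultValue: '',       description: 'Where to send the Allure report')
    }
    stages {
        stage('Run e2e tests') {
            steps {
                sh "git checkout ${params.REV}"
                sh "./run-e2e.sh --specs='${params.SPECS}'"
            }
        }
    }
    post {
        always {
            emailext(to: params.EMAIL,
                     subject: 'e2e report',
                     body: 'Allure report attached.',
                     attachmentsPattern: 'allure-report/**')
        }
    }
}
```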
I would first try to run your Selenium tests headless in a Docker container on your laptop. Once you are able to do that, use the same configuration in a Docker container running in Bitbucket Pipelines; it could actually be the same container and the same scripts. Then developers can just make a branch and work with the tests on that branch. If only a certain subset of tests needs to run, the developer can make the necessary changes on his or her local branch to run those tests and push it up to Bitbucket. This should help with the configuration: https://github.com/SeleniumHQ/docker-selenium.
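As a starting point for the headless setup, something like the following compose file gives the tests a standalone browser to connect to; the image tag and port are assumptions to check against the docker-selenium docs:

```yaml
# docker-compose.yml (sketch): standalone Chrome for the tests to target
services:
  chrome:
    image: selenium/standalone-chrome:4
    shm_size: 2gb          # Chrome tends to crash with the default /dev/shm
    ports:
      - "4444:4444"        # WebDriver endpoint: http://localhost:4444/
```

Locally and in the pipeline, the test runner then points its WebDriver config at http://localhost:4444 instead of a locally installed browser.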
I'm doing a project for the virtualization module of my computer science degree.
The professor is asking me to set up a BSD template in KVM that can be deployed automatically from the Virtualizor web panel.
I have successfully created a VM and configured the network with a bridge, but now the professor is asking me to create a template of that VM.
I see that I need to run a post-install script, but the thing is I don't know how or where I should run that script!
I have an sh file that will change the /etc/rc.conf settings, but the million-euro question is how I can run that script with Virtualizor KVM :(
Can anyone help me out, please?
You can't run your script through Virtualizor, because it's CSRF-protected and not open source!
But I think you can run the template-making file directly:
/usr/local/virtualizor/tools/windows.php
Check their documentation here: Create OS Template
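For reference, the rc.conf part itself is the easy bit: on FreeBSD a post-install script can use sysrc, which edits /etc/rc.conf safely instead of appending duplicate lines. The values below are only examples:

```sh
#!/bin/sh
# post-install.sh (sketch): run inside the freshly deployed clone
sysrc ifconfig_vtnet0="DHCP"   # vtnet0 is the usual virtio NIC under KVM
sysrc sshd_enable="YES"
sysrc ntpd_enable="YES"
```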