What is Gitpod: what does it actually do?

The Gitpod GitHub page says:
Gitpod is an open-source Kubernetes application providing prebuilt,
collaborative development environments in your browser - powered by VS
Code.
However, I cannot work out what it actually does. Can anyone please explain?

Gitpod co-founder here.
Gitpod = server-side-dev-envs + dev-env-as-code + prebuilds + IDE + collaboration.
From a Git repository on GitHub, GitLab, or Bitbucket, Gitpod can spin up a server-side dev environment for you in seconds. That's a Docker container that you can fully customize and that includes your source code, a Git-enabled terminal, VS Code extensions, your IDE (Theia IDE), etc. The dev environment is powerful enough to run your app and even side services like databases.
Step (1) is easily repeatable and reproducible because it's automated and version-controlled and shared across the team. We call this dev-environment-as-code. Think of infrastructure-as-code for your dev environment.
After (1), you're immediately ready to code, because your workspace is already compiled and all dependencies of your code have been downloaded. Gitpod does that by running your build tools on git push (like CI/CD would do), "prebuilding" your workspace, and storing it until you need it. This really shines when reviewing PRs in Gitpod.
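As a rough illustration of dev-environment-as-code, a minimal .gitpod.yml committed to the repository could look like this (the image name, npm commands, and extension ID are placeholder examples, not a definitive setup):

# .gitpod.yml - lives in the repo, so the whole team shares the same dev environment
image: gitpod/workspace-full           # base workspace image; can also point at a custom Dockerfile

tasks:
  - init: npm install && npm run build # runs during the prebuild, so results are cached
    command: npm run dev               # runs every time a workspace starts

vscode:
  extensions:
    - dbaeumer.vscode-eslint           # example VS Code extension to preinstall

With a file like that in place, every new workspace (and every prebuild) starts from the same definition.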
Collaboration becomes much easier once your dev environments live server-side and your IDE runs in the browser. Sending a snapshot of your dev environment to a colleague is as easy as sending a URL. The same goes for live shared coding in the same IDE and dev environment.
At the end of the day, you start treating your dev environments as something ephemeral: you start them, you code, you push your code, and you forget your dev environment. For your next task, you'll use a fresh dev environment.
The peace of mind that you get from not messing with, massaging, and maintaining dev environments on your local machine is incredibly liberating.
Gitpod can be used on gitpod.io, or self-hosted on Kubernetes, GCP, or AWS.

To illustrate Gitpod, note that GitLab 13.5 (October 2020) adds a new feature:
Launch Gitpod Workspaces directly from GitLab
Engineers have complicated development environments that can take time to set up and make testing changes or exploring new projects challenging. Often getting started with a project involves following documentation, installing dependencies, and hoping there are no conflicts with other services running. This process can be time consuming, error prone, and may not replicate the configuration accurately to test and contribute to a project.
With Gitpod integrated into GitLab, you can easily launch your Gitpod Workspace directly from the GitLab interface. When editing a project on GitLab, a new dropdown option exists to open that project in GitPod:
Gitpod allows you to define your project’s configuration in code so you can launch a prebuilt development environment with one click.
These environments are configured through a .gitpod.yml file inside of the project and include options for Docker configuration, start tasks, editor extensions and more. This flexible configuration, which is part of the project’s code, allows developers to get started working on a project quickly.
Try this today with the GitLab project which is already setup to work with Gitpod.
Thanks to Cornelius Ludmann from Gitpod for contributing this!
https://about.gitlab.com/images/13_5/phikai-launch-gitpod-editor.gif -- Launch Gitpod from the GitLab UI
See Documentation and Issue.
And with GitLab 14.2 (August 2021):
Launch a preconfigured Gitpod workspace from a merge request
The Gitpod integration, introduced in GitLab 13.5, helps you manage your complicated development environments.
Once you define your project’s configuration in code, you can launch a prebuilt, cloud-based development environment with a single click.
This convenient workflow has made it faster than ever to generate new changes, but launching a Gitpod environment to review an existing merge request meant building an environment against the main branch before switching to the target branch and building again.
Now, in GitLab 14.2, you can launch Gitpod directly from the merge request page, preconfigured to use the target branch, to speed up your reviews and reduce the need for context switching.
Enable the Gitpod integration, and your merge requests display a grouped Open in button, so you can open the merge request in either the Web IDE or Gitpod.
Thanks to Cornelius Ludmann from Gitpod for this contribution!
https://about.gitlab.com/images/14_2/create-gitpod-in-mr-view.png -- Launch a preconfigured Gitpod workspace from a merge request
See Documentation and Issue.

Gitpod is essentially an ephemeral/ad-hoc environment that instantiates a Docker container, typically defined via a .gitpod.Dockerfile referenced from the .gitpod.yml. At the core there is the VS Code integration, and the SSH Remote extension is the key piece that ties together a lot of the "what does Gitpod do" question. The UI is another key piece: workspaces can be cached via prebuilds (which are available almost instantly) or built manually as one-offs (which take much longer to run, because it's a full build), and they can be re-opened via the UI, which automatically purges stale workspaces after 14 days.
The workspace is the environment, based by default on the gitpod/workspace-full Docker image. The following workspace images are available at the time of this post:
gitpod/workspace-c ✅
gitpod/workspace-clojure ✅
gitpod/workspace-go ✅
gitpod/workspace-java-11 ✅
gitpod/workspace-java-17 ✅
gitpod/workspace-node ✅
gitpod/workspace-node-lts ✅
gitpod/workspace-python ✅
gitpod/workspace-ruby-2 ✅
gitpod/workspace-ruby-3 ✅
gitpod/workspace-ruby-3.0 ✅
gitpod/workspace-ruby-3.1 ✅
gitpod/workspace-rust ✅
gitpod/workspace-elixir ✅
So all in all, as long as the open-source community is active, you're getting a pretty fresh, well-provisioned, "full" environment, and it's available on demand via a web UI that can open any repository through a URL of the form gitpod.io/#{your github url}.
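If none of the stock images fit, the workspace image can be customized. Here is a sketch of a custom .gitpod.Dockerfile together with the .gitpod.yml reference (the extra package is just an example):

# .gitpod.yml (excerpt)
# image:
#   file: .gitpod.Dockerfile

# .gitpod.Dockerfile
FROM gitpod/workspace-full

# Add whatever extra tooling the project needs; the gitpod user has passwordless sudo.
RUN sudo apt-get update \
 && sudo apt-get install -y postgresql-client \
 && sudo rm -rf /var/lib/apt/lists/*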
For free, a workspace runs for 1 hour, with a total of 50 hours per month available. Increased time and team configuration are available; for example, a two-pizza team on a team plan is around $200-$300 per month, which, if you put pen to paper, has a decent ROI considering the time savings and the boost to DevX.

Related

How to make changes to a Vue project on hosting

I have a Vue project which is published on DigitalOcean. The main problem is that when I make some changes via FileZilla, they do not affect the website. How can I solve this issue?
This is not an issue per se; this is just the way modern web development works. Vue.js (and also Nuxt) uses a bundler (Webpack and Vite are the most common), hence to go to production the app needs to be bundled each time you push a change.
If you upload something via FTP or SSH and edit some source code, a bundle step will be required in order to get any changes on the actual webapp.
Backend languages may not need that: for example, you could SSH into a server and change some .php file, and if you F5 the page it will be updated in real time. But this is not how frontend JS code works; it needs to be bundled and optimised.
Another thing: sending code via SSH/FTP is not really a good workflow, because it is not easily trackable, not version-controlled, will not surface any build errors, etc.
The best approach is to have a git repo + some build step included in some CI.
A common platform for this is Netlify: you connect a GitHub repo, you tell it which command to use to build the project, and each time you push some code it can run checks/tests/optimizations/etc. (for example via GitHub Actions) before the result is released automatically to production (i.e. your web app is updated).
This workflow has a lot of benefits, as one may tell, and it is also the de facto standard approach for modern frontend web development.
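As a sketch of that kind of CI build step, here is a minimal GitHub Actions workflow (the Node version and npm scripts are assumptions; Netlify would run an equivalent build for you automatically):

# .github/workflows/build.yml - bundle the Vue app on every push
name: build
on: [push]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 18
      - run: npm ci
      - run: npm run build   # produces the dist/ bundle that actually gets deployed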

Jenkins pipeline dependency between developer and QA job

Our developers use Bitbucket as the code repository.
Dev Repository is: AbcdProject
We, the QA team, write Selenium automation scripts. What is the right approach?
Should the automated scripts go under a tests folder in the same repo as the dev code, like:
AbcdProject/
-src
-tests
--unit
--functional
---AbcdAutomationScripts
----src
----pom.xml
----testng.xml
Or should we have our own repo, with our scripts under that repo, like:
Dev Repo:
AbcdProject/
-src
-tests
--unit
QA Repo:
AbcdAutomationScripts/
-src
-pom.xml
-testng.xml
I would prefer having a separate repo for QA but I would like to know the industry standard/best practice.
Assuming we go with a separate repo for QA:
Right now, when a developer pushes code to Bitbucket, his Jenkinsfile triggers the build and deploys the code to the dev server. But the question is: how do I set up the dependency in the Jenkins pipeline such that once the developer's build has completed and the code is deployed to the dev server, my Selenium scripts in the other repo get executed?
Standard is to have the tests in the same project. Consequences:
Developers see the expectations of QA. They know what's in focus and what isn't.
They see if stuff is in focus that shouldn't be, or vice versa, so this can help improve the quality of the test suite.
The downside is that devs get the option to specifically program for passing the tests instead of for improving quality. However, if this is a thing, developers aiming for the wrong goals is a symptom of deeper problems, such as developers generally being incentivized towards the wrong goals.
Developers see what DOM access paths are being used in QA. This helps them understand what paths are expected to be stable. You get a chance to fix any miscommunications about access path stability before you run into a nightmare of "every small dev change requires adapting all Selenium scripts".
Liability: dev and QA need to coordinate their directory structure. This usually isn't a big problem if you have a useful SCM (such as Git; even SVN should work), but the conventions need to be in place and understood by everybody.
QA will notice if dev starts a new development branch.
Currently I have merged the automation testing code base with the development code base, merged both POMs, and I am able to run my automation cases effectively in the CI/CD pipeline by adding a stage in the Jenkinsfile that runs the tests and does mvn clean install.
Having said that, I am still looking for a better solution where I don't need to merge both POMs and can keep the dev and testing code loosely coupled.
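For the pipeline-dependency part of the question: if the Selenium suite lives in its own repo with its own Jenkins job, one common pattern is to have the developer pipeline trigger the QA job after a successful deploy. A rough sketch in a declarative Jenkinsfile (the job name abcd-qa-selenium and the deploy script are hypothetical):

// Jenkinsfile in the dev repo (AbcdProject)
pipeline {
    agent any
    stages {
        stage('Build') {
            steps { sh 'mvn clean install' }
        }
        stage('Deploy to dev') {
            steps { sh './deploy.sh dev' }    // hypothetical deploy step
        }
    }
    post {
        success {
            // Kick off the Selenium job defined in the AbcdAutomationScripts repo
            // without blocking this pipeline.
            build job: 'abcd-qa-selenium', wait: false
        }
    }
}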

Wordpress development process [closed]

I want to design a WordPress development process like the following:
First I want to create a Bitbucket repository for my WordPress site. From this repository all our software developers should be able to clone the site to their local machines for development. For development, all developers should have a local database to test changes.
After a developer finishes a task, he should be able to push his changes to the repo. When a sprint is done, I want to send all changes from the repo to the test environment with a Jenkins pipeline/job. In this environment a tester should be able to test all new functions with a database cloned from the prod system (including the dev changes).
When all tests have passed, I want to be able to apply the database changes to the prod system (with a SQL script) and send all code changes to the prod system with another Jenkins pipeline/job.
Do you think this can work? What about plugin updates? Can I set up environment variables for each system so that plugin updates only need to be done on the dev machine?
I'm not sure if this can work, because a plugin or plugin update creates a lot of new database changes, and I think I need a tool that can display all those changes, like Sourcetree does for Git.
Is there someone who has expert knowledge of WordPress and this kind of development process and can share their experience with me?
Or do you think this process won't work with WordPress? If so, that would be really bad, because I need a process like this.
Thanks a lot!
I don't really know Wordpress, but the process you describe is definitely possible (I've implemented similar solutions on Drupal and Adobe Experience Manager, for instance).
However...
It's hard.
In a CMS project, a change/new feature can include:
a code change (PHP, CSS, JavaScript)
a database structure change (e.g. a new table)
a database content change (e.g. a copy fix, or default/test content)
a configuration change
Working out which version should get what is really hard. You want a developer to commit a change, and have that change replicated on QA with test content - but once QA sign it off, you probably don't want to promote that test content to production. And config changes should probably flow between systems but with different values for each environment.
For managing the database changes, I've found a plug-in that monitors database changes; no idea how scriptable that is.
See WP Activity Log.
What I've done in the past in similar situations is write a script that creates the database definition for each change - so a developer can run that script, and commit it as part of their code change. It requires a lot of discipline, though - you can only modify the database structure by using the scripts.
The correct answer is: yes, you can do this. I know WordPress, Bitbucket, Git, SVN, Linux, and Ubuntu exceptionally well. I have built a system very similar to what you describe and use it daily.
The problem stated is that the CMS can get tricky. That is true, but you need to use the correct tools for the correct upgrades. WordPress ALREADY has versioning and revisions built into it; the database doesn't need to be involved at all.
First off, the database doesn't need to be updated unless you are updating plugins; for pure development, no DB pushes are necessary. So have your developers check files in and out of Bitbucket. When the lead developer approves the changes, have him merge/push them to the MASTER BRANCH in your repo. Bitbucket has a feature for this (webhooks, built on Git hooks): you can trigger a PHP file on your server every time there is a push to the production branch. What the PHP file does is simply run the Linux command git pull, which updates all the code on the server with what is on your PRODUCTION BRANCH (git pull will also remove files if files were removed, etc.). On the server you will have a checked-out copy of the Git repo, and on Linux the credentials will be stored after the first clone. Simply have your PHP file trigger a Bash script that does a git pull. Done.
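A minimal sketch of that webhook target (the paths and branch name are assumptions, and in practice you should also verify the webhook payload/secret before pulling):

<?php
// deploy.php - hit by the Bitbucket webhook on every push to the production branch.
// Pull the latest code into the checked-out copy on the server and log the output.
$output = shell_exec('cd /var/www/mysite && git pull origin master 2>&1');
file_put_contents('/var/log/deploy.log', date('c') . "\n" . $output . "\n", FILE_APPEND);
echo 'OK';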
No matter how many developers you have there will always need to be a set of eyes that reviews the code changes and merges those into production. I.e. that is where the Lead Developer comes into play.
FYI: the only directories in your WordPress instance that need to be in Bitbucket are the THEME directory and the PLUGINS directory. You DO NOT need to sync the entire WP install, which is pretty large.
In the case that you are building custom plugins, again, it is just code that is stored in the plugins directory. If your custom plugins are built correctly and require the use of databases, then when they are activated they will immediately create the WP tables that are needed. Likewise, a correctly built plugin will also drop its own custom tables when uninstalled.
You will need to sync the two directories below.
Plugins folder: wp-content/plugins/
Themes folder: wp-content/themes/SELECTED_THEME
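One way to track only those two directories is a whitelist-style .gitignore at the WordPress root (a sketch; adjust to your theme name and add exclusions for uploads/config as needed):

# .gitignore at the root of the WP install
# Ignore everything...
/*
# ...except wp-content, and inside it only plugins and themes.
!/wp-content/
/wp-content/*
!/wp-content/plugins/
!/wp-content/themes/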
Any additional questions just ask and I am here.
From my experience it is always better to allow each developer to have their own branch, and to set up a dedicated master branch for quality control on the dev server. You can check out some documentation on how to set this up: https://plixxer.com/docs/server-management/website-quality-control/
Basically you want to have a live server and a dev server. The live server should only ever pull from the repo, while the coders can pull from and push to the repo. My team treats the dev server as a quality-checking station: if the current dev code is not up to our standards, the entire dev server is rolled back to what is on the master branch; when code on master meets our standards, we pull from the master branch onto the live server. Each developer should have their own branch for testing on their local server. Let me know if you need some help setting up a local environment with Git.
You will want to make a distinction between "build" and "release". The workflow as I understand it is that developers call their local workstations "dev" and pull-request their work to the develop branch (you may have already read through Gitflow). Then, using your choice of CI automation, you get the latest source into a build area and do just that: build it. Check out Ansible. If you have Bitbucket, maybe you also want to organize your sprint with the likes of Jira? Then you have pretty seamless integration of your sprint objectives with the actual branches containing the related work/source. Ansible can help you automate builds and releases to the point where you are doing daily builds and running the unit tests across your builds in the various integration environments.
During builds, you would have different configuration files being factored in depending on the target environment. This is how to handle environment configuration: it is part of the build process, and ideally all configuration is possible through the build. For example, a connection string might differ across environments if you have different databases to isolate the migration of schema changes. In an Angular application, you would execute ng build --prod to build for production, and this would bring in a production configuration file during the build to change the connection string, for example.
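To illustrate that Angular mechanism: the CLI keeps one environment file per target and swaps them at build time via the fileReplacements setting in angular.json (the apiUrl values below are made up):

// src/environments/environment.ts - used by ng serve / ng build (dev)
export const environment = {
  production: false,
  apiUrl: 'http://localhost:3000/api',   // hypothetical dev endpoint
};

// src/environments/environment.prod.ts - swapped in by ng build --prod
export const environment = {
  production: true,
  apiUrl: 'https://api.example.com',     // hypothetical production endpoint
};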
More about environment-specific configuration: you can also include post-deployment scripts that are deployed and executed after the files are uploaded, so that they configure the environment as required.
Ask your questions below, and I will do my best to build this out into a comprehensive guide.

What are the use cases of Docker on real projects?

I have read what Docker is, but I'm having a hard time finding out what the real scenarios for using Docker are.
It would be great to see your usages here.
I'm replicating the production environment with it: on each commit to the project, after Jenkins builds the binaries I deploy them there, launch the required daemons, and run the integration tests, all in a very short time (only a few seconds more than the integration tests themselves take). Having no need to boot, and little overhead on memory/CPU/disk, is great for that kind of thing.
I could extend that use to development (just adding a volume pointing at the Git repository where the code resides, at least for scripting languages) to have the production environment running the code I'm actually editing, at a fraction of what VirtualBox would require.
I also needed to test how to integrate some third-party code, which modified the DB, into a production system. I cloned the DB in one container, installed the production system in another, launched both, and iterated on the integration until I got it right, going back to zero to try again in seconds; this was faster, cheaper, and more scriptable than doing it with VMs plus snapshots.
I also run several desktop browser instances in containers, with their own plugins, cookies, data storage, and so on kept separate. The Docker repository example for desktop integration is a good start for it, but I'm planning to test subuser to extend this kind of usage.
I've used Docker to implement a virtualized build server which any user could ask to run a build off their personal git branch in our canonical environment.
Each SSH connection made to the server was connected to a new container, ensuring that all builds were isolated from each other (a major pain point in the past), ensuring that the container's state couldn't be corrupted (since changes were all isolated to that single instance), and ensuring that even developers on platforms such as Windows where Docker (and other tools in our canonical build environment) couldn't be run locally would be able to run builds.
We use it for the following uses:
We have a Jenkins container which we can use to bring up our Jenkins server. We mount the workspace using volumes, so we can migrate the server easily just by copying the files and launching the container somewhere else (see the sketch after this list).
We use a Jetty container to easily deploy our war files in our production and development environment.
We use a whole host of other monitoring tools such as Uptime which we have containers for so that we can bring them up and down on various hosts with a single command.
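A minimal sketch of that Jenkins-container setup (the image tag and volume name are illustrative; the official image is jenkins/jenkins):

# Run Jenkins in a container, keeping its home directory in a named volume
# so the server can be migrated by moving the volume and re-running this command.
docker run -d --name jenkins \
  -p 8080:8080 -p 50000:50000 \
  -v jenkins_home:/var/jenkins_home \
  jenkins/jenkins:lts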
I use docker to build and test our software on several different Linux distributions (RHEL 4/5/6/7, Ubuntu 12.04, 14.04).
Docker makes it easy and fast to create minimalistic and consistent build environments.
Docker gives you the benefits that other virtualization solutions give you, at a fraction of the resources needed.
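A minimal sketch of that multi-distribution pattern (the build script and image tags are assumptions; RHEL images usually need a subscription, so CentOS often stands in):

# Build the same checkout inside several distribution images;
# the source tree is mounted so artifacts land back on the host.
for image in ubuntu:12.04 ubuntu:14.04 centos:6 centos:7; do
  docker run --rm -v "$PWD":/src -w /src "$image" ./build.sh
done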

With Continuous Integration, why are tests run after committing instead of before?

While I only have a GitHub repository that I'm pushing to (alone), I often forget to run tests, or forget to commit all relevant files, or rely on objects residing on my local machine. These result in build breaks, but they are only detected by Travis CI after the erroneous commit. I know TeamCity has a pre-commit testing facility (which relies on the IDE in use), but my question is about the current use of continuous integration in general, as opposed to any one implementation. My question is:
Why aren't changes tested on a clean build machine - such as those which Travis CI uses for post-commit testing - before those changes are committed?
Such a process would mean that there would never be build breaks, meaning that a fresh environment could pull any commit from the repository and be sure of its success; as such, I don't understand why CI isn't implemented using pre-commit testing.
I preface my answer with the detail that I am running on GitHub and Jenkins.
Why should a developer have to run all tests locally before committing? Especially in the Git paradigm, that is not a requirement. What if, for instance, it takes 15-30 minutes to run all of the tests? Do you really want your developers, or you personally, sitting around waiting for the tests to run locally before you commit and push your changes?
Our process usually goes like this:
1. Make changes in a local branch.
2. Run any new tests that you have created.
3. Commit changes to the local branch.
4. Push the local changes remotely to GitHub and create a pull request.
5. Have the build process pick up the changes and run the unit tests (see the sketch after this list).
6. If tests fail, fix them in the local branch and push the fixes.
7. Get the changes code-reviewed in the pull request.
8. After approval and all checks have passed, push to master.
9. Rerun all unit tests.
10. Push the artifact to the repository.
11. Push the changes to an environment (i.e. DEV, QA) and run any integration/functional tests that rely on a full environment.
12. If you have a cloud, then you can push your changes to a new node and, only after all environment tests pass, reroute the VIP to the new node(s).
13. Repeat step 11 until you have pushed through all pre-prod environments.
14. If you are practicing continuous deployment, then push your changes all the way to PROD if all testing, checks, etc. pass.
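A minimal sketch of step 5, assuming a Jenkins multibranch pipeline and a Maven project (the tool and report path are assumptions; the result is reported back on the pull request):

// Jenkinsfile - runs for every branch/PR the multibranch job discovers
pipeline {
    agent any
    stages {
        stage('Build') {
            steps { sh 'mvn -B clean compile' }
        }
        stage('Unit tests') {
            steps { sh 'mvn -B test' }
        }
    }
    post {
        always {
            // Publish JUnit results so failures show up on the build/PR page.
            junit 'target/surefire-reports/*.xml'
        }
    }
}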
My point is that it is not a good use of a developer's time to run all tests locally, impeding their progress, when you can offload that work onto a continuous integration server and be notified of issues that you need to fix later. Also, some tests simply can't be run until you commit and deploy the artifact to an environment. If an environment is broken because you don't have a cloud and maybe you only have one server, then fix it locally and push the changes quickly to stabilize the environment.
You can run tests locally if you have to, but this should not be the norm.
As to the multiple developer issue, open source projects have been dealing with that for a long time now. They use forks in GitHub to allow contributors the chance to suggest new fixes and functionality, but this is not really that different from a developer on the team creating a local branch, pushing it remotely, and getting team buy-in via code review before pushing. If someone pushes changes that break your changes then you try to fix them yourself first and then ask for their help. You should be following the principle of "merging early and often" as well as merging in updates from master to your branch periodically.
The assumption that if your code compiles and the tests pass locally, no build could be broken, is wrong. That is only true if you are the only developer working on that code.
But let's say I change the interface you are using: my code will compile and pass tests as long as I don't pull your updated code that uses my interface, and your code will compile and pass tests as long as you don't pull my updated interface. And when we both check in our code, the build machine explodes...
So CI is a process which basically says: put your changes in as soon as possible and test them on the CI server (the code should, of course, be compiled and tested locally first). If all developers follow those rules, the build will still break sometimes, but we will know about it sooner rather than later.
The CI server is not the same as the version control system. The CI server, too, checks the code out of the repository. And therefore the code has already been committed when it gets tested on the CI server.
More extensive tests may be run periodically, rather than at time of checking in, on whatever is the current version of the code at the time of testing. Think of multi-platform tests or load tests.
Generally, of course, you'll unit test your code on your development machine before checking it in.