What are the "tag" and "deployment" events in drone - drone.io

Drone supports the "push", "pull_request", "tag", and "deployment" events:
https://github.com/drone/drone-yaml-v1/blob/1c89a78f3ae4c8c70114203034a81fec59474bc2/main.go#L30
I have two questions:
When are the tag and deployment events triggered?
Who triggers them?

Tag events are triggered when you tag a commit (for instance, via the Releases page of your GitHub repo).
Deployment events can be triggered via the GitHub Deployments API.
Both are ways for you to control when exactly a new version of your code is (built and) deployed.
For example, a common pattern is to automatically deploy every change on your master branch to your dev environment, and to deploy to production only from a tagged version (using the aforementioned GitHub Releases or the Deployments API).
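For illustration, here is a rough sketch of how that pattern might look in a 0.8-style .drone.yml (the images and the deploy script are made up, not taken from the question):

```yaml
pipeline:
  build:
    image: golang:1.10
    commands:
      - go test ./...            # runs for every event by default

  deploy-dev:
    image: alpine:3.8
    commands:
      - ./scripts/deploy.sh dev  # hypothetical deploy script
    when:
      branch: master
      event: push                # every push to master goes to the dev environment

  deploy-prod:
    image: alpine:3.8
    commands:
      - ./scripts/deploy.sh prod
    when:
      event: [tag, deployment]   # only a tag or a GitHub Deployments API call reaches prod
```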

Related

Managing feature branch versions with npm for component packages

We have a React App which uses some components written by us and published to our internal npm repository. Our code is maintained in Bitbucket Data Center, the build is done with Bamboo and the npm repository is hosted in JFrog Artifactory. We work with feature branches and pull requests for developing new features.
It happens often that a new feature in the app, requires a change in the component. In this case, each repository (the App and the component) will have its own feature branch and pull request. Many times the component interface changes, so that the App needs the pull request version of the component and not the mainline one to build and to be tested.
The build is done exclusively by the build server, so that the bundled javascript files are not committed to git.
Let's say the component has version 1.0.0. A new feature in the App needs a change in the component. In this case, the component version will be incremented to 1.0.1. We don't want to publish it to Artifactory until version 1.0.1 is tested, but at the same time, the build of the new App version needs the changes from version 1.0.1.
Our current solution is to change the package version of the component during the build of feature branches to something like 0.<Ticket #>.<Build #>. This 0.x.x version will be published to Artifactory so that the App feature branch can use it to compile.
We use 0.x.x so that the version is never bigger than the current released version. Once the component is merged to the main branch, it will compile with the right version (1.0.1) and will be published to Artifactory again.
I find this solution cumbersome: it requires some funny build scripts, making sure that the branch name always follows some convention, and teaching developers about it.
I wonder if there is a better way of managing pull requests and feature branches with npm, without having to manipulate package.json at build time depending on whether it is a feature branch or the main branch.
It sounds like you are using Artifactory as a secondary versioning/staging area for the npm package; why not just use npm for that?
I am not in DevOps, but I have worked on a few packages. Testing a package that has not been released does not really sound like testing the package. What about publishing under a beta dist-tag (npm publish --tag beta), pulling that into your app (npm i package@beta), and then testing your application in a staging environment?
As you probably know, when you publish under a tag, that tag has to be specified explicitly for the version to be installed, so you can use it to deter users from accidentally picking up that version of the package. And I believe you can delete versions later if you are dead set on not having them public.
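A rough sketch of that flow as a CI step (generic YAML, not Bamboo Specs syntax; the package name is made up):

```yaml
# Feature-branch job for the component package (illustrative only)
publish-beta:
  script:
    # Bump to a prerelease of the next version, e.g. 1.0.1-beta.0,
    # so the "latest" dist-tag is never touched.
    - npm version prerelease --preid=beta --no-git-tag-version
    - npm publish --tag beta

# The App's feature branch then depends on the prerelease explicitly:
#   npm install our-component@beta          # follows the beta dist-tag
#   npm install our-component@1.0.1-beta.0  # or pin the exact prerelease
```

Because publishing with --tag beta never moves the latest dist-tag, consumers installing the package normally keep getting 1.0.0 until the real 1.0.1 is published after the merge.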
Here is a Medium article which may be helpful.

What is Gitpod: what does it actually do?

The gitpod GitHub page says
Gitpod is an open-source Kubernetes application providing prebuilt,
collaborative development environments in your browser - powered by VS
Code.
However, I cannot comprehend what it actually does. Can anyone please explain?
Gitpod co-founder here.
Gitpod = server-side-dev-envs + dev-env-as-code + prebuilds + IDE + collaboration.
From a Git repository on GitHub, GitLab or Bitbucket, Gitpod can spin up a server-side dev environment for you in seconds. That's a Docker container that you can fully customize and that includes your source code, a git-enabled terminal, VS Code extensions, your IDE (Theia IDE), etc. The dev environment is powerful enough to run your app and even side services like databases.
Step (1) is easily repeatable and reproducible because it's automated, version-controlled, and shared across the team. We call this dev-environment-as-code. Think of infrastructure-as-code for your dev environment.
After (1), you're immediately ready to code, because your workspace is already compiled and all dependencies of your code have been downloaded. Gitpod does that by running your build tools on git push (like CI/CD would do), "prebuilding" and storing your workspace until you need it. This really shines when reviewing PRs in Gitpod.
Collaboration becomes much easier once your dev environments live server-side and your IDE runs in the browser. Sending a snapshot of your dev environment to a colleague is as easy as sending a URL. The same goes for live shared coding in the same IDE and dev-environments.
At the end of the day, you start treating your dev environments as something ephemeral: you start them, you code, you push your code, and you forget your dev environment. For your next task, you'll use a fresh dev environment.
The peace of mind that you get from not messing with, massaging, and maintaining dev environments on your local machine is incredibly liberating.
Gitpod can be used on gitpod.io, or self-hosted on Kubernetes, GCP, or AWS.
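To make the dev-environment-as-code idea concrete, here is a minimal sketch of a .gitpod.yml (the build commands and the extension ID are placeholders for whatever your project actually needs):

```yaml
# .gitpod.yml – committed to the repository and shared by the whole team
image: gitpod/workspace-full      # or point to a custom .gitpod.Dockerfile

tasks:
  - init: npm install && npm run build   # executed during the prebuild on git push
    command: npm run dev                  # executed every time a workspace starts

vscode:
  extensions:
    - dbaeumer.vscode-eslint             # example extension, adjust to your stack
```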
To illustrate Gitpod, note that GitLab 13.5 (October 2020) adds a new feature:
Launch Gitpod Workspaces directly from GitLab
Engineers have complicated development environments that can take time to set up and make testing changes or exploring new projects challenging. Often getting started with a project involves following documentation, installing dependencies, and hoping there are no conflicts with other services running. This process can be time consuming, error prone, and may not replicate the configuration accurately to test and contribute to a project.
With Gitpod integrated into GitLab, you can easily launch your Gitpod Workspace directly from the GitLab interface. When editing a project on GitLab, a new dropdown option exists to open that project in Gitpod:
Gitpod allows you to define your project’s configuration in code so you can launch a prebuilt development environment with one click.
These environments are configured through a .gitpod.yml file inside of the project and include options for Docker configuration, start tasks, editor extensions and more. This flexible configuration, which is part of the project’s code, allows developers to get started working on a project quickly.
Try this today with the GitLab project which is already setup to work with Gitpod.
Thanks to Cornelius Ludmann from Gitpod for contributing this!
https://about.gitlab.com/images/13_5/phikai-launch-gitpod-editor.gif -- Launch Gitpod from the GitLab UI
See Documentation and Issue.
And with GitLab 14.2 (August 2021)
Launch a preconfigured Gitpod workspace from a merge request
The Gitpod integration, introduced in GitLab 13.5, helps you manage your complicated development environments.
Once you define your project’s configuration in code, you can launch a prebuilt, cloud-based development environment with a single click.
This convenient workflow has made it faster than ever to generate new changes, but launching a Gitpod environment to review an existing merge request meant building an environment against the main branch before switching to the target branch and building again.
Now, in GitLab 14.2, you can launch Gitpod directly from the merge request page, preconfigured to use the target branch, to speed up your reviews and reduce the need for context switching.
Enable the Gitpod integration, and your merge requests display a grouped Open in button, so you can open the merge request in either the Web IDE or Gitpod.
Thanks to Cornelius Ludmann from Gitpod for this contribution!
https://about.gitlab.com/images/14_2/create-gitpod-in-mr-view.png -- Launch a preconfigured Gitpod workspace from a merge request
See Documentation and Issue.
Gitpod is essentially an ephemeral/ad hoc environment that instantiates a Docker container based on the project's .gitpod.yml (optionally referencing a custom .gitpod.Dockerfile). At the core there is the VS Code integration, and the SSH Remote extension is the key piece that ties together a lot of the "what does Gitpod do" question. In fact, the UI is another key piece: workspaces can be cached via prebuilds (which are available "almost instantly") or built manually as one-offs (which take much longer to run, because it's a full build), and can be re-instantiated via the UI, which auto-purges stale workspaces after 14 days.
The workspace is the environment. By default it is built from the gitpod/workspace-full Docker image, which contains the following at the time of this post:
gitpod/workspace-c ✅
gitpod/workspace-clojure ✅
gitpod/workspace-go ✅
gitpod/workspace-java-11 ✅
gitpod/workspace-java-17 ✅
gitpod/workspace-node ✅
gitpod/workspace-node-lts ✅
gitpod/workspace-python ✅
gitpod/workspace-ruby-2 ✅
gitpod/workspace-ruby-3 ✅
gitpod/workspace-ruby-3.0 ✅
gitpod/workspace-ruby-3.1 ✅
gitpod/workspace-rust ✅
gitpod/workspace-elixir ✅
So all in all, as long as the open-source community is active, you're getting a pretty fresh, well-provisioned, "full" environment, available on demand via a web UI that can be opened with a URL of the form gitpod.io/#{your github url}.
For free, a workspace runs for 1 hour, with a total of 50 hours per month available. Increased time and team configuration are available, so, for example, a two-pizza team on a team plan is around $200-$300 per month, which, if you put pen and paper to it, has a decent ROI considering the time savings and the boost to developer experience (DevX).

How to create a project from a template within YouTrack with external Hub integration?

I am experimenting with the latest Hub and YouTrack on a Linux machine.
I installed the latest versions (2019.2 and 2019.1 respectively) and enabled the Hub integration in YouTrack (not using HTTPS for the moment; plain old HTTP is used).
What happened is this:
1. When I try to create a project, I am always switched to Hub (is this correct? I did not find anything in the JetBrains docs).
2. If I create a project from the Hub interface and then click on the left panel to add a "YouTrack service", I am only offered the option to create "default, scrum and kanban" projects, i.e. the standard ones provided by JetBrains. However, if I had already created a project and saved it as a template, that project is not offered to me as a base for the new one.
3. If I use YouTrack with the internal Hub, everything works as expected and the template projects are available as a starting point for new projects.
This happens as well with older versions (2018.4) of Hub and YouTrack.
As far as I recall, this is a bug in Hub that is yet to be fixed.
1. It seems correct; with an external Hub installed, projects are always created from there.
2-3. There's a known issue with that: https://youtrack.jetbrains.com/issue/JPS-9928

GitLab CI Pipeline not triggered by push from runner

We use GitLab CE. We have two repos / projects: one to store the source code, and another to build and store the package that we'll deploy. We use a runner to push the changes from the former into the latter. This used to trigger the pipeline of the latter repo. Recently, a change was pushed to the latter repo manually, and since then the push from the runner no longer triggers the pipeline in the target repo (manual pushes still trigger the pipeline; also, the push in the runner runs flawlessly and the commit appears in the target repo). I was not the one who created the setup, so I don't know how to make the push from the runner trigger the pipeline (or, rather, why it doesn't do it automatically).
As far as I understand, the push should trigger the pipeline wherever it comes from. So why doesn't it do so?
So apparently the issue appeared because the user account that owned the deploy key used in the target repo had been disabled. Creating a new key with an active user account solved the problem, and the pipeline is triggered properly now.
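For reference, the kind of push job involved looks roughly like this. This is only a sketch (the remote URL, variable name and token type are assumptions, not taken from the actual setup); the important part is that the credential used for the push must belong to an active account:

```yaml
# .gitlab-ci.yml in the source repo – publishes the built package to the target repo
publish-package:
  stage: deploy
  script:
    - git clone "https://oauth2:${PACKAGE_REPO_TOKEN}@gitlab.example.com/group/package-repo.git"
    - cp -r build/* package-repo/
    - cd package-repo
    - git config user.name "CI Bot"              # hypothetical committer identity
    - git config user.email "ci-bot@example.com"
    - git add .
    - git commit -m "Update package from ${CI_COMMIT_SHORT_SHA}"
    - git push origin HEAD:main                  # this push should trigger the target repo's pipeline
```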

Query regarding docker, test environments and dev workflow

I am a QA automation engineer and I am investigating docker as a potential way to run our tests.
Traditionally we have followed the git flow method, where essentially you have a dev and a master branch. Devs are constantly merging their new changes into the dev branch. When we wish to release, we have a code cut-off, where everything currently on the dev branch is deemed to be part of the next release. A script is then run to create the release candidate, and this is deployed to staging. Any fixes that need to be done are made on the release branch, and once it is ready to go to prod, the new code is merged to master and deployed. Master is merged back into all branches so that everything is up to date. (Described in more detail here: http://nvie.com/posts/a-successful-git-branching-model/)
So my question is: with Docker, do you need this workflow? I'm thinking of maybe having a workflow like the one described below:
Dev starts working on a new feature.
Dev pulls master, creates a feature branch, does the dev work; unit tests pass and the dev is happy for the work to go to QA.
Dev runs a script to create the release candidate (which would involve pulling master again in case new code has been merged to master by another dev).
Docker then spins up an environment consisting of multiple containers (front-end app, DB instance, etc.).
Tests (unit, API, Selenium integration, etc.) are then run against this release candidate and, if they pass, it is deployed to production.
So do I need a staging env in the traditional sense where it is constantly available?
I think you're conflating two things: a continuous integration environment and a staging environment. Docker does make it easy to bring up a fresh instance of your entire stack for continuous integration (see drone for a good example), but generally you still need a staging environment that is always available to test against before deploying to prod. This staging environment should be running the same docker images that eventually get deployed to prod.
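As a sketch of what "bringing up the stack for one test run" can look like (the image names, registry and test entrypoint are made up), a throwaway Compose file like the one below can be started per release candidate and torn down afterwards, while a long-lived staging environment keeps running the same images:

```yaml
# docker-compose.test.yml – ephemeral stack for a single test run
version: "3.8"
services:
  db:
    image: postgres:13
    environment:
      POSTGRES_PASSWORD: test                 # throwaway, test-only credentials

  app:
    image: registry.example.com/myapp:${RELEASE_TAG}         # the release candidate image
    depends_on:
      - db
    environment:
      DATABASE_URL: postgres://postgres:test@db:5432/postgres

  tests:
    image: registry.example.com/myapp-tests:${RELEASE_TAG}   # unit / API / Selenium suites
    depends_on:
      - app
    command: ["./run-tests.sh"]               # hypothetical test-runner entrypoint
```

Running it with something like docker-compose -f docker-compose.test.yml up --exit-code-from tests makes the whole run succeed or fail with the test container's exit code; the staging environment, by contrast, stays up between releases and runs the exact images that later go to prod.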