Create releases from within a GitLab runner/pipeline - api

With the release of GitLab 11.7 in January 2019, we got the new key feature "Publish releases for your projects". I want precisely what the screenshot on that page shows, and I want to be able to download compiled binaries using the Releases API.
I can do it manually. Of course, instructions for the manual approach can be found here on Stack Overflow. The problem I need help with is doing it as part of a CI/CD pipeline, which is not covered by the answers one can easily find.
The release notes contain a link to the documentation, which states:
we recommend doing this as one of the last steps in your CI/CD release pipeline.
From this I gather it's possible. However, the only approach I can imagine is using the GitLab API, just as I do when I create releases manually. When one wants to access the GitLab API, one has essentially three options for authentication, according to the fine manual: OAuth2 tokens, personal access tokens and session cookies. Consequently, I would need a method for having one of these available in my CI/CD pipeline, with sufficient privileges. Solutions for this problem are an ongoing discussion with lots of contributions, but virtually no tangible progress in recent years.
So, how does one create releases as one of the last steps in one's CI/CD release pipeline?
Storing my personal access key with API access in a CI/CD variable or even a file in the repo is not an option for obvious reasons.
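For concreteness, the request such a pipeline step would have to send is a POST to the Releases API; below is a minimal sketch in Python (the instance URL, project ID, tag and asset link are placeholders, and the token variable stands for whichever credential the job can legitimately use, which is exactly the open question here):
# Hypothetical sketch of the Releases API call a pipeline job would need to make.
# GITLAB_URL, PROJECT_ID, tag and asset URL are placeholders.
import os
import requests

GITLAB_URL = "https://gitlab.example.com"
PROJECT_ID = 1234  # numeric project ID or URL-encoded project path

response = requests.post(
    f"{GITLAB_URL}/api/v4/projects/{PROJECT_ID}/releases",
    # Whatever token the job ends up having access to goes here; which token
    # types this endpoint accepts depends on the GitLab version.
    headers={"PRIVATE-TOKEN": os.environ["GITLAB_API_TOKEN"]},
    json={
        "name": "My Release",
        "tag_name": "v1.0.0",
        "description": "Release created from the pipeline",
        "assets": {
            "links": [
                {"name": "compiled binary", "url": f"{GITLAB_URL}/path/to/binary"}
            ]
        },
    },
)
response.raise_for_status()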

They've put up a blog post explaining how to do this:
https://about.gitlab.com/blog/2020/05/07/how-gitlab-automates-releases/
They've created a tool (gitlab-releaser) to help with this task. Basically, you create a new job that uses a Docker image providing this tool, and then you call the tool with the proper parameters.
release_upload:
  image: registry.gitlab.com/gitlab-org/release-cli:v0.1.0
  script:
    - gitlab-releaser create --name="My Release" --description="My Release description"
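For reference, more recent GitLab versions usually express the same job with the built-in release: keyword, which drives release-cli behind the scenes; a minimal sketch (job name and description text are placeholders):
release_job:
  stage: release
  image: registry.gitlab.com/gitlab-org/release-cli:latest
  rules:
    - if: $CI_COMMIT_TAG            # only run in tag pipelines
  script:
    - echo "Creating release for $CI_COMMIT_TAG"
  release:
    tag_name: $CI_COMMIT_TAG
    description: 'Release $CI_COMMIT_TAG created from the pipeline'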

Have you found a way of persisting the Odoo core modules in v14 other than a volume? And is it possible to deploy Odoo on Google Cloud Run?

I want to deploy Odoo as cheaply as possible. I tried Cloud SQL (15-30 €/month) + Cloud Run, but after a few minutes the Odoo interface shows me a white screen, with many logs in the console similar to this:
GET 404 1.04 KB24 ms Chrome 91 https://bf-dev3-u7raxlu3nq-ew.a.run.app/web/content/290-f328144/1/website.assets_editor.css
My interpretation is that, since Cloud Run is stateless and the static web assets seem to be stored in the core module, this information is lost once the container is killed. As I've spent a month looking for a solution, before trying any other way of deploying I ask the community: have you found a way of persisting the Odoo core modules in v14 other than a volume? And is it therefore possible to deploy Odoo on Cloud Run?
Here are all the ideas I tried:
First, I thought these CSS files were stored in the werkzeug session, so I tried two addons that store this session somewhere other than the filestore: camptocamp odoo-cloud-platform-14.0/session-redis and misc-addons-13.0/base_session_store_psql. The problem persisted.
Then I read that the static CSS and JS files generated in the web editor are stored in Odoo as attachments, and that the addon misc-addons-13.0/ir_attachment_s3 can store these files in S3. But although I configured this addon, the problem persisted.
Next, I found this link describing the need to regenerate the assets so that they are stored in the database. But although I did that, the problem persisted.
Finally, I considered deploying Odoo in other ways. Deploying directly on a VM seems the most minimalistic and standard option, and therefore the most likely to work, although it would make GitOps harder to implement; containers could still be run on the VM through Docker Compose, which would help with deploying updates. GKE Anthos seems to support GitOps and to persist volumes, but its description says it is stateless. Finally, there is deploying to a Kubernetes cluster, which would use containers and allow autoscaling compared with Docker Compose on a VM, but it seems more expensive and harder to set up. Regarding the expense, I thought of using small worker node machines so the cost stays low during the night; regarding the difficulty, I want GitOps, so Argo or something similar would have to be added. I've also heard that GKE Autopilot has a good free tier and is easier to deploy to.
Thanks in advance :)
Cloud Run isn't a good solution for this. If the werkzeug session is kept in memory, the same client isn't guaranteed to reach the same instance on each request, and can therefore lose the files even in the middle of a session.
The best solution is to use a VM with a sticky-session configuration. You can use an old-school deployment on Compute Engine, or a cloud-native solution with GKE/Kubernetes. The cost is more or less the same if you have only one cluster (the first one is free).
Just a correction about GKE Anthos: I think you mean Cloud Run on Anthos, and yes, it's like Cloud Run but uses Knative on GKE to manage the containers, and it's also serverless. GKE itself, however, can handle stateful deployments, which is what you need for Odoo.
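To make the sticky-session idea concrete, here is a minimal sketch of how it is often expressed on plain Kubernetes/GKE, assuming a Deployment whose pods are labelled app: odoo (all names are placeholders); on a Compute Engine VM the equivalent would be session affinity on the load balancer or reverse proxy:
apiVersion: v1
kind: Service
metadata:
  name: odoo
spec:
  selector:
    app: odoo                  # matches the (hypothetical) Odoo pods
  ports:
    - port: 80
      targetPort: 8069         # default Odoo HTTP port
  sessionAffinity: ClientIP    # keep a given client on the same pod
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10800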

Alternate lambda functions deployment for production

I'm developing a web extension for the Chrome Web Store that calls a backend deployed on AWS Lambda using the Serverless Framework.
When developing the REST API, I may introduce breaking changes. Since publishing an update on the Chrome store can take a long time and is unpredictable (one day to three weeks), the solution I'm thinking about to keep the API compatible with the extension is to deploy two different Lambda functions in production.
The idea is that when I push new changes to the master branch, the Lambda function with the oldest version is updated and ready to receive calls as soon as the update is approved on the Chrome store, without overwriting the API currently in use.
First, is it a good pattern for handling the updates of a client consuming an API when you don't have full control over it?
Second, is this something doable with the Serverless Framework, and how? I couldn't find any resources on the subject; a rough sketch of the idea follows below.
Thanks
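One way the two-generation idea could be sketched with the Serverless Framework is to treat each API generation as a separate stage, so deploying a new stage never touches the endpoint the published extension is still calling (the service name, runtime and handler below are placeholders):
service: extension-backend            # hypothetical service name

provider:
  name: aws
  runtime: python3.9
  # Deploy each API generation to its own stage, e.g.
  #   serverless deploy --stage v1
  #   serverless deploy --stage v2
  # Each stage gets its own Lambda functions and API Gateway endpoint, so
  # updating v2 never overwrites the v1 API the published extension uses.
  stage: ${opt:stage, 'v1'}

functions:
  api:
    handler: handler.main             # hypothetical handler
    events:
      - http:
          path: items
          method: get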

Is it possible to remote access and parse git revision history?

I have a use case where I need to be able to inspect Git repositories as part of a web service, and the average repo will be very large (1 GB+) because they are used for video game projects. I need to do simple actions such as listing the revision history, etc.
Right now I'm implementing it via API calls to the remote Git hosting services (GitHub, Bitbucket, etc.). This works okay; however, there are some great Git projects like GitVersion that only work with real Git repos because they use libGit2Sharp, and I cannot easily write a workaround for that.
I feel like this is a long shot, but I was wondering if anyone has discussed or begun work on an implementation of libGit2Sharp that works with the major Git hosts via their APIs. Obviously not all actions available in libGit2 will work over an API interface, but at least most read-only actions should.
If this is an entirely new feature request - I'd like to get the opinion of someone with knowledge of the libGit2sharp codebase about how difficult such a feature request would be to implement.
Git only specifies the network protocol for fetching, pushing and creating an archive. Nothing else can be done via the Git protocol (and providers will likely disable the archive so they can leverage their existing caching solutions).
If this is an entirely new feature request - I'd like to get the opinion of someone with knowledge of the libGit2sharp codebase about how difficult such a feature request would be to implement.
This feature would be out of scope and impossible as Git does not provide a way to perform these tasks.
Once you're trying not to do Git, you're out of the Git world and into each provider's API. Trying to replicate Git operations and git commands on top of each provider's API is a whole project unto itself, and one that is likely to make you hit these providers' API limits, as in-depth analysis of repositories is not generally why they provide these services.
Not to mention that looking up each necessary object over HTTP would be extremely slow, and you'd likely not gain anything over grabbing a gigabyte or two from the network.
But if all you need is a few questions that can be easily answered from the APIs themselves (say, latest commit and its relationship to different branches), and you do need the logic in GitVersion, then you're probably better off making its history analysis pluggable so you can put in the data from your API lookups.
I'm not familiar with how GitVersion makes its decisions, but if it doesn't just want the references and their relationships to each other and to the tags, and instead wants to look at the repositories themselves, and you do need it rather than just replicating some of its logic, I would recommend downloading the repositories and performing all the analysis there. It's a much more efficient use of time to rent a bit of disk space from some provider than to try to fit each individual provider's API into some idealised version of a git command, where you then still need to figure out the edge cases of both the command and the API you're using.
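If the download-and-analyse route is taken, here is a minimal sketch of the idea (the repository URL and commit limit are placeholders, and it simply shells out to git rather than going through libGit2Sharp):
# Rough sketch of the "fetch once, analyse locally" approach suggested above.
import subprocess
import tempfile

def list_history(repo_url, max_commits=50):
    workdir = tempfile.mkdtemp(prefix="repo-analysis-")
    # A bare clone keeps only the object database and refs (no working tree),
    # which matters when the repositories are 1 GB+.
    subprocess.run(["git", "clone", "--bare", repo_url, workdir], check=True)
    result = subprocess.run(
        ["git", "--git-dir", workdir, "log", "--oneline", f"-{max_commits}", "--all"],
        check=True, capture_output=True, text=True,
    )
    return result.stdout.splitlines()

if __name__ == "__main__":
    for line in list_history("https://github.com/libgit2/libgit2sharp.git"):
        print(line)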

Is there any API to automate extension installation in XWiki?

I use XWiki Enterprise 7.4. The official way to install extensions is to use either the Import feature or the Extension Manager, and both require user interaction. I would like to automate the extension installation process, so that no user interaction is needed. Is it possible? I've automated space/page creation via the REST API; maybe the REST API can be used for this too, but I can't find it in the documentation.
Why do I need it? It's simple: I've automated all the steps of the deployment/migration process for my application, and I would like to automate XWiki extension installation too.
As indicated by Vincent, you can use the extension script service from inside XWiki. This script service is what the UI uses, so everything the UI does can also be done by any script (as long as the script author has the proper rights).
I just wrote a Velocity example on http://extensions.xwiki.org/xwiki/bin/view/Extension/Extension+Script+Module#HNon-interactiveandsynchronousinstall:
{{velocity}}
## Create install request for extension with id org.xwiki.contrib:extension-tweak and version 1.3 on current wiki
#set($installRequest = $services.extension.createInstallRequest('org.xwiki.contrib:extension-tweak', '1.3', "wiki:${xcontext.database}"))
## Disable interactive mode
$installRequest.setInteractive(false)
## Start install
#set($installJob = $services.extension.install($installRequest))
## Wait until install is done
$installJob.join()
{{/velocity}}
All you need to do is put Thomas' script in a page. You can use the REST API for that. See: http://platform.xwiki.org/xwiki/bin/view/Features/XWikiRESTfulAPI#HPageresources
Then you call the URL from your application.
For example, you put the code in XWiki/AutoInstall with a REST call, and then you can call this page with the following URL:
http://localhost:8080/xwiki/bin/get/XWiki/AutoInstall
I suggest using the "get" action in the URL to avoid unnecessary information in the response.
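A rough sketch of those two steps in Python, assuming a local XWiki at http://localhost:8080 and admin credentials (URL, credentials and page syntax are placeholders; the payload follows the page resource format described in the REST documentation linked above):
# Hypothetical sketch: store the script in XWiki/AutoInstall over REST,
# then trigger it with the "get" action. URL and credentials are placeholders.
import requests

BASE = "http://localhost:8080/xwiki"
AUTH = ("Admin", "admin")  # placeholder credentials

velocity_script = """{{velocity}}
#set($installRequest = $services.extension.createInstallRequest('org.xwiki.contrib:extension-tweak', '1.3', "wiki:${xcontext.database}"))
$installRequest.setInteractive(false)
#set($installJob = $services.extension.install($installRequest))
$installJob.join()
{{/velocity}}"""

# 1. Store the script in the XWiki/AutoInstall page via the REST page resource.
page_xml = (
    '<page xmlns="http://www.xwiki.org">'
    "<title>AutoInstall</title>"
    "<syntax>xwiki/2.0</syntax>"
    "<content>" + velocity_script + "</content>"
    "</page>"
)
response = requests.put(
    BASE + "/rest/wikis/xwiki/spaces/XWiki/pages/AutoInstall",
    data=page_xml,
    headers={"Content-Type": "application/xml"},
    auth=AUTH,
)
response.raise_for_status()

# 2. Call the page with the "get" action to run the installation.
response = requests.get(BASE + "/bin/get/XWiki/AutoInstall", auth=AUTH)
response.raise_for_status()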
The XWiki Core dev team is aware of this and it's on the roadmap, but it's not done yet. For example, you can see that it was planned for the 8.0 roadmap but slipped (http://www.xwiki.org/xwiki/bin/view/Roadmaps/Archives8xCycle/).
Continue improving upgrade tools: Scriptable upgrades (priority 1), Simulation (priority 2)
It seems no issue has been created for this at the moment. It would be great if you could create a JIRA issue at http://xwiki.org in the XWiki Platform project.
Now, regarding extensions, there is a script service that can be used to manipulate extensions; see http://extensions.xwiki.org/xwiki/bin/view/Extension/Extension+Script+Module
However, this documentation is pretty terse. You could check the Java code at https://github.com/xwiki/xwiki-platform/blob/95abd2951123431c1624c124b49ca7a88b41be00/xwiki-platform-core/xwiki-platform-extension/xwiki-platform-extension-script/src/main/java/org/xwiki/extension/script/ExtensionManagerScriptService.java#L84-L84
I've not personally used this script service, so I can't give real examples of using this API.

Is there a good way to wrap an existing Python based web application to require a login?

I'm in the process of installing an open-source, Python-based web application on an internal server here at work. The application as shipped is open (it doesn't require a login to view it), but one of the requirements is that users have to be approved before they can see anything.
Is there a good way (using Apache configuration files for example, but any method would be great) to wrap the application so that any access requires a login? I would like to avoid modifying the open-source code (a maintenance nightmare every time a new release comes out).
Any thoughts or suggestions?
Apache supports Authentication, Authorization and Access Control.
It is a detailed process, and summarising it here would not do it justice; I refer you to the Apache documentation on the subject.
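For illustration, the usual shape of that configuration is a location (or directory) block plus a password file; a minimal sketch, assuming the application is served under /app and the password file lives at /etc/httpd/.htpasswd (both paths are placeholders):
# Create the password file once per approved user (-c only the first time):
#   htpasswd -c /etc/httpd/.htpasswd approved_user

<Location "/app">
    AuthType Basic
    AuthName "Restricted application"
    AuthUserFile /etc/httpd/.htpasswd
    Require valid-user
</Location>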