Is it possible to create content selectors for a Helm hosted repo in NXRM?

I want to create a subpath for each team in a Helm hosted repository and apply content selectors to restrict access (each team would only have access to a subset of Helm charts, or could only upload to a specific subpath), as we can do with Docker or raw repos.
Is there a way to do that, or are content selectors not supported for Helm hosted repos in NXRM3 OSS?
Thanks in advance for any idea or advice on how to achieve that.
If you have other ideas on how to secure access or uploads to a Helm hosted repo, that would be appreciated too.

Helm support was added to the application itself, not via a plugin, in 3.21 for OSS (and Pro licensed).
Content selectors should work for Helm the same as for any other format. They are primarily path based at this time, so it's up to you to figure out what path scheme works for your teams. If a team puts teamAlpha in the path of its charts, for example, then writing a selector for it is easy.
See more general details here: https://help.sonatype.com/display/NXRM3/Content+Selectors
If you only have two teams, you could also manage this with plain repository security permissions; however, that doesn't scale well, so it generally isn't recommended.
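For example, a content selector for the teamAlpha case above could use a CSEL expression along these lines (the path layout and team name are just illustrations):

format == "helm" and path =~ "^/teamAlpha/.*"

You would then create a Repository Content Selector privilege that combines this selector with the Helm hosted repository and the actions you want to allow (e.g. browse/read for consumers, add/edit for the uploading team), and assign that privilege to the team's role.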

Related

How to extract environment variables in Rancher automatically

First of all, sorry if this thread is not appropriate for Stack Overflow, but I think it is the best place to ask.
We are using Rancher to manage a microservices solution. Most of the containers are NodeJS + Express apps, but there are others like Mongo or Identity Server.
We use many environment variables, like endpoints or environment constants, and when we upgrade some of the containers individually we forget to include them (most of the time, the person who deploys an upgrade is not the person who made the new version).
So we're looking for a way to manage them. We know that using a Dockerfile could be the best way, but if we need to upgrade just one container, that seems like too much work for a minor change.
TL;DR: How do you manage your environment variables in Rancher? How do you document them, or how do you extract them automatically?
Thanks!
Applications in Rancher are generally managed using Stacks/Services. A Dockerfile is used to build a container image; docker-compose/rancher-compose files are used to define the applications. The environment variables can be specified in the docker-compose file.
When you upgrade a service in Rancher, the environment variable values are carried forward, and it's also possible to edit them before the upgrade.
Rancher's "Catalog" feature might also be useful for you. Check out: https://rancher.com/docs/rancher/v1.6/en/catalog/
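For illustration, a minimal docker-compose snippet for one of the Node/Express services might look like the following (the image name, endpoints and variable names are placeholders, not anything Rancher requires):

version: '2'
services:
  orders-api:
    image: registry.example.com/orders-api:1.2.3    # hypothetical image in your own registry
    environment:
      NODE_ENV: production
      IDENTITY_SERVER_URL: http://identity:5000     # placeholder endpoint
      MONGO_URL: mongodb://mongo:27017/orders       # placeholder connection string

Because the variables live in the compose file, which can be kept in version control next to the code, whoever performs an upgrade no longer has to remember them by hand.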

Restart Kubernetes API server with different options

I'm pretty new to Kubernetes and clusters so this might be very simple.
I set up a Kubernetes cluster with 5 nodes using kubeadm following this guide. I got some issues but it all worked in the end. So now I want to install the Web UI (Dashboard). To do so I need to set up authentication:
Please note, this works only if the apiserver is set up to allow authentication with username and password. This is not currently the case with some setup tools (e.g., kubeadm). Refer to the authentication admin documentation for information on how to configure authentication manually.
So I went and read the authentication page of the documentation, and I decided I want to add authentication via a Static Password File. To do so I have to append the option --basic-auth-file=SOMEFILE to the API server.
When I do ps -aux | grep kube-apiserver this is the result, so it is already running (which makes sense, because I use it when calling kubectl):
kube-apiserver
--insecure-bind-address=127.0.0.1
--admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,PersistentVolumeLabel,DefaultStorageClass,ResourceQuota
--service-cluster-ip-range=10.96.0.0/12
--service-account-key-file=/etc/kubernetes/pki/apiserver-key.pem
--client-ca-file=/etc/kubernetes/pki/ca.pem
--tls-cert-file=/etc/kubernetes/pki/apiserver.pem
--tls-private-key-file=/etc/kubernetes/pki/apiserver-key.pem
--token-auth-file=/etc/kubernetes/pki/tokens.csv
--secure-port=6443
--allow-privileged
--advertise-address=192.168.1.137
--kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
--anonymous-auth=false
--etcd-servers=http://127.0.0.1:2379
A couple of questions I have:
So where are all these options set?
Can I just kill this process and restart it with the option I need?
Will it be started when I reboot the system?
In /etc/kubernetes/manifests there is a file called kube-apiserver.json. This is a JSON file and contains all the options you can set. I appended --basic-auth-file=SOMEFILE and rebooted the system (right after changing the file, kubectl stopped working and the API was shut down).
After a reboot the whole system was working again.
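For reference, the file passed to --basic-auth-file is a CSV with one line per user: password, user name, numeric user ID, and optionally a quoted list of groups. The entries below are made-up examples:

supersecret,admin,1
alicespassword,alice,2,"developers,qa"

Clients then authenticate against the API server with HTTP basic auth using those credentials.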
Update
I didn't manage to run the dashboard using this. What I did in the end was install the dashboard on the cluster, copy the credentials (/etc/kubernetes/admin.conf) from the master node to my laptop, and run kubectl proxy to proxy the dashboard traffic to my local machine. Now I can access it on my laptop through 127.0.0.1:8001/ui
I just found this thread for a similar use case: the API server was crashing after I added an option with a file path.
I was able to solve it and maybe this helps others as well:
As described in https://kubernetes.io/docs/reference/setup-tools/kubeadm/implementation-details/#constants-and-well-known-values-and-paths the files in /etc/kubernetes/manifests are static pod definitions. Therefore container rules apply.
So if you add an option with a file path, make sure you make it available to the pod with a hostPath volume.
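As a rough sketch (assuming a newer kubeadm setup where the manifest is kube-apiserver.yaml, and using /etc/kubernetes/basic-auth.csv purely as an example path), the relevant additions to the static pod spec would be:

spec:
  containers:
  - name: kube-apiserver
    command:
    - kube-apiserver
    - --basic-auth-file=/etc/kubernetes/basic-auth.csv   # the new flag
    # ... all the existing flags stay as they are ...
    volumeMounts:
    - name: basic-auth
      mountPath: /etc/kubernetes/basic-auth.csv          # makes the file visible inside the pod
      readOnly: true
  volumes:
  - name: basic-auth
    hostPath:
      path: /etc/kubernetes/basic-auth.csv               # the file on the master node

The kubelet watches the manifests directory, so saving the edited file is normally enough to restart the API server with the new option.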

Dockerized Gitlab Container Backup

I am using a GitLab docker image for integration testing of a service I'm helping to develop. Ideally, the image would be a preconfigured snapshot of GitLab with different users and repos available to run tests against. So the problem ends up being, what is a good way to automate the creation of 'snapshots' of GitLab (that can then be versioned etc.)?
My current solution to this problem is to use GitLab's built in backup utility via gitlab-rake gitlab:backup:create after getting GitLab to a state that I want. This then lets me use GitLab's gitlab-rake gitlab:backup:restore in a hook when the container is starting up to get the container back to the state that I expect (the backup having been ADDed in the Dockerfile for the image). This has the advantage of being relatively lightweight (backups are on the order of MBs) and the backups can be checked in to version control.
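The Dockerfile side of this approach is small; a sketch (the base image tag, backup archive name and wrapper script are placeholders for whatever you actually use) could look like:

FROM gitlab/gitlab-ce:9.1.0-ce.0
# Backup archive previously produced inside a configured container with: gitlab-rake gitlab:backup:create
ADD 1493107454_2017_04_25_9.1.0_gitlab_backup.tar /var/opt/gitlab/backups/
# Wrapper that starts GitLab, waits until it is up, then runs something like:
#   gitlab-rake gitlab:backup:restore BACKUP=1493107454_2017_04_25_9.1.0 force=yes
COPY restore-on-start.sh /assets/restore-on-start.sh
ENTRYPOINT ["/assets/restore-on-start.sh"]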
I have tried using docker export along with docker import to save the state of the container and then create an image based on that state. This has the advantage of being easy to automate since it is directly supported by Docker, but ends up being fairly expensive considering what the goal is (having users and repos available to test against). It also would require the images to be pushed to a registry of some kind in order to be easily distributed. Perhaps this is the best solution because it is well supported though.
I suppose my question is, what is the Docker way of approaching a problem like this?

Is there a way to tell kubernetes to update your containers?

I have a Kubernetes cluster, and I am wondering how (best practice) to update containers. I know the idea is to tear down the old containers and put up new ones, but is there a one-liner I can use? Do I have to remove the replication controller or pod(s) and then spin up new ones (pods or replication controllers)? I am using a self-hosted private registry that I have to build from the Dockerfile and push to anyway; that part I can automate with gulp (or any other build tool), but can I also automate the Kubernetes update/tear-down and bring-up?
Kubectl can automate the process of rolling updates for you. Check out the docs here:
https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/kubectl_rolling-update.md
A rolling update of an existing replication controller foo running Docker image bar:1.0 to image bar:2.0 can be as simple as running
kubectl rolling-update foo --image=bar:2.0
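Tying that into a self-hosted registry workflow (the registry host and image names below are placeholders), the whole cycle the question describes can be scripted:

docker build -t registry.example.com/bar:2.0 .            # build from your Dockerfile
docker push registry.example.com/bar:2.0                  # push to the private registry
kubectl rolling-update foo --image=registry.example.com/bar:2.0

Any build tool, gulp included, can simply shell out to those three commands.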
Found where in the Kubernetes docs they mention updates: https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/replication-controller.md#rolling-updates. Wish it was more automated, but it works.
EDIT 1
For automating this, I've found https://www.npmjs.com/package/node-kubernetes-client; since I already automate the rest of my build and deployment with a Node process, this will work really well.
The OpenShift Origin project (https://github.com/openshift/origin) runs an embedded Kubernetes cluster, but provides automated build and deployment workflows on top of your cluster if you do not want to roll your own solution.
I recommend looking at the example here:
https://github.com/openshift/origin/tree/master/examples/sample-app
It's possible some of the build and deployment hooks may move upstream into the Kubernetes project in the future, but this would serve as a good example of how deployment solutions can be built on top of Kubernetes.

Building a service for a Drupal site to duplicate a node to another Drupal site in a multi-site setup

I'm trying to set up one of my Drupal sites to push a node to another Drupal site in a multi-site configuration. It looks like I need to do this with services somehow, but I can't find any tutorials out there, and I need to at least be pointed in the right direction.
What I believe I need is to set up Services on the receiving site to accept a call from the sending site, which will send the node object via JSON or serialized PHP, using a key that was set up on the receiving site. Can anyone show me an example of this working or give me some insight?
Thanks
Have you checked out the Deploy module on d.o (drupal.org)? It's a great tool for pushing changes (including nodes) from one installation to another. It uses the Services module for communication.
I have not tested it with a multisite installation, but I guess it should work as long as the domain names are different for each site.
Regards
Mike