How can I duplicate a namespace with all content with a new name in the same kubernetes cluster?
e.g. Duplicate default to my-namespace which will have the same content.
I'm interested just in services and deployments. When I try the approach with kubectl get all and with api-resources, I get an error about the services' IPs like:
Error from server (Invalid): Service "my-service" is invalid: spec.clusterIP: Invalid value: "10.108.14.29": provided IP is already allocated
As @coderanger mentioned in his answer, there is no straightforward way to make a copy of the origin k8s resources in a separate namespace.
As was pointed out, when you invoke the kubectl get all command, k8s only looks through the resource catalog bound to the all category. Therefore, if you didn't add this category to each custom CRD object in its specific API group, you will probably miss some relevant k8s resources in the command output.
Furthermore, if you want to export all k8s resources from a particular namespace, beyond the user workloads, I would recommend exploring the API resources, filtering out only namespace-scoped objects, and then applying some bash processing to generate a manifest file per resource group:
kubectl api-resources --namespaced=true | awk '{print $1}' | sed '1d' | while read -r line; do kubectl get "$line" -n namespace -o yaml > "$line.yaml"; done
You can also consider using Helm (as @David Maze pointed out in the comments), managing the user workloads through Helm charts, as a more flexible and structured way to describe k8s native resources.
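For example, with Helm 3 the same chart can be installed into the new namespace in one command (the chart path and release name below are just placeholders):
helm install my-release ./my-chart -n my-namespace --create-namespace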
There is no specific way to do this. You could probably get close with something like kubectl get all -n sourcens -o yaml | sed -e 's/namespace: sourcens/namespace: destns/' | kubectl apply -f - but get all is always a bit wonky and this could easily miss weird edge cases.
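If you only need Services and Deployments, as in the question, a rough, untested sketch that also drops the already-allocated cluster IPs (so the API server assigns fresh ones) could look like this, assuming jq is installed:
kubectl get deployments,services -n default -o json \
  | jq '.items[].metadata.namespace = "my-namespace"
        | del(.items[].spec.clusterIP,
              .items[].spec.clusterIPs,
              .items[].metadata.resourceVersion,
              .items[].metadata.uid,
              .items[].status)' \
  | kubectl apply -f -
NodePorts and any references to other namespaces would still need manual attention.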
You can backup your namespace using Velero and then you can restore it to another namespace or cluster!
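A minimal sketch of that flow, assuming Velero is already installed in the cluster (the backup name is just an example):
velero backup create default-backup --include-namespaces default
velero restore create --from-backup default-backup --namespace-mappings default:my-namespace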
Related
Ideally, I'd like to be able to do this with aws cli, but I'm open to alternatives. Assuming I'm authenticated to a particular aws account, is there any way to look up basic information about a cluster, or all clusters in the account, without knowing what region the cluster is in? I'd love a way to get information about a cluster without already knowing meta information about it. I could write a script to cycle through all regions looking for clusters, but I hope there's a better way.
Here is a bash for loop that should do the trick:
for region in `aws ec2 describe-regions --output text | cut -f4`
do
  echo -e "\nListing Clusters in region: $region..."
  aws eks list-clusters --region $region --output text --no-cli-pager
done
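Once a cluster shows up in one of the regions, you can pull its basic details (endpoint, version, VPC config) with, for example:
aws eks describe-cluster --name my-cluster --region us-east-1 --output json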
A handy command is eksctl get cluster --all-regions -o json.
I am using playbooks to run my modules. I have a doubt whether I can put my common variables outside the playbook, for the following reasons:
Security reasons, like username and password
To reduce repetitive code and the LOC of the playbook by putting global variables in a common place
Right now my playbook looks like some thing below:
- hosts: localhost
  tasks:
    - name: Get all Storage Service Levels
      StorageServiceLevelModule: host=<ip> port=<port> user=admin password=<password>
        action=get name='my_ssl'
      register: jsonResultforSSLs

    - name: print the SSL key
      debug: msg="{{ jsonResultforSSLs.meta.result.records[0].key }}"

    - name: Get all Storage VMs
      StorageVMModule: host=<ip> port=<port> user=admin password=<password>
        action=get name=my_svm
      register: jsonResultforSVMs
I want to put
host=<ip> port=<port> user=admin password=<password>
outside the playbook and use it in all tasks of my playbooks. How can I do this?
Please let me know if any clarification is required.
You can specify your own variables for all or certain hosts in the inventory file or in the sub-directories related to it (like ./group_vars). Go to this webpage; there you can see an example of a file in that directory, which must have the name of a group and be written in YAML. The ./group_vars directory must be in the same directory as your hosts file. For example, if your hosts file is ./inventory/hosts, then the files with variables should be ./inventory/group_vars/<group_name>. Keep in mind that the variables defined in those files will only apply to the members of the group. Example of the content of a file in that directory:
---
ip: 1.1.1.1
port: 420
password: 'password1'  # should be encrypted with Ansible Vault
...
And then you would use them as you normally would:
- name: Get all Storage VMs
  StorageVMModule: host='{{ ip }}' port='{{ port }}' user=admin action=get name=my_svm
  register: jsonResultforSVMs
Variables can be loaded in different ways. You can define a file named all inside the vars/ directory, and those variables are available throughout the whole playbook.
You can also define them in a file and provide it when executing the playbook with -e @filename. I find this the most convenient way.
Check this link from the docs; I think you might find it very useful.
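As a quick sketch of the -e @filename approach (the file name is just an example, holding the host/port/user/password as YAML key: value pairs):
ansible-playbook my_playbook.yml -e @common_vars.yml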
I strongly suggest you use roles. Each role has a vars folder where you can put the variables relevant to that role. You can then provide their values using OS environment variables.
For secrets like the ones you describe, you'd be best off using a secret store, for example HashiCorp Vault. Luckily, Ansible also has its own way of encrypting secret information, called Ansible Vault, which operates at the file level.
What you should never do is put secrets in plain text files and then commit them to a source control system. Ansible Vault encrypts them to get around this.
Ansible vault isn't complicated but has very good documentation here
You can create a new encrypted file like this:
ansible-vault create filename.yml
You can edit the file with this:
ansible-vault edit filename.yml
You can encrypt an unencrypted file like this:
ansible-vault encrypt filename.yml
You can decrypt a file with: ansible-vault decrypt filename.yml
You can then use these files in playbooks and commit them to source control with their contents protected.
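When you run a playbook that loads vaulted files, you just supply the vault password, e.g. (the playbook name is a placeholder):
ansible-playbook site.yml --ask-vault-pass
or non-interactively:
ansible-playbook site.yml --vault-password-file ~/.vault_pass.txt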
Another approach is to store the secrets in an external secret store (e.g. Vault) and export them as environment variables, then read the environment variables in and assign them to Ansible variables. This way nothing ever goes into source control at all; this is my preferred approach.
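A sketch of that flow, with hypothetical secret paths and variable names (adapt to however your secret store exposes values):
export DB_PASSWORD="$(vault kv get -field=password secret/myapp)"
ansible-playbook site.yml -e "db_password=$DB_PASSWORD"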
That's secrets taken care of.
For common structures you can use group_vars and set different values for different groups; this is explained here.
To second Vinny -
Roles. Roles, roles, roles.
The default structure for roles includes a defaults directory in which you can define default values in defaults/main.yml. This is about the lowest priority setting you can use, so I like it better than vars/main.yml for setting reasonable values that can be easily overridden at runtime, but as long as you pick a consistent structure you're good.
I don't personally like the idea of a "common" role just for variables everything uses, but if your design works well with that, you should be sure to prefix all the variable names with "virtual namespace" strings. For example, don't call it repo, call it common_git_repo or common_artifactory, or something more specific if you can.
Once you include that role in a playbook, make certain the default file is statically loaded before the values are called, but if it is you don't have to worry so much about it. Just use your {{ common_git_repo }} where you need it. It will be there...which is why you want to use virtual namespacing to avoid collisions with effectively global names.
When you need to override values, you can stage them accordingly. We write playbook-specific overrides of role defaults in the vars: section of a playbook, and then dynamically write last-minute overrides into a Custom.yml file that gets loaded in the vars_files: section. Watch your security, but it's very flexible.
We also write variables right into the inventory. If you use a dynamic inventory you can embed host- and/or group-specific variables there. This works very well. For the record, you can use YAML output instead of JSON. Here's a simplistic template - we sometimes use the shell script that runs Ansible as the inventory:
case $# in
  0) # execute the playbook
     ansible-playbook -i Jenkins.sh -vv site.yml
     ;;
  *) case $1 in
       # ansible-playbook will call the script with --list for hosts
       --list)
         printf "%s\n\n" ---
         for group in someGroup otherGroup admin managed each all
         do printf "\n$group:\n hosts:\n"
            for s in $Servers
            do printf " - $s\n"
            done
            printf " vars:\n"
            printf " ansible_ssh_user: \"$USER\"\n"
            printf " ansible_ssh_pass: \"$PSWD\"\n\n"
         done
         ;;
     esac
     ;;
esac
You can also use --extra-vars as a last-minute, highest-priority override.
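For example, to override the role default mentioned above at run time:
ansible-playbook site.yml --extra-vars "common_git_repo=git@example.com:org/repo.git"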
I'm pretty new to Kubernetes and clusters so this might be very simple.
I set up a Kubernetes cluster with 5 nodes using kubeadm following this guide. I got some issues but it all worked in the end. So now I want to install the Web UI (Dashboard). To do so I need to set up authentication:
Please note, this works only if the apiserver is set up to allow authentication with username and password. This is not currently the case with some setup tools (e.g., kubeadm). Refer to the authentication admin documentation for information on how to configure authentication manually.
So I got to read the authentication page of the documentation. I decided I want to add authentication via a Static Password File. To do so I have to append the option --basic-auth-file=SOMEFILE to the API server.
When I do ps -aux | grep kube-apiserver this is the result, so it is already running (which makes sense, because I use it when calling kubectl):
kube-apiserver
--insecure-bind-address=127.0.0.1
--admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,PersistentVolumeLabel,DefaultStorageClass,ResourceQuota
--service-cluster-ip-range=10.96.0.0/12
--service-account-key-file=/etc/kubernetes/pki/apiserver-key.pem
--client-ca-file=/etc/kubernetes/pki/ca.pem
--tls-cert-file=/etc/kubernetes/pki/apiserver.pem
--tls-private-key-file=/etc/kubernetes/pki/apiserver-key.pem
--token-auth-file=/etc/kubernetes/pki/tokens.csv
--secure-port=6443
--allow-privileged
--advertise-address=192.168.1.137
--kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
--anonymous-auth=false
--etcd-servers=http://127.0.0.1:2379
A couple of questions I have:
So where are all these options set?
Can I just kill this process and restart it with the option I need?
Will it be started when I reboot the system?
In /etc/kubernetes/manifests there is a file called kube-apiserver.json. This is a JSON file and it contains all the options you can set. I appended the --basic-auth-file=SOMEFILE option and rebooted the system (right after changing the file, kubectl stopped working and the API was down).
After the reboot the whole system was working again.
Update
I didn't manage to run the dashboard using this. What I did in the end was install the dashboard on the cluster, copy the keys from the master node (/etc/kubernetes/admin.conf) to my laptop, and run kubectl proxy to proxy the dashboard traffic to my local machine. Now I can access it on my laptop through 127.0.0.1:8001/ui
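In other words, something along these lines (the local path of the copied admin.conf is just an example):
kubectl --kubeconfig ~/admin.conf proxy
The dashboard is then reachable at http://127.0.0.1:8001/ui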
I just found this for a similar use case, where the API server was crashing after adding an option with a file path.
I was able to solve it and maybe this helps others as well:
As described in https://kubernetes.io/docs/reference/setup-tools/kubeadm/implementation-details/#constants-and-well-known-values-and-paths the files in /etc/kubernetes/manifests are static pod definitions. Therefore container rules apply.
So if you add an option with a file path, make sure you make it available to the pod with a hostPath volume.
Docker documentation is pretty good at describing what you can do from the command line.
It also gives a pretty comprehensive description of the commands associated with the remote API.
It does not, however, appear to give sufficient context for using the remote API to do things that one would do using the command line.
An example of what I am talking about: suppose you want to do a command like:
docker run --rm=true -i -t -v /home/user/resources:/files -p 8080:8080 --name SomeService myImage_v3
using the Remote API. There is a container "run" command in the Remote API:
POST /containers/(id or name)/start
And this command refers back to the create container command for the rather long list of JSON strings that you would need to add in order to do the actual start.
The problem here is: first, just calling this command doesn't work. Apparently there is more that you have to do (I am guessing you have to do a create, then a start). Second, it is unclear which JSON strings you need to use in order to do what I showed in the command line (like setting ports, mapping to the external directory, etc). Not only do the JSON strings provided in the remote API documentation not line up with the command line parameters (at least, not in any way that is obvious!), but it is unclear which JSON strings are required for the create (assuming that we have to do a create, which isn't established yet!) and which are required for the start.
This is just related to starting a container. Suppose you want to stop and destroy a container, as in:
docker stop SomeService
docker rm SomeService
Granted, there appear to be one-to-one commands for doing this in the remote API:
POST /containers/(id or name)/stop
POST /containers/(id or name)/kill
But it seems that the IDs you can pass them do not correspond to the IDs shown when you list containers or images.
Is there somewhere I can go to gather information on how to set up and use remote API commands that relates these commands and their JSON parameters to the commands and parameters in the command line?
Failing that, can someone please tell me how to do the start that I showed in my illustration using the remote API???
In any event: is there someone working on docker development I can bring these documentation issues to? It is, I believe, a big "hole" in their documentation.
Someone please advise...
docker run is a combination of docker create, followed by docker start, so https://docs.docker.com/engine/reference/api/docker_remote_api_v1.22/#create-a-container, followed by https://docs.docker.com/engine/reference/api/docker_remote_api_v1.22/#start-a-container
If you're running "interactively", you may need to attach to the container after that; https://docs.docker.com/engine/reference/api/docker_remote_api_v1.22/#attach-to-a-container
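As a rough, untested sketch, the docker run example from the question maps to two calls against the local Unix socket; the field names come from the create-container documentation linked above (--rm has no direct equivalent here, so you'd remove the container yourself afterwards):
curl --unix-socket /var/run/docker.sock \
     -H "Content-Type: application/json" \
     -d '{
           "Image": "myImage_v3",
           "Tty": true,
           "OpenStdin": true,
           "ExposedPorts": { "8080/tcp": {} },
           "HostConfig": {
             "Binds": ["/home/user/resources:/files"],
             "PortBindings": { "8080/tcp": [ { "HostPort": "8080" } ] }
           }
         }' \
     -X POST "http://localhost/containers/create?name=SomeService"

curl --unix-socket /var/run/docker.sock \
     -X POST "http://localhost/containers/SomeService/start"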
Our team is working on AWS, where we have lots of instances, which we keep adding and removing. Each instance has a logical name, which helps us know what it does as well as finding it.
When we want to connect to one, though, we either need to update the ~/.ssh/config file all the time, or go to the web console, find the instance by its name, copy its IP, and only then can we connect using:
ssh -i ~/.aws/my-pem-file.pem ubuntu@ec2-111-111-111-111.compute-1.amazonaws.com
I was wondering whether there is an easier way to do it, where you could specify the machine name and EC2 would do the rest?
Something like
ssh-aws my-machine-name
If you configure your instance/load balancer with an Elastic IP (which doesn't change), you can always use an SSH config file.
http://webadvent.org/2012/ssh-tips-by-lorna-mitchell
http://nerderati.com/2011/03/simplify-your-life-with-an-ssh-config-file/
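For example, a minimal ~/.ssh/config entry (name, address and key path are placeholders) lets you just type ssh my-machine-name afterwards:
cat >> ~/.ssh/config <<'EOF'
Host my-machine-name
    HostName ec2-111-111-111-111.compute-1.amazonaws.com
    User ubuntu
    IdentityFile ~/.aws/my-pem-file.pem
EOF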
Secondly, if you have the Unified AWS CLI Tools configured, you can add these functions to your Bash profile. Assuming every instance you have has a unique "Name" tag, this will return the IP address of that instance for SSH requests. (Otherwise, it will simply use the first "Name" match.)
function hostname_from_instance() {
  echo $(aws ec2 describe-instances --filters "{\"Name\":\"tag:Name\", \"Values\":[\"$1\"]}" --query='Reservations[0].Instances[0].PublicDnsName' | tr -d '"')
}

function ip_from_instance() {
  echo $(aws ec2 describe-instances --filters "{\"Name\":\"tag:Name\", \"Values\":[\"$1\"]}" --query='Reservations[0].Instances[0].PublicIpAddress' | tr -d '"')
}

function ssh-aws() {
  ssh -i ~/.ssh/your-keypair.pem ec2-user@$(ip_from_instance "$1")
}
Depending on whether you're running instances inside of VPC or not, sometimes you'll get back one or the other. All-public (classic) EC2 should always get back a hostname, and sometimes a public IP.
Feel free to tweak/adjust as necessary.
I wrote a little bash script which uses aws-cli (thanks @Ryan Parman) to find the correct machine IP and PEM from the machine name:
http://sash.agassi.co.il/
To use it simply call
sash <machine-name>
I've also added more features to it like upload, download, and multiplex connect...
The simple way would be to put ssh -i ~/.aws/my-pem-file.pem ubuntu@ec2-111-111-111-111.compute-1.amazonaws.com into a .sh file with a logical name, as you specified. Now when you run logical-name.sh, you are logged in to that instance. The file needs to be updated in case the instance address changes. One option to overcome this would be to assign a static IP to each instance, but I'm not sure if that is feasible on your end.