Azure Container Registry without Pull authentication (ACR Pull Role)

I have an Azure Container Registry instance where the container images are pushed. We have the AcrPush role assigned to some credentials (a service principal account).
Can we pull images from the ACR without any authentication? We want to make this publicly available so images can be pulled without any docker login/authentication.

There are a few things to understand here. First, ACR is a private registry, so you must have credentials with the right permissions to push and pull images. Second, docker login is just one way to set the credentials for the registry, so it is not strictly necessary.
Given that, you do not need to run the docker login command, but you must still have credentials for the ACR. You can run the Azure CLI command az acr login --name acr_name, which sets the credentials for docker without running docker login directly.
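For illustration, a minimal authenticated pull flow might look like this (the registry and image names are hypothetical):
az acr login --name myregistry
docker pull myregistry.azurecr.io/myimage:latest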

Using the Azure CLI, you can update the registry to allow anonymous pull:
az acr update --anonymous-pull-enabled ...
https://learn.microsoft.com/en-us/cli/azure/acr?view=azure-cli-latest#az_acr_update
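For example, a full invocation might look like the following sketch (the registry name is hypothetical; note that anonymous pull generally requires the Standard or Premium service tier):
az acr update --name myregistry --anonymous-pull-enabled true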


Create an SSH key for another account on Google Cloud Platform

I have installed the Cloud SDK for Google Cloud. I logged in using gcloud auth, which redirected me to the Gmail login. I created the SSH key and even logged in over SFTP using FileZilla.
The problem is that when I log in using the Gmail auth, the SDK shell (or PuTTY?) logs me into an account that is not admin. It has created another SSH user account (named 'Acer', after my PC) and logs me into it. Because of this, FTP starts in the /home/Acer folder. I want access to the /home/admin/web folder, but I don't have it now.
How can I create an SSH key for the admin account so that I can gain access to the folder mentioned above? Otherwise, is it possible to grant 'Acer' permission to access all the folders?
I have a few suggestions.
First a bit of background. If you run this command on your home workstation:
sudo find / -iname gcloud
You'll discover a gcloud configuration folder for each user on your home workstation. You'll probably see something like this:
/root/.config/gcloud
/home/Acer/.config/gcloud
If you change directory into /home/Acer/.config/gcloud/configurations you'll see a file named 'config_default'. This file will contain the default account to use for that user ('Acer').
Because you performed gcloud auth login as that user, and selected your gmail account during that process, the config file for that user ('Acer') will contain that gmail ID/account. If you would like a user named 'admin' to log into your project, you could try adding a user named 'admin' to your home workstation, then switch to that user before running gcloud auth login. This will generate a gcloud configuration on your home workstation for user admin, and propagate SSH keys etc.
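A rough sketch of that flow on the workstation (the user name is from the question; INSTANCE_NAME is a placeholder):
$ sudo adduser admin
$ su - admin
$ gcloud auth login
$ gcloud compute ssh admin@INSTANCE_NAME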
If you want to create ssh keys manually there's some useful info here.
(For what it's worth, if you decide to use gcloud compute ssh to log into your instance from your home workstation, you can specify the user you would like to log in as. For example gcloud compute ssh admin@INSTANCE_NAME.)
I want access to the /home/admin/web folder, but I don't have it now.
Even if you are logged into the machine as a different user (in this case 'Acer'), the folder /home/admin/web should still exist on the instance if it existed previously. If you land in folder /home/Acer have you tried changing directory to the folder above and then listing the folders to see if /home/admin/ exists?
For example, from /home/Acer run:
$ cd ..
then
$ ls
You should be able to see /home/admin/.
Otherwise, is it possible to grant 'Acer' the permissions to access
all the folders?
Yes, this is also possible. Access the instance as the project owner (the easiest way is to log into the Console as the owner of the project and use the SSH functionality in the console). Then run this command:
$ sudo chown -R Acer:Acer /home/admin/web
This will make user 'Acer' owner of directory /home/admin/web and all files/directories below it (thanks to the -R switch).
Now when you next access the instance as user 'Acer', you'll be able to access /home/admin/web, with read/write capabilities, by running:
$ cd /home/admin/web

Upload Conan packages from CI

I run my own Conan server and want to automatically upload packages generated by CI. When I use conan upload it prompts me for a username and password. Is there a way to automate this process?
Yes, there are a couple of ways to do it:
Using the command conan user myuser -p mypassword you can "log in" to the remote, so the local cache will store a temporary token to authenticate against the server, and subsequent commands will not require it. Note that this token can expire; check the docs (e.g. for conan_server). Also, if you are managing more remotes, there is one login per remote (add -r=myremote to the above for each one), as in the sketch below.
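For example, caching a token for one specific remote might look like this (the user, password variable, and remote name are hypothetical):
conan user ciuser -p "$CI_PASSWORD" -r=myremote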
There are environment variables you can use for this: CONAN_LOGIN_USERNAME and CONAN_PASSWORD, with a _REMOTENAME suffix for different remotes. Have a look here in the docs. This is probably the way to go for CI, so the password is not in plain text in the CI scripts; most CI services allow for encrypted variables in the configuration. Furthermore, these variables allow automatic re-login in case of expired tokens, which can happen if the tokens are set to short lifetimes and the builds are very long.
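A minimal sketch of a CI upload step using those variables (the package reference and remote name are hypothetical; the secret would come from the CI's encrypted variables):
export CONAN_LOGIN_USERNAME_MYREMOTE=ciuser
export CONAN_PASSWORD_MYREMOTE="$CI_SECRET"
conan upload "mypkg/1.0@user/stable" -r=myremote --all --confirm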

How to switch an IntelliJ IDEA project's VCS between GitLab and Amazon CodeCommit?

I am new to Amazon CodeCommit.
Following their instructions, I did the following:
make a new IAM user with AdministratorAccess
make a new CodeCommit repository
install awscli and run aws configure
Right after I finished those steps, I could pull/push from CodeCommit.
However, it stopped working in IntelliJ IDEA.
I did something like...
I pull a project from gitlab
git remote rm origin
git remote add origin [code commit url]
git branch --set-upstream-to origin/master
Now when I type git [pull/push] origin master, I get this error message:
unable to access 'https://git-codecommit.ap-northeast-1.amazonaws.com/v1/repos/test17/': The requested URL returned error: 403
When I access this URL via a browser, it asks for an ID/password, but my IAM user account information does not work.
What should I do? Is there any way to switch between GitLab and CodeCommit in IntelliJ?
IntelliJ does not use awscli. It uses the default system shell.
From the description, it looks like push/pull does not work for the command-line git in the native shell, so the issue is not IntelliJ-related.
Git is probably trying to use wrong credentials saved by its credential helper; that is why it fails.
Check git config credential.helper to see if one is configured. If there is one, try disabling it or clearing the saved credentials, e.g. as shown below.
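A couple of commands that might help here (the --global scope is an assumption; the helper may also be set per repository or at system level):
git config --show-origin credential.helper
git config --global --unset credential.helper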
From the description it looks like you are trying to connect to a CodeCommit repository in IntelliJ over HTTPS. To do this you need to generate Git credentials (username/password) for your IAM user in the IAM console.
Detailed steps are documented in the AWS documentation: http://docs.aws.amazon.com/codecommit/latest/userguide/setting-up-gc.html
Once you have the username/password, you can use those credentials to connect to your CodeCommit repository in IntelliJ.
Tested on a Mac. Your mileage may vary!
I just ran into the same issue. macOS stores the Git username and password in the Keychain (Keychain Access is in your Applications > Utilities folder). I deleted all references to AWS CodeCommit from the keychain, which forced me to re-enter the username and password. This seems to have solved the problem.
As a side note: I think this happened because I revoked a prior Git credential on AWS and created a new one. The keychain was supplying the old username/password, which then failed during authentication.
First, create an IAM user with appropriate permissions and then create Git credentials for it. Then go to IntelliJ IDEA, choose to open a project from VCS with Git credentials, and log in with the AWS Git credentials you created. Once you have logged in, you should be able to pull from and push to the repo. If you are still having issues, and you have checked that the credentials are active and that the IAM user they are attached to has the right permissions, I would recommend creating a ticket with AWS Support, as there may be something wrong with your account that AWS staff will need to fix.

SSH into staging machine from docker instance using Bitbucket Pipelines

Using the new Bitbucket Pipelines feature, how can I SSH into my staging box from the docker container it spins up?
The last step in my pipeline is an .sh file that deploys the necessary code on staging. However, because my staging box uses public key authentication and doesn't know about the docker container, the SSH connection is being denied.
Is there any way of getting around this without using password authentication over SSH (which is causing me issues as well, since the client keeps choosing to authenticate with a public key instead)?
Bitbucket Pipelines can run your builds in a Docker image you've created that has the ssh client set up, as long as the image is hosted on a container registry that Pipelines can reach.
Create a Docker image
Create a Docker image with your ssh key available somewhere. The image also needs to have the host key for your environment(s) saved under the user the container will run as. This is normally the root user but may be different if you have a USER command in your Dockerfile.
You could copy an already-populated known_hosts file into the image, or populate the file at image build time with:
RUN ssh-keyscan your.staging-host.com >> /root/.ssh/known_hosts
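Putting the pieces together, a minimal Dockerfile might look like the following sketch (the base image, key file name, and host name are assumptions; ssh-keyscan needs network access to the host at build time):
FROM debian:stretch-slim
RUN apt-get update && apt-get install -y --no-install-recommends openssh-client && rm -rf /var/lib/apt/lists/*
# Bake the deploy key into the image; keep this image in a private registry.
COPY id_rsa /root/.ssh/id_rsa
# Lock down permissions and record the staging host's key for host verification.
RUN chmod 700 /root/.ssh && chmod 600 /root/.ssh/id_rsa && ssh-keyscan your.staging-host.com >> /root/.ssh/known_hosts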
Publish the image
Publish your image to a registry that is reachable from Bitbucket but kept private. You can host your own or use a service like Docker Hub.
Configure Pipelines
Configure pipelines to build with your docker image.
If you use Docker Hub:
image:
  name: account-name/java:8u66
  username: $USERNAME
  password: $PASSWORD
  email: $EMAIL
Or your own external registry:
image:
  name: docker.your-company-name.com/account-name/java:8u66
Restrict access on your hosts
You don't want ssh keys that can access your hosts flying around the world, so I would also restrict these deploy keys to only run your deploy commands.
In the authorized_keys file on your staging host:
command="/path/to/your/deploy-script",no-agent-forwarding,no-port-forwarding,no-X11-forwarding ssh-dss AAAAC8ghi9ldw== deploy@bitbucket
Unfortunately Bitbucket doesn't publish an IP list to restrict access to, as they use shared infrastructure for Pipelines. If they happen to be running on AWS, then Amazon does publish IP lists.
from="10.5.0.1",command="",no-... etc
Also remember to date them and expire them from time to time. I know ssh keys don't enforce dates, but it's a good idea to do it anyway.
You can now set up SSH keys under pipeline settings, so you do not need a private docker image just to store ssh keys. The key is also kept out of your source code, so you don't have it in your repo either.
Under
Settings -> Pipelines -> SSH keys
you can either provide a key pair or generate a new one. The private key is made available inside the docker container that runs your build, and you get a public key to add to the ~/.ssh/authorized_keys file on your host. The page also asks for the IP or host name of your host, so the fingerprint can be added to the container's known hosts.
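With keys configured that way, a deploy step in bitbucket-pipelines.yml might look like the following sketch (the host name, user, and script path are hypothetical):
pipelines:
  default:
    - step:
        script:
          - ssh deploy@your.staging-host.com '/path/to/your/deploy-script'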
Also, Bitbucket has published IP addresses for the docker containers being spun up, which you can whitelist if necessary. They are currently:
34.236.25.177/32
34.232.25.90/32
52.203.14.55/32
52.202.195.162/32
52.204.96.37/32
52.54.90.98/32
34.199.54.113/32
34.232.119.183/32
35.171.175.212/32

Revoking access to gsutil OAuth Token

We had configured standalone gsutil on a remote server; however, we no longer have access to that server. How do we revoke the access granted to gsutil on it? The .boto file there will contain the OAuth 2.0 refresh token.
We do not have access to the server, so we cannot remove the .boto file.
The configured project is active in our console, but we cannot see any specific access in the permissions section.
A standalone gsutil script was installed (not gcloud).
Use gcloud auth revoke.
https://cloud.google.com/sdk/gcloud/reference/auth/revoke
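For instance, on a machine where the same account is still authorized with gcloud (the account name is hypothetical):
gcloud auth revoke ci-account@example.com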
The .gsutil directory just gets recreated within 10 seconds for me.
OK, we can revoke gsutil's access from the account permissions page, through this link:
https://security.google.com/settings/security/permissions
[Screenshot: Google Security Permissions page]
Just remove the credstore files:
rm -rf ~/.gsutil/