How to create symbolic link for Google Cloud SDK gcloud directory on a NAS drive? - google-cloud-sdk

The problem: we have several servers that need to reference the same Google Cloud SDK credentials, and we want to keep those credentials in a central location. What is the easiest way to share them between servers?
What we tried: we created both soft symbolic links and hard links to the gcloud directory, without success; in both cases we received the following error message:
C:\Users\Redacted\AppData\Local\Google\Cloud SDK> bq ls
WARNING: Could not open the configuration file: [C:\Users\Redacted\AppData\Roaming\gcloud\configurations\config_default].
ERROR: (bq) You do not currently have an active account selected.

Found the answer: make soft symbolic links to the two credential files below to centralize them, rather than linking the gcloud directory itself. mklink takes the link name followed by the target, so point each link at the shared copy (<shared-path> is a placeholder for wherever the central copies live):
mklink access_tokens.db <shared-path>\access_tokens.db
mklink credentials.db <shared-path>\credentials.db
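For completeness, a minimal end-to-end sketch from an elevated command prompt (the NAS path \\nas\gcloud is a made-up example; the local files must be removed first so links with the same names can be created):
cd %APPDATA%\gcloud
REM remove the local copies so the links can take their place
del access_tokens.db credentials.db
REM create soft links that point at the shared copies on the NAS
mklink access_tokens.db \\nas\gcloud\access_tokens.db
mklink credentials.db \\nas\gcloud\credentials.db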

Related

Web server has no permission to access files on network drive

I want to use DDEV as a local development environment. The setup was successful and the website (a WordPress site) is running.
Currently our team is using XAMPP, and to avoid downloading large files on every local machine we create symbolic links (e.g. for the "uploads" folder in WordPress). The targets are on a network drive, so everyone on the team has access to the same files.
Now I want to do the same with DDEV. In WSL I mounted the network drive and created a symbolic link. Inside the console I have full access to the mounted directory: I can create, edit and remove files.
But when I access a file with the browser I get the following error message:
403 Forbidden. You don't have permission to access this resource.
The same error occurs when I try to upload a new file within WordPress.
Is there any way to give the webserver the permission to view and modify the files on a network drive?
The web server is Apache 2.4.38.
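For reference, the mount-and-link steps described above look roughly like this in WSL (the server, share, and folder names are made-up examples):
# mount the Windows network share into WSL
sudo mkdir -p /mnt/share
sudo mount -t drvfs '\\server\share' /mnt/share
# link the shared uploads folder into the WordPress tree
ln -s /mnt/share/uploads wp-content/uploads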
As @rfay mentioned, I had to add the network drive as a volume so it's accessible to the web container. To do that I created a new docker-compose file within the .ddev directory (see also the docs: https://ddev.readthedocs.io/en/stable/users/extend/custom-compose-files/#docker-compose42yaml-examples).
Additionally, the permissions on the network drive were incorrect.
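For illustration, a sketch of such a compose file, saved as e.g. .ddev/docker-compose.mount.yaml (the host path and mount point are made-up examples; check the linked docs for the exact layout your DDEV version expects):
services:
  web:
    volumes:
      # bind-mount the network drive into the web container so Apache can serve it
      - /mnt/share/uploads:/var/www/html/wp-content/uploads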

Gcloud auth login saves to legacy_credentials folder

I have no idea why, but when I run gcloud auth login (I have also tried the beta and application-default variants), none of them create the file ~/.config/gcloud/credentials; instead I find ~/.config/gcloud/legacy_credentials.
The issue is that the library I am using does not want legacy_credentials, and renaming the folder did not work.
Here are my settings:
Google Cloud SDK 183.0.0
alpha 2017.12.08
beta 2017.12.08
bq 2.0.27
core 2017.12.08
gsutil 4.28
Also, I am using Ubuntu 16.04.3 LTS on DigitalOcean. I will be glad to supply any other information I can think of.
The credentials in the legacy folder (/root/.config/gcloud/legacy_credentials/matt@mindbrainhive.org) are:
.boto
adc.json
gcloud no longer uses ~/.config/gcloud/credentials; it now stores credentials in an SQLite database at ~/.config/gcloud/credentials.db.
These credential files are considered internal to gcloud and can change at any time, so you should not use them directly. What you likely want is
gcloud auth application-default login
instead of gcloud auth login. The former creates the key file ~/.config/gcloud/application_default_credentials.json for the logged-in user account.
That said, depending on what you are trying to do, you probably want to use a service account instead of a user account. You can create a key file via
gcloud iam service-accounts keys create
See the documentation for more info, or use the Google Cloud Platform Console to create the key file.
Once you obtain the JSON key file, you can use it in your application as application default credentials; see https://developers.google.com/identity/protocols/application-default-credentials#howtheywork
You can also use the key in gcloud itself via the gcloud auth activate-service-account command.
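Putting those pieces together, a minimal sketch (the service account and project names are hypothetical):
# create a key file for an existing service account
gcloud iam service-accounts keys create key.json \
    --iam-account my-sa@my-project.iam.gserviceaccount.com
# let client libraries pick the key up as application default credentials
export GOOGLE_APPLICATION_CREDENTIALS=/path/to/key.json
# or activate it for gcloud itself
gcloud auth activate-service-account --key-file key.json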

gsutil AccessDenied Exception - Login Required in both API Explorer and Service Account

I am using gsutil with a service account to download a file from a GCS bucket. I ran it an hour ago and it worked without a problem; then I ran the same code again and got an AccessDeniedException: 401 Login Required.
To get more information, I ran the command with the -D (debug) flag:
gsutil -D cp gs://mybucket/file localpath
In the debug output, I look at:
Command being run: /path/to/gsutil
config_file_list: /path/to/boto/config
Case 1: Running gsutil under a service account or as another user
I configured a service account using the executable at /path/to/gsutil, not gcloud auth or any other gsutil executable on the machine, using:
/path/to/gsutil config -e
This created a .boto config file in my home directory, $HOME/.boto, which I then moved to another location for the service to reference.
Since I'm running the service as another user, I need to point it at the newly-created .boto file, so I set the BOTO_CONFIG environment variable:
BOTO_CONFIG=/path/to/moved/.boto
export BOTO_CONFIG
I can confirm that I am referencing the correct config file by looking at the config_file_list variable in the gsutil -D command's output.
To set up the service account, I followed:
https://console.cloud.google.com/permissions/serviceaccounts
The key file from the service account set-up process was downloaded, and the path to it is included during the gsutil config -e step.
However, the response to running the gsutil command from the service account is AccessDeniedException: 401 Login Required.
Case 2: API Explorer
On further exploration, I used the web-based API explorer storage.objects.get at:
https://developers.google.com/apis-explorer/#p/storage/v1/storage.objects.get
to get the object's "mediaLink". When I click the mediaLink, I still receive a "Login Required" response, despite being logged into my Google account in the browser.
Why am I receiving the above "Login Required" responses in both cases, when I have set up the service account as instructed in Case 1, and am logged into the API explorer in Case 2?
I was able to solve this problem by looking at the read permissions on the .boto file. The file created by
gsutil config -e
only had read permission for the current user. Since the file was being read by a service running as a different user, that service couldn't open it, which yielded the 401 Login Required error. I fixed it by adding read permission for the service's group.
In the least sophisticated case, you could fix it by giving every user read permission with
chmod a+r .boto
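A tighter alternative is to grant read access only to the group the service runs as (the group name svcgroup and the path are hypothetical):
# hand the file to the service's group and allow group reads only
chgrp svcgroup /path/to/moved/.boto
chmod 640 /path/to/moved/.boto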

How to authenticate google APIs with different service account credentials?

As anyone who has ever had the misfortune of having to interact with the panoply of Google CLI binaries programmatically will have realised, authenticating with the likes of gcloud, gsutil, bq, etc. is far from intuitive or trivial, especially when you need to work across different projects.
I am running various cron jobs that interact with Google Cloud Storage and BigQuery for different projects. Since the cron jobs may overlap, renaming config files is clearly not an option, and nor would any sane person take that approach.
There must surely be some sort of method of passing a path to a service account's key pair file to these CLI binaries, but bq help yields nothing.
The Google documentation, while verbose, is largely useless, taking one on a tour of how OAuth2 works, etc., instead of explaining what must surely be a very common requirement: how to actually authenticate a service account without running commands that modify central config files.
Can any enlightened being tell me whether the engineers at Google decided to add a feature as simple as passing the path to a service account's key pair file to the likes of gsutil and bq? Or perhaps I could simply export some variable so they know which key pair file to use for authentication?
I realise these simplistic approaches may be an insult to the intelligence, but we aren't concerning ourselves with harnessing nuclear fusion, so we needn't even consider what Amazon got so right with their approach to authentication in comparison...
Configuration in the Cloud SDK is global for the user, but you can specify which aspects of that config to use on a per-command basis. To accomplish what you are trying to do, you can run:
gcloud auth activate-service-account foo@developer.gserviceaccount.com --key-file ...
gcloud auth activate-service-account bar@developer.gserviceaccount.com --key-file ...
At this point, both sets of credentials are in your global credentials store.
Now you can run:
gcloud --account foo@developer.gserviceaccount.com some-command
gcloud --account bar@developer.gserviceaccount.com some-command
in parallel, and each will use the given account without interfering.
A larger extension of this is 'configurations', which do the same thing but for your entire set of config (including settings like account and project).
# Create first configuration
gcloud config configurations create myconfig
gcloud config configurations activate myconfig
gcloud config set account foo@developer.gserviceaccount.com
gcloud config set project foo
# Create second configuration
gcloud config configurations create anotherconfig
gcloud config configurations activate anotherconfig
gcloud config set account bar@developer.gserviceaccount.com
gcloud config set project bar
And you can say which configuration to use on a per-command basis:
gcloud --configuration myconfig some-command
gcloud --configuration anotherconfig some-command
You can read more about configurations by running: gcloud topic configurations
All properties have corresponding environment variables that allow you to set that particular property for a single command invocation or for a terminal session. They take the form:
CLOUDSDK_<SECTION>_<PROPERTY>
for example: CLOUDSDK_CORE_ACCOUNT
You can see all the available config settings by running: gcloud help config
The equivalent of the --configuration flag is: CLOUDSDK_ACTIVE_CONFIG_NAME
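For example (the account and configuration names are placeholders):
# pin the account for a single invocation
CLOUDSDK_CORE_ACCOUNT=foo@developer.gserviceaccount.com gcloud some-command
# select a named configuration for a single invocation
CLOUDSDK_ACTIVE_CONFIG_NAME=myconfig gcloud some-command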
If you really want complete isolation, you can also change the Cloud SDK's config directory by setting CLOUDSDK_CONFIG to a directory of your choosing. Note that if you do this, the config is completely separate, including the credential store, all configurations, logs, etc.
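A minimal sketch of that isolation for one of the cron jobs described above (the directory, key file, and project names are made up):
# give this job its own private SDK state, credential store included
export CLOUDSDK_CONFIG=/var/lib/jobs/jobA/gcloud
gcloud auth activate-service-account --key-file /var/lib/jobs/jobA/key.json
gcloud --project jobA-project some-command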

How do I backup to google drive using duplicity?

I have been trying to get duplicity to back up to Google Drive, but it looks like it is still using the old client API.
I found some threads saying that the new API should be supported, but with little detail on how to get it to work.
I got as far as compiling and using duplicity 0.7.03, but then I got this error:
BackendException: GOOGLE_DRIVE_ACCOUNT_KEY environment variable not set. Please read the manpage to fix.
Has anyone set up duplicity to work with Google Drive and know how to do this?
Now that Google has begun forcing clients to use OAuth, using Google Drive as a backup target has actually gotten very confusing. I found an excellent blog post that walked me through it. The salient steps are:
Install PyDrive
PyDrive is the library that lets Duplicity use OAuth to access Drive.
pip install pydrive
should be sufficient, or you can go through your distribution's package manager.
Create an API token
Navigate to the Google Developer Console and log in. Create a project and select it from the drop-down on the top toolbar.
Now select the "Enable APIs and Services" button in the Dashboard, which should already be pulled up, but if not, is in the hamburger menu on the left.
Search for and enable the Drive API. After it's enabled, you can actually create the token. Choose "Credentials" from the left navigation bar, and click "Add Credential" > "OAuth 2.0 Client ID." Set the application type to "Other."
After the credential is created, click on it to view the details. Your Client ID and secret will be displayed. Take note of them.
Configure Duplicity
Whew. Time to actually configure the program. Paste the following into a file, replacing your client ID and secret with the ones from the Console above.
client_config_backend: settings
client_config:
  client_id: <your client ID>.apps.googleusercontent.com
  client_secret: <your client secret>
save_credentials: True
save_credentials_backend: file
save_credentials_file: gdrive.cache
get_refresh_token: True
(I'm using the excellent Duply frontend, so I saved this as ~/.duply/<server name>/gdrive).
Duplicity needs to be given the name of this file in the GOOGLE_DRIVE_SETTINGS environment variable. So you could invoke duplicity like this:
GOOGLE_DRIVE_SETTINGS=gdrive duplicity <...>
Or if you're using Duply, you can export this variable in the Duply configuration file:
export GOOGLE_DRIVE_SETTINGS=gdrive
Running Duplicity for the first time will begin the OAuth process; you'll be given a link to visit, which will ask permission for the app you created earlier in the Console to access your Drive account. Accept, and it will give you another authentication token to paste back into the terminal. The authorization info will be saved in a .cache file alongside the gdrive settings file.
At this point you should be good to go, and Duplicity should behave normally. Good luck!