I am following this Google Cloud tutorial, running the code in a Jupyter notebook. Step 5 works fine, but when I run step 7 I get this error:
DefaultCredentialsError Traceback (most recent call last)
<ipython-input-8-a1696640ccb4> in <module>
----> 1 get_ipython().run_cell_magic('bigquery', '', 'SELECT\n source_year AS year,\n COUNT(is_male) AS birth_count\nFROM `bigquery-public-data.samples.natality`\nGROUP BY year\nORDER BY year DESC\nLIMIT 15\n')
/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/IPython/core/interactiveshell.py in run_cell_magic(self, magic_name, line, cell)
2379 with self.builtin_trap:
2380 args = (magic_arg_s, cell)
-> 2381 result = fn(*args, **kwargs)
2382 return result
2383
The DefaultCredentialsError means that the Application Default Credentials (ADC) that the client library is trying to use for authentication are not properly set.
You may have missed the step in the section Setting up a local Jupyter environment that links to Getting Started with Authentication.
In short, you need to set your default credentials by exporting the following environment variable in the shell where you launch Jupyter:
export GOOGLE_APPLICATION_CREDENTIALS="path/to/key.json"
Where your path points to a valid service account key file.
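If you are working entirely inside the notebook, you can instead set the variable from Python before any client is created. A minimal sketch, assuming the placeholder path below is replaced with your actual key file:
import os

# Point Application Default Credentials at your service account key file
# (the path here is a placeholder).
os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = "path/to/key.json"

# Clients created after this point pick up the credentials,
# including the %%bigquery cell magic used in the tutorial.
from google.cloud import bigquery
client = bigquery.Client()
Note that a bare export in a notebook cell runs in a throwaway subshell and will not affect the Python kernel, which is why setting os.environ is the more reliable in-notebook option.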
Alternatively, if you have the gcloud SDK installed locally, you can set default credentials by running:
gcloud auth application-default login
I have a TypeScript/Node-based application where the following line of code is throwing an error:
const res = await s3.getObject(obj).promise();
The error I'm getting in terminal output is:
❌ Missing credentials in config, if using AWS_CONFIG_FILE, set AWS_SDK_LOAD_CONFIG=1
CredentialsError: Missing credentials in config, if using AWS_CONFIG_FILE, set AWS_SDK_LOAD_CONFIG=1
However, I do actually have a credentials file in my .aws directory with values for aws_access_key_id and aws_secret_access_key. I have also exported the values for these with the variables AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY. I have also tried this with and without running export AWS_SDK_LOAD_CONFIG=1 but to no avail (same error message). Would anyone be able to provide any possible causes/suggestions for further troubleshooting?
Install dotenv: npm i dotenv
Add a .env file containing your AWS_ACCESS_KEY_ID and related credentials.
Then in your index.js or equivalent file add require("dotenv").config();
Then update the config of your AWS instance:
const AWS = require("aws-sdk");
require("dotenv").config();

AWS.config.update({
  region: "eu-west-2",
  maxRetries: 3,
  httpOptions: { timeout: 30000, connectTimeout: 5000 },
  accessKeyId: process.env.AWS_ACCESS_KEY_ID,
  secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY,
});

const s3 = new AWS.S3();
Try not setting AWS_SDK_LOAD_CONFIG to anything (unset it). Unset all other AWS variables. On macOS/Linux, you can run export | grep AWS_ to find others you might have set.
Next, do you have AWS connectivity from the command line? Install the AWS CLI v2 if you don't have it yet, and run aws sts get-caller-identity from a terminal window. Don't bother trying to run node until you get this working. You can also try aws configure list.
Read through all the sections of Configuring the AWS CLI, paying particular attention to how to use the credentials and config files at $HOME/.aws/credentials and $HOME/.aws/config. Are you using the default profile or a named profile?
I prefer to use named profiles because I use more than one, so that may not be needed for you. I have always found success using the AWS_PROFILE environment variable:
export AWS_PROFILE=your_profile_name # macOS/linux
setx AWS_PROFILE your_profile_name # Windows
$Env:AWS_PROFILE="your_profile_name" # PowerShell
This works for me both with an Okta/gimme-aws-creds scenario, as well as an Amazon SSO scenario. With the Okta scenario, just the AWS secret keys go into $HOME/.aws/credentials, and further configuration such as default region or output format go in $HOME/.aws/config (this separation is so that tools can completely rewrite the credentials file without touching the config). With the Amazon SSO scenario, all the settings go in the config.
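For reference, a minimal sketch of that layout with a named profile (the profile name and values are placeholders):
In $HOME/.aws/credentials:
[your_profile_name]
aws_access_key_id = AKIA...your-key-id
aws_secret_access_key = your-secret-key

In $HOME/.aws/config:
[profile your_profile_name]
region = eu-west-2
output = json
Note the differing section headers: the credentials file uses [your_profile_name] while the config file uses [profile your_profile_name]. With AWS_PROFILE exported as above, the CLI resolves this profile; SDK behavior varies by language and version.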
I'm installing an application on my VPS for testing. The updated version of Bitcoin Core doesn't create wallets automatically as it used to, so I'm running into some problems I haven't dealt with before (I'm not very experienced with Bitcoin Core in general). If anyone could help me out, I would appreciate it.
Here are some logs:
After running a script that checks that everything is set up correctly, including the BTC wallet, the following error is displayed:
> File"/usr/local/lib/python3.7/dist-packages/bitcoin/rpc.py", line 239, in _call 'message': err.get('message', 'error message not specified')})
> bitcoin.rpc.JSONRPCError: ['code': -18, 'message': 'No wallet is loaded. Load a wallet using loadwallet or create a new one with createwallet.
(Note: A default is no longer automatically created)'}
And this error occurs when I try to create a wallet:
bitcoin-cli createwallet user1
error: Could not locate RPC credentials. No authentication cookie could be found, and RPC password is not set. See -rpcpassword and -stdinrpcpass. Configuration file: (/root/.bitcoin/bitcoin.conf)
I tried to work around it with the following command:
bitcoin-cli rpcuser=user rpcpassword=password createwallet user1
I get the same error as above.
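For context on what the error is asking for: bitcoind accepts RPC calls authenticated either by the .cookie file it writes in its data directory (when the client runs as the same user as the daemon) or by credentials set in bitcoin.conf. A minimal sketch of the latter, with placeholder values:
# /root/.bitcoin/bitcoin.conf
rpcuser=user
rpcpassword=password
Also note that bitcoin-cli options take a leading dash (-rpcuser=..., -rpcpassword=...); without the dashes they would not be parsed as options.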
I want to SSH into a GCE VM instance using the google-api-client. I am able to start an instance using google-api-client with the following code:
from googleapiclient import discovery
from oauth2client.client import GoogleCredentials
credentials = GoogleCredentials.get_application_default()
service = discovery.build('compute', 'v1', credentials=credentials)
project = 'my_project'
zone = 'us-west2-a'
instance = 'my_instance'
request = service.instances().start(project=project, zone=zone, instance=instance)
response = request.execute()
The command-line equivalent of the above code is:
gcloud compute instances start my_instance
Similarly, to SSH into a GCE VM instance from the command line, one writes:
gcloud init && gcloud compute ssh my_instance --project my_project --verbosity=debug --zone=us-west2-a
I've already got the SSH keys set up and all that.
I want to know how to do the equivalent of the above command with the Google API Client or in Python.
There is no official REST API method to connect to a Compute Engine instance with SSH. But assuming you have the SSH keys configured as per the documentation, in theory, you could use a third-party tool such as Paramiko. Take a look at this post for more details.
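For example, a minimal Paramiko sketch (the external IP, username, and key path are placeholders; it assumes your public key is already on the instance, e.g. in project or instance metadata):
import paramiko

# Connect to the instance's external IP with an existing private key.
client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect(
    "EXTERNAL_IP_OF_INSTANCE",
    username="your_username",
    key_filename="/home/you/.ssh/google_compute_engine",
)

# Run a command and print its output.
stdin, stdout, stderr = client.exec_command("hostname")
print(stdout.read().decode())
client.close()
You can obtain the external IP programmatically from the instances().get() response (networkInterfaces[0].accessConfigs[0].natIP).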
I have already seen that there are some similar questions, but none of them actually provides a full answer.
Since I cannot comment in that thread, I am opening a new one.
How do I address Brandon's comment below?
"...
In order to use the Cloud Vision API with a non-public GCS object,
you'll need to send OAuth authentication information along with your
request for a user or service account which has permission to read the
GCS object."?
I have the JSON key file the system gave me when I created the service account, as described here.
I am trying to call the API from a Python script, but it is not clear how to use the key file.
I'd recommend using the Vision API Client Library for Python to perform the call. You can install it on your machine (ideally in a virtualenv) by running the following command:
pip install --upgrade google-cloud-vision
Next, you'll need to set the GOOGLE_APPLICATION_CREDENTIALS environment variable to the file path of the JSON file that contains your service account key. For example, on a Linux machine you'd do it like this:
export GOOGLE_APPLICATION_CREDENTIALS="/home/user/Downloads/service-account-file.json"
Finally, you'll just have to call the Vision API client method you want (for example, the label_detection method shown here), like so:
from google.cloud import vision
# The types module applies to google-cloud-vision versions before 2.0.
from google.cloud.vision import types

def detect_labels():
    """Detects labels in the file located in Google Cloud Storage."""
    client = vision.ImageAnnotatorClient()
    image = types.Image()
    image.source.image_uri = "gs://bucket_name/path_to_image_object"

    response = client.label_detection(image=image)
    labels = response.label_annotations

    print('Labels:')
    for label in labels:
        print(label.description)
By initializing the client with no parameters, the library automatically looks for the GOOGLE_APPLICATION_CREDENTIALS environment variable you've previously set and runs on behalf of that service account. If you granted it permission to access the file, the call will succeed.
I am running an Apache 2.4 server with PHP 5.6. I have a form called form1.html that sends its data to another PHP script that prints out the form's data.
I am trying to use Python with mechanize to automatically fill out that form. I need to do all of this for work automation.
So with this code:
import mechanize
br = mechanize.Browser()
br.set_handle_robots(False)
response = br.open("http://127.0.0.1/form1.html")
... it then gives this:
Traceback (most recent call last):
File "mechTest.py", line 21, in <module>
File "C:\Python27\lib\site-packages\mechanize\_mechanize.py", line 203, in open
return self._mech_open(url, data, timeout=timeout)
File "C:\Python27\lib\site-packages\mechanize\_mechanize.py", line 255, in _mech_open
raise response
mechanize._response.httperror_seek_wrapper: HTTP Error 503: Service Unavailable
I am stuck on this issue, and I need mechanize to work. Is there any way to fix this, such as fixing the browser header? Thanks.
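One thing worth trying first: mechanize sends a distinctive default User-Agent, and some server configurations reject unfamiliar agents. A minimal sketch of overriding the header (the User-Agent string is arbitrary; whether this resolves this particular 503 depends on the server configuration):
import mechanize

br = mechanize.Browser()
br.set_handle_robots(False)
# Replace mechanize's default User-Agent with a browser-like one.
br.addheaders = [("User-Agent",
                  "Mozilla/5.0 (Windows NT 10.0; Win64; x64)")]
response = br.open("http://127.0.0.1/form1.html")
print(response.code)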