I am trying to connect to BigQuery via a Google service account. I have a service account for my project, and when I try to run a query I get a timeout error.
Error: connection to oauth2.googleapis.com timed out
Code snippet:
from google.cloud import bigquery  # note: lowercase module name, not Bigquery

client = bigquery.Client()
client.query("")  # query text elided
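If it helps isolate the problem, here is a minimal sketch that passes the service account explicitly (assuming a downloaded JSON key file named key.json); note that the token exchange still has to reach oauth2.googleapis.com, so a proxy or firewall blocking that host will still time out:

from google.cloud import bigquery
from google.oauth2 import service_account

# Load the service account key explicitly instead of relying on ADC discovery
credentials = service_account.Credentials.from_service_account_file("key.json")
client = bigquery.Client(credentials=credentials, project=credentials.project_id)

# Any query first exchanges the key for a token at oauth2.googleapis.com
rows = client.query("SELECT 1").result()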
I'm getting a server-unavailable error whenever I try to authenticate a user on Colab.
My code is simply
from google.colab import auth
auth.authenticate_user()
And I'm getting a
WARNING:google.auth.compute_engine._metadata:
Compute Engine Metadata server unavailable on attempt 1 of 3.
Reason: [Errno 115] Operation now in progress
I wonder if Google has changed anything.
We are trying to connect to our Azure SQL DB and insert data into it using SQLAlchemy and pandas.DataFrame.to_sql, authenticating with our service principal and token. The problem is that we are able to connect to one database perfectly, but on another database we are getting the following error:
sqlalchemy.exc.InterfaceError: (pyodbc.InterfaceError) ('28000', "[28000] [Microsoft][ODBC Driver 17 for SQL Server][SQL Server]Login failed for user '<token-identified principal>'. (18456) (SQLDriverConnect); [28000] [Microsoft][ODBC Driver 17 for SQL Server][SQL Server]Login failed for user '<token-identified principal>'. (18456)") (Background on this error at: https://sqlalche.me/e/14/rvf5)
We are trying to log in via the service principal using a client ID, client secret, and tenant ID. All the values have been verified and work fine for connecting to the database from Azure Databricks.
Any help would be highly appreciated.
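For context, a token-based connection through pyodbc/SQLAlchemy typically looks something like the sketch below; the server and database names are placeholders, and SQL_COPT_SS_ACCESS_TOKEN is the ODBC pre-connect attribute for passing an access token:

import struct

import sqlalchemy as sa
from azure.identity import ClientSecretCredential

# Acquire an AAD access token for Azure SQL with the service principal
credential = ClientSecretCredential("tenant-id", "client-id", "client-secret")
token = credential.get_token("https://database.windows.net/.default").token

# pyodbc expects the token as length-prefixed UTF-16-LE bytes
token_bytes = token.encode("utf-16-le")
token_struct = struct.pack(f"<I{len(token_bytes)}s", len(token_bytes), token_bytes)

SQL_COPT_SS_ACCESS_TOKEN = 1256  # ODBC driver connection attribute

engine = sa.create_engine(
    "mssql+pyodbc://@myserver.database.windows.net/mydb"
    "?driver=ODBC+Driver+17+for+SQL+Server",
    connect_args={"attrs_before": {SQL_COPT_SS_ACCESS_TOKEN: token_struct}},
)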
The problem is that we are able to connect to one database perfectly, but on another database we are getting the Login failed for user '<token-identified principal>' error.
According to the documentation, if you are using an external identity provider, you need to create the user and map the required permissions to the Azure AD identity (and then grant it database roles such as db_datareader/db_datawriter as needed):
CREATE USER <Azure_AD_principal_name> FROM EXTERNAL PROVIDER;
CREATE USER [bob@contoso.com] FROM EXTERNAL PROVIDER;
CREATE USER [alice@fabrikam.onmicrosoft.com] FROM EXTERNAL PROVIDER;
And after that, as suggested by Jason Pan, make sure the Identity status is turned On in the portal.
Updated answer:
According to Radiatelabs, the issue was fixed after copying the whole database from DEV to UAT and creating the UAT user in the database.
References: Login Failed for user ``, Login failed for user token-identified principal when web app is in an AAD Group, and Login failed for user 'token-identified principal' but works in Data Studio.
I am getting the following error while running a Dataflow pipeline:
Error reporting inventory checksum: code: "Unauthenticated", message: "Request is missing required authentication credential.
Expected OAuth 2 access token, login cookie or other valid authentication credential.
We have created a service account dataflow@12345678.iam.gserviceaccount.com with the following roles:
BigQuery Data Editor
Cloud KMS CryptoKey Decrypter
Dataflow Worker
Logs Writer
Monitoring Metric Writer
Pub/Sub Subscriber
Pub/Sub Viewer
Storage Object Creator
And in our Python code we are using import google.auth.
Any idea what I am missing here?
I do not believe I need to create a key for the SA; however, I am not sure whether an "OAuth 2 access token" needs to be created for the SA. If yes, how?
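For what it's worth, you normally don't mint the token yourself; a minimal sketch of how google.auth obtains one from Application Default Credentials (the cloud-platform scope below is an assumption):

import google.auth
import google.auth.transport.requests

# ADC: on Dataflow workers this resolves to the attached service account
credentials, project = google.auth.default(
    scopes=["https://www.googleapis.com/auth/cloud-platform"]
)

# Refreshing fills in credentials.token with a short-lived OAuth 2 access token
credentials.refresh(google.auth.transport.requests.Request())
print(credentials.token)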
This was the issue in my case: https://cloud.google.com/dataflow/docs/guides/common-errors#lookup-policies
If you are trying to access a service over HTTP with a custom request (not using a client library), you can obtain an OAuth2 token for that service account from the metadata server of the worker VM. See this example for Cloud Run; you can use the same code snippet in Dataflow to get a token and attach it to your custom HTTP request:
https://cloud.google.com/run/docs/authenticating/service-to-service#acquire-token
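A rough sketch of that pattern (assuming the requests library is available on the worker; the endpoint path is the metadata server's default service-account token URL):

import requests

# The worker VM's metadata server issues tokens for the attached service account
METADATA_URL = (
    "http://metadata.google.internal/computeMetadata/v1/"
    "instance/service-accounts/default/token"
)

resp = requests.get(METADATA_URL, headers={"Metadata-Flavor": "Google"})
resp.raise_for_status()
access_token = resp.json()["access_token"]

# Attach it as a Bearer token to the custom HTTP request
headers = {"Authorization": f"Bearer {access_token}"}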
For the past few days I have been struggling with the following problem on Google Colab.
Upon opening the produced link and entering my credentials, the usual text to copy is not there.
Instead I get a different window. Afterwards, the connection to Google Cloud Storage shows the project number 522309567947, which is not my project, and I do not understand why it's appearing there.
After entering my project ID I am able to connect to my Google Cloud Storage account, but the adc.json file with the client_id, client_secret, and refresh token is not produced. I need this file to connect TensorFlow to my Google Cloud Storage.
The following code will create an error because the adc.json does not exist.
Is there any solution to my problem? Or any workaround to get the adc.json file?
The following code should fix the issue you see:
!gcloud auth application-default login --no-launch-browser
The real hint is the project number 522309567947, which is probably the project number of the project Colab is hosted in. This means it's not an authentication issue but a client project ID or quota project ID configuration issue.
Setting your quota project would probably resolve the issue:
!gcloud auth application-default set-quota-project your-project-id
The solution for me was to explicitly set the quota project id when creating the client:
from google.cloud import bigquery_datatransfer
from google.api_core.client_options import ClientOptions

# Pin the quota project explicitly so requests are not attributed
# to the default (wrong) project.
options = ClientOptions(quota_project_id="your-project-id")
transfer_client = bigquery_datatransfer.DataTransferServiceClient(client_options=options)

parent = transfer_client.common_location_path(project="your-project-id", location="europe")
configs = transfer_client.list_transfer_configs(parent=parent)

print("Got the following configs:")
for config in configs:
    print(f"\tID: {config.name}, Schedule: {config.schedule}")
In attempting to write a Python script to access GCS using service-account-based authorization, I have come up with the following. Note that 'key' is the contents of my p12 file.
I am attempting to just read the list of buckets on my account. I have successfully created one bucket using the web interface to GCS, and can see that with gsutil.
When I execute the code below I get a 403 error. At first I thought I was not authorized correctly, but the same request works correctly from this very useful web page (which uses web-based authorization): https://developers.google.com/apis-explorer/#p/storage/v1beta1/storage.buckets.list?projectId=&_h=2&
When I look at the headers and query string and compare them to the headers and query of the website-generated request, I see that there is no authorization header and no key= tag in the query string. I had supposed the credential authorization would take care of this for me.
What am I doing wrong?
code:
credentials = SignedJwtAssertionCredentials(
    'xxx-my-long-email-from-the-console@developer.gserviceaccount.com',
    key,
    scope='https://www.googleapis.com/auth/devstorage.full_control')
http = httplib2.Http()
http = credentials.authorize(http)
service = build("storage", "v1beta1", http=http)
# Build the request
request = service.buckets().list(projectId="159910083329")
# Diagnostic
pprint.pprint(request.headers)
pprint.pprint(request.to_json())
# Do it!
response = request.execute()
When I try to execute I get the 403.
I got this working; however, the code I used is not fundamentally different from the snippet you posted. Just in case you'd like to diff my version with yours, attached below is a complete copy of a Python program that worked for me. I initially got a 403, just like you, which was due to inheriting your project id :). After updating that value to use my project ID, I got a correct bucket listing. Two things to check:
Make sure the project id you are using is correct and has the "Google Cloud Storage JSON API" enabled on the Google Developer Console "Services" tab (it's a different service from the other Google Cloud Storage API).
Make sure you are loading the service account's private key exactly as it came from the Developers Console. I would recommend reading it into memory from the file you downloaded, as I've done here, rather than trying to copy it into a string literal in your code.
#!/usr/bin/env python

import pprint

import httplib2
from apiclient.discovery import build
from oauth2client.client import SignedJwtAssertionCredentials

# Read the private key exactly as downloaded ('rb' because .p12 is binary)
f = open('key.p12', 'rb')
key = f.read()
f.close()

credentials = SignedJwtAssertionCredentials(
    'REDACTED',
    key,
    scope='https://www.googleapis.com/auth/devstorage.full_control')
http = httplib2.Http()
http = credentials.authorize(http)
service = build("storage", "v1beta1", http=http)

# Build the request
request = service.buckets().list(projectId="REDACTED")

# Diagnostic
pprint.pprint(request.headers)
pprint.pprint(request.to_json())

# Do it!
response = request.execute()
pprint.pprint(response)
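A footnote for readers today: SignedJwtAssertionCredentials and the v1beta1 storage API are long deprecated. A rough modern equivalent, assuming a JSON key file rather than the old .p12 format, might look like:

from google.cloud import storage
from google.oauth2 import service_account

# JSON key files have long since replaced .p12 keys in the console
credentials = service_account.Credentials.from_service_account_file("key.json")
client = storage.Client(project="your-project-id", credentials=credentials)

for bucket in client.list_buckets():
    print(bucket.name)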