Bigquery Not Accepting Stackdriver's Sink Writer Identity - google-bigquery

I've been following the documentation to export logs from Stackdriver to Bigquery. I've tried the following:
from google.cloud.logging.client import Client as lClient
from google.cloud.bigquery.client import Client as bqClient
from google.cloud.bigquery.dataset import AccessGrant
import os
os.environ['GOOGLE_APPLICATION_CREDENTIALS'] = 'path_to_json.json'
lc = lClient()
bqc = bqClient()
ds = bqc.dataset('working_dataset')
acc = ds.access_grants
acc.append(AccessGrant('WRITER', 'groupByEmail', 'cloud-logs@system.gserviceaccount.com')) #this is the service account that is shown in our Exports tab in GCP.
ds.access_grants = acc
ds.update()
But we get the error message:
NotFound: 404 Not found: Email cloud-logs@system.gserviceaccount.com (PUT https://www.googleapis.com/bigquery/v2/projects/[project-id]/datasets/[working-dataset])
Why won't our dataset be updated? The key being used is the one which already created the dataset itself.
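For what it's worth, later releases of google-cloud-bigquery replaced AccessGrant with AccessEntry and apply changes via client.update_dataset. Below is a minimal sketch of the same grant against that newer API, carrying over the dataset ID and service-account address from the question; whether groupByEmail is the right entity type for your sink's writer identity (check the exact address in the Exports tab) is not something this sketch settles.

from google.cloud import bigquery

client = bigquery.Client()
dataset = client.get_dataset('working_dataset')  # dataset ID from the question

# Append a WRITER grant for the sink's writer identity. The address below is
# the one from the question; the Exports tab shows the exact identity to use.
entries = list(dataset.access_entries)
entries.append(
    bigquery.AccessEntry(
        role='WRITER',
        entity_type='groupByEmail',
        entity_id='cloud-logs@system.gserviceaccount.com',
    )
)
dataset.access_entries = entries

# Send only the changed field back to BigQuery.
dataset = client.update_dataset(dataset, ['access_entries'])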

Related

Getting an error in a Python script when using QuickSight API calls to retrieve the value of user parameter selections

I am working on a Python script which will use QuickSight APIs to retrieve the user parameter selections, but I keep getting the below error:
parameters = response['Dashboard']['Parameters'] KeyError: 'Parameters'
If I try different code to retrieve the datasets in my QuickSight account, it works, but the Parameters code doesn't. I think I am missing some configuration.
#Code to retrieve the parameters from a QS dashboard (which fails):
import boto3

quicksight = boto3.client('quicksight')
response = quicksight.describe_dashboard(
    AwsAccountId='99999999999',
    DashboardId='zzz-zzzz-zzzz'
)
parameters = response['Dashboard']['Parameters']
for parameter in parameters:
    print(parameter['Name'], ':', parameter['Value'])
#Code to display the datasets in the QS account (which works):
import boto3
import json

account_id = '99999999999'
session = boto3.Session(profile_name='default')
qs_client = session.client('quicksight')

response = qs_client.list_data_sets(AwsAccountId=account_id, MaxResults=100)
results = response['DataSetSummaries']
while "NextToken" in response.keys():
    response = qs_client.list_data_sets(AwsAccountId=account_id, MaxResults=100, NextToken=response["NextToken"])
    results.extend(response["DataSetSummaries"])

for i in results:
    x = i['DataSetId']
    try:
        response = qs_client.describe_data_set(AwsAccountId=account_id, DataSetId=x)
        print("succeeded loading: {} for data set {} ".format(x, response['DataSet']['Name']))
    except:
        print("failed loading: {} ".format(x))

Hybris. Export data. Passing incorrect impexscript to Exporter

I'm unable to export data via an ImpEx script due to this error:
de.hybris.platform.impex.jalo.ImpExException: line 3 at main script: No valid line type found for {0=, 1=user_to_test_export}
This is how I'm exporting my data using ImpEx:
String impexScript = String.format("INSERT Customer;uid[unique=true];\n" +
";%s", customer.getUid());
ImpExMedia impExMedia = ImpExManager.getInstance().createImpExMedia("test_importexport_exportscript");
impExMedia.setData(new ByteArrayInputStream(impexScript.getBytes()), "CustomerData", "text/csv");
ExportConfiguration config = new ExportConfiguration(impExMedia, ImpExManager.getExportOnlyMode());
Exporter exporter = new Exporter(config);
exporter.export();
A User/Customer can easily be exported via Groovy.
import de.hybris.platform.impex.jalo.*
import de.hybris.platform.impex.jalo.exp.ExportConfiguration
import de.hybris.platform.impex.jalo.exp.Exporter
import de.hybris.platform.impex.jalo.exp.Export
String impexScript = String.format("INSERT Customer;uid[unique=true]");
ImpExMedia impExMedia = ImpExManager.getInstance().createImpExMedia("test_importexport_exportscript");
impExMedia.setData(new ByteArrayInputStream(impexScript.getBytes()), "CustomerData", "text/csv");
ExportConfiguration config = new ExportConfiguration(impExMedia, ImpExManager.getExportOnlyMode());
Exporter exporter = new Exporter(config);
Export export = exporter.export();
println(export.getExportedData())
Note: run the Groovy script in commit mode.
Sample Output:
data_export_1676281713548(data_export_1676281713548(8798244438046))
With this PK (8798244438046), the ImpExMedia can easily be found in the Backoffice.

Google people API returning empty / no results in Python

I'm trying to read contacts from my personal Gmail account, and the instructions provided by Google for the People API return an empty list. I'm not sure why. I've tried another solution from a few years ago, but that doesn't seem to work. I've pasted my code below. Any help troubleshooting this is appreciated!
import os.path
from googleapiclient.discovery import build
from google_auth_oauthlib.flow import InstalledAppFlow
from google.auth.transport.requests import Request
from google.oauth2.credentials import Credentials
# If modifying these scopes, delete the file token.json.
SCOPES = ['https://www.googleapis.com/auth/contacts.readonly']
from google.oauth2 import service_account
SERVICE_ACCOUNT_FILE = '<path name hidden>.json'
credentials = service_account.Credentials.from_service_account_file(
    SERVICE_ACCOUNT_FILE, scopes=SCOPES)

def main():
    # Shows basic usage of the People API.
    # Prints the name of the first 10 connections.
    creds = None
    service = build('people', 'v1', credentials=credentials)

    # Call the People API
    print('List 10 connection names')
    results = service.people().connections().list(
        resourceName='people/me',
        pageSize=10,
        personFields='names,emailAddresses').execute()
    connections = results.get('connections', [])

    request = service.people().searchContacts(pageSize=10, query="A", readMask="names")
    results = service.people().connections().list(
        resourceName='people/me',
        personFields='names,emailAddresses',
        fields='connections,totalItems,nextSyncToken').execute()
    for i in results:
        print('result', i)

    for person in connections:
        names = person.get('names', [])
        if names:
            name = names[0].get('displayName')
            print(name)

if __name__ == '__main__':
    main()
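One thing stands out: the service is built with service-account credentials, and a service account has no personal Gmail contacts of its own, so people/me returns nothing for it. Below is a hedged sketch of the installed-app OAuth flow that the unused imports already hint at; token.json and credentials.json are assumed filenames (the latter being an OAuth client file downloaded from the Cloud console), not anything from the original post.

import os.path

from googleapiclient.discovery import build
from google_auth_oauthlib.flow import InstalledAppFlow
from google.auth.transport.requests import Request
from google.oauth2.credentials import Credentials

SCOPES = ['https://www.googleapis.com/auth/contacts.readonly']


def get_user_credentials():
    # token.json caches the user's tokens between runs; credentials.json is the
    # OAuth client file from the Cloud console (both filenames are assumptions).
    creds = None
    if os.path.exists('token.json'):
        creds = Credentials.from_authorized_user_file('token.json', SCOPES)
    if not creds or not creds.valid:
        if creds and creds.expired and creds.refresh_token:
            creds.refresh(Request())
        else:
            flow = InstalledAppFlow.from_client_secrets_file('credentials.json', SCOPES)
            creds = flow.run_local_server(port=0)
        with open('token.json', 'w') as token:
            token.write(creds.to_json())
    return creds


def main():
    # Build the service with the signed-in user's credentials, then list that
    # user's own connections.
    service = build('people', 'v1', credentials=get_user_credentials())
    results = service.people().connections().list(
        resourceName='people/me',
        pageSize=10,
        personFields='names,emailAddresses').execute()
    for person in results.get('connections', []):
        names = person.get('names', [])
        if names:
            print(names[0].get('displayName'))


if __name__ == '__main__':
    main()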

boto3 default session Profile - Access denied

I need to put a file on S3 using a library based on boto3 (great_expectations), and I only have write permissions under a specific profile.
Setting AWS_PROFILE seems to set the default session's profile, but it does not help. To replicate, I ran the code myself and got the same result (AccessDenied):
import boto3
import os
import json
s3 = boto3.resource("s3")
bucket = 'my-bucket'
key = 'path/to/the/file.json'
content_encoding="utf-8"
content_type="application/json"
value = json.dumps({'test':'test indeed'})
os.environ['AWS_PROFILE']
>>> mycorrectprofile
boto3.DEFAULT_SESSION.profile_name
>>> mycorrectprofile
s3_object = s3.Object(bucket, key)
s3_object.put(
    Body=value.encode(content_encoding),
    ContentEncoding=content_encoding,
    ContentType=content_type,
)
That results in AccessDenied.
Now, right after that, I do:
my_session = boto3.session.Session(profile_name=os.getenv('AWS_PROFILE'))
ss3 = my_session.resource('s3')
r2_s3 = ss3.Object(bucket, key)
That runs smoothly.
What is happening here, and how can I resolve this behavior?
boto3.__version__ = '1.15.0'
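As a hedged workaround rather than a root-cause diagnosis: pinning the profile on the default session itself, before any library code creates clients or resources from it, makes libraries that rely on boto3's default session (such as great_expectations) use the same profile that the explicit Session used. A minimal sketch, assuming AWS_PROFILE is already set as in the question:

import os

import boto3

# Rebuild boto3's default session with the intended profile *before* any
# client/resource is created from it; code that later calls boto3.resource()
# or boto3.client() without an explicit session will inherit this profile.
boto3.setup_default_session(profile_name=os.environ['AWS_PROFILE'])

s3 = boto3.resource('s3')
print(boto3.DEFAULT_SESSION.profile_name)  # should report the expected profile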

Logging the errors to azure blob

I am trying to log errors to an Azure blob, but it's not creating any table in the storage account. I have gone through many docs and also searched for answers on Stack Overflow. Please help me with this.
Thanks
Below is the code:
def log():
    import logging
    import sys
    from tempfile import mkdtemp
    from azure_storage_logging.handlers import BlobStorageRotatingFileHandler, TableStorageHandler

    mystorageaccountname='***'
    mystorageaccountkey='***'
    _LOGFILE_TMPDIR = mkdtemp()

    logger = logging.getLogger('service_logger')
    logger.setLevel(logging.DEBUG)
    log_formater = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(process)d - %(message)s')

    azure_blob_handler = TableStorageHandler(account_name=mystorageaccountname,
                                             account_key=mystorageaccountkey,
                                             protocol='https',
                                             table='logtable',
                                             batchsize=100,
                                             extra_properties=None,
                                             partition_key_formatter=None,
                                             row_key_formatter=None,
                                             is_emulated=False)
    logger.addHandler(azure_blob_handler)
    logger.warning('warning message')
According to the code you provided, you are using TableStorageHandler to store your logs. It writes logs to Azure Table storage, not Azure Blob storage, so please look for your logs in the Azure table.
If you want to store your logs in an Azure blob instead, please refer to the following code:
import logging
import sys
from azure_storage_logging.handlers import BlobStorageRotatingFileHandler
mystorageaccountname='***'
mystorageaccountkey='***'
logger = logging.getLogger('service_logger')
logger.setLevel(logging.DEBUG)
log_formater = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(process)d - %(message)s')
azure_blob_handler = BlobStorageRotatingFileHandler(filename='service.log',
                                                    account_name=mystorageaccountname,
                                                    account_key=mystorageaccountkey,
                                                    maxBytes=5,
                                                    container='service-log')
azure_blob_handler.setLevel(logging.INFO)
azure_blob_handler.setFormatter(log_formater)
logger.addHandler(azure_blob_handler)
logger.warning('warning message')
For more details, please refer to the azure-storage-logging documentation.
Update
When we use BlobStorageRotatingFileHandler, the log is not uploaded until its content reaches maxBytes.
My test code
import logging
import sys
from azure_storage_logging.handlers import BlobStorageRotatingFileHandler
mystorageaccountname='blobstorage0516'
mystorageaccountkey=''
logger = logging.getLogger('service_logger')
log_formater = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(process)d - %(message)s')
azure_blob_handler = BlobStorageRotatingFileHandler(filename='service.log',
                                                    account_name=mystorageaccountname,
                                                    account_key=mystorageaccountkey,
                                                    maxBytes=5,
                                                    container='service-log')
azure_blob_handler.setLevel(logging.INFO)
azure_blob_handler.setFormatter(log_formater)
logger.addHandler(azure_blob_handler)
logger.warning('warning message')
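To make the update above concrete, here is a hedged variation of the test code: with a tiny maxBytes every record pushes the file past the threshold, so the handler rotates (and therefore uploads) it quickly, whereas a larger maxBytes lets records accumulate locally until the threshold is crossed. The account name and key are placeholders, as in the test code.

import logging
from azure_storage_logging.handlers import BlobStorageRotatingFileHandler

# Placeholder credentials, as in the test code above.
mystorageaccountname = '***'
mystorageaccountkey = '***'

logger = logging.getLogger('service_logger')
logger.setLevel(logging.DEBUG)

# A deliberately tiny maxBytes: every record exceeds it, so the file rolls
# over (the point at which it is shipped to the 'service-log' container)
# almost immediately instead of waiting for content to accumulate.
azure_blob_handler = BlobStorageRotatingFileHandler(filename='service.log',
                                                    account_name=mystorageaccountname,
                                                    account_key=mystorageaccountkey,
                                                    maxBytes=5,
                                                    container='service-log')
azure_blob_handler.setLevel(logging.INFO)
logger.addHandler(azure_blob_handler)

# Each record is well over 5 bytes, so rollovers (and uploads) happen after
# only a few messages.
for i in range(3):
    logger.warning('warning message number %d', i)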