Logging the errors to azure blob - azure-storage

I am trying to log errors to Azure Blob Storage, but it's not creating any table in the blob storage. I have gone through many docs and also searched for answers on Stack Overflow. Please help me with this.
Thanks
Below is the code:
def log():
    import logging
    import sys
    from tempfile import mkdtemp
    from azure_storage_logging.handlers import BlobStorageRotatingFileHandler, TableStorageHandler

    mystorageaccountname = '***'
    mystorageaccountkey = '***'
    _LOGFILE_TMPDIR = mkdtemp()

    logger = logging.getLogger('service_logger')
    logger.setLevel(logging.DEBUG)
    log_formater = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(process)d - %(message)s')

    azure_blob_handler = TableStorageHandler(account_name=mystorageaccountname,
                                             account_key=mystorageaccountkey,
                                             protocol='https',
                                             table='logtable',
                                             batchsize=100,
                                             extra_properties=None,
                                             partition_key_formatter=None,
                                             row_key_formatter=None,
                                             is_emulated=False)
    logger.addHandler(azure_blob_handler)
    logger.warning('warning message')

According to the code you provided, you use TableStorageHandler to store your logs. That handler writes log records to Azure Table Storage, not to Azure Blob Storage, so please look for your logs in the Azure table.
Besides, if you want to store your logs in Azure Blob Storage, please refer to the following code:
import logging
import sys
from azure_storage_logging.handlers import BlobStorageRotatingFileHandler

mystorageaccountname = '***'
mystorageaccountkey = '***'

logger = logging.getLogger('service_logger')
logger.setLevel(logging.DEBUG)
log_formater = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(process)d - %(message)s')

azure_blob_handler = BlobStorageRotatingFileHandler(filename='service.log',
                                                    account_name=mystorageaccountname,
                                                    account_key=mystorageaccountkey,
                                                    maxBytes=5,
                                                    container='service-log')
azure_blob_handler.setLevel(logging.INFO)
azure_blob_handler.setFormatter(log_formater)
logger.addHandler(azure_blob_handler)
logger.warning('warning message')
For more details, please refer to the documentation.
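To confirm that the handler actually uploaded something, you can list the blobs in the target container. A minimal verification sketch, assuming the azure-storage-blob (v12) package and the same account credentials and container name as above:

# Verification sketch (assumes the azure-storage-blob v12 package is installed).
# The account name/key and container name mirror the example above.
from azure.storage.blob import BlobServiceClient

account_name = '***'
account_key = '***'

service = BlobServiceClient(
    account_url=f'https://{account_name}.blob.core.windows.net',
    credential=account_key)

container = service.get_container_client('service-log')
for blob in container.list_blobs():
    print(blob.name, blob.size)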
Update
When we use BlobStorageRotatingFileHandler, the log file is not uploaded until its content reaches maxBytes.
My test code:
import logging
import sys
from azure_storage_logging.handlers import BlobStorageRotatingFileHandler

mystorageaccountname = 'blobstorage0516'
mystorageaccountkey = ''

logger = logging.getLogger('service_logger')
log_formater = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(process)d - %(message)s')

azure_blob_handler = BlobStorageRotatingFileHandler(filename='service.log',
                                                    account_name=mystorageaccountname,
                                                    account_key=mystorageaccountkey,
                                                    maxBytes=5,
                                                    container='service-log')
azure_blob_handler.setLevel(logging.INFO)
azure_blob_handler.setFormatter(log_formater)
logger.addHandler(azure_blob_handler)
logger.warning('warning message')
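Since the upload is only triggered once the file grows past maxBytes, one way to exercise it in a test is to emit enough records to cross that threshold and then shut logging down. A small sketch that continues the test code above (the record count is arbitrary):

# Test driver sketch: push the log file past maxBytes to trigger rotation/upload.
import logging

logger = logging.getLogger('service_logger')  # same logger configured above
for i in range(100):
    logger.warning('warning message %d', i)

# Flush and close all handlers so any pending output is written out.
logging.shutdown()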

Related

Google people API returning empty / no results in Python

I'm trying to read contacts from my personal Gmail account, and the instructions provided by Google for the People API are returning an empty list. I'm not sure why. I've tried another solution from a few years ago, but that doesn't seem to work either. I've pasted my code below. Any help troubleshooting this is appreciated!
import os.path
from googleapiclient.discovery import build
from google_auth_oauthlib.flow import InstalledAppFlow
from google.auth.transport.requests import Request
from google.oauth2.credentials import Credentials
from google.oauth2 import service_account

# If modifying these scopes, delete the file token.json.
SCOPES = ['https://www.googleapis.com/auth/contacts.readonly']

SERVICE_ACCOUNT_FILE = '<path name hidden>.json'
credentials = service_account.Credentials.from_service_account_file(
    SERVICE_ACCOUNT_FILE, scopes=SCOPES)


def main():
    # Shows basic usage of the People API.
    # Prints the name of the first 10 connections.
    creds = None
    service = build('people', 'v1', credentials=credentials)

    # Call the People API
    print('List 10 connection names')
    results = service.people().connections().list(
        resourceName='people/me',
        pageSize=10,
        personFields='names,emailAddresses').execute()
    connections = results.get('connections', [])

    request = service.people().searchContacts(pageSize=10, query="A", readMask="names")
    results = service.people().connections().list(
        resourceName='people/me',
        personFields='names,emailAddresses',
        fields='connections,totalItems,nextSyncToken').execute()
    for i in results:
        print('result', i)

    for person in connections:
        names = person.get('names', [])
        if names:
            name = names[0].get('displayName')
            print(name)


if __name__ == '__main__':
    main()
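For reference, the unused imports at the top (InstalledAppFlow, Credentials, and the token.json comment) point at the quickstart's user-consent flow rather than a service account. A minimal sketch of that flow, assuming a desktop OAuth client secrets file named credentials.json (an assumption, not part of the code above):

# Hypothetical sketch of the installed-app (user consent) flow from the quickstart.
# Assumes a desktop OAuth client secrets file named 'credentials.json'.
import os.path
from google.auth.transport.requests import Request
from google.oauth2.credentials import Credentials
from google_auth_oauthlib.flow import InstalledAppFlow
from googleapiclient.discovery import build

SCOPES = ['https://www.googleapis.com/auth/contacts.readonly']

creds = None
if os.path.exists('token.json'):
    creds = Credentials.from_authorized_user_file('token.json', SCOPES)
if not creds or not creds.valid:
    if creds and creds.expired and creds.refresh_token:
        creds.refresh(Request())
    else:
        flow = InstalledAppFlow.from_client_secrets_file('credentials.json', SCOPES)
        creds = flow.run_local_server(port=0)
    with open('token.json', 'w') as token:
        token.write(creds.to_json())

service = build('people', 'v1', credentials=creds)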

boto3 default session Profile - Access denied

I need to put a file on S3 using a library based on boto3 (great_expectations), and I only have write permissions under a specific profile.
Setting AWS_PROFILE seems to set the default session's profile, but it does not help. To replicate, I ran the code myself and got the same result (AccessDenied):
import boto3
import os
import json

s3 = boto3.resource("s3")

bucket = 'my-bucket'
key = 'path/to/the/file.json'
content_encoding = "utf-8"
content_type = "application/json"
value = json.dumps({'test': 'test indeed'})

os.environ['AWS_PROFILE']
# >>> mycorrectprofile
boto3.DEFAULT_SESSION.profile_name
# >>> mycorrectprofile

s3_object = s3.Object(bucket, key)
s3_object.put(
    Body=value.encode(content_encoding),
    ContentEncoding=content_encoding,
    ContentType=content_type,
)
That results in AccessDenied.
Now, right after that, I do:
my_session = boto3.session.Session(profile_name=os.getenv('AWS_PROFILE'))
ss3 = my_session.resource('s3')
r2_s3 = ss3.Object(bucket, key)
That runs smoothly.
What is happening here, and how can I resolve that behavior?
boto3.__version__ = '1.15.0'
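One way to make a boto3-based library (which typically calls boto3.resource/boto3.client against the default session) pick up the profile is to bind the default session to it explicitly before the library runs. A minimal sketch of that workaround, assuming the same profile name; this is a suggestion, not a diagnosis of the root cause:

# Sketch: bind the default session to the desired profile before any boto3-based
# library creates clients or resources (assumes the 'mycorrectprofile' profile exists).
import boto3

boto3.setup_default_session(profile_name='mycorrectprofile')

# From here on, module-level calls such as boto3.resource('s3') use that profile.
s3 = boto3.resource('s3')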

403 Access Denied Error while creating a dataset in DOMO

I'm getting a 403 Access Denied error while trying to create a dataset in DOMO. If anyone is experienced with DOMO, please help me.
import logging

from pydomo import Domo
from pydomo.datasets import DataSetRequest, Schema, Column, ColumnType, Policy
from pydomo.datasets import PolicyFilter, FilterOperator, PolicyType, Sorting

client_id = ''
client_secret_code = ''
data_set_id = ''
api_host = 'api.domo.com'

domo = Domo(client_id, client_secret_code, logger_name='foo', log_level=logging.INFO,
            api_host=api_host)

data_set_name = 'Testing'
data_set_description = 'Test_to_update_schema'

handler = logging.StreamHandler()
handler.setLevel(logging.INFO)
formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
handler.setFormatter(formatter)
logging.getLogger().addHandler(handler)

domo = Domo(client_id, client_secret_code, logger_name='foo', log_level=logging.INFO,
            api_host=api_host)

dsr = DataSetRequest()
dsr.name = data_set_name
dsr.description = data_set_description
dsr.schema = Schema([Column(ColumnType.STRING, 'Id'),
                     Column(ColumnType.STRING, 'Publisher_Name')])

dataset = domo.datasets.create(dsr)
print(dataset)
This is an authorization error. Either the client_id or the client_secret_code is wrong, or maybe both.
I had the same error in .NET when using Domo to connect and get data from a dataset. I fixed it by creating a Base64 string and passing it in the Authorization header as Basic auth, for example:
"Basic", Convert.ToBase64String(Encoding.UTF8.GetBytes($"{clientId}:{clientSecret}"))
I am not sure how this goes in Python, but create a Base64 string and pass it for auth.
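A rough Python equivalent of that Basic auth header, as a sketch: it base64-encodes "client_id:client_secret" the same way. The token endpoint URL and the use of the requests library are assumptions about Domo's OAuth flow, not taken from the question:

# Sketch: build the same Basic auth header in Python (base64 of "client_id:client_secret").
# The token endpoint URL below is an assumption about Domo's OAuth client-credentials flow.
import base64
import requests

client_id = ''
client_secret_code = ''

token = base64.b64encode(f'{client_id}:{client_secret_code}'.encode('utf-8')).decode('ascii')
headers = {'Authorization': f'Basic {token}'}

resp = requests.get('https://api.domo.com/oauth/token',
                    params={'grant_type': 'client_credentials'},
                    headers=headers)
print(resp.status_code, resp.text)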

Bigquery Not Accepting Stackdriver's Sink Writer Identity

I've been following the documentation to export logs from Stackdriver to Bigquery. I've tried the following:
from google.cloud.logging.client import Client as lClient
from google.cloud.bigquery.client import Client as bqClient
from google.cloud.bigquery.dataset import AccessGrant
import os

os.environ['GOOGLE_APPLICATION_CREDENTIALS'] = 'path_to_json.json'

lc = lClient()
bqc = bqClient()

ds = bqc.dataset('working_dataset')
acc = ds.access_grants
acc.append(AccessGrant('WRITER', 'groupByEmail', 'cloud-logs#system.gserviceaccount.com'))  # this is the service account shown in our Exports tab in GCP
ds.access_grants = acc
ds.update()
But we get the error message:
NotFound: 404 Not found: Email cloud-logs#system.gserviceaccount.com (PUT https://www.googleapis.com/bigquery/v2/projects/[project-id]/datasets/[working-dataset])
Why won't our dataset be updated? The key being used is the same one that created the dataset in the first place.
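One thing that may be worth checking (an assumption, not something confirmed in the question): in BigQuery's dataset access controls, groupByEmail is meant for Google Groups, while a service-account address is normally granted with the userByEmail entity type. A sketch of that variant, continuing the snippet above with the same old-style AccessGrant API:

# Hypothetical variant: grant the writer identity as a user/service account email
# rather than as a group. Reuses ds and AccessGrant from the snippet above.
acc = ds.access_grants
acc.append(AccessGrant('WRITER', 'userByEmail', 'cloud-logs#system.gserviceaccount.com'))
ds.access_grants = acc
ds.update()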

Google Custom Search via API is too slow

I am using Google Custom Search to index content on my website.
When I use a REST client to make the GET request at
https://www.googleapis.com/customsearch/v1?key=xxx&q=query&cx=xx
I get a response in under a second.
But when I try to make the call using my code, it takes up to six seconds. What am I doing wrong?
__author__ = 'xxxx'

import urllib2
import logging
import gzip
from cfc.apikey.googleapi import get_api_key
from cfc.url.processor import set_query_parameter
from StringIO import StringIO

CX = 'xxx:xxx'
URL = "https://www.googleapis.com/customsearch/v1?key=%s&cx=%s&q=sd&fields=kind,items(title)" % (get_api_key(), CX)


def get_results(query):
    url = set_query_parameter(URL, 'q', query)
    request = urllib2.Request(url)
    request.add_header('Accept-encoding', 'gzip')
    request.add_header('User-Agent', 'cfc xxxx (gzip)')
    response = urllib2.urlopen(request)
    if response.info().get('Content-Encoding') == 'gzip':
        buf = StringIO(response.read())
        f = gzip.GzipFile(fileobj=buf)
        data = f.read()
    return data
I have implemented the performance tips mentioned in the Performance Tips documentation. I would appreciate any help. Thanks.
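To narrow down where the six seconds go, it can help to time each stage of the call separately (request setup versus the HTTP round trip and read). A small diagnostic sketch wrapping the same urllib2 call as above; the timings printed are purely for troubleshooting:

# Diagnostic sketch: time each stage of the request to see where the latency is.
# Uses only the standard library and the same urllib2 request pattern as above.
import time
import urllib2

def timed_fetch(url):
    t0 = time.time()
    request = urllib2.Request(url)
    request.add_header('Accept-encoding', 'gzip')
    request.add_header('User-Agent', 'cfc xxxx (gzip)')
    t1 = time.time()
    response = urllib2.urlopen(request)   # network round trip
    body = response.read()
    t2 = time.time()
    print('build request: %.3fs, fetch+read: %.3fs' % (t1 - t0, t2 - t1))
    return body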