I am using Google Custom Search to index content on my website.
When I use a REST client to make the get request at
https://www.googleapis.com/customsearch/v1?key=xxx&q=query&cx=xx
I get a response in under a second.
But when I try to make the same call from my code, it takes up to six seconds. What am I doing wrong?
__author__ = 'xxxx'

import urllib2
import logging
import gzip
from cfc.apikey.googleapi import get_api_key
from cfc.url.processor import set_query_parameter
from StringIO import StringIO

CX = 'xxx:xxx'
URL = "https://www.googleapis.com/customsearch/v1?key=%s&cx=%s&q=sd&fields=kind,items(title)" % (get_api_key(), CX)


def get_results(query):
    url = set_query_parameter(URL, 'q', query)
    request = urllib2.Request(url)
    request.add_header('Accept-encoding', 'gzip')
    request.add_header('User-Agent', 'cfc xxxx (gzip)')
    response = urllib2.urlopen(request)
    if response.info().get('Content-Encoding') == 'gzip':
        # Decompress the gzipped body before returning it.
        buf = StringIO(response.read())
        f = gzip.GzipFile(fileobj=buf)
        data = f.read()
    else:
        # Fall back to the raw body so data is always defined.
        data = response.read()
    return data
I have already implemented the suggestions from the Performance Tips documentation. I would appreciate any help. Thanks.
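A minimal timing sketch, assuming the same Python 2 / urllib2 setup as above, can help show whether the six seconds are spent opening the connection or reading the body (the key/cx values are the same placeholders as in the question):

import time
import urllib2

# Same placeholder key/cx values as in the question.
URL = "https://www.googleapis.com/customsearch/v1?key=xxx&cx=xx&q=query"

start = time.time()
response = urllib2.urlopen(URL)   # DNS + TLS + request + response headers
headers_done = time.time()
body = response.read()            # response body transfer
done = time.time()

print "connect+headers: %.2fs  body: %.2fs" % (headers_done - start, done - headers_done)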
Recently I came across a scenario where I need to encrypt a Web API request and response using PyCryptodome inside a Synapse notebook activity. I am trying to make a call to a Google API, but the request should be encrypted, and similarly the response should be encrypted. After making the call with the encrypted data, I get the error below.
Error:
error code: 400, message: Invalid JSON Payload received. Unexpected Token, Status: Invalid argument.
I have written the code below:
import os
import requests
import json
import base64
from Crypto import Random
from Crypto.Cipher import AES
from Crypto.Random import get_random_bytes
from Crypto.Util.Padding import pad, unpad
import secrets

key = os.urandom(16)
iv = Random.new().read(AES.block_size)


def encrypt_data(key, data):
    BS = AES.block_size
    pad = lambda s: s + ((BS - len(s) % BS) * chr(BS - len(s) % BS)).encode()
    cipher = AES.new(key, AES.MODE_CBC, iv)
    encrypted_data = base64.b64encode(cipher.encrypt(pad(data)))
    return encrypted_data


url = "https://accounts.google.com/o/oauth2/token"
client_Id = "XXXXX"
client_secret = "YYYYY"
grant_type = "refresh_token"
refresh_token = "ZZZZZZ"
access_type = "offline"

data = {"grant_type": grant_type,
        "client_id": client_Id,
        "client_secret": client_secret,
        "refresh_token": refresh_token,
        "access_type": access_type
        }

encode_data = json.dumps(data).encode("utf-8")
encrypt_data = encrypt_data(key, encode_data)
response = requests.post(url, data=encrypt_data)
print(response.content)
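For reference, a minimal decryption counterpart, sketched under the assumption that the receiver holds the same key and iv; PyCryptodome's unpad reverses the PKCS#7 padding applied by the lambda above (reuses the imports from the snippet):

def decrypt_data(key, encrypted_data):
    # Reverse of encrypt_data: base64-decode, AES-CBC decrypt, strip PKCS#7 padding.
    raw = base64.b64decode(encrypted_data)
    cipher = AES.new(key, AES.MODE_CBC, iv)
    return unpad(cipher.decrypt(raw), AES.block_size)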
It would be really helpful if someone could give me an idea or guide me on how I can achieve this.
Thank You!
I'm trying to read contacts from my personal Gmail account, and the instructions provided by Google for the People API return an empty list. I'm not sure why. I've tried another solution from a few years ago, but that doesn't seem to work either. I've pasted my code below. Any help troubleshooting this is appreciated!
import os.path

from googleapiclient.discovery import build
from google_auth_oauthlib.flow import InstalledAppFlow
from google.auth.transport.requests import Request
from google.oauth2.credentials import Credentials
from google.oauth2 import service_account

# If modifying these scopes, delete the file token.json.
SCOPES = ['https://www.googleapis.com/auth/contacts.readonly']

SERVICE_ACCOUNT_FILE = '<path name hidden>.json'
credentials = service_account.Credentials.from_service_account_file(
    SERVICE_ACCOUNT_FILE, scopes=SCOPES)


def main():
    # Shows basic usage of the People API.
    # Prints the name of the first 10 connections.
    creds = None
    service = build('people', 'v1', credentials=credentials)

    # Call the People API
    print('List 10 connection names')
    results = service.people().connections().list(
        resourceName='people/me',
        pageSize=10,
        personFields='names,emailAddresses').execute()
    connections = results.get('connections', [])

    request = service.people().searchContacts(pageSize=10, query="A", readMask="names")
    results = service.people().connections().list(
        resourceName='people/me',
        personFields='names,emailAddresses',
        fields='connections,totalItems,nextSyncToken').execute()
    for i in results:
        print('result', i)

    for person in connections:
        names = person.get('names', [])
        if names:
            name = names[0].get('displayName')
            print(name)


if __name__ == '__main__':
    main()
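The imports above suggest the installed-app OAuth flow was intended rather than a service account; a service account is its own identity with no personal contacts, which is likely why the list comes back empty. A minimal sketch of that flow, assuming a credentials.json downloaded from the Cloud Console:

import os.path

from google.auth.transport.requests import Request
from google.oauth2.credentials import Credentials
from google_auth_oauthlib.flow import InstalledAppFlow

SCOPES = ['https://www.googleapis.com/auth/contacts.readonly']


def get_user_credentials():
    # Standard installed-app flow: reuse token.json if present,
    # otherwise open a browser window for user consent.
    creds = None
    if os.path.exists('token.json'):
        creds = Credentials.from_authorized_user_file('token.json', SCOPES)
    if not creds or not creds.valid:
        if creds and creds.expired and creds.refresh_token:
            creds.refresh(Request())
        else:
            flow = InstalledAppFlow.from_client_secrets_file('credentials.json', SCOPES)
            creds = flow.run_local_server(port=0)
        with open('token.json', 'w') as token:
            token.write(creds.to_json())
    return creds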
I have tried two methods, and both show a different location from the one I gave, as shown in this image.
apikey = 'abcd'

import pandas as pd
from alpha_vantage.timeseries import TimeSeries
import time

ts = TimeSeries(key=apikey, output_format='pandas')
data, metadata = ts.get_intraday(symbol='name', interval='1min', outputsize='full')
data

while True:
    data, metadata = ts.get_intraday(symbol='TCS', interval='1min', outputsize='full')
    data.to_excel('livedat.xlsx')
    time.sleep(60)
The code is running properly, but I don't know how to get the data file in Excel.
Importantly, the method should fetch the file on a regular schedule, i.e. every minute, automatically.
Also, I am using IBM Watson Studio to write the code.
I am not familiar with the alpha_vantage wrapper that you are using; however, this is how I would approach your question. The code works and I have included comments.
To read the file back into the Python script, I would use pd.read_excel(filepath).
import requests
import pandas as pd
import time
import datetime

# Your API key and the URL we will request from
API_KEY = "YOUR API KEY"
url = "https://www.alphavantage.co/query?"


def Generate_file(symbol="IBM", interval="1min"):
    # URL parameters
    parameters = {"function": "TIME_SERIES_INTRADAY",
                  "symbol": symbol,
                  "interval": interval,
                  "apikey": API_KEY,
                  "outputsize": "compact"}

    # Get the JSON response from AlphaVantage
    response = requests.get(url, params=parameters)
    data = response.json()

    # Filter the response to only the time series data we want
    time_series_interval = f"Time Series ({interval})"
    prices = data[time_series_interval]

    # Convert the filtered response to a Pandas DataFrame
    df = pd.DataFrame.from_dict(prices, orient="index").reset_index()
    df = df.rename(columns={"index": time_series_interval})

    # Create a timestamp for the Excel file so it does not get
    # overwritten with new data each time.
    current_time = datetime.datetime.now()
    file_timestamp = current_time.strftime("%Y%m%d_%H.%M")
    filename = f"livedat_{file_timestamp}.xlsx"
    df.to_excel(filename)


# Set a limit on the number of calls we make to prevent an infinite loop
call_limit = 3
number_of_calls = 0

while number_of_calls < call_limit:
    Generate_file()  # our function
    number_of_calls += 1
    time.sleep(60)
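To read one of the generated files back into the script, as mentioned above, pd.read_excel works; the filename here is hypothetical, following the livedat_<timestamp>.xlsx pattern:

import pandas as pd

# Hypothetical filename matching the livedat_YYYYMMDD_HH.MM.xlsx pattern above.
df = pd.read_excel("livedat_20240101_10.00.xlsx", index_col=0)
print(df.head())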
from rest_framework import status, response
from rest_framework.test import APITestCase
from lots.models import Lot


class LotsTestCase(APITestCase):
    def setUp(self) -> None:
        self.lot = Lot.objects.create(name="1",
                                      address="Dont Know",
                                      phone_num="010-4451-2211",
                                      latitude=127.12,
                                      longitude=352.123,
                                      basic_rate=20000,
                                      additional_rate=2000,
                                      partnership=False,
                                      section_count=3,)

    def test_delete(self):
        response = self.client.delete(f'api/lots/{self.lot["name"]}')
        # response = self.client.delete(f'/api/users/{self.users[0].pk}')
        # url = reverse(f'/api/lots/{self.lot}', kwargs={'pk': self.lot.pk})
        # self.client.delete(url)
        self.assertEqual(response.status_code, status.HTTP_204_NO_CONTENT)
        self.assertEqual(self.lot.objects.filter(pk=self.lot.pk.count()))
I have problems with the test code above. Why doesn't it work? I know it has to do with accessing dictionary values, but I just can't figure it out. Thanks for your help.
Lot.objects.create(...) returns a Lot instance, so you access the name with self.lot.name.
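A sketch of the corrected test, assuming the endpoint really is keyed by name and that the URL needs a leading slash:

    def test_delete(self):
        # Attribute access on the model instance, not dictionary subscripting.
        response = self.client.delete(f'/api/lots/{self.lot.name}')
        self.assertEqual(response.status_code, status.HTTP_204_NO_CONTENT)
        # Query the model class, not the instance, and compare the count to zero.
        self.assertEqual(Lot.objects.filter(pk=self.lot.pk).count(), 0)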
I'm trying to replicate the following successful cURL operation with Grinder.
curl -X PUT -d "title=Here%27s+the+title&content=Here%27s+the+content&signature=myusername%3A3ad1117dab0ade17bdbd47cc8efd5b08" http://www.mysite.com/api
Here's my script:
from net.grinder.script import Test
from net.grinder.script.Grinder import grinder
from net.grinder.plugin.http import HTTPRequest
from HTTPClient import NVPair
import hashlib

test1 = Test(1, "Request resource")
request1 = HTTPRequest(url="http://www.mysite.com/api")
test1.record(request1)

log = grinder.logger.info
test1.record(log)

m = hashlib.md5()


class TestRunner:
    def __call__(self):
        params = [NVPair("title", "Here's the title"),
                  NVPair("content", "Here's the content")]
        params.sort(key=lambda param: param.getName())

        ps = ""
        for param in params:
            ps = ps + param.getValue() + ":"
        ps = ps + "myapikey"
        m.update(ps)

        params.append(NVPair("signature", ("myusername:" + m.hexdigest())))
        request1.setFormData(tuple(params))
        result = request1.PUT()
The test runs okay, but it seems that my script doesn't actually send any of the params data to the API, and I can't work out why. There are no errors generated, but I get a 401 Unauthorized response from the API, indicating that the PUT request reached it but, without a valid signature, was rejected.
This isn't exactly an answer, more of a workaround that I came up with. I've decided to post it since this question hasn't yet received any responses, and it may help anyone else trying to achieve the same thing.
The workaround is basically to use the httplib and urllib modules to build and make the PUT request instead of the HTTPClient module.
import hashlib
import httplib, urllib

....

params = [("title", "Here's the title"), ("content", "Here's the content")]
params.sort(key=lambda param: param[0])

ps = ""
for param in params:
    ps = ps + param[1] + ":"
ps = ps + "myapikey"

m = hashlib.md5()
m.update(ps)
params.append(("signature", "myusername:" + m.hexdigest()))
params = urllib.urlencode(params)
print params

headers = {"Content-type": "application/x-www-form-urlencoded"}
conn = httplib.HTTPConnection("www.mysite.com:80")
conn.request("PUT", "/api", params, headers)
response = conn.getresponse()
print response.status, response.reason
print response.read()
conn.close()
(Based on the example at the bottom of this documentation page.)
You have to refer to the multi-part form posting example in the Grinder script gallery, changing the POST to a PUT. It works for me.
from HTTPClient import Codecs, NVPair
from jarray import zeros

files = (NVPair("self", "form.py"),)
parameters = (NVPair("run number", str(grinder.runNumber)),)

# This is the Jython way of creating an NVPair[] Java array
# with one element.
headers = zeros(1, NVPair)

# Create a multi-part form encoded byte array.
data = Codecs.mpFormDataEncode(parameters, files, headers)
grinder.logger.output("Content type set to %s" % headers[0].value)

# Call the version of PUT that takes a byte array.
result = request1.PUT("/upload", data, headers)