Sending form data with an HTTP PUT request using Grinder API - jython

I'm trying to replicate the following successful cURL operation with Grinder.
curl -X PUT -d "title=Here%27s+the+title&content=Here%27s+the+content&signature=myusername%3A3ad1117dab0ade17bdbd47cc8efd5b08" http://www.mysite.com/api
Here's my script:
from net.grinder.script import Test
from net.grinder.script.Grinder import grinder
from net.grinder.plugin.http import HTTPRequest
from HTTPClient import NVPair
import hashlib
test1 = Test(1, "Request resource")
request1 = HTTPRequest(url="http://www.mysite.com/api")
test1.record(request1)
log = grinder.logger.info
test1.record(log)
m = hashlib.md5()
class TestRunner:
    def __call__(self):
        params = [NVPair("title", "Here's the title"), NVPair("content", "Here's the content")]
        params.sort(key=lambda param: param.getName())
        ps = ""
        for param in params:
            ps = ps + param.getValue() + ":"
        ps = ps + "myapikey"
        m.update(ps)
        params.append(NVPair("signature", ("myusername:" + m.hexdigest())))
        request1.setFormData(tuple(params))
        result = request1.PUT()
The test runs okay, but it seems my script doesn't actually send any of the params data to the API, and I can't work out why. No errors are generated, but I get a 401 Unauthorized response from the API, which suggests the PUT request reached it but was rejected because no signature was sent.

This isn't exactly an answer, more of a workaround I came up with. I've decided to post it since this question hasn't received any responses yet, and it may help anyone else trying to achieve the same thing.
The workaround is basically to use the httplib and urllib modules to build and make the PUT request instead of the HTTPClient module.
import hashlib
import httplib, urllib
....
params = [("title", "Here's the title"),("content", "Here's the content")]
params.sort(key=lambda param: param[0])
ps = ""
for param in params:
    ps = ps + param[1] + ":"
ps = ps + "myapikey"
m = hashlib.md5()
m.update(ps)
params.append(("signature", "myusername:" + m.hexdigest()))
params = urllib.urlencode(params)
print params
headers = {"Content-type": "application/x-www-form-urlencoded"}
conn = httplib.HTTPConnection("www.mysite.com:80")
conn.request("PUT", "/api", params, headers)
response = conn.getresponse()
print response.status, response.reason
print response.read()
conn.close()
(Based on the example at the bottom of this documentation page.)
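One caveat: requests made directly through httplib bypass Grinder's HTTP plugin, so they won't appear in the HTTP-level statistics. If you still want each run driven by Grinder's worker threads, the same call can live inside TestRunner.__call__; a rough sketch, reusing the params and headers built above:
class TestRunner:
    def __call__(self):
        # params is the urlencoded, signed body and headers the dict built above
        conn = httplib.HTTPConnection("www.mysite.com:80")
        conn.request("PUT", "/api", params, headers)
        response = conn.getresponse()
        grinder.logger.info("PUT returned %d %s" % (response.status, response.reason))
        conn.close()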

You can refer to the multi-part form posting example in the Grinder script gallery, changing the POST to a PUT. It works for me.
# Needs, in addition to the imports from the question:
from HTTPClient import Codecs, NVPair
from jarray import zeros
files = ( NVPair("self", "form.py"), )
parameters = ( NVPair("run number", str(grinder.runNumber)), )
# This is the Jython way of creating an NVPair[] Java array
# with one element.
headers = zeros(1, NVPair)
# Create a multi-part form encoded byte array.
data = Codecs.mpFormDataEncode(parameters, files, headers)
grinder.logger.output("Content type set to %s" % headers[0].value)
# Call the version of PUT that takes a byte array.
result = request1.PUT("/upload", data, headers)
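If the API actually expects an ordinary application/x-www-form-urlencoded body (which is what the original cURL command sends) rather than a multipart form, the same PUT overload that takes a body and headers should work with a URL-encoded string too. A rough, untested sketch, reusing m and request1 from the question:
import urllib
from HTTPClient import NVPair
# Values mirror the original cURL command; adjust as needed.
body = urllib.urlencode([("title", "Here's the title"),
                         ("content", "Here's the content"),
                         ("signature", "myusername:" + m.hexdigest())])
headers = ( NVPair("Content-Type", "application/x-www-form-urlencoded"), )
# Jython converts the Python string to the byte[] this overload expects.
result = request1.PUT("/api", body, headers)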

Related

Getting error in a python script when using QuickSight API calls to retrieve the value of user parameter selection

I am working on a Python script which will use the QuickSight APIs to retrieve the user parameter selections, but I keep getting the error below:
parameters = response['Dashboard']['Parameters']
KeyError: 'Parameters'
If I try different code to retrieve the datasets in my QS account, it works, but the Parameters code doesn't. I think I am missing some configuration.
#Code to retrieve the parameters from a QS dashboard (which fails):
import boto3

quicksight = boto3.client('quicksight')
response = quicksight.describe_dashboard(
    AwsAccountId='99999999999',
    DashboardId='zzz-zzzz-zzzz'
)
parameters = response['Dashboard']['Parameters']
for parameter in parameters:
    print(parameter['Name'], ':', parameter['Value'])
#Code to display the datasets in the QS account (which works):
import boto3
import json
account_id = '99999999999'
session = boto3.Session(profile_name='default')
qs_client = session.client('quicksight')
response = qs_client.list_data_sets(AwsAccountId = account_id,MaxResults = 100)
results = response['DataSetSummaries']
while "NextToken" in response.keys():
response = qs_client.list_data_sets(AwsAccountId = account_id,MaxResults = 100,NextToken=response["NextToken"])
results.extend(response["DataSetSummaries"])
for i in results:
x = i['DataSetId']
try:
response = qs_client.describe_data_set(AwsAccountId=account_id,DataSetId=x)
print("succeeded loading: {} for data set {} ".format(x, response['DataSet']['Name']))
except:
print("failed loading: {} ".format(x))

Encrypted access token request to google api failed with 400 code

Recently I came across a scenario where I need to encrypt a web API request and response using PyCryptodome inside a Synapse notebook activity. I am trying to make a call to a Google API, but the request should be encrypted and, similarly, the response should be encrypted. After making the call with encrypted data, I am getting the error below.
Error:
error code: 400, message: Invalid JSON Payload received. Unexpected Token, Status: Invalid argument.
I have written the code below:
import os
import requests
import json
import base64
from Crypto import Random
from Crypto.Cipher import AES
from Crypto.Random import get_random_bytes
from Crypto.Util.padding import pad,unpad
import secrets
key= os.urandom(16)
iv = Random.new().read(AES.block_size)
def encrypt_data(key, data):
    BS = AES.block_size
    pad = lambda s: s + ((BS - len(s) % BS) * chr(BS - len(s) % BS)).encode()
    cipher = AES.new(key, AES.MODE_CBC, iv)
    encrypted_data = base64.b64encode(cipher.encrypt(pad(data)))
    return encrypted_data
url = "https://accounts.google.com/o/oauth2/token"
client_Id = "XXXXX"
client_secret = "YYYYY"
grant_type = "refresh_token"
refresh_token = "ZZZZZZ"
access_type="offline"
data = {"grant_type":grant_type,
"client_id":client_Id,
"client_secret":client_secret,
"refresh_token":refresh_token,
"access_type":access_type
}
encode_data = json.dumps(data).encode("utf-8")
encrypt_data = encrypt_data(key,encode_data)
response = requests.post(url, data = encrypt_data)
print(response.content)
It would be really helpful if someone could give me an idea or guide me on how I can achieve this.
Thank You!
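For context: the token endpoint at accounts.google.com has no way to decrypt an arbitrary AES-encrypted body, so it will always reject it as invalid JSON/form data; confidentiality in transit is already provided by HTTPS. A minimal sketch of the standard, unencrypted refresh-token exchange (placeholder credentials as in the question):
import requests

url = "https://accounts.google.com/o/oauth2/token"
data = {
    "grant_type": "refresh_token",
    "client_id": "XXXXX",          # placeholders from the question
    "client_secret": "YYYYY",
    "refresh_token": "ZZZZZZ",
}
# requests sends this dict as application/x-www-form-urlencoded over TLS,
# which is the format the token endpoint expects.
response = requests.post(url, data=data)
print(response.status_code, response.json())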

HTTP Basic Authentication not working with Python 3

I am trying to access an intranet site with HTTP Basic Authentication enabled.
Here's the code I'm using:
from bs4 import BeautifulSoup
import urllib.request, base64, urllib.error
request = urllib.request.Request(url)
string = '%s:%s' % ('username','password')
base64string = base64.standard_b64encode(string.encode('utf-8'))
request.add_header("Authorization", "Basic %s" % base64string)
try:
    u = urllib.request.urlopen(request)
except urllib.error.HTTPError as e:
    print(e)
    print(e.headers)
soup = BeautifulSoup(u.read(), 'html.parser')
print(soup.prettify())
But it doesn't work; it fails with 401 Authorization Required, and I can't figure out why.
The solution given here works without any modifications.
from bs4 import BeautifulSoup
import urllib.request
# create a password manager
password_mgr = urllib.request.HTTPPasswordMgrWithDefaultRealm()
# Add the username and password.
# If we knew the realm, we could use it instead of None.
top_level_url = "http://example.com/foo/"
password_mgr.add_password(None, top_level_url, username, password)
handler = urllib.request.HTTPBasicAuthHandler(password_mgr)
# create "opener" (OpenerDirector instance)
opener = urllib.request.build_opener(handler)
# use the opener to fetch a URL
u = opener.open(url)
soup = BeautifulSoup(u.read(), 'html.parser')
The previous code works as well. You just have to decode the UTF-8 encoded string, otherwise the header contains a byte sequence rather than a plain string.
from bs4 import BeautifulSoup
import urllib.request, base64, urllib.error
request = urllib.request.Request(url)
string = '%s:%s' % ('username','password')
base64string = base64.standard_b64encode(string.encode('utf-8'))
request.add_header("Authorization", "Basic %s" % base64string.decode('utf-8'))
try:
    u = urllib.request.urlopen(request)
except urllib.error.HTTPError as e:
    print(e)
    print(e.headers)
soup = BeautifulSoup(u.read(), 'html.parser')
print(soup.prettify())
UTF-8 encoding might not work. You can try to use ASCII or ISO-8859-1 encoding instead.
Also, try to access the intranet site with a web browser and check how the Authorization header is different from the one you are generating.
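For example, a small sketch to see exactly what header value the script would send (so it can be compared against what the browser sends), with placeholder credentials:
import base64

credentials = '%s:%s' % ('username', 'password')
# Encode with ASCII (or ISO-8859-1) and decode back to str so the header
# value is plain text rather than the repr of a bytes object.
header_value = "Basic %s" % base64.standard_b64encode(credentials.encode('ascii')).decode('ascii')
print(header_value)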
Encode using "ascii". This worked for me.
import base64
import urllib.request
url = "http://someurl/path"
username = "someuser"
token = "239487svksjdf08234"
request = urllib.request.Request(url)
base64string = base64.b64encode((username + ":" + token).encode("ascii"))
request.add_header("Authorization", "Basic {}".format(base64string.decode("ascii")))
response = urllib.request.urlopen(request)
response.read() # final response string
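As an aside (not required for the fix), if third-party packages are an option, the requests library handles Basic Authentication directly; a minimal sketch with placeholder values:
import requests
from requests.auth import HTTPBasicAuth

url = "http://someurl/path"   # placeholder
response = requests.get(url, auth=HTTPBasicAuth("someuser", "239487svksjdf08234"))
response.raise_for_status()   # raises on 401 and other error codes
print(response.text)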

Google Custom Search via API is too slow

I am using Google Custom Search to index content on my website.
When I use a REST client to make the get request at
https://www.googleapis.com/customsearch/v1?key=xxx&q=query&cx=xx
I get response in sub seconds.
But when I try to make the call from my code, it takes up to six seconds. What am I doing wrong?
__author__ = 'xxxx'
import urllib2
import logging
import gzip
from cfc.apikey.googleapi import get_api_key
from cfc.url.processor import set_query_parameter
from StringIO import StringIO
CX = 'xxx:xxx'
URL = "https://www.googleapis.com/customsearch/v1?key=%s&cx=%s&q=sd&fields=kind,items(title)" % (get_api_key(), CX)
def get_results(query):
    url = set_query_parameter(URL, 'q', query)
    request = urllib2.Request(url)
    request.add_header('Accept-encoding', 'gzip')
    request.add_header('User-Agent', 'cfc xxxx (gzip)')
    response = urllib2.urlopen(request)
    if response.info().get('Content-Encoding') == 'gzip':
        buf = StringIO(response.read())
        f = gzip.GzipFile(fileobj=buf)
        data = f.read()
    return data
I have implemented the performance tips mentioned in Performance Tips. I would appreciate any help. Thanks.
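One way to narrow this down is to time the HTTP round trip separately from the rest of the code (API-key lookup, URL building, gzip decompression); a small sketch, assuming url has already been built as above:
import time
import urllib2

start = time.time()
request = urllib2.Request(url)
request.add_header('Accept-encoding', 'gzip')
response = urllib2.urlopen(request)
body = response.read()
print "HTTP round trip took %.2f s, %d bytes" % (time.time() - start, len(body))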

curl html file through mailchimp inlineCss API

I am using MailChimp's inline-css form at: http://beaker.mailchimp.com/inline-css. It does a great job of preparing an HTML file for sending as an email.
I have an API key. I prefer not to have to run their PHP app for only one API call. Is it possible to use curl to access their inlineCss API? If so, what is the syntax?
Here is the doc page: http://apidocs.mailchimp.com/api/1.2/inlinecss.func.php
See also line: 2096 of this gist: https://gist.github.com/740362
My key looks something like:
f1b46???????????????????f2d5-us2
Here is a start of what I would like to achieve:
curl post -d #input.html apiKey=xxxxxxxx "http://us1.api.mailchimp.com/1.2/"
Thank you
This is what I hacked up, for anyone looking for a similar solution. Comments and other options are welcome:
import os
import re
import sys
import urllib
import mechanize
import xml.sax.saxutils as saxutils
from xml.sax.saxutils import unescape

try:
    issueRoot = os.environ['newslettersroot'] + os.environ['currYear'] + '/' + os.environ['issueRoot'] + '/'
except KeyError:
    print "Please run init.bat"
    sys.exit(1)
srcEmailFilename = 'email.html'
dstEmailFilename = 'email_inline_css.html'
# retrieve <body> section only
html = open(issueRoot + srcEmailFilename, 'rb').read()
html = re.findall("(?si)<body.*?</body>", html)[0]
# use mailchimp inlineCss site to inject class rules into html tags
response = mechanize.urlopen("http://beaker.mailchimp.com/inline-css")
# retrieve form
form = mechanize.ParseResponse(response, backwards_compat=False)[0]
form["html"] = html
# form["strip"] = "checked"
# submit form and retrieve result
html = mechanize.urlopen(form.click()).read()
match = re.search('<textarea name="text" cols="100" rows="12">(.*?)</textarea>', html, re.DOTALL | re.IGNORECASE | re.MULTILINE)
if not match:
    print html
    exit("Expected to find output from mailchimp.")
# clean up output
html = match.group(1)
html = saxutils.unescape(html)
html = urllib.unquote_plus(html)
html = unescape(html, {"&apos;": "'", "&quot;": '"'})
html = html.replace('&amp;', '&').replace('%2F', '/').replace('%3A', ':')
# #sed -r 's/ class="[a-zA-Z0-9-]+"//g' %newslettersroot%%currYear%\%issueRoot%\email_inlinedcss.html > %newslettersroot%%currYear%\%issueRoot%\email_removedstyle.html
# strip class attributes
html = re.sub(r'(?sim)\s*class="[a-zA-Z0-9-]+"', "", html)
fh = open(issueRoot + dstEmailFilename, 'wb')
fh.write(html)
fh.close()