How do you make HTTP requests with Raku? - raku

How do you make HTTP requests with Raku? I'm looking for the equivalent of this Python code:
import requests
headers = {"User-Agent": "python"}
url = "http://example.com/"
payload = {"hello": "world"}
res = requests.get(url, headers=headers)
res = requests.post(url, headers=headers, json=payload)

You may want to try out the recent HTTP::Tiny module.
use HTTP::Tiny;
my $response = HTTP::Tiny.new.get( 'https://example.com/' );
say $response<content>.decode
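The question also shows a POST with a JSON payload. With HTTP::Tiny that should look roughly like the sketch below (this assumes the Raku port keeps the Perl-style headers/content options and uses JSON::Fast for serialization; example.com is just a placeholder):
use HTTP::Tiny;
use JSON::Fast;
# Hedged sketch: POST a JSON body with a custom User-Agent header
my %payload = hello => 'world';
my $response = HTTP::Tiny.new.post: 'http://example.com/',
    headers => { 'User-Agent' => 'raku', 'Content-Type' => 'application/json' },
    content => to-json(%payload);
say $response<status>;
say $response<content>.decode if $response<content>;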

After searching around a bit, I found an answer in the Cro docs.
use Cro::HTTP::Client;
my $resp = await Cro::HTTP::Client.get('https://api.github.com/');
my $body = await $resp.body;
# `$body` is a hash
say $body;
There's more information on headers and POST requests in the link.
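For the POST-with-JSON part of the question, the Cro docs describe passing a body and content-type directly; a minimal sketch (example.com is a placeholder) would be:
use Cro::HTTP::Client;
# Sketch: POST a hash as a JSON body with a custom header
my $resp = await Cro::HTTP::Client.post: 'http://example.com/',
    headers => [ User-Agent => 'raku' ],
    content-type => 'application/json',
    body => { hello => 'world' };
say await $resp.body;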

I want to contribute a little more. There is a fantastic module named WWW.
It's very convenient for making GET requests that return JSON, because the response is parsed automagically. From its examples:
use WWW;
my $response = jget('https://httpbin.org/get?foo=42&bar=x');
You can examine the result using ordinary array and hash operations. For example, to extract a value from my response I can use:
$response<object_you_want_of_json><other_nested_object>[1]<the_last_level>
Here [1] indexes into a list nested inside a hash, and the keys work the same way (see the concrete sketch below). Welcome to the Raku community!
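To make that concrete with the httpbin call above (httpbin echoes the query parameters back under an args key), you could write, for example:
use WWW;
my $response = jget('https://httpbin.org/get?foo=42&bar=x');
say $response<args><foo>;   # the echoed query parameter, "42"
say $response<url>;         # the URL httpbin received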

Related

get the params value from a variable

I have one feature file:
Feature: Getting the Token
Background:
header Content-Type 'application/json'
def CookieGenerator = Java.type('com.ade.Helpers.CookiesGenerator');
def endpoints read('classpath: src/test/java/com/ade/resources/endpoints.json')
Given url endpoints.token
Scenario: To check the Schema of the response
Given cookies (new CookieGenerator().getCookieValue())
When method GET
Then status 200
def txnToken = response
#print token
From the above code I am getting the token's value, something like "gdjsgjshjhsjfhsg646".
Now I have another feature file where I have to use that token's value in my query parameter, as follows:
Feature: Testing datent Name and Client
Background:
header Content-Type 'application/json'
def endpoints read('classpath:src/test/java/com/ade/resources/endpoints.json')
def CookieGenerator = Java.type('com.ade.Helpers.CookiesGenerator')
call read('Token.feature')
Given url baseUrl+endpoints.dit.Client.path
Scenario: To check the Schema of the response
Given def head = read('classpath:src/test/java/com/ade/resources/reqpay.json')
def req head.data[1]
And cookies (new CookieGenerator().getCookieValue())
And request req
And param {txntoken = txnToken}
When method post
Then status 200
From the above, my endpoint should be like https://something.com/clients?txntoken='gdjsgjshjhsjfhsg646', but I am getting https://something.com/clients?txntoken=txnToken instead.
Your post is hard to read; as @peter-thomas said, please try to format it better in the future, or edit the post if I haven't answered your question.
I believe what you're looking for is described in the documentation here
* def signIn = call read('classpath:my-signin.feature') { username: 'john', password: 'secret' }
* def authToken = signIn.authToken
You can see how information can be passed between feature files.
I also asked a similar question fairly recently here; the relevant bit is:
* def key = karate.call('ReadRoundUpSubscription.feature');
* def keyvalue = key.acckey
I prefer to call features like this rather than defining things in the reusable feature.

Falcon - Difference in stream type between unittests and actual API on post

I'm trying to write unit tests for my Falcon API, and I encountered a really weird issue when I tried reading the body I added in the unit tests.
This is my unit test:
class TestDetectionApi(DetectionApiSetUp):
    def test_valid_detection(self):
        headers = {"Content-Type": "application/x-www-form-urlencoded"}
        body = {'test': 'test'}
        detection_result = self.simulate_post('/environments/e6ce2a50-f68f-4a7a-8562-ca50822b805d/detectionEvaluations',
                                              body=urlencode(body), headers=headers)
        self.assertEqual(detection_result.json, None)
and this is the part in my API that reads the body:
def _get_request_body(request: falcon.Request) -> dict:
    request_stream = request.stream.read()
    request_body = json.loads(request_stream)
    validate(request_body, REQUEST_VALIDATION_SCHEMA)
    return request_body
Now for the weird part: my function for reading the body works without any issue when I run the API, but when I run the unit tests the stream type seems to be different, which affects how it is read.
The stream type when running the API is gunicorn.http.body.Body; under the unit tests it is wsgiref.validate.InputWrapper.
So when reading the body from the API all I need to do is request.stream.read(), but in the unit tests I need to do request.stream.input.read(), which is pretty annoying since I would have to change my original code to handle both cases, and I don't want to do that.
Is there a way to fix this issue? Thanks!!
It seems the issue was with how I read it: instead of using stream I used bounded_stream, which seemed to work. I also removed the headers and just encoded the body.
My unit test:
class TestDetectionApi(DetectionApiSetUp):
    def test_valid_detection(self):
        body = '''{"test": "test"}'''
        detection_result = self.simulate_post('/environments/e6ce2a50-f68f-4a7a-8562-ca50822b805d/detectionEvaluations',
                                              body=body.encode())
        self.assertEqual(detection_result.json, None)
how I read it:
def _get_request_body(request: falcon.Request) -> dict:
    request_stream = request.bounded_stream.read()
    request_body = json.loads(request_stream)
    validate(request_body, REQUEST_VALIDATION_SCHEMA)
    return request_body

Authenticated api call to VALR - Python 3.8

I'm trying to make an authenticated API call to the VALR crypto exchange as a first step towards automated trading. They provide most of the code, so I thought it would be easy even as a non-coding techie. The code below does create the correct HMAC SHA512 signature using the API secret provided for testing, but I have a problem passing this result along to the next section of code that requests the balances (starting at line 17). If I cut and paste the displayed 'signature' and 'timestamp' back into the code after running it, it does in fact work. So what changes do I need to make so that the code automatically picks up the signature and timestamp? The user-defined function appears to keep all its parameters "secret" from the rest of the code, especially after using return.
import time
import hashlib
import hmac

def sign_request(api_key_secret, timestamp, verb, path, body=""):
    payload = "{}{}{}{}".format(timestamp, verb.upper(), path, body)
    message = bytearray(payload, 'utf-8')
    signature = hmac.new(bytearray(api_key_secret, 'utf-8'), message, digestmod=hashlib.sha512).hexdigest()
    print("Signature=", signature)
    print("Timestamp", timestamp)
    return signature

sign_request(verb="GET", timestamp=int(time.time()*1000), path="/v1/account/balances",
             api_key_secret="4961b74efac86b25cce8fbe4c9811c4c7a787b7a5996660afcc2e287ad864363")

import requests

url = "https://api.valr.com/v1/account/balances"
payload = {}
headers = {
    'X-VALR-API-KEY': '2589fb273e86aeee10bac1445232aa302feb37e27d32c1c599abc3757599139e',
    'X-VALR-SIGNATURE': 'signature',
    'X-VALR-TIMESTAMP': 'timestamp'
}
response = requests.request("GET", url, headers=headers, data=payload)
print(response.text.encode('utf8'))
Well, after some hard thinking I decided to switch to using global variables. The HMAC still worked fine and gave me a signature. Then I removed the quotes around signature and timestamp and realised the timestamp was still an integer; once I converted it to a string, everything started to work perfectly. Maybe someone else will make use of this. If you want to make a POST request, remember to put single quotes around anything in the {body} statement to make it a string.
Here is the code that I am currently using for a GET request from VALR. It's been working fine for many months. You will need to change the path and the url to correspond to whatever you are trying to get, and obviously you will need to add your_api_key and your_api_secret.
If you need to send through other request parameters, like transaction types, you will need to include them in both the path and the url, e.g. https://api.valr.com/v1/account/transactionhistory?skip=0&limit=100&transactionTypes=MARKET_BUY&currency=ZAR
import time
import hashlib
import hmac
import json
import requests
import pandas as pd

def get_orders():  # get open orders from VALR
    timestamp = int(time.time()*1000)
    verb = "GET"
    path = "/v1/orders/open"
    body = ""
    api_key_secret = 'your_api_secret'
    payload = "{}{}{}".format(timestamp, verb.upper(), path)
    message = bytearray(payload, 'utf-8')
    signature = hmac.new(bytearray(api_key_secret, 'utf-8'), message, digestmod=hashlib.sha512).hexdigest()
    timestamp_str = str(timestamp)
    url = "https://api.valr.com/v1/orders/open"
    headers = {
        'Content-Type': 'application/json',
        'X-VALR-API-KEY': 'your_api_key',
        'X-VALR-SIGNATURE': signature,
        'X-VALR-TIMESTAMP': timestamp_str,
    }
    response = requests.request("GET", url, headers=headers, data=body)
    orders = json.loads(response.text)
    orders = pd.DataFrame.from_dict(orders)
    print(orders)

How to pull data from CATSone API with Authentication Token?

I am completely new to coding. I am trying to build a dashboard in Klipfolio. I am using the CATSone API to pull data from CATSone into Klipfolio. However, I can only get 100 rows at a time, which means I would have to pull data 2600 times.
I am now trying to build a script to get data from the API through the Google Apps Script editor. However, since I have no experience with this, I am just trying stuff. I watched some videos, including some by Ben Collins. The basics are simple, and I get what he is doing.
However, I have a problem with passing the API key.
var API_KEY = 'key';

function callCATSone() {
  // Call the CATSone API for the full candidate list
  var response = UrlFetchApp.fetch("https://api.catsone.nl/v3/candidates");
  Logger.log(response.getContentText());

  // URL and params for the API
  var url = 'https://api.catsone.nl/v3/candidates';
  var params = {
    'method': 'GET',
    'muteHttpExceptions': true,
    'headers': {
      'Authorization': 'key ' + apikey
    }
  };

  // call the API
  var response = UrlFetchApp.fetch(url, params);
  var data = response.getContentText();
  var json = JSON.parse(data);
}
In the end, I would like to transfer all the candidate list data to my sheets. Therefore, I call the API with an Authorization key. After that I will manipulate the data, but that's for later. The first problem I encounter is this error:
'Request for https://api.catsone.nl/v3/candidates failed. Error code: 401. Truncated server response: {"message":"Invalid credentials."} (Use the muteHttpExceptions option to examine the full response.) (line 6, file 'Code')'.
I expect to get a list of all data from CATSone into my sheets.
Does anyone know how I can accomplish this?
Two changes should fix the credentials error:
The Authorization header should be Authorization: 'Token ' + yourApiKey instead of 'key ', see the v3 API documentation: https://docs.catsone.com/api/v3/#authentication.
The API key in your case is stored in the global variable API_KEY, so you should reference it exactly like that, not as apikey (unless there is a typo in your sample or some missing code): Authorization: 'Token ' + API_KEY.
Btw, you should probably also set either a Content-Type header or the contentType parameter of the UrlFetchApp.fetch() call to application/json, as the UrlFetchApp.fetch() request content type defaults to application/x-www-form-urlencoded.
If you plan to continue working with APIs, it would be beneficial to read this MDN article.

Crawl Wikipedia using ASP.NET HttpWebRequest

I am new to Web Crawling, and I am using HttpWebRequest to crawl data from sites.
So far I have successfully been able to crawl and get data from my WordPress site. This was simple user profile data (like name, email, AIM id, etc.).
Now, as an exercise, I want to crawl Wikipedia: I will search using the value entered into a textbox on my end, query Wikipedia with that search value, and get the appropriate title(s) from the search results.
Now I have the following doubts/difficulties.
Firstly, is this even possible? I have heard that Wikipedia has robots.txt set up to block this, though I have heard this only from a friend and hence am not sure.
I am using the same procedure I used earlier, but I am not getting the required results.
Thanks !
Update:
After some explanation and help from @svick, I tried the code below, but I am still not able to get any value (see the last line of the code, where I am expecting the HTML markup of the search results page):
string searchUrl = "http://en.wikipedia.org/w/index.php?search=Wikipedia&title=Special%3ASearch";
var postData = new StringBuilder();
postData.Append("search=" + model.Query);
postData.Append("&");
postData.Append("title" + "Special:Search");
byte[] data2 = Crawler.GetEncodedData(postData.ToString());
var webRequest = (HttpWebRequest)WebRequest.Create(searchUrl);
webRequest.Method = "POST";
webRequest.UserAgent = "Crawling HW (http://yassershaikh.com/contact-me/)";
webRequest.AllowAutoRedirect = false;
ServicePointManager.Expect100Continue = false;
Stream requestStream = webRequest.GetRequestStream();
requestStream.Write(data2, 0, data2.Length);
requestStream.Close();
var responseCsv = (HttpWebResponse)webRequest.GetResponse();
Stream response = responseCsv.GetResponseStream();
// Todo Parsing
var streamReader = new StreamReader(response);
string val = streamReader.ReadToEnd();
// val is empty !! <-- this is my problem !
And here is my GetEncodedData method definition:
public static byte[] GetEncodedData(string postData)
{
    var encoding = new ASCIIEncoding();
    byte[] data = encoding.GetBytes(postData);
    return data;
}
Please help me with this.
You probably don't need to use HttpWebRequest. Using WebClient (or HttpClient if you're on .Net 4.5) will be much easier for you.
robots.txt doesn't actually block anything. If something doesn't support it (and .Net doesn't support it), it can access anything.
Wikipedia does block requests that don't have their User-Agent header set. And you should use an informative User-Agent string with your contact information.
A better way to access Wikipedia is to use its API, rather than scraping. This way, you will get an answer that's specifically meant to be read by custom applications, formatted as XML or JSON. There are also dumps containing all the information from Wikipedia available for download.
EDIT: The problem with your newly posted code is that your query returns a 302 Moved Temporarily response pointing to the searched article, if it exists. Either remove the line that disables AllowAutoRedirect, or add &fulltext=Search to your query, which means you won't get redirected.