Fiddler script to automatically save response body

I need help writing a script for fiddler. What I need is to automatically save a certain response body every time it comes up in the session window.
I have tried to follow the instructions in this post, Fiddler Script - SaveResponseBody(), but I just get an error when I try to save CustomRules.js. (I could be inserting it incorrectly or in the wrong place.)
I am new to fiddler and scripts so any help here would be greatly appreciated.
I have tried adding this:
static function OnBeforeResponse(oSession: Session) {
    if (oSession.url.EndsWith(".png")) {
        oSession.SaveResponseBody(); // Actual content of OnBeforeResponse function.
    }
}
and then adding this:
if ((oSession.responseCode == 200) &&
    oSession.oResponse.headers.ExistsAndContains("Content-Type", "image/png")) {
    SaveResponseBody("C:\\temp\\" + oSession.SuggestedFilename);
}
to the CustomRules.js script.

SaveResponseBody is a method on the oSession object.
oSession.SaveResponseBody("C:\\temp\\" + oSession.SuggestedFilename);

Be sure to add your code within the OnBeforeResponse(oSession: Session) { ... } function.
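For example, a minimal sketch that combines your two fragments might look like this (the C:\temp path is just an example; utilDecodeResponse is called so the decoded body is saved):
static function OnBeforeResponse(oSession: Session) {
    // Save every successful PNG response to disk (example path).
    if ((oSession.responseCode == 200) &&
        oSession.oResponse.headers.ExistsAndContains("Content-Type", "image/png")) {
        oSession.utilDecodeResponse(); // remove chunking/compression before saving
        oSession.SaveResponseBody("C:\\temp\\" + oSession.SuggestedFilename);
    }
}
Note that CustomRules.js already contains an OnBeforeResponse function, so paste the body into the existing one rather than declaring a second function; a duplicate declaration is a likely cause of the error you see when saving.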
The following code will save the request and response body of any URL that contains "procedimentoservice" when the response code differs from 200 (OK).
if (oSession.PathAndQuery.ToLower().Contains("procedimentoservice"))
{
    if (oSession.responseCode != 200)
    {
        var directory2 = "C:\\log\\NEXT\\";
        var filename2 = oSession.oRequest.headers['SOAPAction'].ToString().Replace('"','') + "_" + Guid.NewGuid();
        var path2: String = System.IO.Path.Combine(directory2, filename2);
        oSession.SaveRequestBody(path2 + "_request.txt");
        oSession.SaveResponseBody(path2 + "_response.txt");
    }
}
File names will be in the following format:
c:\log\NEXT\CriarEvento_fa15709e-b2a8-402d-a623-e4f01e6e8ae1_request.txt
c:\log\NEXT\CriarEvento_fa15709e-b2a8-402d-a623-e4f01e6e8ae1_response.txt
c:\log\NEXT\CriarEvento_ff650cf8-8fe6-47a2-8552-a4d8bce246f3_request.txt
c:\log\NEXT\CriarEvento_ff650cf8-8fe6-47a2-8552-a4d8bce246f3_response.txt


HTTP request won't get data from API. Gamemaker Studio 1.4.9

I'm trying to figure out how to get information from a dictionary API in GameMaker Studio 1.4.9.
I'm lost, since I can't figure out how to get around the API's server block; all my request returns is a blank result.
Step Event:
if (keyboard_check_pressed(vk_space)) {
    http_get("https://api.dictionaryapi.dev/api/v2/entries/en/test");
}
HTTP Event:
var requestResult = ds_map_find_value(async_load, "result");
var resultMap = json_decode(requestResult);
if (resultMap == -1)
{
    show_message("Invalid result");
    exit;
}
if (ds_map_exists(resultMap, "word")) {
    var name = ds_map_find_value(resultMap, "word");
    show_message("The word name is " + name);
}
Maybe my formatting is wrong? It's supposed to say the word test in the show_message function, but again, all I get returned is a blank result.
Any help would be appreciated, thanks!
You can see through the debugger that the data is coming back from the server, but your code does not correctly retrieve the word: the API returns a top-level JSON array, and json_decode wraps a top-level array in a ds_map under the "default" key, so the word has to be read from the first entry of the list stored there.
https://imgur.com/a/icQSnnx
This code gets the word:
show_debug_message("http received");
var requestResult = ds_map_find_value(async_load, "result");
var resultMap = json_decode(requestResult);
if (resultMap == -1)
{
    show_message("Invalid result");
    exit;
}
if (ds_map_exists(resultMap, "default")) {
    // json_decode stores a top-level JSON array as a ds_list under the "default" key
    var defaultList = ds_map_find_value(resultMap, "default");
    var Map = ds_list_find_value(defaultList, 0);
    var name = ds_map_find_value(Map, "word");
    show_message("The word name is " + name);
}

How to use URLSession downloadTask(withResumeData:) to restart a download when didCompleteWithError is called?

I have code that downloads two files from a server and stores them locally using URLSession (let dataTask = defaultSession.downloadTask(with: url)). Everything is working fine except that the first file downloads successfully while the second file does not download completely. So I hope there is a way to restart the download for the second file when it gives an error.
I think there is a way of doing that and started looking into it, and I found the delegate method below, but it was not much help. Can anyone please help me work out how to restart a download if it fails? Do I have to use handleEventsForBackgroundURLSession to clean up previous downloads?
// The download method below is triggered when I receive the filenames; I pass them in here, and path is optional.
func download(path: String?, filenames: [String]) -> Int {
    for filename in filenames {
        var downloadFrom = "ftp://" + username! + ":"
        downloadFrom += password!.addingPercentEncoding(withAllowedCharacters: .urlPasswordAllowed)! + "@" + address!
        if let downloadPort = port {
            downloadFrom += ":" + String(downloadPort) + "/"
        } else {
            downloadFrom += "/"
        }
        if let downloadPath = path {
            if !downloadPath.isEmpty {
                downloadFrom += downloadPath + "/"
            }
        }
        downloadFrom += filename
        if let url = URL(string: downloadFrom) {
            let dataTask = defaultSession.downloadTask(with: url)
            dataTask.resume()
        }
    }
    return DLResponseCode.success
}
Please find the delegate methods below:
func urlSession(_ session: URLSession, downloadTask: URLSessionDownloadTask, didFinishDownloadingTo location: URL) {
    var responseCode = DLResponseCode.success
    // Move the file to a new URL
    let fileManager = FileManager.default
    let filename = downloadTask.originalRequest?.url?.lastPathComponent
    let destUrl = cacheURL.appendingPathComponent(filename!)
    do {
        let data = try Data(contentsOf: location)
        // Delete it if it exists first
        if fileManager.fileExists(atPath: destUrl.path) {
            do {
                try fileManager.removeItem(at: destUrl)
            } catch let error {
                danLogError("Clearing failed downloadFOTA file failed: \(error)")
                responseCode = DLResponseCode.datalogger.failToCreateRequestedProtocolPipe
            }
        }
        try data.write(to: destUrl)
    } catch {
        danLogError("Issue saving data locally")
        responseCode = DLResponseCode.datalogger.noDataConnection
    }
    // Complete the download message
    let message = DLBLEDataloggerChannel.Commands.download(responseCode: responseCode).description
    connectionManagerDelegate?.sendMessageToDatalogger(msg: message)
}

func urlSession(_ session: URLSession, task: URLSessionTask, didCompleteWithError error: Error?) {
    if error == nil {
        print("session \(session) download completed")
    } else {
        print("session \(session) download failed with error \(String(describing: error?.localizedDescription))")
        // session.downloadTask(withResumeData: <#T##Data#>)
    }
    guard error != nil else {
        return
    }
    danLogError("Session \(session) invalid with error \(String(describing: error))\n")
    let responseCode = DLResponseCode.datalogger.failToCreateRequestedProtocolPipe
    let message = DLBLEDataloggerChannel.Commands.download(responseCode: responseCode).description
    connectionManagerDelegate?.sendMessageToDatalogger(msg: message)
}
// When the didWriteData delegate method is called it prints the data below; it seems the complete data was not downloaded.
session <__NSURLSessionLocal: 0x103e37970> download task <__NSCFLocalDownloadTask: 0x108d2ee60>{ taskIdentifier: 2 } { running } wrote an additional 30028 bytes (total 988980 bytes) out of an expected 988980 bytes.
// Error that I am getting for the second file; it occurs most of the time, though not always.
session <__NSURLSessionLocal: 0x103e37970> download failed with error Optional("cancelled")
Please help me figure this out: whether there is a way to retry the download after it fails, or why it fails.
The resume data, if the request is resumable, should be in the NSError object's userInfo dictionary.
Unfortunately, Apple seems to have completely trashed the programming guide for NSURLSession (or at least I can't find it in Google search results), and the replacement content in the reference is missing all of the sections that talk about how to do proper error handling (even the constant that you're looking for is missing), so I'm going to have to describe it all from memory with the help of looking at the headers. Ick.
The key you're looking for is NSURLSessionDownloadTaskResumeData.
If that key is present, its value is a small NSData blob. Store that, then use the Reachability API (with the actual hostname from that request's URL) to decide when to retry the request.
After Reachability tells you that the server is reachable, create a new download task with the resume data and start it.
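As a rough sketch of that flow (pendingResumeData is a hypothetical property for stashing the blob, and the Reachability-triggered retry is only indicated in comments):
func urlSession(_ session: URLSession, task: URLSessionTask, didCompleteWithError error: Error?) {
    guard let nsError = error as NSError? else { return } // nil error means the task finished normally
    if let resumeData = nsError.userInfo[NSURLSessionDownloadTaskResumeData] as? Data {
        pendingResumeData = resumeData // hypothetical property used to keep the resume blob around
        // Later, once Reachability reports the host is reachable again:
        // session.downloadTask(withResumeData: resumeData).resume()
    }
}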

Pentaho - upload file using API

I need to upload a file using an API.
I tried the REST Client step and didn't find any suitable options.
I tried the HTTP Post step and it responded with 415.
Please suggest how to accomplish this.
Error 415 is “Unsupported media type”.
You may need to change the media type of the request or check whether that type of file is accepted by the remote server.
https://en.m.wikipedia.org/wiki/List_of_HTTP_status_codes
This solution uses only standard classes from JRE 7. Add a Modified Java Script Value step to your transformation. You will have to add two columns to the flow: URL_FORM_POST_MULTIPART_COLUMN and FILE_URL_COLUMN. You can add as many files as you want; you just have to call outputStreamToRequestBody.write more times.
try
{
    // In this step you will need to add two columns from the previous flow -> URL_FORM_POST_MULTIPART_COLUMN, FILE_URL_COLUMN
    var serverUrl = new java.net.URL(URL_FORM_POST_MULTIPART_COLUMN);
    var boundaryString = "999aaa000zzz09za";
    var openBoundary = java.lang.String.format("\n\n--%s\nContent-Disposition: form-data\nContent-Type: text/xml\n\n", boundaryString);
    var closeBoundary = java.lang.String.format("\n\n--%s--\n", boundaryString);

    // var netIPSocketAddress = java.net.InetSocketAddress("127.0.0.1", 8888);
    // var proxy = java.net.Proxy(java.net.Proxy.Type.HTTP, netIPSocketAddress);
    // var urlConnection = serverUrl.openConnection(proxy);
    var urlConnection = serverUrl.openConnection();

    urlConnection.setDoOutput(true); // Indicate that we want to write to the HTTP request body
    urlConnection.setRequestMethod("POST");
    //urlConnection.addRequestProperty("Authorization", "Basic " + Authorization);
    urlConnection.addRequestProperty("Content-Type", "multipart/form-data; boundary=" + boundaryString);

    var outputStreamToRequestBody = urlConnection.getOutputStream();
    outputStreamToRequestBody.write(openBoundary.getBytes(java.nio.charset.StandardCharsets.UTF_8));
    outputStreamToRequestBody.write(java.nio.file.Files.readAllBytes(java.nio.file.Paths.get(FILE_URL_COLUMN)));
    outputStreamToRequestBody.write(closeBoundary.getBytes(java.nio.charset.StandardCharsets.UTF_8));
    outputStreamToRequestBody.flush();

    var httpResponseReader = new java.io.BufferedReader(new java.io.InputStreamReader(urlConnection.getInputStream()));
    var lineRead = "";
    var finalText = "";
    while ((lineRead = httpResponseReader.readLine()) != null) {
        finalText += lineRead;
    }

    var status = urlConnection.getResponseCode();
    var result = finalText;
    var time = new Date();
}
catch (e)
{
    Alert(e);
}
I solved this by using the solution from http://www.dietz-solutions.com/2017/06/pentaho-data-integration-multi-part.html
Thanks Ben.
He's written a Java class for multi-part form submission. I extended it by adding a header for Authorization...
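For reference, a hedged sketch of what such an Authorization header could look like in the Modified Java Script Value approach above (user and pass are placeholders for your credentials; java.util.Base64 needs Java 8+, on JRE 7 javax.xml.bind.DatatypeConverter.printBase64Binary can be used instead):
// Hypothetical Basic auth header; user and pass stand in for your real credentials.
var credentials = java.util.Base64.getEncoder().encodeToString(
    new java.lang.String(user + ":" + pass).getBytes(java.nio.charset.StandardCharsets.UTF_8));
urlConnection.addRequestProperty("Authorization", "Basic " + credentials);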

Crunchbase Data API v3.1 to Google Sheets

I'm trying to pull data from the Crunchbase Open Data Map to a Google Spreadsheet. I'm following Ben Collins's script but it no longer works since the upgrade from v3 to v3.1. Anyone had any luck modifying the script for success?
var USER_KEY = 'insert your API key in here';

// function to retrieve organizations data
function getCrunchbaseOrgs() {
  var ss = SpreadsheetApp.getActiveSpreadsheet();
  var sheet = ss.getSheetByName('Organizations');
  var query = sheet.getRange(3,2).getValue();

  // URL and params for the Crunchbase API
  var url = 'https://api.crunchbase.com/v/3/odm-organizations?query=' + encodeURI(query) + '&user_key=' + USER_KEY;
  var json = getCrunchbaseData(url,query);

  if (json[0] === "Error:") {
    // deal with error with fetch operation
    sheet.getRange(5,1,sheet.getLastRow(),2).clearContent();
    sheet.getRange(6,1,1,2).setValues([json]);
  }
  else {
    if (json[0] !== 200) {
      // deal with error from api
      sheet.getRange(5,1,sheet.getLastRow(),2).clearContent();
      sheet.getRange(6,1,1,2).setValues([["Error, server returned code:",json[0]]]);
    }
    else {
      // correct data comes back, filter down to match the name of the entity
      var data = json[1].data.items.filter(function(item) {
        return item.properties.name == query;
      })[0].properties;

      // parse into array for Google Sheet
      var outputData = [
        ["Name",data.name],
        ["Homepage",data.homepage_url],
        ["Type",data.primary_role],
        ["Short description",data.short_description],
        ["Country",data.country_code],
        ["Region",data.region_name],
        ["City name",data.city_name],
        ["Blog url",data.blog_url],
        ["Facebook",data.facebook_url],
        ["Linkedin",data.linkedin_url],
        ["Twitter",data.twitter_url],
        ["Crunchbase URL","https://www.crunchbase.com/" + data.web_path]
      ];

      // clear any old data
      sheet.getRange(5,1,sheet.getLastRow(),2).clearContent();
      // insert new data
      sheet.getRange(6,1,12,2).setValues(outputData);
      // add image with formula and format that row
      sheet.getRange(5,2).setFormula('=image("' + data.profile_image_url + '",4,50,50)').setHorizontalAlignment("center");
      sheet.setRowHeight(5,60);
    }
  }
}
This code no longer pulls data as expected.
I couldn't confirm the error messages you got when you ran the script, so I would like to point out the clearest difference: it seems the endpoint was changed from https://api.crunchbase.com/v/3/ to https://api.crunchbase.com/v3.1/. So how about this modification?
From :
var url = 'https://api.crunchbase.com/v/3/odm-organizations?query=' + encodeURI(query) + '&user_key=' + USER_KEY;
To :
var url = 'https://api.crunchbase.com/v3.1/odm-organizations?query=' + encodeURI(query) + '&user_key=' + USER_KEY;
Note:
Your script calls getCrunchbaseData(), but its definition isn't shown, so if the script still doesn't work after you modify the endpoint, please check that function as well. You can see the details in "API v3 Compared to API v3.1" here.
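Since getCrunchbaseData() is called but not shown in the question, here is a minimal sketch of what that helper might look like, assuming it returns [status, parsedBody] to match how the calling code indexes json[0] and json[1]:
function getCrunchbaseData(url, query) {
  try {
    // muteHttpExceptions lets the script read non-200 responses instead of throwing
    var response = UrlFetchApp.fetch(url, { muteHttpExceptions: true });
    return [response.getResponseCode(), JSON.parse(response.getContentText())];
  }
  catch (e) {
    return ["Error:", e];
  }
}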
References :
API v3 Compared to API v3.1
Using the API
If this was not useful for you, I'm sorry.

Fiddler: Programmatically add word to Query string

Please be kind, I'm new to Fiddler
My purpose: I want to use Fiddler as a Google search filter.
Summary:
I'm tired of manually adding "dog" every time I use Google. I do not want "dog" appearing in my search results.
For example:
//www.google.com/search?q=cat+-dog
//www.google.com/search?q=baseball+-dog
CODE:
In the script below, "dog" is replaced with -torrent-watch-download.
// ==UserScript==
// @name Tamper with Google Results
// @namespace http://superuser.com/users/145045/krowe
// @version 0.1
// @description This just modifies google results to exclude certain things.
// @match http://*.google.com
// @match https://*.google.com
// @copyright 2014+, KRowe
// ==/UserScript==

function GM_main () {
    window.onload = function () {
        var targ = window.location;
        if (targ && targ.href && targ.href.match('https?:\/\/www.google.com/.+#q=.+') && targ.href.search("/+-torrent/+-watch/+-download") == -1) {
            targ.href = targ.href + "+-torrent+-watch+-download";
        }
    };
}

//-- This is a standard-ish utility function:
function addJS_Node(text, s_URL, funcToRun, runOnLoad) {
    var D = document, scriptNode = D.createElement('script');
    if (runOnLoad) scriptNode.addEventListener("load", runOnLoad, false);
    scriptNode.type = "text/javascript";
    if (text) scriptNode.textContent = text;
    if (s_URL) scriptNode.src = s_URL;
    if (funcToRun) scriptNode.textContent = '(' + funcToRun.toString() + ')()';
    var targ = D.getElementsByTagName('head')[0] || D.body || D.documentElement;
    targ.appendChild(scriptNode);
}

addJS_Node(null, null, GM_main);
At first I was going to go with Tampermonkey userscripts, because I did not know about Fiddler.
==================================================================================
Now, let's focus on Fiddler.
Before Request:
I want Fiddler to add text at the end of the Google query string.
Someone suggested that I use:
static function OnBeforeRequest(oSession: Session) {
    if (oSession.uriContains("targetString")) {
        var sText = "Enter a string to append to a URL";
        oSession.fullUrl = oSession.fullUrl + sText;
    }
}
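Applied to the Google example, a minimal sketch (assuming searches go to www.google.com and "-dog" is the term you want to exclude) could be:
static function OnBeforeRequest(oSession: Session) {
    // Prepend the exclusion term to the q= value of a Google search; "-dog" is only an example.
    if (oSession.HostnameIs("www.google.com") && oSession.uriContains("/search?q=") &&
        !oSession.uriContains("-dog")) {
        oSession.fullUrl = oSession.fullUrl.Replace("/search?q=", "/search?q=-dog+");
    }
}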
Before Response:
This is where my problem lies.
I totally love the HTML response. Now I just want to scrape/hide the word in the search box without changing the search results. How can it be done? Any ideas?
http://i.stack.imgur.com/4mUSt.jpg
Can you guys please take the above information and fix the problem for me?
Thank you
Based on the goal described above, I believe you can achieve better results with your own free Google Custom Search Engine service. In particular, with GCSE you have control for fine-tuning the results returned by regular Google search.
Links:
https://www.google.com/cse/all
https://developers.google.com/custom-search/docs/structured_search