JWT Bearer token in ABCChrome header

I am using ABCPdf 11 to convert HTML to PDF. The HTML page I need to convert requires a JWT token, so the token needs to be passed to ABCChrome so it can authenticate the request.
I have tried the following but the auth still fails:
doc.HtmlOptions.HttpAdditionalHeaders = $"Authorization: Bearer {accessToken}";
I followed example from here: https://www.websupergoo.com/helppdfnet/default.htm?page=source%2F5-abcpdf%2Fxhtmloptions%2F2-properties%2Fhttpadditionalheaders.htm
From the description in the above URL, I have also tried the below options:
doc.HtmlOptions.NoCookie = true;
doc.HtmlOptions.Media = MediaType.Screen;
After adding HttpAdditionalHeaders, when I get the HTTP status from the PDF library I get a 401 status code, which confirms the authentication is failing:
var imageId = doc.AddImageUrl(model.Url);
var status = doc.HtmlOptions.ForChrome.GetHttpStatusCode(imageId);
The status here is 401 - unauthorized

The HttpAdditionalHeaders property is not currently supported by the ABCChrome Engine. The only HtmlOptions supported by ABCChrome are specified here.
There are a few things you could try:
Check whether the target server supports sending the web token via GET request parameters - I guess you've probably done this already :-)
Point the AddImageUrl request at an intermediary web server (even a local HttpServer) running a script which can fetch the page for you based on any GET parameters.
If the service you are attempting to access accepts AJAX requests, you could try using JavaScript to inject the response into a page using XMLHttpRequest.setRequestHeader() - see the sketch below. NB if you use a local file (e.g. file://) for this you may come across some Chromium-enforced JavaScript security issues.
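A rough sketch of that last approach (assuming the target service allows the request from your page's origin and accessToken holds the JWT; the URL is a placeholder):
// Hedged sketch: fetch the protected page with the JWT attached, then inject it.
var xhr = new XMLHttpRequest();
xhr.open("GET", "https://example.com/protected-page", true); // placeholder URL
xhr.setRequestHeader("Authorization", "Bearer " + accessToken);
xhr.onload = function () {
    if (xhr.status === 200) {
        // Replace the current document's markup with the authenticated response.
        document.documentElement.innerHTML = xhr.responseText;
    }
};
xhr.send();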
I do know that WebSupergoo offer free support for all their licenses, including trial licenses.
Good luck.

I emailed ABCPdf support, and unfortunately ABCChrome does not support the HttpAdditionalHeaders property, so the workaround is to download the HTML ourselves and convert that to PDF; see the example below:
var imageId = doc.AddImageHtml(html); // <- html downloaded from auth url
Also don't forget to add paging:
// add all pages to pdf
while (doc.Chainable(imageId))
{
    doc.Page = doc.AddPage();
    imageId = doc.AddImageToChain(imageId);
}
for (int i = 1; i <= doc.PageCount; i++)
{
    doc.PageNumber = i;
    doc.Flatten();
}
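For reference, the download step itself might look roughly like this (a hedged sketch using HttpClient; accessToken and model.Url are the values from the question, and doc is the ABCPdf Doc instance):
using System.Net.Http;
using System.Net.Http.Headers;

// Fetch the authenticated page with the JWT attached, then hand the HTML to ABCPdf.
using (var client = new HttpClient())
{
    client.DefaultRequestHeaders.Authorization =
        new AuthenticationHeaderValue("Bearer", accessToken);
    var html = client.GetStringAsync(model.Url).Result; // or await in an async method
    var imageId = doc.AddImageHtml(html);
    // ...then add the remaining pages with the chaining loop shown above.
}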

Related

Github Enterprise Raw URL Gist Unable to Download

I'm able to get a list of gists and their files from https://api.git.mygithub.net/users/myuser/gists?per_page=100&page=1, which I found using the docs here: https://docs.github.com/en/free-pro-team#latest/rest/reference/gists#get-a-gist
The files on the gist object have a raw_url. If I fetch the raw_url with the same token, it fails and asks me to authenticate. If I add the header Accept: application/vnd.github.v3.raw, it returns a 406 Not Acceptable. I've seen references to that header around.
I'm not sure what the scope on the token should be. It seems like it would be the same one I used to access the API. In the UI, if you click the raw file, it gets a token appended to the URL. That token doesn't look like one of the personal access tokens mentioned here: https://docs.github.com/en/free-pro-team#latest/github/authenticating-to-github/creating-a-personal-access-token
So what is the format of the HTTP request to download the raw gist?
The raw URL needs to have the gist. hostname prefix changed to raw., and the URL path needs to start with /gist/.
Example code in Go that fixes it:
url := gistFile.RawUrl
url = strings.Replace(url, "gist.", "raw.", 1)
url = strings.Replace(url, ".net/", ".net/gist/", 1)
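Putting it together, the full request might look roughly like this (a sketch; the raw URL and token in main are placeholders, and it assumes the raw host accepts the same Authorization header you already use for the API):
package main

import (
    "fmt"
    "io"
    "log"
    "net/http"
    "strings"
)

// fetchRawGist downloads a gist file's raw content. rawUrl is the raw_url
// from the gists API and token is the same personal access token used there.
func fetchRawGist(rawUrl, token string) (string, error) {
    // Rewrite the hostname and path as described above.
    url := strings.Replace(rawUrl, "gist.", "raw.", 1)
    url = strings.Replace(url, ".net/", ".net/gist/", 1)

    req, err := http.NewRequest("GET", url, nil)
    if err != nil {
        return "", err
    }
    // Assumption: the raw host accepts the same Authorization header as the API.
    req.Header.Set("Authorization", "token "+token)

    resp, err := http.DefaultClient.Do(req)
    if err != nil {
        return "", err
    }
    defer resp.Body.Close()

    body, err := io.ReadAll(resp.Body)
    if err != nil {
        return "", err
    }
    return string(body), nil
}

func main() {
    // Placeholder URL and token for illustration only.
    content, err := fetchRawGist("https://gist.git.mygithub.net/myuser/abc123/raw/file.txt", "mytoken")
    if err != nil {
        log.Fatal(err)
    }
    fmt.Println(content)
}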

how to figure out how to authenticate myself using http requests

I am trying to log in to a site using requests as follows:
s = requests.Session()
login_data = {"userName":"username", "password":"pass", "loginPath":"/d2l/login"}
resp = requests.post("https://d2l.pima.edu/d2l/login?login=1", login_data)
although I am getting a 200 response, when I say
print(resp.content)
b"<!DOCTYPE html><html><head><meta charset='utf-8' /><script>var hash = window.location.hash;if( hash ) hash = '%23' + hash.substring( 1 );window.location.replace('/d2l/login?sessionExpired=0&target=%2fd2l%2ferror%2f404%2flog%3ftargetUrl%3dhttp%253A%252F%252Fd2l.pima.edu%253A80%252Fd2l%252Flogin%253Flogin%253D1' + hash );</script><title></title></head><body></body></html>"
notice it says session expired.
What I've tried:
logging back out and in in the actual browser, no success.
http basic auth, no success.
I'm thinking maybe I need to authenticate myself to this site using cookies?
If so how do I determine which cookies to send it?
I tried figuring this out by saying
resp.cookies
Out[4]: <RequestsCookieJar[]>
shouldn't this be giving me names of cookies? I'm not sure what to do with such output.
Main Point: HOW DO I FIGURE OUT HOW TO AUTHENTICATE MYSELF TO THIS WEBSITE?
Help is appreciated.
I would rather not use selenium.
From loading this page https://d2l.pima.edu/d2l/login and viewing its source, you'll notice the POST target path is /d2l/lp/auth/login/login.d2l. Try using that as your POST path. Your other fields look consistent with the form's expectations.
Note: with python requests, if you create a session object, use it to make your requests:
resp = s.post(<blah blah>, login_data)
The session will hold any cookies set by the login server, and you can continue to use the s object to make requests in the authenticated session.
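Putting both points together, a minimal sketch (the form field names are taken from the question and assumed to match what the login form actually expects) looks like this:
import requests

s = requests.Session()
login_data = {"userName": "username", "password": "pass", "loginPath": "/d2l/login"}

# Post to the form's real action path and reuse the session afterwards.
resp = s.post("https://d2l.pima.edu/d2l/lp/auth/login/login.d2l", data=login_data)
print(resp.status_code)
print(s.cookies)  # cookies set by the login server are stored on the session

# Subsequent requests in the authenticated session go through the same object,
# e.g. page = s.get("https://d2l.pima.edu/d2l/home")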

intermittent error from rally 'Not authorized to perform action: Invalid key' for POST request in chrome extension

I developed a chrome extension using Rally's WSAPI v2.0, and it basically does the following things:
get user and project, and store them
get the current iteration every time
send a post request to create a workitem
For the THIRD step, I have sometimes been getting the error ["Not authorized to perform action: Invalid key"] since the end of last month.
[Updated] The error can be reproduced every time if I log in to the Rally website via SSO before using the extension to send requests via the API key.
What's the best practice for sending subsequent requests via the API key in my extension, since I can't control end users' habits?
I did see some similar posts but none of them was helpful... and in case it helps:
I'm adding ZSESSIONID: apikey in my request header instead of a username / password to authenticate, so I believe no security token is needed (https://comm.support.ca.com/kb/api-key-and-oauth-client-faq/kb000011568)
the URL starts with https://rally1.rallydev.com/slm/webservice/v2.0/
the issue is fixed after clearing cookies for https://rally1.rallydev.com/, but somehow it appears again some time later
I checked the cookies when the issue was reproduced, and found one named ZSESSIONID whose value had become something other than the API key. Not sure if that matters though...
code for request:
function initXHR(method, url, apikey, cbFunc) {
    let httpRequest = new XMLHttpRequest();
    ...
    httpRequest.open(method, url);
    httpRequest.setRequestHeader('Content-Type', 'application/json');
    httpRequest.setRequestHeader('Accept', 'application/json');
    httpRequest.setRequestHeader('ZSESSIONID', apikey);
    httpRequest.onreadystatechange = function() {
        ...
    };
    return httpRequest;
}
...
usReq = initXHR ('POST', baseURL+'hierarchicalrequirement/create', apikey, function(){...});
Does anyone have any ideas / suggestions? Thanks a million!
I've seen this error when the API key had both read-only and full-access grants configured. I would start by making sure your key only has the full-access grant.

Receiving "Invalid policy document or request headers!"

I am attempting to upload a file to S3 following the examples provided in your documentation and source files. Unfortunately, I'm receiving the following errors when attempting an upload:
[Fine Uploader 5.3.2] Invalid policy document or request headers!
[Fine Uploader 5.3.2] Policy signing failed. Invalid policy document or request headers!
I found a few posts on here with similar errors, but those solutions didn't help me.
Here is my jQuery:
<script>
    $('#fine-uploader').fineUploaderS3({
        request: {
            endpoint: "http://mybucket.s3.amazonaws.com",
            accessKey: "changeme"
        },
        signature: {
            endpoint: "endpoint.php"
        },
        uploadSuccess: {
            endpoint: "success.html"
        },
        template: 'qq-template'
    });
</script>
(Please note that I changed the keys/bucket names for security sake.)
I used your endpoint-cors.php as a model and have included the portions that I modified here:
require 'assets/aws/aws-autoloader.php';
use Aws\S3\S3Client;
// These assume you have the associated AWS keys stored in
// the associated system environment variables
$clientPrivateKey = $_ENV['changeme'];
// These two keys are only needed if the delete file feature is enabled
// or if you are, for example, confirming the file size in a successEndpoint
// handler via S3's SDK, as we are doing in this example.
$serverPublicKey = $_ENV['AWS_SERVER_PUBLIC_KEY'];
$serverPrivateKey = $_ENV['AWS_SERVER_PRIVATE_KEY'];
// The following variables are used when validating the policy document
// sent by the uploader.
$expectedBucketName = $_ENV['mybucket'];
// $expectedMaxSize is the value you set the sizeLimit property of the
// validation option. We assume it is `null` here. If you are performing
// validation, then change this to match the integer value you specified
// otherwise your policy document will be invalid.
// http://docs.fineuploader.com/branch/develop/api/options.html#validation-option
$expectedMaxSize = (isset($_ENV['S3_MAX_FILE_SIZE']) ? $_ENV['S3_MAX_FILE_SIZE'] : null);
I also changed this:
// Only needed in cross-origin setups
function handleCorsRequest() {
// If you are relying on CORS, you will need to adjust the allowed domain here.
header('Access-Control-Allow-Origin: http://test.mydomain.com');
}
The POST seems to work:
POST http://test.mydomain.com/somepath/endpoint.php 200 OK
318ms
...but that's where the success ends.
I think part of the problem is that I'm not sure what to enter for "clientPrivateKey". Is that my "Secret Access Key" I set up with IAM?
And I'm definitely unclear on where I get the serverPublicKey and serverPrivateKey. Where do I generate a key pair for S3? I've combed through the docs, and perhaps I missed it.
Thank you in advance for your assistance!
First off, you are using endpoint-cors.php in a non-CORS environment. Communication between the browser and your endpoint appears to be same-origin, based on the URL of your signature endpoint. Switch to the endpoint.php example.
Regarding your questions about the keys, you should create two distinct IAM users: one for client-side operations (heavily restricted) and one for server-side operations (an admin user). For each user, you'll have an access key (public) and a secret key (private) - the latter is the "Secret Access Key" you mentioned. You always supply Fine Uploader with your client-side public key, and use your client-side private key to sign requests server-side. To perform other, more restricted operations (such as deleting files), you should use your server user's keys.
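In terms of the variables in endpoint.php, that maps roughly like this (a sketch; the environment variable names are assumptions - use whatever names your server actually exports):
// Secret key of the client-side (restricted) IAM user; this is the
// "Secret Access Key" used to sign the policy documents Fine Uploader sends.
$clientPrivateKey = $_ENV['AWS_CLIENT_SECRET_KEY'];

// Keys of the server-side (admin) IAM user; only needed for the delete-file
// feature or for SDK calls in a success endpoint.
$serverPublicKey  = $_ENV['AWS_SERVER_PUBLIC_KEY'];
$serverPrivateKey = $_ENV['AWS_SERVER_PRIVATE_KEY'];

// Bucket name used when validating the policy document.
$expectedBucketName = 'mybucket';
The accessKey you pass in the fineUploaderS3 request option is the public access key of that same client-side user.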

Google Apps Script login to website with HTTP request

I have a spreadsheet on my Google Drive and I want to download a CSV from another website and put it into my spreadsheet. The problem is that I have to login to the website first, so I need to use some HTTP request to do that.
I have found this site and this. If either of these sites has the answer on it, then I clearly don't understand them enough to figure it out. Could someone help me figure this out? I feel that the second site is especially close to what I need, but I don't understand what it is doing.
To clarify again, I want to login with an HTTP request and then make a call to the same website with a different URL that is the call to get the CSV file.
I have done a lot of this in the past month, so I should be able to help you. We are trying to emulate the browser's behaviour here, so first you need to use Chrome's developer tools (or something similar) and note down the exact things the browser does, like the form values posted, the URL that is called, and so on. The following example shows the general technique to be used.
The first step is to login to the website and get the session cookie:
var payload =
{
    "user_session[email]" : "username",
    "user_session[password]" : "password",
}; // The actual values of the post variables (like user_session[email]) depend on the site, so you need to get them either from the HTML of the login page or by using the developer tools I mentioned.
var options =
{
    "method" : "post",
    "payload" : payload,
    "followRedirects" : false
};
var login = UrlFetchApp.fetch("https://www.website.com/login", options);
var sessionDetails = login.getAllHeaders()['Set-Cookie'];
We have now logged into the website (to confirm, just log the sessionDetails and match it with the cookies set by Chrome). The next step is purely dependent on the website, so I will give you a general example:
var downloadPayload =
{
    "__EVENTTARGET" : 'ctl00$ActionsPlaceHolder$exportDownloadLink1',
}; // This is just an example; it may or may not be needed. If it is, you need to trace the values from the developer tools.
var downloadCsv = UrlFetchApp.fetch("https://www.website.com/",
{
    "headers" : {"Cookie" : sessionDetails},
    "method" : "post",
    "payload" : downloadPayload,
});
Logger.log(downloadCsv.getContentText());
The file should now be logged. You can then parse the CSV using the built-in GAS function and dump the data into the spreadsheet.
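For instance, writing the parsed CSV into the active sheet might look like this (a sketch; it assumes the response body is plain CSV):
// Parse the CSV text and write it into the active sheet.
var csvText = downloadCsv.getContentText();
var rows = Utilities.parseCsv(csvText);
var sheet = SpreadsheetApp.getActiveSpreadsheet().getActiveSheet();
sheet.getRange(1, 1, rows.length, rows[0].length).setValues(rows);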
A few points to note:
I have assumed that all form post values are static and can be hardcoded; in case this is not true, let me know and I will give you a function that can extract values from the HTML.
Some websites require the browser to send a token value (the value will be present in the HTML) along with the credentials. In this case you need to extract the value and then post it - see the sketch below.
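For example, extracting a hidden token field from the login page before posting might look roughly like this (a sketch; the field name authenticity_token is only an illustration - check the real form):
// Fetch the login page and pull a hidden form token out of its HTML.
var loginPage = UrlFetchApp.fetch("https://www.website.com/login").getContentText();
var match = loginPage.match(/name="authenticity_token"\s+value="([^"]+)"/);
if (match) {
    // Add the token to the payload before posting the credentials.
    payload["authenticity_token"] = match[1];
}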