Loading response data into a web view in Titanium

I got response data from a web service, which is base64Binary data. I want to load this base64Binary data into a web view in Titanium Alloy [version 3.1.0.2]. The base64Binary data is a PDF file.
Ti.API.info('Status is ::', xhrDocument.status);
var ResponseData = xhrDocument.getResponseXML().getElementsByTagName('GetDocResult').item(0).text;
var file = Titanium.Filesystem.getFile(Titanium.Filesystem.applicationDataDirectory, 'pdfbinarray.pdf');
if (xhrDocument.status == 200) {
    var file = Titanium.Filesystem.getFile(Titanium.Filesystem.applicationDataDirectory, 'filename2.pdf');
    file.write(xhrDocument.getResponseXML().getElementsByTagName('GetDocResult').item(0).text);
    Titanium.API.info('file write');
    Titanium.API.info(file.size);
}
The above code creates filename2.pdf in my Documents directory, but when I open the file with Adobe Reader, it says Adobe Reader could not open filename2.pdf because it is either not a valid file or has been damaged (for example, it was sent as an email attachment and wasn't correctly decoded).

Is the web service call returning ONLY the document, or is there additional data included in the response?
We have had success using a simpler method. If the service is simply returning the document, try changing line two to something more like this:
var ResponseData = xhrDocument.responseText;
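If the element text really is base64-encoded PDF data, it will also need to be decoded before being written to disk; otherwise the file contains base64 text rather than PDF bytes, which matches the Adobe Reader error above. A minimal sketch using Ti.Utils.base64decode (assuming GetDocResult contains only the base64 payload; the file name here is just an example):
if (xhrDocument.status == 200) {
    // Extract the base64 payload from the response
    var base64Data = xhrDocument.getResponseXML().getElementsByTagName('GetDocResult').item(0).text;
    // Decode to a binary blob before writing
    var pdfBlob = Ti.Utils.base64decode(base64Data);
    var file = Ti.Filesystem.getFile(Ti.Filesystem.applicationDataDirectory, 'decoded.pdf');
    file.write(pdfBlob);
    // Point a web view at the decoded file
    var webView = Ti.UI.createWebView({ url: file.nativePath });
}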

Restrict the number of pages of a PDF shown in the browser (Node server)

I am developing a platform where users can preview a PDF before paying to download it. My problem is how to restrict the number of pages of the PDF shown to the user.
For example, I use pdfjs to render the PDF on a canvas, but when I inspect the network requests, the full file is downloadable.
I tried using HummusJS to split the PDF on the server and display the result to the client. My confusion is, firstly, that I store the file path in a SQL database and the PDFs on the filesystem, so the response is JSON; how do I handle that? Secondly, how do I make it dynamic, since HummusJS needs an input and an output name and I am working with a lot of PDFs?
There are related questions on Stack Overflow, but they are all for PHP.
Here is my attempt: I read the PDF uploaded by the user and write a one-page copy with HummusJS, then send it to the browser, but this is not scalable for many files:
const express = require('express');
const fs = require('fs');
const hum = require('hummus');
const app = express();

app.get('/', (req, res) => {
    // hummusjs: copy only the first page into a preview file
    let readPdf = hum.createReader('./uploads/quote.pdf-1675405130171.pdf');
    let pageCount = readPdf.getPagesCount();
    let writePdf = hum.createWriter('preview.pdf');
    writePdf
        .createPDFCopyingContext(readPdf)
        .appendPDFPageFromPDF(0);
    writePdf.end();
    const path = './preview.pdf';
    if (fs.existsSync(path)) {
        res.contentType('application/pdf');
        fs.createReadStream(path).pipe(res);
    } else {
        res.status(500);
        console.log('File not found');
        res.send('File not found');
    }
});
To be clear: users must not be able to obtain the full document in any way without paying (including through the browser's Inspect, Print, and Network tools); they should only ever receive the preview file until they pay for the full document.
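One way to make this scale without writing a fixed preview file per request is to stream the one-page copy straight into the HTTP response. A sketch, assuming HummusJS's PDFStreamForResponse and a hypothetical lookupPathFromDb helper that resolves the stored filesystem path from the SQL database:
app.get('/preview/:id', (req, res) => {
    // Resolve the source path from the database (hypothetical helper)
    const sourcePath = lookupPathFromDb(req.params.id);
    res.contentType('application/pdf');
    // Write the one-page preview directly into the response, so no
    // intermediate file is created and no fixed output name is needed
    const writer = hum.createWriter(new hum.PDFStreamForResponse(res));
    writer.createPDFCopyingContext(sourcePath).appendPDFPageFromPDF(0);
    writer.end();
    // PDFStreamForResponse does not end the response itself
    res.end();
});
Because only the preview's bytes ever leave the server, the full file never appears in the Network tab.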

JWT Bearer token in ABCChrome header

I am using ABCPdf 11 to convert HTML to PDF. The HTML page that needs to be converted requires a JWT token, so the token needs to be passed to ABCChrome so it can use it.
I have tried the following but the auth still fails:
doc.HtmlOptions.HttpAdditionalHeaders = $"Authorization: Bearer {accessToken}";
I followed example from here: https://www.websupergoo.com/helppdfnet/default.htm?page=source%2F5-abcpdf%2Fxhtmloptions%2F2-properties%2Fhttpadditionalheaders.htm
From the description in the above URL, I have also tried the below options:
doc.HtmlOptions.NoCookie = true;
doc.HtmlOptions.Media = MediaType.Screen;
After adding HttpAdditionalHeaders, when I get the HTTP status from the PDF library I do get a 401 status code, which confirms the header is not being applied:
var imageId = doc.AddImageUrl(model.Url);
var status = doc.HtmlOptions.ForChrome.GetHttpStatusCode(imageId);
The status here is 401 - unauthorized
The HttpAdditionalHeaders property is not currently supported by the ABCChrome Engine. The only HtmlOptions supported by ABCChrome are specified here.
There are a few things you could try:
Check whether the target server supports sending the web token via GET request parameters - I guess you've probably done this already :-)
Point the AddImageUrl request at an intermediary web server (even a local HttpServer) running a script that can fetch the page for you based on any GET parameters.
If the service you are attempting to access accepts ajax requests, you could try using JavaScript to inject the response into a page using XMLHttpRequest.setRequestHeader() (see the sketch below). NB if you use a local file (e.g. file://) for this you may come across some Chromium-enforced JavaScript security issues.
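For that third option, a minimal sketch of the injection approach (the page URL is a placeholder, and accessToken is assumed to be available to the script):
var xhr = new XMLHttpRequest();
xhr.open('GET', 'https://example.com/protected-page', true); // placeholder URL
xhr.setRequestHeader('Authorization', 'Bearer ' + accessToken);
xhr.onload = function () {
    if (xhr.status === 200) {
        // Replace the host page's markup with the authenticated response
        document.documentElement.innerHTML = xhr.responseText;
    }
};
xhr.send();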
I do know that WebSupergoo offer free support for all their licenses, including trial licenses.
Good luck.
I emailed ABCPdf support, and unfortunately ABCChrome does not support the HttpAdditionalHeaders property, so the workaround is to download the HTML ourselves (making the authenticated request from our own code) and convert that to PDF; see the example below:
var imageId = doc.AddImageHtml(html); // <- html downloaded from auth url
Also don't forget to add paging:
// add all pages to pdf
while (doc.Chainable(imageId))
{
doc.Page = doc.AddPage();
imageId = doc.AddImageToChain(imageId);
}
for (int i = 1; i <= doc.PageCount; i++)
{
doc.PageNumber = i;
doc.Flatten();
}

Downloading a publicly-shared file from OneDrive

When I create a share link in the UI with the "Anyone with this link can view this item" option, I get a URL that looks like https://onedrive.live.com/redir?resid=XXX!YYYY&authkey=!ZZZZZ&ithint=<contentType>. What I can't figure out is how to use this URL from code to download the content of the file. Hitting the link returns the HTML for a page that displays the file.
How can I construct a call to download the file? Also, is there a way to construct a call to get some (XML/JSON) metadata about the file, and maybe even a preview or something? I want to be able to do this all without prompting a user for credentials, and all the API docs are about how to make authenticated calls. I want to make anonymous calls to get publicly-shared files.
Have a read over https://dev.onedrive.com - it documents how you can make a query to our service to get the metadata for an item, along with URLs that can be used to directly download the content.
Update with more details
Sorry, the documentation you need for your specific scenario is still in process (along with the associated SDK changes) so I'll give you an overview of how to do it.
There's a sibling to the /drives path called /shares which accepts a sharing URL (such as the one you have above) in an encoded format and allows you to get metadata for the item it represents. This does not require authentication provided the sharing URL has a valid authkey.
The encoding scheme for the id is u!<UrlSafeBase64EncodedUrl>, where <UrlSafeBase64EncodedUrl> follows the guidelines outlined here (trim the = characters from the end).
Here's a snippet that should give you an idea of the whole process:
string originalUrl = "https://onedrive.live.com/redir?resid=XXX!YYYY&authkey=!foo";
byte[] urlAsUtf8Bytes = Encoding.UTF8.GetBytes(originalUrl);
string utf8BytesAsBase64String = Convert.ToBase64String(urlAsUtf8Bytes);
string encodedUrl = "u!" + utf8BytesAsBase64String.TrimEnd('=').Replace('/', '_').Replace('+', '-');
string metadataUrl = "https://api.onedrive.com/v1.0/shares/" + encodedUrl + "/root";
From there you can append /content if you want to get the contents of the file, or you can start navigating through if the URL represents a folder (e.g. /children/childfile.txt)
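The same encoding scheme is easy to reproduce outside .NET; a sketch in Node (the sharing URL is a placeholder, per the scheme described above):
const originalUrl = 'https://onedrive.live.com/redir?resid=XXX!YYYY&authkey=!foo';

// u! + URL-safe base64 of the sharing URL, with trailing '=' removed
const shareId = 'u!' + Buffer.from(originalUrl, 'utf8').toString('base64')
    .replace(/=+$/, '')
    .replace(/\//g, '_')
    .replace(/\+/g, '-');

const metadataUrl = 'https://api.onedrive.com/v1.0/shares/' + shareId + '/root';
// GET metadataUrl returns JSON metadata for the shared item;
// GET metadataUrl + '/content' redirects to the file bytes (no auth needed for public links)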

How do I get the byte array from a PDF generated by pdfMake?

I have a web app that creates a PDF using pdfMake. I want to get the byte[] from the generated PDF in order to transmit it to a database. How can I get the byte[] of the PDF generated by pdfMake?
You can use the following method, provided by the library, to fetch a buffer of the generated PDF:
pdfMake.createPdf(docDefinition).getBuffer(function (databuffer) {
    var data = databuffer;
});
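From there, one way to turn the buffer into a byte array and ship it to a server (a sketch; the endpoint is hypothetical):
pdfMake.createPdf(docDefinition).getBuffer(function (databuffer) {
    // Wrap the buffer in a typed array of bytes
    var bytes = new Uint8Array(databuffer);
    var xhr = new XMLHttpRequest();
    xhr.open('POST', '/api/documents', true); // hypothetical endpoint
    xhr.setRequestHeader('Content-Type', 'application/pdf');
    xhr.send(bytes);
});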

VEMap and a GeoRSS feed (hosted separately)

The scenario is as follows:
A WCF web service exists that outputs a valid GeoRSS feed. This lives on its own domain, as a number of different applications have access to it.
A web page (on a different site) has been created with an instance of a VEMap (Bing/Virtual Earth map object).
Now, VEMap can accept an input feed in this format via the following:
var layer = new VEShapeLayer();
var veLayerSpec = new VEShapeSourceSpecification(VEDataType.GeoRSS, "someurl", layer);
map.ImportShapeLayerData(veLayerSpec, onComplete, true);
onComplete is a callback function I'm using to replace the default pin graphic with something custom.
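For context, the callback looks something like this (simplified; the icon path is a placeholder, and I'm using the VE 6.x shape API):
function onComplete(layer) {
    // Swap the default pushpin for a custom graphic on each imported shape
    for (var i = 0; i < layer.GetShapeCount(); i++) {
        layer.GetShapeByIndex(i).SetCustomIcon('/images/custom-pin.png');
    }
}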
The question is in regard to "someurl", which is a path to a local XML file containing the geographic information (GeoRSS simple format). I've realized this feed and the map must be hosted on the same domain, so I've created a generic handler that reads the remote feed and returns it in the same format.
var veLayerSpec = new VEShapeSourceSpecification(VEDataType.GeoRSS, "/somelocalhandler.ashx", layer);
When I do this, I get the VEMap error "z is null". This is the same error one would receive when trying to access a remote feed. When I copy the feed into a local XML file (e.g. "feed.xml"), there is no error.
The order of operations is currently: remote feed -> local handler -> VEMap import
If I'm over complicating this procedure, let me know! I'm a bit new to the Bing Maps API and might have missed something. Any assistance is appreciated.
The format I have above is actually very close to what I needed; a similar solution was found by Mike McDougall. Although I was already passing the RSS feed directly through the handler (writing the read stream directly), I just needed to specify the following from within the handler:
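// Declare the response as XML with UTF-8 encoding; without these
// headers the VEMap import fails with "z is null"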
context.Response.ContentType = "text/xml";
context.Response.ContentEncoding = System.Text.Encoding.UTF8;
With the above fix, I'm able to have a remote GeoRSS feed load successfully into a separately hosted Virtual Earth map instance.