Google Drive's HTML-to-Docs document conversion corrupted since 2 Oct - objective-c

We have been using the Google Drive SDK in our app.
Since 2 October, when our app tries to create a native Google Docs document by uploading an HTML file that contains img tags, the result comes out partly corrupted. Although the upload request finishes without error, all the images in the HTML are lost in the created document.
This behavior is unusual. For nearly two years, the conversion performed on document insertion has interpreted the img tags and created embedded images. It looks like a server-side problem.
Is this a known issue, and will it be fixed?
Or is there something I'm missing, and this is the correct behavior?
[About the HTML files we use]
The HTML files we upload are based on "text/html" exports of existing Google Docs documents.
All the img tags have src attributes with absolute URLs; they start with https and all point to googleusercontent.com.
Apart from the img tags, all the HTML tags seem to be handled as before.
[How to reproduce the problem]
This problem can be reproduced with the DriveSample app in the "Google APIs Client Library for Objective-C". Modify "DriveSampleWindowController.m" to force document conversion on (i.e., query.convert = YES;) just before uploading a file to Drive (see the REST-level sketch after the steps), then:
1. Download an existing Docs document that contains images as HTML.
2. Upload it as a new document to Google Drive (with document conversion turned on).
3. Open the document uploaded in step 2 with the Google Docs web editor. All images are lost in the document, and no spinning-wheel placeholders are shown where the images should be.
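For reference, a minimal REST-level sketch of the same upload, assuming the Drive API v2 endpoint that the Objective-C client wraps; OAUTH_TOKEN and htmlString are placeholders, and the convert=true query parameter plays the role of query.convert = YES:

var boundary = 'drive_upload_boundary';
var metadata = { title: 'Converted document', mimeType: 'text/html' };
var body =
    '--' + boundary + '\r\n' +
    'Content-Type: application/json; charset=UTF-8\r\n\r\n' +
    JSON.stringify(metadata) + '\r\n' +
    '--' + boundary + '\r\n' +
    'Content-Type: text/html\r\n\r\n' +
    htmlString + '\r\n' +
    '--' + boundary + '--';

fetch('https://www.googleapis.com/upload/drive/v2/files?uploadType=multipart&convert=true', {
    method: 'POST',
    headers: {
        'Authorization': 'Bearer ' + OAUTH_TOKEN,                  // placeholder access token
        'Content-Type': 'multipart/related; boundary=' + boundary
    },
    body: body
}).then(function (res) { return res.json(); })
  .then(function (file) { console.log('Created document id:', file.id); });

When the server-side conversion behaves as it did before 2 October, the created document contains an embedded image for each img tag in the uploaded HTML.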

The problem seems to have been fixed on the server side within the last half day. Thank you for fixing it, Google.

Related

Print webpage with canvas content

I have a tool that allows you to assemble reports generated with fabricjs.
We are trying to convert those reports, which currently work on the web, to PDF using the sejda.com tool.
The problem with sejda is that it has a generation time limit of 1:50. After that time the web page returns an error, and we have reports that take more than 1:50 to generate completely.
I'm looking for other options, but most don't interpret the HTML5 content, and the page comes out blank.
I have tried html2pdf, javascript2pdf, and a dozen web services that take the URL and try to print the document, without success; they all produce blank pages.
Is there any solution to our problem? We have been investigating this for months. One option is to improve the load times of the reports, but that is a complex development because of how they are built.
To solve this, I converted the PDF to a PNG with PDF.js and stored the result of toDataURL, then retrieved the PNG image with fromURL and set it as the canvas background.
This link can also help: Load PDF into fabricjs canvas
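A minimal sketch of that approach, assuming a recent pdf.js build and fabric.js up to v5 are already loaded on the page; pdfUrl and fabricCanvas are placeholders:

// Render page 1 of the PDF to an offscreen canvas with pdf.js,
// then use the resulting PNG data URL as the fabric.js background.
pdfjsLib.getDocument(pdfUrl).promise.then(function (pdf) {
    return pdf.getPage(1);
}).then(function (page) {
    var viewport = page.getViewport({ scale: 1.5 });
    var canvas = document.createElement('canvas');
    canvas.width = viewport.width;
    canvas.height = viewport.height;
    var renderTask = page.render({ canvasContext: canvas.getContext('2d'), viewport: viewport });
    return renderTask.promise.then(function () { return canvas.toDataURL('image/png'); });
}).then(function (dataUrl) {
    fabric.Image.fromURL(dataUrl, function (img) {
        fabricCanvas.setBackgroundImage(img, fabricCanvas.renderAll.bind(fabricCanvas));
    });
});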
We have finally decided to opt for caching the reports.
Now everything works correctly.

TinyMCE 5 - large images pasted via Safari do not render correctly

We are running TinyMCE version 5.4.1 with various options including:
paste_data_images: true
powerpaste_allow_local_image: true
When we drag & drop (or paste) in smaller images (400px × 400px), everything seems to work fine. The Base64 encoding is saved to the database and the image is rendered correctly in all browsers: Chrome, Firefox and Safari.
However, when we paste in a larger image (1920px × 1081px), the image is only saved and rendered correctly in Chrome and Firefox. In Safari the Base64 encoding is saved with all lowercase characters, so it doesn't render when we attempt to view it. Has anyone else experienced this?
I have searched here as well as on the TinyMCE website but don't see anything mentioning this behavior. We will eventually move away from this Base64 implementation, as it's no longer recommended, but it's what we have for the time being, so I'm just trying to address this issue.
When the page loads, its elements can do so in parallel. But when the browser sees a base64 image, it blocks the page from loading until that image is rendered. Thus, inserting large images into the page as base64 is certainly not a good practice: it may slow down page loads and worsen the UX.
To fix this problem, and possibly several other issues, using the automatic_uploads option is highly recommended. It uploads pasted images to the server instead of converting them to base64. The TinyMCE documentation includes an example PHP upload handler that stores the images and returns their URLs to TinyMCE.
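A minimal init sketch with that option enabled; the images_upload_url endpoint is a placeholder for your own handler:

tinymce.init({
    selector: 'textarea',
    paste_data_images: true,
    // Upload pasted and dropped images instead of embedding them as base64:
    automatic_uploads: true,
    images_upload_url: '/your-upload-handler.php'   // placeholder endpoint
});

The handler responds with a JSON object containing a "location" field, which TinyMCE then uses as the image src instead of the base64 data.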
Concerning the issue with Safari, a minimal reproducible example would be very useful.
I should also mention that PowerPaste is a premium feature that will not work with open-source TinyMCE. If you are using the paid version of TinyMCE, you can create a support ticket.

Android camera, take picture(s) and save as multipage PDF, then upload to server via <input type="file" />

I have a web form with an <input type="file" /> and want to open it on a smartphone, then take pictures of some documents, which need to be merged into one PDF; in the end this file needs to be uploaded to a server.
My current idea is to use Google Drive: upload the PDF (scan) to GDrive and then somehow download this file from GDrive to the server via some sort of widget (any links appreciated) installed on the website.
Maybe someone has a better idea?
I know it's late, but my answer might help others. I faced the same challenge and implemented a custom solution based on JavaScript. Since you are using a web form, this solution should fit your needs.
You can use the jsPDF JavaScript library. jsPDF gives you a PDF object in the browser; you can upload it, download it, and there are many other things to play with.
First, initialize the jsPDF object as required. Here I am creating a PDF with a page size of 500 × 500 (in points, per the "pt" unit argument):
pdf = new jsPDF("l", "pt", [500,500]);
When you take a picture with the camera, you get it in base64 form; insert that base64 data into the jsPDF object:
pdf.addImage(imgData, 'JPEG', 0, 0);
You can repeat the above code to add as many camera pictures as you want; jsPDF compiles these images into a PDF document in which each page holds one image, in sequence (see the combined sketch below).
Once you are done, you can get the PDF as a base64 data URI using the code below, which you can upload to any server:
pdf.output('datauristring')
The above covers only the PDF part; you can find a complete working example, including the camera part, here: Javascript Component to Scan Document
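Putting the snippets together, a minimal multipage sketch; capturedImages is a placeholder array of base64 JPEG strings from the camera, /upload is a placeholder endpoint, and pdf.addPage() is what puts each image on its own page:

var pdf = new jsPDF("l", "pt", [500, 500]);

capturedImages.forEach(function (imgData, i) {
    if (i > 0) {
        pdf.addPage();                              // one page per captured image
    }
    pdf.addImage(imgData, 'JPEG', 0, 0, 500, 500);  // fill the 500 × 500 pt page
});

// Base64 data URI of the finished PDF, ready to send to the server.
var dataUri = pdf.output('datauristring');

// Example upload to a placeholder endpoint:
fetch('/upload', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ pdf: dataUri })
});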

XPage - Open scans in browser

I need to display uploaded scans (JPG, PNG, TIFF, PDF, etc.) in the browser window instead of downloading them to a local PC and using external apps like Acrobat Reader.
I did some research on the web about this issue but wasn't really successful.
Does anyone have hints or code snippets on how to achieve that?
EDIT:
Since I am not looking for a solution that supports viewing scans in a typical browser like Chrome, Firefox, etc., but one that supports viewing scans in an XPages view within Notes, I need to ask my question again.
What is the best (recommended) way to view different types of scans, uploaded as PDF, JPG, TIFF, PNG, etc., in Notes within an XPages view?
Take a look here: XPages: Embed PDF and possibly Office files
Here is some code that I have in an app for PDFs.
I tried using Bumpbox and pdf.js, and while I could get them working, iframes seemed to work best for me, using normal Domino attachment URLs in XPages.
I am not sure whether this solution is right or not, but it works well for an app I have that only has PDFs. It works on mobile too, at least on iOS.
<iframe
    src="#{javascript:
        var url = 'https://app.nsf/';
        var doc = sessionScope.docID;
        var atname = @RightBack(sessionScope.aname, 'Body');
        var end = '/$FILE/' + atname;
        return url + doc + end}"
    width="800" height="1000">
</iframe>
If you are looking at using different file types, you need to use a renderer: give it the attachment URL and then display what the renderer returns. I haven't looked at this in a while, so things might have changed. Look for a lightbox clone that can display PDFs; I think Orangebox was one, and Bumpbox looks to be no longer updated, but I was able to get it working for me.
This method displays everything inline. I would love to see some type of renderer like pdf.js for XPages.

Screen Scraping with HTTP Headers Issue - I Think

I've been trying to figure this one out for about a week now and just can't come up with a good solution. So, I figured I would see if anyone could help me out. Here's one of the links that I'm trying to scrape:
http://content.lib.washington.edu/cdm4/item_viewer.php?CISOROOT=/alaskawcanada&CISOPTR=491&CISOBOX=1&REC=4
I right-clicked to copy the image location. This is the link that is copied:
http://content.lib.washington.edu/cgi-bin/getimage.exe?CISOROOT=/alaskawcanada&CISOPTR=491&DMSCALE=100.00000&DMWIDTH=802&DMHEIGHT=657.890625&DMX=0&DMY=0&DMTEXT=%20NA3050%20%09AWC0644%20AWC0388%20AWC0074%20AWC0575&REC=4&DMTHUMB=0&DMROTATE=0
There is no clear image URL being displayed. Obviously that's because the image is hidden behind some type of script. Through trial and error I found that I can put ".jpg" after "CISOPTR=491" and then the link becomes an image URL. The problem is that this is not the high-resolution version of the image; to get to the high-resolution version I have to change the URL even more. I found a lot of articles on StackOverflow.com that mention trying to build a script using curl and PHP, and I have even tried a few of them with no luck. "491" is the image number, and I can change that number to find other images in the same directory, so scraping a sequence of numbers should be pretty easy. But I'm still a noob at scraping and this one is kicking my butt. Here's what I've tried:
Get remote image using cURL then resample
I also tried this:
http://psung.blogspot.com/2008/06/using-wget-or-curl-to-download-web.html
I also have OutWit Hub and SiteSucker, but they don't recognize the URL as an image file, so they just pass right over it. I used SiteSucker overnight and it downloaded 40,000 files; only 60 were JPEGs, and none of them were the ones I wanted.
The other thing I keep running into: for the files I have been able to download manually, the filename is always either getfile.exe or showfile.exe, and if I manually add ".jpg" as the extension I can view the image locally.
How can I reach the original high-res image file, and automate the download process so that I can scrape a couple hundred of these images?
"I right-clicked to copy image location. This is the link that is copied:"
You noticed the copied link has ".exe" in there. Look at the stuff in the query string:
DMSCALE=100.00000
DMWIDTH=802
DMHEIGHT=657.890625
DMX=0
DMY=0
DMTEXT=%20NA3050%20%09AWC0644%20AWC0388%20AWC0074%20AWC0575
REC=4
DMTHUMB=0
DMROTATE=0
This strongly implies that the original source of the image is in a database or something similar and that it is being passed through a server-side filter (not sure if that is what you meant by "some type of script"). I.e., this is dynamically generated content, not static, and the same caveats apply as for dynamic text content: you have to figure out what instructions to provide the server to get it to cough up what you want, and you pretty much have those in front of you. If SiteSucker or whatever won't deal with it properly, scrape the addresses yourself using an HTML parser.
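As a rough illustration of that idea, a sketch in Node.js (18+, with built-in fetch) that builds the getimage.exe URL for a range of CISOPTR numbers and saves each response as a .jpg. The parameter values are copied from the URL above; the number range is a placeholder, DMTEXT (which appears to carry only search terms) is omitted, and you may need to adjust DMWIDTH/DMHEIGHT along with DMSCALE to get the full-resolution version:

const fs = require('node:fs/promises');

const base = 'http://content.lib.washington.edu/cgi-bin/getimage.exe';

async function download(cisoptr) {
    const params = new URLSearchParams({
        CISOROOT: '/alaskawcanada',
        CISOPTR: String(cisoptr),
        DMSCALE: '100.00000',      // full scale, per the copied link
        DMWIDTH: '802',
        DMHEIGHT: '657.890625',
        DMX: '0',
        DMY: '0',
        REC: '4',
        DMTHUMB: '0',
        DMROTATE: '0'
    });
    const res = await fetch(base + '?' + params.toString());
    const buf = Buffer.from(await res.arrayBuffer());
    // The server returns image data without a .jpg name, so pick the filename ourselves.
    await fs.writeFile('image-' + cisoptr + '.jpg', buf);
}

(async () => {
    // Placeholder range; adjust to the image numbers you actually need.
    for (let n = 491; n <= 500; n++) {
        await download(n);
    }
})();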