Google Drive - use WebViewLink vs thumbnailLink - api

I'm using the Google Drive API, where I can gain access to two pieces of data that I need to display a jpg file in my program: WebViewLink is the "large" size of the image, while thumbnailLink is the smaller "thumb" size of the same image.
I'm having an issue with downloading the WebViewLink that I do not have with the thumbnailLink. Part of my code calls either exif_imagetype($filename) or getimagesize($filename) so I can retrieve the type, height, width, etc. for $filename. This is successful for the thumbnailLink but not the WebViewLink...
code snippet...
$WebViewLink = "https://drive.google.com/a/treering.com/file/d/blablabla";
$type = exif_imagetype($WebViewLink);
--- results in the error
"PHP Warning: exif_imagetype(): stream does not support seeking..."
whereas...
$thumbnailLink = "https://lh6.googleusercontent.com/blablabla";
$type = exif_imagetype($thumbnailLink);
--- successful
where $type = 2 // a .jpg file
I'm not sure what I need to do to get a usable WebViewLink... maybe use the "export" function to copy the file to an accessible location on my server, then run the failing functions above against that exported copy?
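Something along these lines is what I have in mind (untested; as far as I can tell, for a binary file like a jpg the download endpoint is files.get with alt=media rather than export, and YOUR_FILE_ID / YOUR_OAUTH_TOKEN below are only placeholders):

// Untested sketch: download the raw file bytes to a local path,
// then run the image functions against the seekable local copy.
$fileId      = 'YOUR_FILE_ID';        // placeholder
$accessToken = 'YOUR_OAUTH_TOKEN';    // placeholder, token needs a Drive scope
$url = "https://www.googleapis.com/drive/v3/files/{$fileId}?alt=media";

$ch = curl_init($url);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);
curl_setopt($ch, CURLOPT_HTTPHEADER, ["Authorization: Bearer {$accessToken}"]);
$data = curl_exec($ch);
curl_close($ch);

$localFile = '/tmp/drive_image.jpg';
file_put_contents($localFile, $data);

$type = exif_imagetype($localFile);            // works on a local, seekable file
list($width, $height) = getimagesize($localFile);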
Thanks for any help.
John

I think you are using the wrong property to get the image of the file.
WebViewLink
A link for opening the file in a relevant Google editor or viewer in a browser.
thumbnailLink
A short-lived link to the file's thumbnail, if available. Typically lasts on the order of hours.
You can try using iconLink:
A static, unauthenticated link to the file's icon.
Sample image of a thumbnailLink:
Sample image of an iconLink:
It will still show a relevant image for the file.
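For reference, here is a rough sketch of how those fields can be requested explicitly (assuming the v3 API; YOUR_FILE_ID and $accessToken are placeholders for a real file ID and OAuth token):

// Rough sketch: ask the Drive API for the link fields explicitly.
$url = "https://www.googleapis.com/drive/v3/files/YOUR_FILE_ID"
     . "?fields=webViewLink,thumbnailLink,iconLink";

$ch = curl_init($url);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_HTTPHEADER, ["Authorization: Bearer {$accessToken}"]);
$file = json_decode(curl_exec($ch), true);
curl_close($ch);

// $file['thumbnailLink'] points at a short-lived, directly fetchable image;
// $file['iconLink'] is a static, unauthenticated icon URL.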
Hope it helps!


How to read s3 rasters with accompanying ".aux.xml" metadata file using rasterio?

Suppose a GeoTIFF raster on an S3 bucket which has, next to the raw TIF file, an associated .aux.xml metadata file:
s3://my_s3_bucket/myraster.tif
s3://my_s3_bucket/myraster.tif.aux.xml
I'm trying to load this raster directly from the bucket using rasterio:
fn = 's3://my_s3_bucket/myraster.tif'

with rasterio.Env(session, **rio_gdal_options):
    with rasterio.open(fn) as src:
        src_nodata = src.nodata
        scales = src.scales
        offsets = src.offsets
        bands = src.tags()['bands']
And here lies the problem: the raster file itself is successfully opened, but because rasterio does not automatically load the associated .aux.xml, the metadata is never loaded. Therefore, there are no band tags and no proper scales and offsets.
I should add that doing exactly the same on a local file does work perfectly. The .aux.xml automatically gets picked up and all relevant metadata is correctly loaded.
Is there a way to make this work on S3 as well? And if not, could there be a workaround for this problem? Obviously, the metadata was too large to be encoded into the TIF file itself; rasterio (GDAL under the hood) generated the .aux.xml automatically when creating the raster.
Finally got it to work. It appears to be essential that, in the GDAL options passed to rasterio.Env, .xml is added as an allowed extension in CPL_VSIL_CURL_ALLOWED_EXTENSIONS.
The documentation of this option states:
Consider that only the files whose extension ends up with one that is listed in CPL_VSIL_CURL_ALLOWED_EXTENSIONS exist on the server.
Almost all examples found online set only .tif as an allowed extension, because that can dramatically speed up file opening, but with that setting any .aux.xml files are simply not seen by rasterio/GDAL.
So if we expect .aux.xml metadata files to accompany the .tif files, we have to change our example to:
rio_gdal_options = {
    'AWS_VIRTUAL_HOSTING': False,
    'AWS_REQUEST_PAYER': 'requester',
    'GDAL_DISABLE_READDIR_ON_OPEN': 'FALSE',
    'CPL_VSIL_CURL_ALLOWED_EXTENSIONS': '.tif,.xml', # Adding .xml is essential!
    'VSI_CACHE': False
}
with rasterio.Env(session, **rio_gdal_options):
    with rasterio.open(fn) as src:  # the associated .aux.xml file will automatically be found and loaded now
        src_nodata = src.nodata
        scales = src.scales
        offsets = src.offsets
        bands = src.tags()['bands']

How to reuse S3 image?

I uploaded images to S3 with the CarrierWave gem (Ruby on Rails, Vue.js).
I want to reuse an uploaded image as a file, but I have no idea how to reuse uploaded S3 images as files.
To be specific,
I made a model "reaction",
and the image is saved as a column of "reaction".
I can access the image object like #reaction.image
(#reaction is a "reaction" object).
I haven't tried anything yet; I honestly don't know how to deal with it.
Once you get the image URL from your "Reaction" table, you can download the image located at that URL and save it as a local file:
require 'net/http'

def download_aws_s3(url_aws_s3, filename)
  uri = URI(url_aws_s3)
  response = Net::HTTP.get_response(uri)
  File.open(filename, 'wb') { |f| f.write(response.body) }
end
You can use "open-uri" or "down" gem as an alternative.

Save an image present in PDF on local File System

This is my first experience using the PDFBox jar files. I have also recently started working with TestComplete. In short, all of this is new to me, and I have been stuck on one issue for the last few hours. I will try to explain as much as I can. I would really appreciate any help!
Objective:
To save an image present in a PDF file on the file system
Issue:
When the line objImage.write2file_2(strSavePath); gets executed, I get the error "Object doesn't support this property or method".
I am taking some help from here
Code:
function fn_PDFImage()
{
  var objPdfFile, strPdfFilePath, strSavePath, objPages, objPage, objImages, objImage, imgbuffer;

  strPdfFilePath = "C:\\Users\\aabb\\Desktop\\name.pdf";
  strSavePath = "C:\\Users\\aabb\\Desktop\\abc";

  objPdfFile = JavaClasses.org_apache_pdfbox_pdmodel.PDDocument.load_3(strPdfFilePath);
  objPages = objPdfFile.getDocumentCatalog().getAllPages();

  // getting a page with index = 1
  objPage = objPages.get(1);

  objImages = objPage.getResources().getXObjects().values().toArray();
  Log.Message(objImages.length); // This is returning 14, i.e. 14 images

  // getting an image with index = 1
  objImage = objImages.items(1);
  Log.Message(typeof objImage); // returns "Object", which means it is not null

  // saving the image
  objImage.write2file_2(strSavePath); // <--- GETTING AN ERROR HERE
}
ERROR: "Object doesn't support this property or method."
If you are wondering about the method name write2file_2, please read this excerpt from the link I shared:
In Java, the constructor of a class has the name of this class.
TestComplete changes the constructor names to newInstance(). If a
class has overloaded constructors, TestComplete names them like
newInstance, newInstance_2, newInstance_3 and so on.
Additional Info:
I have imported the jar file (pdfbox-app-1.8.13.jar) and its classes in TestComplete. I am not sure if I need to import some other jar file or class here:
XObjects are not always image XObjects, and write2file is defined in the class PDXObjectImage, so you need to check your object's type first.
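For example, a rough sketch of that check in the TestComplete script above (aqObject.IsSupported is used here on the assumption that it reports whether the wrapped Java object actually exposes write2file_2, which only image XObjects will):

// Rough sketch: only call write2file_2 on XObjects that actually expose it
// (i.e. image XObjects); form XObjects do not have this method.
for (var i = 0; i < objImages.length; i++)
{
  var objXObject = objImages.items(i);
  if (aqObject.IsSupported(objXObject, "write2file_2"))
  {
    objXObject.write2file_2(strSavePath + "_" + i); // save this image
  }
  else
  {
    Log.Message("XObject " + i + " is not an image XObject, skipping.");
  }
}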
Re the second question asked in the comment: a form XObject isn't something you can save. Form XObjects are content streams with resources etc., similar to pages. However, what you can do is explore these too and check whether their resources contain images. See how this is done in the ExtractImages source code of PDFBox 1.8.
However, there are other places where there can be images (e.g. patterns, soft masks, inline images); handling these is only available in PDFBox 2.*, see the ExtractImages source code there. (Note that the class names are different.)

Using Leigh's version of S3Wrapper.cfc: can't get past init()

I am new to S3 and need to use it for image storage. I found half a dozen versions of an S3 wrapper for CF, but it appears that the only one set up for v4 signatures is the one modified by Leigh:
https://gist.github.com/Leigh-/26993ed79c956c9309a9dfe40f1fce29
I dropped it in the com directory and created a "test" page that contains the following code:
s3 = createObject('component','com.S3Wrapper').init(application.s3.AccessKeyId,application.s3.SecretAccessKey);
but got the following error:
So I changed line 37 from
variables.Sv4Util = createObject('component', 'Sv4').init(arguments.S3AccessKey, arguments.S3SecretAccessKey);
to
variables.Sv4Util = createObject('component', 'Sv4Util').init(arguments.S3AccessKey, arguments.S3SecretAccessKey);
Now I am getting:
I feel like going through Leigh's code and changing things is a bad idea, since I have lurked here for years and know that Leigh's code is solid.
Does anyone know if there are any examples of how to use this anywhere? If not, what am I doing wrong? If it makes a difference, I am using Lucee 5 and not Adobe's CF engine.
UPDATE :
I followed Leigh's directions and the error is now gone. I added some more code to my test page, which now looks like this:
<cfscript>
    s3 = createObject('component', 'com.S3v4').init(application.s3.AccessKeyId, application.s3.SecretAccessKey);

    bucket = "imgbkt.domain.com";
    obj    = "fake.ping";
    region = "s3-us-west-1";

    test = s3.getObject(bucket, obj, region);
    writeDump(test);

    test2 = s3.getObjectLink(bucket, obj, region);
    writeDump(test2);

    writeDump(s3);
</cfscript>
Regardless of what I put in for bucket, obj, or region, I get:
Just in case, I did go to AWS and get new keys:
Leigh, if you are still around, or anyone who has used one of the S3 wrappers: any suggestions or guidance?
UPDATE #2:
Even after Alex's help I am not able to get this to work. The link I receive from getObjectLink is not valid, and getObject never downloads an object. I thought I would try the putObject method:
test3 = s3.putObject(bucketName=bucket,regionName=region,keyName="favicon.ico");
writeDump(test3);
to see if there was any additional information, and I received this:
I did find this article, https://shlomoswidler.com/2009/08/amazon-s3-gotcha-using-virtual-host.html, but it is pretty old, and since S3 specifically suggests using dots in bucket names I don't think it is relevant any longer. There is obviously something I am doing wrong, but I have spent hours trying to resolve this and I can't seem to figure out what it might be.
I will give you a rundown of what the code does:
getObjectLink returns an HTTP URL for the file fake.ping, looked up in the bucket imgbkt.domain.com of region s3-us-west-1. The link is temporary and expires after 60 seconds by default.
getObject invokes getObjectLink and immediately requests the URL using HTTP GET. The response is then saved to the directory of the S3v4.cfc with the filename fake.ping by default. Finally the function returns the full path of the downloaded file: E:\wwwDevRoot\taa\fake.ping
To save the file in a different location, you would invoke:
downloadPath = 'E:\';
test = s3.getObject(bucket,obj,region,downloadPath);
writeDump(test);
The HTTP request is synchronous, meaning the file will have been downloaded completely by the time the function returns the file path.
If you want to access the actual content of the file, you can do this:
test = s3.getObject(bucket,obj,region);
contentAsString = fileRead(test); // returns the file content as string
// or
contentAsBinary = fileReadBinary(test); // returns the content as binary (byte array)
writeDump(contentAsString);
writeDump(contentAsBinary);
(You might want to stream the content if the file is large, since fileRead/fileReadBinary read the whole file into memory. Use fileOpen to stream the content instead.)
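A rough sketch of that streaming approach (assuming test still holds the local path returned by getObject above; the chunk size is arbitrary):

<cfscript>
    // Rough sketch: read the downloaded file in chunks instead of loading it all at once.
    fileObj = fileOpen(test, "readBinary");
    try {
        while (!fileIsEOF(fileObj)) {
            chunk = fileRead(fileObj, 8192); // read up to 8 KB per iteration
            // ... process or forward the chunk here ...
        }
    }
    finally {
        fileClose(fileObj);
    }
</cfscript>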
Does that help you?

FPDF Fatal Error

I am trying to test an implementation of FPDF. Below is the code I'm testing with, but it keeps giving me the error: "Fatal error: Class 'FPDF' not found in /home4/fwall/public_html/create-press-release.php on line 5". That is the path of the page I am calling the code below on.
I have verified that the php file for FPDF is being required from the right spot, and it's still happening. Can anyone figure out what's going on?
require(__DIR__.'/fpdf.php'); //The fpdf folder is in my root directory.
//create a FPDF object
$pdf=new FPDF();
//set document properties
$pdf->SetAuthor('Lana Kovacevic');
$pdf->SetTitle('FPDF tutorial');
//set font for the entire document
$pdf->SetFont('Helvetica','B',20);
$pdf->SetTextColor(50,60,100);
//set up a page
$pdf->AddPage('P');
$pdf->SetDisplayMode('real','default');
//insert an image and make it a link
//$pdf->Image('logo.png',10,20,33,0,' ','http://www.fpdf.org/');
//display the title with a border around it
$pdf->SetXY(50,20);
$pdf->SetDrawColor(50,60,100);
$pdf->Cell(100,10,'FPDF Tutorial',1,0,'C',0);
//Set x and y position for the main text, reduce font size and write content
$pdf->SetXY(10,50);
$pdf->SetFontSize(10);
$pdf->Write(5,'Congratulations! You have generated a PDF.');
//Output the document
$pdf->Output('example1.pdf','I');
This line:
require('http://siteurlredacted.com/fpdf/fpdf.php');
probably won't do what you expect. The request will make the remote server "execute" fpdf.php, returning a blank page, and your script will include an empty file. That is why it doesn't find any class to load.
You should download FPDF and put the file on your filesystem, where it is accessible to your script with no HTTP requests. You can try putting fpdf.php inside your project, for example:
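A minimal sketch of what that looks like (assuming fpdf.php sits in an fpdf/ folder next to the calling script):

// Local filesystem require - no HTTP request, so the class definition is actually loaded
require __DIR__ . '/fpdf/fpdf.php';

$pdf = new FPDF();
$pdf->AddPage('P');
$pdf->SetFont('Helvetica', 'B', 20);
$pdf->Write(5, 'FPDF loaded from the local filesystem.');
$pdf->Output(); // send the PDF inline to the browser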
Hope this helps.