Handling of Thumbnails in the Google Drive Android API (GDAA)

I've run into the following problem when porting an app from the REST API to GDAA.
The app needs to download some of (thousands of) JPEG images based on user selection. The way this is solved in the app is by downloading a thumbnail version first, using this construct of the REST API:
private static InputStream getCont(String rsid, boolean bBig) {
    InputStream is = null;
    if (rsid != null) try {
        // Request only the field we need: the download URL or the thumbnail link
        File gFl = bBig ?
            mGOOSvc.files().get(rsid).setFields("downloadUrl").execute() :
            mGOOSvc.files().get(rsid).setFields("thumbnailLink").execute();
        if (gFl != null) {
            GenericUrl url = new GenericUrl(bBig ? gFl.getDownloadUrl() : gFl.getThumbnailLink());
            is = mGOOSvc.getRequestFactory().buildGetRequest(url).execute().getContent();
        }
    } catch (UserRecoverableAuthIOException uraEx) {
        authorize(uraEx.getIntent());
    } catch (GoogleAuthIOException gauEx) {
        // ignored; 'is' stays null
    } catch (Exception e) {
        // ignored; 'is' stays null
    }
    return is;
}
It lets the app fetch either a 'thumbnail' or 'full-blown' version of an image based on the bBig flag. The user can select a thumbnail from a list, and the full-blown image download follows (all of this backed by a disk-based LRU cache, of course).
The problem is that GDAA does not have an option to request a reduced-size / thumbnail version of an object (AFAIK), so I have to resort to combining both APIs, which makes the code more convoluted than I like (bottom of the page). Needless to say, the 'Resource ID' needed by the REST API may not be immediately available.
So, the question is: Is there a way to ask GDAA for a 'thumbnail' version of a document?

Downloading thumbnails isn't currently available in the Drive Android API, and unfortunately I can't give a timeframe to when it will be available. Until that time, the Drive Java Client Library is the best way to get thumbnails on Android.
We'd appreciate it if you filed a feature request against our issue tracker: https://code.google.com/a/google.com/p/apps-api-issues/
That gives requests more visibility to our teams internally, and issues will be marked resolved when we release updates.

Update: I had an error in the discussion of the request fields.
As Ofir says, you can't get thumbnails with the Drive Android API, but you can get them with the Drive Java Client Library. This page is a really good primer for getting started:
https://developers.google.com/drive/v3/web/quickstart/android
Oddly, I couldn't get the fields portion of the request to work as written in that quickstart; in my experience, you have to request the fields a little differently.
Since you're doing a custom field request, you have to be sure to add the other fields you want as well. Here is how I've gotten it to work:
Drive.Files.List request = mService.files()
        .list()
        .setFields("files/thumbnailLink, files/name, files/mimeType, files/id")
        .setQ("Your file param and/or mime query");
FileList files = request.execute();
files.getFiles(); // Each File in the collection will have a valid thumbnailLink
A sample query might be:
"mimeType = 'image/jpeg' or mimeType = 'video/mp4'"
Hope this helps!

Related

Kotlin for Volley, how can I check the JSON request for newer data in the API?

I'm working on an app that gets a list of documents/source URLs from an API. I'd like to periodically check for new or updated content within that API so users can update saved items in the database. I'm at a loss for the correct wording to search, so Google and Stack Overflow have both failed me. My fetching function is below:
The URL for the API is https://api.afiexplorer.com
private fun fetchPubs() {
    _binding.contentMain.loading.visibility = View.VISIBLE
    request = JsonArrayRequest(
        Request.Method.GET,
        Config.BASE_URL,
        JSONArray(), { response ->
            val items: List<Pubs> =
                Gson().fromJson(response.toString(), object : TypeToken<List<Pubs>>() {}.type)
            val sortedItems = items.sortedWith(compareBy { it.Number })
            pubsList?.clear()
            pubsList?.addAll(sortedItems)
            // Hardcoded pubs moved to Publications Gitlab Repo
            // https://gitlab.com/afi-explorer/pubs
            _binding.contentMain.recyclerView.recycledViewPool.clear()
            adapter?.notifyDataSetChanged()
            _binding.contentMain.loading.visibility = View.GONE
            setupData()
            Log.i("LENGTH OF DATA", "${items.size}")
        },
        { error ->
            error.printStackTrace()
            Toasty.error(applicationContext, getString(string.no_internet), Toast.LENGTH_SHORT, true).show()
        }
    )
    MyApplication.instance.addToRequestQueue(request!!)
}

private fun setupData() {
    adapter = MainAdapter(applicationContext, pubsList!!, this)
    _binding.contentMain.recyclerView.adapter = adapter
}
I tried using ChatGPT to see if that would get me started, and it failed miserably. I also searched Google, Reddit, and Stack Overflow for similar projects, but mine is a unique scenario, I guess. I'm just a hobbyist and an intermediate dev. This is my first time working with Volley; everything works, but I would like to find a way to send a notification (preferably not Firebase) if there is updated info within the API listed above. I'm not sure if this is actually doable.
Are you asking whether you can somehow find out if the remote API has changed its content? If so, how would that service advise you? If the service provider offers a webhook or similar callback, you could write a server-based program to send a push notification to your Android app.
Perhaps you intend to poll the API periodically, and you want to know whether there has been a change?
If you use a tool such as Postman or curl to inspect the headers of the API at https://api.afiexplorer.com, you will see that, unfortunately, there is no Last-Modified or ETag header which would let you easily determine whether there was a change.
Looking next at the content of the API, the author does not provide an obvious version or change date, so no luck there either.
What you could do is receive the content as a String and perform a checksum operation on it; if the checksum differs, you know there has been a change.
Alternatively, if you are deserialising the received JSON into Kotlin data classes, then out of the box Kotlin lets you compare against a previous copy of the data with a simple equality check. A minimal sketch of the checksum approach follows.
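This is a minimal, untested sketch of the checksum idea (persisting the last hash, e.g. in SharedPreferences, is left out):
import java.security.MessageDigest

// Hash the raw response body; a different hash on the next poll
// means the API content changed since last time.
fun sha256(body: String): String =
    MessageDigest.getInstance("SHA-256")
        .digest(body.toByteArray())
        .joinToString("") { "%02x".format(it) }

var lastHash: String? = null // persist this between polls

fun hasChanged(body: String): Boolean {
    val newHash = sha256(body)
    val changed = newHash != lastHash
    lastHash = newHash
    return changed
}
And if Pubs is a data class, the equality route is even simpler: newItems != cachedItems is true whenever any element differs, because data classes implement structural equals().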
This looks like an Android app; if so, why don't you create a background service that polls the API and updates the data as needed? You can use the AlarmManager class to set the interval threshold for polling via its setInexactRepeating() method, as sketched below.
Most apps are updated in this fashion; sometimes a separate table is created to catalog changesets.
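Here is a rough, untested sketch of that scheduling; PollReceiver is a hypothetical BroadcastReceiver you would implement to re-run fetchPubs() and compare the result against the cached copy:
import android.app.AlarmManager
import android.app.PendingIntent
import android.content.Context
import android.content.Intent

// Schedule an inexact repeating poll of the API.
fun schedulePolling(context: Context) {
    val alarmManager = context.getSystemService(Context.ALARM_SERVICE) as AlarmManager
    val pending = PendingIntent.getBroadcast(
        context, 0, Intent(context, PollReceiver::class.java), // PollReceiver is hypothetical
        PendingIntent.FLAG_UPDATE_CURRENT or PendingIntent.FLAG_IMMUTABLE
    )
    alarmManager.setInexactRepeating(
        AlarmManager.RTC_WAKEUP,
        System.currentTimeMillis(),
        AlarmManager.INTERVAL_HOUR, // poll roughly once an hour
        pending
    )
}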
Let me know if this helps.

Force FirebaseCrashlytics print logs to console

Is it possible to force FirebaseCrashlytics to print log messages to the console? Before Google bought Crashlytics (and, as usual, broke things) this was possible using the Fabric API, but now it seems those methods have been removed.
Is there any way to do this with the Android SDK?
Short Answer
IT IS IMPOSSIBLE (USING THE FIREBASECRASHLYTICS SDK)
Complete Answer
It is a shame that before Google bought Crashlytics, watching the log messages in the development console was an easy task. But now those methods have been removed.
The whole problem is this: if I'm in the development environment and want to follow the code execution (by watching the log messages), Crashlytics won't show them... I need to intentionally cause a crash, wait for it to be uploaded to the dashboard, and then start hunting for my entries among maybe thousands of others... (nonsense)
I filed a bug report with Firebase:
https://github.com/firebase/firebase-android-sdk/issues/3005
For those who don't want to wait for Google to fix this, there is a workaround:
FirebaseApp.initializeApp(this);
if (BuildConfig.DEBUG) {
    try {
        // Reach through FirebaseCrashlytics' private fields to the logFileManager
        Field f = FirebaseCrashlytics.class.getDeclaredField("core");
        f.setAccessible(true);
        CrashlyticsCore core = (CrashlyticsCore) f.get(FirebaseCrashlytics.getInstance());
        f = CrashlyticsCore.class.getDeclaredField("controller");
        f.setAccessible(true);
        Object controller = f.get(core);
        f = controller.getClass().getDeclaredField("logFileManager");
        f.setAccessible(true);
        f.set(controller, new LogFileManager(null, null) {
            @Override
            public void writeToLog(long timestamp, String msg) {
                super.writeToLog(timestamp, msg);
                System.out.println(msg);
            }
        });
        FirebaseCrashlytics.getInstance().log("test");
    } catch (Exception e) {
        // reflection failed: field names differ in this SDK version
    }
}
The code above replaces the field that was supposed to write the log messages to a file (AND ACTUALLY DOES NOTHING) with a new class that does everything the previous one did (NOTHING) but also prints every logged message on the fly.
ATTENTION
I've tested this on firebase-analytics:19.0.1, and it will only work on versions of the lib with the same field names.
IT WON'T WORK IN AN OBFUSCATED BUILD: if you obfuscate the code in DEBUG mode, it will break (unless you add the proper rules to ProGuard).
If this topic reaches Google engineers, it is very likely they will remove or obfuscate these fields in future versions.
google...

Nest.js and Archiver - pipe stream zip file into http response

In my Node.js application I'm downloading multiple user files from AWS S3, compressing them into a single zip file (using the Archiver npm library) and sending it back to the client. The whole way through I'm operating on streams, and yet I can't get the file to the client (so that the client would start the download after a successful HTTP request).
const filesStreams = await this.awsService.downloadFiles(
  document?.documentFiles,
);
const zipStream = await this.compressService.compressFiles(filesStreams);
// @ts-ignore
response.setHeader('Content-Type', 'application/octet-stream');
response.setHeader(
  'Content-Disposition',
  'attachment; filename="files.zip"',
);
zipStream.pipe(response);
where response is the Express response object. zipStream is created using Archiver:
public async compressFiles(files: DownloadedFileType[]): Promise<Archiver> {
  const zip = archiver('zip');
  for (const { stream, name } of files) {
    zip.append((stream as unknown) as Readable, {
      name,
    });
  }
  return zip;
}
And I know it is correct, because when I pipe it into a WriteStream to a file in my file system, it works: I'm able to unzip the written file and it has the correct content. I could probably write the file temporarily to the file system, send it back to the client with response.download and remove the saved file afterwards, but that looks like a very inefficient solution. Any help will be greatly appreciated.
So I found the source of the problem; I'll post it here for the record, in case anyone runs into the same thing. The cause was something totally different: I was trying to initiate the download with an AJAX request, which of course won't work. I changed my frontend code so that, instead of AJAX, I used an HTML anchor element with its href attribute set to exactly the same endpoint, and it worked just fine.
I had a different problem, though mine came from the linter. I needed to read all files from a directory and then send them to the client in one zip. Maybe someone will find this useful.
There were two issues:
I mismatched cwd with root; see the glob docs.
Because I used PassThrough as a proxy object between Archiver and the output, the linter flagged stream.pipe(response) with a type issue... which is misleading: it works fine.

Cannot call handler ashx more than once when using Response.TransmitFile

I've created an HttpHandler (.ashx) for clients to download content (videos) from my website. At first I was using the WriteFile method, but I realized it required too much memory, so I decided to change to the TransmitFile method.
But then something weird happened: I was no longer able to run more than one download at a time. I had to wait for one download to finish before starting the next.
Basically the code is like this:
System.IO.FileInfo file = new System.IO.FileInfo(file_path);
context.Response.Clear();
if (flagH264)
{
    context.Response.ContentType = "video/mp4";
}
else
{
    context.Response.ContentType = "video/x-ms-wmv";
}
context.Response.AddHeader("Content-Length", file.Length.ToString());
context.Response.AddHeader("Content-Disposition", "attachment; filename=" + name);
//context.Response.WriteFile(file_path.Trim());
context.Response.TransmitFile(file_path.Trim());
context.Response.Flush();
Does anyone know what this problem might be?
I found what the problem was.
The HttpHandler (.ashx) I was using for the download page implemented the IRequiresSessionState interface, which gave it read/write access to session data. To protect session data from concurrent modification, ASP.NET serializes requests from the same session that hold read/write session access, so while TransmitFile was streaming one download, the next request was blocked.
The solution was changing IRequiresSessionState to IReadOnlySessionState, which grants only read access to session data. That locking was not needed here, and the downloads no longer block each other.

Correct way to skip authorization with ImageResizer

The HttpContext object has a SkipAuthorization property that is used to disable the authorization check in UrlAuthorizationModule, which is part of the standard ASP.NET pipeline.
ImageResizer calls UrlAuthorizationModule.CheckUrlAccessForPrincipal directly, outside of the normal ASP.NET pipeline. As a result, the SkipAuthorization property is not honoured.
A workaround for that would be:
protected void Application_Start(object sender, EventArgs e)
{
    // Ask ImageResizer not to re-check authorization if it's skipped
    // by means of the context flag
    Config.Current.Pipeline.OnFirstRequest +=
        (m, c) =>
        {
            Config.Current.Pipeline.AuthorizeImage +=
                (module, context, args) =>
                {
                    if (context.SkipAuthorization)
                    {
                        args.AllowAccess = true;
                    }
                };
        };
}
The outer OnFirstRequest here is to make sure that the AuthorizeImage subscription happens after all plugins have been loaded, so it is last in the chain to execute.
I don't like this workaround because it is quite implementation-dependent. For example, if ImageResizer's plugin loading is moved from OnFirstRequest to elsewhere, it will break.
It would be nice if this were fixed in ImageResizer itself. I would suggest changing the additional authorization check in InterceptModule to something along these lines:
// Run the rewritten path past the auth system again, using the result as the default "AllowAccess" value
bool isAllowed = true;
if (canCheckUrl) try {
    isAllowed = (conf.HonourSkipAuthorization && app.Context.SkipAuthorization)
        || UrlAuthorizationModule.CheckUrlAccessForPrincipal(virtualPath, user, "GET");
} catch (NotImplementedException) { } // for Mono support
Would that be appropriate, or is there a better solution?
In the last part of the question I'll describe my use case; reading it is entirely optional, but it gives perspective on how this query came to be.
In an ASP.NET application I have an HttpHandler that serves PDF documents. It accepts a document id and security information in the URL and headers (I'm using OAuth), performs all the security checks, and if they succeed, the PDF document path is retrieved from the database and the file is served to the client by Response.WriteFile.
I need to provide a preview of a PDF page as an image, and I'm using ImageResizer with the PdfRenderer plugin for that.
Unfortunately the path of the PDF is not known until my file handler has run, and this is too late for ImageResizer to act on the request, since all the magic happens in PostAuthorizeRequest, which is (obviously) before a handler runs.
To work around this I rewrote my HttpHandler as an HttpModule, which executes on BeginRequest. If the authorization checks fail, the request is severed right there. If they pass, I use RewritePath to point to the resulting PDF and at the same time write the proper Content-Type and other headers to the response. I also set the context.SkipAuthorization flag because, since the PDF files can't be accessed via a direct URL per the web.config configuration, the pipeline would not even reach PostAuthorizeRequest unless authorization is skipped. It is safe to skip authorization in this case, since all required checks have already been performed by the module.
So this allows the execution flow to reach ImageResizer. But then ImageResizer decides that it wants to re-check authorization on the PDF URL, which fails unless you apply the workaround above.
What is the rationale for this re-check? In the scenario above, when ImageResizer has work to do, the image it is to serve is not what appears in the URL, and the auth check has already been done by the ASP.NET pipeline by the time we are in PostAuthorizeRequest. In which cases is this re-check useful?
Update: The latest version of ImageResizer respects the HttpContext.SkipAuthorization boolean, making the event handler no longer necessary.
Your workaround is exactly the right way to deal with this, and it is forwards-compatible.
The re-check exists because:
URL rewriting is very common, encouraged, and even implemented by certain ImageResizer plugins (such as FolderResizeSyntax and ImageHandlerSyntax).
URL rewriting after the Authorize stage allows UrlAuthorization to be circumvented completely.
HttpContext.SkipAuthorization should be respected by ImageResizer, and probably will be in a future release.
That said, your workaround involving AuthorizeImage is actually exactly what I would suggest. I don't see how it could be more fragile than SkipAuthorization by itself; in fact, it should work regardless of how ImageResizer reorders events in the future.
ImageResizer respects the order of events in the pipeline: your v2 with authorization happening before PostAuthorize is exactly correct (although it could be moved to PreAuthorize if you wished to support additional front-end resizing during BeginRequest).
Also, using RewritePath to serve the original PDF is far more efficient than calling WriteFile, especially on IIS 6+, as you probably discovered.