Cannot call handler ashx more than once when using Response.TransmitFile - file-io

I've created an HttpHandler (.ashx) that lets clients download content (videos) from my website. At first I was using the WriteFile method, but I realized it required too much memory, so I decided to switch to the TransmitFile method.
But one weird thing happened: I wasn't able to run more than one download at a time anymore. I had to wait for one download to finish before starting another.
Basically the code is like this:
System.IO.FileInfo file = new System.IO.FileInfo(file_path);
context.Response.Clear();
if (flagH264)
{
    context.Response.ContentType = "video/mp4";
}
else
{
    context.Response.ContentType = "video/x-ms-wmv";
}
context.Response.AddHeader("Content-Length", file.Length.ToString());
context.Response.AddHeader("Content-Disposition", "attachment; filename=" + name);
//context.Response.WriteFile(file_path.Trim());
context.Response.TransmitFile(file_path.Trim());
context.Response.Flush();
Does anyone know what the problem might be?

I found the problem.
The HttpHandler (.ashx) I was using for the download page implemented the IRequiresSessionState interface, which gave me read/write access to session data. To protect session data from concurrent modification, ASP.NET serializes requests that hold the session lock, so while TransmitFile was streaming one response, every other request in the same session was blocked.
The solution was replacing IRequiresSessionState with IReadOnlySessionState, which gives read-only access to session data. Since the handler never writes to the session, no exclusive lock is needed, and downloads no longer block each other.
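For reference, here is a minimal sketch of the fix; the handler name and body are illustrative, not the original code:

using System.Web;
using System.Web.SessionState;

// Implementing IReadOnlySessionState instead of IRequiresSessionState means
// this handler never takes the exclusive session lock, so two downloads from
// the same session can run in parallel.
public class DownloadHandler : IHttpHandler, IReadOnlySessionState
{
    public bool IsReusable
    {
        get { return false; }
    }

    public void ProcessRequest(HttpContext context)
    {
        // ... the TransmitFile code shown above goes here ...
    }
}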

Related

LibGit2Sharp: how to Task-Async

Hi, I have begun to use the package for some very simple tasks, mainly cloning a Git wiki repo and subsequently pulling the changes from the server when needed.
Now I cannot see any methods corresponding to the Task-based Asynchronous Pattern (TAP), and I could not find anything about this in the documentation either.
Could you please give me some direction on how to wrap the LibGit2Sharp methods in a TAP construct? A link to documentation (if I missed something) or just telling me which callback to hook up to a TaskCompletionSource object would be nice.
It also doesn't really help that I am a newbie with Git; normally I only do basic branching, merging, and pushing with it.
For cloning I use:
Repository.Clone(@"https://MyName@bitbucket.org/MyRepo/MyProject.git/wiki", "repo");
For pulling I use:
using (var repo = new Repository("repo"))
{
    // Credential information to fetch
    LibGit2Sharp.PullOptions options = new LibGit2Sharp.PullOptions();
    options.FetchOptions = new FetchOptions();
    var signature = new LibGit2Sharp.Signature(new Identity("myname", "mymail@google.com"), DateTimeOffset.Now);
    Commands.Pull(repo, signature, options);
}
Thanks in advance
First of all, you should never try sync-over-async or async-over-sync. See this article.
If you're thinking of using Task.Run, don't. That will just trade one thread pool thread for another, with the added cost of two context switches.
But you should reconsider your whole approach to this. You don't need to clone the repository just to get the contents of a file. Each version of a file has a unique URL, and you can even get the URL of a file on a specific branch.
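For example, here is a minimal sketch of reading one wiki page over plain HTTP instead of cloning. The raw-file URL format is an assumption (hosts such as Bitbucket and GitHub expose per-branch "raw" endpoints; check your host's documentation), and the class and parameter names are illustrative:

using System.Net.Http;
using System.Threading.Tasks;

class WikiReader
{
    private static readonly HttpClient http = new HttpClient();

    // Fetches the raw contents of a single file on a given branch.
    // This is genuinely asynchronous I/O: no thread is blocked while the
    // response is in flight, so no sync-over-async wrapper is needed.
    public static Task<string> GetPageAsync(string branch, string page)
    {
        // Hypothetical raw-file URL; adjust to your host's actual scheme.
        var url = $"https://bitbucket.org/MyRepo/MyProject/raw/{branch}/{page}";
        return http.GetStringAsync(url);
    }
}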

Handling of Thumbnails in Google Drive Android API (GDAA)

I've run into the following problem when porting an app from the REST API to GDAA.
The app needs to download some of (thousands of) JPEG images based on user selection. The app solves this by downloading a thumbnail version first, using this construct of the REST API:
private static InputStream getCont(String rsid, boolean bBig){
    InputStream is = null;
    if (rsid != null) try {
        File gFl = bBig ?
            mGOOSvc.files().get(rsid).setFields("downloadUrl").execute():
            mGOOSvc.files().get(rsid).setFields("thumbnailLink").execute();
        if (gFl != null){
            GenericUrl url = new GenericUrl(bBig ? gFl.getDownloadUrl() : gFl.getThumbnailLink());
            is = mGOOSvc.getRequestFactory().buildGetRequest(url).execute().getContent();
        }
    } catch (UserRecoverableAuthIOException uraEx) {
        authorize(uraEx.getIntent());
    } catch (GoogleAuthIOException gauEx) {}
    catch (Exception e) { }
    return is;
}
It allows the app to get either a 'thumbnail' or a 'full-blown' version of an image based on the bBig flag. The user can select a thumbnail from a list, and the full-blown image download follows (all of this backed by a disk-based LRU cache, of course).
The problem is that GDAA does not have an option to ask for a reduced-size / thumbnail version of an object (AFAIK), so I have to resort to combining both APIs, which makes the code more convoluted than I'd like (bottom of the page). Needless to say, the 'Resource ID' needed by the REST API may not be immediately available.
So, the question is: Is there a way to ask GDAA for a 'thumbnail' version of a document?
Downloading thumbnails isn't currently available in the Drive Android API, and unfortunately I can't give a timeframe for when it will be available. Until then, the Drive Java Client Library is the best way to get thumbnails on Android.
We'd appreciate it if you went ahead and filed a feature request against our issue tracker: https://code.google.com/a/google.com/p/apps-api-issues/
That gives requests more visibility to our teams internally, and issues will be marked resolved when we release updates.
Update: I had an error in the discussion of the request fields.
As Ofir says, you can't get thumbnails with the Drive Android API, but you can get them with the Drive Java Client Library. This page is a really good primer for getting started:
https://developers.google.com/drive/v3/web/quickstart/android
Oddly, I can't get the fields portion of the request to work as written in that quick start. In my experience, you have to request the fields a little differently.
Since you're doing a custom field request, be sure to add the other fields you want as well. Here is how I've gotten it to work:
Drive.Files.List request = mService.files()
        .list()
        .setFields("files/thumbnailLink, files/name, files/mimeType, files/id")
        .setQ("Your file param and/or mime query");
FileList files = request.execute();
files.getFiles(); // Each File in the collection will have a valid thumbnailLink
A sample query might be:
"mimeType = 'image/jpeg' or mimeType = 'video/mp4'"
Hope this helps!

Async ActionResult implementation is blocking

Okay, here I have an MVC 4 application, and I am trying to create an asynchronous ActionResult within it.
Objective: The user has a 'download PDF' icon on the webpage, and generating the PDF takes a lot of time. So while the server is busy generating the PDF, the user should be able to keep performing actions on the webpage.
(Clicking the 'download PDF' link sends an AJAX request to the server; the server fetches some data and pushes back the PDF.)
What is happening is that when I call the AJAX endpoint to download the PDF, it starts the process but blocks every other request until it returns to the browser. That is a simple blocking request.
What I have tried so far:
1) Used AsyncController as the base class of the controller.
2) Changed the action to async Task<ActionResult> DownloadPDF(), and wrapped the whole code/logic that generates the PDF into a helper that is awaited inside DownloadPDF(), something like this:
public async Task<ActionResult> DownloadPDF()
{
    string filepath = await CreatePDF();
    // create a file stream and return it as ActionResult
}

private async Task<string> CreatePDF()
{
    // creates the PDF and returns the path as a string
    return filePath;
}
YES, the operations are session-based.
Am I missing something somewhere?
Objective: The user has a 'download PDF' icon on the webpage, and generating the PDF takes a lot of time. So while the server is busy generating the PDF, the user should be able to keep performing actions on the webpage.
async will not do this. As I describe in my MSDN article, async yields to the ASP.NET runtime, not the client browser. This only makes sense; async can't change the HTTP protocol (as I mention on my blog).
However, though async cannot do this, AJAX can.
What is happening is that when I call the AJAX endpoint to download the PDF, it starts the process but blocks every other request until it returns to the browser. That is a simple blocking request.
AFAIK, the request code you posted is completely asynchronous. It returns the thread to the ASP.NET thread pool while the PDF is being created. However, there are several other aspects to concurrent requests. In particular, one common hangup is that by default ASP.NET session state cannot be shared between multiple requests.
1) Used AsyncController as the base class of the controller.
This is unnecessary. Modern controllers inspect the return type of their actions to determine whether they are asynchronous.
YES, the operations are session-based.
It sounds to me like ASP.NET session state is what is limiting your requests. See Concurrent Requests and Session State. You'll have to either turn it off or make it read-only in order to have concurrent requests within the same session.
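For instance, here is a minimal sketch of the read-only option in MVC (the controller and helper names are illustrative, not your code). Marking the controller's session state as read-only means its requests no longer take the exclusive session lock, so the long-running PDF request stops serializing the other requests from the same session:

using System.Threading.Tasks;
using System.Web.Mvc;
using System.Web.SessionState;

[SessionState(SessionStateBehavior.ReadOnly)] // session is readable, but no exclusive lock is taken
public class PdfController : Controller
{
    public async Task<ActionResult> DownloadPDF()
    {
        string filePath = await CreatePDF();
        return File(filePath, "application/pdf", "document.pdf");
    }

    private async Task<string> CreatePDF()
    {
        // Hypothetical stand-in for the real PDF generation logic.
        await Task.Delay(5000); // simulate slow generation
        return @"C:\temp\document.pdf";
    }
}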

Store and Sync local Data using Breezejs and MVC Web API

I want to use the breezejs API for storing data in local storage (indexeddb or websql) and also want to sync the local data with SQL Server.
But I have failed to achieve this, and I have also not been able to find a sample app of this type built with breezejs, knockout, and MVC Web API.
My requirements are:
1) If the internet connection is available, the data will come from SQL Server via the MVC Web API.
2) If the internet connection is down, the application will retrieve data from the cached local storage (indexeddb or websql).
3) As soon as the connection is back, the local data will sync to SQL Server.
Please let me know whether I can achieve this with the breezejs API or not.
If yes, please provide me some links and samples.
If no, what else can we use to meet this kind of requirement?
Thanks.
You can do this, but I would suggest simply using localStorage. Basically, every time you read from the server or save to the server, you export the entities and save them to local storage. Then, when you need to read in the data and the server is unreachable, you read the data from localStorage, use importEntities to get it into the manager, and then query locally.
function getData() {
    var query = breeze.EntityQuery
        .from("{YourAPI}");
    return manager.executeQuery(query)
        .then(saveLocallyAndReturnPromise)
        .fail(tryLocalRestoreAndReturnPromise);

    // If the query was successful remotely, save the data in case the
    // connection is lost later.
    function saveLocallyAndReturnPromise(data) {
        // Should add error handling here. This code
        // assumes the local processing will be successful.
        var cacheData = manager.exportEntities();
        window.localStorage.setItem('savedCache', cacheData);
        // Return the queried data as a promise so that this detour is
        // transparent to the viewmodel.
        return Q(data);
    }

    function tryLocalRestoreAndReturnPromise(error) {
        // Assume any error just means the server is inaccessible.
        // Simplified for the example, but more robust error handling is
        // warranted.
        var cacheData = window.localStorage.getItem('savedCache');
        // NOTE: should handle an empty saved cache here by throwing an error.
        manager.importEntities(cacheData); // restore saved cache
        var localQuery = query.using(breeze.FetchStrategy.FromLocalCache);
        return manager.executeQuery(localQuery); // this is a promise
    }
}
This is a code skeleton, for simplicity. You should catch and handle errors, add an isConnected function to determine connectivity, etc.
If you are editing locally, there are a few more hoops to jump through. Every time you make a change to the cache, you will need to export either the whole cache or just the changes (probably depending on the size of the cache). When there is a connection, you will need to test for local changes first and, if found, save them to the server before re-querying it. In addition, any schema changes made while offline complicate matters tremendously, so be aware of that.
Hope this helps. A robust implementation is a bit more complex, but this should give you a starting point.

Correct way to skip authorization with ImageResizer

The HttpContext object has a SkipAuthorization property that is used to disable the authorization check in UrlAuthorizationModule, which is part of the standard ASP.NET pipeline.
ImageResizer calls UrlAuthorizationModule.CheckUrlAccessForPrincipal directly, outside of the normal ASP.NET pipeline. As a result, the SkipAuthorization property is not honoured.
A workaround for that would be:
protected void Application_Start(object sender, EventArgs e)
{
    // Ask ImageResizer not to re-check authorization if it's skipped
    // by means of the context flag
    Config.Current.Pipeline.OnFirstRequest +=
        (m, c) =>
        {
            Config.Current.Pipeline.AuthorizeImage +=
                (module, context, args) =>
                {
                    if (context.SkipAuthorization)
                    {
                        args.AllowAccess = true;
                    }
                };
        };
}
The outer OnFirstRequest here is to make sure that the AuthorizeImage subscription happens after all plugins have been loaded, so it's last in the chain to execute.
I don't like this workaround because it's quite implementation-dependent. For example, if ImageResizer's plugin loading is moved from OnFirstRequest to elsewhere, it will break.
It would be nice if this were fixed in ImageResizer itself. I would suggest changing the additional authorization check in InterceptModule to something along these lines:
// Run the rewritten path past the auth system again, using the result as the default "AllowAccess" value
bool isAllowed = true;
if (canCheckUrl) try {
    isAllowed = conf.HonourSkipAuthorization && app.Context.SkipAuthorization
        || UrlAuthorizationModule.CheckUrlAccessForPrincipal(virtualPath, user, "GET");
} catch (NotImplementedException) { } // For MONO support
Would that be appropriate, or is there a better solution?
In the last part of the question I'll describe my use case. Reading it is entirely optional, but it gives perspective on how this question came to be.
In an ASP.NET application I have an HttpHandler that serves PDF documents. It accepts a document id and security information in the URL and headers (I'm using OAuth), performs all the security checks, and if they succeed the PDF document's path is retrieved from the database and the file is served to the client by Response.WriteFile.
I need to provide a preview of a PDF page as an image, and I'm using ImageResizer with the PdfRenderer plugin for that.
Unfortunately the path of the PDF is not known until my file handler has run, and that is too late for ImageResizer to act on the request, since all its magic happens in PostAuthorizeRequest, which is (obviously) before a handler runs.
To work around this I rewrote my HttpHandler as an HttpModule, which executes on BeginRequest. If the authorization checks fail, the request is terminated right there. If they pass, I use RewritePath to point to the resulting PDF, and at the same time I write the proper Content-Type and other headers to the response. I also set the context.SkipAuthorization flag because, since the PDF files can't be reached via a direct URL per the web.config configuration, the pipeline would not even get to PostAuthorizeRequest if authorization were not skipped. It is safe to skip authorization in this case, since all the required checks have already been performed by the module.
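A minimal sketch of what such a module might look like, based on the description above; PdfModule, IsPdfRequest, IsAuthorized, and LookUpPdfPath are illustrative names standing in for the real OAuth checks and database lookup, not the original code:

using System;
using System.Web;

public class PdfModule : IHttpModule
{
    public void Init(HttpApplication app)
    {
        app.BeginRequest += (sender, e) =>
        {
            HttpContext ctx = ((HttpApplication)sender).Context;
            if (!IsPdfRequest(ctx.Request)) return;

            if (!IsAuthorized(ctx.Request))
            {
                // Authorization failed: end the request right here.
                ctx.Response.StatusCode = 403;
                ctx.ApplicationInstance.CompleteRequest();
                return;
            }

            // Checks passed: point the rest of the pipeline at the real file
            // and tell the pipeline not to re-run URL authorization.
            ctx.Response.ContentType = "application/pdf";
            ctx.SkipAuthorization = true;
            ctx.RewritePath(LookUpPdfPath(ctx.Request), false);
        };
    }

    public void Dispose() { }

    // Hypothetical helpers standing in for the real logic.
    private static bool IsPdfRequest(HttpRequest r) { return r.Path.EndsWith(".pdf", StringComparison.OrdinalIgnoreCase); }
    private static bool IsAuthorized(HttpRequest r) { return true; }
    private static string LookUpPdfPath(HttpRequest r) { return "~/docs/sample.pdf"; }
}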
So this allows the execution flow to reach ImageResizer. But then ImageResizer decides that it wants to re-check the authorization on the PDF URL, which fails unless you apply the workaround above.
What is the rationale for this re-check? In the scenario above, when ImageResizer has work to do, the image it is to serve is not what appears in the URL, and the auth check has already been done by the ASP.NET pipeline by the time we are in PostAuthorizeRequest. In which cases is this re-check useful?
Update: The latest version of ImageResizer respects the HttpContext.SkipAuthorization boolean, making the event handler no longer necessary.
Your workaround is exactly the right way to deal with this, and is forward-compatible.
The re-check exists because:
1) URL rewriting is very common, encouraged, and even implemented by certain ImageResizer plugins (such as FolderResizeSyntax and ImageHandlerSyntax).
2) URL rewriting after the Authorize stage allows UrlAuthorization to be circumvented completely.
HttpContext.SkipAuthorization should be respected by ImageResizer, and probably will be in a future release.
That said, your workaround involving AuthorizeImage is exactly what I would suggest. I don't see how it could be more fragile than SkipAuthorization by itself; in fact, it should work regardless of how ImageResizer reorders events in the future.
ImageResizer respects the order of events in the pipeline; your v2, with authorization happening before PostAuthorize, is exactly correct (although it could be moved to PreAuthorize if you wished to support additional front-end resizing during BeginRequest).
Also, using RewritePath to serve the original PDF is far more efficient than calling WriteFile, especially on IIS6+, as you probably discovered.