I have images in private block blobs in Azure.
I am using AzureReader2 and can access an image like this: http://localhost:55328/azure/00001/IMG_0001.JPG. That works fine and redirects to the blob with a Shared Access Signature.
However, if I try to resize the image, e.g. IMG_0001.JPG?width=100&height=100, I just get a 404.
Stepping through the code, I noticed this line:
if (e.VirtualPath.StartsWith(prefix, StringComparison.OrdinalIgnoreCase) && e.QueryString.Count == 0)
{
....
}
So, if there's a QueryString, no processing happens.
Debug output here:
https://gist.github.com/anonymous/28fd112eec194181baae
Thanks in advance
Your debugging misled you. It's true that redirection only happens when there is no querystring. When there are parameters, the blob needs to be modified, which means we must proxy it. A 302 redirect in that scenario is impossible.
AzureReader registers an IVirtualImageProvider, which ImageResizer uses automatically when handling all of the proxying, processing, and caching.
The default behavior is to download, modify, and re-serve the data. The 302 redirect is just an optimization for throughput on unmodified files.
Notes:
sharedAccessExpiryTime is ignored; there is no setting by that name.
If you are going to reference code, it's best to link to the exact line in the file on GitHub; otherwise we can't easily find the context. Press y on any GitHub page to get a permalink, then click a line number (or a range).
I am in the middle of working with, and getting a handle on, Vue.js. I would like to code my app so that it has some configurable behaviors, letting me play with parameter values the way you do when editing your Sublime preferences, but without having to rebuild the app. Ideally, my colleagues could fiddle with settings all day long by editing a file over FTP, or through some interface.
The only way I know to do that now is to place those settings in a separate file, but since the app runs in the client, that file would have to be fetched via another HTTP request, meaning it's a publicly readable file. Even though there isn't any sensitive information in such a configuration file, I still feel a little wonky about having it public like that if it can be avoided in any way.
Can it be avoided?
I don't think you can avoid this. One way or another your config file will be loaded into the Vue.js application, and is therefore visible to the end user (with some effort).
Even putting the file outside of the public folder wouldn't help you much, because then it can't be requested over HTTP at all; it would only be available to your build process.
So a possible solution could be an endpoint, say GET example.com/settings, that returns a JSON object. Your app would set a cookie like config_key = H47DXHJK12 (or better, a UUID: https://en.wikipedia.org/wiki/Universally_unique_identifier), which acts as the access key for a specific config object.
When your application requests GET example.com/settings, the config_key cookie is sent along, and the config object stored under that key is returned. If the user clears their cookies, a new request simply returns an empty config.
I have an app using the WebKitGTK1 API with WebKit-GTK 2.4.9 on Linux. (This is the current version in Debian Jessie and versions 2.5+ don't support the v1 API.)
I've implemented a custom URI scheme for loading entire basic page content using a resource-request-starting handler. It parses the incoming URI via webkit_web_resource_get_uri, and if it matches the custom scheme, generates some HTML content and calls webkit_network_request_set_uri to replace the original URI with a base64'd data: URI containing the content to render. (This is similar to the accepted answer of this question.)
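The rewrite step is just string assembly; here's a minimal, dependency-free sketch of building the data: URI to pass to webkit_network_request_set_uri (in a real WebKitGTK app you'd likely use g_base64_encode() from GLib rather than the hand-rolled encoder below):

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Stand-in for g_base64_encode(): encode len bytes as base64. */
static char *base64_encode(const unsigned char *src, size_t len) {
    static const char tbl[] =
        "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/";
    size_t olen = 4 * ((len + 2) / 3);
    char *out = malloc(olen + 1), *p = out;
    for (size_t i = 0; i < len; i += 3) {
        unsigned v = src[i] << 16;
        if (i + 1 < len) v |= src[i + 1] << 8;
        if (i + 2 < len) v |= src[i + 2];
        *p++ = tbl[(v >> 18) & 63];
        *p++ = tbl[(v >> 12) & 63];
        *p++ = (i + 1 < len) ? tbl[(v >> 6) & 63] : '=';
        *p++ = (i + 2 < len) ? tbl[v & 63] : '=';
    }
    *p = '\0';
    return out;
}

/* Build "data:text/html;base64,..." from generated page content;
 * the result is what the handler would set on the network request. */
char *make_data_uri(const char *html) {
    char *b64 = base64_encode((const unsigned char *)html, strlen(html));
    const char *prefix = "data:text/html;base64,";
    char *uri = malloc(strlen(prefix) + strlen(b64) + 1);
    strcpy(uri, prefix);
    strcat(uri, b64);
    free(b64);
    return uri;
}
```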
This mostly works well, and my handler is called on each request (including repeated requests with the same original URI) and generates the correct content -- but somewhere upstream the browser appears to render only the first returned data for any given original URI, even if the data URI I generate is different.
Possibly of note is that webkit_web_resource_get_uri returns the original non-data: URI even after calling webkit_network_request_set_uri, so I assume this URI is being cached, and is in turn being used as a cache key in some higher-level component instead of the real URI from the request.
Unfortunately the resource's uri appears to be a G_PARAM_CONSTRUCT_ONLY property, and there doesn't appear to be any public API to set and/or clear it so that the rewritten URI of the request is used instead. Is there some way to force GTK to set the property after construction anyway? As far as I can tell it does have a setter method internally, and the getter would do the Right Thing™ if the internal property were reset to NULL.
Or is there some better method to force WebKit to render the new data: URI despite anything it thinks to the contrary?
For the moment I've worked around it by including the values that make it generate different data in the original custom URI (passed to webkit_web_view_load_uri or in links in the generated page). This does work, but it's a bit ugly, and could be problematic if I forget to add something in the future, or if something changes the generated output but is not known in advance. It seems a bit silly that it goes to all the trouble of raising the event that generates the correct data, only to throw it away later, (presumably) due to a URI comparison on the wrong URI.
I suppose using a known-unique value (e.g. a sequential incrementing id) would also work, and would resolve some of the unknown-in-advance issues, but that's no less ugly.
The HttpContext object has a SkipAuthorization property that is used to disable the authorization check in UrlAuthorizationModule, which is part of the standard ASP.NET pipeline.
ImageResizer calls UrlAuthorizationModule.CheckUrlAccessForPrincipal directly, outside of the normal ASP.NET pipeline. As a result, the SkipAuthorization property is not honoured.
A workaround for that would be:
protected void Application_Start(object sender, EventArgs e)
{
    // Ask ImageResizer not to re-check authorization if it's skipped
    // by means of the context flag
    Config.Current.Pipeline.OnFirstRequest += (m, c) =>
    {
        Config.Current.Pipeline.AuthorizeImage += (module, context, args) =>
        {
            if (context.SkipAuthorization)
            {
                args.AllowAccess = true;
            }
        };
    };
}
The outer OnFirstRequest here is to make sure that the AuthorizeImage subscription happens after all plugins have been loaded, so it's last in the chain to execute.
I don't like this workaround because it's quite implementation-dependent. For example, if ImageResizer's plugin loading is moved from OnFirstRequest to elsewhere, it will break.
It would be nice if this were fixed in ImageResizer itself. I would suggest changing the additional authorization check in InterceptModule to something along these lines:
// Run the rewritten path past the auth system again, using the result as the default "AllowAccess" value
bool isAllowed = true;
if (canCheckUrl) try {
    isAllowed = (conf.HonourSkipAuthorization && app.Context.SkipAuthorization)
        || UrlAuthorizationModule.CheckUrlAccessForPrincipal(virtualPath, user, "GET");
} catch (NotImplementedException) { } // For MONO support
Would that be appropriate, or is there a better solution?
In the last part of the question I'll describe my use case. Reading it is entirely optional, but it gives some perspective on how this question came to be.
In an ASP.NET application I have an HttpHandler that serves pdf documents. It accepts a document id and security information in the URL and headers (I'm using OAuth), performs all the security checks, and if they succeed, the pdf document's path is retrieved from the database and the file is served to the client via Response.WriteFile.
I need to provide a preview of a pdf page as an image, and I'm using ImageResizer with the PdfRenderer plugin for that.
Unfortunately the path of the pdf is not known until my handler has run, and this is too late for ImageResizer to act on the request, since all the magic happens in PostAuthorizeRequest, which is (obviously) before a handler runs.
To work around this I rewrote my HttpHandler as an HttpModule, which executes on BeginRequest. If the authorization checks fail, the request is terminated right there. If they pass, I use RewritePath to point to the resulting pdf and at the same time write the proper Content-Type and other headers to the response. I also set the context.SkipAuthorization flag, because, since the pdf files can't be accessed via a direct URL per the web.config configuration, the pipeline would never get to PostAuthorizeRequest if authorization weren't skipped. It is safe to skip authorization in this case, since all the required checks have already been performed by the module.
So this allows the execution flow to get to ImageResizer. But then ImageResizer decides that it wants to re-check the authorization on the pdf URL, which fails unless you apply the workaround above.
What is the rationale for this re-check? In the scenario above, when ImageResizer has work to do, the image it is about to serve is not what appears in the URL, and the auth check has already been done by the ASP.NET pipeline by the time we are in PostAuthorizeRequest. In which cases is this re-check useful?
Update: The latest version of ImageResizer respects the HttpContext.SkipAuthorization boolean, making the event handler no longer necessary.
Your workaround is exactly the right way to deal with this, and is forward-compatible.
The re-check exists because:
URL rewriting is very common, encouraged, and even implemented by certain ImageResizer plugins (such as FolderResizeSyntax and ImageHandlerSyntax).
URL rewriting after the Authorize stage allows UrlAuthorization to be circumvented completely.
HttpContext.SkipAuthorization should be respected by ImageResizer, and probably will be in a future release.
That said, your workaround involving AuthorizeImage is actually exactly what I would suggest. I don't see how it could be more fragile than SkipAuthorization by itself, and in fact it should work regardless of how ImageResizer reorders events in the future.
ImageResizer respects the order of events in the pipeline - your V2 with authorization happening before PostAuthorize is exactly correct (although it could be moved to PreAuthorize, if you wished to support additional front-end resizing during BeginRequest).
Also, using RewritePath for serving the original PDF is far more efficient than calling WriteFile, especially on IIS6+, as you probably discovered.
I had to move a website running Joomla 1.7 to a Bluehost server. I did a lot of research on Joomla.org and Google and still haven't resolved the issue.
I receive the following error: Infinite loop detected in JError. I went through the configuration file and made sure that the database name and user match my new database parameters, and I am still receiving this issue. Thank you; an urgent response would be very helpful. Currently working on this and it's not a good start. :(
I have run into the same issue moving my Joomla site from one server to another. What I learned is the following:
Make sure you look at the parameters in configuration.php, and double-check that the database variables in that file are correct. You did mention that you have already done so; in that case I am 98% sure that you have an issue with the file attributes of configuration.php. Change the attributes of configuration.php from 444 to 666.
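From a Linux shell that looks like this (demonstrated here on a scratch file; on your server the file is configuration.php in the Joomla root):

```shell
# Demonstration on a scratch copy; adjust the path to your Joomla root.
cd "$(mktemp -d)"
touch configuration.php
chmod 666 configuration.php   # writable: your edits can be saved
ls -l configuration.php       # shows -rw-rw-rw-
chmod 444 configuration.php   # read-only again once you're done
ls -l configuration.php       # shows -r--r--r--
```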
To get detailed information about the error, open the error.php file located in /libraries/joomla/error/ on your server.
In the following code:
public static function throwError(&$exception)
{
    static $thrown = false;

    // If thrown is hit again, we've come back to JError in the middle of
    // throwing another JError, so die!
    if ($thrown) {
        // echo debug_print_backtrace();
        jexit(JText::_('JLIB_ERROR_INFINITE_LOOP'));
    }
change the line
// echo debug_print_backtrace();
to the following:
print "<pre>";
echo debug_print_backtrace();
print "</pre>";
Remember: when you change the parameters in configuration.php to your new database, make sure the configuration.php file attribute is set to 666; otherwise, when you go to save the file, the change will not stick. Try this first. Good luck.
I have a program that checks whether a file is present every 3 seconds, using WebRequest and WebResponse. If the file is present it does something; if not, etc. That part works fine. I have a web page that controls the program: it creates the file with a message and other variables as entered into the page, then shoots it over to the folder that the program is checking. There is also a "stop" button that deletes the file.
This works well except that after one message is launched and then deleted, when a second message is launched with different content, the program still sees the old message. I watch the file get deleted in IIS, so that is not the issue.
I've thought about meta tags to prevent caching, but would naming the file dynamically solve this issue as well? How would I make the program check for a file when only the first part of the filename is known? I've found solutions for checking directories on local machines, but that won't work here.
Any ideas welcome, thanks.
I'm not that used to IIS, but in Apache you can create a .htaccess file and set/modify HTTP headers.
With Cache-Control you can tell a proxy/browser not to cache a file.
http://www.w3.org/Protocols/rfc2616/rfc2616-sec13.html
A solution like this may work in IIS too if it is really a cache problem.
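As a sketch, the .htaccess entry could look like this (assumes mod_headers is enabled; the file name pattern is just an example, adjust it to your control file):

```apache
# Tell proxies and browsers never to cache the control file.
<FilesMatch "controlfile.*\.txt$">
    Header set Cache-Control "no-cache, no-store, must-revalidate"
</FilesMatch>
```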
(To test this, open the file in your preferred browser with caching turned off.)
A simple hack is to add something unique to the URL each time:
http://www.yourdomain.com/yourpage.aspx?random=123489797
Adding a random number to the URL forces it to be fresh. Even if you don't use the querystring param, IIS doesn't know that, so it executes the page again anyway.
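A sketch of that idea (the helper name is made up; any value that is unique per request works):

```javascript
// Append a unique query-string value so each request looks new to any cache.
function cacheBustedUrl(base) {
  const sep = base.includes('?') ? '&' : '?';
  const nonce = `${Date.now()}-${Math.random().toString(36).slice(2)}`;
  return `${base}${sep}random=${nonce}`;
}
```

The polling program would build the URL this way before every check, so neither IIS nor any intermediate proxy can serve a stale copy.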