Avoid SQL lookup with ImageResizer and DiskCache plugin

I'm having trouble figuring out how to make ImageResizer work properly with a SQL lookup and the DiskCache plugin.
My strategy for naming is as follows:
/myimagetitle-4319560-100x100.jpg is rewritten to /4319560.jpg?id=4319560&title=myimagetitle&height=100&width=100 by the IIS URL Rewrite module.
This works as expected.
Now, in order to find the file name for the image, I need to translate the id using SQL. I have created an IVirtualImageProvider plugin, which implements the FileExists and GetFile methods.
public IVirtualFile GetFile(string virtualPath, NameValueCollection queryString)
{
    var path = this.GetOriginalFilePath(queryString);
    return new VirtualFileWrapper(new ProductPhotoVirtualFile(path));
}

public bool FileExists(string virtualPath, NameValueCollection queryString)
{
    if (File.Exists(this.GetCachedFilePath(queryString)))
    {
        return true;
    }
    if (File.Exists(this.GetOriginalFilePath(queryString)))
    {
        return true;
    }
    return false;
}

private string GetCachedFilePath(NameValueCollection queryString)
{
    // Get customized cache file path based on the query string
    // "cache\4319\560\4319560_w_100_h_100.jpg"
}

private string GetOriginalFilePath(NameValueCollection queryString)
{
    // Perform SQL lookup to translate the id from the query to a file name
}
I am using the DiskCache plugin to ensure my images are cached using IIS etc.
Unfortunately, the FileExists method is always run, executing the SQL on each request.
What I would like to achieve is the following:
have the DiskCache plugin run before the FileExists method, so that the actual SQL lookup is skipped if the file is already cached
control the cache file naming strategy, so that other tools can generate the images into the correct folders.
Is any of the above possible and/or am I doing something wrong?
Thanks

FileExists will always be called, as there's no other way to determine which IVirtualImageProvider should be responsible for the request, and therefore which one provides caching details such as the modified date and cache keys. A better name for it would be IsHandled.
In practice, it's OK to lie within the FileExists method, as throwing a FileNotFoundException during .Open() later will also be handled as a 404. It's not OK to lie within FileExists if it will prevent another IVirtualImageProvider from working.
In your case, you should return true from FileExists if the image URL is in the format you're expecting (i.e., it has an ID, or it is within the path structure dedicated to SQL image blobs). Typically it's best to use a path prefix, though, as it's hard to predict more complex patterns reliably.
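To illustrate, here is a minimal sketch of that approach (the "/sqlimages/" prefix and the reliance on the id parameter are assumptions about your URL scheme, not part of your code): FileExists only does a cheap pattern check, and the SQL lookup is deferred to GetFile, so the per-request existence check never touches the database.
public bool FileExists(string virtualPath, NameValueCollection queryString)
{
    // Cheap pattern check only - no SQL here. "/sqlimages/" is a hypothetical
    // prefix reserved for database-backed images; adjust it to your rewrite rules.
    return virtualPath.StartsWith("/sqlimages/", StringComparison.OrdinalIgnoreCase)
        && !string.IsNullOrEmpty(queryString["id"]);
}

public IVirtualFile GetFile(string virtualPath, NameValueCollection queryString)
{
    // The SQL lookup happens here instead. If the id turns out to be invalid,
    // throwing FileNotFoundException from the virtual file's Open() is treated as a 404.
    var path = this.GetOriginalFilePath(queryString);
    return new VirtualFileWrapper(new ProductPhotoVirtualFile(path));
}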

Related

Serve up file from outside wwwroot

I am migrating an ASP.NET 4.5 website to ASP.NET 5. One function we had returned images off the hard disk from an absolute location; the files aren't stored within the web directory. Previously this worked fine with the following code:
public ActionResult GetVideoImage(string serialNumber, int videoEntryId)
{
    try
    {
        var serial = Device.FriendlySerialNumberToNumericalSerialNumber(serialNumber);
        var entry = this.service.GetVideoEntry(serial, videoEntryId);
        if (entry != null && System.IO.File.Exists(entry.FirstVideoFrameLocation.LocalPath))
        {
            return this.File(entry.FirstVideoFrameLocation.LocalPath, "image/jpeg"); // adjust content type appropriately
        }
    }
    catch
    {
        // fall through to the placeholder image
    }
    return this.Redirect("/content/noimage.png");
}
Unfortunately this doesn't work anymore and throws an exception. From what I can tell, it's because this.File now takes a virtual path rather than an absolute one, so it balks at the idea of serving a file from outside of its web directory.
How can I get around this?
Also, is ActionResult still the best return type for this?
I found the answer on an MS thread that links to an ASP.NET GitHub commit.
Long story short, there are new classes available in the Microsoft.AspNet.Mvc namespace that do what I'm looking for. I specifically chose PhysicalFileResult, which works as expected.
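For reference, here is a minimal sketch of what that ends up looking like. It assumes the current Microsoft.AspNetCore.Mvc naming (the beta Microsoft.AspNet.Mvc namespace exposed an equivalent class), and ResolveFramePath is a hypothetical helper that returns the absolute path on disk:
public IActionResult GetVideoImage(string serialNumber, int videoEntryId)
{
    var path = ResolveFramePath(serialNumber, videoEntryId); // hypothetical helper returning an absolute path
    if (path != null && System.IO.File.Exists(path))
    {
        // PhysicalFileResult accepts an absolute file system path, so serving from outside wwwroot works
        return new PhysicalFileResult(path, "image/jpeg");
    }
    return Redirect("/content/noimage.png");
}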

App Folder files not visible after un-install / re-install

I noticed this in the debug environment where I have to do many re-installs in order to test persistent data storage, initial settings, etc... It may not be relevant in production, but I mention this anyway just to inform other developers.
Any files created by an app in its App Folder are not 'visible' to queries after manual un-install / re-install (from IDE, for instance). The same applies to the 'Encoded DriveID' - it is no longer valid.
It is probably 'by design' but it effectively creates 'orphans' in the app folder until manually cleaned via 'drive.google.com > Manage Apps > [yourapp] > Options > Delete hidden app data'. It also creates problems if an app relies on finding files by metadata, title, ... since these seem to be gone. As I said, not a production problem, but it can create some frustration during development.
Can any of the friendly Googlers confirm this? Is there any other way to get to these files after re-install?
Try this approach:
Use requestSync() in onConnected() as:
@Override
public void onConnected(Bundle connectionHint) {
    super.onConnected(connectionHint);
    Drive.DriveApi.requestSync(getGoogleApiClient()).setResultCallback(syncCallback);
}
Then, in its callback, query the contents of the drive using:
final private ResultCallback<Status> syncCallback = new ResultCallback<Status>() {
    @Override
    public void onResult(@NonNull Status status) {
        if (!status.isSuccess()) {
            showMessage("Problem while retrieving results");
            return;
        }
        query = new Query.Builder()
                .addFilter(Filters.and(Filters.eq(SearchableField.TITLE, "title"),
                        Filters.eq(SearchableField.TRASHED, false)))
                .build();
        Drive.DriveApi.query(getGoogleApiClient(), query)
                .setResultCallback(metadataCallback);
    }
};
Then, in its callback, if found, retrieve the file using:
final private ResultCallback<DriveApi.MetadataBufferResult> metadataCallback =
        new ResultCallback<DriveApi.MetadataBufferResult>() {
    @SuppressLint("SetTextI18n")
    @Override
    public void onResult(@NonNull DriveApi.MetadataBufferResult result) {
        if (!result.getStatus().isSuccess()) {
            showMessage("Problem while retrieving results");
            return;
        }
        MetadataBuffer mdb = result.getMetadataBuffer();
        for (Metadata md : mdb) {
            Date createdDate = md.getCreatedDate();
            DriveId driveId = md.getDriveId();
            readFromDrive(driveId); // read each matching file; driveId is only in scope inside the loop
        }
    }
};
Job done!
Hope that helps!
It looks like Google Play services has a problem. (https://stackoverflow.com/a/26541831/2228408)
For testing, you can do it by clearing Google Play services data (Settings > Apps > Google Play services > Manage Space > Clear all data).
Or, at this time, you need to implement it by using Drive SDK v2.
I think you are correct that it is by design.
By inspection, I have concluded that until an app places data in the AppFolder, Drive does not sync it down to the device, however much you try to hassle it. Therefore it is impossible to check for the existence of AppFolder data placed by another device or by a prior installation. I'd assume this was done to create a consistent clean install.
I can see that there are a couple of strategies to work around this:
1) Place dummy data on AppFolder and then sync and recheck.
2) Accept that in the first instance there is the possibility of duplicates, as you cannot access the existing file by definition you will create a new copy, and use custom metadata to come up with a scheme to differentiate like-named files and choose which one you want to keep (essentially implement your conflict merge strategy across the two different files).
I've done the second: I have an update number to compare data from different devices and decide which version I want, and thus whether to upload, download or leave alone. As my data is an SQLite DB, I also have some code to only sync once updates have settled down. I deliberately consider people updating two devices at once foolish, and the results are consistent but undefined as to which version will win.

How can I get references to BlockBlob objects from CloudBlobDirectory.ListBlobs?

I am using the Microsoft Azure .NET client libraries to interact with Azure cloud storage. I need to be able to access additional information about each blob in its metadata collection. I am currently using the CloudBlobDirectory.ListBlobs() method to get a list of blobs in a particular directory of a directory structure I've devised in the blob names. The ListBlobs() method returns a list of IListBlobItem objects. They only have a couple of properties: Url and references to the parent directory and parent container. I need to get to the metadata of the actual blob objects.
I envisioned there would be a way to either cast the IListBlobItem to a BlockBlob object or use the IListBlobItem to get a reference to the BlockBlob, but can't seem to find a way to do that.
My question is: Is there a way to get a BlockBlob object from this method, or do I have to use a different way of getting the actual BlockBlob objects? If different, then can you suggest a way to achieve this, while also being able to filter by the "directory" scheme?
OK... I found a way to do this, and while it seems a little clunky and indirect, it does achieve the main thing I thought should be doable, which is to cast the IListBlobItem directly to a CloudBlockBlob object.
What I am doing is getting the list from the Directory object's ListBlobs() method and then looping over each item in the list and casting the item to a CloudBlockBlob object and then calling the FetchAttributes() method to retrieve the properties (including the metadata). Then add a new "info" object to a new list of info objects. Here's the code I'm using:
CloudBlobDirectory dir = container.GetDirectoryReference(dirPath);
var blobs = dir.ListBlobs(true);
foreach (IListBlobItem item in blobs)
{
    CloudBlockBlob blob = (CloudBlockBlob)item;
    blob.FetchAttributes();
    files.Add(new ImageInfo
    {
        FileUrl = item.Uri.ToString(),
        FileName = item.Uri.PathAndQuery.Replace(restaurantId.ToString().PadLeft(3, '0') + "/", ""),
        ImageName = blob.Metadata["Name"]
    });
}
The whole "Blob" concept seems needlessly complex and doesn't seem to achieve what I'd have thought would have been one of the main features of the Blob wrapper. That is, a way to expand search capabilities by allowing a query over name, directory, container and metadata. I'd have thought you could construct a linq query that would read somewhat like: "return a list of all blobs in the 'images' container, that are in the 'natural/landscapes/' directory path that have a metadata key of 'category' with the value of 'sunset'". There doesn't seem to be a way to do that and that seems to be a missed opportunity to me. Oh, well.
If I'm wrong and way off base here, please let me know.
This approach has been developed for Java, but I hope it can somehow be modified to fit any other supported language. Although the functionality you ask for has not been explicitly developed yet, I think I found a different (hopefully less clunky) way to access CloudBlockBlob data from a ListBlobItem element.
The following code can be used to delete, for example, every blob inside a specific directory.
CloudBlobClient blobClient = /* Obtain your blob client */
try {
    CloudBlobContainer container = /* Obtain your blob container */
    for (ListBlobItem blobItem : container.listBlobs(blobPrefix)) {
        if (blobItem instanceof CloudBlob) {
            CloudBlob blob = (CloudBlob) blobItem;
            if (blob.exists()) {
                System.out.println("Deleting blob " + blob.getName());
                blob.delete();
            }
        }
    }
} catch (URISyntaxException | StorageException ex) {
    Logger.getLogger(BlobOperations.class.getName()).log(Level.SEVERE, null, ex);
}
The previous answers are good. I just wanted to point out 2 things:
1) Nowadays async programming is recommended and supported by the Azure SDK as well, so try to use it:
CloudBlobDirectory dir = container.GetDirectoryReference(dirPath);
var blobs = dir.ListBlobs(true);
foreach (IListBlobItem item in blobs)
{
    CloudBlockBlob blob = (CloudBlockBlob)item;
    await blob.FetchAttributesAsync(); // Use async calls...
}
2) Fetching metadata in a separate call is not efficient; the code above makes two HTTP requests per blob object. The ListBlobs() method supports returning metadata in the same call by setting the BlobListingDetails parameter:
CloudBlobDirectory dir = container.GetDirectoryReference(dirPath);
var blobs = dir.ListBlobs(useFlatBlobListing: true, blobListingDetails: BlobListingDetails.Metadata);
I recommend using the second approach if possible, since it is the most efficient way to fetch metadata.
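To make the difference concrete, here is a rough sketch combining the listing call with a client-side LINQ filter over the metadata (using System.Linq; the "category"/"sunset" values simply echo the hypothetical query described earlier in the thread). The metadata is already populated by the listing itself, so no per-blob FetchAttributes round trip is needed:
CloudBlobDirectory dir = container.GetDirectoryReference(dirPath);
var sunsets = dir.ListBlobs(useFlatBlobListing: true, blobListingDetails: BlobListingDetails.Metadata)
    .OfType<CloudBlockBlob>()
    .Where(b => b.Metadata.ContainsKey("category") && b.Metadata["category"] == "sunset")
    .ToList();
// Metadata is filled in by the listing, so there is no extra HTTP request per blob.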

Getting Path (context root) to the Application in Restlet

I need to get the application root within a Restlet resource class (it extends ServerResource). My end goal is to return a full, explicit path to another resource.
I am currently using getRequest().getResourceRef().getPath(), and this almost gets me what I need. It does not return the full URL (like http://example.com/app); it returns /resourceName. So I have two problems with that: it is missing the scheme (the http or https part) and server name, and it does not include where the application has been mounted.
So given a person resource at 'http://dev.example.com/app_name/person', I would like to find a way to get back 'http://dev.example.com/app_name'.
I am using Restlet 2.0 RC3 and deploying it to GAE.
It looks like getRequest().getRootRef().toString() gives me what I want. I tried using a combination of method calls of getRequest().getRootRef() (like getPath or getRelativePart) but either they gave me something I didn't want or null.
Just get the base URL from the servlet context, then share it with the resources and add the resource path if needed.
MyServlet.init():
String contextPath = getServletContext().getContextPath();
getApplication().getContext().getAttributes().put("contextPath", contextPath);
MyResource:
String contextPath = (String) getContext().getAttributes().get("contextPath");
request.getRootRef() or request.getHostRef()?
The servlet's context is accessible from the restlet's application:
org.restlet.Application app = org.restlet.Application.getCurrent();
javax.servlet.ServletContext ctx = (javax.servlet.ServletContext) app.getContext().getAttributes().get("org.restlet.ext.servlet.ServletContext");
String path = ctx.getResource("/").toString(); // the path passed to getResource() must begin with "/"

Grails - store sql that will be used by services

I am writing a Grails application that will mostly be using the springws web services plugin with endpoints backed by services. The services will retrieve data from a variety of back end databases (i.e., not via domain classes and GORM). I would like to store the sql that my services will be using to fetch the data for the web services in external files.
I'm looking for suggestions on:
Where is the best place to keep the files (I'd like to put them somewhere obvious like grails-app/sql) and what is the best format (XML, ConfigSlurper, etc.)?
The best way to abstract the retrieval of the SQL text, so the services that execute the SQL will not need to know where or how it is fetched; services will just provide a SQL id and get the SQL back.
I was working on a project recently where I needed to do something similar. I created the following directory to store the sql files:
./grails-app/conf/sql
For example there is a file ./grails-app/conf/sql/hr/FIND_PERSON_BY_ID.sql that has something like the following:
select a.id
     , a.first_name
     , a.last_name
  from person a
 where a.id = ?
I created a SqlCatalogService class that would load all files in that directory (and subdirectories) and store the filenames (minus extension) and file text in a Map. The service has a get(id) method that returns the sql text that is cached in the Map. Since files/directories stored in grails-app/conf are placed in the classpath, the SqlCatalogService uses the following code to read in the files:
....
....
Map<String,String> sqlCache = [:]
....
....

void loadSqlCache() {
    try {
        loadSqlCacheFromDirectory(new File(this.class.getResource("/sql/").getFile()))
    } catch (Exception ex) {
        log.error(ex)
    }
}

void loadSqlCacheFromDirectory(File directory) {
    log.info "Loading SQL cache from disk using base directory ${directory.name}"
    synchronized(sqlCache) {
        if(sqlCache.size() == 0) {
            try {
                directory.eachFileRecurse { sqlFile ->
                    if(sqlFile.isFile() && sqlFile.name.toUpperCase().endsWith(".SQL")) {
                        def sqlKey = sqlFile.name.toUpperCase()[0..-5]
                        sqlCache[sqlKey] = sqlFile.text
                        log.debug "added SQL [${sqlKey}] to cache"
                    }
                }
            } catch (Exception ex) {
                log.error(ex)
            }
        } else {
            log.warn "request to load sql cache and cache not empty: size [${sqlCache.size()}]"
        }
    }
}

String get(String sqlId) {
    def sqlKey = sqlId?.toUpperCase()
    log.debug "SQL Id requested: ${sqlKey}"
    if(!sqlCache[sqlKey]) {
        log.debug "SQL [${sqlKey}] not found in cache, loading cache from disk"
        loadSqlCache()
    }
    return sqlCache[sqlKey]
}
Services that use various datasources use the SqlCatalogService to retrieve the sql by calling the get(id) method:
class PersonService {
    def hrDataSource
    def sqlCatalogService

    private static final String SQL_FIND_PERSON_BY_ID = "FIND_PERSON_BY_ID"

    Person findPersonById(String personId) {
        try {
            def sql = new groovy.sql.Sql(hrDataSource)
            def row = sql.firstRow(sqlCatalogService.get(SQL_FIND_PERSON_BY_ID), [personId])
            row ? new Person(row) : null
        } catch (Exception ex) {
            log.error ex.message, ex
            throw ex
        }
    }
}
For now we only have a few SQL statements, so storing all the text in a Map is not an issue. If you have lots of SQL files to store, you may need to think about using something like Ehcache and defining an eviction strategy (i.e., least recently used or least frequently used), only storing the most used in memory and leaving the rest on disk until needed.
Before doing this I thought about using GORM and storing the SQL text in the database. But I decided that having the SQL in files made it easier to develop with, since we could pretty much save the SQL to file directly from our SQL tool (replacing hard-coded params with question marks) and are able to let our revision control system track the changes. I'm not saying the above service is the most efficient or correct way to handle this, but it's worked so far for our needs.
Have you considered using Grails GORM and an HSQLDB database to store the SQL you want executed? You could then put in a record for each service containing that service's SQL and retrieve it using normal Grails GORM functions. You could generate a default set of controllers and views that would allow you to edit the SQL.
If you want to store the SQL in external files, you can create a subdirectory in the web-app directory called sql, then store your SQL statements as text files. You could create a class that takes a service name, loads the associated text file containing the SQL and returns the contents of that file. Without knowing how complex your SQL will be, I can't say what the best format would be. If you're dealing with normal select statements with no parameter substitution, plain text would be best. If you're dealing with more complex SQL with substitutions and multiple queries, you may want to use XML.