Azure storage: Uploaded files with size zero bytes

When I upload an image file to a blob, the upload apparently succeeds (no errors). But when I look at the blob in Cloud Storage Studio, the file is there with a size of 0 (zero) bytes.
The following is the code that I am using:
// These two methods belong to the ContentService class used to upload
// files to storage.
public void SetContent(HttpPostedFileBase file, string filename, bool overwrite)
{
    CloudBlobContainer blobContainer = GetContainer();
    var blob = blobContainer.GetBlobReference(filename);
    if (file != null)
    {
        blob.Properties.ContentType = file.ContentType;
        blob.UploadFromStream(file.InputStream);
    }
    else
    {
        // No file posted: store a one-byte placeholder instead.
        blob.Properties.ContentType = "application/octet-stream";
        blob.UploadByteArray(new byte[1]);
    }
}
public string UploadFile(HttpPostedFileBase file, string uploadPath)
{
    if (file.ContentLength == 0)
    {
        return null;
    }
    // Prefix a timestamp to the client file name (stripped of any
    // client-side path) to keep blob names unique.
    string filename;
    int indexBar = file.FileName.LastIndexOf('\\');
    if (indexBar > -1)
    {
        filename = DateTime.UtcNow.Ticks + file.FileName.Substring(indexBar + 1);
    }
    else
    {
        filename = DateTime.UtcNow.Ticks + file.FileName;
    }
    ContentService.Instance.SetContent(file, Helper.CombinePath(uploadPath, filename), true);
    return filename;
}
// The above code is called by this code.
HttpPostedFileBase newFile = Request.Files["newFile"] as HttpPostedFileBase;
ContentService service = new ContentService();
blog.Image = service.UploadFile(newFile, string.Format("{0}{1}", Constants.Paths.BlogImages, blog.RowKey));
Before the image file is uploaded to storage, the InputStream property of the HttpPostedFileBase appears to be fine (the size of the image corresponds to what is expected, and no exceptions are thrown).
And the really strange thing is that this works perfectly in other cases (uploading PowerPoint files or even other images from the Worker role). The code that calls SetContent seems to be exactly the same, and the call clearly reaches the service, since a new file of zero bytes is created at the correct location.
Does anyone have any suggestions? I have debugged this code dozens of times and I cannot see the problem. Any suggestions are welcome!
Thanks

The Position property of the InputStream of the HttpPostedFileBase had the same value as the Length property (probably because the stream had already been read for a previous file; silly of me, I think!).
All I had to do was set the Position property back to 0 (zero)!
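A minimal sketch of that fix inside SetContent, assuming the stream supports seeking (the rewind is the only new code):
// The stream may already have been read, which leaves Position == Length.
// Rewind it so the upload starts from the first byte.
if (file.InputStream.CanSeek)
{
    file.InputStream.Position = 0;
}
blob.Properties.ContentType = file.ContentType;
blob.UploadFromStream(file.InputStream);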
I hope this helps somebody in the future.

Thanks Fabio for bringing this up and answering your own question. I just want to add some code to what you said. Your suggestion worked perfectly for me.
var memoryStream = new MemoryStream();
// "upload" is the object returned by Fine Uploader.
upload.InputStream.CopyTo(memoryStream);
// After copying the contents to the stream, seek its position
// back to the beginning.
memoryStream.Seek(0, SeekOrigin.Begin);
And now you are ready to upload memoryStream using:
blockBlob.UploadFromStream(memoryStream);
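Note that CopyTo leaves the MemoryStream positioned at its end (exactly the condition that produced the zero-byte blobs above), so the Seek back to the beginning is what makes the upload work.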

Related

UWP App: `Image.SetSource` hangs computer on `StorageFiles` outside of `KnownPlaces`

This one is hard to explain, so I give you some actual and pseudo code:
try
{
    // If source (a string) points towards a file that is available with
    // StorageFile.GetFileFromPathAsync(), just open the file that way.
    // If that is not possible, use the path to look up an Access Token
    // and use the file from the StorageFolder gotten via that token.
    StorageFile file = await GetFileFromAccessList(source);
    if (file != null)
    {
        bitmap = new BitmapImage();
        using (IRandomAccessStream fileStream = await file.OpenAsync(FileAccessMode.Read))
        {
            await bitmap.SetSourceAsync(fileStream);
        }
    }
}
catch (Exception e)
{
    string s = e.Message;
    bitmap = null;
}
with the following method:
public async Task<StorageFile> GetFileFromAccessList(string path)
{
    StorageFile result = null;
    if (String.IsNullOrEmpty(path) == false)
    {
        try
        {
            // Try to access the file directly...
            result = await StorageFile.GetFileFromPathAsync(path);
        }
        catch (Exception)
        {
            result = null;
            try
            {
                // See if the folder this thing is in is in the access list...
                StorageFolder folder = await GetFolderFromAccessList(Path.GetDirectoryName(path));
                // If there is a folder, try that.
                if (folder != null)
                    result = await folder.GetFileAsync(Path.GetFileName(path));
            }
            catch (Exception)
            {
                result = null;
            }
        }
    }
    return result;
}
The resulting bitmap is used as the ImageSource in Image.SetSource().
Now what kills me: this call works perfectly, fast and rock solid, for files stored within the app's folder or KnownFolders. So it works like a charm whenever I don't need an Access Token, i.e. whenever Windows.Storage.AccessCache.StorageApplicationPermissions.FutureAccessList.GetFolderAsync(token) is not involved.
However, it breaks when I have to use an access token, though not every time: the code does not break immediately, but only when I try to open more than 5-7 source files at the same time.
To repeat: this works if I display 5-7 images. If I try to open more, it freezes the PC. No such problem occurs when I open StorageFiles without tokens.
I can access such files using normal file operations. I can create bitmaps from them, process them, the works.
I just cannot make them the source of a XAML Image.
Any thoughts?
Ah, clarity.
So it turns out that using the DataContextChanged event to refresh the bitmap through Image.SetSource() is the murder weapon.
The solution: declare a property of type BitmapSource, bind Image.Source to that property, and update the property with the loaded bitmap on Image.Loaded and Image.DataContextChanged. It now works stably and fast in all conditions I was able to test.
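A minimal sketch of that pattern (the view model class, property, and method names here are assumptions for illustration, not the original code):
using System.ComponentModel;
using System.Threading.Tasks;
using Windows.Storage;
using Windows.Storage.Streams;
using Windows.UI.Xaml.Media.Imaging;

// Hypothetical view model: the XAML binds Image.Source to the Source
// property instead of having SetSource() called on the Image directly.
public class ImageItem : INotifyPropertyChanged
{
    private BitmapSource source;
    public BitmapSource Source
    {
        get { return source; }
        set
        {
            source = value;
            PropertyChanged?.Invoke(this, new PropertyChangedEventArgs(nameof(Source)));
        }
    }

    public event PropertyChangedEventHandler PropertyChanged;

    // Load the bitmap, then push it to the Image through the binding.
    public async Task LoadAsync(StorageFile file)
    {
        var bitmap = new BitmapImage();
        using (IRandomAccessStream stream = await file.OpenAsync(FileAccessMode.Read))
        {
            await bitmap.SetSourceAsync(stream);
        }
        Source = bitmap;
    }
}
The XAML side is then simply <Image Source="{Binding Source}" />.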

Read a file from the cache in CEFSharp

I need to navigate to a web site that ultimately contains a .pdf file and I want to save that file locally. I am using CEFSharp to do this. The nature of this site is such that once the .pdf appears in the browser, it cannot be accessed again. For this reason, I was wondering if once you have a .pdf displayed in the browser, is there a way to access the source for that file in the cache?
I have tried implementing IDownloadHandler and that works, but you have to click the save button on the embedded .pdf. I am trying to get around that.
OK, here is how I got it to work. There is a function in CEFSharp that allows you to filter an incoming web response. Consequently, this gives you complete access to the incoming stream. My solution is a little on the dirty side and not particularly efficient, but it works for my situation. If anyone sees a better way, I am open for suggestions. There are two things I have to assume in order for my code to work.
GetResourceResponseFilter is called every time a new page is downloaded.
The PDF is the last thing to be downloaded during the navigation process.
Start with the CEF Minimal Example found here: https://github.com/cefsharp/CefSharp.MinimalExample
I used the WinForms version. Implement IRequestHandler and IResponseFilter in the form definition as follows:
public partial class BrowserForm : Form, IRequestHandler, IResponseFilter
{
    public readonly ChromiumWebBrowser browser;

    public BrowserForm(string url)
    {
        InitializeComponent();
        browser = new ChromiumWebBrowser(url)
        {
            Dock = DockStyle.Fill,
        };
        toolStripContainer.ContentPanel.Controls.Add(browser);
        browser.BrowserSettings.FileAccessFromFileUrls = CefState.Enabled;
        browser.BrowserSettings.UniversalAccessFromFileUrls = CefState.Enabled;
        browser.BrowserSettings.WebSecurity = CefState.Disabled;
        browser.BrowserSettings.Javascript = CefState.Enabled;
        browser.LoadingStateChanged += OnLoadingStateChanged;
        browser.ConsoleMessage += OnBrowserConsoleMessage;
        browser.StatusMessage += OnBrowserStatusMessage;
        browser.TitleChanged += OnBrowserTitleChanged;
        browser.AddressChanged += OnBrowserAddressChanged;
        browser.FrameLoadEnd += browser_FrameLoadEnd;
        browser.LifeSpanHandler = this;
        browser.RequestHandler = this;
    }
    // ... (the rest of the form continues below)
The declaration and the last two lines are the most important for this explanation. I implemented the IRequestHandler using the template found here:
https://github.com/cefsharp/CefSharp/blob/master/CefSharp.Example/RequestHandler.cs
I changed everything to what it recommends as default, except for GetResourceResponseFilter, which I implemented as follows:
IResponseFilter IRequestHandler.GetResourceResponseFilter(IWebBrowser browserControl, IBrowser browser, IFrame frame, IRequest request, IResponse response)
{
    if (request.Url.EndsWith(".pdf"))
        return this;
    return null;
}
I then implemented IResponseFilter as follows:
FilterStatus IResponseFilter.Filter(Stream dataIn, out long dataInRead, Stream dataOut, out long dataOutWritten)
{
    BinaryWriter sw;
    if (dataIn == null)
    {
        dataInRead = 0;
        dataOutWritten = 0;
        return FilterStatus.Done;
    }
    dataInRead = dataIn.Length;
    dataOutWritten = Math.Min(dataInRead, dataOut.Length);
    byte[] buffer = new byte[dataOutWritten];
    int bytesRead = dataIn.Read(buffer, 0, (int)dataOutWritten);
    // A segment that starts with %PDF is the beginning of a new PDF,
    // so discard whatever was written for the previous page.
    string s = System.Text.Encoding.UTF8.GetString(buffer);
    if (s.StartsWith("%PDF"))
        File.Delete(pdfFileName);
    // pdfFileName is (presumably) a string field on the form holding the target path.
    sw = new BinaryWriter(File.Open(pdfFileName, FileMode.Append));
    sw.Write(buffer);
    sw.Close();
    dataOut.Write(buffer, 0, bytesRead);
    return FilterStatus.Done;
}
bool IResponseFilter.InitFilter()
{
    return true;
}
What I found is that the PDF is actually downloaded twice when it is loaded. In any case, there might be header information and whatnot at the beginning of the page. When I get a stream segment that begins with %PDF, I know it is the beginning of a PDF, so I delete the file to discard any previous contents. Otherwise, I just keep appending each segment to the end of the file. Theoretically, the PDF file will be safe until you navigate to another PDF, but my recommendation is to do something with the file as soon as the page is loaded, just to be safe.
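A hedged sketch of that last recommendation, using the FrameLoadEnd event the constructor already subscribes to (the destination path is an assumption):
// Copy the captured PDF somewhere safe as soon as the main frame has
// loaded, before the next navigation can start overwriting the file.
private void browser_FrameLoadEnd(object sender, FrameLoadEndEventArgs e)
{
    if (e.Frame.IsMain && File.Exists(pdfFileName))
    {
        // Hypothetical destination; pick whatever suits your application.
        File.Copy(pdfFileName, pdfFileName + ".saved.pdf", true);
    }
}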

Windows Azure UploadFromStream No Longer Works After Porting to MVC4 - Pointers?

Updated my MVC3/.NET 4.5/Azure solution to MVC4.
My code for uploading an image to blob storage appears to fail every time in the upgraded MVC4 solution; the same code runs fine in my MVC3 solution. The code that does the uploading, in a DLL, has not changed.
I've uploaded the same image file in both the MVC3 and MVC4 solutions and inspected the stream, and it appears to be fine. In both instances I am running the code locally on my machine and my connections point to blob storage in the cloud.
Any pointers for debugging? Any known issues I should be aware of when upgrading to MVC4?
Here is my upload code:
public string AddImage(string pathName, string fileName, Stream image)
{
    var client = _storageAccount.CreateCloudBlobClient();
    client.RetryPolicy = RetryPolicies.Retry(3, TimeSpan.FromSeconds(5));
    var container = client.GetContainerReference(AzureStorageNames.ImagesBlobContainerName);
    image.Seek(0, SeekOrigin.Begin);
    var blob = container.GetBlobReference(Path.Combine(pathName, fileName));
    blob.Properties.ContentType = "image/jpeg";
    blob.UploadFromStream(image);
    return blob.Uri.ToString();
}
I managed to fix it. For some reason, reading the stream directly from the HttpPostedFileBase wasn't working; simply copying it into a new MemoryStream solved it.
My code:
public string StoreImage(string album, HttpPostedFileBase image)
{
    var blobStorage = storageAccount.CreateCloudBlobClient();
    // Note: real Azure container names must be all lowercase.
    var container = blobStorage.GetContainerReference("containerName");
    if (container.CreateIfNotExist())
    {
        // Configure the container for public access.
        var permissions = container.GetPermissions();
        permissions.PublicAccess = BlobContainerPublicAccessType.Container;
        container.SetPermissions(permissions);
    }
    string uniqueBlobName = string.Format("{0}{1}", Guid.NewGuid().ToString(), Path.GetExtension(image.FileName)).ToLowerInvariant();
    CloudBlockBlob blob = container.GetBlockBlobReference(uniqueBlobName);
    blob.Properties.ContentType = image.ContentType;
    image.InputStream.Position = 0;
    using (var imageStream = new MemoryStream())
    {
        // Buffer the posted file into a MemoryStream and rewind it
        // before uploading.
        image.InputStream.CopyTo(imageStream);
        imageStream.Position = 0;
        blob.UploadFromStream(imageStream);
    }
    return blob.Uri.ToString();
}
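This is very likely the same stream-position issue as in the first question above: buffering into a MemoryStream and resetting Position to 0 guarantees the upload starts from the first byte, regardless of where the framework left the posted file's stream.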

context path for file upload without HttpRequest in REST application

I am building a REST application. I want to upload a file and save it, for example, in /WEB-INF/resource/uploads.
How can I get the path to this directory? My controller looks like this:
@RequestMapping(value = "/admin/house/update", method = RequestMethod.POST)
public String updateHouse(House house, @RequestParam("file") MultipartFile file, Model model) {
    try {
        String fileName = null;
        InputStream inputStream = null;
        OutputStream outputStream = null;
        if (file.getSize() > 0) {
            inputStream = file.getInputStream();
            fileName = "D:/" + file.getOriginalFilename();
            outputStream = new FileOutputStream(fileName);
            int readBytes = 0;
            byte[] buffer = new byte[10000];
            while ((readBytes = inputStream.read(buffer, 0, 10000)) != -1) {
                outputStream.write(buffer, 0, readBytes);
            }
            outputStream.close();
            inputStream.close();
        }
    } catch (Exception ex) {
        ex.printStackTrace();
    }
    model.addAttribute("step", 3);
    this.houseDao.update(house);
    return "houseAdmin";
}
Second question: what is the best place to store uploaded user files?
/WEB-INF is a bad place to try to store file uploads. There's no guarantee that this is an actual directory on the disk, and even if it is, the appserver may forbid write access to it.
Where you should store your files depends on what you want to do with them, and what operating system you're running on. Just pick somewhere outside of the webapp itself, is my advice. Perhaps create a dedicated directory.
Also, the process of transferring the MultipartFile to another location is much simpler than you're making it out to be:
@RequestMapping(value = "/admin/house/update", method = RequestMethod.POST)
public String updateHouse(House house, @RequestParam("file") MultipartFile srcFile, Model model) throws IOException {
    File destFile = new File("/path/to/the/target/file");
    srcFile.transferTo(destFile); // easy!
    model.addAttribute("step", 3);
    this.houseDao.update(house);
    return "houseAdmin";
}
You shouldn't store files in /WEB-INF/resource/uploads. This directory is either inside your WAR (if packaged) or exploded somewhere inside servlet container. The first destination is read-only and the latter should not be used for user files.
There are usually two places considered when storing uploaded files:
Some dedicated folder. Make sure users cannot access this directory directly (e.g. anonymous FTP folder). Note that once your application runs on more than one machine you won't have access to this folder. So consider some form of network synchronization or a shared network drive.
Database. This is controversial since binary files tend to occupy a lot of space. But this approach is a bit simpler when distributing your application.

Calculate the size of a file in Isolated Storage

I can't seem to find a way to determine the size of a file in Isolated Storage besides opening up the file stream and calling the "Length" property. Is there a more efficient way of doing this?
Thanks
I found a bit of a hack to make it work. What you have to do is use reflection to get the fully qualified path to the file you want, then create a new FileInfo object:
//This is the private field name used for reflection
private const string IsolatedStoreRootDir = "m_RootDir";

//This method takes a file path relative to isolated storage
//and the current store
private static FileInfo GetFileInfo(string path, IsolatedStorageFile store)
{
    return new FileInfo(GetFullyQualifiedFileName(path, store));
}

//This gets the fully qualified path of the root isolated storage directory
//then appends the relative path to it.
private static string GetFullyQualifiedFileName(string path, IsolatedStorageFile store)
{
    return Path.Combine(store.GetType()
        .GetField(IsolatedStoreRootDir,
            System.Reflection.BindingFlags.NonPublic |
            System.Reflection.BindingFlags.Instance).GetValue(store).ToString(), path);
}

//Here's how it's used
static void Main(string[] args)
{
    var store = IsolatedStorageFile.GetUserStoreForAssembly();
    var length = GetFileInfo("TestFile.txt", store).Length;
}
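Bear in mind that this relies on a private field name (m_RootDir), which is an implementation detail that can change between .NET versions, so guard the reflection accordingly. The reflection-free alternative is the stream approach mentioned in the question: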
long Size = 0L;
using (IsolatedStorageFileStream stream = new IsolatedStorageFileStream(filePath, FileMode.Open, FileAccess.Read, isoFile))
{
    Size = stream.Length;
}