How do I set ContentType before uploading to an Azure blob container [duplicate] - azure-storage

This question already has answers here:
How upload blob in Azure Blob Storage with specified ContentType with .NET v12 SDK?
(2 answers)
Closed 2 years ago.
I'm using an ASP.NET Core Web API to upload images to Azure Storage.
I was able to successfully upload an image blob to Azure Storage (using the quickstart). However, the Content-Type property in Azure is set to application/octet-stream. The problem is that the public URL will not load in a browser with this content type, and I eventually plan to consume this URL/image in my website. Is there any way to specify the Content-Type as image/jpeg? I also tried the following code, but received the error 404 (The specified blob does not exist.) during the SetHttpHeaders call (the UploadBlob call that is currently commented out does work, but leaves the octet-stream content type).
BlobClient blobClient = containerClient.GetBlobClient(guids[index]);
using (var content = file.OpenReadStream())
{
    blobClient.Upload(content);
    blobClient.SetHttpHeaders(new BlobHttpHeaders() { ContentType = "image/jpeg" });
    //containerClient.UploadBlob(guids[index], content);
}
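In the v12 SDK, a sketch of what usually avoids the separate SetHttpHeaders round-trip is to pass the headers with the upload itself via BlobUploadOptions (containerClient, file, and guids are assumed from the snippet above):

```csharp
// Upload and set the Content-Type in a single call (Azure.Storage.Blobs v12).
BlobClient blobClient = containerClient.GetBlobClient(guids[index]);
using (var content = file.OpenReadStream())
{
    blobClient.Upload(content, new BlobUploadOptions
    {
        HttpHeaders = new BlobHttpHeaders { ContentType = "image/jpeg" }
    });
}
```

Because the headers travel with the upload, the blob is created with image/jpeg from the start and there is no second call to fail with a 404.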

If you don't specify a ContentType, the download works fine as long as the blob name includes the file extension. When creating the BlobClient reference, include the extension. Example download:
var fileName = $"{guids[index]}.jpg";
var pathStorage = Path.Combine(path, fileName);
BlobClient blobClient = containerClient.GetBlobClient(pathStorage);
BlobDownloadInfo download = await blobClient.DownloadAsync();
byte[] bytesContent;
using (var ms = new MemoryStream())
{
    await download.Content.CopyToAsync(ms);
    bytesContent = ms.ToArray();
}
return bytesContent;
Example upload:
var fileName = $"{guids[index]}.jpg";
var pathStorage = Path.Combine(path, fileName);
BlobClient blobClient = containerClient.GetBlobClient(pathStorage);
var stream = new MemoryStream(bytesContent);
var uploadInfo = await blobClient.UploadAsync(stream);

Related

Trouble getting a file from Azure container

I am trying to get a file from Azure container. I need to read its content.
The file has been uploaded to umbraco media, media are stored in our Azure container.
Its normal (umbraco) url would be like:
~/media/10890/filename.xls
I am trying to retrieve it like this:
var storageAccount = CloudStorageAccount.Parse(ConfigurationManager.AppSettings["strorageconnstring"]);
var blobClient = storageAccount.CreateCloudBlobClient();
var container = blobClient.GetContainerReference("storagemedia");
The thing is - I am not sure how I am supposed to retrieve a particular file? I tried:
1.
CloudBlobDirectory dira = container.GetDirectoryReference("10890"); // file folder within media
var list = dira.ListBlobs(useFlatBlobListing: true).ToList(); // Returns error saying "The requested URI does not represent any resource on the server."
However the 10890 folder within media storage exists and I can browse it with storage browser.
2.
CloudBlockBlob blobFile = container.GetBlockBlobReference("10890/filename.xls");
string text;
using (var memoryStream = new MemoryStream())
{
    blobFile.DownloadToStream(memoryStream); // Throws "The specifed resource name contains invalid characters." error
    var length = memoryStream.Length;
    text = System.Text.Encoding.UTF8.GetString(memoryStream.ToArray());
}
Any idea how to read the file? And what am I doing wrong?
Thank you Gaurav for providing your suggestion in the comment section.
Thank you nicornotto for confirming that the issue was resolved by correcting the container name in the statement below.
var container = blobClient.GetContainerReference("storagemedia");
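Once the container reference points at the correct container, a minimal sketch of reading the file into a string might look like this (the container name, blob path, and app-setting key are taken from the question and are assumptions for your account):

```csharp
var storageAccount = CloudStorageAccount.Parse(ConfigurationManager.AppSettings["strorageconnstring"]);
var blobClient = storageAccount.CreateCloudBlobClient();
var container = blobClient.GetContainerReference("storagemedia"); // must match the actual container name

// "folder" paths are just prefixes inside the blob name
CloudBlockBlob blobFile = container.GetBlockBlobReference("10890/filename.xls");

string text;
using (var memoryStream = new MemoryStream())
{
    blobFile.DownloadToStream(memoryStream);
    text = System.Text.Encoding.UTF8.GetString(memoryStream.ToArray());
}
```

Note that blob storage has no real directories; "10890/filename.xls" is a single blob name, which is why an exact container and blob name are all that is needed.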

Can you download a specific Block from an Azure Block Blob?

Is it possible to download a specific block from an Azure Block Blob if you know the Block Id?
Yes, you absolutely can. Here's an example of how to download the first block:
var storageAccount = CloudStorageAccount.Parse("DefaultEndpointsProtocol=https;AccountName=...;AccountKey=...;EndpointSuffix=core.windows.net");
CloudBlobClient serviceClient = storageAccount.CreateCloudBlobClient();
var container = serviceClient.GetContainerReference("containerName");
var blockBlob = container.GetBlockBlobReference("blobName");

var blocklist = await blockBlob.DownloadBlockListAsync();
var firstBlock = blocklist.First();

var memStream = new MemoryStream();
await blockBlob.DownloadRangeToStreamAsync(memStream, 0, firstBlock.Length);
memStream.Position = 0; // rewind before reading, otherwise the reader starts at the end of the stream

string contents;
using (var streamReader = new StreamReader(memStream))
{
    contents = await streamReader.ReadToEndAsync();
}
You will need a couple of packages from NuGet:
Microsoft.WindowsAzure.Storage
Microsoft.WindowsAzure.Storage.Blob
You could leverage the Microsoft Azure Storage SDK to get started with Azure Blob Storage quickly; the SDK is a wrapper around the Blob service REST API. The official Blob service REST API documentation says nothing about downloading a specific block by its Block Id. However, you can use Get Blob to download a byte range of your blob by specifying the offset and the length of the data to download.
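Building on that range approach, here is a hedged sketch of fetching an arbitrary block by its Block Id: since Get Blob only accepts an offset and length, the offset of the target block can be computed by summing the lengths of the blocks that precede it in the committed block list (blockBlob is assumed to be a CloudBlockBlob as in the earlier snippet, and targetBlockId a Base64 block id):

```csharp
// Locate a block by id and download exactly its bytes
// (classic Microsoft.WindowsAzure.Storage SDK).
var blockList = (await blockBlob.DownloadBlockListAsync()).ToList();

long offset = 0;
ListBlockItem target = null;
foreach (var block in blockList)
{
    if (block.Name == targetBlockId) { target = block; break; }
    offset += block.Length; // bytes of all blocks before the target
}

if (target != null)
{
    var ms = new MemoryStream();
    await blockBlob.DownloadRangeToStreamAsync(ms, offset, target.Length);
    ms.Position = 0;
    // ms now holds exactly the bytes of the requested block
}
```

This only works for committed blocks, since uncommitted blocks are not part of the blob's readable byte stream.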

PDF generation in mvc4 with itextsharp

I am working on PDF generation; it is successfully implemented using itextsharp.dll. It works fine in the local environment, even after publishing. We have our own server at another site, but the same code doesn't work there: the PDF is not generated and instead it gives the error 'The document has no pages.'
Initially I thought it was due to there being no data in the document, but it works locally with or without data in the document.
I implemented the following code to make a web request. Is there any problem with it?
try
{
    var myHttpWebRequest = (HttpWebRequest)WebRequest.Create(strPdfData + "?objpatId=" + patID);
    myHttpWebRequest.Timeout = 900000; // must be set before GetResponse, or it has no effect
    var response = myHttpWebRequest.GetResponse();
    using (var stream = response.GetResponseStream())
    using (var sr = new StreamReader(stream))
    {
        content = sr.ReadToEnd();
    }
}
Create a method in the controller:
[HttpGet]
public JsonResult GetFile()
{
    // WebClient.DownloadFile writes to disk and returns void;
    // DownloadString returns the response body as a string
    var json = new WebClient().DownloadString(address); // address = URL of the file
    // This code just converts the file to JSON; you can keep it in file format and send it to the view
    dynamic result = Newtonsoft.Json.JsonConvert.DeserializeObject(json);
    var oc = Newtonsoft.Json.JsonConvert.DeserializeObject<countdata[]>(Convert.ToString(result.countdata));
    return Json(oc, JsonRequestBehavior.AllowGet);
}
In the view, just call this action (Url.Action takes the action name first, then the controller name):
@Url.Action("GetFile", "genPDF")

PDF header signature not found error?

I am working on an ASP.NET MVC application with Azure. When I upload a PDF document to Azure Blob Storage, it uploads perfectly using the code below.
var filename = Document.FileName;
var contenttype = Document.ContentType;
int pdfocument = Request.ContentLength;

// uploading document into Azure blob
CloudStorageAccount storageAccount = CloudStorageAccount.Parse(CloudConfigurationManager.GetSetting("StorageConnectionString"));
CloudBlobClient blobClient = storageAccount.CreateCloudBlobClient();
CloudBlobContainer container = blobClient.GetContainerReference("containername");
container.CreateIfNotExists();
var permissions = container.GetPermissions();
permissions.PublicAccess = BlobContainerPublicAccessType.Blob;
container.SetPermissions(permissions);
string uniqueBlobName = string.Format(filename);
CloudBlockBlob blob = container.GetBlockBlobReference(uniqueBlobName);
blob.Properties.ContentType = ;
blob.UploadFromStream(Request.InputStream);
After uploading the document to the blob, I try to read the PDF document and get the error "PDF header signature not found." That code is:
byte[] pdf = new byte[pdfocument];
HttpContext.Request.InputStream.Read(pdf, 0, pdfocument);
PdfReader pdfReader = new PdfReader(pdf); //error getting here
And one more thing I forgot: if we comment out the above code (uploading the document into the Azure blob), I don't get that error.
In your combined use case you try to read Request.InputStream twice: once during upload and once later when reading it into your byte[] pdf. When you read it the first time, you read it to its end, so the second read most likely gets no data at all.
As you intend to read the PDF into memory anyway (the aforementioned byte[] pdf), in your combined use case you could
first read the data into that array
int pdfocument = Request.ContentLength;
byte[] pdf = new byte[pdfocument];
HttpContext.Request.InputStream.Read(pdf, 0, pdfocument);
then upload that array using CloudBlob.UploadByteArray
var storageAccount = CloudStorageAccount.DevelopmentStorageAccount(FromConfigurationSetting("Connection"));
[...]
CloudBlockBlob blob = container.GetBlockBlobReference(uniqueBlobName);
blob.Properties.ContentType = ; // <--- something missing in your code...
blob.UploadByteArray(pdf); // <--- upload byte[] instead of stream
and then feed your PDF reader
PdfReader pdfReader = new PdfReader(pdf);
This way you read the stream only once, and a byte[] should be re-usable...
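Putting the three steps above together, a hedged sketch of the combined flow (names as in the question; the ContentType value is filled in as an assumption, since it was missing in the original code):

```csharp
// 1. Read the request body into memory exactly once.
int pdfocument = Request.ContentLength;
byte[] pdf = new byte[pdfocument];
HttpContext.Request.InputStream.Read(pdf, 0, pdfocument);

// 2. Upload the buffered bytes instead of the (already consumed) stream.
CloudBlockBlob blob = container.GetBlockBlobReference(uniqueBlobName);
blob.Properties.ContentType = "application/pdf"; // assumed value
blob.UploadByteArray(pdf);

// 3. The same buffer can now feed the reader without a second stream read.
PdfReader pdfReader = new PdfReader(pdf);
```

The byte[] acts as the single source of truth, so both the upload and the PdfReader see the full document.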

Windows Azure UploadFromStream No Longer Works After Porting to MVC4 - Pointers?

I updated my MVC3/.NET 4.5/Azure solution to MVC4.
My code for uploading an image to blob storage fails every time in the upgraded MVC4 solution; the same code in my MVC3 solution works fine. The code that does the uploading, in a DLL, has not changed.
I've uploaded the same image file in both the MVC3 and MVC4 solutions and inspected the stream, and it appears to be fine. In both instances I am running the code locally on my machine and my connection points to blob storage in the cloud.
Any pointers for debugging? Any known issues that I may not be aware of when upgrading to MVC4?
Here is my upload code:
public string AddImage(string pathName, string fileName, Stream image)
{
    var client = _storageAccount.CreateCloudBlobClient();
    client.RetryPolicy = RetryPolicies.Retry(3, TimeSpan.FromSeconds(5));
    var container = client.GetContainerReference(AzureStorageNames.ImagesBlobContainerName);
    image.Seek(0, SeekOrigin.Begin);
    var blob = container.GetBlobReference(Path.Combine(pathName, fileName));
    blob.Properties.ContentType = "image/jpeg";
    blob.UploadFromStream(image);
    return blob.Uri.ToString();
}
I managed to fix it. For some reason, reading the stream directly from the HttpPostedFileBase wasn't working. Simply copying it into a new MemoryStream solved it.
My code:
public string StoreImage(string album, HttpPostedFileBase image)
{
    var blobStorage = storageAccount.CreateCloudBlobClient();
    var container = blobStorage.GetContainerReference("containerName");
    if (container.CreateIfNotExist())
    {
        // configure container for public access
        var permissions = container.GetPermissions();
        permissions.PublicAccess = BlobContainerPublicAccessType.Container;
        container.SetPermissions(permissions);
    }
    string uniqueBlobName = string.Format("{0}{1}", Guid.NewGuid().ToString(), Path.GetExtension(image.FileName)).ToLowerInvariant();
    CloudBlockBlob blob = container.GetBlockBlobReference(uniqueBlobName);
    blob.Properties.ContentType = image.ContentType;
    image.InputStream.Position = 0;
    using (var imageStream = new MemoryStream())
    {
        image.InputStream.CopyTo(imageStream);
        imageStream.Position = 0;
        blob.UploadFromStream(imageStream);
    }
    return blob.Uri.ToString();
}