I have been asked to provide a WCF service that allows a blob (potentially 1 GB) to be downloaded in chunks, as a byte[] starting at a given offset, for consumption by a Silverlight application. Essentially, the operation takes a parameter for the byte offset and another for the maximum number of bytes to return; nothing complex, I think.
The code I have so far is:
[OperationContract]
public byte[] Download(String url, int blobOffset, int bufferSize)
{
    var blob = new CloudBlob(url);
    using (var blobStream = blob.OpenRead())
    {
        var buffer = new byte[bufferSize];
        blobStream.Seek(blobOffset, SeekOrigin.Begin);
        int numBytesRead = blobStream.Read(buffer, 0, bufferSize);
        if (numBytesRead != bufferSize)
        {
            var trimmedBuffer = new byte[numBytesRead];
            Array.Copy(buffer, trimmedBuffer, numBytesRead);
            return trimmedBuffer;
        }
        return buffer;
    }
}
I have tested this (albeit with relatively small files, under 2 MB) and it does work, but my questions are:
Can someone suggest improvements to the code?
Is there a better approach given the requirement?
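For context, the client would drive this by looping until a short chunk comes back. A minimal sketch of that shape (the proxy name client and the chunk size are hypothetical, and a real Silverlight proxy would use the async pattern; this just shows the synchronous outline of the protocol):

    // Hypothetical client-side loop for the Download operation above;
    // `client` stands in for the generated WCF proxy.
    int offset = 0;
    const int chunkSize = 512 * 1024; // 512 KB per round trip
    using (var output = new MemoryStream())
    {
        while (true)
        {
            byte[] chunk = client.Download(url, offset, chunkSize);
            output.Write(chunk, 0, chunk.Length);
            offset += chunk.Length;
            if (chunk.Length < chunkSize)
                break; // a short read means we reached the end of the blob
        }
    }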
One approach that has worked for me is to read the blob in batches, halving the batch size and retrying whenever a read fails:
using (BlobStream blobStream = blob.OpenRead())
{
    bool getSuccess = false;
    int getTries = 0;
    rawBytes = new byte[blobStream.Length];
    blobStream.Seek(0, SeekOrigin.Begin);
    int blockSize = 4194304; // Start at 4 MB per batch
    int index = 0;
    int documentSize = rawBytes.Length;
    while (getTries <= 10 && !getSuccess)
    {
        try
        {
            int batchSize = blockSize;
            while (index < documentSize)
            {
                if ((index + batchSize) > documentSize)
                    batchSize = documentSize - index;
                // Read may return fewer bytes than requested, so advance
                // by the count actually read instead of assuming batchSize.
                int bytesRead = blobStream.Read(rawBytes, index, batchSize);
                if (bytesRead == 0)
                    break; // end of stream
                index += bytesRead;
            }
            getSuccess = true;
        }
        catch (Exception)
        {
            if (getTries > 9)
                throw; // rethrow without resetting the stack trace
            blockSize = blockSize / 2; // Reduce by half for each attempt
            blobStream.Seek(index, SeekOrigin.Begin); // resync stream position with index
        }
        finally
        {
            getTries++;
        }
    }
}
You could return the blob as a stream instead of a byte array. There is a code sample in a related question here: Returning Azure BLOB from WCF service as a Stream - Do we need to close it?
Note there are some restrictions on which bindings you can use when you return a stream.
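For illustration, a minimal sketch of what the stream-returning operation could look like (assuming the same CloudBlob API as in the question; whether and when WCF disposes the returned stream is exactly what the linked question covers):

    [OperationContract]
    public Stream Download(String url)
    {
        var blob = new CloudBlob(url);
        // WCF pulls from this stream to write the response and disposes
        // it once the message completes, so no explicit close is needed here.
        return blob.OpenRead();
    }

With this shape you would typically set transferMode to Streamed (or StreamedResponse) on the binding, so the potentially 1 GB blob is never buffered in server memory.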
On certain images, when I call:
PdfImageObject pimg = new PdfImageObject(stream);
Image bmp = pimg.GetDrawingImage();
The Image that is returned is twisted. I've seen this before and it usually has to do with byte alignment but I'm not sure how to get around this.
The /DecodeParms for this object are /EndOfLine true /K 0 /Columns 3300.
I have tried using GetStreamBytesRaw() with BitMiracle.LibTiff, and with it I can get the data formatted properly, although the image is rotated. I'd prefer for GetDrawingImage() to decode the data properly if possible, assuming that is the problem.
I could provide the PDF via email if requested.
For anyone else who runs across this scenario, here is my solution. The key to this puzzle was understanding the /K values: /K 0 is G3, /K -1 (or anything less than 0) is G4, and /K 1 (or anything greater than 0) is G3 2D.
The twisting happens when you try to make G3-compressed data fit into a G4 image, which appears to be what iTextSharp is doing; it definitely does not work with how I have iTextSharp implemented in my project. I confess that I cannot decipher all the decoding iTextSharp does, so it could be something I'm missing too.
EndOfLine didn't have any part in this puzzle, but I still think putting line feeds in binary data is a strange practice.
99% of this code came from BitMiracle.LibTiff.Net - Thank you.
int nK = 0; // Default to 0, like the PDF spec
PdfObject oDecodeParms = stream.Get(PdfName.DECODEPARMS);
if (oDecodeParms is PdfDictionary)
{
    PdfObject oK0 = ((PdfDictionary)oDecodeParms).Get(PdfName.K);
    if (oK0 != null)
        nK = ((PdfNumber)oK0).IntValue;
}
using (MemoryStream ms = new MemoryStream())
{
    using (Tiff tiff = Tiff.ClientOpen("custom", "w", ms, new TiffStream()))
    {
        tiff.SetField(TiffTag.IMAGEWIDTH, width);
        tiff.SetField(TiffTag.IMAGELENGTH, height);
        if (nK == 0 || nK > 0) // 0 = Group 3, > 0 = Group 3 2D
            tiff.SetField(TiffTag.COMPRESSION, Compression.CCITTFAX3);
        else if (nK < 0) // < 0 = Group 4
            tiff.SetField(TiffTag.COMPRESSION, Compression.CCITTFAX4);
        tiff.SetField(TiffTag.BITSPERSAMPLE, bpc);
        tiff.SetField(TiffTag.SAMPLESPERPIXEL, 1);
        // Save the TIFF using the raw bytes retrieved from the PDF.
        tiff.WriteRawStrip(0, rawBytes, rawBytes.Length);
        tiff.Close();
    }
    // Reopen the in-memory TIFF so LibTiff can decode the fax data
    // scanline by scanline.
    TiffStreamForBytes byteStream = new TiffStreamForBytes(ms.ToArray());
    using (Tiff input = Tiff.ClientOpen("bytes", "r", null, byteStream))
    {
        int stride = input.ScanlineSize();
        Bitmap result = new Bitmap(width, height, pixelFormat);
        ColorPalette palette = result.Palette;
        palette.Entries[0] = System.Drawing.Color.White;
        palette.Entries[1] = System.Drawing.Color.Black;
        result.Palette = palette;
        for (int i = 0; i < height; i++)
        {
            Rectangle imgRect = new Rectangle(0, i, width, 1);
            BitmapData imgData = result.LockBits(imgRect, ImageLockMode.WriteOnly, pixelFormat);
            byte[] buffer = new byte[stride];
            input.ReadScanline(buffer, i);
            System.Runtime.InteropServices.Marshal.Copy(buffer, 0, imgData.Scan0, buffer.Length);
            result.UnlockBits(imgData);
        }
    }
}
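And here is the TiffStreamForBytes helper used to reopen the in-memory TIFF: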
/// <summary>
/// Custom read-only stream for a byte buffer that can be used
/// with the Tiff.ClientOpen method.
/// </summary>
public class TiffStreamForBytes : TiffStream
{
    private byte[] m_bytes;
    private int m_position;

    public TiffStreamForBytes(byte[] bytes)
    {
        m_bytes = bytes;
        m_position = 0;
    }

    public override int Read(object clientData, byte[] buffer, int offset, int count)
    {
        if ((m_position + count) > m_bytes.Length)
            return -1;
        Buffer.BlockCopy(m_bytes, m_position, buffer, offset, count);
        m_position += count;
        return count;
    }

    public override void Write(object clientData, byte[] buffer, int offset, int count)
    {
        throw new InvalidOperationException("This stream is read-only");
    }

    public override long Seek(object clientData, long offset, SeekOrigin origin)
    {
        switch (origin)
        {
            case SeekOrigin.Begin:
                if (offset > m_bytes.Length)
                    return -1;
                m_position = (int)offset;
                return m_position;
            case SeekOrigin.Current:
                if ((offset + m_position) > m_bytes.Length)
                    return -1;
                m_position += (int)offset;
                return m_position;
            case SeekOrigin.End:
                if ((m_bytes.Length - offset) < 0)
                    return -1;
                m_position = (int)(m_bytes.Length - offset);
                return m_position;
        }
        return -1;
    }

    public override void Close(object clientData)
    {
        // nothing to do
    }

    public override long Size(object clientData)
    {
        return m_bytes.Length;
    }
}
I recently tried the DropNet API to connect to my Dropbox app in my C# project. Everything works fine, but I want to upload large files through a chunked-upload request.
public void FileUpload()
{
    string file = @"E:\threading.pdf";
    int chunkSize = 1 * 1024 * 1024;
    var buffer = new byte[chunkSize];
    int bytesRead;
    int chunkCount = 0;
    ChunkedUpload chunkupload = null;
    using (var fileStream = new FileStream(file, FileMode.Open, FileAccess.Read))
    {
        while ((bytesRead = fileStream.Read(buffer, 0, buffer.Length)) > 0)
        {
            chunkCount++;
            if (chunkCount == 1)
            {
                chunkupload = client.StartChunkedUpload(buffer);
            }
            else
            {
                chunkupload = client.AppendChunkedUpload(chunkupload, buffer);
            }
        }
    }
    var metadata = client.CommitChunkedUpload(chunkupload, "/threading.pdf", true);
}
The file size is 1.6 MB. When I checked, the first chunk contained 1 MB and the second one 0.6 MB, but only 13 bytes of data get uploaded for each chunk. Could anyone point out the problem here?
Update RestSharp to 104.4.0 to resolve this issue.
There's a problem with the version of RestSharp that is used by DropNet.
Each uploaded chunk contains exactly 13 bytes: the string
'System.Byte[]'
The problem is that the array of bytes is converted to a string by the 'AddParameter' method.
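That type name happens to be exactly 13 characters long, which is where the 13 bytes come from. A quick sketch of the effect:

    byte[] buffer = new byte[1024 * 1024];
    // Calling ToString() on an array returns its type name rather
    // than its contents, and that name is exactly 13 characters:
    string uploaded = buffer.ToString(); // "System.Byte[]"
    Console.WriteLine(uploaded.Length);  // 13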
I didn't dig much deeper; I'm trying the UploadFile method instead.
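If chunking isn't strictly required for a file this size, the one-shot upload looks something like this (a sketch from memory of DropNet's API; verify the signature against your version):

    // Hypothetical fallback using DropNet's simple upload; for a
    // 1.6 MB file this avoids the chunked path entirely.
    byte[] content = File.ReadAllBytes(@"E:\threading.pdf");
    var metadata = client.UploadFile("/", "threading.pdf", content);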
I have a REST service that returns large files via a Stream as the return type. However, I need a way to know how much of the file was transferred, even if the download is canceled by the client. What would be the best way to accomplish something like this?
So far I've come up with the following:
public Message GetFile(string path)
{
    UsageLog log = new UsageLog();
    return WebOperationContext.Current.CreateStreamResponse((stream) =>
    {
        using (FileStream fs = new FileStream(path, FileMode.Open, FileAccess.Read))
        {
            byte[] buffer = new byte[_BufferSize];
            int bytesRead = 0;
            long totalBytesRead = 0;
            while (true)
            {
                bytesRead = fs.Read(buffer, 0, _BufferSize);
                if (bytesRead == 0)
                {
                    break;
                }
                try
                {
                    stream.Write(buffer, 0, bytesRead);
                }
                catch (CommunicationException)
                {
                    // The client canceled the download; stop writing
                    // but keep totalBytesRead for the log.
                    break;
                }
                totalBytesRead += bytesRead;
            }
            log.TransferCompletedUtc = DateTime.UtcNow;
            log.BytesTransferred = totalBytesRead;
        }
    },
    "application/octet-stream");
}
I'm interested in hearing any other solutions to accomplishing something like this.
I've found the following code in my boss's project:
Dim strBuilder As New System.Text.StringBuilder("", 1000000)
Before I call him out on it, I'd like to confirm: does this line actually set aside a megabyte (or two megabytes, in Unicode?) of memory for that one StringBuilder?
That initializes a Char() (a char array) of length 1000000.
So the actual size needed in memory is 2000000 bytes, roughly 2 MB, since a .NET char is a UTF-16 code unit and needs 2 bytes.
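A quick sanity check of that arithmetic (C# here for brevity; the VB equivalent behaves the same):

    Console.WriteLine(sizeof(char)); // 2 bytes per UTF-16 code unit
    var sb = new System.Text.StringBuilder("", 1000000);
    Console.WriteLine(sb.Capacity);  // 1000000 chars, i.e. ~2,000,000 bytes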
Edit: Just in case your boss doesn't believe it, here is the relevant constructor as decompiled with ILSpy:
// System.Text.StringBuilder
[SecuritySafeCritical]
public unsafe StringBuilder(string value, int startIndex, int length, int capacity)
{
    if (capacity < 0)
    {
        throw new ArgumentOutOfRangeException("capacity", Environment.GetResourceString("ArgumentOutOfRange_MustBePositive", new object[]
        {
            "capacity"
        }));
    }
    if (length < 0)
    {
        throw new ArgumentOutOfRangeException("length", Environment.GetResourceString("ArgumentOutOfRange_MustBeNonNegNum", new object[]
        {
            "length"
        }));
    }
    if (startIndex < 0)
    {
        throw new ArgumentOutOfRangeException("startIndex", Environment.GetResourceString("ArgumentOutOfRange_StartIndex"));
    }
    if (value == null)
    {
        value = string.Empty;
    }
    if (startIndex > value.Length - length)
    {
        throw new ArgumentOutOfRangeException("length", Environment.GetResourceString("ArgumentOutOfRange_IndexLength"));
    }
    this.m_MaxCapacity = 2147483647;
    if (capacity == 0)
    {
        capacity = 16;
    }
    if (capacity < length)
    {
        capacity = length;
    }
    // This is the allocation in question: one char per unit of capacity.
    this.m_ChunkChars = new char[capacity];
    this.m_ChunkLength = length;
    fixed (char* ptr = value)
    {
        StringBuilder.ThreadSafeCopy(ptr + (IntPtr)startIndex, this.m_ChunkChars, 0, length);
    }
}
You could try calling GC.GetTotalMemory() before and after that allocation, and see if it increases. Note: this is not a good, scientific way to do this, but may prove your point.
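Something along these lines (a rough sketch in C#; GetTotalMemory is only approximate even when it forces a collection):

    long before = GC.GetTotalMemory(true); // force a collection for a cleaner baseline
    var sb = new System.Text.StringBuilder("", 1000000);
    long after = GC.GetTotalMemory(false);
    Console.WriteLine(after - before);     // roughly 2,000,000 bytes
    GC.KeepAlive(sb);                      // keep the builder alive past the measurement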
How do I load a 6 MB binary file (.bin) into a varbinary(max) column of a SQL Server 2005 database using ADO in a VC++ application?
This is the code I am using to load the file (I previously used it to load a .bmp file):
BOOL CSaveView::PutECGInDB(CString strFilePath, FieldPtr pFileData)
{
    // Open the file
    CFile fileImage;
    CFileStatus fileStatus;
    fileImage.Open(strFilePath, CFile::modeRead);
    fileImage.GetStatus(fileStatus);

    // Allocate memory for the data
    ULONG nBytes = (ULONG)fileStatus.m_size;
    HGLOBAL hGlobal = GlobalAlloc(GPTR, nBytes);
    LPVOID lpData = GlobalLock(hGlobal);

    // Read the file contents into the buffer
    fileImage.Read(lpData, nBytes);

    HRESULT hr;
    _variant_t varChunk;
    long lngOffset = 0;
    UCHAR chData;
    SAFEARRAY FAR *psa = NULL;
    SAFEARRAYBOUND rgsabound[1];
    try
    {
        // Create a safe array to store the bytes
        rgsabound[0].lLbound = 0;
        rgsabound[0].cElements = nBytes;
        psa = SafeArrayCreate(VT_UI1, 1, rgsabound);
        while (lngOffset < (long)nBytes)
        {
            chData = ((UCHAR*)lpData)[lngOffset];
            hr = SafeArrayPutElement(psa, &lngOffset, &chData);
            if (hr != S_OK)
            {
                return false;
            }
            lngOffset++;
        }
        lngOffset = 0;

        // Assign the safe array to a variant
        varChunk.vt = VT_ARRAY | VT_UI1;
        varChunk.parray = psa;
        hr = pFileData->AppendChunk(varChunk);
        if (hr != S_OK)
        {
            return false;
        }
    }
    catch (_com_error &e)
    {
        // Get info from the _com_error
        _bstr_t bstrSource(e.Source());
        _bstr_t bstrDescription(e.Description());
        _bstr_t bstrErrorMessage(e.ErrorMessage());
        _bstr_t bstrErrorCode(e.Error());
        TRACE("Exception thrown for classes generated by #import");
        TRACE("\tCode = %08lx\n", (LPCSTR)bstrErrorCode);
        TRACE("\tCode Meaning = %s\n", (LPCSTR)bstrErrorMessage);
        TRACE("\tSource = %s\n", (LPCSTR)bstrSource);
        TRACE("\tDescription = %s\n", (LPCSTR)bstrDescription);
    }
    catch (...)
    {
        TRACE("*** Unhandled exception ***");
    }

    // Free the memory (GlobalUnlock takes the handle, and the
    // GPTR allocation also needs a matching GlobalFree)
    GlobalUnlock(hGlobal);
    GlobalFree(hGlobal);
    return true;
}
But when I read the same file back using the GetChunk function, it gives me all 0s, although the size I get back matches the file that was uploaded.