Buffer not large enough for pixels - Kotlin

I am trying to get a bitmap from a byte array:
val bitmap_tmp =
Bitmap.createBitmap(height, width, Bitmap.Config.ARGB_8888)
val buffer = ByteBuffer.wrap(decryptedText)
bitmap_tmp.copyPixelsFromBuffer(buffer)
callback.bitmap(bitmap_tmp)
I am facing an error on the line below:
bitmap_tmp.copyPixelsFromBuffer(buffer)
The error reads as:
java.lang.RuntimeException: Buffer not large enough for pixels
I have tried different solutions found on Stack Overflow, like adding the following line before the one that errors, but it still crashes:
buffer.rewind()
However, the weird part is that the same code in a different place, for the same image [same image with same dimensions], works perfectly and I get the bitmap, but here it crashes.
How do I solve this?
Thanks in advance.

The error message makes it sound like the buffer you're copying from isn't large enough: it needs to contain at least as many bytes as necessary to overwrite every pixel in your bitmap (which has a fixed size and pixel config).
The documentation for the method doesn't make it clear, but here's the source for the Bitmap class, and in that method:
if (bufferBytes < bitmapBytes) {
throw new RuntimeException("Buffer not large enough for pixels");
}
So yeah, you can't partially overwrite the bitmap; you need enough data to fill it. And if you check the source, that depends on the buffer's current position and limit (it's not just its capacity, it's how much data remains to be read).
If it works elsewhere, I'm guessing decryptedText is different there, or maybe you're creating your Bitmap with a different Bitmap.Config (ARGB_8888, for example, requires 4 bytes per pixel).

Related

Does vkCmdCopyImageToBuffer work when source image uses VK_IMAGE_TILING_OPTIMAL?

I have read (after running into the limitation myself) that for copying data from the host to a VK_IMAGE_TILING_OPTIMAL VkImage, you're better off using a VkBuffer rather than a VkImage for the staging image to avoid restrictions on mipmap and layer counts. (Here and Here)
So, when it came to implementing a glReadPixels-esque piece of functionality to read the results of a render-to-texture back to the host, I thought that reading to a staging VkBuffer with vkCmdCopyImageToBuffer instead of using a staging VkImage would be a good idea.
However, I haven't been able to get it to work yet: I'm seeing most of the intended image, but with rectangular blocks of the image in incorrect locations and even some bits duplicated.
There is a good chance that I've messed up my synchronization or layout transitions somewhere and I'll continue to investigate that possibility.
However, I couldn't figure out from the spec whether using vkCmdCopyImageToBuffer with an image source using VK_IMAGE_TILING_OPTIMAL is actually supposed to 'un-tile' the image, or whether I should actually expect to receive a garbled implementation-defined image layout if I attempt such a thing.
So my question is: Does vkCmdCopyImageToBuffer with a VK_IMAGE_TILING_OPTIMAL source image fill the buffer with linearly tiled data or optimally (implementation defined) tiled data?
Section 18.4 describes the layout of the data in the source/destination buffers, relative to the image being copied from/to. This is outlined in the description of the VkBufferImageCopy struct. There is no language in this section which would permit different behavior from tiled images.
The specification even has pseudo code for how copies work (this is for non-block compressed images):
rowLength = region->bufferRowLength;
if (rowLength == 0)
    rowLength = region->imageExtent.width;

imageHeight = region->bufferImageHeight;
if (imageHeight == 0)
    imageHeight = region->imageExtent.height;

texelSize = <texel size taken from the src/dstImage>;

address of (x,y,z) = region->bufferOffset + (((z * imageHeight) + y) * rowLength + x) * texelSize;
where x, y, z range from (0,0,0) to (region->imageExtent.width, imageExtent.height, imageExtent.depth).
The x,y,z part is the location of the pixel in question from the image. Since this location is not dependent on the tiling of the image (as evidenced by the lack of anything stating that it would be), buffer/image copies will work equally on both kinds of tiling.
Also, do note that this specification is shared between vkCmdCopyImageToBuffer and vkCmdCopyBufferToImage. As such, if a copy works one way, it by necessity must work the other.

File reading and checksums in go. Difference between methods

Recently I've been creating checksums for files in Go. My code works with small and big files. I tried two methods: the first uses ioutil.ReadFile("filename"), the second works with os.Open("filename").
Examples:
The first function works with io/ioutil and is fine for small files. But when I try to copy a big file my RAM gets blasted: for a 1.5GB iso it uses 3GB of RAM.
func byteCopy(fileToCopy string) {
file, err := ioutil.ReadFile(fileToCopy) //1.5GB file
omg(err) //error handling function
ioutil.WriteFile("2.iso", file, 0777)
os.Remove("2.iso")
}
It's even worse when I want to create a checksum with crypto/sha512 and io/ioutil: it never finishes and aborts because it runs out of memory.
func ioutilHash() {
file, _ := ioutil.ReadFile(iso)
h := sha512.New()
fmt.Printf("%x", h.Sum(file))
}
When using the function below everything works fine.
func ioHash() {
f, err := os.Open(iso) //iso is a big ~1.5GB file
omg(err) //error handling function
defer f.Close()
h := sha512.New()
io.Copy(h, f)
fmt.Printf("%x", h.Sum(nil))
}
My Question:
Why is the ioutil.ReadFile() function not working right? The 1.5GB file should not fill my 16GB of RAM. I don't know where to look right now.
Could somebody explain the differences between the methods? I don't get it with reading the go-doc and examples.
Having usable code is nice, but understanding why it works is way above that.
Thanks in advance!
The following code doesn't do what you think it does.
func ioutilHash() {
file, _ := ioutil.ReadFile(iso)
h := sha512.New()
fmt.Printf("%x", h.Sum(file))
}
This first reads your 1.5GB iso. As jnml pointed out, it continuously allocates bigger and bigger buffers to fill it. In the end, the total buffer size is no less than 1.5GB and no greater than 1.875GB (by the current implementation).
However, after that you make yet another buffer! h.Sum(file) doesn't hash file. It appends the current hash to file! This may or may not cause yet another allocation.
The real problem is that you then take that file, now with the hash appended, and print it with %x. fmt actually pre-computes the output using the same kind of growing buffer jnml described for ioutil.ReadAll, so it keeps allocating bigger and bigger buffers to store the hex encoding of your file. Since each hex character encodes 4 bits, that means no less than a 3GB buffer for that and no greater than 3.75GB.
This means your active buffers may be as big as 5.625GB. Combine that with the GC not being perfect and not removing all the intermediate buffers, and it could very easily fill your space.
The correct way to write that code would have been.
func ioutilHash() {
file, _ := ioutil.ReadFile(iso)
h := sha512.New()
h.Write(file)
fmt.Printf("%x", h.Sum(nil))
}
This doesn't do nearly the same number of allocations.
The bottom line is that ReadFile is rarely what you want to use. IO streaming (using readers and writers) is always the best way when it is an option. Not only do you allocate much less when you use io.Copy, you also hash and read the disk concurrently. In your ReadFile example, the two resources are used synchronously when they don't depend on each other.
ioutil.ReadFile is working right. It's your fault for abusing the system resources by using that function for things you know are huge.
ioutil.ReadFile is a handy helper for files you're pretty sure in advance that they're going to be small. Like configuration files, most source code files etc. (Actually it's optimizing things for files <= 1e9 bytes, but that's an implementation detail and not part of the API contract. Your 1.5GB file forces it to use slice growing and thus allocating more than one big buffer for your data in the process of reading the file.)
Even your other approach using os.File is not ideal. You should definitely be using the bufio package for sequential processing of large files; see bufio.NewReader.

What can cause erroneous or missing index in AVI file?

I've built an AVI M-JPEG encoder which basically builds an AVI RIFF header with all the info.
I'm adding a frame index at the end of the video stream as specified in the specs.
Index is built as follow:
The index is built as follows: idx1 [size], then for each frame an entry 00dc [0x10,0x00,0x00,0x00] [offset of frame X] [size of frame X], until the end. I compared it to other AVI files, and everything is the same, so I can't understand why software doesn't find, or doesn't look for, the index in my AVI file. I have also verified several times that each tag is followed by the correct byte length. By the way, each offset has the proper padding, and the length is the size of the JPEG only.
I attached the current rendered file: movie.avi
I spent the whole day trying to figure out what the problem with my index is. The AVI spec is really simple, so I'm smashing my head on the desk.
[Edit]
As soon as my video is longer than 1 second, it fails. That makes no sense to me currently, as the algorithm is the same regardless of how many frames are written.
Your AVI file violates the alignment rule: every chunk must start at an even byte.
Add a zero byte after every odd-length frame, and update the index accordingly. The chunk size in the header should still be odd to tell the true size of the data, but all offsets should be even.

Storing bitmap data in an integer array for speed?

I am using Cocoa/Objective-C, and I am using NSBitmapImageRep's getPixel:atX:y: to test whether R is 0 or 255. That is the only piece of data I need (the bitmap is only black and white).
I am noticing that this one function is the biggest draw on CPU power in my application, accounting for something like 95% of the overhead. Would it be faster for me to preload the bitmap into a two-dimensional integer array
NSUInteger pixels[1280][1024];
and read the values like so:
if (pixels[x][y] != 0) {
    //....do stuff
}
?
One thing that might be helpful could be converting the data into something more "dense". Since you're only interested in a single bit per pixel location, it doesn't make sense to store more than that. Storing more data than necessary means you get less usage out of your cache, which can really slow things down if the image is big and/or the accesses very random.
For instance, you could use the platform's largest "native" integer and pack in the pixels to use a single bit for each pixel. That will make the access a bit more involved since you need to do a single-bit testing, but it might be a win.
You would do something like this:
uint32_t image[HEIGHT * ((WIDTH + 31) / 32)];
Then initialize this array by using the slow getter method, once per pixel. Then you can read out the value of a pixel using something like image[y * ((WIDTH + 31) / 32) + (x / 32)] & (1 << (x & 31)).
I'm being vague ("might", "can" and so on) since it really depends on your access pattern, the size of the image, and other things. You should probably test it.
I'm not familiar with Objective-C or the NSBitmapImageRep object, but a reasonable guess is that the getPixel routine employs clipping to avoid reading outside of memory, which could be a possible slowdown (among other things).
Have a look inside it and see what it does.
(update)
Having learnt that this is Apple code, you probably can't take a look inside it.
However, the NSBitmapImageRep class documentation seems to indicate that getPixel:atX:y: performs at least some type conversion magic. You could test whether the result is clipped by accessing a pixel outside the image boundary and observing the result.
The bitmapData method seems to be something you'd be interested in: get the pointer to the data, then read the array yourself, avoiding type conversion or clipping.

Bitmap while assigning height width crashes

Dim objBmpImage As Bitmap = New Bitmap(9450, 6750)
On execution of the above line of code the application crashes with an ArgumentException: "Parameter is not valid".
Please advise.
I think a bitmap this size requires a huge contiguous block of unmanaged memory to store the bitmap bits, more than what is available to your process. So this exception is one way of saying the size you want is not supported. Try reducing the size and it should work.
Dim objBmpImage AS Bitmap = New Bitmap(objYourOriginBmpImage, New Size(9450, 6750))
You are using the name objBmpImage three times, and in the wrong context: you are using the same object in the Bitmap constructor before it has actually been created. Try passing an already existing bitmap to the constructor.
Your bitmap is too large to process (I think you will hit a 2GB limit).
Try processing it in smaller chunks and merging the files together if necessary.