Is there a file size limit for the imported .dat file? - dm-script

I have run into a problem with large image stacks of several GB. I can directly open a 9 GB stack (dm4 format, 1000 x 1000 x 1000), but if I try to rotate it using a volume operation such as "rotate about x", GMS/DM exits automatically. I wrote a simple script that performs the operation with the slice3 function and displays the result correctly, but I cannot save it! If I try to save the resulting stack, the software says "sorry" and forces me to close it.
OK, so I figured the file is too large for the software to handle. I therefore saved the original data in .dat format, wrote a Fortran program to rotate it, and saved the result as a .dat file. But when I use the import function of GMS or DM, it only imports the first few hundred frames, not all of them.
How can I deal with this?

There certainly are size restrictions in both total size and maximum length along one dimension, but I don't think 1000 x 1000 x 1000 should be a limiting factor.
I just ran the following two scripts in order and saved the data on my GMS 3.4.3 without a problem.
image big := RealImage("Big First",4,1000,1000,1000)
big = icol*sin(irow/iheight*100*pi())*10000+iplane
big.showimage()
image bigIn := A  // A is the letter label of the front image created by the first script
image bigOut := bigIn.Slice3(0,0,0, 1,1000,1,0,1000,1,2,1000,1)
bigOut.ShowImage()
Can you edit your question to include the script code that's failing for you and any other useful information?

TensorFlow Dataset `.map` - Is it possible to ignore errors?

Short version:
When using Dataset map operations, is it possible to specify that any 'rows' where the map invocation results in an error are quietly filtered out rather than having the error bubble up and kill the whole session?
Specifics:
I have an input pipeline set up that (more or less) does the following:
reads a set of file paths to images stored locally (images of varying dimensions)
reads a suggested set of 'bounding boxes' from a CSV
produces the set of all image path to bounding box combinations
reads and decodes each image, then produces the set of 'cropped' images for each of these combinations using tf.image.crop_to_bounding_box
My issue is that there are (very rare) instances where a suggested bounding box lies outside the bounds of a given image, so (understandably) tf.image.crop_to_bounding_box throws an error something like this:
tensorflow.python.framework.errors_impl.InvalidArgumentError: assertion failed: [width must be >= target + offset.]
which kills the session.
I'd prefer it if these errors were simply ignored and that the pipeline moved onto the next combination.
(I understand that the correct fix for this specific issue would be to commit the time to checking, one step earlier, that each bounding box fits within the image dimensions, and to drop the invalid combinations with a filter operation before they ever reach the map that does the cropping. I was wondering whether there is an easy way to just ignore an error and move on to the next case, both for ease of implementation in this specific case and in more general cases.)
For TensorFlow 2:
dataset = dataset.apply(tf.data.experimental.ignore_errors())
There is tf.contrib.data.ignore_errors. I've never tried this myself, but according to the docs the usage is simply
dataset = dataset.map(some_map_function)
dataset = dataset.apply(tf.contrib.data.ignore_errors())
It should simply pass through the inputs (i.e. return the same dataset) but drop any elements that throw an error.
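If you prefer the filtering route mentioned in the question, a minimal sketch could look like the following. It assumes each dataset element is an (image, box) pair and that box is laid out as [offset_height, offset_width, target_height, target_width]; the helper name box_fits is made up for illustration, so adjust it to your real element structure:

import tensorflow as tf

def box_fits(image, box):
    # Keep only boxes that lie entirely inside the image.
    # Assumes box = [offset_height, offset_width, target_height, target_width].
    shape = tf.cast(tf.shape(image), box.dtype)
    return tf.logical_and(box[0] + box[2] <= shape[0],
                          box[1] + box[3] <= shape[1])

dataset = dataset.filter(box_fits)  # drop impossible combinations first
dataset = dataset.map(lambda image, box: tf.image.crop_to_bounding_box(
    image, box[0], box[1], box[2], box[3]))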

STM32F407VET FatFs f_close returns FR_DISK_ERR

I am interfacing an SD card (16 GB SanDisk Ultra microSD) with an STM32F407 microcontroller over SDIO, using ChaN's FatFs library. When I try to write data into an existing file, f_write returns FR_OK and reports the number of bytes written (equal to the number of bytes requested), but f_close() returns FR_DISK_ERR and in the end the file is empty.
With more experimenting I found that if I format the microSD card with a 64 KB allocation unit size and the existing file already contains some text, I can write 64 KB of data to the file; f_close() still returns FR_DISK_ERR, but the file is not empty and I can see the data under Windows 10.
If the existing file contains no text, I end up with an empty file, even though f_write returns FR_OK; f_close still returns FR_DISK_ERR.
In short, when I use f_write on a text file I created on my PC, I can overwrite up to 64 KB of its content, but I can't get it to work with an empty file created with f_open.
I came across a similar post with the same issue
TMS320F2812 FatFs f_write returns FR_DISK_ERR
I tried the solutions given in that post, but they didn't work. Since my controller has 192 KB of RAM, I assume that is sufficient for the FatFs module. My stack size is around 13 KB and my heap size 4 KB, which is plenty for this application. The SD card is supplied with 3.3 V.
I dug a little deeper into the code to see where the error occurs and found that I get an SD_ILLEGAL_CMD error while setting the block size for the card. The call chain is f_close (ff.c) -> f_sync (ff.c) -> move_window (ff.c) -> disk_read (diskio.c) -> SD_ReadBlock (sdcard.c), and SD_ReadBlock returns SD_ILLEGAL_CMD while setting the block size.
Any solutions are appreciated. If more information is required, please feel free to ask and I will update the question.
ChaN FatFs version: R0.07e

File reading and checksums in go. Difference between methods

Recently I've been creating checksums for files in Go, and my code has to handle both small and big files. I tried two methods: the first uses ioutil.ReadFile("filename"), the second works with os.Open("filename").
Examples:
The first function uses io/ioutil and works for small files. When I try to copy a big file, my RAM gets blasted: for a 1.5 GB ISO it uses 3 GB of RAM.
func byteCopy(fileToCopy string) {
    file, err := ioutil.ReadFile(fileToCopy) // 1.5 GB file
    omg(err)                                 // error handling function
    ioutil.WriteFile("2.iso", file, 0777)
    os.Remove("2.iso")
}
It gets even worse when I want to create a checksum with crypto/sha512 and io/ioutil: it never finishes and aborts because it runs out of memory.
func ioutilHash() {
    file, _ := ioutil.ReadFile(iso)
    h := sha512.New()
    fmt.Printf("%x", h.Sum(file))
}
When using the function below everything works fine.
func ioHash() {
    f, err := os.Open(iso) // iso is a big ~1.5 GB file
    omg(err)               // error handling function
    defer f.Close()
    h := sha512.New()
    io.Copy(h, f)
    fmt.Printf("%x", h.Sum(nil))
}
My Question:
Why is the ioutil.ReadFile() function not working right? The 1.5 GB file should not fill my 16 GB of RAM. I don't know where to look right now.
Could somebody explain the differences between the two methods? I don't get it from reading the Go docs and examples.
Having usable code is nice, but understanding why it works matters even more to me.
Thanks in advance!
The following code doesn't do what you think it does.
func ioutilHash() {
    file, _ := ioutil.ReadFile(iso)
    h := sha512.New()
    fmt.Printf("%x", h.Sum(file))
}
This first reads your 1.5 GB ISO into memory. As jnml pointed out, it continuously allocates bigger and bigger buffers to hold it. In the end, the total buffer size is no less than 1.5 GB and no greater than 1.875 GB (by the current implementation).
However, after that you make another buffer! h.Sum(file) doesn't hash file; it appends the current hash to file! This may or may not cause yet another allocation.
The real problem is that you then take that file, now with the hash appended, and print it with %x. fmt pre-computes the output using the same kind of buffer growing that jnml pointed out ioutil.ReadAll uses, so it keeps allocating bigger and bigger buffers to hold the hex encoding of your file. Since each hex character encodes 4 bits, the output doubles the data size, which means no less than a 3 GB buffer for that and no greater than 3.75 GB.
This means your active buffers may be as big as 5.625 GB. Combine that with the GC not being perfect and not removing all the intermediate buffers, and it can very easily fill your available memory.
The correct way to write that code would have been:
func ioutilHash() {
    file, _ := ioutil.ReadFile(iso)
    h := sha512.New()
    h.Write(file)
    fmt.Printf("%x", h.Sum(nil))
}
This doesn't do nearly the same number of allocations.
The bottom line is that ReadFile is rarely what you want to use. IO streaming (using readers and writers) is always the best way when it is an option. Not only do you allocate much less when you use io.Copy, you also hash and read the disk concurrently. In your ReadFile example, the two resources are used synchronously when they don't depend on each other.
ioutil.ReadFile is working right. It's your fault for abusing system resources by using that function on things you know are huge.
ioutil.ReadFile is a handy helper for files you're pretty sure in advance are going to be small, like configuration files, most source code files, etc. (Actually, it is optimized for files <= 1e9 bytes, but that's an implementation detail and not part of the API contract. Your 1.5 GB file forces it to use slice growing, and thus to allocate more than one big buffer for your data in the process of reading the file.)
Even your other approach using os.File is not okay. You definitely should be using the "bufio" package for sequential processing of large files; see bufio.NewReader.

What can cause erroneous or missing index in AVI file?

I've built an AVI M-JPEG encoder which basically builds an AVI RIFF header with all the information.
I'm adding a frame index at the end of the video stream as specified in the specs.
The index is built as follows:
idx1[Size], then 00dc [0x10,0x00,0x00,0x00] [Offset of frame X] [Size of frame X], repeated until the end. I compared it with other AVI files and everything looks the same, so I can't understand why software doesn't find - or doesn't look for - the index in my AVI file. I also verified several times that each tag is followed by the correct byte length. By the way, each offset takes the padding into account, and the length is the size of the JPEG only.
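For clarity, each 16-byte entry described above boils down to something like this sketch (Python with a hypothetical helper name, used purely for illustration; the encoder itself is not assumed to be Python):

import struct

# One 'idx1' entry: chunk id ('00dc'), flags (0x10 = AVIIF_KEYFRAME),
# offset of the frame chunk, size of the frame data, all little-endian.
def idx1_entry(offset, size):
    return b"00dc" + struct.pack("<III", 0x10, offset, size)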
I attached the current rendered file: movie.avi
I spent the whole day trying to figure out what the problem with my index is. The AVI spec is really simple, so I'm smashing my head on the desk.
[Edit]
As soon as my video is longer than 1 second, it fails. That makes no sense to me right now, since the algorithm is the same no matter how many frames are written.
Your AVI file violates the alignment rule: every chunk must start at an even byte.
Add a zero byte after every odd-length frame, and update the index accordingly. The chunk size in the header should still be odd to tell the true size of the data, but all offsets should be even.
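As a concrete sketch of that rule (illustrative Python with a hypothetical helper name, not the asker's encoder code), each frame chunk keeps its true size in the header but gets a pad byte whenever the JPEG data length is odd:

import struct

def frame_chunk(jpeg_bytes):
    # '00dc' fourcc, true (possibly odd) data size, then the data itself,
    # plus one pad byte when the length is odd so the NEXT chunk starts
    # on an even offset. The pad byte is not counted in the stored size.
    chunk = b"00dc" + struct.pack("<I", len(jpeg_bytes)) + jpeg_bytes
    if len(jpeg_bytes) % 2:
        chunk += b"\x00"
    return chunk

The idx1 entries should then record the (even) offset at which each padded chunk was written and the true JPEG size. Note that the index offset is usually measured from the start of the 'movi' list, although some files measure it from the start of the file; with padding applied, every offset lands on an even boundary either way.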

NSImage loading a portion of image

The requirement is like this:
I get a single large PNG image for a button; this single image contains the sub-images for hover, button clicked, and mouse exit that need to be displayed.
The single PNG is 1024 x 28, so each sub-image is about 256 x 28.
I have been googling for the best possible approach but couldn't work out how to achieve this.
I have the following approach in mind:
NSImage *pBtnImage[MAX_BUTTON_IMAGES];
for (int i = 0; i < 4; i++) {
    pBtnImage[i] = [[NSImage alloc] initWithData:??????];
}
I want to know what I should pass for the NSData parameter.
Is it possible to load the single image once and clip out each sub-image as and when it is needed?
Thanks in advance
There's no simple Cocoa-supported way to read only a sub-rectangle of the image from its data. It's a simple matter, however, to read the whole image in and only use a select rectangle of it when compositing. The thing is, with all the available API, you might be better off just using the standard +[NSImage imageNamed:] method to read the images in individually and letting the OS handle caching.
What actual, measured performance problem are you trying to solve? Does one really exist, or is this a case of premature optimization?