Multithreading a FileStream in VB2005 - VB.NET

I am trying to build a resource file for a website, basically jamming all the images into one compressed file that is then unpacked into the output buffers sent to the client.
My question is: in VB2005, can a FileStream be multithreaded if you know the size of the final file, à la BitTorrent, so that threads work on pieces of the stream (the individual files in this case) and add them to the resource FileStream as they complete, instead of one at a time?

If you need something similar to the torrent way of writing to a file, this is how I would implement it:
1. Open a FileStream on thread T1, and create a queue "monitor" for step 2.
2. Create a queue that is read from T1 but written to by multiple network reader threads. (Each queue entry would look like this: the position to write at, the size of the data buffer, and the data buffer itself.)
3. Fire up the threads. :)
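A minimal sketch of that design in VB2005 follows; the WriteChunk structure and the class around it are made up for illustration:

Imports System.IO
Imports System.Threading
Imports System.Collections.Generic

' One queue entry: where to write, and what to write.
Public Structure WriteChunk
    Public Position As Long
    Public Data As Byte()
End Structure

Public Class ChunkWriter
    Private ReadOnly chunkQueue As New Queue(Of WriteChunk)
    Private ReadOnly queueLock As New Object

    ' Called from the network reader threads.
    Public Sub Enqueue(ByVal chunk As WriteChunk)
        SyncLock queueLock
            chunkQueue.Enqueue(chunk)
            Monitor.Pulse(queueLock) ' wake the writer thread
        End SyncLock
    End Sub

    ' Runs on T1 - the only thread that ever touches the FileStream.
    Public Sub WriterLoop(ByVal path As String)
        Using fs As New FileStream(path, FileMode.Create, FileAccess.Write)
            Do
                Dim chunk As WriteChunk
                SyncLock queueLock
                    Do While chunkQueue.Count = 0
                        Monitor.Wait(queueLock)
                    Loop
                    chunk = chunkQueue.Dequeue()
                End SyncLock
                fs.Seek(chunk.Position, SeekOrigin.Begin)
                fs.Write(chunk.Data, 0, chunk.Data.Length)
            Loop ' add a termination condition for real use
        End Using
    End Sub
End Class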
Anyway, from your comments, your problem seems to be a different one.
I found something in a book, though I'm not sure whether it fits your case:
If you want to write data to a file, two parallel methods are available, WriteByte() and Write(). WriteByte() writes a single byte to the stream:

byte NextByte = 100;
fs.WriteByte(NextByte);

Write(), on the other hand, writes out an array of bytes. For instance, if you initialized the ByteArray mentioned before with some values, you could use the following code to write out the first nBytes of the array:

fs.Write(ByteArray, 0, nBytes);
Citation from: Nagel, Christian, Bill Evjen, Jay Glynn, Morgan Skinner, and Karli Watson. "Chapter 24 - Manipulating Files and the Registry". Professional C# 2005 with .NET 3.0. Wrox Press, © 2007. Books24x7. http://common.books24x7.com/book/id_20568/book.asp (accessed July 22, 2009)

I'm not sure whether you're asking if a System.IO.FileStream object can be read from or written to in a multi-threaded fashion, but the answer in both cases is no. This is not a supported scenario. You will need to add some form of locking to ensure serialized access to the resource.
The documentation calls out multi-threaded access to the object as an unsupported scenario:
http://msdn.microsoft.com/en-us/library/system.io.filestream.aspx
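For example, a minimal sketch of serializing access with SyncLock (the shared fs stream is assumed to be declared elsewhere):

' fsLock guards every access to the shared FileStream fs.
Private ReadOnly fsLock As New Object

Public Sub SafeWrite(ByVal buffer As Byte(), ByVal position As Long)
    SyncLock fsLock
        fs.Seek(position, SeekOrigin.Begin)
        fs.Write(buffer, 0, buffer.Length)
    End SyncLock
End Sub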

Related

FileStream faster way to read and write big file

I have a speed and memory-efficiency problem: I'm reading a big chunk of bytes from a .bin file and then writing it to another file. The problem is that to read the file I need to create a huge byte array for it:
Dim data3(endOFfile) As Byte ' endOFfile is around 270 MB here
Using fs As New FileStream(path, FileMode.Open, FileAccess.Read, FileShare.None)
    fs.Seek(startOFfile, SeekOrigin.Begin)
    fs.Read(data3, 0, endOFfile)
End Using
Using vFs As New FileStream(Environment.GetFolderPath(Environment.SpecialFolder.Desktop) & "\test.bin", FileMode.Create) 'save
    vFs.Write(data3, 0, endOFfile)
End Using
so it takes a long time to process. What's a more efficient way to do it?
Can I somehow read and write with the same file stream, without using a byte array?
I've never done it this way, but I would think that the Stream.CopyTo method should be the easiest option and as quick as anything.
Using inputStream As New FileStream(...),
      outputStream As New FileStream(...)
    inputStream.CopyTo(outputStream)
End Using
I'm not sure whether that overload will read all the data in one go or use a default buffer size. If it's the former or you want to specify a buffer size other than the default, there's an overload for that:
inputStream.CopyTo(outputStream, bufferSize)
You can experiment with different buffer sizes to see whether it makes a difference to performance. Smaller is better for memory usage but I would expect bigger to be faster, at least up to a point.
Note that the CopyTo method requires at least .NET Framework 4.0. If you're executing this code on the UI thread, you might like to call CopyToAsync instead, to avoid freezing the UI. The same two overloads are available, plus a third that accepts a CancellationToken. I'm not going to teach you how to use Async/Await here, so research that yourself if you want to go that way. Note that CopyToAsync requires at least .NET Framework 4.5.
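If you're stuck below .NET Framework 4.0, the equivalent manual loop is straightforward. A sketch (the paths are placeholders; 81920 bytes matches CopyTo's default buffer size but is otherwise arbitrary):

Dim buffer(81919) As Byte ' 80 KB working buffer
Using inputStream As New FileStream(sourcePath, FileMode.Open, FileAccess.Read), _
      outputStream As New FileStream(destPath, FileMode.Create, FileAccess.Write)
    Dim bytesRead As Integer = inputStream.Read(buffer, 0, buffer.Length)
    Do While bytesRead > 0
        outputStream.Write(buffer, 0, bytesRead)
        bytesRead = inputStream.Read(buffer, 0, buffer.Length)
    Loop
End Using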

How to read all bytes of a file in winrt with ReadBufferAsync?

I have an array of objects that each needs to load itself from binary file data. I create an array of these objects and then call an AsyncAction for each of them that starts it reading in its data. Trouble is, they are not loading entirely - they tend to get only part of the data from the files. How can I make sure that the whole thing is read? Here is an outline of the code: first I enumerate the folder contents to get a StorageFile for each file it contains. Then, in a for loop, each receiving object is created and passed the next StorageFile, and it creates its own Buffer and DataReader to handle the read. m_file_bytes is a std::vector.
m_buffer = co_await FileIO::ReadBufferAsync(nextFile);
m_data_reader = winrt::Windows::Storage::Streams::DataReader::FromBuffer(m_buffer);
m_file_bytes.resize(m_buffer.Length());
m_data_reader.ReadBytes(m_file_bytes);
My thought was that since the buffer and reader are class members of the object they would not go out of scope and could finish their work uninterrupted as the next objects were asked to load themselves in separate AsyncActions. But the DataReader only gets maybe half of the file data or less. What can be done to make sure it completes? Thanks for any insights.
[Update] Perhaps what is going on is that the file system can handle only one read task at a time, and by starting all these async reads each one is interrupting the previous one -? But there must be a way to progressively read a folder full of files.
[Update] I think I have it working, by adopting the principle of concentric loops: the idea is not to proceed to the next load until the previous one has completed. I think (someone can correct me if I'm wrong) that the file system cannot do simultaneous reads. If there is an accepted and secure example of how to do this I would still love to hear about it, so I'm not answering my own question.
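A minimal sketch of that sequential approach (the containing coroutine and all names are hypothetical):

#include <vector>
#include <winrt/Windows.Foundation.h>
#include <winrt/Windows.Foundation.Collections.h>
#include <winrt/Windows.Storage.h>
#include <winrt/Windows.Storage.Streams.h>

// Hypothetical coroutine: reads each file completely before starting the
// next, so only one ReadBufferAsync is ever in flight at a time.
winrt::Windows::Foundation::IAsyncAction LoadAllAsync(
    winrt::Windows::Foundation::Collections::IVectorView<
        winrt::Windows::Storage::StorageFile> files)
{
    for (auto const& file : files)
    {
        auto buffer = co_await winrt::Windows::Storage::FileIO::ReadBufferAsync(file);
        auto reader = winrt::Windows::Storage::Streams::DataReader::FromBuffer(buffer);
        std::vector<uint8_t> bytes(buffer.Length());
        reader.ReadBytes(bytes); // the array_view over the vector receives the data
        // ...hand 'bytes' off to the receiving object here...
    }
}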
#include <wrl.h>
#include <robuffer.h>

// Returns a raw pointer to the bytes backing an IBuffer, via the
// IBufferByteAccess COM interface (or nullptr on failure).
uint8_t* GetBufferData(winrt::Windows::Storage::Streams::IBuffer& buffer)
{
    ::IUnknown* unknown = winrt::get_unknown(buffer);
    ::Microsoft::WRL::ComPtr<::Windows::Storage::Streams::IBufferByteAccess> bufferByteAccess;
    HRESULT hr = unknown->QueryInterface(__uuidof(::Windows::Storage::Streams::IBufferByteAccess),
                                         &bufferByteAccess);
    if (FAILED(hr))
        return nullptr;
    byte* bytes = nullptr;
    bufferByteAccess->Buffer(&bytes);
    return bytes;
}
https://learn.microsoft.com/en-us/cpp/cppcx/obtaining-pointers-to-data-buffers-c-cx?view=vs-2017
https://learn.microsoft.com/en-us/windows/uwp/xbox-live/storage-platform/connected-storage/connected-storage-using-buffers

STM32F407VET FatFs f_close returns FR_DISK_ERR

I am interfacing an SD card (16 GB SanDisk Ultra micro SD) to an STM32F407 microcontroller over the SDIO protocol, using Chan's FatFs library. When I try to write data into an existing file, f_write returns FR_OK and reports the number of bytes written (equal to the number of bytes requested), but f_close() returns FR_DISK_ERR, and in the end the file is empty.
With more experimenting I found that if I format the micro SD card with a 64 KB allocation unit size, and the existing file already contains some text, then I am able to write 64 KB of data to the file; f_close() still returns FR_DISK_ERR, but in this case the file is not empty, and I can see the data under Windows 10.
If the existing file has no text in it, then I get an empty file, even though f_write returns FR_OK and f_close returns FR_DISK_ERR.
In short, when I use f_write on a text file I created from my PC, I can overwrite the content of that file up to 64 KB. But I can't get it to work with an empty file I created with f_open.
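For reference, the write sequence is essentially the standard one (a simplified sketch against the classic FatFs API; file name and data are made up):

#include "ff.h"

FATFS fs;
FIL fil;
UINT written;
FRESULT res;

f_mount(0, &fs);                                 /* register the work area (R0.07e API) */
res = f_open(&fil, "test.txt", FA_WRITE | FA_OPEN_ALWAYS);
res = f_write(&fil, "hello", 5, &written);       /* returns FR_OK, written == 5 */
res = f_close(&fil);                             /* this is the call that returns FR_DISK_ERR */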
I came across a similar post with the same issue:
TMS320F2812 FatFs f_write returns FR_DISK_ERR
I tried the solutions given in that post, but they didn't work. Since I have 192 KB of RAM in my controller, I guess that's sufficient for the FatFs module to work. My stack size is around 13 KB and my heap size is 4 KB, which is more than enough for this application. The SD card is supplied with 3.3 V.
I went a little deeper into the code to see where the error occurs, and found that I get an SD_ILLEGAL_CMD error while setting the block size for the card: inside f_close (ff.c), the chain f_sync (ff.c) -> move_window (ff.c) -> disk_read (diskio.c) -> SD_ReadBlock (sdcard.c) returns SD_ILLEGAL_CMD while setting the block size for the card.
Any solutions are appreciated. If more information is required, please feel free to ask and I will update the question.
Chan FatFs version: R0.07e

How do EMACS Lisp programmers read text files for non-editing purposes?

What do EMACS Lisp programmers do, when they want to write something roughly the equivalent of...
for line in open("foo.txt", "r", encoding="utf-8").readlines():
    ...(split on ws and call a fn, or whatever)...
..?
When I look in the EMACS lisp help, I see functions about opening files into text editing buffers -- not exactly what I was intending. I suppose I could write functions to visit the lines of the file, but if I did that, I wouldn't want the user to see it, and besides, it doesn't seem very efficient from a text-processing standpoint.
I think a more direct translation of the original Python code is as follows:
(with-temp-buffer
  (insert-file-contents "foo.txt")
  ;; (not (eobp)) guards against an endless zero-width match at end of buffer
  (while (and (not (eobp))
              (search-forward-regexp "\\(.*\\)\n?" nil t))
    ;; do something with this line in (match-string 1)
    ))
I think with-temp-buffer/insert-file-contents is generally preferable to with-current-buffer/find-file-noselect, because the former guarantees that you're working with a fresh copy of the entire file contents. With the latter construction, if you happen to already have a buffer visiting the target file, then that buffer is returned by find-file-noselect, so if that buffer has been narrowed, you'll only see that part of the file when you process it.
Keep in mind that it may very well be more convenient not to process the file line-by-line. For example, this is an expression that returns a list of all sequences of consecutive digits in the file:
(with-temp-buffer
  (insert-file-contents "foo.txt")
  (loop while (search-forward-regexp "[0-9]+" nil t)
        collect (match-string 0)))
You'll need to (require 'cl) first to bring in the loop macro.
Yes, that is what you want to do: visit the file in a buffer, and operate on the text in that buffer.
You do not have to display the buffer, i.e., the user need not see it.
And as for efficiency: manipulating text in a buffer is typically the most efficient way to manipulate text.
You can visit a file in a buffer in several ways. You might want to use an existing file buffer for this, depending on the use case. That is, if the file is already "open" in Emacs then you might want to use its buffer.
Or you might want to disregard any existing file buffer for an already "open" file, and read the file anew into a new buffer. For that, as #Sean mentions, you can use insert-file-contents with a buffer that you create. You can create the buffer using with-temp-buffer or generate-new-buffer, depending, again, on what you want/need to do with it.
If you do want to reuse a buffer that is already visiting the file, you can test whether it has been modified in memory, whether it is narrowed, etc., and do whatever is appropriate for your use case. You can check whether there is already a buffer visiting the file (using any path/file name) using function find-buffer-visiting.
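For example, a sketch of that kind of check (the file name is a placeholder):

(let ((buf (find-buffer-visiting "foo.txt")))
  (when buf
    (with-current-buffer buf
      (when (buffer-modified-p)
        ;; the buffer has unsaved changes - decide whether to save or bail out
        )
      (save-restriction
        (widen) ; temporarily undo any narrowing
        ;; process the full file contents here
        ))))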
To visit the file, taking advantage of any existing buffer that is visiting it, you can use find-file-noselect. That function returns the buffer that visits the file, so you can pass that buffer as the first argument to with-current-buffer. Here is a simple example.
(with-current-buffer (let ((enable-local-variables ()))
                       (find-file-noselect file))
  ;; Do some stuff with the text in the buffer.
  ;; Optionally save the buffer back to the file.
  )
(The binding of enable-local-variables to nil is a minor optimization, for the common case where you don't need to bother with buffer-local variables.)

Writing .hex file to Internal FlashROM of 8051 microcontroller using SPI bus

I am doing a firmware upgrade using the SPI bus, on an EEPROM as well as the internal ROM of an 8051 - basically writing a .hex file to both of these memory devices. I am able to see the .hex file to be written there, and I can see that the slave and master are communicating properly, but I am not able to write anything to my memory devices.
If you have suggestions, or if you have faced similar problems, please let me know where the actual problem might be.
Any inputs would be welcome.
Regards,
Ravi
I think more information will likely be required. In any case, here are a few pitfalls I can see:
Hex files are not necessarily memory images. The 8051s I've worked with usually use Intel Hex, which is an ASCII format that describes the memory. The format is well documented here.
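For illustration, here's a typical Intel Hex data record (the format spec's standard example, with spaces added between fields):

: 10 0100 00 214601360121470136007EFE09D21901 40
  |  |    |  |                                |
  |  |    |  '-- 16 data bytes                '-- checksum
  |  |    '-- record type (00 = data)
  |  '-- load address 0x0100
  '-- byte count (0x10 = 16)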
If you're having trouble writing to the EEPROM, you may not be sending the proper instructions. Typically, SPI EEPROM is byte-addressed but still uses paging internally. You should start your writes on a page boundary and write the whole page, then issue another write command, and so on. By convention, if you overrun a page, or start in the middle of a page, the address loops around within the page. So if your page is 8 bytes long and you start writing values 0-7 at index 4, you'll get:
Page Start: Index 0 = 4
Index 1 = 5
Index 2 = 6
Index 3 = 7
Index 4 = 0
Index 5 = 1
Index 6 = 2
Index 7 = 3
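To avoid that wraparound, writes are normally chunked so that no single write command crosses a page boundary. A sketch (the page size and the low-level spi_eeprom_write_page() routine are hypothetical):

#include <stddef.h>
#include <stdint.h>

#define PAGE_SIZE 8u /* hypothetical; take the real value from the data sheet */

/* Hypothetical driver primitive: issues one WRITE command for up to one
   page starting at addr; must not cross a page boundary. */
extern void spi_eeprom_write_page(uint16_t addr, const uint8_t *data, size_t len);

/* Writes an arbitrary buffer, splitting it so no command crosses a page. */
void spi_eeprom_write(uint16_t addr, const uint8_t *data, size_t len)
{
    while (len > 0) {
        size_t room  = PAGE_SIZE - (addr % PAGE_SIZE); /* bytes left in this page */
        size_t chunk = (len < room) ? len : room;
        spi_eeprom_write_page(addr, data, chunk);
        addr += (uint16_t)chunk;
        data += chunk;
        len  -= chunk;
    }
}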
Most EEPROMs have locking mechanisms to prevent accidental writes once they are finalized. If the lock has been set, you will need to send the unlock sequence first (this will be detailed in the data sheet if the part has one).
To help you further, please reference part numbers, and better yet data sheets, if you can.