Why do maildirs need the `new` folder?

I understand why tmp is needed in a maildir: it ensures that half-written mails never show up in cur. However, why is new needed?
Couldn't the mails just be dropped into the cur folder directly with a name like <uniq>:? As soon as a MUA wants to add flags, it could fill in the part after the colon. That would probably make things a (well, just a tiny) bit simpler to code.
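For context, the delivery flow the maildir format prescribes is: write the message under tmp, rename it into new, and let the MUA later rename it into cur with the ":2," info suffix appended. A minimal sketch of that flow, in VB.NET purely for illustration (uniqueName stands in for the time.pid.hostname unique part, and the ':' in the cur name assumes a Unix-like filesystem):

Imports System.IO

Module MaildirSketch
    ' Delivery side: half-written mail only ever exists in tmp; the rename makes
    ' the message appear in new only once it is complete.
    Sub Deliver(maildir As String, uniqueName As String, body As String)
        Dim tmpPath = Path.Combine(maildir, "tmp", uniqueName)
        Dim newPath = Path.Combine(maildir, "new", uniqueName)
        File.WriteAllText(tmpPath, body)
        File.Move(tmpPath, newPath)
    End Sub

    ' MUA side: once it has seen the message, it moves it to cur and records flags
    ' in the part after the colon (here S = seen).
    Sub MarkSeen(maildir As String, uniqueName As String)
        File.Move(Path.Combine(maildir, "new", uniqueName),
                  Path.Combine(maildir, "cur", uniqueName & ":2,S"))
    End Sub
End Module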

How do Emacs Lisp programmers read text files for non-editing purposes?

What do Emacs Lisp programmers do when they want to write something roughly the equivalent of...
for line in open("foo.txt", "r", encoding="utf-8").readlines():
...(split on ws and call a fn, or whatever)...
..?
When I look in the Emacs Lisp help, I see functions for opening files into text-editing buffers -- not exactly what I was intending. I suppose I could write functions to visit the lines of the file, but if I did that, I wouldn't want the user to see it, and besides, it doesn't seem very efficient from a text-processing standpoint.
I think a more direct translation of the original Python code is as follows:
(with-temp-buffer
  (insert-file-contents "foo.txt")
  (while (and (not (eobp))
              (search-forward-regexp "\\(.*\\)\n?" nil t))
    ;; do something with this line in (match-string 1)
    ))
I think with-temp-buffer/insert-file-contents is generally preferable to with-current-buffer/find-file-noselect, because the former guarantees that you're working with a fresh copy of the entire file contents. With the latter construction, if you happen to already have a buffer visiting the target file, then that buffer is returned by find-file-noselect, so if that buffer has been narrowed, you'll only see that part of the file when you process it.
Keep in mind that it may very well be more convenient not to process the file line-by-line. For example, this is an expression that returns a list of all sequences of consecutive digits in the file:
(with-temp-buffer
  (insert-file-contents "foo.txt")
  (loop while (search-forward-regexp "[0-9]+" nil t)
        collect (match-string 0)))
You'll need to (require 'cl) first to bring in the loop macro.
Yes, that is what you want to do: visit the file in a buffer, and operate on the text in that buffer.
You do not have to display the buffer, i.e., the user need not see it.
And as for efficiency: manipulating text in a buffer is typically the most efficient way to manipulate text.
You can visit a file in a buffer in several ways. You might want to use an existing file buffer for this, depending on the use case. That is, if the file is already "open" in Emacs then you might want to use its buffer.
Or you might want to disregard any existing file buffer for an already "open" file, and read the file anew into a new buffer. For that, as @Sean mentions, you can use insert-file-contents with a buffer that you create. You can create the buffer using with-temp-buffer or generate-new-buffer, depending, again, on what you want/need to do with it.
If you do want to reuse a buffer that is already visiting the file, you can test whether it has been modified in memory, whether it is narrowed, etc., and do whatever is appropriate for your use case. You can check whether there is already a buffer visiting the file (using any path/file name) using function find-buffer-visiting.
To visit the file, taking advantage of any existing buffer that is visiting it, you can use find-file-noselect. That function returns the buffer that visits the file, so you can pass that buffer as the first argument to with-current-buffer. Here is a simple example.
(with-current-buffer (let ((enable-local-variables ())) (find-file-noselect file))
  ;; Do some stuff with the text in the buffer.
  ;; Optionally save the buffer back to the file.
  )
(The binding of enable-local-variables to nil is a minor optimization, for the common case where you don't need to bother with buffer-local variables.)

misunderstanding how the threads work

I have a problem, a big problem, with threads in VB.NET.
First of all, I should say that I haven't worked with threads before (only at school). I have read a lot of pages about them, but none of them helped with my problem.
My main goal here is to understand the logic and then, if possible, solve the problem I have; both are related. I will explain the problem below.
The code has no comments or related documentation. It is a program developed many years ago, the person who wrote it no longer works in the office, and nobody knows how it works :S
I have a list called listOfProccess, and when it contains only one item everything works fine.
The callback function passed to QueueUserWorkItem fills in the information about p and then the thread executes, I suppose.
Each item in that list contains a type field, for example
listOfProccess[].type = 'a/b/c/d/e/f/g/'
and each item also includes an ID.
Code:
If listOfProccess.Count > 0 Then
    Threading.ThreadPool.SetMinThreads(1, 1)
    Threading.ThreadPool.SetMaxThreads(4, 4)
    For Each p In listOfProccess
        Try
            Threading.ThreadPool.QueueUserWorkItem(New Threading.WaitCallback(AddressOf p.function))
        Catch e As Exception
            sendMail("mail#mail", "alerts#mail.ie", "", e.StackTrace)
        End Try
    Next
End If
Problems:
I have two problems here:
Sometimes the program executes an item in the list (e.g. 'a') in an infinite loop and uses up all the resources of the machine, but if I close it and restart it, it works. I don't know if this is a problem with the threads or not; honestly I think it is something else, because this problem started two or three weeks ago and the program had been running for a year.
This one I think is related to the threads. If I have two (or more) p in the list, like this:
p[1].type = 'a/b/c/d/e/f/g/'
p[1].ID = 1
p[2].type = 'ww/xx/ff/yy/aa/rr/'
p[2].ID =2
When the system executes something like this, the order it follows is 'random', i.e. it takes a, b, c from the first one, then does ww, and comes back to the first. The problem is bigger if I have more items in the list, like 4 or 5. This is not a very big problem, because the program works; it doesn't work 100% fine, but it works. This is more about trying to understand why it works this way.
Any help is welcome.
The second problem is a race condition: since you can't guarantee the order of execution of the threads, there is a non-zero probability that your threads will overwrite each other's values. There are a lot of ways to solve this issue: algorithms (either lock-oriented or lock-free), synchronization techniques and so on, and there is no silver-bullet solution.
The first problem is unclear to me, as I can't understand what exactly you mean by an infinite loop. Maybe this can happen if you link items to ones deleted from another thread and there is no way for your task to exit, because the links in the list are broken. This should still be solved with synchronization.
I think you should start with the MSDN articles or some book about multithreading in general, and after that you should break your program down, step by step, into small parts you understand.
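To make the synchronization point concrete, here is a minimal VB.NET sketch (Proccess, Worker and the fake work are made-up stand-ins, not the original program): each queued work item carries its own state object, and anything the threads share is only touched inside a SyncLock.

Imports System.Collections.Generic
Imports System.Threading

Module ThreadingSketch
    ' Hypothetical stand-in for the items in listOfProccess.
    Class Proccess
        Public ID As Integer
        Public Type As String
    End Class

    ' Shared state: every access must go through the lock below.
    Private ReadOnly resultsLock As New Object()
    Private ReadOnly results As New List(Of String)()

    Sub QueueAll(listOfProccess As List(Of Proccess))
        For Each p In listOfProccess
            ' Pass p as the state argument so each work item gets its own data.
            ThreadPool.QueueUserWorkItem(AddressOf Worker, p)
        Next
    End Sub

    Sub Worker(state As Object)
        Dim p = CType(state, Proccess)
        Dim output As String = "processed " & p.Type   ' stand-in for the real work
        ' Only one thread at a time may touch the shared list.
        SyncLock resultsLock
            results.Add(p.ID & ": " & output)
        End SyncLock
    End Sub
End Module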
Update:
p.function - regarding the infinite loop, you should look at the code of this delegate. If there is a condition that restarts the work, you should check for a recursion limit. For example, if there is an optimistic locking algorithm, your code can find out that the update it tried isn't valid and restart it. And if the links are broken, it will never finish its task and will stay in an infinite loop forever.
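A minimal sketch of that kind of guard, assuming a hypothetical TryDoWork(state) that returns False whenever the work has to be restarted (sendMail is the routine from the code above):

' Illustrative only: bound the restart logic so one broken item cannot spin forever.
Sub RunWithRetryLimit(state As Object)
    Const MaxAttempts As Integer = 5
    Dim attempt As Integer = 0
    Dim succeeded As Boolean = False

    Do While attempt < MaxAttempts AndAlso Not succeeded
        attempt += 1
        succeeded = TryDoWork(state)   ' stand-in for the real body of p.function
    Loop

    If Not succeeded Then
        ' Give up and report instead of looping forever.
        sendMail("mail#mail", "alerts#mail.ie", "", "Gave up after " & MaxAttempts & " attempts")
    End If
End Sub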

VB.NET, best practice for sorting concurrent results of threadpooling?

In short, I'm trying to "sort" the incoming results of threadpool work items as they finish. I have a functional solution, but there's no way in the world it's the best way to do it (it's prone to huge pauses). So here I am! I'll try to hit the bullet points of what's going on/what needs to happen and then post my current solution.
The intent of the code is to get information about files in a directory, and then write that to a text file.
I have a list (Counter.ListOfFiles) that is a list of the file paths sorted in a particular way. This is the guide that dictates the order I need to write to the text file.
I'm using a threadpool to collect the information about each file and create a stringbuilder with all of the text ready to write to the text file. I then call a procedure (SyncUpdate, included below) from that thread, passing the stringbuilder (strBld) along with the path of the file that particular thread just wrote about (Xref).
The procedure includes a synclock to hold all the other threads until it finds a thread passing the correct information. The "correct" information is when the xref passed by the thread matches the first item in my list (FirstListItem). When that happens, I write to the text file, delete the first item in the list and do it again with the next thread.
The way I'm using the monitor is probably not great; in fact, I have little doubt I'm using it in an offensively wanton manner. Basically, while the xref (from the thread) <> the first item in my list, I'm doing a PulseAll on the monitor. I originally was using Monitor.Wait, but it would eventually just give up trying to sort through the list, even when using a pulse elsewhere. I may have just been coding something awkwardly. Either way, I don't think it's going to change anything.
Basically the problem comes down to the fact that the monitor will pulse through all of the items it has in the queue, when there's a good chance the item I'm looking for was already passed to it earlier in the queue, so it sorts through all of the items again before looping back around to find the one that matches. The result is that my code will hit one of these and take a huge amount of time to complete.
I'm open to believing I'm just using the wrong tool for the job, or just not using the tool I have correctly. I would strongly prefer some sort of threaded solution (unsurprisingly, it's much faster!). I've been messing around a bit with the Parallel Task functionality today, and a lot of it looks promising, but I have even less experience with that than with the threadpool, and you can see how I'm abusing that! Maybe something with a queue? You get the idea. I am directionless. Anything someone could suggest would be much appreciated. Thanks! Let me know if you need any additional information.
Private Sub SyncUpdateResource(strBld As Object, Xref As String)
    SyncLock (CType(strBld, StringBuilder))
        Dim FirstListitem As String = Counter.ListOfFiles.First
        Do While Xref <> FirstListitem
            FirstListitem = Counter.ListOfFiles.First
            'This makes the code much faster for reasons I can only guess at.
            Thread.Sleep(5)
            Monitor.PulseAll(CType(strBld, StringBuilder))
        Loop
        Dim strVol As String = Form1.Volname
        Dim strLFPPath As String = Form1.txtPathDir
        My.Computer.FileSystem.WriteAllText(strLFPPath & "\" & strVol & ".txt", strBld.ToString, True)
        Counter.ListOfFiles.Remove(Xref)
    End SyncLock
End Sub
This is a pretty typical multiple producer, single consumer application. The only wrinkle is that you have to order the results before they're written to the output. That's not difficult to do. So let's let that requirement slide for a moment.
The easiest way in .NET to implement a producer/consumer relationship is with BlockingCollection, which is a thread-safe FIFO queue. Basically, you do this:
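Roughly, in VB.NET terms (a hedged sketch rather than the poster's actual code; BuildText, the string item type and the console output are placeholders):

Imports System.Collections.Concurrent
Imports System.Collections.Generic
Imports System.Threading.Tasks

Module ProducerConsumerSketch
    Sub Run(paths As List(Of String))
        Dim sharedQueue As New BlockingCollection(Of String)()

        ' Single consumer: blocks while the queue is empty and ends once
        ' CompleteAdding has been called and the queue has drained.
        Dim consumer = Task.Run(Sub()
                                    For Each text In sharedQueue.GetConsumingEnumerable()
                                        Console.WriteLine(text)   ' or append to the output file
                                    Next
                                End Sub)

        ' Producers: process the files in parallel, each result goes on the queue.
        Parallel.ForEach(paths, Sub(path) sharedQueue.Add(BuildText(path)))

        ' Tell the consumer that nothing more is coming, then wait for it to finish.
        sharedQueue.CompleteAdding()
        consumer.Wait()
    End Sub

    ' Stand-in for the real per-file work.
    Function BuildText(path As String) As String
        Return "info for " & path
    End Function
End Module

CompleteAdding is what lets GetConsumingEnumerable finish once the queue drains; without it the consumer would block forever waiting for more items.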
In your case, the producer threads get items, do whatever processing they need to, and then put the item onto the queue. There's no need for any explicit synchronization--the BlockingCollection class implementation does that for you.
Your consumer pulls things from the queue and outputs them. You can see a really simple example of this in my article Simple Multithreading, Part 2. (Scroll down to the third example if you're just interested in the code.) That example just uses one producer and one consumer, but you can have N producers if you want.
Your requirements have a little wrinkle in that the consumer can't just write items to the file as it gets them. It has to make sure that it's writing them in the proper order. As I said, that's not too difficult to do.
What you want is a priority queue of some sort onto which you can place an item if it comes in out of order. Your priority queue can be a sorted list or even just a sequential list if the number of items you expect to get out of order isn't very large. That is, if you typically have only a half dozen items at a time that could be out of order, then a sequential list could work just fine.
I'd use a heap because it performs well. The .NET Framework doesn't supply a heap, but I have a simple one that works well for jobs like this. See A Generic BinaryHeap Class.
So here's how I'd write the consumer (the code is in pseudo-C#, but you can probably convert it easily enough).
The assumption here is that you have a BlockingCollection called sharedQueue that contains the items. The producers put items on that queue. Consumers do this:
var heap = new BinaryHeap<ItemType>();
int expectedSequenceKey = 0;  // first key expected in the output order (assumed to start at 0)

foreach (var item in sharedQueue.GetConsumingEnumerable())
{
    if (item.SequenceKey == expectedSequenceKey)
    {
        // output this item
        // then check the heap to see if other items need to be output
        expectedSequenceKey = expectedSequenceKey + 1;
        while (heap.Count > 0 && heap.Peek().SequenceKey == expectedSequenceKey)
        {
            var heapItem = heap.RemoveRoot();
            // output heapItem
            expectedSequenceKey = expectedSequenceKey + 1;
        }
    }
    else
    {
        // item is out of order
        // put it on the heap
        heap.Insert(item);
    }
}
// if the heap contains items after everything is processed,
// then some error occurred.
One glaring problem with this approach as written is that the heap could grow without bound if one of your producers crashes or goes into an infinite loop, because the expected item then never arrives. But then, your other approach probably would suffer from that as well. If you think that's an issue, you'll have to add some way to skip an item that you think won't ever be forthcoming. Or kill the program. Or something.
If you don't have a binary heap or don't want to use one, you can do the same thing with a SortedList<ItemType>. SortedList will be faster than List, but slower than BinaryHeap if the number of items in the list is even moderately large (a couple dozen). Fewer than that and it's probably a wash.
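If you take the sorted-list route, here is a hedged VB.NET sketch of the same reordering idea, using a SortedDictionary keyed by sequence number as the reorder buffer (ResultItem and its fields are made-up names):

Imports System.Collections.Concurrent
Imports System.Collections.Generic
Imports System.Linq

Module ReorderSketch
    ' Hypothetical item type: a sequence number plus the text to write.
    Class ResultItem
        Public SequenceKey As Integer
        Public Text As String
    End Class

    Sub Consume(sharedQueue As BlockingCollection(Of ResultItem), writer As IO.TextWriter)
        Dim pending As New SortedDictionary(Of Integer, ResultItem)()
        Dim expectedSequenceKey As Integer = 0   ' assumed starting key

        For Each item In sharedQueue.GetConsumingEnumerable()
            ' Buffer everything, then flush as long as the next expected key is present.
            pending(item.SequenceKey) = item
            While pending.Count > 0 AndAlso pending.Keys.First() = expectedSequenceKey
                writer.WriteLine(pending(expectedSequenceKey).Text)
                pending.Remove(expectedSequenceKey)
                expectedSequenceKey += 1
            End While
        Next

        ' Anything still pending here means an expected item never arrived.
    End Sub
End Module

Because SortedDictionary enumerates its keys in ascending order, comparing the first key with expectedSequenceKey is the same as peeking at the smallest outstanding sequence number.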
I know that's a lot of info. I'm happy to answer any questions you might have.

set a breakpoint, when called: return and continue

I know how to do this in gdb. I'd attach, and follow with:
break myfunction
commands
return
cont
end
cont
I'm wondering if there's a way of doing this in C? I already have my code working for reading memory addresses and writing to memory addresses, and it automatically finds the pid and does related stuff. I'm stuck on implementing that use of breakpoints.
If you are talking about some sort of hand-written debugger, you can use the IP (instruction pointer) value to set a breakpoint: literally, when the IP hits a certain value, you stop the program being debugged and perform some routine (for example, handing control over to the debugger process). To use function names, you need symbol tables, as is done in GDB.
It's not quite clear what you are trying to achieve.
The GDB sequence you've shown will simply make myfunction return immediately.
Assuming you want your mini-debugger to have the same effect, simply write the opcode for ret (0xC3 on x86) to the address of myfunction; no need to do the breakpoint at all.

Ada: modifying a variable address during runtime

I have an array and a variable declared like this
NextPacketRegister : array (1 .. Natural (Size)) of Unsigned_32;
PacketBufferPointer : Unsigned_32;
for PacketBufferPointer'Address use To_Address (SPW_PORT_0_OUT_REG_ADDR);
for NextPacketRegister'Address use To_Address (16#A000_0000# + Integer_Address (PacketBufferPointer));
PacketBufferPointer points to a HW register that is accessed through the PCI bus of our board.
NextPacketRegister uses this register's value + 16#A000_0000#.
The thing is, every time I access NextPacketRegister, behind the scenes I perform a PCI access; these accesses are very slow and we are trying to remove this limitation.
But I can't seem to find a way to modify NextPacketRegister'Address at runtime (I'd like to read the PacketBufferPointer register ONCE, add 16#A000_0000# to it once, and not perform a PCI access every time).
I looked around but I have no clue how I could achieve this.
That is correct; if you use for ...'address use to overlay an object at a specific address, you cannot change it later.
Generally I try to avoid overlays. What you show is one drawback to them. Another is that if the object has any parts that require initialization, they will be reinitialized every time the object is elaborated.
One thing I do have to ask up front though: This looks like a device driver. If you don't like the hit from going to the PCI bus then, fine. The obvious way around your problem of course is to just read the object into a temporary variable and use that when you don't want to hit the PCI bus. But obviously when you do that you are no longer reading directly from the device, and thus won't see changes it made to its memory-mapped registers (and your changes won't go straight to those memory-mapped registers). That's what you want, right? Ada contains no magic to allow you to get data on and off the PCI bus without hitting the PCI bus.
It almost looks like you are thinking that this line:
for NextPacketRegister'Address use To_Address (16#A000_0000# + Integer_Address (PacketBufferPointer));
Means: "Every time I access NextPacketRegister, go find the value of PacketBufferPointer and overlay it where it happens to be right now". That is not the case. This will only happen once when your declaration is processed. Thereafter, every access to something like NextPacketRegister[12] will go to the same place, without any access to PacketBufferPointer.
Another way would be to use pointers and Unchecked_Conversion. That's generally my preferred solution for overlays. It looks hairier, but what you are doing is hairy, so it should look that way. Also, it doesn't perform initializations on the overlaid memory area. I suppose that could be a bad thing though, if you count on those. Of course, overlaying this way could cause an access to PacketBufferPointer, if you want; you'd have control over it depending on how you code it.
Since you asked about pointers, in this case I think you have a very valid case for using the package System.Address_to_Access_Conversions. I don't have the compiler handy, but I think it would go something like this:
type Next_Packet_Array is array (1 .. Natural (Size)) of Unsigned_32;
package Next_Packet_Array_Convert is new System.Address_To_Access_Conversions
  (Next_Packet_Array);
Synced_Next_Packet_Address : System.Address;
Now when you "sync", I guess you'd want to hit that PacketBufferPointer to get the register value (as a SYSTEM.ADDRESS), and save it into a variable for later use:
Synced_Next_Packet_Address := To_Address (16#A000_0000# + Integer_Address (PacketBufferPointer));
And when you want to access the Next_Packet_Array, it would be something like this: Next_Packet_Array_Convert.To_Pointer (Synced_Next_Packet_Address).all
Make a structure (an array of buffers?) that matches what your set of packet buffers looks like, and sit that at the address of the start of the array.
Read the array index from the register.
You can write C in any language, even Ada.
At least it works, and you get some sensible bounds checks.