How do blockchains find invalid blocks? - cryptography

I was reading about the blockchain, as I wanted to make a small implementation.
What I did not understand is this: if a miner adds a valid PoW to a block containing transactions with invalid digital signatures, why doesn't the blockchain just continue with the forged block and keep stacking blocks on top of it? How is it "corrected"?

The behavior of miners and other nodes is determined by the specific protocol of the network. That is, either such a block is considered invalid and is not accepted as the next block, or the block is accepted but the transactions included in it are ignored. In some blockchains the miner who created such a block can also be penalized in some way, for example by being excluded from the pool of miners or by forfeiting part of their collateral.
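As a rough sketch of the first case, here is a toy C model (not any real blockchain's code; the boolean fields stand in for real signature and proof-of-work checks, and all names are illustrative) of the validation an honest full node performs before it will build on a block:

#include <stdbool.h>
#include <stddef.h>

/* Toy model only: the fields below stand in for real cryptographic checks. */
typedef struct {
    bool signature_valid;          /* result of verifying the tx signature */
} Tx;

typedef struct {
    bool pow_valid;                /* block hash meets the difficulty target */
    const Tx *txs;
    size_t tx_count;
} Block;

/* An honest full node only extends a block that passes ALL checks. */
bool block_is_valid(const Block *b)
{
    if (!b->pow_valid)
        return false;              /* valid PoW alone is not enough */
    for (size_t i = 0; i < b->tx_count; i++)
        if (!b->txs[i].signature_valid)
            return false;          /* one forged signature rejects the whole block */
    return true;
}

In Bitcoin-style protocols, a miner who keeps stacking blocks on top of an invalid block is simply wasting work: the rest of the network runs the same checks and ignores that branch entirely.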

Related

Race condition removing local event monitor in event handler in Objective-C using Cocoa

I am creating a local event monitor in Objective-C using the Cocoa framework and wondered if this would introduce a race condition or not:
id monitor = [NSEvent addLocalMonitorForEventsMatchingMask:
(NSEventMaskLeftMouseDown | NSEventMaskRightMouseDown | NSEventMaskOtherMouseDown)
handler:^(NSEvent* event)
{
[NSEvent removeMonitor:monitor];
}];
Your code does not compile as the block does not return a value, so maybe you've simplified this too much for posting.
Next, the value of monitor within the block will always be nil, because its value is copied as part of block construction, before addLocalMonitorForEventsMatchingMask returns and a value is assigned to monitor.
You could address this last issue by declaring monitor to be __block, thus capturing the variable rather than its value; however, that gets you to:
You've got a reference cycle. The opaque monitor object returned by addLocalMonitorForEventsMatchingMask contains within it a reference to your block, and your block contains a reference to the monitor object. This won't affect the operation of the monitoring or its removal; it just means the monitor and block objects will never get collected. You could address this by setting monitor to nil in the block when you call removeMonitor.
Which gets us to your final question: is there a race condition? Presumably you mean between the event system calling your monitor for one event and trying to call it for a following event. I don't know if we can say for sure; however, the documentation for removeMonitor does not mention any precautions to take, and initial event processing is done through a "queue", suggesting the system will not start processing a following event until it has at least dispatched the current one to your app. This strongly suggests that race conditions are not an issue here.
Note, however, that the documentation, even the Swift version, uses the term "garbage collection", and though ARC is a type of GC, Apple tends to reserve the term for the long-deprecated pre-ARC (and pre-Swift) garbage collector - suggesting the documentation has not been revised for eons (in computer terms). Maybe someone else will offer a definitive answer on this.
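Putting those points together, a possible version of the fix (a sketch only, assuming ARC) would be:

__block id monitor = [NSEvent addLocalMonitorForEventsMatchingMask:
    (NSEventMaskLeftMouseDown | NSEventMaskRightMouseDown | NSEventMaskOtherMouseDown)
    handler:^NSEvent *(NSEvent *event)
{
    if (monitor != nil) {
        [NSEvent removeMonitor:monitor];
        monitor = nil;   // break the block <-> monitor reference cycle
    }
    return event;        // pass the event on unchanged (return nil to swallow it)
}];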
HTH

vkAllocateDescriptorSets returns VK_OUT_OF_HOST_MEMORY

I wrote Vulkan code on my old laptop and it worked; then I got a new laptop, and now when I run the program it aborts because vkAllocateDescriptorSets() returns VK_OUT_OF_HOST_MEMORY.
I doubt that it is actually out of memory, and I know it can allocate some memory because vkCreateInstance() doesn't fail like in this Stack Overflow post: Vulkan create instance VK_OUT_OF_HOST_MEMORY.
EDIT: Also, I forgot to mention, vkAllocateDescriptorSets() only returns VK_OUT_OF_HOST_MEMORY the second time I run it.
vkAllocateDescriptorSets allocates descriptors from a pool. So while such allocation could fail due to a lack of host/device memory, there are two other things that could cause failure. There may simply not be enough memory in the pool to allocate the number of descriptors/sets you asked for. Or there could be enough memory, but repeated allocations/deallocations have fragmented the pool such that the allocations cannot be made.
The case of allocating more descriptors/sets than are available should never happen. After all, you know how many descriptors and sets you put into that pool, so you should know exactly when you'll run out. This is an error state that a working application can guarantee it will never encounter. Though the VK_KHR_maintenance1 extension did add an error code for this circumstance: VK_ERROR_OUT_OF_POOL_MEMORY_KHR.
However, if you've screwed up your pool creation in some way, this can still happen. Of course, since there's no error code for it (outside of the extension), the implementation has to provide a different error code: either host or device memory exhaustion.
But again, this is a programming error on your part, not something you should ever expect to see with working code. In particular, even if you request that extension, do not keep allocating from a pool until it stops giving you memory. That's just bad coding.
For the fragmentation case, they do have an error code: VK_ERROR_FRAGMENTED_POOL. However, the Khronos Group screwed up: the first few releases of Vulkan didn't include this error code; it was added later. That means implementations from before this error code was added (and likely some from afterwards) had to pick an inappropriate error code to return - again, either host or device memory.
So you basically have to treat any failure of this function as fragmentation, a programming error (i.e., you asked for more stuff than you put into the pool), or something else. In all cases, it's not something you can recover from at runtime.
Since it appeared to work once, odds are good that you probably just allocated more stuff than the pool contains. So you should make sure that you add enough stuff to the pool before allocating from it.
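As a rough illustration of "add enough stuff to the pool" (not the poster's actual code; the create_pool helper, the descriptor types, and the counts are placeholders for whatever your set layouts actually contain), sizing the pool from the number of sets you intend to allocate could look like this:

#include <vulkan/vulkan.h>

/* Sketch: suppose each set uses one uniform buffer and one combined image sampler. */
static VkDescriptorPool create_pool(VkDevice device, uint32_t setCount)
{
    VkDescriptorPoolSize poolSizes[2] = {
        { VK_DESCRIPTOR_TYPE_UNIFORM_BUFFER,         setCount },
        { VK_DESCRIPTOR_TYPE_COMBINED_IMAGE_SAMPLER, setCount },
    };

    VkDescriptorPoolCreateInfo poolInfo = {0};
    poolInfo.sType         = VK_STRUCTURE_TYPE_DESCRIPTOR_POOL_CREATE_INFO;
    poolInfo.maxSets       = setCount;   /* total sets allocatable from this pool */
    poolInfo.poolSizeCount = 2;
    poolInfo.pPoolSizes    = poolSizes;

    VkDescriptorPool pool = VK_NULL_HANDLE;
    if (vkCreateDescriptorPool(device, &poolInfo, NULL, &pool) != VK_SUCCESS) {
        /* creation itself failed: genuine out-of-memory, not pool exhaustion */
        return VK_NULL_HANDLE;
    }

    /* Asking this pool for more than setCount sets, or for more descriptors of a
       type than declared in poolSizes, is the programming error described above. */
    return pool;
}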
The problem was that I had not allocated enough memory in the pool. I solved it by creating multiple pools, one for each descriptor set.

What high level synchronisation construct should be used for thread safe single shot method?

I have a situation where a session of background processing can finish by timing out, by the user asynchronously cancelling, or by the session completing. Any of those completion events can run a single-shot completion method. The completion method must only be run once. Assume that the session is an instance of an object, so any synchronisation must use instance constructs.
Currently I'm using an Atomic Compare and Swap operation on a completion state variable so that each event can test and set the completion state when it runs. The first completion event to fire gets to set the completed state and run the single shot method and the remaining events fail. This works nicely.
However, I can't help feeling that I should be able to do this in a higher-level way. I tried using a lock object (NSLock, as I'm writing this with Cocoa) but then got a warning that I was releasing a lock that was still in the locked state. That is what I want, of course: the lock gets locked once and never unlocked. But I was afraid that system resources representing the lock might get leaked.
Anyway, I'm just interested as to whether anyone knows of a more high level way to achieve a single shot method like this.
Sample code for any of the completion events:
if (OSAtomicCompareAndSwapInt(0, 1, &completed))
{
    self.completionCallback();
}
Doing a CAS is almost certainly the right thing to do. Locks are not designed for what you need; they are likely to be much more expensive and are semantically a poor match anyway -- the completion is not "locked", it is "done". A boolean flag is the right representation, and doing a CAS ensures that it is manipulated safely in concurrent scenarios. In C++ I'd use std::atomic_flag for this; maybe check whether Cocoa has anything similar (it just wraps the CAS in a nicer interface, so that you never accidentally use a non-CAS test on the variable, which would be racy).
(edit: in pthreads, there's a function called pthread_once which does what you want, but I wouldn't know about Cocoa; the pthread_once interface is quite unwieldy anyway, in my opinion...)
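For what it's worth, C11's <stdatomic.h> provides atomic_flag, which is the same wrapped-up CAS as std::atomic_flag and is usable from plain C or Objective-C. A minimal sketch of the single-shot pattern (the Session struct and all names are made up for illustration):

#include <stdatomic.h>

typedef struct {
    atomic_flag completed;         /* initialise with ATOMIC_FLAG_INIT when the session is created */
    void (*completionCallback)(void *context);
    void *context;
} Session;

/* Called from any completion path: timeout, user cancel, or normal finish.
   atomic_flag_test_and_set returns the previous value, so only the first
   caller sees false and runs the callback; later callers fall through. */
static void session_complete(Session *s)
{
    if (!atomic_flag_test_and_set(&s->completed))
        s->completionCallback(s->context);
}

pthread_once would also work per instance (a pthread_once_t member initialised with PTHREAD_ONCE_INIT), but its init routine takes no argument, so passing per-session context to it is awkward.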

Does g_strdup return NULL on memory allocation failure?

The GLib documentation lacks many important things that I think API documentation absolutely should include. For instance, the entry for g_malloc says nothing about the fact that it will abort upon memory allocation failure (in direct contrast to the behaviour of the standard malloc, which the name implies it mimics). Only if you happen to notice that there is also a variant named g_try_malloc, and read its description, will you be informed that g_try_malloc
Attempts to allocate n_bytes, and returns NULL on failure. Contrast
with g_malloc(), which aborts the program on failure.
Now for the question: GLib has a function g_strdup whose documentation also does not mention anything about possibly returning NULL. I assume that it will not, since it is implied that it uses g_malloc internally. Will it?
The documentation does say it, though. Check the introductory section to the "Memory Allocation" page in the GLib manual:
If any call to allocate memory fails, the application is terminated. This also means that there is no need to check if the call succeeded.
This goes for any library call that allocates memory, and therefore for g_strdup() too.
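A small sketch of the practical consequence (assuming a normal GLib build, where allocation failure terminates the program):

#include <glib.h>

int main(void)
{
    /* g_strdup() either returns a valid copy or the program has already been
       terminated inside GLib's allocator, so no NULL check is needed here.
       (g_strdup(NULL) does return NULL, but that is for NULL input, not for
       allocation failure.) */
    gchar *copy = g_strdup("hello");
    g_print("%s\n", copy);
    g_free(copy);

    /* If you want to handle allocation failure yourself, use the g_try_*
       variants, which return NULL instead of aborting. */
    gpointer buf = g_try_malloc(64);
    if (buf == NULL) {
        g_printerr("allocation failed\n");
        return 1;
    }
    g_free(buf);
    return 0;
}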

Good method to pass messages between embedded RTOS tasks (that can handle message timeouts gracefully)

I'm working with an embedded RTOS (CMX), but I think this applies to any embedded RTOS. I want to pass messages between various tasks. The problem is that one task sometimes 'locks' every other task out for a long period of time (several seconds).
Since I stop waiting for the message to be ACK'ed after ~100 ms or so, if I send a mailbox message during this time the sending task gives up and moves on, but the receiving task will still get the message and try to act on it. The problem is that the receiving task has a pointer to the message, and since the sending task has moved on, that pointer no longer points to a valid message, which can cause huge problems.
I have no method of removing messages once they are in the queue. How can I handle this error gracefully?
This question actually covers several different issues / points.
First of all, I'm wondering why one task hogs the CPU for seconds at a time sometimes. Generally this is an indication of a design problem. But I don't know your system, and it could be that there is a reasonable explanation, so I won't go down that rabbit hole.
So from your description, you are enqueueing pointers to messages, not copies of messages. Nothing inherently wrong with that. But you can encounter exactly the problem you describe.
There are at least 2 solutions to this problem. Without knowing more, I cannot say which of these might be better.
The first approach would be to pass a copy of the message, instead of a pointer to it. For example, VxWorks msg queues (not CMX queues obviously) have you enqueue a copy of the message. I don't know if CMX supports such a model, and I don't know if you have the bandwidth / memory to support such an approach. Generally I avoid this approach when I can, but it has its place sometimes.
The second approach, which I use whenever I can in such a situation, is to have the sender allocate a message buffer (usually from my own msg/buffer pools, usually a linked-list of fixed size memory blocks - but that is an implementation detail - see this description of "memory pools" for an illustration of what I'm talking about). Anyway -- after the allocation, the sender fills in the message data, enqueues a pointer to the message, and releases control (ownership) of the memory block (i.e., the message). The receiver is now responsible for freeing/returning the memory after reading the message.
There are other issues that could be raised in this question, for example what if the sender "broadcasts" the msg to more than one receiver? How do the receivers coordinate/communicate so that only the last reader frees the memory (garbage collection)? But hopefully from what you asked, the 2nd solution will work for you.
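A sketch of that second approach (generic C, not the CMX API; the pool layout and names are stand-ins for whatever your RTOS and message format require): the sender allocates a block from a fixed-size pool, fills it in, enqueues the pointer, and ownership passes to the receiver, which frees the block when done. Because the sender never touches the buffer after enqueueing, a timeout on the sender's side cannot invalidate what the receiver reads.

#include <stddef.h>

/* Fixed-size message blocks kept on a free list. */
typedef struct Msg {
    struct Msg *next_free;    /* free-list link, unused while the message is live */
    int         command;
    char        payload[32];
} Msg;

#define POOL_SIZE 16
static Msg  pool_storage[POOL_SIZE];
static Msg *free_list;

/* Call once at startup, before the scheduler runs. */
void msg_pool_init(void)
{
    free_list = NULL;
    for (size_t i = 0; i < POOL_SIZE; i++) {
        pool_storage[i].next_free = free_list;
        free_list = &pool_storage[i];
    }
}

/* In a real system, wrap these two in a critical section or mutex, since the
   sender and receiver run in different tasks. */
Msg *msg_alloc(void)
{
    Msg *m = free_list;
    if (m != NULL)
        free_list = m->next_free;
    return m;                 /* NULL means the pool is exhausted */
}

void msg_free(Msg *m)
{
    m->next_free = free_list;
    free_list = m;
}

/* Sender:   Msg *m = msg_alloc(); fill in *m; enqueue the pointer with your
             RTOS mailbox call; do NOT touch m again.
   Receiver: dequeue the pointer; act on the message; call msg_free(m) when done. */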