I have an application running on VxWorks 5.5.1. It basically allocates an STL map data structure, but in some cases my main task crashes.
When I monitor it in the debugger, the allocated size for an STL-map-type variable has the value of another task's ID instead of 8, so it looks like a task ID and a variable are getting mixed up. The problem occurs after consecutive starts and stops of the main task.
Could it be that a task's TCB is overwriting an area of memory?
Regards
It's solved. It was because of a custom debug function.
Thanks for spending time on this.
vkUpdateDescriptorSets updates the descriptor instantly, so it seems to me that it's doing a write across PCI-E into device-local memory. I think this is what's happening, as there's no concept of the descriptor writes becoming available "by the next command"; they take effect immediately, so a write like that must be taking place. My question is: if you have a case where you do:
vkUpdateDescriptorSets();
vkCmdDraw();
And then submit the command buffer to a queue, you're essentially doing the transfer/write to GPU twice. However, since you generally don't need the descriptors updated until you submit further commands, I'm wondering whether we can have greater control over this. I've seen there's an extension, vkCmdPushDescriptorSetKHR, which I think does what I want, but according to this answer AMD drivers don't support it.
Is my understanding of this right? I mean, for the descriptor writes to take effect immediately, what must essentially be happening is an extra host-GPU synchronisation, right?
And then submit the command buffer to a queue, you're essentially doing the transfer/write to GPU twice.
The result of that depends on whether you're using descriptor indexing or not. If you're not using VK_DESCRIPTOR_BINDING_UPDATE_AFTER_BIND_BIT on the binding(s) being modified, then you have undefined behavior. Outside of the protections provided by that flag, you are specifically disallowed from modifying any descriptor set which is bound to any command buffer until all CBs in which the descriptor set is bound have completed execution.
So, does this mean that updating a descriptor set performs some kind of immediate memory transfer to GPU-accessible memory? Or does binding the descriptor set cause the transfer? Or does something else entirely happen?
It doesn't matter. Unless you're using descriptor indexing, you are not allowed to modify a bound descriptor set. And if you are using descriptor indexing, you still cannot modify a descriptor set that's being used. It simply changes the definition of "used" to "is part of a command buffer that was submitted." Exactly how those changes happen is up to the implementation.
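If you do need to modify a set that is already bound, the binding has to opt in to update-after-bind. Below is a minimal sketch, assuming Vulkan 1.2 core names (VK_EXT_descriptor_indexing uses the same structures with an EXT suffix) and a single uniform-buffer binding chosen purely for illustration; the matching descriptorBinding*UpdateAfterBind device feature must also be enabled.

#include <vulkan/vulkan.h>

// Sketch: create a descriptor set layout whose binding 0 may be updated
// after the set has been bound in a command buffer, as long as that
// command buffer has not yet been submitted (the descriptor-indexing rules).
VkDescriptorSetLayout makeUpdateAfterBindLayout(VkDevice device)
{
    VkDescriptorSetLayoutBinding binding = {0};
    binding.binding         = 0;
    binding.descriptorType  = VK_DESCRIPTOR_TYPE_UNIFORM_BUFFER;
    binding.descriptorCount = 1;
    binding.stageFlags      = VK_SHADER_STAGE_ALL;

    VkDescriptorBindingFlags bindingFlags = VK_DESCRIPTOR_BINDING_UPDATE_AFTER_BIND_BIT;

    VkDescriptorSetLayoutBindingFlagsCreateInfo flagsInfo = {0};
    flagsInfo.sType         = VK_STRUCTURE_TYPE_DESCRIPTOR_SET_LAYOUT_BINDING_FLAGS_CREATE_INFO;
    flagsInfo.bindingCount  = 1;
    flagsInfo.pBindingFlags = &bindingFlags;

    VkDescriptorSetLayoutCreateInfo layoutInfo = {0};
    layoutInfo.sType        = VK_STRUCTURE_TYPE_DESCRIPTOR_SET_LAYOUT_CREATE_INFO;
    layoutInfo.pNext        = &flagsInfo;
    layoutInfo.flags        = VK_DESCRIPTOR_SET_LAYOUT_CREATE_UPDATE_AFTER_BIND_POOL_BIT;
    layoutInfo.bindingCount = 1;
    layoutInfo.pBindings    = &binding;

    VkDescriptorSetLayout layout = VK_NULL_HANDLE;
    vkCreateDescriptorSetLayout(device, &layoutInfo, NULL, &layout);
    return layout;
}
// Note: the pool the set is allocated from must also be created with
// VK_DESCRIPTOR_POOL_CREATE_UPDATE_AFTER_BIND_BIT.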
I am currently reading the latest JVM specification. It is clear that each thread has its own call stack and its own program counter that keeps track of the (next) instruction to execute.
My question may be dumb, but from the description I cannot find an answer.
Where is the current program counter stored when a method is invoked?
In other terms, how does the thread know where to continue after the invocation of a method?
The answer is implementation-dependent as different hardware architectures and even different JVMs may implement this behavior in different ways. In the standard Oracle JVM, most of your bytecode will be compiled to native code by JIT (Just in Time compiler) and method calls will be executed as for native code (give or take some extra code which may be added to handle checkpointing etc.). On a PC this means that current register values, including the instruction pointer / program counter will be saved on the stack before a method call. When returning from the call, the processor pops these values from the stack, among them the return address.
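As a plain-C illustration of that native mechanism (nothing JVM-specific; __builtin_return_address is a GCC/Clang extension), the callee can read the saved return address, which is exactly the point in the caller where execution resumes:

#include <stdio.h>

/* On a typical PC calling convention the CALL instruction pushes the
 * address of the next instruction in the caller, and RET pops it and
 * jumps there. __builtin_return_address(0) reads that saved address. */
__attribute__((noinline))
static void callee(void)
{
    printf("caller will resume at %p\n", __builtin_return_address(0));
}

int main(void)
{
    callee();   /* after callee returns, execution continues here */
    return 0;
}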
I was running a test to make sure objects are being deallocated properly by wrapping the relevant code section in a 10-second-long while loop. I ran the test in Debug and Release configurations with different results.
Debug (Build & Run in simulator):
Release (Build & Run on device, and Profile using Instruments):
The CPU spikes signify where objects are created and destroyed (there are 3 in each run). Notice how in the Debug build the memory usage rises gradually during the busy loop and then settles a little afterwards at a higher base level; this happens with each loop iteration. In the Release build it stays constant the whole time. At the end, after 3 runs, the memory usage of the Debug build is significantly higher than that of the Release build. (The CPU spikes are offset on the time axis relative to each other, but that's just because I pressed the button that triggers the loop at different times.)
The inner loop code in question is very simple and basically consists of a bunch of correctly paired malloc and free statements as well as a bunch of retain and release calls (courtesy of ARC, also verified as correctly paired).
Any idea what is causing this behaviour?
In Release builds ARC will do its best to keep objects out of the autorelease pool. It does this with the objc_autoreleaseReturnValue / objc_retainAutoreleasedReturnValue hand-off, which is checked for at runtime.
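A minimal sketch of the pattern this optimisation targets (the Widget class and +widget factory are made-up names used only for illustration):

#import <Foundation/Foundation.h>

@interface Widget : NSObject
+ (instancetype)widget;   // hypothetical convenience constructor
@end

@implementation Widget
+ (instancetype)widget
{
    // Under ARC the callee returns through objc_autoreleaseReturnValue and
    // the caller receives through objc_retainAutoreleasedReturnValue; when
    // the runtime detects that hand-off it elides the autorelease/retain
    // pair, so the object need never enter the autorelease pool.
    return [[Widget alloc] init];
}
@end

int main(void)
{
    @autoreleasepool {
        Widget *w = [Widget widget];   // hand-off can bypass the pool entirely
        NSLog(@"%@", w);
    }
    return 0;
}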
A lot of Cocoa Touch classes use caching to improve performance. The amount of memory used for caching data can vary depending on total memory, available memory, and probably some other things. Since you are comparing results from the simulator (running on your Mac) with results from the device, it is not strange that you get different numbers.
Some examples of classes/methods that use caching:
+(UIImage *)imageNamed:(NSString *)name
Discussion
This method looks in the system caches for an image object with the specified name and returns that object if it exists. If a matching image object is not already in the cache, this method loads the image data from the specified file, caches it, and then returns the resulting object.
NSURLCache
The NSURLCache class implements the caching of responses to URL load requests by mapping NSURLRequest objects to NSCachedURLResponse objects. It provides a composite in-memory and on-disk cache.
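If URL caching in particular is suspected, one way to bound its contribution is to install a shared cache with explicit limits early at startup. A small sketch, with arbitrary example capacities and a made-up helper name:

#import <Foundation/Foundation.h>

// Sketch: cap the shared URL cache (sizes are arbitrary examples).
// Call this early, e.g. from application:didFinishLaunchingWithOptions:.
static void installBoundedURLCache(void)
{
    NSURLCache *cache =
        [[NSURLCache alloc] initWithMemoryCapacity:4 * 1024 * 1024    // 4 MB in memory
                                      diskCapacity:20 * 1024 * 1024   // 20 MB on disk
                                          diskPath:@"urlcache"];
    [NSURLCache setSharedURLCache:cache];
}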
For one thing, Release builds optimize the code and strip debugging information from it. As a result, the application package is significantly smaller, and less memory is needed to load it.
I suppose most of the extra memory used in Debug builds is the actual debugging information, zombie tracking, etc.
Here's my scenario: I have a thread running heavy calculations, saving the results via Core Data at the end. The thread is set up with its own autorelease pool and its own NSManagedObjectContext created within the thread. The user can change the inputs to the calculation on the main thread, in which case the calculation thread is terminated (via regular checks of an NSLock-protected variable) and relaunched with the new inputs. Everything is working fine here.
For performance reasons, the calculation thread doesn't have an undo manager and there is only one context save at the very end. If a termination command is detected I don't want to save the results. Right now I'm just skipping the context save and releasing it, which seems to work fine.
I noticed, however, that there's a reset method on NSManagedObjectContext. Apple's documentation on this method isn't very helpful to me. It simply states that it returns the receiver's contents to its base state and that all of the receiver's managed objects are "forgotten".
What does that mean? Is it the equivalent to reverting to the last saved version? Is an undo manager required for proper operation of this method? Any reason I should use this method instead of what I'm doing now?
It sounds like you are using the context to cache changes independently of the context on the main thread, and if you don't want those changes to be recorded, you just throw them out by discarding the "local" context. This is good enough for the scenario you are describing. -reset might be useful if you didn't want to relaunch the background thread but just wanted to start over on the same thread (and context) with new inputs. Since you launch a new thread (and thus create a new NSManagedObjectContext on it), -reset is probably not very useful to you in this scenario. You are already pretty much doing it as Apple recommends in several of their code samples.
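For completeness, a minimal sketch of what the end of the calculation could look like if you did keep one context around; `cancelled` stands in for the NSLock-protected flag described in the question:

#import <CoreData/CoreData.h>

// Sketch: end-of-calculation handling on the background thread.
static void finishCalculation(NSManagedObjectContext *context, BOOL cancelled)
{
    if (cancelled) {
        // Forget everything pending in this context; nothing reaches the store.
        [context reset];
    } else {
        NSError *error = nil;
        if (![context save:&error]) {
            NSLog(@"Save failed: %@", error);
        }
    }
}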
I am using the garbage collector in my Cocoa-based application on Mac OS X. It has hundreds of threads running, and synchronization is done using an operation queue.
After a long run, one of the objects gets collected as garbage and the application crashes.
Checking whether the object is non-nil also fails, as the object is invalid and contains a garbage value. Calling a method on the object leads to a crash.
Can anyone please help me debug this issue?
Thank you.
I am using the garbage collector in my Cocoa-based application on Mac OS X. It has hundreds of threads running, and synchronization is done using an operation queue.
More likely than not, the bug lies within the seemingly rather overly concurrent nature of your code. Running 100s of threads on a machine with "only" double digits worth of cores (if that) is unlikely to be very efficient and, of course, keeping everything properly in sync is going to be rather difficult.
The best place to start is to turn on malloc stack logging (e.g. by setting the MallocStackLogging environment variable) and use malloc_history on the address that went south to find out what events occurred there.
Also, there were fixes in 10.6.5 that impacted GC correctness.
If you can change the code of the garbage-collected object, then override the finalize method like this to get some information:
- (void)finalize
{
    NSLog(@"Finalizing!\n%@", [[NSThread callStackSymbols] componentsJoinedByString:@"\n"]);
    // If you put a breakpoint here, you can check what still references this object.
    [super finalize];
}