https://www.dropbox.com/s/lhsnm9u16gscsm8/Allocations%20instrument.png
Please check the screenshot at the link above.
In the Allocations instrument, under the reference count column, there is an entry after the autorelease and release entries where the reference count becomes 3 (the selected entry). I don't understand how the ref count went from 1 up to 3, since the ref count should increase or decrease by 1 at each step. This doesn't seem right. I have even checked the code using the stack trace, and I just don't get it. I'm a first-time user of the Instruments app and the Allocations instrument. The allocation/deallocation entries above the selected one are clear to me; the selected entry is where the problem starts. Am I interpreting this the wrong way, or what? Please assist.
Look at the # column on the left side. Because the list is displayed "By Group", some of the events are displayed out of order.
Change to "By Time" and you should see something like this:
 #  | Event       | RefCt
#25 | Retain      | 2
#26 | Autorelease |
#27 | Retain      | 3
I am building a work-logging app which starts by showing a list of projects that I can select, and then when one is selected you get a collection of other buttons, to log data related to that selected project.
I decided to have a selected_project : Maybe Int in my model (projects are keyed off an integer id), which gets filled with Just 2 if you select project 2, for example.
The buttons that appear when a project is selected send messages like AddMinutes 10 (i.e. log 10 minutes of work to the selected project).
Obviously the update function will receive one of these messages only if a project has been selected, but I still have to keep checking that selected_project is a Just p.
Is there any way to avoid this?
One idea I had was to have the buttons send a message which contains the project id, such as AddMinutes 2 10 (i.e. log 10 minutes of work to project 2). To some extent this works, but I now get a duplication -- the Just 2 in the model.selected_project and the AddMinutes 2 ... message that the button emits.
Update
As Simon notes, the repeated check that model.selected_project is a Just p has its upside: the model stays more decoupled from the UI. For example, there might be other UI paths for updating projects where no project needs to have been selected first.
To avoid having to check the Maybe each time you need a function which puts you into a context wherein the value "wrapped" by the Maybe is available. That function is Maybe.map.
In your case, to handle the AddMinutes Int message you can simply call: Maybe.map (functionWhichAddsMinutes minutes) model.selected_project.
Clearly, there's a little bit more to it since you have to produce a model, but the point is you can use Maybe.map to perform an operation if the value is available in the Maybe. And to handle the Maybe.Nothing case, you can use Maybe.withDefault.
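Here's a rough sketch of that shape; the minutes_by_project field and the addMinutesToProject helper are made-up placeholders, just so there is something concrete to update:

module Main exposing (..)

import Dict exposing (Dict)


-- Sketch only: minutes_by_project and addMinutesToProject are invented names,
-- standing in for however you actually record the logged time.

type alias Model =
    { selected_project : Maybe Int
    , minutes_by_project : Dict Int Int
    }


type Msg
    = AddMinutes Int


update : Msg -> Model -> Model
update msg model =
    case msg of
        AddMinutes minutes ->
            model.selected_project
                -- runs only when a project is actually selected
                |> Maybe.map (\projectId -> addMinutesToProject projectId minutes model)
                -- Nothing selected: keep the model unchanged
                |> Maybe.withDefault model


addMinutesToProject : Int -> Int -> Model -> Model
addMinutesToProject projectId minutes model =
    { model
        | minutes_by_project =
            Dict.update projectId
                (\logged -> Just (Maybe.withDefault 0 logged + minutes))
                model.minutes_by_project
    }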
At the end of the day, is this any better than using a case expression? Maybe, maybe not (pun intended).
Personally, I have used the technique of providing the ID along with the message and I was satisfied with the result.
I am monitoring the power of a laser and I want to know when n consecutive measurements are outside a safe range. I have a Queue(Of Double) which has n items (2 in my example) at the time it's being checked. I want to check that all items in the queue satisfy a condition, so I pass the items through a Count() with a predicate. However, the count function always returns the number of items in the queue, even if they don't all satisfy the predicate.
ElseIf _consecutiveMeasurements.AsEnumerable().Count(Function(m) m <= Me.CriticalLowLevel) = ConsecutiveCount Then
_ownedISetInstrument.Disable()
' do other things
A view of the debugger with the execution moving into the If.
Clearly, there are two measurements in the queue, and they are both greater than the CriticalLowLevel, so the count should be zero. I first tried Enumerable.Where(predicate).Count() and I got the same result. What's going on?
Edit:
Of course the values are below the CriticalLowLevel, which I had mistakenly set to 598 instead of 498 for testing. I had over-complicated the problem by focusing my attention on the code when it was my test case that was faulty. I guess I couldn't see the forest for the trees, as they say. Thanks, Eric, for pointing it out.
Based on your debug snapshot, it looks like both of your measurements are less than the critical level of 598.0, so I would expect the count to match the queue length.
Both data points are <= Me.CriticalLowLevel.
Can you share an example where one of the data points is > Me.CriticalLowLevel that still exhibits this behavior?
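In the meantime, here is a tiny standalone sketch (with made-up readings) showing that Count with a predicate counts only the items that satisfy it, not the whole queue:

' Standalone sketch with made-up readings: Count(predicate) counts only the
' items that satisfy the predicate, so it is not simply the queue length.
Imports System
Imports System.Collections.Generic
Imports System.Linq

Module CountSketch
    Sub Main()
        Dim criticalLowLevel As Double = 598.0
        Dim measurements As New Queue(Of Double)(New Double() {605.0, 610.0})

        ' Neither reading is at or below 598.0, so this prints 0.
        Console.WriteLine(measurements.Count(Function(m) m <= criticalLowLevel))

        ' After a low reading arrives, the same call prints 1.
        measurements.Enqueue(590.0)
        Console.WriteLine(measurements.Count(Function(m) m <= criticalLowLevel))
    End Sub
End Module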
While programming (for example, in C), a lot of variables are kept in the stack.
A stack is a LIFO data structure, and we can pop only the top value of the stack.
Let's say I have 100 variables stored on the stack, and I want to get the value of one of them, which is not at the top of the stack.
How do I get it? Does the operating system pop all the variables that are newer on the stack until it reaches the wanted one, and then push them all back?
Or is there a different way the operating system can access a variable inside the stack?
Thanks
The stack, as used in languages like C, is not a typical LIFO. It's called a stack because it is used in a way similar to a LIFO: When a procedure is called, a new frame is pushed onto the stack. The frame typically contains local variables and bookkeeping information like where to return to. Similarly, when a procedure returns, its frame is popped off the stack.
There's nothing magical about this. The compiler (not the operating system) allocates a register to be used as a stack pointer - let's call it SP. By convention, SP points to the memory location of the next free stack word:
+----------------+ (high address)
| argument 0 |
+----------------+
| argument 1 |
+----------------+
| return address |
+----------------+
| local 0 |
+----------------+
| local 1 |
+----------------+ +----+
| free slot | <-------------- | SP |
+----------------+ (low address) +----+
To push a value onto the stack, we do something like this (in pseudo-assembly):
STORE [SP], 42 ; store the value 42 at the address where SP points
SUB SP, 1 ; move down (the stack grows down!) to the next stack location
Where the notation [SP] is read as "the contents of the memory cell to which SP points". Some architectures, notably x86, provide a push instruction that does both the storing and subtraction. To pop (and discard) the n top values on the stack, we just add n to SP*.
Now, suppose we want to access the local 0 field above. Easy enough if our CPU has a base+offset addressing mode! Assume SP points to the free slot as in the picture above.
LOAD R0, [SP+2] ; load "local 0" into register R0
Notice how we didn't need to pop local 0 off the stack first, because we can reference any field using its offset from the stack pointer.
Depending on the compiler and machine architecture, there may be another register pointing to the area between locals and arguments (or thereabouts). This register, typically called a frame pointer, remains fixed as the stack pointer moves around.
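To tie this back to C, here is a purely illustrative example; the offsets mentioned in the comments are made up, since the actual frame layout is up to the compiler and ABI:

#include <stdio.h>

/* Purely illustrative: the compiler gives each local a fixed offset in the
   current frame (the exact layout is up to the compiler and ABI). Reading
   b below is a single offset load, roughly LOAD R0, [SP + offset_of_b] in
   the notation above; nothing newer on the stack is popped to reach it. */
static int middle_of_three(void)
{
    int a = 10;
    int b = 20;   /* the variable we want, not at the "top" of the stack */
    int c = 30;

    printf("frame also holds %d and %d\n", a, c);
    return b;
}

int main(void)
{
    printf("%d\n", middle_of_three());
    return 0;
}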
I want to stress the fact that normally, the operating system isn't involved in stack manipulation at all. The kernel allocates the initial stack, and possibly monitors its growth, but leaves the pushing and popping of values to the user program.
*For simplicity, I've assumed that the machine word size is 1 byte, which is why we subtract 1 from SP. On a 32-bit machine, pushing a word onto the stack means subtracting (at least) four bytes.
I. The operating system doesn't do anything directly with your variables.
II. Don't think of the stack as a physical stack (for the simple reason that it isn't one). The elements of the stack can be accessed directly, and the compiler generates code that does so. Google "stack pointer relative addressing".
My iOS application uses CoreData to store many things, and NSUserDefaults to store other things that CoreData would be overkill for.
I'm going to simplify this, so bear with me (for the sake of this example, let's use credit cards). When my application loads, it displays a list of saved credit cards. When each card is added, it is assigned a number from 1 to 10 (let's call it cardNum) that stays with the card for its lifetime.
So we add one card and assign it #1. We add another card and assign it #2. Another... #3. We can set a default card, which will be highlighted on launch, by tapping it. This saves that card's cardNum as cDefault in NSUserDefaults to keep track of it (effectively setting card X as the default card).
(crude drawing of a table view)
//cDefault is 2
------------
1 - Card1
------------
2 - Card2 (cDefault)
------------
3 - Card3
------------
4 - Card4
------------
So let's say we assign cDefault to 2 and then next week we delete that card. On deletion, I need to automatically assign one of the other cards to be a default card.
(crude drawing of table view AFTER card 2 is deleted - note how cDefault is still set...)
//cDefault is still 2
------------
1 - Card1
------------
3 - Card3
------------
4 - Card4
------------
So my question is: upon deletion, how do I set another card as cDefault? (For example, if this were an array, I would probably set cDefault to the cardNum of whatever card is at indexPath.row == 0, but I'm not sure whether I can do that using Core Data.)
Please ask as many questions as you need to. I tried to explain this to the best of my abilities but if I was unclear on anything please ask me.
Thank you so much in advance.
James
First, I don't understand why you keep this piece of information separate from your managed objects... I would define a BOOL property like "highlighted" on the Card class and handle everything in Core Data; it sounds more OOP to me :P
Second, I haven't tried it before, but you can subscribe to NSNotificationCenter and listen for NSManagedObjectContextDidSaveNotification. That notification contains a userInfo dictionary with a key named NSDeletedObjectsKey, which holds the set of deleted objects (cards). In your selector you can write the logic that updates the objects in the store (that is, set the "highlighted" property to YES for the first remaining card, for example). This approach is nice because it can be placed anywhere in your code; that said, the simplest and most obvious place to handle the update is right where you delete the card.
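I haven't compiled this against your model, but a sketch of the notification approach in Swift could look like the following; Card, its highlighted attribute, and the cardNum sort key are placeholder names based on your description, not your real model:

import CoreData

// Sketch only: Card, its highlighted attribute, and the cardNum sort key are
// placeholders based on the description above, not the actual model.
final class Card: NSManagedObject {
    @NSManaged var cardNum: Int16
    @NSManaged var highlighted: Bool
}

final class DefaultCardWatcher {
    private var observer: NSObjectProtocol?

    init(context: NSManagedObjectContext) {
        observer = NotificationCenter.default.addObserver(
            forName: .NSManagedObjectContextDidSave,
            object: context,
            queue: .main
        ) { note in
            // NSDeletedObjectsKey carries the set of objects removed by this save.
            guard let deleted = note.userInfo?[NSDeletedObjectsKey] as? Set<NSManagedObject>,
                  deleted.contains(where: { $0 is Card })
            else { return }

            // A card was deleted, so promote the lowest-numbered remaining card.
            let request = NSFetchRequest<Card>(entityName: "Card")
            request.sortDescriptors = [NSSortDescriptor(key: "cardNum", ascending: true)]
            request.fetchLimit = 1
            if let replacement = (try? context.fetch(request))?.first {
                replacement.highlighted = true
                // This follow-up save deletes nothing, so the guard above keeps
                // the handler from recursing.
                try? context.save()
            }
        }
    }

    deinit {
        if let observer = observer {
            NotificationCenter.default.removeObserver(observer)
        }
    }
}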
I'm getting some behavior with a container object's children I just don't understand.
I'm making four display objects children of an mx:Canvas object. When I call getChildren(), I see them all in order, right where I think they should be:
1
2
3
4
The fun begins when I call swapChildrenAt(0,1); that's supposed to swap the positions of 1 and 2, but instead I wind up with:
MYSTERY_OBJECT_OF_MYSTERY
2
3
4
So, where did 1 go? Why, it's at position -1, of course.
getChildAt(-1): 1
getChildAt(0): MYSTERY_OBJECT_OF_MYSTERY
getChildAt(1): 2
getChildAt(2): 3
getChildAt(3): 4
FWIW, MYSTERY_OBJECT_OF_MYSTERY is a 'border'. Don't know how it got there.
Regardless, I find it baffling that getChildAt() and swapChildrenAt() are apparently using different starting indexes. Can anybody shed some light on this behavior?
You appear to be passing the indexes of the display objects instead of passing the display objects at those locations themselves.
The official documentation gives the signature
"swapChildren(child1:DisplayObject, child2:DisplayObject):void", so the indexes of the display objects themselves can't be used.
I hope this solves your problem.