Why does biased locking use a different mark word than lightweight locking?

When one thread holds a biased lock, the mark word contains that thread's ID.
But for lightweight locking, the mark word is a pointer into the owning thread's stack. Why not still store the thread ID?

The thin-lock scheme assumes that the mark word of a locked object points to a stack slot of the frame that has locked this object. This stack slot stores the original object header (aka the displaced header).
Unlocked:
[ orig_header | 001 ]       | Stack frame |
                            |             |
Locked:                     |             |
[ stack_ptr | 000 ]         |             |
     |                      |-------------|
     ---------------------->| orig_header |
                            |-------------|
                            |             |
                            |             |
                             -------------
Obviously the stack slot carries more information than a thread ID, since you can derive the thread ID from a stack slot, but not vice versa.
Unlike the biased scheme, where the unlock operation is effectively a no-op, thin locks need to restore the original header when an object is unlocked. This becomes very simple since the mark word already points to a stack slot holding the original value.
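To make the displaced-header round trip concrete, here is a toy Python model of the thin-lock scheme described above (purely illustrative: a real mark word is a bit-packed header word, and these names are made up, not HotSpot's):

class Obj:
    def __init__(self):
        self.header = ('orig_header', 0b001)   # unlocked mark word

class Frame:
    def __init__(self):
        self.slots = []                        # lock records live in the frame

def thin_lock(obj, frame):
    frame.slots.append(obj.header)             # save the displaced header
    obj.header = ('stack_ptr', frame, len(frame.slots) - 1)  # mark word -> stack slot

def thin_unlock(obj):
    _, frame, slot = obj.header
    obj.header = frame.slots[slot]             # restore the original header

o, f = Obj(), Frame()
thin_lock(o, f)     # the owner is identified by the frame (hence the thread)
thin_unlock(o)      # unlock needs no thread bookkeeping at all
assert o.header == ('orig_header', 0b001)

Note how unlock only needs the pointer stored in the mark word, which is exactly the property the answer relies on.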

My understanding is that this ensures the identity hash code can still work when using lightweight/heavyweight locking.
When using biased locking, if you call System.identityHashCode(), the biased lock will inflate to a heavyweight lock in order to store the hash code in the header word.
If lightweight/heavyweight locking also used the thread ID, there would be no place to store the hash code:
the hash code and biased locking are mutually exclusive.


What justifies the use of coroutines instead of subroutines?

I am currently exploring coroutines in Python, but I have difficulty determining when to use them over normal subroutines. I am trying to explain my problem with the help of an example.
The task is to iterate over tabular data and format the content of the row according to its entry in the type field. When formatted, the result is written to the same output file.
index | type | time | content
------+------+------+--------
  0   |  A   |   4  | ...
  1   |  B   |   6  | ...
  2   |  C   |   9  | ...
  3   |  B   |  11  | ...
  ...
Normally, I would check the type, write some sort of switch/case, and delegate the data to a specific subroutine (== function) like so:
outfile = open('test.txt', 'w')         # open for writing, not reading
for row in infile:                      # infile yields parsed rows
    if row.type == 'A':                 # Python's stand-in for switch/case
        format_a(row.content, outfile)  # subroutine that formats and writes data of type A
    elif row.type == 'B':
        format_b(row.content, outfile)  # same for type B...
    elif row.type == 'C':
        format_c(row.content, outfile)  # ... and type C
    else:
        raise ValueError(row.type)      # handle unknown type
outfile.close()
The question is: would I get any benefit from realizing this with coroutines? I don't think so, and let me explain why: once I have determined the type of a row, I would pass the content to the respective coroutine. As long as I do this, the other coroutines and the calling function are paused. The coroutine formats the data, writes it to the file, and passes control back to the calling function, which gets the next row, and so on. This procedure repeats until I run out of rows. Therefore, this is exactly what the workflow using subroutines would look like.
One pro for using coroutines here would be if I had to keep track of some state. Maybe I am interested in the time difference to the last row per type. In this case, the coroutine function for B would save the time of its first call (6). When it is called the second time, it retrieves the value (11) and can calculate the difference (11 - 6 = 5). This would be way harder to do with subroutines.
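For what it's worth, here is a minimal sketch of that stateful scenario with generator-based coroutines (the rows mirror the table above; the output format and names like tracker are mine):

from collections import namedtuple

Row = namedtuple('Row', 'type time content')
rows = [Row('A', 4, '...'), Row('B', 6, '...'),
        Row('C', 9, '...'), Row('B', 11, '...')]

def tracker(outfile, label):
    # Generator-based coroutine: remembers the time of its previous call.
    last = None
    while True:
        time, content = yield                  # suspend until the next row
        delta = None if last is None else time - last
        outfile.write(f'{label} {content} dt={delta}\n')
        last = time                            # state survives between sends

with open('test.txt', 'w') as outfile:
    coros = {}
    for row in rows:
        if row.type not in coros:
            coros[row.type] = tracker(outfile, row.type)
            next(coros[row.type])              # prime the coroutine
        coros[row.type].send((row.time, row.content))

On the second B row this writes dt=5 (11 - 6) with no external state table: each coroutine's locals persist across send() calls.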
But is the argument of keeping track of some state the only reason for using coroutines? I am looking for a rule-of-thumb, not a rule that covers every case possible.

Ref count issue in Allocations instrument

https://www.dropbox.com/s/lhsnm9u16gscsm8/Allocations%20instrument.png
Check this link.
In the Allocations instrument, under the reference-count column, there is an entry after the autorelease and release entries where the reference count becomes 3 (the selected entry). I don't understand how the ref count went from 1 up to 3, since the ref count increases or decreases by 1 at each step. This doesn't seem right. I have even checked the code using the stack trace. I just don't get it. I'm a first-time user of the Instruments app and the Allocations instrument. The allocation/deallocation entries above the selected one are clear to me; the selected entry is where the problem starts. Am I interpreting this the wrong way? Please assist.
Look at the # column on the left side. Because the list is displayed "By Group", some of the events are displayed out of order.
Change to "By Time" and you should see something like that:
#25 | Retain | 2
#26 | Autorelease |
#27 | Retain | 3

Storing things in isa

The 64-bit runtime took away the ability to directly access the isa field of an object, something Clang engineers had been warning us about for a while. It has been replaced by a rather inventive (and magic) set of ever-changing ABI rules about which sections of the newly christened isa header contain information about the object, or even other state (in the case of NSNumber/NSString). There seems to be a loophole, in that you can opt out of the new "magic" isa and use one of your own (a raw isa) at the expense of taking the slow road through certain runtime code paths.
My question is twofold, then:
If it's possible to opt out and object_setClass() an arbitrary class into an object in +allocWithZone:, is it also possible to put anything up there in the extra space with the class, or will the runtime try to read it through the fast paths?
What exactly in the isa header is tagged to let the runtime differentiate it from a normal isa?
If it's possible to opt out and object_setClass() an arbitrary class into an object in +allocWithZone:
According to this article by Greg Parker
If you override +allocWithZone:, you may initialize your object's isa field to a "raw" isa pointer. If you do, no extra data will be stored in that isa field and you may suffer the slow path through code like retain/release. To enable these optimizations, instead set the isa field to zero (if it is not already) and then call object_setClass().
So yes, you can opt out and manually set a raw isa pointer. To inform the runtime about this, you have to set the least significant bit of the isa to 0 (see below).
Also, there's an environment variable that you can set, named OBJC_DISABLE_NONPOINTER_ISA, which is pretty self-explanatory.
is it also possible to put anything up there in the extra space with the class, or will the runtime try to read it through the fast paths?
The extra space is not being wasted. It's used by the runtime for useful in-place information about the object, such as the current state and - most importantly - its retain count (this is a big improvement since it used to be fetched every time from an external hash table).
So no, you cannot use the extra space for your own purposes, unless you opt out (as discussed above). In that case the runtime will go through the long path, ignoring the information contained in the extra bits.
Again according to Greg Parker's article, here's the new layout of the isa (note that this is very likely to change over time, so don't rely on it):
(LSB)
1 bit | indexed | 0 is raw isa, 1 is non-pointer isa.
1 bit | has_assoc | Object has or once had an associated reference. Object with no associated references can deallocate faster.
1 bit | has_cxx_dtor | Object has a C++ or ARC destructor. Objects with no destructor can deallocate faster.
30 bits | shiftcls | Class pointer's non-zero bits.
9 bits | magic | Equals 0xd2. Used by the debugger to distinguish real objects from uninitialized junk.
1 bit | weakly_referenced | Object is or once was pointed to by an ARC weak variable. Objects not weakly referenced can deallocate faster.
1 bit | deallocating | Object is currently deallocating.
1 bit | has_sidetable_rc | Object's retain count is too large to store inline.
19 bits | extra_rc | Object's retain count above 1. (For example, if extra_rc is 5 then the object's real retain count is 6.)
(MSB)
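To make the table concrete, here is a hedged Python sketch that unpacks those fields from a 64-bit isa value (names and widths are copied verbatim from the layout above, which is likely to change between releases; the helper names are mine):

FIELDS = [                     # (name, width in bits), LSB first
    ('indexed',           1),  # 0 = raw isa, 1 = non-pointer isa
    ('has_assoc',         1),
    ('has_cxx_dtor',      1),
    ('shiftcls',         30),  # class pointer's non-zero bits
    ('magic',             9),  # 0xd2 for real objects
    ('weakly_referenced', 1),
    ('deallocating',      1),
    ('has_sidetable_rc',  1),
    ('extra_rc',         19),  # retain count above 1
]

def decode_isa(word):
    fields, shift = {}, 0
    for name, width in FIELDS:
        fields[name] = (word >> shift) & ((1 << width) - 1)
        shift += width
    return fields

def is_nonpointer_isa(word):
    return bool(word & 1)      # the raw-vs-rich check is just the low bit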
What exactly in the isa header is tagged to let the runtime differentiate it from a normal isa?
As mentioned above, you can discriminate between a raw isa and the new rich isa by looking at the least significant bit.
To wrap it up: while it may look feasible to opt out and start messing with the extra bits available on a 64-bit architecture, I personally discourage it. The new isa layout is carefully crafted to optimize runtime performance, and it's far from guaranteed to stay the same over time.
Apple may also decide in the future to drop backward compatibility with the raw isa representation, preventing the opt-out. Any code assuming the isa to be a pointer would then break.
You can't safely do this, since if (when, really) the usable address space expands beyond 33 bits, the layout will presumably need to change again. Currently though, the bottom bit of the isa controls whether it's treated as having extra info or not.

How to get a variable that is stored on the stack?

While programming (for example, in C), a lot of variables are kept on the stack.
A stack is a LIFO data structure, and we can pop only the top value of the stack.
Let's say I have 100 variables stored on the stack, and I want to get the value of one of them, which is not at the top of the stack.
How do I get it? Does the operating system pop all the variables that are newer on the stack until it reaches the wanted variable, then push all of them back?
Or is there a different way the operating system can access a variable inside the stack?
Thanks
The stack, as used in languages like C, is not a typical LIFO. It's called a stack because it is used in a way similar to a LIFO: When a procedure is called, a new frame is pushed onto the stack. The frame typically contains local variables and bookkeeping information like where to return to. Similarly, when a procedure returns, its frame is popped off the stack.
There's nothing magical about this. The compiler (not the operating system) allocates a register to be used as a stack pointer - let's call it SP. By convention, SP points to the memory location of the next free stack word:
+----------------+ (high address)
|   argument 0   |
+----------------+
|   argument 1   |
+----------------+
| return address |
+----------------+
|    local 0     |
+----------------+
|    local 1     |
+----------------+                 +----+
|   free slot    | <-------------- | SP |
+----------------+ (low address)   +----+
To push a value onto the stack, we do something like this (in pseudo-assembly):
STORE [SP], 42 ; store the value 42 at the address where SP points
SUB SP, 1 ; move down (the stack grows down!) to the next stack location
Where the notation [SP] is read as "the contents of the memory cell to which SP points". Some architectures, notably x86, provide a push instruction that does both the storing and subtraction. To pop (and discard) the n top values on the stack, we just add n to SP*.
Now, suppose we want to access the local 0 field above. Easy enough if our CPU has a base+offset addressing mode! Assume SP points to the free slot as in the picture above.
LOAD R0, [SP+2] ; load "local 0" into register R0
Notice how we didn't need to pop local 0 off the stack first, because we can reference any field using its offset from the stack pointer.
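If it helps, the same idea can be sketched in Python, treating memory as a plain array and SP as an index into it (word size 1, as in the footnote below):

memory = [0] * 8
SP = len(memory) - 1           # SP points at the next free slot

def push(value):
    global SP
    memory[SP] = value         # STORE [SP], value
    SP -= 1                    # SUB SP, 1 -- the stack grows down

# Build the frame from the picture: two arguments, a return address,
# then two locals.
for word in ['arg 0', 'arg 1', 'ret addr', 'local 0', 'local 1']:
    push(word)

local0 = memory[SP + 2]        # LOAD R0, [SP+2] -- no popping needed
assert local0 == 'local 0'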
Depending on the compiler and machine architecture, there may be another register pointing to the area between locals and arguments (or thereabouts). This register, typically called a frame pointer, remains fixed as the stack pointer moves around.
I want to stress the fact that normally, the operating system isn't involved in stack manipulation at all. The kernel allocates the initial stack, and possibly monitors its growth, but leaves the pushing and popping of values to the user program.
*For simplicity, I've assumed that the machine word size is 1 byte, which is why we subtract 1 from SP. On a 32-bit machine, pushing a word onto the stack means subtracting (at least) four bytes.
I. The operating system doesn't do anything directly with your variables.
II. Don't think of the stack as a physical stack (the reason for this is quite simple: it isn't one). The elements of the stack can be accessed directly, and the compiler generates code that does so. Google "stack pointer relative addressing".

Should classes be grouped by functionality type or by model?

I know the title probably isn't too clear because I probably don't have the right terminology, but an example should make it clear. Suppose I have an app with posts and comments. What would be the best practice for grouping those into namespaces/packages, out of the various possible ways? If there's no single best way, what are the advantages and disadvantages of each? Here are a couple of different ways I envisioned; note this is in no way exhaustive, it's just to get the point across:
1)
MyAp
|--Entities
| |--AbstractEntity.class
| |--Comment.class
| |--Post.class
|--DataMappers
| |--AbstractDataMapper.class
| |--CommentDataMapper.class
| |--PostDataMapper.class
|--Services
| |--AbstractService.class
| |--CommentService.class
| |--PostService.class
2)
MyAp
|--Abstract
| |--AbstractDataMapper.class
| |--AbstractEntity.class
| |--AbstractService.class
|--Impl
| |--Comment
| | |--Comment.class
| | |--CommentDataMapper.class
| | |--CommentService.class
| |--Post
| | |--Post.class
| | |--PostDataMapper.class
| | |--PostService.class
With a big project, you could break either one of the methods above into broader groups. For example, for #1 you could have Db, Util, System, etc. beneath your Entities, DataMappers, and Services namespaces and place class implementations in there, while keeping the AbstractEntity class under the Entities namespace. For #2, you could do the same, putting those additional namespaces under Abstract and Impl.
I'm leaning towards #1 being better; with #2 it seems I would have to add the additional Db, Util, System, etc. namespaces in two different places. But #2 has the appeal of keeping all classes related to one model together. I can't make up my mind!
I'd say there's something wrong with both approaches.
Most developers tend to break classes up by their main specialization: mappers should go with mappers, models with models, helpers with helpers, interfaces with interfaces, we think at first. That can be an easy decision at the beginning of a project, but it causes pain as time passes, and it looks rather silly at times, especially when you need to extract a certain piece of functionality into a separate component.
From my experience I can say that you should group classes by their high-level function, or 'sub-system', or, as DDD now puts it, 'bounded context'. At the same time, there shouldn't be very many grouping levels.
So, as I see it, all of your entities belong to a Posting context. It may look strange, but I'd suggest that you put all of these classes into a Posting folder and do not create extra subfolders unless you have a very specific functionality area within that context.
MyAp
|--Core
|--AbstractEntity.class
|--AbstractDataMapper.class
|--AbstractService.class
|--Posting
|--Comment.class
|--Post.class
|--CommentDataMapper.class
|--PostDataMapper.class
|--CommentService.class
|--PostService.class
In general, your second approach looks similar. With it you can easily add more and more context-specific folders: 'Voting', 'Notifications', 'Authentication', etc. I'd also suggest choosing the simplest arrangement and waiting until you have some 'critical mass' of classes, so you have enough information about how to group them correctly.
With the first approach, your domain's contexts would be spread across all the folders.
In my experience I've seen the first FAR more than I've seen the latter (I don't think I've ever seen a project divided in the second way, actually).
Example: let's say you've got an abstract class that uses the Hollywood pattern. All implementing classes would reasonably be in the same package. It doesn't make any sense to have the "master" template off in an "Abstract" package so far from the actual implementation.
The one other thing I would add is SINGULAR IN ALL CASES EXCEPT COLLECTIONS.
MyAp
|--Entity
| |--AbstractEntity.class
| |--Comment.class
| |--Post.class
|--DataMapper
| |--AbstractDataMapper.class
| |--CommentDataMapper.class
| |--PostDataMapper.class
|--Service
| |--AbstractService.class
| |--CommentService.class
| |--PostService.class
Some frameworks, like JAXB, put configuration information into a package-info.java. In that case the first approach is mandatory in order to be able to use package-info.java.