Netty: why use a volatile field defaultFactory in InternalLoggerFactory of the Netty source code? - singleton

While reading the Netty source code, I became confused.
There is a volatile field defaultFactory in the class InternalLoggerFactory. In my opinion, if the goal is to implement a singleton, why does the getDefaultFactory method have neither the synchronized keyword nor double-checked locking? Only the volatile keyword is used here.

This is because volatile gives good enough protection here while being very cheap (especially on x86). A volatile read or write of a reference is atomic and immediately visible to all threads, and the happens-before guarantee of volatile means the factory is fully constructed before its reference becomes visible, so readers can never observe a partially initialized object. Since nothing more than safe publication of a single reference is needed here, synchronized or double-checked locking would only add overhead.


Can the delete operator be used instead of the Marshal.FreeHGlobal method to free memory from a wchar_t*?

I read the official documentation for the StringToHGlobalUni method, and it says that the FreeHGlobal method should be used to free the memory.
The memory allocated by the StringToHGlobalUni method is on the native heap, so I don't see why the delete operator could not be used.
I searched a lot, but I couldn't find any explanation of why I can't use the delete operator.
I'm new to this and some explanation would help me. Can the delete operator be used or not?
Code
const wchar_t* filePath = (const wchar_t*)(Marshal::StringToHGlobalUni(inputFilePath)).ToPointer();
Marshal::FreeHGlobal(IntPtr((void*)filePath));
Don't use Marshal::StringToHGlobalUni in C++/CLI.
Use either
PtrToStringChars (accesses Unicode characters in-place, no allocation) or
marshal_as<std::wstring> (manages the allocation with a smart pointer class that will free it correctly and automatically)
Apart from the fact that StringToHGlobalUni requires you to free the memory manually, the name of that function is completely misleading. It has absolutely no connection to HGLOBAL whatsoever.
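For illustration, here is a minimal C++/CLI sketch of the marshal_as approach, assuming compilation with /clr; the wrapper function OpenNativeFile is hypothetical and only shows the usage pattern:
#include <msclr/marshal_cppstd.h>  // marshal_as specializations for std::wstring
#include <string>

void OpenNativeFile(System::String^ inputFilePath)
{
    // marshal_as copies the managed string into a std::wstring; the wstring
    // owns its buffer and releases it automatically when it goes out of scope,
    // so no FreeHGlobal (and no delete) is needed.
    std::wstring filePath = msclr::interop::marshal_as<std::wstring>(inputFilePath);

    // Pass filePath.c_str() wherever a const wchar_t* is expected.
}
With PtrToStringChars you would instead pin the System::String and read its characters in place, which avoids the copy altogether.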

Variable Encapsulation in Case Statement

While modifying an existing program's CASE statement, I had to add a second block where some logic is repeated to set NetWeaver portal settings. This is done by setting values in a local variable, then assigning that variable to a CHANGING parameter. I copied over the code and did a Pretty Print, expecting the compiler to complain about the unknown variable. To my surprise, however, this code actually compiles just fine:
CASE i_actionid.
  WHEN 'DOMIGO'.
    DATA: ls_portal_actions TYPE powl_follow_up_sty.
    CLEAR ls_portal_actions.
    ls_portal_actions-bo_system = 'SAP_ECC_Common'.
    " [...]
    c_portal_actions = ls_portal_actions.
  WHEN 'EBELN'.
    ls_portal_actions-bo_system = 'SAP_ECC_Common'.
    " [...]
    C_PORTAL_ACTIONS = ls_portal_actions.
ENDCASE.
As I have seen in every other programming language, the DATA: declaration in the first WHEN statement should be encapsulated and available only inside that switch block. Does SAP ignore this encapsulation to make that value available in the entire CASE statement? Is this documented anywhere?
Note that this code compiles just fine and double-clicking the local variable in the second switch takes me to the data declaration in the first. I have however not been able to test that this code executes properly as our testing environment is down.
In short, you cannot do this. You have the following scopes in an ABAP program within which to declare variables (from local to global):
Form routine: all variables between FORM and ENDFORM
Method: all variables between METHOD and ENDMETHOD
Class: all variables between CLASS and ENDCLASS, but only in the CLASS DEFINITION section
Function module: all variables between FUNCTION and ENDFUNCTION
Program/global: anything not in one of the above is global in the current program, including variables in PBO and PAI modules
Having the ability to define variables locally in a for loop or an if block is really useful, but unfortunately it is not possible in ABAP. The closest you will come to publicly available documentation on this is on help.sap.com: Local Data in the Subroutine.
As for the compile process, do not assume that ABAP will optimize out any variables you do not use; it won't. Use the Code Inspector to find and remove them yourself. Since ABAP works the way it does, I personally define all my variables at the start of a modularization unit rather than inline with other code, and I have gone so far as to modify the Pretty Printer to move any inline definitions to the top of the current scope.
Your assumption that a CASE statement defines its own scope of variables in ABAP is simply wrong (and would be wrong for a number of other programming languages as well). It's a bad idea to litter your code with variable declarations because that makes it awfully hard to read and to maintain, but it is possible. The DATA statements - as well as many other declarative statements - are only evaluated at compile time and are completely ignored at runtime. You can find more information about the scopes in the online documentation.
Inline variable declarations are now possible with the newest version of SAP NetWeaver. Here is the link to the documentation: DATA - inline declaration. There are also some guidelines on good and bad usage of this new feature.
Here is a quote from this site:
A declaration expression with the declaration operator DATA declares a variable var used as an operand in the current writer position. The declared variable is visible statically in the program from DATA(var) and is valid in the current context. The declaration is made when the program is compiled, regardless of whether the statement is actually executed.
Personally, I have not had time to check it out yet, for lack of access to such a system.

Why isn't Eiffel's automatic type conversion feature more popular?

What happened to me while programming in Java:
String str
// want to call something(), but signature does not match
something(Foo foo)
// but I have this conversion function
Foo fooFrom(String)
// Obviously I am about to create another method overload.. sigh
something(String s) {
    something(fooFrom(s));
}
But then I thought of the possibility of an "automatic type conversion" which just uses my defined conversion function fooFrom every time a String is passed in where a Foo object is expected.
My search brought me to the wikipedia page about type conversion with this Eiffel example:
class STRING_8
…
create
make_from_cil
…
convert
make_from_cil ({SYSTEM_STRING})
to_cil: {SYSTEM_STRING}
…
The methods after convert are called automatically if a STRING_8 is used as a SYSTEM_STRING and vice-versa.
Somewhat surprisingly, I could not find any other language supporting this.
So my question: are there any other languages supporting this feature?
If not, are there any reasons for that, since it seems quite useful to me?
Furthermore, I think it would not be difficult to implement it as a language add-on.
There is one minor point that may make things a bit more complicated. At the moment Eiffel has a rule that conversion can be applied only when the source of reattachment is attached to an object, i.e. is not Void (not null in Java/C#).
Let's look at the original example:
something (str);
Suppose that str is null. Do we get a NullPointerException / InvalidArgumentException, because the code is transformed into
something (fooFrom (str));
and fooFrom does not expect null? Or is the compiler smart enough to transform this into
if (str == null)
    something (null);
else
    something (fooFrom (str));
?
The current Eiffel standard makes sure that such issues simply do not happen and str is not null if conversion is involved. However many other languages like Java or C# do not guarantee that and the additional complexity may be not worth the effort for them.
I believe that Eiffel is not the only language to support conversion routines, but I would say that it might be one of the very few that integrate this very nicely with the rest of the language definition.
In .NET, for example, you have both op_Explicit and op_Implicit routines that can be used for conversion for languages that support them. And I believe C# does.
Manu
Type coercion (implicit conversion) is a curse and a blessing: handy in some cases, but it can also backfire.
For instance, JavaScript has many weird coercion rules that can lead to bugs when coercing strings to numbers, etc.
Scala has something called "implicits" which achieves something similar (at least to me) to what you describe in Eiffel. Unsurprisingly, they can lead to certain gotchas, but they can also be very handy; see for instance the article Pimp My Library.
C++ has converting constructors and user-defined conversion operators.
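As a rough sketch of the C++ route, reusing the Foo type and something function from the question (hypothetical names, and ignoring the finer points of C++ conversion rules):
#include <string>

struct Foo {
    std::string value;
    // Converting constructor: lets a std::string be passed wherever a Foo
    // is expected, similar in spirit to Eiffel's convert clause.
    Foo(const std::string& s) : value(s) {}
};

void something(const Foo& foo) {
    // work with foo.value ...
}

int main() {
    std::string str = "hello";
    something(str);  // implicitly converted via Foo(const std::string&)
}
Marking the constructor explicit would disable exactly this implicit conversion, which is how C++ lets you opt out when the coercion is unwanted.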

What does it mean if you put more than one static modifier before a variable?

I was just fooling around in Xcode and I discovered that the following statement compiles and it doesn't even raise a warning let alone an error:
static static static int static long static var[5];
What's up with that? Does this make it super-DUPER static? :)
All joking aside, why does the compiler permit repeating the static modifier? Is there actually a reason to allow people to do this or were the people who wrote the compiler too lazy to make this raise an error?
I'm not an Objective-C developer, but does the language allow for an arbitrary ordering of modifiers (e.g. static volatile extern)? If so, then it's probably a benign bug in the compiler: after reading a modifier ("static" in this case) it returns to a state where it accepts any modifier terminal again, and will do so until it encounters the variable's type. Repeated static specifiers wouldn't contradict any prior modifiers, so no error would be raised; based on this I would expect volatile volatile volatile int x; to also work.
For C 2011, the following applies:
Section 6.7.1 Paragraph 2
At most, one storage-class specifier may be given in the declaration specifiers in a declaration, except that _Thread_local may appear with static or extern.
And storage class specifiers are defined as:
storage-class-specifier:
typedef
extern
static
_Thread_local
auto
register
So for C 2011, that should be illegal.
As to Objective-C, I have no idea where to find a language specification, so I can't help you there.

Ada - can pragma Attach_Handler() attach a handler with System.Priority'Last priority?

The next two declarations are equivalent:
protected type prot_Type is
   ....
   pragma Priority(System.Priority'Last);
end;

protected type prot_Type is
   ....
end;
One way of attaching an interrupt handler is:
protected type prot_Type is
   procedure Handler;
   pragma Attach_Handler(Handler, ...);
end;

-- Attach is made at the creation of the next object:
Object : prot_Type;
It is a legal attachment (it works).
How is it possible that the handler has a ceiling priority of System.Priority'Last? (As far as I know, the legal priority range is Priority'Last + 1 .. Any_Priority'Last.)
Another thing:
if I add pragma Priority(System.Priority'Last); to the protected declaration, a Program_Error exception is raised at elaboration (when attaching the handler).
Can someone please clear the fog?
I finally managed to understand, thanks to:
http://www.iuma.ulpgc.es/users/jmiranda/gnat-rts/node33.htm
The fact that a handler defined in a protected object with ceiling priority System.Priority'Last managed to be attached to an interrupt seems to me like a bug in the compiler.
Only handlers defined in a protected object with a ceiling priority in Interrupt_Priority'Range can be attached to an interrupt.
Another important thing: for a non-static protected type (i.e. declared with "protected type ..."), the attachment is made at the creation of an object of that type. The object must be allocated dynamically.
Yony.
This question is about attaching interrupts (or signals) to a protected object to function as interrupt handlers. It is wonderful that Ada provides you a mostly language-standard way to do this, but there are limits to what is in the standard, and I think your question hits one. You really need to read your compiler's documentation for this one.
For example, if what you are attaching to is an honest-to-god system interrupt, then it is quite possible that your handler will get called directly from the system interrupt, which is of course completely outside of (and thus above) both your OS's process priority and Ada's task priority systems.
Generally in such a case, like with any ISR, you'd want to do the absolute minimum required to make note of and deal with the interrupt, interact with the system as little as possible (no I/O or tasking interactions), and return control back to the system so it can start behaving normally again. In your case, you might want to increment a variable or set a flag internal to your tagged type, take down any volatile info about the interrupt you may need later, then return.