TensorFlow raw operations Variable, VariableV2, VarHandleOp: what are the differences?

(I have read the TensorFlow variables guide, https://www.tensorflow.org/guide/variables, but it does not answer this.)
I see that variables in TF can be created with three different raw operations. I understand that Variable is deprecated and VariableV2 should be used instead. However, I have also read that Variable returns a variable by reference, which is considered outdated, while VarHandleOp returns a resource-typed variable, which is what should be used. VariableV2 confuses me: is it a new version of the old ref-style Variable, and hence still not up-to-date, or is it actually the modern approach that merely keeps the old Variable interface (probably for compatibility reasons)? Maybe it uses something like VarHandleOp under the hood?
A related question: what is the "container" used by all three aforementioned operations? Every document I found only says that it defaults to "" and that this is fine, but what is it actually?
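For reference, the ref-vs-resource split is visible from the Python API. Here is a minimal sketch (assuming TF 1.x graph mode via tf.compat.v1), where use_resource=False lowers to a VariableV2 op and use_resource=True lowers to a VarHandleOp:

import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

# Old ref-style variable: the op's single output is a mutable "ref" tensor.
v_ref = tf.get_variable("v_ref", shape=[], use_resource=False)
print(v_ref.op.type, v_ref.dtype)   # VariableV2 <dtype: 'float32_ref'>

# Resource-style variable: the op's output is an opaque handle;
# reads go through a separate ReadVariableOp.
v_res = tf.get_variable("v_res", shape=[], use_resource=True)
print(v_res.op.type, v_res.dtype)   # VarHandleOp <dtype: 'float32'>

As for container: it names a grouping inside the session's resource manager where the variable's state lives. The empty string selects the default container, and tf.Session.reset(target, containers=[...]) can wipe all resources in a named container at once, which is the main reason to give it a non-default name.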

Related

How to fix the auto code formatting in Pharo?

When I save a method and come back to it later, all of my variable names become temp, all of my parameters become arg, and the code indentation gets changed.
Any thoughts on how I can fix this?
The behaviour you are experiencing is not code formatting at all. Your image is experiencing an issue where it can't access the original source code, so it falls back to decompiling the method bytecode. During compilation the variable names are erased, so they cannot be re-created during decompilation, and generic substitutes are used instead.
Now, why you are missing sources is another question. First of all, it's important to check whether you get any exceptions. These often happen when you open or save your image, but they may also occur when you save methods.
Depending on the Pharo version, you may be missing the .changes or .sources files. This often happens when you move an image without moving its supporting files.

Is local_variables_initializer really necessary?

In practice, isn't running global_variables_initializer enough to initialize model variables?
local_variables_initializer seems to be unnecessary and absent even in official and semi-official tensorflow example code. See for example:
https://github.com/dandelionmane/tf-dev-summit-tensorboard-tutorial
https://www.tensorflow.org/get_started/mnist/pros
In both cases only global_variables_initializer is used.
Am I missing something here? Is there any case where I should call local_variables_initializer explicitly?
local_variables_initializer is useful in particular for streaming metrics (e.g. tf.contrib.metrics.streaming_auc). As said in the doc of contrib.metrics:
Because the streaming metrics use local variables, the Initialization stage is performed by running the op returned by tf.local_variables_initializer().
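A minimal sketch of the failure mode (TF 1.x assumed, using tf.metrics.auc, the non-contrib counterpart of streaming_auc):

import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

labels = tf.constant([0, 0, 1, 1], dtype=tf.int64)
predictions = tf.constant([0.1, 0.4, 0.35, 0.8])

# tf.metrics.auc keeps its running counts in *local* variables.
auc, update_op = tf.metrics.auc(labels, predictions)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())  # does not touch the metric's counts
    sess.run(tf.local_variables_initializer())   # skip this and update_op raises FailedPreconditionError
    sess.run(update_op)
    print(sess.run(auc))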

Is there a way to get/NSLog an array of all local variables in a function in Objective-C? [duplicate]

When attached to the debugger via Xcode, LLDB provides a useful view of local variables. I found an LLDB command, frame variable (and gdb's info locals), that prints a list of the local variables in the current frame.
My hope is that this functionality is possible to perform on the device at runtime. For example, I can access the stack trace using backtrace_symbols(), the current selector via _cmd, and a few others.
Has anyone had experience in this area? Thanks in advance.
Xcode/LLDB can show you this information because they have access to debug information in the binary, called a symbol table, which maps memory locations to the names in your source code. This all lives outside the Objective-C runtime, and there's no interface in the runtime to get at it.
There's another reason why this won't work, though. When you're building code to run in the debugger, compiler optimizations are turned off, so all the variables you reference in your code are there.
When you build for release, though, generally the compiler optimizations get in there and re-arrange all your carefully named local variables to make things run faster. They might not even ever get stored in memory, just in CPU registers. Or they might not exist at all, if the optimizer can prove to itself that it doesn't need them.
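By way of contrast, and purely as an analogy, a runtime that does keep per-frame name tables can hand you exactly such a list. CPython, for instance, preserves local variable names at runtime, which an optimized native Objective-C binary does not:

import inspect

def demo():
    x = 42
    y = "hello"
    # CPython keeps a name-to-value mapping on every stack frame;
    # a stripped, optimized native binary has nothing comparable.
    print(inspect.currentframe().f_locals)

demo()   # prints {'x': 42, 'y': 'hello'}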
My advice is to think again about the larger problem you're trying to solve...

Why can't decompilers theoretically produce the original code?

I searched the internet but did not find a concrete answer to why decompilers are unable to reproduce the original source code. Somewhere it was written that this is similar to the halting problem, but it didn't explain how. So what is the theoretical and technical limitation that prevents building a perfect decompiler?
It is, quite simply, a many-to-one problem. For example, in C:
b++;
and
b+=1;
and
b = b + 1;
may all get compiled to the same set of operations once the compiler and optimizer are done. The compiler reorders things, drops operations that have no effect, and rewrites entire sections of code. By the time it is done, it has no idea what you wrote, just a pretty good idea of what you intended to happen at the raw-CPU (or vCPU) level.
It is even smart enough to remove variables that aren't needed:
{
    a = 5;
    b = func();
    c = a + b;
    d = func2(c);
    return d;
}
// gets rewritten by the optimizer as:
REGISTERA = func()
REGISTERA += 5
return func2(REGISTERA)
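The same many-to-one effect is easy to demonstrate with CPython's bytecode compiler, for example:

import dis

def seconds_a():
    return 7200

def seconds_b():
    return 2 * 3600   # constant-folded at compile time

# Both functions disassemble to an identical LOAD_CONST 7200 / RETURN_VALUE,
# so no decompiler can recover which form the author originally wrote.
dis.dis(seconds_a)
dis.dis(seconds_b)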
For starters, the variable names are never preserved when your program is compiled. ...so the best it could possibly do would be to use meaningless variable names throughout your reconstituted program. Compiling is generally a one-way transformation, like a one-way hash function. As with a hash, it may be possible to generate some other source that compiles to the same output, but it is highly unlikely that the decompiled program will be exactly the same as your original.
Compilers throw out information; not all the information in the source code is present in the compiled code. For example, in compiled Java you can't tell the difference between a parameterized and an unparameterized generic type, because that information is only used by the compiler; likewise, some annotations are only used at compile time and are not included in the compiled output. That doesn't mean you couldn't get some sort of source code by decompiling; it just wouldn't match the actual source, nor would it be as informative.
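A loosely analogous erasure can be seen in Python, where a generic type parameter exists only for the type checker and leaves no trace on the runtime object:

from typing import List

xs: List[int] = []
ys: List[str] = []

# Both are plain lists at runtime; the element type was
# checker-only information, much like Java's erased generics.
print(type(xs) is type(ys))   # True
print(type(xs))               # <class 'list'>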
There is usually not a 1-to-1 correspondence between source code and compiled code. If an essentially infinite number of possible sources could result in the same object code (given unbounded variable name lengths, etc.), how is a decompiler to guess which one to spit out?

ABAP OO obsolete statements: How do these affect your existing code-base?

Since upgrading from 4.7 to ECC6 the ABAP compiler has become a lot stricter on the use of certain statements in the OO context.
For instance, you're not allowed to use the statement LIKE but instead have to use TYPE, internal tables cannot have an implicit header line, etc.
These restrictions are explained in greater detail here
MY QUESTION: To what extent does this restriction affect your existing code base?
We have over a thousand "Classes" written since 1998 in OO as far as it was available at the time. For the most part each class is its own include in SE38, with the class definition and implementation together in this include.
Up to now, we could successfully change and activate these classes as long as the main program was pre-existing in 4.7. Now we are trying to use one of these older classes in a new main program for regression test purposes, and we are getting the following error:
"Within classes and interfaces, you can only use "TYPE" to refer to ABAP Dictionary types (not "LIKE" or "STRUCTURE")."
This error is valid as per the current definition of the SAP language.
I would like to know whether the SAP interpreter intentionally continues to run old code with obsolete statements, or whether a future patch may remove this "feature" and cause these classes to stop compiling.
Each development object is tagged with a version corresponding to the SAP version it was developed on. You can see this in version management or table VRSD.
As I understand it, that is there specifically so that code with statements that have been made illegal in later versions will survive an upgrade and continue to run.
This is why, when you attach an include developed in 4.5b to a class that was developed in NW700, it won't compile: the compiler knows this is new development and applies the current rules accordingly.
The ABAP community has been told for a long time (years) that LIKE, work areas, RANGEs, etc. are obsolete.
I don't think SAP will kill any old code, but I wouldn't count on that if I were in charge.
So: can they make it stop compiling? Yes. Will they? Probably not.