JupyterLab debugging mode contains many variables; what are they?

I have the latest version of JupyterLab (3.0.14).
In debugging mode, many variables that I don't need appear in the right-hand panel.
What are they?
I want to remove all of them except i, j and result.

You can adjust the variableFilters setting of the debugger to hide variables that you are not interested in; see this answer for more details.
The debugger simply shows all variables that are available at runtime in the kernel; the Python kernels (IPython and Xeus Python) come with a feature that remembers your input and output for each executed cell. This is immensely useful if you execute a cell with a compute-intensive task but forget to assign the result to a variable, for example instead of:
result = do_long_calculation()
you do:
do_long_calculation()
In the latter case you can use the IPython underscore _ variable, which caches the last execution result, to recover the output:
result = _
If you have already overwritten the most recent output, the previous one goes to __, and so on. Learn more about this in the output caching system documentation.
Similarly, the _i, _ii, etc. variables cache the most recent cell inputs so you can check what has been executed; see the input caching system documentation for more. There are also the In and Out variables, which store the entire execution history for your reference.
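As a quick illustration of these caching variables, here is a sketch of an interactive IPython session (the cell numbers are arbitrary):
In [1]: 2 + 2
Out[1]: 4
In [2]: _          # the most recent output
Out[2]: 4
In [3]: _i1        # the input of cell 1, as a string
Out[3]: '2 + 2'
In [4]: Out[1]     # the full output history, indexed by execution count
Out[4]: 4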
For future convenience, I raised the idea of allowing regular expressions to hide all variables matching a pattern here.

Related

TFAgents: how to take into account invalid actions

I'm using TF-Agents library for reinforcement learning,
and I would like to take into account that, for a given state,
some actions are invalid.
How can this be implemented?
Should I define a "observation_and_action_constraint_splitter" function when
creating the DqnAgent?
If yes: do you know any tutorial on this?
Yes, you need to define the function, pass it to the agent, and also change the environment output appropriately so that the function can work with it. I am not aware of any tutorials on this; however, you can look at this repo I have been working on.
Note that it is very messy, a lot of the files in there are not actually being used, and the docstrings are terrible and often wrong (I forked this and didn't bother to sort everything out). However, it is definitely working correctly. The parts that are relevant to your question are:
rl_env.py in HanabiEnv.__init__, where the _observation_spec is defined as a dictionary of ArraySpecs (here). You can ignore game_obs, hand_obs and knowledge_obs, which are used to run the environment verbosely; they are not fed to the agent.
rl_env.py in HanabiEnv._reset at line 110 gives an idea of how the timestep observations are constructed and returned from the environment. legal_moves are passed through np.logical_not since my specific environment marks legal moves with 0 and illegal ones with -inf, whilst TF-Agents expects 1/True for a legal move; my vector, when cast to bool, would therefore be the exact opposite of what it should be for TF-Agents.
These observations are then fed to the observation_and_action_constraint_splitter in utility.py (here), where a tuple containing the observations and the action constraints is returned. Note that game_obs, hand_obs and knowledge_obs are implicitly thrown away (and not fed to the agent, as previously mentioned).
Finally, this observation_and_action_constraint_splitter is fed to the agent in utility.py in the create_agent function, at line 198 for example; a minimal sketch of such a splitter is shown below.
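For reference, a minimal sketch of such a splitter, assuming the environment's observation is a dictionary; the keys "observations" and "legal_moves" are placeholders for whatever your observation spec actually defines, and legal_moves is assumed to already be the 1/True-for-legal mask described above:
def observation_and_action_constraint_splitter(observation):
    # First element: what the Q-network receives.
    # Second element: the mask of valid actions (1/True for legal moves).
    return observation["observations"], observation["legal_moves"]

# The splitter is then passed to the agent, e.g.:
# agent = dqn_agent.DqnAgent(
#     time_step_spec, action_spec, q_network=q_net, optimizer=optimizer,
#     observation_and_action_constraint_splitter=observation_and_action_constraint_splitter)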

How to save variables from Uppaal created during the modeling process

I've created a model with Uppaal in which several integer variables change over the course of time. Now I would like to save the values of the variables during the modelling process somewhere (ideally in XML or a text file). In the Uppaal documentation (https://www.it.uu.se/research/group/darts/uppaal/documentation.shtml) I found the method in point 13 (How do I export and interpret the traces from Uppaal?) and already tried the Java API approach, hoping that it could output the variables as well as the traces. Unfortunately, this method seems to be limited to traces. Does anyone know a method to save the variable values from Uppaal?
Hopeful greetings,
Josi
Solution from the comments.
To export the variable value trajectory over time, one may use an SMC query in the verifier.
For example:
Type the following query: simulate 1 [<=300] { Gate.len }
Click Check
Right-click on the query, and from the popup menu choose Simulations (1)
Observe a new window popup with a plot
Right-click on the plot and choose Export Comma Separated Values
Follow the save-file dialog and observe that the resulting file contains the time and value sequence.
Note that SMC assumes that all channels are broadcast and there are no deadlocks.
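If you then want to post-process the exported values, a short script along these lines can read the file back (a sketch: it assumes the export was saved as gate_len.csv with comma-separated time,value pairs, and it skips any lines that do not parse as two numbers):
times, values = [], []
with open("gate_len.csv") as f:
    for line in f:
        parts = line.strip().split(",")
        if len(parts) != 2:
            continue
        try:
            t, v = float(parts[0]), float(parts[1])
        except ValueError:
            continue  # skip headers or other non-numeric lines
        times.append(t)
        values.append(v)
print(f"{len(times)} samples; last value of Gate.len = {values[-1] if values else None}")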

Run-State values within shape script EA

Enterprise Architect 13.5.
I made an MDG technology extending the Object metatype. I have a shape script for my stereotype that works well. I need to print several predefined run-state parameters for an element. Is it possible to access run-state parameters within a shape script?
As Geert already commented, there is no direct way to get the run-state variables from an object. You might send a feature request to Sparx, but I'm pretty sure you can't hold your breath long enough to see it in time (if at all).
So if you really need the run-state in the script, the only way is to use an add-in. It's actually not too difficult to create one, and Geert has a nice intro on how to create one in 10 minutes. In your shape script you can print a string result returned from an operation like
print("#addin:myAddIn,pFunc1#")
where myAddIn is the name of the registered operation and pFunc1 is a parameter you pass to it. In order to control the script flow you can use
hasproperty('addin:myAddIn,pFunc2','1')
which checks whether the returned string matches the string 1.
I once got that to work without too much hassle, but until now I have never had a real need to use it in production. Be aware that the add-in is called from the interpreted script for each shaped element on the diagram and might (dramatically) affect rendering times.

Does histogram_summary respect name_scope

I am getting a Duplicate tag error when I try to write out histogram summaries for a multi-layer network that I generate procedurally. I think that the problem might be related to naming. Imagine code like the following:
with tf.name_scope(some_unique_name):
  ...
  _ = tf.histogram_summary('weights', kernel_weights)
I'd naively assumed that 'weights' would be scoped to some_unique_name, but I suspect that it is not. Are summary names independent of name_scope?
As Dave points out, the tag argument to tf.histogram_summary(tag, ...) is indeed independent of the current name scope. Part of the reason for this is that the tag may be a string Tensor (i.e. computed by part of your graph), whereas name scopes are a purely client-side construct (i.e. Python-only), so there's no good way to make the scoping work consistently across the two modes of use.
However, if you're using TensorFlow built from source (this should also be available in the next release, 0.8.0), you can use the following recipe to scope your tags, using Graph.unique_name(..., mark_as_used=False):
with tf.name_scope(some_unique_name):
  # ...
  tf.histogram_summary(
      tf.get_default_graph().unique_name('weights', mark_as_used=False),
      kernel_weights)
Alternatively, you can do the following in the current version:
with tf.name_scope(some_unique_name) as scope:
  # ...
  tf.histogram_summary(scope + 'weights', kernel_weights)
They are.
I'm with you in thinking this is a bug, but I haven't run it past the designers of the op yet. Go ahead and open an issue for it on GitHub!
(I've run into this also and found it terribly annoying -- it prevents reuse of the model without deliberately parameterizing the summary op invocations.)

Linux Kernel Process Management

First, I admit that everything I will ask is about our homework, but I assure you I am not asking without having struggled for at least two hours.
Description: We are supposed to add a field called max_cpu_percent to the task_struct data type and modify the process scheduling algorithm so that processes cannot use a higher percentage of the CPU.
For example, if I set the max_cpu_percent field to 20 for the firefox process, firefox will not be able to use more than 20% of the CPU.
We wrote a system call to set the max_cpu_percent field. Now we need to see whether the system call works, but we could not get the value of the max_cpu_percent field from a user-space program.
Can we do this, and how?
We tried /proc/<pid>/ etc.; can we get the value that way?
By the way, we may add further questions here if we get stuck on something else.
Thanks All
Solution:
The reason was that we did not modify the code that writes the output for the /proc queries.
There are some functions in fs/proc/array.c; we modified the relevant function so that it also prints the newly added field's value. The kernel is now compiling; we'll see the result in about an hour =)
It Worked...
(If you simply extended getrlimit/setrlimit, then you'd be done by now…)
There's already a mechanism where similar parts of task_struct are exposed: /proc/$PID/stat (and /proc/$PID/$TID/stat). Look for functions proc_tgid_stat and proc_tid_stat. You can add new fields to the ends of these files.
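For the user-space check itself, once the field is printed by those functions, something along these lines can read it back (a sketch: it assumes the patched proc_tgid_stat appends max_cpu_percent as the last space-separated field of /proc/<pid>/stat, so adjust the index to match where the value is actually printed):
import sys

def read_max_cpu_percent(pid):
    # Read /proc/<pid>/stat and return the (assumed) last field as an integer.
    with open(f"/proc/{pid}/stat") as f:
        fields = f.read().split()
    return int(fields[-1])

if __name__ == "__main__":
    print(read_max_cpu_percent(int(sys.argv[1])))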