IntelliJ IDEA deeplearning4j debugger has discrepancy in data - intellij-idea

I work with deeplearning4j and created an INDArray of quite a large size. I write some values into that array. If I try to inspect those values in the debugger, I initially see zeros, and only the data under the FloatBuffer node shows the values I entered. See the screenshot.
When debugging the code of XorExample in deeplearning4j, I didn't notice such behavior:
Is there any way to always show or always hide the values that sit inside an INDArray without showing zeros? Or is it some kind of bug in IDEA?

By default the debugger evaluates and shows toString() as defined in the NDArray class (BaseNDArray in your case). You can replace it with your own custom renderer for this type. The easiest way is to right-click a variable -> View as -> Create...
Or go to File | Settings | Build, Execution, Deployment | Debugger | Data Views | Java Type Renderers.
Then put your required expression into the "When rendering a node" field.


JupyterLab debugging mode contains many variables, what are they?

I have the latest version of JupyterLab, version 3.0.14.
In debugging mode, many variables that I do not need appear in the right-hand box.
What are they?
I want to delete all of them except i, j and result.
You can adjust the variableFilters setting of the debugger to hide variables that you are not interested in; see this answer for more details.
The debugger simply shows all variables that are available at runtime in the kernel. The Python kernels (IPython and Xeus Python) come with a feature that remembers your input and output for each executed cell; it is immensely useful if you execute a cell with a compute-intensive task but forget to assign the result to a variable. For example, instead of:
result = do_long_calculation()
you do:
do_long_calculation()
In the latter case you can use the IPython underscore variable _, which caches the last execution result, to recover the output:
result = _
If you have already overwritten the most recent output, the previous one goes to __, and so on. Learn more about this in the output caching system documentation.
Similarly, the _i, _ii, etc. variables cache the most recent cell inputs so you can check what has been executed. See more in the input caching system documentation. There are also the In and Out variables, which store the entire execution history for your reference.
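As a small illustration of this caching (each block below is meant to be run as a separate notebook cell in IPython/Jupyter; do_long_calculation is just a stand-in for an expensive computation):

# Cell 1: an expensive computation whose result is not assigned
def do_long_calculation():
    return 6 * 7
do_long_calculation()

# Cell 2: recover the previous cell's output from the underscore cache
result = _          # _ is the most recent output, __ the one before it
print(result)       # prints 42

# Cell 3: inspect what has been executed so far
print(_i)           # source of the previous cell (_ii for the one before)
print(In[1])        # In is the list of all cell inputs
print(Out)          # Out maps execution counts to their outputs

Note that _, _i, In and Out only exist inside an IPython-based kernel, not in a plain Python script.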
For future convenience, I raised the idea of allowing regular expressions to hide all variables matching a pattern here.

How to save variables from Uppaal created during the modeling process

I've created a model with Uppaal in which several integer variables change over the course of time. Now I would like to save the values of the variables during the modeling process somewhere (preferably in XML or a text file). In the Uppaal documentation (https://www.it.uu.se/research/group/darts/uppaal/documentation.shtml) I found the method in point 13 (How do I export and interpret the traces from Uppaal?) and already tried the Java API way, in the hope that it can output the variables as well as the traces. Unfortunately this method seems to be limited to traces. Does anyone know a method to save the variable values from Uppaal?
Hopeful greetings,
Josi
Solution from the comments.
To export the variable value trajectory over time, one may use an SMC query in the verifier.
For example:
Typeset the following query: simulate 1 [<=300] { Gate.len }
Click Check
Right-click on the query, and from the popup menu choose Simulations (1)
Observe a new window popup with a plot
Right-click on the plot and choose Export Comma Separated Values
Follow the save-file dialog and observe that the resulting file contains a sequence of time and value pairs.
Note that SMC assumes that all channels are broadcast and there are no deadlocks.
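If you need to post-process the exported values, a small Python sketch along these lines could read them back in (it assumes each data line holds a time and a value separated by a comma, and the file name gate_len.csv is purely illustrative; adjust to the header and name Uppaal actually writes):

import csv

times, values = [], []
with open("gate_len.csv") as f:            # path to the exported file
    for row in csv.reader(f):
        try:
            t, v = float(row[0]), float(row[1])
        except (ValueError, IndexError):
            continue                       # skip headers or malformed lines
        times.append(t)
        values.append(v)

print(list(zip(times, values))[:5])        # first few time-value pairs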

Run-State values within shape script EA

Enterprise Architect 13.5.
I made an MDG Technology extending the Object metatype. I have a shape script for my stereotype that works well. I need to print several predefined run-state parameters for the element. Is it possible to access run-state parameters within a shape script?
As Geert already commented, there is no direct way to get the run-state variables from an object. You might send a feature request to Sparx, but I'm pretty sure you can't hold your breath long enough to see it in time (if at all).
So if you really need the run-state in the script, the only way is to use an add-in. It's actually not too difficult to create one, and Geert has a nice intro on how to create one in 10 minutes. In your shape script you can print a string result returned from an operation like
print("#addin:myAddIn,pFunc1#")
where myAddIn is the name of the registered operation and pFunc1 is a parameter you pass to it. In order to control the script flow you can use
hasproperty('addin:myAddIn,pFunc2','1')
which evaluates the returned string to match or not match the string 1.
I once got that to work without too much hassle, but until now I have never had a real need to use it in production. Be aware that the add-in is called from the interpreted script for each shaped element on the diagram and might (dramatically) affect rendering times.

How to create a custom Allure step function for sensitive data

I am currently working on a test automation team, using Python and Allure to make reports of all the test cases that we run. Sometimes we deal with sensitive data (e.g. passwords) that I can't show in the reports. If I use a function with a step decorator, something like this:
Which takes an element (a text box) and enters the value into it. In the step function I display the value that I want to enter; I could easily change that, but the problem resides in the actual report. No matter what I put in the step title, the report always shows the information that was passed as arguments to the function:
Thus, the value argument will always be displayed, and that is something I cannot have on certain projects. Is there any way to make a custom step function that solves my problem? It could either not show the value at all or change it to something like '*****'.
Just a thought.
@allure.step("Entering a value in element {3}")
def setSecureBoxValue(driver, element, value, box_name):
    element.send_keys(value)  # enters the (sensitive) value into the text box
I solved my problem by using Fernet from the cryptography library.
I created a new function for the sensitive data that encrypts the strings, and inside this new function I call the one I shared in the screenshot (with a slight modification to decrypt the data). This results in the following report:
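A minimal sketch of that approach (the function names are illustrative; it assumes the cryptography package is installed and that only the encrypted token should ever reach the report):

import allure
from cryptography.fernet import Fernet

# One key for the test run; in practice it would be stored and managed elsewhere.
_fernet = Fernet(Fernet.generate_key())

@allure.step("Entering a value in element {3}")
def setBoxValue(driver, element, value, box_name):
    # Slightly modified original step: 'value' arrives encrypted, so the report
    # only ever shows the opaque token; it is decrypted just before use.
    element.send_keys(_fernet.decrypt(value).decode())

def setSecureBoxValue(driver, element, value, box_name):
    # Wrapper for sensitive data: encrypt first, then delegate to the step function.
    setBoxValue(driver, element, _fernet.encrypt(value.encode()), box_name)

With this split, the step parameters recorded by Allure contain only the Fernet token instead of the plain-text value.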

How to get output of Kotlin IntelliJ scratch file?

A script as simple as
println("a")
doesn't produce any output in the Scratch output window. I expect an a to appear in the output window.
I'm using IntelliJ 2019.1.2 CE.
EDIT: Information regarding the expected functionality
According to a question I had asked: IntelliJ Ultimate Kotlin Script REPL skips first printed lines - Scratch Output cut off
It seems to be the case that the REPL wants to output per line, and will only overflow into the bottom area after a certain line length is reached.
In the question I state that I generally add some initial padding so that it always flows into the lower area with something similar to:
repeat(10) { println("BLANK ") }
END EDIT
You have to make sure that Interactive Mode is turned off at the top of the scratch window.
This will cause prints to be put into the Scratch Output window.
Be warned: at least in the version I have, IntelliJ IDEA Ultimate 2019.1.3, the first 5-9 lines generally don't print, so I do something like:
repeat(9) { println("blank") }
If you've defined everything inside of a function, the script won't execute the function automatically - you'll need to explicitly call the function.
You can create your own print, like this:
fun <T> myPrint(x: T): T = x
then calling e.g.:
myPrint(5)
should show 5 in the result window.