I'm running the code for ALBERT, one of Google's machine learning models, on Google Colab. At the end, everything is done by invoking a script through ! (the shell). However, I'd like to do some further work with the resulting model after the script has run.
Is there any way to either access or get the script to output particular variables in a way that my code in Colab could access afterwards?
Here's another way of phrasing my question. Putting $HELLO_WORLD into the shell command accesses the HELLO_WORLD variable in my Colab code. Is there any way to get the script to set the HELLO_WORLD variable in my Colab code?
You can use os.environ like this:
import os
os.environ['HELLO_WORLD']='hello world from Python'
Then, later:
!echo $HELLO_WORLD
# hello world from Python
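Note that the flow in the other direction is trickier: a shell command runs in a child process, so it cannot set environment variables in your Colab Python session. To get data back out of the script, the usual tricks are to capture its output or have it write a file. A minimal sketch, where my_script.py is a stand-in for whatever script you're running:

lines = !python my_script.py               # IPython/Colab captures stdout as a list of lines
print(lines)

!python my_script.py > result.txt          # or redirect the output to a file...
value = open('result.txt').read().strip()  # ...and read it back in Python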
I open raku/rakudo/perl6 thus:
con#V:~/Scripts/perl6$ perl6
To exit type 'exit' or '^D'
>
Is the above environment called the "interpreter"? I have been searching forever, and I cannot find what it's called.
How can I execute a rakudo script like I would do
source('script.R') in R, or exec(open('script.py').read()) in python3?
To clarify, I would like the functions and libraries in the script to be available in REPL, which run doesn't seem to do.
I'm sure this exists in documentation somewhere but I can't find it :(
As Valle Lukas has said, there's no exact replacement. However, all the usual functions for running external programs are there:
shell("raku multi-dim-hash.raku") will run that as an external program.
IIRC, R's source also evaluates the file. So you might want to use require, although symbols will not be imported directly and you'll have to use indirect lookup to get at them.
You can also use EVAL on the loaded module, but again, variables and symbols will not be imported.
It's called a Read-Eval-Print Loop (REPL). You can execute Raku scripts directly from the shell, without the REPL: raku filename.raku. To run code from within the REPL, you can have a look at run (run <raku test.raku>) or EVALFILE.
The rosettacode page Include a file has some information. But it looks like there is no exact replacement for your R source('script.R') example at the moment.
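To make that concrete, a quick sketch from inside the REPL, reusing the script name from the other answer. Note that lexically scoped (my) subs in the file still won't be visible afterwards, since EVALFILE evaluates the file in its own scope:

> use MONKEY-SEE-NO-EVAL;          # may be required before EVAL/EVALFILE
> EVALFILE "multi-dim-hash.raku";  # evaluate the file's contents in-process
> run <raku multi-dim-hash.raku>;  # or run it as an external process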
I am trying to plot my model, including the data types, with the following code:
plot_model(model, to_file='model/model.png', show_dtype=True, show_shapes=True, show_layer_names=True)
However, I get an error that show_dtype is not an acceptable parameter even though it appears on the TensorFlow documentation: https://www.tensorflow.org/api_docs/python/tf/keras/utils/plot_model
This is the first time that I have run into this issue. It seems to be due to having an earlier release, which can happen if you installed TensorFlow from conda-forge rather than from pip. It is a simple fix, however.
Basically, you need to go into the library source file and edit it to match the current version shown on the TensorFlow documentation page.
The link to the GitHub page that you will copy the Python code from is here: https://github.com/tensorflow/tensorflow/blob/v2.5.0/tensorflow/python/keras/utils/vis_utils.py#L278-L348
Afterwards, head to your library path and paste that Python code there.
For example, my path is the following: C:/ProgramData/Anaconda3/envs/ml/Lib/site-packages/tensorflow/python/keras/utils/vis_utils.py. Yours should be something similar.
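Before patching files by hand, it may be worth confirming which version you actually have, since show_dtype only exists in more recent releases. A minimal sketch; the try/except fallback is just one way to stay compatible with older installs:

import tensorflow as tf
from tensorflow.keras.utils import plot_model

print(tf.__version__)  # show_dtype requires a sufficiently recent version

try:
    plot_model(model, to_file='model/model.png', show_dtype=True,
               show_shapes=True, show_layer_names=True)
except TypeError:
    # Older releases don't accept show_dtype; fall back without it.
    plot_model(model, to_file='model/model.png',
               show_shapes=True, show_layer_names=True)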
I am trying to get OpenAI roboschool to run in Google Colab (I have a virtual display set up that records the environment during training and displays video afterwards). The roboschool library will import, but the environments don't show up at all. When I run:
import roboschool, gym
print("\n".join(['- ' + spec.id for spec in gym.envs.registry.all()
                 if spec.id.startswith('Roboschool')]))
the list is empty, but it should include the environments.
When CMake links shared libraries, does it do so using environment variables? Environment variables in Colab don't work as usual, and I think that may be the issue, but I don't know enough to be sure.
This output looks suspect to me; it doesn't seem right that the runtime path would be removed. There are a number of these, so I only grabbed two as examples.
-- Set runtime path of "/content/roboschool/roboschool/cpp-household/bullet_local_install/lib/libBulletDynamics.so.2.87" to ""
-- Set runtime path of "/content/roboschool/roboschool/cpp-household/bullet_local_install/lib/libBullet3Geometry.so.2.87" to ""
Here is the command sequence.
cmake -DBUILD_SHARED_LIBS=ON -DUSE_DOUBLE_PRECISION=1 \
      -DCMAKE_INSTALL_PREFIX:PATH=/content/roboschool/roboschool/cpp-household/bullet_local_install \
      -DBUILD_CPU_DEMOS=OFF -DBUILD_BULLET2_DEMOS=OFF -DBUILD_EXTRAS=OFF \
      -DBUILD_UNIT_TESTS=OFF -DBUILD_CLSOCKET=OFF -DBUILD_ENET=OFF \
      -DBUILD_OPENGL3_DEMOS=OFF ..
make -j4
make install
Is there a way I can override how paths are determined for the linked libraries, so that they link with the correct paths? It seems like looking into RPATH may be a step in the right direction.
Thanks in advance. Please let me know if additional detail is necessary.
Hard to say what's going on without more details, but if you're building shared libraries into a non-standard location that you want the Python runtime to see, you have to somehow tell the runtime where they are. Guessing based on the snippets you provided, maybe this (possibly after a runtime restart, Ctrl-M-.) will unblock you:
import os
# Prepend the install location; .get avoids a KeyError if the variable is unset.
os.environ['LD_LIBRARY_PATH'] = '/content/roboschool/roboschool/cpp-household/bullet_local_install/lib:' + os.environ.get('LD_LIBRARY_PATH', '')
If that doesn't do it for you, two other suggestions are:
Change your configuration to install to "standard" locations
Share a minimal notebook that reproduces the issue.
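One way to sanity-check the build itself, independently of the search path, is to load one of the libraries by its full path (the path below is taken from your log; this is just a probe, not part of the fix):

import ctypes

# If this raises OSError, the problem is the library itself (or its own
# dependencies), not the search path.
ctypes.CDLL('/content/roboschool/roboschool/cpp-household/bullet_local_install/lib/libBulletDynamics.so.2.87')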
I would like to log GDB command output to a log file.
This is done using the following command:
set logging file outputfile.txt
But, instead of a simple outputfile.txt, I would like to give the file a unique name, for example outputfile-PID.txt. This is because I will have several processes producing output simultaneously, and I want each one to log to its own file.
How can I programmatically derive such a file name in a GDB script?
There are a few ways.
One relatively simple way is to use gdb's "eval" command. It substitutes arguments like printf, and then executes the result as a gdb command. This is a new-ish command.
If you don't have "eval" you might still have Python scripting. In this case you can write a short (one line) Python script like:
(gdb) python gdb.execute('set logging file ' + .. your logic here ..)
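To make that concrete, a sketch using the filename pattern from the question; gdb.selected_inferior().pid gives the debuggee's PID once the process exists:

(gdb) python gdb.execute('set logging file outputfile-%d.txt' % gdb.selected_inferior().pid)
(gdb) set logging on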
If you don't have Python scripting, then your gdb is really old and you ought to upgrade. However, you can still maybe accomplish what you want, just with some awful gyrations: use "shell" to write out a script that you then "source" back into gdb. This technique seems rather hard to apply in this case, though.
I sometimes test Python modules as I develop them by running a Python interactive prompt in a terminal, importing my new module and testing out the functionality. Of course, since my code is in development there are bugs, and frequent restarts of the interpreter are required. This isn't too painful when I've only executed a couple of interpreter lines before restarting: my key sequence when the interpreter restart looks like Up Up Enter Up Up Enter... but extrapolate it to 5 or more statements to be repeated and it gets seriously painful!
Of course I could put my test code into a script which I execute with python -i, but this is such a scratch activity that it doesn't seem quite "above threshold" for opening a text editor :) What I'm really pining for is the Ctrl-r behaviour from the bash shell: executing a sequence of 10 commands in sequence in bash involves finding the command in history (repeated Up or Ctrl-r for a search -- both work in the Python interpreter shell) and then just pressing Ctrl-o ten times. One of my favourite bash shell features.
The problem is that while lots of other readline binding functionality like Ctrl-a, Ctrl-e, Ctrl-r, and Ctrl-s work in the Python interpreter, Ctrl-o does not. I've not been able to find any references to this online, although perhaps the readline module can be used to add this functionality to the python prompt. Any suggestions?
Edit: Yes, I know that using the interactive interpreter is not a development methodology that scales beyond a few lines! But it is convenient for small tests, and IMO the interactiveness can help to work out whether a developing API is natural and convenient, or too heavy. So please confine the answers to the technical question of whether readline history-stepping can be made to work in python, rather than the side-opinion of whether one should or shouldn't choose to (sometimes) work this way!
Edit: Since posting I realised that I am already using the readline module to make some Python interpreter history functions work. But the Ctrl-o binding to the operate-and-get-next readline command doesn't seem to be supported, even if I put readline.parse_and_bind("Control-o: operate-and-get-next") in my PYTHONSTARTUP file.
I often test Python modules as I develop them by running a Python interactive prompt in a terminal, importing my new module and testing out the functionality.
Stop using this pattern: start writing your test code in a file, and your life will be much easier.
No matter what, running that file will be less trouble.
If you make the checks automatic rather than reading the results, it will be quicker and less error-prone to check your code.
You can save that file when you're done and run it whenever you change your code or environment.
You can perform metrics on the tests, like making sure you don't have parts of your code you didn't test.
Are you familiar with the unittest module?
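If not, a minimal sketch of the pattern; the module and function names here are made up for illustration:

import unittest
from mymodule import add_numbers  # hypothetical module under development

class TestAddNumbers(unittest.TestCase):
    def test_basic(self):
        self.assertEqual(add_numbers(2, 3), 5)

if __name__ == '__main__':
    unittest.main()  # run the whole file with: python test_mymodule.py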
Answering my own question, after some discussion on the python-ideas list: despite contradictory information in some readline documentation it seems that the operate-and-get-next function is in fact defined as a bash extension to readline, not by core readline.
So that's why Ctrl-o neither behaves as hoped by default when importing the readline module in a Python interpreter session, nor when attempting to manually force this binding: the function doesn't exist in the readline library to be bound.
A Google search reveals https://bugs.launchpad.net/ipython/+bug/382638, on which the GNU readline maintainer gives reasons for not adding this functionality to core readline and says that it should be implemented by the calling application. He also says "its implementation is not complicated", although it's not obvious to me how (or whether it's even possible) to do this as a pure Python extension to the readline module behaviour.
So no, this is not possible at the moment, unless the operate-and-get-next function from bash is explicitly implemented in the Python readline module or in the interpreter itself.
This isn't exactly an answer to your question, but if that is your development style you might want to look at DreamPie. It is a GUI wrapper for the Python terminal that provides various handy shortcuts. One of these is the ability to drag-select across the interpreter display and copy only the code (not the output). You can then paste this code in and run it again. I find this handy for the type of workflow you describe.
Your best bet will be to check out this project: http://ipython.org
For example, it gives you incremental history search with Ctrl+R, just like bash.
EDIT
If you are running Debian or a derivative:
sudo apt-get install ipython
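As an illustration of how this fits the workflow in the question, IPython can re-run a whole range of earlier input lines in one go with the %rerun magic (the line numbers below are arbitrary, and mymodule is a stand-in name):

In [1]: import mymodule
In [2]: obj = mymodule.Thing()
In [3]: obj.frobnicate()

In [4]: %rerun 1-3   # re-executes history lines 1 through 3 from this session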