I am trying to load a module on my workers after creating them with addprocs. When addprocs is called at the top level, everything is fine. However, I am not able to do the same thing when I wrap the code in a function.
In my case, I am adding workers dynamically, so it is not feasible to always call @everywhere using XXX at the top level; I need to do this inside a function.
In short, this works:
addprocs(1)
@everywhere using XXX
and this doesn't:
function myaddprocs()
    addprocs(1)
    @everywhere using XXX
end
Any ideas?
After a bit more investigation, I have pinpointed some of the issues that were preventing my code from working.
Imports must happen after addprocs. If a package was already imported before addprocs, the import must be repeated afterwards, prefixed with @everywhere.
Top-level expressions (such as using) do not work inside functions unless wrapped in an eval statement.
A fix to my code would be:
function myaddprocs()
    addprocs(1)
    eval(macroexpand(quote @everywhere using XXX end))
end
Examples
I have tested the following snippets on Julia 0.6.1. I have also tested them with the same version on an SGE cluster (OGS/GE 2011.11p1), substituting addprocs_sge for every addprocs and importing ClusterManagers.jl. The following snippets work:
using after addprocs:
addprocs(1)
using SpecialFunctions
pmap(x->SpecialFunctions.sinint(1), workers())
using before and after addprocs, the second with @everywhere:
using SpecialFunctions
addprocs(1)
@everywhere using SpecialFunctions
pmap(x->sinint(1), workers())
using wrapped in eval after addprocs, within a function:
function getprocs()
    addprocs(1)
    eval(Expr(:using,:SpecialFunctions))
    pmap(x->SpecialFunctions.sinint(1), workers())
end
getprocs()
Same as before, with @everywhere applied to the eval:
function getprocs()
    addprocs(1)
    @everywhere eval(Expr(:using,:SpecialFunctions))
    pmap(x->sinint(1), workers())
end
getprocs()
Same as before, with @everywhere inside the eval instead:
function getprocs()
    addprocs(1)
    eval(macroexpand(quote @everywhere using SpecialFunctions end))
    pmap(x->sinint(1), workers())
end
getprocs()
These snippets, on the other hand, do not work:
using before addprocs:
using SpecialFunctions
addprocs(1)
pmap(x->SpecialFunctions.sinint(1), workers())
using before and after addprocs:
using SpecialFunctions
addprocs(1)
using SpecialFunctions
pmap(x->SpecialFunctions.sinint(1), workers())
using at top level, then @everywhere using within a function:
using SpecialFunctions
function getprocs()
    addprocs(1)
    @everywhere using SpecialFunctions
    pmap(x->sinint(1), workers())
end
getprocs()
Bare using within a function:
function getprocs()
    addprocs(1)
    using SpecialFunctions
    pmap(x->SpecialFunctions.sinint(1), workers())
end
getprocs()
You don't need @everywhere. This works for me:
addprocs()
using mymodule # loads code on all procs but brings it into scope only on the master process
This is what you want if you then do pmap(x->fun(x), workers()) and fun is exported from mymodule. You can read up on this here: https://docs.julialang.org/en/release-0.6/manual/parallel-computing/#Code-Availability-and-Loading-Packages-1
Cako's solution is helpful, but I had to add the module as the first argument to macroexpand:
eval(macroexpand(Distributed, quote @everywhere using DistributedArrays end))
Related
I am new to Python and am currently studying artificial intelligence, working in Spyder (Python 3.9). I attach a screenshot of the code. After executing the code, I expected this output:
Binarized data:
[[1. 0. 1.]
 [0. 1. 0.]
 [1. 0. 0.]
 [1. 0. 0.]]
In Python it is important to write one statement on one line:
data_binarized = preprocessing.Binarizer(your_code)
If you want to write it across two lines, you can use implicit line continuation (possible only inside parentheses, brackets, and braces):
data_binarized = preprocessing.Binarizer(
    your_code)
As a second option, you can use a backslash (explicit line continuation):
data_binarized = \
    preprocessing.Binarizer(your_code)
For more information, see this answer:
https://stackoverflow.com/a/4172465/21187993
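Putting it together, here is a complete, runnable sketch. The input array and the threshold value are assumptions reconstructed from the expected output in the question, not the original code:

import numpy as np
from sklearn import preprocessing

# Values assumed for illustration; only the shape and the expected
# binarized result are taken from the question.
input_data = np.array([[5.1, -2.9, 3.3],
                       [-1.2, 7.8, -6.1],
                       [3.9, 0.4, 2.1],
                       [7.3, -9.9, -4.5]])

# Binarizer maps values strictly greater than the threshold to 1, the rest to 0
data_binarized = preprocessing.Binarizer(threshold=2.1).transform(input_data)
print("Binarized data:\n", data_binarized)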
For example, if I have a pipe function:
def process_data(weighting, period, threshold):
    # do stuff
Can I get autocompletion on the process_data arguments?
There are a lot of arguments to remember, and I would like to make sure they get passed in correctly. In IPython, the function can autocomplete to show me the keyword args, which is really neat, but I would like it to do this when piping a pandas dataframe too!
I don't see how this would be possible, but then again, I'm truly in awe of IPython and all its greatness. So, is this possible? If not, are there other hacks that people have come up with?
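For reference, here is a minimal sketch of the call I want completion inside of. Note that pipe passes the dataframe as the first argument, so the function has to accept it; the argument values here are made up:

import pandas as pd

def process_data(df, weighting, period, threshold):
    # do stuff with df here
    return df

df = pd.DataFrame({"a": [1, 2, 3]})
# Hitting Tab inside the parentheses should ideally suggest
# weighting, period, and threshold as keyword arguments.
df.pipe(process_data, weighting=0.5, period=30, threshold=1.2)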
Install the pyreadline library.
$ pip install pyreadline
Update:
It seems like this problem is specific to some versions of IPython. The solution is the following:
Run the command below from the terminal:
$ ipython profile create
It will create a default profile at ~/.ipython/profile_default/ipython_config.py.
Now edit this ipython_config.py, add the lines below, and it will solve the issue.
c = get_config()
c.Completer.use_jedi = False
Reference:
https://github.com/jupyter/notebook/issues/2435
https://ipython.readthedocs.io/en/stable/config/intro.html
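If editing the profile file is inconvenient, the same setting can also be toggled for just the current session from inside IPython, using the standard %config magic:

%config Completer.use_jedi = False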
I'm trying to remove accents with Pentaho. It's possible to use a regular expression, but is there another, simpler way?
I'm using Pentaho 6.
The simplest way is to use a Modified Java Script step with Apache Commons. This library is already packaged inside the tool. With the following code you are ready to go:
COLUMN_SANITIZED = org.apache.commons.lang3.StringUtils.stripAccents(COLUMN_SANITIZED);
Don't forget to add this column in the Fields section of the Modified Java Script step, with the option Replace value 'Fieldname' or 'Rename To' set to Yes.
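If you want to sanity-check the result outside Pentaho, the same accent stripping can be reproduced in plain Python with the standard library; this is only an illustration of the operation, not part of the Pentaho step:

import unicodedata

def strip_accents(text):
    # Decompose accented characters (NFKD), then drop the combining marks
    decomposed = unicodedata.normalize('NFKD', text)
    return ''.join(ch for ch in decomposed if not unicodedata.combining(ch))

print(strip_accents('Pentaho é fácil'))  # -> Pentaho e facil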
I need help trying to add a new entity and train my own model with spaCy named entity recognition. I wanted first to try the example already done here:
https://github.com/explosion/spaCy/blob/master/examples/training/train_new_entity_type.py
but I am getting this error:
ipykernel_launcher.py: error: unrecognized arguments: -f /root/.local/share/jupyter/runtime/kernel-c46f384e-5989-4902-a775-7618ffadd54e.json
An exception has occurred, use %tb to see the full traceback.
SystemExit: 2
/usr/local/lib/python3.6/dist-packages/IPython/core/interactiveshell.py:2890: UserWarning: To exit: use 'exit', 'quit', or Ctrl-D.
warn("To exit: use 'exit', 'quit', or Ctrl-D.", stacklevel=1)
I tried looking into all related questions and answers and couldn't resolve this.
Thank you for your help.
It looks like you're running the code from a Jupyter notebook, right? All spaCy examples are designed as fully standalone scripts to run from the command line. They use the Python library plac for generating the command-line interface, so you can run the script with arguments. Jupyter however seems to add another command-line option -f, which causes a conflict with the existing command-line interface.
As a solution, you could execute the script directly instead, for example:
python train_new_entity_type.py
Or, with command line arguments:
python train_new_entity_type.py --model en_core_web_sm --n-iter 20
Alternatively, you could also remove the @plac.annotations decorator and the plac.call(main) line and just execute the main() function directly in your notebook.
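For example, a minimal sketch of that last option, assuming main() keeps the keyword arguments that back the CLI flags shown above:

# In a notebook cell, after removing @plac.annotations and plac.call(main)
# and making main() importable (or pasting it into the notebook):
main(model='en_core_web_sm', n_iter=20)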
I have searched everywhere for this and could not find an answer. I am using os.system to print to a printer, but it prints it off as a portrait and I need it to print off as Landscape. I assume there is a simple way to add something within the os.system command to get this to work, but I cannot figure out what it is. This is how I am printing it off right now:
os.system('lp "file.png"')
Try os.system('lp -o landscape "file.png"')
OK, it was a bug, but here is a hint on convenience:
I usually replace os.system with the following snippet:
from subprocess import (PIPE, Popen)

def invoke(command):
    '''
    Invoke process and return its output.
    '''
    return Popen(command, stdout=PIPE, shell=True).stdout.read()
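So the landscape print above becomes, for example:

output = invoke('lp -o landscape "file.png"')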
Or, if you are more comfortable with the sh library, try:
from sh import lp
lp('-o', 'landscape', 'file.png')