I am trying to run PyTorch code on three nodes using OpenMPI, but the code just halts without any errors or output. My eventual goal is to distribute a PyTorch graph across these nodes.
The three nodes are connected to the same LAN, have passwordless SSH access to each other, and have similar specifications:
Ubuntu 18.04
Cuda 10.0
OpenMPI built and installed from source
PyTorch built and installed from source
The code shown below works for a single node with multiple processes, run as:
> mpirun -np 3 -H 192.168.100.101:3 python3 run.py
With the following output:
INIT 0 of 3 Init env://
INIT 1 of 3 Init env://
INIT 2 of 3 Init env://
RUN 0 of 3 with tensor([0., 0., 0.])
RUN 1 of 3 with tensor([0., 0., 0.])
RUN 2 of 3 with tensor([0., 0., 0.])
Rank 1 has data tensor(1.)
Rank 0 has data tensor(1.)
Rank 2 has data tensor(1.)
But when I place the code on all three nodes and run the following command on each node separately, it does nothing:
> mpirun -np 3 -H 192.168.100.101:1,192.168.100.102:1,192.168.100.103:1 python3 run.py
What modifications to the code or MPI configuration are needed to run the given PyTorch code on multiple nodes?
#!/usr/bin/env python
import os
import torch
import torch.distributed as dist
from torch.multiprocessing import Process

def run(rank, size):
    tensor = torch.zeros(size)
    print(f"RUN {rank} of {size} with {tensor}")
    # incrementing the old tensor
    tensor += 1
    # sending tensor to next rank
    if rank == size - 1:
        dist.send(tensor=tensor, dst=0)
    else:
        dist.send(tensor=tensor, dst=rank + 1)
    # receiving tensor from previous rank
    if rank == 0:
        dist.recv(tensor=tensor, src=size - 1)
    else:
        dist.recv(tensor=tensor, src=rank - 1)
    print('Rank ', rank, ' has data ', tensor[0])

def init_processes(rank, size, fn, backend, init):
    print(f"INIT {rank} of {size} Init {init}")
    dist.init_process_group(backend, init, rank=rank, world_size=size)
    fn(rank, size)

if __name__ == "__main__":
    os.environ['MASTER_ADDR'] = '192.168.100.101'
    os.environ['BACKEND'] = 'mpi'
    os.environ['INIT_METHOD'] = 'env://'
    world_size = int(os.environ['OMPI_COMM_WORLD_SIZE'])
    world_rank = int(os.environ['OMPI_COMM_WORLD_RANK'])
    init_processes(world_rank, world_size, run, os.environ['BACKEND'], os.environ['INIT_METHOD'])
N.B. NCCL is not an option for me due to arm64-based hardware.
Apologies for replying late to this, but I was able to solve the issue by adding the --mca btl_tcp_if_include eth1 flag to the mpirun command.
The reason for the halt was that OpenMPI, by default, tries to locate and communicate with other nodes over the local loopback network interface, e.g. lo. We have to explicitly specify which interface(s) should be included (or excluded) to locate the other nodes.
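With that flag, the full command from above becomes:
> mpirun -np 3 -H 192.168.100.101:1,192.168.100.102:1,192.168.100.103:1 --mca btl_tcp_if_include eth1 python3 run.py
Here eth1 was the LAN-facing interface on my nodes; substitute whichever interface carries your 192.168.100.0/24 network.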
I hope this saves someone's day :)
This should be a simple task: download a model saved in tensorflow_hub format, load it using tensorflow_hub, and use it.
This is the model I am trying to use (simCLR stored in Google Cloud): https://console.cloud.google.com/storage/browser/simclr-checkpoints/simclrv2/pretrained/r50_1x_sk0;tab=objects?pageState=(%22StorageObjectListTable%22:(%22f%22:%22%255B%255D%22))&prefix=&forceOnObjectsSortingFiltering=false
I downloaded the /hub folder as they describe, using:
gsutil -m cp -r \
"gs://simclr-checkpoints/simclrv2/pretrained/r50_1x_sk0/hub" \
.
The /hub folder contains the files:
/saved_model.pb
/tfhub_module.pb
/variables/variables.index
/variables/variables.data-00000-of-00001
So far so good.
Now, in Python 3 with TensorFlow 2 and tensorflow_hub 0.12, I run the following code:
import numpy as np
import tensorflow as tf
import tensorflow_hub as hub
path_to_hub = '/home/my_name/my_path/simclr/hub'
# Attempt 1
m = tf.keras.models.Sequential([hub.KerasLayer(path_to_hub, input_shape=(224,224,3))])
# Attempt 2
m = tf.keras.models.Sequential(hub.KerasLayer(path_to_hub))
m.build(input_shape=[None,224,224,3])
# Attempt 3
m = hub.KerasLayer(hub.load(path_to_hub))
# Toy Data Test
X = np.random.random((1,244,244,3)).astype(np.float32)
y = m.predict(X)
None of these three options for loading the hub model works; they fail with the following errors:
Attempt 1 :
ValueError: Error when checking input: expected keras_layer_2_input to have shape (224, 224, 3) but got array with shape (244, 244, 3)
Attempt 2:
tensorflow.python.framework.errors_impl.UnknownError: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
[[{{node sequential_3/keras_layer_3/StatefulPartitionedCall/base_model/conv2d/Conv2D}}]] [Op:__inference_keras_scratch_graph_46402]
Function call stack:
keras_scratch_graph
Attempt 3:
ValueError: Expected a string, got <tensorflow.python.training.tracking.tracking.AutoTrackable object at 0x7fa71c7a2dd0>
These three attempts all use code taken from tensorflow_hub tutorials and repeated in other Stack Overflow answers, but none of them works, and I don't know how to proceed from those error messages.
Appreciate any help, thanks.
Update 1:
The same issues happen if I try with this ResNet50 hub module:
https://storage.cloud.google.com/simclr-gcs/checkpoints/ResNet50_1x.zip
As @Frightera pointed out, there was an error with the input shapes. Also, the error in "Attempt 2" was solved by allowing memory growth on the selected GPU. "Attempt 3" still does not work, but at least there are two working methods for loading and using a model saved in /hub format:
import numpy as np
import tensorflow as tf
import tensorflow_hub as hub
gpus = tf.config.experimental.list_physical_devices('GPU')
tf.config.experimental.set_visible_devices(gpus[0], 'GPU')
tf.config.experimental.set_memory_growth(gpus[0], True)
hubmod = 'https://tfhub.dev/google/imagenet/mobilenet_v2_035_96/feature_vector/5'
# Alternative 1 - Works!
m = tf.keras.models.Sequential([hub.KerasLayer(hubmod, input_shape=(96,96,3))])
print(m.summary())
# Alternative 2 - Works!
m = tf.keras.models.Sequential(hub.KerasLayer(hubmod))
m.build(input_shape=[None, 96,96,3])
print(m.summary())
# Alternative 3 - Doesn't work
#m = hub.KerasLayer(hub.load(hubmod))
#m.build(input_shape=[None, 96,96,3])
#print(m.summary())
# Test
X = np.random.random((1,96,96,3)).astype(np.float32)
y = m.predict(X)
print(y.shape)
I am trying to come up with a method for testing a number of Jupyter notebooks. A test should run when a new notebook is added in a GitHub branch and submitted as a pull request. The tests are not that complicated; they mostly just check that the notebook runs end-to-end without any errors, plus maybe a few asserts. However:
There are certain calls in some cells that need to be mocked, e.g. a call to download the data from a database.
There may be some magic cells in the notebooks which run a pip command or something else.
I am open to using any testing library, such as pytest or unittest, although pytest is preferred.
I looked at a few libraries for testing notebooks, such as nbmake, treon, and testbook, but I was unable to make them work. I also tried converting the notebook to a Python file, but the magic cells were converted into get_ipython().run_cell_magic(...) calls, which became a problem, since pytest uses Python rather than IPython, and get_ipython() is only available in IPython.
So, I am wondering what a good way to test Jupyter notebooks is, with all of that in mind. Any help is appreciated.
One straightforward approach I've already used is to execute the entire notebook with nbconvert.
A notebook failed.ipynb that raises an exception will result in a failed run, thanks to the --execute option that tells nbconvert to execute the notebook prior to its conversion.
jupyter nbconvert --to notebook --execute failed.ipynb
# ...
# Exception: FAILED
echo $?
# 1
Conversely, a correct notebook passed.ipynb will result in a successful export.
jupyter nbconvert --to notebook --execute passed.ipynb
# [NbConvertApp] Converting notebook passed.ipynb to notebook
# [NbConvertApp] Writing 1172 bytes to passed.nbconvert.ipynb
echo $?
# 0
As the cherry on the cake, you can do the same through the API and wrap it in pytest!
import nbformat
import pytest
from nbconvert.preprocessors import ExecutePreprocessor

@pytest.mark.parametrize("notebook", ["passed.ipynb", "failed.ipynb"])
def test_notebook_exec(notebook):
    with open(notebook) as f:
        nb = nbformat.read(f, as_version=4)
        ep = ExecutePreprocessor(timeout=600, kernel_name='python3')
        try:
            assert ep.preprocess(nb) is not None, f"Got empty notebook for {notebook}"
        except Exception:
            assert False, f"Failed executing {notebook}"
Running the test gives:
pytest test_nbconv.py
# FAILED test_nbconv.py::test_notebook_exec[failed.ipynb] - AssertionError: Failed executing failed.ipynb
# PASSED test_nbconv.py::test_notebook_exec[passed.ipynb]
Notes
There are several output formats; I've used notebook here.
This doesn't convert the notebook to a different format per se; instead, it allows running nbconvert preprocessors on the notebook and/or converting to other notebook formats.
The Python code example is just a quick draft; it can be improved considerably.
Here is my own solution, using testbook. Let's say I have a notebook called my_notebook.ipynb with the following content:
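(The notebook's cells aren't reproduced here; the sketch below is an assumption consistent with the test that follows: the BigQuery query, the table name, and the value of x are all made up.)

# Cell 0: imports
import pandas as pd
from google.cloud import bigquery

# Cell 2: the BigQuery call that the test will mock
client = bigquery.Client()
dataframe = client.query("SELECT week, count FROM my_table").result().to_dataframe()

# Cell 3: some value checked by the test
x = 7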
The trick is to inject a cell before my call to bigquery.Client and mock it:
from testbook import testbook

@testbook('./my_notebook.ipynb')
def test_get_details(tb):
    tb.inject(
        """
        import mock
        mock_client = mock.MagicMock()
        mock_df = pd.DataFrame()
        mock_df['week'] = range(10)
        mock_df['count'] = 5
        p1 = mock.patch.object(bigquery, 'Client', return_value=mock_client)
        mock_client.query().result().to_dataframe.return_value = mock_df
        p1.start()
        """,
        before=2,
        run=False
    )
    tb.execute()
    dataframe = tb.get('dataframe')
    assert dataframe.shape == (10, 2)
    x = tb.get('x')
    assert x == 7
Hi, I have two GPUs, and sometimes I want to run one script on GPU:0 and another on GPU:1. The question is: how can I execute a Python script on a specific GPU, i.e. how do I bind script execution to a particular GPU? Looking ahead, I'll say that I already know about with tf.device('/device:GPU:1'):.
I thought I could solve my issue using tf.config.experimental.set_visible_devices, but I was wrong. The plan was simple: at the top of the code, set up the required GPU; then, as I thought, the script would run on the GPU that I made visible. But I was wrong; see the code below. When I run the script, I get this error:
TensorFlow device (GPU:0) is being mapped to multiple CUDA devices (1 now, and 0 previously),
which is not supported. This may be the result of providing different GPU configurations
(ConfigProto.gpu_options, for example different visible_device_list) when creating multiple
Sessions in the same process. This is not currently supported, see https://github.com/tensorflow/tensorflow/issues/19083
import tensorflow as tf

def work():
    a = tf.constant([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])
    b = tf.constant([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
    for i in range(100000):
        c = tf.matmul(a, b)

def main():
    print(tf.__version__)
    tf.config.set_soft_device_placement(False)
    tf.debugging.set_log_device_placement(True)
    gpus = tf.config.experimental.list_physical_devices('GPU')
    if gpus:
        try:
            print(f'set visible GPU device as {gpus[1]}')
            tf.config.experimental.set_visible_devices(gpus[1], 'GPU')
        except RuntimeError as e:
            print(e)
    device_name = tf.test.gpu_device_name()
    print(device_name)
    with tf.device('/device:GPU:1'):
        work()

if __name__ == "__main__":
    main()
I believe I'm not the first person to run into this issue; could somebody share the solution?
As it turns out, the answer is simple; here is what I found out. When you have 2 GPUs and call tf.config.experimental.set_visible_devices(gpus[1], 'GPU'), only one GPU will be available to your script: Device:1 (in my example). BUT, and this is important, the name (number) of this device will be Device:0 instead of Device:1 as you might expect. The reason is that the system re-enumerates all GPUs visible to your script from scratch, so their numbering starts from 0 again. Even though your real device has the name Device:1, inside your script it will be available as Device:0, not Device:1.
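A minimal sketch of the corrected approach, given this re-enumeration (assuming two physical GPUs; the toy matmul is just filler):

import tensorflow as tf

# Make only the second physical GPU visible to this process.
gpus = tf.config.experimental.list_physical_devices('GPU')
tf.config.experimental.set_visible_devices(gpus[1], 'GPU')

# Visible devices are re-enumerated from zero, so the second
# physical GPU is now addressed as GPU:0 inside this script.
with tf.device('/device:GPU:0'):
    a = tf.constant([[1.0, 2.0], [3.0, 4.0]])
    b = tf.constant([[1.0], [2.0]])
    print(tf.matmul(a, b))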
I hope this can help someone like me.
I want to achieve parallel computing with Python's multiprocessing module, so I implemented a simulated calculation to test whether I can use multiple CPU cores. I found something very strange: a single process can push 8 CPUs to 100% usage on Windows Subsystem for Linux (WSL) on my desktop, whereas on Ubuntu on the lab's server a single process only pushes one CPU to 100%.
Like this: [screenshot: all 8 CPUs near 100% on WSL]
And this is the contrast: [screenshot: only one CPU at 100% on the lab's server]
Furthermore, I found that using multiple processes does not reduce the time cost on WSL on my desktop, but it does largely reduce the time cost on Ubuntu on the lab's server.
Like this: [screenshot of the timings]
(Here I ran 6 processes; running a single process on the lab's server takes about 440s.)
And this is the contrast: [screenshot of the timings]
(Here I ran 3 processes; running a single process on my desktop takes about 29s.)
Here is my Python source code:
import numpy as np
import time
import os
import multiprocessing as mp

PROCESS_MAX = 1
LOOPS = 1
process_list = []

def simulated_calculation():
    x = np.random.rand(100, 100)
    y = np.random.rand(100, 100)
    z = np.outer(x, y)
    determinant = np.linalg.det(z)

def child_process(name):
    for i in range(LOOPS):
        print("The child process[%s] starts at %s and its PID is %s" % (str(name), time.ctime(), os.getpid()))
        simulated_calculation()
        print("The child process[%s] stops at %s and its PID is %s" % (str(name), time.ctime(), os.getpid()))

def main():
    print("All start at %s" % time.ctime())
    print("The parent process starts at %s and its PID is %s" % (time.ctime(), os.getpid()))
    start_wall_time = time.time()
    for i in range(PROCESS_MAX):
        p = mp.Process(target=child_process, args=(i + 1,))
        process_list.append(p)
        p.daemon = True
        p.start()
    for i in process_list:
        i.join()
    stop_wall_time = time.time()
    print("All stop at %s" % time.ctime())
    print("The whole runtime is %ss" % str(stop_wall_time - start_wall_time))

if __name__ == "__main__":
    main()
I hope someone can help me. Thanks!
WSL1 has a virtual layer through which Windows device drivers are passed. WSL2, on the other hand, has more access thanks to a real Linux kernel being in place. However, direct hardware access is unavailable to WSL1, except for USB. Hardware such as USB and GPU devices is currently not available to WSL2 either, but support is being worked on.
I have access through SSH to a cluster of n GPUs. TensorFlow automatically gave them the names gpu:0, ..., gpu:(n-1).
Others have access too, and sometimes they take random GPUs.
I did not place any tf.device() explicitly, because that is cumbersome; even if I selected GPU number j, someone might already be on GPU number j, which would be problematic.
I would like to go through the GPUs' usage, find the first one that is unused, and use only that one.
I guess someone could parse the output of nvidia-smi with bash, get a variable i, and feed that variable i to the tensorflow script as the number of the GPU to use.
I have never seen an example of this, and I imagine it is a pretty common problem. What would be the simplest way to do that? Is a pure TensorFlow one available?
I'm not aware of a pure-TensorFlow solution. The problem is that the existing place for TensorFlow configuration is the Session config. However, for GPU memory, a GPU memory pool is shared across all TensorFlow sessions within a process, so the Session config would be the wrong place for it, and there's no mechanism for process-global config (but there should be, to also be able to configure the process-global Eigen threadpool). So you need to do it at the process level, using the CUDA_VISIBLE_DEVICES environment variable.
Something like this:
import subprocess, re
# Nvidia-smi GPU memory parsing.
# Tested on nvidia-smi 370.23
def run_command(cmd):
"""Run command, return output as string."""
output = subprocess.Popen(cmd, stdout=subprocess.PIPE, shell=True).communicate()[0]
return output.decode("ascii")
def list_available_gpus():
"""Returns list of available GPU ids."""
output = run_command("nvidia-smi -L")
# lines of the form GPU 0: TITAN X
gpu_regex = re.compile(r"GPU (?P<gpu_id>\d+):")
result = []
for line in output.strip().split("\n"):
m = gpu_regex.match(line)
assert m, "Couldnt parse "+line
result.append(int(m.group("gpu_id")))
return result
def gpu_memory_map():
"""Returns map of GPU id to memory allocated on that GPU."""
output = run_command("nvidia-smi")
gpu_output = output[output.find("GPU Memory"):]
# lines of the form
# | 0 8734 C python 11705MiB |
memory_regex = re.compile(r"[|]\s+?(?P<gpu_id>\d+)\D+?(?P<pid>\d+).+[ ](?P<gpu_memory>\d+)MiB")
rows = gpu_output.split("\n")
result = {gpu_id: 0 for gpu_id in list_available_gpus()}
for row in gpu_output.split("\n"):
m = memory_regex.search(row)
if not m:
continue
gpu_id = int(m.group("gpu_id"))
gpu_memory = int(m.group("gpu_memory"))
result[gpu_id] += gpu_memory
return result
def pick_gpu_lowest_memory():
"""Returns GPU with the least allocated memory"""
memory_gpu_map = [(memory, gpu_id) for (gpu_id, memory) in gpu_memory_map().items()]
best_memory, best_gpu = sorted(memory_gpu_map)[0]
return best_gpu
You can then put it in utils.py and set the GPU in your TensorFlow script before the first tensorflow import, i.e.:
import utils
import os
os.environ["CUDA_VISIBLE_DEVICES"] = str(utils.pick_gpu_lowest_memory())
import tensorflow
An implementation along the lines of Yaroslav Bulatov's solution is available on https://github.com/bamos/setGPU.
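If I remember correctly, usage there is a single import placed before TensorFlow; on import, setGPU inspects nvidia-smi and sets CUDA_VISIBLE_DEVICES to the least-utilized GPU (check the repo's README for the authoritative details):

import setGPU      # assumption: sets CUDA_VISIBLE_DEVICES as a side effect of import
import tensorflow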