TensorBoard error : path /[[_dataImageSrc]] not found - tensorflow

I was trying to run the TensorBoard example mnist_with_summaries.py, and when I tried to open an instance of TensorBoard I got this:
command:
'tensorboard --logdir=/output --host localhost --port=6006'
output:
'TensorBoard 1.5.1 at http://localhost:6006 (Press CTRL+C to quit)
W0219 15:12:02.944875 Thread-1 application.py:273] path /[[_dataImageSrc]] not found, sending 404'
When I try to open http://localhost:6006, my browser crashes.
I am on Ubuntu 16.04, with tensorflow-gpu version 1.5.1.

I found a solution for this:
DO NOT use quotes ("" or '') around the logdir path. For example, if you cd into the parent folder of your log files (training is my log files' parent folder), use "tensorboard --logdir=training" instead of "tensorboard --logdir='training'". This method works in my environment.

This happens because you started TensorBoard from the wrong directory. For example, if the event file is E:/path/to/log/events.out.tfevents.1526472789.DESKTOP-CUF1KNI,
you should cd to the E:/path/to directory and use "tensorboard --logdir=log".
If you instead cd into the E:/path/to/log directory, you will get this error.
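The directory logic above can be sketched in a shell session (the paths below are illustrative stand-ins for E:/path/to, and the tensorboard invocations are shown as comments):

```shell
# Recreate the layout from the answer above (illustrative paths):
mkdir -p /tmp/tb-demo/path/to/log

cd /tmp/tb-demo/path/to        # parent of the log directory
test -d log && echo "ok: --logdir=log resolves to path/to/log"
# tensorboard --logdir=log     # <- the working invocation

cd /tmp/tb-demo/path/to/log    # inside the log directory itself
test -d log || echo "error: --logdir=log would resolve to log/log"
# tensorboard --logdir=log     # <- the failing invocation
```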

Related

Can't attach solidity-lsp for .sol files in neovim

At first, solidity-ls, installed by mason via the neovim command
:LspInstall solidity-ls
wasn't executable from the terminal, although it was shown as executable when typing :LspInfo in a .sol file. So I installed it manually via npm:
npm i solidity-ls -g
This is the configuration for solidity-ls:
require'lspconfig'.solidity.setup({
  on_attach = on_attach,
})
With this empty configuration, however, the root directory is not found, and solidity-ls is not attached to the file, which I guess is just a consequence.
After adding root_dir manually in the following way and sourcing the file via luafile %,
require'lspconfig'.solidity.setup({
  on_attach = on_attach,
  root_dir = function(fname)
    return vim.fn.getcwd()
  end,
})
I get this error
LSP[solidity]: Error SERVER_REQUEST_HANDLER_ERROR: "/usr/share/nvim/runtime/lua/vim/lsp/handlers.lua:86: bad argument #1 to 'ipairs' (table expected, got nil)"
Although :LspInfo shows root_dir is set up, solidity-ls is still not attached.
Any suggestions on how to solve this?
Ubuntu 20.04.3 LTS
NVIM v0.7.2
I am new to vim, neovim and lua programming, so apologies in advance in case the question is too silly. Thanks.

Flatpak Intellij Idea - problem with subversion executable

After installing IntelliJ IDEA using flatpak on Clear Linux, I'm not able to make it run the svn executable.
I added --filesystem=host to the flatpak permissions and tried to set the executable path to /run/host/usr/bin/svn, but with no luck (the path is available/exists, though IntelliJ keeps complaining).
svn command is normally available from system terminal.
When I try to run the /run/host/usr/bin/svn command via the IntelliJ IDEA built-in terminal, I get an error that a library is not available:
sh-5.0$ /run/host/usr/bin/svn
/run/host/usr/bin/svn: error while loading shared libraries: libsvn_client-1.so.0: cannot open shared object file: No such file or directory
I also tried flatpak-spawn. The following command works perfectly fine in the IntelliJ IDEA built-in terminal:
/usr/bin/flatpak-spawn --host /usr/bin/svn, though when set as the path to the svn executable it still gives me the IntelliJ IDEA error:
"The path to Subversion executable is probably wrong"
Could anybody please help with making it work?
TLDR: You probably need to add the path to svn into your IntelliJ terminal Path.
Details:
It looks like you are having a path issue. I had a similar problem running kubectl from PyCharm installed from a flatpak on Pop!_OS.
If I try to run kubectl I see the following:
I have kubectl installed in /usr/local/bin. This is a screenshot from my 'normal' terminal.
In the PyCharm terminal this location is mounted under /run/host/usr/local/bin/.
If I look at my path in the PyCharm terminal, it is not there.
So I'll add the /run/host/usr/local/bin/ to my path and I can then run kubectl:
To make sure this comes up all the time, I need to add the PATH to the Terminal settings:
I can now execute any of the commands in my /usr/local/bin dir.
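A minimal sketch of the fix described above, assuming the host binaries are mounted at /run/host/usr/local/bin inside the flatpak sandbox:

```shell
# Prepend the host mount to PATH for the current flatpak terminal session
export PATH="/run/host/usr/local/bin:$PATH"
echo "$PATH" | grep -q '^/run/host/usr/local/bin:' && echo "PATH updated"
```

For a permanent fix, add the same directory to the PATH in the IDE's Terminal settings, as described above.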
I found a really ugly solution for dealing with SVN with the JetBrains family, which does actually answer the question, but in a very roundabout way. Unfortunately, Alex Nelson's solution didn't work for me.
You would think the Flatpak would come with a valid SVN, since it's actually part of the expected requirements for the program...
When in the terminal, you can run
cd ..
/usr/bin/flatpak-spawn --host vim ./svn
Then press i to go into insert mode, and paste the following into the opened text file (basically, this creates an executable wrapper which passes its arguments on to the flatpak-spawn invocation):
#!/bin/bash
/usr/bin/flatpak-spawn --host /usr/bin/svn "$@"
Save and quit from vim (ESC, then :wq!). Make it executable:
chmod +x svn
Then in IntelliJ's menu, set the "path to svn" to
/home/<yourusername>/IdeaProjects/svn
It's worked for everything I've tried... Hope this helps out anyone else who was struggling with this.
I am using a similar solution to caluga.
#!/bin/sh
cd
exec /usr/bin/env -- flatpak-spawn --host /usr/bin/env -- svn "$@"
exec makes it replace the wrapper script process, so the wrapper script process can end.
I'm using /bin/sh instead of /bin/bash, as bash features are not needed.
I'm using /usr/bin/env, though it may not be necessary if PATH is set up right.
Remember to quote "$@" in case there are spaces in arguments.
I am putting it in ~/.local/bin and referencing it with its absolute path in the IntelliJ settings (Settings -> Version Control -> Subversion -> Path to Subversion executable).
I was also running into problems with IntelliJ saying that the /app/idea-IC path does not exist. I figured that something outside the flatpak (i.e. svn or env) was trying to change directory to the working directory from where the wrapper script was invoked (inside the flatpak). Using cd allows the wrapper script to change to a directory that exists both inside the flatpak and on the host.
Fedora Silverblue or toolbox users might want to use dev tools inside their toolbox, in which case you can do:
#!/bin/sh
cd
exec /usr/bin/env -- flatpak-spawn --host toolbox run svn "$@"

Couldn't open file: data/obj.data and /bin/bash: ./darknet: No such file or directory

I tried to train a custom object detector according to https://github.com/AlexeyAB/darknet#how-to-train-to-detect-your-custom-objects and I got an error.
Please tell me any good solutions.
I am working in Google Colaboratory.
I changed the directory, but the error does not change.
dir
--content/
  ┠darknet-master/
    ┠build/
      ┠darknet/
        ┠x64/
          ┠data/
            ┠obj.data
%%bash
cd /content/darknet-master
./darknet detector train data/obj.data yolo-obj.cfg darknet53.conv.74 > train_log.txt
Couldn't open file: data/obj.dat
%cd /content/darknet-master/build/darknet/x64
!./darknet detect cfg/yolov3.cfg yolov3.weights data/person.jpg
/content/darknet-master/build/darknet/x64
/bin/bash: ./darknet: No such file or directory
Use %cd or os.chdir rather than %%bash cd ...
The reason is that %%bash runs its commands in a sub-shell, but what you want is to change the working directory of the Python backend running your code.
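The sub-shell behaviour is easy to reproduce in plain shell, independent of Jupyter (a minimal sketch):

```shell
cd /
( cd /tmp )    # runs in a subshell, like a %%bash cell
pwd            # still prints "/": the cd did not persist
```

This is why the cd in a %%bash cell has no effect on later cells, while %cd or os.chdir changes the directory of the notebook process itself.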

Tensorflow 1.4 looking for libcudnn.so.6 rather than libcudnn.so.8

I installed tensorflow 1.4.1 on two GPU machines. After installation, one gave an error message:
ImportError: libcudnn.so.8: cannot open shared object file: No such file or directory
After setting PATH and LD_LIBRARY_PATH, it works for me.
But the other machine gives an error message:
ImportError: libcudnn.so.6: cannot open shared object file: No such file or directory
But I don't have that installed. Could someone explain why they are looking for different versions of libcudnn.so? And how to fix this?
Download libcudnn.so.6 from https://developer.nvidia.com/cudnn, put it into the /usr/local/cuda/lib64/ folder, and use the following commands:
sudo chmod u=rwx,g=rx,o=rx libcudnn.so.6.5.18
sudo ln -s libcudnn.so.6.5.18 libcudnn.so.6
sudo ln -s libcudnn.so.6 libcudnn.so
Or change the folder to /home/username/anaconda3/envs/tensorflow/lib/.
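For reference, the PATH/LD_LIBRARY_PATH fix mentioned in the question can be sketched as follows, assuming a default CUDA install under /usr/local/cuda (adjust the prefix to your setup):

```shell
# Make the CUDA binaries and the cuDNN shared libraries visible
# to the dynamic linker for this shell session:
export PATH="/usr/local/cuda/bin:$PATH"
export LD_LIBRARY_PATH="/usr/local/cuda/lib64:$LD_LIBRARY_PATH"
```

Add these lines to ~/.bashrc to make the change permanent.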

Installing tensorboard built from source

This is about a tensorboard built from source, not about the pip-installed one.
I could successfully build it.
$ git clone https://github.com/tensorflow/tensorboard.git
$ cd tensorboard/
$ bazel build //tensorboard
Starting local Bazel server and connecting to it...
......................................
: (log messages here)
Target //tensorboard:tensorboard up-to-date:
bazel-bin/tensorboard/tensorboard
INFO: Elapsed time: 326.553s, Critical Path: 187.92s
INFO: 619 processes: 456 linux-sandbox, 12 local, 151 worker.
INFO: Build completed successfully, 1268 total actions
Then yes I can run it as documented in tensorboard/README.md, and it works.
$ ./bazel-bin/tensorboard/tensorboard --logdir path/to/logs
The problem is, I'd like to run it as if installed via pip like this:
$ tensorboard --logdir path/to/logs
But as far as I can tell, no script is provided to create a .whl file so that we can locally pip-install it, unlike tensorflow, which provides one like this:
$ bazel-bin/tensorflow/tools/pip_package/build_pip_package /tmp/tensorflow_pkg
$ sudo pip install /tmp/tensorflow_pkg/tensorflow-1.8.0-py2-none-any.whl
So... can anybody show how to do that? Making a packaging script would solve this, but one should already exist somewhere, since tensorboard is provided via pip anyway. :)
My workaround so far is not clean enough:
$ ln -s /my/build/folder/tensorboard/bazel-bin/tensorboard/tensorboard ~/bin
$ ln -s /my/build/folder/tensorboard/bazel-bin/tensorboard/tensorboard.runfiles ~/bin
I appreciate your suggestions, thanks!
Update July-21:
Thanks to W JC, I found the instructions are already there in tensorboard/pip_package/BUILD:
# rm -rf /tmp/tensorboard
# bazel run //tensorboard/pip_package:build_pip_package
# pip install -U /tmp/tensorboard/*py2*.pip
Unfortunately it shows an error in my environment; I guess it's a local issue, maybe because I'm using anaconda.
But basically the problem was resolved. It should work as long as it is run on a supported environment.
It seems there is a script in /tensorboard/pip_package that tries to build wheels:
bazel run //tensorboard/pip_package:build_pip_package did generate the wheel, but in the folder that bazel-bin points to. In my case, it was generated at ~/.cache/bazel/_bazel_peijia/b64ba42719633ff75eec6880decefcd3/execroot/org_tensorflow_tensorboard/bazel-out/k8-fastbuild/bin/tensorboard/pip_package/build_pip_package.runfiles/org_tensorflow_tensorboard/tensorboard-2.10.0a0-py3-none-any.whl
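Rather than copying the long cache path by hand, one way (an assumption on my part, not from the thread) to locate the generated wheel is to search the bazel output tree from the checkout root, since bazel-bin is a symlink into the cache:

```shell
# Follow the bazel-bin symlink and look for the generated wheel;
# prints nothing if no wheel has been built yet.
find -L bazel-bin -name 'tensorboard-*.whl' 2>/dev/null || true
```

The resulting path can then be passed directly to pip install -U.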