I am trying to reproduce the experiments in the paper Cross Modal Focal Loss for RGBD Face Anti-Spoofing. I have downloaded the source code from GitLab and moved it to my remote Linux (Ubuntu) server. I am following the installation steps given in the GitLab repo, which is here: https://gitlab.idiap.ch/bob/bob.paper.cross_modal_focal_loss_cvpr2021
I am getting an error (ResolvePackageNotFound) when I try to build the environment as described in step 2. It is worth mentioning that the Bob package isn't installed separately, since I see it is listed in the 'environment.yml'. The 'environment.yml' file I used is the newest version.
Error result is here.
I hope someone will be able to assist me in solving this issue. It is very important for me.
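For reference, these are the commands I run and the checks I have tried so far; the Idiap channel URL at the end is only my guess based on Bob's general install docs, not something step 2 states explicitly:

# Build the environment from the repo's environment.yml (step 2)
conda env create -f environment.yml

# ResolvePackageNotFound usually means a required channel is missing,
# so check which channels conda is configured to use
conda config --show channels

# Assumption on my side: Bob packages normally come from Idiap's conda channel
conda config --append channels https://www.idiap.ch/software/bob/conda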
Repost from Antonio D.:
I just installed FLOW following all the instructions given in the following link. After executing the sugiyama example, SUMO shows this error: "Error: tcpip::Storage::readIsSafe: want to read 8 bytes from Storage, but only 4 remaining". I know that after the release of SUMO 1.0.0 the TraCI libraries and SUMO are no longer compatible, but I am not able to downgrade the latest version of SUMO on my machine (MacBook). Which version should I downgrade to, and how can I do it?
I would really appreciate it if anyone could help me fix this.
Repost from Flow team:
This is probably happening because your conda environment cannot find the associated binaries. I would recommend installing the binaries into your conda environment; that should fix this. You can do so from your terminal by running the following commands:
cd /path/to/flow
source activate flow
scripts/setup_sumo_osx.sh
Hope this helps.
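If it still fails after that, a quick sanity check (assuming the setup script puts the binaries on your PATH) is to confirm which SUMO the environment actually sees:

source activate flow
which sumo        # should point inside your conda environment
sumo --version    # the version printed here must match what Flow's TraCI bindings expect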
I am quite familiar with PyCharm except for one thing that I can't seem to figure out: how to download Keras_contrib, which is not available in conda's default channel or in the commonly used conda-forge channel.
I have read the following article, which suggests adding an additional channel to conda:
"How to Install a Package in PyCharm when project interpreter is set to conda, and the package is not provided/listed by conda?"
but, as I mentioned, Keras_contrib is not provided by any channel, and I am not quite sure how to download it.
I managed to install Keras_contrib successfully into my environment, which is also used by the PyCharm interpreter, but for some reason PyCharm does not recognize it.
I followed the instructions given in https://github.com/keras-team/keras-contrib,
which is running setup.py install.
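For completeness, this is what I ran, inside the activated conda environment that PyCharm points at (my_env is just a placeholder for that environment's name); the commented pip-from-git line is an alternative the keras-contrib README also mentions:

source activate my_env
git clone https://github.com/keras-team/keras-contrib.git
cd keras-contrib
python setup.py install
# Alternative from the README (not what I did): install straight from GitHub with pip
# pip install git+https://github.com/keras-team/keras-contrib.git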
Here are my questions:
By doing this, does it get installed into site-packages automatically? I ask because I do not see it there.
If I have to do it manually, how come my environment can recognize it but PyCharm cannot?
Is there a default location that the environment and PyCharm usually look in?
It would make sense in this case that one may recognize it while the other may not.
How can I download Keras_contrib when it is not available in the well-known channels?
Is there another way to check that the PyCharm interpreter matches my Anaconda environment, other than looking at the folder it is linked to?
In my case they link to the same environment, but PyCharm just cannot recognize the package.
I just figured it out.
So PyCharm looks at the site-packages of your environment.
I solved my problem (PyCharm could not recognize the package while the Anaconda env could) by copying and pasting Keras_contrib into site-packages. (I still find this strange, so if anyone has an answer, feel free to comment.)
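A quick way to confirm that the terminal environment and PyCharm really point at the same interpreter and site-packages is to run the same two checks in both places (my_env is again a placeholder; run these in the terminal, then repeat the imports in PyCharm's Python console):

source activate my_env
python -c "import sys; print(sys.executable)"
python -c "import keras_contrib; print(keras_contrib.__file__)"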
I'm having an issue with launch_script.sh for the pix2pix sample in models/research/gan/pix2pix.
After looking into the script, it gets stuck at these lines:
bazel build "${git_repo}/research/slim:download_and_convert_imagenet"
"./bazel-bin/download_and_convert_imagenet" ${DATASET_DIR}
I've looked up previous questions (both here and in the Issues of TensorFlow's models repo), and many said that it's because of bazel configuration: a mismatch between where bazel looks for the output and where bazel actually generates it. But after looking through all the directories created by bazel, I cannot find the binary I am supposedly missing.
I'm doing this on an Azure VM running Ubuntu 16.04. This is my first time working with bazel, so sorry for my lack of knowledge about it. I'd be very thankful if someone could help solve this problem.
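For what it's worth, this is how I tried to check where bazel actually puts its outputs (I am assuming the script runs bazel from the checkout's workspace root, so the cd path below is a placeholder):

cd /path/to/models            # placeholder for the directory the script runs bazel from
bazel info bazel-bin          # prints the real output directory that bazel-bin points to
ls bazel-bin                  # look for download_and_convert_imagenet under this tree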
I followed the instructions to run a local instance of Lumify using Vagrant.
vagrant up demo fails because https://bits.lumify.io/yum/repodata/repomd.xml is down.
The try site, https://try.lumify.io/, is down as well.
I need pointers on whether any other yum repo can be used for this.
I see that there are a few dependencies related to OpenCV etc., and I could not find them all in one place.
Any input on this would be greatly appreciated.
I'm pretty sure active development of Lumify's open source version ended in 2015. Have you tried the open source version of Visallo? There's also an enterprise edition if you need additional capabilities or greater scalability.
I do not have internet access on my Linux computer, so I installed TF from source by following TensorFlow Get Started.
I ran into some trouble building trainer_example due to the lack of an internet connection; fortunately, someone from TensorFlow helped me through it by creating local repositories for re2, gemmlowp, jpegsrc v9a, libpng, and six, and modifying the WORKSPACE accordingly.
When I try to bazel build the pip_package to create the wheel, I think I run into the same problem, but:
- the list of repositories is insanely long (too long to manually install each of them), even though they seem to be mostly part of PolymerElements.
Is there an easy workaround?
If you are happy to create a pip package without TensorBoard, you should be able to avoid rewriting the Polymer dependencies by removing the "//tensorflow/tensorboard" line from the build_pip_package dependencies in tensorflow/tools/pip_package/BUILD.
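After that edit, the usual from-source sequence to build and install the wheel should still apply (sketched from memory for a source tree of that era, so treat the exact target names as an assumption):

# Rebuild the pip package target without the TensorBoard dependency
bazel build //tensorflow/tools/pip_package:build_pip_package
# Generate the wheel into /tmp/tensorflow_pkg and install it
bazel-bin/tensorflow/tools/pip_package/build_pip_package /tmp/tensorflow_pkg
pip install /tmp/tensorflow_pkg/tensorflow-*.whl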