Is TensorFlow compatible with Julia 0.6.4?

So I have a project in my machine learning class and we are using Julia as our programming language. We can use any packages we want to build neural networks, but I can't seem to get TensorFlow to test correctly. Pkg.add("TensorFlow") works seemingly fine, but here is the output of Pkg.test("TensorFlow"):
julia> Pkg.test("TensorFlow")
INFO: Testing TensorFlow
ERROR: LoadError: LoadError: could not load library "C:\Users\Ryan .LAPTOP-KJUJGIC7\.julia\v0.6\TensorFlow\src\..\deps\usr\bin\libtensorflow"
The specified module could not be found.
Stacktrace:
[1] dlopen(::String, ::UInt32) at .\libdl.jl:97
[2] TensorFlow.Graph() at C:\Users\Ryan .LAPTOP-KJUJGIC7\.julia\v0.6\TensorFlow\src\core.jl:21
[3] include_from_node1(::String) at .\loading.jl:576
[4] include(::String) at .\sysimg.jl:14
[5] include_from_node1(::String) at .\loading.jl:576
[6] include(::String) at .\sysimg.jl:14
[7] process_options(::Base.JLOptions) at .\client.jl:305
[8] _start() at .\client.jl:371
while loading C:\Users\Ryan .LAPTOP-KJUJGIC7\.julia\v0.6\TensorFlow\test\..\examples\logistic.jl, in expression starting on line 22
while loading C:\Users\Ryan .LAPTOP-KJUJGIC7\.julia\v0.6\TensorFlow\test\runtests.jl, in expression starting on line 6
=================================================[ ERROR: TensorFlow ]==================================================
failed process: Process(`'C:\Users\Ryan .LAPTOP-KJUJGIC7\AppData\Local\Julia-0.6.4\bin\julia.exe' -Cgeneric '-JC:\Users\Ryan .LAPTOP-KJUJGIC7\AppData\Local\Julia-0.6.4\lib\julia\sys.dll' --compile=yes --depwarn=yes --check-bounds=yes --code-coverage=none --color=yes --compilecache=yes 'C:\Users\Ryan .LAPTOP-KJUJGIC7\.julia\v0.6\TensorFlow\test\runtests.jl'`, ProcessExited(1)) [1]
========================================================================================================================
ERROR: TensorFlow had test errors
I'm running Julia Version 0.6.4 on Windows 10; if there's a way to resolve this error or a workaround I'd love some suggestions.

TensorFlow.jl does not support Windows.
You have two options:
(1) Try using TensorFlow via PyCall.jl, after installing the Python package through Conda.jl:
using Conda
Conda.runconda(`install -c conda-forge tensorflow`)
(2) Use Flux.jl instead.

Related

Unable to convert TensorFlow Mask R-CNN to IR with OpenVINO toolkit

python mo_tf.py
--saved_model_dir C:\DATASETS\mask50000\exports\saved_model
--output_dir C:\DATASETS\mask50000
--reverse_input_channels
--tensorflow_custom_operations_config extensions\front\tf\mask_rcnn_support_api_v2.0.json
--tensorflow_object_detection_api_pipeline_config C:\DATASETS\mask50000\exports\pipeline.config
--log_level=DEBUG
I have been trying to convert the model using the above script, but every time I get the error:
"Exception: Exception occurred during running replacer "REPLACEMENT_ID (<class'extensions.front.tf.tensorflow_custom_operations_config_update.TensorflowCustomOperationsConfigUpdate'>)": The function 'update_custom_layer_attributes' must be implemented in the sub-class."
I have exported the graph using exporter_main_v2.py. If more information is needed please inform me.
EDIT:
I was able to convert the model by changing the file mask_rcnn_support_api_v2.4.json.
first change:
"custom_attributes": {
"operation_to_add": "Proposal",
"clip_before_nms": false,
"clip_after_nms": true
}
second change:
"start_points": [
"StatefulPartitionedCall/concat/concat",
"StatefulPartitionedCall/concat_1/concat",
"StatefulPartitionedCall/GridAnchorGenerator/Identity",
"StatefulPartitionedCall/Cast",
"StatefulPartitionedCall/Cast_1",
"StatefulPartitionedCall/Shape"
]
That solved the problem.
OpenVINO 2020.4 is not compatible with TensorFlow 2. Support for TF 2.0 Object Detection API models was fully enabled only in OpenVINO 2021.3.
I’ve successfully converted the model mask_rcnn_inception_resnet_v2_1024x1024_coco17 to IR using the latest OpenVINO release (2021.4.752).
I share the MO conversion command here:
python mo_tf.py --saved_model_dir <model_dir>\saved_model --tensorflow_object_detection_api_pipeline_config <pipeline_dir>\pipeline.config --transformations_config <installed_dir>\extensions\front\tf\mask_rcnn_support_api_v2.0.json

theano.function() throws up a long exception in Colab

I am using Google Colab to run the BinaryNet Neural Network implemented using theano by the authors of the original paper here: https://github.com/MatthieuCourbariaux/BinaryNet
When I run the following line from /Train-time/mnist.py (line 199):
train_fn = theano.function([input, target, LR], loss, updates=updates)
Colab throws up this error:
You can find the C code in this temporary file:
/tmp/theano_compilation_error_5_e2lq4v library
inux-gnu/bits/libc-header-start.h:33, is not found. library
inux-gnu/7/include-fixed/limits.h:194, is not found. library
inux-gnu/7/include-fixed/syslimits.h:7, is not found. library
inux-gnu/7/include-fixed/limits.h:34, is not found. library
inux-gnu/bits/mathcalls.h:298:1: is not found. library
inux-gnu/bits/mathcalls.h:298:1: is not found. library
inux-gnu/bits/libc-header-start.h:33, is not found. library
inux-gnu/7/include-fixed/limits.h:194, is not found. library
inux-gnu/7/include-fixed/syslimits.h:7, is not found. library
inux-gnu/7/include-fixed/limits.h:34, is not found. library
inux-gnu/bits/mathcalls.h:298:1: is not found. library
inux-gnu/bits/mathcalls.h:298:1: is not found.
Exception: ('The following error happened while compiling the node',
Elemwise{Composite{(i0 * (i1 + (i0 * round3(clip(i2, i3, i4)))) *
i5)}}[(0, 2)](TensorConstant{(1, 1) of 2.0}, TensorConstant{(1, 1) of
-1.0}, Elemwise{Composite{(i0 * (i1 + (i2 * i3 * i4) + i5))}}.0, TensorConstant{(1, 1) of 0}, TensorConstant{(1, 1) of 1},
Elemwise{Composite{Cast{float64}(LT(i0, i1))}}[(0, 0)].0), '\n',
"Compilation failed (return status=1):
/root/.theano/compiledir_Linux-4.19.104+-x86_64-with-Ubuntu-18.04-bionic-x86_64-3.6.9-64/tmp9q80fef3/mod.cpp:932:2:
warning: character constant too long for its type. ['V15_tmp2'] =
round(['V15_tmp1']);. ^~~~~~~~~~.
/root/.theano/compiledir_Linux-4.19.104+-x86_64-with-Ubuntu-18.04-bionic-x86_64-3.6.9-64/tmp9q80fef3/mod.cpp:932:23:
warning: character constant too long for its type. ['V15_tmp2'] =
round(['V15_tmp1']);. ^~~~~~~~~~.
/root/.theano/compiledir_Linux-4.19.104+-x86_64-with-Ubuntu-18.04-bionic-x86_64-3.6.9-64/tmp9q80fef3/mod.cpp:1054:2:
warning: character constant too long for its type. ['V15_tmp2'] =
round(['V15_tmp1']);. ^~~~~~~~~~.
/root/.theano/compiledir_Linux-4.19.104+-x86_64-with-Ubuntu-18.04-bionic-x86_64-3.6.9-64/tmp9q80fef3/mod.cpp:1054:23: warning: character constant too long for its type. ['V15_tmp2'] =
round(['V15_tmp1']);. ^~~~~~~~~~.
/root/.theano/compiledir_Linux-4.19.104+-x86_64-with-Ubuntu-18.04-bionic-x86_64-3.6.9-64/tmp9q80fef3/mod.cpp: In member function ‘in...
I used this to install theano and lasagne:
!pip install --upgrade https://github.com/Theano/Theano/archive/master.zip
!pip install --upgrade https://github.com/Lasagne/Lasagne/archive/master.zip
I am using the exact same code as in the GitHub repo, with the only difference being that I used keras to import the MNIST dataset instead of pylearn2 (see the sketch below).
Could someone please help me figure out why this is happening? Thank you!
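For reference, a minimal sketch of loading MNIST via keras instead of pylearn2 (this assumes the standard tf.keras dataset API; the exact flattening and rescaling that BinaryNet's mnist.py expects may differ):
from tensorflow.keras.datasets import mnist  # plain `keras.datasets` works too

# Load MNIST as numpy arrays in place of the pylearn2 loader.
(x_train, y_train), (x_test, y_test) = mnist.load_data()

# Flatten the 28x28 images and scale pixels to [0, 1]; BinaryNet's script may
# additionally expect inputs rescaled to [-1, +1] and one-hot targets.
x_train = x_train.reshape(-1, 784).astype("float32") / 255.0
x_test = x_test.reshape(-1, 784).astype("float32") / 255.0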
EDIT
I ran my code in Python 2.7 and it worked! This question deals with using Python 2 in Colab.

Converting a TensorFlow* Model

I want to convert my TensorFlow model to IR. Currently I am following the instructions here:
https://docs.openvinotoolkit.org/latest/_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_TensorFlow.html
The model I have is a meta graph, and I am running Ubuntu 16.04.
I ran this line:
python3 mo_tf.py --input_meta_graph .meta
Then I get this error:
[ERROR] Exception occurred during running replacer "None" (): Data flow edge coming out of AssignSub node model_0 / resnet_v1_50 / block4 / unit_1 / bottleneck_v1 / shortcut / BatchNorm / AssignMovingAvg
Can you guys please help me? Thanks, everyone.
Did you freeze the model before conversion? Please look at how to freeze your model using instructions from https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/tools/freeze_graph.py and retry.
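If you would rather freeze from a script than via the freeze_graph.py tool, here is a minimal sketch for a TF 1.x meta graph (the checkpoint paths and the output node name below are placeholders; substitute your model's values):
import tensorflow as tf  # TensorFlow 1.x API

# Restore the graph structure and the variable values from the checkpoint.
saver = tf.train.import_meta_graph("model.ckpt.meta")
with tf.Session() as sess:
    saver.restore(sess, "model.ckpt")
    # Bake the variable values into constants so the graph is self-contained.
    frozen_graph_def = tf.graph_util.convert_variables_to_constants(
        sess, sess.graph_def, ["output_node_name"])
    with tf.gfile.GFile("frozen_model.pb", "wb") as f:
        f.write(frozen_graph_def.SerializeToString())
The resulting frozen_model.pb can then be passed to the Model Optimizer with --input_model instead of --input_meta_graph.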

ERROR: LoadError: LoadError: syntax: "()" is not a valid function argument name

While trying to load TensorFlow in Julia, I get the error
ERROR: LoadError: LoadError: syntax: "()" is not a valid function argument name
I could not find a solution. I am using Ubuntu and Julia version 0.6.0.
The problem:
julia> using TensorFlow
INFO: Precompiling module TensorFlow.
WARNING: Loading a new version of TensorFlow.jl for the first time. This initial load can take around 5 minutes as code is precompiled; subsequent usage will only take a few seconds.
ERROR: LoadError: LoadError: syntax: "()" is not a valid function argument name
Stacktrace:
[1] include_from_node1(::String) at ./loading.jl:569
[2] include(::String) at ./sysimg.jl:14
[3] include_from_node1(::String) at ./loading.jl:569
[4] include(::String) at ./sysimg.jl:14
[5] anonymous at ./<missing>:2
while loading /home/spg/.julia/v0.6/TensorFlow/src/ops.jl, in expression starting on line 119
while loading /home/spg/.julia/v0.6/TensorFlow/src/TensorFlow.jl, in expression starting on line 184
ERROR: Failed to precompile TensorFlow to /home/spg/.julia/lib/v0.6/TensorFlow.ji.
Stacktrace:
[1] compilecache(::String) at ./loading.jl:703
[2] _require(::Symbol) at ./loading.jl:490
[3] require(::Symbol) at ./loading.jl:398
Following this link, run Pkg.update()

Tensorflow: classifier.predict and predicted_classes

System information
custom code: no, it is the one in https://www.tensorflow.org/get_started/estimator
system: Apple
OS: Mac OS X 10.13
TensorFlow version: 1.3.0
Python version: 3.6.3
GPU model: AMD FirePro D700 (actually, two such GPUs)
Describe the problem
Dear all,
I am running the simple iris program:
https://www.tensorflow.org/get_started/estimator
under python 3.6.3 and tensorflow 1.3.0.
The program executes correctly, apart from the very last part, i.e. the one related to the confusion matrix.
In fact, the result I get for the confusion matrix is:
New Samples, Class Predictions: [array([b'1'], dtype=object), array([b'2'], dtype=object)]
rather than the expected output:
New Samples, Class Predictions: [1 2]
Has anything about the confusion matrix changed in the latest release?
If so, how should I modify that part of the code?
Thank you very much for your help!
Best regards
Ivan
Source code / logs
https://www.tensorflow.org/get_started/estimator
This looks like a NumPy representation issue rather than a TensorFlow change: array([b'1'], dtype=object) is simply how NumPy displays a length-1 object array holding the byte string b'1'.
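If you want the plain [1 2] style output, a minimal sketch of converting those predictions to integers (the predictions list below just reproduces the output shown above):
import numpy as np

# The estimator returned length-1 object arrays holding byte strings.
predictions = [np.array([b'1'], dtype=object), np.array([b'2'], dtype=object)]

# Pull out each byte string and parse it as an integer.
predicted_classes = [int(p[0]) for p in predictions]
print("New Samples, Class Predictions: {}".format(predicted_classes))  # [1, 2]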