I successfully added TensorFlow with:
using Pkg
Pkg.add("TensorFlow")
But when I run
using TensorFlow
I get Failed to precompile TensorFlow. In more detail, the error looks like this:
[ Info: Precompiling TensorFlow
ERROR: LoadError: UndefVarError: warn not defined
Stacktrace:
[1] top-level scope at none:0
[2] include at ./boot.jl:317 [inlined]
[3] include_relative(::Module, ::String) at ./loading.jl:1038
[4] include(::Module, ::String) at ./sysimg.jl:29
[5] top-level scope at none:2
[6] eval at ./boot.jl:319 [inlined]
[7] eval(::Expr) at ./client.jl:389
[8] top-level scope at ./none:3
in expression starting at
/home/...
I appreciate your help.
It is a bit unfortunate, but most packages have (in the past) not declared any upper bound on the Julia versions they support, and thus allow themselves to be installed on Julia 1.0 even though they are not ready yet, as was pointed out in the comments. If in doubt, I would always check the repository. A quick Google search points to https://github.com/malmaud/TensorFlow.jl.
The badges at the top of the README show that it is only tested on Julia 0.5 and 0.6, indicating it might not be ready (or the package is ready and the author simply has not updated the badge).
The last release is from May 30th, and Julia 0.7 and 1.0 (1.0 being 0.7 minus the deprecation warnings) are just weeks old, so the package will definitely not work unless it is trivial (and this one is not).
There is plenty of activity to port it to 1.0, particularly in this pull request: https://github.com/malmaud/TensorFlow.jl/pull/419. If you would like to contribute, I would start from that work; it seems a lot has been sorted out, but not all of it.
I have recently upgraded my Intel MacBook Pro 13" to a MacBook Pro 14" with M1 Pro and have been working hard on getting my software to compile and work again. Fortunately there were no big issues, except for floating-point problems in some obscure Fortran code and in Python. With regard to Python/NumPy, I have the following question.
I have a large code base, but for simplicity I will use this simple function, which converts flight level to pressure, to show the issue.
import numpy as np

def fl2pres(FL):
    # ISA constants
    P0 = 101325         # sea-level pressure [Pa]
    T0 = 288.15         # sea-level temperature [K]
    T1 = 216.65         # temperature above the tropopause [K]
    g = 9.80665         # gravitational acceleration [m/s^2]
    R = 287.0528742     # specific gas constant of dry air [J/(kg*K)]
    GAMMA = 0.0065      # tropospheric lapse rate [K/m]
    P11 = P0 * np.exp(-g / GAMMA / R * np.log(T0 / T1))  # pressure at 11 km [Pa]
    h = FL * 30.48      # flight level (hundreds of feet) to metres
    return np.where(h <= 11000,
                    P0 * np.exp(-g / GAMMA / R * np.log(T0 / (T0 - GAMMA * h))),
                    P11 * np.exp(-g / R / T1 * (h - 11000)))
When I run the code on my M1 Pro, I get:
In [2]: fl2pres(np.float64([400, 200]))
Out[2]: array([18753.90334892, 46563.239766 ])
and:
In [3]: fl2pres(np.float32([400, 200]))
Out[3]: array([18753.90234375, 46563.25080916])
Doing the same on my older Intel MacBook Pro I get:
In [2]: fl2pres(np.float64([400, 200]))
Out[2]: array([18753.90334892, 46563.239766 ])
and:
In [3]: fl2pres(np.float32([400, 200]))
Out[3]: array([18753.904296888, 46563.24778944])
The float64 results match but the float32 results do not. We use float32 quite a lot throughout our code for memory optimisation. I understand that this sort of floating-point difference can occur due to architectural differences, but I was wondering whether a simple fix is possible, as currently some unit tests fail. I could make these tests architecture-dependent, but I am hoping for an easier solution.
Converting all inputs to float64 makes my unit tests pass and hence fixes the issue, but since we have quite some large arrays and dataframes, the impact on memory is unwanted.
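For reference, loosening the tests to a tolerance-based comparison would look roughly like this (a minimal sketch; the expected values are illustrative rather than our real fixtures, and fl2pres is the function defined above):

import numpy as np

def test_fl2pres_float32():
    # Expected values computed once in float64; rtol=1e-5 comfortably
    # absorbs a few ULPs of float32 noise across architectures.
    expected = np.array([18753.903, 46563.24])
    result = fl2pres(np.float32([400, 200]))
    np.testing.assert_allclose(result, expected, rtol=1e-5)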
Both laptops run Python 3.9.10 installed through Homebrew, pandas 1.4.1, and numpy 1.22.3 (built to link against Accelerate and BLAS).
EDIT
I have changed the function to print intermediate values to see where the differences occur:
def fl2pres(FL):
    P0 = 101325
    T0 = 288.15
    T1 = 216.65
    g = 9.80665
    R = 287.0528742
    GAMMA = 0.0065
    P11 = P0 * np.exp(-g / GAMMA / R * np.log(T0 / T1))
    h = FL * 30.48
    A = np.log(T0 / (T0 - GAMMA * h))
    B = np.exp(-g / GAMMA / R * A)
    C = np.exp(-g / R / T1 * (h - 11000))
    print(f"P11:{P11}, h:{h}, A:{A}, B:{B}, C:{C}")
    return np.where(h <= 11000, P0 * B, P11 * C)
Running this function with the same input as above for the float32 case, I get on M1 Pro:
P11:22632.040591374975, h:[12192. 6096.], A:[0.32161594 0.14793371], B:[0.1844504 0.45954345], C:[0.82864394 2.16691503]
array([18753.90334892, 46563.239766 ])
On Intel:
P11:22632.040591374975, h:[12192. 6096.], A:[0.32161596 0.14793368], B:[0.18445034 0.45954353], C:[0.828644 2.166915]
array([18753.90429688, 46563.24778944])
As per the issue I created on numpy's GitHub:
the differences you are experiencing seem to be all within a single
"ULP" (unit in the last place), maybe 2? For special math functions,
like exp or sin, small errors are unfortunately expected and can be
system dependent (both hardware and OS/math libraries).
One thing that might have a slightly larger effect could be the use
of SVML that NumPy has on newer machines (i.e. only on the Intel
one). That can be disabled at build time using NPY_DISABLE_SVML=1 as
an environment variable, but I don't think you can disable its use
without building NumPy. (However, right now, it may well be that the
M1 machine is the less precise one, or that they are both roughly
the same, just different.)
I haven't tried compiling numpy with NPY_DISABLE_SVML=1; my plan now is to use a Docker container that can run on all my platforms and serve as a single "truth" for my tests.
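For anyone wanting to verify the "within 1-2 ULP" claim, np.spacing gives the size of one ULP at a given magnitude, so the gap can be measured directly (a quick sketch; the helper name is mine, and the values are the float32 outputs quoted above):

import numpy as np

def ulp_distance(a, b):
    # Absolute difference expressed in units in the last place at a's magnitude.
    a, b = np.asarray(a), np.asarray(b)
    return np.abs(a.astype(np.float64) - b.astype(np.float64)) / np.spacing(np.abs(a))

m1 = np.float32([18753.90234375, 46563.25080916])
intel = np.float32([18753.904296888, 46563.24778944])
print(ulp_distance(m1, intel))  # roughly 1 ULP per element here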
I am trying to use PyDREAM to sample a likelihood that has a number of dynamically constructed elements, i.e., class factories. The class factories are pretty necessary, so making the likelihood function easier to pickle is not really an option. This doesn't have much to do with PyDREAM, as there are a number of "out of the box" samplers that use pickling of some sort for multiprocessing. I assume this is pretty standard since the pickling happens in the multiprocessing module. I'd like to figure out if there is a way to make them work with my code. I was really excited to find cloudpickle which can successfully pickle my likelihood function.
I recently forked PyDREAM and tried this monkey patch. I have successfully patched cloudpickle in, but multiprocessing is trying to call a method called register, which does not seem to exist in cloudpickle. I know nothing about the inner workings of these picklers. There are other methods that start with "register" in cloudpickle, but they don't seem quite right.
~/anaconda3/envs/dream/lib/python3.9/multiprocessing/sharedctypes.py in rebuild_ctype(type_, wrapper, length)
136 if length is not None:
137 type_ = type_ * length
--> 138 _ForkingPickler.register(type_, reduce_ctype)
139 buf = wrapper.create_memoryview()
140 obj = type_.from_buffer(buf)
AttributeError: type object 'CloudPickler' has no attribute 'register'
Also, I've tried using dill to serialize the likelihood with no luck. It would be awesome if multiprocess allowed the use of cloudpickle, and there is an issue on the multiprocess GitHub page about this, but it doesn't seem to be a feature that is being actively worked on.
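For what it's worth, one pattern that can sidestep swapping the pickler entirely is to wrap the unpicklable callable so the standard pickler only ever sees cloudpickle bytes. A sketch of the idea (the wrapper name is mine; whether this suffices depends on what else PyDREAM tries to serialize):

import cloudpickle

class CloudpickleWrapper:
    # Makes an arbitrary callable picklable by stock multiprocessing:
    # pickle sees only this wrapper class plus a bytes payload.
    def __init__(self, fn):
        self.fn = fn
    def __getstate__(self):
        return cloudpickle.dumps(self.fn)
    def __setstate__(self, state):
        self.fn = cloudpickle.loads(state)
    def __call__(self, *args, **kwargs):
        return self.fn(*args, **kwargs)

The likelihood would then be passed in as CloudpickleWrapper(likelihood), with the wrapper defined at module top level so it is importable in the worker processes.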
I am using Keras with the Theano backend, and the following is the problem. Is there anything that can solve this problem? Thanks!
The error is caused by an illegal value of CNMEM. According to the Theano documentation, CNMEM can only be assigned a float:
0: not enabled.
0 < N <= 1: use this fraction of the total GPU memory (clipped to .95 for driver memory).
> 1: use this number in megabytes (MB) of memory.
You can also refer to here.
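For example, a legal value can be set through the THEANO_FLAGS environment variable before Theano is imported (0.8 here is an arbitrary legal fraction, not a recommendation):

import os
# Must be set before the first `import theano`; 0 < cnmem <= 1 means
# "use this fraction of the total GPU memory".
os.environ["THEANO_FLAGS"] = "lib.cnmem=0.8"
import theano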
The warning is due to a change in Theano (Keras's backend): it is migrating from the old CUDA backend to GpuArray. You can refer to here for a solution.
Actually, if you fix the warning, the error will disappear as well, according to:
This value allocates GPU memory ONLY when using (CUDA backend) and has no effect when the GPU backend is (GpuArray Backend). For the new backend, please see config.gpuarray.preallocate
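Under the new GpuArray backend, the flags line in the sketch above would instead use the preallocation option, for instance:

# New (GpuArray) backend equivalent: preallocate a fraction of GPU memory.
os.environ["THEANO_FLAGS"] = "device=cuda,gpuarray.preallocate=0.8"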
I'm working on a TensorFlow project where 'targets' is defined as:
targets = tf.sparse_placeholder(tf.int32, name='targets')
Now saving my model with saver.save(sess, model_path, meta_graph_suffix='meta', write_meta_graph=True) gives me the following error:
WARNING:tensorflow:Error encountered when serializing targets.
Type is unsupported, or the types of the items don't match field type in CollectionDef.
'SparseTensor' object has no attribute 'name'
I believe the warning is printed in the following lines of code: https://github.com/tensorflow/tensorflow/blob/f974e8d0c2420c6f7e2a2791febb4781a266823f/tensorflow/python/training/saver.py#L1452
Reloading the model with saver.restore(session, save_path) seems to work though.
Has anyone seen this issue before? Why would serializing a SparseTensor give that warning? Is there any way to avoid this warning?
I'm using TensorFlow 0.10.0rc0, Python 2.7, GPU version. I can't provide a minimal example; it doesn't happen all the time, only in certain configurations, and I can't share the model I currently have this issue with.
The component placeholders (for indices, values, and possibly shape) somehow get added to some collections. If you trace through the code in saver.py, you can see ops.get_all_collection_keys() being used.
This should be a benign warning. I will forward it to the team to see if something can be done to improve this handling.
The warning means that a SparseTensor type of operation has been added to a collection whose to_proto() implementation expects a "name" field.
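A minimal sketch of how that can happen (TF 0.x/1.x API, matching the question; the collection name here is illustrative):

import tensorflow as tf

targets = tf.sparse_placeholder(tf.int32, name='targets')
# A SparseTensor is a Python-level bundle of component Tensors and has no
# `name` attribute of its own, so serializing this collection into the
# meta graph triggers the warning above.
tf.add_to_collection('my_targets', targets)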
I'd consider this a bug if you intend to restore the complete graph from meta_graph, including all the Python objects, and you should find out which operation added that SparseTensor into a collection.
If you never intend to restore from meta_graph, then you can ignore this warning.
Hope that helps.
Sherry
I have a USB webcam (unknown make, no markings) that's been detected fine on my Raspberry Pi.
This is the output of lsusb:
Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
Bus 001 Device 002: ID 0424:9512 Standard Microsystems Corp.
Bus 001 Device 003: ID 0424:ec00 Standard Microsystems Corp.
Bus 001 Device 004: ID 0c45:608f Microdia PC Camera (SN9C103 + OV7630)
Bus 001 Device 005: ID 1267:0103 Logic3 / SpectraVideo plc G-720 Keyboard
However, when I run motion using /dev/video0 (the only changes from the default config are the resolution and turning the webcam localhost restriction off so that I can stream it over the network), it fails.
This is my log when I run motion -n:
[0] Processing thread 0 - config file /etc/motion/motion.conf
[0] Motion 3.2.12 Started
[0] ffmpeg LIBAVCODEC_BUILD 3482368 LIBAVFORMAT_BUILD 3478784
[0] Thread 1 is from /etc/motion/motion.conf
[0] motion-httpd/3.2.12 running, accepting connections
[0] motion-httpd: waiting for data on port TCP 8080
[1] Thread 1 started
[1] cap.driver: "sonixb"
[1] cap.card: "USB camera"
[1] cap.bus_info: "usb-bcm2708_usb-1.2"
[1] cap.capabilities=0x05000001
[1] - VIDEO_CAPTURE
[1] - READWRITE
[1] - STREAMING
[1] Config palette index 8 (YU12) doesn't work.
[1] Supported palettes:
[1] 0: S910 (S910)
[1] 1: BA81 (BA81)
[1] Selected palette BA81
[1] Test palette BA81 (480x640)
[1] Adjusting resolution from 480x640 to 160x120.
[1] Using palette BA81 (160x120) bytesperlines 160 sizeimage 19200 colorspace 00000008
[1] found control 0x00980900, "Brightness", range 0,255
[1] "Brightness", default 127, current 127
[1] found control 0x00980911, "Exposure", range 0,1023
[1] "Exposure", default 66, current 66
[1] found control 0x00980912, "Automatic Gain (and Exposure)", range 0,1
[1] "Automatic Gain (and Exposure)", default 1, current 1
[1] found control 0x00980913, "Gain", range 0,255
[1] "Gain", default 127, current 127
[1] mmap information:
[1] frames=4
[1] 0 length=20480
[1] 1 length=20480
[1] 2 length=20480
[1] 3 length=20480
[1] Using V4L2
[1] Resizing pre_capture buffer to 1 items
[1] v4l2_next: VIDIOC_DQBUF: EIO (s->pframe 0): Input/output error
[1] v4l2_next: VIDIOC_QBUF: Invalid argument
[1] v4l2_next: VIDIOC_QBUF: Invalid argument
[1] v4l2_next: VIDIOC_QBUF: Invalid argument
[1] v4l2_next: VIDIOC_QBUF: Invalid argument
[1] Error capturing first image
[1] Started stream webcam server in port 8081
[1] v4l2_next: VIDIOC_QBUF: Invalid argument
[1] Video device fatal error - Closing video device
[1] Closing video device /dev/video0
[1] Retrying until successful connection with camera
[1] cap.driver: "sonixb"
[1] cap.card: "USB camera"
[1] cap.bus_info: "usb-bcm2708_usb-1.2"
[1] cap.capabilities=0x05000001
[1] - VIDEO_CAPTURE
[1] - READWRITE
[1] - STREAMING
[1] Config palette index 8 (YU12) doesn't work.
[1] Supported palettes:
[1] 0: S910 (S910)
[1] 1: BA81 (BA81)
[1] Selected palette BA81
[1] Test palette BA81 (480x640)
[1] Adjusting resolution from 480x640 to 160x120.
[1] Using palette BA81 (160x120) bytesperlines 160 sizeimage 19200 colorspace 00000008
[1] found control 0x00980900, "Brightness", range 0,255
[1] "Brightness", default 127, current 127
[1] found control 0x00980911, "Exposure", range 0,1023
[1] "Exposure", default 66, current 66
[1] found control 0x00980912, "Automatic Gain (and Exposure)", range 0,1
[1] "Automatic Gain (and Exposure)", default 1, current 1
[1] found control 0x00980913, "Gain", range 0,255
[1] "Gain", default 127, current 127
[1] mmap information:
[1] frames=4
[1] 0 length=20480
[1] 1 length=20480
[1] 2 length=20480
[1] 3 length=20480
[1] Using V4L2
[1] Camera has finally become available
[1] Camera image has different width and height from what is in the config file. You should fix that
[1] Restarting Motion thread to reinitialize all image buffers to new picture dimensions
[1] Thread exiting
[1] Calling vid_close() from motion_cleanup
[1] Closing video device /dev/video0
[0] Motion thread 1 restart
[1] Thread 1 started
[1] config image height (120) is not modulo 16
[1] Could not fetch initial image from camera
[1] Motion continues using width and height from config file(s)
[1] Resizing pre_capture buffer to 1 items
[1] Started stream webcam server in port 8081
[1] Retrying until successful connection with camera
[1] config image height (120) is not modulo 16
[0] httpd - Finishing
[0] httpd Closing
[0] httpd thread exit
[1] Thread exiting
[0] Motion terminating
The light on the camera comes on at the start and then goes off again. Does anyone recognise any of the errors I'm getting?
Thanks!
I think you need to set the height and width for the image in the conf file to your camera's specification. Mine didn't work until I set height 640, width 480. Streams great! I just need to figure out the patch for the webstream authentication: currently I have this streaming to my webserver, which requires a login, but this can be bypassed if someone enters my IP plus the port I'm streaming on.
Even if the conf file is configured differently, motion uses the resolution it detects as possible when it runs (at least in my experience).
Also, it seems an unsupported palette is set in the conf file, and motion picks one of the two it detects as supported. Have you tried changing the palette setting to "0" (S910) in the conf file (see the sketch below)?
Lastly, the Pi's USB support has some known and as of now unsolved issues with big chunks of data. Lowering the framerate may also help in other cases (in this case, I think, it wouldn't help, since the process already fails on the first image).
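If I remember correctly, the relevant option in motion 3.2 is v4l2_palette, so that change would be something like:

# /etc/motion/motion.conf
# index 0 corresponds to S910 in the "Supported palettes" list in the log
v4l2_palette 0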
Try v4l2-ctl --list-formats-ext to see what combinations of pixel format and image size are supported by your camera. The S910 is a cheap old camera; you might want to upgrade.
Your problem is in the log:
config image height (120) is not modulo 16
So you need a different image resolution.
See what your device supports with
$ uvcdynctrl -f
Pick one that has a y-resolution that is a multiple of 16.
E.g. 640x480 if that one is listed for your camera.
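In motion's config (assuming the default /etc/motion/motion.conf) that would be:

# /etc/motion/motion.conf
# both dimensions must be supported by the camera; height should be a multiple of 16
width 640
height 480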
I would suggest you try guvcview instead of motion. It runs faster and gives a far better picture on my Pi. It runs under X.
Two notes on guvcview:
- set POWER LINE FREQUENCY to your local mains frequency
- set the resolution to 640 x 480
guvcview takes about 50% of the processor power. Yes, use a USB hub too!