cx_Oracle running with mod_wsgi environment - apache

I installed cx_Oracle on CentOS 6.2. When I import the library from the shell it works fine, but when I launch it through wsgi I get the error:
ImportError: libclntsh.so.10.1: cannot open shared object file: No such file or directory
This is an environment variable problem: cx_Oracle cannot find the path to the library.
I have tried the solutions provided here:
I added a symlink to libclntsh.so.10.1 (with ln) in the /usr/lib directory.
I edited the Apache configuration and added:
ORACLE_HOME=/usr/lib/oracle/11.2/client64/lib
LD_LIBRARY_PATH=$ORACLE_HOME/
PATH=$ORACLE_HOME/bin:$PATH
I edited /etc/ld.so.conf and added:
/usr/lib/oracle/11.2/client64/lib
and then ran ldconfig.
I tried setting it from Python with:
os.environ['ORACLE_HOME'] = '/usr/lib/oracle/11.2/client64/lib'
I edited ~/.bashrc with:
export ORACLE_HOME=/usr/lib/oracle/11.2/client64/lib
export LD_LIBRARY_PATH=$ORACLE_HOME/
export PATH=$ORACLE_HOME/bin:$PATH
I also edited apachectl with:
ORACLE_HOME=/usr/lib/oracle/11.2/client64/lib
export ORACLE_HOME
LD_LIBRARY_PATH=$ORACLE_HOME/
export LD_LIBRARY_PATH
PATH=$ORACLE_HOME/bin:$PATH
export PATH
I am running out of ideas. Any suggestions?

When you compile the Python module for Oracle, set:
LD_RUN_PATH=/usr/lib/oracle/11.2/client64/lib
as a user environment variable and export it. This causes that directory to be embedded in the Python extension module's .so file, so the dynamic linker knows where to find the Oracle client library at run time without needing to set the LD_LIBRARY_PATH environment variable.
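For example, a rough sketch of rebuilding cx_Oracle this way, assuming you build from source (the source directory name is a placeholder):
export LD_RUN_PATH=/usr/lib/oracle/11.2/client64/lib   # gets baked into cx_Oracle.so as its rpath
cd cx_Oracle-source   # hypothetical directory containing the cx_Oracle setup.py
python setup.py build
python setup.py install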
For a standard Apache distribution (Linux distros are often a bit different), the file for setting extra environment variables is called 'envvars' and lives in the same directory as 'httpd'. On Linux distros the variables often need to be set in a special init.d startup script instead.
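As a hedged sketch, the additions to 'envvars' (or to the distro's startup script) would look roughly like this, assuming the Instant Client path used in the question:
ORACLE_HOME=/usr/lib/oracle/11.2/client64/lib
LD_LIBRARY_PATH=$ORACLE_HOME${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}
export ORACLE_HOME LD_LIBRARY_PATH
With LD_RUN_PATH baked in at build time, though, this step should not be needed.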
So, look up what LD_RUN_PATH is all about.

Instead of using yum install on the cx_Oracle RPM, I downloaded the source of the library and ran setup.py build.
I got an error that pointed me to the function that tries to locate the Instant Client SDK include directories in:
possibleIncludeDirs = ["rdbms/demo", "rdbms/public", "network/public","sdk/include"]
Browsing the ORACLE_HOME folder, I discovered that the SDK files were installed in the lib folder (I had installed the SDK using yum install on the RPM from Oracle) and not in one of the possibleIncludeDirs or in an include folder, as setup.py expects:
if not includeDirs:
    path = os.path.join(oracleLibDir, "include")
    if os.path.isdir(path):
        includeDirs.append(path)
if not includeDirs:
    path = re.sub("lib(64)?", "include", oracleHome)
    if os.path.isdir(path):
        includeDirs.append(path)
I downloaded the Instant Client SDK (the zip file this time) and unzipped it into the lib folder.
That left an sdk folder inside the lib folder (/usr/lib/oracle/11.2/client64/lib).
I then ran setup.py build and setup.py install, and it worked.
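As a rough sketch of the sequence above (the archive name and source directory are placeholders, not exact commands):
cd /usr/lib/oracle/11.2/client64/lib
# unzip the Instant Client SDK zip from Oracle; depending on the zip layout you may need
# to move the extracted sdk/ folder so it sits directly under this lib directory
unzip ~/instantclient-sdk-linux-11.2.zip   # hypothetical file name
export ORACLE_HOME=/usr/lib/oracle/11.2/client64/lib
cd ~/cx_Oracle-source   # hypothetical cx_Oracle source directory
python setup.py build
python setup.py install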

Related

Setting up TensorFlow with OpenCV, SciPy and scikit-learn on a MacBook Pro M1

I think I have read most of the guides on setting up tensorflow, tensorflow-hub and object detection on a Mac M1 running Big Sur v11.6, and I managed to work through most of the errors after more than two weeks. But I am stuck at the OpenCV setup. I tried to compile it from source, but it seems it cannot find the modules from its core package, so make keeps failing after a successful cmake configure. It fails at different stages, complaining about different libraries even though they are there, and never got past 31% despite multiple cmake runs and deleting the build folder or the CMake cache file. So I am not sure what to do to get the build to finish.
I cloned and unzipped opencv-4.5.0 and opencv_contrib-4.5.0 into my miniforge3 directory. Then I created a "build" folder inside the opencv-4.5.0 folder; the cmake command I run in it is (my miniforge conda environment is called silicon, and I made sure I am using arch arm64 in the bash environment):
cmake -DCMAKE_SYSTEM_PROCESSOR=arm64 -DCMAKE_OSX_ARCHITECTURES=arm64 -DWITH_OPENJPEG=OFF -DWITH_IPP=OFF -D CMAKE_BUILD_TYPE=RELEASE -D CMAKE_INSTALL_PREFIX=/usr/local -D OPENCV_EXTRA_MODULES_PATH=/Users/adi/miniforge3/opencv_contrib-4.5.0/modules -D PYTHON3_EXECUTABLE=/Users/adi/miniforge3/envs/silicon/bin/python3.8 -D BUILD_opencv_python2=OFF -D BUILD_opencv_python3=ON -D INSTALL_PYTHON_EXAMPLES=ON -D INSTALL_C_EXAMPLES=OFF -D OPENCV_ENABLE_NONFREE=ON -D BUILD_EXAMPLES=ON /Users/adi/miniforge3/opencv-4.5.0
The build then fails like this:
[ 20%] Linking CXX shared library ../../lib/libopencv_core.dylib
[ 20%] Built target opencv_core
make: *** [all] Error 2
or, in other tries, it initially complained about calib3d or dnn, even though those modules are there in the main opencv-4.5.0 folder.
The other way I tried to install OpenCV is with conda:
conda install opencv
But then when I test with
python -c "import cv2; cv2.__version__"
it seems to search for ffmpeg via Homebrew (I didn't install any of these via Homebrew, only with conda). So it complained:
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/Users/adi/miniforge3/envs/silicon/lib/python3.8/site-packages/cv2/__init__.py", line 5, in <module>
from .cv2 import *
ImportError: dlopen(/Users/adi/miniforge3/envs/silicon/lib/python3.8/site-packages/cv2/cv2.cpython-38-darwin.so, 2): Library not loaded: /opt/homebrew/opt/ffmpeg/lib/libavcodec.58.dylib
Referenced from: /Users/adi/miniforge3/envs/silicon/lib/python3.8/site-packages/cv2/cv2.cpython-38-darwin.so
Reason: image not found
I do have these libs, though; when I searched with find /usr/ -name 'libavcodec.58.dylib' I found many locations:
find: /usr//sbin/authserver: Permission denied
find: /usr//local/mysql-8.0.22-macos10.15-x86_64/keyring: Permission denied
find: /usr//local/mysql-8.0.22-macos10.15-x86_64/data: Permission denied
find: /usr//local/hw_mp_userdata/Internet_Manager/OnlineUpdate: Permission denied
/usr//local/lib/libavcodec.58.dylib
/usr//local/Cellar/ffmpeg/4.4_2/lib/libavcodec.58.dylib
(silicon) MacBook-Pro:opencv-4.5.0 adi$ ln -s /usr/local/Cellar/ffmpeg/4.4_2/lib/libavcodec.58.dylib /opt/homebrew/opt/ffmpeg/lib/libavcodec.58.dylib
ln: /opt/homebrew/opt/ffmpeg/lib/libavcodec.58.dylib: No such file or directory
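The ln failure above is most likely because the parent directory /opt/homebrew/opt/ffmpeg/lib does not exist yet; a hedged workaround sketch (creating it by hand is a hack, not a proper fix, and the other ffmpeg dylibs would likely be needed too) would be:
sudo mkdir -p /opt/homebrew/opt/ffmpeg/lib
sudo ln -s /usr/local/Cellar/ffmpeg/4.4_2/lib/libavcodec.58.dylib \
    /opt/homebrew/opt/ffmpeg/lib/libavcodec.58.dylib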
One of the guides said to install Homebrew also in the arm64 environment, so I did that with:
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
export PATH="/opt/homebrew/bin:/usr/local/bin:$PATH"
alias ibrew='arch -x86_64 /usr/local/bin/brew' # create brew for intel (ibrew) and arm/ silicon
Not sure if that is affecting it, but it seems it didn't change anything, because cv2 still uses /opt/homebrew/ instead of /usr/local/.
Any help getting either of these approaches to work would be highly appreciated. Ultimately I want to use the TensorFlow Model Zoo object detection models. All the other dependencies seem fine (for now); the problem is either that OpenCV doesn't work, or that when it does work via conda install, scipy and scikit-learn don't.
In my case I also had a lot of trouble trying to install both modules. I finally managed to do so, but to be honest I am not really sure how and why. I am leaving the requirements below in case you want to recreate the environment that worked in my case. You should have conda Miniforge 3 installed:
# This file may be used to create an environment using:
# $ conda create --name <env> --file <this file>
# platform: osx-arm64
absl-py=1.0.0=pypi_0
astunparse=1.6.3=pypi_0
autocfg=0.0.8=pypi_0
blas=2.113=openblas
blas-devel=3.9.0=13_osxarm64_openblas
boto3=1.22.10=pypi_0
botocore=1.25.10=pypi_0
c-ares=1.18.1=h1a28f6b_0
ca-certificates=2022.2.1=hca03da5_0
cachetools=5.0.0=pypi_0
certifi=2021.10.8=py39hca03da5_2
charset-normalizer=2.0.12=pypi_0
cycler=0.11.0=pypi_0
expat=2.4.4=hc377ac9_0
flatbuffers=2.0=pypi_0
fonttools=4.31.1=pypi_0
gast=0.5.3=pypi_0
gluoncv=0.10.5=pypi_0
google-auth=2.6.0=pypi_0
google-auth-oauthlib=0.4.6=pypi_0
google-pasta=0.2.0=pypi_0
grpcio=1.42.0=py39h95c9599_0
h5py=3.6.0=py39h7fe8675_0
hdf5=1.12.1=h5aa262f_1
idna=3.3=pypi_0
importlib-metadata=4.11.3=pypi_0
jmespath=1.0.0=pypi_0
keras=2.8.0=pypi_0
keras-preprocessing=1.1.2=pypi_0
kiwisolver=1.4.0=pypi_0
krb5=1.19.2=h3b8d789_0
libblas=3.9.0=13_osxarm64_openblas
libcblas=3.9.0=13_osxarm64_openblas
libclang=13.0.0=pypi_0
libcurl=7.80.0=hc6d1d07_0
libcxx=12.0.0=hf6beb65_1
libedit=3.1.20210910=h1a28f6b_0
libev=4.33=h1a28f6b_1
libffi=3.4.2=hc377ac9_2
libgfortran=5.0.0=11_1_0_h6a59814_26
libgfortran5=11.1.0=h6a59814_26
libiconv=1.16=h1a28f6b_1
liblapack=3.9.0=13_osxarm64_openblas
liblapacke=3.9.0=13_osxarm64_openblas
libnghttp2=1.46.0=h95c9599_0
libopenblas=0.3.18=openmp_h5dd58f0_0
libssh2=1.9.0=hf27765b_1
llvm-openmp=12.0.0=haf9daa7_1
markdown=3.3.6=pypi_0
matplotlib=3.5.1=pypi_0
mxnet=1.6.0=pypi_0
ncurses=6.3=h1a28f6b_2
numpy=1.21.2=py39hb38b75b_0
numpy-base=1.21.2=py39h6269429_0
oauthlib=3.2.0=pypi_0
openblas=0.3.18=openmp_h3b88efd_0
opencv-python=4.5.5.64=pypi_0
openssl=1.1.1m=h1a28f6b_0
opt-einsum=3.3.0=pypi_0
packaging=21.3=pypi_0
pandas=1.4.1=pypi_0
pillow=9.0.1=pypi_0
pip=22.0.4=pypi_0
portalocker=2.4.0=pypi_0
protobuf=3.19.4=pypi_0
pyasn1=0.4.8=pypi_0
pyasn1-modules=0.2.8=pypi_0
pydot=1.4.2=pypi_0
pyparsing=3.0.7=pypi_0
python=3.9.7=hc70090a_1
python-dateutil=2.8.2=pypi_0
python-graphviz=0.8.4=pypi_0
pytz=2022.1=pypi_0
pyyaml=6.0=pypi_0
readline=8.1.2=h1a28f6b_1
requests=2.27.1=pypi_0
requests-oauthlib=1.3.1=pypi_0
rsa=4.8=pypi_0
s3transfer=0.5.2=pypi_0
scipy=1.8.0=pypi_0
setuptools=58.0.4=py39hca03da5_1
six=1.16.0=pyhd3eb1b0_1
sqlite=3.38.0=h1058600_0
tensorboard=2.8.0=pypi_0
tensorboard-data-server=0.6.1=pypi_0
tensorboard-plugin-wit=1.8.1=pypi_0
tensorflow-deps=2.8.0=0
tensorflow-macos=2.8.0=pypi_0
termcolor=1.1.0=pypi_0
tf-estimator-nightly=2.8.0.dev2021122109=pypi_0
tk=8.6.11=hb8d0fd4_0
tqdm=4.63.1=pypi_0
typing-extensions=4.1.1=pypi_0
tzdata=2021e=hda174b7_0
urllib3=1.26.9=pypi_0
werkzeug=2.0.3=pypi_0
wheel=0.37.1=pyhd3eb1b0_0
wrapt=1.14.0=pypi_0
xz=5.2.5=h1a28f6b_0
yacs=0.1.8=pypi_0
zipp=3.7.0=pypi_0
zlib=1.2.11=h5a0b063_4
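A minimal sketch of recreating this environment, assuming the listing above is saved as environment.txt; note that entries with the build string pypi_0 came from pip and are not on conda channels, so they may need to be reinstalled with pip afterwards:
conda create --name silicon --file environment.txt
conda activate silicon
# packages listed as pypi_0 are installed via pip, e.g.:
pip install tensorflow-macos==2.8.0 opencv-python==4.5.5.64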

How to compile a source into an ARM binary

I want to compile VTK-DICOM to run on an ARM Raspberry Pi (Raspbian). Is it possible? Where should I start?
Building for Raspbian Debian Buster images and ARMv6
This tutorial also supports older Raspberry Pi models (A, B, B+, Zero) based on the ARMv6 CPU.
See also:
GCC 8 Cross Compiler outputs ARMv7 executable instead of ARMv6
Set up the toolchain
There is no official git repository containing an updated toolchain (See https://github.com/raspberrypi/tools/issues/102).
Here is a GitHub repository which includes build scripts and precompiled toolchains for ARMv6, based on GCC 8 and newer:
https://github.com/Pro/raspi-toolchain
As mentioned in the project's readme, these are the steps to get the toolchain. You can also build it yourself (see the README for further details).
Download the toolchain:
wget https://github.com/Pro/raspi-toolchain/releases/latest/download/raspi-toolchain.tar.gz
Extract it. Note: The toolchain has to be in /opt/cross-pi-gcc since it's not location independent.
sudo tar xfz raspi-toolchain.tar.gz --strip-components=1 -C /opt
You are done! The toolchain is now in /opt/cross-pi-gcc
Optionally, add the toolchain to your PATH by adding:
export PATH=$PATH:/opt/cross-pi-gcc/bin
to the end of the file named ~/.bashrc
Now you can either log out and log back in (i.e. restart your terminal session), or run . ~/.bashrc in your terminal to pick up the PATH addition in your current terminal session.
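As a quick sanity check (the tool prefix below is what this toolchain is expected to use; adjust it if your build differs):
arm-linux-gnueabihf-gcc --version
which arm-linux-gnueabihf-gcc   # should point into /opt/cross-pi-gcc/bin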
Get the libraries from the Raspberry PI
To cross-compile for your own Raspberry Pi, which may have some custom libraries installed, you need to get these libraries onto your host.
Create a folder $HOME/raspberrypi.
In your raspberrypi folder, make a folder called rootfs.
Now you need to copy the entire /lib and /usr directories to this newly created folder. I usually bring the RPi image up and copy them via rsync:
rsync -vR --progress -rl --delete-after --safe-links pi@192.168.1.PI:/{lib,usr,opt/vc/lib} $HOME/raspberrypi/rootfs
where 192.168.1.PI is replaced by the IP of your Raspberry Pi.
Use CMake to compile your project
To tell CMake to take your own toolchain, you need to have a toolchain file which initializes the compiler settings.
Get this toolchain file from here:
https://github.com/Pro/raspi-toolchain/blob/master/Toolchain-rpi.cmake
Now you should be able to compile your CMake projects simply by passing this toolchain file via -DCMAKE_TOOLCHAIN_FILE and setting the correct environment variables (the paths below assume the toolchain file was saved to $HOME/raspberrypi):
export RASPBIAN_ROOTFS=$HOME/raspberrypi/rootfs
export PATH=/opt/cross-pi-gcc/bin:$PATH
export RASPBERRY_VERSION=1
cmake -DCMAKE_TOOLCHAIN_FILE=$HOME/raspberrypi/Toolchain-rpi.cmake ..
An example hello world is shown here:
https://github.com/Pro/raspi-toolchain/blob/master/build_hello_world.sh
Source:
https://stackoverflow.com/a/58559140/13859552

libwebsockets (on ubuntu) - trying compile example "lws minimal ws server + permessage-deflate echo" - can't find libwebsocketsConfig.cmake

I am an (absolute) beginner with libwebsockets (and cmake), and am trying to build one of the minimal examples from libwebsockets.org:
"lws minimal ws server + permessage-deflate echo"
at
https://libwebsockets.org/git/libwebsockets/tree/minimal-examples/ws-server/minimal-ws-server-echo
I have installed libwebsockets-dev (sudo apt install libwebsockets-dev) and cmake (sudo apt install cmake).
The example page tells me to build the example (two .c files and CMakeLists.txt) using
$ cmake . && make
The build fails with the following message:
CMake Error at CMakeLists.txt:3 (find_package):
Could not find a package configuration file provided by "libwebsockets"
with any of the following names:
libwebsocketsConfig.cmake
libwebsockets-config.cmake
Add the installation prefix of "libwebsockets" to CMAKE_PREFIX_PATH or set
"libwebsockets_DIR" to a directory containing one of the above files. If
"libwebsockets" provides a separate development package or SDK, be sure it
has been installed.
-- Configuring incomplete, errors occurred!
See also "/home/user/ws/CMakeFiles/CMakeOutput.log".
I cannot find either of the .cmake files on my system (they are evidently not provided as part of the libwebsockets-dev package).
What am I missing?
Thank you!
Thank you, Tsyvarev, you are correct.
The solution was to build libwebsockets from the GitHub repository and use that instead of the libwebsockets-dev package installed from Ubuntu 18.04.
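For reference, a rough sketch of the build-from-source route (the repository URL, options and install prefix are typical choices, not the only ones):
git clone https://github.com/warmcat/libwebsockets.git
cd libwebsockets && mkdir build && cd build
cmake .. && make && sudo make install   # should install libwebsocketsConfig.cmake under the chosen prefix
sudo ldconfig
# then, in the minimal-ws-server-echo example directory:
cmake . && make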

Error when installing Tensorflow with Conda on Windows

Operating on Windows 10.
I am trying to install Tensorflow within a conda environment. The Anaconda 3 version I am using is the one that can be installed from within Visual Studio; the conda version number is 4.6.14.
I created a new environment with conda create -n test python=3.6 and afterwards tried to install Tensorflow:
> conda activate test
> (test) conda install tensorflow-gpu
after which I'm getting the following error:
Downloading and Extracting Packages
tensorflow-base-1.13 | 217.6 MB | ############################################################################ | 100%
[Errno 2] No such file or directory: 'C:\\Program Files (x86)\\Microsoft Visual Studio\\Shared\\Anaconda3_64\\pkgs\\tensorflow-base-1.13.1-gpu_py36h871c8ca_0\\Lib\\site-packages\\tensorflow\\include\\tensorflow\\include\\external\\eigen_archive\\unsupported\\Eigen\\src\\SpecialFunctions\\SpecialFunctionsPacketMath.h'
Any idea on what could be the error here?
Edit: conda info returned:
active environment : base
active env location : C:\Program Files (x86)\Microsoft Visual Studio\Shared\Anaconda3_64
shell level : 1
user config file : C:\Users\Me\.condarc
populated config files :
conda version : 4.6.14
conda-build version : 3.10.5
python version : 3.6.5.final.0
base environment : C:\Program Files (x86)\Microsoft Visual Studio\Shared\Anaconda3_64 (writable)
channel URLs : https://repo.anaconda.com/pkgs/main/win-64
https://repo.anaconda.com/pkgs/main/noarch
https://repo.anaconda.com/pkgs/free/win-64
https://repo.anaconda.com/pkgs/free/noarch
https://repo.anaconda.com/pkgs/r/win-64
https://repo.anaconda.com/pkgs/r/noarch
https://repo.anaconda.com/pkgs/msys2/win-64
https://repo.anaconda.com/pkgs/msys2/noarch
package cache : C:\Program Files (x86)\Microsoft Visual Studio\Shared\Anaconda3_64\pkgs
C:\Users\Me\.conda\pkgs
C:\Users\Me\AppData\Local\conda\conda\pkgs
envs directories : C:\Program Files (x86)\Microsoft Visual Studio\Shared\Anaconda3_64\envs
C:\Users\Me\.conda\envs
C:\Users\Me\AppData\Local\conda\conda\envs
platform : win-64
user-agent : conda/4.6.14 requests/2.18.4 CPython/3.6.5 Windows/10 Windows/10.0.17134
administrator : True
netrc file : None
offline mode : False
I have the same issue. Like you, I also made the convenient "mistake" of installing Anaconda with the Visual Studio installer. As James pointed out, the problem is indeed that the file path is too long (> 260 characters). Anyway, it would seem we have to swallow the bitter pill, uninstall Anaconda and reinstall it at root level (I'll go for Miniconda this time).
Before I do that, I'll try to find a way of moving the Anaconda3_64\pkgs\ to a different location. Should that work, I'll post it here.
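For what it's worth, conda does support adding an extra package cache location via the pkgs_dirs setting in .condarc, which might shorten the offending paths; a hedged sketch (the target folder is just an example):
conda config --add pkgs_dirs C:\conda_pkgs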
Edit: Got the solution, no reinstall necessary!
The problem comes from Windows itself, which by default does not support long file paths. But this can be changed using a registry flag:
https://knowledge.autodesk.com/support/autocad/learn-explore/caas/sfdcarticles/sfdcarticles/The-Windows-10-default-path-length-limitation-MAX-PATH-is-256-characters.html
Open Regedit as Admin
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\FileSystem
Change LongPathsEnabled to 1 (DWORD 32 bit)
Warning: it could break compatibility with older programs. So far, everything is still working fine for me. If you run into problems, simply revert the setting (and remember that you changed it!).

How to set up Kurento Media Server helpers?

I want to build Kurento Media Server on the latest Fedora.
However, CMake fails to configure sources:
Could not find a package configuration file provided by "KurentoHelpers"
with any of the following names:
KurentoHelpersConfig.cmake
kurentohelpers-config.cmake
I installed kms-cmake-utils, as suggested, to /usr/local/. However, I still get this error, even when I set CMAKE_PREFIX_PATH to the folder where the kms-cmake-utils install target put its .cmake modules.
In fact, there is no KurentoHelpersConfig.cmake file in kms-cmake-utils.
How can I configure Kurento for Fedora?
Try installing to /usr instead of /usr/local, because CMake looks for modules in /usr/share.
Executing cmake like this should fix the problem:
cmake .. -DCMAKE_PREFIX_PATH=/usr
You should append the path of KurentoHelpersConfig.cmake to CMAKE_MODULE_PATH; do that by adding this line to CMakeLists.txt:
SET(CMAKE_MODULE_PATH ${CMAKE_MODULE_PATH} "/usr/local/share/cmake-3.5/Modules")
Something seems to be wrong with cmake: it cannot read an external CMAKE_MODULE_PATH, so I force-set it on the command line (Ubuntu server x86_64 used); note the -DCMAKE_MODULE_PATH=$CMAKE_MODULE_PATH.
HOMEDIR=`pwd`
BUILD=$HOMEDIR/build
export CMAKE_MODULE_PATH=$BUILD/usr/local/share/cmake-3.5/Modules
mkdir -p build
cd build
cmake -DCMAKE_PREFIX_PATH=$HOMEDIR/build -DCMAKE_MODULE_PATH=$CMAKE_MODULE_PATH ..
make DESTDIR=$HOMEDIR/build install