Hello, I'm currently trying to build WebRTC on Asterisk. So far I've been following this site (http://www.nethram.com/webrtc-with-asterisk-12/), and I get "aconfigure: error: Unable to use SRTP" after running ./configure for pjproject.
It gives me this error about SRTP (since I configure it with "--with-external-srtp").
The error message (the other checks pass fine):
checking if external SRTP devkit is installed... aconfigure: error: Unable to use SRTP. If SRTP development files are not available in the default locations, use CFLAGS and LDFLAGS env var to set the include/lib paths
Can anybody help? Thank you very much.
Use v1.5.0 instead of the current release (v2.x):
git clone https://github.com/cisco/libsrtp/
cd libsrtp
git fetch
git tag -l
git checkout v1.5.0
..
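The remaining build and install steps aren't shown above; a minimal sketch of how this usually continues (the /usr/local prefix and the pjproject path are assumptions, not from the original answer):
./configure --prefix=/usr/local    # run inside the libsrtp checkout
make
sudo make install
# then re-run the pjproject configure, pointing it at the installed SRTP headers and
# libraries via CFLAGS/LDFLAGS, exactly as the error message itself suggests
cd /path/to/pjproject
CFLAGS="-I/usr/local/include" LDFLAGS="-L/usr/local/lib" ./configure --with-external-srtp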
I'm just starting to learn HLF, and I have an error while following the tutorial from the docs (link).
I downloaded fabric-samples using this command (replaced bit.ly link with the destination):
curl -sSL https://raw.githubusercontent.com/hyperledger/fabric/master/scripts/bootstrap.sh | bash -s -- 2.2.2 1.4.9
I run logspout in one terminal and try to execute peer lifecycle chaincode install basic.tar.gz in another one, and this is the result I get:
Error: failed to retrieve endorser client for install: endorser client
failed to connect to localhost:7051: failed to create new connection:
context deadline exceeded
Log presented by Logspout:
peer0.org1.example.com|2022-03-15 13:03:24.452 UTC [core.comm]
ServerHandshake -> ERRO 04a Server TLS handshake failed in 2.650245ms
with error remote error: tls: bad certificate server=PeerServer
remoteaddress=172.22.0.1:61126
I set the environment variables in the terminal as instructed in the docs, and I checked that the CORE_PEER_TLS_ROOTCERT_FILE variable points to an existing file. The content of the file is the same as in the container.
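For example, this is one way to double-check that (the container name comes from the log above; the in-container cert path is an assumption based on the usual test-network compose files, so adjust it if yours differs):
echo "$CORE_PEER_TLS_ROOTCERT_FILE"     # confirm the variable is set and the file exists
ls -l "$CORE_PEER_TLS_ROOTCERT_FILE"
# compare it with the TLS CA cert the peer container is actually using
docker exec peer0.org1.example.com cat /etc/hyperledger/fabric/tls/ca.crt | diff - "$CORE_PEER_TLS_ROOTCERT_FILE"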
What I tried to do:
download fabric-samples again and redo the whole setup, copy-pasting the commands directly from the docs
Do you have any suggestions on where I can look for the issue?
I resolved the problem: I was using peer version 2.2.1 left over from previous experiments, and it probably collided with FABRIC_CFG_PATH.
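For anyone hitting the same thing, a quick way to spot a stale binary (the expected paths are only an example, assuming the standard fabric-samples layout):
which peer                 # should resolve to the peer shipped with the downloaded release, e.g. fabric-samples/bin/peer
peer version               # should report 2.2.2, not a leftover build from earlier experiments
echo "$FABRIC_CFG_PATH"    # should point at the matching config directory, e.g. fabric-samples/config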
When trying to run cmake on a CGAL example, I get:
CMake Error: Remove failed on file:
/cgal/example/CMakeFiles/CMakeTmp/cmTC_9e180.exe: System Error: Device or resource busy
Working under Win10 + Msys2.
CGAL was obtained via pacman (local/mingw-w64-x86_64-cgal 4.13-1).
Since I did not find the CGAL examples in any Msys2 package, I copied them from /usr/share/doc/libcgal13/examples.tar.gz, which was obtained on an Ubuntu system with
$ sudo apt-get install libcgal-demo
The example is reconstruction_surface_mesh.cpp from examples/Advancing_front_surface_reconstruction.
I don't know whether the origin of the error is specific to my CMakeLists.txt or lies elsewhere.
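For reference, this is roughly how such an example is configured and built under the MSYS2 shell (the generator and out-of-source build directory are assumptions, not details taken from the original post):
mkdir -p build && cd build          # run from the example directory
cmake -G "MSYS Makefiles" -DCMAKE_BUILD_TYPE=Release ..
make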
Related, but AFAICT not providing the answer:
https://cmake.org/pipermail/cmake-developers/2010-November/012619.html
https://gitlab.kitware.com/cmake/cmake/issues/17566
https://github.com/TadasBaltrusaitis/OpenFace/issues/634
CMake: how to use INTERFACE_INCLUDE_DIRECTORIES with ExternalProject?
https://www.google.com/search?safe=off&q=CMake+Error+in+CMakeLists.txt%3A+++Imported+target+includes+non-existent+path+in+its+INTERFACE_INCLUDE_DIRECTORIES.++Possible+reasons+include
I have 2 machines -
dccten1a with no internet access where I need to install Tensorflow with GPU support
dccten1b with internet access so that I can download packages and transfer to dccten1a
In the final step of installing TensorFlow, when running the bazel build command to produce a .whl file, I get an error saying that it can't find a file in the folder it is looking in, and that it also cannot download it, which is expected since dccten1a has no internet access.
bazel build --config=opt --config=cuda /home/tensorflow/Documents/tf_dependencies/tensorflow-master/tensorflow/tools/pip_package:build_pip_package --cxxopt="-D_GLIBCXX_USE_CXX11_ABI=0"
ERROR: error loading package '': Encountered error while reading extension file 'closure/defs.bzl': no such package '#io_bazel_rules_closure//closure': Error downloading [http://bazel-mirror.storage.googleapis.com/github.com/bazelbuild/rules_closure/archive/5ca1dab6df9ad02050f7ba4e816407f88690cf7d.tar.gz, https://github.com/bazelbuild/rules_closure/archive/5ca1dab6df9ad02050f7ba4e816407f88690cf7d.tar.gz] to /home/xyzuser/.cache/bazel/_bazel_xyzuser/cb1e63cb5e61cab49a9fd2f5ba92d003/external/io_bazel_rules_closure/5ca1dab6df9ad02050f7ba4e816407f88690cf7d.tar.gz: All mirrors are down: [Unknown host: github.com, Unknown host: mirror.bazel.build]
I checked on the system, and there is no such directory as shown in the error message (i.e., /home/xyzuser/.cache/bazel/_bazel_xyzuser/cb1e63cb5e61cab49a9fd2f5ba92d003/external/io_bazel_rules_closure/). So I created it, searched for and found what seemed to be the requisite file online, downloaded it on the machine with internet access, transferred it to the target machine, moved it into the newly created directory, and ran the command again:
(tensorflow@dccten1a):
mkdir -p /home/tensorflow/.cache/bazel/_bazel_tensorflow/cb1e63cb5e61cab49a9fd2f5ba92d003/external/io_bazel_rules_closure
(tensorflow@dccten1b):
http://bazel-mirror.storage.googleapis.com/github.com/bazelbuild/rules_closure/archive/5ca1dab6df9ad02050f7ba4e816407f88690cf7d.tar.gz
sudo scp -r /home/tensorflow/Downloads/5ca1dab6df9ad02050f7ba4e816407f88690cf7d.tar.gz tensorflow@160.88.114.17:/home/tensorflow/Documents/tf_dependencies
(tensorflow@dccten1a):
mv /home/tensorflow/Documents/tf_dependencies/5ca1dab6df9ad02050f7ba4e816407f88690cf7d.tar.gz /home/tensorflow/.cache/bazel/_bazel_tensorflow/cb1e63cb5e61cab49a9fd2f5ba92d003/external/io_bazel_rules_closure
Then I ran the bazel build command again, but the same error persists.
Use --experimental_repository_cache to download the dependencies on the machine with internet access, transfer the cache to the machine without internet access, and then point the same flag at the transferred cache.
e.g.
1) On the machine with internet access, run
tensorflow@dccten1b $ bazel build --experimental_repository_cache=/path/to/some/folder --config=opt --config=cuda /home/tensorflow/Documents/tf_dependencies/tensorflow-master/tensorflow/tools/pip_package:build_pip_package --cxxopt="-D_GLIBCXX_USE_CXX11_ABI=0"
2) Copy the cache at /path/to/some/folder to the machine without internet access using an SD card or flash drive.
3) On the machine without internet access, run the same command again, setting the flag to the cache's location.
tensorflow@dccten1a $ bazel build --experimental_repository_cache=/path/to/some/folder --config=opt --config=cuda /home/tensorflow/Documents/tf_dependencies/tensorflow-master/tensorflow/tools/pip_package:build_pip_package --cxxopt="-D_GLIBCXX_USE_CXX11_ABI=0"
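Since the two machines can already reach each other (the question used scp between them), the cache can be transferred the same way instead of using removable media; the IP and paths below come from the question, and the archive name is just an illustration:
tensorflow@dccten1b $ tar czf repo_cache.tar.gz -C /path/to/some/folder .
tensorflow@dccten1b $ scp repo_cache.tar.gz tensorflow@160.88.114.17:/home/tensorflow/
tensorflow@dccten1a $ mkdir -p /path/to/some/folder
tensorflow@dccten1a $ tar xzf /home/tensorflow/repo_cache.tar.gz -C /path/to/some/folder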
I'm trying to compile gcc 5.3.0 on my Raspberry Pi with the latest Raspbian system image.
$ ./configure --enable-checking=release --enable-languages=c,c++,fortran --host=arm-cortexa7_neon-linux-gnueabihf --build=arm-cortexa7_neon-linux-gnueabihf --target=arm-cortexa7_neon-linux-gnueabihf
$ make
However, the original compiler (gcc 4.9) complains about not finding sys/cdefs.h when compiling libgcc.
I checked that I have libc6-dev and build-essential installed.
So I used grep -R 'cdefs' /usr/include/ to search for it, and I found it under /usr/include/bsd/. I created the sys directory and made hard links to these headers under /usr/include/bsd/sys.
This time it gave me an even weirder error:
/usr/include/stdio.h:312:8: error: unknown type name 'FILE'.
I searched for this on Stack Overflow, and there's a similar question: https://stackoverflow.com/a/21047237/5691005. But when I removed /usr/include/sys and /usr/include/bsd and then reinstalled libc6-dev, I could not find sys/cdefs.h under /usr/include, and the compiler still gave errors.
I'm now totally lost. Any suggestion will be appreciated.
I had a similar problem compiling gcc-8.2. I tried to do as described here, reinstalling:
sudo apt-get --reinstall install libc6 libc6-dev
After that I located all the missing headers:
find / -name cdefs.h
and copied them to /usr/include:
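(The exact copy command isn't shown; a hypothetical form, assuming the headers turned up under the Raspbian multiarch include directory, would be:)
sudo mkdir -p /usr/include/sys
sudo cp /usr/include/arm-linux-gnueabihf/sys/cdefs.h /usr/include/sys/   # adjust the source path to wherever find actually reported the header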
Those steps only allowed me to move forward, but I still didn't manage to completely build gcc.
The best solution I found is to download a precompiled version of gcc-8.1 from:
https://solarianprogrammer.com/2017/12/07/raspberry-pi-raspbian-compiling-gcc/
I also ran into this problem when creating a containerized build environment for cross-compiled Qt applications for the Raspberry Pi 4.
I found I needed to edit the mkspec for the linux-rasp-pi4-v3d device and add another cflag so that gcc could find the header from my Raspi sysroot that was used to cross-compile Qt.
Specifically under qtbase/mkspecs/devices/linux-rasp-pi4-v3d-g++/qmake.conf:
QMAKE_CFLAGS = -march=armv8-a -mtune=cortex-a72 -mfpu=crypto-neon-fp-armv8 -I$$[QT_SYSROOT]/usr/include/arm-linux-gnueabihf
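For context, that device mkspec is the one picked up when Qt is configured with -device linux-rasp-pi4-v3d; a typical configure invocation that uses it looks roughly like the following (the sysroot, prefix, and cross-compiler prefix are placeholders, not values from the original answer):
./configure -device linux-rasp-pi4-v3d -device-option CROSS_COMPILE=arm-linux-gnueabihf- \
    -sysroot /path/to/raspi/sysroot -prefix /usr/local/qt5pi -opensource -confirm-license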
I'm failing to compile the rabbitmq-c library on Mac OS 10.6.6.
I intend to build the php-amqp extension against it.
I've tried both the latest branch of rabbitmq-c and rabbitmq-codegen according to the instructions here and the specific branches according to the instructions here.
Running autoreconf -i as per the instructions, I get:
glibtoolize: Consider adding `AC_CONFIG_MACRO_DIR([m4])' to configure.ac and
glibtoolize: rerunning libtoolize, to keep the correct libtool macros in-tree.
glibtoolize: Consider adding `-I m4' to ACLOCAL_AMFLAGS in Makefile.am.
configure.ac:12: installing `./config.sub'
configure.ac:12: required file `./ltmain.sh' not found
configure.ac:3: installing `./missing'
configure.ac:3: installing `./install-sh'
configure.ac:12: installing `./config.guess'
examples/Makefile.am: installing `./depcomp'
autoreconf: automake failed with exit status: 1
Running simply autoconf I get:
configure.ac:3: error: possibly undefined macro: AM_INIT_AUTOMAKE
If this token and others are legitimate, please use m4_pattern_allow.
See the Autoconf documentation.
configure.ac:12: error: possibly undefined macro: AM_PROG_LIBTOOL
configure.ac:90: error: possibly undefined macro: AM_CONDITIONAL
Most of what I can find by searching online suggests I don't have libtool or automake. I have both.
I'm afraid I'm out of my depth with autoconf, so I don't know how or where to alter configure.ac, or whether the warning has anything to do with the missing ltmain.sh file.
I solved the same problem by installing pkg-config:
sudo port install pkgconfig
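For completeness, the usual follow-up after that (a generic autotools sequence based on the glibtoolize hints above, not something spelled out in the original answer):
autoreconf -fvi      # re-runs glibtoolize/aclocal/autoconf/automake and copies in missing files such as ltmain.sh
./configure
make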