I need my .so module to stay loaded even after a dlclose() call. On Linux, since glibc 2.2, I could do that by passing the RTLD_NODELETE flag to dlopen(). Is there a way to do the same thing on AIX 6 and newer versions?
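For reference, this is what I do on Linux (a minimal sketch; the library name is just an example):

#include <dlfcn.h>   // dlopen, dlclose, RTLD_NODELETE (glibc >= 2.2)

void load_module(void)
{
    // RTLD_NODELETE keeps the mapping resident even after dlclose()
    void *handle = dlopen("libmymodule.so", RTLD_NOW | RTLD_NODELETE);
    if (handle)
        dlclose(handle);   // the library stays loaded in the process
}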
I am working on a Python script (I use Python 3.7.3) that uses tensorflow-gpu (1.14.0) and used PyInstaller 3.5 to convert this script to an executable. I am using CUDA 10.0 and cuDNN 7.6.1 and my graphics card is an NVIDIA GeForce GTX 960M. I recently uninstalled CUDA to test whether the executable of the Python script still runs, and surprisingly it still runs on the GPU, which no longer works when I run the Python script directly.
My question is, can this executable be run on systems without the CUDA toolkit but with a CUDA-capable graphics card?
According to this documentation, PyInstaller makes and stores a private copy of all the external libraries the Python code relies on when building a single-file executable.
Therefore it is safe to assume that your executable runs regardless of whether the CUDA toolkit is installed, because it carries a full private copy of the necessary CUDA libraries internally and uses that copy when the executable is run.
According to the GitHub issues in the official repository (here and here for example) CUDA libraries are usually dynamically loaded at run-time and not at link-time, so they are typically not included in the final exe file (or folder) with the result that the exe file won't work on a machine without CUDA installed. The solution (please refer to the linked issues too) is to put the DLLs necessary to run the exe in its dist folder (if generated without the --onefile option) or install the CUDA runtime on the target machine.
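As a sketch of the bundling route (the path and DLL name below are only examples for CUDA 10.0 on Windows; repeat --add-binary for each CUDA/cuDNN DLL that TensorFlow actually loads, e.g. cuBLAS and cuDNN):

pyinstaller --onefile --add-binary "C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v10.0\bin\cudart64_100.dll;." your_script.py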
The behaviour you are seeing may be due to the specific version of TF, which loads the libraries differently from what is described above, but it is not the expected behaviour nowadays.
I am trying to run my Pharo 2.0 application on CentOS; it was previously installed on a Mac. The original version is Pharo 2.0, so I need to run the same image on CentOS too, but I get the error below:
/lib/libc.so.6: version `GLIBC_2.15' not found (required by xxxxx)
Should I try upgrading CentOS to see if Pharo 2.0 works, or port my whole application to a later version of Pharo?
There is now a VM built especially for systems with an older libc version. In fact there is a build for CentOS specifically (which has a slight variation in linkages from Debian); the latest version is permalinked here. See http://pharo.org/download#custom for more info.
Would I encounter any problems running Java programs and associated libraries compiled with Java 1.6 and 1.7 (I compile with 1.7, whereas some libraries were compiled with 1.6) and running the entire program on a 1.7 JRE?
As already answered, you are mostly safe, and most products and third-party libraries will simply work. However, there are very rare cases where binary incompatibilities (ones where a class file compiled with an older JDK fails to run on a newer JVM) were introduced between JDK versions.
Official list of Oracle Java incompatibilities between versions:
in Java SE 9 since Java SE 8
in Java SE 8 since Java SE 7
in Java SE 7 since Java SE 6
in Java SE 6 since Java SE 5.0
in Java SE 5.0 since Java SE 1.4.2
Compatibility tool
Packaged with JDK 9 there is a tool called jdeprscan which checks compatibility, lists uses of deprecated APIs in your code and suggests alternatives(!). You can specify the target JDK version (it works for JDK 9, 8, 7 and 6) and it will list the incompatibilities specific to your target version.
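For example, to scan a jar against the JDK 8 API (the jar name is just a placeholder):

jdeprscan --release 8 my-library.jar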
An additional comment regarding libraries:
A reasonable rule of thumb is to use the latest stable release of a library for the JRE version your software targets. Obviously you will find many exceptions to this rule, but in general the stability of publicly available libraries increases over time.
Naturally API compatibility and versioning have to be considered when changing versions of dependencies.
Again most popular dependencies will have web pages where such information should be available.
If, however, you are using something a bit more obscure, you can check which JRE the classes in your dependency were compiled for.
Here is a great answer on how to find out the class version. You might need to unzip the JAR file first.
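For instance, with the JDK's javap tool (MyClass is a placeholder; major version 49 = Java 5, 50 = Java 6, 51 = Java 7, 52 = Java 8):

javap -verbose MyClass.class | grep major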
You would not encounter any problems: that's the magic of Java, it's backwards compatible. You can run almost all code from Java 1 on Java 8. There's no reason why Java 6 code won't run on a Java 8 runtime.
What is interesting is that applications written in, let's say, Java 1.4 can even see speed increases when run on later runtimes. This is because Java is constantly evolving, not just the language known as "Java", but also the JVM (Java Virtual Machine). I still have source code from more than 10 years ago that works as expected on the latest JVM.
If you want to target, let's say, a Java 5 VM, then you can do that with the Java 8 SDK tools. You can ultimately specify which target VM you wish to support, as long as you bear in mind that a version 5 VM might not support all the features a version 8 VM will.
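As a sketch (the paths are placeholders), cross-compiling for a Java 5 target with a newer JDK looks like this:

javac -source 1.5 -target 1.5 -bootclasspath /path/to/jre1.5/lib/rt.jar MyApp.java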
I've just tested code I wrote in Java 5 against the new Java 8 runtime and everything works as expected, so, even though we have a more powerful language and runtime now, we can continue to use our investments of the past. Just that alone makes Java a great development choice for companies.
Yes, the title is correct... we are going BACK to Qt4. We recently built a decent-sized app with Qt5. We have now been told that the app must support the RH 6 and RH 5 distros.
Since RH 6 ships with Qt 4.6.2 and RH 5 ships with Qt 3.3.6, I'm concerned about having to make lots of modifications to port back to older versions of Qt.
Can the latest versions of Qt 4.x and 3.x understand the new syntax of Qt5 (e.g. connect is slightly different)? If not, can someone suggest how best to undertake this? Are we looking at ifdef'ing our way out of this? (And if so, is there an easy reference for how to do this?)
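To illustrate what I mean about connect (the widget and slot names are just placeholders):

// Old string-based syntax: compiles on both Qt 4 and Qt 5
connect(button, SIGNAL(clicked()), this, SLOT(onButtonClicked()));

// New pointer-to-member syntax: Qt 5 only
connect(button, &QPushButton::clicked, this, &MyWidget::onButtonClicked);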
Consider building the Qt5 libraries yourself and deploying them (only the ones you actually use) together with your project. This link can help with the build.
I actually built them today on my CentOS 6.5 64-bit with this configuration command:
./configure -prefix /opt/my_prod/Qt-5.2.1 -release -nomake examples -dbus -qt-xcb -no-c++11
However, I did not build all the libs listed in the link and did not apply the patches.
Then I built a small test app and ran it on CentOS and then on Ubuntu 12.04 (to which I copied the Qt5 libs manually).
I have compiled an Ada program on Ubuntu using GNAT.
Afterwards, I tried a few test runs with that program and it worked properly.
But when I uploaded this to my Apache (UNIX) webserver and tried to run the program, there was no output. Why is this so?
Could it be that programs which have been compiled on Ubuntu don't work on a UNIX server?
(Sorry for the stupid question!)
Linux version of the system I use for compiling (uname -a):
Linux ubuntu 3.0.0-12-generic #20-Ubuntu x86_64 GNU/Linux
Linux version of the system I want to run the program on later (uname -a):
Linux 2.6.37-he-xeon-64gb+1 i686 GNU/Linux
For compiling on the Ubuntu machine, I use:
gnatmake -O3 myprogram -bargs -static
When you build a GNAT program (gnatmake my_program), by default it links against dynamic libraries (libgnat.so, libgnarl.so). These libraries are part of the GNAT system and are very unlikely to be available on your web server.
If you say ldd my_program it will show you the shared libraries used.
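For example, to check whether the GNAT runtime is pulled in dynamically (the exact library name depends on your GNAT version):

ldd my_program | grep gnat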
You can force the build to use the static GNAT libraries by saying
gnatmake my_program -bargs -static
(the -bargs -static must come after regular flags like -O2).
Edit: more info on -bargs and friends.
You must make sure that the server has the libraries your app links against, or link them statically as already suggested by others. Some other comments point out that you need to "cross compile" or that the server won't run 64-bit binaries. This is easily solved unless the app you're building is very complex.
gnatmake --GCC='gcc -m32'
Will make a binary that runs on a 32-bit system. However, the chief problem is that the server's (g)libc is very likely to be older than what's on your Ubuntu box. Programs compiled against a newer glibc will not necessarily run on systems with an older glibc installed.
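You can compare the glibc versions on the two machines with, for example:

ldd --version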
For more info and plenty more links, look here:
Linking against an old version of libc to provide greater application coverage
How can I link to a specific glibc version?
edit:
Besides, Apache may not be configured to allow invocation of external binaries. Have you "tried to run the program" with something you know exists on the server? Try to run something trivial like /bin/ls to make sure your method of running the program works. Look at the logs if it doesn't work. Programs need to be executable, by the way: chmod 755 /path/to/webserver/uploads/ada-app
Why don't you just compile it on your webserver instead of your local machine?
Also, cat /etc/issue or cat /etc/release could give us some information about the distribution you're using.