What happens:
I execute the following command.
java -jar sat4j-sat.jar -remote
No window opens, and I get the same console output as without the -remote flag, which begins:
c SAT4J: a SATisfiability library for Java (c) 2004-2013 Artois (...)
c This is free software under the dual EPL/GNU LGPL licenses.
c See www.sat4j.org for details.
c version 2.3.4.v20130419
c java.runtime.name OpenJDK Runtime Environment
c java.vm.name OpenJDK Client VM
c java.vm.version 24.65-b04
c java.vm.vendor Oracle Corporation
c sun.arch.data.model 32
c java.version 1.7.0_65
c os.name Linux
c os.version 3.2.0-4-686-pae
(...)
What is expected:
From readme.txt:
To run sat4j with on the fly configuration:
java -jar sat4j-sat.jar -remote
These instructions should open a Java window named Remote Control. We
assume that the 1.5 version of the java command is in your path. If
it isn’t, then you should either specify the complete path to the java
command or update your PATH environment variable as described in the
installation instructions for the Java 2 SDK.
Other details
I have tried multiple versions of the library, up to 2.3.4.
My system is Debian 7 with Gnome 2.
My default Java installation is OpenJDK 1.7.0_65.
My secondary Java installation is Oracle Java 1.8.0_45 (with the same issue).
Gnuplot 4.6 is installed.
My first machine has a 32-bit dual-core CPU with 2 GB of RAM.
My second machine has a 64-bit quad-core CPU with 8 GB of RAM and nearly identical software.
Question
Has anyone used SAT4J's remote control feature? What is the problem with my method?
Update
On another machine (64-bit Debian 7) the window opens. After starting, .dat files are created, but plotting does not start.
Update 2
I ran the generated instance.dimacs-gnuplot.gnuplot file manually from a gnuplot terminal and got the message "unknown or ambiguous terminal type" for the x11 terminal. I installed the gnuplot-x11 package, and now it works on the workplace machine: I can see the diagrams (wow!). Unfortunately, on my home machines the Remote Control window still doesn't open.
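For anyone hitting the same message, here is the kind of check I could have run first to see whether gnuplot knows the x11 terminal at all (a sketch; the Debian package name is assumed):
gnuplot -e "print GPVAL_TERMINALS" 2>&1 | grep -wo x11   # empty output means no x11 terminal available
sudo apt-get install gnuplot-x11                         # Debian package that provides the x11 terminal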
The -remote parameter is used to display the remote control, i.e. to set up the various parameters of the solver.
If you want to monitor what the solver is doing at all times, you need to use the -r parameter in conjunction with it.
So the complete command line should be:
java -jar sat4j-sat.jar -r -remote file.cnf
You can get a fresh snapshot of Sat4j Sat on our continuous integration server:
http://bamboo.ow2.org/browse/SAT4J-DEF2-41/artifact/JOB1/nightly_build/
This might solve the issue you encountered with the 2.3.4 release.
Cheers,
Daniel
Related
I am developing a RHEL 7 Qt application and need to connect to an Oracle database. When calling QSqlDatabase::addDatabase("QOCI"), I am prompted with the following:
QSqlDatabase: QOCI driver not loaded
QSqlDatabase: available drivers: QSQLITE
I have the Oracle Instant Client v11.2 installed, but I'm not sure where to go from here. I've done extensive research and cannot find a solution.
Based on what I saw online, I tried creating an oci directory within my Qt dir (/usr/lib64/qt5/plugins/sqldrivers) and then created an oci.pro file. Its contents are below:
INCLUDEPATH+=/usr/include/oracle/11.2/client
LIBS+=-L/usr/lib/oracle/11.2/client/lib -lclntsh
TEMPLATE = subdirs
I ran qmake-qt5 on this to generate a Makefile, but when I run make, the necessary QOCI .so file is not generated.
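For reference, the approach I have seen documented elsewhere builds the plugin from the oci project inside the Qt source tree rather than from a standalone .pro (a sketch only; the source path is a guess for my setup):
cd /path/to/qt-source/qtbase/src/plugins/sqldrivers/oci   # Qt 5 source checkout (assumed location)
qmake-qt5 oci.pro "INCLUDEPATH+=/usr/include/oracle/11.2/client" \
    "LIBS+=-L/usr/lib/oracle/11.2/client/lib -lclntsh"
make && sudo make install   # should build and install libqsqloci.so into the sqldrivers plugin dir
I would like to confirm whether this is the right direction, or whether the standalone oci.pro above can be made to work.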
I have an issue when I try to run Pentaho Data Integration on macOS Big Sur (M1).
The error message is below:
I'm sorry, this Mac platform [arm64] is not yet supported! Please try starting using 'Data Integration 32-bit' or 'Data Integration 64-bit' as appropriate.
Java version:
> java version "1.8.0_291"
Java(TM) SE Runtime Environment (build 1.8.0_291-b10)
Java HotSpot(TM) 64-Bit Server VM (build 25.291-b10, mixed mode)
Can anyone help me with this issue?
Thanks
Try this guide from Reddit:
Here is how you can force the shell to run in Intel mode so that you
can continue working in this little command-line Rosetta Island while
waiting for native ARM64 support.
Open the Terminal app.
Open the Terminal app’s Preferences.
Click on the Profiles tab.
Select a profile, click on the ellipsis at the bottom of the profile list and then select Duplicate Profile.
Click on the new profile and give it a good name. I named mine “Rosetta Shell”.
Also in the new profile, click on the Window tab. In the Title field, put a name to indicate that this is for running Intel-based apps. I put “Terminal (Intel)” on mine.
Click on the Shell tab and use the following as its Run Command to force the shell to run under Rosetta: env /usr/bin/arch -x86_64 /bin/zsh --login
Untick the Run inside shell checkbox. Clearing the checkbox would prevent running the shell twice, which could bloat your environment variables since ~/.zshrc gets run twice.
Optionally set this profile as the Default.
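A quick way to confirm that the new profile really is running under Rosetta (a small sketch using the stock macOS tools):
arch        # typically prints i386 when the shell runs under Rosetta, arm64 when native
uname -m    # prints x86_64 under Rosetta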
This is the first step. After that you have to replace swt.jar in the data-integration folder /path_to_your_data-integration/libswt/osx64/, otherwise it won't start.
You can download the jar here.
Important: you don't have to rename the downloaded file, but you do have to remove the original swt.jar.
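Roughly, the replacement looks like this (a sketch; the paths and the downloaded file name are placeholders):
cd /path_to_your_data-integration/libswt/osx64/
rm swt.jar                              # the original jar must not stay here
cp ~/Downloads/<downloaded-swt>.jar .   # the downloaded jar can keep its own name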
I just want to add that you need to have an x86 Java version installed for Tufan Atak's solution to work.
So if you've installed an M1-compatible (ARM) Java version and try to start Spoon with it, it will throw the following error:
java.lang.UnsatisfiedLinkError: Could not load SWT library. Reasons:
no swt-cocoa-4944r26 in java.library.path
no swt-cocoa in java.library.path
no swt in java.library.path
Can't load library: /<user-home-path>/.swt/lib/macosx/aarch64/libswt-cocoa-4944r26.jnilib
Can't load library: /<user-home-path>/.swt/lib/macosx/aarch64/libswt-cocoa.jnilib
Can't load library: /<user-home-path>/.swt/lib/macosx/aarch64/libswt.jnilib
/<user-home-path>/.swt/lib/macosx/aarch64/libswt-cocoa-4944r26.jnilib: dlopen(/<user-home-path>/.swt/lib/macosx/aarch64/libswt-cocoa-4944r26.jnilib, 0x0001): tried: '/<user-home-path>/.swt/lib/macosx/aarch64/libswt-cocoa-4944r26.jnilib' (mach-o file, but is an incompatible architecture (have (x86_64), need (arm64e)))
at org.eclipse.swt.internal.Library.loadLibrary(Library.java:348)
at org.eclipse.swt.internal.Library.loadLibrary(Library.java:257)
at org.eclipse.swt.internal.C.<clinit>(C.java:19)
at org.eclipse.swt.widgets.Display.<clinit>(Display.java:107)
at org.pentaho.di.ui.core.widget.OsHelper.setAppName(OsHelper.java:106)
at org.pentaho.di.ui.spoon.Spoon.main(Spoon.java:652)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.pentaho.commons.launcher.Launcher.main(Launcher.java:92)
So you need to install Java again from that same Intel terminal profile, so that an x86 version is installed.
You can use SDKMAN! and then execute this command (for Java 8 Temurin):
> sdk install java 8.0.345-tem
.
.
.
> Do you want java 8.0.345-tem to be set as default? (Y/n): n
The n answer is because you don't want to run every other Java program with the x86 version.
After that you can tell SDKMAN! to use this new version for the current terminal shell:
> sdk use java 8.0.345-tem
Then verify that the current version is the one you just told SDKMAN! to use:
> java -version
You should see something like this:
openjdk version "1.8.0_345"
OpenJDK Runtime Environment (Temurin)(build 1.8.0_345-b01)
OpenJDK 64-Bit Server VM (Temurin)(build 25.345-b01, mixed mode)
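If you want to be extra sure that this build is x86_64 and not an ARM one, you can also check os.arch (a quick sketch):
> java -XshowSettings:properties -version 2>&1 | grep os.arch
It should report os.arch = x86_64.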
After that you can finally start Spoon:
> ./spoon.sh
I first installed an Ubuntu Linux subsystem from the Windows Store.
I then installed the Hyper terminal for Windows as explained in this tutorial: https://medium.com/#ssharizal/hyper-js-oh-my-zsh-as-ubuntu-on-windows-wsl-terminal-8bf577cdbd97
As described in the tutorial, I put C:\\Windows\\System32\\bash.exe in the Hyper configuration file.
Afterwards, however, I installed another Linux subsystem, Wlinux.
So now I have two subsystems located here:
Wlinux : C:\Users\martinpc\AppData\Local\Packages\WhitewaterFoundryLtd.Co....
Ubuntu : C:\Users\martinpc\AppData\Local\Packages\CanonicalGroupLimited.UbuntuonWindows_7...
However, when I open the Hyper terminal, it seems I can only access the files of the Ubuntu distribution and not Wlinux. I would therefore like to know how I can point Hyper to Wlinux instead of Ubuntu. Thank you for your answer.
First of all, bash.exe has been deprecated; you should use wsl.exe on the command line. Check your installed WSL distributions with the wslconfig.exe /list /all command. Alternatively, on Windows 10 version 1903 and above, you can use wsl.exe --list --all. Choose the distribution that you want to connect to the Hyper terminal emulator. Open the Hyper configuration with Ctrl + , or open %UserProfile%\.hyper.js in any text editor, and edit these two named values in the shell configuration:
shell: 'C:\\Windows\\System32\\wsl.exe',
shellArgs: ['--distribution', 'Your-Distro-Name'],
Alternatively, you can use the wslconfig.exe /setdefault <DistributionName> command to change the default distribution. With this step, you can skip the shellArgs line in the .hyper.js configuration file.
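For example (the distribution name below is only a placeholder; use whatever name the list command prints for your WLinux install):
wsl.exe --list --all
wslconfig.exe /setdefault WLinux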
I'm receiving AttachNotSupportedException when trying to start a JFR recording.
It was working normally until now.
jcmd 3658 JFR.start maxsize=100M filename=jfr_1.jfr dumponexit=true settings=profile
Output:
3658:
com.sun.tools.attach.AttachNotSupportedException: Unable to open socket file: target process not responding or HotSpot VM not loaded
at sun.tools.attach.LinuxVirtualMachine.<init>(LinuxVirtualMachine.java:106)
at sun.tools.attach.LinuxAttachProvider.attachVirtualMachine(LinuxAttachProvider.java:63)
at com.sun.tools.attach.VirtualMachine.attach(VirtualMachine.java:208)
What might be happening?
OS: Oracle Linux Server release 6.7
$ java -version
java version "1.8.0_102"
Java(TM) SE Runtime Environment (build 1.8.0_102-b14)
Java HotSpot(TM) 64-Bit Server VM (build 25.102-b14, mixed mode)
One of the probable reasons is that the /tmp/.java_pid1234 file has been deleted (where 1234 is the PID of the Java process).
Tools that depend on the Dynamic Attach Mechanism (jstack, jmap, jcmd, jinfo) communicate with the JVM through a UNIX domain socket created in /tmp.
This socket is created by the JVM lazily on the first attach attempt, or eagerly at JVM initialization if the -XX:+StartAttachListener flag is specified.
Once the file corresponding to the socket is deleted, tools cannot connect to the target process, and unfortunately there is no way to re-create the communication socket without restarting the JVM.
For the description of Dynamic Attach Mechanism see this answer.
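As a quick sanity check (a sketch; 3658 is the PID from the question and myapp.jar is a placeholder):
ls -l /tmp/.java_pid3658                        # if this socket file is missing, attach cannot work
java -XX:+StartAttachListener -jar myapp.jar    # restarting with this flag creates the socket eagerly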
From personal experience: this problem also occurs when the development environment is on a different partition from the operating system, for example when the operating system partition is EXT4 and the development environment partition (where the JVM lives) is NTFS. The problem occurs because the file /tmp/.java_pid6024 cannot be created (where 6024 is the PID of the Java process).
To work around this, add -XX:+StartAttachListener to the startup options of the JVM or application server.
Another possibility: your app is running under systemd with 'PrivateTmp=yes'. This prevents the /tmp/.java_pid1234 file from being found.
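If that is the case, one workaround is a systemd drop-in that turns PrivateTmp off for that unit (a sketch; myapp.service is a placeholder, and note that this removes the /tmp isolation hardening):
sudo mkdir -p /etc/systemd/system/myapp.service.d
printf '[Service]\nPrivateTmp=no\n' | sudo tee /etc/systemd/system/myapp.service.d/override.conf
sudo systemctl daemon-reload && sudo systemctl restart myapp.service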
I have run into an interesting problem. I am compiling my application code against the ACE library (version 6_1_1) on my CentOS 6 machine, and everything works fine. When I look at the symbols of the ACE library compiled on the CentOS 6 machine, they look like this:
bash-4.1$ nm ace/libACE.so.6.1.1 | grep handle_sig
000f9430 T _ZN15ACE_Sig_Adapter13handle_signalEiP7siginfoP8ucontext
000b84d0 T _ZN17ACE_Event_Handler13handle_signalEiP7siginfoP8ucontext
00079f10 T _ZN18ACE_Service_Config13handle_signalEiP7siginfoP8ucontext
000f26d0 T _ZN19ACE_Process_Manager13handle_signalEiP7siginfoP8ucontext
0007ee70 T _ZN19ACE_Service_Manager13handle_signalEiP7siginfoP8ucontext
000cf920 T _ZN20ACE_MMAP_Memory_Pool13handle_signalEiP7siginfoP8ucontext
000f8b80 T _ZN22ACE_Shared_Memory_Pool13handle_signalEiP7siginfoP8ucontext
bash-4.1$
But when I compile the same project on a CentOS 7 machine, the symbols change:
bash# nm ace/6_1_1/ace/libACE.so.6.1.1 | grep handle_sig
000fa090 T _ZN15ACE_Sig_Adapter13handle_signalEiP9siginfo_tP8ucontext
000b9570 T _ZN17ACE_Event_Handler13handle_signalEiP9siginfo_tP8ucontext
0007e070 T _ZN18ACE_Service_Config13handle_signalEiP9siginfo_tP8ucontext
000f3500 T _ZN19ACE_Process_Manager13handle_signalEiP9siginfo_tP8ucontext
00081cb0 T _ZN19ACE_Service_Manager13handle_signalEiP9siginfo_tP8ucontext
000d1990 T _ZN20ACE_MMAP_Memory_Pool13handle_signalEiP9siginfo_tP8ucontext
000f93d0 T _ZN22ACE_Shared_Memory_Pool13handle_signalEiP9siginfo_tP8ucontext
bash#
Notice that there is an extra _t in siginfo. So my application, which links against this library, fails to launch at run time, giving me this error:
symbol "_ZN17ACE_Event_Handler13handle_signalEiP9siginfo_tP8ucontext" not found
Another interesting point is that if I copy the ACE library compiled on my CentOS 6 box onto the CentOS 7 box, my application works fine.
I am at a loss as to how to fix this issue. Any help in this regard will be appreciated!
But when I compile the same project on a CentOS 7 machine, the symbols change:
Probably glibc on CentOS 7 has changed one of the types in its public headers, which caused the mangler to emit different symbols:
$ echo _ZN15ACE_Sig_Adapter13handle_signalEiP7siginfoP8ucontext | c++filt
ACE_Sig_Adapter::handle_signal(int, siginfo*, ucontext*)
$ echo _ZN15ACE_Sig_Adapter13handle_signalEiP9siginfo_tP8ucontext | c++filt
ACE_Sig_Adapter::handle_signal(int, siginfo_t*, ucontext*)
Notice that the new method now uses siginfo_t rather than siginfo (you'll see hundreds of complaints if you google "siginfo_t vs siginfo").
Another interesting point is that if I copy the ACE library compiled on my CentOS 6 box onto the CentOS 7 box, my application works fine.
That's backward compatibility: you can (usually) run apps linked on an older version of a distro on its newer versions.
By contrast, forward compatibility (in your case, linking an old application against a new library) is not guaranteed.
I am at a loss as to how to fix this issue.
If you are only interested in the new CentOS, rebuild all your code there. If you want to run on older versions, build on the oldest one and distribute that.
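A quick way to see which side is out of step is to compare the symbol your binary imports with what the library exports (a sketch; ./myapp is a placeholder for your executable):
nm -D --undefined-only ./myapp | grep handle_signal | c++filt              # what the app expects
nm -D --defined-only ace/libACE.so.6.1.1 | grep handle_signal | c++filt    # what the library provides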