ipf is not present in AIX 7.1

I am using AIX 7.1
bash-4.4# uname -a
AIX aix71 1 7 00FA2BC04C00
The documentation from IBM suggests using ipf for packet filtering:
https://www.ibm.com/support/knowledgecenter/en/ssw_aix_71/com.ibm.aix.security/ipsec_filters_aix.htm
The IBM page says the following:
After installing the package, run the following command to load the kernel extension:
/usr/lib/methods/cfg_ipf -l
So I am facing the following two issues:
1. How do I install the package for ipf? (A possible check for the relevant filesets is sketched below.)
2. /usr/lib/methods/cfg_ipf is not present by default in AIX 7.1. How do I get cfg_ipf so that I can load the kernel extension?
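For what it's worth, one place to start is checking whether the IP Security filesets are installed at all. The fileset name below is an assumption (on most AIX levels the filtering support is delivered in the bos.net.ipsec.* filesets); verify it against your install media:
# Check whether the IP Security filesets are present (fileset name is an assumption)
lslpp -l "bos.net.ipsec.*"
# If they are missing, install them from the install media (device path is illustrative)
installp -acgXY -d /dev/cd0 bos.net.ipsec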

Can't connect to Snowflake via unixODBC and R [duplicate]

I'm having issues getting the ODBC driver for Snowflake to work on an M1 Apple Silicon Mac running Big Sur.
I successfully followed the instructions on Snowflake's website, and testing the driver from the command line (using iodbctest) with the DSN results in the following error:
1: SQLDriverConnect = [iODBC][Driver Manager]dlopen(/opt/snowflake/snowflakeodbc/lib/universal/libSnowflake.dylib, 6): no suitable image found. Did find:
/opt/snowflake/snowflakeodbc/lib/universal/libSnowflake.dylib: no matching architecture in universal wrapper
/opt/snowfl (0) SQLSTATE=00000
2: SQLDriverConnect = [iODBC][Driver Manager]Specified driver could not be loaded (0) SQLSTATE=IM003
My Snowflake driver is installed to /opt/snowflake/snowflakeodbc, so that is correct -- I'm suspicious that this is specifically an M1 problem. I'm using the 2.24.1 version of the driver available from the download mirror here, and the path to the driver in /etc/odbcinst.ini is /opt/snowflake/snowflakeodbc/lib/universal/libSnowflake.dylib (which exists and seems, from all my research, that it should be right).
When I run a connection via DBI in R, I get a completely different error:
Error: nanodbc/nanodbc.cpp:1021: 00000:
[Snowflake][ODBC] (11560) Unable to locate SQLGetPrivateProfileString function.
In other StackOverflow posts, people have referenced the above error meaning that there is a missing library of some kind (IODBC isn't configured correctly?), but I've tried quite a few things to no avail. Any guidance would be great.
Tinkered with this a bit more and realized it's an artifact of the installation pathways for the .dmgs & the preset paths in simba.snowflake.ini.
You need to point the Snowflake driver towards the iODBC dylib (as per a sideswiping statement in the docs) -- the driver is originally configured to look for the ODBC dylib (not iODBC) in a folder that's on the path.
When you install the iODBC driver, verify that it is installed to /usr/local/iODBC (this is where my Silicon Mac installed it) and that /usr/local/iODBC/lib contains libiodbc.dylib. If so, navigate to your installed Snowflake driver directory (should be /etc/snowflake) and alter the simba.snowflake.ini file (/etc/snowflake/snowflake/snowflakeodbc/universal/simba.snowflake.ini). Uncomment the last line and point it, with a full path, at the iODBC dylib (instead of the default, which is the ODBC dylib).
# Darwin specific ODBCInstLib
# iODBC
ODBCInstLib=/usr/local/iODBC/lib/libiodbcinst.dylib
Make sure to comment out any other ODBCInstLib line so that only one is configured. That should enable you to get your connection to Snowflake up and running on an M1 Mac.
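If it is unclear whether the failure is a configuration problem or an architecture mismatch, a quick sanity check with standard macOS tooling is to look at which architectures the driver dylib actually contains (path taken from the question above):
# List the architectures bundled in the Snowflake driver dylib
lipo -archs /opt/snowflake/snowflakeodbc/lib/universal/libSnowflake.dylib
# Equivalent check
file /opt/snowflake/snowflakeodbc/lib/universal/libSnowflake.dylib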
Big Sur is macOS v11.x.
Snowflake supports macOS 10.14 and 10.15 (see the Supported OSs page).
So what you are trying to do is not supported and is unlikely to work.
None of the other solutions worked for me, but @kiran-kumawat's answer set me down a path that worked.
It seems like the core of the issue is that the ODBC code is looking for arm64 drivers, but Snowflake provides the driver as x86_64. By installing an x86_64 version of ODBC, we are able to have it successfully talk to the driver.
First I uninstalled R and RStudio. (It may be possible to symlink or change things behind the scenes to make this work with existing installs, but I am not sure.)
Then install Rosetta (Apple's software for translating between architectures) and a version of Homebrew built with it. I am leaving my main version of Homebrew in place.
softwareupdate --install-rosetta
arch -x86_64 /bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install.sh)"
Then use that version of Homebrew to install unixODBC, R, and RStudio.
arch -x86_64 /usr/local/Homebrew/bin/brew install unixodbc
arch -x86_64 /usr/local/Homebrew/bin/brew install --cask rstudio
arch -x86_64 /usr/local/Homebrew/bin/brew install --cask r
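As a quick check that the Intel (x86_64) toolchain is actually in use; the paths below assume Homebrew's default Intel prefix of /usr/local and may differ on your machine:
# The Intel Homebrew prefix should be /usr/local (the arm64 one lives in /opt/homebrew)
arch -x86_64 /usr/local/Homebrew/bin/brew --prefix
# The unixODBC library installed by it should report x86_64
file /usr/local/opt/unixodbc/lib/libodbcinst.dylib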
We then need to install the Snowflake driver: https://sfc-repo.snowflakecomputing.com/odbc/mac64/latest/index.html
Click through all the install prompts.
Modify your files
/usr/local/etc/odbcinst.ini:
[Snowflake Driver]
Driver = /opt/snowflake/snowflakeodbc/lib/universal/libSnowflake.dylib
/usr/local/etc/odbc.ini:
[Snowflake]
Driver = Snowflake Driver
uid = <uid>
server = <server>
role = <role>
warehouse = <warehouse>
authenticator = externalbrowser
We also need to modify the simba.snowflake.ini file.
It is somewhat locked down so run:
sudo chmod 646 /opt/snowflake/snowflakeodbc/lib/universal/simba.snowflake.ini
Then
vim /opt/snowflake/snowflakeodbc/lib/universal/simba.snowflake.ini
And find the ODBCInstLib line that is uncommented and change it to:
ODBCInstLib=/usr/local/Cellar/unixodbc/2.3.9_1/lib/libodbcinst.dylib
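Before going back to R, a quick sanity check from the shell can confirm the DSN resolves; this uses unixODBC's isql with the [Snowflake] DSN name defined in odbc.ini above:
# On success isql drops you at a SQL> prompt; with authenticator=externalbrowser a browser window should open first
isql -v Snowflake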
After setting this up I was able to use this connection successfully:
install.packages("DBI")
install.packages("odbc")
con <- DBI::dbConnect(odbc::odbc(), "Snowflake")
One of our team members suggested the steps below, and they worked for us on the Apple M1 series.
Install the latest Snowflake driver.
Uninstall the M1-based (arm64) Homebrew using:
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/uninstall.sh)"
Install the Intel-based Homebrew, and restart the terminal when done:
arch -x86_64 /bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install.sh)"
Re-install unixodbc
arch -x86_64 brew install unixodbc
Test
isql -v Pattern
In your database.yml file, make the following change for the connection to Snowflake:
Replace "dsn: <DSN_NAME>" with the following:
conn_str: "Driver={PATH};Locale=en-US;uid={USER_NAME};pwd= {PASSWORD};server=<yours>.snowflakecomputing.com;role=<ROLE>;charset=UTF-8;warehouse=<WAREHOUSE>;database=<DATABASE>;schema=<SCHEMA>;"
Has anyone gotten this to work? I use Excel with ODBC to refresh Snowflake files, and I have tried multiple ways to move the drivers etc. and followed Snowflake's instructions, but it never works. I did get Parallels to work running Windows ARM, but I would prefer to just do this in macOS.
I also have an M1 (running Monterey 12.0) and I ran into similar issues when I tested the driver. Nevertheless, when I tried the "real connection" it worked like a charm. So it may be worth testing the real connection instead of spending more time on the driver test. Hope you find this useful.

Change bash.exe with multiple linux subsystems on windows

I first installed an Ubuntu Linux subsystem from the Windows Store.
I then installed the Hyper terminal for Windows as explained in this tutorial: https://medium.com/@ssharizal/hyper-js-oh-my-zsh-as-ubuntu-on-windows-wsl-terminal-8bf577cdbd97
As described in the tutorial, I put C:\\Windows\\System32\\bash.exe in the Hyper configuration file.
However, afterwards I installed another Linux subsystem, Wlinux.
So now I have two subsystems, located here:
Wlinux : C:\Users\martinpc\AppData\Local\Packages\WhitewaterFoundryLtd.Co....
Ubuntu : C:\Users\martinpc\AppData\Local\Packages\CanonicalGroupLimited.UbuntuonWindows_7...
However, when I open the Hyper terminal, it seems I can only access the files of the Ubuntu distribution and not Wlinux. Therefore, I would like to know how I can point Hyper to Wlinux instead of Ubuntu. Thank you for your answer.
First of all, bash.exe has been deprecated; you should use wsl.exe on the command line. Check your installed distributions in WSL with the wslconfig.exe /list /all command. Alternatively, for Windows 10 version 1903 and above, the wsl.exe --list --all command can be used. Choose the distribution that you want to connect to with the Hyper.js terminal emulator. Open the Hyper.js configuration with Ctrl + , or open %UserProfile%\.hyper.js in any text editor. Edit the shell configuration with these two named values:
shell: 'C:\\Windows\\System32\\wsl.exe',
shellArgs: ['--distribution', 'Your-Distro-Name'],
Alternatively, you can use the wslconfig.exe /setdefault <DistributionName> command to change the default distribution. With this step, you can skip the shellArgs line in the .hyper.js configuration file.
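For example (the distribution name below is an assumption; use whatever name wslconfig.exe /list reports for your Wlinux install):
wslconfig.exe /list
wslconfig.exe /setdefault WLinux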

distro 'rhel7.2' does not exist in our dictionary

While installing a KVM guest via virt-install, I used the attribute os_variant=rhel7.2. During the install I get the following error:
distro 'rhel7.2' does not exist in our dictionary
When I run uname -r I get the following output:
3.10.0-327.el7.x86_64
It is a RHEL KVM host.
Running osinfo-query os|grep 'Red Hat Enterprise Linux 7.2' returns the following:
rhel7.1 | Red Hat Enterprise Linux 7.2 | 7.2 | http://redhat.com/rhel/7.2
What can be the solution to this problem?
You could create a custom config file to define a RHEL 7.2 distro, but it is honestly not that important from virt-install's point of view. The distro is primarily used to look up the optimized drivers for disk and network. Just using the 'rhel7.1' distro type when installing RHEL 7.2 will work just fine in that respect.
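For illustration, a minimal sketch of such an invocation; the guest name, memory, disk size, and ISO path are placeholders, not values from the question:
virt-install --name rhel72-guest --ram 2048 --vcpus 2 \
  --disk size=20 \
  --cdrom /var/lib/libvirt/images/rhel-server-7.2-x86_64-dvd.iso \
  --os-variant rhel7.1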
virt-install gets its OS information from osinfo-db.
If your OS does not ship a recent version of osinfo-db, you can manually download it from https://releases.pagure.org/libosinfo/ and import it.
e.g.
wget https://releases.pagure.org/libosinfo/osinfo-db-20200325.tar.xz
osinfo-db-import -v osinfo-db-20200325.tar.xz
-v will display all imported OSes; I believe your choice has to match one of the XML files in this list.
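Assuming the import succeeded, the new entry should show up in the same query used in the question, and its short id is what os_variant/--os-variant expects:
osinfo-query os | grep -i rhel7.2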

Undefined symbol error in Centos compile

I have run into an interesting problem. I am compiling my application code using the ACE library (version 6_1_1) on my CentOS 6 machine, and everything worked fine. When I look at the symbols of the ACE library compiled on the CentOS 6 machine, they look like this:
bash-4.1$ nm ace/libACE.so.6.1.1 | grep handle_sig
000f9430 T _ZN15ACE_Sig_Adapter13handle_signalEiP7siginfoP8ucontext
000b84d0 T _ZN17ACE_Event_Handler13handle_signalEiP7siginfoP8ucontext
00079f10 T _ZN18ACE_Service_Config13handle_signalEiP7siginfoP8ucontext
000f26d0 T _ZN19ACE_Process_Manager13handle_signalEiP7siginfoP8ucontext
0007ee70 T _ZN19ACE_Service_Manager13handle_signalEiP7siginfoP8ucontext
000cf920 T _ZN20ACE_MMAP_Memory_Pool13handle_signalEiP7siginfoP8ucontext
000f8b80 T _ZN22ACE_Shared_Memory_Pool13handle_signalEiP7siginfoP8ucontext
bash-4.1$
But when I compile the same project on a CentOS 7 machine, the symbols change:
bash# nm ace/6_1_1/ace/libACE.so.6.1.1 | grep handle_sig
000fa090 T _ZN15ACE_Sig_Adapter13handle_signalEiP9siginfo_tP8ucontext
000b9570 T _ZN17ACE_Event_Handler13handle_signalEiP9siginfo_tP8ucontext
0007e070 T _ZN18ACE_Service_Config13handle_signalEiP9siginfo_tP8ucontext
000f3500 T _ZN19ACE_Process_Manager13handle_signalEiP9siginfo_tP8ucontext
00081cb0 T _ZN19ACE_Service_Manager13handle_signalEiP9siginfo_tP8ucontext
000d1990 T _ZN20ACE_MMAP_Memory_Pool13handle_signalEiP9siginfo_tP8ucontext
000f93d0 T _ZN22ACE_Shared_Memory_Pool13handle_signalEiP9siginfo_tP8ucontext
bash#
Notice that there is an extra _t in siginfo. So my application, which links against this library, fails to launch at run time, giving me this error:
symbol "_ZN17ACE_Event_Handler13handle_signalEiP9siginfo_tP8ucontext"
not found
Another interesting point to note is that if I copy the compiled ACE library from my CentOS 6 box onto the CentOS 7 box, my application works fine.
I am lost on how to fix this issue. Any help in this regard will be appreciated!
But when I compile the same project on a CentOS 7 machine, the symbols change:
Probably glibc on CentOS 7 has changed one of the types in its public headers, which caused the mangler to emit different symbols:
$ echo _ZN15ACE_Sig_Adapter13handle_signalEiP7siginfoP8ucontext | c++filt
ACE_Sig_Adapter::handle_signal(int, siginfo*, ucontext*)
$ echo _ZN15ACE_Sig_Adapter13handle_signalEiP9siginfo_tP8ucontext | c++filt
ACE_Sig_Adapter::handle_signal(int, siginfo_t*, ucontext*)
Notice that the new method now uses siginfo_t rather than siginfo (you'll see hundreds of complaints if you google for "siginfo_t vs siginfo").
Another interesting point to note is that if I copy the compiled ACE library from my CentOS 6 box onto the CentOS 7 box, my application works fine.
That's backward compatibility: you can (usually) run apps linked on an older version of a distro on its newer versions.
On the contrary, forward compatibility (in your case, linking an old application against a new library) is not guaranteed.
I am lost on how to fix this issue.
If you are only interested in the new CentOS, rebuild all your code. If you want to run on older versions as well, build on the oldest one and distribute that.
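To see which side is out of step before rebuilding, it can help to compare the undefined symbols your binary wants with what the library exports; the binary name here is a placeholder:
# Undefined handle_signal symbols the application expects
nm -uD ./myapp | grep handle_signal | c++filt
# Symbols the library actually exports
nm -D ace/libACE.so.6.1.1 | grep handle_signal | c++filt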

Sat4j Remote Control window doesn't open

What happens:
I execute the following command.
java -jar sat4j-sat.jar -remote
No window opens, and I get the same console output as without the -remote flag, which begins:
c SAT4J: a SATisfiability library for Java (c) 2004-2013 Artois (...)
c This is free software under the dual EPL/GNU LGPL licenses.
c See www.sat4j.org for details.
c version 2.3.4.v20130419
c java.runtime.name OpenJDK Runtime Environment
c java.vm.name OpenJDK Client VM
c java.vm.version 24.65-b04
c java.vm.vendor Oracle Corporation
c sun.arch.data.model 32
c java.version 1.7.0_65
c os.name Linux
c os.version 3.2.0-4-686-pae
(...)
What is expected:
From readme.txt:
To run sat4j with on the fly configuration:
java -jar sat4j-sat.jar -remote
These instructions should open a java window named Remote Control. We
assume that the 1.5 version of the java command is in your path. If
it isn’t, then you should either specify the complete path to the java
command or update your PATH environment variable as described in the
installation instructions for the Java 2 SDK.
Other details
I have tried multiple versions of the library, up to 2.3.4.
My system is Debian 7 with Gnome 2.
My default Java installation is OpenJDK 1.7.0_65.
My secondary Java installation is Oracle Java 1.8.0_45 (with the same issue).
Gnuplot 4.6 is installed.
My first machine has a 32 bit dual core CPU with 2GB of RAM.
My second machine has a 64 bit quad core CPU with 8GB of RAM with nearly identical software.
Question
Has anyone used SAT4J's remote control feature? What is the problem with my method?
Update
On another machine (64-bit Debian 7) the window opens. After the start, .dat files are created, but plotting does not start.
Update 2
I ran the generated instance.dimacs-gnuplot.gnuplot file manually from a gnuplot terminal, and I got the message unknown or ambiguous terminal type for the x11 type. I installed the gnuplot-x11 package, and now it works on the workplace machine: I can see the diagrams (wow!). Unfortunately on my home machines the Remote Control window still doesn't open.
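For reference, the relevant commands on Debian were roughly the following (the second line is just an optional re-check of the terminal type, assuming gnuplot is on the PATH):
sudo apt-get install gnuplot-x11
# should no longer report "unknown or ambiguous terminal type"
gnuplot -e "set terminal x11"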
The -remote parameter is used to display the remote control, i.e. to set up the various parameters of the solver.
If you want to monitor what the solver is doing at all times, you need to use the -r parameter in conjunction with it.
So the complete command line should be:
java -jar sat4j-sat.jar -r -remote file.cnf
You can get a fresh snapshot of Sat4j Sat on our continuous integration server:
http://bamboo.ow2.org/browse/SAT4J-DEF2-41/artifact/JOB1/nightly_build/
This might solve the issue you met with the 2.3.4 release.
Cheers,
Daniel