Property broker.crlFileList on IBM Integration Bus 10 - ssl

I want to enable automatic loading of CRLs on an IBM Integration Bus 10 broker.
But I don't understand: can I only use local files, or can IBM IIB use HTTP or another protocol to download CRLs? And what is the format of the parameter's value in this command:
mqsichangeproperties IBNODE -o BrokerRegistry -n crlFileList -v file_path
Thank you!

There is an error in the command in the IBM article: the name of the parameter is brokerCRLFileList, not crlFileList.
The correct command is:
mqsichangeproperties IBNODE -o BrokerRegistry -n brokerCRLFileList -v file_path
The format of the value is described in the IBM article.
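As a rough sketch (the CRL path below is only a placeholder and node names differ per installation), you would set the property, report it back to verify, and then restart the node, since BrokerRegistry changes generally only take effect after a restart:
mqsichangeproperties IBNODE -o BrokerRegistry -n brokerCRLFileList -v /var/mqsi/crl/myca.crl
mqsireportproperties IBNODE -o BrokerRegistry -r
mqsistop IBNODE
mqsistart IBNODE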

Related

Redis Graph Bulk Loader Issue

I have created a Redis Cloud subscription and followed the instructions in this document: https://github.com/RedisGraph/redisgraph-bulk-loader. I am seeing an error when I follow the instructions. I have also tried to use the example2 csv but received the same error. Error: "Python int too large to convert to C long"
redisgraph-bulk-insert GRAPH_DEMO -n example/Person.csv -n example/Country.csv -r example/KNOWS.csv -r example/VISITED.csv
-h ***** -p ***** -a ********
I also received this message while trying to insert CSV data (all data was quoted so it would be imported as strings) into a graph. What I did was simply comment out the line csv.field_size_limit(sys.maxsize) in entity_file.py. It worked in my case, but I don't think this would be safe for every use case.
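A gentler workaround (just a sketch, not part of the bulk loader itself) is to keep the call but back off to the largest limit the platform accepts; the "Python int too large to convert to C long" error typically appears where sys.maxsize does not fit in a C long:
import csv
import sys

# Try sys.maxsize first; if the platform's C long is smaller, shrink the value
# until csv.field_size_limit() accepts it, instead of removing the call entirely.
max_int = sys.maxsize
while True:
    try:
        csv.field_size_limit(max_int)
        break
    except OverflowError:
        max_int = max_int // 10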

How to install schema registry

I am looking at options to install the Confluent Schema Registry. Is it possible to download and install the registry alone and make it work with an existing Kafka setup?
Thanks
Assuming you have Zookeeper/Kafka running already, you can easily run the Confluent Schema Registry using Docker by running the following command:
docker run -p 8081:8081 -e \
SCHEMA_REGISTRY_KAFKASTORE_CONNECTION_URL=host.docker.internal:2181 \
-e SCHEMA_REGISTRY_HOST_NAME=localhost \
-e SCHEMA_REGISTRY_LISTENERS=http://0.0.0.0:8081 \
-e SCHEMA_REGISTRY_DEBUG=true confluentinc/cp-schema-registry:5.3.2
Parameters:
-p 8081:8081 - maps port 8081 from the container to your machine
SCHEMA_REGISTRY_KAFKASTORE_CONNECTION_URL - your Zookeeper host and port; I'm using host.docker.internal to resolve the local machine that is hosting Zookeeper (outside of the container)
SCHEMA_REGISTRY_HOST_NAME - the hostname advertised in Zookeeper. This is required if you are running Schema Registry with multiple nodes. It is needed because it defaults to the Java canonical hostname for the container, which may not always be resolvable in a Docker environment.
SCHEMA_REGISTRY_LISTENERS - the host and port number that Schema Registry listens on
SCHEMA_REGISTRY_DEBUG - run in debug mode
Note: the command uses version 5.3.2; make sure this version is aligned with your Kafka version.
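Once the container is up, a quick sanity check is to query the registry's REST API on the port mapped above (a fresh install should return an empty list, []):
curl http://localhost:8081/subjects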
Yes, you can use your existing Kafka setup; just match it to the compatible version of the Confluent Platform. Here are the docs on getting started:
https://docs.confluent.io/current/schema-registry/docs/intro.html#installation
tl;dr download the platform to pull out the pieces you need, or get the Docker image and point it at your Kafka cluster. A non-Docker sketch is shown below.
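For a non-Docker install, a minimal sketch (the broker address is a placeholder; recent registry versions can talk to the brokers directly instead of Zookeeper) would be to unpack the Confluent Platform, edit etc/schema-registry/schema-registry.properties along these lines:
listeners=http://0.0.0.0:8081
kafkastore.bootstrap.servers=PLAINTEXT://your-broker:9092
and then start only the registry:
bin/schema-registry-start etc/schema-registry/schema-registry.properties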

Can RabbitMQ "disk free monitoring" be enabled?

In response to this curl to the RabbitMQ Management API ...
curl localhost:15672/api/nodes/{node name}/ -u {user} | jq .
... after typing the password, I am getting a response that includes this line ...
"disk_free_limit": "disk_free_monitoring_disabled",
How can I enable this, or is it a build-time choice or platform limitation?
I am using RabbitMQ 3.6.6 on CentOS release 6.8 (Final).
UPDATE: I just noticed this Erlang startup flag (using ps -ef | grep rabbit):
-os_mon start_disksup false
That turned out to be unrelated: rabbitmq-server sets it that way even in installations where I don't see the problem.
You get this error when rabbit_disk_monitor can't execute the df command it builds, which is /bin/df -kP followed by the directory being monitored (in the source it is concatenated as "/bin/df -kP " ++ Dir).
Check the logs; it may be a permission problem. Also try executing /bin/df -kP against that directory yourself and see the result.
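To reproduce the check by hand, something like the following should work (the paths are common CentOS defaults and may differ on your system):
# run df as the rabbitmq user against the node's data directory
sudo -u rabbitmq /bin/df -kP /var/lib/rabbitmq/mnesia
# look for disk monitor messages in the node log
grep -i disk /var/log/rabbitmq/rabbit@$(hostname -s).log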

docker run cannot find name flag argument

I have recently set up an RStudio application on Google Compute Engine using Docker and the Rocker/rstudio image. Now I want to start my saved container with a name, using the following ssh command line:
sudo docker -d -p 8787:8787 --name samplename user/laatste
which returns the following error
flag provided but not defined: --name
I have tried with and without quotes, equal signs, double and single hyphens, before, between and after the other flags and arguments, but the same error keeps returning.
version information:
Client version: 1.5.0
Client API version: 1.17
Go version (client): go1.4.1
Git commit (client): a8a31ef
OS/Arch (client): linux/amd64
Server version: 1.5.0
Server API version: 1.17
Go version (server): go1.4.1
Git commit (server): a8a31ef
The reason I want to name the container is that I want to run standard (static) startup and shutdown scripts on the Google Compute instance to automatically save and load changes made in R. The container name is used to identify the container to be saved. Any other solution for this is also very welcome.
I guess you wanted to do:
sudo docker run -d -p 8787:8787 --name samplename user/laatste
You forgot to specify the command (run) here.
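Once the container has a fixed name, a shutdown script could use that name to persist and stop it, along these lines (only a sketch; the image tag is a placeholder):
# commit the running container's state back to an image, then stop it
docker commit samplename user/laatste:latest
docker stop samplename
The startup script could then run the saved image again with the same docker run command as above.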

Cannot find hive odbc connector error messages using unixODBC

I am trying to set up unixODBC to use the Hive driver connector from Cloudera (on an Ubuntu machine).
In my ~/.local/lib folder I have links to the .so files provided by Cloudera,
and the environment variable LD_LIBRARY_PATH contains /home/luca/.local/lib:/opt/cloudera/hiveodbc/lib/64/.
I created the file /etc/odbcinst.ini containing the following:
[hive]
Description = Cloudera ODBC Driver for Apache Hive (64-bit)
Driver = /home/luca/.local/lib/libclouderahiveodbc64.so
ODBCInstLib= /home/luca/.local/lib/libodbcinst.so
UsageCount = 1
DriverManagerEncoding=UTF-16
ErrorMessagesPath=/opt/cloudera/hiveodbc/ErrorMessages/
LogLevel=0
SwapFilePath=/tmp
and in my home folder I have .odbc.ini containing:
[hive]
Driver=hive
HOST=<thehost>
PORT=<theport>
Schema=<theschema>
FastSQLPrepare=0
UseNativeQuery=0
HiveServerType=2
AuthMech=2
#KrbHostFQDN=[Hive Server 2 Host FQDN]
#KrbServiceName=[Hive Server 2 Kerberos service name]
UID=<myuid>
When I test the connection using isql -v hive
I get the following error message:
[S1000][unixODBC][DSI] The error message NoSQLGetPrivateProfileString could not be found in the en-US locale. Check that /en-US/ODBCMessages.xml exists.
[ISQL]ERROR: Could not SQLConnect
How can I fix this issue (and why is the path for /en-US/ absolute)?
The SQLGetPrivateProfileString symbol was not found in your ODBCInstLib library. Either the library could not be loaded, or the library does not contain the symbol.
Use strace isql -v hive 2>&1 | grep ini to see if your configuration file is being loaded. Use strace isql -v hive 2>&1 | grep odbcinst.so to see where it is looking for the library.
Make sure that the library exists in the given location and has the correct architecture. Use file -L /home/luca/.local/lib/libodbcinst.so to check the architecture. Use nm -D /home/luca/.local/lib/libodbcinst.so | grep SQLGetPrivateProfileString to check that it exports the correct symbol (nm -D reads the dynamic symbol table, which works even on stripped libraries).
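Putting those checks together as a runnable sketch (paths taken from the odbcinst.ini above; the last line is an extra, optional check that the Cloudera driver's own dependencies resolve):
strace isql -v hive 2>&1 | grep ini
strace isql -v hive 2>&1 | grep libodbcinst
file -L /home/luca/.local/lib/libodbcinst.so
nm -D /home/luca/.local/lib/libodbcinst.so | grep SQLGetPrivateProfileString
ldd /home/luca/.local/lib/libclouderahiveodbc64.so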