Dask-ssh with local scheduler?

I would like to use dask-ssh to automatically start workers on a set of remote IP addresses, but have those workers connect to a local scheduler. From the docs page, I wasn't quite sure how to accomplish this.
My specific questions are as follows:
Is this supported by dask-ssh?
If so, do I pass my local IP address to dask-ssh? e.g.:
$ dask-ssh --scheduler <my.ip.addr.here> <other.ip.addresses.here>

As of 2019-01-30 this is not supported, but it is definitely in scope and would be a nice contribution.
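In the meantime, a manual workaround is to start the pieces yourself with the dask-scheduler and dask-worker CLIs (a sketch; the address is a placeholder and 8786 is the default scheduler port):
# on the local machine
$ dask-scheduler
# on each remote host, e.g. over ssh
$ dask-worker tcp://<my.ip.addr.here>:8786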

Related

How can a PyMQI program that does an IBM MQ PUT connect remotely via the MQSERVER environment variable?

Hello! I am able to find short samples that use PyMQI, but the connect method hardcodes the name of the queue manager, the server-connection channel and the host(port). Instead, I want a generic connect call that picks up the connection information from the MQSERVER environment variable.
Is there anything that needs to be configured or set up? I looked at the available documentation but could not find any references on how to do this.
Once the MQSERVER issue is resolved, I would also like to use the two environment variables for reading a CCDT file: MQCHLLIB and MQCHLTAB.
Thank you in advance!
I set the MQSERVER variable to point to a remote queue manager; this works fine with the C-based sample amqsputc.
But when I experimented with different ways of calling connect() in my small pymqi program, the MQ client traces I captured show that MQSERVER is read but its contents are ignored by connect().
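For reference, here is roughly the pattern I am trying, as a minimal sketch (the queue manager name is an example, and I am assuming a client-mode pymqi build, since a server-bindings build ignores MQSERVER):
import pymqi

# MQSERVER is expected to be set in the environment, e.g.
#   MQSERVER='DEV.APP.SVRCONN/TCP/host.example.com(1414)'
qmgr = pymqi.QueueManager(None)  # None = create the object without connecting
qmgr.connect('QM1')              # queue manager name only; no channel/host hardcoded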

How to set a specific port for single-user Jupyterhub server REST API calls?

I have set up Spark SQL on JupyterHub using the Apache Toree SQL kernel. I wrote a Python function that updates the Spark configuration options in the kernel.json file, so my team can change the configuration based on their queries and the cluster configuration. But after running the Python function I have to shut down the running notebook and re-open it, or restart the kernel, to force the Toree kernel to read the JSON file and pick up the new configuration.
I thought of implementing this shutdown and restart of the kernel programmatically. I found the JupyterHub REST API documentation and am able to implement it by invoking the relevant APIs. The problem is that the single-user server's API port is set randomly by JupyterHub's Spawner object, and it changes every time I spin up a cluster. I want this port to be fixed before launching the JupyterHub service.
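For context, the calls I am making look roughly like this (a sketch; the hub URL, token and user name are placeholders):
import requests

API = 'http://127.0.0.1:8081/hub/api'            # default Hub API endpoint
HEADERS = {'Authorization': 'token <api-token>'}

# stop, then start, the user's single-user server
requests.delete(API + '/users/<user>/server', headers=HEADERS)
requests.post(API + '/users/<user>/server', headers=HEADERS)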
Here is a solution I tried, based on the JupyterHub docs:
sudo echo "c.Spawner.port = 35289
c.Spawner.ip = '127.0.0.1'" >> /etc/jupyterhub/jupyterhub_config.py
But this did not work; the port was again set randomly by the Spawner. I think there must be a way to fix this. Any help would be greatly appreciated. Thanks.
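(One thing worth double-checking, offered as an assumption rather than a confirmed fix: JupyterHub only reads jupyterhub_config.py from the current working directory by default, so a file under /etc/jupyterhub may need to be passed explicitly at launch.)
$ jupyterhub -f /etc/jupyterhub/jupyterhub_config.py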

Splunk search head cluster joining indexer cluster

Just trying out Splunk, and I have had an issue integrating a search head cluster with an indexer cluster.
I have 3 machines in a search head cluster and 3 machines in an indexer cluster. These are all on CentOS 7 with no firewall installed, and all machines are able to ping and view each other's Splunk instances (ip:8000 / ip:8089).
When following https://docs.splunk.com/Documentation/Splunk/6.6.2/DistSearch/SHCandindexercluster, specifically
splunk edit cluster-config -mode searchhead -master_uri https://10.152.31.202:8089 -secret newsecret123
I get the error:
Could not contact master. Check that the master is up, the master_uri=https://10.152.31.202:8089 and secret are specified correctly
I have set the pass4SymmKey to be the same on all servers.
Thanks
Please check the [shclustering] pass4SymmKey on both the search head cluster members and the master; I suspect a pass4SymmKey issue.
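For reference, the edit cluster-config command above maps to the [clustering] stanza in server.conf on the search head; a sketch using the values from the question (the -secret value is stored as pass4SymmKey and must match the master's):
# server.conf on each search head
[clustering]
mode = searchhead
master_uri = https://10.152.31.202:8089
pass4SymmKey = newsecret123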
You should check splunkd.log to see what the error actually is. I would recommend first verifying that it works without pass4SymmKey set; if it only works without it, then you have found your issue.
Also, you did not mention having an extra server acting as the cluster master. This should be a server independent of your indexers. You do have one, right?

SUMO Address Error

I'm running multiple SUMO simulations in parallel using TraCI.
Every so often one will fail with the message:
Error: tcpip::Socket::accept() Unable to create listening socket: Address already in use
Quitting (on error).
I haven't found any way to set the address in the configuration options listed at http://sumo.dlr.de/wiki/SUMO.
I figure that if I can set each instance's address manually, I should be able to avoid this.
The answer is right on the page you mention. The option --remote-port specifies the port number, so something like
sumo --remote-port 54323 -c my.sumocfg
should do the trick. Of course, you need to give the same port when connecting from your TraCI client.
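If you are starting the simulations from Python anyway, the port can be passed straight through TraCI (a sketch; the port number is an example and must be unique per instance):
import traci

# each parallel instance gets its own port, so the listening sockets don't collide
traci.start(["sumo", "-c", "my.sumocfg"], port=54323)
while traci.simulation.getMinExpectedNumber() > 0:
    traci.simulationStep()
traci.close()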

Configuring SSL channel connectivity on MQ client machine

From a Linux server with the MQ client installed, we are trying to set up a connection to a secured channel. I am an ETL person and our MQ admin is struggling. Anyway, I will explain what I tried (which obviously hasn't worked yet); please let me know what else needs to be done to set up the connectivity. Thanks :)
tmp/mqmutility/keyrepmodmq> ls
AMQCLCHL.TAB key.kdb key.rdb key.sth MODE_MODELTAP_DEV_keyStLst.txt
export MQSSLKEYR=/tmp/mqmutility/keyrepmodmq/key
export MQCHLLIB=/tmp/mqmutility/keyrepmodmq
export MQCHLTAB=AMQCLCHL.TAB
/opt/mqm/samp/bin> amqsputc <queue_name> <queue_manager_name>
Sample AMQSPUT0 start
MQCONN ended with reason code 2058
Note: I can connect to the same queue manager over a non-SSL channel.
Any help would be great; other approaches you follow for SSL channel connectivity from a client machine would also be helpful.
When using a Client Channel Definition Table (CCDT) file - your AMQCLCHL.TAB file - a reason code of 2058 (MQRC_Q_MGR_NAME_ERROR) usually means that the queue manager name the application tried to use - your 'queue_manager_name' - was not found in any of the channel entries in the CCDT file.
If you're using MQ V8, you can very easily display the entries in your CCDT file, and the queue manager names they are configured for, using the following commands:
runmqsc -n
DISPLAY CHANNEL(*) QMNAME
If none of the channels in your file have the queue manager name you are using when running the amqsputc sample, then this is the cause of your 2058 reason code.
Hopefully it will be clear from the listed entries which queue manager name you should be using; if not, update your question with some more details (such as the contents of said file and the queue manager details) and we can help further.
You must ensure that you have a CLNTCONN channel defined which has the queue manager name you want to use in its QMNAME field, and that you have a matching named SVRCONN channel defined on the queue manager. Since you are using SSL, you must also ensure that these two channels use the same SSLCIPH.
Please read Creating server-connection and client-connection definitions on the server and its child topics.
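As a sketch of what a matching pair might look like (the channel name, connection name and cipher are examples, not taken from your environment):
* on the queue manager: the server-connection channel
DEFINE CHANNEL(DEV.SSL.SVRCONN) CHLTYPE(SVRCONN) TRPTYPE(TCP) SSLCIPH(TLS_RSA_WITH_AES_128_CBC_SHA256)
* the client-connection definition that ends up in the CCDT; QMNAME must match the name amqsputc is given
DEFINE CHANNEL(DEV.SSL.SVRCONN) CHLTYPE(CLNTCONN) CONNAME('host.example.com(1414)') QMNAME(QM1) TRPTYPE(TCP) SSLCIPH(TLS_RSA_WITH_AES_128_CBC_SHA256)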