How to reset a channel in Hyperledger Fabric to the genesis block so it can be reused for testing?

I have a test Hyperledger Fabric environment using docker and caliper.
I have installed Fabric from https://github.com/hyperledger/fabric.
I want to reuse the already-started test network in the fabcar folder, but using networkDown.sh to stop the network and then starting it again is very time-consuming.
Is there a way to reset the fabric blockchain to its first state or genesis block without recreating it, so I can run a new Caliper test on it?

No, there is no supported way of resetting a channel on the peers and orderers back to its genesis block. However, you may create a new channel if that helps.

Related

Kubernetes probe running acceptance test

I have a situation where my acceptance test needs to connect to a RabbitMQ instance, but the RabbitMQ instance is private, which makes it impossible to open that connection from the pipeline.
I was wondering whether exposing an API endpoint that runs this test and adding it to the startup probe would be a good approach to make sure the test passes.
If RabbitMQ is a container in your pod, yes; if it isn't, then you shouldn't.
There's no final answer to this, but the startup probe is just there to ensure that your pod is not falsely considered unhealthy by other probes just because it takes a little longer to start. It's aimed at legacy applications that need to build assets or compile things at startup.
If there were a place to put a connectivity test to RabbitMQ, it would be the liveness probe, but you should only do that if your application is entirely dependent on a connection to RabbitMQ; otherwise unrelated functionality such as authentication would go down just because the pod couldn't connect to the messaging queue and got restarted. And what if a second app uses a connection to your endpoint as its liveness probe? And a third one connects to the second to check whether that app is alive? You could kill an entire ecosystem just because RabbitMQ rebooted or crashed for a moment.
Not recommended.
You could have that as part of your liveness probe IF your app is a worker; in that case, not having a connection to RabbitMQ would make the worker unusable.
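For illustration only, here is a minimal sketch of what such a worker health endpoint might look like in Python, assuming Flask and pika (the /healthz path, host name, and timeout are made up for the example, not taken from the question); the liveness probe would then simply do an HTTP GET against this endpoint.

    # healthz.py -- liveness endpoint for a worker whose only job is consuming RabbitMQ
    import pika
    from flask import Flask

    app = Flask(__name__)

    @app.route("/healthz")
    def healthz():
        try:
            # open and close a short-lived connection just to prove the broker is reachable
            conn = pika.BlockingConnection(
                pika.ConnectionParameters(host="rabbitmq.internal", socket_timeout=2)
            )
            conn.close()
            return "ok", 200
        except pika.exceptions.AMQPConnectionError:
            # the probe treats any non-2xx response as unhealthy
            return "cannot reach rabbitmq", 503

Keep the probe timeout short so a slow broker does not block the kubelet's checks, and remember this only makes sense for the worker case described above.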
Your acceptance tests should be placed in your CD pipeline, or in a post-deploy script step if you don't have a CD.

How can I test that I've successfully connected to all *five* channels (shell, iopub, hb, stdin, control) of an IPython kernel via SSH when using 2FA?

I've set up a remote kernel running through SSH to which I connect using my Spyder IDE, and have just added two-factor authentication (2FA) on the SSH connections using Duo.
Now when I attempt to connect, I get 4 different push notifications, and once I approve some or all of them, Spyder connects and gives me the IPython prompt; for each attempt below I approved all 4.
On my first attempt, it didn't display a result when testing with something like 2+2.
On my second attempt, everything appeared to be working fine.
However, I am aware that there are 5 channels involved (shell, iopub, hb, stdin, control) as I can see on this Jupyter client doc page.
Is there any way I can, once connected to the remote kernel, test each of the individual 5 channels and check that they are all working properly?
And can you think of a reason why I would receive 4 push notifications rather than 5? Is it possible that one of the channels isn't used or connected to later on-demand or something like that?
UPDATE: After running netstat on the server side, I can see that the control channel is not connected, but the other four (shell, iopub, hb, stdin) are. I am still unsure what I miss out on by not using the control channel, and whether Spyder provides the same features as the control channel by other means; this page says:
Control: This channel is identical to Shell, but operates on a separate socket to avoid queueing behind execution requests. The control channel is used for shutdown and restart messages, as well as for debugging messages.
For a smoother user experience, we recommend running the control channel in a separate thread from the shell channel, so that e.g. shutdown or debug messages can be processed immediately without waiting for a long-running shell message to be finished processing (such as an expensive execute request).
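This doesn't address the 2FA part, but as a rough sketch of how you could poke each channel yourself with jupyter_client's BlockingKernelClient (the connection-file path is a placeholder, the method names are from recent jupyter_client releases and may differ in your version, and which channel each call exercises is my reading of the docs rather than something guaranteed here):

    from jupyter_client import BlockingKernelClient

    kc = BlockingKernelClient(connection_file="kernel-12345.json")  # placeholder path
    kc.load_connection_file()
    kc.start_channels()

    # shell: a simple request/reply round trip
    reply = kc.kernel_info(reply=True)
    print("shell ok:", reply["content"]["status"] == "ok")

    # hb: is_alive() falls back to the heartbeat channel when no local manager exists
    print("hb ok:", kc.is_alive())

    # iopub: run something and wait for its execute_result broadcast
    msg_id = kc.execute("2 + 2")
    while True:
        msg = kc.get_iopub_msg(timeout=10)
        if msg["parent_header"].get("msg_id") == msg_id and msg["msg_type"] == "execute_result":
            print("iopub ok:", msg["content"]["data"]["text/plain"])
            break

    # stdin: ask for input and answer the resulting input_request ourselves
    kc.execute("x = input('value? ')", allow_stdin=True)
    print("stdin ok:", kc.get_stdin_msg(timeout=10)["msg_type"] == "input_request")
    kc.input("42")

    # control: recent jupyter_client versions send shutdown/debug requests over this
    # channel, so kc.shutdown(reply=True) would exercise it -- but it also kills the kernel.

    kc.stop_channels()

If the control channel really isn't forwarded, the part you would lose is whatever goes over it (shutdown/restart and debug requests); everything in the sketch above except the commented-out shutdown should still work.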

Is there a way of using dask jobqueue over SSH?

Dask jobqueue seems to be a very nice solution for distributing jobs to PBS/Slurm-managed clusters. However, if I'm understanding its use correctly, you must create an instance of PBSCluster/SLURMCluster on the head/login node. Then, on the same node, you can create a client instance through which you can start submitting jobs.
What I'd like to do is let jobs originate on a remote machine, be sent over SSH to the cluster head node, and then get submitted to dask-jobqueue. I see that Dask has support for sending jobs over SSH to a distributed.deploy.ssh.SSHCluster, but this seems to be designed for immediate execution after SSH, as opposed to taking the further step of putting the job in the queue.
To summarize, I'd like a workflow where jobs go remote --ssh--> cluster-head --slurm/jobqueue--> cluster-node. Is this possible with existing tools?
I am currently looking into this. My idea is to set up an SSH tunnel with paramiko and then use Pyro5 to communicate with the cluster object from my local machine.
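For what it's worth, a different option (a sketch only, not the paramiko/Pyro5 approach above; hostnames, ports, and resource numbers are made up) is to pin the dask-jobqueue scheduler to a fixed port on the head node and reach it from the remote machine through a plain SSH port forward:

    # on the cluster head node: start the scheduler on a known port and let
    # dask-jobqueue submit the SLURM jobs that run the workers
    from dask_jobqueue import SLURMCluster

    cluster = SLURMCluster(
        cores=8,
        memory="16GB",
        walltime="01:00:00",
        scheduler_options={"port": 8786, "dashboard_address": ":8787"},
    )
    cluster.scale(jobs=4)  # four SLURM jobs hosting the workers

    # on the remote machine, after opening a tunnel to the head node, e.g.
    #   ssh -N -L 8786:localhost:8786 user@cluster-head
    from dask.distributed import Client

    client = Client("tcp://localhost:8786")  # reaches the head-node scheduler via the tunnel
    future = client.submit(sum, [1, 2, 3])
    print(future.result())

The head-node script has to keep running (e.g., in tmux or as a service) so the SLURMCluster object stays alive; the remote side only needs network access to the scheduler port, which the tunnel provides, and task data and results travel through the scheduler over that same tunnel.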

How does Hyperledger Fabric avoid infinite loops?

Ethereum has 'gas' and Bitcoin doesn't support loops at all, so I am curious: how does Hyperledger Fabric avoid infinite loops?
Hyperledger Fabric does not use gas, but it does address the halting problem by setting a timeout for chaincode execution. The chaincode container will be killed if the transaction does not execute within the configured timeout as specified by the chaincode.executetimeout property.
It appears there is no mechanism to stop infinite loops. There is an open issue on GitHub, https://github.com/hyperledger-archives/fabric/issues/2232, so it's possible that one is coming.
Hyperledger Fabric is not intended to be a public blockchain, and smart contracts are not intended to be uploaded by arbitrary users. They are intended to be developed by an internal team and tested against these scenarios.

Integrating Redis into serverless

I am looking at integrating a caching service with serverless.
I have decided to go with Redis. However, from reading through the npm redis docs, it seems that you are required to call client.quit() after completing the request.
The way serverless seems to work is that the instance is spawned and then deleted when not in use. So I was wondering if there is a way to quit the Redis connection when the serverless instance is being deleted.
Or do I just have to start a connection on every request and quit it before each request finishes?
I was hoping I could do this at the app level instead of per request, so that I won't have to spawn so many connections.
No, a connection can be reused; you do not need to start a new connection on every request.
If you use redis.createClient() to create a connection, you can keep using that connection throughout your app. It also has a reconnect mechanism for when the connection is broken, so you do not need to worry about connection handling; just create a global connection and always use it.
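The question is about the Node redis client, but the same pattern, sketched here in Python with redis-py and an AWS-Lambda-style handler (the host variable and key name are made up), is to create the client once at module level, outside the handler, so warm invocations of the same instance reuse the connection instead of reconnecting:

    # handler.py -- connection created once per container, reused across warm invocations
    import os
    import redis

    # module scope runs once per serverless instance (cold start), not once per request
    client = redis.Redis(
        host=os.environ.get("REDIS_HOST", "localhost"),
        port=6379,
        socket_connect_timeout=2,
    )

    def handler(event, context):
        # each request reuses the same client; redis-py reconnects if the connection broke
        hits = client.incr("hits")
        return {"statusCode": 200, "body": f"hit count: {hits}"}

Most serverless platforms give handler code no reliable "instance is being deleted" hook, so in practice you generally don't call quit() at all; the platform tears down the container and the connection with it.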