How to create a znode asynchronously in Apache Curator

With the fluent API of Curator, we can create a znode synchronously by invoking something like:
client.create().withMode(CreateMode.PERSISTENT).forPath("/mypath", new byte[0]);
How can we execute the same operation asynchronously while still specifying the create mode?

We can execute the given create operation asynchronously, while still specifying the create mode, as follows:
client.create()
    .withMode(CreateMode.PERSISTENT)
    .inBackground()
    .forPath("/mypath", new byte[0]);

If you're on Java 8 and ZooKeeper 3.5.x, the latest version of Curator (note: I'm the main author) has a new async DSL. You can read about it here: http://curator.apache.org/curator-x-async/index.html
E.g.
AsyncCuratorFramework async = AsyncCuratorFramework.wrap(client);
async.checkExists().forPath(somePath).thenAccept(stat -> mySuccessOperation(stat));
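Applied to the create-with-mode case from the question, the async DSL version would look roughly like this:
AsyncCuratorFramework async = AsyncCuratorFramework.wrap(client);
async.create()
     .withMode(CreateMode.PERSISTENT)
     .forPath("/mypath", new byte[0])
     .thenAccept(actualPath -> System.out.println("Created " + actualPath));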

Related

Using Hibernate Reactive Panache with SDKs that switch thread

I'm using Reactive Panache for PostgreSQL. I need to take an application-level lock (Redis), inside which I need to perform certain operations. However, the Panache library throws the following error:
java.lang.IllegalStateException: HR000069: Detected use of the reactive Session from a different Thread than the one which was used to open the reactive Session - this suggests an invalid integration; original thread [222]: 'vert.x-eventloop-thread-3' current Thread [154]: 'vert.x-eventloop-thread-2'
My code looks something like this:
redissonClient.getLock("lock").lock(this.leaseTimeInMillis, TimeUnit.MILLISECONDS, this.lockId)
    .chain(() -> Panache.withTransaction(() -> Uni.createFrom().nullItem())
        .eventually(lock::release)
    );
Solutions such as the ones mentioned in this issue show the correct use with the AWS SDK, but not in conjunction with something like Redisson. Does anyone have this working with Redisson?
Update:
I tried the following on lock acquire and release:
.runSubscriptionOn(MutinyHelper.executor(Vertx.currentContext()))
This fails with the following error even though I have the quarkus-vertx dependency added:
Cannot invoke "io.vertx.core.Context.runOnContext(io.vertx.core.Handler)" because "context" is null
Panache might not be the best choice in this case.
I would try using Hibernate Reactive directly:
@Inject
Mutiny.SessionFactory factory;
...
redissonClient.getLock("lock")
    .lock(this.leaseTimeInMillis, TimeUnit.MILLISECONDS, this.lockId)
    .chain(() -> factory.withTransaction(session -> Uni.createFrom().nullItem())
        .eventually(lock::release));
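For completeness, a rough sketch of how the pieces could fit together, assuming the Redisson lock/unlock calls have already been adapted to Mutiny Unis (lockAsUni and unlockAsUni are hypothetical placeholders for that adapter):
import io.smallrye.mutiny.Uni;
import org.hibernate.reactive.mutiny.Mutiny;
import javax.inject.Inject; // jakarta.inject.Inject on newer Quarkus

public class LockedWriter {

    @Inject
    Mutiny.SessionFactory factory;

    public Uni<Void> doWorkUnderLock() {
        return lockAsUni()
                // withTransaction opens the reactive session and closes it
                // when the returned Uni completes
                .chain(() -> factory.withTransaction(session ->
                        // ... reactive persistence work goes here ...
                        Uni.createFrom().<Void>nullItem()))
                .eventually(this::unlockAsUni);
    }

    // hypothetical adapters, e.g. built with Uni.createFrom().publisher(...)
    private Uni<Void> lockAsUni() { return Uni.createFrom().nullItem(); }
    private Uni<Void> unlockAsUni() { return Uni.createFrom().nullItem(); }
}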

Spring streamBridge create rabbitmq queue on startup

Since Spring deprecated the org.springframework.cloud.stream.annotation.Output annotation, I'm using the new StreamBridge API.
I wonder what the best way is to have the queue created automatically on startup, like the behaviour of the annotation.
I found a workaround using spring.cloud.stream.function.definition=myChannel just to create the queue, as in this sample:
@Bean
fun myChannel(): Supplier<Flux<Message<String>>> = Supplier {
    Flux.empty()
}
and application.yml:
bindings:
  myChannel-out-0:
    destination: Mystuff
    producer:
      required-groups: mychannel
When I was using the @Output annotation the queue was created automatically.
Is there a more elegant solution?
You still don't need to do that (pre-create the queue): once you execute the first streamBridge.send(...), the destination will be resolved (the queue will be created) and your properties applied.
That said, if you still want to do it, you can use the spring.cloud.stream.source property and point it to the name of the destination that you would have identified with @Output in the older versions. For example, spring.cloud.stream.source=foo.
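As a minimal sketch (foo is just an illustrative source name; the destination and group are reused from the question), the application.properties entries would look something like:
spring.cloud.stream.source=foo
spring.cloud.stream.bindings.foo-out-0.destination=Mystuff
spring.cloud.stream.bindings.foo-out-0.producer.required-groups=mychannel
With required-groups set, the RabbitMQ binder declares and binds the queue at startup, and you can later publish to the same binding with streamBridge.send("foo-out-0", message).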

How to send the parameter values to azure data factory using powershell

I want to pass two parameters to a Data Factory pipeline and trigger the pipeline using PowerShell scripting.
How can I do that?
You can trigger the pipeline through the Invoke-AzureRmDataFactoryV2Pipeline cmdlet. It should look like:
Invoke-AzureRmDataFactoryV2Pipeline -DataFactory $yourADFv2DataFactory -PipelineName "YourAdfv2PipelineName" -ParameterFile .\PipelineParameters.json
The PipelineParameters.json file should look like:
{
    "parameter_1_name": "parameter_1_value",
    "parameter_2_name": "parameter_2_value"
}
You can refer to the official documentation here

How to use selectionListener="#{bindings.Products.collectionModel.makeCurrent}" programmatically?

I am using a custom selection listener in an ADF table component, so how can I invoke selectionListener="#{bindings.Products.collectionModel.makeCurrent}" programmatically to get the selected rows/keys?
Look at sample #23 at http://www.oracle.com/technetwork/developer-tools/adf/learnmore/index-101235.html#CodeCornerSamples which shows how to do this in a generic way.
... in addition, you can use a MethodExpression to invoke the EL from Java. In this case your selection listener will initially create:
a FacesContext instance
an ELContext
an ExpressionFactory
a MethodExpression (built from the ExpressionFactory; the EL you put into your question goes in there)
... then you invoke the MethodExpression to execute the logic, as sketched below. The benefit you get out of such an approach is that you can perform pre- and post-processing (like pre- and post-triggers).
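A minimal sketch of such a listener (ProductsTableBean and onSelect are hypothetical names; the EL string is the one from the question):
import javax.el.ELContext;
import javax.el.ExpressionFactory;
import javax.el.MethodExpression;
import javax.faces.context.FacesContext;
import org.apache.myfaces.trinidad.event.SelectionEvent;

public class ProductsTableBean {

    // registered on the table as selectionListener="#{productsTableBean.onSelect}"
    public void onSelect(SelectionEvent selectionEvent) {
        // ... pre-processing goes here ...
        FacesContext fctx = FacesContext.getCurrentInstance();
        ELContext elctx = fctx.getELContext();
        ExpressionFactory exprFactory = fctx.getApplication().getExpressionFactory();
        MethodExpression me = exprFactory.createMethodExpression(
                elctx,
                "#{bindings.Products.collectionModel.makeCurrent}",
                Object.class,
                new Class[] { SelectionEvent.class });
        me.invoke(elctx, new Object[] { selectionEvent });
        // ... post-processing: the selected row is now current in the binding layer ...
    }
}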

Rrd4j persistence

I am using rrd4j to do what rrd4j does, and it works great. However, if I shut down my app and start it back up again, the data from the previous session will be gone.
I am using a normal file backend, like so:
RrdDef rrdDef = new RrdDef("/path/to/my/file", 3000);
Is there a setting or something I need to trigger to make rrd4j load the data from the previous session?
It seems you should use new RrdDb("/path/to/my/file") instead. From the Javadocs:
RrdDb(java.lang.String path): Constructor used to open already existing RRD in R/W mode, with a default storage (backend) type (file on the disk).
And also:
RrdDb(RrdDef rrdDef): Constructor used to create new RRD object from the definition.
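A minimal sketch of the resulting open-or-create pattern, matching the constructors quoted above (datasource/archive definitions elided):
import java.io.File;
import java.io.IOException;
import org.rrd4j.core.RrdDb;
import org.rrd4j.core.RrdDef;

public class RrdReopen {
    public static void main(String[] args) throws IOException {
        String path = "/path/to/my/file";
        RrdDb rrdDb;
        if (new File(path).exists()) {
            // reopens the existing database, keeping the data from previous sessions
            rrdDb = new RrdDb(path);
        } else {
            // first run: define and create the database
            RrdDef rrdDef = new RrdDef(path, 3000);
            // ... rrdDef.addDatasource(...) and rrdDef.addArchive(...) go here ...
            rrdDb = new RrdDb(rrdDef);
        }
        rrdDb.close();
    }
}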