Since Spring deprecated the org.springframework.cloud.stream.annotation.Output annotation, I'm using the new StreamBridge API.
I wonder what the best way is to create the queue automatically on startup, like the behaviour of the annotation.
I found a workaround using spring.cloud.stream.function.definition=myChannel just to create the queue, as in this sample:
@Bean
fun myChannel(): Supplier<Flux<Message<String>>> = Supplier {
    Flux.empty()
}
and application.yml:
bindings:
  myChannel-out-0:
    destination: Mystuff
    producer:
      required-groups: mychannel
When I was using the @Output annotation the queue was created automatically.
Is there another, more elegant solution?
You still don't need to pre-create the queue: once you execute the first streamBridge.send, the destination will be resolved (the queue will be created) and your properties applied.
That said, if you still want to do it, you can use the spring.cloud.stream.source property and point it to the name of the destination that you would have identified with @Output in the older versions. For example, spring.cloud.stream.source=foo.
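A minimal sketch of what that looks like with the destination and group from the question (spring.cloud.stream.source=foo makes the binder create an output binding named foo-out-0 at startup):
spring.cloud.stream.source=foo
spring.cloud.stream.bindings.foo-out-0.destination=Mystuff
spring.cloud.stream.bindings.foo-out-0.producer.required-groups=mychannel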
I'm using Reactive Panache for PostgreSQL. I need to take an application-level lock (Redis), inside which I need to perform certain operations. However, the Panache library throws the following error:
java.lang.IllegalStateException: HR000069: Detected use of the reactive Session from a different Thread than the one which was used to open the reactive Session - this suggests an invalid integration; original thread [222]: 'vert.x-eventloop-thread-3' current Thread [154]: 'vert.x-eventloop-thread-2'
My code looks something like this:
redissonClient.getLock("lock").lock(this.leaseTimeInMillis, TimeUnit.MILLISECONDS, this.lockId)
.chain(() -> return Panache.withTransaction(() -> Uni.createFrom.nullItem())
.eventually(lock::release);
)
Solutions such as the ones mentioned in this issue show the correct use with the AWS SDK, but not when used in conjunction with something like Redisson. Does anyone have this working with Redisson?
Update:
I tried the following on lock acquire and release:
.runSubscriptionOn(MutinyHelper.executor(Vertx.currentContext()))
This fails with the following error even though I have the quarkus-vertx dependency added:
Cannot invoke "io.vertx.core.Context.runOnContext(io.vertx.core.Handler)" because "context" is null
Panache might not be the best choice in this case.
I would try using Hibernate Reactive directly:
@Inject
Mutiny.SessionFactory factory;
...
redissonClient.getLock("lock")
    .lock(this.leaseTimeInMillis, TimeUnit.MILLISECONDS, this.lockId)
    .chain(() -> factory.withTransaction(session -> Uni.createFrom().nullItem())
        .eventually(lock::release));
I am trying to log all methods that are invoked in my Spring Boot application using a Byte Buddy based Java agent.
I am able to log all layers except Spring Data JPA repositories, which are actually interfaces. Below is the agent initialization:
new AgentBuilder.Default()
    .type(ElementMatchers.hasSuperType(nameContains("com.soka.tracker.repository").and(ElementMatchers.isInterface())))
    .transform(new AgentBuilder.Transformer.ForAdvice()
        .include(TestAgent.class.getClassLoader())
        .advice(ElementMatchers.any(), "com.testaware.MyAdvice"))
    .installOn(instrumentation);
Any hints or workarounds that I can use to log when my repository methods are invoked? Below is a sample repository in question:
package com.soka.tracker.repository;
.....
@Repository
public interface GeocodeRepository extends JpaRepository<Geocodes, Integer> {
    Optional<Geocodes> findByaddress(String currAddress);
}
Modified agent:
new AgentBuilder.Default()
    .ignore(new AgentBuilder.RawMatcher.ForElementMatchers(any(), isBootstrapClassLoader().or(isExtensionClassLoader())))
    .ignore(new AgentBuilder.RawMatcher.ForElementMatchers(nameStartsWith("net.bytebuddy.")
        .and(not(ElementMatchers.nameStartsWith(NamingStrategy.SuffixingRandom.BYTE_BUDDY_RENAME_PACKAGE + ".")))
        .or(nameStartsWith("sun.reflect."))))
    .type(ElementMatchers.nameContains("soka"))
    .transform(new AgentBuilder.Transformer.ForAdvice()
        .include(TestAgent.class.getClassLoader())
        .advice(any(), "com.testaware.MyAdvice"))
    //.with(AgentBuilder.Listener.StreamWriting.toSystemOut())
    .with(AgentBuilder.TypeStrategy.Default.REDEFINE)
    .installOn(instrumentation);
I see my advice around the controller and service layers - the JPA repository layer is not getting logged.
By default, Byte Buddy ignores synthetic types in its agent. I assume that Spring's repository classes are marked as such and are therefore not processed.
You can set a custom ignore matcher by using the AgentBuilder DSL. By default, the following ignore matcher is set to ignore system classes and Byte Buddy's own types:
new RawMatcher.Disjunction(
    new RawMatcher.ForElementMatchers(any(), isBootstrapClassLoader().or(isExtensionClassLoader())),
    new RawMatcher.ForElementMatchers(nameStartsWith("net.bytebuddy.")
        .and(not(ElementMatchers.nameStartsWith(NamingStrategy.SuffixingRandom.BYTE_BUDDY_RENAME_PACKAGE + ".")))
        .or(nameStartsWith("sun.reflect."))
        .<TypeDescription>or(isSynthetic())))
You would probably need to remove the last condition.
For anybody visiting this question: I was able to work around the actual problem and log the actual queries invoked during execution. Byte Buddy is awesome and very powerful - in my case I simply advise on my DB connection pool classes and gather all the required telemetry:
.or(ElementMatchers.nameContains("com.zaxxer.hikari.pool.HikariProxyConnection"))
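For reference, a minimal sketch of what an advice class such as com.testaware.MyAdvice might look like (the log format and timing logic are illustrative, not the asker's actual class):
import net.bytebuddy.asm.Advice;

public class MyAdvice {

    @Advice.OnMethodEnter
    public static long enter(@Advice.Origin String origin) {
        // log entry and record the start time for duration logging
        System.out.println("entering " + origin);
        return System.nanoTime();
    }

    @Advice.OnMethodExit(onThrowable = Throwable.class)
    public static void exit(@Advice.Enter long startNanos, @Advice.Origin String origin) {
        System.out.println("exiting " + origin + " after " + (System.nanoTime() - startNanos) + " ns");
    }
}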
I am very new to Activiti BPMN. I am creating a flow diagram in Activiti and I am looking for how the username of whoever completed a task can be passed into the shell task arguments, so that I can fetch and save in the DB the user who completed that task.
Any help would be highly appreciated.
Thanks in advance...
Here's something I prepared for Java developers, based on (I think) a blog post I saw.
Edit: https://community.alfresco.com/thread/224336-result-variable-in-javadelegate
RESULT VARIABLE
Option (1) – use expression language (EL) in the XML
<serviceTask id="serviceTask"
activiti:expression="#{myService.toUpperCase(myVar)}"
activiti:resultVariable="myVar" />
Java
public class MyService {
    public String toUpperCase(String val) {
        return val.toUpperCase();
    }
}
The returned String is assigned to activiti:resultVariable.
HACKING THE DATA MODEL DIRECTLY
Option (2) – use the execution environment
Java
public class MyService implements JavaDelegate {
    public void execute(DelegateExecution execution) throws Exception {
        String myVar = (String) execution.getVariable("myVar");
        execution.setVariable("myVar", myVar.toUpperCase());
    }
}
By contrast, here we are being passed an 'execution', and we are pulling values out of it, twiddling them, and putting them back.
This is somewhat analogous to a servlet taking values passed in the HTTP request and then, based on them, doing different things in the response. (A stronger analogy would be a servlet Filter.)
So in your particular instance (depending on how you are invoking the shell script), using the Expression Language (EL) might be the simplest and easiest approach; see the sketch below.
Of course, the value you want to pass has to be one that the process knows about (otherwise how could it pass a value it doesn't have a variable for?).
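For illustration, a hedged XML sketch of that idea; the listener expression, the completedBy variable, and the script path are all hypothetical, and it assumes the shell task's arg fields accept expressions as described in the Activiti user guide:
<userTask id="approveTask" name="Approve">
  <extensionElements>
    <!-- hypothetical: copy the assignee into a process variable on completion -->
    <activiti:taskListener event="complete"
        expression="${task.setVariable('completedBy', task.assignee)}" />
  </extensionElements>
</userTask>
<serviceTask id="recordUser" activiti:type="shell">
  <extensionElements>
    <activiti:field name="command" stringValue="/opt/scripts/record-user.sh" />
    <activiti:field name="arg1">
      <!-- the completing user arrives as the script's first argument -->
      <activiti:expression>${completedBy}</activiti:expression>
    </activiti:field>
  </extensionElements>
</serviceTask>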
Hope that helps. :D
Usually in BPM engines you have a way to hook a listener to these kinds of events. In Activiti, if you are embedding it inside your service, you can add an extra event listener and then record the taskCompleted events, which contain the currently logged-in user; see the sketch after the link below.
https://www.activiti.org/userguide/#eventDispatcher
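A minimal sketch of such a listener (the class name is illustrative, and it assumes an embedded engine where you can register listeners through the RuntimeService):
import org.activiti.engine.delegate.event.ActivitiEntityEvent;
import org.activiti.engine.delegate.event.ActivitiEvent;
import org.activiti.engine.delegate.event.ActivitiEventListener;
import org.activiti.engine.delegate.event.ActivitiEventType;
import org.activiti.engine.impl.persistence.entity.TaskEntity;

public class TaskCompletedListener implements ActivitiEventListener {

    @Override
    public void onEvent(ActivitiEvent event) {
        if (event.getType() == ActivitiEventType.TASK_COMPLETED
                && event instanceof ActivitiEntityEvent) {
            Object entity = ((ActivitiEntityEvent) event).getEntity();
            if (entity instanceof TaskEntity) {
                TaskEntity task = (TaskEntity) entity;
                // task.getAssignee() is the user who completed the task;
                // save it to your DB or set it as a process variable here
            }
        }
    }

    @Override
    public boolean isFailOnException() {
        // don't let a logging failure roll back the task completion
        return false;
    }
}

// registration on an embedded engine, e.g.:
// runtimeService.addEventListener(new TaskCompletedListener(), ActivitiEventType.TASK_COMPLETED);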
Hope this helps.
I have used activiti:taskListener from the Activiti app. You need to configure the properties below:
1. I changed the properties in the task listener.
2. I used a JavaScript variable to hold the task.assignee value.
With the fluent API of Curator, we can create a znode synchronously by invoking something like:
client.create().withMode(CreateMode.PERSISTENT).forPath("/mypath", new byte[0]);
How can we execute the same operation asynchronously while specifying the create mode?
We can execute the given create operation asynchronously, while specifying the create mode, like below:
client.create()
    .withMode(CreateMode.PERSISTENT)
    .inBackground()
    .forPath("/mypath", new byte[0]);
If you're on Java 8 and ZooKeeper 3.5.x, the latest version of Curator (note: I'm the main author) has a new DSL for async. You can read about it here: http://curator.apache.org/curator-x-async/index.html
E.g.
AsyncCuratorFramework async = AsyncCuratorFramework.wrap(client);
async.checkExists().forPath(somePath).thenAccept(stat -> mySuccessOperation(stat));
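Applied to the original question, a sketch of an asynchronous create with an explicit mode using that DSL (withMode on the async builder is an assumption here):
async.create()
    .withMode(CreateMode.PERSISTENT)
    .forPath("/mypath", new byte[0])
    .thenAccept(actualPath -> System.out.println("created " + actualPath));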
I am currently using Spring Boot Starter 1.4.2.RELEASE and Geode Core 1.0.0-incubating via Maven, against a local Docker configuration consisting of a Geode locator and two cache nodes.
I've consulted the documentation here:
http://geode.apache.org/docs/guide/developing/distributed_regions/locking_in_global_regions.html
I have configured a cache.xml file for use with my application like so:
<?xml version="1.0" encoding="UTF-8"?>
<client-cache
xmlns="http://geode.apache.org/schema/cache"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://geode.apache.org/schema/cache
http://geode.apache.org/schema/cache/cache-1.0.xsd"
version="1.0">
<pool name="serverPool">
<locator host="localhost" port="10334"/>
</pool>
<region name="testRegion" refid="CACHING_PROXY">
<region-attributes pool-name="serverPool"
scope="global"/>
</region>
</client-cache>
In my Application.java I have exposed the region as a bean via:
@SpringBootApplication
public class Application {

    @Bean
    ClientCache cache() {
        return new ClientCacheFactory().create();
    }

    @Bean
    Region<String, Integer> testRegion(final ClientCache cache) {
        return cache.<String, Integer>getRegion("testRegion");
    }

    public static void main(String[] args) {
        SpringApplication.run(Application.class, args);
    }
}
And in my "service" DistributedCounter.java:
@Service
public class DistributedCounter {

    @Autowired
    private Region<String, Integer> testRegion;

    /**
     * Uses a fine-grained lock on the modifier.
     * @param counterKey {@link String} containing the key whose value should be incremented.
     */
    public void incrementCounter(String counterKey) {
        if (testRegion.getDistributedLock(counterKey).tryLock()) {
            try {
                Integer old = testRegion.get(counterKey);
                if (old == null) {
                    old = 0;
                }
                testRegion.put(counterKey, old + 1);
            } finally {
                testRegion.getDistributedLock(counterKey).unlock();
            }
        }
    }
}
I have used gfsh to configure a region named /testRegion - however, there is no option to indicate that its type should be "GLOBAL"; only a variety of other options are available. Ideally this should be a persistent and replicated cache, hence the following command:
create region --name=/testRegion --type=REPLICATE_PERSISTENT
Using the how-to at http://geode.apache.org/docs/guide/getting_started/15_minute_quickstart_gfsh.html it is easy to see the functionality of persistence and replication on my two-node configuration.
However, the locking in DistributedCounter above does not cause any errors - it just does not work when two processes attempt to acquire a lock on the same "key"; the second process is not blocked from acquiring the lock. There is an earlier code sample from the GemFire forums which uses the DistributedLockService - but the current documentation warns against using that for locking region entries.
Is the use-case of fine-grained locking to support a "map" of atomically incremented longs a supported use case, and if so, how should it be configured appropriately?
The Region APIs for DistributedLock and RegionDistributedLock only support Regions with Global scope. These DistributedLocks have locking scope within the name of the DistributedLockService (which is the full path name of the Region) only within the cluster. For example, if the Global Region exists on a Server, then the DistributedLocks for that Region can only be used on that Server or on other Servers within that cluster.
Cache Clients were originally a form of hierarchical caching, which means that one cluster could connect to another cluster as a Client. If a Client created an actual Global region, then the DistributedLock within the Client would only have scope within that Client and the cluster it belongs to. DistributedLocks do not propagate in any way to the Servers that such a Client is connected to.
The correct approach would be to write Function(s) that utilize the DistributedLock APIs on Global regions that exist on the Server(s). You would deploy those Functions to the Server and then invoke them on the Server(s) from the Client.
In general, use of Global regions is avoided because every individual put acquires a DistributedLock within the Server's cluster, and this is a very expensive operation.
You could do something similar with a non-Global region by creating a custom DistributedLockService on the Servers and then use Functions to lock/unlock around code that you need to be globally synchronized within that cluster. In this case, the DistributedLock and RegionDistributedLock APIs on Region (for the non-Global region) would be unavailable and all locking would have to be done within a Function on the Server using the DistributedLockService API.
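A rough sketch of that last approach; the function id, lock-service name, and timeout are illustrative, and it assumes the lock service was created once on each server with DistributedLockService.create:
import org.apache.geode.cache.Region;
import org.apache.geode.cache.execute.Function;
import org.apache.geode.cache.execute.FunctionContext;
import org.apache.geode.cache.execute.RegionFunctionContext;
import org.apache.geode.distributed.DistributedLockService;

public class IncrementCounterFunction implements Function {

    @Override
    public void execute(FunctionContext context) {
        RegionFunctionContext rfc = (RegionFunctionContext) context;
        Region<String, Integer> region = (Region<String, Integer>) rfc.getDataSet();
        String key = (String) rfc.getArguments();
        // "counterLocks" is an assumed name, created at server startup via
        // DistributedLockService.create("counterLocks", cache.getDistributedSystem())
        DistributedLockService locks = DistributedLockService.getServiceNamed("counterLocks");
        if (locks.lock(key, 5000, -1)) { // wait up to 5s for the lock, hold it until unlock
            try {
                Integer old = region.get(key);
                region.put(key, old == null ? 1 : old + 1);
            } finally {
                locks.unlock(key);
            }
        }
        context.getResultSender().lastResult(Boolean.TRUE);
    }

    @Override
    public String getId() { return "IncrementCounterFunction"; }

    @Override
    public boolean hasResult() { return true; }

    @Override
    public boolean optimizeForWrite() { return true; }

    @Override
    public boolean isHA() { return false; }
}
The client would then invoke it via FunctionService.onRegion(testRegion), so all locking stays on the servers.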
This only works for server side code (in Functions for example).
From client code you can implement locking semantics using "region.putIfAbsent".
If 2 (or more) clients call this API on the same region and key, only one will successfully put, which is indicated by a null return value. That client is considered to hold the lock. The other clients will get the object that was put by the winner. This is handy because, if the value you "put" contains a unique identifier of the client, then the losers even know who is holding the lock.
Having a region entry represent a lock has other nice benefits. The lock survives across failures. You can use region expiration to set the maximum lease time for a lock, and, as mentioned previously, it's easy to tell who is holding the lock.
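A small sketch of that pattern; the region, key, and clientId names are illustrative (Region implements java.util.concurrent.ConcurrentMap, so putIfAbsent and the two-argument remove are available):
// clientId uniquely identifies this client, e.g. a UUID generated at startup
String previousHolder = lockRegion.putIfAbsent("myLock", clientId);
if (previousHolder == null) {
    try {
        // we won the race: do the globally synchronized work here
    } finally {
        // release only if we still hold the lock
        lockRegion.remove("myLock", clientId);
    }
} else {
    // previousHolder tells us which client currently holds the lock
}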
Hope this helps.
It seems that gfsh does not provide an option to set the correct scope=GLOBAL.
Maybe you could start a server with the --cache-xml-file option, which would point to a cache.xml file.
The cache.xml file should look like this:
<?xml version="1.0" encoding="UTF-8"?>
<cache xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://schema.pivotal.io/gemfire/cache" xsi:schemaLocation="http://schema.pivotal.io/gemfire/cache http://schema.pivotal.io/gemfire/cache/cache-8.1.xsd" version="8.1" lock-lease="120" lock-timeout="60" search-timeout="300" is-server="true" copy-on-read="false">
<cache-server port="0"/>
<region name="testRegion">
<region-attributes data-policy="persistent-replicate" scope="global"/>
</region>
</cache>
Also, the client configuration does not need to define the scope in region-attributes.
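For example, starting the server via gfsh (the server name and file path are placeholders):
start server --name=server1 --cache-xml-file=/path/to/cache.xml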