Spring AMQP + RabbitMQ RPC: How to execute/delegate different methods in a class

I spent the entire day trying to get this to work. I've been following tutorials 5 & 6 for Spring AMQP on the RabbitMQ tutorials page.
Is it possible for a single class to execute a different method based on some property, e.g. the routing key?
I've tried this so far to no avail:
@RabbitListener(bindings = @QueueBinding(
        value = @Queue(value = "requests"),
        exchange = @Exchange(value = "ourexchange"),
        key = "doFunc1")
)
public String func1(long id) {
    System.out.println("func1 " + id);
    return "func1 " + id;
}

@RabbitListener(bindings = @QueueBinding(
        value = @Queue(value = "requests"),
        exchange = @Exchange(value = "ourexchange"),
        key = "doFunc2")
)
public String func2(long id) {
    System.out.println("func2 " + id);
    return "func2 " + id;
}
In my client I did this:
public void send() {
    System.out.println(" [x] Get func1( account_id: " + accountId + ")");
    String response = (String) template.convertSendAndReceive("ourexchange", "doFunc1", accountId);
    System.out.println(" [.] Got '" + response + "'");

    System.out.println(" [x] Get func2( account_id: " + accountId + ")");
    // reuse the variable; redeclaring String response here would not compile
    response = (String) template.convertSendAndReceive(exchange.getName(), "doFunc2", accountId);
    System.out.println(" [.] Got '" + response + "'");
}
I've got it "somewhat" working, but it appears to operate in round-robin fashion: the first method is called, then the next one.
I've already considered the explanation here: Single Queue, multiple @RabbitListener but different services
But since the signatures of both methods look identical, I'm not sure it's possible.
Do note that I'm a beginner to the concepts of AMQP (like I've just read about the basics today). Am I doing this right or am I misunderstanding the usage?

The @RabbitListener infrastructure doesn't perform routing based on the routing key. You should use a different queue for each method and let RabbitMQ do the routing at the exchange level.
Alternatively, if you must use a single queue for some reason, you can receive the RECEIVED_ROUTING_KEY as a @Header parameter on a single listener method and delegate to the different methods from there.
I've got it "somewhat" working, but it appears to operate in round-robin fashion: the first method is called, then the next one.
That's because RabbitMQ sees 2 consumers on the same queue and will round-robin the messages between them. You need to use 2 queues, or a single listener method that does the routing itself.
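For illustration, a minimal sketch of both options (the queue names requests.func1 and requests.func2 are assumptions, not from the original setup):

// Option 1: one queue per routing key; RabbitMQ routes at the exchange.
@RabbitListener(bindings = @QueueBinding(
        value = @Queue(value = "requests.func1"),
        exchange = @Exchange(value = "ourexchange"),
        key = "doFunc1"))
public String func1(long id) {
    return "func1 " + id;
}

@RabbitListener(bindings = @QueueBinding(
        value = @Queue(value = "requests.func2"),
        exchange = @Exchange(value = "ourexchange"),
        key = "doFunc2"))
public String func2(long id) {
    return "func2 " + id;
}

// Option 2: a single queue and a single listener that delegates on the
// received routing key (imports: org.springframework.amqp.support.AmqpHeaders,
// org.springframework.messaging.handler.annotation.Header).
@RabbitListener(queues = "requests")
public String dispatch(long id,
        @Header(AmqpHeaders.RECEIVED_ROUTING_KEY) String routingKey) {
    return "doFunc1".equals(routingKey) ? func1(id) : func2(id);
}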

Related

JIRA API - how to update fields on a JIRA issue that is closed/published

The following code works fine on a JIRA issue that is open, but when it is tried on a closed/published issue I get an error. I wanted to see if this is even possible; manually, on a closed/published JIRA issue, we can update those fields.
Client client = Client.create();
WebResource webResource = client.resource("https://jira.com/rest/api/latest/issue/JIRA_KEY1");
String data1 = "{\r\n" +
        " \"fields\" : {\r\n" +
        " \"customfield_10201\" : \"Value 1\"\r\n" +
        " }\r\n" +
        "}";
String auth = new String(Base64.encode("user" + ":" + "pass"));
ClientResponse response = webResource
        .header("Authorization", "Basic " + auth)
        .type("application/json")
        .accept("application/json")
        .put(ClientResponse.class, data1);
Error received
Http Error : 400{"errorMessages":[],"errors":{"customfield_10201":"Field 'customfield_10201' cannot be set. It is not on the appropriate screen, or unknown."}}
There is probably a restriction that doesn't allow the value of that field to be changed once the issue is marked as complete.
Try opening that completed issue via the web interface and changing the field's value; if you can't do it via the web interface, then you can't do it via the REST API either.
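As a quick way to check, JIRA's editmeta resource returns the fields that are currently editable for an issue; a sketch using the same Jersey client as above (same URL and credentials):

// Fields missing from the editmeta response are not on the edit screen
// for the issue's current status and can't be set via a PUT.
ClientResponse meta = client.resource("https://jira.com/rest/api/latest/issue/JIRA_KEY1/editmeta")
        .header("Authorization", "Basic " + auth)
        .accept("application/json")
        .get(ClientResponse.class);
System.out.println(meta.getEntity(String.class));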

PLC4X: Exception during scraping of a Job

I'm currently developing a project that reads data from 19 Siemens S1500 PLCs and 1 Modicon. I have used the scraper tool, following this tutorial:
PLC4x scraper tutorial
But after the scraper has been running for a short time I get the following exception:
I have changed the scheduled time between 1 and 100 and I always get the same exception when the scraper reaches the same number of received messages.
I have tested whether using PlcDriverManager instead of PooledPlcDriverManager could be a solution, but the same problem persists.
In my pom.xml I use the following dependency:
<dependency>
    <groupId>org.apache.plc4x</groupId>
    <artifactId>plc4j-scraper</artifactId>
    <version>0.7.0</version>
</dependency>
I have tried changing the version to an older one like 0.6.0 or 0.5.0, but the problem still persists.
If I use the Modicon (Modbus TCP) I also get this exception after a short amount of time.
Does anyone know why this error is happening? Thanks in advance.
Edit: With scraper version 0.8.0-SNAPSHOT I still have this problem.
Edit 2: This is my code. I think the problem may be that my scraper is opening a lot of connections and fails when it reaches 65526 messages. But since all the processing happens inside the lambda function and I'm using a PooledPlcDriverManager, I think the scraper is using only one connection, so I don't know where the mistake is.
try {
    // Create a new PooledPlcDriverManager
    PlcDriverManager S7_plcDriverManager = new PooledPlcDriverManager();
    // Trigger Collector
    TriggerCollector S7_triggerCollector = new TriggerCollectorImpl(S7_plcDriverManager);
    // Messages counter
    AtomicInteger messagesCounter = new AtomicInteger();
    // Configure the scraper, by binding a Scraper Configuration, a ResultHandler and a TriggerCollector together
    TriggeredScraperImpl S7_scraper = new TriggeredScraperImpl(S7_scraperConfig, (jobName, sourceName, results) -> {
        LinkedList<Object> S7_results = new LinkedList<>();
        messagesCounter.getAndIncrement();
        S7_results.add(jobName);
        S7_results.add(sourceName);
        S7_results.add(results);
        logger.info("Array: " + String.valueOf(S7_results));
        logger.info("MESSAGE number: " + messagesCounter);
        // Producer topics routing
        String topic = "s7" + S7_results.get(1).toString().substring(S7_results.get(1).toString().indexOf("S7_SourcePLC") + 9, S7_results.get(1).toString().length());
        String key = parseKey_S7("s7");
        String value = parseValue_S7(S7_results.getLast().toString(), S7_results.get(1).toString());
        logger.info("------- PARSED VALUE -------------------------------- " + value);
        // Create my own Kafka Producer record
        ProducerRecord<String, String> record = new ProducerRecord<String, String>(topic, key, value);
        // Send Data to Kafka - asynchronous
        producer.send(record, new Callback() {
            public void onCompletion(RecordMetadata recordMetadata, Exception e) {
                // executes every time a record is successfully sent or an exception is thrown
                if (e == null) {
                    // the record was successfully sent
                    logger.info("Received new metadata. \n" +
                            "Topic:" + recordMetadata.topic() + "\n" +
                            "Partition: " + recordMetadata.partition() + "\n" +
                            "Offset: " + recordMetadata.offset() + "\n" +
                            "Timestamp: " + recordMetadata.timestamp());
                } else {
                    logger.error("Error while producing", e);
                }
            }
        });
    }, S7_triggerCollector);
    S7_scraper.start();
    S7_triggerCollector.start();
} catch (ScraperException e) {
    logger.error("Error starting the scraper (S7_scrapper)", e);
}
So in the end it was indeed the PLC that was simply hanging up the connection randomly. However, the NiFi integration should have handled this situation more gracefully. I implemented a fix for this particular error ... could you please give version 0.8.0-SNAPSHOT a try (or use 0.8.0 if we happen to have released it already)?
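For reference, trying the snapshot would look something like this in the pom.xml; this is a sketch (the Apache snapshots repository is the standard one, but verify the coordinates before relying on them):

<!-- Assumes the fix is available as plc4j-scraper 0.8.0-SNAPSHOT -->
<repositories>
    <repository>
        <id>apache-snapshots</id>
        <url>https://repository.apache.org/content/repositories/snapshots/</url>
        <snapshots>
            <enabled>true</enabled>
        </snapshots>
    </repository>
</repositories>

<dependency>
    <groupId>org.apache.plc4x</groupId>
    <artifactId>plc4j-scraper</artifactId>
    <version>0.8.0-SNAPSHOT</version>
</dependency>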

Plc4x addressing system

I am discovering the Plc4x Java implementation, which seems to be of great interest in our field, but the youth of the project and of the documentation makes us hesitate. I have been able to implement the basic hello world for reading from our PLCs, but I was unable to write. I could not find how the addresses are handled and what the maskwrite, andMask and orMask fields mean.
Can somebody please explain the following example to me and detail how the addresses should be used?
@Test
void testWriteToPlc() {
    // Establish a connection to the PLC using the url provided as first argument
    try (PlcConnection plcConnection = new PlcDriverManager().getConnection("modbus:tcp://1.1.2.1")) {
        // Create a new write request:
        // - Give the single item requested the alias name "value"
        var builder = plcConnection.writeRequestBuilder();
        builder.addItem("value-" + 1, "maskwrite:1[1]/2/3", 2);
        var writeRequest = builder.build();
        LOGGER.info("Synchronous request ...");
        var syncResponse = writeRequest.execute().get();
    } catch (Exception e) {
        e.printStackTrace();
    }
}
I have used PLC4x for writing using the modbus driver with success. Here is some sample code I am using:
public static void writePlc4x(ProtocolConnection connection, String registerName, byte[] writeRegister, int offset)
        throws InterruptedException {
    // modbus write works ok writing one record per request/item
    int size = 1;
    PlcWriteRequest.Builder writeBuilder = connection.writeRequestBuilder();
    if (writeRegister.length == 2) {
        writeBuilder.addItem(registerName, "register:" + offset + "[" + size + "]", writeRegister);
    }
    ...
    PlcWriteRequest request = writeBuilder.build();
    request.execute().whenComplete((writeResponse, error) -> {
        assertNotNull(writeResponse);
    });
    Thread.sleep((long) (sleepWait4Write * writeRegister.length * 1000));
}
In the case of Modbus writing there is an issue regarding the return of the write Future, but the write is done. In the Modbus use case I don't need any of the mask stuff.
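On the maskwrite question itself: in the Modbus specification, the Mask Write Register function (code 0x16) combines a holding register's current contents with two masks, which is presumably what the andMask and orMask parts of the address map to. In Java terms:

// Modbus Mask Write Register (function 0x16):
// bits set in andMask keep the register's current value;
// bits cleared in andMask are taken from orMask.
int result = (current & andMask) | (orMask & ~andMask);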

What is the correct use of consumer groups in Spring Cloud Stream/Data Flow and RabbitMQ?

A follow-up to this:
one SCDF source, 2 processors but only 1 processes each item
The 2 processors (del-1 and del-2) in the picture are receiving the same data within milliseconds of each other. I'm trying to rig this so that del-2 never receives the same item as del-1 and vice versa, so obviously I've got something configured incorrectly, but I'm not sure where.
My processor has the following application.properties
spring.application.name=${vcap.application.name:sample-processor}
info.app.name=#project.artifactId#
info.app.description=#project.description#
info.app.version=#project.version#
management.endpoints.web.exposure.include=health,info,bindings
spring.autoconfigure.exclude=org.springframework.boot.autoconfigure.security.servlet.SecurityAutoConfiguration
spring.cloud.stream.bindings.input.group=input
Is "spring.cloud.stream.bindings.input.group" specified correctly?
Here's the processor code:
@Transformer(inputChannel = Processor.INPUT, outputChannel = Processor.OUTPUT)
public Object transform(String inputStr) throws InterruptedException {
    ApplicationLog log = new ApplicationLog(this, "timerMessageSource");
    String message = " I AM [" + inputStr + "] AND I HAVE BEEN PROCESSED!!!!!!!";
    log.info("SampleProcessor.transform() incoming inputStr=" + inputStr);
    return message;
}
Is the @Transformer annotation the proper way to link this bit of code with "spring.cloud.stream.bindings.input.group" from application.properties? Are there any other annotations necessary?
Here's my source:
private String format = "EEEEE dd MMMMM yyyy HH:mm:ss.SSSZ";

@Bean
@InboundChannelAdapter(value = Source.OUTPUT, poller = @Poller(fixedDelay = "1000", maxMessagesPerPoll = "1"))
public MessageSource<String> timerMessageSource() {
    ApplicationLog log = new ApplicationLog(this, "timerMessageSource");
    String message = new SimpleDateFormat(format).format(new Date());
    log.info("SampleSource.timeMessageSource() message=[" + message + "]");
    return () -> new GenericMessage<>(new SimpleDateFormat(format).format(new Date()));
}
I'm confused about "value = Source.OUTPUT". Does this mean my processor needs to be named differently?
Is the inclusion of @Poller causing me a problem somehow?
This is how I define the 2 processor streams (del-1 and del-2) in SCDF shell:
stream create del-1 --definition ":split > processor-that-does-everything-sleeps5 --spring.cloud.stream.bindings.applicationMetrics.destination=metrics > :merge"
stream create del-2 --definition ":split > processor-that-does-everything-sleeps5 --spring.cloud.stream.bindings.applicationMetrics.destination=metrics > :merge"
Do I need to do anything differently there?
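For context, this is the kind of change I'm wondering about: giving both streams the same consumer group explicitly on the shared input, so the two processors compete for messages instead of each receiving a copy (the group name shared-processors is just a placeholder):

stream create del-1 --definition ":split > processor-that-does-everything-sleeps5 --spring.cloud.stream.bindings.input.group=shared-processors --spring.cloud.stream.bindings.applicationMetrics.destination=metrics > :merge"
stream create del-2 --definition ":split > processor-that-does-everything-sleeps5 --spring.cloud.stream.bindings.input.group=shared-processors --spring.cloud.stream.bindings.applicationMetrics.destination=metrics > :merge"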
All of this is running in Docker/K8s.
RabbitMQ is provided by bitnami/rabbitmq:3.7.2-r1 and is configured with the following props:
RABBITMQ_USERNAME: user
RABBITMQ_PASSWORD: <redacted>
RABBITMQ_ERL_COOKIE: <redacted>
RABBITMQ_NODE_PORT_NUMBER: 5672
RABBITMQ_NODE_TYPE: stats
RABBITMQ_NODE_NAME: rabbit@localhost
RABBITMQ_CLUSTER_NODE_NAME:
RABBITMQ_DEFAULT_VHOST: /
RABBITMQ_MANAGER_PORT_NUMBER: 15672
RABBITMQ_DISK_FREE_LIMIT: "6GiB"
Are any other environment variables necessary?

AppWarp onPrivateUpdateReceived doesn't work

var codestring = this.ball + " " + this.ball.position["x"] + " " + this.ball.position["y"];
_warpclient.sendPrivateUpdatePeers(enemy, codestring);
I use this function to send a private update to the peer with the defined name enemy.
function onSendPrivateUpdateDone(result) {
    console.log('Update done ' + result);
}
onSendPrivateUpdateDone works and logs a message on the side of the person sending the update. Here https://apphq.shephertz.com/appWarp/index#/testManager it's noted that my messages are sent, the room is created and players are connected to the room, but the person who should receive the message never does, because the onPrivateUpdateReceived callback does nothing.
onPrivateUpdateReceived(userName, msg) {
    console.log("Username: " + userName);
    console.log("Message: " + msg);
}
onPrivateUpdateReceived receives messages sent via the sendPrivateUpdate method; you should change sendPrivateUpdatePeers to sendPrivateUpdate for it to work.
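In other words, the sending side from the question would become:

_warpclient.sendPrivateUpdate(enemy, codestring);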