PLC4X: Exception during scraping of Job - apache-plc4x

I'm currently developing a project that reads data from 19 Siemens S7-1500 PLCs and one Modicon. I have used the scraper tool following this tutorial:
PLC4x scraper tutorial
but after the scraper has been running for a short time I get the following exception:
I have changed the scheduled time between 1 and 100, and I always get the same exception once the scraper reaches the same number of received messages.
I have tested whether using PlcDriverManager instead of PooledPlcDriverManager could be a solution, but the same problem persists.
In my pom.xml I use the following dependency:
<dependency>
    <groupId>org.apache.plc4x</groupId>
    <artifactId>plc4j-scraper</artifactId>
    <version>0.7.0</version>
</dependency>
I have tried changing the version to an older one such as 0.6.0 or 0.5.0, but the problem still persists.
If I use the Modicon (Modbus TCP) I also get this exception after a short time.
Does anyone know why this error is happening? Thanks in advance.
Edit: With scraper version 0.8.0-SNAPSHOT I still have this problem.
Edit 2: This is my code. I think the problem may be that my scraper is opening a lot of connections and fails when it reaches 65526 messages. But since all the processing happens inside the lambda function and I'm using a PooledPlcDriverManager, I think the scraper is using only one connection, so I don't know where the mistake is.
try {
    // Create a new PooledPlcDriverManager
    PlcDriverManager S7_plcDriverManager = new PooledPlcDriverManager();
    // Trigger Collector
    TriggerCollector S7_triggerCollector = new TriggerCollectorImpl(S7_plcDriverManager);
    // Messages counter
    AtomicInteger messagesCounter = new AtomicInteger();
    // Configure the scraper, by binding a Scraper Configuration, a ResultHandler and a TriggerCollector together
    TriggeredScraperImpl S7_scraper = new TriggeredScraperImpl(S7_scraperConfig, (jobName, sourceName, results) -> {
        LinkedList<Object> S7_results = new LinkedList<>();
        messagesCounter.getAndIncrement();
        S7_results.add(jobName);
        S7_results.add(sourceName);
        S7_results.add(results);
        logger.info("Array: " + String.valueOf(S7_results));
        logger.info("MESSAGE number: " + messagesCounter);
        // Producer topics routing
        String topic = "s7" + S7_results.get(1).toString().substring(S7_results.get(1).toString().indexOf("S7_SourcePLC") + 9, S7_results.get(1).toString().length());
        String key = parseKey_S7("s7");
        String value = parseValue_S7(S7_results.getLast().toString(), S7_results.get(1).toString());
        logger.info("------- PARSED VALUE -------------------------------- " + value);
        // Create my own Kafka Producer record
        ProducerRecord<String, String> record = new ProducerRecord<String, String>(topic, key, value);
        // Send Data to Kafka - asynchronous
        producer.send(record, new Callback() {
            public void onCompletion(RecordMetadata recordMetadata, Exception e) {
                // executes every time a record is successfully sent or an exception is thrown
                if (e == null) {
                    // the record was successfully sent
                    logger.info("Received new metadata. \n" +
                            "Topic: " + recordMetadata.topic() + "\n" +
                            "Partition: " + recordMetadata.partition() + "\n" +
                            "Offset: " + recordMetadata.offset() + "\n" +
                            "Timestamp: " + recordMetadata.timestamp());
                } else {
                    logger.error("Error while producing", e);
                }
            }
        });
    }, S7_triggerCollector);
    S7_scraper.start();
    S7_triggerCollector.start();
} catch (ScraperException e) {
    logger.error("Error starting the scraper (S7_scrapper)", e);
}

So in the end it was indeed the PLC that was simply hanging up the connection randomly. However, the NiFi integration should have handled this situation more gracefully. I implemented a fix for this particular error ... could you please give version 0.8.0-SNAPSHOT a try (or use 0.8.0 if we happen to have released it already)?
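For reference, this is the kind of pom.xml change needed to try the snapshot; the repository block is only an assumption in case the Apache snapshot repository is not already configured in your build:

<repositories>
    <repository>
        <id>apache-snapshots</id>
        <url>https://repository.apache.org/content/repositories/snapshots/</url>
        <snapshots>
            <enabled>true</enabled>
        </snapshots>
    </repository>
</repositories>

<dependency>
    <groupId>org.apache.plc4x</groupId>
    <artifactId>plc4j-scraper</artifactId>
    <version>0.8.0-SNAPSHOT</version>
</dependency>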

Related

Using any ignite filter throws exception with the query

I am querying an Ignite cache like this:
try (QueryCursor<Cache.Entry<Long, IgniteAccountOrder>> qryCursor = cache.query(new ScanQuery<>())) {
    qryCursor.forEach(
        entry -> System.out.println("Key = " + entry.getKey() + ", Value = " + entry.getValue()));
}
This works fine and the value gets serialized fine.
As soon as any filter is added to the query, an exception occurs. Here is the exact same code with a filter that always returns true, which is logically equivalent to the code above without any filter:
IgniteBiPredicate<Long, IgniteAccountOrder> filter = (key, p) -> true;
try (QueryCursor<Cache.Entry<Long, IgniteAccountOrder>> qryCursor = cache.query(new ScanQuery<>(filter))) {
    qryCursor.forEach(
        entry -> System.out.println("Key = " + entry.getKey() + ", Value = " + entry.getValue()));
}
The following exception occurs with the second code:
Exception in thread "main" org.apache.ignite.client.ClientException: Ignite failed to process request [4]: Failed to deserialize object [typeName=java.lang.invoke.SerializedLambda] (server status code [1])
    at org.apache.ignite.internal.client.thin.TcpClientChannel.convertException(TcpClientChannel.java:336)
    at org.apache.ignite.internal.client.thin.TcpClientChannel.receive(TcpClientChannel.java:296)
    at org.apache.ignite.internal.client.thin.TcpClientChannel.service(TcpClientChannel.java:218)
    at org.apache.ignite.internal.client.thin.ReliableChannel.lambda$service$1(ReliableChannel.java:165)
    at org.apache.ignite.internal.client.thin.ReliableChannel.applyOnDefaultChannel(ReliableChannel.java:763)
    at org.apache.ignite.internal.client.thin.ReliableChannel.applyOnDefaultChannel(ReliableChannel.java:731)
    at org.apache.ignite.internal.client.thin.ReliableChannel.service(ReliableChannel.java:164)
    at org.apache.ignite.internal.client.thin.GenericQueryPager.next(GenericQueryPager.java:93)
    at org.apache.ignite.internal.client.thin.ClientQueryCursor$1.nextPage(ClientQueryCursor.java:93)
    at org.apache.ignite.internal.client.thin.ClientQueryCursor$1.hasNext(ClientQueryCursor.java:76)
    at java.base/java.lang.Iterable.forEach(Iterable.java:74)
The issue was that the IgniteAccountOrder class was not available on the remote server.
I was connecting to the remote server via a thin client, and peer class loading cannot be enabled for a thin client.
I did the following to make this work:
Switched to a thick client
Enabled peer class loading via the thick client:
IgniteConfiguration cfg = new IgniteConfiguration();
cfg.setPeerClassLoadingEnabled(true);
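A minimal sketch of the thick-client approach, assuming a cache named accountOrders (the cache name and the surrounding class are made up for illustration) and that peer class loading is enabled consistently on the server nodes as well:

import javax.cache.Cache;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.query.QueryCursor;
import org.apache.ignite.cache.query.ScanQuery;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.lang.IgniteBiPredicate;

public class ThickClientScanExample {
    public static void main(String[] args) {
        IgniteConfiguration cfg = new IgniteConfiguration();
        cfg.setClientMode(true);              // start as a thick client node
        cfg.setPeerClassLoadingEnabled(true); // ship the filter class to the servers

        try (Ignite ignite = Ignition.start(cfg)) {
            IgniteCache<Long, IgniteAccountOrder> cache = ignite.cache("accountOrders");
            IgniteBiPredicate<Long, IgniteAccountOrder> filter = (key, order) -> true;
            try (QueryCursor<Cache.Entry<Long, IgniteAccountOrder>> cursor =
                         cache.query(new ScanQuery<>(filter))) {
                cursor.forEach(entry ->
                        System.out.println("Key = " + entry.getKey() + ", Value = " + entry.getValue()));
            }
        }
    }
}

With the thick client, the filter lambda and its types can be peer-loaded to the server nodes, so the ScanQuery no longer fails with the SerializedLambda deserialization error.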

ConnectionSpecWrapper no longer present in recent releases

Why has the ActiveJDBC class ConnectionSpecWrapper disappeared in recent releases?
In the 3.0 (and also 2.3.2-j8) activejdbc jar we have:
org/javalite/activejdbc/connection_config/ConnectionJndiConfig.class
org/javalite/activejdbc/connection_config/ConnectionConfig.class
org/javalite/activejdbc/connection_config/ConnectionJdbcConfig.class
org/javalite/activejdbc/connection_config/ConnectionDataSourceConfig.class
org/javalite/activejdbc/connection_config/DBConfiguration.class
In the 2.3 jar we have:
org/javalite/activejdbc/connection_config/ConnectionSpecWrapper.class
org/javalite/activejdbc/connection_config/DbConfiguration.class
org/javalite/activejdbc/connection_config/ConnectionJdbcSpec.class
org/javalite/activejdbc/connection_config/ConnectionSpec.class
org/javalite/activejdbc/connection_config/ConnectionDataSourceSpec.class
org/javalite/activejdbc/connection_config/ConnectionJndiSpec.class
I am using it like this, in a filter:
@Override
public void before() {
    if (Configuration.isTesting())
        return;
    List<ConnectionSpecWrapper> connectionWrappers = getConnectionWrappers();
    if (connectionWrappers.isEmpty()) {
        throw new InitException("There are no connection specs in '" + Configuration.getEnv() + "' environment");
    }
    for (ConnectionSpecWrapper connectionWrapper : connectionWrappers) {
        DB db = new DB(connectionWrapper.getDbName());
        db.open(connectionWrapper.getConnectionSpec());
        log.debug("Opened connection: " + connectionWrapper.getDbName() + " envname " + connectionWrapper.getEnvironment());
        if (manageTransaction) {
            db.openTransaction();
        }
    }
}

@Override
public void after() {
    if (Configuration.isTesting())
        return;
    List<ConnectionSpecWrapper> connectionWrappers = getConnectionWrappers();
    if (connectionWrappers != null && !connectionWrappers.isEmpty()) {
        for (ConnectionSpecWrapper connectionWrapper : connectionWrappers) {
            DB db = new DB(connectionWrapper.getDbName());
            if (manageTransaction) {
                db.commitTransaction();
            }
            db.close();
            log.debug("Closed connection: " + connectionWrapper.getDbName() + " envname " + connectionWrapper.getEnvironment());
        }
    }
}
I'm thinking of upgrading the Gazzetta dello Sport's fantasy football site, which has been live for something like 8 years and working really well. It is on Java 7 / ActiveWeb 1.10 / ActiveJDBC 1.4.9.
The "Spec"/"Wrapper" classes have been renamed into "Config" classes, as you rightly noticed. Generally these classes are not used directly. If you want to continue using them you can, of course (rename accordingly). However, a better approach is to define your connections in a property file:
https://javalite.io/database_configuration#property-file-configuration
and simply use https://javalite.io/controller_filters#dbconnectionfilter.
I'm assuming you wrote a custom controller filter and are using ActiveWeb.
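For reference, the property-file approach expects a database.properties roughly along these lines (driver, credentials and URL here are placeholders, not a real configuration):

development.driver=com.mysql.jdbc.Driver
development.username=user
development.password=pwd
development.url=jdbc:mysql://localhost/acme_development

production.jndi=java:comp/env/jdbc/acme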
Update:
Now that we have established you use ActiveWeb, consider removing your code and simply using a DBConnectionFilter; here is a perfect example: https://github.com/javalite/javalite-examples/blob/master/activeweb-simple/src/main/java/app/config/AppControllerConfig.java#L31
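A minimal sketch of that setup, modelled on the linked example; the two-argument DBConnectionFilter constructor (connection name plus transaction management) is my assumption for replicating your manageTransaction behaviour, so check it against the version you upgrade to:

package app.config;

import org.javalite.activeweb.AbstractControllerConfig;
import org.javalite.activeweb.AppContext;
import org.javalite.activeweb.controller_filters.DBConnectionFilter;

public class AppControllerConfig extends AbstractControllerConfig {
    public void init(AppContext context) {
        // Opens the connection defined in database.properties before each action,
        // closes it afterwards, and (with the boolean flag) wraps the action in a transaction.
        add(new DBConnectionFilter("default", true));
    }
}

This replaces both your before() and after() methods, including the commit/close handling.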

Plc4x addressing system

I am discovering the PLC4X Java implementation, which seems to be of great interest in our field, but the youth of the project and of the documentation makes us hesitate. I have been able to implement the basic hello world for reading from our PLCs, but I was unable to write. I could not find how the addresses are handled and what the maskwrite, andMask and orMask fields mean.
Can somebody please explain the following example to me and detail how the addresses should be used?
@Test
void testWriteToPlc() {
    // Establish a connection to the PLC using the url provided as first argument
    try (PlcConnection plcConnection = new PlcDriverManager().getConnection("modbus:tcp://1.1.2.1")) {
        // Create a new write request:
        // - Give the single item requested the alias name "value"
        var builder = plcConnection.writeRequestBuilder();
        builder.addItem("value-" + 1, "maskwrite:1[1]/2/3", 2);
        var writeRequest = builder.build();
        LOGGER.info("Synchronous request ...");
        var syncResponse = writeRequest.execute().get();
    } catch (Exception e) {
        e.printStackTrace();
    }
}
I have used PLC4X for writing with the Modbus driver successfully. Here is some sample code I am using:
public static void writePlc4x(ProtocolConnection connection, String registerName, byte[] writeRegister, int offset)
        throws InterruptedException {
    // modbus write works ok writing one record per request/item
    int size = 1;
    PlcWriteRequest.Builder writeBuilder = connection.writeRequestBuilder();
    if (writeRegister.length == 2) {
        writeBuilder.addItem(registerName, "register:" + offset + "[" + size + "]", writeRegister);
    }
    ...
    PlcWriteRequest request = writeBuilder.build();
    request.execute().whenComplete((writeResponse, error) -> {
        assertNotNull(writeResponse);
    });
    Thread.sleep((long) (sleepWait4Write * writeRegister.length * 1000));
}
In the case of Modbus writing there is an issue regarding the completion of the writer Future, but the write is done. In my Modbus use case I don't need any of the mask stuff.
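For completeness, here is a self-contained sketch of a plain holding-register write through the raw PLC4X API, without any wrapper classes; the host, address and value are made up for illustration:

import org.apache.plc4x.java.PlcDriverManager;
import org.apache.plc4x.java.api.PlcConnection;
import org.apache.plc4x.java.api.messages.PlcWriteRequest;
import org.apache.plc4x.java.api.messages.PlcWriteResponse;

public class ModbusWriteSketch {
    public static void main(String[] args) throws Exception {
        try (PlcConnection connection = new PlcDriverManager().getConnection("modbus:tcp://1.1.2.1")) {
            // Write the value 2 to a single holding register at offset 1
            PlcWriteRequest writeRequest = connection.writeRequestBuilder()
                    .addItem("value-1", "register:1[1]", 2)
                    .build();
            PlcWriteResponse response = writeRequest.execute().get();
            System.out.println("Response code: " + response.getResponseCode("value-1"));
        }
    }
}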

Plc4x cannot read more than 9 registers at once

I am trying to understand the address system in the PLC4X Java implementation. Below is an example of the code for reading from the PLCs:
@Test
void testReadingFromPlc() {
    // Establish a connection to the PLC using the url provided as first argument
    try (PlcConnection plcConnection = new PlcDriverManager().getConnection("modbus:tcp://1.1.2.1")) {
        // Create a new read request:
        // - Give each requested item an alias name
        var builder = plcConnection.readRequestBuilder();
        builder.addItem("value-" + 1, "register:1[9]");
        builder.addItem("value-" + 2, "coil:1000[8]");
        var readRequest = builder.build();
        LOGGER.info("Synchronous request ...");
        var syncResponse = readRequest.execute().get();
        // Simply iterating over the field names returned in the response.
        var bytes = syncResponse.getAllByteArrays("value-1");
        bytes.forEach(item -> System.out.println(TopicsMapping.byteArray2IntegerArray(item)[0]));
        var booleans = syncResponse.getAllBooleans("value-2");
        booleans.forEach(System.out::println);
    } catch (Exception e) {
        e.printStackTrace();
    }
}
Our PLCs manage 16 registers, but the address regex doesn't allow a quantity bigger than 9. Is it possible to change this?
Moreover, if I try to add another field with the same purpose, then no reading happens:
var builder = plcConnection.readRequestBuilder();
builder.addItem("value-" + 0, "register:26[8]");
builder.addItem("value-" + 1, "register:34[8]");
builder.addItem("value-" + 2, "coil:1000[8]");
var readRequest = builder.build();
Any help much appreciated. Could you also show me where I can find more information on this framework?
I am reading and writing with the Modbus driver in PLC4X successfully. I have attached some writing code to your other question: Plc4x addressing system.
About reading, here is some code:
public static PlcReadResponse readModbusTestData(ProtocolClient client,
                                                 String registerName,
                                                 int offset,
                                                 int size,
                                                 String registerType)
        throws ExecutionException, InterruptedException, TimeoutException {
    PlcReadRequest readRequest = client.getConnection().readRequestBuilder()
            .addItem(registerName, registerType + ":" + offset + "[" + size + "]").build();
    return readRequest.execute().get(2, TimeUnit.SECONDS);
}
I have not yet tested multiple reads by adding more items to the PlcReadRequest, but it should work. Writing several items does work.
In any case, in order to understand how PLC4X works for Modbus or OPC UA I have had to dive into the source code. It works, but in its current state you need to read the source code for the details.
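As a sketch of the multi-item variant (untested here as well), splitting the 16 registers into two items of 8 and reading everything in one request could look like this; the host and offsets are taken from your question, the rest is illustrative:

import java.util.concurrent.TimeUnit;
import org.apache.plc4x.java.PlcDriverManager;
import org.apache.plc4x.java.api.PlcConnection;
import org.apache.plc4x.java.api.messages.PlcReadRequest;
import org.apache.plc4x.java.api.messages.PlcReadResponse;

public class ModbusMultiReadSketch {
    public static void main(String[] args) throws Exception {
        try (PlcConnection connection = new PlcDriverManager().getConnection("modbus:tcp://1.1.2.1")) {
            // Two 8-register items instead of one 16-register item, plus the coils.
            PlcReadRequest readRequest = connection.readRequestBuilder()
                    .addItem("registers-low", "register:26[8]")
                    .addItem("registers-high", "register:34[8]")
                    .addItem("coils", "coil:1000[8]")
                    .build();
            PlcReadResponse response = readRequest.execute().get(2, TimeUnit.SECONDS);
            // Iterate over the field names actually returned and check their response codes.
            for (String fieldName : response.getFieldNames()) {
                System.out.println(fieldName + " -> " + response.getResponseCode(fieldName));
            }
            response.getAllBooleans("coils").forEach(System.out::println);
        }
    }
}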

noflo 0.5.13 spreadsheet example broken?

I am new to noflo and am looking at examples in order to explore it. The spreadsheet example looked interesting, but I couldn't make it run. First, it takes some time and manual debugging to identify missing components; not a big deal, and I believe this will be improved in the future. For now, the error message I get is:
return process.component.outPorts[port].attach(socket);
^
TypeError: undefined is not a function
Apparently, before this, isAddressable() was also undefined. I checked with this SO issue, but I don't have noflo 0.4 as a dependency anywhere. I spent some time debugging it but was seemingly stuck, so I decided to post to SO.
The question is, what are the correct steps to run the spreadsheet example?
For reproduction, here is what I have done:
0) install the following components
noflo-adapters
noflo-core
noflo-couchdb
noflo-filesystem
noflo-groups
noflo-objects
noflo-packets
noflo-strings
noflo-tika
noflo-xml
i) Edit spreadsheet/parse.fbp, because the first error was:
throw new Error("No outport '" + port + "' defined in process " + proc
^
Error: No outport 'error' defined in process Read (Read() ERROR -> IN Display())
Apparently the couchdb ReadDocument component does not provide an Error outport, therefore I replaced ReadDocument with ReadFile:
18c18
< 'tika-app-0.9.jar' -> TIKA Read(ReadDocument)
---
> 'tika-app-0.9.jar' -> TIKA Read(ReadFile)
ii) At this point, I received the following:
if (process.component.outPorts[port].isAddressable()) {
^
TypeError: undefined is not a function
and improvised a workaround by checking whether isAddressable is defined at this location in the code:
@@ -259,9 +261,11 @@
throw new Error("No outport '" + port + "' defined in process " + process.id + " (" + (socket.getId()) + ")");
return;
}
- if (process.component.outPorts[port].isAddressable()) {
+ if (process.component.outPorts[port].isAddressable && process.component.outPorts[port].isAddressable()) {
return process.component.outPorts[port].attach(socket, index);
}
return process.component.outPorts[port].attach(socket);
};
and it fails either way. Again, the question is: what are the correct steps to run the spreadsheet example?
Thanks in advance.