I am discovering the PLC4X Java implementation, which seems to be of great interest in our field, but the youth of the project and its documentation make us hesitate. I have been able to implement the basic hello world for reading from our PLCs, but I was unable to write. I could not find how the addresses are handled and what the maskwrite, andMask and orMask fields mean.
Can somebody please explain the following example to me and detail how the addresses should be used?
@Test
void testWriteToPlc() {
    // Establish a connection to the PLC using the url provided as first argument
    try (PlcConnection plcConnection = new PlcDriverManager().getConnection("modbus:tcp://1.1.2.1")) {
        // Create a new write request:
        // - Give the single item requested the alias name "value"
        var builder = plcConnection.writeRequestBuilder();
        builder.addItem("value-" + 1, "maskwrite:1[1]/2/3", 2);
        var writeRequest = builder.build();
        LOGGER.info("Synchronous request ...");
        var syncResponse = writeRequest.execute().get();
    } catch (Exception e) {
        e.printStackTrace();
    }
}
I have used PLC4X for writing with the modbus driver successfully. Here is some sample code I am using:
public static void writePlc4x(ProtocolConnection connection, String registerName, byte[] writeRegister, int offset)
        throws InterruptedException {
    // modbus write works ok writing one record per request/item
    int size = 1;
    PlcWriteRequest.Builder writeBuilder = connection.writeRequestBuilder();
    if (writeRegister.length == 2) {
        writeBuilder.addItem(registerName, "register:" + offset + "[" + size + "]", writeRegister);
    }
    ...
    PlcWriteRequest request = writeBuilder.build();
    request.execute().whenComplete((writeResponse, error) -> {
        assertNotNull(writeResponse);
    });
    // sleepWait4Write is a pacing delay (seconds per register) defined elsewhere in the test class
    Thread.sleep((long) (sleepWait4Write * writeRegister.length * 1000));
}
In the case of modbus writing there is an issue regarding the return of the writer Future, but the write is done. In the modbus use case I don't need any of the mask stuff.
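Although I don't use masks myself, my understanding is that the maskwrite field maps to the Modbus Mask Write Register function (0x16), which combines an AND mask and an OR mask with the register's current contents. A minimal plain-Java sketch of that combination rule (the values are taken from the Modbus specification's own example):

public class MaskWriteDemo {
    public static void main(String[] args) {
        int current = 0x12; // current contents of the holding register
        int andMask = 0xF2; // bits to preserve from the current value
        int orMask  = 0x25; // bits to force on where the AND mask is 0

        // Per the Modbus spec: result = (current AND andMask) OR (orMask AND NOT andMask)
        int result = (current & andMask) | (orMask & ~andMask);
        System.out.printf("result = 0x%02X%n", result); // prints 0x17
    }
}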
I am trying to write to an S3 sink.
private static StreamingFileSink<String> createS3SinkFromStaticConfig(
        final Map<String, Properties> applicationProperties
) {
    Properties sinkProperties = applicationProperties.get(SINK_PROPERTIES);
    String s3SinkPath = sinkProperties.getProperty(SINK_S3_PATH_KEY);
    return StreamingFileSink
            .forRowFormat(
                    new Path(s3SinkPath),
                    new SimpleStringEncoder<String>(StandardCharsets.UTF_8.toString())
            )
            .build();
}
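For reference, my understanding is that with forRowFormat the in-progress part files are only finalized when a checkpoint completes, and that a rolling policy can be set explicitly on the builder. A sketch of that, assuming the Flink 1.10/1.11-era DefaultRollingPolicy builder API (the interval and size thresholds are placeholders, not my real configuration):

private static StreamingFileSink<String> createS3SinkWithRollingPolicy(String s3SinkPath) {
    return StreamingFileSink
            .forRowFormat(new Path(s3SinkPath), new SimpleStringEncoder<String>("UTF-8"))
            .withRollingPolicy(DefaultRollingPolicy.builder()
                    .withRolloverInterval(TimeUnit.MINUTES.toMillis(15))
                    .withInactivityInterval(TimeUnit.MINUTES.toMillis(5))
                    .withMaxPartSize(128 * 1024 * 1024)
                    .build())
            .build();
}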
The following code works and I can see the results in S3
input.map(value -> { // Parse the JSON
    JsonNode jsonNode = jsonParser.readValue(value, JsonNode.class);
    return new Tuple2<>(jsonNode.get("ticker").asText(), jsonNode.get("price").asDouble());
}).returns(Types.TUPLE(Types.STRING, Types.DOUBLE))
        .keyBy(0) // Logically partition the stream per stock symbol
        .timeWindow(Time.seconds(10), Time.seconds(5)) // Sliding window definition
        .min(1) // Calculate minimum price per stock over the window
        .setParallelism(3) // Set parallelism for the min operator
        .map(value -> value.f0 + ": ----- " + value.f1.toString() + "\n")
        .addSink(createS3SinkFromStaticConfig(applicationProperties));
But the following doesn't write anything to S3.
KeyedStream<EnrichedMetric, EnrichedMetricKey> input = env.addSource(new EnrichedMetricSource())
        .assignTimestampsAndWatermarks(
                WatermarkStrategy.<EnrichedMetric>forMonotonousTimestamps()
                        .withTimestampAssigner((event, l) -> event.getEventTime())
        ).keyBy(new EnrichedMetricKeySelector());

DataStream<String> statsStream = input
        .window(TumblingEventTimeWindows.of(Time.seconds(5)))
        .process(new PValueStatisticsWindowFunction());

statsStream.addSink(createS3SinkFromStaticConfig(applicationProperties));
PValueStatisticsWindowFunction is a ProcessWindowFunction as below.
@Override
public void process(EnrichedMetricKey enrichedMetricKey,
                    Context context,
                    Iterable<EnrichedMetric> in,
                    Collector<String> out) throws Exception {
    int count = 0;
    for (EnrichedMetric m : in) {
        count++;
    }
    out.collect("Count: " + count);
}
When I run the Flink app locally, statsStream.print() prints the results to log/flink-*-taskexecutor-*.out.
In the cluster, I can see that checkpointing is enabled and the checkpoint history in the Flink dashboard. I also made sure the S3 path is in the format s3a://<bucket>.
Not sure what I am missing here.
I'm currently developing a project that reads data from 19 Siemens S7-1500 PLCs and 1 Modicon. I have used the scraper tool, following this tutorial:
PLC4x scraper tutorial
but when the scraper has been working for a short time I get the following exception:
I have changed the scheduled time between 1 and 100, and I always get the same exception when the scraper reaches the same number of received messages.
I have tested whether using PlcDriverManager instead of PooledPlcDriverManager could be a solution, but the same problem persists.
In my pom.xml I use the following dependency:
<dependency>
    <groupId>org.apache.plc4x</groupId>
    <artifactId>plc4j-scraper</artifactId>
    <version>0.7.0</version>
</dependency>
I have tried changing the version to an older one like 0.6.0 or 0.5.0, but the problem persists.
If I use the Modicon (Modbus TCP) I also get this exception after a short time.
Does anyone know why this error is happening? Thanks in advance.
Edit: With scraper version 0.8.0-SNAPSHOT I still have this problem.
Edit2: This is my code. I think the problem may be that my scraper is opening a lot of connections and when it reaches 65526 messages it fails. But since all the processing happens inside the lambda function and I'm using a PooledPlcDriverManager, I think the scraper is using only one connection, so I don't know where the mistake is.
try {
    // Create a new PooledPlcDriverManager
    PlcDriverManager S7_plcDriverManager = new PooledPlcDriverManager();
    // Trigger Collector
    TriggerCollector S7_triggerCollector = new TriggerCollectorImpl(S7_plcDriverManager);
    // Messages counter
    AtomicInteger messagesCounter = new AtomicInteger();
    // Configure the scraper, by binding a Scraper Configuration, a ResultHandler and a TriggerCollector together
    TriggeredScraperImpl S7_scraper = new TriggeredScraperImpl(S7_scraperConfig, (jobName, sourceName, results) -> {
        LinkedList<Object> S7_results = new LinkedList<>();
        messagesCounter.getAndIncrement();
        S7_results.add(jobName);
        S7_results.add(sourceName);
        S7_results.add(results);
        logger.info("Array: " + String.valueOf(S7_results));
        logger.info("MESSAGE number: " + messagesCounter);
        // Producer topics routing
        String topic = "s7" + S7_results.get(1).toString().substring(S7_results.get(1).toString().indexOf("S7_SourcePLC") + 9, S7_results.get(1).toString().length());
        String key = parseKey_S7("s7");
        String value = parseValue_S7(S7_results.getLast().toString(), S7_results.get(1).toString());
        logger.info("------- PARSED VALUE -------------------------------- " + value);
        // Create my own Kafka Producer record
        ProducerRecord<String, String> record = new ProducerRecord<String, String>(topic, key, value);
        // Send Data to Kafka - asynchronous
        producer.send(record, new Callback() {
            public void onCompletion(RecordMetadata recordMetadata, Exception e) {
                // executes every time a record is successfully sent or an exception is thrown
                if (e == null) {
                    // the record was successfully sent
                    logger.info("Received new metadata. \n" +
                            "Topic:" + recordMetadata.topic() + "\n" +
                            "Partition: " + recordMetadata.partition() + "\n" +
                            "Offset: " + recordMetadata.offset() + "\n" +
                            "Timestamp: " + recordMetadata.timestamp());
                } else {
                    logger.error("Error while producing", e);
                }
            }
        });
    }, S7_triggerCollector);

    S7_scraper.start();
    S7_triggerCollector.start();
} catch (ScraperException e) {
    logger.error("Error starting the scraper (S7_scrapper)", e);
}
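For reference, my understanding of how the pooled manager is meant to be used: each try-with-resources block borrows a connection from the pool, and close() hands it back instead of tearing the session down. A minimal sketch assuming the 0.7.x API (the URL and the field address are placeholders, not my real configuration):

PlcDriverManager manager = new PooledPlcDriverManager();
try (PlcConnection connection = manager.getConnection("s7://192.168.0.1/0/1")) {
    PlcReadRequest request = connection.readRequestBuilder()
            .addItem("value-1", "%DB1.DBW0:INT")
            .build();
    PlcReadResponse response = request.execute().get();
    // close() at the end of the block returns the connection to the pool
} catch (Exception e) {
    e.printStackTrace();
}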
So in the end it was indeed the PLC that was simply hanging up the connection randomly. However, the NiFi integration should have handled this situation more gracefully. I implemented a fix for this particular error ... could you please give version 0.8.0-SNAPSHOT a try (or use 0.8.0 if we happen to have released it already)?
I am trying to understand the address system in the PLC4X Java implementation. Below is an example of the code for reading from the PLCs:
@Test
void testReadingFromPlc() {
    // Establish a connection to the PLC using the url provided as first argument
    try (PlcConnection plcConnection = new PlcDriverManager().getConnection("modbus:tcp://1.1.2.1")) {
        // Create a new read request:
        // - Give the single item requested the alias name "value"
        var builder = plcConnection.readRequestBuilder();
        builder.addItem("value-" + 1, "register:1[9]");
        builder.addItem("value-" + 2, "coil:1000[8]");
        var readRequest = builder.build();
        LOGGER.info("Synchronous request ...");
        var syncResponse = readRequest.execute().get();
        // Simply iterating over the field names returned in the response.
        var bytes = syncResponse.getAllByteArrays("value-1");
        bytes.forEach(item -> System.out.println(TopicsMapping.byteArray2IntegerArray(item)[0]));
        var booleans = syncResponse.getAllBooleans("value-2");
        booleans.forEach(System.out::println);
    } catch (Exception e) {
        e.printStackTrace();
    }
}
Our PLCs manage 16 registers, but the address regex doesn't allow a quantity greater than 9. Is it possible to change this?
Moreover, if I try to add another field with the same purpose, then no reading happens:
var builder = plcConnection.readRequestBuilder();
builder.addItem( "value-" + 0, "register:26[8]" );
builder.addItem( "value-" + 1, "register:34[8]" );
builder.addItem( "value-" + 2, "coil:1000[8]" );
var readRequest = builder.build();
Any help much appreciated. Could you also show me where I can find more information on this framework?
I am reading and writing with the modbus driver in PLC4X successfully. I have attached some writing code to your other question at: Plc4x addressing system
About reading, here is some code:
public static PlcReadResponse readModbusTestData(ProtocolClient client,
                                                 String registerName,
                                                 int offset,
                                                 int size,
                                                 String registerType)
        throws ExecutionException, InterruptedException, TimeoutException {
    PlcReadRequest readRequest = client.getConnection().readRequestBuilder()
            .addItem(registerName, registerType + ":" + offset + "[" + size + "]").build();
    return readRequest.execute().get(2, TimeUnit.SECONDS);
}
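A hypothetical call to this helper (ProtocolClient is my own wrapper; the item name and offsets below are just for illustration) might look like:

// Read 8 holding registers starting at offset 26 and dump the raw bytes
PlcReadResponse response = readModbusTestData(client, "value-1", 26, 8, "register");
response.getAllByteArrays("value-1").forEach(bytes ->
        System.out.println(java.util.Arrays.toString(bytes)));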
I haven't yet tested multiple reads by adding more items to the PlcReadRequest, but it should work; writing several items does work.
In any case, in order to understand how PLC4X works for modbus or opc-ua I have needed to dive into the source code. It works, but at its current state you need to read the source for the details.
I am currently using HttpDeclarePush to exploit the Server Push feature in HTTP/2.
I am able to successfully create all the parameters that this function accepts, but when HttpDeclarePush executes it returns a value of 1229 (ERROR_CONNECTION_INVALID), see https://learn.microsoft.com/en-us/windows/desktop/debug/system-error-codes--1000-1299-.
On further investigation I found that the HttpHeaderConnection entry in _HTTP_HEADER_ID (https://learn.microsoft.com/en-us/windows/desktop/api/http/ne-http-_http_header_id) is actually passed into the function as 'close'. That implies that on every request/response the server closes the connection, which is also happening in my case; I checked it in the log.
Here is the code.
class http2_native_module : public CHttpModule
{
public:
    REQUEST_NOTIFICATION_STATUS OnBeginRequest(IN IHttpContext * p_http_context, IN IHttpEventProvider * p_provider)
    {
        HTTP_REQUEST_ID request_id;
        const HTTPAPI_VERSION version = HTTPAPI_VERSION_2;
        auto pHttpRequest = p_http_context->GetRequest();
        auto phttpRequestRaw = pHttpRequest->GetRawHttpRequest();
        HANDLE p_req_queue_handle = nullptr;
        auto isHttp2 = phttpRequestRaw->Flags;
        try {
            const auto request_queue_handle = HttpCreateRequestQueue(version, nullptr, nullptr, NULL, &p_req_queue_handle);
            const auto verb = phttpRequestRaw->Verb;
            const auto http_path = L"/polyfills.0d74a55d0dbab6b8c32c.js"; // item that I want to PUSH to the client
            const auto query = nullptr;
            request_id = phttpRequestRaw->RequestId;
            auto headers = phttpRequestRaw->Headers;
            auto connId = phttpRequestRaw->ConnectionId;
            WriteEventViewerLog(L"OnBeginRequest - Entering HTTPDECLAREPUSH");
            // Clear the Connection header (HttpHeaderConnection == 1 in _HTTP_HEADER_ID)
            headers.KnownHeaders[HttpHeaderConnection].pRawValue = NULL;
            headers.KnownHeaders[HttpHeaderConnection].RawValueLength = 0;
            const auto is_success = HttpDeclarePush(p_req_queue_handle, request_id, verb, http_path, query, &headers);
            sprintf_s(szBuffer, "%lu", is_success);
            Log("is_success value", szBuffer); // ERROR CODE 1229 here
            HttpCloseRequestQueue(p_req_queue_handle);
        }
        catch (std::bad_alloc & e)
        {
            auto something = e;
        }
        return RQ_NOTIFICATION_CONTINUE;
    }
};
I even tried to update the Connection header value as below, but it still gives me 1229.
headers.KnownHeaders[HttpHeaderConnection].pRawValue = NULL;
headers.KnownHeaders[HttpHeaderConnection].RawValueLength = 0;
I understand from https://http2.github.io/http2-spec/ that HTTP/2 actually ignores the content of connection-specific HTTP headers and manages the connection itself with some other mechanism as part of its FRAME layer.
This brings us to the next question: how can we keep the connection open? Is it something related to the FRAME (similar to HEADER) that HTTP/2 uses, and if so, how does C++, or rather Microsoft, let us work with and exploit the FRAME in HTTP/2?
We are trying to build a JMeter test case which does the following:
login to a system
obtain some information and check whether it is correct.
Where we are facing issues is the captcha shown while logging into the system. What we had planned was to download the captcha image and display it, then wait for the user to type in the value. Once done, everything goes as usual.
We couldn't find any plugin that can do this. Other than writing our own plugin, is there any option here?
I was able to solve it myself. The solution is as follows:
Create a JSR223 PostProcessor (using Groovy)
a practical CAPTCHA example with JSESSIONID handling and a proxy setting
image.flush() is used to prevent a stale CAPTCHA image in the dialog box
JSR223 Parameters field for the proxy connection setting:
Parameters: proxy 10.0.0.1 8080
In it, the following code displays the captcha and waits for user input:
import java.awt.Image;
import java.awt.Toolkit;
import javax.swing.Icon;
import javax.swing.JOptionPane;
import org.apache.jmeter.threads.JMeterContextService;
import org.apache.jmeter.threads.JMeterContext;
import org.apache.jmeter.protocol.http.control.CookieManager;
import org.apache.jmeter.protocol.http.control.Cookie;

URL urlTemp = new URL("https://your.domainname.com/endpoint/CAPTCHACode");
HttpURLConnection myGetContent = null;
if (args[0] == "proxy") {
    Proxy proxy = new Proxy(Proxy.Type.HTTP, new InetSocketAddress(args[1], Integer.parseInt(args[2])));
    myGetContent = (HttpURLConnection) urlTemp.openConnection(proxy);
} else {
    myGetContent = (HttpURLConnection) urlTemp.openConnection();
}
// false for http GET
myGetContent.setDoOutput(false);
myGetContent.connect();
int status = myGetContent.getResponseCode();
log.info("HTTP Status Code: " + Integer.toString(status));
if (status == HttpURLConnection.HTTP_OK) {
    // We have 2 Set-Cookie headers in the response message but 1 Set-Cookie entry in the header map
    String[] parts2 = null;
    for (Map.Entry<String, List<String>> entries : myGetContent.getHeaderFields().entrySet()) {
        if (entries.getKey() == "Set-Cookie") {
            for (String value : entries.getValue()) {
                if (value.contains("JSESSIONID")) {
                    String[] parts = value.split(";", 2);
                    log.info("Response header: " + entries.getKey() + " - " + parts[0]);
                    JMeterContext context = JMeterContextService.getContext();
                    CookieManager manager = context.getCurrentSampler().getCookieManager();
                    parts2 = parts[0].split("=", 2);
                    Cookie cookie = new Cookie("JSESSIONID", parts2[1], "your.domainname.com", "/endpoint", true, 0, true, true, 0);
                    manager.add(cookie);
                    log.info(cookie.toString());
                    log.info("CookieCount " + manager.getCookieCount().toString());
                }
            }
        }
    } // end of outer for loop
    if (parts2 == null) {
        throw new Exception("The response header does not contain Set-Cookie: JSESSIONID=.");
    }
} else {
    throw new Exception("The HTTP status code was ${status}, not the expected 200 OK.");
}
// Download the CAPTCHA image to a local file, replacing any stale copy
BufferedInputStream bins = new BufferedInputStream(myGetContent.getInputStream());
String destFile = "number.png";
File f = new File(destFile);
if (f.exists()) {
    boolean fileDeleted = f.delete();
    log.info("delete file ... ");
    log.info(String.valueOf(fileDeleted));
}
FileOutputStream fout = new FileOutputStream(destFile);
int m = 0;
byte[] bytesIn = new byte[1024];
while ((m = bins.read(bytesIn)) != -1) {
    fout.write(bytesIn, 0, m);
}
fout.close();
bins.close();
log.info("File " + destFile + " downloaded successfully");
// Show the CAPTCHA in a dialog and wait for the user to type the value
Image image = Toolkit.getDefaultToolkit().getImage(destFile);
image.flush(); // release the prior cache of the CAPTCHA image
Icon icon = new javax.swing.ImageIcon(image);
JOptionPane pane = new JOptionPane("Enter Captcha", 0, 0, null);
String captcha = pane.showInputDialog(null, "Captcha", "Captcha", 0, icon, null, null);
captcha = captcha.trim();
captcha = captcha.replaceAll("\r\n", "");
log.info(captcha);
vars.put("captcha", captcha);
myGetContent.disconnect();
Via the vars.put method we can use the captcha variable any way we want. Thank you to everyone who tried to help.
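The stored value can then be referenced from any later sampler as ${captcha} (for example as a login form field), or read back in another JSR223 element; a trivial, hypothetical illustration:

// Hypothetical follow-up JSR223 element: read the value stored by the PostProcessor above
String captcha = vars.get("captcha");
log.info("Submitting captcha value: " + captcha);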
Since CAPTCHA is used to detect non-humans, JMeter will always fail it.
You have to make a workaround in your software: either disable requesting the captcha, or print the correct captcha somewhere on the page, of course only for the JMeter tests.
A dirty workaround? Print the captcha value in the image's alt attribute for the tests. Then you can retrieve the value and go on.