Nextflow tutorial getting error "no such variable"

I'm trying to learn Nextflow, but it's not going very well. I started with the tutorial on this site: https://www.nextflow.io/docs/latest/getstarted.html (I installed Nextflow myself).
I copied this script:
#!/usr/bin/env nextflow

params.str = 'Hello world!'

process splitLetters {
    output:
    file 'chunk_*' into letters

    """
    printf '${params.str}' | split -b 6 - chunk_
    """
}

process convertToUpper {
    input:
    file x from letters.flatten()

    output:
    stdout result

    """
    cat $x | tr '[a-z]' '[A-Z]'
    """
}

result.view { it.trim() }
But when I run it (nextflow run tutorial.nf), I get this in the terminal:
N E X T F L O W ~ version 22.03.1-edge
Launching `tutorial.nf` [intergalactic_waddington] DSL2 - revision: be42f295f4
No such variable: result
-- Check script 'tutorial.nf' at line: 29 or see '.nextflow.log' file for more details
And in the log file I have this:
avr.-20 14:14:12.319 [main] DEBUG nextflow.cli.Launcher - $> nextflow run tutorial.nf
avr.-20 14:14:12.375 [main] INFO nextflow.cli.CmdRun - N E X T F L O W ~ version 22.03.1-edge
avr.-20 14:14:12.466 [main] INFO nextflow.cli.CmdRun - Launching `tutorial.nf` [intergalactic_waddington] DSL2 - revision: be42f295f4
avr.-20 14:14:12.481 [main] DEBUG nextflow.plugin.PluginsFacade - Setting up plugin manager > mode=prod; plugins-dir=/home/user/.nextflow/plugins; core-plugins: nf-amazon#1.6.0,nf-azure#0.13.0,nf-console#1.0.3,nf-ga4gh#1.0.3,nf-google#1.1.4,nf-sqldb#0.3.0,nf-tower#1.4.0
avr.-20 14:14:12.483 [main] DEBUG nextflow.plugin.PluginsFacade - Plugins default=[]
avr.-20 14:14:12.494 [main] INFO org.pf4j.DefaultPluginStatusProvider - Enabled plugins: []
avr.-20 14:14:12.495 [main] INFO org.pf4j.DefaultPluginStatusProvider - Disabled plugins: []
avr.-20 14:14:12.501 [main] INFO org.pf4j.DefaultPluginManager - PF4J version 3.4.1 in 'deployment' mode
avr.-20 14:14:12.515 [main] INFO org.pf4j.AbstractPluginManager - No plugins
avr.-20 14:14:12.571 [main] DEBUG nextflow.Session - Session uuid: 67344021-bff5-4131-9c07-e101756fb5ea
avr.-20 14:14:12.571 [main] DEBUG nextflow.Session - Run name: intergalactic_waddington
avr.-20 14:14:12.573 [main] DEBUG nextflow.Session - Executor pool size: 8
avr.-20 14:14:12.604 [main] DEBUG nextflow.cli.CmdRun -
Version: 22.03.1-edge build 5695
avr.-20 14:14:12.629 [main] DEBUG nextflow.Session - Work-dir: /home/user/Documents/formations/nextflow/testScript/work [ext2/ext3]
avr.-20 14:14:12.629 [main] DEBUG nextflow.Session - Script base path does not exist or is not a directory: /home/user/Documents/formations/nextflow/testScript/bin
avr.-20 14:14:12.637 [main] DEBUG nextflow.executor.ExecutorFactory - Extension executors providers=[]
avr.-20 14:14:12.648 [main] DEBUG nextflow.Session - Observer factory: DefaultObserverFactory
avr.-20 14:14:12.669 [main] DEBUG nextflow.cache.CacheFactory - Using Nextflow cache factory: nextflow.cache.DefaultCacheFactory
avr.-20 14:14:12.678 [main] DEBUG nextflow.util.CustomThreadPool - Creating default thread pool > poolSize: 9; maxThreads: 1000
avr.-20 14:14:12.741 [main] DEBUG nextflow.Session - Session start invoked
avr.-20 14:14:13.423 [main] DEBUG nextflow.script.ScriptRunner - > Launching execution
avr.-20 14:14:13.446 [main] DEBUG nextflow.Session - Session aborted -- Cause: No such property: result for class: Script_6634cd79
avr.-20 14:14:13.463 [main] ERROR nextflow.cli.Launcher - #unknown
groovy.lang.MissingPropertyException: No such property: result for class: Script_6634cd79
at org.codehaus.groovy.runtime.ScriptBytecodeAdapter.unwrap(ScriptBytecodeAdapter.java:65)
at org.codehaus.groovy.runtime.callsite.PogoGetPropertySite.getProperty(PogoGetPropertySite.java:51)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.callGroovyObjectGetProperty(AbstractCallSite.java:341)
at Script_6634cd79.runScript(Script_6634cd79:29)
at nextflow.script.BaseScript.runDsl2(BaseScript.groovy:170)
at nextflow.script.BaseScript.run(BaseScript.groovy:203)
at nextflow.script.ScriptParser.runScript(ScriptParser.groovy:220)
at nextflow.script.ScriptRunner.run(ScriptRunner.groovy:212)
at nextflow.script.ScriptRunner.execute(ScriptRunner.groovy:120)
at nextflow.cli.CmdRun.run(CmdRun.groovy:334)
at nextflow.cli.Launcher.run(Launcher.groovy:480)
at nextflow.cli.Launcher.main(Launcher.groovy:639)
What should I do?
Thanks a lot for your help.

Nextflow includes a new language extension called DSL2, which became the default syntax in version 22.03.0-edge. However, you can override this behavior either by adding nextflow.enable.dsl=1 to the top of your script, or by setting the -dsl1 option when you run your script:
nextflow run tutorial.nf -dsl1
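Equivalently, declare it at the top of the script itself:
#!/usr/bin/env nextflow
nextflow.enable.dsl=1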
Alternatively, roll back to the latest release (as opposed to an 'edge' pre-release). It's possible to specify the Nextflow version using the NXF_VER environment variable:
NXF_VER=21.10.6 nextflow run tutorial.nf
I find DSL2 drastically simplifies the writing of complex workflows and would highly recommend getting started with it. Unfortunately, the documentation is lagging a bit, but porting scripts over is relatively straightforward once you get the hang of things:
params.str = 'Hello world!'

process splitLetters {
    output:
    path 'chunk_*'

    """
    printf '${params.str}' | split -b 6 - chunk_
    """
}

process convertToUpper {
    input:
    path x

    output:
    stdout

    """
    cat $x | tr '[a-z]' '[A-Z]'
    """
}

workflow {
    splitLetters | flatten() | convertToUpper | view()
}
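For comparison, the piped workflow above can also be written with explicit process calls and the .out property; this is a sketch of the equivalent, more verbose form:
workflow {
    splitLetters()
    convertToUpper(splitLetters.out.flatten())
    convertToUpper.out.view()
}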
Results:
nextflow run tutorial.nf -dsl2
N E X T F L O W ~ version 21.10.6
Launching `tutorial.nf` [prickly_kilby] - revision: 0c6f835b9c
executor > local (3)
[b8/84a1de] process > splitLetters [100%] 1 of 1 ✔
[86/fd19ea] process > convertToUpper (2) [100%] 2 of 2 ✔
HELLO
WORLD!

Related

TestContainer Rabbitmq seems to release connection as soon as it is start

I am using testcontainers in a Spring Boot project (version: 1.17.2) and I am trying to spin up a RabbitMQ container. The RabbitMQ container seems to start up successfully, but it releases the connection as soon as it is up.
I can see some errors in the logs, but after that I can see that the test container is started. I am somewhat flummoxed as to why I am seeing this error, and whether the container has actually started or not.
Pasting an excerpt from the logs:
15:00:44.007 [main] DEBUG org.testcontainers.containers.output.WaitingConsumer - STDOUT: 2022-06-29 05:00:24.477316+00:00 [info] <0.703.0> * rabbitmq_management_agent
15:00:44.007 [main] DEBUG org.testcontainers.containers.output.WaitingConsumer - STDOUT: 2022-06-29 05:00:24.477316+00:00 [info] <0.703.0> * rabbitmq_web_dispatch
15:00:44.007 [main] DEBUG org.testcontainers.containers.output.WaitingConsumer - STDOUT: 2022-06-29 05:00:24.477316+00:00 [info] <0.703.0> * rabbitmq_management
15:00:44.007 [main] DEBUG org.testcontainers.containers.output.WaitingConsumer - STDOUT: 2022-06-29 05:00:24.477316+00:00 [info] <0.703.0> * rabbitmq_prometheus
15:00:44.007 [main] DEBUG org.testcontainers.containers.output.WaitingConsumer - STDOUT: 2022-06-29 05:00:24.477316+00:00 [info] <0.703.0> Server startup complete; 4 plugins started.
15:00:44.007 [main] DEBUG com.github.dockerjava.zerodep.shaded.org.apache.hc.client5.http.impl.classic.InternalHttpClient - ep-0000000C: cancel
15:00:44.007 [main] DEBUG com.github.dockerjava.zerodep.shaded.org.apache.hc.client5.http.impl.io.DefaultManagedHttpClientConnection - http-outgoing-1: close connection IMMEDIATE
15:00:44.008 [main] DEBUG com.github.dockerjava.zerodep.shaded.org.apache.hc.client5.http.impl.classic.InternalHttpClient - ep-0000000C: endpoint closed
15:00:44.008 [main] DEBUG com.github.dockerjava.zerodep.shaded.org.apache.hc.client5.http.impl.classic.InternalHttpClient - ep-0000000C: discarding endpoint
15:00:44.008 [main] DEBUG com.github.dockerjava.zerodep.shaded.org.apache.hc.client5.http.impl.io.PoolingHttpClientConnectionManager - ep-0000000C: releasing endpoint
15:00:44.008 [main] DEBUG com.github.dockerjava.zerodep.shaded.org.apache.hc.client5.http.impl.io.PoolingHttpClientConnectionManager - ep-0000000C: connection is not kept alive
15:00:44.008 [docker-java-stream--540868428] DEBUG com.github.dockerjava.zerodep.shaded.org.apache.hc.client5.http.wire - http-outgoing-1 << "end of stream"
15:00:44.008 [main] DEBUG com.github.dockerjava.zerodep.shaded.org.apache.hc.client5.http.impl.io.PoolingHttpClientConnectionManager - ep-0000000C: connection released [route: {}->npipe://localhost:2375][total available: 0; route allocated: 0 of 2147483647; total allocated: 0 of 2147483647]
15:00:44.008 [main] DEBUG com.github.dockerjava.zerodep.shaded.org.apache.hc.client5.http.wire - http-outgoing-1 << "[read] I/O error: java.nio.channels.ClosedChannelException"
15:00:44.008 [docker-java-stream--540868428] DEBUG com.github.dockerjava.zerodep.shaded.org.apache.hc.client5.http.wire - http-outgoing-1 << "[read] I/O error: java.nio.channels.ClosedChannelException"
15:00:44.008 [docker-java-stream--540868428] DEBUG com.github.dockerjava.zerodep.ApacheDockerHttpClientImpl$ApacheResponse - Failed to close the response
java.io.IOException: java.nio.channels.ClosedChannelException
at java.base/java.nio.channels.Channels$2.read(Channels.java:240)
at com.github.dockerjava.zerodep.shaded.org.apache.hc.client5.http.impl.io.LoggingInputStream.read(LoggingInputStream.java:81)
at com.github.dockerjava.zerodep.shaded.org.apache.hc.core5.http.impl.io.SessionInputBufferImpl.fillBuffer(SessionInputBufferImpl.java:149)
at com.github.dockerjava.zerodep.shaded.org.apache.hc.core5.http.impl.io.SessionInputBufferImpl.readLine(SessionInputBufferImpl.java:280)
at com.github.dockerjava.zerodep.shaded.org.apache.hc.core5.http.impl.io.ChunkedInputStream.getChunkSize(ChunkedInputStream.java:261)
at com.github.dockerjava.zerodep.shaded.org.apache.hc.core5.http.impl.io.ChunkedInputStream.nextChunk(ChunkedInputStream.java:222)
at com.github.dockerjava.zerodep.shaded.org.apache.hc.core5.http.impl.io.ChunkedInputStream.read(ChunkedInputStream.java:147)
at com.github.dockerjava.zerodep.shaded.org.apache.hc.core5.http.impl.io.ChunkedInputStream.close(ChunkedInputStream.java:314)
at com.github.dockerjava.zerodep.shaded.org.apache.hc.core5.io.Closer.close(Closer.java:48)
at com.github.dockerjava.zerodep.shaded.org.apache.hc.core5.http.impl.io.IncomingHttpEntity.close(IncomingHttpEntity.java:111)
at com.github.dockerjava.zerodep.shaded.org.apache.hc.core5.http.io.entity.HttpEntityWrapper.close(HttpEntityWrapper.java:120)
at com.github.dockerjava.zerodep.shaded.org.apache.hc.core5.io.Closer.close(Closer.java:48)
at com.github.dockerjava.zerodep.shaded.org.apache.hc.core5.http.message.BasicClassicHttpResponse.close(BasicClassicHttpResponse.java:93)
at com.github.dockerjava.zerodep.shaded.org.apache.hc.client5.http.impl.classic.CloseableHttpResponse.close(CloseableHttpResponse.java:200)
at com.github.dockerjava.zerodep.ApacheDockerHttpClientImpl$ApacheResponse.close(ApacheDockerHttpClientImpl.java:256)
at org.testcontainers.shaded.com.github.dockerjava.core.DefaultInvocationBuilder.lambda$executeAndStream$1(DefaultInvocationBuilder.java:277)
at java.base/java.lang.Thread.run(Thread.java:833)
Caused by: java.nio.channels.ClosedChannelException: null
....................................
15:00:44.009 [main] INFO 🐳 [rabbitmq:3.9.13-management-alpine] - Container rabbitmq:3.9.13-management-alpine started in PT7.8752359S
Java config:
public abstract class RabbitMqTestContainerConfiguration {

    private static final int RABBITMQ_DEFAULT_PORT = 5672;

    @Container
    public static RabbitMQContainer rabbitMQContainer = new RabbitMQContainer("rabbitmq:3.9.13-management-alpine")
            .withExposedPorts(RABBITMQ_DEFAULT_PORT)
            .withStartupTimeout(Duration.ofMinutes(3));

    public static class Initializer implements ApplicationContextInitializer<ConfigurableApplicationContext> {
        @Override
        public void initialize(ConfigurableApplicationContext configurableApplicationContext) {
            TestPropertySourceUtils.addInlinedPropertiesToEnvironment(configurableApplicationContext,
                    "spring.rabbitmq.host=" + rabbitMQContainer.getHost(),
                    "spring.rabbitmq.port=" + rabbitMQContainer.getMappedPort(RABBITMQ_DEFAULT_PORT),
                    "spring.rabbitmq.username=" + rabbitMQContainer.getAdminUsername(),
                    "spring.rabbitmq.password=" + rabbitMQContainer.getAdminPassword());
        }
    }
}
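For reference, a minimal sketch of how such an abstract configuration class is typically consumed by a test (the test class name and its body are assumptions, not from the question):
import org.junit.jupiter.api.Test;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.test.context.ContextConfiguration;
import org.testcontainers.junit.jupiter.Testcontainers;

// Hypothetical subclass: @Testcontainers manages the inherited static @Container field's
// lifecycle, and the Initializer pushes the mapped host/port into the Spring environment.
@Testcontainers
@SpringBootTest
@ContextConfiguration(initializers = RabbitMqTestContainerConfiguration.Initializer.class)
class SampleRabbitIT extends RabbitMqTestContainerConfiguration {

    @Test
    void contextLoads() {
        // by the time this runs, the spring.rabbitmq.* properties point at the running container
    }
}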

"Unsupported case for discriminated type" when reading tag from Allen Bradley's Micro820 PLC, using Apache's PLC4X

I’m using Apache’s PLC4X library to read some tags from an Allen-Bradley Micro820 PLC (2080-LC20-20QWB). So far, I am able to establish a connection with the device, but when I try to execute a read request I get the error shown in the stack trace below.
I’m running a Java Maven-based project with the following dependencies:
<dependency>
    <groupId>org.apache.plc4x</groupId>
    <artifactId>plc4j-api</artifactId>
    <version>0.9.1</version>
</dependency>
<!-- Ethernet/IP driver -->
<dependency>
    <groupId>org.apache.plc4x</groupId>
    <artifactId>plc4j-driver-eip</artifactId>
    <version>0.9.1</version>
</dependency>
This is the code I'm running on Ubuntu 18.04 with JDK 11:
package com.example.plctest;

import org.apache.plc4x.java.PlcDriverManager;
import org.apache.plc4x.java.api.PlcConnection;
import org.apache.plc4x.java.api.messages.PlcReadRequest;
import org.apache.plc4x.java.api.messages.PlcReadResponse;
import org.apache.plc4x.java.eip.readwrite.field.EipField;
import org.apache.plc4x.java.eip.readwrite.types.CIPDataTypeCode;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class ReadPlcDemo {

    private static final Logger LOGGER = LoggerFactory.getLogger(ReadPlcDemo.class);

    private static String connectionString = "eip://192.168.1.100?backplane=0&slot=0";

    public static void main(String[] args) {
        // Establish a connection with the PLC
        try (PlcConnection connection = new PlcDriverManager().getConnection(connectionString)) {
            if (connection.getMetadata().canRead()) {
                LOGGER.info("PLC can read!");
            }
            // Create the read request
            EipField field = new EipField("Sensor1", CIPDataTypeCode.BOOL);
            // EipField field = new EipField("Sensor2", CIPDataTypeCode.SINT);
            // EipField field = new EipField("Sensor3", CIPDataTypeCode.SINT);
            PlcReadRequest.Builder builder = connection.readRequestBuilder();
            builder.addItem("value-" + field.getTag(), field);
            PlcReadRequest readRequest = builder.build();
            // Execute the request
            PlcReadResponse response;
            try {
                response = readRequest.execute().get();
            } catch (Exception e) {
                e.printStackTrace();
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
This is my stack trace; I set the log level to 'trace' to get better insight into what's going on:
/home/ghinojosa/.jdks/corretto-11.0.14.1/bin/java -Dio.netty.tryReflectionSetAccessible=true --add-opens java.base/jdk.internal.misc=ALL-UNNAMED -javaagent:/home/ghinojosa/.local/share/JetBrains/Toolbox/apps/IDEA-C/ch-0/213.6777.52/lib/idea_rt.jar=44017:/home/ghinojosa/.local/share/JetBrains/Toolbox/apps/IDEA-C/ch-0/213.6777.52/bin -Dfile.encoding=UTF-8 -classpath /home/ghinojosa/IdeaProjects/plc-test/target/classes:/home/ghinojosa/Downloads/eeip-library.jar:/home/ghinojosa/.m2/repository/org/apache/plc4x/plc4j-api/0.9.1/plc4j-api-0.9.1.jar:/home/ghinojosa/.m2/repository/org/apache/commons/commons-lang3/3.12.0/commons-lang3-3.12.0.jar:/home/ghinojosa/.m2/repository/com/fasterxml/jackson/core/jackson-annotations/2.12.5/jackson-annotations-2.12.5.jar:/home/ghinojosa/.m2/repository/org/apache/plc4x/plc4j-driver-eip/0.9.1/plc4j-driver-eip-0.9.1.jar:/home/ghinojosa/.m2/repository/org/apache/plc4x/plc4j-spi/0.9.1/plc4j-spi-0.9.1.jar:/home/ghinojosa/.m2/repository/io/netty/netty-codec/4.1.67.Final/netty-codec-4.1.67.Final.jar:/home/ghinojosa/.m2/repository/io/netty/netty-common/4.1.67.Final/netty-common-4.1.67.Final.jar:/home/ghinojosa/.m2/repository/io/netty/netty-transport/4.1.67.Final/netty-transport-4.1.67.Final.jar:/home/ghinojosa/.m2/repository/io/netty/netty-resolver/4.1.67.Final/netty-resolver-4.1.67.Final.jar:/home/ghinojosa/.m2/repository/commons-beanutils/commons-beanutils/1.9.4/commons-beanutils-1.9.4.jar:/home/ghinojosa/.m2/repository/commons-logging/commons-logging/1.2/commons-logging-1.2.jar:/home/ghinojosa/.m2/repository/commons-collections/commons-collections/3.2.2/commons-collections-3.2.2.jar:/home/ghinojosa/.m2/repository/com/fasterxml/jackson/core/jackson-core/2.12.5/jackson-core-2.12.5.jar:/home/ghinojosa/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.12.5/jackson-databind-2.12.5.jar:/home/ghinojosa/.m2/repository/io/vavr/vavr/0.10.4/vavr-0.10.4.jar:/home/ghinojosa/.m2/repository/io/vavr/vavr-match/0.10.4/vavr-match-0.10.4.jar:/home/ghinojosa/.m2/repository/com/github/jinahya/bit-io/1.4.3/bit-io-1.4.3.jar:/home/ghinojosa/.m2/repository/commons-codec/commons-codec/1.15/commons-codec-1.15.jar:/home/ghinojosa/.m2/repository/org/apache/plc4x/plc4j-transport-tcp/0.9.1/plc4j-transport-tcp-0.9.1.jar:/home/ghinojosa/.m2/repository/io/netty/netty-buffer/4.1.67.Final/netty-buffer-4.1.67.Final.jar:/home/ghinojosa/.m2/repository/org/slf4j/slf4j-api/1.8.0-beta4/slf4j-api-1.8.0-beta4.jar:/home/ghinojosa/.m2/repository/org/slf4j/slf4j-simple/1.8.0-beta4/slf4j-simple-1.8.0-beta4.jar com.example.plctest.ReadPlcDemo
[main] INFO org.apache.plc4x.java.PlcDriverManager - Instantiating new PLC Driver Manager with class loader jdk.internal.loader.ClassLoaders$AppClassLoader#5c8da962
[main] INFO org.apache.plc4x.java.PlcDriverManager - Registering available drivers...
[main] INFO org.apache.plc4x.java.PlcDriverManager - Registering driver for Protocol eip (EthernetIP)
[main] DEBUG io.netty.util.internal.logging.InternalLoggerFactory - Using SLF4J as the default logging framework
[main] DEBUG io.netty.util.ResourceLeakDetector - -Dio.netty.leakDetection.level: simple
[main] DEBUG io.netty.util.ResourceLeakDetector - -Dio.netty.leakDetection.targetRecords: 4
[main] DEBUG io.netty.util.ResourceLeakDetectorFactory - Loaded default ResourceLeakDetector: io.netty.util.ResourceLeakDetector#4de5031f
[main] DEBUG io.netty.util.internal.PlatformDependent0 - -Dio.netty.noUnsafe: false
[main] DEBUG io.netty.util.internal.PlatformDependent0 - Java version: 11
[main] DEBUG io.netty.util.internal.PlatformDependent0 - sun.misc.Unsafe.theUnsafe: available
[main] DEBUG io.netty.util.internal.PlatformDependent0 - sun.misc.Unsafe.copyMemory: available
[main] DEBUG io.netty.util.internal.PlatformDependent0 - java.nio.Buffer.address: available
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by io.netty.util.internal.ReflectionUtil (file:/home/ghinojosa/.m2/repository/io/netty/netty-common/4.1.67.Final/netty-common-4.1.67.Final.jar) to constructor java.nio.DirectByteBuffer(long,int)
WARNING: Please consider reporting this to the maintainers of io.netty.util.internal.ReflectionUtil
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
[main] DEBUG io.netty.util.internal.PlatformDependent0 - direct buffer constructor: available
[main] DEBUG io.netty.util.internal.PlatformDependent0 - java.nio.Bits.unaligned: available, true
[main] DEBUG io.netty.util.internal.PlatformDependent0 - jdk.internal.misc.Unsafe.allocateUninitializedArray(int): available
[main] DEBUG io.netty.util.internal.PlatformDependent0 - java.nio.DirectByteBuffer.<init>(long, int): available
[main] DEBUG io.netty.util.internal.PlatformDependent - sun.misc.Unsafe: available
[main] DEBUG io.netty.util.internal.PlatformDependent - maxDirectMemory: 8390705152 bytes (maybe)
[main] DEBUG io.netty.util.internal.PlatformDependent - -Dio.netty.tmpdir: /tmp (java.io.tmpdir)
[main] DEBUG io.netty.util.internal.PlatformDependent - -Dio.netty.bitMode: 64 (sun.arch.data.model)
[main] DEBUG io.netty.util.internal.PlatformDependent - -Dio.netty.maxDirectMemory: 8390705152 bytes
[main] DEBUG io.netty.util.internal.PlatformDependent - -Dio.netty.uninitializedArrayAllocationThreshold: 1024
[main] DEBUG io.netty.util.internal.CleanerJava9 - java.nio.ByteBuffer.cleaner(): available
[main] DEBUG io.netty.util.internal.PlatformDependent - -Dio.netty.noPreferDirect: false
[main] DEBUG io.netty.util.internal.PlatformDependent - org.jctools-core.MpscChunkedArrayQueue: available
[main] DEBUG io.netty.channel.MultithreadEventLoopGroup - -Dio.netty.eventLoopThreads: 8
[main] DEBUG io.netty.util.internal.InternalThreadLocalMap - -Dio.netty.threadLocalMap.stringBuilder.initialSize: 1024
[main] DEBUG io.netty.util.internal.InternalThreadLocalMap - -Dio.netty.threadLocalMap.stringBuilder.maxSize: 4096
[main] DEBUG io.netty.channel.nio.NioEventLoop - -Dio.netty.noKeySetOptimization: false
[main] DEBUG io.netty.channel.nio.NioEventLoop - -Dio.netty.selectorAutoRebuildThreshold: 512
[main] TRACE io.netty.channel.nio.NioEventLoop - instrumented a special java.util.Set into: sun.nio.ch.EPollSelectorImpl#76505305
[main] TRACE io.netty.channel.nio.NioEventLoop - instrumented a special java.util.Set into: sun.nio.ch.EPollSelectorImpl#7b98f307
[main] TRACE io.netty.channel.nio.NioEventLoop - instrumented a special java.util.Set into: sun.nio.ch.EPollSelectorImpl#4802796d
[main] TRACE io.netty.channel.nio.NioEventLoop - instrumented a special java.util.Set into: sun.nio.ch.EPollSelectorImpl#34123d65
[main] TRACE io.netty.channel.nio.NioEventLoop - instrumented a special java.util.Set into: sun.nio.ch.EPollSelectorImpl#59474f18
[main] TRACE io.netty.channel.nio.NioEventLoop - instrumented a special java.util.Set into: sun.nio.ch.EPollSelectorImpl#65fb9ffc
[main] TRACE io.netty.channel.nio.NioEventLoop - instrumented a special java.util.Set into: sun.nio.ch.EPollSelectorImpl#3e694b3f
[main] TRACE io.netty.channel.nio.NioEventLoop - instrumented a special java.util.Set into: sun.nio.ch.EPollSelectorImpl#1bb5a082
[main] INFO org.apache.plc4x.java.transport.tcp.TcpChannelFactory - Configuring Bootstrap with org.apache.plc4x.java.eip.readwrite.configuration.EIPConfiguration#5aa9e4eb
[main] DEBUG io.netty.channel.DefaultChannelId - -Dio.netty.processId: 10908 (auto-detected)
[main] DEBUG io.netty.util.NetUtil - -Djava.net.preferIPv4Stack: false
[main] DEBUG io.netty.util.NetUtil - -Djava.net.preferIPv6Addresses: false
[main] DEBUG io.netty.util.NetUtilInitializations - Loopback interface: lo (lo, 0:0:0:0:0:0:0:1%lo)
[main] DEBUG io.netty.util.NetUtil - /proc/sys/net/core/somaxconn: 4096
[main] DEBUG io.netty.channel.DefaultChannelId - -Dio.netty.machineId: f4:06:69:ff:fe:d6:97:69 (auto-detected)
[main] DEBUG io.netty.buffer.PooledByteBufAllocator - -Dio.netty.allocator.numHeapArenas: 8
[main] DEBUG io.netty.buffer.PooledByteBufAllocator - -Dio.netty.allocator.numDirectArenas: 8
[main] DEBUG io.netty.buffer.PooledByteBufAllocator - -Dio.netty.allocator.pageSize: 8192
[main] DEBUG io.netty.buffer.PooledByteBufAllocator - -Dio.netty.allocator.maxOrder: 11
[main] DEBUG io.netty.buffer.PooledByteBufAllocator - -Dio.netty.allocator.chunkSize: 16777216
[main] DEBUG io.netty.buffer.PooledByteBufAllocator - -Dio.netty.allocator.smallCacheSize: 256
[main] DEBUG io.netty.buffer.PooledByteBufAllocator - -Dio.netty.allocator.normalCacheSize: 64
[main] DEBUG io.netty.buffer.PooledByteBufAllocator - -Dio.netty.allocator.maxCachedBufferCapacity: 32768
[main] DEBUG io.netty.buffer.PooledByteBufAllocator - -Dio.netty.allocator.cacheTrimInterval: 8192
[main] DEBUG io.netty.buffer.PooledByteBufAllocator - -Dio.netty.allocator.cacheTrimIntervalMillis: 0
[main] DEBUG io.netty.buffer.PooledByteBufAllocator - -Dio.netty.allocator.useCacheForAllThreads: true
[main] DEBUG io.netty.buffer.PooledByteBufAllocator - -Dio.netty.allocator.maxCachedByteBuffersPerChunk: 1023
[main] DEBUG io.netty.buffer.ByteBufUtil - -Dio.netty.allocator.type: pooled
[main] DEBUG io.netty.buffer.ByteBufUtil - -Dio.netty.threadLocalDirectBufferSize: 0
[main] DEBUG io.netty.buffer.ByteBufUtil - -Dio.netty.maxThreadLocalCharBufferSize: 16384
[main] TRACE org.apache.plc4x.java.spi.connection.DefaultNettyPlcConnection - Channel was created, firing ChannelCreated Event
[nioEventLoopGroup-2-1] DEBUG org.apache.plc4x.java.spi.Plc4xNettyWrapper - User Event triggered org.apache.plc4x.java.spi.events.ConnectEvent#645e8927
[nioEventLoopGroup-2-1] DEBUG org.apache.plc4x.java.eip.readwrite.protocol.EipProtocolLogic - Sending RegisterSession EIP Package
[nioEventLoopGroup-2-1] TRACE org.apache.plc4x.java.spi.Plc4xNettyWrapper - Adding Response Handler ...
[nioEventLoopGroup-2-1] TRACE org.apache.plc4x.java.spi.Plc4xNettyWrapper - Sending to wire EipConnectionRequest[sessionHandle=0,status=0,senderContext={0,0,0,0,0,0,0,0},options=0]
[nioEventLoopGroup-2-1] DEBUG org.apache.plc4x.java.spi.Plc4xNettyWrapper - Forwarding request to plc EipConnectionRequest[sessionHandle=0,status=0,senderContext={0,0,0,0,0,0,0,0},options=0]
[nioEventLoopGroup-2-1] DEBUG io.netty.util.Recycler - -Dio.netty.recycler.maxCapacityPerThread: 4096
[nioEventLoopGroup-2-1] DEBUG io.netty.util.Recycler - -Dio.netty.recycler.maxSharedCapacityFactor: 2
[nioEventLoopGroup-2-1] DEBUG io.netty.util.Recycler - -Dio.netty.recycler.linkCapacity: 16
[nioEventLoopGroup-2-1] DEBUG io.netty.util.Recycler - -Dio.netty.recycler.ratio: 8
[nioEventLoopGroup-2-1] DEBUG io.netty.util.Recycler - -Dio.netty.recycler.delayedQueue.ratio: 8
[nioEventLoopGroup-2-1] DEBUG io.netty.buffer.AbstractByteBuf - -Dio.netty.buffer.checkAccessible: true
[nioEventLoopGroup-2-1] DEBUG io.netty.buffer.AbstractByteBuf - -Dio.netty.buffer.checkBounds: true
[nioEventLoopGroup-2-1] DEBUG io.netty.util.ResourceLeakDetectorFactory - Loaded default ResourceLeakDetector: io.netty.util.ResourceLeakDetector#20edcbc7
[nioEventLoopGroup-2-1] DEBUG org.apache.plc4x.java.spi.GeneratedDriverByteToMessageCodec - Sending bytes to PLC for message EipConnectionRequest[sessionHandle=0,status=0,senderContext={0,0,0,0,0,0,0,0},options=0] as data 65000400000000000000000000000000000000000000000001000000
[nioEventLoopGroup-2-1] TRACE org.apache.plc4x.java.spi.GeneratedDriverByteToMessageCodec - Receiving bytes, trying to decode Message...
[nioEventLoopGroup-2-1] TRACE org.apache.plc4x.java.spi.Plc4xNettyWrapper - Decoding EipConnectionRequest[sessionHandle=3604940806,status=0,senderContext={0,0,0,0,0,0,0,0},options=0]
[nioEventLoopGroup-2-1] TRACE org.apache.plc4x.java.spi.Plc4xNettyWrapper - Checking handler HandlerRegistration#0 for Object of type EipConnectionRequest
[nioEventLoopGroup-2-1] TRACE org.apache.plc4x.java.spi.Plc4xNettyWrapper - Handler HandlerRegistration#0 has right expected type EipPacket, checking condition
[nioEventLoopGroup-2-1] TRACE org.apache.plc4x.java.spi.Plc4xNettyWrapper - Handler HandlerRegistration#0 accepts element EipConnectionRequest[sessionHandle=3604940806,status=0,senderContext={0,0,0,0,0,0,0,0},options=0], calling handle method
[nioEventLoopGroup-2-1] DEBUG org.apache.plc4x.java.eip.readwrite.protocol.EipProtocolLogic - Got assigned with Session 3604940806
[nioEventLoopGroup-2-1] TRACE org.apache.plc4x.java.spi.Plc4xNettyWrapper - Firing Connected!
[main] INFO com.example.plctest.ReadPlcDemo - PLC can read!
[main] TRACE org.apache.plc4x.java.spi.transaction.RequestTransactionManager - Submission of transaction 0
[pool-2-thread-1] TRACE org.apache.plc4x.java.spi.transaction.RequestTransactionManager - Start execution of transaction 0
[pool-2-thread-1] TRACE org.apache.plc4x.java.spi.Plc4xNettyWrapper - Adding Response Handler ...
[pool-2-thread-1] TRACE org.apache.plc4x.java.spi.transaction.RequestTransactionManager - Completed execution of transaction 0
[nioEventLoopGroup-2-1] DEBUG org.apache.plc4x.java.spi.Plc4xNettyWrapper - Forwarding request to plc CipRRData[sessionHandle=3604940806,status=0,senderContext={0,0,0,0,0,0,0,0},options=0,exchange=CipExchange[service=CipUnconnectedRequest[unconnectedService=CipReadRequest[RequestPathSize=5,tag={-111,7,83,101,110,115,111,114,49,0},elementNb=1],backPlane=0,slot=0]]]
[nioEventLoopGroup-2-1] DEBUG org.apache.plc4x.java.spi.GeneratedDriverByteToMessageCodec - Sending bytes to PLC for message CipRRData[sessionHandle=3604940806,status=0,senderContext={0,0,0,0,0,0,0,0},options=0,exchange=CipExchange[service=CipUnconnectedRequest[unconnectedService=CipReadRequest[RequestPathSize=5,tag={-111,7,83,101,110,115,111,114,49,0},elementNb=1],backPlane=0,slot=0]]] as data 6f002c000608dfd600000000000000000000000000000000000000000000020000000000b2001c00520220062401059d0e004c05910753656e736f723100010001000000
[nioEventLoopGroup-2-1] TRACE org.apache.plc4x.java.spi.GeneratedDriverByteToMessageCodec - Receiving bytes, trying to decode Message...
[nioEventLoopGroup-2-1] INFO org.apache.plc4x.java.eip.readwrite.io.CipRRDataIO - Expected constant value 0 but got 5 for reserved field.
[nioEventLoopGroup-2-1] WARN org.apache.plc4x.java.spi.GeneratedDriverByteToMessageCodec - Error decoding package with content [6f0016000608dfd600000000000000000000000000000000000000000500020000000000b2000600d20001011103]: Unsupported case for discriminated type
org.apache.plc4x.java.spi.generation.ParseException: Unsupported case for discriminated type
at org.apache.plc4x.java.eip.readwrite.io.CipServiceIO.staticParse(CipServiceIO.java:100)
at org.apache.plc4x.java.eip.readwrite.io.CipExchangeIO.staticParse(CipExchangeIO.java:96)
at org.apache.plc4x.java.eip.readwrite.io.CipRRDataIO.staticParse(CipRRDataIO.java:80)
at org.apache.plc4x.java.eip.readwrite.io.EipPacketIO.staticParse(EipPacketIO.java:101)
at org.apache.plc4x.java.eip.readwrite.io.EipPacketIO.parse(EipPacketIO.java:48)
at org.apache.plc4x.java.eip.readwrite.io.EipPacketIO.parse(EipPacketIO.java:42)
at org.apache.plc4x.java.spi.GeneratedDriverByteToMessageCodec.decode(GeneratedDriverByteToMessageCodec.java:79)
at io.netty.handler.codec.ByteToMessageCodec$1.decode(ByteToMessageCodec.java:42)
at io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:507)
at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:446)
at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:276)
at io.netty.handler.codec.ByteToMessageCodec.channelRead(ByteToMessageCodec.java:103)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166)
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:719)
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:655)
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:581)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:986)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.base/java.lang.Thread.run(Thread.java:829)
I'd appreciate any help or if someone could point me in the right direction, in terms of how to solve this. Thanks in advance!
Guillermo
For anyone having the same issue:
Apparently, from what I've read, the Micro820 series has limitations reading tags. In the case of the PLC I was using (2080-LC20-20QWB) I had to:
1. Establish a session by sending a "Forward_Open" request. The details of the request are on page 105 of the CIP specification.
2. Once the connection is established, send multiple "Read Data" requests with the names of the tags you wish to read.
3. Finally, send a "Forward_Close" request to close the connection.
And that's it.
I cloned this project and implemented the steps just described. You can find my version here.
And here's an example showing how to use it:
try {
    EtherNetIP plc = new EtherNetIP("10.0.1.100", 0);
    plc.connectTcp();
    List<TagReadReply> tags = plc.connectAndReadTags("Sensor1", "Sensor2", "Sensor10");
    tags.forEach(each -> {
        logger.info("Tag name:" + each.getTag() + " is valid ? " + each.isValid());
    });
} catch (Exception e) {
    e.printStackTrace();
    logger.severe("Exception occurred:" + e.getMessage());
}

Karate DSL - How to use afterFeature with @Karate.Test

I configured afterFeature with @Karate.Test, but it seems that the afterFeature function is never called. However, when I run the test with @jupiter.api.Test void testParallel() {}, it works fine.
Question: Is this a bug or expected behaviour?
Thanks in advance for your help.
users.feature
Feature: Sample test

Background:
    * configure afterScenario = function() { karate.log("I'm afterScenario"); }
    * configure afterFeature = function() { karate.log("I'm afterFeature"); }

Scenario: Scenario 1
    * print "I'm Scenario 1"

Scenario: Scenario 2
    * print "I'm Scenario 2"
UsersRunner.java - Does NOT work
class UsersRunner {

    @Karate.Test
    Karate testUsers() {
        return Karate.run("users").relativeTo(getClass());
    }
}
/* karate.log
11:44:01.598 [main] DEBUG com.intuit.karate.Suite - [config] classpath:karate-config.js
11:44:02.404 [main] INFO com.intuit.karate - karate.env system property was: null
11:44:02.434 [main] INFO com.intuit.karate - [print] I'm Scenario 1
11:44:02.435 [main] INFO com.intuit.karate - I'm afterScenario
11:44:02.447 [main] INFO com.intuit.karate - karate.env system property was: null
11:44:02.450 [main] INFO com.intuit.karate - [print] I'm Scenario 2
11:44:02.450 [main] INFO com.intuit.karate - I'm afterScenario
*/
ExamplesTest.java - Works
class ExamplesTest {

    @Test
    void testParallel() {
        Results results = Runner.path("classpath:examples")
                .tags("~@ignore")
                //.outputCucumberJson(true)
                .parallel(5);
        assertEquals(0, results.getFailCount(), results.getErrorMessages());
    }
}
/* karate.log
11:29:48.904 [main] DEBUG com.intuit.karate.Suite - [config] classpath:karate-config.js
11:29:48.908 [main] INFO com.intuit.karate.Suite - backed up existing 'target/karate-reports' dir to: target/karate-reports_1621308588907
11:29:49.676 [pool-1-thread-2] INFO com.intuit.karate - karate.env system property was: null
11:29:49.676 [pool-1-thread-1] INFO com.intuit.karate - karate.env system property was: null
11:29:49.707 [pool-1-thread-2] INFO com.intuit.karate - [print] I'm Scenario 2
11:29:49.707 [pool-1-thread-1] INFO com.intuit.karate - [print] I'm Scenario 1
11:29:49.707 [pool-1-thread-2] INFO com.intuit.karate - I'm afterScenario
11:29:49.707 [pool-1-thread-1] INFO com.intuit.karate - I'm afterScenario
11:29:49.709 [pool-2-thread-1] INFO com.intuit.karate - I'm afterFeature
11:29:50.116 [pool-2-thread-1] INFO com.intuit.karate.Suite - <<pass>> feature 1 of 1 (0 remaining) classpath:examples/users/users.feature
*/
Can you upgrade to the latest 1.0.1? If you still see the problem, it is a bug; please follow this process: https://github.com/intuit/karate/wiki/How-to-Submit-an-Issue

Flux (WebClient) Parallel Calls and then Post Processing on Collected List

I have a use case where I need to make parallel WebClient calls to 10 upstream systems with a 450 ms timeout; a few of the upstream systems return results in 80-150 ms at p99 latency, and a few take around ~300 ms. I need to collect the data from each and then do the ranking. I'd like a suggestion here: if I use collectList() and then go for the ranking, will that block the thread, given that I need the data from all of them before ranking? Can someone explain the thread flow here? Thanks in advance!
Example 1:
Flux.just("red", "white", "blue")
.log()
.map(String::toUpperCase)
.subscribe(System.out::println);
Output:
17:02:18.479 [main] INFO reactor.Flux.Array.1 - | onSubscribe([Synchronous Fuseable] FluxArray.ArraySubscription)
17:02:18.483 [main] INFO reactor.Flux.Array.1 - | request(unbounded)
17:02:18.483 [main] INFO reactor.Flux.Array.1 - | onNext(red)
RED
17:02:18.484 [main] INFO reactor.Flux.Array.1 - | onNext(white)
WHITE
17:02:18.484 [main] INFO reactor.Flux.Array.1 - | onNext(blue)
BLUE
17:02:18.485 [main] INFO reactor.Flux.Array.1 - | onComplete()
Example 2:
Flux.just("red", "white", "blue")
.log()
.map(String::toUpperCase)
.collectList()
.subscribe(System.out::println);
Output:
17:03:10.556 [main] DEBUG reactor.util.Loggers$LoggerFactory - Using Slf4j logging framework
17:03:10.622 [main] INFO reactor.Flux.Array.1 - | onSubscribe([Synchronous Fuseable] FluxArray.ArraySubscription)
17:03:10.625 [main] INFO reactor.Flux.Array.1 - | request(unbounded)
17:03:10.626 [main] INFO reactor.Flux.Array.1 - | onNext(red)
17:03:10.626 [main] INFO reactor.Flux.Array.1 - | onNext(white)
17:03:10.626 [main] INFO reactor.Flux.Array.1 - | onNext(blue)
17:03:10.627 [main] INFO reactor.Flux.Array.1 - | onComplete()
[RED, WHITE, BLUE]
In Example #2 I have added .collectList()
Example #3
public Mono<List<String>> fetchUsers(List<String> userIds) {
    return Flux.fromIterable(userIds)
            .subscribeOn(Schedulers.elastic())
            .flatMap(this::getUser)
            .collectList();
}

// This will be an HTTP call, p99 latency 450 ms
public Mono<String> getUser(String id) {
    return webClient.get()
            .uri("/user/{id}", id)
            .retrieve()
            .bodyToMono(String.class);
}
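For the use case described above (fan out to several upstreams, collect, then rank), a sketch could look like the following; the name callUpstream and the sort-based "ranking" are assumptions, not from the original post. Note that collectList() does not block a thread: it buffers elements as they arrive and emits the full list only when the merged flux completes, so the ranking step runs as an ordinary map on whichever thread delivers the completion signal.
import java.time.Duration;
import java.util.Comparator;
import java.util.List;
import reactor.core.publisher.Flux;
import reactor.core.publisher.Mono;

public class RankingDemo {

    // Placeholder for a real WebClient call to one upstream system
    Mono<String> callUpstream(String system) {
        return Mono.just(system + "-result");
    }

    public Mono<List<String>> fetchAndRank(List<String> upstreams) {
        return Flux.fromIterable(upstreams)
                // flatMap subscribes to all inner Monos eagerly, so the calls run concurrently
                .flatMap(system -> callUpstream(system)
                        // per-call budget; a slow upstream fails only its own Mono
                        .timeout(Duration.ofMillis(450))
                        .onErrorResume(e -> Mono.empty()))
                // non-blocking: accumulates results, emits the list once all inner Monos complete
                .collectList()
                // "ranking" happens here as an ordinary transformation on the collected list
                .map(results -> {
                    results.sort(Comparator.naturalOrder());
                    return results;
                });
    }
}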

How to use -a on the command line?

As documented for the standalone JAR, I'm trying to provide args to my feature and can't figure out how to get it to work. What am I missing?
My command line :
java -jar c:\karate\karate-0.9.1.jar -a myKey1=myValue1 TestArgs.feature
karate-config.js
function fn() {
    var env = karate.env;
    karate.log('karate.env system property was:', env);
    if (!env) {
        env = 'test';
    }
    var config = { // base config JSON
        arg: karate.properties['myKey1']
    };
    return config;
}
TestArgs.feature
Feature: test args
Scenario: print args
* print myKey1
* print arg
* print karate.properties['myKey1']
* print karate.get('myKey1')
I don't get anything printed:
java -jar c:\karate\karate-0.9.1.jar -a myKey1=myValue1 TestArgs.feature
10:32:57.904 [main] INFO com.intuit.karate.netty.Main - Karate version: 0.9.1
10:32:58.012 [main] INFO com.intuit.karate.Runner - Karate version: 0.9.1
10:32:58.470 [ForkJoinPool-1-worker-1] INFO com.intuit.karate - karate.env system property was: null
10:32:58.489 [ForkJoinPool-1-worker-1] INFO com.intuit.karate - [print]
10:32:58.491 [ForkJoinPool-1-worker-1] INFO com.intuit.karate - [print]
10:32:58.495 [ForkJoinPool-1-worker-1] INFO com.intuit.karate - [print]
10:32:58.501 [ForkJoinPool-1-worker-1] INFO com.intuit.karate - [print]
Actually, we meant to delete those docs; apologies, the -a / --args option is not supported anymore.
You can of course use the karate.properties['some.key'] approach to unpack values from the command line. Also refer to this for how you can even read environment variables: https://github.com/intuit/karate/issues/547
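For example, since karate.properties reads Java system properties, you can pass the value with -D instead (a sketch, re-using the question's myKey1):
java -DmyKey1=myValue1 -jar c:\karate\karate-0.9.1.jar TestArgs.feature
With that, * print karate.properties['myKey1'] should print myValue1.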
My suggestion is you can use karate-config-<env>.js to read a bunch of variables from a file if needed. For example, given this feature:
Feature:

Scenario:
    * print myKey
And this file karate-config-dev.js:
function() { return { myKey: 'hello' } }
You can run this command, which will auto load the config js file:
java -jar karate.jar -e dev test.feature
We will update the docs. Thanks for catching this.