I'm having an error when trying to call a stored procedure (SP) in BigQuery with the Airflow BigQueryInsertJobOperator. I have tried many different syntax variations and none of them seem to work. I pulled the same SQL out of the SP, put it into a file, and it ran fine.
Below is the code I used to try to execute the SP in BigQuery.
PROJECT_ID = os.environ.get("GCP_PROJECT_ID", "project-id")
Dataset = os.environ.get("GCP_Dataset", "dataset")

with DAG(
    dag_id='dag_id',
    default_args=default_args,
    schedule_interval="@daily",
    start_date=days_ago(1),
    catchup=False,
) as dag:

    Call_SP = BigQueryInsertJobOperator(
        task_id='Call_SP',
        configuration={
            "query": {
                "query": "CALL `" + PROJECT_ID + "." + Dataset + "." + "SP`();",
                # "query": "{% include 'Scripts/Script.sql' %}",
                "useLegacySql": False,
            }
        },
    )

    Call_SP
I can see that it is outputting the call I am expecting:
[2022-06-30, 17:32:07 UTC] {bigquery.py:2247} INFO - Executing: {'query': {'query': 'CALL `project-id.dataset.SP`();', 'useLegacySql': False}}
[2022-06-30, 17:32:07 UTC] {bigquery.py:1560} INFO - Inserting job airflow_Call_SP_2022_06_30T16_47_13_748417_00_00_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
[2022-06-30, 17:32:13 UTC] {taskinstance.py:1776} ERROR - Task failed with exception
Traceback (most recent call last):
File "/opt/python3.8/lib/python3.8/site-packages/airflow/providers/google/cloud/operators/bigquery.py", line 2269, in execute
table = job.to_api_repr()["configuration"]["query"]["destinationTable"]
KeyError: 'destinationTable'
It just doesn't make sense to me, since my SP is just a single MERGE statement.
You hit a known bug in apache-airflow-providers-google that was fixed in a PR and released in apache-airflow-providers-google version 8.0.0.
To solve your issue you should upgrade the Google provider version.
If for some reason you can't upgrade the provider, you can create a custom operator from the code in that PR and use it until you are able to upgrade the provider version.
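Alternatively, a possible stop-gap while you are stuck on the old provider is to submit the job through BigQueryHook from a PythonOperator, which skips the operator code path that assumes a destinationTable is present. This is only a minimal sketch, assuming your provider version already has BigQueryHook.insert_job; the task and function names are illustrative:

from airflow.operators.python import PythonOperator
from airflow.providers.google.cloud.hooks.bigquery import BigQueryHook

def _call_sp():
    # Submit the CALL statement as a plain query job; the hook does not
    # touch the destinationTable field that trips up the operator.
    hook = BigQueryHook(use_legacy_sql=False)
    hook.insert_job(
        configuration={
            "query": {
                "query": f"CALL `{PROJECT_ID}.{Dataset}.SP`();",
                "useLegacySql": False,
            }
        }
    )

# inside the `with DAG(...)` block:
call_sp_workaround = PythonOperator(
    task_id="call_sp_workaround",
    python_callable=_call_sp,
)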
Expounding on @Elad Kalif's answer, I was able to update apache-airflow-providers-google to a version with the fix (>= 8.0.0) using this GCP documentation on how to install a package from PyPI:
gcloud composer environments update <your-environment> \
    --location <your-environment-location> \
    --update-pypi-package "apache-airflow-providers-google>=8.1.0"
After installation, running this code (derived from your given code) yielded successful results:
import datetime

from airflow import models
from airflow import DAG
from airflow.operators import bash
from airflow.providers.google.cloud.operators.bigquery import BigQueryInsertJobOperator

PROJECT_ID = "<your-proj-id>"
Dataset = "<your-dataset>"
YESTERDAY = datetime.datetime.now() - datetime.timedelta(days=1)

default_args = {
    'owner': 'Composer Example',
    'depends_on_past': False,
    'email': [''],
    'email_on_failure': False,
    'email_on_retry': False,
    'retries': 1,
    'retry_delay': datetime.timedelta(minutes=5),
    'start_date': YESTERDAY,
}

with models.DAG(
    dag_id='dag_id',
    default_args=default_args,
    schedule_interval="@daily",
    catchup=False,
) as dag:

    Call_SP = BigQueryInsertJobOperator(
        task_id='Call_SP',
        configuration={
            "query": {
                "query": "CALL `" + PROJECT_ID + "." + Dataset + "." + "<your-sp>`();",
                # "query": "{% include 'Scripts/Script.sql' %}",
                "useLegacySql": False,
            }
        },
    )

    Call_SP
Logs:
*** Reading remote log from gs:///2022-07-04T01:02:28.122529+00:00/1.log.
[2022-07-04, 01:02:33 UTC] {taskinstance.py:1044} INFO - Dependencies all met for <TaskInstance: dag_id.Call_SP manual__2022-07-04T01:02:28.122529+00:00 [queued]>
[2022-07-04, 01:02:33 UTC] {taskinstance.py:1044} INFO - Dependencies all met for <TaskInstance: dag_id.Call_SP manual__2022-07-04T01:02:28.122529+00:00 [queued]>
[2022-07-04, 01:02:33 UTC] {taskinstance.py:1250} INFO -
--------------------------------------------------------------------------------
[2022-07-04, 01:02:33 UTC] {taskinstance.py:1251} INFO - Starting attempt 1 of 2
[2022-07-04, 01:02:33 UTC] {taskinstance.py:1252} INFO -
--------------------------------------------------------------------------------
[2022-07-04, 01:02:33 UTC] {taskinstance.py:1271} INFO - Executing <Task(BigQueryInsertJobOperator): Call_SP> on 2022-07-04 01:02:28.122529+00:00
[2022-07-04, 01:02:33 UTC] {standard_task_runner.py:52} INFO - Started process 2461 to run task
[2022-07-04, 01:02:33 UTC] {standard_task_runner.py:79} INFO - Running: ['airflow', 'tasks', 'run', 'dag_id', 'Call_SP', 'manual__2022-07-04T01:02:28.122529+00:00', '--job-id', '29', '--raw', '--subdir', 'DAGS_FOLDER/20220704.py', '--cfg-path', '/tmp/tmpjpm9jt5h', '--error-file', '/tmp/tmpnoplzt6r']
[2022-07-04, 01:02:33 UTC] {standard_task_runner.py:80} INFO - Job 29: Subtask Call_SP
[2022-07-04, 01:02:34 UTC] {task_command.py:298} INFO - Running <TaskInstance: dag_id.Call_SP manual__2022-07-04T01:02:28.122529+00:00 [running]> on host airflow-worker-r66xj
[2022-07-04, 01:02:34 UTC] {taskinstance.py:1448} INFO - Exporting the following env vars:
AIRFLOW_CTX_DAG_EMAIL=
AIRFLOW_CTX_DAG_OWNER=Composer Example
AIRFLOW_CTX_DAG_ID=dag_id
AIRFLOW_CTX_TASK_ID=Call_SP
AIRFLOW_CTX_EXECUTION_DATE=2022-07-04T01:02:28.122529+00:00
AIRFLOW_CTX_DAG_RUN_ID=manual__2022-07-04T01:02:28.122529+00:00
[2022-07-04, 01:02:34 UTC] {bigquery.py:2243} INFO - Executing: {'query': {'query': 'CALL ``();', 'useLegacySql': False}}
[2022-07-04, 01:02:34 UTC] {credentials_provider.py:324} INFO - Getting connection using `google.auth.default()` since no key file is defined for hook.
[2022-07-04, 01:02:35 UTC] {bigquery.py:1562} INFO - Inserting job airflow_dag_id_Call_SP_2022_07_04T01_02_28_122529_00_00_d8681b858989cd5f36b8b9f4942a96a0
[2022-07-04, 01:02:36 UTC] {taskinstance.py:1279} INFO - Marking task as SUCCESS. dag_id=dag_id, task_id=Call_SP, execution_date=20220704T010228, start_date=20220704T010233, end_date=20220704T010236
[2022-07-04, 01:02:37 UTC] {local_task_job.py:154} INFO - Task exited with return code 0
[2022-07-04, 01:02:37 UTC] {local_task_job.py:264} INFO - 0 downstream tasks scheduled from follow-on schedule check
Project History in BigQuery (screenshot):
I am using Testcontainers (version 1.17.2) in a Spring Boot project and I am trying to spin up a RabbitMQ container. The RabbitMQ container seems to start up successfully, but it releases its connection as soon as it is up.
I can see some errors in the logs, but after that I can see that the container has started. I am somewhat flummoxed as to why I am seeing this error, and whether the container has actually started or not.
Pasting an excerpt from the logs:
15:00:44.007 [main] DEBUG org.testcontainers.containers.output.WaitingConsumer - STDOUT: 2022-06-29 05:00:24.477316+00:00 [info] <0.703.0> * rabbitmq_management_agent
15:00:44.007 [main] DEBUG org.testcontainers.containers.output.WaitingConsumer - STDOUT: 2022-06-29 05:00:24.477316+00:00 [info] <0.703.0> * rabbitmq_web_dispatch
15:00:44.007 [main] DEBUG org.testcontainers.containers.output.WaitingConsumer - STDOUT: 2022-06-29 05:00:24.477316+00:00 [info] <0.703.0> * rabbitmq_management
15:00:44.007 [main] DEBUG org.testcontainers.containers.output.WaitingConsumer - STDOUT: 2022-06-29 05:00:24.477316+00:00 [info] <0.703.0> * rabbitmq_prometheus
15:00:44.007 [main] DEBUG org.testcontainers.containers.output.WaitingConsumer - STDOUT: 2022-06-29 05:00:24.477316+00:00 [info] <0.703.0> Server startup complete; 4 plugins started.
15:00:44.007 [main] DEBUG com.github.dockerjava.zerodep.shaded.org.apache.hc.client5.http.impl.classic.InternalHttpClient - ep-0000000C: cancel
15:00:44.007 [main] DEBUG com.github.dockerjava.zerodep.shaded.org.apache.hc.client5.http.impl.io.DefaultManagedHttpClientConnection - http-outgoing-1: close connection IMMEDIATE
15:00:44.008 [main] DEBUG com.github.dockerjava.zerodep.shaded.org.apache.hc.client5.http.impl.classic.InternalHttpClient - ep-0000000C: endpoint closed
15:00:44.008 [main] DEBUG com.github.dockerjava.zerodep.shaded.org.apache.hc.client5.http.impl.classic.InternalHttpClient - ep-0000000C: discarding endpoint
15:00:44.008 [main] DEBUG com.github.dockerjava.zerodep.shaded.org.apache.hc.client5.http.impl.io.PoolingHttpClientConnectionManager - ep-0000000C: releasing endpoint
15:00:44.008 [main] DEBUG com.github.dockerjava.zerodep.shaded.org.apache.hc.client5.http.impl.io.PoolingHttpClientConnectionManager - ep-0000000C: connection is not kept alive
15:00:44.008 [docker-java-stream--540868428] DEBUG com.github.dockerjava.zerodep.shaded.org.apache.hc.client5.http.wire - http-outgoing-1 << "end of stream"
15:00:44.008 [main] DEBUG com.github.dockerjava.zerodep.shaded.org.apache.hc.client5.http.impl.io.PoolingHttpClientConnectionManager - ep-0000000C: connection released [route: {}->npipe://localhost:2375][total available: 0; route allocated: 0 of 2147483647; total allocated: 0 of 2147483647]
15:00:44.008 [main] DEBUG com.github.dockerjava.zerodep.shaded.org.apache.hc.client5.http.wire - http-outgoing-1 << "[read] I/O error: java.nio.channels.ClosedChannelException"
15:00:44.008 [docker-java-stream--540868428] DEBUG com.github.dockerjava.zerodep.shaded.org.apache.hc.client5.http.wire - http-outgoing-1 << "[read] I/O error: java.nio.channels.ClosedChannelException"
15:00:44.008 [docker-java-stream--540868428] DEBUG com.github.dockerjava.zerodep.ApacheDockerHttpClientImpl$ApacheResponse - Failed to close the response
java.io.IOException: java.nio.channels.ClosedChannelException
at java.base/java.nio.channels.Channels$2.read(Channels.java:240)
at com.github.dockerjava.zerodep.shaded.org.apache.hc.client5.http.impl.io.LoggingInputStream.read(LoggingInputStream.java:81)
at com.github.dockerjava.zerodep.shaded.org.apache.hc.core5.http.impl.io.SessionInputBufferImpl.fillBuffer(SessionInputBufferImpl.java:149)
at com.github.dockerjava.zerodep.shaded.org.apache.hc.core5.http.impl.io.SessionInputBufferImpl.readLine(SessionInputBufferImpl.java:280)
at com.github.dockerjava.zerodep.shaded.org.apache.hc.core5.http.impl.io.ChunkedInputStream.getChunkSize(ChunkedInputStream.java:261)
at com.github.dockerjava.zerodep.shaded.org.apache.hc.core5.http.impl.io.ChunkedInputStream.nextChunk(ChunkedInputStream.java:222)
at com.github.dockerjava.zerodep.shaded.org.apache.hc.core5.http.impl.io.ChunkedInputStream.read(ChunkedInputStream.java:147)
at com.github.dockerjava.zerodep.shaded.org.apache.hc.core5.http.impl.io.ChunkedInputStream.close(ChunkedInputStream.java:314)
at com.github.dockerjava.zerodep.shaded.org.apache.hc.core5.io.Closer.close(Closer.java:48)
at com.github.dockerjava.zerodep.shaded.org.apache.hc.core5.http.impl.io.IncomingHttpEntity.close(IncomingHttpEntity.java:111)
at com.github.dockerjava.zerodep.shaded.org.apache.hc.core5.http.io.entity.HttpEntityWrapper.close(HttpEntityWrapper.java:120)
at com.github.dockerjava.zerodep.shaded.org.apache.hc.core5.io.Closer.close(Closer.java:48)
at com.github.dockerjava.zerodep.shaded.org.apache.hc.core5.http.message.BasicClassicHttpResponse.close(BasicClassicHttpResponse.java:93)
at com.github.dockerjava.zerodep.shaded.org.apache.hc.client5.http.impl.classic.CloseableHttpResponse.close(CloseableHttpResponse.java:200)
at com.github.dockerjava.zerodep.ApacheDockerHttpClientImpl$ApacheResponse.close(ApacheDockerHttpClientImpl.java:256)
at org.testcontainers.shaded.com.github.dockerjava.core.DefaultInvocationBuilder.lambda$executeAndStream$1(DefaultInvocationBuilder.java:277)
at java.base/java.lang.Thread.run(Thread.java:833)
Caused by: java.nio.channels.ClosedChannelException: null
....................................
15:00:44.009 [main] INFO 🐳 [rabbitmq:3.9.13-management-alpine] - Container rabbitmq:3.9.13-management-alpine started in PT7.8752359S
Java config:
public abstract class RabbitMqTestContainerConfiguration {

    private static final int RABBITMQ_DEFAULT_PORT = 5672;

    @Container
    public static RabbitMQContainer rabbitMQContainer = new RabbitMQContainer("rabbitmq:3.9.13-management-alpine")
            .withExposedPorts(RABBITMQ_DEFAULT_PORT)
            .withStartupTimeout(Duration.ofMinutes(3));

    public static class Initializer implements ApplicationContextInitializer<ConfigurableApplicationContext> {

        @Override
        public void initialize(ConfigurableApplicationContext configurableApplicationContext) {
            // Point Spring AMQP at the container's mapped host/port and admin credentials
            TestPropertySourceUtils.addInlinedPropertiesToEnvironment(configurableApplicationContext,
                    "spring.rabbitmq.host=" + rabbitMQContainer.getHost(),
                    "spring.rabbitmq.port=" + rabbitMQContainer.getMappedPort(RABBITMQ_DEFAULT_PORT),
                    "spring.rabbitmq.username=" + rabbitMQContainer.getAdminUsername(),
                    "spring.rabbitmq.password=" + rabbitMQContainer.getAdminPassword());
        }
    }
}
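In case it matters, this kind of configuration class is typically wired into a test roughly like this (a simplified sketch; the test class name and annotation set are illustrative, not copied from my project):

@Testcontainers
@SpringBootTest
@ContextConfiguration(initializers = RabbitMqTestContainerConfiguration.Initializer.class)
class RabbitMqSmokeTest extends RabbitMqTestContainerConfiguration {

    @Test
    void contextLoads() {
        // Reaching this point means the container started and the
        // spring.rabbitmq.* properties resolved against the mapped port.
    }
}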
I'm using Apache's PLC4X library in order to read some tags from an Allen Bradley Micro820 PLC (2080-LC20-20QWB). So far, I am able to establish a connection with the device, but when I try to execute a read request I get the error shown in the stack trace below.
I'm running a Java Maven-based project with the following dependencies:
<dependency>
    <groupId>org.apache.plc4x</groupId>
    <artifactId>plc4j-api</artifactId>
    <version>0.9.1</version>
</dependency>
<!-- Ethernet / IP driver -->
<dependency>
    <groupId>org.apache.plc4x</groupId>
    <artifactId>plc4j-driver-eip</artifactId>
    <version>0.9.1</version>
</dependency>
This is the code I'm running, on Ubuntu 18.04 with JDK 11:
package com.example.plctest;

import org.apache.plc4x.java.PlcDriverManager;
import org.apache.plc4x.java.api.PlcConnection;
import org.apache.plc4x.java.api.messages.PlcReadRequest;
import org.apache.plc4x.java.api.messages.PlcReadResponse;
import org.apache.plc4x.java.eip.readwrite.field.EipField;
import org.apache.plc4x.java.eip.readwrite.types.CIPDataTypeCode;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class ReadPlcDemo {

    private static final Logger LOGGER = LoggerFactory.getLogger(ReadPlcDemo.class);

    private static String connectionString = "eip://192.168.1.100?backplane=0&slot=0";

    public static void main(String[] args) {
        // Establish a connection with the PLC
        try (PlcConnection connection = new PlcDriverManager().getConnection(connectionString)) {
            if (connection.getMetadata().canRead()) {
                LOGGER.info("PLC can read!");
            }

            // Create the read request
            EipField field = new EipField("Sensor1", CIPDataTypeCode.BOOL);
            // EipField field = new EipField("Sensor2", CIPDataTypeCode.SINT);
            // EipField field = new EipField("Sensor3", CIPDataTypeCode.SINT);
            PlcReadRequest.Builder builder = connection.readRequestBuilder();
            builder.addItem("value-" + field.getTag(), field);
            PlcReadRequest readRequest = builder.build();

            // Execute the request
            PlcReadResponse response;
            try {
                response = readRequest.execute().get();
            } catch (Exception e) {
                e.printStackTrace();
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
This is the resulting stack trace; I set the log level to 'trace' in order to get better insight into what's going on:
/home/ghinojosa/.jdks/corretto-11.0.14.1/bin/java -Dio.netty.tryReflectionSetAccessible=true --add-opens java.base/jdk.internal.misc=ALL-UNNAMED -javaagent:/home/ghinojosa/.local/share/JetBrains/Toolbox/apps/IDEA-C/ch-0/213.6777.52/lib/idea_rt.jar=44017:/home/ghinojosa/.local/share/JetBrains/Toolbox/apps/IDEA-C/ch-0/213.6777.52/bin -Dfile.encoding=UTF-8 -classpath /home/ghinojosa/IdeaProjects/plc-test/target/classes:/home/ghinojosa/Downloads/eeip-library.jar:/home/ghinojosa/.m2/repository/org/apache/plc4x/plc4j-api/0.9.1/plc4j-api-0.9.1.jar:/home/ghinojosa/.m2/repository/org/apache/commons/commons-lang3/3.12.0/commons-lang3-3.12.0.jar:/home/ghinojosa/.m2/repository/com/fasterxml/jackson/core/jackson-annotations/2.12.5/jackson-annotations-2.12.5.jar:/home/ghinojosa/.m2/repository/org/apache/plc4x/plc4j-driver-eip/0.9.1/plc4j-driver-eip-0.9.1.jar:/home/ghinojosa/.m2/repository/org/apache/plc4x/plc4j-spi/0.9.1/plc4j-spi-0.9.1.jar:/home/ghinojosa/.m2/repository/io/netty/netty-codec/4.1.67.Final/netty-codec-4.1.67.Final.jar:/home/ghinojosa/.m2/repository/io/netty/netty-common/4.1.67.Final/netty-common-4.1.67.Final.jar:/home/ghinojosa/.m2/repository/io/netty/netty-transport/4.1.67.Final/netty-transport-4.1.67.Final.jar:/home/ghinojosa/.m2/repository/io/netty/netty-resolver/4.1.67.Final/netty-resolver-4.1.67.Final.jar:/home/ghinojosa/.m2/repository/commons-beanutils/commons-beanutils/1.9.4/commons-beanutils-1.9.4.jar:/home/ghinojosa/.m2/repository/commons-logging/commons-logging/1.2/commons-logging-1.2.jar:/home/ghinojosa/.m2/repository/commons-collections/commons-collections/3.2.2/commons-collections-3.2.2.jar:/home/ghinojosa/.m2/repository/com/fasterxml/jackson/core/jackson-core/2.12.5/jackson-core-2.12.5.jar:/home/ghinojosa/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.12.5/jackson-databind-2.12.5.jar:/home/ghinojosa/.m2/repository/io/vavr/vavr/0.10.4/vavr-0.10.4.jar:/home/ghinojosa/.m2/repository/io/vavr/vavr-match/0.10.4/vavr-match-0.10.4.jar:/home/ghinojosa/.m2/repository/com/github/jinahya/bit-io/1.4.3/bit-io-1.4.3.jar:/home/ghinojosa/.m2/repository/commons-codec/commons-codec/1.15/commons-codec-1.15.jar:/home/ghinojosa/.m2/repository/org/apache/plc4x/plc4j-transport-tcp/0.9.1/plc4j-transport-tcp-0.9.1.jar:/home/ghinojosa/.m2/repository/io/netty/netty-buffer/4.1.67.Final/netty-buffer-4.1.67.Final.jar:/home/ghinojosa/.m2/repository/org/slf4j/slf4j-api/1.8.0-beta4/slf4j-api-1.8.0-beta4.jar:/home/ghinojosa/.m2/repository/org/slf4j/slf4j-simple/1.8.0-beta4/slf4j-simple-1.8.0-beta4.jar com.example.plctest.ReadPlcDemo
[main] INFO org.apache.plc4x.java.PlcDriverManager - Instantiating new PLC Driver Manager with class loader jdk.internal.loader.ClassLoaders$AppClassLoader@5c8da962
[main] INFO org.apache.plc4x.java.PlcDriverManager - Registering available drivers...
[main] INFO org.apache.plc4x.java.PlcDriverManager - Registering driver for Protocol eip (EthernetIP)
[main] DEBUG io.netty.util.internal.logging.InternalLoggerFactory - Using SLF4J as the default logging framework
[main] DEBUG io.netty.util.ResourceLeakDetector - -Dio.netty.leakDetection.level: simple
[main] DEBUG io.netty.util.ResourceLeakDetector - -Dio.netty.leakDetection.targetRecords: 4
[main] DEBUG io.netty.util.ResourceLeakDetectorFactory - Loaded default ResourceLeakDetector: io.netty.util.ResourceLeakDetector@4de5031f
[main] DEBUG io.netty.util.internal.PlatformDependent0 - -Dio.netty.noUnsafe: false
[main] DEBUG io.netty.util.internal.PlatformDependent0 - Java version: 11
[main] DEBUG io.netty.util.internal.PlatformDependent0 - sun.misc.Unsafe.theUnsafe: available
[main] DEBUG io.netty.util.internal.PlatformDependent0 - sun.misc.Unsafe.copyMemory: available
[main] DEBUG io.netty.util.internal.PlatformDependent0 - java.nio.Buffer.address: available
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by io.netty.util.internal.ReflectionUtil (file:/home/ghinojosa/.m2/repository/io/netty/netty-common/4.1.67.Final/netty-common-4.1.67.Final.jar) to constructor java.nio.DirectByteBuffer(long,int)
WARNING: Please consider reporting this to the maintainers of io.netty.util.internal.ReflectionUtil
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
[main] DEBUG io.netty.util.internal.PlatformDependent0 - direct buffer constructor: available
[main] DEBUG io.netty.util.internal.PlatformDependent0 - java.nio.Bits.unaligned: available, true
[main] DEBUG io.netty.util.internal.PlatformDependent0 - jdk.internal.misc.Unsafe.allocateUninitializedArray(int): available
[main] DEBUG io.netty.util.internal.PlatformDependent0 - java.nio.DirectByteBuffer.<init>(long, int): available
[main] DEBUG io.netty.util.internal.PlatformDependent - sun.misc.Unsafe: available
[main] DEBUG io.netty.util.internal.PlatformDependent - maxDirectMemory: 8390705152 bytes (maybe)
[main] DEBUG io.netty.util.internal.PlatformDependent - -Dio.netty.tmpdir: /tmp (java.io.tmpdir)
[main] DEBUG io.netty.util.internal.PlatformDependent - -Dio.netty.bitMode: 64 (sun.arch.data.model)
[main] DEBUG io.netty.util.internal.PlatformDependent - -Dio.netty.maxDirectMemory: 8390705152 bytes
[main] DEBUG io.netty.util.internal.PlatformDependent - -Dio.netty.uninitializedArrayAllocationThreshold: 1024
[main] DEBUG io.netty.util.internal.CleanerJava9 - java.nio.ByteBuffer.cleaner(): available
[main] DEBUG io.netty.util.internal.PlatformDependent - -Dio.netty.noPreferDirect: false
[main] DEBUG io.netty.util.internal.PlatformDependent - org.jctools-core.MpscChunkedArrayQueue: available
[main] DEBUG io.netty.channel.MultithreadEventLoopGroup - -Dio.netty.eventLoopThreads: 8
[main] DEBUG io.netty.util.internal.InternalThreadLocalMap - -Dio.netty.threadLocalMap.stringBuilder.initialSize: 1024
[main] DEBUG io.netty.util.internal.InternalThreadLocalMap - -Dio.netty.threadLocalMap.stringBuilder.maxSize: 4096
[main] DEBUG io.netty.channel.nio.NioEventLoop - -Dio.netty.noKeySetOptimization: false
[main] DEBUG io.netty.channel.nio.NioEventLoop - -Dio.netty.selectorAutoRebuildThreshold: 512
[main] TRACE io.netty.channel.nio.NioEventLoop - instrumented a special java.util.Set into: sun.nio.ch.EPollSelectorImpl@76505305
[main] TRACE io.netty.channel.nio.NioEventLoop - instrumented a special java.util.Set into: sun.nio.ch.EPollSelectorImpl@7b98f307
[main] TRACE io.netty.channel.nio.NioEventLoop - instrumented a special java.util.Set into: sun.nio.ch.EPollSelectorImpl@4802796d
[main] TRACE io.netty.channel.nio.NioEventLoop - instrumented a special java.util.Set into: sun.nio.ch.EPollSelectorImpl@34123d65
[main] TRACE io.netty.channel.nio.NioEventLoop - instrumented a special java.util.Set into: sun.nio.ch.EPollSelectorImpl@59474f18
[main] TRACE io.netty.channel.nio.NioEventLoop - instrumented a special java.util.Set into: sun.nio.ch.EPollSelectorImpl@65fb9ffc
[main] TRACE io.netty.channel.nio.NioEventLoop - instrumented a special java.util.Set into: sun.nio.ch.EPollSelectorImpl@3e694b3f
[main] TRACE io.netty.channel.nio.NioEventLoop - instrumented a special java.util.Set into: sun.nio.ch.EPollSelectorImpl@1bb5a082
[main] INFO org.apache.plc4x.java.transport.tcp.TcpChannelFactory - Configuring Bootstrap with org.apache.plc4x.java.eip.readwrite.configuration.EIPConfiguration@5aa9e4eb
[main] DEBUG io.netty.channel.DefaultChannelId - -Dio.netty.processId: 10908 (auto-detected)
[main] DEBUG io.netty.util.NetUtil - -Djava.net.preferIPv4Stack: false
[main] DEBUG io.netty.util.NetUtil - -Djava.net.preferIPv6Addresses: false
[main] DEBUG io.netty.util.NetUtilInitializations - Loopback interface: lo (lo, 0:0:0:0:0:0:0:1%lo)
[main] DEBUG io.netty.util.NetUtil - /proc/sys/net/core/somaxconn: 4096
[main] DEBUG io.netty.channel.DefaultChannelId - -Dio.netty.machineId: f4:06:69:ff:fe:d6:97:69 (auto-detected)
[main] DEBUG io.netty.buffer.PooledByteBufAllocator - -Dio.netty.allocator.numHeapArenas: 8
[main] DEBUG io.netty.buffer.PooledByteBufAllocator - -Dio.netty.allocator.numDirectArenas: 8
[main] DEBUG io.netty.buffer.PooledByteBufAllocator - -Dio.netty.allocator.pageSize: 8192
[main] DEBUG io.netty.buffer.PooledByteBufAllocator - -Dio.netty.allocator.maxOrder: 11
[main] DEBUG io.netty.buffer.PooledByteBufAllocator - -Dio.netty.allocator.chunkSize: 16777216
[main] DEBUG io.netty.buffer.PooledByteBufAllocator - -Dio.netty.allocator.smallCacheSize: 256
[main] DEBUG io.netty.buffer.PooledByteBufAllocator - -Dio.netty.allocator.normalCacheSize: 64
[main] DEBUG io.netty.buffer.PooledByteBufAllocator - -Dio.netty.allocator.maxCachedBufferCapacity: 32768
[main] DEBUG io.netty.buffer.PooledByteBufAllocator - -Dio.netty.allocator.cacheTrimInterval: 8192
[main] DEBUG io.netty.buffer.PooledByteBufAllocator - -Dio.netty.allocator.cacheTrimIntervalMillis: 0
[main] DEBUG io.netty.buffer.PooledByteBufAllocator - -Dio.netty.allocator.useCacheForAllThreads: true
[main] DEBUG io.netty.buffer.PooledByteBufAllocator - -Dio.netty.allocator.maxCachedByteBuffersPerChunk: 1023
[main] DEBUG io.netty.buffer.ByteBufUtil - -Dio.netty.allocator.type: pooled
[main] DEBUG io.netty.buffer.ByteBufUtil - -Dio.netty.threadLocalDirectBufferSize: 0
[main] DEBUG io.netty.buffer.ByteBufUtil - -Dio.netty.maxThreadLocalCharBufferSize: 16384
[main] TRACE org.apache.plc4x.java.spi.connection.DefaultNettyPlcConnection - Channel was created, firing ChannelCreated Event
[nioEventLoopGroup-2-1] DEBUG org.apache.plc4x.java.spi.Plc4xNettyWrapper - User Event triggered org.apache.plc4x.java.spi.events.ConnectEvent@645e8927
[nioEventLoopGroup-2-1] DEBUG org.apache.plc4x.java.eip.readwrite.protocol.EipProtocolLogic - Sending RegisterSession EIP Package
[nioEventLoopGroup-2-1] TRACE org.apache.plc4x.java.spi.Plc4xNettyWrapper - Adding Response Handler ...
[nioEventLoopGroup-2-1] TRACE org.apache.plc4x.java.spi.Plc4xNettyWrapper - Sending to wire EipConnectionRequest[sessionHandle=0,status=0,senderContext={0,0,0,0,0,0,0,0},options=0]
[nioEventLoopGroup-2-1] DEBUG org.apache.plc4x.java.spi.Plc4xNettyWrapper - Forwarding request to plc EipConnectionRequest[sessionHandle=0,status=0,senderContext={0,0,0,0,0,0,0,0},options=0]
[nioEventLoopGroup-2-1] DEBUG io.netty.util.Recycler - -Dio.netty.recycler.maxCapacityPerThread: 4096
[nioEventLoopGroup-2-1] DEBUG io.netty.util.Recycler - -Dio.netty.recycler.maxSharedCapacityFactor: 2
[nioEventLoopGroup-2-1] DEBUG io.netty.util.Recycler - -Dio.netty.recycler.linkCapacity: 16
[nioEventLoopGroup-2-1] DEBUG io.netty.util.Recycler - -Dio.netty.recycler.ratio: 8
[nioEventLoopGroup-2-1] DEBUG io.netty.util.Recycler - -Dio.netty.recycler.delayedQueue.ratio: 8
[nioEventLoopGroup-2-1] DEBUG io.netty.buffer.AbstractByteBuf - -Dio.netty.buffer.checkAccessible: true
[nioEventLoopGroup-2-1] DEBUG io.netty.buffer.AbstractByteBuf - -Dio.netty.buffer.checkBounds: true
[nioEventLoopGroup-2-1] DEBUG io.netty.util.ResourceLeakDetectorFactory - Loaded default ResourceLeakDetector: io.netty.util.ResourceLeakDetector@20edcbc7
[nioEventLoopGroup-2-1] DEBUG org.apache.plc4x.java.spi.GeneratedDriverByteToMessageCodec - Sending bytes to PLC for message EipConnectionRequest[sessionHandle=0,status=0,senderContext={0,0,0,0,0,0,0,0},options=0] as data 65000400000000000000000000000000000000000000000001000000
[nioEventLoopGroup-2-1] TRACE org.apache.plc4x.java.spi.GeneratedDriverByteToMessageCodec - Receiving bytes, trying to decode Message...
[nioEventLoopGroup-2-1] TRACE org.apache.plc4x.java.spi.Plc4xNettyWrapper - Decoding EipConnectionRequest[sessionHandle=3604940806,status=0,senderContext={0,0,0,0,0,0,0,0},options=0]
[nioEventLoopGroup-2-1] TRACE org.apache.plc4x.java.spi.Plc4xNettyWrapper - Checking handler HandlerRegistration#0 for Object of type EipConnectionRequest
[nioEventLoopGroup-2-1] TRACE org.apache.plc4x.java.spi.Plc4xNettyWrapper - Handler HandlerRegistration#0 has right expected type EipPacket, checking condition
[nioEventLoopGroup-2-1] TRACE org.apache.plc4x.java.spi.Plc4xNettyWrapper - Handler HandlerRegistration#0 accepts element EipConnectionRequest[sessionHandle=3604940806,status=0,senderContext={0,0,0,0,0,0,0,0},options=0], calling handle method
[nioEventLoopGroup-2-1] DEBUG org.apache.plc4x.java.eip.readwrite.protocol.EipProtocolLogic - Got assigned with Session 3604940806
[nioEventLoopGroup-2-1] TRACE org.apache.plc4x.java.spi.Plc4xNettyWrapper - Firing Connected!
[main] INFO com.example.plctest.ReadPlcDemo - PLC can read!
[main] TRACE org.apache.plc4x.java.spi.transaction.RequestTransactionManager - Submission of transaction 0
[pool-2-thread-1] TRACE org.apache.plc4x.java.spi.transaction.RequestTransactionManager - Start execution of transaction 0
[pool-2-thread-1] TRACE org.apache.plc4x.java.spi.Plc4xNettyWrapper - Adding Response Handler ...
[pool-2-thread-1] TRACE org.apache.plc4x.java.spi.transaction.RequestTransactionManager - Completed execution of transaction 0
[nioEventLoopGroup-2-1] DEBUG org.apache.plc4x.java.spi.Plc4xNettyWrapper - Forwarding request to plc CipRRData[sessionHandle=3604940806,status=0,senderContext={0,0,0,0,0,0,0,0},options=0,exchange=CipExchange[service=CipUnconnectedRequest[unconnectedService=CipReadRequest[RequestPathSize=5,tag={-111,7,83,101,110,115,111,114,49,0},elementNb=1],backPlane=0,slot=0]]]
[nioEventLoopGroup-2-1] DEBUG org.apache.plc4x.java.spi.GeneratedDriverByteToMessageCodec - Sending bytes to PLC for message CipRRData[sessionHandle=3604940806,status=0,senderContext={0,0,0,0,0,0,0,0},options=0,exchange=CipExchange[service=CipUnconnectedRequest[unconnectedService=CipReadRequest[RequestPathSize=5,tag={-111,7,83,101,110,115,111,114,49,0},elementNb=1],backPlane=0,slot=0]]] as data 6f002c000608dfd600000000000000000000000000000000000000000000020000000000b2001c00520220062401059d0e004c05910753656e736f723100010001000000
[nioEventLoopGroup-2-1] TRACE org.apache.plc4x.java.spi.GeneratedDriverByteToMessageCodec - Receiving bytes, trying to decode Message...
[nioEventLoopGroup-2-1] INFO org.apache.plc4x.java.eip.readwrite.io.CipRRDataIO - Expected constant value 0 but got 5 for reserved field.
[nioEventLoopGroup-2-1] WARN org.apache.plc4x.java.spi.GeneratedDriverByteToMessageCodec - Error decoding package with content [6f0016000608dfd600000000000000000000000000000000000000000500020000000000b2000600d20001011103]: Unsupported case for discriminated type
org.apache.plc4x.java.spi.generation.ParseException: Unsupported case for discriminated type
at org.apache.plc4x.java.eip.readwrite.io.CipServiceIO.staticParse(CipServiceIO.java:100)
at org.apache.plc4x.java.eip.readwrite.io.CipExchangeIO.staticParse(CipExchangeIO.java:96)
at org.apache.plc4x.java.eip.readwrite.io.CipRRDataIO.staticParse(CipRRDataIO.java:80)
at org.apache.plc4x.java.eip.readwrite.io.EipPacketIO.staticParse(EipPacketIO.java:101)
at org.apache.plc4x.java.eip.readwrite.io.EipPacketIO.parse(EipPacketIO.java:48)
at org.apache.plc4x.java.eip.readwrite.io.EipPacketIO.parse(EipPacketIO.java:42)
at org.apache.plc4x.java.spi.GeneratedDriverByteToMessageCodec.decode(GeneratedDriverByteToMessageCodec.java:79)
at io.netty.handler.codec.ByteToMessageCodec$1.decode(ByteToMessageCodec.java:42)
at io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:507)
at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:446)
at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:276)
at io.netty.handler.codec.ByteToMessageCodec.channelRead(ByteToMessageCodec.java:103)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166)
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:719)
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:655)
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:581)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:986)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.base/java.lang.Thread.run(Thread.java:829)
I'd appreciate any help, or if someone could point me in the right direction on how to solve this. Thanks in advance!
Guillermo
For anyone having the same issue:
Apparently, from what I've read, the Micro820 series has limitations reading tags. In the case of the PLC I was using (2080-LC20-20QWB) I had to:
1. Establish a session by sending a "Forward_Open" request. The details of the request are on page 105 of the CIP specification.
2. Once the connection is established, send multiple "Read Data" requests with the names of the tags that you wish to read.
3. Finally, send a "Forward_Close" request in order to close the connection.
And that's it.
I cloned this project and implemented the steps that I just described. You can find my version here.
And here's an example showing how to use it:
try {
    EtherNetIP plc = new EtherNetIP("10.0.1.100", 0);
    plc.connectTcp();
    List<TagReadReply> tags = plc.connectAndReadTags("Sensor1", "Sensor2", "Sensor10");
    tags.forEach(each -> {
        logger.info("Tag name: " + each.getTag() + " is valid? " + each.isValid());
    });
} catch (Exception e) {
    e.printStackTrace();
    logger.severe("Exception occurred: " + e.getMessage());
}
I have a use case where I need to run WebClient calls in parallel to 10 upstream systems, with a timeout of 450 ms. A few of the upstream systems return a result in 80-150 ms at p99 latency, and a few take around ~300 ms. I need to collect the data from each of them and then do the ranking. I need a suggestion here: if I use collectList() and then go for the ranking, will that block the thread by any chance, since I need to get the data from all of them first and only then go for the ranking? Can someone explain the thread flow here? Thanks in advance!
Example 1:
Flux.just("red", "white", "blue")
.log()
.map(String::toUpperCase)
.subscribe(System.out::println);
Output:
17:02:18.479 [main] INFO reactor.Flux.Array.1 - | onSubscribe([Synchronous Fuseable] FluxArray.ArraySubscription)
17:02:18.483 [main] INFO reactor.Flux.Array.1 - | request(unbounded)
17:02:18.483 [main] INFO reactor.Flux.Array.1 - | onNext(red)
RED
17:02:18.484 [main] INFO reactor.Flux.Array.1 - | onNext(white)
WHITE
17:02:18.484 [main] INFO reactor.Flux.Array.1 - | onNext(blue)
BLUE
17:02:18.485 [main] INFO reactor.Flux.Array.1 - | onComplete()
Example 2:
Flux.just("red", "white", "blue")
.log()
.map(String::toUpperCase)
.collectList()
.subscribe(System.out::println);
Output:
17:03:10.556 [main] DEBUG reactor.util.Loggers$LoggerFactory - Using Slf4j logging framework
17:03:10.622 [main] INFO reactor.Flux.Array.1 - | onSubscribe([Synchronous Fuseable] FluxArray.ArraySubscription)
17:03:10.625 [main] INFO reactor.Flux.Array.1 - | request(unbounded)
17:03:10.626 [main] INFO reactor.Flux.Array.1 - | onNext(red)
17:03:10.626 [main] INFO reactor.Flux.Array.1 - | onNext(white)
17:03:10.626 [main] INFO reactor.Flux.Array.1 - | onNext(blue)
17:03:10.627 [main] INFO reactor.Flux.Array.1 - | onComplete()
[RED, WHITE, BLUE]
In Example #2, I have added .collectList().
Example #3
public Mono<List<String>> fetchUsers(List<String> userIds) {
    return Flux.fromIterable(userIds)
            .subscribeOn(Schedulers.elastic())
            .flatMap(this::getUser)
            .collectList();
}

// This will be an HTTP call with p99 latency of 450 ms
public Mono<String> getUser(String id) {
    return webClient.get()
            .uri("/user/{id}", id)
            .retrieve()
            .bodyToMono(String.class);
}
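For reference, this is roughly the collect-then-rank shape I have in mind, in the same class as Example #3 (rank() is just a placeholder for my ranking logic, the 450 ms value mirrors the overall budget mentioned above, and java.time.Duration is assumed to be imported):

public Mono<List<String>> fetchAndRankUsers(List<String> userIds) {
    return Flux.fromIterable(userIds)
            .flatMap(this::getUser)             // the upstream calls run concurrently
            .collectList()                      // buffers results as they arrive, emits one List
            .timeout(Duration.ofMillis(450))    // overall budget for the whole fan-out
            .map(this::rank);                   // ranking runs once the full list is emitted
}

private List<String> rank(List<String> users) {
    // placeholder for the actual ranking logic
    return users;
}

My understanding is that collectList() only buffers the emissions and emits the list downstream, so no thread sits blocked while waiting, but I would like to confirm that.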
An attempt to print the elements of an array in JSON does not appear to work in Karate DSL. I want to understand whether I missed something here.
I am trying to run the JSON arrays section described in https://github.com/intuit/karate#json-arrays
Scenario: Test
    Given def cat =
    """
    {
      name: 'Billie',
      kittens: [
        { id: 23, name: 'Bob' },
        { id: 42, name: 'Wild' }
      ]
    }
    """
    * print ('\n')
    * print ('printing all kittens - working')
    * print cat.kittens
    * print ('printing all id of kittens - not working')
    * print cat.kittens[*].id
When executed as java -jar karate-0.9.0.jar test.feature, it returns this result:
23:30:29.778 [ForkJoinPool-1-worker-1] WARN com.intuit.karate - skipping bootstrap configuration: could not find or read file: classpath:karate-config.js
23:30:29.893 [ForkJoinPool-1-worker-1] INFO com.intuit.karate - [print]
23:30:29.899 [ForkJoinPool-1-worker-1] INFO com.intuit.karate - [print] printing all kittens - working
23:30:29.921 [ForkJoinPool-1-worker-1] INFO com.intuit.karate - [print] [
{
"id": 23,
"name": "Bob"
},
{
"id": 42,
"name": "Wild"
}
]
23:30:29.935 [ForkJoinPool-1-worker-1] INFO com.intuit.karate - [print] printing all id of kittens - not working
23:30:29.944 [ForkJoinPool-1-worker-1] INFO com.intuit.karate - [print]
23:30:30.022 [ForkJoinPool-1-worker-1] INFO com.intuit.karate.Runner - <<pass>> feature 1 of 1: test.feature
---------------------------------------------------------
feature: test.feature
report: target\test.json
scenarios: 1 | passed: 1 | failed: 0 | time: 0.1339
---------------------------------------------------------
Karate version: 0.9.0
======================================================
elapsed: 1.28 | threads: 1 | thread time: 0.13
features: 1 | ignored: 0 | efficiency: 0.10
scenarios: 1 | passed: 1 | failed: 0
======================================================
But I am not seeing the id elements being printed out.
Is this expected behaviour?
How can this be achieved? Can it only be done by setting a temp variable using * def temp = cat.kittens[*].id?
The print statement supports only JavaScript and not JsonPath:
* def ids = $cat.kittens[*].id
* print ids
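If the intermediate variable feels unnecessary, karate.jsonPath should also work directly inside the print expression, since print evaluates JavaScript (a sketch based on the documented karate.jsonPath API):
* print karate.jsonPath(cat, '$.kittens[*].id')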