I have installed Spark 1.5.1 in standalone mode and am using the spark-submit command to get results. I would actually like to get the results through Spark's hidden REST API, but once the Spark driver stops I am no longer able to get them from the REST API.
After digging into spark-submit, I found that the SparkContext and the driver are stopped as soon as the output has been produced.
Can anyone please help?
Here is the console output:
result--Lines with a: 60, lines with b: 29
15/11/01 08:46:08 INFO SparkContext: Invoking stop() from shutdown hook
15/11/01 08:46:08 INFO ContextHandler: stopped o.e.j.s.ServletContextHandler{/metrics/json,null}
15/11/01 08:46:08 INFO ContextHandler: stopped o.e.j.s.ServletContextHandler{/stages/stage/kill,null}
15/11/01 08:46:08 INFO ContextHandler: stopped o.e.j.s.ServletContextHandler{/api,null}
15/11/01 08:46:08 INFO ContextHandler: stopped o.e.j.s.ServletContextHandler{/,null}
15/11/01 08:46:08 INFO ContextHandler: stopped o.e.j.s.ServletContextHandler{/static,null}
15/11/01 08:46:08 INFO ContextHandler: stopped o.e.j.s.ServletContextHandler{/executors/threadDump/json,null}
15/11/01 08:46:08 INFO ContextHandler: stopped o.e.j.s.ServletContextHandler{/executors/threadDump,null}
15/11/01 08:46:08 INFO ContextHandler: stopped o.e.j.s.ServletContextHandler{/executors/json,null}
15/11/01 08:46:08 INFO ContextHandler: stopped o.e.j.s.ServletContextHandler{/executors,null}
15/11/01 08:46:08 INFO ContextHandler: stopped o.e.j.s.ServletContextHandler{/environment/json,null}
15/11/01 08:46:08 INFO ContextHandler: stopped o.e.j.s.ServletContextHandler{/environment,null}
15/11/01 08:46:08 INFO ContextHandler: stopped o.e.j.s.ServletContextHandler{/storage/rdd/json,null}
15/11/01 08:46:08 INFO ContextHandler: stopped o.e.j.s.ServletContextHandler{/storage/rdd,null}
15/11/01 08:46:08 INFO ContextHandler: stopped o.e.j.s.ServletContextHandler{/storage/json,null}
15/11/01 08:46:08 INFO ContextHandler: stopped o.e.j.s.ServletContextHandler{/storage,null}
15/11/01 08:46:08 INFO ContextHandler: stopped o.e.j.s.ServletContextHandler{/stages/pool/json,null}
15/11/01 08:46:08 INFO ContextHandler: stopped o.e.j.s.ServletContextHandler{/stages/pool,null}
15/11/01 08:46:08 INFO ContextHandler: stopped o.e.j.s.ServletContextHandler{/stages/stage/json,null}
15/11/01 08:46:08 INFO ContextHandler: stopped o.e.j.s.ServletContextHandler{/stages/stage,null}
15/11/01 08:46:08 INFO ContextHandler: stopped o.e.j.s.ServletContextHandler{/stages/json,null}
15/11/01 08:46:08 INFO ContextHandler: stopped o.e.j.s.ServletContextHandler{/stages,null}
15/11/01 08:46:08 INFO ContextHandler: stopped o.e.j.s.ServletContextHandler{/jobs/job/json,null}
15/11/01 08:46:08 INFO ContextHandler: stopped o.e.j.s.ServletContextHandler{/jobs/job,null}
15/11/01 08:46:08 INFO ContextHandler: stopped o.e.j.s.ServletContextHandler{/jobs/json,null}
15/11/01 08:46:08 INFO ContextHandler: stopped o.e.j.s.ServletContextHandler{/jobs,null}
15/11/01 08:46:08 INFO SparkUI: Stopped Spark web UI at http://182.95.208.242:4040
15/11/01 08:46:08 INFO DAGScheduler: Stopping DAGScheduler
15/11/01 08:46:08 INFO SparkDeploySchedulerBackend: Shutting down all executors
15/11/01 08:46:08 INFO SparkDeploySchedulerBackend: Asking each executor to shut down
15/11/01 08:46:09 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
15/11/01 08:46:09 INFO MemoryStore: MemoryStore cleared
15/11/01 08:46:09 INFO BlockManager: BlockManager stopped
15/11/01 08:46:09 INFO BlockManagerMaster: BlockManagerMaster stopped
15/11/01 08:46:09 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
15/11/01 08:46:09 INFO SparkContext: Successfully stopped SparkContext
15/11/01 08:46:09 INFO ShutdownHookManager: Shutdown hook called
15/11/01 08:46:09 INFO ShutdownHookManager: Deleting directory /tmp/spark-a2d4622c-d3c0-447b-aa73-21a3b6af1539
15/11/01 08:46:09 INFO RemoteActorRefProvider$RemotingTerminator: Shutting down remote daemon.
15/11/01 08:46:09 INFO RemoteActorRefProvider$RemotingTerminator: Remote daemon shut down; proceeding with flushing remote transports.
ipieawb1@master:~/spark-1.5.1/bin$
I am trying to run a simple example:
import java.io.File;
import java.util.Arrays;
import java.util.List;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;

import com.google.common.io.Files;

public class SimpleApp {
    public static void main(String[] args) throws InterruptedException {
        SparkConf conf = new SparkConf().setAppName("Simple Application")
                .setMaster("spark://master.genpact.com:7078")
                .set("spark.eventLog.enabled", "true");
        File tempDir = Files.createTempDir();
        tempDir.deleteOnExit();
        JavaSparkContext sc = new JavaSparkContext(conf);
        List<String> strings = Arrays.asList("Hello", "World");
        JavaRDD<String> s1 = sc.parallelize(strings);
        JavaRDD<String> s2 = sc.parallelize(strings);
        // Varargs
        JavaRDD<String> sUnion = sc.union(s1, s2);
        System.out.println("Union *******" + sUnion.collect());
    }
}
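Since the SparkContext tears down its web UI (and the /api endpoints with it) when the driver exits, one option is to keep spark.eventLog.enabled=true, point spark.eventLog.dir at a persistent directory, and run a Spark History Server; finished applications then remain queryable through its monitoring REST API. A minimal sketch, assuming a history server on the master host at the default port 18080 (the class name and host here are placeholders):

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class HistoryServerClient {
    public static void main(String[] args) throws Exception {
        // The history server replays completed applications from the event log
        // directory, so this still works after the driver has stopped.
        URL url = new URL("http://master.genpact.com:18080/api/v1/applications");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("GET");
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream()))) {
            String line;
            while ((line = in.readLine()) != null) {
                System.out.println(line); // JSON array describing each application
            }
        }
    }
}

Per-application details should then be reachable under /api/v1/applications/<app-id>/jobs and similar paths.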
I'm having an error when trying to call a stored procedure (SP) in BigQuery with Airflow's BigQueryInsertJobOperator. I have tried many different syntaxes and none of them seem to work. I pulled the same SQL out of the SP, put it into a file, and it ran fine.
Below is the code I used to try to execute the SP in BigQuery.
import os

from airflow import DAG
from airflow.providers.google.cloud.operators.bigquery import BigQueryInsertJobOperator
from airflow.utils.dates import days_ago

PROJECT_ID = os.environ.get("GCP_PROJECT_ID", "project-id")
Dataset = os.environ.get("GCP_Dataset", "dataset")

with DAG(dag_id='dag_id', default_args=default_args, schedule_interval="@daily",
         start_date=days_ago(1), catchup=False) as dag:
    Call_SP = BigQueryInsertJobOperator(
        task_id='Call_SP',
        configuration={
            "query": {
                "query": "CALL `" + PROJECT_ID + "." + Dataset + "." + "SP`();",
                # "query": "{% include 'Scripts/Script.sql' %}",
                "useLegacySql": False,
            }
        }
    )

    Call_SP
I can see it is outputting the CALL I am expecting:
[2022-06-30, 17:32:07 UTC] {bigquery.py:2247} INFO - Executing: {'query': {'query': 'CALL `project-id.dataset.SP`();', 'useLegacySql': False}}
[2022-06-30, 17:32:07 UTC] {bigquery.py:1560} INFO - Inserting job airflow_Call_SP_2022_06_30T16_47_13_748417_00_00_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
[2022-06-30, 17:32:13 UTC] {taskinstance.py:1776} ERROR - Task failed with exception
Traceback (most recent call last):
File "/opt/python3.8/lib/python3.8/site-packages/airflow/providers/google/cloud/operators/bigquery.py", line 2269, in execute
table = job.to_api_repr()["configuration"]["query"]["destinationTable"]
KeyError: 'destinationTable'
It just doesn't make sense to me, since my SP is just a single MERGE statement.
You hit a known bug in apache-airflow-providers-google that was fixed in a PR and released in apache-airflow-providers-google version 8.0.0.
To solve your issue, you should upgrade the Google provider version.
If for some reason you can't upgrade the provider, you can create a custom operator from the code in that PR and use it until you are able to upgrade the provider version.
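If you do go the custom-operator route, here is a minimal sketch of the idea — an illustration only, not the actual PR code, with a made-up class name; it simply tolerates the missing destinationTable key that CALL/script jobs trigger in the buggy versions:

from airflow.providers.google.cloud.operators.bigquery import BigQueryInsertJobOperator


class ScriptSafeBigQueryInsertJobOperator(BigQueryInsertJobOperator):
    """Tolerates jobs (e.g. CALL statements) that produce no destinationTable."""

    def execute(self, context):
        try:
            return super().execute(context)
        except KeyError as err:
            # Affected provider versions assume every query job has a
            # destinationTable; scripting jobs like CALL do not have one.
            if "destinationTable" in str(err):
                self.log.info("Job finished without a destination table; ignoring.")
                return None
            raise

Note this runs after the job has already been inserted, so the stored procedure itself still executes; the subclass only suppresses the bookkeeping failure.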
Expounding on @Elad Kalif's answer, I was able to update apache-airflow-providers-google to version 8.0.0 or later using this GCP documentation on how to install a package from PyPI:
gcloud composer environments update <your-environment> \
    --location <your-environment-location> \
    --update-pypi-package "apache-airflow-providers-google>=8.1.0"
After installation, running this code (derived from your given code) yielded successful results:
import datetime

from airflow import models
from airflow import DAG
from airflow.operators import bash
from airflow.providers.google.cloud.operators.bigquery import BigQueryInsertJobOperator

PROJECT_ID = "<your-proj-id>"
Dataset = "<your-dataset>"

YESTERDAY = datetime.datetime.now() - datetime.timedelta(days=1)

default_args = {
    'owner': 'Composer Example',
    'depends_on_past': False,
    'email': [''],
    'email_on_failure': False,
    'email_on_retry': False,
    'retries': 1,
    'retry_delay': datetime.timedelta(minutes=5),
    'start_date': YESTERDAY,
}

with models.DAG(dag_id='dag_id', default_args=default_args,
                schedule_interval="@daily", catchup=False) as dag:
    Call_SP = BigQueryInsertJobOperator(
        task_id='Call_SP',
        configuration={
            "query": {
                "query": "CALL `" + PROJECT_ID + "." + Dataset + "." + "<your-sp>`();",
                # "query": "{% include 'Scripts/Script.sql' %}",
                "useLegacySql": False,
            }
        }
    )

    Call_SP
Logs:
*** Reading remote log from gs:///2022-07-04T01:02:28.122529+00:00/1.log.
[2022-07-04, 01:02:33 UTC] {taskinstance.py:1044} INFO - Dependencies all met for <TaskInstance: dag_id.Call_SP manual__2022-07-04T01:02:28.122529+00:00 [queued]>
[2022-07-04, 01:02:33 UTC] {taskinstance.py:1044} INFO - Dependencies all met for <TaskInstance: dag_id.Call_SP manual__2022-07-04T01:02:28.122529+00:00 [queued]>
[2022-07-04, 01:02:33 UTC] {taskinstance.py:1250} INFO -
--------------------------------------------------------------------------------
[2022-07-04, 01:02:33 UTC] {taskinstance.py:1251} INFO - Starting attempt 1 of 2
[2022-07-04, 01:02:33 UTC] {taskinstance.py:1252} INFO -
--------------------------------------------------------------------------------
[2022-07-04, 01:02:33 UTC] {taskinstance.py:1271} INFO - Executing <Task(BigQueryInsertJobOperator): Call_SP> on 2022-07-04 01:02:28.122529+00:00
[2022-07-04, 01:02:33 UTC] {standard_task_runner.py:52} INFO - Started process 2461 to run task
[2022-07-04, 01:02:33 UTC] {standard_task_runner.py:79} INFO - Running: ['airflow', 'tasks', 'run', 'dag_id', 'Call_SP', 'manual__2022-07-04T01:02:28.122529+00:00', '--job-id', '29', '--raw', '--subdir', 'DAGS_FOLDER/20220704.py', '--cfg-path', '/tmp/tmpjpm9jt5h', '--error-file', '/tmp/tmpnoplzt6r']
[2022-07-04, 01:02:33 UTC] {standard_task_runner.py:80} INFO - Job 29: Subtask Call_SP
[2022-07-04, 01:02:34 UTC] {task_command.py:298} INFO - Running <TaskInstance: dag_id.Call_SP manual__2022-07-04T01:02:28.122529+00:00 [running]> on host airflow-worker-r66xj
[2022-07-04, 01:02:34 UTC] {taskinstance.py:1448} INFO - Exporting the following env vars:
AIRFLOW_CTX_DAG_EMAIL=
AIRFLOW_CTX_DAG_OWNER=Composer Example
AIRFLOW_CTX_DAG_ID=dag_id
AIRFLOW_CTX_TASK_ID=Call_SP
AIRFLOW_CTX_EXECUTION_DATE=2022-07-04T01:02:28.122529+00:00
AIRFLOW_CTX_DAG_RUN_ID=manual__2022-07-04T01:02:28.122529+00:00
[2022-07-04, 01:02:34 UTC] {bigquery.py:2243} INFO - Executing: {'query': {'query': 'CALL ``();', 'useLegacySql': False}}
[2022-07-04, 01:02:34 UTC] {credentials_provider.py:324} INFO - Getting connection using `google.auth.default()` since no key file is defined for hook.
[2022-07-04, 01:02:35 UTC] {bigquery.py:1562} INFO - Inserting job airflow_dag_id_Call_SP_2022_07_04T01_02_28_122529_00_00_d8681b858989cd5f36b8b9f4942a96a0
[2022-07-04, 01:02:36 UTC] {taskinstance.py:1279} INFO - Marking task as SUCCESS. dag_id=dag_id, task_id=Call_SP, execution_date=20220704T010228, start_date=20220704T010233, end_date=20220704T010236
[2022-07-04, 01:02:37 UTC] {local_task_job.py:154} INFO - Task exited with return code 0
[2022-07-04, 01:02:37 UTC] {local_task_job.py:264} INFO - 0 downstream tasks scheduled from follow-on schedule check
Project History in BigQuery:
I am using Testcontainers (version 1.17.2) in a Spring Boot project and I am trying to spin up a RabbitMQ container. It seems like the RabbitMQ container starts up successfully, but it releases the connection as soon as it is up.
I can see some errors in the logs, but after that I can also see that the test container is started. I am flummoxed as to why I am seeing this error, and whether the container actually started or not.
Pasting an excerpt from the logs:
15:00:44.007 [main] DEBUG org.testcontainers.containers.output.WaitingConsumer - STDOUT: 2022-06-29 05:00:24.477316+00:00 [info] <0.703.0> * rabbitmq_management_agent
15:00:44.007 [main] DEBUG org.testcontainers.containers.output.WaitingConsumer - STDOUT: 2022-06-29 05:00:24.477316+00:00 [info] <0.703.0> * rabbitmq_web_dispatch
15:00:44.007 [main] DEBUG org.testcontainers.containers.output.WaitingConsumer - STDOUT: 2022-06-29 05:00:24.477316+00:00 [info] <0.703.0> * rabbitmq_management
15:00:44.007 [main] DEBUG org.testcontainers.containers.output.WaitingConsumer - STDOUT: 2022-06-29 05:00:24.477316+00:00 [info] <0.703.0> * rabbitmq_prometheus
15:00:44.007 [main] DEBUG org.testcontainers.containers.output.WaitingConsumer - STDOUT: 2022-06-29 05:00:24.477316+00:00 [info] <0.703.0> Server startup complete; 4 plugins started.
15:00:44.007 [main] DEBUG com.github.dockerjava.zerodep.shaded.org.apache.hc.client5.http.impl.classic.InternalHttpClient - ep-0000000C: cancel
15:00:44.007 [main] DEBUG com.github.dockerjava.zerodep.shaded.org.apache.hc.client5.http.impl.io.DefaultManagedHttpClientConnection - http-outgoing-1: close connection IMMEDIATE
15:00:44.008 [main] DEBUG com.github.dockerjava.zerodep.shaded.org.apache.hc.client5.http.impl.classic.InternalHttpClient - ep-0000000C: endpoint closed
15:00:44.008 [main] DEBUG com.github.dockerjava.zerodep.shaded.org.apache.hc.client5.http.impl.classic.InternalHttpClient - ep-0000000C: discarding endpoint
15:00:44.008 [main] DEBUG com.github.dockerjava.zerodep.shaded.org.apache.hc.client5.http.impl.io.PoolingHttpClientConnectionManager - ep-0000000C: releasing endpoint
15:00:44.008 [main] DEBUG com.github.dockerjava.zerodep.shaded.org.apache.hc.client5.http.impl.io.PoolingHttpClientConnectionManager - ep-0000000C: connection is not kept alive
15:00:44.008 [docker-java-stream--540868428] DEBUG com.github.dockerjava.zerodep.shaded.org.apache.hc.client5.http.wire - http-outgoing-1 << "end of stream"
15:00:44.008 [main] DEBUG com.github.dockerjava.zerodep.shaded.org.apache.hc.client5.http.impl.io.PoolingHttpClientConnectionManager - ep-0000000C: connection released [route: {}->npipe://localhost:2375][total available: 0; route allocated: 0 of 2147483647; total allocated: 0 of 2147483647]
15:00:44.008 [main] DEBUG com.github.dockerjava.zerodep.shaded.org.apache.hc.client5.http.wire - http-outgoing-1 << "[read] I/O error: java.nio.channels.ClosedChannelException"
15:00:44.008 [docker-java-stream--540868428] DEBUG com.github.dockerjava.zerodep.shaded.org.apache.hc.client5.http.wire - http-outgoing-1 << "[read] I/O error: java.nio.channels.ClosedChannelException"
15:00:44.008 [docker-java-stream--540868428] DEBUG com.github.dockerjava.zerodep.ApacheDockerHttpClientImpl$ApacheResponse - Failed to close the response
java.io.IOException: java.nio.channels.ClosedChannelException
at java.base/java.nio.channels.Channels$2.read(Channels.java:240)
at com.github.dockerjava.zerodep.shaded.org.apache.hc.client5.http.impl.io.LoggingInputStream.read(LoggingInputStream.java:81)
at com.github.dockerjava.zerodep.shaded.org.apache.hc.core5.http.impl.io.SessionInputBufferImpl.fillBuffer(SessionInputBufferImpl.java:149)
at com.github.dockerjava.zerodep.shaded.org.apache.hc.core5.http.impl.io.SessionInputBufferImpl.readLine(SessionInputBufferImpl.java:280)
at com.github.dockerjava.zerodep.shaded.org.apache.hc.core5.http.impl.io.ChunkedInputStream.getChunkSize(ChunkedInputStream.java:261)
at com.github.dockerjava.zerodep.shaded.org.apache.hc.core5.http.impl.io.ChunkedInputStream.nextChunk(ChunkedInputStream.java:222)
at com.github.dockerjava.zerodep.shaded.org.apache.hc.core5.http.impl.io.ChunkedInputStream.read(ChunkedInputStream.java:147)
at com.github.dockerjava.zerodep.shaded.org.apache.hc.core5.http.impl.io.ChunkedInputStream.close(ChunkedInputStream.java:314)
at com.github.dockerjava.zerodep.shaded.org.apache.hc.core5.io.Closer.close(Closer.java:48)
at com.github.dockerjava.zerodep.shaded.org.apache.hc.core5.http.impl.io.IncomingHttpEntity.close(IncomingHttpEntity.java:111)
at com.github.dockerjava.zerodep.shaded.org.apache.hc.core5.http.io.entity.HttpEntityWrapper.close(HttpEntityWrapper.java:120)
at com.github.dockerjava.zerodep.shaded.org.apache.hc.core5.io.Closer.close(Closer.java:48)
at com.github.dockerjava.zerodep.shaded.org.apache.hc.core5.http.message.BasicClassicHttpResponse.close(BasicClassicHttpResponse.java:93)
at com.github.dockerjava.zerodep.shaded.org.apache.hc.client5.http.impl.classic.CloseableHttpResponse.close(CloseableHttpResponse.java:200)
at com.github.dockerjava.zerodep.ApacheDockerHttpClientImpl$ApacheResponse.close(ApacheDockerHttpClientImpl.java:256)
at org.testcontainers.shaded.com.github.dockerjava.core.DefaultInvocationBuilder.lambda$executeAndStream$1(DefaultInvocationBuilder.java:277)
at java.base/java.lang.Thread.run(Thread.java:833)
Caused by: java.nio.channels.ClosedChannelException: null
....................................
15:00:44.009 [main] INFO 🐳 [rabbitmq:3.9.13-management-alpine] - Container rabbitmq:3.9.13-management-alpine started in PT7.8752359S
Java config:
import java.time.Duration;

import org.springframework.context.ApplicationContextInitializer;
import org.springframework.context.ConfigurableApplicationContext;
import org.springframework.test.context.support.TestPropertySourceUtils;
import org.testcontainers.containers.RabbitMQContainer;
import org.testcontainers.junit.jupiter.Container;

public abstract class RabbitMqTestContainerConfiguration {

    private static final int RABBITMQ_DEFAULT_PORT = 5672;

    @Container
    public static RabbitMQContainer rabbitMQContainer = new RabbitMQContainer("rabbitmq:3.9.13-management-alpine")
            .withExposedPorts(RABBITMQ_DEFAULT_PORT)
            .withStartupTimeout(Duration.ofMinutes(3));

    public static class Initializer implements ApplicationContextInitializer<ConfigurableApplicationContext> {
        @Override
        public void initialize(ConfigurableApplicationContext configurableApplicationContext) {
            TestPropertySourceUtils.addInlinedPropertiesToEnvironment(configurableApplicationContext,
                    "spring.rabbitmq.host=" + rabbitMQContainer.getHost(),
                    "spring.rabbitmq.port=" + rabbitMQContainer.getMappedPort(RABBITMQ_DEFAULT_PORT),
                    "spring.rabbitmq.username=" + rabbitMQContainer.getAdminUsername(),
                    "spring.rabbitmq.password=" + rabbitMQContainer.getAdminPassword());
        }
    }
}
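As a side note, whether the container really started can be asserted from test code rather than inferred from the docker-java DEBUG noise; a small sketch using Testcontainers' own accessors, assuming it runs inside one of your tests:

// Sketch: check the container state directly instead of parsing logs.
rabbitMQContainer.start(); // no-op if the container is already running
System.out.println("RabbitMQ running: " + rabbitMQContainer.isRunning()
        + " at " + rabbitMQContainer.getHost()
        + ":" + rabbitMQContainer.getMappedPort(RABBITMQ_DEFAULT_PORT));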
I'm trying to learn Nextflow, but it's not going very well. I started with the tutorial on this site: https://www.nextflow.io/docs/latest/getstarted.html (I installed Nextflow myself).
I copied this script:
#!/usr/bin/env nextflow

params.str = 'Hello world!'

process splitLetters {
    output:
    file 'chunk_*' into letters

    """
    printf '${params.str}' | split -b 6 - chunk_
    """
}

process convertToUpper {
    input:
    file x from letters.flatten()

    output:
    stdout result

    """
    cat $x | tr '[a-z]' '[A-Z]'
    """
}

result.view { it.trim() }
But when I run it (nextflow run tutorial.nf), I get this in the terminal:
N E X T F L O W ~ version 22.03.1-edge
Launching `tutorial.nf` [intergalactic_waddington] DSL2 - revision: be42f295f4
No such variable: result
-- Check script 'tutorial.nf' at line: 29 or see '.nextflow.log' file for more details
And in the log file I have this:
avr.-20 14:14:12.319 [main] DEBUG nextflow.cli.Launcher - $> nextflow run tutorial.nf
avr.-20 14:14:12.375 [main] INFO nextflow.cli.CmdRun - N E X T F L O W ~ version 22.03.1-edge
avr.-20 14:14:12.466 [main] INFO nextflow.cli.CmdRun - Launching `tutorial.nf` [intergalactic_waddington] DSL2 - revision: be42f295f4
avr.-20 14:14:12.481 [main] DEBUG nextflow.plugin.PluginsFacade - Setting up plugin manager > mode=prod; plugins-dir=/home/user/.nextflow/plugins; core-plugins: nf-amazon@1.6.0,nf-azure@0.13.0,nf-console@1.0.3,nf-ga4gh@1.0.3,nf-google@1.1.4,nf-sqldb@0.3.0,nf-tower@1.4.0
avr.-20 14:14:12.483 [main] DEBUG nextflow.plugin.PluginsFacade - Plugins default=[]
avr.-20 14:14:12.494 [main] INFO org.pf4j.DefaultPluginStatusProvider - Enabled plugins: []
avr.-20 14:14:12.495 [main] INFO org.pf4j.DefaultPluginStatusProvider - Disabled plugins: []
avr.-20 14:14:12.501 [main] INFO org.pf4j.DefaultPluginManager - PF4J version 3.4.1 in 'deployment' mode
avr.-20 14:14:12.515 [main] INFO org.pf4j.AbstractPluginManager - No plugins
avr.-20 14:14:12.571 [main] DEBUG nextflow.Session - Session uuid: 67344021-bff5-4131-9c07-e101756fb5ea
avr.-20 14:14:12.571 [main] DEBUG nextflow.Session - Run name: intergalactic_waddington
avr.-20 14:14:12.573 [main] DEBUG nextflow.Session - Executor pool size: 8
avr.-20 14:14:12.604 [main] DEBUG nextflow.cli.CmdRun -
Version: 22.03.1-edge build 5695
avr.-20 14:14:12.629 [main] DEBUG nextflow.Session - Work-dir: /home/user/Documents/formations/nextflow/testScript/work [ext2/ext3]
avr.-20 14:14:12.629 [main] DEBUG nextflow.Session - Script base path does not exist or is not a directory: /home/user/Documents/formations/nextflow/testScript/bin
avr.-20 14:14:12.637 [main] DEBUG nextflow.executor.ExecutorFactory - Extension executors providers=[]
avr.-20 14:14:12.648 [main] DEBUG nextflow.Session - Observer factory: DefaultObserverFactory
avr.-20 14:14:12.669 [main] DEBUG nextflow.cache.CacheFactory - Using Nextflow cache factory: nextflow.cache.DefaultCacheFactory
avr.-20 14:14:12.678 [main] DEBUG nextflow.util.CustomThreadPool - Creating default thread pool > poolSize: 9; maxThreads: 1000
avr.-20 14:14:12.741 [main] DEBUG nextflow.Session - Session start invoked
avr.-20 14:14:13.423 [main] DEBUG nextflow.script.ScriptRunner - > Launching execution
avr.-20 14:14:13.446 [main] DEBUG nextflow.Session - Session aborted -- Cause: No such property: result for class: Script_6634cd79
avr.-20 14:14:13.463 [main] ERROR nextflow.cli.Launcher - @unknown
groovy.lang.MissingPropertyException: No such property: result for class: Script_6634cd79
at org.codehaus.groovy.runtime.ScriptBytecodeAdapter.unwrap(ScriptBytecodeAdapter.java:65)
at org.codehaus.groovy.runtime.callsite.PogoGetPropertySite.getProperty(PogoGetPropertySite.java:51)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.callGroovyObjectGetProperty(AbstractCallSite.java:341)
at Script_6634cd79.runScript(Script_6634cd79:29)
at nextflow.script.BaseScript.runDsl2(BaseScript.groovy:170)
at nextflow.script.BaseScript.run(BaseScript.groovy:203)
at nextflow.script.ScriptParser.runScript(ScriptParser.groovy:220)
at nextflow.script.ScriptRunner.run(ScriptRunner.groovy:212)
at nextflow.script.ScriptRunner.execute(ScriptRunner.groovy:120)
at nextflow.cli.CmdRun.run(CmdRun.groovy:334)
at nextflow.cli.Launcher.run(Launcher.groovy:480)
at nextflow.cli.Launcher.main(Launcher.groovy:639)
What should I do?
Thanks a lot for your help.
Nextflow includes a new language extension called DSL2, which became the default syntax in version 22.03.0-edge. However, it's possible to override this behavior, either by adding nextflow.enable.dsl=1 to the top of your script or by setting the -dsl1 option when you run it:
nextflow run tutorial.nf -dsl1
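For the in-script option, the pin goes at the very top of the file, e.g.:

#!/usr/bin/env nextflow
nextflow.enable.dsl=1

params.str = 'Hello world!'
// ... rest of the tutorial script unchanged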
Alternatively, roll back to the latest stable release (as opposed to an 'edge' pre-release). It's possible to specify the Nextflow version using the NXF_VER environment variable:
NXF_VER=21.10.6 nextflow run tutorial.nf
I find DSL2 drastically simplifies the writing of complex workflows and would highly recommend getting started with it. Unfortunately, the documentation is lagging a bit, but lifting your script over is relatively straightforward once you get the hang of things:
params.str = 'Hello world!'

process splitLetters {
    output:
    path 'chunk_*'

    """
    printf '${params.str}' | split -b 6 - chunk_
    """
}

process convertToUpper {
    input:
    path x

    output:
    stdout

    """
    cat $x | tr '[a-z]' '[A-Z]'
    """
}

workflow {
    splitLetters | flatten() | convertToUpper | view()
}
Results:
nextflow run tutorial.nf -dsl2
N E X T F L O W ~ version 21.10.6
Launching `tutorial.nf` [prickly_kilby] - revision: 0c6f835b9c
executor > local (3)
[b8/84a1de] process > splitLetters [100%] 1 of 1 ✔
[86/fd19ea] process > convertToUpper (2) [100%] 2 of 2 ✔
HELLO
WORLD!
I'm using Apache's PLC4X library to read some tags from an Allen-Bradley Micro820 PLC (2080-LC20-20QWB). So far I am able to establish a connection with the device, but when I try to execute a read request I get the error shown in the stack trace below.
I'm running a Java Maven-based project with the following dependencies:
<dependency>
    <groupId>org.apache.plc4x</groupId>
    <artifactId>plc4j-api</artifactId>
    <version>0.9.1</version>
</dependency>
<!-- Ethernet / IP driver -->
<dependency>
    <groupId>org.apache.plc4x</groupId>
    <artifactId>plc4j-driver-eip</artifactId>
    <version>0.9.1</version>
</dependency>
This is the code I'm running on Ubuntu 18.04 with JDK 11:
package com.example.plctest;

import org.apache.plc4x.java.PlcDriverManager;
import org.apache.plc4x.java.api.PlcConnection;
import org.apache.plc4x.java.api.messages.PlcReadRequest;
import org.apache.plc4x.java.api.messages.PlcReadResponse;
import org.apache.plc4x.java.eip.readwrite.field.EipField;
import org.apache.plc4x.java.eip.readwrite.types.CIPDataTypeCode;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class ReadPlcDemo {

    private static final Logger LOGGER = LoggerFactory.getLogger(ReadPlcDemo.class);

    private static String connectionString = "eip://192.168.1.100?backplane=0&slot=0";

    public static void main(String[] args) {
        // Establish a connection with the PLC
        try (PlcConnection connection = new PlcDriverManager().getConnection(connectionString)) {
            if (connection.getMetadata().canRead()) {
                LOGGER.info("PLC can read!");
            }
            // Create the read request
            EipField field = new EipField("Sensor1", CIPDataTypeCode.BOOL);
            // EipField field = new EipField("Sensor2", CIPDataTypeCode.SINT);
            // EipField field = new EipField("Sensor3", CIPDataTypeCode.SINT);
            PlcReadRequest.Builder builder = connection.readRequestBuilder();
            builder.addItem("value-" + field.getTag(), field);
            PlcReadRequest readRequest = builder.build();

            // Execute the request
            PlcReadResponse response;
            try {
                response = readRequest.execute().get();
            } catch (Exception e) {
                e.printStackTrace();
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
This is the result from my stack trace; I set the log level to 'trace' to get better insight into what's going on:
/home/ghinojosa/.jdks/corretto-11.0.14.1/bin/java -Dio.netty.tryReflectionSetAccessible=true --add-opens java.base/jdk.internal.misc=ALL-UNNAMED -javaagent:/home/ghinojosa/.local/share/JetBrains/Toolbox/apps/IDEA-C/ch-0/213.6777.52/lib/idea_rt.jar=44017:/home/ghinojosa/.local/share/JetBrains/Toolbox/apps/IDEA-C/ch-0/213.6777.52/bin -Dfile.encoding=UTF-8 -classpath /home/ghinojosa/IdeaProjects/plc-test/target/classes:/home/ghinojosa/Downloads/eeip-library.jar:/home/ghinojosa/.m2/repository/org/apache/plc4x/plc4j-api/0.9.1/plc4j-api-0.9.1.jar:/home/ghinojosa/.m2/repository/org/apache/commons/commons-lang3/3.12.0/commons-lang3-3.12.0.jar:/home/ghinojosa/.m2/repository/com/fasterxml/jackson/core/jackson-annotations/2.12.5/jackson-annotations-2.12.5.jar:/home/ghinojosa/.m2/repository/org/apache/plc4x/plc4j-driver-eip/0.9.1/plc4j-driver-eip-0.9.1.jar:/home/ghinojosa/.m2/repository/org/apache/plc4x/plc4j-spi/0.9.1/plc4j-spi-0.9.1.jar:/home/ghinojosa/.m2/repository/io/netty/netty-codec/4.1.67.Final/netty-codec-4.1.67.Final.jar:/home/ghinojosa/.m2/repository/io/netty/netty-common/4.1.67.Final/netty-common-4.1.67.Final.jar:/home/ghinojosa/.m2/repository/io/netty/netty-transport/4.1.67.Final/netty-transport-4.1.67.Final.jar:/home/ghinojosa/.m2/repository/io/netty/netty-resolver/4.1.67.Final/netty-resolver-4.1.67.Final.jar:/home/ghinojosa/.m2/repository/commons-beanutils/commons-beanutils/1.9.4/commons-beanutils-1.9.4.jar:/home/ghinojosa/.m2/repository/commons-logging/commons-logging/1.2/commons-logging-1.2.jar:/home/ghinojosa/.m2/repository/commons-collections/commons-collections/3.2.2/commons-collections-3.2.2.jar:/home/ghinojosa/.m2/repository/com/fasterxml/jackson/core/jackson-core/2.12.5/jackson-core-2.12.5.jar:/home/ghinojosa/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.12.5/jackson-databind-2.12.5.jar:/home/ghinojosa/.m2/repository/io/vavr/vavr/0.10.4/vavr-0.10.4.jar:/home/ghinojosa/.m2/repository/io/vavr/vavr-match/0.10.4/vavr-match-0.10.4.jar:/home/ghinojosa/.m2/repository/com/github/jinahya/bit-io/1.4.3/bit-io-1.4.3.jar:/home/ghinojosa/.m2/repository/commons-codec/commons-codec/1.15/commons-codec-1.15.jar:/home/ghinojosa/.m2/repository/org/apache/plc4x/plc4j-transport-tcp/0.9.1/plc4j-transport-tcp-0.9.1.jar:/home/ghinojosa/.m2/repository/io/netty/netty-buffer/4.1.67.Final/netty-buffer-4.1.67.Final.jar:/home/ghinojosa/.m2/repository/org/slf4j/slf4j-api/1.8.0-beta4/slf4j-api-1.8.0-beta4.jar:/home/ghinojosa/.m2/repository/org/slf4j/slf4j-simple/1.8.0-beta4/slf4j-simple-1.8.0-beta4.jar com.example.plctest.ReadPlcDemo
[main] INFO org.apache.plc4x.java.PlcDriverManager - Instantiating new PLC Driver Manager with class loader jdk.internal.loader.ClassLoaders$AppClassLoader@5c8da962
[main] INFO org.apache.plc4x.java.PlcDriverManager - Registering available drivers...
[main] INFO org.apache.plc4x.java.PlcDriverManager - Registering driver for Protocol eip (EthernetIP)
[main] DEBUG io.netty.util.internal.logging.InternalLoggerFactory - Using SLF4J as the default logging framework
[main] DEBUG io.netty.util.ResourceLeakDetector - -Dio.netty.leakDetection.level: simple
[main] DEBUG io.netty.util.ResourceLeakDetector - -Dio.netty.leakDetection.targetRecords: 4
[main] DEBUG io.netty.util.ResourceLeakDetectorFactory - Loaded default ResourceLeakDetector: io.netty.util.ResourceLeakDetector@4de5031f
[main] DEBUG io.netty.util.internal.PlatformDependent0 - -Dio.netty.noUnsafe: false
[main] DEBUG io.netty.util.internal.PlatformDependent0 - Java version: 11
[main] DEBUG io.netty.util.internal.PlatformDependent0 - sun.misc.Unsafe.theUnsafe: available
[main] DEBUG io.netty.util.internal.PlatformDependent0 - sun.misc.Unsafe.copyMemory: available
[main] DEBUG io.netty.util.internal.PlatformDependent0 - java.nio.Buffer.address: available
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by io.netty.util.internal.ReflectionUtil (file:/home/ghinojosa/.m2/repository/io/netty/netty-common/4.1.67.Final/netty-common-4.1.67.Final.jar) to constructor java.nio.DirectByteBuffer(long,int)
WARNING: Please consider reporting this to the maintainers of io.netty.util.internal.ReflectionUtil
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
[main] DEBUG io.netty.util.internal.PlatformDependent0 - direct buffer constructor: available
[main] DEBUG io.netty.util.internal.PlatformDependent0 - java.nio.Bits.unaligned: available, true
[main] DEBUG io.netty.util.internal.PlatformDependent0 - jdk.internal.misc.Unsafe.allocateUninitializedArray(int): available
[main] DEBUG io.netty.util.internal.PlatformDependent0 - java.nio.DirectByteBuffer.<init>(long, int): available
[main] DEBUG io.netty.util.internal.PlatformDependent - sun.misc.Unsafe: available
[main] DEBUG io.netty.util.internal.PlatformDependent - maxDirectMemory: 8390705152 bytes (maybe)
[main] DEBUG io.netty.util.internal.PlatformDependent - -Dio.netty.tmpdir: /tmp (java.io.tmpdir)
[main] DEBUG io.netty.util.internal.PlatformDependent - -Dio.netty.bitMode: 64 (sun.arch.data.model)
[main] DEBUG io.netty.util.internal.PlatformDependent - -Dio.netty.maxDirectMemory: 8390705152 bytes
[main] DEBUG io.netty.util.internal.PlatformDependent - -Dio.netty.uninitializedArrayAllocationThreshold: 1024
[main] DEBUG io.netty.util.internal.CleanerJava9 - java.nio.ByteBuffer.cleaner(): available
[main] DEBUG io.netty.util.internal.PlatformDependent - -Dio.netty.noPreferDirect: false
[main] DEBUG io.netty.util.internal.PlatformDependent - org.jctools-core.MpscChunkedArrayQueue: available
[main] DEBUG io.netty.channel.MultithreadEventLoopGroup - -Dio.netty.eventLoopThreads: 8
[main] DEBUG io.netty.util.internal.InternalThreadLocalMap - -Dio.netty.threadLocalMap.stringBuilder.initialSize: 1024
[main] DEBUG io.netty.util.internal.InternalThreadLocalMap - -Dio.netty.threadLocalMap.stringBuilder.maxSize: 4096
[main] DEBUG io.netty.channel.nio.NioEventLoop - -Dio.netty.noKeySetOptimization: false
[main] DEBUG io.netty.channel.nio.NioEventLoop - -Dio.netty.selectorAutoRebuildThreshold: 512
[main] TRACE io.netty.channel.nio.NioEventLoop - instrumented a special java.util.Set into: sun.nio.ch.EPollSelectorImpl@76505305
[main] TRACE io.netty.channel.nio.NioEventLoop - instrumented a special java.util.Set into: sun.nio.ch.EPollSelectorImpl@7b98f307
[main] TRACE io.netty.channel.nio.NioEventLoop - instrumented a special java.util.Set into: sun.nio.ch.EPollSelectorImpl@4802796d
[main] TRACE io.netty.channel.nio.NioEventLoop - instrumented a special java.util.Set into: sun.nio.ch.EPollSelectorImpl@34123d65
[main] TRACE io.netty.channel.nio.NioEventLoop - instrumented a special java.util.Set into: sun.nio.ch.EPollSelectorImpl@59474f18
[main] TRACE io.netty.channel.nio.NioEventLoop - instrumented a special java.util.Set into: sun.nio.ch.EPollSelectorImpl@65fb9ffc
[main] TRACE io.netty.channel.nio.NioEventLoop - instrumented a special java.util.Set into: sun.nio.ch.EPollSelectorImpl@3e694b3f
[main] TRACE io.netty.channel.nio.NioEventLoop - instrumented a special java.util.Set into: sun.nio.ch.EPollSelectorImpl@1bb5a082
[main] INFO org.apache.plc4x.java.transport.tcp.TcpChannelFactory - Configuring Bootstrap with org.apache.plc4x.java.eip.readwrite.configuration.EIPConfiguration@5aa9e4eb
[main] DEBUG io.netty.channel.DefaultChannelId - -Dio.netty.processId: 10908 (auto-detected)
[main] DEBUG io.netty.util.NetUtil - -Djava.net.preferIPv4Stack: false
[main] DEBUG io.netty.util.NetUtil - -Djava.net.preferIPv6Addresses: false
[main] DEBUG io.netty.util.NetUtilInitializations - Loopback interface: lo (lo, 0:0:0:0:0:0:0:1%lo)
[main] DEBUG io.netty.util.NetUtil - /proc/sys/net/core/somaxconn: 4096
[main] DEBUG io.netty.channel.DefaultChannelId - -Dio.netty.machineId: f4:06:69:ff:fe:d6:97:69 (auto-detected)
[main] DEBUG io.netty.buffer.PooledByteBufAllocator - -Dio.netty.allocator.numHeapArenas: 8
[main] DEBUG io.netty.buffer.PooledByteBufAllocator - -Dio.netty.allocator.numDirectArenas: 8
[main] DEBUG io.netty.buffer.PooledByteBufAllocator - -Dio.netty.allocator.pageSize: 8192
[main] DEBUG io.netty.buffer.PooledByteBufAllocator - -Dio.netty.allocator.maxOrder: 11
[main] DEBUG io.netty.buffer.PooledByteBufAllocator - -Dio.netty.allocator.chunkSize: 16777216
[main] DEBUG io.netty.buffer.PooledByteBufAllocator - -Dio.netty.allocator.smallCacheSize: 256
[main] DEBUG io.netty.buffer.PooledByteBufAllocator - -Dio.netty.allocator.normalCacheSize: 64
[main] DEBUG io.netty.buffer.PooledByteBufAllocator - -Dio.netty.allocator.maxCachedBufferCapacity: 32768
[main] DEBUG io.netty.buffer.PooledByteBufAllocator - -Dio.netty.allocator.cacheTrimInterval: 8192
[main] DEBUG io.netty.buffer.PooledByteBufAllocator - -Dio.netty.allocator.cacheTrimIntervalMillis: 0
[main] DEBUG io.netty.buffer.PooledByteBufAllocator - -Dio.netty.allocator.useCacheForAllThreads: true
[main] DEBUG io.netty.buffer.PooledByteBufAllocator - -Dio.netty.allocator.maxCachedByteBuffersPerChunk: 1023
[main] DEBUG io.netty.buffer.ByteBufUtil - -Dio.netty.allocator.type: pooled
[main] DEBUG io.netty.buffer.ByteBufUtil - -Dio.netty.threadLocalDirectBufferSize: 0
[main] DEBUG io.netty.buffer.ByteBufUtil - -Dio.netty.maxThreadLocalCharBufferSize: 16384
[main] TRACE org.apache.plc4x.java.spi.connection.DefaultNettyPlcConnection - Channel was created, firing ChannelCreated Event
[nioEventLoopGroup-2-1] DEBUG org.apache.plc4x.java.spi.Plc4xNettyWrapper - User Event triggered org.apache.plc4x.java.spi.events.ConnectEvent@645e8927
[nioEventLoopGroup-2-1] DEBUG org.apache.plc4x.java.eip.readwrite.protocol.EipProtocolLogic - Sending RegisterSession EIP Package
[nioEventLoopGroup-2-1] TRACE org.apache.plc4x.java.spi.Plc4xNettyWrapper - Adding Response Handler ...
[nioEventLoopGroup-2-1] TRACE org.apache.plc4x.java.spi.Plc4xNettyWrapper - Sending to wire EipConnectionRequest[sessionHandle=0,status=0,senderContext={0,0,0,0,0,0,0,0},options=0]
[nioEventLoopGroup-2-1] DEBUG org.apache.plc4x.java.spi.Plc4xNettyWrapper - Forwarding request to plc EipConnectionRequest[sessionHandle=0,status=0,senderContext={0,0,0,0,0,0,0,0},options=0]
[nioEventLoopGroup-2-1] DEBUG io.netty.util.Recycler - -Dio.netty.recycler.maxCapacityPerThread: 4096
[nioEventLoopGroup-2-1] DEBUG io.netty.util.Recycler - -Dio.netty.recycler.maxSharedCapacityFactor: 2
[nioEventLoopGroup-2-1] DEBUG io.netty.util.Recycler - -Dio.netty.recycler.linkCapacity: 16
[nioEventLoopGroup-2-1] DEBUG io.netty.util.Recycler - -Dio.netty.recycler.ratio: 8
[nioEventLoopGroup-2-1] DEBUG io.netty.util.Recycler - -Dio.netty.recycler.delayedQueue.ratio: 8
[nioEventLoopGroup-2-1] DEBUG io.netty.buffer.AbstractByteBuf - -Dio.netty.buffer.checkAccessible: true
[nioEventLoopGroup-2-1] DEBUG io.netty.buffer.AbstractByteBuf - -Dio.netty.buffer.checkBounds: true
[nioEventLoopGroup-2-1] DEBUG io.netty.util.ResourceLeakDetectorFactory - Loaded default ResourceLeakDetector: io.netty.util.ResourceLeakDetector@20edcbc7
[nioEventLoopGroup-2-1] DEBUG org.apache.plc4x.java.spi.GeneratedDriverByteToMessageCodec - Sending bytes to PLC for message EipConnectionRequest[sessionHandle=0,status=0,senderContext={0,0,0,0,0,0,0,0},options=0] as data 65000400000000000000000000000000000000000000000001000000
[nioEventLoopGroup-2-1] TRACE org.apache.plc4x.java.spi.GeneratedDriverByteToMessageCodec - Receiving bytes, trying to decode Message...
[nioEventLoopGroup-2-1] TRACE org.apache.plc4x.java.spi.Plc4xNettyWrapper - Decoding EipConnectionRequest[sessionHandle=3604940806,status=0,senderContext={0,0,0,0,0,0,0,0},options=0]
[nioEventLoopGroup-2-1] TRACE org.apache.plc4x.java.spi.Plc4xNettyWrapper - Checking handler HandlerRegistration#0 for Object of type EipConnectionRequest
[nioEventLoopGroup-2-1] TRACE org.apache.plc4x.java.spi.Plc4xNettyWrapper - Handler HandlerRegistration#0 has right expected type EipPacket, checking condition
[nioEventLoopGroup-2-1] TRACE org.apache.plc4x.java.spi.Plc4xNettyWrapper - Handler HandlerRegistration#0 accepts element EipConnectionRequest[sessionHandle=3604940806,status=0,senderContext={0,0,0,0,0,0,0,0},options=0], calling handle method
[nioEventLoopGroup-2-1] DEBUG org.apache.plc4x.java.eip.readwrite.protocol.EipProtocolLogic - Got assigned with Session 3604940806
[nioEventLoopGroup-2-1] TRACE org.apache.plc4x.java.spi.Plc4xNettyWrapper - Firing Connected!
[main] INFO com.example.plctest.ReadPlcDemo - PLC can read!
[main] TRACE org.apache.plc4x.java.spi.transaction.RequestTransactionManager - Submission of transaction 0
[pool-2-thread-1] TRACE org.apache.plc4x.java.spi.transaction.RequestTransactionManager - Start execution of transaction 0
[pool-2-thread-1] TRACE org.apache.plc4x.java.spi.Plc4xNettyWrapper - Adding Response Handler ...
[pool-2-thread-1] TRACE org.apache.plc4x.java.spi.transaction.RequestTransactionManager - Completed execution of transaction 0
[nioEventLoopGroup-2-1] DEBUG org.apache.plc4x.java.spi.Plc4xNettyWrapper - Forwarding request to plc CipRRData[sessionHandle=3604940806,status=0,senderContext={0,0,0,0,0,0,0,0},options=0,exchange=CipExchange[service=CipUnconnectedRequest[unconnectedService=CipReadRequest[RequestPathSize=5,tag={-111,7,83,101,110,115,111,114,49,0},elementNb=1],backPlane=0,slot=0]]]
[nioEventLoopGroup-2-1] DEBUG org.apache.plc4x.java.spi.GeneratedDriverByteToMessageCodec - Sending bytes to PLC for message CipRRData[sessionHandle=3604940806,status=0,senderContext={0,0,0,0,0,0,0,0},options=0,exchange=CipExchange[service=CipUnconnectedRequest[unconnectedService=CipReadRequest[RequestPathSize=5,tag={-111,7,83,101,110,115,111,114,49,0},elementNb=1],backPlane=0,slot=0]]] as data 6f002c000608dfd600000000000000000000000000000000000000000000020000000000b2001c00520220062401059d0e004c05910753656e736f723100010001000000
[nioEventLoopGroup-2-1] TRACE org.apache.plc4x.java.spi.GeneratedDriverByteToMessageCodec - Receiving bytes, trying to decode Message...
[nioEventLoopGroup-2-1] INFO org.apache.plc4x.java.eip.readwrite.io.CipRRDataIO - Expected constant value 0 but got 5 for reserved field.
[nioEventLoopGroup-2-1] WARN org.apache.plc4x.java.spi.GeneratedDriverByteToMessageCodec - Error decoding package with content [6f0016000608dfd600000000000000000000000000000000000000000500020000000000b2000600d20001011103]: Unsupported case for discriminated type
org.apache.plc4x.java.spi.generation.ParseException: Unsupported case for discriminated type
at org.apache.plc4x.java.eip.readwrite.io.CipServiceIO.staticParse(CipServiceIO.java:100)
at org.apache.plc4x.java.eip.readwrite.io.CipExchangeIO.staticParse(CipExchangeIO.java:96)
at org.apache.plc4x.java.eip.readwrite.io.CipRRDataIO.staticParse(CipRRDataIO.java:80)
at org.apache.plc4x.java.eip.readwrite.io.EipPacketIO.staticParse(EipPacketIO.java:101)
at org.apache.plc4x.java.eip.readwrite.io.EipPacketIO.parse(EipPacketIO.java:48)
at org.apache.plc4x.java.eip.readwrite.io.EipPacketIO.parse(EipPacketIO.java:42)
at org.apache.plc4x.java.spi.GeneratedDriverByteToMessageCodec.decode(GeneratedDriverByteToMessageCodec.java:79)
at io.netty.handler.codec.ByteToMessageCodec$1.decode(ByteToMessageCodec.java:42)
at io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:507)
at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:446)
at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:276)
at io.netty.handler.codec.ByteToMessageCodec.channelRead(ByteToMessageCodec.java:103)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166)
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:719)
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:655)
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:581)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:986)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.base/java.lang.Thread.run(Thread.java:829)
I'd appreciate any help, or if someone could point me in the right direction on how to solve this. Thanks in advance!
Guillermo
For anyone having the same issue: apparently, from what I've read, the Micro820 series has limitations reading tags. In the case of the PLC I was using (2080-LC20-20QWB) I had to:
1. Establish a session by sending a "Forward_Open" request. The details of the request are on page 105 of the CIP specification.
2. Once the connection is established, send multiple "Read Data" requests with the names of the tags you wish to read.
3. Finally, send a "Forward_Close" request to close the connection.
And that's it.
I cloned this project and implemented the steps that I just described. You can find my version here.
And here's an example showing how to use it:
try {
    EtherNetIP plc = new EtherNetIP("10.0.1.100", 0);
    plc.connectTcp();
    List<TagReadReply> tags = plc.connectAndReadTags("Sensor1", "Sensor2", "Sensor10");
    tags.forEach(each -> {
        logger.info("Tag name:" + each.getTag() + " is valid ? " + each.isValid());
    });
} catch (Exception e) {
    e.printStackTrace();
    logger.severe("Exception occurred:" + e.getMessage());
}
I'm writing end-to-end tests for a React Native app using WebdriverIO, Appium, and Cucumber.js, and deploying the tests on AWS Device Farm.
The tests work fine both on the Android emulator and on an actual real device. On AWS, though, they throw errors such as (node:3113) DeprecationWarning: Got: "options.rejectUnauthorized" is now deprecated, please use "options.https.rejectUnauthorized" and RequestError: socket hang up. This is shown in more detail below:
[DeviceFarm] echo "Start Appium Node test"
Start Appium Node test
[DeviceFarm] npm run only-one-test-aws
> e2e-cucumber-wdio-appium@1.0.0 only-one-test-aws /tmp/scratch0EgKio.scratch/test-packageFvDXet
> npx wdio ./wdio.android.conf.js --spec ./features/recarga/verificacacaoCodigoDeAcessoInvalido.feature
Execution of 1 spec files started at 2020-06-04T18:23:19.220Z
2020-06-04T18:23:19.223Z INFO @wdio/cli:launcher: Run onPrepare hook
2020-06-04T18:23:19.254Z INFO @wdio/cli:launcher: Run onWorkerStart hook
2020-06-04T18:23:19.256Z INFO @wdio/local-runner: Start worker 0-0 with arg: ./wdio.android.conf.js,--spec,./features/recarga/verificacacaoCodigoDeAcessoInvalido.feature
[0-0] 2020-06-04T18:23:20.961Z INFO @wdio/local-runner: Run worker command: run
[0-0] 2020-06-04T18:23:21.062Z DEBUG @wdio/local-runner:utils: init remote session
[0-0] 2020-06-04T18:23:21.095Z INFO webdriverio: Initiate new session using the ./protocol-stub protocol
[0-0] RUNNING in undefined - /features/recarga/verificacacaoCodigoDeAcessoInvalido.feature
[0-0] 2020-06-04T18:23:22.697Z DEBUG @wdio/local-runner:utils: init remote session
[0-0] 2020-06-04T18:23:22.714Z INFO webdriverio: Initiate new session using the webdriver protocol
[0-0] 2020-06-04T18:23:22.717Z INFO webdriver: [POST] http://127.0.0.1:4723/wd/hub/session
[0-0] 2020-06-04T18:23:22.717Z INFO webdriver: DATA {
capabilities: { alwaysMatch: {}, firstMatch: [ {} ] },
desiredCapabilities: {}
}
[0-0] (node:3113) DeprecationWarning: Got: "options.rejectUnauthorized" is now deprecated, please use "options.https.rejectUnauthorized"
[0-0] 2020-06-04T18:24:52.853Z WARN webdriver: Request timed out! Consider increasing the "connectionRetryTimeout" option.
[0-0] 2020-06-04T18:24:52.856Z INFO webdriver: Retrying 1/3
[0-0] 2020-06-04T18:24:52.857Z INFO webdriver: [POST] http://127.0.0.1:4723/wd/hub/session
[0-0] 2020-06-04T18:24:52.857Z INFO webdriver: DATA {
capabilities: { alwaysMatch: {}, firstMatch: [ {} ] },
desiredCapabilities: {}
}
[0-0] 2020-06-04T18:24:54.004Z DEBUG webdriver: request failed due to response error: unknown error
[0-0] 2020-06-04T18:24:54.004Z WARN webdriver: Request failed with status 500 due to An unknown server-side error occurred while processing the command. Original error: end of central directory record signature not found
[0-0] 2020-06-04T18:24:54.005Z INFO webdriver: Retrying 2/3
[0-0] 2020-06-04T18:24:54.014Z INFO webdriver: [POST] http://127.0.0.1:4723/wd/hub/session
[0-0] 2020-06-04T18:24:54.014Z INFO webdriver: DATA {
capabilities: { alwaysMatch: {}, firstMatch: [ {} ] },
desiredCapabilities: {}
}
[0-0] 2020-06-04T18:24:54.030Z ERROR webdriver: RequestError: socket hang up
at ClientRequest.<anonymous> (/tmp/scratch0EgKio.scratch/test-packageFvDXet/node_modules/got/dist/source/core/index.js:817:25)
at Object.onceWrapper (events.js:417:26)
at ClientRequest.emit (events.js:322:22)
at ClientRequest.EventEmitter.emit (domain.js:482:12)
at ClientRequest.origin.emit (/tmp/scratch0EgKio.scratch/test-packageFvDXet/node_modules/@szmarczak/http-timer/dist/source/index.js:39:20)
at Socket.socketOnEnd (_http_client.js:453:9)
at Socket.emit (events.js:322:22)
at Socket.EventEmitter.emit (domain.js:482:12)
at endReadableNT (_stream_readable.js:1187:12)
at connResetException (internal/errors.js:608:14)
at Socket.socketOnEnd (_http_client.js:453:23)
at Socket.emit (events.js:322:22)
at Socket.EventEmitter.emit (domain.js:482:12)
at endReadableNT (_stream_readable.js:1187:12)
at processTicksAndRejections (internal/process/task_queues.js:84:21)
[0-0] 2020-06-04T18:24:54.031Z ERROR @wdio/runner: Error: Failed to create session.
socket hang up
at startWebDriverSession (/tmp/scratch0EgKio.scratch/test-packageFvDXet/node_modules/webdriver/build/utils.js:45:11)
at processTicksAndRejections (internal/process/task_queues.js:97:5)
[0-0] Error: Failed to create session.
socket hang up
2020-06-04T18:24:54.147Z DEBUG @wdio/local-runner: Runner 0-0 finished with exit code 1
[0-0] FAILED in undefined - /features/recarga/verificacacaoCodigoDeAcessoInvalido.feature
2020-06-04T18:24:54.148Z INFO @wdio/cli:launcher: Run onComplete hook
Spec Files: 0 passed, 1 failed, 1 total (100% completed) in 00:01:34
2020-06-04T18:24:54.164Z INFO @wdio/local-runner: Shutting down spawned worker
2020-06-04T18:24:54.416Z INFO @wdio/local-runner: Waiting for 0 to shut down gracefully
2020-06-04T18:24:54.417Z INFO @wdio/local-runner: shutting down
npm ERR! code ELIFECYCLE
npm ERR! errno 1
npm ERR! e2e-cucumber-wdio-appium@1.0.0 only-one-test-aws: `npx wdio ./wdio.android.conf.js --spec ./features/recarga/verificacacaoCodigoDeAcessoInvalido.feature`
npm ERR! Exit status 1
npm ERR!
npm ERR! Failed at the e2e-cucumber-wdio-appium@1.0.0 only-one-test-aws script.
npm ERR! This is probably not a problem with npm. There is likely additional logging output above.
npm ERR! A complete log of this run can be found in:
npm ERR! /home/device-farm/.npm/_logs/2020-06-04T18_24_54_506Z-debug.log
[DEVICEFARM] ########### Entering phase post_test ###########
[DEVICEFARM] ########### Finish executing testspec ###########
[DEVICEFARM] ########### Setting upload permissions ###########
[DEVICEFARM] Tearing down your device. Your tests report will come shortly.
I also tried this question's solution of installing the webdriverio and @wdio/cli dependencies globally, but it did not work for me.
PS: I have previously been able to run tests successfully on AWS Device Farm using WebdriverIO and Cucumber. These errors only started showing up recently, and I still have no clue how to solve them.
Can someone please help me with this?