BigQueryInsertJobOperator call Stored Procedure error - google-bigquery

I'm getting an error when trying to call a stored procedure (SP) in BigQuery with Airflow's BigQueryInsertJobOperator. I have tried several different syntaxes and none of them seem to work. When I pulled the same SQL out of the SP, put it into a file, and ran that instead, it ran fine.
Below is the code that I tried to execute the SP in BigQuery.
PROJECT_ID = os.environ.get("GCP_PROJECT_ID", "project-id")
Dataset = os.environ.get("GCP_Dataset", "dataset")

with DAG(
    dag_id='dag_id',
    default_args=default_args,
    schedule_interval="@daily",
    start_date=days_ago(1),
    catchup=False,
) as dag:
    Call_SP = BigQueryInsertJobOperator(
        task_id='Call_SP',
        configuration={
            "query": {
                "query": "CALL `" + PROJECT_ID + "." + Dataset + "." + "SP`();",
                # "query": "{% include 'Scripts/Script.sql' %}",
                "useLegacySql": False,
            }
        },
    )

    Call_SP
I can see that the log is outputting the CALL statement I am expecting:
[2022-06-30, 17:32:07 UTC] {bigquery.py:2247} INFO - Executing: {'query': {'query': 'CALL `project-id.dataset.SP`();', 'useLegacySql': False}}
[2022-06-30, 17:32:07 UTC] {bigquery.py:1560} INFO - Inserting job airflow_Call_SP_2022_06_30T16_47_13_748417_00_00_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
[2022-06-30, 17:32:13 UTC] {taskinstance.py:1776} ERROR - Task failed with exception
Traceback (most recent call last):
File "/opt/python3.8/lib/python3.8/site-packages/airflow/providers/google/cloud/operators/bigquery.py", line 2269, in execute
table = job.to_api_repr()["configuration"]["query"]["destinationTable"]
KeyError: 'destinationTable'
It just doesn't make sense to me, since my SP is just a single MERGE statement.

You hit a known bug in apache-airflow-providers-google that was fixed in a PR and released in apache-airflow-providers-google version 8.0.0.
To solve your issue you should upgrade the google provider version.
If for some reason you can't upgrade the provider, you can create a custom operator from the code in that PR and use it until you are able to upgrade the provider version.
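If you do go the custom-operator route, one possible shape of such a wrapper is sketched below. This is a hypothetical workaround, not the actual code from the PR: the class name is made up, and it assumes the KeyError is only raised after the job has already been submitted and finished (which is what your traceback suggests), so verify that against the provider version you actually run.
from airflow.providers.google.cloud.operators.bigquery import BigQueryInsertJobOperator


class BigQueryInsertJobNoDestinationOperator(BigQueryInsertJobOperator):
    """Tolerates query jobs without a destinationTable (scripts, CALL, DDL)
    on provider versions earlier than 8.0.0."""

    def execute(self, context):
        try:
            return super().execute(context)
        except KeyError as err:
            # Pre-8.0.0 provider versions fail while reading
            # configuration.query.destinationTable from the finished job.
            if "destinationTable" not in str(err):
                raise
            self.log.info(
                "Query job has no destinationTable (e.g. a CALL statement); "
                "ignoring the missing key. Note: the job id is not returned "
                "in this fallback path."
            )
You would then use BigQueryInsertJobNoDestinationOperator in place of BigQueryInsertJobOperator in your DAG until you can upgrade.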

Expounding on @Elad Kalif's answer, I was able to update apache-airflow-providers-google to version 8.0.0 using this GCP documentation on how to install a package from PyPI:
gcloud composer environments update <your-environment> \
    --location <your-environment-location> \
    --update-pypi-package "apache-airflow-providers-google>=8.1.0"
After installation, this code (derived from your given code) yielded successful results:
import datetime

from airflow import models
from airflow import DAG
from airflow.operators import bash
from airflow.providers.google.cloud.operators.bigquery import BigQueryInsertJobOperator

PROJECT_ID = "<your-proj-id>"
Dataset = "<your-dataset>"

YESTERDAY = datetime.datetime.now() - datetime.timedelta(days=1)

default_args = {
    'owner': 'Composer Example',
    'depends_on_past': False,
    'email': [''],
    'email_on_failure': False,
    'email_on_retry': False,
    'retries': 1,
    'retry_delay': datetime.timedelta(minutes=5),
    'start_date': YESTERDAY,
}

with models.DAG(
    dag_id='dag_id',
    default_args=default_args,
    schedule_interval="@daily",
    catchup=False,
) as dag:
    Call_SP = BigQueryInsertJobOperator(
        task_id='Call_SP',
        configuration={
            "query": {
                "query": "CALL `" + PROJECT_ID + "." + Dataset + "." + "<your-sp>`();",
                # "query": "{% include 'Scripts/Script.sql' %}",
                "useLegacySql": False,
            }
        },
    )

    Call_SP
Logs:
*** Reading remote log from gs:///2022-07-04T01:02:28.122529+00:00/1.log.
[2022-07-04, 01:02:33 UTC] {taskinstance.py:1044} INFO - Dependencies all met for <TaskInstance: dag_id.Call_SP manual__2022-07-04T01:02:28.122529+00:00 [queued]>
[2022-07-04, 01:02:33 UTC] {taskinstance.py:1044} INFO - Dependencies all met for <TaskInstance: dag_id.Call_SP manual__2022-07-04T01:02:28.122529+00:00 [queued]>
[2022-07-04, 01:02:33 UTC] {taskinstance.py:1250} INFO -
--------------------------------------------------------------------------------
[2022-07-04, 01:02:33 UTC] {taskinstance.py:1251} INFO - Starting attempt 1 of 2
[2022-07-04, 01:02:33 UTC] {taskinstance.py:1252} INFO -
--------------------------------------------------------------------------------
[2022-07-04, 01:02:33 UTC] {taskinstance.py:1271} INFO - Executing <Task(BigQueryInsertJobOperator): Call_SP> on 2022-07-04 01:02:28.122529+00:00
[2022-07-04, 01:02:33 UTC] {standard_task_runner.py:52} INFO - Started process 2461 to run task
[2022-07-04, 01:02:33 UTC] {standard_task_runner.py:79} INFO - Running: ['airflow', 'tasks', 'run', 'dag_id', 'Call_SP', 'manual__2022-07-04T01:02:28.122529+00:00', '--job-id', '29', '--raw', '--subdir', 'DAGS_FOLDER/20220704.py', '--cfg-path', '/tmp/tmpjpm9jt5h', '--error-file', '/tmp/tmpnoplzt6r']
[2022-07-04, 01:02:33 UTC] {standard_task_runner.py:80} INFO - Job 29: Subtask Call_SP
[2022-07-04, 01:02:34 UTC] {task_command.py:298} INFO - Running <TaskInstance: dag_id.Call_SP manual__2022-07-04T01:02:28.122529+00:00 [running]> on host airflow-worker-r66xj
[2022-07-04, 01:02:34 UTC] {taskinstance.py:1448} INFO - Exporting the following env vars:
AIRFLOW_CTX_DAG_EMAIL=
AIRFLOW_CTX_DAG_OWNER=Composer Example
AIRFLOW_CTX_DAG_ID=dag_id
AIRFLOW_CTX_TASK_ID=Call_SP
AIRFLOW_CTX_EXECUTION_DATE=2022-07-04T01:02:28.122529+00:00
AIRFLOW_CTX_DAG_RUN_ID=manual__2022-07-04T01:02:28.122529+00:00
[2022-07-04, 01:02:34 UTC] {bigquery.py:2243} INFO - Executing: {'query': {'query': 'CALL ``();', 'useLegacySql': False}}
[2022-07-04, 01:02:34 UTC] {credentials_provider.py:324} INFO - Getting connection using `google.auth.default()` since no key file is defined for hook.
[2022-07-04, 01:02:35 UTC] {bigquery.py:1562} INFO - Inserting job airflow_dag_id_Call_SP_2022_07_04T01_02_28_122529_00_00_d8681b858989cd5f36b8b9f4942a96a0
[2022-07-04, 01:02:36 UTC] {taskinstance.py:1279} INFO - Marking task as SUCCESS. dag_id=dag_id, task_id=Call_SP, execution_date=20220704T010228, start_date=20220704T010233, end_date=20220704T010236
[2022-07-04, 01:02:37 UTC] {local_task_job.py:154} INFO - Task exited with return code 0
[2022-07-04, 01:02:37 UTC] {local_task_job.py:264} INFO - 0 downstream tasks scheduled from follow-on schedule check
Project History in BigQuery:

Related

Nextflow tutorial getting error "no such variable"

I'm trying to learn Nextflow but it's not going very well. I started with the tutorial on this site: https://www.nextflow.io/docs/latest/getstarted.html (I installed Nextflow myself).
I copied this script:
#!/usr/bin/env nextflow

params.str = 'Hello world!'

process splitLetters {
    output:
    file 'chunk_*' into letters

    """
    printf '${params.str}' | split -b 6 - chunk_
    """
}

process convertToUpper {
    input:
    file x from letters.flatten()

    output:
    stdout result

    """
    cat $x | tr '[a-z]' '[A-Z]'
    """
}

result.view { it.trim() }
But when I run it (nextflow run tutorial.nf), I get this in the terminal:
N E X T F L O W ~ version 22.03.1-edge
Launching `tutorial.nf` [intergalactic_waddington] DSL2 - revision: be42f295f4
No such variable: result
-- Check script 'tutorial.nf' at line: 29 or see '.nextflow.log' file for more details
And in the log file I have this:
avr.-20 14:14:12.319 [main] DEBUG nextflow.cli.Launcher - $> nextflow run tutorial.nf
avr.-20 14:14:12.375 [main] INFO nextflow.cli.CmdRun - N E X T F L O W ~ version 22.03.1-edge
avr.-20 14:14:12.466 [main] INFO nextflow.cli.CmdRun - Launching `tutorial.nf` [intergalactic_waddington] DSL2 - revision: be42f295f4
avr.-20 14:14:12.481 [main] DEBUG nextflow.plugin.PluginsFacade - Setting up plugin manager > mode=prod; plugins-dir=/home/user/.nextflow/plugins; core-plugins: nf-amazon#1.6.0,nf-azure#0.13.0,nf-console#1.0.3,nf-ga4gh#1.0.3,nf-google#1.1.4,nf-sqldb#0.3.0,nf-tower#1.4.0
avr.-20 14:14:12.483 [main] DEBUG nextflow.plugin.PluginsFacade - Plugins default=[]
avr.-20 14:14:12.494 [main] INFO org.pf4j.DefaultPluginStatusProvider - Enabled plugins: []
avr.-20 14:14:12.495 [main] INFO org.pf4j.DefaultPluginStatusProvider - Disabled plugins: []
avr.-20 14:14:12.501 [main] INFO org.pf4j.DefaultPluginManager - PF4J version 3.4.1 in 'deployment' mode
avr.-20 14:14:12.515 [main] INFO org.pf4j.AbstractPluginManager - No plugins
avr.-20 14:14:12.571 [main] DEBUG nextflow.Session - Session uuid: 67344021-bff5-4131-9c07-e101756fb5ea
avr.-20 14:14:12.571 [main] DEBUG nextflow.Session - Run name: intergalactic_waddington
avr.-20 14:14:12.573 [main] DEBUG nextflow.Session - Executor pool size: 8
avr.-20 14:14:12.604 [main] DEBUG nextflow.cli.CmdRun -
Version: 22.03.1-edge build 5695
avr.-20 14:14:12.629 [main] DEBUG nextflow.Session - Work-dir: /home/user/Documents/formations/nextflow/testScript/work [ext2/ext3]
avr.-20 14:14:12.629 [main] DEBUG nextflow.Session - Script base path does not exist or is not a directory: /home/user/Documents/formations/nextflow/testScript/bin
avr.-20 14:14:12.637 [main] DEBUG nextflow.executor.ExecutorFactory - Extension executors providers=[]
avr.-20 14:14:12.648 [main] DEBUG nextflow.Session - Observer factory: DefaultObserverFactory
avr.-20 14:14:12.669 [main] DEBUG nextflow.cache.CacheFactory - Using Nextflow cache factory: nextflow.cache.DefaultCacheFactory
avr.-20 14:14:12.678 [main] DEBUG nextflow.util.CustomThreadPool - Creating default thread pool > poolSize: 9; maxThreads: 1000
avr.-20 14:14:12.741 [main] DEBUG nextflow.Session - Session start invoked
avr.-20 14:14:13.423 [main] DEBUG nextflow.script.ScriptRunner - > Launching execution
avr.-20 14:14:13.446 [main] DEBUG nextflow.Session - Session aborted -- Cause: No such property: result for class: Script_6634cd79
avr.-20 14:14:13.463 [main] ERROR nextflow.cli.Launcher - #unknown
groovy.lang.MissingPropertyException: No such property: result for class: Script_6634cd79
at org.codehaus.groovy.runtime.ScriptBytecodeAdapter.unwrap(ScriptBytecodeAdapter.java:65)
at org.codehaus.groovy.runtime.callsite.PogoGetPropertySite.getProperty(PogoGetPropertySite.java:51)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.callGroovyObjectGetProperty(AbstractCallSite.java:341)
at Script_6634cd79.runScript(Script_6634cd79:29)
at nextflow.script.BaseScript.runDsl2(BaseScript.groovy:170)
at nextflow.script.BaseScript.run(BaseScript.groovy:203)
at nextflow.script.ScriptParser.runScript(ScriptParser.groovy:220)
at nextflow.script.ScriptRunner.run(ScriptRunner.groovy:212)
at nextflow.script.ScriptRunner.execute(ScriptRunner.groovy:120)
at nextflow.cli.CmdRun.run(CmdRun.groovy:334)
at nextflow.cli.Launcher.run(Launcher.groovy:480)
at nextflow.cli.Launcher.main(Launcher.groovy:639)
What should I do?
Thanks a lot for your help.
Nextflow includes a new language extension called DSL2, which became the default syntax in version 22.03.0-edge. However, it's possible to override this behavior by either adding nextflow.enable.dsl=1 to the top of your script, or by setting the -dsl1 option when you run your script:
nextflow run tutorial.nf -dsl1
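For the first option, the directive simply goes at the top of tutorial.nf (a sketch of the first lines only):
#!/usr/bin/env nextflow
nextflow.enable.dsl=1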
Alternatively, roll back to the latest release (as opposed to an 'edge' pre-release). It's possible to specify the Nextflow version using the NXF_VER environment variable:
NXF_VER=21.10.6 nextflow run tutorial.nf
I find DSL2 drastically simplifies the writing of complex workflows and would highly recommend getting started with it. Unfortunately, the documentation is lagging a bit, but lifting your script over is relatively straightforward once you get the hang of things:
params.str = 'Hello world!'

process splitLetters {
    output:
    path 'chunk_*'

    """
    printf '${params.str}' | split -b 6 - chunk_
    """
}

process convertToUpper {
    input:
    path x

    output:
    stdout

    """
    cat $x | tr '[a-z]' '[A-Z]'
    """
}

workflow {
    splitLetters | flatten() | convertToUpper | view()
}
Results:
nextflow run tutorial.nf -dsl2
N E X T F L O W ~ version 21.10.6
Launching `tutorial.nf` [prickly_kilby] - revision: 0c6f835b9c
executor > local (3)
[b8/84a1de] process > splitLetters [100%] 1 of 1 ✔
[86/fd19ea] process > convertToUpper (2) [100%] 2 of 2 ✔
HELLO
WORLD!

Karate DSL - How to use afterFeature with @Karate.Test

I used configure afterFeature with @Karate.Test, but it seems the afterFeature function is never called. However, when I run the test with @jupiter.api.Test void testParallel() {}, it works fine.
Question: Is this a bug or expected behaviour?
Thanks in advance for your help.
users.feature
Feature: Sample test

  Background:
    * configure afterScenario = function() { karate.log("I'm afterScenario"); }
    * configure afterFeature = function() { karate.log("I'm afterFeature"); }

  Scenario: Scenario 1
    * print "I'm Scenario 1"

  Scenario: Scenario 2
    * print "I'm Scenario 2"
UsersRunner.java - Does NOT work
class UsersRunner {

    @Karate.Test
    Karate testUsers() {
        return Karate.run("users").relativeTo(getClass());
    }

}
/* karate.log
11:44:01.598 [main] DEBUG com.intuit.karate.Suite - [config] classpath:karate-config.js
11:44:02.404 [main] INFO com.intuit.karate - karate.env system property was: null
11:44:02.434 [main] INFO com.intuit.karate - [print] I'm Scenario 1
11:44:02.435 [main] INFO com.intuit.karate - I'm afterScenario
11:44:02.447 [main] INFO com.intuit.karate - karate.env system property was: null
11:44:02.450 [main] INFO com.intuit.karate - [print] I'm Scenario 2
11:44:02.450 [main] INFO com.intuit.karate - I'm afterScenario
*/
ExamplesTest.java - Works
class ExamplesTest {

    @Test
    void testParallel() {
        Results results = Runner.path("classpath:examples")
                .tags("~@ignore")
                //.outputCucumberJson(true)
                .parallel(5);
        assertEquals(0, results.getFailCount(), results.getErrorMessages());
    }

}
/* karate.log
11:29:48.904 [main] DEBUG com.intuit.karate.Suite - [config] classpath:karate-config.js
11:29:48.908 [main] INFO com.intuit.karate.Suite - backed up existing 'target/karate-reports' dir to: target/karate-reports_1621308588907
11:29:49.676 [pool-1-thread-2] INFO com.intuit.karate - karate.env system property was: null
11:29:49.676 [pool-1-thread-1] INFO com.intuit.karate - karate.env system property was: null
11:29:49.707 [pool-1-thread-2] INFO com.intuit.karate - [print] I'm Scenario 2
11:29:49.707 [pool-1-thread-1] INFO com.intuit.karate - [print] I'm Scenario 1
11:29:49.707 [pool-1-thread-2] INFO com.intuit.karate - I'm afterScenario
11:29:49.707 [pool-1-thread-1] INFO com.intuit.karate - I'm afterScenario
11:29:49.709 [pool-2-thread-1] INFO com.intuit.karate - I'm afterFeature
11:29:50.116 [pool-2-thread-1] INFO com.intuit.karate.Suite - <<pass>> feature 1 of 1 (0 remaining) classpath:examples/users/users.feature
*/
Can you upgrade to the latest 1.0.1? If you still see the problem, it is a bug; please follow this process: https://github.com/intuit/karate/wiki/How-to-Submit-an-Issue
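For example, with Maven the upgrade is just a version bump of the karate-junit5 dependency (a sketch; adjust if you use Gradle or a different Karate artifact):
<dependency>
    <groupId>com.intuit.karate</groupId>
    <artifactId>karate-junit5</artifactId>
    <version>1.0.1</version>
    <scope>test</scope>
</dependency>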

How to use -a in command line?

As documented for standalone-jar, I'm trying to provide args to my feature and can't figure out how to get it to work. What am I missing?
My command line:
java -jar c:\karate\karate-0.9.1.jar -a myKey1=myValue1 TestArgs.feature
karate-config.js
function fn() {
    var env = karate.env;
    karate.log('karate.env system property was:', env);
    if (!env) {
        env = 'test';
    }
    var config = { // base config JSON
        arg: karate.properties['myKey1']
    };
    return config;
}
TestArgs.feature
Feature: test args

  Scenario: print args
    * print myKey1
    * print arg
    * print karate.properties['myKey1']
    * print karate.get('myKey1')
I don't get anything printed:
java -jar c:\karate\karate-0.9.1.jar -a myKey1=myValue1 TestArgs.feature
10:32:57.904 [main] INFO com.intuit.karate.netty.Main - Karate version: 0.9.1
10:32:58.012 [main] INFO com.intuit.karate.Runner - Karate version: 0.9.1
10:32:58.470 [ForkJoinPool-1-worker-1] INFO com.intuit.karate - karate.env system property was: null
10:32:58.489 [ForkJoinPool-1-worker-1] INFO com.intuit.karate - [print]
10:32:58.491 [ForkJoinPool-1-worker-1] INFO com.intuit.karate - [print]
10:32:58.495 [ForkJoinPool-1-worker-1] INFO com.intuit.karate - [print]
10:32:58.501 [ForkJoinPool-1-worker-1] INFO com.intuit.karate - [print]
Actually we meant to delete the docs; apologies, the -a / --args option is not supported any more.
You can of course use the karate.properties['some.key'] approach to unpack values from the command line. Also refer to how you can even get environment variables: https://github.com/intuit/karate/issues/547
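For example, a plain Java system property works with the standalone JAR (a minimal sketch; note the -D flag must come before -jar):
java -DmyKey1=myValue1 -jar c:\karate\karate-0.9.1.jar TestArgs.feature
With that, * print karate.properties['myKey1'] in your feature should print myValue1.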
My suggestion is you can use karate-config-<env>.js to read a bunch of variables from a file if needed. For example, given this feature:
Feature:
Scenario:
* print myKey
And this file karate-config-dev.js:
function() { return { myKey: 'hello' } }
You can run this command, which will auto load the config js file:
java -jar karate.jar -e dev test.feature
We will update the docs. Thanks for catching this.

Error from Json Loader in Pig

I got the error below while writing a JSON-loading script. Please let me know how to write a JsonLoader script in Pig.
Script:
x = LOAD 'hdfs://user/spanda20/pig/phone.dat' USING JsonLoader('id:chararray, phone:(home:{(num:chararray, city:chararray)})');
Data set:
{
    "id": "12345",
    "phone": {
        "home": [
            {
                "zip": "23060",
                "city": "henrico"
            },
            {
                "zip": "08902",
                "city": "northbrunswick"
            }
        ]
    }
}
2015-03-18 14:24:10,917 [main] WARN org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - Ooops! Some job has failed! Specify -stop_on_failure if you want Pig to stop immediately on failure.
2015-03-18 14:24:10,918 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - job job_1426618756946_0028 has failed! Stop running all dependent jobs
2015-03-18 14:24:10,918 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - 100% complete
2015-03-18 14:24:10,977 [main] ERROR org.apache.pig.tools.pigstats.SimplePigStats - ERROR 2997: Unable to recreate exception from backed error: AttemptID:attempt_1426618756946_0028_m_000000_3 Info:Error: org.codehaus.jackson.JsonParseException: Unexpected end-of-input: expected close marker for OBJECT (from [Source: java.io.ByteArrayInputStream@43c59008; line: 1, column: 0])
at [Source: java.io.ByteArrayInputStream@43c59008; line: 1, column: 3]
at org.codehaus.jackson.JsonParser._constructError(JsonParser.java:1291)
at org.codehaus.jackson.impl.JsonParserMinimalBase._reportError(JsonParserMinimalBase.java:385)
at org.codehaus.jackson.impl.JsonParserMinimalBase._reportInvalidEOF(JsonParserMinimalBase.java:318)
at org.codehaus.jackson.impl.JsonParserBase._handleEOF(JsonParserBase.java:354)
at org.codehaus.jackson.impl.Utf8StreamParser._skipWSOrEnd(Utf8StreamParser.java:1841)
at org.codehaus.jackson.impl.Utf8StreamParser.nextToken(Utf8StreamParser.java:275)
at org.apache.pig.builtin.JsonLoader.readField(JsonLoader.java:180)
at org.apache.pig.builtin.JsonLoader.getNext(JsonLoader.java:164)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigRecordReader.nextKeyValue(PigRecordReader.java:211)
at org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.nextKeyValue(MapTask.java:553)
at org.apache.hadoop.mapreduce.task.MapContextImpl.nextKeyValue(MapContextImpl.java:80)
at org.apache.hadoop.mapreduce.lib.map.WrappedMapper$Context.nextKeyValue(WrappedMapper.java:91)
at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:144)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:784)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:168)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1614)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:163)
2015-03-18 14:24:10,977 [main] ERROR org.apache.pig.tools.pigstats.PigStatsUtil - 1 map reduce job(s) failed!
2015-03-18 14:24:10,978 [main] INFO org.apache.pig.tools.pigstats.SimplePigStats - Script Statistics:
HadoopVersion PigVersion UserId StartedAt FinishedAt Features
2.5.0-cdh5.2.0 0.12.0-cdh5.2.0 spanda20 2015-03-18 14:23:02 2015-03-18 14:24:10 UNKNOWN
Regards
Sanjeeb
Sanjeeb - Use this JSON:
{"id":"12345","phone":{"home":[{"zip":"23060","city":"henrico"},{"zip":"08902","city":"northbrunswick"}]}}
output shall be:
(12345,({(23060,henrico),(08902,northbrunswick)}))
PS: Pig doesn't usually like "human readable" JSON. Get rid of the spaces and/or indentation, and you're good.
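For completeness, re-running the LOAD from your question against the one-line file and adding a DUMP should reproduce the output above (a minimal sketch, assuming the same HDFS path):
x = LOAD 'hdfs://user/spanda20/pig/phone.dat'
    USING JsonLoader('id:chararray, phone:(home:{(num:chararray, city:chararray)})');
DUMP x; -- prints the parsed tuple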

How do I convince Erlang's Common Test to spawn local nodes?

I'd like to have Common Test spin up some local nodes to run suites. To that end, I've got the following spec file:
{node, a, 'a@localhost'}.
{logdir, [a,master], "../logs/"}.
{init, [a], [{node_start, [{callback_module, slave}
                           %% , {erl_flags, "-pa ../ebin/"}
                           %% , {monitor_master, true}
                          ]}]}.
{suites, [a], "." , all}.
Which works okay:
> erl -sname ct@localhost
Erlang R15B03 (erts-5.9.3) [source] [64-bit] [smp:8:8] [async-threads:0] [hipe] [kernel-poll:false]
Eshell V5.9.3 (abort with ^G)
(ct@localhost)1> ct_master:run("itest/lk.spec").
=== Master Logdir ===
/Users/blt/projects/us/troutwine/locker/logs
=== Master Logger process started ===
<0.41.0>
Node a@localhost started successfully with callback slave
=== Cookie ===
'DJWDEGHCXBKRAYVPMHTX'
=== Starting Tests ===
Tests starting on: [a@localhost]
=== Test Info ===
Starting test(s) on a@localhost...
********** node_ctrl process <6888.38.0> started on a@localhost **********
Common Test starting (cwd is /Users/blt/projects/us/troutwine/locker)
Common Test: Running make in test directories...
Including the following directories:
[]
CWD set to: "/Users/blt/projects/us/troutwine/locker/logs/ct_run.a@localhost.2013-08-07_09.44.23"
TEST INFO: 1 test(s), 1 suite(s)
Testing locker.itest: Starting test (with repeated test cases)
- - - - - - - - - - - - - - - - - - - - - - - - - -
locker_SUITE:init_per_suite failed
Reason: {badmatch,{error,{"no such file or directory","locker.app"}}}
- - - - - - - - - - - - - - - - - - - - - - - - - -
Testing locker.itest: *** FAILED *** init_per_suite
Testing locker.itest: TEST COMPLETE, 0 ok, 0 failed, 100 skipped of 100 test cases
Updating /Users/blt/projects/us/troutwine/locker/logs/index.html... done
Updating /Users/blt/projects/us/troutwine/locker/logs/all_runs.html... done
=== Test Info ===
Test(s) on node a@localhost finished.
=== TEST RESULTS ===
a@localhost_____________________________{0,0,{0,100}}
=== Info ===
Updating log files
=== Info ===
[{"itest/lk.spec",ok}]
without, obviously, running any tests on the extra local node. Now, when I uncomment the extra configuration in the spec so that it looks like
{init, [a], [{node_start, [{callback_module, slave}
                           , {erl_flags, "-pa ../ebin/"}
                           , {monitor_master, true}
                          ]}]}.
the result is less than what I'd hoped for:
(ct@localhost)2> ct_master:run("itest/lk.spec").
=== Master Logdir ===
/Users/blt/projects/us/troutwine/locker/logs
=== Master Logger process started ===
<0.41.0>
=ERROR REPORT==== 7-Aug-2013::11:05:24 ===
Error in process <0.51.0> on node 'ct@localhost' with exit value: {badarg,[{erlang,open_port,[{spawn,[101,114,108,32,45,100,101,116,97,99,104,101,100,32,45,110,111,105,110,112,117,116,32,45,109,97,115,116,101,114,32,99,116,64,108,111,99,97,108,104,111,115,116,32,32,45,115,110,97,109,101,32,97,64,108,111,99,97,108,104,111,115,116,32,45,115,32,115,108,97,118,101,32,115,108,97,118,101,95,115,116,97,114,116,32,99,116,64,108,111,99,97,108,104,111,115,116,32,115,108,97,118,101,95,119,97,105,116,101,114,95,48,32,{erl_flags,"-pa ../ebin/"},{monitor_master,true}]},[stream...
Am I doing anything obviously wrong here?