How do I convince Erlang's Common Test to spawn local nodes?
I'd like to have Common Test spin up some local nodes to run suites. To that end, I've got the following spec file:
{node, a, 'a@localhost'}.
{logdir, [a,master], "../logs/"}.
{init, [a], [{node_start, [{callback_module, slave}
%% , {erl_flags, "-pa ../ebin/"}
%% , {monitor_master, true}
]}]}.
{suites, [a], "." , all}.
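(As far as I can tell, {callback_module, slave} means ct_master drives the plain OTP slave module under the hood, so the node_start section above corresponds roughly to the hand-rolled sketch below — assuming a master shell started with -sname and compiled beams under ../ebin/; this is a sanity check, not what ct_master literally executes:)

%% rough manual equivalent of the node_start section above;
%% note the extra erl flags are a plain string in slave:start/3
{ok, Host} = inet:gethostname(),
{ok, Node} = slave:start(list_to_atom(Host), a, "-pa ../ebin/"),
pong = net_adm:ping(Node).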
This spec works okay:
> erl -sname ct@localhost
Erlang R15B03 (erts-5.9.3) [source] [64-bit] [smp:8:8] [async-threads:0] [hipe] [kernel-poll:false]
Eshell V5.9.3 (abort with ^G)
(ct@localhost)1> ct_master:run("itest/lk.spec").
=== Master Logdir ===
/Users/blt/projects/us/troutwine/locker/logs
=== Master Logger process started ===
<0.41.0>
Node a@localhost started successfully with callback slave
=== Cookie ===
'DJWDEGHCXBKRAYVPMHTX'
=== Starting Tests ===
Tests starting on: [a@localhost]
=== Test Info ===
Starting test(s) on a@localhost...
********** node_ctrl process <6888.38.0> started on a@localhost **********
Common Test starting (cwd is /Users/blt/projects/us/troutwine/locker)
Common Test: Running make in test directories...
Including the following directories:
[]
CWD set to: "/Users/blt/projects/us/troutwine/locker/logs/ct_run.a@localhost.2013-08-07_09.44.23"
TEST INFO: 1 test(s), 1 suite(s)
Testing locker.itest: Starting test (with repeated test cases)
- - - - - - - - - - - - - - - - - - - - - - - - - -
locker_SUITE:init_per_suite failed
Reason: {badmatch,{error,{"no such file or directory","locker.app"}}}
- - - - - - - - - - - - - - - - - - - - - - - - - -
Testing locker.itest: *** FAILED *** init_per_suite
Testing locker.itest: TEST COMPLETE, 0 ok, 0 failed, 100 skipped of 100 test cases
Updating /Users/blt/projects/us/troutwine/locker/logs/index.html... done
Updating /Users/blt/projects/us/troutwine/locker/logs/all_runs.html... done
=== Test Info ===
Test(s) on node a@localhost finished.
=== TEST RESULTS ===
a@localhost_____________________________{0,0,{0,100}}
=== Info ===
Updating log files
=== Info ===
[{"itest/lk.spec",ok}]
without, obviously, actually running any tests on the extra local node: init_per_suite fails because locker.app isn't on the node's code path (the -pa flag is still commented out), so all 100 cases are skipped. Now, when I uncomment the extra configuration in the spec so that it looks like
{init, [a], [{node_start, [{callback_module, slave}
, {erl_flags, "-pa ../ebin/"}
, {monitor_master, true}
]}]}.
the result is less than what I'd hoped for:
(ct@localhost)2> ct_master:run("itest/lk.spec").
=== Master Logdir ===
/Users/blt/projects/us/troutwine/locker/logs
=== Master Logger process started ===
<0.41.0>
=ERROR REPORT==== 7-Aug-2013::11:05:24 ===
Error in process <0.51.0> on node 'ct@localhost' with exit value: {badarg,[{erlang,open_port,[{spawn,[101,114,108,32,45,100,101,116,97,99,104,101,100,32,45,110,111,105,110,112,117,116,32,45,109,97,115,116,101,114,32,99,116,64,108,111,99,97,108,104,111,115,116,32,32,45,115,110,97,109,101,32,97,64,108,111,99,97,108,104,111,115,116,32,45,115,32,115,108,97,118,101,32,115,108,97,118,101,95,115,116,97,114,116,32,99,116,64,108,111,99,97,108,104,111,115,116,32,115,108,97,118,101,95,119,97,105,116,101,114,95,48,32,{erl_flags,"-pa ../ebin/"},{monitor_master,true}]},[stream...
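That exit value is more readable once you notice the integer list is just the spawn command as an Erlang character list. Decoding the integer prefix (a sketch: bind it to Cmd and run io:format("~s~n", [Cmd]).) gives roughly:

erl -detached -noinput -master ct@localhost -sname a@localhost -s slave slave_start ct@localhost slave_waiter_0

followed, still inside the same list, by the raw tuples {erl_flags,"-pa ../ebin/"} and {monitor_master,true}. So the node_start options appear to be spliced into the command string instead of being consumed, and a command containing tuples is not a valid string, which is exactly why erlang:open_port({spawn, ...}) raises badarg. (That's a reading of the log, not a confirmed diagnosis.)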
Am I doing anything obviously wrong here?
Related
Nextflow tutorial getting error "no such variable"
I'm trying to learn Nextflow, but it's not going very well. I started with the tutorial on this site: https://www.nextflow.io/docs/latest/getstarted.html (I'm the one who installed Nextflow). I copied this script:

#!/usr/bin/env nextflow

params.str = 'Hello world!'

process splitLetters {
    output:
    file 'chunk_*' into letters

    """
    printf '${params.str}' | split -b 6 - chunk_
    """
}

process convertToUpper {
    input:
    file x from letters.flatten()

    output:
    stdout result

    """
    cat $x | tr '[a-z]' '[A-Z]'
    """
}

result.view { it.trim() }

But when I run it (nextflow run tutorial.nf), the terminal shows:

N E X T F L O W ~ version 22.03.1-edge
Launching `tutorial.nf` [intergalactic_waddington] DSL2 - revision: be42f295f4
No such variable: result

 -- Check script 'tutorial.nf' at line: 29 or see '.nextflow.log' file for more details

And in the log file I have this:

avr.-20 14:14:12.319 [main] DEBUG nextflow.cli.Launcher - $> nextflow run tutorial.nf
avr.-20 14:14:12.375 [main] INFO nextflow.cli.CmdRun - N E X T F L O W ~ version 22.03.1-edge
avr.-20 14:14:12.466 [main] INFO nextflow.cli.CmdRun - Launching `tutorial.nf` [intergalactic_waddington] DSL2 - revision: be42f295f4
avr.-20 14:14:12.481 [main] DEBUG nextflow.plugin.PluginsFacade - Setting up plugin manager > mode=prod; plugins-dir=/home/user/.nextflow/plugins; core-plugins: nf-amazon@1.6.0,nf-azure@0.13.0,nf-console@1.0.3,nf-ga4gh@1.0.3,nf-google@1.1.4,nf-sqldb@0.3.0,nf-tower@1.4.0
avr.-20 14:14:12.483 [main] DEBUG nextflow.plugin.PluginsFacade - Plugins default=[]
avr.-20 14:14:12.494 [main] INFO org.pf4j.DefaultPluginStatusProvider - Enabled plugins: []
avr.-20 14:14:12.495 [main] INFO org.pf4j.DefaultPluginStatusProvider - Disabled plugins: []
avr.-20 14:14:12.501 [main] INFO org.pf4j.DefaultPluginManager - PF4J version 3.4.1 in 'deployment' mode
avr.-20 14:14:12.515 [main] INFO org.pf4j.AbstractPluginManager - No plugins
avr.-20 14:14:12.571 [main] DEBUG nextflow.Session - Session uuid: 67344021-bff5-4131-9c07-e101756fb5ea
avr.-20 14:14:12.571 [main] DEBUG nextflow.Session - Run name: intergalactic_waddington
avr.-20 14:14:12.573 [main] DEBUG nextflow.Session - Executor pool size: 8
avr.-20 14:14:12.604 [main] DEBUG nextflow.cli.CmdRun - Version: 22.03.1-edge build 5695
avr.-20 14:14:12.629 [main] DEBUG nextflow.Session - Work-dir: /home/user/Documents/formations/nextflow/testScript/work [ext2/ext3]
avr.-20 14:14:12.629 [main] DEBUG nextflow.Session - Script base path does not exist or is not a directory: /home/user/Documents/formations/nextflow/testScript/bin
avr.-20 14:14:12.637 [main] DEBUG nextflow.executor.ExecutorFactory - Extension executors providers=[]
avr.-20 14:14:12.648 [main] DEBUG nextflow.Session - Observer factory: DefaultObserverFactory
avr.-20 14:14:12.669 [main] DEBUG nextflow.cache.CacheFactory - Using Nextflow cache factory: nextflow.cache.DefaultCacheFactory
avr.-20 14:14:12.678 [main] DEBUG nextflow.util.CustomThreadPool - Creating default thread pool > poolSize: 9; maxThreads: 1000
avr.-20 14:14:12.741 [main] DEBUG nextflow.Session - Session start invoked
avr.-20 14:14:13.423 [main] DEBUG nextflow.script.ScriptRunner - > Launching execution
avr.-20 14:14:13.446 [main] DEBUG nextflow.Session - Session aborted -- Cause: No such property: result for class: Script_6634cd79
avr.-20 14:14:13.463 [main] ERROR nextflow.cli.Launcher - @unknown
groovy.lang.MissingPropertyException: No such property: result for class: Script_6634cd79
	at org.codehaus.groovy.runtime.ScriptBytecodeAdapter.unwrap(ScriptBytecodeAdapter.java:65)
	at org.codehaus.groovy.runtime.callsite.PogoGetPropertySite.getProperty(PogoGetPropertySite.java:51)
	at org.codehaus.groovy.runtime.callsite.AbstractCallSite.callGroovyObjectGetProperty(AbstractCallSite.java:341)
	at Script_6634cd79.runScript(Script_6634cd79:29)
	at nextflow.script.BaseScript.runDsl2(BaseScript.groovy:170)
	at nextflow.script.BaseScript.run(BaseScript.groovy:203)
	at nextflow.script.ScriptParser.runScript(ScriptParser.groovy:220)
	at nextflow.script.ScriptRunner.run(ScriptRunner.groovy:212)
	at nextflow.script.ScriptRunner.execute(ScriptRunner.groovy:120)
	at nextflow.cli.CmdRun.run(CmdRun.groovy:334)
	at nextflow.cli.Launcher.run(Launcher.groovy:480)
	at nextflow.cli.Launcher.main(Launcher.groovy:639)

What should I do? Thanks a lot for your help.
Nextflow includes a new language extension called DSL2, which became the default syntax in version 22.03.0-edge. However, it's possible to override this behavior by either adding nextflow.enable.dsl=1 to the top of your script, or by setting the -dsl1 option when you run your script:

nextflow run tutorial.nf -dsl1

Alternatively, roll back to the latest release (as opposed to an 'edge' pre-release). It's possible to specify the Nextflow version using the NXF_VER environment variable:

NXF_VER=21.10.6 nextflow run tutorial.nf

I find DSL2 drastically simplifies the writing of complex workflows and would highly recommend getting started with it. Unfortunately, the documentation is lagging a bit, but lifting your script over is relatively straightforward once you get the hang of things:

params.str = 'Hello world!'

process splitLetters {
    output:
    path 'chunk_*'

    """
    printf '${params.str}' | split -b 6 - chunk_
    """
}

process convertToUpper {
    input:
    path x

    output:
    stdout

    """
    cat $x | tr '[a-z]' '[A-Z]'
    """
}

workflow {
    splitLetters | flatten() | convertToUpper | view()
}

Results:

nextflow run tutorial.nf -dsl2

N E X T F L O W ~ version 21.10.6
Launching `tutorial.nf` [prickly_kilby] - revision: 0c6f835b9c
executor > local (3)
[b8/84a1de] process > splitLetters [100%] 1 of 1 ✔
[86/fd19ea] process > convertToUpper (2) [100%] 2 of 2 ✔
HELLO WORLD!
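For completeness, the nextflow.enable.dsl=1 option mentioned above is a single line at the top of the tutorial script; a minimal sketch (only the second line is new, everything else is the unchanged tutorial):

#!/usr/bin/env nextflow
nextflow.enable.dsl=1   // opt back in to DSL1 on 22.03+ edge releases

params.str = 'Hello world!'
// ... rest of the original tutorial script unchanged ...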
GitLab CI: how to make the next stage dynamic?
I have a pipeline that includes two base files:

.base_integration_test.yml - integration tests without Kafka
.base_integration_test_with_kafka.yml - integration tests with Kafka

include:
  # PRODUCT
  - project: 'gitlabci/integration-test'
    ref: dev_v2
    file:
      - 'spark/.base_integration_test.yml'
      - 'spark/.base_integration_test_with_kafka.yml'

Depending on a scenario selected in a preliminary step, the test job needs to extend either

.base_integration_test:
  variables:
    COVERAGE_SOURCE: "./src"
  extends: .base_integration_test

or

.base_integration_test_with_kafka:
  variables:
    COVERAGE_SOURCE: "./src"
  extends: .base_integration_test_with_kafka

What is the best way to do this?

P.S. Here is what I tried. A preparation stage:

prepare_test:
  image: $CI_REGISTRY/platform/docker-images/vault:1.8
  stage: prepare_test
  script:
    - export CICD_KAFKA_HOST=$(cat test/fixtures.py | grep KAFKA_HOST)
    - >
      if [ "$CICD_KAFKA_HOST" != "" ]; then
        export CICD_KAFKA_HOST="true"
      else
        export CICD_KAFKA_HOST="false"
        echo "CICD_KAFKA_HOST=$CICD_KAFKA_HOST" >> dotenv.env
      fi
    - env | sort -f
  artifacts:
    reports:
      dotenv:
        - dotenv.env
    expire_in: 6000 seconds

and then, in the next stage:

integration_test:
  variables:
    COVERAGE_SOURCE: "./src"
  extends: .base_integration_test
  dependencies:
    - prepare_test
  rules:
    - if: $CICD_KAFKA_HOST == "false"
    - when: never

but integration_test doesn't even show up when the pipeline starts.
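One thing worth checking against the GitLab docs here: rules: are evaluated when the pipeline is created, before prepare_test has run, so a dotenv variable exported through artifacts cannot influence a later job's rules:. The usual way to get a genuinely dynamic next stage is a generated child pipeline; a rough, untested sketch (the ci/with_kafka.yml and ci/without_kafka.yml inputs are hypothetical):

generate_pipeline:
  stage: prepare_test
  script:
    # pick the child pipeline definition based on the fixture file
    - |
      if grep -q KAFKA_HOST test/fixtures.py; then
        cp ci/with_kafka.yml integration.yml
      else
        cp ci/without_kafka.yml integration.yml
      fi
  artifacts:
    paths:
      - integration.yml

integration_test:
  stage: test
  trigger:
    include:
      - artifact: integration.yml
        job: generate_pipeline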
Karate DSL - How to use afterFeature with @Karate.Test
I configured afterFeature with @Karate.Test, but it seems that the afterFeature function is never called. However, when I run the test with @jupiter.api.Test void testParallel() {}, it works fine.

Question: Is it a bug or expected behaviour? Thanks in advance for your help.

users.feature:

Feature: Sample test

Background:
* configure afterScenario = function() { karate.log("I'm afterScenario"); }
* configure afterFeature = function() { karate.log("I'm afterFeature"); }

Scenario: Scenario 1
* print "I'm Scenario 1"

Scenario: Scenario 2
* print "I'm Scenario 2"

UsersRunner.java - does NOT work:

class UsersRunner {

    @Karate.Test
    Karate testUsers() {
        return Karate.run("users").relativeTo(getClass());
    }
}

/* karate.log
11:44:01.598 [main] DEBUG com.intuit.karate.Suite - [config] classpath:karate-config.js
11:44:02.404 [main] INFO com.intuit.karate - karate.env system property was: null
11:44:02.434 [main] INFO com.intuit.karate - [print] I'm Scenario 1
11:44:02.435 [main] INFO com.intuit.karate - I'm afterScenario
11:44:02.447 [main] INFO com.intuit.karate - karate.env system property was: null
11:44:02.450 [main] INFO com.intuit.karate - [print] I'm Scenario 2
11:44:02.450 [main] INFO com.intuit.karate - I'm afterScenario
*/

ExamplesTest.java - works:

class ExamplesTest {

    @Test
    void testParallel() {
        Results results = Runner.path("classpath:examples")
                .tags("~@ignore")
                //.outputCucumberJson(true)
                .parallel(5);
        assertEquals(0, results.getFailCount(), results.getErrorMessages());
    }
}

/* karate.log
11:29:48.904 [main] DEBUG com.intuit.karate.Suite - [config] classpath:karate-config.js
11:29:48.908 [main] INFO com.intuit.karate.Suite - backed up existing 'target/karate-reports' dir to: target/karate-reports_1621308588907
11:29:49.676 [pool-1-thread-2] INFO com.intuit.karate - karate.env system property was: null
11:29:49.676 [pool-1-thread-1] INFO com.intuit.karate - karate.env system property was: null
11:29:49.707 [pool-1-thread-2] INFO com.intuit.karate - [print] I'm Scenario 2
11:29:49.707 [pool-1-thread-1] INFO com.intuit.karate - [print] I'm Scenario 1
11:29:49.707 [pool-1-thread-2] INFO com.intuit.karate - I'm afterScenario
11:29:49.707 [pool-1-thread-1] INFO com.intuit.karate - I'm afterScenario
11:29:49.709 [pool-2-thread-1] INFO com.intuit.karate - I'm afterFeature
11:29:50.116 [pool-2-thread-1] INFO com.intuit.karate.Suite - <<pass>> feature 1 of 1 (0 remaining) classpath:examples/users/users.feature
*/
Can you upgrade to the latest 1.0.1? If you still see the problem, it is a bug; please follow this process: https://github.com/intuit/karate/wiki/How-to-Submit-an-Issue
Bitrise + Detox + React-native - Hello World Example hangs on detox.init
I'm trying to get the React Native "Hello World" app set up with Detox and running on Bitrise.io. I went through the react-native-cli getting started guide and am trying to run the simplest Detox test using Detox + Jest on Bitrise with it.

The specific error I'm encountering is the device and element globals not being defined (see the log or the link below). From what I've researched so far, this is caused by detox.init never finishing.

Is there some basic config with Bitrise that I'm missing? The detox test command runs locally for me just fine.

I'm using a free Bitrise account, and the project is public. You can see a failed build here: https://app.bitrise.io/build/e7926ddfc759288f#?tab=log

The repo is also public: https://github.com/jamesopti/react-native-test/blob/add_detox/AwesomeProject/e2e/firstTest.spec.js

Thanks in advance!

Bitrise error log:

Example: should have welcome screen
Example: should have welcome screen [FAIL]
FAIL e2e/firstTest.spec.js (122.008s)
  Example
    ✕ should have welcome screen (8ms)

  ● Example › should have welcome screen

    Timeout - Async callback was not invoked within the 120000ms timeout specified by jest.setTimeout.Error: Timeout - Async callback was not invoked within the 120000ms timeout specified by jest.setTimeout.
      at mapper (../node_modules/jest-jasmine2/build/queueRunner.js:25:45)

  ● Example › should have welcome screen

    ReferenceError: device is not defined

      1 | describe('Example', () => {
      2 |   beforeEach(async () => {
    > 3 |     await device.reloadReactNative();
        |           ^
      4 |   });
      5 |
      6 |   it('should have welcome screen', async () => {

bitrise.yml:

---
format_version: '8'
default_step_lib_source: https://github.com/bitrise-io/bitrise-steplib.git
project_type: react-native
trigger_map:
- push_branch: "*"
  workflow: primary
- pull_request_source_branch: "*"
  workflow: primary
workflows:
  deploy:
    description: "## ..."
    steps:
    - activate-ssh-key@4.0.3:
        run_if: '{{getenv "SSH_RSA_PRIVATE_KEY" | ne ""}}'
    - git-clone@4.0.17: {}
    - script@1.1.5:
        title: Do anything with Script step
    - yarn@0.1.0:
        inputs:
        - workdir: AwesomeProject
        - command: install
    - install-missing-android-tools@2.3.7:
        inputs:
        - gradlew_path: "$PROJECT_LOCATION/gradlew"
    - android-build@0.10.0:
        inputs:
        - project_location: "$PROJECT_LOCATION"
    - certificate-and-profile-installer@1.10.1: {}
    - xcode-archive@2.7.0:
        inputs:
        - project_path: "$BITRISE_PROJECT_PATH"
        - scheme: "$BITRISE_SCHEME"
        - export_method: "$BITRISE_EXPORT_METHOD"
        - configuration: Release
    - deploy-to-bitrise-io@1.9.2: {}
  primary:
    steps:
    - activate-ssh-key@4.0.3:
        run_if: '{{getenv "SSH_RSA_PRIVATE_KEY" | ne ""}}'
    - git-clone@4.0.17: {}
    - yarn@0.1.0:
        inputs:
        - workdir: AwesomeProject
        - command: install
        title: Yarn Install
    - yarn@0.1.0:
        inputs:
        - workdir: AwesomeProject
        - command: test
        title: Unit tests
    after_run:
    - detox
  detox:
    steps:
    - cocoapods-install@1.9.1:
        inputs:
        - source_root_path: "$BITRISE_SOURCE_DIR/AwesomeProject/ios"
    - npm@1.1.0:
        title: Install Global
        inputs:
        - workdir: "$BITRISE_SOURCE_DIR/AwesomeProject"
        - command: install -g detox-cli react-native-cli
    - script@1.1.5:
        inputs:
        - working_dir: "$BITRISE_SOURCE_DIR/AwesomeProject"
        - content: |-
            #!/usr/bin/env bash
            brew tap facebook/fb
            export CODE_SIGNING_REQUIRED=NO
            brew install fbsimctl
            brew tap wix/brew
            brew install applesimutils --HEAD
        title: Install detox utils
    - script@1.1.5:
        inputs:
        - working_dir: "$BITRISE_SOURCE_DIR/AwesomeProject"
        - content: |-
            #!/usr/bin/env bash
            detox build --configuration ios.sim.debug
        title: Detox Build
    - script@1.1.5:
        inputs:
        - working_dir: "$BITRISE_SOURCE_DIR/AwesomeProject"
        - content: |-
            #!/usr/bin/env bash
            detox test --configuration ios.sim.debug --cleanup
        title: Detox Test
app:
  envs:
  - opts:
      is_expand: false
    PROJECT_LOCATION: AwesomeProject/android
  - opts:
      is_expand: false
    MODULE: app
  - opts:
      is_expand: false
    VARIANT: ''
  - opts:
      is_expand: false
    BITRISE_PROJECT_PATH: AwesomeProject/ios/AwesomeProject.xcworkspace
  - opts:
      is_expand: false
    BITRISE_SCHEME: AwesomeProject
  - opts:
      is_expand: false
    BITRISE_EXPORT_METHOD: ad-hoc
meta:
  bitrise.io:
    machine_type: elite
Unfortunately, this is a very generic failure and can be caused by multiple things (incompatible OS/Jest + Detox versions, Node + Detox versions, etc.). If you're using macOS, one option for checking what's going wrong in Bitrise's VM is to connect to it via Screen Sharing. I'd suggest adding a wait loop to your current workflow after the package installation steps (Node, Detox, Jest, etc.), using a marker file so the build stops naturally without the abort button, and then checking whether the VM's environment is the same one you're running locally. If it's not, check whether the VM is running the same simulator (phone model and OS version); if it isn't, you can specify the simulator and OS version in your Detox configuration. If that also didn't work, I'm not sure what it could be :(
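The wait loop can be an ordinary Script step; a minimal sketch (the /tmp/continue marker path is arbitrary):

#!/usr/bin/env bash
# Keep the VM alive so you can inspect it over Screen Sharing;
# create /tmp/continue from the remote session to let the build finish.
while [ ! -f /tmp/continue ]; do
  sleep 10
done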
How to use -a on the command line?
As documented for the standalone JAR, I'm trying to provide args to my feature and can't figure out how to get it to work. What am I missing?

My command line:

java -jar c:\karate\karate-0.9.1.jar -a myKey1=myValue1 TestArgs.feature

karate-config.js:

function fn() {
  var env = karate.env;
  karate.log('karate.env system property was:', env);
  if (!env) {
    env = 'test';
  }
  var config = { // base config JSON
    arg: karate.properties['myKey1']
  };
  return config;
}

TestArgs.feature:

Feature: test args

Scenario: print args
* print myKey1
* print arg
* print karate.properties['myKey1']
* print karate.get('myKey1')

I don't get anything printed:

java -jar c:\karate\karate-0.9.1.jar -a myKey1=myValue1 TestArgs.feature
10:32:57.904 [main] INFO com.intuit.karate.netty.Main - Karate version: 0.9.1
10:32:58.012 [main] INFO com.intuit.karate.Runner - Karate version: 0.9.1
10:32:58.470 [ForkJoinPool-1-worker-1] INFO com.intuit.karate - karate.env system property was: null
10:32:58.489 [ForkJoinPool-1-worker-1] INFO com.intuit.karate - [print]
10:32:58.491 [ForkJoinPool-1-worker-1] INFO com.intuit.karate - [print]
10:32:58.495 [ForkJoinPool-1-worker-1] INFO com.intuit.karate - [print]
10:32:58.501 [ForkJoinPool-1-worker-1] INFO com.intuit.karate - [print]
Actually, we meant to delete the docs; apologies, since the -a / --args option is not supported any more. You can of course use the karate.properties['some.key'] approach to unpack values from the command line. Also refer to how you can even get environment variables: https://github.com/intuit/karate/issues/547

My suggestion is to use karate-config-<env>.js to read a bunch of variables from a file if needed. For example, given this feature:

Feature:
Scenario:
* print myKey

And this file karate-config-dev.js:

function() {
  return { myKey: 'hello' }
}

You can run this command, which will auto-load the config JS file:

java -jar karate.jar -e dev test.feature

We will update the docs. Thanks for catching this.
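Since -a is gone, the karate.properties approach mentioned above amounts to passing an ordinary JVM system property; a minimal sketch reusing the question's key (note the -D flag must come before -jar):

java -DmyKey1=myValue1 -jar c:\karate\karate-0.9.1.jar TestArgs.feature

With that, karate.properties['myKey1'] in karate-config.js or in the feature resolves to myValue1.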