Nextflow: output not being "found", despite setting publishDir - nextflow

I have the following nextflow script:
echo true

wd = "$params.wd"
geoid = "$params.geoid"

process step1 {

    publishDir = "$wd/data/"

    input:
    val celFiles from "$wd/data/$geoid"

    output:
    file "${geoid}_datFiles.RData" into channel

    """
    Rscript $wd/scripts/step1.R $celFiles $wd/data/${geoid}_datFiles.RData
    """
}
The Rscript contains the following commands:
step1 = function(WD, celFiles, output) {
  library(affy)
  datFiles = ReadAffy(celfile.path = paste0(WD, "/", celFiles))
  save(datFiles, file = output)
}

args = commandArgs(trailingOnly = TRUE)
WD = args[1]
celFiles = args[2]
output = args[3]
step1(WD, celFiles, output)
When it runs, the output file is saved in the directory I want ($wd/data/${geoid}_datFiles.RData). Given that publishDir points to the same directory, I would expect output (defined as "${geoid}_datFiles.RData") to be available under the publishDir directory.
However, I get the following error:
Missing output file(s) `GSE4290_datFiles.RData` expected by process `step1`
The log file suggests that Nextflow is still looking for the output in the work directory created for the task:
Process `step1` is unable to find [UnixPath]: `/Users/rebeccaeliscu/Desktop/workflow/affymetrix/nextflow/work/92/42afb131a36eb32ed780bd1bf3bc3b/GSE4290_datFiles.RData`
The complete log file:
Nov-12 17:55:39.611 [main] DEBUG nextflow.cli.Launcher - $> nextflow run main.nf
Nov-12 17:55:39.945 [main] INFO nextflow.cli.CmdRun - N E X T F L O W ~ version 20.07.1
Nov-12 17:55:39.968 [main] INFO nextflow.cli.CmdRun - Launching `main.nf` [infallible_brahmagupta] - revision: d68e496ea0
Nov-12 17:55:40.026 [main] DEBUG nextflow.config.ConfigBuilder - Found config local: /Users/rebeccaeliscu/Desktop/workflow/affymetrix/nextflow/nextflow.config
Nov-12 17:55:40.029 [main] DEBUG nextflow.config.ConfigBuilder - Parsing config file: /Users/rebeccaeliscu/Desktop/workflow/affymetrix/nextflow/nextflow.config
Nov-12 17:55:40.140 [main] DEBUG nextflow.config.ConfigBuilder - Applying config profile: `standard`
Nov-12 17:55:41.288 [main] DEBUG nextflow.Session - Session uuid: 94f22a74-2a63-4a87-9fb3-33cf925a5a74
Nov-12 17:55:41.288 [main] DEBUG nextflow.Session - Run name: infallible_brahmagupta
Nov-12 17:55:41.289 [main] DEBUG nextflow.Session - Executor pool size: 4
Nov-12 17:55:41.326 [main] DEBUG nextflow.cli.CmdRun -
Version: 20.07.1 build 5412
Created: 24-07-2020 15:18 UTC (08:18 PDT)
System: Mac OS X 10.15.7
Runtime: Groovy 2.5.11 on Java HotSpot(TM) 64-Bit Server VM 1.8.0_111-b14
Encoding: UTF-8 (UTF-8)
Process: 46458#Rebeccas-MacBook-Pro-6.local.ucsf.edu [10.49.41.197]
CPUs: 4 - Mem: 8 GB (708.4 MB) - Swap: 2 GB (927 MB)
Nov-12 17:55:41.353 [main] DEBUG nextflow.Session - Work-dir: /Users/rebeccaeliscu/Desktop/workflow/affymetrix/nextflow/work [Mac OS X]
Nov-12 17:55:41.354 [main] DEBUG nextflow.Session - Script base path does not exist or is not a directory: /Users/rebeccaeliscu/Desktop/workflow/affymetrix/nextflow/bin
Nov-12 17:55:41.594 [main] DEBUG nextflow.Session - Observer factory: TowerFactory
Nov-12 17:55:41.598 [main] DEBUG nextflow.Session - Observer factory: DefaultObserverFactory
Nov-12 17:55:41.911 [main] DEBUG nextflow.Session - Session start invoked
Nov-12 17:55:42.309 [main] DEBUG nextflow.script.ScriptRunner - > Launching execution
Nov-12 17:55:42.331 [main] DEBUG nextflow.Session - Workflow process names [dsl1]: step1
Nov-12 17:55:42.334 [main] WARN nextflow.script.BaseScript - The use of `echo` method has been deprecated
Nov-12 17:55:42.495 [main] DEBUG nextflow.executor.ExecutorFactory - << taskConfig executor: null
Nov-12 17:55:42.496 [main] DEBUG nextflow.executor.ExecutorFactory - >> processorType: 'local'
Nov-12 17:55:42.508 [main] DEBUG nextflow.executor.Executor - [warm up] executor > local
Nov-12 17:55:42.521 [main] DEBUG n.processor.LocalPollingMonitor - Creating local task monitor for executor 'local' > cpus=4; memory=8 GB; capacity=4; pollInterval=100ms; dumpInterval=5m

Your output declaration tells Nextflow to look for "${geoid}_datFiles.RData" in the task's working directory (workDir), but your Rscript writes it to $wd/data/${geoid}_datFiles.RData. If you change your command to:
Rscript $wd/scripts/step1.R $celFiles ${geoid}_datFiles.RData
then Nextflow should be able to find the output file in the workDir, and the publishDir directive will then 'publish' it to the defined publishDir.
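Putting it together, the fixed process might look like the sketch below (DSL1 syntax, as in the question). Note that publishDir is a directive, not an assignment, so it takes no `=`; the `mode: 'copy'` option and the output channel name `datfiles_ch` are assumptions added here for clarity (the default publish mode is a symlink, and the original channel name `channel` is easily confused with the built-in Channel factory):

```nextflow
process step1 {

    // publish any declared outputs from the workDir into $wd/data/
    publishDir "$wd/data/", mode: 'copy'

    input:
    val celFiles from "$wd/data/$geoid"

    output:
    file "${geoid}_datFiles.RData" into datfiles_ch

    // write the RData file into the task workDir, not an absolute path,
    // so that Nextflow can find it and publish it
    """
    Rscript $wd/scripts/step1.R $celFiles ${geoid}_datFiles.RData
    """
}
```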

Related

How to trigger karate gatling distributed test?

We want to run karate-gatling in distributed mode to test API performance.
Here is the performance test class for master node
public class TestPerformanceDocker {

    @Test
    void test() {
        // master node server ip
        String serverUrl = "127.0.0.1";
        String threads = System.getProperty("THREADS");
        String rampDuration = System.getProperty("RAMP_DURATION");
        String constantDuration = System.getProperty("CONST_DURATION");
        // slave node count
        Integer executorCount = Integer.parseInt(System.getProperty("NODES_COUNT"));
        String cmd = "mvn gatling:test -DTHREADS=" + threads
                + " -DRAMP_DURATION=" + rampDuration
                + " -DCONST_DURATION=" + constantDuration;
        GatlingMavenJobConfig config = new GatlingMavenJobConfig(executorCount, serverUrl, 9090) {};
        config.setMainCommand(cmd);
        JobManager<Integer> manager = new JobManager<>(config);
        manager.start();
        manager.waitForCompletion();
    }
}
Start the master node with:
docker container run -d -t --network=karate_network --name karate_boss --privileged karate mvn clean test -Dtest=TestPerformanceDocker -DTHREADS=3 -DRAMP_DURATION=600 -DCONST_DURATION=600
We get the error log below:
2022-12-20 21:55:35 13:55:35.259 [main] DEBUG com.intuit.karate.http.HttpServer - server started: f224cb587867:9090
2022-12-20 21:55:35 13:55:35.261 [main] DEBUG com.intuit.karate.job.JobManager - added to queue: 1
2022-12-20 21:55:35 13:55:35.262 [main] DEBUG com.intuit.karate.job.JobManager - added to queue: 2
2022-12-20 21:55:35 13:55:35.265 [main] DEBUG com.intuit.karate.job.JobManager - added to queue: 3
2022-12-20 21:55:35 13:55:35.283 [1671544535281] DEBUG com.intuit.karate - command: [docker, run, --rm, --cap-add=SYS_ADMIN, -e, KARATE_JOBURL=http://127.0.0.1:9090, ptrthomas/karate-chrome], working dir: null
2022-12-20 21:55:35 13:55:35.285 [1671544535281] DEBUG com.intuit.karate - command: [docker, run, --rm, --cap-add=SYS_ADMIN, -e, KARATE_JOBURL=http://127.0.0.1:9090, ptrthomas/karate-chrome], working dir: null
2022-12-20 21:55:35 13:55:35.285 [1671544535281] DEBUG com.intuit.karate - command: [docker, run, --rm, --cap-add=SYS_ADMIN, -e, KARATE_JOBURL=http://127.0.0.1:9090, ptrthomas/karate-chrome], working dir: null
2022-12-20 21:55:35 13:55:35.348 [1671544535281-out] DEBUG com.intuit.karate - docker: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?.
2022-12-20 21:55:35 13:55:35.349 [1671544535281-out] DEBUG com.intuit.karate - See 'docker run --help'.
2022-12-20 21:55:35 13:55:35.357 [1671544535281] WARN com.intuit.karate.shell.Command - exit code was non-zero: 125 - [docker, run, --rm, --cap-add=SYS_ADMIN, -e, KARATE_JOBURL=http://127.0.0.1:9090, ptrthomas/karate-chrome] working dir: null
2022-12-20 21:55:35 13:55:35.363 [1671544535281-out] DEBUG com.intuit.karate - docker: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?.
2022-12-20 21:55:35 13:55:35.364 [1671544535281-out] DEBUG com.intuit.karate - See 'docker run --help'.
2022-12-20 21:55:35 13:55:35.365 [1671544535281-out] DEBUG com.intuit.karate - docker: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?.
2022-12-20 21:55:35 13:55:35.366 [1671544535281-out] DEBUG com.intuit.karate - See 'docker run --help'.
2022-12-20 21:55:35 13:55:35.369 [1671544535281] WARN com.intuit.karate.shell.Command - exit code was non-zero: 125 - [docker, run, --rm, --cap-add=SYS_ADMIN, -e, KARATE_JOBURL=http://127.0.0.1:9090, ptrthomas/karate-chrome] working dir: null
2022-12-20 21:55:35 13:55:35.371 [1671544535281] WARN com.intuit.karate.shell.Command - exit code was non-zero: 125 - [docker, run, --rm, --cap-add=SYS_ADMIN, -e, KARATE_JOBURL=http://127.0.0.1:9090, ptrthomas/karate-chrome] working dir: null
Following the reference article on the official GitHub wiki, we want to run karate-gatling in distributed mode for API testing.
Reference assets:
https://github.com/karatelabs/karate/wiki/Distributed-Testing#jenkins-config

Nextflow processes don't chain; pipeline stops after first success

I'm working on a colleague's pipeline with Nextflow 19.04 and I'm seeing some weird behavior.
Every time I've used it before, it worked as intended, but recently we changed the technology used to produce the input data, without modifying the format or anything else.
Now the first process, called "init", runs and succeeds, but Nextflow doesn't chain to the next one:
process init {
    output:
    stdout into init_ch

    script:
    """
    head -n 1 ${params.config} | awk '{print \$1}' | tr -d "\n"
    """
}
bampbi_ch = bamtuple_ch.join(pbituple_ch).combine(init_ch)

process rename {
    publishDir path: "${params.publishdirResults}/Correction_Stats", mode: 'copy', pattern: '*.stats'

    input:
    set ID, file(bam), file(pbi), val(espece) from bampbi_ch

    output:
    set val("${name}"), file("${name}.bam"), file("${name}.bam.pbi") into bampbi2_ch, bampbi3_ch, bampbi4_ch, bampbi_forbf_ch
    file "*.stats" into correctionstats_ch

    script:
    name = espece + "_" + params.target
    """
    cp $bam ${name}.bam
    cp $pbi ${name}.bam.pbi
    size=`ls -sh | grep "$name" | grep ".bam\$" | awk '{print \$1}'`
    echo -e "${name}.bam\t\${size}" >> bamSize.stats
    """
}
The .nextflow.log mentions the other processes, but I don't quite understand what it means:
Mar-29 13:34:28.749 [main] DEBUG nextflow.cli.Launcher - $> nextflow -c CATCH.confi run CATCH.nf --bamData '/somepath*.bam' --bamDatapbi '/somepath*.bam.pbi' --barcodes /somepath/barcodes.fasta --minScoreDemux 80 --config /somepath/config.txt --target blabal --primers primers.fasta --canuSpec /somepath/Canu.spec --tailleCanuDeNovo 3G --seqInterne /somepath/merge_2_DSI_regions.fasta --publishdirResults /somepath/Results
Mar-29 13:34:28.971 [main] INFO nextflow.cli.CmdRun - N E X T F L O W ~ version 19.04.0
Mar-29 13:34:28.997 [main] INFO nextflow.cli.CmdRun - Launching `CATCH.nf` [berserk_sax] - revision: c4df35f250
Mar-29 13:34:29.045 [main] DEBUG nextflow.config.ConfigBuilder - User config file: /somepath/CATCH.confi
Mar-29 13:34:29.045 [main] DEBUG nextflow.config.ConfigBuilder - Parsing config file: /somepath/CATCH.confi
Mar-29 13:34:29.114 [main] DEBUG nextflow.config.ConfigBuilder - Applying config profile: `standard`
Mar-29 13:34:30.341 [main] DEBUG nextflow.Session - Session uuid: 25da2eab-481e-42c1-babf-bfb3bf3dd89a
Mar-29 13:34:30.341 [main] DEBUG nextflow.Session - Run name: berserk_sax
Mar-29 13:34:30.342 [main] DEBUG nextflow.Session - Executor pool size: 2
Mar-29 13:34:30.370 [main] DEBUG nextflow.cli.CmdRun -
Version: 19.04.0 build 5069
Modified: 17-04-2019 06:25 UTC (08:25 CEST)
System: Linux 3.10.0-1160.el7.x86_64
Runtime: Groovy 2.5.6 on OpenJDK 64-Bit Server VM 1.8.0_262-b10
Encoding: UTF-8 (UTF-8)
Process: 36284#node120 [192.168.1.120]
CPUs: 1 - Mem: 251.6 GB (82 GB) - Swap: 0 (0)
Mar-29 13:34:30.455 [main] DEBUG nextflow.Session - Work-dir: /somepath/work [gpfs]
Mar-29 13:34:30.456 [main] DEBUG nextflow.Session - Script base path does not exist or is not a directory: /somepath/bin
Mar-29 13:34:30.704 [main] DEBUG nextflow.Session - Session start invoked
Mar-29 13:34:30.708 [main] DEBUG nextflow.processor.TaskDispatcher - Dispatcher > start
Mar-29 13:34:30.722 [main] DEBUG nextflow.script.ScriptRunner - > Script parsing
Mar-29 13:34:31.534 [main] DEBUG nextflow.script.ScriptRunner - > Launching execution
Mar-29 13:34:31.596 [PathVisitor-1] DEBUG nextflow.file.PathVisitor - files for syntax: glob; folder: /somepath/; pattern: pattern*.bam; options: [:]
Mar-29 13:34:31.747 [PathVisitor-1] DEBUG nextflow.file.PathVisitor - files for syntax: glob; folder: /somepath/; pattern: pattern*.bam.pbi; options: [:]
Mar-29 13:34:32.015 [main] DEBUG nextflow.processor.ProcessFactory - << taskConfig executor: slurm
Mar-29 13:34:32.016 [main] DEBUG nextflow.processor.ProcessFactory - >> processorType: 'slurm'
Mar-29 13:34:32.044 [main] DEBUG nextflow.executor.Executor - Initializing executor: slurm
Mar-29 13:34:32.046 [main] INFO nextflow.executor.Executor - [warm up] executor > slurm
Mar-29 13:34:32.067 [main] DEBUG n.processor.TaskPollingMonitor - Creating task monitor for executor 'slurm' > capacity: 100; pollInterval: 5s; dumpInterval: 5m
Mar-29 13:34:32.070 [main] DEBUG nextflow.processor.TaskDispatcher - Starting monitor: TaskPollingMonitor
Mar-29 13:34:32.070 [main] DEBUG n.processor.TaskPollingMonitor - >>> barrier register (monitor: slurm)
Mar-29 13:34:32.085 [main] DEBUG nextflow.executor.Executor - Invoke register for executor: slurm
Mar-29 13:34:32.085 [main] DEBUG n.executor.AbstractGridExecutor - Creating executor 'slurm' > queue-stat-interval: 1m
Mar-29 13:34:32.134 [main] DEBUG nextflow.Session - >>> barrier register (process: init)
Mar-29 13:34:32.136 [main] DEBUG nextflow.processor.TaskProcessor - Creating operator > init -- maxForks: 2
Mar-29 13:34:32.322 [main] DEBUG nextflow.processor.ProcessFactory - << taskConfig executor: slurm
Mar-29 13:34:32.322 [main] DEBUG nextflow.processor.ProcessFactory - >> processorType: 'slurm'
Mar-29 13:34:32.323 [main] DEBUG nextflow.executor.Executor - Initializing executor: slurm
Mar-29 13:34:32.324 [main] DEBUG n.executor.AbstractGridExecutor - Creating executor 'slurm' > queue-stat-interval: 1m
Mar-29 13:39:37.544 [main] DEBUG nextflow.Session - >>> barrier register (process: rename)
Mar-29 13:39:37.545 [main] DEBUG nextflow.processor.TaskProcessor - Creating operator > rename -- maxForks: 2
Mar-29 13:39:37.569 [main] DEBUG nextflow.processor.ProcessFactory - << taskConfig executor: slurm
Mar-29 13:39:37.569 [main] DEBUG nextflow.processor.ProcessFactory - >> processorType: 'slurm'
Mar-29 13:39:37.569 [main] DEBUG nextflow.executor.Executor - Initializing executor: slurm
Mar-29 13:39:37.570 [main] DEBUG n.executor.AbstractGridExecutor - Creating executor 'slurm' > queue-stat-interval: 1m
Mar-29 13:39:37.570 [main] DEBUG nextflow.Session - >>> barrier register (process: bam2fastaCorrection)
Mar-29 13:39:37.571 [main] DEBUG nextflow.processor.TaskProcessor - Creating operator > bam2fastaCorrection -- maxForks: 2
Mar-29 13:39:37.595 [main] DEBUG nextflow.processor.ProcessFactory - << taskConfig executor: slurm
Mar-29 13:39:37.595 [main] DEBUG nextflow.processor.ProcessFactory - >> processorType: 'slurm'
Mar-29 13:39:37.595 [main] DEBUG nextflow.executor.Executor - Initializing executor: slurm
Mar-29 13:39:37.595 [main] DEBUG n.executor.AbstractGridExecutor - Creating executor 'slurm' > queue-stat-interval: 1m
Mar-29 13:39:37.596 [main] DEBUG nextflow.Session - >>> barrier register (process: statsCorrection)
Mar-29 13:39:37.596 [main] DEBUG nextflow.processor.TaskProcessor - Creating operator > statsCorrection -- maxForks: 2
Mar-29 13:39:37.607 [main] DEBUG nextflow.processor.ProcessFactory - << taskConfig executor: slurm
Mar-29 13:39:37.607 [main] DEBUG nextflow.processor.ProcessFactory - >> processorType: 'slurm'
Mar-29 13:39:37.608 [main] DEBUG nextflow.executor.Executor - Initializing executor: slurm
Mar-29 13:39:37.608 [main] DEBUG n.executor.AbstractGridExecutor - Creating executor 'slurm' > queue-stat-interval: 1m
Mar-29 13:39:37.609 [main] DEBUG nextflow.Session - >>> barrier register (process: demultiplexage)
Mar-29 13:39:37.609 [main] DEBUG nextflow.processor.TaskProcessor - Creating operator > demultiplexage -- maxForks: 2
Mar-29 13:39:37.895 [main] DEBUG nextflow.processor.ProcessFactory - << taskConfig executor: slurm
Mar-29 13:39:37.895 [main] DEBUG nextflow.processor.ProcessFactory - >> processorType: 'slurm'
Mar-29 13:39:37.895 [main] DEBUG nextflow.executor.Executor - Initializing executor: slurm
Mar-29 13:39:37.895 [main] DEBUG n.executor.AbstractGridExecutor - Creating executor 'slurm' > queue-stat-interval: 1m
Mar-29 13:39:37.896 [main] DEBUG nextflow.Session - >>> barrier register (process: nomenclature)
Mar-29 13:39:37.896 [main] DEBUG nextflow.processor.TaskProcessor - Creating operator > nomenclature -- maxForks: 2
Mar-29 13:39:37.904 [main] DEBUG nextflow.processor.ProcessFactory - << taskConfig executor: slurm
Mar-29 13:39:37.912 [main] DEBUG nextflow.processor.ProcessFactory - >> processorType: 'slurm'
Mar-29 13:39:37.913 [main] DEBUG nextflow.executor.Executor - Initializing executor: slurm
Mar-29 13:39:37.913 [main] DEBUG n.executor.AbstractGridExecutor - Creating executor 'slurm' > queue-stat-interval: 1m
Mar-29 13:39:37.913 [main] DEBUG nextflow.Session - >>> barrier register (process: bam2fastaDemultiplexage)
Mar-29 13:39:37.913 [main] DEBUG nextflow.processor.TaskProcessor - Creating operator > bam2fastaDemultiplexage -- maxForks: 2
Mar-29 13:39:37.942 [main] DEBUG nextflow.processor.ProcessFactory - << taskConfig executor: slurm
Mar-29 13:39:37.942 [main] DEBUG nextflow.processor.ProcessFactory - >> processorType: 'slurm'
Mar-29 13:39:37.943 [main] DEBUG nextflow.executor.Executor - Initializing executor: slurm
Mar-29 13:39:37.943 [main] DEBUG n.executor.AbstractGridExecutor - Creating executor 'slurm' > queue-stat-interval: 1m
Mar-29 13:39:37.943 [main] DEBUG nextflow.Session - >>> barrier register (process: statsDemultiplexage)
Mar-29 13:39:37.943 [main] DEBUG nextflow.processor.TaskProcessor - Creating operator > statsDemultiplexage -- maxForks: 2
Mar-29 13:39:37.953 [main] DEBUG nextflow.script.ScriptRunner - > Await termination
Mar-29 13:39:37.953 [main] DEBUG nextflow.Session - Session await
Mar-29 13:39:37.998 [Task submitter] DEBUG nextflow.executor.GridTaskHandler - [SLURM] submitted process init > jobId: 33152594; workDir: /somepath/work/23/c994dde9ed99a00f4e190239f07998
Mar-29 13:39:38.002 [Task submitter] INFO nextflow.Session - [23/c994dd] Submitted process > init
Mar-29 13:39:42.347 [Task monitor] DEBUG n.processor.TaskPollingMonitor - Task completed > TaskHandler[jobId: 33152594; id: 1; name: init; status: COMPLETED; exit: 0; error: -; workDir: /somepath/work/23/c994dde9ed99a00f4e190239f07998 started: 1648553982321; exited: 2022-03-29T11:39:40.340241Z; ]
Mar-29 13:39:42.364 [Actor Thread 3] DEBUG nextflow.Session - <<< barrier arrive (process: rename)
Mar-29 13:39:42.391 [Actor Thread 3] DEBUG nextflow.Session - <<< barrier arrive (process: bam2fastaCorrection)
Mar-29 13:39:42.392 [Actor Thread 3] DEBUG nextflow.Session - <<< barrier arrive (process: statsCorrection)
Mar-29 13:39:42.393 [Actor Thread 3] DEBUG nextflow.Session - <<< barrier arrive (process: init)
Mar-29 13:39:42.400 [Actor Thread 8] DEBUG nextflow.Session - <<< barrier arrive (process: demultiplexage)
Mar-29 13:39:42.421 [Actor Thread 8] DEBUG nextflow.Session - <<< barrier arrive (process: bam2fastaDemultiplexage)
Mar-29 13:39:42.424 [Actor Thread 9] DEBUG nextflow.Session - <<< barrier arrive (process: nomenclature)
Mar-29 13:39:42.456 [Actor Thread 10] DEBUG nextflow.Session - <<< barrier arrive (process: statsDemultiplexage)
Mar-29 13:39:42.456 [main] DEBUG nextflow.Session - Session await > all process finished
Mar-29 13:39:47.290 [Task monitor] DEBUG n.processor.TaskPollingMonitor - <<< barrier arrives (monitor: slurm)
Mar-29 13:39:47.292 [main] DEBUG nextflow.Session - Session await > all barriers passed
Mar-29 13:39:47.300 [main] DEBUG nextflow.trace.StatsObserver - Workflow completed > WorkflowStats[succeedCount=1; failedCount=0; ignoredCount=0; cachedCount=0; succeedDuration=19ms; failedDuration=0ms; cachedDuration=0ms]
Mar-29 13:39:47.544 [main] DEBUG nextflow.CacheDB - Closing CacheDB done
Mar-29 13:39:47.569 [main] DEBUG nextflow.script.ScriptRunner - > Execution complete -- Goodbye
N E X T F L O W ~ version 19.04.0
Launching `CATCH.nf` [mighty_joliot] - revision: 79006232db
debut Script
Donnees = /work/project/gaia/Samplix_test*.bam
executor > slurm (1)
[23/c994dd] process > init [100%] 1 of 1 ✔
Completed at: 29-Mar-2022 13:39:47
Duration : 11.4s
CPU hours : (a few seconds)
Succeeded : 1
Normally (run in exactly the same way), many processes are executed.
What could have gone wrong? What could be the reason for Nextflow not chaining to the next process?
Nicolas
Process execution stops if one or more input channels are empty. It's likely your input channel declared here is empty:
bampbi_ch = bamtuple_ch.join(pbituple_ch).combine(init_ch)
This could be because one or more component channels (i.e. bamtuple_ch, pbituple_ch, init_ch) are empty. However, the usual culprit in this case is that the join operation hasn't succeeded in the way it was intended.
The join operator creates a channel that joins together the items
emitted by two channels for which exists a matching key. The key is
defined, by default, as the first element in each item emitted.
The default behavior (which can be changed with the remainder parameter) is for incomplete tuples to be discarded. Check that bamtuple_ch and pbituple_ch produce tuples that can be joined using a shared/common key as the first element.
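The default join behavior is easy to see with a toy example (the sample names and file names below are hypothetical, using DSL1-era channel factories):

```nextflow
left  = Channel.from( ['sampleA', 'a.bam'], ['sampleB', 'b.bam'] )
right = Channel.from( ['sampleA', 'a.bam.pbi'] )

// By default, join emits only tuples whose first element (the key)
// matches in both channels; 'sampleB' is silently discarded.
left.join(right).subscribe { println it }   // -> [sampleA, a.bam, a.bam.pbi]

// With remainder: true, unmatched tuples are kept (padded with null),
// which makes a missing partner easy to spot:
// left.join(right, remainder: true).subscribe { println it }
```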
OK, so the join was indeed the problem, but not on the init side: pbituple_ch was empty because the files were not found due to a missing /. Sorry about that, and TIL about the logic of Nextflow!

Intellij Tomcat shows 404 upon start up

I am using IntelliJ IDEA Ultimate to start up a simple service. The structure of the project is like this:
My Tomcat configuration is as follows:
When launching, the console shows no errors:
/tmp/apache-tomcat-8.5.38/bin/catalina.sh run
[2019-02-24 11:24:53,412] Artifact InsbotTomcat:war: Waiting for server connection to start artifact deployment...
[2019-02-24 11:24:53,412] Artifact web:war exploded: Waiting for server connection to start artifact deployment...
24-Feb-2019 23:24:54.294 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Server version: Apache Tomcat/8.5.38
24-Feb-2019 23:24:54.296 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Server built: Feb 5 2019 11:42:42 UTC
24-Feb-2019 23:24:54.296 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Server number: 8.5.38.0
24-Feb-2019 23:24:54.296 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log OS Name: Mac OS X
24-Feb-2019 23:24:54.296 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log OS Version: 10.12.6
24-Feb-2019 23:24:54.296 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Architecture: x86_64
24-Feb-2019 23:24:54.296 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Java Home: /Library/Java/JavaVirtualMachines/jdk1.8.0_201.jdk/Contents/Home/jre
24-Feb-2019 23:24:54.296 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log JVM Version: 1.8.0_201-b09
24-Feb-2019 23:24:54.296 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log JVM Vendor: Oracle Corporation
24-Feb-2019 23:24:54.297 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log CATALINA_BASE: /Users/diyu/Library/Caches/IntelliJIdea2018.3/tomcat/Tomcat_8_5_38_InsbotTomcat
24-Feb-2019 23:24:54.297 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log CATALINA_HOME: /private/tmp/apache-tomcat-8.5.38
24-Feb-2019 23:24:54.297 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Command line argument: -Djava.util.logging.config.file=/Users/diyu/Library/Caches/IntelliJIdea2018.3/tomcat/Tomcat_8_5_38_InsbotTomcat/conf/logging.properties
24-Feb-2019 23:24:54.297 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Command line argument: -Djava.util.logging.manager=org.apache.juli.ClassLoaderLogManager
24-Feb-2019 23:24:54.298 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Command line argument: -Dcom.sun.management.jmxremote=
24-Feb-2019 23:24:54.298 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Command line argument: -Dcom.sun.management.jmxremote.port=1099
24-Feb-2019 23:24:54.300 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Command line argument: -Dcom.sun.management.jmxremote.ssl=false
24-Feb-2019 23:24:54.300 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Command line argument: -Dcom.sun.management.jmxremote.password.file=/Users/diyu/Library/Caches/IntelliJIdea2018.3/tomcat/Tomcat_8_5_38_InsbotTomcat/jmxremote.password
24-Feb-2019 23:24:54.300 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Command line argument: -Dcom.sun.management.jmxremote.access.file=/Users/diyu/Library/Caches/IntelliJIdea2018.3/tomcat/Tomcat_8_5_38_InsbotTomcat/jmxremote.access
24-Feb-2019 23:24:54.300 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Command line argument: -Djava.rmi.server.hostname=127.0.0.1
24-Feb-2019 23:24:54.300 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Command line argument: -Djdk.tls.ephemeralDHKeySize=2048
24-Feb-2019 23:24:54.300 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Command line argument: -Djava.protocol.handler.pkgs=org.apache.catalina.webresources
24-Feb-2019 23:24:54.300 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Command line argument: -Dorg.apache.catalina.security.SecurityListener.UMASK=0027
24-Feb-2019 23:24:54.300 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Command line argument: -Dignore.endorsed.dirs=
24-Feb-2019 23:24:54.301 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Command line argument: -Dcatalina.base=/Users/diyu/Library/Caches/IntelliJIdea2018.3/tomcat/Tomcat_8_5_38_InsbotTomcat
24-Feb-2019 23:24:54.301 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Command line argument: -Dcatalina.home=/tmp/apache-tomcat-8.5.38
24-Feb-2019 23:24:54.301 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Command line argument: -Djava.io.tmpdir=/tmp/apache-tomcat-8.5.38/temp
24-Feb-2019 23:24:54.301 INFO [main] org.apache.catalina.core.AprLifecycleListener.lifecycleEvent The APR based Apache Tomcat Native library which allows optimal performance in production environments was not found on the java.library.path: [/Users/diyu/Library/Java/Extensions:/Library/Java/Extensions:/Network/Library/Java/Extensions:/System/Library/Java/Extensions:/usr/lib/java:.]
24-Feb-2019 23:24:54.457 INFO [main] org.apache.coyote.AbstractProtocol.init Initializing ProtocolHandler ["http-nio-8080"]
24-Feb-2019 23:24:54.477 INFO [main] org.apache.tomcat.util.net.NioSelectorPool.getSharedSelector Using a shared selector for servlet write/read
24-Feb-2019 23:24:54.494 INFO [main] org.apache.coyote.AbstractProtocol.init Initializing ProtocolHandler ["ajp-nio-8009"]
24-Feb-2019 23:24:54.496 INFO [main] org.apache.tomcat.util.net.NioSelectorPool.getSharedSelector Using a shared selector for servlet write/read
24-Feb-2019 23:24:54.497 INFO [main] org.apache.catalina.startup.Catalina.load Initialization processed in 591 ms
24-Feb-2019 23:24:54.527 INFO [main] org.apache.catalina.core.StandardService.startInternal Starting service [Catalina]
24-Feb-2019 23:24:54.527 INFO [main] org.apache.catalina.core.StandardEngine.startInternal Starting Servlet Engine: Apache Tomcat/8.5.38
24-Feb-2019 23:24:54.537 INFO [main] org.apache.coyote.AbstractProtocol.start Starting ProtocolHandler ["http-nio-8080"]
24-Feb-2019 23:24:54.548 INFO [main] org.apache.coyote.AbstractProtocol.start Starting ProtocolHandler ["ajp-nio-8009"]
24-Feb-2019 23:24:54.549 INFO [main] org.apache.catalina.startup.Catalina.start Server startup in 52 ms
Connected to server
[2019-02-24 11:24:55,011] Artifact InsbotTomcat:war: Artifact is being deployed, please wait...
[2019-02-24 11:24:55,012] Artifact web:war exploded: Artifact is being deployed, please wait...
24-Feb-2019 23:24:58.220 INFO [RMI TCP Connection(2)-127.0.0.1] org.apache.jasper.servlet.TldScanner.scanJars At least one JAR was scanned for TLDs yet contained no TLDs. Enable debug logging for this logger for a complete list of JARs that were scanned but no TLDs were found in them. Skipping unneeded JARs during scanning can improve startup time and JSP compilation time.
[2019-02-24 11:24:58,277] Artifact InsbotTomcat:war: Artifact is deployed successfully
[2019-02-24 11:24:58,277] Artifact InsbotTomcat:war: Deploy took 3,266 milliseconds
[2019-02-24 11:24:58,342] Artifact web:war exploded: Artifact is deployed successfully
[2019-02-24 11:24:58,342] Artifact web:war exploded: Deploy took 3,330 milliseconds
24-Feb-2019 23:25:04.541 INFO [localhost-startStop-1] org.apache.catalina.startup.HostConfig.deployDirectory Deploying web application directory [/private/tmp/apache-tomcat-8.5.38/webapps/manager]
24-Feb-2019 23:25:04.573 INFO [localhost-startStop-1] org.apache.catalina.startup.HostConfig.deployDirectory Deployment of web application directory [/private/tmp/apache-tomcat-8.5.38/webapps/manager] has finished in [32] ms
The auto-opened page, however, shows a 404 error.
I also tried http://localhost:8080/web_war_exploded/index.html, showing the same output.
After opening Module Settings, I changed the configuration, and now it works.
The problem might be that the deployment directory is "webapps/manager" (/private/tmp/apache-tomcat-8.5.38/webapps/manager) rather than "webapps", so the application ends up under the /manager context path instead of the root.

Unable to open iterator for alias <alias_name>

I know this is one of the most repeated questions. I have looked almost everywhere and none of the resources could resolve the issue I am facing.
Below is a simplified version of my problem statement, but the actual data is a little more complex, so I have to use a UDF.
My input file (input.txt):
NotNeeded1,NotNeeded11;Needed1
NotNeeded2,NotNeeded22;Needed2
I want the output to be:
Needed1
Needed2
So, I am writing the UDF below (Java code):
package com.company.pig;

import java.io.IOException;
import org.apache.pig.EvalFunc;
import org.apache.pig.data.Tuple;

public class myudf extends EvalFunc<String> {
    public String exec(Tuple input) throws IOException {
        if (input == null || input.size() == 0)
            return null;
        String s = (String) input.get(0);
        String str = s.split("\\,")[1];
        String str1 = str.split("\\;")[1];
        return str1;
    }
}
And I am packaging it into
rollupreg_extract-jar-with-dependencies.jar
Below is my Pig shell session:
grunt> REGISTER /pig/rollupreg_extract-jar-with-dependencies.jar;
grunt> DEFINE myudf com.company.pig.myudf;
grunt> data = LOAD 'hdfs://sandbox.hortonworks.com:8020/pig_hdfs/input.txt' USING PigStorage(',');
grunt> extract = FOREACH data GENERATE myudf($1);
grunt> DUMP extract;
And I get the below error:
2017-05-15 15:58:15,493 [main] INFO org.apache.pig.tools.pigstats.ScriptState - Pig features used in the script: UNKNOWN
2017-05-15 15:58:15,577 [main] INFO org.apache.pig.data.SchemaTupleBackend - Key [pig.schematuple] was not set... will not generate code.
2017-05-15 15:58:15,659 [main] INFO org.apache.pig.newplan.logical.optimizer.LogicalPlanOptimizer - {RULES_ENABLED=[AddForEach, ColumnMapKeyPrune, ConstantCalculator, GroupByConstParallelSetter, LimitOptimizer, LoadTypeCastInserter, MergeFilter, MergeForEach, PartitionFilterOptimizer, PredicatePushdownOptimizer, PushDownForEachFlatten, PushUpFilter, SplitFilter, StreamTypeCastInserter]}
2017-05-15 15:58:15,774 [main] INFO org.apache.pig.impl.util.SpillableMemoryManager - Selected heap (PS Old Gen) of size 699400192 to monitor. collectionUsageThreshold = 489580128, usageThreshold = 489580128
2017-05-15 15:58:15,865 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MRCompiler - File concatenation threshold: 100 optimistic? false
2017-05-15 15:58:15,923 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MultiQueryOptimizer - MR plan size before optimization: 1
2017-05-15 15:58:15,923 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MultiQueryOptimizer - MR plan size after optimization: 1
2017-05-15 15:58:16,184 [main] INFO org.apache.hadoop.yarn.client.api.impl.TimelineClientImpl - Timeline service address: http://sandbox.hortonworks.com:8188/ws/v1/timeline/
2017-05-15 15:58:16,196 [main] INFO org.apache.hadoop.yarn.client.RMProxy - Connecting to ResourceManager at sandbox.hortonworks.com/172.17.0.2:8050
2017-05-15 15:58:16,396 [main] INFO org.apache.hadoop.yarn.client.AHSProxy - Connecting to Application History server at sandbox.hortonworks.com/172.17.0.2:10200
2017-05-15 15:58:16,576 [main] INFO org.apache.pig.tools.pigstats.mapreduce.MRScriptState - Pig script settings are added to the job
2017-05-15 15:58:16,580 [main] WARN org.apache.pig.tools.pigstats.ScriptState - unable to read pigs manifest file
2017-05-15 15:58:16,584 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler - mapred.job.reduce.markreset.buffer.percent is not set, set to default 0.3
2017-05-15 15:58:16,588 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler - This job cannot be converted run in-process
2017-05-15 15:58:17,258 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler - Added jar file:/pig/rollupreg_extract-jar-with-dependencies.jar to DistributedCache through /tmp/temp-1119775568/tmp-858482998/rollupreg_extract-jar-with-dependencies.jar
2017-05-15 15:58:17,276 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler - Setting up single store job
2017-05-15 15:58:17,294 [main] INFO org.apache.pig.data.SchemaTupleFrontend - Key [pig.schematuple] is false, will not generate code.
2017-05-15 15:58:17,295 [main] INFO org.apache.pig.data.SchemaTupleFrontend - Starting process to move generated code to distributed cacche
2017-05-15 15:58:17,295 [main] INFO org.apache.pig.data.SchemaTupleFrontend - Setting key [pig.schematuple.classes] with classes to deserialize []
2017-05-15 15:58:17,354 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - 1 map-reduce job(s) waiting for submission.
2017-05-15 15:58:17,510 [JobControl] INFO org.apache.hadoop.yarn.client.api.impl.TimelineClientImpl - Timeline service address: http://sandbox.hortonworks.com:8188/ws/v1/timeline/
2017-05-15 15:58:17,511 [JobControl] INFO org.apache.hadoop.yarn.client.RMProxy - Connecting to ResourceManager at sandbox.hortonworks.com/172.17.0.2:8050
2017-05-15 15:58:17,511 [JobControl] INFO org.apache.hadoop.yarn.client.AHSProxy - Connecting to Application History server at sandbox.hortonworks.com/172.17.0.2:10200
2017-05-15 15:58:17,753 [JobControl] WARN org.apache.hadoop.mapreduce.JobResourceUploader - No job jar file set. User classes may not be found. See Job or Job#setJar(String).
2017-05-15 15:58:17,820 [JobControl] INFO org.apache.pig.builtin.PigStorage - Using PigTextInputFormat
2017-05-15 15:58:17,830 [JobControl] INFO org.apache.hadoop.mapreduce.lib.input.FileInputFormat - Total input paths to process : 1
2017-05-15 15:58:17,830 [JobControl] INFO org.apache.pig.backend.hadoop.executionengine.util.MapRedUtil - Total input paths to process : 1
2017-05-15 15:58:17,884 [JobControl] INFO com.hadoop.compression.lzo.GPLNativeCodeLoader - Loaded native gpl library
2017-05-15 15:58:17,889 [JobControl] INFO com.hadoop.compression.lzo.LzoCodec - Successfully loaded & initialized native-lzo library [hadoop-lzo rev 7a4b57bedce694048432dd5bf5b90a6c8ccdba80]
2017-05-15 15:58:17,922 [JobControl] INFO org.apache.pig.backend.hadoop.executionengine.util.MapRedUtil - Total input paths (combined) to process : 1
2017-05-15 15:58:18,525 [JobControl] INFO org.apache.hadoop.mapreduce.JobSubmitter - number of splits:1
2017-05-15 15:58:18,692 [JobControl] INFO org.apache.hadoop.mapreduce.JobSubmitter - Submitting tokens for job: job_1494853652295_0023
2017-05-15 15:58:18,879 [JobControl] INFO org.apache.hadoop.mapred.YARNRunner - Job jar is not present. Not adding any jar to the list of resources.
2017-05-15 15:58:18,973 [JobControl] INFO org.apache.hadoop.yarn.client.api.impl.YarnClientImpl - Submitted application application_1494853652295_0023
2017-05-15 15:58:19,029 [JobControl] INFO org.apache.hadoop.mapreduce.Job - The url to track the job: http://sandbox.hortonworks.com:8088/proxy/application_1494853652295_0023/
2017-05-15 15:58:19,030 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - HadoopJobId: job_1494853652295_0023
2017-05-15 15:58:19,030 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - Processing aliases data,extract
2017-05-15 15:58:19,030 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - detailed locations: M: data[2,7],extract[3,10] C: R:
2017-05-15 15:58:19,044 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - 0% complete
2017-05-15 15:58:19,044 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - Running jobs are [job_1494853652295_0023]
2017-05-15 15:58:29,156 [main] WARN org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - Ooops! Some job has failed! Specify -stop_on_failure if you want Pig to stop immediately on failure.
2017-05-15 15:58:29,156 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - job job_1494853652295_0023 has failed! Stop running all dependent jobs
2017-05-15 15:58:29,157 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - 100% complete
2017-05-15 15:58:29,790 [main] INFO org.apache.hadoop.yarn.client.api.impl.TimelineClientImpl - Timeline service address: http://sandbox.hortonworks.com:8188/ws/v1/timeline/
2017-05-15 15:58:29,791 [main] INFO org.apache.hadoop.yarn.client.RMProxy - Connecting to ResourceManager at sandbox.hortonworks.com/172.17.0.2:8050
2017-05-15 15:58:29,793 [main] INFO org.apache.hadoop.yarn.client.AHSProxy - Connecting to Application History server at sandbox.hortonworks.com/172.17.0.2:10200
2017-05-15 15:58:30,311 [main] INFO org.apache.hadoop.yarn.client.api.impl.TimelineClientImpl - Timeline service address: http://sandbox.hortonworks.com:8188/ws/v1/timeline/
2017-05-15 15:58:30,312 [main] INFO org.apache.hadoop.yarn.client.RMProxy - Connecting to ResourceManager at sandbox.hortonworks.com/172.17.0.2:8050
2017-05-15 15:58:30,313 [main] INFO org.apache.hadoop.yarn.client.AHSProxy - Connecting to Application History server at sandbox.hortonworks.com/172.17.0.2:10200
2017-05-15 15:58:30,465 [main] ERROR org.apache.pig.tools.pigstats.mapreduce.MRPigStatsUtil - 1 map reduce job(s) failed!
2017-05-15 15:58:30,467 [main] WARN org.apache.pig.tools.pigstats.ScriptState - unable to read pigs manifest file
2017-05-15 15:58:30,472 [main] INFO org.apache.pig.tools.pigstats.mapreduce.SimplePigStats - Script Statistics:
HadoopVersion PigVersion UserId StartedAt FinishedAt Features
2.7.3.2.5.0.0-1245 root 2017-05-15 15:58:16 2017-05-15 15:58:30 UNKNOWN
Failed!
Failed Jobs:
JobId Alias Feature Message Outputs
job_1494853652295_0023 data,extract MAP_ONLY Message: Job failed! hdfs://sandbox.hortonworks.com:8020/tmp/temp-1119775568/tmp-1619300225,
Input(s):
Failed to read data from "/pig_hdfs/input.txt"
Output(s):
Failed to produce result in "hdfs://sandbox.hortonworks.com:8020/tmp/temp-1119775568/tmp-1619300225"
Counters:
Total records written : 0
Total bytes written : 0
Spillable Memory Manager spill count : 0
Total bags proactively spilled: 0
Total records proactively spilled: 0
Job DAG:
job_1494853652295_0023
2017-05-15 15:58:30,472 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - Failed!
2017-05-15 15:58:30,499 [main] ERROR org.apache.pig.tools.grunt.Grunt - ERROR 1066: Unable to open iterator for alias extract
Details at logfile: /pig/pig_1494863836458.log
I know it complains that
Failed to read data from "/pig_hdfs/input.txt"
but I am sure this is not the actual issue: if I don't use the UDF and directly dump the data, I get the output. So this is not the problem.
First, you do not need a UDF to get the desired output. You can use a semicolon as the delimiter in the LOAD statement and get the needed column:
data = LOAD 'hdfs://sandbox.hortonworks.com:8020/pig_hdfs/input.txt' USING PigStorage(';');
extract = FOREACH data GENERATE $1;
DUMP extract;
If you insist on using a UDF, then you will have to load the record into a single field and then apply the UDF. Also, your UDF is incorrect: you should split the string s with ';' as the delimiter, which is what is passed from the Pig script.
String s = (String)input.get(0);
String str1 = s.split("\\;")[1];
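The essence of the fix is the delimiter: the field you want is ';'-separated, so the first split on ',' discards it. A minimal, self-contained sketch of the corrected extraction logic (plain Java, outside the Pig `EvalFunc` wrapper, using a hypothetical sample record) is:

```java
public class SplitSketch {
    // Extract the second ';'-separated field, mirroring the corrected UDF body.
    static String extract(String s) {
        return s.split("\\;")[1];
    }

    public static void main(String[] args) {
        // Hypothetical record in the style of input.txt.
        String record = "field0;field1;field2";
        System.out.println(extract(record)); // field1
    }
}
```

Inside the real UDF, `s` would come from `(String) input.get(0)` exactly as in the snippet above.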
And in your Pig script, you need to load the entire record into one field and apply the UDF to field $0:
REGISTER /pig/rollupreg_extract-jar-with-dependencies.jar;
DEFINE myudf com.company.pig.myudf;
data = LOAD 'hdfs://sandbox.hortonworks.com:8020/pig_hdfs/input.txt' AS (f1:chararray);
extract = FOREACH data GENERATE myudf($0);
DUMP extract;

Cannot access output at localhost page

package com.example;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
@SpringBootApplication
public class DemoApplication {
public static void main(String[] args) {
System.out.println("xxxx");
SpringApplication.run(DemoApplication.class, args);
}
}
The other class:
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;
@RestController
public class SampleController {
@RequestMapping("/")
public String index() {
return "Greetings from Spring Boot!";
}
}
I made Tomcat work on port 8181, because when I used 8080 and ran it in IntelliJ, it said 8080 was already in use and could not start.
So I use 8181, and after executing it opens the localhost:8181 page, but it is a white page with nothing on it.
These are the output logs:
06-Mar-2016 14:38:16.383 INFO [localhost-startStop-1] org.apache.catalina.startup.HostConfig.deployDirectory Deploying web application directory /opt/tomcat/webapps/manager
06-Mar-2016 14:38:16.977 INFO [localhost-startStop-1] org.apache.catalina.startup.HostConfig.deployDirectory Deployment of web application directory /opt/tomcat/webapps/manager has finished in 593 ms
These are the Catalina logs:
06-Mar-2016 14:38:05.878 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Server version: Apache Tomcat/8.0.32
06-Mar-2016 14:38:05.887 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Server built: Feb 2 2016 19:34:53 UTC
06-Mar-2016 14:38:05.888 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Server number: 8.0.32.0
06-Mar-2016 14:38:05.888 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log OS Name: Linux
06-Mar-2016 14:38:05.891 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log OS Version: 4.2.0-30-generic
06-Mar-2016 14:38:05.892 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Architecture: amd64
06-Mar-2016 14:38:05.893 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Java Home: /usr/lib/jvm/java-8-oracle/jre
06-Mar-2016 14:38:05.893 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log JVM Version: 1.8.0_74-b02
06-Mar-2016 14:38:05.894 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log JVM Vendor: Oracle Corporation
06-Mar-2016 14:38:05.894 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log CATALINA_BASE: /home/caneraydin/.IntelliJIdea16/system/tomcat/Unnamed_Last5
06-Mar-2016 14:38:05.895 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log CATALINA_HOME: /opt/tomcat
06-Mar-2016 14:38:05.896 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Command line argument: -Djava.util.logging.config.file=/home/caneraydin/.IntelliJIdea16/system/tomcat/Unnamed_Last5/conf/logging.properties
06-Mar-2016 14:38:05.897 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Command line argument: -Djava.util.logging.manager=org.apache.juli.ClassLoaderLogManager
06-Mar-2016 14:38:05.898 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Command line argument: -Dcom.sun.management.jmxremote=
06-Mar-2016 14:38:05.899 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Command line argument: -Dcom.sun.management.jmxremote.port=1099
06-Mar-2016 14:38:05.900 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Command line argument: -Dcom.sun.management.jmxremote.ssl=false
06-Mar-2016 14:38:05.900 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Command line argument: -Dcom.sun.management.jmxremote.authenticate=false
06-Mar-2016 14:38:05.901 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Command line argument: -Djava.rmi.server.hostname=127.0.0.1
06-Mar-2016 14:38:05.901 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Command line argument: -Djava.endorsed.dirs=/opt/tomcat/endorsed
06-Mar-2016 14:38:05.902 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Command line argument: -Dcatalina.base=/home/caneraydin/.IntelliJIdea16/system/tomcat/Unnamed_Last5
06-Mar-2016 14:38:05.905 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Command line argument: -Dcatalina.home=/opt/tomcat
06-Mar-2016 14:38:05.905 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Command line argument: -Djava.io.tmpdir=/opt/tomcat/temp
06-Mar-2016 14:38:05.906 INFO [main] org.apache.catalina.core.AprLifecycleListener.lifecycleEvent The APR based Apache Tomcat Native library which allows optimal performance in production environments was not found on the java.library.path: /home/caneraydin/Downloads/idea-IU-144.4199.23/bin::/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib
06-Mar-2016 14:38:06.265 INFO [main] org.apache.coyote.AbstractProtocol.init Initializing ProtocolHandler ["http-nio-8181"]
06-Mar-2016 14:38:06.296 INFO [main] org.apache.tomcat.util.net.NioSelectorPool.getSharedSelector Using a shared selector for servlet write/read
06-Mar-2016 14:38:06.302 INFO [main] org.apache.coyote.AbstractProtocol.init Initializing ProtocolHandler ["ajp-nio-34294"]
06-Mar-2016 14:38:06.304 INFO [main] org.apache.tomcat.util.net.NioSelectorPool.getSharedSelector Using a shared selector for servlet write/read
06-Mar-2016 14:38:06.305 INFO [main] org.apache.catalina.startup.Catalina.load Initialization processed in 1704 ms
06-Mar-2016 14:38:06.353 INFO [main] org.apache.catalina.core.StandardService.startInternal Starting service Catalina
06-Mar-2016 14:38:06.353 INFO [main] org.apache.catalina.core.StandardEngine.startInternal Starting Servlet Engine: Apache Tomcat/8.0.32
06-Mar-2016 14:38:06.370 INFO [main] org.apache.coyote.AbstractProtocol.start Starting ProtocolHandler ["http-nio-8181"]
06-Mar-2016 14:38:06.433 INFO [main] org.apache.coyote.AbstractProtocol.start Starting ProtocolHandler ["ajp-nio-34294"]
06-Mar-2016 14:38:06.448 INFO [main] org.apache.catalina.startup.Catalina.start Server startup in 142 ms
06-Mar-2016 14:38:16.383 INFO [localhost-startStop-1] org.apache.catalina.startup.HostConfig.deployDirectory Deploying web application directory /opt/tomcat/webapps/manager
06-Mar-2016 14:38:16.977 INFO [localhost-startStop-1] org.apache.catalina.startup.HostConfig.deployDirectory Deployment of web application directory /opt/tomcat/webapps/manager has finished in 593 ms
What am I doing wrong?
Change the port in the application.properties file to 8181; it defaults to 8080.
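Concretely, for a Spring Boot application the embedded server port is set in src/main/resources/application.properties, e.g.:

```properties
# Run the embedded Tomcat on 8181 instead of the default 8080
server.port=8181
```

With this in place, Spring Boot's embedded server listens on 8181 and the app is reachable at localhost:8181 without a standalone Tomcat deployment.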
Regards,
Nitin