Turn absolute file paths and line numbers in the tool output into hyperlinks - IntelliJ IDEA

This is an example output:
/usr/local/bin/node /usr/local/bin/elm-make src/elm/Main.elm --output=builds/main.js
-- TYPE MISMATCH ---------------------------------------------- src/elm/Main.elm
The type annotation for `init` does not match its definition.
35| init : Maybe Route.Location -> ( Model, Cmd Msg )
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The type annotation is saying:
Maybe Route.Location -> ( { route : Maybe Route.Location }, Cmd Msg )
But I am inferring that the definition has this type:
Maybe Route.Location
-> ( { route : Maybe Route.Location -> Route.Model }, Cmd a )
Detected errors in 1 module.
Process finished with exit code 1
This is the regex that I came up with:
http://regexr.com/3egqu
However, creating an output filter out of it doesn't work.
Thus far, I only know that the following works: ------ ($FILE_PATH$)
It turns the file path into a link.
Help me find a way to include the line numbers into the links.

Here's what I've come up with.
First, elm-make --report json outputs the build errors as structured JSON:
$ elm-make --report json src/main.elm
[{"tag":"unused import","overview":"Module `Bootstrap.CDN` is unused.","details":"Best to remove it. Don't save code quality for later!","region":{"start":{"line":3,"column":1},"end":{"line":3,"column":28}},"type":"warning","file":"src/main.elm"}]
Now you can pipe that output through jq to reformat it:
elm make src/main.elm --report json --output ./public/app.js | \
jq '.[] | { type: .type, file: .file, line: .region.start.line|tostring, column: .region.start.column|tostring, tag: .tag, details: .details }' | \
jq --raw-output '. | "[" + (.type|ascii_upcase) + "] " + .file + ":" + .line + ":" + .column + " " + .tag + " -- " + .details + "\n"'
That gives you the reformatted output:
[WARNING] src/main.elm:9:1 unused import -- Best to remove it. Don't save code quality for later!
[WARNING] src/main.elm:17:1 missing type annotation -- I inferred the type annotation so you can copy it into your code:
main : Program Never Model Main.Msg
You pick that up in IntelliJ using the output filter
$FILE_PATH$:$LINE$:$COLUMN$ $MESSAGE$
You can then click an error message to jump to the file and line, with the error text shown in a tooltip.
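If you want this as a one-step run configuration, the whole pipeline can live in a small wrapper script whose output IntelliJ then filters. A minimal sketch, assuming the paths from above; the script name elm-check.sh is made up, and it collapses the two jq passes into one:

#!/usr/bin/env bash
# elm-check.sh (hypothetical name): compile, then emit one
# "[TYPE] FILE:LINE:COLUMN tag -- details" line per problem, which the
# $FILE_PATH$:$LINE$:$COLUMN$ $MESSAGE$ output filter turns into links.
elm make src/main.elm --report json --output ./public/app.js | \
jq --raw-output '.[] | "[" + (.type|ascii_upcase) + "] " + .file + ":"
    + (.region.start.line|tostring) + ":" + (.region.start.column|tostring)
    + " " + .tag + " -- " + .details'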

Related

Nextflow adding def function into script

I have got errors like .command.sh: line 2: syntax error near unexpected token `('
/*
 * Step 3
 */
chr_length = file(params.chr_length)

process create_bedgraph_and_bigwig {
    publishDir "${params.outdir}/bedgraphandbigwig", mode: 'copy'

    input:
    set val(sample_id), file(vector_log) from vector_log_ch
    set val(sample_id), file(target_query_bam) from target_query_bam_ch
    file chr_length

    output:
    set val(sample_id), file("${sample_id}.bedgraph.log.txt") into bed_log_ch
    set val(sample_id), file("${sample_id}.bed") into bed_ch
    set val(sample_id), file("${sample_id}.clean.bed") into clean_bed_ch
    set val(sample_id), file("${sample_id}.fragments.bed") into fragments_bed_ch
    set val(sample_id), file("${sample_id}.sorted.fragments.bed") into sorted_fragments_bed_ch

    shell:
    '''
    def fp = file(${vector_log})
    def lines = fp.readLines()
    def line3 = lines[3].split(' ')[4].toInteger()
    def line4 = lines[4].split(' ')[4].toInteger()
    def aln_sum = (10000/(line3 + line4)).toString()

    bedtools bamtobed -bedpe -i !{target_query_bam} > !{sample_id}.bed 2>!{sample_id}.bedgraph.log.txt
    awk '$1==$4 && $6-$2 < 1000 {{print $0}}' !{sample_id}.bed > !{sample_id}.clean.bed 2>!{sample_id}.bedgraph.log.txt
    cut -f 1,2,6 !{sample_id}.clean.bed > !{sample_id}.fragments.bed 2>!{sample_id}.bedgraph.log.txt
    sort -k 1,1 !{sample_id}.fragments.bed > !{sample_id}.sorted.fragments.bed
    '''
}
The simple answer is to avoid using 'def' if the variable needs to be used in a shell definition or template. I couldn't actually find this after a quick search of the documentation, but I did find this note from the author:
Using groovy native string interpolation that would work, but when using the !{..} syntax scripts variable cannot be declared locally using the def keyword.
To summarise:
- script/shell variables should be defensively declared in the local scope using the def keyword
- do not use def when:
  i. the variable needs to be referenced as an output value
  ii. the variable needs to be used in a shell template
https://github.com/nextflow-io/nextflow/issues/678#issuecomment-386206123
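For illustration, here is a trimmed sketch of the fix applied to the process above: the Groovy lines move out of the ''' literal (where bash was tripping over def and ${...}) and are declared without def so the !{..} template can still see them. The echo line is only there to show !{aln_sum} resolving; it is not part of the original script:

shell:
// plain Groovy runs here, before the ''' literal; no def, per the note above
lines = vector_log.readLines()
line3 = lines[3].split(' ')[4].toInteger()
line4 = lines[4].split(' ')[4].toInteger()
aln_sum = (10000 / (line3 + line4)).toString()
'''
echo "scale factor: !{aln_sum}" > !{sample_id}.bedgraph.log.txt
'''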

channel checks as empty even if it has content

I am trying to have a process that is launched only if a combination of conditions is met, but when checking whether a channel has a path to a file, it always returns it as empty. Probably I am doing something wrong; in that case, please correct my code. I tried to follow some of the suggestions in this issue, but with no success.
Consider the following minimal example:
process one {
    output:
    file("test.txt") into _chProcessTwo

    script:
    """
    echo "Hello world" > "test.txt"
    """
}

// making a copy so I check first if something in the channel or not
// avoids raising exception of MultipleInputChannel
_chProcessTwo.into {
    _chProcessTwoView;
    _chProcessTwoCheck;
    _chProcessTwoUse
}

// print contents of channel
println "Channel contents: " + _chProcessTwoView.toList().view()

process two {
    input:
    file(myInput) from _chProcessTwoUse

    when:
    (!_chProcessTwoCheck.toList().isEmpty())

    script:
    def test = _chProcessTwoUse.toList().isEmpty() ? "I'm empty" : "I'm NOT empty"
    println "The outcome is: " + test
}
I want to have process two run if and only if there is a file in the _chProcessTwo channel.
If I run the above code I obtain:
marius@dev:~/pipeline$ ./bin/nextflow run test.nf
N E X T F L O W ~ version 19.09.0-edge
Launching `test.nf` [infallible_gutenberg] - revision: 9f57464dc1
[c8/bf38f5] process > one [100%] 1 of 1 ✔
[- ] process > two -
[/home/marius/pipeline/work/c8/bf38f595d759686a497bb4a49e9778/test.txt]
where the last line is actually the contents of _chProcessTwoView.
If I remove the when directive from the second process I get:
marius@mg-dev:~/pipeline$ ./bin/nextflow run test.nf
N E X T F L O W ~ version 19.09.0-edge
Launching `test.nf` [modest_descartes] - revision: 5b2bbfea6a
[57/1b7b97] process > one [100%] 1 of 1 ✔
[a9/e4b82d] process > two [100%] 1 of 1 ✔
[/home/marius/pipeline/work/57/1b7b979933ca9e936a3c0bb640c37e/test.txt]
with the contents of the second worker .command.log file being: The outcome is: I'm empty
I also tried without toList().
What am I doing wrong? Thank you in advance
Update: a workaround would be to check _chProcessTwoUse.view() != "" but that is pretty dirty
Update 2: as requested by @Steve, I've updated the code to reflect a bit more closely the actual conditions I have in my own pipeline:
def runProcessOne = true

process one {
    when:
    runProcessOne

    output:
    file("inputProcessTwo.txt") into _chProcessTwo optional true
    file("inputProcessThree.txt") into _chProcessThree optional true

    script:
    // this would replace the probability that output is not created
    def outputSomething = false
    """
    if ${outputSomething}; then
        echo "Hello world" > "inputProcessTwo.txt"
        echo "Goodbye world" > "inputProcessThree.txt"
    else
        echo "Sorry. Process one did not write to file."
    fi
    """
}

// making a copy so I check first if something in the channel or not
// avoids raising exception of MultipleInputChannel
_chProcessTwo.into {
    _chProcessTwoView;
    _chProcessTwoCheck;
    _chProcessTwoUse
}

// print contents of channel
println "Channel contents: " + _chProcessTwoView.view()
println _chProcessTwoView.view() ? "Me empty" : "NOT empty"

process two {
    input:
    file(myInput) from _chProcessTwoUse

    when:
    (runProcessOne)

    script:
    """
    echo "The outcome is: ${myInput}"
    """
}

process three {
    input:
    file(defaultInput) from _chUpstreamProcesses
    file(inputFromProcessTwo) from _chProcessThree

    script:
    def extra_parameters = _chProcessThree.isEmpty() ? "" : "--extra-input " + inputFromProcessTwo
    """
    echo "Hooray! We got: ${extra_parameters}"
    """
}
As @Steve mentioned, I should not even check if a channel is empty; Nextflow should know better than to initiate the process. But I think in this construct I will have to.
Marius
I think part of the problem here is that process 'one' creates only optional outputs. This makes dealing with the optional inputs in process 'three' a bit tricky. I would try to reconcile this if possible. If this can't be reconciled, then you'll need to deal with the optional inputs in process 'three'. To do this, you'll basically need to create a dummy file, pass it into the channel using the ifEmpty operator, then use the name of the dummy file to check whether or not to prepend the argument's prefix. It's a bit of a hack, but it works pretty well.
The first step is to actually create the dummy file. I like shareable pipelines, so I would just create this in your baseDir, perhaps under a folder called 'assets':
mkdir assets
touch assets/NO_FILE
Then pass in your dummy file if your '_chProcessThree' channel is empty:
params.dummy_file = "${baseDir}/assets/NO_FILE"
dummy_file = file(params.dummy_file)

process three {
    input:
    file(defaultInput) from _chUpstreamProcesses
    file(optfile) from _chProcessThree.ifEmpty(dummy_file)

    script:
    def extra_parameters = optfile.name != 'NO_FILE' ? "--extra-input ${optfile}" : ''
    """
    echo "Hooray! We got: ${extra_parameters}"
    """
}
Also, these lines are problematic:
//print contents of channel
println "Channel contents: " + _chProcessTwoView.view()
println _chProcessTwoView.view() ? "Me empty" : "NOT empty"
Calling view() will emit all values from the channel to stdout. You can ignore whatever value it returns. Unless you enable DSL2, the channel will then be empty. I think what you're looking for here is a closure:
_chProcessTwoView.view { "Found: $it" }
Be sure to append -ansi-log false to your nextflow run command so the output doesn't get clobbered. HTH.
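For what it's worth, the same placeholder trick can also gate process 'two' itself, which was the original goal. A sketch reusing dummy_file from above, where the when: block skips the task if only the placeholder arrived:

process two {
    input:
    file(myInput) from _chProcessTwoUse.ifEmpty(dummy_file)

    when:
    myInput.name != 'NO_FILE' // only run when a real file came through

    script:
    """
    echo "The outcome is: ${myInput}"
    """
}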

how to pass a function under snakemake run directive

I am building a workflow in Snakemake and would like to reuse one of the rules for two different input sources. The input sources could be either source1 or source1+source2, and depending on the input the output directory would also vary. Since this was quite complicated to do in the same rule and I didn't want to duplicate the full rule, I would like to create two rules with different input/output, but running the same command.
Is it possible to make this work? I get the DAG resolved correctly, but the jobs don't go through on the cluster (ERROR: bamcov_cmd not defined).
An example is below (both rules use the same command at the end).
This is the command:
def bamcov_cmd():
    return( (deepTools_path+"bamCoverage " +
             "-b {input.bam} " +
             "-o {output} " +
             "--binSize {params.bw_binsize} " +
             "-p {threads} " +
             "--normalizeTo1x {params.genome_size} " +
             "{params.read_extension} " +
             "&> {log}") )
This is the rule:
rule bamCoverage:
    input:
        bam = file1+"/{sample}.bam",
        bai = file1+"/{sample}.bam.bai"
    output:
        "bamCoverage/{sample}.filter.bw"
    params:
        bw_binsize = bw_binsize,
        genome_size = int(genome_size),
        read_extension = "--extendReads"
    log:
        "bamCoverage/logs/bamCoverage.{sample}.log"
    benchmark:
        "bamCoverage/.benchmark/bamCoverage.{sample}.benchmark"
    threads: 16
    run:
        bamcov_cmd()
This is the optional rule 2:
rule bamCoverage2:
    input:
        bam = file2+"/{sample}.filter.bam",
        bai = file2+"/{sample}.filter.bam.bai"
    output:
        "bamCoverage/{sample}.filter.bw"
    params:
        bw_binsize = bw_binsize,
        genome_size = int(genome_size),
        read_extension = "--extendReads"
    log:
        "bamCoverage/logs/bamCoverage.{sample}.log"
    benchmark:
        "bamCoverage/.benchmark/bamCoverage.{sample}.benchmark"
    threads: 16
    run:
        bamcov_cmd()
What you asked is possible in Python.
It depends on whether you have JUST Python code in the file, or Python and Snakemake.
I will answer that first, and then I have a follow-up response, because I want you to set it up differently so you don't have to do it this way.
Just Python:
from fileContainingMyBamCovCmdFunction import bamcov_cmd

rule bamCoverage:
    ...
    run:
        bamcov_cmd()
Visually, see how I do it in this file, to reference access to buildHeader and buildSample. These files are being called by a Snakefile. It should work the same for you.
https://github.com/LCR-BCCRC/workflow_exploration/blob/master/Snakemake/modules/py_buildFile/buildFile.py
EDIT 2017-07-23 - Updating code segment below to reflect user comment
Snakemake and Python:
include: "fileContainingMyBamCovCmdFunction.suffix"

rule bamCoverage:
    ...
    run:
        shell(bamcov_cmd())
EDIT END
If the function is truly specific to the bamCoverage call, you can put it back in the rule if you prefer. This implies it's not being called elsewhere, which may be true.
Be careful when annotating files using '.' notation, I use '_' as I find it's easier to prevent creating cyclical dependencies this way.
Also, if you do end up leaving the two rules separate, you will likely end up with ambiguity errors.
http://snakemake.readthedocs.io/en/latest/snakefiles/rules.html?highlight=ruleorder#handling-ambiguous-rules
When possible, it's best practice to have rules generating unique outputs.
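If you do keep the two rules, the ruleorder directive from the linked docs is the usual way to resolve the ambiguity; for example:

# prefer bamCoverage whenever both rules could produce the same output file
ruleorder: bamCoverage > bamCoverage2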
As for alternatives, consider setting up the code like this:
from subprocess import call

rule all:
    input:
        "path/to/file/mySample.bw"
        # OR
        # "path/to/file/mySample_filtered.bw"

rule bamCoverage:
    input:
        bam = file1+"/{sample}.bam",
        bai = file1+"/{sample}.bam.bai"
    output:
        "bamCoverage/{sample}.bw"
    params:
        bw_binsize = bw_binsize,
        genome_size = int(genome_size),
        read_extension = "--extendReads"
    log:
        "bamCoverage/logs/bamCoverage.{sample}.log"
    benchmark:
        "bamCoverage/.benchmark/bamCoverage.{sample}.benchmark"
    threads: 16
    run:
        # build the command string from the rule's own input/output/params
        callString = deepTools_path + "bamCoverage " \
            + "-b " + str(input.bam) \
            + " -o " + str(output) \
            + " --binSize " + str(params.bw_binsize) \
            + " -p " + str(threads) \
            + " --normalizeTo1x " + str(params.genome_size) \
            + " " + str(params.read_extension) \
            + " &> " + str(log)
        call(callString, shell=True)

rule filterBam:
    input:
        "{pathFB}/{sample}.bam"
    output:
        "{pathFB}/{sample}_filtered.bam"
    run:
        callString = "samtools view -bh -F 512 " + str(input) \
            + " > " + str(output)
        call(callString, shell=True)
Thoughts?

How can I signal parsing errors with LPeg?

I'm writing an LPeg-based parser. How can I make it so a parsing error returns nil, errmsg?
I know I can use error(), but as far as I know that creates a normal error, not nil, errmsg.
The code is pretty long, but the relevant part is this:
local eof = lpeg.P(-1)
local nl = (lpeg.P "\r")^-1 * lpeg.P "\n" + lpeg.P "\\n" + eof -- \r for winblows compat
local nlnoeof = (lpeg.P "\r")^-1 * lpeg.P "\n" + lpeg.P "\\n"
local ws = lpeg.S(" \t")
local inlineComment = lpeg.P("`") * (1 - (lpeg.S("`") + nl * nl)) ^ 0 * lpeg.P("`")
local wsc = ws + inlineComment -- comments count as whitespace
local backslashEscaped
    = lpeg.P("\\ ") / " "   -- escaped spaces
    + lpeg.P("\\\\") / "\\" -- escaped escape character
    + lpeg.P("\\#") / "#"
    + lpeg.P("\\>") / ">"
    + lpeg.P("\\`") / "`"
    + lpeg.P("\\n")         -- \\n newlines count as backslash escaped
    + lpeg.P("\\") * lpeg.P(function(_, i)
          error("Unknown backslash escape at position " .. i) -- this error() is what I wanna get rid of.
      end)
local Line = lpeg.C((wsc + (backslashEscaped + 1 - nl))^0) / function(x) return x end * nl * lpeg.Cp()
I want Line:match(...) to return nil, errmsg when there's an invalid escape.
LPeg itself doesn't provide specific functions to help you with error reporting. A quick fix to your problem would be to make a protected call (pcall) to match like this:
local function parse(text)
    local ok, result = pcall(function () return Line:match(text) end)
    if ok then
        return result
    else
        -- `result` will contain the error thrown. If it is a string,
        -- Lua will add additional information to it (filename and line number).
        -- If you do not want this, throw a table instead, like `{ msg = "error" }`,
        -- and access the message using `result.msg`
        return nil, result
    end
end
However, this will also catch any other error, which you probably don't want. A better solution would be to use LPegLabel instead. LPegLabel is an extension of LPeg that adds support for labeled failures. Just replace require"lpeg" with require"lpeglabel" and then use lpeg.T(L) to throw labels where L is an integer from 1-255 (0 is used for regular PEG failures).
local unknown_escape = 1
local backslashEscaped = ... + lpeg.P("\\") * lpeg.T(unknown_escape)
Now Line:match(...) will return nil, label, suffix if there is a label thrown (suffix is the remaining unprocessed input, which you can use to compute the error position via its length). With this, you can print out the appropriate error message based on the label. For more complex grammars, you would probably want a more systematic way of mapping the error labels and messages. Please check the documentation found in the readme of the LPegLabel repository to see examples of how one may do so.
LPegLabel also allows you to catch the labels in the grammar by the way (via labeled choice); this is useful for implementing things like error recovery. For more information on labeled failures and examples, please check the documentation.
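Putting that together, here is a minimal sketch of the nil, errmsg shape the question asks for, assuming Line has been rebuilt with the labeled backslashEscaped above (the label-to-message table is a placeholder you would extend per grammar):

local unknown_escape = 1
local labels = {
    [unknown_escape] = "unknown backslash escape",
}

local function parse(text)
    local result, label, suffix = Line:match(text)
    if result then
        return result
    end
    -- error position: everything before the unprocessed suffix was consumed
    local pos = #text - #suffix + 1
    local msg = labels[label] or "parse error"
    return nil, msg .. " at position " .. pos
end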

MEL error unterminated string, script for textures to change their strings

Recently I got into MEL programming to help out a few friends in Maya. They wanted to have their files referenced on a server, so I needed to change the reference strings. Now I have put together a solution to do this, and used another example as a guide, but when I run the script it says:
// Error: int $nt=tokenize $TexturePath "\" $buff;
//
// Error: Line 12.43: Unterminated string. //
What gives?
P.S. Full code below for anyone who wants to use it.
string $SceneTextures[] = `ls -tex`;
string $plus = ""; // Place a file type here to be saved in that subfolder

for ($i = 0; $i < (`size $SceneTextures`); $i++)
{
    $Test = catchQuiet(`getAttr ($SceneTextures[$i] + ".fileTextureName")`);
    if ($Test == 0)
    {
        string $TexturePath = `getAttr ($SceneTextures[$i] + ".fileTextureName")`;
        string $buff[];
        int $nt = `tokenize $TexturePath "\\" $buff`;
        string $newPath = ("${ARC_SURF}\\" + plus + "\\" + $buff[$nt-3] + "\\" + $buff[$nt-2] + "\\" + $buff[$nt-1]);
        setAttr -type "string" ($SceneTextures[$i] + ".fileTextureName") $NewPath;
        catchQuiet (AEfileTextureReloadCmd ($SceneTextures[$i] + ".fileTextureName"));
        //print $TexturePath;
    } //end if
} //end for i
EDIT: Fixed the code as it should be; now it only throws // Error: line 14: Invalid negative index used to reference array "$buff".
But I think that probably just one texture is screwing things up; I will check and report back.
I'm no expert in MEL, but in many languages \ is used to escape control sequences, so I would guess you want "\\" instead of "\" in the many places it appears.
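Concretely, the tokenize line from the error message parses once the backslash is doubled, matching what the edited code above already shows:

// a bare "\" escapes the closing quote and causes "Unterminated string";
// "\\" is a literal backslash inside a MEL string
int $nt = `tokenize $TexturePath "\\" $buff`;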