I'm using Snakemake and tried to organize my overall pipeline structure by using modules, as the modularization documentation suggests. However, one of my modules has a rule that depends on the output of the other module, and since both modules have their own namespace, the outputs and inputs are not connected (at least that's how it appears to me). I tried a lot of different things, but have not found a solution yet.
So, my question in short: is there a way in Snakemake to use modules such that a rule in one module can (dynamically) rely on the output of another module's rule?
Here's a minimal example to illustrate my situation:
Snakemake main
# module1
module module1:
    snakefile: "modules/module1/Snakefile"
    prefix: "output/module1"

use rule * from module1 as module1_*

# module2
module module2:
    snakefile: "modules/module2/Snakefile"
    prefix: "output/module2"

use rule * from module2 as module2_*

# ======== Main rule ===========
rule all:
    input:
        rules.module1_all.input,
        rules.module2_all.input
    default_target: True  # Makes this rule the default rule
Module 1
rule create_txt:
    output:
        "output/test.txt"
    shell:
        "touch {output}"

rule all:
    input:
        "output/test.txt"
Module 2
rule create_txt2:
    input:
        rules.module1_create_txt.output
    output:
        "output/test2.txt"
    shell:
        "touch {output}"

rule all:
    input:
        "output/test2.txt"
If I then run the pipeline, this is the output (I dropped the trailing exceptions, since I don't think they would help):
(snakemake) Olivers-MacBook-Pro-5:test_snakemake oliverkuchler$ snakemake -nr
Building DAG of jobs...
Traceback (most recent call last):
File "/Users/oliverkuchler/opt/miniconda3/envs/snakemake/lib/python3.10/site-packages/snakemake/dag.py", line 1808, in collect_potential_dependencies
yield PotentialDependency(file, known_producers[file], True)
KeyError: 'output/module2/output/module1/output/test.txt'
So, as you can see, Snakemake accepts referencing rules from the other module. However, this leads to multiple prefixes being applied to the input file 'output/module2/output/module1/output/test.txt'. But even if I solved that issue, module2 would still not be able to refer to rules from module1. Are there any solutions that I'm not seeing? Happy about any suggestions :)
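To make the intent concrete, here is an untested sketch of one direction I can imagine: passing module1's already-prefixed output path into module2 via the module's config directive, so that module2 never hard-codes a cross-module path (all paths are assumed from the layout above, and I have not verified how prefix interacts with config-supplied paths):

```python
# Main Snakefile (sketch, untested): hand module2 the prefixed location
# of module1's output through its config, instead of letting module2
# reference module1's rules directly across namespaces.
module module2:
    snakefile: "modules/module2/Snakefile"
    prefix: "output/module2"
    config: {"module1_txt": "output/module1/output/test.txt"}

use rule * from module2 as module2_*

# modules/module2/Snakefile (sketch): read the dependency from config.
rule create_txt2:
    input:
        config["module1_txt"]
    output:
        "output/test2.txt"
    shell:
        "touch {output}"
```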
For using conda activate myenv inside a script rule, I should add
shell.prefix("source /usr/local/genome/Anaconda3/etc/profile.d/conda.sh;")
at the beginning of the Snakefile.
I would like to add this option to my profile, for example in config.yaml or in sge-submit.py.
I tried adding it in sge-jobscript.sh, but it doesn't seem to work.
Is there a solution for this? Do you have the same problem?
Thanks in advance
I think you should be able to add a prefix variable to your config.yaml:
prefix: "source /usr/local/genome/Anaconda3/etc/profile.d/conda.sh;"
and then, in your Snakefile:
shell.prefix(config["prefix"])
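Putting the two pieces together, a minimal sketch (only the conda.sh path comes from the question; the rule and environment name are illustrative, and this assumes the profile's config.yaml ends up in the workflow config):

```python
# config.yaml:
#   prefix: "source /usr/local/genome/Anaconda3/etc/profile.d/conda.sh;"

configfile: "config.yaml"

# Every shell: command below is now prepended with the prefix,
# so `conda activate` becomes available inside rules.
shell.prefix(config["prefix"])

rule example:
    output:
        "version.txt"
    shell:
        "conda activate myenv && python --version > {output}"
```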
I have faced a problem with Sphinx in Python. Even though I followed the instructions from https://groups.google.com/forum/#!topic/sphinx-users/lO4CeNJQWjg I was not able to solve it.
I have a Docs/source folder which contains:
conf.py
index.rst
RstFiles (the folder which contains .rst files for each module).
In conf.py I specify the abs path in the following way:
sys.path.insert(0, os.path.abspath('..'))
In index.rst I call all the modules from RstFiles folder in the following way:
.. toctree::
   :maxdepth: 2
   :caption: Contents:

   BatchDataContainer.rst
   BatchDefaultValues.rst
   BatchTypes.rst
And finally, the content of each .rst file is as follows:
BatchDataContainer
==================
.. automodule:: RstFiles.BatchDataContainer
   :members:
When I run sphinx-build I get 2 main errors:
D:\hfTools\Projects\Validation-Source\Docs\source\RstFiles\BatchDataContainer.rst:
WARNING: document isn't included in any toctree
and
WARNING: autodoc: failed to import module 'BatchDataContainer' from
module 'RstFiles'; the following exception was raised: No module named
'RstFiles'
Any ideas what might be wrong? I have tried different things already and nothing has helped.
If conf.py is located in this directory,
D:\hfTools\Projects\Validation-Source\Docs\source,
and the project's Python modules (including BatchDataContainer.py) are in
D:\hfTools\Projects\Validation-Source\Products,
then you need sys.path.insert(0, os.path.abspath('../..')) in conf.py.
The automodule directive needs to be updated as well:
.. automodule:: Products.BatchDataContainer
   :members:
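To see why '../..' is the right number of levels, here is a small path sketch (directory names taken from the question, shown POSIX-style for brevity):

```python
import os.path

# conf.py lives in <root>/Docs/source; the modules live in <root>/Products.
conf_dir = "/Validation-Source/Docs/source"

# '..' from conf.py only reaches Docs/, which does not contain Products/:
one_up = os.path.normpath(os.path.join(conf_dir, ".."))     # /Validation-Source/Docs
# '../..' reaches the project root, from which Products.BatchDataContainer
# is importable as a package:
two_up = os.path.normpath(os.path.join(conf_dir, "../.."))  # /Validation-Source

print(one_up)
print(two_up)
```

(For `Products.BatchDataContainer` to import cleanly, `Products` also needs to be a package, i.e. contain an `__init__.py`.)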
I'm working on a Node.js native module which contains C and C++ sources; node-gyp is used to build the module.
As I want only one specific warning to be treated as an error in C code, I use the following lines in binding.gyp:
"cflags!": [ "-Werror"],
"cflags": [ "-Werror=implicit-function-declaration" ],
This works fine while compiling C code but produces the following warning on each C++ source file:
cc1plus: warning: ‘-Werror=’ argument ‘-Werror=implicit-function-declaration’ is not valid for C++
I found this answer - Apply C-specific gcc options to C/C++ mixed library - which solves the same problem when using 'pure' CMake. Unfortunately, I didn't find whether, and how, it is possible to add this condition correctly to a GYP configuration file - maybe using variables and conditions? Please let me know if it's solvable. Thanks.
I found a solution to the problem in my question, and I'm posting an answer just in case somebody has the same kind of problem.
The original, incorrect configuration in binding.gyp was as follows:
"cflags!": [ "-Werror"],
"cflags": [ "-Werror=implicit-function-declaration" ],
The correct configuration for my requirements is:
"cflags!": [ "-Werror"],
"cflags_c": [ "-Werror=implicit-function-declaration" ],
To avoid the warning in C++, we just need to add the required flag to the C-specific flags list cflags_c.
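In context, a minimal binding.gyp target might look like this (the target and source names are made up for illustration):

```python
{
  "targets": [
    {
      "target_name": "my_module",
      "sources": [ "src/foo.c", "src/bar.cc" ],
      # Remove -Werror from the flags passed to both C and C++ files...
      "cflags!": [ "-Werror" ],
      # ...and promote just this one warning to an error, for C files only,
      # so cc1plus never sees a C-only -Werror= option.
      "cflags_c": [ "-Werror=implicit-function-declaration" ]
    }
  ]
}
```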
The solution was found while studying the my_module.target.mk file in my project, which contains the following comments (thanks to the developers!):
# Flags passed to all source files.
CFLAGS_Release := \
# Flags passed to only C files.
CFLAGS_C_Release := \
# Flags passed to only C++ files.
CFLAGS_CC_Release := \
Thus it seemed obvious in hindsight, but I still haven't found a clear reference to these flags in the CMake or GYP documentation. Please provide me with the corresponding links if you know them or find them - I'd like to know where my mistake in searching the docs was, so I can avoid it in the future.
I have a library that interfaces with ImageMagick 6. During compilation I get the compilation warnings below (promoted to errors by me).
I am aware that explicitly defining these values during compilation using -DMAGICKCORE_QUANTUM_DEPTH=16 -DMAGICKCORE_HDRI_ENABLE=0 will solve the issue (on my specific installation). However, as I am writing my CMake configuration files to be as portable as I can make them, this feels way too brittle, and I really hope there is a better way.
Which brings me back to my question: is there a way to determine MAGICKCORE_HDRI_ENABLE and MAGICKCORE_QUANTUM_DEPTH using cmake, bash or similar for the specific version of the library I am linking against?
/usr/include/ImageMagick-6/magick/magick-config.h:29:3: error: #warning "you should set MAGICKCORE_QUANTUM_DEPTH to sensible default set it to configure time default" [-Werror=cpp]
# warning "you should set MAGICKCORE_QUANTUM_DEPTH to sensible default set it to configure time default"
^
/usr/include/ImageMagick-6/magick/magick-config.h:30:3: error: #warning "this is an obsolete behavior please fix your makefile" [-Werror=cpp]
# warning "this is an obsolete behavior please fix your makefile"
^
/usr/include/ImageMagick-6/magick/magick-config.h:52:3: error: #warning "you should set MAGICKCORE_HDRI_ENABLE to sensible default set it to configure time default" [-Werror=cpp]
# warning "you should set MAGICKCORE_HDRI_ENABLE to sensible default set it to configure time default"
^
/usr/include/ImageMagick-6/magick/magick-config.h:53:3: error: #warning "this is an obsolete behavior please fix yours makefile" [-Werror=cpp]
# warning "this is an obsolete behavior please fix yours makefile"
^
cc1plus: all warnings being treated as errors
While writing the question I came across an answer to this. I'll summarize it here as the other questions regarding this angle it slightly differently.
ImageMagick ships with a utility called Magick++-config; on my installation (Ubuntu 16.04) I found it under /usr/lib/x86_64-linux-gnu/ImageMagick-6.8.9/bin-Q16/Magick++-config. Below is the CMake code snippet I ended up using to extract the relevant build options.
find_package(ImageMagick 6.7 COMPONENTS Magick++ MagickCore)
if(ImageMagick_FOUND)
    # Find the ImageMagick library directory
    get_filename_component(MAGICK_LIB_DIR ${ImageMagick_MagickCore_LIBRARY} DIRECTORY)
    # Find where Magick++-config lives
    file(GLOB_RECURSE MAGICK_CONFIG FOLLOW_SYMLINKS ${MAGICK_LIB_DIR}/Magick++-config)
    # Ask it for CXX and lib flags/locations
    set(MAGICK_CONFIG ${MAGICK_CONFIG} CACHE string "Path to Magick++-config utility")
    execute_process(COMMAND "${MAGICK_CONFIG}" "--cxxflags" OUTPUT_VARIABLE MAGICK_CXX_FLAGS)
    execute_process(COMMAND "${MAGICK_CONFIG}" "--libs" OUTPUT_VARIABLE MAGICK_LD_FLAGS)
    # Add these to the cache
    set(MAGICK_CXX_FLAGS "${MAGICK_CXX_FLAGS}" CACHE string "ImageMagick configuration specific compilation flags.")
    set(MAGICK_LD_FLAGS "${MAGICK_LD_FLAGS}" CACHE string "ImageMagick configuration specific linking flags.")
    # Split into lists:
    string(REGEX MATCHALL "([^\ ]+)" MAGICK_CXX_FLAGS "${MAGICK_CXX_FLAGS}")
    string(REGEX MATCHALL "([^\ ]+)" MAGICK_LD_FLAGS "${MAGICK_LD_FLAGS}")
    # Remove trailing whitespace (CMake warns about this)
    string(STRIP "${MAGICK_CXX_FLAGS}" MAGICK_CXX_FLAGS)
    string(STRIP "${MAGICK_LD_FLAGS}" MAGICK_LD_FLAGS)
    # Note: target_compile_options requires a scope keyword (PRIVATE/PUBLIC/INTERFACE)
    target_compile_options(<project> PRIVATE ${MAGICK_CXX_FLAGS})
    target_link_libraries(<project> ${MAGICK_LD_FLAGS})
endif(ImageMagick_FOUND)
Source
I've got Webstorm 7 (on Win7) compiling my .less files into minified css with sourcemaps (using lessc on nodejs v0.10.26, run from a File Watcher in Webstorm), and I can then run autoprefixer on that generated css to automatically insert vendor prefixes.
What I'm not sure how to do, is combine the two steps. Is it possible to chain File Watchers in Webstorm?
Possible approaches:
Create a batch script that is called from the file watcher, then calls less and autoprefixer in sequence.
Create a node.js script/module that calls less, then autoprefixer.
Have the less transpiler output the css with a custom extension (e.g., .unprefixedcss), then have a File Watcher specifically for that extension.
Something I'm missing?
There is a plugin for less which does this job without adding a watcher: https://github.com/less/less-plugin-autoprefix
After installation you can add --autoprefix="…" to the arguments of the file watcher in WebStorm.
Yes, it's possible to chain file watchers. The autoprefixer file watcher will listen for css changes and run after less. The first and second approaches will work too.
I tried the batch script approach with a python script, called from a single File Watcher:
#!/usr/bin/env python
"""
less-prefixed.py
Chains less and autoprefixer, to produce a minified, vendor-prefixed css file.
"""
# TODO: move config data to a config file
# TODO: delete the intermediate files generated by less
import argparse
import os
import subprocess
from pprint import pprint as pp
# Config data
node_folder = r'C:/Users/ClementMandragora/AppData/Roaming/npm'
less_script = os.path.join(node_folder, 'lessc.cmd')
autoprefixer_script = os.path.join(node_folder, 'autoprefixer.cmd')
parser = argparse.ArgumentParser()
parser.add_argument("--file-name", help="filename, not including the extension", required=True)
parser.add_argument("--working-dir", help="the directory to do the work in", required=True)
args = parser.parse_args()
print('\nArgs:')
pp(vars(args))
print('')
os.chdir(args.working_dir)
print('CWD: {c}\n'.format(c=os.getcwd() + '\n'))
print('Running less-css...')
# Compile and minify the less file to css.
# Include a sourcemap.
exitcode = subprocess.Popen([
    less_script,
    '--no-color',
    '-x',
    '--source-map={n}.css.map'.format(n=args.file_name),
    '{n}.less'.format(n=args.file_name),    # source
    '{n}.min.css'.format(n=args.file_name)  # dest
], cwd=args.working_dir).wait()
assert exitcode == 0, 'Nonzero return code from less! Got: {r}'.format(r=exitcode)
print('less compilation completed.\nRunning autoprefixer...')
# Run autoprefixer over the result from lessc.
exitcode = subprocess.Popen([
    autoprefixer_script,
    '-o',
    '{n}.prefixed.min.css'.format(n=args.file_name),  # dest
    '{n}.min.css'.format(n=args.file_name)            # source
], cwd=args.working_dir).wait()
assert exitcode == 0, 'Nonzero return code from autoprefixer! Got: {r}'.format(r=exitcode)
print('autoprefixer completed.\nFinal filename is {n}.prefixed.min.css'.format(n=args.file_name))
It worked, but seemed unwieldy.
The next attempt was having multiple file watchers; it turns out that having autoprefixer watch for changes to css files and then generate another css file results in a loop.
So I created a custom File Type in Webstorm for files matching *.min.css (the output of the less transpiler), then created a File Watcher for that extension. The other differences from the default/included less File Watcher are:
Program: C:\Users\ClementMandragora\AppData\Roaming\npm\autoprefixer.cmd
Arguments: -o $FileNameWithoutExtension$.prefixed.css $FileName$
Output paths to refresh: $FileNameWithoutExtension$.prefixed.css:$FileNameWithoutExtension$.prefixed.css.map
It wasn't initially clear to me that 'Output paths to refresh' also signals Webstorm to parent the generated files under the main *.less file, reducing project clutter. (I'm keeping source and output in the same folder.)