I am trying to add mono to core-image-minimal for a custom P2020RDB Linux distro. Here is my bblayers.conf file:
# LAYER_CONF_VERSION is increased each time build/conf/bblayers.conf
# changes incompatibly
LCONF_VERSION = "6"
BBPATH = "${TOPDIR}"
BBFILES ?= ""
BBLAYERS ?= " \
/home/testuser/QorIQ-SDK-V1.9-20151210-yocto/sources/poky/meta \
/home/testuser/QorIQ-SDK-V1.9-20151210-yocto/sources/poky/meta-yocto \
/home/testuser/QorIQ-SDK-V1.9-20151210-yocto/sources/poky/meta-yocto-bsp \
/home/testuser/QorIQ-SDK-V1.9-20151210-yocto/sources/meta-freescale \
/home/testuser/QorIQ-SDK-V1.9-20151210-yocto/sources/meta-freescale-internal \
/home/testuser/QorIQ-SDK-V1.9-20151210-yocto/sources/meta-freescale-extra \
/home/testuser/QorIQ-SDK-V1.9-20151210-yocto/sources/meta-mono \
"
BBLAYERS_NON_REMOVABLE ?= " \
/home/testuser/QorIQ-SDK-V1.9-20151210-yocto/sources/poky/meta \
/home/testuser/QorIQ-SDK-V1.9-20151210-yocto/sources/poky/meta-yocto \
"
Now, when I try to build the image using bitbake core-image-minimal, I get the following output:
Loading cache: 100% |##############################################################################################################| ETA: 00:00:00
Loaded 1496 entries from dependency cache.
NOTE: Resolving any missing task queue dependencies
Build Configuration:
BB_VERSION = "1.26.0"
BUILD_SYS = "x86_64-linux"
NATIVELSBSTRING = "Debian-8.6"
TARGET_SYS = "powerpc-fsl-linux-gnuspe"
MACHINE = "p2020rdb"
DISTRO = "fsl-qoriq"
DISTRO_VERSION = "1.9"
TUNE_FEATURES = "m32 spe ppce500v2"
TARGET_FPU = "ppc-efd"
meta
meta-yocto
meta-yocto-bsp = "(detachedfromb74ea96):ddf114933ccfc6e3ce51a10e8e8f95e514b73578"
meta-freescale = "(detachedfrom7fb32a2):7fb32a20983a0ebd5503eb42e851550b0deb8679"
meta-freescale-internal = "(detachedfrom220bff8):220bff8b2030e5af7393b5870d74c6f0af0d76d1"
meta-freescale-extra = "(nobranch):ced26c806cb566b1400a2f4f26a94d8d44d13233"
meta-mono = "daisy:f01b4f7a98d07abcf4c1f845c057199e112fb7d6"
NOTE: Preparing RunQueue
NOTE: Executing SetScene Tasks
NOTE: Executing RunQueue Tasks
NOTE: Tasks Summary: Attempted 1248 tasks of which 1248 didn't need to be rerun and all succeeded.
It seems the mono repository is found. I then prepare an SD card using this image, and it boots without problems on the target board; however, the mono command is not available. What am I missing?
Add
IMAGE_INSTALL_append = " mono"
to your local.conf. Just adding a layer doesn't add any package to your image.
Even better, create your own image, and add mono to IMAGE_INSTALL in that recipe.
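For example, a minimal custom image recipe could look like this (a sketch; the recipe name my-image.bb and its placement inside your own layer are hypothetical):
# recipes-core/images/my-image.bb (hypothetical name/location in your layer)
require recipes-core/images/core-image-minimal.bb
DESCRIPTION = "core-image-minimal plus mono"
IMAGE_INSTALL += "mono"
You would then build it with bitbake my-image instead of bitbake core-image-minimal.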
I implemented an example using Kotlin + Apache Beam to define the pipeline properties in Kotlin, but when I ran the project I got the error:
Caused by: java.lang.IllegalStateException: Could not read class: VirtualFile: /Users/duanybaro/.gradle/caches/modules-2/files-2.1/org.apache.beam/beam-runners-google-cloud-dataflow-java/2.27.0/3e551e54b23441cc58c9d01e6614ff67216a7e87/beam-runners-google-cloud-dataflow-java-2.27.0.jar!/org/apache/beam/runners/dataflow/DataflowPipelineJob.class
at org.jetbrains.kotlin.load.java.structure.impl.classFiles.BinaryJavaClass.<init>(BinaryJavaClass.kt:122)
at org.jetbrains.kotlin.load.java.structure.impl.classFiles.BinaryJavaClass.<init>(BinaryJavaClass.kt:34)
This error only occurs in Kotlin; the same code written in Java works perfectly. Can you give me any suggestions to solve the error?
I really recommend using the latest version of Apache Beam; your version (2.27.0) is very old.
You can also use the starter project for Beam Kotlin.
I share with you an example of a Kotlin Beam project from my GitHub repo, based on Maven.
For the Beam pipeline options, can you try the following instead of using DataflowPipelineOptions:
val options = PipelineOptionsFactory
.fromArgs(*args)
.withValidation()
.`as`(TeamLeagueOptions::class.java)
val pipeline = Pipeline.create(options)
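Once the options are created, the pipeline is run as usual (a one-line sketch, assuming the pipeline's transforms have already been applied):
pipeline.run().waitUntilFinish()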
Example of PipelineOptions:
import org.apache.beam.sdk.options.Description
import org.apache.beam.sdk.options.PipelineOptions
interface TeamLeagueOptions : PipelineOptions {
    @get:Description("Path of the input Json file to read from")
    var inputJsonFile: String

    @get:Description("Path of the slogans file to read from")
    var inputFileSlogans: String

    @get:Description("Path of the file to write to")
    var teamLeagueDataset: String

    @get:Description("Team stats table")
    var teamStatsTable: String

    @get:Description("Job type")
    var jobType: String

    @get:Description("Failure output dataset")
    var failureOutputDataset: String

    @get:Description("Failure output table")
    var failureOutputTable: String

    @get:Description("Feature name for failures")
    var failureFeatureName: String
}
And pass the program arguments on the mvn command line:
mvn compile exec:java \
-Dexec.mainClass=fr.groupbees.application.TeamLeagueApp \
-Dexec.args=" \
--project=my-project \
--runner=DataflowRunner \
--jobName=team-league-kotlin-job-$(date +'%Y-%m-%d-%H-%M-%S') \
--region=europe-west1 \
--streaming=false \
--zone=europe-west1-d \
--tempLocation=gs://mazlum_dev/dataflow/temp \
--gcpTempLocation=gs://mazlum_dev/dataflow/temp \
--stagingLocation=gs://mazlum_dev/dataflow/staging \
--inputJsonFile=gs://mazlum_dev/team_league/input/json/input_teams_stats_raw.json \
--inputFileSlogans=gs://mazlum_dev/team_league/input/json/input_team_slogans.json \
--teamLeagueDataset=mazlum_test \
--teamStatsTable=team_stat \
--jobType=team_league_kotlin_ingestion_job \
--failureOutputDataset=mazlum_test \
--failureOutputTable=job_failure \
--failureFeatureName=team_league \
" \
-Pdataflow-runner
I have an imx7d-pico with a carrier board. This tiny computer was used a lot for Android Things, and its PDF datasheet is easily found.
For the last two weeks I have been trying to follow this tutorial:
https://github.com/TechNexion/freertos-tn/tree/freertos_1.0.1_imx7d
export ARMGCC_DIR=${HOME}/gcc-arm-none-eabi-4_9-2015q3/
After running ./build_release.sh I do:
sudo cp ${HOME}/freertos-tn/examples/imx7d_pico_m4/demo_apps/hello_world/armgcc/release/hello_world.bin /media/neuberfran/boot
ls mmc 0:1 (command output in U-Boot):
10572 hello_world.bin
then:
=> fatload mmc 0:1 0x7F8000 hello_world.bin (fails with:)
** Reading file would overwrite reserved memory **
Failed to load 'hello_world.bin'
=> dcache flush (I can't)
=> bootaux 0x7F8000 (I can't)
I am generating my Yocto hardknott TechNexion image correctly with this:
$ mkdir tn-imx-yocto
$ cd tn-imx-yocto
$ repo init -u https://github.com/TechNexion/tn-imx-yocto-manifest.git -b hardknott_5.10.y-next -m imx-5.10.52-2.1.0.xml
$ repo sync -j8
$ DISTRO=fsl-imx-x11 MACHINE=pico-imx7 BASEBOARD=pi source tn-setup-release.sh -b build-x11-pico-imx7
$ bitbake core-image-base
imx7d-pico-pi-qca-m4.dts
#include "imx7d-pico-pi-qca.dts"
/ {
memory {
linux,usable-memory = <0x80000000 0x1ff00000>;
};
m4_tcm: tcml@007f8000 {
compatible = "fsl, m4_tcml";
reg = <0x007f8000 0x8000>;
};
gpio-leds {
status = "disabled";
};
};
&adc1 {
status = "disabled";
};
&adc2 {
status = "disabled";
};
&gpt3 {
status = "disabled";
};
&gpt4 {
status = "disabled";
};
&ocram {
reg = <0x00901000 0xf000>;
};
&rpmsg{
vdev-nums = <1>;
reg = <0x9fff0000 0x10000>;
status = "okay";
};
&uart6 {
status = "disabled";
};
&wdog3{
status = "disabled";
};
pico-imx7.conf
##TYPE: Machine
##NAME: pico-imx7
##SOC: i.MX7/Solo/Dual/UltraLowPower
##DESCRIPTION: Machine configuration for PICO-IMX7 with QCA(Qualcomm)/BRCM(Broadcom) WLAN module
##MAINTAINER: Po Cheng <po.cheng@technexion.com>
MACHINEOVERRIDES =. "mx7:mx7d:"
MACHINEOVERRIDES_EXTENDER_pico-imx7 = "uenv"
include conf/machine/include/imx-base.inc
include conf/machine/include/tune-cortexa7.inc
require conf/machine/tn-base.inc
#
# Kernel Device Trees
#
PREFERRED_PROVIDER_virtual/kernel ?= "linux-tn-imx"
PREFERRED_PROVIDER_virtual/kernel_mx7 = "linux-tn-imx"
KERNEL_DEVICETREE = "imx7d-pico-pi-qca.dtb"
# imx7d-pico-pi-m4.dtb \
#"
KERNEL_DEVICETREE_append = " imx7d-pico-pi-m4.dtb"
# imx7d-pico-pi.dtb \
# imx7d-pico-pi-c2-qca.dtb imx7d-pico-pi-c2.dtb \
# imx7d-pico-nymph-qca.dtb imx7d-pico-nymph.dtb \
# imx7d-pico-dwarf-qca.dtb imx7d-pico-dwarf.dtb \
# imx7d-pico-hobbit-qca.dtb imx7d-pico-hobbit.dtb \
#"
# Setup the additional devicetree file
#KERNEL_DEVICETREE_append_voicehat = " imx7d-pico-pi-qca-voicehat.dtb \
# imx7d-pico-pi-voicehat.dtb \
# imx7d-pico-pi-c2-qca-voicehat.dtb \
# imx7d-pico-pi-c2-voicehat.dtb "
# Bootloader Specifics
UBOOT_MACHINE = "pico-imx7d_spl_defconfig"
#M4_MACHINE = "pico-imx7d-pi"
#IMAGE_BOOTFILES_DEPENDS += "imx-m4-demos-tn:do_deploy"
#IMAGE_BOOTFILES += "hello_world.bin rpmsg_lite_pingpong_rtos_linux_remote.bin rpmsg_lite_str_echo_rtos_imxcm4.bin"
In U-Boot, the printenv command currently shows nothing about m4 or tcm:
printenv
arch=arm
baseboard=pi
baudrate=115200
board=pico-imx7d
board_name=pico-imx7d
boot_fdt=try
bootcmd=mmc dev ${mmcdev}; if mmc rescan; then if run loadbootenv; then echo Loaded environment from ${bootenv};run importbootenv;fi;if test -n $uenvcmd; then echo Running uenvcmd ...;run uenvcmd;fi;if run loadbootscript; then run bootscript; fi; if run loadfit; then run fitboot; fi; if run loadimage; then run mmcboot; else echo WARN: Cannot load kernel from boot media; fi; else run netboot; fi
bootdelay=2
bootenv=uEnv.txt
bootscript=echo Running bootscript from mmc ...; source
console=ttymxc4
cpu=armv7
default_baseboard=pi
fastboot_dev=mmc0
fbcmd=fastboot 0
fdt_addr=0x83000000
fdt_file=imx7d-pico-pi-qca-m4.dtb
fdt_high=0xffffffff
fdtcontroladdr=9cd62ed0
fit_addr=0x87880000
fit_high=0xffffffff
fit_overlay=for ov in ${dtoverlay}; do echo Overlaying ${ov}...; setenv fitov "${fitov}#${ov}"; done; echo fit conf: ${fdtfile}${fitov};
fitargs=setenv bootargs console=${console},${baudrate} root=/dev/ram0 rootwait rw modules-load=g_acm_ms g_acm_ms.stall=0 g_acm_ms.removable=1 g_acm_ms.file=${mmcrootdev} g_acm_ms.iSerialNumber=00:00:00:00:00:00 g_acm_ms.iManufacturer=TechNexion
fitboot=echo Booting from FIT image...; run searchbootdev; run setfdt; run fit_overlay; run fitargs; bootm ${fit_addr}#conf#${fdtfile}${fitov};
fitov=""
form=pico
image=zImage
importbootenv=echo Importing environment from mmc ...; env import -t -r $loadaddr $filesize
initrd_high=0xffffffff
ip_dyn=yes
loadaddr=0x80800000
loadbootenv=fatload mmc ${mmcdev} ${loadaddr} ${bootenv}
loadbootscript=fatload mmc ${mmcdev}:${mmcpart} ${loadaddr} ${script};
loadfdt=fatload mmc ${mmcdev}:${mmcpart} ${fdt_addr} ${fdtfile}
loadfit=fatload mmc ${mmcdev}:${mmcpart} ${fit_addr} tnrescue.itb
loadimage=fatload mmc ${mmcdev}:${mmcpart} ${loadaddr} ${image}
mmcargs=setenv bootargs console=${console},${baudrate} root=${mmcroot}
mmcautodetect=yes
mmcboot=echo Booting from mmc ...; run m4boot; run searchbootdev; run mmcargs; echo baseboard is ${baseboard}; run setfdt; if test ${boot_fdt} = yes || test ${boot_fdt} = try; then if run loadfdt; then bootz ${loadaddr} - ${fdt_addr}; else if test ${boot_fdt} = try; then echo WARN: Cannot load the DT; echo fall back to load the default DT; setenv baseboard ${default_baseboard}; run setfdt; run loadfdt; bootz ${loadaddr} - ${fdt_addr}; else echo WARN: Cannot load the DT; fi; fi; else bootz; fi;
mmcdev=0
mmcpart=1
netargs=setenv bootargs console=${console},${baudrate} root=/dev/nfs ip=dhcp nfsroot=${serverip}:${nfsroot},v3,tcp
netboot=echo Booting from net ...; if test ${ip_dyn} = yes; then setenv get_cmd dhcp; else setenv get_cmd tftp; fi; run loadbootenv; run importbootenv; run setfdt; run netargs; ${get_cmd} ${loadaddr} ${image}; if test ${boot_fdt} = yes || test ${boot_fdt} = try; then if ${get_cmd} ${fdt_addr} ${fdtfile}; then bootz ${loadaddr} - ${fdt_addr}; else if test ${boot_fdt} = try; then bootz; else echo WARN: Cannot load the DT; fi; fi; else bootz; fi;
script=boot.scr
searchbootdev=if test ${bootdev} = SD0; then setenv mmcrootdev /dev/mmcblk2; setenv mmcroot /dev/mmcblk2p2 rootwait rw; else setenv mmcrootdev /dev/mmcblk0; setenv mmcroot /dev/mmcblk0p2 rootwait rw; fi
serial#=0091ceb8deadbeef
setfdt=if test -n ${wifi_module} && test ${wifi_module} = qca; then setenv fdtfile ${som}-${form}-${baseboard}-${wifi_module}${mcu}.dtb; else setenv fdtfile ${som}-${form}-${baseboard}${mcu}.dtb;fi
soc=mx7
soc_type=imx7d
som=imx7d
splashimage=0x8c000000
splashpos=m,m
splashsource=mmc_fs
stdout=serial
update_m4_from_sd=if sf probe 0:0; then if run loadm4image; then setexpr fw_sz ${filesize} + 0xffff; setexpr fw_sz ${fw_sz} / 0x10000; setexpr fw_sz ${fw_sz} * 0x10000; sf erase 0x0 ${fw_sz}; sf write ${loadaddr} 0x0 ${filesize}; fi; fi
vendor=technexion
wifi_module=qca
Environment size: 3808/8188 bytes
How can I solve this?
I solved it with two changes in the device-tree files:
In imx7d.dtsi
I put status = "okay";
in the rpmsg: rpmsg{ node.
In imx7d-pico-pi-qca-m4.dts
I put:
reserved-memory {
rpmsg_vrings: vrings0@0x8ff00000 {
reg = <0x8fff0000 0x10000>;
no-map;
};
};
and
&rpmsg{
memory-region = <&rpmsg_vrings>;
vdev-nums = <1>;
reg = <0x9fff0000 0x10000>;
status = "okay";
};
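With these changes built into imx7d-pico-pi-qca-m4.dtb, the tutorial sequence (the same commands that failed in the question) should now go through in U-Boot, assuming hello_world.bin is still on the first FAT partition:
=> fatload mmc 0:1 0x7F8000 hello_world.bin
=> dcache flush
=> bootaux 0x7F8000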
I am clueless about an issue I am facing.
While cross-compiling one of our apps, I am getting the following error, which makes no sense to me.
If someone can help me debug the issue, it would be really helpful.
ERROR: lib32-audiod-1.0.0-161.jcl4tv.85-r26audiod-automation-10Feb_00 do_package_qa: QA Issue: package lib32-audiod-ptest contains bad RPATH /home/work/ashutosh.tripathi/o20_build/build-starfish/BUILD/work/o20-starfishmllib32-linux-gnueabi/lib32-audiod/1.0.0-161.jcl4tv.85-r26audiod-automation-10Feb_00/audiod-1.0.0-161.jcl4tv.85 in file /home/work/ashutosh.tripathi/o20_build/build-starfish/BUILD/work/o20-starfishmllib32-linux-gnueabi/lib32-audiod/1.0.0-161.jcl4tv.85-r26audiod-automation-10Feb_00/packages-split/lib32-audiod-ptest/opt/webos/tests/audiod/gtest_audiod
package lib32-audiod-ptest contains bad RPATH /home/work/ashutosh.tripathi/o20_build/build-starfish/BUILD/work/o20-starfishmllib32-linux-gnueabi/lib32-audiod/1.0.0-161.jcl4tv.85-r26audiod-automation-10Feb_00/audiod-1.0.0-161.jcl4tv.85 in file /home/work/ashutosh.tripathi/o20_build/build-starfish/BUILD/work/o20-starfishmllib32-linux-gnueabi/lib32-audiod/1.0.0-161.jcl4tv.85-r26audiod-automation-10Feb_00/packages-split/lib32-audiod-ptest/opt/webos/tests/audiod/gtest_audiod [rpaths]
ERROR: lib32-audiod-1.0.0-161.jcl4tv.85-r26audiod-automation-10Feb_00 do_package_qa: QA run found fatal errors. Please consider fixing them.
ERROR: lib32-audiod-1.0.0-161.jcl4tv.85-r26audiod-automation-10Feb_00 do_package_qa: Function failed: do_package_qa
ERROR: Logfile of failure stored in: /home/work/ashutosh.tripathi/o20_build/build-starfish/BUILD/work/o20-starfishmllib32-linux-gnueabi/lib32-audiod/1.0.0-161.jcl4tv.85-r26audiod-automation-10Feb_00/temp/log.do_package_qa.4873
ERROR: Task (virtual:multilib:lib32:/home/work/ashutosh.tripathi/o20_build/build-starfish/meta-lg-webos/meta-webos/recipes-multimedia/audiod/audiod.bb:do_package_qa) failed with exit code '1'
NOTE: Tasks Summary: Attempted 2622 tasks of which 2608 didn't need to be rerun and 1 failed.
Here is the audiod recipe file:
DEPENDS = "glib-2.0 libpbnjson luna-service2 pmloglib luna-prefs boost pulseaudio"
RDEPENDS_${PN} = "\
libasound \
libasound-module-pcm-pulse \
libpulsecore \
pulseaudio \
pulseaudio-lib-cli \
pulseaudio-lib-protocol-cli \
pulseaudio-misc \
pulseaudio-module-cli-protocol-tcp \
pulseaudio-module-cli-protocol-unix \
pulseaudio-server \
"
WEBOS_VERSION = "1.0.0-161.open.12_49f981e4e5a599b75d893520b30393914657a4ae"
PR = "r26"
inherit webos_component
inherit webos_enhanced_submissions
inherit webos_cmake
inherit webos_library
inherit webos_daemon
inherit webos_system_bus
inherit webos_machine_dep
inherit gettext
inherit webos_lttng
inherit webos_public_repo
inherit webos_test_provider
# TODO: move to WEBOS_GIT_REPO_COMPLETE
WEBOS_REPO_NAME = "audiod-pro"
SRC_URI = "${WEBOS_PRO_GIT_REPO_COMPLETE}"
S = "${WORKDIR}/git"
EXTRA_OECMAKE += "${#bb.utils.contains('WEBOS_LTTNG_ENABLED', '1', '-DWEBOS_LTTNG_ENABLED:BOOLEAN=True', '', d)}"
EXTRA_OECMAKE += "-DAUDIOD_PALM_LEGACY:BOOLEAN=True"
EXTRA_OECMAKE += "-DAUDIOD_TEST_API:BOOLEAN=True"
FILES_${PN} += "${datadir}/alsa/"
FILES_${PN} += "/data"
FILES_${PN} += "${webos_mediadir}/internal"
I would like to thank the Stack Overflow community for the help offered.
Adding the following flags helped resolve the issue.
Posting it here so that others may benefit from it if they ever face a similar problem.
set(CMAKE_INSTALL_RPATH "$ORIGIN")
set(CMAKE_BUILD_WITH_INSTALL_RPATH TRUE)
Or, skipping RPATH also does the job:
SET(CMAKE_SKIP_BUILD_RPATH TRUE)
SET(CMAKE_BUILD_WITH_INSTALL_RPATH FALSE)
SET(CMAKE_INSTALL_RPATH_USE_LINK_PATH FALSE)
It also worked by adding the following line to the recipe:
EXTRA_OECMAKE += "-DCMAKE_SKIP_RPATH=TRUE"
https://www.yoctoproject.org/docs/current/ref-manual/ref-manual.html#ref-classes-cmake
I am building a workflow in Snakemake and would like to recycle one of the rules for two different input sources. The input sources could be either source1 or source1+source2, and depending on the input, the output directory would also vary. Since this was quite complicated to do in the same rule and I didn't want to copy the full rule, I would like to create two rules with different input/output, but running the same command.
Is it possible to make this work? I get the DAG resolved correctly, but the jobs don't go through on the cluster (ERROR: bamcov_cmd not defined).
An example below (both rules use the same command at the end):
This is the command:
def bamcov_cmd():
    return( (deepTools_path+"bamCoverage " +
             "-b {input.bam} " +
             "-o {output} " +
             "--binSize {params.bw_binsize} " +
             "-p {threads} " +
             "--normalizeTo1x {params.genome_size} " +
             "{params.read_extension} " +
             "&> {log}") )
This is the rule:
rule bamCoverage:
    input:
        bam = file1+"/{sample}.bam",
        bai = file1+"/{sample}.bam.bai"
    output:
        "bamCoverage/{sample}.filter.bw"
    params:
        bw_binsize = bw_binsize,
        genome_size = int(genome_size),
        read_extension = "--extendReads"
    log:
        "bamCoverage/logs/bamCoverage.{sample}.log"
    benchmark:
        "bamCoverage/.benchmark/bamCoverage.{sample}.benchmark"
    threads: 16
    run:
        bamcov_cmd()
This is the optional rule 2:
rule bamCoverage2:
    input:
        bam = file2+"/{sample}.filter.bam",
        bai = file2+"/{sample}.filter.bam.bai"
    output:
        "bamCoverage/{sample}.filter.bw"
    params:
        bw_binsize = bw_binsize,
        genome_size = int(genome_size),
        read_extension = "--extendReads"
    log:
        "bamCoverage/logs/bamCoverage.{sample}.log"
    benchmark:
        "bamCoverage/.benchmark/bamCoverage.{sample}.benchmark"
    threads: 16
    run:
        bamcov_cmd()
What you ask is possible in Python.
It depends on whether you have JUST Python code in the file, or Python and Snakemake.
I will answer that first, and then I have a follow-up response, because I want you to set it up differently so you don't have to do it this way.
Just Python:
from fileContainingMyBamCovCmdFunction import bamcov_cmd

rule bamCoverage:
    ...
    run:
        bamcov_cmd()
Visually, see how I do it in this file to get access to buildHeader and buildSample. These files are being called by a Snakefile; it should work the same for you.
https://github.com/LCR-BCCRC/workflow_exploration/blob/master/Snakemake/modules/py_buildFile/buildFile.py
EDIT 2017-07-23 - Updating code segment below to reflect user comment
Snakemake and Python:
include: "fileContainingMyBamCovCmdFunction.suffix"
rule bamCoverage:
...
run:
shell(bamcov_cmd())
EDIT END
If the function is truly specific to the bamCoverage call, you can put it back in the rule if you prefer. This implies it's not being called elsewhere, which may be true.
Be careful when annotating files using '.' notation; I use '_' as I find it's easier to prevent creating cyclical dependencies this way.
Also, if you do end up leaving the two rules separate, you will likely end up with ambiguity errors.
http://snakemake.readthedocs.io/en/latest/snakefiles/rules.html?highlight=ruleorder#handling-ambiguous-rules
When possible, it's best practice to have rules generating unique outputs.
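If you do keep both rules with the same output, Snakemake's ruleorder directive (covered in the ambiguity documentation linked above) lets you state which rule takes precedence; a one-line sketch using the two rule names from your example:
ruleorder: bamCoverage > bamCoverage2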
As an alternative, consider setting up the code like this:
from subprocess import call

rule all:
    input:
        "path/to/file/mySample.bw"
        #OR
        #"path/to/file/mySample_filtered.bw"

rule bamCoverage:
    input:
        bam = file1+"/{sample}.bam",
        bai = file1+"/{sample}.bam.bai"
    output:
        "bamCoverage/{sample}.bw"
    params:
        bw_binsize = bw_binsize,
        genome_size = int(genome_size),
        read_extension = "--extendReads"
    log:
        "bamCoverage/logs/bamCoverage.{sample}.log"
    benchmark:
        "bamCoverage/.benchmark/bamCoverage.{sample}.benchmark"
    threads: 16
    run:
        # build the shell command from this rule's input/output/params
        callString = deepTools_path + "bamCoverage " \
            + "-b " + input.bam \
            + " -o " + str(output) \
            + " --binSize " + str(params.bw_binsize) \
            + " -p " + str(threads) \
            + " --normalizeTo1x " + str(params.genome_size) \
            + " " + str(params.read_extension) \
            + " > " + str(log) + " 2>&1"
        call(callString, shell=True)

rule filterBam:
    input:
        "{pathFB}/{sample}.bam"
    output:
        "{pathFB}/{sample}_filtered.bam"
    run:
        callString = "samtools view -bh -F 512 " + str(input) \
            + " > " + str(output)
        call(callString, shell=True)
Thoughts?
I am using the following configuration to push data from a log file to HDFS.
agent.channels.memory-channel.type = memory
agent.channels.memory-channel.capacity=5000
agent.sources.tail-source.type = exec
agent.sources.tail-source.command = tail -F /home/training/Downloads/log.txt
agent.sources.tail-source.channels = memory-channel
agent.sinks.log-sink.channel = memory-channel
agent.sinks.log-sink.type = logger
agent.sinks.hdfs-sink.channel = memory-channel
agent.sinks.hdfs-sink.type = hdfs
agent.sinks.hdfs-sink.batchSize=10
agent.sinks.hdfs-sink.hdfs.path = hdfs://localhost:8020/user/flume/data/log.txt
agent.sinks.hdfs-sink.hdfs.fileType = DataStream
agent.sinks.hdfs-sink.hdfs.writeFormat = Text
agent.channels = memory-channel
agent.sources = tail-source
agent.sinks = log-sink hdfs-sink
I get no error message, but I am still not able to find the output in HDFS.
On interrupting, I can see a sink interruption exception and some data from that log file.
I am running the following command:
flume-ng agent --conf /etc/flume-ng/conf/ --conf-file /etc/flume-ng/conf/flume.conf -Dflume.root.logger=DEBUG,console -n agent;
I had a similar issue; in my case it's working now. Below is the conf file:
#Exec Source
execAgent.sources=e
execAgent.channels=memchannel
execAgent.sinks=HDFS
#channels
execAgent.channels.memchannel.type=file
execAgent.channels.memchannel.capacity = 20000
execAgent.channels.memchannel.transactionCapacity = 1000
#Define Source
execAgent.sources.e.type=org.apache.flume.source.ExecSource
execAgent.sources.e.channels=memchannel
execAgent.sources.e.shell=/bin/bash -c
execAgent.sources.e.fileHeader=false
execAgent.sources.e.fileSuffix=.txt
execAgent.sources.e.command=cat /home/sample.txt
#Define Sink
execAgent.sinks.HDFS.type=hdfs
execAgent.sinks.HDFS.hdfs.path=hdfs://localhost:8020/user/flume/
execAgent.sinks.HDFS.hdfs.fileType=DataStream
execAgent.sinks.HDFS.hdfs.writeFormat=Text
execAgent.sinks.HDFS.hdfs.batchSize=1000
execAgent.sinks.HDFS.hdfs.rollSize=268435
execAgent.sinks.HDFS.hdfs.rollInterval=0
#Bind Source Sink Channel
execAgent.sources.e.channels=memchannel
execAgent.sinks.HDFS.channel=memchannel
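If you adapt this configuration, remember that the agent name passed with -n on the command line must match the name used in the file (execAgent here); e.g. (the conf file path is hypothetical):
flume-ng agent --conf /etc/flume-ng/conf/ --conf-file /etc/flume-ng/conf/exec-agent.conf -Dflume.root.logger=DEBUG,console -n execAgent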
I suggest using the prefix configuration when placing files in HDFS:
agent.sinks.hdfs-sink.hdfs.filePrefix = log.out
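Note that hdfs.path normally points to a directory rather than a file like .../log.txt; combined with the prefix, the sink section of your configuration would look roughly like this (a sketch based on the question's values; both are standard HDFS sink properties):
agent.sinks.hdfs-sink.hdfs.path = hdfs://localhost:8020/user/flume/data
agent.sinks.hdfs-sink.hdfs.filePrefix = log.out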
@bhavesh - Are you sure the log file (agent.sources.tail-source.command = tail -F /home/training/Downloads/log.txt) keeps appending data? Since you have used a tail command with -F, only changed data (within the file) will be dumped into HDFS.