I have a TSV file whose columns are class, id and text, e.g.
positive 2342 This is very good.
negative 4343 I hate it.
and I'm trying to feed it to Mahout's nbayes classifier so it can classify the text part as either positive or negative.
My first attempt was to write every line into a separate file under its class directory and run the mahout seqdirectory command on the result. This works well with a small amount of data but eventually fails at around 30 GB with an OutOfMemoryException. Increasing the heap size only leads to a "GC overhead limit exceeded" error, probably because of the huge number of separate files.
My second attempt was to load the data into a Hive table and convert it to a sequence file as described in [0]. This seems to work fine at first, but after creating the vector file and splitting up the data set, the trainnb step fails with an ArrayIndexOutOfBoundsException.
[0] http://files.meetup.com/6195792/Working%20With%20Mahout.pdf
Right now I'm out of ideas about what to look for. How can I convert the TSV file or Hive table into a sequence file like the one the seqdirectory command generates from a directory?
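For reference, the downstream steps that fail are roughly the ones from Mahout's standard 20newsgroups example (the directory names below are placeholders):
# sketch of the usual Mahout naive-bayes pipeline once the sequence file exists
mahout seq2sparse -i data-seq -o data-vectors -lnorm -nv -wt tfidf
mahout split -i data-vectors/tfidf-vectors --trainingOutput train-vectors --testOutput test-vectors --randomSelectionPct 20 --overwrite --sequenceFiles -xm sequential
mahout trainnb -i train-vectors -el -o model -li labelindex -ow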
Going to answer this myself in case someone else needs a solution to the same or a similar problem:
I found this code snippet on GitHub and modified it to my needs. Additionally, I had to trim the value string to get proper results.
This may be a simpler implementation for those searching for this answer in the future. This can be done completely from the command line (I tested it in EMR):
hadoop jar \
/home/hadoop/contrib/streaming/hadoop-streaming.jar \
-D mapred.reduce.tasks=0 \
-inputformat TextInputFormat \
-input {input_directory}/* \
-mapper '/bin/cat' \
-outputformat org.apache.hadoop.mapred.SequenceFileOutputFormat \
-output {output_directory}
/home/hadoop/contrib/streaming/hadoop-streaming.jar is the location of hadoop-streaming.jar on Amazon EMR (AMI 3.4.0). It may be in a different location depending on your configuration.
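If you're not on EMR, a generic filesystem search is usually enough to locate the jar for your distribution:
# find the streaming jar on your cluster; the path varies by Hadoop distribution
find / -name "hadoop-streaming*.jar" 2>/dev/null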
I have a workflow written in snakemake that balloons at one point to a large number of output files: the many combinations of my wildcards yield on the order of 60,000 output files.
With this number of input files, DAG generation is very slow for subsequent rules, to the point of being unusable. In the past, I've solved this issue by iterating with bash loops over subsetted configuration files in which all but one of the wildcard values are commented out (see the sketch below). For example, in my case I had one wildcard (primer) with 12 possible values. By running snakemake iteratively for each value of "primer", I divided the workflow into digestible chunks (~5,000 input files each). With this strategy DAG generation is quick and the workflow proceeds well.
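The iteration looked something like this (the per-primer config file names are hypothetical):
# run snakemake once per subsetted config file, one primer value enabled in each
for cfg in config/config_primer_*.yaml
do
    snakemake --configfile "$cfg" --snakefile Snakefile-compute.smk --cores 40 --use-conda
done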
Now I'm interested in using the new --batch flag available in snakemake 5.7.4, since Johannes suggested to me on Twitter that it should basically do the same thing I did with bash loops and subsetted config files.
However, I'm running into an error that's confusing me. I don't know if this is an issue I should be posting on the GitHub issue tracker or if I'm just missing something basic.
The rule my workflow is failing on is as follows:
rule get_fastq_for_subset:
    input:
        fasta="compute-workflow-intermediate/06-subsetted/{sample}.SSU.{direction}.{group}_pyNAST_{primer}.fasta",
        fastq="compute-workflow-intermediate/04-sorted/{sample}.{direction}.SSU.{group}.fastq"
    output:
        fastq=temp("compute-workflow-intermediate/07-subsetted-fastq/{sample}.SSU.{direction}.{group}_pyNAST_{primer}.full.fastq"),
        fastq_revcomp="compute-workflow-intermediate/07-subsetted-fastq/{sample}.SSU.{direction}.{group}_pyNAST_{primer}.full.revcomped.fastq"
    conda:
        "envs/bbmap.yaml"
    shell:
        "filterbyname.sh names={input.fasta} include=t in={input.fastq} out={output.fastq} ; "
        "revcompfastq_according_to_pyNAST.py --inpynast {input.fasta} --infastq {output.fastq} --outfastq {output.fastq_revcomp}"
I set up a target in my all rule to generate the output specified in this rule, then tried running snakemake with the --batch flag as follows:
snakemake --configfile config/config.yaml --batch get_fastq_for_subset=1/20 --snakefile Snakefile-compute.smk --cores 40 --use-conda
It then fails with this message:
WorkflowError:
Batching rule get_fastq_for_subset has less input files than batches. Please choose a smaller number of batches.
Now the thing I'm confused about is that the input for this rule should actually be on the order of 60,000 files, yet snakemake appears to be seeing fewer than 20 input files. Perhaps it's only counting 2, as in the number of input files specified by the rule? But this doesn't really make sense to me...
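A dry run without the batch flag at least confirms how many get_fastq_for_subset jobs snakemake actually plans:
# dry run: print the planned jobs and their counts without executing anything
snakemake -n --configfile config/config.yaml --snakefile Snakefile-compute.smk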
The snakemake source code showing how it counts things is here, but it's a bit beyond me:
https://snakemake.readthedocs.io/en/stable/_modules/snakemake/dag.html
If anyone has any idea what's going on, I'd be very grateful for your help!
Best,
Jesse
So, I have a bunch of .csv files that need cleaning. They all need to go through the same steps, so I've extracted OpenRefine's operation history in order to apply it to the others.
I could open each file one by one in OpenRefine and apply the extracted JSON history. But there are a lot of files...
Also, I don't have enough memory to open them all at once in OpenRefine (by selecting multiple files when opening them).
Is there any way I could edit them all automatically using the JSON I extracted from OpenRefine?
That's what we created BatchRefine for; the README should be pretty much self-explanatory. If not, let me know.
I just recently converted 4 million CSV records to RDF using BatchRefine; it took me less than 10 minutes on my MacBook Pro.
I execute BatchRefine with this simple shell script:
#!/bin/bash
# send every input TSV to BatchRefine and store the transformed result
for file in ./input/*.tsv
do
    filename=$(basename "$file")
    if [ ! -f "target/${filename}-transformed" ]
    then
        echo "Processing $filename..."
        curl -XPOST -H 'Accept: text/turtle' -H 'Content-Type: text/csv' --data-binary "@$file" -o "target/${filename}-transformed" 'localhost:8310/?refinejson=http://localhost:8000/bar-config.json'
    else
        echo "Found target/${filename}-transformed, skipping $file"
    fi
done
Note that you need to adjust the Accept header in the script; I guess you want CSV as output again, not RDF.
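Assuming BatchRefine accepts text/csv on the Accept header for CSV output, the modified curl call would look like:
curl -XPOST -H 'Accept: text/csv' -H 'Content-Type: text/csv' --data-binary "@$file" -o "target/${filename}-transformed" 'localhost:8310/?refinejson=http://localhost:8000/bar-config.json'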
You can automate some OpenRefine operations using one of the existing libraries:
python
Another python library
ruby
javascript - nodejs
I am using the trnL chloroplast gene to identify plants from herbivore dung, and am currently trying to assign taxonomy to trnL sequences from my Illumina output. Here is the QIIME script and options I would like to run:
assign_taxonomy.py -i rep_set_numbered.fa -r sequence.fasta -t id_to_taxonomy.txt -e 0.01 -m blast
I have the input file from our data pipeline and the reference file from NCBI GenBank (205,703 sequences). However, I do not have a tab-delimited taxonomy text file. Normally I would generate one in Excel, but because the FASTA file is so large (over 500 MB), it cannot be fully viewed in Excel and therefore cannot be reliably edited.
My question is: is there a command-line method for generating my own tab-delimited taxonomy file from my reference FASTA file, and if so, how would I do that? If not, what are my other options for handling this required option of the assign_taxonomy.py QIIME script?
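A minimal sketch of the kind of command-line extraction meant here, assuming each FASTA header looks like ">ACCESSION description ..."; note that QIIME expects the second column to be an actual taxonomy string (e.g. semicolon-separated ranks), so the description here is only a stand-in:
# build a tab-delimited id-to-description file from the FASTA headers
# (header format and output file name are assumptions)
awk '/^>/ {id=substr($1,2); $1=""; sub(/^ +/,""); print id "\t" $0}' sequence.fasta > id_to_taxonomy.txt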
I have successfully imported many gzipped JSON files on several occasions. For two files, though, the BQ import choked. Both files reported the same error:
File: 0 / Offset:0 / Line:1 / Column:20971521, Row larger than the maximum allowed size
Now, I've read about the row limit of 20 MB, and I understand that the number above is 20 MB + 1, but what really bugs me is that the meaning is totally off. My GZs have millions of JSON records (each on a new line). I wrote a script (something like the sketch below) to measure the longest line (the longest JSON record) in the failed GZ file and found it to be 103,571 bytes. Why is the BQ import choking then?
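The measurement was essentially a one-liner along these lines (the file name is a placeholder):
# print the length of the longest line inside the gzipped file
gzip -dc failed_file.json.gz | awk '{ if (length($0) > max) max = length($0) } END { print max }'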
I have inspected the longest JSON and it looks perfectly normal. How should I interpret the error? How can I fix it?
Why is BQ thinking the import is on line 1, column 20971521 when there are millions of lines in the file?
All your investigations are correct, but you should check your file: the newlines are not being recognized, so BQ sees the whole import as one big line.
That's why it reports column 20971521 for the problem.
You should try importing a sample from the file.
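A quick way to test a sample (dataset, table and file names are placeholders, and --autodetect assumes schema auto-detection suits your data):
# extract the first 1000 records and try loading only those
gzip -dc full_file.json.gz | head -n 1000 > sample.json
bq load --source_format=NEWLINE_DELIMITED_JSON --autodetect mydataset.sample_table sample.json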
Some of the answers here gave me an idea, so I went ahead and tried it. It appears that, for some strange reason, BQ didn't like the line endings in my file, so I wrote a quick script to rewrite the original input file with proper newline line endings. Automagically, the import worked!
This is utterly strange, considering I have already imported many GB of data with ordinary line endings.
I am happy that it worked but I could never guess why. I hope this helps someone else.
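For anyone hitting the same thing, here is a minimal sketch of the kind of rewrite described, assuming the offending file used carriage returns instead of plain newlines (file names are placeholders):
# normalize line endings to \n, drop empty lines left over from \r\n, and re-gzip
gzip -dc original.json.gz | tr '\r' '\n' | grep -v '^$' | gzip > fixed.json.gz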
I have a model of a heating process in ANSYS Multiphysics V11.
After running the simulation, I have a script to plot a temperature profile:
!---------------- POST PROCESSING -----------------------
/post1 ! enter the database postprocessor
!---define profile temperature
path,s_temp1,2,,100 ! define a path with 2 points and 100 divisions
ppath,1,,dop/2,0,0 ! create the first path point
ppath,2,,dop/2,1.5,0 ! create the second path point
PDEF,surf_t1,TEMP, ,noav ! map the temperature onto the path
plpath,surf_t1 ! plot a path
What I need now is to save the resulting path data to a text file. I have already looked online for a solution and found the following code, which I appended after the lines above:
/OUTPUT,filename,extension
PRPATH,surf_t1
/OUTPUT
ANSYS generates the file filename.extension, but it is empty. I have tried placing the /OUTPUT command in a few locations in the script, but without any success.
I suspect I need to define something else, but I have no idea where to look; the ANSYS documentation online is terribly chaotic, and the pages I found before writing this question are no better.
A final note: ANSYS V11 is an old version of the software, but I don't want to upgrade it and refit the old model to the new software.
To capture the output of the simulation (which includes all calculation steps, sub-step descriptions and node-by-node results), the output file must be declared at the beginning of the script, not in the postprocessing phase.
Declaring
/OUTPUT,filename,extension
in the preamble of the main script ensures that the output is stored in the right location with the desired extension. At the end of the script, you must then declare
/OUTPUT
to reset the output file location for ANSYS.
The output of the path commands issued in the postprocessing script, however, is not printed to that file.
Instead, to write the path data to a file yourself, it is convenient to use
*CFOPEN,file,ext ! open the output file
*VWRITE,Vector(1,1),Vector(1,2) ! write the two columns of the array, one row per line
(2F12.6) ! FORTRAN format descriptor for the two columns
*CFCLOSE ! close the output file
where Vector is a two-column array created with *DIM that holds the data you want to write to the file.
Since *VWRITE is a special command, it must be read from a file rather than typed interactively, i.e. put it in a macro such as macro_output.mac and run that.