Reading files using Contiki on MSP430F5438A - file-io

I want to read a file from my computer and display it on the board, but I'm facing a problem while reading the file: the board keeps resetting (the watchdog is off).
Can anyone help?

If you want to read a file from your local drive, the only way to do it is to upload the file into the Coffee file system (CFS) first and then read it using the CFS library calls such as cfs_open, cfs_seek, and cfs_read. As a reference, have a look at this link:
https://github.com/contiki-os/contiki/wiki/Coffee-filesystem-guide
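For reference, reading the file back on the node with those CFS calls looks roughly like this (a minimal sketch, not taken from the linked guide; the file name, buffer size, and use of putchar for output are placeholders to adapt to your application):

#include "contiki.h"
#include "cfs/cfs.h"
#include <stdio.h>

/* Read a file from the Coffee file system and print it chunk by chunk. */
static void
print_file(const char *name)
{
  char buf[64];
  int fd, len, i;

  fd = cfs_open(name, CFS_READ);
  if(fd < 0) {
    printf("could not open %s\n", name);
    return;
  }
  /* cfs_seek(fd, offset, CFS_SEEK_SET) could skip ahead if needed */
  while((len = cfs_read(fd, buf, sizeof(buf))) > 0) {
    for(i = 0; i < len; i++) {
      putchar(buf[i]);
    }
  }
  cfs_close(fd);
}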
Here are the steps to upload a file. First, modify the ".c" program you are working on so that it initializes the base64 and coffee commands in the shell by adding:
shell_base64_init();
shell_coffee_init();
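For context, these calls normally go inside a Contiki process that also starts the serial shell. Below is a minimal sketch, assuming the shell and serial-shell apps are enabled in your Makefile; the extra shell_file_init()/shell_text_init() calls are my assumption, added because the upload rule further down uses the write and read shell commands, and command sets vary slightly between Contiki versions:

#include "contiki.h"
#include "shell.h"
#include "serial-shell.h"

PROCESS(shell_upload_process, "Shell with coffee/base64 commands");
AUTOSTART_PROCESSES(&shell_upload_process);

PROCESS_THREAD(shell_upload_process, ev, data)
{
  PROCESS_BEGIN();

  serial_shell_init();   /* shell over the serial line */
  shell_base64_init();   /* dec64/enc64 commands */
  shell_coffee_init();   /* Coffee file system commands */
  shell_file_init();     /* write/read commands (assumed to be needed) */
  shell_text_init();     /* text helpers such as echo (assumed) */

  PROCESS_END();
}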
Then compile and upload via the command:
make TARGET=<the-platform-you-are-using> example.upload
You can then read/upload a .txt file by adding a small rule to your Makefile. To do so, add the following lines:
%.shell-upload: %.txt
	(echo; sleep 4; echo "~K"; sleep 4; \
	 echo "dec64 | write $*.txt | null"; sleep 4; \
	 ../../tools/base64-encode < $<; sleep 4; \
	 echo ""; echo "~K"; echo "read $*.txt | size"; sleep 4) | make login
Now you can upload any .txt file to the coffee filesystem of the currently connected mote node using the command:
make testfile.shell-upload
Hope that will solve your problem.

Related

Problems getting two output files in Nextflow

Hello all!
I'm trying to write a small Nextflow pipeline that runs vcftools commands on 300 VCFs. The pipeline takes four inputs: vcf, pop1, pop2 and a .txt file, and should generate two outputs: a .log.weir.fst file and a .log.log file. When I run the pipeline, it only produces the .log.weir.fst files, not the .log files.
Here's my process definition:
process fst_calculation {
    publishDir "${results_dir}/fst_results_pop1_pop2/", mode:"copy"

    input:
    file vcf
    file pop_1
    file pop_2
    file mart

    output:
    path "*.log.*"

    """
    while read linea
    do
        echo "[DEBUG] working in line: \$linea"
        inicio=\$(echo "\$linea" | cut -f3)
        final=\$(echo "\$linea" | cut -f4)
        cromosoma=\$(echo "\$linea" | cut -f1)
        segmento=\$(echo "\$linea" | cut -f5)
        vcftools --vcf ${vcf} \
            --weir-fst-pop ${pop_1} \
            --weir-fst-pop ${pop_2} \
            --out \$inicio.log --chr \$cromosoma \
            --from-bp \$inicio --to-bp \$final
    done < ${mart}
    """
}
And here's the workflow for my process:
/* Load files into channels */
pop_1 = Channel.fromPath("${params.fst_path}/pop_1")
pop_2 = Channel.fromPath("${params.fst_path}/pop_2")
vcf = Channel.fromPath("${params.fst_path}/*.vcf")
mart = Channel.fromPath("${params.fst_path}/*.txt")

/* Import modules */
include { fst_calculation } from './nf_modules/modules.nf'

/* Main pipeline logic */
workflow {
    p1 = fst_calculation(vcf, pop_1, pop_2, mart)
    p1.view()
}
When I check the work directory of the pipeline, I can see that it only generates the .log.weir.fst files. To verify whether my code was wrong, I ran "bash .command.sh" in the working directory, and this actually generates both output files. So, is there a reason why I don't get the two output files when I run the pipeline?
I appreciate any help.
Note that bash .command.sh and bash .command.run do different things. The latter is basically a wrapper around the former that sets up the environment and stages the declared input files, among other things. If running the latter produces the unusual behavior, you'll need to dig deeper.
It's not completely clear to me what the problem is here. My guess is that vcftools might behave differently when run non-interactively, such that it sends its logging to STDERR. If that's the case, the logging will be captured in a file called .command.err. To instead send it to a file, you can just redirect STDERR in the usual way (untested):
while IFS=\$'\\t' read -r cromosoma null inicio final segmento ; do
    >&2 echo "[DEBUG] Working with: \${cromosoma}, \${inicio}, \${final}, \${segmento}"

    vcftools \\
        --vcf "${vcf}" \\
        --weir-fst-pop "${pop_1}" \\
        --weir-fst-pop "${pop_2}" \\
        --out "\${inicio}.log" \\
        --chr "\${cromosoma}" \\
        --from-bp "\${inicio}" \\
        --to-bp "\${final}" \\
        2> "\${cromosoma}.\${inicio}.\${final}.log.log"
done < "${mart}"

phpseclib: "CAT" command works randomly

I have a script that fetches data from a site.
Basically it is divided into two sections:
1. execute commands on a remote machine and save the output in a file
2. read the contents of that file
For some reason it only works some of the time. Section 1 always works (I checked the remote machine and found the files). The problem is related to cat.
I've added an option to my code that dumps the result of the "cat" command to a file. Sometimes it has data in it, sometimes it doesn't, but the file is always created!
The nodes I'm querying are the same. The timeout for executing Section 1 on the remote server is 11-12 seconds.
Thanks beforehand.
Thanks beforehand.
$ssh->exec("rm toolkit/mybatch/$newfileid");
$ssh->exec("mobatch $newsiteid 'lt all;ue print -admitted;' toolkit/mybatch");
$ssh->setTimeout(15);
echo $ssh->exec('cat ' . escapeshellarg("toolkit/mybatch/$newfileid") . '| grep -A 10 \'$ ue print \' > toolkit/mybatch/traffic.txt');
$traffic = $ssh->exec("cat toolkit/mybatch/traffic.txt");
$traffic = substr($traffic,21,-16);
$ssh->disconnect();
echo $traffic;
I've updated the code above; it worked several times, but after deleting the old files it only creates "traffic.txt", and sometimes the file has data in it, sometimes it doesn't.
Also, the file "traffic.txtescapeshellarg" is not created anymore, so I was forced to go back to my previous solution and read "traffic.txt".
So, the solution was very simple. All I needed to do was add a timeout.
The final code:
$ssh->exec("rm toolkit/traffic/$newfileid");
$ssh->setTimeout(0);
$ssh->exec("mobatch $newsiteid 'lt all;ue print -admitted;' toolkit/traffic");
sleep(8);
$ssh->exec('cat ' . escapeshellarg("toolkit/traffic/$newfileid") . '| grep -A 10 \'$ ue print \' > toolkit/traffic/traffic.txt');
$traffic = $ssh->exec("cat toolkit/traffic/traffic.txt");
$traffic = substr($traffic,21,-83);
echo $traffic;

How do I export to PDF in PhantomJS with this link?

I have read about PhantomJS and rasterize.js as well, but my link is this:
http://localhost:5601/#/dashboard/External?_g=(time:(from:'2014-12-31T16:00:00.000Z',mode:absolute,to:'2015-01-01T16:00:00.000Z'))&_a=(filters:!(),panels:!((col:10,id:'Count-of-Source-IPs-(External)',row:1,size_x:3,size_y:3,type:visualization),(col:4,id:'Protocols-(External)',row:4,size_x:3,size_y:2,type:visualization),(col:7,id:'Top-5-Source-IPs-with-Protocols-and-Source-Port-(External)',row:4,size_x:6,size_y:6,type:visualization),(col:1,id:'Top-5-Source-IPs-(External)',row:4,size_x:3,size_y:2,type:visualization),(col:1,id:'Top-5-Countries-with-Protocols-(External)',row:1,size_x:6,size_y:3,type:visualization),(col:1,id:'Geographical-of-External-(Source)',row:6,size_x:6,size_y:4,type:visualization),(col:7,id:'Action-(External)',row:1,size_x:3,size_y:3,type:visualization)),query:(query_string:(analyze_wildcard:!t,query:'*')),title:External)
How do I make it such that it works with this command:
phantom.js rasterize.js "http://localhost:5601/#/dashboard/External?_g=(time:(from:'2014-12-31T16:00:00.000Z',mode:absolute,to:'2015-01-01T16:00:00.000Z'))&_a=(filters:!(),panels:!((col:10,id:'Count-of-Source-IPs-(External)',row:1,size_x:3,size_y:3,type:visualization),(col:4,id:'Protocols-(External)',row:4,size_x:3,size_y:2,type:visualization),(col:7,id:'Top-5-Source-IPs-with-Protocols-and-Source-Port-(External)',row:4,size_x:6,size_y:6,type:visualization),(col:1,id:'Top-5-Source-IPs-(External)',row:4,size_x:3,size_y:2,type:visualization),(col:1,id:'Top-5-Countries-with-Protocols-(External)',row:1,size_x:6,size_y:3,type:visualization),(col:1,id:'Geographical-of-External-(Source)',row:6,size_x:6,size_y:4,type:visualization),(col:7,id:'Action-(External)',row:1,size_x:3,size_y:3,type:visualization)),query:(query_string:(analyze_wildcard:!t,query:'*')),title:External)" external.pdf
I have been getting a syntax error because of that.
The problem is probably that the command is too long for your terminal and some of it is cut off.
You can either put the URL directly into the script or read it from stdin. For either approach you need to edit rasterize.js.
First you need to reduce the index x by 1 in every system.args[x] where x is above 1, since the URL is no longer passed on the command line. Once you've done that, you can call the script as
phantomjs rasterize.js external.pdf
in the first case, or
cat file.url | phantomjs rasterize.js external.pdf
in the second case.
Put URL into script
Change
address = system.args[1];
to
address = "http://localhost....";
Read from pipe
You can put your long URL into some file and pass that file to stdin of the PhantomJS script.
Change
address = system.args[1];
to
address = system.stdin.read();

Tail and grep combo to show text in a multi line pattern

Not sure if you can do this with tail and grep. Let's say I have a log file that I would like to tail. It spits out quite a bit of information when in debug mode. I want to grep for information pertaining only to my module, and the module name appears in the log like so:
/*** Module Name | 2014.01.29 14:58:01
a multi line
dump of some stacks
or whatever
**/
/*** Some Other Module Name | 2014.01.29 14:58:01
this should show up in the grep
**/
So as you can imagine, the number of lines pertaining to "Module Name" could be 2 or 50, until the end pattern (**/) appears.
From your description, it seems like this should work:
tail -f log | awk '/^\/\*\*\* Module Name/,/^\*\*\//'
but be wary of buffering issues. (Lines printed to the file will very likely see high latency before actually being printed.)
I think you will have to install pcregrep for this. Then this will work:
tail -f logfile | pcregrep -M "^\/\*\*\* Some Other Module Name \|.*(\n|.)*?^\*\*/$"
This worked in testing, but for some reason I had to append your example text to the log file over 100 times before output started showing up. However, all of the "Some Other Module Name" data written to the log file after this command was started did eventually get printed to stdout.

PhantomJS: exported PDF to stdout

Is there a way to trigger the PDF export feature in PhantomJS without specifying an output file with the .pdf extension? We'd like to use stdout to output the PDF.
You can output directly to stdout without a need for a temporary file.
page.render('/dev/stdout', { format: 'pdf' });
See here for history on when this was added.
If you want to get HTML from stdin and output the PDF to stdout, see here
Sorry for the extremely long answer; I have a feeling that I'll need to refer to this method several dozen times in my life, so I'll write "one answer to rule them all". I'll first babble a little about files, file descriptors, (named) pipes, and output redirection, and then answer your question.
Consider this simple C99 program:
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char* argv[])
{
    if (argc < 2) {
        printf("Usage: %s file_name\n", argv[0]);
        return 1;
    }

    FILE* file = fopen(argv[1], "w");
    if (!file) {
        printf("No such file: %s\n", argv[1]);
        return 2;
    }

    fprintf(file, "some text...");
    fclose(file);
    return 0;
}
Very straightforward. It takes an argument (a file name) and prints some text into it. Couldn't be any simpler.
Compile it with clang write_to_file.c -o write_to_file.o or gcc write_to_file.c -o write_to_file.o.
Now, run ./write_to_file.o some_file (which prints into some_file). Then run cat some_file. The result, as expected, is some text...
Now let's get more fancy. Type (./write_to_file.o /dev/stdout) > some_file in the terminal. We're asking the program to write to its standard output (instead of a regular file), and then we're redirecting that stdout to some_file (using > some_file). We could've used any of the following to achieve this:
(./write_to_file.o /dev/stdout) > some_file, which means "use stdout"
(./write_to_file.o /dev/stderr) 2> some_file, which means "use stderr, and redirect it using 2>"
(./write_to_file.o /dev/fd/2) 2> some_file, which is the same as above; stderr is the third file descriptor assigned to Unix processes by default (after stdin and stdout)
(./write_to_file.o /dev/fd/5) 5> some_file, which means "use your sixth file descriptor, and redirect it to some_file"
In case it's not clear, we're using a Unix pipe instead of an actual file (everything is a file in Unix, after all). We can do all sorts of fancy things with this pipe: write it to a regular file, or write it to a named pipe and share it between different processes.
Now, let's create a named pipe:
mkfifo my_pipe
If you type ls -l now, you'll see:
total 32
prw-r--r-- 1 pooriaazimi staff 0 Jul 15 09:12 my_pipe
-rw-r--r-- 1 pooriaazimi staff 336 Jul 15 08:29 write_to_file.c
-rwxr-xr-x 1 pooriaazimi staff 8832 Jul 15 08:34 write_to_file.o
Note the p at the beginning of second line. It means that my_pipe is a (named) pipe.
Now, let's specify what we want to do with our pipe:
gzip -c < my_pipe > out.gz &
It means: gzip what I put inside my_pipe and write the results in out.gz. The & at the end asks the shell to run this command in the background. You'll get something like [1] 10449 and the control gets back to the terminal.
Then, simply redirect the output of our C program to this pipe:
(./write_to_file.o /dev/fd/5) 5> my_pipe
Or
./write_to_file.o my_pipe
You'll get
[1]+ Done gzip -c < my_pipe > out.gz
which means the gzip command has finished.
Now, do another ls -l:
total 40
prw-r--r-- 1 pooriaazimi staff 0 Jul 15 09:14 my_pipe
-rw-r--r-- 1 pooriaazimi staff 32 Jul 15 09:14 out.gz
-rw-r--r-- 1 pooriaazimi staff 336 Jul 15 08:29 write_to_file.c
-rwxr-xr-x 1 pooriaazimi staff 8832 Jul 15 08:34 write_to_file.o
We've successfully gzipped our text!
Execute gzip -d out.gz to decompress this gzipped file. It will be deleted and a new file (out) will be created. cat out gets us:
some text...
which is what we expected.
Don't forget to remove the pipe with rm my_pipe!
Now back to PhantomJS.
This is a simple PhantomJS script (render.coffee, written in CoffeeScript) that takes two arguments: a URL and a file name. It loads the URL, renders it and writes it to the given file name:
system = require 'system'

renderUrlToFile = (url, file, callback) ->
  page = require('webpage').create()
  page.viewportSize = { width: 1024, height : 800 }
  page.settings.userAgent = 'Phantom.js bot'

  page.open url, (status) ->
    if status isnt 'success'
      console.log "Unable to render '#{url}'"
    else
      page.render file
    delete page
    callback url, file

url = system.args[1]
file_name = system.args[2]

console.log "Will render to #{file_name}"

renderUrlToFile "http://#{url}", file_name, (url, file) ->
  console.log "Rendered '#{url}' to '#{file}'"
  phantom.exit()
Now type phantomjs render.coffee news.ycombinator.com hn.png in the terminal to render Hacker News front page into file hn.png. It works as expected. So does phantomjs render.coffee news.ycombinator.com hn.pdf.
Let's repeat what we did earlier with our C program:
(phantomjs render.coffee news.ycombinator.com /dev/fd/5) 5> hn.pdf
It doesn't work... :( Why? Because, as stated on PhantomJS's manual:
render(fileName)
Renders the web page to an image buffer and save it
as the specified file.
Currently the output format is automatically set based on the file
extension. Supported formats are PNG, JPEG, and PDF.
It fails simply because neither /dev/fd/5 nor /dev/stdout ends in .pdf, .png, etc.
But no fear, named pipes can help you!
Create another named pipe, but this time use the extension .pdf:
mkfifo my_pipe.pdf
Now, tell it to simply cat its input to hn.pdf:
cat < my_pipe.pdf > hn.pdf &
Then run:
phantomjs render.coffee news.ycombinator.com my_pipe.pdf
And behold the beautiful hn.pdf!
Obviously you'll want to do something more sophisticated than just cat-ing the output, but I'm sure it's clear now what you should do :)
TL;DR:
Create a named pipe, using the ".pdf" file extension (so it fools PhantomJS into thinking it's a PDF file):
mkfifo my_pipe.pdf
Do whatever you want to do with the contents of the file, like:
cat < my_pipe.pdf > hn.pdf
which simply cats it to hn.pdf
In PhantomJS, render to this file/pipe.
Later on, you should remove the pipe:
rm my_pipe.pdf
As pointed out by Niko, you can use renderBase64() to render the web page to an image buffer and return the result as a base64-encoded string. But for now this only works for PNG, JPEG and GIF.
To write something from a PhantomJS script to stdout, just use the filesystem API.
I use something like this for images:
var base64image = page.renderBase64('PNG');
var fs = require("fs");
fs.write("/dev/stdout", base64image, "w");
I don't know if PDF support for renderBase64() will come in a future version of PhantomJS, but as a workaround something along these lines may work for you:
page.render(output);
var fs = require("fs");
var pdf = fs.read(output);
fs.write("/dev/stdout", pdf, "w");
fs.remove(output);
Where output is the path to the pdf file.
I don't know if it would address your problem, but you may also check the new renderBase64() method added to PhantomJS 1.6: https://github.com/ariya/phantomjs/blob/master/src/webpage.cpp#L623
Unfortunately, the feature is not documented on the wiki yet :/