Console output from bash command executed with wsl getting truncated when redirected to a file

I'm attempting to use wsl to execute a bash command from powershell/cmd and capture the output to a file.
When I run wsl -e cat /etc/services, the full contents of the file appear correctly in the console.
However, if I run wsl -e cat /etc/services > foo.txt, foo.txt contains only the first ~4k characters of the output. If I run the same command inside a wsl bash session, foo.txt contains the full content I would expect. I've tried this with a number of wsl commands, and the cutoff point always seems to be around 4k characters. I've also tried wsl -- cat /etc/services > foo.txt with the same results.
Does anyone know why the truncation is happening? More importantly, how do I run a command with wsl and capture the output to a file?
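One workaround to try (an assumption on my part, not something confirmed in this thread) is to let bash perform the redirection inside WSL, so the file is written by the Linux side instead of passing through the Windows console pipe:

# Hypothetical workaround: do the redirection inside WSL rather than in cmd/PowerShell
wsl -e bash -c "cat /etc/services > foo.txt"

# Or target an explicit Windows path via the drive mount (path is just an example)
wsl -e bash -c "cat /etc/services > /mnt/c/temp/foo.txt"

In both cases the > is interpreted by the Linux-side bash, so the Windows-side redirection that appears to truncate at ~4k is never involved.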

Related

Run RapSearch-Program with Torque PBS and qsub

My problem is that I have a cluster server running Torque PBS and I want to use it to run a sequence comparison with the program RapSearch.
The normal RapSearch command is:
./rapsearch -q protein.fasta -d database -o output -e 0.001 -v 10 -x t -z 32
Now I want to run it on 2 nodes of the cluster.
I've tried with: echo "./rapsearch -q protein.fasta -d database -o output -e 0.001 -v 10 -x t -z 32" | qsub -l nodes=2 but nothing happened.
Do you have any suggestions? Where am I going wrong? Help please.
Standard output (and error output) files are placed in your home directory by default; take a look. You are looking for a file named STDIN.e[numbers]; it will contain the error message.
However, I see that you're using ./rapsearch without being explicit about which directory you're in. Your problem is therefore probably that the job needs to change into the directory you submitted from. With your terminal in the directory of the rapsearch executable, try echo "cd \$PBS_O_WORKDIR && ./rapsearch [arguments]" | qsub [arguments] to submit your job to the cluster.
Other tips:
You could add rapsearch to your path if you use it often. Then you can use it like a regular command anywhere. It's a matter of adding the line export PATH=/full/path/to/rapsearch/bin:$PATH to your .bashrc file.
Create a submission script for use with qsub; a minimal sketch is shown below.
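For illustration only, such a submission script might look like the following (the job name, node/core request, and walltime are assumptions, not values from the question):

#!/bin/bash
#PBS -N rapsearch            # job name (example)
#PBS -l nodes=2:ppn=8        # assumed node/core request; adjust for your cluster
#PBS -l walltime=24:00:00    # assumed walltime
#PBS -j oe                   # merge stdout and stderr into one output file

# qsub starts jobs in your home directory; move to the submission directory first
cd "$PBS_O_WORKDIR"

./rapsearch -q protein.fasta -d database -o output -e 0.001 -v 10 -x t -z 32

You would then submit it with something like qsub rapsearch.pbs (the filename is arbitrary).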

No such file or directory from sh script

Looking for the origin of this error message:
Processing: +([^_]).flv
date: +([^_]).flv: No such file or directory
I started getting this at some point in the last few months (can't say when as I wasn't logging my cron output. I know, I know!).
When I originally wrote this, it worked ok for at least two months. I'm wondering if there was an sh update that broke it?
The script runs via crontab and gets all .flv files in the current directory without an underscore and processes each one. It then checks the modified date for files that have been created in the last 24 hours and runs the yamdi meta tag injector for .flv files.
It seems to me that it's not recognizing the pattern as a pattern and is instead looking for it as a literal filename. If I run this script from an ssh shell it works fine; it's only when running via cron that it gives this error.
shopt -s extglob
now=$(date +"%s")
for f in +([^_]).flv; do
    echo "Processing: $f"
    age=$(date -r "$f" +"%s")          # modification time in epoch seconds
    calc=$(( (now - age) / 60 / 60 ))  # file age in hours
    if (( calc < 24 )); then
        echo "$f age=$calc"
        yamdi -i "$f" -o "$f".seek
        rm "$f"
        cp "$f".seek "$f"
        touch -d "@$age" "$f"          # restore the original modification time
    fi
done
This is most likely a problem of the wrong shell being used; make sure your script's first line represents the right shell:
#!/bin/bash
for bash, or whatever shell you wrote this for. You might also want to check the environment variables that cron sets: that's a very common problem -- one assumes everything is set up correctly, but the environment cron offers to the scripts it executes is different from an interactive login.
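As an illustration (the paths and schedule are assumptions, not taken from the question), a cron-safe setup pins both the shell and the working directory explicitly:

#!/bin/bash
# Force bash so that shopt -s extglob and (( ... )) are available even under cron
cd /path/to/flv/directory || exit 1   # assumed path; cron jobs start in $HOME
shopt -s extglob
# ... rest of the loop from the question ...

And a matching crontab entry, invoking bash explicitly and logging output so future errors aren't silently lost:

0 * * * * /bin/bash /home/user/process_flv.sh >> /home/user/process_flv.log 2>&1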

How can I cat back exact formatting regardless of shell?

While trying to write a script, I found an interesting issue with cat today. If I do the following at the command line, everything works properly:
var=$(ssh user@server "cat /directory/myfile.sh")
echo $var > ~/newfile.sh
This works and I have a script file with all the proper formatting and can run it. However, if I do the EXACT same thing in a script:
#!/bin/sh
var=$(ssh user@server "cat /directory/myfile.sh")
echo $var > ~/newfile.sh
The file is mangled with carriage returns and weird formatting.
Does anyone know why this is happening? My goal is to ultimately cat a script from a server and run it locally on my machine.
EDIT
I now know that this is happening because of my invoking #!/bin/sh in my shell script. The command line works because I'm using zsh and it is preserving the formatting.
Is there a way to cat back the results regardless of the shell?
As you seem to have figured out, word splitting of unquoted variables is off by default in zsh, but on in sh, bash, etc. You can prevent word splitting in all shells by quoting the variable:
echo "$var" > ~/newfile.sh
Note that echo appends a newline to its output by default, which you can suppress (on most echo implementations and builtins) with -n.
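As a sketch of the stated goal (the user, host, and paths are placeholders), you can also sidestep the variable entirely and redirect ssh's output straight to a file:

# Option 1: keep the variable, but always quote it when expanding
var=$(ssh user@server "cat /directory/myfile.sh")
printf '%s\n' "$var" > ~/newfile.sh

# Option 2: skip the variable and let ssh's stdout go directly to the file
ssh user@server "cat /directory/myfile.sh" > ~/newfile.sh
chmod +x ~/newfile.sh   # if you intend to run it locally

Option 2 preserves the file byte-for-byte, since no shell expansion of its contents ever happens.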

Transfer files over SSH, then appended to another file

I'm trying to automate a script that copies a file from my local server to a remote server on the command line. I've done the research on scp and know how to copy the file to the remote server, but then I want to append that file to another.
This is my code:
scp ~/file.txt user@host:
ssh user@host cat file.txt >> other_file.txt
When I enter everything into the command line manually as such, everything works fine:
scp ~/file.txt user@host:
ssh user@host
cat file.txt >> other_file.txt
But when I run the script, only the file is copied, not appended to the end of other_file.txt. Help?
The second line of your code should be
ssh user@host "cat file.txt >> other_file.txt"
Three important points:
You don't want your local shell to interpret >> in any way (which it does if it's unquoted)
There is a remote shell which will interpret >> in the command correctly.
Final arguments to ssh are "joined" to form a single command string, rather than being passed through unchanged as an argv array. That may be convenient, but it can also lead to confusion or bugs: ssh cat "$MYFILE" and ssh "cat '$MYFILE'" both work in the common case, but each breaks for different values of $MYFILE.
You need to enclose the command to be run on the remote host in quotes. Otherwise, the redirection is being done locally rather than remotely. Try this instead:
scp ~/file.txt user@host:
ssh user@host 'cat file.txt >> other_file.txt'
Try this:
cat file.txt | ssh hostname 'cat >> other_file.txt'
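Putting it together, here is a sketch of the whole transfer-and-append (user and host are placeholders); the first variant never leaves a copy of file.txt on the remote side:

# Stream the local file over ssh and append it remotely in a single command
ssh user@host 'cat >> other_file.txt' < ~/file.txt

# Or keep the original two-step approach, quoting the remote command so the
# redirection happens on the remote host rather than locally
scp ~/file.txt user@host:
ssh user@host 'cat file.txt >> other_file.txt'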

Opening multiple shells with tcsh script

I'm currently working with KDE 3.5.
Here is what I would eventually like to do to help my workflow:
Have a script that:
Opens multiple konsole shells
Renames each shell
This is what I have so far:
#!/bin/tcsh -fv
set KPID = `ps -ef | grep konsole | grep -v grep | awk '{print $2}' | tr "\n" " "`
dcop konsole-$KPID konsole newSession
The dcop command works just fine on the command line (substituting the variable for the actual pid), but when I run it through the script it gives an 'object not accessible' error. No other errors are present.
I've made sure permissions are OK (777) and even ran it with sudo, but no luck.
As for the second part, I again have it working on the command line:
dcop $KONSOLE_DCOP_SESSION renameSession "name"
This, however, only works for the active (working) shell, and I'm not sure how to get it to work for the others. I haven't put this part in the script yet, as I'm still working on the first part. Any suggestions would be great.
Thanks.
If it's a script, it doesn't need to be tcsh; see http://www.grymoire.com/Unix/CshTop10.txt
But if you want to pass $KPID into your script, use $1 inside the script (argument #1), and call it with
script $KPID
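Following that suggestion (and the general advice to avoid csh for scripting), a rough bash sketch of the whole idea might look like this. The two dcop calls are taken from the question; the assumption that newSession prints the new session's DCOP name on stdout is mine and untested:

#!/bin/bash
# Usage: ./open_sessions.sh <konsole-pid> [number-of-shells]
KPID=$1
COUNT=${2:-3}

for i in $(seq 1 "$COUNT"); do
    # newSession (from the question); assumed to print the new session's
    # DCOP object name, e.g. session-2
    SESSION=$(dcop "konsole-$KPID" konsole newSession)
    # renameSession (also from the question), addressed at the new session
    dcop "konsole-$KPID" "$SESSION" renameSession "shell-$i"
done

Invoked as the answer suggests: ./open_sessions.sh $KPID 4 (the script name and count are just examples).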