Viewing more lines with 'rhc tail' and 'rhc-tail-files' - jboss7.x

I've deployed a JBoss 7.1 application on OpenShift. Now I need to examine the log file, but with the command tail -f -n 100 jbossas-7/logs/server.log I see only the last 10 lines of the log file. Is there a way to view the whole file? Can I download it?
Thank you!
Edit
Sorry, I didn't explain myself well. I meant that I wasn't able to view more lines with the rhc-tail-files tool. I have since solved my problem by using the PuTTY SSH client and the less command. Thank you for your replies.

You can use the -o option with rhc tail to pass the -n option to tail:
rhc tail <app name> -f jbossas-7/logs/server.log -o '-n100'
Note that there is no space in '-n100'.

You can use the less or cat command: less jbossas-7/logs/server.log or cat jbossas-7/logs/server.log
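If you also want to download the file, one option is to copy it off the gear with scp. A sketch, where the user and host below are placeholders for your gear's SSH URL (shown in the OpenShift console or in the rhc app show output):
scp <gear-uuid>@<app>-<namespace>.rhcloud.com:jbossas-7/logs/server.log .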

Related

SSH one-shot command gives partial results

I run a command to grep a long log file on a remote server. The problem is that when I ssh in first and then run the grep command remotely, I get far more matches than when I do it in one shot, as follows:
ssh host 'less file | grep something'
I suspected some default automatic timeout with the one-shot version, so I experimented with the options -o ServerAliveInterval=<seconds> and -o ServerAliveCountMax=<int>, but to no avail. Any idea what the problem could be?
The problem was related to less: it does not behave well outside of interactive mode. Using cat instead solved the issue.
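In other words, with cat the one-shot command from the question becomes:
ssh host 'cat file | grep something'
or, more simply, let grep read the file directly:
ssh host 'grep something file'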

Failing to download files using SCP

I am trying to download a large number of files from a remote Ubuntu server to my machine, which is also running Ubuntu. I am using the SCP protocol as below:
for i in *; do $i sshpass -p 'Remote_Server_Passcode' scp root@<Remote_Server_IP>:'/opt/Data/' .; done
This fails with an error message saying command not found.
Any help pointing me in the right direction would be much appreciated.
Thanks
If I understand correctly, you just want to copy the whole /opt/Data directory. This can be achieved like this:
scp -r root@<Remote_Server_IP>:/opt/Data/ .
-r means recursive
As to what was going wrong: for i in *; do $i loops over every file in the current local directory and then tries to execute each of them as a command, which is probably not what you wanted.
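If password authentication is still needed non-interactively, the recursive copy can be combined with sshpass directly, with no loop at all (a sketch using the placeholder password and host from the question):
sshpass -p 'Remote_Server_Passcode' scp -r root@<Remote_Server_IP>:/opt/Data/ .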

How to rsync with a non-standard port and two-factor (2FA) authentication?

I need to rsync to a remote server using a non-standard SSH port and 2FA, which I use via the Authy app. SSH works with this command:
ssh -2 -p 9999 -i /Users/Me/.ssh/id_rsa user@9.9.9.9
This brings up a "Verification Code" prompt in the shell, which I enter from Authy, and I'm in.
Based on the discussion in a Stack Overflow answer, I tried this variation of rsync:
rsync -rvz -e 'ssh -p 9999 -i /Users/Me/.ssh/id_rsa' \
--progress /src/ user@9.9.9.9:/dest/
(Split across two lines here just for legibility; it's a single line in my shell.)
This does bring up the Verification Code prompt, which I enter correctly, but then it produces this error:
protocol version mismatch -- is your shell clean?
(see the rsync man page for an explanation)
rsync error: protocol incompatibility (code 2) at compat.c(185) [sender=3.1.3]
How can I use rsync with 2FA? Many thanks.
Because @JGK mentioned the answer in a comment, I'm adding it here for posterity. The "is your shell clean" error appears when the remote server echoes some output upon login, which in my case .bashrc was indeed doing. I added a conditional so that the echo only applies when the shell is interactive, as mentioned in this Server Fault thread, and it works. For clarity, the if condition reads as follows:
if echo "$-" | grep i > /dev/null; then
[any code that outputs text here]
fi
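With that guard in the remote .bashrc, the original one-line rsync invocation (same port, key, and paths as above) should then run without the protocol error:
rsync -rvz -e 'ssh -p 9999 -i /Users/Me/.ssh/id_rsa' --progress /src/ user@9.9.9.9:/dest/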
Many thanks.

--immediate-submit {dependencies} string contains script paths, not job IDs?

I'm trying to use Snakemake's --immediate-submit option on a PBSPro cluster. I tried an in-place modification of the dependencies string to adapt it to PBSPro, similar to what is done here.
snakemake --cluster "qsub -l wd -l mem={cluster.mem}GB -l ncpus={threads} -e {cluster.stderr} -q {cluster.queue} -l walltime={cluster.walltime} -o {cluster.stdout} -S /bin/bash -W $(echo '{dependencies}' | sed 's/^/depend=afterok:/g' | sed 's/ /:/g')"
This last part gets converted into, for example:
-W depend=afterok: /g/data1a/va1/dk0741/analysis/2018-03-25_marmo_test/.snakemake/tmp.cyrhf51c/snakejob.trimmomatic_pe.7.sh
There are two problems here:
How can I get the dependencies string to output job ID instead of the script path? The qsub command normally outputs the job ID to stdout, so I'm not sure why it's not doing so here.
How do I get rid of the space after afterok:? I've tried everything!
As an aside, it would be helpful if there were some option to debug the submission or not to delete the tmp.cyrhf51c directory in .snakemake -- is there some way to do this?
Thanks,
David
I suggest using a profile for this instead of trying to find an ad-hoc solution. This will also help with debugging. For example, there is already a pbs-torque profile available (https://github.com/Snakemake-Profiles/pbs-torque); probably not much would need to change for PBSPro.
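For reference, profiles of that kind typically hand Snakemake a small submission wrapper rather than a raw qsub string. The sketch below is not the profile's actual code, just an illustration of the pattern, assuming Snakemake is invoked with --cluster "wrapper.sh {dependencies}" and --immediate-submit (your resource flags are omitted for brevity):

#!/bin/bash
# Snakemake appends the job script as the last argument, so everything
# before it is the list of job IDs this job depends on.
jobscript="${@: -1}"
deps=("${@:1:$#-1}")
depend=""
if [ "${#deps[@]}" -gt 0 ]; then
    # Join the IDs with ":" to build depend=afterok:<id>:<id>...
    depend="-W depend=afterok:$(IFS=:; echo "${deps[*]}")"
fi
# $depend is left unquoted on purpose so that "-W" and "depend=..." split
# into two arguments. qsub prints the new job ID on stdout, which is what
# Snakemake captures and later substitutes into {dependencies}.
qsub -S /bin/bash $depend "$jobscript"

The idea is that only the job ID reaches stdout, so Snakemake should have real IDs to substitute into {dependencies}, and joining them with ":" inside the wrapper avoids the stray space after afterok:.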

Run RapSearch-Program with Torque PBS and qsub

I have a cluster running Torque PBS and want to use it to run a sequence comparison with the program RapSearch.
The normal RapSearch command is:
./rapsearch -q protein.fasta -d database -o output -e 0.001 -v 10 -x t -z 32
Now I want to run it on 2 nodes of the cluster.
I've tried echo "./rapsearch -q protein.fasta -d database -o output -e 0.001 -v 10 -x t -z 32" | qsub -l nodes=2, but nothing happened.
Do you have any suggestions? Where am I going wrong? Help, please.
Standard output (and error output) files are placed in your home directory by default; take a look. You are looking for a file named STDIN.e[numbers]; it will contain the error message.
However, I see that you're using ./rapsearch without being explicit about which directory you're in. Your problem is therefore probably that the job needs to change into the directory you submitted from. With your terminal in the directory of the rapsearch executable, try echo "cd \$PBS_O_WORKDIR && ./rapsearch [arguments]" | qsub [arguments] to submit your job to the cluster.
Other tips:
You could add rapsearch to your PATH if you use it often; then you can use it like a regular command from anywhere. It's a matter of adding the line export PATH=/full/path/to/rapsearch/bin:$PATH to your .bashrc file.
Create a submission script for use with qsub. Here is a good example.
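For illustration, a minimal submission script along those lines might look like the sketch below (the ppn and walltime values are placeholders to adapt to your cluster; nodes=2 matches your request above):

#!/bin/bash
#PBS -N rapsearch
#PBS -l nodes=2:ppn=8
#PBS -l walltime=24:00:00
#PBS -j oe

# Change into the directory the job was submitted from, where the
# rapsearch executable and input files live (jobs do not start there).
cd "$PBS_O_WORKDIR"
./rapsearch -q protein.fasta -d database -o output -e 0.001 -v 10 -x t -z 32

Save it as, say, rapsearch.pbs and submit it with qsub rapsearch.pbs.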