Saving the result of a telnet request as a text document - telnet

So, say I run "telnet google.com 80\rGET / HTTP/1.0\r", is there any way I could save the ensuing HTTP data?
Is it possible to do this in bash? If not, in Perl?

Use tee.
telnet google.com 80 | tee output.txt
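If you also want to send the request non-interactively, you can pipe it in; a rough sketch (the sleep just keeps the connection open long enough for the response to arrive before telnet's stdin closes):
{ printf 'GET / HTTP/1.0\r\n\r\n'; sleep 2; } | telnet google.com 80 | tee output.txt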

The script command seems to meet your requirements. Once you start it, all the terminal output from that session gets saved to a file. But for the specific task you mentioned,
I'd use wget or curl rather than messing around with script.
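For example, to fetch the same page and save the response body to a file:
curl -o output.txt http://google.com/
or
wget -O output.txt http://google.com/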

Related

How to avoid ssh connection overheads in scp?

I am testing the performance of the scp command. I want to minimize the overhead of establishing the ssh TCP connection that scp makes.
How can I open the first ssh connection and reuse it over time?
Thanks for your help.
// I should have said that one way to achieve this is to zip the files and send them all at once, which only works when all the files are available up front. Let's assume instead that the files are generated in streaming fashion on the source side, and I want to send each one as early as possible after it is generated.
// Please refer to this link for the answer I found: (How To Reuse SSH Connection To Speed Up Remote Login Process) http://www.cyberciti.biz/faq/linux-unix-reuse-openssh-connection/
If you are copying so many tiny files that the connection overhead comes into play, you could try tar'ing everything up on the fly and sending that instead.
Try something like this:
tar zcvf - data | ssh user@server "cat > data.tar.gz"
You can also drop the z if compression isn't desired or helpful.
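The connection-reuse approach from the link above (OpenSSH connection multiplexing) comes down to a few lines in ~/.ssh/config; a sketch, with the host name as a placeholder:
Host yourserver
    ControlMaster auto
    ControlPath ~/.ssh/cm-%r@%h:%p
    ControlPersist 10m
After the first ssh or scp call to that host opens the master connection, subsequent calls reuse it instead of negotiating a new one.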

Limit SSH - bash with no commands

So I have been working on this for some time. Would like to know if there is a better way or if I am on the right track.
I would basically like to allow some users to login to my server via SSH and then have a squid tunnel via that SSH connection.
The tricky part, however, is that I don't want these users to be able to execute ANY commands. I mean NOTHING at all.
So at this stage I have set up a jail via jailkit. The specific user is then placed in the jail and given bash as a shell.
The next step would be to remove all the commands in the /jail/bin/ directories etc so that they are not able to execute any commands.
Am I on the right path here? What would you suggest?
Also, I see that it will give them many "command not found" errors. How do I remove these?
Is there any other shell I could look at giving them that would not let them do anything?
You could set their shell to something like /bin/true, or maybe a simple script that will output an informational message, and then have them log on using ssh -N (see the ssh manual page). I believe that allows them to use port forwarding without having an actual shell on the system.
EDIT:
The equivalent of ssh -N in PuTTY is checking the "Don't start a shell or command at all" checkbox in its SSH configuration tab (Connection->SSH).
EDIT2:
As an alternative to this, you could use a script that enters an infinite sleep loop; the connection will remain alive until it is interrupted with Ctrl-C. I just tried this:
#!/bin/sh
echo "DNSH: Do-Nothing Shell"
while sleep 3600; do :; done
If you use this as a shell (preferably with a more helpful message) your users will be able to use port forwarding without an actual shell and without having to know about ssh -N and friends.
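For example, a user given such a do-nothing shell could still bring up the tunnel with plain port forwarding (the ports and host below are placeholders for your squid setup):
ssh -N -L 3128:localhost:3128 user@yourserver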

How can I remotely log on to a machine, execute a script which sets up an environment, then accept user input?

I've been trying to figure out a way to do this for a few hours now, and am having no luck.
I have a large environment file that I have saved as a ksh script. This script works perfectly if I type . ./setEnv.sh
However, what I'm trying to do is use either ssh or rsh to log on to a remote system, execute this script, and then let me use the system with its modified environment. I am able to successfully execute the script, but the connection always closes after execution. I would like to keep this connection open.
Any idea on how I can do this?
At the moment, it does not matter if I use SSH or RSH to accomplish this. RSH is preferable. I am using a variety of Linux and Solaris operating systems, so a catch-all method would be nice.
Thanks,
Matt
Couldn't you do something like this?
ssh user@host ". ./setEnv.sh && your-command"
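If the goal is to stay in an interactive session with the environment loaded, one pattern worth trying (a sketch; it assumes setEnv.sh lives in the remote home directory) is to source the script and then replace the remote command with an interactive shell:
ssh -t user@host '. ./setEnv.sh; exec $SHELL -i'
The -t forces a pseudo-terminal so the interactive shell behaves normally.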

telnet to different IPs and run commands

I'm not sure if this is possible or not.
What I'm looking for is a way to tell telnet which IP address to log into, and then run commands that change based on a user's MAC address.
Basically it would be:
tell telnet to use x.x.x.x as the IP to log into and put in the correct username and password
tell telnet to run commands (based on the user's MAC address) that change depending on which user's stats you want to see, for example: show macaddress
export the output to notepad
close
expect can do this. If you don't have Tcl but Python, try Pexpect.
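A minimal expect sketch of that idea (the host, prompts and credentials are all placeholders; adjust them to whatever your devices actually print):
#!/usr/bin/expect -f
# save the whole session, including command output, to a file
log_file output.txt
spawn telnet 192.0.2.1
expect "Username:"
send "admin\r"
expect "Password:"
send "secret\r"
expect ">"
send "show macaddress\r"
expect ">"
send "exit\r"
expect eof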
If you just want to run one command, use ssh (which allows you to log in, run a command and which will return with the error code of the command, so you can handle errors, too).
If you want to run more than a single command, write a script, use scp to copy that script to the other side and then execute the script with ssh. I've used this approach with great success to build a simple spider that could run a script to gather system information over a large number of hosts.
I think you're looking for expect (it automates these kinds of interactive applications). Here is a gratis chapter from the authority on expect, the book "Exploring Expect".
Also you should use SSH if this is over the internet. Telnet is insecure as it's a plain text protocol.
Not to blow my own horn, but you may be able to twist a personal app of mine (note: Sorry, I've removed this.) to this end.
There's currently no documentation other than what is on that page and no public source code (though I've been meaning to get onto that, and will work that out tomorrow if you're interested), but I'd be happy to answer any questions.
That said, any MUD client could be turned to the same use too.

How reliable would it be to download over 100,000 files via wget from a bash file over ssh?

I have a bash file that contains wget commands to download over 100,000 files totaling around 20gb of data.
The bash file looks something like:
wget http://something.com/path/to/file.data
wget http://something.com/path/to/file2.data
wget http://something.com/path/to/file3.data
wget http://something.com/path/to/file4.data
And there are exactly 114,770 rows of this. How reliable would it be to ssh into a server I have an account on and run this? Would my ssh session time out eventually? Would I have to stay ssh'ed in the entire time? What if my local computer crashed or got shut down?
Also, does anyone know how many resources this would take? Am I crazy to want to do this on a shared server?
I know this is a weird question, just wondering if anyone has any ideas. Thanks!
Use
nohup ./scriptname &> logname.log
This will ensure that:
the process will continue even if the ssh session is interrupted
you can monitor it as it runs, since the output goes to the log file
I would also recommend printing a progress message at regular intervals; it helps with log analysis, e.g. echo "1000 files copied"
As far as resource utilisation is concerned, it depends entirely on the system, and mostly on network characteristics. Theoretically you can calculate the time from just the data size and bandwidth, but in real life delays, latency and data loss come into the picture.
So make some assumptions, do some maths and you'll get the answer :)
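As a sketch of that kind of progress prompt, assuming the URLs are moved into a plain list file (urls.txt is hypothetical), the download loop could count as it goes:
count=0
while read -r url; do
    wget "$url"
    count=$((count + 1))
    [ $((count % 1000)) -eq 0 ] && echo "$count files copied"
done < urls.txt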
Depends on the reliability of the communication medium, hardware, ...!
You can use screen to keep it running while you disconnect from the remote computer.
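A minimal screen workflow for that (the session name is arbitrary):
screen -S downloads        # start a named session, then run the script inside it
# press Ctrl-A then d to detach; you can now log out safely
screen -r downloads        # reattach later to check on progress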
You want to disconnect the script from your shell and have it run in the background (using nohup), so that it continues running when you log out.
You also want to have some kind of progress indicator, such as a log file that logs every file that was downloaded, and also all the error messages. By default, nohup appends stdout and stderr to nohup.out unless you redirect them elsewhere.
With such a file, you can pick up broken downloads and aborted runs later on.
Give it a test-run first with a small set of files to see if you got the command down and like the output.
I suggest you detach it from your shell with nohup.
$ nohup myLongRunningScript.sh > script.stdout 2>script.stderr &
$ exit
The script will run to completion - you don't need to be logged in throughout.
Do check for any options you can give wget to make it retry on failure.
If it is possible, generate MD5 checksums for all of the files and use them to check whether they were all transferred correctly.
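For example, wget can retry, resume, and read its URLs from a file, which also avoids 114,770 separate invocations; a sketch (urls.txt is a hypothetical list with one URL per line):
nohup wget --tries=3 --continue --input-file=urls.txt -o download.log &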
Start it with
nohup ./scriptname &
and you should be fine.
Also, I would recommend that you log the progress so that you can find out where it stopped if it does.
wget url >> logfile.log 2>&1
could be enough.
To monitor progress live you could:
tail -f logfile.log
It may be worth looking at an alternative technology, like rsync. I've used it on many projects and it works very, very well.