Connect via SSH with a single .bat script to multiple addresses - ssh

I have an environment of 27 MikroTik routers, and I want to add a user with the same credentials on each one.
Normally I would have to connect to every router and click through the GUI to add the user, but now I have found a way to use an SSH connection via cmd.
I wrote this, which connects to a single router and performs the add-user step:
ssh admin@10.1.2.3 -password "Passw0rd!" "user add name=customer-support password=@F0ry0u! group=full"
But now I want to make a script that reads a CSV file with the IP addresses of all the routers I want to perform the change on, and connects to each router to execute the command.
Is this possible?

Since SSH is available on MikroTik routers, you can indeed read their addresses from a file.
For example, use bash (which you can execute even on Windows through WSL/WSL2, or Git for Windows, which includes a MinGW bash).
See for example "Bash Read Comma Separated CSV File on Linux / Unix", here adapted to your case:
#!/bin/bash
# Purpose: Read Comma Separated CSV File
# Author: Vivek Gite under GPL v2.0+
# ------------------------------------------
INPUT=data.csv
OLDIFS=$IFS
IFS=','
[ ! -f "$INPUT" ] && { echo "$INPUT file not found"; exit 99; }
while read -r maddress
do
    echo "Address : $maddress"
    # do your SSH call here
done < "$INPUT"
IFS=$OLDIFS
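Putting the two together for your router case, here is a minimal sketch. Assumptions: sshpass is installed (plain OpenSSH has no password flag for scripting), the file name routers.csv is just a name chosen here, and it holds one IP address per line:
#!/bin/bash
# Minimal sketch: add the same user on every router listed in routers.csv
# (hypothetical file name, one IP address per line).
INPUT=routers.csv
ADMIN_PASS='Passw0rd!'          # better: prompt for it, or use key-based auth

[ ! -f "$INPUT" ] && { echo "$INPUT file not found"; exit 99; }

while IFS=',' read -r maddress; do
    echo "Configuring $maddress ..."
    # sshpass feeds the password to ssh non-interactively (assumption: installed)
    sshpass -p "$ADMIN_PASS" ssh -o StrictHostKeyChecking=no "admin@$maddress" \
        'user add name=customer-support password=@F0ry0u! group=full'
done < "$INPUT"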
But if you really need a bat script, see for instance "Help in writing a batch script to parse CSV file and output a text file".

Related

Robot Framework - SSH library - Editing a file on remote server

I am writing a test case in Robot Framework where I have to either copy a file from the local machine (Windows) to the remote server (Linux) or create a new one at that location.
I have used multiple sudo su - commands to switch users until I reach the desired host. As a result, I am not able to use the Put File keyword from SSHLibrary to upload the file. I reached the desired folder location by executing commands with the Write keyword.
Since there was no option left (that is what I concluded, with my limited knowledge of Robot Framework), I started creating a new file with the vi <filename> command. I can reach the INSERT mode of the file, BUT I am not able to type text into it.
Can someone please suggest how I can either:
Copy the file from the local Windows machine to the remote Linux server AFTER multiple su (switch user) commands, or
Create a new text file and enter the content.
Please note: the new file being created / copied is a certificate file, hence I do not wish to write the entire content of the certificate in my test suite file.
The entire test case looks something like this:
First Jump1
Log Starting the connection to AWS VM
# Connection to VM with Public Key
Connection To VM ${hostname} ${username}
Send Command sudo su -
Send Command su - <ServiceUser1>
# Reached the Destination server
Send Command whoami
Send Command ss -tln | grep 127.0.0.1:40
# Connecting to Particular ZIP
Send Command sudo -u <ServiceUser2> /usr/bin/ssh <ServiceUser2>@localhost -p <port>
Send Command sudo su -
# Check Auth Certificate
Send Command mosquitto_pub -h ${mq_host} -p ${mq_port} -u ${mq_username} -P ${mq_password}
In the step Check Auth Certificate, the certificate is checked to be present or not; if present, delete the current certificate and create the new one (either create a new file or upload from local), and if it is not there, create a new certificate.
Though it might not be ideal, I was able to achieve what I wanted with:
echo "content" > newFileName
echo "update content" >> newFileName

SSH from Synology NAS to remote server

When I run competitions for Icelandic Horses, I want to automatically upload the results from our Synology NAS to a remote webserver. The program we use automatically generates the HTML files that need to be uploaded.
What is the easiest way to achieve this? I have SSH access on both the NAS and the webserver.
Any help is appreciated :)
In this case you can create a cron task in the Synology console with the following commands:
sudo -i
vi /etc/crontab
Edit the file and add a line like this at the end, with an scp command that uploads the local files to the webserver:
0 0 * * * root scp -r -i /root/.ssh/mykey '/some/local/path' 'root@serverurl.com:/some/remote/path'
Finally, reload the configuration by restarting the service with:
synoservice -restart crond
Before all this you must configure a key pair to avoid the password prompt:
cd to a private directory of the user that will be running the script (typically "$HOME/.ssh", to be created if needed). That directory must be protected against write access from other users; fix the modes if needed.
Generate the key pair using the command "ssh-keygen" ("/usr/syno/bin/ssh-keygen" if it is not in your PATH). At the prompt "Enter file in which to save the key", choose a file name (let's say "mykey"). At the prompt "Enter passphrase (empty for no passphrase):", press return (this will create a passwordless private key). Two files will be created: "mykey" and "mykey.pub".
Copy the contents of mykey.pub into the "$HOME/.ssh/authorized_keys" file of the user account on the remote machine your script is going to connect to.
In your script, add "-i" with the path to "mykey" as an argument to the ssh/scp command.
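A condensed sketch of those steps as shell commands (the "mykey" name and the server address mirror the examples above; ssh-copy-id may be missing on older DSM versions, in which case append mykey.pub to the remote authorized_keys by hand):
# On the NAS, as the user that will run the cron job
mkdir -p ~/.ssh && chmod 700 ~/.ssh

# Generate a passwordless key pair named "mykey"
ssh-keygen -t rsa -f ~/.ssh/mykey -N ""

# Install the public key on the webserver (or append mykey.pub manually)
ssh-copy-id -i ~/.ssh/mykey.pub root@serverurl.com

# Test: this should log in without asking for a password
ssh -i ~/.ssh/mykey root@serverurl.com true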
The same forum also explains how to make the copy with rsync instead of scp; a sketch follows.
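For reference, a crontab line based on rsync might look like this (a sketch; the flags and paths are assumptions to adapt):
# Upload the generated HTML files every night at midnight (sketch)
0 0 * * * root rsync -az -e "ssh -i /root/.ssh/mykey" /some/local/path/ root@serverurl.com:/some/remote/path/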

Creating a shell script to modify and/or create bookmarks in Firefox

I have several applications that change IP addresses every time they are deployed.
I am wondering how I can create a bash shell script to:
Modify/update existing Firefox bookmarks
If the bookmarks don't exist, then create them.
After some research, I found that I need to modify places.sqlite, so I downloaded sqlite. I looked at the schema, and I think moz_places and moz_bookmarks are the tables I need to insert into, but I am not sure. If so, how would I connect the ids if I need two separate inserts? I already have a way to get the new IP address for every new deployment, so I would just stick that into a variable.
My use case looks something like this:
Deployment 1: URL: 192.168.1.10/app1
Deployment 2: URL: 192.168.1.20/app1
Brownie points if I can create multiple folders 1st and insert bookmarks inside them. Like {Folder#1: app1, app2}, {Folder#2: app3}, {Folder#3: app4, app5, app6}.
A shell script might not be the best tool for this problem, but you could use a script like the following to redirect your browser to a new location each time your application redeploys, and bookmark localhost:<port> instead:
#!/bin/bash
# Redirect localhost:<port> to another address with an HTML meta refresh
local_port="${1:?provide local port to listen on}"
redirect="${2:?provide application ip address}"
while :; do
    (
        echo "HTTP/1.1 200 OK"
        echo ""
        echo "<head>"
        # prepend http:// so the browser treats the target as an absolute URL
        echo "  <meta http-equiv=\"refresh\" content=\"0; url=http://$redirect\" />"
        echo "</head>"
        echo ""
    ) | ncat -l -p "$local_port" > /dev/null
done
This would let you bookmark localhost:8000 in Firefox and call the script when you redeploy. You could have an instance of the script running for each app; you'd just need to change the listening port. If you have a script that redeploys your app, you could add these lines there:
$ bash redirect.sh 8000 192.168.1.10/app1
$ bash redirect.sh 8001 192.168.1.11/app1
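As for the places.sqlite route mentioned in the question: for the update case, a minimal sketch with the sqlite3 command-line tool could look like this. Assumptions to note: Firefox must be closed while the database is touched, the profile path is a placeholder, and newer schemas also keep columns (such as url_hash) that Firefox maintains itself, so back up the file and treat this as a sketch rather than production tooling:
#!/bin/bash
# Sketch: point existing bookmarks at a newly deployed IP address.
# Run only while Firefox is closed; the profile path is a placeholder.
places="$HOME/.mozilla/firefox/xxxxxxxx.default/places.sqlite"
old_ip="192.168.1.10"
new_ip="192.168.1.20"

cp "$places" "$places.bak"    # always back up before editing

sqlite3 "$places" <<SQL
UPDATE moz_places
SET url = replace(url, '$old_ip', '$new_ip')
WHERE url LIKE '%$old_ip%';
SQL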

File Transfer over SSH connection [closed]

Closed. This question is off-topic and is not currently accepting answers. Closed 9 years ago.
I need to transfer (copy) files from my local computer (unix) to a remote computer host (linux) while I have established a SSH connection to the remote computer.
After connecting using SSH, and while under this very same SSH connection to the remote host, what command in the terminal do I use to transfer (copy/sync) files to that remote host from my local computer?
I know about 'scp', 'sftp' and 'rsync' file transfers, but these are used OUTSIDE of the SSH connection, independently, correct? So, I want to be able to run a command that copies the files under that same secure SSH connection.
Could I use the 'scp', 'sftp', 'ftp', or 'rsync' commands under the running SSH connection, and if so, HOW?
Thanks!
Update: this might be the answer you are actually looking for: https://askubuntu.com/questions/13382/download-a-file-over-an-active-ssh-session/13586#13586
scp, sftp, etc. create their own SSH connection unless they are tunneled through a port that has already been opened. This is true any time they are run from the command line, whether or not that command line is part of an existing SSH session. Tunneling the data is inefficient and no more secure than an independent SSH connection, except where you are trying to relay between servers to hide your own address.
To do what you want, you need to set up your local computer to allow SSH access. Then you can simply run something like "scp -P 4321 yourUserName@yourlocalhostsexternalhostname:path/to/file.txt ./" on the remote host in order to copy a file to your server while you are logged in to it.
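If your local computer is not directly reachable from the server, the same effect can be had with a reverse tunnel opened by the original SSH session (a sketch; port 2222 is an arbitrary choice):
# From the local machine: log in and forward server port 2222
# back to the local sshd on port 22.
ssh -R 2222:localhost:22 you@server

# Then, inside that session on the server, pull the file from
# the local machine through the tunnel:
scp -P 2222 yourLocalUser@localhost:path/to/file.txt ./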
It is a pain to type that command out all the time, and I prefer to work in the command line on my local Unix computer and synchronize just the current working directory to wherever it belongs on the server. I am including my program to do this (it is only designed for when you are going to be the one running it; there are lots of insecure lines that trust the user's input far too much).
This program is so easy to use that for several years I have never used anything else to transfer files I am working on from my local computer to my live webserver. And when I am working for others, I always miss this program.
In order to use this program, you have to create a file ".root" within the current directory or any parent directory. The ".root" file specifies where on the server that root directory belongs. This way, my program can find exactly where any subdirectory of that root directory belongs. And it greatly multiplies the efficiency of rsync, because instead of rsyncing through my whole website, it simply rsyncs the part of the website that I am working on at that instant.
#!/usr/bin/perl
# cpwd
# make sure it is in your path
$flagjpeg = "--exclude '*.JPG'";
my $J = shift;
$flagjpeg = "" if $J eq 'JPG';
$n = 10;
$wd = '';
while (! -f '.root') {
$pwd = `pwd | xargs basename`;
chomp $pwd;
$wd = "$pwd/$wd";
chdir('..');
last if $n-- < 1;
}
# chop $wd;
if (! -f '.root') {
print "No root found!\n";
exit;
}
$root = `head -1 .root`;
chomp $root;
@cmds = ($root =~ m/(\S+)/g);
$root = pop(@cmds);
$source = "$wd" || './';
$dest = "$root/$source";
print "copy '@cmds' '$source' (to) '$dest'\n";
my $cmd = "(rsync @cmds -vv --max-size=1208KiB $flagjpeg --exclude '*.ezip' --exclude '*.tgz' --exclude '*.gz' -C -avz $source $dest 2>&1) > /tmp/cpwd.log";
chomp(my $mypwd = `pwd`);
my $cmdlarge = "cd '$mypwd'; (rsync @cmds -vv $flagjpeg --exclude '*.ezip' --exclude '*.tgz' --exclude '*.gz' -C -avz $source $dest 2>&1) > /tmp/cpwd.log";
print "$cmd\n$cmdlarge\n\n";
# exit;
system($cmd);
system("grep -e 'over max-size' -e 'sender finished' /tmp/cpwd.log");
system("tail -4 /tmp/cpwd.log | head -3");
Example of a ".root" file:
$ cat .root
myname@server.net:www
Example of a ".root" file with extra flags:
$ cat .root
-e 'ssh -p 4321 -C' yourname@host2468.hosthosthost.com:www
Once the "cpwd" program is in your path and the ".root" file is created somewhere in the current or parent directory, all you need to do is work on your website and go to the command line (Ctrl-Z comes to mind) and type
$ cpwd
in order to synchronize everything within the working directory to your website as specified in the .root file.
Note that for safety cpwd will not create more than one level of non-existing directories, just in case you goof up your ".root" file and try to replicate your entire website inside of a subdirectory of your webserver by accident.

Smart way to copy multiple files from different paths using scp [duplicate]

This question already has answers here: scp or sftp copy multiple files with single command (19 answers)
Closed last year.
I would like to know an easy way to use scp to copy files and folders that are present in different paths on my file system. The SSH destination server requests a password, and I cannot put it in configuration files. I know that scp doesn't have a password parameter I could supply from a script, so for now I must copy each file or directory one by one, typing my password every time.
In addition to the already mentioned glob:
you can use {,} to define alternative paths/path parts in one single statement,
e.g.: scp user@host:/{PATH1,PATH2} DESTINATION
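For instance, with hypothetical paths (the quotes make the remote shell expand the braces, so a single connection and a single password prompt are used, assuming the remote login shell supports brace expansion):
# Two files from different directories in one scp call
scp 'user@host:/{etc/hostname,var/log/syslog}' /tmp/backup/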
From this site:
Open the master connection:
SSHSOCKET=~/.ssh/myUsername@targetServerName
ssh -M -f -N -o ControlPath=$SSHSOCKET myUsername@targetServerName
Open and close other connections without re-authenticating, as you like:
scp -o ControlPath=$SSHSOCKET myUsername@targetServerName:remoteFile.txt ./
Close the master connection:
ssh -S $SSHSOCKET -O exit myUsername@targetServerName
It's intuitive, safer than creating a key pair, faster than creating a compressed file and worked for me!
If you can express all the names of the files you want to copy from the remote system using a single glob pattern, then you can do this in a single scp command. This usage will only support a single destination folder on the local system for all files though. For example:
scp 'RemoteHost:/tmp/[abc]*/*.tar.gz' .
copies all of the files from the remote system which are named (something).tar.gz and which are located in subdirectories of /tmp whose names begin with a, b, or c. The single quotes protect the glob pattern from being interpreted by the shell on the local system.
If you cannot express all the files you want to copy as a single glob pattern and you still want the copy to be done with a single command (and a single SSH connection, which will ask for your password only once), then you can either:
Use a different command than scp, like sftp or rsync, or
Open an SSH master connection to the remote host and run several scp commands as slaves of that master. The slaves will piggyback on the master connection which stays open throughout and won't ask you for a password. Read up on master & slave connections in the ssh manpage.
Create a key pair and copy the public key to the server side:
ssh-keygen -t rsa
Append the content of the file ~/.ssh/id_rsa.pub (the default public key produced by the command above) to the file ~/.ssh/authorized_keys of the user on the server side. You will not need to type a password anymore.
However, be careful! Anybody who can access your local account can then ssh to the server without a password as well.
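The same setup in two commands, using ssh-copy-id where it is available:
# Generate the key pair (press return for an empty passphrase)
ssh-keygen -t rsa

# Install the public key into the server's authorized_keys in one step
ssh-copy-id user@server.example.com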
Alternatively, if you cannot use public key authentication, you may add the following configuration to SSH (either to ~/.ssh/config or as the appropriate command-line arguments):
ControlMaster auto
ControlPath /tmp/ssh_mux_%h_%p_%r
ControlPersist 2m
With this config, the master connection will stay open for 2 minutes after the last session closes, so you'll only need to type the password the first time.
This post has more details on this feature.