How to import an SQL dump into a Postgres DB running on a Vagrant instance of Ubuntu

I am using Postgres via a Vagrant instance running the ubuntu-xenial-16.04-cloudimg box, and I have an SQL dump from another developer.
By the way, I tried using pgAdmin 4 from my Windows 10 host machine after connecting to the Postgres server on the VirtualBox (Ubuntu) guest, but it takes forever and never finishes.
How can I import the dump into the Postgres instance running on the VirtualBox guest?

Given an SQL dump file named dump.sql:
Run vagrant ssh from an SSH client such as Git Bash (on Windows) to log into the guest.
Put the dump file in the directory containing the Vagrantfile on the host machine; that directory syncs with the guest by default (or run vagrant rsync, just to make sure).
On the guest, navigate to the synced vagrant directory (e.g. cd /vagrant), which corresponds to the Vagrantfile directory on the Windows host.
Run psql -h hostname -U test -d databasename -f dump.sql.
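For example, a minimal end-to-end sketch, assuming the default /vagrant synced folder and the same test user and databasename placeholders as above (adjust the names to your setup):
# on the host, in the folder containing the Vagrantfile
vagrant up && vagrant ssh
# on the guest
createdb -U test databasename    # only if the database does not exist yet
psql -U test -d databasename -f /vagrant/dump.sql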

Depending on the format of the dump (plain or custom), you can use psql or pg_restore.
Check the --format option in the pg_dump documentation.
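For instance, if the dump turns out to be a custom-format archive (created with pg_dump -Fc), psql cannot replay it; a pg_restore call roughly like the following would be needed instead (user and database names are the same placeholders as above):
pg_restore -U test -d databasename --no-owner dump.sql
A plain-text dump, on the other hand, is just SQL and goes through psql -f as in the answer above.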

Following the simple steps below solved my problem:
After vagrant up, run vagrant ssh to log into the guest OS.
Type psql to open a Postgres prompt.
Run create database your_db_name; to create the empty DB, then quit psql (\q).
Make sure the dump SQL file is in the folder containing the Vagrantfile (synced to /vagrant on the guest) or a subfolder within it.
Run this command to import the dump file into the newly created DB:
psql your_db_name -f /path/to/the/dump.sql
I hope these steps help you too.


How to copy file from server to local using ssh and sudo su?

Somewhat related to: Copying files from server to local computer using SSH
When debugging on the DEV server I can view the logs with
# Bash for Windows
ssh username@ip
# On server as username
sudo su
# On server as su
cat path/to/log.file
The problem is that while every line of the file is indeed printed out, the CLI seems to have a height limit, and I can only see the last "so many" lines after the printing is done.
If there is a better solution, please suggest it; otherwise, how do I copy log.file to my computer?
Note: I don't have a password for my username, because the user is created with echo "$USER ALL=(ALL:ALL) NOPASSWD: ALL" | tee /etc/sudoers.d/$USER.
After sudo su copy the file to the /tmp folder on the server with
cp path/to/log.file /tmp/log.file
After that the standard command should work
scp username@ip:/tmp/log.file log.file
log.file is now in the current directory (echo $PWD).
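Since the user has passwordless sudo (see the NOPASSWD note above), a shorter sketch is to skip the intermediate copy and stream the file over SSH in one command; the path below is a placeholder:
ssh username@ip "sudo cat /path/to/log.file" > log.file
# or just page through it on the server instead of printing everything
ssh -t username@ip "sudo less /path/to/log.file"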

How to access a folder via SMB protocol from ASP.NET Core [duplicate]

I am trying to set up a script that will:
Connect to a Windows share
Using LOAD DATA LOCAL INFILE, upload the two files into their appropriate DB tables
Unmount the share
Situation:
I can currently vpnc into this remote machine
Problem:
I cannot
mount -t cifs //ip.address/share /mnt/point -o username=u,password=p,port=445
mount error(110) Connection timed out
I am attempting to do this manually first
Remote server is open to port 445
Questions:
Do I even need to vpnc in first?
Do I need to do route add for the remote ip/mask/gw after vpnc?
Thank you!
The mount.cifs file is provided by the samba-client package. This can be installed from the standard CentOS yum repository by running the following command:
yum install samba samba-client cifs-utils
Once installed, you can mount a Windows SMB share on your CentOS server by running the following command:
Syntax:
mount.cifs //SERVER_ADDRESS/SHARE_NAME MOUNT_POINT -o user=USERNAME
SERVER_ADDRESS: Windows system’s IP address or hostname
SHARE_NAME: The name of the shared folder configured on the Windows system
USERNAME: Windows user that has access to this share
MOUNT_POINT: The local mount point on your CentOS server
I am mounting to a share from \\10.11.10.26\snaps
Make a directory under /mnt to use as the mount point:
mkdir /mnt/mymount
Now I am mounting the snaps folder from indiafps02; the user name is the domain credential (i.e. a Mydomain account in this case):
mount.cifs //10.11.10.26/snaps /mnt/mymount -o user=Girish.KG
Now you can see the contents by typing
ls /mnt/mymount
So, after performing your task, just run the umount command:
umount /mnt/mymount
That's it. You are done.
There is no need to install "samba" and "samba-client", only "cifs-utils", using the command
yum install cifs-utils
After that, on the Windows side, share the folder you would like to mount in CentOS, if you haven't already done so ("c:\inetpub\wwwroot" in my case).
Make sure you share it with a specific user whose password you know ("netops" in my case).
Create a directory in CentOS in which to mount the Windows share ("/mnt/cm" in my case).
After that, run this simple command as root:
mount.cifs //10.16.0.160/wwwroot /mnt/cm/ -o user=netops
CentOS will prompt you for the Windows user's password.
You are done.
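If you would rather not put the password on the command line (it ends up in the shell history), a common variation is a root-only credentials file; the file name and values below are placeholders:
# /root/.smbcredentials  (protect it with chmod 600)
username=netops
password=YourPasswordHere
# then mount with
mount.cifs //10.16.0.160/wwwroot /mnt/cm/ -o credentials=/root/.smbcredentials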

How can I import a SQL Server RDS backup into a SQL Server Linux Docker instance?

I've followed the directions from the AWS documentation on importing / exporting a database from RDS using their stored procedures.
The command was similar to:
exec msdb.dbo.rds_backup_database
@source_db_name='MyDatabase',
@s3_arn_to_backup_to='my-bucket/myBackup.bak'
This part works fine, and I've done it plenty of times in the past.
However, what I want to achieve now is restoring this database to a local SQL Server instance, and I'm struggling at this point. I'm assuming this isn't a "normal" SQL Server dump, but I'm unsure what the difference is.
I've spun up a new SQL Server for Linux Docker instance, which seems all set. I have made a few changes so that the sqlcmd tool is installed, so technically the image I'm running is built from this Dockerfile; not much different.
FROM microsoft/mssql-server-linux:2017-latest
RUN apt-get update && \
apt-get install -y curl && \
curl https://packages.microsoft.com/keys/microsoft.asc | apt-key add - && \
apt-get update && \
apt-get install -y mssql-tools unixodbc-dev
This image works fine; I'm building it via docker build -t sql . and running it via docker run -e 'ACCEPT_EULA=Y' -e 'SA_PASSWORD=myPassword1!' -p 1433:1433 -v $(pwd):/backups sql
Within my local folder, I have my backup from RDS downloaded, so this file is now in /backups/myBackup.bak
I now try to run sqlcmd to import the data with the following command, and I'm running into an issue that makes me assume this isn't a traditional SQL dump. I'm unsure what a traditional SQL dump looks like, but the majority of the file looks garbled, full of ^@^@^@^@ and of course other things.
/opt/mssql-tools/bin/sqlcmd -S localhost -i /backups/myBackup.bak -U sa -P myPassword1! -x
And finally; I get this error:
Sqlcmd: Error: Syntax error at line 56048 near command 'GO' in file '/backups/myBackup.bak'.
Final Answer
My final solution mainly came from using -Q and running a RESTORE query rather than importing the file with -i, but I also needed to include some MOVE options because the files in the backup were pointing at Windows file paths.
/opt/mssql-tools/bin/sqlcmd -U SA -P myPassword -Q "RESTORE DATABASE MyDatabase FROM DISK = N'/path/to/my/file.bak' WITH MOVE 'mydatabase' TO '/var/opt/mssql/mydatabase.mdf', MOVE 'mydatabase_log' TO '/var/opt/mssql/mydatabase.ldf', REPLACE"
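If you are unsure what logical names to put in the MOVE clauses, RESTORE FILELISTONLY (standard T-SQL, run here through the same sqlcmd tooling and backup path as above) lists them; the LogicalName column is what each MOVE refers to:
/opt/mssql-tools/bin/sqlcmd -S localhost -U SA -P myPassword -Q "RESTORE FILELISTONLY FROM DISK = N'/backups/myBackup.bak'"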
You should use the RESTORE DATABASE command to interact with your backup file instead of specifying it as an input file of commands to the database:
/opt/mssql-tools/bin/sqlcmd -S localhost -U sa -P myPassword1! -Q "RESTORE DATABASE MyDatabase FROM DISK='/backups/myBackup.bak'"
According to the sqlcmd Docs, the -i flag you used specifies:
The file that contains a batch of SQL statements or stored procedures.
That flag likely won't work properly if given a database backup file as an argument.

Run interactive local script on remote machine using docker-machine ssh

I have a local interactive (ruby) script, script.rb. I have a docker-machine host, aws01. (The script pulls large files from point A, does some simple processing, and uploads them to S3.)
Unfortunately, this incantation doesn't seem to do it:
docker-machine ssh aws02 -t ruby < script.rb
It runs the script, but not interactively :/
Any ideas how to do this in a single command?
(You could copy the script over and run it, you could grab the docker-machine's info and plug it into SSH with the -t flag... but I don't know how to do that in a single command)
You are putting the script itself on the standard input of the remote command (< redirection) so there is no other channel left for you to interact with the script.
In short, it is not possible with a single command. I would go with two:
docker-machine ssh aws02 "cat > script.rb" < script.rb
docker-machine ssh aws02 -t "ruby script.rb"
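If your docker-machine build includes the scp subcommand, a roughly equivalent two-step sketch is to copy the script with it and then run the script with a TTY:
docker-machine scp script.rb aws02:script.rb
docker-machine ssh aws02 -t "ruby script.rb"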

scp files from local to remote machine error: no such file or directory

I want to be able to transfer a directory and all its files from my local machine to my remote one. I don't use SCP much, so I am a bit confused.
I am connected to my remote machine via ssh and I typed in the command
scp name@127.0.0.1:local/machine/path/to/directory filename
The local/machine/path/to/directory is the value I got from running pwd in the desired directory on my local host.
I am currently getting the error
No such file or directory
Looks like you are trying to copy to a local machine with that command.
An example scp looks more like the command below:
Copy the file "foobar.txt" from the local host to a remote host
$ scp foobar.txt your_username@remotehost.edu:/some/remote/directory
scp "the_file" your_username@the_remote_host:the/path/to/the/directory
to send a directory:
Copy the directory "foo" from the local host to a remote host's directory "bar"
$ scp -r foo your_username@remotehost.edu:/some/remote/directory/bar
scp -r "the_directory_to_copy" your_username@the_remote_host:the/path/to/the/directory/to/copy/to
and to copy from remote host to local:
Copy the file "foobar.txt" from a remote host to the local host
$ scp your_username@remotehost.edu:foobar.txt /your/local/directory
scp your_username@the_remote_host:the_file /your/local/directory
and to include a port number:
Copy the file "foobar.txt" from a remote host with port 8080 to the local host
$ scp -P 8080 your_username@remotehost.edu:foobar.txt /your/local/directory
scp -P port_number your_username@the_remote_host:the_file /your/local/directory
From a Windows machine to a Linux machine using PuTTY:
pscp -r <directory_to_copy> username@remotehost:/path/to/directory/on/remote/host
I had a similar problem. I tried to copy from a server to my desktop and kept getting the same message about the local path. The problem was that I was already logged in to the server via SSH, so it was looking for the local path on the server.
Solution: I had to log out and run the command again, and it worked.
In my case I had to specify the port number:
scp -P 2222 username@hostip:/directory/ /localdirectory/
Your problem can be caused by different things. I will describe three possible scenarios on Linux:
The file location
When you use scp name, you are saying that the file name is in your home directory. When it is in your home directory but inside another folder, for example my_folder, you should write:
scp /home/my-username/my_folder/name my-username@127.0.0.1:/Path....
Your file permissions
You must know what permissions your file has. If it is read-only for you, you should change that.
To change the permissions: as root, open caja (the default file manager for the MATE Desktop) or another file manager, right-click the file name, select Properties, then Permissions, and change Group and Others to Read and write.
Or use chmod, as sketched below.
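For example, a one-line chmod equivalent of the GUI step above (read and write for group and others; the path is the placeholder from the earlier scp example):
chmod go+rw /home/my-username/my_folder/name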
Your port number
Maybe your remote machine or server can only communicate on a specific port, so you should pass -P and the port number:
scp -P 22 /home/my-username/my_folder/name my-username@127.0.0.1:/var/www/html
You also need to check what is in the .bashrc file of the user.
I also got this ridiculous error because I had put cd and ls commands in there, as it was meant to show the current files and directories when the user logs in over SSH.
The filename should go at the end of the path to the directory; that is, it should be the full path to the file. You are doing this from a command line, and that command line has a working directory (on your local machine); this is the directory your file will be downloaded to. The final argument in your command is only what you want the name of the file to be. So, first, change directory to where you want the file to land. I'm doing this from Git Bash on a Windows machine, so it looks like this:
cd C:\Users\myUserName\Downloads
Now that I have my working directory where I want the file to go:
scp -i 'c:\Users\myUserName\.ssh\AWSkeyfile.pem' ec2-user@xx.xxx.xxx.xxx:/home/ec2-user/IwantThisFile.tar IgotThisFile.tar
Or, in your case:
cd /local/path/where/you/want/the/file/to/land
scp name@127.0.0.1:/local/machine/path/to/directory/filename filename
Be sure the folder from which you send the file does not contain a space!
I was trying to send a file to a remote server from my Windows machine in the VS Code terminal, and I got this error even though the file was there.
It was because the name of the folder containing the file had a space in it...
If you want to copy everything in a folder and need a non-default port, use this one.
It works for me on Ubuntu 18.04 with a local machine running Mac OS X.
-r for recursive
-P for the port
scp -rP 1234 /Your_Directory/Source_Folder/ username@yourdomain.com:/target/folder
As @Astariul said, the path to the file might cause this bug.
In addition, any parent directory that contains a non-ASCII character, for example Chinese, will cause this.
In that case, you should rename the parent directory.
This happened to me and I solved it.
This problem can occur because the file you are trying to get does not exist (a typo in the file or folder name?) or because it is invisible to the user you are using in scp.
The problem in my case was that the files I wanted to get from the remote machine were created by another user (root in my case), so those files were invisible to my user.
To fix it, I did:
ssh myuser@myserver
chown myuser:myuser myfile
exit
scp myuser@myserver:/home/myuser/myfile /localfolder/myfile
For me on my Mac, I just had to run the command from my Mac terminal:
scp -r root@ip_address:/root/source /Users/path/Desktop/others/destination