I need to use the root user to run scripts from crontab, for example to read and write in all the /home folders.
But one of the things the shell script also needs to do is run psql. The problem:
my own user (the one whoami reports, which is not root) can run, for example, psql -c "\l",
but the same command fails (!) for root... and the error makes no sense to me: "psql: error: could not connect to server: FATAL: database "root" does not exist".
How can I enable root to run psql?
PS: I'm looking for something like "GRANT ALL PRIVILEGES ON ALL DATABASES TO root".
root is allowed to run psql, but nobody can connect to a database that doesn't exist.
The default value for the database user name with psql is the operating system user name, and the default for the database is the same as the database user name.
So you have to specify the correct database and database user explicitly:
psql -U postgres -d postgres -l
The next thing you are going to complain about is that peer authentication was denied.
To avoid that, either run as operating system user postgres or change the rules in pg_hba.conf.
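For example, from a root crontab the easiest route is usually to switch to the postgres operating system user just for that call (a minimal sketch, assuming the stock postgres superuser and database):

sudo -u postgres psql -d postgres -c "\l"

Alternatively, a pg_hba.conf rule can change local connections for postgres from peer to password authentication (an assumption about the policy you want; reload the server after editing):

local   all   postgres   md5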
I previously asked how to make a backup of a Firebird database in:
I need to backup or clone one remote firebird database or export it to Sql server
Now the backup is complete, but when I try to restore it to Firebird on my computer, I get an error.
I use this command:
gbak -r -p 4096 -o e:\mybackup.fbk localhost:e:\bddados.fdb -user sysdba -pas masterkey
The error I receive is
gbak: ERROR:Your user name and password are not defined. Ask your database administrator to set up a Firebird login. gbak:Exiting before completion due to errors
But I tested my Firebird locally with this user and password and it's OK. Does the backup need a password specified in the command that generates it, or do I need to use the same one as the old database?
The -user and -pas(sword) parameters should come before the file paths:
gbak -r -p 4096 -o -user sysdba -pas masterkey e:\mybackup.fbk localhost:e:\bddados.fdb
gbak documentation
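As an alternative to putting credentials on the command line at all, Firebird's tools also read the ISC_USER and ISC_PASSWORD environment variables (a sketch for a Windows shell, matching the e:\ paths in the question):

set ISC_USER=sysdba
set ISC_PASSWORD=masterkey
gbak -r -p 4096 -o e:\mybackup.fbk localhost:e:\bddados.fdb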
I don't have a lot of experience with Postgres, so I'm having a bit of trouble accessing a database I just rescued from a broken Ubuntu server.
What I'm trying to do: the server that was running Postgres is dead now; I can only access it via the "rescue mode" provided by the hosting company. I tried chroot in order to dump the database using pg_dump or pg_dumpall, but the server seems to be unreachable that way. The dump attempts to route itself via rescue.ovh.net (OVH being the hosting provider), even if I specify -h localhost.
So I came up with a different idea: copy the whole Postgres folder to a local machine, dump the database there, and then restore everything from that dump. This is something you can do in MySQL, so I thought it might be possible with Postgres as well.
But so far it is not working. I copied the /var/lib/postgres folder to my local machine (taking care that the owner on my local machine is also the postgres user), but when I try to dump a database, I can't really do it.
The errors vary with the command used:
My first attempt is to dump the database using the database user:
$ pg_dump -U discourse -h localhost > discourse_prod.sql
Password:
pg_dump: [archiver (db)] connection to database "discourse" failed: fe_sendauth: no password supplied
The prompt asks for a password; the user did not have one, but if I just press Enter it fails and says no password was supplied.
My second attempt was dumping via the postgres admin user:
sudo pg_dump -U postgres discourse_prod > ~/test.sql
I get pg_dump: [archiver (db)] connection to database "discourse_prod" failed: FATAL: Peer authentication failed for user "postgres". So I try to switch users before dumping...
sudo -u postgres pg_dump -Fp discourse_prod > dump.sql
and now it seems that the database was not properly copied: pg_dump: [archiver (db)] connection to database "discourse_prod" failed: FATAL: database "discourse_prod" does not exist
As I said, I don't have much experience with Postgres, and I'm running out of ideas on how to get a dump out of these files; I don't mind whether I get it from the devastated machine or from my locally copied files.
Any ideas?
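For what it's worth, the last error usually doesn't mean the copy is broken: pg_dump can only see databases served by the running Postgres instance, and a locally installed server starts against its own data directory, not the copied one. A minimal sketch of serving the copied cluster on a spare port before dumping (the 9.x version path is a placeholder and must match the version that wrote the files):

# serve the copied data directory on port 5433 (runs in the foreground)
sudo -u postgres /usr/lib/postgresql/9.x/bin/postgres -D /var/lib/postgres/9.x/main -p 5433
# then, from a second shell, dump through that port
sudo -u postgres pg_dump -p 5433 -Fp discourse_prod > dump.sql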
I am in the process of migrating my MySQL installation to Amazon RDS and they run MySQL Server version 5.6.12.
I got the client tools of version 5.6.13 and am trying to use mysqldump for automated backups.
I always get prompted to enter a password, which blocks my scripted backups.
It looks like this:
ubuntu@ip-10-48-203-112:~$ mysqldump --user=dbadmin -pmysecretpassword -h someserver.eu-west-1.rds.amazonaws.com -p skygd > dump.sql
Warning: Using a password on the command line interface can be insecure.
Enter password:
I have tried with a configuration file .my.cnf
[client]
user=dbadmin
password=mysecretpassword
And it is picked up OK; if I run mysqldump --print-defaults, I get:
mysqldump would have been started with the following arguments: --port=3306 --socket=/var/run/mysqld/mysqld.sock --quick --quote-names --max_allowed_packet=16M --user=dbadmin --password=mysecretpassword
But I still get the same password prompt.
Is there a bug in 5.6.13 that prevents automated login with a password?
mysqldump --user=dbadmin --password=mysecretpassword -h someserver.eu-west-1.rds.amazonaws.com skygd > dump.sql
You typed an extra -p at the end of the line; with no value attached to it, that makes the client prompt for a password, overriding the one you already supplied.
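For scripted backups, an options file also keeps the password off the command line and out of the process list; a sketch, where the file path is an assumption and --defaults-extra-file must be the first option:

mysqldump --defaults-extra-file=/home/ubuntu/.my-backup.cnf -h someserver.eu-west-1.rds.amazonaws.com skygd > dump.sql

with /home/ubuntu/.my-backup.cnf holding the same [client] user and password entries as the .my.cnf in the question.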
Another option is to let the client prompt for the password interactively, by passing -p on its own (note the space between -p and the database name):
mysqldump -uUsername -p Databasename -h Hostname > xyz.sql
And for the database import use
mysql -uUsername -p Databasename -h Hostname < xyz.sql
I have a connection between my localhost and a remote server using a PuTTY SSH tunnel.
That's fine.
Now I need a command that takes the SQL file on my local machine, i.e. c:\folder\test.sql, and imports it into MySQL on the remote server.
I thought maybe...
mysql -u prefix_username -p testpass -h localhost -P 3307 prefix_testdb
then do a command like
mysql -p testpass -u prefix_username prefix_testdb < c:\folder\test.sql
this command did not work.
How can I achieve this?
You should run this command
mysql -h host -u user_name -pPassword database < file.sql > output.log
file.sql contains the SQL queries to run, and output.log makes sense only when you have a query that returns something (like a SELECT).
The only difference I can see in your command is the blank space between the -p option and the password. If you use the -p option, the password must follow it immediately, with no blank space. Alternatively, you can use the option --password=Password.
I hope this helps you solve the problem.
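Applied to the question's own values, and going through the PuTTY tunnel on local port 3307 (so -h 127.0.0.1 forces a TCP connection instead of the local socket), that would be something like:

mysql -h 127.0.0.1 -P 3307 -u prefix_username -ptestpass prefix_testdb < c:\folder\test.sql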
You will need to ssh to the remote machine with the mysql command appended:
ssh remote_user@remote_server mysql -ptestpass -u username testdb < c:\folder\test.sql
1. mysql -h xxx -uxxx -pxxx //log in to the remote MySQL server
2. use DATABASE; //choose which database to import into
3. source path/to/file.sql //the path can be your local SQL file path, since the client reads it
Reference: Import SQL file into mysql
Use scp to copy the file and mysql to import it on your local machine.
Syntax:
scp remote_user@remote_server:/path/to/sql/file.sql ~/path/to/local/directory
After you have transferred the file, use:
mysql -uYourUserName -p name_of_database_to_import_to < ~/path/to/local/directory/file.sql
mysql {mydbname} --host {server}.mysql.database.azure.com --user {login} --password={password} < ./{localdbbackupfile}.sql
As managed services, DevOps, and CI/CD workflows have become more popular, most providers of those managed services want to remove the human-error part of getting the connection strings right. If you happen to be using Azure, AWS, GCP, etc., there is usually a page or terminal command that shows you these strings to help you integrate easily. Don't forget to check their docs if you're using something like that. The strings are auto-generated, so they are most likely 'best practice', with spot-on correct syntax for the DB version you may be using.
The above command is from the "connection strings" section on the product details page of my Azure managed MySQL DB server instance.
Not necessarily what was asked, but as an FYI: a lot of those services auto-generate templates for many common connection scenarios:
{
"connectionStrings": {
"ado.net": "Server={server}.mysql.database.azure.com; Port=3306; Database=mytestdb; Uid={login}; Pwd={password};",
"jdbc": "jdbc:mysql://{server}.mysql.database.azure.com:3306/mytestdb?user={login}&password={password}",
"jdbc Spring": "spring.datasource.url=jdbc:mysql://{server}.mysql.database.azure.com:3306/mytestdb spring.datasource.username={login} spring.datasource.password={password}",
"mysql_cmd": "mysql mytestdb --host {server}.mysql.database.azure.com --user {login} --password={password}",
"node.js": "var conn = mysql.createConnection({host: '{server}.mysql.database.azure.com', user: '{login}', password: {password}, database: mytestdb, port: 3306});",
"php": "$con=mysqli_init(); [mysqli_ssl_set($con, NULL, NULL, {ca-cert filename}, NULL, NULL);] mysqli_real_connect($con, '{server}.mysql.database.azure.com', '{login}', '{password}', 'mytestdb', 3306);",
"python": "cnx = mysql.connector.connect(user='{login}', password='{password}', host='{server}.mysql.database.azure.com', port=3306, database='mytestdb')",
"ruby": "client = Mysql2::Client.new(username: '{login}', password: '{password}', database: 'mytestdb', host: '{server}.mysql.database.azure.com', port: 3306)"
}
}
You can use pscp to upload the file to the server. Go to your command line and type this:
pscp.exe c:\folder\test.sql usernameoftheserver@websitename.com:/serverpath
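After the upload, the import still has to run on the server; a sketch over ssh, reusing the question's placeholder names (the -t flag allocates a terminal so mysql can prompt for the password):

ssh -t usernameoftheserver@websitename.com "mysql -u prefix_username -p prefix_testdb < /serverpath/test.sql"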