I want to do an import:
This is my command:
mysql -u root -p axelen > C:\Users\Netlogiq\Desktop\netlogiq_axelen.sql
After that it asks me for the password, and then nothing happens. What am I doing wrong?
Short question, short answer: you just have to turn your > into <:
mysql -u root -p axelen < C:\Users\Netlogiq\Desktop\netlogiq_axelen.sql
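To verify the import afterwards (a quick check, assuming the dump targets the axelen database), you can list the tables:
mysql -u root -p -e "SHOW TABLES;" axelen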
When running the postgresql alpine image with podman:
podman run --name postgres -p 5432:5432 -e POSTGRES_PASSWORD=test -e POSTGRES_USER=test -d postgres:11-alpine
the result is:
Error: /usr/bin/slirp4netns failed: "open(\"/dev/net/tun\"): No such device\nWARNING: Support for sandboxing is experimental\nchild failed(1)\nWARNING: Support for sandboxing is experimental\n"
The running system is Arch Linux. Is there a way to fix this error, or a workaround?
Thanks
Is slirp4netns correctly installed? Check the project site for information.
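On Arch (as in the question) a quick check could look like this (a sketch; it assumes the package was installed via pacman):
pacman -Qi slirp4netns
slirp4netns --version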
Sometimes the flag order matters. Try -d first and -p last (directly in front of the image), like this:
podman run -d --name postgres -e POSTGRES_PASSWORD=test -e POSTGRES_USER=test -p 5432:5432 postgres:11-alpine
Try only setting the necessary password, then log into your container and create the user and database manually (this always worked for me):
podman run -d --name postgres -e POSTGRES_PASSWORD=test -p 5432:5432 postgres:11-alpine
podman exec -it postgres bash
Switch to the default postgres user:
su - postgres
Start psql:
psql
Create the user and database:
CREATE USER testuser WITH PASSWORD 'testpassword';
CREATE DATABASE testdata WITH OWNER testuser;
Check if it worked:
\l+
Connect to your database via IP and port, for example:
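A minimal sketch of that connection from the host (it assumes psql is installed locally and the port mapping from the run command above):
psql -h 127.0.0.1 -p 5432 -U testuser testdata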
I assume you upgraded Arch packages recently. Most likely your system needs a restart.
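Before rebooting, you can check whether the missing tun device/module really is the culprit (a quick sketch; after an Arch kernel upgrade the modules for the still-running kernel are typically gone, so modprobe fails until you reboot):
ls /dev/net/tun
sudo modprobe tun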
Setup:
Local *nix machine with a SQL script script.sql (Postgres).
Remote machine remote (Debian 7) with Postgres.
I can SSH in as some_user, who is a sudoer.
Anything with Postgres needs to be done as postgres user.
The server only listens on localhost:5432.
How do I execute script.sql on remote without copying it there first?
This works well:
ssh -t some_user@remote 'sudo -u postgres psql -c "COMMANDS FOO BAR"'
The -t flag forces pseudo-terminal allocation, so sudo can prompt for some_user's password on the local terminal.
One thing remains, to be able to pipe script.sql to psql. This does not work:
ssh -t some_user@remote 'sudo -u postgres psql' < script.sql
It fails with the message:
Pseudo-terminal will not be allocated because stdin is not a terminal.
sudo: no tty present and no askpass program specified
Edit: simplified example
Postgres and psql don't seem to figure much in the problem. The following code has the same issues:
ssh some_user@remote xargs sudo ls < input_file
The problem seems to be that we need to send two inputs: the password to sudo via a tty, and the stdin to pass to ls.
Edit: even simpler
ssh localhost xargs sudo ls < input_file
sudo: no tty present and no askpass program specified
Adding -t does not work:
$ ssh -t localhost xargs sudo ls < input_file
Pseudo-terminal will not be allocated because stdin is not a terminal.
sudo: no tty present and no askpass program specified
Adding another -t does not work either:
$ ssh -t -t localhost xargs sudo ls < input_file
<content of input_file>
<waiting on a prompt>
ssh -T some_user@remote "sudo -u postgres psql -f-" < script.sql
"-f-" will read the script from STDIN. Just redirect the file in there, and there you go.
Don't bother with the -t option to ssh; you don't need a full terminal for this.
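A quick smoke test of the same approach (a sketch; it assumes sudo does not prompt for a password on remote, e.g. via NOPASSWD or cached credentials):
echo 'SELECT version();' | ssh -T some_user@remote 'sudo -u postgres psql -f-'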
ssh -T ${user}@${ip} sudo -u postgres DEBIAN_FRONTEND=noninteractive psql -f- < test.sql
Use DEBIAN_FRONTEND=noninteractive (or your distribution's equivalent) to resolve the "no tty present" error.
I have a database named "mig" with 10 tables. Now I want to create the same database on another system, so I am using the mysqldump command, but it shows an error.
I entered the command as follows:
mysqldump -u root -p root mig >file.sql;
This is the error I got:
ERROR 1064 (42000): You have an error in your SQL syntax; check the manual that
corresponds to your MySQL server version for the right syntax to use near 'mysql
dump -u root -p root mig >file.sql' at line 1
I am getting the same error when I use:
mysqldump -u root -proot mig >file.sql;
How can I fix this?
Simply try:
mysqldump -u root mig> file.sql
Edit
mysqldump is not a MySQL command; it is a command-line utility. You must call it from your shell command line. I hope you are not calling this from the MySQL prompt.
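Once the dump file exists, it can be loaded on the other system with something like this (a sketch; it assumes a root user on the target server and that the mig database does not exist there yet):
mysql -u root -p -e "CREATE DATABASE mig;"
mysql -u root -p mig < file.sql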
When providing the password on the command line, you should leave no space after -p.
It should look something like this:
mysqldump -u root -proot mig >file.sql;
You can use some tools like MySQL Workbench or SQLyog to import the dump file.
Free version: https://code.google.com/p/sqlyog/wiki/Downloads
When you execute mysqldump from the command line, the MySQL bin directory must be in your PATH variable, or your command prompt must be pointing to it.
Try using:
mysqldump -u root -proot mig >(abs_path)/file.sql;
This works for me on my local machine. Open a terminal and execute the following command (make sure you are NOT on the MySQL prompt):
mysqldump -uroot -p mig > file.sql
It will ask you to input the password on the next line; for security, the password won't be shown.
If you get Access Denied, the MySQL credentials are wrong (or the user you use doesn't have the right permissions to generate a dump), so make sure you have a valid username and password. I hope it helps.
mysqldump will not run from the MySQL CLI; you will have to run it from the Windows command prompt:
mysqldump -u username -p database_name > output_file_name.sql;
If you get the error 'mysqldump is not recognized as an internal or external command' when running the above, navigate to <MySQL installation directory>/bin/ and then run the command.
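For example, on Windows that might look like this (a sketch; the installation path is an assumption and varies by version):
cd "C:\Program Files\MySQL\MySQL Server 5.7\bin"
mysqldump -u username -p database_name > %USERPROFILE%\output_file_name.sql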
I had the same problem. My situation was that I connect from a client on my local computer to a Google Cloud SQL instance. Since Sahil Mittal said this is a command-line utility, I just ran the same command in the terminal, adding the -h parameter.
mysqldump -h ip.del.host -u root -p database_name > database_desired_name.sql
I'm trying to back up my database with mysqldump and cron jobs.
Well, I added the following command to the crontab of user root:
*/30 * * * * mysqldump -u root -pVERYSECUREPASSWORD --all-databases > /var/www/cloud/dump_komplett.sql &> /dev/null
This works fine so far, but the problem is that the password is set in this command.
So I want to include a .database.cnf file that looks like this:
[mysqldump]
user=root
password=VERYSECUREPASSWORD
and changed the mysqldump command to
mysqldump --defaults-extra-file="/var/crons/mysql/.database.cnf" --all-databases -u root > /var/www/cloud/dump_komplett.sql
to solve this problem.
But this command fails with the error:
mysqldump: Got error: 1045: Access denied for user 'root'@'localhost' (using password: YES) when trying to connect
I don't know what's wrong.
Here are some commands I also tried:
mysqldump --defaults-extra-file="/var/crons/mysql/.database.cnf" --all-databases > /var/www/cloud/dump_komplett.sql
mysqldump --defaults-file="/var/crons/mysql/.database.cnf" --all-databases > /var/www/cloud/dump_komplett.sql
mysqldump --defaults-file="/var/crons/mysql/.database.cnf" --all-databases -u root > /var/www/cloud/dump_komplett.sql
and .database.cnf contents I also tried:
[client]
user=root
password=VERYSECUREPASSWORD
[mysqldump]
host=localhost
user=root
password=VERYSECUREPASSWORD
[client]
host=localhost
user=root
password=VERYSECUREPASSWORD
I found out that the password should be between quotes:
[client]
user=root
password="VERYSECUREPASSWORD"
It took me a while to figure out why it didn't work with passwords containing lots of non-alphanumeric symbols.
The user has to be specified in the command with the -u parameter, not in the file.
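Putting it together, a sketch using the paths from the question (note that --defaults-extra-file must be the first option on the command line, and restricting the file to root keeps the password private):
chmod 600 /var/crons/mysql/.database.cnf
*/30 * * * * mysqldump --defaults-extra-file=/var/crons/mysql/.database.cnf -u root --all-databases > /var/www/cloud/dump_komplett.sql 2>/dev/null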
For more details on scheduling cron jobs using mysqldump, check this answer
I'm looking to be able to run a single query on a remote server in a scripted task.
For example, intuitively, I would imagine it would go something like:
mysql -uroot -p -hslavedb.mydomain.com mydb_production "select * from users;"
mysql -u <user> -p -e 'select * from schema.table'
(Note the use of single quotes rather than double quotes, to avoid the shell expanding the * into filenames)
mysql -uroot -p -hslavedb.mydomain.com mydb_production -e "select * from users;"
From the usage printout:
-e, --execute=name
Execute command and quit. (Disables --force and history file)
Here's how you can do it with a cool shell trick:
mysql -uroot -p -hslavedb.mydomain.com mydb_production <<< 'select * from users'
'<<<' instructs the shell to take whatever follows it as stdin, similar to piping from echo.
Use the -t flag to enable table-format output.
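For example, combining both (reusing the host and database from the question):
mysql -t -uroot -p -hslavedb.mydomain.com mydb_production <<< 'select * from users'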
If it's a query you run often, you can store it in a file. Then any time you want to run it:
mysql < thefile
(with all the login and database flags of course)
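For example, with the flags from the question filled in (thefile holds the query):
mysql -uroot -p -hslavedb.mydomain.com mydb_production < thefile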
echo "select * from users;" | mysql -uroot -p -hslavedb.mydomain.com mydb_production
Since containerization wasn't that popular at the time of the question, this is how you pass a single query to a dockerized database cluster with Ansible, following @RC.'s answer:
ansible <host | group > -m shell -a "docker exec -it <container_name | container_id> mysql -u<your_user> -p<your_pass> <your_database> -e 'SELECT COUNT(*) FROM my_table;'"
If not using Ansible, just log in to the server and use the docker exec -it ... part.
MySQL will issue a warning that passing credentials in plain text may be insecure, so be aware of your risks.
From the mysql man page:
You can execute SQL statements in a script file (batch file) like this:
shell> mysql db_name < script.sql > output.tab
Put the query in script.sql and run it.
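As a concrete sketch (the file name and query are just examples): if script.sql contains select * from users; then
mysql -uroot -p -hslavedb.mydomain.com mydb_production < script.sql > output.tab
writes the result as tab-separated values to output.tab.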