Importing sample database into phpMyAdmin - sql

I am trying to import the sample database "employees.sql" from the official phpMyAdmin webpage. I am using the UwAmp server and get the following error when using phpMyAdmin's "Import" option:
Unrecognized statement type. (near "source" at position 0)
.SQL FILE AT LINE WHERE ERROR IS REPORTED:
SELECT 'LOADING departments' as 'INFO';
source load_departments.dump ;
I am not sure what to change to successfully import the database. I also tried different things like putting load_departments.dump in quotes, but it still didn't work.

How do you use MySQL's source command to import large files in Windows
is a must-read and will give you plenty of ideas!
I think you should run the source command from cmd (the command prompt) rather than from phpMyAdmin.
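As a minimal sketch of that approach (paths are illustrative), open cmd, change into the unzipped directory so the relative source references in employees.sql resolve, then start the client and source the file:
cd C:\path\to\test_db
mysql -u root -p
mysql> source employees.sql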

I suggest you create an empty database and import that SQL file into it. Check it out.
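For example (the database name is illustrative), from the command line that suggestion might look like this, again run from the unzipped directory so the relative dump files are found:
mysql -u root -p -e "CREATE DATABASE employees_import;"
mysql -u root -p employees_import < employees.sql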

Assumption: MySQL Server is installed and you have downloaded the employees database from GitHub. Unzip the package and change to that directory from the command prompt.
Enter the following command and provide the SQL password when prompted.
mysql -u root -p -t < employees.sql
Verify your installation by entering the following command.
mysql -u root -p -t < test_employees_md5.sql

Related

How do I import a SQL data file (the contents of the file are INSERT queries) into SQL Server?

I have a .sql file containing INSERT queries for all the tables of my database.
The file is 12 GB in size. I tried to open it with Notepad++, SQL Server Management Studio, and also with the Chrome browser, but the file is too long and I can't open it.
How can I import it into my database directly, without opening the file and executing the queries by hand?
What is the proper way to do this?
You need to use sqlcmd. The documentation has a full explanation of how to use it, but here is a quick sample: install the sqlcmd utility first, then open your cmd command prompt and type a command like this:
sqlcmd -S DBSERVER\TESTINSTANCE -d DATABASENAME -U USERNAME -P PASSWORD -i "D:/InsertData.sql"

Unable to run .sql file in SQL Server

I have a 20 GB .sql dump file and I am trying to run it in MySQL Workbench using Run Script; after it executes successfully, I'll migrate the data from MySQL to SQL Server using SSMA. I have migrated data this way many times successfully, but for a 20 GB file it is very time-consuming. Please let me know if there is a faster alternative. I have followed this link:
Steps to migrate mysql tables to sql server using SSMA!
From your title "unable to run .sql file in SSMS" and "I have a .sql dump file 20 gb": are you trying to open a 20 GB .sql file in SSMS? That's never going to work. SSMS is a 32-bit application, so the maximum addressable memory is 2 GB. If you want to run your .sql file, I suggest using sqlcmd.
Open up PowerShell, then run the command below, replacing the appropriate parts:
sqlcmd -S {Server Name/ServerIP} -U {Your Login} -i {Your full path to your script}
You'll be prompted for your password and then the file will be run. So, as an example, you might run:
sqlcmd -S svSQL2017 -U Larnu -i \\svFileServer\SQLShare\Scripts\BigBatchFile.sql
If you are using integrated security, then don't pass the -U parameter for the command.
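For instance, reusing the example above with integrated (Windows) authentication, a run might look like this:
sqlcmd -S svSQL2017 -E -i \\svFileServer\SQLShare\Scripts\BigBatchFile.sql
The -E switch explicitly requests a trusted connection; simply omitting -U and -P has the same effect.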
Edit: This answer is not relevant to the OP's question, as they were using "SSMS" as a synonym for SQL Server, which it is not. I have left it here for the moment so the OP can review my comments, and I will likely remove this answer at a later point.

Teradata client on Unix Solaris

I deploy some .bteq and .sql scripts on a Teradata database. To do this, I use a client on my desktop called BTEQWin, version 13.10.0.03.
I get the .bteq/.sql files from a version control tool like PVCS/SVN, and once the files are in my workspace folder, all I do is drag and drop them from Windows Explorer into the BTEQWin client (which I connect to a database before the drag/drop, in order to run those scripts).
Now, I have to automate this whole process in UNIX.
I have written a SHELL KSH/BASH script which is getting all the .bteq/.sql from a TAG/LABEL in the version control tool to a given UNIX folder. Now, all I need to do is the pass these files one by one (i'll take care of the order) to Teradata client.
My question:
- What client do I need to ask the Unix admin team to install on the Unix server, so that I can run something like the following:
someTeraDataCommand -u username -p password -h hostname -d database -f filenametoexectue | tee output_filename.log
Here, someTeraDataCommand is the client / executable that will let me run Teradata scripts (like I was doing with BTEQWin on my desktop in a GUI session). The other parameters would be the username, password, which database to connect to on which server, and which file to run (or the file could be passed to the command using the "<" operator on the command line).
Any idea?
- What client?
Assuming the complete Teradata Tools and Utilities package is installed on your UNIX server (which will have the connectivity tools to talk to Teradata), you should have access to bteq from the command line. Something like this:
bteq < script_file > output_file
Your script file should contain a .LOGON statement to establish the connection:
.LOGON yourTDPID/your_account,your_pw
You might also need to use other commands to set your default database or non-default session values.
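For example (the names are illustrative), the start of such a script might look like:
.LOGON yourTDPID/your_account,your_pw
DATABASE your_default_db;
.SET WIDTH 200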
Another option would be to combine the SQL and call to BTEQ in a Korn shell script:
#!/usr/bin/ksh
##############
SHELL_NAME=`basename $0`
PRG_NAME=`basename ${SHELL_NAME} .ksh`
LOG_FILE=${PRG_NAME}.log
OUT_FILE=${PRG_NAME}.out
#
bteq <<EOBTQ > ${LOG_FILE} 2>&1
.LOGON {TDPID}/{USERID},{PWD};
--.RUN file=${LOGON}
/* Add your SQL/BTEQ commands here */
.QUIT 0;
EOBTQ
Edit
The double hyphen indicates a single-line comment. Typically in a UNIX environment you do not leave your password in plain text inside a KSH script. Instead, the .RUN command would reference a text file, kept in a suitably secured location, containing the .LOGON {TDPID}/{USERID},{PWD}; command.
The .RUN command in BTEQ allows you to reference another text file containing a series of valid BTEQ commands that you want to run in the current BTEQ script.
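A small sketch of that pattern (the path is illustrative): keep a file such as /home/youruser/.tdlogon, readable only by you (chmod 600), containing just
.LOGON yourTDPID/your_account,your_pw;
and then replace the inline .LOGON in the script above with
.RUN FILE=/home/youruser/.tdlogon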
The easiest way to set up the Solaris TTU is to request root sudo and run an interactive installation with the defaults as root. That would cure all client issues.

PostgreSQL - inconsistent COPY permissions errors

I am using the EnterpriseDB pgAdmin III (v. 1.12.1) on a Windows 7, 32-bit machine to work with PostgreSQL databases on a remote Linux server. I am logged in as the user postgres, which allows me to access the $PGDATA directory (in this instance, it is found here: /var/lib/pgsql/data/)
If I log into the server via a terminal, run psql, and use the \copy command to import data from csv files into newly created tables, I have no problems.
If I'm in pgAdmin, however, I use the COPY command to import data from csv files into newly created tables.
COPY table_name FROM '/var/lib/pgsql/data/file.csv'
WITH DELIMITER AS ',' csv header
Sometimes this works fine, other times I get a permissions error:
ERROR: could not open file "/var/lib/pgsql/data/file.csv" for reading: Permission denied
SQL state: 42501
It is the inconsistency of the error that is confusing to me. When the error arises, I change the file permission to anywhere from 644 - 777, with no effect. I also try moving the file to other folders, e.g., var/tmp/, also with no effect.
Any ideas?
The problem is the access permissions through the directories leading to the file. The postgres user does not have access to your home folder, for example. The answer is to use a folder all users have access to, like /tmp, or to create one with the correct permissions so any user can access/read/write there, a sort of shared users folder.
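A rough sketch of that idea (paths are illustrative): the postgres user needs execute permission on every directory in the path as well as read permission on the file itself, so something like
mkdir /tmp/pg_import
chmod 755 /tmp/pg_import
cp /home/youruser/file.csv /tmp/pg_import/
chmod 644 /tmp/pg_import/file.csv
should let COPY read '/tmp/pg_import/file.csv' no matter how your home directory is locked down.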
I think your postgres user still doesn't have access to your file.
Did you try the following commands?
chown postgres /var/lib/pgsql/data/file.csv
chmod u+r /var/lib/pgsql/data/file.csv
Try \COPY table_name FROM '/var/lib/pgsql/data/file.csv'
WITH DELIMITER AS ',' csv header
Notice the backslash before copy: with the backslash, \copy reads the file with your own user's permissions, otherwise plain COPY runs as the postmaster (server) user, which the documentation deprecates for recent versions of pg :| Anyway, this will probably do the trick for you.

Run Mysql scripts in a batch?

Can someone link me to a tutorial or explain whether there is a way to create some sort of batch file of MySQL scripts / stored procs and run them all at the same time? I cannot seem to find any documentation on this online, but I suspect I might be searching with the wrong terms.
You can chain MySQL scripts by calling them from within a script using the source command (see the details of the command-line options):
# my_textfile.sql
# ---------------
USE my_database;
\. subscript1.sql
\. subdir/subscript2.sql
\. /full/path/to/subscript3.sql
Command Line:
mysql < my_textfile.sql
Don't forget the command-line options: if you are going to script the files you might need the password / user account.
mysql -uyouraccount -pyourpassword YourDatabase < mytextfile.sql
This isn't the most secure way to do it, because it puts your username / password on the command line, but it works. If you are doing much scripting, I suggest you look into .my.cnf and the various options for saving your account / password in there (and securing that file).
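A minimal sketch of such a file (the values are placeholders), saved as ~/.my.cnf and protected with chmod 600:
[client]
user=youraccount
password=yourpassword
With that in place you can drop the -u and -p options from the command line.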
You can simply create a text file with SQL statements separated with ; and then execute all statements with the MySQL command line client:
# my_textfile.sql
# ---------------
USE my_database;
SELECT * FROM table1;
UPDATE table2 SET foo='bar';
Command Line:
mysql < my_textfile.sql
For peeps running MAMP PRO on OS X Yosemite, I was able to get all my *.sql scripts executed (imported) by running the following from the terminal:
/Applications/MAMP/Library/bin/mysql -h localhost -u root -p < /Applications/MAMP/myDBRestore.sql
myDBRestore.sql contained a reference to all the MySQL DB scripts, like so:
\. /full/path/to/sql/file1.sql
\. /full/path/to/sql/file2.sql
\. /full/path/to/sql/file3.sql
...
\. /full/path/to/sql/file(n).sql
where n is the last .sql file in the directory.