Packages like RMySQL and sqldf allow one to interface with local or remote database servers. I'm creating a portable project which involves importing SQL data on machines (or devices) that do not always have access to a running server, but that do always have access to the latest .sql dump of the database.
The goal seems simple enough: import an .sql dump into R without the involvement of a MySQL server. More specifically, I'd like to create a list of lists in which the elements correspond to any databases defined in the .sql dump (there may be multiple), and those elements in turn consist of the tables in those databases.
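For concreteness, the kind of object I have in mind would look something like this (the database and table names here are just placeholders to illustrate the shape):
# target structure: one element per database found in the dump,
# each holding one data frame per table (names below are placeholders)
dumpData <- list(
  some_database = list(
    addresses = data.frame(),
    teams     = data.frame()
  )
)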
To make this reproducible, let's take the sample sportsdb SQL file here; once unzipped, it's called sportsdb_sample_mysql_20080303.sql.
One would think sqldf might be able to do it:
read.csv.sql('sportsdb_sample_mysql_20080303.sql', sql="SELECT * FROM addresses")
Error in sqliteSendQuery(con, statement, bind.data) :
error in statement: no such table: addresses
This even though there certainly is an addresses table in the dump. This post on the sqldf mailing list mentions the same error, but offers no solution.
Then there is the sql.reader function in the ProjectTemplate package, which looks promising. Poking around, I found the source for the function here; it assumes a running database server and relies on RMySQL, so it's not what I need.
So... we seem to be running out of options. Any help from the hivemind appreciated!
(To reiterate, I am not looking for a solution that relies on access to an SQL server; that's easy with dbReadTable from the RMySQL package. I would very much like to bypass the server and get the data straight from the .sql dump file.)
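(For contrast, the server-based route I'd like to avoid looks roughly like this; the connection details are placeholders.)
library(RMySQL)   # also attaches DBI
# hypothetical connection to a local server that already holds the data
con <- dbConnect(MySQL(), dbname = "sportsdb", host = "localhost",
                 user = "me", password = "...")
addresses <- dbReadTable(con, "addresses")
dbDisconnect(con)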
Depending on what you want to extract from the tables, here is how you can start playing around with the data:
numLines <- R.utils::countLines("sportsdb_sample_mysql_20080303.sql")
# [1] 81266
linesInDB <- readLines("sportsdb_sample_mysql_20080303.sql",n=60)
Then you can do some regex to get the table names (after CREATE TABLE), the column names (between the first pair of parentheses) and the values (on the INSERT INTO lines that follow each CREATE TABLE, between the second pair of parentheses); see the sketch after the reference below.
Reference:
Reverse engineering a mysqldump output with MySQL Workbench gives "statement starting from pointed line contains non UTF8 characters" error
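To make the regex idea above concrete, here is a minimal sketch for pulling out the table names; the pattern is an assumption about how mysqldump formats its CREATE TABLE lines, so adjust as needed:
allLines   <- readLines("sportsdb_sample_mysql_20080303.sql")
createIdx  <- grep("^CREATE TABLE", allLines)
# strip the backticks and anything after the table name
tableNames <- gsub("^CREATE TABLE `?([^` (]+)`?.*$", "\\1", allLines[createIdx])
head(tableNames)   # should include "addresses" among others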
EDIT: In response to the OP's answer, if I interpret the Python script correctly, it also reads the dump line by line, filters for INSERT INTO lines, parses them as CSV, and then writes them to file. This is very similar to my original suggestion. My version in R is below. If the file is too large to read at once, it would be better to read it in chunks (a sketch of that follows the script).
options(stringsAsFactors=F)
library(utils)
library(stringi)
library(plyr)
mysqldumpfile <- "sportsdb_sample_mysql_20080303.sql"
allLines <- readLines(mysqldumpfile)
insertLines <- allLines[which(stri_detect_fixed(allLines, "INSERT INTO"))]
#simplify=TRUE returns a character matrix (note: quoted values containing spaces get split too)
allwords <- data.frame(stri_extract_all_words(insertLines, simplify=TRUE))
d_ply(allwords, .(X3), function(x) {
#x <- split(allwords, allwords$X3)[["baseball_offensive_stats"]]
print(x[1,3])
#find where the header/data columns start and end
valuesCol <- which(x[1,]=="VALUES")
lastCols <- which(apply(x, 2, function(y) all(is.na(y))))
datLastCol <- head(c(lastCols, ncol(x)+1), 1) - 1
#format and prepare for write to file
df <- data.frame(x[,(valuesCol+1):datLastCol])
df <- setNames(df, x[1,4:(valuesCol-1)])
#type-convert before writing to file, otherwise it's all strings (lapply keeps per-column types)
df[] <- lapply(df, type.convert, as.is=TRUE)
#write to file
write.csv(df, paste0(x[1,3],".csv"), row.names=F)
})
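If memory becomes an issue, here is a rough sketch of the chunked alternative mentioned above, using only a base R connection (the chunk size is arbitrary):
con <- file(mysqldumpfile, open = "r")
insertLines <- character(0)
repeat {
  chunk <- readLines(con, n = 10000)
  if (length(chunk) == 0) break
  # keep only the INSERT INTO lines from this chunk
  insertLines <- c(insertLines, chunk[stri_detect_fixed(chunk, "INSERT INTO")])
}
close(con)
# insertLines can then be processed exactly as in the script above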
I don't think you will find a way to import an SQL dump (which contains multiple tables with references) and then perform arbitrary SQL queries on them within R. That would basically require the R package to run a complete database server (compatible with the one that created the dump) inside R.
I would suggest exporting the tables or SELECT statements you need as CSV from your database (see here). If you can only work from the dump and don't want to set up a server for the conversion, you could use some simple regular expressions to turn the INSERT statements in your dump into a bunch of CSV files, one per table, using a tool of your choice such as sed or awk (or even R, as suggested by the other answer, though that might be rather slow for a file this size); a rough sketch of the R variant follows.
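A minimal sketch, on a made-up INSERT line (quoting and escaping inside the values are ignored, so treat it as illustrative only):
line <- "INSERT INTO `addresses` VALUES (1,'123 Main St','Springfield');"
# keep only what sits between the outer parentheses after VALUES
sub("^INSERT INTO `?[^` ]+`?.* VALUES \\((.*)\\);\\s*$", "\\1", line)
# [1] "1,'123 Main St','Springfield'"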
I'll reluctantly answer my own question, using the input from +bnord and +chinsoon12 (who both contributed pieces of the puzzle).
Short answer: there is no out-of-the-box solution. As +bnord notes, it is preferable to fix this server-side (e.g., by exporting to CSV with mysqldump). However, as my question indicated, I'm looking for a solution that allows me to work with the SQL dump, bypassing the server.
So if we have to work with the dump, how? The hardcore, manual way is to use regular expressions to convert INSERT statements to CSV, either (1) outside R using sed and awk on the .sql text file (+bnord), or (2) inside R with grep and gsub on strings loaded with readLines (+chinsoon12).
Some good soul wrote a Python script that can convert SQL dumps to CSV. This requires yet another piece of (potentially non-trivial to install/maintain) software, so it's not the answer I was hoping for, but it does look like a good model in case anyone wants to reinvent the wheel in R.
For now I'll stick with my modus operandi of (on Windows) running MySQL Community Server and using MySQL Workbench to import the dump, then talking to the local server from R. A very indirect method that is a pain in the ass because of MySQL's inscrutable access rights system (especially annoying since the data is all just sitting there in an ASCII text file), but it seems to be the only way for now. Thanks all for your input!
(If a better, more complete answer comes along I'll gladly accept that, turning this into a comment if possible.)
Related
I am in a strict corporate environment and don't have access to Postgres' psql. Therefore I can't do what's shown, e.g., in the SO question Convert SQLITE SQL dump file to POSTGRESQL. However, I can generate the SQLite dump file (.sql). The resulting dump.sql file is 1.3 GB.
What would be the best way to import this data into Postgres? I also have DBeaver and can connect to both databases simultaneously but unfortunately can't do INSERT from SELECT.
I think the term for that is 'absurd', not 'strict'.
DBeaver has an 'execute script' feature. But who knows, maybe it will be blocked.
EnterpriseDB offers binary downloads. If you unzip those to a local drive you might be able to execute psql from the bin subdirectory.
If you can install psycopg2 or pg8000 for Python, you should be able to connect to the database and then loop over the dump file, sending each line to the database with cur.execute(line). It might take some fiddling if the dump file has any multi-line commands, but the example you linked to doesn't show any of those.
I have a table stored in myFile.csv. I want to load this table into an SQL database. I am working in C under a Unix environment. I went through some links but didn't find any useful direction. Thanks.
I think you are referring to a CSV file rather than a CVS file; CSV stands for Comma-Separated Values. To load data from C into a database you will need C libraries for that database which allow you to run SQL INSERT statements. C isn't really well suited for this task in this day and age. Java would likely be a better bet, because nearly all vendors provide JDBC drivers for this purpose. If you insist on doing this in C, you will likely be using ODBC drivers or a native library for your database on non-Windows platforms. Some information about ODBC can be found at this link.
This is not a direct answer to your question.
If you want to load a text file into an SQL database, you can usually do this with a helper program from the database in question. For MySQL, this could be LOAD DATA INFILE or mysqlimport.
Not a real code snippet, just guidelines...
Read the file with fgets(); this will give you line-by-line output.
For each line, tokenize it using strtok_r:
char *item, *brkt;
for (item = strtok_r(line, ",", &brkt); item != NULL; item = strtok_r(NULL, ",", &brkt)) {
    /* handle one field (item) here */
}
Connect to the database and send your query, e.g. with mysql_real_connect() and mysql_query() from the MySQL C API.
Are there any command-line tools (Linux, Mac, and/or Windows) that I could use to scan a delimited file and output a DDL CREATE TABLE statement with the data types determined for me?
Did some googling, but couldn't find anything. Was wondering if others might know, thanks!
DDL-generator can do this. It can generate DDLs for YAML, JSON, CSV, Pickle and HTML (although I don't know how the last one works). I just tried it on some data exported from Salesforce and it worked pretty well. Note that you need to use it with Python 3; I could not get it to work with Python 2.7.
You can also try https://github.com/mshanu/idli. It can take a CSV file as input and generate a CREATE statement with appropriate types. It can generate DDL for MySQL, Oracle and Postgres. I am actively working on this and happy to receive feedback for future improvements.
I have just downloaded pgAdmin 1.14.3 in an effort to import, query, and manage large text files. These text files are either quote-comma-quote delimited or tab delimited (they come as quote-comma-quote and I edited many for use with another program). While version 1.16 offers an import function, it has not been released yet, and I am wondering how to import data into a newly created table using pgAdmin.
The text files range from 12 MB to 2 GB, so I'm looking for a comprehensive solution that does not involve importing row by row. I tried this with phpPgAdmin, but ran into file size limitations embedded in the php.ini file (separate post) and am trying this as a possible workaround. I'm a little new to SQL, so I'm not really sure of all the commands at my fingertips. Any help is appreciated - thanks!
You can issue a COPY statement, like this:
COPY table_name (column_name)
FROM 'd:\test.sql';
Query returned successfully: 6 rows affected, 31 ms execution time.
See the documentation here:
http://www.postgresql.org/docs/9.1/static/sql-copy.html
Note that I did not test this in pgAdmin with large files, but using psql I have never seen a case where a file was too big for COPY.
I have a txt file with 350,000 lines that I need to download and insert into my SQL Server database. I have written the part that connects to FTP, gets the related file, and downloads it. What I want now is to insert it into my table.
Here's a line:
9996281000L0000000000000000
As you can also see, I need to separate it into specific parts, like
999 628 1000 L 0000000000000000
I need an efficient solution that splits each line and inserts the data into the related columns.
Anyone any ideas how I can achieve this?
Look into the BCP utility and its format files. It's a detailed and somewhat complex process, but it will do the job quickly and efficiently once set up.
You can get similar functionality (with a much better GUI) from SQL Server Integration Services (SSIS). While the tooling is completely different, it does much the same thing as bcp.