I've been searching without luck for a MacOS iODBC driver that can read saved .SQL files exported in Microsoft SQL Server format. Does one exist?
We've got a large pile of research data stored in one app that can export as Excel spreadsheets or SQL files (e.g., a text file full of SQL CREATE TABLE and INSERT statements). We need to import this data into another app (Stata 9) that runs under MacOS and can import Excel files, its own format, or data from an ODBC source. So, I need an ODBC driver that can read plain SQL files as its source. We don't need a driver that actually talks to an MSSQL database, because there is no actual database here; just a plain .SQL file with MSSQL-style commands in it.
Unfortunately, the default MacOS install seems to come with no ODBC drivers whatsoever, not even one for reading flat files or SQLite databases.
The current workflow for moving this data (exporting it from DatStat as an Excel spreadsheet, opening that spreadsheet and fixing it by hand to meet Stata's needs, then saving and re-importing into Stata) is ridiculously labor-intensive, and it also loses a lot of important metadata, like variable descriptions and annotations.
I think the best thing to do here is to load the data from DatStat into a database and then load it into Stata from there. First, export your data from DatStat to a .sql file. I'm not familiar with DatStat, but if you can do this in bulk or via the command line, so much the better; you can reach your OS's terminal from within Stata using the -shell- command. Once you have a .sql file, say foo.sql, you can use the following Stata code to send it to a database and then import the data into Stata.
odbc sqlfile("foo.sql"), dsn("DataSourceName")  // run the statements in foo.sql against the data source
odbc load, exec("SELECT * FROM CustomerTable") dsn("DataSourceName")  // pull the resulting table into Stata
You could even issue a final command to clean up the tables in the database if you don't think you'll use this database again and you don't want it taking up space. Use something like:
odbc exec("DROP TABLE CustomerTable"), dsn("DataSourceName")
Yes, this will probably be slow if your dataset is large, but having the data in a database can be nice afterwards: you can query parts of it at a time instead of importing the whole thing.
Lastly, you mentioned that no Mac ODBC driver exists for MS SQL Server. If that is the case, you may want to install one of the open-source database systems such as MySQL or PostgreSQL. I'm not a Mac user, but ODBC drivers for those do exist for the Mac.
Good luck!
I have a text file of tab separated data. Millions of rows so I can't open it in Excel.
Is there a way to use some SQL client to load the file and run queries on it without uploading the file to a database? I.e. can my local machine function as my database?
I know I could solve this with command line scripting but I'm trying to find a solution that I can then share with the accounting department and they're scared of the command line.
You can create an ODBC data source based on the Microsoft Text Driver, which should come pre-installed on any Windows PC. It uses the Jet engine, which still ships somewhere in the Windows OS. You can then use any query tool (even MS Query) to analyse your data.
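If anyone on the team is comfortable scripting it, the same driver can also be driven without a GUI. Here is a minimal sketch, assuming Python with the pyodbc package and illustrative folder and file names (note the Text Driver is usually 32-bit, so a 32-bit Python may be required). A schema.ini beside the file tells the driver the data is tab-separated:

# Hedged sketch: query a tab-separated file in place via the
# Microsoft Text Driver; folder and file names are illustrative.
import pyodbc

# schema.ini in the data folder tells the driver the file layout.
with open(r"C:\data\schema.ini", "w") as f:
    f.write("[big_export.txt]\nFormat=TabDelimited\nColNameHeader=True\n")

conn = pyodbc.connect(
    r"Driver={Microsoft Text Driver (*.txt; *.csv)};"
    r"Dbq=C:\data;Extensions=asc,csv,tab,txt;"
)
# The "table" is just the file name; nothing is uploaded to a database.
for row in conn.execute("SELECT TOP 5 * FROM [big_export.txt]"):
    print(row)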
I inherited some old records for a company I volunteer for. One of the old files is an SQL Dump from their old webpage, and I would like to get the data from one of the tables for their use into Excel.
-- MySQL dump 10.11
The dump drops the table if it exists, creates the table new, and then inserts all of the data.
Is there some easy way I can get this data into Excel on my PC? I don't have SQL Server or anything like that loaded... I assumed there was some easy way to get a CSV or Excel file out of it, but I have so far failed to find one that doesn't involve first loading the dump into some SQL server.
Unfortunately, I don't think there is any direct way to export a dump file to an Excel or .csv file. The reason is that the dump file is not the raw data itself but a collection of SQL statements (CREATE TABLE and INSERT) that rebuild it. Databases export this way to avoid the whole list of problems that can occur when you try to manipulate raw data files by hand.
Lucky for you, MySQL offers a free version of their server. You can find it here: http://dev.mysql.com/downloads/
I think you are best off downloading this and restoring your file as a new database. This has the added benefit of giving you complete control over the data from that point on. Exporting to Excel is easy at that point; however, you may find it a lot more fulfilling to continue using MySQL server.
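To make the restore-then-export route concrete, here is a minimal sketch, assuming Python with the mysql-connector-python package, a local MySQL server, and illustrative names throughout (dump.sql, a database called oldsite, a table called members):

# Hedged sketch of the restore-then-export route; all names illustrative.
import csv
import subprocess
import mysql.connector

# 1. Create an empty database and replay the dump into it
#    (both client tools ship with the MySQL server download).
subprocess.run("mysqladmin -u root -p create oldsite", shell=True, check=True)
subprocess.run("mysql -u root -p oldsite < dump.sql", shell=True, check=True)

# 2. Pull the table out and write a CSV that Excel opens directly.
conn = mysql.connector.connect(user="root", password="...", database="oldsite")
cur = conn.cursor()
cur.execute("SELECT * FROM members")
with open("members.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(cur.column_names)  # header row
    writer.writerows(cur.fetchall())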
Hope this helped.
I have an SPSS file saved as a .sav that I am trying to migrate into a Postgres DB. I tried using SPSS's export-to-database feature, but that does not seem to be working (still waiting to hear back from IBM). I also tried exporting to .csv and importing it via a GUI (Navicat), but the default data type is varchar(255) and it can't detect the correct types. I can't sit down and create the tables manually, as there are 640 variables.
As some further info, there will be multiple similar files going into multiple tables, with around 250,000 tuples per table. If there is some sort of script that can automatically detect the schema and let me export it, or software that does that or can accept .sav directly, I'm willing to try pretty much anything.
It sounds like you have a problem with the ODBC driver. You might need a new one from the Data Access Pack or a native driver from the database vendor.
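If the driver route stays blocked, the kind of script the question describes is also feasible. Here is a minimal sketch (an alternative to the ODBC fix above, not the same thing), assuming Python with the pyreadstat, pandas, and SQLAlchemy packages plus a Postgres driver such as psycopg2; file and connection names are illustrative:

# Hedged sketch: infer the schema from the .sav itself and let pandas
# create the Postgres table, so columns keep numeric types instead of
# all collapsing to varchar(255). Names are illustrative.
import pyreadstat
from sqlalchemy import create_engine

# read_sav returns a DataFrame plus metadata (variable labels, etc.)
df, meta = pyreadstat.read_sav("survey.sav")

engine = create_engine("postgresql://user:password@localhost:5432/research")

# to_sql maps pandas dtypes to Postgres column types for all 640 variables.
df.to_sql("survey", engine, if_exists="replace", index=False)

# meta.column_labels holds the variable descriptions, should you want
# to load those into a side table as well.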
Disclaimer: I am somewhat of a n00b when it comes to database programming, so bear with me.
I've been attempting to batch process a rather large amount (~20 GB) of data all contained in .MDF SQL database files. The files contain meteorological data obtained through weather balloons, with each table consisting of ~1-second observations of winds, pressure, height, temperature, etc., and are created with our radiosonde tracking software on an unnetworked Windows machine. It is possible (and quite easy) to load the files using the associated software and export the tables as ASCII text files... however, this involves manually loading each one. As I'm performing a study that requires as many soundings as possible (we have over 2000), doing this over and over for several years of twice-daily observations is extremely time-prohibitive.
I've been taking the files off of the computer and putting them on my laptop running Linux Mint, and consider myself to be fluent with Perl...I do most of my data analysis with Perl scripts. That said, I've had the darndest time trying to get into the database files!
I've tried to connect to one of the files with the DBI package, using variants on
$dbh = DBI->connect("DBI:ODBC:$filename") or die "blahblahblah";
I have unixODBC installed and configured, have downloaded "libmyodbc.so" and "libodbcmyS.so", and keep getting the error
DBI connect('','',...) failed: [unixODBC][Driver Manager]Data source name not found, and no default driver specified (SQL-IM002) at dumpsql.pl line 6.
I've tried remedying this a number of ways over the past couple days, and I won't post them here for the sake of brevity. My odbcinst.ini file is as follows:
[MySQL]
Description = ODBC for MySQL
Driver = /usr/lib/x86_64-linux-gnu/odbc/libmyodbc.so
Setup = /usr/sib/x86_64-linux-gnu/odbc/libodbcmyS.so
FileUsage = 1
I'm seriously confused. I THINK I'm doing everything that various online tutorials are suggesting, but everyone else is connecting to servers and these files are all local and in the same directory! Could anyone attempt to point me in the right direction? All I want is to calculate meteorological values using vertical sounding data! Am I missing something totally obvious?
Any help would be greatly appreciated!
It seems the original database server was a Microsoft SQL Server (MDF files are its data files). I am afraid these files alone are useless on a Linux machine; the MySQL ODBC driver you configured cannot read them, because they are not MySQL files at all. You need a Microsoft SQL Server on a Windows machine to get at the contained data.
You described that you are able to attach an MDF file to a SQL Server manually and then export the needed data as text files. Try to automate that. I'm not an MS SQL Server expert, but it should be possible.
E.g., here is a tutorial on attaching and detaching an MDF file via T-SQL. So my approach would be to write a script that iterates over the 2000 MDF files and, for each one, attaches it to the SQL Server, executes a query to export your data, and then detaches the MDF. A sketch of that loop follows.
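Here is a minimal sketch of that loop, assuming Python with pyodbc running on the Windows machine that has SQL Server, and an illustrative table name (Sounding) inside each file:

# Hedged sketch: attach each MDF, dump one table to CSV, detach.
# Table, folder, and server names are illustrative.
import csv
import glob
import pyodbc

conn = pyodbc.connect(
    "Driver={SQL Server};Server=localhost;Trusted_Connection=yes;",
    autocommit=True,  # CREATE DATABASE cannot run inside a transaction
)
cur = conn.cursor()

for mdf in glob.glob(r"C:\soundings\*.mdf"):
    # Attach the file as a temporary database; if the .ldf log files
    # are missing, FOR ATTACH_REBUILD_LOG may be needed instead.
    cur.execute(f"CREATE DATABASE TempSounding ON (FILENAME = '{mdf}') FOR ATTACH")
    cur.execute("SELECT * FROM TempSounding.dbo.Sounding")
    with open(mdf + ".csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow([col[0] for col in cur.description])  # column names
        writer.writerows(cur.fetchall())
    # Detach so the database name can be reused for the next file.
    cur.execute("EXEC sp_detach_db 'TempSounding'")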
I'm looking for a simple solution (I'm a beginner at SQL) to allow the import of data from my .csv file into my SQL DB.
I have a third-party program that updates my .csv file every 30 seconds, and I want to put that updated information into my SQL DB. I tried the Import and Export Wizard, but it didn't work because the .csv file was in use by the third-party program.
Getting the information into the SQL DB doesn't need to happen in real time; it could just retrieve all the information when a saved SQL query file is opened.
Thank you!
OPENROWSET is the simplest option if you can get it working in your environment for a CSV file. I have seen a lot of issues depending on the OS and on whether the installed version of MS Office is 32-bit or 64-bit.
With a bit more work, though, you will be all set by creating a small SSIS package to import that CSV into a table and executing the package from a SQL Agent job at the desired interval. Later, if you need more complex insert/update logic, you can always modify the package.
This is the classic producer-consumer problem, where one process writes data while another one reads it.
Whatever you do, you need to set up some kind of lock on this file so that each process can check whether the file is available for reading or writing. If the Import and Export Wizard had issues with concurrency, other processes probably will too.
Another option is for the writer to always create a new file to write into, and for the reader to always read the newest one and delete it after processing.
One more thing you'll have to take care of is reading from the same file multiple times: you need some way to mark the records that have already been read so they are not inserted twice (see the sketch after this answer).
All of the above is needed if this needs to be a fully automated and unattended process.
If not, you can just manually create a copy of the CSV file and then use the Import and Export Wizard on the copy.
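To make the copy-and-mark idea concrete, here is a minimal sketch, assuming Python with pyodbc, an append-only CSV with a header row, and an illustrative three-column table called dbo.Feed:

# Hedged sketch of the ideas above: read from a snapshot copy, and
# remember how many rows were already loaded so none go in twice.
import csv
import shutil
import pyodbc

OFFSET_FILE = "rows_loaded.txt"  # persists the "already read" marker

def load_new_rows(live_csv: str, conn: pyodbc.Connection) -> None:
    # Copy first; the writer keeps appending to the live file. (The copy
    # can still fail if the writer holds an exclusive lock, in which
    # case simply retry on the next pass.)
    shutil.copyfile(live_csv, "snapshot.csv")
    try:
        loaded = int(open(OFFSET_FILE).read())
    except FileNotFoundError:
        loaded = 0
    with open("snapshot.csv", newline="") as f:
        rows = list(csv.reader(f))[1:]  # skip the header row
    cur = conn.cursor()
    for row in rows[loaded:]:  # only rows not inserted on a previous run
        cur.execute("INSERT INTO dbo.Feed VALUES (?, ?, ?)", row)
    conn.commit()
    with open(OFFSET_FILE, "w") as f:
        f.write(str(len(rows)))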
Here is another resource you can check out for importing a CSV into SQL Server:
http://blog.sqlauthority.com/2008/02/06/sql-server-import-csv-file-into-sql-server-using-bulk-insert-load-comma-delimited-file-into-sql-server/
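The linked post boils down to a single T-SQL BULK INSERT statement; here is a minimal sketch issuing it from Python via pyodbc (server, database, table, and file names are all illustrative, and the path must be visible to the SQL Server machine):

# Hedged sketch: load a CSV copy with T-SQL BULK INSERT.
import pyodbc

conn = pyodbc.connect(
    "Driver={SQL Server};Server=localhost;Database=FeedDb;Trusted_Connection=yes;"
)
conn.execute(
    "BULK INSERT dbo.Feed FROM 'C:\\data\\feed_copy.csv' "
    "WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\\n', FIRSTROW = 2)"  # skip header
)
conn.commit()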