Trouble importing .sav into PostgreSQL

I have an SPSS file saved as a .sav that I am trying to migrate into a Postgres DB. I tried using the SPSS export-to-database feature, but that does not seem to be working (I'm still waiting to hear back from IBM). I also tried exporting to .csv and importing it via a GUI, Navicat, but the default data type is varchar(255) and it can't detect the correct types. I can't sit down and create the tables manually, as there are 640 variables.
For some additional context: there will be multiple similar files going into multiple tables, with around 250,000 tuples per table. If there is some sort of script that can automatically detect the schema and let me export it, or software that does that or can accept .sav directly, I'm willing to try pretty much anything.

It sounds like you have a problem with the ODBC driver. You might need a new one from the Data Access Pack or a native driver from the database vendor.
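If the ODBC route stays blocked, a small script can do the schema detection the question asks for. Here is a minimal sketch, assuming the pyreadstat and SQLAlchemy Python packages and a local Postgres instance; the file name, table name, and connection string are placeholders:

# Read the .sav and let pandas/SQLAlchemy infer column types when
# creating the Postgres table.
# Assumes: pip install pyreadstat pandas sqlalchemy psycopg2-binary
# "survey.sav", "survey", and the connection string are placeholders.
import pyreadstat
from sqlalchemy import create_engine

df, meta = pyreadstat.read_sav("survey.sav")   # DataFrame plus SPSS metadata

engine = create_engine("postgresql+psycopg2://user:pass@localhost:5432/mydb")

# to_sql issues the CREATE TABLE from the inferred dtypes, so the 640
# variables never have to be typed out by hand; chunksize batches the
# inserts for the ~250,000-row files.
df.to_sql("survey", engine, if_exists="replace", index=False, chunksize=10000)

# meta.column_names_to_labels maps variable names to their SPSS labels,
# in case the descriptions should be preserved somewhere as well.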

Related

.SQL export from PHPMyAdmin to Excel or CSV

I inherited some old records for a company I volunteer for. One of the old files is an SQL dump from their old webpage, and I would like to get the data from one of its tables into Excel for their use.
-- MySQL dump 10.11
The dump drops the table if it exists, creates it anew, and then inserts all of the data.
Is there some easy way I can get this data into Excel on my PC? I don't have SQL Server or anything like that loaded... I assumed there was some easy way to get a CSV or Excel file out of the dump, but I have so far failed to find one that doesn't involve first loading the dump into some SQL server.
Unfortunately I don't think there is any way to export a dump file directly into an Excel or .csv file. The reason for this is that the dump file is a collection of SQL statements (CREATE TABLE and INSERT) rather than a plain data file; it has to be replayed against a server to reconstruct the actual tables.
Lucky for you, MySQL offers a free version of their server. You can find it here: http://dev.mysql.com/downloads/
I think you are best off downloading this and restoring your file as a new database. This has the added benefit of giving you complete control over the data from that point on. Exporting to Excel would be easy at that point; however, you may find it a lot more fulfilling to continue using MySQL server.
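Once the dump has been restored (for example with mysql -u root -p newdb < dump.sql from a terminal), the export step could look something like this sketch, assuming pandas and a MySQL driver; the table name and credentials are placeholders:

# Pull the restored table into pandas and write a CSV (or .xlsx) that
# Excel can open. Assumes: pip install pandas sqlalchemy pymysql
# (plus openpyxl for the .xlsx variant); "old_records" is a placeholder.
import pandas as pd
from sqlalchemy import create_engine

engine = create_engine("mysql+pymysql://root:password@localhost/newdb")
df = pd.read_sql("SELECT * FROM old_records", engine)
df.to_csv("old_records.csv", index=False)
# or: df.to_excel("old_records.xlsx", index=False)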
Hope this helped.

Create a local DB and import csv or excel file

Due to circumstances beyond my control, I'm currently not able to create a table on one of my company's databases, and I've got a project where I need to break down and get stats from a large data file. I can open it in Excel, but it's not very happy about it. What I'd like to do is create a local database where I could use the import wizard to import an Excel file into a new table. Is this possible? If so, how would I do it?
SQL Server has a free Express edition of their database: http://www.microsoft.com/en-us/server-cloud/products/sql-server-editions/sql-server-express.aspx. It probably has a cumulative data limit, though (Oracle Express does; I don't know much about SQL Server Express).
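If the wizard turns out to be tedious for a file this size, the same import can be scripted. A rough sketch, assuming pandas plus pyodbc against a default local Express instance; the instance path, driver version, and file and table names are placeholders:

# Load the Excel file into a new table on a local SQL Server Express
# instance. Assumes: pip install pandas sqlalchemy pyodbc openpyxl
# and that Microsoft's "ODBC Driver 17 for SQL Server" is installed.
import pandas as pd
from sqlalchemy import create_engine

engine = create_engine(
    "mssql+pyodbc://localhost\\SQLEXPRESS/projectdb"
    "?driver=ODBC+Driver+17+for+SQL+Server&trusted_connection=yes"
)
df = pd.read_excel("large_data_file.xlsx")
df.to_sql("stats_data", engine, if_exists="replace", index=False)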

Import updating CSV into SQL Server

I'm looking for a simple solution (I'm a beginner with SQL) to import data from my .csv file into my SQL DB.
I have a third-party program that updates my .csv file every 30 seconds, and I want to get that updated information into my SQL DB. I tried the Import and Export Wizard, but it didn't work because the .csv file was in use by the third-party program.
Getting the information into the SQL DB doesn't need to happen in real time; it would be enough to pull in all the information when a saved SQL query file is opened.
Thank you!
OPENROWSET is the simplest option if you can get it working in your environment for a CSV file. I have seen a lot of issues depending on the OS and on which version of MS Office is installed (32-bit vs. 64-bit).
With a bit more work, though, you will be all set by creating a small SSIS package to import that CSV into a table, and executing that package from a SQL Agent job at the desired interval. Later, if you need more complex insert/update logic, you can always modify the package.
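For reference, here is roughly what the OPENROWSET route looks like when driven from a script. This sketch assumes the pyodbc package, the ACE text provider, and the 'Ad Hoc Distributed Queries' server option being enabled; all paths, table, and connection details are placeholders:

# Pull the CSV through the ACE text provider into a staging table.
# The provider's bitness must match the SQL Server instance (the
# 32-bit vs. 64-bit Office issue mentioned above).
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=localhost;"
    "DATABASE=mydb;Trusted_Connection=yes",
    autocommit=True,
)
conn.execute("""
    INSERT INTO csv_staging
    SELECT * FROM OPENROWSET(
        'Microsoft.ACE.OLEDB.12.0',
        'Text;Database=C:\\data\\;HDR=YES',
        'SELECT * FROM feed.csv')
""")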
This is a case of the producer-consumer problem, where one process is writing data and another one is reading it.
Whatever you do, you need to set up some kind of lock on this file so that a process can check whether the file is available for reading or writing. If the Import and Export Wizard had issues with concurrency, other processes probably will too.
Another option is to always write into a newly created file, and have the reader process always read from the newest one and delete it after processing.
One more thing you'll have to take care of is reading from the same file multiple times. You need some way to mark the records that have already been read so they are not inserted twice.
All of the above is needed if this has to be a fully automated and unattended process.
If not, you can just manually create a copy of the CSV file and then use the Import and Export Wizard to import the data (a scripted version of that idea is sketched below).
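A rough sketch of that manual copy turned into a scheduled script, assuming pyodbc and a staging table; every path and name here is a placeholder:

# Snapshot the live CSV so the third-party writer is never blocked,
# then bulk-load the snapshot into a staging table. Schedule this with
# Task Scheduler or a SQL Agent job at the desired interval.
import shutil
import pyodbc

shutil.copyfile(r"C:\feed\live.csv", r"C:\feed\snapshot.csv")

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=localhost;"
    "DATABASE=mydb;Trusted_Connection=yes",
    autocommit=True,
)
conn.execute(r"""
    BULK INSERT csv_staging
    FROM 'C:\feed\snapshot.csv'
    WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n', FIRSTROW = 2)
""")
# A follow-up INSERT ... WHERE NOT EXISTS from csv_staging into the real
# table is one way to keep re-read rows from being inserted twice.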
Here is another resource you can check out for importing CSV into SQL Server
http://blog.sqlauthority.com/2008/02/06/sql-server-import-csv-file-into-sql-server-using-bulk-insert-load-comma-delimited-file-into-sql-server/

Is there a MacOS ODBC driver that reads SQL-command text files?

I've been searching without luck for a MacOS iODBC driver that can read saved .SQL files exported in Microsoft SQL Server format. Does one exist?
We've got a large pile of research data stored in one app that can export as Excel spreadsheets or SQL files (e.g., a text file full of SQL CREATE TABLE and INSERT statements). We need to import this data into another app (Stata 9) that runs under MacOS and can import Excel files, its own format, or data from an ODBC source. So, I need an ODBC driver that can read plain SQL files as its source. We don't need a driver that actually talks to an MSSQL database, because there is no actual database here; just a plain .SQL file with MSSQL-style commands in it.
Unfortunately, the default MacOS install seems to come with no ODBC drivers whatsoever, not even one for reading flat files or SQLite databases.
The current workflow for moving this data (exporting it from DatStat as an Excel spreadsheet, opening that spreadsheet and fixing it by hand to conform to Stata's needs, then saving and reimporting into Stata) is ridiculously labor-intensive, and it also loses a lot of important metadata such as variable descriptions and annotations.
I think the best thing to do here is to load the data from DatStat into a database and then load it from the database into Stata. First, export your data from DatStat to a .sql file. I'm not familiar with DatStat, but if you can do this in bulk or via the command line, so much the better. You can access your OS's terminal from Stata using the -shell- command. Once you have a .sql file, say foo.sql, you can use the following Stata code to send it to a database and then import the table into Stata:
odbc sqlfile("foo.sql"), dsn("DataSourceName")
odbc load, exec("SELECT * FROM CustomerTable") dsn("DataSourceName")
You could even issue a final command to clean up the tables in the database if you don't think you'll use this database again and you don't want it taking up space. Use something like:
odbc exec("DROP TABLE CustomerTable")
Yes, this will probably be slow if your dataset is large, but it could be nice once your data is in the database because you can query parts of it at a time instead of importing the whole thing.
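For example, something like this pulls down just a slice (the column names here are made up):
odbc load, exec("SELECT id, score FROM CustomerTable WHERE wave = 1") dsn("DataSourceName")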
Lastly, you mentioned that the default MacOS install comes with no ODBC drivers for MS SQL Server. If that is the case, you may want to install one of the open-source database systems like MySQL or PostgreSQL instead. I'm not a Mac user, but ODBC drivers for those are certainly available for the Mac.
Good luck!

Set up a local Informix DB like the one on the server

I'm currently working with an Informix DB server that is not local, and it can't be reached from outside the office or its virtual LAN. Is there any tool I can use to copy all the tables so I can work locally?
Thanks in advance.
You can export the database to text using dbexport and import it locally using dbimport. There are other ways of migrating a database, but for small databases this should work. Because the export is plain text, it is easy to change things (for example, a data format), or even to use the export to load the data into a different database. Have a look at: http://publib.boulder.ibm.com/infocenter/idshelp/v10/index.jsp?topic=/com.ibm.mig.doc/mig138.htm
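For example (run the first command on the server, the second locally; the directory and database names are placeholders):
dbexport -o /tmp/mydb_export mydb
dbimport -i /tmp/mydb_export mydb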