Extracting SAP SQL Anywhere .db/.log database files with free Unix tools?

I've recently gotten a copy of a SAP SQL Anywhere 12.0.1.3152[1] .db and .log file. I don't have access to the source database, only these two files.
This is an ARGUS database of public records I'd like to make publicly available (via BigQuery).
However, I haven't found any free Unix version of SQL Anywhere that I could use to read it and export to something usable (e.g. CSV, JSON, a MySQL dump, etc.).
How can I extract this data, using free tools (preferably on Ubuntu)?
[1] My guess of the data format is based on the first line of the .db file, which has the string WIN_LATIN1windows-1252UCAUTF-8 and many repetitions of Sybase Inc., Copyright (c)2000 12.0.1.3152.
For scale:
$ du -m *
736 Argus12.db
2170 Argus_new.log
$ wc *
10943417 44373930 771203072 Argus12.db
38517623 83903318 2275373056 Argus_new.log
49461040 128277248 3046576128 total

The files you have are the database. Assuming you have a userid / password for the database, you can use the SQL Anywhere developer edition (available here) on either Windows or Linux to run it and get the data from it.
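For example, here is a rough sketch using the command-line tools that ship with the developer edition (the server name argus, the database name Argus12, and the dba/sql credentials are assumptions — substitute whatever applies to your database, and make sure the SQL Anywhere bin directory is on your PATH):

# Start a local server on the database file (the .log file is its transaction log)
dbeng12 -n argus Argus12.db
# Unload the schema and data: writes a reload.sql script plus one plain-text data file per table
mkdir -p /tmp/argus_unload
dbunload -c "ENG=argus;DBN=Argus12;UID=dba;PWD=sql" -r /tmp/argus_unload/reload.sql /tmp/argus_unload
# Or pull a single table out as CSV from interactive SQL
dbisql -c "ENG=argus;DBN=Argus12;UID=dba;PWD=sql" "UNLOAD TABLE some_table TO '/tmp/some_table.csv' DELIMITED BY ',' QUOTES ON"

The unloaded data files are plain delimited text, so they should be reasonably straightforward to convert for loading into BigQuery.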
Disclaimer: I work for SAP in SQL Anywhere engineering.

Related

How can I export a SQL Server database in *.sql format?

The system I am working on needs to create data dumps in SQL format. Is there a built-in way (like the mysqldump tool) to create this with SQL Server?
I am working specifically with Azure SQL Edge in a container on a Mac or on Linux.
This is not a production system, no need to be super performant.
If this is not possible, is there any other simple way to share small pieces of my database, as files?
The desired result of such a dump would be a simple text file (not CSV, nor *.bak or similar) containing plain INSERT statements, for example:
insert into a.A
values
(v,v,v,v,v),
(v,v,v,v,v),
....
....
....
(v,v,v,v,v);
insert into a.b
values
(v,v,v,v,v),
(v,v,v,v,v),
....
....
....
(v,v,v,v,v);
One of the purposes is educational, which is why readable SQL format is preferable.
All the suggestions point you to installing SSMS and creating a .sql file, but if I'm not wrong you are using macOS or Linux.
In that case I suggest installing DbVisualizer, which is available for macOS and Linux, and doing the same thing there.
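If a command-line route is acceptable, one option worth trying is Microsoft's mssql-scripter tool (pip-installable). A sketch, assuming it can reach your Azure SQL Edge container on localhost,1433; the database name and sa credentials below are placeholders:

pip install mssql-scripter
# Schema plus data, scripted as CREATE and INSERT statements into one file
mssql-scripter -S localhost,1433 -d MyDatabase -U sa -P 'MyPassword' --schema-and-data -f dump.sql
# Data only (just the INSERT statements), written to stdout
mssql-scripter -S localhost,1433 -d MyDatabase -U sa -P 'MyPassword' --data-only > data.sql

The output is plain T-SQL, which is close to the mysqldump-style text file described above.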

Is there an alternative way to import data into Postgres than using psql?

I am in a strict corporate environment and don't have access to Postgres' psql. Therefore I can't do what's shown e.g. in the SO question Convert SQLITE SQL dump file to POSTGRESQL. However, I can generate the SQLite dump file (.sql). The resulting dump.sql file is 1.3 GB.
What would be the best way to import this data into Postgres? I also have DBeaver and can connect to both databases simultaneously but unfortunately can't do INSERT from SELECT.
I think the term for that is 'absurd', not 'strict'.
DBeaver has an 'execute script' feature. But who knows, maybe it will be blocked.
EnterpriseDB offers binary downloads. If you unzip those to a local drive you might be able to execute psql from the bin subdirectory.
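A sketch of that route (the archive name below is a placeholder, and EDB's binary zips typically unpack to a pgsql directory):

unzip postgresql-binaries.zip -d ~/pg
~/pg/pgsql/bin/psql -h dbhost -U dbuser -d targetdb -f dump.sql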
If you can install psycopg2 or pg8000 for Python, you should be able to connect to the database and then loop over the dump file, sending each line to the database with cur.execute(line). It might take some fiddling if the dump file has any multi-line commands, but the example you linked to doesn't show any of those.

Export DB with PostgreSQL's PgAdmin-III

How do I export a PostgreSQL DB into SQL that can be executed in another pgAdmin?
Exporting as a backup file doesn't work when there's a difference in versions.
Exporting as an SQL file does not execute when run in a different pgAdmin.
I tried exporting a DB with pgAdmin III, but when I tried to execute the SQL in another pgAdmin it threw errors in the SQL, and when I tried to "restore" a backup file it said there was a version difference and it couldn't do the import/restore.
So is there a "safe" way to export a DB into standard SQL that can be executed plainly in the pgAdmin SQL editor, regardless of which version it is?
Don't try to use PgAdmin-III for this. Use pg_dump and pg_restore directly if possible.
Use the version of pg_dump from the destination server to dump the origin server. So if you're going from (say) 8.4 to 9.2, you'd use 9.2's pg_dump to create a dump. If you create a -Fc custom format dump (recommended) you can use pg_restore to apply it to the new database server. If you made a regular SQL dump you can apply it with psql.
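For example, a sketch of that dump-and-restore round trip (hosts, users, and database names are placeholders):

# Dump the origin server using the newer pg_dump, in custom format
pg_dump -Fc -h oldhost -U postgres -f mydb.dump mydb
# Restore into the destination server
pg_restore -h newhost -U postgres -d mydb mydb.dump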
See the manual on upgrading your PostgreSQL cluster.
Now, if you're trying to downgrade, that's a whole separate mess.
You'll have a hard time creating an SQL dump that'll work in any version of PostgreSQL. Say you created a VIEW that uses a WITH query. That won't work when restored to PostgreSQL 8.3 because it didn't support WITH. There are tons of other examples. If you must support old PostgreSQL versions, do your development on the oldest version you still support and then export dumps of it for newer versions to load. You cannot sanely develop on a new version and export for old versions, it won't work well if at all.
More troubling, developing on an old version won't always give you code that works on the new version either. Occasionally new keywords are added when support for new specification features is introduced. Sometimes issues are fixed in ways that affect user code. For example, if you were to develop on the (ancient and unsupported) 8.2, you'd have lots of problems with implicit casts to text on 8.3 and above.
Your best bet is to test on all supported versions. Consider setting up automated testing using something like Jenkins CI. Yes, that's a pain, but it's the price for software that improves over time. If Pg maintained perfect backward and forward compatibility it'd never improve.
Export/Import with pg_dump and psql
1. Set PGPASSWORD
export PGPASSWORD='123123123';
2. Export the DB with pg_dump
pg_dump -h <<host>> -U <<username>> <<dbname>> > /opt/db.out
/opt/db.out is the dump path; you can specify your own.
3. Then set PGPASSWORD again for your other host. If the host or password is the same, this is not required.
4. Import the DB on your other host
psql -h <<host>> -U <<username>> -d <<dbname>> -f /opt/db.out
If the username is different, find and replace it with your local username in the db.out file, and make sure only the username is replaced, not data.
If you still want to use pgAdmin, see the procedure below.
Export DB with PGAdmin:
Select DB and click Export.
File Options
Name the DB file for your local directory
Select Format - Plain
Ignore Dump Options #1
Dump Options #2
Check Use Insert Commands
Objects
Uncheck any tables you don't want
Import DB with PGAdmin:
Create New DB.
With the DB still selected, click Menu -> Plugins -> PSQL Console
Type the following command to import the DB
\i /path/to/db.sql
If you want to export the Schema and Data separately:
Export Schema
File Options
Name the schema file in your local directory
Select Format - Plain
Dump Options #1
Check Only Schema
Check Blobs (By default checked)
Export Data
File Options
Name the data file in your local directory
Select Format - Plain
Dump Options #1
Check Only Data
Check Blobs (By default checked)
Dump Options #2
Check Use Insert Commands
Check Verbose messages (By default checked)
Note: Export/Import takes time depending on DB size, and going through pgAdmin adds some more.

Foxbase to PostgreSQL data transfer (dbf files reader)

I am rewriting a program based on an old Foxbase database consisting of .dbf files. I need a tool that can read these files and help transfer the data to PostgreSQL. Do you know of any tool of this type?
pgdbf.sourceforge.net has worked for all the DBF files I've fed it. Quoting the site description:
PgDBF is a program for converting XBase databases - particularly FoxPro tables with memo files - into a format that PostgreSQL can directly import. It's a compact C project with no dependencies other than standard Unix libraries.
If you are looking for something to run on Windows, and this doesn't compile directly, you could use cygwin (www.cygwin.com) to build and run pgdbf.
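Either way, pgdbf writes SQL to standard output, so a typical invocation pipes it straight into psql, along these lines (the file and database names are placeholders):

pgdbf /path/to/customers.dbf | psql targetdb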
As part of the migration path you could use Python and my dbf module. A very simple script to convert the dbf files to csv would be:
import sys
import dbf
dbf.export(sys.argv[1])
which will create a .csv file of the same name as the dbf file. If you put that code into a script named dbf2csv.py you could then call it as
python dbf2csv.py dbfname
Hopefully there are some handy tools to get the csv file into PostgreSQL.
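If psql is available, its \copy command is one such tool; a sketch, assuming the target table already exists (table and file names are placeholders, and drop HEADER if the CSV has no header row):

psql -d targetdb -c "\copy mytable FROM 'dbfname.csv' CSV HEADER"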

How to convert a bunch of .btr and .lck files to readable SQL?

I have a bunch of .btr and .lck files and I need to import them into a SQL Server database.
How can I do that?
.LCK files are lock files. You can't (and don't need to) read those directly. The .BTR files are the data files. Do you have DDF files (FILE.DDF, FIELD.DDF, INDEX.DDF)? If so, you should be able to download a trial version of Pervasive PSQL v11 from www.pervasivedb.com. Once you've installed the trial version, you can create an ODBC DSN pointing to your data and then use SSIS or DTS or any number of programs to export the data from PSQL and import it to MS SQL.
If you don't have DDFs, you would need to either get them or create them. The DDFs describe the record structure of each data file.