How to insert LONG BINARY from SQL Server to Oracle

I need to get a copy of a SQL Server 2008 table into an Oracle RDBMS. I have a database link to the SQL Server database, which has a table containing a LONG BINARY column.
When I issue
create table test_ora as select * from mssqltable@dblink
I get the error
Can't convert LONG
I tried to use to_lob, to_char, hextoraw and a raft of other Oracle conversion functions, but still haven't solved the issue. Do you have any ideas?
P.S. I'm away from work now, so I can't give the exact ORA- error number.

There is a way to do it with an undocumented Oracle package:
http://tonguc.wordpress.com/2008/08/28/how-to-transfer-long-datatype-over-dblink/
I would recommend a tool called Pentaho Data Integration. It is a free, small, and superb ETL tool.
Download page: community.pentaho.com
It will recreate all tables and types for you. How to do it:
pldwh.blogspot.co.uk/2013/03/pentaho-data-integration-create-tables_1.html

Related

Migrate data from SQL Server to PostgreSQL

I have a stored procedure and a table in SQL Server Enterprise 2014, with data in the table. Now I need the same table and data in PostgreSQL (pgAdmin 4).
Can anyone suggest a way to migrate the data to PostgreSQL, or an approach to creating a SQL script that I can run with psql?
Depending on how much data you have, you could script out the table and data, then tweak the script as needed for PostgreSQL:
1. Right-click on the SQL database > Tasks > Generate Scripts.
2. On the "Choose Objects" screen, select your specific table, then select "Next >".
3. On the "Set Scripting Options" screen, select "Advanced".
4. Find the option called "Types of data to script", select "Schema and data", and select "OK".
5. Set the filename and continue through the dialog until the file is generated.
6. Tweak the SQL script for any PostgreSQL-specific syntax (see the sketch below).
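For illustration, this is the kind of tweak step 6 typically involves; the table and columns here are hypothetical, not from the question:

-- Generated by SSMS (T-SQL):
CREATE TABLE [dbo].[customer](
    [id] [int] IDENTITY(1,1) NOT NULL,
    [name] [nvarchar](100) NULL
);

-- Tweaked for PostgreSQL (brackets removed, IDENTITY -> serial, nvarchar -> varchar):
CREATE TABLE customer (
    id serial NOT NULL,
    name varchar(100)
);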
If there is a larger amount of data, you might look into some type of data transfer tool like SSIS.
Exporting the table structure and data as Josh Jay describes will likely require some fixes where the syntax doesn't match, but it should be doable, if tedious. Luckily, there are existing conversion tools available to help.
You could also try using a foreign data wrapper to map the tables in SQL Server to a running instance of PostgreSQL. Then it's just a matter of copying tables. Depends on your needs and where each database server is located relative to one another.
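As a sketch of the foreign data wrapper route, assuming the tds_fdw extension (one of the available SQL Server wrappers) and placeholder host, credentials, and table names:

CREATE EXTENSION tds_fdw;

-- Point PostgreSQL at the SQL Server instance:
CREATE SERVER mssql_svr
    FOREIGN DATA WRAPPER tds_fdw
    OPTIONS (servername 'mssql.example.com', port '1433', database 'mydb');

CREATE USER MAPPING FOR CURRENT_USER
    SERVER mssql_svr
    OPTIONS (username 'sqluser', password 'secret');

-- Map the remote table locally:
CREATE FOREIGN TABLE customer (
    id integer,
    name varchar(100)
) SERVER mssql_svr OPTIONS (table_name 'dbo.customer');

-- Copying the table is then a local operation:
CREATE TABLE customer_copy AS SELECT * FROM customer;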
The stored procedures will unfortunately be far more difficult to handle. While Oracle's PL/SQL language is substantially similar to PostgreSQL's PL/pgSQL, MS SQL Server/Sybase's Transact-SQL dialect is different enough to require rewrites. If the Transact-SQL functions also access .NET objects, the migration may end up far more difficult as you reimplement dependencies or find logical equivalents.
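To give a feel for the dialect gap, here is a trivial procedure in both dialects; the orders table and the logic are made up for illustration:

-- Transact-SQL:
CREATE PROCEDURE dbo.recent_orders @days int
AS
BEGIN
    SELECT TOP 10 * FROM orders
    WHERE order_date > DATEADD(day, -@days, GETDATE())
    ORDER BY order_date DESC;
END;

-- PL/pgSQL equivalent (a set-returning function instead of a procedure):
CREATE FUNCTION recent_orders(p_days integer)
RETURNS SETOF orders AS $$
BEGIN
    RETURN QUERY
        SELECT * FROM orders
        WHERE order_date > now() - (p_days || ' days')::interval
        ORDER BY order_date DESC
        LIMIT 10;
END;
$$ LANGUAGE plpgsql;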

Migrate table from Oracle to SQL Server

I need to migrate a table from Oracle to SQL Server.
I have used Toad to export (select * from table) into a pipe-delimited .txt file so it can be consumed by SQL Server. The Oracle table has a DATE column, and Toad's output for that column is (2/26/2016 3.05.10.000000 PM). This format is not compatible with the datetime column on the SQL Server side.
I believe we can convert the date in Oracle to a format SQL Server accepts, for easier ingestion.
Please help me understand how to convert from Oracle to a compatible SQL Server format.
Create an Oracle linked server in SQL Server with an ODBC connection, and use that linked server to work with the Oracle and SQL Server tables from SQL Server.
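A sketch of that setup; the server name, data source, and credentials below are placeholders, and OraOLEDB is just one common provider choice (an ODBC DSN via MSDASQL works similarly):

-- Register the linked server:
EXEC sp_addlinkedserver
    @server = 'ORACLE_LINK',
    @srvproduct = 'Oracle',
    @provider = 'OraOLEDB.Oracle',
    @datasrc = 'ORCL';

-- Map a login:
EXEC sp_addlinkedsrvlogin
    @rmtsrvname = 'ORACLE_LINK',
    @useself = 'FALSE',
    @rmtuser = 'scott',
    @rmtpassword = 'tiger';

-- Pull the Oracle table straight into a SQL Server table:
SELECT *
INTO dbo.my_table
FROM OPENQUERY(ORACLE_LINK, 'select * from my_table');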
You must understand that DATE datatypes are binary data. Using to_date() on a column that is already a DATE is inappropriate. It forces Oracle to perform (behind the scenes) a to_char() on the DATE column in order to produce the character data that is the required input to to_date(). Then, when you see (in your text CSV file) that it has produced a "date" in some particular format, it is because Oracle has had to run the result of your to_date() back through to_char(), using the default NLS_DATE_FORMAT setting, to produce a character string for the text output.
So your solution is this:
First, determine what text format of a date MSSQL wants when it consumes this CSV file. I don't know what that is, but for the sake of argument, let's say it is 'yyyy-mm-dd'. With that information, construct your SELECT in Oracle like this:
select mycol1,
to_char(my_date_col,'yyyy-mm-dd'),
mycol2
from my_table;
That said, I agree with the others: why bother with this cumbersome process in the first place, or even some other intermediary like SSIS? Why not just create a linked server in MSSQL and query the Oracle table directly? Or create a database link in the Oracle DB and, using the Oracle Transparent Gateway as the conduit, INSERT directly into the MSSQL table from Oracle? Either the linked server or the database link will be much faster than any external process.
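The database-link route would look roughly like this, assuming a gateway link named mssql_link and hypothetical target columns to match the example above:

insert into target_table@mssql_link (col1, order_date, col2)
select mycol1, my_date_col, mycol2
from my_table;
commit;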
I would suggest that the best way to transfer an Oracle table to SQL Server is with an SSIS package. You can have Oracle as the source, fix the conversion issue with a Data Conversion task, and have SQL Server as the destination.

generate database & tables schema (ddl) on Oracle pl-sql

Does anyone have a PL/SQL statement that I can use to generate the database and table schemas (DDL) for a specific database on Oracle 10g? I need the schema in a .sql file and, if possible, compatible with the ANSI-92/99 SQL implementation, so I can use the generated .sql directly on SQL Server 2005.
I've already heard about exp/imp, but that seems to generate a dump file; what I need is just simple DDL in a .sql file.
Thanks
You could try:
select dbms_metadata.get_ddl('TABLE',table_name,owner)
from dba_tables where owner='schema name';
It returns CLOBs, so in SQL*Plus you may want to increase the LONG display buffer.
More about dbms_metadata here: http://download.oracle.com/docs/cd/B19306_01/appdev.102/b14258/d_metada.htm
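To capture the output in a .sql file, a typical SQL*Plus session would look something like this (the schema name is a placeholder and the buffer sizes may need adjusting):

set long 100000
set pagesize 0
set linesize 200
set trimspool on
spool schema.sql
select dbms_metadata.get_ddl('TABLE', table_name, owner)
from dba_tables
where owner = 'MYSCHEMA';
spool off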
If you just need to dump your schema, this free package does a very nice job. We use it in daily production.
http://sourceforge.net/projects/cx-oracletools
If you need to convert from Oracle to SQL Server, this software might do a better job. We've used it to convert between Oracle, MySQL, and PostgreSQL.
http://www.spectralcore.com/fullconvert
I wrote oraddlscript, which calls dbms_metadata.get_ddl (see Pop's answer) for each database object owned by a user and writes the DDL to a file.
Greetings, I'd recommend using Oracle SQL Developer Data Modeler. Since it's from Oracle, it can read the DDL information directly from the data dictionary. It creates an ERD, and then you can produce DDL for SQL Server 2000/2005, some versions of DB2, and Oracle 9i/10g/11g.

SSIS and MySQL - Table Name Delimiter Issue

I am trying to insert rows into a MySQL database from an Access database using SQL Server 2008 SSIS, and I get the following error:
TITLE: Microsoft SQL Server Management Studio
------------------------------
ERROR [42000] [MySQL][ODBC 5.1 Driver][mysqld-5.0.51a-community-nt]You have
an error in your SQL syntax; check the manual that corresponds to your MySQL
server version for the right syntax to use near '"orders"' at line 1
The problem is with the delimiters. I am using the 5.1 ODBC driver, and I can connect to MySql and select a table from the ADO.Net destination data source.
The MySql tables all show up delimited with double-quotes in the SSIS package editor:
"shipto addresses"
Removing the double quotes from the "Use a table or view" text box on the ADO.NET Destination Editor or replacing them with something else does not work if there is a space in the table name.
When SSIS puts the Insert query together, it retains the double quotes and adds single quotes.
The error above is shown when I click on "Preview" in the editor, and a similar error is thrown when I run the package (albeit then from the actual insert statement).
I don't seem to have control over this behavior. Any suggestions? Other package types where I can hand-code the SQL don't have this problem.
Sorry InnerJoin, I had to take the accepted answer away from you. I found a workaround here:
The solution is to reuse the connection for all tasks, and to turn ANSI quotes on for the connection before you do any inserts, with an Execute Sql task that runs the following:
set sql_mode='STRICT_TRANS_TABLES,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION,ANSI_QUOTES'
Try using square brackets around the table names. That may help.
EDIT: If you can, I would create views (with no spaces) based on the Access tables, and use those to export. Even if it means building another Access database with linked tables, I think this is your best bet.
I've always struggled with using SSIS with MySQL directly. Even after installing the ODBC drivers, they just don't play well in data flows. I've always ended up creating linked ODBC connections between SQL Server and MySQL, and then relying on linked server queries to bring over data. Instead of using an SSIS data flow task, I use an Execute SQL command, usually in the form of a stored procedure that executes an OPENQUERY.
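Roughly the pattern I mean, with a hypothetical linked server named MYSQL_LINKED and made-up table names:

-- Push staged rows from SQL Server into MySQL through the linked server:
INSERT INTO OPENQUERY(MYSQL_LINKED, 'select id, name from orders')
SELECT id, name
FROM dbo.staging_orders;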
One solution would be to load the data into a SQL Server database and use it as a staging environment before you load it into the MySQL database. I regularly move data between SQL Server 2008 and MySQL, and in the past I used to regularly move data between Access and SQL Server.
Another possible solution is to transform the incoming Access data before it loads into the MySQL database. That may give you a chance to clean up the column names and the actual data that's going through to MySQL.
Let me know if either of these work for you.
You can locate the configuration file my.ini at <<Drive>>:\ProgramData\MySQL\MySQL Server 5.6\my.ini and add ANSI_QUOTES to sql-mode,
e.g.: sql-mode="STRICT_TRANS_TABLES,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION,ANSI_QUOTES". This should solve the issue when previewing in the SSIS editor.

UNDEFINED data type when reading SQL database from Lotus Notes using ODBC: nvarchar

This is the second time this has happened to me, and before modifying a third-party database structure I wanted to know if anyone knows a better solution.
I'm accessing MS SQL Server 2008 from a Lotus Notes agent (Notes 7) to retrieve some data. I use LSXODBC, and my "Select" statement works perfectly... except that my agent cannot "understand" nvarchar SQL field types. All other data types work fine (I can get the values from number and date fields without a problem).
It took me a while to figure this out, and I couldn't find a solution other than changing the field types on the SQL table from nvarchar to varchar.
I could replicate this in both MS SQL 2005 and 2008.
My last "elegant" solution was to create a SQL view, instead of modifying the table structure, with varchar types instead of nvarchar. It works OK, but I have to create a view for each table I'm retrieving data from.
I tried to set the field type using the FieldExpectedDataType method, but it didn't work; I still got DB_TYPE_UNDEFINED.
Could there be a configuration issue, or maybe I'm using an old Lotus Notes version / ODBC driver version?
Any hint would be greatly appreciated.
Thank you in advance.
Diego
An old ODBC driver may not support Unicode; it was not added until SQL Server 2000 (I'm fairly sure).