I am trying to convert a MySQL database to MSSQL, and I used SSMA.
First I converted the schema from MySQL to MSSQL, then I synchronized it.
Finally, I migrated the data and ran into these errors:
Column 'column1' (for example) does not allow DBNull.Value
Software used:
SQL Server 2016
MySQL Workbench 6.1
SSMA
In this case, I'd suggest you either change the source data to '0000-00-01', which works well with the 'Zero-date in NOT NULL columns' handling, or make the destination column nullable so you can process the NULL data after the migration is complete.
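For the second option, a rough sketch on the SQL Server side could look like this (the table name, column type, and back-fill value are placeholders, not anything from your schema):

-- Make the destination column nullable so the rows can be migrated.
ALTER TABLE dbo.my_table ALTER COLUMN column1 DATETIME NULL;

-- After the migration, inspect or back-fill the NULLs, for example:
UPDATE dbo.my_table
SET column1 = '1900-01-01'
WHERE column1 IS NULL;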
Migrate a table from Oracle to SQL Server.
I have used Toad to export (select * from table) into a pipe-delimited .txt file so it can be consumed in SQL Server. The Oracle table has a DATE column, and the output from Toad for that column is (2/26/2016 3.05.10.000000 PM). This format is not compatible with the datetime column on the SQL Server side.
I feel we can convert the date in Oracle to a compatible SQL Server format for easier ingestion.
Please help me understand how to convert the date on the Oracle side into a format SQL Server can accept.
Create an Oracle linked server in SQL Server with an ODBC connection, and use that linked server to work with the Oracle and SQL Server tables from SQL Server.
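A rough sketch of that setup (the linked server name, DSN, and credentials are placeholders, and it assumes an Oracle ODBC DSN already exists on the SQL Server machine):

-- Create the linked server over the Oracle ODBC DSN via the MSDASQL provider.
EXEC sp_addlinkedserver
     @server     = N'ORACLE_LNK',
     @srvproduct = N'Oracle',
     @provider   = N'MSDASQL',
     @datasrc    = N'MyOracleDSN';

-- Map a SQL Server login to the Oracle credentials.
EXEC sp_addlinkedsrvlogin
     @rmtsrvname  = N'ORACLE_LNK',
     @useself     = 'FALSE',
     @rmtuser     = N'oracle_user',
     @rmtpassword = N'oracle_password';

-- Then query the Oracle table directly from SQL Server.
SELECT * FROM OPENQUERY(ORACLE_LNK, 'SELECT * FROM my_table');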
You must understand that DATE datatypes are binary data. Using to_date() on a column that is already a DATE is inappropriate. It forces Oracle to perform (behind the scenes) a to_char() on the DATE column in order to produce the character data that is the required input to to_date(). Then, when you see (in your text csv file) that it has produced a "date" in some particular format, it is because Oracle has then had to run the result of your to_date() back through to_char(), using the default NLS_DATE_FORMAT setting, to produce a character string for the text output.
So your solution is this:
First, determine what text format of a date MSSQL wants when it uses this csv file. I don't know what that is, but for the sake of argument, let's say it is 'yyyy-mm-dd'. With that information, construct your SELECT in Oracle like this:
select mycol1,
to_char(my_date_col,'yyyy-mm-dd'),
mycol2
from my_table;
That said, I agree with the others: why bother with this cumbersome process in the first place, or even with some other intermediary like SSIS? Why not just create a linked server in MSSQL and query the Oracle table directly? Or create a database link in the Oracle DB and, using the Oracle transparent gateway as the conduit, INSERT directly into the MSSQL table from Oracle? Either the linked server or the database link will be much faster than any external process.
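For example, the database-link route, run from the Oracle side, could look roughly like this (the link name and table/column names are made up, and it assumes the transparent gateway is already configured):

-- Push the rows straight into the SQL Server table over the database link.
INSERT INTO my_mssql_table@mssql_link (col1, date_col, col2)
SELECT mycol1, my_date_col, mycol2
FROM my_table;

COMMIT;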
I would suggest that the best way to transfer an Oracle table to SQL Server is by using an SSIS package.
You can use Oracle as the source, fix your conversion issue with a Data Conversion task, and use SQL Server as the destination.
I've downloaded SSMA for DB2 v8.2.0. I kept all the default settings and tried to convert a couple of stored procs from DB2 to SQL Server 2017.
I selected one SP, clicked on Convert Schema, and got special characters just like the ones below.
%####�������#�������
%####�������#����#��
%####������#�
%####������������#���
%####�����������#��������#������#������
Do I have to change any settings? Why is this happening?
I need to get a copy of a SQL Server 2008 table into an Oracle RDBMS. I have a database link to the SQL Server, and the database has a table which contains a LONG BINARY column.
When I issue
create table test_ora as select * from mssqltable@dblink
I get the error
Can't convert LONG
I tried to use to_lob, to_char, hextoraw and a raft of other Oracle conversion functions, but still haven't solved the issue. Do you have any ideas?
P.S. I'm away from work right now, so I can't give the exact ORA- error number.
There is a way to do that with an undocumented Oracle package:
http://tonguc.wordpress.com/2008/08/28/how-to-transfer-long-datatype-over-dblink/
I would recommend a tool called Pentaho Data Integration. It is a free, small and superb ETL tool.
Download page: community(.)pentaho(.)com
It will recreate all tables and types for you. How to do it:
pldwh(.)blogspot(.)co(.)uk/2013/03/pentaho-data-integration-create-tables_1(.)html
I am trying to insert rows into a MySQL database from an Access database using SQL Server 2008 SSIS.
TITLE: Microsoft SQL Server Management Studio
------------------------------
ERROR [42000] [MySQL][ODBC 5.1 Driver][mysqld-5.0.51a-community-nt]You have
an error in your SQL syntax; check the manual that corresponds to your MySQL
server version for the right syntax to use near '"orders"' at line 1
The problem is with the delimiters. I am using the 5.1 ODBC driver, and I can connect to MySQL and select a table from the ADO.NET destination data source.
The MySQL tables all show up delimited with double quotes in the SSIS package editor:
"shipto addresses"
Removing the double quotes from the "Use a table or view" text box on the ADO.NET Destination Editor or replacing them with something else does not work if there is a space in the table name.
When SSIS puts the Insert query together, it retains the double quotes and adds single quotes.
The error above is shown when I click on "Preview" in the editor, and a similar error is thrown when I run the package (albeit then from the actual insert statement).
I don't seem to have control over this behavior. Any suggestions? Other package types where I can hand-code the SQL don't have this problem.
Sorry InnerJoin, I had to take the accepted answer away from you. I found a workaround here:
The solution is to reuse the connection for all tasks, and to turn ANSI quotes on for the connection before you do any inserts, with an Execute SQL task that runs the following:
set sql_mode='STRICT_TRANS_TABLES,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION,ANSI_QUOTES';
Try using square brackets around the table names. That may help.
EDIT: If you can, I would create views (with no spaces) based on the Access tables, and use those to export. Even if it means building another Access database with linked tables, I think this is your best bet.
I've always struggled with using SSIS with MySQL directly. Even after installing the ODBC drivers, they just don't play well in data flows. I've always ended up creating linked ODBC connections between SQL Server and MySQL, and then I rely on linked server queries to bring over the data. Instead of using an SSIS data flow task, I use an Execute SQL command, usually in the form of a stored procedure that executes an OPENQUERY.
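For example, the insert through the linked server can look roughly like this (the linked server name and table/column names are made up; in practice I wrap this in a stored procedure):

-- Push rows from a SQL Server staging table into the MySQL table,
-- using OPENQUERY against the linked server as the insert target.
INSERT INTO OPENQUERY(MYSQL_LNK, 'SELECT order_id, ship_to FROM orders')
SELECT order_id, ship_to
FROM dbo.staging_orders;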
One solution is to load the data into a SQL Server database and use it as a staging environment before you load it into the MySQL database. I regularly move data between SQL Server 2008 and MySQL, and in the past I used to regularly move data between Access and SQL Server.
Another possible solution is to transform the incoming Access data before it loads into the MySQL database. That may give you a chance to clean up the column names and the actual data that goes through to MySQL.
Let me know if either of these work for you.
You can locate the configuration file my.ini at <<Drive>>:\ProgramData\MySQL\MySQL Server 5.6\my.ini and add ANSI_QUOTES to sql-mode,
e.g.: sql-mode="STRICT_TRANS_TABLES,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION,ANSI_QUOTES". Restart the MySQL service after editing my.ini for the change to take effect. This should solve the issue while previewing in the SSIS editor.
This is the second time this has happened to me, and before modifying a third-party database structure I wanted to know if anyone knows a better solution:
I'm accessing MS SQL Server 2008 from a Lotus Notes agent (Notes 7) to retrieve some data. I use LSXODBC and my SELECT statement works perfectly, except that my agent cannot "understand" NVARCHAR SQL field types. Any other data types work fine (I can get the values from number and date fields without a problem).
It took me a while to figure it out, and I couldn't find a solution other than changing the field types on the SQL table to VARCHAR instead of NVARCHAR.
I could replicate this in both MS SQL 2005 and 2008.
My last "elegant" solution was to create a SQL view (instead of modifying the table structure) with VARCHAR types instead of NVARCHAR. It works OK, but I have to create a view for each table I'm retrieving data from.
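Something along these lines is what I mean by the view workaround (the table and column names here are just for illustration):

-- Expose the NVARCHAR columns as VARCHAR so the Notes agent can read them.
CREATE VIEW dbo.customers_for_notes AS
SELECT customer_id,
       CAST(customer_name AS VARCHAR(100)) AS customer_name,
       CAST(city AS VARCHAR(50)) AS city,
       created_date
FROM dbo.customers;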
I tried to set the field type using the FieldExpectedDataType method, but it didn't work; I still got DB_TYPE_UNDEFINED.
I thought there might be some configuration issue, or maybe I'm using an old Lotus Notes version / ODBC driver version?
Any hint would be greatly appreciated.
Thank you in advance.
Diego
An old ODBC driver may not support Unicode. It was not added until SQL Server 2000 (I'm fairly sure).