Moving Oracle SQL CLOB values (>4000 chars) to SQL Server without trimming

I'm building an ETL in SSIS and have to move data from an Oracle DB to a data warehouse on MS SQL Server.
I am facing a problem migrating the contents of columns of datatype CLOB.
So far I have been casting CLOB values to VARCHAR using:
dbms_lob.substr(columnName, 4000, 1)
I could then easily write such contents as nvarchar in SQL Server.
However, I would not like to force-trim everything. How can I move the entire content of a CLOB (when it exceeds 4000 chars) in a format that will be recognized by SSIS/SQL Server?
Can I cast a CLOB to any other datatype without trimming it?
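For reference, the trimming approach above expanded into a full source query (the table and column names are placeholders, not from the original post):

-- current approach: trims the CLOB to its first 4000 characters
SELECT dbms_lob.substr(columnName, 4000, 1) AS columnName_trimmed
FROM   source_table;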

Related

Query remote oracle CLOB data from MSSQL

I have read different posts about this problem, but they didn't help with my case.
I am on a local DB (Microsoft SQL Server) and query data on a remote DB (Oracle).
In this data, there is a CLOB column.
The CLOB column shows me only 7 correct values; the others show me <null>.
I tried CAST(DEQ_COMMENTAIRE_REFUS_IMPORT AS VARCHAR(4000)).
I tried SUBSTRING(DEQ_COMMENTAIRE_REFUS_IMPORT, 4000, 1).
Can you help me, please ?
Thank you
Not MSSQL, but in my case we were pulling data into MariaDB from Oracle using the ODBC CONNECT engine.
For CLOBs, we did the following (in outline):
Create a PL/SQL function get_clob_chunk(clobin CLOB, chunkno NUMBER) RETURN VARCHAR2.
This returns the specified nth chunk of 1,000 chars of the CLOB.
We found 1,000 worked best with multibyte data; if the data is all single-byte plain text, then chunks of 4,000 are safe.
Apologies for the absence of our actual code, as I'm a bit rushed for time; a rough sketch of the idea follows.
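A minimal, untested sketch of such a function (the name matches the description above; the offset arithmetic and fixed 1,000-char chunk size are assumptions):

CREATE OR REPLACE FUNCTION get_clob_chunk (
    clobin  IN CLOB,
    chunkno IN NUMBER
) RETURN VARCHAR2
IS
BEGIN
    -- nth chunk of 1,000 characters; returns NULL once the offset is past the end of the CLOB
    RETURN dbms_lob.substr(clobin, 1000, (chunkno - 1) * 1000 + 1);
END get_clob_chunk;
/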
Create an Oracle VIEW which calls the get_clob_chunk function to split the CLOB into 1,000-char chunk columns chunk1, chunk2, ..., chunkn, CAST as VARCHAR2(1000).
We found that Oracle did not like having more than 16 such columns, so we had to split the views into sets of 16 columns each.
This means you must check the maximum size of the data in the CLOB so you know how many chunks/views you need. Doing this dynamically adds complexity, needless to say.
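A hedged sketch of such a view, shown with three chunks only (the table, key, and column names are assumed):

CREATE OR REPLACE VIEW v_mytable_clob_chunks AS
SELECT id,
       CAST(get_clob_chunk(my_clob_col, 1) AS VARCHAR2(1000)) AS chunk1,
       CAST(get_clob_chunk(my_clob_col, 2) AS VARCHAR2(1000)) AS chunk2,
       CAST(get_clob_chunk(my_clob_col, 3) AS VARCHAR2(1000)) AS chunk3
       -- ... continue up to chunkN (at most 16 chunk columns per view in our case)
FROM   mytable;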
Create a view (or CONNECT table) in MariaDB querying that Oracle view.
Create a table/view in MariaDB that joins the chunks back up into a single Text column.
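A rough sketch of the MariaDB side, assuming an ODBC CONNECT table over the Oracle view above (the DSN, credentials, and all names are assumptions):

CREATE TABLE oracle_clob_chunks (
    id     INT,
    chunk1 VARCHAR(1000),
    chunk2 VARCHAR(1000),
    chunk3 VARCHAR(1000)
) ENGINE=CONNECT TABLE_TYPE=ODBC
  CONNECTION='DSN=ORACLE_DSN;UID=scott;PWD=secret'
  TABNAME='V_MYTABLE_CLOB_CHUNKS';

-- Reassemble the chunks into one Text column
CREATE VIEW v_full_clob AS
SELECT id,
       CONCAT(COALESCE(chunk1, ''), COALESCE(chunk2, ''), COALESCE(chunk3, '')) AS full_text
FROM   oracle_clob_chunks;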
Note, in our case, we found that copying Text type columns between MariaDB databases using the ODBC Connect engine was also problematic, and required a similar splitting method.
Frankly, I'd rather use Java/C# for this.

[Excel Destination [28]] Error: An error occurred while setting up a binding for the column. The binding status was "DT_NTEXT"

I'm working on an SSIS package which exports data from SQL Server to Excel. I had a problem converting non-Unicode to Unicode string data types, so I created a Derived Column task and converted to Unicode string [DT_WSTR] the 4 columns which have type varchar(40) in the SQL Server table. That worked for those columns. But I also have a Description column of type varchar(max), and I tried to convert it to Unicode text stream [DT_NTEXT]. It did not work.
If your source is SQL Server (as you said), you can convert it directly in your SQL query:
SELECT
    CONVERT(NVARCHAR(40), att1)
   ,CONVERT(NTEXT, att2)
Convert your VARCHAR into NVARCHAR
Convert your TEXT into NTEXT
It's faster.
P.S. To test it, do not forget to delete or reset your previous OLE DB source component; it will then be forced to re-evaluate your data types.
Does it help you?
The only thing that worked was to cast the Description column in the stored procedure as varchar(1000). I checked the max length of this field and it was about 300 characters, so I made it varchar(1000) and used Unicode string [DT_WSTR] in the Derived Column. This was a workaround, but I still want to know how to do it in the SSIS package without converting the data type in the stored procedure.
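For illustration, a minimal sketch of that stored-procedure workaround (the table name is assumed; Description is the column from the post):

-- Cast the wide column down so SSIS picks it up as a plain string
SELECT CAST(Description AS VARCHAR(1000)) AS Description
FROM   dbo.SourceTable;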

Hibernate MSSQL nvarchar Equivalent datatype in oracle [duplicate]

This question already exists:
nvarchar in sql, oracle and mysql in Hibernate annotation mapping
We developed a project using Hibernate with SQL Server. Now we are migrating from SQL Server to Oracle.
We have used the NVARCHAR datatype in SQL Server in more than 80 tables. When we try to create the tables in Oracle through Hibernate, the tables containing NVARCHAR columns are not generated, while the other tables are created successfully.
When we change the columns to VARCHAR, the tables are generated.
How can we use a UTF-8 datatype in Oracle and MSSQL as a common datatype in Hibernate?
Please help!
What is your database character set? Assuming that you create the database setting the database character set to AL32UTF8, VARCHAR2 columns in Oracle would store data in the UTF-8 character set.
If you cannot use a Unicode character set for your database character set, you'd need to use NVARCHAR2 columns. An NVARCHAR2 column stores data using the national character set. Unless you've upgraded from an old version of Oracle, the national character set will be AL16UTF16 (UTF-16), not UTF-8.
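A quick way to check both character sets is a standard Oracle dictionary query, for example:

SELECT parameter, value
FROM   nls_database_parameters
WHERE  parameter IN ('NLS_CHARACTERSET', 'NLS_NCHAR_CHARACTERSET');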

Date stored as a string in SQL Server when upsized from Access 2007

When I upsize from Access 2007 to SQL Server 2008, I have a few issues...
1. text to nvarchar(255)
Fields with the Text data type in Access are automatically converted to nvarchar(255) in SQL Server (I have Unicode data), but in reality the column length is not that big, so can I change the data type to nvarchar(55) or varchar(100)? Will there be any problem?
2. Date stored as text
Some tables threw an error when I tried to upsize them because of the date column (mm/dd/yyyy). What I did was change the Date/Time column's data type to Text in Access; then the upsizing was successful and the column was converted to nvarchar(255) in SQL Server. I have since converted the nvarchar data type to the date data type in SQL Server, but that does not show me a calendar symbol in the Access front end. How do I get a calendar symbol in the date field in my Access front end?
I have tried the solution given in this link, but it did not work... Please give me some suggestions.
text in SQL Server is deprecated; use nvarchar if you need to store Unicode (multi-language support). Otherwise you can use varchar.
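For the column-size and date questions above, a hedged T-SQL sketch (table and column names are assumed placeholders):

-- Narrow the upsized nvarchar(255) column once you know the real maximum length
ALTER TABLE dbo.MyTable ALTER COLUMN MyTextColumn NVARCHAR(100);

-- Convert the date-stored-as-text column to a real date type
ALTER TABLE dbo.MyTable ALTER COLUMN MyDateColumn DATE;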

How to insert Arabic characters into SQL database?

How can I insert Arabic characters into a SQL Server database? I tried to insert Arabic data into a table and the Arabic characters in the insert script were inserted as '??????' in the table.
I tried to directly paste the data into the table through SQL Server Management Studio, and the Arabic characters were inserted successfully and accurately.
I looked around for resolutions to this problem, and some threads suggested changing the datatype to nvarchar instead of varchar. I tried this as well, but without any luck.
How can we insert Arabic characters into SQL Server database?
For the field to be able to store Unicode characters, you have to use the type nvarchar (or other similar types like ntext, nchar).
To insert the unicode characters in the database you have to send the text as unicode by using a parameter type like nvarchar / SqlDbType.NVarChar.
(For completeness: if you are creating SQL dynamically (against common advice), you put an N before a string literal to make it unicode. For example: insert into table (name) values (N'Pavan').)
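To illustrate sending the text as an nvarchar parameter from the T-SQL side, a minimal sketch using sp_executesql (the eng table and Name column are just examples):

DECLARE @name NVARCHAR(100) = N'حسن';

EXEC sp_executesql
     N'INSERT INTO dbo.eng (Name) VALUES (@p)',
     N'@p NVARCHAR(100)',
     @p = @name;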
I guess the solution is to first change the field to ntext and then write N before the value. For example:
insert into eng(Name) values(N'حسن')
If you are trying to load data directly into the database like me, I found a great way to do so: create a table using Excel and then export it as CSV. Then I used the database browser for SQLite to import the data correctly into the SQL database. You can then adjust the table properties if needed. Hope this helps.