How to store hyphen/dash in Oracle Varchar2

I am trying to store some text with a hyphen aka dash (-) in an Oracle 12c Varchar2 field.
But when I select the value back from the table, the hyphen/dash character shows up as a funny-looking symbol. I have tried escaping before the dash (-), but that still produced the funny-looking symbol.
How do I store hyphens/dashes properly in Oracle?
Thank you

Posting this as an answer because it would be too long for a comment.
First you have to establish whether the problem is with inserting the dash or with fetching it. To verify, run this against the column:
select * from table where column like '%-%';
If you get output, that means it is stored properly. So the problem is with displaying it.
If you don't get output, that means you are not inserting it properly. In that case, show your insert statement. You just have to treat the dash like any other string character.
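If the LIKE test is ambiguous, DUMP shows exactly which byte values Oracle stored, so you can tell a plain hyphen from a look-alike dash. A minimal sketch, assuming a hypothetical table your_table with column your_column:
select your_column,
       dump(your_column, 1016) as byte_values  -- hex codes plus the character set name
from your_table;
-- a plain ASCII hyphen shows up as byte 2d (decimal 45); an en dash or em dash
-- pasted in from a word processor shows a multi-byte sequence such as e2,80,93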

Related

How to get rid of special character in Netezza columns

I am transferring data from one Netezza database to another using Talend, an ETL tool. When I pull data from a varchar(30) field and try to put it in the new database's varchar(30) field, it gives an error saying it's too long. Logs show the field has whitespace at the end followed by a square, representing some character I can't figure out. I attached a screenshot of the logs below. I have tried writing SQL to pull this field and replace what I thought was a CRLF, but no luck. When I do a select on the field and get the length, it has a few extra characters than what you see, so something is there and I want to get rid of it. Trimming does not do anything.
This SQL does not return a length shorter than simply doing length() on the column itself. Does anyone know what else it could be?
SELECT LENGTH(trim(translate(TRANSLATE(<column>, chr(13), ''), chr(10), ''))) as len_modified
Note that the last column in the logs, where you see a square in brackets, is supposed to show the last character examined.
Save the data to a larger target column size that works: if the data is varchar(30), put it in a varchar(500) column and get the load to work. Then go through the longest values character by character to determine what character is being added. Use functions like ascii() to get the code of the individual characters at the beginning and end. Most likely you are getting some additional character at the beginning or the end. Once you know what the extra character data is, write code to remove it, or to never load it, so that it fits in the 30-character column. Or just make your target column longer and keep the additional characters, for example varchar(30) becomes varchar(32) (waste the space, but don't alter the data as it comes in to you).
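A rough sketch of that character-by-character inspection, with hypothetical names (addr_field, staging_table) and assuming length(), substr() and ascii() behave on Netezza as on most platforms:
select addr_field,
       length(addr_field)                               as total_len,
       ascii(substr(addr_field, 1, 1))                  as first_char_code,
       ascii(substr(addr_field, length(addr_field), 1)) as last_char_code
from staging_table
order by length(addr_field) desc
limit 20;  -- start with the longest values
Once the offending code is known (0 would suggest an embedded NUL, 13/10 a CR/LF), it can be stripped with translate() or replace() before the value has to fit in the varchar(30) target.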

SQL Server NText field limited to 43,679 characters?

I'm working with a SQL Server database in order to store a very long Unicode string. The field is of type 'ntext', which theoretically should be limited to 2^30 Unicode characters.
From MSDN documentation:
ntext
Variable-length Unicode data with a maximum string length of 2^30 - 1 (1,073,741,823) bytes. Storage size, in bytes, is two times the string length that is entered. The ISO synonym for ntext is national text.
I made this test:
Generate 50,000 characters string.
Run an Update SQL statement
UPDATE [table]
SET Response='... 50,000 character string...'
WHERE ID='593BCBC0-EC1E-4850-93B0-3A9A9EB83123'
Check the result - what is actually stored in the field at the end.
The result was that the field [Response] contained only 43,679 characters. All the characters at the end of the string were thrown away.
Why does this happen? How can I fix it?
If this is really the capacity limit of this data type (ntext), what other data type can store a longer Unicode string?
Based on what I've seen, you may just only be able to copy out 43,679 characters. It is storing all the characters; they're in the db (check this with Select Len(Response) From [table] Where... to verify), and SSMS has a problem copying more than that when you go to look at the full data.
The NTEXT datatype is deprecated and you should use NVARCHAR(MAX) instead.
I see two possible explanations:
The ODBC driver you use to connect to the database truncates the parameter value when it is too long (try using SSMS instead).
You write that you generate your input string. I suspect you generate CHAR(0), which is the NUL character.
If the second case applies, make sure you cannot generate the \0 character.
EDIT:
I don't know how you check the length, but keep in mind that LEN does not count trailing whitespace:
SELECT LEN('aa     ') AS length -- 2 (the five trailing spaces are ignored)
,DATALENGTH('aa     ') AS datalength -- 7
Last possible solution I see is that you do something like:
SELECT 'aa      aaaa'
-- result displayed/copied in SSMS as `aa aaaa`: the multiple whitespaces get collapsed, so when you count what you see you lose them
Check whether the query below returns 100k:
SELECT DATALENGTH(ntext_column)
for the full byte count. To get the complete value out, right-click the grid result and save the results to a file.
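Putting the NVARCHAR(MAX) advice and the LEN/DATALENGTH check together, a minimal sketch using the table, column, and ID from the question (untested against your schema; try it on a copy first):
-- convert the deprecated ntext column to nvarchar(max)
ALTER TABLE [table] ALTER COLUMN Response NVARCHAR(MAX);

-- check what is really stored, independent of what SSMS shows or copies
SELECT LEN(Response)        AS char_count,
       DATALENGTH(Response) AS byte_count  -- roughly 2 bytes per Unicode character
FROM [table]
WHERE ID = '593BCBC0-EC1E-4850-93B0-3A9A9EB83123';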
Can confirm. The actual limit is 43,679. We had a problem with a subscription service for a week. All the data looked good, but it still gave us an error that one of the fields had invalid values, even though it was getting correct values in. It turned out that the parameters were stored in NText and it maxed out at 43,679 characters. And because we cannot change the database design, we had to create two different subscriptions for the same thing and move half of the entities to the other one.

Replace special character apostrophe with normal apostrophe

I have a field which is getting data that contains a special apostrophe character outside of the normal Oracle ASCII range of 0-127. I am trying to do a replace on it, but the character keeps being switched to a ? in the DDL. I'm looking for another way to do the replace.
This works in a query but switches when put in the DDL for a view
regexp_replace(field_name,'’',chr(39))
switches to
regexp_replace(field_name,'?',chr(39))
A dump function shows that Oracle is storing the apostrophe as the three byte values 226,128,153. I tried to write the replace on a concatenation of those, but that didn't work either.
First, examine the original data that contains the weird apostrophe. I'm not convinced that it is indeed three characters. Use this:
select value
, substr(value, 5, 1) one_character
, ascii(substr(value, 5, 1)) ascii_value
from table;
This would isolate the 5th character from a column value and its ascii value. Adjust the 5 to the place where the weird apostrophe is located.
When you have the ascii value, use plain replace like this to get rid of it (regexp_replace seems overkill):
replace(value, chr(ascii_value_of_weird_apostrophe), chr(39));
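If the character turns out to be the right single quotation mark (the bytes 226,128,153 are the UTF-8 encoding of U+2019), one way to keep the view DDL pure ASCII, so nothing gets mangled into a ?, is to build the character with UNISTR rather than pasting it literally. A sketch with hypothetical view and table names:
create or replace view v_field_fixed as
select replace(field_name, unistr('\2019'), chr(39)) as field_name  -- U+2019 spelled in ASCII
from your_table;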

count number of characters in nvarchar column

Does anyone know a good way to count characters in a text (nvarchar) column in Sql Server?
The values there can be text, symbols and/or numbers.
So far I have used sum(datalength(column))/2, but this only works for text (it's a method based on datalength, and that can vary from one type to another).
You can find the number of characters using the system function LEN.
i.e.
SELECT LEN(Column) FROM TABLE
Use
SELECT LEN(yourfield) FROM table;
Use the LEN function:
Returns the number of characters of the specified string expression, excluding trailing blanks.
Doesn't SELECT LEN(column_name) work?
The text datatype doesn't work with the LEN function.
ntext, text, and image data types will be removed in a future version
of Microsoft SQL Server. Avoid using these data types in new
development work, and plan to modify applications that currently use
them. Use nvarchar(max), varchar(max), and varbinary(max) instead. For
more information, see Using Large-Value Data Types.
Source
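If you are stuck with a text or ntext column for now, a common workaround is to cast to one of the max types before counting. A sketch with hypothetical column and table names:
SELECT LEN(CAST(text_column AS VARCHAR(MAX)))   AS text_char_count,
       LEN(CAST(ntext_column AS NVARCHAR(MAX))) AS ntext_char_count
FROM yourtable;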
I had a similar problem recently, and here's what I did:
SELECT
columnname as 'Original_Value',
LEN(LTRIM(columnname)) as 'Orig_Val_Char_Count',
N'['+columnname+']' as 'UnicodeStr_Value',
LEN(N'['+columnname+']')-2 as 'True_Char_Count'
FROM mytable
The first two columns look at the original value and count the characters (minus leading/trailing spaces).
I needed to compare that with the true count of characters, which is why I used the second LEN function. It sets the column value to a string, forces that string to Unicode, and then counts the characters.
By using the brackets, you ensure that any leading or trailing spaces are also counted as characters; of course, you don't want to count the brackets themselves, so you subtract 2 at the end.
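For an nvarchar column specifically, the two counting approaches from this question compare like this (a sketch, assuming a column col in a table t, and ignoring surrogate pairs):
SELECT LEN(col)            AS len_chars,        -- ignores trailing spaces
       DATALENGTH(col) / 2 AS datalength_chars  -- counts trailing spaces; nvarchar stores 2 bytes per character
FROM t;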

How do I escape an enclosure character in a SQL Loader data file?

I have a SQL*Loader control file that has a line something like this:
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '#'
Normally, I'd use a quotation mark, but that seems to destroy emacs's python syntax highlighting if used inside a multi-line string. The problem is that we are loading an ADDRESS_LINE_2 column where only 7,000 out of a million records are loading because they have lines like this:
...(other columns),Apt #2,(other columns)...
Which is of course causing errors. Is there any way to escape the enclosing character so this doesn't happen? Or do I just need to choose a better enclosing character?
I've looked through the documentation, but don't seem to have found an answer to this.
I found it...
If two delimiter characters are encountered next to each other, a single occurrence of the delimiter character is used in the data value. For example, 'DON''T' is stored as DON'T. However, if the field consists of just two delimiter characters, its value is null.
Field List Reference
Unfortunately, SQL*Loader counts both occurrences of the delimiter when checking the maximum length of the field. For instance, DON''T will be rejected by a CHAR(5) field, with ORA-12899: value too large for column blah.blah2 (actual: 6, maximum: 5).
At least on my 11gR2. I haven't tried other versions...
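For reference, a minimal sketch of how the doubling plays out, with hypothetical file, table, and column names:
Control file (addresses.ctl):
LOAD DATA
INFILE 'addresses.dat'
INTO TABLE addresses
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '#'
(name, address_line_2, city)

Data file (addresses.dat):
Smith,#Apt ##2#,Springfield
The doubled ## loads ADDRESS_LINE_2 as Apt #2, but per the comment above the value is checked against the column length as 7 characters, not 6, on versions that behave this way.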