Hi guys, I'm using VARCHAR2 for a product name field, but when I query the database from the Run SQL Command Line it shows too many empty spaces. How can I fix this without changing the datatype?
Here is the link to the screenshot:
http://img203.imageshack.us/img203/20/varchar.jpg
The data that got inserted into the database (probably through some ETL process) had trailing spaces that were never trimmed.
You could update the column in place (pseudo code):
UPDATE Table SET Column = TRIM(Column);
If TRIM does not change the results, that tells you that there are no trailing spaces in the actual database rows; they're just being added as part of the formatted screen output.
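If you want to check first whether trailing spaces really exist, a quick count (using placeholder names; substitute your real table and column) is:
-- placeholder names: PRODUCTS / PRODUCT_NAME
SELECT COUNT(*) FROM products WHERE product_name <> RTRIM(product_name);
If this returns 0, the extra spaces are purely a display artifact (see below).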
By default, sqlplus (the command-line Oracle tool you appear to be using) uses the maximum length of the varchar2 column as the (fixed) width when displaying the results of a select statement.
If you want to change this, use the column format sqlplus command before running the select. For example:
column DEPT_NAME format a20
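So for the column in your screenshot, assuming it is called PRODUCT_NAME, something like this before the SELECT should tighten up the output:
-- PRODUCT_NAME / PRODUCTS are placeholders; use your actual column and table names
COLUMN product_name FORMAT A30
SELECT product_name FROM products;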
Hi,
Try trimming on both sides:
UPDATE TableName SET FieldName = RTRIM(LTRIM(FieldName));
Regards
I have executed a script that updates a column in a database and that worked well.
The script contains an update statement like the one below; it tries to set display_name to a value containing an apostrophe (inverted comma).
Update table1
Set display_name = 'I'm Kumar'
Where internal_name = 'IK';
When I executed the same script against another database, it updated the display name with some special character in place of the apostrophe. It seems the script is being treated as ANSI-encoded rather than UTF-8.
Please help me understand why this is happening. Is there any setting at the database level that needs to change?
Yes, and that setting is client_encoding.
The default value is specified in the server configuration, and the client has to override it if desired:
SET client_encoding = 'UTF8';
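In PostgreSQL (which is where client_encoding applies), you can check what is currently in effect and then override it for the session:
SHOW server_encoding;            -- encoding of the database itself
SHOW client_encoding;            -- what the client is currently assumed to send
SET client_encoding = 'UTF8';    -- tell the server the script text is UTF-8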
I have to store some strange characters in my SQL Server DB which are used by an Epson Receipt Printer code page.
Using an INSERT statement, all are stored correctly except one - [SCI] (nchar(154)). I realise that this is a control character that isn't representable in a string, but the character is replaced by a '?' in the stored DB string, suggesting that it is being parsed (unsuccessfully) somewhere.
The collation of the database is LATIN1_GENERAL_CI_AS so it should be able to cope with it.
So, for example, if I run this INSERT:
INSERT INTO Table(col1) VALUES ('abc[SCI]123')
Where [SCI] is the character, a resulting SELECT query will return 'abc?123'.
However, if I use NCHAR(154), by directly inserting or by using a REPLACE command such as:
UPDATE Table SET col1 = REPLACE(col1, '?', NCHAR(154))
The character is stored correctly.
My question is: why? And how can I store it directly from an INSERT statement? The latter is preferable, as I am writing from an existing application that produces the INSERT statement, which I don't really want to have to change.
Thank you in advance for any information that may be useful.
When you write a literal string in SQL it is created as a VARCHAR unless you prefix it with N. This means any Unicode characters that have no equivalent in the database's code page will be lost (replaced with '?'). Instead, write your INSERT statement like this:
INSERT INTO Table(col1) VALUES (N'abc[SCI]123')
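As a sketch of why this matters, you can reproduce the substitution without any table at all by forcing the character through a VARCHAR conversion (this just illustrates the code-page round trip; NCHAR(154) is the same character as in the question):
SELECT NCHAR(154)                      AS kept_as_nvarchar,  -- stored intact
       CAST(NCHAR(154) AS VARCHAR(10)) AS lost_as_varchar;   -- comes back as '?'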
I tried using Toad 12.1 DBA and SQL Developer to achieve my goal, but I got the same results.
When I update an NVARCHAR2 cell in the grid result, I can set and commit the '€' sign as the value. But when I execute an update script to do the same thing, it stores the '?' character as the data.
Here is the result when I edit and commit the data in the grid:
Here is the problem when I use an update script to do the same thing:
I tried using different NLS_LANG parameters, but they did not work either:
AMERICAN_AMERICA.WE8ISO8859P9
AMERICAN_AMERICA.AL16UTF16
The database NLS parameters are as follows:
SELECT * FROM NLS_DATABASE_PARAMETERS where PARAMETER in ('NLS_CHARACTERSET','NLS_NCHAR_CHARACTERSET');
PARAMETER                 VALUE
------------------------  ------------
NLS_CHARACTERSET          WE8ISO8859P9
NLS_NCHAR_CHARACTERSET    AL16UTF16
Create script for the simple table:
CREATE TABLE AAA
(
NVARCHAR2COL NVARCHAR2(50)
);
I also tried using SQL*Plus to execute the update script, but it stored a question mark as the data as well.
Edit: one more thing, using an update script with the unistr function works, but I need to update the data using readable text:
UPDATE AAA SET NVARCHAR2COL = unistr('\20AC');
COMMIT;
Solution: using the @Nationalized annotation on my JPA entities for NVARCHAR fields solved my problem.
The default data type of a text literal is CHAR, and an unprefixed literal is expressed in the database character set (WE8ISO8859P9 here). That character set has no '€' symbol, so '€' is converted to '?' when the statement is compiled.
To use the national character set, the string value must be prefixed with N. The N prefix makes the literal an NCHAR at compile time.
UPDATE AAA SET NVARCHAR2COL = N'€'; should work.
Alternatively UPDATE AAA SET NVARCHAR2COL = to_nchar('€'); can be used.
When updating a value in a grid the development tool does the same transparently for the user.
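If you want to verify what actually ended up in the column, DUMP shows the stored bytes (a sketch against the AAA table from the question; the 1016 option prints the character-set name plus hex bytes):
UPDATE AAA SET NVARCHAR2COL = N'€';
COMMIT;
-- U+20AC stored correctly appears as bytes 20,ac in AL16UTF16;
-- a lost character appears as 0,3f (the '?' code point)
SELECT NVARCHAR2COL, DUMP(NVARCHAR2COL, 1016) AS stored_bytes FROM AAA;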
I'm new to Oracle (and to SQL in general) and trying to get stuff done.
In MS SQL Server I can do select * from tablename; and it displays all the data in a tabulated format. If I do that in Oracle it displays the data in a weird format that's hard to read, unless I specifically select the columns I want.
Is there a way to display the data in Oracle formatted like regular MSSQL format?
I looked around and people say to use SHOW COLUMNS FROM TABLENAME; but that gives me an error saying "unknown option". I can do Desc tablename but that only gives me the metadata.
Welcome to Oracle!
As a new user, you should know that we have two command-line interfaces. It sounds like you're using SQL*Plus. We also have something called SQLcl.
The latter includes automatic output formatting, so you don't have to do things like 'linesize' or 'format' in your commands to get readable query results.
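For example, in SQLcl you can pick an output formatter once per session and every query after that is rendered automatically (shown as a sketch; tablename is a placeholder):
set sqlformat ansiconsole
select * from tablename;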
I think you are using the SQL*Plus window to view the output. You have to format it to make it readable.
First:
set linesize 5000;
This command makes each row wide enough to hold all of its columns. Increase or decrease the number according to your needs.
set pagesize 100;
Here, 100 is the number of records shown per page.
Moreover, some columns have a very large length in the database. You need to format the display of those specific columns so they are shown in a shorter width.
COLUMN column_name FORMAT A10;
Here, A10 means a 10-character display width for that column.
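Putting these together, a typical preamble before your query might look like this (the column and table names are placeholders):
set linesize 200;
set pagesize 100;
COLUMN product_name FORMAT A30;
select * from tablename;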
Use the query below; use solution 2 if you don't know the exact table name (note that all_tab_cols stores table names in uppercase):
--Solution 1
select column_name, table_name from all_tab_cols where table_name = 'TABLE_NAME';
--Solution 2
select column_name, table_name from all_tab_cols where table_name like '%TABLE_NAME%';
Download SQLDeveloper from Oracle. It's free to use.
I have a table in a SQL Server database (viewed in SSMS):
I am trying to update the contents of the "Name" column by removing the leading spaces from every entry. Following the question How to delete leading empty space in a SQL Database Table using MS SQL Server Managment Studio, I am therefore trying to run the following:
UPDATE ReferenceHierarchy set Name = LTRIM(Name)
The problem is that when I try to run it, it says "Name" is an invalid column. When I look at the code-completion options, it only offers the three fields ID, ParentID, and Sequence. Interestingly, these are the three non-NVarChar fields.
What could be the problem? And how can I fix it?
LTRIM only removes leading normal spaces. In your case the leading character may be a tab.
Try this for a tab character:
UPDATE ReferenceHierarchy SET Name = LTRIM(REPLACE(Name, CHAR(9), ''))
If the leading character is neither a tab nor a normal space, it might be some other non-printing character.
Then try this:
UPDATE ReferenceHierarchy SET Name = TRIM(LTRIM(CASE WHEN Name NOT LIKE '[A-Za-z0-9]%'
    THEN STUFF(Name, 1, 1, ' ') ELSE Name END))
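If you are not sure what the leading character actually is, it can help to look at its code point first instead of guessing (a sketch against the same table; UNICODE returns the code point of the first character of the string):
-- 32 = space, 9 = tab, 160 = non-breaking space
SELECT TOP (10) Name, UNICODE(Name) AS first_char_code
FROM ReferenceHierarchy;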
TT's comment was the key to solving this one. I actually think there is an interesting moral here: if things are not working the way you think they should, perhaps your environment is not what you think it is. What was actually happening was that I had an old version of the same database in SSMS, in which the ReferenceHierarchy table did not have a "Name" column, and without realizing it, I was running my query against that version. Running it against the correct version of the database solved my problem.