SQL - Select uppercase fields

I have a table with a column (collation SQL_Latin1_General_CP1_CI_AI) where a couple of hundred rows have that column filled in all uppercase.
I would like to know if it is possible to select all the rows that have that field in uppercase and, ultimately, whether I can run an update to capitalize those fields.
I'm using SQL Server Management Studio on SQL Server 2005.
Thanks

DECLARE @T TABLE
(
col VARCHAR(3) COLLATE SQL_Latin1_General_CP1_CI_AI
)
INSERT INTO @T
SELECT 'foo' UNION ALL SELECT 'BAR' UNION ALL SELECT ''

-- Only rows that are entirely uppercase (compared case-sensitively) and longer
-- than one character get the tail lowercased, keeping the first letter as-is.
UPDATE @T
SET col = STUFF(LOWER(col), 1, 1, LEFT(col, 1))
WHERE col = UPPER(col) COLLATE SQL_Latin1_General_CP1_CS_AS AND LEN(col) > 1

SELECT * FROM @T
Returns
(1 row(s) affected)
col
----
foo
Bar
(3 row(s) affected)

Related

Efficient way to merge alternating values from two columns into one column in SQL Server

I have two columns in a table. I want to merge them into a single column, but the merge should be done by taking alternating values from each column.
For example:
Column A --> value (1,2,3)
Column B --> value (A,B,C)
Required result - (1,A,2,B,3,C)
It should be done without loops.
You need to make use of the UNION and get a little creative with how you choose to alternate. My solution ended up looking like this.
SELECT ColumnA
FROM Table
WHERE ColumnA % 2 = 1
UNION
SELECT ColumnB
FROM Table
WHERE ColumnA % 2 = 0
If you have an ID/PK column, that could just as easily be used; I just didn't want to assume anything about your table (a sketch using such a column follows below).
EDIT:
If your table contains duplicates that you wish to keep, use UNION ALL instead of UNION
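For instance, here is a minimal sketch of that idea, assuming the table is called MyTable and has an integer id column to drive the ordering (both names are assumptions, as is the varchar(10) width):
-- Sketch only: emit two rows per source row (ColumnA first, then ColumnB)
-- and order by the assumed id column so the values interleave as 1,A,2,B,3,C.
SELECT CASE WHEN x.n = 1
            THEN CAST(t.ColumnA AS varchar(10))
            ELSE t.ColumnB
       END AS merged
FROM MyTable t
CROSS JOIN (VALUES (1), (2)) AS x(n)
ORDER BY t.id, x.n;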
Try this:
SELECT [value]
FROM [Table]
UNPIVOT
(
[value] FOR [Column] IN ([Column_A], [Column_B])
) UNPVT
If you have SQL Server 2017 or higher you can use STRING_AGG:
SELECT QUOTENAME(STRING_AGG (cast(a as varchar(1)) + ',' + b, ','), '()')
FROM test;
In older versions, depending on how much data you have in your tables you can also try:
SELECT QUOTENAME(STUFF(
(SELECT ',' + cast(a as varchar(1)) + ',' + b
FROM test
FOR XML PATH('')), 1, 1,''), '()')
Here you can try a sample
http://sqlfiddle.com/#!18/6c9af/5
with data as (
select *, row_number() over (order by colA) as rn
from t
)
select rn,
case rn % 2 when 1 then colA else colB end as alternating
from data;
The following SQL uses an undocumented aggregate concatenation technique. This is described in Inside Microsoft SQL Server 2008: T-SQL Programming on page 33.
declare @x varchar(max) = '';
declare @t table (a varchar(10), b varchar(10));
insert into @t values (1,'A'), (2,'B'), (3,'C');
select @x = @x + a + ',' + b + ','
from @t;
select '(' + LEFT(@x, LEN(@x) - 1) + ')';

SQL Select values starting with a capital letter

I have a table with a varchar type column. I want to select values from this column which start with a capital letter only.
For example
MyTable
Col1 Col2
Argentina 2
Brasil 3
uruguay 4
I want my select query to return:
Argentina
Brasil
This is a bit of a pain. You can use ASCII() or COLLATE, but these depend on how the data is stored. For varchar() and char(), this will work:
where ASCII(left(col1, 1)) between ASCII('A') and ASCII('Z')
With TestData As
(
Select '012324' As Name
Union All Select 'ABC'
Union All Select 'abc'
Union All Select 'aBc'
Union All Select 'ABé'
Union All Select 'ABÉ'
)
Select *
From TestData
Where Name = UPPER(Name) Collate SQL_Latin1_General_CP1_CS_AS
Hope this will help.
You can use UPPER and COLLATE to do this.
For example:
create table #a (text varchar(10))
insert into #a values ('HAI'),('Hai'),('hai'),('44A')
select * from #a
where substring([text],1,1) = substring(UPPER([text]),1,1) collate SQL_Latin1_General_CP1_CS_AS
and substring([text],1,1) like '[A-Z]%'
select Col1 from MyTable where Col1 like '[A-Z]%'
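Note that with a case-insensitive column collation the [A-Z] range also matches lowercase letters, so 'uruguay' would slip through. A minimal variant (same assumed table and column names) that forces a binary comparison:
-- Force a binary collation so [A-Z] only matches the uppercase code points.
SELECT Col1
FROM MyTable
WHERE Col1 COLLATE Latin1_General_BIN LIKE '[A-Z]%'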

How to find and update columns containing a superscript character

I'm using a SQL Server database. In a specific column, in some cells (which are of type varchar), there is this character: ³
I don't know how to find the cells containing this character and replace it with the character j.
When I use this query to find cells :
SELECT *
FROM table
WHERE col LIKE '%³%'
I get regular 3's as well.
Can you tell me how to do this? I looked on the internet but couldn't find what I want. Thanks.
All you need to do is:
UPDATE table
SET col = replace(col, N'³', 'j');
You don't need to search for the rows before updating / replacing the values, so no need for a SELECT. All you need to do is UPDATE.
If you want to add more conditions to filter your rows and only update some, you can just use a WHERE clause and specify more filters.
UPDATE table
SET col = replace(col, N'³', 'j')
WHERE col NOT LIKE '%..%'
AND col2 ...
AND col3 ...
etc.
You can use the same WHERE clause in a simple SELECT to see what rows will get updated before running the UPDATE.
In order not to go over all of the rows and to make your query faster, you can add a WHERE clause to update only the rows which contain that character.
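For example, a preview along those lines might look like this (table and column names are the question's placeholders; the case-sensitive collation keeps the LIKE from also matching a regular 3):
-- Preview which rows would be touched before running the UPDATE.
SELECT *
FROM [table]
WHERE col LIKE N'%' + NCHAR(179) + N'%' COLLATE SQL_Latin1_General_CP1_CS_AS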
EDIT/UPDATE (final):
In order to detect if data is uppercase / superscript you need to use collation:
UPDATE #t
SET col1 = replace(col1, NCHAR(179) COLLATE SQL_Latin1_General_CP1_CS_AS, 'j');
This only works if you are using a case-sensitive collation; use NCHAR(179) to represent the superscript:
create table #t (col1 nvarchar(50) COLLATE SQL_Latin1_General_CP1_CS_AS)
INSERT INTO #t
VALUES (NCHAR(179)),('3')
UPDATE #t
SET col1 = REPLACE(col1, NCHAR(179), 'j')
WHERE col1 like N'%' + NCHAR(179) + N'%'
SELECT *
FROM #t
EDIT:
As Radu has pointed out, this will work without the table having a case-sensitive collation:
create table #t (col1 nvarchar(50))
INSERT INTO #t
VALUES (NCHAR(179)),('3')
UPDATE #t
SET col1 = REPLACE(col1, NCHAR(179) COLLATE SQL_Latin1_General_CP1_CS_AS, 'J')
WHERE col1 like N'%' + NCHAR(179) + N'%' COLLATE SQL_Latin1_General_CP1_CS_AS
SELECT *
FROM #t

How to split a string into multiple columns in SQL Server [duplicate]

This question already has answers here:
How to split a comma-separated value to columns
I have the string "1:182:1,1:195:2,1:213:1".
I split the string on ',' into different rows. Now I want each row's single column to be split into 3 different columns in the same row.
I tried using
SELECT LEFT(ThemeProperty, CHARINDEX(':', ThemeProperty) - 1) ,
RIGHT(ThemeProperty,
LEN(ThemeProperty) - CHARINDEX(':', ThemeProperty))
FROM #tempThemeProperty
But its output is
(No column name) (No column name)
1 182:1
1 195:2
1 213:1
But I want it to be
Column1 Column2 Column3
1 182 1
So any help would be appreciated.
DECLARE @Tmp TABLE (Id INT, Name VARCHAR(20))
INSERT @Tmp SELECT 1,'182:1'
INSERT @Tmp SELECT 2,'195:2'
INSERT @Tmp SELECT 3,'213:1'
--Using PARSENAME
SELECT Id,
PARSENAME(REPLACE(Name,':','.'),2) Value1,
PARSENAME(REPLACE(Name,':','.'),1) Value2
FROM @Tmp
This should be able to do it. Let me know if it helps.
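For what it's worth, PARSENAME handles up to four parts, so the original three-part value can be split directly in the same way (a small sketch using the question's sample string):
-- Sketch: split '1:182:1' into three columns by treating ':' as '.'
DECLARE @s varchar(20) = '1:182:1'
SELECT PARSENAME(REPLACE(@s, ':', '.'), 3) AS Column1,
       PARSENAME(REPLACE(@s, ':', '.'), 2) AS Column2,
       PARSENAME(REPLACE(@s, ':', '.'), 1) AS Column3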
This is one method, but it is subject to SQL injection:
declare @s varchar(2000), @data varchar(2000)
select @s = '1:182:1'
select @data = '''' + replace(@s, ':', ''',''') + ''''
exec('select ' + @data)

Count the Null columns in a row in SQL

I was wondering about the possibility of counting the null columns of a row in SQL. I have a table Customer that has nullable values; simply put, I want a query that returns an int with the number of null columns for a certain row (a certain customer).
This method assigns a 1 or 0 for null columns, and adds them all together. Hopefully you don't have too many nullable columns to add up here...
SELECT
((CASE WHEN col1 IS NULL THEN 1 ELSE 0 END)
+ (CASE WHEN col2 IS NULL THEN 1 ELSE 0 END)
+ (CASE WHEN col3 IS NULL THEN 1 ELSE 0 END)
...
...
+ (CASE WHEN col10 IS NULL THEN 1 ELSE 0 END)) AS sum_of_nulls
FROM table
WHERE Customer=some_cust_id
Note, you can also do this perhaps a little more syntactically cleanly with IF() if your RDBMS supports it.
SELECT
(IF(col1 IS NULL, 1, 0)
+ IF(col2 IS NULL, 1, 0)
+ IF(col3 IS NULL, 1, 0)
...
...
+ IF(col10 IS NULL, 1, 0)) AS sum_of_nulls
FROM table
WHERE Customer=some_cust_id
I tested this pattern against a table and it appears to work properly.
My answer builds on Michael Berkowski's answer, but to avoid having to type out hundreds of column names, what I did was this:
Step 1: Get a list of all of the columns in your table
SELECT COLUMN_NAME FROM INFORMATION_SCHEMA.COLUMNS WHERE TABLE_NAME = 'myTable';
Step 2: Paste the list in Notepad++ (any editor that supports regular expression replacement will work). Then use this replacement pattern
Search:
^(.*)$
Replace:
\(CASE WHEN \1 IS NULL THEN 1 ELSE 0 END\) +
Step 3: Prepend SELECT identityColumnName, and change the very last + to AS NullCount FROM myTable and optionally add an ORDER BY...
SELECT
identityColumnName,
(CASE WHEN column001 IS NULL THEN 1 ELSE 0 END) +
-- ...
(CASE WHEN column200 IS NULL THEN 1 ELSE 0 END) AS NullCount
FROM
myTable
ORDER BY
NullCount DESC
For ORACLE-DBMS only.
You can use the NVL2 function:
NVL2( string1, value_if_not_null, value_if_null )
Here is a select with a similar approach to the one Michael Berkowski suggested:
SELECT (NVL2(col1, 0, 1)
+ NVL2(col2, 0, 1)
+ NVL2(col3, 0, 1)
...
...
+ NVL2(col10, 0, 1)
) AS sum_of_nulls
FROM table
WHERE Customer=some_cust_id
A more generic approach would be to write a PL/SQL block and use dynamic SQL: build a SELECT string with the NVL2 method from above for every column listed in all_tab_columns for a specific table.
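A rough, untested sketch of that idea (the CUSTOMER table name, the customer_id column, and the bind value 42 are all assumptions):
-- Build "NVL2(col,0,1) + ..." for every column of CUSTOMER from
-- all_tab_columns, then run it dynamically for one customer.
DECLARE
  v_expr  VARCHAR2(32767);
  v_nulls NUMBER;
BEGIN
  SELECT LISTAGG('NVL2(' || column_name || ',0,1)', ' + ')
           WITHIN GROUP (ORDER BY column_id)
    INTO v_expr
    FROM all_tab_columns
   WHERE table_name = 'CUSTOMER';

  EXECUTE IMMEDIATE
    'SELECT ' || v_expr || ' FROM customer WHERE customer_id = :id'
    INTO v_nulls
    USING 42;

  dbms_output.put_line('Null columns: ' || v_nulls);
END;
/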
Unfortunately, in a standard SQL statement you will have to enter each column you want to test; to test them all programmatically you could use T-SQL. A word of warning though: ensure you are working with genuine NULLs, as you can have blank stored values that the database will not recognise as a true NULL (I know this sounds strange).
You can avoid this by capturing the blank values and the NULLs in a statement like this:
CASE WHEN col1 & '' = '' THEN 1 ELSE 0 END
Or in some databases such as Oracle (not sure if there are any others) you would use:
CASE WHEN col1 || '' = '' THEN 1 ELSE 0 END
You don't state the RDBMS. For SQL Server 2008...
SELECT CustomerId,
(SELECT COUNT(*) - COUNT(C)
FROM (VALUES(CAST(Col1 AS SQL_VARIANT)),
(Col2),
/*....*/
(Col9),
(Col10)) T(C)) AS NumberOfNulls
FROM Customer
Depending on what you want to do, and if you ignore mavens, and if you use SQL Server 2012, you could do it another way.
The total number of candidate columns ("slots") must be known.
1. Select all the known "slots" column by column (they're known).
2. Unpivot that result to get a table with one row per original column. This works because the null columns don't unpivot, and you know all the column names.
3. Count(*) the result to get the number of non-nulls; subtract that from the total to get your answer.
Like this, for 4 "seats" in a car:
select 'empty seats' = 4 - count(*)
from
(
select carId, seat1, seat2, seat3, seat4 from cars where carId = @carId
) carSpec
unpivot (FieldValue FOR seat in ([seat1],[seat2],[seat3],[seat4])) AS results
This is useful if you may need to do more later than just count the number of non-null columns, as it gives you a way to manipulate the columns as a set too.
This will give you the number of columns which are not null. You can apply this appropriately:
SELECT ISNULL(COUNT(col1),'') + ISNULL(COUNT(col2),'') + ISNULL(COUNT(col3),'')
FROM TABLENAME
WHERE ID=1
The script below gives you the NULL value count within a row, i.e. how many columns do not have values.
SELECT
*,
(SELECT COUNT(*)
FROM (VALUES (Tab.Col1)
,(Tab.Col2)
,(Tab.Col3)
,(Tab.Col4)) InnerTab(Col)
WHERE Col IS NULL) NullColumnCount
FROM (VALUES(1,2,3,4)
,(NULL,2,NULL,4)
,(1,NULL,NULL,NULL)) Tab(Col1,Col2,Col3,Col4)
Just to demonstrate I am using an inline table in my example.
Cast or convert all column values to a common type; this lets columns of different types be compared in the same VALUES list, as in the sketch below.
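As a small sketch of that point (Customer, CustomerId, and Col1..Col3 are assumed names), mixed-type columns can be cast to sql_variant inside the VALUES list so they share one column:
-- Cast heterogeneous columns to sql_variant so they can sit in one
-- unpivoted column of the inline VALUES table.
SELECT t.CustomerId,
       (SELECT COUNT(*)
        FROM (VALUES (CAST(t.Col1 AS sql_variant)),
                     (CAST(t.Col2 AS sql_variant)),
                     (CAST(t.Col3 AS sql_variant))) v(c)
        WHERE v.c IS NULL) AS NullColumnCount
FROM Customer t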
I haven't tested it yet, but I'd try to do it using a PL/SQL function:
CREATE OR REPLACE TYPE ANYARRAY AS TABLE OF ANYDATA
;
CREATE OR REPLACE Function COUNT_NULL
( ARR IN ANYARRAY )
RETURN number
IS
cnumber number := 0;
BEGIN
for i in 1 .. ARR.count loop
if ARR(i).column_value is null then
cnumber := cnumber + 1;
end if;
end loop;
RETURN cnumber;
EXCEPTION
WHEN OTHERS THEN
raise_application_error
(-20001,'An error was encountered - '
||SQLCODE||' -ERROR- '||SQLERRM);
END
;
Then use it in a select query like this
CREATE TABLE TEST (A NUMBER, B NUMBER, C NUMBER);
INSERT INTO TEST VALUES (NULL,NULL,NULL);
INSERT INTO TEST VALUES (1   ,NULL,NULL);
INSERT INTO TEST VALUES (1   ,2   ,NULL);
INSERT INTO TEST VALUES (1   ,2   ,3   );
SELECT ROWNUM,COUNT_NULL(A,B,C) AS NULL_COUNT FROM TEST;
Expected output
ROWNUM | NULL_COUNT
-------+-----------
1 | 3
2 | 2
3 | 1
4 | 0
This is how I tried it:
CREATE TABLE #temptablelocal (id int NOT NULL, column1 varchar(10) NULL, column2 varchar(10) NULL, column3 varchar(10) NULL, column4 varchar(10) NULL, column5 varchar(10) NULL, column6 varchar(10) NULL);
INSERT INTO #temptablelocal
VALUES (1,
NULL,
'a',
NULL,
'b',
NULL,
'c')
SELECT *
FROM #temptablelocal
WHERE id =1
SELECT count(1) countnull
FROM
(SELECT a.ID,
b.column_title,
column_val = CASE b.column_title
WHEN 'column1' THEN a.column1
WHEN 'column2' THEN a.column2
WHEN 'column3' THEN a.column3
WHEN 'column4' THEN a.column4
WHEN 'column5' THEN a.column5
WHEN 'column6' THEN a.column6
END
FROM
( SELECT id,
column1,
column2,
column3,
column4,
column5,
column6
FROM #temptablelocal
WHERE id =1 ) a
CROSS JOIN
( SELECT 'column1'
UNION ALL SELECT 'column2'
UNION ALL SELECT 'column3'
UNION ALL SELECT 'column4'
UNION ALL SELECT 'column5'
UNION ALL SELECT 'column6' ) b (column_title) ) AS pop WHERE column_val IS NULL
DROP TABLE #temptablelocal
Similarly, but dynamically:
drop table if exists myschema.table_with_nulls;
create table myschema.table_with_nulls as
select
n1::integer,
n2::integer,
n3::integer,
n4::integer,
c1::character varying,
c2::character varying,
c3::character varying,
c4::character varying
from
(
values
(1,2,3,4,'a','b','c','d'),
(1,2,3,null,'a','b','c',null),
(1,2,null,null,'a','b',null,null),
(1,null,null,null,'a',null,null,null)
) as test_records(n1, n2, n3, n4, c1, c2, c3, c4);
drop function if exists myschema.count_nulls(varchar,varchar);
create function myschema.count_nulls(schemaname varchar, tablename varchar) returns void as
$BODY$
declare
calc varchar;
sqlstring varchar;
begin
select
array_to_string(array_agg('(' || trim(column_name) || ' is null)::integer'),' + ')
into
calc
from
information_schema.columns
where
table_schema in ('myschema')
and table_name in ('table_with_nulls');
sqlstring = 'create temp view count_nulls as select *, ' || calc || '::integer as count_nulls from myschema.table_with_nulls';
execute sqlstring;
return;
end;
$BODY$ LANGUAGE plpgsql STRICT;
select * from myschema.count_nulls('myschema'::varchar,'table_with_nulls'::varchar);
select
*
from
count_nulls;
Though I see that I didn't finish parameterising the function.
My answer builds on Drew Chapin's answer, but with changes to get the result using a single script:
use <add_database_here>;
Declare @val Varchar(MAX);
Select @val = COALESCE(@val + str, str) From
(SELECT
'(CASE WHEN ' + COLUMN_NAME + ' IS NULL THEN 1 ELSE 0 END) +' str
FROM INFORMATION_SCHEMA.COLUMNS WHERE TABLE_NAME = '<add table name here>'
) t1 -- getting column names and adding the CASE WHEN to replace NULLs with zeros or ones
Select @val = SUBSTRING(@val, 1, LEN(@val) - 1) -- removing trailing plus sign
Select @val = 'SELECT <add_identity_column_here>, ' + @val + ' AS NullCount FROM <add table name here>' -- adding the SELECT for the identity column, the alias for the null count column, and the FROM
EXEC (@val) -- executing the resulting SQL
With ORACLE:
SELECT <number_of_columns> - json_value(json_array(<comma-separated list of columns>), '$.size()') FROM your_table
json_array will build an array with only the non-null columns, and the json_value expression will give you the size of that array.
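A small sketch of that idea for a hypothetical three-column table t(a, b, c), on Oracle 12.2 or later:
-- json_array omits NULL columns by default (ABSENT ON NULL), so the array
-- size is the number of non-null columns; subtract it from the column count.
SELECT 3 - json_value(json_array(a, b, c), '$.size()') AS null_count
FROM t;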
There isn't a straightforward way of doing so like there would be with counting rows. Basically, you have to enumerate all the columns that might be null in one expression.
So for a table with possibly null columns a, b, c, you could do this:
SELECT key_column,
(CASE WHEN a IS NULL THEN 1 ELSE 0 END)
+ (CASE WHEN b IS NULL THEN 1 ELSE 0 END)
+ (CASE WHEN c IS NULL THEN 1 ELSE 0 END) AS null_col_count
FROM my_table