Ok so I have a column in a table in SQL Server.
Each record has a string (i.e. names).
SOME of the strings have English AND NON-English characters.
I have to select ONLY the records that have English AND non-English characters.
How do I go about doing that?
My try...
Select * from Table
Where Nameofthecolumn NOT LIKE '%[A-Z]%'
Go
This gives me an EMPTY table.
I know for sure that there are at least two records that have English and non-English characters.
I need those two records as output.
I was trying to do
Select * from Table
Where Nameofthecolumn NOT LIKE '%[A-Z,a-z]%' AND Like '%[A-Z,a-z]%'
Go
but it turns out you can't use a boolean with Like/Not Like like that (the second Like is missing its column name).
Please guide me in the right direction.
Thanks
How about reversing your search, e.g. find anything that doesn't match A-Z:
... WHERE col LIKE '%[^A-Z]%' AND col LIKE '%[A-Z]%';
If the collation is case-insensitive you shouldn't need a-z; if it is case-sensitive you could add a COLLATE clause. But you may also want to filter out spaces, numbers, and other non-alphabetic characters that are expected.
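For example, a minimal sketch of that idea, treating digits, spaces, dots and hyphens as "expected" (dbo.YourTable and NameOfTheColumn are placeholder names, and the exact character set depends on your data and collation):
SELECT *
FROM dbo.YourTable
WHERE NameOfTheColumn LIKE '%[A-Za-z]%'          -- contains at least one English letter
  AND NameOfTheColumn LIKE '%[^A-Za-z0-9 .-]%';  -- and at least one character outside the expected set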
Do you mean something like this?
select 1 as field1,'1' as field2 into #data
union all select 1,'abc'
union all select 2,'abc'
union all select 3,'999'
SELECT * FROM
(
select field1,field2
,MAX(CASE WHEN field2 NOT LIKE '%[A-Za-z]%' THEN 1 ELSE 0 END) OVER (PARTITION BY field1)
+ MAX(CASE WHEN field2 LIKE '%[A-Za-z]%' THEN 1 ELSE 0 END) OVER (PARTITION BY field1) as chars
FROM #data
) alias
WHERE chars =2
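With the sample #data above, only the two field1 = 1 rows come back: that is the only group containing both a value with no letters ('1') and a value with letters ('abc'), so it is the only group where chars reaches 2.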
Is it possible to add a select statement in a CASE or IF function in SQL?
select case when :A='do' then (select col1 from table1) else 'n/a' end;
or
select if(:A='do',(select col1 from table1),'N/A');
If my parameter is 'do' it should display all the values in the table, or else it should just display 'N/A'.
Please help me. Thanks!
If this is Oracle, as per your tag, then neither of your queries is valid, as they're not actually selecting from a table.
Perhaps what you're after is something like:
select case when :A='do' then col1 else 'n/a' end col
from table1;
Or maybe you're after something like:
select col1
from table1
where :A != 'do'
union all
select 'N/A' col1
from dual
where :A = 'do';
You didn't provide any example data, so I'm not sure whether you're trying to make all the values of col1 appear as 'N/A' when the bind variable is "do", or whether you only want a single row containing 'N/A'.
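For what it's worth, the UNION version as written returns every col1 value when :A is not 'do' and a single 'N/A' row when :A = 'do'; swap the two WHERE conditions if you want it the other way round, matching the wording of your question.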
QUERY:
select ws_path from workpaths where
(
(ws_path like '%R_%') or
(ws_path like '%PB_%' ) or
(ws_path like '%ST_%')
)
OUTPUT:
/x/eng/users/ST_3609843_ijti4689_3609843_1601272247
/x/eng/users/ST_3610020_zozt5229_3610020_1601282033
/x/eng/users/ST_3611181_zozt5229_3611181_1601282032
/x/eng/users/ST_3611226_zozt5229_3611226_1601282033
/x/eng/users-random/john/N_3582168_3551186_1601040805
/x/eng/users-random/james/N_3582619_3551186_1601041405
/x/eng/users-random/jimmy/N_3582791_3551186_1601042005
/x/eng/users/R_3606462_3606462_1601251334
/x/eng/users/R_3611775_3612090_1601290909
/x/eng/users/R_3612813_3613016_1601292252
Is there a way to group partially by ST_, N_ and R_?
i.e. group by ws_path won't work at the moment, for the obvious reason.
I need to look only at the last item in the path (split by '/') and then the front part after splitting on '_'.
You can use regexp_substr to extract the prefix being searched for and then group by it, counting the occurrences.
select regexp_substr(ws_path,'\/R_|\/PB_|\/ST_'), count(*)
from workpaths
group by regexp_substr(ws_path,'\/R_|\/PB_|\/ST_')
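Applied to just the ten sample paths above, that would give '/ST_' a count of 4, '/R_' a count of 3, and a NULL group with a count of 3 for the N_ paths, since they don't match any of the listed prefixes.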
Regex is a good solution but can be expensive. A simpler LIKE-based approach might be cheaper and faster:
CREATE TABLE tbl (field1 VARCHAR(100));
INSERT INTO dbo.tbl
( field1 )
VALUES
('/x/eng/users/ST_3609843_ijti4689_3609843_1601272247'),
('/x/eng/users/ST_3610020_zozt5229_3610020_1601282033'),
('/x/eng/users/ST_3611181_zozt5229_3611181_1601282032'),
('/x/eng/users/ST_3611226_zozt5229_3611226_1601282033'),
('/x/eng/users-random/john/N_3582168_3551186_1601040805'),
('/x/eng/users-random/james/N_3582619_3551186_1601041405'),
('/x/eng/users-random/jimmy/N_3582791_3551186_1601042005'),
('/x/eng/users/R_3606462_3606462_1601251334'),
('/x/eng/users/R_3611775_3612090_1601290909'),
('/x/eng/users/R_3612813_3613016_1601292252');
-- [_] escapes the underscore, which would otherwise act as a single-character wildcard in LIKE
SELECT
    COUNT(CASE WHEN field1 LIKE '%/ST[_]%' THEN 1 ELSE NULL END) AS st_count,
    COUNT(CASE WHEN field1 LIKE '%/N[_]%' THEN 1 ELSE NULL END) AS n_count,
    COUNT(CASE WHEN field1 LIKE '%/R[_]%' THEN 1 ELSE NULL END) AS r_count
FROM dbo.tbl;
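If you actually want one row per prefix, in the spirit of the regexp_substr answer above, here is a sketch along the same lines (still assuming the dbo.tbl sample table):
SELECT prefix, COUNT(*) AS cnt
FROM (
    SELECT CASE
               WHEN field1 LIKE '%/ST[_]%' THEN 'ST_'
               WHEN field1 LIKE '%/PB[_]%' THEN 'PB_'
               WHEN field1 LIKE '%/R[_]%'  THEN 'R_'
               WHEN field1 LIKE '%/N[_]%'  THEN 'N_'
               ELSE 'other'
           END AS prefix
    FROM dbo.tbl
) AS d
GROUP BY prefix;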
I don't quite get it...
Could somebody please give me a hint on why the results of queries B + C won't add up to A?
I first thought that the number of underscores (it should be ten) didn't match between B and C because of a typo, but after copy/pasting I am a bit helpless. The result of A is higher than the sum of B + C.
Is there some kind of implicit distinct etc. in statements B and C that I am not aware of?
-- statement A
select count(*) from mytable;
-- statement B
select count(*) from mytable where mycolumn like '__________';
-- statement C
select count(*) from mytable where mycolumn not like '__________';
If mycolumn has some rows with NULL values, those will be excluded by both the LIKE and the NOT LIKE clauses.
Therefore, these two statements should return the same count:
SELECT (select count(*) from mytable where mycolumn like '__________')
+ (select count(*) from mytable where mycolumn not like '__________')
+ (select count(*) from mytable where mycolumn IS NULL)
FROM DUAL
-- is equal to
select count(*) from mytable;
Most likely your mycolumn contains NULL values.
NULL values don't match either LIKE or NOT LIKE.
When you add the result of this, it will add up:
select count(*) from mytable where mycolumn is null;
The reason behind this is that NULL is considered 'undefined'. So you can't say that something you don't know is like, or not like, something else. It is undefined. Comparison to NULL, except when using IS NULL, always yields unknown, which a WHERE clause treats the same as false.
Your column contains NULL values. When you compare anything to NULL (even another NULL), the result is unknown rather than true, so the row is filtered out.
So in your example, there are rows that are like your pattern, rows that are not like your pattern, and the NULL values, which are neither.
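A minimal sketch of the effect (SQL Server syntax with a throwaway temp table; adjust to your DBMS):
CREATE TABLE #t (mycolumn VARCHAR(20));
INSERT INTO #t (mycolumn) VALUES ('abcdefghij'), ('xy'), (NULL);
SELECT COUNT(*) FROM #t;                                       -- A: 3
SELECT COUNT(*) FROM #t WHERE mycolumn LIKE '__________';      -- B: 1 (the ten-character value)
SELECT COUNT(*) FROM #t WHERE mycolumn NOT LIKE '__________';  -- C: 1 ('xy')
SELECT COUNT(*) FROM #t WHERE mycolumn IS NULL;                -- the missing row: 1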
I have data stored as varchar(500); I only know whether it's numeric or character data.
I need to find the maximum length of the column AND the maximum number of places after the decimal point.
For Example:
ColumnA
1234.56789
123.4567890
would return 11 places total AND 7 places after the decimal.
It can be two separate queries.
SELECT len(ColumnA), len(columnA) - charIndex('.',ColumnA)
FROM theTable
SELECT LEN(ColumnA )
,CHARINDEX('.',REVERSE(ColumnA ))-1
FROM Table1
If a value has no decimal point, the above will return -1 for the places after the decimal, so you could use:
SELECT LEN(ColumnA)
,CASE WHEN ColumnA LIKE '%.%' THEN CHARINDEX('.',REVERSE(ColumnA))-1
ELSE 0
END
FROM Table1
Demo of both: SQL Fiddle
If you just wanted the MAX() then you'd wrap the above in MAX():
SELECT MAX(LEN(ColumnA ))
,MAX(CHARINDEX('.',REVERSE(ColumnA ))-1)
FROM Table1
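With the two sample values from the question, this returns 11 for the maximum length and 7 for the maximum number of places after the decimal, matching the expected output.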
SELECT ColumnA, LEN(ColumnA) AS Total,
       LEN(SUBSTRING(ColumnA, CHARINDEX('.', ColumnA) + 1, LEN(ColumnA))) AS DecimalPlaces
FROM Table1
Is it possible in an SQL query to only show a field if another field has data? For example, if Field1 <> '', then show the value in Field2 else don't show the value?
It can be done using a CASE expression (at least in SQL Server):
select case when Field1 <> ''
then Field2
end as Field2
from YourTable
Sure (this works in Oracle and SQLite):
select
field1,
(case
when field1 is null then null
else field2
end) field2_wrapped
from my_table
If 'has no data' means the empty string (''), you need to use this statement:
SELECT Field2 FROM Table1 WHERE Field1 <> ''
If 'no data' means a NULL value, you need to use:
SELECT Field2 FROM Table1 WHERE Field1 IS NOT NULL
Take a look at Standard SQL functions COALESCE() and NULLIF():
COALESCE(NULLIF(Field1, ''), Field2)
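For what it's worth, that expression returns Field1 when Field1 is non-empty and falls back to Field2 otherwise; a minimal usage sketch, reusing the YourTable name from the first answer:
SELECT COALESCE(NULLIF(Field1, ''), Field2) AS shown_value
FROM YourTable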