Print the values of a column that are common to both tables - SQL

I want to print the values of a column that are common to both tables.
The issue is that a substring of one table's column has to match the other table's column as a whole string.
Running the subquery on its own fetches the right values (which shows the substring expression is correct), but I think the part of the query after the WHERE clause needs changing.
Kindly suggest.
Code:
select distinct sd.sourceworkitemid
from u_prodstypetest pst, sdidata sd
where sd.keyid1 = 'S-20210719-00000003'
and sd.sourceworkitemid in (select substr(testmethodid,0,INSTR(testmethodid,'|',1)-1) from u_prodstypetest);
I want to create a substring of a column's value in one table and compare it with a column in another table. Since it is a substring, a plain where column1 = column2 does not suffice, so I wrote the subquery to fetch the substring, which, when run, throws an error at the in because the subquery returns more than one value.
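One way to write the comparison described above could be the following (untested sketch, assuming Oracle since substr/instr are used, and that the part of testmethodid before the first '|' should equal sourceworkitemid):
-- join the two tables on the substring of testmethodid up to (but not including) the first '|'
select distinct sd.sourceworkitemid
from sdidata sd
join u_prodstypetest pst
  on sd.sourceworkitemid = substr(pst.testmethodid, 1, instr(pst.testmethodid, '|') - 1)
where sd.keyid1 = 'S-20210719-00000003';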

Postgres query cannot find rows based on column value

I want to select rows based on a column value. I know for a fact that the column value exists. The first query returns 100 rows from the listing table. The second query, which looks for listings.OriginatingSystemName = 'mfrmls', returns nothing. Why?
(Removing the quotes or using double quotes does not work).
I am using pgAdmin4 to run these queries.
first query:
select * from listing limit 100;
second query:
select * from listing where 'listing.OriginatingSystemName' = 'mfrmls'
This produces a 'column does not exist' error:
select * from listing where OriginatingSystemName = 'mfrmls'
The correct syntax is to write the column name itself (not a string literal) in your WHERE clause, double-quoting it because the identifier is case-sensitive:
SELECT * FROM listings WHERE "OriginatingSystemName" = 'mfrmls';
To elaborate further:
What your original query is doing is selecting every row in the listing table where the text string 'listing.OriginatingSystemName' is equal to the other text string 'mfrmls'. It never reads the value of the column you want. No row satisfies your WHERE condition because it compares two constants and is always false, so the query succeeds but returns no rows.
Double quotes are needed when dealing with case-sensitive identifiers; see the PostgreSQL documentation on identifiers and key words.
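If you are unsure of the exact spelling, you can list the case-sensitive column names from the catalog (the table name listing is taken from the question):
-- lists the exact (case-preserved) column names of the listing table
SELECT column_name
FROM   information_schema.columns
WHERE  table_name = 'listing';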

How can I use a WHERE condition in SQL when I have comma-separated values in a column across multiple rows

Column1   EventTypes_pKey
Are       5,3
Test      1,4,5
test      1,3,5
If I am using
Select * from TableName where EventTypes_pKey in ('5,1,4')
then I want the records where those values are contained in the column.
How can I write a WHERE condition based on EventTypes_pKey, which is a varchar column?
If I select 5,3,4, then all three rows should be returned.
Please help me.
If you are using Postgres, you can do this by converting the value into an array and then using the overlaps operator &&:
select *
from badly_designed_table
where string_to_array(eventtypes_pkey, ',')::int[] && array[5,3,4];
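For illustration, a self-contained sketch with the sample rows from the question (same placeholder table name as above):
-- inline sample data mirroring the rows shown above
with badly_designed_table (column1, eventtypes_pkey) as (
    values ('Are',  '5,3'),
           ('Test', '1,4,5'),
           ('test', '1,3,5')
)
select *
from badly_designed_table
where string_to_array(eventtypes_pkey, ',')::int[] && array[5,3,4];
-- all three rows are returned, because each list shares at least one of 5, 3 or 4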

SQL Query to return result with and without whitespace

I have a column in my Postgres database table which contains a value with some whitespaces in between. For example, a value present in the column is '123 1062 10'.
Now, I want to write an SQL query which can return the row which contains the above-mentioned value by passing in the value '123106210' in the where clause of SQL Query.
Any ideas on how I can write the SQL query to get the desired result?
Does the replace function not work for you?
This works:
select replace('123 1062 10',' ','')::bigint
so this could be your final select:
select ...
from ...
where replace(your_text_column,' ','')::bigint = 123106210
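As a concrete sketch (the table and column names here are made up for illustration), comparing as text also avoids a cast error if some rows contain non-digit characters:
-- hypothetical table "orders" with a text column "order_ref"
select *
from   orders
where  replace(order_ref, ' ', '') = '123106210';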

Extract alphanumeric value from varchar column

I have a table with a column containing alphanumeric values stored as strings, such as F4737, 00Y778, PP0098, XXYYYZ etc.
I want to extract the values that start with F and contain numeric digits in that row.
The alphanumeric column is the unique column with unique values, but the rest of the columns in my table contain duplicate values.
Furthermore, once these values are extracted I would like to pick the max value from the duplicate rows, for example:
Suppose I have F4737 and F4700 as unique alphanumeric values; then F4737 must be extracted.
I have written a query like this but the numeric values are not getting extracted from this query:
select max(Alplanumeric)
from Customers
where Alplanumeric like '%[F0-9]%'
or
select max(Alplanumeric)
from Customers
where Alplanumeric like '%[0-9]%'
and Alplanumeric like 'F%'
When I run the above query I only get the F series if I remove the numeric part of the pattern. How do I match both: the leading F and the numeric values in that row?
Going out on a limb, you might be looking for a query like this:
SELECT *, substring(alphanumeric, '^F(\d+)')::int AS nr
FROM customers
WHERE alphanumeric ~ '^F\d+'
ORDER BY nr DESC NULLS LAST
, alphanumeric
LIMIT 1;
The WHERE condition is a regular expression match; the expression is anchored to the start, so it can use an index. Ideally:
CREATE INDEX customers_alphanumeric_pattern_ops_idx ON customers
(alphanumeric text_pattern_ops);
This returns the one row with the highest (extracted) numeric value in alphanumeric among rows starting with 'F' followed by one or more digits.
About the index:
PostgreSQL LIKE query performance variations
About pattern matching:
Pattern matching with LIKE, SIMILAR TO or regular expressions in PostgreSQL
Ideally, you should store the leading text and the following numeric value in separate columns to make this more efficient. You don't necessarily need more tables, as has been suggested.
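If you go that route, a sketch of how the split could be materialised (the column names prefix and nr are just examples):
-- add two columns and fill them from the existing value (one-off backfill)
ALTER TABLE customers ADD COLUMN prefix text, ADD COLUMN nr int;
UPDATE customers
SET    prefix = substring(alphanumeric, '^[A-Za-z]+'),  -- leading letters, NULL if none
       nr     = substring(alphanumeric, '\d+')::int;    -- first run of digits, NULL if none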

Proc Sql case confusion

Within SAS
I have a proc-sql step that I'm using to create macro variables to do some list processing.
I have run into a confusing case where using a case statement rather than a where clause results in the first row of the resulting data set being a null string ('').
There are no null strings contained in either field in either table.
These are two sample SQL steps with all of the macro business removed for simplicity:
create table test as
select distinct
case
when brand in (select distinct core_brand from new_tv.core_noncore_brands) then brand
end as brand1
from new_tv.new_tv2
;
create table test2 as
select distinct brand
from new_tv.new_tv2
where brand in (select distinct core_brand from new_tv.core_noncore_brands)
;
Using the first piece of code, the result is a table with multiple rows, the first being an empty string.
The second piece of code works as expected.
Any reason for this?
So the difference is that without a WHERE clause you aren't limiting what you are selecting, i.e. every row is considered. The CASE expression can bucket items by criteria, but you don't lose rows just because your buckets don't catch everything, hence the NULL. WHERE limits the rows being returned.
Yes, the first has no then clause in the case statement. I'm surprised that it even parses. It wouldn't in many SQL dialects.
Presumably you mean:
create table test as
select distinct
case
when brand in (select distinct core_brand from new_tv.core_noncore_brands)
then brand
end as brand1
from new_tv.new_tv2
;
The reason you are getting the NULL is that the case expression returns NULL for the non-matching brands. You would need to add:
where brand1 is not NULL
to prevent this (using either a subquery or making brand1 a calculated field).
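In PROC SQL that could look like this (a sketch based on the first query, using the CALCULATED keyword so the WHERE clause can reference the computed column):
create table test as
select distinct
  case
    when brand in (select distinct core_brand from new_tv.core_noncore_brands) then brand
  end as brand1
from new_tv.new_tv2
where calculated brand1 is not null  /* filter out the non-matching (NULL) bucket */
;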
Your first query is not correct; there is no 'then' part in the 'case' expression.
create table test as
select distinct
case
when brand in (select distinct core_brand from new_tv.core_noncore_brands)
then value
end as brand1
from new_tv.new_tv2
;
You probably get a NULL value because there is no default value (ELSE) in the 'case' expression, so for values that do not meet the condition it returns NULL. There is a difference between the 'case' approach and the 'where ... in' filter: the first returns all rows, with NULL for the values that do not meet the condition, while the second returns only the rows that meet the condition.
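For completeness, an explicit default could also be supplied so non-matching brands get a label instead of NULL (sketch using the same tables as above; the 'non-core' label is just an example):
create table test as
select distinct
  case
    when brand in (select distinct core_brand from new_tv.core_noncore_brands) then brand
    else 'non-core'  /* hypothetical default instead of NULL */
  end as brand1
from new_tv.new_tv2
;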