I need to SELECT a group of columns not null - sql

Basically, I have a table that has a series of columns named:
ATTRIBUTE10, ATTRIBUTE11, ATTRIBUTE12 ... ATTRIBUTE50
I want a query that gives me all the columns from ATTRIBUTE10 to ATTRIBUTE50 that are not null.

As others have commented, we aren't exactly sure of your requirements, but if you want a list, UNPIVOT can do that...
SELECT attribute, value
FROM
    (SELECT * FROM YourFile) p
UNPIVOT
    (value FOR attribute IN
        (attribute1, attribute2, attribute3, etc.)
    ) AS unpvt
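Adapted to the column names in your question, a sketch might look like the following (just extend the IN list through ATTRIBUTE50); note that UNPIVOT drops NULL values by default, which is exactly the filtering you asked for:
SELECT attribute, value
FROM
    (SELECT * FROM YourFile) p
UNPIVOT
    (value FOR attribute IN
        (ATTRIBUTE10, ATTRIBUTE11, ATTRIBUTE12 /* ... continue through ATTRIBUTE50 */)
    ) AS unpvt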

Maybe you can use a WHERE condition for all columns, or use the BETWEEN operator, as below.
For all columns:
where ATTRIBUTE10 is not null and ATTRIBUTE11 is not null ...... and ATTRIBUTE50 is not null
Using the BETWEEN operator:
where ATTRIBUTE10 between ATTRIBUTE11 and ATTRIBUTE50

One way to approach the problem is to unfold your table-with-a-zillion-like-named-attributes into one in which you've got one attribute per row, with appropriate foreign keys back to the original table. So something like:
CREATE TABLE ATTR_TABLE AS
SELECT ID_ATTR, ID_TABLE_WITH_ATTRS, ATTR
FROM (SELECT ((ID_TABLE_WITH_ATTRS-1)*100)+1 AS ID_ATTR, ID_TABLE_WITH_ATTRS, ATTRIBUTE10 AS ATTR FROM TABLE_WITH_ATTRS UNION ALL
SELECT ((ID_TABLE_WITH_ATTRS-1)*100)+2, ID_TABLE_WITH_ATTRS, ATTRIBUTE11 FROM TABLE_WITH_ATTRS UNION ALL
SELECT ((ID_TABLE_WITH_ATTRS-1)*100)+3, ID_TABLE_WITH_ATTRS, ATTRIBUTE12 FROM TABLE_WITH_ATTRS);
This only unfolds ATTRIBUTE10, ATTRIBUTE11, and ATTRIBUTE12, but you should be able to get the idea - the rest of the attributes just requires a little cut-n-paste on your part.
You can then query this table to find your non-NULL attributes as
SELECT *
FROM ATTR_TABLE
WHERE ATTR IS NOT NULL
ORDER BY ID_ATTR
Hopefully the difficulty you're encountering in dealing with this table-with-a-zillion-repeated-fields teaches you a hard lesson about exactly why tables with repeated fields or groups of fields are a Bad Idea.


How to select a column whose name is a value in another column in POSTGRESQL?

I know this isn't valid SQL, but I'd like to do something like:
SELECT items.{SELECT items.preferred_column}
To elaborate: to achieve what I want, I could write a long CASE WHEN statement:
SELECT
  CASE WHEN items.preferred_column = 'column_a' THEN items.column_a
       WHEN items.preferred_column = 'column_b' THEN items.column_b
       WHEN items.preferred_column = 'column_c' THEN items.column_c
       ... and so on ...
  END
But that seems wrong. I would prefer to write a query that looks at the value of items.preferred_column and loads that column.
Is this possible?
My use case involves an Active Record (the ORM for Rails) query, which limits me. I'm not able to use "INTO" for example.
Doing this without creating a SQL function would be preferred, though if that's not possible, it would be good to know.
Thanks in advance for lending your expertise!
You can try transforming the table rows with row_to_json() and then, using jsonb_each(), joining the resultant "key" field on preferred_column:
WITH CTE AS (
    SELECT
        row_to_json(Z.*)::jsonb AS rcr,
        row_number() OVER (PARTITION BY NULL ORDER BY <whatever comparator clause>) AS rn,
        Z.*
    FROM items Z)
SELECT b.value, a.*
FROM CTE a, jsonb_each(a.rcr) b, CTE c
WHERE c.rn = a.rn AND b.key = c.preferred_column
Note that this essentially operates as a quasi-pivot, so you'll need to maintain an index (the row_number invocation) to self-join the table when extracting the appropriate key-value pairs from jsonb_each's set-return. Casting to jsonb will be helpful in that the binary form will alphabetize the key-value pairs by key order within the object itself.
If you need the resultant value as a text string instead of a json primitive, you can use
b.value #>> '{}'
instead of switching to jsonb_each_text(); this preserves any json columns.
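For example, the final SELECT of the sketch above might become (preferred_value is just an illustrative alias):
SELECT b.value #>> '{}' AS preferred_value, a.*
FROM CTE a, jsonb_each(a.rcr) b, CTE c
WHERE c.rn = a.rn AND b.key = c.preferred_column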

how to have a Link from main query to sub query

I am inserting two values from one table to another table. One of the inserted values comes from concatenating three column values.
I am using the below query, but the error says "subquery has more than one value."
I can't just add "top 1" to the subquery, because that gives the same value to all rows.
insert into dbo.tblCrucibleLdgDtls (R2IGTNo,TotalMtrlWgt)
Select R2IGTNo,
(select RTRIM(LTRIM(( CONCAT(ULTotalS1S2MtrlWgt,ULTotalS3S4MtrlWgt,ULTotalS5S6MtrlWgt)))) as TotalMtrlWgt
from dbo.tbl1RMWeighingDetails
where ULTotalS1S2MtrlWgt is not null or ULTotalS3S4MtrlWgt is not null or ULTotalS5S6MtrlWgt is not null
)
from dbo.tbl1RMWeighingDetails
where R2IGTNo like '%C%'
The solution may be simple; I am not an expert. There are no duplicates, and (ULTotalS1S2MtrlWgt, ULTotalS3S4MtrlWgt, ULTotalS5S6MtrlWgt) have a unique relation with R2IGTNo: if R2IGTNo is b1 then ULTotalS1S2MtrlWgt has a value, if R2IGTNo is b2 then ULTotalS3S4MtrlWgt has a value, and if R2IGTNo is b3 then ULTotalS5S6MtrlWgt has a value. With that condition the query can be altered.
Please suggest.
Give alias names to the tables:
SELECT ..., (SELECT ... FROM dbo.tbl1RMWeighingDetails wdA)
FROM dbo.tbl1RMWeighingDetails wdB
...
Now the inner/nested sub query can reference wdB, and it will mean the outer instance of the table.
But also, this has the look of something that would be better done with a JOIN, APPLY, or windowing function.
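For illustration only, a CROSS APPLY version might look something like this; it assumes (as the COALESCE answer further down suggests) that the three weight columns live on the same row as R2IGTNo:
insert into dbo.tblCrucibleLdgDtls (R2IGTNo, TotalMtrlWgt)
select wd.R2IGTNo, w.TotalMtrlWgt
from dbo.tbl1RMWeighingDetails wd
cross apply (
    -- build the concatenated weight from the same row
    select RTRIM(LTRIM(CONCAT(wd.ULTotalS1S2MtrlWgt,
                              wd.ULTotalS3S4MtrlWgt,
                              wd.ULTotalS5S6MtrlWgt))) as TotalMtrlWgt
) w
where wd.R2IGTNo like '%C%'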
The solution depends on what type of rows you are getting from the inner query. If your inner query returns multiple duplicate rows, then the solution is simple: just use DISTINCT:
insert into dbo.tblCrucibleLdgDtls (R2IGTNo,TotalMtrlWgt)
Select R2IGTNo,
(select distinct RTRIM(LTRIM((CONCAT(ULTotalS1S2MtrlWgt,ULTotalS3S4MtrlWgt,ULTotalS5S6MtrlWgt)))) as TotalMtrlWgt
from dbo.tbl1RMWeighingDetails
where ULTotalS1S2MtrlWgt is not null or ULTotalS3S4MtrlWgt is not null or ULTotalS5S6MtrlWgt is not null
)
from dbo.tbl1RMWeighingDetails
where R2IGTNo like '%C%'
But if your inner query returns multiple rows which are different from one another then you have to revisit your requirement.
I got the answer by using a CTE; the suggestion came from the MS forum.
insert into dbo.tblCrucibleLdgDtls (R2IGTNo,TotalMtrlWgt)
Select R2IGTNo, coalesce(ULTotalS1S2MtrlWgt,ULTotalS3S4MtrlWgt,ULTotalS5S6MtrlWgt) as TotalMtrlWgt from dbo.tbl1RMWeighingDetails where R2IGTNo like '%C%'
or
with cte1 as(
select R2IGTNo,RTRIM(LTRIM(( CONCAT(ULTotalS1S2MtrlWgt,ULTotalS3S4MtrlWgt,ULTotalS5S6MtrlWgt)))) as TotalMtrlWgt
from dbo.tbl1RMWeighingDetails
where R2IGTNo like '%C%'
and (ULTotalS1S2MtrlWgt is not null
or ULTotalS3S4MtrlWgt is not null
or ULTotalS5S6MtrlWgt is not null))
insert into dbo.tblCrucibleLdgDtls (R2IGTNo,TotalMtrlWgt)
select * from cte1
Anyway, thank you all. - shankar

How do I combine columns into one, and filter out NULLs?

Here is my table. A list of ids with signup dates in columns newsletter, report, infographics.
I want to combine all those columns into one, without the NULLs
I've tried the following code
SELECT id, combined_column
FROM (
SELECT id, CONCAT(newsletter, report, infographics) AS combined_column
FROM table
)
WHERE combined_column IS NOT NULL
But this just gives me a blank table. Is there a way to solve this? Thanks
I think you want COALESCE, which returns the first non-null value from the list (if you have more than one non-null value in a row, it will still return just the first one):
SELECT id, COALESCE(newsletter, report, infographics) AS combined_date
FROM t
WHERE COALESCE(newsletter, report, infographics) IS NOT NULL
Do you just want this?
select max(newsletter) as newsletter,
max(report) as report,
max(infographics) as infographics
from t;
Answer may depend on what database you're using, so caveat lector.
Is it the case that only one column will be non-null, as in your sample?
Then something like:
SELECT id, COALESCE(newsletter, infographics, report) FROM my_table;
might work for you...
If you are using Oracle, use NVL to replace NULL with an empty string (note that Oracle's CONCAT only accepts two arguments, so the || concatenation operator is used instead):
SELECT id,
       combined_column
FROM (
    SELECT id,
           NVL(newsletter,'') || NVL(report,'') || NVL(infographics,'') AS combined_column
    FROM table
)
WHERE combined_column IS NOT NULL
SELECT id,
CONCAT(newsletter, report, infographics) AS combined_column
FROM table WHERE newsletter is NOT NULL and report is NOT NULL and infographics is NOT NULL

'In' clause in SQL server with multiple columns

I have a component that retrieves data from database based on the keys provided.
However, I want my Java application to get the data for all keys in a single database hit, to speed things up.
I can use an 'in' clause when I have only one key.
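For example, with a single key the query can be as simple as (using a placeholder table name):
SELECT * FROM <table_name>
where CODE1 IN ('COMM', 'CORE');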
When working with more than one key, I can use the below query in Oracle:
SELECT * FROM <table_name>
where (value_type,CODE1) IN (('I','COMM'),('I','CORE'));
which is similar to writing
SELECT * FROM <table_name>
where value_type = 'I' and CODE1 = 'COMM'
and
SELECT * FROM <table_name>
where value_type = 'I' and CODE1 = 'CORE'
together
However, using the 'in' clause as above gives the below error in SQL Server:
ERROR:An expression of non-boolean type specified in a context where a condition is expected, near ','.
Please let me know if there is any way to achieve the same in SQL Server.
This syntax doesn't exist in SQL Server. Use a combination of And and Or.
SELECT *
FROM <table_name>
WHERE
(value_type = 'I' and CODE1 = 'COMM')
OR (value_type = 'I' and CODE1 = 'CORE')
(In this case, you could make it shorter, because value_type is compared to the same value in both combinations. I just wanted to show the pattern that works like IN in oracle with multiple fields.)
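For example, the shorter form could be written as:
SELECT *
FROM <table_name>
WHERE value_type = 'I'
  AND CODE1 IN ('COMM', 'CORE')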
When using IN with a subquery, you need to rephrase it like this:
Oracle:
SELECT *
FROM foo
WHERE
(value_type, CODE1) IN (
SELECT type, code
FROM bar
WHERE <some conditions>)
SQL Server:
SELECT *
FROM foo
WHERE
EXISTS (
SELECT *
FROM bar
WHERE <some conditions>
AND foo.value_type = bar.type
AND foo.CODE1 = bar.code)
There are other ways to do it, depending on the case, like inner joins and the like.
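For example, an inner join version of the same filter might look like this (same placeholder foo and bar tables as above; DISTINCT guards against duplicate rows if bar can contain repeated type/code pairs):
SELECT DISTINCT foo.*
FROM foo
JOIN bar
  ON bar.type = foo.value_type
 AND bar.code = foo.CODE1
WHERE <some conditions>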
If you have under 1000 tuples you want to check against and you're using SQL Server 2008+, you can use a table value constructor and perform a join against it. You can only specify up to 1000 rows in a table value constructor, hence the 1000-tuple limitation. Here's how it would look in your situation:
SELECT <table_name>.* FROM <table_name>
JOIN ( VALUES
('I', 'COMM'),
('I', 'CORE')
) AS MyTable(a, b) ON a = value_type AND b = CODE1;
This is only a good idea if your list of values is going to be unique, otherwise you'll get duplicate values. I'm not sure how the performance of this compares to using many ANDs and ORs, but the SQL query is at least much cleaner to look at, in my opinion.
You can also write this to use EXISTS instead of JOIN. That may have different performance characteristics, and it will avoid the problem of producing duplicate results if your values aren't unique. It may be worth trying both EXISTS and JOIN on your use case to see which is a better fit. Here's how EXISTS would look:
SELECT * FROM <table_name>
WHERE EXISTS (
SELECT 1
FROM (
VALUES
('I', 'COMM'),
('I', 'CORE')
) AS MyTable(a, b)
WHERE a = value_type AND b = CODE1
);
In conclusion, I think the best choice is to create a temporary table and query against that. But sometimes that's not possible, e.g. your user lacks the permission to create temporary tables, and then using a table value constructor may be your best choice. Use EXISTS or JOIN, depending on which gives you better performance on your database.
Normally you cannot do it, but you can use the following technique.
SELECT * FROM <table_name>
where (value_type+'/'+CODE1) IN (('I'+'/'+'COMM'),('I'+'/'+'CORE'));
A better solution is to avoid hardcoding your values and put them in a temporary or persistent table:
CREATE TABLE #t (ValueType VARCHAR(16), Code VARCHAR(16))
INSERT INTO #t VALUES ('I','COMM'),('I','CORE')
SELECT DT.*
FROM <table_name> DT
JOIN #t T ON T.ValueType = DT.ValueType AND T.Code = DT.Code
Thus, you avoid storing data in your code (with the persistent-table version), and you can easily modify the filters without changing the code.
I think you can try this, combining AND and OR at the same time:
SELECT *
FROM <table_name>
WHERE value_type = 'I'
  AND (CODE1 = 'COMM' OR CODE1 = 'CORE')
What you can do is 'join' the columns as a string, and pass your values also combined as strings.
where (cast(column1 as text) ||','|| cast(column2 as text)) in (?1)
The other way is to do multiple ands and ors.
I had a similar problem in MS SQL, but a little different. Maybe it will help somebody in the future; in my case I found this solution (not full code, just an example):
SELECT Table1.Campaign
,Table1.Coupon
FROM [CRM].[dbo].[Coupons] AS Table1
INNER JOIN [CRM].[dbo].[Coupons] AS Table2 ON Table1.Campaign = Table2.Campaign AND Table1.Coupon = Table2.Coupon
WHERE Table1.Coupon IN ('0000000001', '0000000002') AND Table2.Campaign IN ('XXX000000001', 'XYX000000001')
Of course, I have an index on Coupon and Campaign in the table for fast searching.
You can compute a combined key in MS SQL:
SELECT * FROM <table_name>
where value_type + '|' + CODE1 IN ('I|COMM', 'I|CORE');

SQL Server where column in where clause is null

Let's say that we have a table named Data with Id and Weather columns. Other columns in that table are not important to this problem. The Weather column can be null.
I want to display all rows where Weather fits a condition, but rows where Weather is NULL should be displayed as well.
My SQL so far:
SELECT *
FROM Data d
WHERE (d.Weather LIKE '%'+COALESCE(NULLIF('',''),'sunny')+'%' OR d.Weather IS NULL)
My results are wrong, because that statement also shows the NULL rows when the condition matches nothing (let's say the user mistyped the search term).
I found a similar topic, but I did not find an appropriate answer there.
SQL WHERE clause not returning rows when field has NULL value
Please help me out.
Your query is correct for the general task of treating NULLs as a match. If you wish to suppress NULLs when there are no other results, you can add an AND EXISTS ... condition to your query, like this:
SELECT *
FROM Data d
WHERE d.Weather LIKE '%'+COALESCE(NULLIF('',''),'sunny')+'%'
   OR (d.Weather IS NULL
       AND EXISTS (SELECT * FROM Data dd
                   WHERE dd.Weather LIKE '%'+COALESCE(NULLIF('',''),'sunny')+'%'))
The additional condition ensures that NULLs are treated as matches only if other matching records exist.
You can also use a common table expression to avoid duplicating the query, like this:
WITH cte AS
(
SELECT *
FROM Data d
WHERE d.Weather LIKE '%'+COALESCE(NULLIF('',''),'sunny')+'%'
)
SELECT * FROM cte
UNION ALL
SELECT * FROM Data WHERE weather is NULL AND EXISTS (SELECT * FROM cte)
That statement also shows the NULL rows when the condition matches nothing (say the user mistyped 'sunny').
This suggests that the constant 'sunny' is coming from end-user's input. If that is the case, you need to parameterize your query to avoid SQL injection attacks.
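As a sketch only (the @search variable is a stand-in for however your application passes the user's value), a parameterized version could look like this:
DECLARE @search nvarchar(100) = N'sunny';  -- in practice, bound as a query parameter by the application

SELECT *
FROM Data d
WHERE d.Weather LIKE '%' + COALESCE(NULLIF(@search, ''), 'sunny') + '%'
   OR (d.Weather IS NULL
       AND EXISTS (SELECT * FROM Data dd
                   WHERE dd.Weather LIKE '%' + COALESCE(NULLIF(@search, ''), 'sunny') + '%'));
The key point is that the value is bound as a parameter rather than concatenated into the SQL string.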