I am using PostgreSQL with Sequelize.js and would really like to execute a query that looks like this:
SELECT "CT_Foo"."cola", "CT_Foo"."colb", "CT_Foo"."colc",
"CT_Foo"."cold", "CT_Foo"."cole", "CT_Foo"."colf",
COUNT(*)
FROM (
SELECT "CT_BAR"."cola", "CT_BAR"."colb", "CT_BAR"."colc",
"CT_BAR"."cold", "CT_BAR"."cole", "CT_BAR"."colf",
"CT_BAR"."colg"
FROM public."Customers" AS "CT_BAR"
WHERE ("CT_BAR"."colf" IN ('SOMEID') AND "CT_BAR"."colg" ?
'date')
) AS "CT_Foo"
WHERE ("CT_Foo"."colf" IN ('SOMEID'))
GROUP BY "CT_Foo"."cola", "CT_Foo"."colb", "CT_Foo"."colc",
"CT_Foo"."cold", "CT_Foo"."cole", "CT_Foo"."colf"
Columns A-F are text and G is JSONB
Basically, the reason I am doing this is that I need to group on columns A-F while filtering on column F and checking whether something exists in the JSONB in column G. I do not wish to include column G in the grouping, because it's JSON and I'm only checking for the existence of a date, so this seemed like a simple way to do it. I know PostgreSQL has CTEs, but Sequelize does not support them. I believe another name for this is a "derived table".
I can form a query normally, but cannot get the subquery into the FROM clause.
Any idea how to do this or get the same result?
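For reference, the derived table in the plain SQL above can be collapsed into a single-level query, since the outer filter on "colf" only repeats the inner one (a sketch of the equivalent SQL, not a Sequelize answer):
SELECT "cola", "colb", "colc", "cold", "cole", "colf", COUNT(*)
FROM public."Customers"
WHERE "colf" IN ('SOMEID')
  AND "colg" ? 'date'   -- same JSONB key-existence check as the inner query
GROUP BY "cola", "colb", "colc", "cold", "cole", "colf"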
Related
I know this isn't valid SQL, but I'd like to do something like:
SELECT items.{SELECT items.preferred_column}
To elaborate, I could achieve what I'm trying to achieve by writing a long CASE WHEN statement:
SELECT
  CASE WHEN items.preferred_column = 'column_a' THEN items.column_a
       WHEN items.preferred_column = 'column_b' THEN items.column_b
       WHEN items.preferred_column = 'column_c' THEN items.column_c
       ... and so on ...
  END
But that seems wrong. I would prefer to write a query that looks at the value of items.preferred_column and loads that column.
Is this possible?
My use case involves an Active Record (the ORM for Rails) query, which limits me. I'm not able to use "INTO" for example.
Doing this without creating a SQL function would be preferred, though if it's not possible without creating a SQL function, that would be good to know.
Thanks in advance for lending your expertise!
You can try transforming the table rows with row_to_json() and then, using jsonb_each(), joining the resultant "key" field on the preferred_column:
WITH CTE AS (
SELECT
row_to_json(Z.*)::jsonb as rcr,
row_number() over(partition by null order by <whatever comparator clause>) as rn,
Z.*
FROM items Z)
SELECT b.value, a.*
FROM CTE a, jsonb_each(rcr) b, CTE c
WHERE c.rn=a.rn AND b.key = ( c.preferred_column )
Note that this essentially operates as a quasi-pivot, so you'll need to maintain an index (the row_number invocation) to self-join the table when extracting the appropriate key-value pairs from jsonb_each's set-return. Casting to jsonb will be helpful in that the binary form will alphabetize the key-value pairs by key order within the object itself.
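As a small standalone illustration of that normalization (not part of the query above):
-- jsonb stores object keys in a normalized order, so they come back sorted
SELECT key, value
FROM jsonb_each('{"b": 1, "a": 2}'::jsonb);
-- returns the row for key "a" first, then "b"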
If you need the resultant value as a text string instead of a JSON primitive, you can use
b.value #>> '{}'
rather than switching to jsonb_each_text(); that way, columns that are themselves JSON are preserved.
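A minimal illustration of the difference, using a throwaway literal just to show the operator:
-- -> keeps the value as jsonb; #>> '{}' extracts the same value as text
SELECT ('{"a": "x"}'::jsonb -> 'a')           AS as_jsonb,
       ('{"a": "x"}'::jsonb -> 'a') #>> '{}'  AS as_text;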
I need help understanding what I did wrong ... I'm a beginner, so excuse the simple question!
I have two tables that I want to JOIN, where in one of the columns I had to use REPLACE to remove the text 'RIxRE', which does not interest me.
In table 1, this is the original text of the column id_notificacao: RIxRE-1787216-BSB, and this is the text returned when using REPLACE: 1787216-BSB
In table 2, this is the text that exists: 1787216-BSB
However, I get the following error:
# 1054 - Unknown column 'a.id_not' in 'on clause'
SELECT *, REPLACE(a.id_notificacao,'RIxRE','') AS id_not
FROM robo_qualinet_cadastro_remedy a
JOIN (SELECT * FROM painel_monitoracao) b ON a.id_not = b.id_notificacao
You cannot reuse a column alias in the FROM clause or the WHERE clause of the same SELECT (and possibly other clauses as well, depending on the database).
So, repeat the expression:
SELECT *, REPLACE(a.id_notificacao, 'RIxRE', '') AS id_not
FROM robo_qualinet_cadastro_remedy rqcr JOIN
painel_monitoracao pm
ON REPLACE(rqcr.id_notificacao, 'RIxRE', '') = pm.id_notificacao;
Notes:
Use table aliases that mean something, such as abbreviations for the table names.
The subquery is not necessary in the FROM clause.
I suspect that you have a problem with your data model if you need a REPLACE() for the JOIN condition, but that is a different issue from this question.
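If you do want to refer to the alias by name, one alternative (a sketch, not part of the answer above) is to compute it in a derived table first:
SELECT *
FROM (SELECT rqcr.*,
             REPLACE(rqcr.id_notificacao, 'RIxRE', '') AS id_not
      FROM robo_qualinet_cadastro_remedy rqcr
     ) rqcr
JOIN painel_monitoracao pm
     ON rqcr.id_not = pm.id_notificacao;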
I have a table vehicles that has a column 'name'.
Values are stored like car/tesla, car/honda, truck/daimler (each value is stored as type/brand)
I want to query the table using only brand. If I look up tesla, it should return the row corresponding to car/tesla. How do I do it? I'm using postgres.
There is no need for regex in your case. Just use good old LIKE:
select name
from vehicles
where name like '%/tesla'
Two solutions are available: a SELECT query with the LIKE operator, or a SELECT query with the CONTAINS operator.
select * from vehicles where name LIKE '%tesla%'
Select * from vehicles where Contains(name, 'tesla');
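If the values always follow the type/brand pattern, a PostgreSQL-specific sketch using split_part() to compare the brand part exactly (an illustration, not taken from the answers above):
-- split_part(name, '/', 2) returns the part after the '/', i.e. the brand
SELECT *
FROM vehicles
WHERE split_part(name, '/', 2) = 'tesla';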
CREATE VIEW ALL_TABLES AS SELECT * FROM employee_view, av_pay;
I keep getting this error message; how do I overcome it?
VIEW Duplicate column name 'ISLAND'
(Table screenshots for av_pay and employee_view were attached; both tables contain a column named ISLAND.)
You are doing a SELECT *, which will output the same column names as defined in the tables you are querying. As both tables define a column with the same name, that is where the error comes from.
So either rename one of the columns or change the query to something like:
select employee_view.ISLAND ISLAND_V, av_pay.ISLAND ISLAND_P, ... FROM ...
The DB engine complains because your SELECT clause is "*" and both source tables contain the column "ISLAND". As a result, the DBMS does not know which column should be returned: the one from employee_view or the one from av_pay?
BTW, a SELECT from two tables without a join condition will result in a Cartesian product...
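A sketch of what the aliased view could look like; the join condition below is hypothetical and should be replaced with the real key that links the two tables:
CREATE VIEW all_tables AS
SELECT e.ISLAND AS island_employee,
       p.ISLAND AS island_pay
FROM employee_view e
JOIN av_pay p
  ON e.employee_id = p.employee_id;  -- hypothetical join key, adjust to your schema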
Let's say that we have a table named Data with Id and Weather columns. Other columns in that table are not important to this problem. The Weather column can be null.
I want to display all rows where Weather fits a condition, but if there is a null value in Weather, then display those null rows as well.
My SQL so far:
SELECT *
FROM Data d
WHERE (d.Weather LIKE '%'+COALESCE(NULLIF('',''),'sunny')+'%' OR d.Weather IS NULL)
My results are wrong, because that statement also shows the rows where Weather is null even when the condition matches nothing (let's say the user mistyped the search term).
I found similar topic, but there I do not find appropriate answer.
SQL WHERE clause not returning rows when field has NULL value
Please help me out.
Your query is correct for the general task of treating NULLs as a match. If you wish to suppress NULLs when there are no other results, you can add an AND EXISTS ... condition to your query, like this:
SELECT *
FROM Data d
WHERE d.Weather LIKE '%'+COALESCE(NULLIF('',''),'sunny')+'%'
OR (d.Weather IS NULL AND EXISTS (SELECT * FROM Data dd WHERE dd.Weather LIKE '%'+COALESCE(NULLIF('',''),'sunny')+'%'))
The additional condition ensures that NULLs are treated as matches only if other matching records exist.
You can also use a common table expression to avoid duplicating the query, like this:
WITH cte (id, weather) AS
(
SELECT d.Id, d.Weather
FROM Data d
WHERE d.Weather LIKE '%'+COALESCE(NULLIF('',''),'sunny')+'%'
)
SELECT * FROM cte
UNION ALL
SELECT Id, Weather FROM Data WHERE Weather IS NULL AND EXISTS (SELECT * FROM cte)
that statement also shows the rows where Weather is null even when the condition matches nothing (let's say the user mistyped the search term).
This suggests that the constant 'sunny' is coming from the end user's input. If that is the case, you need to parameterize your query to avoid SQL injection attacks.
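A minimal T-SQL sketch of what the parameterized version could look like, assuming SQL Server and @search bound from application code rather than concatenated into the SQL text:
DECLARE @search nvarchar(100) = N'sunny';  -- in practice, bound as a query parameter, not hard-coded
SELECT *
FROM Data d
WHERE d.Weather LIKE '%' + @search + '%'
   OR (d.Weather IS NULL
       AND EXISTS (SELECT 1 FROM Data dd WHERE dd.Weather LIKE '%' + @search + '%'));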