I have some records that have information about stores. These records have several different nested fields. One of the nested fields is tags and one is employees. I am trying to get a count of the number of stores that have a tag and an employee with a certain name. So I did this:
SELECT count(*)
FROM [stores.stores_844_1]
where tags.tag_name='foo'
and employees.first_name='bar'
Then I get the error:
Error: Cannot query the cross product of repeated fields tags.tag_name and employees.first_name.
I can make it work by changing the query to:
SELECT count(*)
FROM (flatten([stores.stores_844_1], tags))
where tags.tag_name='foo'
and employees.first_name='bar'
The problem with this is that I am dynamically creating the where clause and so my from clause will have to change depending on what I have in the where. While I could generate some logic in code to figure out what the from clause should be, I was wondering if there is a way to do something like:
SELECT count(*)
FROM [stores.stores_844_1]
where tags.tag_name='foo' WITHIN RECORD
and employees.first_name='bar' WITHIN RECORD
That would not have to flatten the main table?
I have tried using an ugly workaround like this:
SELECT count(*)
FROM
(SELECT GROUP_CONCAT(CONCAT('>', tags.tag_name,'<')) WITHIN RECORD as f1, GROUP_CONCAT(CONCAT('>',employees.first_name,'<')) WITHIN RECORD as f2
FROM [stores.stores_844_1]
)
where f1 CONTAINS '>foo<'
and f2 CONTAINS '>bar<'
This workaround does what I want, but it seems really hacky and ugly, and there must be a better way, right?
You can use WITHIN RECORD to come up with another field that indicates whether the values are present. I'm not sure if this meets your requirements, since you still have to change the FROM clause, but it seems cleaner than what you are currently doing. In other words, try this:
SELECT count(*) FROM (
SELECT SUM(IF(tags.tag_name='foo', 1, 0)) WITHIN RECORD as has_foo,
SUM(IF(employees.first_name='bar', 1, 0)) WITHIN RECORD as has_bar
FROM [stores.stores_844_1])
WHERE has_foo > 0 AND has_bar > 0
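Since you are generating the WHERE clause dynamically, you can emit one WITHIN RECORD flag per condition and leave the FROM clause fixed. For example, with a third condition on a hypothetical repeated field, the generated query might look like:
SELECT count(*) FROM (
SELECT SUM(IF(tags.tag_name='foo', 1, 0)) WITHIN RECORD as has_foo,
SUM(IF(employees.first_name='bar', 1, 0)) WITHIN RECORD as has_bar,
-- departments.dept_name is a hypothetical third repeated field
SUM(IF(departments.dept_name='baz', 1, 0)) WITHIN RECORD as has_baz
FROM [stores.stores_844_1])
WHERE has_foo > 0 AND has_bar > 0 AND has_baz > 0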
I'm trying to remove duplicates on a column in SQL, without including that column in the extract (since it contains personally identifiable data). I thought I might be able to do this with nested queries (as below), but this isn't working. I also thought it might be possible to remove duplicates in the WHERE clause, but couldn't find anything from googling. Any ideas? Thanks in advance.
SELECT [ETHNIC], [RELIGION]
FROM
(SELECT DISTINCT [ID], [ETHNIC], [RELIGION]
FROM MainData)
Using DISTINCT like that applies it to the whole row, so if there are two rows with the same ID but different ETHNIC and RELIGION values, DISTINCT won't remove them. To do that you could use GROUP BY in your query, but then you need an aggregate (e.g. MAX):
SELECT [ETHNIC], [RELIGION]
FROM
(SELECT [ID], MAX([ETHNIC]) AS ETHNIC, MAX([RELIGION]) AS RELIGION
FROM MainData
GROUP BY [ID])
If that's not what you're looking for, some SQL dialects require that you name your inner select, so you could try adding AS X to the end of your query.
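For example, the original query with the derived table named (keeping the question's table and column names):
SELECT [ETHNIC], [RELIGION]
FROM
(SELECT DISTINCT [ID], [ETHNIC], [RELIGION]
FROM MainData) AS X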
I'm converting a SAS script to Python for a PostgreSQL environment. In a few places I've found a data statement in SAS, which looks something like this (in multiple scripts):
data dups;
set picc;
by btn wtn resp_ji;
if not (first.resp_ji and last.resp_ji);
run;
Obviously datasets aren't the same in Python or SQL environments, and I'm having trouble determining what this specific statement is doing. To be clear, a number of the scripts being converted create a dataset in this manner with this same name, so my expectation is that most of these would be overwritten over and over.
I'm also unclear as to what the postgres equivalent to the condition in the data dups statement would be.
Is there an obvious PostgreSQL statement that would work in its place? Something like this?:
CREATE TABLE dups AS
SELECT btn, wtn, resp_ji
WHERE /*some condition that matches the condition in the data statement*/
Does the
by btn wtn resp_ji;
statement mean which columns are copied over, or is that the equivalent of an ORDER BY clause in PostgreSQL?
Thanks.
The statement is using what's called 'by group processing'. Before the step can run, it requires that the data is sorted by btn wtn resp_ji.
The first.resp_ji piece is checking to see if it's the first time it's seen the current value of resp_ji within the current btn/wtn combination. Likewise the last.resp_ji piece is checking if it's the final time that it will see the current value of resp_ji within the current btn/wtn combination.
Combining it all together the statement:
if not (first.resp_ji and last.resp_ji);
is saying: if the current value of resp_ji occurs multiple times for the current combination of btn/wtn, keep the record; otherwise discard it. A subsetting IF used like that implicitly keeps or discards each record.
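For example, with the data sorted by btn, wtn, resp_ji (values are illustrative):
btn  wtn  resp_ji  first.resp_ji  last.resp_ji  kept?
1    1    A        1              1             no
1    1    B        1              0             yes
1    1    B        0              1             yes
1    2    A        1              1             no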
To do the equivalent in SQL, you could do something like:
Find all records to discard.
Discard those records from the original dataset.
So...
create table rows_to_discard as
select btn, wtn, resp_ji, count(*) as freq
from mytable
group by btn, wtn, resp_ji
having count(*) = 1;
create table want as
select a.*
from mytable a
left join rows_to_discard b on b.btn = a.btn
and b.wtn = a.wtn
and b.resp_ji = a.resp_ji
where b.btn is null
EDIT : I should mention that there is no simple SQL equivalent. It may be possible by numbering rows in subqueries, and then building logic on top of that but it'd be ugh-ly. It may also depend on the specific flavour of SQL being used.
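For example, with window functions (a sketch in PostgreSQL syntax, reusing the table name from above), the two steps collapse into one query:
create table want as
select btn, wtn, resp_ji  -- add the remaining columns as needed
from (select btn, wtn, resp_ji,
             count(*) over (partition by btn, wtn, resp_ji) as freq
      from mytable) t
where freq > 1;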
As someone who learned SAS before PostgreSQL, I found the following much more similar to SAS first./last. logic:
--first.
select distinct on (resp_ji) * from <table> order by resp_ji
--last.
select distinct on (resp_ji) * from <table> order by resp_ji desc
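Applied to the by-variables from the question, the equivalent of first.resp_ji within btn/wtn groups would be something like (a sketch):
--first. within btn/wtn
select distinct on (btn, wtn, resp_ji) *
from picc
order by btn, wtn, resp_ji;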
A way to detect duplicates (when no extra differentiating field is available) is to use the ctid as tie-breaker:
CREATE TABLE dups
AS
SELECT * FROM picc p
WHERE EXISTS ( SELECT * FROM picc x
WHERE x.btn = p.btn
AND x.wtn = p.wtn
AND x.resp_ji = p.resp_ji
AND x.ctid <> p.ctid
);
I have two separate databases. I am trying to update a column in one database to the values of a column from the other database:
UPDATE customer
SET customer_id=
(SELECT t1 FROM dblink('port=5432 dbname=SERVER1 user=postgres password=309245',
'SELECT store_key FROM store') AS (t1 integer));
This is the error I am receiving:
ERROR: more than one row returned by a subquery used as an expression
Any ideas?
Technically, to remove the error, add LIMIT 1 to the subquery to return at most 1 row. The statement would still be nonsense.
... 'SELECT store_key FROM store LIMIT 1' ...
Practically, you want to match rows somehow instead of picking an arbitrary row from the remote table store to update every row of your local table customer.
I assume a text column match_name in both tables (UNIQUE in store) for the sake of this example:
... 'SELECT store_key FROM store
WHERE match_name = ' || quote_literal(customer.match_name) ...
But that's an extremely expensive way of doing things.
Ideally, you completely rewrite the statement.
UPDATE customer c
SET customer_id = s.store_key
FROM dblink('port=5432 dbname=SERVER1 user=postgres password=309245'
, 'SELECT match_name, store_key FROM store')
AS s(match_name text, store_key integer)
WHERE c.match_name = s.match_name
AND c.customer_id IS DISTINCT FROM s.store_key;
This remedies a number of problems in your original statement.
Obviously, the basic error is fixed.
It's typically better to join in additional relations in the FROM clause of an UPDATE statement than to run correlated subqueries for every individual row.
When using dblink, the above becomes a thousand times more important. You do not want to call dblink() for every single row, that's extremely expensive. Call it once to retrieve all rows you need.
With correlated subqueries, if no row is found in the subquery, the column gets updated to NULL, which is almost always not what you want. In my updated query, the row only gets updated if a matching row is found. Else, the row is not touched.
Normally, you wouldn't want to update rows, when nothing actually changes. That's expensively doing nothing (but still produces dead rows). The last expression in the WHERE clause prevents such empty updates:
AND c.customer_id IS DISTINCT FROM s.store_key
Related:
How do I (or can I) SELECT DISTINCT on multiple columns?
The fundamental problem can often be simply solved by changing an = to IN, in cases where you've got a one-to-many relationship. For example, if you wanted to update or delete a bunch of accounts for a given customer:
WITH accounts_to_delete AS
(
SELECT account_id
FROM accounts a
INNER JOIN customers c
ON a.customer_id = c.id
WHERE c.customer_name='Some Customer'
)
-- this fails if "Some Customer" has multiple accounts, but works if there's 1:
DELETE FROM accounts
WHERE accounts.guid =
(
SELECT account_id
FROM accounts_to_delete
);
-- this succeeds with any number of accounts:
DELETE FROM accounts
WHERE accounts.guid IN
(
SELECT account_id
FROM accounts_to_delete
);
This means your nested SELECT returns more than one row.
You need to add a proper WHERE clause to it.
This error means that the SELECT store_key FROM store query has returned two or more rows in the SERVER1 database. If you would like to update all customers, use a join instead of a scalar = operator. You need a condition to "connect" customers to store items in order to do that.
If you wish to update all customer_ids to the same store_key, you need to supply a WHERE clause to the remotely executed SELECT so that the query returns a single row.
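For example (a sketch; store_name is a hypothetical column identifying the one row you want):
UPDATE customer
SET customer_id=
-- store_name below is a hypothetical identifying column
(SELECT t1 FROM dblink('port=5432 dbname=SERVER1 user=postgres password=309245',
'SELECT store_key FROM store WHERE store_name = ''Main Street''') AS (t1 integer));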
Use LIMIT 1, so it will return only one row.
Example
customer_id = (select id from enumeration where enumeration.name = 'Ready To Invoice' limit 1)
The query returns multiple rows, and this needs proper handling. The issue can be resolved by making the subquery return a single row, for example:
1. limiting the query to return one single row (LIMIT 1), or
2. using an aggregate such as select max(column), which returns a single row.
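For example, using the aggregate approach on the query from the question (this still sets every customer to the same value):
UPDATE customer
SET customer_id=
(SELECT t1 FROM dblink('port=5432 dbname=SERVER1 user=postgres password=309245',
'SELECT max(store_key) FROM store') AS (t1 integer));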
I'm writing a function in ColdFusion that returns the first couple of records that match the user's input, as well as the total count of matching records in the entire database. The function will be used to feed an autocomplete, so speed/efficiency are its top concerns. For example, if the function receives input "bl", it might return {sampleMatches:["blue", "blade", "blunt"], totalMatches:5000}
I attempted to do this in a single query for speed purposes, and ended up with something that looked like this:
select record, count(*) over ()
from table
where criteria like :criteria
and rownum <= :desiredCount
The problem with this solution is that count(*) over () always returns the value of :desiredCount. I saw a similar question to mine here, but my app will not have permissions to create a temp table. So is there a way to solve my problem in one query? Is there a better way to solve it? Thanks!
I'm writing this off the top of my head, so you should definitely time it, but I believe that the following CTE
only requires you to write the conditions once
only returns the amount of records you specify
has the correct total count added to each record
and is evaluated only once
SQL Statement
WITH q AS (
SELECT record
FROM table
WHERE criteria like :criteria
)
SELECT q1.*, q2.*
FROM q q1
CROSS JOIN (
SELECT COUNT(*) FROM q
) q2
WHERE rownum <= :desiredCount
A nested subquery should return the results you want
select record, cnt
from (select record, count(*) over () cnt
from table
where criteria like :criteria)
where rownum <= :desiredCount
This will, however, force Oracle to completely process the query in order to generate the accurate count. This seems unlikely to be what you want if you're trying to do an autocomplete particularly when Oracle may decide that it would be more efficient to do a table scan on table if :criteria is just b since that predicate isn't selective enough. Are you really sure that you need a completely accurate count of the number of results? Are you sure that your table is small enough/ your system is fast enough/ your predicates are selective enough for that to be a requirement that you could realistically meet? Would it be possible to return a less-expensive (but less-accurate) estimate of the number of rows? Or to limit the count to something smaller (say, 100) and have the UI display something like "and 100+ more results"?
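For example, one way to implement the "100+ more results" idea is to cap the scan before counting (a sketch; the cap of 101 is illustrative):
select record, cnt
from (select record, count(*) over () cnt
      from (select record
            from table
            where criteria like :criteria
            and rownum <= 101))  -- cap: count at most 101 matches
where rownum <= :desiredCount
If cnt comes back as 101, the UI can show "100+ more results" instead of an exact total, and Oracle can stop after 101 matches rather than processing every matching row.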
I'm not a database programmer, but I have a simple database-backed app where I have items with tags. Each item may have multiple tags, so I'm using a typical junction table (like this), where each row represents the fact that the item with the appropriate ID has the tag with the appropriate ID.
This works very logically when I want to do something like select all items with a given tag.
But, what is the typical pattern for doing AND searches? That is, what if I want to find all items which have all of a certain set of tags? This is such a common operation that I'd think some of the intro tutorials would cover it, but I guess I'm not looking in the right places.
The approach I tried was to use INTERSECT, first directly and then with subqueries and IN. This works, but quickly builds up long queries as I add search terms. And, crucially, this approach appears to be about an order of magnitude slower than shoving all the tags as text into one "tags" column and using SQLite's full-text search. (And, as I would expect/hope, the FTS search gets faster as I add more terms, which doesn't seem to be the case with the INTERSECT approach.)
What's the proper design pattern here, and what's the right way to make it snappy? I'm using SQLite in this case, but I'm most interested in a general answer, since this must be a common thing to do.
The following is the standard ANSI SQL solution, which avoids having to keep the number of ids in sync with the ids themselves.
with tag_ids (tid) as (
values (1), (2)
)
select id
from tags
where id in (select tid from tag_ids)
group by id
having count(*) = (select count(*) from tag_ids);
The values clause ("row constructor") is supported by PostgreSQL and DB2. For databases that don't support it, you can replace it with a simple SELECT; e.g. in Oracle this would be:
with tag_ids (tid) as (
select 1 as tid from dual
union all
select 2 from dual
)
select id
from tags
where id in (select tid from tag_ids)
group by id
having count(*) = (select count(*) from tag_ids);
For SQL Server you would simply leave out the "from dual", as it does not require a FROM clause for a SELECT.
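So the SQL Server version would be:
with tag_ids (tid) as (
  select 1 as tid
  union all
  select 2
)
select id
from tags
where id in (select tid from tag_ids)
group by id
having count(*) = (select count(*) from tag_ids);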
This assumes that one tag can only be assigned exactly once. If that isn't the case, you would need to use a count(distinct id) in the having clause.
I would be inclined to use a group by:
select id
from tags
where id in (<tag1>, <tag2>)
group by id
having count(*) = 2
This would guarantee that both appear.
For an unlimited-size list, you could store the tags in a string variable, say @taglist, such as '|tag1|tag2|tag3|' (note the delimiters on both ends). Then you can do:
select id
from tags
where @taglist like '%|' + tag + '|%'
group by id
having count(*) = len(@taglist) - len(replace(@taglist, '|', '')) - 1
This uses SQL Server syntax, but it is saying two things. The WHERE clause says that the tag is in the list. The HAVING clause says that the number of matches equals the length of the list. It does this with a trick: counting the number of separators and subtracting 1.
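For example, with @taglist = '|tag1|tag2|tag3|': len(@taglist) = 16 and len(replace(@taglist, '|', '')) = 12, so the expression evaluates to 16 - 12 - 1 = 3, the number of tags in the list.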