I have an SQL db which contains a table of license plate numbers (plates), a table of parking lots (places), and a table corresponding one to the other over time (parking), each row placing a specific vehicle plate in a specific place at a particular time.
create table parking (
plateid integer,
placeid integer,
time_period integer
);
This means each row as a whole is unique but the plate/place combinations are not.
I need to count how many times each plate appears in each place. There are an indeterminate number of both, so I cannot maintain a table per place and simply count the entries.
This is easy enough using a general purpose programming language applied to the list. Is there a way to do it with straight SQL?
Are you just looking for aggregation?
select plateid, placeid, count(*)
from parking
group by plateid, placeid;
You can group by both plate and place and count occurrences:
SELECT plateid, placeid, count(*)
FROM parking
GROUP BY plateid, placeid;
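For example, with a few hypothetical rows:

insert into parking (plateid, placeid, time_period) values (1, 10, 1);
insert into parking (plateid, placeid, time_period) values (1, 10, 2);
insert into parking (plateid, placeid, time_period) values (2, 10, 1);

the grouped query returns one row per plate/place combination:

plateid | placeid | count
--------+---------+------
      1 |      10 |     2
      2 |      10 |     1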
I was reading this but can't manage to hack it to work on my own problem.
My data has the following fields, in a single table in Postgres:
Seller_id (varchar) (contains duplicates).
SKU (varchar) (contains duplicates).
selling_starts (datetime).
selling_ends (datetime).
I want to query it so I get the count of unique SKUs on sale, per seller, per day. If there are any null days, I don't need them.
I've tried querying it by using another table to generate a list of unique "filler" dates and then joining it where the date is greater than the selling_starts field and less than the selling_ends field. However, this is so computationally expensive that I get timeout errors.
I'm vaguely aware there are probably more efficient ways of doing this, via WITH statements to create CTEs or some sort of recursive function, but I don't have any experience with those.
Any help much appreciated!
Try this:
WITH list AS (
    SELECT generate_series(date_trunc('day', min(selling_starts)), max(selling_ends), '1 day') AS ref_date
    FROM your_table
)
SELECT seller_id
     , l.ref_date
     , count(DISTINCT sku) AS sku_count
FROM your_table AS t
INNER JOIN list AS l
    ON t.selling_starts <= l.ref_date
    AND t.selling_ends > l.ref_date
GROUP BY seller_id, l.ref_date;
If your_table is large, you should create indexes to accelerate the query.
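For example (a sketch; the index names are illustrative, and your_table stands in for the real table as above):

CREATE INDEX idx_selling_period ON your_table (selling_starts, selling_ends);
CREATE INDEX idx_seller_sku ON your_table (seller_id, sku);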
I have two tables in my DB:
Building(bno, address, bname) - PK is bno.
Room(bno, rno, floor, maxstud) - PK is (bno, rno) together.
The Building table stands for a building number, address and name.
The Room table stands for building number, room number, floor number and maximum amount of students who can live in the room.
The query I have to write:
Find the buildings that have at least 10 rooms in which the maximum number of students who can live is 1. The columns should be bno, bname, and the number of such rooms.
What I wrote:
select building.bno, building.bname, count(rno)
from room natural join building
where maxstud =1
group by bno, bname
having count(rno)>=10
What the solution I was given states:
with temp as (
select bno, count(distinct rno) as sumrooms
from room
where maxstud=1
group by bno
)
select bno, bname, sumrooms
from building natural join temp
where sumrooms>=10
Is my solution correct? I didn't see a reason to use a sub-query, but now I'm afraid I was wrong.
Thanks,
Alan
Your query will perform faster, and it compiles fine: you do include every unaggregated column (bno, bname) in the GROUP BY clause.
Also, the given solution counts distinct room numbers, so one might conclude that a building can have several rooms with the same number, for example on different floors, in which case a room would be correctly identified by the unique triple (bno, rno, floor). With the stated primary key (bno, rno), though, rno is already unique within a building, so the DISTINCT changes nothing.
Given what I wrote above, your query would look like this:
select building.bno, building.bname, count(distinct rno)
from room natural join building
where maxstud = 1
group by 1,2 -- I used positions here, you can use names if you wish
having count(distinct rno) >= 10
Your solution is better.
If you are unsure, run both queries on a sample dataset and convince yourself that the results are the same.
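For example, with a small hypothetical dataset:

insert into building (bno, address, bname) values (1, '1 Main St', 'North Hall');
insert into room (bno, rno, floor, maxstud) values (1, 101, 1, 1);
insert into room (bno, rno, floor, maxstud) values (1, 102, 1, 2);

Neither query returns a row here (building 1 has only one maxstud = 1 room); add ten or more such rooms and both should return the same single row for building 1.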
How can one find the count of unique events in BigQuery? I am having a hard time calculating the count of unique events and matching it with the GA interface.
There are two ways this is done:
1) One, as the original linked documentation says, is to combine the full visitor id (fullVisitorId) with its per-session id (visitId), and count the distinct combinations.
SELECT
EXACT_COUNT_DISTINCT(combinedVisitorId)
FROM (
SELECT
CONCAT(fullVisitorId,string(VisitId)) AS combinedVisitorId
FROM
[google.com:analytics-bigquery:LondonCycleHelmet.ga_sessions_20130910]
WHERE
hits.type='PAGE' )
2) The other is just counting distinct fullVisitorIds
SELECT
EXACT_COUNT_DISTINCT(fullVisitorId)
FROM
[google.com:analytics-bigquery:LondonCycleHelmet.ga_sessions_20130910]
WHERE
hits.type='PAGE'
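For reference, a sketch of the first query in standard SQL (hits is a repeated field there, so it has to be unnested to filter on hits.type):

SELECT COUNT(DISTINCT CONCAT(fullVisitorId, CAST(visitId AS STRING)))
FROM `google.com:analytics-bigquery.LondonCycleHelmet.ga_sessions_20130910` AS t,
     UNNEST(t.hits) AS h
WHERE h.type = 'PAGE'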
I am working on a database with products and lot numbers. Each entry in the Lots table has a Lot Number and a Product description.
Sometimes there are multiple records of the same lot number, for example when an item is repacked a new record is created, but with the same Lot Number and same product description - this is fine. But other times there are problem cases, namely when two different products share the same Lot Number. I am trying to find those.
In other words, there are 3 possibilities:
1. Lot numbers for which there is only one record in the table.
2. Lot numbers for which there are multiple records, but the product description is the same for all of them.
3. Lot numbers for which there are multiple records, and the product descriptions are not all the same.
I need to return only #3, with a separate record for each instance of that lot number and product description.
Any help would be greatly appreciated.
Thanks Juan for the sample data. Using this example, I want to return the data contained in Ids 2-8, but not 1, 9, 10, or 11.
This wasn't easy because I don't use Access very often.
First, select the unique values using DISTINCT.
Then count how many different products appear for each lotnumber using GROUP BY.
Last, join both results and show only the lots with more than one description (WHERE total > 1).
SELECT id, Product.lotnumber, Product.product, total
FROM Product
INNER JOIN (
    SELECT lotnumber, COUNT(*) AS total
    FROM (SELECT DISTINCT lotnumber, product FROM Product) AS UniqueProducts
    GROUP BY lotnumber
) AS SubT ON Product.lotnumber = SubT.lotnumber
WHERE total > 1
ORDER BY id;
As you can see:
lot 2 has two products (yy and zz)
lot 3 has three products (aa, bb, cc)
I include my product table:
Sorry for the Spanish. The field types are AutoNumber, Short Text, and Number.
Where are Cartesian Joins used in real life?
Can someone please give examples of such a join in any SQL database?
Just a random example: you have a table of cities (Id, Lat, Lon, Name) and you want to show the user a table of distances from one city to another. You would write something like:
SELECT c1.Name, c2.Name, SQRT( (c1.Lat - c2.Lat) * (c1.Lat - c2.Lat) + (c1.Lon - c2.Lon) * (c1.Lon - c2.Lon))
FROM City c1, City c2
Here are two examples:
To create multiple copies of an invoice or other document you can populate a temporary table with names of the copies, then cartesian join that table to the actual invoice records. The result set will contain one record for each copy of the invoice, including the "name" of the copy to print in a bar at the top or bottom of the page or as a watermark. Using this technique the program can provide the user with checkboxes letting them choose what copies to print, or even allow them to print "special copies" in which the user inputs the copy name.
CREATE TEMP TABLE tDocCopies (CopyName TEXT(20))
INSERT INTO tDocCopies (CopyName) VALUES ('Customer Copy')
INSERT INTO tDocCopies (CopyName) VALUES ('Office Copy')
...
INSERT INTO tDocCopies (CopyName) VALUES ('File Copy')
SELECT * FROM InvoiceInfo, tDocCopies WHERE InvoiceDate = TODAY()
To create a calendar matrix, with one record per person per day, cartesian join the people table to another table containing all days in a week, month, or year.
SELECT People.PeopleID, People.Name, CalDates.CalDate
FROM People, CalDates
I've noticed this being done to try to deliberately slow down the system either to perform a stress test or an excuse for missing development deliverables.
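The classic accidental version is simply a join with no join condition, which multiplies the two row counts (a sketch with made-up table names):

SELECT *
FROM orders o, order_lines l  -- no o.id = l.order_id condition: returns |orders| * |order_lines| rows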
Usually, to generate a superset for the reports.
In PostgreSQL:
SELECT d.id, month.month, COALESCE(SUM(s.sales), 0) AS total_sales
FROM generate_series(1, 12) AS month
CROSS JOIN department d
LEFT JOIN sales s
    ON s.department = d.id
    AND s.month = month.month
GROUP BY d.id, month.month
This is the only time in my life that I've found a legitimate use for a Cartesian product.
At the last company I worked at, there was a report that was requested on a quarterly basis to determine what FAQs were used at each geographic region for a national website we worked on.
Our database described geographic regions (markets) by a tuple (4, x), where 4 represented a level number in a hierarchy, and x represented a unique marketId.
Each FAQ is identified by an FaqId, and each association to an FAQ is defined by the composite key marketId tuple and FaqId. The associations are set through an admin application, but given that there are 1000 FAQs in the system and 120 markets, it was a hassle to set initial associations whenever a new FAQ was created. So, we created a default market selection, and overrode a marketId tuple of (-1,-1) to represent this.
Back to the report - the report needed to show every FAQ question/answer and the markets that displayed this FAQ in a 2D matrix (we used an Excel spreadsheet). I found that the easiest way to associate each FAQ to each market in the default market selection case was with this query, unioning the exploded result with all other direct FAQ-market associations.
The Faq2LevelDefault table holds all of the markets that are defined as being in the default selection (I believe it was just a list of marketIds).
SELECT fl.FaqId, fld.LevelId, 1 AS [Exists]
FROM Faq2Levels fl
CROSS JOIN Faq2LevelDefault fld
WHERE fl.LevelId = -1 AND fl.LevelNumber = -1 AND fld.LevelNumber = 4
UNION
SELECT FaqId, LevelId, 1 AS [Exists]
FROM Faq2Levels
WHERE LevelNumber = 4
You might want to create a report using all of the possible combinations from two lookup tables, in order to create a report with a value for every possible result.
Consider bug tracking: you've got one table for severity and another for priority and you want to show the counts for each combination. You might end up with something like this:
select severity_name, priority_name, count(e.severity_id) -- counts 0, not 1, for combinations with no errors
from (select severity_id, severity_name,
             priority_id, priority_name
      from severity, priority) sp
left outer join errors e
    on e.severity_id = sp.severity_id
    and e.priority_id = sp.priority_id
group by severity_name, priority_name
In this case, the cartesian join between severity and priority provides the master list against which you perform the subsequent outer join.
When running a query for each date in a given range. For example, for a website, you might want to know for each day, how many users were active in the last N days. You could run a query for each day in a loop, but it's simplest to keep all the logic in the same query, and in some cases the DB can optimize the Cartesian join away.
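A sketch of that shape in Postgres (the activity table, its columns, and the date range are all assumptions):

SELECT d.day::date AS day,
       COUNT(DISTINCT a.user_id) AS active_users
FROM generate_series(DATE '2024-01-01', DATE '2024-01-31', INTERVAL '1 day') AS d(day)
LEFT JOIN activity a
    ON a.activity_date > d.day - INTERVAL '30 days'  -- active in the last 30 days
    AND a.activity_date <= d.day
GROUP BY d.day
ORDER BY d.day;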
To create a list of related words in text mining, using similarity functions, e.g. Edit Distance
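For example, in Postgres with the fuzzystrmatch extension (the words table is an assumption):

CREATE EXTENSION IF NOT EXISTS fuzzystrmatch;

SELECT w1.word, w2.word, levenshtein(w1.word, w2.word) AS distance
FROM words w1
CROSS JOIN words w2
WHERE w1.word < w2.word                        -- each unordered pair once
  AND levenshtein(w1.word, w2.word) <= 2;      -- "related" = small edit distance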