Given the table below (called marks), why is the following statement wrong?
SELECT student_name, SUM(subject1)
FROM marks
WHERE student_name LIKE 'R%';
The following is the 'marks' table:
You need to add a GROUP BY whenever you use an aggregate function such as SUM or AVG together with non-aggregated columns like student_name.
SELECT student_name, SUM(subject1)
FROM marks
WHERE student_name LIKE 'R%'
GROUP BY student_name;
Hope this works for you now!
Why is it wrong?
The SUM() makes this an aggregation query. An aggregation query is fine without a GROUP BY: it returns exactly one row, but then every column in the SELECT must be wrapped in an aggregation function.
However, you have an unaggregated column student_name. This is not in a GROUP BY and it is not the argument to an aggregation function.
Presumably, you want GROUP BY student_name. But there are other possibilities, such as MIN(), MAX() or LISTAGG().
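For concreteness, the GROUP BY fix is shown in the other answer above; the aggregate-only alternative mentioned here would look like this sketch (MAX() chosen arbitrarily for student_name):
-- one summary row, every selected column aggregated
SELECT MAX(student_name), SUM(subject1)
FROM marks
WHERE student_name LIKE 'R%';
This returns exactly one row, so student_name has to be wrapped in an aggregate as well.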
I have two columns called price and PID in a database called seatinginfo.
Both columns will have multiple occurrences of the same values, like:
pid: 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2, 3, 3, 3, 3, 3
price: 10, 10, 30, 40, 60, 80, 70, 90, 90, 90, 90, etc.
I'm looking to write a query that will give me a list of all unique prices, picking just one (e.g. the max) connected pid for each.
So, for example, I only want one price,pid pair each, like 10,1 30,1 40,1 60,2 etc.
Thanks
This is probably as simple as basic aggregation. Something like this should work based on the pretty vague details.
select price
, max(pid)
from YourTable
group by price
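GROUP BY price collapses the table to one row per distinct price, and MAX(pid) then picks a single representative pid for each of them; MIN(pid) would work just as well if any one pid is acceptable.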
I need to compute the cumulative sum of one column (B) into another column (C), adding each successive value and restarting according to the criteria in another column (A), something like this:
Is this possible with a simple SELECT?
In SQL Server, the window function sum() over() should do the trick.
Note the ORDER BY ColB: this is just a placeholder; I suspect you have another column that would provide the proper sequence.
Example
Select ColA
,ColB
,ColC = sum(ColB) over (partition by ColA order by ColB rows unbounded preceding )
From YourTable
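For illustration (the example image from the question is not included here), a minimal self-contained sketch with invented data, showing the running total restarting for each ColA group:
-- hypothetical rows, only for demonstration
WITH YourTable (ColA, ColB) AS (
    SELECT 'x', 1 UNION ALL
    SELECT 'x', 2 UNION ALL
    SELECT 'x', 3 UNION ALL
    SELECT 'y', 5 UNION ALL
    SELECT 'y', 7
)
SELECT ColA,
       ColB,
       ColC = SUM(ColB) OVER (PARTITION BY ColA ORDER BY ColB ROWS UNBOUNDED PRECEDING)
FROM YourTable;
-- expected result:
-- x, 1, 1
-- x, 2, 3
-- x, 3, 6
-- y, 5, 5
-- y, 7, 12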
Say there are two tables, one table has a column name, and the other has a column occupation. I'm trying to find out how many people have more than 6 occupations in my records. I've tried to COUNT the occupations, but the problem is, when I do, I need to group by. When I group by the name, the problem arises when there are two "Alex Jones", each having 4 occupations, and so the resulting group by gives me "Alex Jones: 8".
I'm not sure how I can avoid this; some advice would be great, thanks in advance!
If your problem is that when you group by "name" you end up grouping two names that are identical but refer to different people, then your "name" column is not unique. Try using a combination of columns that makes the group by unique, or group by a single unique column.
You can, for example, group by name, other_column, where other_column is a column that, in conjunction with "name", uniquely identifies the person. Or, even better, group by personal_id, if you have a unique column like a social security number or something like that.
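For example, a sketch of that approach with a HAVING filter for the "more than 6 occupations" requirement (the table and column names people, occupations and personal_id are invented here for illustration, assuming the two tables can be joined on a unique person id):
SELECT p.personal_id,
       MIN(p.name) AS name,
       COUNT(o.occupation) AS occupation_count
FROM people p
JOIN occupations o ON o.personal_id = p.personal_id
GROUP BY p.personal_id
HAVING COUNT(o.occupation) > 6;
-- wrap this in SELECT COUNT(*) FROM (...) if you only need how many such people there are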
As another option, you can use window functions to count without grouping. For example:
select
    ...
    name,
    COUNT(occupation) OVER (PARTITION BY name)
    ...
from my_table
You can learn how to use it from here:
https://www.postgresql.org/docs/current/tutorial-window.html
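Note that a window function cannot be used directly in a WHERE clause, so to keep only the people with more than 6 occupations you would wrap the windowed count in a subquery and filter there; a sketch built on the snippet above:
select distinct name, occ_count
from (
    select name,
           COUNT(occupation) OVER (PARTITION BY name) as occ_count
    from my_table
) t
where occ_count > 6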
I want to know how I can find duplicate data entries within one table in ClickHouse.
I am investigating a MergeTree table and have already thrown OPTIMIZE statements at it, but that didn't do the trick. The duplicate entries still persist.
Preferably, this would be a universal strategy that doesn't reference individual column names.
I only want to see the duplicate entries, since I am working on very large tables.
The straightforward way would be to run this query.
SELECT
*,
count() AS cnt
FROM myDB.myTable
GROUP BY *
HAVING cnt > 1
ORDER BY date ASC
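Because GROUP BY * groups on every column, two rows only count as duplicates here when all of their column values are identical, and cnt tells you how many copies of each duplicated row exist.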
If that query gets too big, you can run it in pieces.
SELECT
*,
count() AS cnt
FROM myDB.myTable
WHERE (date >= '2020-08-01') AND (date < '2020-09-01')
GROUP BY *
HAVING cnt > 1
ORDER BY date ASC
From the above table, I want to remove the rows that have a duplicate CODE value. Out of the duplicate rows, the row with the smallest ID_CHECK should be retained.
Output:
If I understand correctly, this should be your SQL query:
SELECT Names, Code, ID_CHECK
FROM
(
    SELECT Names, Code, ID_CHECK,
           ROW_NUMBER() OVER (PARTITION BY Names, Code ORDER BY ID_CHECK) AS rnk
    FROM your_table
) tmp
WHERE tmp.rnk = 1;
Let me know if this works.
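If the goal is to actually delete the duplicates rather than just select the surviving rows, one common pattern is to delete through the ranked set (a sketch, assuming a SQL Server-style dialect where a CTE over a single table is updatable, and the same placeholder your_table):
WITH ranked AS (
    SELECT ID_CHECK,
           ROW_NUMBER() OVER (PARTITION BY Names, Code ORDER BY ID_CHECK) AS rnk
    FROM your_table
)
DELETE FROM ranked
WHERE rnk > 1;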