Query multiple tables in Access - SQL

We have 50 tables and need to query a column that exists in all of them. This column is a checkbox. We need to count, per table, how many are checked and how many are unchecked. We can't seem to get one query to count the results and display them per table, as opposed to multiplying or combining the results.
We need one column per table to display the count of checked and unchecked.
Thanks
SELECT "Table1" , Count('qcpass') AS column
FROM 5000028
GROUP BY [5000028].qcpass
union
SELECT "Table2",count('qcpass')
FROM 5000029
Group By [5000029].qcpass;

Edit
Based on your feedback, try this (sorry, didn't realize you wanted 1 column per table):
Make a union query that combines all 50 tables. The result should be 1 row per table:
SELECT "5000028" as QCPASS, Count () FROM 5000028 group by QCPASS
UNION
SELECT "5000029" as QCPASS, Count () FROM 5000029 group by QCPASS
UNION...
Now make a "Crosstab" query which is pretty easy in Access. First, make a new query and select the Crosstab option at the top. This query will use the union query as its source.
This will have 3 columns. The first will be a constant value (you can use "Totals" if you like, it's just a placeholder). Set this as your "Row Heading".
The 2nd column will be QCPass. Set this as your "Column Heading".
The 3rd column will be Expr1. Set this as your "Value".
When you run this, you should see a 1-row table with one column for each of your source tables.
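For reference, the crosstab can also be written directly in Access SQL with TRANSFORM ... PIVOT rather than through the designer. A minimal sketch, assuming the union query above is given an extra constant column for the row heading:

SELECT "Totals" AS RowHead, "5000028" AS QCPASS, Count(*) AS Expr1 FROM [5000028]
UNION ALL
SELECT "Totals", "5000029", Count(*) FROM [5000029]

Save that under a name of your choice (qryTableCounts is just a hypothetical name here), then the crosstab itself is:

TRANSFORM Sum(Expr1)
SELECT RowHead
FROM qryTableCounts
GROUP BY RowHead
PIVOT QCPASS;

Each table-name value in QCPASS becomes its own column heading, giving the one-row, one-column-per-table result described above.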

SELECT columna, 'tablename1' from tablename1 where ..
UNION
SELECT columna, 'tablename2' from tablename2 where ..
UNION
SELECT columna, 'tablename3' from tablename3 where ..
...
SELECT columna, 'tablename50' from tablename50 where ..
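Applied to the original question, the same pattern can also return one row per table with both counts in a single query. This is only a sketch: it assumes qcpass is a Yes/No (checkbox) field and reuses the table names from the question.

SELECT "5000028" AS TableName,
       Sum(IIf(qcpass, 1, 0)) AS CheckedCount,
       Sum(IIf(qcpass, 0, 1)) AS UncheckedCount
FROM [5000028]
UNION ALL
SELECT "5000029", Sum(IIf(qcpass, 1, 0)), Sum(IIf(qcpass, 0, 1))
FROM [5000029]

(repeat the UNION ALL block for the remaining tables)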

Related

How to merge data of two tables with different column name in Big Query

How can I get final output based on table 1 and table 2 in Big Query
You can use union all. If the columns are in the same order:
select *
from table1
union all
select *
from table2;
In general, though, it is better to list out the column names instead of using *. Note that the column names from the first SELECT are used for the result set.
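For example, a sketch with explicit column lists; the column names here are only illustrative, since the question does not show the table schemas:

select id, name, amount
from table1
union all
select id, full_name as name, amount   -- an alias lines up a differently named column
from table2;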

How to efficiently perform union of two queries with and without group by

I have a query that performs a union between two select statements, one that uses GROUP BY and another that doesn't. The problem is that I'm selecting the same columns and using the same functions in both SELECT statements. It feels like I'm duplicating the code, and I wish to know if there's a better way to write this.
I've tried a normal UNION of the two SELECT statements, but both SELECT statements use the same functions.
Is there a way to simplify the following query without duplication?
Example:
select
sum(col1), sum(col2)....
from table
union
select sum(col1), sum(col2)...
from table
group by class
I require a table which is obtained by combining the result of the above.
The second query may have multiple categories, while the first query yields only one aggregated row.
The objective is to compare the income and other details of the total population with one or more categories within the population.
Thanks in advance :)
You can add the WITH ROLLUP clause to your GROUP BY and it will add an aggregate row to the end of your output i.e.
SELECT SUM(col1), SUM(col2)...
FROM table
GROUP BY class WITH ROLLUP
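The rollup row appears with NULL in the grouping column, so it can be labelled if needed. A small sketch using placeholder names (WITH ROLLUP as in MySQL; the question does not name the engine):

SELECT COALESCE(class, 'All classes') AS class,
       SUM(col1) AS sum_col1,
       SUM(col2) AS sum_col2
FROM tablename
GROUP BY class WITH ROLLUP;

The 'All classes' row holds the grand totals that the first SELECT in the original UNION produced.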
You have not provided sample data to check against, but one approach is to use a CASE expression in the GROUP BY. The following UNION:
SELECT Sum(col1)
FROM tablename
WHERE id <> 1
UNION
SELECT Sum(col1)
FROM tablename
WHERE id = 1
GROUP BY class
can be written as follows using CASE:
SELECT Sum(col1)
FROM tablename
GROUP BY CASE
           WHEN id = 1 THEN 0
           ELSE 1
         END

To Remove Duplicates from Netezza Table

I have a scenario for a type-2 table where I have to remove duplicates at the whole-row level.
Let's consider the below example as the data in the table.
A|B|C|D|E
100|12-01-2016|2|3|4
100|13-01-2016|3|4|5
100|14-01-2016|2|3|4
100|15-01-2016|5|6|7
100|16-01-2016|5|6|7
If you consider A as the key column, you know that the last 2 rows are duplicates.
Generally, to find duplicates, we use GROUP BY.
select A,C,D,E,count(1)
from table
group by A,C,D,E
having count(*)>1
For this, the output would give 100|2|3|4 as a duplicate and also 100|5|6|7.
However, only 100|5|6|7 is a duplicate as per type 2, and not 100|2|3|4, because that value came back in the 3rd run rather than immediately after the 1st load.
If I add the date field to the GROUP BY, 100|5|6|7 will not be considered a duplicate, but in reality it is.
I am trying to figure out duplicates as explained above.
The only duplicate should be 100|5|6|7 and not 100|2|3|4.
Can someone please help out with SQL for the same?
Regards
Raghav
Use the row_number analytical function to get rid of duplicates. Netezza will not let you delete from a derived table directly, so flag the duplicates and delete them by rowid:
delete from tablename
where rowid in (
    select rowid
    from (
        select rowid,
               row_number() over (partition by a, b, c, d, e order by b) as rownumb
        from tablename
    ) t
    where rownumb > 1
);
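For the type-2 rule described in the question, where a row counts as a duplicate only when it repeats the immediately preceding row for the same key, a LAG-based comparison is a closer fit. A sketch only, using the column names from the example and assuming B is a date:

select a, b, c, d, e
from (
    select a, b, c, d, e,
           lag(c) over (partition by a order by b) as prev_c,
           lag(d) over (partition by a order by b) as prev_d,
           lag(e) over (partition by a order by b) as prev_e
    from tablename
) t
where c = prev_c
  and d = prev_d
  and e = prev_e;

With the sample data this flags only 100|16-01-2016|5|6|7; the 14-01 row is kept because it does not directly follow the earlier 2|3|4 row.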
If you want to see all duplicated rows, you need to join the table with your GROUP BY query, or filter the table using the group query as a subquery.
WITH CTE AS (
    SELECT a, b, c, d, e, COUNT(*)
    FROM tablename
    GROUP BY 1, 2, 3, 4, 5
    HAVING COUNT(*) > 1
)
SELECT *
FROM CTE
WHERE b <> b + 1
Try this query and see if it works; if you get any errors, let me know.
I am assuming that your column B is in date format; if not, cast it to a date.
If you can see the duplicates, just replace the SELECT * with a DELETE.

Is there some way to do the following in SQL?

Let's imagine that we have these 2 tables:
Table 1, with the column:
Field1
1
3
Table 2, with the column:
Field1
2
4
(Well, they could also be named in any other way; the point is that table1.field1 has the same type as table2.field1.)
Would it be possible to do a SQL query that would return the following?
[1,2,3,4], I mean the numbers ordered by any criteria I would want, but with that criteria applying to both tables. As far as I know, ORDER BY can just order by the values of a column, not by a general criterion like "from lower to higher number". And even if it could, I believe the SELECT instruction can't fuse columns. I mean, I think the best I could achieve with that instruction would be to get something like [(1,2),(1,4),(3,2),(3,4)] and later work on it, but this can be painful with lots of results.
And the application needs the fields to be in different tables; I cannot merge them.
Any idea about how to deal with this?
Thanks a lot for your help.
Edit:
Oh, it was much easier than I thought; with that instruction it is not hard to achieve.
Thank you everyone.
This is what the UNION statement is for. It lets you combine two SELECT statements into the same resultset:
SELECT Field1
FROM Table1
UNION ALL
SELECT Field1
FROM Table2
ORDER BY 1
You can do a UNION ALL, like below:
SELECT field1
FROM
  (SELECT field1 FROM table1
   UNION ALL
   SELECT field1 FROM table2) AS t
ORDER BY field1
Use UNION or UNION ALL depending on whether you want to remove or keep values that appear in both tables.
select * from
(
    select field1 as field_value from table1
    union
    select field2 as field_value from table2
) t
order by field_value asc

Most efficient way to select 1st and last element, SQLite?

What is the most efficient way to select the first and last element only, from a column in SQLite?
The first and last element from a row?
SELECT column1, columnN
FROM mytable;
I think you must mean the first and last element from a column:
SELECT MIN(column1) AS First,
MAX(column1) AS Last
FROM mytable;
See http://www.sqlite.org/lang_aggfunc.html for MIN() and MAX().
I'm using First and Last as column aliases.
if it's just one column:
SELECT min(column) as first, max(column) as last FROM table
if you want to select the whole row:
SELECT 'first', * FROM (SELECT * FROM table ORDER BY column ASC LIMIT 1)
UNION ALL
SELECT 'last', * FROM (SELECT * FROM table ORDER BY column DESC LIMIT 1)
The most efficient way would be to know what those fields were called and simply select them.
SELECT `first_field`, `last_field` FROM `table`;
Probably like this:
SELECT dbo.Table.FirstCol, dbo.Table.LastCol FROM Table
You get minor efficiency enhancements from specifying the table name and schema.
First: MIN() and MAX() on a text column give AAAA and TTTT, which are not the first and last entries in my test table; they are the minimum and maximum values, as mentioned.
I tried this (with .stats on) on my table which has over 94 million records:
select * from
(select col1 from mitable limit 1)
union
select * from
(select col1 from mitable limit 1 offset
(select count(0) from mitable) -1);
But it uses up a lot of virtual machine steps (281,624,718).
Then this, which is much more straightforward and works if the table was created without WITHOUT ROWID (SQL keywords are in capitals):
SELECT col1 FROM mitable
WHERE ROWID = (SELECT MIN(ROWID) FROM mitable)
OR ROWID = (SELECT MAX(ROWID) FROM mitable);
That ran with 55 virtual machine steps on the same table and produced the same answer.
The min()/max() approach is wrong. It is only correct if the values are ascending. I needed something like this for currency rates, which rise and fall randomly.
This is my solution:
select st.*
from stats_ticker st,
(
select min(rowid) as first, max(rowid) as last --here is magic part 1
from stats_ticker
-- next line is just a filter I need in my case.
-- if you want first/last of the whole table leave it out.
where timeutc between datetime('now', '-1 days') and datetime('now')
) firstlast
WHERE
st.rowid = firstlast.first --and these two rows do magic part 2
OR st.rowid = firstlast.last
ORDER BY st.rowid;
magic part 1: the subselect results in a single row with the columns first and last containing rowids.
magic part 2: it is easy to filter on those two rowids.
This is the best solution I've come up with so far. Hope you like it.
We can do that with the help of SQL aggregate functions, like MAX and MIN. These are the two aggregate functions which help you get the last and the first element from a data table:
SELECT MAX(column_name), MIN(column_name) FROM table_name;
MAX will give you the maximum value, which means the last value, and MIN will give you the minimum value, which means the first value, from the specific table.