Is there a way to query multiple tables at once based on common table name in Oracle through SQL or PL/SQL

I have a requirement to fetch data every week from all the tables matching a common table name pattern. Basically, my requirement is to merge all the data to form a single lookup table.
Example:
Table Names:
Department_20190101
Department_20190109
Department_20190122
Department_20190129
I have to fetch data from all the tables and create a single lookup table. Is there a simpler way to do this than iterating in PL/SQL over the table names obtained from ALL_TABLES?
Note: the date part is not consistent. If I can achieve this requirement once, then I can easily start inserting the data from each new table into the existing lookup (n+1).
Please share your suggestions.
Regards,
Scott

If you have a very long list of tables, or your requirement is to aggregate the results from all tables starting with Department_ followed by a certain date or range of dates, then you might need dynamic SQL for this. For the exact example you showed in your question, a CTE with a union query might work:
WITH cte AS (
SELECT * FROM Department_20190101 UNION ALL
SELECT * FROM Department_20190109 UNION ALL
SELECT * FROM Department_20190122 UNION ALL
SELECT * FROM Department_20190129
)
And then use the CTE as:
SELECT *
FROM cte;
This assumes that all tables have identical structures. As a side note, if that is the case, you might want to consider just having a single table with a date column to differentiate the otherwise common data.

Check here: Execute For Each Table in PLSQL. There is a nice example of solving your problem using PL/SQL (PL/SQL lets you build up the SQL query dynamically).
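To illustrate the dynamic-SQL route, here is a minimal sketch in Python, with SQLite standing in for Oracle (the table and column names are invented for the example; in Oracle you would read ALL_TABLES and run EXECUTE IMMEDIATE instead of querying sqlite_master):

```python
import sqlite3

# Set up two sample tables matching the Department_<date> pattern.
conn = sqlite3.connect(":memory:")
for suffix in ("20190101", "20190109"):
    conn.execute(f"CREATE TABLE Department_{suffix} (dept_id INTEGER, name TEXT)")
    conn.execute(f"INSERT INTO Department_{suffix} VALUES (1, 'HR_{suffix}')")

# Discover the matching table names from the catalog
# (in Oracle: SELECT table_name FROM ALL_TABLES WHERE table_name LIKE ...).
names = [r[0] for r in conn.execute(
    "SELECT name FROM sqlite_master "
    "WHERE type = 'table' AND name LIKE 'Department_%' ORDER BY name")]

# Build one UNION ALL query over all discovered tables and run it.
query = " UNION ALL ".join(f"SELECT * FROM {n}" for n in names)
rows = conn.execute(query).fetchall()
```

The same generated query text could feed an INSERT INTO lookup_table SELECT ... to populate the merged lookup.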

Related

Merge multiple SQL Server tables into one

I have many SQL Server tables in a database that have information about the same domain (same columns) and their names are the same plus a date suffix (yyyyMMdd):
TABLE_ABOUT_THIS_THING_20200131
TABLE_ABOUT_THIS_THING_20191231
TABLE_ABOUT_THIS_THING_20191130
TABLE_ABOUT_THIS_THING_20191031
TABLE_ABOUT_THIS_THING_20190930
TABLE_ABOUT_THIS_THING_20190831
...
This seems like it would make more sense if it were all in the same table. Is there a way, using a query/SSIS or something similar, to merge these tables into one (TABLE_ABOUT_THIS_THING) with a new column (extraction_date) derived from each table's suffix?
Using SSIS: use a Union All transformation to collect the data from the multiple tables, and use a Derived Column transformation to add the extraction_date before the destination. For more information, see the following link:
https://www.tutorialgateway.org/union-all-transformation-in-ssis/
You can use UNION ALL:
create view v_about_this_thing as
select convert(date, '20200131') as extraction_date, t.*
from TABLE_ABOUT_THIS_THING_20200131 t
union all
select convert(date, '20191231') as extraction_date, t.*
from TABLE_ABOUT_THIS_THING_20191231 t
union all
. . .
This happens to be a partitioned view, which has some other benefits.
The challenge is how to keep this up-to-date. My recommendation is to fix your data processing so all the data goes into a single table. You can also set up a job that runs once a month and inserts the most recent values into an existing table.
An alternative is to reconstruct the view every month or periodically. You can do this using a DDL trigger that recreates the view when the new table appears.
Another alternative is to create a year's worth of tables all at once -- but empty -- and to recreate the view manually once a year. Put a note on your calendar to remind you!
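As a hedged sketch of how the view DDL above could be generated mechanically from the table names (the regex and names are illustrative assumptions, not the poster's actual setup):

```python
import re

# Dated table names as they would come from the catalog.
tables = [
    "TABLE_ABOUT_THIS_THING_20200131",
    "TABLE_ABOUT_THIS_THING_20191231",
]

# Build one SELECT per table, deriving extraction_date from the suffix.
selects = []
for t in tables:
    suffix = re.search(r"\d{8}$", t).group()  # the yyyyMMdd part
    selects.append(
        f"select convert(date, '{suffix}') as extraction_date, t.* from {t} t"
    )

ddl = "create view v_about_this_thing as\n" + "\nunion all\n".join(selects)
print(ddl)
```

Running the generated DDL after each new table appears keeps the view current without hand-editing it.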
You can use SSIS with the new table TABLE_ABOUT_THIS_THING as the destination and a query that looks like this as the source:
Select * FROM table1
UNION
Select * FROM table2
UNION
.
.
.

BigQuery loop to select values from dynamic table_names registered in another table

I'm looking for a solution to extract data from multiple tables and insert it into another table automatically by running a single script. I need to query many tables, so I want to make a loop that selects from those table names dynamically.
I wonder if I could have a table with table names, and execute a loop like:
foreach(i in table_names)
insert into aggregated_table select * from table_names[i]
end
Below is for BigQuery Standard SQL
#standardSQL
SELECT * FROM `project.dataset1.*`
WHERE _TABLE_SUFFIX IN (SELECT table_name FROM `project.dataset2.list`)
This approach will work if the conditions below are met:
all tables to be processed from the list have the exact same schema
one of those tables is the most recent table - this table will define the schema used for all the other tables in the list
to meet the above bullet, ideally the list should be hosted in another dataset
Obviously, you can add INSERT INTO ... to insert the result into whatever destination you need.
Please note: filters on _TABLE_SUFFIX that include subqueries cannot be used to limit the number of tables scanned for a wildcard table, so make sure you are using the longest possible prefix - for example:
#standardSQL
SELECT * FROM `project.dataset1.source_table_*`
WHERE _TABLE_SUFFIX IN (SELECT table_name FROM `project.dataset2.list`)
So, again - even though you will select data from specific tables (set in project.dataset2.list), the cost will be for scanning all tables that match the project.dataset1.source_table_* wildcard.
While the above is purely BigQuery SQL, you can use any client of your choice to script exactly the logic you need - read the table names from the list table, then select and insert in a loop. I think this option is the simplest and most optimal.
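A minimal sketch of that client-side loop, with SQLite standing in for the BigQuery client (table names and schema are invented for illustration):

```python
import sqlite3

# A list table holding table names, two source tables, and the destination.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE list (table_name TEXT);
    CREATE TABLE t1 (v INTEGER);
    CREATE TABLE t2 (v INTEGER);
    CREATE TABLE aggregated_table (v INTEGER);
    INSERT INTO list VALUES ('t1'), ('t2');
    INSERT INTO t1 VALUES (1);
    INSERT INTO t2 VALUES (2);
""")

# Read the table names, then select-and-insert in a loop.
for (name,) in conn.execute("SELECT table_name FROM list").fetchall():
    # Table names cannot be bound as query parameters, so they are
    # interpolated; in real code, validate them against the catalog first.
    conn.execute(f"INSERT INTO aggregated_table SELECT * FROM {name}")

total = conn.execute("SELECT COUNT(*) FROM aggregated_table").fetchone()[0]
```

With the BigQuery client library the loop body would instead submit an INSERT INTO ... SELECT job per table name.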

SQL or statement vs multiple select queries

I have a table with an id and a name.
I'm getting a list of IDs and I need their names.
As far as I know, I have two options.
Create a for loop in my code which executes:
SELECT name from table where id=x
where x is always a number.
Or write a single query like this:
SELECT name from table where id=1 OR id=2 OR id=3
The list of IDs and names is enormous, so I think you wouldn't want that.
The problem with the IDs is that an id is not always a number but a randomly generated id containing numbers and characters, so talking about ranges is not a solution.
I'm asking this from a performance point of view.
What's a nice solution for this problem?
SQLite has limits on the size of a query, so if there is no known upper limit on the number of IDs, you cannot use a single query.
When you are reading multiple rows (note: IN (1, 2, 3) is easier than many ORs), you don't know to which ID a name belongs unless you also SELECT that, or sort the results by the ID.
There should be no noticeable difference in performance; SQLite is an embedded database without client/server communication overhead, and the query does not need to be parsed again if you use a prepared statement.
A "nice" solution is using the IN operator:
SELECT name from table where id in (1,2,3)
Also, the IN operator is syntactic sugar built for exactly this purpose.
SELECT name from table where id IN (1,2,3,4,5,6.....)
Assuming you are getting the list of IDs to look up names for as an input temp table #InputIDTable:
SELECT name from table WHERE ID IN (SELECT id from #InputIDTable)
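For completeness, here is a small sketch of how the IN-list query can stay a single parameterized statement from client code, using Python's sqlite3 module (the IDs and schema are made up):

```python
import sqlite3

# Sample table with string IDs, matching the question's
# "numbers and characters" scenario.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id TEXT PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO t VALUES (?, ?)",
                 [("a1", "alice"), ("b2", "bob"), ("c3", "carol")])

# Build one placeholder per ID so the values are bound, not interpolated.
ids = ["a1", "c3"]
placeholders = ",".join("?" for _ in ids)
# Selecting id alongside name tells you which name belongs to which ID.
rows = conn.execute(
    f"SELECT id, name FROM t WHERE id IN ({placeholders})", ids).fetchall()
```

Mind SQLite's limit on the number of bound parameters per statement; a very long ID list needs batching or a temp table.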

SQL Combine two tables in select statement

I have a situation where I want to combine two tables for queries in a select statement, and I haven't found a working solution yet.
The Situation:
Table A
Table B
Both A and B have identical fields but distinct populations. I have other queries that are pulling from each table separately.
I want to build a query that pulls from them as if they were one table. There are no instances of records being in both tables.
My research so far led me to think a FULL OUTER JOIN was what I wanted, but I'm not entirely sure how to do that when I'm not really joining them on any field, and it failed in my tests. So I searched for append options, thinking that might more accurately represent what I'm trying to do; INSERT INTO looked promising, but less so for select statements. Is there a way to do this, or do I need to create a third table that's the combination of the first two to query from?
This is being done as an Excel VBA query to Access via DAO. I'm building up SQL statements piece by piece in my VBA code based on user-selected options and then pulling the results into Excel for use. As such, my hope is to be able to alter only the FROM clause (since I'm building up the queries piecemeal) so that other aspects of the select statement won't be impacted. Any suggestions or help will be much appreciated!
You can UNION the tables to do this:
SELECT StuffYouWant
FROM (SELECT *
FROM TableA
UNION ALL
SELECT *
FROM TableB) c
Something like this:
SELECT * FROM a
UNION
SELECT * FROM b
Make sure table a and table b have the same number of columns and that the corresponding columns have the same data types.

How do you complex join a number table with an actual table with many clauses dependent on the data from the number table?

I have a table of numbers (PLSQL collection containing some_table_line_ids passed in from a website).
Then I have some_table, which also has the columns config_data and config_state.
I want to pull in all lines that have the same table_id as any of the table_ids in the number table.
I also want to pull in all lines that have the same config_data as each record pulled in from the first part.
So it's a parent/child relationship. This can be done with two for loops: select a line by its id in a cursor, then in another loop select each line matching the parent's config_data. In each loop I am performing data manipulation on each line.
I would like to combine both these into a single cursor having all table ids that I need.
What would that look like?
You just want to do a complicated join on different factors. Something like:
select st2.*
from numbers n join
some_table st
on st.table_id = n.table_id join
some_table st2
on st2.config_data = st.config_data
Quite possibly, you actually want:
select distinct st.*
since you might otherwise have duplicates. Or, you might want:
select n.table_id, st.config_data, st2.*
So you know which of the original values was responsible for bringing in the row.
You describe the array as a PL/SQL collection. If you employ a SQL type instead, you could include it in the FROM clause by using the TABLE function.
create type some_table_line_id_nt as table of number;
Something like:
select s.*
from some_table s
join table(some_table_line_ids) t
on s.id = t.column_value
(I haven't offered a complete solution as you haven't given enough details of table structure and data.)
I solved the issue using START WITH and CONNECT BY PRIOR.
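START WITH / CONNECT BY PRIOR is Oracle-specific; as a hedged sketch, the same fixed-point idea can be expressed with a standard recursive CTE (SQLite syntax below, with invented sample data; the column names table_id and config_data are taken from the question):

```python
import sqlite3

# Sample data: one seed ID in the number table, plus a child row that
# shares config_data with the seed and one unrelated row.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE numbers (table_id INTEGER);
    CREATE TABLE some_table (table_id INTEGER, config_data TEXT, config_state TEXT);
    INSERT INTO numbers VALUES (1);
    INSERT INTO some_table VALUES
        (1, 'cfgA', 'parent'),
        (2, 'cfgA', 'child'),
        (3, 'cfgB', 'unrelated');
""")

rows = conn.execute("""
    WITH RECURSIVE lines AS (
        -- START WITH: rows whose id is in the number table
        SELECT st.* FROM some_table st
        WHERE st.table_id IN (SELECT table_id FROM numbers)
        UNION
        -- CONNECT BY PRIOR: rows sharing config_data with a row already pulled in
        SELECT st.* FROM some_table st
        JOIN lines l ON st.config_data = l.config_data
    )
    SELECT table_id, config_state FROM lines ORDER BY table_id
""").fetchall()
```

UNION (rather than UNION ALL) deduplicates, so the recursion stops once no new rows appear, mirroring the hierarchical query reaching its leaves.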