Snowflake SQL table name wildcard

What is a good way to "select" from multiple tables at once when the list of tables is not known in advance in Snowflake SQL?
Something that simulates
Select * from mytable*
which would fetch the same results as
Select * from mytable_1
union
Select * from mytable_2
...
I tried doing this in multiple steps:
show tables like 'mytable%';
set mytablevar =
    (select listagg("name", ' union ') table_
     from table(result_scan(last_query_id())));
The idea was to use the variable mytablevar to build the union over all the tables in a subsequent query, but the variable exceeded the 256-character size limit, as the list of tables is quite large.

Even if you did not hit the 256-character limit, it would not help you query all of these tables. How would you use that session variable?
If you have multiple tables which have the same structure and hold similar data that you need to query together, why is the data not in one big table? You can use Snowflake's clustering feature to distribute the data across micro-partitions based on a specific column.
https://docs.snowflake.com/en/user-guide/tables-clustering-micropartitions.html
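For instance, the separate tables could be collapsed into one clustered table along these lines (a minimal sketch; the table and column names are illustrative, not from the question):
-- One big table instead of mytable_1, mytable_2, ...
-- source_id stands in for whatever column distinguished the old tables.
CREATE TABLE mytable_all (
    source_id NUMBER,
    payload   VARCHAR
)
CLUSTER BY (source_id);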
Anyway, you may create a stored procedure which will create/replace a view.
https://docs.snowflake.com/en/sql-reference/stored-procedures-usage.html#dynamically-creating-a-sql-statement
And then you can query that view:
CALL UPDATE_MY_VIEW( 'myview_name', 'table%' );
SELECT * FROM myview_name;
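A minimal sketch of what such a procedure might look like as a classic JavaScript stored procedure (the name and arguments match the call above, but the implementation itself is an assumption, and it presumes all matched tables share the same structure):
CREATE OR REPLACE PROCEDURE UPDATE_MY_VIEW(VIEW_NAME VARCHAR, TABLE_PATTERN VARCHAR)
RETURNS VARCHAR
LANGUAGE JAVASCRIPT
EXECUTE AS CALLER  -- SHOW commands require caller's rights
AS
$$
  // Find every table matching the pattern.
  var rs = snowflake.execute({ sqlText: "SHOW TABLES LIKE '" + TABLE_PATTERN + "'" });
  var selects = [];
  while (rs.next()) {
    selects.push('SELECT * FROM "' + rs.getColumnValue('name') + '"');
  }
  if (selects.length === 0) {
    return 'No tables matched ' + TABLE_PATTERN;
  }
  // Build and run the CREATE OR REPLACE VIEW statement.
  var ddl = 'CREATE OR REPLACE VIEW ' + VIEW_NAME + ' AS ' + selects.join(' UNION ALL ');
  snowflake.execute({ sqlText: ddl });
  return 'View ' + VIEW_NAME + ' now covers ' + selects.length + ' tables';
$$;
Because the union lives in a view rather than a session variable, the 256-character limit never comes into play.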

Related

How to merge multiple Access databases with similar column fields

I have multiple MS Access (.mdb) databases which consist of exactly the same fields; a big database has been split into multiple parts to make it manageable.
I have imported all of the .mdb files into MS SQL Server, but I don't know how to merge them all into one big database, or, if that's not possible, how to write a query that will search all the databases and return the result.
Let me give an example to make myself clearer:
I have part1.mdb, part2.mdb, part3.mdb, ..., part50.mdb files.
All of the files contain the fields:
name
mobile no
address
city
state
Now if I have to search for a certain mobile number, I have to search in all of the files, which is very tedious.
The best approach would be to create one table, possibly with partitioning (for instance, by date). But let's concentrate on generating one table from many.
I believe that you have multiple tables like table1, table2, ..., table50.
If that is the case:
SELECT * INTO myBigtable FROM
(
    SELECT * FROM table1 UNION ALL
    SELECT * FROM table2 UNION ALL
    SELECT * FROM table3 UNION ALL
    ...
    SELECT * FROM table50
) T;
Don't forget to create proper indexes on it.
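For example, since the searches in the question are by mobile number, an index on that column might look like this (a sketch; the exact column name depends on how the data was imported):
-- Hypothetical index to speed up lookups by mobile number.
CREATE INDEX IX_myBigtable_mobile_no ON myBigtable (mobile_no);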

How can I UNION ALL on all columns of a table in Access

I have two select queries with the same number of columns (about 150 each) and I am trying to UNION ALL the two with:
SELECT *
FROM query1
UNION ALL
SELECT *
FROM query2
I am getting the error "Too many fields defined", but I understood that Access can handle up to 255 fields? Given that I don't want to write out every field name within each of my select queries, is there a practical way to achieve this union?
As Parfait mentions in his comment, this error is caused by Access counting the columns of both inputs towards the limit: 150 + 150 > 255, hence "Too many fields defined". See a similar question here.
Provided you don't have too much data, an alternative is to write one query's output into a table and append the other's into the same table.
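A minimal sketch of that approach in Access SQL (CombinedTable is a hypothetical name): a make-table query materializes query1, then an append query adds query2 to it. Each statement now handles only 150 fields, safely under the limit.
SELECT query1.* INTO CombinedTable FROM query1;

INSERT INTO CombinedTable
SELECT query2.* FROM query2;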

How to put more than 1 million IDs using UNION ALL [duplicate]

I have comma-delimited IDs that I want to use in a NOT IN clause.
I'm using Oracle 11g.
select * from table where ID NOT IN (1,2,3,4,...,1001,1002,...)
results in
ORA-01795: maximum number of expressions in a list is 1000
I don't want to use a temp table. I am considering doing this:
select * from table1 where ID NOT IN (1,2,3,4,...,1000) AND
ID NOT IN (1001,1002,...,2000)
Is there any other better workaround to this issue?
You said you don't want to, but: use a temporary table. That's the correct solution here.
Query parsing is expensive in Oracle, and that's what you'll get when you put thousands of identifiers into a giant blob of SQL. There are also ill-defined limits on query length that you're going to hit. An anti-join against a table, on the other hand: Oracle is good at that. Bulk-loading data into a table: Oracle is good at that too. Use a temp table.
Limiting IN to a thousand entries is a sanity check. The fact that you're hitting it means you're trying to do something insane.
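A sketch of the temp-table route (exclusion_ids is a hypothetical global temporary table; you would bulk-load the IDs into it, e.g. via a JDBC batch insert, and then anti-join):
-- Created once; its rows are private to each session.
CREATE GLOBAL TEMPORARY TABLE exclusion_ids (
    id NUMBER PRIMARY KEY
) ON COMMIT PRESERVE ROWS;

-- After loading the IDs, the anti-join replaces the giant NOT IN list.
SELECT t.*
FROM   table1 t
WHERE  NOT EXISTS (SELECT 1 FROM exclusion_ids e WHERE e.id = t.id);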
Stepping outside the question: can you combine the SQL that produces those 1000+ IDs with this SQL? That would be the better way to simplify your queries.
It's insane. But you can probably try selecting from a subselect:
SELECT * FROM
    (SELECT * FROM table WHERE ID NOT IN (1,2,3,4,...,1000))
WHERE ID NOT IN (1001,1002,...,2000)
Make as many levels as you need.
Use MINUS, the opposite of UNION:
SELECT * FROM TABLE
MINUS
SELECT T.* FROM TABLE T, TABLE2 T2 WHERE T.ID = T2.ID
This returns the rows of TABLE whose ID does not appear in TABLE2.

Select * from n tables

Is there a way to write a query like:
select * from <some number of tables>
...where the number of tables is unknown? I would like to avoid using dynamic SQL, and I would like to select all rows from all tables whose names have a specific prefix:
select * from t1
select * from t2
select * from t3
...
I don't know how many t(n) tables there might be (might be 1, might be 20, etc.). The t tables' column structures are not the same: some of them have 2 columns, some 3 or 4.
It would not be hard using dynamic SQL, but I wanted to know if there is a way to do this using something like sys.tables.
UPDATE
Basic database design explained
N companies will register/log in to my application
Each company will set up ONE table with x columns
(x depends on the type of business the company is, can be different, for example think of two companies: one is a Carpenter and the other is a Newspaper)
Each company will fill its own table using an API built by me
What I do with the data:
I have a "processor", that will be SQL or C# or whatever.
If there is at least one row for one company, I will generate a record in a COMMON table.
So the final results will be all in one table.
Anybody from any of those N companies will log in and will see the COMMON table filtered for their own company.
There would be no way to do that without dynamic SQL, and having different table structures does not help at all.
Update
There would be no easy way to return the desired output in one single result set (the result set would need at least as many columns as the widest table, and don't even get me started on data type compatibility).
However, you should check @KM.'s answer. That will return multiple result sets.
To list ALL tables you could try:
EXEC sp_msforeachtable 'SELECT * FROM ?'
You can programmatically include/exclude tables by doing something like:
EXEC sp_msforeachtable 'IF LEFT(''?'',9)=''[dbo].[xy'' BEGIN SELECT * FROM ? END ELSE PRINT LEFT(''?'',9)'
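If you do accept dynamic SQL, a sketch of the sys.tables route the question mentions might look like this (SQL Server; assumes the prefix is t, and each SELECT comes back as its own result set since the table structures differ):
DECLARE @sql NVARCHAR(MAX) = N'';

-- Concatenate one SELECT per matching table.
SELECT @sql += N'SELECT * FROM ' + QUOTENAME(name) + N'; '
FROM sys.tables
WHERE name LIKE N't%';

EXEC sp_executesql @sql;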

SQL argument limit in Oracle

It appears that there is a limit of 1000 arguments in an Oracle SQL IN list. I ran into this when generating queries such as:
select * from orders where user_id IN(large list of ids over 1000)
My workaround is to create a temporary table and insert the user IDs into it first, instead of issuing a query via JDBC that has a giant list of parameters in the IN clause.
Does anybody know of an easier workaround? Since we are using Hibernate, I wonder if it is automatically able to do a similar workaround transparently.
An alternative approach would be to pass an array to the database and use a TABLE() function in the IN clause. This will probably perform better than a temporary table. It will certainly be more efficient than running multiple queries. But you will need to monitor PGA memory usage if you have a large number of sessions doing this stuff. Also, I'm not sure how easy it will be to wire this into Hibernate.
Note: TABLE() functions operate in the SQL engine, so they need us to declare a SQL type.
create or replace type tags_nt as table of varchar2(10);
/
The following sample populates an array with a couple of thousand random tags. It then uses the array in the IN clause of a query.
declare
    search_tags tags_nt;
    n pls_integer;
begin
    select name
    bulk collect into search_tags
    from ( select name
           from temp_tags
           order by dbms_random.value )
    where rownum <= 2000;

    select count(*)
    into n
    from big_table
    where name in ( select * from table (search_tags) );

    dbms_output.put_line('tags match '||n||' rows!');
end;
/
As long as the temporary table is a global temporary table (i.e. only visible to the session), this is the recommended way of doing things (and I'd go that route for anything more than a dozen arguments, let alone a thousand).
I'd wonder where and how you are building that list of 1000 arguments. If this is a semi-permanent grouping (e.g. all employees based in a particular location), then that grouping should be in the database and the join done there. Databases are designed and built to do joins really quickly, much quicker than pulling a bunch of IDs back to the mid-tier and then sending them back to the database.
select * from orders
where user_id in
(select user_id from users where location = :loc)
You can add additional predicates to split the list into chunks of 1000:
select * from orders where user_id IN (<first batch of 1000>)
OR user_id IN (<second batch of 1000>)
OR user_id IN ...
The comments regarding "if these IDs are in your database, use joins/correlation instead" hold true. However, if your list of IDs comes from elsewhere, like a SOLR result, you can get around the temp-table requirement by issuing multiple queries, each with no more than 1000 IDs present, and then merging the results in memory. If you place the initial list of IDs in a unique collection like a hash set, you can pop off 1000 IDs at a time.