Query just runs, doesn't execute - sql

My query just runs and doesn't execute. What is wrong? I work in Oracle SQL Developer, on a company server.
CREATE TABLE voice2020 AS
SELECT
    TO_CHAR(SDATE, 'YYYYMM') AS month,
    MSISDN,
    SUM(CH_MONEY_SUBS_DED) / 100 AS AIRTIME_VOICE,
    SUM(CALLDURATION / 60) AS MIN_USAGE,
    SUM(DUR_ONNET_OOB / 60) AS DUR_ONNET_OOB,
    SUM(DUR_ONNET_IB / 60) AS DUR_ONNET_IB,
    SUM(DUR_ONNET_FREE / 60) AS DUR_ONNET_FREE,
    SUM(DUR_OFFNET_OOB / 60) AS DUR_OFFNET_OOB,
    SUM(DUR_OFFNET_IB / 60) AS DUR_OFFNET_IB,
    SUM(DUR_OFFNET_FREE / 60) AS DUR_OFFNET_FREE,
    SUM(CASE WHEN SDATE < TO_DATE('20190301', 'YYYYMMDD')
             THEN CH_MONEY_PAID_DED - NVL(CH_MONEY_SUBS_DED, 0) - REV_VOICE_INT - REV_VOICE_ROAM_OUTGOING - REV_VOICE_ROAM_INCOMING
             ELSE CH_MONEY_OOB - REV_VOICE_INT - REV_VOICE_ROAM_OUTGOING - REV_VOICE_ROAM_INCOMING
        END) / 100 AS VOICE_OOB_SPEND
FROM CCN.CCN_VOICE_MSISDN_MM#xdr1
WHERE MSISDN IN (SELECT MSISDN FROM saayma_a.BASE30112020) -- change date
GROUP BY
    MSISDN,
    TO_CHAR(SDATE, 'YYYYMM');

This is a performance issue: the query driving your CREATE TABLE statement is taking too long to return a result set.
You are querying a table in a remote database (CCN.CCN_VOICE_MSISDN_MM#xdr1) and then filtering against a local table (saayma_a.BASE30112020). This means you are going to copy the whole remote table across the network, then discard the records which don't match the WHERE clause.
You know your data (or at least you should): does that sound efficient? If you're actually discarding most of the records, you should try to filter CCN_VOICE_MSISDN_MM in the remote database.
If you need more advice, you need to provide more information. Please read this post about asking Oracle tuning questions on this site, then edit your question to include some details.
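One hedged option, assuming xdr1 really is a remote link and that your Oracle version honors the hint here (DRIVING_SITE is often ignored inside CREATE TABLE AS SELECT, so test the bare SELECT first): the DRIVING_SITE hint asks the optimizer to run the join at the remote site, so only the small MSISDN list and the matching rows cross the network. A trimmed sketch:
SELECT /*+ DRIVING_SITE(v) */
       TO_CHAR(v.SDATE, 'YYYYMM') AS month,
       v.MSISDN,
       SUM(v.CALLDURATION / 60) AS MIN_USAGE
FROM CCN.CCN_VOICE_MSISDN_MM#xdr1 v
WHERE v.MSISDN IN (SELECT b.MSISDN FROM saayma_a.BASE30112020 b)
GROUP BY v.MSISDN, TO_CHAR(v.SDATE, 'YYYYMM');
Another route is to copy the (presumably small) MSISDN list to the remote database first, so the filter can be applied entirely over there.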

You are executing a CTAS (CREATE TABLE AS SELECT): the purpose of this statement is to create the table and populate it with the data generated by the query.
If you just want to execute the query and see the data, remove the first line of your query:
-- CREATE TABLE voice2020 AS
SELECT
.....
Also, if you have already executed the statement once, the data from your query is already present in the voice2020 table:
SELECT * FROM voice2020;

It looks like you are trying to copy the data from one table to another. Create the target table once, if it does not already exist, and then try this statement:
insert into target_table select * from source_table;
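A minimal sketch of that pattern, reusing the table names above (WHERE 1 = 0 is a common trick to copy only the structure; adjust the column list to match your query):
-- create target_table with the same structure as source_table, but no rows
CREATE TABLE target_table AS
SELECT * FROM source_table WHERE 1 = 0;
-- then load it
INSERT INTO target_table
SELECT * FROM source_table;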

Related

Drop tables using table names from a SELECT statement, in SQL (Impala)?

How do I drop a few tables (e.g. 1 - 3) using the output of a SELECT statement for the table names? This is probably standard SQL, but specifically I'm using Apache Impala SQL accessed via Apache Zeppelin.
So I have a table called tables_to_drop with a single column called "table_name". This will have one to a few entries in it, each with the name of another temporary table that was generated as the result of other processes. As part of my cleanup I need to drop these temporary tables whose names are listed in the "tables_to_drop" table.
Conceptually I was thinking of an SQL command like:
DROP TABLE (SELECT table_name FROM tables_to_drop);
or:
WITH subquery1 AS (SELECT table_name FROM tables_to_drop) DROP TABLE * FROM subquery1;
Neither of these work (syntax errors). Any ideas please?
Even in standard SQL it is not possible to do it the way you showed.
Normally you would reach for dynamic SQL, which Impala does not support.
You could write an Impala script and run it in impala-shell, but that is complicated for such a task. If this is a one-time thing, I would generate the DROP statements with a SELECT and run them manually:
select concat('DROP TABLE IF EXISTS ', table_name) dropstatements
from tables_to_drop
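Impala's concat accepts more than two arguments, so you can also append the semicolon and get statements ready to paste straight into impala-shell; a hypothetical run, assuming tables_to_drop contains tmp_step1 and tmp_step2 (invented names):
select concat('DROP TABLE IF EXISTS ', table_name, ';') dropstatements
from tables_to_drop;
-- hypothetical output:
DROP TABLE IF EXISTS tmp_step1;
DROP TABLE IF EXISTS tmp_step2;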

Optimize a view that dynamically chooses one table or another

So the problem is that I have three huge tables with the same structure, and I need to show the results of one of them depending on the result of another query.
My order table looks like this:
code | order
-----+------
A    | 0
B    | 2
C    | 1
And I need to retrieve the data from the matching t_results table.
My approach (which works) looks like this:
select *
from t_results_a
where 'A' in (select code from t_order where order = 0)
UNION ALL
select *
from t_results_b
where 'B' in (select code from t_order where order = 0)
UNION ALL
select *
from t_results_c
where 'C' in (select code from t_order where order = 0)
Is there any way to avoid scanning all three tables? I am working with Athena, so I can't program.
I presume that changing your database schema is not an option.
If it were, you could use one database table and add a CODE column whose value would be either A, B or C.
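For illustration, a minimal sketch of that merged-table approach, assuming the combined table were called t_results ("order" is quoted because it is a reserved word in Athena):
-- sketch: one result table with a CODE column instead of three tables
SELECT r.*
FROM t_results r
JOIN t_order o ON o.code = r.code
WHERE o."order" = 0;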
Basically the result of the SQL query on your ORDER table determines which other database table you need to query. For example, if CODE in table ORDER is A, then you have to query table T_RESULTS_A.
You wrote in your question
I am working with Athena so I can't program
I see that there is both an ODBC driver and a JDBC driver for Athena, so you can program with either .NET or Java. So you could write code that queries the ORDER table and use the result of that query to build another query string to query just the relevant table.
Another thought I had was dynamic SQL. Oracle Database supports it: you can build a string in which one variable is the table name, then have Oracle interpret the string as SQL and execute it. I briefly searched the Internet to see whether Athena supports this (I have no experience with Athena) but found nothing, which is not to say that it does not exist.
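As a sketch of the driver-based approach, the two round trips would look like this (the application, not Athena, glues them together):
-- round trip 1: find which result table to read
SELECT code FROM t_order WHERE "order" = 0;
-- round trip 2: if the first query returned 'A', the application issues
SELECT * FROM t_results_a;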

Select a table name from a system table and select from that table

I need to dynamically obtain a table name from a system table and then run a SELECT against that table. Example:
SELECT "schema"+'.'+"table" FROM SVV_TABLE_INFO WHERE "table" LIKE '%blabla%'
it returns my_schema.the_main_blabla_table
And after I get this table name, I need to perform:
SELECT * FROM my_schema.the_main_blabla_table LIMIT 100
Is it possible to do this in a single query?
If you mean a subquery after FROM, then yes, you can do that.
You would get something like this:
SELECT * FROM
(
SELECT "schema"+'.'+"table" FROM SVV_TABLE_INFO WHERE "table" LIKE '%blabla%'
)
LIMIT 100
Unfortunately I can't test it on your data, but I am very interested in the result, because I have never done anything like this. If I have misunderstood your question, please tell me.
Amazon Redshift does not support the ability to take the output of a query and use it as part of another query.
Your application will need to query Redshift to obtain the relevant table name(s), then make another call to Redshift to query that table.
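A minimal sketch of that two-call pattern, using the names from the question (note that Redshift concatenates strings with ||, which is presumably what the + above intends):
-- call 1: ask Redshift for the qualified table name
SELECT "schema" || '.' || "table"
FROM SVV_TABLE_INFO
WHERE "table" LIKE '%blabla%';
-- call 2: the application substitutes the result into a second query
SELECT * FROM my_schema.the_main_blabla_table LIMIT 100;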

Oracle SQL: use a variable partition name

I run a daily report that has to query another table which is updated separately. Due to the high volume of records in the source table (8M+ per day), each day is stored in its own partition. The partition name has a standard format: P, then a four-digit year, a two-digit month and a two-digit day, so yesterday's partition is P20140907.
At the moment I use this expression, but I have to change the name of the partition manually each day:
select * from <source_table> partition (P20140907) where ....
Using SYSDATE, TO_CHAR and concatenation, I have created another table called P_NAME2 that automatically generates and updates a string value holding the name of the partition I need to read. Now I need to update my main query so it does this:
select * from <source_table> partition (<string from P_NAME2>) where ....
You are working too hard. Oracle already does all of this for you. If you query the table using the correct date range, Oracle will perform the operation only on the relevant partitions; this is called pruning.
I suggest reading the docs on partition pruning.
If you're still skeptical, query all_tab_partitions.HIGH_VALUE to get each partition's high value (essentially the table you created ...).
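A minimal sketch of both suggestions; record_date is a placeholder name, since the real partition key column is not given in the question:
-- pruning sketch: a date-range predicate lets Oracle touch only yesterday's partition
SELECT *
FROM source_table
WHERE record_date >= TRUNC(SYSDATE) - 1
AND record_date < TRUNC(SYSDATE);
-- and to inspect each partition's upper bound:
SELECT partition_name, high_value
FROM all_tab_partitions
WHERE table_name = 'SOURCE_TABLE';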
I thought I'd pop back to share how I solved this in the end. The source database has a habit of leaking dates across partitions, which is why queries for one day were going outside a single partition. I can't affect this, just work around it ...
begin
  execute immediate
    'create table LL_TEST as
     select *
     from SCHEMA.TABLE partition (P' || TO_CHAR(sysdate, 'YYYYMMDD') || ')
     where COLUMN_A = ''Something''
     and COLUMN_B = ''Something Else''';
end;
Using the PL/SQL block, I build the partition name with TO_CHAR(sysdate,'YYYYMMDD') and concatenate the rest of the query around it.
Note that the values you are searching for in the WHERE clause require doubled apostrophes: to send 'Something' to the query you need ''Something'' in the script.
It may not be pretty, but it works on the database that I have to use.

Need to select all tables from a database in SQL Server where the most recent date-timestamp is from a year or more ago

I'm going through all the tables in a database trying to determine which ones are old (have not been altered in a long time). I've been going through and flagging all tables with old date-timestamps (DTSs) as "old".
Is there a more efficient way to do this? Can I run a statement that scans all tables in a database for date-timestamp fields and then looks at the most recent ones?
Thank you in advance for any help you can provide!
You can use the INFORMATION_SCHEMA.COLUMNS view to retrieve all the tables having the timestamp column (assuming the column name is known and not used for other columns).
Use these to generate dynamic SQL of the form:
SELECT COUNT(*), #TableName FROM #TableName WHERE #TimeStampColumn > #TimestampToCheck
The tables where COUNT(*) is 0 (no rows newer than the cutoff) are the ones you need to look at.
I would do something like this:
Show a COUNT(*) of rows written in the last year... if you get 0, you know the table isn't in use.
Use the information_schema to get the columns/tables.
SELECT 'SELECT COUNT(*) AS ['+TABLE_NAME+'] FROM ['+TABLE_CATALOG+'].['+TABLE_SCHEMA+'].['+TABLE_NAME+'] WHERE '+COLUMN_NAME+' > DATEADD(YY,-1,CONVERT(DATETIME,FLOOR(CONVERT(FLOAT,GETDATE()))))'
-- SELECT *
FROM INFORMATION_SCHEMA.COLUMNS
WHERE DATA_TYPE='DATETIME'
--AND (COLUMN_DEFAULT='(getdate())' OR COLUMN_DEFAULT='CURRENT_TIMESTAMP')
You can add the last line in, if you only want columns with a default value.
Then take the output, copy to a new window, and run it!
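For illustration, one line of the generated script might look like this (database, table and column names invented here):
SELECT COUNT(*) AS [Orders] FROM [MyDb].[dbo].[Orders] WHERE OrderDate > DATEADD(YY,-1,CONVERT(DATETIME,FLOOR(CONVERT(FLOAT,GETDATE()))))
A result of 0 would flag [Orders] as having had no rows written in the past year.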