SQL - Generate intervals based on Range of Dates

I have a challenge, and I don't even know whether it can be solved the way I need.
I need to transform the information in the left table into the data in the right table.
Is it possible to achieve this result without using procedures (T-SQL, PL/SQL)? I can use intermediate tables to store transformations and so on, but I can't use procedures.
Many thanks in advance
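Since the example tables aren't reproduced here, this is only a guess at the intended shape, but if the goal is to expand each start/end date range into one row per interval (e.g. per day), a recursive CTE usually does it with no procedure at all. A minimal T-SQL sketch, assuming a hypothetical SourceTable with Id, StartDate and EndDate columns:

-- Hypothetical names; emits one row per day in each range
WITH Days AS (
    SELECT Id, StartDate AS TheDate, EndDate
    FROM SourceTable
    UNION ALL
    SELECT Id, DATEADD(day, 1, TheDate), EndDate
    FROM Days
    WHERE TheDate < EndDate
)
SELECT Id, TheDate
FROM Days
OPTION (MAXRECURSION 0);  -- lift T-SQL's default 100-level recursion cap

The same idea works in Oracle with a recursive WITH clause or CONNECT BY, and the result can be written to an intermediate table if needed.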

Related

Query all tables within a Snowflake Schema

Due to the way our database is stored, we have tables for each significant event that occurs within a product's life:
Acquired
Sold
Delivered
I need to go through and find the status of a product at any given time. To do so I'd need to query all of the tables within the schema and find the most up-to-date record. I know this is possible by UNIONing all the tables and then finding the MAX timestamp, but I wonder if there's a more elegant solution?
Is it possible to query all tables by just querying the root schema or database? Is there a way to loop through all tables within the schema and substitute each one into the FROM clause?
Any help is appreciated.
Thanks
You could write a stored procedure but, IMO, that would only be worth the effort (and more elegant) if the list of tables changed regularly.
If the list of tables is relatively fixed then a UNION statement is probably the most elegant solution, and relatively trivial to create - if you plan to use it regularly then just create it as a view.
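A minimal sketch of that view, assuming hypothetical table and column names (product_id, event_ts) shared across the three event tables; Snowflake's QUALIFY makes the "latest record per product" part concise:

CREATE OR REPLACE VIEW product_events AS
    SELECT product_id, event_ts, 'Acquired'  AS status FROM acquired
    UNION ALL
    SELECT product_id, event_ts, 'Sold'      AS status FROM sold
    UNION ALL
    SELECT product_id, event_ts, 'Delivered' AS status FROM delivered;

-- Most recent status per product
SELECT product_id, status, event_ts
FROM product_events
QUALIFY ROW_NUMBER() OVER (PARTITION BY product_id ORDER BY event_ts DESC) = 1;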
The way I always approach this type of problem (creating the same SQL for multiple tables) is to dump the list of tables out into Excel, generate the SQL statement for the first table using functions, copy the function down for all the table names, and then concatenate all these statements in a final function. You can then paste the result back into your SQL editor.
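The same list-and-concatenate idea can also be done in SQL itself, reading the table list from the information schema instead of Excel. A hedged sketch - the schema name and projected columns are assumptions:

SELECT LISTAGG(
         'SELECT product_id, event_ts, ''' || table_name || ''' AS status FROM ' || table_name,
         ' UNION ALL ')
FROM information_schema.tables
WHERE table_schema = 'EVENTS';  -- hypothetical schema holding the event tables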

BigQuery Table Last Query Date

I use BigQuery heavily and there are now quite a number of intermediate tables. Because teammates can upload their own tables, I don't know all the tables well.
I want to check whether a table has not been used for a long time, and then check whether it can be deleted manually.
Does anyone know how to do this?
Many thanks
You could use logs if you have access. If you make yourself familiar with how to filter log entries, you can find out about your usage quite easily: https://cloud.google.com/logging/docs/quickstart-sdk#explore
There's also the possibility of exporting logs to BigQuery - so you could analyze them using SQL - I guess that's even more convenient.
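For example, once the audit logs are exported to a BigQuery dataset, something along these lines should show when each table was last referenced by a query. The wildcard table name and nested field paths below follow the v1 audit-log export format and are assumptions - check the schema of your own export before relying on them:

-- Assumed export table pattern and field paths; adjust to your log sink
SELECT ref.datasetId AS dataset_id,
       ref.tableId   AS table_id,
       MAX(timestamp) AS last_queried
FROM `myproject.auditlog_dataset.cloudaudit_googleapis_com_data_access_*`,
  UNNEST(protopayload_auditlog.servicedata_v1_bigquery
           .jobCompletedEvent.job.jobStatistics.referencedTables) AS ref
GROUP BY dataset_id, table_id;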
You can get table-specific metadata via the __TABLES__ meta-table (standard SQL):
SELECT *, TIMESTAMP_MILLIS(last_modified_time) AS last_modified
FROM `mydataset.__TABLES__`  -- replace mydataset with your dataset name
This gives you the last time each table was modified - note that last_modified_time tracks writes, not reads, so it's only a rough proxy for whether a table is still in use.

SQL Server Performance Tuning of large table

I have a table with 755 columns, currently holding around 2 million records, and it will grow. There are many procedures accessing it, joining other tables, and they are running slowly. It's hard to split/normalize the table now, as everything is already built and the customer is not ready to spend much on it. Is there any way to make query access to that table faster? Please advise.
Will a columnstore index help?
How little are they prepared to spend?
It may be possible to split this table into multiple 1-to-1 joined tables (vertical partitioning), then use a view to present them as one single blob to existing code.
With some luck you may get join elimination happening frequently enough to make it worthwhile.
The view will probably require INSTEAD OF triggers to fully replicate existing logic. INSTEAD OF triggers have a number of restrictions, e.g. no support for the OUTPUT clause, which can prove too hard to overcome depending on your specific setup.
You can name the view the same as the existing table, which eliminates the need to fix code everywhere.
IMO this is the simplest you can do short of a full DB re-factoring exercise.
See: http://aboutsqlserver.com/2010/09/15/vertical-partitioning-as-the-way-to-reduce-io/ and https://logicalread.com/sql-server-optimizer-may-eliminate-foreign-key-joins-mc11/#.WXgEzlERW6I
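A minimal sketch of the idea, with hypothetical table and column names - the frequently used columns stay in one table, the rest move to a second table, and a view with the old name glues them back together:

-- Hot, frequently queried columns
CREATE TABLE dbo.BigTable_Main (
    Id   int         NOT NULL PRIMARY KEY,
    Col1 varchar(50) NOT NULL,
    Col2 datetime    NULL
);

-- Rarely used columns, 1-to-1 with the main table
CREATE TABLE dbo.BigTable_Extra (
    Id     int          NOT NULL PRIMARY KEY
               REFERENCES dbo.BigTable_Main (Id),
    Col700 varchar(max) NULL,
    Col701 varchar(max) NULL
);
GO

-- Same name as the original table, so existing code keeps working
CREATE VIEW dbo.BigTable
AS
SELECT m.Id, m.Col1, m.Col2, e.Col700, e.Col701
FROM dbo.BigTable_Main AS m
JOIN dbo.BigTable_Extra AS e ON e.Id = m.Id;

Depending on how the join and foreign keys are declared, the optimizer can sometimes eliminate the join entirely for queries that only touch columns from one side - that is the payoff the linked articles describe.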
755 columns - that's a lot. You should try to index the columns that are mostly used in WHERE clauses; this might speed up the process.
It is fine, don't worry about it; the number of columns you have is actually not that important in SQL Server (but be careful, I said 'have'). The main problem is the row count and how many columns you select in your queries. There are a few points you can check first:
Do not use the * selector; change it everywhere it is used.
In joins, do not use the table directly; filter it first in an inner select, as in the sketch after this list. (Just try it - I have no idea about your table, so I'm stating general rules.)
Try to diminish the data count, for example by using a history table for old records. This technique depends on the needs of your organization.
Try to use columnstore indexes and similar features.
And of course, remove dynamic selects from your queries.
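A hedged illustration of the inner-select point above, with hypothetical names - project and filter the wide table before joining it, instead of joining all 755 columns:

SELECT t.Id, t.Col1, o.OrderDate
FROM (
    SELECT Id, Col1               -- only the columns you actually need
    FROM dbo.BigTable
    WHERE SomeDate >= '20170101'  -- filter early
) AS t
JOIN dbo.Orders AS o ON o.ProductId = t.Id;

Modern optimizers often push filters and column pruning down on their own, but an explicit column list is what keeps a 755-column table from dragging unneeded data through the join.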
I hope one of these works for you.

Solution: get reports faster from a database with big data

I use an Oracle database. I have many tables with very big data (300-500 million records).
My query statement joins many tables together. I have set indexes on the tables, but the report is still very slow.
Please help me with a solution for working with big data.
Thanks.
Do you really need to have all the data at once?
Try creating a table that stores just the information you need for the report, and run a query once a day (or every few hours) to update it. You could also use SQL Server Integration Services, although I have not tried SSIS with Oracle myself.
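In Oracle specifically, a materialized view can handle the "refresh once a day" part without any external tooling. A minimal sketch with hypothetical table and column names:

-- Pre-aggregated report data, fully rebuilt every 24 hours
CREATE MATERIALIZED VIEW report_sales_daily
BUILD IMMEDIATE
REFRESH COMPLETE
START WITH SYSDATE NEXT SYSDATE + 1
AS
SELECT s.product_id,
       TRUNC(s.sale_date) AS sale_day,
       SUM(s.amount)      AS total_amount,
       COUNT(*)           AS sale_count
FROM   sales s
GROUP  BY s.product_id, TRUNC(s.sale_date);

The report then queries the small pre-aggregated view instead of joining the 300-500 million row tables at read time.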
I agree with the other users, you really need to give more info on the problem.

Is TSQL the right tool to produce a "transposed" query

I have a task which requires data manipulation to transpose rows to columns. The data is stored in SQL Server, in a typical relational database model. I know it can be done with T-SQL, but it is so complex that there are almost ten different row groups to be transposed into about 200 columns.
Just wondering whether there are better tools to do that?
You will have to generate dynamic SQL to do this. Someone asked something similar a couple of days ago:
sql-query-to-display-db-data
Are you using SQL Server 2005+? I have a stored procedure that makes it pretty easy to pivot data like that.
This post is an example of using it: SQL query result monthly arrangement
For 200 columns, you should probably break it up into logical groups and store the pivoted data in temp tables (the stored proc does that as an option), then join up the temp tables for your final result. I've had squirrelly results with more than 147 columns with PIVOT.
Some example setup source can be found here.
Without a detailed question, it's hard to say which way is best. However, you can turn rows into columns in T-SQL using PIVOT. You may want to check it out and see if it can do what you need.
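A minimal PIVOT sketch with hypothetical names - one row per product, one column per month:

SELECT ProductId, [1] AS Jan, [2] AS Feb, [3] AS Mar
FROM (
    SELECT ProductId, MONTH(SaleDate) AS SaleMonth, Amount
    FROM dbo.Sales
) AS src
PIVOT (
    SUM(Amount) FOR SaleMonth IN ([1], [2], [3])
) AS p;

For around 200 output columns the IN list would have to be built as dynamic SQL, which is why the dynamic-SQL answers above point that way.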
I would also try doing it from the application, as this might work faster than pivoting the data in SQL. But it is one of those things you probably need to try both ways to really see what is best in your own particular case.