Why is the CTAS statement so fast in Azure SQL DW?

I have noticed that Create Table As Select (CTAS) statements in Azure SQL Data Warehouse are extremely fast compared to SELECT INTO statements.
I want to know what magic Microsoft did to make them so fast.

The magic is PolyBase combined with minimal transaction logging: CTAS is a minimally logged operation in SQL Data Warehouse.
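For illustration, a minimal CTAS sketch (the table, distribution key, and filter are hypothetical); the WITH clause is where SQL DW lets you choose the distribution and index up front, and the whole create-and-load runs as one minimally logged, parallel operation:

-- CTAS: create and load the new table in a single minimally logged operation
CREATE TABLE dbo.FactSales_Copy
WITH (DISTRIBUTION = HASH(CustomerKey), CLUSTERED COLUMNSTORE INDEX)
AS
SELECT *
FROM dbo.FactSales
WHERE SaleDate >= '20200101';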

Related

Query optimization and comparison with Impala

I am working on Power BI and use SQL Server as the database. I use views or direct tables as the source for Power BI. My views are simple SELECT queries with simple joins, and I am not finding any scope for query optimization there. Query execution takes time in SQL, and the table has millions of rows, increasing day by day.
Now I am thinking of using Impala alongside SQL Server. I am getting clean data from RapidMiner. I haven't used Impala before, so I have some doubts. Please answer if you can; I have zero knowledge of Impala.
1. Can we create a connection between RapidMiner and Impala? If so, what are the steps? Google gives me some steps which are difficult to understand.
2. Can we create a connection between Impala and SQL Server?
3. Can we create views in Impala and put joins inside those views? I know we can create views as well as joins in Impala, but my question is whether we can combine the two.
4. Suppose the SQL-to-Impala connection is made, and I have one table in Impala and one table in SQL Server Management Studio. Can I join both tables in Impala? For this, can we create a connection between Impala and SQL Server Management Studio?
5. Can I use all the tables and views created in SQL from Impala (after making the connection between SQL and Impala)? That is, my tables and views live in SQL, but I am fetching the data in Impala.
6. All my tables are stored in SQL Server. Can I do join operations on these tables in Impala?
7. Can I make views in Impala using tables which are stored in SQL?
8. Can I create all tables in Impala and do ETL operations like SUM, ADD, and DATEADD in Impala?
9. Can I create all tables in Impala and do ETL operations like SUM, ADD, and DATEADD in Power Query?
10. Can I create views in SQL, load them into an Impala table, and use that in Power Query?
11. Can I create all tables and views with joins in Impala?
12. How can I optimize my query in SQL, and if I run the same query on the same data in Impala, will my execution time be reduced?
My SQL query is like this:
create view test as
select a.*
from table_a a
inner join table_b b on a.id = b.id
inner join table_c c on b.name = c.name
go
The output is 3,000,000 rows, increasing day by day.
I have also tried using the tables directly instead of the view, but the execution time does not decrease.
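On the SQL Server side, one common first step (a sketch only; it assumes the join columns are not already indexed, and the table names match the view above) is to index the columns used in the joins so the view can seek instead of scan:

-- Index the join columns used by the view (names are hypothetical)
CREATE NONCLUSTERED INDEX IX_table_a_id ON table_a (id);
CREATE NONCLUSTERED INDEX IX_table_b_id ON table_b (id);
CREATE NONCLUSTERED INDEX IX_table_b_name ON table_b (name);
CREATE NONCLUSTERED INDEX IX_table_c_name ON table_c (name);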

Resource Intensive Query

I am using ADO.NET Entities against a SQL Azure database. One of the queries is taking an extremely long time, most likely pulling data it doesn't need. Is there a way to match up the query in C# with the query execution in Azure?
Please enable Query Store on SQL Azure to identify the T-SQL equivalent of the LINQ query. Use this article for more details.
The command below enables Query Store:
ALTER DATABASE CURRENT SET QUERY_STORE = ON;
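Once Query Store has collected some data, a query along these lines (a sketch using the built-in Query Store catalog views) can surface the most expensive statements so you can match them back to the LINQ query:

-- Top 10 statements by total duration recorded in Query Store
-- (avg_duration is in microseconds, hence the division by 1000 for ms)
SELECT TOP 10
qt.query_sql_text,
SUM(rs.count_executions) AS executions,
SUM(rs.avg_duration * rs.count_executions) / 1000 AS total_duration_ms
FROM sys.query_store_query_text AS qt
JOIN sys.query_store_query AS q ON q.query_text_id = qt.query_text_id
JOIN sys.query_store_plan AS p ON p.query_id = q.query_id
JOIN sys.query_store_runtime_stats AS rs ON rs.plan_id = p.plan_id
GROUP BY qt.query_sql_text
ORDER BY total_duration_ms DESC;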
Hope this helps.

How to capture truncate statement information in SQL Server 2012

I would like to capture truncate statement information, along with the user/login information, for all databases on my production server.
Example:
Use mydb
go
truncate table deleteme_table
I would like to capture the information into a table like the one below:
Table            Operation   Database   Login               Time
deleteme_table   Truncate    mydb       sandeep.pulikonda   17-12-2014 17:50:00
If the above scenario is not possible, please suggest possible ways to capture this information.
I am using SQL Server 2012 Standard edition, so granular-level audits are not supported for that version.
You can use the SQL Server Audit functionality and add an audit for those statements.
This article explains in detail how to set this up.
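A minimal sketch of that setup (the audit names and file path are hypothetical; note that database audit specifications require Enterprise edition on SQL Server 2012, so check your edition first):

-- Server audit writing to a file target
USE master;
GO
CREATE SERVER AUDIT TruncateAudit
TO FILE (FILEPATH = 'C:\AuditLogs\');
GO
ALTER SERVER AUDIT TruncateAudit WITH (STATE = ON);
GO
-- Database-level specification; TRUNCATE TABLE requires ALTER permission,
-- so it is generally captured under SCHEMA_OBJECT_CHANGE_GROUP
USE mydb;
GO
CREATE DATABASE AUDIT SPECIFICATION TruncateAuditSpec
FOR SERVER AUDIT TruncateAudit
ADD (SCHEMA_OBJECT_CHANGE_GROUP)
WITH (STATE = ON);
GO
-- Read the captured events back, including the login and event time
SELECT * FROM sys.fn_get_audit_file('C:\AuditLogs\*', DEFAULT, DEFAULT);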
Another good way of profiling your SQL Server is with SQL Server Profiler. Here is a SO question similar to yours, with an answer describing how to use Profiler to achieve this:
SQL Server Profiler - How to filter trace to only display TSQL containing a DELETE statement?

Periodically store data from a PostgreSQL table to SQL Server 2005 table (with the same schema)

I have a PostgreSQL database that stores real-time sensor data in a specific table (a new row every 30 seconds).
What I want to do is periodically fetch the data from the remote PostgreSQL database (for instance, every 30 seconds) and store it in SQL Server 2005 so I can manipulate it locally. I don't mind the two databases having duplicate tables; actually, this is exactly what I want to achieve!
So far I have PostgreSQL set up as a linked server in SQL Server, and I can query and retrieve the sensor data. However, I would prefer to store the data in my SQL Server for performance reasons.
Solution so far:
Run SELECT ... FROM OPENQUERY statements against the linked PostgreSQL server and insert the results into my table in SQL Server. Repeat this periodically, storing fresh data only (e.g. rows with a newer timestamp), as sketched below.
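A rough sketch of that approach (the linked server name PGLINK, the table, and the column names are hypothetical):

-- Copy only rows newer than what we already have locally
DECLARE @last DATETIME;
SELECT @last = ISNULL(MAX(reading_time), '19000101') FROM dbo.SensorData;

INSERT INTO dbo.SensorData (sensor_id, reading, reading_time)
SELECT src.sensor_id, src.reading, src.reading_time
FROM OPENQUERY(PGLINK,
'SELECT sensor_id, reading, reading_time FROM sensor_data') AS src
WHERE src.reading_time > @last;

-- Note: the filter runs on the SQL Server side, so the full remote result
-- set is pulled each time; pushing the predicate into the pass-through
-- string requires dynamic SQL. A SQL Server Agent job can run this on a schedule.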
I assume that my proposed solution is not ideal. I would like to know the best practices for achieving this synchronization between the two databases.
Thank you in advance!
If you don't want to write your own code (implementation) to do that, you can use SymmetricDS to sync the table from PostgreSQL to MSSQL.

Move Data from Oracle to SQL Server

I would like to copy parts of an Oracle DB to a SQL Server DB. I need to move the data because the Oracle box is being decommissioned. I only need the data for reference purposes, so I don't need indexes, stored procedures, constraints, etc. All I need is the data.
I have a link to the Oracle DB in SQL Server. I have tested the following query, which seemed to work just fine:
SELECT *
INTO NewTableName
FROM linkedserver.OracleTable
I was wondering if there are any potential issues with using this approach?
Using SSIS (SQL Server Integration Services) may be a good alternative, especially if your table names are the same on both servers. Use the Import and Export Wizard (available from SSMS), and it should create the destination tables for you and let you edit any mappings.
The only issue I see with your approach is that you will need to execute that statement for each and every table you need; a quick way to generate those statements is sketched below. Glad you are decommissioning the Oracle server :-). Otherwise, if you are not concerned with indexes or any of the existing stored procedures, I don't see any issue in what you are doing.
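For example, something like this (a sketch; the linked server name ORACLELINK and schema MYSCHEMA are hypothetical) builds one SELECT ... INTO statement per Oracle table, which you can then review and run:

-- Generate a SELECT ... INTO statement for every table in one Oracle schema
SELECT 'SELECT * INTO [' + t.TABLE_NAME + '] FROM ORACLELINK..MYSCHEMA.' + t.TABLE_NAME + ';'
FROM OPENQUERY(ORACLELINK,
'SELECT table_name FROM all_tables WHERE owner = ''MYSCHEMA''') AS t;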
The "select " approach could be very slow if tables are large. Consider writing pro*C in that case or use Fastreader http://www.wisdomforce.com/products-FastReader.html
A faster and easier approach might be to use Data Transformation Services (DTS), depending on the number of objects you're trying to copy over.