Do I need to import the full data into an SSAS tabular model from my SQL database in DirectQuery mode? - sql-server-2012

I'm trying to build an Analysis Services tabular project and want to use DirectQuery mode so that queries are executed at the backend.
When I click on the model and select import data from a source, I see an option to retrieve the full data. I have a billion rows in my fact table and I don't want to import the entire data set when building the model. Am I missing something here? DirectQuery in tabular, from what I understand, is similar to the ROLAP storage mode in the multidimensional world, where there is no need for a process step and queries get real-time data. So what's the point of importing all the data when building the model?
If it is just to get the schema of the tables, why not just query the DB for the schema instead of importing the full data? Can someone explain?

When you go through the Import From Data Source wizard, select Write a query that will specify the data to import, and write a query that returns only one row: SELECT TOP 1 * FROM <table_name>. That will import just one row along with the schema.
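For example, assuming a fact table named dbo.FactInternetSales (the name here is hypothetical):
-- Imports a single row; the wizard still picks up the full column schema
SELECT TOP 1 * FROM dbo.FactInternetSales;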

Related

Easier way of doing SQL data export - Azure

I have two tables, A and B, in an Azure SQL database. I have a clone of the same database running locally, and I want to populate it with the data from Azure using the SSMS Export Data option. With that option I specify the source and destination and then choose "Write a query to specify the data to transfer".
I then add the query "SELECT * FROM A WHERE Condition1" and select the destination table.
The issue is that if I have 5 tables to export data from, I have to repeat this whole process 5 times; the only differences are the queries and the destination tables. Does anyone have an idea how I can do this faster by some other means? I just need to copy data using SELECT statements with WHERE clauses.
As per the official documentation:
When you select Write a query to specify the data to transfer, you can only copy the results of one query to one destination table.
So you have to repeat the entire process multiple times if you want to export data that way.
You can use the following methods for importing and exporting data instead:
Use Transact-SQL statements.
Use BCP (Bulk Copy Program) from the command prompt (see the sketch after this list).
If you want to design a custom data import, you can use SQL Server Integration Services.
Use Azure Data Factory.
Use a BACPAC file; refer to the AccuWebHosting material on it to learn more. Rather than querying before exporting the data, you can instead delete the unwanted data in the destination database after exporting, using DELETE statements.
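For the BCP route, a minimal sketch for one table might look like the following, run from the command prompt; the server, database, and credential values are placeholders, and Condition1 stands in for your actual filter:
REM Export the filtered rows from the Azure source database
bcp "SELECT * FROM dbo.A WHERE Condition1" queryout A_filtered.dat -c -S myserver.database.windows.net -d SourceDb -U myuser -P mypassword
REM Load the exported file into the local clone (trusted connection)
bcp LocalDb.dbo.A in A_filtered.dat -c -S localhost -T
Repeating the pair per table in a single .cmd script turns the five wizard passes into one command.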
REFERENCE:
Import & export data from SQL Server & Azure SQL Database - SQL Server | Microsoft Docs

Issue with a partition in an SSAS tabular model in DirectQuery mode

I am trying to add a sample partition to a tabular model database in DirectQuery mode, and I got the following error after setting the filter and trying to import:
"Failed to save modifications to the server: Error returned: 'A table that has partitions using DirectQuery mode and a Full DataView can have only one partition in DirectQuery mode. In this mode, table 'FactInternetSales' has invalid partition settings. You might need to merge or delete partitions so that there is only one partition in DirectQuery mode with Full Data View.'"
Would anyone please help me understand the issue? Thank you.
A DirectQuery model is one that doesn't cache the data in the model. Instead, as the DirectQuery model is queried, it in turn generates queries against the backend SQL data source at query time. This is in contrast to an Import model, where the source data is imported ahead of time and compressed in memory for snappy query performance. Import models require periodic refreshes so the data won't get stale; DirectQuery models don't require a refresh since they always reflect what's in the source system.
The error you got is self-explanatory. A DirectQuery model should have only one partition per table, and that partition's query should cover 100% of the date range your model should cover for that particular table. So check the partitions on FactInternetSales, remove all but one, and remove the WHERE clause from the remaining partition's query.
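In other words, the single DirectQuery partition's source query should select the whole table with no filter; a minimal sketch, using the table name from the error message:
-- Full Data View partition: no WHERE clause, so the partition covers the entire table
SELECT * FROM dbo.FactInternetSales;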

Azure Data Factory: trivial SQL query in Data Flow returns nothing

I am experimenting with Data Flows in Azure Data Factory.
I have:
Set up a LinkedService to a SQL Server db. This db only has 2 tables.
The two tables are called "dummy_data_table1" and "dummy_data_table2" and are registered as Datasets
The ADF is copying data from these 2 tables, and in the Data Flow they are called "source1" and "source2"
However, when I select a source, go to Source options, and change Input from Table to Query and enter a simple query, it returns 0 columns (there are 11 columns in dummy_data_table1). I suspect my syntax is wrong, but how should I change it?
The problem was not the syntax. The problem was that the data flow could not recognize "dummy_data_table1" because it didn't refer to anything known. To make it work, I had to:
Enable Data Flow Debug (the toggle at the top of the page)
Once that was enabled, click on "Import projection" to import the schema of my table
Once this is done, the table name and fields are all automatically recognized and can be referenced in the query just like one would do in SQL Server.
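Once the projection is imported, even the trivial query from the question should return all 11 columns, for example (table name taken from the question):
SELECT * FROM dummy_data_table1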
Source:
https://learn.microsoft.com/en-us/azure/data-factory/data-flow-source#import-schema

Power BI query - is this possible?

Is it possible to select two tables from one database and one table from another database?
Yes. When using import mode, there is no limit on the number and type of data sources (make sure that the Import radio button is selected in the connection dialog).
Import the data from the first data source, then import the data from the second data source. After that you may want to review and define the relationships between the tables from the different data sources (by clicking the Manage Relationships button in the ribbon).
There is no limit on the data sources, so you can do this not only between two SQL Servers, but between SQL Server and a flat file, or a flat file and a web source, etc. So there is no need to use linked servers at all.

SELECT INTO vs. Import and Export Wizard in SQL Server

In SQL Server, I connected to the server from my desktop, and I want to move data from one database to another. I have used both SELECT INTO and the Import Wizard, but the Import Wizard seems to be slow. Why?
Are there other methods for transferring data?
SELECT INTO is a SQL statement, and it is executed directly.
The Import and Export Wizard is a tool which invokes Integration Services (SSIS).
The wizard is slower, but it can use various data sources.
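For comparison, a minimal SELECT INTO sketch that copies a table between two databases on the same server (database and table names are placeholders):
-- Creates dbo.Customers in TargetDb and copies all rows in a single pass
SELECT *
INTO TargetDb.dbo.Customers
FROM SourceDb.dbo.Customers;
Note that SELECT INTO creates the destination table, so it must not already exist.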
More about the Import and Export Wizard:
https://msdn.microsoft.com/en-us/library/ms141209.aspx
Forum thread about SELECT INTO vs. the Import and Export Wizard:
https://social.msdn.microsoft.com/forums/sqlserver/en-US/e0524b2a-0ea4-43e7-b74a-e9c7302e34e0/super-slow-performance-while-using-import-export-wizard
I agree with Andrey. The Wizard is super slow. If you perform a Google search on "sql server import and export wizard slow", you will receive nearly 50k hits. You may want to consider a couple of other options.
BCP Utility
Note: I have used this on a number occasions. Very fast processing.
The bcp utility bulk copies data between an instance of Microsoft SQL Server and a data file in a user-specified format. The bcp utility can be used to import large numbers of new rows into SQL Server tables or to export data out of tables into data files. Except when used with the queryout option, the utility requires no knowledge of Transact-SQL. To import data into a table, you must either use a format file created for that table or understand the structure of the table and the types of data that are valid for its columns.
Example (this one uses the related T-SQL BULK INSERT statement rather than the bcp command-line utility itself):
BULK INSERT TestServer.dbo.EmployeeAddresses
FROM 'D:\Users\Addresses.txt';
GO
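The equivalent import with the bcp utility itself, run from the command prompt, might look like this (the instance name and trusted connection are assumptions):
REM Bulk-load the text file into the table using character format (-c)
bcp TestServer.dbo.EmployeeAddresses in "D:\Users\Addresses.txt" -S .\SQLEXPRESS -T -c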
OPENROWSET(BULK) Function
The OPENROWSET(BULK) function connects to an OLE DB data source to read data in bulk, allowing you to access remote data by connecting to a remote data source from within a single T-SQL statement.
Example:
INSERT INTO AllAddress(Address)
SELECT * FROM OPENROWSET(
    BULK 'D:\Users\Addresses.txt',
    SINGLE_BLOB) AS x;
Reference
https://msdn.microsoft.com/en-us/library/ms175915.aspx
http://solutioncenter.apexsql.com/sql-server-bulk-copy-and-bulk-import-and-export-techniques/
The database engine stores data in many small chunks of files for faster retrieval. When you use the Export Wizard, it writes all of the metadata and data to RAM first, which adds overhead that depends on your system; the same happens when importing. SELECT INTO is fast because the engine simply creates a replica inside a database that already exists.
In real life, SELECT INTO is like photocopying a page, whereas the wizard is like rewriting the page manually.