Power BI DirectQuery mode - handle multiple tables - SQL

I have an Azure SQL DB where all the tables sit: fact, dimension, and other lookups. I have a requirement to pull three tables (a fact table, a dimension, and another lookup table that is not part of the star schema) via DirectQuery and make them part of the data model within Power BI.
DirectQuery doesn't seem to allow more than one table to be queried from a single source.
Any thoughts/suggestions?

Sorry, and thanks to Jon's response: it triggered me to look further, and I found that for each table (from the same source) I have to go through the 'Get Data' process.
Initially I thought that via one 'Get Data' process I could multi-select tables, but obviously not.
All OK here now.

I tried with a MySQL DB and was able to select multiple tables and load them into Power BI in one go.
New Source -> Database -> MySQL Database -> add host and password -> you then get a navigator window with the table list, which shows tables, views, procedures, etc.

Related

SQL external table takes a long time for SELECT and INSERT into a temp table

I am using an external table to pull data from the main database into a data warehouse database. Selecting data from the external table into a # (temp) table takes almost 9 minutes, and sometimes longer. How can I improve the performance of this external table?
One option is to use T-SQL that performs the query in the external database and returns only the data required, as sketched below: the filter is applied first in the external database, and only the filtered rows are received by the local database.
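A minimal sketch of that push-down pattern, using the EXECUTE ... AT DATA_SOURCE syntax from the documentation linked below (the data source name, column names, and filter value are assumptions; PerformanceVarcharNVarchar is the external table from the execution plan note):

    -- Sketch only: MyExternalDataSource, the column names, and the filter value are
    -- placeholders. The statement inside EXECUTE runs on the external database, so
    -- the WHERE clause is applied there and only matching rows travel back into the
    -- local temp table.
    CREATE TABLE #FilteredRows
    (
        Id         INT           NOT NULL,
        SomeColumn NVARCHAR(200) NULL
    );

    INSERT INTO #FilteredRows (Id, SomeColumn)
    EXECUTE (N'SELECT Id, SomeColumn
               FROM dbo.PerformanceVarcharNVarchar
               WHERE SomeColumn = N''SomeValue''')
        AT DATA_SOURCE MyExternalDataSource;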
When you enable the query's Actual Execution Plan option, you can see that the query Select * from PerformanceVarcharNVarchar brings all the data from the external database into the local database, and only then does the engine apply the filter.
The official Microsoft documentation is here: EXECUTE (Transact-SQL) | Docs
Alternatively, you can use Azure SQL Data Sync: SQL Data Sync is a service built on Azure SQL Database that lets you synchronize selected data bidirectionally across multiple databases, both on-premises and in the cloud.
The original post has detailed insights: Lesson Learned #56: External tables and performance issues | techcommunity

ADF - How should I copy table data from source Azure SQL Database to 6 other Azure SQL Databases?

We curate data in the "Dev" Azure SQL Database and then currently use RedGate's Data Compare tool to push it up to as many as 6 higher Azure SQL Databases. I am trying to migrate that manual process to ADFv2 and would like to avoid copy/pasting the 10+ copy data activities for each database (x6), to keep it more maintainable for future changes. The static tables have some customization in the copy data activity, but the basic idea follows this post to perform an upsert.
How can the implementation described above be done in Azure Data Factory?
I was imagining something like the following:
Using one parameterized linked service that has the server name & database name configurable, to generate a dynamic connection to Azure SQL Database.
Creating a pipeline for each table's copy data activity.
Creating a master pipeline to then nest each table's pipeline in.
Using variables to loop over the different connections and passing those to the sub-pipelines' parameters.
Not sure if that is the most efficient plan or even works yet. Other ideas/suggestions?
We can't tell you whether that's the most efficient plan, but I think it is. Just make it work.
As you said in the comment:
We can use dynamic pipelines to copy multiple tables in bulk with 'Lookup' & 'ForEach'. We can perform dynamic copies of your data table lists in bulk within a single pipeline. Lookup returns either the list of data or the first row of data. ForEach - @activity('Azure SQL Table lists').output.value ; @concat(item().TABLE_SCHEMA,'.',item().TABLE_NAME,'.csv'). This is efficient and cost optimized since we are using fewer activities and datasets.
Usually we would also choose the same solution as you: dynamic parameters/pipelines, with Lookup + ForEach activities to achieve the scenario. In short, keep the pipeline logic robust, simple, and efficient.
Adding the same info mentioned in the comment as an answer.
Yup, we can use dynamic pipelines to copy multiple tables in bulk with 'Lookup' & 'ForEach'.
We can perform dynamic copies of your data table lists in bulk within a single pipeline. Lookup returns either the list of data or the first row of data.
ForEach - @activity('Azure SQL Table lists').output.value ; @concat(item().TABLE_SCHEMA,'.',item().TABLE_NAME,'.csv')
This is efficient and cost optimized since we are using fewer activities and datasets.
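For reference, a minimal sketch of the kind of Lookup source query that could feed the ForEach above, assuming the table list is read straight from INFORMATION_SCHEMA (the schema filter is an assumption):

    -- Returns one row per table to copy, with the TABLE_SCHEMA and TABLE_NAME
    -- columns that the ForEach expressions above (item().TABLE_SCHEMA,
    -- item().TABLE_NAME) expect.
    SELECT TABLE_SCHEMA, TABLE_NAME
    FROM INFORMATION_SCHEMA.TABLES
    WHERE TABLE_TYPE = 'BASE TABLE'
      AND TABLE_SCHEMA = 'dbo';  -- assumption: only the dbo schema is copied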

Azure Data Factory: trivial SQL query in Data Flow returns nothing

I am experimenting with Data Flows in Azure Data Factory.
I have:
Set up a LinkedService to a SQL Server db. This db only has 2 tables.
The two tables are called "dummy_data_table1" and "dummy_data_table2" and are registered as Datasets
The ADF is copying data from these 2 tables, and in the Data Flow they are called "source1" and "source2"
However, when I select a source, go to Source options, and change Input from Table to Query and enter a simple query, it returns 0 columns (there are 11 columns in dummy_data_table1). I suspect my syntax is wrong, but how should I change it?
Hopefully this screenshot will help.
The problem was not the syntax. The problem was that the data flow could not recognize "dummy_data_table1" because it didn't refer to anything known. To make it work, I had to:
Enable Data Flow Debug (at the top of the page, not visible in my screenshot)
Once that's enabled, I had to click on "import projection" to import the schema of my table
Once this is done, the table name and fields are all automatically recognized and can be referenced in the query just like one would do in SQL Server.
Source:
https://learn.microsoft.com/en-us/azure/data-factory/data-flow-source#import-schema
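For what it's worth, once the projection is imported, a source query no fancier than the following should return all 11 columns (the dbo schema is an assumption):

    -- Plain SQL Server syntax works in the Data Flow source query once the
    -- projection has been imported; dbo is assumed as the schema here.
    SELECT *
    FROM dbo.dummy_data_table1;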

Pentaho multi table input multi table output

Question in regard of Pentaho Spoon (Data Integration):
How can I transfer the input of multiple tables from one database to multiple tables in another database? Basically a 1:1 data migration with creating the tables automatically in the target database.
I basically want to multiply the following transformation: Picture of table transformation
Try the Copy Tables wizard, under the tools menu.
To use it, you will need to create a new transformation and define both database connections that you want to use.

Using SSIS to create new Database from two separate databases

I am new to SSIS. I got this task according to the scenario explained below.
Scenario:
I have two databases, A and B, on different machines, with around 25 tables and 20 columns and with relationships and dependencies. My task is to create a database C with a selected number of tables, and in each table I don't require all the columns, only selected ones. The conditions to be met are that the relationships should stay intact and be created automatically in the new database.
What I have done:
I have created a package using the Transfer SQL Server Objects task to transfer the tables and relationships.
Then I manually edited out the columns that are not required.
Then I transferred the data using a data source and destination.
My question is: can I achieve all of this in one package? Also, after I have transferred the data, how can I schedule the package to transfer only the recently inserted rows from the source database to the new database?
Please help me
thanks in advance
You can schedule the package by using a SQL Server Agent Job - one of the options for a job step is to run an SSIS package.
With regard to transferring new rows, I would either:
Track your current "position" in another table; this assumes you have either an ascending key or a timestamp column. Load the current position into an SSIS variable and use this variable in the WHERE clause of your data source queries (see the watermark sketch below).
Transfer all data across into "dump" copies of each table (no relationships/keys etc. required, just the same schema) and use a T-SQL MERGE statement to load the new rows in, then truncate the "dump" tables (see the MERGE sketch below).
Hope this makes sense - it's a bit difficult to get across in writing.
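Both options can be roughly sketched in T-SQL; the table and column names below are purely illustrative, not from your actual databases:

    -- Option 1 sketch: a watermark table for the incremental load. The SSIS package
    -- reads LastValue into a variable and uses it in the source query, e.g.
    --   SELECT * FROM dbo.SourceTable WHERE ModifiedDate > ?  -- ? = watermark variable
    CREATE TABLE dbo.LoadWatermark
    (
        TableName SYSNAME      NOT NULL PRIMARY KEY,
        LastValue DATETIME2(3) NOT NULL
    );

    -- Option 2 sketch: land everything in a "dump" copy of the table, then MERGE
    -- only the new rows into the real target and clear the dump table again.
    MERGE dbo.TargetTable AS tgt
    USING dbo.DumpTable   AS src
        ON tgt.Id = src.Id
    WHEN NOT MATCHED BY TARGET THEN
        INSERT (Id, Col1, Col2)
        VALUES (src.Id, src.Col1, src.Col2);

    TRUNCATE TABLE dbo.DumpTable;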