How to Populate REST APIs with an Azure Data Factory Pipeline with Multiple Entries

I am trying to establish a way of ingesting data for multiple companies with Azure Data Factory using an HTTP linked service.
The example below shows how to ingest a single data source (a single entry) using the HTTP link:
https://duedil.io/v4/company/gb/06999618.json
The above would give me the financials for a single company. If we wanted to ingest the financials of another company, with a company ID of 07803944, we would have to replace the above link with
https://duedil.io/v4/company/gb/07803944.json
For obvious reasons it would not be practical to have a copy activity or pipeline for every company we need financials for.
Therefore, can someone let me know whether it's possible to parameterize the linked service to ingest multiple companies in the same copy activity? Alternatively, could individual company IDs simply be added to the linked service?
I tried the following, but it didn't work.
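For illustration, here is a minimal Python sketch of the pattern being asked about: iterate over a list of company IDs and build each request URL from a base URL plus the ID. In Data Factory the same idea is normally expressed with a parameterized dataset driven by a ForEach activity. The company list below is an assumption, and any authentication the API requires is omitted.

```python
import requests

# Hypothetical list of company IDs we want financials for.
company_ids = ["06999618", "07803944"]

# Base URL with the company ID left as a parameter, mirroring how a
# parameterized relative URL would look in a Data Factory dataset.
BASE_URL = "https://duedil.io/v4/company/gb/{company_id}.json"


def fetch_financials(company_id: str) -> dict:
    """Fetch the financials JSON for a single company (auth omitted)."""
    url = BASE_URL.format(company_id=company_id)
    response = requests.get(url, timeout=30)
    response.raise_for_status()
    return response.json()


if __name__ == "__main__":
    for company_id in company_ids:
        data = fetch_financials(company_id)
        print(company_id, list(data.keys()))
```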

Related

Azure SQL Data Discovery and Classification Unable to Discover Our Tables

Data Discovery & Classification is built into Azure SQL Database, Azure SQL Managed Instance, and Azure Synapse Analytics. It provides basic capabilities for discovering, classifying, labeling, and reporting the sensitive data in your databases.
However, the Data Discovery and Classification engine fails to identify a number of our tables. At the moment, it is only able to identify our contacts table.
Can someone let me know if a table needs to be in a certain format for Data Discovery to be able to discover and classify it?
If not, can someone explain why the Data Discovery and Classification engine is unable to identify/discover the other tables?
Azure SQL Data Discovery and Classification identifies data based on the information types of columns; it works by matching column names.
To add classifications directly, you can follow the steps below:
Manage Information Types
To detect tables, the engine uses the built-in information types for columns: it matches each type's patterns against your column names. You can edit an information type by clicking Configure, adding the pattern for your column name, and saving it.
After creating a table with column names that match the patterns defined in the information type, the engine detects the table successfully.
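As a rough illustration of that name matching, the sketch below creates a table whose column names (Email, PhoneNumber, CreditCardNumber) follow the kind of patterns the built-in information types look for, so the engine should be able to pick them up. The connection string is a placeholder.

```python
import pyodbc

# Hypothetical connection string; replace with your Azure SQL details.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=myserver.database.windows.net;DATABASE=mydb;"
    "UID=myuser;PWD=mypassword"
)

# Column names like Email and PhoneNumber match built-in information
# type patterns, so Data Discovery & Classification can detect them;
# a column named Col1 or Fld2 would not be matched.
conn.execute("""
    CREATE TABLE dbo.Customers (
        CustomerId       INT IDENTITY PRIMARY KEY,
        Email            NVARCHAR(256),
        PhoneNumber      NVARCHAR(32),
        CreditCardNumber NVARCHAR(32)
    )
""")
conn.commit()
conn.close()
```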

Extracting and loading data from a REST API with Data Factory automatically whenever there is a new report ID

I am aiming to create a service combining Azure Data Factory and Azure Databricks where one or several reports are extracted from a REST API (Swagger) and stored. I have managed to obtain the report data manually for each individual ID using Copy Data from the REST API. However, I'd like to obtain all the reports that share the same name and, whenever there is a new one (a new ID), automatically extract and store it.
I assume this could be made dynamic by querying the IDs for that specific name with a periodic trigger; whenever a new one appears, the data would be queried using the updated list of IDs. However, I can't get my mind around how to use this list as parameters, and I haven't found a function that could do this.
I have also tried Azure Functions, but I am not sure whether they would cover this need, and I gave up because I struggled with the configuration.
Could I get some help please?
Thanks in advance!
Santiago
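One way to sketch the polling approach described above in plain Python (before wiring it into Data Factory or Databricks): periodically list the report IDs for a given report name, compare them with the IDs already processed, and fetch only the new ones. The endpoint paths, field names, and report name below are assumptions for illustration.

```python
import json
from pathlib import Path

import requests

# Assumed endpoints; adjust to the actual Swagger-documented API.
API_BASE = "https://example.com/api"
STATE_FILE = Path("processed_ids.json")


def list_report_ids(report_name: str) -> set[str]:
    """Return all report IDs currently available for a given report name."""
    resp = requests.get(f"{API_BASE}/reports", params={"name": report_name}, timeout=30)
    resp.raise_for_status()
    return {item["id"] for item in resp.json()}


def load_processed() -> set[str]:
    return set(json.loads(STATE_FILE.read_text())) if STATE_FILE.exists() else set()


def save_processed(ids: set[str]) -> None:
    STATE_FILE.write_text(json.dumps(sorted(ids)))


def run_once(report_name: str) -> None:
    processed = load_processed()
    new_ids = list_report_ids(report_name) - processed
    for report_id in new_ids:
        resp = requests.get(f"{API_BASE}/reports/{report_id}", timeout=60)
        resp.raise_for_status()
        # Store the raw report; in the real pipeline this would be a sink
        # such as blob storage instead of a local file.
        Path(f"report_{report_id}.json").write_text(resp.text)
    save_processed(processed | new_ids)


if __name__ == "__main__":
    run_once("monthly_sales")  # hypothetical report name
```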

SESSION_USER equivalent from BigQuery in Data Studio reports

We are creating dashboards for clients using Data Studio.
Each client should see their own data in the dashboard, based on their login credentials. It is simple to create an authorized view in BigQuery to let certain users see certain rows of an underlying shared table. But how would one then move this into a dashboard that can be shared with each client, yet shows only that client's data instead of the data that was visible to the report creator?
So let's say we have a large table with a bunch of columns, one of which is an email column containing the users' email addresses. Now we want the dashboard to show metrics for each user based on this email column.
In Data Studio, at the data source schema review step, make sure the USING VIEWER'S CREDENTIALS flag is on. With it turned on, the query will execute with the viewer's credentials instead of those of the owner who created the report.
After you finish creating the visualizations in Data Studio, the final step is to share the report (e.g. with store managers) using Data Studio's Share option, which works much like sharing a Google Doc. You can confidently share it with the whole organization or with an email group (e.g. store managers), since permissions are already controlled at the data level.
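On the BigQuery side, the per-user filtering can be expressed as a view that compares the email column against SESSION_USER(). Here is a rough sketch using the google-cloud-bigquery client, with hypothetical project, dataset, and table names:

```python
from google.cloud import bigquery

client = bigquery.Client(project="my-project")  # hypothetical project

# View that returns only the rows belonging to whoever runs the query.
# With "viewer's credentials" enabled in Data Studio, SESSION_USER()
# resolves to the report viewer's email, not the report owner's.
view = bigquery.Table("my-project.reporting.client_metrics_view")
view.view_query = """
    SELECT *
    FROM `my-project.raw.client_metrics`
    WHERE email = SESSION_USER()
"""
client.create_table(view, exists_ok=True)
```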

Is tableau able to access data dynamically?

Usually a Tableau dashboard operates on "static" data that is "attached" to the published dashboard. I wonder if it is possible to make Tableau read data on the fly (when a user interacts with it). By that I mean that the data to be visualized is taken from a database that can be "dynamic": for example, the data shown by Tableau today and yesterday may not be the same because the content of the database might have changed. Alternatively, we might try to retrieve data from an API; for example, Tableau sends a request to an HTTP server, gets a data table in the form of JSON, and then visualizes it. Is Tableau able to do that?
Yes, Tableau can connect to live data sources, including any number of database technologies. No, it cannot send HTTP requests for JSON directly, but it does have a Web Data Connector feature if you or someone else has built that web service. Here are some tips on when to use live connections versus taking an extract: http://mindmajix.com/use-direct-connection-data-extract-tableau/

Transferring specific data from one db to another locally

OK, here is the situation. We have a client application that contains a local SQL database. At any given time, users could be working in the field with no internet connection at all, which means they cannot sync back to the server or to anything else. Client A needs to export specific pipe information, which will include data from several tables, and hand this information off to Client B, who will continue to work on it. Client A will need to put this information in a file and give it to Client B. Client B will then need to import this pipe information into his local database.
I'm brainstorming ideas for the best solution to accomplish this.
So far, querying the specific pipe information, writing it to an XML file, and then importing the XML and writing it to the database could be one option.
Or simply querying the information and generating SQL scripts that can be executed on Client B's machine.
We just can't export the entire database's information from one computer to another; it has to be only the specific information the user wants to export.
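As a sketch of the XML hand-off option, here is roughly what the export on Client A and the import on Client B could look like, assuming a local SQLite store and a single hypothetical pipes table with a pipe_id primary key (a real export would cover the several related tables mentioned above):

```python
import sqlite3
import xml.etree.ElementTree as ET


def export_pipes(db_path: str, pipe_ids: list[str], out_file: str) -> None:
    """Export the selected pipe rows from Client A's local database to XML."""
    conn = sqlite3.connect(db_path)
    conn.row_factory = sqlite3.Row
    placeholders = ",".join("?" for _ in pipe_ids)
    rows = conn.execute(
        f"SELECT * FROM pipes WHERE pipe_id IN ({placeholders})", pipe_ids
    ).fetchall()
    root = ET.Element("pipes")
    for row in rows:
        pipe = ET.SubElement(root, "pipe")
        for key in row.keys():
            ET.SubElement(pipe, key).text = str(row[key])
    ET.ElementTree(root).write(out_file, encoding="utf-8", xml_declaration=True)
    conn.close()


def import_pipes(db_path: str, in_file: str) -> None:
    """Import the XML file into Client B's local database."""
    conn = sqlite3.connect(db_path)
    for pipe in ET.parse(in_file).getroot():
        cols = [child.tag for child in pipe]
        vals = [child.text for child in pipe]
        placeholders = ",".join("?" for _ in cols)
        # INSERT OR REPLACE keeps re-imports idempotent, assuming pipe_id
        # is the primary key on both databases.
        conn.execute(
            f"INSERT OR REPLACE INTO pipes ({','.join(cols)}) VALUES ({placeholders})",
            vals,
        )
    conn.commit()
    conn.close()


# Client A: export_pipes("client_a.db", ["P-1001", "P-1002"], "pipes.xml")
# Client B: import_pipes("client_b.db", "pipes.xml")
```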