Azure Synapse Analytics Pipeline - Which IP range? - azure-synapse

I'm trying to set up an ELT pipeline to pull source data from a MySQL database into Synapse using the "Copy Data tool". The source MySQL database is operated by a third party that requires me to provide an IP address (or range) for their whitelist.
I've searched around but cannot find the IP address range for "Azure Synapse Analytics". I must admit that I'm new to Azure Synapse Analytics and confused by the nomenclature. For example, I found this Azure IP Range list, which contains ranges for various services, but none named "Synapse Analytics".
Where do I find the proper IP range? Or do I need to set up my Synapse Analytics instance with a fixed IP address, and if so, where do I find more information on that?

You can find the current list of IP addresses for Azure SQL/Synapse Analytics here: https://learn.microsoft.com/en-us/azure/azure-sql/database/connectivity-architecture#gateway-ip-addresses
Synapse pipelines share many attributes with Data Factory pipelines, so the IP addresses should match; you can find those in the document you linked (look for the Data Factory service tags).
Keep in mind that this list doesn't change often, but it can change. Generally there will be an announcement ahead of time so you can update your firewalls.
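If you want to pull the ranges programmatically, one option is to filter the downloadable "Azure IP Ranges and Service Tags - Public Cloud" JSON file. This is a minimal sketch, assuming the file has already been downloaded locally; the file name and the tag prefixes to match ("DataFactory", "Sql") are assumptions you should verify against the actual file contents:

    import json

    # ServiceTags_Public.json is the file downloaded from the Microsoft site
    with open("ServiceTags_Public.json") as f:
        service_tags = json.load(f)

    for entry in service_tags["values"]:
        name = entry["name"]
        # assumed tag prefixes -- adjust after inspecting the file
        if name.startswith(("DataFactory", "Sql")):
            prefixes = entry["properties"]["addressPrefixes"]
            print(f"{name}: {len(prefixes)} prefixes")
            for prefix in prefixes[:5]:   # print only the first few ranges
                print("   ", prefix)

Note that these ranges cover the shared service rather than your specific workspace, so the third party's whitelist entry will be broader than a single fixed IP.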

Related

Google Cloud Data Fusion: How can I load many tables to BigQuery in one pipeline?

I want to load many tables from an AWS RDS MySQL server using Cloud Data Fusion. Each table is roughly 1 GB or larger. I found the "Multiple Database Tables" plugin for loading several tables at once, but it failed. Also, with the regular Database source I can check my tables' schema, but with Multiple Database Tables I can't find how to check a table's schema. How can I use this plugin, or is there another way to load many tables with the Data Fusion service?
My pipeline settings were as follows.
I'm posting this Community Wiki answer as the OP didn't provide enough details to reproduce the issue, but the information below might help someone.
There are a few ways to get your data using Cloud Data Fusion: you can use a pipeline, a plugin, a driver, and a few other options depending on your needs.
There are two well-described guides with examples available online.
If you would like some information about Cloud Data Fusion with GCP products, you should read Bahadir Bulut's guide - How I used Google Cloud Data Fusion to create a data warehouse - Part 1 and Part 2. Data Fusion also offers 150+ preconfigured connectors and transformations, such as Amazon S3, SQS, Azure services, and many more.
Another well-described approach (which I guess would help the OP) is to configure both the Amazon and GCP resources and use pipelines. This guide is Building a Simple Batch Data Pipeline from AWS RDS to Google BigQuery — Part 1: Setting up AWS Data Pipeline, and its second part, Building a Simple Batch Data Pipeline from AWS RDS to Google BigQuery — Part 2: Setting up BigQuery Transfer Service and Scheduled Query. In short, this guide describes two main steps:
Extract the data from MySQL RDS and bring it into S3 using the AWS Data Pipeline service.
From S3, bring the file into BigQuery using the BigQuery Data Transfer Service (a minimal sketch follows).
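For the second step, a rough sketch with the BigQuery Data Transfer Service Python client (google-cloud-bigquery-datatransfer) could look like this. The project, dataset, bucket path, and credentials are hypothetical, and the Amazon S3 parameter names are assumptions that should be checked against the Transfer Service documentation:

    from google.cloud import bigquery_datatransfer

    client = bigquery_datatransfer.DataTransferServiceClient()

    project_id = "my-gcp-project"      # hypothetical project
    dataset_id = "my_dataset"          # hypothetical destination dataset

    transfer_config = bigquery_datatransfer.TransferConfig(
        destination_dataset_id=dataset_id,
        display_name="rds-export-from-s3",
        data_source_id="amazon_s3",    # the S3 connector of the Transfer Service
        params={
            # assumed parameter names -- verify against the S3 transfer docs
            "destination_table_name_template": "my_table",
            "data_path": "s3://my-bucket/exports/*.csv",
            "access_key_id": "<AWS_ACCESS_KEY_ID>",
            "secret_access_key": "<AWS_SECRET_ACCESS_KEY>",
            "file_format": "CSV",
        },
        schedule="every 24 hours",
    )

    transfer_config = client.create_transfer_config(
        parent=client.common_project_path(project_id),
        transfer_config=transfer_config,
    )
    print("Created transfer config:", transfer_config.name)

A scheduled query (as in the guide's second part) can then reshape the raw loaded table into its final form inside BigQuery.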

Connecting to Cloud SQL server instance from BigQuery

There is an option to connect to a Cloud SQL MySQL instance from BigQuery. I just want to know how we can connect a Cloud SQL for SQL Server instance to BigQuery.
SQL Server:
There are a bunch of third-party extensions/tools that provide this service. One of them is SSIS Data Flow Source & Destination for Google BigQuery, a Visual Studio extension that connects SQL Server with Google BigQuery data through SSIS workflows:
https://www.cdata.com/drivers/bigquery/ssis/
https://marketplace.visualstudio.com/items?itemName=CDATASOFTWARE.SSISDataFlowSourceDestinationforGoogleBigQuery
Regarding using SQL Server Integration Services to load the data from the on-premises SQL Server to BigQuery, you can take a look at this site. You can also perform ETL from a relational database into BigQuery using Cloud Dataflow; the official documentation details how it can be done, and you might need to use Cloud Storage as an intermediate data sink (see the sketch below).
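As a rough illustration of the Dataflow route, here is a minimal Apache Beam (Python SDK) sketch that reads CSV lines staged in Cloud Storage and writes them to BigQuery. The bucket, project, table, schema, and the naive CSV parsing are all hypothetical placeholders, not a complete ETL job:

    import apache_beam as beam
    from apache_beam.options.pipeline_options import PipelineOptions

    def parse_line(line):
        # deliberately simplistic CSV parsing, for illustration only
        user_id, name = line.split(",", 1)
        return {"id": int(user_id), "name": name}

    options = PipelineOptions(
        runner="DataflowRunner",          # or "DirectRunner" for local testing
        project="my-project",             # hypothetical
        region="us-central1",
        temp_location="gs://my-bucket/tmp",
    )

    with beam.Pipeline(options=options) as p:
        (
            p
            | "Read CSV" >> beam.io.ReadFromText(
                "gs://my-bucket/export/users.csv", skip_header_lines=1)
            | "Parse" >> beam.Map(parse_line)
            | "Write to BigQuery" >> beam.io.WriteToBigQuery(
                "my-project:my_dataset.users",
                schema="id:INTEGER,name:STRING",
                write_disposition=beam.io.BigQueryDisposition.WRITE_TRUNCATE,
                create_disposition=beam.io.BigQueryDisposition.CREATE_IF_NEEDED,
            )
        )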
Cloud SQL:
BigQuery allows you to query data from Cloud SQL by using a federated query. The connection must be created within the same project where your Cloud SQL instance is located. If you want to query the data stored in your Cloud SQL instance from BigQuery in another project, follow the steps listed below:
Enable the BigQuery API and the BigQuery Connection API within your project.
Create a connection to your Cloud SQL instance within the project by following this documentation.
Once you have created the connection, please locate and select it within BigQuery.
Click on the SHARE CONNECTION button and grant permissions to the users that will be using that connection. Please note that the BigQuery Connection User role is the only one needed to use a shared connection; see the query sketch below.
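Once the connection is shared, the federated query itself uses EXTERNAL_QUERY. A minimal sketch with the BigQuery Python client, in which the project, location, connection ID, and table names are all hypothetical:

    from google.cloud import bigquery

    client = bigquery.Client(project="my-analytics-project")   # hypothetical project

    # EXTERNAL_QUERY takes the connection ID ("project.location.connection")
    # and the query to run inside the Cloud SQL instance.
    sql = """
        SELECT *
        FROM EXTERNAL_QUERY(
            'my-data-project.us.my-cloudsql-connection',
            'SELECT id, name, created_at FROM customers;'
        )
    """

    for row in client.query(sql).result():
        print(row["id"], row["name"])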
Additionally, please note that the "Cloud SQL federated queries" feature is in a Beta stage and might change or have limited support (it is not available in certain regions, in which case you need to use one of the supported options mentioned here). Please remember that to use Cloud SQL federated queries in BigQuery, the instances need to have a public IP.
If you are limited, e.g., by region, one good option might be to export the data from Cloud SQL to Cloud Storage as a CSV and then load it into BigQuery (a sketch of the load step follows). If you need to, it is possible to automate this process using Cloud Composer; refer to this article.
Another approach is to extract the information from Cloud SQL (with exports) and import it into BigQuery through load jobs or streaming inserts.
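For the export-and-load path, here is a minimal sketch of the load step with the BigQuery Python client; the bucket, dataset, and table names are hypothetical placeholders:

    from google.cloud import bigquery

    client = bigquery.Client()

    job_config = bigquery.LoadJobConfig(
        source_format=bigquery.SourceFormat.CSV,
        skip_leading_rows=1,   # skip the header row, if the export has one
        autodetect=True,       # let BigQuery infer the schema
    )

    load_job = client.load_table_from_uri(
        "gs://my-bucket/cloudsql-export/customers.csv",   # hypothetical export file
        "my-project.my_dataset.customers",                # hypothetical destination table
        job_config=job_config,
    )
    load_job.result()   # wait for the load job to finish

    table = client.get_table("my-project.my_dataset.customers")
    print("Loaded", table.num_rows, "rows")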
I hope you find the above pieces of information useful.
It is possible, but be warned that the feature is currently in Beta:
https://cloud.google.com/bigquery/docs/cloud-sql-federated-queries

Azure ADF unable to create linked service to an Azure SQL server VM in a different tenancy or subscription

I am trying to create a pipeline to copy some data between Azure SQL databases on different servers, but creating a Linked Service using SQL authentication fails (and gives no helpful information, just a dialog box saying it failed). I think that the server VMs are in different tenancies or different subscriptions (I am not sure of the distinction), so I am guessing that the one I am working in cannot see the one I want the connection to go to. Is that likely, and what needs to be done to make it work? Any advice welcome, including RTFM if you can point me at the right one and it doesn't take weeks to wade through it!
In case anyone hits the same issue: the problem turned out to be the 'encrypted' checkbox in the self-hosted integration runtime (IR). Clearing this flag allowed the IR to see the target database, and the pipeline could then be created with the new connection set to use that IR. @Leon Yue: both databases are Azure SQL instances on Azure PaaS VMs.

My Azure storage unique namespace

I'm trying to move some tables from SQL to Azure Table Storage.
I created an MVC Website with the default authentication. I successfully connected it to my Azure SQL database. Now I want to use the table storage for authentication too, instead of the SQL database.
The problem is, I cannot find my storage account's unique namespace. What and where is that namespace?
Thanks!
Looking at a table URL, for example 'http://myaccountname.table.core.windows.net/mytable', 'myaccountname' is the name of your account. Storage account names must be between 3 and 24 characters in length and may contain numbers and lowercase letters only. The storage account name must be unique across the Azure service. A list of the storage accounts you own, and more information about them, can be found in the Azure Portal.
More information on authentication for tables can be found here and here. Manipulating and authenticating access to your tables are features built into the storage client libraries, which are available in a variety of languages. Since you mention MVC, you might want to check out the .NET storage library.
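For your MVC app the .NET storage library is the natural fit, but as a quick illustration of how the account name (the "namespace") appears in the endpoint, here is a minimal sketch using the Python azure-data-tables package; the account name and key are placeholders:

    from azure.core.credentials import AzureNamedKeyCredential
    from azure.data.tables import TableServiceClient

    account_name = "myaccountname"   # the unique account name / "namespace"
    endpoint = f"https://{account_name}.table.core.windows.net"

    credential = AzureNamedKeyCredential(account_name, "<account-key>")
    service = TableServiceClient(endpoint=endpoint, credential=credential)

    # list the tables in the account to confirm connectivity
    for table in service.list_tables():
        print(table.name)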

Row filtering in SQL Azure Data Sync - is it gone?

I'm going to use the SQL Azure Data Sync service to synchronize a SQL Azure database and an on-premises database (one-way, from Azure to the on-premises db).
But I need to filter the rows to synchronize (based on a column value).
E.g., this tutorial explains how to configure filters, but it refers to the old Windows Azure portal.
A newer tutorial says:
If you also want to filter a column so that only rows with specific values (such as Age >= 65) are synchronized, use the SQL Data Sync portal at Azure and the documentation at Select the Tables, Columns, and Rows to Synchronize to define the data to sync.
However, I can't find any link to edit filters in the new Azure portal.
Is this feature gone?
That feature was not ported to the new UI. Data Sync is a preview product with an unclear future; you may want to re-evaluate using it.