I am going through the following article:
https://learn.microsoft.com/en-us/azure/iot-hub/iot-hub-dev-guide-sas?tabs=node#sas-tokens
... and I understand that I have the following options to generate a per-device SAS token:
Azure CLI: https://learn.microsoft.com/en-us/cli/azure/iot/hub?view=azure-cli-latest#az-iot-hub-generate-sas-token
Azure IoT tools for Visual Studio Code: https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-tools
Other than the above two options, are there any other ways to generate per-device SAS connection strings?
Is there any way to generate per-device SAS tokens in Azure Portal?
Have a look at the Azure IoT Explorer.
Other than the above two options, are there any other ways to generate per-device SAS connection strings?
In addition to Roman's suggestion of Azure IoT Explorer, you can also write your own SAS token generator. The documentation you linked also includes code samples in four programming languages; you can use those to create SAS tokens for a device as well. Use {hub name}.azure-devices.net/devices/{device name} for the resourceUri and don't include a policyName.
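For illustration, a minimal sketch of such a generator in Python, following the same pattern as the samples in the linked documentation (the hub name, device ID, and key below are placeholders):

```python
import time
import hmac
import base64
from hashlib import sha256
from urllib.parse import quote_plus, urlencode

def generate_sas_token(resource_uri, device_key, ttl_seconds=3600):
    # Expiry as a Unix timestamp
    expiry = int(time.time()) + ttl_seconds
    # Sign "<url-encoded resource uri>\n<expiry>" with the device's base64 key
    string_to_sign = f"{quote_plus(resource_uri)}\n{expiry}"
    signature = base64.b64encode(
        hmac.new(base64.b64decode(device_key),
                 string_to_sign.encode("utf-8"), sha256).digest()).decode("utf-8")
    # No 'skn' (policy name) field for a per-device token
    return "SharedAccessSignature " + urlencode(
        {"sr": resource_uri, "sig": signature, "se": str(expiry)})

# Hypothetical hub and device names
print(generate_sas_token("myhub.azure-devices.net/devices/mydevice",
                         "<device primary key, base64>"))
```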
Is there any way to generate per-device SAS tokens in Azure Portal?
No (I mean, you could open Cloud Shell in the Portal and use the Azure CLI from there, but I don't think that's what you're asking). There is no visual interface for this in the Azure Portal.
It is possible to generate SAS tokens in the Azure portal:
In your IoT hub, navigate to Devices in the Device Management section.
Either add a new device or select an existing one.
You can view the SAS keys and connection strings for the device.
The Manage keys menu lets you regenerate and swap keys.
Typically you would generate the SAS token on the device itself, at least in a production environment, because a SAS token has a limited lifetime; once it expires, you would need to go back and give the device a new one.
You don't mention what language you might be using, but if you are using C you will notice that the examples cited above do not include that language. I have a C demo here, though, should it be of interest to you: https://github.com/markrad/IoTSASTokenGenerate
Related
I can create an Azure Data Lake database with pre-built tables using Azure Synapse database templates from the Synapse Studio UI, but is there a way to use these templates programmatically? So far my research has not turned up a command, API, or SDK for this. Perhaps I could create the database and tables via the UI and then generate the associated Spark SQL creation scripts, but I don't see a way to do that either. Does anyone have any ideas on how to do either of these?
You can create the data lake storage, tables, and data inserts programmatically using the Azure SDKs, but these templates were made available precisely to take over that series of manual tasks: using them saves you the time and effort of creating an environment and sample data for your development.
Therefore, asking to deploy these templates programmatically runs counter to the whole concept of templates. If you want to create these resources without the templates, you can use the Azure SDKs.
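If you do go the do-it-yourself route instead of the templates, a minimal sketch from a Synapse Spark notebook might look like this (the database, table, and columns are hypothetical, and `spark` is the session the notebook provides):

```python
# Hypothetical lake database and table created with Spark SQL instead of a database template
spark.sql("CREATE DATABASE IF NOT EXISTS retail_demo")
spark.sql("""
    CREATE TABLE IF NOT EXISTS retail_demo.customer (
        customer_id BIGINT,
        customer_name STRING,
        created_date DATE
    ) USING PARQUET
""")

# Insert one sample row so the table shows up with data in Synapse Studio
spark.sql("INSERT INTO retail_demo.customer VALUES (1, 'Contoso', current_date())")
```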
I'm trying to find a way from within Azure SQL to either 1) enumerate the members of an Azure Active Directory security group or 2) check whether a user login is a member of a security group. I've found various articles about doing it from a domain-joined standalone SQL Server installation, but not from Azure SQL. Most of the samples for the standalone installation use system sprocs like xp_cmdshell, which don't exist in Azure SQL. I know I can create an Azure Function or Logic App to sync users to a table, but I'd like to avoid using an external process to do this if possible.
@Kalyan Chanumolu-MSFT's comment should be very helpful to you. This scenario is not supported today.
You can try to use his suggestion.
You will have to talk to Microsoft Graph API from an intermediate like an Azure function to relay the data to Azure SQL Database.
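For illustration, a minimal sketch of that intermediate step in Python, assuming an app registration with Microsoft Graph application permissions (the tenant, client, and group IDs are placeholders, and writing the results into Azure SQL is left out):

```python
import msal
import requests

# Hypothetical tenant, app registration, and security group IDs
TENANT_ID = "<tenant-id>"
CLIENT_ID = "<app-client-id>"
CLIENT_SECRET = "<app-client-secret>"
GROUP_ID = "<security-group-object-id>"

# Acquire an app-only token for Microsoft Graph
app = msal.ConfidentialClientApplication(
    CLIENT_ID,
    authority=f"https://login.microsoftonline.com/{TENANT_ID}",
    client_credential=CLIENT_SECRET,
)
token = app.acquire_token_for_client(scopes=["https://graph.microsoft.com/.default"])

# Enumerate the group's members (paging omitted for brevity)
resp = requests.get(
    f"https://graph.microsoft.com/v1.0/groups/{GROUP_ID}/members",
    headers={"Authorization": f"Bearer {token['access_token']}"},
)
resp.raise_for_status()
for member in resp.json().get("value", []):
    print(member.get("userPrincipalName"))
```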
You can also raise a support ticket to confirm this, and you can put forward your suggestion as product feedback.
We would like to use the Podio API Key to directly connect to Tableau and have the data refreshed at a cadence set in Tableau. Is this possible?
Yes, connecting to data via an API is possible and there are a couple of ways to do it:
Option #1: Web Data Connector
A WDC is a hosted web application built with JavaScript that connects to an API, converts the data to a JSON format, and passes the data to Tableau. You'll require a webserver to host your WDC and JavaScript skills to write it. Once set up, anyone in your org can just grab the link and use it in Desktop. With WDCs, since the data connection is made when the end-user is requesting the WDC, you can build in customizations for your users. (Ex: users can add filter parameters or authenticate with their own user/pass to only get what they have access to). WDC connections are extracts and can be refreshed on Tableau Server and Online. If you're using Tableau Online you'll need to use Bridge to auto-refresh.
Option #2: Hyper API
The Hyper API allows you to create, modify, and update extract (.hyper) files that you can then publish to Tableau Server/Online on a regular cadence. It's available for Python, Java, .NET, and C++ so you will need skills in one of those languages. I suggest Python as we have the most samples for it. You'll also need a server where you can run the extract refresh and publish scripts on a schedule. With the Hyper API you are creating a single extract for everyone to connect to. Once published to Tableau Server/Online, end users can just connect to this data source directly without having to do any input but this also means the connection can't be customized per user or use case.
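For illustration, a minimal Hyper API sketch in Python that builds a small extract (the schema and rows below stand in for whatever you pull from the Podio API; publishing the resulting .hyper file to Tableau Server/Online, e.g. with tableauserverclient, is a separate step):

```python
from tableauhyperapi import (HyperProcess, Telemetry, Connection, CreateMode,
                             TableDefinition, TableName, SqlType, Inserter)

with HyperProcess(telemetry=Telemetry.DO_NOT_SEND_USAGE_DATA_TO_TABLEAU) as hyper:
    with Connection(endpoint=hyper.endpoint,
                    database="podio_extract.hyper",
                    create_mode=CreateMode.CREATE_AND_REPLACE) as connection:
        # Hypothetical schema for items pulled from the Podio API
        items = TableDefinition(
            table_name=TableName("Extract", "items"),
            columns=[
                TableDefinition.Column("item_id", SqlType.big_int()),
                TableDefinition.Column("title", SqlType.text()),
            ])
        connection.catalog.create_schema("Extract")
        connection.catalog.create_table(items)

        # Insert the rows fetched from the API (hard-coded here for brevity)
        with Inserter(connection, items) as inserter:
            inserter.add_rows(rows=[(1, "First item"), (2, "Second item")])
            inserter.execute()
```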
Option #3: Use a 3rd-party connector
If building your own connector doesn't appeal to you, there are also plenty of services out there you can pay for that can bring your data into Tableau. Ex: tray.io, dataddo, and skyvia are a few I found after a quick Google search.
There is an option to connect to a Cloud SQL for MySQL instance from BigQuery. I just wanted to know how we can connect a Cloud SQL for SQL Server instance to BigQuery.
SQL Server:
There are a bunch of third-party extensions/tools that provide this service. One of them is SSIS Data Flow Source & Destination for Google BigQuery, which is a Visual Studio extension that connects SQL Server with Google BigQuery data through SSIS workflows:
https://www.cdata.com/drivers/bigquery/ssis/
https://marketplace.visualstudio.com/items?itemName=CDATASOFTWARE.SSISDataFlowSourceDestinationforGoogleBigQuery
Regarding the use of SQL Server Integration Services to load the data from an on-premises SQL Server into BigQuery, you can take a look at this site. You can also perform ETL from a relational database into BigQuery using Cloud Dataflow; the official documentation details how it can be done, though you might need to use Cloud Storage as an intermediate data sink.
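As a rough illustration of the Dataflow route, a small Apache Beam pipeline in Python that reads a CSV export staged in Cloud Storage and writes it to BigQuery (the bucket path, dataset, table, and columns are placeholders):

```python
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

def parse_row(line):
    # Assumes a simple two-column CSV export: id,name
    row_id, name = line.split(",", 1)
    return {"id": int(row_id), "name": name}

# Pass --runner=DataflowRunner, --project, --region, etc. to run on Dataflow
options = PipelineOptions()

with beam.Pipeline(options=options) as p:
    (
        p
        | "Read CSV from GCS" >> beam.io.ReadFromText(
            "gs://my-bucket/sqlserver-export/customers.csv", skip_header_lines=1)
        | "Parse rows" >> beam.Map(parse_row)
        | "Write to BigQuery" >> beam.io.WriteToBigQuery(
            "my-project:my_dataset.customers",
            schema="id:INTEGER,name:STRING",
            create_disposition=beam.io.BigQueryDisposition.CREATE_IF_NEEDED,
            write_disposition=beam.io.BigQueryDisposition.WRITE_TRUNCATE)
    )
```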
Cloud SQL:
BigQuery allows you to query data in Cloud SQL by using federated queries. The connection must be created within the same project where your Cloud SQL instance is located. If you want to query the data stored in your Cloud SQL instance from BigQuery in another project, please follow the steps listed below (a short query example follows the steps):
Enable the BigQuery API and the BigQuery connection API within your project.
Create a connection to your Cloud SQL instance within the project by following this documentation.
Once you have created the connection, please locate and select it within BigQuery.
Click on the SHARE CONNECTION button and grant permissions to the users that will use that connection. Please note that the BigQuery Connection User role is the only one needed to use a shared connection.
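Once the connection is shared, the federated query itself uses EXTERNAL_QUERY. For illustration, a minimal sketch using the BigQuery Python client (the project, location, connection, and table names are placeholders):

```python
from google.cloud import bigquery

client = bigquery.Client(project="my-project")  # hypothetical project

# EXTERNAL_QUERY takes the shared connection ID and the SQL to push down to Cloud SQL
sql = """
SELECT *
FROM EXTERNAL_QUERY(
  'my-project.us.my-cloudsql-connection',
  'SELECT id, name FROM dbo.customers;')
"""

for row in client.query(sql).result():
    print(dict(row))
```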
Additionally, please note that the "Cloud SQL federated queries" feature is in a Beta stage and might change or have limited support (it is not available for certain regions, in which case you are required to use one of the supported options mentioned here). Please remember that to use Cloud SQL federated queries in BigQuery, the instances need to have a public IP.
If you are limited, e.g. by region, one good option might be exporting the data from Cloud SQL to Cloud Storage as a CSV and then loading it into BigQuery. If you need to, you can automate this process using Cloud Composer; refer to this article.
Another approach is to extract the data from Cloud SQL (with exports) and import it into BigQuery through load jobs or streaming inserts.
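For the export-and-load route, a minimal sketch with the BigQuery Python client (the bucket path, dataset, and table are placeholders, and the CSV is assumed to have already been exported from Cloud SQL):

```python
from google.cloud import bigquery

client = bigquery.Client(project="my-project")  # hypothetical project

job_config = bigquery.LoadJobConfig(
    source_format=bigquery.SourceFormat.CSV,
    skip_leading_rows=1,
    autodetect=True,  # infer the schema from the CSV header and values
)

load_job = client.load_table_from_uri(
    "gs://my-bucket/cloudsql-export/customers.csv",  # hypothetical export location
    "my_dataset.customers",
    job_config=job_config,
)
load_job.result()  # wait for the load job to complete
print(client.get_table("my_dataset.customers").num_rows, "rows loaded")
```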
I hope you find the above pieces of information useful.
It is possible, but be warned that the feature is currently in Beta:
https://cloud.google.com/bigquery/docs/cloud-sql-federated-queries
I am trying to create a pipeline to copy some data between Azure SQL databases on different servers, but creating a Linked Service using SQL authentication fails (and gives no helpful information, just a dialog box saying it failed). I think that the server VMs are in different tenancies or different subscriptions (I am not sure of the distinction), so I am guessing that the one I am working in cannot see the one I want the connection to go to. Is that likely, and what needs to be done to make it work? Any advice welcome, including RTFM if you can point me at the right one and it doesn't take weeks to wade through it!
In case anyone hits the same issue: the problem turned out to be the 'encrypted' checkbox in the self-hosted integration runtime (IR). Clearing this flag allowed the IR to see the target database, and the pipeline could then be created with the new connection set to use that IR. @Leon Yue: both databases are Azure SQL instances on Azure PaaS VMs.