Lake Database converts into a SQL Database in Azure Synapse Analytics

Forgive me, I am a newbie here and I cannot post images just yet.
Lately I have been having a few issues with a Lake database that was created in Azure Synapse Analytics using Azure Synapse Link for Dataverse in Power Apps.
Dynamics 365 developers have been adding new columns to the Dataverse database, but those columns are not showing up or working when executing queries in SSMS or Synapse Studio.
Therefore I unlinked the Synapse Link in Power Apps and relinked it with some tables.
When I unlinked, the container and the Lake database were deleted correctly, but the same database still appears in the SQL databases section in Azure Synapse Studio. I tried to delete it, but I get an error "Operation DROP DATABASE is not allowed for a replicated database".
(Screenshots: before unlink and after unlink.)
I have created the Lake database again using Synapse Link from Power Apps, but it seems the table metadata is not updating.
Can anyone help me with the above issues, please?

an error "Operation DROP DATABASE is not allowed for a replicated database".
This error is returned when you try to modify or drop a database that is shared with (replicated from) an Apache Spark pool. Databases replicated from Apache Spark pools, including lake databases, are read-only: you can't create new objects in, or drop, a replicated database by using T-SQL.
Create a separate database and reference the synchronized tables by using three-part names and cross-database queries.
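For example, a minimal sketch of that pattern, run on the serverless SQL endpoint (the database, table, and column names below are placeholders, not your actual Dataverse names):

    -- 'lakedb' stands in for the replicated (read-only) lake database.
    -- Create a separate, writable database for your own objects.
    CREATE DATABASE reporting;
    GO
    USE reporting;
    GO
    -- Reference the synchronized table with a three-part name;
    -- the view lives in 'reporting', the data stays in 'lakedb'.
    CREATE VIEW dbo.vw_account AS
    SELECT accountid, name
    FROM lakedb.dbo.account;
    GO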
Dynamics 365 developers have been adding new columns to the Dataverse database, but those columns are not showing up or working when executing queries in SSMS or Synapse Studio.
This issue could have many causes, and without knowing the actual one we can't troubleshoot it.
According to the official Microsoft documentation on known limitations and issues with Azure Synapse Link, if a column's data type isn't supported by Azure Synapse Analytics, that column won't be replicated to Azure Synapse Analytics.
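As a quick first check, you can ask the replicated database which columns it actually received; a minimal sketch, assuming a placeholder lake database named dataverse_db and a table named account:

    -- List the columns Synapse sees for one replicated table, so you can
    -- compare against what the Dynamics 365 developers added.
    SELECT COLUMN_NAME, DATA_TYPE
    FROM dataverse_db.INFORMATION_SCHEMA.COLUMNS
    WHERE TABLE_NAME = 'account'
    ORDER BY ORDINAL_POSITION;

If a newly added column is missing from this list, check its Dataverse data type against the documented limitations before anything else.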

Related

Synapse Lake database view not available in SQL Pool?

I am currently exploring Spark notebooks in Synapse for data transformation instead of data flows, but the lake database capabilities are a little confusing.
I created a lake database, an external table (catalog?), and a view using a notebook in the Synapse workspace. The view is visible in the Synapse UI and I can query it.
But the view is not available when connecting via the SQL pool using Management Studio or Data Studio, for example. Is only table metadata shared, or am I missing something? I am having trouble finding documentation on this.
But the view is not available when connecting via the SQL pool using Management Studio or Data Studio, for example. Is this intended, or am I missing something?
The Serverless SQL Pool and the Spark Pool share a catalog, but the Dedicated SQL Pool has its own.
Spark views are session-scoped (temp views) or application-scoped (global views) and do not live in the shared catalog. That is the reason you don't see the views.
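To illustrate the difference, here is a minimal sketch for a SQL cell in a Synapse Spark notebook (the lakedb database and the table names are placeholders):

    -- A table is persisted in the shared catalog, so the serverless
    -- SQL pool will see it after metadata sync.
    CREATE TABLE lakedb.sales_clean
    USING PARQUET
    AS SELECT * FROM lakedb.sales_raw;

    -- A temp view exists only in this Spark session and is never
    -- synchronized to the serverless SQL pool.
    CREATE OR REPLACE TEMP VIEW v_sales AS
    SELECT * FROM lakedb.sales_clean;

If you need the view on the SQL side, one workaround is to persist the data as a table from Spark and then create the view in a separate serverless SQL database over the synchronized table.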

Azure Synapse Lake Database Not Appearing in Built-in Serverless Pool List

I have created a new Azure Lake Database using the following procedure.
The Lake Database is called TestLakeDB.
However, when I check the list of databases available in 'Use database', TestLakeDB doesn't appear.
Any thoughts?
Thanks for the valuable discussion. Posting the conversation as an answer to help other community members who face similar issues.
When we create a Lake database after connecting to GitHub, it won't show up in 'Use database', because it was created in GitHub mode.
To make the Lake database show up, create it in Synapse live mode and then connect to GitHub. Now we can see that the database named Lake_Database1, which was created in Synapse live mode, appears in 'Use database'.

DBeaver connection to Azure Synapse Analytics

I would like to query data from Azure Synapse Analytics with DBeaver.
I am using the Community version of DBeaver.
On the machine on which I am running DBeaver, I have installed the MS SQL Server ODBC driver.
I have created the connection to Azure Synapse Analytics, and it successfully connects to the server/instance.
In the 'Database Navigator', when I do a drop-down on my connection, I see the different SQL pools/databases that I have created.
When I do a drop-down on each database, I only see the schemas 'dbo', 'INFORMATION_SCHEMA', and 'sys', but I do not see the schemas that I have created.
When I do a drop-down on each schema, I see tables, views, indexes, procedures, and data types. When I do a drop-down on 'tables' or 'views', I do not see anything.
Has anyone tried querying data from Synapse Analytics with DBeaver?
Has anyone also experienced the same issue of not being able to see all the schemas or to read any tables?
Thank you for your help!
The easiest method is to choose to connect to 'Azure SQL Server'. Then, on the next screen, all you need to do is fill in the Host, Username, and Password. The Host is the value you get from your Synapse workspace for the serverless SQL endpoint.
Then it should connect successfully. 👍
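Once connected, a quick sanity check is to list the schemas and tables the endpoint can actually see in the current database; a minimal sketch (remember that user objects are only visible inside the database you created them in, not in master):

    -- Verify that the schemas and tables you created are visible in
    -- the database you are actually connected to.
    SELECT s.name AS schema_name, t.name AS table_name
    FROM sys.tables AS t
    JOIN sys.schemas AS s ON s.schema_id = t.schema_id
    ORDER BY s.name, t.name;

If this returns your objects but DBeaver's tree does not show them, the problem is in the client configuration rather than in Synapse.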

Can we migrate a database from Azure SQL Database directly to Azure PostgreSQL Database?

Is there a way to directly migrate a database in Azure SQL Database to an Azure PostgreSQL database (Hyperscale (Citus))?
I have looked into the Azure migration services, but they do not support this particular migration route.
I have an approach in mind but don't know if it will work:
We can make a backup of the Azure SQL database in the cloud itself and then load that backup into the Azure PostgreSQL database.
But I do not know where to put the backup. In Azure Blob Storage, or somewhere else?
First, you could try the tutorial that #ffffff01 provided for you.
There is another way to achieve that: Azure Data Factory can migrate the database/data from Azure SQL Database to an Azure PostgreSQL database directly.
See the tutorials below:
Copy data to and from Azure Database for PostgreSQL by using Azure Data Factory
Copy and transform data in Azure SQL Database by using Azure Data Factory
Create an Azure SQL Database dataset as the source and an Azure Database for PostgreSQL dataset as the sink.
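One practical note: the copy activity may not create the target table for you, so it is common to pre-create it in PostgreSQL by hand. A purely illustrative sketch, assuming a hypothetical dbo.orders table on the Azure SQL side:

    -- Hypothetical target table in Azure Database for PostgreSQL,
    -- mirroring dbo.orders in Azure SQL Database. T-SQL types are
    -- mapped by hand (NVARCHAR -> text, DATETIME2 -> timestamp,
    -- BIT -> boolean, and so on).
    CREATE TABLE orders (
        order_id    integer PRIMARY KEY,
        customer_id integer NOT NULL,
        order_date  timestamp NOT NULL,
        total       numeric(18,2)
    );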
Hope this helps.

Access Azure Data Lake Analytics Tables from SQL Server PolyBase

I need to export a multi-terabyte dataset processed via Azure Data Lake Analytics (ADLA) into a SQL Server database.
Based on my research so far, I know that I can write the result of the ADLA output to a Data Lake Store or WASB using built-in outputters, and then read the output data from SQL Server using PolyBase.
However, creating the result of the ADLA processing as an ADLA table seems pretty enticing to us. It is a clean solution: no files to manage, multiple readers, built-in partitioning, distribution keys, and the potential for allowing other processes to access the tables.
If we use ADLA tables, can I access them via SQL PolyBase? If not, is there any way to access the files underlying the ADLA tables directly from PolyBase?
I know that I can probably do this using ADF, but at this point I want to avoid ADF to the extent possible, both to minimize costs and to keep the process simple.
Unfortunately, PolyBase support for ADLA tables is still on the roadmap and not yet available. Please file a feature request through the SQL Data Warehouse UserVoice page.
The suggested workaround is to produce the output as CSV in ADLA, then create the partitioned and distributed table in SQL DW and use PolyBase to read the data and fill the SQL DW managed table, as sketched below.
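A minimal sketch of the SQL DW side of that workaround; every name, path, and column here is a placeholder, and the database-scoped credential is assumed to exist already:

    -- External data source pointing at the Data Lake Store account
    -- where the ADLA job wrote its CSV output.
    CREATE EXTERNAL DATA SOURCE adls_src
    WITH (
        TYPE = HADOOP,
        LOCATION = 'adl://myadls.azuredatalakestore.net',
        CREDENTIAL = adls_credential
    );

    CREATE EXTERNAL FILE FORMAT csv_format
    WITH (
        FORMAT_TYPE = DELIMITEDTEXT,
        FORMAT_OPTIONS (FIELD_TERMINATOR = ',', FIRST_ROW = 2)
    );

    -- External table over the CSV files produced by ADLA.
    CREATE EXTERNAL TABLE dbo.events_ext
    (
        event_id   BIGINT,
        event_date DATE,
        payload    NVARCHAR(4000)
    )
    WITH (
        LOCATION = '/output/events/',
        DATA_SOURCE = adls_src,
        FILE_FORMAT = csv_format
    );

    -- Fill a distributed, partitioned managed table with CTAS.
    CREATE TABLE dbo.events
    WITH (
        DISTRIBUTION = HASH(event_id),
        CLUSTERED COLUMNSTORE INDEX,
        PARTITION (event_date RANGE RIGHT FOR VALUES ('2017-01-01', '2018-01-01'))
    )
    AS
    SELECT event_id, event_date, payload
    FROM dbo.events_ext;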