How to read parquet file from synapse dedicated SQL pool - azure-synapse

I'm trying to load a Parquet file into a dedicated SQL pool table using the COPY command. I'm able to do the same operation for gzip, CSV, and similar files, but Parquet seems to require some additional settings.
Please note I'm not using a Spark cluster in Synapse or the serverless SQL pool, as they both have ready-to-use facilities for Parquet. I'm explicitly looking at the dedicated SQL pool.
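For reference, the COPY statement I'm attempting looks roughly like the sketch below; the storage account, container, path, table name, and credential type are placeholders for my actual setup:

-- Minimal sketch: load a Parquet file from storage into a dedicated SQL pool table.
-- Storage account, container, path, table name, and credential type are placeholders.
COPY INTO dbo.StagingSales
FROM 'https://mystorageaccount.blob.core.windows.net/mycontainer/sales/*.parquet'
WITH (
    FILE_TYPE = 'PARQUET',
    CREDENTIAL = (IDENTITY = 'Managed Identity')
    -- Field/row terminators and FIRSTROW don't apply to Parquet; the target
    -- table's columns need to line up with the Parquet schema.
);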

Related

Can't Access Azure Synapse Spark Pool Databases on SSMS

Since I started using Azure Synapse Analytics, I created a Spark pool cluster, then on the Spark pool cluster I created databases and tables using PySpark on top of Parquet files in Azure Data Lake Storage Gen2.
I used to be able to access my Spark databases/Parquet tables through SSMS using the serverless SQL endpoint, but now I can no longer see my Spark databases through the serverless SQL endpoint in SSMS. My Spark databases are still accessible through Azure Data Studio, just not through SSMS. Nothing has been deployed or altered on my side. Can you help resolve the issue? I would like to be able to access my Spark databases through SSMS.
Sql Serverless Endpoint
Azure Synapse Database
If your Spark database is built on top of Parquet files, as you said, the databases should sync to external tables in the serverless SQL pool just fine, and you should be able to see the synced SQL external tables in SSMS as well. Check this link for more info about metadata synchronization.
If everything mentioned above checks out, then I'd suggest you navigate to Help + Support in the Azure portal and file a support ticket with the details of your problem so the engineering team can take a look and see whether there is an issue with your workspace.
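As a quick sanity check, you can connect to the serverless SQL endpoint and confirm the synced objects are visible; the database and table names in this sketch are hypothetical:

-- Run against the serverless SQL endpoint; database and table names are hypothetical.
SELECT name FROM sys.databases;                  -- the synced Spark database should appear here

USE my_spark_db;                                 -- hypothetical name of the synced Spark database
SELECT name, location FROM sys.external_tables;  -- Parquet-backed tables sync as external tables

SELECT TOP 10 * FROM dbo.my_parquet_table;       -- hypothetical synced table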

Create Amazon S3 target and load data from Oracle sources using Informatica powercenter

I need information about how to load data into S3 from Oracle with Informatica PowerCenter 10.2. I started creating a mapping, but I don't know how to create a target file for Amazon S3 or configure the connection to S3 buckets.
I found that I should create Amazon S3 data objects in the Developer tool, which requires PowerExchange for Amazon S3.
I worked on another requirement to load data into Redshift, but there we used an ODBC connection.
Can anyone give me more structured details about how to create a target for S3, configure the connection, and set the file size?

Azure SQL - bulk insert from Azure files and not blobs

I have a Ruby app in an Azure container for which I have mounted Azure storage. The app uploads a few files to the mounted drive, which need to be picked up by Azure SQL for bulk insert and processing. According to this article https://github.com/Azure/app-service-linux-docs/blob/master/BringYourOwnStorage/mounting_azure_blob.md, mounting Blob Storage is read-only. I could use Azure Files for mounting instead, but Azure SQL doesn't give any option to bulk insert directly from Azure Files. So I'm stuck between Azure Files and blobs; please help me out.
Azure SQL Database only supports reading from Azure Blob Storage.
File Storage is not supported.
Ref: BULK INSERT (Transact-SQL)
I would suggest you choose Blob Storage.
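As a rough sketch of how that can look with an external data source pointing at Blob Storage (the SAS secret, URLs, file path, and object names below are placeholders, not your actual setup):

-- Rough sketch; requires a database master key, and the SAS secret, URL,
-- and object names below are placeholders.
CREATE DATABASE SCOPED CREDENTIAL BlobCredential
WITH IDENTITY = 'SHARED ACCESS SIGNATURE',
     SECRET = '<sas-token-without-leading-question-mark>';

CREATE EXTERNAL DATA SOURCE MyBlobSource
WITH (
    TYPE = BLOB_STORAGE,
    LOCATION = 'https://mystorageaccount.blob.core.windows.net/uploads',
    CREDENTIAL = BlobCredential
);

BULK INSERT dbo.StagingTable
FROM 'incoming/data.csv'
WITH (
    DATA_SOURCE = 'MyBlobSource',
    FORMAT = 'CSV',
    FIRSTROW = 2
);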
HTH.

How to write sqlcmd results directly to Azure Storage using Azure PowerShell?

Current story:
We are moving our overall BI solution fully to Azure cloud services: building a new Azure DW and loading data from an Azure DB. Currently, Azure DW doesn't support linked servers or elastic query (these are only supported in Azure DB). Due to price, we cannot use Data Factory or an instance of SSIS. We can't use bcp because we don't have a local directory to hold the file between loads.
Is it possible to use Azure PowerShell with sqlcmd to write the results of a query directly to Azure Storage, without having to write to a file in a local directory in between?
Are there other options that aren't mentioned above?
Thank you for any input.
The current Azure Storage PowerShell cmdlet (Set-AzureStorageBlobContent) only supports uploading a blob from a local file.
The Azure Storage Client Library (https://github.com/Azure/azure-storage-net) supports uploading a blob from a stream; could you try developing your own application with the Azure Storage Client Library?
If your data is large, you can also try https://github.com/Azure/azure-storage-net-data-movement/, which has better performance when uploading large blobs.

Importing XML files to Azure SQL Database

I have a large number of XML files that I transfer via FTP to an Azure website folder on a daily basis. I currently use C# to transfer the data to Azure SQL Server tables. However, it is extremely slow.
Is there a way I can run an Azure SQL job to bulk import these files, and if so, how do I access the files in the web app's folder?
I know how to do this on a standard SQL Server with XML files residing on a share drive, but I am unsure how to do this in Azure.
Currently, we do not support any T-SQL interface to read files from a blob store or container, so you have to push the data from outside of SQL Server.
One option is to use Azure Automation to run your code periodically or on a schedule. See the post below on how to use Azure Automation:
http://azure.microsoft.com/en-us/documentation/articles/automation-manage-sql-database/
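If it helps, once your external process has pushed the raw XML into the database, the shredding itself can stay in T-SQL. This is only a minimal sketch; the staging table, element names, and target columns are made up for illustration:

-- Hypothetical staging table that the external job inserts raw XML into.
CREATE TABLE dbo.XmlStaging (
    Id       int IDENTITY(1,1) PRIMARY KEY,
    Payload  xml NOT NULL,
    LoadedAt datetime2 DEFAULT SYSUTCDATETIME()
);

-- Shred the XML into a relational table; the /Orders/Order structure and
-- attribute names are made up for illustration.
INSERT INTO dbo.Orders (OrderId, CustomerId, Amount)
SELECT
    n.value('@Id', 'int'),
    n.value('@CustomerId', 'int'),
    n.value('@Amount', 'decimal(18,2)')
FROM dbo.XmlStaging AS s
CROSS APPLY s.Payload.nodes('/Orders/Order') AS t(n);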