Error when running OPTIMIZE on Delta Lake table - azure-synapse

When I run OPTIMIZE in a Synapse Analytics workspace notebook, I get a "/ by zero" error for some tables.
The exact query I am running is below:
OPTIMIZE default.file_logs
Error: / by zero
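
As a first diagnostic step, a minimal sketch (the table name default.file_logs comes from the query above; whether this surfaces the actual cause of the error is an assumption) is to inspect the table's metadata before running OPTIMIZE, since the file count and size statistics are the obvious inputs a division could be performed on:
-- Sketch: inspect Delta table metadata before OPTIMIZE.
-- numFiles or sizeInBytes of 0 would be worth noting when a "/ by zero" appears.
DESCRIBE DETAIL default.file_logs;

-- Recent operations on the table (writes, vacuums, prior OPTIMIZE attempts).
DESCRIBE HISTORY default.file_logs;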

Related

Query data lake from Azure SQL database

I'm just finding my way around Azure, trying to build a modern data warehouse. One thing I haven't been able to figure out is how to query my data lake from an Azure SQL database.
Something similar to the following works in Azure Synapse (note that the long-term plan is to remove Synapse for cost reasons):
SELECT TOP 100 *
FROM OPENROWSET(
    BULK 'https://storageaccount.blob.core.windows.net/container/folder/2022/09/03/filename.parquet',
    SINGLE_BLOB
) AS [result]
But I get the following error running this from an Azure SQL database (in the Azure portal, using the query editor):
Failed to execute the query. Error: Cannot bulk load because the file "https://storageaccount.blob.core.windows.net/container/folder/2022/09/03/filename.parquet" could not be opened. Operating system error code 6(The handle is invalid.).
I also tried the code below after searching on the Internet:
CREATE EXTERNAL DATA SOURCE pocBlobStorage
WITH ( TYPE = BLOB_STORAGE,
LOCATION = 'https://storageaccount.blob.core.windows.net/container/folder/2022/09/03',
CREDENTIAL= sqlblob);
-- Query remote file
SELECT *
FROM OPENROWSET(BULK 'filename.parquet',
DATA_SOURCE = 'pocBlobStorage',
SINGLE_CLOB
--FORMATFILE='currency.fmt',
--FIRSTROW=2
--, FORMATFILE_DATA_SOURCE = 'pocBlobStorage'
) as D
I tried various combinations of the formatting options, but couldn't get anything to work.
The current error I'm getting is: Failed to execute query. Error: Referenced external data source "pocBlobStorage" not found.
I'm wondering if I need to do something to enable the Azure SQL database to access my data lake. For example, I haven't configured any credential called 'sqlblob' as referenced in my last code segment, but I'm not sure where to do this (for example, something similar to creating a linked service in Azure Data Factory).
So how do I query my data lake directly from my Azure SQL database? Is the issue in my query, or do I need to configure access first, and if so, how?
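For reference, a minimal sketch of the setup the second attempt seems to assume: creating a database master key, a database scoped credential, and the external data source inside the Azure SQL database before OPENROWSET references it. The credential name sqlblob and the location match the query above; the password and SAS token are placeholders:
-- Sketch (assumed setup); run inside the Azure SQL database you query from.
-- A master key is required before a database scoped credential can be created.
CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<strong password>';

-- Credential holding a SAS token for the storage container (placeholder secret).
CREATE DATABASE SCOPED CREDENTIAL sqlblob
WITH IDENTITY = 'SHARED ACCESS SIGNATURE',
     SECRET = '<SAS token without the leading ?>';

-- External data source of type BLOB_STORAGE, which is what Azure SQL Database's
-- OPENROWSET(BULK ...) expects for remote files.
CREATE EXTERNAL DATA SOURCE pocBlobStorage
WITH ( TYPE = BLOB_STORAGE,
       LOCATION = 'https://storageaccount.blob.core.windows.net/container/folder/2022/09/03',
       CREDENTIAL = sqlblob );
Note that the "Referenced external data source not found" error usually means the data source was created in a different database (or never created) than the one the query runs in; also, as far as I'm aware, SINGLE_BLOB/SINGLE_CLOB only return the raw file contents, so they would not parse a Parquet file into rows the way Synapse serverless does.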

GBQ Data load SSIS ERROR [HY000] [Simba][BigQuery] (131) Unable to authenticate with Google BigQuery Storage API. Check your account permissions

I'm creating an SSIS package using an ADO.NET Source from Google BigQuery with an OLE DB Destination table. When the query result is under 100,000 rows there is no issue and the package executes successfully. The issue occurs when the result is above 100,000 rows. Is there any way to fix this so there is no limit on how many rows can be loaded?

Why am I getting an error when scheduling a query on Google BigQuery?

When trying to schedule a query in BQ, I am getting the following error:
Error code 3 : Query error: Not found: Dataset was not found in location EU at [2:1]
Is this a permissions issue?
This sounds like a case of the scheduled query being configured to run in a different region than either the referenced tables, or the destination table of the query.
Put another way, BigQuery requires a consistent location for reading and writing, and does not allow a query in location A to write results in location B.
https://cloud.google.com/bigquery/docs/scheduling-queries has some additional information about this.
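If it helps to verify where each side of the query actually lives, here is a sketch using INFORMATION_SCHEMA (assuming the datasets are expected to be in the EU multi-region; adjust the region qualifier as needed):
-- Sketch: list datasets visible in the EU location and their exact location value.
-- Repeating this with a different qualifier (e.g. region-us) can reveal datasets
-- that ended up somewhere other than expected.
SELECT schema_name, location
FROM `region-eu`.INFORMATION_SCHEMA.SCHEMATA;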

Dataset error message - PdwManagedToNativeInteropException

We currently have a pipeline running in our production environment with an activity that copies data from an on-premises SQL database to an Azure SQL database. This pipeline is replicated in the dev and QA environments but doesn't fail in those environments. I wanted to get a bit more insight into what this error means.
Message=A database operation failed with the following error: 'PdwManagedToNativeInteropException ErrorNumber: 46724,
"PDW" is short for Parallel Data Warehouse and suggests you might be using the MPP product Azure SQL Data Warehouse, rather than a SQL DB as you mentioned. Is that correct?
This error typically occurs when a value overflows the defined size of a column's data type (for example, varchar or int).
Try increasing the size of the affected columns' data types and rerun the pipeline.
I reproduced the error and fixed it this way in my Data Factory.
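As a hedged illustration of that fix (the table and column names here are hypothetical; pick the columns that actually overflow on the Azure SQL side):
-- Sketch: widen a destination column that is too small for the incoming data.
-- dbo.StagingTable, [Description] and [TotalAmount] are placeholder names.
ALTER TABLE dbo.StagingTable
ALTER COLUMN [Description] NVARCHAR(4000) NULL;

-- Likewise for a numeric column whose values overflow INT.
ALTER TABLE dbo.StagingTable
ALTER COLUMN [TotalAmount] BIGINT NULL;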

weird issue with Hive 0.12 in BigInsights 3.0

I have this simple query, which works fine in Hive 0.8 on IBM BigInsights 2.0:
SELECT * FROM patient WHERE hr > 50 LIMIT 5
However, when I run this query using Hive 0.12 on BigInsights 3.0, it runs forever and returns no results.
The scenario is the same for the following query and many others:
INSERT OVERWRITE DIRECTORY '/Hospitals/dir' SELECT p.patient_id FROM
patient1 p WHERE p.readingdate='2014-07-17'
If I exclude the WHERE clause, everything works fine in both versions.
Any idea what might be wrong with Hive 0.12 or BigInsights 3.0 when a WHERE clause is included in the query?
When you use a WHERE clause in the Hive query, Hive runs a map-reduce job to return the results. That's why the query usually takes longer: without the WHERE clause, Hive can simply return the content of the file that represents the table in HDFS.
You should check the status of the map-reduce job that is triggered by your query to find out if an error happened. You can do that by going to the Application Status tab in the BigInsights web console and clicking on Jobs, or by going to the job tracker web interface. If you see any failed tasks for that job, check the logs of the particular task to find out what error occurred. After fixing the problem, run the query again.
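If it's useful, a quick way to see that difference is to compare the plans for the two query shapes (a sketch; the exact behaviour also depends on Hive settings such as hive.fetch.task.conversion):
-- With no WHERE clause Hive can often answer with a simple fetch of the
-- table's files; with a WHERE clause a map-reduce stage appears in the plan.
EXPLAIN SELECT * FROM patient LIMIT 5;
EXPLAIN SELECT * FROM patient WHERE hr > 50 LIMIT 5;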