Can't update dataset in Power BI Service - google-bigquery

I see this message 30 minutes after clicking the Refresh now button (on the Dataset tab):
Something went wrong
There was an error when processing the data in the dataset.
Please try again later or contact support. If you contact support, please provide these details.
Data source error: {"error":{"code":"ModelRefresh_ShortMessage_ProcessingError","pbi.error":{"code":"ModelRefresh_ShortMessage_ProcessingError","parameters":{},"details":[{"code":"Message","detail":{"type":1,"value":"Timeout expired. The timeout period elapsed prior to completion of the operation."}}],"exceptionCulprit":1}}}
Cluster URI: WABI-WEST-EUROPE-redirect.analysis.windows.net
Activity ID: 6465a7a0-8ee3-4f9b-bfae-d26800ff83b4
Request ID: 2a3851e0-5a38-3b96-783c-d0f5e1b464cb
Time: 2020-03-19 11:16:05Z
ODBC: ERROR [HY000] [Microsoft][BigQuery] (100) Error interacting with REST API: Operation timed out after 6.0 hours. Consider reducing the amount of work performed by your operation so that it can complete within this limit
Power BI tries to refresh the dataset for 30 minutes, then shows this error.
I use only a Google BigQuery connection in my dataset.
I refreshed my data in BigQuery directly and everything is fine; it refreshes in about 3-4 minutes.
I contacted Power BI support. The support team told me the problem is with the Google driver, and that they can't help me until Google updates the driver.
Could someone help me?

There are a few steps you need to check. First, try refreshing the credentials for BigQuery in Power BI and reducing the size of the data to under 1 GB.
I also recommend using the Simba ODBC BigQuery connector, which should eliminate the issue; please follow this documentation.
There is already a bug report about this on the issue tracker. I will let you know if there are any updates.
I hope it helps.
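To sanity-check the 1 GB guidance before the next refresh, a minimal sketch with the google-cloud-bigquery Python client can list the table sizes behind the dataset (assuming the client is installed and credentials are configured; the project and dataset names below are placeholders):

# List table sizes in a BigQuery dataset to verify the data pulled into
# Power BI stays under the ~1 GB guidance above. "my-project" and
# "my_dataset" are placeholders.
from google.cloud import bigquery

client = bigquery.Client(project="my-project")

for item in client.list_tables("my_dataset"):
    table = client.get_table(item.reference)  # fetch full table metadata
    size_gb = table.num_bytes / 1024 ** 3
    flag = "  <-- over 1 GB" if size_gb > 1 else ""
    print(f"{table.table_id}: {size_gb:.2f} GB{flag}")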

Related

Pentaho Data Integration - Transformation is stuck, not able to fetch data from REST API

I hope you are doing well. I am a newbie in Pentaho and need help troubleshooting an issue.
Flow of the transformation:
Fetching 9000 ID numbers from the previous step without any issue.
Requesting data from an API for the 9000 IDs - "Rest Client".
Injecting it into MongoDB.
Transformation snapshot: I have attached a snapshot of the transformation (not the actual one, only the main steps).
After fetching some amount of data from the Rest Client, I believe it stops sending the next request, which is why the transformation gets stuck and never finishes.
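For illustration outside PDI, the same fetch-and-load loop with an explicit per-request timeout (so a hung call fails fast instead of stalling the stream) would look roughly like the sketch below; the endpoint URL, connection string, and collection names are hypothetical placeholders:

import requests
from pymongo import MongoClient

API_URL = "https://api.example.com/records/{id}"  # hypothetical endpoint
ids = list(range(9000))                           # stand-in for the 9000 IDs

collection = MongoClient("mongodb://localhost:27017")["mydb"]["records"]

with requests.Session() as session:
    for i, record_id in enumerate(ids, start=1):
        try:
            # timeout=(connect, read): without a read timeout, one dead
            # connection can block forever and look like a "stuck" step
            resp = session.get(API_URL.format(id=record_id), timeout=(5, 30))
            resp.raise_for_status()
            collection.insert_one(resp.json())
        except requests.RequestException as exc:
            print(f"id {record_id} failed: {exc}")  # log and continue
        if i % 2000 == 0:
            print(f"{i} records processed")  # mirrors the 2k break-down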
The steps I have taken to troubleshoot this issue, which did not work:
I broke the transformation down to retrieve 2k records at a time with the help of the "block until this step" operation.
Closely monitored the CPU and memory of the server: CPU maxes at around 40%, occasionally touching 90% for a few seconds; memory stays below 80%.
Not sure if this is a cache issue, a PDI issue, or something else?
Please help me resolve this issue; any suggestion will be much appreciated.
Thanks and Regards
Aslam Shaikh

Why am I missing events in my Log Analytics query?

In the query below, I can only see data from hours 3 up to 8; all data for other timeframes is missing.
The data is being generated by the Azure SQL Log Analytics configuration, where I can't see anything missing.
Any ideas?
Thanks a lot!
I don't think the events are missing for the other times, since you can see the events between hours 3 and 8.
You should check whether there are actually logs at the other times; Azure Log Analytics does not discard your logs.
According to MS support, I'm hitting the daily cap limit, and the free tier wasn't available in my region.
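For reference, the daily-cap theory can be checked from the workspace itself. A minimal sketch using the azure-monitor-query Python package (assuming it and azure-identity are installed; the workspace ID is a placeholder) sums the billable ingestion of the last 24 hours from the Usage table:

from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

WORKSPACE_ID = "<your-workspace-id>"  # placeholder

client = LogsQueryClient(DefaultAzureCredential())
# Usage.Quantity is reported in MB; divide by 1024 for GB
query = """
Usage
| where IsBillable == true
| summarize IngestedGB = sum(Quantity) / 1024.0
"""
response = client.query_workspace(WORKSPACE_ID, query, timespan=timedelta(days=1))
for row in response.tables[0].rows:
    print(f"Billable ingestion in the last 24h: {row[0]:.2f} GB")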

Why is the Power BI HTTP response stating I've hit the maximum number of Dataflow refreshes?

I've created a Power Automate connector which allows a user to create an SQL-triggered refresh sequence that cascades all the way from the Dataflow refresh through to the Dataset refresh, eliminating the need for schedules. It seemed to work well when testing yesterday, until I hit the 8th refresh and it started failing. However, when I looked at it today, the first two refreshes of the day failed, and I am still getting this error even though it has only fired twice today. I have set up 7 refreshes in Power BI, but it hasn't run all of them yet, so it shouldn't be returning this message. I tried switching the refresh off on the dataflow, but still to no avail. Has anyone encountered this issue before?
{
  "error": {
    "code": "DailyDataflowRefreshLimitExceeded",
    "message": "Skipping dataflow refresh as number of auto refreshes in last 24 hours (8) exceeded allowed limit 8"
  }
}
UPDATE: I've just tried the same flow on a new Power BI workspace for the first time and got the same error.
You have definitely hit the limit of 8 refreshes in 24 hours. You will have to wait a full 24 hours to perform the next set of refreshes.
Short answer: to lift this limitation, you may have to buy a Premium license (48 refreshes per day).
Blog stating the same
Blog from Power BI
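To confirm how many refresh attempts the service itself has recorded, a hedged sketch against the Power BI REST API's "Get Dataflow Transactions" call could count the transactions of the last 24 hours (the IDs and token are placeholders, and the startTime field name is assumed from the documented response shape):

from datetime import datetime, timedelta, timezone

import requests

GROUP_ID = "<workspace-id>"    # placeholder
DATAFLOW_ID = "<dataflow-id>"  # placeholder
TOKEN = "<aad-access-token>"   # placeholder

url = (f"https://api.powerbi.com/v1.0/myorg/groups/{GROUP_ID}"
       f"/dataflows/{DATAFLOW_ID}/transactions")
resp = requests.get(url, headers={"Authorization": f"Bearer {TOKEN}"})
resp.raise_for_status()

def parse_utc(ts):
    # truncate to whole seconds to sidestep varying fractional-second formats
    return datetime.fromisoformat(ts[:19]).replace(tzinfo=timezone.utc)

cutoff = datetime.now(timezone.utc) - timedelta(hours=24)
recent = [t for t in resp.json()["value"] if parse_utc(t["startTime"]) >= cutoff]
print(f"Refresh transactions recorded in the last 24h: {len(recent)}")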

Connection::SQLGetInfoW: [Simba][ODBC] (11180) SQLGetInfo property not found: 1750

This is a setup where Microsoft's Power BI is the frontend for data presentation to end users. Behind it there's an on-premises PBI gateway which connects to BigQuery via the Magnitude Simba ODBC driver for BigQuery. Two days ago, after always working flawlessly, the PBI data refresh started failing due to timeouts.
The BigQuery ODBC driver's debug log shows the two errors below, in hundreds of rows per refresh:
SimbaODBCDriverforGoogleBigQuery_connection_9.log:Aug 29 15:21:54.154 ERROR 544 Connection::SQLGetInfoW: [Simba][ODBC] (11180) SQLGetInfo property not found: 180
SimbaODBCDriverforGoogleBigQuery_connection_9.log:Aug 29 15:22:49.427 ERROR 8176 Connection::SQLGetInfoW: [Simba][ODBC] (11180) SQLGetInfo property not found: 1750
And only one occurrence per refresh of this:
SimbaODBCDriverforGoogleBigQuery_connection_6.log:Aug 29 16:56:15.102 ERROR 6704 BigQueryAPIClient::GetResponseCheckErrors: HTTP error: Error encountered during execution. Retrying may solve the problem.
After some intensive web searching, it looks like this might be related to 'wrong' coding (either wrong data types or strings that are too big), but nothing conclusive.
Other, smaller, refreshes to the same place work without issues.
Do we have any knowledge base or reference for such cryptic error messages? Any advice on how to troubleshoot this?
Already tried:
Searching Google;
Updating the Magnitude Simba ODBC driver for BigQuery to the latest version;
Updating the PBI Gateway to the latest version;
Rebooting the gateway server.
This issue occurs when the ODBC driver tries to pull the data in streams, which goes via port 444. You either need to enable port 444 for optimal performance, or disable streams so that the data is pulled using pagination (not recommended for huge data).
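A quick way to test the port 444 path from the gateway server is a plain socket probe; a minimal sketch (the host below is a placeholder; use whichever endpoint your driver log shows it contacting):

import socket

HOST = "googleapis.com"  # placeholder; check your driver log for the real endpoint
PORT = 444

try:
    with socket.create_connection((HOST, PORT), timeout=5):
        print(f"Outbound port {PORT} to {HOST} is reachable")
except OSError as exc:
    print(f"Cannot reach {HOST}:{PORT} -> {exc}; consider disabling streams")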

Support for Google BigQuery JDBC Driver using KNIME

I get an error when using the following JDBC driver to retrieve BigQuery data in KNIME.
The error message appears in the Database Connection Table Reader node as follows:
Execute failed: " Simba BigQueryJDBCDriver 100033" Error getting job status.
However, this only occurs after consecutively running a couple of similar data flows in KNIME that use the BigQuery driver.
Google searches turned up no extra info. I already updated the driver and KNIME to the latest versions, and also tried rerunning the flow on a different system, with no success.
Are there quotas/limits attached to using this specific driver?
Hope someone is able to help!
I found this issue tracker; it seems that you opened it, and there's already interaction with BigQuery's engineering team. I suggest following the interaction there and subscribing to it to stay updated, as you'll receive e-mails regarding its progress.
Regarding your question about limits for the driver: the quotas and limits that you usually have in BigQuery apply to the Simba driver too (i.e., the concurrent queries limit, execution time limit, maximum response size, etc.).
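If you want to check the BigQuery side directly, a minimal sketch with the google-cloud-bigquery Python client (assuming credentials are configured; the project name is a placeholder) lists recent finished jobs and surfaces their error reasons, which would show whether a quota or limit was hit:

from google.cloud import bigquery

client = bigquery.Client(project="my-project")  # placeholder project

# Finished jobs carry an error_result dict when they failed
for job in client.list_jobs(max_results=50, state_filter="done"):
    if job.error_result:
        print(f"{job.job_id}: {job.error_result['reason']} - "
              f"{job.error_result['message']}")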
Hope it helps.
Just discovered that a new query limit was set at the company's group level; some internal miscommunication. Sorry for bothering you, and thanks for the feedback!