BQ error [80324028] in querying Materialized View - google-bigquery

We are getting the following error when querying a freshly created materialized view. The materialized view is also listed in the INFORMATION_SCHEMA tables. We have retried several times but keep getting the same error.
An internal error occurred and the request could not be completed. This is usually caused by a transient issue. Retrying the job with back-off as described in the BigQuery SLA should solve the problem: https://cloud.google.com/bigquery/sla. If the error continues to occur please contact support at https://cloud.google.com/support. Error: 80324028
We have retried querying the view multiple times with no luck. We have also tried creating and querying a version with plain columns (no aggregation). A standard view created at the same time could be queried without issue.
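For what it's worth, the error text itself recommends retrying with back-off before anything else. Below is a minimal sketch of what that could look like with the google-cloud-bigquery Python client; the project, dataset, and view names are placeholders. If the error survives a retry loop like this on a newly created materialized view, it is not transient and contacting support is the right next step.

```python
# A minimal sketch of retrying with back-off using the google-cloud-bigquery
# Python client; the project, dataset, and view names are placeholders.
from google.api_core.exceptions import InternalServerError
from google.api_core.retry import Retry, if_exception_type
from google.cloud import bigquery

client = bigquery.Client(project="my-project")

# Retry internal (500-class) errors with exponential back-off for up to 10 minutes.
backoff = Retry(
    predicate=if_exception_type(InternalServerError),
    initial=1.0,     # first wait: 1 second
    multiplier=2.0,  # double the wait on each attempt
    maximum=60.0,    # cap individual waits at 60 seconds
    deadline=600.0,  # give up after 10 minutes overall
)

sql = "SELECT * FROM `my-project.my_dataset.my_materialized_view` LIMIT 10"
job = client.query(sql, retry=backoff)  # job submission retried on internal errors
rows = job.result(retry=backoff)        # result polling retried as well

for row in rows:
    print(dict(row))
```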

Related

SSAS : Errors in the metadata file - error trying to update cube in a system table 'DataSource'

A few days ago, we started getting an error when processing one of our cubes. The cube is processed by a job, which returns the error:
The following system error occurred: Invalid data. Failed to decrypt sensitive data. Possibly the encryption key does not match or is inaccessible due to improper service account change. The current operation was canceled because another operation on the transaction failed. (Microsoft.AnalysisServices.Core)
In this instance we have other cubes working correctly.
We verified that this cube is the only one whose data source has no credentials. We have already tried to add the credentials, both by refreshing them and via a script. The first approach completes without errors but changes nothing; the second returns the error:
Failed to encrypt sensitive data. Possibly the encryption key is inaccessible due to improper service account change. An error occurred while trying to update a row in a system table 'DataSource' in the metadata database.
Anyone have a similar error?
Thanks in advance.

Avoid Deadlock During SSRS Reports Deployment

I wonder if anyone has any suggestion or experience with the same scenario.
We have one server that we use for our SSRS reports. We deploy to multiple folders in SSRS, i.e. Site_1, Site_2, Site_3 ... Site_26.
In each site we deploy roughly 800+ reports. The reports are the same for Site_1 through Site_26 (except where we skip a site).
We use Azure DevOps with the PowerShell ReportingServicesTools module to deploy the reports.
When we start the deployment, several sites fail due to a deadlock with the error below (the report and process ID are random and never the same):
##[error]Failed to create item Report.rdl : Failed to create catalog item C:\azagent\A9_work\r5\a\SSRS Reports\Reports\Report.rdl : Exception calling "CreateCatalogItem" with "7" argument(s): "System.Web.Services.Protocols.SoapException: An error occurred within the report server database. This may be due to a connection failure, timeout or low disk condition within the database. ---> Microsoft.ReportingServices.Diagnostics.Utilities.ReportServerStorageException: An error occurred within the report server database. This may be due to a connection failure, timeout or low disk condition within the database. ---> System.Data.SqlClient.SqlException: Transaction (Process ID 100) was deadlocked on lock resources with another process and has been chosen as the deadlock victim. Rerun the transaction.
The error is not related to low disk space etc.; we have tested this to death, and it still occurs with just two sites on a monster server. The error is a transaction deadlock.
The only way we can successfully deploy the reports is to deploy the sites consecutively, one after the other. However, due to time constraints and business requirements this is not an option.
We have run all the PSSDiag captures etc. and found that the error is caused by the stored procedure "FindObjectsNonRecursive".
We nearly resolved it by adding the (NOLOCK) hint, but that turned out to be only temporary and we are back where we started. Microsoft advised that they would not change the procedure, and 18 months down the line they still have not been able to give us a fix or a workaround.
I would appreciate any feedback from anyone on how you overcame this problem if you had it.
Thank you for your time.
Did you try retrying, as the error suggests? Deadlocks are timing-dependent, so a retried deployment should eventually succeed.
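For illustration only, since the deployment itself runs through PowerShell and ReportingServicesTools rather than Python: the retry idea from the comment above amounts to wrapping each deployment call in a back-off loop that re-runs it when it fails as a deadlock victim. The helper names below are hypothetical.

```python
# Illustration only: re-run a report deployment that fails as a deadlock victim,
# backing off with jitter so parallel site deployments don't collide again
# immediately on the same stored procedure.
import random
import time

def deploy_with_retry(deploy_one_report, report_path, max_attempts=5):
    """deploy_one_report is a hypothetical callable that raises on failure."""
    for attempt in range(1, max_attempts + 1):
        try:
            return deploy_one_report(report_path)
        except RuntimeError as exc:  # stand-in for the SOAP/SqlException wrapper
            if "deadlock" not in str(exc).lower() or attempt == max_attempts:
                raise
            # Exponential back-off plus jitter before the next attempt.
            time.sleep(min(2 ** attempt, 60) + random.uniform(0, 1))
```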

Oracle: Use of Materialized view for avoiding Socket Read Timeout

We have a Spring application. We generally have to execute several SQL queries against a view exposed to us by the client.
In one scenario our queries work fine, but a count(*) over the same queries causes problems. It returns:
org.springframework.dao.RecoverableDataAccessException - StatementCallback; IO Error: Socket read timed out; nested exception is java.sql.SQLRecoverableException: IO Error: Socket read timed out]
We asked the client to increase the oracle.jdbc.ReadTimeout property.
Instead, he has offered to expose a materialized view.
Can a materialized view help in situations like these (where count queries lead to timeouts)?
How can materialized views be leveraged to improve query performance?
A materialized view is a great solution to your problem. Materialized views store the results of queries in a table, and can significantly improve performance. Your client seems to be doing you a huge favor, as they will be responsible for maintaining the objects that support the query.
The only potential downside depends on how they implement the materialized view. If they create a fast-refresh materialized view, it will automatically store the correct result after every change to the data. But there are many limitations to fast-refresh materialized views, and most likely your client will provide a complete refresh materialized view, which must have a schedule. If they provide a complete refresh materialized view, make sure the application can work with old data.
(Of course, the database timeout settings may still be inappropriate. There could be a bad profile, a bad sqlnet.ora parameter, a bad setting for resource manager, an ORA-600 bug, etc. You might want to find out the specific reason why your query timed out. Not that I think the client is trying to hide things from you; a terrible DBA would have just said, "tough luck, fix your stupid query". The fact that you're being offered a materialized view is a good sign that they are really trying to solve the problem.)
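If it helps while evaluating the offer, the refresh mode, refresh method, and last refresh time are visible in the ALL_MVIEWS data dictionary view, so you can check how stale the data actually is. A small sketch using the python-oracledb driver, purely for illustration (the application itself is Spring/Java, and the connection details and materialized view name are placeholders):

```python
# Hedged sketch: check how the client's materialized view refreshes and when it
# was last refreshed. Connection details and the view name are placeholders.
import oracledb

conn = oracledb.connect(user="app_user", password="***", dsn="dbhost/service")
with conn.cursor() as cur:
    cur.execute(
        """
        SELECT mview_name, refresh_mode, refresh_method, last_refresh_date, staleness
        FROM   all_mviews
        WHERE  mview_name = :name
        """,
        name="CLIENT_SUMMARY_MV",  # hypothetical materialized view name
    )
    for name, mode, method, last_refresh, staleness in cur:
        # REFRESH_METHOD = 'COMPLETE' plus an old LAST_REFRESH_DATE means the
        # application must tolerate data as old as the refresh schedule allows.
        print(name, mode, method, last_refresh, staleness)
```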

BigQuery python client library dropping data on insert_rows

I'm using the Python API to write to BigQuery -- I've had success previously, but I'm still fairly new to the BigQuery platform.
I recently updated a table schema to include some new nested records. After creating this new table, I'm seeing significant portions of data not making it to BigQuery.
However, some of the data is coming through. In a single write statement, my code will try to send through a handful of rows. Some of the rows make it and some do not, but no errors are being thrown from the BigQuery endpoint.
I have access to the stackdriver logs for this project and there are no errors or warnings indicating that a write would have failed. I'm not streaming the data -- using the BigQuery client library to call the API endpoint (I saw other answers that state issues with streaming data to a newly created table).
Has anyone else had issues with the BigQuery API? I haven't found any documentation about a delay in accessing the data (I found the opposite -- it's supposed to be near real-time, right?), and I'm not sure what's causing the issue at this point.
Any help or reference would be greatly appreciated.
Edit: Apparently the API is the streaming API -- missed on my part.
Edit 2: This issue is related. However, I've been writing to the table every 5 minutes for about 24 hours and am still seeing missing data. I'm curious whether writing to a BigQuery table within 10 minutes of its creation puts you in a permanent state of losing data, or whether it should be expected to catch everything after the initial 10 minutes from creation.
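One thing worth double-checking, given that insert_rows() goes through the streaming API: it reports per-row failures through its return value rather than by raising an exception, so silently dropped rows can hide there. A minimal sketch, with placeholder table and field names:

```python
# Minimal sketch: insert_rows() uses the streaming API and returns per-row
# errors instead of raising, so always inspect the return value.
from google.cloud import bigquery

client = bigquery.Client()
table = client.get_table("my-project.my_dataset.my_table")  # fetches the schema

rows = [
    {"event_id": "abc-123", "payload": {"nested_field": 42}},  # example nested record
]

errors = client.insert_rows(table, rows)  # returns [] when every row was accepted
if errors:
    # Each entry names the failed row and the reason, e.g. a mismatch against
    # the new nested fields in the updated schema.
    for err in errors:
        print("row", err["index"], "failed:", err["errors"])
```

Rows rejected this way never reach the table, which would be consistent with seeing partial data while no errors or warnings show up in the logs.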

Bigquery Backend Error when exporting results to a table

For some time the query has been running perfectly, but lately this error has started appearing: "Backend Error".
I know that my query is huge and takes about 300 seconds to execute, but I imagine this is a BigQuery bug, so I wonder why this error is happening.
The error first appeared when I was executing some other queries where I just wanted the results and was not exporting them.
So I started writing the results to a table, hoping that BigQuery would then be able to complete the query.
Here is an image that shows the error:
I looked up your job in the BigQuery job database, and it completed successfully after 160 seconds.
BigQuery queries are fundamentally asynchronous. That is, when you run a query, it runs as a named job within the BigQuery service. Since the original call may time out, the usual best practice is to poll for completion using the jobs.getQueryResults() API. My guess is that this is the API call that actually failed.
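In code, that pattern looks roughly like this with today's google-cloud-bigquery Python client, which wraps jobs.insert and the subsequent polling for you (the table name is a placeholder; the method names above refer to the underlying REST API):

```python
# Rough sketch of the submit-then-poll pattern; the table name is a placeholder.
from google.cloud import bigquery

client = bigquery.Client()

# Equivalent of jobs.insert: starts the query and returns immediately with a job id.
job = client.query("SELECT COUNT(*) AS n FROM `my-project.my_dataset.my_table`")
print("started job:", job.job_id)

# Equivalent of polling jobs.getQueryResults: blocks until the job finishes,
# retrying the status/result calls on transient backend errors.
rows = job.result(timeout=600)  # wait up to 10 minutes
for row in rows:
    print(row.n)
```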
We had reports of an elevated number of Backend Errors yesterday, and we're still investigating. However, these don't appear to be actual query failures; instead, they are failures in getting the status of queries or fetching their results, which should go away on retry.
How did you run the query? Did you use the BigQuery web UI? If you are using the API, did you call the bigquery.jobs.insert() API or the bigquery.jobs.query() API?