I have been running several queries in Google Sheets through the OWOX BI BigQuery add-on for the last few weeks, but it suddenly stopped working and the error it shows is: "Cells limit will be exceeded by xx". If I run the same query in a new spreadsheet, it works.
I really need help on this thanks!
Related
I have linked my Google Analytics account with BigQuery and am trying to create a scheduled query in BigQuery to get my desired output on a regular basis, but while saving the scheduled query I got a "Schedule query error", which I could not understand.
I want to run the query multiple times so that I can get an updated, live report of my data.
Link shared for your reference: click here
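For what it's worth, a scheduled query can also be created programmatically through the BigQuery Data Transfer Service instead of the console. Below is a minimal sketch only; the project ID, dataset, destination table, and query string are placeholders, not values from the question:
from google.cloud import bigquery_datatransfer

client = bigquery_datatransfer.DataTransferServiceClient()

project_id = "my-project"            # placeholder project
dataset_id = "my_reporting_dataset"  # placeholder dataset; must already exist

transfer_config = bigquery_datatransfer.TransferConfig(
    destination_dataset_id=dataset_id,
    display_name="Daily GA report refresh",
    data_source_id="scheduled_query",
    params={
        "query": "SELECT CURRENT_DATE() AS run_date",  # placeholder query
        "destination_table_name_template": "daily_report",
        "write_disposition": "WRITE_TRUNCATE",
    },
    schedule="every 24 hours",
)

# Register the scheduled query in the given project
transfer_config = client.create_transfer_config(
    parent=client.common_project_path(project_id),
    transfer_config=transfer_config,
)
print("Created scheduled query:", transfer_config.name)
If the console keeps throwing "Schedule query error", creating the schedule this way at least surfaces the underlying API error message directly.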
I'm currently migrating around 200 tables in BigQuery (BQ) from one dataset (FROM_DATASET) to another (TO_DATASET). Each of these tables has a _TABLE_SUFFIX corresponding to a date (I have three years of data for each table). Each suffix typically contains between 5 GB and 80 GB of data.
I'm doing this with a Python script that asks BQ, for each table and each suffix, to run the following query:
-- example table=T_SOME_TABLE, suffix=20190915
CREATE OR REPLACE TABLE `my-project.TO_DATASET.T_SOME_TABLE_20190915`
COPY `my-project.FROM_DATASET.T_SOME_TABLE_20190915`
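For illustration, the loop looks roughly like this (a sketch only, assuming the google-cloud-bigquery client; the table and suffix lists are placeholders, while the CREATE ... COPY statement is the one shown above):
from google.cloud import bigquery

client = bigquery.Client(project="my-project")

TABLES = ["T_SOME_TABLE"]            # placeholder table list
SUFFIXES = ["20190915", "20190916"]  # placeholder date suffixes

for table in TABLES:
    for suffix in SUFFIXES:
        sql = f"""
        CREATE OR REPLACE TABLE `my-project.TO_DATASET.{table}_{suffix}`
        COPY `my-project.FROM_DATASET.{table}_{suffix}`
        """
        job = client.query(sql)
        job.result()  # wait for the copy to finish; raises on failure
        print(f"Copied {table}_{suffix}")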
Everything works except for three tables (and all their suffixes), where the copy job fails for every _TABLE_SUFFIX with this error:
An internal error occurred and the request could not be completed. This is usually caused by a transient issue. Retrying the job with back-off as described in the BigQuery SLA should solve the problem: https://cloud.google.com/bigquery/sla. If the error continues to occur please contact support at https://cloud.google.com/support. Error: 4893854
Retrying the job after some time actually works, but of course it slows down the process. Does anyone have an idea of what the problem might be?
Thanks.
It turned out that those three problematic tables were legacy ones with a large number of columns. In particular, the BQ GUI shows this warning for two of them:
"Schema and preview are not displayed because the table has too many
columns and may cause the BigQuery console to become unresponsive"
This was probably the issue.
In the end, I managed to migrate everything by implementing a backoff mechanism to retry failed jobs.
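For reference, a simple exponential-backoff wrapper around the copy query could look like this (a sketch only, using the google-cloud-bigquery client; the attempt count and delays are arbitrary placeholders, not the values from the original script):
import time
from google.cloud import bigquery
from google.api_core.exceptions import InternalServerError

client = bigquery.Client(project="my-project")

def copy_with_backoff(sql, max_attempts=5, base_delay=10):
    """Run a copy query, retrying transient internal errors with backoff."""
    for attempt in range(1, max_attempts + 1):
        try:
            client.query(sql).result()
            return
        except InternalServerError as exc:  # reason "internalError" maps here
            if attempt == max_attempts:
                raise
            delay = base_delay * 2 ** (attempt - 1)
            print(f"Attempt {attempt} failed ({exc}); retrying in {delay}s")
            time.sleep(delay)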
I constantly get this error when I use a Python script to execute queries.
However, when I copy and execute the same queries directly in the BigQuery web console, I don't get this error.
Any thoughts on what it might be or what to check/change?
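If it helps narrow things down, running the query through the Python client and printing the job's full error details (rather than just the exception summary) could look like this (a sketch; the query string is a placeholder):
from google.cloud import bigquery

client = bigquery.Client()
job = client.query("SELECT 1 AS x")  # placeholder query

try:
    for row in job.result():
        print(dict(row))
except Exception:
    # job.errors holds the full errors[] collection returned by the API
    print("error_result:", job.error_result)
    print("errors:", job.errors)
    raise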
I am trying to load some data from Google Sheets to BigQuery. It's 5 sheets, and I am using Google Apps Script to do it.
I am trying to load the very same data I was loading last week; the problem is that all the upload jobs are now failing, for every single sheet.
I am getting:
Error while reading data, error message: CSV table encountered too many errors, giving up. Rows: 1280; errors: 1. Please look into the errors[] collection for more details.
When I look in the CLI, I see: -bash: syntax error near unexpected token `newline'
The problem is that my document has zero newline characters. I am also getting the same "1280" row number for all 5 sheets I am trying to upload, and NONE of them has 1280 rows.
The second column is the name of the sheet being uploaded. They are all suddenly getting the same error.
BigQuery is quite powerful, but this random behavior is totally killing the experience.
Any ideas what could be wrong?
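One way to see the errors[] collection that the message refers to is to fetch the failed load job by its job ID, for example with the Python client rather than Apps Script (a sketch; the job ID and location are placeholders):
from google.cloud import bigquery

client = bigquery.Client()

# Placeholder job ID; take the real one from the failed load job
job = client.get_job("bqjob_r1234_abcd", location="US")
print("state:", job.state)
print("error_result:", job.error_result)
for err in job.errors or []:
    # Each entry gives the row/column detail behind the summary message
    print(err)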
I am trying to work with the GitHub data that has been uploaded to Google BigQuery. I ran a few queries which generated a lot of rows, e.g.:
SELECT actor_attributes_login, repository_watchers, repository_forks
FROM [githubarchive:github.timeline]
WHERE repository_watchers > 2 AND REGEXP_MATCH(repository_created_at, '2012-')
ORDER BY actor_attributes_login;
The result had more than 220,000 rows. When I attempted to download it as CSV, it said:
Download Unavailable
This result set contains too many rows for direct download. Please use "Save as Table" and then export the resulting table.
When I tried to do it with "Save as Table", I got the following error:
Access Denied: Job publicdata:job_c2338ba91e494b21970854e13cdc4b2a: RUN_JOB
Also, I ran queries where I limited the number of rows to 200 or so; even in those cases I got the same error when saving as a table. However, I was able to download the results as CSV.
Any solution to this problem?
@Anerudh You don't have access to modify the publicdata samples dataset. Create a brand-new dataset, and try to save your query results to a new table in that dataset.
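A rough sketch of that flow with the google-cloud-bigquery Python client, in case it helps (the dataset, table, and bucket names are placeholders; the query is the legacy SQL from the question):
from google.cloud import bigquery

client = bigquery.Client()  # runs jobs in your own project, not publicdata

# 1. Create a dataset you own
dataset = client.create_dataset("my_results", exists_ok=True)  # placeholder name

# 2. Save the query results into a table in that dataset
destination = dataset.table("github_watchers")  # placeholder table name
job_config = bigquery.QueryJobConfig(
    destination=destination,
    allow_large_results=True,
    use_legacy_sql=True,  # the query in the question is legacy SQL
    write_disposition="WRITE_TRUNCATE",
)
query = """
SELECT actor_attributes_login, repository_watchers, repository_forks
FROM [githubarchive:github.timeline]
WHERE repository_watchers > 2 AND REGEXP_MATCH(repository_created_at, '2012-')
ORDER BY actor_attributes_login
"""
client.query(query, job_config=job_config).result()

# 3. Export the saved table to Cloud Storage, then download the CSV from there
extract_job = client.extract_table(
    destination, "gs://my-bucket/github_watchers-*.csv"  # placeholder bucket
)
extract_job.result()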