Unable to Save Results of Query to a New Table in BigQuery - google-bigquery

I am currently not able to save query results to a new table in BigQuery using the BigQuery console. I was able to do this 2 weeks ago, but now it doesn't work for me or anyone in my organization. I don't get an error; the 'loading' wheel simply keeps indicating that it's loading, and it eventually times out.
Has anyone experienced this? I thought it was a general BigQuery issue, but there's no evidence of others complaining, or a general bug.
[Image of what occurs when I try to export results to a BigQuery table]

Me too. I have been unable to export query results to a BQ table since yesterday. I thought it was just me, but now I know everyone is affected. I think it's a bug that needs to be fixed, quickly!
LATEST: Go to Query Settings and change the processing location to somewhere else, and you should be able to save the query results. I think the default is the US; I changed mine to Singapore. Try a few locations and see which one works for you.
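For anyone doing this programmatically rather than in the console, a minimal sketch of the same workaround with the Python client follows (the project, dataset, table, and location values are placeholders, not the ones from my setup):

```python
from google.cloud import bigquery

client = bigquery.Client()

# Write the query results to an explicit destination table.
job_config = bigquery.QueryJobConfig(
    destination=bigquery.TableReference.from_string(
        "my-project.my_dataset.query_results"
    )
)

# Pin the processing location explicitly (e.g. asia-southeast1 for Singapore)
# instead of relying on the default.
job = client.query(
    "SELECT * FROM `my-project.my_dataset.source_table`",
    job_config=job_config,
    location="asia-southeast1",
)
job.result()  # wait for the query to finish
```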

I have the same problem. I'm the GCP project owner and my BigQuery quota usage is at almost 0%. I've noticed this problem since the day before yesterday and it's very annoying. Currently I have to export results into a file (CSV or JSON) in order to import them later from that same file. I hope Google fixes this bug soon.
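A rough sketch of that stopgap with the Python client, in case it helps someone (all names are placeholders, and the CSV round trip loses type information, so treat this as a temporary measure only):

```python
import csv
from google.cloud import bigquery

client = bigquery.Client()

# 1. Run the query and dump the rows to a local CSV file.
rows = client.query("SELECT id, name FROM `my-project.my_dataset.source`").result()
with open("results.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow([field.name for field in rows.schema])
    for row in rows:
        writer.writerow(row.values())

# 2. Later, load that CSV back into a new table.
load_config = bigquery.LoadJobConfig(
    source_format=bigquery.SourceFormat.CSV,
    skip_leading_rows=1,
    autodetect=True,  # let BigQuery infer the schema from the CSV
)
with open("results.csv", "rb") as f:
    client.load_table_from_file(
        f, "my-project.my_dataset.restored_results", job_config=load_config
    ).result()
```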

Related

Bug in google cloud bigquery?

I observed something very strange today when trying to stream records into a BigQuery table: sometimes after a successful stream it shows all the records that were streamed in, but sometimes it only shows part of them. What I did was delete the table and recreate it. Has anyone encountered a scenario like this? I am seriously concerned.
Many thanks.
Regards,
I've experienced a similar issue after deleting and recreating the table in a short time span, which is part of our e2e testing plan. As long as you do not delete/recreate your table, the streaming API works great. In our case the workaround was to customize the streaming table suffix for e2e executions only.
I am not sure if this has been addressed or not, but I would expect constant improvement.
I've also created a test project reproducing the issue and shared it with the BigQuery team.
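A minimal sketch of what I mean by customizing the suffix, assuming the Python client (the table names and the per-run suffix scheme are just illustrative):

```python
import uuid
from google.cloud import bigquery

client = bigquery.Client()

# For e2e runs, stream into a throwaway per-run table instead of deleting and
# recreating the production table, which is what triggered the partial-data
# symptom for us after a quick delete/recreate.
run_suffix = uuid.uuid4().hex[:8]
table_id = f"my-project.my_dataset.events_e2e_{run_suffix}"

# Create the throwaway table with the same schema as the production table.
prod_table = client.get_table("my-project.my_dataset.events")
client.create_table(bigquery.Table(table_id, schema=prod_table.schema))

errors = client.insert_rows_json(table_id, [{"user_id": "abc", "value": 1}])
if errors:
    raise RuntimeError(f"Streaming insert failed: {errors}")
```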

Only null values in BigQuery after changes to new schema when importing from Firebase Analytics

A while back we got an email from Google saying they would move all our Firebase Analytics data in BigQuery to a new table and a new schema. Fair enough.
A couple of days ago, the change was made. But the problem now is that we only see null events in our <project>:analytics_xxxxx.events_<date> table. The events are correct in the intraday table, but for the last three days we only have null values in the events table. Has anyone else seen this? Did we do something wrong, or is it a bug with Google BigQuery?
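For reference, this is roughly how we've been checking the extent of the nulls (a sketch; the analytics dataset ID, the date range, and the event_name column stand in for our real ones):

```python
from google.cloud import bigquery

client = bigquery.Client()

# Count rows with a NULL event_name per day to see which daily tables
# were affected after the schema migration.
query = """
SELECT
  _TABLE_SUFFIX AS day,
  COUNTIF(event_name IS NULL) AS null_events,
  COUNT(*) AS total_events
FROM `my-project.analytics_123456789.events_*`
WHERE _TABLE_SUFFIX BETWEEN '20180601' AND '20180607'
GROUP BY day
ORDER BY day
"""
for row in client.query(query).result():
    print(row.day, row.null_events, row.total_events)
```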
I guess that you used the script they sent you for importing from Firebase. This is a known issue: in some cases the script breaks the schema, and that's probably why you are getting those null values.
I would recommend reaching out to Firebase Support [1]; they will help you solve this.
EDIT: I saw this other question, posted and answered by a Googler, that may be of help [2]

Backfill Google Analytics in BigQuery

I'm looking for a workaround on the following issue. Hope someone can help.
I'm unable to backfill data in the ga_sessions_ tables in BigQuery through product linking in GA; e.g., the partition ga_sessions_20180517 is missing.
This specific view has already been linked before, and the Google documentation says that the historical load is only done once per view (hence the issue) (https://support.google.com/analytics/answer/3416092?hl=en).
Is there any way to work around it?
Kind regards,
Martijn
You can use the Google Analytics Reporting API to get the data for that view. This method has a lot of restrictions (for example, the data is sometimes sampled and only 7 dimensions can be exported in one call), but at least you will be able to fetch your data in a partitioned manner.
Documentation here.
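A minimal sketch of pulling one missing day through the Reporting API v4 and loading it into a date-suffixed BigQuery table (the view ID, key file, dimensions, metrics, and table names are placeholders; a real backfill would also need paging and your own field list):

```python
from google.oauth2 import service_account
from googleapiclient.discovery import build
from google.cloud import bigquery

SCOPES = ["https://www.googleapis.com/auth/analytics.readonly"]
creds = service_account.Credentials.from_service_account_file(
    "service-account.json", scopes=SCOPES
)
analytics = build("analyticsreporting", "v4", credentials=creds)

# Pull the missing day (max 7 dimensions / 10 metrics per request).
response = analytics.reports().batchGet(
    body={
        "reportRequests": [
            {
                "viewId": "123456789",
                "dateRanges": [{"startDate": "2018-05-17", "endDate": "2018-05-17"}],
                "dimensions": [{"name": "ga:date"}, {"name": "ga:sourceMedium"}],
                "metrics": [{"expression": "ga:sessions"}],
                "pageSize": 10000,
            }
        ]
    }
).execute()

report = response["reports"][0]
rows = [
    {
        "date": r["dimensions"][0],
        "source_medium": r["dimensions"][1],
        "sessions": int(r["metrics"][0]["values"][0]),
    }
    for r in report["data"].get("rows", [])
]

# Load into a date-suffixed table to stand in for the missing ga_sessions_ partition.
bq = bigquery.Client()
job_config = bigquery.LoadJobConfig(autodetect=True)
bq.load_table_from_json(
    rows, "my-project.analytics_backfill.ga_api_sessions_20180517", job_config=job_config
).result()
```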
If you need a lot of dimensions/metrics in hit-level format, scitylana.com has a service that can provide this data historically.
If you have a clientId set in a custom dimension, the data quality is near perfect.
It also works without a clientId set.
You can get all history as available through the API.
You can get 100+ dimensions/metrics in one batch into BQ.

Big Query table too fragmented - unable to rectify

I have a Google BigQuery table that is too fragmented, meaning it is unusable. Apparently there is supposed to be a job running that fixes this, but it doesn't seem to have resolved the issue for me.
I have attempted to fix this myself, with no success.
Steps tried:
Copying the table and deleting the original - this does not work, as the table is too fragmented to copy.
Exporting the table and re-importing it. I managed to export to Google Cloud Storage (as the file was JSON I couldn't download it, but this was fine). The problem was on re-import: I was trying to use the web interface, and it asked for a schema. I only have the file to work with, so I tried to use the schema as identified by BigQuery, but I couldn't get it accepted - I think the problem was the tree/leaf (nested) format not translating properly. A sketch of the kind of load I was attempting is below.
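For reference, this is roughly the load I was attempting, done with the Python client instead of the web UI (the bucket, table, and field names here are placeholders rather than my real ones; the schema would need to mirror the nested structure of the export):

```python
from google.cloud import bigquery

client = bigquery.Client()

# Explicit schema for the re-import, including a nested (RECORD) field,
# since autodetect and the web UI schema box were rejecting the structure.
schema = [
    bigquery.SchemaField("account_id", "STRING", mode="REQUIRED"),
    bigquery.SchemaField(
        "profile", "RECORD", mode="REPEATED",
        fields=[
            bigquery.SchemaField("network", "STRING"),
            bigquery.SchemaField("followers", "INTEGER"),
        ],
    ),
]

job_config = bigquery.LoadJobConfig(
    source_format=bigquery.SourceFormat.NEWLINE_DELIMITED_JSON,
    schema=schema,
)

load_job = client.load_table_from_uri(
    "gs://my-bucket/socialaccounts-export-*.json",
    "my-project.my_dataset.SocialAccounts_rebuilt",
    job_config=job_config,
)
load_job.result()
```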
To fix this, I know I either need to get the coalesce process to work (out of my hands - anyone from Google able to help? My project ID is 189325614134), or to get help formatting the import schema correctly.
This is currently causing a project to grind to a halt, as we can't query the data, so any help that can be given is greatly appreciated.
Andrew
I've run a manual coalesce on your table. It should be marginally better, but there seems to be a problem where we're not coalescing as thoroughly as we should. We're still investigating; we have an open bug on the issue.
Can you confirm this is the SocialAccounts table? You should not be seeing the fragmentation limit on this table when you try to copy it. Can you give the exact error you are seeing?

What to do about InternalError

Every now and then I start getting "Unexpected. Please try again." for ANY query I try. It usually gets better a bit later, but the rate at which it appears is very worrying.
What can I do when this happens?
Here are some job IDs:
job_648552a52f3046c5b2df9300a31d4693
job_4af4184725974c3fb38e0ded96b776c9
If you get this response, please let us know because it indicates a bug in BigQuery.
Looking at the jobs in question, this is a race condition where we mark the import complete before it has completely replicated to all datacenters, and the query hits a different datacenter. I've filed a bug and we're working on a fix.
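Until the fix lands, a client-side retry with a short backoff is a reasonable mitigation when these transient internal errors show up (a sketch, assuming the Python client; the retry count and delays are arbitrary):

```python
import time
from google.api_core import exceptions
from google.cloud import bigquery

client = bigquery.Client()

def query_with_retry(sql, max_attempts=5, base_delay=2.0):
    """Run a query, retrying transient server-side errors with exponential backoff."""
    for attempt in range(1, max_attempts + 1):
        try:
            return list(client.query(sql).result())
        except (exceptions.InternalServerError, exceptions.ServiceUnavailable) as exc:
            if attempt == max_attempts:
                raise
            delay = base_delay * 2 ** (attempt - 1)
            print(f"Attempt {attempt} failed ({exc}); retrying in {delay:.0f}s")
            time.sleep(delay)

rows = query_with_retry("SELECT COUNT(*) AS n FROM `my-project.my_dataset.my_table`")
print(rows[0].n)
```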