I'm working with a Google Ads dataset that is updated daily in BigQuery. The query was working fine last week, but when I tried to run it today, I got an error:
division by zero: 0/0
I'm really not sure what I did wrong to get the error, because I haven't touched this query since last week. I've already tried SAFE_DIVIDE, but I still get the same result. Any ideas?
The data keeps changing all the time, so you need to test the values before you divide.
Try SAFE_DIVIDE.
Also see NULLIF: https://www.mysqltutorial.org/mysql-nullif/
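For example, in BigQuery Standard SQL both approaches would look something like this (a minimal sketch; the table and column names are placeholders, not from the original question):
SELECT
  SAFE_DIVIDE(clicks, impressions) AS ctr_safe,   -- returns NULL instead of erroring when impressions = 0
  clicks / NULLIF(impressions, 0) AS ctr_nullif   -- NULLIF turns a zero denominator into NULL first
FROM `my_project.my_dataset.google_ads_daily`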
My Firebase project is integrated with BigQuery, so all raw Google Analytics events are exported daily and streamed into a dedicated dataset.
Since today, even simple queries on those events have been failing with an error:
Error running query
Failed to load FileDescriptorProto for
'CLOUD_QUERY_METADATA_SCHEMA': ;Field number 23 has already been
used in "Msg_0_CLOUD_QUERY_TABLE" by field "items".
An example query which is failing:
SELECT * FROM `project.analytics_184030700.events_*` WHERE event_name IN ("share")
As I mentioned, those (and more advanced) queries ran fine until yesterday. I did not change the schema or any other configuration in the meantime. I've also noticed that BigQuery was updated yesterday.
Looking at the error description, my table schema does indeed contain a field called items (the very last one, the 23rd), but it was added automatically by Google Analytics.
My suspicions:
Something went wrong with the recent BigQuery release
Something went wrong during daily sync Google Analytics -> BigQuery
Some old job or cache is getting in the way of new queries
At this point I have no idea what to try next. Does anyone have any insight into what could be causing this error?
EDIT:
I noticed that this problem was also just reported in the Google Issue Tracker here: https://issuetracker.google.com/issues/192325507.
I have the same issue and haven't solved it yet, but as you said, I guess the cause is Firebase. There's an extra-field problem that is limited to just three days (26th, 27th, and 28th June).
I checked all data older than 26th June, and there was no privacy_info field. As you can see, there is no privacy_info field for 29th June either. I think Firebase added this new field but then changed their mind for some reason. But this causes a big problem for us.
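If you want to verify which daily tables actually contain the field, a quick check (a sketch using the dataset name from the question; INFORMATION_SCHEMA.COLUMNS is standard BigQuery) would be:
SELECT table_name, column_name
FROM `project.analytics_184030700.INFORMATION_SCHEMA.COLUMNS`
WHERE column_name = 'privacy_info'   -- lists only the daily tables where the field exists
ORDER BY table_name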
Update:
I changed this part:
SELECT * FROM `project.analytics_184030700.events_*`
Like this:
SELECT * FROM `project.analytics_184030700.events_2*`
Interestingly, this worked for me.
You can work around that issue; it seems there are problems with the field privacy_info.
If you select multiple table partitions, just make sure you only select the fields you need, and omit the field privacy_info.
Not using "SELECT *" did resolve this error for me.
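For example, instead of SELECT *, you could list only the fields you need (the columns below are standard GA4 export fields, chosen for illustration):
SELECT event_date, event_timestamp, event_name, event_params, user_pseudo_id
FROM `project.analytics_184030700.events_*`
WHERE event_name IN ("share")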
I'm trying to figure out the proper syntax for converting a total number of minutes to display properly. For example: if something shows 65 minutes, I want it to show 1:05.
What I have been messing with is as follows:
Format(Round(DateDiff("n",[StartDate],[DateCompleted]),2),"Short Time")
The query has totals turned on, as this field is set to Expression. I'm getting strange results with the current criteria.
I'm sure there is something simple I'm missing, but I haven't been having much luck.
Thanks!!!
As long as the duration won't exceed 24 hours, you can do it directly:
Duration: Format([DateCompleted]-[StartDate],"Short Time")
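If the duration can exceed 24 hours, a common workaround (a sketch built on the same two fields) is to assemble hours and minutes from the total minute count instead:
Duration: Int(DateDiff("n",[StartDate],[DateCompleted])/60) & ":" & Format(DateDiff("n",[StartDate],[DateCompleted]) Mod 60,"00")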
I have a Pentaho transformation which is used to read a text file and check some conditions (which can produce errors, such as a number that should be positive but isn't). From these errors I'm creating an Excel file, and for my job I need the number of lines in this error file, plus a log of which lines had problems.
The problem is that sometimes I get the error "the return values id can't be found in the input row".
This error does not occur every time. The job runs every night; sometimes it can work without any problems for a month, and then one sunny day I just get this error.
I don't think the problem is the file, because if I execute the job again with the same file, it works. I can't understand why it fails, because the message mentions the value "id", but I don't have such a value/column. Why is it searching for a value that doesn't exist?
Another strange thing is that the step which fails normally should not be executed at all (as far as I know), because no errors were found, so no rows reach that step.
Maybe the problem is connected with the "Prioritize Stream" step? That is where I'm collecting all the errors (which use exactly the same columns). I tried putting a sort before the grouping steps, but it didn't help. Now I'm thinking of trying a "Blocking step".
The problem is that I don't know why this happens or how to fix it. Any suggestions?
Check that all your aggregates in the Group By step have a name.
However, sometimes the error comes from a previous step: the group (count...) requests data from the Prioritize Stream, and if that step has an error, the error gets mistakenly reported as coming from the group rather than from the Prioritize Stream.
Also, you mention a step which should not be executed because there is no data: I do not see any Filter which would prevent rows with a missing id from flowing from the Prioritize Stream to the count.
This is a bug. It happens randomly in one of my transformations, which often ends up with an empty stream (no rows). It mostly works, but once in a while it gives this error. It seems to fail only when the stream is empty, though.
This question/bug is mainly for the Google BigQuery team.
I have a daily report in Tableau that connects to a Google BigQuery live connection. This report had been running for over a year without problems. Since March 15th, however, the report no longer works, and the BigQuery queries generated by Tableau now return 'null'.
Note: the versions of Tableau and of the BigQuery driver have not changed in over a month, so nothing has changed on our side. I have also checked the Query History, and the generated queries have stayed the same over the last few weeks.
One simple query that is generated by Tableau and that now returns 'null' looks like this:
SELECT (CASE WHEN 1000000 = 0 THEN NULL ELSE FLOAT([log_time]) / 1000000 END)
AS [none_Calculation_0500516094317075_ok]
FROM [GDT.MissingItems] [sqlproxy]
GROUP BY 1
This query comes from a simple calculated field created in Tableau that is divided by 1000000 and cast to an INT. The job_id is job_ydTIq1c_ydnyua4s4SW3zJj00fs.
This looks to me like something has changed recently that is causing the query to return 'null' instead of what it should. This is a big problem for us, as we use this report for operational purposes.
I posted my question/problem on Stack Overflow, as mentioned on the Google BigQuery support page:
https://developers.google.com/bigquery/support
This was a bug caused by the incorrect application of an optimization in the query execution engine. It has been fixed, and we expect to release the fix today (it is possible that the fix won't go live until Monday, because we often try to avoid making last-minute production changes before the weekend).
The workaround in the meantime would be to use 0.0 rather than null in the case statement.
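Applied to the generated query above, the workaround would look like this:
SELECT (CASE WHEN 1000000 = 0 THEN 0.0 ELSE FLOAT([log_time]) / 1000000 END)
AS [none_Calculation_0500516094317075_ok]
FROM [GDT.MissingItems] [sqlproxy]
GROUP BY 1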
We have been using the following query for months, and now suddenly (within the last hour) BigQuery started returning "Error calling POST https://www.googleapis.com/bigquery/v2/projects/877311797081/queries: (500) Unexpected. Please try again":
SELECT DATE(SEC_TO_TIMESTAMP(sdate+(-4*3600))) AS d, 0 as range,
SUM(IF(type=0,1,0)) as looks, SUM(IF(type=1,1,0)) as books,
SUM(IF(type=1,nights,0)) AS nights, SUM(base_total*currencies.USD) as totals,
SUM(base_rate*currencies.USD*IF(type=1,nights,0)) as adr,
FROM [reztrack.201307]
WHERE act=157 AND
(( ((sdate+(-4*3600)) >= 1372636800 AND (sdate+(-4*3600))<= 1375228799) ))
GROUP BY d,range
ORDER BY range, d ASC
I checked our account to make sure there were no overages on quota limits or billing issues. Trying several other queries, even basic ones, seems to produce the same result.
We released a new version of BigQuery this morning; it had a bug affecting tables that had been truncated, so we rolled it back. Please let us know if you see any further issues.