I added all my AdWords accounts to BigQuery via the automatic transfer. All AdWords tables are now in BigQuery, but today I realized that the same tables from different accounts have different column counts.
Here is one example:
AdWordsAccount1.p_Customer_21530XXX --> 11 columns
AdWordsAccount2.p_Customer_23450XXX --> 12 columns
In this example, AdWordsAccount2 contains the column "AccountTimeZoneId" (description: deprecated by AdWords API v201702, please use AccountTimeZone instead). Why is this column missing in the other table?
This is only one example. Other tables such as Campaign, AdGroup, etc. also have different column counts.
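For reference, here is a sketch of how the two column lists can be compared via INFORMATION_SCHEMA (the project and dataset names below are placeholders for my setup):

SELECT table_name, column_name
FROM `my-project.AdWordsAccount1.INFORMATION_SCHEMA.COLUMNS`  -- placeholder project/dataset
WHERE table_name = 'p_Customer_21530XXX'
UNION ALL
SELECT table_name, column_name
FROM `my-project.AdWordsAccount2.INFORMATION_SCHEMA.COLUMNS`  -- placeholder project/dataset
WHERE table_name = 'p_Customer_23450XXX'
ORDER BY table_name, column_name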
I am looking forward to your help!
I'm writing code to load data from the Google Ads API into a BigQuery table using Cloud Functions. The process queries a resource called ad_group_ad, but I'm struggling to validate whether there are duplicated rows in my destination table.
Reading the docs, I expected to find some attribute identifying a column or group of columns that represents the table key. This question may seem obvious, but I'm not making any progress trying to google this.
Is there a way to identify whether there are duplicated rows? I'm not using any GROUP BY instruction when collecting, just a simple SELECT like the example below:
SELECT
  segments.ad_network_type,
  campaign.name,
  ad_group.name,
  ad_group.id
  -- ...and so on, and so forth
FROM ad_group_ad
WHERE segments.date = ?
The combination of ad ID and ad group ID is guaranteed to be unique.
However, as soon as you include segments in your select clause, you'll get multiple rows with the same IDs. So the combination of ad_group.id, ad.id plus whatever segment fields you need should be a candidate key.
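One way to sanity-check the destination table is to group by that candidate key and look for combinations that occur more than once. This is only a sketch; the table and column names below are placeholders for however the fields were flattened on load:

SELECT
  ad_group_id,
  ad_id,
  ad_network_type,
  segment_date,
  COUNT(*) AS row_count
FROM `my_project.my_dataset.ad_group_ad`  -- placeholder destination table
GROUP BY ad_group_id, ad_id, ad_network_type, segment_date
HAVING COUNT(*) > 1

Any rows returned indicate duplicates for that key combination.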
I am trying to query the patents-public-data:patents dataset. This dataset includes information on U.S. patent classifications according to the CPC guidelines.
There are a couple of "publications" tables within the patents dataset. Each of them (except for one) has an assigned date, e.g. 201710 or 201809. I wonder what these dates signify. Which "publications" table is the most up to date? And how often is it updated?
As was mentioned, SO is not the appropriate channel for this question; however, if you check the dataset information in the GCP Marketplace, this dataset is updated quarterly. It looks like the table named "publications" is the most up-to-date one, and the tables "publications_201710", "publications_201802", "publications_201809" and "publications_201903" contain the publications up to the date indicated in their names.
You can find additional information regarding this dataset at this link. In addition, in the BigQuery public datasets documentation you can see the alias to contact the team that manages the BigQuery public dataset program.
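If you want to check the freshness yourself, one option (a sketch based on the dataset's table metadata; last_modified_time is stored in epoch milliseconds) is to compare the last-modified timestamps of the publications tables:

SELECT
  table_id,
  TIMESTAMP_MILLIS(last_modified_time) AS last_modified
FROM `patents-public-data.patents.__TABLES__`
WHERE table_id LIKE 'publications%'
ORDER BY last_modified DESC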
I am using a query to calculate daily retention on my Firebase Analytics data exported to BigQuery. It is working well and the numbers match the numbers in Firebase, but when I try to filter the query by a cohort of users, the numbers don't add up.
I want to compare the results of an A/B test from Firebase, and so I've looked at the user_property "firebase_exp_2" which is my A/B test, and I've split up the users in each group (0/1). The retention numbers do not match (at all) the numbers that I can see in my A/B test results in Firebase - actually they show the opposite pattern.
The query is adapted from here: https://github.com/sagishporer/big-query-queries-for-firebase/wiki/Query:-Daily-retention
All I've changed is to add the following to the WHERE clause:
WHERE
event_name = 'user_engagement' AND user_pseudo_id IN
(SELECT user_pseudo_id
FROM `analytics_XXX.events_*`,
UNNEST (user_properties) user_properties
WHERE user_properties.key = 'firebase_exp_2' AND user_properties.value.string_value='1')
Firebase says that there are 6,043 users in the Control group and 6,127 in the Variant A group, but my numbers are 5,632 and 5,730, and the retained users are around 1,000 users more than what Firebase reports.
What am I doing wrong?
The export to BigQuery happens on a daily basis and each imported table is named events_YYYYMMDD. Additionally, a table is imported for events received throughout the current day. This table is named events_intraday_YYYYMMDD.
The additions you made are querying from events_*, which is fine. The example uses events_201812*, though, which would ignore the intraday table. That would explain why your numbers are lower. You are missing users added to the A/B test during the current day.
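A minimal sketch of the idea (the dataset ID and date range are placeholders): filter on _TABLE_SUFFIX so that both the daily tables and the intraday table are picked up, instead of hard-coding events_201812*:

SELECT
  COUNT(DISTINCT user_pseudo_id) AS engaged_users
FROM `analytics_XXX.events_*`
WHERE event_name = 'user_engagement'
  AND (_TABLE_SUFFIX BETWEEN '20181201' AND '20181231'
    OR _TABLE_SUFFIX BETWEEN 'intraday_20181201' AND 'intraday_20181231')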
I started to test Google AdWords transfers for BigQuery (https://cloud.google.com/bigquery/docs/adwords-transfer).
I have a few questions for which I cannot find answers anywhere.
Is it possible, for example, to edit which columns are downloaded from AdWords to BigQuery? E.g. the Keyword report has only the ad group ID column but not the ad group's text name.
Or is it possible to decide which tables (i.e. reports) are downloaded? The transfer creates around 60 tables and I need just 5...
DZ
According to here, the AdWords data transfer stores your AdWords data in a dataset. So the inputs are AdWords customer IDs (minimum one customer ID) and the output is a collection of datasets.
I think you would need a modified version of Pub/Sub to store specific columns or tables in BigQuery.
I have a Google Fusion Table with 3 row layouts as shown below:
We can query the Fusion Table as:
var query = new google.visualization.Query("https://www.google.com/fusiontables/gvizdata?tq=select * from *******************");
which selects the data from the first row layout, i.e. Rows 1, by default. Is there any way that we can query the second or third row layout of a Fusion Table?
API queries apply to the actual table data. The row layout tabs are just different views onto that data. You can get the actual query being executed for a tab with Tools > Publish; the HTML/JavaScript contains the FusionTablesLayer request.
I would recommend using the regular Fusion Tables API rather than the gvizdata API because it's much more flexible and not limited to 500 response rows.
The documentation for querying a Fusion Tables source has not been updated yet to account for the new structure, so this is just a guess. Try appending #rows:id=2 to the end of your table id:
select * from <table id>#rows:id=2
A couple of things:
Querying Fusion Tables with SQL is deprecated. Please see the porting guide.
Check out the Working With Rows part of the documentation. I believe this has your answers.