Data Transfer V2.0 Campaign Manager: No Costs and Revenues inside Storage CSVs - google-bigquery

We are trying to solve an issue with costs and revenues coming from DoubleClick Campaign Manager / Campaign Manager.
The goal is to build a daily dashboard of media costs (paid search, display, video and social) on Google Cloud.
We currently have access to Facebook data, Google Analytics and Campaign Manager; the issue is with the last one.
For Campaign Manager, a bucket outside our organization has been shared with our organization through Data Transfer V2.0. We have access to the impression, click, activity and match table CSVs in Cloud Storage and, from there, in BigQuery.
We have date, clicks, impressions, cities, ad names and so on, but the cost metrics only contain 0.
By costs I mean how much we paid per impression. The revenue, DBM cost and total media cost columns (in Advertiser, Partner or Account Currency) all contain 0.
We asked Google for help: they told us to tick a checkbox to "check that Campaign Manager and DV360 are linked in Data Transfer".
They said it should then work, but we still get 0 for revenues and costs.
We should see 32.00, for instance, instead of 0. Do you have any idea how to solve this issue?
Best,
Theo

If after this solution you still do not get any figures, I recommend sending your offline reports to BigQuery directly.
You can do it with the following steps:
Create a dataset in BigQuery and copy its ID.
Then go to the settings in CM360 and activate the BigQuery Export.
Copy the IAM service account email that the CM360 settings return to you.
Then go to BigQuery and grant that account editor permissions (only) on your dataset; a sketch of these two BigQuery steps is shown below.
After all this process, the option will be available to activate in the delivery options of CM360's offline reports.
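For reference, the two BigQuery steps can also be done with SQL instead of the UI. This is only a sketch: the dataset name and the service account email below are placeholders to replace with the dataset ID you created and the address CM360 returns to you.

-- Create the dataset that will receive the CM360 offline reports (name is a placeholder)
CREATE SCHEMA IF NOT EXISTS `my-project.cm360_reports`;

-- Grant the service account shown in the CM360 settings editor access to that dataset
-- (the email below is a placeholder for the one CM360 returns)
GRANT `roles/bigquery.dataEditor`
ON SCHEMA `my-project.cm360_reports`
TO "serviceAccount:cm-export@example.iam.gserviceaccount.com";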

Have you tried enabling "Report Display & Video 360 cost to Campaign Manager" from Basic Details at the Partner level in DV360?

Related

Google Ads transfer to BigQuery misses the data from Smart Campaigns

I have a Google Ads account with a single Smart Campaign and multiple regular campaigns. I've also set up a data transfer to Google BigQuery. When I try to compare the BigQuery data using the query
SELECT sum(Cost) FROM `project.dataset.AccountBasicStats_XXXXXX` where Date between '2021-12-01' and '2021-12-31'
the result shows less cost than I see in the Google Ads interface for the same time period. The difference is equal to the spend of my Smart Campaign. To check this, I've tried the queries:
SELECT * FROM `project.dataset.CampaignBasicStats_XXXXXX` where Date between '2021-12-01' and '2021-12-31' AND CampaignId = {ID of my smart campaign}
SELECT * FROM `project.dataset.CampaignStats_XXXXXX` where Date between '2021-12-01' and '2021-12-31' AND CampaignId = {ID of my smart campaign}
Both of them give me no results.
Is it true that BigQuery data transfer discards the data of smart campaigns? What are other ways to get statistics for them?
The BigQuery Data Transfer service for Google Ads is based on an old API, AdWords API v201809. This version of the API does not support Google Ads features introduced in 2019 or later, and Smart Campaigns are among them.
To get around this limitation, you could consider a different Data Transfer: for instance, SuperMetrics and FiveTran offer Data Transfers for Google Ads too. Note that they are paid, and you should make sure that they do support Smart Campaigns.
Google is said to be working on a Data Transfer using the newer Google Ads API.
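If you want to quantify how much spend the transfer is missing, one rough check (using the same table names and date range as in the question) is to compare the account-level and campaign-level totals; the gap should roughly match the Smart Campaign spend:

SELECT
  (SELECT SUM(Cost)
   FROM `project.dataset.AccountBasicStats_XXXXXX`
   WHERE Date BETWEEN '2021-12-01' AND '2021-12-31')
  -
  (SELECT SUM(Cost)
   FROM `project.dataset.CampaignBasicStats_XXXXXX`
   WHERE Date BETWEEN '2021-12-01' AND '2021-12-31')
  AS missing_cost;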

BigQuery - How to increase the expiration time of tables in the free sandbox?

I am using the free BigQuery sandbox to generate some custom metrics based on my analytics data. I have read in the documentation that the table expiration time in a free account is 60 days. What does this expiration time mean? What exactly will happen after 60 days? Will all my data be lost? How can I increase the expiration time in this case? Do I need to pay for it? If yes, what will the cost be?
According to the documentation:
The BigQuery sandbox gives you free access to the power of BigQuery
subject to the sandbox's limits. The sandbox allows you to use the web
UI in the Cloud Console without providing a credit card. You can use
the sandbox without creating a billing account or enabling billing for
your project.
In addition, according to the limits :
All datasets have the default table expiration time and the default
partition expiration set to 60 days. Any tables, views, or partitions
in partitioned tables automatically expire after 60 days.
You can edit this expiration date on the tables your data is exported to, but in order to do that you have to upgrade the project's plan (that is, enable billing, if you have not already). You would then be billed for the amount of bytes processed; you can check the billing options here.
Once that is done, you can edit the expiration date within BigQuery: go to Project > Dataset > Table > Details, click the pencil next to the table's name, and set the expiration date to never or select a date.
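If billing is enabled, the same change can be made with SQL instead of the UI. A minimal sketch, assuming a table named `my-project.my_dataset.my_table` (replace with your own project, dataset and table):

-- Remove the expiration so the table never expires
ALTER TABLE `my-project.my_dataset.my_table`
SET OPTIONS (expiration_timestamp = NULL);

-- Or move the expiration out to a specific date instead
ALTER TABLE `my-project.my_dataset.my_table`
SET OPTIONS (expiration_timestamp = TIMESTAMP '2030-01-01 00:00:00 UTC');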

Does querying a public BigQuery dataset count towards project quota?

I am running a Google Cloud Platform project that utilizes BigQuery in Sandbox mode (no billing enabled). In this project, I query solely public datasets.
The Quota page (in IAM & admin) shows 0 MiB although I have already queried a few hundred GBs.
This raises the question of whether or not querying public BigQuery datasets counts towards project quota.
The first 1TB of data you query will be free. After that you will be billed at $5 per TB.
You can monitor your usage in the logs or, as I find easier, in billing: there you will get an exact usage figure, with Product 'BigQuery' and SKU 'Analysis'. If the data you were querying was not a public dataset, you would also be charged for 'Active Storage'.
Relevant quote 1:
You pay only for the queries that you perform on the data (the first 1 TB per month is free, subject to query pricing details).
And 2:
To get started using a BigQuery public dataset, you must create or select a project. The first terabyte of data processed per month is free, so you can start querying public datasets without enabling billing. If you intend to go beyond the free tier, you must also enable billing.
Source: https://cloud.google.com/bigquery/public-data/
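If you would rather check your month-to-date usage from BigQuery itself, a rough sketch is to sum the billed bytes from the job metadata; the `region-us` qualifier is an example and should match the region your jobs run in:

-- Month-to-date bytes billed for queries in this project
SELECT
  SUM(total_bytes_billed) / POW(1024, 4) AS tib_billed_this_month
FROM
  `region-us`.INFORMATION_SCHEMA.JOBS_BY_PROJECT
WHERE
  job_type = 'QUERY'
  AND DATE(creation_time) >= DATE_TRUNC(CURRENT_DATE(), MONTH);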

Bulk data filters in Tableau

Our organization is in e-commerce and users are looking to change a filter every day with a different list of items; none of the users will have their own license, just read-only access. The data is connected through Google BigQuery. Is there a way to have this bulk filter upload capability without the license owners having to touch the filter each time?
Example
Product ID is the filter
Monday: they have a list of 10,000 IDs they want to check sales for.
Tuesday: they have a new list of 4,000 different IDs they want to check sales for.
Without clicking each ID each time, is there a way to just upload a list (CSV, Google Sheet, etc.)?
We thought users could upload a list of Product IDs to a Google Sheet that maps to a BigQuery table, and we could join it with the sales table to get the relevant data (a rough sketch of that join is below). However, this becomes unmanageable when we have more than one user, as users might step on each other's data.
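For reference, a minimal sketch of that join, assuming the Sheet is exposed as an external table called product_id_list and the sales table has a product_id column (all names here are illustrative):

-- product_id_list: external table backed by the Google Sheet users paste their IDs into
-- sales: the existing sales table
SELECT
  s.*
FROM
  `my-project.ecommerce.sales` AS s
JOIN
  `my-project.ecommerce.product_id_list` AS p
ON
  s.product_id = p.product_id;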
Any suggestions/recommendations are welcome. Our team is pretty new to Tableau. Let me know if any additional details are needed.
Have you tried changing the filter type to "Multi Values (custom list)" and then having the report user paste their list into the filter?

Where do you get Google BigQuery usage info (mainly for processed data)?

I know that BigQuery offers the first "1 TB of data processed" per month for free, but I can't figure out where to look on my dashboard to see my monthly usage. I used to be able to "revert" to the old dashboard, which had this info, but for the past couple of weeks the "old dashboard" hasn't been accessible.
From the Google Cloud Console overview page for your project, click on the "details" section at the top right, next to the charge estimate.
You'll get an estimate of the charges for the current month for each service and item within the service, including BigQuery analysis.
If you want to track this usage, you can also export the data into CSV every day by going into the Billing settings and enabling the usage export feature. Do not worry that it only mentions Compute Engine; it actually works for other services as well.
You can also access the billing history directly by clicking on the billing account link.
You will get a detailed bill with the usage info.
Post GCP Console Redesign Answer
The GCP console was redesigned and now the other answer here no longer applies, but it is still possible to view your usage by going to IAM & Admin -> Quotas.
What you're looking for is "Big Query API: Query usage per day". It doesn't seem possible to view your usage over 30 days unfortunately, but you can see your current usage (per day) and your peak usage over the past 7 days. You can also set a daily quota. If you're just working infrequently or doing a lot in one day, you can set a quota to 1 TiB and prevent yourself from blowing your whole allocation in one day.
You can try sending feedback about these limitations, as I did, by clicking the question mark at the top right and then choosing to send feedback.
Theo is correct that there is no way to view the number of bytes processed or billed since the start of the month (inside of the free tier) in the GCP Billing Console. However, you can extract the bytes processed and bytes billed data from logs in Cloud Logging and calculate the total bytes processed/billed since the start of the month inside of BigQuery.
Here are the steps to count total bytes billed in a month:
Under Cloud Logging, go to Logs Explorer (NOT the Legacy Logs Explorer) and run the following query in the query builder frame:
resource.type="bigquery_project" AND
protoPayload.metadata.jobChange.job.jobStats.queryStats.totalBilledBytes>1 AND
timestamp>="2021-04-01T00:00:00Z"
The timestamp clause is not actually necessary, but it will speed up the query. You can set timestamp >= <value> to any valid timestamp you want as long as it returns at least one result.
In the Query Results frame, click the "Action" button, and select "Create Sink".
In the window that opens, give your sink a name, click "Next", and in the "Select sink service" dropdown menu select "BigQuery dataset".
In the "Select BigQuery dataset" dropdown menu, either select an existing dataset where you would like to create your sink (which is a table containing logs) or if you prefer, choose "Create new BigQuery dataset.
Finally, you will likely want to check the box for Partition Table, since this will help you control costs whenever you query this sink. As of the time of this answer, however, Google limits partition tables to 4000 partitions, so you may find it is necessary to clear out old logs eventually.
Click "Create Sink" (there is no need for any inclusion or exclusion filters).
Run a query in BigQuery that produces bytes billed (i.e. a query that does not return a previously cached result). This is necessary to instantiate the sink. Moments after your query runs, you should see a table called <your_bigquery_dataset>.cloudaudit_googleapis_com_data_access
Enter the following Standard SQL query in the BigQuery query editor:
WITH bytes_table AS (
  SELECT
    JSON_VALUE(protopayload_auditlog.metadataJson,
      '$.jobChange.job.jobStats.createTime') AS date_time,
    JSON_VALUE(protopayload_auditlog.metadataJson,
      '$.jobChange.job.jobStats.queryStats.totalBilledBytes') AS billedbytes
  FROM
    `<your_project>.<your_bigquery_dataset>.cloudaudit_googleapis_com_data_access`
  WHERE
    EXTRACT(MONTH FROM timestamp) = 4
    AND EXTRACT(YEAR FROM timestamp) = 2021
)
SELECT
  SUM(CAST(billedbytes AS INT64)) / 1073741824 AS total_GB
FROM
  bytes_table;
You will want to change the month from 4 to whatever month you intend to query, and 2021 to whatever year you intend to query. Also, you may find it helpful to save this query as a view if you intend to rerun it periodically.
Be advised that your sink does not contain your past BigQuery logs, only BigQuery logs produced after you created the sink. Therefore, in the first month, the number of GB returned by this query will not be an accurate count of your bytes billed for the month unless you happen to have created the sink prior to running any queries in BigQuery during the current month.
Might be related to How can I monitor incurred BigQuery billings costs (jobs completed) by table/dataset in real-time?
If you are fine with using BigQuery itself to get that information (instead of using a UI), you can use something like this:
DECLARE gb_divisor INT64 DEFAULT 1024*1024*1024;
DECLARE tb_divisor INT64 DEFAULT gb_divisor*1024;
DECLARE cost_per_tb_in_dollar INT64 DEFAULT 5;
DECLARE cost_factor FLOAT64 DEFAULT cost_per_tb_in_dollar / tb_divisor;

SELECT
  ROUND(SUM(total_bytes_processed) / gb_divisor, 2) AS bytes_processed_in_gb,
  ROUND(SUM(IF(cache_hit != TRUE, total_bytes_processed, 0)) * cost_factor, 4) AS cost_in_dollar,
  user_email
FROM (
  (SELECT * FROM `region-us`.INFORMATION_SCHEMA.JOBS_BY_USER)
  UNION ALL
  (SELECT * FROM `other-project.region-us`.INFORMATION_SCHEMA.JOBS_BY_USER)
)
WHERE
  DATE(creation_time) BETWEEN DATE_SUB(CURRENT_DATE(), INTERVAL 30 DAY) AND CURRENT_DATE()
GROUP BY
  user_email
Explanation
Please consider the caveats I mentioned in my answer here