BigQuery Transfer: Google Ads (formerly AdWords): Transfer job is successful but no data - google-bigquery

I am trying to set up a transfer with the following configuration:
Source: Google Ads (formerly AdWords)
Destination dataset: app_google_ads
Schedule (UTC): every day 08:24
Notification Cloud Pub/Sub topic: None
Email notifications: None
Data source details
Customer ID: xxx-xxx-xxxx
Exclude removed/disabled items: None
I get no errors during the transfer, but my dataset is empty. Why?
12:02:00 PM Summary: succeeded 72 jobs, failed 0 jobs.
12:01:04 PM Job 77454333956:adwords_5cdace41-0000-2184-a73e-001a11435098 (table p_VideoConversionStats_2495318378$20190502) completed successfully
12:00:04 PM Job 77454333956:adwords_5cdace37-0000-2184-a73e-001a11435098 (table p_HourlyAccountStats_2495318378$20190502) completed successfully
12:00:04 PM Job 77454333956:adwords_5cd88a2b-0000-2117-b857-089e082679e4 (table p_HourlyCampaignStats_2495318378$20190502) completed successfully
12:00:04 PM Job 77454333956:adwords_5cd0ba27-0000-2c7c-aed0-f40304362f4a (table p_AudienceBasicStats_2495318378$20190502) completed successfully
12:00:04 PM Job 77454333956:adwords_5cd907f8-0000-2e16-a735-089e082678cc (table p_KeywordStats_2495318378$20190502) completed successfully
12:00:04 PM Job 77454333956:adwords_5cd88a32-0000-2117-b857-089e082679e4 (table p_ShoppingProductConversionStats_2495318378$20190502) completed successfully
12:00:04 PM Job 77454333956:adwords_5cce5c09-0000-28bd-86d3-f4030437b908 (table p_AdBasicStats_2495318378$20190502) completed successfully
etc

I had AdBlock enabled in my browser, which was hiding the Google Ads tables in my dataset in the BigQuery UI. I turned it off and the tables showed up!
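For anyone hitting the same thing: the tables can also be checked outside the browser UI entirely. A minimal sketch with the Python client, assuming the google-cloud-bigquery package is installed; "my-project" is a placeholder project ID:

# A minimal sketch for checking the transfer output without the browser UI.
# Assumes the google-cloud-bigquery package; "my-project" is a placeholder.
from google.cloud import bigquery

client = bigquery.Client(project="my-project")

# List every table the Google Ads transfer wrote to the destination dataset,
# with row counts to confirm they actually hold data.
for item in client.list_tables("app_google_ads"):
    table = client.get_table(item.reference)
    print(table.table_id, table.num_rows)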

Related

BigQuery Data Transfer from S3: intermittent success

When using BigQuery Data Transfer to move data into BigQuery from S3, I get intermittent success (I've actually only seen it work correctly one time).
The success:
6:00:48 PM Summary: succeeded 1 jobs, failed 0 jobs.
6:00:14 PM Job bqts_5f*** (table test_json_data) completed successfully. Number of records: 516356, with errors: 0.
5:59:13 PM Job bqts_5f*** (table test_json_data) started.
5:59:12 PM Processing files from Amazon S3 matching: "s3://bucket-name/*.json"
5:59:12 PM Moving data from Amazon S3 to Google Cloud complete: Moved 2661 object(s).
5:58:50 PM Starting transfer from Amazon S3 for files with prefix: "s3://bucket-name/"
5:58:49 PM Starting transfer from Amazon S3 for files modified before 2020-07-27T16:48:49-07:00 (exclusive).
5:58:49 PM Transfer load date: 20200727
5:58:48 PM Dispatched run to data source with id 138***3616
The usual outcome, though, is just 0 successes, 0 failures, like the following:
8:33:13 PM Summary: succeeded 0 jobs, failed 0 jobs.
8:32:38 PM Processing files from Amazon S3 matching: "s3://bucket-name/*.json"
8:32:38 PM Moving data from Amazon S3 to Google Cloud complete: Moved 3468 object(s).
8:32:14 PM Starting transfer from Amazon S3 for files with prefix: "s3://bucket-name/"
8:32:14 PM Starting transfer from Amazon S3 for files modified between 2020-07-27T16:48:49-07:00 and 2020-07-27T19:22:14-07:00 (exclusive).
8:32:13 PM Transfer load date: 20200728
8:32:13 PM Dispatched run to data source with id 13***0415
What might be going on such that the second log above doesn't include the Job bqts... run? Is there somewhere I can get more details about these data transfer jobs? I had a different job that ran into a JSON error, so I don't believe it was that.
Thanks!
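Side note on the "more details" question: the run messages shown in the console can also be pulled via the BigQuery Data Transfer API. A minimal sketch, assuming the google-cloud-bigquery-datatransfer package; the project, location, and transfer-config ID in `parent` are placeholders:

# A minimal sketch: pull transfer-run messages via the Data Transfer API.
# The project, location, and transfer-config ID below are placeholders.
from google.cloud import bigquery_datatransfer

client = bigquery_datatransfer.DataTransferServiceClient()
parent = "projects/my-project/locations/us/transferConfigs/1234abcd"

for run in client.list_transfer_runs(parent=parent):
    print(run.run_time, run.state)
    # Each run exposes the same log lines the console shows, with severity.
    for msg in client.list_transfer_logs(parent=run.name):
        print("  ", msg.severity, msg.message_text)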
I was a bit confused by the logging, since it finds and moves the objects as shown above.
I believe I misread the docs: I had thought that an Amazon S3 URI of s3://bucket-name/*.json would crawl the bucket recursively for the JSON files, but even though the message above seems to indicate as much, the transfer only loads files that sit at the top level of the bucket (for the s3://bucket-name/*.json URI).
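If the files do live under prefixes, one workaround is to copy them up to the bucket root before the transfer runs, so the single-level wildcard matches them. A rough boto3 sketch; the bucket name is a placeholder, and key-name collisions are not handled:

# A rough sketch: flatten nested .json keys to the bucket's top level so the
# single-level wildcard s3://bucket-name/*.json matches them.
# "bucket-name" is a placeholder; key collisions are not handled here.
import boto3

s3 = boto3.client("s3")
paginator = s3.get_paginator("list_objects_v2")

for page in paginator.paginate(Bucket="bucket-name"):
    for obj in page.get("Contents", []):
        key = obj["Key"]
        if "/" in key and key.endswith(".json"):
            flat_key = key.rsplit("/", 1)[-1]
            s3.copy_object(
                Bucket="bucket-name",
                Key=flat_key,
                CopySource={"Bucket": "bucket-name", "Key": key},
            )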

Failure importing data from BigQuery to GCS

Dear support at Google,
We recently noticed that many of the GAP site import jobs extracting and uploading data from Google BigQuery to Google Cloud Storage have been failing since April 4th. Our upload jobs ran fine before April 4th but have been failing since then. After investigating, we believe this is an issue/error on the BigQuery side, not in our job. The details of the error info from the BigQuery API when uploading data are shown below:
216769 [main] INFO  org.mortbay.log  - Dataset : 130288123
217495 [main] INFO  org.mortbay.log  - Job is PENDING waiting 10000 milliseconds...
227753 [main] INFO  org.mortbay.log  - Job is PENDING waiting 10000 milliseconds...
237995 [main] INFO  org.mortbay.log  - Job is PENDING waiting 10000 milliseconds...
Heart beat
248208 [main] INFO  org.mortbay.log  - Job is PENDING waiting 10000 milliseconds..
258413 [main] INFO  org.mortbay.log  - Job is PENDING waiting 10000 milliseconds...
268531 [main] INFO  org.mortbay.log  - Job is RUNNING waiting 10000 milliseconds...
Heart beat
278675 [main] INFO  org.mortbay.log  - An internal error has occurred
278675 [main] INFO  org.mortbay.log  - ErrorProto : null
 
As per the log, it is an internal error with ErrorProto: null.
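When the API only reports ErrorProto: null like this, the job resource itself can still be inspected for fuller error detail. A minimal sketch with the current Python client; the dataset, table, and destination URI are placeholders:

# A minimal sketch: run the BigQuery-to-GCS extract via the Python client and
# dump whatever error detail the jobs API returns. The dataset, table, and
# bucket names are placeholders.
from google.cloud import bigquery

client = bigquery.Client(project="lithe-creek-712")
job = client.extract_table(
    "lithe-creek-712.my_dataset.my_table",  # hypothetical table
    "gs://my-bucket/export-*.csv",          # hypothetical destination
)

try:
    job.result()  # blocks through PENDING/RUNNING, raises on failure
except Exception:
    # error_result and errors usually carry more than "ErrorProto: null".
    print(job.error_result)
    print(job.errors)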
 
Our Google account: ea.eadp#gmail.com
 
Our Google BigQuery projects:
Origin-BQ              origin-bq-1
Pulse-web             lithe-creek-712
The import failures occur on the following datasets:
 
In Pulse-web, lithe-creek-712:
101983605
130288123
48135564
56570684
57740926
64736126
64951872
72220498
72845162
73148296
77517207
86821637
 
 
Please look into this and let us know if you have any updates.
Thank you very much and looking forward to hearing back from you.
 
Thanks

CRM 2013 SP1 - Duplicate call to MessageProcessor "start processing message:'Retrieve' for entity:'account'"

I have a CRM 2013 SP1 on-premise setup.
Here is my scenario: I want to log every form visit for the Account entity, so I created a plugin which hooks onto the Retrieve call of the Account entity.
That's where the problem starts: I am getting a duplicate entry for every single form view.
At first I thought I had some error in my plugin, but it's so basic that it isn't doing any extra retrieve, so that's not the case. It's also not a problem with the depth of the context, as you can see from the example trace below.
I did a trace on the CRM server and I can see the two Retrieve calls in the trace log, both seem "legit" calls.
What I have done so far to debug:
Looked at the IIS access log and checked for multiple form hits; that's not the case.
Disabled the plugin and made sure no other "external" plugins are hooked into the Account entity.
Stripped the Account form: removed the Social pane and all other fields except the mandatory 'name'.
Started CRM tracing on a different organization on the same CRM server and saw the same behavior, i.e. the Retrieve request being made twice for a single form-open action. That org was "clean", so to speak; it had not been modified.
Example output from the trace log (not complete), which shows the timestamps:
[2014-08-15 16:22:11.2] Process: w3wp |Organization:317fc566-698a-e311-93ec-00155d030401 |Thread: 35 |Category: Platform |User: 0574cc0c-364b-4347-93c8-9411e8291c01 |Level: Info | ReqId: 1c905b2e-1e40-4ca3-b743-0ae7ef7b313e | MessageProcessor.Execute ilOffset = 0x3D
>MessageProcessor start processing message:'Retrieve' for entity:'account' correlationId:17242856-5c45-484c-b79a-0d102988390a depth:1 last updated at: 08/15/2014 16:22:11.
[2014-08-15 16:22:11.2] Process: w3wp |Organization:317fc566-698a-e311-93ec-00155d030401 |Thread: 35 |Category: Platform |User: 0574cc0c-364b-4347-93c8-9411e8291c01 |Level: Info | ReqId: 1c905b2e-1e40-4ca3-b743-0ae7ef7b313e | MessageProcessor.Execute ilOffset = 0x2DC
>MessageProcessor finish processing message 'Retrieve' for 'account'.
[2014-08-15 16:22:12.8] Process: w3wp |Organization:317fc566-698a-e311-93ec-00155d030401 |Thread: 35 |Category: Platform |User: 0574cc0c-364b-4347-93c8-9411e8291c01 |Level: Info | ReqId: 1c905b2e-1e40-4ca3-b743-0ae7ef7b313e | MessageProcessor.Execute ilOffset = 0x3D
>MessageProcessor start processing message:'Retrieve' for entity:'account' correlationId:cfe4c47e-31f4-4dfd-8fc9-7ed26187d4b4 depth:1 last updated at: 08/15/2014 16:22:12.
[2014-08-15 16:22:12.9] Process: w3wp |Organization:317fc566-698a-e311-93ec-00155d030401 |Thread: 35 |Category: Platform |User: 0574cc0c-364b-4347-93c8-9411e8291c01 |Level: Info | ReqId: 1c905b2e-1e40-4ca3-b743-0ae7ef7b313e | MessageProcessor.Execute ilOffset = 0x2DC
>MessageProcessor finish processing message 'Retrieve' for 'account'.
I'm kind of out of ideas, and that's where you come in. Any ideas?
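One observation from the trace above: the two Retrieve calls carry different correlationIds, so they look like two genuinely separate requests rather than one request processed twice. To confirm that pattern across a longer trace, a throwaway script along these lines (the file name is a placeholder) can group the start messages by correlationId:

# A throwaway sketch: count Retrieve start messages per correlationId in a
# CRM trace file, to confirm the duplicates are distinct requests.
import re
from collections import Counter

pattern = re.compile(r"correlationId:([0-9a-f-]+)")
counts = Counter()

with open("crm-trace.log") as trace:  # hypothetical file name
    for line in trace:
        if "start processing message:'Retrieve'" in line:
            match = pattern.search(line)
            if match:
                counts[match.group(1)] += 1

for corr_id, n in counts.most_common():
    print(corr_id, n)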

Backend error when loading gzip CSV

I got the "Backend error. Job aborted" message; the job ID is below.
I know this question has been asked before, but I still need some help to try and resolve it.
What happens if this occurs in production? We want to run periodic loads every 5 minutes.
Thanks in advance.
Errors:
Backend error. Job aborted.
Job ID: job_744a2b54b1a343e1974acdae889a7e5c
Start Time: 4:32pm, 30 Aug 2012
End Time: 5:02pm, 30 Aug 2012
Destination Table: XXXXXXXXXX
Source URI: gs://XXXXX/XXXXXX.csv.Z
Delimiter: ,
Max Bad Records: 99999999
This job hit an internal error. Since you ran this job, BigQuery has been updated to a new version, and a number of internal errors have been fixed. Can you retry your job?
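On the production question about 5-minute periodic loads: transient backend errors like this are generally worth retrying with backoff. A sketch using the current Python client rather than the 2012-era API; the project, bucket, and table names are placeholders:

# A sketch: retry a gzip CSV load with exponential backoff so a transient
# backend error does not sink a 5-minute periodic load. Names are placeholders.
import time
from google.cloud import bigquery

client = bigquery.Client(project="my-project")
config = bigquery.LoadJobConfig(
    source_format=bigquery.SourceFormat.CSV,
    field_delimiter=",",
    max_bad_records=99999999,
)

for attempt in range(5):
    try:
        job = client.load_table_from_uri(
            "gs://my-bucket/data.csv.gz",      # hypothetical URI
            "my-project.my_dataset.my_table",  # hypothetical table
            job_config=config,
        )
        job.result()  # raises on a failed job, e.g. a backend error
        break
    except Exception:
        time.sleep(2 ** attempt)  # 1s, 2s, 4s, ... before retrying
else:
    raise RuntimeError("load failed after 5 attempts")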

SQL Server 2005 Agent not working

SQL Server 2005 Service Pack 2, version 9.00.3042.00
All maintenance plans fail with the same error.
The details of the error are:
Execute Maintenance Plan
Execute maintenance plan. test7 (Error)
Messages
Execution failed. See the maintenance plan and SQL Server Agent job history logs for details.
The advanced information section shows the following:
Job 'test7.Subplan_1' failed. (SqlManagerUI)
Program Location:
at Microsoft.SqlServer.Management.SqlManagerUI.MaintenancePlanMenu_Run.PerformActions()
At this point the following appear in the windows event log:
Event Type: Error
Event Source: SQLISPackage
Event Category: None
Event ID: 12291
Date: 28/05/2009
Time: 16:09:08
User: 'DOMAINNAME\username'
Computer: SQLSERVER4
Description:
Package "test7" failed.
and also this:
Event Type: Warning
Event Source: SQLSERVERAGENT
Event Category: Job Engine
Event ID: 208
Date: 28/05/2009
Time: 16:09:10
User: N/A
Computer: SQLSERVER4
Description:
SQL Server Scheduled Job 'test7.Subplan_1' (0x96AE7493BFF39F4FBBAE034AB6DA1C1F) - Status: Failed - Invoked on: 2009-05-28 16:09:02 - Message: The job failed. The Job was invoked by User 'DOMAINNAME\username'. The last step to run was step 1 (Subplan_1).
There are no entries in the SQL Agent log at all.
Probably no points for this, but you're likely to get more help on this over at ServerFault.com now that they are open.
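In the meantime, the job history that the error message points to lives in msdb and can be queried directly even when the Agent error log itself is empty. A sketch assuming pyodbc, with placeholder connection details:

# A sketch: pull SQL Server Agent job history for the failing plan straight
# from msdb, since the Agent error log itself has no entries.
# Connection-string details are placeholders.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={SQL Server};SERVER=SQLSERVER4;DATABASE=msdb;Trusted_Connection=yes"
)

rows = conn.execute(
    """
    SELECT j.name, h.step_id, h.step_name, h.run_status,
           h.run_date, h.run_time, h.message
    FROM msdb.dbo.sysjobhistory AS h
    JOIN msdb.dbo.sysjobs AS j ON j.job_id = h.job_id
    WHERE j.name LIKE 'test7%'
    ORDER BY h.run_date DESC, h.run_time DESC
    """
).fetchall()

for row in rows:
    # run_status 0 = failed, 1 = succeeded; message holds the step error text.
    print(row.name, row.step_name, row.run_status, row.message)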