I'm looking to get conversion path data from GA4 (Advertising > Attribution > Conversion Paths) into my BigQuery project, but I can't seem to find it after linking the account to the project.
Has anyone been able to do this?
Thanks,
I've linked my GA4 property to a BigQuery project and the data has pulled through, but I cannot see where to find Conversion Paths.
I just cleaned up my firestore collection data using DataPrep and verified the data via BigQuery. I now want to move the data back to Firestore. Is there a way to do this?
I have used manual method of exporting to JSON and then uploading using a code provided by AngularFirebase. But It is not automated as there is a need to periodically cleanup this data.
I am looking for a process within Google Cloud console. Any help will be appreciated
This is not an answer, more like a partial answer; I could not add a comment as I don't have 50 reputation yet. I am in a similar boat, but not entirely the same situation: I want to use a subset of BigQuery data and add it to Firestore. My thinking is to do the following:
Use the BigQuery API to query the data periodically using BigQuery Jobs' Load (in your case) or Query (in my case)
Convert it to JSON in code
Use batch commit in Firestore's API to update the firestore database
This is my idea and I am not sure whether this will work, but I will let you know more once I am done with it, unless someone else has better insights to help me and the person asking this question.
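A minimal sketch of that idea, using the Python client libraries. The project, dataset, table, column, and collection names below are placeholders, not values from the question:

# Rough sketch only: query BigQuery, then write the rows back to Firestore
# in batches. All project/dataset/table/collection names are placeholders.
from google.cloud import bigquery, firestore

bq = bigquery.Client()
fs = firestore.Client()

# 1. Pull the cleaned-up data out of BigQuery
rows = bq.query(
    "SELECT id, payload FROM `my_project.my_dataset.cleaned_data`"  # placeholder query
).result()

# 2./3. Convert each row to a dict and commit in batches
# (Firestore allows at most 500 writes per batch)
batch = fs.batch()
pending = 0
for row in rows:
    doc_ref = fs.collection("my_collection").document(str(row["id"]))
    batch.set(doc_ref, dict(row))
    pending += 1
    if pending == 500:
        batch.commit()
        batch = fs.batch()
        pending = 0
if pending:
    batch.commit()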
I'm looking for a workaround on the following issue. Hope someone can help.
I'm unable to backfill data in the ga_sessions_ table in BigQuery through product linking in GA. e.g. partition ga_sessions_20180517 is missing
This specific view has already been linked before. Google documentation says that historical load is only done once per view (hence, the issue) (https://support.google.com/analytics/answer/3416092?hl=en)
Is there any way to work around it?
Kind regards,
Martijn
You can use the Google Analytics Reporting API to get the data for that view. This method has a lot of restrictions (for example, the data may be sampled, and only 7 dimensions can be exported in one call), but at least you will be able to fetch your data in a partitioned manner.
See the Google Analytics Reporting API documentation.
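For illustration, a hedged sketch of such a call with the Python client for Reporting API v4; the key file path, view ID, date range, and dimensions are placeholders, not values from the question:

# Sketch: pull one day of data for a view via the Reporting API v4.
from google.oauth2 import service_account
from googleapiclient.discovery import build

creds = service_account.Credentials.from_service_account_file(
    "service-account.json",  # placeholder key file
    scopes=["https://www.googleapis.com/auth/analytics.readonly"],
)
analytics = build("analyticsreporting", "v4", credentials=creds)

response = analytics.reports().batchGet(body={
    "reportRequests": [{
        "viewId": "123456789",  # placeholder view ID
        "dateRanges": [{"startDate": "2018-05-17", "endDate": "2018-05-17"}],
        "metrics": [{"expression": "ga:sessions"}],
        "dimensions": [{"name": "ga:date"}, {"name": "ga:sourceMedium"}],
    }]
}).execute()

The response can then be loaded into the missing partitions yourself, one day at a time.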
If you need a lot of dimensions/metrics in hit level format, scitylana.com has a service that can provide this data historically.
If you have a clientId set in a custom dimension, the data quality is near perfect.
It also works without a clientId set.
You can get all history as available through the API.
You can get 100+ dimensions/metrics in one batch into BQ.
I'm adding some testing to my current project which uses Azure blob storage to store telemetry data coming from a stream analytics job. I want to do testing of the routines that get the telemetry data, so I created a separate container for test data. I downloaded a sample set of data, modified the data to serve my needs and re-uploaded (using Azure storage explorer) everything back into the new container.
The tests were immediately failing and I quickly found out that this is because the LastModified date of the files changed into the date/time of upload. This is fine, but the sequence of the upload was also different. My code uses the modified date of the file to find out which one is the most recent, which would now return a different file based on the new dates.
I found that you cannot modify this property, although you can change another property to have it update. So I know the solution: I could write a quick script which gets the sequence of files from my production instance and then touches every file in the test instance in the same sequence.
But... I was wondering whether this is the best option. I also read it's 'best practice' to store a custom datetime in a separate property, but I don't think I can do that straight from Stream Analytics (which is writing the blobs). I also considered using an Azure Function to do this (new blob => update property), but then I'm adding complexity and something that might fail for whatever reason.
So I'm looking for the best way to solve this problem. Anyone?
Update: this one probably deserves a tiny bit more explanation. Apart from using the LastModified date to sort on, I also use it to filter blobs. The blobs themselves are CSV files containing ASA output data, i.e. telemetry records. Each record has a timestamp, but that information is IN the file. When retrieving data, I don't want to have to dive into each file to find out the timestamp of those records, so I use a prefilter to filter out the blobs within a certain timespan, and then only download/open those files to get to the records inside.
This works perfectly as long as you do not touch any of the blobs, but obviously it stops working as soon as any of the blobs gets modified for whatever reason. So I'm now convinced that I need a different/better way to solve this issue; but how?
It seems to me that you have two separate things: the data that you want to store in blob storage, and metadata about the blob such as the timestamp. I would create a separate (Azure) database for the metadata, or even simpler, just add metadata to the (block) blob:
blockBlob.Metadata.Add("from", dateTime.ToString());
blockBlob.Metadata.Add("to", dateTime.ToString());
blockBlob.Metadata.Add("order", "1");
For sorting I would just add a simple order property.
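On the reading side, a rough sketch with the azure-storage-blob Python SDK of sorting on such a custom property instead of LastModified; the connection string, container name, and the "from" property are placeholders following the example above:

# Sketch only: list blobs with their metadata and sort on a custom property.
from azure.storage.blob import ContainerClient

container = ContainerClient.from_connection_string(
    "<connection-string>", container_name="telemetry-test")  # placeholders

blobs = container.list_blobs(include=["metadata"])
ordered = sorted(blobs, key=lambda b: (b.metadata or {}).get("from", ""))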
The comment by #Vignesh deserves the credit here, but in order to get this question marked as answered I'll provide it myself.
With ASA, you can set the output to be structured by date/time. That means in this case, data is written to the blob store with a directory structure such as:
2016 / 06 / 27 / 15 / 23 (= 27-06-2016 15:23)
2016 / 06 / 28 / 11 / 02 (= 28-06-2016 11:02)
The ASA output allows you to specify how granular you want the structure to be; in my case I chose to store it by day (so not including a time path). The ASA runtime will then ensure that data from a certain point in time is stored within a blob that resides in the correct path.
I subsequently changed my logic to no longer use the datetime stamp of the individual blob files, but to simply read the files from the folders that fall within the timerange I'm interested in. That assures we only get data that was produced within that timerange. And if there's more than one file in a folder, I need to load them all, since they were all in the same timerange anyway. As long as minutes are enough granularity for you, this works excellently, even though it might feel a bit strange to use a folder structure for such a thing.
Having a separate 'index' for blobs which tracks their datetime would work too, of course, but it adds complexity which in this case I don't really need.
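To illustrate the reading side (not the exact code from my project), with a yyyy/MM/dd path pattern the time-range filter becomes a simple prefix listing; the connection string and container name are placeholders:

# Sketch: list only the blobs whose folder path falls inside the date range.
from datetime import date, timedelta
from azure.storage.blob import ContainerClient

container = ContainerClient.from_connection_string(
    "<connection-string>", container_name="telemetry")  # placeholders

def blobs_for_range(start: date, end: date):
    # Yield blobs stored under the yyyy/MM/dd folders between start and end.
    day = start
    while day <= end:
        prefix = day.strftime("%Y/%m/%d/")
        yield from container.list_blobs(name_starts_with=prefix)
        day += timedelta(days=1)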
I struggled with this issue for too long before finally tracking down how to avoid/fix it. It seems like something that should be on StackOverflow for the benefit of others.
I had an SSRS report where the query worked fine and displayed the string results I expected. However, when I tried to add that field to the report, it kept showing "ERROR#". I was eventually able to find a little bit more info:
The Value expression used in [textbox] returned a data type that is not valid.
But, I knew my data was valid.
Found the answer here.
Basically, it's a problem with caching, and you need to delete the ".data" file that is created in the same directory as your report. Some also suggested copying the query/report to a new report, but that appears to be the hard way to achieve the same thing. I deleted the .data file for the report I was having trouble with and it immediately started working as expected.
After you preview the report, click the refresh button on the report and it will pull the data again, creating an updated rdl.data file.
Another solution to this issue is to click Refresh Fields in the Dataset Properties menu.
This will update the list of fields, and force SSRS to get new data, rather than relying on a cached version.
I want to know what possible errors can arise on the BigQuery server side during upload, even though the .CSV file I'm uploading contains perfect data. Can you list those errors?
Thanks.
Some of the common errors are:
Files must be encoded in UTF-8 format.
Source data must be properly escaped within standard guidelines for CSV and JSON.
The structure of records and the data within must match the schema provided.
Individual files must be under the size limits listed on our quota/limits page.
More information about BigQuery source data formats.
Check out our Data Loading cookbook for additional tips.
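Not from the original answer, but a hedged sketch of how such load errors surface when loading a CSV with the BigQuery Python client; the file path and destination table name are placeholders:

# Sketch: load a local CSV and print any server-side errors the job reports.
from google.cloud import bigquery

client = bigquery.Client()
job_config = bigquery.LoadJobConfig(
    source_format=bigquery.SourceFormat.CSV,
    skip_leading_rows=1,
    autodetect=True,
)

with open("data.csv", "rb") as f:  # placeholder file
    job = client.load_table_from_file(
        f, "my_project.my_dataset.my_table", job_config=job_config)

try:
    job.result()  # waits for the load to complete
except Exception:
    for err in job.errors or []:  # each entry has a 'reason' and a 'message'
        print(err["reason"], err["message"])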