How do I access GA4 Screen Resolution data using BigQuery? - google-bigquery

Where is the screen resolution data in BigQuery when it is linked to GA4? I cannot find the data in BigQuery, but I can see it in the GA4 UI. Thanks
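One way to check is to ask BigQuery itself which device-level fields the GA4 export actually contains (a minimal sketch; your-project and analytics_123456 are placeholders for your linked export dataset):

-- List every exported field path for one daily table, to see whether a
-- screen-resolution field exists in the export at all (placeholder names).
SELECT field_path, data_type
FROM `your-project.analytics_123456`.INFORMATION_SCHEMA.COLUMN_FIELD_PATHS
WHERE table_name = 'events_20230101'
  AND field_path LIKE 'device%';

-- Peek at a few of the device fields that are exported:
SELECT
  device.category,
  device.operating_system,
  device.web_info.browser,
  COUNT(*) AS events
FROM `your-project.analytics_123456.events_*`
WHERE _TABLE_SUFFIX BETWEEN '20230101' AND '20230107'
GROUP BY 1, 2, 3
ORDER BY events DESC;

If no screen-resolution field shows up in that list, then the figure you see in the GA4 UI is simply not part of the fields the export writes to BigQuery.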

Related

Inserting links into a SQL DB that lead to folders on a server/local disk

I am currently developing a program that will allow workers to view upcoming jobs that are saved in the database, using a SQL DB and displaying the data in a Java TableView. So far I have been able to save an image to a column, but every job has a lot of photos, so this option isn't feasible. I want the workers to be able to click a link that leads to all the photos relevant to the job. Any links or advice will be much appreciated.
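One common pattern (a sketch; the table and column names here are made up) is to store only the file path or URL per photo, rather than the image bytes themselves:

-- Hypothetical schema: store a link (path/URL) per photo instead of the
-- image bytes. The parent "job" table is assumed to exist already.
CREATE TABLE job_photo (
    photo_id   INT PRIMARY KEY,
    job_id     INT NOT NULL,          -- FK to the existing job table
    photo_path VARCHAR(500) NOT NULL  -- e.g. '\\fileserver\jobs\1042\site1.jpg'
);

-- The Java TableView can render photo_path as a clickable hyperlink
-- (e.g. via a custom TableCell) that opens the folder or file.
SELECT photo_path
FROM job_photo
WHERE job_id = 1042;

Storing paths keeps the database small and lets the filesystem serve the images; the trade-off is that links break if files are moved, so a UNC path or URL to a stable share is usually safer than a local disk path.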

Is there a way to find out how much data a Google Data Studio dashboard consumes from BigQuery?

I would like to know how much data Data Studio consumes when querying a view from BigQuery.
For example, if I have a dashboard that is getting its data from a view in BigQuery, how much data would it be using?
I have tried to look at the usage logs of BigQuery, but due to my lack of experience with the tools I have not been able to find a solution. I have been able to find the specific bytes processed for the data (the view from BigQuery) in question, but I don't know how much of it comes from Data Studio.
If you look at the Query History in BigQuery, you can search for queries that have used that view by typing in its name.
Queries from Data Studio will have strange-looking names, e.g.
COUNT(DISTINCT t0.yourVariable) AS t0_qt_QqufHrYw
If you click on the query you will see the amount of data processed and billed (Bytes processed & Bytes billed).
Bear in mind that each component of your report will have its own query (all issued at around the same time), so you may need to add them up to find the total bytes queried.
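If clicking through the Query History manually gets tedious, the same totals can be pulled with a query (a sketch; the region qualifier and view name are placeholders, and this assumes you have permission to read the project's job metadata):

-- Sum bytes processed/billed for recent queries that reference the view
-- ("your_view_name" and the region-us qualifier are placeholders).
SELECT
  DATE(creation_time) AS day,
  COUNT(*) AS queries,
  SUM(total_bytes_processed) AS bytes_processed,
  SUM(total_bytes_billed) AS bytes_billed
FROM `region-us`.INFORMATION_SCHEMA.JOBS_BY_PROJECT
WHERE job_type = 'QUERY'
  AND query LIKE '%your_view_name%'
  AND creation_time >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 7 DAY)
GROUP BY day
ORDER BY day DESC;

To isolate the Data Studio traffic you could additionally filter on the strange-looking aliases mentioned above, e.g. AND query LIKE '%t0_qt_%'.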

View MarkLogic SQL data in graph form and perform profiling on it

I have created a view for a JSON file in a MarkLogic database, and I am getting all the records of the JSON file in table form. I would like to know whether we can view this data in graph form and perform profiling on it as well. Does MarkLogic have a feature for representing the table data in graph form and performing profiling on it?
Attached below is a screenshot of the SQL data, which I would like to view graphically.
Thank you.
[Screenshot: SQL data to appear in graph form]
You need to use an external tool. Please see some suggestions in the SQL Data Modelling Guide.
The tools listed on that page (Tableau, Qlik, Cognos) are not exhaustive; you also have options such as Excel and Power BI, for example.

Performance enhancement when using Direct Query to get data from SQL Server in Power BI

I am using Power BI Desktop to create PBIX files, which I later upload to Azure Power BI Embedded (a PaaS solution). The PBIX gets its data from Azure SQL Server in Direct Query mode. I want to increase the efficiency of the queries that Power BI Embedded sends to SQL to get my data.
My PBIX contains relationships between many tables, has RLS (Row Level Security) configured, and is taking a lot of time to load. Please advise whether the following options will help me increase the efficiency of the queries, and thus reduce the time the PBIX takes to load:
Using Advanced Options in the Get Data dialog box: inserting a SQL statement here will get only specific data instead of the entire table (see the sketch after this list). This will reduce the data I see in PBI Desktop, but will it really increase the efficiency of the queries sent to SQL for the creation of charts? E.g. say the PBIX needs to create a join between two tables: if I use the advanced options, will the join be done on the reduced data?
Using Filters to filter out unwanted rows of the table: again, like the option above, this will reduce the data I see in PBI Desktop, but will it really increase the efficiency of the queries sent to SQL for the creation of charts? E.g. if I use filters, will the join be done on the reduced data?
[EDIT - I have added one more option below]
Are the queries for charts on different pages of a PBIX file sent to SQL only when the page is loaded? If this is true, then I can separate my charts onto different pages to reduce the number of queries sent to SQL at once.
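For reference, here is the kind of statement meant in the first option above: a plain SQL query pasted into the SQL statement box under Get Data > Advanced options (a sketch; the table and column names are made up):

-- Hypothetical source query for the "SQL statement" box under
-- Get Data > Advanced options: pull only the needed columns and rows
-- instead of the whole table.
SELECT o.OrderId, o.CustomerId, o.OrderDate, o.Amount
FROM dbo.Orders AS o
WHERE o.OrderDate >= '2023-01-01';

As far as I understand, Power BI wraps such a statement as a subquery when it generates chart queries in Direct Query mode, so later joins and filters should operate on this reduced result; checking the generated SQL (e.g. with Performance Analyzer or a SQL Server trace) is the way to confirm this for your model.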

Tableau data extract refresh from Google BigQuery takes very long

We are very pleased with the BigQuery <-> Tableau Server combination with a live connection. However, we now want to work with a data extract (500 MB) on Tableau Server (since this data source is not too big and is used very frequently). The extract takes too much time to refresh (1.5h+). We noticed that only 0.1% of that is query time and the rest is data export. Since the Tableau Server is on the same platform and in the same location, latency should not be a problem.
This is similar to the slow export of a BigQuery table to a single file, which can be solved by using the "daisy chain" option (wildcards). Unfortunately, we can't use similar logic with a Google BigQuery data extract refresh in Tableau...
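For context, this is what the wildcard ("daisy chain") export looks like in current BigQuery SQL (a sketch; the bucket, project, and table names are placeholders, and the same thing can be done with the bq extract command and a wildcard URI):

-- Sharded export: the * wildcard is what lets BigQuery write many
-- files in parallel instead of one slow single file.
EXPORT DATA OPTIONS (
  uri = 'gs://your-bucket/extracts/your_table-*.csv',
  format = 'CSV',
  overwrite = true,
  header = true
) AS
SELECT *
FROM `your-project.your_dataset.your_table`;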
We have identified some approaches, but are not pleased with our current ideas:
Working with incremental refresh: rows in our existing BigQuery table can change, and these changes can only be applied in Tableau by doing a full refresh.
Exporting the BigQuery table to GCS using the daisy-chain option and building a Tableau data extract using the Tableau SDK: this would result in quite a lot of overhead...
Writing a Dataflow job using a custom sink for Tableau Server (data extracts).
Experimenting with a Tableau web connector that communicates directly with the BigQuery API: I don't think this will be faster. I didn't see anything about parallelizing calls with the Tableau web connector, but I haven't tried this approach yet.
We would prefer a non-technical option, to limit maintenance... Is there a way to modify the Tableau connector to make use of the "daisy chain" option for BigQuery?
You've uploaded the data to BigQuery. Can't you just use the input for that load job (a CSV, perhaps) as the input for Tableau?
When we use Tableau and BigQuery we also notice that extracts are slow, but we generally don't use them because you lose BigQuery's power. We start with a live data connection at first, and then (if needed) convert this into a custom query that aggregates the data into a much smaller dataset, which extracts in just a few seconds.
Another way to achieve higher performance with BigQuery and Tableau is aggregating or joining tables beforehand. JOINs on huge tables can be slow, so if you use a lot of them you might consider generating a denormalised dataset that does all of the joining first. You will get a dataset with a lot of duplicates and a lot of columns, but if you select only what you need in Tableau (hide unused fields!) then the unused columns won't count towards your query cost.
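A sketch of both ideas combined (all names are made up): materialise a pre-joined, pre-aggregated table once in BigQuery and point the Tableau extract at it instead of the raw tables:

-- Materialise a small, denormalised summary once in BigQuery; the
-- Tableau extract then only has to pull this table, not the raw data.
CREATE OR REPLACE TABLE `your-project.reporting.daily_sales_summary` AS
SELECT
  o.order_date,
  c.country,
  c.segment,
  COUNT(*)      AS orders,
  SUM(o.amount) AS revenue
FROM `your-project.raw.orders` AS o
JOIN `your-project.raw.customers` AS c
  ON c.customer_id = o.customer_id
GROUP BY o.order_date, c.country, c.segment;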
One recommendation I have seen is similar to your point 2, where you export the BQ table to Google Cloud Storage and then use the Tableau Extract API to create a .tde from the flat files in GCS.
This was from an article on the Google Cloud site, so I'd assume it is best practice:
https://cloud.google.com/blog/products/gcp/the-switch-to-self-service-marketing-analytics-at-zulily-best-practices-for-using-tableau-with-bigquery
There is an article here that provides a step-by-step guide to achieving the above.
https://community.tableau.com/docs/DOC-23161
It would be nice if Tableau optimised the BQ connector for extract refreshes using the BigQuery Storage API. We too have our Tableau Server environment in the same GCP zone as our BQ datasets and experience slow refresh times.