I want to add a column to an existing UIBB schema
(SOLMAN_WORKCENTER -> TEST SUITE ANALYTICS -> Defect Analysis).
I tried to enhance the Component Configuration.
I thought I could simply add the additional columns I need through the (+) Column wizard.
But the column I need is not among the pre-included ones.
What is the correct way to add extra columns to a UIBB schema?
Our organization has hundreds of Periscope dashboards that are generated from a database named "Animal".
Now suppose there is a table named "Puppy" with a column named "is_spotted". Is there an easy way to find out whether "is_spotted" has been used to create a dashboard, without going through every single dashboard?
I would like to add descriptions to each field in a table. My issue is that we are using dbt, and this tool recreates the table each time you run a job, which deletes any descriptions that were there. I am able to control the data types by casting the fields in the last SELECT statement, but I am not sure whether I can add the descriptions using SQL.
I have been googling for a while and I cannot tell whether descriptions can be added with SQL this way.
I've thought of a workaround, which would be to create the table and then insert into it, but that is considered bad practice with dbt.
Thanks!
Just want to post the solution in case someone has the same problem. dbt didn't update the descriptions in BigQuery itself. However, they released a new feature last month: https://github.com/fishtown-analytics/dbt/releases/tag/v0.17.0
Docs can be generated as usual and BigQuery will show the descriptions of tables and columns. You only need to add the following under the models: key of your dbt_project.yml file:
+persist_docs:
  relation: true
  columns: true
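If you would rather scope this to a single model instead of the whole project, the same persist_docs setting can (as far as I know) also be set in a config block at the top of that model's .sql file. A minimal sketch, with a made-up model name:

{{ config(
    persist_docs={"relation": true, "columns": true}
) }}

-- 'stg_events' is a placeholder for whatever upstream model you select from
select * from {{ ref('stg_events') }}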
You can add descriptions in two ways.
Using the schema.yml file
version: 2

models:
  - name: events
    description: This table contains clickstream events from the marketing website
    columns:
      - name: event_id
        description: This is a unique identifier for the event
        tests:
          - unique
          - not_null
      - name: user-id
        quote: true
        description: The user who performed the event
        tests:
          - not_null
You can also define descriptions as reusable Jinja docs blocks, which live in a Markdown (.md) file in your models directory:
{% docs table_events %}
This table contains clickstream events from the marketing website.
The events in this table are recorded by [Snowplow](http://github.com/snowplow/snowplow) and piped into the warehouse on an hourly basis. The following pages of the marketing site are tracked:
- /
- /about
- /team
- /contact-us
{% enddocs %}
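A docs block on its own is not attached to anything; you then reference it from a description field in schema.yml with the doc() function. A small sketch, reusing the events model from above:

version: 2

models:
  - name: events
    description: '{{ doc("table_events") }}'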
I am a new employee at the company. The person before me built some tables in BigQuery. I want to investigate the query that created a particular table.
Things I want to check using that query are:
What joins were used?
What other tables were used to build the table in question?
I have not worked with BigQuery before, but I did my due diligence by reading tutorials and the documentation. I could not find anything related there.
A brief outline of the steps:
Step 1 - gather all query jobs of that user using the Jobs.list API - you must have the Is Owner permission on the respective projects to see someone else's jobs.
Step 2 - extract only those jobs run by the user you mentioned that reference your table of interest - use the destination table attribute.
Step 3 - for those extracted jobs, simply check the respective queries, which show you how the table was populated.
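If walking through the API output is too manual, the same information can also be pulled with one query against BigQuery's INFORMATION_SCHEMA.JOBS_BY_PROJECT view, provided the jobs are still within that view's retention window. A sketch, where the region qualifier and the dataset/table names are placeholders you would replace:

-- Query jobs that wrote to a specific destination table, newest first
SELECT
  creation_time,
  user_email,
  query
FROM `region-us`.INFORMATION_SCHEMA.JOBS_BY_PROJECT
WHERE job_type = 'QUERY'
  AND destination_table.dataset_id = 'my_dataset'
  AND destination_table.table_id = 'my_table'
ORDER BY creation_time DESC;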
Hth!
I had been looking for an answer for a long time and finally found it:
Go to the three-bar (hamburger) menu at the top left.
From there, go to the Analytics section.
Under BigQuery you will find the Scheduled queries option; click on that.
In the filter box you can enter keywords and retrieve the query that builds the table.
For me, I was able to go through my query history and find the query I used.
Step 1.
Go to the BigQuery UI; at the bottom there are Personal history and Project history tabs. If you can use the same account that executed the query, I recommend Personal history.
Step 2.
Click on the tab and you will see a list of queries ordered by most recent run. Check the time the table was created and find a query that ran just before that creation time. Since the query runs first and then creates the table, the timestamps differ slightly; for me the difference was only a few seconds.
Step 3.
Once you find the query used to create the table, simply copy it. And you're done.
I am reviewing a coworker's sqlgen job and I am unable to figure out what this means in the table generation settings:
Specify number of rows by: "Same as mapped data"
My coworker has this selected on every table; I just need to know what it means. I have looked through the documentation and been unable to find a definition for it.
I am on version 2 at the moment. Probably not the best question, but I need an answer: he is gone for a long period of time and our data is not working correctly with this tool.
The "Same as mapped data" option is only available when you're using an existing table or view as a data source - it just means that the generator will insert all the rows from the source table or view. The other options are:
Numeric value - a set number of rows
Proportion of table - a proportion of the source table/view
Generation time - as much data as the tool can generate in a set time
There's a little more about using an existing table/view as a data source here on the website, but it doesn't have much else useful in it.
Suppose I have two schemas, fy0910 and fy1011. When I view the report in schema fy0910 or fy1011, I have to change the schema reference for the subreport every time. After setting the schema reference and viewing the report in that specific schema there is no problem with the subreport, but when I switch to the other schema I have to change the subreport's schema reference again. For your information, there is absolutely no problem with the main report. Please suggest a solution; I would be thankful.
What I have done with my reports is to generate a generic result table schema, such as FYDetails, instead of your fy0910 and fy1011. Your query against the database can be exactly the same, but with date limit parameters that return only the respective fiscal year. Then add the FYDetails schema to your report as the binding basis, do a global search/replace on your report file from fy0910 to FYDetails, and save.
Then, whenever you run your queries, just make sure you change the name of the data table result and you're good to go. I almost never hard-code a report to a specific table name that is the direct result of a specific filtered set.
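A rough illustration of the idea, assuming made-up table, column and parameter names (the point is simply that one parameterised query fills the generic FYDetails result no matter which fiscal year is requested):

-- The report binds to this result set as the generic FYDetails schema
-- instead of a year-specific schema such as fy0910 or fy1011.
-- SalesTransactions, TransactionDate, @FYStart and @FYEnd are placeholders.
SELECT t.*
FROM SalesTransactions AS t
WHERE t.TransactionDate >= @FYStart
  AND t.TransactionDate <  @FYEnd;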
HTH