Redash: how to change the dashboard datasource?

I'm new to Redash and trying to use it to analyze data.
I have created several queries and added them to a dashboard.
Is it possible in Redash to change the data source at the dashboard level instead of changing the data source of each query one by one? (All the data sources have the same data structure, so the queries can be shared between them.)

Nope, not yet. The queries underneath are tied to data sources, not the dashboard.
If you want to bulk update, you should be able to update the queries table in Redash's Postgres metadata database. The data_source_id column should allow a bulk update.
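For example, a bulk update might look something like this, run against Redash's internal Postgres database (the data source IDs below are placeholders; back up the metadata database before touching it):

-- Repoint every query currently on data source 1 at data source 2.
-- IDs are illustrative; check the data_sources table for the real ones.
UPDATE queries
SET data_source_id = 2
WHERE data_source_id = 1;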

Related

Is There A Way To Append Deleted and Updated Data To a History Table

Alright, so I am working on a project at work and I need to append data to a new history table every time the data in our other table is updated or deleted. However, we get access to our SQL tables from another company, and they only gave us read-only privileges; we can only view the tables through Microsoft Power BI and Excel.
So I wanted to see if there was any way of creating a trigger of some sort.
Thank You
From your question, you are trying to do an incremental load of data, to be able to append new data to a table. You are also looking for some sort of archive process to a history table, via a trigger. Incremental loads are a Power BI Premium feature only, and moving the data based on a trigger is not supported in Power BI at all.
I would recommend trying to get better access to the SQL database, or using Excel to get the data: dump it into Excel/CSV files, create a process to load the new file(s) and work out the changes using some other database/ETL process, then output the results to a file or table that Power BI can read from.
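As a minimal sketch of that change-detection step, assuming SQL Server, a staging table loaded from each new export, and a kept copy of the previous load (all table and column names here are made up):

-- Rows that were deleted or updated upstream go to the history table.
INSERT INTO orders_history (order_id, amount, status, captured_at)
SELECT c.order_id, c.amount, c.status, GETDATE()
FROM current_orders AS c
LEFT JOIN staging_orders AS s ON s.order_id = c.order_id
WHERE s.order_id IS NULL          -- deleted upstream
   OR s.amount <> c.amount        -- updated upstream
   OR s.status <> c.status;

-- Then replace the kept copy with the new load.
TRUNCATE TABLE current_orders;
INSERT INTO current_orders SELECT * FROM staging_orders;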
Hope that helps

Google BigQuery Metadata

I am trying to populate and collect metadata for the business in GBQ. Basically, the business doesn't have access to tables; we create authorised views for them that they use in their reports.
The problem is, if I populate the column description field in the table, the views based on that table won't inherit the metadata; the same goes for sharded tables.
There is going to be a degree of data entry to populate the metadata, but I would really like to be able to share it across related views.
Is it possible to automate BQ metadata in any way?
There are some different options to both get information about a table or a view (https://cloud.google.com/bigquery/docs/tables#getting_information_about_tables), and to update that information (https://cloud.google.com/bigquery/docs/updating-views#update-description).
Depending on your specific case, you can use the bq command line or a programming-language SDK to automate the process of retrieving and updating BigQuery's metadata.
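For instance, if you script the updates as BigQuery DDL, setting descriptions on a base table's columns and on a view might look roughly like this (dataset, table, view, and column names are all illustrative):

-- Describe a column on the base table.
ALTER TABLE mydataset.base_table
ALTER COLUMN customer_id SET OPTIONS (description = 'Unique customer identifier');

-- Describe the authorised view itself.
ALTER VIEW mydataset.authorised_view
SET OPTIONS (description = 'Authorised view over base_table for business reporting');

Copying column descriptions from a base table onto each related view can then be scripted by reading the table's schema and patching the views' schemas with the bq tool or an SDK.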

Power BI maxing connections to DB :( Can we populate multiple tables with a single Sql.Database call?

I am assisting my team in troubleshooting an issue with a Power BI report we are developing. We have a rather complex data model in the source SQL database, so we have created 5-6 views to better manage the data. We have a requirement to use DirectQuery, as one key requirement for the report is that the most up-to-date data in the database is visible, rather than there being a delay from loading/caching the data. We also have a single data source, just the one database.
When we run the report, we see a spike of 200-500 connections to the database from the specific user for the report data source, and those connections don't close. This is clearly an issue and unsustainable for any product. We have a ticket open with Microsoft premium support to address the connections not closing, but in the meantime, I'm wondering if we're doing something wrong inside the report?
When I view the queries in the query editor, we basically have one query for each view, and it's a simple:
let
    Source = Sql.Database(Server, Database),
    query_view_name = Source{[Schema ......]}[Data]
in
    query_view_name
(I don't have the raw code in front of me, but that's the gist of it.)
It seems to me, based on analytics in the database, that Sql.Database is opening a new connection every time a view is called. With 5-6 views, that's 5-6 connections at a minimum; then each time a filter is changed there are more connections, and it compounds from there until the database connection pool is maxed out.
Is there a way to populate all the tables using a single connection to the database? Why would Power BI be using so many connections? Can we populate multiple tables in the advanced query editor? Using DirectQuery, are there any suggestions for what we can look at/troubleshoot/change in the report?
Thanks!
Power BI establishes multiple connections to the database to load multiple tables in parallel. If you don't want this, you can turn it off under Options -> Current File -> Data Load -> Enable parallel loading of tables.
Keep in mind that turning this option off will most likely increase the model loading time.
You may also want to take a look at the Maximum connections per data source option under Options -> Current File -> DirectQuery, and the whole Query reduction section beneath it. Turning on Slicer selection and Filter selection on that page is highly recommended for cases like yours, but you need to train your users to click Apply to see the results.
Ok.
We have a rather complex data model in the source SQL database, so we have created 5-6 views to better manage the data.
That's fine.
We have a requirement to use DirectQuery,
But now you're going to have a bad time. DirectQuery + complex views is a recipe for poor performance. Queries against your views will add joins, potentially across the whole model for filter context, as well as Measure and Calculated Column expressions. And these queries change dynamically based on the user's interaction with the report, so it's very difficult to see and test all the possible queries.
Basic guidance is to use import mode against views, and only use DirectQuery against properly-indexed tables. To address data freshness, you can replace the views with tables you load and keep up-to-date from your application, or perhaps use an Indexed View, etc.
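As a rough sketch of the indexed-view route (names invented; SQL Server indexed views require SCHEMABINDING, two-part names, COUNT_BIG(*) when grouping, and a unique clustered index, and the aggregated column is assumed NOT NULL):

CREATE VIEW dbo.v_SalesSummary
WITH SCHEMABINDING
AS
SELECT ProductId,
       COUNT_BIG(*) AS RowCnt,      -- mandatory for grouped indexed views
       SUM(Amount)  AS TotalAmount
FROM dbo.Sales
GROUP BY ProductId;
GO
-- The clustered index materializes the view, so DirectQuery hits an indexed structure.
CREATE UNIQUE CLUSTERED INDEX IX_v_SalesSummary
    ON dbo.v_SalesSummary (ProductId);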

Custom SQL queries in PerformancePoint Dashboard Designer

Is it possible to utilize the data that is returned by a custom SQL query on a database to create the dashboard elements?
So far I was able to connect to a specific table and use it in as-is form (no interaction with multiple tables).
Any pointers in this topic would be much appreciated.
Thanks,
IP.
I'd recommend creating a View in your database and creating a datasource that connects to that view. The view would join all the tables you need.
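A minimal sketch of such a view, with made-up table and column names:

-- One view that joins everything the dashboard needs.
CREATE VIEW dbo.v_DashboardData AS
SELECT s.SaleId, s.SaleDate, s.Amount, p.ProductName, r.RegionName
FROM dbo.Sales AS s
JOIN dbo.Products AS p ON p.ProductId = s.ProductId
JOIN dbo.Regions AS r ON r.RegionId = s.RegionId;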
Edit:
If you don't have permissions to create a view, you still have some options:
Stand up a SQL Server box you control, create a linked server to the one you don't, and create the view on the box you control (see the sketch after this list).
Create an Access database with linked tables pointing at the tables you want, create your view there, and use that as your ODBC data source in Dashboard Designer.
Give the DBA the query you want and ask them to create the view for you. If you have permission to run the query, there should be no problem creating the view (better performance, stability of the data, etc.).
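For the linked-server option, the setup might look roughly like this (server, database, and table names are all placeholders):

-- Register the remote server you don't control.
EXEC sp_addlinkedserver
    @server = N'REMOTESRV',
    @srvproduct = N'',
    @provider = N'SQLNCLI',
    @datasrc = N'remote-host-name';
GO
-- Build the joined view locally against the linked server's tables.
CREATE VIEW dbo.v_CombinedData AS
SELECT o.OrderId, o.Amount, c.CustomerName
FROM REMOTESRV.RemoteDb.dbo.Orders AS o
JOIN REMOTESRV.RemoteDb.dbo.Customers AS c ON c.CustomerId = o.CustomerId;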

Create a database from another database?

Is there an automatic way in SQL Server 2005 to create a database from several tables in another database? I need to work on a project and I only need a few tables to run it locally, and I don't want to make a backup of a 50 gig DB.
UPDATE
I tried Tasks -> Export Data in Management Studio, and while it created a new database with the tables I wanted, it did not copy over any table metadata, i.e. no PK/FK constraints and no identity data (even with Preserve Identity checked).
I obviously need these for it to work, so I'm open to other suggestions. I'll try that database publishing tool.
I don't have Integration Services available, and the two SQL Servers cannot directly connect to each other, so those are out.
Update of the Update
The Database Publishing Tool worked. The SQL it generated was slightly buggy, so a little hand editing was needed (it tried to reference nonexistent triggers), but once I did that I was good to go.
You can use the Database Publishing Wizard for this. It will let you select a set of tables with or without the data and export it into a .sql script file that you can then run against your other db to recreate the tables and/or the data.
Create your new database first. Then right-click on it and go to the Tasks sub-menu in the context menu. You should have some kind of import/export functionality in there. I can't remember exactly since I'm not at work right now! :)
From there, you will get to choose your origin and destination data sources and which tables you want to transfer. When you select your tables, click on the advanced (or options) button and select the check box called "preserve primary keys". Otherwise, new primary key values will be created for you.
I know this method can hardly be called automatic, but why don't you use a few simple SELECT INTO statements?
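Something along these lines, with placeholder database and table names (note that SELECT INTO copies the data and column types, but not keys, constraints, indexes, or identity settings):

-- Copy just the tables you need into the new local database.
SELECT * INTO NewDb.dbo.Customers FROM BigDb.dbo.Customers;
SELECT * INTO NewDb.dbo.Orders FROM BigDb.dbo.Orders;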
Because I'd have to reconstruct the schema, constraints and indexes first. That's the part I want to automate... getting the data is the easy part.
Thanks for your suggestions everyone, looks like this is easy.
Integration Services can help accomplish this task. The tool provides advanced data transformation capabilities, so you will be able to get the exact subset of data that you need from the large database.
Assuming such data is needed for testing/debugging, you may consider applying Row Sampling to reduce the amount of data exported.
Create a new database
Right-click on it,
Tasks -> Import Data
Follow the instructions