Tableau - Lock 2 datasource blending settings

I'm using Tableau 10.1 and have defined a workbook which uses 2 data sources (an Oracle connection and a Salesforce connection), which have been related to each other via the 'Edit Relationship' function.
Since this procedure performs 'a sort of' join operation between the two data sources, I'd like to save the two connected data sources as a single entity or, if this is not possible, to somehow 'lock' the relationship settings.
My objective is, in fact, to make the two data sources available on a Tableau Server to other members of my organization, but they should find the data ready and must not be able to modify the relationship.
So: is it possible to create a single data source from them (consider that one of them is a Salesforce connection, and the multi-connection feature is not available)? Is it possible to 'lock' the data blending so that no one can modify the relationship? Making an empty workbook with the right blending settings available doesn't look like a good solution to me...
Thanks in advance for your answers.

Since you are on version 10.1, use cross-database joins instead. This allows you to create a single data source and publish it as you want, and it will include the join between the two different databases.

I'm a new CDS/Dataverse user and am wondering why there are so many columns in new tables?

I'm new to CDS/Dataverse, coming from the SQL Server world. I created a new Dataverse table and there are over a dozen columns in my "new" table (e.g. "status", "version number"). Apparently these are added automatically. Why is this?
Also, there doesn't seem to be a way to view a grid of data (like I can with SQL Server) for quick review/modification of the data. Is there a way to view data visually like this?
Any tips for a new user, coming from SQL Server, would be appreciated. Thanks.
Edit: clarified the main question with examples (column names). (thanks David)
I am also new to CDS/Dataverse, so the following is a limited understanding from what I have explored so far.
The idea behind Dataverse is that it gives you a pre-built schema that follows best practices for you to build off of, so that you spend less time worrying about designing a comprehensive data schema, creating tables, and relating them all together, and more time building applications in Power Apps.
For example, amongst the several dozen tables it generates from the get-go are Account and Contact. The former is for organisational entities and the latter is for single-person entities. You can go straight to adding your records in one of these tables and take advantage of the bits of Power Apps functionality already hooked up to them. You do not have to spend time thinking up column names, creating the table, making sure it hooks up to all the other Dataverse tables, testing whether the Power Apps functionality works with it correctly, etc.
It is much the same story with the automatically generated columns for new tables: they are all there to maintain a best-practice schema and functionality for Power Apps. For example, the extra columns give you good auditing with the data you add, including when a row was created, modified, who created the row etc. The important thing is to start from what you want to build, and not get too caught up in the extra tables/columns. After a bit of research, you'll probably find you can utilise some more tables/columns in your design.
Viewing and adding data is very tedious -- it seems to take 5 clicks and several seconds to load the bit of data you want, which is eons in comparison to doing it in SQL Server. I believe this is due to Microsoft's attempt to make it "user friendly".
Anyhow, the standard way to view data, starting from the main Power Apps view is:
From the right-hand side pane, click Data
Click Tables
From the list of tables, click your table
Along the top row, click Data
There is an alternative method that allows you to view the Dataverse tables in SSMS – see link below:
https://www.strategy365.co.uk/using-sql-to-query-the-common-data-service/
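For illustration, once SSMS is connected to the environment's SQL endpoint, a query could look roughly like the following. This is only a sketch and assumes the TDS (SQL) endpoint is enabled for your environment; the account table and the columns shown are standard, but treat the details as illustrative.

```sql
-- Sketch: querying Dataverse through the read-only TDS (SQL) endpoint from SSMS.
-- Assumes SSMS is connected to your environment URL (e.g. yourorg.crm.dynamics.com).
SELECT TOP (50)
    name,          -- primary name column of the standard Account table
    createdon,     -- one of the automatically added audit columns
    modifiedon
FROM account       -- logical (lower-case) table name exposed by the TDS endpoint
ORDER BY modifiedon DESC;
```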
To import data in bulk:
Click on Data from the top drop-down menu > Get data.
Importing data from Excel is free. Importing from other sources, including SQL Server, is, I believe, a paid service (although I think you may be able to do this on the free Community Plan).

SSAS cube with multiple DB

I have 3 databases with the same structure, but different data, since they are from different clients.
Now, I have an existing SSAS project. Its Data Source Views, Cubes and Dimensions can only use or access one DB.
What I want is to be able to use multiple databases with the same structure, and create a cube using them.
Each client must also be able to use the cube, but they can only see their own data.
Are these possible? Can you please provide insights and some useful references?
Easy Solution
The easiest way to solve this would be to just have three Analysis Services databases. Setup would be easy: you would have three structurally identical databases, and no need to manage security within the cubes, only access to each cube. It is easy to manage, and it is difficult to make errors that allow users to access data they should not see. And as nobody should be allowed to access data from other companies, there is no need for one common cube.
Just deploy your project three times using a different Analysis Services database name.
Then change the data source object of the deployed databases to point to the different relational databases.
For the first step, in Business Intelligence Development Studio, right click on the project node in Solution Explorer, select the bottom entry ("Properties"), and then select "Deployment". Here, you can enter the server to deploy the solution to, as well as the database name. After closing the dialog, right click on the project node again, and select Deploy. Repeat this step, using three different database names.
Then, connect to your Analysis Services server in SQL Server Management Studio, open each database, and edit the data source object of each database to point to its relational database.
After that, re-process the Analysis Services database.
Alternatively, you can also do everything in BIDS, i.e. change the data source there between changing the target database for deployment and deploying, and then, after deployment, re-process the Analysis Services database if necessary.
If you assume you will need to change and deploy the cube definition several times, you could probably make use of configurations, which you can edit in the project properties dialog using the "Configuration Manager" button. You would have three configurations, one for each target Analysis Services database. You could then select one of the configurations from the dropdown list in the toolbar for each deployment, without the need to edit the properties again and again.
If you need to do this often, I think it would not be difficult to automate the steps to change the database and reprocess the cube, either via XMLA, via AMO, or in PowerShell. But implementing that would be another question.
More Complex Solution
If you really want to have everything in one cube, then you will have to have a union of the tables from the different sources in the data source view. If all three relational databases are on the same SQL Server instance, you can define this either as a named query in the data source view, or as a view in one of the databases, maybe even better as a view or table in a separate relational database. You can access a table or view from another database running in the same instance of SQL Server in the form NameOfDB.Schema.Tablename.
In case these databases are on different instances, you could use linked servers.
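As a rough sketch of what that union could look like (the database, table, and column names below are invented), with a client key added so that rows from the three sources stay distinguishable:

```sql
-- Sketch of a consolidating view; ClientDB1/2/3 and dbo.FactSales are placeholder names.
CREATE VIEW dbo.vw_FactSales_AllClients
AS
SELECT 1 AS ClientKey, s.OrderDateKey, s.ProductKey, s.SalesAmount
FROM   ClientDB1.dbo.FactSales AS s        -- three-part name: other DB on the same instance
UNION ALL
SELECT 2 AS ClientKey, s.OrderDateKey, s.ProductKey, s.SalesAmount
FROM   ClientDB2.dbo.FactSales AS s
UNION ALL
SELECT 3 AS ClientKey, s.OrderDateKey, s.ProductKey, s.SalesAmount
FROM   ClientDB3.dbo.FactSales AS s;
-- For databases on other instances, the same idea works through a linked server
-- and four-part names, e.g. LinkedServer2.ClientDB2.dbo.FactSales.
GO
```

The ClientKey column is also a convenient hook for keeping dimension keys from different sources distinct, which is the key-management point below.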
And of course, you will have to manage the keys in these different databases so that the same dimension entry has the same key, and different dimension entries have different keys. And you will have to set up security in the cube so that no user can see data that is not meant to be seen.
While you could use different data source objects in Analysis Services for different tables or named queries, each individual table or named query only uses one of them, as it is ultimately a single SQL statement that is sent to that source. And each dimension needs to be based on a single data source view object such as one named query, view, or table. For fact tables you can get around this using partitions, but not for dimensions.

Hide / remove columns from certain users in a SSAS tabular model

I've got a nice SSAS tabular model with users processing away. Certain users need access to certain information, such as confidential info (e.g., SS numbers), that should not be visible to everyone. How should I handle this?
This indicates that there is no way to use roles to remove columns, only rows. Is my only option to make a copy of the model and maintain both? This can't be such an edge case...
I guess I can jury-rig something with an SCM fork and code generation, but I'd rather not go down that road.
Alternatively, is there any way to hide the columns (per user/role), so that at least they don't show up in client tools?
One approach that requires very little additional development is described in the following blog post: http://blog.westmonroepartners.com/a-workaround-for-column-security-in-the-sql-server-analysis-services-bism-tabular-model/
The blog contains a link to an SSIS package which will replicate an existing cube, with the exception of the sensitive data columns. The users who cannot view the sensitive data columns can be given access to the second cube that does not contain sensitive data columns.
One way to achieve this is to create Perspectives. You can create different perspectives for different groups of users, and end users can then connect to their specific perspective.

Sync multiple and different DBs to one DB (SQL Server 2008)

I need suggestions on how I could accomplish this...
I have 3 employee solutions,
each with its own DB.
What I'm trying to do is unify all three of these DBs into one. I would also like to be flexible enough to add other employee solutions in the future.
I would also like to use SSIS.
What I've been thinking is creating an SSIS package that opens a DB and imports all its data. This SSIS package would use "something" that describes the column mapping of each DB, so that SSIS knows which columns to import...
I tried to be as clear as possible... Any ideas?
Thanks in advance
So if you add a new employee to the central DB, it should be pushed out to the other three. If the other 3 have different schemas and different platforms, then when you add an employee you need to work out which DB it is going to, and you need custom insertion code for each DB. Also, you need to think about what happens if the same person is accidentally added to the central and a remote DB with slightly different information: you'll end up with a duplicate in the remote DB. This is more of a master data application than a reporting application, if I understand your requirement correctly.
Your first step is to come up with a schema for your central DB, and this has to accommodate all of the information in the three remote DBs. Then you need to think about rules for merging information across all these DBs. I.e. if one DB has 'Eye Colour' and the other two don't, and you then create an employee with an eye colour in your central DB, but the target DB doesn't have that field, is that OK? What happens if someone changes the DOB in a source DB - is that meant to update the central DB? What if you do it in both DBs at the same time - who is the winner?
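To make the "who is the winner" question concrete, the central load for one source often ends up looking something like the sketch below. All names are invented, and both the matching rule and the "last change wins" policy are exactly the business decisions you still have to make:

```sql
-- Sketch: merging one source system's staged employees into the central DB.
-- staging.Employee_SourceA, dbo.Employee and their columns are placeholder names.
MERGE dbo.Employee AS tgt
USING staging.Employee_SourceA AS src
    ON tgt.NationalIdNumber = src.NationalIdNumber        -- the matching rule is a business decision
WHEN MATCHED AND src.ModifiedDate > tgt.ModifiedDate THEN
    UPDATE SET tgt.DateOfBirth  = src.DateOfBirth,        -- "last change wins" is only one possible policy
               tgt.EyeColour    = src.EyeColour,
               tgt.ModifiedDate = src.ModifiedDate
WHEN NOT MATCHED BY TARGET THEN
    INSERT (NationalIdNumber, DateOfBirth, EyeColour, ModifiedDate)
    VALUES (src.NationalIdNumber, src.DateOfBirth, src.EyeColour, src.ModifiedDate);
```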
What is the business problem that you are trying to solve?

Ideas for Combining Thousand Databases into One Database

We have a SQL server that has a database for each client, and we have hundreds of clients. So imagine the following: database001, database002, database003, ..., database999. We want to combine all of these databases into one database.
Our thought is to add a siteId column with values 001, 002, 003, ..., 999.
We are exploring options to make this transition as smoothly as possible. And we would LOVE to hear any ideas you have. It's proving to be a VERY challenging problem.
I've heard of a technique that would create a view that would match and then filter.
Any ideas guys?
Create a client database id for each of the client databases. You will use this id to keep the data logically separated. This is the "site id" concept, but you can use a derived key (identity field) instead of manually creating these numbers. Create a table that has database name and id, with any other metadata you need.
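A minimal sketch of that metadata table (the names are just illustrative):

```sql
-- Sketch of the lookup table that maps each client database to its id.
CREATE TABLE dbo.ClientDatabase (
    ClientDatabaseId int IDENTITY(1,1) PRIMARY KEY,   -- derived key instead of hand-numbered 001, 002, ...
    DatabaseName     sysname NOT NULL UNIQUE,         -- e.g. 'database001'
    ClientName       nvarchar(200) NULL,              -- any other metadata you need
    CreatedOn        datetime2 NOT NULL DEFAULT SYSUTCDATETIME()
);
```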
The next step would be to create an SSIS package that gets the ID for the database in question and adds it to the tables that have to have their data logically separated. You can then run that same package over each database, looking up the ID for the database in question.
After you have a unique id for the data and have imported it, you will have to alter your apps to fit the new schema (actually before, or you are pretty much screwed).
If you want to do this in steps, you can create views or functions in the different "databases" so the old client can still hit the client's data, even though it has been moved. This step may not be necessary if you deploy with some downtime.
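For example, a compatibility view created in one of the old client databases might look roughly like this (all names invented):

```sql
-- Sketch: a view in the old client database that exposes only that client's rows
-- from the combined table, so legacy code keeps working during the transition.
CREATE VIEW dbo.Orders
AS
SELECT o.OrderId, o.OrderDate, o.Amount      -- note: the new siteId column is deliberately not exposed
FROM   CombinedDB.dbo.Orders AS o
WHERE  o.SiteId = 42;                        -- this client's id from the lookup table
GO
```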
The method I propose is fairly flexible and can be applied to one client at a time, depending on your client application deployment methodology.
Why do you want to do that?
You can read about Multi-Tenant Data Architecture and also listen to SO #19 (around 40-50 min) about this design.
The "site-id" solution is what's done.
Another possibility that may not work out as well (but is still appealing) is multiple schemas within a single database. You can pull common tables into a "common" schema and leave the customer-specific stuff in customer-specific schemas. In some database products, however, each schema is -- effectively -- a separate database. In other products (Oracle and DB2, for example) you can easily write queries that work across multiple schemas.
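In SQL Server terms that might look roughly like this (schema and table names invented):

```sql
-- Sketch: one schema per customer plus a shared "common" schema in a single database.
CREATE SCHEMA common;
GO
CREATE SCHEMA customer001;
GO
CREATE TABLE common.Product      (ProductId int PRIMARY KEY, Name nvarchar(100) NOT NULL);
CREATE TABLE customer001.[Order] (OrderId   int PRIMARY KEY,
                                  ProductId int NOT NULL REFERENCES common.Product (ProductId));
GO
-- Cross-schema queries are just two-part names:
SELECT o.OrderId, p.Name
FROM   customer001.[Order] AS o
JOIN   common.Product      AS p ON p.ProductId = o.ProductId;
```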
Also note that -- as an optimization -- you may not need to add siteId column to EVERY table.
Sometimes you have a "contains" relationship. It's a master-detail FK, often defined with a cascade delete so that detail cannot exist without the parent. In this case, the children don't need siteId because they don't have an independent existence.
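As a sketch (invented names), only the parent carries the siteId; the detail rows are reached through the cascading FK:

```sql
-- Sketch: SiteId lives only on the master table; details follow their parent.
CREATE TABLE dbo.Invoice (
    InvoiceId int PRIMARY KEY,
    SiteId    int NOT NULL              -- logical separation lives here
);
CREATE TABLE dbo.InvoiceLine (
    InvoiceLineId int PRIMARY KEY,
    InvoiceId     int NOT NULL
        REFERENCES dbo.Invoice (InvoiceId) ON DELETE CASCADE,  -- detail cannot exist without the parent
    Amount        decimal(12,2) NOT NULL
);
```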
Your first step will be to determine if these databases even have the same structure. Even if you think they do, you need to compare them to make sure they do. Chances are there will be some that are customized or missed an upgrade cycle or two.
Now, depending on the number of clients and the number of records per client, your tables may get huge. Are you sure this will not create a performance problem? At any rate you may need to take a fresh look at indexing. You may need a much more powerful set of servers, and may also need to partition by client anyway for performance.
Next, yes, each table will need a site id of some sort. Further, depending on your design, you may have primary keys that are no longer unique. You may need to redefine all primary keys to include the siteid. Always index this field when you add it.
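A hedged sketch of what that can look like for one table (all names invented); whether you fold the siteid into the primary key or keep a surrogate key and just index it is a design choice:

```sql
-- Sketch: add SiteId, widen the primary key, and index the new column.
ALTER TABLE dbo.Customer ADD SiteId int NOT NULL DEFAULT (0);   -- backfill with the real ids afterwards

ALTER TABLE dbo.Customer DROP CONSTRAINT PK_Customer;           -- old key: (CustomerId) alone
ALTER TABLE dbo.Customer ADD  CONSTRAINT PK_Customer
    PRIMARY KEY (CustomerId, SiteId);                           -- keys that used to collide are now unique

CREATE INDEX IX_Customer_SiteId ON dbo.Customer (SiteId);       -- always index the new field
```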
Now all your queries, stored procs, views, and UDFs will need to be rewritten to ensure that the siteid is part of them. Pay particular attention to any dynamic SQL. Otherwise you could be showing client A's information to client B. Clients don't tend to like that. We brought a client from a separate database into the main application one time (when they decided they no longer wanted to pay for a separate server). The developer missed just one place where client_id had to be added. Unfortunately, that sent emails concerning this client's proprietary information to every client, and to make matters worse, it was a process that ran in the middle of the night, so it wasn't known about until the next day. (The developer was very lucky not to get fired.) The point is: be very, very careful when you do this, and test, test, test, and then test some more. Make sure to test all the automated behind-the-scenes stuff as well as the UI stuff.
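The cautionary tale above is exactly the "one missed filter" problem. A sketch of the kind of rewrite every read path needs (all names invented), including the parameterized form when dynamic SQL is unavoidable:

```sql
-- Sketch: every query path gets an explicit siteid filter.
CREATE PROCEDURE dbo.GetClientOrders
    @SiteId int
AS
BEGIN
    SET NOCOUNT ON;

    SELECT OrderId, OrderDate, Amount
    FROM   dbo.Orders
    WHERE  SiteId = @SiteId;              -- the one predicate the nightly job in the story was missing

    -- If dynamic SQL is unavoidable, keep the filter and pass the id as a parameter:
    EXEC sys.sp_executesql
        N'SELECT OrderId FROM dbo.Orders WHERE SiteId = @SiteId',
        N'@SiteId int',
        @SiteId = @SiteId;
END;
```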
What I was explaining in Florence towards the end of last year applies if you have to keep the database names and the logical layer of the database the same for the application. In that case you'd do the following:
Collapse all the data into consolidated tables in one master, consolidated database (hereafter referred to as the consolidated DB).
Those tables would have to have an identifier like SiteID.
Create the new databases with the existing names.
Create views with the old table names which use row-level security to query the tables in the consolidated DB, but using the SiteID to filter.
Set up the databases for cross-database ownership chaining so that the service accounts can't "accidentally" query the base tables in the consolidated DB. Access must happen through the views or through stored procedures and other constructs that will enforce row-level security. Now, if it's the same service account for all sites, you can avoid the cross DB ownership chaining and assign the rights on the objects in the consolidated DB.
Rewrite the stored procedures to either handle the change (since they now refer to views and don't know to hit the base tables and include SiteID) or use INSTEAD OF triggers on the views to intercept update requests and put the appropriate site-specific information into the base tables (see the sketch after this list).
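A very rough sketch of the view and the INSTEAD OF trigger from the last two points (the consolidated DB, tables, and the login-to-SiteID mapping are all invented; mapping by login is just one way to do the row-level filtering):

```sql
-- Sketch, created in one of the per-site databases that kept its old name.
-- ConsolidatedDB.dbo.Customer and ConsolidatedDB.dbo.SiteLoginMap are placeholder objects.
CREATE VIEW dbo.Customer
AS
SELECT c.CustomerId, c.CustomerName, c.Email
FROM   ConsolidatedDB.dbo.Customer     AS c
JOIN   ConsolidatedDB.dbo.SiteLoginMap AS m ON m.SiteId = c.SiteId
WHERE  m.LoginName = SUSER_SNAME();          -- each site's service account sees only its own rows
GO

CREATE TRIGGER dbo.trg_Customer_Insert
ON dbo.Customer
INSTEAD OF INSERT
AS
BEGIN
    SET NOCOUNT ON;
    -- Stamp the calling site's id onto the base table, since the old application never supplies it.
    INSERT ConsolidatedDB.dbo.Customer (CustomerId, CustomerName, Email, SiteId)
    SELECT i.CustomerId, i.CustomerName, i.Email, m.SiteId
    FROM   inserted AS i
    CROSS JOIN ConsolidatedDB.dbo.SiteLoginMap AS m
    WHERE  m.LoginName = SUSER_SNAME();
END;
```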
If the data is large you could look at using a partitioned view. This would simplify your access code, as all you'd have to maintain is the view; however, if the data is not large, just add a column to identify the customer.
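For reference, a partitioned view in this sense is a UNION ALL view over per-customer tables whose CHECK constraints let the optimizer touch only the relevant member table. A sketch with invented names:

```sql
-- Sketch of a local partitioned view split by customer id.
CREATE TABLE dbo.Orders_Customer1 (
    CustomerId int NOT NULL CHECK (CustomerId = 1),
    OrderId    int NOT NULL,
    Amount     decimal(12,2) NOT NULL,
    CONSTRAINT PK_Orders_Customer1 PRIMARY KEY (CustomerId, OrderId)
);
CREATE TABLE dbo.Orders_Customer2 (
    CustomerId int NOT NULL CHECK (CustomerId = 2),
    OrderId    int NOT NULL,
    Amount     decimal(12,2) NOT NULL,
    CONSTRAINT PK_Orders_Customer2 PRIMARY KEY (CustomerId, OrderId)
);
GO
CREATE VIEW dbo.Orders
AS
SELECT CustomerId, OrderId, Amount FROM dbo.Orders_Customer1
UNION ALL
SELECT CustomerId, OrderId, Amount FROM dbo.Orders_Customer2;
GO
-- A query filtered on CustomerId is routed to just the matching member table:
SELECT OrderId, Amount FROM dbo.Orders WHERE CustomerId = 2;
```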
Depending on what the data is and your security requirements the threat of cross contamination may be a show stopper.
Assuming you have considered this and deem it "safe enough", you may need/want to create VIEWS or impose some other access control to prevent customers from seeing each other's data.
IIRC a product called "Trusted Oracle" had the ability to partition data based on such a key (about the time Oracle 7 or 8 was out). The idea was that any given query would automagically have "and sourceKey = #userSecurityKey" (or some such) appended. The feature may have been rolled into later versions of the popular commercial product.
To expand on Gregory's answer, you can also make a parent SSIS package that calls the package doing the actual moving within a Foreach Loop container.
The parent package queries a config table and puts the result in an object variable. The Foreach Loop then uses this recordset to pass variables, such as the database name and any other details the package might need, to the child package.
Your table could list all of your client databases and have a flag to mark when you are ready to move them. This way you are not sitting around running the SSIS package on 32,767 databases. I'm hooked on the Foreach Loop in SSIS.
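A sketch of such a config table and the query the parent package could load into its object variable (names invented):

```sql
-- Sketch: config table driving the Foreach Loop in the parent package.
CREATE TABLE dbo.MigrationConfig (
    DatabaseName sysname   NOT NULL PRIMARY KEY,   -- e.g. 'database001'
    ServerName   sysname   NOT NULL,
    ReadyToMove  bit       NOT NULL DEFAULT (0),   -- flip this flag when the client is scheduled
    MovedOn      datetime2 NULL
);
GO
-- An Execute SQL Task in the parent package would load something like this into an object variable:
SELECT DatabaseName, ServerName
FROM   dbo.MigrationConfig
WHERE  ReadyToMove = 1
  AND  MovedOn IS NULL;
```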