I am using Report Builder 3.0 (very similar to SQL Server Reporting Services) to create reports for users of an application that runs on a SQL Server 2012 database.
To set the scene: we have a database with over 1,200 tables, of which we actually only need about 100 for reporting purposes. But it is very common that we need to combine fields from multiple tables to produce a common pool of data that my colleagues and I can use in our reports.
E.g. if I want a view of a customer, I would want to bring in information about the customer from the customer_table, his phone details from the Phone table, his account(s) from the accounts table, and so on. Then I might need another view of the accounts: account type, various balance amounts, opening date, status, etc.
What I would love to do is create a "customer view" that combines all these fields into a single virtual table, and likewise an "accounts view". That would be easier to use and easier to manage, and we would use these views for all our reports going forward. When needed, we could also combine the customer view, the accounts view, and actual tables into one dataset for a report.
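For illustration, this is the kind of combined view I have in mind (just a sketch; the join columns are guesses):

    CREATE VIEW dbo.CustomerView AS
    SELECT c.CustomerId,
           c.FirstName,
           c.LastName,
           p.PhoneNumber,       -- from the Phone table
           a.AccountNumber,     -- from the accounts table
           a.AccountType,
           a.Balance
    FROM dbo.customer_table AS c
    JOIN dbo.Phone    AS p ON p.CustomerId = c.CustomerId
    JOIN dbo.accounts AS a ON a.CustomerId = c.CustomerId;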
I am unsure about the right way to do this.
I see I can create a data source. This doesn't seem right, as that appears to be for working off two or more databases, and we are using just one database.
Then there are report models. It seems these are being deprecated and phased out, so this doesn't seem like a good option.
Finally, I see we can create shared datasets. However, as far as I can tell, this option won't let me combine one dataset with another. So, using the example above, I wouldn't be able to combine the customer view and the accounts view to build a report displaying details about a customer and his/her accounts.
Would appreciate guidance on the best way to achieve what I am trying to do...
Thanks
I can only speak from personal experience, but the data source approach has been good for our purposes. We have a single database with 50+ tables in it. This is linked in as a shared data source in the project, so it is available to all 50+ reports.
We then use stored procedures to make the information in the database available to the reports; each report has its own stored procedure that joins as many tables as required to provide the data for that report. Stored procedures also let you return only the rows you are interested in, rather than entire tables.
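For example, something along these lines (a minimal sketch; the table, column, and parameter names are made up):

    CREATE PROCEDURE dbo.rpt_CustomerAccounts
        @CustomerId INT
    AS
    BEGIN
        SET NOCOUNT ON;

        -- Join only the tables this report needs, and return only the
        -- requested customer's rows rather than the entire tables.
        SELECT c.CustomerId,
               c.FirstName,
               c.LastName,
               a.AccountNumber,
               a.AccountType,
               a.Balance
        FROM dbo.customer_table AS c
        JOIN dbo.accounts       AS a ON a.CustomerId = c.CustomerId
        WHERE c.CustomerId = @CustomerId;
    END;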
I'm not certain if this is the kind of answer you were after, but it describes how we solved a similar (smaller) issue.
I'm new to CDS/Dataverse, coming from the SQL Server world. I created a new Dataverse table and there are over a dozen columns in my "new" table (e.g. "status", "version number"). Apparently these are added automatically. Why is this?
Also, there doesn't seem to be a way to view a grid of data (like I can with SQL Server) for quick review/modification of the data. Is there a way to view data visually like this?
Any tips for a new user, coming from SQL Server, would be appreciated. Thanks.
Edit: clarified the main question with examples (column names). (thanks David)
I am also new to CDS/Dataverse, so the following is a limited understanding from what I have explored so far.
The idea behind Dataverse is that it gives you a pre-built schema that follows best practice for you to build on, so that you spend less time worrying about designing a comprehensive data schema, creating tables, and relating them all together, and more time building applications in Power Apps.
For example, amongst the several dozen tables it generates from the get-go are Account and Contact. The former is for organisational entities and the latter is for single-person entities. You can go straight to adding your records in one of these tables and take advantage of the Power Apps functionality already hooked up to them. You do not have to spend time thinking up column names, creating the table, making sure it hooks up to all the other Dataverse tables, testing whether the Power Apps functionality works with it correctly, etc.
It is much the same story with the automatically generated columns for new tables: they are all there to maintain a best-practice schema and functionality for Power Apps. For example, the extra columns give you good auditing of the data you add, including when a row was created or modified, who created it, etc. The important thing is to start from what you want to build and not get too caught up in the extra tables/columns. After a bit of research, you'll probably find you can utilise some of those extra tables/columns in your design.
Viewing and adding data is very tedious: it seems to take five clicks and several seconds to load the bit of data you want, which is eons compared to doing it in SQL Server. I believe this is down to Microsoft's attempt to make it "user friendly".
Anyhow, the standard way to view data, starting from the main Power Apps view, is:
From the right-hand side pane, click Data
Click Tables
From the list of tables, click your table
Along the top row, click Data
There is an alternative method that allows you to view the Dataverse tables in SSMS – see link below:
https://www.strategy365.co.uk/using-sql-to-query-the-common-data-service/
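If that endpoint is available for your environment, the queries themselves are plain T-SQL against the tables' logical names. Something like this (the server name is a placeholder for your own environment URL):

    -- In SSMS, connect to <yourorg>.crm.dynamics.com,5558 using
    -- Azure Active Directory authentication (access is read-only), then:
    SELECT TOP (10) accountid, name, createdon
    FROM account;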
To import data in bulk:
Click on Data from the top drop-down menu > Get data.
Importing data from Excel is free. Importing from other sources, including SQL Server, is (I believe) a paid service, although you may be able to do it on the free Community Plan.
We currently use Infor Syteline to manage our manufacturing and customer information. I am working on an alternative front end (VB.NET) that would work seamlessly alongside our Syteline software, adding custom document generation and other functions on top of the SQL database. The only problem I am running into is generating the RowPointers that Syteline uses. An example of one of these pointers is below:
F5025224-046D-40DC-95F6-C517303A0D26
Can someone fill me in on what these pointers do, how I can replicate them, and why standard IDs aren't used to reference rows? The goal is to be able to add entries to the SQL database that are identical to those created by Syteline, so that both applications can be used to manage customer data, production data, etc.
Thanks
They are GUIDs (uniqueidentifier values), and you can generate them with SELECT NEWID(). For all practical purposes they are unique, so you can use one to filter on a single specific row without ever hitting a duplicate.
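For example (a sketch; the table and column names are made up):

    -- Generate a new GUID in the same format as the example above:
    SELECT NEWID();   -- e.g. F5025224-046D-40DC-95F6-C517303A0D26

    -- Use it when inserting a row:
    DECLARE @rp UNIQUEIDENTIFIER = NEWID();
    INSERT INTO dbo.customer (cust_num, cust_name, RowPointer)
    VALUES ('C000123', 'Acme Ltd', @rp);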
Not sure what you mean by standard ID.
I've got a nice SSAS tabular model with users processing away. Certain users need access to certain information, such as confidential info (e.g., SS numbers), that should not be visible to everyone. How should I handle this?
Everything I've found indicates that there is no way to use roles to remove columns, only rows. Is my only option to make a copy of the model and maintain both? This can't be such an edge case...
I guess I can jury-rig something with an SCM fork and code generation, but I'd rather not go down that road.
Alternatively, is there any way to hide the columns (per user/role), so that at least they don't show up in client tools?
One approach that requires very little additional development is described in the following blog post: http://blog.westmonroepartners.com/a-workaround-for-column-security-in-the-sql-server-analysis-services-bism-tabular-model/
The blog contains a link to an SSIS package that replicates an existing cube, minus the sensitive columns. The users who cannot view the sensitive data can then be given access to this second cube.
One way to achieve this is to create perspectives. You can create different perspectives for different groups of users, and end users can connect to their specific perspective of the model.
I'm building a talent management CRM application and I'm having trouble choosing between a SQL or NoSQL database for my data.
The application will have only a few 'core' entities (Person, Job, Company, Interview) and will rely heavily on 'tagging' of those entities: you can add Tags and Notes to a Person, a Job, or a Company, and then sort/search the data by those tags.
What I've learned about NoSQL is that I could just have a Person object (document) with an array of Tags and Notes, whereas in SQL I would need separate Tags and Notes tables and would have to construct joins to gather all the data for a Person.
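For illustration, the relational shape would be something like this (hypothetical names; the document version would simply nest the tags inside each Person):

    CREATE TABLE Person (
        PersonId INT IDENTITY(1,1) PRIMARY KEY,
        FullName NVARCHAR(100) NOT NULL
    );

    CREATE TABLE Tag (
        TagId INT IDENTITY(1,1) PRIMARY KEY,
        Label NVARCHAR(50) NOT NULL UNIQUE
    );

    -- Many-to-many link between people and tags
    CREATE TABLE PersonTag (
        PersonId INT NOT NULL REFERENCES Person (PersonId),
        TagId    INT NOT NULL REFERENCES Tag (TagId),
        PRIMARY KEY (PersonId, TagId)
    );

    -- Gathering a person's tags then requires joins:
    SELECT p.FullName, t.Label
    FROM Person AS p
    JOIN PersonTag AS pt ON pt.PersonId = p.PersonId
    JOIN Tag AS t        ON t.TagId = pt.TagId;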
Could anyone give me some pointers on what would be the way to go for my particular scenario?
Thanks!
Our ERP system is based on UniData (NoSQL). It is okay for performing the standard tasks needed to do business, like entering customers, creating sales orders, invoicing, etc. But when it comes to creating reports that were not originally foreseen, it is quite cumbersome. The system only lets you create reports off one table; if you need data from another table, you have two options: create what is called a virtual attribute for every field you need to look up from a different table, or write a UniBasic program to retrieve the data needed.
To meet most of our business needs on the reporting front, it is more beneficial for us to export the data to SQL and then build the reports there. The result is that the reports run quicker from SQL, and most of the time a reporting tool can be used to create them, so a power user can usually do it, as opposed to someone who has to have quite a high level of programming ability just to build a report.
It would have been nice if it had already been in SQL in the first place.
But maybe some other NoSQL database has better functionality than UniData. That said, third-party support for NoSQL database engines usually comes at a higher premium than for SQL engines, because there are fewer specialists available.
We have a SQL server that has a database for each client, and we have hundreds of clients. So imagine the following: database001, database002, database003, ..., database999. We want to combine all of these databases into one database.
Our thought is to add a siteId column with values 001, 002, 003, ..., 999.
We are exploring options to make this transition as smoothly as possible. And we would LOVE to hear any ideas you have. It's proving to be a VERY challenging problem.
I've heard of a technique where you create a view that matches the old table and then filters by site.
Any ideas guys?
Create a client database id for each of the client databases. You will use this id to keep the data logically separated. This is the "site id" concept, but you can use a derived key (an identity field) instead of manually creating these numbers. Create a table that holds the database name and id, along with any other metadata you need.
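A minimal sketch of that mapping table (names are made up):

    CREATE TABLE dbo.ClientDatabase (
        ClientDatabaseId INT IDENTITY(1,1) PRIMARY KEY,  -- the derived "site id"
        DatabaseName     SYSNAME NOT NULL UNIQUE
        -- plus any other metadata you need
    );

    INSERT INTO dbo.ClientDatabase (DatabaseName)
    VALUES ('database001'), ('database002'), ('database003');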
The next step would be to create an SSIS package that looks up the id for the database in question and adds it to the tables whose data has to be logically separated. You can then run that same package over each database, with the lookup supplying the id for each one.
Once every row carries its id and the data has been imported, you will have to alter your apps to fit the new schema (actually before, or you are pretty much screwed).
If you want to do this in steps, you can create views or functions in the different "databases" so the old client app can still hit its data, even though it has been moved. This step may not be necessary if you can deploy with some downtime.
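For example, a compatibility view left behind in one of the old client databases might look like this (a sketch with hypothetical names):

    -- In database001, keep the old table name pointing at the consolidated data
    CREATE VIEW dbo.Orders AS
    SELECT OrderId, OrderDate, Amount    -- original columns only
    FROM ConsolidatedDB.dbo.Orders
    WHERE ClientDatabaseId = 1;          -- database001's id from the mapping table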
The method I propose is fairly flexible and can be applied to one client at a time, depending on your client application deployment methodology.
Why do you want to do that?
You can read about Multi-Tenant Data Architecture, and also listen to Stack Overflow podcast #19 (around the 40-50 minute mark), which discusses this design.
The "site-id" solution is what's done.
Another possibility that may not work out as well (but is still appealing) is multiple schemas within a single database. You can pull common tables into a "common" schema and leave the customer-specific stuff in customer-specific schemas. In some database products, however, each schema is effectively a separate database. In other products (Oracle and DB2, for example) you can easily write queries that span multiple schemas.
Also note that, as an optimization, you may not need to add the siteId column to EVERY table.
Sometimes you have a "contains" relationship: a master-detail FK, often defined with a cascade delete so that the detail cannot exist without the parent. In this case the children don't need a siteId because they have no independent existence.
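A sketch of that case (hypothetical names):

    CREATE TABLE dbo.[Order] (
        OrderId INT PRIMARY KEY,
        SiteId  INT NOT NULL           -- the tenant key lives on the parent
    );

    -- Detail rows cannot exist without their parent order, so they
    -- inherit their tenant implicitly and need no SiteId of their own.
    CREATE TABLE dbo.OrderLine (
        OrderLineId INT PRIMARY KEY,
        OrderId     INT NOT NULL
            REFERENCES dbo.[Order] (OrderId) ON DELETE CASCADE,
        Quantity    INT NOT NULL
    );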
Your first step will be to determine whether these databases even have the same structure. Even if you think they do, you need to compare them to make sure. Chances are there will be some that are customized or that missed an upgrade cycle or two.
Now, depending on the number of clients and the number of records per client, your tables may get huge. Are you sure this will not create a performance problem? At any rate, you may need to take a fresh look at indexing. You may need a much more powerful set of servers, and you may also need to partition by client anyway for performance.
Next, yes, each table will need a site id of some sort. Further, depending on your design, you may have primary keys that are no longer unique, so you may need to redefine all primary keys to include the siteid. Always index this field when you add it.
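That key change might look something like this (a sketch; the table and constraint names are made up):

    -- Add the tenant key, then widen the primary key to include it
    ALTER TABLE dbo.Orders ADD SiteId INT NOT NULL DEFAULT (0);

    ALTER TABLE dbo.Orders DROP CONSTRAINT PK_Orders;
    ALTER TABLE dbo.Orders ADD CONSTRAINT PK_Orders
        PRIMARY KEY (SiteId, OrderId);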
Now all your queries, stored procs, views, and UDFs will need to be rewritten to ensure that the siteid is part of them. Pay particular attention to any dynamic SQL. Otherwise you could be showing client A's information to client B, and clients don't tend to like that.
We brought a client from a separate database into the main application one time (when they decided they no longer wanted to pay for a separate server). The developer missed just one place where the client_id had to be added. Unfortunately, that sent emails about this client's proprietary information to every client, and to make matters worse it was a nightly process that ran in the middle of the night, so it wasn't noticed until the next day. (The developer was very lucky not to get fired.)
The point is: be very, very careful when you do this, and test, test, test, and then test some more. Make sure to test all the automated behind-the-scenes stuff as well as the UI.
What I was explaining in Florence towards the end of last year applies if you have to keep the database names and the logical layer of the database the same for the application. In that case you'd do the following:
Collapse all the data into consolidated tables in one master, consolidated database (hereafter referred to as the consolidated DB).
Those tables would have to have an identifier like SiteID.
Create the new databases with the existing names.
Create views with the old table names that query the tables in the consolidated DB, filtering by SiteID; this is your row-level security.
Set up the databases for cross-database ownership chaining so that the service accounts can't "accidentally" query the base tables in the consolidated DB. Access must happen through the views or through stored procedures and other constructs that will enforce row-level security. Now, if it's the same service account for all sites, you can avoid the cross DB ownership chaining and assign the rights on the objects in the consolidated DB.
Rewrite the stored procedures to either handle the change (since they now refer to views and don't know to hit the base tables and include the SiteID), or use INSTEAD OF triggers on the views to intercept update requests and put the appropriate site-specific information into the base tables.
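A rough sketch of that last step (hypothetical names, with the SiteID hard-coded per shell database):

    -- In the per-site shell database: a view over the consolidated table...
    CREATE VIEW dbo.Customer AS
    SELECT CustomerID, CustomerName
    FROM ConsolidatedDB.dbo.Customer
    WHERE SiteID = 42;   -- this shell database's site

    -- ...and an INSTEAD OF trigger that stamps the SiteID on inserts
    CREATE TRIGGER dbo.trg_Customer_Insert
    ON dbo.Customer
    INSTEAD OF INSERT
    AS
    BEGIN
        INSERT INTO ConsolidatedDB.dbo.Customer (SiteID, CustomerID, CustomerName)
        SELECT 42, i.CustomerID, i.CustomerName
        FROM inserted AS i;
    END;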
If the data is large you could look at using a partitioned view. This would simplify your access code, as all you'd have to maintain is the view; however, if the data is not large, just add a column to identify the customer.
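A minimal sketch of a partitioned view (hypothetical tables; the CHECK constraints on the partitioning column are what let the optimizer skip the irrelevant member tables):

    -- Each member table constrains its own client id
    CREATE TABLE dbo.Orders_Client1 (
        ClientId INT NOT NULL CHECK (ClientId = 1),
        OrderId  INT NOT NULL,
        PRIMARY KEY (ClientId, OrderId)
    );
    CREATE TABLE dbo.Orders_Client2 (
        ClientId INT NOT NULL CHECK (ClientId = 2),
        OrderId  INT NOT NULL,
        PRIMARY KEY (ClientId, OrderId)
    );

    CREATE VIEW dbo.AllOrders AS
    SELECT ClientId, OrderId FROM dbo.Orders_Client1
    UNION ALL
    SELECT ClientId, OrderId FROM dbo.Orders_Client2;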
Depending on what the data is and your security requirements, the threat of cross-contamination may be a show-stopper.
Assuming you have considered this and deem it "safe enough", you may need or want to create views or impose some other access control to prevent customers from seeing each other's data.
IIRC, a product called "Trusted Oracle" had the ability to partition data based on such a key (around the time Oracle 7 or 8 was out). The idea was that any given query would automagically have "AND sourceKey = #userSecurityKey" (or some such) appended. The feature may have been rolled into later versions of the popular commercial product.
To expand on Gregory's answer, you can also create a parent SSIS package that calls the package doing the actual moving from within a Foreach Loop container.
The parent package queries a config table and puts the result in an object variable. The Foreach Loop then uses this recordset to pass variables to the child package, such as the database name and any other details the package might need.
Your table could list all of your client databases and have a flag to mark when each one is ready to move. That way you are not sitting around running the SSIS package on 32,767 databases at once. I'm hooked on the Foreach Loop in SSIS.
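The config table the parent package queries could be as simple as this (a sketch; the names are made up):

    CREATE TABLE dbo.MigrationConfig (
        DatabaseName SYSNAME NOT NULL PRIMARY KEY,
        ReadyToMove  BIT     NOT NULL DEFAULT (0)
    );

    -- The parent package loads this result set into an object variable,
    -- and the Foreach Loop iterates over it:
    SELECT DatabaseName
    FROM dbo.MigrationConfig
    WHERE ReadyToMove = 1;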