Hello everyone, I have a difficult situation:
How do I restrict access to a single data perimeter in a BigQuery table at the use-case layer? Which strategy is best?
Details – The BigQuery warehouse holds 76 GB of data (20% annual growth). The reporting/visualization tool will be MS Power BI. We want an Italy user to see only Italy data and a UK user to see only UK data.
Options considered -
Authorized views with a CONTROL table - requires creating and maintaining users per project; hard and not scalable
Filtering views or tables (complete isolation approach) - create individual views for each country
BigQuery row-level security using GRANT (native approach) - released this July and still very fresh, but we can grant a row access policy to an AD group
Success criteria - ease of implementation; high performance with Power BI dashboards; ABAC (attribute-based access control) on rows; and scalability to other projects.
Any help is highly appreciated.
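For option 3, BigQuery's native row-level security attaches a filter predicate to the table itself, so Power BI (and any other client) sees only the permitted rows with no extra views to maintain. A minimal sketch, assuming a hypothetical table `my_project.sales.transactions` with a `country` column and Google groups synced from AD (all names here are placeholders):

```sql
-- Hypothetical table and group names, for illustration only.
-- Members of each group see only their country's rows.
CREATE ROW ACCESS POLICY italy_filter
ON `my_project.sales.transactions`
GRANT TO ('group:italy-analysts@example.com')
FILTER USING (country = 'IT');

CREATE ROW ACCESS POLICY uk_filter
ON `my_project.sales.transactions`
GRANT TO ('group:uk-analysts@example.com')
FILTER USING (country = 'GB');
```

Queries from any tool, Power BI included, are filtered automatically, and onboarding a new country is one more policy rather than a new view and permission set.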
I am making a back-end that handles inventory, user data, documentation, & images (basically a custom ERP system) for an application I am developing.
I am deriving some accounting information (payroll, TVM (time value of money) calculations) based on price data and hours logged by users. Is it typical to use the same relational database (MySQL) to store EVERYTHING (in separate tables, of course)? Or should I split things into separate databases?
I work at an advertising agency.
We have been asked to implement GCP (BigQuery) for an advertiser in our distribution network.
I've been told that each agency should have different access rights to BigQuery.
I think it is possible to divide the permissions by project.
Can we have the following configuration?
■ Project 1
・Uses BigQuery.
・Stores data in the advertising domain.
・Only agency A has view and edit permissions.
■ Project 2
・Uses BigQuery.
・Stores data in the CRM domain.
・Only agency B has view and edit permissions.
I am building an OLAP cube in SSAS for an organization which has many different companies under its umbrella.
I have built a principal cube consisting of all the measure groups and dimensions, containing the data of all the companies in the organization.
This cube is fine for top-level management, but I need to limit users from each company to the data of their own company only.
Is there a way to do that in the principal cube, without duplicating it into many sub-cubes, each containing only the relevant company?
Thank you in advance,
Tal
You can use role-based security in combination with cell-based security. The link below may be useful:
https://learn.microsoft.com/en-us/sql/analysis-services/multidimensional-models/grant-custom-access-to-cell-data-analysis-services
We solved the problem by automatically modifying MDX queries, restricting data through nested cubes (subcubes) for each organization. Not only the data but also the cube metadata was limited. The role mechanism was not suitable for our customer, as organizations and users are constantly being added. The Ranet UI pivot table was used in the SaaS solution; the library allows you to parse and modify MDX queries, as well as filter the cube's metadata.
I'm building a talent management CRM application and I'm having trouble choosing between a SQL or NoSQL database for my data.
The application will only have a few 'core' entities (Person, Job, Company, Interview), and will rely heavily on 'tagging' of those entities. You can add Tags and Notes to a Person, a Job, a Company, and then sort/search data by those tags.
What I learned about NoSQL is that I can just have a Person object (document) with an array of Tags and Notes, whereas in SQL I would need separate Tags and Notes tables and construct joins to gather all the data for a Person.
Could anyone give me some pointers on what would be the way to go for my particular scenario?
Thanks!
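For comparison, the relational version of the tagging model is fairly small: one junction table per taggable entity (or a single polymorphic one). A sketch with hypothetical table names, shown for Person only:

```sql
-- Hypothetical schema for the tagging use case.
CREATE TABLE person (
  id   INTEGER PRIMARY KEY,
  name TEXT NOT NULL
);

CREATE TABLE tag (
  id    INTEGER PRIMARY KEY,
  label TEXT NOT NULL UNIQUE
);

-- Junction table: many-to-many between people and tags.
CREATE TABLE person_tag (
  person_id INTEGER REFERENCES person(id),
  tag_id    INTEGER REFERENCES tag(id),
  PRIMARY KEY (person_id, tag_id)
);

-- Find every person tagged 'senior-developer':
SELECT p.name
FROM person p
JOIN person_tag pt ON pt.person_id = p.id
JOIN tag t         ON t.id = pt.tag_id
WHERE t.label = 'senior-developer';
```

The join looks verbose, but with the junction table indexed it stays fast, and it makes "find every entity with tag X" a first-class query, which the embedded-array document model handles less naturally.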
Our ERP system is based on UniData (NoSQL). It is okay for the standard tasks needed to do business, like entering customers, creating sales orders, invoicing, etc. But when it comes to creating reports that were not originally foreseen, it is quite cumbersome. The system only lets you create reports off one table; if you need data from another table, you have two options: 1. create what is called a virtual attribute for every field you need to look up from a different table, or 2. write a UniBasic program to retrieve the data needed.
To meet most of our business needs on the reporting front, it is more beneficial for us to export the data to SQL and then build the reports there. The reports run quicker from SQL, and most of the time a reporting tool can be used to create them; this can usually be done by a power user, rather than by someone who needs a high level of programming ability just to build a report.
It would have been nice if it had already been in SQL in the first place.
But maybe some other NoSQL database has better functionality than UniData. That said, third-party support for NoSQL database engines usually comes at a higher premium than for SQL engines, because fewer specialists are available.
I have a requirement and I am not sure if I should use Analysis services or Reporting services or some other technique.
My client wants to show special deals from a database on their online website. They want to target users: if a user is from the UK, show UK deals in pounds; if a user is from Canada, show Canadian deals in Canadian dollars; etc.
Their database has multiple tables, each loaded with 1 to 2 million records. Each table covers a different category of products and has Currency and Country columns to filter on. I cannot restructure their schema, as they have a huge amount of development integrating it with various business applications.
I need a solution that involves a data warehouse, can fetch data quickly, and can cache it for the next 12 or 24 hours (I do not want to cache on the web server). I do not have much experience with Analysis or Reporting Services, so I need your suggestions and anything you can share from your good or bad experiences.
Analysis Services is not what you want here: you do not need cubes that summarize info.
Nor is Reporting Services: you want to display your data in plain HTML.
I would just query the existing data and display it. If performance becomes an issue, you could run an SSIS job every 12 hours to extract the data into a dedicated database you create for this application. But consider tuning your indexes first.
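To illustrate the index-tuning suggestion: since each table already has Country and Currency columns, a composite covering index lets the engine seek straight to one country's slice instead of scanning millions of rows. A sketch in SQL Server syntax, with hypothetical table and column names:

```sql
-- Hypothetical table/columns. The composite index matches the WHERE
-- clause; INCLUDE adds the displayed columns so the query is covered
-- entirely by the index (no lookups back to the base table).
CREATE INDEX ix_deals_country_currency
ON dbo.Deals (Country, Currency)
INCLUDE (Title, Price, ExpiresOn);

SELECT Title, Price, ExpiresOn
FROM dbo.Deals
WHERE Country = 'UK' AND Currency = 'GBP';
```

With an index like this per category table, querying the live data directly may well be fast enough that the 12-hour extract never becomes necessary.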