How can I restrict the data fetched in table maintenance based on authorization?
For instance, the user should only be able to view the plants/storage locations allowed by an authorization object, because users get confused when too many irrelevant plants are shown.
In the table maintenance generator for your table or view, choose Environment -> Modification -> Events from the menu.
Here you have the option to extend the logic of the table maintenance generator at particular points during the execution.
You are able to define your own logic; one promising event is 'AA' (instead of the standard data read routine). There you can replace the logic for reading data with a custom authority check, so that users only see the records they are authorized for.
Here is a document on SDN relating to the topic of using the table maintenance events: https://wiki.scn.sap.com/wiki/display/ABAP/TABLE+MAINTENANCE+GENERATOR+and+ITS+EVENTS
I have a User model which is an aggregate. I also plan to create a WorkingHours object; every user will have his own working hours per day. There will also be a graphical user interface, separate from User, for adding/removing/updating hours etc. I am wondering whether I should put all WorkingHours-related operations into UserRepository, or whether I should treat WorkingHours as an aggregate and create a separate WorkingHoursRepository, so that User could hold a property with the id of its WorkingHours object. Which option should I choose?
My thought is not to make WorkingHours an aggregate, because every set of working hours belongs to a specific user and, if I am thinking about this correctly, depends on User and cannot live without it. My only reason for making it an aggregate with a separate repository would be cleaner code, i.e. not putting all the CRUD operations in the same repository, but I suppose that alone is not a reason to separate it. So to me the only option is to treat WorkingHours as a value object, not an aggregate, and use UserRepository for it.
You design your Domain Model based on your business requirements and not on how it needs to be saved.
In this scenario, if Working Hours can only be manipulated within the User domain, and if you think User is the only aggregate required, then Working Hours should not be made an aggregate. That said, it does not stop you from saving your data in a clean manner in your data store. The strategy for storing your data also depends a lot on your type of data store.
For example, if you are using SQL and your data is stored in multiple tables, then you can commit or roll back the entire transaction. How you implement it is not tied to DDD, as long as you adhere to the concept that aggregates should only be updated via the root entity.
If you are using a NoSQL database like Cosmos DB, you can choose to load or save the entire document. In that case, you would only be dealing with the User repository.
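To make that concrete, here is a minimal sketch (Python, purely illustrative; the class and method names are my own assumptions, not taken from your question) of WorkingHours as a value object owned by the User aggregate, persisted through a single UserRepository:

```python
# Illustrative sketch only: WorkingHours modelled as a value object owned by
# the User aggregate, persisted through a single UserRepository. All class
# and method names are assumptions, not taken from the original question.
from dataclasses import dataclass, field
from datetime import time


@dataclass(frozen=True)
class WorkingHours:
    """Value object: immutable, no identity of its own."""
    day_of_week: int  # 0 = Monday ... 6 = Sunday
    start: time
    end: time


@dataclass
class User:
    """Aggregate root; all changes to working hours go through it."""
    user_id: str
    name: str
    working_hours: list[WorkingHours] = field(default_factory=list)

    def set_working_hours(self, day_of_week: int, start: time, end: time) -> None:
        # Replace any existing entry for that day, then add the new one.
        self.working_hours = [wh for wh in self.working_hours
                              if wh.day_of_week != day_of_week]
        self.working_hours.append(WorkingHours(day_of_week, start, end))


class UserRepository:
    """One repository per aggregate (in-memory stand-in for a real store)."""
    def __init__(self) -> None:
        self._store: dict[str, User] = {}

    def save(self, user: User) -> None:
        self._store[user.user_id] = user  # persists the whole aggregate at once

    def get(self, user_id: str) -> User | None:
        return self._store.get(user_id)
```

The point is that callers never persist WorkingHours on their own; every change and every save goes through the User root.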
Hope this helps.
Does anyone know of any way to remove the public datasets from a BigQuery project?
Though the risk is very low, I don't want my users to be able to run queries against them and rack up costs.
Thanks
It's an old question, but for those who just want to unpin "bigquery-public-data" to tidy up the resources list: click the name in the side panel, then on the far right of the info pane there is an "unpin project" button. Click that.
The whole point of public datasets is that everyone has access to them so they can test BigQuery. Even if a feature request eventually adds an option to hide them in the BigQuery web UI panel, users will still have access and will still be able to query the public datasets.
It is more practical to use custom quotas.
You would create a project whose users share a quota that you consider sufficient for their activities. When the established quota is reached, BigQuery stops and the users receive an error message when they try to run queries.
Another useful tool is creating budget alerts with a desired level that you can set taking into account the previous month's spend. The alert will notify you when the project's bill has reached the amount you set and can save you from bad surprises.
In addition, implementing Audit Logs in your project will give you a comprehensive overview of BigQuery operations. Check this example of an Audit Logs query, which gives details on the queries that were performed. Of course, you will only find out about the use of a public dataset after it happens, but the logs will point out which user performed the query, so you can enforce an administrative policy of not querying public datasets. To get information on the performed query, including the interrogated dataset, use this field when querying the Audit Logs:
'protopayload_auditlog.servicedata_v1_bigquery.jobCompletedEvent.job.jobConfiguration.query.query'
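For instance, if your audit logs are exported into a BigQuery dataset, something along these lines (Python client; the project, dataset and table names below are placeholders, and the exact export schema may differ in your setup) would list who ran which query:

```python
# Hedged sketch: list who ran which query, using the Python BigQuery client
# against audit logs exported to BigQuery. The project id, dataset name and
# table wildcard below are placeholders/assumptions - adjust them to wherever
# your audit-log export sink actually writes.
from google.cloud import bigquery

client = bigquery.Client(project="my-admin-project")  # assumed project id

sql = """
SELECT
  timestamp,
  protopayload_auditlog.authenticationInfo.principalEmail AS user_email,
  protopayload_auditlog.servicedata_v1_bigquery.jobCompletedEvent.job.jobConfiguration.query.query AS query_text
FROM `my-admin-project.auditlog_dataset.cloudaudit_googleapis_com_data_access_*`
WHERE protopayload_auditlog.servicedata_v1_bigquery.jobCompletedEvent.job.jobConfiguration.query.query IS NOT NULL
ORDER BY timestamp DESC
LIMIT 100
"""

for row in client.query(sql).result():
    print(row.timestamp, row.user_email, row.query_text)
```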
As a last resort, you can create a designated project for your users to query the public datasets, and to make sure it will not create additional costs you can remove the billing account. By doing so, though, they can only query 1 TB of data per month, which is BigQuery's always-free usage tier.
Also keep in mind these best practices for limiting query costs.
If you close the current tab, the public dataset will disappear from the Google BigQuery page.
I'm trying to design a system where an administrator has to approve changes to the data and perform various other administrative tasks -- add a user, add an admin, etc.
My idea is to have a notification table that contains these notifications, but the problem is that a notification can be of any of the previously mentioned types, i.e. its data is stored in one of many tables. Here is a picture to describe my current plan -- note that I'm sure it's not a proper ER diagram.
Also, the data goes into a pending table that mirrors the table it will eventually wind up in, provided the data is approved -- a staging ground of sorts. So a pending_user is a user that is not yet in the user table. As you can see, the user table, among others, is not shown here, but one can use one's imagination.
I'm concerned that the multiple null values in the pending table will have adverse effects that I'm not totally aware of, such as increased space usage and possibly increased query time. Also, I'm not sure how I'll implement the retrieval of these notifications. My naive approach is to select the first X notifications, analyze the rows to find the non-null column, retrieve the appropriate data and then load all of it into a response.
Is there a more straight forward pattern for this type of problem?
Thanks in advance for any help.
I think the traditional way is to provide users with various levels of access/read/write rights. These access rights define what actions a user can and cannot perform. In this traditional approach, if a user has access to a certain function, he can perform it without further approval.
Also, traditionally there are some kind of audit logs that contain a trace of all important changes to the data. With such logs it would be possible to know who made a change (and when).
If you need to build a two-stage system, where a change has to go through an approval, I'd add a flag column to each important table indicating that the values in the given row are not final and still have to be approved. The table would store all historical changes to the data, and with the help of this flag the system would know which row is the latest approved version and which one is pending approval.
I would not try to make a single universal table that holds data related to changes in many different tables. Each table is different, and the approval process for each table is likely to be different as well. I doubt that you'll have more than a dozen entities that are important enough to go through this approval process.
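To illustrate the flag-column idea, here is a hypothetical sketch using SQLAlchemy; every table, column and function name is an assumption and your schema will of course differ:

```python
# Hypothetical sketch of the flag-column approach with SQLAlchemy. The table,
# column and function names are assumptions for illustration only; each
# important table would get its own "approved" flag rather than sharing one
# universal notification table full of NULLs.
from datetime import datetime, timezone

from sqlalchemy import Boolean, Column, DateTime, Integer, String, select
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()


class UserRecord(Base):
    __tablename__ = "users"

    id = Column(Integer, primary_key=True)
    email = Column(String, nullable=False)
    approved = Column(Boolean, nullable=False, default=False)  # pending until an admin approves
    created_at = Column(DateTime, default=lambda: datetime.now(timezone.utc))


def pending_users(session: Session) -> list[UserRecord]:
    """The admin's 'notifications' are simply the rows that are not yet approved."""
    return session.scalars(select(UserRecord).where(UserRecord.approved.is_(False))).all()


def approve_user(session: Session, record_id: int) -> None:
    """Mark a pending row as the approved version."""
    record = session.get(UserRecord, record_id)
    record.approved = True
    session.commit()
```

The notifications an administrator sees are then just the per-table rows that are still unapproved, with no universal notification table full of NULL columns.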
What is the best way to run MDX queries for each drill-down of a BI dashboard chart? As an example, if you have four drill levels, should we execute an MDX query on every drill-down (four queries in total), or execute only one query initially and keep the data for all four drill levels in an object collection? Please explain with an example if you can.
This depends a lot on what tool you are using to display the BI Dashboard. Is it SSRS, PerformancePoint, something else?
Pull all the data in the initial MDX query, configure the dashboard software to display the top level of detail, and provide users with options for drilldown. As users drill down, unhide the next level of detail. This option only requires one round trip to the database, so initially loading the dashboard may be a bit slower, but the drilldown experience will be very fast (since the data has already been retrieved).
Pull just the top level of detail in the initial MDX query, configure the dashboard software to display the results, and provide users with options for drilldown. As users drill down, the dashboard software sends another MDX query to retrieve the next level of detail from your data source. This option requires multiple round trips to the database: one for the initial top level of detail when the user first loads the dashboard, and another each time the user drills down.
Either option will work but you'll need to make the call on which option best suits your needs after weighing the pros and cons...
how fast is the network between your dashboard and the datasource?
how much concurrency can your data-source handle?
how "big" is the query to pull everything?
how important is speed to your users?
Be sure to test each option if you are unsure.
I am trying to decide on the best method for audit logging within my application. The main reason for the log is reporting the sequence of events (changes).
I have a hierarchy of objects, and I need to be able to create reports at a later date when something changes on any part of that hierarchy.
I think that I have three options:
Have a log for each table, thereby matching the hierarchy of objects, and then create a view for the report.
Flatten the hierarchy and de-normalise the table, making reporting easier -- a simple select statement.
Have one log table and have a record for each change making reporting harder but more flexible to changes.
I am currently leaning towards option 1.
I have to weigh in on this subject even though it's old.
It is usually a poor idea to have only one audit table as you will create locking problems in the database as everything hits that table. Use separate audit tables for each table.
It is also a poor idea to have the application do the auditing. Auditing must be done at the database level or you risk losing some of the information. In most databases, data does not change only through the application; no one is going to change the prices of all their products one at a time from the user interface when a 10% increase is needed on all 10,000,000 of them. Auditing should capture all changes, not just some of them.
In most databases this should be done in a trigger (SQL Server 2008 has a built-in auditing function). Some of the worst possible changes (employees committing fraud or wanting to maliciously destroy data) also frequently come from places other than the application, especially if you allow users table-level access (which you should not do in any database that is financial or contains personal information). Auditing from the application won't catch this. Developers often forget that, when protecting their data, outside sources are not the only threat.
An audit log is basically a chronological list of events that occurred, who performed these events, and what the events were.
I think a flat view would be better as it can be easily ordered and queried. So I'm leaning more towards your option #2/#3.
Include things like the transaction type, the time, the user id, a description of what's changed, and other pertinent information related to your product.
You can also add things to your product over time and you won't need to continually modify your audit log module.
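As a rough illustration of such a flat record (Python; the field names are just examples, not a prescribed schema):

```python
# A minimal sketch of one flat, append-only audit record; the field names are
# illustrative examples, not a prescribed schema.
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass(frozen=True)
class AuditEntry:
    transaction_type: str  # e.g. "UPDATE", "DELETE", "LOGIN"
    user_id: str           # who performed the event
    entity: str            # which object or table was touched
    description: str       # human-readable summary of what changed
    occurred_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))


def append_audit(log: list[AuditEntry], entry: AuditEntry) -> None:
    """Append-only: entries are never updated or removed once written."""
    log.append(entry)
```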
If it's for auditing purposes I'd use a true append-only medium rather than a table/tables in the same db.
You suggest it's for change history purposes - in which case I would restructure your application/db to record the actual events in the first place rather than just the current state.
I would go with (2) and (3): create a single table for all Audit entries.
A flat view is good, provided the extra work flattening does not impact performance.
You could look into an AOP framework to help with this. It would allow you to inject logging functionality at the beginning or end of any/all methods. If you go down this road, it might help define what would make sense for storing the log data.
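As a sketch of the idea in Python terms, with a decorator standing in for the AOP advice (all names here are illustrative):

```python
# Sketch of the AOP idea expressed as a Python decorator standing in for the
# "advice": logging is injected at the start and end of any wrapped method.
# All names here are illustrative.
import functools
import logging

logger = logging.getLogger("audit")


def audited(func):
    """Wrap a callable so every call is logged on entry and exit."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        logger.info("calling %s args=%r kwargs=%r", func.__qualname__, args, kwargs)
        result = func(*args, **kwargs)
        logger.info("finished %s result=%r", func.__qualname__, result)
        return result
    return wrapper


class ProductService:
    @audited
    def update_price(self, product_id: int, new_price: float) -> float:
        # Business logic would go here; the decorator handles the logging.
        return new_price
```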