How to express pagination in attribute based access control? - authorization

Based on my coarse reading, ABAC, i.e. attribute-based access control, boils down to attaching attributes to subjects, resources, and other related entities (such as the actions to be performed on the resources), and then evaluating a set of boolean-valued functions to grant or deny access.
To be concrete, let's consider XACML.
This is fine when the resource to be accessed is known before it hits the decision engine (PDP, in the case of XACML), e.g. viewing the mobile number of some account, in which case the attributes of the resource to be accessed can probably be retrieved easily with a single SQL select.
However, consider the function of listing one's bank account transaction history, 10 entries per page. Let's assume that only the account owner can view this history, and that transactions are stored in the database in a table transaction like:
transaction_id, from_account_id, to_account_id, amount, time_of_transaction
This function, without access control, is usually written with a SQL like this:
select to_account_id, amount, time_of_transaction
from transaction
where from_account_id = $current_user_account_id
The question: How can one express this in XACML? Obviously, the following approach is not practical (due to performance reasons):
Attach each transaction in the transaction table with the from_account_id attribute
Attach the request (of listing transaction history) with the account_id attribute
The decision rule, R, is if from_account_id == account_id then grant else deny
The decision engine loops over the transaction table, evaluates each row according to R, and emits the row if access is granted, until 10 rows have been emitted.
I assume that there will be some preprocessing step that fetches the transactions first (without consulting the decision engine), and then consults the decision engine for each fetched transaction to see whether access is allowed?

What you are referring to is known as 'open-ended' or data-centric authorization, i.e. access control on an unknown (or large) number of items, such as a bank account's transaction history. Typically, ABAC (and XACML or ALFA) has a decision model that is transactional (i.e. Can Alice view record #123?)
It's worth noting the policy in XACML/ALFA doesn't change in either scenario. You'd still write something along the lines of:
A user can view a transaction history item if the owner is XXX and the date is less than YYY...
What you need to consider is how to ask the question (that goes from the PEP to the PDP). There are 2 ways to do this:
Use the Multiple Decision Profile to bundle your request e.g. Can Alice view items #1, #2, #3...
Use an open-ended request. This is known as partial evaluation or reverse querying. Axiomatics has a product (ARQ) that addresses this use case.
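To illustrate the second option: with partial evaluation the PDP does not return permit/deny for a single item; it returns the condition under which access would be permitted, and the PEP translates that condition into a query filter. A rough sketch of what the resulting SQL could look like (the policy-derived predicate and the pagination syntax are illustrative assumptions, not the actual output of ARQ):
-- Hypothetical result of a reverse query ("which transactions may the current user view?"):
-- the PDP's answer is reduced to a filter condition, which the PEP appends to the query
-- so the database does both the filtering and the paging.
SELECT to_account_id, amount, time_of_transaction
FROM   transaction
WHERE  from_account_id = $current_user_account_id  -- condition derived from the policy
ORDER  BY time_of_transaction DESC
OFFSET 0 ROWS FETCH NEXT 10 ROWS ONLY;             -- 10 entries per page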
I actually wrote about a similar use case in this SO post.
HTH,
David

DDD: Do item counts belong in domain model?

Say you are modeling a forum and you are doing your best to make use of DDD and CQRS (just the separate read model part). You have:
Category {
int id;
string name;
}
Post {
int id;
int categoryId;
string content;
}
Every time that a new post has been created a domain event PostCreated is raised.
Now, our view wants to project the count of posts for each category. My domain doesn't care about count. I think I have two options:
Listen for PostCreated on the read model side and increment the count using something like CategoryQueryHandler.incrementCount(categoryId).
Listen for PostCreated on the domain side and increment the count using something like CategoryRepo.incrementCount(categoryId).
The same question goes for all the other counts like number of posts by user, number of comments in a post, etc. If I don't use these counts anywhere except my views should I just have my query handlers take care of persisting them?
And finally, if one of my domain services ever wants a count of posts in a category, do I have to implement a count property on the category domain model, or can that service simply use a read-model query to get that count, or alternatively a repository query such as CategoryRepo.getPostCount(categoryId)?
My domain doesn't care about count.
This is equivalent to saying that you don't have any invariant that requires or manages the count. Which means that there isn't an aggregate where count makes sense, so the count shouldn't be in your domain model.
Implement it as a count of PostCreated events, as you suggest, or by running a query against the Post store, or.... whatever works for you.
If I don't use these counts anywhere except my views should I just have my query handlers take care of persisting them?
That, or anything else in the read model -- but you don't even need that much if your read model supports something like select categoryId, count(*) from posts...
domain services will ever want to have a count of posts in category
That's a pretty strange thing for a domain service to want to do. Domain services are generally stateless query support - typically they are used by an aggregate to answer some question during command processing. They don't actually enforce any business invariant themselves, they just support an aggregate in doing so.
Querying the read model for counts to be used by the write model doesn't make sense, on two levels. First, that the data in the read model is stale - any answer you get from that query can change between the moment that you complete the query and the moment when you attempt to commit the current transaction. Second, once you've determined that stale data is useful, there's no particular reason to prefer the stale data observed during the transaction to stale data prior. Which is to say, if the data is stale anyway, you might as well pass it to the aggregate as a command argument, rather than hiding it in a domain service.
OTOH, if your domain needs it -- if there is some business invariant that constrains the count, or one that uses the count to constrain something else -- then that invariant needs to be captured in some aggregate that controls the count state.
Edit
Consider two transactions running concurrently. In transaction A, aggregate(id:1) is running a command that requires the count of objects, but the aggregate doesn't control that count. In transaction B, aggregate(id:2) is being created, which changes the count.
In the simple case, the two transactions happen by luck to occur in contiguous blocks:
A: beginTransaction
A: aggregate(id:1).validate(repository.readCount())
A: repository.save(aggregate(id:1))
A: commit
// aggregate(id:1) is currently valid
B: beginTransaction
B: aggregate(id:2) = aggregate.new
B: repository.save(aggregate(id:2))
B: commit
// Is aggregate(id:1) still in a valid state?
My claim is that, if aggregate(id:1) is still in a valid state, then its validity doesn't depend on the timeliness of repository.readCount() -- using the count from before the beginning of the transaction would have been just as good.
If aggregate(id:1) is not in a valid state, then its validity depends on data outside its own boundary, which means that the domain model is wrong.
In the more complicated case, the two transactions can be running concurrently, which means that we might see the save of aggregate(id:2) happen between the read of the count and the save of aggregate(id:1), like so
A: beginTransaction
A: aggregate(id:1).validate(repository.readCount())
// aggregate(id:1) is valid
B: beginTransaction
B: aggregate(id:2) = aggregate.new
B: repository.save(aggregate(id:2))
B: commit
A: repository.save(aggregate(id:1))
A: commit
It may be useful to consider also why having a single aggregate that controls the state fixes the problem. Let's change this example up, so that we have a single aggregate with two entities....
A: beginTransaction
A: aggregate(version:0).entity(id:1).validate(aggregate(version:0).readCount())
// entity(id:1) is valid
B: beginTransaction
B: entity(id:2) = entity.new
B: aggregate(version:0).add(entity(id:2))
B: repository.save(aggregate(version:0))
B: commit
A: repository.save(aggregate(version:0))
A: commit
// throws VersionConflictException
Edit
The notion that the commit (or the save, if you prefer) can throw is an important one. It highlights that the model is a separate entity from the system of record. In the easy cases, the model prevents invalid writes and the system of record prevents conflicting writes.
The pragmatic answer may be to allow this distinction to blur. Trying to apply a constraint to the count is an example of Set Validation. The domain model is going to have trouble with that unless a representation of the set lies within an aggregate boundary. But relational databases tend to be good at sets - if your system of record happens to be a relational store, you may be able to maintain the integrity of the set by using database constraints/triggers.
Greg Young on Set Validation and Eventual Consistency
How you approach any problem like this should be based on an understanding of the business impact of the particular failure. Mitigation, rather than prevention, may be more appropriate.
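As a rough illustration of the constraint/trigger idea, here is a minimal sketch for SQL Server, assuming a hypothetical rule of "at most 1000 posts per category" and table/column names taken from the question (Posts, CategoryId); both the limit and the names are assumptions:
-- Hypothetical sketch: the system of record enforces the set-level rule,
-- so the domain model does not need to own the count.
CREATE TRIGGER trg_Posts_CategoryLimit ON Posts
AFTER INSERT
AS
BEGIN
    IF EXISTS (
        SELECT p.CategoryId
        FROM Posts p
        WHERE p.CategoryId IN (SELECT CategoryId FROM inserted)
        GROUP BY p.CategoryId
        HAVING COUNT(*) > 1000        -- assumed business limit
    )
    BEGIN
        ROLLBACK TRANSACTION;
        THROW 50001, 'Too many posts in this category.', 1;
    END
END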
When it comes to counts of things, I think one has to consider whether you actually need to save the count to the DB or not.
In my view in most cases you do not need to save counts unless their calculation is very expensive. So I would not have a CategoryQueryHandler.incrementCount or CategoryRepo.incrementCount.
I would just have a PostService.getPostCount(categoryId) that runs a query like
SELECT COUNT(*)
FROM Post
WHERE CategoryId = @categoryId
and then call it when your PostCreated event fires.

Database design for coupon usage restriction

While working on a voucher feature for an eCommerce application, I need to implement voucher usage restrictions. Some of the restrictions I am planning to have:
Products
Exclude products
Product categories
Exclude categories
Email /Customer restrictions
Currently we support the following 2 types of vouchers, with an option to create a custom voucher type; all voucher types are maintained in a single table with the help of a discriminator (we use Hibernate).
Serial Vouchers
Promotion Vouchers.
These are only a few which I am targeting at the initial stage. My main confusion is about the database design for restricting voucher usage. I am not able to decide on the best way to map these restrictions in the database.
Should I go for a single table for all these restrictions with a relation to the Voucher table, or is it better to group all restrictions of a similar type in a single table and relate each of those tables to the Voucher table?
As additional information, we are using Hibernate to map our entities to the DB tables.
This seems like a very wide-open and freeform requirement. Some questions:
How complex will the business rules you are attempting to model be? If you’re allowing (business) users to define their own vouchers, odds are good they’ll come up with some pretty byzantine rules and combinations. If you have to support anything they come up with, you will have problems.
What will the database be tasked to do with this data? Store the “voucher definition”, sure, but then what? Run tallies or reports on them? Analyze how many are used, by who/when/how/for what? Or just list out what was used/generated over the past year?
What kind of data volumes are you going to have? One entry per voucher definition, or per voucher printed/issued? (If the latter, can you use one entry per voucher, with a count of how many issued?) Are we talking dozens, hundreds, or millions of vouchers?
If it’s totally free-form, if they just want a listing without serious analysis, if the overall volume is small, consider using blob fields rather than minutiae-oriented columns. Something like a big text field and a data-entry box wherein the user will “Enter any other criteria defining the voucher”. (You might even do this using XML.) Ugly, you can’t readily analyze the data, but if the goals are too great or diffuse and you're not going to use all that detailed data, it might be necessary.
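A minimal sketch of that free-form approach, with hypothetical table and column names (one row per voucher definition, criteria kept as a single XML blob):
-- Hypothetical: restrictions are not modelled as columns/rows at all,
-- just stored as one free-form criteria document per voucher definition.
CREATE TABLE VoucherDefinition (
    id       INT IDENTITY PRIMARY KEY,
    name     VARCHAR(100) NOT NULL,
    criteria XML NULL   -- "Enter any other criteria defining the voucher"
);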
A final note: a voucher that is good for only selected products cannot be used on products that are added after the voucher is created. A voucher that is good for all but selected products can be used for subsequently created products. This logic may apply to any voucher-limiting criteria. Both methodologies have merit, make sure the users are clear on what they’re doing.
If I understand what you are doing, you will have a problem with only one table for all restrictions, because it means 1 row per voucher and multiple values in your different restriction columns.
It will be harder for you to UPDATE, extract, and cast restriction values.
In my opinion, you should have one table for each restriction type and map them to the Voucher table. That way it will also be easier for you to add new restrictions.
As a suggestion:
Isn't it more rational to have valid-products and valid-categories instead of Exclude-products and Exclude-categories?
Having a Customer-Creditgroup table will lead us to have a valid-customer-group table.
BTW, in the current design we can have a voucher definition table; I will call it the voucher-type table.
About the restrictions:
At the RDBMS level you can state only two types of table constraints declaratively:
uniquely identifying attributes (keys)
subset requirements referencing the same or another table (foreign keys)
Implementing all other types of table constraints (like multi-tuple constraints or transition constraints) requires you to develop procedural data-integrity code.
When a voucher is going to be sold to a specific customer for a specific product, we will need to check the validity of the excluded elements; that could be done by triggers at the database level or in the business logic of your application.
I would personally go with your second proposal... grouping all similar types of restrictions in a single table, which references the Voucher table.
I'll add to that that you can handle includes and excludes in the same table.
So the structure I'd use is something along the lines of:
Voucher (id, type, etc...)
VoucherProductRestriction (id,voucher_id,product_id,include)
VoucherProductCategoryRestriction (id,voucher_id,product_category_id,include)
VoucherCustomerRestriction (id,voucher_id,customer_id)
VoucherEmailRestriction (id,voucher_id,email)
...where the include column could be a boolean that is true in case you want to restrict the voucher to that product or category, or false if you want to restrict it to any product or category other than those specified.
If I understand your context correctly, it makes no sense to have both include and exclude restrictions on the same voucher (although it could make sense to have more than one of the same type). You can probably handle and check this better if you use a single table for both types of restrictions.
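For illustration, a hypothetical DDL rendering of that structure (types, identity columns, and constraints are assumptions; only two of the restriction tables are shown):
-- Sketch only: one table per restriction type, each referencing Voucher.
CREATE TABLE Voucher (
    id   INT IDENTITY PRIMARY KEY,
    type VARCHAR(50) NOT NULL
    -- etc...
);

CREATE TABLE VoucherProductRestriction (
    id         INT IDENTITY PRIMARY KEY,
    voucher_id INT NOT NULL REFERENCES Voucher(id),
    product_id INT NOT NULL,
    include    BIT NOT NULL   -- 1 = only this product, 0 = anything but this product
);

CREATE TABLE VoucherCustomerRestriction (
    id          INT IDENTITY PRIMARY KEY,
    voucher_id  INT NOT NULL REFERENCES Voucher(id),
    customer_id INT NOT NULL
);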

Permissions design pattern that allows date-based access

I am looking at ways to implement an authorization (not authentication) scheme in my app.
There are currently two roles in the system: A and B, but there may be more. Users only have one role.
Basically, the way I have it set up now is with two database tables. One is for role-based permissions on a model, and the other is for specific user-based permissions. I am thinking that this way, users can have a set of default permissions based on their role-based permissions, but then they can also have specific permissions granted/revoked.
So for example:
table: user_permissions
columns:
user_id: [int]
action: [string]
allowed: [boolean]
model_id: [int]
model_type: [string]
table: role_permissions
columns:
role: [int]
action: [string]
model_type: [string]
In the user_permissions table, the allowed field specifies whether the action is allowed or not, so that permissions can be revoked if this value is 0.
In another table, I have the definitions for each action:
table: model_actions
columns:
action: [string]
bitvalue: [int]
model_type: [string]
I do this so that when I check permissions on a model, for example ['create', 'delete'], I can use a bitwise and operation to compare the user's permissions to the permissions I am checking. For example, a model X could have the following model_actions:
action: 'create'
bitvalue: 4
model_type: X
action: 'delete'
bitvalue: 2
model_type: X
action: 'view'
bitvalue: 1
model_type: X
If my user/role permissions specify that the create, view, and delete actions for the model X are 1, 0, and 1, respectively, then this is represented as 110 based on the model_actions table. When I check if I can create model X, I use the fact that create is 4 to construct the bitarray 100. If the bitwise AND operation of 110 and 100 is 100, then the permission is valid.
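A small worked example of that check, written in T-SQL purely for illustration (the literal values come from the model_actions example above):
-- 'create' = 4, 'delete' = 2, 'view' = 1; permissions create+delete = binary 110 = 6
DECLARE @userPermissions int = 6;   -- create and delete allowed, view denied
DECLARE @required        int = 4;   -- checking the 'create' action

SELECT CASE WHEN (@userPermissions & @required) = @required
            THEN 'granted'
            ELSE 'denied'
       END AS create_permission;    -- returns 'granted' here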
ANYWAY, I think I have a granular permissions design pattern figured out. If not PLEASE feel free to educate me on the subject.
The actual focus of my question concerns the following:
Some of my models have actions that are time-dependent. For example, you can delete a model Y only within 24 hours after its created_at date.
What I am thinking is to automatically create a cron job when the model is created that will update the permissions on the date that this occurs. In the case of model Y, I would want to insert a record into the user_permissions that revokes the 'delete' action of this model.
My question is: is this advisable?
Edit
What if I include another column in the SQL tables that specifies a date for the permission to 'flip' (flipDate)? If a flipDate is defined, and if the current date is after the flip date, the permission is reversed. This seems much easier to manage than a series of cron jobs, especially when models may be updated.
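A minimal sketch of how that flipDate check could be evaluated at query time, assuming the column is added to user_permissions and allowed is stored as 0/1:
-- Hypothetical: the stored value is reversed once the flip date has passed,
-- so no cron job is needed; updating the model only means updating flipDate.
SELECT user_id,
       action,
       CASE WHEN flipDate IS NOT NULL AND GETDATE() >= flipDate
            THEN 1 - allowed          -- permission flips after flipDate
            ELSE allowed
       END AS effective_allowed
FROM   user_permissions
WHERE  user_id = @userId
  AND  model_id = @modelId;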
Your model seems fine, but... you are reinventing the wheel a bit and, as you realized yourself, your model is not flexible enough to cater for additional parameters, e.g. time.
In the history of authorization, there is a traditional, well-accepted model, called role-based access control (RBAC). That model works extremely well when you have a clearly defined set of roles and a hierarchy between these roles.
However, when the hierarchy isn't as clear or when there are relationships (e.g. a doctor-patient relationship) or when there are dynamic attributes (such as time, location, IP...), RBAC doesn't work well. A new model emerged a few years back called attribute-based access control (ABAC). In a way, it's an evolution or generalization of RBAC. With ABAC, you can define authorization logic in terms of attributes. Attributes are a set of key-value pairs that describe the user, the action, the resource, and the context. With attributes, you can describe any number of authorization situations such as:
a doctor can view a patient's medical record between 9am and 5pm if and only if the patient is assigned to that doctor
a nurse can edit a patient's medical record if and only if the patient belongs to the same clinic as the nurse.
ABAC enables what one could call PBAC or policy-based access control since now the authorization logic moves away from proprietary code and database schemes into a set of centrally managed policies. The de-facto standard for these policies is XACML, the eXtensible Access Control Markup Language.
In a nutshell, XACML lets you do what you are looking for in a technology-neutral way, in a decoupled, externalized way. It means, you get to define authorization once and enforce it everywhere it matters.
I recommend you check out these great resources on the topic:
NIST's website on RBAC (the older model)
NIST's website on ABAC (the model you need)
the OASIS XACML Technical Committee website (the standard that implements ABAC)
Gartner's Externalized Authorization Management
Kuppinger Cole's Dynamic Authorization Management
The ALFA plugin for Eclipse, a tool to write attribute-based policies.

SQL Server: Granting permission for specific rows

I am doing the BI reports for a group of 5 companies. Since the information is more or less the same for all the companies, I am consolidating all the data of the 5 companies in one DB, restructuring the important data, indexing the tables (I cannot do that in the original DB because of ERP restrictions), and creating the views with all the information required.
In the group, I have some corporate roles that would benefit from having the data of the 5 companies in one view; nevertheless, I do not want an employee of company 1 to see the information of company 2, nor the other way around. Is there any way to grant permissions restricting the information to the rows that contain the employee's company name in a specific column?
I know that I could replicate the view and filter the information using the WHERE clause, but I really want to avoid this. Please help. Thanks!
What you are talking about is row-level security. There is little to no out-of-the-box support in the product for this.
Here are a couple articles on design patterns that can be used.
http://sqlserverlst.codeplex.com/
http://msdn.microsoft.com/en-us/library/bb669076(v=vs.110).aspx
What is the goal of consolidating all the companies into one database?
Here are some ideas.
1 - Separate databases make it easier to secure data; however, it is hard to aggregate data across them, and every object is duplicated in each database.
2 - Use schemas to separate the data. Security can be given out at the schema level (see the sketch at the end of this answer).
This still duplicates objects, minus the database container, but a super-user group can see all schemas and write aggregated reports.
I think schemas are underused by DBAs and developers.
3 - Code either stored procedures and/or duplicate views to ensure security. While tables are not duplicated, some code is.
Again there is no silver bullet for this problem.
However, this is a green field project and you can dictate which way you want to implement it.
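Here is the schema-per-company idea (option 2) sketched in T-SQL; the schema and role names are hypothetical:
-- One schema per company; permissions are granted at the schema level.
CREATE SCHEMA Company1;
GO
CREATE SCHEMA Company2;
GO

CREATE ROLE Company1Readers;
CREATE ROLE Company2Readers;
CREATE ROLE CorporateReporting;

-- Employees only see their own company's schema...
GRANT SELECT ON SCHEMA::Company1 TO Company1Readers;
GRANT SELECT ON SCHEMA::Company2 TO Company2Readers;

-- ...while a corporate reporting role gets both.
GRANT SELECT ON SCHEMA::Company1 TO CorporateReporting;
GRANT SELECT ON SCHEMA::Company2 TO CorporateReporting;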
As of SQL Server 2016 there is support specifically for this problem. The MSDN link in the accepted answer already forwards to the right article. I decided to post again though as the relevant answer changed.
You can now create security policies which implement row level permissions like this (code from MSDN; assuming per-user permissions and a column named UserName in your table):
CREATE SCHEMA Security
GO
CREATE FUNCTION Security.userAccessPredicate(@UserName sysname)
RETURNS TABLE
WITH SCHEMABINDING
AS
RETURN SELECT 1 AS accessResult
WHERE @UserName = SUSER_SNAME()
GO
CREATE SECURITY POLICY Security.userAccessPolicy
ADD FILTER PREDICATE Security.userAccessPredicate(UserName) ON dbo.MyTable,
ADD BLOCK PREDICATE Security.userAccessPredicate(UserName) ON dbo.MyTable
GO
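Once the policy is enabled, the filtering is transparent to callers; roughly:
-- No WHERE clause needed: a session whose login is 'DOMAIN\alice' only gets back
-- rows where UserName = 'DOMAIN\alice', because the filter predicate is applied
-- automatically by the security policy.
SELECT * FROM dbo.MyTable;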
Furthermore, it's advisable to create stored procedures which also check permissions when accessing the data, as a second layer of security; users might otherwise find out details about existing data they don't have access to, e.g. by trying to violate constraints. For details see the MSDN article, which is exactly on this topic.
It points out workarounds for older versions of SQL Server too.
If you want to restrict the data using a WHERE clause, the easiest way is to create a view and then assign permissions on it to the user.
example:
CREATE VIEW emp AS SELECT Name, Bdate, Address FROM EMPLOYEE WHERE id = 5;
GRANT SELECT ON emp TO [user];

Help me with my SQL project (please)

For this grading period, my CS teacher left us an open-choice project involving SQL and Delphi/VB.
I ended up with the assignment of designing and building a program that allowed the users to, through a GUI in Delphi/VB, insert and read hurricane data pulled from a database (latest SQL Server, by the way). However, there are a few catches.
Three tables are required: Hurricanes, Hurricane_History, and Category
The Category table is not meant to be modified, and it contains the columns 'Min. Speed', 'Max. Speed', and 'Category'. The idea is that a hurricane with a rotational speed of X falls into category Y if X is within the minimum and maximum speed of category Y.
The Hurricane table is meant to be modified by the end-user, through the Delphi/VB gui. It contains the following columns: 'Name', 'Day', 'Time', 'Rotational_Speed', 'Movement_Speed', 'Latitude', 'Longitude', and 'Photo'.
Then there is the Hurricane_History table, which contains 'Name', 'Category', 'Starting_DateTime', 'Ending_DateTime', 'Starting Latitude', 'Starting Longitude', 'Ending Latitude', 'Ending Longitude'. This table is not meant to be directly modified, but rather automatically populated through SQL (I figure using SQL triggers and stored procedures).
What the program should end up doing is the following: The user opens the visual app, and enters in information for a certain hurricane. Since only the table Hurricanes is meant to be modified, the user would insert the Name, Day, Time, Current Rotational Speed, Current Movement speed, current latitude, current longitude, and, optionally, a picture.
If the user enters a hurricane that does not exist yet, then it would create a new hurricane with the corresponding data in the Hurricane_History table. If he enters data for a hurricane that already exists, then the data for that hurricane should be updated, and stored into the corresponding Hurricane_History row. Furthermore, the current category of the hurricane should be automatically populated with SQL using the data that was stored in the Category table.
So far, I have the three tables, the columns, the Delphi GUI, the connections (between Delphi and SQL Server), etc.
What I'm having a real hard time with is the SQL Triggers and Stored procedures needed to generate the data in the Hurricane_History table. Here's my algorithm, the first one for populating the category, and the second one for populating the data of the Hurricane_History table:
create trigger determine_category on Hurricanes for insert, update as
*when a value is inserted into Hurricanes.Rotational_Speed, match it with the corresponding row in the Categories table, and insert the corresponding category into the Category column of the hurricane's Hurricane_History row.*
create trigger populate_data on Hurricanes for insert, update as
*if Hurricane.name exists, perform an update instead of an insert, using Hurricanes.Day as Hurricanes_History.Ending_Day, Hurricanes.Latitude and Hurricanes.Longitude as Hurricanes_History.Ending_Latitude and Hurricanes_History.Ending_Longitude, and populating the Category via the determine_category trigger.*
*if Hurricane.name does not exist, create a record in Hurricanes_History using the data from the newly inserted Hurricane record, and populating the Category using the determine_category trigger*
What I need help with is translating my thoughts and ideas into SQL code, so I was wondering if anyone might want to help me throughout this.
Thanks a bunch!
EDIT:
I just whipped up a simple stored procedure for determining the category. What I don't know how to do is use the result/output of the stored procedure as an insertion value. Does anyone have insight on how to do it?
CREATE PROCEDURE determine_category
@speed int
AS
SELECT Category FROM Categories
WHERE Min_Speed <= @speed AND Max_Speed >= @speed
First, since you're using SQL Server and you can use stored procedures, don't use a trigger. It's not necessary. If your teacher needs justification, here's an article from SQL Server MVP Tom LaRock which discusses issues with handling triggers.
Second, as far as how to write the stored procedures, think about how to handle all the functionality logically. You've said you need to do the following:
Read existing hurricane information
Update existing hurricane information
Insert a new hurricane into the database
Your application should handle all of those as separate paths. And you need to think about the functionality before you write your first bit of T-SQL code. That means you have to have an interface which presents existing information. You're going to have to display the hurricanes existing in the database. Then once the user selects the one to get more information on, you'll have to pull back the hurricane history information. So I know in that situation I have two different data retrievals based on user input. That tells me I need to build the GUI interface to handle that progression logically and display the information in a way the user can use. And it also tells me I've got to build two different stored procedures. The second one will be passed some information identifying the hurricane to retrieve data on (which would be the primary key).
Now roll through the rest of the application's functionality. That should get you started.
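For illustration only, the two retrieval procedures described above might look roughly like this; the table, column, and key names are assumed from the question (underscores assumed for the history columns), not prescribed:
-- Sketch 1: list the hurricanes for the selection screen.
CREATE PROCEDURE ListHurricanes
AS
    SELECT DISTINCT Name FROM Hurricanes;
GO

-- Sketch 2: pull the history for the hurricane the user selected.
CREATE PROCEDURE GetHurricaneHistory
    @Name varchar(50)            -- identifying value passed in from the GUI
AS
    SELECT Name, Category, Starting_DateTime, Ending_DateTime,
           Starting_Latitude, Starting_Longitude,
           Ending_Latitude, Ending_Longitude
    FROM Hurricane_History
    WHERE Name = @Name;
GO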
Rather than use triggers to do this, I would be more inclined to perform logical DML SQL statements inside transactions. Triggers, whilst sometimes proving useful, are not really necessary in this scenario (unless they are required for your coursework).
As a first approach, think about what is required to complete the application -
A UI layer to present data to the user, allow a user to search, insert, update (and possibly delete) hurricane data.
In this layer, we'll most likely want to
1. present users with a list of previous hurricanes, perhaps with some key details displayed, and give users the ability to select a particular hurricane and see all the details.
2. give users the ability to insert new hurricane data. Think about how the category will be displayed for the user to choose, and how the inputted data will be taken from this layer and ultimately end up in the data layer. Think also about how and whether we should validate the user input. What needs to be validated? Well, guarding against SQL injection, ensuring that values are in permitted ranges and lengths, etc. If this were a real application, then user input validation would be a necessity.
A data layer used to store the data in a defined entity relationship.
A data access layer used to perform all data access logic in regard to manipulating the application data.
A Business logic layer that contains the classes required for the application. Will contain any of the rules associated with the entities and will be used to present data to the UI layer.
We could take an extremely simplified approach and have the UI layer call straight into the data layer through stored procedures (which would be acting as our data access layer and also our business logic layer as they will encapsulate the rules regarding whether a hurricane record already exists and needs updating or a new record needs creating, possibly some validation too).
Re: Inserting sproc output into a table. Use the following general syntax:
INSERT INTO table (field1, field2, field3)
EXEC yourSproc @param1, @param2
In the insert documentation, search for execute_statement for details.
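Applied to the procedure from the question, a hypothetical usage could capture the category into a temp table first and then reuse it when writing to Hurricane_History (the column type and the example speed are assumptions):
-- Hypothetical sketch: capture the procedure's output, then reuse it.
CREATE TABLE #CategoryResult (Category int);   -- type assumed to match Categories.Category

INSERT INTO #CategoryResult (Category)
EXEC determine_category @speed = 120;          -- 120 is just an example speed

-- The captured category can now be used in a subsequent INSERT/UPDATE
-- against Hurricane_History.
SELECT Category FROM #CategoryResult;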