I'm making a use case for user authentication on a website: when a user registers a new account, the website sends a verification email to the user.
Is this use case correct? If not, how can I improve it?
This is the use case I made:
This diagram is syntactically wrong: the dashed lines should be arrows labeled either «include» or «extend» to explain the dependencies between use cases.
Semantically the diagram could be correct (depending on how you decide on the dependencies), with the following remarks:
The actor service must be an external autonomous system, not an internal verification service within the same system.
The use cases shall be independent and have no ordering between them. Hence, are you sure it's OK to first log in and then register?
Breaking a functionality (e.g. registration) down into smaller ones (e.g. send email) is called "functional decomposition". Although it is not forbidden by UML, it is not recommended by practitioners, as it leads to overly complex diagrams.
The key issue with this diagram is purpose: despite the many bubbles, we still don't know what the goals of the actors are, nor what the purpose of the system is. Yet this is exactly what use cases are supposed to tell us. Login and registration are not really goals: they are just necessary steps toward doing something more meaningful.
In short, the diagram is correct, but it also depends on the level of detail you want to go into with the UML diagram. Generally, in the case of login or registration, we tend to model more detail, for example credential checks, password recovery, and other events.
Goal:
Determine if a functional test was successful.
Scenario:
We have a functional requirement: "A user should be able to sign up with username and password. The username has to be a valid email address. The password has to be at least 8 characters long".
We have a method "SignupResult UserManager.Signup(string username, string password)".
We want a happy test with valid inputs, and a sad test with invalid inputs.
Sub-Systems of the UserManager (e.g. Database) can be either mocked or real systems.
Question:
What would be the best way to determine if the user was successfully signed up? I can imagine the following options:
If any of the sub-systems were mocked, one could check whether a specific function like "DB.SaveUser(...)" was called. But this destroys the idea of a functional test being a black-box test, and requires the test writer to have knowledge of the implementation.
If we use real sub-systems, one could for example check if the row exists in the DB. That would be just as inadequate as the approach above.
One could use another function like "UserManager.CheckUser(...)" to check if the user was created. This would introduce another method that needs testing; also, there may be operations that have no "test counterpart", or one would have to implement them just for testing, which seems not ideal.
We could check the result "SignupResult" and/or check for exceptions thrown. This would require defining the interface of the method. This also would require all methods to return a sensible value - I guess this will be a good approach anyway.
To me the last method seems to be the way to go. Am I correct? Are there other approaches? And how would we check side effects like "an email was sent to the new user"?
You may want to acquaint yourself with the concept of the Test Pyramid.
There's no single correct way to design and implement automated tests - only trade-offs.
If you absolutely must avoid any sort of knowledge of implementation details, there's really only one way to go about it: test the actual system.
The problem with that is that automated tests tend to leave behind a trail of persistent state changes. For example, I once did something like what you're asking about and wrote a series of automated tests that used the actual system (a REST API) to sign up new users.
The operations people soon asked me to turn that system off, even though it only generated a small fraction of actual users.
You might think that the next-best thing would be a full systems test against some staging or test environment. Yes, but then you have to take it on faith that this environment sufficiently mirrors the actual production environment. How can you know that? By knowing something about implementation details. I don't see how you can avoid that.
If you accept that it's okay to know a little about implementation details, then it quickly becomes a question of how much knowledge is acceptable.
The experience behind the test pyramid is that unit tests are much easier to write and maintain than integration tests, which are again easier to write and maintain than systems tests.
I usually find that the sweet spot for these kinds of tests is self-hosted, state-based tests where only the actual system dependencies, such as databases or email servers, are replaced with Fakes (not Mocks).
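A minimal sketch of such a state-based test with Fakes. All names here (`UserManager`, `SignupResult`, `FakeUserStore`, `FakeMailer`) are illustrative stand-ins for the question's C#-style signature, not a real API; the point is that the test asserts on the observable state of the fakes, never on which methods were called:

```typescript
interface SignupResult { success: boolean; error?: string }

// Fake (not Mock): a real, in-memory implementation we can inspect afterwards.
class FakeUserStore {
  private users = new Map<string, string>();
  save(username: string, password: string) { this.users.set(username, password); }
  has(username: string) { return this.users.has(username); }
}

class FakeMailer {
  sent: { to: string; subject: string }[] = [];
  send(to: string, subject: string) { this.sent.push({ to, subject }); }
}

class UserManager {
  constructor(private store: FakeUserStore, private mailer: FakeMailer) {}
  signup(username: string, password: string): SignupResult {
    if (!/^[^@\s]+@[^@\s]+\.[^@\s]+$/.test(username))
      return { success: false, error: "invalid email" };
    if (password.length < 8)
      return { success: false, error: "password too short" };
    this.store.save(username, password);
    this.mailer.send(username, "Welcome!");
    return { success: true };
  }
}

const store = new FakeUserStore();
const mailer = new FakeMailer();
const manager = new UserManager(store, mailer);

// Happy path: the user ends up in the store and exactly one mail was sent.
const ok = manager.signup("alice@example.com", "secret-password");
// Sad path: the result reports failure and no further mail is sent.
const bad = manager.signup("alice@example.com", "short");
console.log(ok.success, store.has("alice@example.com"), mailer.sent.length, bad.success);
```

Because the fakes hold real state, the test stays black-box with respect to `UserManager`: it drives the public `signup` method and checks outcomes, not call sequences.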
Perhaps it is the requirement that needs further refinement.
For instance, what precisely would your user do to verify if she has signed up correctly? How would she know? I imagine she'd look at the response from the system: "account successfully created". Then she'd only know that the system posts a message in response to that valid creation attempt.
Testing for the posted message is actionable, just having a created account is not. This is acceptable as a more specific test, at a lower test level.
So think about why exactly users should register. Just to see a response? How about the requirement:
When a user signs up with a valid username and a valid password, then she should be able to successfully log into the system using the combination of that username and password.
Then one can add a definition of a successful login, just like the definitions of validity of the username and password.
This is actionable, without knowing specifics about internals. It should be acceptable as far as system integration tests go.
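The refined requirement can be tested by driving only the system's public surface: sign up, then log in with the same credentials. A sketch, with a hypothetical `System` facade standing in for whatever API the real system exposes:

```typescript
// Illustrative facade; the test never inspects internals, only signup/login.
class System {
  private accounts = new Map<string, string>();

  signup(username: string, password: string): boolean {
    if (!username.includes("@") || password.length < 8) return false;
    if (this.accounts.has(username)) return false; // no duplicate accounts
    this.accounts.set(username, password);
    return true;
  }

  login(username: string, password: string): boolean {
    return this.accounts.get(username) === password;
  }
}

const sys = new System();
// The observable definition of "successfully signed up":
// the same username/password combination now logs in.
const signedUp = sys.signup("bob@example.com", "hunter2-extra");
const loggedIn = sys.login("bob@example.com", "hunter2-extra");
const wrongPw = sys.login("bob@example.com", "wrong-password");
console.log(signedUp, loggedIn, wrongPw); // true true false
```

Nothing here assumes a database, a DTO shape, or any other implementation detail, which is exactly what makes it suitable as a system-level test.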
Given something simple like this:
<Dashboard v-if="$store.getters.ui.user.role == 'staff'" />
<Dashboard v-if="$store.getters.ui.user.role == 'manager'" />
... what is the best practice for defending against someone changing user.role from 'staff' to 'manager' in the browser?
(Of course, the data is loaded from the server based on role here, so at best the curious user will see an empty and slightly broken interface, but it would be better if they saw nothing at all.) Other than obfuscating the rather obvious user.role == 'staff' check, I can't see any way around it.
There is no way to prevent a user from being able to change the client-side code provided to him. The solution you mentioned is the right approach. Never trust the user with any sensitive data unless verified by your server.
This means that while a user with bad intentions might be able to change his role to "manager" and thereby get access to the dashboard (or even remove the if-statement only rendering the Dashboard conditionally - the code is there), the dashboard he sees cannot contain any sensitive data only supposed to be visible to users with a "manager" role.
The key is not to provide the user any data he is not supposed to see in the first place, rather than obfuscating the data and hoping the user won't notice. You are not protecting against the average user but rather somebody who knows how to code and intends to break your application. Obfuscated code is a small hurdle and not sufficient to prevent attackers from seeing and understanding the underlying logic.
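A minimal sketch of what that looks like server-side. The names (`Session`, `getDashboard`, `managerDashboard`) are made up for illustration; the point is that the server re-derives the role from its own session state on every request and never returns sensitive data based on anything the client claims:

```typescript
type Role = "staff" | "manager";

// Session data established server-side at login, not taken from the client.
interface Session { userId: string; role: Role }

const managerDashboard = { revenue: 123456, headcount: 42 };

// Hypothetical endpoint handler: the authorization check lives here,
// so editing the Vue code in the browser changes nothing that matters.
function getDashboard(session: Session): { status: number; body?: object } {
  if (session.role !== "manager") {
    return { status: 403 }; // sensitive data never leaves the server
  }
  return { status: 200, body: managerDashboard };
}

console.log(getDashboard({ userId: "u1", role: "staff" }).status);   // 403
console.log(getDashboard({ userId: "u2", role: "manager" }).status); // 200
```

With this in place, the client-side `v-if` is purely cosmetic: a tampering user can render the empty Dashboard shell, but every data request still hits the server-side check.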
I'm new to system modeling and I have some problems expressing my ideas in diagrams, especially use case diagrams, because of the lack of dynamic interactions, let's say.
Precondition: the user must be logged in.
Specifications:
The user will be able to view all his notes (e.g. on the home page).
The user can open a specific note and modify it by changing its title, its body, or both.
The user can access "create note" from the home page.
He must add a title, a body, and at least one tag.
The user can access "create tag" from the "create note" page and from the home page.
When saving and returning to the main page, the system should save the note to the backend.
When creating a tag, the user must enter the label and specify the color.
Questions:
1- Is this a valid use case diagram for it?
2- Should I add an association between the backend and "create note" and "create tag"?
No, that's not really a valid UC diagram. UCs are about added value. You started with functional decomposition (like most people do when starting with UCs). A UC represents a single added value the system under consideration delivers to one of its actors. Here you have a note for which you have CRUD, and that already causes some pain: is the added value a general "Manage X", or is there a fundamental difference between showing and editing? That depends on the context, and there is no general answer. In any case, you should not describe individual steps (actions, like "enter xy") or various scenarios (activities) as separate use cases. You need to synthesize as much as possible into a single UC to show added value. This is difficult for techies.
As a rule of thumb: if your UC diagram resembles a spider web then your design is likely broken.
As always I recommend to read Bittner/Spence about use cases.
I have an API with endpoint GET /users/{id} which returns a User object. The User object can contain sensitive fields such as cardLast4, cardBrand, etc.
{
firstName: ...,
lastName: ...,
cardLast4: ...,
cardBrand: ...
}
If the user calls that endpoint with their own ID, all fields should be visible. However, if it is someone else's ID, then cardLast4 and cardBrand should be hidden.
I want to know what are the best practices here for designing my response. I see three options:
Option 1. Two DTOs, one with all fields and one without the hidden fields:
// OtherUserDTO
{
firstName: ...,
lastName: ..., // cardLast4 and cardBrand hidden
}
I can see this getting out of hand with role-based DTOs: what if I now need UserDTOForAdminRole, UserDTOForAccountingRole, etc.? The number of potential DTOs quickly grows.
Option 2. One response object, the User, but with the values the caller should not be able to see nulled out.
{
firstName: ...,
lastName: ...,
cardLast4: null, // hidden
cardBrand: null // hidden
}
Option 3. Create another endpoint such as /payment-methods?userId={userId}, even though PaymentMethod is not an entity in my database. This now requires 2 API calls to get all the data. If the userId is not their own, it will return 403 Forbidden.
{
cardLast4: ...,
cardBrand: ...
}
What are the best practices here?
You're going to get different opinions about this, but I feel that doing a GET request on some endpoint and getting a different shape of data depending on authorization status can be confusing.
So I would be tempted, if it's reasonable to do this, to expose the privileged data via a secondary endpoint. Either by just exposing the private properties there, or by having 2 distinct endpoints, one with the unprivileged data and a second that repeats the data + the new private properties.
I tend to go for the first of those two approaches, because an API endpoint is not just a means to get data. The URI is an identity, so I would want /users/123 to mean the same thing everywhere, and have a second /users/123/secret-properties.
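A sketch of that two-resource split, with illustrative names (the in-memory `users` record and the `/payment-details` path are assumptions, not anything from the question): `/users/{id}` always returns the same public shape, while the sensitive fields live behind a second URI that only the owner can fetch:

```typescript
interface User {
  id: string; firstName: string; lastName: string;
  cardLast4: string; cardBrand: string;
}

// Stand-in for a real data store.
const users: Record<string, User> = {
  "123": { id: "123", firstName: "Ada", lastName: "Lovelace",
           cardLast4: "4242", cardBrand: "Visa" },
};

// GET /users/{id} -- identical shape for every caller, nothing sensitive.
function getPublicUser(id: string) {
  const u = users[id];
  return u ? { firstName: u.firstName, lastName: u.lastName } : undefined;
}

// GET /users/{id}/payment-details -- 403 unless the caller is the owner.
function getPaymentDetails(id: string, callerId: string) {
  if (id !== callerId) return { status: 403 as const };
  const u = users[id];
  return u
    ? { status: 200 as const, body: { cardLast4: u.cardLast4, cardBrand: u.cardBrand } }
    : { status: 404 as const };
}

console.log(getPublicUser("123"));                   // public fields only
console.log(getPaymentDetails("123", "456").status); // 403: not the owner
console.log(getPaymentDetails("123", "123").status); // 200: owner
```

Each URI keeps one meaning everywhere, so caches, logs, and clients never have to guess which "version" of the user document they are looking at.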
I have an API with endpoint GET /users/{id} which returns a User object.
In general, it may help to reframe your thinking -- resources in REST are generalizations of documents (think "web pages"), not generalizations of objects. "HTTP is an application protocol whose application domain is the transfer of documents over a network" -- Jim Webber, 2011
If the user calls that endpoint with their own ID, all fields should be visible. However, if it is someone else's ID, then cardLast4 and cardBrand should be hidden.
Big picture view: in HTTP, you've got a bit of tension between privacy (only show documents with sensitive information to people allowed access) and caching (save bandwidth and server pressure by using copies of documents to satisfy more than one request).
Cache is an important architectural constraint in the REST architectural style; that's the bit that puts the "web scale" in the world wide web.
OK, good news first: HTTP has special rules for caching requests with Authorization headers. Unless you deliberately opt in to allowing the responses to be re-used, you don't have to worry about caching.
Treating the two different views as two different documents, with different identifiers, makes almost everything easier -- the public documents are available to the public, the sensitive documents are locked down, operators looking at traffic in the log can distinguish the two different views because the logged identifier is different, and so on.
The thing that isn't easier: the case where someone is editing (POST/PUT/PATCH) one document and expecting to see the changes appear in the other. Cache invalidation is one of the two hard problems in computer science. HTTP doesn't have a general-purpose mechanism that allows the origin server to mark arbitrary documents for invalidation: successful unsafe requests invalidate the effective target URI, the Location, and the Content-Location, and that's it... and all three of those values have other important uses, making them challenging to game.
Documents with different absolute-uri are different documents, and those documents, once copied from the origin server, can get out of sync.
This is the option I would normally choose; a client looking at cached copies of a document isn't seeing changes made by the server anyway.
OK, you decide that you don't like those trade offs. Can we do it with just one resource identifier? You immediately lose some clarity in your general purpose logs, but perhaps a bespoke logging system will get you past that.
You probably also have to give up public caching at this point. The only general-purpose header that changes between the user who is allowed to see the sensitive information and the user who isn't? That's the Authorization header, and there's no "Vary" mechanism on authorization.
You've also got something of a challenge for the user who is making changes to the sensitive copy, but wants to now review the public copy (to make sure nothing leaked? or to make sure that the publicly visible changes took hold?)
There's no general purpose header for "show me the public version", so either you need to use a non standard header (which general purpose components will ignore), or you need to try standardizing something and then driving adoption by the implementors of general purpose components. It's doable (PATCH happened, after all) but it's a lot of work.
The other trick you can try is to play games with Content-Type and the Accept header: perhaps clients use something normal for the public version (e.g. application/json) and a specialized type for the sensitive version (application/prs.example-sensitive+json).
That would allow the origin server to use the Vary header to indicate that the response is only suitable if the same accept headers are used.
Once again, general purpose components aren't going to know about your bespoke content-type, and are never going to ask for it.
The standardization route really isn't going to help you here, because the thing you really need is that clients discriminate between the two modes, where general purpose components today are trying to use that channel to advertise all of the standardized representations that they can handle.
I don't think this actually gets you anywhere that you can't fake more easily with a bespoke header.
REST leans heavily into the idea of using readily standardizable forms; if you think this is a general problem that could potentially apply to all resources in the world, then a header is the right way to go. So a reasonable approach would be to try a custom header, and get a bunch of experience with it, then try writing something up and getting everybody to buy in.
If you want something that just works with the out of the box web that we have today, use two different URI and move on to solving important problems.
I am starting to investigate good practices for public APIs, specifically how to deal with breaking changes. There are a lot of technicalities related to versioning (or non-versioning!), but I am more interested in the code base implications.
Imagine a basic scenario where you have a business rule "password must have at least 10 characters", and a "Create User" operation exposed in a public API, accepting a password.
You have hundreds of clients using it, and one day you decide to change the business rule to "password must have at least 15 characters". Even though you did not change the semantics of the API signature and payloads, you just introduced a breaking change, because you changed the behavior of the API.
How would you deal with this?
I only find wrong approaches:
Modify your domain invariants (business rules) into dated/versioned invariants: this would create a nightmare for code readability, testing, etc.
Duplicate your code base per API version: this would create a maintenance nightmare
Hope that one day you will be able to deprecate all this and become clean again: in your dreams...
Any real life experience on this in your job?
The easiest way is just to communicate with your clients and warn them of the upcoming change weeks or months in advance. This way they can prepare and be ready for the breaking change.
If you absolutely must support old clients, another option is to keep the domain invariant at 10, but add an additional API call for the create-user scenario that checks the password length and verifies it is at least 15 characters outside the domain. Then encourage your users to migrate to the new CreateUser endpoint. This works for simple cases like this one, but becomes very hard for complicated invariants or if your domain is used in different contexts (multiple APIs, a desktop app, etc.).
If you decide to go this route, a good tip is to make sure you have metrics showing how many clients use the old endpoint versus the new one. When you have reached a certain threshold, you can shut down the old endpoint and move the minimum-password-length-of-15 invariant from the API layer into the domain.
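The migration path described above can be sketched as follows. The function names (`domainCreateUser`, `createUserV1`, `createUserV2`) are illustrative, assuming the 10-character rule lives in the domain and the 15-character rule is temporarily enforced in the API layer:

```typescript
// Domain layer: keeps the ORIGINAL invariant so old clients still work.
function domainCreateUser(password: string): boolean {
  return password.length >= 10; // original business rule
}

// v1 endpoint: delegates straight to the domain, behavior unchanged.
function createUserV1(password: string): boolean {
  return domainCreateUser(password);
}

// v2 endpoint: stricter rule enforced OUTSIDE the domain for now.
// Once metrics show v1 traffic has dried up, the 15-character rule
// moves into domainCreateUser and this check disappears.
function createUserV2(password: string): boolean {
  if (password.length < 15) return false;
  return domainCreateUser(password);
}

console.log(createUserV1("ten-chars!"));      // true: old clients unaffected
console.log(createUserV2("ten-chars!"));      // false: new rule rejects it
console.log(createUserV2("fifteen-chars!!")); // true: satisfies both rules
```

The domain stays single-versioned the whole time; only the thin API layer carries the temporary duplication, which is exactly the piece you can delete once the old endpoint is shut down.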