Where does a failed ABAP authorization check get logged?

In ABAP I can perform authorization checks using the AUTHORITY-CHECK statement. If the user has the required privileges, sy-subrc equals 0; otherwise it is non-zero.
I wonder whether such failed authorization checks are logged anywhere, since this would be interesting from an application security perspective.
I am aware of the Security Audit Log, the system log and so forth. However, I have never come across failed authorization checks being recorded there.
I also know transaction SU53; however, I believe it does not perform any long-term logging.
Is there a log that captures all failed authorization checks?

If you switch on the authorization trace in ST01 or STAUTHTRACE, the checks (whether failed or passed) are logged. However, that trace is intended for development and debugging purposes only. Permanently logging all authorization checks of all users would not only be a significant performance issue and generate a huge amount of data in a short time, it may also be illegal, as it constitutes permanent surveillance of employees' actions and performance.

How to handle secured API in service to service communication

I have a working monolith application (deployed in a container), to which I want to add a notifications feature as a separate microservice.
I'm planning for the monolith to emit events to a message bus (RabbitMQ), where they will be received by the new service, which will send the notification to the user. In order to compose a notification, it will need other information about the user from the monolith, so it will call the monolith's REST API to obtain it.
The problem is that access to the monolith's API requires authentication in the form of a token. I was thinking of:
using the secret from the monolith to issue a never-expiring token - I don't think this is a great idea from a security perspective, and I also know that the keys sometimes rotate, in which case the token would eventually become invalid anyway
using the message bus to retrieve the information - this does not seem like a good idea either, as the asynchrony would make it very complicated
providing all the info the notification service needs in the event - this would couple them more tightly, and moreover I also plan to send notifications based on the state of the monolith that are not triggered by an event
removing the authentication from the monolith and implementing it differently (not sure how yet)
My question is: what are some good ways to solve this kind of problem, and also, having just started learning about microservices, is what I am trying to do the right approach in the first place?
When dealing with internal security you should always consider the deployment and how the APIs are exposed to the outside world; an API gateway might be used to simply make internal APIs unreachable from outside. In that case, a fixed token might be good enough to ensure that the client is authorized.
In general, though, I would suggest looking into OAuth2 or a JWT-based solution, as it helps to validate the identity of the calling system as well as its access grants.
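To make the JWT option concrete, here is a minimal sketch (Python with the PyJWT library) of how the notification service could mint a short-lived, narrowly scoped token and how the monolith could validate it. The shared secret, the issuer/audience names and the scope are illustrative assumptions, not anything from the question.
import time
import jwt  # PyJWT

SHARED_SECRET = "replace-me"  # illustrative; in practice fetched from a secrets store

def issue_service_token():
    # Called by the notification service; a short expiry keeps key rotation painless.
    now = int(time.time())
    claims = {
        "iss": "notification-service",   # assumed service name
        "aud": "monolith-api",           # assumed audience
        "iat": now,
        "exp": now + 300,                # 5 minutes
        "scope": "users:read",           # only what the notification service needs
    }
    return jwt.encode(claims, SHARED_SECRET, algorithm="HS256")

def validate_service_token(token):
    # Called by the monolith on incoming internal requests; raises on invalid or expired tokens.
    return jwt.decode(token, SHARED_SECRET, algorithms=["HS256"],
                      audience="monolith-api", issuer="notification-service")
With an asymmetric algorithm such as RS256 the monolith would only need the public key to validate tokens, which avoids sharing a secret between the services at all.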
As for your architecture doubts, you need to consider the following scenarios when building out the solution:
The remote call can fail at any time for unknown reasons, so you shouldn't acknowledge the notification event until you're certain that the notification has been processed successfully (see the consumer sketch after this list).
As you've mentioned RabbitMQ, you should aim to keep the notification queue as small as possible; to that end, a cache that contains the user details might help speed things along (and reduce the chance of failure when the external system is not available).
If your application sends a lot of notifications to potentially millions of different users, you could consider having a read-only database replica of the users which is accessible to the notification service, and reading directly from the database cluster in batches. This reduces the load on the monolith and shifts it to the database layer.
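As a rough sketch of the first two points (acknowledge only after successful processing, plus a cache of user details), assuming the pika client, a queue named "notifications", and placeholder helpers standing in for the monolith call and the actual delivery:
import json
import pika

user_cache = {}  # user_id -> details; a real service would bound and expire this

def call_monolith_api(user_id):
    # Placeholder for the authenticated REST call to the monolith.
    return {"id": user_id, "email": "user%s@example.com" % user_id}

def send_notification(user, event):
    # Placeholder for the actual delivery channel (email, push, ...).
    print("notify", user["email"], event.get("type"))

def fetch_user(user_id):
    if user_id not in user_cache:
        user_cache[user_id] = call_monolith_api(user_id)
    return user_cache[user_id]

def on_message(channel, method, properties, body):
    try:
        event = json.loads(body)
        send_notification(fetch_user(event["user_id"]), event)
        channel.basic_ack(delivery_tag=method.delivery_tag)      # only ack on success
    except Exception:
        # Requeue so the event is not lost; a real service would dead-letter
        # poison messages instead of requeueing them forever.
        channel.basic_nack(delivery_tag=method.delivery_tag, requeue=True)

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()
channel.queue_declare(queue="notifications", durable=True)
channel.basic_qos(prefetch_count=10)  # keep the in-flight backlog small
channel.basic_consume(queue="notifications", on_message_callback=on_message, auto_ack=False)
channel.start_consuming()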

ASP.NET Core 3 Session (state) concurrency and integrity

I have a page which issues multiple requests concurrently, so those requests are within the very same session. For accessing the session I use IHttpContextAccessor everywhere.
My problem is that, regardless of the timing, some requests do not see session state that other requests have already set, but instead see some previous state (even though, in terms of timing, the set operation has already happened).
As far as I know, each request has its own copy of the state, which is written back (well, "when"?) to the common "one" state. If this "when" is delayed until the request has been served completely, then the scenario I am experiencing can easily happen: the second concurrent request within the session got its copy after the first request modified the state, but before the first request finished completely.
However, all of the above means that when requests are served concurrently within a session, there is no way to maintain session integrity: the second request, not seeing the changes already made by the first, will write back something that is inconsistent with the changes the first request already made.
Am I missing something?
Is there any workaround? (with some cost of course)
First, you may know this already, but it bears pointing out, just in case: session state is specific to one client. What you're talking about here, then, is the same client throwing multiple concurrent requests at the server, each of which touches the same piece of session state. That, in general, seems like a bad design. If there's some actual application reason to have multiple concurrent requests from the same client, then what those requests do should be idempotent or at least not step on each other's toes. If it's a situation where the client is just spamming the server, either due to impatience or maliciousness, it's really not your concern whether their session state becomes corrupted as a result.
Second, for the reasons outlined above, concurrency is not really a concern for sessions. There's no use case I can imagine where the client would need to send multiple simultaneous requests that each modify the same session key. If there is, please elucidate by editing your question accordingly. However, I'd still imagine it would be something you likely shouldn't be persisting in the session in the first place.
That said, the session is thread-safe in that multiple simultaneous writes/reads will not cause an exception, but no guarantee is or can be made about integrity. That's universal across all concurrency scenarios. It's on you, as the developer, to ensure data integrity if that's a concern. You do so by designing a concurrency strategy. That could be anything from locks/semaphores to gate access, to simply compensating for things happening out of band. For example, with EF you can employ concurrency tokens in your database tables to prevent one request overwriting another. The value of the token is modified with each successful update, and the application-known value is checked against the current database value before the update is made, to ensure that it has not been modified since the application initiated the update. If it has, an exception is thrown to give the application a chance to catch it and recover: cancelling the update, getting the fresh data and modifying that, or just pushing through an overwrite. The point is that you would need to come up with some similar strategy if the integrity of the session data is important.
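To illustrate the concurrency-token idea outside of EF, here is a minimal sketch using Python and sqlite3: the update only succeeds if the row still carries the version that the request originally read. The table and column names are made up for the example.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE cart (id INTEGER PRIMARY KEY, payload TEXT, version INTEGER)")
conn.execute("INSERT INTO cart (id, payload, version) VALUES (1, 'initial', 1)")
conn.commit()

def update_cart(conn, cart_id, new_payload, version_read):
    # The WHERE clause is the concurrency-token check: no row matches if another
    # request bumped the version since we read it, so rowcount stays 0.
    cur = conn.execute(
        "UPDATE cart SET payload = ?, version = version + 1 WHERE id = ? AND version = ?",
        (new_payload, cart_id, version_read))
    conn.commit()
    if cur.rowcount == 0:
        raise RuntimeError("concurrency conflict: reload, retry or overwrite deliberately")

update_cart(conn, 1, "first writer wins", version_read=1)   # succeeds, version becomes 2
try:
    update_cart(conn, 1, "stale write", version_read=1)      # fails: version is already 2
except RuntimeError as exc:
    print(exc)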

What are some use cases for the Application Log (BC-SRV-BAL)?

Hello fellow developers,
I recently stumbled upon the Application Log and find it to be quite handy. Now I am wondering, from a best-practice perspective: what are some use cases for utilizing the Application Log vs. normal messages / class-based exceptions?
Normally the application log is used for information that the end user does not need to be shown directly. The application log complements normal messages and class-based exceptions, but does not completely replace them.
Imagine a situation where there is an issue with the data in a background process. If a developer wants to see what data was being processed (after it has been processed), that is difficult. A developer can therefore write some data to the application log wherever their gut says a failure is possible.
Normally, this application logging, as well as the granularity of the data stored in the log, is controlled by user parameters.
Hope this helps.
The application log comes in handy to
store messages. Interactive messages and exceptions are lost after the user clicks them away. The application log stores that information for longer periods of time.
log background processes. These have no direct means to inform a user because there is no user, only some other process that triggered the batch.
provide additional details. Interactive messages are usually minimized to not spam the user with too many popups. The application log can provide additional aspects and side infos to accompany the main result.
log "undercurrents". If a reuse component is unsure what level of detail its consumer wants, it can write an application log with high level of detail that the consumer later can consume or not, as desired.
It is not appropriate when
you want to process the logged details in an automatic way. Application logs are for display to the end user. Application processing should store or hand over data in a more appropriate format.
you need to process vast amounts of data. Writing the application log is fast, but still costs database roundtrips, so large numbers of records can slow down the actual application too much.
you need to store sensitive data. Application logs are secured with authorization checks, but still they may not be the appropriate place for really sensitive information.

Blocking duplicate HTTP requests using ModSecurity

I am using ModSecurity to look for specific values in POST parameters and block the request if a duplicate comes in. I am using a ModSecurity user collection to do just that. The problem is that my requests are long-running, so a single request can take more than 5 minutes. The user collection, I assume, does not get written to disk until the first request has been processed. If, during the execution of the first request, another request comes in with a duplicate value for the POST parameter, the second request does not get blocked, since the collection is not available yet. I need to avoid this situation. Can I use memory-based collections shared across requests in ModSecurity? Any other way? Snippet below:
SecRule ARGS_NAMES "uploadfilename" "id:400000,phase:2,nolog,setuid:%{ARGS.uploadfilename},initcol:USER=%{ARGS.uploadfilename},setvar:USER.duplicaterequests=+1,expirevar:USER.duplicaterequests=3600"
SecRule USER:duplicaterequests "@gt 1" "id:400001,phase:2,deny,status:409,msg:'Duplicate Request!'"
ErrorDocument 409 "<h1>Duplicate request!</h1><p>Looks like this is a duplicate request, if this is not on purpose, your original request is most likely still being processed. If this is on purpose, you'll need to go back, refresh the page, and re-submit the data."
ModSecurity is really not a good place to put this logic.
As you rightly state, there is no guarantee when a collection is written, so even if collections were otherwise reliable (which they are not - see below), you shouldn't use them for absolute checks like duplicate detection. They are OK for things like brute force or DoS checks where, for example, stopping after 11 or 12 attempts rather than 10 isn't that big a deal. However, for absolute checks like stopping duplicates, the lack of certainty means this is a bad place to do the check. A WAF, to me, should be an extra layer of defence rather than something you depend on to make your application work (or at least stop breaking). If a duplicate request causes a real problem for the transactional integrity of the application, then that check belongs in the application rather than in the WAF.
In addition to this, the disk-based way that collections work in ModSecurity causes lots of problems - especially when multiple processes/threads try to access them at once - which makes them unreliable both for persisting data and for removing persisted data. Many folks on the ModSecurity and OWASP ModSecurity CRS mailing lists have seen errors in the log file when ModSecurity tried to automatically clean up collections, and have watched collection files grow and grow until it starts to have detrimental effects on Apache. In general I don't recommend user collections for production usage, especially for web servers with any volume.
There was a memcache-backed version of ModSecurity collections in the works, which stopped using the disk-based SDBM format and might have addressed a lot of the above issues; however, it was never completed, though it may become part of ModSecurity v3. I still maintain, however, that a WAF is not the place for this check.
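If the duplicate check does move into the application as suggested, one lightweight option is an atomic "set if not exists" with an expiry, which mirrors the one-hour expirevar in the rules above. Here is a rough sketch with the redis-py client; the key prefix, the TTL and the 409 handling are assumptions for illustration:
import redis

r = redis.Redis(host="localhost", port=6379)

def try_claim_upload(upload_filename, ttl_seconds=3600):
    # SET ... NX EX is a single atomic operation, so two concurrent requests
    # with the same filename cannot both succeed, even while the first upload
    # is still being processed.
    return r.set("upload:" + upload_filename, "in-progress", nx=True, ex=ttl_seconds) is True

# In the upload handler:
# if not try_claim_upload(filename):
#     respond with 409 "Duplicate request" and stop processing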

How to keep an API idempotent while receiving multiple requests with the same id at the same time?

From a lot of articles and commercial APIs I've seen, most people make their APIs idempotent by asking the client to provide a requestId or idempotency key (e.g. https://www.masteringmodernpayments.com/blog/idempotent-stripe-requests) and basically storing a requestId <-> response map in storage. So if a request comes in whose requestId is already in this map, the application just returns the stored response.
This is all good to me, but my problem is how to handle the case where a second call comes in while the first call is still in progress.
So here are my questions:
I guess the ideal behaviour would be for the second call to keep waiting until the first call finishes, and then return the first call's response? Is this how people do it?
If yes, how long should the second call wait for the first call to finish?
If the second call has a wait-time limit and the first call still hasn't finished, what should it tell the client? Should it just not return any response, so that the client times out and retries?
For Wunderlist we use database constraints to make sure that no request id (which is a column in every one of our tables) is ever used twice. Since our database technology (Postgres) guarantees that it is impossible for two records to be inserted that violate this constraint, we only need to react properly to the potential insertion error. Basically, we outsource this detail to our datastore.
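As a rough illustration of that approach, assuming psycopg2 (2.8+ for its errors module), a payments table with a UNIQUE constraint on request_id, and made-up column names and connection details:
import psycopg2
from psycopg2 import errors

conn = psycopg2.connect("dbname=app")  # connection details assumed

def create_payment(conn, request_id, amount):
    try:
        with conn, conn.cursor() as cur:
            cur.execute(
                "INSERT INTO payments (request_id, amount) VALUES (%s, %s) RETURNING id",
                (request_id, amount))
            return cur.fetchone()[0]        # first attempt: new row created
    except errors.UniqueViolation:
        # The constraint fired: an earlier attempt with this request_id already won.
        with conn, conn.cursor() as cur:
            cur.execute("SELECT id FROM payments WHERE request_id = %s", (request_id,))
            return cur.fetchone()[0]        # duplicate: return the existing row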
I would recommend, no matter how you go about this, trying not to need coordination in your application. If you try to know whether two things are happening at once, there is a high likelihood of bugs. Instead, there might be a system you already use which can make the guarantees you need.
Now, to specifically address your three questions:
For us, since we use database constraints, the database handles making things queue up and wait. This is why I personally prefer the old SQL databases - not for the SQL or relations, but because they are really good at locking and queuing. We use SQL databases as dumb disconnected tables.
This depends a lot on your system. We try to tune all of our timeouts to around 1s in each system and subsystem. We'd rather fail fast than queue up. If you don't know the right value ahead of time, you can measure your timings and set the timeout to the 99th percentile (see the small sketch after these points).
We would return a 504 HTTP status (and an appropriate response body) to the client. The reason for having an idempotency key is so the client can retry a request, so we are never worried about timing out and letting them do just that. Again, we'd rather time out fast and fix the problems than let things queue up. If things queue up, then even after something is fixed, one has to wait a while for things to get better.
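A tiny sketch of the "measure, then set the timeout at the 99th percentile" idea mentioned above, using Python's statistics module; the sample data is obviously made up:
import statistics

# Response times (seconds) collected from real traffic; values here are made up.
samples = [0.12, 0.18, 0.25, 0.31, 0.42, 0.55, 0.61, 0.73, 0.95, 1.40]

# quantiles(n=100) returns the 1st..99th percentile cut points; take the last one.
p99 = statistics.quantiles(samples, n=100)[-1]
print("setting request timeout to", round(p99, 2), "seconds")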
It's a bit hard to tell whether the second call is from the same client with the same request token, or from a different client.
Normally, in the case of concurrent requests from different clients operating on the same resource, you would also want to implement a versioning strategy alongside a request token for idempotency.
A typical versioning strategy in a relational database might be a version column with a trigger that auto-increments the number each time a record is updated.
With this in place, all clients must specify their request token as well as the version they are updating (typically the If-Match header is used for this, with the version number as the value of the ETag).
On the server side, when it comes time to update the state of the resource, you first check that the version number in the database matches the version supplied in the ETag. If they match, you write the changes and the version increments. Assuming the second request was operating on the same version number as the first, it would then fail with a 412 (or 409, depending on how you interpret the HTTP specifications) and the client should not retry.
If you really want to stop the second request immediately while the first request is in progress, you are going down the route of pessimistic locking, which doesn't suit REST APIs that well.
In the case where you are actually talking about the client retrying with the same request token because it received a transient network error, it's almost the same case.
Both requests will be running at the same time; the second request will start because the first request has not finished yet and has not recorded the request token in the database, but whichever one ends up finishing first will succeed and record the request token.
The other request will receive a version conflict (since the first request has incremented the version), at which point it should re-check the request token table, find its own token in there, assume it was a concurrent request with the same token that finished before it did, and return 200.
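A simplified, in-memory sketch of that flow: 412 on a genuinely stale version, and 200 when the conflict turns out to be our own earlier attempt with the same request token. All names are illustrative, and a real implementation would do the check-and-update atomically in the datastore.
class Resource:
    def __init__(self):
        self.state = {}
        self.version = 1
        self.seen_tokens = set()   # request tokens that already updated this resource

def update_resource(resource, if_match_version, new_state, request_token):
    if resource.version != int(if_match_version):
        if request_token in resource.seen_tokens:
            return 200, resource.state   # our earlier attempt already succeeded
        return 412, resource.state       # genuinely stale: client must re-read, not retry
    resource.state = new_state
    resource.version += 1
    resource.seen_tokens.add(request_token)
    return 200, resource.state

res = Resource()
print(update_resource(res, "1", {"text": "first"}, "token-a"))   # (200, ...), version -> 2
print(update_resource(res, "1", {"text": "stale"}, "token-b"))   # (412, ...): lost the race
print(update_resource(res, "1", {"text": "retry"}, "token-a"))   # (200, ...): recognised retry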
It seems like a lot, but if you want to cover all the weird and wonderful failure modes you meet when dealing with REST, idempotency and concurrency, this is the way to deal with it.