Cognito authentication TooManyRequestsException error? How to fix it to test performance? - amazon-cognito

I have a serverless stack with Cognito, API Gateway, Lambda, and DynamoDB.
I want to test performance with 10,000 users accessing it at the same time, but Cognito seems to only allow about 120 authentication requests/s.
I'm using JMeter to test, and I'm having trouble logging in with a large number of users.
Please help! Thanks.

https://docs.aws.amazon.com/cognito/latest/developerguide/limits.html#limits-hard
There is a hard limit of 150 requests per second on authentication requests to the Cognito hosted UI.
The 'hard' limit means AWS will not increase this limit for you.
However, the limitation is not on Cognito, but rather the hosted UI. So if you want to support more concurrent logins you might need to host your own authentication UI.
There might be another workaround that I have not tested (I'm not sure if it will work). I think you can have a hosted UI per app client. What you could try is creating more App Clients, they can basically be identical. Then you would need to split your traffic across the UIs. Clearly this wouldn't help your operational concurrency (unless you put a load balancer at the front) but it might help you in testing.
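If you try that route, a minimal boto3 sketch for provisioning the extra app clients might look like this; the pool ID, client names, and auth flows are placeholders you would adapt to your own setup and JMeter plan:

```python
import boto3

# A minimal sketch: create several identical app clients on one user pool
# so load-test traffic can be split across them. Pool ID and names are
# placeholders; adjust the auth flows to match what your test plan uses.
cognito = boto3.client("cognito-idp", region_name="us-east-1")

USER_POOL_ID = "us-east-1_EXAMPLE"  # placeholder

client_ids = []
for i in range(5):
    resp = cognito.create_user_pool_client(
        UserPoolId=USER_POOL_ID,
        ClientName=f"load-test-client-{i}",
        GenerateSecret=False,
        ExplicitAuthFlows=["ALLOW_USER_PASSWORD_AUTH", "ALLOW_REFRESH_TOKEN_AUTH"],
    )
    client_ids.append(resp["UserPoolClient"]["ClientId"])

print(client_ids)  # feed these IDs to separate JMeter thread groups
```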

Related

Leveraging Cloudflare workers with AWS API Gateway

I know this is not a programming question, but it is a valid question for deciding on the architecture.
I am working on creating APIs that will be used by third-party developers. This means the developers need to sign up for a plan (possibly using Stripe), get an API key, and start making requests.
I am leaning more towards using https://aws.amazon.com/api-gateway/ for the benefits it provides around API management, security (via Cognito) and having a developer portal (https://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-documenting-api.html). In my observation so far, Cloudflare API gateway does not provide these benefits. Another benefit is using CDK to manage the entire stack programmatically.
When comparing serverless functions, I am more interested in leveraging Cloudflare Workers because of:
- No cold start issue.
- Better pricing.
However, I am unsure about a few things:
- If a request comes to API Gateway and authenticates perfectly, how do I securely invoke the Cloudflare Worker?
- I am sure there would be some latency added between the two systems. Are there any ways to minimize the latency?
The guidance is very much appreciated.
Thank you

Google Cloud Run - Understanding of Authenticating end users

I have a web application that has so far been running on Cloud Run without any access restriction. Now it should be available only to certain users.
I read https://cloud.google.com/run/docs/authenticating/end-users and also tried both
mentioned ways: Google-Sign-In and the "Identity Platform" tutorial.
If I understand correctly, you have to program the actual user handling yourself in both variants. For example, determining which email addresses have access to the application, etc.
I was looking for a declarative way where, ideally, I only maintain a list of permitted email addresses and the "cloud run application" is only "magically" linked to this. With the result that only these users get access to the web application. That doesn't seem possible?
Ideally, the actual application should not be changed at all, and an upstream layer would take care of the authentication and authorization, possibly in conjunction with the Identity Platform.
Best regards and any hint is welcome
Thomas
Let me add some context to help make sense of all these pieces.
A Cloud Run application is packaged by you, and you maintain the source code; if it is a website, placing a login button and handling authentication is your job to accomplish.
The Cloud Run platform, which runs all of this on its infrastructure, doesn't "look into" or handle your application code. Simply put, it doesn't know whether it is Java or Python code, and it won't handle authentication out of the box for you - but read further.
If you require a simple way to authorize requests, look into API Gateway; it can be placed "before" Cloud Run. It might not be exactly your use case, though, as it exists only for API-style services.
That upstream layer you need is the managed Identity Platform, but the CODE should be assembled by you and deployed inside your Cloud Run service. The code will be the UI-driven part; the authorization logic is handled by the Identity Platform, so it reduces the amount of development time.
Your users would sign up using a dedicated registration page, and sign in by entering their emails and passwords. Identity Platform offers a pre-built authentication UI you can use for these pages, or you can build your own. You might also want to support additional sign-in methods, such as social providers (like Facebook or Google), phone numbers, OIDC, or SAML.
Look into some of the advanced examples to get a feel for how authorization can be customized further, such as only allowing registration from a specific domain; you could reuse one of those samples to maintain the shortlist of users that you mentioned.
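As a rough illustration (not taken from the official samples), a Cloud Run service could verify the Identity Platform ID token with the Firebase Admin SDK and check the caller's email against a small allowlist; the Authorization header convention and the email list below are assumptions:

```python
import firebase_admin
from firebase_admin import auth
from flask import Flask, request, abort

# Minimal sketch: verify an Identity Platform ID token sent by the frontend
# and only allow a shortlist of email addresses. The allowlist and header
# convention are illustrative assumptions.
firebase_admin.initialize_app()
app = Flask(__name__)

ALLOWED_EMAILS = {"alice@example.com", "bob@example.com"}  # placeholder list

@app.before_request
def check_user():
    header = request.headers.get("Authorization", "")
    if not header.startswith("Bearer "):
        abort(401)
    try:
        claims = auth.verify_id_token(header.split(" ", 1)[1])
    except Exception:
        abort(401)
    if claims.get("email") not in ALLOWED_EMAILS:
        abort(403)

@app.route("/")
def index():
    return "Hello, authorized user!"
```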
In addition to #Pentium10's answer, you can also make all users authenticate to your app somewhat forcibly. (Imagine you're building an internal portal for your company, or an /admin panel for your app that only certain users/groups can access.)
This sort of use case can be achieved by placing Cloud Identity-Aware Proxy (IAP) in front of your Cloud Run service. That way, all requests go through this proxy that validates the caller. This is not like Identity Platform in the sense that visitors don't create accounts on your website (they use existing Google accounts or other IdPs like ActiveDirectory, or whatever you configure on IAP).
I have a little tutorial at https://github.com/ahmetb/cloud-run-iap-terraform-demo/ since IAP+Cloud Run integration is still not GA and therefore not fully documented.
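Independently of that tutorial, a service behind IAP can also double-check the signed header that IAP attaches to each request; a rough sketch with the google-auth library, where the audience value is a placeholder you would look up for your own backend:

```python
from google.auth.transport import requests as google_requests
from google.oauth2 import id_token
from flask import Flask, request, abort

app = Flask(__name__)

# Placeholder: the expected audience depends on your deployment (for example
# "/projects/PROJECT_NUMBER/global/backendServices/SERVICE_ID"); look up the
# exact value for your own backend in the IAP settings.
EXPECTED_AUDIENCE = "/projects/123456789/apps/my-project"

@app.before_request
def verify_iap_jwt():
    # IAP puts a signed JWT in this header for every proxied request.
    iap_jwt = request.headers.get("x-goog-iap-jwt-assertion")
    if not iap_jwt:
        abort(401)
    try:
        id_token.verify_token(
            iap_jwt,
            google_requests.Request(),
            audience=EXPECTED_AUDIENCE,
            certs_url="https://www.gstatic.com/iap/verify/public_key",
        )
    except Exception:
        abort(401)
```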

How can handle the local authentication when a SSO website is down?

We are developing an SSO application using IdentityServer4 as the authentication (not authorization) infrastructure for other (client) websites in our company. One of our main concerns is the failure of the SSO website. In this situation, what should we take into account to minimize issues for the clients?
For example, we want to create a local login page in each application and have each application authenticate users with an OTP mechanism. Is this enough, or are there better solutions?
For security reasons, you should not try to add a local login; it will just make things more complicated and probably less secure.
Because your tokens have a certain lifetime (like 1 hour by default), if your SSO goes down for a short while your clients can continue to operate (unless you query your SSO all the time).
If you want to make it more reliable, then you need to start looking at load balancers and having multiple instances of IdentityServer running. That can work if you take care to have the same keys on all the instances.
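To illustrate why a short outage is survivable, here is a rough sketch, assuming RS256-signed JWT access tokens and a locally cached copy of the token-signing public key, of an API validating tokens without calling the SSO server on every request; the key path, audience, and issuer are placeholders:

```python
import jwt  # PyJWT

# Sketch: validate IdentityServer-issued JWT access tokens locally against a
# cached copy of the token-signing public key, so the API does not need to
# call the SSO server on every request and keeps working through a short
# outage. The key path, audience, and issuer are placeholders.
with open("idsrv_signing_key.pem", "rb") as f:
    PUBLIC_KEY = f.read()

def validate(access_token: str) -> dict:
    # Raises an exception if the signature, expiry, audience, or issuer is invalid.
    return jwt.decode(
        access_token,
        PUBLIC_KEY,
        algorithms=["RS256"],
        audience="api1",                    # placeholder API resource name
        issuer="https://sso.example.com",   # placeholder issuer
    )
```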

Microservices - how to solve security and user authentication?

There is a lot of discussion about microservice architecture. What I am missing - or maybe have not yet understood - is how to solve the issue of security and user authentication.
For example: I develop a microservice which provides a Rest Service interface to a workflow engine. The engine is based on JEE and runs on application servers like GlassFish or Wildfly.
One of the core concepts of the workflow engine is that each call is user-centric. This means that, depending on the role and access level of the current user, the workflow engine produces individual results (e.g. a user-centric task list, or processing an open task that depends on the user's role in the process).
In my eyes, such a service is therefore not accessible from everywhere. For example, if someone plans to implement a modern Ajax-based JavaScript application which should use the workflow microservice, there are two problems:
1) to avoid cross-origin problems with JavaScript/Ajax, the JavaScript web application needs to be deployed under the same domain that the microservice runs on
2) if the microservice forces user authentication (which is the case in my scenario), the application needs to provide a transparent authentication mechanism.
The situation becomes more complex if the client needs to access more than one user-centric microservice that forces user authentication.
I always end up with an architecture where all services and the client application run on the same application server under the same domain.
How can these problems be solved? What is the best practice for such an architecture?
Short answer: check OAuth, and manage caches of credentials in each microservice that needs to access other microservices. By "manage" I mean, be careful with security. In particular, mind who can access those credentials and let the network topology be your friend. Create a DMZ layer and other internal layers reflecting the dependency graph of your microservices.
Long answer: keep reading. Your question is a good one because there is no simple silver bullet for what you need, even though your problem is quite a recurrent one.
As with everything related to microservices that I have seen so far, nothing here is really new. Whenever you need a distributed system doing things on behalf of a certain user, you need distributed credentials to enable such a solution. This has been true since mainframe times. There is no way around that.
Automated SSH is, in a sense, such a thing. It may sound like a glorified way to describe something simple, but in the end it enables processes on one machine to use services on another machine.
In the Grid computing world, the Globus Toolkit, for instance, bases its distributed security on the following:
X.509 certificates;
MyProxy - manages a repository of credentials and helps you define a chain of certificate authorities up to finding the root one, which should be trusted by default;
An extension of OpenSSH, which is the de facto standard SSH implementation for Linux distributions.
OAuth is perhaps what you need. It is a way to provide authorization with extra restrictions. For instance, imagine that a certain user has read and write permission on a certain service. When you issue an OAuth authorization, you do not necessarily give full user powers to the third party. You may only give read access.
CORS, mentioned in another answer, is useful when the end client (typically a web browser) needs single-sign-on across web sites. But it seems that your problem is closer to a cluster in which you have many microservices that are managed by you. Nevertheless, you can take advantage of solutions developed by the Grid field to ensure security in a cluster distributed across sites (for high availability reasons, for instance).
Complete security is something unattainable. So all of this is of no use if credentials are valid forever or if you do not take enough care to keep them secret from whoever receives them. For that purpose, I would recommend partitioning your network into layers, each layer with a different degree of secrecy and exposure to the outside world.
If you do not want the burden of the infrastructure required for OAuth, you can either use basic HTTP authentication or create your own tokens.
When using basic HTTP authentication, the client needs to send credentials on each request, therefore eliminating the need to keep session state on the server side for the purpose of authorization.
If you want to create your own mechanism, then change your login requests such that a token is returned as the response to a successful login. Subsequent requests having the same token will act as the basic HTTP authentication with the advantage that this takes place at the application level (in contrast with the framework or app server level in basic HTTP authentication).
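A very rough sketch of such a home-grown token scheme, with in-memory storage and placeholder endpoints purely for illustration (a real service would need token expiry, revocation, and shared storage across instances):

```python
import secrets
from flask import Flask, request, jsonify, abort

app = Flask(__name__)

# Illustrative only: an in-memory token store and credential store. Real
# services need expiry, revocation, hashing, and shared storage.
TOKENS = {}                   # token -> username
USERS = {"alice": "s3cret"}   # placeholder credential store

@app.route("/login", methods=["POST"])
def login():
    data = request.get_json(force=True)
    if USERS.get(data.get("username")) != data.get("password"):
        abort(401)
    token = secrets.token_urlsafe(32)
    TOKENS[token] = data["username"]
    return jsonify({"token": token})

@app.route("/tasks")
def tasks():
    # Subsequent requests carry the token instead of the credentials.
    token = request.headers.get("Authorization", "").removeprefix("Bearer ")
    user = TOKENS.get(token)
    if user is None:
        abort(401)
    return jsonify({"user": user, "tasks": []})  # user-centric result placeholder
```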
Your question is about two independent issues.
Making your service accessible from another origin is easily solved by implementing CORS. For non-browser clients, cross-origin is not an issue at all.
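For the browser case, enabling CORS mostly amounts to returning the appropriate response headers; a minimal sketch in which the allowed origin is a placeholder:

```python
from flask import Flask

app = Flask(__name__)

ALLOWED_ORIGIN = "https://app.example.com"  # placeholder front-end origin

@app.after_request
def add_cors_headers(response):
    # Let the JavaScript client served from another domain call this microservice.
    response.headers["Access-Control-Allow-Origin"] = ALLOWED_ORIGIN
    response.headers["Access-Control-Allow-Headers"] = "Authorization, Content-Type"
    response.headers["Access-Control-Allow-Methods"] = "GET, POST, OPTIONS"
    return response
```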
The second problem about service authentication is typically solved using token based authentication.
Any caller of one of your microservices would get an access token from the authorization server or STS for that specific service.
Your client authenticates with the authorization server or STS either through an established session (cookies) or by sending a valid token along with the request.

How to avoid Selenium integration tests being considered suspicious logins

We are doing tests of our Google OAuth integration using Selenium on multiple different platforms/browsers, using a couple of different cloud providers. For all tests, we use the same Google account, which was created for this purpose. We continuously run into the issue that these logins (from the different providers and their devices around the world) are considered suspicious logins and therefore rejected [1]. How is it possible to avoid these integration tests being considered suspicious logins? Is it possible to disable this detection? Does Google have test users that one can use? Do we need to proxy the requests so they always come from the same address? Please be aware that, since we are using Selenium as a service, we have no control over the clients and their location.
Any ideas?
[1] https://security.google.com/settings/u/2/security/activity
One thing I can recommend is to use Google's APIs instead of going through their web app. They most likely have added security around their web apps.
For example, if you are trying to retrieve mail, then check this out.
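As an illustration of that API route, here is a sketch that reads the test account's mail through the Gmail API with previously stored credentials; the token file path is a placeholder, and the one-time OAuth consent that produces it would be done outside the Selenium run:

```python
from google.oauth2.credentials import Credentials
from googleapiclient.discovery import build

# Sketch: call the Gmail API directly with stored credentials for the shared
# test account instead of driving the Google login page through Selenium.
# The token file is produced once by a normal OAuth consent flow (placeholder path).
SCOPES = ["https://www.googleapis.com/auth/gmail.readonly"]
creds = Credentials.from_authorized_user_file("test_account_token.json", SCOPES)

service = build("gmail", "v1", credentials=creds)
messages = service.users().messages().list(userId="me", maxResults=5).execute()
print(messages.get("messages", []))
```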