We are running a web application (shiny-server, where the coding is done in R) and want to add an authentication layer to it.
Rather than building something to do this in R, I thought of using Meteor to create auth tokens and all that.
This is the way I was thinking of doing it:
A user logs in with meteor and meteor creates a database entry that looks something like this:
{ "createdAt" : 1372521823708,
"_id" : "HSdbPBuYy5wW6FBPL",
"services" : { "password" : { "srp" : { "identity" : "vKpxEzXboBaQsWYyJ",
"salt" : "KRt5HrziG6RDnWN8o",
"verifier" : "8d4b6a5edd21ce710bd08c6affb6fec29a664fbf1f42823d5cb8cbd272cb9b2b3d5faa681948bc955353890f645b940ecdcc9376e88bc3dae77042d14901b5d22abd00d37a2022c32d925bbf839f65e4eb3a006354b918d5c8eadd2216cc2dbe0ce12e0ad90a383636a1327a91db72cf96cd4e672f68544eaea9591f6ed102e1" } },
"resume" : { "loginTokens" : [
{ "token" : "t9Dxkp4ANsYKuAQav",
"when" : 1372521823708 } ] } },
"emails" : [
{ "address" : "example#example.com",
"verified" : false } ] }
The user is redirected to the "old application". Here we check local storage (it should be the same local storage as Meteor's if we use the same outward-facing host and port, correct?)
and find this information:
Meteor.loginToken: t9Dxkp4ANsYKuAQav
Meteor.userId: HSdbPBuYy5wW6FBPL
The local storage data is read by "the other application", which does a simple database query against the Meteor DB to verify that the local storage information matches what is in the database (perhaps also checking some kind of expiration date). If it matches, the application renders; otherwise it doesn't.
Is this a decently safe way to do it? Will it work to share local storage between the applications?
Of course you'll have to make sure that your WebSockets are running over TLS. localStorage follows a simple same-origin policy, so as long as both applications are served from the same outward-facing scheme, host, and port, they will share it. And localStorage is about as secure as a cookie, so that's OK.
TL;DR:
Yes and yes.
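As a rough sketch, the verification step in "the other application" could look like this in Node (the collection and field names follow the document shown above; the connection URL and function are assumptions, and note that newer Meteor releases store a SHA-256 hashedToken instead of the plain token, in which case you would hash the client-side token before comparing):
// Hedged sketch: check a localStorage token/userId pair against Meteor's users collection.
const { MongoClient } = require('mongodb')

async function isValidSession(userId, loginToken) {
  // Connection URL is an assumption; point it at the Meteor MongoDB.
  const client = await MongoClient.connect('mongodb://localhost:3001/meteor')
  try {
    const user = await client.db().collection('users').findOne({
      _id: userId,
      'services.resume.loginTokens.token': loginToken,
    })
    // Optionally also compare the matching token's "when" timestamp to an expiry window.
    return user !== null
  } finally {
    await client.close()
  }
}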
Related
I'm working on a React app and testing some CRUD functionality by mocking the backend, creating some data through GraphiQL, and running the app (amplify mock, then yarn start).
I want to be able to create mock data tied to my user as the owner because most types in the schema are set up with owner authorization:
type XYZ @auth(rules: [{ allow: owner, operations: [update, delete, create] }]) {
  id: ID!
  # ...more fields...etc
}
Right now, I:
1. Run amplify mock
2. Go to the GraphiQL local endpoint (192.etc....)
3. Run some createXYZ mutations to create data
4. Run my app with yarn start
5. Log in with testUser & password
6. Test the deleteXYZ button, which should remove a particular XYZ from the mocked data; this is the part that doesn't work
I suspect what's happening is that I didn't run the createXYZ mutation as testUser, just as a generic GraphiQL user, so the owner property isn't tied to "myUserId". Is that the problem here?
How would I specify owner on my create mutations in GraphiQL?
This is the error I'm getting, pretty sure it means the XYZ object's owner is different than my testUser submitting the deleteXYZ request:
Error while executing Local DynamoDB
{
"version": "2018-05-29",
"operation": "DeleteItem",
"key": {
"id": {
"S": "18b152a6-c98d-4336-be74-1e122191"
}
},
"condition": {
"expression": "( #owner0 = :identity0) AND attribute_exists(#id)",
"expressionNames": {
"#owner0": "owner",
"#id": "id"
},
"expressionValues": {
":identity0": {
"S": "fd2a7758-f7ba-4d57-bdb0-e5346492"
}
}
}
}
Could it be that I have to add the owner ID in Amplify's GraphiQL Auth options popup?
I just ran into this issue. I was able to work around it by putting my Cognito User Sub in the username field.
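As a hedged sketch (assuming your XYZ type has no required fields beyond the auto-generated id), after setting the username field in GraphiQL's auth popup to your Cognito user sub, you can confirm the mutation records you as the owner:
mutation CreateXYZAsTestUser {
  # Run this while the auth popup's "username" is set to your Cognito user sub
  createXYZ(input: {}) {
    id
    owner # should come back as the sub you entered
  }
}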
I'd like to use Terraform to create an AWS Cognito User Pool with one test user. Creating the user pool is quite straightforward:
resource "aws_cognito_user_pool" "users" {
name = "${var.cognito_user_pool_name}"
admin_create_user_config {
allow_admin_create_user_only = true
unused_account_validity_days = 7
}
}
However, I cannot find a resource that creates an AWS Cognito user. It is doable with the AWS CLI:
aws cognito-idp admin-create-user --user-pool-id <value> --username <value>
Any idea on how to do it with Terraform?
In order to automate things, this can be done in Terraform using a null_resource and a local-exec provisioner to execute your AWS CLI command,
e.g.
resource "aws_cognito_user_pool" "pool" {
name = "mypool"
}
resource "null_resource" "cognito_user" {
triggers = {
user_pool_id = aws_cognito_user_pool.pool.id
}
provisioner "local-exec" {
command = "aws cognito-idp admin-create-user --user-pool-id ${aws_cognito_user_pool.pool.id} --username myuser"
}
}
This isn't currently possible directly in Terraform as there isn't a resource that creates users in a user pool.
There is an open issue requesting the feature but no work has yet started on it.
Since it is not possible to do that directly through Terraform, in contrast to matusko's solution I would recommend using a CloudFormation template.
In my opinion it is more elegant because:
it does not require additional applications installed locally
it can be managed by Terraform, since the CloudFormation stack is destroyed when Terraform destroys it
A simple solution with a template could look like the one below. Keep in mind that I skipped files and resources that are not directly related, such as the provider. The example also covers joining users to groups.
variables.tf
variable "COGITO_USERS_MAIL" {
type = string
description = "On this mail passwords for example users will be sent. It is only method I know for receiving password after automatic user creation."
}
cf_template.json
{
"Resources" : {
"userFoo": {
"Type" : "AWS::Cognito::UserPoolUser",
"Properties" : {
"UserAttributes" : [
{ "Name": "email", "Value": "${users_mail}"}
],
"Username" : "foo",
"UserPoolId" : "${user_pool_id}"
}
},
"groupFooAdmin": {
"Type" : "AWS::Cognito::UserPoolUserToGroupAttachment",
"Properties" : {
"GroupName" : "${user_pool_group_admin}",
"Username" : "foo",
"UserPoolId" : "${user_pool_id}"
},
"DependsOn" : "userFoo"
}
}
}
cognito.tf
resource "aws_cognito_user_pool" "user_pool" {
name = "cogito-user-pool-name"
}
resource "aws_cognito_user_pool_domain" "user_pool_domain" {
domain = "somedomain"
user_pool_id = aws_cognito_user_pool.user_pool.id
}
resource "aws_cognito_user_group" "admin" {
name = "admin"
user_pool_id = aws_cognito_user_pool.user_pool.id
}
user_init.tf
data "template_file" "application_bootstrap" {
template = file("${path.module}/cf_template.json")
vars = {
user_pool_id = aws_cognito_user_pool.user_pool.id
users_mail = var.COGNITO_USERS_MAIL
user_pool_group_admin = aws_cognito_user_group.admin.name
}
}
resource "aws_cloudformation_stack" "test_users" {
name = "${var.TAG_PROJECT}-test-users"
template_body = data.template_file.application_bootstrap.rendered
}
Sources
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-cognito-userpooluser.html
https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/cloudformation_stack
Example
Simple project based on:
Terraform,
Cognito,
Elastic Load Balancer,
Auto Scaling Group,
Spring Boot application
PostgreSQL DB.
Security checks are made on the ELB and in Spring Boot.
This means that the ELB cannot pass unauthorized users to the application, and the application can do further security checks based on PostgreSQL roles, which are mapped to Cognito roles.
Terraform Project and simple application:
https://github.com/test-aws-cognito
Docker image made out of application code:
https://hub.docker.com/r/testawscognito/simple-web-app
More information on how to run it is in the Terraform git repository's README.md.
It should be noted that the aws_cognito_user resource is now supported in the AWS Terraform provider (version 4.3.0 at the time of writing), as documented here: https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/cognito_user
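For example, a minimal sketch that adds a user to the pool from the question (the username and email values are placeholders):
resource "aws_cognito_user" "test_user" {
  user_pool_id = aws_cognito_user_pool.users.id
  username     = "myuser"

  attributes = {
    email          = "myuser@example.com"
    email_verified = true
  }
}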
I have an Express app which supports Google authentication and authorization via passport. I have begun integrating it with Google Assistant and things were going quite well but I am having trouble with the account linking as described at https://developers.google.com/actions/identity/google-sign-in#start_the_authentication_flow
Using the method in the docs at https://codelabs.developers.google.com/codelabs/actions-2/#4 I was able to get user details, but when I try to modify it to support
app.intent('Start Signin', conv => {
conv.ask(new SignIn('To get your account details'))
})
and
app.intent('Get Signin', (conv, params, signin) => { ... })
Dialogflow always falls back to my default fallback intent, and I get this error in the Express console:
Error: Dialogflow IntentHandler not found for intent: Default Fallback Intent
My Dialogflow intent is set to use the webhook, and other intents work fine (until I add these sign-in intents!).
Reading the thread Dialogflow IntentHandler not found for intent: myIntent (Dialogflow V2), it was suggested that the intent name rather than the action name is used, so I checked my Actions on Google simulator; the request contains:
"inputs": [
{
"intent": "actions.intent.SIGN_IN",
"rawInputs": [
{
"inputType": "KEYBOARD"
}
],
"arguments": [
{
"name": "SIGN_IN",
"extension": {
"#type": "type.googleapis.com/google.actions.v2.SignInValue",
"status": "OK"
}
}
]
}
],
so I tried updating my Dialogflow intent name to actions.intent.SIGN_IN and modifying the intent name in my Express app accordingly but it doesn't make any difference.
The simulator response includes:
"responseMetadata": {
"status": {
"code": 14,
"message": "Webhook error (206)"
},
but I'm not sure if that is just because for some reason the intent names are not matching up. Any help much appreciated!
As you speculate in the comments, the issue is that your "Get Signin" Intent isn't registered to get the event indicating that the user has signed in (or failed to). Since there is no such Intent setup, it ends up calling the Fallback Intent, which apparently doesn't have an Intent Handler registered in your webhook.
To make your "Get Signin" Intent get the sign-in event, set the "Event" field to actions_intent_SIGN_IN. (Note the similarity to the Intent name you saw in the simulator, but using underscores instead of dots.)
As an aside, the simulator was showing you what the communication between the Assistant and Dialogflow looks like, so it can be somewhat confusing to understand what Dialogflow is doing with it. It didn't have anything to do with the name of your Intent or anything else.
Finally, it often isn't necessary to do this check. You will know if the user is signed in because either the auth token has been set or the id token has been set (depending on your method of Account Linking).
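Putting that together, a minimal sketch of the webhook handler (intent names from the question; the replies are placeholders), assuming the "Get Signin" Dialogflow intent has actions_intent_SIGN_IN in its Events field:
// Hedged sketch: handle the sign-in result event.
app.intent('Get Signin', (conv, params, signin) => {
  if (signin.status === 'OK') {
    // Account Linking succeeded; tokens are now available on the request.
    conv.ask('You are signed in. What would you like to do next?')
  } else {
    conv.ask("You didn't sign in, but you can still use the basic features.")
  }
})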
https://firebase.google.com/docs/reference/security/database/#authtoken
{
  "rules": {
    "c": {
      ".write": "newData.child('email').val() === auth.token.email"
    }
  }
}
It always shows "Simulated write denied".
How do I solve this problem? Is there a mistake in my Firebase rule?
It looks like you're not providing an email address in the authentication data.
When you select a provider, the simulator shows the exact auth.token payload that it will use; it takes the literal JSON shown there and uses it as auth.token. For the Google provider, my auth token payload looks like this:
{
"provider": "google",
"uid": "27e08474-4e33-460d-ba92-ba437c6aa962"
}
Since there is no email provided, your rules (correctly) fail.
For testing this scenario, you'll want to switch to a custom provider, so that you can specify your own auth token with an email property.
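A custom payload along these lines should then satisfy the rule (a hedged sketch: the uid is the example one from above, and the email value is a placeholder for the address your write supplies):
{
  "provider": "custom",
  "uid": "27e08474-4e33-460d-ba92-ba437c6aa962",
  "email": "user@example.com"
}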
I am using the BigQuery sample code to work with BigQuery, and I am getting the following error while reading the dataset list using the BigQuery API.
The code is:
Bigquery bigquery = Bigquery.builder(httpTransport, jsonFactory)
.setHttpRequestInitializer(requestInitializer)
.setJsonHttpRequestInitializer(new JsonHttpRequestInitializer() {
public void initialize(JsonHttpRequest request) {
BigqueryRequest bigqueryRequest = (BigqueryRequest) request;
bigqueryRequest.setPrettyPrint(true);
}
}).build();
Datasets.List datasetRequest = bigquery.datasets().list(PROJECT_ID);
DatasetList datasetList = datasetRequest.execute();
if (datasetList.getDatasets() != null) {
java.util.List datasets = datasetList.getDatasets();
for (Object dataset : datasets) {
System.out.format("%s\n", ((com.google.api.services.bigquery.model.DatasetList.Datasets)dataset).getDatasetReference().getDatasetId());
}
}
The exception is
Exception in thread "main" com.google.api.client.googleapis.json.GoogleJsonResponseException: 401 Unauthorized
{
"code" : 401,
"errors" : [ {
"domain" : "global",
"location" : "Authorization",
"locationType" : "header",
"message" : "User is not a trusted tester",
"reason" : "authError"
} ],
"message" : "User is not a trusted tester"
}
at com.google.api.client.googleapis.json.GoogleJsonResponseException.from(GoogleJsonResponseException.java:159)
at com.google.api.client.googleapis.json.GoogleJsonResponseException.execute(GoogleJsonResponseException.java:187)
at com.google.api.client.googleapis.services.GoogleClient.executeUnparsed(GoogleClient.java:115)
at com.google.api.client.http.json.JsonHttpRequest.executeUnparsed(JsonHttpRequest.java:112)
at com.google.api.services.bigquery.Bigquery$Datasets$List.execute(Bigquery.java:964)
at ShortSample.main(ShortSample.java:74)
I don't see this as an authentication issue, as I could use the same code to connect to a Google Plus account via the Google Plus API. I also observed that the API examples are stale.
Any suggestions on how to fix it?
I suspect you're using an older version of the BigQuery Java client library that is based on a prerelease version of the API (v2beta1). If that's the case, try upgrading to the latest version of the client library here:
http://mavenrepo.google-api-java-client.googlecode.com/hg/com/google/apis/google-api-services-bigquery/v2-rev5-1.5.0-beta/
We'll make sure the links on the API client library page are updated, too!
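If you use Maven, that repository path corresponds to these coordinates (read off the URL above; double-check the version against the repository):
<dependency>
  <groupId>com.google.apis</groupId>
  <artifactId>google-api-services-bigquery</artifactId>
  <version>v2-rev5-1.5.0-beta</version>
</dependency>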
I have encountered a similar problem, and jcondit's solution (updating the jars) works well for me. For the authentication code, you may also take a look here.