Amplify/Cognito: no analytics data showing in Pinpoint

No data from my user events is showing up in Pinpoint.
I have a frontend React Native app which uses the Amplify Auth library, configured as:
Amplify.configure({
  Analytics: {
    AWSPinpoint: {
      region: ENV.REGION,
      appId: ENV.PINPOINT_APP_ID,
    },
  },
  Auth: {
    region: ENV.REGION,
    userPoolId: ENV.USER_POOL_ID,
    userPoolWebClientId: ENV.USER_POOL_CLIENT_ID,
    authenticationFlowType: ENV.AUTHENTICATION_FLOW_TYPE,
    oauth: {
      domain: ENV.OAUTH_DOMAIN,
      scope: ["email", "openid", "profile"],
      redirectSignIn: appConfig.scheme,
      redirectSignOut: appConfig.scheme,
      responseType: "code",
      urlOpener,
    },
    federationTarget: "COGNITO_USER_POOLS",
  },
  ..
In the backend I connected Cognito with Pinpoint and use an IAM role with the following policies:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "cognito-idp:Describe*"
      ],
      "Resource": "*"
    }
  ]
}
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "mobiletargeting:UpdateEndpoint",
        "mobiletargeting:PutEvents"
      ],
      "Resource": [
        "arn:aws:mobiletargeting:eu-west-1:73463623453:apps/my-pinpoint-project-id/*"
      ]
    }
  ]
}
When I use the app to log in, no data appears in Pinpoint.
However, when I do the same using the CLI, the data does show up in Pinpoint:
AWS_DEFAULT_PROFILE=novasport-dev aws cognito-idp initiate-auth --auth-flow USER_PASSWORD_AUTH --auth-parameters USERNAME=05ac342c-2134-48f9-b124-b1favc5d0bb1,PASSWORD=myPwd --client-id myWebClienId --analytics-metadata AnalyticsEndpointId=my-pinpoint-project-id
It seems the Amplify Auth library in my frontend app is not able to send the data to Pinpoint. When I inspect the network traffic, I also don't see any request being executed that carries the analytics data.
How can I get the analytics data from my frontend app to Pinpoint? Am I missing some configuration?
EDIT
We are using the modular imports of Amplify, as such:
import Amplify from "@aws-amplify/core";
package.json:
"@aws-amplify/api": "4.0.3",
"@aws-amplify/api-graphql": "2.0.3",
"@aws-amplify/auth": "4.0.3",
"@aws-amplify/core": "4.1.1",

Go and check Cognito Identity Pool -> select the identity pool which includes your app name -> Edit Identity Pool -> Authentication Providers -> Cognito -> and check these values: User Pool ID, App Client ID, and Authenticated role selection.
In my case, setting 'Authenticated role selection' to 'Use default role' solved the issue.
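Related to this: Analytics calls to Pinpoint are signed with AWS credentials that Amplify obtains from a Cognito Identity Pool, so the Auth config typically needs an identityPoolId as well. A minimal sketch, assuming a hypothetical ENV.IDENTITY_POOL_ID holding the identity pool ID:
// Sketch: Amplify Analytics needs identity-pool credentials to call Pinpoint.
// ENV.IDENTITY_POOL_ID is an assumed placeholder, not from the original post.
Amplify.configure({
  Auth: {
    region: ENV.REGION,
    identityPoolId: ENV.IDENTITY_POOL_ID, // e.g. "eu-west-1:xxxxxxxx-..."
    userPoolId: ENV.USER_POOL_ID,
    userPoolWebClientId: ENV.USER_POOL_CLIENT_ID,
  },
  Analytics: {
    AWSPinpoint: {
      region: ENV.REGION,
      appId: ENV.PINPOINT_APP_ID,
    },
  },
});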

Related

ECS task accessing S3 bucket website with Block Public Access enabled: "Access Denied"

I have an ECS task configured to run an nginx container that I want to use as a reverse proxy to an S3 bucket website.
For security purposes, Block Public Access is turned on for the bucket, so I am looking for a way to give read access only to the ECS task.
In other words, I want the nginx reverse-proxy task to have s3:GetObject access to my website bucket, restricted to the ECS task IAM role as Principal, since the bucket cannot be public.
IAM role:
arn:aws:iam:::role/ was configured with an attached policy that allows all S3 actions on the bucket and its objects:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "",
      "Effect": "Allow",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::<BUCKET>",
        "arn:aws:s3:::<BUCKET>/*"
      ]
    }
  ]
}
In Trusted Entities, I granted the ECS tasks service permission to assume the role:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "",
      "Effect": "Allow",
      "Principal": {
        "Service": "ecs-tasks.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
The issue is that the target group health check always returns Access Denied for the bucket and its objects:
[08/Jun/2020:20:33:19 +0000] "GET / HTTP/1.1" 403 303 "-" "ELB-HealthChecker/2.0"
I also tried granting access by adding the bucket policy below, but I believe it is not needed, as the IAM role already has access to the bucket…
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "allowNginxProxy",
      "Effect": "Allow",
      "Principal": {
        "Service": "ecs-tasks.amazonaws.com"
      },
      "Action": "*",
      "Resource": [
        "arn:aws:s3:::<BUCKET>/*",
        "arn:aws:s3:::<BUCKET>"
      ]
    }
  ]
}
I have also tried using "AWS": "arn:aws:iam::<ACCOUNT_NUMBER>:role/<ECS_TASK_ROLE>" as the Principal.
Any suggestions?
Another possibility here:
Check whether your S3 objects are encrypted. If they are, your ECS task role also needs permission to decrypt them; otherwise you will get a permission denied exception. One example can be found here.
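For illustration, if the objects are encrypted with SSE-KMS, the task role would need a statement along these lines. This is a sketch, not from the original thread; the key ARN placeholders must be replaced with your own:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "kms:Decrypt",
      "Resource": "arn:aws:kms:<REGION>:<ACCOUNT_ID>:key/<KEY_ID>"
    }
  ]
}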

How does one block a domain on API Gateway

I was just wondering if there is any way to toggle/turn off/disable/block an API Gateway request from a particular domain. I need to simulate the service being down to check that the error messaging works.
In Chrome I can block the request in the network console; however, I cannot do this in IE. Is there a way to turn the API off temporarily?
Or can I block it in IE?
You can get this working with IP addresses rather than the domain.
API Gateway allows you to attach a "Resource Policy" to your API.
Example below.
The following example resource policy is a "blacklist" policy that denies (blocks) incoming traffic to an API from two specified source IP ranges.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": "*",
      "Action": "execute-api:Invoke",
      "Resource": ["arn:aws:execute-api:region:account-id:api-id/*"]
    },
    {
      "Effect": "Deny",
      "Principal": "*",
      "Action": "execute-api:Invoke",
      "Resource": ["arn:aws:execute-api:region:account-id:api-id/*"],
      "Condition": {
        "IpAddress": {
          "aws:SourceIp": [
            "10.24.34.0/23",
            "10.24.34.0/24"
          ]
        }
      }
    }
  ]
}
More Info
API Gateway Resource Policy
API Gateway Resource Policy Examples
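If you prefer the CLI to the console, the resource policy can be attached with update-rest-api; a sketch, assuming a placeholder API ID of abc123 and a prod stage:
# Sketch: attach a deny-all resource policy from the CLI. "abc123", region and
# account-id are placeholders; the policy is passed as a JSON-escaped string.
aws apigateway update-rest-api \
  --rest-api-id abc123 \
  --patch-operations '[{"op":"replace","path":"/policy","value":"{\"Version\":\"2012-10-17\",\"Statement\":[{\"Effect\":\"Deny\",\"Principal\":\"*\",\"Action\":\"execute-api:Invoke\",\"Resource\":\"arn:aws:execute-api:region:account-id:abc123/*\"}]}"}]'

# Resource policy changes only take effect after the API is redeployed.
aws apigateway create-deployment --rest-api-id abc123 --stage-name prod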

AWS S3 400 Bad Request

I'm attempting to narrow down the following 400 Bad Request error:
com.amazonaws.services.s3.model.AmazonS3Exception: Bad Request (Service: Amazon S3; Status Code: 400; Error Code: 400 Bad Request; Request ID: 7FBD3901B77A07C0), S3 Extended Request ID: +PrYXDrq9qJwhwHh+DmPusGekwWf+jmU2jepUkQX3zGa7uTT3GA1GlmHLkJjjjO67UQTndQA9PE=
    at com.amazonaws.http.AmazonHttpClient.handleErrorResponse(AmazonHttpClient.java:1343)
    at com.amazonaws.http.AmazonHttpClient.executeOneRequest(AmazonHttpClient.java:961)
    at com.amazonaws.http.AmazonHttpClient.executeHelper(AmazonHttpClient.java:738)
    at com.amazonaws.http.AmazonHttpClient.doExecute(AmazonHttpClient.java:489)
    at com.amazonaws.http.AmazonHttpClient.executeWithTimer(AmazonHttpClient.java:448)
    at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:397)
    at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:378)
    at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4039)
    at com.amazonaws.services.s3.AmazonS3Client.getObjectMetadata(AmazonS3Client.java:1177)
    at com.amazonaws.services.s3.AmazonS3Client.getObjectMetadata(AmazonS3Client.java:1152)
    at com.amazonaws.services.s3.AmazonS3Client.doesObjectExist(AmazonS3Client.java:1212)
    at com.abcnews.apwebfeed.articleresolver.APWebFeedArticleResolverImpl.makeS3Crops(APWebFeedArticleResolverImpl.java:904)
    at com.abcnews.apwebfeed.articleresolver.APWebFeedArticleResolverImpl.resolve(APWebFeedArticleResolverImpl.java:542)
    at sun.reflect.GeneratedMethodAccessor62.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.codehaus.xfire.service.invoker.AbstractInvoker.invoke(AbstractInvoker.java:54)
    at org.codehaus.xfire.service.binding.ServiceInvocationHandler.sendMessage(ServiceInvocationHandler.java:322)
    at org.codehaus.xfire.service.binding.ServiceInvocationHandler$1.run(ServiceInvocationHandler.java:86)
    at java.lang.Thread.run(Thread.java:662)
I'm testing something as simple as this:
boolean exists = s3client.doesObjectExist("aws-wire-qa", "wfiles/in/wire.json");
I manually added the wfiles/in/wire.json file. I get back true when I run this line in a local app, but inside a separate remote service it throws the error above. I use the same credentials inside the service as in my local app. I also enabled static website hosting on the bucket, but that made no difference.
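One way to surface the underlying cause of such a 400 (a sketch, not part of the original post) is to reproduce the HEAD request that doesObjectExist makes using the AWS CLI in debug mode, which prints the endpoint, signature version, and raw error response:
# Sketch: reproduce the HEAD request from the failing environment.
aws s3api head-object \
  --bucket aws-wire-qa \
  --key wfiles/in/wire.json \
  --debug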
My permissions are set as:
Grantee: Any Authenticated AWS User, with List, Upload/Delete, View Permissions, and Edit Permissions all granted.
So I thought the error could be related to not having a policy on the bucket, and I created a policy on the bucket for GET/PUT/DELETE objects, but I'm still getting the same error. My policy looks like this:
{
  "Version": "2012-10-17",
  "Id": "Policy1481303257155",
  "Statement": [
    {
      "Sid": "Stmt1481303250933",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::755710071517:user/law"
      },
      "Action": [
        "s3:DeleteObject",
        "s3:GetObject",
        "s3:PutObject"
      ],
      "Resource": "arn:aws:s3:::aws-wire-qa/*"
    }
  ]
}
I was told it can't be a firewall or proxy issue. What else could I try? The error is very non-specific, and so far I have done only local development, so I have no idea what else might not be set up here. I would much appreciate some help.
curl -XPUT 'http://localhost:9200/_snapshot/repo_s3' -d '{
  "type": "s3",
  "settings": {
    "bucket": "my-bucket",
    "base_path": "/folder/in/bucket",
    "region": "eu-central"
  }
}'
In my case it was a region issue!
I had to remove the region from elasticsearch.yml and set it in the command above. If I don't remove the region from the yml file, Elasticsearch won't start (with the latest repository-s3 plugin):
Name: repository-s3
Description: The S3 repository plugin adds S3 repositories
Version: 5.2.2
Classname: org.elasticsearch.plugin.repository.s3.S3RepositoryPlugin
I have been getting this error for days, and in every case it was because my temporary access token had expired (or because I'd inadvertently built an instance of hdfs-site.xml containing an old token into a JAR). It had nothing to do with regions.
Using Fiddler, I saw that my URL was wrong.
I didn't need the ServiceURL property and config class; instead, I used this constructor for the client, with the region as the third parameter:
AmazonS3Client s3Client = new AmazonS3Client(
    ACCESSKEY,
    SECRETKEY,
    Amazon.RegionEndpoint.USEast1
);
I too had the same error and later found that it was due to an issue with the proxy setting. After disabling the proxy for the S3 host, I was able to upload to S3 fine:
-Dhttp.nonProxyHosts=s3***.com
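For context, a sketch of where that JVM system property goes when launching the application (the JAR name is a placeholder):
# Sketch: exclude the S3 host from the JVM's HTTP proxy when starting the app.
java -Dhttp.nonProxyHosts=s3***.com -jar my-app.jar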
Just to register my particular case: I am configuring DSpace to use S3. It is very clearly explained, but with region "eu-north-1" it does not work; error 400 is returned by AWS. (Newer regions such as eu-north-1 accept only Signature Version 4 requests, which older clients may not send by default.) Create a test bucket with us-west-1 (the default) and try.
Bucket policy
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowPublicRead",
      "Effect": "Allow",
      "Principal": {
        "AWS": "*"
      },
      "Action": "s3:*",
      "Resource": "arn:aws:s3:::bucketname/*"
    }
  ]
}
CORS policy
[
  {
    "AllowedHeaders": ["*"],
    "AllowedMethods": ["GET", "PUT", "POST"],
    "AllowedOrigins": ["*"],
    "ExposeHeaders": []
  },
  {
    "AllowedHeaders": ["*"],
    "AllowedMethods": ["PUT", "POST", "DELETE", "GET", "HEAD"],
    "AllowedOrigins": ["*", "https://yourwebsite.com"],
    "ExposeHeaders": []
  }
]
Listing "https://yourwebsite.com" explicitly in the second rule is optional.
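To verify that the CORS rules behave as intended, one can send a preflight request with curl (a sketch; the bucket URL is a placeholder):
# Sketch: simulate a browser preflight against the bucket endpoint.
curl -i -X OPTIONS \
  -H "Origin: https://yourwebsite.com" \
  -H "Access-Control-Request-Method: PUT" \
  https://bucketname.s3.amazonaws.com/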

How to allow S3 downloads from "owner" while restricting referers in Bucket Policy

I have put the following bucket policy in effect for the product downloads bucket on my website. It works perfectly for HTTP traffic. However, this policy also prevents me from downloading directly from the S3 console or from third-party S3 clients like S3Hub.
How can I add to or change this policy so that I can interact with my files "normally" as a logged-in owner, but still restrict HTTP traffic as below?
{
  "Version": "2012-10-17",
  "Id": "http referer policy example",
  "Statement": [
    {
      "Sid": "Explicit deny to ensure requests are allowed only from specific referer.",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": "arn:aws:s3:::downloads.example.net/*",
      "Condition": {
        "StringNotLike": {
          "aws:Referer": [
            "https://example16.herokuapp.com/*",
            "http://localhost*",
            "https://www.example.net/*",
            "http://stage.example.net/*",
            "https://stage.example.net/*",
            "http://www.example.net/*"
          ]
        }
      }
    }
  ]
}
Remove:
"Principal": "*",
Replace with:
"NotPrincipal": { "AWS": "Your-AWS-account-ID" },
The policy should then apply only to requests that are not authorized by credentials associated with your account.
Note that because of the security implications of its logic inversion, NotPrincipal should only ever be used with Deny policies, not Allow policies, with few exceptions.
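Put together, the resulting statement would look roughly like this (a sketch; the account ID is a placeholder and the referer list is abbreviated to one entry from the question):
{
  "Version": "2012-10-17",
  "Id": "http referer policy example",
  "Statement": [
    {
      "Sid": "Explicit deny to ensure requests are allowed only from specific referer.",
      "Effect": "Deny",
      "NotPrincipal": { "AWS": "111122223333" },
      "Action": "s3:*",
      "Resource": "arn:aws:s3:::downloads.example.net/*",
      "Condition": {
        "StringNotLike": {
          "aws:Referer": [
            "https://www.example.net/*"
          ]
        }
      }
    }
  ]
}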

S3 Cross Account Notifications

It is possible to send S3 events from Account A to an SQS queue in Account B, but the only way I have been able to achieve this is by opening up the permissions for the SendMessage action on the queue to allow everyone access.
Is it possible to configure S3 events to send messages to a queue in a different account with some permission restrictions in place on the SQS queue?
For example, if I try to restrict access to a specific account (e.g. 123456789012), I receive an error in the S3 console when I try to save the event: "Unable to validate the following destination configurations : Permissions on the destination queue do not allow S3 to publish notifications from this bucket"
{
  "Version": "2012-10-17",
  "Id": "sqs-permission",
  "Statement": [
    {
      "Sid": "sqs-permision-statement",
      "Effect": "Allow",
      "Principal": {
        "AWS": "123456789012"
      },
      "Action": "SQS:SendMessage",
      "Resource": "arn:aws:sqs:us-east-1:210987654321:my-queue"
    }
  ]
}
According to the documented example, the authorization needs to be granted to S3 itself, not to the account owning the bucket:
"Principal": {
"AWS": "*"
},
...
"Condition": {
"ArnLike": {
"aws:SourceArn": "arn:aws:s3:*:*:bucket-name"
}
}
The * principal seems unusually permissive, but the likely explanation is that aws:SourceArn is not a value that could be spoofed by a malicious user, any more than, say, aws:SourceIp.
By contrast, the SNS example shows this principal, which seems more appropriate, if it works for SQS notifications:
"Principal": {
"Service": "s3.amazonaws.com"
},
You'd still want to include the Condition block.
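Assembled into a full queue policy with the service principal and the condition (a sketch using the queue ARN and bucket-name placeholder from above):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "s3.amazonaws.com"
      },
      "Action": "SQS:SendMessage",
      "Resource": "arn:aws:sqs:us-east-1:210987654321:my-queue",
      "Condition": {
        "ArnLike": {
          "aws:SourceArn": "arn:aws:s3:*:*:bucket-name"
        }
      }
    }
  ]
}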