Unable to retrieve recordings from Amazon S3 Bucket - amazon-s3

We created an instance in Amazon Connect and configured the Data Storage section to store call recordings. Normally, call recordings are stored in Amazon S3, but in our case the recordings are not appearing there. Amazon Connect created a bucket in S3, yet it remains empty.

The "folder structure" you're referring to is actually part of the object key that is created in S3; see this link for more information. You are not seeing any "folder structure" because no objects have been created with that prefix yet. For Amazon Connect to create a call recording, you must enable recording for a Contact Flow. Once a call is processed through a Contact Flow that has recording enabled, you will see the recording as an object in S3 with the expected prefix ("folder structure").
To enable recording in a call flow, add the Set Recording Behavior step to your Contact Flow.
This can be found under the Set section of the available steps in the Contact Flow Editor.
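Once recordings do start landing in the bucket, you can verify them programmatically. The sketch below assumes the date-based key layout Amazon Connect typically uses (`connect/<instance-alias>/CallRecordings/YYYY/MM/DD/`); the bucket and instance-alias names are placeholders, and the exact prefix may vary with your instance configuration.

```python
from datetime import date

def recording_prefix(instance_alias: str, day: date) -> str:
    """Build the S3 key prefix Amazon Connect typically uses for one day's recordings."""
    return (f"connect/{instance_alias}/CallRecordings/"
            f"{day.year:04d}/{day.month:02d}/{day.day:02d}/")

def list_recordings(bucket: str, instance_alias: str, day: date):
    """List recording objects for one day (requires AWS credentials)."""
    import boto3  # imported here so the prefix helper stays dependency-free
    s3 = boto3.client("s3")
    resp = s3.list_objects_v2(Bucket=bucket,
                              Prefix=recording_prefix(instance_alias, day))
    return [obj["Key"] for obj in resp.get("Contents", [])]
```

An empty result from `list_recordings` simply means no call with recording enabled has been processed yet, matching the behavior described above.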

Related

Container Field under Storage Account Settings is not displayed

While creating a stream analytics job for IoT Edge, one needs to associate the job with a container in a storage account. In storage account settings under configure section of Stream Analytics Job - Container Field is not being displayed when "Add Storage Account" is selected.
Is there a new workflow to add a storage container for a stream analytics job?
I tried adding a storage container in the storage account of the same resource group. It didn't help.
Documentation Link: https://learn.microsoft.com/en-us/azure/iot-edge/tutorial-deploy-stream-analytics?view=iotedge-2018-06#create-an-azure-stream-analytics-job
The issue is at Step 3, where the "Container" field is not displayed in the Azure Portal on the Stream Analytics Job page.
Configure IoT Edge settings - Source Documentation
1. Under Configure, select Storage account settings, then select Add storage account.
2. Select the Storage account that you created at the beginning of this tutorial from the drop-down menu.
3. For the Container field, select Create new and provide a name for the storage container.
4. Select Save.
Image from documentation
The workflow does seem to have changed. Instead of specifying the container in the Portal, you only supply a storage account. Once you've done that, ASA will create a container when you publish, with your Edge job inside it. Future publishes will reuse the same container.
If you want, you can use the button at the bottom of the docs to give feedback to Microsoft so they can change the documentation.

Download email attachment and upload to S3 bucket (AWS)

I have a web app hosted on AWS under the free-tier limit. What I want to achieve is that whenever I receive an email, the system downloads its attachments (they will be images only), uploads each image to S3, and saves the image ID in a database along with the sender's email address. I don't want to use the Zapier API or similar; I want to code it myself. How can I achieve this?
This really depends upon how your email is hosted.
You could use Receive Email with Amazon Simple Email Service.
The flow could then either be:
SES -> S3 -> Trigger Event -> AWS Lambda function, or
SES -> SNS -> AWS Lambda function
You would then need to write a Lambda function to do the processing you described.
If, on the other hand, your email is hosted elsewhere, then you will need a mechanism on your email system to trigger some code when an email is received, or a scheduled Lambda function that polls the email system to see whether new mail has arrived.
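For the SES -> S3 -> Trigger Event -> Lambda flow, a minimal sketch of the Lambda function might look like this. SES stores the raw MIME message as an S3 object, which the function can parse with Python's standard `email` module; the destination bucket name is a placeholder, and the database write is left as a comment.

```python
import email

def extract_image_attachments(raw_message: bytes):
    """Parse a raw MIME message; return (sender, [(filename, image_bytes), ...])."""
    msg = email.message_from_bytes(raw_message)
    sender = msg.get("From", "")
    images = []
    for part in msg.walk():
        if part.get_content_maintype() == "image":
            images.append((part.get_filename() or "unnamed",
                           part.get_payload(decode=True)))
    return sender, images

def handler(event, context):
    """Lambda handler for the S3 trigger fired when SES stores a message."""
    import boto3  # local import: the parser above needs no AWS dependencies
    s3 = boto3.client("s3")
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        raw = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
        sender, images = extract_image_attachments(raw)
        for filename, data in images:
            # "my-image-bucket" is a hypothetical destination bucket
            s3.put_object(Bucket="my-image-bucket",
                          Key=f"attachments/{filename}", Body=data)
        # save (image id, sender) to your database here
```

The SES -> SNS -> Lambda variant is similar, except the raw message (size-limited) arrives inside the SNS notification payload instead of an S3 object.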

Can I route my mailgun email to S3 bucket and save data there?

I have a requirement where I need to fetch email data and save somewhere. I can route it to my server and save the data there. Is there any feature where the email data can be routed to an S3 Bucket and I save the data there?
As I see it, you need a backend app that periodically fetches your emails via the Mailgun API (returned as JSON), then saves them as text files on S3, where they can be queried later by another backend application.
It requires some programming and server infrastructure.
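A rough sketch of that backend, assuming you use Mailgun's store-and-notify routing (which gives you a storage URL per message) and authenticate with HTTP Basic auth as `api:<key>`. The key layout and all names here are conventions of this example, not anything Mailgun or S3 mandates.

```python
import base64
import urllib.request

def s3_key_for_message(sender: str, timestamp: int) -> str:
    """Deterministic S3 key for a stored message (layout is just a convention)."""
    safe_sender = sender.replace("@", "_at_")
    return f"mailgun/{safe_sender}/{timestamp}.json"

def fetch_stored_message(storage_url: str, api_key: str) -> bytes:
    """Download a stored message JSON from the URL Mailgun's store() action provides."""
    req = urllib.request.Request(storage_url)
    token = base64.b64encode(f"api:{api_key}".encode()).decode()
    req.add_header("Authorization", f"Basic {token}")
    with urllib.request.urlopen(req) as resp:
        return resp.read()

def save_to_s3(bucket: str, key: str, body: bytes):
    """Persist the message body to S3 (requires AWS credentials)."""
    import boto3  # local import: only needed when actually uploading
    boto3.client("s3").put_object(Bucket=bucket, Key=key, Body=body)
```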

How do I trigger an action on my application users' data sync on Amazon Cognito

I need to trigger a check on the user data sync event of Amazon Cognito. How can I hook to the sync event?
I'd like to also enhance that data in some cases for the mobile application to fetch on next sync. Is the data editable from the server (not the mobile device)?
Currently Cognito does not support triggers on sync data. If you would like to see this feature added, please request it in our forums; that helps us prioritize our roadmap.
Yes, you can edit the data using your AWS credentials (not one received from Cognito). Please refer to our blog for more details.
Thanks,
Rachit
Amazon Cognito Streams is now available to all and is the best way to trigger an action on Cognito Sync. It requires creating a Kinesis stream and then editing the Cognito identity pool to link the stream to it. The stream can be consumed by Lambda or any other Kinesis consumer, such as Redshift or a KCL application.
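A Lambda consumer for that Kinesis stream could start like the sketch below. Kinesis delivers each record base64-encoded; the payload is the JSON update document Cognito Sync publishes (the field names referenced in the comment, such as `identityId` and `datasetName`, are taken from the Cognito Streams record format and should be verified against your actual stream).

```python
import base64
import json

def decode_stream_records(event: dict):
    """Decode Cognito Streams updates from a Lambda Kinesis event."""
    updates = []
    for record in event["Records"]:
        payload = base64.b64decode(record["kinesis"]["data"])
        updates.append(json.loads(payload))
    return updates

def handler(event, context):
    for update in decode_stream_records(event):
        # inspect e.g. update.get("identityId"), update.get("datasetName")
        # and run your per-sync check or enrichment here
        print(update)
```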

Is there a way to pull AWS EC2 billing usage? Does Amazon provide a REST API to get this data?

We would like to manage AWS users' billing cycles. Does AWS provide an API to get this information programmatically?
Using CloudWatch, you can set up Billing Alerts. You will be notified via SNS when your bill exceeds your defined limits. Read more about this in this blog post or in the documentation. Of course, you can also use the CloudWatch API to build custom reports from this data.
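As a sketch of querying that billing data via the CloudWatch API with boto3: the `EstimatedCharges` metric lives in the `AWS/Billing` namespace and is only published in `us-east-1`, with a `Currency` dimension. The six-hour period and 24-hour lookback below are arbitrary choices for the example.

```python
from datetime import datetime, timedelta

def estimated_charges_query(hours_back: int = 24) -> dict:
    """Build get_metric_statistics parameters for the EstimatedCharges metric (USD)."""
    now = datetime.utcnow()
    return {
        "Namespace": "AWS/Billing",
        "MetricName": "EstimatedCharges",
        "Dimensions": [{"Name": "Currency", "Value": "USD"}],
        "StartTime": now - timedelta(hours=hours_back),
        "EndTime": now,
        "Period": 3600 * 6,        # one datapoint per six hours
        "Statistics": ["Maximum"],
    }

def latest_estimated_charges() -> float:
    """Fetch the most recent estimated charge (requires AWS credentials)."""
    import boto3  # billing metrics are only available in us-east-1
    cw = boto3.client("cloudwatch", region_name="us-east-1")
    points = cw.get_metric_statistics(**estimated_charges_query())["Datapoints"]
    if not points:
        return 0.0
    return max(points, key=lambda p: p["Timestamp"])["Maximum"]
```

Note that estimated-charges metrics only exist once you enable "Receive Billing Alerts" in the account's billing preferences.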