ASP.NET Core 2.0 site running on Lightsail Ubuntu can't access AWS S3

I have a website that I've built with ASP.NET Core 2.0. The website gets a list of files sitting in my AWS S3 bucket and displays them as links for authenticated users. When I run the website locally I have no issues and am able to access S3 to generate pre-signed URLs. When I deploy the web app to Lightsail Ubuntu I get this incredibly useful error message: AmazonS3Exception: Access Denied.
At first I thought it was a region issue. I changed my S3 buckets to use the same region as my Lightsail Ubuntu instance (us-east-2). Then I thought it might be a CORS issue and made sure that my buckets allowed CORS.
I'm kinda stuck at the moment.

I had exactly the same issue, and I solved it by creating environment variables. To add an environment variable permanently on Ubuntu, open the environment file with the following command:
sudo vi /etc/environment
then add your credentials like this:
AWS_ACCESS_KEY_ID=YOUR_KEY_ID
AWS_SECRET_ACCESS_KEY=YOUR_SECRET_ACCESS_KEY
Then save the environment file and restart your ASP.NET Core app.
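The same edit can be scripted; a minimal sketch (the key values are placeholders, and a scratch file stands in for /etc/environment so the sketch runs without root — on the server, point it at /etc/environment and append with sudo):

```shell
# A scratch file stands in for /etc/environment here; on the real server:
#   sudo tee -a /etc/environment
ENV_FILE=$(mktemp)

# /etc/environment takes plain KEY=value lines -- no 'export' keyword.
cat >> "$ENV_FILE" <<'EOF'
AWS_ACCESS_KEY_ID=YOUR_KEY_ID
AWS_SECRET_ACCESS_KEY=YOUR_SECRET_ACCESS_KEY
EOF

grep -c '^AWS_' "$ENV_FILE"   # both lines present -> prints 2
```

Note that /etc/environment is read at login, not by already-running processes, which is why the app (or the instance) has to be restarted before the AWS SDK's default credential chain can pick the variables up.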

Related

What's the best strategy for flutter-web to access its resources on server?

I've deployed a release on an Ubuntu server with Apache. When I access the deployed app, the console shows: Error while trying to use the following icon from the Manifest: https://mydomain/icons/Icon-192.png (Download error or resource isn't a valid image).
It seems Flutter web is trying to download Icon-192.png, which resides in a subdirectory without permissions, giving me a Forbidden error.
I'd like to know the best approach for accessing contents (assets like the favicon, images, etc.) in the deployed directory and its subdirectories without making them public.

Spinnaker Support for App ELB in AWS

I'm facing two issues with a new Spinnaker installation.
I cannot see my Application Load Balancers listed in the dropdown of the load balancers tab while creating a pipeline. We currently use only Application Load Balancers in our setup. I tried editing the pipeline JSON with the config below, and it didn't work. I verified this by checking the ASG created in my AWS account: no ELB or target group is associated with it.
"targetGroups": [
"TG-APP-ELB-NAME-DEV"
],
Hence, I would like to confirm how I can get Application Load Balancer support into my Spinnaker installation and how to use it.
I also have an AMI search issue. My current setup, briefly:
One managing account - prod, where my Spinnaker EC2 instance and my prod application instances are running
Two managed accounts - dev & test, where my application test instances are running
When I create a new AMI in my dev AWS account and try to search for the newly created AMI from Spinnaker, it fails with an error that it couldn't find the AMI. I then shared the AMI from dev to prod, after which Spinnaker was able to find it, but it failed with an UnAuthorized error.
Please help me clarify:
1. Whether sharing is required for any new AMI from dev -> prod, or whether our spinnakerManaged role would take care of permissions
2. How to fix this problem and create the AMI successfully
Regarding #1, did you create the Application Load Balancer through the Spinnaker UI or directly through AWS?
If it is the former, then make sure it follows the naming convention expected by Spinnaker (I believe the load balancer name should start with the application name).
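If the balancer was created outside Spinnaker, the deploy stage's cluster config can reference its target groups directly. A hypothetical fragment, modelled on the snippet in the question (the account and application names are placeholders, not taken from the question):

```json
{
  "type": "deploy",
  "clusters": [
    {
      "provider": "aws",
      "account": "dev",
      "application": "myapp",
      "targetGroups": [
        "TG-APP-ELB-NAME-DEV"
      ]
    }
  ]
}
```

The key point is that `targetGroups` belongs inside the cluster definition, not at the top level of the pipeline JSON.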

Amazon S3 suddenly stopped working with EC2 but working from localhost

Creating folders and uploading files to my S3 bucket stopped working.
The remote server returned an error: (403) Forbidden.
Everything worked previously, and I did not change anything recently.
After days of testing, I see that I am able to create folders in my bucket from localhost, but the same code doesn't work on the EC2 instance.
I must resolve the issue ASAP.
Thanks
diginotebooks
Does your EC2 instance have a role? If yes, what is this role? Is it possible that someone detached or modified a policy that was attached to it?
If your instance doesn't have a role, how do you upload files to S3? Using the AWS CLI tools? Same questions for the IAM profile used.
If you did not change anything, are you using the same IAM credentials on the server and on localhost? It may be related to this.
Just random thoughts...
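The role and credential questions above can be checked directly on the instance; a hedged sketch (assumes curl, the AWS CLI, and instance metadata service v1 — each command falls back to a marker string when it can't reach the service, so it degrades gracefully off EC2):

```shell
# 1. Which IAM role, if any, is attached to this instance?
#    (queries the EC2 instance metadata service; only reachable from EC2)
check_role() {
  curl -s --max-time 2 \
    http://169.254.169.254/latest/meta-data/iam/security-credentials/ \
    || echo "no-metadata-service"
}

# 2. Which identity do the server's credentials actually resolve to?
#    Compare this ARN with what the same command prints on localhost.
check_identity() {
  aws sts get-caller-identity --query Arn --output text 2>/dev/null \
    || echo "no-aws-cli-or-credentials"
}

check_role
check_identity
```

If the two ARNs differ between the server and localhost, the 403 is almost certainly a policy difference between those two identities rather than a code problem.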

private yum/rpm repository in AWS S3 with basic authentication

I need to create a yum repository. I want to store the files in S3 because: A) we already have a ton of files there; and B) because price is not bad (according to some definition of "not bad").
The repository needs to be private - yum clients will need to provide some kind of credentials to access it.
yum allows you to use Basic HTTP Authentication for private repos:
baseurl=https://user:pass@s3-site.whatever.com
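Spelled out as a full repo definition, that form would look something like this (the repo id, name, and URL are placeholders):

```ini
[private-s3]
name=Private S3 yum repo
baseurl=https://user:pass@s3-site.whatever.com/repo/
enabled=1
gpgcheck=0
```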
I could enable Static Website Hosting on an S3 bucket, but it doesn't seem to support Basic Auth.
There's a yum plugin called yum-s3-iam that allows you to set up access control based on the IAM role of the instance where yum is running, but this only works for instances in Amazon.
I could create a front-end instance, mount the S3 bucket on it with s3fs, and install Apache with Basic Auth, but this would require running that front-end instance, which I'm trying to avoid (I want to reduce the number of moving parts).
Then there's the s3auth proxy which basically does the same thing, but at a higher level. It still requires a front-end instance.
Is there a better solution, something that avoids the need to create a front-end instance?

not able to deploy war file in jelastic cloud

I developed a web application using Java and MongoDB, with GlassFish as the server.
I tried to deploy it on the Jelastic cloud service.
I uploaded my WAR file, but when I run the app after deploying it, it shows a 404 error. Why? The project works fine on my machine.
There are at least a few potential causes:
your app needs some resources which are not started by default (such as DerbyDB). In this case, check the GlassFish log file - server_instance.log - for more details.
you are trying to get resources from the wrong context; make sure you request them via the correct context name
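For the first cause, the log check can be scripted; a minimal sketch (on a real install the log lives under the domain directory, e.g. glassfish/domains/domain1/logs/server.log — a sample log is generated here so the sketch runs anywhere):

```shell
# On a real install, point LOG at the actual server log, e.g.:
#   LOG=glassfish/domains/domain1/logs/server.log
LOG=$(mktemp)
cat > "$LOG" <<'EOF'
[#|2017-09-01T10:00:00|INFO|glassfish|deployment|Loading application|#]
[#|2017-09-01T10:00:01|SEVERE|glassfish|deployment|Exception while deploying the app|#]
EOF

# Deployment failures are logged at SEVERE level, so this surfaces them:
grep 'SEVERE' "$LOG"
```

A SEVERE entry around the deployment timestamp usually names the missing resource or class causing the 404.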