Getting "Missing credentials in config, if using AWS_CONFIG_FILE, set AWS_SDK_LOAD_CONFIG=1" despite having credentials in config file - amazon-s3

I have a typescript/node-based application where the following line of code is throwing an error:
const res = await s3.getObject(obj).promise();
The error I'm getting in terminal output is:
❌ Missing credentials in config, if using AWS_CONFIG_FILE, set AWS_SDK_LOAD_CONFIG=1
CredentialsError: Missing credentials in config, if using AWS_CONFIG_FILE, set AWS_SDK_LOAD_CONFIG=1
However, I do actually have a credentials file in my .aws directory with values for aws_access_key_id and aws_secret_access_key. I have also exported the values for these with the variables AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY. I have also tried this with and without running export AWS_SDK_LOAD_CONFIG=1 but to no avail (same error message). Would anyone be able to provide any possible causes/suggestions for further troubleshooting?

Install dotenv: npm i dotenv
Add a .env file containing your AWS_ACCESS_KEY_ID and related credentials.
Then in your index.js or equivalent entry file add require("dotenv").config();
Then update the config of your AWS instance:
region: "eu-west-2",
maxRetries: 3,
httpOptions: { timeout: 30000, connectTimeout: 5000 },
accessKeyId: process.env.AWS_ACCESS_KEY_ID,
secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY,
});
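With the config updated, the s3.getObject call from the question should be able to find credentials. A minimal sketch of the usage, assuming the aws-sdk v2 client from the question (the bucket and key names are placeholders):

const AWS = require("aws-sdk");

// The client picks up the region and credentials set via AWS.config.update above.
const s3 = new AWS.S3();

async function fetchObject() {
  // Bucket and Key are placeholder values; substitute your own.
  const res = await s3.getObject({ Bucket: "your-bucket", Key: "path/to/object" }).promise();
  return res.Body;
}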

Try not setting AWS_SDK_LOAD_CONFIG to anything (unset it). Unset all other AWS_* variables too. On macOS/Linux, you can run export | grep AWS_ to find others you might have set.
Next, do you have AWS connectivity from the command line? Install the AWS CLI v2 if you don't have it yet, and run aws sts get-caller-identity from a terminal window. Don't bother trying to run node until you get this working. You can also try aws configure list.
Read through all the sections of Configuring the AWS CLI, paying particular attention to how to use the credentials and config files at $HOME/.aws/credentials and $HOME/.aws/config. Are you using the default profile or a named profile?
I prefer to use named profiles, but I use more than one so that may not be needed for you. I have always found success using the AWS_PROFILE environment variable:
export AWS_PROFILE=your_profile_name # macOS/linux
setx AWS_PROFILE your_profile_name # Windows
$Env:AWS_PROFILE="your_profile_name" # PowerShell
This works for me both with an Okta/gimme-aws-creds scenario, as well as an Amazon SSO scenario. With the Okta scenario, just the AWS secret keys go into $HOME/.aws/credentials, and further configuration such as default region or output format go in $HOME/.aws/config (this separation is so that tools can completely rewrite the credentials file without touching the config). With the Amazon SSO scenario, all the settings go in the config.
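On the Node side, if you would rather be explicit than rely on the AWS_PROFILE environment variable, the v2 SDK can also load a named profile from the shared credentials file directly. A minimal sketch, assuming aws-sdk v2; the profile name and region are placeholders:

const AWS = require("aws-sdk");

// Load a named profile from ~/.aws/credentials instead of relying on env vars.
AWS.config.credentials = new AWS.SharedIniFileCredentials({ profile: "your_profile_name" });
AWS.config.update({ region: "eu-west-2" }); // example region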

Related

`Unable to read keyfile` error when trying to initialize Datomic Cloud with Client API on AWS S3

I'm testing setting up Datomic Cloud using the IntelliJ IDE. I'm following the Client API tutorial from Datomic but am stuck initializing the client.
The spec for the API client is here, and the tutorial is here, under the step Using Datomic Cloud.
The tutorial says to initialize a client like so:
(require '[datomic.client.api :as d])

(def cfg {:server-type :ion
          :region "<your AWS Region>" ;; e.g. us-east-1
          :system "<system name>"
          :creds-profile "<your_aws_profile_if_not_using_the_default>"
          :endpoint "<your endpoint>"})
They say to include an AWS profile if not using the default. I am using the default as far as I know--I'm not part of an org in AWS.
This is the (partially redacted) code from my tutorial.core namespace, where I'm trying to init Datomic:
(ns tutorial.core
  (:gen-class))

(require '[datomic.client.api :as d])

(def cfg {:server-type :cloud
          :region "us-east-2"
          :system "roam"
          :endpoint "https://API_ID.execute-api.us-east-2.amazonaws.com"})

(def client (d/client cfg))

(d/create-database client {:db-name "blocks"})

(d/connect client {:db-name "blocks"})
However, I'm getting an error from Clojure: Forbidden to read keyfile at s3://URL/roam/datomic/access/admin/.keys. Make sure that your endpoint is correct, and that your ambient AWS credentials allow you to GetObject on the keyfile.
Do I need some sort of credential? Could anything else be causing this error? I got the endpoint URL from the ClientApiGatewayEndpoint in my CloudFormation Datomic stack.
Please let me know if I should provide more info! Thanks.
I tried the solution mentioned here and it didn't work; I can't find an answered question for this anywhere online.
When you initialize the client from your computer, the Datomic library is trying to reach S3 to read some configuration file using whatever AWS credentials you passed. You mention that you are using the default profile, so most likely you have a ~/.aws/credentials file with a [default] entry and an access key and secret key.
The error:
Forbidden to read keyfile at s3://URL/roam/datomic/access/admin/.keys.
Make sure that your endpoint is correct, and that your ambient AWS
credentials allow you to GetObject on the keyfile.
This means that the Datomic library can't read the file from S3. There are many reasons why this could be the case. Try running the following using the AWS CLI:
aws s3 cp s3://URL/roam/datomic/access/admin/.keys .
I'm assuming that it will fail, and that would mean your default profile doesn't have the necessary permissions to read this resource from S3, hence the error you get from Datomic. To fix it you would need to add the necessary permission to the IAM user or role behind your default profile (GetObject on the keyfile, as suggested by the error message).
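For illustration only, the kind of IAM policy statement the error message is asking for looks roughly like this; the bucket name stands in for the redacted URL from the question, and your Datomic system may need a broader or more specific resource path:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::YOUR-DATOMIC-BUCKET/roam/datomic/access/*"
    }
  ]
}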
One thing you could try is creating a profile. In your ~/.aws/credentials
[datomic]
aws_access_key_id = your-access-key
aws_secret_access_key = your-secret-key
region = us-east-2
Then change your cfg map to:
(def cfg {:server-type :cloud
          :region "us-east-2"
          :system "roam"
          :creds-profile "datomic"
          :endpoint "https://API_ID.execute-api.us-east-2.amazonaws.com"})
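To double-check that the new profile can actually read the keyfile, you can rerun the earlier copy command with the profile flag (URL is still the redacted placeholder from the question):

aws s3 cp s3://URL/roam/datomic/access/admin/.keys . --profile datomic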

.NET Core 3.x setting development AWS credentials

I have EC2 instances (via Elastic Beanstalk) running my ASP.Net Core 3.1 web app without a problem. AWS credentials are included in the key pair configured with the instance.
I want to now store my Data Protection keys in a S3 bucket that I created for them, so I can share the keys among all of the EC2 instances. However, when I add this service in my Startup.ConfigureServices, I get a runtime error locally:
services.AddDefaultAWSOptions(Configuration.GetAWSOptions("AWS"));
services.AddAWSService<IAmazonS3>();
services.AddDataProtection()
    .SetApplicationName("Crums")
    .PersistKeysToAWSSystemsManager("/CrumsWeb/DataProtection");
My app runs fine locally if I comment out the .PersistKeysToAWSSystemsManager("/CrumsWeb/DataProtection"); line above. When I uncomment the line, the error occurs. So it has something to do with that, but I can't seem to figure it out.
I was going to use PersistKeysToAwsS3 by hotchkj, but it was deprecated when AWS came out with PersistKeysToAWSSystemsManager.
The runtime error AmazonClientException: No RegionEndpoint or ServiceURL configured happens on CreateHostBuilder in my Program.cs.
I've spent many hours on this trying just to get Visual Studio 2019 to run my app locally, using suggestions from these sites:
https://aws.amazon.com/blogs/developer/configuring-aws-sdk-with-net-core/
https://docs.aws.amazon.com/sdk-for-net/v3/developer-guide/net-dg-config-netcore.html
ASP NET Core AWS No RegionEndpoint or ServiceURL configured when deployed to Heroku
No RegionEndpoint or ServiceURL configured
https://github.com/secretorange/aws-aspnetcore-environment-startup
https://www.youtube.com/watch?v=C4AyfV3Z3xs&ab_channel=AmazonWebServices
My appsettings.Development.json (and I also tried it in appsettings.json) contains:
"AWS": {
"Profile": "default",
"Region": "us-east-1",
"ProfilesLocation": "C:\\Users\\username\\.aws\\credentials"
}
And the credentials file contains:
[default]
aws_access_key_id = MY_ACCESS_KEY
aws_secret_access_key = MY_SECRET_KEY
region = us-east-1
toolkit_artifact_guid=GUID
I ended up abandoning PersistKeysToAWSSystemsManager for storing my Data Protection keys because I don't want to set up yet another AWS service just to store keys in their Systems Manager. I am already paying for an S3 account, so I chose to use the deprecated NuGet package AspNetCore.DataProtection.Aws.S3.
I use server-side encryption on the bucket I created for the keys. This is the code in Startup.cs:
services.AddDataProtection()
    .SetApplicationName("AppName")
    .PersistKeysToAwsS3(new AmazonS3Client(RegionEndpoint.USEast1), new S3XmlRepositoryConfig("S3BucketName")
    {
        KeyPrefix = "DataProtectionKeys/", // Folder in the S3 bucket for keys
    });
Notice the RegionEndpoint parameter in the PersistKeysToAwsS3 call, which resolved the No RegionEndpoint or ServiceURL configured error.
I added the AmazonS3FullAccess policy to the IAM role that's running in all my instances.
This gives the instance the permissions to access the S3 bucket. My local development computer also seems to be able to access the S3 bucket, although I don't know where it's getting credentials from. I tried several iterations of appsettings.json and credentials file changes to locally set region and credentials, but nothing worked. Maybe it's using credentials I entered when I set up the AWS Toolkit in Visual Studio.

Stackdriver Node.js Logging not showing up

I have a Node.js application, running inside of a Docker container and logging events using Stackdriver.
It is a Node.js app, running with Express.js and Winston for logging, using a StackDriverTransport.
When I run this container locally, everything is logged correctly and shows up in the Cloud console. When I run this same container, with the same environment variables, in a GCE VM, the logs don't show up.
What do you mean exactly by locally? Are you running the container in Cloud Shell vs running it on an instance? Keep in mind that if you create a container or instance that has to do something that needs privileges (like the Stackdriver logging client library) and run it, it won't work if that instance doesn't have a service account with that role/those privileges set up.
You mentioned that you use the same environment variables; I take it that one of the env vars points to your JSON key file. Is the key file present at that path on the instance?
From Winston documentation it looks like you need to specify the key file location for the service account:
const winston = require('winston');
const Stackdriver = require('@google-cloud/logging-winston');

winston.add(Stackdriver, {
  projectId: 'your-project-id',
  keyFilename: '/path/to/keyfile.json'
});
Have you checked if this is defined with the key for the service account with a logging role?
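In case you are on a newer release of @google-cloud/logging-winston, where the transport is exposed as a class, the equivalent setup looks roughly like the sketch below; projectId and keyFilename are placeholders:

const winston = require('winston');
const { LoggingWinston } = require('@google-cloud/logging-winston');

// Point the transport at the service account key explicitly so it does not
// depend on ambient credentials that may be missing on the GCE VM.
const stackdriver = new LoggingWinston({
  projectId: 'your-project-id',
  keyFilename: '/path/to/keyfile.json',
});

const logger = winston.createLogger({
  transports: [new winston.transports.Console(), stackdriver],
});

logger.info('Logging from inside the container');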

Configuring Google cloud bucket as Airflow Log folder

We just started using Apache Airflow in our project for our data pipelines. While exploring the features, we came to know about configuring a remote folder as the log destination in Airflow. For that we:
Created a Google Cloud Storage bucket.
From the Airflow UI, created a new GS connection.
I am not able to understand all the fields. I just created a sample GS bucket under my project from the Google console and gave that project ID to this connection. I left the key file path and scopes blank.
Then I edited the airflow.cfg file as follows:
remote_base_log_folder = gs://my_test_bucket/
remote_log_conn_id = test_gs
After these changes I restarted the web server and scheduler, but my DAGs are still not writing logs to the GS bucket. I can see the logs being created in base_log_folder, but nothing is created in my bucket.
Is there any extra configuration needed from my side to get it working?
Note: Using Airflow 1.8. (I faced the same issue with Amazon S3 as well.)
Updated on 20/09/2017: Tried the GS method, attaching a screenshot.
Still I am not getting logs in the bucket.
Thanks,
Anoop R
I advise you to use a DAG to connect Airflow to GCP instead of the UI.
First, create a service account on GCP and download the JSON key.
Then execute this DAG (you can modify the scope of your access):
import json
from datetime import datetime

from airflow import DAG, settings
from airflow.models import Connection
from airflow.operators.python_operator import PythonOperator

def add_gcp_connection(ds, **kwargs):
    """Add an Airflow connection for GCP."""
    new_conn = Connection(
        conn_id='gcp_connection_id',
        conn_type='google_cloud_platform',
    )
    scopes = [
        "https://www.googleapis.com/auth/pubsub",
        "https://www.googleapis.com/auth/datastore",
        "https://www.googleapis.com/auth/bigquery",
        "https://www.googleapis.com/auth/devstorage.read_write",
        "https://www.googleapis.com/auth/logging.write",
        "https://www.googleapis.com/auth/cloud-platform",
    ]
    conn_extra = {
        "extra__google_cloud_platform__scope": ",".join(scopes),
        "extra__google_cloud_platform__project": "<name_of_your_project>",
        "extra__google_cloud_platform__key_path": "<path_to_your_json_key>",
    }
    conn_extra_json = json.dumps(conn_extra)
    new_conn.set_extra(conn_extra_json)

    session = settings.Session()
    if not (session.query(Connection)
                   .filter(Connection.conn_id == new_conn.conn_id).first()):
        session.add(new_conn)
        session.commit()
    else:
        msg = '\n\tA connection with `conn_id`={conn_id} already exists\n'
        msg = msg.format(conn_id=new_conn.conn_id)
        print(msg)

dag = DAG('add_gcp_connection', start_date=datetime(2016, 1, 1), schedule_interval='@once')

# Task to add a connection
AddGCPCreds = PythonOperator(
    dag=dag,
    task_id='add_gcp_connection_python',
    python_callable=add_gcp_connection,
    provide_context=True)
Thanks to Yu Ishikawa for this code.
Yes, you need to provide additional information for both the S3 and GCP connections.
S3
The configuration is passed via the extra field as JSON. You can provide only a profile
{"profile": "xxx"}
or credentials
{"profile": "xxx", "aws_access_key_id": "xxx", "aws_secret_access_key": "xxx"}
or a path to a config file
{"profile": "xxx", "s3_config_file": "xxx", "s3_config_format": "xxx"}
In case of the first option, boto will try to detect your credentials.
Source code - airflow/hooks/S3_hook.py:107
GCP
You can either provide key_path and scope (see Service account credentials) or credentials will be extracted from your environment in this order:
Environment variable GOOGLE_APPLICATION_CREDENTIALS pointing to a file with stored credentials information.
Stored "well known" file associated with gcloud command line tool.
Google App Engine (production and testing)
Google Compute Engine production environment.
Source code - airflow/contrib/hooks/gcp_api_base_hook.py:68
The reason for logs not being written to your bucket could be related to the service account rather than the configuration of Airflow itself. Make sure it has access to the mentioned bucket. I had the same problems in the past.
Try adding more generous permissions to the service account, e.g. even project-wide Editor, and then narrowing them down. You could also try using the gs client with that key and see if you can write to the bucket.
For me personally this scope works fine for writing logs: "https://www.googleapis.com/auth/cloud-platform"
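One way to test that write access outside Airflow, assuming you have the Cloud SDK installed (the key path uses a placeholder, and the bucket name is the one from the question):

gcloud auth activate-service-account --key-file=/path/to/your-key.json
echo "test" > test.txt
gsutil cp test.txt gs://my_test_bucket/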

Git - Push to Deploy and Removing Dev Config

So I'm writing a Facebook App using Rails, and hosted on Heroku.
On Heroku, you deploy by pushing your repo to the server.
When I do this, I'd like it to automatically change a few dev settings (facebook secret, for example) to production settings.
What's the best way to do this? Git hook?
There are a couple of common practices to handle this situation if you don't want to use Git hooks or other methods to modify the actual code upon deploy.
Environment Based Configuration
If you don't mind having the production values of your configuration settings in your repository, you can make them environment-based. I sometimes use something like this:
# config/application.yml
default:
  facebook:
    app_id: app_id_for_dev_and_test
    app_secret: app_secret_for_dev_and_test
    api_key: api_key_for_dev_and_test

production:
  facebook:
    app_id: app_id_for_production
    app_secret: app_secret_for_production
    api_key: api_key_for_production

# config/initializers/app_config.rb
require 'yaml'

yaml_data = YAML::load(ERB.new(IO.read(File.join(Rails.root, 'config', 'application.yml'))).result)
config = yaml_data["default"]

begin
  config.merge! yaml_data[Rails.env]
rescue TypeError
  # nothing specified for this environment; do nothing
end

APP_CONFIG = HashWithIndifferentAccess.new(config)
Now you can access the data via, for instance, APP_CONFIG[:facebook][:app_id], and the value will automatically be different based on which environment the application was booted in.
Environment Variables Based Configuration
Another option is to specify production data via environment variables. Heroku allows you to do this via config vars.
Set up your code to use a value based on the environment (maybe with optional defaults):
facebook_app_id = ENV['FB_APP_ID'] || 'some default value'
Create the production config var on Heroku by typing on a console:
heroku config:add FB_APP_ID=the_fb_app_id_to_use
Now ENV['FB_APP_ID'] is the_fb_app_id_to_use on production (Heroku), and 'some default value' in development and test.
The Heroku documentation linked above has some more detailed information on this strategy.
You can explore the idea of a content filter, based on a 'smudge' script executed automatically on checkout (a minimal setup is sketched below).
You would declare:
some (versioned) template files
some value files
a (versioned) smudge script able to recognize its execution environment and generate the necessary (non-versioned) final files from the value files or (for more sensitive information) from other sources external to the Git repo.
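A minimal sketch of the wiring, assuming a filter named appconfig and a template at config/settings.yml (both names are illustrative, and the smudge script itself is left as a placeholder that reads the checked-out content on stdin and writes the filled-in version to stdout):

# .gitattributes (versioned): apply the filter to the template file
config/settings.yml filter=appconfig

# one-time setup on each machine/deploy environment
git config filter.appconfig.smudge '/path/to/smudge-settings.sh'
git config filter.appconfig.clean 'cat'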