Missing environment variable: S3_ACCESS_KEY_ID
is the error I am getting even after assigning the variable. I used the aws configure command, in which I entered the credentials, but when listing I still get this error. What should I do?
Command line:
$ export S3_ACCESS_KEY_ID=************
$ s3 list
Missing environment variable: S3_SECRET_ACCESS_KEY
The immediate problem is that the environment variable is wrong. You set:
export AWS_ACCESS_KEY_ID=
but the tool is looking for S3_-prefixed variables, and even after exporting S3_ACCESS_KEY_ID it complains about the next one it cannot find:
$ s3 list
Missing environment variable: S3_SECRET_ACCESS_KEY
What is possibly more interesting, however, is that you did use aws configure in the first place, although this is not shown in the recent edits, only in the images in the original post. We would expect aws configure to set up the environment correctly, and we would also expect the variables to be named AWS_*, not S3_*. So why is s3 list looking for S3_*?
I can't find any reference to an s3 list command. Are you sure this is the correct command? Do you actually want to use something like aws s3 ls?
If you are new to AWS, read the AWS CLI getting started documentation.
The recommended way to use the AWS CLI is to run aws configure to set up your credentials and environment. If you insist on setting the environment variables manually, you need three exports (the keys shown are the examples from the AWS CLI documentation):
$ export AWS_ACCESS_KEY_ID=AKIAIOSFODNN7EXAMPLE
$ export AWS_SECRET_ACCESS_KEY=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
$ export AWS_DEFAULT_REGION=us-west-2
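A quick way to confirm the credentials are actually being picked up (assuming the official AWS CLI is installed) is:
$ aws sts get-caller-identity
$ aws s3 ls
The first command prints the account and identity the CLI is authenticating as; the second lists the buckets that identity can see.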
I'm having a problem with AWS CloudFormation… I guess, as I'm new, I'm missing something.
I installed the SAM CLI on my Mac and it generated a .yaml file.
Then I go to CloudFormation, try to upload this file to a stack, and during creation it gives me this error:
Transform AWS::Serverless-2016-10-31 failed with: Invalid Serverless
Application Specification document. Number of errors found: 1. Resource
with id [HelloWorldFunction] is invalid. 'CodeUri' is not a valid S3 Uri
of the form 's3://bucket/key' with optional versionId query parameter..
Rollback requested by user.
What should I do here?
I'm trying to create a Lambda function triggered by an S3 file upload, and I need a .yaml file for CloudFormation that describes all the services and triggers… I found it extremely difficult to find a template that works…
How should I try to fix this, when even CLI-generated yaml files don't work?
Shouldn't CloudFormation create the Lambda function when no such function exists yet?
Thanks a lot.
The templates that AWS SAM uses are more flexible than those that CloudFormation can interpret directly. The problem you're running into here is that AWS SAM can handle a relative path on your file system as the CodeUri for your Lambda function; CloudFormation, however, expects an S3 URI so that it can retrieve the function code and deploy it to the Lambda function.
You should have a look at the sam package command. It resolves all SAM-specific things (e.g. it uploads the code to S3 and replaces the CodeUri in the template) and creates a "packaged template" file that you can then upload to CloudFormation.
You can also use the sam deploy command, which packages the template and deploys it to CloudFormation itself.
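As a rough sketch of that workflow (the bucket and stack names below are placeholders, not values from the question):
$ sam package --template-file template.yaml --s3-bucket <your-artifact-bucket> --output-template-file packaged.yaml
$ sam deploy --template-file packaged.yaml --stack-name <your-stack-name> --capabilities CAPABILITY_IAM
sam package uploads the local CodeUri to the given bucket and writes packaged.yaml with the rewritten S3 URI, which is the file you would hand to CloudFormation.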
I have tried to find out in what order the statements of the serverless file are evaluated (maybe it is more common to say that 'variables are resolved').
I haven't been able to find any information about this and to some extent it makes working with serverless feel like a guessing game for me.
As an example, the latest surprise I got was when I tried to run:
$ sls deploy
serverless.yaml:

useDotenv: true
provider:
  stage: ${env:stage}
  region: ${env:region}

.env:

region=us-west-1
stage=dev
I got an error message stating that env is not available at the time when stage is resolved. This was surprising to me since I have been able to use env to resolve other variables in the provider section, and there is nothing in the syntax to indicate that stage is resolved earlier.
In what order is the serverless file evaluated?
In effect you've created a circular dependency. Stage is special because it is needed to decide which .env file to load: ${env:stage} would have to be resolved from the stage-specific dotenv file, but Serverless needs to know the stage first in order to pick that file. This is why stage is evaluated before the rest of the template.
Stage (and region, actually) are both optional CLI parameters. In your serverless.yml file what you're setting is a default, with the CLI parameter overriding it where different.
Example:
provider:
  stage: staging
  region: ca-central-1
Running serverless deploy --stage prod --region us-west-2 will result in prod and us-west-2 being used for stage and region (respectively) for that deployment.
I'd suggest removing any variable interpolation for stage and instead setting a default, and overriding via CLI when needed.
Then dotenv will know which environment file to use and can resolve the rest of the template, as in the sketch below.
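A minimal sketch of what that looks like (DB_HOST here is just a hypothetical variable, used to show that ${env:...} still works elsewhere once stage is a plain default):

useDotenv: true
provider:
  stage: dev               # default; override with --stage prod
  region: ca-central-1     # default; override with --region us-west-2
  environment:
    DB_HOST: ${env:DB_HOST}  # resolved from the dotenv file once the stage is known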
Updating my question: my test scenario is to get the file sizes in a particular S3 bucket. For this I have installed the robotframework-aws library, but I am not sure which keyword to use to get the file sizes. Here is the code I have written so far:
Run AWS CLI Command to capture file size of S3 MRL Hub Source
    Create Session With Keys    region: us-east-1    access_key: xxxx    secret_key: xxxx    aws_session_token: str
    Read File From S3    bucket_name: com-abc-def-ghi    key: name/of/the/file/i/am/looking/for.parquet
With this code I am getting the following error:
InvalidRegionError: Provided region_name 'region: us-east-1' doesn't match a supported format.
You can use the Run keyword, which is part of the OperatingSystem library and is already included by default when you install Robot Framework.
With it you can make Robot Framework run any shell command you wish. For example:
*** Settings ***
Library    OperatingSystem

*** Test Cases ***
Test run command
    ${output}=    Run    aws --version
    log    ${output}
The problem comes when you want to use the interactive prompts of aws configure: you can't expect a Robot Framework test case to ask for your input. In this case you need to provide all aws configure options beforehand, which means preparing a profile file for your test case; you can then chain further commands, like:
*** Test Cases ***
Test run command
    ${output}=    Run    aws configure --profile <profilename> && set https_proxy http://webproxy.xyz.com:8080
    log    ${output}
Or, better, use the profile file directly with s3, like aws s3 ls --profile <profilename>.
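For the original goal of getting the file size, one non-interactive option along the same lines (the bucket and key are taken from the question; <profilename> is a placeholder) is to call s3api through the same Run keyword:
    ${size}=    Run    aws s3api head-object --bucket com-abc-def-ghi --key name/of/the/file/i/am/looking/for.parquet --profile <profilename> --query ContentLength
    log    ${size}
head-object returns the object's metadata, and --query ContentLength extracts just the size in bytes.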
Bear in mind that the best way to do this is to use some kind of external library, like AWSLibrary, or to create your own custom library with the boto3 Python library.
In .ebextensions, I have a file (environmentvariables.config) that looks like this:
commands:
  01_get_env_vars:
    command: aws s3 cp s3://elasticbeanstalk-us-east-1-466049672149/ENVVAR.sh /home/ec2-user
  02_export_vars:
    command: source /home/ec2-user/ENVVAR.sh
The shell script is a series of simple export key=value commands.
The file is correctly placed on the server, but it seems like it isn't being called with source. If I manually log into the app and use source /home/ec2-user/ENVVAR.sh, it sets up all my environment variables, so I know the script works.
Is it possible to set up environment variables this way? I'd like to store my configuration in S3 and automate the setup so I don't need to commit any variables to source control (option_settings) or manually enter them into the console.
Answer:
Actively load the variables from S3 in the Rails app to bypass the environment variable issue altogether. Variables exported by sourcing the script in a commands block only exist in the shell that runs that command and do not persist into the application's environment.
Instead, put a JSON file in S3, download it to the server, and read the values into ENV from environment.rb.
I am trying to get the bq CLI to work with multiple service accounts for different projects without having to re-authenticate using gcloud auth login or bq init.
An example of what I want to do, and am able to do using gsutil:
I have used gsutil with a .boto configuration file containing:
[Credentials]
gs_service_key_file = /path/to/key_file.json
[Boto]
https_validate_certificates = True
[GSUtil]
content_language = en
default_api_version = 2
default_project_id = my-project-id
[OAuth2]
on a GCE instance to run an arbitrary gsutil command as a service account. The service account does not need to be unique or globally defined on the GCE instance: as long as a service account is set up in my-project-id and a private key has been created, the private key file referenced in the .boto config takes care of authentication. For example, if I run
BOTO_CONFIG=/path/to/my/.boto_project_1
export BOTO_CONFIG
gsutil -m cp gs://mybucket/myobject .
I can copy from any project that I have a service account set up with, and for which I have the private key file defined in .boto_project_1. In this way, I can run a similar gsutil command for project_2 just by referencing the .boto_project_2 config file. No manual authentication needed.
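For example, switching projects is just a matter of pointing BOTO_CONFIG at a different config file (the paths and bucket names here are placeholders):
$ BOTO_CONFIG=/path/to/my/.boto_project_1 gsutil -m cp gs://project1-bucket/myobject .
$ BOTO_CONFIG=/path/to/my/.boto_project_2 gsutil -m cp gs://project2-bucket/myobject .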
The case with bq CLI
In the case of the BigQuery command-line tool, I want to reference a config file or pass a config option such as a key file when running a bq load command, i.e. upload the same .csv file that is in GCS for various projects. I want to automate this without having to bq init each time.
I have read here that you can configure a .bigqueryrc file and pass in your credential and key files as options; however, the answer is from 2012, references outdated bq credential files, and the openssl/pyopenssl setup it mentions throws errors.
My question
Provide two example bq load commands, with any necessary options or .bigqueryrc files, that correctly load a .csv file from GCS into BigQuery for two distinct projects without needing to bq init or authenticate manually between the two commands. Assume the .csv file is already in each project's GCS bucket.
Simply use gcloud auth activate-service-account and set the project explicitly with bq's global --project_id flag.
https://cloud.google.com/sdk/gcloud/reference/auth/activate-service-account
https://cloud.google.com/sdk/gcloud/reference/
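A sketch of what the two loads could look like (the key file paths, project IDs, datasets, tables and buckets below are all placeholders, and the target tables are assumed to exist already):
$ gcloud auth activate-service-account --key-file=/path/to/project1-key.json
$ bq --project_id=project-1 load --source_format=CSV mydataset.mytable gs://project1-bucket/data.csv
$ gcloud auth activate-service-account --key-file=/path/to/project2-key.json
$ bq --project_id=project-2 load --source_format=CSV mydataset.mytable gs://project2-bucket/data.csv
Activating the other service account between the two loads switches the active credentials without any interactive gcloud auth login or bq init.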