I'm trying to integrate Serverless into my CircleCI workflow.
I first tried adding the key and secret under AWS Permissions, but that did not work.
Then I added the key and secret as environment variables, and in my config file:
sudo npm install -g serverless
sls config credentials --provider aws --key $AWS_ACCESS_KEY_ID --secret $AWS_SECRET_ACCESS_KEY
sls deploy -v
But I see the same error:
Serverless Error ---------------------------------------
You are not currently logged in. Follow instructions in http://slss.io/run-in-cicd to setup env vars for authentication.
Has anyone had this issue? I could not find an answer or hint online. Thanks.
This likely only applies to those trying to use Serverless Enterprise with the monitoring & dashboards they have set up. @wintvelt's answer wouldn't work for me because if I deleted the org variable, it would likely break the connection needed for Enterprise. So, the steps for my CircleCI setup:
1. In CircleCI, create a Context for each environment with the AWS Key ID and Secret as environment variables. (Putting them in a Context is a nice-to-have; you could use other methods of having CircleCI inject environment variables into builds.)
2. In your Serverless Framework dashboard, create a new access key which you will use in CircleCI.
3. Create a new environment variable SERVERLESS_ACCESS_KEY with the value from step 2.
I got this idea from reading how Seed.run has users integrate with Serverless. For more info read this link: https://seed.run/docs/integrating-with-serverless-enterprise.
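For reference, a minimal sketch of what the CI deploy step might then run, assuming the Context injects SERVERLESS_ACCESS_KEY alongside the AWS variables (the stage name here is a placeholder):
sudo npm install -g serverless
# The framework picks up SERVERLESS_ACCESS_KEY from the environment, so no
# 'serverless login' or 'config credentials' step is needed in CI.
sls deploy --stage production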
Just checked: CircleCI has stopped supporting AWS Permissions as a configurable option in the settings page.
You need to set the credentials as environment variables for the projects. The credentials should be named exactly AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY.
That's all you need to do; there are no additional steps. I tried this on my project and it worked.
Your deployment step should simply be
sls deploy
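If you want to sanity-check that the variables actually reached the job before deploying, a small sketch (it only reports presence, never the values):
if [ -n "$AWS_ACCESS_KEY_ID" ] && [ -n "$AWS_SECRET_ACCESS_KEY" ]; then
  echo "AWS credentials present"
else
  echo "AWS credentials missing" >&2
  exit 1
fi
sls deploy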
As a follow-up to the previous answer: I had exactly the same error.
I took the solution from the chat under that answer. The fixes I applied:
In CircleCI project settings, under "AWS permissions" I added the AWS Access Key ID and Secret Access key
In CircleCI project settings, under "Environment variables", I also added the AWS Access Key ID and Secret Access key
From my serverless.yml file, I deleted the line with org variable
For me, 1. and 2. alone was not enough. I also had to remove the line from my yml file to make deployment via CircleCI work.
For those landing here with the same issue, hope this helps!
I've had an Amplify project dropped in my lap where the backend environment is deleted (or was lost when the project was moved to another account).
I haven't worked with Amplify before, so I'm not sure how "automatic" everything is.
I noticed that the project has a folder called 'amplify-backup' which contains a bunch of JSON and GraphQL config files, so I assumed I could use those somehow to restore the backend environment in AWS, but I can't seem to find any information on how to do so.
There's currently no backend environment in the AWS console and I don't really know which services the backend environment should contain.
Is it possible to restore the backend environment and all the services that the application needs, or do I need to figure out which services are needed?
If so, any pointers on how to find which services are used?
If the project files still exist (amplify directory), you may be able to re-create the project with the existing resources.
One idea could be to clone the git repository from a point when the amplify project files were intact and run amplify init.
OR
amplify-backup is generally generated automatically when running amplify commands. You could try renaming it to amplify and running amplify init.
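If you go the rename route, a minimal sketch of the flow (run from the project root; copy rather than move so amplify-backup itself stays untouched, since init/push can recreate or modify resources):
cp -r amplify-backup amplify    # restore the project files from the backup
amplify init                    # re-associate the project with an AWS account/environment
amplify push                    # re-create the backend resources the project defines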
See more here for re-creating an amplify project on another account: https://docs.amplify.aws/cli/migration/cli-migrate-aws-account/
I know this has been asked many times because of the complete mess Google has made with authentication, but I can't find an answer. I'm trying to create a CI pipeline that can use service account credentials from a file. I want to be able to run it locally or from a server. From what I've read, gcloud inexplicably ignores the GOOGLE_APPLICATION_CREDENTIALS env var, so I have to globally set my creds with the following, meaning I can kiss goodbye to any kind of parallelisation:
gcloud auth activate-service-account --key-file=$(GOOGLE_APPLICATION_CREDENTIALS)
Surely it must be possible to run multiple commands in parallel with different SA credentials?
Also, the above approach ignores the project ID specified in the key file, so gcloud tries to target the last project ID I personally set for myself.
Is there a solution to this ridiculousness? I'm looking for a non-interactive, non-destructive (i.e. won't trash my personal creds) way of calling gcloud in parallel with different service accounts and automatically using their project IDs. Is this possible?
Well it actually is possible with this:
CLOUDSDK_AUTH_CREDENTIAL_FILE_OVERRIDE=$(GOOGLE_CREDENTIALS_FILE) \
CLOUDSDK_CORE_PROJECT=$(GCP_PROJECT) \
gcloud run deploy --allow-unauthenticated $(CLOUD_RUN_CONFIG) --image $(GCR_DOCKER_IMAGE)
It's a shame the docs are so poor it's taken me forever to find this info. Why gcloud doesn't just use the same env vars as all the libraries will remain a mystery to everyone outside Google...
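For anyone else landing here, a sketch of how this enables parallel runs: each invocation carries its own credentials and project in its environment, so nothing global is touched (the key file paths and project IDs below are placeholders):
# Each command gets its own service account and project via the environment,
# so concurrent invocations don't interfere with each other.
CLOUDSDK_AUTH_CREDENTIAL_FILE_OVERRIDE=/secrets/sa-a.json \
CLOUDSDK_CORE_PROJECT=project-a \
gcloud run services list &

CLOUDSDK_AUTH_CREDENTIAL_FILE_OVERRIDE=/secrets/sa-b.json \
CLOUDSDK_CORE_PROJECT=project-b \
gcloud run services list &

wait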
I am new to AWS CodeCommit.
Following their instructions, I did the following:
made a new IAM user with AdministratorAccess
made a new CodeCommit repository
installed awscli and ran aws configure
Right after I finished those things, I could pull/push from CodeCommit.
However, it does not work from IntelliJ IDEA.
I did something like...
I pulled a project from GitLab
git remote rm origin
git remote add origin [code commit url]
git branch --set-upstream-to origin/master
Now when I type git [pull / push] origin master, I get this error message:
unable to access 'https://git-codecommit.ap-northeast-1.amazonaws.com/v1/repos/test17/': The requested URL returned error: 403
When I access this URL via a browser, it asks for an ID/password, but my IAM user credentials do not work.
What should I do? Is there any way to switch between GitLab and CodeCommit in IntelliJ?
Thanks.
IntelliJ does not use awscli. It uses the default system shell.
From the description, it looks like push/pull does not work for the command-line git in the native shell, so the issue is not IntelliJ-related.
Probably git is trying to use the wrong credentials saved by its credential.helper, and that is why it fails.
Check git config credential.helper to see if any helper is configured. If there is one, try disabling it or clearing the saved credentials.
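A quick sketch of that check, plus a way to bypass the helper for a single command to confirm it is the culprit:
# Show which credential helper(s) git will use, and which config file sets them.
git config --show-origin --get-all credential.helper
# An empty value resets the helper list for this one invocation only:
git -c credential.helper= pull origin master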
From the description it looks like you are trying to connect to a CodeCommit repository in IntelliJ using HTTPS. To do this you need to generate Git credentials (username/password) for your IAM user in the IAM console.
Detailed steps are documented in the AWS documentation: http://docs.aws.amazon.com/codecommit/latest/userguide/setting-up-gc.html
Once you have the username/password you can use those credentials to connect to your CodeCommit repository in IntelliJ.
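Once the Git credentials are generated, it may be worth testing them from a plain shell before configuring IntelliJ (repository URL taken from the question; git will prompt for the generated username/password):
git clone https://git-codecommit.ap-northeast-1.amazonaws.com/v1/repos/test17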
Tested on a Mac. Your mileage may vary!
I just ran into the same issue. macOS stores the Git UID and PW in the Keychain (see Keychain Access in your Applications > Utilities folder). I deleted all references to AWS CodeCommit from the keychain, which forced me to re-enter the UID & PW. This seems to have solved the problem.
As a side note: I think this happened because I revoked a prior Git credential on AWS and created a new one. I think the keychain was supplying the old UID/PW, which then failed during authentication.
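If you prefer the terminal to the Keychain Access app, a sketch of the same cleanup with the security CLI (host taken from the question; the -s value must match the server field of the stored entry):
# Remove the saved CodeCommit entry so git prompts for credentials again.
security delete-internet-password -s git-codecommit.ap-northeast-1.amazonaws.com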
First, you are going to want to create an IAM user with appropriate permissions and then create Git credentials for it. Then go to IntelliJ IDEA, choose to open a project from version control over HTTPS, and log in with the AWS Git credentials you created. Once you have logged in, you should be able to pull/push to the repo. If you are still having issues, check that the credentials you are using are active and that the IAM user they are attached to has the right permissions; if both look fine, I would recommend creating a ticket with AWS Support, as there may be something wrong with your account that AWS staff will need to fix.
As anyone who has ever had the misfortune of having to interact with the panoply of Google CLI binaries programmatically will have realised, authenticating with the likes of gcloud, gsutil, bq, etc. is far from intuitive or trivial, especially when you need to work across different projects.
I am running various cron jobs that interact with Google Cloud Storage and BigQuery for different projects. Since the cron jobs may overlap, renaming config files is clearly not an option, and nor would any sane person take that approach.
There must surely be some sort of method of passing a path to a service account's key pair file to these CLI binaries, but bq help yields nothing.
The Google documentation, while verbose, is largely useless, taking one on a tour of how OAuth2 works, etc, instead of explaining what must surely be a very common requirement, vis-a-vis, how to actually authenticate a service account without running commands that modify central config files.
Can any enlightened being tell me whether the engineers at Google decided to add a feature as simple as passing the path to a service account's key pair file to the likes of gsutil and bq? Or perhaps I could simply export some variable so they know which key pair file to use for authentication?
I realise these simplistic approaches may be an insult to the intelligence, but we aren't concerning ourselves with harnessing nuclear fusion, so we needn't even consider what Amazon got so right with their approach to authentication in comparison...
Configuration in the Cloud SDK is global for the user, but you can specify which aspects of that config to use on a per-command basis. To accomplish what you are trying to do, you can run:
gcloud auth activate-service-account foo@developer.gserviceaccount.com --key-file ...
gcloud auth activate-service-account bar@developer.gserviceaccount.com --key-file ...
At this point, both sets of credentials are in your global credentials store.
Now you can run:
gcloud --account foo@developer.gserviceaccount.com some-command
gcloud --account bar@developer.gserviceaccount.com some-command
in parallel, and each will use the given account without interfering.
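For example, a sketch (some-command is a placeholder, as above):
# Both invocations read the shared credential store but act as different
# accounts, so they can safely run concurrently.
gcloud --account foo@developer.gserviceaccount.com some-command &
gcloud --account bar@developer.gserviceaccount.com some-command &
wait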
A larger extension of this is 'configurations' which do the same thing, but for your entire set of config (including settings like account and project).
# Create first configuration
gcloud config configurations create myconfig
gcloud config configurations activate myconfig
gcloud config set account foo@developer.gserviceaccount.com
gcloud config set project foo
# Create second configuration
gcloud config configurations create anotherconfig
gcloud config configurations activate anotherconfig
gcloud config set account bar@developer.gserviceaccount.com
gcloud config set project bar
And you can say which configuration to use on a per command basis.
gcloud --configuration myconfig some-command
gcloud --configuration anotherconfig some-command
You can read more about configurations by running: gcloud topic configurations
All properties have corresponding environment variables that allow you to set that particular property for a single command invocation or for a terminal session. They take the form:
CLOUDSDK_<SECTION>_<PROPERTY>
for example: CLOUDSDK_CORE_ACCOUNT
You can see all the available config settings by running: gcloud help config
The equivalent of the --configuration flag is: CLOUDSDK_ACTIVE_CONFIG_NAME
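For example, a sketch (again with some-command as a placeholder):
# Override a single property for one invocation:
CLOUDSDK_CORE_ACCOUNT=foo@developer.gserviceaccount.com gcloud some-command
# Or select a whole named configuration for one invocation:
CLOUDSDK_ACTIVE_CONFIG_NAME=myconfig gcloud some-command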
If you really want complete isolation, you can also change the Cloud SDK's config directory by setting CLOUDSDK_CONFIG to a directory of your choosing. Note that if you do this, the config is completely separate including the credential store, all configurations, logs, etc.
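A sketch of that isolation, e.g. for one of several overlapping cron jobs (the directory and key file path are placeholders):
# Everything (credential store, configurations, logs) lives under this dir.
export CLOUDSDK_CONFIG=/var/lib/gcloud-job-a
gcloud auth activate-service-account --key-file /secrets/job-a.json
gcloud config set project project-a
gcloud some-command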
I'd like to launch a Windows 2008 (64bits, base install) instance programmatically, kinda like clicking on the Launch Instance link & following the "Create a New Instance" wizard.
I read about the ec2-run-instances command, and I tried running it over PuTTY using this syntax:
/opt/aws/bin/ec2-run-instances ami_id ami-e5784391 -n 1
--availability-zone eu-west-1a --region eu-west-1 --instance-type m1.small --private-key /full/path/MyPrivateKey.pem --group MyRDP
but it always complains that:
Required option '-C, --cert CERT' missing (-h for usage)
According to the documentation, this option isn't required!!
Can someone tell me what's wrong anyway? I'm just trying to programmatically launch a fresh Windows install, run some tests in the cloud, and shut it down after that.
The error message is correct (just try adding --cert ;) - to what documentation are you referring here?
The requirement is clearly outlined in the Microsoft Windows Guide for Amazon EC2, specifically in Task 4: Set the EC2_PRIVATE_KEY and EC2_CERT Environment Variables:
The command line tools need access to an X.509 certificate and a corresponding private key that are associated with your account. [...] You can either specify your credentials with the --private-key and --cert parameters every time you issue a command, or you can create environment variables that point to the credential files on your local system. If the environment variables are properly configured, you can omit the parameters when you issue a command.
[emphasis mine]
Maybe the option of using environment variables has been misleading somehow somewhere?
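So, assuming the tools are installed as in the question, a sketch of the environment-variable route (the credential file paths are placeholders), after which the original command works without --cert or --private-key:
export EC2_PRIVATE_KEY=/path/to/pk-XXXXXXXX.pem
export EC2_CERT=/path/to/cert-XXXXXXXX.pem
/opt/aws/bin/ec2-run-instances ami-e5784391 -n 1 --region eu-west-1 \
  --availability-zone eu-west-1a --instance-type m1.small --group MyRDP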
Alternative
Please note that you can ease and speed up working with EC2 considerably by using alternate scripting environments covering the same ground, in particular the excellent boto, which is a Python package that provides interfaces to Amazon Web Services.
Boto uses the nowadays more common authentication scheme based on access keys only rather than X.509 certificates (e.g. an AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY pair), which furthermore can (and should) be managed via AWS Identity and Access Management (IAM) to avoid the risk of exposing your main AWS account credentials in the first place. See my answer to How to download an EC2 X.509 certificate with an IAM User account? for more details on this.
Good luck!