I've had an Amplify project dropped in my lap where the backend environment has been deleted (or was lost when the project was moved to another account).
I haven't worked with Amplify before, so I'm not sure how "automatic" everything is.
I noticed that the project has a folder called 'amplify-backup' which contains a bunch of JSON and GraphQL config files, so I assumed that I could use those somehow to restore the backend environment in AWS, but I can't find any information on how to do so.
There's currently no backend environment in the AWS console and I don't really know which services the backend environment should contain.
Is it possible to restore the backend environment and all the services that the application needs, or do I need to figure out which services are required myself?
If so, any pointers on how to find out which services are used?
If the project files still exist (amplify directory), you may be able to re-create the project with the existing resources.
One idea could be to clone the git repository from when the Amplify project files were intact and run amplify init.
OR
The amplify-backup folder is generally generated automatically when running Amplify CLI commands. You could try renaming it to amplify and then running amplify init.
See more here for re-creating an amplify project on another account: https://docs.amplify.aws/cli/migration/cli-migrate-aws-account/
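A minimal sketch of the rename approach, assuming there is no existing amplify directory and you have AWS credentials configured for the target account (whether the push can re-create everything depends on what is actually defined under amplify/backend):

mv amplify-backup amplify
amplify init     # follow the prompts to create a new environment in the new account
amplify push     # attempt to re-create the backend resources defined locally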
I'm trying to figure out a way to have one repository containing my front-end code pulled into another repository that houses my application code. Both of these repos are hosted on Azure DevOps; however, they are under different accounts (if that matters).
I have followed the directions I found on Microsoft's site for setting up the .npmrc file on my development machine, but I'm getting lost at how to structure my URL on the application site to pull in my front-end code package.
Everything I've tried has given me an authentication error, so I must be missing something.
From your description, you need to pull the front-end code into your application code repository, and the two repos are hosted on Azure DevOps under different accounts.
We recommend importing the front-end code repository into your application code project, and then following the doc: Check out multiple repositories in your pipeline.
Here are the detailed steps:
Go to your front-end code repo and use the admin account to create a new PAT with full access.
Go to the application code project, then use 'Import repository' to import the front-end repo.
Create a new pipeline and check out both repos.
For the authentication error, you can try the npm Authenticate task in your pipeline, as in the sketch below.
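A rough sketch of the pipeline pieces involved; the project and repo names (FrontendProject/frontend-repo) are placeholders for your own:

resources:
  repositories:
    - repository: frontend          # alias used in the checkout step
      type: git
      name: FrontendProject/frontend-repo

steps:
  - checkout: self                  # the application code repo
  - checkout: frontend              # the imported front-end repo
  - task: npmAuthenticate@0         # injects credentials for the feed referenced in .npmrc
    inputs:
      workingFile: .npmrc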
I'm currently developing a simple web app with separate frontend (Vue) and backend (Quarkus REST API) projects. For now, I've set up an MVP where the frontend displays some simple data fetched from the backend. To get a working MVP I need to set up CORS support. First, though, I want to explain my setup:
Setup
I start the development environment of my frontend with npm run serve and of my backend with ./mvnw quarkus:dev. The frontend runs on localhost:8081 and the backend on localhost:8080.
Heroku also allows you to run your apps locally with the command heroku local web. The frontend then runs on 0.0.0.0:5001 and the backend on 0.0.0.0:5000.
To achieve this setup I created two .env files in my frontend which point to my backend API. If I work in development mode, the file .env.development is loaded:
VUE_APP_ROOT_API=http://localhost:8080
and if I run heroku local web, the file .env.local with
VUE_APP_ROOT_API=0.0.0.0:5000
is loaded.
In my backend I've set
quarkus.http.cors=true
in my application.properties.
Now I want to deploy these two projects to Heroku and use them in production. Therefore I set up two Heroku apps and set a config variable in my frontend project with the following value:
VUE_APP_ROOT_API:https://mybackend.herokuapp.com
Calls from my frontend are successfully working!
Question
For the next step, I want to restrict it further and only allow my frontend to call my API. I know I can set something like
quarkus.http.cors.origins=myfrontend.herokuapp.com
However, I don't know how I could do this in Quarkus with different environments (development, local and production). I've found this link, but I don't know how to configure Heroku and my backend app correctly. Do I need to set up different profiles that are applied in my different environments? Or is there another solution? Do I need Heroku's config variables?
Thanks for the help so far!
quarkus.http.cors.origins is overridable at runtime so you have several possibilities.
You could use a profile and have everything set up in your application.properties with %prod.quarkus.http.cors.origins=.... Then you either use -Dquarkus.profile=prod when launching your application or you use QUARKUS_PROFILE=prod as an environment variable.
Another option is to use an environment variable for quarkus.http.cors.origins. That would be QUARKUS_HTTP_CORS_ORIGINS=....
My recommendation would be to use a profile. That way you can safely check that all your configuration is consistent at a glance.
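As a sketch of the profile approach, the properties below use the app names from the question as placeholders and assume a custom "local" profile is activated (e.g. via QUARKUS_PROFILE=local) for the heroku local run:

# application.properties
quarkus.http.cors=true
%dev.quarkus.http.cors.origins=http://localhost:8081
%local.quarkus.http.cors.origins=http://0.0.0.0:5001
%prod.quarkus.http.cors.origins=https://myfrontend.herokuapp.com

On Heroku, the profile or the origins themselves can then be set as config variables on the backend app, for example:

heroku config:set QUARKUS_PROFILE=prod -a mybackend
# or override the allowed origins directly:
heroku config:set QUARKUS_HTTP_CORS_ORIGINS=https://myfrontend.herokuapp.com -a mybackend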
I'm trying to integrate serverless to my circleci workflow.
I first tried adding both the key and the secret under AWS Permissions, but that did not work.
Then I added the key and secret to Environment Variables, and in my config file:
sudo npm install -g serverless
sls config credentials --provider aws --key $AWS_ACCESS_KEY_ID --secret $AWS_SECRET_ACCESS_KEY
sls deploy -v
But I see the same error:
Serverless Error ---------------------------------------
You are not currently logged in. Follow instructions in http://slss.io/run-in-cicd to setup env vars for authentication.
Anyone had this issue? I could not find an answer or hint online. Thanks.
This likely only applies to those trying to use Serverless Enterprise with the monitoring & dashboards it provides. #wintvelt's answer wouldn't work for me because if I deleted the org variable, it would likely break the connection needed for Enterprise. So, steps for my CircleCI setup:
1. In CircleCI, create a Context for each environment with the AWS Key ID and Secret as environment variables (putting them in a context is a nice-to-have; you could use other methods of making CircleCI inject environment variables into builds).
2. In your Serverless Framework dashboard, create a new access key which you will use in CircleCI.
3. Create a new environment variable SERVERLESS_ACCESS_KEY (in the CircleCI context or project settings) with the value from step 2.
I got this idea from reading how Seed.run has users integrate with Serverless. For more info read this link: https://seed.run/docs/integrating-with-serverless-enterprise.
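A rough sketch of what the CircleCI side could look like; the image tag, context name and stage are assumptions, and the AWS keys plus SERVERLESS_ACCESS_KEY are expected to come from the context described above (serverless is assumed to be a dev dependency of the project):

version: 2.1
jobs:
  deploy:
    docker:
      - image: cimg/node:18.17
    steps:
      - checkout
      - run: npm ci
      - run: npx serverless deploy --stage production
workflows:
  deploy:
    jobs:
      - deploy:
          context: aws-production   # supplies AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, SERVERLESS_ACCESS_KEY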
I just checked: CircleCI stopped supporting AWS Permissions as a configurable option on the settings page.
You need to set the credentials as environment variables for the projects. The credentials should be named exactly AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY.
That's all you need to do; you don't have to do any additional steps. I tried this on my project and it worked.
Your deployment step should simply be
sls deploy
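For reference, the CLI reads those variables straight from the environment, so the local equivalent would be something like this (values are placeholders; in CircleCI they come from the project settings):

export AWS_ACCESS_KEY_ID=<your key id>
export AWS_SECRET_ACCESS_KEY=<your secret key>
sls deploy    # no `sls config credentials` step needed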
As a follow-up to the previous answer: I had exactly the same error.
I took the solution from the chat as a starting point. These were the fixes I applied:
1. In CircleCI project settings, under "AWS Permissions", I added the AWS Access Key ID and Secret Access Key.
2. In CircleCI project settings, under "Environment Variables", I also added the AWS Access Key ID and Secret Access Key.
3. From my serverless.yml file, I deleted the line with the org variable.
For me, steps 1 and 2 alone were not enough. I also had to remove the line from my yml file to make deployment via CircleCI work.
For those landing here with the same issue, hope this helps!
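As a sketch of step 3, this is roughly what the change in serverless.yml looks like (service and org names are placeholders):

service: my-service
# org: my-org          # <- the line I removed
provider:
  name: aws
  runtime: nodejs18.x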
I'm familiar with Terraform and its terraform.tfstate file where it keeps track of which local resource identifiers map to which remote resources. I've noticed that there is a .serverless directory on my machine which seems to contain files such as CloudFormation templates and ZIP files containing Lambda code.
Suppose I create and deploy a project from my laptop, and Serverless spins up fooxyz.cloudfront.net which points to a Lambda function arn:aws:lambda:us-east-1:123456789012:function:handleRequest456. If I naively try to run Serverless again from another machine (or if I git clean my working directory), it'll spin up a new CloudFront endpoint since it doesn't know that fooxyz.cloudfront.net already represents the same application. I'm looking to back up the state it keeps internally, so that it modifies an existing resource rather than creates a new one. (The equivalent in Terraform would be to back up the terraform.tfstate file.)
If I wished to back up or restore a Serverless deployment state, which files would I back up? In the case of AWS, it seems like I should be backing up the CloudFormation templates; I don't want to back up the Lambda code since it's directly generated from the source. However, I'm likely going to use more than just AWS in the future, and so don't want to "special-case" the CloudFormation templates if at all possible.
How can I back up only the files I cannot regenerate?
I think what you are asking is: "If I or a colleague checks out the serverless code from git on a different machine, will we still be able to deploy and update the same Lambda functions and the same API Gateway endpoints?"
And the answer to that is yes! Serverless keeps track of all of that for you in its deployment files. Unless you run serverless remove, no operation will create a new Lambda or API endpoint.
My team and I use this method: we commit all code to a git repo, one of us checks it out and deploys a function (or the entire thing), and it updates the existing set of functions properly. If you use an environment file, that's really all you need to worry about, and I recommend keeping it out of git entirely.
For AWS, the Serverless Framework keeps track of your deployment via CloudFormation (CF) parameters/identifiers which are specific to an account/region. The CF stack templates are uploaded to an (auto-generated) S3 bucket, so they're already backed up for you.
So all you really need is the original deployment code in a git repo and access to your keys. Everything else is already backed up for you.
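If you want to confirm what an existing checkout maps to, the framework and the AWS CLI can show you the stack it manages; the stage, region and stack name below are placeholders (by default the stack is named <service>-<stage>):

sls info --stage prod --region us-east-1
# lists the service, functions and endpoints for that stage

aws cloudformation describe-stacks --stack-name my-service-prod
# shows the CloudFormation stack that Serverless updates on each deploy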
I've got an AppVeyor project set up and working awesomely. Now I want to upload artifacts to S3 for easy hosting. This seems fairly easy as outlined in the documentation. My question is, where do I put the secret with write permission? I don't want to push it to my public repo for obvious reasons. On Travis I could put it in an environment variable that was never logged. How would I go about this in AppVeyor?
I assume you need to store this in YAML. You can use secure variables. Or you can simply put your secrets in clear text into the S3 deployment configuration in the UI, then save and press Export YAML, and you will get a YAML section with the secrets encrypted.
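For illustration, the encrypted value ends up in appveyor.yml roughly like this; the bucket name and the encrypted string are placeholders, and the real encrypted values come from AppVeyor's encryption tool or the exported YAML:

deploy:
  - provider: S3
    access_key_id: AKIAEXAMPLEKEYID
    secret_access_key:
      secure: k9Xy...encrypted-by-appveyor...==
    bucket: my-artifact-bucket
    region: us-east-1
    set_public: true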