Dynamic or Common bitbucket-pipeline.yml file - amazon-s3

I am trying to set up automated deployment through the Bitbucket pipeline, but have still not succeeded; it might be that my business requirement is not fulfilled by the Bitbucket pipeline.
Current setup:
The dev team pushes the code to the default branch from their local machines, and the team lead reviews their code and updates it on the UAT and production servers manually by running these commands directly on the server CLI:
#hg branch
#hg pull
#hg update
Automated deployment we want:
We have 3 environments: DEV, UAT/Staging and production.
On the basis of the environments I have created
release branches: DEV-Release, UAT-Release and PROD-Release respectively.
The dev team pushes the code directly to the default branch. The dev lead will check the changes and then create a pull request from default to the UAT-Release branch, and after a successful deployment on the UAT server will again create a pull request from default to the production branch. The pipeline should be executed on the pull request and then start copying bundle.zip to AWS S3 and then to the AWS EC2 instance.
Issues:
The issue I am facing is that bitbucket-pipeline.yml is not the same on all release branches because the branch names differ; because of that, when we create a pull request for any release branch we get a conflict on that file.
Is there any way I can use the same bitbucket-pipeline.yml file for all the branches, so that deployment happens on the particular one for which the pull request is created?
Can we make that file dynamic for all branches with environment variables?
If the Bitbucket pipeline cannot fulfill my business requirement, then what is the other solution?
If you think my business requirement is not good or justifiable, just let me know which step I have to change to achieve the final result of automated deployments.
Flow:
Developer machine --push--> Bitbucket default branch ---> lead reviews the code, then pull request for any branch (UAT, PROD) ---> pipeline is executed and pushes the code to the S3 bucket ---> AWS CodeDeploy ---> EC2 application server.
Waiting for a prompt response.

Related

Undeploy API from Apigee X Environment of type "archive"

Does anyone have an idea how to "undeploy" an API proxy from an "archive" type Apigee-x environment? It seems like it can't be done from the Apigee UI, it throws an error:
"This operation is not supported. The Environment DeploymentType is ARCHIVE. The required Environment DeploymentType is PROXY".
The environment type can't be changed. The available CLI commands are "delete", "deploy", "describe", "list", "update" (no "undeploy" command found), "delete" doesn't work as it can't delete an active deployment. The final goal is to be able to delete the environment, which requires to remove/undeploy all API proxies from it first.
I found a solution. The "undeploy" feature I was looking for is not included in the current Apigee-x release. On the Apigee community, Google staff stated that they are looking into implementing it at some point. Until then there is a workaround, where one can deploy an archive with no deployments defined to the environment. Once this is done the Proxy is "undeployed" and the environment could be deleted. Here is the step-by-step process of doing it.

How to overwrite the api proxy deployment using apigeetool

I am using the below command in jenkins to deploy the api proxies to apigee edge.
apigeetool deployproxy -u abc -o nonprod -e dev -n poc-jenkins1 -p xyz
But I am getting the below error.
Error: Path /poc-deployment-automation conflicts with existing deployment path for revision 1 of the APIProxy poc-deploy-automation in organization nonprod, environment dev
Here is my requirement; please help me with what command to use.
If the API doesn't exist in the target environment, create the API in the new environment with version 1.
If the API already exists in the target environment, create the API in the new environment with a new version (previous version + 1).
So what command should we use to fix the above error, and what should we use to do the above 2 tasks?
Help appreciated.
The apigeetool deployproxy command supports your requirements by default. It deploys revision 1 if there is no proxy with that name, and increases the revision if it already exists.
However, based on the error you mentioned, it seems that you have a path conflict between two proxies. You are trying to deploy a proxy to a /poc-deployment-automation basepath, but there is another proxy called poc-deploy-automation which is listening on the same basepath. That is not possible, even if the proxy name is different, because the basepath is what Apigee uses to redirect traffic to your proxy.
Check the XML file at the root of your proxy and change the basepath attribute.
Also, the basepath of an API proxy can be anything, but the same basepath cannot be used by two proxies at the same time; only one can be deployed at a time. The revision numbers are irrelevant in this situation.

Release pipeline conflict with integration runtime

This question relates to how to propagate a data factory through CI (in VSTS) if there is a self hosted Integration Runtime defined in the Data Factory.
I have a 3 environments set up - Dev / UAT / Prod each with their own data factory.
The Dev hosts the master collaboration branch. I am using VSTS to retrieve the artifacts from the adf_publish branch and deploying the template to UAT (prod will be done later). I followed much of what is in this guide here.
When deploying to blank UAT with a self-hosted integration runtime (IR), the IR that is deployed in UAT is a copy of the shared IR from dev (not a linked type) and this causes an error since the credentials used by the IR will not be correct. I expect this since we are really just deploying an exact copy of the Resource Group template with just the factory name overridden however the IR will not work without it being re-credentialled with the self hosted IR VMs.
If I pre-register a linked IR with the UAT environment (linked to the dev IRs), then the deployment fails with a conflict because an IR in the resource group template has the same name as the one I just created in UAT. If it has a different name, there is no conflict, but the linked services will be pointing to the template IR and not the one I created for UAT.
The docs have a note that says the IR runtime should be the same across all the platforms but I do not think this can be true - one of them (presumably the source/dev) must be a shared type and the others linked and authorized.
One option I could see (untested) is to have each environment's IR reference be a separate connection to an actual IR, but then there needs to be some way of overriding the linked services to point to the current environment's IR reference (by template parameter override?). In this scenario, we need to block the template's IR from being deployed as it won't be needed and won't work.
Has anyone had success in getting CI working in this situation? My sense is the doc was written with the globally shared IR. Either that or I need to better understand the aim of Auto Integration setting in the linked services definition.
Many thanks.
Mark.
Update
I think there are a couple of bugs in the service so not expecting an answer. I'll post updates here if I see resolution from the bug report I have posted here for the dev group.
In a nutshell, this only affects you if
you have a self hosted integration runtime (IR), and
you are trying to deploy a template to a new data factory from an existing data factory (as you would in Dev->UAT->Prod)
you have a datalake (ADL) linked service defined and using the self hosted IR.
If you have a self hosted IR in the template, the newly deployed copy will not be registered with any server (either linked or unique to the new ADF) as the template only records an IR, it does not instantiate one.
While this can be fixed in post deployment config or scripting, what it can't fix is the dependency in ADL. This is because the ADL linked service wants to encrypt the service principal with the IR....but the IR does not exist at the time of template deployment (i.e. is not configured on a server and not active).
It is no better if you select Managed service identity as the auth on the ADL linked service instead of service principal, then the template fails to deploy because there are no credentials to encrypt and it looks like the resource is expecting to encrypt something.
The workaround right now is to use an Azure hosted IR for datalake connections. Unfortunately for us this causes a security problem because shared IRs cannot be whitelisted in our ADL Gen 1.
I'll keep you posted.

ERROR: The overall deployment failed because too many individual instances failed deployment

I'm trying to deploy using CircleCI -> S3 -> CodeDeploy -> EC2.
I was able to upload deploy image onto S3 from CircleCI, but unable to deploy S3 to EC2 instance. Here's the error.
The overall deployment failed because too many individual instances
failed deployment, too few healthy instances are available for
deployment, or some instances in your deployment group are
experiencing problems. (Error code: HEALTH_CONSTRAINTS)
The error was provided from CodeDeploy. I can't figure out why and how.
I'd appreciate it if you could give some advice.
If you are running on Ubuntu there might be plenty of reasons; here is a checklist you can verify:
Check that the CodeDeploy agent is installed on your EC2 instance. Please refer to this document to install the CodeDeploy agent.
https://docs.aws.amazon.com/codedeploy/latest/userguide/codedeploy-agent-operations-install-ubuntu.html
$ sudo service codedeploy-agent status
If you are running Ubuntu release 20.x and you get this error
./install:22:in block in method_missing': undefined method path' for
#<IO:> (NoMethodError)
try running the install file via this script
sudo ./install auto > /tmp/logfile
Check that you have an EC2 instance CodeDeploy role -> create a CodeDeploy role and assign it to the instance, https://docs.aws.amazon.com/codedeploy/latest/userguide/getting-started-create-service-role.html.
If you assign the EC2 role after initiation, restart the server.
Check your appspec.yml file placement as per the top answer; try to avoid any long timeout in it.
Log into your instance and check your error log:
$ tail -f /var/log/aws/codedeploy-agent/codedeploy-agent.log
You should be able to figure out what caused the individual instances to fail by digging into the deployment instance details:
http://docs.aws.amazon.com/codedeploy/latest/userguide/how-to-view-instance-details.html
These should contain more detailed information about why your application was unable to be deployed.
This error is commonly due to problems in the configuration of the appSpec.yml or appSpec.json file (It depends on the format you are using).
"If you have any Hook I recommend that you remove them, check if it works, then you can add one by one (the Hooks) and so you can identify the error"
The appspec.yml file should be located at the root of your project:
│-- appspec.yml
│-- index.html
└-- scripts
    │-- install_dependencies
    │-- start_server
    └-- stop_server
In the scripts folder you will have to place the processes that you want to be executed according to the Hook
Here is an example of the appspec.yml file
version: 0.0
os: linux
files:
  - source: /index.html
    destination: /var/www/html/
hooks:
  BeforeInstall:
    - location: scripts/install_dependencies
      timeout: 300
      runas: root
    - location: scripts/start_server
      timeout: 300
      runas: root
  ApplicationStop:
    - location: scripts/stop_server
      timeout: 300
      runas: root
I hope I can help you 😃👻🕺🏾
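A pre-flight check for the layout described in the answer above, as an added example that is not part of the original answer or AWS's own code: it confirms appspec.yml sits at the revision root and that every file and hook script it names is packed in the bundle and stays inside it. The function names and the sample revision are constructed for illustration.

```ruby
require 'yaml'
require 'tmpdir'

# Returns a list of problems found in an unpacked revision directory.
def check_revision(root)
  root = File.expand_path(root)
  spec_path = File.join(root, 'appspec.yml')
  return ['appspec.yml is missing from the revision root'] unless File.file?(spec_path)
  spec = YAML.safe_load(File.read(spec_path)) || {}
  problems = []
  Array(spec['files']).each do |f|
    src = f['source'].to_s
    problems << "files: #{src} is not in the bundle" unless File.exist?(File.join(root, src))
  end
  (spec['hooks'] || {}).each do |event, scripts|
    Array(scripts).each do |hook|
      loc = hook['location'].to_s
      path = File.expand_path(loc, root)
      if !path.start_with?(root + File::SEPARATOR)
        problems << "#{event}: #{loc} points outside the bundle"
      elsif !File.file?(path)
        problems << "#{event}: #{loc} is not in the bundle"
      end
    end
  end
  problems
end

# Constructed sample: a trimmed copy of the appspec.yml above, packed without
# scripts/start_server so the check has something to report.
SAMPLE_APPSPEC = <<~YAML
  version: 0.0
  os: linux
  files:
    - source: /index.html
      destination: /var/www/html/
  hooks:
    BeforeInstall:
      - location: scripts/install_dependencies
      - location: scripts/start_server
YAML

def sample_report
  Dir.mktmpdir do |dir|
    Dir.mkdir(File.join(dir, 'scripts'))
    File.write(File.join(dir, 'appspec.yml'), SAMPLE_APPSPEC)
    File.write(File.join(dir, 'index.html'), '<h1>ok</h1>')
    File.write(File.join(dir, 'scripts', 'install_dependencies'), "#!/bin/sh\n")
    check_revision(dir)
  end
end
```

Run in the CI job before the bundle is zipped and uploaded to S3, a non-empty result can fail the build instead of surfacing later as HEALTH_CONSTRAINTS.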
Make sure the CodeDeploy Host Agent Service is running in your target EC2 instance.
The error you are facing is a generic error message thrown on the failure of any event, which could be BeforeBlockTraffic, BlockTraffic, ApplicationStop etc.
The first step in this case would be to check whether the CodeDeploy agent is running, if the first event, i.e. the BeforeBlockTraffic event, has failed.
As you can see in the screenshot below, the event failure message would tell you the exact error behind it.
From the failed deployments, I can see all lifecycle events were skipped. Instance i-0bcc36e73851297f2 is currently in Stopped state but I can see the IAM instance profile is missing. Your Amazon EC2 instances need permission to access the Amazon S3 buckets or GitHub repositories where the applications that will be deployed by AWS CodeDeploy are stored. To launch Amazon EC2 instances that are compatible with AWS CodeDeploy, you must create an additional IAM role, an instance profile. [1]
For such failures, you can always begin with a general troubleshooting checklist for a failed deployment [2] and then look for troubleshooting guides on deployment issues and instance issues [3].
[1] http://docs.aws.amazon.com/codedeploy/latest/userguide/how-to-create-iam-instance-profile.html
[2] http://docs.aws.amazon.com/codedeploy/latest/userguide/troubleshooting-general.html
[3] http://docs.aws.amazon.com/codedeploy/latest/userguide/troubleshooting.html
Check the status of the Code Deploy Agent. In my case, the agent wasn't up.
Please check the role given to the ec2 machine(where the agent is running). It should have s3 access as well. This resolved my issue.
"The CodeDeploy agent did not find an AppSpec file within the unpacked revision directory at revision-relative path 'appspec.yml'"
Please place your appspec.yml file in your root folder to solve this error.
To access your after script and before script
The overall deployment failed because too many individual instances failed deployment, too few healthy instances are available for deployment, or some instances in your deployment group are experiencing problems.

Git - Push to Deploy and Removing Dev Config

So I'm writing a Facebook App using Rails, and hosted on Heroku.
On Heroku, you deploy by pushing your repo to the server.
When I do this, I'd like it to automatically change a few dev settings (facebook secret, for example) to production settings.
What's the best way to do this? Git hook?
There are a couple of common practices to handle this situation if you don't want to use Git hooks or other methods to modify the actual code upon deploy.
Environment Based Configuration
If you don't mind having the production values of your configuration settings in your repository, you can make them environment based. I sometimes use something like this:
# config/application.yml
default:
  facebook:
    app_id: app_id_for_dev_and_test
    app_secret: app_secret_for_dev_and_test
    api_key: api_key_for_dev_and_test
production:
  facebook:
    app_id: app_id_for_production
    app_secret: app_secret_for_production
    api_key: api_key_for_production

# config/initializers/app_config.rb
require 'yaml'
require 'erb'
# Render any ERB tags in the YAML file first, then parse the result
yaml_data = YAML::load(ERB.new(IO.read(File.join(Rails.root, 'config', 'application.yml'))).result)
config = yaml_data["default"]
begin
  # Overlay the section for the current environment, if there is one
  config.merge! yaml_data[Rails.env]
rescue TypeError
  # nothing specified for this environment; do nothing
end
APP_CONFIG = HashWithIndifferentAccess.new(config)
Now you can access the data via, for instance, APP_CONFIG[:facebook][:app_id], and the value will automatically be different based on which environment the application was booted in.
Environment Variables Based Configuration
Another option is to specify production data via environment variables. Heroku allows you to do this via config vars.
Set up your code to use a value based on the environment (maybe with optional defaults):
facebook_app_id = ENV['FB_APP_ID'] || 'some default value'
Create the production config var on Heroku by typing on a console:
heroku config:add FB_APP_ID=the_fb_app_id_to_use
Now ENV['FB_APP_ID'] is the_fb_app_id_to_use on production (Heroku), and 'some default value' in development and test.
The Heroku documentation linked above has some more detailed information on this strategy.
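A stricter take on the environment-variable approach above, as an added example that is not part of the original answer: development and test get committed placeholder values, while production must take every setting from the environment and refuses to start on a missing value or a leftover dev value. `FB_APP_SECRET`, `FB_API_KEY`, `MissingConfig`, `load_facebook_config` and `redact` are assumed names; only `FB_APP_ID` appears in the answer.

```ruby
# Dev/test values are constructed placeholders that are safe to commit.
# FB_APP_SECRET and FB_API_KEY are assumed names; only FB_APP_ID is in the answer.
DEV_DEFAULTS = {
  'FB_APP_ID' => 'app_id_for_dev_and_test',
  'FB_APP_SECRET' => 'app_secret_for_dev_and_test',
  'FB_API_KEY' => 'api_key_for_dev_and_test'
}.freeze

class MissingConfig < StandardError; end

# env_name stands in for Rails.env and environ for ENV, so the function can be
# exercised without Rails or real credentials.
def load_facebook_config(env_name, environ)
  production = env_name.to_s == 'production'
  missing = []
  config = DEV_DEFAULTS.each_with_object({}) do |(key, dev_value), out|
    value = environ[key].to_s.strip
    if !value.empty?
      out[key] = value
    elsif production
      missing << key # never fall back to a dev value in production
    else
      out[key] = dev_value
    end
  end
  raise MissingConfig, "unset in production: #{missing.join(', ')}" unless missing.empty?
  if production && config.any? { |key, value| value == DEV_DEFAULTS[key] }
    raise MissingConfig, 'production is configured with a dev/test value'
  end
  config.freeze
end

# Mask secret-bearing settings before a config hash is logged or inspected.
def redact(config)
  config.to_h { |key, value| [key, key.end_with?('SECRET', 'KEY') ? '[REDACTED]' : value] }
end
```

In a Rails initializer the call would be `load_facebook_config(Rails.env, ENV)`, so a deploy with an unset config var fails at boot rather than running with a development secret.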
You can explore the idea of a content filter, based on a 'smudge' script executed automatically on checkout.
You would declare:
some (versioned) template files
some value files
a (versioned) smudge script able to recognize its execution environment and generate the necessary (non-versioned) final files from the value files or (for more sensitive information) from other sources external to the Git repo.