Sonar analysis of website before deployment - Apache

I have a website (consisting only of static files) and have automated its deployment when changes are pushed to the master branch, using a Jenkins multibranch pipeline.
I'm planning to add an extra set of validations before deployment, and I've come across Sonar. Sonar can't be run on static files on its own; it requires these files to be served by a web server such as Apache2, because it also verifies HTTP headers.
Consequently, as long as my changes are not deployed to production, I will not be able to run Sonar on a particular development branch, and would have to wait until the branch is merged into master to obtain the results.
In this case, can you please give hints on how I can get validation results before deployment?

I would set up a test environment on another machine. It should mirror your production environment as closely as possible. Publish there first and run Sonar. If everything checks out, then deploy to prod. This is a basic Continuous Deployment scenario.
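Since the question mentions a Jenkins multibranch pipeline, a minimal Jenkinsfile sketch of that flow might look like the following. The host names, paths, and the exact Sonar invocation are assumptions; substitute whatever your scanner and servers actually use.

```groovy
pipeline {
    agent any
    stages {
        stage('Deploy to test') {
            steps {
                // Assumption: static files live in ./public and the test
                // Apache host serves them from /var/www/site
                sh 'rsync -a --delete public/ deploy@test-host:/var/www/site/'
            }
        }
        stage('Run Sonar against test') {
            steps {
                // Illustrative placeholder: point your Sonar scan at the URL
                // served by the test Apache so HTTP headers are also checked
                sh 'run-sonar-scan http://test-host/'
            }
        }
        stage('Deploy to prod') {
            when { branch 'master' }   // only master reaches production
            steps {
                sh 'rsync -a --delete public/ deploy@prod-host:/var/www/site/'
            }
        }
    }
}
```

Every branch gets deployed to the test host and scanned, so validation results are available before the merge to master; only master continues on to production.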

Related

What's the best way to achieve the following using CodePipeline/CodeBuild/S3

We have a localization microservice and a CI/CD pipeline per branch.
We also have develop, staging, and master branches that get deployed to dev, staging, and prod accounts through the respective pipelines.
The dev CI/CD pipeline submits jobs for translation to the localization microservice based on the en.json file in the source code (which also gets synced to S3, in addition to translated files such as fi.json and fr.json being created from the results of the localization microservice). The microservice may take days to get back with results, so the CI/CD pipeline just submits the job and doesn't wait for them.
We will be pushing to the develop branch a lot more frequently than to staging and prod.
When the translations do come back from the localization microservice and get stored in S3 in the dev account, we want to make sure that only the specific version of the files whose corresponding commit/source code was approved for release gets synced to the S3 buckets in staging and production. Any changes made to en.json (and thus to fr.json and fi.json) in the dev account since that release should not be pushed. How can this be controlled?
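One hedged sketch of how the pinning could work, assuming S3 versioning is enabled on the dev bucket: capture the version IDs of the translation files when a release is approved, then promote exactly those versions. The bucket names, prefix, and manifest file here are hypothetical, and cross-account permissions are left out.

```bash
# At release-approval time (dev account): record the current version IDs
aws s3api list-object-versions --bucket dev-translations --prefix locales/ \
    --query 'Versions[?IsLatest].[Key,VersionId]' --output text > release-manifest.txt

# At promotion time: copy exactly the recorded versions, never "latest"
while read -r key version; do
    aws s3api copy-object \
        --copy-source "dev-translations/${key}?versionId=${version}" \
        --bucket staging-translations \
        --key "${key}"
done < release-manifest.txt
```

Committing the manifest alongside the approved source code ties the file versions to the commit that was signed off, so later dev-account changes cannot leak into staging or prod.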

How to securely set up continuous delivery?

Setup:
Private master repo and every developer has their own private fork.
Currently using CircleCI, but we'd be happy to switch to satisfy requirements
Branches on master repo are protected with merge restrictions
Requirements:
Build + test on forked pull requests
Deploy to different environments based on master repo branch updates
Not all developers can be fully trusted with production credentials
Partial Solution:
Enable building and passing secrets on forked pull requests (Reference)
Use CircleCI contexts to set environment variables per branch. This allows different deploy targets (see the sketch below).
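For concreteness, a sketch of what per-branch contexts look like in a CircleCI 2.1 config; the job names, context names, and commands are illustrative:

```yaml
version: 2.1

jobs:
  build:
    docker:
      - image: cimg/base:stable
    steps:
      - checkout
      - run: make test            # placeholder build/test command
  deploy:
    docker:
      - image: cimg/base:stable
    steps:
      - checkout
      - run: make deploy          # placeholder; reads secrets from the context

workflows:
  build-and-deploy:
    jobs:
      - build                     # runs for every branch, including forked PRs
      - deploy:
          name: deploy-staging
          context: staging-secrets       # env vars injected only into this job
          requires: [build]
          filters:
            branches:
              only: staging
      - deploy:
          name: deploy-production
          context: production-secrets
          requires: [build]
          filters:
            branches:
              only: master
```

Note that, as the problems below describe, contexts restrict which jobs receive the variables but not which users can trigger those jobs.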
Problems:
All repo-specific secrets, as well as all global contexts, are now accessible by anyone who can open a PR.
Even if we disable building on forked pull requests, anyone with write access to at least one repo can access all global contexts.
Question:
This would seem to be a very common use case. How do other companies solve it?
Is CircleCI not the right tool for this? - No, it is not (see below).
Should we build a custom solution?
Edit1:
CircleCI got back to me and, surprisingly, this is not a use case they support. Looking into other providers now. The above questions are still unanswered.
Edit2:
I've also contacted Travis CI and Semaphore CI, and it appears that only Travis CI supports building forked PRs without leaking secrets into them (Reference).
Semaphore CI is missing (1) building forked PRs and (2) hiding secrets from the deployment phase in non-master workflows.
CircleCI has restricted contexts, but they would require manually changing workflows. Definitely not easy to set up, and I don't fully understand how they would work.

Triggering iOS build/test job via GitHub pull request on CloudBees

I would like Jenkins to comment whether a merge passes or fails (much like Travis CI) on GitHub pull requests. I understand this is a feature on BuildHive. However, I cannot find an option on BuildHive for using customer-provided slaves. My question is twofold:
Is there an option to limit builds to customer-provided slaves on BuildHive?
Is there a way I could enable comments on pull requests using DEV#cloud (the actual job must be run on a customer-provided slave)? If so, could you point me in the right direction to get this set up?
DEV#cloud can validate pull requests as BuildHive does, with some additional configuration. See http://wiki.cloudbees.com/bin/view/DEV/Github+Pull+Request+Validation
Answering in the order of your questions:
BuildHive uses the Validated Merge plugin for Git from Jenkins Enterprise to enable Jenkins to perform pull requests and run the builds before pushing to the main repo. That said, you currently cannot use customer-provided executors with BuildHive.
DEV#cloud: Normally, all Jenkins Enterprise plugins are available in the paid tier of DEV#cloud. However, this plugin is not, as it sets up a git server within Jenkins, which is not easily achievable in a cloud setup. I have created a ticket with CloudBees support requesting that the plugin be made available, and the engineering team will investigate delivering the feature.
Meanwhile, if you like, you can use Jenkins Enterprise to get this feature (however, it is an on-premises solution).

Need advice regarding deployment on multiple remote machines

Currently I am using MSDeploy to build and deploy to several machines using TeamCity. In my current scenario, I need to build, package, and deploy to Dev. After this, I need to deploy the same package to the Test and Live servers (which are on different domains). I understand how we do it, but the problem is that web.config transformation for the Test and Live configs only occurs when a package is built. This means the package created for Dev cannot be reused, as the transformation was only applied to the Dev web.config. I also know that we can change the web.config when unpackaging, but those parameters are very limited, and we have a lot of changes, not just the connection string or database settings.
Another solution is to add another step that builds packages for Test and Live as part of the Dev deployment, but that means a lot of copying to remote servers, once for Test and once for Live, which is very time-consuming due to the different domains.
Can you please advise on the best solution in this scenario, so that I can use TeamCity to publish to Dev, Test, and Live using the same package with different web.configs in one go?
To configure items at deployment time that are not automatically created for you, you can add a file named parameters.xml to your project and extend what you want to make available at deployment time.
Here's some documentation on the approach: Using Deployment Parameters for Web.Config File Settings.
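As a rough sketch, assuming a connection string named "Default" in web.config (the parameter name, default value, and XPath are illustrative):

```xml
<!-- parameters.xml in the project root; picked up when the package is built -->
<parameters>
  <parameter name="DefaultConnectionString"
             description="Connection string for the target environment"
             defaultValue="Server=dev-sql;Database=App;Integrated Security=true">
    <!-- scope is a regex on the file path, match is an XPath into that file -->
    <parameterEntry kind="XmlFile"
                    scope="\\web\.config$"
                    match="//connectionStrings/add[@name='Default']/@connectionString" />
  </parameter>
</parameters>
```

Each environment then gets its own *.SetParameters.xml with that environment's values, passed at deploy time via msdeploy's -setParamFile option, so the same package can be pushed to Dev, Test, and Live with different web.config values.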

Arguments for and against having live websites as working copies under Subversion

Currently my team's development workflow is as follows:
1: Repository -> Local Development (Working Copy) -> Commit when finished
2: Repository -> Testing Server (Working Copy) -> Testing by client etc
3: Repository -> Production Server (Working Copy)
Ongoing updates are deployed using SVN update.
I wanted to find out whether people are for or against having live websites on the production server as working copies, rather than using svn export. I restrict FTP access so that only developers can run svn update via a shell, and I have denied access to the .svn folders in the Apache conf.
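For reference, denying the .svn folders typically looks something like this in the Apache config (Apache 2.4 syntax; 2.2 would use Order/Deny directives instead):

```apache
# Block web access to Subversion metadata directories
<DirectoryMatch "/\.svn">
    Require all denied
</DirectoryMatch>
```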
I like having some build process (using Apache Ant, for example) to deploy the web site from a working copy. Even if it just makes a copy initially, it might later on filter some resources, generate files, minify JavaScript, or whatever.
At a minimum, the working copy should be a working copy of a tag in SVN, and you should switch to another tag when releasing a new version. This would at least prevent updates from a development branch or from the trunk, which could be unstable.
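To illustrate, a release on the production server would then be a switch to the new tag rather than a plain update (the repository layout and tag name are illustrative):

```bash
# Move the production working copy to the release tag
svn switch ^/tags/release-1.4.0 /var/www/site

# Subsequent "svn update" runs stay within that tag, so a stray update
# cannot pull in unstable trunk or branch changes.
```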