Working in a team environment, we have a Team Foundation Server that also contains a Team Build component. It is configured to automatically build all projects and solutions at specific times or on request.
We develop a product that is built from several solutions that depend on each other. When something changes in one solution, it has to be rebuilt manually on the local machine, in both Debug and Release mode, so that the changes take effect in the solutions that depend on it.
Also, when a developer retrieves all the sources for the first time, he has to build all solutions manually in the correct order to get a working environment.
What is the best way to automate things like this? Create .cmd files that trigger the correct MSBuild files? Use a program such as CruiseControl.NET?
What do you people do to maintain a clean local development environment?
What I did for our team was to provide a Visual Studio solution which contains all projects. Then I created a simple .cmd file which uses the command-line tools of Visual Studio to build this solution in its Debug/Release/Profile configurations. This is a one-step build solution that can be used from every engineering machine.
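A minimal sketch of such a script, with the MSBuild path, solution name and configuration names as placeholders rather than the real ones:

    @echo off
    rem One-step build: compile the all-projects solution in every configuration we care about.
    rem The MSBuild path and solution name below are placeholders - adjust for your environment.
    set MSBUILD="C:\Windows\Microsoft.NET\Framework\v4.0.30319\MSBuild.exe"
    set SOLUTION=AllProjects.sln

    for %%C in (Debug Release Profile) do (
        echo Building %SOLUTION% in %%C configuration...
        %MSBUILD% %SOLUTION% /t:Build /p:Configuration=%%C /m || exit /b 1
    )
    echo All configurations built.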
The next level is the continuous integration system, which is set up to check for changes every 15 minutes and start a build if there are changes in the VCS. I'm using Hudson as our CI system. The CI system is used to build the native projects and the Java projects as well as the Flex stuff. Since everything can be built from the command line, it's pretty easy to use it with Hudson or CruiseControl.NET.
I haven't done any cshtml front-end development for a few years.
What's the current, generally accepted way for ASP.NET Core front-end developers to work across a range of tools on Windows?
By that, I mean a way to have the front-end JS build and the .NET project(s) also build and to work rapidly in the browser and code.
My thinking is:
We have a much better command-line story around dotnet today.
Some folk like VS Code.
Some folk prefer VS 2019, and some like either, depending.
We need to work on UI aspects sometimes.
But we also need to attach a debugger and debug the server logic sometimes.
The build server should have no problem with any of this; it should stay simple and rely mostly on build logic held in the repo.
Tooling, and kicking off the whole build-and-serve process, should be understandable and familiar.
It should be pretty simple to get going after a team noob clones the repo.
My initial thought would be to set up NPM, then use something like Gulp to kick off everything, including running dotnet run.
Then when running under the Visual Studio 2019 debugger, use the Task Runner Explorer to kick off the Gulp stuff but skip the dotnet run part.
(It's a shame there doesn't seem to be a command line to start VS Code or VS 2019 and attach the debugger.)
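For what it's worth, a rough sketch of the kind of wrapper I have in mind; the gulp task name and the project path are invented for illustration:

    @echo off
    rem dev.cmd - hypothetical wrapper for the inner dev loop described above.
    rem Assumes a package.json/gulpfile at the repo root with a "watch" task, and a
    rem web project under src\Web - all of these names are made up for illustration.
    call npm install || exit /b 1
    rem Run the front-end watch in a second window so it keeps rebuilding JS/CSS...
    start "gulp watch" cmd /c "npx gulp watch"
    rem ...while the ASP.NET Core app runs in this one.
    dotnet run --project src\Web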
Now I'm expecting to get a "primarily opinion-based" SO beating, but there are general trends and ideas behind how all these tools are designed to play together and what the dev story looks like.
You've pretty much already described the process. However, I'll add a few things:
You don't need the dotnet run bit. Visual Studio and VS Code are both capable of debugging directly.
You can assign the gulp tasks to build tasks in Task Runner Explorer, so you really don't even need to think about running those directly. I'm not as sure about this aspect of VS Code, but there's probably an extension to handle it, if it's not already built in.
If you want true ease of development, the best thing you can do is use Docker. Just add a Dockerfile to each project that actually runs (i.e. not a class library) and set up the steps to build and run it there. In Visual Studio, you can right-click the project and choose Add > Docker Support, and it will actually generate a ready-made Dockerfile, though you may need to add a step or two to handle the client-side build steps. In any case, this then becomes truly click and run, with nothing to worry about. The story is even better when you use docker-compose, as then Visual Studio and VS Code can spin up your entire application stack all at once, including external dependencies such as a database, Redis instance, etc. If you haven't used Docker before, start now. It's absolutely revolutionary for development.
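As a sketch of how simple the day-to-day loop becomes (the compose file contents and the "web" service name here are assumptions, not something from your project):

    @echo off
    rem Everyday loop with docker-compose. The service name "web" is a placeholder for
    rem whatever your application service is called in docker-compose.yml.

    rem Build the images and start the whole stack (app, database, Redis, ...) in the background.
    docker-compose up --build -d

    rem Follow the application's logs while you work.
    docker-compose logs -f web

    rem Tear everything down when you're done.
    docker-compose down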
One note for CI/CD: as much as possible, you should add a YAML file to describe your CI/CD pipeline. Depending on the actual provider you're using for build/release, there might be some differences, so consult the relevant documentation. (Azure DevOps, for example, doesn't currently support describing release pipelines in YAML, though you can still do your build that way.) In any case, this allows you to configure all of this in code and have it committed to source control.
You may consider the same for your infrastructure. Azure has ARM templates, AWS has CloudFormation, GCP has Deployment Manager. There are also third-party tools like Terraform or Ansible. All of these, in some form or fashion (usually JSON or YAML), allow you to define all the characteristics of the infrastructure you're going to deploy to and commit that to source control. This makes deployment, and things like creating new environments, a breeze.
I'm doing a proof of concept on continuous integration and whether our development team will benefit from automated builds and automated deployments to reduce human error.
I've already come quite far in the process but have some questions on how to configure our incremental builds to avoid rebuilding of dependencies that had no code changes.
In addition I’d like our deployment tool to identify and deploy only assemblies rebuilt as a result of a code change.
We already use Microsoft products like TFS for source control, Visual Studio for development and Team Foundation Build for continuous integration builds. We're currently leaning toward InRelease for deployment as it seems to integrate well with Team Foundation Build.
But first, here is our current setup...
There are 200+ C# solution files, each containing one or more projects. It is not practical in our environment to combine these projects into fewer solutions; that is by design. Projects within a solution use project references to resolve dependencies and file references to projects in other solutions. As far as I know, this is the approach recommended by Microsoft when dealing with a large number of projects.
We use a "branch by feature" strategy e.g. isolated development on concurrent features branches which is merged up to a stable Main branch when complete. When it's time for a release, a release is branched from main and isolated for hotfixes and deployment. The feature branches and main branch have a CI build triggered by code check-ins. Releases will mostly like be manually executed from InRelease against a selected release branch. A release will be deployed through various environments including INTEGRATION/TEST, UAT and ultimately to all our clients. We're still fleshing out the details of branching strategy, but that's a question for another time.
The current problems to solve:
1. Avoid rebuilding of dependencies that have no code changes...
When we deploy new functionality or a patch to a client, we want to push the absolute minimum number of files. Our company has a very large customer base (thousands of customers), sometimes with slow internet connections, so doing a full deployment of all assemblies (200+) to every customer is not an option. I've partially solved the problem by setting up incremental builds, which correctly rebuild only changed projects as expected, but they also rebuild all the dependent projects even though NO CODE CHANGES were made to them. This results in both the changed assemblies and their dependents having new timestamps. If we use the change of timestamp to identify which assemblies to deploy, then this results in the deployment of functionally unchanged assemblies. The goal here is to deploy only assemblies where the code has changed and assemblies where breaking changes occur.
For example:
Solution B, has a project called Project B
Solution A, has a project called Project A
Project B -> Project A (i.e. Project B has a file reference to the output of Project A)
When a non-breaking change is made in Project A, say to the interior of a method, then the expected result is: only A is rebuilt and is therefore a candidate for deployment.
When a breaking change is made in Project A that will break Project B, the expected result is: both A and B are rebuilt and are therefore candidates for deployment.
Currently MSBuild rebuilds all dependents regardless, which is not what we want.
2. Automatically identify which assemblies should be deployed...
I have a partial solution to the problem.
When a build is performed, my build process template is configured to run an MSBuild script containing a list of solutions to build in a particular order.
This operation is performed in the build agent's workspace. Every time a new build is performed, the build process template creates a uniquely named drop folder and copies the binaries from the build agent workspace to it. This is out-of-the-box functionality taken care of by the standard build process template. The build has been configured not to clear the build agent workspace, so the first time it runs it will build all projects within a solution, but subsequent builds will only build projects that have code changes or depend on changed projects (an incremental build?). Therefore unchanged assemblies keep their original timestamps and changed assemblies get new timestamps.
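To make the incremental part concrete, here is a rough command-line equivalent of what that script does; the solution names and MSBuild path are placeholders, and the actual build runs through an MSBuild script rather than a batch file:

    @echo off
    rem Incremental build sketch: /t:Build (not /t:Rebuild) with no clean step means MSBuild
    rem only recompiles projects whose inputs are newer than their outputs, so unchanged
    rem assemblies keep their original timestamps. Solutions are listed in dependency order
    rem (A before B, matching the example above).
    set MSBUILD="C:\Windows\Microsoft.NET\Framework64\v4.0.30319\MSBuild.exe"

    for %%S in (SolutionA.sln SolutionB.sln) do (
        %MSBUILD% %%S /t:Build /p:Configuration=Release /m || exit /b 1
    )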
We have a tool that can do folder comparisons between drop folders and output the results to a txt file. This allows us to identify which binaries have been added/changed/removed since the last deployment. It also gives us the added benefit of comparing the list of actual artefacts to a manifest of expected artefacts, as defined by the developer. This ensures that no assemblies get deployed that have not been specified and proven to be unit tested.
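For illustration, a crude way to produce that kind of added/changed list from two drop folders, using robocopy in list-only mode (the folder paths are placeholders):

    @echo off
    rem List-only comparison (/L): log the binaries in the current drop that are new or
    rem different from the previous drop, without copying anything.
    set PREV=\\buildserver\drops\previous
    set CURR=\\buildserver\drops\current
    robocopy "%CURR%" "%PREV%" *.dll *.exe /L /E /NJH /NJS /NDL /NP /LOG:changed-binaries.txt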
The question is: how can we leverage InRelease to deploy only the required files, as per the example above, and not all files in the drop folder?
Install a TFS Proxy in front of your build machine; this reduces the network traffic.
You could start with a branching strategy like Service Pack; you can read documentation about it in the ALM Rangers guidance. Then adapt your build process template to build just the part of the code that changed. I think you will find more information in BRD Lite, another guidance from the ALM Rangers.
We have our source code stored in Kiln/Mercurial repositories; we use MSBuild to build our product and we have Unit Tests that utilize MSTest (Visual Studio Unit Tests).
What solutions exist to implement a continuous integration machine (i.e. a build machine)?
The requirements for this are:
A build should be kicked off when necessary (i.e. code has changed in the repositories we care about)
Before the actual build, the latest version of the source code must be acquired from the repository we are building from
The build must build the entire product
The build must build all Unit Tests
The build must execute all unit tests
A summary of success/failure must be sent out after the build has finished; this must include information about the build itself but also about which Unit Tests failed and which ones succeeded.
The summary must contain which changesets were in this build that were not yet in the previous successful (!) build
The system must be configurable so that it can build from multiple branches (or repositories).
Ideally, this system would run on a single box (our product isn't that big) without any server components.
What solutions are currently available? What are their pros/cons? From the list above, what can be done and what cannot be done?
Thanks
TeamCity, from JetBrains, the makers of ReSharper, will do all of that. You will have to configure what specifically it means to "build your product", but you can set up everything you specified with it.
The software can alert you to failed builds, even down to alerting only the person responsible for checking in code that broke the build. It even comes with handy web pages you can view to see only your own changes, which builds they've been through successfully, which ones are pending, and which ones are currently being executed.
Since it is a distributed product, you can make it grow with your organization and product. If at some point you discover that you're waiting for the build to complete too much, because a lot of builds are being queued up, you can add more build agents. The build agents are basically separate client programs you install on additional machines, that execute the actual build configurations.
It comes in two flavors, the Professional version and the Enterprise version. The Professional version is free and can contain up to 20 build configurations, 20 users, and 3 build agents. The Enterprise version has unlimited users and build configurations, and you can also use LDAP-based security (think domain-verified users). There are also some other bonuses in the Enterprise version. You can also buy licenses for more build agents if you need more than the initial 3.
Now, if "no server components" means you don't want it to act like a web server, you're going to be hard pressed to find something that will react to your commits.
However, if you mean that you don't want to have to install a server OS, then TeamCity can work on workstation versions of Windows as well. That isn't to say that you shouldn't consider setting up a proper server for it, but it will run on a workstation if that is what you require.
Our product BuildMaster does all of the things you listed by design and there is a free, somewhat limited edition (e.g. you can only have a limited number of issue tracking providers integrate with it, the database change script packaging tool isn't included in the free version, etc.) for 5 users or fewer.
What you've described are the basics of a CI tool, so every CI tool should be OK.
I use CruiseControl.NET, but it is buggy with Mercurial and is not very straightforward at first glance. I am nevertheless happy with it. Other tools that come to mind are Hudson, Team Build (from TFS) and TeamCity.
I have not tried the other tools, but you can see pros/cons here:
TeamCity vs CC.net
Hudson vs CC.net, Link 1 and Link 2
CC.net vs TFS
EDIT: I forgot to mention that Hudson and CruiseControl.NET are open-source projects; you can easily write plugins and patches for your install.
EDIT²: The Mercurial bugs seem to be fixed in the upcoming 1.6 version of CCNet (changes committed to the trunk this week).
There's always BuildBot, which I like (and have contributed some code to). It's fairly easy to set up and run on any OS, it handles simple tasks like the ones you describe, and it's remarkably flexible if you need it to be.
What you might find missing are the batteries-included log scrapers and/or report generators that other, more commercial CI servers come with, especially for enterprise-y frameworks.
It scales pretty well too; Mozilla and Chromium use it, amongst others.
I've never worked on tremendously huge projects, and the workflow we use at work is check out / code / compile locally to test / commit. I was wondering how a build server would change this process. How do developers test their code when the application is too huge to compile locally? Do they just code, commit and pray?
Absolutely not.
The developer usually has a build file which can build the project for him or her, which has some "targets" defined which do the testing. If you have a really big project, you may have certain portions of it precompiled for you, so you don't have to build the whole thing in one big chunk. You usually do your testing locally before you commit to your repository. Breaking the build in big projects can mark you as an object of ridicule and scorn. Breaking the build in really important, really big projects can be career limiting... ;-)
The build SERVER itself doesn't change this. The build server only runs your build file and the targets you tell it to.
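As a sketch, the local routine before a commit often boils down to something like this (assuming an MSBuild/MSTest setup like the ones discussed elsewhere on this page; the solution and test assembly names are invented):

    @echo off
    rem Local pre-commit sanity check: build the solution, then run the unit tests.
    rem The build server runs essentially the same targets, just on every check-in.
    msbuild BigProduct.sln /t:Build /p:Configuration=Debug /m || exit /b 1
    mstest /testcontainer:Tests\UnitTests\bin\Debug\UnitTests.dll || exit /b 1
    echo Build and tests passed - reasonably safe to commit.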
There are also build components (I've just started using TeamCity - no affiliation) that allow "personal builds".
I haven't used it yet, as we haven't got it set up properly, but my understanding is that TeamCity allows running a build (and tests, if any are run on the server) with your changes before committing (and optionally the server will commit your changes if the build is successful). In TeamCity this is called a pre-tested commit.
Currently I'm tasked with doing the daily build. We have an ASP.NET 2005 website with a SQL Server 2005 backend. Our current source control is Visual Source Safe 2005.
At this point, I use the brute-force method of daily builds.
Get Latest version of source code
Get Latest version of Database release script
Backup old website files to a directory
Publish new code to my local machine
Run it on my server to keep the test/stage site working
Push newly created files to the website
Run SQL Script on test database (assuming updates, otherwise I don't bother)
Test website on the Test Server.
Looking at the idea of automated builds intrigues me since it means that I do less each morning. How would you recommend I proceed? I want to have a fully fleshed out idea before I present it to my boss.
Ditch VSS, move to Subversion, and check out CruiseControl.NET. Alternatively, if you have an MSDN developer license, you can run TFS Workgroup Edition and set up a build server on any old XP box. It's what we do at our shop.
As Assaf noted, you can use CC.NET with VSS directly. Nice.
TeamCity has worked well for me. It has a very simple setup. Combine it with an MsBuild script for your operations and you're auto-matic.
For build management I wholeheartedly recommend TeamCity. It doesn't require IIS6 (like CC.NET does) since it runs on its own copy of Tomcat, and the setup is all done through various forms. This is a big deal to me since the build server is just an XP Pro box. It integrates well with SVN and there is no crazy XML file manipulation like I had to do with CruiseControl.NET. Big win for me.
For a build runner we use NAnt to send emails to various people, copy the packaged builds where they're supposed to go, run NUnit and NCover, and deploy the software to our web farm.
For automated testing we use Watin.
http://www.nunit.org/index.php
http://www.jetbrains.com/teamcity
http://ncover.sourceforge.net/
http://subversion.tigris.org/
http://nant.sourceforge.net/
http://watin.sourceforge.net/
Try CruiseControl.Net. It's free, and whatever customized daily/continuous routine you want it to perform you can always add with scripts.
Remember, it's not just about daily (nightly) builds, but also about letting you catch build errors in time (since it continuously builds after every source commit/check-in). You don't necessarily test every code change on every possible platform and build configuration, but CC can do exactly that for you (in the background).
http://confluence.public.thoughtworks.org/display/CCNET/Visual+Source+Safe+Source+Control+Block
All of what you are doing can be performed by a set of batch files, depending on how automated your test environment is. The main batch file can be started as a 'scheduled task' at midnight or whatever. That's how we 'do it cheap' here and at other places I've worked. If you need help with a particular batch, I can provide a sample.
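To give a flavor of it, here is a rough skeleton of such a batch file; every path, server name and command line in it is a placeholder that would need adjusting to your setup:

    @echo off
    rem nightly-build.cmd - scheduled-task version of the manual routine in the question.
    rem All paths, server names and the VSS/SQL command lines are placeholders.
    set SITE=C:\inetpub\wwwroot\TestSite
    set BACKUP=D:\backups\TestSite_lastgood

    rem 1. Get the latest source code and database release script from VSS.
    "C:\Program Files\Microsoft Visual SourceSafe\ss.exe" Get $/MyWebSite -R -I- || exit /b 1

    rem 2. Back up the current site, then build/publish the new code over it.
    xcopy "%SITE%" "%BACKUP%" /E /I /Y
    msbuild MyWebSite.sln /t:Build /p:Configuration=Release || exit /b 1

    rem 3. Apply the database release script to the test database (skip if nothing changed).
    sqlcmd -S TESTSQL01 -d MyWebSiteDb -i DatabaseRelease.sql || exit /b 1

    echo Nightly build finished - check the test site.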
I second (or third) the recommendation for Subversion/CruiseControl.NET. Also, if it is appropriate, check out hosted services for SVN like CVSDude. You'll probably become well versed with MSBuild in the process too. Once you get it set up, it is great.
The cost doesn't come from licensing of the tools or even hardware necessarily, but from your time building and maintaining the system - and depending on what you are doing, that could become significant.
Start with the basics and incrementally improve it over time. Like anything else, if you try to come out of the gate with lots of automation and functionality you could find yourself mired in it fulltime for weeks.
Whatever tools you use, house them in a virtual machine (i.e., VMware).
When the equipment inevitably goes south, you can copy the image onto any machine and not miss a beat just because your build server decided to take the day off, assuming, of course, that you back it up.