How good is Bamboo support in Cake? We're currently on TeamCity but considering moving to Bamboo. What we need:
1) Report an error from the Cake script
2) Set the build number from the Cake script
3) Publish artifacts from the Cake script
All of these are currently possible with TeamCity, but I can't find anything other than IsRunningOnBamboo for Bamboo.
The built-in support in Cake for Bamboo isn't currently as good as it is for TeamCity.
You should be able to report an error. Continuous integration servers typically detect failure by looking for a non-zero exit code, which is what Cake does out of the box, so an error thrown from your Cake build script should fail the build in Bamboo.
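To make that concrete, here is a minimal sketch of a Cake script where a thrown exception fails the build; the task name and the failure message are made up for illustration:

```csharp
// build.cake - hypothetical task; any unhandled exception behaves the same way.
Task("Default")
    .Does(() =>
{
    Information("Running on Bamboo: {0}", BuildSystem.IsRunningOnBamboo);

    // Throwing here makes Cake exit with a non-zero exit code,
    // which Bamboo (like most CI servers) reports as a failed build.
    throw new Exception("Something went wrong; failing the build.");
});

RunTarget("Default");
```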
In TeamCity, setting the build number and publishing artifacts are done through the service messages that TeamCity supports. If Bamboo has a similar feature, then there is no reason that Cake, either out of the box or within your own script, shouldn't be able to support it. It would just be a case of figuring out how it is done.
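For comparison, this is roughly what the TeamCity side looks like today, together with one way you might approximate it on Bamboo. The TeamCity calls below use the provider aliases Cake ships with; the Bamboo branch is only an assumption: it writes the values to a properties file on the idea that a later "Inject Bamboo variables" task in the plan could pick them up, and the file name and key are invented for illustration:

```csharp
// build.cake
var version = Argument("buildVersion", "1.0.0");

Task("Report-To-CI")
    .Does(() =>
{
    if (BuildSystem.IsRunningOnTeamCity)
    {
        // These helpers emit ##teamcity[...] service messages.
        TeamCity.SetBuildNumber(version);
        TeamCity.PublishArtifacts("./artifacts");
    }
    else if (BuildSystem.IsRunningOnBamboo)
    {
        // Assumption: Bamboo has no service-message equivalent, so write the
        // values to a file for a subsequent "Inject Bamboo variables" task.
        // The file name and key are hypothetical.
        System.IO.File.WriteAllText("build-output.properties", $"buildVersion={version}");
    }
});

RunTarget("Report-To-CI");
```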
We would be happy to accept a PR to provide this functionality out of the box in future versions of Cake.
The current BambooProvider does provide some additional information, in the form of the Build, Plan, and Repository details:
https://cakebuild.net/api/Cake.Common.Build.Bamboo.Data/BambooBuildInfo/
https://cakebuild.net/api/Cake.Common.Build.Bamboo.Data/BambooPlanInfo/
https://cakebuild.net/api/Cake.Common.Build.Bamboo.Data/BambooRepositoryInfo/
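As a quick sketch of what that gives you, you can combine the provider with Bamboo's bamboo_* environment variables. The property path below should match the BambooBuildInfo page linked above, but double-check it against your Cake version, and verify the environment variable names against your Bamboo setup:

```csharp
// build.cake
Task("Show-Bamboo-Info")
    .WithCriteria(BuildSystem.IsRunningOnBamboo)
    .Does(() =>
{
    // Typed access via the Bamboo provider (see the linked API pages
    // for the full list of Build/Plan/Repository properties).
    var bamboo = BuildSystem.Bamboo;
    Information("Build number: {0}", bamboo.Environment.Build.Number);

    // Or read the raw Bamboo variables that back the provider.
    Information("Plan key: {0}", EnvironmentVariable("bamboo_planKey"));
});

RunTarget("Show-Bamboo-Info");
```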
I haven't done any cshtml front-end development for a few years.
What's the current, generally accepted way for ASP.NET Core front-end developers to work across a range of tools on Windows?
By that, I mean a way to build the front-end JS and the .NET project(s) together, and to iterate rapidly in both the browser and the code.
My thinking is:
We have a much better command-line story around dotnet today.
Some folk like VS Code.
Some folk prefer VS 2019, and some like either, depending.
We need to work on UI aspects sometimes.
But we also need to attach a debugger and debug the server logic sometimes.
The build server should have no trouble with it; the setup should be simple and rely mostly on build logic held in the repo.
Tooling, and kicking off the whole build and serve process should be understandable and familiar.
It should be pretty simple to get going after a team noob clones the repo.
My initial thought would be to set up npm, then use something like Gulp to kick off everything, including running dotnet run.
Then when running under the Visual Studio 2019 debugger, use the Task Runner Explorer to kick off the Gulp stuff but skip the dotnet run part.
(shame there doesn't seem to be a command-line way to start VS (Code or 2019) and attach the debugger)
Now I'm expecting to get a "primarily opinion-based" SO beating, but there are general trends and ideas that go into designing all these tools, how they can all play ball together, and what the dev story looks like.
You've pretty much already described the process. However, I'll add a few things:
You don't need the dotnet run bit. Visual Studio and VS Code are both capable of debugging directly.
You can assign the gulp tasks to build tasks in Task Runner Explorer, so you really don't even need to think about running those directly. I'm not as sure about this aspect of VS Code, but there's probably an extension to handle it, if it's not already built in.
If you want true ease of development, the best thing you can do is use Docker. Just add a Dockerfile to each project that actually runs (i.e. not a class library) and set up the steps to build and run it there. In Visual Studio, you can right-click the project and choose Add > Docker Support, and it will actually generate a ready-made Dockerfile, though you may need to add a step or two to handle the client-side build steps.
In any case, this then becomes truly click-and-run, with nothing to worry about. The story is even better when you use docker-compose, as then Visual Studio and VS Code can spin up your entire application stack all at once, including external dependencies such as a database, a Redis instance, etc.
If you haven't used Docker before, start now. It's absolutely revolutionary for development.
One note for CI/CD: as much as possible, you should add a YAML file to describe your CI/CD pipeline. Depending on the actual provider you're using for build/release, there might be some differences, so consult the relevant documentation. (Azure DevOps, for example, doesn't currently support describing release pipelines in YAML, though you can still do your build that way.) In any case, this allows you to configure all of this in code and have it committed to source control.
You may consider the same for your infrastructure. Azure has ARM templates, AWS has CloudFormation, GCP has Deployment Manager. There are also third-party tools like Terraform or Ansible. All of these, in some form or fashion (usually JSON or YAML), allow you to define all the characteristics of the infrastructure you're going to deploy to and commit that to source control. This makes deployment, and things like creating new environments, a breeze.
We are looking for software to run our test cases automatically.
We want something that will run on our server (or a commercial service), automatically get the newest commit from GitHub, compile the project with CMake, and run CTest on our test cases. The results should then be visualized on a nice website.
I had a look at CDash, but the documentation is so bad that I could not even get it to fetch the latest commit from GitHub.
So my questions are:
Is there a good tutorial for CDash, other than the poor wiki page?
What software is available for running tests on new commits to GitHub, and what are the advantages and drawbacks of each?
In answer to your second question, Jenkins is a robust and extensible continuous integration tool that can be integrated tightly with GitHub using a plug-in (or loosely using standard Git support). It also supports CMake via a plug-in. Whether it has disadvantages that will make it less useful for you depends on your organization and build process, but I've found it to be highly customizable to a wide variety of processes. I recommend taking a look at it.
There's also a third-party CTest plugin available for Jenkins.
CDash works in tandem with CTest. If you are already using CMake, then it should be fairly easy to submit your testing results to CDash. I'd recommend reading the CTest documentation:
http://www.vtk.org/Wiki/CMake_Testing_With_CTest
You can either install your own CDash server or use Kitware's hosted server at my.cdash.org. You can test your server with a sample project available at:
http://www.cdash.org/cdash/resources/software.html
We have our source code stored in Kiln/Mercurial repositories; we use MSBuild to build our product and we have Unit Tests that utilize MSTest (Visual Studio Unit Tests).
What solutions exist to implement a continuous integration machine (i.e. Build machine).
The requirements for this are:
A build should be kicked off when necessary (i.e. when code has changed in the repositories we care about)
Before the actual build, the latest version of the source code must be acquired from the repository we are building from
The build must build the entire product
The build must build all Unit Tests
The build must execute all unit tests
A summary of success/failure must be sent out after the build has finished; this must include information about the build itself but also about which Unit Tests failed and which ones succeeded.
The summary must contain which changesets were in this build that were not yet in the previous successful (!) build
The system must be configurable so that it can build from multiple branches(/Repositories).
Ideally, this system would run on a single box (our product isn't that big) without any server components.
What solutions are currently available? What are their pros/cons? From the list above, what can be done and what cannot be done?
Thanks
TeamCity, from JetBrains, the makers of ReSharper, will do all of that. You will have to configure it for what it specifically means to "build your product", but you can set up everything you specified with it.
The software can alert you to failed builds, even down to alerting only the person responsible for checking in code that broke the build. It even comes with handy web pages you can view to see only your own changes, which builds they've been through successfully, which ones are pending, and which ones are currently being executed.
Since it is a distributed product, you can make it grow with your organization and product. If at some point you discover that you're waiting for the build to complete too much, because a lot of builds are being queued up, you can add more build agents. The build agents are basically separate client programs you install on additional machines, that execute the actual build configurations.
It comes in two flavors: the Professional version and the Enterprise version. The Professional version is free and can contain up to 20 build configurations, 20 users, and 3 build agents. The Enterprise version has unlimited users and build configurations, and you can also use LDAP-based security (think domain-verified users). There are some other bonuses in the Enterprise version too. You can also buy licenses for more build agents if you need more than the initial 3.
Now, if "no server components" means you don't want it to act like a web server, you're going to be hard pressed to find something that will react to your commits.
However, if you mean that you don't want to have to install a server OS, then TeamCity can work on workstation versions of Windows as well. That isn't to say that you shouldn't consider setting up a proper server for it, but it will run on a workstation if that is what you require.
Our product BuildMaster does all of the things you listed by design and there is a free, somewhat limited edition (e.g. you can only have a limited number of issue tracking providers integrate with it, the database change script packaging tool isn't included in the free version, etc.) for 5 users or fewer.
What you've described are the basics of a CI tool, so every CI tool should be OK.
I use CruiseControl.NET, but it has bugs with Mercurial and is not very straightforward at first glance. I am nevertheless happy with it. Other tools that come to mind are Hudson, Team Build (from TFS), and TeamCity.
I have not tried the other tools, but you can see pros/cons here:
TeamCity vs CC.net
Hudson vs CC.net, Link 1 and Link 2
CC.net vs TFS
EDIT: I forgot to mention that Hudson and CruiseControl.NET are open-source projects; you can easily write plugins and patches for your install.
EDIT²: The Mercurial bugs seem to be fixed in the upcoming 1.6 version of CCNet (changes committed to the trunk this week).
There's always BuildBot, which I like (and have contributed some code to). It's fairly easy to set up and run on any OS, it handles simple tasks like the ones you describe, and it's remarkably flexible if you need it.
What you might find missing are the batteries-included log scrapers and/or report generators that other, more commercial CI servers come with, especially for enterprise-y frameworks.
It scales pretty well too; Mozilla and Chromium use it, amongst others.
I'm trying to create a CI process for SQL Server Reporting Services.
I am fairly new to TFS but quite experienced with MSBuild. In the past I've used a combination of MSBuild with TeamCity, so the whole build process is more or less custom.
Here lies the start of my problems: as the solution I am deploying only contains Report Server projects (.rds), no compilation is required. I thought I would override the first default target that TFS runs (EndToEndIteration) to replace the default TFS build sequence and inject my own.
The first snag I have come across is that the build always fails; how can I set the status of the build to success? Currently the EndToEndIteration target is very light and only outputs a message.
Is this the best method to create a custom build process in TFS where compilation is not required? Or should I use the default sequence and override one of the hook tasks mentioned in
http://msdn.microsoft.com/en-us/library/aa337604%28VS.80%29.aspx
(i.e. AfterCompile)?
The core steps that I'd like to achieve are:
Bundle the RDL and datasource files
Connect to the host server to register/deploy the reports
Re-apply any subscriptions that previously existed
Run tests to verify the deployment succeeded and is returning results as expected
I have found another article on Reporting Services deployment:
Reporting Services Deployment
But it doesn't mention the best practice for customizing the standard build process.
Any help would be appreciated.
For anyone interested, I've just stumbled upon an answer to the first question I asked:
The first snag that I have come across is that the build always fails, how can I set the status of the build to success?
You can find a solution to this at
Link
The options available for this property are:
Unknown
Failed
Succeeded
Don't forget to also set the TestStatus, or else the build will only partially succeed.
Still looking for the best practice for creating a custom build sequence.
I ask this question because I find that the community contributions to the various build engines (like MSBuild and NAnt) already include all the tasks that CI servers are promoted for, like getting versions from source control, cleaning folders, changing build numbers, sending emails, etc.
Is it only because it "listens" to the changes that happen on the source control repository? What else am I missing?
Grzegorz Oledzki linked a good resource for finding the differences between multiple CI solutions, but it should be noted that the intent of MSBuild is specifically to turn code into binaries, and it is used by CI software to build the source. It's true that it can do other things, but most of its tasks lie closely within that realm.
In addition to what you mentioned about listening to the repo, some CI servers can do all kinds of things like:¹
multi-agent building (not just multi-core, which MSBuild can do, but multi-machine)
monitoring build status
notifications (e-mail/sms/rss/whatnot)
assigning blame for broken builds
administrative features
supporting XFDs (extreme feedback devices)
automated deployment
And generally all from a handy UI.
¹ Not all CI software will have all of these features; this list is by no means exhaustive, and there is some overlap.
I believe CI (Continuous Integration) feature matrix will answer all your questions about particular CI providers and their capabilities.
Wow, there are just so many answers to this. As for what a CI system can do that a build script can't, other than listen to your version control system... well, for starters, systems like TeamCity can let you first test your code on the build server and then check it in if it passes all the tests.
I highly recommend using a CI server, but I prefer to keep all of the build logic in an MSBuild file and all of the "who to notify when it fails" logic in the CI server. Keeping the logic in the build file helps you reproduce the build on your own machine and makes it simple to set up new projects in the CI server or to change how the CI server builds the project.