I feel like I'm experiencing a common problem, but I wasn't able to find anyone asking about it.
As the title indicates, we're forced to use AccuRev for SCM. We have our development stream under our QA stream. Let's say we're working on a big project that will be in QA for weeks before being released (we're following a scrum strategy). In the meantime, there's a bug fix that needs to go out before this big project. The problem is the bug fix affects some of the same files the big project uses. How would I get my files from development to production, without bringing the big project with me?
Sorry, I hope that makes sense!
Thanks!
If you are using the change package feature in AccuRev, you can select that issue and just promote those changes into the production stream.
Otherwise, you will need to determine the promote transaction(s) of the bug you fixed in development and use the Change Palette to send those fixes into production.
I think I found a solution that will work for us.
I plan on creating a snapshot off of the development stream after each successful deployment to production. Bug fixes/smaller projects will work off of this snapshot. That way I can keep anything I'm currently working on in DEV from getting inherited into my bug fix. When I'm ready to deploy my bug fix, I'll create another snapshot and re-parent my big project there. Then I'll revert the change package in QA and development, re-parent my bug fix to development, and promote as normal.
It's a slightly modified version of what's explained in this article: https://accurev.wordpress.com/2008/03/27/pattern-for-stable-development/
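In case it's useful to anyone following along, here is roughly what that looks like with the AccuRev CLI. The stream names are made up, and some teams do these steps through the GUI instead:

    # Snapshot DEV right after a successful production deployment
    accurev mksnap -s DEV-prod-baseline -b DEV -t now

    # Base bug fixes / smaller projects on the snapshot, so in-flight
    # DEV work is not inherited into them
    accurev mkstream -s bugfix-stream -b DEV-prod-baseline

    # When the bug fix is ready: snapshot again, re-parent the big
    # project's stream onto it, then hang the bug fix under DEV
    accurev mksnap -s DEV-pre-bugfix -b DEV -t now
    accurev chstream -s big-project-stream -b DEV-pre-bugfix
    accurev chstream -s bugfix-stream -b DEV
    # ...and promote as normal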
In our team we have a number of APIs specified using the OpenAPI Specification (formerly Swagger). We use Maven and OpenAPI Generator to generate code, build, and publish the artifact to our local Nexus. We build our code on TeamCity. The artifact is given the version that is specified in Maven's pom.xml file.
During development we only use snapshot versions, i.e. versions that can be overwritten and will be cleaned up. This is the opposite of release versions, which cannot be overwritten and need administrative privileges to clean up. The reason for this is that a developer usually changes a little bit at a time, which is much more convenient with snapshot versions. This also makes cleaning up outdated unreleased artifacts much easier.
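(For concreteness: the snapshot/release distinction is just the version string in pom.xml. A developer bumping to a new snapshot might do it like this, assuming the versions-maven-plugin fits your setup:)

    # Set a new snapshot version in pom.xml, then deploy.
    # -SNAPSHOT artifacts may be overwritten and cleaned up;
    # plain versions (e.g. 1.3.0) become immutable releases.
    mvn versions:set -DnewVersion=1.3.0-SNAPSHOT
    mvn versions:commit   # removes the pom.xml.versionsBackup files
    mvn deploy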
Our problem is that from time to time a developer makes API changes but forgets to set a new version. This works fine locally, but when the code is built on TeamCity, the changed API overwrites the artifact of the older version. A developer not working on this branch will then get a compile error, because the code no longer matches the API artifact being used.
What do others do? Is there a best practice? Preferably with standard tools. We have tried many things and nothing works well. At the same time, this issue is so basic that someone must have a good solution - or at least enough experience to point to the least bad solution.
Can I change the default precompilation location to something different than ~/.perl6? I want to allow for using different versions of Rakudo-Star on my system without having to totally remove this directory each time I switch versions (i.e. to avoid precompilation issues).
There is absolutely no reason for you to remove ~/.perl6. The whole precompilation management system was specifically designed for being able to share installed modules between different versions of rakudo. Everything should just work fine without any user intervention.
If you do come across issues, those are bugs we want to fix. So please report them to rakudobug@perl.org or on the #perl6 IRC channel. Feel free to alert me (nine) on the IRC channel about those.
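If you want to see where a given rakudo actually resolves installed and precompiled modules, you can print its repository chain (the output varies per installation):

    # Print the module repository chain for this rakudo build
    perl6 -e '.say for $*REPO.repo-chain'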
Trying to figure out the best way to use NuGet in a development environment to manage our own libraries.
We want to standardize on the NuGet way of doing things for our 3rd party libs, but would also like to use NuGet to manage our internal utility libraries. For developers consuming the in-house libs this is great and everyone's happy. However, for devs actively working on the utility lib it seems to be more problematic: their previous process of build lib, build main app, F5 and go is now slowed down with publishing and updating, and potentially lots of packages - not to mention the moaning about the additional process!
We use TDD on the internal libs, but everyone needs to be able to debug and modify the libs along with the main app. I have seen Phil Haack's demo on debug packages in 1.3 and read David Ebbo's blog, but that fits a different scenario.
So what is the best process for dev/debug cycles? If we use NuGet, do we just accept the existing constraints? Is there a hybrid practice people are using? Maybe 1.3 gets closer to automating all this, or do we avoid NuGet for internal packages altogether, which would be a real shame?
Loving NuGet, maybe wanting way too much from the little guy. Feedback appreciated.
Thanks
I'd suggest you use separate network shares or feeds (similar to what myget.org supports in the cloud) for different scenarios.
You could imagine creating a CI share, a QA share, a Releases share, ...
Make people working on the referenced library do CI builds that drop CI packages on the CI repository, for instance, and have them picked up by other projects (which just need to do a simple update; this could be automated through PowerShell in a pre-build step: check for a new version, and if so, update).
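A rough sketch of such a pre-build step with nuget.exe - the feed path and package id are made up, and the exact commands available depend on your NuGet version:

    # Refresh everything in packages.config from the CI feed
    nuget install packages.config -Source \\buildserver\nuget\ci -OutputDirectory packages

    # Or bump just the internal utility lib to its latest CI build
    nuget update packages.config -Id MyCompany.Utilities -Source \\buildserver\nuget\ci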
Just make sure that when products release their milestones, they also release with released dependencies (could be as simple as switching feeds, releases will always have a higher version number than CI builds).
Hope that helps!
Cheers,
Xavier
If you're working on the source code for the lib and the main app at the same time, I'd say NuGet is probably not a good solution. I think it'll only work in situations where you work with a "stable" version of the library that doesn't need to change frequently during the development of your main app.
That said - is it possible the development on your library could be done in isolation? You already mention you're doing TDD on the lib, so why can't that work be done first, the lib built and deployed, and then the main app work done against the published package?
Sometimes I make a mistake. After that I have a bug and for some reason the program won't run. I want to go back to a previous version. How would I do that?
In Xcode 4 you can create a local git repository when creating a project. Look here.
If you are not using Xcode 4, you can use external tools for git or svn. Just google for it, you will find a lot of solutions!
You need to use a revision control system. This is a big topic, so start by reading the Wikipedia page.
Xcode has support for a few revision control systems. This question discusses most of them, but git support wasn't available back then (although, personally, I have an irrational hatred of git).
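Whichever tool you pick, the day-to-day loop is small. For example, a minimal git session from Terminal, independent of Xcode (the path and commit id are placeholders):

    cd ~/Projects/MyApp            # your project directory
    git init                       # once: make it a repository
    git add .
    git commit -m "working build"

    # ...later, when something breaks:
    git log --oneline              # find the last good commit's id
    git checkout <commit-id> -- .  # restore the files from that commit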
I'm wondering how software development teams distribute their standard IDE(s)?
E.g. developing with Eclipse: custom code formatter, SVN repository, copyright header...
At the moment my team has a standard zip file which is then distributed among the developers.
Problem:
If one file, a plugin, or the IDE itself changes (e.g. new coding guidelines, an upgrade to Eclipse 3.5.1), the whole distribution has to be done again, and every developer needs to unzip the bundle again. Imagine you're working with different workspaces (Jetty, different Tomcat versions, WTP) due to project history. That doesn't scale.
I know that there are some related articles:
A new version of Eclipse just came out. Is there anything I can do to avoid having to manually hunt down my plugins again?
Manage Your Eclipse Install With A Local Git Repository
And some commercial programs.
Eclipse also has a new update-installer approach.
But I don't see the killer app. How does your team solve this? Is there a best practice?
I guess the best would be a program that lets you choose your current project, then downloads the configured IDE from the server and lets you know if project config files are updated.
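Part of this already exists, by the way: since p2 provisioning arrived, you can script plugin installs into an existing Eclipse with the director application, so a per-project script could at least assemble the plugin set. The repository URL and feature id below are just examples:

    # Install a feature into an existing Eclipse install via p2
    eclipse -nosplash \
      -application org.eclipse.equinox.p2.director \
      -repository http://download.eclipse.org/releases/galileo \
      -installIU org.eclipse.wst.web_ui.feature.feature.group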
For Eclipse, look at Buckminster; it targets exactly your scenario, I suppose. I didn't use it personally, though.
At my previous company they wrote a custom update agent that pulled from a centrally configured server which was updated by the team leaders. It worked well, until people wanted to install their own plugins.
Basically, a developer wanted a plugin, fought in futility to get it included in the default (managed) repo, installed it himself, and then updates broke on his machine when the team lead had a sudden stroke of common sense and included it.
They never did come up with a 'good' way to manage it. But, at least they didn't put us all on terminal servers with thin clients.