Deploying large releases into Maven Central - maven-2

Do artifacts ever expire on the Maven Central Repository?
Is there a size limit to how big each artifact can be?
I ask because some artifacts can get very big and I am concerned that this may cause problems down the line.
I'll give you a simple example. My library depends on the Boost C++ library. Boost starts out with 241MB of sources (75MB compressed). When you compile it, you end up with 2.78GB of binaries (200MB compressed) per compiler/platform combination (i.e. Visual Studio 2010, Windows, 32-bit). You then have to multiply that number by the number of platforms you want to support.
On the one hand, I don't want users building Boost themselves because it is a very painful and lengthy process. On the other hand, I get the feeling that uploading GB of artifacts per release is not the right way to go ;)
My library only depends on a very small subset of Boost, so technically speaking I could upload just that subset (at a cost of approximately 10MB per platform). I am concerned about what will happen long-term. What happens if more people begin using Boost and each one uploads the subset that they depend on...?
See http://sourceforge.net/projects/boost/files/boost-binaries/1.44.0/ for an example of how Boost modules can be split up. As you can see, individual modules are quite small.
A similar topic has come up before: http://maven.40175.n5.nabble.com/Best-practice-re-releasing-large-assembly-artifacts-td3250739.html but in my case I am not trying to deploy assemblies into central. I am trying to deploy individual artifacts that happen to be very large.
Let me know what you think.

I ended up posting my artifact as-is. No one has complained yet.

Sign up for the user list linked at the end of this document and then let's discuss it. There aren't hard-and-fast rules for what we allow into Central, but I'd like to gather more information to help you build things in the most efficient and community-friendly way.
Things in Central never expire, and there isn't a specific size limit, although we may look closely at things that appear to be oversized.

Related

IBM S/390 mainframe COBOL source code

We have an S/390 mainframe at my new job that's been running COBOL applications since the late 90's. The mainframe is getting old enough that we need to migrate to a newer system. We're a small enough business that we can't warrant spending the money to upgrade to new mainframe hardware, and the program logic has been a constant work in progress for 30+ years, so it has a lot of functional value. I've been considering moving the functionality to a Linux machine and using something like OpenCOBOL to recompile it as an executable binary instead of trying to rewrite it in a newer language.
I haven't messed with a mainframe enough to have any clue how or where to access this information, and the gentleman that wrote all of the programs is unfortunately no longer with us. I've read that SSH is an option, but I'm not even sure how to get the ball rolling on that with a mainframe. I use Linux on a fairly regular basis, so I'm familiar with SSH, but from my understanding those mainframes aren't a simple OS that you can merely connect to and navigate the file system to retrieve data like we can in modern operating systems.
Can anyone give me some pointers to get a sense of direction for accessing the source code for the COBOL programs? Are there default locations where they are stored, etc.? They're somewhat simple programs that don't use any DB2 functionality and will hopefully compile on a different system with relatively minimal debugging and fixes. I'm certain that I've left out necessary information that would help in getting an answer to this question, and I can provide any additional information that is needed to help you all help me. I suspect that SSH isn't enabled by default, but maybe I'm wrong there too. Any assistance is greatly appreciated. Thanks everyone!
Although not a programming question I'll provide some guidance I think might help you.
First, this is a business decision about where to invest.
Do we upgrade the system to a newer model, upgrade some software, and acquire the skills to keep the system running? (Systems programming, OS upgrade and cost of migration, a newer platform (a used z13 could be an economical option), and storage systems to support the mainframe.)
Migration of existing workloads to other platforms. (Cost to migrate code, sizing of performance needs, new technologies to replace existing access methods like VSAM or dare I say ISAM if the applications are old enough)
Status Quo ... leave things where they are and keep the lights on
In evaluating any option you have to assess the risk to the business and what a disruption would cost. IMHO, it's less about a technology like SSH or COBOL on Linux; it requires some serious assessment of the current state, the acceptable to-be scenarios, and the cost of pursuing one of those options.
My comments are not intended to instill fear but to provide a framework for how to approach analyzing a challenge of this magnitude.
There is no default location where source code is stored on z/OS (it is z/OS you're talking about, right?). Source code is usually stored in PDS data sets. The naming of those depends on the installation, i.e. the company, and whether or not any software like Endevor, ChangeMan, etc. is being used to maintain the sources.
Since this is old z/OS (OS/390) COBOL code, chances are the code is making use of OS specifics such as record level I/O, VSAM data sets, etc. These are the parts that will not work on a non-z/OS platform without major rewrite. So, you will need to look into the sources.
SSH is available on z/OS, but it needs to be configured and enabled. You need to check with your z/OS sysprog. FTP, and NFS are other options, but again, they need to be configured and enabled.
Transferring the sources is the least of your problems, I'd say.
I have to agree with the prior two answers, but have some additional suggestions. This is a business decision what to do on the system.
Finding the program to understand what it does is the first requirement. Since you know what program is running, that may be the name of the source file, which you will need to find. The source file will probably be in some library manager; the first place to look is the ISPF menu system. There will be an option for the library manager you are using, if you are using one. Based on your description you may be using something called SCLM, which would show up there, or you might see Librarian or Panvalet. You will need to get into ISPF by connecting with a 3270 terminal emulator. Once you find the file, using FTP or SFTP may be the best way to transfer it, or your emulator may just provide a transfer mechanism. You will need to find the related files as well, which should also be defined in the library manager.
Once you have the file, you will need to figure out what it uses as mentioned above, it will be working with some kind of data file, and that will be the biggest part to deal with.
If it is a batch program it is probably part of a schedule, and there are other programs also running that you will need to find and figure out how they fit together.
Once you have an understanding of all the parts then you can work to make the right business decision as to how this should be run. You may want to upgrade, you may want to look to getting z/OS as a cloud service if you don't want to upgrade but you want the function. Or it may be a simple program you could move. That will be much easier to figure out once you have the details.
You say the program logic has been changing for 30+ years. Was it only one person making all the changes? Would anyone on the team have some idea about the PDSs that the user had access to? That might be one of the places to look. As the previous answers suggested, most shops would have stored the source code in some kind of config management tool like SCLM or Panvalet. If you have access to the load code, there are utilities that can be used to inspect the load member to get a CSECT listing, which would have the names of the object members that make up that load. You can check with your mainframe admins. That can get you the source code file names. We use SSH from USS in our shop to move code from an HFS folder to GitLab. I have also used plain FTP to just transfer source code files to my workstation. But yes, first you have to find where it is stored.

I want a sandboxed test environment that is *always* an exact copy of Production

I'm having an issue with a web application I am responsible for maintaining.
The system experiences regular bugs, and our support vendors are always asking us to see if we can "replicate the error in UAT". This is obviously a reasonable request. A lot of the time, for various reasons (some of which are clear, some of which are not), these errors are not present in UAT. This lack of bug reproducibility in a testing environment is adding huge amounts of friction to the bug resolution process.
There are 3 key pieces of our system architecture where these bugs are flaring (the CMS, the API layer, and the database). I am proposing we set up a system job that perpetually clones these 3 parts of the system into a sandboxed test environment. This cloning would happen periodically (e.g. once every 24 hours), and automatically.
Is there a technical term for this sort of environment? Is this an established method of helping diagnose system issues? Is there somewhere I can read up on the industry best practices for establishing something like this? Thanks.
The technical term for this kind of process is replication. It is often done for some systems like databases, but normally not for testing purposes; rather, it is done to increase availability, with the replica used as a failover spare.
An exact copy of a production system, with all the data, is not something you'll find often, due to the high demand on resources. Also, at some point the two systems have to differ. Most systems (that I know of) have tons of interfaces; you just can't copy a complete system.
Also: you only need the copy of the production system when you are actually debugging an issue. And if you are in the middle of that, you probably don't want everything to go away and get replaced by a new copy.
So instead I would recommend setting up scripts that allow you to obtain a copy of the relevant parts on demand.
You might also want to consider how you could modify your system to make it easier to set up a copy.
For example, when you have all the setup automated (with Chef/Docker or similar) you should be able to set up the same system again anywhere you want, so you just have to get the production data over.
Which raises an interesting point: production data often contains secret information (because it is vital to the business, or because it is personal data). You don't want this kind of stuff hanging around in a test system everybody can access.

How do you handle technology updates in long running projects?

Let's assume you're in the middle of a long running project (long running = several years) and, as expected, there will be several things coming up with brand new releases. There might be a new .Net Framework with brand new features (e.g. Linq, Entity Framework, WPF, WF...), a new Visual Studio or V.next of your favorite Control Library, a new Mock Framework and a lot more things.
What are your guidelines for handling these technology updates? Do you adopt them instantly or do you ignore them until the end of the project? Do you have different guidelines for different things (Tools, Frameworks, supporting stuff)?
In my experience, these decisions are always made on a case-by-case basis. Several factors are considered, including:
How mature is the new technology? Does the organization like to be at the forefront working with bleeding edge new technologies, or does it prefer to work with proven tools and methodologies?
What skill sets do your people have? Are they consistent with use of the new technology, or is more training needed? Will improved productivity outweigh the time it takes to come up to speed?
What investment do you have in the existing technology? What is the cost of moving to the new technology? How much rework and rewriting of code is involved?
What is the requirement? Is it supported by the existing technology, or are new tools needed to fulfill the requirement?
What are the performance expectations? Does the new technology provide a performance improvement that cannot be met with the old technology?
What about the technological culture? Is the organization vendor specific (e.g. a Microsoft shop)? Can open-source code be used?
What is the scope of the project? Is it a large project that would benefit from supporting technologies like frameworks and tools, or is it a small project that would be unduly weighed down and complicated by these things?
How is the new technology supported? Does the vendor have good documentation? Is there someone you can talk to if you have problems? Or are you an organization that has people that know how to solve problems without a support contract?
Is the technology comfortable to work with? Does it seem to make sense? Is it clean and elegant? Do other people seem to like it? Are other people having problems with it?
Is the technology the latest flavor of the week? Has it proven itself in the battlefield to produce tangible results, or is it just a religion?
How much time do you have to learn the new technology and iron out the kinks? Do the benefits outweigh the costs?
As a very brief example, I chose Linq to SQL for my most recent project, because the project was complex enough to warrant an ORM, L2S performs well and is lightweight, we are a Microsoft shop, and it is my sense that the Entity Framework is not quite ready for prime time (even though Microsoft says that it will be the go-to framework for the future).
Stick with what you've started with.
A large and long running project often comes with a huge and highly complex code-base. Any change or upgrade to a new version of a library can add bugs in very subtle and unexpected ways.
Also: For large projects the tools and libraries used should have been tested and evaluated in the design-phase. Unless you find a show-stopper or a security issue it's best to not upgrade.
Always remember: Don't change horses in the middle of a stream. :-)
I would say different factors pitch in, like-
Say a piece of software is nearing its end of life; for example, last April Microsoft retired mainstream support for SQL Server 2000. If your product uses it, then it's wiser to go for the next version of SQL Server in your next release.
Another factor which comes into play is how much value the new features in the latest release would bring to your product. It may well be the case that the new release of the .NET Framework has nothing that adds value to your product; then that does not build a strong case to upgrade.
Budget is also an important factor. I think you need to upgrade licenses in order to step up to the next release, unless you are already part of something like Software Assurance.
Training to the team is also a factor. If the latest release is going to add to your product then you will have to train your team as well.
Well, there could be other telling factors too. These were the ones off the top of my head. I hope it helps.
cheers
If you're talking about a framework-specific example, the biggest piece of advice I'll give you is keep the system and your application separate. This is why I love patterns such as Model-View-Controller - it keeps your code modular and means you can upgrade sections without breaking the app as an entirety.
On a more practical level, if your framework has a Git or SVN repository, check out the usual 'system' directory from the repo; then you can run 'svn update' occasionally to keep up with the latest and greatest builds.
I would suggest that the project not last that long. Develop the application in smaller pieces with iterations every couple of months. That way, as new technology comes out, you can make the necessary changes and implement updates as you go rather than having to decide to redevelop the whole application. As you say, trying to develop the whole application as things change just doesn't work.
As another poster said, it's certainly a case-by-case basis thing. What you can upgrade and when is determined mostly by how hard or easy it is to test the new version of the system. Having a comprehensive automated test suite for your application helps a lot with this.
Generally, I try to update to the latest stable release of libraries and so on as often as possible, because that makes maintenance easier. If you don't update, you may find yourself patching or working around bugs in the version of the library you are using. If you update less frequently, each update will be more work because you have more changes to deal with, and it's been longer since you last touched the system, and thus you remember less about it.

How do you organize code in embedded projects?

Highly embedded (limited code and ram size) projects pose unique challenges for code organization.
I have seen quite a few projects with no organization at all. (Mostly by hardware engineers who, in my experience, are not typically concerned with non-functional aspects of code.)
However, I have been trying to organize my code accordingly:
hardware specific (drivers, initialization)
application specific (not likely to be reused)
reusable, hardware independent
For each module I try to keep the purpose to one of these three types.
Due to the limited size of embedded projects and the emphasis on performance, it is often difficult to maintain this organization.
For some context, my current project is a limited DSP application on a MSP430 with 8k flash and 256 bytes ram.
I've written and maintained multiple embedded products (30+ and counting) on a variety of target micros, including MSP430s. The "rules of thumb" I have been most successful with are:
Try to modularize generic concepts as much as possible (e.g. separate driver code from application code; see the sketch after these rules). -- It makes for easier maintenance and reuse/porting of a project to another target micro in the future.
DO NOT start by worrying about optimized code at the very beginning. Try to solve the domain's problem first and optimize second. -- Your target micro can handle a lot more "stuff" than you might expect.
Work to ensure readability. Although most embedded projects seem to have short development-cycles, the projects often live longer than you might expect and another developer will undoubtedly have to work with your code.
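As a rough sketch of the first rule (the file and function names below are invented for illustration, not taken from any particular project), the driver/application split in C usually means the application only ever sees a small hardware-facing interface, so only the driver file has to change when the code moves to another target micro:

    /* uart.h - hardware-specific interface; only the driver layer touches registers */
    #include <stdint.h>

    void uart_init(uint32_t baud);      /* configure the peripheral              */
    void uart_put_byte(uint8_t b);      /* blocking write of a single byte       */

    /* logger.c - application-level code, reusable on any target providing uart.h */
    #include "uart.h"

    void log_string(const char *msg)
    {
        while (*msg != '\0') {
            uart_put_byte((uint8_t)*msg++);   /* no register access at this level */
        }
    }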
I've worked on 8-bit PIC processors with similar limitations.
One restriction you don't have is how many comments you make or what you choose to name your methods, variables, etc. Take advantage. Speed and size constraints do sometimes trump organization, but you can always explain.
Another tip is to break up a logical source file into even more pieces than you need, then bind them by #include-ing them in a compilation unit. This allows you to have lots of reusable code (even one routine per file) but combine it in whatever order you need. This is useful, e.g., when trying to meet compilation-unit size restrictions, or to pick and choose which common subroutines you need on the next project.
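To make that concrete, here is a minimal, hypothetical example (the fragment names are made up): each helper routine lives in its own small .c fragment, and one compilation unit stitches together only the pieces a given project needs.

    /* project_math.c - one compilation unit assembled from tiny reusable pieces.
     * Each included fragment holds a single routine, so the next project can
     * pull in a different mix just by editing this list.
     */
    #include "fixed_mul.c"    /* fixed-point multiply helper                */
    #include "fixed_div.c"    /* fixed-point divide helper                  */
    #include "crc16.c"        /* only projects that need a CRC include this */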
I try to organize it as if I had unlimited RAM and ROM, and it usually works out fine. As mentioned elsewhere, do not try to optimize it until you absolutely need to.
If you can get a pin-compatible processor that has more resources, it's better to get it working on that, concentrating on good structure and layout, then optimize for size later when you understand the code better.
Except under exceptional circumstances (see the note below), the organisation of your code will have no impact on the final product. (The contents of the code are obviously a different matter.)
So with that in mind you should organise your code as you would any other project.
With that said, the following are fairly typical:
If this is a processor that you've worked on before, or will be working on in the future, you will usually want to keep a dedicated hardware abstraction layer that can be shared between projects in the future. Typically this module would contain items like routines for managing any UARTs, timers, etc.
Usually it's reasonable to maintain a set of platform-specific code for initialisation and setup that performs all of the configuration and initialisation up to the point where your executive takes over and runs your application. It will also include the platform-specific HAL routines.
The executive/application is probably maintained as a separate module. All of the hardware-specific code should be hidden in the HAL (as mentioned above).
By splitting your code up like this you also have the option of compiling and running your application as a simulation, on a completely different platform, just by replacing the hardware specific code with routines that mimic the hardware.
This can be good for unit testing and for debugging algorithmic problems you might have (a small sketch of such a HAL seam follows the note below).
Note: exceptional circumstances might be imposed by unusual compiler restrictions, e.g. I've come across some compilers that expect all interrupt service routines to be compiled within a single object file.
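As a rough sketch of that HAL seam (all names here are invented for the example, not prescribed by the answer), the application calls a small interface and the build simply picks either the real target implementation or a host-side stub used for simulation and unit tests:

    /* hal_timer.h - the seam the application sees                           */
    #include <stdint.h>
    uint32_t hal_ticks(void);                 /* milliseconds since start-up */

    /* hal_timer_sim.c - built for the host; lets tests fake the clock       */
    static uint32_t fake_ticks;
    void sim_advance(uint32_t ms) { fake_ticks += ms; }
    uint32_t hal_ticks(void)      { return fake_ticks; }

    /* app.c - identical source on the target and in the simulator           */
    int timeout_expired(uint32_t start, uint32_t limit_ms)
    {
        return (uint32_t)(hal_ticks() - start) >= limit_ms;
    }

On the target build, hal_timer_sim.c would simply be replaced by a hal_timer_<target>.c that reads the real timer hardware; nothing in app.c changes.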
I've worked with some sensors like the Tmote Sky, and I too have seen poor organization; I have to admit I have contributed to it. Anyway, I'd say that some compromise is unavoidable, because loading too many modules or splitting the program into too many parts will (IMHO) kill your resources too, so try to be aware of the threshold between organization and usability given the limited resources.
Obviously this doesn't mean letting chaos take over, but, for example, take a look at the organization of the TinyOS source code and applications; it gives an idea of what I'm trying to say.
Although it is a bit painful, one organization technique that is somewhat common with embedded C libraries is to split every single function and variable into a separate C source file, and then aggregate the resulting collection of .o files into a library file.
The motivation for doing this is that for most normal linkers the unit of linkage is an object file: for every object you either get the whole object or none of it. Since there is a 1-to-1 relationship between C files and object files, putting each symbol in its own C file gives each one its own object. This in turn lets the linker pull in only the subset of functions and variables that are actually used.
This sort of game doesn't help at all for headers; they can happily be left as single files. A sketch of the layout follows.
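A tiny sketch of that layout (the routines and build commands below are illustrative assumptions, not anything prescribed above): each function gets its own translation unit, the objects go into one archive, and an application that calls only one of the routines links in only that object.

    /* util_crc16.c - one public function per source file                    */
    #include <stdint.h>

    uint16_t util_crc16(const uint8_t *p, uint32_t len)
    {
        uint16_t crc = 0xFFFF;
        while (len--) {
            crc ^= *p++;
            for (int i = 0; i < 8; i++)
                crc = (crc & 1u) ? (uint16_t)((crc >> 1) ^ 0xA001u)
                                 : (uint16_t)(crc >> 1);
        }
        return crc;
    }

    /* util_reverse.c - same library, but its own object file                */
    #include <stdint.h>

    void util_reverse(uint8_t *buf, uint32_t len)
    {
        if (len < 2)
            return;                              /* nothing to swap          */
        for (uint32_t i = 0, j = len - 1; i < j; i++, j--) {
            uint8_t t = buf[i];
            buf[i] = buf[j];
            buf[j] = t;
        }
    }

    /* Build (illustrative):
     *   cc -c util_crc16.c util_reverse.c
     *   ar rcs libutil.a util_crc16.o util_reverse.o
     * An image that only calls util_crc16() pulls just util_crc16.o out of
     * libutil.a; util_reverse.o never reaches the final binary.
     */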

Accurev SCM [closed]

Does anyone use Accurev for Source Control Management? We are switching (eventually) from StarTeam to Accurev.
My initial impression is that the GUI tool is severely lacking, however the underlying engine, and the branches as streams concept is incredible.
The biggest difficulty we are facing is assessing our own DIY tools that interfaced with StarTeam, and either replacing them with DIY new tools, or finding and purchasing appropriate replacements.
Additionally, is anyone using the AccuWork component for issue management? StarTeam had a very nice change request system, and AccuWork does not come close to matching it. We are evaluating either using AccuWork, or buying a 3rd-party package such as JIRA.
Opinions?
Sweet mother of god is Accurev awful. 300k lines of code? Try it with millions, with hundreds of developers working on scores of projects.
Continuous integration? Sure, that's something that developers can approximate by doing regular merges in, say, perforce, git, mercurial, or any of the countless other tools that actually gets the work done, but it becomes the choice of the developer as to how to proceed. For architects, leads, build engineers, or anyone who actually uses source control to slice and dice, Accurev is horrific.
I went to an "Advanced Accurev topics" talk, and the first tidbit was a large shell command for clearing out Accurev's client-side caching/sync mechanism to correct for when Accurev updates silently fail to pull down files that should be updated.
A Timestamp Optimization checkbox? Deep overlaps? Modal dialogs with only one background process? (That would be okay if those processes were anything other than glacial.) Cascading graphs of selectively configured streams put in place just to be able to pull off components and cross-merging? Updates aren't actually atomic without time-locking?! (honest answer: update again)
Every time I try and do anything serious in Accurev, I feel like I'm playing Russian Roulette at a table with HAL9000, Skynet, and a Speak & Spell. On the line? Four more hours of my life.
Why am I here, griping about Accurev? Because my other machine has taken a full four hours to try to update 10MB of files over VPN. Why? Because some other change has come up stranded and requires some sort of catastrophic resync scan for elements. The worst part? All of these files were on a workspace on the same computer. We're talking about several hours just to get a recently updated workspace to the point where I can put the right notches in the stream history.
One word Accurev review: Avoid
Honestly, I feel like I need to double-check to see if I'm using the same tool as these folks that seem to like Accurev. I used Subversion in my previous job and liked it a lot. We never had any issues with it to speak of, and of course the price is right. My biggest problem with Accurev is that it seems they felt the need to be different for difference's sake. It uses a completely different vocabulary to express versioning concepts that, even after using it for almost 6 months, feels very foreign to me. It has no fewer than 8 or 9 states any given file can be in, compared to about half as many for Subversion. The GUI is crappy and slow, and the IDE integration plugins are sub-par. I had assumed that at some point I would "get" Accurev and see why it's so much better, but that has yet to happen. My advice is to stay away.
I have used AccuRev for nine months and I anxiously await the day I use it no more. My one line review is:
It's like source control written by developers who have read about it in a book, but have never actually used it before.
Basic concepts are missing or extremely complicated. For example, I've just lost 8 hours of work because there's no good way to "revert" a change once it's in a stream. You can "purge" that transaction - but that's it, it's gone; you can't then cherry-pick the changes you really wanted.
The GUI is slow, bloated and inconsistent. Warnings are cryptic, e.g. "error merging element id 1234556". Every single dialog box is modal. As one poster said, there are 9 states a file can be in - but what's more, you must manually click through a list box of 9 options to see the setting for each file.
The streams model sounds like a good idea, but the default behavior of "inheriting" changes from a parent stream is actually incredibly bad in practice. Just say the word "Deep Overlap" to anyone who has really used AccuRev and watch them shudder, turn pale, and/or faint. Making the streams is very easy, but actually merging them with any meaningful differences is arcane and non-deterministic.
No one has mentioned this, but the whole system of "include/exclude" rules to manage file and directory filters is completely broken. This system lives outside the transaction system so there's no way to revert, track history or reproduce changes to a live source stream - for example when Johnny Intern decides the "core" library isn't useful to the entire development team.
The only reason I can account for Accurev's popularity is that it is optimized for the "demo to management" case. We're using AccuRev for serious software development - dozens of projects and many more developers. The streams and the GUI look great, but after a few weeks' use the varnish comes off, revealing an old, busted, Mechanical Turk-like system.
Stay far away from Accurev - use Git or Mercurial if you want something modern and free, or Perforce if you want something rock solid, well-supported but expensive.
EDIT:
As a postscript here's one of the many examples of the lack of care and general shoddiness in the UI:
The default difference viewer has its numbering "off by one" - for example, if you have 2 diffs in a file, the viewer shows diff "0 of 1" and diff "1 of 1". I mean, really, would you feel comfortable trusting your code to a system that exhibits such a stupid and easily fixable bug?
I worked with (and administered) AccuRev for more than a year recently, and most of my impressions are very good.
We evaluated it alongside "Plastic SCM", SVN and "ClearCase-UCM" (which we already owned and used), and decided to dump ClearCase and SVN (both were used in two different groups) and to purchase AccuRev.
First, the stream architecture is a much more solid, easy and safe SCM method than the old branching architecture that all the other tools are tied to (yes, even ClearCase's "streams" are eventually a wrapper around branches). There are a lot of articles about the differences on their site; you can search and read about it to make it understandable.
The timesafe architecture - you can't delete anything from the depot (= repository) database. I've seen tools where this operation is possible with the proper admin permissions. In AccuRev you use internal commands in order to change or fix a mistake you made, which in turn are recorded as new transactions as well. Very smart, very safe.
Integrations! AccuRev integrates with so many tools (to give you an ALM bundle) - bug tracking tools (like JIRA, ClearQuest), IDEs, testing tools (Quality Center), and if you can't find one you can write your own (they provide Java/Perl/XML/CLI SDKs).
Change packages: I don't know about you, but I can't stand SCM tools that don't supply change management (did anyone say SVN?), like ClearCase's "activities" and AccuRev's "issues". It's a must in my opinion, and one of my CM "best practices". And they can be integrated with your bug tracking tool too, so your users can work on real tasks like features and defects.
The support is just amazing. As a former client of IBM (because Rational ClearCase is now part of IBM), the shift to AccuRev was simply awesome. During the evaluation they gave us extensive online support to understand how we wanted the tool to behave for us, so we tweaked it together even before we paid a cent. And they kept that degree of responsiveness after the evaluation period too; we had a problem during an upgrade from 4.5.4 to 4.6, and in just a couple of hours (while the upgrade was still in process) the support guy contacted me back, suggested a few tips, connected to my desktop and fixed the problem before any other company's support would even have started trying to figure out who you are. Of course, if you choose an open source tool then you're on your own!
Also, the tool comes with a help system, which can even be too verbose sometimes. And don't forget the forums (especially on cmcrossroads), which are very good at supplying rapid answers too.
And there's so many more....
Of course there are also drawbacks (which software is perfect?) - I would like, for example, to see the file(s) <-> issue(s) association at check-in too, like in ClearCase, not just at "promotion" like it is today - but IMHO these are really minor.
So, as you can understand, if you read it all, I'm a big fan of AccuRev, and I'm highly recommending it. IMHO, it is today one of the best SCM tools you got the chance to work on; modern, wise, easy and strong.
My current client uses Accurev for SCM and after a few projects using a DVCS like Git or Mercurial, I can honestly say that using Accurev is about as enjoyable as closing your face in a car door.
The GUI for Mac and Linux is god-awfully slow. You can forget about using the refactoring support in your IntelliJ or NetBeans IDE if you use Accurev... that is, unless you are going to write your own plugin.
Oh yea...let's not forget about this little chestnut ==> evil twin.
On a positive note, it could be worse...it could be Clearcase.
Accurev sucks! It's overcomplicated, and the price is the team's productivity.
I've worked with several SCMs, and the idea behind Accurev is great but not practical. It's merge hell, with a hierarchy that looks good in the UI but is a pain to deal with in real life.
Especially when you refactor your code (something some people actually do every once in a while), you get into a mess when a defunct file is not promoted all the way up. Or even worse, somebody else overrides a defunct file and creates a new file with the same name, etc.
The UI is incredibly terrible, and honestly it doesn't matter how good you think the backend is; you will still use the UI (I use the VS plugin, which is half decent except that it freezes the IDE sometimes, nice huh!).
If you live in the 80's and are planning to use the command line for your day-to-day work, then I guess you can avoid the UI. If you have an integration build server then of course you have no choice but to use the command line (no native tasks for MSBuild/Ant/NAnt that I know of). I just heard that they are doing some work with http://www.electric-cloud.com/. I don't know anything about it yet.
Accurev is new, so there are few resources available online, as opposed to SVN, for which you'll find tons of integration work done by hundreds of people (with JIRA, for example).
If you are a manager, Accurev will make you feel good looking at the streams, because it does look pretty - as long as you don't have to deal with it.
If you are a developer... (a junior developer will not care much; he/she will do whatever you ask them to do.)
If you are an architect who refactors a lot, re-addresses architectural decisions, etc., you will find Accurev to be your worst enemy; moving stuff around is a pain. Very anti-agile if you ask me. It's not fluid.
If you are a build engineer, you will find it a PAIN to get all the developers to follow a procedure, which you will have to do if you use Accurev (e.g. promote their code to the agreed-upon stream in preparation for a release).
SCM is supposed to make things easier... I don't see Accurev doing that at this point. It's still not mature enough. If you want to be a pioneer and struggle in the hope that things will get better, go for it.
Otherwise, don't reinvent the wheel and go with something more established with many more case studies and applications. To be practical, what Accurev claims to offer that differs is not worth it when you deal with its pains on a daily basis.
We've been using AccuRev for a few years now. It's a serious improvement over our last tool (Razor), and while I'd recommend it to others, it does have a few drawbacks.
Benefits:
The stream based interface is quite intuitive. I make snapshots every second week and have a number of ongoing development streams branching off the snapshot.
Moving changes between stream is really easy, just select the change, send it to the "change palette" and select the destination stream. It guides you through all the files that need to be merged.
The command-line utilities are great. We've managed to script most of our release generation around it.
Integrations for Visual Studio, Bugzilla, etc...
Drawbacks:
As monjardin pointed out, the client GUI can be slow. I use the Windows version for all my history/stream searching since it's much faster than the X11 one. Of course, the GUI is written in Java, so performance obviously wasn't their first concern.
It's starting to get slow for really large databases (I'm talking over 300,000 LOC), although they've apparently addressed it in today's release of 4.7.
We opted to go with the cheaper license and not get the change packages feature (I can't see them working that well anyways, as the entire idea of promoting individual changes flies in the face of continuous integration). So far it hasn't hurt us.
Overall, for the price you pay it's a nice tool. We evaluated ClearCase, MKS, Spectrum and Subversion during our trial period. Subversion may have been a good choice, but it was still pretty green when we were evaluating. I've never heard of Plastic before, but I regret not evaluating Perforce.
Also, I understand that the engineers over at Trolltech (makers of Qt) have recently switched to git. I'd be interested in checking that out as well.
We have been using AccuRev for 4 years already. I hate it very much, mostly because of its horrible GUI. Several years ago AccuRev sent out a survey for their clients to fill in, and at the end of the survey there was a field for suggestions. I started collecting the things which annoy me the most, and below you'll find what I have now. Unfortunately, it's full of AccuRev terminology, but I think you'll get the idea anyway.
Accurev GUI possible improvements
Working with history
When examining history, a developer most often wants to see the diff to the previous transaction/version. This should be as accessible as a double click. For example, double-clicking on a file in the transaction log could open a diff to the previous version, double-clicking on a file in the Default Group filter could open a diff to the backed version, and double-clicking on a file in the Modified search could open a diff to the most recent. That would save tons of time.
Common experience is that developers rarely open files for editing from within AccuRev. Rather, they very often diff files, then revert or promote changes. So a double click should not open files for editing; it should diff them instead. This could be an option in preferences, so different people can decide whether they want double click to diff files or open them.
It should be possible to select two transactions in stream or workspace history and perform a file diff between them.
Overlaps merging
Merging overlaps in a stream requires performing a "Deep Overlap" search in a workspace, which takes much more time than searching for overlaps in a specific stream. Then you need to sort the overlaps by overlap stream and merge only those from the specific stream. There should be a more convenient way to merge overlaps in a stream, for example the ability to limit the deep overlap search to a specific stream and not show overlaps in parent streams. Limiting the Deep Overlap search by a timelocked stream is not very useful if you are several streams below that timelocked stream or there is no timelock on the parents at all.
Now there is a simplified way that involves creating change palettes, but it is still not convenient. The Merge menu item should be available at stream level if there is a workspace under that stream that can be used for the overlap merge.
Annotate tool
The Annotate tool is very awkward:
using the slider at the top to browse different versions resets the position in the file, which is VERY annoying for large files;
you should be able to open history at a specific transaction directly from the Annotate tool. Now the developer needs to remember the transaction number and search for it in the stream history (and also needs to search for the stream where the transaction was made).
Stream Favorites
The context menu item "Add to stream filter" was removed when the new stream favorites were introduced. It should be possible to right-click on a stream and add it to one of the stream favorites (a 2nd-level context menu, or a dialog could pop up). Now it is very annoying to edit stream favorites, particularly when you need to have 2 similar sets of streams.
Stream browser
It should be easy to copy a stream name to the clipboard. Now you need to open the "Change stream" dialog for that. Ctrl+C in the stream browser could copy the name of the selected stream to the clipboard.
There is no way to copy the stream name from the stream view. A right click on the tab could copy the stream name to the clipboard, or show a context menu with a "copy stream name to clipboard" item in it.
Diff and merge tool
It shows only the first different character in a line, not the whole line difference, and does not highlight syntax. Luckily, the diff tool can easily be switched to an external tool, so this is minor.
Other suggestions
Option in preferences to enable Multiple columns sort mode by default.
It would be nice to save not only the latest keep/promote log, but at least 5-10 older ones.
File extension column in stream or workspace view with ability to sort on it would be great.
Reordering tabs would be nice.
Very small font in keep/promote/lock message under Windows, it's unreadable. Increase the font size or allow user to change it.
Implement a more convenient way to locally ignore files; the environment variable is not very useful (a user may want to ignore different sets of files in different streams/depots).
For the last 3 years AccuRev added 3 things from this list (I removed them as they are already implemented):
Hardcoded (can't be customized) keyboard shortcuts for most actions
Made it possible to call diff for several files from a transaction at once (before that, one had to right-click on every file and call "Diff to previous version" from the context menu).
Added text search to the Annotate tool. But because of the position reset when you try to switch to a different version (see above), the Annotate tool is still unusable.
Besides GUI there are fundamental flaws in AccuRev in the whole:
Difficult to update backwards
You can't easily update backwards. There is an accurev update -t <transaction-number> command, but if you updated to transaction 100, you can't update to transaction 95 using accurev update -t 95. In order to do that you need to set up a time lock on your backed stream (which will introduce a transaction in AccuRev) and update your workspace.
Deep Overlaps
When you update, you may end up with an invalid state of the sources without any notice. This is because of the Overlaps feature. An overlap is basically a conflict (when a file is changed by both you and them). If you have an overlap in your workspace, you'll need to merge it before you are allowed to update. But if you have an overlap in the stream under which you have your workspace, you won't get any notice about that; the overlapped file simply won't be updated in your workspace. Consider the following stream structure:
[Depot Root] <- [Team stream] <- [Your stream] <- [Your workspace]
Let's say you changed foo.cpp and promoted it to [Your stream]. After that, someone from your team changed both foo.h and foo.cpp (say, added a method to the class Foo) and promoted the files to [Team stream]. After you update your workspace, you'll get the new version of foo.h (because you didn't change it), but you won't get foo.cpp because it's overlapped in [Your stream]. So your update will go clean, but the linker will complain about an unresolved symbol Foo::NewMethod if you try to build after that.
I've been a long time Accurev user, and have recently moved to a job where I'm using Perforce. I gotta tell you, I wish I had Accurev back. I do agree - the UI is slow and has problems.
However there are some truly AWESOME visualization tools in there. I can't believe that anyone would look at the version history browser and not fall in love! The stream browser is a great simple tool to understand what is going on in your development organization.
Also, dirt simple to administer. Accurev is actually one of my favourite tools.
One of the best days I've ever had at my current job is the day we ditched Accurev and moved to Subversion. Accurev uses overly complicated concepts. Like one of the commenters above, after working with it for years I still didn't understand the different states that artifacts could be in. It seems that Accurev's greatest asset is its whitepapers and stream visualization, both of which are very appealing to management but do nothing for developers. I use Subversion, Mercurial and Git for various projects and would recommend these tools over any other.
Accurev is an anti-agile tool:
The main idea of Accurev is to use different streams for different teams, so changes made by team1 won't affect team2. Sounds good, but in the real world we all know that in the end we have to merge the code from both teams, and believe me, it's a nightmare in Accurev. The more changes both teams make in their streams, the more time everybody will spend on merging at the end. It's the same as if every team did their development in a separate branch using SVN and tried to merge everything after 1 month of development... Basically Accurev creates a late-merge price, and you are going to pay that price forever if you choose Accurev for more than 1 team.
In order to fix the problem created by point 1, people decide to give up cross-functional teams in favor of functional ones. They even provide an argument to support this idea, the "knowledge expertise" principle: in other words, when you don't have cross-functional teams (and Agile as well) it's easier to have experts for a particular part of the system, so they will perform code reviews better and act as "information/design/implementation experts". We all know that the information expert is an anti-pattern, not only in Agile, since it's better to spread the expertise in order to avoid knowledge bottlenecks in development.
Put me in the anti-Accurev camp. We moved to it recently, and it's been horrible. We have a number of quite large projects, and Accurev seems to be almost unusable for the quantity of files we have. Over a VPN, forget it. It takes forever to update, the cross-stream management doesn't work in any intuitive way, the UI is complex and slow.
Additionally, support for it in a number of tools we use is either non-existent or poorly implemented.
Add the various bugs that keep popping up, and I'd say we wasted a great deal of money for something that is done much better by open-source software, such as Subversion. We still use CVS for some projects, and even it is so much better for normal operations and workflow that I'd pick it over Accurev.
Another big thumbs down for Accurev. Every simple operation seems to become horribly complex - cryptic error messages send you scurrying to the manual, only to find theoretical explanations about concepts that shouldn't have existed in the first place.
UI is so slow and unresponsive it makes you want to gouge your eyes out.
Stay away.
Accurev has some great concepts, but suffers from:
1) many, many inconsistencies in the command-line interface.
2) many bugs and nuisances in the application/interface. E.g. their time-safe property is not actually time-safe at all because of several bugs that affect snapshots and pass-through streams.
3) major bugs in critically important features: as above, time-safe bugs and bugs in merging by issues.
4) being a year behind where they should be, because they wasted a whole year on trying to move their backend to a database - this will be version 5, which may never see the light of day.
5) excellent marketing, but the product does not live up to the marketing hype.
6) every release having major critical bugs that required them to ship immediate hotfixes. This has been a major disruption for us. And these aren't minor bugs.
7) not scaling well... it takes up a huge amount of disk space and gets slower over time.
Having said all that, it's still a good product; but if I were to do it all again I'd consider Git instead.
After 4 months, my very negative opinion hasn't changed at all. While Accurev has some very nice concepts, the slowness and complexity far outweigh the advantages, at least for us. Aside from the usual complaints about the GUI and the obscurity of a number of features, one of the absolutely most annoying faults is how many hoops you have to jump through just to update a workspace, made much worse by the inability to update only one directory (or directory tree).
A typical update consists of waiting a loooong time to be told you have overlaps. Of course, you aren't told what the overlaps are. So, you have to do an overlap search, wait another loooong time, resolve the overlaps, do another update, wait a looooong time, and hope it worked this time.
Some of our remote developers update as infrequently as possible because the update time over VPN is absurd. Granted, we have an enormous number of source files across a number of products, and if we reorganized everything we could probably improve performance.
However, we hired Accurev (at a significant cost) to come in and tell us how to set everything up. Still sucks. Aside from that, we really shouldn't have to reorganize the way we work with our sources to suit a source-code-control system. It's a tool, not a business model.
Lastly, we've been trying out an Accurev plugin for IntelliJ, written by Accurev. It works just as poorly as the rest, and, while Accurev has been very responsive about fixing the plugin, we aren't their QA group, nor did we sign up to be an alpha test site (yes, it's that buggy). We finally gave up and wrote our own plugin that actually works.
#Steveth
The interface is lousy... however, the streams model is very innovative.
Being able to create a stream for a new project off the trunk stream, and having 5 developers working on it, and not having any form of merge collisions when we merge that stream back into the main trunk is unheard of, yet it works well in Accurev.
At a previous employer we reviewed Accurev and Plastic SCM. At the end of the day, I was not impressed with Accurev's interface, or the so-called "streams". We went with Plastic, and nobody complained.
#Jonathan
The streams are interesting, but I don't see how any version control can magically avoid collisions when two people touch the same code in the same file. Accurev's model was intriguing, but at the end of the day, nice clean branching and merging with a drop-dead easy interface made Plastic the choice for us. Plastic's timeline view (I forget the actual name), showing the branch/merge/check-in history, made it very simple to review the history of the project from a bird's eye view.
Accurev is simply the worst tool I have ever used.
Subversion is very good, esp if you are migrating from cvs.
My company has been using Accurev since early 2010, coming from StarTeam before that and CVS in the very distant past. I haven't used CVS (having been on a different team at the time) so I have no comparisons there, and I never bothered to learn StarTeam too intimately.
Since then I've also played with both the CLI and Tortoise versions of SVN, Git, and Mercurial (Hg) in my free time. I plan on giving Git a more thorough go at some point, but I found Hg to be much more intuitive and easy (at least under Windows). Anyway, like I said, management saddled us with Accurev, and after spending time getting fairly well acquainted with it (GUI and CLI both) as a developer... I absolutely hate it.
Someone earlier in the thread summed it up as software written by devs that had read about SCM in a book but never used it... I agree whole-heartedly but you also get the feeling that they had the same level of experience with GUIs, efficient processing, etc. (In fact, I see that Accurev has a new product called "Kando" based on Git...sounds like they've finally realized how bad their model is. But to quote a coworker "I wouldn't trust anything written by the same team at this point"... I have to wonder if it is a coincidence that there is a baby-wipe product named "Kandoo"...)
Ok, obviously I don't care for the product. If you've spent the time to read this thread, then obviously there are quite a few folks with similar views on it. But I wanted to share some of my own gripes that I've had with it over the last few years as well -- btw if it helps anyone, I think we were using v4.7 previously and have been on v5.3 (?) now for quite some time.
My biggest beef with Accurev is how horribly slow and inefficient it is. Notice I didn't use the word GUI -- I've tried both GUI and CLI-- the slow parts are on the server, so you're screwed either way. It seems like I see one of those damn modal dialog/status bars at every turn... I switch tabs -- bam!: processing, please wait. I reparent a stream -- oh wait just another minute. For "Updates" I expect it to be a little slow (although sometimes it gets annoying when it screams "Overlap" [aka a conflict] at me when I happen to have a file with IDENTICAL content to what it's pushing down). I change directories browsing to a path... processing, processing, "oh you want to go down one more sub-folder"... let me process that some more. You get the idea.
Is that my only beef? Hell no.
1) For the merge tool, I've had the "ignore white space" option checked for years, but I can only ever recall it working ONE time (for example, say we're talking about comparing 2 versions of a JSP where I converted spaces to tabs or trimmed some trailing white space or something). Why is this an issue? Because it becomes pure torture for every other developer that looks in the history and wants to see what REALLY changed. If they can't implement this correctly, don't put the F***ING option there. (Note: using WinMerge as an external compare tool, with appropriate settings, works fine.)
2) I've had instances where checking a file into one stream and then needing to put an IDENTICAL copy of that same file into another stream (using the same issues #) causes it to throw a temper tantrum. If I use the wrong issue #, it goes in with no problem. This is probably an isolated case (and maybe due to other poor process decisions my company saddles us with) but I thought I'd mention it for completeness.
3) The history? All stored on the server. Translation: If you enjoyed waiting for it to switch tabs, create/reparent a workspace, and update then you're in for more of the same when you want to view history.
4) The way its exclusion rules are done is not only terrible but also pathetic. Under Windows, you actually have to create an environment variable where you can define exclusions for files that you don't want to show up. IT DOES NOT SUPPORT REGEX. I've seen several other SCMs that offer much better approaches (I'm fond of the ignore files used in Hg; I think there is something similar in Git too) - not only are both regex and glob patterns supported, but defining this in a FILE is more system-friendly and much easier to edit than putting it into an environment variable. Not only that, but it seems that the ignore filters are iffy at best. The way our projects are defined, the build folder sits under the project folder (which is source controlled), and trying to exclude all folders under the build folder doesn't seem to work - most of them still show up in my "External" filter even after setting up rules.
5) Its check-in process (a "Promote") also seems to run with the theme of slow and inefficient. We use an external ticket system (not AccuWork... our ticketing system has its flaws, but after using AccuRev I can't imagine that product being much better). Anyway, when we say "Promote [this file]", first it pops up with another modal dialog (after the required waiting, while it does more stat processing), then it presents a list of ALL tickets it has pulled (there are a lot... too many to reliably find anything). Next, we must enter our ticket number from the other system and wait some more while it takes forever to find a match (I thought it already pulled the list... geez). Finally, it displays the matches, we pick one and tell it to promote using that ticket number. After yet more waiting, we're finally done.
I could go on but I'll stop there.... this post is getting too long. Instead, let me sum up Accurev in my own way: After having to wait for all these slow annoying "Stat processing", etc dialogs during an issue where we were trying to quickly get a fix out, I came up with a new slogan for them: "AccuRev: when seconds count, your fix is only minutes away".
Since management won't get rid of Accurev (I know they won't go for anything without Enterprise support but I've begged for them to consider anything else: SmartGit...Kiln...Perforce...), I have been using TortoiseHg to locally version control my files (in addition to Accurev). It is a little more work. But for those saddled with Accurev, it makes life so much easier. You get: better diff management -- MUCH MUCH easier to see and review code changes after an "accurev update", the ability to view some history without waiting 10 years for the server, ability to share directly between you and another dev (assuming they also install it), ability to revert/restore your changes if you accidentally wipe something out while trying to get clear of Accurev's merge hell ("Overlapped" files), and even more if you can get the rest of your team using it.
EDIT: Forgot to mention, during a conversation with our build engineers I was told that while Accurev has a Java API that you can develop for, it apparently requires purchasing some sort of additional licensing. I can't confirm this since a) I can't find pricing anywhere on Accurev's website* and b) I doubt like hell they'd tell me at work...
*Kinda weird considering I can find some sort of rough pricing for Perforce, Kiln, StarTeam and SmartGit quite easily. I usually get a sketchy feeling when some product won't list any sort of price up front, guess it shouldn't surprise me too much that Accurev falls into that category...
Well, all I can say is that I completely agree. The back-end is great but the UI sucks. The stream functionality is great because it makes merging a no-brainer, as all changes from parent streams are automatically propagated to all children. I wrote a post about the Accurev UI that explains most of the shortcomings I've come across over the last 2 years.
The short answer:
Use the latest SVN server and SmartSVN (the community edition is free) as a client.
You will not pay anything and you can get everything you need.
The gory details:
BTW, the feature of imposing change management rules during check-in is trivial to write as an SVN hook. We did it in a couple of hours, in one hundred lines (or thereabouts) of code -- it works wonderfully and never broke. It integrates SVN with Bugzilla and imposes rules such as the following (a minimal sketch of such a hook appears after this list):
In order to commit you have to enter a message
In order to commit you must have entered a Bugzilla ID that is in a "Valid" commit state.
... and so on, you can build your own rules to your heart's content
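For the curious, a hook along those lines really is only a screenful of code. Here is a minimal sketch in Python, not the original poster's implementation. Assumptions: the Bugzilla ID is written in the log message as "bug 12345", and check_bug_state() is a placeholder for whatever query your Bugzilla installation exposes (database or API).

    #!/usr/bin/env python
    # Subversion pre-commit hook sketch: SVN invokes it with the repository
    # path and the transaction name of the in-flight commit.
    import re
    import subprocess
    import sys

    def log_message(repos, txn):
        # svnlook can read the log message of a transaction before it becomes a revision
        return subprocess.check_output(
            ["svnlook", "log", "-t", txn, repos], universal_newlines=True)

    def check_bug_state(bug_id):
        # Placeholder (assumption): replace with a real lookup against your
        # Bugzilla instance; return True only for bugs in a "Valid" commit state.
        return True

    def main():
        repos, txn = sys.argv[1], sys.argv[2]
        msg = log_message(repos, txn).strip()

        if not msg:
            sys.stderr.write("Commit rejected: a log message is required.\n")
            return 1

        match = re.search(r"bug\s*#?\s*(\d+)", msg, re.IGNORECASE)
        if not match:
            sys.stderr.write("Commit rejected: reference a Bugzilla ID, e.g. 'bug 12345'.\n")
            return 1

        if not check_bug_state(match.group(1)):
            sys.stderr.write("Commit rejected: bug %s is not in a 'Valid' commit state.\n"
                             % match.group(1))
            return 1

        return 0

    if __name__ == "__main__":
        sys.exit(main())

Install it as (or call it from) REPOS/hooks/pre-commit, make it executable, and Subversion will refuse any commit that breaks the rules.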
Accurev seems to be marketware to me ... lousy GUI client ... very slow (we had to upgrade the HW to make it actually work effectively), and of course ... you have to pay for it! Ah yes, if you do use it, I hope that you do not have to replicate your server between some place in the US and some place in India :)
Perforce is more robust but it is not very easy to administer. In any case, it is a superior product in comparison to Accurev.
VSS and stuff like that should not even be considered as "version control" systems when it comes to writing professional software (typically, enterprise software) in the 21st century. That's like writing your reports on a typewriter ;-)
If you know what you are doing (with your software) then SVN will be a robust and efficient solution for you. With (at least) two robust and efficient revision control systems in existence today (SVN/Git) there is very little room to justify working with a proprietary solution; some reasons could be "inertia": you have it, you don't mind paying for it, and you haven't had any major issues -- in other words, it works for you.
I use SVN everywhere, when it didn't exist I was using CVS, and before that ... no I am not going to tell you how old I am ;-)
Hope this helped ...
Ciao.
I've been the SVN and Accurev administrator for some time. Accurev took a long time to grow on me - about six months - but I like it now for a corporate enterprise environment. Here are a few things to consider.
Pros:
Personal code history
The code changes are kept on the server when the user performs a keep. The keep is personal to the user and doesn't distribute to other users until a promote command is issued.
The code captured by a keep is stored on the server and remains available even if the user performs a revert operation.
In most cases, promoting the code to higher streams for distribution is fairly simple.
Administration is fairly simple
Installation works well
Performance is much improved in version 5.3, which changed the backend to a PostgreSQL database
The CLI is rich and extensive (a rough example of the day-to-day commands appears at the end of this answer)
Cons:
A real clunky user interface
Resolving overlaps can be complex, just like conflicts in SVN
However, like any complex tool, your appreciation will increase the more you understand and know about it.
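To give a flavour of the keep/promote model and the CLI, the everyday cycle looks roughly like this from the command line. The command names are the ones already used in this thread; the -c comment flag is how I remember it and may differ between versions, and the file name is obviously made up:

    accurev update                              # pull changes from the backing stream into the workspace
    accurev keep -c "WIP: null check" Foo.java  # private checkpoint: stored on the server, visible only in my workspace
    accurev promote -c "issue 123" Foo.java     # publish the kept version to the backing stream

Everything you keep stays on the server, so even a later revert doesn't lose the work -- which is exactly the "personal code history" listed under Pros.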
I used AccuRev at a previous job and didn't have any problems with it, but I very much prefer Subversion (even without comparing the price difference). I remember the client GUI being pretty slow too. Also, I do recall that the GUI just called their command-line utilities to interface with the repository. So, it probably won't be that hard to use those interfaces for your DIY tools.
I have used Accurev for one year. I don't like it. Here are some problems I encountered:
1. Its GUI is terrible: it's so slow that each time I switch between tabs (streams and workspaces) or perform some actions I have to wait several seconds. It sometimes gives you a confusing error message that doesn't help you find out what's wrong.
2. It has so many concepts that you have to spend much time learning Accurev itself.
3. I once encountered this problem: I had a version-controlled file that gets modified by our build process. Later my teammate moved that file to another location in his workspace and promoted the change. When I ran "accurev update" it simply told me "some file has been moved" and everything looked normal. But actually the command had stopped at the moved file and never updated the remaining files. It's very confusing - the update command did not update the workspace, but you have no idea about it. The only output, "some file has been moved", looked just like any other verbose output. It did not tell me my update had failed or aborted or anything else.
Before that I used SVN and ClearCase. SVN is a great tool, simple and easy to use. And I did not have so many complaints about ClearCase. Accurev is really frustrating...
I've just come across this discussion and thought I would share our experiences with AccuRev.
We have been using the Dimensions SCM from Serena for around 8 years. Two years ago we had a major problem integrating our India-based development team with our UK dev team. It was clear that we were not going to meet our needs with the current system, and hence we set about evaluating a number of options. I discuss all of this in this article: How We Integrated Our Offshore Dev Team.
Our experience of using AccuRev has so far been very positive.
It is easy to setup and administer.
Users are able to get going very very quickly (especially important for the India dev team)
We've never had a problem with speed (in fact this is one of the main plus points for us)
The replication works like a dream
I do agree that the UI can be a bit clunky (especially the Unix client). I am hoping that it will be better in the latest version when we update to that next month.
All in all I would say that this was one of the best decisions and purchases we have made.
Note: I am an AccuRev user and I like it very much. I have already upvoted a few answers here, and would like to add:
I've just recently stumbled over this "review" of AccuRev in the book Continuous Delivery by Jez Humble and David Farley:
[Chapter 14, p 385]
Commercial Version Control Systems
(...) the only commercial VCSs that we are able to wholeheartedly recommend are:
(...)
AccuRev. Offers ClearCase-like ability to do stream-based development without the crippling administrative overhead and poor performance associated with ClearCase.
(...)
To which I might add that I never have used ClearCase, but I am the AccuRev admin around here, and it is indeed very little work to administer. (WRT performance, this question might give more insight.)