Is Process Table the same thing as the Job Queue in UNIX?

As a student learning Operating Systems, I learned that there is a data structure in the kernel space called the "Process Table", which maintains all information about processes.
Later on, when I got to the topic of scheduling, I was told that all processes that enter the system are first put into a data structure called the "Job Queue", which also seemed to maintain general information about processes.
This got me thinking: is the "Process Table" here the same as the "Job Queue"? Maybe this is a trivial question; I just want to make sure I understand things right. I know that I might need to look into the Linux kernel source code to figure this out, but could anyone give me any quick insight?

Is Process Table the same thing as the Job Queue in UNIX?
No, they are not the same.
Read this textbook about operating systems, and the Wikipedia page on processes.
The Linux OS is mostly open source. You can download (here) and study its source code, and look inside the source code of GNU bash or GNU make to understand better how syscalls(2) are used. You could also play with strace(1).
See also the kernelnewbies and OSDEV websites.
Of course, the kernel has a lot of data structures (and millions of lines of source code). Read more about the kernel scheduler.
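For a quick, hands-on look at the kind of per-process information the kernel keeps (what textbooks call the process table), you can read /proc on Linux. Below is a minimal Python sketch, assuming a Linux system; the Name and State fields are standard entries in /proc/<pid>/status. Note that this shows only the process-table side of the question; the scheduler's run queues are internal kernel structures and are not listed this way.

    # Minimal sketch: list processes by reading /proc (Linux only).
    # Each numeric directory under /proc corresponds to one entry in the
    # kernel's per-process bookkeeping (the "process table" of textbooks).
    import os

    def list_processes():
        for entry in os.listdir("/proc"):
            if not entry.isdigit():
                continue  # skip non-process entries such as /proc/meminfo
            try:
                with open(f"/proc/{entry}/status") as f:
                    fields = dict(line.split(":", 1) for line in f if ":" in line)
                print(entry, fields["Name"].strip(), fields["State"].strip())
            except (FileNotFoundError, PermissionError):
                pass  # the process may have exited, or access may be denied

    if __name__ == "__main__":
        list_processes()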

Related

How does Smalltalk image handle IO?

I just started to learn Smalltalk, went through its syntax, but haven't done any real coding with it. While reading some introductory articles and some SO questions like:
What gives Smalltalk the ability to do image persistence, and why can't languages like Ruby/Python serialize themselves?
What is a Smalltalk “image”?
One question always comes into my mind: How does Smalltalk image handle IO?
A Smalltalk program can resume from where it exited, using information stored in the image. Say I have some opened TCP connections (not to mention all sorts of buffers), how do they get recovered? There seems to be no way other than reopening them (confirmed by this answer). And if Smalltalk does reopen those connections, isn't it going against the idea of "resume execution of the program at a later time exactly from where you left off"? Or is there some magic behind it?
I don't mind if the answer is specific to certain dialects, say Pharo.
I'd also be interested to know some resources to learn more about this topic.
As you have noted, some resources are not part of the memory heap and therefore will not be recovered just by loading the image back into memory. In particular this applies to all kinds of resources managed by the operating system, and cross-platform Smalltalks (where you can copy the image from one OS to another and restart it) even have to restore such resources differently than they were before.
The trick in the Smalltalks I have seen is that all classes receive a message immediately after the image has resumed. By implementing a method for that message, they can restore any transient resources (sockets, connections, foreign handles, ...) that their instances might need. To find all instances, some Smalltalks provide messages such as allInstances, or you must maintain a registry of the relevant objects yourself.
And if Smalltalk does reopen those connections, isn't it going against the idea of "resume execution of the program at a later time exactly from where you left off"?
From a user perspective, after that reinitialization and reallocation of resources, everything still looks like "exactly where you left off", even though some technical details have changed under the hood. Of course this won't be the case if it is impossible to restore the resources (no network, for example). Some limits cannot be overcome by Smalltalk magic.
How does the Smalltalk image handle IO?
To make that resumption described above possible, all external resources are wrapped and represented as some kind of Smalltalk object. The wrapper objects will be persisted in the image, although they will be disconnected from the outside world when Smalltalk is shut down. The wrappers can then restore the external resources after the image has been started up again.
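For readers more familiar with other languages, here is a rough analogy in Python (a sketch only, not how any particular Smalltalk actually implements it; the class name and address are made up): the wrapper holds the re-creatable description of the resource, drops the live OS handle when serialized, and re-acquires it when deserialized.

    # Sketch: a wrapper object that survives serialization by dropping its live
    # socket on save and re-establishing it on load. Analogy only; the class
    # name and address are hypothetical.
    import pickle
    import socket

    class ConnectionWrapper:
        def __init__(self, host, port):
            self.host, self.port = host, port
            self._connect()

        def _connect(self):
            self.sock = socket.create_connection((self.host, self.port))

        def __getstate__(self):
            state = self.__dict__.copy()
            del state["sock"]      # the OS-level handle cannot be persisted
            return state           # keep only host/port (the wrapper's data)

        def __setstate__(self, state):
            self.__dict__.update(state)
            self._connect()        # re-acquire the external resource on "resume"

    # conn = ConnectionWrapper("example.com", 80)
    # blob = pickle.dumps(conn)   # "shut down": only host/port are stored
    # conn2 = pickle.loads(blob)  # "start up": the socket is reopened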
It might be useful to add a small history lesson: Smalltalk was created in the 1970s at Xerox's Palo Alto Research Center (PARC). At the same time, in the same place, Personal Computing was invented. Also at the same time, in the same place, the Ethernet was invented.
Smalltalk was a single integrated system: it was at the same time the IDE, the GUI, the shell, the kernel, the OS; even the microcode for the CPU was part of the Smalltalk system. Smalltalk didn't have to deal with non-Smalltalk resources from outside the image, because for all intents and purposes, there was no "outside". It was possible to re-create the exact machine state, since there wasn't really any boundary between the virtual machine and the machine. (Almost all of the system was implemented in Smalltalk. There were only a couple of tiny bits of microcode, assembly, and Mesa. Even what we would consider device drivers nowadays were written in Smalltalk.)
There was no need to persist network connections to other computers, because nobody outside of a few labs had networks. Heck, almost no organization even had more than one computer. There was no need to interact with the host OS because Smalltalk machines didn't have an OS; Smalltalk was the OS. (You probably know the famous quote from Dan Ingalls' Design Principles Behind Smalltalk: "An operating system is a collection of things that don't fit into a language. There shouldn't be one.") Because Smalltalk was the OS, there was no need for a filesystem, all data was simply objects.
Smalltalk cannot control what is outside of Smalltalk. This is a general property that is not unique to Smalltalk. You can break encapsulation in Java by editing the compiled bytecode. You can break type-safety in Haskell by editing the compiled machine code. You can create a memory leak in Rust by editing the compiled machine code.
So, all the guarantees, features, and properties of Smalltalk are only available as long as you don't leave Smalltalk.
Here's an even simpler example that does not involve networking or moving the image to a different machine: open a file in the host filesystem. Suspend the image. Delete the file. Resume the image. There is no possible way to resume the image in the same state.
All Smalltalk can do is approximate the state of external resources as well as it possibly can. It can attempt to re-open the file. If the file is gone, it can maybe attempt to create one with the same name. It can try to resume a network connection. If that fails, it can try to re-establish the connection by creating a new connection to the same address.
But ultimately, everything outside the image is outside of the control of Smalltalk, and there is nothing Smalltalk can do about it.
Note that this impedance mismatch between the inside of the image and the "outside world" is one of the major criticisms that is typically leveled at Smalltalk. And if you look at Smalltalk systems that try to integrate deeply with the outside world, they often have to compromise. E.g. GNU Smalltalk, which is specifically designed to integrate deeply into a Unix system, actually gives up on the image and persistence.
I'll add one more angle to the nice answers of Joerg and JayK.
What is important to understand is the context of the time and age in which Smalltalk was created. (Joerg already pointed out the important aspect of everything being Smalltalk.) We are talking about the time right after ARPANET.
I think they were not expecting the collaboration and interconnection we have nowadays. The image was meant as a record of a session of a single programmer without any external communication. Times changed, and now you naturally ask the IO question. As JayK noted, you can re-init the image, so you will get an image similar to the point where you ended your session.
The real issue, the reason I've decided to add my 2c, is collaboration among multiple developers. This is where the image, the original idea, has, in my opinion, outlived its usefulness. There is no way to share an image among multiple developers so they could develop at the same time and share the code.
Imagine: wouldn't it be great if you could have one central image, and developers would have only diffs and open their environment where they ended, with everyone's new code incorporated? Sounds familiar? This is the kind of VCS we have, like Mercurial, Git, etc., only without the image, just code. Sadly, such image re-construction does not exist.
Smalltalk is trying to catch up with the standard versioning tooling we use nowadays.
(Side note: Smalltalks had their own "versioning" systems, but they rather lack in many ways compared to current VCSs. The ones used are Monticello (Pharo), ENVY (VA Smalltalk), and Store (VisualWorks).)
Pharo is now trying to catch the train and implement git functionality via Iceberg. The Smalltalk/X-jv branch has integrated decent Mercurial support. Squeak has Squot for git (for now) [thank you @JayK].
Now "only" (that is a BIG only) remains to add support for central/diff images :).
For some practical code dealing with image startUp and shutDown, take a look at the class side of ZnServer. SessionManager is the class providing all the functionality you need to deal with giving up system resources on shutdown and re-acquiring them again on startup.
Need to chime in on the source control discussion a bit.
The solution I have seen with VS(E)/VA is that you work with Envy/PVCS and share this repository with the developers.
Every developer has his/her own image with all the pros and cons of images.
One company I was working for discussed whether it wouldn't make sense to build up the development image again from scratch every couple of weeks, in order to get rid of everything that might dilute the code quality (open file handles, global variables, pool dictionary entries, you name it, you will get it and it will crash your code at run-time).
When it comes to building a "run-time", you take the plain tiny standard image, and all your code comes from bind files and SLLs.

IBM S/390 mainframe COBOL source code

We have an S/390 mainframe at my new job that’s been running COBOL applications since the late 90’s. The mainframe is getting old enough that we need to migrate to a newer system. We’re a small enough business that we can’t warrant spending the money to upgrade to new mainframe hardware, and the program logic has been a constant work in progress for 30+ years, so it has a lot of functional value.
I’ve been considering moving the functionality to a Linux machine and using something like OpenCOBOL to recompile as an executable binary instead of trying to rewrite it in a newer language. I haven’t messed with a mainframe enough to have any clue how or where to access this information, and the gentleman that wrote all of the programs is unfortunately no longer with us.
I’ve read that SSH is an option, but I’m not even sure how to get the ball rolling on that with a mainframe. I use Linux on a fairly regular basis, so I’m familiar with SSH, but from my understanding those mainframes aren’t a simple OS that you can merely connect to and navigate the file system to retrieve data like we can in modern operating systems.
Can anyone give me some pointers to get a sense of direction for accessing the source code for the COBOL programs? Are there default locations that they are stored, etc.? They’re somewhat simple programs that don’t use any DB2 functionality and will hopefully compile on a different system with relatively minimal debugging and fixes.
I’m certain that I’ve left out necessary information that would help getting an answer to this question, and I can provide any additional information that is needed to help you all help me. I suspect that SSH isn’t enabled by default, but maybe I’m wrong there too. Any assistance is greatly appreciated. Thanks everyone!
Although this is not a programming question, I'll provide some guidance that I think might help you.
First, this is a business decision about where to invest.
Do we upgrade the system to a newer model, upgrade some software, and acquire the skills to keep the system running? (System programming, OS upgrade and cost of migration, a newer platform (a used z13 could be an economical option), and storage systems to support the mainframe.)
Migration of existing workloads to other platforms. (Cost to migrate code, sizing of performance needs, new technologies to replace existing access methods like VSAM or dare I say ISAM if the applications are old enough)
Status Quo ... leave things where they are and keep the lights on
In evaluating any option you have to assess the risk to the business and what a disruption would cost. IMHO, it's less about a technology like SSH or COBOL on Linux; it requires a serious assessment of the current state, the acceptable to-be scenarios, and the cost of pursuing one of those options.
My comments are not intended to instill fear but to provide a framework for how I approach analyzing a challenge of this magnitude.
There is no default location where source code is stored on z/OS (it is z/OS you're talking about, right?). Source code is usually stored in PDS data sets. The naming of those depends on the installation, i.e. the company, and whether or not any software like Endevor, ChangeMan, etc. is being used to maintain the sources.
Since this is old z/OS (OS/390) COBOL code, chances are the code is making use of OS specifics such as record level I/O, VSAM data sets, etc. These are the parts that will not work on a non-z/OS platform without major rewrite. So, you will need to look into the sources.
SSH is available on z/OS, but it needs to be configured and enabled. You need to check with your z/OS sysprog. FTP, and NFS are other options, but again, they need to be configured and enabled.
Transferring the sources is the least of your problems, I'd say.
I have to agree with the prior two answers, but have some additional suggestions. This is a business decision what to do on the system.
Finding the program to understand what it does is the first requirement. Since you know what program is running, that may be the name of the source file, which you will need to find. The source file will probably be in some library manager; the first place to look is the ISPF menu system. There will be an option for the library manager you are using, if you are using one. Based on your description you may be using something called SCLM, which would show up there, or you might see Librarian or Panvalet. You will need to get into ISPF by connecting with a 3270 terminal emulator. Once you find the file, FTP or SFTP may be the best way to transfer it, or your emulator may provide a transfer mechanism. You will need to find the related files as well, which should also be defined in the library manager.
Once you have the file, you will need to figure out what it uses. As mentioned above, it will be working with some kind of data file, and that will be the biggest part to deal with.
If it is a batch program it is probably part of a schedule, and there are other programs also running that you will need to find and figure out how they fit together.
Once you have an understanding of all the parts, you can work out the right business decision as to how this should be run. You may want to upgrade, or you may want to look at getting z/OS as a cloud service if you don't want to upgrade but you want the function. Or it may be a simple program you could move. That will be much easier to figure out once you have the details.
You say the program logic has been changing for 30+ years. Was it only one person making all the changes? Would anyone on the team have some idea about the PDSs that the user had access to? That might be one of the places to look. As the previous answers suggested, most shops would store the source code in some kind of config management tool like SCLM or Panvalet. If you have access to the load code, there are utilities that can be used to inspect the load member and get a CSECT listing, which would have the names of the object members that make up that load. You can check with your mainframe admins. That can get you the source code file names. We use SSH from USS in our shop to move code from an HFS folder to GitLab. I have also used plain FTP to just transfer source code files to my workstation. But yes, first you have to find where it is stored.
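If FTP does turn out to be enabled on the host, a workstation-side sketch using Python's ftplib can pull a member out of a source PDS. This is only an illustration: the host, credentials, and data set/member names below are placeholders, and the z/OS FTP server must be configured and reachable for any of it to work.

    # Sketch: retrieve one COBOL source member from a z/OS PDS via FTP.
    # Host, credentials, and data set names are hypothetical placeholders.
    from ftplib import FTP

    HOST = "mainframe.example.com"
    USER = "MYUSER"
    PASSWD = "MYPASS"
    MEMBER = "'PROD.COBOL.SOURCE(PAYROLL)'"   # fully qualified PDS(member)

    ftp = FTP(HOST)
    ftp.login(USER, PASSWD)
    with open("PAYROLL.cbl", "w", encoding="utf-8") as out:
        # In ASCII transfer mode the z/OS FTP server translates EBCDIC records
        # to text lines, which retrlines hands to the callback one at a time.
        ftp.retrlines("RETR " + MEMBER, lambda line: out.write(line + "\n"))
    ftp.quit()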

Why use AMQP/ZeroMQ/RabbitMQ

as opposed to writing your own library.
We're working on a project here that will be a self-dividing server pool: if one section grows too heavy, the manager would divide it and put it on another machine as a separate process. It would also alert all connected clients that this affects, telling them to connect to the new server.
I am curious about using ZeroMQ for inter-server and inter-process communication. My partner would prefer to roll his own. I'm looking to the community to answer this question.
I'm a fairly novice programmer myself and just learned about message queues. As I've googled and read, it seems everyone is using message queues for all sorts of things, but why? What makes them better than writing your own library? Why are they so common and why are there so many?
what makes them better than writing your own library?
When rolling out the first version of your app, probably nothing: your needs are well defined and you will develop a messaging system that will fit your needs: small feature list, small source code etc.
Those tools are very useful after the first release, when you actually have to extend your application and add more features to it.
Let me give you a few use cases:
your app will have to talk to a big-endian machine (SPARC/PowerPC) from a little-endian machine (x86, Intel/AMD). Your messaging system made some endian-ordering assumptions: go and fix it
you designed your app without a binary protocol/messaging system and now it is very slow because you spend most of your time parsing messages (the number of messages increased and parsing became a bottleneck): adapt it so it can transport binary/fixed encoding
at the beginning you had 3 machines inside a LAN, no noticeable delays, and everything got to every machine. Your client/boss/pointy-haired-devil-boss shows up and tells you that you will install the app on a WAN you do not manage - and then you start having connection failures, bad latency, etc. You need to store messages and retry sending them later on: go back to the code and plug this stuff in (and enjoy)
messages sent need to have replies, but not all of them: you send some parameters in and expect a spreadsheet as a result, instead of just sends and acknowledges: go back to the code and plug this stuff in (and enjoy)
some messages are critical and their reception/sending needs proper backup/persistence. Why, you ask? Auditing purposes
And many other use cases that I forgot ...
You can implement it yourself, but do not spend much time doing so: you will probably replace it later on anyway.
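To make the trade-off concrete, here is roughly what a request/reply exchange looks like with ZeroMQ's Python binding (pyzmq); this is a sketch assuming pyzmq is installed, with the port number chosen arbitrarily. Framing, reconnection, and queuing are handled by the library rather than by your own code.

    # Sketch: minimal ZeroMQ request/reply pair using pyzmq (port is arbitrary).
    import zmq

    def server():
        ctx = zmq.Context()
        sock = ctx.socket(zmq.REP)
        sock.bind("tcp://*:5555")
        while True:
            msg = sock.recv_string()            # blocking receive of one request
            sock.send_string("ack: " + msg)     # reply to the requester

    def client():
        ctx = zmq.Context()
        sock = ctx.socket(zmq.REQ)
        sock.connect("tcp://localhost:5555")
        sock.send_string("hello")
        print(sock.recv_string())

    # Run server() in one process and client() in another.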
That's very much like asking: why use a database when you can write your own?
The answer is that using a tool that has been around for a while and is well understood in lots of different use cases, pays off more and more over time and as your requirements evolve. This is especially true if more than one developer is involved in a project. Do you want to become support staff for a queueing system if you change to a new project? Using a tool prevents that from happening. It becomes someone else's problem.
Case in point: persistence. Writing a tool to store one message on disk is easy. Writing a persistor that scales and performs well and stably, in many different use cases, and is manageable, and cheap to support, is hard. If you want to see someone complaining about how hard it is then look at this: http://www.lshift.net/blog/2009/12/07/rabbitmq-at-the-skills-matter-functional-programming-exchange
Anyway, I hope this helps. By all means write your own tool. Many many people have done so. Whatever solves your problem, is good.
I'm considering using ZeroMQ myself - hence I stumbled across this question.
Let's assume for the moment that you have the ability to implement a message queuing system that meets all of your requirements. Why would you adopt ZeroMQ (or other third party library) over the roll-your-own approach? Simple - cost.
Let's assume for a moment that ZeroMQ already meets all of your requirements. All that needs to be done is integrating it into your build, read some doco and then start using it. That's got to be far less effort than rolling your own. Plus, the maintenance burden has been shifted to another company. Since ZeroMQ is free, it's like you've just grown your development team to include (part of) the ZeroMQ team.
If you ran a Software Development business, then I think that you would balance the cost/risk of using third party libraries against rolling your own, and in this case, using ZeroMQ would win hands down.
Perhaps you (or rather, your partner) suffer, as so many developers do, from the "Not Invented Here" syndrome? If so, adjust your attitude and reassess the use of ZeroMQ. Personally, I much prefer the benefits of a Proudly Found Elsewhere attitude. I'm hoping I can be proud of finding ZeroMQ... time will tell.
EDIT: I came across this video from the ZeroMQ developers that talks about why you should use ZeroMQ.
what makes them better than writing your own library?
Message queuing systems are transactional, which is conceptually easy to use as a client, but hard to get right as an implementor, especially considering persistent queues. You might think you can get away with writing a quick messaging library, but without transactions and persistence, you'd not have the full benefits of a messaging system.
Persistence in this context means that the messaging middleware keeps unhandled messages in permanent storage (on disk) in case the server goes down; after a restart, the messages can be handled and no retransmit is necessary (the sender does not even know there was a problem). Transactional means that you can read messages from different queues and write messages to different queues in a transactional manner, meaning that either all reads and writes succeed or (if one or more fail) none succeeds. This is not really much different from the transactionality known from interfacing with databases and has the same benefits (it simplifies error handling; without transactions, you would have to assure that each individual read/write succeeds, and if one or more fail, you have to roll back those changes that did succeed).
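As an illustration of what the persistence part looks like from the client side, here is a hedged sketch using RabbitMQ's Python client, pika: the queue is declared durable and the message is marked persistent, so an unhandled message survives a broker restart. The host and queue name are placeholders, and transactional publishing/consuming would need additional confirm/tx calls not shown here.

    # Sketch: publish a persistent message to a durable RabbitMQ queue with pika.
    # Host and queue name are placeholders; error handling is omitted.
    import pika

    conn = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
    channel = conn.channel()
    channel.queue_declare(queue="task_queue", durable=True)       # survives restarts
    channel.basic_publish(
        exchange="",
        routing_key="task_queue",
        body=b"do the work",
        properties=pika.BasicProperties(delivery_mode=2),         # 2 = persistent
    )
    conn.close()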
Before writing your own library, read the 0MQ Guide here: http://zguide.zeromq.org/page:all
Chances are that you will either decide to install RabbitMQ, or else you will make your library on top of ZeroMQ since they have already done all the hard parts.
If you have a little time, give it a try and roll out your own implementation! The lessons of this exercise will convince you of the wisdom of using an already tested library.

How to save a program's progress, and resume later?

You may know a lot of programs, e.g. some password-cracking programs, that we can stop while they're running, and when we run the program again (with or without entering the same input), they are able to continue from where they left off. I wonder what kind of technique those programs are using?
[Edit] I am writing a program mainly based on recursive functions. To my knowledge, it is incredibly difficult to save such state in my program. Is there any technique that somehow saves the stack contents, function calls, and data involved in my program, so that when it is restarted it can run as if it hadn't been stopped? These are just some concepts I have in mind, so please forgive me if they don't make sense...
It's going to be different for every program. For something as simple as, say, a brute-force password cracker, all that would really need to be saved is the last password tried. For other apps you may need to store several data points, but that's really all there is to it: saving and loading the minimum amount of information needed to reconstruct where you were.
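As a sketch of that minimal approach (the checkpoint file name, the search space, and the try_password check are all invented for illustration), a brute-force loop only needs to record how far it got:

    # Sketch: resume a brute-force search by recording the index of the last
    # candidate tried. File name and candidate space are illustrative only.
    import itertools
    import os
    import string

    CHECKPOINT = "progress.txt"

    def candidates():
        # all 4-character lowercase strings, purely as an example search space
        for tup in itertools.product(string.ascii_lowercase, repeat=4):
            yield "".join(tup)

    def try_password(cand):
        return cand == "zzzz"          # stand-in for the real check

    def run():
        start = 0
        if os.path.exists(CHECKPOINT):
            with open(CHECKPOINT) as f:
                start = int(f.read().strip())      # resume point from last run
        for i, cand in enumerate(candidates()):
            if i < start:
                continue                           # skip work already done
            if try_password(cand):
                print("found:", cand)
                break
            if i % 10000 == 0:
                with open(CHECKPOINT, "w") as f:
                    f.write(str(i))                # persist progress periodically

    if __name__ == "__main__":
        run()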
Another common technique is to save an image of the entire program state. If you've ever played with a game console emulator with the ability to save state, this is how they do it. A similar technique exists in Python with pickling. If the environment is stable enough (i.e. no varying pointers), you simply copy the entire app's memory state into a binary file. When you want to resume, you copy it back into memory and begin running again. This gives you near-perfect state recovery, but whether or not it's at all possible is highly environment/language dependent. (For example: most C++ apps couldn't do this without help from the OS, or unless they were built VERY carefully with this in mind.)
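In Python, the pickling approach mentioned above can look roughly like this (a sketch; the shape of the state dictionary and the file name are invented for illustration):

    # Sketch: snapshot and restore a program's working state with pickle.
    # The shape of `state` and the file name are illustrative only.
    import os
    import pickle

    STATE_FILE = "state.pkl"

    def load_state():
        if os.path.exists(STATE_FILE):
            with open(STATE_FILE, "rb") as f:
                return pickle.load(f)             # resume from the saved snapshot
        return {"position": 0, "total": 0}        # fresh start

    def save_state(state):
        with open(STATE_FILE, "wb") as f:
            pickle.dump(state, f)                 # everything reachable is serialized

    state = load_state()
    for i in range(state["position"], 1_000_000):
        state["total"] += i                       # stand-in for real work
        state["position"] = i + 1
        if i % 100_000 == 0:
            save_state(state)
    save_state(state)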
Use Persistence.
Persistence is a mechanism through which the life of an object extends beyond the program's execution lifetime.
Store the state of the objects involved in the process on the local hard drive using serialization.
Implement Persistent Objects with Java Serialization
To achieve this, you need to continually save state (i.e. where you are in your calculation). This way, if you interrupt the program, when it restarts it will know it is in the middle of a calculation, and where it was in that calculation.
You also probably want to have your main calculation in a separate thread from your user interface - this way you can respond to "close / interrupt" requests from your user interface and handle them appropriately by stopping / pausing the thread.
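A hedged sketch of that separation (all names invented): the calculation runs in a worker thread, checkpoints its progress periodically, and watches a stop event so the main/UI thread can interrupt it cleanly and let it resume later.

    # Sketch: a long-running calculation in a worker thread that checkpoints its
    # progress and honors a stop request from the UI/main thread. Names invented.
    import json
    import threading

    stop_requested = threading.Event()

    def calculate(checkpoint_path="calc_checkpoint.json"):
        try:
            with open(checkpoint_path) as f:
                i = json.load(f)["next"]           # resume where we left off
        except FileNotFoundError:
            i = 0
        while i < 10_000_000 and not stop_requested.is_set():
            i += 1                                 # stand-in for one unit of work
            if i % 100_000 == 0:
                with open(checkpoint_path, "w") as f:
                    json.dump({"next": i}, f)      # continually save state
        with open(checkpoint_path, "w") as f:
            json.dump({"next": i}, f)              # final save on stop or completion

    worker = threading.Thread(target=calculate)
    worker.start()
    # ... a UI thread would call stop_requested.set() on "close / interrupt" ...
    worker.join()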
For linux, there is a project named CRIU, which supports process-level save and resume. It is quite like hibernation and resuming of the OS, but the granularity is broken down to processes. It also supports container technologies, specifically Docker. Refer to http://criu.org/ for more information.

Accurev SCM [closed]

Does anyone use Accurev for Source Control Management? We are switching (eventually) from StarTeam to Accurev.
My initial impression is that the GUI tool is severely lacking, however the underlying engine, and the branches as streams concept is incredible.
The biggest difficulty we are facing is assessing our own DIY tools that interfaced with StarTeam, and either replacing them with new DIY tools or finding and purchasing appropriate replacements.
Additionally, is anyone using the AccuWork component for issue management? StarTeam had a very nice change request system, and AccuWork does not come close to matching it. We are evaluating either using AccuWork or buying a 3rd-party package such as JIRA.
Opinions?
Sweet mother of god is Accurev awful. 300k lines of code? Try it with millions, with hundreds of developers working on scores of projects.
Continuous integration? Sure, that's something that developers can approximate by doing regular merges in, say, perforce, git, mercurial, or any of the countless other tools that actually gets the work done, but it becomes the choice of the developer as to how to proceed. For architects, leads, build engineers, or anyone who actually uses source control to slice and dice, Accurev is horrific.
I went to an "Advanced Accurev topics" talk, and the first tidbit was a large shell command for clearing out Accurev's client-side caching/sync mechanism to correct for when Accurev updates silently fail to pull down files that should be updated.
A Timestamp Optimization checkbox? Deep overlaps? Modal dialogs with only one background process? (That would be okay if those processes were anything other than glacial.) Cascading graphs of selectively configured streams put in place just to be able to pull off components and cross-merging? Updates aren't actually atomic without time-locking?! (honest answer: update again)
Every time I try and do anything serious in Accurev, I feel like I'm playing Russian Roulette at a table with HAL9000, Skynet, and a Speak & Spell. On the line? Four more hours of my life.
Why am I here, griping about Accurev? Because my other machine has taken a full four hours to try to update 10MB of files over VPN. Why? Because some other change has come up stranded and requires some sort of catastrophic resync scan for elements. The worst part? All of these files were on a workspace on the same computer. We're talking about several hours just to get a recently updated workspace to the point where I can put the right notches in the stream history.
One word Accurev review: Avoid
Honestly, I feel like I need to double-check whether I'm using the same tool as these folks that seem to like Accurev. I used Subversion in my previous job and liked it a lot. We never had any issues with it to speak of, and of course the price is right. My biggest problem with Accurev is that it seems they felt the need to be different for difference's sake. It uses a completely different vocabulary to express versioning concepts that, even after using it for almost 6 months, feels very foreign to me. It has no fewer than 8 or 9 states any given file can be in, compared to about half as many for Subversion. The GUI is crappy and slow, and the IDE integration plugins are sub-par. I had assumed that at some point I would "get" Accurev and see why it's so much better, but that has yet to happen. My advice is to stay away.
I have used AccuRev for nine months and I anxiously await the day I use it no more. My one-line review is:
It's like source control written by developers who have read about it in a book, but have never actually used it before.
Basic concepts are missing or extremely complicated. For example, I've just lost 8 hours' work because there's no good way to "revert" a change once it's in a stream. You can "purge" that transaction - but that's it, it's gone; you can't then cherry-pick the changes you really wanted.
The GUI is slow, bloated and inconsistent. Warnings are cryptic, e.g. "error merging element id 1234556". Every single dialog box is modal. As one poster said, there are 9 states a file can be in - but what's more, you must manually click through a list box of 9 options to see the setting for each file.
The streams model sounds like a good idea, but the default behavior of "inheriting" changes from a parent stream is actually incredibly bad in practice. Just say the word "Deep Overlap" to anyone who has really used AccuRev and watch them shudder, turn pale, and/or faint. Making the streams is very easy, but actually merging them with any meaningful differences is arcane and non-deterministic.
No one has mentioned this, but the whole system of "include/exclude" rules to manage file and directory filters is completely broken. This system lives outside the transaction system, so there's no way to revert, track history, or reproduce changes to a live source stream - for example when Johnny Intern decides the "core" library isn't useful to the entire development team.
The only reason I can account for Accurev's popularity is that it is optimized for the "Demo to Management" case. We're using AccuRev for serious software development - dozens of projects and many more developers. The streams and the GUI look great, but after a few weeks' use the varnish comes off, revealing an old, busted, mechanical-turk-like system.
Stay far away from Accurev - use Git or Mercurial if you want something modern and free, or Perforce if you want something rock solid, well-supported but expensive.
EDIT:
As a postscript here's one of the many examples of the lack of care and general shoddiness in the UI:
The default difference viewer has its numbering "off by one" - for example, if you have 2 diffs in a file, the viewer shows diff "0 of 1" and diff "1 of 1". I mean, really, would you feel comfortable trusting your code to a system that exhibits such a stupid and easily fixable bug?
I worked with (and administered) AccuRev for more than a year recently, and most of my impressions are very good.
We evaluated it alongside "Plastic SCM", SVN and "ClearCase-UCM" (which we already owned and used), and decided to dump ClearCase and SVN (both were used in two different groups) and to purchase AccuRev.
First, the stream architecture is a much more solid, easy and safe SCM method than the old branching architecture that all the other tools are tied to (yes, even ClearCase's "streams" are eventually a wrapper around branches). There are a lot of articles about the differences on their site; you can search and read about it to make it understandable. (Try this link, and this one too)
The timesafe architecture - you can't delete anything from the depots (= repositories) database. I have seen tools where this operation is possible with the proper admin permissions. In AccuRev you use internal commands in order to change or fix a mistake you made, which in turn is recorded as a new transaction as well. Very smart, very safe.
Integrations! AccuRev integrates with so many tools (to give you an ALM bundle) - bug tracking tools (like JIRA, ClearQuest), IDEs, testing tools (Quality Center), and if you can't find one you can write your own (they provide Java/Perl/XML/CLI SDKs).
Change packages. I don't know about you, but I can't stand SCM tools that don't supply change management (did anyone say SVN?), like ClearCase "activities" and AccuRev's "issues". It's a must in my opinion, and one of my CM "best practices". And they can be integrated with your bug tracking tool too, so your users can work on real tasks like features and defects.
The support is just amazing. As a former client of IBM (because Rational ClearCase is now part of IBM), the shift to AccuRev was simply awesome. During the evaluation they gave us numerous on-line support sessions to understand how we wanted the tool to behave for us, so we tweaked it together even before we paid a cent. And they kept that degree of responsiveness after that evaluation period too; we had a problem during an upgrade from 4.5.4 to 4.6, and in just a couple of hours (while the upgrade was still in progress) the support guy contacted me back, suggested a few tips, connected to my desktop and finally fixed the problem before any other company's support would have even started to figure out who you are. Of course, if you choose an open-source tool then you're on your own!
Also the tool comes with help system, which can be even too verbose sometimes. And don't forget the forums (especially on cmcrossroads) that are very good in supplying rapid answers too.
And there's so many more....
Of course there are also drawbacks (which software is perfect?) - I would like, for example, to see the file(s) <-> issue(s) association during check-in too, like in ClearCase, not just in "promotion" like it is today - but IMHO they are really minor.
So, as you can understand, if you read it all, I'm a big fan of AccuRev, and I'm highly recommending it. IMHO, it is today one of the best SCM tools you got the chance to work on; modern, wise, easy and strong.
My current client uses Accurev for SCM and after a few projects using a DVCS like Git or Mercurial, I can honestly say that using Accurev is about as enjoyable as closing your face in a car door.
The GUI for Mac and Linux is god awful slow. You can forget using the refactoring support in your IntelliJ or NetBeans IDE, if you use Accurev...that is unless you are going to write your own plugin.
Oh yea...let's not forget about this little chestnut ==> evil twin.
On a positive note, it could be worse...it could be Clearcase.
Accurev sucks! It's overcomplicated, at the price of the team's productivity.
I've worked with several SCMs, and the idea of Accurev is great but not practical. It's merge hell, with a hierarchy that looks good in the UI but is a pain to deal with when it comes to real life.
Especially when you refactor your code (something that some people actually do every once in a while), you get into a mess when a defunct file is not promoted all the way up. Or even worse, somebody else overrides a defunct file and creates a new file with the same name, etc.
The UI is incredibly terrible. And honestly, it doesn't matter how good you think the backend is; you will still use the UI (I use the VS plugin, which is half decent, except it freezes the IDE sometimes, nice huh!).
If you live in the 80's and are planning to use the command line for your day-to-day use, then I guess you can avoid the UI. If you have an integration build server then of course you have no choice but to use the command line (no native tasks for MSBuild/Ant/NAnt that I know of). I just heard that they are doing some work with http://www.electric-cloud.com/. I don't know anything about it yet.
Accurev is new, therefore there are few resources available online, as opposed to SVN, for which you will find tons of integration work done by hundreds (with JIRA for example).
If you are a manager, Accurev will make you feel good looking at the streams, because it does look pretty, as long as you don't have to deal with it.
If you are a developer... (a junior developer will not care much; he/she will do whatever you ask them to do)
If you are an architect who refactors a lot, re-addresses architectural decisions, etc., you will find Accurev to be your worst enemy; moving stuff around is a pain. Very anti-agile if you ask me. It's not fluid.
If you are a build engineer, you will find it a PAIN to get all the developers to follow a procedure, which you will have to do if you use Accurev (e.g. promote their code to the agreed-upon stream in preparation for a release).
SCM is supposed to make things easier... I don't see Accurev doing that at this point. It's still not mature enough. If you want to be a pioneer and struggle in the hope that things will get better, go for it.
Otherwise, don't re-invent the wheel and go with something more established, with many more case studies and applications. Because to be practical, what Accurev claims to offer that is different is not worth it when you deal with its pains on a daily basis...
We've been using AccuRev for a few years now. It's a serious improvement over our last tool (Razor) and while I'd recommend it for others- it does have a few drawbacks.
Benefits:
The stream based interface is quite intuitive. I make snapshots every second week and have a number of ongoing development streams branching off the snapshot.
Moving changes between streams is really easy: just select the change, send it to the "change palette" and select the destination stream. It guides you through all the files that need to be merged.
The command-line utilities are great. We've managed to script most of our release generation around it.
Integrations for Visual Studio, Bugzilla, etc...
Drawbacks:
As monjardin pointed out, the client GUI can be slow. I use the windows version for all my history/stream searching since it's much faster than the X11 one. Of course, the GUI's written in Java so performance obviously wasn't their first concern.
It's starting to get slow for really large databases (I'm talking over 300,000 LOC), although they've apparently addressed it in today's release of 4.7.
We opted to go with the cheaper license and not get the change packages feature (I can't see them working that well anyways, as the entire idea of promoting individual changes flies in the face of continuous integration). So far it hasn't hurt us.
Overall, for the price you pay it's a nice tool. We evaluated ClearCase, MKS, Spectrum and Subversion during our trial period. Subversion may have been a good choice, but it was still pretty green when we were evaluating. I've never heard of Plastic before, but I regret not evaluating Perforce.
Also, I understand that the engineers over at Trolltech (makers of Qt) have recently switched to git. I'd be interested in checking that out as well.
We have been using AccuRev for 4 years already. I hate it very much, mostly because of its horrible GUI. Several years ago AccuRev sent out a survey for their clients to fill in, and at the end of the survey there was a field for suggestions. I started collecting the things which annoy me the most, and below you'll find what I have now. Unfortunately, it's full of AccuRev terminology, but I think you'll get the idea anyway.
Accurev GUI possible improvements
Working with history
When examining history, a developer most often wants to see the diff to the previous transaction/version. This should be as accessible as a double click. For example, double-clicking on a file in the transaction log could open a diff to the previous version, double-clicking on a file in the Default Group filter could open a diff to backed, and double-clicking on a file in the Modified search could open a diff to most recent. That would save tons of time.
Common experience is that developers rarely open files for editing from within AccuRev. Rather, they very often diff files, then revert or promote changes. So a double click should not open files for editing; it should diff them instead. This could be an option in preferences, so different people may decide whether they want a double click to diff files or open files.
It should be possible to select two transactions in a stream or workspace history and perform a file diff between them.
Overlaps merging
Merging overlaps in a stream requires performing a "Deep Overlap" search in a workspace, which takes much more time than searching for overlaps in a specific stream. Then you need to sort overlaps by overlap stream and merge only those from the specific stream. There should be a more convenient way to merge overlaps in a stream, for example the ability to limit a deep overlap search to a specific stream and not show overlaps in parent streams. Limiting a Deep Overlap search by a timelocked stream is not very useful if you are several streams under this timelocked stream or there is no timelock on the parents at all.
Now there is a simplified way that involves creating change palettes, but it is still not convenient. The Merge menu item should be available at stream level if there is a workspace under that stream that may be used for overlap merges.
Annotate tool
Annotate tool is very awkward:
using the slider at the top to browse different versions resets the position in the file, which is VERY annoying for large files;
you should be able to open history at a specific transaction directly from the annotate tool. Now the developer needs to remember the transaction # and search for it in the stream history (and also search for the stream where the transaction was made).
Stream Favorites
The context menu item "Add to stream filter" was removed when the new stream favorites were introduced. It should be possible to right-click on a stream and add it to one of the stream favorites (a 2nd-level context menu, or a dialog may pop up). Now it is very annoying to edit stream favorites, particularly when you need to have 2 similar sets of streams.
Stream browser
It should be easy to copy a stream name to the clipboard. Now you need to open the "Change stream" dialog for that. Ctrl+C in the stream browser could copy the name of the selected stream to the clipboard.
There is no way to copy the stream name from the stream view. A right click on the tab could copy the stream name to the clipboard, or show a context menu with a "copy stream name to clipboard" item in it.
Diff and merge tool
It shows only the first different character in a line, not the whole line difference, and does not highlight syntax. Luckily, the diff tool can easily be switched to an external tool, so this is minor.
Other suggestions
An option in preferences to enable the Multiple columns sort mode by default.
It would be nice to save not only the latest keep/promote log, but at least 5-10 older ones.
A file extension column in the stream or workspace view, with the ability to sort on it, would be great.
Reordering tabs would be nice.
Very small font in the keep/promote/lock message under Windows; it's unreadable. Increase the font size or allow the user to change it.
Implement a more convenient way to locally ignore files; the environment variable is not very useful (a user may want to ignore different sets of files in different streams/depots).
Over the last 3 years AccuRev has added 3 things from this list (I removed them as they are already implemented):
Hardcoded (can't be customized) keyboard shortcuts for most actions.
Made it possible to call diff for several files from a transaction at once (before that, one had to right-click on every file and call "Diff to previous version" from the context menu).
Added text search to the Annotate tool. But because of the position reset when you try to switch to a different version (see above), the Annotate tool is still unusable.
Besides the GUI, there are fundamental flaws in AccuRev as a whole:
Difficult to update backwards
You can't easily update backwards. There is the accurev update -t <transaction-number> command, but if you updated to transaction 100, you can't update to transaction 95 using accurev update -t 95. In order to do that you need to set up a time lock on your backing stream (which will introduce a transaction in AccuRev) and update your workspace.
Deep Overlaps
When you update, you may end up with an invalid state of the sources without any notice. This is because of the Overlaps feature. An overlap is basically a conflict (when a file is changed by both you and them). If you have an overlap in your workspace, you'll need to merge it before you are allowed to update. But if you have an overlap in the stream under which you have your workspace, you won't get any notice about that, yet the overlapped file won't be updated in your workspace. Consider the following stream structure:
[Depot Root] <- [Team stream] <- [Your stream] <- [Your workspace]
Let's say you changed foo.cpp and promoted it to [Your stream]. After that, someone from your team changed both foo.h and foo.cpp, let's say added a method to the class Foo, and promoted the files to [Team stream]. After you update your workspace, you'll get the new version of foo.h (because you didn't change it), but you won't get foo.cpp because it's overlapped in [Your stream]. So your update will go cleanly, but the linker will complain about an unresolved symbol Foo::NewMethod if you try to build after that.
I've been a long time Accurev user, and have recently moved to a job where I'm using Perforce. I gotta tell you, I wish I had Accurev back. I do agree - the UI is slow and has problems.
However there are some truly AWESOME visualization tools in there. I can't believe that anyone would look at the version history browser and not fall in love! The stream browser is a great simple tool to understand what is going on in your development organization.
Also, dirt simple to administer. Accurev is actually one of my favourite tools.
One of the best days I've ever had at my current job is the day we ditched Accurev and moved to Subversion. Accurev uses overly complicated concepts. Like one of the commenters above, after working with it for years I still didn't understand the different states that artifacts could be in. It seems that Accurev's greatest assets are its whitepapers and stream visualization, both of which are very appealing to management but do nothing for developers. I use Subversion, Mercurial and Git for various projects and would recommend these tools over any other.
Accurev is an anti-agile tool:
The main idea of Accurev is to use different streams for different teams, so changes made by team 1 won't affect team 2. Sounds good, but in the real world we all know that in the end we have to merge the code from both teams, and believe me, it's a nightmare in Accurev. The more changes both teams make in their streams, the more time everybody will spend on merging at the end. It's the same as if every team did their development in a separate branch using SVN and tried to merge everything after 1 month of development... Basically, Accurev creates a late-merge price, and you are going to pay this price forever if you choose Accurev for more than 1 team.
In order to fix the problem created by point 1, people decide to give up cross-functional teams in favor of functional ones. They even provide arguments to support this idea, like the "knowledge expertise" principle. In other words, when you don't have cross-functional teams (and Agile as well), it's easier to have experts for particular parts of the system, so they will perform code reviews better and act as "information/design/implementation experts". We all know that the information expert is an anti-pattern, not only in Agile, since it's better to spread expertise in order to avoid knowledge bottlenecks in development.
Put me in the anti-Accurev camp. We moved to it recently, and it's been horrible. We have a number of quite large projects, and Accurev seems to be almost unusable for the quantity of files we have. Over a VPN, forget it. It takes forever to update, the cross-stream management doesn't work in any intuitive way, the UI is complex and slow.
Additionally, support for it in a number of tools we use is either non-existent or poorly implemented.
Add the various bugs that keep popping up, and I'd say we wasted a great deal of money for something that is done much better by open-source software, such as Subversion. We still use CVS for some projects, and even it is so much better for normal operations and workflow that I'd pick it over Accurev.
Another big thumbs down for Accurev. Every simple operation seems to become so horribly complex - cryptic error messages send you scurrying to the manual, only to find theoretical explanations about concepts that shouldn't have existed in the first place.
UI is so slow and unresponsive it makes you want to gouge your eyes out.
Stay away.
Accurev has some great concepts; but suffers from:
1) many many inconsistencies in the command-line interface.
2) many bugs and nuisances in the application/interface. E.g. their time-safe property is not actually time-safe at all because of several bugs that affect snapshots and pass-through streams.
3) major bugs in critically important features... As above: time-safe bugs; bugs in merging by issues.
4) they are a year behind where they should be, because they wasted a whole year on trying to move their backend to a database - this will be version 5, which may never see the light of day.
5) the marketing is excellent, but the product does not live up to the marketing hype.
6) every release has had major critical bugs that have required them to release immediate hotfixes. This has been a major disruption for us. And these aren't minor bugs.
7) doesn't scale well... takes up a huge amount of disk space and gets slower over time
Having said all that; it's still a good product; but if I were to do it all again I'd consider git instead.
After 4 months, my very negative opinion hasn't changed at all. While Accurev has some very nice concepts, the slowness and complexity far outweigh the advantages, at least for us. Aside from the usual complaints about the GUI and the obscurity of a number of features, one of the absolutely most annoying faults is how many hoops you have to jump through just to update a workspace, made much worse by the inability to update only one directory (or directory tree).
A typical update consists of waiting a loooong time to be told you have overlaps. Of course, you aren't told what the overlaps are. So, you have to do an overlap search, wait another loooong time, resolve the overlaps, do another update, wait a looooong time, and hope it worked this time.
Some of our remote developers update as infrequently as possible because the update time over VPN is absurd. Granted, we have an enormous number of source files across a number of products, and if we reorganized everything we could probably improve performance.
However, we hired Accurev (at a significant cost) to come in and tell us how to set everything up. Still sucks. Aside from that, we really shouldn't have to reorganize the way we work with our sources to suit a source-code-control system. It's a tool, not a business model.
Lastly, we've been trying out an Accurev plugin for IntelliJ, written by Accurev. It works just as poorly as the rest, and, while Accurev has been very responsive about fixing the plugin, we aren't their QA group, nor did we sign up to be an alpha test site (yes, it's that buggy). We finally gave up and wrote our own plugin that actually works.
@Steveth
The Interface is lousy...However the streams model is very innovative.
Being able to create a stream for a new project off the trunk stream, and having 5 developers working on it, and not having any form of merge collisions when we merge that stream back into the main trunk is unheard of, yet it works well in Accurev.
At a previous employer we reviewed Accurev and Plastic SCM. At the end of the day, I was not impressed with Accurev's interface, or the so-called "streams". We went with Plastic, and nobody complained.
@Jonathan
The streams are interesting, but I don't see how any version control can magically avoid collisions when two people touch the same code in the same file. Accurev's model was intriguing, but at the end of the day, nice clean branching and merging with a drop-dead-easy interface made Plastic the choice for us. Plastic's timeline view (I forget the actual name), showing the branch/merge/check-in history, made it very simple to review the history of the project from a bird's eye view.
Accurev is simply the worst tool I have ever used.
Subversion is very good, esp if you are migrating from cvs.
My company has been using Accurev since early 2010, coming from StarTeam before that and CVS in the very distant past. I haven't used CVS (having been on a different team at the time) so I have no comparisons there, and I never bothered to learn StarTeam too intimately.
Since then I've also played with both the CLI and Tortoise versions of SVN, Git, and Mercurial (Hg) in my free time. I plan on giving Git a more thorough go at some point, but I found Hg to be much more intuitive and easy (at least under Windows). Anyway, like I said management saddled us with Accurev and after spending time to get fairly well acquainted with it (GUI and CLI both) as a developer... I absolutely hate it.
Someone earlier in the thread summed it up as software written by devs that had read about SCM in a book but never used it... I agree whole-heartedly but you also get the feeling that they had the same level of experience with GUIs, efficient processing, etc. (In fact, I see that Accurev has a new product called "Kando" based on Git...sounds like they've finally realized how bad their model is. But to quote a coworker "I wouldn't trust anything written by the same team at this point"... I have to wonder if it is a coincidence that there is a baby-wipe product named "Kandoo"...)
Ok, obviously I don't care for the product. If you've spent the time to read this thread, then obviously there are quite a few folks with similar views on it. But I wanted to share some of my own gripes that I've had with it over the last few years as well -- btw if it helps anyone, I think we were using v4.7 previously and have been on v5.3 (?) now for quite some time.
My biggest beef with Accurev is how horribly slow and inefficient it is. Notice I didn't use the word GUI -- I've tried both GUI and CLI-- the slow parts are on the server, so you're screwed either way. It seems like I see one of those damn modal dialog/status bars at every turn... I switch tabs -- bam!: processing, please wait. I reparent a stream -- oh wait just another minute. For "Updates" I expect it to be a little slow (although sometimes it gets annoying when it screams "Overlap" [aka a conflict] at me when I happen to have a file with IDENTICAL content to what it's pushing down). I change directories browsing to a path... processing, processing, "oh you want to go down one more sub-folder"... let me process that some more. You get the idea.
Is that my only beef? Hell no.
1) For the merge tool, I've had the "ignore white space" option checked for years, but I can only ever recall it working ONE time (for example, comparing 2 versions of a JSP where I converted spaces to tabs or trimmed some trailing white space or something). Why is this an issue? Because it becomes pure torture for every other developer that looks in the history and wants to see what REALLY changed. If they can't implement this correctly, don't put the F***ING option there. (Note: using WinMerge as an external compare tool, with appropriate settings, works fine)
2) I've had instances where checking a file into one stream and then needing to put an IDENTICAL copy of that same file into another stream (using the same issues #) causes it to throw a temper tantrum. If I use the wrong issue #, it goes in with no problem. This is probably an isolated case (and maybe due to other poor process decisions my company saddles us with) but I thought I'd mention it for completeness.
3) The history? All stored on the server. Translation: If you enjoyed waiting for it to switch tabs, create/reparent a workspace, and update then you're in for more of the same when you want to view history.
4) The way its exclusion rules are done is not only terrible but also pathetic. Under Windows, you actually have to create an environment variable where you can define exclusions for files that you don't want to show up. IT DOES NOT SUPPORT REGEX. I've seen several other SCMs that offer much better approaches (I'm fond of the ignore files used in Hg; I think there is something similar in Git too; a hypothetical example is sketched just after this list) -- not only are both regex and glob patterns supported, but defining this in a FILE is more system-friendly and much easier to edit than putting it into an environment variable. Not only that, but it seems that the ignore filters are iffy at best. The way our projects are defined has the build folder under the project folder (which is source controlled), and trying to exclude all folders under the build folder doesn't seem to work -- most of them still show up in my "External" filter even after setting up rules.
5) Its check-in process (a "Promote") also seems to run with the theme of slow and inefficient. We use an external ticket system (not AccuWork... our ticketing system has its flaws, but after using AccuRev, I can't imagine that product to be much better). Anyway, when we say "Promote [this file]", first it pops up with another modal dialog (after the required waiting, while it does more stat processing), then it presents a list of ALL tickets it has pulled (there are a lot... too many to reliably find anything). Next, we must enter our ticket number from the other system and wait some more while it takes forever to find a match (I thought it already pulled the list... geez). Finally, it will display the matches, then we pick one and tell it to promote using that ticket number. After yet some more waiting, we're finally done.
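For anyone curious what the Hg approach mentioned in point 4 looks like, here is a hypothetical .hgignore (the patterns are made up for illustration, not my actual filter set); it lives at the repository root and supports both glob and regexp sections:

    # .hgignore at the repository root (illustrative patterns only)
    # glob patterns match anywhere in the tree unless rooted
    syntax: glob
    build/**
    *.class
    *.log

    # regexp patterns; anchor with ^ to root them at the top of the repo
    syntax: regexp
    ^dist/.*\.tmp$

Git's .gitignore works along the same lines with glob patterns, and either approach beats editing an environment variable by hand.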
I could go on but I'll stop there.... this post is getting too long. Instead, let me sum up Accurev in my own way: After having to wait for all these slow annoying "Stat processing", etc dialogs during an issue where we were trying to quickly get a fix out, I came up with a new slogan for them: "AccuRev: when seconds count, your fix is only minutes away".
Since management won't get rid of Accurev (I know they won't go for anything without Enterprise support but I've begged for them to consider anything else: SmartGit...Kiln...Perforce...), I have been using TortoiseHg to locally version control my files (in addition to Accurev). It is a little more work. But for those saddled with Accurev, it makes life so much easier. You get: better diff management -- MUCH MUCH easier to see and review code changes after an "accurev update", the ability to view some history without waiting 10 years for the server, ability to share directly between you and another dev (assuming they also install it), ability to revert/restore your changes if you accidentally wipe something out while trying to get clear of Accurev's merge hell ("Overlapped" files), and even more if you can get the rest of your team using it.
EDIT: Forgot to mention, during a conversation with our build engineers I was told that while Accurev has a Java API that you can develop against, it apparently requires purchasing some sort of additional licensing. I can't confirm this since a) I can't find pricing anywhere on Accurev's website* and b) I doubt like hell they'd tell me at work...
*Kinda weird considering I can find some sort of rough pricing for Perforce, Kiln, StarTeam and SmartGit quite easily. I usually get a sketchy feeling when some product won't list any sort of price up front, guess it shouldn't surprise me too much that Accurev falls into that category...
Well, all I can say is that I completely agree. The back-end is great but the UI sucks. The stream functionality is great because it makes merging a no-brainer, as all changes from parent streams are automatically propagated to all children. I wrote a post about the Accurev UI that explains most of the shortcomings I've come across over the last 2 years.
The short answer:
Use the latest SVN server and SmartSVN (the community edition is free) as a client.
You will not pay anything and you can get everything you need.
The gory details:
BTW, the feature of imposing change management rules during check-in is trivial to write as an SVN hook (a minimal sketch follows the list below). We did it in a couple of hours, in one hundred lines (or thereabouts) of code - it works wonderfully and never broke. It integrates SVN with Bugzilla and imposes rules such as:
In order to commit you have to enter a message
In order to commit you must have entered a Bugzilla ID that is in a "Valid" commit state.
... and so on, you can build your own rules to your heart's content
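As a rough idea of what such a hook looks like, here is a minimal sketch in Python (not our actual script; the Bugzilla-ID format and the stubbed state check are assumptions). Subversion invokes the pre-commit hook with the repository path and the transaction name, and a non-zero exit rejects the commit:

    #!/usr/bin/env python
    # Minimal pre-commit hook sketch for Subversion.
    import re
    import subprocess
    import sys

    repos, txn = sys.argv[1], sys.argv[2]

    def reject(message):
        sys.stderr.write(message + "\n")
        sys.exit(1)

    # Rule 1: a log message is required.
    log = subprocess.check_output(["svnlook", "log", "-t", txn, repos]).decode().strip()
    if not log:
        reject("Commit rejected: please enter a log message.")

    # Rule 2: the message must reference a Bugzilla ID, e.g. "Bug 12345".
    match = re.search(r"\b[Bb]ug\s+(\d+)\b", log)
    if not match:
        reject("Commit rejected: please reference a Bugzilla ID (e.g. 'Bug 12345').")

    # Rule 3 (stub): a real hook would now query Bugzilla (web service or DB)
    # and reject the commit unless bug match.group(1) is in a valid commit state.
    sys.exit(0)

Drop something like this into hooks/pre-commit in the repository (and make it executable), and SVN enforces the rules on every commit.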
Accurev seems to be marketware to me ... lousy GUI client ... very slow (we had to upgrade the HW to make it actually work effectively), and of course ... you have to pay for it! Ah yes, if you do use it, I hope that you do not have to replicate your server between some place in the US and some place in India :)
Perforce is more robust but it is not very easy to administer. In any case, it is a superior product in comparison to Accurev.
VSS and stuff like that should not even be considered as "version control" systems when it comes to writing professional software (typically, enterprise software) in the 21st century. That's like writing your reports on a typewriter ;-)
If you know what you are doing (with your software) then SVN will be a robust and efficient solution for you. With (at least) two robust and efficient revision control systems in existence today (SVN/Git) there is very little room to justify working with a proprietary solution; some reasons could be "inertia": you have it, you don't mind paying for it, and you haven't had any major issues -- in other words, it works for you.
I use SVN everywhere, when it didn't exist I was using CVS, and before that ... no I am not going to tell you how old I am ;-)
Hope this helped ...
Ciao.
I've been the SVN and Accurev administrator for a time. Accurev took a long time to grow on me - about six months, but I like it now for a corporate enterprise environment. Here's a few things to consider.
Pros:
Personal code history
Code changes are kept on the server when the user performs a keep. The keep is personal to the user and doesn't distribute to other users until a promote command is issued.
Code saved by a keep stays on the server and remains available even if the user performs a revert operation.
In most cases, promoting the code to higher streams for distribution is fairly simple.
Administration is fairly simple
Installation works well
Performance is much improved in version 5.3, which changed the backend to a PostgreSQL database
The CLI is rich and extensive (a small sketch of the keep/promote flow follows at the end of this answer)
Cons:
A real clunky user interface
Resolving overlaps can be complex, just like conflicts in SVN
However, like any complex tool, your appreciation will increase the more you understand and know about it.
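To make the keep/promote distinction above concrete, here is a tiny illustrative sketch driven from Python. The verbs (keep, promote, update) are the real CLI commands, but treat the -c comment flag and the file paths as assumptions from memory and check accurev help on your version before relying on them:

    #!/usr/bin/env python
    # Illustrative only: wraps the AccuRev CLI to show the keep-then-promote flow.
    import subprocess

    def accurev(*args):
        print("$ accurev " + " ".join(args))
        subprocess.check_call(("accurev",) + args)

    # 1. Keep: a private checkpoint, stored on the server but visible only
    #    in my workspace until promoted.
    accurev("keep", "-c", "WIP: refactor parser", "src/parser.c")

    # 2. Promote: publish the kept change to the parent stream so it flows
    #    to other workspaces and child streams.
    accurev("promote", "-c", "Bug 12345: fix parser crash", "src/parser.c")

    # 3. Update: pull down whatever has been promoted into the streams
    #    above this workspace.
    accurev("update")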
I used AccuRev at a previous job and didn't have any problems with it, but I very much prefer Subversion (even without comparing the price difference). I remember the client GUI being pretty slow too. Also, I do recall that the GUI just called their command-line utilities to interface with the repository. So, it probably won't be that hard to use those interfaces for your DIY tools.
I have used Accurev for one year. I don't like it. Here are some problems I encountered:
1. Its GUI is terrible: it's so slow that each time I switch between tabs (streams and workspaces) or perform some action I have to wait for several seconds. It sometimes gives you a confusing error message that doesn't help you find what's wrong.
2. It has so many concepts that you have to spend much time learning Accurev itself.
3. I once encountered this problem: I had a version-controlled file that gets modified by our build process. Later my teammate moved that file to another location in his workspace and promoted the change. When I ran "accurev update" it simply told me "some file has been moved" and everything looked normal. But actually the command stopped at the moved file and no longer updated other files. It's very confusing - the update command did not update the workspace, but you have no idea about it. The only output message, "some file has been moved", looked just like other verbose output. It did not tell me my update had failed or aborted or anything else.
Before that I used SVN and ClearCase. SVN is a great tool, simple and easy to use. And I did not have so many complaints about ClearCase. Accurev is really frustrating...
I've just come across this discussion and thought I would share our experiences with AccuRev.
We have been using the Dimensions SCM from Serena for around 8 years. Two years ago we had a major problem integrating our India-based development team with our UK dev team. It was clear that we were not going to meet our needs with the current system, and hence we set about evaluating a number of options. I discuss all of this in this article: How We Integrated Our Offshore Dev Team.
Our experience of using AccuRev has so far been very positive.
It is easy to setup and administer.
Users are able to get going very, very quickly (especially important for the India dev team)
We've never had a problem with speed (in fact this is one of the main plus points for us)
The replication works like a dream
I do agree that the UI can be a bit clunky (especially the Unix client). I am hoping that it will be better in the latest version when we update to that next month.
All in all I would say that this was one of the best decisions and purchases we have made.
Note: I am an AccuRev user and I like it very much. I have already upvoted a few answers here, and would like to add:
I've just recently stumbled over this "review" of AccuRev in the book Continuous Delivery by Jez Humble and David Farley:
[Chapter 14, p 385]
Commercial Version Control Systems
(...) the only commercial VCSs that we are able to wholeheartedly recommend are:
(...)
AccuRev. Offers ClearCase-like ability to do stream-based development without the crippling administrative overhead and poor performance associated with ClearCase.
(...)
To which I might add that I never have used ClearCase, but I am the AccuRev admin around here, and it is indeed very little work to administer. (WRT performance, this question might give more insight.)