What exactly does "Shelve base revisions of files under distributed version control systems" do in IntelliJ?

This is the official description from the docs:
If this option is enabled, the base revision of files will be saved to
a shelf that will be used during a 3-way merge if applying a shelf
leads to conflicts. If it is disabled, IntelliJ IDEA will look for the
base revision in the project history, which may take a while;
moreover, the revision that the conflicting shelf was based on may be
missing (for example, if the history was changed as a result of the
rebase operation).
What does this setting do in simpler terms, and does it have implications for merge conflicts even when shelves are not used explicitly?
The reason for this question is mainly an issue we frequently face during merge conflicts, where old changes that delete lines of code still in use keep re-appearing out of nowhere.
One of the patch files in the .idea/shelf directory contains these exact changes, which makes me assume it is related to what we are witnessing:
Uncommitted_changes_before_Checkout_at_28_12_2021_10_30_[Default_Changelist]

Related

Strategies for maintaining a Symbol Store for nightly builds & release builds

I am trying to set up a central symbol server for my organization and its various products. Each product has a nightly build, as well as "one-off" beta, RC, and release builds.
The goal I have is to keep about a month's worth of nightly build symbols, as we do a lot of "dogfooding" here, so people use internal builds, and we'd like to easily debug files we get from our internal Winqual when possible.
I also need to be able to permanently keep all beta, RC, and release build symbols.
After doing much research, I think the best approach here is to have two symbol servers: one for the nightly builds (which have the previous ~30 builds registered), and another to permanently store the beta, RC, and release symbols. I would have the build scripts add to the symbol store using the product and version tags to record the product and build number. After a successful build, a script would use history.txt from the symbol server to identify the oldest build not deleted, then delete it from the symstore.
In the case of the "one off" builds for betas, RCs, and release versions, they would be identified by a build & install person once they're created, and added to the 2nd symbol server (for permanent storage) as well.
So I have a few questions: does this seem at all reasonable? There must be an easier way to do this; won't most organizations with a symbol server need to tackle this problem?
Secondly, if I am to go ahead with this approach, is there a fool-proof way to identify the oldest known symbol set registered with the server? I'd thought about using last-modified dates, but history.txt seems most appropriate; a script parsing it may be error-prone, though. I was hoping it'd be possible to just add symbols with product & version info, as well as delete them by product & version info.
Thanks in advance for any help. I'll gladly answer any questions anyone may have, or provide any clarifications.
I think two separate symbol stores are indeed going to be your best bet. For managing the store for your nightly builds, I'd recommend taking a look at AgeStore: http://msdn.microsoft.com/en-us/library/ff560046(v=vs.85).aspx
I wrote a widget in Wise Script that runs for every build and does the following.
Assuming our versioning is 1.0.1.0
1.) 1.0.1.0 through 1.0.6.0 symbols are added to the store (5 runs)
2.) every time there's a build, the symbol store history file is parsed...
3.) when 1.0.7.0 is built and the symbols are added, 1.0.1.0 symbols are deleted from the symbol store.
I'm basically parsing out the version number, and if the third field is more than 5 less than the current one, I parse out the transaction ID and run symstore.exe del /i %TRANS_ID%.
This prunes my symbols for daily/CI builds to only the last 5 build symbols.
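For illustration, the same add-then-prune flow can be sketched with plain symstore.exe commands. This is only a sketch; the store path, product name, version, and transaction ID below are made-up placeholders, not the author's actual Wise Script:

    rem Register the symbols for the current build, tagged with product and version.
    symstore add /r /f .\output\*.pdb /s \\buildserver\symbols\nightly /t "MyProduct" /v "1.0.7.0"

    rem The store keeps a transaction log in 000Admin\history.txt; each line is a
    rem comma-separated record whose first field is the transaction ID and which also
    rem carries the product and version tags, so the oldest nightly can be found there.
    type \\buildserver\symbols\nightly\000Admin\history.txt

    rem Once the oldest transaction ID has been parsed out, delete that whole transaction.
    symstore del /i 0000000042 /s \\buildserver\symbols\nightly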
For any notable symbols, such as a release, hotfix, or patch, I simply change the product name and add the symbols manually. That way, I am only pruning the dailies.
If you'd like, I could cut/paste the code in here as SMS Installer code (same as Wise). I also wrote a similar widget that keeps my local archive 5 builds deep, so I'm not wasting space in my local archive for CI builds. I use both, but you could use either. They both use a simple .INI file for runtime navigation, so I can put them both in my Jenkins/Jobs/ folder and simply edit the .INI file for each. They're very lightweight, as the .EXE is only 161K for the archive_prunerator and 161K for the symstore_widget, plus a 4 or 5 line .INI file.
AJ

Does Ivy have different resolution behavior depending on status attribute?

My colleague pointed out a flaw in how we maintain our artifacts (I'm still somewhat new to Ivy):
The release builds are marked as "integration", which means Ivy rechecks for new versions on each build, slowing down the build even when it has already cached the dependencies.
That did not make much sense to me, since, I think, Ivy still needs to check what is in the repository before deciding which version to deliver. So I decided to research this a bit to understand exactly what the effects of marking libraries with different status values are.
I cannot find much in the documentation, though, or on the net. What am I missing?
Could someone please shed some light on this?
Thank you
The status is just a string that can be defined for Ivy. It doesn't affect the resolution of artifacts per se, and it has no effect on the default retrieval. It's just a marker for an artifact.
Status:
Status of a revision: A module's status indicates how stable a module revision can be considered. It can be used to consolidate the status of all the dependencies of a module, to prevent the use of an integration revision of a dependency in the release of your module.
Three statuses are defined by default in Ivy:
integration: revisions built by a continuous build, a nightly build, and so on, fall into this category
milestone: revisions delivered to the public but not actually finished fall into this category
release: a revision fully tested and labelled falls into this category
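For reference, that status is simply declared on the info element of the module's ivy.xml. A minimal sketch, with made-up organisation, module, and revision values:

    <ivy-module version="2.0">
        <!-- status can be integration, milestone, or release (or a custom value) -->
        <info organisation="com.example" module="mylib"
              revision="1.2.0" status="release"/>
        <!-- dependencies, publications, etc. -->
    </ivy-module>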
You need to declare the dependency as changing, or configure this on the resolver definition, to achieve what your co-worker mentioned:
Changes in artifacts: Some people, especially those coming from maven 2
land, like to use one special revision to handle often updated
modules. In maven 2 this is called a SNAPSHOT version, and some argue
that it helps save disk space to keep only one version for the high
number of intermediary builds you can make whilst developing.
Ivy supports this kind of approach with the notion of "changing
revision". A changing revision is just that: a revision for which Ivy
should consider that the artifacts may change over time. To handle
this, you can either specify a dependency as changing on the
dependency tag, or use the changingPattern and changingMatcher
attributes on your resolvers to indicate which revision or group of
revisions should be considered as changing.
Once Ivy knows that a revision is changing, it will follow this
principle to avoid checking your repository too often: if the module
metadata has not changed, it will consider the whole module
(including artifacts) as not changed. Even if the module descriptor
file has changed, it will check the publication data of the module to
see if this is a new publication of the same revision or not. Then if
the publication date has changed, it will check the artifacts' last
modified timestamps, and download them accordingly.
So if you want to use changing revisions, use the publish task to
publish your modules, it will take care of updating the publication
date, and everything will work fine. And remember to set
checkmodified="true" on your resolver too!
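Concretely, that usually means something like the following. This is only a hedged sketch; the resolver name, URL patterns, and module names are placeholders:

    <!-- ivysettings.xml: mark SNAPSHOT-style revisions as changing and re-check them -->
    <resolvers>
        <url name="shared" checkmodified="true"
             changingPattern=".*-SNAPSHOT" changingMatcher="regexp">
            <ivy pattern="http://repo.example.com/[organisation]/[module]/ivy-[revision].xml"/>
            <artifact pattern="http://repo.example.com/[organisation]/[module]/[artifact]-[revision].[ext]"/>
        </url>
    </resolvers>

    <!-- ivy.xml: alternatively, mark an individual dependency as changing -->
    <dependency org="com.example" name="mylib" rev="1.0-SNAPSHOT" changing="true"/>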

CVS checkout: I get four options. What do they mean?

When I am checking out an application, I get four options: HEAD, Branches, Versions, and Dates. What do they mean? What is the difference between each of them?
A code repository is a tree of versions, each of which represents the state of the code at some particular point. It's possible to create a new branch of the tree from any point. Thus…
HEAD is the tip of the main trunk of the tree.
A branch is some other route through the tree of versions (e.g., to support a particular set of releases or develop a feature). If you ask to check out a branch, you typically get the tip of that branch.
A version represents an exact state of the code. In CVS, versions are per-file. (Other source control systems have global versioning.)
A date-based checkout represents getting the state of the code at a particular moment. This can be very useful for tracking down bugs.
The other thing that you'll see is a tagged version. That's where a name is given to a particular state of the tree (e.g., to represent an exact release).
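Roughly speaking, those dialog options map onto the cvs command line like this (the module, branch, tag, and date names below are placeholders):

    cvs checkout mymodule                         # HEAD: tip of the main trunk
    cvs checkout -r release-1_2-branch mymodule   # a branch (you get the tip of that branch)
    cvs checkout -r build_1_2_3 mymodule          # an exact tagged version
    cvs checkout -D "2021-12-28 10:30" mymodule   # the state of the code at a given date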
Please take a look at Open Source Development with CVS, especially the Branches chapter. That explains how CVS works and what the head revision is and what branches are.

What is your review process for Rhapsody development?

My team is using IBM's Rhapsody tool to do real-time embedded development. Unfortunately, we are unhappy with our current review process.
More specifically, we've had difficulty because:
there is a lack of a good diff tool for diagram changes
the Rhapsody diff tool doesn't generate reports that you can use in a review
source file history is spotty because source files are generated products in MDD and thus are not configured in a VCS at a fine granularity
running diffs on source code sometimes pulls in unrelated changes made by other devs
sometimes changing a property of a model element changes dozens of source files
it's easy to change a source file through a property change and not know it
Does anyone have any tips for making peer reviews on Rhapsody development robust but low-hassle? Any best practices and lessons learned you would like to share? I'm not looking for a mature process write-up; tidbits I didn't know about would be great.
We use Rhapsody for the same purpose at my workplace. Reviews of model changes are done with a script that opens diffmerge on two copies of our repository (one at the start of the changes, one at the latest). That shows all of the pertinent changes, without any of the internal cruft Rhapsody adds.
Our repo doesn't track the generated sources, but we still frequently see plenty of irrelevant changes in Rhapsody's sbs files. We've started setting the sbs files as read-only on the filesystem, and then changing them to read/write from the properties panel in Rhapsody. That doesn't stop the files you mark as read/write from having cruft inserted, but it prevents unrelated files from being modified.
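If it helps, the read-only step is just a plain filesystem attribute; on Windows something like this works (the path is a placeholder):

    rem Mark all sbs files under the model directory read-only; individual files
    rem are then switched back to read/write from Rhapsody's properties panel.
    attrib +r "C:\work\MyModel\*.sbs" /s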
I still haven't found a way to make Rhapsody stop inserting irrelevant changes (for example: it sometimes adds and removes filename fields between saves, despite minimal changes to the model). It creates a lot of merge conflicts, and I've personally started taking 5 or so minutes per commit to only add the changes that matter.
We have been using Rhapsody for development for the past 5 years. Our current process involves using the Rhapsody COM interface and the Microsoft Word COM interface to dump review packages to Word for design reviews. We also do this to generate the reference manual portion of our SUM.
For code we review the generated source.
We put the model into our version control system, and lock down model elements after they have been reviewed. If your version control tool makes things read only when they are checked in, it prevents you from accidentally changing a model element.
The COM interface is also good for dumping the model to make PowerPoint slides of diagrams if you want to present your design to a customer. You will have to tweak the slides after they are generated, as the pictures usually end up looking a little funny, but it gives a quick starting point.
It is also possible to prevent Rhapsody from writing timestamps to the sbs files by setting the property CG::General::IncrementalCodeGenAcrossSession to false. This can help reduce the amount of unnecessary data.
See this link

Best approach to perform a CMMI Physical Configuration Audit?

The organization I currently work for is moving into the whole CMMI world of documenting everything. I was assigned (along with one other individual) the title of Configuration Manager. Congratulations to me, right?
Part of the duties is to perform a physical configuration audit on a regular basis (they are still defining "regular basis"; it will be either quarterly or monthly). This is basically a check of the source code versions deployed in production against what we believe to be the source code versions in production.
Our project is a relatively small web application written in Java. The file types we work with are Java, JSP, XML, property files, and SQL packages.
The problem I have (and have expressed, but it seems to be getting ignored) is: how am I supposed to physically log on to the production server and verify file versions? And even if I could, it would take a ridiculous amount of time.
The file versions are not even currently in the files (i.e. in a comment or something). It was also suggested that we place visible version numbers on each screen that is visible to the users. I thought this was ridiculous too, since the screens themselves represent only a small fraction of the code we maintain.
The tools we currently use are Netbeans for our IDE and Serena Dimensions as our versioning tool.
I am specifically looking for ideas on how to perform this audit in a hopefully more automated way, that will be both accurate and not time consuming.
My current idea is to add a comment to the top of each file that contains the version number of that file, plus a script that runs when a production build is created and generates an XML file (or something similar) containing the file name and version of each file in the build. Then, when I need to do an audit, I go to the production server, grab the XML file with the info, compare it programmatically to what we believe to be in production, and output a report.
Any better ideas? I know this has to have been done already, and it seems crazy to me that I have not found any other resources.
You could compute a SHA1 hash of the source files on the production server, and compare that hash value to the versions stored in source control. If you can find the same hash in source control, then you know what version is in production. If you can't find the same hash in source control, then there are untracked modifications in production and your new job title is justified. :)
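As a rough illustration of that idea (not a finished tool; the deployment path is a placeholder), a small Java program can walk the deployed tree and print a per-file SHA-1 report, which you can then diff against the same report generated from the tagged sources in Serena Dimensions:

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;
    import java.security.MessageDigest;
    import java.security.NoSuchAlgorithmException;
    import java.util.stream.Stream;

    // Sketch of the hash-based audit: print "relative-path<TAB>sha1" for every
    // deployed file, then diff this report against one generated from the baseline.
    public class PcaHashReport {

        static String sha1Hex(Path file) throws IOException, NoSuchAlgorithmException {
            MessageDigest md = MessageDigest.getInstance("SHA-1");
            byte[] digest = md.digest(Files.readAllBytes(file));
            StringBuilder sb = new StringBuilder();
            for (byte b : digest) {
                sb.append(String.format("%02x", b));
            }
            return sb.toString();
        }

        public static void main(String[] args) throws IOException {
            // Placeholder path: point this at the deployed application on the server.
            Path root = Paths.get(args.length > 0 ? args[0] : "/opt/app/deployed");
            try (Stream<Path> files = Files.walk(root)) {
                files.filter(Files::isRegularFile).forEach(f -> {
                    try {
                        System.out.println(root.relativize(f) + "\t" + sha1Hex(f));
                    } catch (IOException | NoSuchAlgorithmException e) {
                        System.err.println("Could not hash " + f + ": " + e.getMessage());
                    }
                });
            }
        }
    }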
The typical trap organizations fall into with the CMMI is trying to overdo everything. If I could suggest anything, it'd be to start small and only do what you need. So consider any problems that you may have had in the CM area previously.
The CMMI describes WHAT an organisation should do, but leaves the HOW up to you. The CMMI specification, chapter 2, is well worth a read - it describes the required, expected, and informative components of the specification - basically the goals are required, the practices are expected, and everything else is informative. This means there is only a small part of the specification which a CMMI appraiser can directly demand - the goals. At the practice level, it is permissible to have either the practices as described, or acceptable alternatives to them.
In the case of configuration audits, goal SG3 is "Integrity of baselines is established and maintained". SP3.2 says "Perform configuration audits to maintain integrity of the configuration baselines." There is nothing stated here about how often these are done, or how long they may take.
In my previous organisation, FCA/PCA was usually only done as part of the product release process, and we used ClearCase as the versioning tool, with labels applied across the codebase to define baselines. We didn't have version numbers in all the source files, nor did we have version numbers on all the products screens - the CM activity was doing the right thing & was backed up by audits, and this was never an issue in any CMMI appraisal.
We could use the deltas between labels to look at what files had changed, perform diffs to see the actual code changes. An important part of the process is being able to link those changes back to either a requirement/bug report/whatever the reason was which initiated the change.
Our auditing did use scripts to automate the process, but these were in-house developed scripts specific to ClearCase - basically they would list all the files, their versions in the CM system, and the baseline/config item to which they belonged.
Can't you use your source control for this? If you deploy a version and tag your source control with that deployment, you can then verify against the source control system.