I have a case as follows and don't know if there is any convenient solution:
I wrote a set of sources and put them into a package.
Next, I refactored it deeply for performance reasons.
Now I have a new version which contains at least one bug that I have to find.
I would like to have both versions of my package in one project and easily switch between them
when I compile and run the test application.
Of course I cannot simply compile both of them and choose at runtime, because of name conflicts.
Is there any smart way to solve this?
You are looking for a version control system (which is supported in IntelliJ IDEA).
Try Git; it has good support in IDEA.
http://www.jetbrains.com/idea/webhelp/using-git-integration.html
http://www.jetbrains.com/idea/features/version_control.html
Here is a good collection of Git / VCS tutorials:
http://sixrevisions.com/resources/git-tutorials-beginners/
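For example, here is a minimal sketch of that workflow (branch names are just placeholders, and your default branch may be called master or main):

git init                      # put the package sources under version control
git add . && git commit -m "original version of the package"
git checkout -b refactored    # do the performance refactoring on this branch
git commit -am "refactored version"
git checkout master           # build and test against the old version
git checkout refactored       # switch back to the new version to hunt the bug

IDEA's VCS integration lets you do the same branch switching from the UI.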
I have been reading about semver. I really like the general idea. However, when it comes to putting it into practice, I feel like I'm missing some key pieces of information. I'm not sure where the name of a library fits in, or what to do with file variants. For instance, is the file name something like [framework]-[semver].min.js? Are there popular JavaScript frameworks that use semver? I don't know of any.
Thank you!
Let me try to explain.
If you are not developing a library that you intend to maintain for years to come, don't bother with it. If you do want to version every release, read the following.
Suppose you are an architect or developer building a library that is meant to be used by hundreds of developers over time, in a distributed manner. You need to be careful about what you are doing and about what your contributors are adding (interesting features that tempt you to push changes into the currently distributed file), and you need a way to tell your library's users when to upgrade, and in which scenarios. Historically, people have followed some sort of versioning scheme of their own, and interestingly, most of those schemes work fine.
Then why do you need semver?
The idea is that for a group of people to follow something collectively, there should be a concrete specification, even if everyone already roughly knows it in their heads. With that thought, the semver authors observed and collected the best practices around versioning software and published them on a single website: semver.org. Its main principles are:
Imagine you have already released your library with version "lib.1.0.98". Now follow these rules for subsequent development.
Say your library is bundled and named xyz. Then,
Given a version number MAJOR.MINOR.PATCH (like xyz.MAJOR.MINOR.PATCH), increment the:
1. MAJOR version when you make incompatible API changes
(existing code of your library's users breaks if they adopt the new version without changing their own programs),
2. MINOR version when you add functionality in a backwards-compatible manner
(existing code keeps working, and may also gain performance or feature improvements), and
3. PATCH version when you make backwards-compatible bug fixes.
For example, starting from xyz.1.0.98: a bug fix gives xyz.1.0.99, a new backwards-compatible feature gives xyz.1.1.0, and an incompatible API change gives xyz.2.0.0.
Additional labels for pre-release and build metadata are available as extensions to the MAJOR.MINOR.PATCH format.
If you are not a developer, or are not in a position to develop a library to that kind of standard, you need not worry about semver at all.
Finally, the famous d3 library follows this practice.
Semantic Versioning only defines how to name your versions. It does not specify what you will do with your version number afterwards. You can put the version number in package names, store it in a properties file inside your application, or just publish it in a wiki. All those options are open to discussion and not part of the problem space addressed by SemVer.
semver is used by npm and bower (and perhaps some other tools) for dependency management. Using semver, it is possible to decide which versions of which packages to use when several of the libraries you use depend on the same library.
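As a rough illustration of that resolution idea, here is a sketch in Go using the golang.org/x/mod/semver helper (this is not npm's or bower's actual resolver, and newestCompatible is a made-up function):

package main

import (
    "fmt"

    "golang.org/x/mod/semver"
)

// newestCompatible picks the highest available version that shares a MAJOR
// version with the requested one and is not older than it -- roughly what a
// caret range such as ^1.0.98 means to npm.
func newestCompatible(requested string, available []string) string {
    best := ""
    for _, v := range available {
        if semver.Major(v) != semver.Major(requested) {
            continue // a different MAJOR signals incompatible API changes
        }
        if semver.Compare(v, requested) < 0 {
            continue // older than the version that was asked for
        }
        if best == "" || semver.Compare(v, best) > 0 {
            best = v
        }
    }
    return best
}

func main() {
    available := []string{"v1.0.98", "v1.0.99", "v1.1.0", "v2.0.0"}
    fmt.Println(newestCompatible("v1.0.98", available)) // prints v1.1.0, never v2.0.0
}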
As others have said, semantic versioning is a standard versioning scheme that tells your users which versions of your library should be compatible with each other, and which ones are not.
The idea is to give your users more confidence that it's safe to upgrade to a newer patch or minor version, because it is tried, tested, and backwards compatible with the previous version. At least, that is the perception you are giving your users.
As far as tooling goes, I don't do much in JavaScript, but I typically let my build server handle stamping my assemblies etc. with the correct version. I have a static major number I bump whenever I make breaking changes, a static minor number I bump every time I add new features, and an auto-incrementing patch number for whenever I check in bug fixes.
Especially if this is a JavaScript library you plan to share on a public repository of some kind (nuget, gem, etc.), you probably want some form of automated packaging system, and that is where you put the logic for specifying your version number (in the package metadata and in the name of the JavaScript file, which is the convention I've typically seen).
Take a look at sbt, which is the Scala Build Tool. In it, we write dependencies like this:
val scalatest = "org.scalatest" %% "scalatest" % "2.1.7" % "test"
val jodatime = "org.joda" % "jodatime" % "1.4.5"
Here the %% operator means "append the Scala version that you're building against." Packaging in sbt generally creates JAR files with names like <my project>_<scala version>-<library version>.jar, which is quite handy for naming things semantically, automagically. The % operator can be read as "don't append the Scala version to this part."
That said, this convention arose because the same library compiled against different Scala versions is not binary compatible with itself. So it was more a consequence of those binary incompatibilities than a conscious design choice.
I'm writing a command-line utility in Go that (as part of its operation) needs to get a password from the user. There's a great gopass module for Unix that does this, and I know how to write one for the Windows console. The problem is that the Windows module obviously won't build on *nix, and the *nix version won't build on Windows. Since Go lacks any preprocessor support (as far as I can tell), I have absolutely no idea what the right way to approach this is. I know it's possible, since Go itself must do this for its own libraries, but the tooling I'm used to (conditional imports/preprocessors/etc.) seems to be missing.
Go has build constraints, which can either be specified as comments in a .go file, or as part of the file name.
One set of constraints is for the target operating system, so you can have one file for Windows and one for, e.g., Linux, and implement the same function in two different ways in the two files.
More information on build constraints is available at http://golang.org/pkg/go/build/#hdr-Build_Constraints
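For example, here is a minimal sketch (the file and function names are hypothetical, and the real platform-specific code is elided) with one implementation per target OS: the first file is selected by its _windows.go name suffix, the second by a build constraint comment.

// password_windows.go -- compiled only when GOOS=windows; the _windows
// file-name suffix is itself a build constraint.
package main

import "fmt"

// getPassword would hold the Windows console implementation.
func getPassword(prompt string) (string, error) {
    fmt.Print(prompt)
    // Windows-specific console code goes here.
    return "", nil
}

// password_unix.go -- compiled everywhere except Windows, thanks to the
// build constraint comment below (newer Go also accepts //go:build !windows).
// +build !windows

package main

// getPassword would wrap a Unix implementation such as the gopass package.
func getPassword(prompt string) (string, error) {
    // Unix terminal-specific code goes here.
    return "", nil
}

The rest of the program just calls getPassword and never has to know which file provided it.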
I need to set version information on an existing .dll.
I need to add these to the DLL:
1. File Version
2. Product Version
I tried this free version, but it does not work.
Any ideas?
There is a tool named verpatch that does exactly that.
After you download it, you can run it from the command line as below:
verpatch your.dll /pv "product.version" /va "file.version"
There are many other flags that can be used to add extra information.
Try:
verpatch /?
There is Resource Tuner Console from Heaventools Software.
Resource Tuner Console is a command-line tool that enables developers to automate editing of different resource types in large numbers of Windows 32- and 64-bit executable files.
See specifically the Changing Version Variables And Updating The Version Information page for further details.
I've created a tool for this purpose because I didn't find anything that was easy enough to use and to automate. Developers find it useful.
I'm sorry if this might seem like a self-ad, but I know how annoying it is to keep versions in sync...
I'm working on Eclipse RCP, of which I have explored a few concepts required for my project, and I know how to export an RCP product (which is portable).
My development approach was: for each Java file change, I delete the previously exported product and export it again. I think my approach is dumb; there might be better ways.
Exporting each time just to fix one Java file is time-consuming. As a workaround I thought of copying the class files generated in bin into my plugin JAR, but for my Java file there are multiple generated class files, such as classname$1.class, etc. It was difficult to replace all of these class files in my plugin.jar.
What is the better practice in such a situation? What do expert RCP developers do to get a Java change reflected in an exported product without deleting the product or creating a new one? Isn't there any hot-deployment kind of thing, analogous to how a JSP change in an application server is hot-deployed?
Looking forward to suggestions.
Day to day I generally just run my product in the debugger - code changes are reflected immediately.
However you can use p2 to update a previously exported product - although this requires exporting a new version of the product first to generate a compatible p2 repository. An alternative is to push your changes to a build server and have it build the new product and p2 repository for you. I find Tycho is a good choice to help automate my builds.