I have been reading about semver. I really like the general idea. However, when it comes to putting it to practice, I feel like I'm missing some key pieces of information. I'm not sure where the name of a library exists, or what to do with file variants. For instance, is the file name something like [framework]-[semver].min.js? Are there popular JavaScript frameworks that use semver? I don't know of any.
Thank you!
Let me try to explain.
If you are not developing a library that you intend to maintain for years to come, don't worry about it. If you do want to version your releases properly, read the following.
Suppose you are an architect or developer building a library that is meant to be used by hundreds of developers over time, in a distributed manner. You need to be careful about what you are doing and about what your contributors are adding (interesting features that tempt you to push changes into the currently distributed file). How do you tell your library's users when to upgrade, and in what scenarios? Historically, people followed some sort of versioning scheme of their own, and interestingly, each of those schemes worked fine in isolation.
Then why do you need semver?
It says "There should be a concrete specification for anything for a group of people to follow anything collectively, even though they know it in their minds". With that thought, they made a specification. They have made their observation and clubbed all the best practices in the world about versioning software mainly, and given a single website where they listed them. that is semver.org. Its main principles are :
Imagine you have already released your library as version "lib.1.0.98". Now follow these rules for subsequent development.
Say your library is bundled and named xyz. Given a version number MAJOR.MINOR.PATCH (like xyz.MAJOR.MINOR.PATCH), increment the:
1. MAJOR version when you make incompatible API changes
(existing code of your library's users breaks if they adopt the new version without changing their own programs),
2. MINOR version when you add functionality in a backwards-compatible manner
(existing code keeps working, possibly with some improvements in performance and features as well), and
3. PATCH version when you make backwards-compatible bug fixes.
Additional labels for pre-release and build metadata are available as extensions to the MAJOR.MINOR.PATCH format.
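For example, continuing from the release lib.1.0.98 above (a sketch; the change descriptions are hypothetical):

1.0.98 -> 1.0.99         a backwards-compatible bug fix (PATCH)
1.0.99 -> 1.1.0          new backwards-compatible functionality (MINOR; PATCH resets to 0)
1.1.0  -> 2.0.0          an incompatible API change (MAJOR; MINOR and PATCH reset to 0)
2.0.0  -> 2.1.0-beta.1   a pre-release label on an upcoming minor release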
If you are not a developer, or are not in a position to develop a library held to such a standard, you need not worry about semver at all.
Finally, the famous d3 library follows this practice.
Semantic Versioning only defines how to name your versions. It does not specify what you will do with your version number afterwards. You can put the version numbers in package names, you can store it in a properties file inside your application, or just publish it in a wiki. All those options are open to discussion and not part of the problem space addressed by SemVer.
semver is used by npm and bower (and perhaps some other tools) for dependency management. Using semver, it is possible to decide which version of a package to install when multiple libraries in use depend on the same package.
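For example, a minimal package.json fragment (the package names and versions here are hypothetical) might declare semver ranges like this:

{
  "dependencies": {
    "some-lib": "^1.2.0",
    "other-lib": "~2.3.4"
  }
}

Here ^1.2.0 accepts any backwards-compatible release from 1.2.0 up to (but not including) 2.0.0, while ~2.3.4 accepts only patch updates (2.3.x at or above 2.3.4). npm can only interpret these ranges safely because the packages are assumed to follow semver.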
As others have said, semantic versioning is a standard versioning scheme that tells your users which versions of your library should be compatible with each other, and which ones are not.
The idea is to give your users more confidence that it's safe to upgrade to a newer patch or minor version, because it's tried, tested, and backwards compatible with the previous version. That is, at least, what you're telling your users.
As far as tooling goes, I don't do much in JavaScript, but I typically let my build server handle stamping my assemblies etc. with the correct version. I have a static major number I bump whenever I make breaking changes, a static minor number I bump every time I add new features, and an auto-incrementing patch number for whenever I check in bug fixes.
Especially if this is a JavaScript library you plan to share on a public repository of some kind (nuget, gem, etc.), you probably want some form of automated packaging system, and that is where you put the logic for specifying your version number (in the package metadata, or in the name of the JavaScript file, which is typically the convention I've seen).
Take a look at sbt, the Scala Build Tool. In it, we write dependencies like this:
val scalatest = "org.scalatest" %% "core" % "2.1.7" % "test"
val jodatime = "org.joda" % "jodatime" % "1.4.5"
Wherein the operator %% means "append the version of Scala that you're building with." Packaging things in this language generally creates JAR files named like <my project>_<scala version>_<library version>.jar, which is quite handy for semantically naming things automagically. The % operator can be interpreted as "don't append the Scala version to this part."
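As a concrete illustration (the Scala version below is just an example), %% rewrites the artifact coordinates at resolution time:

// With scalaVersion := "2.11.8", this dependency
libraryDependencies += "org.scalatest" %% "core" % "2.1.7" % "test"
// resolves to the artifact org.scalatest:core_2.11:2.1.7, i.e. the build
// picks the JAR cross-compiled for the Scala 2.11 binary series.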
That said, this convention resulted from the fact that the same library compiled for different Scala versions is not binary compatible with itself. So it was more a consequence of the binary incompatibilities than a conscious design choice.
The title tells it all:
Does breezy fully replace bzr, at least in msys2?
E.g., by aliasing.
I found little info on this:
https://github.com/NixOS/nixpkgs/issues/80740
Yes, Breezy is a full replacement for Bazaar. It's derived from the Bazaar codebase, and compatible with the Bazaar command-line interface.
There are a large number of changes to the internal API, but unless you use third-party plugins or scripts that rely on the bzrlib API, that should not be relevant to you.
We've also dropped support for a number of older platforms (e.g. Windows '95 and '98). I don't think msys2 was ever explicitly supported as a platform, but we're happy to help fix any issues you may run into. See https://www.breezy-vcs.org/pages/support.html for ways to reach out to us.
You can read more about the rationale for the fork here:
https://lists.ubuntu.com/archives/bazaar/2017q2/076170.html
https://www.jelmer.uk/breezy-intro.html
I built a PyPI package and uploaded it to a PyPI server a few days back. Now I want to compare the source code of that already-built PyPI package with the code built today. Is there any way to do this?
I want to compare the already-built PyPI package with the newly built code, and only if there is a difference in the source code create a new package and upload it to the PyPI server.
If you have only Python bytecode, you cannot get back the corresponding source code (that hypothetical transformation is called decompilation, and is not possible in general; read e.g. about Rice's theorem), since any translation (such as the one done by the python program) from source code to bytecode loses some information (e.g. the names of local variables, or the comments explaining the intent of the code).
Deciding whether two functions have equal behavior by static analysis of their source code (and the observable behavior of your code is what you really care about) is an undecidable problem. Learn more about the λ-calculus; it is deeply related to this question.
The source code (by definition, the preferred form of code on which developers work) is not only for computers, but mostly for fellow developers: in other words, most of its value and its meaning is a social one (and that is what free software is about). Read more about the semantics of programs.
For example, renaming a variable from i to x may discard the implicit hint that the intended runtime type of its value was an integer rather than a floating-point number.
Maybe you want some kind of package manager (or some version control system, if you deal with source code, or some build automation tool, if you build then install software out of it). Python has tools such as pip to manage packages. The scons build automation tool uses Python, but there are many other build automation tools, GNU make being a common one (you could use it to drive compilation from .py source files to .pyc bytecode files and their installation). For version control, I recommend git.
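If what you actually want is to diff your current source tree against the last published sdist, here is a minimal sketch (assuming the project publishes an sdist on pypi.org; the package name "mypackage" and the "src" directory are hypothetical):

import filecmp
import json
import pathlib
import tarfile
import urllib.request

name = "mypackage"  # hypothetical package name

# Fetch metadata for the latest release from the PyPI JSON API.
meta = json.load(urllib.request.urlopen(f"https://pypi.org/pypi/{name}/json"))
sdist = next(f for f in meta["urls"] if f["packagetype"] == "sdist")

# Download and unpack the published source distribution.
urllib.request.urlretrieve(sdist["url"], "old.tar.gz")
with tarfile.open("old.tar.gz") as tf:
    tf.extractall("old")

def differs(d: filecmp.dircmp) -> bool:
    # Recursively check for added, removed, or changed files.
    if d.diff_files or d.left_only or d.right_only:
        return True
    return any(differs(sub) for sub in d.subdirs.values())

# sdists conventionally unpack to a single <name>-<version>/ directory.
old_root = next(pathlib.Path("old").iterdir())
if differs(filecmp.dircmp(old_root, "src")):
    print("source changed: build and upload a new release")

Note that the sdist also contains generated metadata (PKG-INFO, etc.), so in practice you may want to restrict the comparison to your package directory.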
PS. Your question is very unclear and smells like some XY problem.
My simple question is: why can't I just use exact versions in my package.json? How is this different from a lockfile?
The main difference is that lockfiles also lock nested dependencies - all of the dependencies of your dependencies, and so on. Managing and tracking all of those changes can be incredibly difficult, and the number of packages that are used can grow exponentially.
There are also situations where you cannot manually specify that a particular version of a package should be used - consider 2 libraries that specify foo at ~1.0.0 and ~2.0.0 respectively. The difference in major version tells us that the API of foo#v1 is not going to match the API of foo#v2, so there's no way you could override the package version at your app level without causing conflicts and failures.
Finally, you might wonder "why have semver at all then? Why not just have all packages manually specify the exact version of their dependencies?" One of the main advantages of semver is it means you don't have to update every dependency in the tree whenever a sub-dependency updates. If I rely on foo, and foo relies on bar, and bar just had a critical bug that was patched, and we're using exact versions for everything, then foo must also be updated before I can get the fix. If foo and bar have different maintainers, or if foo is abandoned, that could take a while and I may need to fork the project (something I've done more than once in Java-land).
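To make that concrete (the names and versions are hypothetical): if foo pins bar exactly, its package.json contains something like

{
  "name": "foo",
  "dependencies": {
    "bar": "1.2.3"
  }
}

and nobody gets bar's 1.2.4 bug fix until foo itself publishes a new release. With a semver range instead,

  "dependencies": {
    "bar": "^1.2.3"
  }

a fresh install can pick up bar@1.2.4 immediately, and your lockfile records which exact version was actually resolved.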
This is very useful for maintaining ecosystems of libraries because it fundamentally reduces the amount of maintenance work required per node in the dependency tree, making it easier to extract libraries and patterns. I once had an early project where we were building a component library that used exact versions, and any time the core library containing shared functionality was updated, we had to submit a PR to each of the other packages to update the version, and sometimes follow-up PRs to components that depended on those. Needless to say, we consolidated the packages after a few months.
Hope that helps!
We are struggling hard with how to use features the correct way.
Let’s say we have the plug-in org.acme.module which depends on org.thirdparty.specific and org.acme.core.
And we have the plug-in org.acme.other which depends on org.acme.core.
We want to create an application from these, which includes a target file and a product file. We have the following options:
One feature per module:
org.acme.core.feature
    org.acme.core
org.acme.module.feature
    org.acme.module
org.acme.other.feature
    org.acme.other
org.thirdparty.specific.feature
    org.thirdparty.specific
This makes the target and product files gigantic, and the dependencies are very hard to manage manually.
One feature per dependency group:
org.acme.module.feature
    org.acme.core
    org.acme.module
    org.thirdparty.specific
org.acme.other.feature
    org.acme.core
    org.acme.other
This approach makes the dependencies very easy to manage, and the target and product files are easy to read and maintain. However, it does not work at all. The moment org.acme.core changes, you need to change ALL the features. Furthermore, the application has no say in what to package, so it can't even decide to update org.acme.core (because of a bugfix or something).
Platform Feature:
org.acme.platform.feature
    org.acme.core
    org.acme.other
    org.thirdparty.specific (but could be its own feature)
org.acme.module.feature
    org.acme.module
This is the approach used for Hello World applications and Eclipse add-ons - and it only works for those. Since all modules' target platforms would point to org.acme.platform.feature, every time anything changes for any platform plug-in, you'd have to update org.acme.platform.feature accordingly.
We actually tried that approach with only about 50 platform plug-ins. It's not feasible to have a developer change the feature for every bugfix. (And while Tycho supports version "0.0.0", Eclipse does not, so it's another bag of problems to use that. Also, we need reproducibility, so having PDE choose versions willy-nilly is out of the question.)
Again it all comes down to "I can't use org.acme.platform.feature and override org.acme.core's version for two weeks until the new feature gets released."
The entire problem is made even more difficult since sometimes more than one configuration of plug-ins is possible (say, for different database providers), and then there are high-level modules using other child modules to work correctly, all of which has to be managed somehow.
Is there something we are missing? How do other companies manage these problems?
The Eclipse guys seem to use the “one feature per module” approach. Not surprisingly, since it’s the only one that works. But they don’t use target platforms or product files.
The key to a successful grouping is when to use "includes" in features and when to just use dependencies. The difference is that "includes" are really included, i.e. p2 will install included bundles and/or included features all the time. That's the reason why you need to update a bundle in every feature if it's included. If you don't update it, you will end up with multiple versions in the install.
Also, in the old days one had to specify dependencies in features. These days, p2 will mostly figure out dependencies from the bundles. Thus, I would actually stop specifying dependencies in features and just use includes. Think of features as a way to specify what gets aggregated.
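For illustration, the difference looks like this in a feature.xml (the feature and bundle IDs here are hypothetical):

<feature id="my.addon.xyz" version="1.0.0.qualifier">
   <!-- included: p2 always installs exactly what is listed here -->
   <includes id="my.addon.xyz.dependencies" version="0.0.0"/>
   <!-- dependency only: p2 may resolve any version matching the constraint -->
   <requires>
      <import plugin="org.acme.core" version="1.2.0" match="compatible"/>
   </requires>
   <plugin id="my.addon.bundle" version="0.0.0" unpack="false"/>
</feature>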
Another key point to grouping is: less is more. If you have as many features as bundles, chances are pretty high that you have a granularity issue. Instead, think about what a user would install separately. There is no need to have four features for things that a user would never install alone. Features should not be understood as a way of grouping development/project structures - that's where folders in SCM or different SCM repos are fine. Think of features as deployment structures.
With that approach, I would recommend a structure similar to the following example.
my.product.base
    base feature containing the bare minimum of the product
    could be org.acme.core plus a few essentials
my.product.base.dependencies
    feature with 3rd party libraries for my.product.base
my.addon.xyz
    feature bundling an add-on
    separate features for things that can be installed separately
my.addon.xyz.dependencies
    3rd party libraries for add-on dependencies
Now in the product definition I would list just my.product.base. There is no need to also list the dependencies features. p2 will fetch and install the dependencies automatically. However, if you want to bind your product to specific versions of the dependencies and don't want p2 to select any matching one, then you must include the my.product.base.dependencies feature.
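In a .product file that could look roughly like this (a sketch; the IDs are hypothetical):

<product uid="my.product" version="1.0.0.qualifier" useFeatures="true">
   <features>
      <feature id="my.product.base" version="0.0.0"/>
      <!-- list my.product.base.dependencies here only if you want to pin 3rd party versions -->
   </features>
</product>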
In the target definition I would include a "my.product.sdk" feature. That feature is an aggregation feature of all other features. It makes target platform management easier. I typically create an sdk feature with everything.
Another feature that is also very often seen is a "master" feature. This is an "everything" feature that may be used for creating a p2 repository during the build. The resulting p2 repository is then used for assembling products.
For a more real world example see here:
http://git.eclipse.org/c/gyrex/gyrex-server.git/tree/releng/features
Features and Continuous Delivery
There was a comment regarding frequent updates to feature.xml. A feature.xml only needs to be modified when there is a change in structure. No updates need to happen when a bundle version is modified. You should reference bundles in features with version 0.0.0. That makes Tycho fill in the proper version at build time. Thus, all you need to do is commit a change to any bundle and then kick off a rebuild. Tycho also takes care of updating the feature qualifier based on the qualifiers of the contained bundles. Thus, the new feature qualifier will be different than in a previous build.
I'm looking for a formal definition of version number formats for .NET Core project.json files.
version
Visual Studio creates a default version number of "1.0.0-*". I would love for this to mean that the * gets updated on successive builds (it doesn't); the build version number is 1.0.0. What does the * mean, and what are the legal possibilities?
dependencies
I expected the dependency numbering to follow the NuGet versioning rules, given that KPM is basically a NuGet front-end, but it doesn't appear to support bracket notation (e.g. "[1,2)") - I get "not a valid version string" when I try anything other than a blank or x.x-* format.
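To illustrate (the package name is hypothetical), a fragment like this is accepted, while replacing the dependency version with a bracketed range such as "[1,2)" is rejected:

{
  "version": "1.0.0-*",
  "dependencies": {
    "SomePackage": "1.0.0-*"
  }
}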
Outside of the source, does anyone have a link to a formal definition?
I'm not sure what's wrong with looking into the source for a definition. I think that's the most accurate place to search, especially now that vNext is hosted on GitHub.
Looking at the exception described, we're pointed to SemanticVersion.cs.
In the method TryParseInternal, it's fairly obvious why you're running into issues when attempting to declare min/max versions that way. There is simply no handling for [,] or (,) built into that method.
If we look into the regular NuGet version specification, it's obvious that TryParseVersionSpec does have this handling built in.
As for documentation specifying acceptable formats, you'll probably have to wait until it's out of CTP status. If you believe it's an issue, you should document it in GitHub. The contributors are very responsive to these types of issues. Personally I'm not sure if there's a need for setting a maximum version of a dependency when it's deployed with your build.