Given that it's very hard to find anything about dependency version ranges in the official documentation (the best I could come up with is http://docs.codehaus.org/display/MAVEN/Dependency+Mediation+and+Conflict+Resolution), I wonder if they're still considered first-class citizens of Maven POMs.
I think most people would agree that they're a bad practice anyway, but I wonder why it's so hard to find anything official about them.
They are not deprecated in the formal sense that they will be removed in a future version. However, their limitations (and the consequent lack of wide adoption) mean that they are not as useful as originally intended, and that they are unlikely to get improvements without a significant re-think.
This is why the documentation exists only in the form of the design doc: they exist, but important use cases were never finished to the point where I'd recommend using them in general.
If you have a use case that works currently and can accommodate the limitations, you can expect them to continue to work for the foreseeable future, but there is little beyond that in the works.
I don't know why you think that version ranges are not documented. There is a dedicated section in the Maven Complete Reference documentation.
Nevertheless, a huge problem (in my opinion) is that the documentation states that "Resolution of dependency ranges should not resolve to a snapshot (development version) unless it is included as an explicit boundary." (the link you provided), but the system behaves differently: if you use version ranges, you will get SNAPSHOT versions if they exist in your range (MNG-3092). The discussion about whether this is desired behaviour has not been settled yet.
Currently, if you use version ranges, you might get SNAPSHOT dependencies, so you really have to be careful and decide whether that is what you want. It might be useful for dependencies you develop yourself, but I doubt you should use it for 3rd party libraries.
Version ranges are the only reason that Maven is still useful. Even considering not using them is bad practice, as it leads you into the disaster of multi-module builds, non-functional parent POMs, builds that take 10 minutes or longer, and badly structured projects like Spring, Hibernate and Wicket, as we cover on our Illegal Argument podcast.
To answer your question, they are not deprecated and are actively used in many projects successfully (except when Sonatype allows corrupt metadata into Apache Maven Central).
If you want a really good example of a non-multi-module build (reactor.xml files only) where version ranges are used extensively, have a look at Sticky code (http://code.google.com/p/stickycode/).
From OptaPlanner 8.17 on, it seems that the code of the task assigning example project has been refactored a lot. I couldn't find any comment about these changes in the release notes or on GitHub.
In particular, since this version the implementation of the problem to solve no longer involves chained variables. Could someone from the OptaPlanner team explain why? I'm also a bit confused because the latest version of the documentation for this example project still references the deleted classes from versions before 8.17 (e.g.
org/optaplanner/examples/taskassigning/domain/TaskOrEmployee.java).
It's using @PlanningListVariable, a new (experimental) alternative to chained planning variables, which is far easier to understand and maintain.
Documentation for this new feature hasn't been written yet. We're finishing up the ListVariableListener interface, and then the documentation will be updated to cover @PlanningListVariable too. At that point, it will be ready for announcement.
Unlike a normal feature, this big, complex feature took more than a year to bake, which is why it's been delivered in portions. One could argue the task assignment example shouldn't have escaped the feature branch, but it was proving extremely expensive not to merge the stable parts of the feature branch sooner rather than later.
I'm working on a project which includes OptaPlanner. Here I understand that a list cannot be annotated with @PlanningVariable:
OptaPlanner currently doesn't support a @PlanningVariable on a collection. Although a future version will support it for flexibility reasons, it probably has an inherent performance and complexity cost, so it might be better to avoid it anyway.
I was wondering whether such a version supporting this feature is already available, even though I understand the performance and complexity problems it creates.
It's not yet available (and I can't speculate when it will be available), see https://issues.jboss.org/browse/PLANNER-728 for the specification.
It's an important issue.
I am creating a new project that will run in an Azure Web App on the new ASP.NET 5. We are not planning to run it on Linux or anything like that, at least for now. So the question is: should I try to keep both frameworks if possible, just in case, or should I prefer one of them? For example, there are far fewer dependencies that I can use with dnxcore50, which is not so nice. So the main question is: are there any benefits (performance, stability, etc.) to using dnxcore50 over dnx451 when running in an Azure Web App?
I have to start by saying that I'm still a beginner in ASP.NET 5 (like most others), which is why I didn't post my answer earlier; you should also ignore my reputation, because it comes from other subjects that I know better.
I think that everybody who switches to ASP.NET 5 asks the same question: does it make sense to keep both frameworks in a project? Below are my personal thoughts on the subject.
My personal choice, and my short recommendation to you: keep both frameworks until you find some really important reason to drop one of them.
ASP.NET 5 is still not final. The strategy is not fully fixed and could change on short notice. Just some examples: previous beta versions supported "Helios" as an option for hosting ASP.NET 5 applications on IIS, and that option was dropped later (see the statement). Even the name dnxcore50 has now been renamed to dotnet5.4, at least in all internal Microsoft components (see the announcement). One can suppose that other things could change in the future as well. Thus I think that putting all your eggs in one basket would be too dangerous right now: keeping both frameworks reduces the risk.
The next thing I found was the following: dnxcore50 (dotnet5.4, or CoreFX, the .NET Core foundational libraries) doesn't support many features of the full .NET Framework. One important example for me was the missing XSD schema validation (see here and here). I use XML only in combination with XSD schema validation, and I prefer JSON in most other cases. Keeping both frameworks in your project can help you locate the parts of your code that are not yet implemented in CoreFX, and help you move that code into a separate component or change its implementation.
About performance: one should distinguish the potential of each framework from its current implementation. In general, CoreFX was redesigned and decomposed; many parts of the monolithic mscorlib were separated out or removed (remoting, AppDomains and so on). This means the performance of CoreFX should be better: theoretically, the factored API can provide better performance, and individual parts of CoreFX can be improved and published in new versions more easily. Having more modules instead of one monolith gives a new way to improve performance and fix bugs. On the other hand, moving dependencies to new versions can introduce new compatibility problems, which increases risk and can decrease stability. By keeping both frameworks we can test whether a new problem also exists in the alternative framework, which lets us judge whether the latest dependency changes, rather than the latest changes to our own code, are the origin of the problem.
I could continue with pros and cons of using each framework, but nobody likes to read a long text, and all my arguments lead me to the same practical decision: keep both frameworks in my projects by default until I find a real requirement to drop one of them.
No major advantages really so far.
This might change in the future, which is why I'm planning to target both (CoreCLR and .NET 4.6). A lot of investment is being made in CoreCLR, but also in Docker and Service Fabric.
Just my 2 cents.
This is a more general question than a language-specific one, although I bumped into this problem while playing with the Python ncurses module. I needed to display locale characters and have them recognized as characters, so I quickly monkey-patched a few functions/methods from the curses module.
This was what I call a fast and ugly solution, even if it works, and the changes were relatively small, so I can hope I haven't messed up anything. My plan was to find another solution, but seeing that it works, and works well, you know how it is: I went on to the other problems I had to deal with, and I'm sure that if no bug shows up in this, I'll never make it better.
The more general question occurred to me, though: obviously some languages allow us to monkey-patch large chunks of code inside classes. If this is code I only use myself, or the change is small, that's fine. But what if some other developer takes my code? They see that I use some well-known module, so they can assume it works the way it usually does, and then a method suddenly behaves differently than it should.
So, very subjectively: should we use monkey patching, and if so, when and how? How should we document it?
edit: for @guerda:
Monkey-patching is the ability to dynamically change the behavior of a piece of code at run time, without altering the code itself.
A small example in Python:
import os
def ld(name):
    print("The directory won't be listed here, it's a feature!")
os.listdir = ld
# now what happens if we call os.listdir("/home/")?
os.listdir("/home/")
Don't!
Especially with free software, you have every possibility to get your changes into the main distribution. But if you have a weakly documented hack in your local copy, you'll never be able to ship the product, and upgrading to the next version of curses (security updates, anyone?) will be very costly.
See this answer for a glimpse into what is possible on foreign code bases. The linked screencast is really worth a watch. Suddenly a dirty hack turns into a valuable contribution.
If you really cannot get the patch upstream for whatever reason, at least create a local (git) repo to track upstream and have your changes in a separate branch.
Recently I've come across a case where I had to accept monkey-patching as a last resort: Puppet is a "run-everywhere" piece of Ruby code. Since the agent has to run on (potentially certified) systems, it cannot require a specific Ruby version, and some of those versions have bugs that can be worked around by monkey-patching select methods in the runtime. These patches are version-specific, contained, and the target is frozen. I see no other alternative there.
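For comparison, here is a minimal Python sketch of the same pattern (the Formatter class and the version check are hypothetical stand-ins, not Puppet code): the patch keeps the original signature, is applied only on the affected runtime, and is trivial to delete once the upstream bug is fixed.
import sys

class Formatter:
    """Stand-in for a third-party class whose title() misbehaves on one runtime version."""
    def title(self, text):
        return text.title()

def _patched_title(self, text):
    # Same signature and contract as the original, so callers notice no difference.
    return " ".join(word.capitalize() for word in text.split())

# Apply the workaround only on the affected interpreter version, nowhere else.
if sys.version_info[:2] == (3, 8):  # hypothetical affected version
    Formatter.title = _patched_title

print(Formatter().title("monkey patching as a last resort"))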
I would say don't.
Each monkey patch should be an exception, and marked as such (for example with a //HACK comment) so that it is easy to track down later.
As we all know, it is all too easy to leave the ugly code in place because it works, so why spend any more time on it? So the ugly code will be there for a long time.
I agree with David in that monkey patching production code is usually not a good idea.
However, I believe that for languages that support it, monkey patching is a very valuable tool for unit testing. It allows you to isolate the piece of code you need to test even when it has complex dependencies - for instance with system calls that cannot be Dependency Injected.
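As an illustration, here is a minimal Python sketch of that idea using unittest.mock.patch; report_disk_usage is a hypothetical function under test that wraps a real system call.
import shutil
from unittest import mock

def report_disk_usage(path="/"):
    # Hypothetical production code that depends on a real system call.
    total, used, free = shutil.disk_usage(path)
    return f"{free // 2**30} GiB free of {total // 2**30} GiB"

def test_report_disk_usage():
    # Temporarily replace the system call; the patch is undone when the block exits.
    with mock.patch("shutil.disk_usage", return_value=(100 * 2**30, 60 * 2**30, 40 * 2**30)):
        assert report_disk_usage() == "40 GiB free of 100 GiB"

test_report_disk_usage()
Because the patch is scoped to the with block, the real shutil.disk_usage is restored automatically after the test, which avoids the "surprise for the next developer" problem described in the question.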
I think the question can't be addressed with a single definitive yes-no/good-bad answer - the differences between languages and their implementations have to be considered.
In Python, one needs to consider whether a class can be monkey-patched at all (see this SO question for discussion), which relates to Python's slightly less-OO implementation. So I'd be cautious and inclined to expend some effort looking for alternatives before monkey-patching.
In Ruby, OTOH, which was built to be OO down into the interpreter, classes can be modified irrespective of whether they're implemented in C or Ruby. Even Object (pretty much the base class of everything) is open to modification. So monkey-patching is rather more enthusiastically adopted as a technique in that community.
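A small Python illustration of that difference (Greeter is just an example class): classes defined in Python can be patched freely, while C-implemented built-in types such as str reject new attributes.
class Greeter:
    def greet(self):
        return "hello"

# Patching a class written in Python works fine:
Greeter.greet = lambda self: "hi there"
print(Greeter().greet())  # hi there

# Patching a built-in (C-implemented) type does not:
try:
    str.shout = lambda self: self.upper() + "!"
except TypeError as exc:
    print(exc)  # can't set attributes of built-in/extension type 'str'
In Ruby, by contrast, even core classes like String can be reopened and modified, which is part of why the technique is more common there.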
One thing I've always found frustrating is when a library I use is no longer maintained. Even looking at update history and community beforehand, I've run into the situation where I check back later to find that the version I'm using is the last version.
Generally this goes unnoticed until a few months have passed, or some bug/limitation has been found. I run into this fairly often when coding in Python, because my desire to upgrade to a new version of the interpreter can easily introduce problems in libraries that worked fine before. My question is: what is the best response to this situation?
Do you become the maintainer of the old library? Even if you're only fixing the bugs you care about, this is still a lot of work. Especially if the library is large, complex, and has less-than-well-documented code (the case more often than not).
Do you switch to a different library (if there is one)? This is also a significant undertaking, with the potential to introduce new bugs, especially if the only alternatives approach the problem from a different angle. This can be true even if you had the foresight to write an abstraction layer for the old library's functionality.
Do you roll your own? It probably ends up as less code than the old library, since you only write the parts you care about. It's therefore easier to maintain in the future. But now you've wasted days/weeks/months to produce something that is probably less functional, and is guaranteed to introduce tons of new bugs.
I realize the answer depends on the specific case: the size of the library, whether source is available, how maintainable it is, how much of it your code uses, how deeply your code relies on it, etc. I'm looking for answers across a range of cases. What are your experiences with this problem?
Well, you've found one argument to lessen the number of external dependencies...
I've come across this in several Java projects I've audited; it seems people have a tendency to drop in a Jar found somewhere on the Web for the tiniest amount of reuse possible from it. The result is a mess of dependencies that ends up undermining the code base. I prefer to use external components sparingly.
It's probably most useful to ask what you can do beforehand. Make a point of evaluating the future lifetime of an external component before you start using it. Do some research on how large its developer community and its user community are. Also, prefer a component that has one or two "lesser" alternatives you could also use.
If there's something you're tempted to use, but it has only one or two people working on it and isn't used much beyond their own project, then you should probably roll your own - or join forces with the maintainers of the component.
I think the real answer lies in how you select third-party libraries to include in your code.
If you happen to like constantly upgrading your code to the latest version of the language, then by default you can only use libraries that have active communities behind them.
In fact, I would go as far as saying that the only time you want to use a third-party open source library is when the community behind it is large (say at least 40+ users) and it has undergone a few releases.
For a commercial library the same thing applies: how long is the company going to be around, and how many other clients use it?
If you can't find a library in this position, then make sure you abstract the third-party library away in your code so that replacing it isn't hard in the future.
When the Java EE framework my employer chose went belly up, we went out and found a newer, better one. Fortunately Spring was available.
We prefer to roll our own for that very reason. We end up with full control over it, full knowledge of how it works, and we can change it any way we want. When our ass is on the line and the blame game is played, we prefer to reduce the risk and do it ourselves.
We had a situation once where we did use an external library, and it got rewritten and repurposed by the author and no longer did what we expected. We rolled over that, wrote our own version, and continued safely.
The bottom line is safety, and minimization of risk.
If the source is available, the licence is open and the library does the job really well, you have the option to fork the library. By doing this, you can also add new features to it. If the library has lots of things to fix and the code is a mess, it is better to find something else to work with.