Why are chained variables no longer used in the task assigning example solution from OptaPlanner 8.17?

From OptaPlanner 8.17, the code of the task assigning example project seems to have been heavily refactored. I didn't succeed in finding any comment about these changes in the release notes or on GitHub.
In particular, since this version the implementation of the problem to solve no longer involves chained variables. Could someone from the OptaPlanner team explain why? I'm also a bit confused because the latest version of the documentation for this example project still references classes deleted in 8.17 (e.g.
org/optaplanner/examples/taskassigning/domain/TaskOrEmployee.java).

It's using @PlanningListVariable, a new (experimental) alternative to chained planning variables, which is far easier to understand and maintain.
Documentation for this new feature hasn't been written yet. We're finishing up the ListVariableListener interface, and then the documentation will be updated to cover @PlanningListVariable too. At that time, it will be ready for announcement.
Unlike a normal feature, this big, complex feature took more than a year to bake. That's why it's been delivered in portions. One could argue the task assignment example shouldn't have escaped the feature branch, but it was proving extremely expensive not to merge the stable feature branches in sooner rather than later.
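To make that concrete, here is a minimal sketch of the list-variable model, assuming the experimental 8.x API; the Task class and the "taskRange" value range provider are assumed to exist elsewhere in the solution:

import java.util.ArrayList;
import java.util.List;
import org.optaplanner.core.api.domain.entity.PlanningEntity;
import org.optaplanner.core.api.domain.variable.PlanningListVariable;

@PlanningEntity
public class Employee {

    // The employee owns an ordered list of tasks; the solver inserts,
    // removes and reorders elements directly instead of rewiring a chain.
    @PlanningListVariable(valueRangeProviderRefs = "taskRange")
    private List<Task> tasks = new ArrayList<>();

    public List<Task> getTasks() {
        return tasks;
    }
}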

Related

OptaPlanner: List as Planning Variable

I'm working on a project which includes OptaPlanner. Here I understand that a list cannot be annotated with @PlanningVariable:
OptaPlanner currently doesn't support a @PlanningVariable on a collection. Although a future version will support it for flexibility reasons, it probably has an inherent performance and complexity cost, so it might be better to avoid it anyway.
I was wondering if such a version supporting this feature is already available, even if I understand the problems it creates with performance and complexity.
It's not yet available (and I can't speculate when it will be available), see https://issues.jboss.org/browse/PLANNER-728 for the specification.
It's an important issue.
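For context, the chained-variable workaround that the pre-8.17 task assigning example used looks roughly like the sketch below (names follow the old example's deleted classes; getters, setters and the rest of the model are omitted):

import org.optaplanner.core.api.domain.entity.PlanningEntity;
import org.optaplanner.core.api.domain.variable.PlanningVariable;
import org.optaplanner.core.api.domain.variable.PlanningVariableGraphType;

@PlanningEntity
public class Task extends TaskOrEmployee {

    // Each task points at the element before it; an Employee anchors the
    // chain, so an ordered "list" per employee is encoded as a linked chain.
    @PlanningVariable(valueRangeProviderRefs = {"employeeRange", "taskRange"},
            graphType = PlanningVariableGraphType.CHAINED)
    private TaskOrEmployee previousTaskOrEmployee;

    public TaskOrEmployee getPreviousTaskOrEmployee() {
        return previousTaskOrEmployee;
    }
}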

What happened to RNN and Seq2seq options in TensorFlow?

It looks like lots of stuff was recently deprecated or moved to contrib, and it doesn't seem like there are any up-to-date equivalents in core. What was the reasoning behind this? I can't seem to find any GitHub issue or discussion explaining these changes. Example GitHub search
Around the time of the 1.0 release, we made a large effort to carefully choose what should be part of the "official" supported API and feature set, because anything included would have to be supported for a long, long time and becomes very hard to change. The rules for stuff in contrib are less tight in terms of making non-backwards-compatible changes, so components judged likely to change substantially in the near future (i.e., to not be stable yet) were moved to contrib or elsewhere.

Should Maven dependency version ranges be considered deprecated?

Given that it's very hard to find anything about dependency version ranges in the official documentation (the best I could come up with is http://docs.codehaus.org/display/MAVEN/Dependency+Mediation+and+Conflict+Resolution), I wonder if they're still considered a 1st class citizen of Maven POMs.
I think most people would agree that they're a bad practice anyway, but I wonder why it's so hard to find anything official about it.
They are not deprecated in the formal sense that they will be removed in a future version. However, their limitations (and the subsequent lack of wide adoption), mean that they are not as useful as originally intended, and also that they are unlikely to get improvements without a significant re-think.
This is why the documentation is only in the form of the design doc - they exist, but important use cases were never finished to the point where I'd recommend generally using them.
If you have a use case that works currently, and can accommodate the limitations, you can expect them to continue to work for the foreseeable future, but there is little beyond that in the works.
I don't know why you think that version ranges are not documented. There is a dedicated section in the Maven Complete Reference documentation.
Nevertheless, a huge problem (in my opinion) is that the documentation says "Resolution of dependency ranges should not resolve to a snapshot (development version) unless it is included as an explicit boundary." (the link you provided), but the system behaves differently. If you use version ranges you will get SNAPSHOT versions if they exist in your range (MNG-3092). The discussion about whether this is desired has not ended yet.
Currently, if you use version ranges, you might get SNAPSHOT dependencies. So you really have to be careful and decide whether this is what you want. It might be useful for your own in-house dependencies, but I doubt you should use it for 3rd party libraries.
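For reference, a range is declared directly in the <version> element of a dependency. A small illustrative snippet (the coordinates are made up):

<dependency>
  <groupId>org.example</groupId>
  <artifactId>some-library</artifactId>
  <!-- [1.2,2.0) means any version from 1.2 inclusive up to, but excluding, 2.0 -->
  <version>[1.2,2.0)</version>
</dependency>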
Version ranges are the only reason that Maven is still useful. Even considering not using them is bad practice, as it leads you into the disaster of multi-module builds, non-functional parent POMs, builds that take 10 minutes or longer, and badly structured projects like Spring, Hibernate and Wicket, as we cover on our Illegal Argument podcast.
To answer your question, they are not deprecated and are actively used in many projects successfully (except when Sonatype allows corrupt metadata into Apache Maven Central).
If you want a really good example of a non-multi-module build (reactor.xml's only) where version ranges are used extensively, go look at Sticky code (http://code.google.com/p/stickycode/)

Agile practices to avoid deprecated code? [closed]

I am converting an open source Java library to C#, which has a number of methods and classes tagged as deprecated. This project is an opportunity to start with a clean slate, so I plan to remove them entirely. However, being new to working on larger projects, I am nervous that the situation will arise again. Since much of agile development revolves around making something work now and refactoring later if needed, it seems like deprecation of APIs must be a common problem. Are there preventative measures I can take to avoid/minimize API deprecation, even if I am not entirely sure of the future direction of a project?
I'm not sure there is much you can do. Requirements change, and if you absolutely have to make sure that clients of the API are not broken by a newer API version, you'll have to rely on simply deprecating code until you think that no-one is using it any more.
Placing [Obsolete] attributes on code causes the compiler to create warnings if there are any references to the obsolete methods. This way clients of the API, if they are diligent about fixing their compiler warnings, can gradually move to the new methods without having everything break with the new version.
It's useful to use the overload of ObsoleteAttribute that takes a string:
[Obsolete("Foo is deprecated. Use Bar instead for munging widgets.")]
<frivolous>
Perhaps you could create a TimeBombAttribute:
[TimeBomb(new DateTime(2010,1,1), "Foo will blow up! Better use Bar, or else.")]
In your code, reflect for methods with the timebomb attribute and throw KaboomException if they are called after the specified date. That'll make sure that after 1st January 2010 no-one is using the obsolete methods, and you can clean up your API nicely. :)
</frivolous>
As Matt says, the Obsolete attribute is your friend... but whenever you apply it, provide details of how to change calling code. That way you've got a lot better chance of people actually changing. You might also want to consider specifying which version you anticipate removing the method in (probably the next major release).
Of course, you should be diligent in making sure you don't call the obsolete code - particularly in sample code.
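Since the library being ported starts out in Java, the same advice maps onto @Deprecated plus a Javadoc @deprecated tag on that side. A minimal sketch; the foo/bar names are invented to echo the earlier answer:

public class WidgetMunger {

    /**
     * @deprecated Use {@link #bar(String)} instead for munging widgets;
     *             planned for removal in the next major release.
     */
    @Deprecated
    public String foo(String widget) {
        // delegate to the replacement so existing callers keep working
        return bar(widget);
    }

    public String bar(String widget) {
        return "munged:" + widget;
    }
}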
Since much of agile development revolves around making something work now and refactoring later if needed
That's not agile. It's cowboy coding disguised under the label of agile.
The ideal is that whatever you complete is complete, according to whatever Definition of Done you have. Usually the DoD states something along the lines of "feature implemented, tested and related code refactored". Of course, if you are working on a throwaway prototype, you can have a more relaxed DoD.
API modifications are a difficult beast. If they are only project-internal APIs you are modifying, the best way to go is to refactor early. If you need to change the internal API, just go ahead and change all API clients at the same time. This way the refactoring debt does not grow very large and you don't have to use deprecation.
For published APIs you probably have some source and binary compatibility guarantees you have to maintain, at least until the next major release or so. Marking the old APIs deprecated works while maintaining compatibility. As with internal APIs, you should fix your internal code as soon as possible to not use the deprecated APIs.
Matt's answer is solid advice. I just wanted to mention that initially you probably want to use something along the lines of:
[Obsolete("Please use ... instead ", false)]
Once you have the code ported, change the false to true and the compiler will then treat all the calls to the method as an error.
Watch Josh Bloch's "How to Design a Good API and Why It Matters"
Most important w/r/t deprecation is knowing that "when in doubt, leave it out." Watch the video for clarification, but it has to do with having to support what you provide forever. If you are realistically expecting that API to be reused, you're effectively setting your decisions in stone.
I think API design is a much trickier thing to do in an Agile fashion because you're expecting it to be reused probably in many different ways. You have to worry about breaking others that are dependent on you, and so while it can be done, it's tough to have the right design emerge without getting a quick turnaround from other teams. Of course deprecation is going to help here, but I think YAGNI is a lot better design heuristic when it comes to APIs.
I think deprecation of code is an inevitable byproduct of Agile processes like continuous refactoring and incremental development. So if you end up with deprecated code as you work on your project, that's not necessarily a bad thing--just a fact of life. Of course, you will probably find that, rather than deprecating code, you end up keeping a lot of code but refactoring it into different methods, classes, and so on.
So, bottom line: I wouldn't worry about deprecating code during Agile development. If it served its purpose for a while, you're doing the right thing.
The rule of thumb for API design is to focus on what it does, rather than how it does it. Once you know the end goal, figure out the absolute minimum input you need and use that. Avoid passing your own objects as parameters, pass only data.
Separate configuration from execution. For example, maybe you have an image encoder/decoder.
Instead of making a call like:
Encoder.Encode(bytes, width, height, compression_type, compression_ratio, palette /* etc., etc. */);
Make it
Encoder.setCompressionType(compression_type);
Encoder.setCompressionRatio(compression_ratio);
// etc., etc.
Encoder.Encode(bytes, width, height);
That way adding or removing settings is much less likely to break existing implementations.
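One way to firm that idea up, sketched in Java (this Encoder API is invented for illustration, not the actual code above), is a builder that collects the settings and freezes them before execution. Adding a setting later only means adding another Builder method, so existing callers keep compiling:

public final class Encoder {

    private final String compressionType;
    private final double compressionRatio;

    private Encoder(Builder builder) {
        this.compressionType = builder.compressionType;
        this.compressionRatio = builder.compressionRatio;
    }

    public byte[] encode(byte[] bytes, int width, int height) {
        // ... encode using the frozen settings (identity transform here) ...
        return bytes;
    }

    public static final class Builder {
        // defaults let new settings be added without breaking callers
        private String compressionType = "none";
        private double compressionRatio = 1.0;

        public Builder compressionType(String type) {
            this.compressionType = type;
            return this;
        }

        public Builder compressionRatio(double ratio) {
            this.compressionRatio = ratio;
            return this;
        }

        public Encoder build() {
            return new Encoder(this);
        }
    }
}

// Usage:
// Encoder encoder = new Encoder.Builder()
//         .compressionType("jpeg")
//         .compressionRatio(0.8)
//         .build();
// byte[] out = encoder.encode(bytes, width, height);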
For deprecation, there are basically three types of APIs: internal, external, and public.
Internal is when it's only your team working on the code. Deprecating these APIs isn't a big deal. Your team is the only one using them, so they aren't around long, there's pressure to change them, people aren't afraid to change them, and people know how to change them.
External is when it's the same code base, but different teams are using it. This might be some common libraries in a large company, or a popular open source library. The point is, people can choose the version of the code they compile with. The ease of deprecating an API depends on the size of the organization and how well they communicate. In my opinion, it's the deprecator's job to update old code, rather than mark it deprecated and let warnings fly throughout the code base. Why the deprecator instead of the deprecatee? Because the deprecator is in the know; they know what changed and why.
Those two cases are pretty easy. So long as there is backwards compatibility, you can generally do whatever you'd like, update the clients yourself, or convince the maintainers to do it.
Then there are public APIs. These are basically external APIs that the clients don't have much control over, such as a web API. These are incredibly hard to update or deprecate. Most clients won't notice it's broken, won't have someone to fix it, won't get notifications that it's changing, and will only fix it once it's broken (after they've yelled at you for breaking it, of course).
I've had to do the above a few times, and it is such a chore. I think the best you can do is purposefully break it early, wait a bit, and then restore it. You send out the usual warnings and deprecations first, of course, but - trust me - nothing will happen until something breaks.
An idea I've yet to try is to let people register simple apps that run small tests. When you want to do an API update, you run the external tests and contact the affected people.
Another approach that has become popular is to have clients depend on (web) services. There are constructs out there that allow you to version your services and allow clients to perform lookups. This adds a lot more moving parts and complexity into the equation, but can be helpful if you are looking at turning over a lot of versions and having to support multiple versions in production.
This article does a good job of explaining the problem and an approach.

To monkey-patch or not to?

This is a more general question than a language-specific one, although I bumped into the problem while playing with Python's curses module. I needed to display locale characters and have them recognized as characters, so I quickly monkey-patched a few functions/methods from the curses module.
This was what I call a fast and ugly solution, even if it works. The changes were relatively small, so I can hope I haven't messed anything up. My plan was to find another solution, but seeing that it works, and works well, you know how it is: I moved on to the other problems I had to deal with, and I'm sure that if no bug turns up in this, I'll never make it better.
The more general question occurred to me, though: obviously some languages allow us to monkey-patch large chunks of code inside classes. If it's code I only use myself, or the change is small, it's OK. But what if some other developer takes my code, sees that I use some well-known module, and assumes it works the way it usually does? Then some method suddenly behaves differently than it should.
So, very subjectively: should we use monkey patching, and if yes, when and how? How should we document it?
edit: for @guerda:
Monkey-patching is the ability to dynamically change the behavior of some piece of code at execution time, without altering the code itself.
A small example in Python:
import os

def ld(name):
    print("The directory won't be listed here, it's a feature!")

os.listdir = ld

# now what happens if we call os.listdir("/home/")?
os.listdir("/home/")
Don't!
Especially with free software, you have every opportunity to get your changes into the main distribution. But if you have a weakly documented hack in your local copy, you'll never be able to ship the product, and upgrading to the next version of curses (security updates, anyone?) will be very costly.
See this answer for a glimpse into what is possible on foreign code bases. The linked screencast is really worth a watch. Suddenly a dirty hack turns into a valuable contribution.
If you really cannot get the patch upstream for whatever reason, at least create a local (git) repo to track upstream and have your changes in a separate branch.
Recently I've come across a point where I have to accept monkey-patching as last resort: Puppet is a "run-everywhere" piece of ruby code. Since the agent has to run on - potentially certified - systems, it cannot require a specific ruby version. Some of those have bugs that can be worked around by monkey-patching select methods in the runtime. These patches are version-specific, contained, and the target is frozen. I see no other alternative there.
I would say don't.
Each monkey patch should be an exception and marked (for example with a //HACK comment) as such so they are easy to track back.
As we all know, it is all too easy to leave ugly code in place because it works, so why spend any more time on it? So the ugly code will be there for a long time.
I agree with David in that monkey patching production code is usually not a good idea.
However, I believe that for languages that support it, monkey patching is a very valuable tool for unit testing. It allows you to isolate the piece of code you need to test even when it has complex dependencies - for instance with system calls that cannot be Dependency Injected.
I think the question can't be addressed with a single definitive yes-no/good-bad answer - the differences between languages and their implementations have to be considered.
In Python, one needs to consider whether a class can be monkey-patched at all (see this SO question for discussion), which relates to Python's slightly less-OO implementation. So I'd be cautious and inclined to expend some effort looking for alternatives before monkey-patching.
In Ruby, OTOH, which was built to be OO down into the interpreter, classes can be modified irrespective of whether they're implemented in C or Ruby. Even Object (pretty much the base class of everything) is open to modification. So monkey-patching is rather more enthusiastically adopted as a technique in that community.