I'm using OptaPlanner to make a schedule for a school. Everything works well, except that the schedule sometimes has gaps between lessons, which I do not want. I have rules that penalize this, but I think the search space is so big that it will take quite a while before the solver "fixes" this.
Is it possible to tell OptaPlanner to try some "selected/calculated" moves first and then continue with the moves it would normally make?
I'm using the time grain pattern for this.
It is possible to do, but not recommended. There is good literature on the subject here: https://www.optaplanner.org/docs/optaplanner/latest/move-and-neighborhood-selection/move-and-neighborhood-selection.html
I would start by experimenting with the generic moves that OptaPlanner provides out of the box, which you can configure in the solver's XML configuration file.
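For example, here's a minimal sketch of what tuning those selectors might look like inside the solver config. The element names come from the documented config schema, but treat the exact selector mix as an illustration, not a recommendation:

    <localSearch>
      <unionMoveSelector>
        <changeMoveSelector/>
        <swapMoveSelector/>
        <!-- Pillar moves shift a whole group of lessons at once, which can
             close gaps faster than moving lessons one at a time. -->
        <pillarChangeMoveSelector/>
      </unionMoveSelector>
    </localSearch>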
If a custom move is really what you need, the docs above cover it, but it is much harder to avoid bugs that way. You will want to enable FULL_ASSERT mode to double-check that your implementation does not cause score corruption.
It is listed on Wikipedia as a language with native DbC support, beside Eiffel and Spec#, but I can't find any mention of it whatsoever in the docs or in the test suite.
2019 Update
Imo, no.
Because I don't think 6.d "implements most DbC features natively" for a reasonable definition of "most", I've removed it from the Wikipedia Design by Contract page.
(If you think it should be put back in the native section despite this SO answer and my notes above and below, please make sure it appears in alphabetical order.)
I think:
P6 has raw materials that should be reusable to "implement most of DbC".
A natural start would be a userland module. (Which would then naturally fit on the Wikipedia page, but in the "Languages with third-party support" section.)
A sketch of what I'm thinking follows.
1. ORing preconditions and ANDing postconditions/invariants in the context of routine composition/inheritance/delegation:
Implementing a way to dynamically call (or perhaps just statically refer to) just the PRE statements/blocks and, separately, just the POST statements/blocks, of "relevant ancestor" routines.
Determining "relevant ancestors". For a class hierarchy (or object delegation chain) that doesn't involve multiple dispatch, "relevant ancestors" is presumably easy to determine based on the callsame mechanism. But it feels very different in the general case where there can be many "competing" candidates based on the very different paradigm of multiple dispatch. Are they all "relevant ancestors", such that it's appropriate to combine all their PRE and POST conditions? I currently think not.
Modifying routine selection/dispatch. See e.g. OO::Actors for what might be a template for how to do so most performantly. The goal is that, per DbC rules, the PRE statements/blocks of a winning routine and its "relevant ancestors" are logically ORed together and the POST statements/blocks are logically ANDed.
Supporting class level PRE and POST blocks. One can already write PRE and POST blocks in a class, but they are associated with construction of the class, not subsequent calls to methods within the class. For the latter the following S04 speculation seems to be the ticket:
It is conjectured that PRE and POST submethods in a class could be made to run as if they were phasers in any public method of the class. This feature is awaiting further exploration by means of a ClassHOW extension.
Original answer
Check out Block Phasers, in particular the PRE and POST phasers. I haven't used them, and it's something like 25 years since I read the Eiffel book, but they look the part to me.
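For a taste, here's a minimal, untested sketch of how I understand them from the docs (the PRE block must be true on entry, and inside POST the block's return value is available as the topic $_):

    sub checked-sqrt(Numeric $x) {
        PRE  { $x >= 0 }    # checked before the body runs
        POST { $_ >= 0 }    # $_ is the value about to be returned
        sqrt $x;
    }

    say checked-sqrt(9);    # 3
    say checked-sqrt(-1);   # dies: precondition failed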
The PRE and POST phasers are tested in S04-phasers/pre-post.t. I see at least one bug TODO.
It would be wonderful if you would check out the doc, experiment with them (maybe using an online P6 evaluator), and report back so we can see what you think of them, hear if you encountered the TODO'd bug or any others, and decide what to do:
The Wikipedia page says it lists "Languages that implement most DbC features natively". Presumably the "most" qualifier is subjective. Does P6 implement all (or "most") DbC features natively? If not, it presumably needs to be removed from the Wikipedia page.
Unless we decide the "P6 does DbC" claim is bogus, we presumably need to add 'DbC' and 'Design by Contract' to the doc and doc index. (Presumably you searched for one or both of those, didn't find a match, and that's what led you to think you couldn't find them, right?)
We also need examples for PRE and POST regardless of whether or not they're officially considered DbC features. But we already knew that, in the sense that P6 has power out the wazoo and much of it still isn't documented as part of official p6doc despite many folks contributing. There's a lot to do! If you can come up with a couple of really nice, simple examples using PRE and POST, perhaps developed from what you see in the roast tests, that would be spectacular. :)
I now have sufficient exposure to Objective-C that if I'm stuck on anything, I know how to think of the problem in terms of the likely tool I need and go look for it. Simple, really. There's A Method For That. So nothing's a real problem anymore.
Now I'm looking deeper at the language in broader terms. We write stuff; the compiler hews out all the code to execute it. From a simple flashlight app, where that's an if/then decision to turn on, to a highly complex accelerometer-driven 3D shoot 'em up with blood 'n' guts and body parts following all sorts of physics, the compiler prepares the code ready to be executed like a giant railway layout. No matter how random it appears on the screen, everything possible can be generically described and prepared for.
So here's the question:
Are there cases where something completely unexpected to the software designer can still be handled without an execution halt? Maybe I'd better re-frame the question a few different ways: can an (Objective-C) program meta-compile within itself in response to an unplanned-for user request? Or, to restate my opening remark, are there tools or methods for unlikely descriptions of unlikely problems?
I think kfb's comment about metaprogramming has it right. Check out the Runtime docs in conjunction with metaprogramming tutorials.
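As a taste of what the runtime lets you do, here's a minimal sketch; the controller object and the selector name are made up for illustration:

    #import <Foundation/Foundation.h>

    // Resolve a method by name at runtime and call it only if it actually
    // exists, instead of hard-wiring the call at compile time.
    SEL action = NSSelectorFromString(@"handleShakeGesture");
    if ([controller respondsToSelector:action]) {
        [controller performSelector:action];
    }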
Parts of your last question might be in the realm of this doc.
If you're looking for ways to reduce the size of your code base for the lesser-used features, one idea might be to make those features internet-based (assuming connectivity is not a problem).
I'm currently writing a basic platformer game with the idea of learning the basic patterns of 2D game development. One of them is the amazing A* pathfinding. After reading and studying a few articles (two of them are the best: Link and http://theory.stanford.edu/~amitp/GameProgramming/Heuristics.html#S7), tutorials, and source code, I implemented a simple A* pathfinder using the Manhattan distance heuristic, which works well for my personal learning process.
The next logical step is to do the same in a platformer scenario where the NPC must jump or climb ladders to reach the goal target. I've found these screencasts, http://www.youtube.com/watch?v=W2xoJfT6sek and https://www.youtube.com/watch?v=Bc7tu-KwfoU, which show exactly what I want to do.
So far I haven't found any resource that explains how to implement this. Can someone give me a starting point I can work with?
Thanks in advance
As far as I can tell, the only difference that jumping/ladders make is that they affect possible movement. So, all your algorithm has to do is take into account where a character can and can't move to, and the costs (if you're implementing that) of moving there.
I don't have any clean A* code around that's fit for an example, but let me give you a few pointers for implementing A*.
First off, the most essential tip, and one I often find neglected in tutorials: use a priority queue.
Whenever you generate possible move states, put them into the priority queue, with the priority being the estimated total cost of that state (the cost paid so far plus your heuristic's estimate of what remains). This differentiates A* from DFS (which uses a stack and backtracking) and from simple greedy pathfinding (which just chases whichever neighbor looks best from the present state).
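Here's a rough Python sketch of that structure; the neighbors and heuristic callbacks are placeholders you'd supply for your own map:

    import heapq
    from itertools import count

    def a_star(start, goal, neighbors, heuristic):
        # neighbors(node) yields (next_node, step_cost); heuristic(node)
        # estimates the remaining cost to the goal (e.g. Manhattan distance).
        tie = count()  # tie-breaker so the heap never has to compare nodes
        frontier = [(heuristic(start), next(tie), 0, start)]
        came_from = {start: None}
        best_g = {start: 0}
        while frontier:
            _, _, g, node = heapq.heappop(frontier)
            if node == goal:  # walk back through came_from to build the path
                path = []
                while node is not None:
                    path.append(node)
                    node = came_from[node]
                return path[::-1]
            for nxt, cost in neighbors(node):
                new_g = g + cost
                if new_g < best_g.get(nxt, float("inf")):
                    best_g[nxt] = new_g
                    came_from[nxt] = node
                    heapq.heappush(frontier,
                                   (new_g + heuristic(nxt), next(tie), new_g, nxt))
        return None  # no path exists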
Next, mind those pointers! I don't know what language you'll implement this in, but even if it's one that abstracts pointers away (unlike C), you have to understand the concept of a deep copy versus a plain shallow copy. I remember the first time I did A* (in Python): I did not pay attention to this, my states were repeatedly overwritten, and the behavior was unpredictable. It turned out I wasn't making deep copies!
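In Python, for example, the trap looks like this:

    import copy

    state = {"pos": (2, 3), "inventory": ["key"]}
    shallow = copy.copy(state)      # new dict, but the inner list is shared
    deep = copy.deepcopy(state)     # fully independent clone

    shallow["inventory"].append("sword")
    print(state["inventory"])       # ['key', 'sword']  (original mutated!)
    print(deep["inventory"])        # ['key']           (deep copy unaffected)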
That said, depending on how complex the task you have in mind is, all those deep copies might clutter up your memory. While good memory management will help, for big problem instances that need real-world performance I tend to use iterative-deepening A* (IDA*), so that after every iteration I get to unclutter my memory.
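A bare-bones sketch of that idea, using the same placeholder callbacks as above; each iteration re-searches with a slightly larger cost bound, so only the current path has to be kept in memory:

    def ida_star(start, goal, neighbors, heuristic):
        bound = heuristic(start)
        path = [start]

        def search(g, bound):
            node = path[-1]
            f = g + heuristic(node)
            if f > bound:
                return f                 # too expensive for this iteration
            if node == goal:
                return True
            minimum = float("inf")
            for nxt, cost in neighbors(node):
                if nxt in path:          # avoid cycles on the current path
                    continue
                path.append(nxt)
                result = search(g + cost, bound)
                if result is True:
                    return True
                minimum = min(minimum, result)
                path.pop()
            return minimum

        while True:
            result = search(0, bound)
            if result is True:
                return list(path)
            if result == float("inf"):
                return None              # no path exists
            bound = result               # deepen to the smallest overshoot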
Background:
I am taking a class at my university called "Software Constraints". In the first lectures we were learning how to build good APIs.
A good example we were given of a really bad API function is the socket call public static void Select(IList checkRead, IList checkWrite, IList checkError, int microseconds) in C#. The function receives three lists of sockets and destroys them, making the user clone all the sockets before feeding them into Select(). It also takes a timeout (in microseconds) as an int, which sets the maximum time the server can wait for a socket. Because it is an int, the limit is 2^31-1 microseconds, i.e. roughly +/-35 minutes.
Questions:
How do you define an API as 'bad'?
How do you define an API as 'good'?
Points to consider:
Function names that are hard to remember.
Function parameters that are hard to understand.
Bad documentation.
Everything being so interconnected that if you need to change 1 line of code you will actually need to change hundreds of lines in other places.
Functions that destroy their arguments.
Bad scalability due to "hidden" complexity.
Users/devs being required to build wrappers around the API before it can be used.
In API design I've always found this keynote very helpful:
How to Design a Good API and Why it Matters - by Joshua Bloch
Here's an excerpt, I'd recommend reading the whole thing / watching the video.
II. General Principles
API Should Do One Thing and Do it Well
API Should Be As Small As Possible But No Smaller
Implementation Should Not Impact API
Minimize Accessibility of Everything
Names Matter–API is a Little Language
Documentation Matters
Document Religiously
Consider Performance Consequences of API Design Decisions
Effects of API Design Decisions on Performance are Real and Permanent
API Must Coexist Peacefully with Platform
III. Class Design
Minimize Mutability
Subclass Only Where it Makes Sense
Design and Document for Inheritance or Else Prohibit it
IV. Method Design
Don't Make the Client Do Anything the Module Could Do
Don't Violate the Principle of Least Astonishment
Fail Fast - Report Errors as Soon as Possible After They Occur
Provide Programmatic Access to All Data Available in String Form
Overload With Care
Use Appropriate Parameter and Return Types
Use Consistent Parameter Ordering Across Methods
Avoid Long Parameter Lists
Avoid Return Values that Demand Exceptional Processing
You don't have to read the documentation to use it correctly.
The sign of an awesome API.
Many coding standards, longer documents, and even books (such as the Framework Design Guidelines) have been written on this topic, but much of that only helps at a fairly low level.
There is also a matter of taste. APIs may obey every rule in whatever rulebook and still suck, due to slavish adherence to various in-vogue ideologies. A recent culprit is pattern-orientation, wherein Singleton patterns (little more than initialized global variables) and Factory patterns (a way of parameterizing construction, often implemented when not needed) are overused. Lately, it's more likely to be Inversion of Control (IoC) and the associated explosion in the number of tiny interface types, which add redundant conceptual complexity to designs.
The best tutors for taste are imitation (reading lots of code and APIs, finding out what works and what doesn't), experience (making mistakes and learning from them), and thinking (don't just do what's fashionable for its own sake; think before you act).
Useful - it addresses a need that is not already met (or improves on existing ones)
Easy to explain - the basic understanding of what it does should be simple to grasp
Follows some object model of a problem domain or the real world, using constructs that make sense
Correct use of synchronous and asynchronous calls. (don't block for things that take time)
Good default behavior - where possible allow extensibility and tweaking, but provide defaults for all that is necessary for simple cases
Sample uses and working sample applications. This is probably the most important of all.
Excellent documentation
Eat your own dog food (if applicable)
Keep it small or segment it so that it is not one huge polluted space. Keep functionality sets distinct and isolated with few if any dependencies.
There are more, but that is a good start
A good API has a semantic model close to the thing it describes.
For example, an API for creating and manipulating Excel spreadsheets would have classes like Workbook, Sheet, and Cell, with methods like Cell.SetValue(text) and Workbook.listSheets().
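To make that concrete, here's a toy in-memory version in Python; the names are hypothetical, there's no real file I/O, and the point is only how domain-shaped code reads at the call site:

    class Cell:
        def __init__(self):
            self.value = None

        def set_value(self, value):
            self.value = value

    class Sheet:
        def __init__(self, name):
            self.name = name
            self._cells = {}

        def cell(self, ref):
            return self._cells.setdefault(ref, Cell())

    class Workbook:
        def __init__(self):
            self._sheets = {}

        def sheet(self, name):
            return self._sheets.setdefault(name, Sheet(name))

        def list_sheets(self):
            return list(self._sheets)

    book = Workbook()
    book.sheet("Q1").cell("B2").set_value(1200)
    print(book.list_sheets())   # ['Q1']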
A good API allows the client to do pretty much everything they need to do, but doesn't require them to do a lot of mindless busy-work. Examples of "mindless busy work" would be initializing data structure fields, calling several routines in a sequence that never varies with no real custom code in between, etc.
The surest sign of a bad API is that your clients all want to wrap it with their own helper code. At the absolute least, your API should have provided that helper code; most likely, it should have been designed to provide the higher level of abstraction that clients keep rolling for themselves every time.
A bad API is one that is not used by its intended audience.
A good API is one that is used by its intended audience for the purpose for which it was designed.
A great API is one that is used both by its intended audience for its intended purpose, and by an unintended audience for reasons its designers never anticipated.
If Amazon publishes its API as both SOAP and REST, and the REST version wins out, that doesn't mean the underlying SOAP API was bad.
I'd imagine that the same will be true for you. You can read all you want about design and try your best, but the acid test will be use. Spend some time building in ways to get feedback on what works and what doesn't and be prepared to refactor as needed to make it better.
A good API is one that makes simple things simple (minimum boilerplate and learning curve to do the most common things) and complicated things possible (maximum flexibility, as few assumptions as possible). A mediocre API is one that does one of these well (either insanely simple but only if you're trying to do really basic stuff, or insanely powerful, but with a really steep learning curve, etc). A horrible API is one that does neither of these well.
There are several other good answers on this already, so I thought I'd just throw in some links I didn't see mentioned.
Articles
"A Little Manual Of API Design" by Jasmin Blanchette of Trolltech
"Defining QT-Style C++ APIs" also Trolltech
Books:
"Effective Java" by Joshua Bloch
"The Practice Of Programming" by Kernighan and Pike
I think a good API should allow custom IO and memory management hooks if it's applicable.
A typical example: you have your own custom compressed archive format for data on disk, and a third-party library with a poor API wants to access that data and expects a path to a file from which it can load it.
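A sketch of the friendlier alternative, in Python; the "TEX0" format and function names are invented for illustration. The library accepts any file-like object instead of a path, so the host can feed it bytes straight out of its own archive:

    import io
    import zipfile

    # Library entry point that takes a file-like object rather than a path.
    def load_texture(stream):
        if stream.read(4) != b"TEX0":   # invented magic number
            raise ValueError("not a texture file")
        return stream.read()            # the raw payload

    # Host application: pull the bytes out of its own compressed archive and
    # hand them over via an in-memory stream; no temporary file required.
    with zipfile.ZipFile("assets.zip") as archive:
        raw = archive.read("sprites/player.tex")
    texture = load_texture(io.BytesIO(raw))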
This link has some good points:
http://gamearchitect.net/2008/09/19/good-middleware/
If the API produces an error message, ensure that the message and diagnostics help a developer work out what the problem is.
My expectation is that the caller of an API passes in input that is correct. A developer is the consumer of any error messages produced by the API (not the end user), and messages aimed at the developer help the developer debug their calling program.
An API is bad when it is badly documented.
An API is good when it is well documented and follows a coding standard.
Now, these are two very simple and also very hard points to follow, and they bring one into the area of software architecture. You need a good architect who structures the system and helps the framework follow its own guidelines.
Commenting the code and writing a well-explained manual for the API are mandatory.
An API can be good if it has good documentation that explains how to use it. But if the code is clean, good, and follows a consistent internal standard, it doesn't matter if it doesn't have decent documentation.
I've written a little about coding structure here
I think what is paramount is readability, by which I mean the quality that lets the greatest number of programmers make sense of what the code is doing in the shortest possible amount of time. But judging which piece of software is readable and which is not has that indescribable human quality: fuzziness. The points you mention do partly succeed in crystallizing it. In its entirety, however, it has to remain a case-by-case affair, and it would be really hard to come up with universal rules.