I am trying to use OptaPlanner for a business case I have, which is a VRP problem. As I am not an expert programmer, I am struggling with getting my data into the software. The files in the example are .vrp files, and I could not figure out how to produce them. I have my data in txt format but cannot find a way to use it. I hope there is an easy way to convert the txt files into .vrp.
Thank you very much in advance.
Michail
No, there is no magic program that takes an arbitrarily formatted txt file and turns it into the vrp format used by the example in OptaPlanner. The example is meant as a starting point for programmers, so they can customize it to their business needs and integrate it into their own UI.
Note: OptaPlanner itself has no input/output formats. The OptaPlanner VehicleRouting example can load a file in the vrp format (or XStream XML) into its VRP domain objects, but it is also possible to write some code to load the data from a database or anywhere else.
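For illustration, here is a minimal sketch of such conversion code. It assumes a hypothetical input file customers.txt with one "id x y demand" line per location (first line being the depot) and a made-up capacity of 100; the output follows the TSPLIB-style layout of the example's .vrp data files:

    import java.io.PrintWriter;
    import java.nio.file.Files;
    import java.nio.file.Paths;
    import java.util.List;

    // Converts a plain txt file ("id x y demand" per line, first row = depot)
    // into a TSPLIB-style .vrp file like the ones shipped with the example.
    public class TxtToVrp {
        public static void main(String[] args) throws Exception {
            List<String> lines = Files.readAllLines(Paths.get("customers.txt"));
            try (PrintWriter out = new PrintWriter("converted.vrp")) {
                out.println("NAME : converted");
                out.println("TYPE : CVRP");
                out.println("DIMENSION : " + lines.size());
                out.println("EDGE_WEIGHT_TYPE : EUC_2D");
                out.println("CAPACITY : 100"); // adjust to your vehicles
                out.println("NODE_COORD_SECTION");
                for (String line : lines) {
                    String[] f = line.trim().split("\\s+"); // id x y demand
                    out.println(f[0] + " " + f[1] + " " + f[2]);
                }
                out.println("DEMAND_SECTION");
                for (String line : lines) {
                    String[] f = line.trim().split("\\s+");
                    out.println(f[0] + " " + f[3]);
                }
                out.println("DEPOT_SECTION");
                out.println("1");
                out.println("-1");
                out.println("EOF");
            }
        }
    }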
A few questions scratch an itch around Perl 6 grammars and raster (binary in general) data. From what I understand, the text approach is to work at the grapheme level through grammars. Can we approach raster data that way? Can we define a custom grapheme, or some basic unit of binary data, so that grammars can parse it?
Seeing that Perl 6 is defined by Perl 6 grammars, can we define similar grammars as a kind of "validation" test, the basic case being: if the grammar can parse the data, the data is well-formed and structurally valid? Using this approach for text data is fairly natural, since grammars' basic units are text-oriented, but can we customize those back-end definitions (for example, it is possible to override :sigspace so that rules and tokens parse with another separator) to bring the power of grammars into binary-data territory?
Thanks!
For the background part:
During the past few weeks, I have begun to learn Perl 6 out of personal interest. After seeing this talk at FOSDEM 2019, I began to ask myself (and the people around me) about using grammars to inspect/parse binary data. My use case, for example, would be to replicate the Cloud Optimized GeoTIFF validator without a GDAL binding (I haven't seen one in Perl 6 yet). It's clearly a learning project for me.
The Spec for Cloud Optimized Geotiff
For now, the basic idea is to parse the binary structure with the help of Perl 6 grammars, if that is possible, as a first step, with inspecting the data and metadata as the main goal.
Note: I am not a native speaker; if some parts need rewriting or clarification, feel free to point them out.
As only comments were posted, I will summarize here all the answers I got from the comments, my further research, and the #perl6 IRC channel.
Concerning support for a binding to some library X (in the test case, it was GDAL), the strategies in the Perl 6 community are to:
Use the Inline::Foo modules, which launch and give access to the ecosystem of the Foo language (for example Inline::Perl5, Inline::Python, and so on). See the list of Inline::X modules in the Perl 6 Module Directory;
Use or write a binding using NativeCall to bind to dynamic libraries that follow the C calling convention;
Use or write a native Perl 6 module.
Concerning the parsing of binary data, I'll split the subject into two parts:
Generally speaking;
Leveraging grammars.
1. Generally speaking
Leveraging the P5pack module, or using Inline::Perl5 to get pack/unpack, is currently (as of Perl 6.c) the best way to parse binary data structures (the former seems favored, as it is a native module).
See the first comment from @raiph linking to an SO answer that shows a basic use case.
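To give a concrete picture of the byte-level work such a parse does (sketched here in Java rather than Perl 6, purely for illustration), reading a classic TIFF header, which is the first step of the COG validator use case, amounts to:

    import java.io.IOException;
    import java.nio.ByteBuffer;
    import java.nio.ByteOrder;
    import java.nio.file.Files;
    import java.nio.file.Paths;

    public class TiffHeader {
        public static void main(String[] args) throws IOException {
            byte[] raw = Files.readAllBytes(Paths.get(args[0]));
            ByteBuffer buf = ByteBuffer.wrap(raw);

            // Bytes 0-1: byte order mark, "II" (little-endian) or "MM" (big-endian).
            String order = "" + (char) buf.get() + (char) buf.get();
            buf.order("II".equals(order) ? ByteOrder.LITTLE_ENDIAN : ByteOrder.BIG_ENDIAN);

            // Bytes 2-3: magic number, always 42 for classic TIFF.
            int magic = buf.getShort() & 0xFFFF;
            if (magic != 42) throw new IOException("Not a TIFF file");

            // Bytes 4-7: offset of the first Image File Directory (IFD).
            long firstIfd = buf.getInt() & 0xFFFFFFFFL;
            System.out.printf("order=%s magic=%d firstIFD=%d%n", order, magic, firstIfd);
        }
    }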
2. Leveraging grammars
With perl6.c, grammars can only parse text.
However, the question of parsing binary data seems to be moderately hot (based on feedback seen on the #perl6 IRC channel), and a few documented, though not yet implemented, proposals seem to pave the way, with hope of seeing it happen in the (near or distant?) future.
The last part of @raiph's answer lists a lot of resources pointing in that direction. Moreover, in Synopsis 05 - Regexes and Rules, line 432, a :bytes modifier is mentioned. We will have to see when those modifiers are implemented and what is missing to bring them into the language.
On the #perl6 IRC channel, MasterDuke said « also, i think the nqp binary reading/writing ops that jnthn recently specced and nine implemented were a prerequisite for anything further ». I still have to investigate what exactly he is talking about, but it seems to be going in the right direction.
One of the main points, IMO, relates to the grapheme definition, which is based on Unicode. If we were able to override the grapheme definition with a custom one for a specialized grammar, as we can currently override the :sigspace modifier to change the separators for rules and tokens, we would gain a new way to operate on data structures with grammars. For now, the grapheme is defined at the string level, not at the grammar or meta level. See @timotimo's comments linking to the Unicode document describing the Grapheme Cluster Boundary Rules.
A way to bend the rules was linked by @jjmerelo: parsing the GFX3 format with Perl 6 grammars.
I'm new to VRP solving. I've got OptaPlanner's demo VRP running.
I have about 400 text addresses as my waypoints. I've geocoded them, so I have lat/long.
I sense that I need to calculate a LOT of distances between waypoints. I've seen the file format for .vrp and as yet haven't found how to generate that format from my list of text addresses.
I sense that GraphHopper might help me do that.
I'm still getting GraphHopper going. I have downloaded OpenStreetMap data from https://extract.bbbike.org/ in PBF format. I sense that I need to use that data with GraphHopper to generate input for OptaPlanner.
Am I on the right track?
Can someone point me to a guide? (I realise this is pretty niche, and I might have to find my way a little...)
Thanks and Regards
Here's the code I used to generate the VRP file used in optaplanner-examples.
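That code isn't reproduced above; as an illustration of the idea, here is a minimal sketch that computes a full distance matrix from geocoded points. It uses Haversine air distance in km as a stand-in for the road distances GraphHopper would give you, and writes a TSPLIB-style explicit matrix; adjust the sections (capacity, demands, depot) to whatever the importer you use expects:

    import java.io.PrintWriter;

    // Haversine air distance in km; a placeholder for the road distances
    // GraphHopper would compute from the PBF data. The lat/lng arrays are
    // assumed to be your geocoded waypoints, index 0 being the depot.
    public class DistanceMatrixVrp {
        static double haversineKm(double lat1, double lon1, double lat2, double lon2) {
            double dLat = Math.toRadians(lat2 - lat1);
            double dLon = Math.toRadians(lon2 - lon1);
            double a = Math.sin(dLat / 2) * Math.sin(dLat / 2)
                     + Math.cos(Math.toRadians(lat1)) * Math.cos(Math.toRadians(lat2))
                     * Math.sin(dLon / 2) * Math.sin(dLon / 2);
            return 6371.0 * 2 * Math.atan2(Math.sqrt(a), Math.sqrt(1 - a));
        }

        public static void main(String[] args) throws Exception {
            double[] lat = {51.05, 50.85, 51.22};  // replace with your 400 points
            double[] lng = {3.72, 4.35, 4.40};
            int n = lat.length;
            try (PrintWriter out = new PrintWriter("matrix.vrp")) {
                out.println("NAME : geocoded");
                out.println("TYPE : CVRP");
                out.println("DIMENSION : " + n);
                out.println("EDGE_WEIGHT_TYPE : EXPLICIT");
                out.println("EDGE_WEIGHT_FORMAT : FULL_MATRIX");
                out.println("EDGE_WEIGHT_SECTION");
                for (int i = 0; i < n; i++) {
                    StringBuilder row = new StringBuilder();
                    for (int j = 0; j < n; j++) {
                        row.append(String.format("%.3f ",
                                haversineKm(lat[i], lng[i], lat[j], lng[j])));
                    }
                    out.println(row.toString().trim());
                }
                // CAPACITY, DEMAND_SECTION, and DEPOT_SECTION omitted for brevity;
                // they follow the standard TSPLIB layout of the example data files.
            }
        }
    }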
Developing functional specifications is never a pleasurable experience, but I kind of find a sick pleasure in planning a project well. I think I have some father issues.
Regardless of my own issues, I can find any number of articles on how to create a single functional spec in varying degrees of usefulness. There are templates and examples aplenty, and I've got a good library of my own. However I am finding it difficult to find anyone who discusses a manner in which to produce multiple functional specs with any efficiency.
Does anyone know of a source discussing how to manage the process of quickly generating disparate types of functional specs? Say a company that delivers web apps, perhaps using a rapid development tool like ColdFusion or PhoneGap, where the expertise lies in the use of the tool, not the end result. So the functional specs can have a wonderful array of differences in them.
Can anyone point me towards a way of managing this process to ease the burden of building each of these from scratch?
EDIT - I really like OmniGraffle; however, I'm not trying to maintain a look and feel or do anything visual (saving past screenshots might be useful if they can be indexed). Code snippets seem closer to what I wanted. But in actuality I think I am looking for a method to archive/index past blocks of text.
So if I described a purchase order system a year ago and I am building something similar today, I want to find that functional spec from a year ago to have some example text to start from.
In my head this is like some novel-writing software where, like code snippets, a block of text (a scene, chapter, blurb, or whatever) can be written and then moved around in the body of the whole. yWriter does this. However, I need to find a way to index/search through these large chunks of text for relevance. I am hoping to learn more about that kind of system.
Fleshing out the ambiguity
1. If you are asking about templates that are primarily textual, then your best bet is probably just to have a 'stationery' file that you can open as a copy, add pieces from the template structures you've saved to the 'stationery', and then save out the draft spec.
2. If you are referring to diagrams and other visual schematics that follow a 'spec language' unique to your development framework, then I would suggest a tool like OmniGraffle, Visio, or Lucidchart, which have active communities that develop 'stencil libraries' (e.g. Graffletopia).
I think you mean #1, in which case you might look to examples like OmniOutliner templates, which can contain sophisticated stylization of fonts and formats, akin to 'type styles' in Word documents.
Code snippets are one mechanism for solving this, but you will only get snippet libraries in programming IDEs, which generally lack text style features. Code snippet libraries are like text macros: short strings that expand into large blocks of text. You could create your own snippets for the different structures of project spec that relate to each kind of framework.
Another solution is to leverage the file interoperability of tools like OmniGraffle and OmniOutliner (or other pairings). When OmniGraffle opens an Outliner file, it displays the list structure as a tree of objects/nodes. After adding more nodes, the OmniGraffle file can be re-opened in OmniOutliner and viewed as a list, with all the attached Outliner styles.
This is a nice multi-modal approach, but locks you into a toolset. Probably unavoidable until more people demand tooling to do this kind of thing.
I work for a telecommunications company as a test engineer. As part of my job, I need to do a regression test comparing bills across production drops. Could someone please suggest tools to compare PDF bills from the past release to the current release? The tool should be able to compare bill format, line spacing, charges, messages, etc.
This is a very broad question. I would suggest using something like PDFSharp to analyse your PDFs. The rest is largely an implementation exercise.
It will take a bit of code to get it working to a reasonable degree of accuracy.
You would pretty much need to code the thing from scratch. The good thing is that PDFSharp (or similar libraries) will take the pain out of analysing the PDF files.
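PDFSharp is a .NET library; Apache PDFBox plays the same role on the JVM. As a minimal sketch of that "implementation exercise" (assuming PDFBox 2.x and hypothetical file names), diffing the extracted text line by line could start like this:

    import java.io.File;
    import java.io.IOException;
    import org.apache.pdfbox.pdmodel.PDDocument;
    import org.apache.pdfbox.text.PDFTextStripper;

    // Extracts text from two bill PDFs and reports the first differing line.
    // Layout details (line spacing, fonts) are NOT captured by plain text
    // extraction; for those, inspect text positions or compare rendered images.
    public class BillTextDiff {
        static String[] extractLines(String path) throws IOException {
            try (PDDocument doc = PDDocument.load(new File(path))) {
                return new PDFTextStripper().getText(doc).split("\\r?\\n");
            }
        }

        public static void main(String[] args) throws IOException {
            String[] oldBill = extractLines("bill_previous.pdf"); // hypothetical names
            String[] newBill = extractLines("bill_current.pdf");
            int n = Math.min(oldBill.length, newBill.length);
            for (int i = 0; i < n; i++) {
                if (!oldBill[i].equals(newBill[i])) {
                    System.out.println("Line " + (i + 1) + " differs:");
                    System.out.println("  old: " + oldBill[i]);
                    System.out.println("  new: " + newBill[i]);
                    return;
                }
            }
            if (oldBill.length != newBill.length) {
                System.out.println("Bills differ in line count.");
            } else {
                System.out.println("Extracted text is identical.");
            }
        }
    }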
Another way to solve this problem could be to transform your PDFs into images and then automate some visual comparison on them. There are a couple of tools out there for such tasks (a minimal pixel-diff sketch follows the list):
testAPI
Sikuli
PerceptualDiff
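For instance, assuming both bills have already been rendered to PNG at the same resolution (e.g. with pdftoppm, or PDFBox's PDFRenderer), a crude pixel diff needs nothing beyond the JDK:

    import java.awt.image.BufferedImage;
    import java.io.File;
    import java.io.IOException;
    import javax.imageio.ImageIO;

    // Compares two page renderings pixel by pixel and reports how many differ.
    public class VisualDiff {
        public static void main(String[] args) throws IOException {
            BufferedImage a = ImageIO.read(new File("page1_old.png")); // hypothetical names
            BufferedImage b = ImageIO.read(new File("page1_new.png"));
            if (a.getWidth() != b.getWidth() || a.getHeight() != b.getHeight()) {
                System.out.println("Page sizes differ; treat as a failed comparison.");
                return;
            }
            long diff = 0;
            for (int y = 0; y < a.getHeight(); y++) {
                for (int x = 0; x < a.getWidth(); x++) {
                    if (a.getRGB(x, y) != b.getRGB(x, y)) diff++;
                }
            }
            double pct = 100.0 * diff / ((long) a.getWidth() * a.getHeight());
            System.out.printf("%d pixels differ (%.2f%%)%n", diff, pct);
        }
    }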
I have a game I wrote in ActionScript 3 that I'm looking to port to iOS. The game has about 9k LOC spread across 150 classes; most of the classes are for data models, state handling, and level generation, all of which should be easy to port.
However, the thought of rejiggering the syntax by hand across all these files is none too appealing. Are there tools that can help me speed up this process?
I'm not looking for a magical tool here, nor am I looking for a cross compiler, I just want some help converting my source files.
I don't know of a tool, but this is the way I'd try to attack your problem if there really is a lot of (simple) code to convert. I'm sure my suggestion is not that useful for parts of the code that are very Flash-specific (all the DisplayObject stuff?) and also not that useful for lots of your logic. But it would be fun to build! :-)
Partial automatic conversion should be possible, especially if the objects are just 'data containers'; watch out for bringing too much AS3 idiom over to Objective-C though, as it might not always be a good fit.
Unless you want to create your own (semi-)parser for AS3, you'd need an existing one; apparently FlexPMD has one (I've never used it), and there are probably others.
After getting your hands on a parser, you have to find some way of telling the system which parts can be converted automatically. You could try to add rules to the parser/generator script for the general case. For more specific cases, I'd use custom metadata on the actual class/property/method, assuming a real AS3 parser would correctly parse those.
Now part of your work will shift from hand-converting files to hand-annotating files, but that might be ok for you.
Have the parser parse your classes and define actions based on your metadata that determine what kind of Objective-C class to generate. If you get this working, it could at least get you all your classes, their simple properties, and their method signatures (getting the bodies of the methods converted might be a bit too much to ask, but you could include them as comments so you'd have a nice reference while hand-translating).
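As a toy illustration of that rule-based substitution (not a real parser like FlexPMD's; the type map and the "public var" pattern are assumptions that only cover the simplest declarations):

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Paths;
    import java.util.HashMap;
    import java.util.Map;
    import java.util.regex.Matcher;
    import java.util.regex.Pattern;

    // Toy converter: turns simple AS3 "public var name:Type;" declarations
    // into Objective-C @property lines. Real code needs a proper parser;
    // this only shows the rule-based substitution idea.
    public class As3PropertyConverter {
        static final Map<String, String> TYPES = new HashMap<>();
        static {
            TYPES.put("int", "NSInteger");
            TYPES.put("Number", "double");
            TYPES.put("Boolean", "BOOL");
            TYPES.put("String", "NSString *");
        }

        static final Pattern VAR =
                Pattern.compile("public\\s+var\\s+(\\w+)\\s*:\\s*(\\w+)\\s*;");

        public static void main(String[] args) throws IOException {
            String source = new String(Files.readAllBytes(Paths.get(args[0])));
            Matcher m = VAR.matcher(source);
            while (m.find()) {
                // Unknown AS3 types are assumed to be objects (pointer types).
                String objcType = TYPES.getOrDefault(m.group(2), m.group(2) + " *");
                System.out.printf("@property (nonatomic) %s%s;%n",
                        objcType.endsWith("*") ? objcType : objcType + " ",
                        m.group(1));
            }
        }
    }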
PS: if you make this a one-way process, be very sure you don't need to re-generate later - it would be bad to find out that you have been modifying the generated code and somehow need to re-generate all those classes -- that would mean redoing all your hard work!
I've started putting a tool together to take the edge off the menial aspects of this process.
I'm trying to figure out if there's enough interest to make it clean and stable enough to release for others to use. I may just do it anyway.
http://meanwhileatthelab.blogspot.com.au/2012/08/automating-process-of-converting-as3-to.html
It's so far saving me a lot of time while porting one of my fairly large games from AS3 to objc.
Check out the Sparrow Framework. It's purported to be designed with ActionScript developers in mind, recreating classes that sort of emulate the display list and things like that. You'll have to do some "rejiggering" for sure, no matter what, if you don't want to use the CS5 packager.
http://www.sparrow-framework.org/
Even if some solution exists, note that the architectural logic is DIFFERENT, along with many other details.
Anyway, even if it is possible, you will end up with a strange hybrid.
I am coming back from WWDC 2012, and the message is (as always..) performance and great user experience.
So you should rewrite using a different programming model.