Conditionally running OptaPlanner phases based on problem input

I'm using OptaPlanner to solve a TWVRP problem, with the "delay until last" pattern to enable multiple workers to perform some tasks. The cycle detection needed for that scales horribly with the problem size, so I would like to run the local search phase only if the problem size is below some threshold. How can I accomplish that? For instance:
if (problemSize < X) {
    solveWithCHandLS(problem);
} else {
    solveWithCHonly(problem);
}

OptaPlanner configuration can optionally be written in code. So instead of giving OptaPlanner the XML (or the properties in Quarkus), you can build your own solver config:
SolverConfig solverConfig = new SolverConfig();
From there, use your IDE's code completion to discover the API. It is more or less a 1-to-1 mapping of the solver config XML. When you are done building your solver config, build your solver from it:
SolverFactory<...> solverFactory = SolverFactory.create(solverConfig);
Solver<...> solver = solverFactory.buildSolver();
For each of your problem sizes, create a different config and build a solver from whichever config is appropriate at that moment.
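For example, a minimal sketch of that idea (assuming OptaPlanner 8.x; MySolution, MyEntity, THRESHOLD and the 60 second local search limit are placeholders for your own domain):

import java.util.Arrays;
import org.optaplanner.core.api.solver.Solver;
import org.optaplanner.core.api.solver.SolverFactory;
import org.optaplanner.core.config.constructionheuristic.ConstructionHeuristicPhaseConfig;
import org.optaplanner.core.config.localsearch.LocalSearchPhaseConfig;
import org.optaplanner.core.config.solver.SolverConfig;
import org.optaplanner.core.config.solver.termination.TerminationConfig;

SolverConfig solverConfig = new SolverConfig()
        .withSolutionClass(MySolution.class)
        .withEntityClasses(MyEntity.class);
if (problemSize < THRESHOLD) {
    // Small problem: construction heuristic followed by local search.
    // Local search needs a termination, otherwise it never ends.
    LocalSearchPhaseConfig localSearch = new LocalSearchPhaseConfig();
    localSearch.setTerminationConfig(new TerminationConfig().withSecondsSpentLimit(60L));
    solverConfig.setPhaseConfigList(Arrays.asList(
            new ConstructionHeuristicPhaseConfig(), localSearch));
} else {
    // Large problem: construction heuristic only, skipping local search.
    solverConfig.setPhaseConfigList(Arrays.asList(new ConstructionHeuristicPhaseConfig()));
}
Solver<MySolution> solver = SolverFactory.<MySolution>create(solverConfig).buildSolver();
MySolution solution = solver.solve(problem);

Since building a SolverFactory is relatively expensive, you can also build both factories once up front and pick the appropriate one per problem.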


Synchronizing USRP source blocks - multiple B2xx devices

I am trying to create a synchronized USRP source block in GNU Radio consisting of multiple B210 USRP devices. Language: C++.
From what I have found, I need to:
Instantiate multiple multi_usrp_sptr objects, as each B210 requires its own and multiple B210 devices cannot be addressed through a single sptr
Use external frequency and PPS sources - an option that can be selected from the block or set programmatically
Synchronize re/tuning to achieve a repeatable phase offset between nodes - this can be achieved using the timed commands API: https://kb.ettus.com/Synchronizing_USRP_Events_Using_Timed_Commands_in_UHD
Synchronize the sample streams using the time_spec property of the issue_stream cmd
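In plain UHD C++ (outside GNU Radio), those four steps would look roughly like the following hedged sketch; the device serials, frequencies and times are hypothetical, and the calls should be checked against your UHD version:

#include <uhd/usrp/multi_usrp.hpp>
#include <chrono>
#include <thread>

// 1. One multi_usrp per B210, since one sptr cannot address both devices.
uhd::usrp::multi_usrp::sptr usrp0 = uhd::usrp::multi_usrp::make(uhd::device_addr_t("serial=AAAA"));
uhd::usrp::multi_usrp::sptr usrp1 = uhd::usrp::multi_usrp::make(uhd::device_addr_t("serial=BBBB"));

for (auto& u : {usrp0, usrp1}) {
    // 2. Shared external 10 MHz reference and PPS.
    u->set_clock_source("external");
    u->set_time_source("external");
    // Latch time 0.0 on the next PPS edge, so both devices agree on "now".
    u->set_time_next_pps(uhd::time_spec_t(0.0));
}
std::this_thread::sleep_for(std::chrono::seconds(1)); // let the PPS edge pass

for (auto& u : {usrp0, usrp1}) {
    // 3. Timed retune for a repeatable phase offset between the nodes.
    u->set_command_time(uhd::time_spec_t(2.0));
    u->set_rx_freq(uhd::tune_request_t(1.0e9));
    u->clear_command_time();
}

// 4. Start both sample streams at the same absolute device time.
uhd::stream_cmd_t cmd(uhd::stream_cmd_t::STREAM_MODE_START_CONTINUOUS);
cmd.stream_now = false;
cmd.time_spec = uhd::time_spec_t(3.0);
usrp0->issue_stream_cmd(cmd);
usrp1->issue_stream_cmd(cmd);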
The problem is: how should I insert these timed commands and set the time_spec of the stream in a GNU Radio block or the gr-uhd libs?
I looked into the gr-uhd folder where the sink/source code resides and found functions that could be altered.
Unfortunately, I don't know how to copy or export this library to make these modifications and later compile it to insert my custom blocks into GNU Radio, because gr-uhd seems to be built in and compiled at GNU Radio installation time.
I attempted copying and then making the lib, but that's not the way - it didn't succeed. Should I add my own source block via gr_modtool and insert only the commands I need?
Staying compatible with uhd and its functions, rather than writing the source from scratch, would be advantageous - ideally I would only add a few lines.
Please advise.
Edit
Experimental flowgraph, based on Marcus Müller's suggestion:
[image: experimental USRP synchronization flowgraph]
The problem is: how should I insert these timed commands and set the time_spec of the stream in a GNU Radio block or the gr-uhd libs?
For a USRP sink: add tags containing dictionaries with the correct command times to the streams. The GNU Radio API docs have information on what these dictionaries need to look like. The time field is what you need to set to an appropriate value.
For a USRP source: use set_start_time on the uhd_usrp_source block, and use the same dictionaries described above to issue commands such as tuning and gain setting at a coordinated time.
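For illustration, a hedged sketch of the sink case in C++, inside the work() function of a custom block placed upstream of the UHD sink (this assumes a gr-uhd version whose sink understands a "tx_command" stream tag; the dict keys and the time format should be verified against your GNU Radio version's gr-uhd docs):

#include <pmt/pmt.h>

// Build a command dict: retune to 1 GHz at an absolute device time.
pmt::pmt_t cmd = pmt::make_dict();
cmd = pmt::dict_add(cmd, pmt::intern("freq"), pmt::from_double(1.0e9));
// The "time" entry is a (full seconds, fractional seconds) pair.
cmd = pmt::dict_add(cmd, pmt::intern("time"),
        pmt::cons(pmt::from_uint64(42), pmt::from_double(0.25)));

// Attach the dict as a stream tag on output 0 at the current write position;
// the downstream USRP sink turns it into a timed UHD command.
add_item_tag(0, nitems_written(0), pmt::intern("tx_command"), cmd);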
I was trying to find a proper way of synchronizing the USRPs via tags.
There are a few issues I came across with this approach:
Timed commands require knowledge of the current device time, which is obtained via usrp.get_time_now(). Even if I requested the USRP to report its time through tags, I would have to somehow extract it from the output (some kind of loop with proper triggering) (source: https://kb.ettus.com/Synchronizing_USRP_Events_Using_Timed_Commands_in_UHD). Alternatively, everything could be planned in absolute terms instead of as offsets. I have seen an approach that regularly resets the device's sense of time on each PPS edge (setting it to 0.0); scheduling commands within the 0.0-1.0 s range would then be acceptable, and the loop for reading the time and inserting it into commands would become redundant.
I didn't find a way to create dicts in GR via blocks to make the solution scalable (without writing a few lines of code in a textbox or writing an OOT block).
In the end there is so little information about what kind of solution is most appropriate (PDUs, events, are tags still relevant in GR?), and the docs are so scarce, that after some mailing I decided to add a simple class that inherits from the main top_block.py; after instantiating the top_block, it calls a few functions to synchronize the devices. This kind of solution is not the most flexible one, and the parent top_block class has to be called through the inheriting one, but it provides an easy programming interface.
Soon I will add an example of the code used in the inheriting class, just in case.
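For illustration in the meantime, such an inheriting class could look roughly like the following hedged sketch (not the promised code: the block names uhd_usrp_source_0/1 are hypothetical, and the gr-uhd calls should be checked against your GNU Radio version):

import time

from gnuradio import uhd
import top_block  # the GRC-generated flowgraph module

class synced_top_block(top_block.top_block):
    def __init__(self):
        top_block.top_block.__init__(self)
        sources = (self.uhd_usrp_source_0, self.uhd_usrp_source_1)
        # Each device latches time 0.0 on the next edge of the shared PPS,
        # so both agree on "now" afterwards.
        for src in sources:
            src.set_time_next_pps(uhd.time_spec(0.0))
        time.sleep(1.1)  # make sure the PPS edge has passed
        # Both streams start at the same absolute device time, far enough
        # in the future for the commands to reach the hardware.
        for src in sources:
            src.set_start_time(uhd.time_spec(2.0))

if __name__ == '__main__':
    tb = synced_top_block()
    tb.start()
    tb.wait()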
If there is a neater, more dynamic or more scalable solution, please let me know or point me to sources.

Cloning partially solved MIP and keeping current B&B tree

I would like to partially solve a MIP, clone the problem, and have that copy continue optimization with a different strategy (node selection rule, variable selection rule, etc.) while keeping the current branch-and-bound tree. I know that this can't be done with either CPLEX or Gurobi, since they would start optimization from scratch on the copy.
Is there any way of doing this with SCIP?
I would really appreciate any help.
Best,
Rodolfo
If you don't insist on having a copy/clone, you always have the possibility of coding your stopping criterion as an event handler. I am sure you know our how-to on adding event handlers.
There is also an event handler in the SCIP source code, the so-called soft time limit event handler (src/scip/event_softtimelimit.c). There you can find sample code that changes the time limit after the first solution has been found. Parameters can be changed one by one using the SCIPchg{Real,Bool,Int,Longint,Char,String}Param() methods in the code, or passed as a settings file, which might be easier if you want to change lots of parameters without adapting the code each time.
It is good practice to use settings files saved via the set diffsave command, which saves only the non-default settings. Otherwise, using a complete settings file, you might run into trouble because a time limit or memory limit gets changed without you noticing.
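For illustration, a condensed sketch of the event-handler pattern (modeled loosely on event_softtimelimit.c; the parameter value is arbitrary and the API calls should be checked against your SCIP version):

#include <scip/scip.h>

/* catch the "new best solution" event once solving starts */
static SCIP_DECL_EVENTINIT(eventInitSwitch)
{
   SCIP_CALL( SCIPcatchEvent(scip, SCIP_EVENTTYPE_BESTSOLFOUND, eventhdlr, NULL, NULL) );
   return SCIP_OKAY;
}

/* first solution found: change parameters, then stop listening */
static SCIP_DECL_EVENTEXEC(eventExecSwitch)
{
   SCIP_CALL( SCIPsetRealParam(scip, "limits/time", 10.0) );
   SCIP_CALL( SCIPdropEvent(scip, SCIP_EVENTTYPE_BESTSOLFOUND, eventhdlr, NULL, -1) );
   return SCIP_OKAY;
}

/* registration, e.g. right after SCIPcreate(): */
SCIP_EVENTHDLR* eventhdlr = NULL;
SCIP_CALL( SCIPincludeEventhdlrBasic(scip, &eventhdlr, "switchparams",
      "changes parameters after the first solution", eventExecSwitch, NULL) );
SCIP_CALL( SCIPsetEventhdlrInit(scip, eventhdlr, eventInitSwitch) );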
A copy that includes data structures such as the tree used during the branch-and-bound solving process is currently not possible. The copy mechanism of SCIP only allows copying the problem as a whole and adjusting the formulation by changing variable domains and/or objective coefficients.

How to optimize webpack's build time using prefetchPlugin & analyse tool?

Previous research:
As webpack's wiki says, it is possible to use the analyse tool to optimize build performance:
from: https://github.com/webpack/docs/wiki/build-performance#hints-from-build-stats
Hints from build stats
There is an analyse tool which visualise your build and also provides
some hint how build size and build performance can be optimized.
You can generate the required JSON file by running webpack --profile
--json > stats.json
I generated the stats file (available here), uploaded it to webpack's analyse tool, and under the Hints tab I was told to use the PrefetchPlugin:
from: http://webpack.github.io/analyse/#hints
Long module build chains
Use prefetching to increase build performance.
Prefetch a module from the middle of the chain.
I dug through the web inside out and found that the only documentation available on the PrefetchPlugin is this:
from: https://webpack.js.org/plugins/prefetch-plugin/
PrefetchPlugin
new webpack.PrefetchPlugin([context], request)
A request for a normal module, which is resolved and built even before
a require to it occurs. This can boost performance. Try to profile the
build first to determine clever prefetching points.
My questions:
How do I properly use the PrefetchPlugin?
What is the right workflow for using it with the analyse tool?
How do I know if the PrefetchPlugin works? How can I measure it?
What does it mean to prefetch a module from the middle of the chain?
I'd really appreciate some examples.
Please help me make this question a valuable resource for the next developer who wants to use the PrefetchPlugin and the analyse tool.
Thank you.
Yeah, the PrefetchPlugin documentation is pretty much non-existent. After figuring it out for myself, it's pretty simple to use, and there's not much flexibility to it. Basically, it takes two arguments: the context (optional) and the module path (relative to the context). The context in your case would be /absolute/path/to/your/project/node_modules/react-transform-har/, assuming that the tilde in your screenshot refers to node_modules, as per webpack's node_modules resolution.
The module you actually prefetch should ideally be no more than three module dependencies deep. So in your case, isFunction.js is the module with the long build chain, and ideally it should be prefetched at getNative.js.
However, I suspect there's something funky in your config, because your build chain dependencies refer to module dependencies, which should be automatically optimized by webpack. I'm not sure how you got that, but in our case we don't see any warnings about long build chains in node_modules. Most of our long build chains are due to deeply nested React components which require scss.
Regardless, you'll want to add a new plugin for each of the warnings, like so:
plugins: [
    new webpack.PrefetchPlugin('/web/', 'app/modules/HeaderNav.jsx'),
    new webpack.PrefetchPlugin('/web/', 'app/pages/FrontPage.jsx')
]
The second argument must be a string with the module's location relative to the context. Hope this makes sense.
The middle of your chain there is probably react-transform-hmr/index.js, as it starts about halfway through. You could try new webpack.PrefetchPlugin('react-transform-hmr/index') and re-run your profile to see if it speeds up your total build time.
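To tie the workflow together, here is a hedged sketch of the whole loop (the entry, output and module names are illustrative):

// 1. Generate build stats and upload them to http://webpack.github.io/analyse/
//      webpack --profile --json > stats.json
// 2. For each "long module build chain" hint, prefetch a module from the
//    middle of that chain:
var webpack = require('webpack');

module.exports = {
    entry: './app/index.js',
    output: { path: __dirname + '/dist', filename: 'bundle.js' },
    plugins: [
        // context = project root, request = module from the middle of the chain
        new webpack.PrefetchPlugin(__dirname, 'react-transform-hmr/index')
    ]
};
// 3. Re-run `webpack --profile --json > stats.json`, upload the new file,
//    and compare the total build time and the remaining hints.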

SONAR: config duplicate line for java project

Maybe it is a stupid question, but I cannot find out where to configure the minimum number of lines of code for the duplication check in SONAR. In the project settings there is only a switch to turn on the cross-project check. Any ideas?
B.R.
We have an open ticket on this: http://jira.codehaus.org/browse/SONARJAVA-91
As background: Sonar has been using its own duplication detection mechanism since the end of 2011 (since Sonar 2.11, IIRC). At that time, a decision was made that the number of lines or the number of tokens should not be configurable, in order to prevent the possibility of fooling the engine. However, as the detection mechanism has not reached perfection yet ;-), we will allow setting the "sonar.cpd.java.minimumLines" property in the meantime.
This feature is only available for languages other than Java (http://docs.codehaus.org/display/SONAR/Analysis+Parameters#AnalysisParameters-Duplications).
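Once the Java property mentioned above is supported, it can be passed like any other analysis parameter; for example (the value 10 is purely illustrative):

# on the command line:
sonar-runner -Dsonar.cpd.java.minimumLines=10

# or in sonar-project.properties:
sonar.cpd.java.minimumLines=10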

How to locally test cross-domain builds?

Using the dojo toolkit, what is the proper way of locally testing code that will be executed as cross-domain, without making the actual build?
As it appears, there are three possible options (each with its own drawbacks):
Using local (non-xd) XMLHttpRequest dojo.require
This option does not really test the xd behavior, since it dojo.require[s] the JS synchronously via XHR.
djConfig.debugAtAllCosts = true;
Although this option does load the required code asynchronously (via the 'script' tag), it also pulls the code in via XHR, parses the dojo.require[s] inside it, and pulls those in as well. This (using the loader_debug) is, again, not what the loader_xd does. More info on this topic in a different question.
Creating a cross-domain build
This approach requires a build, which is not possible in the environment which I'm running the code in (We're using our own on-the-fly build process, which includes only the js that is necessary for a particular page. This process is not suitable for development).
Thus, my question: is there a way to use the loader_xd which does not require an xd build (which adds the xd prefix/suffix to every file)?
The second option (using debugAtAllCosts) also makes me question the motivation for pre-parsing the dojo.require[s]. If the loader_xd will not (or rather cannot) pre-parse, why does the method that was created for testing/debugging do so?
peller has described the situation. If you wanted to just generate the .xd.js files for your modules, you could look at util/buildscripts/jslib/buildUtilXd.js and its buildUtilXd.xdgen() function.
It would take a bit of work to make your own script, but you could look at util/buildscripts/build.js for pointers.
I am hoping that in the future (maybe in the Dojo 2.x timeframe) we can switch to a loader that just uses script tags, with a module format that has a function wrapper around the module, coded by the developer. This would allow the same module format to work in both the local and xd cases.
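For illustration, such a wrapped module format would look roughly like this sketch (essentially what later became the AMD define() convention; the module names are hypothetical):

// myModule.js - the function wrapper lets the loader use a plain <script>
// tag in both the local and the cross-domain case.
define(["dojo/dom"], function (dom) {
    return {
        greet: function (id) {
            dom.byId(id).innerHTML = "hello";
        }
    };
});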
I don't think there's any way to do XD loading without building and deploying it. Your analysis of the various options seems about right.
debugAtAllCosts is there specifically to solve a debugging problem: until recently, most browsers could not do anything intelligent with code brought in through eval. Still today, Firefox reports exceptions in the console as appearing at the eval site (bootstrap.js), with a line number offset from the eval rather than from the actual eval buffer, and normally that eval buffer is anonymous. Firebug was the first debugger to jump through some hoops to enhance the debugging experience; it permits special metadata that Dojo's loader injects between the XHR and the eval to determine a filepath to the source. WebKit/Safari have recently implemented this as well. I believe debugAtAllCosts pre-dates the XD loader.