Is there a min-conflicts algorithm in OptaPlanner? Or how would one implement it?
What about using it as part of the neighborhood selection, like this:
Define a custom swap move factory that constructs the neighborhood as follows:
Get all violations per variable to optimize; this requires a call to scoreDirector.calculateScore and then parsing/processing the constraintMatches.
Order the variables by lowest score or highest number of violations.
Construct the neighborhood by swapping those variables first.
If that's viable, is there a way to get the constraintMatches without re-calling calculateScore, in order to speed up the process?
This algorithm isn't supported out of the box by OptaPlanner yet. I'd call it Guided Local Search. But it's not that hard to add yourself. In fact, it's not a matter of changing the algorithm, but of changing the entity selectors.
Something like this should work:
<swapMoveSelector>
  <entitySelector>
    <cacheType>STEP</cacheType>
    <probabilityWeightFactoryClass>...MyProbabilityWeightFactory</probabilityWeightFactoryClass>
  </entitySelector>
</swapMoveSelector>
Read about the advanced swapMoveSelector configuration, entity selectors, sorted selection and probabilistic selection.
The callback class you implement for probabilistic or sorted selection should prioritize entities that are part of a conflict.
I would definitely use sorted or probabilistic selection on the entity selector, not on the entire swapMoveSelector, because the latter is overkill, CPU hungry and memory hungry.
I would prefer probabilistic selection over sorted selection. Even though sorted selection better reflects your pseudocode, I believe (but haven't proven) that probabilistic selection will do better, given the nature of metaheuristics. Try both, run some benchmarks with the Benchmarker and let us know what works best ;)
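To illustrate the probabilistic idea in miniature (this is not OptaPlanner API, just a generic C++ sketch of conflict-weighted selection; the +1 offset is an assumption so that conflict-free entities stay selectable):

#include <random>
#include <vector>

// Pick an entity index with probability proportional to (1 + its conflict
// count): entities involved in more conflicts get swapped more often,
// without completely starving the conflict-free ones.
int pickEntity(const std::vector<int>& conflictCounts, std::mt19937& rng)
{
    std::vector<double> weights;
    weights.reserve(conflictCounts.size());
    for (int c : conflictCounts)
        weights.push_back(1.0 + c);
    std::discrete_distribution<int> dist(weights.begin(), weights.end());
    return dist(rng); // index of the chosen entity
}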
Not sure how to solve your overall problem, but for your last point:
You can create a PhaseLifecycleListener and attach it via ((DefaultSolver) solver).addPhaseLifecycleListener.
In stepStarted or stepEnded (depending on your need) you can then call
stepScope.getScoreDirector().getConstraintMatchTotals()
to get the constraint match totals.
Hope this somewhat helps.
The requirements:
100k lines
One of the columns is not text; it's custom-painted with a wxDC*.
Items are added from another thread using wxThreadEvent.
Up until now I used wxDataViewListCtrl, but calling AppendItem 100 thousand times takes too long.
wxListCtrl (in virtual mode) does not have the ability to use a wxDC* (please correct me if I am wrong).
The only thing I can think of is using wxDataViewCtrl + wxDataViewModel. But I can't understand how to add items.
I looked at the samples (https://github.com/wxWidgets/wxWidgets/tree/WX_3_0_BRANCH/samples/dataview), but they are too complex for me; I can't understand them.
I looked at the wiki (https://wiki.wxwidgets.org/WxDataViewCtrl), which is also too complex for me.
Can somebody please provide a very simple example of a wxDataViewCtrl + wxDataViewModel with one string column and one wxDC*-painted column?
Thanks in advance.
P.S.
Per @HajoKirchhoff's request in the comments, I am posting some code:
// This is called from Rust 100k times.
extern "C" void Add_line_to_data_view_list_control(unsigned int index,
const char* date,
const char* sha1) {
wxThreadEvent evt(wxEVT_THREAD, 44);
evt.SetPayload(ViewListLine{index, std::string(date), std::string(sha1)});
wxQueueEvent(g_this, evt.Clone());
}
// Runs on the GUI thread: unpack the payload and append one row.
void TreeWidget::Add_line_to_data_view_list_control(wxThreadEvent& event) {
    ViewListLine view_list_line = event.GetPayload<ViewListLine>();
    wxVector<wxVariant> item;
    item.push_back(wxVariant(static_cast<int>(view_list_line.index)));
    item.push_back(wxVariant(view_list_line.date));
    item.push_back(wxVariant(view_list_line.sha1));
    AppendItem(item);
}
Appending 100k items to a control will always be slow, because it requires moving 100k items from your storage to the control's storage. A much better way for this amount of data is to use a "virtual" list control or wxGrid. In both cases the data is not actually transferred to the control; instead, when painting occurs, a callback transfers only the data required to paint. So for a 100k list you will only have "activity" for the 20-30 lines that are visible.
With wxListCtrl (see https://docs.wxwidgets.org/3.0/classwx_list_ctrl.html), specify the wxLC_VIRTUAL flag, call SetItemCount and then provide/override the following (a short sketch follows below):
OnGetItemText
OnGetItemImage
OnGetItemColumnImage
Downside: you can only draw items contained in a wxImageList, since OnGetItemImage returns indices into that list. So you cannot draw arbitrary items using a wxDC. Since the human eye would be overwhelmed by 100k different images anyway, this is usually acceptable. You may have to provide 20-30 different images beforehand, but you'll have a fast, flexible list.
That said, it is possible to override the OnPaint function and use that wxDC to draw anything in the list. But that will get difficult pretty quickly.
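To make the virtual approach concrete, here is a minimal sketch (class and member names are mine, not from the question; the custom-painted column is left out, subject to the wxImageList limitation above):

#include <string>
#include <vector>
#include <wx/listctrl.h>

struct ViewListLine {            // as in the question's payload
    unsigned int index;
    std::string date;
    std::string sha1;
};

// Virtual list: the control never stores the 100k rows; it calls back into
// OnGetItemText for the handful of lines that are actually visible.
class MyVirtualList : public wxListCtrl
{
public:
    MyVirtualList(wxWindow* parent, const std::vector<ViewListLine>& data)
        : wxListCtrl(parent, wxID_ANY, wxDefaultPosition, wxDefaultSize,
                     wxLC_REPORT | wxLC_VIRTUAL),
          m_data(data)
    {
        AppendColumn("Date");
        AppendColumn("SHA1");
        SetItemCount(static_cast<long>(m_data.size())); // no AppendItem at all
    }

protected:
    // Called on demand for visible rows only.
    wxString OnGetItemText(long item, long column) const override
    {
        const ViewListLine& line = m_data[item];
        return column == 0 ? wxString(line.date) : wxString(line.sha1);
    }

private:
    const std::vector<ViewListLine>& m_data; // your real storage stays outside
};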
So an alternative would be to use wxGrid: create a wxGridTableBase-derived class that acts as a bridge between the grid and your actual 100k items, and create wxGridCellRenderer-derived classes to render the actual data on screen. The wxGridCellRenderer class will get a wxDC. This gives you more flexibility, but it is also much more complex than a virtual wxListCtrl.
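A bare-bones version of that bridge class might look like this (a sketch reusing the ViewListLine struct shown above; the wxGridCellRenderer part, where the wxDC drawing happens, is omitted):

#include <wx/grid.h>

// Bridge between wxGrid and the real storage: the grid asks for cell values
// on demand instead of owning a copy of the 100k rows.
class MyGridTable : public wxGridTableBase
{
public:
    explicit MyGridTable(const std::vector<ViewListLine>& data) : m_data(data) {}

    int GetNumberRows() override { return static_cast<int>(m_data.size()); }
    int GetNumberCols() override { return 2; }

    wxString GetValue(int row, int col) override
    {
        const ViewListLine& line = m_data[row];
        return col == 0 ? wxString(line.date) : wxString(line.sha1);
    }

    void SetValue(int, int, const wxString&) override {} // read-only table

private:
    const std::vector<ViewListLine>& m_data;
};

// Usage: grid->SetTable(new MyGridTable(data), true); // grid owns the table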
The full example of doing what you want will inevitably be relatively complex. But if you decompose it into simple parts, it's really not that difficult: you do need to define a custom model, but if your list is flat, this basically just means returning the value of the item at the N-th position, as you can trivially implement all model methods related to the tree structure. An example of such a model, although with multiple columns, can be found in the sample, so you just need to simplify it to a one (or two) column version.
Next, you are going to need a custom renderer too, but this is not difficult either and, again, there is an example of it in the sample.
If you have any concrete questions, you should ask them, but it's going to be difficult to do much better than what the sample shows, and it already shows exactly what you want to do.
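Distilled to the flat case, such a model can indeed be very small. A hedged sketch (reusing the ViewListLine struct from the question; for the wxDC-painted column you would additionally derive a wxDataViewCustomRenderer, as the sample does):

#include <wx/dataview.h>

// Flat virtual model: no tree-structure methods to implement, just
// value-by-row lookups into your own storage.
class MyListModel : public wxDataViewVirtualListModel
{
public:
    explicit MyListModel(const std::vector<ViewListLine>& data)
        : wxDataViewVirtualListModel(static_cast<unsigned int>(data.size())),
          m_data(data) {}

    unsigned int GetColumnCount() const override { return 2; }
    wxString GetColumnType(unsigned int) const override { return "string"; }

    void GetValueByRow(wxVariant& variant, unsigned int row,
                       unsigned int col) const override
    {
        variant = wxString(col == 0 ? m_data[row].date : m_data[row].sha1);
    }

    bool SetValueByRow(const wxVariant&, unsigned int, unsigned int) override
    {
        return false; // read-only
    }

private:
    const std::vector<ViewListLine>& m_data;
};

// Usage (the model is reference counted):
// wxObjectDataPtr<MyListModel> model(new MyListModel(data));
// dataViewCtrl->AssociateModel(model.get());
// dataViewCtrl->AppendTextColumn("Date", 0);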
Thank you, everyone who replied!
@Vz.'s words "If you have any concrete questions, you should ask them" got me thinking, and I took another look at the wxWidgets samples. The full code can be found here. Look at the following classes:
TreeDataViewModel
TreeWidget
TreeCustomRenderer
I iterate over finite_vertices, finite_edges and finite_faces after generating a constrained Delaunay triangulation with Lloyd optimization. I am on VS2012, using CGAL 4.12 in release mode. For a given case, I see that the finite_vertices list is repeatable (as is the vertex list under finite_faces); however, the ordering of the edges in finite_edges seems to change from run to run.
for (auto eit = cdtp.finite_edges_begin(); eit != cdtp.finite_edges_end(); ++eit)
{
    const auto isConstrainedEdge = cdtp.is_constrained(*eit);
    auto& cFace = *(eit->first);                          // incident face
    auto cwVert = cFace.vertex(cFace.cw(eit->second));    // one edge endpoint
    auto ccwVert = cFace.vertex(cFace.ccw(eit->second));  // other edge endpoint
}
I use the above code snippet to extract the vertex list, and the vertex list for a given edge changes from run to run.
Any help resolving this is appreciated, as I am looking for consistent behavior in the code. My triangulation involves many line constraints on a two-dimensional domain.
I was told the behaviour is probably repeatable in practice, but there is no guarantee of order; IIRC the documentation says the traversal order is not guaranteed. I think it's best to assume the iterators' traversal order is not deterministic and could change.
You could use any of the _info extensions to embed information into the face, edge, etc. (a hash, perhaps?), which you could then check against to detect a change.
In my use case, I wanted to traverse the mesh in parallel and OpenMP didn't support the iterators. So I hold a vector of the Face_handles in memory, which I can then easily index over. In conjunction with the _info data, you could use this to build a vector of edges, faces, etc. with a guaranteed order, using unique information in the ->info() field, as sketched below.
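As a sketch of that idea, using geometry instead of _info as the sort key (assuming the CDT/cdtp types from the question; this is stable as long as the computed point coordinates are reproducible across runs):

#include <algorithm>
#include <utility>
#include <vector>

// Snapshot the edges once, then impose our own deterministic order on the
// snapshot instead of relying on the iterator's traversal order.
std::vector<CDT::Edge> edges(cdtp.finite_edges_begin(), cdtp.finite_edges_end());

// Sort key: the edge's endpoints in lexicographic (xy) order.
auto edgeKey = [&](const CDT::Edge& e) {
    const auto s = cdtp.segment(e);
    CDT::Point p = s.source(), q = s.target();
    if (q < p) std::swap(p, q);
    return std::make_pair(p, q);
};
std::sort(edges.begin(), edges.end(),
          [&](const CDT::Edge& a, const CDT::Edge& b) {
              return edgeKey(a) < edgeKey(b);
          });

// Index-based loops (e.g. for OpenMP) now see the same edge order every run.
for (std::size_t i = 0; i < edges.size(); ++i) {
    const bool isConstrainedEdge = cdtp.is_constrained(edges[i]);
    // ...
}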
Another _info example.
I've been working on the HM reference software for a while to improve something in the intra prediction part. Now a new intra prediction algorithm has been added to the code, and I let the encoder choose between my algorithm and the default HM algorithm (according to the RD cost, of course).
What I need now is to signal a flag for each PU, so that the decoder can perform the same algorithm that the encoder chose in the rate-distortion loop.
I want to know what exactly I should do to properly add this one-bit flag to the stream, without breaking anything in the code.
Assuming that I want to use a CABAC context model to keep track of my flag's statistics, what else should I do beyond the following?
Adding a new context model, like ContextModel3DBuffer m_cCUIntraAlgorithmSCModel, to the TEncSbac.h file.
Properly initializing the model (at both the encoder and the decoder side) by looking at how HM initializes other context models.
Calling the functions m_pcBinIf->encodeBin(myFlag, cCUIntraAlgorithmSCModel) and m_pcTDecBinIf->decodeBin(myFlag, cCUIntraAlgorithmSCModel) at the encoder side and decoder side, respectively.
I took these three steps, but apparently something breaks.
PS: Even equiprobable signaling (i.e. without using CABAC contexts) would be useful. I just want to send this flag peacefully!
Thanks in advance.
I was finally able to solve this problem. It was a bug in the CABAC context initialization.
But I want to share this experience as many people may want to do the same thing.
The three steps that I explained are essentially necessary to add a new syntax element, but one must be very careful about the following:
At the beginning, you need to decide whether you want to use a separate context model for your syntax element or reuse an existing one. In the case of a separate CABAC context, you should define a ContextModel3DBuffer, and the best way to do that is to find a similar syntax element in the code, then duplicate its ContextModel3DBuffer definition and ALL of its occurrences in the code. This way you make sure you are considering everything.
Encoding of each syntax element happens in two different places: first, in the RDO loop, to make a "decision"; and second, during the actual encoding phase, when the decisions are being encoded (e.g. the encodeCtu function).
The order of encoding/decoding syntax elements must be the same at the encoder and decoder sides. For example, if your new syntax element is encoded after splitFlag and before predMode at the encoder side, you must decode it exactly between splitFlag and predMode at the decoder side.
The context model is implemented as a 3D matrix in order to track the statistics of syntax elements separately for different block sizes, components, etc. This means that when you call the encodeBin function, you must make sure the correct index is being used. I made stupid mistakes in this part!
Apart from the above remarks, I found the getState function very useful for debugging. It returns the state of your CABAC context model at an arbitrary place in the code where you have access to it. It is very useful for comparing the state at the same point in the encoder and the decoder when you have a mismatch. For example, it happens a lot that you encode a 1 but decode a 0. In that case, you need to check the state of your CABAC context before encoding and before decoding; they should be the same. If they are not, trace the error back to find the first place of mismatch.
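For reference, the encode/decode pair from step 3 of the question might end up looking roughly like this. This is a hypothetical sketch only: the exact method signatures and the context index (uiCtxIdx below) depend on your HM version and on how the ContextModel3DBuffer is laid out.

// Encoder side (e.g. in TEncSbac), at a fixed position in the syntax order:
m_pcBinIf->encodeBin(uiMyFlag, m_cCUIntraAlgorithmSCModel.get(0, 0, uiCtxIdx));

// Decoder side (e.g. in TDecSbac), at exactly the same position in parsing:
UInt uiMyFlag = 0;
m_pcTDecBinIf->decodeBin(uiMyFlag, m_cCUIntraAlgorithmSCModel.get(0, 0, uiCtxIdx));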
I hope it was helpful.
Can someone explain the purpose of the methods that need to be implemented for incremental score calculation? I understand all the after... methods, but why should I adjust the score before an entity is added or removed, or before a variable is changed (beforeEntityAdded, beforeVariableChanged, beforeEntityRemoved)?
See this image from the 6.0.0.Final docs:
Also see the section "incremental score calculation" (which also explains why this is so much faster than a SimpleScoreCalculator). Look at the example implementations. You'll see that beforeVariableChanged() is needed to retract the violated constraint matches that will no longer match.
In the image above, the ChangeMove needs to add +1 because A and B no longer match during the beforeVariableChanged() method, and -1 because A and C now match during the afterVariableChanged() method.
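To see why the "before" half is needed, here is the retract/apply pattern in miniature. This is not OptaPlanner code, just a generic C++ sketch of one "no two entities share a value" constraint; the point is that the old value's matches can only be retracted while the old value is still in place, i.e. before the change.

#include <unordered_map>

// score = minus the number of conflicting pairs (entities sharing a value).
struct IncrementalConflictScore {
    std::unordered_map<int, int> valueCount; // value -> entities using it
    int score = 0;

    // Called while the entity still has its old value: retract its matches.
    void beforeVariableChanged(int oldValue) {
        int& n = valueCount[oldValue];
        score += n - 1;  // it conflicted with the n-1 other users of oldValue
        --n;
    }

    // Called once the entity has its new value: apply the new matches.
    void afterVariableChanged(int newValue) {
        int& n = valueCount[newValue];
        score -= n;      // it now conflicts with the n existing users
        ++n;
    }
};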
The only effect AudioUnit on iOS is the "iTunes EQ", which only lets you use EQ presets. I would like to use a customized EQ in my audio graph.
I came across this question on the subject and saw an answer suggesting using this DSP code in the render callback. This looks promising, and people seem to be using it effectively on various platforms. However, my implementation has a ton of noise even with a flat EQ.
Here's my 20-line integration into the "MixerHostAudio" class of Apple's "MixerHost" example application (all in one commit):
https://github.com/tassock/mixerhost/commit/4b8b87028bfffe352ed67609f747858059a3e89b
Any ideas on how I could get this working? Any other strategies for integrating an EQ?
Edit: Here's an example of the distortion I'm experiencing (with the EQ flat):
http://www.youtube.com/watch?v=W_6JaNUvUjA
In the code in EQ3Band.c, the filter coefficients are used without being initialized. The init_3band_state method initializes just the gains and frequencies, but the coefficients themselves (es->f1p0 etc.) are not initialized and therefore contain garbage values. That might be the reason for the bad output.
This code seems wrong in more than one way.
A digital filter is normally represented by the filter coefficients, which are constant; the filter's inner state history (since in most cases the output depends on history); and the filter topology, which is the arithmetic used to calculate the output given the input and the filter (coefficients + state history). In most cases, and certainly when filtering audio data, you expect to get zeros at the output if you feed zeros to the input.
The problems in the code you linked to:
The filter coefficients are changed on each call to the processing method:
es->f1p0 += (es->lf * (sample - es->f1p0)) + vsa;
The input sample is usually multiplied by the filter coefficients, not added to them. Adding them doesn't make any physical sense: the sample and the filter coefficients don't even have the same physical units.
If you feed in zeros, you do not get zeros at the output, just values which do not make any sense.
I suggest you look for different code; the other option is debugging this one, and that would be harder.
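For contrast, here is a minimal sketch of what a well-formed filter looks like: a generic direct form I biquad (not a drop-in replacement for the 3-band EQ). The coefficients are designed once and stay constant, only the state history changes per sample, and zero input with zero state produces zero output.

// y[n] = b0*x[n] + b1*x[n-1] + b2*x[n-2] - a1*y[n-1] - a2*y[n-2]
struct Biquad {
    // Coefficients: computed once from the desired response, then constant.
    double b0 = 1.0, b1 = 0.0, b2 = 0.0, a1 = 0.0, a2 = 0.0;

    // State history: the only thing updated per sample; starts at zero
    // (unlike the uninitialized es->f1p0 etc. in the linked code).
    double x1 = 0.0, x2 = 0.0, y1 = 0.0, y2 = 0.0;

    double process(double x) {
        const double y = b0 * x + b1 * x1 + b2 * x2 - a1 * y1 - a2 * y2;
        x2 = x1; x1 = x;   // shift input history
        y2 = y1; y1 = y;   // shift output history
        return y;
    }
};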
In addition, you'd benefit from reading about digital filters:
http://en.wikipedia.org/wiki/Digital_filter
https://ccrma.stanford.edu/~jos/filters/