Test-driven development with OptaPlanner

In the OptaPlanner library, the file "CloudBalancingScoreConstraintTest.java" contains the following line of code: scoreVerifier.assertHardWeight("requiredCpuPowerTotal", -570, solution). How was the expected weight -570 calculated? Was it known before creating the classes (CloudBalance.java, CloudComputer.java), as in a test-driven development approach, or only after the classes were created?

TLDR: ignore CloudBalancingScoreConstraintTest and look at CloudBalancingConstraintProviderTest instead.
Long explanation:
CloudBalancing currently still defaults to scoreDrl. It also has an alternative implementation with ConstraintStreams: CloudBalancingConstraintProvider. ConstraintStreams are in many ways better than scoreDRL; as of OptaPlanner 8.4.0 they are equally fast and have 99% feature parity with DRL. Once that reaches 100%, all examples will use ConstraintStreams by default.
So why is it -570? Because ScoreVerifier checks all constraints. So add one constraint and you have to adjust all your tests. Very painful. Not TDD.
What's the fix? Use ConstraintVerifier. ConstraintVerifier is ScoreVerifier++. ConstraintVerifier tests the matchWeight of one constraint.
`constraintWeight * matchWeight * (+1 for reward | -1 for penalize) = score impact`
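For example, a penalizing constraint with a constraint weight of 2hard and a match weight of 3 produces a score impact of 2 * 3 * -1 = -6hard.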
It even ignores the constraintWeight of the constraint, which is a blessing once the business stakeholders start tweaking the constraint weights. Additionally, it's far less verbose to use (no solution instance needed). What's the catch? It only works for ConstraintStreams.
An example:
@Test
public void requiredCpuPowerTotal() {
    CloudComputer computer1 = new CloudComputer(1, 1, 1, 1, 2);
    CloudComputer computer2 = new CloudComputer(2, 2, 2, 2, 4);
    CloudProcess unassignedProcess = new CloudProcess(0, 1, 1, 1);
    // Total = 2, available = 1.
    CloudProcess process1 = new CloudProcess(1, 1, 1, 1);
    process1.setComputer(computer1);
    CloudProcess process2 = new CloudProcess(2, 1, 1, 1);
    process2.setComputer(computer1);
    // Total = 1, available = 2.
    CloudProcess process3 = new CloudProcess(3, 1, 1, 1);
    process3.setComputer(computer2);

    constraintVerifier.verifyThat(CloudBalancingConstraintProvider::requiredCpuPowerTotal)
            .given(unassignedProcess, process1, process2, process3)
            .penalizesBy(1); // Only the first computer.
}
To learn more, watch Lukas's OptaPlanner test-driven development video.

Related

How to reduce Selenium requests (traffic) as much as possible? (Less traffic on residential proxy)

I am writing scrapers using residential proxies (quite expensive), and I noticed the traffic is quite heavy; Selenium normally sends more than one request for a single URL. I've disabled as much as I can, and I am wondering if there's still anything I can do to reduce the total amount of traffic. Thanks.
prefs = {
    "profile.managed_default_content_settings.images": 2,
    "profile.default_content_setting_values.javascript": 2,
    "profile.managed_default_content_settings.stylesheets": 2,
    "profile.managed_default_content_settings.plugins": 2,
    "profile.managed_default_content_settings.popups": 2,
    "disk-cache-size": 4096,
    "profile.managed_default_content_settings.media_stream": 2,
    # "profile.managed_default_content_settings.cookies": 2,
    # "profile.default_content_setting_values.notifications": 2,
    "profile.managed_default_content_settings.geolocation": 2,
    # "download.default_directory": "d:/temp",
    # "plugins.always_open_pdf_externally": True,
}
self.chrome_options.add_experimental_option("prefs", prefs)
I've tried to disable as many Chrome functions as I could (images, JavaScript, stylesheets, etc.), as shown above.
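In the same spirit, a couple of further settings that are often combined with those prefs to cut traffic (a sketch assuming Selenium 4; the option names are standard Chromium/Selenium options, everything else here is illustrative):

from selenium import webdriver

options = webdriver.ChromeOptions()
# Stop waiting once DOMContentLoaded fires instead of loading every subresource.
options.page_load_strategy = "eager"
# Block image loading at the renderer level, in addition to the prefs below.
options.add_argument("--blink-settings=imagesEnabled=false")
# Minimal prefs for illustration; use the fuller dict from the question instead.
prefs = {"profile.managed_default_content_settings.images": 2}
options.add_experimental_option("prefs", prefs)
driver = webdriver.Chrome(options=options)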

Dynamic planningVariable with rangeProvider

I try to keep the domain for my problem as simple as possible:
I have jobs with jobSteps. A job needs a start time, and the jobSteps have different machines, process times and an allowed overprocess time. The maximum overprocess time is different for each job, and can even be 0 (overprocessing not allowed).
Now the jobs should be scheduled under these conditions:
once a job has started it has to be finished without any pause (each step must be worked through)
the jobs should be started as soon as possible
So I have two planning variables: the start time of the job and the usedOverprocessTime for the job step
(it can make sense to overprocess some jobs, because the next machine may still be working on another job for a short time).
A simple example:
job 1: {[machine: A, processTime: 80, allowedOverprocessTime: 20], [machine: B, processTime: 80, allowedOverprocessTime: 0], [machine: C, processTime: 100, allowedOverprocessTime: 30]}
job 2: {[machine: A, processTime: 50, allowedOverprocessTime: 10], [machine: B, processTime: 80, allowedOverprocessTime: 30], [machine: C, processTime: 100, allowedOverprocessTime: 30]}
The code for the start time of the jobs is kinda simple:
@PlanningEntity
@Entity
public class Job {
    ...
    @PlanningVariable(valueRangeProviderRefs = "jobStartTimeRange")
    private Integer jobStartTime;
and then I have a planning solution class:
@PlanningSolution
public class JobSchedule {
    ...
    @ValueRangeProvider(id = "jobStartTimeRange")
    public List<Integer> createStartTimeList() {
        // here I calculate a worst-case range, for when the jobs have to
        // be worked on one after the other without overlapping
    }
So far, this should work. Now the problem with the overprocess time:
@PlanningEntity
public class JobStep {
    ...
    @PlanningVariable(valueRangeProviderRefs = "plannedOverprocessTimeRange")
    public Integer plannedOverprocessTime = 0;
For the provider I would need the current step in order to return an individual range from 0 to that step's specific maximum value. I even had the idea of returning a Map<JobStep, List> so each step could look up its specific range, but the annotation only works for collections.
So is it somehow possible to create the ranges for the planning variable dynamically, like I need to? I would be surprised if I were the first to have this requirement, to be honest.
OptaPlanner supports a wide range of ValueRangeProvider implementations, and it also allows you to implement your own. The question does not make it clear whether you've already tried this mechanism and it didn't work for your use case, or if you're not aware of this functionality at all.
My interpretation is that, if you put a ValueRangeProvider on your planning entity, you will be able to do what you need.
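A minimal sketch of that approach (the field and method names below are assumptions, not the asker's actual code): put the @ValueRangeProvider on the JobStep entity itself, so each step exposes its own range based on its allowed overprocess time.

import org.optaplanner.core.api.domain.entity.PlanningEntity;
import org.optaplanner.core.api.domain.valuerange.CountableValueRange;
import org.optaplanner.core.api.domain.valuerange.ValueRangeFactory;
import org.optaplanner.core.api.domain.valuerange.ValueRangeProvider;
import org.optaplanner.core.api.domain.variable.PlanningVariable;

@PlanningEntity
public class JobStep {

    // Problem fact: the maximum overprocess time allowed for this particular step.
    private int allowedOverprocessTime;

    @PlanningVariable(valueRangeProviderRefs = "plannedOverprocessTimeRange")
    private Integer plannedOverprocessTime;

    // Value range provider on the entity: each JobStep supplies its own range.
    @ValueRangeProvider(id = "plannedOverprocessTimeRange")
    public CountableValueRange<Integer> getPlannedOverprocessTimeRange() {
        // Range 0 .. allowedOverprocessTime (the upper bound argument is exclusive).
        return ValueRangeFactory.createIntValueRange(0, allowedOverprocessTime + 1);
    }
}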

Karate force scenarios inside a feature file to execute sequentially on multiple threads

When running all of my feature files through Bamboo/Maven using the "clean test" command, how do I force the scenarios inside each feature file to run in order, while still using multiple threads?
For example, if I have 100 feature files with 20 scenarios in each feature file, and I run them with 5 threads, the order of the feature files doesn't matter: feature 10 can run before feature 15, but the scenarios inside each feature have to run in sequential order.
I need to run feature 10 scenario 1, then feature 10 scenario 2, and so on.
So with 5 threads:
thread 1 can run feature 1
thread 2 can run feature 10
thread 3 can run feature 3
thread 4 can run feature 2
thread 5 can run feature 4
But I need each scenario 1 through 20, to execute in order.
So with 5 threads:
thread 1 feature 1 scenario 1, then scenario 2, then scenario 3, etc.
thread 2 feature 10 scenario 1, then scenario 2, then scenario 3, etc.
thread 3 feature 3 scenario 1, then scenario 2, then scenario 3, etc.
thread 4 feature 2 scenario 1, then scenario 2, then scenario 3, etc.
thread 5 feature 4 scenario 1, then scenario 2, then scenario 3, etc.
Is @parallel=false the answer? Do I really need to add that to the top of every single feature file? Like I said, I could have 100 feature files in my repository, maybe more.
Or do I have to add @parallel=false on the command line? If so, would it be like the other options, "-Dparallel=false"?
If your Scenarios are written so that they depend on each other, that is a bad practice. Please read https://stackoverflow.com/a/46080568/143475 very carefully.
So yes, Karate does not support a "global" switch to enable the behavior you describe, and one of the reasons is to discourage bad practices.
You will have to add @parallel=false to all features. Even that may not have the desired effect in the 1.0 version, because of some behavior changes: https://github.com/intuit/karate/wiki/1.0-upgrade-guide
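For context, the thread count itself is normally set in the JUnit parallel runner rather than on the Maven command line; a typical runner (the class name and classpath below are assumptions) looks roughly like this, with @parallel=false then added at the top of each feature that must stay sequential:

import com.intuit.karate.Results;
import com.intuit.karate.Runner;
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

class ParallelRunnerTest {

    @Test
    void testParallel() {
        // 5 threads across features; scenarios of a feature tagged @parallel=false run in order.
        Results results = Runner.path("classpath:features").parallel(5);
        assertEquals(0, results.getFailCount(), results.getErrorMessages());
    }
}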

pysnmp - how to use compiled mibs during agent implementation

The SNMP agent implementation examples provided with pysnmp don't really leverage the mib.py file generated by compiling a MIB. Is it possible to use this file to simplify agent implementation? Is such an example available, for a table? Thanks!
You are right, the existing mibdump.py tool is primarily designed for manager-side MIB compilation. However, a compiled MIB is still useful, or sometimes even crucial, for agent implementation.
For simple scalars you can mass-replace the MibScalar classes with MibScalarInstance ones and add an extra trailing .0 to their OID. For example, this line:
sysDescr = MibScalar((1, 3, 6, 1, 2, 1, 1, 1), DisplayString().subtype(subtypeSpec=ValueSizeConstraint(0, 255))).setMaxAccess("readonly")
would change like this:
sysDescr = MibScalarInstance((1, 3, 6, 1, 2, 1, 1, 1, 0), DisplayString().subtype(subtypeSpec=ValueSizeConstraint(0, 255))).setMaxAccess("readonly")
For SNMP tables it's much trickier because there can be several cases. If it's a static table which never changes its size, you can basically replace MibTableColumn with MibScalarInstance and append the index part of the OID. For example, this line:
sysORID = MibTableColumn((1, 3, 6, 1, 2, 1, 1, 9, 1, 2), ObjectIdentifier()).setMaxAccess("readonly")
would look like this (note index 12345):
sysORID = MibScalarInstance((1, 3, 6, 1, 2, 1, 1, 9, 1, 2, 12345), ObjectIdentifier()).setMaxAccess("readonly")
The rest of MibTable* classes can be removed from the MIB.py.
For dynamic tables that change their shape, either because the SNMP agent or the SNMP manager modifies them, you might need to preserve all the MibTable* classes and extend/customize the MibTableColumn class to make it actually manage your backend resources in response to SNMP calls.
A hopefully relevant example.

Lua: ProteaAudio API confuse -- How to use it?

Hello everyone.
Sorry for my noob question as I'm just a non-programmer trying to learn to program with Lua.
I'm very attracted to Lua since it's indeed very simple, both in size and in syntax.
And I decided to explore and experiment further with this Brazilian-born language, like playing with sound, as I did in Python and Ruby.
So I found ProteaAudio and tried to play the sample scripts that came with the package I downloaded from here.
The package comes with two sample scripts:
the first, named example.lua, plays the ogg sample file (which also comes with the package),
and the other, named scale.lua, plays a function-generated sound.
The first script runs just fine on my Win 7 and Ubuntu 12.04 x86 machines.
But the second script only runs on Windows; when I tried to run it on Ubuntu I got this error:
../lua52: scale.lua:13: bad argument #1 to 'soundLoop' (number expected, got nil)
stack traceback:
[C]: in function 'soundLoop'
scale.lua:13: in function 'playNote'
scale.lua:29: in main chunk
[C]: in ?
The full original source code of scale.lua is:
-- function creating a sine wave sample:
function sampleSine(freq, duration, sampleRate)
  local data = { }
  for i = 1, duration*sampleRate do
    data[i] = math.sin( (i*freq/sampleRate)*math.pi*2)
  end
  return proAudio.sampleFromMemory(data, sampleRate)
end

-- plays a sample shifted by a number of halftones for a definable period of time
function playNote(sample, pitch, duration, volumeL, volumeR, disparity)
  local scale = 2^(pitch/12)
  local sound = proAudio.soundLoop(sample, volumeL, volumeR, disparity, scale)
  proAudio.sleep(duration)
  proAudio.soundStop(sound)
end

-- create an audio device using default parameters and exit in case of errors
require("proAudioRt")
if not proAudio.create() then os.exit(1) end

-- generate a sample:
local sample = sampleSine(440, 0.5, 88200)

-- play scale (a major):
local duration = 0.5
for i, note in ipairs({ 0, 2, 4, 5, 7, 9, 11, 12 }) do
  playNote(sample, note, duration)
end

-- cleanup
proAudio.destroy()
And since I'm confused by this ProteaAudio Lua API, I really can't figure out why this error occurs.
Please help.
This is actually just a guess, but...
To play a "major" scale upwards (8 notes, jumping: full full half, full full full half) the original code does:
local duration = 0.5
for i, note in ipairs({ 0, 2, 4, 5, 7, 9, 11, 12 }) do
  playNote(sample, note, duration)
end
where sample is a handle to a pre-generated sample created by proAudio.sampleFromMemory and returned by the function sampleSine, which was passed a calculated table representing a 440 Hz sine wave (concert-pitch frequency for the note 'A4', the first A above middle 'C').
The code thus plays an 'A major' scale by changing (increasing) the pitch (frequency) of that sample in 8 steps (notes). That pitch calculation is done by the function playNote.
The function playNote accepts the following arguments:
sample, pitch, duration, volumeL, volumeR, disparity,
but it is currently not passed the arguments
volumeL, volumeR, disparity (which will therefore be nil).
So when playNote tries to call:
proAudio.soundLoop(sample, volumeL, volumeR, disparity, scale),
the call ends up like:
proAudio.soundLoop(sample, nil, nil, nil, scale),
where sample is passed on and scale is the playback pitch of that sample, as just calculated (according to the specified note) by playNote.
Your error message states: bad argument #1 to 'soundLoop' (number expected, got nil).
Hmm, that seems consistent with what is happening (assuming that 'bad argument #1' is the second argument, in this case volumeL).
So, you might want to try specifying some values for volumeL, volumeR and disparity, like:
local duration = 0.5
local volumeL = 1.0
local volumeR = 1.0
local disparity = 0.0
for i, note in ipairs({ 0, 2, 4, 5, 7, 9, 11, 12 }) do
  playNote(sample, note, duration, volumeL, volumeR, disparity)
end
From the proteaAudio documentation one can read about soundLoop's arguments:
sample - A sample handle returned by a previous load() call
volumeL - (optional) Left volume
volumeR - (optional) Right volume
disparity - (optional) Time difference between left and right channel in seconds.
Use negative values to specify a delay for the left
channel, positive for the right.
pitch - (optional) Pitch factor for playback. 0.5 corresponds to one octave
below, 2.0 to one above the original sample.
If that should do the trick, then the arguments might not be so optional on Ubuntu.
Hope this helps!