Reducing downtime (idle time) when arriving early for a visit - optaplanner

OptaPlanner is being used to plan the routes of a fleet of vehicles, and I am optimizing the route times.
I have a scenario with one visit whose time window is in the morning and a second visit whose time window is in the afternoon. The vehicle leaves the depot as soon as the depot's time window opens, makes the first delivery and heads to the second visit. Since the second visit's time window is in the afternoon, the vehicle has to wait for that window to open, which introduces downtime. This downtime (idle time) could be reduced by leaving the depot later. So I would like to ask:
Is there any rule to backtrack to the depot or to the previous visit, wait there longer before continuing, and thereby reduce the downtime (waiting time) at the second visit?
I have tried different variants:
1. I implemented a constraint that penalizes arriving early at a visit, with a penalty of customer -> customer.getReadyTime() - customer.getArrivalTime().
This may steer the optimization, but it does not roll back the arrivalTime.
2. I modified my listener (the updateArrivalTime method of ArrivalTimeUpdatingVariableListener): when calculating the arrival time, if there is idle time, I go to the previous visit and subtract the idle time. However, in some cases it does not recursively update all previous visits correctly, and in other cases it gives me a "VariableListener corruption" error. I have had no success with this variant either.
Is there any rule to wait, or to roll back and update all the visits again?
I attach my constraint and listener for better context.
ArrivalTimeUpdatingVariableListener.class:
protected void updateArrivalTime(ScoreDirector scoreDirector, TimeWindowedVisit sourceCustomer) {
    Standstill previousStandstill = sourceCustomer.getPreviousStandstill();
    Long departureTime = previousStandstill == null ? null
            : (previousStandstill instanceof TimeWindowedVisit)
                    ? ((TimeWindowedVisit) previousStandstill).getArrivalTime()
                            + ((TimeWindowedVisit) previousStandstill).getServiceDuration()
                    : ((PlanningVehicle) previousStandstill).getDepot() != null
                            ? ((TimeWindowedDepot) ((PlanningVehicle) previousStandstill).getDepot()).getReadyTime()
                            : 0;
    TimeWindowedVisit shadowCustomer = sourceCustomer;
    Long arrivalTime = calculateArrivalTime(shadowCustomer, departureTime);
    while (shadowCustomer != null && !Objects.equals(shadowCustomer.getArrivalTime(), arrivalTime)) {
        scoreDirector.beforeVariableChanged(shadowCustomer, "arrivalTime");
        shadowCustomer.setArrivalTime(arrivalTime);
        scoreDirector.afterVariableChanged(shadowCustomer, "arrivalTime");
        departureTime = shadowCustomer.getDepartureTime();
        shadowCustomer = shadowCustomer.getNextVisit();
        arrivalTime = calculateArrivalTime(shadowCustomer, departureTime);
    }
}
private Long calculateArrivalTime(TimeWindowedVisit customer, Long previousDepartureTime) {
    long arrivalTime = 0;
    if (customer == null || customer.getPreviousStandstill() == null) {
        return null;
    }
    if (customer.getPreviousStandstill() instanceof PlanningVehicle) {
        arrivalTime = Math.max(customer.getReadyTime(),
                previousDepartureTime + customer.distanceFromPreviousStandstill());
    } else {
        arrivalTime = previousDepartureTime + customer.distanceFromPreviousStandstill();
        // Reach backwards and (attempt to) shift the previous arrival time.
        Standstill previousStandstill = customer.getPreviousStandstill();
        long idle = customer.getReadyTime() - arrivalTime;
        if (previousStandstill != null && idle > 0) {
            arrivalTime += idle;
            if (previousStandstill instanceof TimeWindowedVisit) {
                long previousArrival = ((TimeWindowedVisit) previousStandstill).getArrivalTime() + idle;
                if (previousArrival > ((TimeWindowedVisit) previousStandstill).getDueTime()) {
                    System.out.println("Arrival is greater than the due time");
                    previousArrival = ((TimeWindowedVisit) previousStandstill).getDueTime()
                            - ((TimeWindowedVisit) previousStandstill).getServiceDuration();
                }
                // The previous visit is modified here without scoreDirector before/afterVariableChanged notifications.
                ((TimeWindowedVisit) previousStandstill).setArrivalTime(previousArrival);
            }
        }
    }
    // breaks
    return arrivalTime;
}
ConstraintProvider.class:
private Constraint arrivalEarly(ConstraintFactory constraintFactory) {
    return constraintFactory.from(TimeWindowedVisit.class)
            .filter(customer -> !customer.getVehicle().isGhost()
                    && customer.getArrivalTime() < customer.getReadyTime())
            .penalizeConfigurableLong(
                    VehicleRoutingConstraintConfiguration.MINIMIZE_IDLE_TIME,
                    customer -> customer.getReadyTime() - customer.getArrivalTime());
}

Related

project job scheduling: Multiple job parallel problems

Within the same project, I don't want two jobs to run in parallel. How should I design this?
Is there a rule in the DRL file that prevents two jobs under the same project from running at the same time?
If there is no such rule, how can I keep two jobs under the same project from running simultaneously?
rule "nonrenewableResourceCapacity"
when
$resource : Resource(renewable == false, $capacity : capacity)
accumulate(
ResourceRequirement(resource == $resource,
$executionMode : executionMode,
$requirement : requirement)
and Allocation(executionMode == $executionMode);
$used : sum($requirement);
$used > $capacity
)
then
scoreHolder.addHardConstraintMatch(kcontext, 0, $capacity - $used);
end
rule "renewableResourceUsedDay"
salience 1 // Do these rules first (optional, for performance)
when
ResourceRequirement(resourceRenewable == true, $executionMode : executionMode, $resource : resource)
Allocation(executionMode == $executionMode,
$startDate : startDate, $endDate : endDate)
then
for (int i = $startDate; i < $endDate; i++) {
insertLogical(new RenewableResourceUsedDay($resource, i));
}
end
rule "renewableResourceCapacity"
when
RenewableResourceUsedDay($resource : resource, $capacity : resourceCapacity, $usedDay : usedDay)
accumulate(
ResourceRequirement(resource == $resource,
$executionMode : executionMode,
$requirement : requirement)
and Allocation(executionMode == $executionMode, $usedDay >= startDate, $usedDay < endDate);
$used : sum($requirement);
$used > $capacity
)
then
scoreHolder.addHardConstraintMatch(kcontext, 0, $capacity - $used);
end
// ############################################################################
// Soft constraints
// ############################################################################
rule "totalProjectDelay"
when
Allocation(jobType == JobType.SINK, endDate != null, $endDate : endDate,
$criticalPathEndDate : projectCriticalPathEndDate)
then
scoreHolder.addSoftConstraintMatch(kcontext, 0, $criticalPathEndDate - $endDate);
end
rule "totalMakespan"
when
accumulate(
Allocation(jobType == JobType.SINK, $endDate : endDate);
$maxProjectEndDate : max($endDate)
)
then
scoreHolder.addSoftConstraintMatch(kcontext, 1, - (Integer) $maxProjectEndDate);
end
In task assignment, when you never want to run 2 jobs in parallel (so it's a hard constraint for all jobs), I'd probably make it a built-in hard constraint and basically model it like TSP.
If it's just specific pairs of jobs that shouldn't run in parallel, I'd have the variable listener detect that the 2 jobs would run at the same time and delay the start time of the job that can start the latest. If they can both start at the same time, the one with the lowest id starts first and the other is delayed. This last bit is to avoid score corruption with incremental calculation.
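For illustration, here is a minimal sketch of that second idea, assuming a hypothetical Task shape with a "startTime" shadow variable (the class, interface and helper names are my own, not from the original answer; ScoreDirector is OptaPlanner's):
public class NoOverlapShadowUpdater {

    /** Minimal task shape assumed by this sketch (not from the original post). */
    public interface Task {
        Long getId();
        Long getStartTime();
        void setStartTime(Long startTime);
        long getDuration();
    }

    /**
     * If the two tasks overlap, push back the one that starts latest; on a tie,
     * the task with the lowest id keeps its start time and the other is delayed.
     */
    public static void resolveOverlap(ScoreDirector scoreDirector, Task a, Task b) {
        if (a.getStartTime() == null || b.getStartTime() == null) {
            return;
        }
        long endA = a.getStartTime() + a.getDuration();
        long endB = b.getStartTime() + b.getDuration();
        if (endA <= b.getStartTime() || endB <= a.getStartTime()) {
            return; // no overlap, nothing to do
        }
        Task first;
        Task delayed;
        if (a.getStartTime() < b.getStartTime()
                || (a.getStartTime().equals(b.getStartTime()) && a.getId() < b.getId())) {
            first = a;
            delayed = b;
        } else {
            first = b;
            delayed = a;
        }
        // Delay the chosen task until the other one has finished, notifying the score director.
        long newStart = first.getStartTime() + first.getDuration();
        scoreDirector.beforeVariableChanged(delayed, "startTime");
        delayed.setStartTime(newStart);
        scoreDirector.afterVariableChanged(delayed, "startTime");
    }
}
Wrapping the change in before/afterVariableChanged is what keeps incremental score calculation consistent, which is the corruption concern mentioned above.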

Bukkit countdown timer starts counting down infinite (2,1,-1,-2 etc)

So I've got this problem: when one or two+ players are online, the countdown timer goes into the negatives, like 4, 3, 2, 1, 0, -1, -2, -3, etc.
Does anyone know how I can fix this? I've been struggling with it for quite a long time now. :P
Here is my countdown class:
@Override
public void run() {
if (timeUntilStart == 0) {
if (!Game.canStart()) {
if(Bukkit.getOnlinePlayers().size() <= 2) {
plugin.restartCountdown();
ChatUtilities.broadcast(ChatColor.RED + "Not enough players to start. Countdown will");
ChatUtilities.broadcast(ChatColor.RED + "restart.");
for (Player p : Bukkit.getOnlinePlayers()) p.playSound(p.getLocation(), Sound.ENDERDRAGON_WINGS, 5, 1);
return;
}else{
if(Game.canStart()) {
if(Bukkit.getOnlinePlayers().size() >= 2) {
Game.start();
}
}
}
}
}
boolean broadcast = false;
for (Player p : Bukkit.getOnlinePlayers()) {
p.setLevel(timeUntilStart);
if (timeUntilStart < 11 || timeUntilStart == 120 ||timeUntilStart == 60 || timeUntilStart == 30) {
p.playSound(p.getLocation(), Sound.ORB_PICKUP, 5, 0);
if (timeUntilStart == 1) p.playSound(p.getLocation(), Sound.ORB_PICKUP, 5, 1);
broadcast = true;
}
}
if (broadcast) ChatUtilities.broadcast(String.valueOf(timeUntilStart) + " §6Seconds until the game starts!");{
}
{
timeUntilStart -= 1;
}
}
}
The only case in which your method returns and timeUntilStart is not decremented is
timeUntilStart == 0 && !Game.canStart() && Bukkit.getOnlinePlayers().size() <= 2
As defined by the first three if blocks in your code.
This explains why your countdown does not stop when you have 3 or more players around.
I believe this mistake happened because of messy {} blocks and indentation. Take a step back and closely read the code you wrote again and fix brackets as well as indentation.
Good formatting is not a pointless chore, it's an essential tool to help yourself understand what you already wrote.
Have you tried using the Bukkit scheduler? People tend to forget that Bukkit's API can handle countdowns very well. Just call the scheduler like this:
myInteger = Bukkit.getScheduler().scheduleSyncRepeatingTask(plugin, runnable, 0L, 20L);
Pass your JavaPlugin extension class in as plugin, use a Runnable and implement its run() method; 0L is the number of ticks before the first run and 20L is the number of ticks between runs.
Cancel the countdown like this:
Bukkit.getScheduler().cancelTask(myInteger);
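To make that concrete, here is a minimal, self-contained sketch of a countdown built on the scheduler (class layout, field names and the 120-second start value are illustrative, not from the original post):
import org.bukkit.Bukkit;
import org.bukkit.plugin.java.JavaPlugin;

public class Countdown {

    private final JavaPlugin plugin;
    private int taskId;
    private int timeUntilStart = 120;

    public Countdown(JavaPlugin plugin) {
        this.plugin = plugin;
    }

    public void start() {
        taskId = Bukkit.getScheduler().scheduleSyncRepeatingTask(plugin, new Runnable() {
            @Override
            public void run() {
                if (timeUntilStart <= 0) {
                    // Cancel the repeating task so the counter never goes negative.
                    Bukkit.getScheduler().cancelTask(taskId);
                    return;
                }
                if (timeUntilStart <= 10 || timeUntilStart % 30 == 0) {
                    Bukkit.broadcastMessage(timeUntilStart + " seconds until the game starts!");
                }
                timeUntilStart--;
            }
        }, 0L, 20L); // run immediately, then every 20 ticks (1 second)
    }
}
The key point is that the task cancels itself when the counter reaches zero, so it can never count below zero.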

OptaPlanner- TSPTW minimizing total time

I am using OptaPlanner to solve what is effectively the Traveling Salesman Problem with Time Windows (TSPTW). I have a working initial solution based on the OptaPlanner provided VRPTW example.
I am now trying to address my requirements that deviate from the standard TSPTW, which are:
I am trying to minimize the total time spent rather than the total distance traveled. Because of this, idle time counts against me.
In addition to the standard time-windowed visits, I also must support no-later-than (NLT) visits (i.e., don't visit after time X) and no-earlier-than (NET) visits (i.e., don't visit before time X).
My current solution always sets the first visit's arrival time to that visit's start time. This has the following problems with respect to my requirements:
This can introduce unnecessary idle time that could be avoided if the visit was arrived at sometime later in its time window.
The behavior with NLT is problematic. If I define an NLT with the start time set to Long.MIN_VALUE (to represent that it is unbounded without resorting to nulls) then that is the time the NLT visit is arrived at (the same problem as #1). I tried addressing this by setting the start time to the NLT time. This resulted in arriving just in time for the NLT visit but overshooting the time windows of subsequent visits.
How should I address this/these problems? I suspect a solution will involve ArrivalTimeUpdatingVariableListener but I don't know what that solution should look like.
In case it's relevant, I've pasted in my current scoring rules below. One thing to note is that "distance" is really travel time. Also, for domain reasons, I am encouraging NLT and NET arrival times to be close to the cutoff time (end time for NLT, start time for NET).
import org.optaplanner.core.api.score.buildin.hardsoftlong.HardSoftLongScoreHolder;

global HardSoftLongScoreHolder scoreHolder;

// Hard Constraints

rule "ArrivalAfterWindowEnd"
    when
        Visit(arrivalTime > maxStartTime, $arrivalTime : arrivalTime, $maxStartTime : maxStartTime)
    then
        scoreHolder.addHardConstraintMatch(kcontext, $maxStartTime - $arrivalTime);
end

// Soft Constraints

rule "MinimizeDistanceToPreviousEvent"
    when
        Visit(previousRouteEvent != null, $distanceFromPreviousRouteEvent : distanceFromPreviousRouteEvent)
    then
        scoreHolder.addSoftConstraintMatch(kcontext, -$distanceFromPreviousRouteEvent);
end

rule "MinimizeDistanceFromLastEventToHome"
    when
        $visit : Visit(previousRouteEvent != null)
        not Visit(previousRouteEvent == $visit)
        $home : Home()
    then
        scoreHolder.addSoftConstraintMatch(kcontext, -$visit.getDistanceTo($home));
end

rule "MinimizeIdle"
    when
        Visit(scheduleType != ScheduleType.NLT, arrivalTime < minStartTime, $minStartTime : minStartTime, $arrivalTime : arrivalTime)
    then
        scoreHolder.addSoftConstraintMatch(kcontext, $arrivalTime - $minStartTime);
end

rule "PreferLatestNLT"
    when
        Visit(scheduleType == ScheduleType.NLT, arrivalTime < maxStartTime, $maxStartTime : maxStartTime, $arrivalTime : arrivalTime)
    then
        scoreHolder.addSoftConstraintMatch(kcontext, $arrivalTime - $maxStartTime);
end

rule "PreferEarliestNET"
    when
        Visit(scheduleType == ScheduleType.NET, arrivalTime > minStartTime, $minStartTime : minStartTime, $arrivalTime : arrivalTime)
    then
        scoreHolder.addSoftConstraintMatch(kcontext, $minStartTime - $arrivalTime);
end
To see an example that uses real road times instead of road distances: in the examples app, open Vehicle Routing, click the Import button, and load the file roaddistance/capacitated/belgium-road-time-n50-k10.vrp. Those times were calculated with GraphHopper.
To see an example that uses time windows, open Vehicle Routing and quick-open one of the datasets named cvrptw (the "tw" stands for Time Windows). If you look at the academic spec for CVRPTW (linked from docs chapter 3, IIRC), you'll see it already has a hard constraint "Do not arrive after the time window closes", so you'll find that one in the score rules DRL. As for arriving too early (and therefore losing the idle time): copy-paste that hard constraint, make it a soft constraint, make it use readyTime instead of dueTime, and reverse its comparison and penalty calculation. I actually implemented that originally (as it's the logical thing to have), but because I followed the academic spec (to compare with the academics' results) I had to remove it.
I was able to solve my problem by modifying ArrivalTimeUpdatingVariableListener's updateArrivalTime method to reach backwards and (attempt to) shift the previous arrival time. Additionally, I introduced a getPreferredStartTime() method to support NLT events defaulting to as late as possible. Finally, just for code cleanliness, I moved the updateArrivalTime method from ArrivalTimeUpdatingVariableListener into the Visit class.
Here is the relevant code from the Visit class:
public long getPreferredStartTime()
{
    switch (scheduleType)
    {
        case NLT:
            return getMaxStartTime();
        default:
            return getMinStartTime();
    }
}

public Long getStartTime()
{
    Long arrivalTime = getArrivalTime();
    if (arrivalTime == null)
    {
        return null;
    }
    switch (scheduleType)
    {
        case NLT:
            return arrivalTime;
        default:
            return Math.max(arrivalTime, getMinStartTime());
    }
}

public Long getEndTime()
{
    Long startTime = getStartTime();
    if (startTime == null)
    {
        return null;
    }
    return startTime + duration;
}

public void updateArrivalTime(ScoreDirector scoreDirector)
{
    if (previousRouteEvent instanceof Visit)
    {
        updateArrivalTime(scoreDirector, (Visit) previousRouteEvent);
        return;
    }
    long arrivalTime = getPreferredStartTime();
    if (Utilities.equal(this.arrivalTime, arrivalTime))
    {
        return;
    }
    setArrivalTime(scoreDirector, arrivalTime);
}

private void updateArrivalTime(ScoreDirector scoreDirector, Visit previousVisit)
{
    long departureTime = previousVisit.getEndTime();
    long arrivalTime = departureTime + getDistanceFromPreviousRouteEvent();
    if (Utilities.equal(this.arrivalTime, arrivalTime))
    {
        return;
    }
    if (arrivalTime > maxStartTime)
    {
        if (previousVisit.shiftTimeLeft(scoreDirector, arrivalTime - maxStartTime))
        {
            return;
        }
    }
    else if (arrivalTime < minStartTime)
    {
        if (previousVisit.shiftTimeRight(scoreDirector, minStartTime - arrivalTime))
        {
            return;
        }
    }
    setArrivalTime(scoreDirector, arrivalTime);
}

/**
 * Set the arrival time and propagate the change to any following entities.
 */
private void setArrivalTime(ScoreDirector scoreDirector, long arrivalTime)
{
    scoreDirector.beforeVariableChanged(this, "arrivalTime");
    this.arrivalTime = arrivalTime;
    scoreDirector.afterVariableChanged(this, "arrivalTime");
    Visit nextEntity = getNextVisit();
    if (nextEntity != null)
    {
        nextEntity.updateArrivalTime(scoreDirector, this);
    }
}

/**
 * Attempt to shift the arrival time backward by the specified amount.
 * @param requested The amount of time that should be subtracted from the arrival time.
 * @return Returns true if the arrival time was changed.
 */
private boolean shiftTimeLeft(ScoreDirector scoreDirector, long requested)
{
    long available = arrivalTime - minStartTime;
    if (available <= 0)
    {
        return false;
    }
    requested = Math.min(requested, available);
    if (previousRouteEvent instanceof Visit)
    {
        // Arrival time is inflexible as this is not the first event. Forward to previous event.
        return ((Visit) previousRouteEvent).shiftTimeLeft(scoreDirector, requested);
    }
    setArrivalTime(scoreDirector, arrivalTime - requested);
    return true;
}

/**
 * Attempt to shift the arrival time forward by the specified amount.
 * @param requested The amount of time that should be added to the arrival time.
 * @return Returns true if the arrival time was changed.
 */
private boolean shiftTimeRight(ScoreDirector scoreDirector, long requested)
{
    long available = maxStartTime - arrivalTime;
    if (available <= 0)
    {
        return false;
    }
    requested = Math.min(requested, available);
    if (previousRouteEvent instanceof Visit)
    {
        // Arrival time is inflexible as this is not the first event. Forward to previous event.
        // Note: we could start later anyway, but that won't decrease idle time, which is the purpose of shifting right.
        return ((Visit) previousRouteEvent).shiftTimeRight(scoreDirector, requested);
    }
    setArrivalTime(scoreDirector, arrivalTime + requested);
    return false;
}

How to implement rate limiting using Redis

I use INCR and EXPIRE to implement rate limiting, e.g., 5 requests per minute:
if EXISTS counter
    count = INCR counter
else
    EXPIRE counter 60
    count = INCR counter
if count > 5
    print "Exceeded the limit"
However, 5 requests can be sent in the last second of minute one and 5 more in the first second of minute two, i.e., 10 requests in two seconds.
How can this problem be avoided?
Update: I came up with this list implementation. Is this a good way to do it?
times = LLEN counter
if times < 5
    LPUSH counter now()
else
    time = LINDEX counter -1
    if now() - time < 60
        print "Exceeded the limit"
    else
        LPUSH counter now()
        LTRIM counter 5
You could switch from "5 requests in the last minute" to "5 requests in minute x". That way it would be possible to do:
counter = current_time # for example 15:03
count = INCR counter
EXPIRE counter 60 # just to make sure redis doesn't store it forever
if count > 5
    print "Exceeded the limit"
If you want to keep using "5 requests in the last minute", then you could do:
counter = Time.now.to_i # this is Ruby; it returns the number of seconds since 1/1/1970
key = "counter:" + counter
INCR key
EXPIRE key 60
number_of_requests = KEYS "counter:*"
if number_of_requests > 5
    print "Exceeded the limit"
If you have production constraints (especially performance), it is not advised to use the KEYS command. We could use sets instead:
counter = Time.now.to_i # this is Ruby; it returns the number of seconds since 1/1/1970
set = "my_set"
SADD set counter
members = SMEMBERS set
# remove all set members which are older than 1 minute
members.each { |member| SREM set, member if member < (Time.now.to_i - 60) }
if (SMEMBERS set).size > 5
    print "Exceeded the limit"
This is all pseudo-Ruby code, but it should give you the idea.
The canonical way to do rate limiting is via the leaky bucket algorithm. The downside of using a counter is that a user can perform a bunch of requests right after the counter is reset, i.e., 5 actions in the first second of the next minute in your case. The leaky bucket algorithm solves this problem. Briefly, you can use sorted sets to store your "leaky bucket", using the action timestamps as keys to fill it.
Check out this article for the exact implementation:
Better Rate Limiting With Redis Sorted Sets
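As a rough sketch of that sorted-set idea with the Jedis client (key names and the exact call sequence are my own illustration, not from the linked article):
import redis.clients.jedis.Jedis;

public class SortedSetRateLimiter {

    private final Jedis jedis = new Jedis("localhost", 6379);

    /** Returns true while the caller stays under maxRequests per windowMillis. */
    public boolean allow(String key, int maxRequests, long windowMillis) {
        long now = System.currentTimeMillis();
        // Drop timestamps that have fallen out of the window, then count what is left.
        jedis.zremrangeByScore(key, 0, now - windowMillis);
        long inWindow = jedis.zcard(key);
        if (inWindow >= maxRequests) {
            return false;
        }
        // Record this request (member just needs to be unique) and keep the key from living forever.
        jedis.zadd(key, now, now + "-" + Math.random());
        jedis.pexpire(key, windowMillis);
        return true;
    }
}
In production you would typically wrap the check-and-add in a MULTI/EXEC transaction or a Lua script so that concurrent callers cannot slip past the limit.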
UPDATE:
There is also another algorithm, which has some advantages compared to the leaky bucket. It's called the Generic Cell Rate Algorithm (GCRA). Here's how it works at a high level, as described in Rate Limiting, Cells, and GCRA:
GCRA works by tracking remaining limit through a time called the “theoretical arrival time” (TAT), which is seeded on the first request by adding a duration representing its cost to the current time. The cost is calculated as a multiplier of our “emission interval” (T), which is derived from the rate at which we want the bucket to refill. When any subsequent request comes in, we take the existing TAT, subtract a fixed buffer representing the limit’s total burst capacity from it (τ + T), and compare the result to the current time. This result represents the next time to allow a request. If it’s in the past, we allow the incoming request, and if it’s in the future, we don’t. After a successful request, a new TAT is calculated by adding T.
There is a redis module that implements this algorithm available on GitHub: https://github.com/brandur/redis-cell
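To make the TAT arithmetic concrete, here is a small in-memory sketch of the bookkeeping described above (illustrative only; redis-cell keeps this state in Redis and updates it atomically, and the constructor parameters here are my own simplification):
public class GcraLimiter {

    private final long emissionIntervalMillis; // T: the interval at which the limit refills by one
    private final long burstToleranceMillis;   // tau: how much burst is tolerated
    private long tat;                          // theoretical arrival time, in epoch millis

    public GcraLimiter(long ratePerSecond, long burst) {
        this.emissionIntervalMillis = 1000 / ratePerSecond;
        this.burstToleranceMillis = emissionIntervalMillis * (burst - 1);
    }

    public synchronized boolean allow() {
        long now = System.currentTimeMillis();
        long candidateTat = Math.max(tat, now) + emissionIntervalMillis;
        // Earliest moment at which this request may be allowed: the TAT minus the burst buffer (tau + T).
        long allowAt = candidateTat - (burstToleranceMillis + emissionIntervalMillis);
        if (allowAt > now) {
            return false;      // the allow-time is still in the future: reject
        }
        tat = candidateTat;    // successful request: the new TAT advances by T
        return true;
    }
}
With ratePerSecond = 1 and burst = 3, for example, three calls succeed back-to-back and the fourth is rejected until roughly one second has passed.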
This is an old question that was already answered, but here's an implementation I did taking some inspiration from here. I'm using ioredis for Node.js
Here is the rolling-window time limiter in all its asynchronous yet race-condition-free (I hope) glory:
var Ioredis = require('ioredis');
var redis = new Ioredis();

// Rolling window rate limiter
//
// key is a unique identifier for the process or function call being limited
// exp is the expiry in milliseconds
// maxnum is the number of function calls allowed before expiry
var redis_limiter_rolling = function(key, maxnum, exp, next) {
  redis.multi([
    ['incr', 'limiter:num:' + key],
    ['time']
  ]).exec(function(err, results) {
    if (err) {
      next(err);
    } else {
      // unique incremented list number for this key
      var listnum = results[0][1];
      // current time
      var tcur = (parseInt(results[1][1][0], 10) * 1000) + Math.floor(parseInt(results[1][1][1], 10) / 1000);
      // absolute time of expiry
      var texpiry = tcur - exp;
      // get the number of transactions in the last expiry time
      var listkey = 'limiter:list:' + key;
      redis.multi([
        ['zadd', listkey, tcur.toString(), listnum],
        ['zremrangebyscore', listkey, '-inf', texpiry.toString()],
        ['zcard', listkey]
      ]).exec(function(err, results) {
        if (err) {
          next(err);
        } else {
          // num is the number of calls in the last expiry time window
          var num = parseInt(results[2][1], 10);
          if (num <= maxnum) {
            // does not reach limit
            next(null, false, num, exp);
          } else {
            // limit surpassed
            next(null, true, num, exp);
          }
        }
      });
    }
  });
};
and here is a kind of lockout-style rate limiter:
// Lockout window rate limiter
//
// key is a unique identifier for the process or function call being limited
// exp is the expiry in milliseconds
// maxnum is the number of function calls allowed within expiry time
var util_limiter_lockout = function(key, maxnum, exp, next) {
  // lockout rate limiter
  var idkey = 'limiter:lock:' + key;
  redis.incr(idkey, function(err, result) {
    if (err) {
      next(err);
    } else {
      if (result <= maxnum) {
        // still within number of allowable calls
        // - reset expiry and allow next function call
        redis.expire(idkey, exp, function(err) {
          if (err) {
            next(err);
          } else {
            next(null, false, result);
          }
        });
      } else {
        // too many calls, user must wait for expiry of idkey
        next(null, true, result);
      }
    }
  });
};
Here's a gist of the functions. Let me know if you see any issues.
Note: The following code is a sample implementation in Java.
private final String COUNT = "count";

@Autowired
private StringRedisTemplate stringRedisTemplate;

private HashOperations hashOperations;

@PostConstruct
private void init() {
    hashOperations = stringRedisTemplate.opsForHash();
}

@Override
public boolean isRequestAllowed(String key, long limit, long timeout, TimeUnit timeUnit) {
    Boolean hasKey = stringRedisTemplate.hasKey(key);
    if (hasKey) {
        Long value = hashOperations.increment(key, COUNT, -1L);
        return value > 0;
    } else {
        hashOperations.put(key, COUNT, String.valueOf(limit));
        stringRedisTemplate.expire(key, timeout, timeUnit);
    }
    return true;
}
Here's my leaky bucket implementation of rate limiting, using Redis Lists.
Note: The following code is a sample implementation in PHP; you can implement it in your own language.
$list = $redis->lRange($key, 0, -1); // get whole list
$noOfRequests = count($list);
if ($noOfRequests > 5) {
    $expired = 0;
    foreach ($list as $timestamp) {
        if ((time() - $timestamp) > 60) { // Time difference more than 1 min == expired
            $expired++;
        }
    }
    if ($expired > 0) {
        $redis->lTrim($key, $expired, -1); // Remove expired requests
        if (($noOfRequests - $expired) > 5) { // If still no of requests greater than 5, means fresh limit exceeded.
            die("Request limit exceeded");
        }
    } else { // No expired == all fresh.
        die("Request limit exceeded");
    }
}
$redis->rPush($key, time()); // Add this request as a genuine one to the list, and proceed.
Your update is a very nice algorithm, although I made a couple of changes:
times = LLEN counter
if times < 5
    LPUSH counter now()
else
    time = LINDEX counter -1
    if now() - time <= 60
        print "Exceeded the limit"
    else
        LPUSH counter now()
        RPOP counter
Similar to the other Java answer, but with fewer round trips to Redis:
@Autowired
private StringRedisTemplate stringRedisTemplate;

private HashOperations hashOperations;

@PostConstruct
private void init() {
    hashOperations = stringRedisTemplate.opsForHash();
}

@Override
public boolean isRequestAllowed(String key, long limit, long timeout, TimeUnit timeUnit) {
    Long value = hashOperations.increment(key, COUNT, 1L);
    if (value == 1) {
        stringRedisTemplate.expire(key, timeout, timeUnit);
    }
    // Allowed as long as the counter has not exceeded the limit.
    return value <= limit;
}
Here is an alternative approach. If the goal is to limit the number of requests to X requests per Y seconds with the timer starting when the first request is received, then you could create 2 keys for each user that you want to track: one for the time that the first request was received and another for the number of requests made.
key = "123"
key_count = "ct:#{key}"
key_timestamp = "ts:#{key}"
if (not redis[key_timestamp].nil?) && (not redis[key_count].nil?) && (redis[key_count].to_i > 3)
puts "limit reached"
else
if redis[key_timestamp].nil?
redis.multi do
redis.set(key_count, 1)
redis.set(key_timestamp, 1)
redis.expire(key_timestamp,30)
end
else
redis.incr(key_count)
end
puts redis[key_count].to_s + " : " + redis[key_timestamp].to_s + " : " + redis.ttl(key_timestamp).to_s
end
This is small enough that you might get away with not hashing it.
local f,k,a,b f=redis.call k=KEYS[1] a=f('incrby',k,ARGV[1]) b=f('pttl',k) if b<0 then f('pexpire',k,ARGV[2]) end return a
The parameters are:
KEYS[1] = key name, could be the action to rate limit for example
ARGV[1] = amount to increment, usually 1, but you could batch up per 10 or 100 millisecond intervals on the client
ARGV[2] = window, in milliseconds, to rate limit in
Returns: The new incremented value, which can then be compared to a value in your code to see if it's over the rate limit.
The TTL will not be reset to the base value with this method; it will continue to slide down until the key expires, at which point it starts over with a TTL of ARGV[2] on the next call.
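For reference, this is roughly how the one-liner above could be invoked from Java with Jedis (the key name and the limit of 100 are illustrative, not part of the original script):
import java.util.Arrays;
import redis.clients.jedis.Jedis;

public class SlidingTtlLimiterCall {

    // The one-line script shown above.
    private static final String SCRIPT =
            "local f,k,a,b f=redis.call k=KEYS[1] a=f('incrby',k,ARGV[1]) "
          + "b=f('pttl',k) if b<0 then f('pexpire',k,ARGV[2]) end return a";

    public static void main(String[] args) {
        long limit = 100; // allowed calls per window (illustrative)
        try (Jedis jedis = new Jedis("localhost", 6379)) {
            Object result = jedis.eval(SCRIPT,
                    Arrays.asList("ratelimit:myaction"),   // KEYS[1]: the action to limit
                    Arrays.asList("1", "60000"));          // ARGV[1]: increment, ARGV[2]: window in ms
            System.out.println("over limit? " + (((Long) result) > limit));
        }
    }
}
The comparison against the returned counter is done client-side, as the answer above describes.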
Requests in Last interval / Sliding window
interval == the amount of time within which the number of requests (throughput) is accepted
throughput == the number of requests allowed per interval
RequestTimeList == each request's time is added to this list
// Remove older request entries
while (!RequestTimeList.isEmpty() && (now() - RequestTimeList.get(0)) > interval) {
    RequestTimeList.remove(0)
}
if (RequestTimeList.length < throughput) {
    RequestTimeList.add(now())
} else {
    throw err;
}
Requests in Interval / Fixed window
I have tried this with LIST, EXPIRE and PTTL.
If the TPS is 5 per second, then:
throughput = 5
rampup = 1000 (1000ms = 1sec)
interval = 200ms
local tpsKey = KEYS[1]
local throughput = tonumber(ARGV[1])
local rampUp = tonumber(ARGV[2])
-- Minimum interval to accept the next request.
local interval = rampUp / throughput
local currentTime = redis.call('PTTL', tpsKey)
-- -2 if the key does not exist, so set a year expiry
if currentTime == -2 then
    currentTime = 31536000000 - interval
    redis.call('SET', tpsKey, 31536000000, "PX", currentTime)
end
local previousTime = redis.call('GET', tpsKey)
if (previousTime - currentTime) >= interval then
    redis.call('SET', tpsKey, currentTime, "PX", currentTime)
    return true
else
    redis.call('ECHO', "0. ERR - MAX PERMIT REACHED IN THIS INTERVAL")
    return false
end
Another way, with a list:
local counter = KEYS[1]
local throughput = tonumber(ARGV[1])
local rampUp = tonumber(ARGV[2])
local interval = rampUp / throughput
local times = redis.call('LLEN', counter)
if times == 0 then
    redis.call('LPUSH', counter, rampUp)
    redis.call('PEXPIRE', counter, rampUp)
    return true
elseif times < throughput then
    local lastElemTTL = tonumber(redis.call('LINDEX', counter, 0))
    local currentTTL = redis.call('PTTL', counter)
    if (lastElemTTL - currentTTL) < interval then
        return false
    else
        redis.call('LPUSH', counter, currentTTL)
        return true
    end
else
    return false
end
Redis Streams (introduced in Redis 5.0, 2018) provide a nice way of implementing a sliding-window API limiter. Here's my implementation in Python:
r = redis.Redis(host='localhost', port=6379, db=0, decode_responses=True)

@app.middleware("http")
async def rate_limit(request: Request, call_next):
    request_time = time.time()
    host = request.client.host
    # Settings
    window_seconds = 10
    max_requests_in_window = 2
    # Fetch the oldest element in the stream
    # Returns a 0- or 1-element list like: [('1660835163482-0', {'': ''})]
    oldest = r.xrange(name=host, min='-', max='+', count=1)
    # if:
    # - an oldest element exists AND
    # - it's inside the time window AND
    # - the stream is full
    # deny the request
    if len(oldest) > 0:
        oldest_time = int(oldest[0][0].split('-')[0]) / 1000
        if oldest_time >= request_time - window_seconds:
            stream_size = r.xlen(name=host)
            if stream_size >= max_requests_in_window:
                return JSONResponse(status_code=403, content={'reason': oldest})
    # Append this request to the stream and carry on
    r.xadd(name=host, fields={'': ''}, maxlen=max_requests_in_window, approximate=False)
    # Carry on..
    response = await call_next(request)
    return response

Blackberry - systemLock() not working

I'm trying to use systemLock() to lock the device when getSpeed() returns a value greater than 20 m/s.
public void locationUpdated(LocationProvider provider, Location location)
{
    if (location.isValid())
    {
        float speed = location.getSpeed();
        // Information to be displayed on the device
        StringBuffer sb = new StringBuffer();
        sb.append("\n");
        sb.append("Speed : ");
        sb.append(speed);
        sb.append(" m/s");
        if (speed < 20) {
            appMan = ApplicationManager.getApplicationManager();
            appMan.lockSystem(true);
        } else {
        }
        MyApp.this.updateLocationScreen(sb.toString());
    }
}
I have a RichTextField, and I can use .setText() successfully in the if/else statement to change the RichTextField's text, so I must be using lockSystem() wrong.
Edit
if (speed > 20 || Double.isNaN(speed)) {
    requestForeground();
    appMan = ApplicationManager.getApplicationManager();
    appMan.lockSystem(true);
} else {
}
The first thing that catches the eye is:
to lock the device when the getSpeed() returns a value greater than 20 m/s.
and
if (speed < 20) {
    appMan = ApplicationManager.getApplicationManager();
    appMan.lockSystem(true);
}
From the docs on Location
public float getSpeed()
Returns:
the current ground speed in m/s for the terminal, or Float.NaN if the speed is not known
In Java, any numeric comparison against Float.NaN returns false (NaN is not less than, greater than, or equal to anything, including itself), so your lock-screen code block won't execute if your device reports NaN as the speed. You might want to add Double.isNaN(speed) to your condition.
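A quick standalone snippet illustrating that behaviour (not from the original answer):
public class NanDemo {
    public static void main(String[] args) {
        float speed = Float.NaN;
        System.out.println(speed < 20);          // false
        System.out.println(speed > 20);          // false
        System.out.println(speed == Float.NaN);  // false: NaN is not even equal to itself
        System.out.println(Float.isNaN(speed));  // true - use this to detect it
    }
}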