Unequally spaced vehicles in a flow: SUMO

I need an equally spaced vehicle flow. As per the documentation, vehicles should be equally spaced unless the flow is randomized. I did not randomize the flow, but the vehicles do not end up with the same headway.
Here is my .rou.xml file entry; I set sigma = 0 as well.
<flow id = "f1" color="1,1,1" begin = "0" type="Car" vehsPerHour="1500" number="100" route="route0" departSpeed="13.9"> </flow>
I am seeing that the majority of the vehicles have a headway of around 27 m, while some others have around 40 m. There is a pattern: the first 2 of every 5 vehicles travel together (with a 27 m headway), and the other 3 travel together (also with a 27 m headway), but with a 40 m gap between the 2nd and the 3rd (e.g., where V represents a vehicle: VVV*****VV*****VVV*****VV****VVV*****V**V)
I tried this as well.
<flow id = "f1" color="1,1,1" begin = "0" type="Car" period="2.4" number="100" route="route0" departSpeed="13.9"> </flow>
But the result is the same as before.
Is there a workaround for this?
Thanks!

It is a discretization error. Assuming you run with the default step length of one second, vehicles are emitted at whole seconds only. To avoid this, use only multiples of the step length as the period (so either using a period of 2 or 3, or reducing the step length to 0.2, should help in your example). There is also a ticket concerning this topic: https://github.com/eclipse/sumo/issues/4277.
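For example, keeping the default 1 s step length, a whole-second period should give even spacing:
<flow id="f1" color="1,1,1" begin="0" type="Car" period="2" number="100" route="route0" departSpeed="13.9"/>
Alternatively, keep period="2.4" and run the simulation with a step length that divides it evenly (the config file name here is hypothetical):
sumo -c scenario.sumocfg --step-length 0.2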


Pine Script first trade cum. profit unreliable? BTCUSD 1 dollar initial investment

I made a Pine Script strategy for TradingView that uses an initial capital of 1 USD and an order size of 0.5 USD for trading BTCUSD. But for some reason, the Strategy Tester's list of trades shows that the first trade has 88k% cumulative profit.
This makes absolutely no sense to me, as the specific trade's entry is 10k and its exit is 11k, as shown in the image.
Also, the strategy never shorts, yet on some datasets/time frames it ends up with a portfolio of negative thousands of dollars. How can it lose more than 100% of the portfolio without ever shorting? It seems to me these numbers are not trustworthy.
For the strategy declaration, I use
strategy("BTC9%lines", overlay=false, shorttitle = "FIBBTC redist", default_qty_type = strategy.cash, default_qty_value=0.5, commission_value = 0.01, initial_capital = 1, currency=currency.USD, calc_on_order_fills=false)
I found the solution to the problem. I don't know how to delete my post, so perhaps people can benefit from me answering my own question.
The problem was that I had a qty argument in my strategy.entry calls, which overrode my strategy's default values.
Why it was able to enter a trade of half of 1 BTC at 10k USD with only 1 USD of capital is still confusing to me.
Anyway, I solved it by dividing the qty I set in strategy.entry by close: qty is specified in units of the traded symbol (here BTC), not in cash, so dividing a cash amount by close converts it into a BTC quantity.
var qtyvaluelower = initialCapital * 0.55
var qtyvaluehigher = initialCapital * 0.45
if linecross and different and lowerthanlast
    strategy.close_all(comment="lc")
    strategy.entry("lower", strategy.long, qty=qtyvaluelower, comment="buy more")
if linecross and different and higherthanlast
    strategy.close_all(comment="hc")
    strategy.entry("higher", strategy.long, qty=qtyvaluehigher, comment="buy less")
changed to
var qtyvaluelower = initialCapital * 0.55 / close
var qtyvaluehigher = initialCapital * 0.45 / close
if linecross and different and lowerthanlast
    strategy.close_all(comment="lc")
    strategy.entry("lower", strategy.long, qty=qtyvaluelower, comment="buy more")
if linecross and different and higherthanlast
    strategy.close_all(comment="hc")
    strategy.entry("higher", strategy.long, qty=qtyvaluehigher, comment="buy less")

Sentinel 1 data gaps in swath overlap (not sequential scenes) in Google Earth Engine

I am working on a project using the Sentinel 1 GRD product in Google Earth Engine, and I have found a couple of examples of missing data, apparently in swath overlaps in the descending orbit. This is not the issue discussed here and explained on the GEE developers forum; it is a much larger gap and does not appear to be the product of the terrain correction, as explained for that other issue.
This gap seems to persist regardless of changes to the year, the date range, or the polarization. The gap is resolved by changing the orbit filter param from 'DESCENDING' to 'ASCENDING' (presumably because of the different swaths) or by increasing the date range. I get that increasing the date range increases revisits and thus coverage, but is this then just a byproduct of the orbital geometry, i.e. does it take more than the standard temporal repeat to image that area? I am just trying to understand where this data gap is coming from.
Code example:
var geometry = ee.Geometry.Polygon(
    [[[-123.79472413785096, 46.20720039434629],
      [-123.79472413785096, 42.40398120362418],
      [-117.19194093472596, 42.40398120362418],
      [-117.19194093472596, 46.20720039434629]]], null, false);
var filtered = ee.ImageCollection('COPERNICUS/S1_GRD')
    .filterDate('2019-01-01', '2019-04-30')
    .filterBounds(geometry)
    .filter(ee.Filter.eq('orbitProperties_pass', 'DESCENDING'))
    .filter(ee.Filter.listContains('transmitterReceiverPolarisation', 'VH'))
    .filter(ee.Filter.listContains('transmitterReceiverPolarisation', 'VV'))
    .filter(ee.Filter.eq('instrumentMode', 'IW'))
    .select(['VV', 'VH']);
print(filtered);
var filtered_mean = filtered.mean();
print(filtered_mean);
Map.addLayer(filtered_mean.select('VH'), {min: -25, max: 1}, 'filtered');
You can view an example here: https://code.earthengine.google.com/26556660c352fb25b98ac80667298959
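If it helps with the diagnosis, you can also map how many valid observations each pixel has over the date range; a short sketch reusing the filtered collection from above (the max stretch value is arbitrary):
var coverage = filtered.select('VH').count();
Map.addLayer(coverage, {min: 0, max: 15}, 'scene count');
Pixels inside the gap should show a count of 0 for the descending collection, which would confirm that no descending IW scene covers them in that window, rather than values being masked out by a correction.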

Guess whether or not a user will make a conversion

My friends,
Over the past couple of years I have read a lot about AI with JS and libraries like TensorFlow. I have great interest in the subject but have never used it on a serious project. However, after struggling a lot with linear regression to solve an optimization problem I have, I think I will finally get much better results, with greater performance, using AI. I have worked for 12 years in web development, with lots of server side, but have never worked with any AI library, so please have a little patience with me if I say something stupid!
My problem is this: for every user that visits our platform (website), we save the hour, the day of the week, whether the device requesting the page was a smartphone or a computer, and so on, for the FIRST access the user made. If the user keeps visiting other pages, we don't care; we only save the data of the FIRST visit. And if the user at any time does something that we consider a conversion, we assign that conversion to the record of that user's first access. So we have almost 3 million lines like this:
SESSION  HOUR  DAY_WEEK  DEVICE      CONVERSION
9847     7     MONDAY    SMARTPHONE  NO
2233     13    TUESDAY   COMPUTER    YES
5543     19    SUNDAY    COMPUTER    YES
3721     8     FRIDAY    SMARTPHONE  NO
1849     12    SUNDAY    COMPUTER    NO
6382     0     MONDAY    SMARTPHONE  YES
What I would like to do is this: the next time a user visits our platform, we want to know the probability of that user making a conversion. If a user accesses our website now, then depending on their device, day of week, hour, and so on, we want to know the probability of that user making a future conversion. With that, we can show very specific messages to the user while they are using our platform, and a different price model according to that probability.
CURRENTLY we are using a linear regression, and it predicts whether the user will make a conversion with an accuracy of around 30%. That's pretty low, but so far it's the best we have, and this linear regression generates an almost 18% increase in conversions when we use it to show specific messages/prices to that specific user, compared to when we don't use it. SO, with 30% accuracy our linear regression already provides 18% better conversions (and with that, higher revenues and so on).
In case you are curious, our linear regression model works like this: we generate a linear equation for every first user access on our system, with coefficients that our system tries to find in order to minimize the squared error (expected value - predicted value)^2. Using the data above, our model would generate the equations below (SUNDAY = 0, MONDAY = 1, ..., COMPUTER = 0, SMARTPHONE = 1, ..., CONVERSION YES = 1 and NO = 0):
A*7 + B*1 + C*1 = 0
A*13 + B*2 + C*0 = 1
A*19 + B*0 + C*0 = 1
A*8 + B*5 + C*1 = 0
A*12 + B*0 + C*0 = 0
A*0 + B*1 + C*1 = 1
So, our system finds the best A, B, and C that minimize the error. How can we do that with AI? If possible, it would be nice if we could use TensorFlow or anything with JS! I know there are several AI models, and I have no idea which one would best fit what we need!
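For what it's worth, a yes/no outcome like this is usually framed as logistic regression (a classifier that outputs a probability) rather than plain linear regression, and TensorFlow.js handles it in a few lines. Below is a minimal sketch under stated assumptions: the encode helper, the one-hot day-of-week encoding, and the two hard-coded sample rows (taken from the table above) are illustrative, not a drop-in implementation.
// Minimal TensorFlow.js logistic regression sketch (npm install @tensorflow/tfjs)
const tf = require('@tensorflow/tfjs');

// Encode one first visit as 9 numbers: hour scaled to [0,1],
// one-hot day of week (SUNDAY=0 ... SATURDAY=6), device flag (smartphone=1)
function encode(hour, dayIndex, isSmartphone) {
  const day = new Array(7).fill(0);
  day[dayIndex] = 1;
  return [hour / 23, ...day, isSmartphone ? 1 : 0];
}

// Two illustrative rows from the table above (MONDAY/SMARTPHONE/NO, TUESDAY/COMPUTER/YES)
const xs = tf.tensor2d([encode(7, 1, 1), encode(13, 2, 0)]);
const ys = tf.tensor2d([[0], [1]]);

// A single sigmoid unit is logistic regression; it outputs P(conversion)
const model = tf.sequential();
model.add(tf.layers.dense({units: 1, activation: 'sigmoid', inputShape: [9]}));
model.compile({optimizer: 'adam', loss: 'binaryCrossentropy', metrics: ['accuracy']});

model.fit(xs, ys, {epochs: 50}).then(function () {
  // Probability that a SUNDAY, 19h, COMPUTER visit converts
  model.predict(tf.tensor2d([encode(19, 0, 0)])).print();
});
With millions of rows you would batch the data and perhaps add a hidden layer, but the one-unit model is the direct probabilistic analogue of the A, B, C equations above.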

Matching an element in a column to others in the same column

I have columns taken from Excel as a dataframe; the columns are as follows:
HolidayTourProvider | Packages | Meals | Accommodation | LocalTravelVehicle | Cancellationfee
HolidayTourProvider holds a handful of company names. The features provided in each package (Meals, Accommodation, etc.) are mostly the same, even though one company may call its package "Saver" while another calls it "Budget". Most of the feature columns hold Yes/No values, except LocalTravelVehicle, which holds car names like Ford Taurus or Jeep Cherokee, and Cancellationfee, which holds integers.
I need to write a function like
match(HolidayTP, Package)
where the user can give input like
match(AdventureLife, Luxury)
and then I need to return all the packages by other holiday tour providers that have features similar to Luxury, no matter what name they give the package ('Semi Lux', 'Comfort', etc.). I want to keep a counter of matching features for every package and display all the packages whose counter reaches 3 or 4.
This is my first Python code, and I am stuck here. fb is the dataframe I exported the data to.
def mapHol(HTP, package):
    # Feature columns to compare across packages
    features = ['Meals', 'Accommodation', 'LocalTravelVehicle', 'Cancellationfee']
    # The reference package the user asked about, e.g. mapHol('AdventureLife', 'Luxury')
    ref = fb[(fb['HolidayTourProvider'] == HTP) & (fb['Packages'] == package)].iloc[0]
    # Packages from all other providers
    others = fb[fb['HolidayTourProvider'] != HTP].copy()
    # Count how many feature columns match the reference, row by row
    others['count'] = (others[features] == ref[features]).sum(axis=1)
    # Keep packages whose match counter reaches the threshold
    return others[others['count'] >= 3]
I don't know how to proceed. Please help me; this is my first major project, and I started it on my own.

How to find results between two values of different keys with Redis?

I'm creating a game matchmaking system using Redis, based on MMR, which is a number that pretty much sums up the skill of a player. The system can therefore match a player with others who are at roughly the same skill level.
For example, if a player with an MMR of 1000 joins the queue, the system will try to find other people with an MMR in the range 950 to 1050 to match with this player. But if after one minute it cannot find any player with the given stats, it will widen the range to 900 to 1100 (by a constant threshold).
What I want to do is really easy with relational database design but I can't figure out how to do it with Redis.
The queue table implementation would be like this:
+----+---------+------+-------+
| ID | USER_ID | MMR  | TRIES |
+----+---------+------+-------+
|  1 |      50 | 1000 |     1 |
|  2 |      70 | 1500 |     1 |
|  3 |     350 | 1200 |     1 |
+----+---------+------+-------+
So when a new player queues up, the system checks their MMR against the other players in the queue. If it finds one within the 5% threshold, it matches the two players; if not, it adds the new player to the table and waits, either for new players to queue up and be compared, or for 1 minute to pass, after which a cronjob increments the tries and retries matching players.
The only way I can imagine is to use two separate keys for the low and high of each player in the queue, like this:
MatchMakingQueue:User:1:Low => 900
MatchMakingQueue:User:1:High => 1100
but the keys will be different, and I can't, for example, get all users between a low of 900 and a high of 1100!
I hope I've been clear enough; any help would be much appreciated.
As @Guy Korland suggested, a Sorted Set can be used to track and match players based on their MMR, and I do not agree with the OP's "won't scale" comment.
Basically, when a new player joins, the ID is added to a zset with the MMR as its score.
ZADD players:mmr 1000 id:50
The matchmaking is done for each user, e.g. id:50, with the following query:
ZREVRANGEBYSCORE players:mmr 1050 950 LIMIT 0 2
A match is found if two IDs are returned and at least one of them is different from that of the new player. To make the match, both IDs (the new player's and the matched player's) need to be removed from the set. I'd use a Lua script to implement this piece of logic (matching and removing) for atomicity and to reduce communication, but it can be done in the client as well.
There are different ways to keep track of the retries, but perhaps the simplest one is to use another Sorted Set, where the score is that metric.
The following pseudo Redis Lua code is a minimal example of the approach:
local kmmrs, kretries = KEYS[1], KEYS[2]
local id = ARGV[1]
local mmr = tonumber(redis.call('ZSCORE', kmmrs, id))
-- Default to 1 so the first attempt uses the base +/-5% window
local retries = tonumber(redis.call('ZSCORE', kretries, id)) or 1
local min, max = mmr*(1-0.05*retries), mmr*(1+0.05*retries)
-- Fetch up to two candidates in the window (one of them may be the player itself)
local candidates = redis.call('ZREVRANGEBYSCORE', kmmrs, max, min, 'LIMIT', 0, 2)
if #candidates < 2 then
    -- No opponent found yet: widen the window for the next attempt
    redis.call('ZINCRBY', kretries, 1, id)
    return nil
end
-- Pick whichever candidate is not the player itself
local reply
if candidates[1] ~= id then
    reply = candidates[1]
else
    reply = candidates[2]
end
-- Remove both matched players from the queue
redis.call('ZREM', kmmrs, id, reply)
redis.call('ZREM', kretries, id, reply)
return reply
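For completeness, a sketch of wiring this up from redis-cli (the script file name matchmake.lua and the retries key name players:retries are assumptions; the MMR key matches the one used above):
redis-cli ZADD players:mmr 1000 id:50
redis-cli --eval matchmake.lua players:mmr players:retries , id:50
In the --eval form, the key names before the comma become KEYS and everything after it becomes ARGV.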
Let me get the problem right: you want to find all the users within a given range of MMR values. What if you make the other users say "I fall in this range" themselves?
Read about Redis Pub/Sub.
1. When a user joins, publish their MMR to the rest of the players.
2. Write code on the user side to check whether their own MMR falls in the published range.
3. If it does, the user publishes back to a channel saying they fall in that range; otherwise, the user silently discards the message.
4. Repeat these steps if you get no response back within 1 minute.
You can make one channel (let's say MatchMMR) to which all users publish their MMR for a match request and to which all users subscribe, and a user-specific channel for replies in case somebody's MMR falls in the calculated range.
Form your published messages so that they carry all the information, like "retry count", "match range percentage", and "MMR value", so that the code on the user side can calculate whether it is the right fit for the MMR.
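A minimal sketch of the message flow from redis-cli (the JSON payload shape is an assumption):
SUBSCRIBE MatchMMR
PUBLISH MatchMMR '{"userId": 50, "mmr": 1000, "retries": 1, "rangePct": 5}'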
Read more about Redis Pub/Sub at: https://redis.io/topics/pubsub