How to implement longest-queue-first rule for traffic lights in SUMO or FLOW?
It seems SUMO only supports three kinds of traffic lights, which is not flexible enough.

SUMO has a built-in actuated traffic-light algorithm which, in conjunction with lane detectors, prolongs the green phase of the traffic light.
Longest queue first seems like a rule-based algorithm which can be easily implemented using TraCI. At the beginning of each phase, you could check the number of waiting vehicles per edge and set the green phase accordingly.
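As a sketch of that rule, here is the phase-selection logic only. The TraCI calls named in the comments (`traci.lane.getLastStepHaltingNumber` and `traci.trafficlight.setPhase`) are the usual ones, but check them against your SUMO version:

```python
def pick_longest_queue_phase(queue_by_phase):
    """Return the green phase serving the most waiting vehicles.

    queue_by_phase maps a green-phase index to the total number of
    halting vehicles on the lanes that phase serves. In a TraCI loop
    you would fill it from traci.lane.getLastStepHaltingNumber(lane)
    for each controlled lane, then apply the result with
    traci.trafficlight.setPhase(tls_id, phase) at each phase boundary.
    """
    return max(queue_by_phase, key=queue_by_phase.get)

# Phase 2 serves the longest queue, so it gets the next green:
queues = {0: 3, 2: 11, 4: 7}
print(pick_longest_queue_phase(queues))  # -> 2
```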

Related

LoRaWAN Coverage Design in Rural Areas

I need to cover a district with a LoRaWAN network for air quality sensors. I know that LoRa/LoRaWAN technology is a perfect fit when line of sight is maintained, but is there any easy way to determine how many gateways are needed in rural areas? I am planning to use a Kerlink Wirnet iStation V1.5 as the gateway and ESP32-based CO2 sensors. Many thanks in advance.
You definitely don't need line of sight for LoRa communication.
The easiest, and most accurate way to estimate the number of required gateways is to do a field test with one single gateway and a test device (e.g.: an Adeunis Field Tester). This way you can check what the longest distance between an end device and the gateway can be. Using that information you can calculate the required density of base stations.
If you register for a free account on Actility's ThingPark Community Portal and you connect your Kerlink gateway to the ThingPark Network Server, you will be able to use Actility's Network Survey Tool, which can visualize the coverage of your gateway on a map.
If you want to make a rough estimate, I would say that in a rural environment, where devices are outdoors and the gateway antenna is on a 20 m pole or on top of a 20 m building, the range of a gateway is around 1-3 km. If the end devices are indoors (in rooms with windows), this range is 0.5-1.5 km.
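To turn a range figure like that into a gateway count, a back-of-the-envelope calculation might look like the following. The circular-cell model and the 30% overlap margin are assumptions you should tune from your own field test:

```python
import math

def gateways_needed(district_km2, range_km, overlap=1.3):
    """Rough count of gateways to cover an area, assuming roughly
    circular coverage cells and a fudge factor (overlap) for cell
    overlap, terrain, and placement constraints."""
    cell_km2 = math.pi * range_km ** 2
    return math.ceil(overlap * district_km2 / cell_km2)

# 50 km^2 district, 1.5 km rural range (devices indoors, worst case):
print(gateways_needed(50, 1.5))  # -> 10
```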
You could also use The Things Stack community edition (formerly known as TTN, The Things Network) in conjunction with ttnmapper.org. Note that there is currently a transition going on from TTN (V2) to The Things Stack V3; see the notice on the webpage. This method uses field tests similar to the system proposed in Norbert Herbert's answer; any simple node is sufficient because the gateway's metadata are evaluated. You can track your field test live on a smartphone. As LoRaWAN coverage strongly depends on the gateway's placement, the test gateway should be placed at least close to the intended position, or better, at the planned position itself.
For a dry run without any hardware, you may also have a look at the freeware program Radio Mobile by Roger Coudé VE2DBE, with more info by Remko Welling PE1MEW here. The program lets you simulate radio connections in a wide variety of settings, including a complete mapping of a region with multiple gateways.
Line of Sight is not always needed. There are many factors that will affect the reach of your modules, including the terrain (hills can get in the way, especially higher ranges), the settings you would use for your LoRa or LoRaWAN network, and where you position your gateway(s) – when using LoRaWAN – or transceivers, when using LoRa.
I live in a mixed environment, half hills and jungle and half dense high-rises, and I get about 10 km coverage without LoS, and more if I get LoS from a height, both with LoRa and LoRaWAN, although reliability is not always guaranteed.
But first you have to decide whether you will go the LoRa or the LoRaWAN route – this has implications for both the hardware and software budget: LoRaWAN requires more, and more expensive, equipment, but it simplifies the software side of the setup. I am very much a LoRa guy myself, but I do recognize the benefits of LoRaWAN for quick development.
But it would be cheap to do a first test with a couple of LoRa devices to check how far you can reach in your region.

What is squeeze testing?

In the talk "Beyond DevOps: How Netflix Bridges the Gap," around 29:10 Josh Evans mentions squeeze testing as something that can help them understand system drift. What is squeeze testing and how is it implemented?
It seems to be a term used by the folks at Netflix.
It means running tests/benchmarks to observe changes in performance and to find the breaking point of the application, then checking whether the last change was inefficient, or determining the recommended auto-scaling parameters before deploying it.
There is a little more information here and here:
One practice which isn’t yet widely adopted but is used consistently by our edge teams (who push most frequently) is automated squeeze testing. Once the canary has passed the functional and ACA analysis phases, the production traffic is differentially steered at an increased rate against the canary, increasing in well-defined steps. As the request rate goes up, key metrics are evaluated to determine effective carrying capacity; automatically determining if that capacity has decreased as part of the push.
As someone who helped with the development of squeeze testing at Netflix: it uses the large volume of stateless requests from actual production traffic to test the system. One approach is to put inordinately more load on one instance of a service until it breaks, monitor the key performance metrics of that instance, and use that information to inform how to configure auto-scaling policies. It eliminates the problem of fake traffic not stressing the system in the right way.
The reasons it might not work for everyone:
you need more traffic than any one instance can handle.
the requests need to be somewhat uniform in demand on the service under test.
clients & protocol need to be resilient to errors should things go wrong.
The way it is set up: a proxy is put in front of the service, configured to send a specific RPS to one instance. I used Bresenham's line algorithm to evenly spread the fluctuation in incoming traffic over time into a precise outgoing RPS. Turn up the dial on the RPS and watch it burn.
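Netflix's actual proxy code isn't public, so purely as an illustration, the Bresenham-style error accumulation described above might look like this: carry the fractional remainder from tick to tick so the long-run forwarded rate matches the target exactly, even when the target RPS is not an integer.

```python
def spread_requests(incoming_counts, target_rps):
    """Forward a fixed rate of requests per tick to the instance under
    test. Bresenham-style: accumulate the fractional error each tick so
    the average forwarded rate converges exactly to target_rps."""
    error = 0.0
    forwarded = []
    for available in incoming_counts:
        error += target_rps
        send = min(available, int(error))  # never send more than arrived
        error -= send
        forwarded.append(send)
    return forwarded

# 2.5 RPS target over four ticks alternates 2, 3, 2, 3 (avg exactly 2.5):
print(spread_requests([5, 5, 5, 5], 2.5))  # -> [2, 3, 2, 3]
```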

Need some explanation about Software-defined networking (SDN)

Can anyone give me an example of the data plane and control plane in the 'traditional' model, i.e. when SDN does not apply?
I understand how SDN works, but I don't really know about the traditional model.
In SDN, the data plane and control plane are separated, so how are the data plane and control plane organized in the 'traditional' model?
In a traditional network device, the control plane contains the L3 route processor and the L2 switch processor CPUs, which “control” the packet or data flow. The control plane handles a variety of traffic, including BPDUs, routing updates, HSRP, CDP, CEF, process-switched packets, ARP, and management traffic such as SSH, SNMP, and RADIUS. All of these are processed by the router or switch's control plane. The Data Plane (or Forwarding Plane) deals with anything that goes “through” the router/switch rather than “to” the router/switch. As you can imagine, there are many vendors, each with their own flavor of how best to control the decision-making logic and how best to handle packet flow and throughput. But the common factor is that both the control and data planes exist on the same device, as opposed to being decoupled from each other as in SDN.
Well, first off, this is what I understand so far.
Let's talk about traditional networking. You have multiple routers linked together. The routing is not static, i.e. there is no fixed path from one computer to another in the world. The path keeps changing depending on various parameters like hop count, congestion, etc. So how is this dynamic behavior achieved? There are routing algorithms and other mechanisms at play which decide which path to choose. All of this decision-making forms the control plane: the "brain" part of the router that sends/receives packets destined for intermediate routers only, not for some terminal computer connected to the Internet.
The data plane is what actually forwards/routes the packets along the chosen path.
So simply put, in a traditional switch/router, the proprietary software local to each router/switch that makes the routing decisions and fills the switch/router forwarding table forms the control plane, and the forwarding-table entries themselves, applied per packet, would be the data plane.
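To make that split concrete, here is a toy sketch (the table contents and interface names are made up): the control plane's job is to compute the forwarding-table entries; the data plane's job is the per-packet longest-prefix-match lookup against them.

```python
import ipaddress

# Control plane: routing protocols/static config compute this table.
forwarding_table = {
    ipaddress.ip_network("10.0.0.0/8"): "eth0",
    ipaddress.ip_network("10.1.0.0/16"): "eth1",  # more specific route
}

# Data plane: per-packet longest-prefix-match lookup against the table.
def forward(dst):
    addr = ipaddress.ip_address(dst)
    matches = [net for net in forwarding_table if addr in net]
    best = max(matches, key=lambda net: net.prefixlen)
    return forwarding_table[best]

print(forward("10.1.2.3"))  # -> eth1 (the /16 beats the /8)
print(forward("10.9.9.9"))  # -> eth0
```

In a traditional router both halves run on the same box; in SDN the table-building half moves to a central controller and only the lookup stays on the device.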
Let's say you and I are in charge of public transportation for a small city.
Before we send bus drivers out, we need to have a plan.
Control Plane = Learning what we will do
Our planning stage, which includes learning which paths the buses will take, is similar to the control plane in the network. We haven't picked up people yet, nor have we dropped them off, but we do know the paths and stops due to our plan. The control plane is primarily about the learning of routes.
In a routed network, this planning and learning can be done through static routes, where we train the router about remote networks, and how to get there. We also can use dynamic routing protocols, like RIP, OSPF and EIGRP to allow the routers to train each other regarding how to reach remote networks. This is all the control plane.
Data Plane = Actually moving the packets based on what we learned.
Now, after the routers know how to route to remote networks, along comes a customer's packet and BAM! This is where the data plane begins. The data plane is the actual movement of the customer's data packets over the transit path. (We learned the path to use in the control plane stage earlier.)

Why is rising edge preferred over falling edge

Flip-flops (registers, ...) are usually triggered by a rising or falling edge. But in code you mostly see an if-clause that uses rising-edge triggering. In fact, I have never seen code using the falling edge.
Why is that? Is it simply convention, because programmers are used to the rising edge, or is there some physical/analog reason why rising-edge designs are faster/simpler/more energy-efficient?
As zennehoy says, it's convention - but one going back to when logic was done in discrete chips with a few gates or flipflops within them. Those packages of flipflops were always rising-edge triggered...as far as I recall, but maybe someone with better recollection of the yellow books will correct me!
So when synthesis came along, no doubt everyone felt comfortable carrying on that way!
Nothing more than a matter of convention.
Using the rising edge is more common, and most component libraries use the rising edge. This means that using those libraries requires you to also use rising edges, or add clock synchronization logic, or keep your paths so short that the delay is less than half a clock cycle. Just using rising edges everywhere is by far the easiest.
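The if-clause the question mentions (e.g. VHDL's `if rising_edge(clk)`) just samples the data input on the 0→1 clock transition and holds it otherwise. A toy software model of that behaviour, to make the convention concrete:

```python
def dff_rising_edge(clock, d):
    """Simulate a rising-edge D flip-flop: q samples d only when the
    clock transitions 0 -> 1, and holds its value otherwise."""
    q, prev_clk, trace = 0, 0, []
    for clk, data in zip(clock, d):
        if prev_clk == 0 and clk == 1:  # rising edge detected
            q = data
        prev_clk = clk
        trace.append(q)
    return trace

clock = [0, 1, 0, 1, 0, 1]
d     = [1, 1, 0, 0, 1, 1]
print(dff_rising_edge(clock, d))  # -> [0, 1, 1, 0, 0, 1]
```

A falling-edge flop would be identical except for testing the 1→0 transition, which is why the hardware difference boils down to one inverter on the clock.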
When you design a (single-edge) DFF in a chip, you must choose at which (rising or falling) clock edge it will operate. This decision is independent from the implementation approach (i.e., master-slave or pulsed-latch), and it does not alter the number of transistors in the DFF itself.
Since positive-edge is the typical default (as in FPGAs), to operate at the negative clock edge the usual procedure is to simply use a positive-edge DFF with an inverted version of the clock signal connected to its clock port. If this is done locally (near the DFF clock port), then two extra transistors are indeed needed (to build a CMOS inverter for the clock).
It is somewhat a matter of convention, but if you compare falling-edge and rising-edge designs, the only difference is an added inverter, and it turns out the rising-edge version needs two transistors fewer.
But there are designs out there that use both; for example, in some data caches you write on the rising edge and read on the falling edge, or vice versa, depending on design choices!
Good question; try it out, or take a course (maybe online) on digital integrated circuits.

Lighting Control with the Arduino

I'd like to start out with the Arduino to make something that will (preferably) dim my room lights and turn on some recessed lighting for my computer when a button or switch is activated.
First of all, is this even possible with the Arduino?
Secondly, how would I switch on and off real lights with it? Some sort of relay, maybe?
Does anyone know of a good tutorial or something where at least parts of this are covered? I'll have no problems with the programming, just don't know where to start with hardware.
An alternative (and safer than playing with triacs – trust me I've been shocked by one once and that's enough!) is to use X-10 home automation devices.
There is a PC (RS232) device (CM12U UK or CM11 US) you can get to control the others. You can also get lamp modules that fit between your lamp and the wall outlet which allows you to dim the lamp by sending signals over the mains and switch modules which switch loads on and off.
The Arduino has a TTL-level RS232 connection (it's basically what the USB connection uses) – pins 0 and 1 on the Diecimila – so you could use that: connect it via a level converter, which you can buy or make, to the X-10 controller. There are instructions on the Arduino website for making an RS232 port.
Alternatively you could use something like the FireCracker for X-10 which uses 310MHz (US) or 433MHz (UK) and have your Arduino send out RF signals which the TM12U converts into proper X-10 mains signals for the dimmers etc.
In the US the X-10 modules are really cheap as well (sadly not the case in the UK).
Most people do it using triacs. A triac is like two diodes in anti-parallel (in parallel, but with their polarities reversed) with a trigger pin. A triac conducts current in either direction only when it's triggered. Once triggered, it acts like a regular diode: it continues to conduct until the current drops below its threshold.
You can see it as a bi-directional switch on an AC line: you can vary the mean current by triggering it at different moments relative to the point where the AC sine wave crosses zero.
Roughly, it works like this: at the AC sine wave's zero crossing, the triac turns off and your lamp doesn't get any power. If you trigger the triac, say, halfway through the sine's swing, your lamp will get half the normal current, so it lights at half its power until the sine wave crosses zero again. At that point you start over.
If you trigger the triac sooner, your lamp gets current for a longer interval and glows brighter. If you trigger the triac later, your lamp glows fainter.
The same applies to any AC load.
It is almost the same principle as PWM for DC: you turn your current source on and off faster than your load can react, and the amount of time it is turned on is proportional to the current your load receives.
How do you do that with your arduino?
In simple terms you must first find the zero-crossing of the mains, then you set up a timer/delay and at its end you trigger the triac.
To detect the zero crossing, one normally uses an optocoupler: you connect the LED side of the coupler to the mains and the transistor side to an interrupt pin of your Arduino.
You could connect your Arduino I/O pins directly to the triacs' triggers, but I would use another optocoupler there too, just to be on the safe side.
When the sine-wave approaches zero, you get a pulse on your interrupt pin.
In this interrupt handler you set up a timer: the longer the timer, the less power your load will get. You also reset the state of the triacs' trigger pins.
In the timer's interrupt handler, you set your I/O pins to trigger the triacs.
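The timing math behind "the longer the timer, the less power": with phase control, the fraction of full power delivered is the normalised integral of sin² from the firing angle to π. A sketch of the relationship (50 Hz mains assumed; adjust for 60 Hz):

```python
import math

def power_fraction(firing_angle_rad):
    """Fraction of full power a phase-controlled triac delivers:
    integral of sin^2 from the firing angle to pi, normalised by the
    full half-cycle integral (pi/2)."""
    a = firing_angle_rad
    return ((math.pi - a) / 2 + math.sin(2 * a) / 4) / (math.pi / 2)

def firing_delay_ms(firing_angle_rad, mains_hz=50):
    """Timer delay after the zero-crossing interrupt, in milliseconds,
    for a given firing angle (each half-cycle spans 0..pi)."""
    half_period_ms = 1000 / (2 * mains_hz)
    return firing_angle_rad / math.pi * half_period_ms

# Firing at 90 degrees (5 ms after the zero cross on 50 Hz mains)
# delivers exactly half power:
print(round(power_fraction(math.pi / 2), 3))  # -> 0.5
print(firing_delay_ms(math.pi / 2))           # -> 5.0
```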
Of course you must understand a little about the hardware side so you don't fry your board or burn your house down.
And it goes without saying that you must be careful not to kill yourself when dealing with mains AC =).
HERE is the project that got me started some time ago.
It uses AVRs so it should be easy to adapt to an arduino.
It is also quite complete, with schematics.
Their software is a bit on the complex side, so you should start with something simpler.
There is just a ton of this kind of stuff at the Make magazine site. I think you can even find some examples of similar hacks.
I use a MOSFET for dimming 12 V LED strips with an Arduino. I chose an IRF3710 for my project, with a heat sink to be safe, and it works fine. I tested it with a 12 V halogen lamp, and it worked too.
I connect a PWM output pin from the Arduino directly to the MOSFET's gate pin and use analogWrite in code to control brightness.
Regarding the second question about controlling lights: you can switch 220 V on/off using relays, as partially seen in my photo. There are many boards for this; I chose this:
As a quick start, you can get yourself one of those dimmer packs (50-80 € for four lamps).
Then build the electronics for the Arduino to send DMX control signals:
Arduino DMX shield
You'll get both the Arduino experience and a good chance of not frying your surroundings with higher voltages.