How to iteratively call the duarouter algorithm in a TraCI simulation? - sumo

I defined the flow by giving the from and to edges. When testing a traffic signal algorithm, I found the network would easily come to a gridlock, partly because vehicles cannot find the dynamic user equilibrium (DUE) route. Thus my goal is to make those defined flows (vehicles) find the dynamic user equilibrium route in every simulation time step. I know duarouter should be the solution. But how can I call duarouter in every simulation time step, and where should I incorporate it in my code?
I followed the example code provided on the SUMO website. Basically I defined a run() function which contains my signal control algorithm, and then I call run() in the main function.
Where should I insert the duarouter?
How should I call it in my simulation loop to make sure that in every time step, the vehicles in the network can find their user-equilibrium routes?
dua-iterate.py -n <PATH_TO_SUMO_NET> -t <PATH_TO_TRIPS>
def run():
    """execute the TraCI control loop"""
    step = 0
    NSphase = 0
    EWphase = 2
    while traci.simulation.getMinExpectedNumber() > 0:
        traci.simulationStep()  # advance the simulation by one step
        step += 1
        # {signal control algorithm}
    traci.close()
    sys.stdout.flush()

if __name__ == "__main__":
    traci.start(["/home/hao/sumo_binaries/bin/sumo-gui", "-c", "/home/hao/Documents/traci_test/randomnet4/random.sumo.cfg",
                 "--tripinfo-output", "tripinfo.xml"])
    run()

TraCI is a control interface to SUMO. So the basic idea is that you can start a SUMO server and connect TraCI to the server. TraCI will generate the routes based on your network and trips files, statically or dynamically.
In the SUE case, your code
traci.start(["/home/hao/sumo_binaries/bin/sumo-gui", "-c", "/home/hao/Documents/traci_test/randomnet4/random.sumo.cfg", "--tripinfo-output", "tripinfo.xml"])
actually
starts the sumo-gui server,
and connects TraCI (SUE) to the server.
To use DUE with TraCI, you should use duaIterate.py in the tools/assign folder. But
traci.start(["python", <PATH TO duaIterate.py>, "-n", <NETWORK FILE>, "-t", <TRIPS FILE>])
only tries to connect TraCI (DUE) to a sumo/sumo-gui server. So you should first start the server manually:
sumo-gui -n suedstadt.net.xml --remote-port <PORT NUMBER>
The --remote-port option here starts sumo-gui in server mode. Now you can connect TraCI to the server with the port option.
traci.start(["python", <PATH TO duaIterate.py>, "-n", <NETWORK FILE>, "-t", <TRIPS FILE>], port=<PORT NUMBER>)
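As a side note (my suggestion, not part of the original answer): if a full DUE iteration per step is too heavy, TraCI also offers traci.vehicle.rerouteTraveltime(vehID), which recomputes a single vehicle's route from the current edge travel times. A minimal sketch of where such a call could sit in the control loop; the function takes the traci module as a parameter, and the rerouting period is an arbitrary choice:

```python
def reroute_step(traci_mod, period, step):
    """Advance one simulation step and periodically reroute every vehicle
    using current edge travel times (a cheap stand-in for per-step DUE)."""
    traci_mod.simulationStep()
    if step % period == 0:
        for veh_id in traci_mod.vehicle.getIDList():
            # recompute this vehicle's route from current travel times
            traci_mod.vehicle.rerouteTraveltime(veh_id)
```

Inside run() you would then call reroute_step(traci, period=30, step=step) in place of the bare traci.simulationStep(); period=30 here is only an example value.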

Related

Python multiprocessing between ubuntu and centOS

I am trying to run some parallel jobs through Python multiprocessing. Here is an example code:
import multiprocessing as mp
import os

def f(name, total):
    print('process {:d} starting doing business in {:d}'.format(name, total))
    # there will be some unix command to run external program

if __name__ == '__main__':
    total_task_num = 100
    mp.Queue()
    all_processes = []
    for i in range(total_task_num):
        p = mp.Process(target=f, args=(i, total_task_num))
        all_processes.append(p)
        p.start()
    for p in all_processes:
        p.join()
I also set export OMP_NUM_THREADS=1 to make sure that there is only one thread per process.
Now I have 20 cores in my desktop. For 100 parallel jobs, I want to let them run in 5 cycles so that each core runs one job (20*5=100).
I tried the same code on CentOS and Ubuntu. It seems that CentOS automatically splits the jobs: there are only 20 jobs running in parallel at any time. However, Ubuntu starts all 100 jobs simultaneously, so each core is occupied by 5 jobs. This significantly increases the total run time due to the high workload.
I wonder if there is an elegant solution to teach Ubuntu to run only 1 job per core.
To make a process run on a specific CPU in Linux, you can use the taskset command. Based on "taskset -p [mask] [pid]" you can build a loop that assigns each process to a specific core.
Python also supports affinity control via os.sched_setaffinity, which confines a process to specific cores: in "os.sched_setaffinity(pid, mask)", pid is the process id of the process, and mask is the group of CPUs to which the process shall be confined.
In Python there are also other tools, like https://pypi.org/project/affinity/, that can be explored.

How to start and stop multiple weblogic managed servers at one go through WLST

I am writing code to start, stop, undeploy and deploy my application on WebLogic.
My components need to be deployed on a few managed servers.
When I do new deployments manually I can start and stop the servers in parallel, by ticking multiple boxes and selecting start and stop from the drop-down.
But when trying from WLST, I could only do that one server at a time.
ex:
start(name='ServerX',type='Server',block='true')
start(name='ServerY',type='Server',block='true')
shutdown(name='ServerX',entityType='Server',ignoreSessions='true',timeOut=600,force='true',block='true')
shutdown(name='ServerY',entityType='Server',ignoreSessions='true',timeOut=600,force='true',block='true')
Is there a way I can start/stop multiple servers in one command?
Instead of directly starting and stopping servers, you create tasks, then wait for them to complete.
e.g.
tasks = []
for server in cmo.getServerLifeCycleRuntimes():
    # to start up all servers:
    if (server.getName() != 'AdminServer' and server.getState() != 'RUNNING'):
        tasks.append(server.start())
    # or to shut them down:
    # if (server.getName() != 'AdminServer' and server.getState() != 'SHUTDOWN'):
    #     tasks.append(server.shutdown())

# wait for tasks to complete
while len(tasks) > 0:
    for task in tasks[:]:  # iterate over a copy, since we remove items
        if task.getStatus() != 'TASK IN PROGRESS':
            tasks.remove(task)
    java.lang.Thread.sleep(5000)
I know this is an old post; today I was reading the book "Advanced WebLogic Server Automation" by Martin Heinzl, and on page 282 I found this:
def startCluster(clustername):
    try:
        start(clustername, 'Cluster')
    except Exception, e:
        print 'Error while starting cluster', e
        dumpStack()
I tried it and it started the managed servers in parallel.
Just keep in mind that the AdminServer must be started first and your script must connect to the AdminServer before trying it.
Perhaps this is not useful for you since it requires the servers to be in a cluster, but I wanted to share it :)

Bluetooth Serial between Raspberry Pi 3 / Zero W and Arduino / HM-10

I am trying to establish a bluetooth serial communication link between a Raspberry Pi Zero W, running Raspbian Jessie [03-07-2017], and an Arduino (UNO).
I am currently able to write data to the Arduino using bluetoothctl.
The application requires that we are able to write data to a particular BLE Slave. There are multiple [HM-10] Slaves to switch between; the Slave needs to be chosen during program execution.
There is no BAUD rate preference. Currently, we are using 9600 universally.
Functions have been created that automatically connect and then write data to an "attribute"; this shows up as data on the Serial Monitor of the Arduino.
Python Code - using BlueZ 5.44 (manually installed):
import time
import subprocess
from subprocess import Popen, PIPE

# Replaces the ':' with '_' to allow the MacAddress to be in the form
# of a "Path" when "selecting an attribute"
def changeMacAddr(word):
    # (equivalently: word.replace(':', '_'))
    return ''.join(c if c != ':' else '_' for c in word)

# Connects to a given MacAddress and then selects the attribute to write to
def connBT(BTsubProcess, stringMacAddr):
    BTsubProcess.stdin.write(bytes("".join("connect " + stringMacAddr + "\n"), "utf-8"))
    BTsubProcess.stdin.flush()
    time.sleep(2)
    stringFormat = changeMacAddr(stringMacAddr)
    BTsubProcess.stdin.write(bytes("".join("select-attribute /org/bluez/hci0/dev_"
                                           + stringFormat +
                                           "/service0010/char0011" + "\n"), "utf-8"))
    BTsubProcess.stdin.flush()

# Can only be run once connBT has run - writes the data in a list [must have numbers 0 - 255]
def writeBT(BTsubProcess, listOfData):
    stringList = [str('{0} ').format(elem) for elem in listOfData]
    BTsubProcess.stdin.write(bytes("".join("write " + "".join(stringList) + "\n"), "utf-8"))
    BTsubProcess.stdin.flush()

# Disconnects
def closeBT(BTsubProcess):
    BTsubProcess.communicate(bytes("disconnect\n", "utf-8"))

# To use the functions a subprocess "instance" of bluetoothctl must be made
blt = subprocess.Popen(["bluetoothctl"], stdin=subprocess.PIPE, shell=True)
# blt will then be passed into the functions as BTsubProcess
# Note: the MacAddresses of the Bluetooth modules were pre-connected and trusted manually via bluetoothctl
This method works fine for small sets of data, but my requirements require me to stream data to the Arduino very quickly.
The current set up is:
Sensor data (accelerometer, EEG) via USB serial is received by the Pi
The Pi processes the data
Commands are then sent to the Arduino via the in built bluetooth of the Pi Zero W
However, while using this method the bluetooth data transmission would delay (temporarily freeze) when the sensor data changed.
The data transmission was flawless when using two pre-paired HM-10 modules, with the Pi's GPIO serial port configured using PySerial.
The following methods have also been tried:
Using WiringPi to set-up a bluetooth serial port on the /dev/ttyAMA0
using Python sockets and rfcomm
When attempting either of these methods, the Python code runs without errors; however, once the Serial Port is opened the data is seemingly not written and does not show up on the Arduino's Serial Monitor.
This then cripples the previous functions: even when using bluetoothctl manually, the module cannot be unpaired/disconnected, and writing to the appropriate attribute does not work either.
A restart is required to regain normal function.
Is this approach correct?
Is there a better way to send data over BLE?
UPDATE: 05/07/2017
I am no longer working on this project. But troubleshooting has led me to believe that a "race condition" in the code may have led to the program not functioning as intended.
This was verified during the testing phase where a more barebones code was created that functioned very well.

Run Python with IDLE on a Windows machine, put part of the code in the background so that IDLE stays active to receive commands

from multiprocessing import Process

class PS():
    def __init__(self):
        self.PSU_thread = Process(target=self.read(199),)
        self.PSU_thread.start()
    def read():
        while running:
            "read the power supply"
    def set(current):
        "set the current"

if __name__ == '__main__':
    p = PS()
Basically the idea of the code is to read the data of the power supply and at the same time keep IDLE active so it can accept commands to control it via set(current). The problem we are having is that once the object p is initialized, the while loop occupies the IDLE terminal, so the terminal cannot accept any commands any more.
We have considered creating a service, but does that mean we have to turn the whole code into a service?
Please suggest any possible solutions; we want it to run but still be able to receive commands from IDLE.
Idle, as its name suggests, is a program development environment. It is not meant for production running, and you should not use it for that, especially not for what you describe. Once you have a program written, just run it with Python.
It sounds like what you need is a gui program, such as one based on tkinter. Here is a simulation of what I understand you to be asking for.
import random
import tkinter as tk

root = tk.Tk()

psu_volts = tk.IntVar(root)
tk.Label(root, text='Mock PSU').grid(row=0, column=0)
psu = tk.Scale(root, orient=tk.HORIZONTAL, showvalue=0, variable=psu_volts)
psu.grid(row=0, column=1)

def drift():
    psu_volts.set(psu_volts.get() + random.randint(0, 8) - 4)
    root.after(200, drift)
drift()

volts_read = tk.IntVar(root)
tk.Label(root, text='PSU Volts').grid(row=1, column=0)
tk.Label(root, textvariable=volts_read).grid(row=1, column=1)

def read_psu():
    volts_read.set(psu_volts.get())
    root.after(2000, read_psu)
read_psu()

lb = tk.Label(root, text="Enter 'from=n' or 'to=n', where n is an integer")
lb.grid(row=2, column=0, columnspan=2)
envar = tk.StringVar()
entry = tk.Entry(textvariable=envar)
entry.grid(row=3, column=0)

def psu_set():
    try:
        cmd, val = envar.get().split('=')
        psu[cmd.strip()] = val
        psu_volts.set((psu['to'] - psu['from']) // 2)
    except Exception:
        pass
    envar.set('')
tk.Button(root, text='Change PSU', command=psu_set).grid(row=3, column=1)

root.mainloop()
Think of psu as a 'black box' and psu_volts.get and .set as the means of interacting with the box. You would have to substitute in your own read and write code. Copy and save to a file. Then either run it with Python or open it in IDLE to change and run it.

Extend existing Twisted Service with another Socket/TCP/RPC Service to get Service informations

I'm implementing a Twisted-based Heartbeat Client/Server combo, based on this example. It is my first Twisted project.
Basically it consists of a UDP Listener (Receiver), which calls a listener method (DetectorService.update) on receiving packets. The DetectorService always holds a list of currently active/inactive clients (I extended the example a lot, but the core is still the same), making it possible to react to clients which seem disconnected for a specified timeout.
This is the source taken from the site:
UDP_PORT = 43278; CHECK_PERIOD = 20; CHECK_TIMEOUT = 15

import time
from twisted.application import internet, service
from twisted.internet import protocol
from twisted.python import log

class Receiver(protocol.DatagramProtocol):
    """Receive UDP packets and log them in the clients dictionary"""
    def datagramReceived(self, data, (ip, port)):
        if data == 'PyHB':
            self.callback(ip)

class DetectorService(internet.TimerService):
    """Detect clients not sending heartbeats for too long"""
    def __init__(self):
        internet.TimerService.__init__(self, CHECK_PERIOD, self.detect)
        self.beats = {}
    def update(self, ip):
        self.beats[ip] = time.time()
    def detect(self):
        """Log a list of clients with heartbeat older than CHECK_TIMEOUT"""
        limit = time.time() - CHECK_TIMEOUT
        silent = [ip for (ip, ipTime) in self.beats.items() if ipTime < limit]
        log.msg('Silent clients: %s' % silent)

application = service.Application('Heartbeat')
# define and link the silent clients' detector service
detectorSvc = DetectorService()
detectorSvc.setServiceParent(application)
# create an instance of the Receiver protocol, and give it the callback
receiver = Receiver()
receiver.callback = detectorSvc.update
# define and link the UDP server service, passing the receiver in
udpServer = internet.UDPServer(UDP_PORT, receiver)
udpServer.setServiceParent(application)
# each service is started automatically by Twisted at launch time
log.msg('Asynchronous heartbeat server listening on port %d\n'
        'press Ctrl-C to stop\n' % UDP_PORT)
This heartbeat server runs as a daemon in background.
Now my Problem:
I need to be able to run a script "externally" to print the number of offline/online clients on the console, using the data the DetectorService gathers during its lifetime (self.beats). Like this:
$ pyhb showactiveclients
3 clients online
$ pyhb showofflineclients
1 client offline
So I need to add some kind of additional server (socket, TCP, RPC - it doesn't matter; the main point is that I'm able to build a client script with the above behavior) to my DetectorService, which allows connecting to it from outside. It should just give a response to a request.
This server needs to have access to the internal variables of the running detectorservice instance, so my guess is that I have to extend the DetectorService with some kind of additionalservice.
After some hours of trying to combine the detectorservice with several other services, I still don't have an idea what's the best way to realize that behavior. So I hope that somebody can give me at least the essential hint how to start to solve this problem.
Thanks in advance!!!
I think you already have the general idea of the solution here, since you already applied it to an interaction between Receiver and DetectorService. The idea is for your objects to have references to other objects which let them do what they need to do.
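Stripped of Twisted, the reference-passing idea is plain object composition; the class and method names below are illustrative only, loosely mirroring DetectorService:

```python
import time

class Detector:
    """Collects heartbeat timestamps, like DetectorService.beats."""
    def __init__(self):
        self.beats = {}
    def update(self, ip):
        self.beats[ip] = time.time()

class Reporter:
    """Answers queries; holds a reference to the detector whose data it reads."""
    def __init__(self, detector, timeout):
        self._detector = detector
        self._timeout = timeout
    def silent_clients(self):
        limit = time.time() - self._timeout
        return [ip for ip, last in self._detector.beats.items() if last < limit]

detector = Detector()
reporter = Reporter(detector, timeout=15)
detector.update('10.0.0.1')       # fresh heartbeat
detector.beats['10.0.0.2'] = 0.0  # artificially ancient heartbeat, for the demo
print(reporter.silent_clients())  # prints ['10.0.0.2']
```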
So, consider a web service that responds to requests with a result based on the beats data:
from twisted.web.resource import Resource

class BeatsResource(Resource):
    # It has no children, let it respond to the / URL for brevity.
    isLeaf = True
    def __init__(self, detector):
        Resource.__init__(self)
        # This is the idea - BeatsResource has a reference to the detector,
        # which has the data needed to compute responses.
        self._detector = detector
    def render_GET(self, request):
        limit = time.time() - CHECK_TIMEOUT
        # Here, use that data.
        beats = self._detector.beats
        silent = [ip for (ip, ipTime) in beats.items() if ipTime < limit]
        request.setHeader('content-type', 'text/plain')
        return "%d silent clients" % (len(silent),)

# Integrate this into the existing application
application = service.Application('Heartbeat')
detectorSvc = DetectorService()
detectorSvc.setServiceParent(application)
.
.
.
from twisted.web.server import Site
from twisted.application.internet import TCPServer

# The other half of the idea - make sure to give the resource that reference
# it needs.
root = BeatsResource(detectorSvc)
TCPServer(8080, Site(root)).setServiceParent(application)