How to continuously poll mpd for currently played song and write result to a file? - mpd

The only thing I need to extract from mpd is the currently playing song/track. I have to ensure this is always up to date in the output file.

If you install mpc then you can do the following:
mpc idle player # block until the player changes songs
mpc current # outputs "Artist Name - Song Name" onto stdout
Run those in a loop, writing the result of current into a file, and you're done:
#!/bin/sh
while true
do
mpc current > current_song.txt
mpc idle player
done
The full list of what you can idle for is on the MPD command reference:
http://www.musicpd.org/doc/protocol/command_reference.html#status_commands
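One refinement worth considering: if something reads the file at the exact moment it is being rewritten, it can see a half-written line. A sketch of an atomic variant of the loop above, assuming mpc is installed (the file name is arbitrary):

```shell
#!/bin/sh
# Sketch: update the song file atomically so readers never see a partial write.
SONGFILE=current_song.txt

write_song() {
    # Write to a temp file first, then rename; rename is atomic on POSIX
    # filesystems, so readers see either the old or the new content, never half.
    printf '%s\n' "$1" > "$SONGFILE.tmp" && mv "$SONGFILE.tmp" "$SONGFILE"
}

# The polling loop itself needs a running mpd, so it is commented out here:
# while true; do
#     write_song "$(mpc current)"
#     mpc idle player > /dev/null
# done
```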


How to use randomTrips.py in SUMO on win8

I'm using randomTrips.py in SUMO to generate random trips on Windows 8. I have a map.net.xml file and try to create a trips.xml file with randomTrips.py. However, a problem occurs and I don't know how to deal with it. The command is as follows:
C:\Program Files (x86)\Eclipse\sumo\tools>randomTrips.py -n map.net.xml -l 200 -e -o map.trips.xml
I don't get the .trips.xml file I want, and the outcome is as follows. It seems that I have missed some options in my command, but I don't know how to correct it. If anyone knows how to solve the problem, please give me some suggestions. Thanks.
The outcome is:
Usage: randomTrips.py [options]
Options:
  -h, --help            show this help message and exit
  -n NETFILE, --net-file=NETFILE
                        define the net file (mandatory)
  -a ADDITIONAL, --additional-files=ADDITIONAL
                        define additional files to be loaded by the router
  -o TRIPFILE, --output-trip-file=TRIPFILE
                        define the output trip filename
  -r ROUTEFILE, --route-file=ROUTEFILE
                        generates route file with duarouter
  --weights-prefix=WEIGHTSPREFIX
                        loads probabilities for being source, destination and
                        via-edge from the files named <prefix>.src.xml,
                        <prefix>.sink.xml and <prefix>.via.xml
  --weights-output-prefix=WEIGHTS_OUTPREFIX
                        generates weights files for visualisation
  --pedestrians         create a person file with pedestrian trips instead of
                        vehicle trips
  --persontrips         create a person file with person trips instead of
                        vehicle trips
  --persontrip.transfer.car-walk=CARWALKMODE
                        Where are mode changes from car to walking allowed
                        (possible values: 'ptStops', 'allJunctions' and
                        combinations)
  --persontrip.walkfactor=WALKFACTOR
                        Use FLOAT as a factor on pedestrian maximum speed
                        during intermodal routing
  --prefix=TRIPPREFIX   prefix for the trip ids
  -t TRIPATTRS, --trip-attributes=TRIPATTRS
                        additional trip attributes. When generating
                        pedestrians, attributes for <person> and <walk> are
                        supported.
  --fringe-start-attributes=FRINGEATTRS
                        additional trip attributes when starting on a fringe
                        edge
  -b BEGIN, --begin=BEGIN
                        begin time
  -e END, --end=END     end time (default 3600)
  -p PERIOD, --period=PERIOD
                        Generate vehicles with equidistant departure times and
                        period=FLOAT (default 1.0). If option --binomial is
                        used, the expected arrival rate is set to 1/period.
  -s SEED, --seed=SEED  random seed
  -l, --length          weight edge probability by length
  -L, --lanes           weight edge probability by number of lanes
  --speed-exponent=SPEED_EXPONENT
                        weight edge probability by speed^<FLOAT> (default 0)
  --fringe-factor=FRINGE_FACTOR
                        multiply weight of fringe edges by <FLOAT> (default 1)
  --fringe-threshold=FRINGE_THRESHOLD
                        only consider edges with speed above <FLOAT> as fringe
                        edges (default 0)
  --allow-fringe        Allow departing on edges that leave the network and
                        arriving on edges that enter the network (via
                        turnarounds or as 1-edge trips)
  --allow-fringe.min-length=ALLOW_FRINGE_MIN_LENGTH
                        Allow departing on edges that leave the network and
                        arriving on edges that enter the network, if they have
                        at least the given length
  --min-distance=MIN_DISTANCE
                        require start and end edges for each trip to be at
                        least <FLOAT> m apart
  --max-distance=MAX_DISTANCE
                        require start and end edges for each trip to be at
                        most <FLOAT> m apart (default 0 which disables any
                        checks)
  -i INTERMEDIATE, --intermediate=INTERMEDIATE
                        generates the given number of intermediate way points
  --flows=FLOWS         generates INT flows that together output vehicles with
                        the specified period
  --maxtries=MAXTRIES   number of attempts for finding a trip which meets the
                        distance constraints
  --binomial=N          If this is set, the number of departures per second
                        will be drawn from a binomial distribution with n=N
                        and p=PERIOD/N where PERIOD is the argument given to
                        option --period.
  -c VCLASS, --vclass=VCLASS, --edge-permission=VCLASS
                        only from and to edges which permit the given vehicle
                        class
  --vehicle-class=VEHICLE_CLASS
                        The vehicle class assigned to the generated trips
                        (adds a standard vType definition to the output file)
  --validate            Whether to produce trip output that is already checked
                        for connectivity
  -v, --verbose         tell me what you are doing
Probably the file name association with .py files is broken; see Python Command Line Arguments (Windows). Try running the script with python explicitly:
python randomTrips.py -n map.net.xml -l 200 -e -o map.trips.xml
I tried this just last week. Search for randomTrips.py under SUMO's folder to find its location, then open cmd and call python explicitly to execute it. You also need to specify the net.xml file.
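If you want to script this, building the command around an explicit python call sidesteps the file association entirely. The sketch below only assembles and prints the command; the install path is an example and must be adjusted to your machine:

```shell
# Sketch: invoke the interpreter explicitly so the Windows .py file
# association is never consulted. SUMO_TOOLS is an example path.
SUMO_TOOLS='C:/Program Files (x86)/Eclipse/sumo/tools'
CMD="python \"$SUMO_TOOLS/randomTrips.py\" -n map.net.xml -l 200 -e -o map.trips.xml"
echo "$CMD"
```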

SLURM releasing resources using scontrol update results in unknown endtime

I have a program that will dynamically release resources during job execution, using the command:
scontrol update JobId=$SLURM_JOB_ID NodeList=${remaininghosts}
However, this sometimes results in very weird behavior, where the job is re-queued. Below is the output of sacct:
sacct -j 1448590
JobID NNodes State Start End NodeList
1448590 4 RESIZING 20:47:28 01:04:22 [0812,0827],[0663-0664]
1448590.0 4 COMPLETED 20:47:30 20:47:30 [0812,0827],[0663-0664]
1448590.1 4 RESIZING 20:47:30 01:04:22 [0812,0827],[0663-0664]
1448590 3 RESIZING 01:04:22 01:06:42 [0812,0827],0663
1448590 2 RESIZING 01:06:42 1:12:42 0827,tnxt-0663
1448590 4 COMPLETED 05:33:15 Unknown 0805-0807,0809]
The first lines show that everything works fine and nodes are getting released, but the last line shows a completely different set of nodes with an unknown end time. The Slurm logs show the job got requeued:
requeue JobID=1448590 State=0x8000 NodeCnt=1 due to node failure.
I suspect this might happen because the head node is killed, but the Slurm documentation doesn't say anything about that.
Does anybody have an idea or suggestion?
Thanks
In this post there was a discussion about resizing jobs.
In your particular case, shrinking could be done as follows, assuming that j1 has been submitted with:
$ salloc -N4 bash
Update j1 to the new size:
$ scontrol update jobid=$SLURM_JOBID NumNodes=2
$ scontrol update jobid=$SLURM_JOBID NumNodes=ALL
And update the environment variables of j1 (the script is created by the previous commands):
$ ./slurm_job_${SLURM_JOBID}_resize.sh
Now, j1 has 2 nodes.
In your example, the "remaininghosts" list, as you say, may exclude the head node, which is needed by Slurm to shrink the job. If you provide a quantity instead of a list, the resize should work.
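Put together, a shrink driven by a node count rather than an explicit node list could look like the following dry-run sketch (the job id and target size are illustrative; uncomment the scontrol lines inside a real allocation):

```shell
#!/bin/sh
# Sketch: shrink a running Slurm job by count instead of by node list.
# JOBID and NEWSIZE are illustrative values.
JOBID=1448590
NEWSIZE=2

# Inside a real allocation these would be live commands:
# scontrol update JobId=$JOBID NumNodes=$NEWSIZE
# scontrol update JobId=$JOBID NumNodes=ALL
# . "slurm_job_${JOBID}_resize.sh"   # refresh the SLURM_* environment

echo "shrink job $JOBID to $NEWSIZE nodes"
```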

How to get information on latest successful pod deployment in OpenShift 3.6

I am currently working on a CICD script to deploy a complex environment into another environment. We have multiple technologies involved, and I want to optimize this script because it takes too much time to fetch information from each environment.
In the OpenShift 3.6 section, I need to get the last successful deployment for each application in a specific project. I tried to find a quick way to do so, but so far I have only found this solution:
oc rollout history dc -n <Project_name>
This will give me the following output:
deploymentconfigs "<Application_name>"
REVISION STATUS CAUSE
1 Complete config change
2 Complete config change
3 Failed manual change
4 Running config change
deploymentconfigs "<Application_name2>"
REVISION STATUS CAUSE
18 Complete config change
19 Complete config change
20 Complete manual change
21 Failed config change
....
I then take this output and parse each line to find the latest revision that has the status "Complete".
In the above example, I would get this list:
<Application_name> : 2
<Application_name2> : 20
Then for each application and each revision I do:
oc rollout history dc/<Application_name> -n <Project_name> --revision=<Latest_Revision>
In the above example, the Latest_Revision for Application_name is 2, which is the latest complete revision that is neither building nor failed.
This gives me the output with the information I need: the version of the ear and the version of the configuration that was used to build the image for this successful deployment.
But since I have multiple applications, this process can take up to 2 minutes per environment.
Would anybody have a better way of fetching the information I require?
Unless I am mistaken, it looks like there is no one-liner that can get the information on the currently running and accessible application.
Thanks
Assuming that the currently active deployment is the latest successful one, you may try the following:
oc get dc -a --no-headers | awk '{print "oc rollout history dc "$1" --revision="$2}' | . /dev/stdin
It gets a list of deployments, feeds it to awk to extract the name ($1) and revision ($2), compiles your command to extract the details, and finally sends it to the shell via standard input for execution. It may be frowned upon for not using xargs or the like, but I found it easier for debugging (just drop the last part and see the commands printed out).
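To see what that pipeline generates before executing anything against a cluster, the awk stage can be exercised on its own with canned `oc get dc` output (the deployment names and revisions below are made up):

```shell
# Sketch: the awk stage from the one-liner, fed with canned output so the
# generated commands are visible. Deployment names/revisions are made up.
build_cmds() {
    awk '{print "oc rollout history dc "$1" --revision="$2}'
}

build_cmds <<'EOF'
daily-checks 3 Complete
jtask 12 Complete
EOF
# prints:
#   oc rollout history dc daily-checks --revision=3
#   oc rollout history dc jtask --revision=12
```

Replace the here-document with the real `oc get dc -a --no-headers` to get back to the original one-liner.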
UPDATE:
On second thoughts, you might actually like this one better:
oc get dc -a -o jsonpath='{range .items[*]}{.metadata.name}{"\n\t"}{.spec.template.spec.containers[0].env}{"\n\t"}{.spec.template.spec.containers[0].image}{"\n-------\n"}{end}'
The example output:
daily-checks
[map[name:SQL_QUERIES_DIR value:daily-checks/]]
docker-registry.default.svc:5000/ptrk-testing/daily-checks#sha256:b299434622b5f9e9958ae753b7211f1928318e57848e992bbf33a6e9ee0f6d94
-------
jboss-webserver31-tomcat
registry.access.redhat.com/jboss-webserver-3/webserver31-tomcat7-openshift#sha256:b5fac47d43939b82ce1e7ef864a7c2ee79db7920df5764b631f2783c4b73f044
-------
jtask
172.30.31.183:5000/ptrk-testing/app-txeq:build
-------
lifebicycle
docker-registry.default.svc:5000/ptrk-testing/lifebicycle#sha256:a93cfaf9efd9b806b0d4d3f0c087b369a9963ea05404c2c7445cc01f07344a35
You get the idea, with expressions like .spec.template.spec.containers[0].env you can reach for specific variables, labels, etc. Unfortunately the jsonpath output is not available with oc rollout history.
UPDATE 2:
You could also use post-deployment hooks to collect the data, if you can set up a listener for the hooks. Hopefully the information you need is inherited by the PODs. More info here: https://docs.openshift.com/container-platform/3.10/dev_guide/deployments/deployment_strategies.html#lifecycle-hooks

In Bluez A2DP: how can I modify the default audio sample rate

I am using Bluez 4 to sink audio from an iPhone 5 to a Raspberry Pi audio output.
The default settings for Bluez 4 A2DP appear to be S16_LE, 44.1 kHz, stereo.
As in other posts about Bluez, I can't catch the Select_Configuration D-Bus messages in order to change the sample rate dynamically. Instead I decided to try to find the default A2DP sample rate in the Bluez stack.
Does anyone know where the default sample rate is set? My first thought was that it was in the Bluez audio/ folder, but nothing there appears to change the default 44.1 kHz sample rate.
Now I'm very curious to know where it is set.
Currently I am using: sudo ./a2dp-alsa --sink | aplay -c 2 -r 44100 -f S16
I would like to use: sudo ./a2dp-alsa --sink | aplay -c 2 -r 16000 -f S16
I came across these lines in a2dp-alsa.c
/* Initialise connection to ALSA */
g_handle = audio_init("hw:0,0", 48000);
Maybe it's hard-coded in a2dp-alsa and not parameterizable.
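If the rate really is hard-coded there, editing the audio_init() call and rebuilding is one option. The snippet below demonstrates the substitution on the quoted line rather than on the real file; for the real tree you would run sed -i (or edit by hand) on a2dp-alsa.c and then recompile:

```shell
# Sketch: rewrite the hard-coded rate in the audio_init() call.
# Demonstrated on the quoted line; for the real file you would run
# sed -i on a2dp-alsa.c and rebuild.
line='g_handle = audio_init("hw:0,0", 48000);'
printf '%s\n' "$line" | sed 's/48000/16000/'
# prints: g_handle = audio_init("hw:0,0", 16000);
```

Note that this only changes the rate a2dp-alsa opens ALSA with; whatever rate the A2DP codec negotiates with the phone is a separate question.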

copying kernel and uboot into sdcard

I have a Freescale i.MX ARM board for which I am preparing the bootloader, kernel and root filesystem on the SD card.
I am a little confused about the order in which I partition and copy my files onto the SD card. Let us say I have an empty 4 GB SD card. I used gparted to first partition it into:
First partition, 400 MB, FAT32: this is my boot partition.
Second partition, the rest of the card, ext3: this is my root filesystem partition.
Let us say my SD card is at /dev/sdb.
Now I have seen many documents differing slightly in the way they copy the boot files.
Which is the right way?
Method 1:
(without mounting the sdb partitions):
sudo dd if=u-boot.bin of=/dev/sdb bs=512 seek=2
sudo dd if=uImage of=/dev/sdb bs=512 seek=2
Mount sdb2 for copying rootfs:
mount /dev/sdb2 /mnt/rootfs
copy rootfs:
tar -xf tarfile -C /mnt/rootfs
Method 2:
Mount sdb1 boot partition:
mount /dev/sdb1 /mnt/boot
copy uboot and kernel:
cp u-boot.bin /mnt/boot/
cp uImage /mnt/boot/
Then copy rootfs as above!
Which is the correct one? I tried both, but the SD card does not even boot.
When I tried method 1, the card booted up until it said the rootfs was not found on the partition. I removed and reinserted the card and found that the first FAT32 partition is somehow 'destroyed': gparted shows it as 'unallocated'.
Please help.
You need to mark the first partition as bootable.
Check your first partition details in gparted or Disk Utility.
From Disk Utility you can mark a partition bootable by selecting the specific partition and going into 'more actions' -> 'edit partition type'.
Below is a script to flash binaries onto the SD card for my Arndale Octa board. You can see the placement of the bootloader binaries:
BL1:
dd iflag=dsync oflag=dsync if=arndale_octa.bl1.bin of=/dev/sde bs=512 seek=1
BL2:
dd iflag=dsync oflag=dsync if=../arndale_octa.bl2.bin of=/dev/sde bs=512 seek=31
u-boot:
dd iflag=dsync oflag=dsync if=u-boot.bin of=/dev/sde bs=512 seek=63
kernel and trust software, ...
Please note:
1) The partition table is at SD card offset 0 (seek 0), so you then have to run fdisk /dev/sde and create partitions that do not overlap the blocks occupied by the kernel or trust software.
2) Add the "dsync" option to the dd command to guarantee that every piece of written data is immediately flushed to the SD card.
In most cases, the i.MX processor requires the bootloader at offset 0x400. So what you are doing for u-boot is correct; you need to use the dd command for that:
sudo dd if=u-boot.bin of=/dev/sdb bs=512 seek=2
While partitioning the SD card, make sure that you keep enough room for the u-boot image. So start your first bootable partition at, let's say, a 1 MB offset.
You can simply copy your uImage and environment variables (uEnv.txt or boot.scr) with the cp command.
For the rootfs too, you can follow the same steps as for the kernel.