I am using BlueZ 4 to sink audio from an iPhone 5 to a Raspberry Pi audio output.
The default settings for BlueZ 4 A2DP appear to be S16_LE, 44.1 kHz, stereo.
As in other posts about BlueZ, I can't catch Select_Configuration D-Bus messages in order to change the sample rate dynamically. Instead, I decided to try to find the default A2DP sample rate in the BlueZ stack.
Does anyone know where the default sample rate is set? My first thought was that it was in the BlueZ audio/ folder, but nothing there appears to change the default 44.1 kHz sample rate.
Now I'm very curious to know where it is set.
Currently I'm using this: sudo ./a2dp-alsa --sink | aplay -c 2 -r 44100 -f S16
I would like to use this: sudo ./a2dp-alsa --sink | aplay -c 2 -r 16000 -f S16
I came across these lines in a2dp-alsa.c:
/* Initialise connection to ALSA */
g_handle = audio_init("hw:0,0", 48000);
Maybe it's hard-coded in a2dp-alsa and not parameterizable.
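One crude way to narrow this down is simply to grep both source trees for the hard-coded rates; the paths below are assumptions, so adjust them to wherever you unpacked the sources:

# Hypothetical source locations; adjust to your own checkout
grep -rn "44100\|48000" bluez-4.101/audio/
grep -n "audio_init\|44100\|48000" a2dp-alsa/a2dp-alsa.c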
I am trying to take the contents of a file that holds a hex number, convert that number to binary, and output it to a file.
This is what I am trying, but I'm not getting the binary value:
xxd -r -p Hex.txt > Binary.txt
The contents of Hex.txt is: ff
I have also tried FF and 0xFF, but would like to just use ff since the device I am pulling the info from has it in that format.
Instead of 11111111, which it should be, I get a y with two dots above it.
If I change it to ee, I get an i with two dots. It seems to be reading the file just fine, but from what I have read about the xxd -r -p command, it is not outputting it in the format I want.
The other ways I have found to convert hex to binary have either also not worked or involve a pretty big Bash script, which seems unnecessary for what I thought would be a simple task.
This also gives me the y with two dots:
$ for i in $(cat Hex.txt) ; do printf "\x$i" ; done > Binary.txt
For some reason, almost every solution I find gives me this format instead of a human-readable binary value made of 1s and 0s.
Any help is appreciated. I am planning on using this in a script to pull the relay values from Digital Loggers devices using curl and give Home Assistant a readable file to record the relay state. The Digital Loggers curl command gives the state of all 8 relays at once as hex, rather than letting you pull the status of a specific relay.
If "file.txt" contains:
fe
0a
and you run this:
perl -ane 'printf("%08b\n",hex($_))' file.txt
You'll get this:
11111110
00001010
If you use it a lot, you might want to make a bash function of it in your login profile along these lines - being extremely respectful of spaces and semi-colons that might look unnecessary:
bin(){ perl -ane 'printf("%08b\n",hex($_))' $1 ; }
Then you'll be able to do:
bin file.txt
If you dislike Perl for some reason, you can achieve something similar without it as follows:
tr '[:lower:]' '[:upper:]' < file.txt |
while read h ; do
    echo "obase=2; ibase=16; $h" | bc
done
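One caveat with the bc version: bc drops leading zeros, so 0a comes out as 1010 rather than 00001010. Assuming single-byte values, a rough workaround is to pad the result with printf:

tr '[:lower:]' '[:upper:]' < file.txt |
while read h ; do
    printf "%08d\n" "$(echo "obase=2; ibase=16; $h" | bc)"
done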
I am currently working on a CI/CD script to deploy a complex environment into another environment. We have multiple technologies involved, and I want to optimize this script because it is taking too much time to fetch information on each environment.
In the OpenShift 3.6 section, I need to get the last successful deployment of each application for a specific project. I tried to find a quick way to do so, but so far I have only found this solution:
oc rollout history dc -n <Project_name>
This will give me the following output:
deploymentconfigs "<Application_name>"
REVISION STATUS CAUSE
1 Complete config change
2 Complete config change
3 Failed manual change
4 Running config change
deploymentconfigs "<Application_name2>"
REVISION STATUS CAUSE
18 Complete config change
19 Complete config change
20 Complete manual change
21 Failed config change
....
I then take this output and parse each line to find the latest revision that has the status "Complete".
In the above example, I would get this list:
<Application_name> : 2
<Application_name2> : 20
Then for each application and its latest revision I run:
oc rollout history dc/<Application_name> -n <Project_name> --revision=<Latest_Revision>
In the above example, the Latest_Revision for Application_name is 2, which is its latest Complete revision (neither running nor failed).
This gives me the output with the information I need, which is the version of the EAR and the version of the configuration that were used to create the image for this successful deployment.
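Put together, the whole two-step process looks roughly like this (just a sketch with placeholders, not tuned for speed):

oc rollout history dc -n <Project_name> |
  awk '/^deploymentconfigs/ {gsub(/"/,""); app=$2}
       $2=="Complete" {last[app]=$1}
       END {for (a in last) print a, last[a]}' |
  while read app rev ; do
    oc rollout history dc/"$app" -n <Project_name> --revision="$rev"
  done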
But since I have multiple applications, this process can take up to 2 minutes per environment.
Would anybody have a better way of fetching the information I require?
Unless I am mistaken, it looks like there is no one-liner that can get this information for the currently running and accessible applications.
Thanks
Assuming that the currently active deployment is the latest successful one, you may try the following:
oc get dc -a --no-headers | awk '{print "oc rollout history dc "$1" --revision="$2}' | . /dev/stdin
It gets the list of deployments, feeds it to awk to extract the name ($1) and revision ($2), builds your command to extract the details, and finally sources the result from standard input to execute it. It may be frowned upon for not using xargs or the like, but I found it easier for debugging (just drop the last part and see the commands printed out).
UPDATE:
On second thoughts, you might actually like this one better:
oc get dc -a -o jsonpath='{range .items[*]}{.metadata.name}{"\n\t"}{.spec.template.spec.containers[0].env}{"\n\t"}{.spec.template.spec.containers[0].image}{"\n-------\n"}{end}'
The example output:
daily-checks
[map[name:SQL_QUERIES_DIR value:daily-checks/]]
docker-registry.default.svc:5000/ptrk-testing/daily-checks#sha256:b299434622b5f9e9958ae753b7211f1928318e57848e992bbf33a6e9ee0f6d94
-------
jboss-webserver31-tomcat
registry.access.redhat.com/jboss-webserver-3/webserver31-tomcat7-openshift#sha256:b5fac47d43939b82ce1e7ef864a7c2ee79db7920df5764b631f2783c4b73f044
-------
jtask
172.30.31.183:5000/ptrk-testing/app-txeq:build
-------
lifebicycle
docker-registry.default.svc:5000/ptrk-testing/lifebicycle#sha256:a93cfaf9efd9b806b0d4d3f0c087b369a9963ea05404c2c7445cc01f07344a35
You get the idea: with expressions like .spec.template.spec.containers[0].env you can reach specific variables, labels, etc. Unfortunately, the jsonpath output is not available with oc rollout history.
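If the revision number itself is all you need, it should also be reachable from the DeploymentConfig status, assuming .status.latestVersion is populated on your cluster (note this is the latest revision, not necessarily the latest Complete one):

oc get dc -a -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.latestVersion}{"\n"}{end}'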
UPDATE 2:
You could also use post-deployment hooks to collect the data, if you can set up a listener for the hooks. Hopefully the information you need is inherited by the PODs. More info here: https://docs.openshift.com/container-platform/3.10/dev_guide/deployments/deployment_strategies.html#lifecycle-hooks
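If you go that route, recent oc clients have a helper for attaching hooks; a rough sketch, assuming oc set deployment-hook is available in your client version, with the container name as a placeholder:

oc set deployment-hook dc/<Application_name> -n <Project_name> --post \
  -c <container_name> -- /bin/sh -c 'echo "deployed at $(date)"'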
I want to generate a trace file from disk I/O, but the problem is that I need the actual input data along with the timestamp, logical address, access block size, etc.
I have been trying to solve the problem by using "blktrace | blkparse" together with "iozone" in an Ubuntu VirtualBox environment, but it does not seem to be working.
There is an option in blkparse for setting the output format to show the packet data, -f "%P", but it does not print anything.
Below are the commands that I use:
$> sudo blktrace -a issue -d /dev/sda -o - | blkparse -i - -o ./temp/blktrace.sda.iozone -f "%-12C\t\t%p\t%d\t%S:%n:%N\t\t%P\n"
$> iozone -w -e -s 16M -f ./mnt/iozone.dummy -i 0
In the printing format "%-12C\t\t%p\t%d\t%S:%n:%N\t\t%P\n", all other things are printed well, but the "%P" is not printed at all.
Does anyone know why the packet data is not displayed?
Or does anyone know another way to get the disk I/O packet data along with the actual input values?
As far as I know, blktrace does not capture the actual data; it just captures the metadata. One way to capture the real data is to write your own kernel module. Some students at FIU did that in this paper:
"I/O deduplication: Utilizing content similarity to ..."
I would ask this question on the linux-btrace mailing list as well:
http://vger.kernel.org/majordomo-info.html
I know that this topic has been discussed many times, but none of the answers helped me. For the record, I'm running Debian.
The deal is: I bought a USB-powered LED lamp, which is very simple and doesn't even have an on/off switch (it works and is always on). I want to be able to turn it on and off via the command line. Here's what I tried:
echo on > /sys/bus/usb/devices/usb1/power/level # turn on
echo suspend > /sys/bus/usb/devices/usb1/power/level # turn off
which is what I've found on many forums. Turning it "on" works, but "suspending" yields
-su: echo: write error: Invalid argument
for every usbN. I also tried
echo "0" > "/sys/bus/usb/devices/usbX/power/autosuspend_delay_ms"
which doesn't give an error, but also doesn't do anything (again, for every usbN).
Trying
echo "usb1" > /sys/bus/usb/drivers/usb/unbind
works only for more "intelligent" devices, like the keyboard, the mouse, or the USB WiFi card. What I mean is that only those devices are turned off; the other usbN entries don't give an error, but the lamp never goes off.
The contents of /sys/bus/usb/devices/ are:
1-0:1.0 1-1:1.0 1-2:1.0 1-2:1.2 2-0:1.0 4-0:1.0 4-1:1.0 6-0:1.0 8-0:1.0 8-2:1.0 usb2 usb4 usb6 usb8
1-1 1-2 1-2:1.1 1-2:1.3 3-0:1.0 4-1 5-0:1.0 7-0:1.0 8-2 usb1 usb3 usb5 usb7
I tried to do
echo device_name > /sys/bus/usb/drivers/usb/unbind
with every single one of them, but only the devices usbN and N-M react; the ones of the form n-m:x.y yield
tee: /sys/bus/usb/drivers/usb/bind: No such device
(I tried putting in, for instance, "1-0:1.0" and "1-0\:1.0"; both gave the same result).
One last thing, what is shown after executing
lsusb -t
does not change when I plug or unplug the lamp.
Any ideas?
Turn off device ID 2-1:
echo '2-1' |sudo tee /sys/bus/usb/drivers/usb/unbind
Turn device ID 2-1 back on:
echo '2-1' |sudo tee /sys/bus/usb/drivers/usb/bind
In my case, device ID 2-1 controls power to my USB stick, and as a consequence controls the light.
TIP: If they work for you in Debian, create an alias for them to make life easier for you later.
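For example, a pair of small shell functions along these lines could go in your ~/.bashrc (the 2-1 below is just the example ID; use whatever your lamp shows up as):

usb_off() { echo "$1" | sudo tee /sys/bus/usb/drivers/usb/unbind ; }
usb_on()  { echo "$1" | sudo tee /sys/bus/usb/drivers/usb/bind ; }
# usage: usb_off 2-1   and later   usb_on 2-1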
Hope this helps,
Su
If all you want to do is reset a USB device to fix it once it gets into a broken state, then using the bind/unbind usbfs special files can be a bit of a pain (since device IDs can change, and they're a bit tricky to identify precisely if you don't want to rebind other devices). In this case I've found it much easier to use the vendor and product IDs given by lsusb with usb_modeswitch. For example, if I identify my wireless adapter using:
$ lsusb
Bus 001 Device 042: ID 7392:7811 Edimax Technology Co., Ltd EW-7811Un 802.11n Wireless Adapter [Realtek RTL8188CUS]
Bus 001 Device 035: ID 0409:005a NEC Corp. HighSpeed Hub
Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
I can then reset the wireless adapter using:
$ sudo usb_modeswitch -v 0x7392 -p 0x7811 --reset-usb
If you have more than one device attached with the same vendor and product IDs then usb_modeswitch provides bus and device number flags. For the wireless adapter example above I'd add -b 1 -g 42 to the flags.
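So, for the example above, the full reset would look something like this (the bus and device numbers come from the lsusb output and will differ on your system):

sudo usb_modeswitch -v 0x7392 -p 0x7811 -b 1 -g 42 --reset-usb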
Try this code; it works for me (rooted devices only):
String[] cmdline = { "su", "-c", "echo '1-1' > /sys/bus/usb/drivers/usb/unbind" };
try {
    Runtime.getRuntime().exec(cmdline);
} catch (IOException e) {
    Log.e("MainActivity", "Failed" + e);
}
And to bind it again, do this:
String[] cmdline = { "su", "-c", "echo '1-1' > /sys/bus/usb/drivers/usb/bind" };
try {
    Runtime.getRuntime().exec(cmdline);
} catch (IOException e) {
    Log.e("MainActivity", "Failed" + e);
}
Hello.
I have a big video file. ffmpeg, tcprobe, and other tools say it is an H.264 stream in an AVI container.
Now I'd like to cut out small chunks from the video.
Problem: The index of the video seems corrupted/destroyed. I kind of fixed this via mplayer -forceidx -saveidx <IndexFile> <BigVideoFile>. The problem here is that I'm now stuck with mplayer/mencoder, which can use this index file via -loadidx <IndexFile>. I have tried correcting the index as described in man aviindex (mplayer -frames 0 -saveidx mpidx broken.avi ; aviindex -i mpidx -o tcindex ; avimerge -x tcindex -i broken.avi -o fixed.avi), but this didn't fix my video, meaning that most tools I've tested still couldn't seek in the video file.
Problem: I cut out parts of the video via the following command: mencoder -loadidx in.idx -ss 8578 -endpos 20 -oac faac -ovc x264 -sws 9 -lavfopts format=mp4 -x264encopts <LotsOfOpts> -of lavf -vf scale=800:-10,harddup in.avi -o out.mp4. The problem here is that some videos are corrupted at the beginning. I think this is due to the fact that I do not necessarily cut at a keyframe.
Questions:
What is the best way to fix the index of an AVI "inline" so that every tool can again work with it as expected?
How can I split at keyframes? Is there an mencoder option for this?
Do keyframes come at a fixed frequency? How can I find out that frequency? (With a bit of math it should then be possible to calculate the next keyframe and cut there.)
Is there perhaps some completely different way to split this movie? Doing it by hand is not an option; I have to cut out 1000+ chunks ...
Thanks a lot!
https://spreadsheets.google.com/ccc?key=0AjWmZ0umsuZHdHNzZVhuMTkxTHdYbUdCQzF3cE51Snc&hl=en lists the various options available for splitting accurately.
I would attempt to use avidemux to repair the file before doing anything else. You may also have better results using an MP4-based container rather than AVI.
As for ensuring your specified intervals land right on a keyframe, I would suggest re-encoding with FFMPEG using the -g 1 option before doing the split below, to ensure every frame is in fact a keyframe. FFMPEG controls keyframe frequency through the GOP (Group of Pictures) size; note that -g only takes effect when re-encoding, not with -vcodec copy.
ffmpeg -i input.avi -g 1 -vcodec libx264 -acodec copy out.avi
Then multiple splits (with FFMPEG) :
ffmpeg -i out.avi -ss 00:00:10 -t 00:00:30 out1.avi -ss 00:00:35 -t 00:00:30 out2.avi
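If you would rather cut the original file on its existing keyframes instead of re-encoding, ffprobe can list their timestamps; a sketch, with the caveat that option names vary between ffprobe versions:

ffprobe -select_streams v:0 -skip_frame nokey \
        -show_entries frame=pkt_pts_time -of csv=p=0 input.avi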
Some more options to try:
x264 Mapping FFMPEG encoding in linux