I'm working on an application that records audio from an input device, and I've noticed that at times the DataAvailable event handler is never called. I think it might be because BytesRecorded is 0?
https://github.com/naudio/NAudio/blob/52491103dd368ebc243d2d5ff57fcdfaddc5cf42/NAudio.WinMM/WaveInEvent.cs#L156
_waveInEvent.DataAvailable += (sender, args) => ProcessData(args);
During local testing, I used stereo mix as an input and was still able to process some data even if nothing was playing. One of our users seems to be seeing an issue where BytesRecorded is 0 when he disconnects his input system. I don't think I'm getting exceptions when this happens, and it seems that there isn't any data to process at all.
Noting that it's an AOIP input feed in case that helps? Is an AOIP input feed treated differently from other inputs?
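For reference, here's a minimal diagnostic version of my setup (a sketch; ProcessData is my own method) that would log empty callbacks and the reason recording stops:

var waveIn = new NAudio.Wave.WaveInEvent();

waveIn.DataAvailable += (sender, args) =>
{
    // Guard against the suspected empty callbacks: log and skip
    // rather than processing zero bytes.
    if (args.BytesRecorded == 0)
    {
        Console.WriteLine("DataAvailable fired with BytesRecorded == 0");
        return;
    }
    ProcessData(args);
};

// A disconnected device usually surfaces here rather than as an exception
// inside DataAvailable, so the stop reason is worth logging too.
waveIn.RecordingStopped += (sender, args) =>
    Console.WriteLine("Recording stopped: " + (args.Exception?.Message ?? "no error"));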
I'm very new to embedded SW development and I'm trying to understand as much as possible by writing simple code for an STM32G031C6T6 MCU, so this is probably a stupid question, but I can't find the answer anywhere.
First I set up channel 1 of TIM1 for PWM generation, which is output on the PA8 pin, and it's working properly.
Then I set up channel 1 of TIM3 for input capture, which is triggered on the rising edge of the PA6 pin.
In the code I started the timers with HAL_TIM_PWM_Start(&htim1, TIM_CHANNEL_1) and HAL_TIM_IC_Start_IT(&htim3, TIM_CHANNEL_1).
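Roughly, the relevant code looks like this (a sketch; MX_TIM1_Init()/MX_TIM3_Init() and the htim1/htim3 handles are generated by CubeMX and not shown):

int main(void)
{
    HAL_Init();
    SystemClock_Config();
    MX_TIM1_Init();
    MX_TIM3_Init();

    HAL_TIM_PWM_Start(&htim1, TIM_CHANNEL_1);   /* PWM output on PA8 */
    HAL_TIM_IC_Start_IT(&htim3, TIM_CHANNEL_1); /* capture on PA6, interrupt mode */

    while (1)
    {
    }
}

/* HAL calls this from the TIM3 interrupt on every capture event; it fires on
   every rising edge of the TIM1 PWM even though nothing is wired to PA6. */
void HAL_TIM_IC_CaptureCallback(TIM_HandleTypeDef *htim)
{
    if (htim->Instance == TIM3 && htim->Channel == HAL_TIM_ACTIVE_CHANNEL_1)
    {
        uint32_t value = HAL_TIM_ReadCapturedValue(htim, TIM_CHANNEL_1);
        (void)value; /* process the captured value here */
    }
}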
I run the program, and the input capture in TIM3 is triggered every time there's a rising edge of the PWM signal generated by TIM1.
I can't figure out why this is happening and I'm unable to find any info about this behavior.
Last but not least, I am sure that there's no physical connection between PA8 and PA6; plus, the input capture channel is filtered (I tried setting the filter even at its max value), so no noise from the PWM channel can trigger it.
Screenshots of the configuration in CubeMX:
[TIM3]: https://i.stack.imgur.com/ehXbe.jpg
[TIM1]: https://i.stack.imgur.com/s95kU.jpg
PS: after further testing I noticed that it happens only if I enable input capture on channel 1; the other 3 channels seem to work just fine.
So I have my Q and E keys set to control a camera that is fixed in 8 directions. The problem is that when I call Input.is_action_just_pressed() it returns true twice, so its body runs twice.
This is what it does with the counter:
0 0 0 0 1 1 2 2 2 2
How can I fix this?
if Input.is_action_just_pressed("camera_right", true):
    if cardinal_count < cardinal_index.size() - 1:
        cardinal_count += 1
    else:
        cardinal_count = 0
    emit_signal("cardinal_count_changed", cardinal_count)
On _process or _physics_process
Your code should work correctly - without reporting twice - if it is running in _process or _physics_process.
This is because is_action_just_pressed will return true if the action was pressed in the current frame. By default that means the graphics frame, but the method actually detects whether it is being called in the physics frame or the graphics frame, as you can see in its source code. And by design you only get one call of _process per graphics frame, and one call of _physics_process per physics frame.
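For example, the code from the question should behave as intended when run from _process (a sketch using the question's own variables and signal):

func _process(_delta):
    if Input.is_action_just_pressed("camera_right", true):
        if cardinal_count < cardinal_index.size() - 1:
            cardinal_count += 1
        else:
            cardinal_count = 0
        emit_signal("cardinal_count_changed", cardinal_count)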
On _input
However, if you are running the code in _input, remember you will get a call of _input for every input event, and there can be multiple in a single frame. Thus, there can be multiple calls of _input where is_action_just_pressed returns true, because they are in the same frame (graphics frame, which is the default).
Now, let us look at the proposed solution (from comments):
if event is InputEventKey:
    if Input.is_action_just_pressed("camera_right", true) and not event.is_echo():
        # whatever
        pass
It is testing whether the "camera_right" action was pressed in the current graphics frame. But that could be a different input event than the one currently being processed (event).
Thus, you can fool this code. Press the key configured to "camera_right" and something else at the same time (or fast enough to be in the same frame), and the execution will enter there twice. Which is what we are trying to avoid.
To avoid it correctly, you need to check that the current event is the action you are interested in. Which you can do with event.is_action("camera_right"). Now, you have a choice. You can do this:
if event.is_action("camera_right") and event.is_pressed() and not event.is_echo():
# whatever
pass
Which is what I would suggest. It checks that it is the correct action, that it is a press (not a release) event, and that it is not an echo (echoes come from keyboard repetition).
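In context, a minimal _input handler wiring the suggested check into the question's code might look like this (sketch):

func _input(event):
    if event.is_action("camera_right") and event.is_pressed() and not event.is_echo():
        if cardinal_count < cardinal_index.size() - 1:
            cardinal_count += 1
        else:
            cardinal_count = 0
        emit_signal("cardinal_count_changed", cardinal_count)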
Or you could do this:
if event.is_action("camera_right") and Input.is_action_just_pressed("camera_right", true) and not event.is_echo():
# whatever
pass
Which I'm not suggesting because: first, it is longer; and second, is_action_just_pressed is really not meant to be used in _input, since it is tied to the concept of a frame. The design of is_action_just_pressed is intended to work with _process or _physics_process, NOT _input.
So, apparently there's a built-in method for echo detection:
is_echo()
I'm closing this.
I've encountered the same issue and in my case it was down to the fact that my scene (the one containing the Input.is_action_just_pressed check) was placed in the scene tree, and was also autoloaded, which meant that the input was picked up from both locations and executed twice.
I took it out as an autoload and Input.is_action_just_pressed is now triggered once per input.
I'm new to PsychoPy and Python. I'm trying to program a way to quit a script (that I didn't write), by pressing a key for example. I've added this to the while loop:
while n < total:
    start = time.clock()
    if len(event.getKeys()) > 0:
        break
    # Another while loop here that ends when time is past a certain duration after 'start'.
And it's not working, it doesn't register any key presses. So I'm guessing key presses are only registered during specific times. What are those times? What is required to register key presses? That loop is extremely fast, sending signals every few milliseconds, so I can't just add wait commands in the loop.
If I could just have a parallel thread checking for a key press that would be good too, but that sounds complicated to learn.
Thanks!
Edits: The code runs as expected otherwise (in particular, no errors). "core" and "event" are imported. There aren't any other "event" commands of any kind that would affect the key-press log.
Changing the rest of the loop content to something that includes core.wait statements makes it work. So for anybody else having this difficulty, my original guess was correct: key presses are not registered during busy times (in my case, a while statement that constantly checks the time), or possibly only during specific busy times. Perhaps someone with more knowledge can clarify.
....So I'm guessing key presses are only registered during specific
times. What are those times? What is required to register key
presses?....
To try and answer your specific question, the PsychoPy API functions/methods that cause keyboard events to be registered are (now updated to include every PsychoPy 1.81 API function that does this):
event.waitKeys()[1]
event.clearEvents()[1]
event.getKeys()[2]
event.Mouse.getPressed()
win.flip()
core.wait()
visual.Window.dispatchAllWindowEvents()
1: These functions also remove all existing keyboard events from the event list. This means that any future call to a function like getKeys() will only return a keyboard event if it occurred after the last time one of these functions was called.
2: If keyList=None, does the same as [1]; otherwise, removes from the key event list only those keys that are within the keyList kwarg.
Note that one of the times keyboard events are 'dispatched' is in the event.getKeys() call itself. By default, this function also removes any existing key events.
So, without seeing the full source of the inner loop that you mention, it seems highly likely that the outer event.getKeys() never returns a key event because key events are being consumed by some other call within the inner loop. So the chance that an event is in the key list when the outer getKeys() is called is very, very low.
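As a hypothetical illustration of that consumption (not the OP's actual code), the outer check in this loop will almost never see a key, because the inner call dispatches and clears the keyboard events first:

from psychopy import core, event, visual

win = visual.Window()
while True:
    if event.getKeys():  # the 'quit' check: almost never fires
        break
    # stand-in for the inner loop's work: any call here that dispatches
    # and clears keyboard events starves the getKeys() above
    event.clearEvents()
    core.wait(0.005)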
Update in response to the OP's comment on Jonas' test script (I do not have enough rep to add comments to answers yet):
... Strange that you say this ..[jonas example code].. works
and from Sol's answer it would seem it shouldn't. – zorgkang
Perhaps my answer is giving the wrong understanding, as it is intended to provide information that shows exactly why Jonas' example should, and does, work. Jonas' example code works because the only time key events are being removed from the event buffer is when getKeys() is called, and any events that are removed are also returned by the function, causing the loop to break.
This is not really an answer. Here's an attempt to minimally reproduce the error. If the window closes on keypress, it's a success. It works for me, so I failed to reproduce it. Does it work for you?
from psychopy import event, visual, core

win = visual.Window()
clock = core.Clock()
while True:
    clock.reset()
    if event.getKeys():
        break
    while clock.getTime() < 1:
        pass
I don't have the time module installed, so I used psychopy.core.Clock() instead but it shouldn't make a difference, unless your time-code ends up in an infinite loop, thus only running event.getKeys() once after a few microseconds.
I have a GSM module hooked up to a PIC18F87J11 and they communicate just fine. I can send an AT command from the microcontroller and read the response back. However, I have to know how many characters are in the response so I can have the PIC wait for that many characters. But if an error occurs, the response length might change. What is the best way to handle such a scenario?
For Example:
AT+CMGF=1
Will result in the following response.
\r\nOK\r\n
So I have to tell the PIC to wait for 6 characters. However, if the response was an error message, it would be something like this:
\r\nERROR\r\n
And if I already told the PIC to wait for only 6 characters, then it will miss the rest of the characters; as a result, they might appear the next time I tell the PIC to read the response of a new AT command.
What is the best way to find the end of the line automatically and handle any error messages?
Thanks!
In a single line
There is no single best way, only trade-offs.
In detail
The problem can be divided into two related subproblems.
1. Receiving messages of arbitrary finite length
The trade-offs:
available memory vs implementation complexity;
bandwidth overhead vs implementation complexity.
In the simplest case, the amount of available RAM is not restricted. We just use a buffer wide enough to hold the longest possible message and keep receiving the messages bytewise. Then, we have to determine somehow that a complete message has been received and can be passed to further processing. That essentially means analyzing the received data.
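As a sketch of that simplest case (GetChar() is assumed to be a blocking read of one byte from the UART; the "\r\n" terminator matches AT-style responses):

#include <stddef.h>

extern char GetChar(void);  /* assumed: blocking single-byte read from the UART */

/* Receive bytes until the "\r\n" terminator arrives or the buffer is full.
   Returns the number of bytes stored; the buffer is NUL-terminated. */
size_t ReceiveLine(char *buf, size_t cap)
{
    size_t n = 0;
    char prev = 0;

    while (n + 1 < cap)
    {
        char c = GetChar();
        buf[n++] = c;
        if (prev == '\r' && c == '\n')
            break;  /* complete message received */
        prev = c;
    }
    buf[n] = '\0';
    return n;
}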
2. Parsing the received messages
Analyzing the data in search of its syntactic structure is parsing by definition. And that is where the subtasks are related. Parsing in general is a very complex topic; dealing with it is expensive, both computationally and in development effort. It's often possible to reduce the costs if we limit the genericity of the data: the simpler the data structure, the easier it is to parse. And that limitation is called a "transport layer protocol".
Thus, we have to read the data to parse it, and parse the data to read it. This kind of interlocked problem is generally solved with coroutines.
In your case we have to deal with the AT protocol. It is old and it is human-oriented by design. That's bad news, because parsing it correctly can be challenging despite how simple it can look sometimes. It has some terribly inconvenient features, such as '+++' escape timing!
Things become worse when you're short of memory. In such a situation we can't defer parsing until the end of the message, because it very well might not even fit in the available RAM -- we have to parse it chunkwise.
...And we are not even close to opening TCP connections or making calls! And you'll meet some unexpected troubles there as well, such as the dreaded "unsolicited result codes". The matter is wide enough for a whole book. Please have a look at least here:
http://en.wikibooks.org/wiki/Serial_Programming/Modems_and_AT_Commands. The wikibook describes many more problems with the Hayes protocol, and some approaches to solving them.
Let's break the problem down into some layers of abstraction.
At the top layer is your application. The application layer deals with the response message as a whole and understands the meaning of a message. It shouldn't be bogged down in details such as how many characters it should expect to receive.
The next layer is responsible for framing a message from a stream of characters. Framing means extracting the message from the stream by identifying the beginning and end of a message.
The bottom layer is responsible for reading individual characters from the port.
Your application could call a function such as GetResponse(), which implements the framing layer. And GetResponse() could call GetChar(), which implements the bottom layer. It sounds like you've got the bottom layer under control and your question is about the framing layer.
A good pattern for framing a stream of characters into a message is to use a state machine. In your case the state machine includes states such as BEGIN_DELIM, MESSAGE_BODY, and END_DELIM. For more complex serial protocols other states might include MESSAGE_HEADER and MESSAGE_CHECKSUM, for example.
Here is some very basic code to give you an idea of how to implement the state machine in GetResponse(). You should add various types of error checking to prevent a buffer overflow and to handle dropped characters and such.
#include <stdbool.h>

enum { BEGIN_DELIM1, BEGIN_DELIM2, MESSAGE_BODY, END_DELIM };

void GetResponse(char *message_buffer)
{
    unsigned int state = BEGIN_DELIM1;
    bool is_message_complete = false;

    while (!is_message_complete)
    {
        char c = GetChar();
        switch (state)
        {
        case BEGIN_DELIM1:              /* wait for the leading '\r' */
            if (c == '\r')
                state = BEGIN_DELIM2;
            break;

        case BEGIN_DELIM2:              /* wait for the leading '\n' */
            if (c == '\n')
                state = MESSAGE_BODY;
            break;

        case MESSAGE_BODY:              /* accumulate until the trailing '\r' */
            if (c == '\r')
                state = END_DELIM;
            else
                *message_buffer++ = c;
            break;

        case END_DELIM:                 /* the trailing '\n' completes the message */
            if (c == '\n')
                is_message_complete = true;
            break;
        }
    }
    *message_buffer = '\0';             /* terminate the extracted body */
}
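The application layer might then use it like this (a sketch; the buffer size is arbitrary, and as noted above GetResponse() still needs bounds checking):

#include <string.h>

char response[64];  /* arbitrary size for this sketch */
GetResponse(response);

if (strcmp(response, "OK") == 0) {
    /* command accepted */
} else if (strcmp(response, "ERROR") == 0) {
    /* handle the error; either way the stream stays in sync */
}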
I'm hell-bent on making this work with NAudio, so please tell me if there's a way around this. I have streaming raw audio coming in from a serial device, which I'm trying to play through WaveOut.
Attempt 1:
'Constants 8000, 1, 8000 * 1, 1, 8
Dim CustomWaveOutFormat = WaveFormat.CreateCustomFormat(WaveFormatEncoding.Pcm, SampleRate, Channels, AverageBPS, BlockAlign, BitsPerSample)
Dim rawStream = New RawSourceWaveStream(VoicePort.BaseStream, CustomWaveOutFormat)
'Run in background
Dim waveOut = New WaveOut(WaveCallbackInfo.FunctionCallback())
'Play stream
waveOut.Init(rawStream)
waveOut.Play()
This code works, but there's a tiny problem: the actual audio stream isn't raw PCM, it's raw mu-law. It plays out the companding like Beethoven's 5th on a cheese-grater. If I change the WaveFormat to WaveFormatEncoding.MuLaw, I get a bad-format exception because it's raw audio and there are no RIFF headers.
So I moved over to converting it to PCM:
Attempt 2:
Dim reader = New MuLawWaveStream(VoicePort.BaseStream, SampleRate, Channels)
Dim pcmStream = WaveFormatConversionStream.CreatePcmStream(reader)
Dim waveOutStream = New BlockAlignReductionStream(pcmStream)
waveOut.Init(waveOutStream)
Here, CreatePcmStream tries to get the length of the stream (even though CanSeek = false) and fails.
Attempt 3:
waveOutStream = New BufferedWaveProvider(WaveFormat.CreateMuLawFormat(SampleRate, Channels))
' add samples when OnDataReceived() fires
It too seems to suffer from the lack of a header.
I'm hoping there's something minor I missed in all of this. The device only streams audio when in use, and no data is received otherwise - a case which is handled by (1).
To make attempt (1) work, your RawSourceWaveStream should specify the format that the data really is in. Then just use another WaveFormatConversionStream.CreatePcmStream, taking rawStream as the input:
Dim muLawStream = New RawSourceWaveStream(VoicePort.BaseStream, WaveFormat.CreateMuLawFormat(SampleRate, Channels))
Dim pcmStream = WaveFormatConversionStream.CreatePcmStream(muLawStream)
Attempt (2) is actually very close to working. You just need to make MuLawStream.Length return 0; you don't need it for what you are doing. BlockAlignReductionStream is irrelevant to mu-law as well, since mu-law's block align is 1.
Attempt (3) should work. I don't know what you mean by lack of a header?
In NAudio you are building a pipeline of audio data. Each stage in the pipeline can have a different format. Your audio starts off in mu-law, then gets converted to PCM, then can be played. A BufferedWaveProvider is used when you want playback to continue even though your device has stopped providing audio data.
Edit: I should add that the IWaveProvider interface in NAudio is a simplified WaveStream. It has only a format and a Read method, and is useful for situations where Length is unknown and repositioning is not possible.
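Putting the pipeline idea together with attempt (3), a sketch might look like this (WaveFormatConversionProvider is the IWaveProvider counterpart of WaveFormatConversionStream; SampleRate and Channels are the question's constants):

Dim muLawFormat = WaveFormat.CreateMuLawFormat(SampleRate, Channels)
Dim bufferedProvider = New BufferedWaveProvider(muLawFormat)

' Convert the buffered mu-law bytes to 16-bit PCM as they are read
Dim pcmProvider = New WaveFormatConversionProvider(New WaveFormat(SampleRate, 16, Channels), bufferedProvider)

Dim waveOut = New WaveOut(WaveCallbackInfo.FunctionCallback())
waveOut.Init(pcmProvider)
waveOut.Play()

' In the serial handler, push whatever bytes arrive:
' bufferedProvider.AddSamples(bytes, 0, bytes.Length)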