NAudio multiple WaveOut objects

Currently I am constructing a synthesizer. I am wondering if there is any way of having multiple WaveOut objects, since I want my synth to be polyphonic (multiple keys pressed at the same time).

You should not open multiple WaveOut objects. Instead, create a mixer using MixingSampleProvider to sum together the outputs of all your synth's voices. It allows inputs to be added dynamically and automatically removes them when they finish. You need to configure it (set its ReadFully property to true) to produce an endless stream of silence when there are no inputs, or WaveOut will assume there is nothing more to play and stop automatically.
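If it helps to see the idea in isolation, here is a conceptual sketch of what such a mixer does internally. This is plain Java illustrating the pattern, not the NAudio API; the Voice and Mixer types are made up, and in NAudio itself MixingSampleProvider does this summing for you:

```java
import java.util.Arrays;
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

// One sounding note of the synth.
interface Voice {
  /** Fills 'buffer' with up to buffer.length samples; returns 0 once the voice has finished. */
  int read(float[] buffer);
}

// Conceptual mixer: sums all active voices, drops finished ones,
// and keeps emitting silence so the output device never stops.
final class Mixer {
  private final List<Voice> voices = new CopyOnWriteArrayList<>();

  void add(Voice v) {
    voices.add(v); // a key press adds a voice while playback keeps running
  }

  /** Always fills the whole buffer: zeros (silence) when no voices are active. */
  void read(float[] out) {
    Arrays.fill(out, 0f);
    float[] tmp = new float[out.length];
    for (Voice v : voices) {
      int n = v.read(tmp);
      for (int i = 0; i < n; i++) {
        out[i] += tmp[i];
      }
      if (n == 0) {
        voices.remove(v); // finished voice: remove it from the mix
      }
    }
  }
}
```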

How to do a LAG-like implementation in ksqlDB?

I've recently started working with ksqlDB and wanted to check if someone can help me with a query design. The problem statement is that I have a video conferencing app where a broadcaster can start and pause the stream multiple times. I want to get the total played time and the total paused time for that stream. I have clickstream data which consists of start and pause timestamps. How should I go about it so that I can generate an optimized view?
Any help is very deeply appreciated :)
Thank you
Grouping events
The first problem you'll need to solve is how you're going to group start/stop events together.
Likely, you'll want to group them by some kind of USER_ID or other attribute that uniquely identifies the broadcaster that's starting/stopping the stream.
Likely, you'll also want to group by some kind of STREAM_ID or other attribute that uniquely identifies the stream being played.
This may be sufficient if you only want the total play time per broadcaster, per video. However, you may also want to take time into account. For example, if I watch a video today, and then watch it again tomorrow, is that two viewing sessions, with two independent view-time totals, or do you not care?
One way of grouping events in time is using session windows. Before you sessionize the data you'd need to define the parameters that define your session. Here's a good example of using session windows in ksqlDB.
Another way of grouping events in time is using tumbling windows. Here's a good example of using tumbling windows.
Calculating play time
Once you've grouped your events, you'll likely need to calculate the play time. For example, if I start playing at time 5, and stop playing at time 8, then the amount of time I was watching the video is 8 - 5 = 3.
This requires capturing the play event, waiting for the corresponding stop event, and then outputting the difference in time. And doing so in a fault-tolerant way.
At the time of writing, this would require a custom UDAF (user-defined aggregate function).
A custom UDAF could capture the start event, store it for future reference, and output 0 for the play time; then, when it sees the corresponding stop event, it can remove the start event from its state, calculate the play time, and return it.
Here's a good example of writing a custom UDF in ksqlDB, though you'd require a custom UDAF, which is covered here.
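To make that concrete, here's a minimal sketch of what such a UDAF might look like, assuming each event is passed in as a STRUCT of timestamp and kind. The function name, schemas, and field names are all hypothetical:

```java
import io.confluent.ksql.function.udaf.Udaf;
import io.confluent.ksql.function.udaf.UdafDescription;
import io.confluent.ksql.function.udaf.UdafFactory;
import org.apache.kafka.connect.data.Schema;
import org.apache.kafka.connect.data.SchemaBuilder;
import org.apache.kafka.connect.data.Struct;

@UdafDescription(name = "play_time", description = "Sums the time between start/stop pairs")
public final class PlayTimeUdaf {

  private static final Schema AGG = SchemaBuilder.struct().optional()
      .field("LAST_START", Schema.OPTIONAL_INT64_SCHEMA)
      .field("TOTAL", Schema.OPTIONAL_INT64_SCHEMA)
      .build();

  @UdafFactory(
      description = "events as STRUCT<TS BIGINT, KIND VARCHAR>",
      paramSchema = "STRUCT<TS BIGINT, KIND VARCHAR>",
      aggregateSchema = "STRUCT<LAST_START BIGINT, TOTAL BIGINT>")
  public static Udaf<Struct, Struct, Long> playTime() {
    return new Udaf<Struct, Struct, Long>() {
      @Override
      public Struct initialize() {
        return new Struct(AGG).put("TOTAL", 0L);
      }

      @Override
      public Struct aggregate(Struct event, Struct agg) {
        final long ts = event.getInt64("TS");
        if ("start".equals(event.getString("KIND"))) {
          agg.put("LAST_START", ts);        // remember the start; emit no time yet
        } else if (agg.getInt64("LAST_START") != null) {
          agg.put("TOTAL", agg.getInt64("TOTAL") + ts - agg.getInt64("LAST_START"));
          agg.put("LAST_START", null);      // this start/stop pair is consumed
        }
        return agg;
      }

      @Override
      public Struct merge(Struct a, Struct b) {
        // Partial aggregates can't re-pair events across merges, so only the
        // completed totals are combined; ordered input per key is assumed.
        return new Struct(AGG).put("TOTAL", a.getInt64("TOTAL") + b.getInt64("TOTAL"));
      }

      @Override
      public Long map(Struct agg) {
        return agg.getInt64("TOTAL");
      }
    };
  }
}
```

You'd then aggregate with something like PLAY_TIME(STRUCT(TS := ROWTIME, KIND := EVENT_TYPE)) inside your (windowed) GROUP BY.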
There is currently a PR open with an enhancement to the LATEST_BY_OFFSET method that may well serve your purpose. It enhances the method to capture the last N values, rather than just the last value. Likely, this will be released in ksqlDB v0.13, and you can always pull the code and compile it locally if you have development experience. If it doesn't serve your purpose, then you may be able to use it as the starting point for developing your own.
Of course, these solutions require your stream of source events to be correctly ordered, so that stop events never come before their associated play events.
Aggregating
Once you've calculated the play time between a pair of start/stop events, you'll then need to aggregate them. Here's a good example of how to aggregate in ksqlDB.

How to use Xodus blobs in a multi-threaded scenario? And should they be closed?

First question:
The documentation states that you should not close the InputStream that you retrieved via the getBlob method. The javadoc of this method states that you should close it. Who is right?
Second one:
I'm using Xodus in an "asynchronous environment", where blob streaming is suspended/resumed in a cooperative-multitasking style using callbacks and backpressure detection (in my specific case, Vert.x write queues mixed with drain handlers). So, while I'm never accessing a blob's InputStream from different threads at the same time, I may access it from different threads in time slots that are guaranteed to be isolated from each other in time. Is this safe?
In other words: The documentation told me that "Concurrent access" to the same blob is not possible - does this mean different Threads at the same time, or different Threads at any time?
Thank you so much for any help!
You should not close the input stream, as the documentation states. I've fixed the javadoc, thanks for noticing.
"Concurrent access" is meant as an access to a single instance of InputStream from different threads at the same time. Cooperative multitasking should work fine if the access to the stream is really successive and the happens-before order is kept.

Play audio stream using WebAudio API

I have a client/server audio synthesizer where the server (java) dynamically generates an audio stream (Ogg/Vorbis) to be rendered by the client using an HTML5 audio element. Users can tweak various parameters and the server immediately alters the output accordingly. Unfortunately the audio element buffers (prefetches) very aggressively so changes made by the user won't be heard until minutes later, literally.
Trying to disable preload has no effect, and apparently this setting is only 'advisory', so there's no guarantee that its behavior would be consistent across browsers.
I've been reading everything that I can find on WebRTC and the evolving WebAudio API and it seems like all of the pieces I need are there but I don't know if it's possible to connect them up the way I'd like to.
I looked at RTCPeerConnection; it does provide low latency, but it brings in a lot of baggage that I don't want or need (STUN, ICE, offer/answer, etc.) and currently seems to support only a limited set of codecs, mostly geared towards voice. Also, since the server side is in Java, I think I'd have to do a lot of work to teach it to 'speak' the various protocols and formats involved.
AudioContext.decodeAudioData works great for a static sample, but not for a stream since it doesn't process the incoming data until it's consumed the entire stream.
What I want is the exact functionality of the audio tag (i.e. HTMLAudioElement) without any buffering. If I could somehow create a MediaStream object that uses the server URL for its input then I could create a MediaStreamAudioSourceNode and send that output to context.destination. This is not very different than what AudioContext.decodeAudioData already does, except that method creates a static buffer, not a stream.
I would like to keep the Ogg/Vorbis compression and eventually use other codecs, but one thing that I may try next is to send raw PCM and build audio buffers on the fly, just as if they were being generated programmatically by JavaScript code. But again, I think all of the parts already exist, and if there's any way to leverage that I would be most thrilled to know about it!
Thanks in advance,
Joe
How are you getting on? Did you resolve this question? I am solving a similar challenge. On the browser side I'm using the Web Audio API, which has nice ways to render streaming input audio data, and Node.js on the server side, using WebSockets as the middleware to send the browser streaming PCM buffers.
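For the original Java server, the same raw-PCM-over-WebSockets idea could look roughly like the sketch below, using the standard javax.websocket API. The endpoint path, frame size, and synthesizeInto hook are made up; the browser side would queue each binary frame into Web Audio buffers:

```java
import java.nio.ByteBuffer;
import javax.websocket.OnOpen;
import javax.websocket.Session;
import javax.websocket.server.ServerEndpoint;

// Hypothetical endpoint that pushes raw PCM frames as binary WebSocket messages.
@ServerEndpoint("/audio")
public class PcmStreamEndpoint {

  @OnOpen
  public void onOpen(final Session session) {
    // Stream on a dedicated thread so the container's callback thread isn't blocked.
    new Thread(() -> {
      byte[] frame = new byte[4096];
      try {
        while (session.isOpen()) {
          synthesizeInto(frame); // fill with the next block of PCM samples
          session.getBasicRemote().sendBinary(ByteBuffer.wrap(frame));
        }
      } catch (Exception e) {
        // connection dropped; the streaming loop simply ends
      }
    }).start();
  }

  private void synthesizeInto(byte[] frame) {
    // stand-in for the synth engine writing e.g. 16-bit little-endian samples
  }
}
```

Because each frame is pushed as soon as it is generated, there is no element-level prefetching; latency is bounded by the frame size plus whatever small queue the browser keeps.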

Serial communication and RichTextBox?

I have a general question concerning serial port communication and storage of the data. When communicating with a serial port (strictly reading from the port in this case), how would one go about storing and manipulating the data in VB.NET? For the project I'm doing, I need to take strings from the serial port, pull numbers from those strings, and sort them numerically (i.e. highest number at the top and lowest number at the bottom). For some reason I get inner-exception errors when I try to move the data from strings to string arrays, but I'm determined to figure it out.
As a general question about VB.NET programming in relation to serial port communication, is it sensible to use BackgroundWorkers? For example, could/should I use a BackgroundWorker to handle reading from a serial port and then do arithmetic on my data outside of the BackgroundWorker?
I'm basically just trying to store the data from my serial port into an array, but I don't know how big the array that holds the data will need to be (i.e. I don't know how many times data will be sent to my serial port).
Any tips/info would be appreciated! Thanks
As a general rule, if there is going to be any long-running task, you should run it in a separate thread. You do this so that the user experience is not affected and the GUI stays responsive.
In the case of serial communications there is usually a poll/respond architecture, which requires constant event handling.
In my experience I have had great success handling the interaction with the serial port in a separate thread that bubbles events up to the GUI. That way I can process the data to be displayed or stored in another separate thread and keep the polling running in near real time.
When I was consuming registers I would store them in many different ways, but from what you describe, it sounds like the data you are consuming would be best stored in a List(Of String).
This structure can grow almost without limit and, through the use of predicates, can be sorted. The List structure in .NET also has a method to convert itself to an array if necessary.
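To make the parse-and-sort step concrete, here is a rough sketch. It is written in Java for illustration, but the shape maps directly onto VB.NET's List(Of T) and Sort: numbers are pulled out of each received string with a regular expression and the list is kept ordered highest-first. The class name and pattern are illustrative only:

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Hypothetical store for readings parsed out of serial-port strings.
final class ReadingStore {
  private static final Pattern NUMBER = Pattern.compile("-?\\d+(\\.\\d+)?");
  private final List<Double> readings = new ArrayList<>();

  void addLine(String line) {
    Matcher m = NUMBER.matcher(line);
    while (m.find()) {
      readings.add(Double.parseDouble(m.group())); // the list grows as data arrives
    }
    readings.sort(Comparator.reverseOrder()); // highest number at the top
  }

  List<Double> snapshot() {
    return new ArrayList<>(readings); // copy for display, e.g. in the RichTextBox
  }
}
```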
So here is how I can imagine your interaction:
The GUI thread is started and you initiate a connection to your device.
You then set up a thread that will be receiving the incoming communications from the device.
In this thread, when data is captured, it triggers an event in the GUI.
In the GUI event handler the data is stored in a list, and if manipulations need to be performed on it, they are done in another processing thread that has a callback handler.
In the callback you can then display or push the data to its final destination.
The key point is that if you are using a GUI you should absolutely have the communication in a separate thread to maintain the stability of the GUI and create a good user experience.
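A rough sketch of that receiving thread, again in Java for illustration: the callback stands in for the event you'd raise to the GUI, and the port read is a placeholder, since the real blocking read belongs to whatever serial library you use:

```java
import java.util.function.Consumer;

// Hypothetical reader thread that owns the port and bubbles each
// received line up to the GUI via a callback.
final class SerialReaderThread extends Thread {
  private final Consumer<String> onLine; // the GUI-side event handler

  SerialReaderThread(Consumer<String> onLine) {
    this.onLine = onLine;
  }

  @Override
  public void run() {
    while (!isInterrupted()) {
      String line = readLineFromPort(); // would block until the device sends data
      if (line != null) {
        onLine.accept(line); // raise the "data received" event
      }
    }
  }

  private String readLineFromPort() {
    // stand-in only: Java has no built-in serial API, so a third-party
    // serial library's blocking read would sit here
    throw new UnsupportedOperationException("wire up a serial library here");
  }
}
```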

iOS: Handling overlapping background requests

In an iOS app, I'm writing a class that will be messaged, go do a background request (via performSelectorInBackground:withObject:), and then return the result through a delegate method (that will then be displayed on a map). Everything seems to work right when one request is happening at a time, but I'm trying to figure out how to handle multiple overlapping requests. For example, if a user enters something in a search box that starts a background thread, and then enters something else before the initial background thread completes, how should this be handled?
There are a few options (don't let the second request start while the first is in progress, stop the first as soon as the second is requested, let both run simultaneously and return independent results, etc.), but is there a common/recommended way to deal with this?
I don't think there's a universal answer to this. My suggestion is to separate tasks (in the form of NSOperations and/or blocks) by their function and the relationships between them.
Example: you don't want to add an image-resizing operation to the same queue as an operation fetching some unrelated feed from the web, especially if no relationship exists between them. Or maybe you do, because both require a great amount of memory and therefore can't run in parallel.
But you'd probably want to add web image search operations to the same queue while cancelling operations of the same type added to that queue before. Each of those image search operations might spawn an image-resize operation and place it into some other queue. Now you have a relationship and have to cancel the resizing in addition to the image search operation. What if the image search operation takes longer than the associated resize operation? How do you keep a reference to it, or know when it's done?
Yeah, it gets complicated easily, and sorry I can't give you any concrete answers because of the uniqueness of each situation, but making it run like a Swiss clock in the end is very satisfying :)
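To illustrate just the 'cancel the earlier request when a new one arrives' option from the question, here is a generic sketch, written in Java rather than Objective-C purely to show the shape; the class and method names are made up:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.function.Consumer;

// Hypothetical coordinator: each new query cancels the in-flight one,
// analogous to cancelling same-type NSOperations before enqueueing a new one.
final class SearchCoordinator {
  private final ExecutorService queue = Executors.newSingleThreadExecutor();
  private Future<?> inFlight;

  synchronized void search(String query, Consumer<String> onResult) {
    if (inFlight != null) {
      inFlight.cancel(true); // interrupt the superseded request
    }
    inFlight = queue.submit(() -> {
      String result = doRequest(query); // placeholder for the background fetch
      if (!Thread.currentThread().isInterrupted()) {
        onResult.accept(result); // in a UI app, marshal this back to the main thread
      }
    });
  }

  private String doRequest(String query) {
    return "results for " + query; // stand-in for the real network call
  }
}
```

The same structure covers the other options: drop the cancel call and you get independent overlapping requests; check inFlight's state and return early and you get 'ignore new requests while one is running'.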