I'm using NAudio to play audio files through both the speaker and a headset on a Windows 10 laptop. I created two WaveOut instances and assigned the corresponding device numbers, but I can't hear the audio from the speaker while the headset is plugged in. Can anyone tell me how to solve this? Here's my code (it works fine on the headset or the speaker separately, but I want to hear sound from both at the same time):
// Mix the two files into one stereo provider: left.mp3 will feed the left
// output channel and right.mp3 the right output channel.
var input1 = new Mp3FileReader(PATH + "left.mp3");
var input2 = new Mp3FileReader(PATH + "right.mp3");
var waveProvider = new MultiplexingWaveProvider(new IWaveProvider[] { input1, input2 }, 2);

// A separate reader for the second output device.
var input3 = new Mp3FileReader(PATH + "left.mp3");

int channels = input1.Mp3WaveFormat.Channels;
Debug.WriteLine(channels);

// Input channels are numbered across all inputs: 0-1 come from input1, 2-3 from input2.
waveProvider.ConnectInputToOutput(0, 0); // input1 left  -> output left
waveProvider.ConnectInputToOutput(3, 1); // input2 right -> output right

// One WaveOut per output device.
WaveOut wave = new WaveOut();
wave.DeviceNumber = 1;
wave.Init(waveProvider);

WaveOut wave1 = new WaveOut();
wave1.DeviceNumber = 0;
wave1.Init(input3);

wave.Play();
wave1.Play();
I think the issue is that you don't have two soundcards; you have one soundcard that switches between playing sound out of the speakers and out of the headphones. If you bought a USB headset you'd have two soundcards, and should be able to play different sounds through each one separately.
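A quick way to check is to enumerate the output devices NAudio can see; with one switching soundcard there is only a single device for both WaveOut instances to target, while a USB headset shows up as a second device. A small diagnostic sketch:
// List every waveOut device and its name; with a USB headset attached you
// should see two separate entries to use as DeviceNumber.
for (int i = 0; i < WaveOut.DeviceCount; i++)
{
    var caps = WaveOut.GetCapabilities(i);
    Debug.WriteLine(i + ": " + caps.ProductName);
}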
I have successfully communicated with a single SPI device (an MCP3008). Is it possible to run multiple (4x) SPI devices on a Raspberry Pi 2 with Windows 10 IoT?
I'm thinking of manually connecting the CS (chip select) lines, activating the right one before calling the SPI function and deactivating it after the function is done.
Can that work on Windows 10 IoT?
What about configuring the SPI chip-select pin? Is it possible to change the pin number during SPI initialization?
Is there a smarter way to use multiple (4 x MCP3008) SPI devices on Windows 10 IoT?
(I'm planning to monitor 32 analogue signals, which will be input to my Raspberry Pi 2 running Windows 10 IoT.)
Thanks a lot!
Of course you can use as many as you want (as many as you have free GPIO pins). You just have to indicate which device you are talking to.
First, set up the SPI configuration, for example using chip select line 0:
// Namespaces: Windows.Devices.Spi, Windows.Devices.Enumeration
settings = new SpiConnectionSettings(0); // chip select line 0
settings.ClockFrequency = 1000000;       // 1 MHz
settings.Mode = SpiMode.Mode0;
String spiDeviceSelector = SpiDevice.GetDeviceSelector();
devices = await DeviceInformation.FindAllAsync(spiDeviceSelector);
_spi1 = await SpiDevice.FromIdAsync(devices[0].Id, settings);
You cannot use this pin for anything else afterwards! So now configure the output pins using the GpioPin class; these are the lines you will use to indicate the device.
// Manual chip-select lines, one per device; write them high (inactive)
// before switching to output so every device starts deselected.
GpioPin_19 = IoController.OpenPin(19);
GpioPin_19.Write(GpioPinValue.High);
GpioPin_19.SetDriveMode(GpioPinDriveMode.Output);
GpioPin_26 = IoController.OpenPin(26);
GpioPin_26.Write(GpioPinValue.High);
GpioPin_26.SetDriveMode(GpioPinDriveMode.Output);
GpioPin_13 = IoController.OpenPin(13);
GpioPin_13.Write(GpioPinValue.High);
GpioPin_13.SetDriveMode(GpioPinDriveMode.Output);
Always select the device before a transfer (example method):
private byte[] TransferSpi(byte[] writeBuffer, byte ChipNo)
{
    var readBuffer = new byte[writeBuffer.Length];

    // Pull the selected device's chip-select line low to activate it...
    if (ChipNo == 1) GpioPin_19.Write(GpioPinValue.Low);
    if (ChipNo == 2) GpioPin_26.Write(GpioPinValue.Low);
    if (ChipNo == 3) GpioPin_13.Write(GpioPinValue.Low);

    _spi1.TransferFullDuplex(writeBuffer, readBuffer);

    // ...and release it once the transfer is done.
    if (ChipNo == 1) GpioPin_19.Write(GpioPinValue.High);
    if (ChipNo == 2) GpioPin_26.Write(GpioPinValue.High);
    if (ChipNo == 3) GpioPin_13.Write(GpioPinValue.High);

    return readBuffer;
}
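As a usage sketch (not part of the original answer), a single-ended read of channel 0 from the MCP3008 on GPIO 19 (ChipNo 1) could look like this; the request and response byte layout follows the MCP3008 datasheet:
// Hypothetical example: read MCP3008 channel 0 on device 1.
byte channel = 0;
byte[] writeBuffer = { 0x01, (byte)(0x80 | (channel << 4)), 0x00 }; // start bit, then single-ended mode + channel
byte[] readBuffer = TransferSpi(writeBuffer, 1);
int value = ((readBuffer[1] & 0x03) << 8) | readBuffer[2]; // 10-bit result, 0..1023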
From: https://projects.drogon.net/understanding-spi-on-the-raspberry-pi/
The Raspberry Pi only implements master mode at this time and has 2 chip-select pins, so can control 2 SPI devices. (Although some devices have their own sub-addressing scheme so you can put more of them on the same bus)
I've successfully used 2 SPI devices in the DeviceTester project and Breathalyzer project within Jared Bienz's IoT Devices GitHub repo.
Notice that in each project, the SPI interface descriptor is declared explicitly in the ControllerName property for the ADC and the display. Detailed information about the Breathalyzer project can be found on my blog.
// ADC
// Create the manager
adcManager = new AdcProviderManager();

adcManager.Providers.Add(
    new MCP3208()
    {
        ChipSelectLine = 0,
        ControllerName = "SPI1",
    });

// Get the well-known controller collection back
adcControllers = await adcManager.GetControllersAsync();

// Create the display
var disp = new ST7735()
{
    ChipSelectLine = 0,
    ClockFrequency = 40000000, // Attempt to run at 40 MHz
    ControllerName = "SPI0",
    DataCommandPin = gpioController.OpenPin(12),
    DisplayType = ST7735DisplayType.RRed,
    ResetPin = gpioController.OpenPin(16),
    Orientation = DisplayOrientations.Portrait,
    Width = 128,
    Height = 160,
};
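Once the controllers come back, reads go through the standard Windows.Devices.Adc types. A rough sketch, assuming the MCP3208 registered above is the first controller returned and using channel 0 as an example:
// Hypothetical read from the ADC registered above.
var adc = adcControllers[0];        // first controller from GetControllersAsync()
var ch0 = adc.OpenChannel(0);
int raw = ch0.ReadValue();          // raw ADC counts
double ratio = ch0.ReadRatio();     // normalized 0.0 .. 1.0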
I'm not very familiar with Swift/Objective-C or the Cocoa environment and I've been having a lot of trouble figuring out how to send or listen for data from a USB device with CoreMIDI. I'm trying to send the message (144, 36, 5) to my MIDI controller (an Ableton Push) which I have accomplished before using the Bitwig Studio Scripting API. I haven't been able to find much on this other than Apple's docs and they haven't been particularly helpful for me. So far I've figured out how to get a list of devices and check out their names, but beyond that I'm stuck.
What I've written so far trying to send MIDI:
var pushDevice = MIDIGetDevice(2)
var secondEntity = MIDIDeviceGetEntity(pushDevice, 1)
var pushDestination = MIDIEntityGetDestination(secondEntity, 0)
var midiPort = MIDIPortRef()
let myData : [Byte] = [ Byte(144), Byte(36), Byte(5) ]
var packet = UnsafeMutablePointer<MIDIPacket>.alloc(1)
var pkList = UnsafeMutablePointer<MIDIPacketList>.alloc(1)
packet = MIDIPacketListInit(pkList)
packet = MIDIPacketListAdd(pkList, 1024, packet, 0, 3, myData)
MIDISend(midiPort, pushDestination, pkList)
I feel like a bit of a goof for not being able to figure this out, I imagine it has to be a simple solution that I'm just not able to figure out for some reason. I don't think I'm properly constructing the MIDIPacketList or the MIDIPort, and I have no idea how to go about creating a callback and listening for MIDI messages.
I figured out how to send MIDI data by grabbing the device via its unique ID. I'm not sure how memory management works in Swift, so keep that in mind. I will check back in later if I figure out how to properly create a callback and listen for MIDI input.
import Foundation
import CoreMIDI

var midiClient = MIDIClientRef()
var result = MIDIClientCreate("Foo Client", nil, nil, &midiClient)

var outputPort = MIDIPortRef()
result = MIDIOutputPortCreate(midiClient, "Output", &outputPort)

// Find the destination endpoint by its unique ID (kMIDIPropertyUniqueID).
var endPoint = MIDIObjectRef()
var foundObj = MIDIObjectType()
result = MIDIObjectFindByUniqueID(UNIQUE_ID, &endPoint, &foundObj)

// Note on, channel 1: status 144, note 36, velocity 5.
let midiData : [Byte] = [Byte(144), Byte(36), Byte(5)]

var pkt = UnsafeMutablePointer<MIDIPacket>.alloc(1)
var pktList = UnsafeMutablePointer<MIDIPacketList>.alloc(1)
pkt = MIDIPacketListInit(pktList)
pkt = MIDIPacketListAdd(pktList, 1024, pkt, 0, 3, midiData)
MIDISend(outputPort, endPoint, pktList)
The app saves the camera output into a .mov file, then converts it to FLV format and sends it (as AVPackets) to an RTMP server.
It alternates between two files: while one is being written with the camera output, the other is being sent.
My problem is that the audio/video gets out of sync after a while.
The first buffer sent is always 100% in sync, but after a while it gets messed up.
I believe it's a DTS/PTS problem:
if (isVideo)
{
    packet->stream_index = VIDEO_STREAM;
    packet->dts = packet->pts = videoPosition;
    videoPosition += packet->duration = FLV_TIMEBASE * packet->duration * videoCodec->ticks_per_frame * videoCodec->time_base.num / videoCodec->time_base.den;
}
else
{
    packet->stream_index = AUDIO_STREAM;
    packet->dts = packet->pts = audioPosition;
    audioPosition += packet->duration = FLV_TIMEBASE * packet->duration / audioRate;
    //NSLog(@"audio position = %lld", audioPosition);
}

packet->pos = -1;
packet->convergence_duration = AV_NOPTS_VALUE;

// This sometimes fails without being a critical error, so no exception is raised
if ((code = av_interleaved_write_frame(file, packet)))
{
    NSLog(@"Streamer::Couldn't write frame");
}

av_free_packet(packet);
You can study this sample: http://unick-soft.ru/art/files/ffmpegEncoder-vs2008.zip
Note that this sample is for Windows.
In this sample, the pts is rescaled from the codec time base to the stream time base:
if (pVideoCodec->coded_frame->pts != AV_NOPTS_VALUE)
{
    pkt.pts = av_rescale_q(pVideoCodec->coded_frame->pts,
                           pVideoCodec->time_base, pVideoStream->time_base);
}
I was experiencing a similar issue when switching out the AVAssetWriters, and noticed that it went away if I only started using the new AVAssetWriter once I got a video sample:
https://medium.com/@brandon.kobel/ios-seamless-video-chunks-4383a5a3a874
I've just added an SWF object to display the user's webcam, but it won't let me make the webcam smaller than 320 x 240.
Is this the lowest size I can go?
For reference, here's the code:
import flash.media.Camera;
import flash.media.Video;
var camera:Camera = Camera.getCamera();
var vid:Video = new Video(320, 240);
camera.setQuality(100, 300);
vid.smoothing = true;
vid.attachCamera(camera);
vid.x = stage.stageWidth/2 - vid.width/2;
vid.y = 0;
addChild(vid);
Thanks.
Have you tried setMode? It takes the requested width, height, frame rate, and a favorArea flag:
camera.setMode(videoWidth, videoHeight, fps, favorArea);
http://help.adobe.com/en_US/FlashPlatform/reference/actionscript/3/flash/media/Camera.html?filter_flash=cs5&filter_flashplayer=10.2&filter_air=2.6
I'm currently getting a float array using DirectSound to record audio.
Now I would like to play that float array using XAudio2 (via SlimDX), but I'm not sure how, since the SlimDX sample plays a .wav file.
Here is how the sample does it:
XAudio2 device = new XAudio2();
MasteringVoice masteringVoice = new MasteringVoice(device);

// Load the .wav file into a WaveStream and queue it on a SourceVoice.
var s = System.IO.File.OpenRead(fileName);
WaveStream stream = new WaveStream(s);
s.Close();

AudioBuffer buffer = new AudioBuffer();
buffer.AudioData = stream;
buffer.AudioBytes = (int)stream.Length;
buffer.Flags = BufferFlags.EndOfStream;

SourceVoice sourceVoice = new SourceVoice(device, stream.Format);
sourceVoice.SubmitSourceBuffer(buffer);
sourceVoice.Start();

// loop until the sound is done playing
while (sourceVoice.State.BuffersQueued > 0)
{
    if (GetAsyncKeyState(VK_ESCAPE) != 0)
        break;

    Thread.Sleep(10);
}

// wait until the escape key is released
while (GetAsyncKeyState(VK_ESCAPE) != 0)
    Thread.Sleep(10);

// cleanup the voice
buffer.Dispose();
sourceVoice.Dispose();
stream.Dispose();
Basically, what I would like to know is: how do I play a float array using SlimDX?
Thanks in advance.
I'm not an expert on audio stuff, but I do know that you can create a WaveFormat of IeeeFloat. Fill in all the other information, write your data to a DataStream, and give that to the AudioBuffer. Then you can call Submit as normal.
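A minimal sketch of that idea, assuming mono 32-bit float samples at 44.1 kHz (adjust Channels and SamplesPerSecond to your capture format; GetRecordedSamples is a hypothetical stand-in for your DirectSound capture, and device is the XAudio2 instance from the sample above):
// Describe the raw data: 32-bit IEEE-float PCM, mono, 44.1 kHz.
var format = new WaveFormat();
format.FormatTag = WaveFormatTag.IeeeFloat;
format.Channels = 1;
format.SamplesPerSecond = 44100;
format.BitsPerSample = 32;
format.BlockAlignment = (short)(format.Channels * format.BitsPerSample / 8);
format.AverageBytesPerSecond = format.SamplesPerSecond * format.BlockAlignment;

// Copy the float array into a DataStream and wrap it in an AudioBuffer.
float[] samples = GetRecordedSamples(); // hypothetical: your captured data
var stream = new DataStream(samples.Length * sizeof(float), true, true);
stream.WriteRange(samples);
stream.Position = 0;

var buffer = new AudioBuffer();
buffer.AudioData = stream;
buffer.AudioBytes = (int)stream.Length;
buffer.Flags = BufferFlags.EndOfStream;

// Create a SourceVoice with the matching format and play the buffer.
var sourceVoice = new SourceVoice(device, format);
sourceVoice.SubmitSourceBuffer(buffer);
sourceVoice.Start();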