I'm not very familiar with Swift/Objective-C or the Cocoa environment and I've been having a lot of trouble figuring out how to send or listen for data from a USB device with CoreMIDI. I'm trying to send the message (144, 36, 5) to my MIDI controller (an Ableton Push) which I have accomplished before using the Bitwig Studio Scripting API. I haven't been able to find much on this other than Apple's docs and they haven't been particularly helpful for me. So far I've figured out how to get a list of devices and check out their names, but beyond that I'm stuck.
What I've written so far trying to send MIDI:
var pushDevice = MIDIGetDevice(2)
var secondEntity = MIDIDeviceGetEntity(pushDevice, 1)
var pushDestination = MIDIEntityGetDestination(secondEntity, 0)
var midiPort = MIDIPortRef()
let myData : [Byte] = [ Byte(144), Byte(36), Byte(5) ]
var packet = UnsafeMutablePointer<MIDIPacket>.alloc(1)
var pkList = UnsafeMutablePointer<MIDIPacketList>.alloc(1)
packet = MIDIPacketListInit(pkList)
packet = MIDIPacketListAdd(pkList, 1024, packet, 0, 3, myData)
MIDISend(midiPort, pushDestination, pkList)
I feel like a bit of a goof for not being able to figure this out; I imagine the solution is simple and I'm just missing it. I don't think I'm properly constructing the MIDIPacketList or the MIDIPort, and I have no idea how to go about creating a callback and listening for MIDI messages.
I figured out how to send MIDI data by grabbing the device via its unique ID. I'm not sure how memory management works in Swift, so keep that in mind. I will check back in later if I figure out how to properly create a callback and listen for MIDI input.
import Foundation
import CoreMIDI
var midiClient = MIDIClientRef()
var result = MIDIClientCreate("Foo Client", nil, nil, &midiClient)
var outputPort = MIDIPortRef()
result = MIDIOutputPortCreate(midiClient, "Output", &outputPort);
var endPoint = MIDIObjectRef()
var foundObj = MIDIObjectType()
// UNIQUE_ID is a placeholder for the destination endpoint's kMIDIPropertyUniqueID value
result = MIDIObjectFindByUniqueID(UNIQUE_ID, &endPoint, &foundObj)
// Build a packet list holding the single 3-byte note-on message.
// Note: the 1024 below overstates what .alloc(1) actually allocates, but a 3-byte message fits comfortably.
var pkt = UnsafeMutablePointer<MIDIPacket>.alloc(1)
var pktList = UnsafeMutablePointer<MIDIPacketList>.alloc(1)
let midiData : [Byte] = [Byte(144), Byte(36), Byte(5)]
pkt = MIDIPacketListInit(pktList)
pkt = MIDIPacketListAdd(pktList, 1024, pkt, 0, 3, midiData)
MIDISend(outputPort, endPoint, pktList)
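For the listening side, here is a minimal sketch using the block-based input API (MIDIInputPortCreateWithBlock, available from OS X 10.11, written in newer Swift syntax than the code above). It is untested against the Push and only reads the first packet of each list, so treat it as a starting point rather than a finished receiver.
import CoreMIDI

var inClient = MIDIClientRef()
MIDIClientCreate("Foo Client In" as CFString, nil, nil, &inClient)

var inputPort = MIDIPortRef()
MIDIInputPortCreateWithBlock(inClient, "Input" as CFString, &inputPort) { packetList, _ in
    // For brevity, only the first packet of each list is inspected here
    let packet = packetList.pointee.packet
    withUnsafeBytes(of: packet.data) { raw in
        let bytes = Array(raw.prefix(Int(packet.length)))
        print("MIDI in:", bytes)   // [status, data1, data2], e.g. a note-on from a pad
    }
}

// Connect every available source; swap in the Push's source endpoint if you know it
for i in 0..<MIDIGetNumberOfSources() {
    MIDIPortConnectSource(inputPort, MIDIGetSource(i), nil)
}

// Keep the run loop alive (e.g. in a command-line tool) so the callback can fire
CFRunLoopRun()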
I'm new to Stack Overflow and this will be my first question. My HTML5 player works fine on Internet Explorer but doesn't work on Google Chrome. I'm using a PlayReady stream which is encrypted with CENC. How can I make this work on Chrome? I don't have access to the servers; they're run by third parties.
Thanks
Technically it is possible to support Widevine while your stream is PlayReady; this works because you use CENC. Since you don't have access to the servers, as you mentioned, you can use a technique called PSSH forging. It basically swaps in a Widevine PSSH box so Chrome thinks the stream is Widevine; because the content is CENC-encrypted, the Widevine CDM will decrypt the video and the stream will play.
For the sake of ease, I'm going to assume you use DASH.
Here we have a Widevine PSSH box:
const widevinePSSH = '0000005c7073736800000000edef8ba979d64acea3c827dcd51d21ed0000003c080112101c773709e5ab359cbed9512bc27755fa1a087573702d63656e63221848486333436557724e5a792b32564572776e64562b673d3d2a003200';
You need to replace 1c773709e5ab359cbed9512bc27755fa with your KID.
And then, at the part where you insert your segment into the SourceBuffer (before appendSegment), you can do the following:
// args: the arguments of the intercepted appendBuffer call (originalSourceBufferAppendBuffer is the unpatched method)
let segment = args[0];
segment = new Uint8Array(segment);
// psshKid is your KID, in the same hex form as the value it replaces
const newPssh = widevinePSSH.replace('1c773709e5ab359cbed9512bc27755fa', psshKid);
// Loose check for an existing 'pssh' box (70737368 in hex): looks for its bytes in order
const subArray = new Uint8Array(DRMUtils.stringToArrayBuffer('70737368'));
let index = 0;
const found = subArray.every((item) => {
  const masterIndex = segment.indexOf(item, index);
  if (~masterIndex) {
    index = masterIndex;
    return true;
  }
  return false;
});
if (found) {
  // The segment already carries a PSSH box, so append it unmodified
  return originalSourceBufferAppendBuffer.apply(this, [].slice.call(args));
}
segment = DRMUtils.uInt8ArrayToHex(segment);
// Inject the forged signal
// 70737368 = pssh
segment = segment.substr(0, segment.lastIndexOf('70737368') - 8) + newPssh + segment.substr(segment.lastIndexOf('70737368') - 8);
// Fix the MOOV atom length
// 6d6f6f76 = moov
const header = segment.substr(0, segment.indexOf('6d6f6f76') - 8);
const payload = segment.substr(segment.indexOf('6d6f6f76') - 8);
const newLength = Math.floor(payload.length / 2);
segment = header + DRMUtils.intToHex(newLength, 8) + payload.substr(8);
segment = decode(segment).b; // decode() turns the hex string back into a byte buffer (helper not shown)
Sadly, I can only share bits and pieces, but this is roughly what you should do to get it working.
We are on akka-stream-experimental_2.11 1.0.
Inspired by the example, we wrote a TCP receiver as follows:
def bind(address: String, port: Int, target: ActorRef)
(implicit system: ActorSystem, actorMaterializer: ActorMaterializer): Future[ServerBinding] = {
val sink = Sink.foreach[Tcp.IncomingConnection] { conn =>
val serverFlow = Flow[ByteString]
.via(Framing.delimiter(ByteString("\n"), maximumFrameLength = 256, allowTruncation = true))
.map(message => {
target ? new Message(message); ByteString.empty
})
conn handleWith serverFlow
}
val connections = Tcp().bind(address, port)
connections.to(sink).run()
}
However, our intention was to have the receiver not respond at all and only sink the message. (The TCP message publisher does not care about a response.)
Is it even possible to not respond at all, given that akka.stream.scaladsl.Tcp.IncomingConnection takes a flow of type Flow[ByteString, ByteString, Unit]?
If yes, some guidance would be much appreciated. Thanks in advance.
One attempt, shown below, passes my unit tests, but I'm not sure if it's the best idea:
def bind(address: String, port: Int, target: ActorRef)
(implicit system: ActorSystem, actorMaterializer: ActorMaterializer): Future[ServerBinding] = {
val sink = Sink.foreach[Tcp.IncomingConnection] { conn =>
val targetSubscriber = ActorSubscriber[Message](system.actorOf(Props(new TargetSubscriber(target))))
val targetSink = Flow[ByteString]
.via(Framing.delimiter(ByteString("\n"), maximumFrameLength = 256, allowTruncation = true))
.map(Message(_))
.to(Sink(targetSubscriber))
conn.flow.to(targetSink).runWith(Source(Promise().future))
}
val connections = Tcp().bind(address, port)
connections.to(sink).run()
}
You are on the right track. To keep the possibility of closing the connection at some point, you may want to keep the promise and complete it later on. Once it is completed with an element, that element is published by the source. However, as you don't want any element to be published on the connection, you can use drop(1) to make sure the source will never emit anything.
Here's an updated version of your example (untested):
val promise = Promise[ByteString]()
// this source will complete when the promise is fulfilled
// or it will complete with an error if the promise is completed with an error
val completionSource = Source(promise.future).drop(1)
completionSource // only used to complete later
.via(conn.flow) // I reordered the flow for better readability (arguably)
.runWith(targetSink)
// to close the connection later complete the promise:
def closeConnection() = promise.success(ByteString.empty) // dummy element, will be dropped
// alternatively to fail the connection later, complete with an error
def failConnection() = promise.failure(new RuntimeException)
This post is also posted on The Amazing Audio Engine forum.
Hi everyone, I am new to The Amazing Audio Engine and iOS dev, and have been trying to figure out how to get the BPM of a track.
So far I have found two articles on offline rendering on the forum:
http://forum.theamazingaudioengine.com/discussion/comment/1743/#Comment_1743
http://forum.theamazingaudioengine.com/discussion/comment/649#Comment_649
As far as I know the AEAudioControllerRenderMainOutput function is only correctly implemented in this fork.
I am trying to do offline rendering to process a track and then use the algorithm described here (JavaScript) and implemented here.
So far I'm loading this fork, and I am using Swift (I am part of Make School Summer Academy at the moment, which teaches Swift).
When playing a track, this code works for me (no offline rendering!):
audioController = AEAudioController(audioDescription: AEAudioController.nonInterleavedFloatStereoAudioDescription())
let file = NSBundle.mainBundle().URLForResource("track", withExtension: "m4a")
let channel: AnyObject! = AEAudioFilePlayer.audioFilePlayerWithURL(file, audioController: audioController, error: nil)
let receiver = AEBlockAudioReceiver { (source, time, frames, audioBufferList) -> Void in
let leftSamples = UnsafeMutablePointer<Float>(audioBufferList[0].mBuffers.mData)
// Advance the buffer sizeof(float) * 512
let rightSamples = UnsafeMutablePointer<Float>(audioBufferList[0].mBuffers.mData) + 512
println("leftSamples: \(leftSamples) rightSamples: \(rightSamples)")
}
audioController.addChannels([channel])
audioController.addOutputReceiver(receiver)
audioController.start()
Trying offline rendering
This is the code I am trying to run while using this fork:
audioController = AEAudioController(audioDescription: AEAudioController.nonInterleaved16BitStereoAudioDescription())
let file = NSBundle.mainBundle().URLForResource("track", withExtension: "mp3")
let channel: AnyObject! = AEAudioFilePlayer.audioFilePlayerWithURL(file, audioController: audioController, error: nil)
audioController.addChannels([channel])
audioController.start(nil)
audioController.stop()
var t = AudioTimeStamp()
let bufferLength: UInt32 = 4096
var buffer = AEAllocateAndInitAudioBufferList(audioController.audioDescription, Int32(bufferLength))
AEAudioControllerRenderMainOutput(audioController, t, bufferLength, buffer)
var renderDuration: NSTimeInterval = channel.duration
var sampleRate: Float64 = audioController.audioDescription.mSampleRate
var lengthInFrames: UInt32 = UInt32(renderDuration * sampleRate)
var songBuffer: [Float64]
t.mFlags = UInt32(kAudioTimeStampSampleTimeValid)
var frequencyAnalyzer = FrequencyAnalyzer()
println("renderDuration \(renderDuration)")
var outIsOpen = Boolean()
AUGraphClose(audioController.audioGraph)
AUGraphIsOpen(audioController.audioGraph, &outIsOpen)
println("AUGraphIsOpen: \(outIsOpen)")
for (var i: UInt32 = 0; i < lengthInFrames; i += bufferLength) {
AEAudioControllerRenderMainOutput(audioController, t, bufferLength, buffer);
t.mSampleTime += Float64(bufferLength)
println(t.mSampleTime)
let leftSamples = UnsafeMutablePointer<Int16>(buffer[0].mBuffers.mData)
let rightSamples = UnsafeMutablePointer<Int16>(buffer[0].mBuffers.mData) + 512
println("leftSamples: \(leftSamples.memory) rightSamples: \(rightSamples.memory)")
}
AEFreeAudioBufferList(buffer)
AUGraphOpen(audioController.audioGraph)
audioController.start(nil)
audioController.stop()
Offline rendering is not working for me at the moment. The second example is not working; it's giving me a lot of mixed errors which I don't understand.
A very common one is inside the channelAudioProducer function on this line:
// Tell mixer/mixer's converter unit to render into audio
status = AudioUnitRender(group->converterUnit ? group->converterUnit : group->mixerAudioUnit, arg->ioActionFlags, &arg->originalTimeStamp, 0, *frames, audio);
It gives me EXC_BAD_ACCESS (code=EXC_I386_GPFLT). Among other errors, this one is very common.
I'm sorry, I'm a total noob in this field, and there is some stuff I don't really understand. Should I use nonInterleaved16BitStereoAudioDescription or nonInterleavedFloatStereoAudioDescription? How is the mData laid out?
I would love some help on this since I'm kind of lost at the moment. When you answer, please try to explain it as fully as you can; I'm new to this stuff.
NOTE: Posting code in Objective-C is fine if you don't know Swift.
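Regarding the mData question: with the non-interleaved descriptions, each channel gets its own AudioBuffer inside the AudioBufferList, so the right channel lives in the second buffer's mData rather than at a fixed sample offset from the left pointer. Below is a rough sketch of reading both channels out of the buffer allocated by AEAllocateAndInitAudioBufferList; it is written in current Swift syntax, assumes nonInterleavedFloatStereoAudioDescription, and is not tested against this fork.
let abl = UnsafeMutableAudioBufferListPointer(buffer)            // buffer: UnsafeMutablePointer<AudioBufferList>
let frameCount = Int(abl[0].mDataByteSize) / MemoryLayout<Float>.size
let left  = abl[0].mData!.assumingMemoryBound(to: Float.self)    // channel 0
let right = abl[1].mData!.assumingMemoryBound(to: Float.self)    // channel 1
for i in 0..<frameCount {
    // hand left[i] and right[i] to the BPM analysis here
    _ = (left[i], right[i])
}
With nonInterleaved16BitStereoAudioDescription the layout is the same, but the pointers should be bound to Int16 and the samples scaled (for example Float(sample) / 32768.0) before analysis.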
The app saves the camera output into a .mov file, then turns it into FLV format that is sent by AVPacket to an RTMP server.
It switches between two files each time: one is being written with the camera output while the other one is being sent.
My problem is that the audio/video gets out of sync after a while.
The first buffer sent is always 100% in sync, but after a while it gets messed up.
I believe it's a DTS-PTS problem:
if(isVideo)
{
    packet->stream_index = VIDEO_STREAM;
    packet->dts = packet->pts = videoPosition;
    videoPosition += packet->duration = FLV_TIMEBASE * packet->duration * videoCodec->ticks_per_frame * videoCodec->time_base.num / videoCodec->time_base.den;
}
else
{
    packet->stream_index = AUDIO_STREAM;
    packet->dts = packet->pts = audioPosition;
    audioPosition += packet->duration = FLV_TIMEBASE * packet->duration / audioRate;
    //NSLog(@"audio position = %lld", audioPosition);
}
packet->pos = -1;
packet->convergence_duration = AV_NOPTS_VALUE;
// This sometimes fails without being a critical error, so no exception is raised
if((code = av_interleaved_write_frame(file, packet)))
{
    NSLog(@"Streamer::Couldn't write frame");
}
av_free_packet(packet);
You can research this sample: http://unick-soft.ru/art/files/ffmpegEncoder-vs2008.zip
But this sample is for Windows.
In this sample I use pts only for the audio stream:
if (pVideoCodec->coded_frame->pts != AV_NOPTS_VALUE)
{
pkt.pts = av_rescale_q(pVideoCodec->coded_frame->pts,
pVideoCodec->time_base, pVideoStream->time_base);
}
I was experiencing a similar issue when switching out the AVAssetWriters, and noticed that it went away if I only started using the new AVAssetWriter when I got a video sample:
https://medium.com/@brandon.kobel/ios-seamless-video-chunks-4383a5a3a874
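To make that concrete, here is a hypothetical Swift sketch of the idea: the switch to the next AVAssetWriter is flagged, but only carried out when a video sample arrives, so a new chunk never starts with audio-only samples. Everything besides the AVFoundation types (the ChunkedRecorder class, pendingSwap, startNextChunkWriter, writerInput(for:)) is a placeholder, not code from the linked article.
import AVFoundation

final class ChunkedRecorder: NSObject,
    AVCaptureVideoDataOutputSampleBufferDelegate,
    AVCaptureAudioDataOutputSampleBufferDelegate {

    private var pendingSwap = false

    // Call this when it is time to roll over to the next file
    func requestChunkSwitch() { pendingSwap = true }

    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        let isVideo = output is AVCaptureVideoDataOutput

        // Only perform the pending switch when this sample is video
        if pendingSwap && isVideo {
            startNextChunkWriter(firstVideoSample: sampleBuffer)
            pendingSwap = false
        }
        _ = writerInput(for: output)?.append(sampleBuffer)
    }

    // Placeholder hooks standing in for the real AVAssetWriter management
    private func startNextChunkWriter(firstVideoSample: CMSampleBuffer) { /* ... */ }
    private func writerInput(for output: AVCaptureOutput) -> AVAssetWriterInput? { return nil }
}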
I'm currently getting a float array of recorded audio using DirectSound.
Now I would like to play that float array using XAudio2 (via SlimDX as well), but I'm not sure what to do, since the SlimDX sample plays a .wav file.
Here is how they do it:
XAudio2 device = new XAudio2();
MasteringVoice masteringVoice = new MasteringVoice(device);
var s = System.IO.File.OpenRead(fileName);
WaveStream stream = new WaveStream(s);
s.Close();
AudioBuffer buffer = new AudioBuffer();
buffer.AudioData = stream;
buffer.AudioBytes = (int)stream.Length;
buffer.Flags = BufferFlags.EndOfStream;
SourceVoice sourceVoice = new SourceVoice(device, stream.Format);
sourceVoice.SubmitSourceBuffer(buffer);
sourceVoice.Start();
// loop until the sound is done playing
while (sourceVoice.State.BuffersQueued > 0)
{
    if (GetAsyncKeyState(VK_ESCAPE) != 0)
        break;
    Thread.Sleep(10);
}
// wait until the escape key is released
while (GetAsyncKeyState(VK_ESCAPE) != 0)
    Thread.Sleep(10);
// cleanup the voice
buffer.Dispose();
sourceVoice.Dispose();
stream.Dispose();
Basically, what I would like to know is: how do I play a float array using SlimDX?
Thanks in advance
I'm not an expert on audio stuff, but I do know that you can create a WaveFormat with IeeeFloat encoding. Fill in all the other information, then write your data to a DataStream, give that to the AudioBuffer, and call SubmitSourceBuffer as normal.