Size for webcam in Flash? - resize

I've just made an SWF to display the user's webcam; however, it won't let me make the webcam smaller than 320 x 240.
Is this the lowest size I can go?
Just for reference, here is the code:
import flash.media.Camera;
import flash.media.Video;
var camera:Camera = Camera.getCamera();
var vid:Video = new Video(320, 240);
camera.setQuality(100, 300);
vid.smoothing = true;
vid.attachCamera(camera);
vid.x = stage.stageWidth/2 - vid.width/2;
vid.y = 0;
addChild(vid);
Thanks.

Have you tried this?
camera.setMode(videoWidth, videoHeight, fps, favorArea);
http://help.adobe.com/en_US/FlashPlatform/reference/actionscript/3/flash/media/Camera.html?filter_flash=cs5&filter_flashplayer=10.2&filter_air=2.6
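For reference, a minimal sketch of that approach (the 160 x 120 request is an assumption; the driver picks the closest capture mode it actually supports, so some cameras will still fall back to 320 x 240):
import flash.media.Camera;
import flash.media.Video;
var camera:Camera = Camera.getCamera();
// Ask the driver for a smaller capture mode before attaching the camera.
camera.setMode(160, 120, 30); // width, height, fps
// Size the Video object to whatever mode was actually granted.
var vid:Video = new Video(camera.width, camera.height);
vid.attachCamera(camera);
addChild(vid);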

Related

stats.js shows FPS 0~2, render movement too slow

I'm a beginner with three.js and I'm using it for a BIM project.
When I load a glTF file of ~25 MB, I can barely move the object, and the stats.js monitor shows an FPS of 0-2 at most.
gltf file : https://github.com/xeolabs/xeogl/tree/master/examples/models/gltf/schependomlaan
I'm using three.js with Vue.js.
//package.json
"stats.js": "^0.17.0",
"three": "^0.109.0",
import * as THREE from 'three';
import Stats from 'stats.js';
import { GLTFLoader } from 'three/examples/jsm/loaders/GLTFLoader.js';
import { DRACOLoader } from 'three/examples/jsm/loaders/DRACOLoader.js';
import { OrbitControls } from 'three/examples/jsm/controls/OrbitControls.js';
this.scene = new THREE.Scene();
this.stats = new Stats();
this.stats.showPanel( 0 ); // 0: fps, 1: ms, 2: mb, 3+: custom
let div = document.createElement('div')
div.appendChild(this.stats.dom)
div.style.position = 'absolute';
div.style.top = 0;
div.style.left = 0;
document.getElementsByClassName('gltfViewer')[0].appendChild( div );
// Camera
this.camera = new THREE.PerspectiveCamera( 45, window.innerWidth / window.innerHeight, 1, 1500 );
this.camera.position.set( this.pos, this.pos, this.pos );
// renderer
this.raycaster = new THREE.Raycaster();
this.renderer = new THREE.WebGLRenderer({ canvas: document.getElementById('gltfViewerCanvas'), alpha: false });
this.renderer.setClearColor( 0xefefef );
this.renderer.setPixelRatio( window.devicePixelRatio );
this.renderer.setSize(window.innerWidth, window.innerHeight);
// adding controls
this.controls = new OrbitControls( this.camera, this.renderer.domElement );
this.controls.dampingFactor = 0.1;
this.controls.rotateSpeed = 0.12;
this.controls.enableDamping = true;
this.controls.update();
window.addEventListener('resize', _ => this.render());
this.controls.addEventListener('change', _ => this.render());
// light
var ambientLight = new THREE.AmbientLight( 0xcccccc );
this.scene.add( ambientLight );
var directionalLight = new THREE.DirectionalLight( 0xffffff );
directionalLight.position.set( 0, 1, 1 ).normalize();
this.scene.add( directionalLight );
// loading gltf file
// Instantiate a loader
this.gltfLoader = new GLTFLoader();
// Optional: Provide a DRACOLoader instance to decode compressed mesh data
this.dracoLoader = new DRACOLoader();
this.dracoLoader.setDecoderPath( 'three/examples/js/libs/draco' );
this.gltfLoader.setDRACOLoader( this.dracoLoader );
// Load a glTF resource
this.gltfLoader.load( this.src, this.onGLTFLoaded, this.onGLTFLoading, this.onGLTFLoadingError );
//onGLTFLoaded()
this.scene.add( optimizedGltf.scene );
// gltf.scene.getObjectById(404).visible = false;
this.listGltfObjects(gltf);
this.render();
// render ()
this.renderer.render( this.scene, this.camera );
this.stats.update();
// on mounted component :
animate()
// animate()
this.stats.begin()
this.render();
this.stats.end();
Even after applying Draco compression using https://github.com/AnalyticalGraphicsInc/gltf-pipeline, nothing changes.
Thanks
On filesize —
Draco compression reduces network size, but not the final amount of uncompressed data that must be sent to your GPU and rendered. If your original mesh was 100 MB and you compress it to 25 MB, you will still get the framerate of the original 100 MB mesh. Aside: using the -b option of glTF-Pipeline will reduce the size by another 50%, to 13 MB, but again doesn't affect FPS.
On framerate —
This model contains 4,280 meshes [1], each requiring a GPU draw call. That is the source of your low FPS, and unfortunately it's a common problem in BIM models. You'll need to merge these meshes (in a program like Blender, or after loading in three.js, as in the sketch below) to as few as possible. A model like this should require fewer than 100 draw calls, or even as few as one.
[1] To see this, try opening the model on https://gltf-viewer.donmccurdy.com/ and opening the JavaScript console. You should see a printout of the scene graph, which will contain many different meshes.
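If you want to try the merge after loading in three.js, here is a rough sketch (an assumption-laden example rather than drop-in code: it flattens everything into one material, and BufferGeometryUtils.mergeBufferGeometries, which ships with three's examples in r109, requires all geometries to share the same attribute layout):
import { BufferGeometryUtils } from 'three/examples/jsm/utils/BufferGeometryUtils.js';
// Inside onGLTFLoaded(gltf): bake each mesh into world space and merge them,
// so the whole model renders with a single draw call.
const geometries = [];
gltf.scene.updateMatrixWorld(true);
gltf.scene.traverse(node => {
  if (node.isMesh) {
    const geom = node.geometry.clone();
    geom.applyMatrix(node.matrixWorld); // r109 API; newer releases use applyMatrix4
    geometries.push(geom);
  }
});
const merged = BufferGeometryUtils.mergeBufferGeometries(geometries, false);
this.scene.add(new THREE.Mesh(merged, new THREE.MeshStandardMaterial({ color: 0xcccccc })));
this.render();
The trade-off is that you lose per-object visibility and picking, so for BIM workflows it is common to merge per storey or per material rather than collapsing the whole model into a single mesh.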

Output audio to speaker and headset at the same time?

I'm using NAudio to output audio files to both the speakers and a headset on a Windows 10 laptop. I created two WaveOut instances and assigned the corresponding device numbers, but I cannot hear the audio from the speakers when the headset is plugged in. Can anyone let me know how to solve this? Here's my code (it works fine on the headset or the speakers separately, but I want to hear the sound from both of them at the same time):
var input1 = new Mp3FileReader(PATH + "left.mp3");
var input2 = new Mp3FileReader(PATH + "right.mp3");
var waveProvider = new MultiplexingWaveProvider(new IWaveProvider[] { input1, input2 }, 2);
var input3 = new Mp3FileReader(PATH + "left.mp3");
int channel = ((Mp3FileReader)input1).Mp3WaveFormat.Channels;
Debug.WriteLine(channel);
waveProvider.ConnectInputToOutput(0, 0);
waveProvider.ConnectInputToOutput(3, 1);
WaveOut wave = new WaveOut();
wave.DeviceNumber = 1;
wave.Init(waveProvider);
WaveOut wave1 = new WaveOut();
wave1.DeviceNumber = 0;
wave1.Init(input3);
wave.Play();
wave1.Play();
I think the issue is that you don't have two soundcards, you have one soundcard that is switching between playing sound out of the speakers and headphones. If you bought a USB headset then you'd have two soundcards, and should be able to play different sounds through each one separately.
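One quick way to check is to enumerate the playback devices NAudio sees (a small sketch using WaveOut.DeviceCount and GetCapabilities; if the speakers and headset share one soundcard, they usually show up as a single device here):
using System;
using NAudio.Wave;

class ListPlaybackDevices
{
    static void Main()
    {
        // One entry per waveOut playback device the OS exposes.
        for (int i = 0; i < WaveOut.DeviceCount; i++)
        {
            var caps = WaveOut.GetCapabilities(i);
            Console.WriteLine("Device " + i + ": " + caps.ProductName);
        }
    }
}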

Offline rendering with The Amazing Audio Engine

This post is also posted on The Amazing Audio Engine forum.
Hi everyone, I am new to The Amazing Audio Engine and iOS dev, and have been trying to figure out how to get the BPM of a track.
So far I have found two articles on offline rendering on the forum:
http://forum.theamazingaudioengine.com/discussion/comment/1743/#Comment_1743
http://forum.theamazingaudioengine.com/discussion/comment/649#Comment_649
As far as I know the AEAudioControllerRenderMainOutput function is only correctly implemented in this fork.
I am trying to do offline rendering to process a track and then use the algorithm described here (JavaScript) and implemented here.
So far I'm loading this fork, and I am using Swift (I am part of Make School Summer Academy at the moment, which teaches Swift).
When playing a track this code works for me (No offline rendering!)
audioController = AEAudioController(audioDescription: AEAudioController.nonInterleavedFloatStereoAudioDescription())
let file = NSBundle.mainBundle().URLForResource("track", withExtension: "m4a")
let channel: AnyObject! = AEAudioFilePlayer.audioFilePlayerWithURL(file, audioController: audioController, error: nil)
let receiver = AEBlockAudioReceiver { (source, time, frames, audioBufferList) -> Void in
    let leftSamples = UnsafeMutablePointer<Float>(audioBufferList[0].mBuffers.mData)
    // Advance the buffer sizeof(float) * 512
    let rightSamples = UnsafeMutablePointer<Float>(audioBufferList[0].mBuffers.mData) + 512
    println("leftSamples: \(leftSamples) rightSamples: \(rightSamples)")
}
audioController.addChannels([channel])
audioController.addOutputReceiver(receiver)
audioController.start()
Trying offline rendering
This is the code I am trying to run while I am using this fork
audioController = AEAudioController(audioDescription: AEAudioController.nonInterleaved16BitStereoAudioDescription())
let file = NSBundle.mainBundle().URLForResource("track", withExtension: "mp3")
let channel: AnyObject! = AEAudioFilePlayer.audioFilePlayerWithURL(file, audioController: audioController, error: nil)
audioController.addChannels([channel])
audioController.start(nil)
audioController.stop()
var t = AudioTimeStamp()
let bufferLength: UInt32 = 4096
var buffer = AEAllocateAndInitAudioBufferList(audioController.audioDescription, Int32(bufferLength))
AEAudioControllerRenderMainOutput(audioController, t, bufferLength, buffer)
var renderDuration: NSTimeInterval = channel.duration
var sampleRate: Float64 = audioController.audioDescription.mSampleRate
var lengthInFrames: UInt32 = UInt32(renderDuration * sampleRate)
var songBuffer: [Float64]
t.mFlags = UInt32(kAudioTimeStampSampleTimeValid)
var frequencyAnalyzer = FrequencyAnalyzer()
println("renderDuration \(renderDuration)")
var outIsOpen = Boolean()
AUGraphClose(audioController.audioGraph)
AUGraphIsOpen(audioController.audioGraph, &outIsOpen)
println("AUGraphIsOpen: \(outIsOpen)")
for (var i: UInt32 = 0; i < lengthInFrames; i += bufferLength) {
    AEAudioControllerRenderMainOutput(audioController, t, bufferLength, buffer);
    t.mSampleTime += Float64(bufferLength)
    println(t.mSampleTime)
    let leftSamples = UnsafeMutablePointer<Int16>(buffer[0].mBuffers.mData)
    let rightSamples = UnsafeMutablePointer<Int16>(buffer[0].mBuffers.mData) + 512
    println("leftSamples: \(leftSamples.memory) rightSamples: \(rightSamples.memory)")
}
AEFreeAudioBufferList(buffer)
AUGraphOpen(audioController.audioGraph)
audioController.start(nil)
audioController.stop()
Offline rendering is not working for me at the moment. The second example doesn't work; it gives me a lot of mixed errors which I don't understand.
A very common one is inside the channelAudioProducer function on this line:
// Tell mixer/mixer's converter unit to render into audio
status = AudioUnitRender(group->converterUnit ? group->converterUnit : group->mixerAudioUnit, arg->ioActionFlags, &arg->originalTimeStamp, 0, *frames, audio);
It gives me EXC_BAD_ACCESS (code=EXC_I386_GPFLT). Among other errors this one is very common.
I am sorry, I am a total noob in this field and there is some stuff I don't really understand. Should I use nonInterleaved16BitStereoAudioDescription or nonInterleavedFloatStereoAudioDescription? How does this relate to mData?
I would love to get some help on this since I'm kind of lost at the moment. When you answer, please try to explain it as fully as you can; I am new to this stuff.
NOTE: Posting code in Objective-C is fine if you don't know Swift.

Is there a better way to fix my AS2 preloader?

I have a game with a preloader in scene 1, with the following code on the timeline.
stop();
loadingBar._xscale = 1;
var loadingCall:Number = setInterval(preloadSite, 50);
function preloadSite():Void {
    var siteLoaded:Number = _root.getBytesLoaded();
    var siteTotal:Number = _root.getBytesTotal();
    var percentage:Number = Math.round(siteLoaded/siteTotal*100);
    loadingBar._xscale = percentage;
    bytesDisplay.text = percentage + "%";
    if (siteLoaded >= siteTotal) {
        clearInterval(loadingCall);
        gotoAndPlay("StartMenu", 1);
    }
}
The code works fine when there are no music files linked to frame 1. If there are music files linked, then everything loads before the preloader shows up.
I found this great webpage about preloaders, which speaks about the linkage issue, and suggests I put all the big files on frame 2, after the preloader, then skip them. I put my large files on frame 2 as suggested and the preloader worked again.
My question is: is there a better way to do this? This solution seems like a hack.
The only better option I can think of is to NOT store the MP3 file in your Flash file, but rather load it in your preloader along with your Flash file's content. This is provided that you're storing your MP3 file somewhere else online (such as on a server).
stop();
loadingBar._xscale = 1;
var sound:Sound = new Sound();
sound.loadSound("http://www.example.com/sound.mp3", false);
var loadingCall:Number = setInterval(preloadSite, 50);
function preloadSite():Void {
    var siteLoaded:Number = _root.getBytesLoaded() + sound.getBytesLoaded();
    var siteTotal:Number = _root.getBytesTotal() + sound.getBytesTotal();
    var percentage:Number = Math.round(siteLoaded / siteTotal * 100);
    loadingBar._xscale = percentage;
    bytesDisplay.text = percentage + "%";
    if (siteLoaded >= siteTotal) {
        clearInterval(loadingCall);
        gotoAndPlay("StartMenu", 1);
        sound.start();
    }
}

SlimDX Recording(OK) then playing (Problem!)

I'm currently getting a float array by using DirectSound to record audio.
Now I would like to play that float array using XAudio2 (also in SlimDX), but I'm not sure what to do, since the sample from SlimDX plays a .wav file.
Here is how they do it:
XAudio2 device = new XAudio2();
MasteringVoice masteringVoice = new MasteringVoice(device);
var s = System.IO.File.OpenRead(fileName);
WaveStream stream = new WaveStream(s);
s.Close();
AudioBuffer buffer = new AudioBuffer();
buffer.AudioData = stream;
buffer.AudioBytes = (int)stream.Length;
buffer.Flags = BufferFlags.EndOfStream;
SourceVoice sourceVoice = new SourceVoice(device, stream.Format);
sourceVoice.SubmitSourceBuffer(buffer);
sourceVoice.Start();
// loop until the sound is done playing
while (sourceVoice.State.BuffersQueued > 0)
{
    if (GetAsyncKeyState(VK_ESCAPE) != 0)
        break;
    Thread.Sleep(10);
}
// wait until the escape key is released
while (GetAsyncKeyState(VK_ESCAPE) != 0)
    Thread.Sleep(10);
// cleanup the voice
buffer.Dispose();
sourceVoice.Dispose();
stream.Dispose();
Basically, what I would like to know is how to play a float array using SlimDX.
Thanks in advance
I'm not an expert on audio stuff, but I do know that you can create a WaveFormat of IeeeFloat. Fill in all the other information, and then write your data to a DataStream and give that to the AudioBuffer. Then you can call Submit as normal.
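To make that concrete, here is a hedged sketch (the WaveFormat property names and the DataStream/AudioBuffer usage are from memory of SlimDX's Multimedia and XAudio2 wrappers, so verify them against your SlimDX version): build an IeeeFloat format matching your capture settings, copy the float array into a DataStream, and submit it the same way the .wav sample does.
// recordedSamples stands in for the float[] you captured with DirectSound;
// channels and sampleRate must match the capture format.
float[] recordedSamples = GetRecordedSamples(); // hypothetical helper
int channels = 1;
int sampleRate = 44100;

var format = new WaveFormat();
format.FormatTag = WaveFormatTag.IeeeFloat;
format.Channels = (short)channels;
format.SamplesPerSecond = sampleRate;
format.BitsPerSample = 32;
format.BlockAlignment = (short)(channels * 4); // bytes per sample frame
format.AverageBytesPerSecond = sampleRate * format.BlockAlignment;

var stream = new DataStream(recordedSamples.Length * sizeof(float), true, true);
stream.WriteRange(recordedSamples);
stream.Position = 0;

var buffer = new AudioBuffer();
buffer.AudioData = stream;
buffer.AudioBytes = (int)stream.Length;
buffer.Flags = BufferFlags.EndOfStream;

var device = new XAudio2();
var masteringVoice = new MasteringVoice(device);
var sourceVoice = new SourceVoice(device, format);
sourceVoice.SubmitSourceBuffer(buffer);
sourceVoice.Start();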