Determine channel layout of .wav file - naudio

I'm reading a .wav file using WaveFileReader. Let's say it has 3 channels. How do I know whether it's 2.1 (left, right, LFE) or 3.0 (left, center, right)?
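The layout, when the file records one at all, lives in the fmt chunk's dwChannelMask field, which only exists for WAVE_FORMAT_EXTENSIBLE files (format tag 0xFFFE). In NAudio, check whether reader.WaveFormat is a WaveFormatExtensible and inspect its channel mask. The mask is a bitfield of speaker-position flags (0x1 front left, 0x2 front right, 0x4 front center, 0x8 LFE, ...), so 2.1 would be 0xB and 3.0 would be 0x7. A plain WAVE_FORMAT_PCM file carries no mask, so a 3-channel PCM file is inherently ambiguous and players just assume a default order. As a language-agnostic sketch of where the mask lives in the file bytes (not NAudio's own code):

```python
import io
import struct

# Speaker-position flags from the WAVE_FORMAT_EXTENSIBLE spec (dwChannelMask).
SPEAKERS = [
    (0x1, "front left"), (0x2, "front right"), (0x4, "front center"),
    (0x8, "low frequency (LFE)"), (0x10, "back left"), (0x20, "back right"),
]

def channel_layout(wav_bytes):
    """Return the speaker names encoded in a WAV file's channel mask,
    or None if the file is plain PCM (no WAVE_FORMAT_EXTENSIBLE header)."""
    f = io.BytesIO(wav_bytes)
    riff, _, wave = struct.unpack("<4sI4s", f.read(12))
    assert riff == b"RIFF" and wave == b"WAVE"
    while True:
        hdr = f.read(8)
        if len(hdr) < 8:
            return None
        chunk_id, size = struct.unpack("<4sI", hdr)
        data = f.read(size + (size & 1))      # chunks are word-aligned
        if chunk_id == b"fmt ":
            format_tag = struct.unpack_from("<H", data, 0)[0]
            if format_tag != 0xFFFE:          # not WAVE_FORMAT_EXTENSIBLE
                return None
            # dwChannelMask sits at byte offset 20 within the fmt chunk
            mask = struct.unpack_from("<I", data, 20)[0]
            return [name for bit, name in SPEAKERS if mask & bit]
```

So for your 3-channel case: if the mask comes back as left + right + LFE it's 2.1; left + right + center is 3.0; and if there is no extensible header, the file simply doesn't say.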

Related

Is there a way to pan audio left and right with Expo-AV?

I am building a react-native app with Expo, and I need the audio to come out of only the left or right earphone/speaker. I used expo-av to play the sound file. I have searched the expo-av docs for information about panning left and right, but I haven't found anything. Do I need to switch to a different dependency?
No: neither expo-av nor react-native-track-player supports this functionality. However, you can use react-native-sound instead, which provides the following functions:
getPan(): Returns the stereo pan position of the audio player (not the system-wide pan), ranging from -1.0 (full left) through 1.0 (full right). The default value is 0.0 (center).
setPan(value): value {number}. Sets the pan, ranging from -1.0 (full left) through 1.0 (full right).
getNumberOfChannels(): Returns the number of channels (1 for mono and 2 for stereo sound), or -1 before the sound gets loaded.
This package isn't going to work with Expo Go, so set up a new Expo development client with the package included.
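To make the pan value concrete: a single pan number in [-1.0, 1.0] is conventionally mapped to a pair of per-channel gains. One common mapping is equal-power panning, sketched below; this is an illustration of what a pan position means, not necessarily what react-native-sound does internally.

```python
import math

def equal_power_gains(pan):
    """Map a pan position in [-1.0, 1.0] (the range used by setPan) to
    (left_gain, right_gain) with an equal-power curve, so perceived
    loudness stays roughly constant as the sound moves across the field."""
    pan = max(-1.0, min(1.0, pan))
    angle = (pan + 1.0) * math.pi / 4.0   # sweeps 0 .. pi/2
    return math.cos(angle), math.sin(angle)
```

At pan = -1.0 all the signal goes to the left channel, at 1.0 all to the right, and at 0.0 both channels get equal gain.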

Icon or image (PNG): which is more performant?

I'm working on a project that involves displaying a map (https://github.com/react-native-community/react-native-maps), and there are 34 possible types of picture pins (PNG). I'd like to know which performs better: keeping these images, or adopting icons? Take into account that the images average about 10 KB each.
For those who do not know: the PNGs can be converted to SVG and used as icons (https://github.com/oblador/react-native-vector-icons).

Metal: Textures loaded from alpha-less .png files don't display correctly

I'm working on porting a graphics system based on CoreGraphics to use Metal instead. However, I've noticed there seems to be some kind of color-system mixup when I load .png files that do not have an alpha channel. PNGs with an alpha channel work fine, although I have to do some swizzling because my Metal context uses BGR component order. When I load and display a texture from an alpha-less PNG, the color components appear to be out of order: textures that were supposed to appear red appear blue and vice versa, leading me to believe the order of the color components has been swapped.
Does anyone have any idea why this might be happening? There was no issue on the previous CG-based system.
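The symptom (red and blue swapped only for alpha-less files) is consistent with the decoder handing back RGB-ordered bytes while the texture expects BGRA (e.g. a BGRA8Unorm pixel format): the swizzle path that handles RGBA input presumably never runs for the 3-byte-per-pixel case. A common fix is to redraw the image through a CGBitmapContext configured for the target byte order, or to swizzle the bytes yourself when padding the missing alpha channel. A minimal sketch of that swizzle, assuming tightly packed 8-bit RGB input:

```python
def rgb_to_bgra(rgb):
    """Convert tightly packed 8-bit RGB bytes (as decoded from an
    alpha-less PNG) into BGRA bytes suitable for a BGRA texture,
    filling the missing alpha with 0xFF (fully opaque)."""
    out = bytearray(len(rgb) // 3 * 4)
    for i in range(0, len(rgb), 3):
        r, g, b = rgb[i], rgb[i + 1], rgb[i + 2]
        j = i // 3 * 4
        out[j:j + 4] = bytes((b, g, r, 0xFF))   # swap R and B, append alpha
    return bytes(out)
```

If this fixes the colors, the bug is in whichever upload path skips the swizzle for 24-bit images rather than in Metal itself.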

Blend pixels on .NET Core

I am trying to overlay part of one image on top of another image on .NET Core (the code needs to be cross-platform).
I considered using ImageSharp, since it supports Windows, macOS and Linux, but I couldn't find pixel blending on their features list, although I saw that you can access individual pixels.
The use case: I have two 4K PNG images, and I want a small part of the first image (roughly a 10% square of the overall image) to be overlaid on the same 10% region of the second image (not the whole image), and to get the area where the merge happened as a new JPEG image. (The source PNGs have some degree of transparency.)
I considered cropping out the two parts I want to merge from the two 4K images and then blending them to get the final image, but that is too slow for the needs of the project I'm working on.
ImageSharp does support pixel blending: you can specify the pixel blending mode during Draw/Fill operations by passing in a GraphicsOptions parameter and setting its BlenderMode and BlendPercentage (defaults to 100%) properties.
Currently ImageSharp has implementations for the following blending modes:
Normal
Multiply
Add
Subtract
Screen
Darken
Lighten
Overlay
HardLight
Src
Atop
Over
In
Out
Dest
DestAtop
DestOver
DestIn
DestOut
Clear
Xor
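For reference, the Normal mode in the list above is standard Porter-Duff "over" compositing, which is what you want for laying a partly transparent PNG region on top of another. A per-pixel sketch of it in plain Python (not ImageSharp's C# internals), assuming float channels in [0, 1] with straight alpha, where opacity plays the role of BlendPercentage:

```python
def blend_normal(src, dst, opacity=1.0):
    """Porter-Duff 'over' compositing of one RGBA pixel onto another.
    src and dst are (r, g, b, a) tuples with straight (non-premultiplied)
    alpha; opacity scales the source's alpha, like BlendPercentage."""
    sr, sg, sb, sa = src
    dr, dg, db, da = dst
    sa *= opacity
    out_a = sa + da * (1.0 - sa)          # composite alpha
    if out_a == 0.0:
        return (0.0, 0.0, 0.0, 0.0)
    blend = lambda s, d: (s * sa + d * da * (1.0 - sa)) / out_a
    return (blend(sr, dr), blend(sg, dg), blend(sb, db), out_a)
```

An opaque source pixel simply replaces the destination, a fully transparent one leaves it unchanged, and anything in between is the weighted mix ImageSharp computes for you across the whole region, without any need to crop first.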

How can I use python in Blender Game Engine (2.5+) to save the depth buffer AND the color buffer to file every frame?

I need to save the color buffer and depth buffer of a given camera in a scene in Blender Game Engine every frame to a file (each to their own file). Is this possible? How can this be set up using the BGE and Python?
I've asked on the Blender Artists forum, but no one seems to have picked up the question.
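The BGE-side capture is version-dependent: bge.texture.ImageRender can render a camera's view off-screen and expose its pixel buffer, and later BGE builds added flags for grabbing the depth/z-buffer as well (check the API docs for your exact release; they are not in the earliest 2.5 builds), while bgl.glReadPixels with GL_DEPTH_COMPONENT is the lower-level alternative. Whichever route supplies the buffers, the file-writing half is straightforward. A sketch that dumps one frame's color buffer as a binary PPM and its depth buffer as an 8-bit PGM, to be called from an Always-sensor script each logic tick (the function name and prefix are illustrative, not a BGE API):

```python
def save_frame(color_rgba, depth, width, height, frame, prefix="frame"):
    """Write one frame's color buffer (flat RGBA bytes) as a binary PPM
    and its depth buffer (flat floats in [0, 1]) as an 8-bit binary PGM.
    Returns the two file names written."""
    color_name = "%s_%04d_color.ppm" % (prefix, frame)
    depth_name = "%s_%04d_depth.pgm" % (prefix, frame)
    with open(color_name, "wb") as f:
        f.write(b"P6\n%d %d\n255\n" % (width, height))
        for i in range(0, len(color_rgba), 4):     # drop the alpha byte
            f.write(bytes(color_rgba[i:i + 3]))
    with open(depth_name, "wb") as f:
        f.write(b"P5\n%d %d\n255\n" % (width, height))
        f.write(bytes(min(255, max(0, int(d * 255))) for d in depth))
    return color_name, depth_name
```

Note that a raw z-buffer is non-linear; if you need metric depth you would linearize using the camera's near/far clip values before quantizing, and a 16-bit format would preserve more precision than the 8-bit PGM used here.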