NAudio - ASIO Playback to device (static only)

I'm trying to route ASIO audio to my playback devices; however, all I hear is static.
ASIO Setup
BIT_PER_SAMPLE = 24
SAMPLE_RATE = 48000
Currently, I'm testing with 1 channel routed into 1 playback device called "Line 1". The playback is pure static.
EDIT: The code below has been updated. It takes 64 channels of ASIO input and routes them one at a time into WaveOut devices (I'm using Virtual Audio Cable to create them).
private static AsioOut asioOut;
private static AsioInputPatcher inputPatcher;
private static readonly int BIT_PER_SAMPLE = 16;
private static readonly int SAMPLE_RATE = 48000;
private static readonly int NUMBER_OF_CHANNELS = 64;
private static BufferedWaveProvider[] bufferedWaveProviders = new BufferedWaveProvider[NUMBER_OF_CHANNELS];
private static WaveOut[] waveouts = new WaveOut[NUMBER_OF_CHANNELS];

[STAThread]
static void Main(string[] args)
{
    InitDevices();
    Record();
    while (true)
    {
        Console.WriteLine("Recording...press any key to Exit.");
        Console.ReadKey(true);
        break;
    }
    asioOut.Stop();
    asioOut.Dispose();
    asioOut = null;
}

private static void Record()
{
    inputPatcher = new AsioInputPatcher(SAMPLE_RATE, NUMBER_OF_CHANNELS, NUMBER_OF_CHANNELS);
    asioOut = new AsioOut(AsioOut.GetDriverNames()[0]);
    asioOut.InitRecordAndPlayback(new SampleToWaveProvider(inputPatcher),
        NUMBER_OF_CHANNELS, 0);
    asioOut.AudioAvailable += OnAsioOutAudioAvailable;
    asioOut.Play();
}

private static void InitDevices()
{
    for (int n = -1; n < WaveOut.DeviceCount; n++)
    {
        WaveOutCapabilities caps = WaveOut.GetCapabilities(n);
        if (caps.ProductName.StartsWith("Line"))
        {
            int _number = int.Parse(caps.ProductName.Split(' ')[1]);
            if (_number <= NUMBER_OF_CHANNELS)
            {
                waveouts[_number - 1] = new WaveOut() { DeviceNumber = n };
                bufferedWaveProviders[_number - 1] = new BufferedWaveProvider(new WaveFormat(SAMPLE_RATE, BIT_PER_SAMPLE, 2));
                waveouts[_number - 1].Init(bufferedWaveProviders[_number - 1]);
                waveouts[_number - 1].Play();
            }
        }
    }
}

static void OnAsioOutAudioAvailable(object sender, AsioAudioAvailableEventArgs e)
{
    inputPatcher.ProcessBuffer(e.InputBuffers, e.OutputBuffers,
        e.SamplesPerBuffer, e.AsioSampleType);
    for (int outputChannel = 0; outputChannel < e.OutputBuffers.Length; outputChannel++)
    {
        byte[] buf = new byte[e.SamplesPerBuffer * (BIT_PER_SAMPLE / 8)];
        Marshal.Copy(e.OutputBuffers[outputChannel], buf, 0, e.SamplesPerBuffer * (BIT_PER_SAMPLE / 8));
        bufferedWaveProviders[outputChannel].AddSamples(buf, 0, buf.Length);
    }
    e.WrittenToOutputBuffers = true;
}

There are two main problems with your code. First, e.InputBuffers contains one buffer per channel, not one per sample. Second, when you set e.WrittenToOutputBuffers = true you are saying that you have written to e.OutputBuffers, which you haven't, so they will just contain uninitialized data.
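As a minimal sketch of reading those buffers correctly (assuming the driver delivers AsioSampleType.Int32LSB, which is common; check e.AsioSampleType first), note that each entry of e.InputBuffers is the whole buffer for one channel, holding e.SamplesPerBuffer samples:
static float[] ReadChannel(AsioAudioAvailableEventArgs e, int channel)
{
    // One IntPtr per channel: copy that channel's entire buffer in one call.
    int[] raw = new int[e.SamplesPerBuffer];
    Marshal.Copy(e.InputBuffers[channel], raw, 0, e.SamplesPerBuffer);
    // Scale the 32-bit integer samples to floats in [-1, 1].
    float[] samples = new float[e.SamplesPerBuffer];
    for (int i = 0; i < raw.Length; i++)
        samples[i] = raw[i] / (float)int.MaxValue;
    return samples;
}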
If you want to see an example of low-level manipulation of ASIO buffers, then check out my ASIO patch bay sample project.

Related

How to change what block is at certain coordinates?

I am trying to recreate the mod in the YouTuber TommyInnit's video "Minecraft’s Laser Eye Mod Is Hilarious", as my friends and I wish to use it but couldn't find it on the internet. I have taken code from here for raycasting and have also set up a keybind, but I cannot figure out how to set the block you are looking at. I have tried to get the block and set it, but I can only find out how to create new blocks that don't yet exist. My code is the following, with the block code being on line 142:
package net.laser.eyes;
import org.lwjgl.glfw.GLFW;
import net.minecraft.client.options.KeyBinding;
import net.minecraft.client.util.InputUtil;
import net.minecraft.text.LiteralText;
import net.fabricmc.api.ModInitializer;
import net.fabricmc.fabric.api.client.keybinding.v1.KeyBindingHelper;
import net.fabricmc.fabric.api.event.client.ClientTickCallback;
import net.fabricmc.api.*;
import net.fabricmc.fabric.api.client.rendering.v1.*;
import net.minecraft.block.*;
import net.minecraft.client.*;
import net.minecraft.client.gui.*;
import net.minecraft.client.util.math.*;
import net.minecraft.entity.*;
import net.minecraft.entity.decoration.*;
import net.minecraft.entity.projectile.*;
import net.minecraft.text.*;
import net.minecraft.util.hit.*;
import net.minecraft.util.math.*;
import net.minecraft.world.*;
public class main implements ModInitializer {
@Override
public void onInitialize() {
KeyBinding binding1 = KeyBindingHelper.registerKeyBinding(new KeyBinding("key.laser-eyes.shoot", InputUtil.Type.KEYSYM, GLFW.GLFW_KEY_R, "key.category.laser.eyes"));
HudRenderCallback.EVENT.register(main::displayBoundingBox);
ClientTickCallback.EVENT.register(client -> {
while (binding1.wasPressed()) {
client.player.sendMessage(new LiteralText("Key 1 was pressed!"), false);
}
});
}
private static long lastCalculationTime = 0;
private static boolean lastCalculationExists = false;
private static int lastCalculationMinX = 0;
private static int lastCalculationMinY = 0;
private static int lastCalculationWidth = 0;
private static int lastCalculationHeight = 0;
private static void displayBoundingBox(MatrixStack matrixStack, float tickDelta) {
long currentTime = System.currentTimeMillis();
if(lastCalculationExists && currentTime - lastCalculationTime < 1000/45) {
drawHollowFill(matrixStack, lastCalculationMinX, lastCalculationMinY,
lastCalculationWidth, lastCalculationHeight, 2, 0xffff0000);
return;
}
lastCalculationTime = currentTime;
MinecraftClient client = MinecraftClient.getInstance();
int width = client.getWindow().getScaledWidth();
int height = client.getWindow().getScaledHeight();
Vec3d cameraDirection = client.cameraEntity.getRotationVec(tickDelta);
double fov = client.options.fov;
double angleSize = fov/height;
Vector3f verticalRotationAxis = new Vector3f(cameraDirection);
verticalRotationAxis.cross(Vector3f.POSITIVE_Y);
if(!verticalRotationAxis.normalize()) {
lastCalculationExists = false;
return;
}
Vector3f horizontalRotationAxis = new Vector3f(cameraDirection);
horizontalRotationAxis.cross(verticalRotationAxis);
horizontalRotationAxis.normalize();
verticalRotationAxis = new Vector3f(cameraDirection);
verticalRotationAxis.cross(horizontalRotationAxis);
HitResult hit = client.crosshairTarget;
if (hit.getType() == HitResult.Type.MISS) {
lastCalculationExists = false;
return;
}
int minX = width;
int maxX = 0;
int minY = height;
int maxY = 0;
for(int y = 0; y < height; y +=2) {
for(int x = 0; x < width; x+=2) {
if(minX < x && x < maxX && minY < y && y < maxY) {
continue;
}
Vec3d direction = map(
(float) angleSize,
cameraDirection,
horizontalRotationAxis,
verticalRotationAxis,
x,
y,
width,
height
);
HitResult nextHit = rayTraceInDirection(client, tickDelta, direction);//TODO make less expensive
if(nextHit == null) {
continue;
}
if(nextHit.getType() == HitResult.Type.MISS) {
continue;
}
if(nextHit.getType() != hit.getType()) {
continue;
}
if (nextHit.getType() == HitResult.Type.BLOCK) {
if(!((BlockHitResult) nextHit).getBlockPos().equals(((BlockHitResult) hit).getBlockPos())) {
continue;
}
} else if(nextHit.getType() == HitResult.Type.ENTITY) {
if(!((EntityHitResult) nextHit).getEntity().equals(((EntityHitResult) hit).getEntity())) {
continue;
}
}
if(minX > x) minX = x;
if(minY > y) minY = y;
if(maxX < x) maxX = x;
if(maxY < y) maxY = y;
}
}
lastCalculationExists = true;
lastCalculationMinX = minX;
lastCalculationMinY = minY;
lastCalculationWidth = maxX - minX;
lastCalculationHeight = maxY - minY;
drawHollowFill(matrixStack, minX, minY, maxX - minX, maxY - minY, 2, 0xffff0000);
LiteralText text = new LiteralText("Bounding " + minX + " " + minY + " " + width + " " + height + ": ");
client.player.sendMessage(text.append(getLabel(hit)), true);
//SET THE BLOCK (maybe use hit.getPos(); to find it??)
}
private static void drawHollowFill(MatrixStack matrixStack, int x, int y, int width, int height, int stroke, int color) {
matrixStack.push();
matrixStack.translate(x-stroke, y-stroke, 0);
width += stroke *2;
height += stroke *2;
DrawableHelper.fill(matrixStack, 0, 0, width, stroke, color);
DrawableHelper.fill(matrixStack, width - stroke, 0, width, height, color);
DrawableHelper.fill(matrixStack, 0, height - stroke, width, height, color);
DrawableHelper.fill(matrixStack, 0, 0, stroke, height, color);
matrixStack.pop();
}
private static Text getLabel(HitResult hit) {
if(hit == null) return new LiteralText("null");
switch (hit.getType()) {
case BLOCK:
return getLabelBlock((BlockHitResult) hit);
case ENTITY:
return getLabelEntity((EntityHitResult) hit);
case MISS:
default:
return new LiteralText("null");
}
}
private static Text getLabelEntity(EntityHitResult hit) {
return hit.getEntity().getDisplayName();
}
private static Text getLabelBlock(BlockHitResult hit) {
BlockPos blockPos = hit.getBlockPos();
BlockState blockState = MinecraftClient.getInstance().world.getBlockState(blockPos);
Block block = blockState.getBlock();
return block.getName();
}
private static Vec3d map(float anglePerPixel, Vec3d center, Vector3f horizontalRotationAxis,
Vector3f verticalRotationAxis, int x, int y, int width, int height) {
float horizontalRotation = (x - width/2f) * anglePerPixel;
float verticalRotation = (y - height/2f) * anglePerPixel;
final Vector3f temp2 = new Vector3f(center);
temp2.rotate(verticalRotationAxis.getDegreesQuaternion(verticalRotation));
temp2.rotate(horizontalRotationAxis.getDegreesQuaternion(horizontalRotation));
return new Vec3d(temp2);
}
private static HitResult rayTraceInDirection(MinecraftClient client, float tickDelta, Vec3d direction) {
Entity entity = client.getCameraEntity();
if (entity == null || client.world == null) {
return null;
}
double reachDistance = 5.0F;
HitResult target = rayTrace(entity, reachDistance, tickDelta, false, direction);
boolean tooFar = false;
double extendedReach = 6.0D;
reachDistance = extendedReach;
Vec3d cameraPos = entity.getCameraPosVec(tickDelta);
extendedReach = extendedReach * extendedReach;
if (target != null) {
extendedReach = target.getPos().squaredDistanceTo(cameraPos);
}
Vec3d vec3d3 = cameraPos.add(direction.multiply(reachDistance));
Box box = entity
.getBoundingBox()
.stretch(entity.getRotationVec(1.0F).multiply(reachDistance))
.expand(1.0D, 1.0D, 1.0D);
EntityHitResult entityHitResult = ProjectileUtil.raycast(
entity,
cameraPos,
vec3d3,
box,
(entityx) -> !entityx.isSpectator() && entityx.collides(),
extendedReach
);
if (entityHitResult == null) {
return target;
}
Entity entity2 = entityHitResult.getEntity();
Vec3d hitPos = entityHitResult.getPos();
if (cameraPos.squaredDistanceTo(hitPos) < extendedReach || target == null) {
target = entityHitResult;
if (entity2 instanceof LivingEntity || entity2 instanceof ItemFrameEntity) {
client.targetedEntity = entity2;
}
}
return target;
}
private static HitResult rayTrace(
Entity entity,
double maxDistance,
float tickDelta,
boolean includeFluids,
Vec3d direction
) {
Vec3d end = entity.getCameraPosVec(tickDelta).add(direction.multiply(maxDistance));
return entity.world.raycast(new RaycastContext(
entity.getCameraPosVec(tickDelta),
end,
RaycastContext.ShapeType.OUTLINE,
includeFluids ? RaycastContext.FluidHandling.ANY : RaycastContext.FluidHandling.NONE,
entity
));
}
}
Firstly, I heavily recommend following the standard Java naming conventions, as it will make your code more understandable for others.
The technical name for a block present in the world at a specific position is a "block state", represented by the BlockState class.
You can only change the block state at a specific position on the server side. Your raycasting code runs on the client side, so you need to use the Fabric Networking API. You can see the server-side Javadoc here and the client-side Javadoc here.
Thankfully, the Fabric Wiki has a networking tutorial, so you don't have to read all that Javadoc. The part that you're interested in is "sending packets to the server and receiving packets on the server".
Here's a guide specific to your use case:
Introduction to Networking
Minecraft is split into two components: the client and the server.
The client is responsible for jobs such as rendering and GUIs, while the server is responsible for handling world storage, entity AI, etc. (we are talking about the logical client and server here).
The physical server and physical client are the actual JAR files that are run.
The physical (dedicated) server contains only the logical server, while a physical client contains both a logical (integrated) server and a logical client.
A diagram that explains it can be found here.
Since the logical client cannot directly change the state of the logical server (e.g. block states in a world), packets have to be sent from the client to the server so that the server can respond.
The following code is only example code, and you shouldn't copy it! You should think about safety precautions like preventing cheat clients from changing every block. Probably one of the most important rules in networking: assume the client is lying.
The Fabric Networking API
Your starting points are ServerPlayNetworking and ClientPlayNetworking. They are the classes that help you send and receive packets.
Register a listener using registerGlobalReceiver, and send a packet by using send.
You first need an Identifier in order to separate your packet from other packets and make sure it is interpreted correctly. It is recommended to put an Identifier like this in a static field in your ModInitializer or a utility class.
public class MyMod implements ModInitializer {
    public static final Identifier SET_BLOCK_PACKET = new Identifier("modid", "setblock");
}
(Don't forget to replace modid with your mod ID)
You usually want to pass data with your packets (e.g. block position and block to change to), and you can do so with a PacketByteBuf.
Let's Piece This All Together
So, we have an Identifier. Let's send a packet!
Client-Side
We will start by creating a PacketByteBuf and writing the correct data:
private static void displayBoundingBox(MatrixStack matrixStack, float tickDelta) {
    // ...
    PacketByteBuf buf = PacketByteBufs.create();
    buf.writeBlockPos(((BlockHitResult) hit).getBlockPos());
    // Minecraft doesn't have a way to write a Block to a packet, so we will write the registry name instead
    buf.writeIdentifier(new Identifier("minecraft", "someblock" /*for example, "stone"*/));
}
And now, sending the packet:
// ...
ClientPlayNetworking.send(SET_BLOCK_PACKET, buf);
Server-Side
A packet with the SET_BLOCK_PACKET ID has been sent, but we also need to listen for it and receive it on the server side. We can do that by using ServerPlayNetworking.registerGlobalReceiver:
@Override
public void onInitialize() {
    // ...
    // This code MUST be in onInitialize
    ServerPlayNetworking.registerGlobalReceiver(SET_BLOCK_PACKET, (server, player, handler, buf, sender) -> {
    });
}
We are using a lambda expression here. For more info about lambdas, Google is your friend.
When receiving a packet, the code inside your lambda will be executed on the network thread. This code is not allowed to modify anything related to in-game logic (i.e. the world); for that, we will use server.execute(Runnable).
You should, however, read the buf on the network thread.
ServerPlayNetworking.registerGlobalReceiver(SET_BLOCK_PACKET, (server, player, handler, buf, sender) -> {
    BlockPos pos = buf.readBlockPos(); // reads must be done in the same order
    Block blockToSet = Registry.BLOCK.get(buf.readIdentifier()); // reading using the identifier
    server.execute(() -> { // We are now on the main thread
        // In a normal mod, checks will be done here to prevent the client from setting blocks outside the world etc. but this is only example code
        player.getServerWorld().setBlockState(pos, blockToSet.getDefaultState()); // changing the block state
    });
});
Once again, you should prevent the client from sending invalid locations.
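As a hedged illustration of that check (the distance cap and its value are my own assumptions, not something the Fabric API mandates), the main-thread part of the receiver could become:
server.execute(() -> {
    // Reject positions suspiciously far from the player (arbitrary 100-block cap).
    if (player.squaredDistanceTo(pos.getX(), pos.getY(), pos.getZ()) > 100 * 100) {
        return;
    }
    player.getServerWorld().setBlockState(pos, blockToSet.getDefaultState());
});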

Optimizing Transposing and Comparison of Concurrent List in Java 8

I have an application in Java 8 collecting data from multiple threads using a BlockingQueue.
I need to perform comparisons of samples.
Since my real application is very large, I implemented a mock application (GitHub) in Java 8.
I'm generating chunks of bytes (really in random order).
The bytes are stored in the ChunkDTO class.
I implemented a capturer of the ChunkDTOs into a List; the code is in the Capturer class.
Each ChunkDTO of the List is translated into a List of samples (TimePitchValue, exactly), returning a nested List (a List of Lists of TimePitchValue).
Later the nested List is transposed in order to perform comparisons between TimePitchValues with the same time value.
Due to the enormous volume of TimePitchValue instances, this consumes a huge amount of time in my application.
Here is some code (the complete functional code is on GitHub, since it is still too large for this site).
public class Generator {
final static Logger LOGGER = Logger.getLogger("SampleComparator");
public static void main(String[] args) {
long previous = System.nanoTime();
final int minBufferSize = 2048;
int sampleRate = 8192;
int numChannels = 1;
int numBytesPerSample = 1;
int samplesChunkPerSecond = sampleRate / minBufferSize;
int minutes = 0;
int seconds = 10;
int time = 60 * minutes + seconds;
int chunksBySecond = samplesChunkPerSecond * numBytesPerSample * numChannels;
int pitchs = 32;
boolean signed = false;
boolean endianness = false;
AudioFormat audioformat = new AudioFormat(sampleRate, 8 * numBytesPerSample, numChannels, signed, endianness);
ControlDSP controlDSP = new ControlDSP(audioformat);
BlockingQueue<ChunkDTO> generatorBlockingQueue = new LinkedBlockingQueue<>();
Capturer capturer = new Capturer(controlDSP, pitchs, pitchs * time * chunksBySecond, generatorBlockingQueue);
controlDSP.getListFuture().add(controlDSP.getExecutorService().submit(capturer));
for (int i = 0; i < time * chunksBySecond; i++) {
for (int p = 0; p < pitchs; p++) {
ChunkDTO chunkDTO = new ChunkDTO(UtilClass.getArrayByte(minBufferSize), i, p);
LOGGER.info(String.format("chunkDTO: %s", chunkDTO));
try {
generatorBlockingQueue.put(chunkDTO);
} catch (InterruptedException ex) {
LOGGER.info(ex.getMessage());
}
}
try {
Thread.sleep(1000 / chunksBySecond);
} catch (Exception ex) {
}
}
controlDSP.tryFinishThreads(Thread.currentThread());
long current = System.nanoTime();
long interval = TimeUnit.NANOSECONDS.toSeconds(current - previous);
System.out.println("Seconds Interval: " + interval);
}
}
Capturer Class
public class Capturer implements Callable<Void> {
private final ControlDSP controlDSP;
private final int pitchs;
private final int totalChunks;
private final BlockingQueue<ChunkDTO> capturerBlockingQueue;
private final Counter intCounter;
private final Map<Long, List<ChunkDTO>> mapIndexListChunkDTO = Collections.synchronizedMap(new HashMap<>());
private volatile boolean isRunning = false;
private final String threadName;
private static final Logger LOGGER = Logger.getLogger("SampleComparator");
public Capturer(ControlDSP controlDSP, int pitchs, int totalChunks, BlockingQueue<ChunkDTO> capturerBlockingQueue) {
this.controlDSP = controlDSP;
this.pitchs = pitchs;
this.totalChunks = totalChunks;
this.capturerBlockingQueue = capturerBlockingQueue;
this.intCounter = new Counter();
this.controlDSP.getListFuture().add(this.controlDSP.getExecutorService().submit(() -> {
while (intCounter.getValue() < totalChunks) {
try {
Thread.sleep(100);
} catch (InterruptedException ex) {
LOGGER.log(Level.SEVERE, null, ex);
}
}
capturerBlockingQueue.add(new ChunkDTOStopper());
}));
this.threadName = this.getClass().getSimpleName();
}
@Override
public Void call() throws Exception {
long quantity = 0;
isRunning = true;
while (isRunning) {
try {
ChunkDTO chunkDTO = capturerBlockingQueue.take();
if (chunkDTO instanceof ChunkDTOStopper) {
break;
}
//Find or Create List (according to Index) to add the incoming Chunk
long index = chunkDTO.getIndex();
int sizeChunk = chunkDTO.getChunk().length;
List<ChunkDTO> listChunkDTOWithIndex = getListChunkDTOByIndex(chunkDTO);
//When the List (according to Index) is completed and processed
if (listChunkDTOWithIndex.size() == pitchs) {
mapIndexListChunkDTO.remove(index);
TransposerComparator transposerComparator = new TransposerComparator(controlDSP, controlDSP.getAudioformat(), index, sizeChunk, listChunkDTOWithIndex);
controlDSP.getListFuture().add(controlDSP.getExecutorService().submit(transposerComparator));
}
quantity++;
intCounter.setValue(quantity);
LOGGER.info(String.format("%s\tConsumes:%s\ttotal:%05d", threadName, chunkDTO, quantity));
} catch (Exception ex) {
LOGGER.log(Level.SEVERE, null, ex);
}
}
LOGGER.info(String.format("%s\tReceived:%05d\tQty:%s\tPitchs:%s\tEND\n", threadName, quantity, quantity / pitchs, pitchs));
return null;
}
private List<ChunkDTO> getListChunkDTOByIndex(ChunkDTO chunkDTO) {
List<ChunkDTO> listChunkDTOWithIndex = mapIndexListChunkDTO.get(chunkDTO.getIndex());
if (listChunkDTOWithIndex == null) {
listChunkDTOWithIndex = new ArrayList<>();
mapIndexListChunkDTO.put(chunkDTO.getIndex(), listChunkDTOWithIndex);
listChunkDTOWithIndex = mapIndexListChunkDTO.get(chunkDTO.getIndex());
}
listChunkDTOWithIndex.add(chunkDTO);
return listChunkDTOWithIndex;
}
}
TransposerComparator class.
The optimization required is in this code, specifically in the transposedNestedList method.
public class TransposerComparator implements Callable<Void> {
private final ControlDSP controlDSP;
private final AudioFormat audioformat;
private final long index;
private final int sizeChunk;
private final List<ChunkDTO> listChunkDTOWithIndex;
private final String threadName;
private static final Logger LOGGER = Logger.getLogger("SampleComparator");
public TransposerComparator(ControlDSP controlDSP, AudioFormat audioformat, long index, int sizeChunk, List<ChunkDTO> listChunkDTOWithIndex) {
this.controlDSP = controlDSP;
this.audioformat = audioformat;
this.index = index;
this.sizeChunk = sizeChunk;
this.listChunkDTOWithIndex = listChunkDTOWithIndex;
this.threadName = this.getClass().getSimpleName() + "_" + String.format("%05d", index);
}
@Override
public Void call() throws Exception {
Thread.currentThread().setName(threadName);
LOGGER.info(String.format("%s\tINI", threadName));
try {
int numBytesPerSample = audioformat.getSampleSizeInBits() / 8;
int quantitySamples = sizeChunk / numBytesPerSample;
long baseTime = quantitySamples * index;
// Convert the List of Chunk Bytes to Nested List of TimePitchValue
List<List<TimePitchValue>> nestedListTimePitchValue = listChunkDTOWithIndex
.stream()
.map(chunkDTO -> {
return IntStream
.range(0, quantitySamples)
.mapToObj(time -> {
int value = extractValue(chunkDTO.getChunk(), numBytesPerSample, time);
return new TimePitchValue(chunkDTO.getPitch(), baseTime + time, value);
}).collect(Collectors.toList());
}).collect(Collectors.toList());
List<List<TimePitchValue>> timeNestedListTimePitchValue = transposedNestedList(nestedListTimePitchValue);
} catch (Exception ex) {
ex.printStackTrace();
LOGGER.log(Level.SEVERE, null, ex);
throw ex;
}
return null;
}
private static int extractValue(byte[] bytesSamples, int numBytesPerSample, int time) {
byte[] bytesSingleNumber = Arrays.copyOfRange(bytesSamples, time * numBytesPerSample, (time + 1) * numBytesPerSample);
int value = numBytesPerSample == 2
? (UtilClass.Byte2IntLit(bytesSingleNumber[0], bytesSingleNumber[1]))
: (UtilClass.byte2intSmpl(bytesSingleNumber[0]));
return value;
}
private static List<List<TimePitchValue>> transposedNestedList(List<List<TimePitchValue>> nestedList) {
List<List<TimePitchValue>> outNestedList = new ArrayList<>();
nestedList.forEach(pitchList -> {
pitchList.forEach(pitchValue -> {
List<TimePitchValue> listTimePitchValueWithTime = listTimePitchValueWithTime(outNestedList, pitchValue.getTime());
if (!outNestedList.contains(listTimePitchValueWithTime)) {
outNestedList.add(listTimePitchValueWithTime);
}
listTimePitchValueWithTime.add(pitchValue);
});
});
outNestedList.forEach(pitchList -> {
pitchList.sort(Comparator.comparingInt(TimePitchValue::getValue).reversed());
});
return outNestedList;
}
private static List<TimePitchValue> listTimePitchValueWithTime(List<List<TimePitchValue>> nestedList, long time) {
List<TimePitchValue> listTimePitchValueWithTime = nestedList
.stream()
.filter(innerList -> innerList.stream()
.anyMatch(timePitchValue -> timePitchValue.getTime() == time))
.findAny()
.orElseGet(ArrayList::new);
return listTimePitchValueWithTime;
}
}
I was testing:
With 5 seconds in the Generator class: with the line List<List<TimePitchValue>> timeNestedListTimePitchValue = transposedNestedList(nestedListTimePitchValue); in the TransposerComparator class commented out, 7 seconds were needed; with it uncommented, 211 seconds.
With 10 seconds in the Generator class: commented out, 12 seconds were needed; uncommented, 574 seconds.
I need to run the application for at least 60 minutes.
In order to reduce the time consumed, I see two ways:
The one I chose, for the short term, is to optimize the methods that I am currently using.
The other should also be successful but will take longer: using GPGPU, although I don't know where to start implementing it yet.
QUESTIONS
This question is about the first way: what changes do you recommend in the code of the transposedNestedList method in order to improve speed?
Is there a better alternative to this comparison?
outNestedList.forEach(pitchList -> {
pitchList.sort(Comparator.comparingInt(TimePitchValue::getValue).reversed());
});
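For reference, here is a hedged sketch of one way to rewrite transposedNestedList (it reuses only the TimePitchValue accessors shown above and assumes java.util imports): grouping by time with a map replaces the linear scan that listTimePitchValueWithTime performs for every single element, taking the transpose from roughly O(n²) to O(n) before the sorts.
private static List<List<TimePitchValue>> transposedNestedList(List<List<TimePitchValue>> nestedList) {
    // LinkedHashMap keeps first-seen time order, mirroring the original output order.
    Map<Long, List<TimePitchValue>> byTime = new LinkedHashMap<>();
    for (List<TimePitchValue> pitchList : nestedList) {
        for (TimePitchValue pitchValue : pitchList) {
            byTime.computeIfAbsent(pitchValue.getTime(), t -> new ArrayList<>()).add(pitchValue);
        }
    }
    List<List<TimePitchValue>> outNestedList = new ArrayList<>(byTime.values());
    outNestedList.forEach(pitchList ->
            pitchList.sort(Comparator.comparingInt(TimePitchValue::getValue).reversed()));
    return outNestedList;
}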

error occurring while creating custom tokenizer in lucene 7.3

I'm trying to create a new tokenizer by referring to the book Taming Text (which uses the Lucene 3.x API) with the new Lucene 7.3 API, but it is giving me the error mentioned below:
java.lang.IllegalStateException: TokenStream contract violation: reset()/close() call missing, reset() called multiple times, or subclass does not call super.reset(). Please see Javadocs of TokenStream class for more information about the correct consuming workflow.
at org.apache.lucene.analysis.Tokenizer$1.read(Tokenizer.java:109)
at java.io.Reader.read(Reader.java:140)
at solr.SentenceTokenizer.fillSentences(SentenceTokenizer.java:43)
at solr.SentenceTokenizer.incrementToken(SentenceTokenizer.java:55)
at solr.NameFilter.fillSpans(NameFilter.java:56)
at solr.NameFilter.incrementToken(NameFilter.java:88)
at spec.solr.NameFilterTest.testNameFilter(NameFilterTest.java:81)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
Here is my SentenceTokenizer class.
The initializing method: in the older API there was super(reader);, but in the current API the constructor gives no access to the Reader:
public SentenceTokenizer(SentenceDetector detector) {
super();
setReader(reader);
this.sentenceDetector = detector;
}
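For comparison, a sketch of a constructor that respects the Lucene 7.x contract (the Reader is attached later via setReader(), and the protected input field only becomes readable after the consumer calls reset()):
public SentenceTokenizer(SentenceDetector detector) {
    super(); // no Reader here anymore; the consumer attaches one with setReader()
    this.sentenceDetector = detector;
}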
Here is my reset method
@Override
public void reset() throws IOException {
super.reset();
sentenceDetector = null;
}
When I try to invoke the following method from my custom TokenFilter, I'm getting the above error:
public void fillSentences() throws IOException {
char[] c = new char[256];
int size = 0;
StringBuilder stringBuilder = new StringBuilder();
while ((size = input.read(c)) >= 0) {
stringBuilder.append(c, 0, size);
}
String temp = stringBuilder.toString();
inputSentence = temp.toCharArray();
sentenceSpans = sentenceDetector.sentPosDetect(temp);
tokenOffset = 0;
}
@Override
public final boolean incrementToken() throws IOException {
if (sentenceSpans == null) {
//invoking following method
fillSentences();
}
if (tokenOffset == sentenceSpans.length) {
return false;
}
Span sentenceSpan = sentenceSpans[tokenOffset];
clearAttributes();
int start = sentenceSpan.getStart();
int end = sentenceSpan.getEnd();
charTermAttribute.copyBuffer(inputSentence, start, end - start);
positionIncrementAttribute.setPositionIncrement(1);
offsetAttribute.setOffset(start, end);
tokenOffset++;
return true;
}
Here is my custom TokenFilter class
public final class NameFilter extends TokenFilter {
public static final String NE_PREFIX = "NE_";
private final Tokenizer tokenizer;
private final String[] tokenTypeNames;
private final NameFinderME[] nameFinderME;
private final KeywordAttribute keywordAttribute = addAttribute(KeywordAttribute.class);
private final PositionIncrementAttribute positionIncrementAttribute = addAttribute(PositionIncrementAttribute.class);
private final CharTermAttribute charTermAttribute = addAttribute(CharTermAttribute.class);
private final OffsetAttribute offsetAttribute = addAttribute(OffsetAttribute.class);
private String text;
private int baseOffset;
private Span[] spans;
private String[] tokens;
private Span[][] foundNames;
private boolean[][] tokenTypes;
private int spanOffsets = 0;
private final Queue<AttributeSource.State> tokenQueue =
new LinkedList<>();
public NameFilter(TokenStream in, String[] modelNames, NameFinderME[] nameFinderME) {
super(in);
this.tokenizer = SimpleTokenizer.INSTANCE;
this.nameFinderME = nameFinderME;
this.tokenTypeNames = new String[modelNames.length];
for (int i = 0; i < modelNames.length; i++) {
this.tokenTypeNames[i] = NE_PREFIX + modelNames[i];
}
}
//consumes tokens from the upstream tokenizer and buffers them in a
//StringBuilder whose contents will be passed to opennlp
protected boolean fillSpans() throws IOException {
if (!this.input.incrementToken()) return false;
//process the next sentence from the upstream tokenizer
this.text = input.getAttribute(CharTermAttribute.class).toString();
this.baseOffset = this.input.getAttribute(OffsetAttribute.class).startOffset();
this.spans = this.tokenizer.tokenizePos(text);
this.tokens = Span.spansToStrings(spans, text);
this.foundNames = new Span[this.nameFinderME.length][];
for (int i = 0; i < nameFinderME.length; i++) {
this.foundNames[i] = nameFinderME[i].find(tokens);
}
//insize
this.tokenTypes = new boolean[this.tokens.length][this.nameFinderME.length];
for (int i = 0; i < nameFinderME.length; i++) {
Span[] spans = foundNames[i];
for (int j = 0; j < spans.length; j++) {
int start = spans[j].getStart();
int end = spans[j].getEnd();
for (int k = start; k < end; k++) {
this.tokenTypes[k][i] = true;
}
}
}
spanOffsets = 0;
return true;
}
@Override
public boolean incrementToken() throws IOException {
//if there's nothing in the queue
if(tokenQueue.peek()==null){
//no span or spans consumed
if (spans==null||spanOffsets>=spans.length){
if (!fillSpans())return false;
}
if (spanOffsets>=spans.length)return false;
//copy the token and any types
clearAttributes();
keywordAttribute.setKeyword(false);
positionIncrementAttribute.setPositionIncrement(1);
int startOffset = baseOffset +spans[spanOffsets].getStart();
int endOffset = baseOffset+spans[spanOffsets].getEnd();
offsetAttribute.setOffset(startOffset,endOffset);
charTermAttribute.setEmpty()
.append(tokens[spanOffsets]);
//determine if the current token is of a named entity type; if so
//push the current state into the queue and add a token reflecting
// any matching entity types.
boolean [] types = tokenTypes[spanOffsets];
for (int i = 0; i < nameFinderME.length; i++) {
if (types[i]){
keywordAttribute.setKeyword(true);
positionIncrementAttribute.setPositionIncrement(0);
tokenQueue.add(captureState());
positionIncrementAttribute.setPositionIncrement(1);
charTermAttribute.setEmpty().append(tokenTypeNames[i]);
}
}
}
spanOffsets++;
return true;
}
@Override
public void close() throws IOException {
super.close();
reset();
}
@Override
public void reset() throws IOException {
super.reset();
this.spanOffsets = 0;
this.spans = null;
}
@Override
public void end() throws IOException {
super.end();
reset();
}
}
Here is my test case for the classes above:
@Test
public void testNameFilter() throws IOException {
Reader in = new StringReader(input);
Tokenizer tokenizer = new SentenceTokenizer( detector);
tokenizer.reset();
NameFilter nameFilter = new NameFilter(tokenizer, modelName, nameFinderMES);
nameFilter.reset();
CharTermAttribute charTermAttribute;
PositionIncrementAttribute positionIncrementAttribute;
OffsetAttribute offsetAttribute;
int pass = 0;
while (pass < 2) {
int pos = 0;
int lastStart = 0;
int lastEnd = 0;
//error occur on below invoke
while (nameFilter.incrementToken()) {
}
}
I have added the following changes to my code and it works fine, but I'm not sure it is the correct answer:
public SentenceTokenizer(Reader reader,SentenceDetector sentenceDetector) {
super();
this.input =reader;
this.sentenceDetector = sentenceDetector;
}
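For reference, the consuming workflow that the TokenStream contract expects in Lucene 7.x looks roughly like this (reusing the names from the test above): the Reader is attached with setReader(), and reset() is called exactly once, on the outermost stream only, which resets the whole chain. Overriding close() or end() to call reset(), as NameFilter does above, violates the same contract.
Tokenizer tokenizer = new SentenceTokenizer(detector);
tokenizer.setReader(new StringReader(input)); // attach the Reader; don't reset the tokenizer yourself
TokenStream stream = new NameFilter(tokenizer, modelName, nameFinderMES);
stream.reset(); // exactly once, on the outermost stream
while (stream.incrementToken()) {
    // read attributes here
}
stream.end();
stream.close();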

Slow image processing of images from filesystem as compared to the webcam

I was able to follow the csharp-sample-apps from the GitHub repo for Affectiva. I ran the demo using my webcam, and the processing and performance were great. However, I am not getting the same processing speed from the PhotoDetector when I run it over images in the filesystem. Any help or improvement would be appreciated.
namespace Logical.EmocaoFace
{
public class AnaliseEmocao : Affdex.ImageListener, Affdex.ProcessStatusListener
{
private Bitmap img { get; set; }
private Dictionary<int, Affdex.Face> faces { get; set; }
private Affdex.Detector detector { get; set; }
private ReaderWriterLock rwLock { get; set; }
public void processaEmocaoImagem()
{
for (int i = 0; i < resultado.count; i++){
RetornaEmocaoFace();
if (faceAffdex != null)
{
}
}
}
public void RetornaEmocaoFace(string caminhoImagem)
{
Affdex.Detector detector = new Affdex.PhotoDetector(1, Affdex.FaceDetectorMode.LARGE_FACES);
detector.setImageListener(this);
detector.setProcessStatusListener(this);
if (detector != null)
{
//ProcessVideo videoForm = new ProcessVideo(detector);
detector.setClassifierPath(@"D:\Desenvolvimento\Componentes\Afectiva\data");
detector.setDetectAllEmotions(true);
detector.setDetectAllExpressions(false);
detector.setDetectAllEmojis(false);
detector.setDetectAllAppearances(false);
detector.start();
((Affdex.PhotoDetector)detector).process(LoadFrameFromFile(caminhoImagem));
detector.stop();
}
}
static Affdex.Frame LoadFrameFromFile(string fileName)
{
Bitmap bitmap = new Bitmap(fileName);
// Lock the bitmap's bits.
Rectangle rect = new Rectangle(0, 0, bitmap.Width, bitmap.Height);
BitmapData bmpData = bitmap.LockBits(rect, ImageLockMode.ReadWrite, bitmap.PixelFormat);
// Get the address of the first line.
IntPtr ptr = bmpData.Scan0;
// Declare an array to hold the bytes of the bitmap.
int numBytes = bitmap.Width * bitmap.Height * 3;
byte[] rgbValues = new byte[numBytes];
int data_x = 0;
int ptr_x = 0;
int row_bytes = bitmap.Width * 3;
// The bitmap requires bitmap data to be byte aligned.
// http://stackoverflow.com/questions/20743134/converting-opencv-image-to-gdi-bitmap-doesnt-work-depends-on-image-size
for (int y = 0; y < bitmap.Height; y++)
{
Marshal.Copy(ptr + ptr_x, rgbValues, data_x, row_bytes);//(pixels, data_x, ptr + ptr_x, row_bytes);
data_x += row_bytes;
ptr_x += bmpData.Stride;
}
bitmap.UnlockBits(bmpData);
//Affdex.Frame retorno = new Affdex.Frame(bitmap.Width, bitmap.Height, rgbValues, Affdex.Frame.COLOR_FORMAT.BGR);
//bitmap.Dispose();
//return retorno;
return new Affdex.Frame(bitmap.Width, bitmap.Height, rgbValues, Affdex.Frame.COLOR_FORMAT.BGR);
}
public void onImageCapture(Affdex.Frame frame)
{
frame.Dispose();
}
public void onImageResults(Dictionary<int, Affdex.Face> faces, Affdex.Frame frame)
{
byte[] pixels = frame.getBGRByteArray();
this.img = new Bitmap(frame.getWidth(), frame.getHeight(), PixelFormat.Format24bppRgb);
var bounds = new Rectangle(0, 0, frame.getWidth(), frame.getHeight());
BitmapData bmpData = img.LockBits(bounds, ImageLockMode.WriteOnly, img.PixelFormat);
IntPtr ptr = bmpData.Scan0;
int data_x = 0;
int ptr_x = 0;
int row_bytes = frame.getWidth() * 3;
// The bitmap requires bitmap data to be byte aligned.
// http://stackoverflow.com/questions/20743134/converting-opencv-image-to-gdi-bitmap-doesnt-work-depends-on-image-size
for (int y = 0; y < frame.getHeight(); y++)
{
Marshal.Copy(pixels, data_x, ptr + ptr_x, row_bytes);
data_x += row_bytes;
ptr_x += bmpData.Stride;
}
img.UnlockBits(bmpData);
this.faces = faces;
frame.Dispose();
}
public void onProcessingException(Affdex.AffdexException A_0)
{
throw new NotImplementedException("Encountered an exception while processing " + A_0.ToString());
}
public void onProcessingFinished()
{
string idArquivo = CodEspaco + "," + System.Guid.NewGuid().ToString();
for(int i = 0; i < faces.Count; i++)
{
}
}
}
public static class GraphicsExtensions
{
public static void DrawCircle(this Graphics g, Pen pen,
float centerX, float centerY, float radius)
{
g.DrawEllipse(pen, centerX - radius, centerY - radius,
radius + radius, radius + radius);
}
}
}
Found the answer to my own question:
Using PhotoDetector is not ideal in this case, since it is expensive to run the face detector configuration on subsequent frame calls.
The best option to improve performance is to use an instance of the FrameDetector class.
Here is a getting started guide to analyze-frames.
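For illustration, a hedged sketch of that approach (the FrameDetector constructor arguments and the timestamp requirement are assumptions based on the csharp-sample-apps; check the SDK docs for your version): one detector is started once and fed every image, instead of building a PhotoDetector per file.
// Buffer size 1, process up to 30 fps, 1 face, large-face mode (assumed signature).
var detector = new Affdex.FrameDetector(1, 30, 1, Affdex.FaceDetectorMode.LARGE_FACES);
detector.setClassifierPath(@"D:\Desenvolvimento\Componentes\Afectiva\data");
detector.setImageListener(this);
detector.setProcessStatusListener(this);
detector.setDetectAllEmotions(true);
detector.start(); // start once, not per image
foreach (string caminhoImagem in caminhosImagens) // hypothetical list of file paths
{
    // FrameDetector expects frames with increasing timestamps.
    detector.process(LoadFrameFromFile(caminhoImagem));
}
detector.stop(); // stop once, after all images are queued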

Saving joints from Kinect skeleton tracking

I work with the Kinect. My goal is to store the values the Kinect gives me for the location of the body (head, hand, etc.). I have written some code, but I cannot understand which values I should save, or how. I want to store in a DB, or in a txt file, the position of the head, hands and feet. I want to use the data to understand the movements of the person standing in front of the Kinect. For example, if someone moves their hand, the Kinect will send a value; I must store it so as to understand the movement that was made. Sorry for the sparse info.
Here is my code:
using System;
using System.Windows;
using System.Windows.Controls;
using System.Windows.Media;
using System.Windows.Media.Imaging;
using System.Windows.Shapes;
using Microsoft.Kinect;
using System.Linq;
using System.IO;
namespace KinectSkeletonApplication1
{
public partial class MainWindow : Window
{
//Instantiate the Kinect runtime. Required to initialize the device.
//IMPORTANT NOTE: You can pass the device ID here, in case more than one Kinect device is connected.
KinectSensor sensor = KinectSensor.KinectSensors[0];
byte[] pixelData;
Skeleton[] skeletons;
public MainWindow()
{
InitializeComponent();
///////////////////////////////////
///////////////////////////////
//Runtime initialization is handled when the window is opened. When the window
//is closed, the runtime MUST be unitialized.
this.Loaded += new RoutedEventHandler(MainWindow_Loaded);
this.Unloaded += new RoutedEventHandler(MainWindow_Unloaded);
sensor.ColorStream.Enable();
sensor.SkeletonStream.Enable();
}
void runtime_SkeletonFrameReady(object sender, SkeletonFrameReadyEventArgs e)
{
bool receivedData = false;
using (SkeletonFrame SFrame = e.OpenSkeletonFrame())
{
if (SFrame == null)
{
// The image processing took too long. More than 2 frames behind.
}
else
{
skeletons = new Skeleton[SFrame.SkeletonArrayLength];
SFrame.CopySkeletonDataTo(skeletons);
receivedData = true;
}
}
if (receivedData)
{
Skeleton currentSkeleton = (from s in skeletons
where s.TrackingState == SkeletonTrackingState.Tracked
select s).FirstOrDefault();
if (currentSkeleton != null)
{
SetEllipsePosition(head, currentSkeleton.Joints[JointType.Head]);
SetEllipsePosition(leftHand, currentSkeleton.Joints[JointType.HandLeft]);
SetEllipsePosition(rightHand, currentSkeleton.Joints[JointType.HandRight]);
SetEllipsePosition(shoulder_center, currentSkeleton.Joints[JointType.ShoulderCenter]);
}
}
}
//This method is used to position the ellipses on the canvas
//according to correct movements of the tracked joints.
//IMPORTANT NOTE: Code for vector scaling was imported from the Coding4Fun Kinect Toolkit
//available here: http://c4fkinect.codeplex.com/
//I only used this part to avoid adding an extra reference.
private void SetEllipsePosition(Ellipse ellipse, Joint joint)
{
Microsoft.Kinect.SkeletonPoint vector = new Microsoft.Kinect.SkeletonPoint();
vector.X = ScaleVector(640, joint.Position.X);
vector.Y = ScaleVector(480, -joint.Position.Y);
vector.Z = joint.Position.Z;
Joint updatedJoint = new Joint();
updatedJoint = joint;
updatedJoint.TrackingState = JointTrackingState.Tracked;
updatedJoint.Position = vector;
Canvas.SetLeft(ellipse, updatedJoint.Position.X);
Canvas.SetTop(ellipse, updatedJoint.Position.Y);
}
private float ScaleVector(int length, float position)
{
float value = (((((float)length) / 1f) / 2f) * position) + (length / 2);
if (value > length)
{
return (float)length;
}
if (value < 0f)
{
return 0f;
}
string r = Convert.ToString(value);
string path = @"C:\Test\MyTest.txt";
// This text is added only once to the file.
if (!File.Exists(path))
{
// Create a file to write to.
string createText = "Hello and Welcome" + Environment.NewLine;
File.WriteAllText(path, createText);
}
// This text is always added, making the file longer over time
// if it is not deleted.
//string appendText = "This is extra text" + Environment.NewLine;
File.AppendAllText(path, r);
// Open the file to read from.
string readText = File.ReadAllText(path);
return value;
}
void MainWindow_Unloaded(object sender, RoutedEventArgs e)
{
sensor.Stop();
}
void MainWindow_Loaded(object sender, RoutedEventArgs e)
{
sensor.SkeletonFrameReady += runtime_SkeletonFrameReady;
sensor.ColorFrameReady += runtime_VideoFrameReady;
sensor.Start();
}
void runtime_VideoFrameReady(object sender, ColorImageFrameReadyEventArgs e)
{
bool receivedData = false;
using (ColorImageFrame CFrame = e.OpenColorImageFrame())
{
if (CFrame == null)
{
// The image processing took too long. More than 2 frames behind.
}
else
{
pixelData = new byte[CFrame.PixelDataLength];
CFrame.CopyPixelDataTo(pixelData);
receivedData = true;
}
}
if (receivedData)
{
BitmapSource source = BitmapSource.Create(640, 480, 96, 96,
PixelFormats.Bgr32, null, pixelData, 640 * 4);
videoImage.Source = source;
}
}
}
}
I think what you are looking for is getting the X, Y, Z coordinates for certain joints?
In that case, add this code:
Vector3D ShoulderCenter = new Vector3D(skeleton.Joints[JointType.ShoulderCenter].Position.X, skeleton.Joints[JointType.ShoulderCenter].Position.Y, skeleton.Joints[JointType.ShoulderCenter].Position.Z);
Vector3D RightShoulder = new Vector3D(skeleton.Joints[JointType.ShoulderRight].Position.X, skeleton.Joints[JointType.ShoulderRight].Position.Y, skeleton.Joints[JointType.ShoulderRight].Position.Z);
Vector3D LeftShoulder = new Vector3D(skeleton.Joints[JointType.ShoulderLeft].Position.X, skeleton.Joints[JointType.ShoulderLeft].Position.Y, skeleton.Joints[JointType.ShoulderLeft].Position.Z);
Vector3D RightElbow = new Vector3D(skeleton.Joints[JointType.ElbowRight].Position.X, skeleton.Joints[JointType.ElbowRight].Position.Y, skeleton.Joints[JointType.ElbowRight].Position.Z);
Vector3D LeftElbow = new Vector3D(skeleton.Joints[JointType.ElbowLeft].Position.X, skeleton.Joints[JointType.ElbowLeft].Position.Y, skeleton.Joints[JointType.ElbowLeft].Position.Z);
Vector3D RightWrist = new Vector3D(skeleton.Joints[JointType.WristRight].Position.X, skeleton.Joints[JointType.WristRight].Position.Y, skeleton.Joints[JointType.WristRight].Position.Z);
Vector3D LeftWrist = new Vector3D(skeleton.Joints[JointType.WristLeft].Position.X, skeleton.Joints[JointType.WristLeft].Position.Y, skeleton.Joints[JointType.WristLeft].Position.Z);
This only defines the vectors for the upper body, as you can see. Just add the missing joints the same way I did.
You need the following using directives:
using System.Windows.Media;
using Microsoft.Kinect.Toolkit.Fusion;
using System.Windows.Media.Media3D;
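To actually persist the positions (which seems to be the goal), here is a minimal sketch reusing the File.AppendAllText approach from the question; the CSV layout and file path are only illustrative assumptions. Call it from runtime_SkeletonFrameReady whenever currentSkeleton is not null.
private static readonly JointType[] trackedJoints =
{
    JointType.Head, JointType.HandLeft, JointType.HandRight,
    JointType.FootLeft, JointType.FootRight
};

private void LogJointPositions(Skeleton skeleton)
{
    // One CSV row per frame: timestamp, then X, Y, Z for each tracked joint.
    var line = new System.Text.StringBuilder();
    line.Append(DateTime.Now.ToString("o"));
    foreach (JointType type in trackedJoints)
    {
        SkeletonPoint p = skeleton.Joints[type].Position;
        line.AppendFormat(",{0},{1},{2}", p.X, p.Y, p.Z);
    }
    line.AppendLine();
    File.AppendAllText(@"C:\Test\JointLog.csv", line.ToString());
}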