Redis operations are very slow in a GCP environment - redis

This is my example code, running on my local machine with ts-node:
import * as redis from 'redis';

const redisClient = redis.createClient({
  socket: {
    port: 6379,
    host: 'localhost',
  },
});
redisClient.connect();

function generateRandomString(length: number) {
  // generate random string...
}

const run = async () => {
  let keys: string[] = [];
  const count = 2000;
  for (let i = 0; i < count; i++) {
    keys.push(generateRandomString(20));
  }

  console.time('t1');
  for (let i = 0; i < count; i++) {
    await redisClient.sAdd(keys[i], 'asd');
  }
  console.timeEnd('t1');

  await redisClient.flushAll();
  process.exit();
};

run();
Running with ts-node, these operations take ~100 ms, which is very nice.
After this, I prepared a simple Cloud Function with the same code, adjusted for the Cloud Functions runtime.
The Cloud Function has 256 MB of memory, and I set up a VPC connector to reach the Redis instance:
const redis = require('redis');

const redisClient = redis.createClient({
  socket: {
    port: 6379,
    host: '...ip...',
  },
});
redisClient.connect();

function generateRandomString(length) {
  // generate random string...
}

exports.helloWorld = async (req, res) => {
  let keys = [];
  const count = 2000;
  for (let i = 0; i < count; i++) {
    keys.push(generateRandomString(20));
  }

  console.time('t1');
  for (let i = 0; i < count; i++) {
    await redisClient.sAdd(keys[i], 'asd');
  }
  console.timeEnd('t1');

  await redisClient.flushAll();
  res.status(200).send('done');
};
In the Cloud Function context, t1 = 5 s, which is 50x slower. Is this normal? Maybe I'm doing something wrong? Maybe Cloud Functions are not suitable for operations like this?
Thanks!
PS: I test the Cloud Function via Testing -> Test the function in the GCP console. Maybe that matters.
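One thing worth noting: each awaited sAdd is a separate network round-trip, so 2000 sequential commands multiply whatever per-command latency the VPC connector adds compared to localhost. A minimal sketch of batching them into a single round-trip, assuming node-redis v4's multi()/exec() on the same client:

// Sketch only (assumes node-redis v4): queue all SADDs and send them as one MULTI/EXEC batch
const batch = redisClient.multi();
for (let i = 0; i < count; i++) {
  batch.sAdd(keys[i], 'asd');
}
await batch.exec(); // one round-trip for the whole batch instead of 2000

Alternatively, since node-redis v4 auto-pipelines commands issued in the same event-loop tick, issuing the commands without awaiting each one (e.g. via Promise.all) should avoid paying the round-trip per command as well.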


react-native-background-actions issue for iOS

I am using react-native-background-actions to run a background task,
but when I exit the app, the task only runs for about 30 seconds and then stops.
How can I make it keep running indefinitely, or at least until the app is terminated?
In my code, every second I increment 'COUNT' and save it to storage.
// BackgroundTaskService.js
import AsyncStorage from '@react-native-community/async-storage';
import BackgroundService from 'react-native-background-actions';
import BackgroundJob from 'react-native-background-actions';

let sleep = ms => new Promise(resolve => setTimeout(resolve, ms));

const increaseCountTask = async taskDataArguments => {
  const {delay} = taskDataArguments;
  await new Promise(async resolve => {
    for (let i = 0; BackgroundJob.isRunning(); i++) {
      var value = await AsyncStorage.getItem('COUNT');
      if (!value) {
        await AsyncStorage.setItem('COUNT', "2");
      }
      var value = await AsyncStorage.getItem('COUNT');
      await AsyncStorage.setItem('COUNT', (parseInt(value) + 1).toString());
      value = await AsyncStorage.getItem('COUNT');
      console.log('value', value);
      await sleep(delay);
    }
  });
};

const options = {
  taskName: 'Demo',
  taskTitle: 'Demo Running',
  taskDesc: 'Demo',
  taskIcon: {
    name: 'ic_launcher',
    type: 'mipmap',
  },
  color: '#ff00ff',
  parameters: {
    delay: 1000,
  },
  actions: '["Exit"]',
};

const start = () => {
  BackgroundService.start(increaseCountTask, options);
};

const stop = () => {
  BackgroundService.stop();
};

export default {
  start,
  stop,
};

// App.js
BackgroundTaskService.start();
Are you testing on a physical device, and have you enabled Background App Refresh in the Xcode settings as well as on your physical device? According to the documentation of another package, react-native-background-task, that package must be tested on a physical device, or else the iOS simulator will have to emulate the tasks.
I attempted to rework your code. I currently have this working with BackgroundService instead of BackgroundJob for running in the background of the app. I am also testing what happens when the app is closed, and should be able to get back to you soon.
import BackgroundService from 'react-native-background-actions';
import AsyncStorage from "@react-native-async-storage/async-storage";
import BackgroundJob from 'react-native-background-actions';
import {Alert} from "react-native";

const sleep = (time) => new Promise((resolve) => setTimeout(() => resolve(), time));

// You can do anything in your task such as network requests, timers and so on,
// as long as it doesn't touch UI. Once your task completes (i.e. the promise is resolved),
// React Native will go into "paused" mode (unless there are other tasks running,
// or there is a foreground app).
const veryIntensiveTask = async (taskDataArguments) => {
  // Example of an infinite loop task
  const { delay } = taskDataArguments;
  await new Promise(async (resolve) => {
    for (let i = 0; BackgroundService.isRunning(); i++) {
      console.log(i);
      console.log("Running background task");
      Alert.alert("Running background task");
      await sleep(delay);
    }
    // for (let i = 0; BackgroundJob.isRunning(); i++) {
    //
    //   var value = await AsyncStorage.getItem('COUNT');
    //   if (!value) {
    //     await AsyncStorage.setItem('COUNT', "2");
    //   }
    //   var value = await AsyncStorage.getItem('COUNT');
    //   await AsyncStorage.setItem('COUNT', (parseInt(value) + 1).toString());
    //   value = await AsyncStorage.getItem('COUNT');
    //   console.log('value', value);
    //   await sleep(delay);
    //
    // }
  });
};

const options = {
  taskName: 'Example',
  taskTitle: 'ExampleTask title',
  taskDesc: 'ExampleTask description',
  taskIcon: {
    name: 'ic_launcher',
    type: 'mipmap',
  },
  color: '#ff00ff',
  linkingURI: 'yourSchemeHere://chat/jane', // See Deep Linking for more info
  parameters: {
    delay: 10000,
  },
};

const start = async () => {
  return BackgroundService.start(veryIntensiveTask, options);
};

const stop = async () => {
  return BackgroundService.stop();
};

export default {
  start,
  stop,
};

// await BackgroundService.start(veryIntensiveTask, options);
// await BackgroundService.updateNotification({taskDesc: 'New ExampleTask description'}); // Only Android, iOS will ignore this call
// // iOS will also run everything here in the background until .stop() is called
// await BackgroundService.stop();

How to pass an audio stream recorded with WebRTC to the Google Speech API for real-time transcription?

What I'm trying to do is get real-time transcription for video recorded in the browser with WebRTC. The use case is basically real-time subtitles, like Google Hangouts has.
So I have a WebRTC program running in the browser. It sends WebM objects back to the server. They contain linear32 audio encodings. Google Speech-to-Text only accepts linear16 or FLAC files.
Is there a way to convert linear32 to linear16 in real time?
Otherwise, has anyone been able to hook up WebRTC with Google Speech to get real-time transcriptions working?
Any advice on where to look to solve this problem would be great.
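(For context on what the conversion involves: going from 32-bit float samples to linear16 is just clamping each sample to [-1, 1] and scaling it into the signed 16-bit range. A minimal sketch, with a hypothetical helper name and independent of the answer code below, which also resamples:)

function floatTo16BitPCM(float32Array) {
  const int16 = new Int16Array(float32Array.length);
  for (let i = 0; i < float32Array.length; i++) {
    const s = Math.max(-1, Math.min(1, float32Array[i])); // clamp to [-1, 1]
    int16[i] = s < 0 ? s * 0x8000 : s * 0x7fff;           // scale to the 16-bit integer range
  }
  return int16;
}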
Check out this repository, it might help you: https://github.com/muaz-khan/Translator
Translator.js is a JavaScript library built on top of the Google Speech-Recognition & Translation API to transcribe and translate voice and text. It supports many locales and brings globalization to WebRTC!
I had the same problem and failed with WebRTC. I recommend you use the Web Audio API instead if you are just interested in transcribing the audio from the video.
Here is how I did it with a Node.js server and a React client app. It is uploaded to GitHub here.
You need an audio worklet script. (Put it in the public folder, because that is where the API expects to find it.)
recorderWorkletProcessor.js (saved in public/src/worklets/recorderWorkletProcessor.js)
/**
 * An in-place replacement for ScriptProcessorNode using AudioWorklet
 */
class RecorderProcessor extends AudioWorkletProcessor {
  // 0. Determine the buffer size (this is the same as the 1st argument of ScriptProcessor)
  bufferSize = 2048;
  // 1. Track the current buffer fill level
  _bytesWritten = 0;
  // 2. Create a buffer of fixed size
  _buffer = new Float32Array(this.bufferSize);

  constructor() {
    super();
    this.initBuffer();
  }

  initBuffer() {
    this._bytesWritten = 0;
  }

  isBufferEmpty() {
    return this._bytesWritten === 0;
  }

  isBufferFull() {
    return this._bytesWritten === this.bufferSize;
  }

  /**
   * @param {Float32Array[][]} inputs
   * @returns {boolean}
   */
  process(inputs) {
    // Grabbing the 1st channel similar to ScriptProcessorNode
    this.append(inputs[0][0]);
    return true;
  }

  /**
   * @param {Float32Array} channelData
   */
  append(channelData) {
    if (this.isBufferFull()) {
      this.flush();
    }
    if (!channelData) return;
    for (let i = 0; i < channelData.length; i++) {
      this._buffer[this._bytesWritten++] = channelData[i];
    }
  }

  flush() {
    // trim the buffer if ended prematurely
    const buffer =
      this._bytesWritten < this.bufferSize
        ? this._buffer.slice(0, this._bytesWritten)
        : this._buffer;
    const result = this.downsampleBuffer(buffer, 44100, 16000);
    this.port.postMessage(result);
    this.initBuffer();
  }

  downsampleBuffer(buffer, sampleRate, outSampleRate) {
    if (outSampleRate == sampleRate) {
      return buffer;
    }
    if (outSampleRate > sampleRate) {
      throw new Error("downsampling rate should be smaller than original sample rate");
    }
    var sampleRateRatio = sampleRate / outSampleRate;
    var newLength = Math.round(buffer.length / sampleRateRatio);
    var result = new Int16Array(newLength);
    var offsetResult = 0;
    var offsetBuffer = 0;
    while (offsetResult < result.length) {
      var nextOffsetBuffer = Math.round((offsetResult + 1) * sampleRateRatio);
      var accum = 0,
        count = 0;
      for (var i = offsetBuffer; i < nextOffsetBuffer && i < buffer.length; i++) {
        accum += buffer[i];
        count++;
      }
      result[offsetResult] = Math.min(1, accum / count) * 0x7fff;
      offsetResult++;
      offsetBuffer = nextOffsetBuffer;
    }
    return result.buffer;
  }
}

registerProcessor("recorder.worklet", RecorderProcessor);
Install socket.io-client on the front end:
npm i socket.io-client
React component code:
/* eslint-disable react-hooks/exhaustive-deps */
import { default as React, useEffect, useState, useRef } from "react";
import { Button } from "react-bootstrap";
import Container from "react-bootstrap/Container";
import * as io from "socket.io-client";

const sampleRate = 16000;

const getMediaStream = () =>
  navigator.mediaDevices.getUserMedia({
    audio: {
      deviceId: "default",
      sampleRate: sampleRate,
      sampleSize: 16,
      channelCount: 1,
    },
    video: false,
  });

interface WordRecognized {
  final: boolean;
  text: string;
}

const AudioToText: React.FC = () => {
  const [connection, setConnection] = useState<io.Socket>();
  const [currentRecognition, setCurrentRecognition] = useState<string>();
  const [recognitionHistory, setRecognitionHistory] = useState<string[]>([]);
  const [isRecording, setIsRecording] = useState<boolean>(false);
  const [recorder, setRecorder] = useState<any>();
  const processorRef = useRef<any>();
  const audioContextRef = useRef<any>();
  const audioInputRef = useRef<any>();

  const speechRecognized = (data: WordRecognized) => {
    if (data.final) {
      setCurrentRecognition("...");
      setRecognitionHistory((old) => [data.text, ...old]);
    } else setCurrentRecognition(data.text + "...");
  };

  const connect = () => {
    connection?.disconnect();
    const socket = io.connect("http://localhost:8081");
    socket.on("connect", () => {
      console.log("connected", socket.id);
      setConnection(socket);
    });

    socket.emit("send_message", "hello world");
    socket.emit("startGoogleCloudStream");

    socket.on("receive_message", (data) => {
      console.log("received message", data);
    });

    socket.on("receive_audio_text", (data) => {
      speechRecognized(data);
      console.log("received audio text", data);
    });

    socket.on("disconnect", () => {
      console.log("disconnected", socket.id);
    });
  };

  const disconnect = () => {
    if (!connection) return;
    connection?.emit("endGoogleCloudStream");
    connection?.disconnect();
    processorRef.current?.disconnect();
    audioInputRef.current?.disconnect();
    audioContextRef.current?.close();
    setConnection(undefined);
    setRecorder(undefined);
    setIsRecording(false);
  };

  useEffect(() => {
    (async () => {
      if (connection) {
        if (isRecording) {
          return;
        }
        const stream = await getMediaStream();
        audioContextRef.current = new window.AudioContext();
        await audioContextRef.current.audioWorklet.addModule(
          "/src/worklets/recorderWorkletProcessor.js"
        );
        audioContextRef.current.resume();
        audioInputRef.current =
          audioContextRef.current.createMediaStreamSource(stream);
        processorRef.current = new AudioWorkletNode(
          audioContextRef.current,
          "recorder.worklet"
        );
        processorRef.current.connect(audioContextRef.current.destination);
        audioContextRef.current.resume();
        audioInputRef.current.connect(processorRef.current);
        processorRef.current.port.onmessage = (event: any) => {
          const audioData = event.data;
          connection.emit("send_audio_data", { audio: audioData });
        };
        setIsRecording(true);
      } else {
        console.error("No connection");
      }
    })();
    return () => {
      if (isRecording) {
        processorRef.current?.disconnect();
        audioInputRef.current?.disconnect();
        if (audioContextRef.current?.state !== "closed") {
          audioContextRef.current?.close();
        }
      }
    };
  }, [connection, isRecording, recorder]);

  return (
    <React.Fragment>
      <Container className="py-5 text-center">
        <Container fluid className="py-5 bg-primary text-light text-center ">
          <Container>
            <Button
              className={isRecording ? "btn-danger" : "btn-outline-light"}
              onClick={connect}
              disabled={isRecording}
            >
              Start
            </Button>
            <Button
              className="btn-outline-light"
              onClick={disconnect}
              disabled={!isRecording}
            >
              Stop
            </Button>
          </Container>
        </Container>
        <Container className="py-5 text-center">
          {recognitionHistory.map((tx, idx) => (
            <p key={idx}>{tx}</p>
          ))}
          <p>{currentRecognition}</p>
        </Container>
      </Container>
    </React.Fragment>
  );
};

export default AudioToText;
server.js
const express = require("express");
const speech = require("@google-cloud/speech");
//use logger
const logger = require("morgan");
//use body parser
const bodyParser = require("body-parser");
//use cors
const cors = require("cors");
const http = require("http");
const { Server } = require("socket.io");

const app = express();
app.use(cors());
app.use(logger("dev"));
app.use(bodyParser.json());

const server = http.createServer(app);
const io = new Server(server, {
  cors: {
    origin: "http://localhost:3000",
    methods: ["GET", "POST"],
  },
});

//TODO: run in terminal first to set up credentials: export GOOGLE_APPLICATION_CREDENTIALS="./speech-to-text-key.json"
const speechClient = new speech.SpeechClient();

io.on("connection", (socket) => {
  let recognizeStream = null;
  console.log("** a user connected - " + socket.id + " **\n");

  socket.on("disconnect", () => {
    console.log("** user disconnected ** \n");
  });

  socket.on("send_message", (message) => {
    console.log("message: " + message);
    setTimeout(() => {
      io.emit("receive_message", "got this message" + message);
    }, 1000);
  });

  socket.on("startGoogleCloudStream", function (data) {
    startRecognitionStream(this, data);
  });

  socket.on("endGoogleCloudStream", function () {
    console.log("** ending google cloud stream **\n");
    stopRecognitionStream();
  });

  socket.on("send_audio_data", async (audioData) => {
    io.emit("receive_message", "Got audio data");
    if (recognizeStream !== null) {
      try {
        recognizeStream.write(audioData.audio);
      } catch (err) {
        console.log("Error calling google api " + err);
      }
    } else {
      console.log("RecognizeStream is null");
    }
  });

  function startRecognitionStream(client) {
    console.log("* StartRecognitionStream\n");
    try {
      recognizeStream = speechClient
        .streamingRecognize(request)
        .on("error", console.error)
        .on("data", (data) => {
          const result = data.results[0];
          const isFinal = result.isFinal;
          const transcription = data.results
            .map((result) => result.alternatives[0].transcript)
            .join("\n");
          console.log(`Transcription: `, transcription);
          client.emit("receive_audio_text", {
            text: transcription,
            final: isFinal,
          });
        });
    } catch (err) {
      console.error("Error streaming google api " + err);
    }
  }

  function stopRecognitionStream() {
    if (recognizeStream) {
      console.log("* StopRecognitionStream \n");
      recognizeStream.end();
    }
    recognizeStream = null;
  }
});

server.listen(8081, () => {
  console.log("WebSocket server listening on port 8081.");
});

// =========================== GOOGLE CLOUD SETTINGS ================================ //
// The encoding of the audio file, e.g. 'LINEAR16'
// The sample rate of the audio file in hertz, e.g. 16000
// The BCP-47 language code to use, e.g. 'en-US'
const encoding = "LINEAR16";
const sampleRateHertz = 16000;
const languageCode = "en-US"; //en-US
const alternativeLanguageCodes = ["en-US", "ko-KR"];

const request = {
  config: {
    encoding: encoding,
    sampleRateHertz: sampleRateHertz,
    languageCode: languageCode,
    //alternativeLanguageCodes: alternativeLanguageCodes,
    enableWordTimeOffsets: true,
    enableAutomaticPunctuation: true,
    enableWordConfidence: true,
    enableSpeakerDiarization: true,
    diarizationSpeakerCount: 2,
    model: "video",
    //model: "command_and_search",
    useEnhanced: true,
    speechContexts: [
      {
        phrases: ["hello", "안녕하세요"],
      },
    ],
  },
  interimResults: true,
};

What's the difference between Async.queue and Promise.map?

I tried to stress-test my API in Express, and to handle multiple requests I used Bluebird's Promise.map and then Async.queue with a concurrency option.
Promise.map:
import Bluebird from 'bluebird'; // assuming Bluebird is the Promise library used here

export const myapi = async (args1, args2) => {
  console.log('args:', args1, args2);
  let testing_queue = [];
  testing_queue.push(new Promise(async (resolve, reject) => {
    let result = await doAComplexQuery(args1, args2); // SELECT... JOIN...
    if (!result || result.length <= 0)
      reject(new Error('Cannot find anything!'));
    resolve(result);
  }));
  return await Bluebird.map(testing_queue, async item => {
    return item;
  }, {concurrency: 4});
};
Async.queue: (https://www.npmjs.com/package/async)
import Async from 'async'; // assuming the async package linked above

export const myapi = async (args1, args2) => {
  console.log('args:', args1, args2);
  let testing_queue = Async.queue(function (task, callback) {
    console.log('task', task);
    callback();
  }, 4);
  testing_queue.push(async function () {
    let result = await doAComplexQuery(args1, args2); // SELECT... JOIN...
    if (!result || result.length <= 0)
      throw new Error('Cannot find anything!');
    return result;
  });
};
And I try to make as many requests as possible:
import Axios from 'axios'; // assuming Axios is the HTTP client used here

const response = async function () {
  return await Axios.post('http://localhost:3000/my-api', {
    "args1": "0a0759eb",
    "args2": "b9142db8"
  }, {}).then(result => {
    return result.data;
  }).catch(error => {
    console.log(error.message);
  });
};

for (var i = 0; i < 10000; i++) {
  response();
}
And run. Approach #1 returns many ResourceTimeout or socket hang up responses. Meanwhile, approach #2 returns a success response for all requests and runs even faster.
So is Async.queue better in this case?
I think it could help the speed if you raise the concurrency limit on your Promise.map.
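For example (a sketch based on the same Bluebird.map call from the question; 50 is just an arbitrary example value, not a recommendation):

return await Bluebird.map(testing_queue, async item => {
  return item;
}, {concurrency: 50}); // lets more items be processed in parallel than concurrency: 4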

Express - Error: Can't set headers after they are sent.

When I test my application with siege (siege -b -r 1 -c 100 https://*****/RTC/stats/rank),
I get this error in my Node.js console:
_http_outgoing.js:491
throw new Error('Can\'t set headers after they are sent.');
^
Error: Can't set headers after they are sent.
at validateHeader (_http_outgoing.js:491:11)
at ServerResponse.setHeader (_http_outgoing.js:498:3)
at ServerResponse.header (/home/nodeJS/RTC-stats/node_modules/express/lib/response.js:767:10)
at ServerResponse.send (/home/nodeJS/RTC-stats/node_modules/express/lib/response.js:170:12)
at ServerResponse.json (/home/nodeJS/RTC-stats/node_modules/express/lib/response.js:267:15)
at ServerResponse.send (/home/nodeJS/RTC-stats/node_modules/express/lib/response.js:158:21)
at Request._callback (/home/nodeJS/RTC-stats/server.js:46:14)
at Request.self.callback (/home/nodeJS/RTC-stats/node_modules/request/request.js:186:22)
at emitTwo (events.js:126:13)
at Request.emit (events.js:214:7)
Here you can find the code I used:
const request = require('request');
const express = require('express');
const app = express();

let players = [
  "df6c767a-c4a9-4a42-bbbc-e34c7b4f1e16",
  "22366c4f-744a-422b-81ff-e21608dd5950",
  "c553f6b4-da31-4879-88a6-5dfa28cae1ac",
  "5e5cfb2e-29a5-407e-972b-9999fcd567af",
  "5fcc4c0e-13ca-49d2-a949-b74ef17d146f",
  "094fd818-f794-404c-a2a4-674d3be5e7d3",
];
let responses = [];
let completed_requests = 0;

app.get('/RTC/stats/rank', function (ereq, eres) {
  for (var i = 0, len = players.length; i < len; i++) {
    var playerUUID = players[i];
    var options = {
      url: 'https://r6db.com/api/v2/players/' + playerUUID + '?platform=PC',
      headers: {
        'x-app-id': '5e23d930-edd3-4240-b9a9-723c673fb648'
      },
    };
    request(options, function (err, res, body) {
      if (err) { return console.log(err); }
      var playerInfo = JSON.parse(body);
      responses.push(playerInfo.rank.emea);
      completed_requests++;
      if (completed_requests === players.length) {
        completed_requests = 0;
        eres.send(responses);
        responses = [];
      }
    });
  }
});

app.listen(3000);
I think there is a problem with the way I send the requests to the API. My best guess is that it has to do with timing and sending the result back to the client. Or is using siege a bad way of testing this application?
const players = []; // fill with the player UUIDs from the question

app.get(`/`, async (ereq, eres) => {
  const response = []; // collect results per request instead of sharing state across requests
  const http = (options) => {
    return new Promise((resolve, reject) => {
      request(options, (err, res, body) => {
        if (err) {
          return reject(err);
        }
        return resolve(JSON.parse(body));
      });
    });
  };

  for (let i = 0; i < players.length; i++) {
    var playerUUID = players[i];
    var options = {
      url: 'https://r6db.com/api/v2/players/' + playerUUID + '?platform=PC',
      headers: {
        'x-app-id': '5e23d930-edd3-4240-b9a9-723c673fb648'
      },
    };
    try {
      let playerInfo = await http(options);
      response.push(playerInfo.rank.emea);
    } catch (e) {
      console.log(e); // I think that you should handle this a better way than just console.logging...
    }
  }
  return eres.json(response);
});
Just make sure that you use promises. Awaiting each promise slows the for loop down so that you get a proper response and can check whatever you need to before returning. If you need to send a response inside the for loop, then I would recommend that you break out of the loop before it continues and tries to send data to a response that has already been sent.
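As a rough sketch of that last point (hypothetical helper name, not the exact code above): once a response has been sent inside the loop, break so nothing else writes to it:

for (let i = 0; i < players.length; i++) {
  let playerInfo;
  try {
    playerInfo = await http(buildOptions(players[i])); // buildOptions is a hypothetical helper
  } catch (e) {
    eres.status(502).json({ error: 'upstream request failed' });
    break; // the response is already sent, so stop the loop instead of sending again
  }
  response.push(playerInfo.rank.emea);
}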

NodeJS PostgreSQL

I have a question: how can I output all rows from PostgreSQL? Right now I am getting some errors. Please help me. Thanks.
This is my code:
'use strict'

const res = client.query("SELECT * FROM public", function (err, rows, fileds) {
  const row = [];
  for (let i = 0; i < rows.length; i++) {
    row = rows[i];
    console.log(row);
  }
  rows.forEach(async function (row) {
    console.log(row.name);
  });
  console.log('Finish');
});

const func = ms => new Promise(res => setTimeout(res, ms));
console.dir({func});
console.dir(res);
client.end();
As you are not using a pool yet, I assume you are using a version of pg older than 6. You should do:
return client.query(sqlStatement)
  .then(res => {
    client.end();
    return res.rows;
  })
  .catch(e => {
    client.end();
    console.error(e);
    throw e;
  });
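For completeness, a minimal sketch of how that promise-based query could be wired up end to end (assuming the pg Client; the connection string and table name are placeholders):

const { Client } = require('pg');

async function fetchRows(sqlStatement) {
  const client = new Client({ connectionString: process.env.DATABASE_URL }); // placeholder connection
  await client.connect();
  try {
    const res = await client.query(sqlStatement);
    return res.rows; // every row returned by the query
  } finally {
    await client.end();
  }
}

// usage
fetchRows('SELECT * FROM my_table') // placeholder table name
  .then(rows => rows.forEach(row => console.log(row.name)))
  .catch(console.error);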