App I want to make
I would like to build an audio recognition mobile app like Shazam using
Expo
Expo AV (https://docs.expo.io/versions/latest/sdk/audio)
Tensorflow serving
Socket.IO
I want to send recording data to a machine-learning-based recognition server via Socket.IO every second, or for every sample (though sending data sample-rate times per second is probably too much), and then have the mobile app receive and show the predicted result.
Problem
How do I get data from recordingInstance while recording? I read the Expo Audio documentation, but I couldn't figure out how to do it.
So far
I ran two examples:
https://github.com/expo/audio-recording-example
https://github.com/expo/socket-io-example
Now I want to combine the two examples. Thank you for reading. Even being able to console.log the recording data would help a lot.
Related questions
https://forums.expo.io/t/measure-loudness-of-the-audio-in-realtime/18259
This might be impossible (playing an animation? getting the data in real time?)
https://forums.expo.io/t/how-to-get-the-volume-while-recording-an-audio/44100
No answer
https://forums.expo.io/t/stream-microphone-recording/4314
According to this question,
https://www.npmjs.com/package/react-native-recording
seems to be a solution, but it requires ejecting.
I think I found a good solution to this problem.
await recordingInstance.prepareToRecordAsync(recordingOptions);
recordingInstance.setOnRecordingStatusUpdate(checkStatus);
recordingInstance.setProgressUpdateInterval(10000); // fire checkStatus every 10 seconds
await recordingInstance.startAsync();
setRecording(recordingInstance);
After creating and preparing the recording above, I register a callback that runs every 10 seconds.
const duration = status.durationMillis / 1000;
const info = await FileSystem.getInfoAsync(recording.getURI());
const uri = info.uri;
console.log(`Recording Status: ${status.isRecording}, Duration: ${duration}, Metering: ${status.metering}, Uri: ${uri}`);
if (duration > 10 && duration - prevDuration > 0) {
  sendBlob(uri);
}
setPrevDuration(duration);
The callback checks whether the duration is greater than 10 seconds and whether the difference from the last duration is greater than 0; if so, it sends the data through the WebSocket.
Currently the only problem is that the callback doesn't fire the first time around, only from the second time on.
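For reference, here is a sketch of what sendBlob might look like (it is not from the original post). Assumptions: `socket` is a connected socket.io-client instance, `readFileAsBase64` wraps Expo's `FileSystem.readAsStringAsync` with base64 encoding, and the event name `recording-chunk` is made up. Both collaborators are passed in so the decision logic stays testable on its own.

```javascript
// Mirrors the check in the status callback above: past the 10 s mark
// and the recording is still advancing.
function shouldSend(durationSec, prevDurationSec) {
  return durationSec > 10 && durationSec - prevDurationSec > 0;
}

// Read the recording file and push it to the recognition server.
// `socket` and `readFileAsBase64` are injected dependencies (assumptions).
async function sendBlob(uri, socket, readFileAsBase64) {
  const base64 = await readFileAsBase64(uri);
  socket.emit('recording-chunk', { uri, data: base64 });
}
```

Splitting the check out of the callback also makes it easy to see why the first invocation is skipped: on the first tick `duration - prevDuration` can be 0 if the status update arrives before any audio has accumulated.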
Related
I am new to iOS threads. When I call an API on a particular screen, it doesn't give a response for 60 seconds; in the meantime, other API calls from the same screen or other screens just keep loading. After 60 seconds, the response shows up.
We need to call the APIs asynchronously using Alamofire, but it's not working.
private let alamofireManager: Session

let configuration = URLSessionConfiguration.default
configuration.timeoutIntervalForRequest = 300 // seconds
configuration.timeoutIntervalForResource = 500
alamofireManager = Session(configuration: configuration, serverTrustManager: nil)
alamofireManager.request("sample_api", method: .post, parameters: parameters, encoding: URLEncoding.default, headers: nil).responseJSON { response in }
First, when getting started it's not necessary to customize anything, especially when the values you're customizing aren't much different from the defaults.
Second, Alamofire 5.6 now ships with Swift async / await APIs, so I suggest you use those when getting started. You can investigate your network calls without tying them to specific screens until you understand the calls, how they work, and when they should be called.
Third, a 60 second delay sounds like a timeout, as 60 seconds is the default timeout wait. Is that expected? Make sure you have proper access to your server. You should also print your response so you can see what's happening and whether you got data back or an error.
Fourth, use a Decodable type to encapsulate your expected response; it makes interacting with various APIs much simpler. Do not use Alamofire's responseJSON, as it's now deprecated.
So you can start your experiment using Alamofire's default APIs.
let response1 = await AF.request("url1").serializingDecodable(Response1.self).response
debugPrint(response1)
let response2 = await AF.request("url2").serializingDecodable(Response2.self).response
debugPrint(response2)
I'm working on a Discord bot which handles a multiplayer game with RPG elements - those elements allow users to perform different income activities at specified intervals (the best example would be EPIC RPG).
Considering the game is multiplayer and pretty much entirely interval-based, I want to prevent players from using any automation that lets them take the top ranks with minimum effort, and keep the game fair!
I'm currently running it in a small test server and already had a guy using something that allowed him to send those commands every 10 seconds (EDIT: from his personal user account), resulting in over 5,000 commands within 16 hours. He's very mysterious about the details of whatever he's using to achieve this. I also found out he can even set multiple intervals at once, which countered the solution I tried and will describe next.
What I tried
Implementing a captcha which randomly appears when a user sends the command, with a temporary ban when the user fails to complete it - this is only a partial solution, because he can still run the automation while doing other work and pass the captcha when it pops up
Implementing a bonus captcha which appears when a user sends the command at the same interval twice in a row - this only works if there is one timer; setting more timers counters it
So my question, which by now is pretty obvious I'd say, is: How can I detect automation (interval patterns?) in those commands, so I can annoy the botters with captchas until they'd rather give up and play manually?
I'll be very grateful for any ideas and suggestions! <3
PS: I'm surprised he's been getting away with this for weeks and months - sending 5,000 messages a day, though not every day, I believe. Isn't that API abuse, violating Discord's ToS?
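One rough sketch of the interval-pattern detection asked about above: record the timestamp of each command per user and flag users whose intervals are too regular to be human. The names `looksAutomated` and `intervalStats` and the thresholds are made up for illustration, not an established anti-bot technique.

```javascript
// Compute mean and standard deviation of the gaps between timestamps (ms).
function intervalStats(timestampsMs) {
  const intervals = [];
  for (let i = 1; i < timestampsMs.length; i++) {
    intervals.push(timestampsMs[i] - timestampsMs[i - 1]);
  }
  const mean = intervals.reduce((a, b) => a + b, 0) / intervals.length;
  const variance =
    intervals.reduce((a, b) => a + (b - mean) ** 2, 0) / intervals.length;
  return { mean, stdDev: Math.sqrt(variance) };
}

// Flag a user when their command intervals have a suspiciously low
// coefficient of variation. minSamples and maxCv are guessed thresholds.
function looksAutomated(timestampsMs, minSamples = 10, maxCv = 0.05) {
  if (timestampsMs.length < minSamples) return false;
  const { mean, stdDev } = intervalStats(timestampsMs);
  // Even a human spamming a command jitters by far more than 5% of the mean.
  return stdDev / mean < maxCv;
}
```

A 10-second timer produces near-zero variance, while human spam jitters by hundreds of milliseconds. Multiple stacked timers would blur the overall distribution, so in practice you might run the same check on subsets of the gaps as well.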
This is a very easy question to answer.
The answer is:
const Discord = require('discord.js');
const client = new Discord.Client();
const prefix = "!";

client.on('ready', () => { console.log('I am Ready'); });

client.on('message', (message) => {
  // Ignore messages without the prefix and messages sent by bots.
  if (!message.content.startsWith(prefix) || message.author.bot) return;
  const args = message.content.slice(prefix.length).split(/ +/);
  const command = args.shift().toLowerCase();
  if (command === "ping") {
    return message.channel.send('Pong!');
  }
});

client.login('INSERT TOKEN HERE');
So the if statement at the start of the handler checks whether the message starts with the prefix and whether it was sent by a bot, and returns early in either case.
Hope This helped.
Happy Coding!
I was wondering if it is okay to set a time interval that triggers a function every 3 seconds. Let's say I have 5 different screens in my application, and all 5 screens have a 3-second interval that keeps calling a function to auto-refresh the screen.
My concern is: will it cause heavy traffic on the server if there are multiple users using the app at the same time and the server keeps receiving requests from the app?
Sample code :
componentDidMount() {
  this.interval = setInterval(() => {
    this.loadCase();
  }, 3000);
}

componentWillUnmount() {
  clearInterval(this.interval);
}

loadCase() {
  CaseController.loadCase().then((data) => {
    if (data.status === true) {
      this.setState({ case: data.case });
    }
  });
}
If you have an API endpoint that you need to poll every 3 seconds and you're looking to avoid redundant calls from the app, try using setInterval in your App.js, or wherever the root of your app is, and dump the result into Redux/whatever state management solution you're using so that you can access it elsewhere.
To answer your question regarding "heavy traffic," yeah, that is inevitably going to be a lot of API calls that your server will need to handle. If it's going to cause issues with the current setup of your API server, I would look closely at your app and see if there's a way you can reduce the effects that large numbers of users will have, whether that's some sort of caching, or increasing the amount of time between API calls, or entirely reconsidering this approach.
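The root-level polling idea above can be sketched like this. `createPoller` is a hypothetical helper, and `onData` stands in for a Redux dispatch or a context setter; the fetch function and interval are whatever your app needs.

```javascript
// One poller at the root of the app, instead of one setInterval per screen.
// fetchFn: async function that calls your API; onData: receives each result.
function createPoller(fetchFn, onData, intervalMs = 3000) {
  let timer = null;
  return {
    start() {
      if (timer) return; // guard against stacking a second interval
      timer = setInterval(async () => {
        try {
          onData(await fetchFn());
        } catch (e) {
          // swallow errors so one failed poll doesn't kill the loop
        }
      }, intervalMs);
    },
    stop() {
      clearInterval(timer);
      timer = null;
    },
  };
}
```

You would start it once when the app mounts, stop it when the app backgrounds, and have every screen read the shared state instead of polling on its own, which cuts the per-user request rate from five calls per tick to one.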
I am trying to record a call in chrome that is done with sipML5.
So basically the call handling is working fine: calls go through and both parties can hear each other perfectly.
The problem comes when I want to record the call.
So I just did the bare minimum to get a recording.
let chunks = [];
let rec;
navigator.mediaDevices.getUserMedia({ audio: true })
  .then((stream) => {
    rec = new MediaRecorder(stream);
    rec.ondataavailable = (e) => {
      chunks.push(e.data);
    };
  })
  .catch((e) => console.log(e));
Then I start a call with someone; it rings, they answer,
and I run rec.start(); in the console. After 5 seconds of conversation I end the call and run rec.stop(); in the console.
Then I do:
blob = new Blob(chunks, { type: 'audio/ogg' });
$('.audio-remote', document).attr('src', URL.createObjectURL(blob));
to create the blob and provide the audio element with the src it needs.
That audio element then immediately starts playing back the recording, and it contains only the browser side of the conversation; nothing from the cellphone side is heard. So only my voice through my mic is recorded.
Now I am not an expert on everything WebRTC, but it looks like getUserMedia provides a stream of only your own input devices and has nothing to do with the fact that you're having a SIP-based call with another party. Presumably this is what sipML5 does internally: it takes my input and sends it to the SIP provider we use, which translates it into the phone call, and the voice coming from the call is just presented to me through that audio element.
So I want to know if there is way to capture this conversation as is?
To capture both my input and the voice data from the sip client as one conversation.
Thanks in advance.
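For what it's worth, here is a sketch of one way this might be done with the Web Audio API: mix the local microphone stream and the remote stream into a single destination node and record that instead. It assumes the remote audio is reachable via the audio element's srcObject (with captureStream() as a fallback); recordMixedCall is a hypothetical name, not a sipML5 API.

```javascript
// Mix local mic and remote call audio into one recordable stream.
// localStream: the MediaStream from getUserMedia.
// remoteAudioElement: the <audio> element sipML5 plays the far end through.
function recordMixedCall(localStream, remoteAudioElement) {
  const ctx = new AudioContext();
  const dest = ctx.createMediaStreamDestination();

  // Route both sides of the conversation into the same destination node.
  ctx.createMediaStreamSource(localStream).connect(dest);
  const remoteStream =
    remoteAudioElement.srcObject || remoteAudioElement.captureStream();
  ctx.createMediaStreamSource(remoteStream).connect(dest);

  const rec = new MediaRecorder(dest.stream);
  const chunks = [];
  rec.ondataavailable = (e) => chunks.push(e.data);
  rec.start();

  return {
    // Resolves with a Blob of the mixed conversation once recording stops.
    stop() {
      return new Promise((resolve) => {
        rec.onstop = () => resolve(new Blob(chunks, { type: 'audio/ogg' }));
        rec.stop();
      });
    },
  };
}
```

The key difference from the snippet above is that the MediaRecorder records the destination node's mixed stream rather than the getUserMedia stream alone.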
I'm building an app which requests data for several albums and playlists when it first loads.
For each of these I am calling either
models.Album.fromURI(uri, function(album){});
or
models.Playlist.fromURI(uri, function(playlist){});
For the majority of the time these work fine and I can get info from the album or playlist within the callback function; however, occasionally (5% of the time) the callback is never called and I'm left with an incomplete data set for my app to display.
I'm wondering if anyone else has encountered similar problems or has any insight into what might be causing it (API bugs, request rate limiting, etc.)?
Unfortunately, the Spotify Apps API 0.X lacked an error callback function that could be called when something went wrong when calling models.Album.fromURI or models.Playlist.fromURI.
This has been greatly improved in the Spotify Apps API 1.x through the use of Promises:
models.Track.fromURI('spotify:track:6a41rCqZhb2W6rpMolDR08').load('name')
.done(function(track) { console.log(track.name); })
.fail(function(track, error) { console.log(error.message); });
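For apps that can't move off the 0.x API, one possible workaround is to wrap the call with a timeout and retry, on the assumption that a failed fromURI call simply never invokes its callback. This is a sketch; fromURIWithRetry and its parameters are hypothetical, not part of either API version.

```javascript
// Wrap a callback-only fromURI-style function with a timeout and retries.
// fromURIFn: e.g. models.Album.fromURI, taking (uri, callback).
// onSuccess / onFailure: called exactly once, whichever settles first.
function fromURIWithRetry(fromURIFn, uri, onSuccess, onFailure,
                          retries = 2, timeoutMs = 5000) {
  let settled = false;
  const timer = setTimeout(() => {
    if (settled) return;
    settled = true;
    if (retries > 0) {
      // No callback arrived in time: try again with one fewer retry left.
      fromURIWithRetry(fromURIFn, uri, onSuccess, onFailure,
                       retries - 1, timeoutMs);
    } else {
      onFailure(uri);
    }
  }, timeoutMs);
  fromURIFn(uri, (result) => {
    if (settled) return;
    settled = true;
    clearTimeout(timer);
    onSuccess(result);
  });
}
```

With something like this, the 5% of silently dropped requests either succeed on a retry or at least surface as an explicit failure your app can display.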