DiscordJS how to detect inhuman commands - automation

I'm working on a Discord bot which handles a multiplayer game with RPG elements - those RPG elements allow users to perform different income activities at specified intervals (the best example would be EPIC RPG).
Considering the game is multiplayer and pretty much only interval based, I want to prevent players from using any automation that lets them take the top ranks with minimum effort, and keep the game fair!
I'm currently running it in a small test server and already had a guy using something that allowed him to send those commands every 10 seconds (EDIT: from his personal user account), resulting in over 5000 commands within 16 hours. He's very mysterious about the details of whatever he's using to achieve this outcome. I also found out he can even set multiple intervals at once, which countered the solution I tried and will describe next.
What I tried
Implementing a captcha which randomly appears when the user sends the command, plus a temporary ban when the user fails to complete the captcha - This is only a partial solution, because he can still run the automation while doing other work around it and pass the captcha when it pops up
Implementing a bonus captcha which appears when the user sends the command at the same interval twice in a row - This only works if there is one timer; setting multiple timers counters it
So my question, which is by now pretty much obvious I'd say, is: how can I detect automation (interval patterns?) on those commands, so I can effectively annoy those botters with captchas until they'd rather give up and play manually?
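For reference, this is roughly the kind of interval tracking I have in mind (just a rough sketch to illustrate the idea; the thresholds and names are made up, and multiple staggered timers would probably still blur it, which is why I'd combine it with the captcha rather than replace it):

// Sketch: record recent command timestamps per user and flag suspiciously regular gaps
const history = new Map(); // userId -> array of recent command timestamps (ms)

function looksAutomated(userId, now = Date.now()) {
    const times = history.get(userId) || [];
    times.push(now);
    if (times.length > 20) times.shift(); // keep only the last 20 commands
    history.set(userId, times);
    if (times.length < 10) return false; // not enough data yet

    // Gaps between consecutive commands, and how much they vary
    const gaps = times.slice(1).map((t, i) => t - times[i]);
    const mean = gaps.reduce((a, b) => a + b, 0) / gaps.length;
    const variance = gaps.reduce((a, g) => a + (g - mean) ** 2, 0) / gaps.length;
    const stdDev = Math.sqrt(variance);

    // A human's gaps jitter by seconds; a timer's gaps barely vary at all
    return stdDev < 500; // under 0.5s of jitter across 10+ commands looks scripted
}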
I'll be very grateful for any ideas and suggestions! <3
PS: I'm surprised he's been getting away with that for weeks and months - sending 5000 messages a day, even though not every day I believe. Isn't that API abuse violating Discord's ToS?

This is a very easy question to answer.
The answer is:
const Discord = require('discord.js');
const client = new Discord.Client();
let prefix = "!";
client.on('ready', () => {console.log('I am Ready')});
client.on('message', (message) => {
    // Ignore messages that don't start with the prefix, and messages from bot accounts
    if (!message.content.startsWith(prefix) || message.author.bot) return;
    var args = message.content.slice(prefix.length).split(/ +/);
    var command = args.shift().toLowerCase();
    if (command === "ping") {
        return message.channel.send('Pong!');
    }
});
client.login('INSERT TOKEN HERE');
The if statement at the start of the handler checks whether the message starts with the prefix and whether the message was sent by a bot account.
Hope this helped.
Happy Coding!


Expo: Get audio data realtime and send via Socket.IO

App I want to make
I would like to make an audio recognition mobile app like Shazam with
Expo
Expo AV(https://docs.expo.io/versions/latest/sdk/audio)
Tensorflow serving
Socket.IO
I want to send recording data to a machine-learning-based recognition server via Socket.IO every second or every sample (maybe it is too much to send data sample-rate times per second), and then the mobile app receives and shows the predicted result.
Problem
How do I get data from recordingInstance while recording? I read the Expo audio documentation, but I couldn't figure out how to do it.
So far
I ran two examples:
https://github.com/expo/audio-recording-example
https://github.com/expo/socket-io-example
Now I want to mix the two examples. Thank you for reading. If I could console.log the recording data, it would help a lot.
Related questions
https://forums.expo.io/t/measure-loudness-of-the-audio-in-realtime/18259
This might be impossible (to play animation? to get data in real time?)
https://forums.expo.io/t/how-to-get-the-volume-while-recording-an-audio/44100
No answer
https://forums.expo.io/t/stream-microphone-recording/4314
According to this question,
https://www.npmjs.com/package/react-native-recording
seems to be a solution, but it requires eject.
I think I found a good solution to this problem.
// Prepare the recording, register the status callback, and set how often it fires (every 10s here)
await recordingInstance.prepareToRecordAsync(recordingOptions);
recordingInstance.setOnRecordingStatusUpdate(checkStatus);
recordingInstance.setProgressUpdateInterval(10000);
await recordingInstance.startAsync();
setRecording(recordingInstance);
Above, after creating and preparing the recording, I added a callback function that runs every 10 seconds. The callback looks like this:
const checkStatus = async (status) => {
    const duration = status.durationMillis / 1000;
    const info = await FileSystem.getInfoAsync(recording.getURI());
    const uri = info.uri;
    console.log(`Recording Status: ${status.isRecording}, Duration: ${duration}, Metering: ${status.metering}, Uri: ${uri}`);
    // Only send once the recording is longer than 10s and the duration has actually advanced
    if (duration > 10 && duration - prevDuration > 0) {
        sendBlob(uri);
    }
    setPrevDuration(duration);
};
The callback function checks that the duration is greater than 10 seconds and that the difference from the last duration is greater than 0, and then sends the data over the WebSocket.
Currently the only problem is that it doesn't run the callback the first time, only from the second time onwards.
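For completeness, sendBlob is left undefined in the snippet above; a minimal sketch of how it might read the file and push it over Socket.IO could look like this (the server URL, the 'audio-chunk' event name and the base64 encoding are assumptions, not part of the original code):

import * as FileSystem from 'expo-file-system';
import io from 'socket.io-client';

const socket = io('https://example.com'); // hypothetical recognition server

// Read the recorded file as base64 and emit it to the server over Socket.IO
const sendBlob = async (uri) => {
    const base64 = await FileSystem.readAsStringAsync(uri, {
        encoding: FileSystem.EncodingType.Base64,
    });
    socket.emit('audio-chunk', { uri, data: base64 });
};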

TheTVDB API - Starting out

I'm looking for assistance for the bare minimum code to pull some information from the TheTVDB API (v3).
I've never coded anything to do with APIs before.
I tried to take a shortcut using TVDBSharper, but that uses asynchronous routines, tasks, etc., which I just can't get my head around at the moment, given the documentation is for C# and I clearly don't understand how "await" works in VB.
I've tried searching for API examples, but most are about creating an API.
The first thing TheTVDB API documentation says is:
"Users must POST to the /login route with their API key and credentials in the following format in order to obtain a JWT token."
^ I don't know how to POST. Any examples I've seen are very long and confusing, and mostly in C#.
So (and I apologise for this drivel, but I've tried on and off for months now)…
Could someone please show me the minimal amount of VB.NET code to pull the show name for, for example, series ID 73739 (Lost)? Hopefully from there I can start to figure some things out.
I have a valid API Key from the TheTVDB.
Mostly you don't need to understand async/await in any great detail but I was once where you are now, and though I don't claim to be an expert, I did manage to get my head around it like this:
You know how, if you had something that threw an exception and you never caught it:
Sub Main(arguments)
    Whatever()
End Sub

Sub Whatever()
    StuffBefore()
    OtherWhateverThrowsException()
    StuffAfter()
End Sub

Sub OtherWhateverThrowsException()
    StuffBefore()
    Throw New Exception("Blah")
End Sub
As soon as you threw that exception, your VB thread would stop what it was doing, and wind its way back up through the call stack until it popped out of the main and crashed to the command line - a Matrix-style "return to the source", if you like.
Async Await is a bit like that. When there's some method that is going to take a long time to do its work (download strings from TVDB) we could make it sit around doing nothing in our code, having a cup of coffee and waiting for TVDB's slow server. This makes things easy to understand because if we sit and wait, we wait 30 seconds, then we get the response, and process the response. Obviously we can't process the response before we get it, so we have to sit around and wait for it, and this is always true.
But it'd be better if we could let our thread nip back out the way it came in, "go back to the source", do something else for someone else, and then call it (or another one of its coworkers, we probably don't care) back to carry on working for us when TVDB's server responds. This is what Async Await does for us. Methods that are marked Async are treated differently by the compiler, something like saving your progress on your Xbox game. If you reach a point where you want to wait, you can issue the waiting command, and the thread that was doing our work performs a savegame, goes off and works for someone else, then when we're ready it comes back, loads the game again and carries on where it left off.
The save game file is manifest as a Task; methods that once upon a time were subs (didn't return anything) should now be Functions that return a Task (a savegame with no associated data). Methods that once upon a time returned something like a string, should now be marked as returning a Task(Of String) - the Task part is to save the state of play (data that VB wants to work with), the string is the data your app wants to work with.
Once you mark something as Async, it needs to contain an Await statement. Await is that SaveYourGameAndGoDoSomethingElseWhileThisFinishes. Typically, while you're awaiting something your program won't have any other stuff it needs the thread to do, so it's not just your Function that calls TVDB's API that needs to Await/be marked Async - every single function in the chain, all the way up and out of your code, needs to be marked as Async, and typically you'll Await at every step of the way back up:
Sub DownloadTVDBButton_Click(arguments)
    DoStuff()
End Sub

Sub DoStuff()
    StuffBefore()
    GetFromTVDB()
    StuffAfter()
End Sub

Sub GetFromTVDB()
    Dim i = 1
    GetDataFromTVDBServer() 'wait 30s for TVDB
    ParseDataFromTVDB()
End Sub

Sub ParseDataFromTVDB()
End Sub
Becomes:
Async Sub DownloadTVDBButton_Click(arguments) 'windows forms event handlers are always Subs. Do not use Async Subs in your own code
    Await DoStuffAsync()
End Sub

Async Function DoStuffAsync() As Task
    StuffBefore()
    Await GetFromTVDBAsync()
    StuffAfter()
End Function

Async Function GetFromTVDBAsync() As Task
    Dim i = 1
    Await GetDataFromTVDBServerAsync() 'go back up, and do something else for 30s
    ParseDataFromTVDB()
End Function

Sub ParseDataFromTVDB() 'downstream; doesn't need to be async/await
End Sub
We switched to using TVDB's Async data call, so we Await it. When we Await, the thread goes back up to the previous function, DoStuffAsync. Because we're awaiting that, the thread goes back up a level again into the button click handler. Because we're awaiting that also, the thread goes back up again and out of your code. It goes back to its regular day job of drawing the UI, making it look like the program is still responding, etc. When the TVDB call completes, the thread comes back to the point just after it (ready to run ParseData), and it has all the data back from TVDB, and the savegame has been reloaded so everything it knew before/the state is as it was (variable i exists and is 1; you could conceive that it would otherwise have been lost when the thread went off to do something else).
In essence, async/await has allowed us to work exactly as we would have done without it; it's just that it built a little savegame mechanism that meant our thread could go off and do something else while TVDB was busy getting our data, rather than having to sit around doing nothing while we waited.
It may also help to think of Await as a device that unpacks a save game and gets your data out of it. If GetSomething() sits for 30s and then returns a String you want, then GetSomethingAsync() will instantly return a Task that will (in 30s, when the work is done) enclose that same String you want, and Await GetSomethingAsync() will wait until the Task is done and then get the string you want out of it.
Methods that are named like "...Async" should be thought of as "behaves in an asynchronous way". They DON'T have to be marked with the Async modifier; Async is only needed if a method uses the Await keyword, but I'm recommending you use Await on everything that returns a Task (i.e. is awaitable) all the way up and down your call tree. When you get more confident you don't always have to Await SomethingAsync, but honestly the overhead of doing so is minimal and the consequences of not doing so are occasionally disastrous. All developers who follow convention name their stuff ...Async if it behaves in an async way; you should adopt this too, and make sure you name all your Async methods with an "Async" at the end of the name.
I don't know how to POST
You don't really need to. The TVDB API has a Swagger endpoint; Swagger is a way of describing a REST service programmatically so that your Visual Studio can build a set of classes to use it and provide you with nicely named things. Whipping out a WebClient and manually creating some JSON is very old school/low level.
TVDB's swagger descriptor is at https://api.thetvdb.com/swagger.json
You're supposed to be able to right click your project, choose Add... Rest API Client:
Paste https://api.thetvdb.com/swagger.json in as the url and pick a namespace (an organizational unit) for all the generated classes to go in.
At the moment something in TVDB's API is causing AutoRest (the tool that VS uses to parse the API spec) to choke, but ordinarily it would work out and you'd get a bunch of generated objects to work with that would do all the POSTing etc. for you (AutoRest generates C#; you'd be best off generating the C# into a new project and then adding a reference to that project from your VB).
As noted, my VS can't process the TVDB API at the moment and I don't have enough time today to figure out why, but you could certainly post a question on AutoRest's GitHub (or on SO) asking "why does https://api.thetvdb.com/swagger.json cause an 'Input string not in correct format' error?" and get some more help.
You asked (maybe implicitly) a couple of follow up questions in the comments:
I don't know about REST/swagger (I've heard of it though), and can't see any way to add to the project as you described, and I'm no closer to getting info from TheTVDB. However, it might have helped me use functions in TVDBSharper. I will just have to try a few things with it. Thanks again
Yes; sorry - I should have been more explicit that "Add REST API client" is only available in a C# project because it relies on a tool that generates C#. This isn't a blocker though - you can just make a C# project and add it to your VB solution alongside your VB project; the two languages are totally interoperable. Your VB can tell your C# what to do.
However, there isn't much point in trying at the moment, because the tool that is supposed to do it can't handle what TVDB is putting out; my VS can successfully ask the TVDB API to describe itself, but it doesn't seem able to understand the response.
In a nutshell: VS has a bug that means it can't use the TVDB API directly, so you're best off going via TvDbSharper. The https://github.com/HristoKolev/TvDbSharper readme has some examples in it. They're C#, but basically "remove the semicolons and they'll pretty much work in VB".
Now, a bit about the headline terms here, as background understanding if you like. API, REST and Swagger are easy enough to explain:
API
An API is effectively a website (in this case run by TVDB), intended for software to consume rather than humans. It takes raw data in and chucks raw data out - unlike a normal website intended for our eyes, nothing about it is presentational in the slightest.
REST
REST as a phrase and a concept is a source of confusion for many. A lot of the time you try to read about what REST means, and the blogs quickly get bogged down with details and make it too complex, with all these funky examples. They kinda forget to explain the REST part because it's come to mean not much at all - it's something so obvious and nondescript that we don't think about it any more.
In essence, something is RESTful if the server doesn't have to remember anything about what you did before in order to service a request you make now - every request stands on its own and can be serviced completely without reference to anything else. This is a different workflow to other forms where you might want to change the name of something by issuing an editname('newname') command. What name actually gets edited depends on whether you first did selectshow() or selectactor(), and also which show or which actor - a workflow like that means the server has to start remembering whether you selected a show or an actor, and which show/actor was selected, before it can process the editname() command. If you selected show 123, the edit would change the name of show 123. If you selected actor 456, the edit would change the name of actor 456.
Critically, if you replayed the same editname() at a different time, a different thing would get edited, because the state of your dialog with the server changes. It's kinda dumb to make the server have to remember all that, for everyone, when really we could push the job of identifying whether we want to rename an actor or a show, and which one, onto the client.
By making it so you have editactorname(123, 'Jon Wayne'), you're transferring all the info the server needs to perform the request: your credentials, the actor id, the new name, and the fact that it's an actor name and not a show name. All this goes in the one request, and you can replay this request as many times as you like at any time, and it always has the same effect; things that happened before don't affect it (well... apart from authentication).
It gets a bit woolly if taken literally - "well, if the server doesn't remember anything, how does it even remember I changed the name of actor 123 to Jon Wayne, so it can service my later request of getactorname(123)?" - but that's more about the state of the data in the server, not the state of your interaction with the server. Things that are truly stateless are mostly purely calculatory and not too useful; something somewhere needs to be able to remember something or there is nothing to calculate. Things are rarely completely stateless; even TVDB's API requires you to authenticate first, using a user/password/apikey, and then the server issues a token that becomes your username/password/apikey equivalent for every subsequent request - the server has to start remembering that token, or every time you quote it it will say "can't edit actor name; not authorized". So, yeah... when viewed holistically, something usually has to be remembered at some point, otherwise nothing works. REST things are rarely 100% truly stateless, but mostly they are - and it's really about that "when you want to edit the actor name, send a) that you want to edit an actor name, b) which actor, c) what name, d) your credentials to prove you're allowed to" - everything the server needs in the one hit.
Swagger
Now called OpenAPI, Swagger is a protocol for describing an API: when an API has some actions that take some data and return some data, it's helpful to know what the actions are called (setactoryearsactive), what type of data they take (date, date), what sort of things you should put in them (the from date, the to date or null if still active), what they return (boolean) and what the return means (true if success, false if not).
If we have a standardized way of describing these things then we can build standard software that reads the standard description of the API and writes a bunch of standard code that uses the API. This is software that writes a description so other software can read it and write software that uses the first set of software. It's an API API.
There is a lot of software here:
The API is software (TVDB),
The thing that generates the description of the API is software (Swagger),
The thing that consumes the description of the API and creates a client is software (AutoRest),
And the thing that uses the client is software (your app).
You could code your app to hit the API directly - the API is just responding to HTTP requests, which are just text files formatted in a particular way sent to port 80 of the web server that hosts the API. You could write one such request in Notepad and use telnet to send it and get a valid response. You could code your app to do it (you were just about to). You could use someone else's library (TvDbSharper) which does it somehow. You could use some software that generates something like TvDbSharper; it reads the description of the API and generates classes for you to use; those classes will make the HTTP requests. Everything can be done at any level; you could write all your apps in assembler, the lowest of the low. It takes ages and it is boring - this is why we use ever higher levels of abstraction.
We make something and then make it do a thousand things and then realize that listing the same code over and over and changing one bit each time is boring, and repetitive and something a computer should do, so we devise ways of making it so software can write the boring repetitive code so that we can do the interesting things.
Swagger and AutoRest are those kinds of things; Swagger inspects all the methods, what they take and return, and generates a regular, consistent description. AutoRest reads it and generates a regular, consistent set of client classes. Then the human uses the client classes to do the interesting things. The AutoRest part doesn't work out for us at the moment; it's written by different people than the Swagger team, so some differences arise: Swagger describes something and AutoRest can't understand it. It will one day, I'm sure (in this game of walls and ladders); such is the nature of open source - everyone has a different set of priorities.
Right now we could probably get AutoRest working by finding the one thing it is choking on and removing it. There may be no need, though: if the TvDbSharper guys have written enough of a set of client classes that you can use TvDbSharper to do everything you need, then it is effectively already the set of client classes AutoRest would have built, maybe more; use TvDbSharper.
The idea behind Swagger and AutoRest is that a TvDbSharper shouldn't need to exist: it's a very specific application, only works with TVDB, only works in .NET.
If we put effort into making Swagger able to generate a description of any API written in any language, and we put effort into making AutoRest able to consume that description and output any language, then we have something more useful than TvDbSharper (and no need for a TvDbSharper), because we can generate something that does the same job (of course, specific applications can be superior, just like bespoke tailored suits are superior, but that's another philosophy for another time).

RxJS polling by interval and when manually called

In my Angular app I'm working on notifications and I have a REST API to call for the user's latest notifications. I need to call this API every few minutes, since it's not really important that the user gets notifications in real time (they probably won't even appear that fast). The idea for refreshing notifications on the client side is as follows:
When the user logs in, start refreshing notifications - this is the first manual call that starts polling the API every few minutes
If the user leaves the app open or is just navigating through the app, don't change the timer and wait out the rest of the interval
If the user opens a subpage where they can perform actions related to notifications, and does so, then refresh notifications and reset the timer
Refresh notifications until logout
I already have working code for the described procedure, but I'm somewhat unsure that it's correct for what I need. Here is the code for performing the calls (for the manual check there is just a Subject, and for stopping the checks there is a subscription to the observable - the code below is actually separated, but is shown in one place for readability):
// Subject for manual triggering
this.checkFeed = new Subject<void>();
// Call for refresh in own method
this.checkFeed.next();
// Waiting for manual refresh or triggering it on some interval after it was last triggered
this.feedSub = this.checkFeed.asObservable()
.switchMap(() => Observable.timer(0, this.interval))
.mergeMap(() => this.fetchChanges())
.distinctUntilChanged(this.compareFeed)
.subscribe(res => this.notify(res));
// Unsubscription when logging out
if (this.feedSub) this.feedSub.unsubscribe();
The part which I'm most unsure about is .switchMap(() => Observable.timer(0, this.interval)), since it needs the 0 to start right away (which is OK, but still doesn't look quite right?). So is there any better way to achieve what I described?
I also have another question: how do I start the notification check from another observable, and which operator should I use? As I mentioned, I have the call to the Subject's next in its own method, like this:
refreshFeed(): void {
this.checkFeed.next();
}
So when some other observable performs the action after which notifications should be refreshed, I need to call this method. What's the correct way to call a void method when another observable gets a response from the API? I was thinking of something like this:
someActionThatCanChangeNotifications(): Observable<any> {
return this.api.get('path/to/endpoint')
.do(() => this.feedService.refreshFeed());
}
Is this OK, or is there a better way?
Thanks in advance for any help!
So basically you have two observables.
One that you call manually:
this.checkFeed
and the interval (let's call it intervalObs):
this.intervalObs = Observable.timer(0, this.interval);
If you see it like this, the easiest way is to merge your two source streams and then do whatever you want.
var mergedSource = Observable.merge(
    this.checkFeed,
    this.intervalObs);
subscription = mergedSource
    .mergeMap(() => this.fetchChanges())
    .subscribe(res => this.notify(res));
Maybe you need to do some more operations in between, but this should give you a more readable alternative.
You can try this working plunker if you want something to play around with: https://plnkr.co/edit/n4nNFEMa4YOh2KSjDpSJ?p=preview
From what I can see you've pretty much done it "correctly". As with programming in general, there are many possible (and correct) solutions to a single problem. Personally, I'd do this the same way.
I can give you some commentary on the two points you mentioned too:
.switchMap(() => Observable.timer(0, this.interval))
Observable.timer is pretty much an Observable.interval with a custom delay before the first value. Observable.timer(0, this.interval) is the correct usage.
An alternative could be Observable.of(0).concat(Observable.interval(this.interval)), which emits a value immediately and then starts the interval. I prefer the way you put it, however; I think it clearly states your intention: "Produce a value after 0 milliseconds, and then an interval of this.interval".
.do(() => this.feedService.refreshFeed())
I'd say this is totally the correct way of doing it. do is meant for side effects, e.g. stuff that happens outside the observable.
I can say, though, that I wouldn't expect someActionThatCanChangeNotifications to kick off a refresh of the feed. When a function returns an observable, I would expect it to return an observable that doesn't have any side effects. However, as we live in a non-perfect world, we can't always have what we want.
You can't expect every subscriber to remember to do .do(() => this.feedService.refreshFeed()); instead I'd add a notice in the doc comment for the function: "Note: The returned observable will refresh the feed on every next signal", or something of that kind.
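For example, the doc comment could be sketched like this (just a suggestion for the wording, reusing the method from the question):

/**
 * Performs the action against the API.
 * Note: as a side effect, the returned observable will refresh the
 * notification feed on every next signal.
 */
someActionThatCanChangeNotifications(): Observable<any> {
    return this.api.get('path/to/endpoint')
        .do(() => this.feedService.refreshFeed());
}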

Socket.io Rooms in a Hostile Network Environment?

I have a very frustrating problem with a client's network environment, and I'm hoping someone can lend a hand in helping me figure this out...
They have an app that for now is written entirely inside of VBA for Excel. (No laughing.)
Part of my helping them improve their product and user experience involved converting their UI from VBA form elements to a single WebBrowser element that houses a rich web app which communicates between Excel and their servers. It does this primarily via a socket.io server/connection.
When the user logs in, a connection is made to a room on the socket server.
Initial "owner" called:
socket.on('create', function (roomName, userName) {
    socket.username = userName;
    socket.join(roomName);
});
Followup "participant" called:
socket.on('adduser', function (userName, roomName) {
    socket.username = userName;
    socket.join(roomName);
    servletparam = roomName;
    var request = require('request');
    request(bserURL + servletparam, function (error, response, body) {
        io.sockets.to(roomName).emit('messages', body);
    });
    servletparam = roomName + '|' + userName;
    request(baseURL + servletparam, function (error, response, body) {
        io.sockets.to(roomName).emit('participantList', body);
    });
});
This all worked beautifully until we got to the point where their VBA code would lock everything up, causing the socket connection to get lost. When the client surfaces from its forced, VBA-induced pause (which lasts anywhere from 20 seconds to 3 minutes), I try to join the room again by passing an onclick to an HTML element that triggers a script to rejoin. Oddly, that doesn't work. However, if I wait a few seconds and click the element by hand, it does rejoin the room. Yes, the click is getting received from the Excel file... we see the message reach the socket server, but it doesn't allow that call to rejoin the room.
Here's what makes this really hard to debug. There's no ability to see a console in VBA's WebBrowser object, so I use weinre as a remote debugger, but a) it seems to not output logs and errors to the console unless I'm triggering them to happen in the console, and b) it loses its connection when socket.io does, and I'm dead in the water.
Now, for completeness, if I remove the .join() calls and the .to() calls, it all works like we'd expect it to, minus the fact that all messages get written into one big non-private room. So it's an issue with rejoining rooms.
As a long-time user of StackOverflow, I know that a long question with very little code is frowned upon, but there is absolutely nothing special about this setup (which is likely part of the problem). It's just simple emits and broadcasts (from the client). I'm happy to fill anything in based on followup questions.
To anyone that might run across this in the future...
The answer is to manage your room reconnection on the server side of things. If your client can't make reliable connections, or is getting disconnected a lot, the trick is to keep track of the rooms on the server side and rejoin them when the client connects.
The other piece of this that was a stumper was that the chat server and the web UI weren't on the same domain, so I couldn't share cookies to know who was connecting. In their case there wasn't a need to have them hosted in two different places, so I merged them, had Express serve the UI, and then when the client surfaced after a forced disconnect, I'd look at their user ID cookie, match them to the rooms they had been in (which I kept track of on the server), and rejoin them.
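A minimal sketch of that server-side bookkeeping might look something like this (the cookie name, the in-memory map and the event names are assumptions for illustration, not the actual production code):

// Hypothetical sketch: remember each user's rooms on the server and rejoin them on connect
var roomsByUser = {}; // userId -> array of room names

io.on('connection', function (socket) {
    // Assumes a same-domain setup so the user ID cookie arrives with the handshake
    var cookies = socket.handshake.headers.cookie || '';
    var match = cookies.match(/userId=([^;]+)/);
    var userId = match ? match[1] : null;

    // Rejoin any rooms this user was in before the connection dropped
    if (userId && roomsByUser[userId]) {
        roomsByUser[userId].forEach(function (roomName) {
            socket.join(roomName);
        });
    }

    socket.on('create', function (roomName, userName) {
        socket.username = userName;
        socket.join(roomName);
        if (userId) {
            roomsByUser[userId] = roomsByUser[userId] || [];
            if (roomsByUser[userId].indexOf(roomName) === -1) roomsByUser[userId].push(roomName);
        }
    });
});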

Porting PHP API over to Parse

I am a PHP dev looking to port my API over to the Parse platform.
Am I right in thinking that you only need cloud code for complex operations? For example, consider the following methods:
// Simple function to fetch a user by id
function getUser($userid) {
return (SELECT * FROM users WHERE userid=$userid LIMIT 1)
}
// another simple function, fetches all of a user's allergies (by their user id)
function getAllergies($userid) {
return (SELECT * FROM allergies WHERE userid=$userid)
}
// Creates a script (story?) about the user using their user id
// Uses their name and allergies to create the story
function getScript($userid) {
$user = getUser($userid)
$allergies = getAllergies($userid)
return "My name is {$user->getName()}. I am allergic to {$allergies}"
}
Would I need to implement getUser()/getAllergies() endpoints in Cloud Code? Or can I simply use Parse.Query("User")... thus leaving me with only the getScript() endpoint to implement in cloud code?
Cloud Code is for computation-heavy operations that should not be performed on the client, e.g. handling a large dataset.
It is also for performing beforeSave/afterSave and similar hooks.
In your example, provided you have set up a reasonable data model, none of the operations require Cloud Code.
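For instance, the allergies lookup could run straight from the client with something like this (a sketch that assumes an Allergy class mirroring the allergies table above; adjust the class and field names to your actual data model):

// Hypothetical client-side equivalent of getAllergies($userid)
function getAllergies(userid) {
    const query = new Parse.Query('Allergy');
    query.equalTo('userid', userid); // or a pointer to the _User object, depending on your model
    return query.find(); // resolves to an array of Allergy objects
}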
Your approach sounds reasonable. I tend to put simple queries that will most likely not change on the client side, but it all depends on your scenario. When developing mobile apps I tend to put a lot of code in Cloud Code; I've found that it speeds up my development cycle. For example, if someone finds a bug and it's in Cloud Code: make the fix, run parse deploy, done! The change is available to all mobile environments instantly! If that same code is in my mobile app, it really sucks, because now I have to fix the bug, rebuild, push it to the App Store/Google Play, wait x number of days for it to be approved, have the users download it... you see where I'm going here.
Take for example your
SELECT * FROM allergies WHERE userid=$userid query.
Even though this is a simple query, what if you want to sort it? Maybe add some additional filtering?
These are the kinds of things I think of when deciding where to put the code. Hope this helps!
As a side note, I have also found cloud code very handy when needing to add extra security to my apps.
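And for the getScript() piece, a Cloud Code function could be sketched roughly like this (this assumes a Parse Server 3.x-style async cloud function and the same hypothetical Allergy class as above):

// Hypothetical Cloud Code version of getScript() from the question
Parse.Cloud.define('getScript', async (request) => {
    const user = await new Parse.Query(Parse.User)
        .get(request.params.userid, { useMasterKey: true });
    const allergies = await new Parse.Query('Allergy')
        .equalTo('userid', request.params.userid)
        .find({ useMasterKey: true });
    const list = allergies.map((a) => a.get('name')).join(', ');
    return `My name is ${user.get('name')}. I am allergic to ${list}`;
});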