HTML5 Canvas for a falling-word game - API

I want to develop a game with the following features:
1. The user will log in.
2. The letters of a word will fall from the sky, and the user must complete the word before they hit the bottom.
3. The words will be pulled from a database.
4. The reward points the user gathers on completing the task will be converted to a corresponding "mobile recharge top-up" and sent to the user's mobile.
I was planning to do this in HTML5 using the Canvas element. Could you let me know if this is possible?
I have looked at 5 mobile recharge API services, but none of them are satisfactory so far. Any direction here?
To give you an idea of my expertise with this, I am totally new to web programming. I have been a systems programmer before, and I need to develop this for a research project studying whether low-income workers can be attracted to spend time on the web if enough economic incentive is provided.
I sincerely appreciate your time and help.
Thank you,
Mrunal

This is what I have found.
It is a half-baked script:
http://www.javascriptsource.com/games/falling-by-tim-withers-120409100502.html
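If it helps, here is a minimal sketch of the core falling-letter mechanic on a canvas, written in TypeScript (it works the same as plain JavaScript). The canvas id, word, positions, and speeds are placeholders I made up; login, database lookups, and reward points would be layered on top of this loop.

```typescript
// Minimal falling-letter sketch. Assumes a page with <canvas id="game" width="500" height="400">.
const canvas = document.getElementById("game") as HTMLCanvasElement;
const ctx = canvas.getContext("2d")!;

interface FallingLetter { char: string; x: number; y: number; speed: number; }

const word = "INCENTIVE"; // in the real game this would come from your database
const letters: FallingLetter[] = [...word].map((char, i) => ({
  char,
  x: 30 + i * 45,
  y: -Math.random() * 200,        // stagger starting heights above the canvas
  speed: 0.5 + Math.random(),
}));

// When the player types a letter that is still falling, remove it (and score it).
document.addEventListener("keydown", (e) => {
  const idx = letters.findIndex((l) => l.char === e.key.toUpperCase());
  if (idx !== -1) letters.splice(idx, 1);
});

function frame(): void {
  ctx.clearRect(0, 0, canvas.width, canvas.height);
  ctx.font = "28px sans-serif";
  for (const l of letters) {
    l.y += l.speed;
    ctx.fillText(l.char, l.x, l.y);
    if (l.y > canvas.height) {
      // Letter reached the bottom: deduct points or end the round here.
    }
  }
  requestAnimationFrame(frame);
}
requestAnimationFrame(frame);
```

So yes, this is entirely feasible with the Canvas element; the game loop is the easy part, and the database, login, and recharge integration are ordinary server-side work.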


How to get access to this paper?

So I am doing my own research, and I need to read this paper.
CALVIN, T. W. (1977). "TNT Zero Acceptance Number Sampling." ASQC Technical Conference Transactions, Philadelphia, PA.
https://hero.epa.gov/hero/index.cfm/reference/details/reference_id/8389081
However, I checked Google Scholar, used the academic VPN, and searched my university library;
none of them have access to this paper.
I REALLY need this paper, and I do not live in the United States, so I cannot go to the library there.
Is there any chance you know how to get access to this?
Thank you so so much!

Choosing a chat-bot framework for a data science research project and understanding the hidden costs of development and rollout?

The question is about using a chat-bot framework in a research study, where one would like to measure the improvement of a rule-based decision process over time.
For example, we would like to understand how to improve the process of medical condition identification (and treatment) using the minimal set of guided questions and patient interaction.
A medical condition can be formulated into workflow rules by doctors. A possible technical approach for such a study would be to develop an app or website that patients can access, where they ask free-text questions that a predefined rule-based chat-bot addresses. During the study a doctor will monitor the collected data, improve the rules and possible responses, and provide new responses when the workflow reaches a dead end. We plan to collect the conversations and apply machine learning to generate an improved workflow tree (and questions) over time; however, the plan is to do all data analysis and processing offline, and there is no intention of building a full product.
This is a low-budget academic study. The PhD student has good development skills and data science knowledge (Python) and will be accompanied by a fellow student who will work on the engineering side. One of the conversational-AI options recommended for data scientists was RASA.
I spent the last few days reading and playing with several chat-bot solutions: RASA and Botpress; I also looked at Dialogflow and read tons of comparison material, which makes the choice more challenging.
From the sources on the internet it seems that RASA might be a better fit for data science projects, but it would be great to get a sense of the real learning curve and how fast one can expect to have a working bot, especially one whose rules have to be continuously updated.
A few things to clarify: we do have data to generate the questions and are in touch with doctors to improve their quality. It seems we need a way to present participants with multiple choices and provide answers (not just free text). Being on the research side, there is no need to align with any specific big provider (i.e. Google, Amazon, or Microsoft) unless it has a benefit. The important considerations are time, money, and flexibility: we would like to have a working approach in a few weeks (and continuously improve it), and the whole experiment will run for no more than 3-4 months. We also need to be able to extract all the data. We are not sure which channel is best for such a study (WhatsApp? A website? Something else?) and what complexities are involved.
Any thoughts about the challenges and considerations of dealing with chat-bots would be valuable.
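For what it's worth, if you do go with RASA, driving it from your own web page is essentially a small HTTP client against its REST channel. Below is a rough sketch, assuming a locally running RASA server on the default port 5005 with the "rest" channel enabled in credentials.yml; the sender id and message are made up for illustration. Button payloads are one way to present the multiple-choice answers you mention, and logging every request/response pair to your own store covers the requirement of extracting all the data for offline analysis.

```typescript
// Rough sketch of a client for RASA's REST channel (POST /webhooks/rest/webhook).
interface RasaReply {
  recipient_id: string;
  text?: string;
  buttons?: { title: string; payload: string }[]; // usable for multiple-choice answers
}

async function sendToBot(sender: string, message: string): Promise<RasaReply[]> {
  const res = await fetch("http://localhost:5005/webhooks/rest/webhook", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ sender, message }),
  });
  return res.json();
}

// One participant turn. Logging each request/response pair to your own store
// keeps the raw conversations available for offline analysis later.
sendToBot("participant-42", "I have a headache and a fever").then((replies) => {
  for (const r of replies) {
    if (r.text) console.log("bot:", r.text);
    r.buttons?.forEach((b) => console.log("  option:", b.title));
  }
});
```

Because the channel is just HTTP, the same bot can sit behind a website, a WhatsApp connector, or anything else later, so the channel decision does not have to block the first working version.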

How do robots see a CAPTCHA, or a group of alphanumeric characters?

Why do websites insist on users entering CAPTCHAs?
I think even robots are capable of reading them...
I want to know how this whole CAPTCHA thing works.
For the past ten years robots were unable to identify the distorted text shown in a CAPTCHA, but nowadays robots can identify distorted text and even door number plates, which so many websites use as CAPTCHAs.
I am not sure if this is the perfect answer, but you might want to take a look at this blog post, where the author mentions that so far no program or robot has been able to read a CAPTCHA or any other form of distorted text: while computer programs can read simple text or text in images, no system has yet been developed that can scan and read distorted text or CAPTCHAs.
There is a thing called the Turing test, which differentiates a machine from a human. CAPTCHAs are designed so that machines fail that test. But using AI, machines have nowadays started passing Turing tests.
CAPTCHAs typically display alphanumeric characters in a distorted way so that the human brain can process them in a way automatic character recognition cannot. This trick is how the website administrator can tell humans and robots apart.
As machine learning algorithms get better, CAPTCHAs are getting more complicated for actual humans to solve. This is an arms race between website administrators trying to keep their sites safe from robots and hackers trying to create fake accounts in an automated way.
Google's reCAPTCHA asks you to read two fields. One is an actual CAPTCHA; the other is an image that Google's machine learning system has failed to read (such as a house street number from Google Street View). By solving this CAPTCHA, you are helping the machine to learn.
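To make the "distorted text" idea concrete, here is a minimal browser sketch in TypeScript, assuming a page with a <canvas id="captcha"> element. Real CAPTCHA services generate the image server-side and add much heavier warping, noise, and occlusion; this only illustrates per-character jitter and rotation.

```typescript
// Minimal illustration of rendering a distorted alphanumeric challenge string.
// Assumes <canvas id="captcha" width="240" height="80"> on the page.
const captchaCanvas = document.getElementById("captcha") as HTMLCanvasElement;
const cctx = captchaCanvas.getContext("2d")!;

const alphabet = "ABCDEFGHJKLMNPQRSTUVWXYZ23456789"; // skip ambiguous characters
const challenge = Array.from({ length: 6 },
  () => alphabet[Math.floor(Math.random() * alphabet.length)]).join("");

cctx.clearRect(0, 0, captchaCanvas.width, captchaCanvas.height);
cctx.font = "32px serif";
[...challenge].forEach((ch, i) => {
  cctx.save();
  cctx.translate(20 + i * 35, 50 + (Math.random() * 12 - 6)); // jitter the baseline
  cctx.rotate((Math.random() - 0.5) * 0.6);                   // random tilt
  cctx.fillText(ch, 0, 0);
  cctx.restore();
});

// A few random strokes to break up character outlines for naive OCR.
for (let i = 0; i < 5; i++) {
  cctx.beginPath();
  cctx.moveTo(Math.random() * captchaCanvas.width, Math.random() * captchaCanvas.height);
  cctx.lineTo(Math.random() * captchaCanvas.width, Math.random() * captchaCanvas.height);
  cctx.stroke();
}
// The server keeps `challenge` and compares it with what the user types.
```

The whole point of the distortion and noise is that a human reads through it easily while straightforward character recognition does not, which is exactly the gap that modern machine learning keeps closing.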

Guidelines for GIS Application Testing

I am a software tester by profession and have worked on various technologies to date. I got a new assignment which is a GIS application. I am not aware of how to test a GIS application, what parameters should be considered while testing, etc.
I would really appreciate it if anyone could help me out with some guidelines for testing GIS applications.
Thank you in advance. :)
Ashok, possibly in the time since the question was asked you have turned into a GIS testing expert, but let me try to answer. :)
I would focus on what the app should do with geometries:
Does it take into account the correct types of geometries, and does it ignore the incorrect ones? (See the sketch below.)
If the app builds its own geometries based on the original geometries, I would try different topologies that may be problematic for doing this. Say the app should draw a geometry 5 px to the left of some original geometry, parallel to it; I would try a loop less than 10 px in diameter so that there is no room 5 px to the left. And so on.
I would test with huge volumes of data: what happens if the app tries to consume a worldwide net of such geometries?
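To illustrate the first point, here is a rough sketch of the kind of geometry-type checks I would automate. `isSupportedGeometry` is a hypothetical stand-in for whatever validation the application under test actually performs, and the GeoJSON-like shapes are just sample inputs.

```typescript
// Hypothetical example of testing geometry-type validation.
type Geometry =
  | { type: "Point"; coordinates: [number, number] }
  | { type: "LineString"; coordinates: [number, number][] }
  | { type: "Polygon"; coordinates: [number, number][][] };

function isSupportedGeometry(g: Geometry): boolean {
  switch (g.type) {
    case "Point":
      return g.coordinates.length === 2;
    case "LineString":
      return g.coordinates.length >= 2;                       // at least two vertices
    case "Polygon":
      return g.coordinates.length > 0 &&
             g.coordinates.every((ring) => ring.length >= 4); // closed rings need 4+ points
  }
}

// One valid case per type plus degenerate cases the app should ignore or reject.
const cases: [string, Geometry, boolean][] = [
  ["simple point",      { type: "Point", coordinates: [10, 20] },                     true],
  ["degenerate line",   { type: "LineString", coordinates: [[0, 0]] },                false],
  ["open polygon ring", { type: "Polygon", coordinates: [[[0, 0], [1, 0], [0, 0]]] }, false],
];

for (const [name, geom, expected] of cases) {
  console.assert(isSupportedGeometry(geom) === expected, `failed: ${name}`);
}
```

The same table-driven approach extends naturally to the tricky topologies (tight loops, self-intersections) and to load tests with very large geometry sets.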

API to break voice into phonemes / synthesize new speech given speech samples?

You know those movies where the tech geeks record someone's voice, and their software breaks it into phonemes, which they can then use to type in any phrase and make it seem as if the target is saying it?
Does that software exist as an API? I don't even know what to Google.
There is no such software. Breaking arbitrary speech into its constituent phonemes is only a partially solved problem: speech-to-text software is still imperfect, as is text-to-speech.
The idea is to reproduce the timbre of the target's voice. Even if you were able to segment the audio perfectly, reordering the phonemes would produce audio with unnatural cadence and intonation, not to mention splicing artifacts. At that point you're getting into smoothing, time-scaling, and pitch correction, all of which are possible and well-understood in theory, but operate poorly on real-world data, especially when the audio sample in question is as short as a single phoneme, and further when the timbre needs to be preserved.
These problems are compounded on the phonetic side by allophonic variation in sounds based on accent and surrounding phonemes; in order to faithfully produce even a low-quality approximation of the audio, you'd need a detailed understanding of the target's language, accent, and speech patterns.
Furthermore, your ultimate problem is one of social engineering, and people are not easy to fool when it comes to the voices of people they know. Even with a large corpus of input data, at best you could get a short low-quality sample, hardly enough for a conversation.
So while it's certainly possible, it's difficult; even if it existed, it wouldn't always be good enough.
SRI International (the company that created Siri for iOS) has an SDK called EduSpeak, which will take audio input and break it down into individual phonemes. I know this because I sat through a demo of the product about a week ago. During the demo, the presenter showed us an application that was created using the SDK. The application gave a few lines of text for the presenter to read. After reading the text, the application displayed a bar chart where each bar represented a phoneme from his speech. The height of each bar represented a score of how well each phoneme was pronounced (the presenter was not a native English speaker, so he received lower scores on certain phonemes compared to others). The presenter could also click on each individual bar to have only that individual phoneme played back using the original audio.
So yes, software exists that divides audio up by phoneme, and it does a very good job of it. Now, whether or not those phonemes can be re-assembled into speech is an open question. If we end up getting a trial version of the SDK, I'll try it out and let you know.
If your aim is to mimic someone else's voice, then another approach is to convert your own voice (instead of assembling phonemes). It is (surprisingly) called voice conversion, e.g. http://www.busim.ee.boun.edu.tr/~speech/projects/Voice_Conversion.htm
The technology is called "voice synthesis" and "voice recognition".
The Java API for this can be found here: Java voice JSAPI
Apple has an API for this: Apple speech
Microsoft has several... one is discussed here: Vista speech
Lyrebird is a start-up that is working on this very problem. Given samples of a person's voice and some written text, it can synthesize a spoken version of that written text in the voice of the person in the samples.
You can get interesting voice warping effects with a formant-aware pitch shift. Adobe Audition has a pretty good implementation. Antares produces some interesting vocal effects VST plugins.
These techniques use some form of linear predictive coding (LPC) to treat the voice as a source-filter model. LPC works on speech signals by estimating the resonance of the vocal tract (formant), reversing its effect with an inverse filter, and then coding the resulting residual signal. The residual signal is ideally an impulse train that represents the glottal impulse. This allows the scaling of pitch and formants independently, which leads to a much better gender conversion result than simple pitch shifting.
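To make the LPC step concrete, here is a rough sketch (in TypeScript) of the analysis pass only: autocorrelate one frame of speech, run the Levinson-Durbin recursion to get the vocal-tract filter coefficients A(z), and inverse-filter the frame to obtain the residual. This is the textbook analysis stage, not what Audition or the Antares plugins actually implement, and resynthesis with modified pitch and formants is the harder part.

```typescript
function autocorrelate(frame: Float64Array, maxLag: number): Float64Array {
  const r = new Float64Array(maxLag + 1);
  for (let lag = 0; lag <= maxLag; lag++) {
    let sum = 0;
    for (let n = lag; n < frame.length; n++) sum += frame[n] * frame[n - lag];
    r[lag] = sum;
  }
  return r;
}

// Levinson-Durbin: solves the LPC normal equations in O(order^2).
// Returns a with a[0] = 1, so A(z) = 1 + a[1] z^-1 + ... + a[order] z^-order.
function levinsonDurbin(r: Float64Array, order: number): Float64Array {
  const a = new Float64Array(order + 1);
  a[0] = 1;
  let error = r[0];
  for (let i = 1; i <= order; i++) {
    let acc = r[i];
    for (let j = 1; j < i; j++) acc += a[j] * r[i - j];
    const k = -acc / error;   // reflection coefficient
    const prev = a.slice();   // coefficients from the previous iteration
    for (let j = 1; j < i; j++) a[j] = prev[j] + k * prev[i - j];
    a[i] = k;
    error *= 1 - k * k;       // remaining prediction error
  }
  return a;
}

// Inverse filter: e[n] = x[n] + a[1] x[n-1] + ... + a[p] x[n-p].
// Ideally the residual approximates the glottal impulse train.
function residual(frame: Float64Array, a: Float64Array): Float64Array {
  const e = new Float64Array(frame.length);
  for (let n = 0; n < frame.length; n++) {
    let sum = frame[n];
    for (let k = 1; k < a.length && k <= n; k++) sum += a[k] * frame[n - k];
    e[n] = sum;
  }
  return e;
}

// Typical use: 20-30 ms windowed frames, model order roughly sampleRate/1000 + 2.
const frame = new Float64Array(320).map(() => Math.random() * 2 - 1); // stand-in audio
const coeffs = levinsonDurbin(autocorrelate(frame, 18), 18);
const excitation = residual(frame, coeffs);
console.log(coeffs.length, excitation.length);
```

Modifying the residual (pitch) and the filter (formants) independently, then re-filtering, is what gives the better gender-conversion results mentioned above.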
I dunno about a commercially available solution, but the concept isn't entirely out of the range of possibility. For example, the University of Delaware has fairly decent software for doing just that.
http://www.modeltalker.com