wit.ai support for languages other than English

I just started playing with the wit.ai NLP and Bot Engine, but I've run into some difficulties getting it to work with the Polish language.
In particular, the built-in entities/functions (like wit/number or wit/age-of-person) seem not to work at all.
So here is my question: does it make any sense to use wit.ai for languages other than English?
or
Can I verify whether wit.ai is trained for a particular language?

Good question! Wit currently supports 50 languages: https://wit.ai/blog/2016/04/28/new-languages. Some of them, like Polish, are still in beta.
Wit relies heavily on machine learning for built-in entities like wit/location, so the more Polish apps there are (i.e., more validated expressions), the better it will get.
For some built-in entities like wit/datetime, wit/duration, wit/age-of-person, Wit uses a probabilistic parser that we open sourced here: duckling.wit.ai
Any help from the community is more than welcome... So if you want to participate, don't hesitate to look at the repo and check whether your language is covered or needs improvement.
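If you want to check empirically whether a built-in entity resolves for a Polish sentence, one quick way is to send a test utterance to the /message endpoint and inspect the entities in the response. A minimal sketch in Python (the access token is a placeholder from your app's settings, and the exact response shape depends on the API version you request):

import requests

WIT_TOKEN = "YOUR_SERVER_ACCESS_TOKEN"  # placeholder: copy it from your app's settings

def extract_entities(utterance):
    """Send one utterance to Wit and return whatever entities it resolved."""
    resp = requests.get(
        "https://api.wit.ai/message",
        params={"q": utterance, "v": "20160526"},  # version date is just an example
        headers={"Authorization": "Bearer " + WIT_TOKEN},
    )
    resp.raise_for_status()
    return resp.json().get("entities", {})

# A Polish sentence with a number and an age; if wit/number or wit/age-of-person
# is trained for Polish, they should appear in the printed dictionary.
print(extract_entities("Mam 25 lat i dwie siostry"))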

Related

What is meant by, and where do I find "the documentation"?

I am new to programming, and when I look for solutions to questions, people often recommend that question-askers "read the documentation".
By documentation, are people referring to sites that describe the use cases for various functions and features of a specific language (e.g., w3schools, devdocs)?
Or is this something released by the creators of the specific programming language?
When I have tried looking up "the documentation" for specific languages, I am usually presented with "documentation" in the sense of using comments to ensure that your code is readable for the next developer.
Specifically, I am looking for something that goes into more depth on CSS than w3schools does, so I can develop a deeper understanding of it.
Programming languages, software, web applications, etc. normally have some kind of user manual or "how-to" instructions.
Example: https://docs.python.org/3/
This is the actual documentation of the latest Python release.
You'll usually find the documentation, or a link to it, on the homepage of a product's or service's website.

In which language was the game Thimbleweed Park made?

I am a game developer and I wanted to know in which language Thimbleweed Park was made. Was it C#, C++, or something else? I searched on Google but there is no relevant information about it.
According to Wikipedia:
Gilbert had already started to look for adventure game engines in August 2014, but because of his experience of always wanting to modify engines to do exactly what he wants from them, he decided it would be easier to create his own engine.
He already had a 2D graphics engine written in C/C++ that he had used for his non-adventure games The Big Big Castle! and Scurvy Scallywags, which he decided to use for Thimbleweed Park; SDL was used for handling window creation and input, while Gilbert's own code was used for rendering the graphics. The only other thing that was needed for the engine was a scripting language; Gilbert had looked at Lua, and while he considered it "easy to integrate and highly optimized", he disliked its syntax. He considered making his own scripting language, but due to time concerns, he chose the language Squirrel instead.

Part-Of-Speech tagging and Named Entity Recognition for C/C++/Obj-C

need some help!
I'm trying to write some code in objective-c that requires part-of-speech tagging, and ideally also named entity recognition. I don't have much interest in "rolling my own", so I'm looking for a decent library to use for this purpose. Obviously the more accurate the better, but we're not talking anything critical here -- so as long as it's generally pretty accurate that's good enough.
It's going to be English-only, at least for the time being, but I don't want to have to do any training of models myself. So whatever the solution, it has to have an English language model already built.
And finally, it has to be available via a commercial-friendly license (e.g. BSD/Berkeley, LGPL). Can't do GPL or anything restrictive like that, though I'm open to paying a small amount for a commercial license if that's the only option.
C, C++ or Obj-C code is all fine.
So: Anyone familiar with something that'd do the trick here? Thanks!!
I suggest you check out the iOS 5 beta release notes.
As you've probably figured out, most of the freely available NLP code is in Python, Perl, or Java. However, a quick look at Stanford's NLP tools page shows a few things available in C/C++. Another list of tools can be found at a blog post.
Of the POS taggers, YamCha is well known, though I have not used it myself (being a Java/Python/Perl guy).
Unfortunately, I cannot suggest any NER NLP tools. However, I bet there's a MaxEnt or SVM implementation in C/C++ that you can work with; the basic workflow (sketched in Python after the list) is:
1) create your training data and annotate it
2) define your features
3) use the ml library
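Not C/C++, but to make those three steps concrete, here is a rough sketch of the same workflow in Python, using scikit-learn as the ML library; the tiny training set and the feature choices are invented purely for illustration:

from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression  # a maxent-style classifier

# 1) create your training data and annotate it (a real system needs far more)
train = [
    ("John", "PERSON"), ("lives", "O"), ("in", "O"), ("Boston", "LOCATION"),
    ("Mary", "PERSON"), ("visited", "O"), ("Paris", "LOCATION"), ("today", "O"),
]

# 2) define your features: whatever cheap cues you can compute per token
def features(token):
    return {
        "lower": token.lower(),
        "is_capitalized": token[0].isupper(),
        "suffix2": token[-2:],
    }

# 3) use the ML library
vec = DictVectorizer()
X = vec.fit_transform([features(tok) for tok, _ in train])
y = [label for _, label in train]
clf = LogisticRegression().fit(X, y)

print(clf.predict(vec.transform([features("London")])))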
Sorry I can't be of more help, but if anything else comes to mind I'll add it.
Maybe once I figure out Objective-C to a respectable degree I'll write an NLP library for it!

Development frameworks for 2D game?

I wanted to start working on a simple game concept I have, just as a side project/hobby/learning experience.
Pygame or Pyglet came immediately to mind, but it looks like they aren't being actively developed. Or perhaps they are, but extremely slowly.
I want a high-level programming language, multi-OS support, 2D focus (or suitable for 2D stuff, anyway), and active development. What are my options?
I use Pygame and think it is a great option. The community is fairly active; I see a new or updated project posted about every other day on Pygame's homepage.
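To give a feel for how little code a Pygame project needs to get going, here is a minimal window-plus-game-loop sketch (window size, colours, and frame rate are arbitrary choices):

import pygame

pygame.init()
screen = pygame.display.set_mode((640, 480))
pygame.display.set_caption("Hello Pygame")
clock = pygame.time.Clock()

running = True
while running:
    for event in pygame.event.get():
        if event.type == pygame.QUIT:  # window close button
            running = False

    screen.fill((30, 30, 30))                                   # clear the frame
    pygame.draw.circle(screen, (200, 50, 50), (320, 240), 40)   # draw something
    pygame.display.flip()                                       # show the frame
    clock.tick(60)                                              # cap at 60 FPS

pygame.quit()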
Another good option would be the Love2d framework, which you can find here: http://love2d.org/. Love2d uses the language Lua, which is a high-level, beginner-friendly language like Python.
Pygame and Pyglet are indeed excellent choices. If the level of activity concerns you, just take a look at the Google Code page for Pyglet; it seems fairly active to me.
If you prefer a more actively developed engine (less documentation, but more up-to-date approaches), check out the Monocle Engine:
http://monoclepowered.org/
If you want to get your hands dirty and take a more do-it-yourself approach, projects like SDL, SFML, and Allegro may provide a good foundation for a cross-platform 2D engine.
Another, more recent project that provides a very useful set of 2D primitives is the Clutter framework:
http://www.clutter-project.org/
Add a sound library like FMOD or BASS and a physics engine like Box2D or Chipmunk, and you can build pretty much anything.
The PopCap (a.k.a. SexyApp) engine was a popular choice in the past.
http://sourceforge.net/projects/popcapframework/
For a real-world example, check out the acclaimed indie game World of Goo, which uses it.
http://en.wikipedia.org/wiki/World_of_Goo
If a Python API is preferred, there seems to be an effort to provide one here:
http://www.farbs.org/pycap.html

Tools available to do semantic analysis of text

I'm looking for code or a product or a service to do semantic analysis of text (sentences and or paragraphs) to categorize the text by general topic, e.g.
Finance
Entertainment
Technology
Business
Art
etc...
If you have a bunch of examples that have already been categorised, you can use these to train a classifier.
This is a very simple document classification problem, and any suite of machine learning tools will have the algorithms and tutorials for it. For instance, check out Weka: http://www.cs.waikato.ac.nz/ml/weka/
or RapidMiner: http://rapid-i.com/content/blogcategory/38/69/
If your needs are limited, and you just want a simple API, you cannot go wrong with this Naive Bayes library: https://ci-bayes.dev.java.net/
Good luck!
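To make the "train a classifier from categorised examples" idea concrete, here is a minimal sketch in Python with scikit-learn; the example documents and topic labels are invented, and any of the tools above implement the same basic approach:

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Invented training examples: (text, topic) pairs you have already categorised.
texts = [
    "stocks fell as the central bank raised interest rates",
    "the new smartphone ships with a faster processor",
    "the band announced a world tour next summer",
    "quarterly earnings beat analyst expectations",
]
topics = ["Finance", "Technology", "Entertainment", "Finance"]

# Bag-of-words features plus Naive Bayes: the textbook document classifier.
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(texts, topics)

print(model.predict(["the studio released a trailer for its new film"]))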
If you want to evaluate a commercial service API, check out the VIKI engine APIs:
http://www.softwareevolution.it/en/products/viki-core-api.html
It is an easy-to-use JSON service API with specific semantic features.
Would this be of any help to you?
http://en.wikipedia.org/wiki/Document_classification
It's not a finished product, service, or code, but it describes the various algorithms that can be used for semantic analysis. Googling a bit further, I believe this isn't really out of the laboratory yet. People are mostly experimenting with KNN algorithms, resulting in cool stuff, but not really what you need:
http://www.ebi.ac.uk/webservices/whatizit/info.jsf
But if there is some software that will do what you ask, it would be in this list:
http://www.kdnuggets.com/software/text.html
For example, the LPU program seems to be able to learn if you feed it enough training documents.
http://www.cs.uic.edu/~liub/LPU/LPU-download.html
If you're into Python or other interpreted languages, check out the excellent NLTK framework at nltk.org. It has an excellent how-to page and a recently published O'Reilly book.
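If you try NLTK for the topic-categorisation question above, its classifiers work directly from plain Python feature dictionaries; a minimal sketch with invented examples:

import nltk

# In practice you would derive features from whole documents, not single phrases.
def features(text):
    return {word: True for word in text.lower().split()}

train = [
    (features("central bank raises interest rates"), "Finance"),
    (features("new smartphone has a faster processor"), "Technology"),
    (features("the band announced a world tour"), "Entertainment"),
]

classifier = nltk.NaiveBayesClassifier.train(train)
print(classifier.classify(features("chip maker unveils a new processor")))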
If you're into Java and/or require a more mature but harder-to-grasp framework, try GATE instead.