I've gone through every PDF on 01.org (Intel Graphics for Linux). I've Googled, DuckDuckGo'ed, and I have even Binged! But I cannot find any information about the acronyms at all.
Is there anyone out there who knows what they mean?
I want to generate speech from text in my BB10 app to give audio feedback to the user.
(The screen reader in the accessibility features is not sufficient.)
Has anybody already successfully implemented text-to-speech?
There are countless open source projects that do this on PC platforms. Your best bet may be adapting one of them to your needs. – Josh C
Any library you would recommend? It should have a C or C++ interface, it must work offline (no server-based solution), and it should not occupy too much memory. – thowa
I had to check to make sure it was written in C++, which it is. It is called eSpeak. I heard about it nearly 7 years ago when I was looking for a speech synthesizer powerful and robust enough to sound like a human. I believe it was eSpeak, and back then it was a complicated task to get it to produce realistic-sounding speech.
http://sourceforge.net/projects/espeak/files/
This one looks promising as well; however, it is written in Java.
http://mary.dfki.de/Download/openmary-open-source-emotional-text-to-speech-synthesis-system-released
Found here https://github.com/marytts/marytts
I'm trying to give a little more clarity to TTS sentences by indicating emphasis, etc. I'm using the Chrome TTS API, whose documentation indicates that it accepts SSML-formatted documents in addition to raw text.
After many attempts, and after reading a few comments on the web, it doesn't look like this is actually supported, or possibly it is left up to individual voices to implement.
Does anyone know:
Has SSML been abandoned under Chrome?
If not, is there any indication whether they expect to support it via the native voices, or whether they're hoping that someone else will implement it?
Do any Chrome voices currently exist that support this?
Thanks!
I'm a Chrome engineer. SSML support has not been implemented yet, but it's planned. Obviously not all engines will support it, but when we implement SSML support we'll also implement stripping of the SSML markup for engines that don't support it.
Sorry the documentation is misleading here.
Star this bug to express interest and get notified when it's fixed: https://code.google.com/p/chromium/issues/detail?id=88072
If anyone's looking at this later, you can control prosody on Mac Chrome using Apple's native command syntax, at least for the default voices:
the square root of [[pbas +4]] 2 [[char LTRL]]a[[char NORM]] to the [[pbas +4]] 14 [[char LTRL]]x[[char NORM]]
Documented here.
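Those [[...]] delimiters are Apple's embedded speech-command syntax, so you can try the same string outside Chrome by feeding it straight to the system synthesizer. A rough Swift sketch (assumes macOS and the default system voice; purely illustrative, not Chrome-specific):

    import AppKit

    // Quick test of Apple's embedded speech commands ([[pbas ...]], [[char ...]])
    // using the system synthesizer with the default voice.
    let synth = NSSpeechSynthesizer()
    let text = "the square root of [[pbas +4]] 2 [[char LTRL]]a[[char NORM]] " +
               "to the [[pbas +4]] 14 [[char LTRL]]x[[char NORM]]"
    if !synth.startSpeaking(text) {
        print("Could not start speaking")
    }
    // Keep a command-line process alive until playback finishes.
    while synth.isSpeaking {
        RunLoop.current.run(until: Date().addingTimeInterval(0.1))
    }

The same embedded commands also work with the say command-line tool, if you just want to hear the effect quickly.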
I am writing an OS X application which uses NSSpeechSynthesizer to read text to the user and highlights the word it is reading in the text field (NSTextView). This is done by implementing the speechSynthesizer:willSpeakWord:ofString: method of the NSSpeechSynthesizerDelegate protocol. This method provides the range (NSRange) of the word it is about to speak, which I forward to the setSelectedRange: method of my NSTextView.
All is well until I stop the reading with the stopSpeaking method. If I set it to read again after this, the ranges provided by speechSynthesizer:willSpeakWord:ofString: seem to be out of sync. It seems to hesitate for a few seconds before calling speechSynthesizer:willSpeakWord:ofString: again, and when it finally does, the ranges lag somewhat behind the speech.
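For reference, the setup is roughly equivalent to this Swift sketch (the real project is Objective-C; the class and property names here are made up):

    import AppKit

    // Hypothetical illustration of the setup described above.
    final class Reader: NSObject, NSSpeechSynthesizerDelegate {
        // Default voice for this sketch; the real app selects a specific voice via setVoice(_:).
        private let synth = NSSpeechSynthesizer()
        weak var textView: NSTextView?

        func read(_ text: String) {
            synth.delegate = self
            _ = synth.startSpeaking(text)
        }

        func stop() {
            synth.stopSpeaking()
        }

        // Called just before each word is spoken; select that word in the text view.
        func speechSynthesizer(_ sender: NSSpeechSynthesizer,
                               willSpeakWord characterRange: NSRange,
                               of string: String) {
            textView?.setSelectedRange(characterRange)
            textView?.scrollRangeToVisible(characterRange)
        }
    }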
I have written a simple application illustrating the issue, which can be found here:
http://dl.dropbox.com/u/12516679/SpeechTest.zip
I hope that someone will look at this code (it really is simple) and either confirm that this indeed seems to be a bug, or (hopefully) tell me what I am doing wrong.
- UPDATE -
It turns out that the problem occurs with non-English voices. I was originally using Ida, which is a Danish voice. I have now tested it with many different voices and I can confirm that it works well with all English voices. However, it falls out of sync with Danish, Swedish, Norwegian, and Dutch voices. It probably affects other languages as well, but these are the ones I have tested so far.
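In case anyone wants to repeat the per-voice test, the installed voices can be listed like this (a Swift sketch; the actual project is Objective-C, and as far as I know there is no voice attribute that identifies the vendor, so each voice still has to be tested by hand):

    import AppKit

    // Print every installed voice with its name and locale, so the
    // willSpeakWord sync test can be repeated voice by voice.
    for voice in NSSpeechSynthesizer.availableVoices {
        let attrs = NSSpeechSynthesizer.attributes(forVoice: voice)
        let name = attrs[.name] as? String ?? "?"
        let locale = attrs[.localeIdentifier] as? String ?? "?"
        print("\(voice.rawValue)  \(name)  [\(locale)]")
    }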
OK, I have found the source of the problem and a workaround. It has nothing to do with the language as such, but rather with the fact that most non-English voices in OS X Lion are Nuance voices (made by Nuance Communications). I have confirmed this by testing with English Nuance voices, and they indeed have the same problem. It looks like there is something wrong in the API for the voices provided by Nuance.
I have created a workaround for the problem by instantiating a new NSSpeechSynthesizer object after the reading has been stopped. It’s not pretty but it works :)
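In code, the workaround amounts to something like this (a Swift sketch with made-up names; the original is Objective-C):

    import AppKit

    // Workaround sketch: after stopping, discard the synthesizer and build a
    // fresh one, so the next read starts with willSpeakWord ranges in sync.
    final class RestartingReader: NSObject, NSSpeechSynthesizerDelegate {
        private var synth = NSSpeechSynthesizer()

        func speak(_ text: String) {
            synth.delegate = self
            _ = synth.startSpeaking(text)
        }

        func stop() {
            synth.stopSpeaking()
            // Fresh instance; re-apply the voice and any settings here if needed.
            synth = NSSpeechSynthesizer()
        }
    }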
It is a bug, but it has been fixed in the update to the Nuance voices for Mountain Lion.
I'm trying to run a program written by someone else in LabVIEW. The program records voltage. However, it won't open because it is missing subVIs.
Initially I thought that only one was missing and the rest weren't working because they were attached to it, but after someone on this forum kindly found it for me, the rest of the VIs still won't work, so I think I need to download them again. However, there are too many of them to get separately, and I have tried googling them to no avail.
The subVIs are : Magnet Id, Hardware check, Plot Data and Print, Make Plot Lables, Plot it, Relabel It, Write File header, Record Analog Info, Fix Column Heading, Make Igor Label etc etc (this is not a complete list).
I feel that I should download a DAQ driver from the National Instruments website, but I am not sure which one. I am using 64-bit LabVIEW 2010 on Windows. Can someone please help me pick out the correct driver?
Thanks!
Just going from memory, those aren't NI VIs that I recognize, especially if the misspellings are in the original.
What hardware is this interfacing with?
You should still be able to open the main VI. It will not compile or run, since the subVIs are missing, but you should be able to open it and maybe get some clues about what it's doing.
The subVIs all appear to be in the LLB file, so they should be available. I'm not sure how you can access the subVIs directly from the LLB file, so you may want to convert it to a project library (this is the new way of creating libraries since version 8, I think). There are some pages on the NI website that may help; try Converting an LLB to a Project Library and then add the resulting project library to your project.
From what I can see, the VIs make use of the VISA drivers to communicate with the individual instruments, so you should make sure you have NI-VISA installed. You don't mention which edition of LabVIEW 2010 you have, but I think the drivers are provided even with the Base edition; I know they come with the Professional Development System.
Can anybody tell me where I can find information on how to bring up an ARM board? I am looking for an overview, as I am a novice in ARM-related topics. Any link/document will do... It would be a great help if I could look at a case study.
Any ARM-based board can be considered... I am just looking for a simple case study, in a few steps?
Every single ARM "board" will be different. Read the datasheet for the ARM chip you have; it should have a section near the start about booting. Also read the datasheet for your board, as it may have flash/boot loaders on there already. If there are no loaders on the board, you'll have to either set the jumpers for the ARM (if that type supports it) to read from external ROM, or JTAG the initial boot code into it.
Basically: read the datasheets. Programming a device like an ARM isn't your usual compile/run strategy like most software, especially not in the first stage.
edit:
If you don't even have a board yet, try going for this one:
http://beagleboard.org/
It has an ARM on it (as well as a decent GPU).
Check out the DLP-2232PB-G evaluation kit from FTDI. It looks great for newbies trying to get into microcontrollers, and it comes with everything you need. It's a PIC controller, not an ARM controller, but it's the easiest starting point that I've seen... and the basic development methods are the same.
I would start with any documentation the IC manufacturer may have on "getting started".
http://free-electrons.com/doc/porting-kernel.odp
This link gives a good overview of bringing up a board whose CPU already has a Linux support package available.
The Linux sources under arch/arm have mach-* directories for the CPUs/SoCs supported by the Linux kernel.
Within each mach-* directory there are board-specific files, which form the board-specific BSPs.
You can take the process explained in these slides and try applying it to your case.
Check out the ok6410-h at http://www.arm9board.net/sel/prddetail.aspx?id=348&pid=200
Quite a nice kick-start kit that comes with everything you would ever need: documentation, source code, and example programs.
Recommendable for both newbies and experienced developers.