How to get the HEVC test video sequences?

I need the HEVC test video sequences, namely Traffic, PeopleOnStreet, Nebuta, SteamLocomotive, Kimono, ParkScene, Cactus, BasketballDrive, etc., mentioned in the Common Test Conditions and Software Reference Configurations document JCTVC-L1100. I need these video sequences for benchmarking and research purposes.

Most of the common sequences are from Xiph; they are available in Xiph's video test media collection.

Related

IPA (International Phonetic Alphabet) Transcription with TensorFlow

I'm looking into designing a software platform that will aid linguists and anthropologists in their study of previously unstudied languages. Statistics show that around 1,000 languages exist that have never been studied by a person outside of their respective speaker groups.
My goal is to utilize TensorFlow to build a platform that will allow linguists to study and document these languages more efficiently, and to help them create writing systems for the ones that don't already have one. One of their current methods of accomplishing such a task is three-fold: 1) record a native speaker conversing in the language, 2) listen to that recording and try to transcribe it into the IPA, 3) from the phonetics, analyze the phonemics and phonotactics of the language to eventually create a writing system for the speakers.
My proposed platform would cut that research time down from a minimum of a year to a maximum of six months. Before I start, I have some questions...
What would be required to train TensorFlow to transcribe live audio into the IPA? Has this already been done? If so, how would I utilize a previous solution for this project? Is a project like this even possible with TensorFlow? If not, what would you recommend using instead?
My apologies for the magnitude of this question. I don't have much experience in the realm of machine learning, as I am just beginning the research process for this project. Any help is appreciated!
I guess I will take a first shot at answering this. Since the question is pretty general, my answer will have to be pretty general as well.
What would be required? At the very least you would have to have a large dataset of pre-transcribed data: ideally a large amount of spoken-language audio mapped to characters in the phonetic alphabet, so the system could learn the sound of individual characters rather than of whole transcribed words. If such a dataset doesn't exist, a less granular dataset could be used, mapping single words to their transcriptions. Then you would need a model, that is, the actual neural network architecture implemented in code. And lastly you would need some computing resources. This is not something you can train casually; you would either have to buy time on a cloud-based machine learning platform (like Google Cloud ML) or build a fairly expensive machine to train at home.
Has this been done? I don't know; I don't think so. There have been published papers reporting various degrees of success at training systems to transcribe speech. Here is one, for example: http://deeplearning.stanford.edu/lexfree/lexfree.pdf. Since the alphabet you want to transcribe to is specifically designed to capture the way words sound rather than just how they are written, you might have more success at training such a model.
Is it possible with TensorFlow? Yes, most likely. TensorFlow is well suited to implementing most modern deep learning architectures. Unless you end up designing some really weird and very original model for this purpose, TensorFlow should work just fine.
Edit: after some thought on part 1, you would probably have to use a dataset mapping whole spoken words to their transcriptions, since I expect the same sound pronounced in isolation would differ from the same sound used within a word.
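To make the "model" part of this answer a bit more concrete, here is a minimal sketch (not a tested recipe) of the kind of architecture commonly used for this: a recurrent network over audio features trained with CTC loss, written with TensorFlow/Keras. The feature dimension, phoneme inventory size, and layer widths are hypothetical placeholders.

```python
import tensorflow as tf

NUM_MEL_BINS = 80       # hypothetical input feature size per audio frame
NUM_PHONEMES = 60       # hypothetical IPA symbol inventory size

def build_model():
    # Variable-length sequences of audio feature frames in,
    # per-frame phoneme probabilities out (CTC handles the alignment).
    inputs = tf.keras.Input(shape=(None, NUM_MEL_BINS))
    x = tf.keras.layers.Bidirectional(
        tf.keras.layers.LSTM(128, return_sequences=True))(inputs)
    x = tf.keras.layers.Bidirectional(
        tf.keras.layers.LSTM(128, return_sequences=True))(x)
    logits = tf.keras.layers.Dense(NUM_PHONEMES + 1)(x)    # +1 for the CTC blank label
    return tf.keras.Model(inputs, logits)

def ctc_loss(labels, logits, label_lengths, logit_lengths):
    # labels: [batch, max_label_len] int tensor of phoneme ids
    # logits: [batch, time, NUM_PHONEMES + 1] from the model above
    return tf.reduce_mean(tf.nn.ctc_loss(
        labels=labels,
        logits=logits,
        label_length=label_lengths,
        logit_length=logit_lengths,
        logits_time_major=False,
        blank_index=NUM_PHONEMES))
```

The point of CTC here is that you only need (audio, phoneme sequence) pairs, not frame-by-frame alignments, which matches the kind of dataset described above.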
This has actually been done, albeit in PyTorch, by a group at CMU: https://github.com/xinjli/allosaurus
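For completeness, a rough usage sketch based on my reading of the allosaurus README; treat the exact function names as an assumption to verify against the repo, since the interface may have changed.

```python
# Hypothetical usage of the allosaurus universal phone recognizer
# (pip install allosaurus); check the repo's README for the current API.
from allosaurus.app import read_recognizer

model = read_recognizer()            # loads the pretrained universal phone model
ipa = model.recognize("sample.wav")  # hypothetical audio file; returns IPA phones as text
print(ipa)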

Parsing HEVC for Motion Information

I parsed the HEVC stream by simply identifying the start codes (000001 or 00000001), and now I am looking for the motion information in the NAL payload. My goal is to calculate the percentage of the stream that motion information occupies. Any ideas?
Your best bet is to start with the HM reference software (get it here: https://hevc.hhi.fraunhofer.de/svn/svn_HEVCSoftware/trunk/) and add some debug output as the different kinds of data are read from the bitstream. This is likely much easier than writing a bitstream decoder from scratch.
Check out the debug facilities that are already built into the software, for example RExt__DECODER_DEBUG_BIT_STATISTICS or DEBUG_CABAC_BINS. They may already do what you want; if not, they will be pretty close. I think information about bit usage is best collected in source/Lib/TLibDecoder/TDecBinCoderCABAC.cpp during decoding.
If you need to speed this up, you can of course skip the actual decode steps :)
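For the start-code scanning part that the question describes, here is a minimal Python sketch. Note that it only locates NAL-unit boundaries and types; the motion vector differences (MVDs) are CABAC-coded inside the slice data, so measuring their share of the bitstream still requires the full decoding path discussed above. The file name is hypothetical.

```python
# Minimal Annex-B NAL unit scanner for a raw HEVC bytestream.
def iter_nal_units(data: bytes):
    """Yield (payload_offset, nal_unit_type) for each start code found."""
    i = 0
    while i + 3 < len(data):
        # Accept both 3-byte (000001) and 4-byte (00000001) start codes.
        if data[i:i + 3] == b"\x00\x00\x01":
            start = i + 3
        elif data[i:i + 4] == b"\x00\x00\x00\x01":
            start = i + 4
        else:
            i += 1
            continue
        if start >= len(data):
            break
        # HEVC NAL header: forbidden_zero_bit(1) | nal_unit_type(6) | layer/temporal ids
        nal_unit_type = (data[start] >> 1) & 0x3F
        yield start, nal_unit_type
        i = start

if __name__ == "__main__":
    with open("stream.hevc", "rb") as f:   # hypothetical file name
        for offset, nal_type in iter_nal_units(f.read()):
            print(f"NAL unit at byte {offset}: type {nal_type}")
```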
On the decoder side, the motion vectors are coded as motion vector differences (MVDs), so you have to go through the decoding process to reconstruct the actual motion information. This requires understanding how inter prediction works in HEVC.
Thank you!

Semantic techniques in IoT

I am trying to use semantic technologies in IoT. For the last two months I have been doing a literature survey, and during this time I came to know some of the required tools (Protégé, Apache Jena). Now, along with reading papers, I want to play with semantic techniques like annotation, linking data, etc., so that I can get a better understanding of the concepts involved. To that end I have set out the following roadmap:
Collect data manually (using sensors) or use some data set already on the web.
Annotate the dataset, possibly using an ontology (not sure)
Apply open linking data principles
I am not sure whether this roadmap is correct or not. I am asking for suggestions on the following points:
Is this roadmap correct?
How should I approach steps 2 and 3? In other words, which tools should I use for these steps?
Hope you can help me find a proper way to handle this. Thanks.
Semantics and IoT (or the semantic sensor web [1]) is a hot topic. Congratulations on choosing an interesting and worthwhile research topic.
In my opinion, your three-step approach looks good. I would recommend building a quick prototype so you can learn about the possible challenges early.
In addition to the implementation technologies (Protégé, etc.), there are some important works that might be useful to you:
Open Geospatial Consortium (OGC) Sensor Web Enablement (SWE) [2]. It is an important body of work for sharing and exchanging sensor observation data. Many large organizations (NOAA, NASA, NRCan, AAFC, ESA, etc.) have adopted this standard. It defines a conceptual data model/ontology (O&M, ISO 19156). Note: this is a very comprehensive standard, hence it is very big and can be time-consuming to read; I recommend reading #2 below.
OGC SensorThings API (http://ogc-iot.github.io/ogc-iot-api/), an IoT cloud API standard based on the OGC SWE. This might be the most relevant to you. It is a lightweight protocol of the SWE family, designed specifically for IoT. Some early research work has been done on using JSON-LD to annotate SensorThings.
W3C Spatial Data on the Web (http://www.w3.org/2015/spatial/wiki/Main_Page). It is ongoing joint work between W3C and OGC. Part of the goal is to mature the SSN (Semantic Sensor Network) ontology. Once it's ready, the new SSN could be used to annotate the SensorThings API, for example. A work worth monitoring.
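As a small, hedged illustration of step 2 of your roadmap, here is a Python sketch that annotates a single sensor reading using rdflib and SOSA terms (SOSA is the lightweight core vocabulary that grew out of the SSN work mentioned above). The sensor names, property names, and values are made up for the example.

```python
# Annotate one hypothetical sensor observation with SOSA terms using rdflib.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, XSD

SOSA = Namespace("http://www.w3.org/ns/sosa/")
EX = Namespace("http://example.org/iot/")          # hypothetical namespace

g = Graph()
g.bind("sosa", SOSA)
g.bind("ex", EX)

obs = EX["observation/1"]
g.add((obs, RDF.type, SOSA.Observation))
g.add((obs, SOSA.madeBySensor, EX["sensor/dht22-01"]))
g.add((obs, SOSA.observedProperty, EX["property/airTemperature"]))
g.add((obs, SOSA.hasSimpleResult, Literal(21.4, datatype=XSD.double)))
g.add((obs, SOSA.resultTime, Literal("2017-06-01T12:00:00Z", datatype=XSD.dateTime)))

# Serialize the annotated data; JSON-LD output is also possible if a
# JSON-LD serializer is available in your rdflib installation.
print(g.serialize(format="turtle"))
```

Publishing graphs like this with dereferenceable URIs, and linking your sensors and properties to existing vocabularies, is essentially step 3 (linked data principles).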
[1] Sheth, Amit, Cory Henson, and Satya S. Sahoo. "Semantic sensor web." Internet Computing, IEEE 12.4 (2008): 78-83.
[2] Bröring, Arne, et al. "New generation sensor web enablement." Sensors 11.3 (2011): 2652-2699.

API to break voice into phonemes / synthesize new speech given speech samples?

You know those movies where the tech geeks record someone's voice, and their software breaks it into phonemes? Which they can then use to type in any phrase, and make it seem as if the target is saying it?
Does that software exist as an API? I don't even know what to Google.
There is no such software. Breaking arbitrary speech into its constituent phonemes is only a partially solved problem: speech-to-text software is still imperfect, as is text-to-speech.
The idea is to reproduce the timbre of the target's voice. Even if you were able to segment the audio perfectly, reordering the phonemes would produce audio with unnatural cadence and intonation, not to mention splicing artifacts. At that point you're getting into smoothing, time-scaling, and pitch correction, all of which are possible and well-understood in theory, but operate poorly on real-world data, especially when the audio sample in question is as short as a single phoneme, and further when the timbre needs to be preserved.
These problems are compounded on the phonetic side by allophonic variation in sounds based on accent and surrounding phonemes; in order to faithfully produce even a low-quality approximation of the audio, you'd need a detailed understanding of the target's language, accent, and speech patterns.
Furthermore, your ultimate problem is one of social engineering, and people are not easy to fool when it comes to the voices of people they know. Even with a large corpus of input data, at best you could get a short low-quality sample, hardly enough for a conversation.
So while it's certainly possible, it's difficult; even if it existed, it wouldn't always be good enough.
SRI International (the company that created Siri for iOS) has an SDK called EduSpeak, which will take audio input and break it down into individual phonemes. I know this because I sat through a demo of the product about a week ago. During the demo, the presenter showed us an application that was created using the SDK. The application gave a few lines of text for the presenter to read. After reading the text, the application displayed a bar chart where each bar represented a phoneme from his speech. The height of each bar represented a score of how well each phoneme was pronounced (the presenter was not a native English speaker, so he received lower scores on certain phonemes compared to others). The presenter could also click on each individual bar to have only that individual phoneme played back using the original audio.
So yes, software exists that divides audio up by phoneme, and it does a very good job of it. Now, whether or not those phonemes can be re-assembled into speech is an open question. If we end up getting a trial version of the SDK, I'll try it out and let you know.
If your aim is to mimic someone else's voice, another approach is to convert your own voice (instead of assembling phonemes). This is (perhaps surprisingly) called voice conversion, e.g. http://www.busim.ee.boun.edu.tr/~speech/projects/Voice_Conversion.htm
The relevant technologies are called "voice synthesis" and "voice recognition".
The Java API for this is JSAPI (the Java Speech API).
Apple has a speech API as well.
Microsoft has several; one is the Vista speech API.
Lyrebird is a start-up that is working on this very problem. Given samples of a person's voice and some written text, it can synthesize a spoken version of that written text in the voice of the person in the samples.
You can get interesting voice warping effects with a formant-aware pitch shift. Adobe Audition has a pretty good implementation. Antares produces some interesting vocal effects VST plugins.
These techniques use some form of linear predictive coding (LPC) to treat the voice as a source-filter model. LPC works on speech signals by estimating the resonance of the vocal tract (formant), reversing its effect with an inverse filter, and then coding the resulting residual signal. The residual signal is ideally an impulse train that represents the glottal impulse. This allows the scaling of pitch and formants independently, which leads to a much better gender conversion result than simple pitch shifting.
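As a rough illustration of the source-filter analysis/synthesis round trip described above, here is a Python sketch using librosa's LPC routine and scipy filters. The file name is hypothetical, and a real formant-aware shifter would process short overlapping frames rather than the whole signal at once.

```python
# LPC source-filter analysis/synthesis round trip (illustrative only).
import librosa
import numpy as np
from scipy.signal import lfilter

y, sr = librosa.load("voice.wav", sr=16000)   # hypothetical input file

order = 2 + sr // 1000                        # common rule of thumb for LPC order
a = librosa.lpc(y, order=order)               # all-pole coefficients, a[0] == 1

residual = lfilter(a, [1.0], y)               # inverse filter -> excitation (residual)
resynth = lfilter([1.0], a, residual)         # synthesis filter -> reconstructed signal

# Modifying the residual (pitch) and the coefficients `a` (formants)
# independently is what enables the formant-aware effects mentioned above.
print(np.max(np.abs(y - resynth)))            # near zero: round trip is (almost) lossless
```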
I don't know of a commercially available solution, but the concept isn't entirely out of the range of possibility. For example, the University of Delaware has fairly decent software for doing just that.
http://www.modeltalker.com

What types of networking/authentication issues could you encounter when applications on different platforms interact over local and remote networks?

My boss has asked me to look into this question but I'm not sure where to begin to research it.
Any tips on the types of topics I should be looking into?
As mentioned in another comment, more details on the problem would be useful. But in principle problems can occur at every layer: character encodings, big-endian vs. little-endian byte order, and so forth.
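As a small, purely illustrative Python example of two of those pitfalls: the same integer and the same text turn into different bytes depending on byte order and character encoding, which is exactly the kind of mismatch that bites when different platforms exchange data.

```python
import struct

# Byte order: the same 32-bit integer serialized big- vs. little-endian.
value = 1
print(struct.pack(">I", value))   # b'\x00\x00\x00\x01' (network / big-endian)
print(struct.pack("<I", value))   # b'\x01\x00\x00\x00' (x86 / little-endian)

# Character encodings: the same text, different bytes on the wire.
text = "café"
print(text.encode("utf-8"))       # b'caf\xc3\xa9'
print(text.encode("latin-1"))     # b'caf\xe9'
```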
In addition, I can highly recommend these two books:
Tanenbaum: Distributed Systems
Tanenbaum: Computer Networks