Please explain what AACExtractor.cpp is all about, code-wise.
Though the answer may not help the original asker, it will help other answer seekers.
AACExtractor.cpp is part of libstagefright.so, the Stagefright media framework that handles audio playback in Android.
AACExtractor is used by AwesomePlayer.cpp, which controls audio/video playback.
The AACExtractor class is expected to:
- help identify whether the file is an AAC file (the SniffAAC function);
- implement methods that extract metadata (album art, artist, etc.);
- extract the information from the file that has to be given to the codec (sampling rate, channels) and initialize the decoder;
- extract a frame and hand it to the decoder (plus support functions that implement seeking to a particular frame, etc.).
These functions are used by other components to get their work done.
This is an overview; the best way to understand the code is by printing logs.
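To make the sniffing step concrete, here is a minimal C++ sketch of ADTS header parsing. This is an illustration only, not the real SniffAAC: the actual function in AACExtractor.cpp is more thorough (for instance it also copes with leading ID3 tags and verifies several consecutive frames before declaring a match).

#include <cstdint>
#include <cstddef>

// Sampling rates indexed by the ADTS sampling_frequency_index field.
static const int kSampleRates[] = {96000, 88200, 64000, 48000, 44100, 32000,
                                   24000, 22050, 16000, 12000, 11025, 8000};

// Returns true if the buffer starts with a plausible ADTS frame header and
// fills in the basic parameters a codec needs.
bool LooksLikeAdtsAac(const uint8_t* data, size_t size,
                      int* sample_rate, int* channels, size_t* frame_len) {
  if (size < 7) return false;  // a full ADTS header is at least 7 bytes
  // 12-bit syncword 0xFFF; the 2-bit layer field must also be zero.
  if (data[0] != 0xFF || (data[1] & 0xF6) != 0xF0) return false;

  int freq_index = (data[2] >> 2) & 0x0F;  // sampling_frequency_index
  if (freq_index >= 12) return false;
  *sample_rate = kSampleRates[freq_index];

  // channel_configuration straddles bytes 2 and 3.
  *channels = ((data[2] & 0x01) << 2) | (data[3] >> 6);

  // The 13-bit aac_frame_length straddles bytes 3, 4 and 5.
  *frame_len = ((size_t)(data[3] & 0x03) << 11) |
               ((size_t)data[4] << 3) | (data[5] >> 5);
  return *frame_len >= 7;  // a frame can't be shorter than its own header
}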
Hope this helps
For more details, look at:
https://groups.google.com/forum/#!forum/android-porting
https://groups.google.com/forum/#!forum/android-framework
Related
I'm looking into what it takes to develop a PrintService on Android. After reading some online docs I'm not quite clear on the format of the data returned by the PrintDocument.getData() method. I'd expect that in the case of PrintDocumentInfo.CONTENT_TYPE_PHOTO the returned data will be an image (I'm not quite sure about this). However, what can I expect when the content type is CONTENT_TYPE_DOCUMENT?
There is a sample of PrintDocumentInfo that uses a builder to build a PDF file. Is this always the case? That is, is the content of CONTENT_TYPE_DOCUMENT always in PDF format?
I'd appreciate any suggestions and/or pointers to relevant on-line docs.
Thanks.
It is always PDF for CONTENT_TYPE_DOCUMENT.
I realize GPUImage has been well documented and there are a lot of instructions on how to use it on the main GitHub page. However, it fails to explain what a filter chain is - what's addTarget? What's missing is a simple enough diagram showing what needs to be added to what. Is it always GPUImageView (source?) -> add target -> [filter]? I'm sorry if this sounds daft, but I fail to follow the correct sequence given there are so many ways of using it. To me, it sounds like you're connecting it the other way round (such as saying: connect the socket to the TV). Why not add the filter to the source? I'm trying to use it but I get lost in all the addTargets. Thanks!
You can think of it as a series of inputs and outputs. Look in the GPUImage framework project to see which are inputs (typically filters) and which are outputs (image view, movie writer, etc.). Every target affects the next target in the chain.
Example:
GPUImageMovie -> GPUImageSepiaFilter -> GPUImageMovieWriter
A movie is sent to the sepia filter, which performs its job; the sepia-filtered movie is then sent to the movie writer, which exports a movie with the sepia filter applied.
To help visualize what's going on, think of a node-editor program, which typically uses this same scheme: calling addTarget: creates one of the connections between nodes in such a graph.
A Google image search for "node editor" will give you plenty of images to help picture what adding targets does.
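If code is easier to follow than diagrams, here is a tiny C++ sketch of the pattern. This is not GPUImage's actual API (GPUImage is Objective-C), just an illustration of what addTarget: conceptually does: each node keeps a list of targets and pushes its output to all of them.

#include <cstdio>
#include <string>
#include <vector>

struct Frame { std::string description; };

class Node {
 public:
  explicit Node(std::string name) : name_(std::move(name)) {}
  void addTarget(Node* target) { targets_.push_back(target); }

  // Receive a frame, "process" it, and forward it down the chain.
  void newFrame(Frame frame) {
    frame.description += " -> " + name_;
    std::printf("%s\n", frame.description.c_str());
    for (Node* t : targets_) t->newFrame(frame);
  }

 private:
  std::string name_;
  std::vector<Node*> targets_;
};

int main() {
  Node movie("GPUImageMovie"), sepia("GPUImageSepiaFilter"),
       writer("GPUImageMovieWriter");
  movie.addTarget(&sepia);   // source -> filter
  sepia.addTarget(&writer);  // filter -> output
  movie.newFrame({"movie frame"});  // prints the frame's path down the chain
  return 0;
}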
I have made a piece of software that uses the WebRTC DSP libraries (AEC, NS, AGC, VAD). Now I need to know what algorithm each one uses in order to write my Master's thesis, but I can't find any information about that.
Does anyone know the algorithms used by these libraries, especially the Acoustic Echo Cancellation (for example NLMS, which I know is commonly used, but I don't know whether WebRTC uses it too)?
I've tried to work out the algorithms by looking into the source code, but I don't understand it well enough.
Thanks in advance!
I've just successfully used the standalone WebRTC AECM module on Android, and here are some tips:
1. The most important thing is the "delay"; you can find its definition in:
..\src\modules\audio_processing\include\audio_processing.h
quote:
Sets the |delay| in ms between AnalyzeReverseStream() receiving a far-end frame and ProcessStream() receiving a near-end frame containing the corresponding echo. On the client-side this can be expressed as
delay = (t_render - t_analyze) + (t_process - t_capture)
where
t_analyze is the time a frame is passed to AnalyzeReverseStream() and t_render is the time the first sample of the same frame is rendered by the audio hardware;
t_capture is the time the first sample of a frame is captured by the audio hardware and t_process is the time the same frame is passed to ProcessStream().
If you want to use the AECM module in standalone mode, be sure to follow this doc strictly.
2. AudioRecord and AudioTrack sometimes block (due to the minimized buffer size), so when you calculate the delay, don't forget to add the blocking time to it (see the sketch after these tips).
3. If you don't know how to compile the AECM module, you may want to learn the Android NDK first; the module source path is
..\src\modules\audio_processing\aecm
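To illustrate tips 1 and 2, here is a rough C++ sketch of the delay bookkeeping. The timestamps are ones you have to capture yourself around your render/capture calls; the struct and function names are illustrative, not part of the WebRTC API.

#include <cstdint>

// The four timestamps named in the audio_processing.h comment, in ms.
struct EchoTimestamps {
  int64_t t_analyze;  // frame handed to AnalyzeReverseStream()
  int64_t t_render;   // first sample of that frame hits the speaker
  int64_t t_capture;  // first sample of a frame read from the mic
  int64_t t_process;  // that frame handed to ProcessStream()
};

// delay = (t_render - t_analyze) + (t_process - t_capture), plus (per tip 2)
// any time AudioTrack/AudioRecord spent blocking.
int64_t EchoDelayMs(const EchoTimestamps& t, int64_t blocking_ms) {
  return (t.t_render - t.t_analyze) + (t.t_process - t.t_capture) + blocking_ms;
}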
BTW, these blog posts may help a lot with native development and debugging:
http://mhandroid.wordpress.com/2011/01/23/using-eclipse-for-android-cc-development/
http://mhandroid.wordpress.com/2011/01/23/using-eclipse-for-android-cc-debugging/
Hope this helps you.
From code inspection of the AGC algorithm in WebRTC, it matches closely the description in http://www.ti.com/lit/wp/spraal1/spraal1.pdf
The AEC is based on NLMS, but has a variable step length (mu).
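For reference, here is a minimal C++ sketch of a textbook NLMS update with a fixed step size. WebRTC's actual AEC is far more elaborate; the variable step size mentioned above is one of the refinements.

#include <cstddef>
#include <vector>

// Textbook NLMS adaptive filter; illustration only.
class Nlms {
 public:
  Nlms(std::size_t taps, float mu) : w_(taps, 0.0f), x_(taps, 0.0f), mu_(mu) {}

  // Consume one far-end (loudspeaker) sample and one near-end (mic) sample;
  // return the error signal, i.e. the mic signal minus the echo estimate.
  float Process(float far_end, float near_end) {
    // Shift the far-end reference into the delay line.
    for (std::size_t i = x_.size() - 1; i > 0; --i) x_[i] = x_[i - 1];
    x_[0] = far_end;

    // Echo estimate and reference-signal energy.
    float y = 0.0f, energy = 0.0f;
    for (std::size_t i = 0; i < w_.size(); ++i) {
      y += w_[i] * x_[i];
      energy += x_[i] * x_[i];
    }

    float e = near_end - y;  // error = mic minus estimated echo

    // Normalized update: the step is scaled by the reference energy,
    // which is what distinguishes NLMS from plain LMS.
    float g = mu_ / (energy + 1e-6f);
    for (std::size_t i = 0; i < w_.size(); ++i) w_[i] += g * e * x_[i];
    return e;
  }

 private:
  std::vector<float> w_;  // adaptive filter weights
  std::vector<float> x_;  // far-end delay line
  float mu_;              // fixed step size (WebRTC varies this)
};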
I have to reference MPEG-2 code. My application needs to analyze I-frames. Looking at the code, I am unable to figure out how exactly this can be done.
Basically I want to extract the first or second I-frame. How do I do that, and where do I find the information regarding the frame type?
I would appreciate any direction on this.
I just wrote this answer: Can you find key frame (I-frame) in h264 video without decoding? i.e. is it in packet?
which applies to you just as well.
The picture start code in MPEG-2 is 0x00000100 (the 0x000001 start-code prefix followed by the picture start code value 0x00).
Further, you can look at byte 5 of the header (0-indexed from the first byte of the start code) to identify the picture type: picture_coding_type is 1 for I-, 2 for P-, and 3 for B-pictures.
See here: http://dvd.sourceforge.net/dvdinfo/mpeghdrs.html
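As a rough sketch (assuming a raw MPEG-2 video elementary stream, i.e. not packetized into a transport stream), scanning for picture headers and reading the coding type looks like this:

#include <cstddef>
#include <cstdint>
#include <cstdio>

// Scan a buffer for picture start codes (00 00 01 00) and print each
// picture's coding type: 1 = I, 2 = P, 3 = B.
void ScanPictures(const uint8_t* buf, size_t len) {
  for (size_t i = 0; i + 6 <= len; ++i) {
    if (buf[i] == 0x00 && buf[i + 1] == 0x00 &&
        buf[i + 2] == 0x01 && buf[i + 3] == 0x00) {
      // Byte 4 and the top two bits of byte 5 hold the 10-bit
      // temporal_reference; the next three bits are picture_coding_type.
      int coding_type = (buf[i + 5] >> 3) & 0x07;
      const char* names[] = {"?", "I", "P", "B", "D"};
      std::printf("picture at offset %zu: %s-frame\n", i,
                  coding_type <= 4 ? names[coding_type] : "?");
    }
  }
}

To get the first or second I-frame, you would stop at the first or second hit with coding_type == 1 and decode from the preceding sequence/GOP header.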
I am trying to implement a lotto game on the iPod, and I need to parse the actual lotto page to get the winning numbers, put them in an array, and compare them with the numbers the user entered to find out whether he is a winner or not. Can anyone help me with this? I am really stuck.
Thanks in advance! :)
What you need is a DOM parser. (Note that Apple's NSXMLParser is an event-driven SAX parser, so it is not well suited to this.)
Specifically you'd want to use XPath parsing to read the value of a specific DOM node on your lotto page's DOM tree.
As iOS does not come with NSXMLDocument (which supports XPath), you'd probably need to do as described here:
Using libxml2 for XML parsing and XPath queries in Cocoa
Other solutions for parsing XML/HTML:
How To Choose The Best XML Parser for Your iPhone Project
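To tie the two links together, here is a minimal sketch of the libxml2 + XPath approach (libxml2 is a C library available on iOS, so the same calls work from Objective-C). The file name and the XPath expression are hypothetical placeholders; you would adapt them to the real lotto page's structure.

#include <cstdio>
#include <libxml/HTMLparser.h>
#include <libxml/tree.h>
#include <libxml/xpath.h>

int main() {
  // Parse an HTML page; on the device you would download it first and
  // hand the bytes to htmlReadMemory() instead.
  htmlDocPtr doc = htmlReadFile("lotto.html", nullptr,
                                HTML_PARSE_NOERROR | HTML_PARSE_NOWARNING);
  if (!doc) return 1;

  xmlXPathContextPtr ctx = xmlXPathNewContext(doc);
  // Hypothetical XPath: the table cells holding the winning numbers.
  xmlXPathObjectPtr res =
      xmlXPathEvalExpression(BAD_CAST "//td[@class='winning-number']", ctx);

  if (res && res->nodesetval) {
    for (int i = 0; i < res->nodesetval->nodeNr; ++i) {
      xmlChar* text = xmlNodeGetContent(res->nodesetval->nodeTab[i]);
      std::printf("number %d: %s\n", i + 1, (const char*)text);
      xmlFree(text);
    }
  }

  xmlXPathFreeObject(res);
  xmlXPathFreeContext(ctx);
  xmlFreeDoc(doc);
  return 0;
}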