Mimic file I/O in a J2ME MIDlet using RMS

I want to be able to record audio and save it to persistent storage in my J2ME application. As I understand it, J2ME does not expose the handset's file system; instead it expects the developer to use the RMS system. I understand the idea behind RMS but cannot seem to think of the best way to implement audio recording with it. I have a continuous stream of bytes from the audio input which must be saved:

1. Should I fill a buffer and then periodically create a new record from the bytes in the buffer?
2. Should I put each sample in a new record?
3. Should I keep the entire recording in a byte array and only write it to the RMS when recording stops?

Is there a better way to achieve this than RMS?
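For illustration, here is roughly what I have in mind for option 1; fillBufferFromRecording() and isRecording() are hypothetical stand-ins for my actual capture code:

import javax.microedition.rms.RecordStore;
import javax.microedition.rms.RecordStoreException;

// flush captured audio to RMS in fixed-size chunks (option 1)
private static final int BUFFER_SIZE = 4096;

void captureToRecordStore() throws RecordStoreException {
    RecordStore rs = RecordStore.openRecordStore("audio", true);
    try {
        byte[] buf = new byte[BUFFER_SIZE];
        while (isRecording()) {                   // hypothetical stop condition
            int n = fillBufferFromRecording(buf); // hypothetical capture call
            if (n > 0) {
                rs.addRecord(buf, 0, n);          // one record per buffer, not per sample
            }
        }
    } finally {
        rs.closeRecordStore();
    }
}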

Consider the code below and edit it as necessary; it should solve your problem by writing to the phone's file system directly through the JSR-75 FileConnection API:
import java.io.DataOutputStream;
import javax.microedition.io.Connector;
import javax.microedition.io.file.FileConnection;

getRoots(); // list the available file system roots first (e.g. "E:/")

FileConnection fc = (FileConnection) Connector.open("file:///E:/");
if (!fc.exists()) {
    fc.mkdir();
}
fc.close();

fc = (FileConnection) Connector.open("file:///E:/test.wav");
if (!fc.exists()) {
    fc.create();
}

DataOutputStream dos = fc.openDataOutputStream();
dos.write(recordedSoundArray);
dos.close();
fc.close();

Note that FileConnection is part of the optional JSR-75 API, so this only works on handsets that implement it, and the user may be prompted to grant file access.
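getRoots() is not defined in the snippet above; a plausible implementation using the JSR-75 FileSystemRegistry would be:

import java.util.Enumeration;
import javax.microedition.io.file.FileSystemRegistry;

// print every mounted root so you can pick a valid one for the file URL
void getRoots() {
    Enumeration roots = FileSystemRegistry.listRoots();
    while (roots.hasMoreElements()) {
        System.out.println("root: " + (String) roots.nextElement());
    }
}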

Related

What is the best practice for downloading large CSV files from S3 in Java?

I'm trying to get a large CSV file from S3, but the download fails with “java.net.SocketException: Connection reset”, which is probably due to the InputStream simply being open for too long (the download often takes more than an hour, since I am running multiple time-consuming processing steps on the streamed content). This is how I currently parse the file:
InputStream inputStream = new GZIPInputStream(s3Client.getObject("bucket", "key").getObjectContent());
Reader decoder = new InputStreamReader(inputStream, Charset.defaultCharset());
BufferedReader isr = new BufferedReader(decoder);
CSVParser csvParser = new CSVParser(isr, CSVFormat.DEFAULT);
CSVRecord nextRecord = csvParser.iterator().next();
...
I know I have to split the download into multiple short getObject calls with a defined offset in the GetObjectRequest, but I'm wondering how to define this offset in the case of a CSV, since I need complete lines.
Do I have to ditch the parser library and parse each line into an object myself, so I can keep a count of the bytes read and use it as the offset for the next batch? That doesn't seem very robust to me. Is there any best-practice way to achieve "batch downloading" of CSV records?
I decided on simply using the dedicated getObject(GetObjectRequest getObjectRequest, File destinationFile) method to copy the entire CSV to a temporary file on disk. This closes the HTTP connection as soon as possible and allows me to get the InputStream from the local file with no problems. It doesn't resolve the question of the best way to download in batches, but it's a nice and simple workaround.
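For reference, a minimal sketch of that workaround with the AWS SDK for Java v1, using the s3Client from the question; the bucket, key, and processing step are placeholders, and error handling is omitted:

import java.io.File;
import java.io.FileInputStream;
import java.io.InputStreamReader;
import java.io.Reader;
import java.util.zip.GZIPInputStream;
import com.amazonaws.services.s3.model.GetObjectRequest;
import org.apache.commons.csv.CSVFormat;
import org.apache.commons.csv.CSVParser;
import org.apache.commons.csv.CSVRecord;

File tmp = File.createTempFile("s3-download", ".csv.gz");
// copy the whole object to disk; the HTTP connection is closed as soon as
// the copy finishes, so slow parsing can no longer reset it
s3Client.getObject(new GetObjectRequest("bucket", "key"), tmp);

try (Reader decoder = new InputStreamReader(
         new GZIPInputStream(new FileInputStream(tmp)));
     CSVParser csvParser = new CSVParser(decoder, CSVFormat.DEFAULT)) {
    for (CSVRecord record : csvParser) {
        // time-consuming processing here, now decoupled from the network
    }
}

If true batching is ever needed, GetObjectRequest.withRange(start, end) can fetch byte ranges; each chunk then has to be cut at its last newline and the remainder carried over to the next chunk, which is exactly the bookkeeping the question hoped to avoid.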

How to set the input audio device for Microsoft.Speech recognizer in VB.Net or C# to any audio device

I want to use the Microsoft.Speech namespace in VB.NET to create a telephony application. I need to be able to set the recognizer input to any audio device installed on the system. Microsoft has the recognizer.SetInputToDefaultAudioDevice() method, but I need something like .SetInputToAudioDeviceID. How can I choose another wave audio input from the list of devices installed on my system? In SAPI, I would use MMSystem and SpVoice:
Set MMSysAudioIn1 = New SpMMAudioIn
MMSysAudioIn1.DeviceId = WindowsAudioDeviceID 'set audio input to audio device ID
MMSysAudioIn1.Format.Type = SAFT11kHz8BitMono 'set wave format; change to 8kHz, 16bit mono for other devices
Dim fmt As New SpeechAudioFormatInfo(11025, AudioBitsPerSample.Eight, AudioChannel.Mono) '11kHz to match the format above
Recognizer.SetInputToAudioStream(MMSysAudioIn1, fmt)
How do I do this with Microsoft.Speech?
MORE INFO: I want to take any wave input device in the Windows list of wave drivers and use that as input for speech recognition. Specifically, I may have a Dialogic card whose wave input is reported by TAPI as device IDs 1-4. In SAPI, I can use the SpMMAudioIn class to create a stream and set which device ID is associated with that stream; you can see some of that code above. Can I set Recognizer1.SetInputToAudioStream directly from the device ID of the device, as I can in SAPI? Or do I have to write code that reads bytes into buffers myself? Do I have to create a MemoryStream object? I can't find any example code anywhere. What do I have to check in .NET to get access to ISpeechMMSysAudio/SpMMAudioIn, in case something like that would work? Hopefully there is a way, via a MemoryStream or something like it, that takes a device ID and lets me pass that stream to the recognizer.
NOTE 2: I added "Imports SpeechLib" to the VB project and then tried to run the following code. It gives the invalid-cast error noted in the comment below about not being able to set the audio stream to a COM object.
Dim sre As New SpeechRecognitionEngine
Dim fmt As New SpeechAudioFormatInfo(8000, AudioBitsPerSample.Sixteen, AudioChannel.Mono)
Dim audiosource As ISpeechMMSysAudio
audiosource = New SpMMAudioIn
audiosource.DeviceId = WindowsAudioDeviceID 'set audio input to audio device Id
' audiosource.Format.Type = SpeechAudioFormatType.SAFT11kHz16BitMono
sre.SetInputToAudioStream(audiosource, fmt) ' <-- InvalidCastException with the COM object here
It also appears the SpeechAudioFormatType does not support 8kHz formats. This just gets more and more complicated.
You would use SpeechRecognitionEngine.SetInputToAudioStream. Note that if you're having problems with streaming input, you may need to wrap the stream, as illustrated here.

pop at the beginning of playback

When playing a memory stream containing wav encoded audio, the playback starts with a sharp pop/crackle:
ms = new MemoryStream(File.ReadAllBytes(audio_filename));
[...]
dispose_audio();
sound_output = new DirectSoundOut();
IWaveProvider provider = new RawSourceWaveStream(ms, new WaveFormat());
sound_output.Init(provider);
sound_output.Play();
That pop/crackle does not occur when playing the wav file directly:
dispose_audio();
NAudio.Wave.WaveStream pcm = new WaveChannel32(new NAudio.Wave.WaveFileReader(audio_filename));
audio_stream = new BlockAlignReductionStream(pcm);
sound_output = new DirectSoundOut();
sound_output.Init(audio_stream);
sound_output.Play();
The same file is playing, but when the wav data is stored in a memory stream first, there is a somewhat loud pop at the beginning of playback.
I am very much a newbie with NAudio and audio in general, so it's probably something silly, but I can't seem to figure it out.
You are playing the WAV file header as though it were audio. Instead of RawSourceWaveStream, you still need to use WaveFileReader; it accepts a Stream, so you can pass in your memory stream directly (new WaveFileReader(ms)).

Using a text file as Spark streaming source for testing purpose

I want to write a test for my Spark Streaming application that consumes a Flume source.
http://mkuthan.github.io/blog/2015/03/01/spark-unit-testing/ suggests using ManualClock, but for the moment reading a file and verifying the outputs would be enough for me.
So I wish to use:
JavaStreamingContext streamingContext = ...
JavaDStream<String> stream = streamingContext.textFileStream(dataDirectory);
stream.print();
streamingContext.awaitTermination();
streamingContext.start();
Unfortunately it does not print anything.
I tried:
dataDirectory = "hdfs://node:port/absolute/path/on/hdfs/"
dataDirectory = "file://C:\\absolute\\path\\on\\windows\\"
adding the text file in the directory BEFORE the program begins
adding the text file in the directory WHILE the program run
Nothing works.
Any suggestions for reading from a text file?
Thanks,
Martin
The order of start() and awaitTermination() is indeed inverted.
In addition to that, the easiest way to pass data to your Spark Streaming application for testing is a QueueDStream. It's a mutable queue of RDDs of arbitrary data. This means that you can create the data programmatically or load it from disk into an RDD and pass that to your Spark Streaming code.
E.g., to avoid the timing issues faced with the fileConsumer, you could try this:
import scala.collection.mutable.Queue
import org.apache.spark.rdd.RDD

val rdd = sparkContext.textFile(...)
val rddQueue: Queue[RDD[String]] = Queue()
rddQueue += rdd

val dstream = streamingContext.queueStream(rddQueue)
doMyStuffWithDstream(dstream)

streamingContext.start()
streamingContext.awaitTermination()
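The same idea with the Java API from the question would look roughly like this (dataDirectory is the placeholder from above):

import java.util.LinkedList;
import java.util.Queue;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.streaming.api.java.JavaDStream;
import org.apache.spark.streaming.api.java.JavaStreamingContext;

JavaStreamingContext streamingContext = ...
// load the test data once into an RDD and feed it through a queue stream
JavaRDD<String> rdd = streamingContext.sparkContext().textFile(dataDirectory);
Queue<JavaRDD<String>> rddQueue = new LinkedList<JavaRDD<String>>();
rddQueue.add(rdd);

JavaDStream<String> stream = streamingContext.queueStream(rddQueue);
stream.print();

streamingContext.start();            // start first...
streamingContext.awaitTermination(); // ...then block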
I am so stupid: I inverted the calls to start() and awaitTermination().
If you want to do the same, you should read from HDFS and add the file WHILE the program runs.

Save to buffer instead of file when recording aac with Audio Queue Services

My goal is to record audio in AAC format and send it over a network connection as a stream.
I'm using Audio Queue Services and have based my code on the SpeakHere example. I know that for writing to a file it uses the AudioFileWritePackets() function.
Here's the callback function:
void MyInputBufferHandler(void* inUserData,
                          AudioQueueRef inAQ,
                          AudioQueueBufferRef inBuffer,
                          const AudioTimeStamp* inStartTime,
                          UInt32 inNumPackets,
                          const AudioStreamPacketDescription* inPacketDesc) {
    Recorder *aqr = (Recorder *)inUserData;
    if (inNumPackets > 0) {
        // write packets to file
        AudioFileWritePackets(aqr->mRecordFile, FALSE,
                              inBuffer->mAudioDataByteSize,
                              inPacketDesc, aqr->mRecordPacket,
                              &inNumPackets, inBuffer->mAudioData);
        aqr->mRecordPacket += inNumPackets;
    }
    // if we're not stopping, re-enqueue the buffer so that it gets filled again
    if ([aqr IsRunning])
        AudioQueueEnqueueBuffer(inAQ, inBuffer, 0, NULL);
}
At first I thought AudioFileWritePackets() worked by directly writing the contents of inBuffer->mAudioData. However, when I manually write just the contents of mAudioData to a file, the result doesn't decode.
On examining and comparing the raw data that AudioFileWritePackets() writes to a file with my dump of mAudioData, it seems to attach a header.
As I can't find out how AudioFileWritePackets() works internally, my question is: how can I write the recorded audio to a buffer (or stream, if you prefer to call it that) instead of a file, so that I can later send it over a network connection?
Again, in short, here's what I need: record audio in AAC format and stream the audio over a network connection.
Thanks! I've been searching my head blue...
P.S.: If you point me to existing projects, please make sure they are what I need. I really feel like I've looked at all the possible projects out there >.<
This should help.
Check the "HandleInputBuffer" method in "SpeechToTextModule.m".