I am using Node-RED and I am trying to store Text to Speech output in a Cloudant database. That works fine, and I can also get it back out in msg.payload.speech, but when I feed it into Speech to Text, my whole app crashes with this error:
ERR Dropped log message: message too long (>64K without a newline)
So it seems that the Speech to Text node cannot handle large messages. It also seems that Text to Speech produces a very long string regardless of what you inject; one word or a whole paragraph makes no difference.
Is there a way to get around this issue in Node-RED?
What happens if you split the audio that you feed to the STT service into smaller chunks? Does that work? How much audio are you trying to feed?
If you give us more details about what you are trying to accomplish then we should be able to help.
Can you also explain the problem that you are experiencing with TTS? What do you mean by "Text to Speech makes a very long string regardless of what you inject"?
Thank you.
Thanks for your reply.
What I basically want to do is use the S2T node in Node-RED. I have put a .wav file in a Cloudant database, and when I feed this .wav file into the S2T node, the app crashes. I got the speech into the database in several ways: 1. via the Text to Speech node, 2. by adding the .wav file to the database manually.
When I look in Cloudant, it is one long line of characters, so I tried placing the wave file on separate lines, which did not help. Then I split the wave file into smaller chunks, but this did not work either, probably because the wave file loses its structure.
The next thing I tried was a FLAC file, which is also supported by T2S and S2T. Since FLAC is compressed (roughly a factor of 10), the file would be less than 64K. But I got the message that only .wav files are supported. Then I looked in the code of the S2T node and found out that only .wav is supported there (the Watson S2T service in Bluemix supports more audio formats).
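One thing that may be worth checking before the S2T node: the "one long line of characters" in Cloudant suggests the audio is being stored as a base64 string, since JSON documents cannot hold raw binary. A function node along these lines would rebuild a binary buffer before the S2T node sees it (a minimal sketch, assuming the audio sits in msg.payload.speech as mentioned above; it may not be the whole story for the 64K limit):

```
// Node-RED function node: decode the base64 string from Cloudant back into
// a binary Buffer so the S2T node receives audio bytes, not text.
msg.payload = Buffer.from(msg.payload.speech, 'base64');
return msg;
```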
It's basically all in the title: how can I access the duration of a sound node in the Web Audio API? I was expecting something like source.buffer.size or source.buffer.duration to be available.
The only alternative I can think of, in case this is not possible to accomplish, is to read the file's metadata.
Assuming that you are loading audio, when you decode it with context.decodeAudioData the resulting AudioBuffer has a .duration property. This is the same buffer you use for the source node.
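For example (a sketch; the URL is hypothetical):

```
// decodeAudioData turns the fetched bytes into an AudioBuffer; its
// .duration property is the length of the sound in seconds.
const context = new AudioContext();

fetch('sound.mp3')                          // hypothetical URL
  .then(response => response.arrayBuffer())
  .then(data => context.decodeAudioData(data))
  .then(audioBuffer => {
    console.log('duration:', audioBuffer.duration, 'seconds');
    const source = context.createBufferSource();
    source.buffer = audioBuffer;            // same buffer that drives the source node
    source.connect(context.destination);
  });
```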
You can look at the SoundJS implementation, although there are easier-to-follow tutorials out there too.
Hope that helps.
Good and bad news: nodes do not have a length, but I bet you can achieve what you want another way.
Audio nodes are sound processors or generators. The duration of a sound processed or made by a node can change; to the computer it is all the same, since a lack of sound is just a buffer full of zeroes instead of other values.
So, if you needed to dynamically determine the duration of a sound, you could write a node which timed 'silences' in the input audio it received.
However, I suspect from your mention of metadata that you are loading existing sounds, in which case you should indeed use their metadata, or determine the length by loading the file into an audio element and requesting the duration from that.
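If you do have the file's URL, the audio element route is about as simple as it gets (sketch; URL hypothetical):

```
// The browser parses the file's metadata and reports the duration without
// decoding the whole file.
const audio = new Audio('sound.mp3');       // hypothetical URL
audio.addEventListener('loadedmetadata', () => {
  console.log('duration:', audio.duration, 'seconds');
});
```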
Using mp4box to add h264 to a fragmented dash container (mp4)
I am appending the .m4s (DASH) media segments, but I don't know how to preserve order in the box metadata, and I'm not sure whether it can be edited when using mp4box. Basically, if I look at the segments in this demo:
http://francisshanahan.com/demos/mpeg-dash
there is a bunch of metadata in the .m4s files that specifies order, like the mfhd atom that holds the sequence number, or the sidx that keeps the earliest presentation time (which is identical to the base media decode time in the tfdt). Plus, my sampleDuration times are zeroed in the sample entries of the trun (track fragment run).
I have tried to edit my .m4s files with mp4parser without luck. Has somebody else taken H.264 and built up a MediaSource stream?
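Not an answer to the ordering problem itself, but for inspecting segments: every ISO BMFF box starts with a 4-byte big-endian size followed by a 4-byte type code, so a minimal walker shows which top-level boxes (styp, sidx, moof, mdat) appear and in what order; reaching the mfhd inside moof takes a recursive variant of the same loop. A Node sketch (64-bit box sizes are not handled):

```
// Print the type and size of each top-level box in an .m4s segment.
const fs = require('fs');

const buf = fs.readFileSync(process.argv[2]);
let offset = 0;
while (offset + 8 <= buf.length) {
  const size = buf.readUInt32BE(offset);
  const type = buf.toString('ascii', offset + 4, offset + 8);
  console.log(type, size);
  if (size < 8) break;                      // size === 1 would mean a 64-bit size field
  offset += size;
}
```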
I have a requirement where a user can upload files present in an app to SharePoint via the same app.
I tried using the http://schemas.microsoft.com/sharepoint/soap/CopyIntoItems method of SharePoint, but it needs the file as a base64-encoded string embedded in the body of the SOAP request. My code crashed on the device when I tried to convert even a 30 MB file into a base64-encoded string, while the same code executed just fine on the simulator.
Is there any alternative way to upload files to SharePoint (like file streaming, etc.)? I may have to upload files of up to 500 MB. Is there a more efficient library to convert NSData into a base64-encoded string for large files?
Should I read the file in chunks, convert each chunk into a base64-encoded string, and upload once the complete file is converted? Any other approaches?
First off, your code probably crashed because it ran out of memory. I would do a loop where I read chunks, convert them, and push them to an open socket. This probably means that you need to go to a lower level than NSURLConnection; I have searched for NSURLConnection and chunked upload without much success.
Some seem to suggest using ASIHttp, but looking at the homepage it seems abandoned by the developer, so I can't recommend that.
AFNetworking looks really good; it has blocks support, and I can see from the example on its front page how it could be used for you. Look at the streaming request example: basically, create an NSInputStream that you push chunked data to and use it in an AFHTTPURLConnectionOperation.
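The thread is Objective-C, but just to illustrate the chunked-streaming idea in runnable form, here is the same principle as a Node sketch (host, path, and filename are made up); the point is that the file is never held in memory in full:

```
// Stream the file from disk straight into the request body. With no
// Content-Length set, the request goes out with chunked transfer encoding.
const fs = require('fs');
const https = require('https');

const req = https.request({
  host: 'sharepoint.example.com',           // hypothetical host
  path: '/upload',                          // hypothetical endpoint
  method: 'POST',
});
fs.createReadStream('bigfile.bin').pipe(req);
```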
Hey Overflow, I have an application which serves as a user interface for a spartan command-line program.
I have the program running in a separate process, and my application monitors it to see if it is responding and how much CPU it is utilising.
Now I have a list of files in my program (a listbox) which are to be sent to the application, and that part works fine. But I want to be able to read text from the command line so as to determine when the first file has been processed.
The command line prints one of "selecting settings", "unsupported format" and "cannot be fixed".
What I want to be able to do is remove item(0) in listbox1 when it prints one of these three things.
Is this possible?
I thought of programming an event handler for something like com_exe.print, if that is possible.
Read the standard output from the process.
MSDN Article
There's an example of synchronous reading from the process in that article.
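That article covers the .NET Process class. Purely to illustrate the pattern, the same idea looks like this as a Node sketch (the executable name is made up):

```
// Spawn the command-line tool, buffer its stdout, and react line by line
// when one of the three known status strings appears.
const { spawn } = require('child_process');

const child = spawn('fixer.exe', ['file1.wav']); // hypothetical executable
let pending = '';
child.stdout.on('data', chunk => {
  pending += chunk.toString();
  const lines = pending.split('\n');
  pending = lines.pop();                    // keep any incomplete trailing line
  for (const line of lines) {
    if (/selecting settings|unsupported format|cannot be fixed/.test(line)) {
      // this is the point where item(0) would be removed from listbox1
      console.log('first file processed:', line.trim());
    }
  }
});
```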
You might be able to do what you want using the AttachConsole API function as described here. However, maybe an easier alternative would be to pipe the output of the command line app to a text file; your app could then just parse the text file (assuming that the command line app wouldn't lock the file completely, I'm not sure about that).
If you don't know how to pipe the output, this page has quite a bit of information.
The tiny question is:
How do I start playing a given online resource (e.g. http://example.com/file.mp3), with RealPlayer perhaps?
PyS60, C++ or C# via RedFiveLabs would do.
EDIT1: Title changed from "Start RealPlayer on Symbian" to something more appropriate.
I think the title is a little misleading if you just want to play back media content and not use a particular application for it.
In C++ there is CMdaAudioPlayerUtility::OpenUrlL(), but it's not widely implemented; for example, on S60 it will complete with a KErrNotSupported status. To play files you can use other open functions in CMdaAudioPlayerUtility, such as OpenFileL() or OpenDesL(), but you need a separate mechanism for retrieving the files, or at least the bytes, onto the device.
There is also CVideoPlayerUtility::OpenUrlL(), which supports RTSP audio streams but not HTTP.