Audiobook chapters that don't start at the beginning of the file - Sonos

We've implemented a SMAPI service and are attempting to serve up an audiobook. We can select the audiobook and start playback, but we run into issues when we want to move between chapters because our audio files are not split by chapter. Each audiobook is divided into roughly equal-length parts, and we have information on which part and how far into the part each chapter starts.
So we've run into a mismatch: our getMetadata response returns the chapters of the audiobook, because that's how we'd like a user to navigate the book, but our getMediaURI responses for each chapter return URLs for the parts the audio is actually divided into, and we seem to be unable to start playback at a specific position within those files.
Our first attempt to resolve the issue was to include positionInformation in our getMediaURI response. That would still leave us with an issue of ending a chapter at the appropriate place, but might allow us to start at the appropriate place. But according to the Sonos docs, you're not meant to include position information for individual audiobook chapters, and it seems to be ignored.
Our second thought, and possibly a better solution, was to use the httpHeaders section of the getMediaURI response to set a Range header for only the section of the file that corresponds to the chapter. But Sonos appears to have issues with us setting a Range header, and seems to either ignore our header or break when we try to play a chapter. We assume this is because Sonos is trying to set its own Range headers.
Our current thought is that we might be able to pass the media URLs through some sort of proxy, adjusting the Sonos Range header by adding an offset to the start and end values based on where the chapter starts in the audio file.
So right now we return <fileUrl> from getMediaURI and Sonos sends a request like this:
<fileUrl>
Range: bytes=100-200
Instead we would return <proxyUrl>?url=<urlEncodedFileUrl>&offset=3000 from getMediaURI. Sonos would send something like this:
<proxyUrl>?url=<urlEncodedFileUrl>&offset=3000
Range: bytes=100-200
And the proxy would redirect to something like this:
<fileUrl>
Range: bytes=3100-3200
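One wrinkle we've realized since writing that down: a plain 302 redirect can't change the Range header the player will re-send, so the proxy would probably have to fetch the shifted range itself and stream it back. A minimal sketch of that idea (Python stand-in for whatever we'd actually deploy; the url and offset query parameters are just our proposal from above, not anything Sonos defines):

# Byte-shifting proxy sketch (stdlib only).
import re
import shutil
import urllib.request
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer
from urllib.parse import parse_qs, urlparse

class RangeShiftProxy(BaseHTTPRequestHandler):
    def do_GET(self):
        query = parse_qs(urlparse(self.path).query)
        file_url = query["url"][0]
        offset = int(query.get("offset", ["0"])[0])

        # Shift the player's requested range by the chapter's byte offset.
        m = re.match(r"bytes=(\d+)-(\d*)", self.headers.get("Range", "bytes=0-"))
        start = int(m.group(1)) + offset
        end = str(int(m.group(2)) + offset) if m.group(2) else ""

        upstream = urllib.request.Request(
            file_url, headers={"Range": f"bytes={start}-{end}"})
        with urllib.request.urlopen(upstream) as resp:
            self.send_response(resp.status)  # usually 206 Partial Content
            for name in ("Content-Type", "Content-Length"):
                if resp.headers.get(name):
                    self.send_header(name, resp.headers[name])
            # Shift Content-Range back down so it matches what the player asked.
            cr = resp.headers.get("Content-Range")  # e.g. "bytes 3100-3200/5000000"
            if cr:
                m2 = re.match(r"bytes (\d+)-(\d+)/(\d+|\*)", cr)
                self.send_header("Content-Range",
                                 f"bytes {int(m2.group(1)) - offset}-"
                                 f"{int(m2.group(2)) - offset}/{m2.group(3)}")
            self.end_headers()
            shutil.copyfileobj(resp, self.wfile)

# Note: this only shifts the start of playback; ending a chapter before the
# end of the file would additionally require capping the range at the next
# chapter's offset.
if __name__ == "__main__":
    ThreadingHTTPServer(("", 8080), RangeShiftProxy).serve_forever()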
Has anyone else dealt with audio files that don't match up one-to-one with their chapters? How did you deal with it?

The simple answer is that Sonos players respect the duration of the file, not the duration expressed in the metadata. You can't get around this with positionInformation or Cloud Queues.
However, the note that you shouldn't use positionInformation for chapters in an audiobook seems incorrect, so I removed it. The Saving and Resuming documentation states that you should include it if a user is resuming listening. You could use this to start playback at a specific position in the audio file. Did you receive an error when you attempted to do this?
Note that you would not be able to stop playback within the file (for example, if a chapter ended before the file ended). The player would play the entire file before stopping. The metadata would also not change until the end of the file. So, for example, if the metadata for the file is "Chapter 2" and chapter 2 ends before the end of the file, the Sonos app would still display "Chapter 2" until the end of the file.
Also note that the reporting APIs have been deprecated. See Add Reporting for the new reporting endpoint that your service should host.

Related

Getting metadata (track info, artist name, etc.) for a radio stream

I have already checked the following links, but they weren't much help (in parentheses I've explained why the approach suggested in their answers didn't work in my case):
Streams - hasOutOfBandMetadata and getStreamingMetadata (our content is already HLS)
Sonos player not calling GetStreamingMetadata (getMetadata is not called; only getMediaMetadata is called, since a radio stream has a unique id and is not a collection)
The Sonos API documentation mentions that "hasOutOfBandMetadata" is deprecated and recommends that metadata be embedded inline with the content. However, due to some limitations that can't be achieved in our service, so I have to go with the old way (whatever it is).
I suppose that ideally "getStreamingMetadata" should be called after setting "hasOutOfBandMetadata" to true, but that's not happening.
Secondly, for testing purposes I set "secondsRemaining" and "secondsToNextShow" to different values and found that "description" is displayed for exactly those intervals (if I set secondsRemaining/secondsToNextShow to 20, the description is displayed for 20 seconds; if set to 200, then for 200 seconds, and so on). After the time elapses, the information inside "description" disappears. So I guess there must be some call to refresh metadata after the time elapses, but I couldn't figure out which one.
Kindly explain the proper way to get metadata for a continuous radio stream. On TuneIn radio you can find Radio Paradise, whose metadata is updated as the track changes. Even if they embed metadata inline with their content, there must be some way to achieve this.
Can you please post the calls and the responses that you are sending? This would help with troubleshooting this issue. Also, what mimeType are you trying to use?
At this time, the only fully supported method for getting metadata for a continuous radio stream on Sonos that is guaranteed to work in future releases is to embed the metadata inline.

Asynchronous Pluggable Protocol for CID: (email), how to handle duplicate URLs

This is somewhat a duplicate of this question, but that question has no (valid) answer and is 1.5 years old, so I'm asking my own in the hope that people have more info now.
If you are using multiple instances of a WebBrowser control, MSHTML, IHTMLDocument, or whatever... is there a way, from inside the APP code (mostly IInternetProtocol::Start), to know which control instance is loading the resource? Or is there a way to use a different APP for each instance of the control, maybe by providing one via IDocHostUIHandler or ICustomDoc or otherwise? I'm currently using IInternetSession::RegisterNameSpace to make it process-wide.
Optional reading below, don't feel you need to read it unless above isn't clear.
I'm working on a legacy (Win32 C++) email client that uses the MS ActiveX WebBrowser control (MSHTML or other names it goes by) to display HTML emails. It was saving everything to temp files, updating the cid: URLs, and then having the control load that. Now I want to do it the correct way, using APP. I've got it all working with some test code that just uses static variables/globals and loads one email.
My problem now is, the app might have several instances of the control all loading different emails (and other stuff) at the same time... not really multiple threads so much, just the asynchronous nature of the control. I can give each instance of the control a unique URL to load the email, say, cid:email-GUID, and then in my APP code I can use that URL to know which email to load. However, when it comes to loading any content inside the email, like attached images using src="cid:", those will not always be unique so I will not always know which image it is, for which email. I'd like to avoid having to modify the URLs of the HTML before displaying it (I'm doing that now for the temp file thing, but want to do it a better way).
IInternetBindInfo::GetBindString can return the referrer, BINDSTRING_XDR_ORIGIN, or the root URL, BINDSTRING_ROOTDOC_URL, but those require newer versions of IE and my legacy app must support older XP installs that might even have IE6 or IE7, so I'd rather not use these.
Tagged as TWebBrowser because that is actually what I'm using (Borland Builder 6 C++), but don't need answers specific to that platform.
As the Asynchronous Pluggable Protocol handler is very low level, you cannot attach handlers individually to different rendering controls.
Here is a way to get the referrer:
Obtain BINDSTRING_HEADERS
Extract the referrer by parsing the line Referer: http://....
See also How can I add an extra http header using IHTTPNegotiate?
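The parsing step itself is trivial once you have the raw header block from BINDSTRING_HEADERS; a quick sketch of just that step (Python for brevity, the real code would live in your C++ handler):

def extract_referrer(raw_headers):
    # raw_headers is the CRLF-separated block BINDSTRING_HEADERS hands you.
    for line in raw_headers.split("\r\n"):
        name, _, value = line.partition(":")
        if name.strip().lower() == "referer":
            return value.strip()
    return None

# The referring document tells you which email is loading the resource:
headers = "Accept: */*\r\nReferer: cid:email-GUID\r\nUser-Agent: MSIE"
assert extract_referrer(headers) == "cid:email-GUID"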
Here is another crazy way:
Create another Asynchronous Pluggable Protocol Handler by calling RegisterMimeFilter.
Monitor text/plain and text/html
Scan the incoming email source (the content arrives incrementally) and parse and store all image links in a dictionary
In your NameSpaceHandler you can then use this dictionary to look up the reference for any image resource.
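The scanning-and-dictionary part of that second approach could look roughly like this (Python sketch of the idea only; a real MIME filter would be incremental C++ code and would also have to buffer tags split across chunks):

import re

# Maps each Content-ID seen in an email's HTML to the email that owns it.
cid_owner = {}

CID_SRC = re.compile(r'src=["\']cid:([^"\']+)["\']', re.IGNORECASE)

def scan_chunk(email_guid, html_chunk):
    # Record every cid: image reference found in this chunk of the email.
    for content_id in CID_SRC.findall(html_chunk):
        cid_owner.setdefault(content_id, email_guid)

scan_chunk("email-GUID-1", '<img src="cid:image001@example">')
# Later, when the namespace handler is asked for cid:image001@example:
assert cid_owner["image001@example"] == "email-GUID-1"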

Setting up box metadata for MediaSource to work

Using mp4box to add h264 to a fragmented dash container (mp4)
I'm appending the m4s (DASH) media segments, but I don't know how to preserve order in the box metadata, and I'm not sure whether it can be edited when using mp4box. Basically, if I look at the segments in this demo:
http://francisshanahan.com/demos/mpeg-dash
there is a bunch of stuff latent in the m4s files that specifies order: the mfhd atom that carries the sequence number, or the sidx that keeps the earliest presentation time (which is identical to the base media decode time in the tfdt). Plus, my sampleDuration times are zeroed in the sample entries in the trun (track fragment run).
I have tried to edit my m4s files with mp4parser, without luck. Has anybody else taken h264 and built up a MediaSource stream?
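For concreteness, this is the kind of edit I'm after: walking a segment's top-level boxes and rewriting the mfhd sequence_number (rough Python sketch under my current understanding of the box layout; the sidx/tfdt times would need the same treatment, the filename is just an example, and 64-bit box sizes aren't handled):

import struct

def iter_boxes(buf, start=0, end=None):
    # Yield (type, payload_start, payload_end) for each box in buf[start:end].
    end = len(buf) if end is None else end
    pos = start
    while pos + 8 <= end:
        size, btype = struct.unpack_from(">I4s", buf, pos)
        if size < 8:
            break  # size == 1 (64-bit) and size == 0 not handled in this sketch
        yield btype, pos + 8, pos + size
        pos += size

def patch_sequence_number(segment, new_seq):
    # Return a copy of the m4s segment with the mfhd sequence_number rewritten.
    data = bytearray(segment)
    for btype, p0, p1 in iter_boxes(data):
        if btype == b"moof":
            for ctype, c0, c1 in iter_boxes(data, p0, p1):
                if ctype == b"mfhd":
                    # mfhd payload: 1 byte version, 3 bytes flags, then the
                    # 32-bit sequence_number that has to stay monotonic.
                    struct.pack_into(">I", data, c0 + 4, new_seq)
    return bytes(data)

with open("segment1.m4s", "rb") as f:
    patched = patch_sequence_number(f.read(), 1)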

Amazon S3 pre signed URLs using Amazon Java SDK and extra / characters

I've been creating Presigned HTTP PUT URLs and everything was working great until I wanted to start using "folders" in S3; I wanted the key to have the character '/'.
Now I get a "signature doesn't match" error when I send the HTTP PUT requests, presumably because the '/' changes to %2F... If I escape the character before creating the presigned URL it works great, but then the Amazon management console doesn't understand it and shows the key as one file instead of subfolders.
Any idea?
P.S.
The HTTP PUT requests are sent using C++ with POCO NET library.
EDIT
I'm using Poco HTTPRequest from C++ to ask my Java web server to generate a signed URL (returned in the response).
C++ then uses this URL to PUT a file to S3, again using Poco.
The problem was that the URLs returned from the web server were parsed through Poco URI objects, which auto-decoded the S3 object key and thus changed it. With that in mind, I was able to fix my problem.
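To illustrate the failure mode in isolation: a presigned URL is signed over the exact bytes of the request, so any URI class that percent-decodes it and re-serializes it differently breaks the signature. A minimal sketch (Python/boto3 as a stand-in for our Java/Poco stack; bucket, key, and credentials are made up):

import boto3
from urllib.parse import unquote

# Dummy credentials work: generate_presigned_url signs locally, no AWS call.
s3 = boto3.client("s3",
                  aws_access_key_id="AKIAEXAMPLE",
                  aws_secret_access_key="secret-example",
                  region_name="us-east-1")

url = s3.generate_presigned_url("put_object",
                                Params={"Bucket": "my-bucket",
                                        "Key": "folder/file.txt"},
                                ExpiresIn=3600)

# A URI class that auto-decodes (as Poco::URI does) changes the bytes that
# were signed; presigned URLs typically contain escapes such as %2F in the
# query string, so the round trip is not the identity.
print(url == unquote(url))  # False whenever the URL contains %XX escapes
# The fix: treat the presigned URL as an opaque string end to end.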
Tricky - I'll try to approach this bottom up.
Disclaimer: I got carried away visually inspecting the Poco libraries instead of actually debugging a code sample, which should yield more reliable results much faster, see below ;)
Analysis
If I escape the character before creating the presigned URL it works great, but then the Amazon management console doesn't understand it and shows the key as one file instead of subfolders.
The latter stems from the fact that S3 doesn't actually have a concept of folders at the storage level; see e.g. the section Index Documents and Folders within Index Document Support:
Objects stored in Amazon S3 are stored within a flat container, i.e., an Amazon S3 bucket, and it does not provide any hierarchical organization, similar to a file system's. However, you can create a logical hierarchy using object key names and use these names to infer logical folders that contain these objects.
That's exactly what the AWS Management Console is doing here as well:
The AWS Management Console also supports the concept of folders, by using the same key naming convention used in the preceding sample.
However, your test regarding the assumption of / being encoded as %2F proves that this is indeed how Poco::Net is encoding the URL when performing the HTTP PUT request.
(I'm actually a bit surprised that the AWS Java SDK seems to generate different URLs here for / vs. %2F, insofar a recent analysis regarding Why is my S3 pre-signed request invalid when I set a response header override that contains a “+”? seems to indicate respective canonicalization by the AWS .NET SDK, see below for more on this.)
Potential Solution
In order for your scenario to work as desired, you'll need to figure out where the URL is encoded this way - I could think of two components in principle:
Poco::Net
Finding out why Poco::Net is encoding the URL different than S3 (if at all, see below) is best done by debugging your code, here's where I'd start:
Class HTTPRequest uses class URI in turn, which automatically performs a few normalizations on all URIs and URI parts passed to it, in particular percent-encoded characters are decoded. The other way round is handled by method encode(), which is where things get interesting and call for a breakpoint, see URI.cpp:
lines 575 ff. - here encode() does its magic, which indeed seems to be in place, insofar as neither the code within the function nor the various chars passed in via the reserved parameter contain the offending / (see lines 47 ff. for the respective constants in use);
consequently you might want to set a breakpoint in this function and backtrace the call stack to find out which code is actually doing the encoding upfront - which might not yield an offender at all, see below.
Java => C++ transition
You haven't specified yet which channel is actually used to communicate the pre-signed URL generated by the AWS Java SDK to C++. Given that the code review (mind you, visual inspection only; I haven't debugged this myself) of the Poco::Net functionality concludes that no obvious offender can be identified in the library itself, it seems more likely that the URL already enters your C++ layer encoded (easily verified via debugging, of course). Are you by chance using any kind of web service between these components, for example?
Good luck!

Suggestions for best way of implementing HTTP upload resume

I'm working on a project which will allow large files (GB+) to be uploaded via HTTP PUT, and I need to implement a method for resuming the upload. Once a file is uploaded and finalized, it is complete and cannot be modified. So far I have two options in mind, but neither fits perfectly:
Option 1
The client sends an initial HEAD request for the file. The server returns 404 if it does not exist; otherwise it returns the file details, including the current size, along with an HTTP X-header (X-Can-Resume or something like that) specifying whether the file can be resumed, and a Range header specifying which bytes the server has. This seems OK, but I'm not keen on the X-header, as it departs from the HTTP standard.
Option 2
The client sends a PUT request with a Content-Length header of 0 bytes and no body; the server can then send back a 308 Resume Incomplete (as proposed here: http://code.google.com/p/gears/wiki/ResumableHttpRequestsProposal) or a 202 Accepted to indicate whether to resume or start from the beginning. This also seems acceptable, apart from the use of a non-standard status code.
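For what it's worth, here is roughly how I picture the client side of option 1 (Python sketch; the X-Can-Resume header and the Range-in-a-response convention are my own proposal, not standard HTTP):

import os
import requests

def resume_upload(url, path):
    size = os.path.getsize(path)

    head = requests.head(url)
    if head.status_code == 404 or head.headers.get("X-Can-Resume") != "true":
        offset = 0  # new upload, or the server can't resume this file
    else:
        # Proposed convention: the server reports the bytes it already
        # holds as "bytes=0-N", so the upload resumes at byte N + 1.
        held = head.headers.get("Range")
        offset = int(held.rsplit("-", 1)[1]) + 1 if held else 0

    with open(path, "rb") as f:
        f.seek(offset)
        requests.put(url, data=f,
                     headers={"Content-Range":
                              f"bytes {offset}-{size - 1}/{size}"})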
Any other suggestions on the best way to implement this?
Thanks,
J
For either solution there are no existing client and server implementations, so I'm guessing you'll code both. I think you should just find the right balance between the simplest thing that works and what is described in the Gears proposal (by the way, you probably know Gears is dead), and be prepared to change when a standard emerges.
If I were to implement this feature, I'd make it possible for the client to upload in chunks, and I would add a message digest on the whole content and on each chunk.
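A sketch of that chunks-plus-digests idea (Python; the endpoint layout and header names are invented purely for illustration):

import hashlib
import requests

CHUNK = 8 * 1024 * 1024  # 8 MiB per chunk

def upload_chunked(base_url, path):
    whole = hashlib.sha256()
    index = 0
    with open(path, "rb") as f:
        while chunk := f.read(CHUNK):
            whole.update(chunk)
            # A per-chunk digest lets the server verify each piece on arrival
            # and lets the client skip chunks the server already confirmed.
            requests.put(f"{base_url}/chunks/{index}", data=chunk,
                         headers={"X-Chunk-SHA256":
                                  hashlib.sha256(chunk).hexdigest()})
            index += 1
    # Finalize: the server checks the whole-file digest before sealing it.
    requests.post(f"{base_url}/complete",
                  headers={"X-Content-SHA256": whole.hexdigest()})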