MP4 not working on mobile devices after changing the source parameter - html5-video

I have a strange problem.
On a website for my client, I'm showing some mp4 files using the HTML5 video element. The videos that are visible on the page while loading do show up on mobile devices without any problems.
When I try to change the source of a video element (after an AJAX request), the video element shows a black screen. The new source can be exactly the same as one that was already shown on page load, but after updating the src parameter it just won't show.
I have already checked the mp4 encoding (which is H.264), the Content-Type in the server response headers is correct (video/mp4), and the server returns "206 Partial Content". Also, gzip encoding for mp4 files is off.
If I check the remote debugger in Safari (inspecting Safari on an iPad), I get the error "An error occurred trying to load the resource". These are the response headers:
HTTP/1.1 206 Partial Content
Content-Type: video/mp4
ETag: "23f72-5a4561b99803e"
Last-Modified: Tue, 28 Apr 2020 09:03:40 GMT
Content-Range: bytes 0-147313/147314
Accept-Ranges: bytes
Date: Wed, 29 Apr 2020 05:13:12 GMT
Content-Length: 147314
Keep-Alive: timeout=5, max=84
Connection: Keep-Alive
Server: Apache
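For reference, the source is currently swapped roughly like this (a simplified sketch; the real code runs inside an AJAX success handler, and newVideoUrl stands in for the URL returned by the request):
var video = document.getElementById('video');
var source = video.querySelector('source');
source.setAttribute('src', newVideoUrl); // newVideoUrl comes from the AJAX response
video.play();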
Does anyone have an idea what could be causing this issue?
Thanks!

The documentation for this can be a bit confusing - it can look like it is not possible to dynamically change the source (https://html.spec.whatwg.org/multipage/embedded-content.html):
Dynamically modifying a source element and its attribute when the element is already inserted in a video or audio element will have no effect. To change what is playing, just use the src attribute on the media element directly, possibly making use of the canPlayType() method to pick from amongst available resources. Generally, manipulating source elements manually after the document has been parsed is an unnecessarily complicated approach.
However, it can be changed and the code snippet below should work reliably cross browser - the video.load() line is key as it actually makes sure the new source is inserted. You can experiment by commenting out this line and seeing the difference:
var video = document.getElementById('video');
var source = document.createElement('source');
source.setAttribute('src', 'http://commondatastorage.googleapis.com/gtv-videos-bucket/sample/ForBiggerBlazes.mp4');
video.appendChild(source);
video.play();
function changeSource() {
  video.pause();
  source.setAttribute('src', 'http://clips.vorwaerts-gmbh.de/VfE_html5.mp4');
  video.load(); // This step is key
  video.play();
}
<h1>Video source change test</h1>
<p>
  <button id="sourceButton" onclick="changeSource()">Click me to change the video source.</button>
</p>
<video id="video" width="320" height="240" controls></video>
The above is based on the excellent answer here: https://stackoverflow.com/a/18454389/334402
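For completeness, the spec's recommended alternative (setting src directly on the media element rather than manipulating source children) would look roughly like this, reusing the video variable from the snippet above; a minimal sketch:
function changeSourceDirectly() {
  // changing src on the media element itself triggers the load algorithm,
  // so no separate video.load() call is needed in this variant
  video.src = 'http://clips.vorwaerts-gmbh.de/VfE_html5.mp4';
  video.play();
}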

Related

How do I save the PDF to server? css2pdf#cloudformatter xeponline

I was wondering if anyone knew of a way to save the resulting PDF document to the server, instead of prompting the user to download it locally?
Using this:
http://www.cloudformatter.com/CSS2Pdf
Many thanks
Edit:
I am using the following JS to initiate the PDF.
$(function() {
  $('#generatePDF').click(function(e) {
    e.preventDefault();
    var pdfdata = xepOnline.Formatter.Format('printableInvoice', {
      pageWidth: '216mm',
      pageHeight: '279mm',
      render: 'base64'
    });
    console.log(pdfdata);
  });
});
I'm leaving the answer in place as the comments below are relevant. The original answer showed how to get the source information (using the "base64" option), not the final PDF.
So, to get the final PDF that is in memory, examine the code on GitHub:
https://github.com/Xportability/css-to-pdf/blob/master/js/xepOnline.jqPlugin.js
starting at the "else" at line 602. This "else" is executed if you force anything other than a download, i.e. if you chose "newwin" or "embed" as the method and the browser-sniffing JS did not force it back to a download (it does on Safari, IE and also mobile browsers).
On a successful AJAX post, the function "xepOnline.Formatter.__postBackSuccess" is executed. This function starts at line 863. At line 865, the base64-encoded bytes of the actual PDF are loaded. If you debug your site and break at that line of code, you can read the value of the var "base64", which holds the base64-encoded bytes.
So, if you only had Firefox and Chrome to consider, you could modify the code to post the result back to the server instead of displaying it. If you have all those browsers to consider, you will need to add some option (say option:'memory') that skips all the browser sniffing and runs the AJAX version, but with its own success function.
I can look at adding this to the library but you are free to pull it and make some mods yourself.
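As a rough illustration, once you have that base64 string (for example, captured from a modified success function), posting it back to the server could look something like the snippet below. The /save-pdf endpoint and the base64Pdf variable are placeholders, not part of the library:
$.ajax({
  url: '/save-pdf', // hypothetical endpoint that stores the PDF server-side
  type: 'POST',
  data: {
    filename: 'invoice.pdf',
    pdf: base64Pdf // the value of the "base64" var captured in __postBackSuccess
  },
  success: function() {
    console.log('PDF stored on the server');
  }
});
The server-side handler then just base64-decodes the pdf field and writes the resulting bytes to disk.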

Mobile Site not giving correct Data - Beautiful Soup

I'm trying to get product details from the following website.
Baby Shampoo
Specifically the TCIN:# and product details.
But this information is not showing up in the page when I parse it.
A simple line like:
spans = soup.find_all("span", {"class" : "list-value"})
is turning up no results, and when I go even more basic with:
print(soup.prettify())
I see the page print out but none of the details are in the page. I am not seeing any iframes on the page, and can't figure out why the data is not showing.
I even attempted to adjust my headers in the request:
headers = { 'User-Agent': 'Mozilla/5.0 (Linux; <Android Version>; <Build Tag etc.>) AppleWebKit/<WebKit Rev> (KHTML, like Gecko) Chrome/<Chrome Rev> Mobile Safari/<WebKit Rev>'}
and also:
headers = { 'User-Agent': 'Mozilla/5.0'}
but neither of these changes the results. Any idea what could be happening, and where this data could be located?
Thanks,
Mike
If you look at the network requests through Chrome Developer Tools or Firefox Firebug, you can see all the HTTP GET and POST requests being made, and then you have to find out which one contains the needed information. Make sure that you have the Network toolbar enabled and Preserve Log checked before making the request in the browser. In your case, the information is fetched by this GET request: http://tws.target.com/productservice/services/item_service/v1/by_itemid?id=13197674&callback=browseCallback
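As a minimal sketch, you could pull that endpoint directly with requests and strip the JSONP wrapper; the exact shape of the returned JSON is not shown here, so inspect the result to find the TCIN and product details:
import json
import re
import requests

# the item-service request found in the network log (same URL as above)
url = ("http://tws.target.com/productservice/services/item_service/v1/"
       "by_itemid?id=13197674&callback=browseCallback")

resp = requests.get(url, headers={"User-Agent": "Mozilla/5.0"})
# the callback parameter wraps the JSON in browseCallback(...); strip that wrapper
body = re.sub(r"^\s*browseCallback\(", "", resp.text)
body = re.sub(r"\)\s*;?\s*$", "", body)
data = json.loads(body)
print(json.dumps(data, indent=2))  # inspect this to locate the TCIN and details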

Should I use X-Content-Type-Options:nosniff on images?

I'm using the X-Content-Type-Options: nosniff security header on my project, but it broke my images in IE. All other file types work fine; only images are affected.
I suspect that IE is handling images differently than the type specified in the Content-Type header (i.e. image/jpeg). I assume it recognizes the images as application/octet-stream, as I'm returning an array of bytes in my app logic.
One solution is to use nosniff for all content types except images (image/jpeg, image/png, ...); a rough sketch of what I mean is below.
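(Purely for illustration, assuming a Node/Express backend, which is not necessarily the stack in use: the idea is to send nosniff on every response except requests for image files.)
var express = require('express');
var app = express();

// illustration only: apply nosniff to everything except image requests
app.use(function(req, res, next) {
  if (!/\.(jpe?g|png|gif)$/i.test(req.path)) {
    res.setHeader('X-Content-Type-Options', 'nosniff');
  }
  next();
});
The alternative would be to keep nosniff on everywhere and make sure the image responses really go out with an image/... Content-Type instead of application/octet-stream.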
Do you agree this is the best solution for this case?
What type does IE assume for an image when it is returned as a byte array?
Thanks

html5-video-tag supported, but mime-type not - give an alternative link

We use the html5-video-tag.
Sometimes we get only one video-source (.mp4).
For browsers which do not support HTML5 video, all is OK; there the fall-back method works:
<video>
<object with flash></object>
</video>
The problem occurs if only an .mp4 is provided. Firefox then only displays "Kein Video mit unterstütztem Format und MIME-Type gefunden" ("No video with a supported format and MIME type found"). Yes, Firefox cannot play .mp4 videos.
What can I do to force Firefox (or any browser which does not support the MIME type) to show the alternative Flash object section?
Is there an attribute for the HTML5 video tag to force a pass-through on error?
Or can I catch an "onerror" event ...?
You can use JavaScript to detect whether a file type is supported:
(function (video) {
  if (!video.canPlayType || !video.canPlayType('video/mp4')) {
    // Flash fallback
  }
}(document.createElement('video')));
As said in HTML5 video, fallback to flash if no .ogv file
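As for the asker's follow-up about catching an error event: when none of the <source> children can be played, an error event fires on the failing <source> element itself, so you can listen there and swap in the Flash fallback. A rough sketch, with the <object> markup left as a placeholder:
var videoEl = document.querySelector('video');
var sources = videoEl.querySelectorAll('source');
var lastSource = sources[sources.length - 1];

// fires when the last <source> candidate could not be loaded/played
lastSource.addEventListener('error', function () {
  // replace the video element with the Flash fallback (placeholder markup)
  videoEl.outerHTML =
    '<object type="application/x-shockwave-flash" data="player.swf">' +
    '  <param name="movie" value="player.swf">' +
    '</object>';
});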

Reliably getting favicons in Chrome extensions, chrome://favicon?

I'm using chrome://favicon/ in my Google Chrome extension to get the favicon for RSS feeds. What I do is get the base path of the linked page and append it to chrome://favicon/http://<domainpath>.
It's working really unreliably. A lot of the time it reports the standard "no favicon" icon, even when the page really has a favicon. There is almost no documentation regarding the chrome://favicon mechanism, so it's difficult to understand how it actually works. Is it just a cache of links that have been visited? Is it possible to detect whether there was an icon or not?
From some simple testing it's just a cache of favicons for pages you have visited. So if I subscribe to dribbble.com's RSS feed, it won't show a favicon in my extension. If I then visit chrome://favicon/http://dribbble.com/ it won't return the right icon. If I open dribbble.com in another tab, it shows its icon in the tab, and when I reload the chrome://favicon/http://dribbble.com/ tab, it returns the correct favicon. Then I open my extension's popup and it still shows the standard icon. But if I then restart Chrome it will get the correct icon everywhere.
Now that's just from some basic research, and it doesn't get me any closer to a solution. So my question is: is chrome://favicon/ the correct mechanism for what I'm doing? Is there any documentation for it? And what is its intended behavior?
I've seen this problem as well and it's really obnoxious.
From what I can tell, Chrome populates the chrome://favicon/ cache after you visit a URL (omitting the #hash part of the URL if any). It appears to usually populate this cache sometime after a page is completely loaded. If you try to access chrome://favicon/http://yoururl.com before the associated page is completely loaded you will often get back the default 'globe icon'. Subsequently refreshing the page you're displaying the icon(s) on will then fix them.
So, if you can, simply refreshing the page you're displaying the icons on just before showing it to the user may serve as a fix.
In my use case, I am actually opening tabs which I want to obtain the favicons from. So far the most reliable approach I have found to obtain them looks roughly like this:
chrome.webNavigation.onCompleted.addListener(onCompleted);

function onCompleted(details)
{
  if (details.frameId > 0)
  {
    // we don't care about activity occurring within a subframe of a tab
    return;
  }

  chrome.tabs.get(details.tabId, function(tab) {
    var url = tab.url ? tab.url.replace(/#.*$/, '') : ''; // drop #hash
    var favicon;
    var delay;
    if (tab.favIconUrl && tab.favIconUrl != ''
        && tab.favIconUrl.indexOf('chrome://favicon/') == -1) {
      // favicon appears to be a normal url
      favicon = tab.favIconUrl;
      delay = 0;
    }
    else {
      // couldn't obtain favicon as a normal url, try chrome://favicon/url
      favicon = 'chrome://favicon/' + url;
      delay = 100; // larger values will probably be more reliable
    }
    setTimeout(function() {
      // set favicon wherever it needs to be set here
      console.log('delay', delay, 'tabId', tab.id, 'favicon', favicon);
    }, delay);
  });
}
This approach returns the correct favicon about 95% of the time for new URLs, using delay=100. Increasing the delay if you can accept it will increase the reliability (I'm using 1500ms for my use case and it misses <1% of the time on new URLs; this reliability worsens when many tabs are being opened simultaneously). Obviously this is a pretty imprecise way of making it work but it is the best method I've figured out so far.
Another possible approach is to instead pull favicons from http://www.google.com/s2/favicons?domain=somedomain.com. I don't like this approach very much as it requires accessing the external network, relies on a service that has no guarantee of being up, and is itself somewhat unreliable; I have seen it inconsistently return the "globe" icon for a www.domain.com URL yet return the proper icon for just domain.com.
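If you do fall back to that service, building the URL is just string concatenation; for example (reusing the tab object from the listener above):
// let Google's favicon service resolve the icon (requires network access)
var domain = new URL(tab.url).hostname;
var fallbackIcon = 'http://www.google.com/s2/favicons?domain=' + domain;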
Hope this helps in some way.
As of Oct 2020, it appears chrome extensions using manifest version 3 are no longer able to access chrome://favicon/* urls. I haven't found the 'dedicated API' the message refers to.
Manifest v3 and higher extensions will not have access to the chrome://favicon host; instead, we'll provide a dedicated API permission and different URL. This results in being able to tighten our permissions around the chrome:-scheme.
In order to use chrome://favicon/some-site in an extension, manifest.json needs to be updated:
"permissions": ["chrome://favicon/"],
"content_security_policy": "img-src chrome://favicon;"
Tested on Version 63.0.3239.132 (Official Build) (64-bit).
The chrome://favicon URL is deprecated in favor of the new favicon API with manifest v3.
// manifest.json
{
  "permissions": ["favicon"]
}

// utils.js
function getFaviconUrl(url) {
  return `chrome-extension://${chrome.runtime.id}/_favicon/?pageUrl=${encodeURIComponent(url)}&size=32`;
}
Source: https://groups.google.com/a/chromium.org/g/chromium-extensions/c/qS1rVpQVl8o/m/qmg1M13wBAAJ
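Usage is then just a matter of assigning the returned URL to an image; for example (the img element id here is a placeholder, assumed to exist in the extension's popup HTML):
// e.g. in the popup script
document.getElementById('site-icon').src = getFaviconUrl('https://www.google.com');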
I inspected the website icon in the Chrome history page and found this simpler method.
You can get the favicon URL with:
favIconURL = "chrome://favicon/size/16#1x/" + tab.url;
Don't forget to add "permissions" and "content_security_policy" to the manifest (https://stackoverflow.com/a/48304708/9586876).
In the latest version of Chrome at the time of testing, Version 78.0.3904.87 (Official Build) (64-bit), adding just img-src chrome://favicon; as the content_security_policy will still show 2 warnings:
'content_security_policy': CSP directive 'script-src' must be specified (either explicitly, or implicitly via 'default-src') and must whitelist only secure resources.
And:
'content_security_policy': CSP directive 'object-src' must be specified (either explicitly, or implicitly via 'default-src') and must whitelist only secure resources.
To get rid of them use:
"permissions": ["chrome://favicon/"],
"content_security_policy": "script-src 'self'; object-src 'self'; img-src chrome://favicon;"
Now you can use chrome://favicon/http://example.com without getting any errors or warnings.