Facebook could not load the image in meta og:image when sharing a link on Facebook - apache

I tried the Facebook debug tool and got this result:
Provided og:image, https://xyz.com/storage/posts/images/1615524371abc.jpg could not be downloaded. This can happen due to several different reasons such as your server using unsupported content-encoding. The crawler accepts deflate and gzip content encodings.
I am using Laravel with an Apache server on CentOS.
I don't know how to deal with it. I hope to find the answer here. Thanks.
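One way to start debugging is to reproduce the crawler's request yourself and look at what the server answers. Below is a minimal sketch in Java, assuming the image URL from the error message; facebookexternalhit/1.1 is the user-agent string Facebook's crawler identifies itself with, and the crawler only accepts gzip and deflate content encodings.

import java.net.HttpURLConnection;
import java.net.URL;

public class OgImageCheck {
    public static void main(String[] args) throws Exception {
        // Placeholder: the og:image URL from the debugger error message.
        URL url = new URL("https://xyz.com/storage/posts/images/1615524371abc.jpg");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        // Advertise only the encodings the crawler says it accepts.
        conn.setRequestProperty("Accept-Encoding", "gzip, deflate");
        conn.setRequestProperty("User-Agent", "facebookexternalhit/1.1");
        System.out.println("Status: " + conn.getResponseCode());
        System.out.println("Content-Encoding: " + conn.getHeaderField("Content-Encoding"));
        System.out.println("Content-Type: " + conn.getHeaderField("Content-Type"));
        conn.disconnect();
    }
}

If the Content-Encoding printed here is something other than gzip, deflate, or empty, that matches the debugger's complaint and points at the Apache configuration rather than at Laravel.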

Related

How to prevent streams from being embedded on unauthorized websites and domains with Ant Media Server?

I can easily watch and embed any stream running on Ant Media Server with the help of the embed URL, but it seems that anyone else with the stream information can use that URL on their websites too.
I tried using a CORS filter, but it seemed a little complicated and didn't work.
How can I easily prevent my streams from being embedded on unauthorized websites/domains?
For workaround solutions in Ant Media Server (v2.4.3 or older versions), please check here.
In v2.5.0 and above, you can allow selected domains to embed the iframe code through a single property file.
To allow only specific domains to embed the iframe code, edit the /usr/local/antmedia/webapps/app-name/WEB-INF/red5-web.properties file and add the setting below.
settings.contentSecurityPolicyHeaderValue=frame-ancestors 'self' https://allow-domain-name;
If you would like to allow multiple domains, it should look like this.
settings.contentSecurityPolicyHeaderValue=frame-ancestors 'self' https://domain1 https://domain2;
After making the changes, restart the server with sudo service antmedia restart.
'self' is required to play the stream on the AMS dashboard panel itself. This way, streams cannot be embedded with the iframe code on any website other than the allowed domains.
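Assuming the property value is emitted verbatim as the standard Content-Security-Policy response header (which is what the setting name suggests), pages served by AMS would then carry a header like:

Content-Security-Policy: frame-ancestors 'self' https://domain1 https://domain2

Browsers that honor CSP refuse to render the stream inside an iframe on any origin that is not listed in frame-ancestors.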

Chrome developer tools header formats and view source

Below are two screenshots from the same version of Chrome. I would like to know when and why header names are sometimes displayed with different word capitalization, and when the view source / view parsed toggle is available. I've read the Developer Tools documentation, which says nothing about it, and I've tried loading pages in different ways. The only pattern I suspect is content compression; could that be it?
Update: nope, I've seen both versions on sites using gzip.
It seems that it happens only for resources served over HTTP/2 or SPDY (compare this image served over HTTP/2 with the same image served over HTTP). HTTP/2 transmits header field names in lowercase, and DevTools shows them as they arrive on the wire, which is why the capitalization differs from HTTP/1.x responses. There is an old Chrome bug that confirms HTTP/2/SPDY headers are handled differently. I reported this as a bug here.

Does the HTTP user-agent affect the page accessed in practice? Any examples?

I am using wget to download URLs that could be used on Linux/OSX/Windows. My question is whether server behavior can be affected by the user-agent string (the -U option). According to this MS link, a web server can use this information to provide content that is tailored for your specific browser. According to the Apache docs (access control section), you can use these directives to deny access to a particular browser (User-Agent). So I am wondering whether I need to download links with a different user-agent per OS, or whether one download would suffice.
Is this actually done? I tried a bunch of servers but did not really see different behavior across user agents.
There are sites that prevent scraping by returning an error response when they detect you're hitting their servers with an automation tool instead of a browser, and the user agent is one of the aspects of detecting that difference.
Other than that, not much useful can be said about this, as we don't know which sites you want to target, which HTTP server they run, and what code runs on top of that.
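For reference, the Apache access-control mechanism the question alludes to can be sketched roughly like this (an illustration only, assuming Apache 2.4 with mod_setenvif enabled; the pattern and path are placeholders):

SetEnvIf User-Agent "Wget" blocked_ua
<Directory "/var/www/html">
    <RequireAll>
        Require all granted
        Require not env blocked_ua
    </RequireAll>
</Directory>

With a rule like that in place, the same URL can succeed from a browser and fail from wget unless the user agent is overridden with -U.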

Can MIME types of Github Pages files be configured?

We have an MP4 video file in a GitHub Pages repository. The file is being served with a MIME type of application/octet-stream, which means Internet Explorer doesn't like it. It should be served as video/mp4. Is there a way to configure GitHub Pages to use the proper MIME type, or should we find an alternative hosting solution for the video? This topic isn't addressed in the help pages.
The topic is addressed here: https://help.github.com/articles/mime-types-on-github-pages/
GitHub Pages supports more than 750 MIME types across 1,000s of file extensions. The list of supported MIME types is generated from the mime-db project, which aggregates MIME types from the Apache and Nginx projects as well as the official IANA list of internet content types.
MIME types are additional headers that servers send to provide browsers with information about the types of files being requested and how to handle the file once downloaded.
To add or modify MIME types for use on GitHub Pages, see the mime-db Contributing instructions.
The mime-db project is currently reporting mp4 as video/mp4:
"video/mp4": {
"source": "apache",
"compressible": false,
"extensions": ["mp4","mp4v","mpg4"]
},
Source: https://github.com/jshttp/mime-db/blob/46a40f0524a01fb3075a7ecde92e8e04fc93d599/db.json#L6233
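If that mapping is in effect, the response for the MP4 should carry a header along these lines instead of application/octet-stream:

Content-Type: video/mp4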
If GitHub Pages is still reporting MP4 files as application/octet-stream, you should contact GitHub support.
The answer is no.
However, Ian's earlier answer is not strictly true. You can use GitHub to host web pages; there are plenty of developer blogs up there.
For video I use Amazon S3, as it costs next to nothing to store and serve video content, and you can set the MIME types as you require.
I store about 60 GB of video and served 8 GB last month for a cost of 9 USD, so it's worth it.
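For what it's worth, one way to set the type at upload time is with the AWS CLI (bucket and file names below are placeholders), and S3 then serves the object with that Content-Type header:

aws s3 cp intro.mp4 s3://your-bucket/videos/intro.mp4 --content-type video/mp4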
You're not supposed to use GitHub as a web server, because it is a code-hosting site. They manage your code repositories and are only concerned with showing code.

Gzip compression over SSL with Safari?

I ran into a really weird issue this morning when trying to access a web app I'm building using an iPad (Safari Mobile/Webkit). On the front end, the web app relies heavily on XHR/Ajax requests. On the back end, the server is configured to gzip compress responses if the "Accept-Encoding" includes "gzip".
Everything was working great until I flipped the server to SSL. Then I started getting intermittent "CFURLErrorDomain:303" errors in Safari.
After a quick search I found this link:
http://beyondrelational.com/modules/2/blogs/45/posts/12034/failed-to-load-resource-safari-issue.aspx
According to the link, Safari requires a Content-Length header when making XHR (Ajax) requests over SSL/HTTPS. In my case, the server gzips content directly to the output stream, so I have no way of knowing what the final content length will be.
As a workaround, I added the following logic on the server:
if (request.isEncrypted()) {
    // Over SSL, skip gzip for WebKit browsers so the response isn't streamed without a Content-Length.
    gzip = !request.getHeader("User-Agent").toLowerCase().contains("webkit");
}
In other words, if the connection is encrypted via SSL and the browser is some WebKit derivative (e.g. Safari, Chrome, etc.), then don't compress the output. This seems to work, but it really slows things down.
So my question is this:
Does Safari support gzip compressed responses over SSL or am I barking up the wrong tree?
It turns out the error I was seeing was a bug in the server and had nothing to do with Safari. The server was relying on chunked transfer encoding when compressing large byte arrays. Individual "chunks" were broken up into pieces (header, body, trailer) and sent to the client in separate messages. The SSL client (Safari) was expecting one contiguous "chunk", so it didn't know what to do when it saw an incomplete chunk. The server has been patched and the issue is now resolved.
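For context, each chunk in HTTP/1.1 chunked transfer encoding is framed as a hexadecimal size line, the chunk data, and a trailing CRLF, with a zero-sized chunk terminating the body. Schematically (\r\n marks CRLF):

4\r\n
Wiki\r\n
5\r\n
pedia\r\n
0\r\n
\r\n

Per the answer above, the server was emitting those pieces as separate, incomplete messages, which is what Safari rejected.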