HTTP/2 protocol not recognised - SSL

The latest update of PSI (PageSpeed Insights) is reporting that all our site pages are using HTTP/1.1 when in fact they are running under HTTP/2. I confirmed this with the domain host and two separate tests.

Yup, I can confirm you're definitely using HTTP/2.
This looks to be a bug with PageSpeed Insights, and it looks like you're not the only one experiencing it.
If you run Lighthouse locally (either through Chrome or via the command line), or through WebPageTest, it doesn't complain about missing HTTP/2, but PageSpeed Insights and web.dev/measure do.
I suggest you follow the GitHub issue I linked above, and maybe add a comment with your own site.
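If you want a quick way to double-check the negotiated protocol yourself, something like the sketch below works; it uses the httpx library (my choice, not something mentioned in the thread) and a placeholder URL, so adjust both to your setup.

# pip install "httpx[http2]"
import httpx

URL = "https://www.example.com/"  # placeholder; use one of your own pages

# http2=True lets the client offer HTTP/2 via ALPN during the TLS handshake.
with httpx.Client(http2=True) as client:
    response = client.get(URL)
    # Prints "HTTP/2" if the server negotiated it, otherwise "HTTP/1.1".
    print(response.http_version)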

Cloudflare - Custom URL Purge not working on different accounts

What is the issue with Cloudflare custom URL purge caching not working across different accounts? Basically, I have the account and this functionality works for me, but it does not work for my colleague (she lives in the USA, I live in Europe).
Could the issue be with the account settings? She either gets an error, or sometimes the cache simply does not purge.
I've faced issues in the past where features in Cloudflare were not working due to plugins or browser configuration. First, you should confirm whether the error is actually coming from Cloudflare or from the browser.
Ask her to try a different browser and also a different device, to confirm whether the issue is tied to the account or to her location/setup.
Inspect the page (e.g. the browser's network tab) while performing the purge to find out what's happening behind the scenes; you can also take the dashboard out of the equation entirely by calling the API directly, as sketched below.
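To rule out the browser and any plugins, she could try the purge straight against the Cloudflare API. A rough sketch in Python, assuming an API token with cache-purge permission; the zone ID, token and URL here are placeholders, not real values.

import requests

ZONE_ID = "your_zone_id"        # placeholder
API_TOKEN = "your_api_token"    # placeholder; needs cache purge permission

response = requests.post(
    f"https://api.cloudflare.com/client/v4/zones/{ZONE_ID}/purge_cache",
    headers={"Authorization": f"Bearer {API_TOKEN}"},
    json={"files": ["https://www.example.com/path/to/page"]},
)
# The "success" flag and any "errors" entries show whether Cloudflare itself
# rejected the request, independent of browser or plugin issues.
print(response.status_code, response.json())

If the API call succeeds for her while the dashboard fails, the problem is on the client side rather than with the account.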

Recent https (SSL) addition, getting site cannot provide secure connection error page

Recently our website went from http to https. I, and others, randomly get a "This site can't provide a secure connection" page. Upon refresh, the page loads just fine. Why are we getting this page randomly?
FYI... We have http to https redirects in place.
Impossible to say without more details, but some things I can suggest are:
You have multiple servers and some are configured correctly and some incorrectly.
You are not including the full certificate chain. Sometimes your browser has the missing intermediate cached and sometimes not (see this answer for more info: https://serverfault.com/questions/826100/ca-certificate-trouble-with-squid-on-centos7/826321#826321). A quick way to check this is sketched just after this list.
A bug in the browser/software. I had this issue in Chrome when using Apache with HTTP/2. I never did figure it out, but a Chrome update fixed it.
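On the chain point, a plain TLS handshake with strict verification reproduces the problem reliably, because unlike a browser it never falls back on a cached or fetched intermediate. A small sketch (example.com is a placeholder for your own host):

import socket
import ssl

HOST = "example.com"  # placeholder; use your own hostname

context = ssl.create_default_context()  # verifies the full chain against system roots

try:
    with socket.create_connection((HOST, 443), timeout=5) as sock:
        with context.wrap_socket(sock, server_hostname=HOST) as tls:
            print("Handshake OK:", tls.version())
except ssl.SSLCertVerificationError as exc:
    # An incomplete chain typically surfaces here as
    # "unable to get local issuer certificate".
    print("Verification failed:", exc)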
Run https://www.ssllabs.com/ssltest/ against your site to confirm there isn't a problem with your https setup. If that doesn't turn anything up, or you don't understand the results, then update your question with more details (what server and browser you are using and which versions, whether you have any proxy in place between your browser and the site, and ideally the website name) if you want people to help you.
Also be aware this is a programming site and some people don't like these questions here and will suggest other Stack Exchange sites, but I honestly don't know where this question is best placed: serverfault.com maybe, but that is for professional sysadmins only; Unix & Linux seems a little generic (I'm not even sure you are using a Linux web server!); Webmasters is more for content and SEO questions; Information Security is more for theoretical SSL/TLS questions...

Scrapy on Ubuntu web server getting 417 error

I have been developing a crawling script for a number of news websites and using Scrapy to handle the logic.
When I run my script on an Ubuntu web server (Digital Ocean, if that helps), a lot of the websites that return 200 on my local machine return 417 instead.
I was wondering how I should fix this, if it is a problem at all. I'm not entirely sure whether it is affecting the final output, but it seems like it is.
Some of my own research has turned up:
http://www.checkupdown.com/status/E417.html - I've tried adding an Expect header to my requests, which hasn't worked.
I've heard that it might be a problem with HTTP 1.1 vs 1.0? EDIT: Nope. Scrapy's HTTPDownloaderHandler automatically chooses 1.1 if it is available
417 Expectation Failed is the error a web server returns when it cannot (or will not) meet the expectation in the request's Expect header, most commonly Expect: 100-continue.
This looks like a Scrapy bug or, more likely, a misconfiguration.
It seems your public IP address either was already banned, or got banned while you were scraping, by the web server of the site you want to scrape. In the first case you can reboot your instance to get a new public IP (at least this works on Amazon). For the second scenario, here are some tips from the official documentation to avoid it:
rotate your user agent from a pool of well-known browser user agents (google around to get a list of them)
disable cookies (see COOKIES_ENABLED) as some sites may use cookies to spot bot behaviour
use download delays (2 or higher). See the DOWNLOAD_DELAY setting.
if possible, use Google cache to fetch pages, instead of hitting the sites directly
use a pool of rotating IPs. For example, the free Tor project or paid services like ProxyMesh
use a highly distributed downloader that circumvents bans internally, so you can just focus on parsing clean pages. One example of such downloaders is Crawlera
Additionally, you can reduce the concurrent requests settings in your spider; that worked for me once. A sketch of the relevant settings follows.
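As a concrete starting point, the throttling and identification tips above map onto a handful of standard Scrapy settings. A sketch of a settings.py fragment; the user agent string and the specific numbers are just illustrative choices, not values from the documentation.

# settings.py (fragment)

# Identify as a common browser instead of the default Scrapy user agent,
# which some sites block outright.
USER_AGENT = (
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64) "
    "AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0 Safari/537.36"
)

# Some sites use cookies to spot bot behaviour.
COOKIES_ENABLED = False

# Slow down and reduce parallelism so the target server is not hammered.
DOWNLOAD_DELAY = 2
CONCURRENT_REQUESTS = 8
CONCURRENT_REQUESTS_PER_DOMAIN = 2

# Spread the delay out a little instead of using a fixed interval.
RANDOMIZE_DOWNLOAD_DELAY = True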

mod_spdy not running on CentOS

I'm setting up Apache on CentOS the way I have done in the past, but for some reason mod_spdy is not running. I'm following the instructions here:
https://developers.google.com/speed/spdy/mod_spdy/
When I run rpm -U mod-spdy-beta_current_x86_64.rpm I get this message:
warning: mod-spdy-beta_current_x86_64.rpm: Header V4 DSA/SHA1 Signature, key ID 7fac5991: NOKEY
package mod-spdy-beta-0.9.4.3-420.x86_64 is already installed
If I open chrome://net-internals/#spdy and my site in another tab, it doesn't show my site. If I look in the network panel, I don't see the x-mod-spdy header.
Update: If I use Firefox with Firebug, I see the x-mod-spdy header. I don't see my site in Chrome's SPDY sessions, but I do see other sites there.
What could I be doing wrong?
OK, it seems the issue is that Chrome 40.x dropped support for SPDY/3 and only supports SPDY/3.1, but the mod_spdy module for Apache only supports SPDY/3, so there is basically no SPDY for Chrome users if you use Apache as a web server.
mod_spdy is currently in a bad state where neither Google nor Apache is maintaining it after Google donated it to the ASF. Google recently stated that they will drop SPDY support from Chrome in early 2016, but what they forgot to say is that they had already started dropping older versions of SPDY (including SPDY/3) (I like these partially true statements, by the way). So basically, if you are on Apache, you can't provide SPDY to your Chrome users short of implementing SPDY/3.1 yourself.
So, how was that "do no evil"? :-)
See details: https://groups.google.com/forum/#!topic/mod-spdy-discuss/FPEj0zG5I0Y
and https://code.google.com/p/mod-spdy/issues/detail?id=100&colspec=ID%20Type%20Status%20Priority%20Owner%20Summary%20Stars
One option you might consider is switching to Nginx and using SPDY/3.1 there; a minimal config sketch follows.
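For reference, in the Nginx builds of that era that still shipped the SPDY module, SPDY was enabled directly on the listen directive (later versions replaced spdy with http2). A sketch with placeholder hostname and certificate paths:

server {
    listen 443 ssl spdy;          # "spdy" was later replaced by "http2"
    server_name example.com;      # placeholder

    ssl_certificate     /etc/nginx/ssl/example.com.crt;   # placeholder paths
    ssl_certificate_key /etc/nginx/ssl/example.com.key;

    root /var/www/html;
}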

Recording scripts - "page not found" because of single protocol?

I have found a strange issue which I do not completely understand. When I run LoadRunner with just a single protocol, the browser (when recording starts) launches but says "page not found" (as if the proxy was not set).
How come? The protocols specify what traffic will be captured, and I assumed it simply does not record the traffic that isn't specified. But why can the browser not find the page with a single protocol, yet it can with multiple?
I've found that the single-protocol mode (I assume Web here) is somewhat erratic and does not work all the time. The workaround is to use the multiple-protocol mode, but select only Web (HTTP/HTML). This works much better.
The actual reasons why this is the case are unknown, but at least give it a try!
As for other issues:
Check that your proxy settings are correct when you invoke IE for recording. Your issue sounds a little like a proxy issue, but please post more details if none of the above works.
Over 90% of recording issues can be traced to environment items. Specifically: do you have the right match-up between your version of LoadRunner and the version/manufacturer of your browser, are you signed in with the proper credentials, and do you have any conflicting software packages loaded, such as antivirus, which could be impacting the recording mechanism?
Where to start?
Make sure you are signed in with administrative credentials
Disable any antivirus running locally
Validate your browser manufacturer and version with the requirements for your version of LoadRunner