OpenShift and Jekyll Cartridge Force SSL

I'm using an application on OpenShift created from the Jekyll cartridge, but running Octopress on top (which amounts to much the same thing).
I would like to automatically redirect all HTTP requests to HTTPS, so that the site can only be viewed over HTTPS.
I don't see a way to do this with Jekyll served on OpenShift via the cartridge. I can do it locally by modifying my config.ru file, but that has no effect on OpenShift. Is there a way to force this on my application?

If your app is served by Apache, you can try putting an .htaccess file at your document root, containing:
RewriteEngine on
RewriteCond %{HTTP:X-Forwarded-Proto} !https
RewriteRule .* https://%{HTTP_HOST}%{REQUEST_URI} [R,L]
(from the OpenShift knowledge base)
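For reference, here is a commented version of that snippet. OpenShift terminates SSL at its front-end proxy, so the gear itself only ever sees plain HTTP and has to inspect the X-Forwarded-Proto header that the proxy adds:

```apache
RewriteEngine On
# The front-end proxy terminates SSL and sets this header, so test it
# instead of %{HTTPS}, which is always "off" behind the proxy.
RewriteCond %{HTTP:X-Forwarded-Proto} !https
# Redirect to the same host and path over HTTPS (302 by default;
# add R=301 once you have verified it works).
RewriteRule .* https://%{HTTP_HOST}%{REQUEST_URI} [R,L]
```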

I managed to solve the issue. I'm not too happy with it as a "solution", but it works, so I'm posting it.
The Jekyll cartridge appears to use WEBrick as the web server, and I couldn't get enough control over it to enforce SSL.
Basically, I made a new application based on the "Ruby 1.9" cartridge instead of the Jekyll cartridge, which gave me an Apache-hosted application. I then put the .htaccess file suggested by David earlier in the source (!) folder of my Octopress blog:
RewriteEngine on
RewriteCond %{HTTP:X-Forwarded-Proto} !https
RewriteRule .* https://%{HTTP_HOST}%{REQUEST_URI} [R,L]
That did the trick. I don't think it's an ideal solution though, so all better solutions are welcome.
Here's everything I did, in detail, to move from the Jekyll cartridge to the Ruby 1.9 cartridge:
Make a new OpenShift application using the Ruby 1.9 cartridge
Clone the new OpenShift application's git repo to a local folder
Copy all the files from the original repository into this new repository
Re-generate the site using Jekyll (rake generate)
Add and commit everything
Push to OpenShift (this should result in the blog working as before)
Add GitHub as a remote
Merge with the GitHub repo (git pull) and deal with some minor conflicts
Now I can work in my repo and push both to OpenShift and to GitHub.
To force SSL:
Create an .htaccess file in the "source" folder of my Octopress blog and re-generate.
Note: I now have to make sure to run "rake generate" before pushing to OpenShift (although I guess I could automate that on OpenShift after an update).
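The steps above can be sketched as a shell session; the repository URLs and remote names here are placeholders for your own OpenShift and GitHub remotes:

```shell
# 1. Clone the new Ruby 1.9 app's repo (URL is a placeholder)
git clone ssh://APP_ID@myapp-mydomain.rhcloud.com/~/git/myapp.git blog
cd blog

# 2-3. Copy the old Octopress sources in, then regenerate the site
cp -R ../old-blog/* .
rake generate

# 4-5. Commit everything and push to OpenShift
git add -A
git commit -m "Move Octopress blog to Ruby 1.9 cartridge"
git push origin master

# 6-7. Add GitHub as a second remote and merge
git remote add github git@github.com:USER/REPO.git
git pull github master   # resolve any conflicts, then push back
git push github master
```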

Related

How to apply Apache RewriteRule only on my NAS?

I am developing a website. I develop on my NAS and push to the official server once everything is OK.
On my NAS, the website lives in a folder called "Multi-Plateform" (there are some other folders on the NAS as well).
To make it work, I found a rule for the .htaccess file in my website folder:
RewriteRule !^Multi-Plateform/ Multi-Plateform%{REQUEST_URI}
But this RewriteRule breaks my site on the official server.
How can I apply the rule only on my NAS?
I tried "Directory", "Location", and "LocationMatch", but none of them solved it for the official server.
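One way to limit a rule to a single machine is to guard it with a RewriteCond on the Host header, so it only fires when the site is reached via the NAS's address. A sketch, assuming the NAS answers at nas.local (replace with your actual hostname or IP):

```apache
RewriteEngine On
# Only apply the rewrite when the request was addressed to the NAS,
# so the same .htaccess is harmless on the official server.
RewriteCond %{HTTP_HOST} ^nas\.local$ [NC]
RewriteRule !^Multi-Plateform/ Multi-Plateform%{REQUEST_URI}
```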

Can a NuxtJs "universal" application be served from an apache server or does it need to be served via node?

I am wondering if Apache can be set up to run a NuxtJS "universal" app. From the documentation, it seems the "SPA" version of the app is built into a dist directory that I know I can serve from Apache without any problems. It also looks like the "static" build can be served from Apache, but the Nuxt documentation for deploying a universal app says:
• Upload the contents of your application to your server of choice.
• Run nuxt build to build your application.
• Run nuxt start to start your application and start accepting requests.
I do not believe that Apache has any way of running nuxt build or nuxt start, which are node commands.
In case anyone's struggling with this:
You will need NPM (Node.js) installed on your server, which also means you'll need SSH access to it.
Then upload the whole project to the server. Say you want to run the project in the development environment: you would run npm run dev, and since the default port in development is 3000, your .htaccess file should be as follows:
RewriteEngine On
DirectoryIndex disabled
RewriteRule ^$ http://127.0.0.1:3000/ [P,L]
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule ^(.*)$ http://127.0.0.1:3000/$1 [P,L]
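Note that the [P] flag hands the request to mod_proxy, so the proxy modules must be loaded in the server's httpd.conf (module paths vary by distribution):

```apache
LoadModule proxy_module modules/mod_proxy.so
LoadModule proxy_http_module modules/mod_proxy_http.so
```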

Firefox/chrome shows apache request as blocked / stalled

I'm using Apache 2.4.33 running on macOS 10.13.6 as a local web development server.
I ran some updates last week, mainly to update PHP from 7.1 to 7.2. This included a few tweaks to my httpd.conf, namely to load a different PHP module.
I also recreated the server's self-signed certificates as per the instructions here: https://gist.github.com/jonathantneal/774e4b0b3d4d739cbc53
I've started getting errors on some web pages where some files included in the page are blank. These are both .js and .css files.
If I refresh the page, it is always the same 2 or 3 files.
As this is a dev environment, the files are sym linked from the web/assets directory through to the source directories.
I can't see any permissions problems. Other files in the same directory, with same ownership and permissions don't have the same error.
Looking at the network panel in FF, it lists the file, no error status, not even 200 - completely blank. I can't see the file in the Apache access_log either. Nothing in the error log.
If I look at the "Timings" sub-tab, it has a status of blocked.
I'm not running the MacOs firewall.
My htaccess is very basic:
Options +FollowSymLinks
IndexIgnore */*
# We need mod_rewrite for enablePrettyUrl
RewriteEngine on
RewriteBase /
# If the directory or file exists, use the request directly
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
# Otherwise forward the request to index.php
RewriteRule . index.php
It's not CORS/cross domain ...
In FF the console gives a "Loading failed" error; in Chrome it gives "net::ERR_EMPTY_RESPONSE".
So, blocked by what?
It seems as though I had two versions of PHP installed. Apache thought I was using a Homebrew-installed 7.0, while the command line thought I was using the native 7.1.
I guess these were somehow conflicting.
I ran brew uninstall php@7.0, then brew cleanup && brew prune, rebooted, and everything worked OK.

How to stop using SSL on Tuleap 9.1?

On CentOS 6 with Tuleap 9.1, after installation I am only able to access the main page over HTTP; the rest is not available because every link points to HTTPS. Is there a way to deactivate SSL completely?
I installed everything and can now access the first presentation page, but only over HTTP, not HTTPS. The problem is that all the other links (create account, login, etc.) redirect to HTTPS. I have already tried to deactivate HTTPS without success.
Can anyone help me disable HTTPS? And can stopping the use of SSL entirely cause issues when using this tool?
You could force your website to only load on HTTP through your .htaccess file:
RewriteEngine On
RewriteCond %{HTTPS} =on
RewriteRule ^(.*)$ http://www.%{HTTP_HOST}%{REQUEST_URI} [R=302,L,NE]
I've included www in the rewrite; if you don't want that, you can remove it. I've also set R=302 so that it is a temporary redirect. Change this to R=301 once you know it is working, as that makes it permanent.
Make sure you clear your cache before you test this.

Mercurial: "remote: ssl required" even when pushing to HTTPS repository

I have Apache and hgwebdir.cgi running fine via HTTPS (with a self-signed certificate), I can view the repositories via a browser and clone it locally. I don't know if it'd have any effect, but I'm rewriting the URLs to make them prettier:
$ cat .htaccess
Options +ExecCGI
RewriteEngine On
RewriteBase /public
RewriteRule ^$ hgwebdir.cgi [L]
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule (.*) hgwebdir.cgi/$1 [QSA,L]
However, when I try to run hg push to send the changes back, I get this:
$ hg push
pushing to https://<repository>
searching for changes
http authorization required
realm: Mercurial
user: virtualwolf
password:
remote: ssl required
Apache is set to redirect all requests that are on HTTP to HTTPS. The remote server is running CentOS, with Apache 2.2.3 and Mercurial 1.3.1 (both installed via yum).
I've done a bunch of searching on this problem, the closest I've come to an answer is this but it's referring to NGINX not Apache.
Thanks!
You can resolve this problem by running hg serve without push SSL, like this:
hg serve --config web.push_ssl=No --config "web.allow_push=*"
So it turns out the problem was the same as described here. It wasn't anything directly to do with Mercurial, but was oddness on Apache's end.
I had to copy the SSLEngine On and associated SSLProtocol, SSLCipherSuite, SSLCertificateFile, and SSLCertificateKeyFile directives from my separate "Enable SSL" Apache configuration file to my Mercurial virtual host file, even though everything else was working quite happily via HTTPS.
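In other words, the Mercurial virtual host needed its own SSL block rather than inheriting it; something along these lines (the hostname, cipher settings, and certificate paths are placeholders, copy your own values from the working "Enable SSL" config):

```apache
<VirtualHost *:443>
    ServerName hg.example.com                              # placeholder
    SSLEngine On
    SSLProtocol all -SSLv2
    SSLCipherSuite HIGH:MEDIUM
    SSLCertificateFile /etc/pki/tls/certs/server.crt       # placeholder path
    SSLCertificateKeyFile /etc/pki/tls/private/server.key  # placeholder path
</VirtualHost>
```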
Add these lines to the hgrc of the central repository you push to:
[web]
push_ssl=False
allow_push=*
Needless to say, this is rather unsafe, but if you’re on a nice protected LAN at work and there’s a good firewall and you trust everybody on your LAN, this is reasonably OK.