Is it safe to download RPMs with HTTP (not HTTPS)?

I want to install a CentOS package from a mirror on the internet.
e.g. http://mirror.centos.org/centos-7/7/os/x86_64/Packages/unixODBC-2.3.1-14.el7.x86_64.rpm
That URL is http:// not https://, so there's no TLS encryption.
Downloading binaries from the internet without encryption in transit and authenticity checking seems like a bad idea from a security perspective.
If I modify the URL to add an s, making it https://, the download doesn't work: that server does not serve anything on port 443 (HTTPS).
So it seems my only choice is to download the file without TLS.
Unlike some Linux ISO downloads, there are no .md5 or .asc files next to the main file, so I cannot manually check the file hash against a signature.
How does security work for RPMs? If I have no encryption or certificate checking when I download, is there some other chain of trust? e.g., do RPMs contain a public key inside the file (e.g., GPG/PGP) that yum compares to one it already trusts? Or would I be installing a completely untrustworthy file?

RPM packages can be signed with GPG signatures. All major RPM-based distributions (e.g., CentOS, Fedora, RHEL) sign their packages.
If an RPM is signed, dnf/yum will verify the signature when you try to install the package. If it's signed by a key they don't already trust, they will prompt you about trusting it. If the signature doesn't verify, they will abort the install.
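You can also verify a package by hand with rpm; a minimal sketch on CentOS 7, using the stock key location shipped with the distro:
# Import the distribution's signing key (ships with the install media and base repo)
rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7
# Show which key the downloaded package was signed with
rpm -qp --qf '%{SIGPGP:pgpsig}\n' unixODBC-2.3.1-14.el7.x86_64.rpm
# Verify the signature and digests; "OK" means intact and signed by a trusted key
rpm -K unixODBC-2.3.1-14.el7.x86_64.rpm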
HTTPS provides a bunch of guarantees, but they really boil down to this: the data in transit between the mirror and your computer is secure (confidential and untampered-with). It doesn't prevent someone malicious from modifying the package on the mirror itself, and it doesn't prevent someone from setting up a fake mirror full of malicious packages and making them available to you.
GPG, on the other hand, lets you verify that the packages you are installing were published by an authorized system (e.g., the official CentOS build system) and haven't been modified since.
You can use GPG + HTTPS together and get the advantages of both.
Using just GPG + HTTP, though, means someone spying on your connection can see what you are downloading. But thanks to GPG, if they send you malicious data, you will detect it and abort the installation.
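Note that yum's verification is controlled per repository; on a stock CentOS 7 install you can confirm it is enabled:
# gpgcheck=1 makes yum verify every package against the key named in gpgkey
grep -E 'gpgcheck|gpgkey' /etc/yum.repos.d/CentOS-Base.repo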

Related

Apache, Ubuntu, SSL, alias and virtual

First, let me state that I am a Linux noob; I am learning as I go here. Here is my situation. I have an Ubuntu 16 LTS server with Apache. The software we just installed comes with "samples". These samples are stored in the same directory structure as the program. The instructions have you add an alias and a directory to the apache2 config file, like so:
Alias /pccis_sample /usr/share/prizm/Samples/php
This actually worked :)
However, now we want to make sure this site is SSL. I did manage to use openssl to import the certificates we wanted to use into Ubuntu. (I am open to using self-signed; at this point it's non-prod, so I don't care.)
In trying to find out the right way to tell Apache I want to use SSL for this directory, and which cert I want to use, things went wonky on me. I did manage to get it to use SSL, but with a browser warning, as one would expect with a self-signed cert. I had thought that I could just install the cert on our devs' machines and that warning would go away. But no dice. Now, in trying to fix all that, I just done broke it. SOOOO, what I am looking for is not necessarily a spoon-fed answer, but rather any good tools, scripts, articles, tips, tricks, or gotchas that I can use to get this sucker done.
Thanks
You need to import your certificate(s) into the browser's trusted store - for each browser, on each machine you test with. "What a pain!" you probably think. You are right.
Make it less painful - go through it once. Create your own Certificate Authority, and add that to your browsers' trusted certificates/issuers listing. This way you modify each one once, but then any certificate signed by your CA certificate's key will be considered valid by those clients.
https://deliciousbrains.com/ssl-certificate-authority-for-local-https-development/
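For reference, a minimal sketch of that process with openssl (the hostname and file names are placeholders; the linked article covers the details):
# Create the CA key and a self-signed CA certificate (this is what you import into browsers)
openssl genrsa -out myCA.key 2048
openssl req -x509 -new -key myCA.key -sha256 -days 825 -subj "/CN=My Dev CA" -out myCA.pem
# Create the server key and a signing request for the hostname you browse to
openssl genrsa -out dev.example.test.key 2048
openssl req -new -key dev.example.test.key -subj "/CN=dev.example.test" -out dev.example.test.csr
# Sign the request with the CA; the SAN extension is what modern browsers actually check
printf "subjectAltName=DNS:dev.example.test" > san.ext
openssl x509 -req -in dev.example.test.csr -CA myCA.pem -CAkey myCA.key -CAcreateserial -days 825 -sha256 -extfile san.ext -out dev.example.test.crt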
Note that when configuring Apache or other services, they will still need an issued/signed certificate that corresponds correctly to the hostname that is being used to address them.
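On Ubuntu, wiring the issued certificate into Apache is roughly the following (assuming the stock default-ssl site; point its SSLCertificateFile/SSLCertificateKeyFile directives at your cert and key):
sudo a2enmod ssl
sudo a2ensite default-ssl
sudo apachectl configtest && sudo systemctl reload apache2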
Words of warning - consider these to be big, red, bold, and blinking.
DO NOT take the lazy way and do a wildcard, etc. DO keep your key and passphrase under strict control. Remember - your clients will implicitly trust any certificate signed by this key, so it is possible for someone to use the key and create certificates for other domains and effectively MITM the clients.

Where does Apache Foundation keep checksums for projects?

Most tools I download have a SHASUM or MD5 file listed somewhere so I can checksum the files once downloaded.
However, I downloaded Zookeeper recently and was having a heck of a time finding the checksums for it. I could create them myself, but would also like to verify against a public checksum.
Might they also sign releases? How would I go about verifying with GPG?
No, the Apache Foundation does not maintain a centralized checksum repository of binary distributions for all Apache projects, nor does it mandate one. The same goes for a signing certificate. Both of these are project-level concerns and must be requested per project through the project's own issue tracker.
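Many projects do publish a KEYS file plus .asc detached signatures next to their release artifacts; when they do, verification looks roughly like this (the URLs and {version} placeholder are illustrative, not exact paths):
# Import the release managers' public keys
curl -O https://downloads.apache.org/zookeeper/KEYS
gpg --import KEYS
# Fetch the artifact and its detached signature, then verify
curl -O https://downloads.apache.org/zookeeper/zookeeper-{version}/apache-zookeeper-{version}-bin.tar.gz
curl -O https://downloads.apache.org/zookeeper/zookeeper-{version}/apache-zookeeper-{version}-bin.tar.gz.asc
gpg --verify apache-zookeeper-{version}-bin.tar.gz.asc apache-zookeeper-{version}-bin.tar.gz
# If a .sha512 file is published alongside, check that too
sha512sum -c apache-zookeeper-{version}-bin.tar.gz.sha512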

Setting up test environment for SSL torrents using libtorrent and open tracker

So I am trying to set up a test environment for BitTorrent file transfers with SSL protection, and I am having some trouble and need some guidance.
My setup:
PC1: Running opentracker and is acting as the Certificate Authority.
PC2: Running the libtorrent example client compiled with support for SSL encryption. It also acts as the publisher of the torrent file.
PC3: Same as PC2 but is not publishing any file.
When I use this setup without SSL torrents, everything works as expected. The file gets transferred, and if you go to the tracker's stats page (trackerip/stats), it shows that 1 torrent is being served and there are 2 peers connected.
However, when I use my SSL torrent, this is not happening. First of all, no file is being transferred. Second of all, the tracker doesn't seem to recognize the torrent file, i.e., the tracker tells me it is currently not serving any torrents.
What could be wrong with my setup? And how do I start troubleshooting this?
Could it be that the tracker has to support HTTPS? Maybe I can't use opentracker. Does anyone have experience with this?
It is very likely that something is missing in the torrent file, but shouldn't I be getting errors in that case?
I am using the libtorrent example project "make_torrent" to make my ssl torrent and when I inspect it, it contains my certificate.
EDIT:
So a big part of my problem, I assume, is that I have zero experience working with SSL, and this is probably where I fail. I have read through both http://www.libtorrent.org/manual-ref.html#ssl-torrents and http://blog.libtorrent.org/2012/01/bittorrent-over-ssl/
and I am not sure I fully understand them.
I will try to explain how I have interpreted it, and you guys can explain why I am wrong :).
My interpretation:
The publisher of the torrent includes an X.509 certificate signed with the publisher's private key.
When a peer receives this torrent, it uses the publisher's public key (installed at an earlier time) to verify its authenticity.
If everything is OK, the peer generates a Certificate Signing Request, signs it with the peer's private key, and sends it to the publisher, who signs it and returns a certificate. This is then the certificate that the peer will present to other peers.
Is this correct?
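To make the interpretation concrete, here is roughly the openssl flow I have in mind (file names are placeholders; per the libtorrent docs, the torrent embeds a root certificate and every peer must present a certificate signed by it):
# Publisher: create the torrent's root key and self-signed root certificate
# (the certificate is what gets embedded in the .torrent)
openssl genrsa -out torrent_root.key 2048
openssl req -x509 -new -key torrent_root.key -subj "/CN=my-ssl-torrent" -out torrent_root.crt
# Peer: create a key and a CSR, and send the CSR to the publisher
openssl genrsa -out peer.key 2048
openssl req -new -key peer.key -subj "/CN=peer1" -out peer.csr
# Publisher: sign the peer's CSR with the root key; peer1.crt goes back to the peer,
# which presents it (with peer.key) when connecting to other peers in this torrent
openssl x509 -req -in peer.csr -CA torrent_root.crt -CAkey torrent_root.key -CAcreateserial -out peer1.crt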

How to use SSL with HttpListener with an mkbundle'd Mono app

I have a .NET application built with Mono, that I've bundled into a native (Linux) executable using mkbundle. This is so that end users don't need to mess around and install Mono themselves.
The application uses ServiceStack, which under the hood uses HttpListener. I need the web services to be exposed over an SSL-enabled HTTP endpoint.
Normally, you would run something like httpcfg -add -port 1234 -p12 MyCert.pfx -pwd "MyPass" during configuration (all this really does is copy the certificate to a specific path), and HttpListener would automatically bind the certificate to the port.
So HttpListener loads certificates from a particular path at runtime.
Is that path hard-coded? Or is there some way I can tell it to use a certificate from another location, since the end user will not have Mono installed?
Yes, the path where HttpListener expects to find certificates is predefined, and cannot be specified by the user, programmatically or through a config file. The Mono EndPointListener class will look in the path:
~/.config/.mono/httplistener/
HttpListener code:
// The store is derived from the per-user ApplicationData folder,
// which on Linux resolves to ~/.config
string dirname = Environment.GetFolderPath (Environment.SpecialFolder.ApplicationData);
string path = Path.Combine (dirname, ".mono");
path = Path.Combine (path, "httplistener");
As you have noted, this is the same path that httpcfg copies certificates to.
Even though you are using mkbundle, this is still where HttpListener will expect to read the certificate from, regardless of whether the Mono runtime is installed.
In your application startup, you should:
Check for the existence of the directories, and create as required
Write your certificate and key to that path from an embedded resource in your application. PouPou's answer here shows the method used by HttpCfg.exe.
This eliminates the requirement to run httpcfg; you will effectively be building that functionality straight into your application.
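As a sketch, what the startup code needs to reproduce is essentially this (the <port>.cer/<port>.pvk naming follows what httpcfg creates; treat the exact names as an assumption to verify against your Mono version):
# httpcfg stores the certificate and private key per port number
mkdir -p ~/.config/.mono/httplistener
cp MyCert.cer ~/.config/.mono/httplistener/1234.cer
cp MyKey.pvk ~/.config/.mono/httplistener/1234.pvk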
Does Mono perform any validation of the certificates it loads from there for HttpListener? i.e., will it expect to find the issuer's certificate in the certificate store?
I don't know for sure whether Mono checks for a valid corresponding issuer's certificate in the certificate store at the point of creating the listener, or upon each connection request. However, you can add a CA cert to the certificate store yourself, or import all the standard Mozilla root certificates with mozroots.
The full source code for Mozroots is here. This shows how to import the CA certs.
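Assuming the stock mozroots tool that ships with Mono, the import is one command:
# Download Mozilla's trusted root list and import it into Mono's machine-wide store
mozroots --import --machine --sync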
Is the path to the certificate store also hard-coded?
The certificate store should be managed through the X509StoreManager provider.

How to package a WCF service from a makefile...?

I have a WCF project in Visual Studio that I need to deploy to a client's test server. I was on the brink of declaring "Mission Accomplished" when I realized that I have no idea how to take my project from Visual Studio 2010 to something that I can deploy on the client's server.
The gist of this problem is that we use a makefile to do building and packaging when deploying to the client. This means that I need a command-line executable to do whatever it is that I need to do to deploy my WCF service. I did discover right-clicking the project and selecting "Build Deployment Package", but since I need to execute via the command line, I don't think this is going to help much.
The bonus second part of this problem is that, once I get the packaged file to the client's server, I'm not sure what to do with it. Now, if I knew what to expect from the packaged deployment file, I might have a better idea, but until then, it's all just speculation.
OK, here is what I came up with.
Packaging
First, the packaging. Use msbuild. Something like this (apparently you need to use a v4 or later version of .NET for it to succeed):
C:/Windows/Microsoft.NET/Framework/v4.0.30319/msbuild.exe {project_file} /t:package /target:Build /p:PlatformTarget=x86;
Fairly easy, right?
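Since the goal is a makefile, a hypothetical wrapper along these lines keeps the recipe down to one call (the project and output names are placeholders; /p:PackageLocation just controls where the .zip lands):
MSBUILD="C:/Windows/Microsoft.NET/Framework/v4.0.30319/msbuild.exe"
"$MSBUILD" MyWcfService.csproj /t:Package /p:Configuration=Release /p:PlatformTarget=x86 /p:PackageLocation=deploy/MyWcfService.zip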
Deployment
Now, the bonus part of the question, the deployment. This consists of an easy part and a hard part. The easy part was getting the .zip file created by msbuild.exe into IIS. I found two possibilities.
Commandline
The first is the command-line, which gave me issues (something about being unable to cast 'Microsoft.Web.Deployment.DeploymentProviderOptions' to type 'Microsoft.Web.Deployment.DeploymentProviderOptions' --- I KNOW, RIGHT?). Anyway, this is the command-line I used. It may help someone, or it may not. Again, I had issues with it.
c:\inetpub\wwwroot>"c:\Program Files\IIS\Microsoft Web Deploy\msdeploy.exe" -verb:sync -presync:runCommand="md c:\inetpub\wwwroot\{MyWCFCodeDest} & c:\windows\system32\inetsrv\appcmd add site /name:{MyWCFCodeDest} /id:22 bindings:http/*:54095: /physicalPath:c:\inetpub\wwwroot\{MyWCFCodeDest}" -source:package={ZipFileFromMSBuild.exe} -dest:auto -setParam:"IIS Web Application Name"="{MyIISName}"
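For comparison, a plainer msdeploy invocation that skips the -presync site creation and deploys the package into an existing site may sidestep that cast error (the site/application names here are placeholders):
"c:\Program Files\IIS\Microsoft Web Deploy\msdeploy.exe" -verb:sync -source:package=MyWCFService.zip -dest:auto -setParam:"IIS Web Application Name"="Default Web Site/MyWCFService"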
UI
OK, so I decided I would be happy with using the second way. It's by far the easiest if you don't care about automation. Open up IIS Manager, right-click the computer OR the website (depending on whether you want it as its own website or an application in an existing website), Deploy, Import, and follow the wizard to the end.
Errors in Deployment
And now, where I spent most of my time. I hit my newly deployed .svc file and got an error. This error involved the certificate I was using. Now, maybe not all deployments will have to worry about this, but mine did. The error was lengthy: something about "keyset does not exist" and "cannot be activated due to an exception during compilation" and "may not have a private key that is capable of key exchange or the process may not have access rights for the private key". I tried a bunch of stuff, including using mmc to re-import certs and makecert to recreate both my CA and my personal cert. None of that was the problem for me (ymmv). Finally, I focused on user rights. I found that if I gave the Everyone user permission to the private key for the cert (the cert needs to have a private key), everything worked. Obviously not a solution I want for a client, so I hunted down the correct user to give rights to. Surprisingly, this took a while. Various websites had me adding Network Service, ASPNET, the current user, the user specified in machine.config (which is in the .NET directory somewhere), IIS_{MachineName}... none of these worked. The one I had to add was IIS_IUSRS.
So, a handful of caveats that may help your sanity when you scream at your monitor that this isn't working for you despite following all the directions. Because apparently IIS changes far too much over time, and this stuff does matter:
Windows 7 Ultimate sp1
IIS 7.5.7600.16385
Useful Related Stuff
Also, some commandline tools you may be interested in:
-winhttpcertcfg.exe -l -c LOCAL_MACHINE\My -s "{cert_name}" -- lists the users authorized to access the cert's private key (you can also do it the old-fashioned way through file properties); I tried downloading winhttpcertcfg.exe, but it was part of a Windows 2003 package that gives warnings about not being compatible (not sure if it came from my attempt to install that file or if it now comes with something I already had installed)
-winhttpcertcfg.exe -g -c LOCALHOST\My -s "{cert_name}" -a IIS_IUSRS -- adds IIS_IUSRS to the permissions for the cert's private key
-findprivatekey.exe My LocalMachine -n "{cert_name}" -- Finds the private key file for the specified cert; for some reason, this is a tool that you have to build in Dev Studio on your own (found in some WCF examples downloaded from Microsoft)
-cacls.exe {private_key_file_for_cert} /E /G "IIS_IUSRS" -- another way to add a user to the private key's permissions
-mmc -- launches a manager for installed certificates
-makecert -n "CN={CertificateAuthorityName}" -r -sv {CertificateAuthorityName}.pvk {CertificateAuthorityName}.cer -- create a certificate authority cert
-makecert -sk {SignedCertName} -iv {CertificateAuthorityName}.pvk -n "CN={SignedCertName}" -ic {CertificateAuthorityName}.cer {SignedCertName}.cer -sr localmachine -ss My -- create a certificate signed by a certificate authority
One last thing: if you want to import your certs using mmc, you need to launch mmc, then File->Add/Remove Snap-in. Add the Certificates snap-in. Import the certificate authority into Trusted Root Certification Authorities, and the certificate signed by the certificate authority into Personal.
Hopefully you have enjoyed your ride here. Please wait for the browser to come to a complete stop before exiting, and please remember to take any personal items with you.
Additional Discoveries
When it came time to deploy everything to a test server (rather than my development machine), I didn't expect all the hassles that I encountered. I'm documenting these here, again, in an effort to help some other poor, lost soul (or myself at a later date).
-This one should have been obvious: FindPrivateKey.exe wasn't on the server. I had to jump through some hoops to get it there. ymmv.
-Only the .NET 4.0 Client Profile had been installed on the server. By the time I discovered this AND realized it was a problem, a few hours had passed. Discovery of the installed .NET versions came courtesy of netfx_setupverifier, which I got from one of Microsoft's websites. The Client Profile doesn't include all the WCF stuff.
-IIS needed some additional settings (files found in the .NET Framework version directory, run from the commandline):
aspnet_regiis.exe -i -enable
ServiceModelReg.exe -r
-cacls.exe informed me that it was deprecated and that I should use icacls.exe. The command line for icacls is something like:
icacls.exe {private_key_file_for_cert} /grant "IIS_IUSRS":R (note: this didn't exactly work for me, but you can always just go to the {private_key_file_for_cert} file, probably in ProgramData\Microsoft\Crypto\RSA\MachineKeys, and grant permissions via Explorer - right-click - Properties)
-You may need to add a handler mapping for the WCF service. I highly recommend running it under an Application Pool that is .NET v4.0.