Most tools I download have a SHASUM or MD5 file listed somewhere so I can checksum the files once downloaded.
However, I downloaded Zookeeper recently and was having a heck of a time finding the checksums for it. I could create them myself, but would also like to verify against a public checksum.
Might they also sign releases? How would I go about verifying with GPG?
No, the Apache Foundation does not maintain a centralized checksum repository for binary distributions of any Apache project, nor does it mandate one. The same goes for a signing certificate. Both of these are project-level concerns and must be requested per project through the project-specific issue tracker.
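That said, many Apache projects do publish a detached .asc signature next to each release, plus a KEYS file containing the release managers' public keys. If ZooKeeper's download page points you at those, verification looks roughly like this (the KEYS URL below just follows the usual www.apache.org/dist/<project>/KEYS convention, and the file names are placeholders, so treat both as assumptions to check against the download page):
# import the project's published release-signing keys
wget https://www.apache.org/dist/zookeeper/KEYS
gpg --import KEYS
# verify the detached signature against the tarball you downloaded
gpg --verify zookeeper-x.y.z.tar.gz.asc zookeeper-x.y.z.tar.gz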
I want to install a CentOS package from a mirror on the internet.
e.g. http://mirror.centos.org/centos-7/7/os/x86_64/Packages/unixODBC-2.3.1-14.el7.x86_64.rpm
That URL is http:// not https://, so there's no TLS encryption.
Downloading binaries from the internet without encryption in transit and authenticity checking seems like a bad idea from a security perspective.
If I modify the URL to add an s, making it https://, the download doesn't work. That server does not serve anything on 443 HTTPS.
So it seems my only choice is to download the file without TLS.
Unlike some Linux ISO files, there are no .md5 and .asc files next to the main file. So I cannot manually check the file hash against a signature.
How does security work for RPMs? If I have no encryption or certificate checking when I download, is there some other chain of trust? e.g. do RPMs contain a public key inside the file (e.g. GPG/PGP) that yum compares to one it already trusts? Or would I be installing a completely untrustworthy file?
RPM packages can be signed with GPG signatures. All major RPM-using distributions (eg, CentOS, Fedora, RHEL) do that.
If an RPM is signed, dnf/yum will verify the signature when you try to install the package. If it's signed by an unknown key, they will prompt you about trusting it. If the signature doesn't verify, they will abort the install.
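If you want to check a downloaded package by hand, a minimal sketch looks like this (the key path is the standard CentOS 7 location and the package is the one from the question; adjust both for your system):
# import the distribution's signing key so rpm can verify against it
rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7
# check the package's digests and GPG signature
rpm --checksig unixODBC-2.3.1-14.el7.x86_64.rpm
# list the public keys rpm currently trusts
rpm -q gpg-pubkey --qf '%{NAME}-%{VERSION}-%{RELEASE}  %{SUMMARY}\n'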
HTTPS provides a bunch of guarantees, but they really boil down to: the data transmission between the mirror and your computer is secure (confidential and untampered with). It doesn't prevent someone malicious from modifying the package on the mirror itself, nor does it prevent someone from setting up a fake mirror full of malicious packages and making them available to you.
GPG, on the other hand, lets you verify that the packages you are installing were published by an authorized system (eg, the official CentOS build system) and haven't been modified since.
You can use both GPG + HTTPS to get advantages of both.
Using just GPG + HTTP, though, means someone spying can see what you are downloading. But due to GPG, if they send you malicious data, you will identify it and abort the installation.
I am trying to install Apache Lucene 5.3.0 (from source). The mirror sites provide lucene-5.3.0-src.tgz and lucene-5.3.0-src.tgz.asc (the signature file), however the latter is using RSA key ID 3FCFDB3E which is not contained in the KEYS file (which is also provided).
$ gpg --verify ./lucene-5.3.0-src.tgz.asc
gpg: assuming signed data in `./lucene-5.3.0-src.tgz'
gpg: Signature made 2015-08-17T13:35:34 CEST using RSA key ID 3FCFDB3E
gpg: Can't check signature: public key not found
The issue seems to be affecting multiple sites. What should the fingerprint be? Why is the key not included in the KEYS file?
It seems that the key of Paul Noble is what you're looking for, having fingerprint
CFCE 5FBB 920C 3C74 5CEE E084 C38F F5EC 3FCF DB3E
The key is rather new (a few weeks old).
You cannot really be sure whether he's authorized to release for Apache Lucene, nor that the key actually belongs to him.
A short Google survey at least indicates several contributions to free software, and also the Linux kernel, but this should not be considered trustworthy without further analysis. There are also no incoming certifications that could be used to verify the key through the web of trust.
I'd recommend getting in touch with Apache, possibly on the Lucene mailing list, and asking the developers how to verify the release and under what trust assumptions. Also point out that something is wrong which they should fix (eg, include the key in the KEYS file).
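If you do decide to go ahead, fetching the key by its full fingerprint and re-running the verification would look roughly like this (this assumes the key has been uploaded to a public keyserver, which may not be the case):
# fetch the key by its full fingerprint (safer than the short 32-bit ID, which can collide)
gpg --recv-keys CFCE5FBB920C3C745CEEE084C38FF5EC3FCFDB3E
# re-run the verification against the downloaded source tarball
gpg --verify lucene-5.3.0-src.tgz.asc lucene-5.3.0-src.tgz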
The policies around where KEYS files are "published" for the Lucene project are kind of confusing, compounded by the use of id.apache.org -> people.apache.org and its automatic generation of a KEYS file for each project committee based on their current keys. Some backstory here...
https://issues.apache.org/jira/browse/LUCENE-5143
The current policy is that each new Lucene/Solr release publishes a snapshot of all the current keys, so for 5.3.0 that would be here...
https://www.apache.org/dist/lucene/java/5.3.0/KEYS
...but the older "non version specific" keys files are still available - notably for verifying older versions of software in the archives that may predate the current policy.
You can always see the list of all "active" keys for a given apache project/user using http://people.apache.org/keys/
http://people.apache.org/keys/group/lucene.asc
A merge from a feature branch to trunk took over 45 minutes to complete.
The merge included a whole lot of jars (~250MB), however, when I did it on the server with the file:// protocol the process took less than 30 seconds.
SVN is being served up by Apache over https.
The version of SVN on the server is
svn, version 1.6.12 (r955767)
compiled Sep 3 2013, 17:49:49
My local version is
svn, version 1.7.7 (r1393599)
compiled Oct 8 2012, 20:42:17
On checking the Apache logs, I found I had made over 10k requests, and apparently each of these requests went through an authentication layer.
Is there a way to configure the server so that it caches the credentials for a period and doesn't make so many authentication requests?
I guess the tricky part is making sure the credentials are only cached for the life of a single svn 'request'. If svn merge makes lots of individual https requests, how would you determine how long to store the credentials for without opening potential security holes?
First of all, I'd strongly suggest you upgrade the server to 1.7 or 1.8, since 1.7 and newer servers support an updated version of the protocol that requires fewer requests for many actions.
Second, if you're using path-based authorization, you probably want SVNPathAuthz short_circuit in your configuration. Without it, when authorization is run for secondary paths (i.e. paths not in the request URI), as happens for many recursive requests (especially log), each check runs back through the entire Apache httpd authentication infrastructure. With the setting, instead of running the entire httpd authentication/authorization infrastructure, we simply ask mod_authz_svn to authorize the action against the path. Running through the entire httpd infrastructure can be especially painful if you're using LDAP and it needs to go back to the LDAP server to check credentials. The only reason not to use the short_circuit setting is if you have some other authentication module that depends on the path; I've yet to see such a setup in the wild, though.
Finally, if you are using LDAP, then I suggest you configure credential caching, since this can greatly speed up authentication. Apache httpd provides the mod_ldap module for this; I suggest you read its documentation.
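As a rough sketch (paths and cache values are illustrative, and the usual AuthType/Require directives are omitted), the relevant pieces of the httpd configuration might look like this:
# in the Subversion location block: let mod_authz_svn authorize secondary paths
# directly instead of re-running the full httpd auth stack for each one
<Location /svn>
  DAV svn
  SVNParentPath /var/svn/repos
  AuthzSVNAccessFile /etc/svn/authz
  SVNPathAuthz short_circuit
</Location>
# in the main server config: cache LDAP lookups (requires mod_ldap)
LDAPSharedCacheSize 500000
LDAPCacheEntries 1024
LDAPCacheTTL 600
LDAPOpCacheEntries 1024
LDAPOpCacheTTL 600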
If you provide more details of the server side setup I might be able to give more tailored suggestions.
The comments suggesting that you not put jars in the repository are valuable, but with some configuration improvements you can help resolve some of your slowness anyway.
The merge included a whole lot of jars (~250MB)
That's your problem! If you go through your network via http://, you have to send those jars via http://, and that can be painfully slow. You can increase Apache httpd's cache size, or you can set up a parallel svn:// server, but you're still sending a quarter of a gigabyte of jars over the network. That's why file:// was so much faster.
You should not be storing jars in your Subversion repository. Here's why:
Version control gives you a lot of power:
It helps you merge differences between branches
It helps you follow the changes taking place.
It helps identify a particular change and why a particular change took place.
Storing binary files like jars gives you none of that. You can't merge binary files, and you can't track their changes.
Not only that, but version control systems usually use diffs to track changes, which saves a lot of space. Imagine a 1-kilobyte text file; across five revisions, six lines change. Instead of taking up 6K of space (a full copy per revision), only the original 1K plus those six changed lines are stored.
When you store a jar, and then a new version of that jar, you can't easily diff them, and since the jar format is zip, you can't really compress them either. Store five versions of a jar in Subversion and you store pretty close to five times the size of that jar: if a jar file is 10K, you're using about 50K of space for it.
So not only do jar files take up a lot of space while giving you no power in return, they can quickly take over your repository. I've seen sites where over 90% of an 8-gigabyte repository is nothing but compiled code and third-party jars. And the useful life of these binary files is really quite limited, too. So, in these places, 80% of their Subversion repository is wasted space.
Even worse, you tend to lose track of where you got that jar and what is in it. When users put in a jar called commons-beans.jar, I don't know what version that jar is, whether it was built by someone, or whether it was somehow munged by that person. I've seen users merge two separate jars into a single jar for ease of use. If someone calls a jar commons-beanutils-1.5.jar because it was version 1.5, it's very likely that someone will update it to version 1.7 but not change the name. (It would affect the build, you'd have to add and delete, there is always some reason.)
So, there's a massive amount of wasted space with little benefit and almost no information. Storing jars is just plain bad news.
But your build needs jars! What should you do?
Get a jar repository like Nexus or Artifactory. Both of these repository managers are free and open source.
Once you store your jars in there, you can fetch the revision of the jar you want through Maven, Gradle, or, if you use Ant and want to keep your Ant build system, Ivy. You can also, if you don't feel like being that fancy, fetch the jars via an Ant <get/> task. If you use Jenkins, it can easily deploy the built jars to your Maven repository for other projects to use.
So, get rid of the jars. Merging will then be a simple diff between text files; merging branches will be much quicker, and less information has to be sent over the network. If you don't want to switch to Maven, use Ivy, or simply update your builds with the <get> task to fetch the jars and versions you need, as sketched below.
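For illustration only, a minimal Ant target along those lines might look like this (the repository URL, artifact, and destination path are placeholders):
<!-- fetch a jar at build time instead of committing it to Subversion;
     usetimestamp avoids re-downloading when the local copy is current -->
<target name="fetch-jars">
  <mkdir dir="lib"/>
  <get src="https://repo.example.com/releases/commons-beanutils/1.7.0/commons-beanutils-1.7.0.jar"
       dest="lib/commons-beanutils-1.7.0.jar"
       usetimestamp="true"/>
</target>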
I have a WCF project in Visual Studio that I need to deploy to a client's test server. I was on the brink of declaring "Mission Accomplished" when I realized that I have no idea how to take my project from Visual Studio 2010 to something that I can deploy on the client's server.
The gist of this problem is that we use a makefile to do the building and packaging when deploying to the client. This means that I need a command-line executable to do whatever it is that I need to do to deploy my WCF service. I did discover right-clicking the project and selecting "Build Deployment Package", but since I need to execute via the command line, I don't think this is going to help much.
The bonus second part of this problem is that, once I get the packaged file onto the client's server, I'm not sure what to do with it. Now, if I knew what to expect from the packaged deployment file, I might have a better idea, but until then, it's all just speculation.
OK, here is what I came up with.
Packaging
First, the packaging. Use msbuild. Something like this (apparently you need to use a v4 or better version of .NET for it to succeed):
C:/Windows/Microsoft.NET/Framework/v4.0.30319/msbuild.exe {project_file} /t:package /target:Build /p:PlatformTarget=x86;
Fairly easy, right?
Deployment
Now, the bonus part of the question, the deployment. This consists of the easy part and the hard part. The easy part was getting the .zip file created with msbuild.exe added into IIS. I found 2 possibilities.
Commandline
The first is the command-line, which gave me issues (something about being unable to cast 'Microsoft.Web.Deployment.DeploymentProviderOptions' to type 'Microsoft.Web.Deployment.DeploymentProviderOptions' --- I KNOW, RIGHT?). Anyway, this is the command-line I used. It may help someone, or it may not. Again, I had issues with it.
c:\inetpub\wwwroot>"c:\Program Files\IIS\Microsoft Web Deploy\msdeploy.exe" -verb:sync -presync:runCommand="md c:\inetpub\wwwroot\{MyWCFCodeDest} & c:\windows\system32\inetsrv\appcmd add site /name:{MyWCFCodeDest} /id:22 /bindings:http/*:54095: /physicalPath:c:\inetpub\wwwroot\{MyWCFCodeDest}" -source:package={ZipFileFromMSBuild.exe} -dest:auto -setParam:"IIS Web Application Name"="{MyIISName}"
UI
OK, so I decided I would be happy with using the second way. It's by far the easiest if you don't care about automation. Open up IIS Manager, right-click the computer OR the website (depending on whether you want it as its own website or an application in an existing website), Deploy, Import, and follow the wizard to the end.
Errors in Deployment
And now for where I spent most of my time. I hit my newly deployed .svc file and got an error. This error involved the certificate I was using. Now, maybe not all deployments will have to worry about this, but mine did. The error was lengthy, something about "keyset does not exist" and "cannot be activated due to an exception during compilation" and "may not have a private key that is capable of key exchange or the process may not have access rights for the private key". I tried a bunch of stuff, including using mmc to re-import certs and makecert to recreate both my CA and my personal cert. None of that was the problem for me (ymmv). Finally, I focused on user rights. I found that if I gave the Everyone user permission to the private key for the cert (the cert needs to have a private key), everything worked. Obviously not a solution I want for a client, so I hunted down the correct user to give rights to. Surprisingly, this took a while. Various websites had me adding Network Service, ASPNET, the current user, the user specified in machine.config (which is in the .NET directory somewhere), IIS_{MachineName}... none of these worked. The one I had to add was IIS_IUSRS.
So, a handful of caveats that may help your sanity when you scream at your monitor because this isn't working for you despite following all the directions. Because apparently IIS changes far too much over time, and this stuff does matter:
Windows 7 Ultimate sp1
IIS 7.5.7600.16385
Useful Related Stuff
Also, some commandline tools you may be interested in:
-winhttpcertcfg.exe -l -c LOCAL_MACHINE\My -s "{cert_name}" -- lists the users authorized to access the cert's private key (you can also do it the old fashioned way through file properties); I tried downloading winhttpcertcfg.exe, but it was part of a Windows 2003 package that gives warnings about not being compatible (not sure if it came from my attempt to install that file or if it now comes with something I already had installed)
-winhttpcertcfg.exe -g -c LOCAL_MACHINE\My -s "{cert_name}" -a IIS_IUSRS -- adds IIS_IUSRS to the permissions for the cert's private key
-findprivatekey.exe My LocalMachine -n "{cert_name}" -- Finds the private key file for the specified cert; for some reason, this is a tool that you have to build in Dev Studio on your own (found in some WCF examples downloaded from Microsoft)
-cacls.exe {private_key_file_for_cert} /E /G "IIS_IUSRS" -- another way to add a user to the private key's permissions
-mmc -- launches the manager for installed certificates
-makecert -n "CN={CertificateAuthorityName}" -r -sv {CertificateAuthorityName}.pvk {CertificateAuthorityName}.cer -- create a certificate authority cert
-makecert -sk {SignedCertName} -iv {CertificateAuthorityName}.pvk -n "CN={SignedCertName}" -ic {CertificateAuthorityName}.cer {SignedCertName}.cer -sr localmachine -ss My -- create a certificate signed by a certificate authority
One last thing: if you want to import your certs using mmc, you need to launch mmc, File->Add/Remove Snapin. Add the Certificates snapin. Import the certificate authority to the Trusted Root Certification Authorities and the certificate signed by the certificate authority to Personal.
Hopefully you have enjoyed your ride here. Please wait for the browser to come to a complete stop before exiting, and please remember to take any personal items with you.
Additional Discoveries
When it came time to deploy everything to a test server (rather than my development machine), I didn't expect all the hassles that I encountered. I'm documenting these here, again, in an effort to help some other poor, lost soul (or myself at a later date).
-This one should have been obvious: FindPrivateKey.exe wasn't on the server. I had to jump through some hoops to get it there. ymmv.
-Only the .NET 4.0 Client Profile had been installed on the server. By the time I discovered this AND realized it was a problem, a few hours had passed. Discovery of the installed .NET versions came courtesy of netfx_setupverifier, which I got from one of Microsoft's websites. The Client Profile doesn't include all the WCF stuff.
-IIS needed some additional settings (files found in the .NET Framework version directory, run from the commandline):
aspnet_regiis.exe -i -enable
ServiceModelReg.exe -r
-cacls.exe informed me that it was deprecated and that I should use icacls.exe. The commandline for icacls is something like:
icacls.exe {private_key_file_for_cert} /GRANT "IIS_IUSRS":R (note, didn't exactly work for me, but you can always just go to the {private_key_file_for_cert} file, probably in ProgramData\Microsoft\Crypto\RSA\MachineKeys, and give permissions via Explorer - right-click - properties)
-You may need to add a handler mapping for the WCF service. I highly recommend running it under an application pool that uses .NET v4.0; a scripted version is sketched below.
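If you prefer to script that part too, something along these lines should work with appcmd (the pool name is a placeholder, and the site/application path needs to match your deployment):
REM create a .NET v4.0 application pool and move the deployed application onto it
c:\windows\system32\inetsrv\appcmd add apppool /name:{MyWcfPool} /managedRuntimeVersion:v4.0 /managedPipelineMode:Integrated
c:\windows\system32\inetsrv\appcmd set app "Default Web Site/{MyWCFCodeDest}" /applicationPool:{MyWcfPool}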
I'm trying to convince the higher-ups at my work place to migrate to Apache Ivy. I've managed to get a few sandbox projects working using Ivy to power the build, and now I have a greenlight to put together a migration proposal.
We all agree on one thing: we don't want to trust JARs that are located in public directories! I know, I know, a bit paranoid, yes. But we'd like to have a setup where we pull a JAR from a trusted source (either downloading it from the open source project itself, or most likely, gulp, a public repo), and use it for some time before we "certify" it (give it our blessing as a safe artifact to use).
Then we want to have a common repository for all JARs used by our many projects.
My original thinking was to place this repository in version control (we have an SVN server), but I wasn't sure what best practices dictate. It might make more sense to put our JARs on a file server and access them over FTP from the Ivy script.
Either way, SVN (HTTPS) or FTP, all of our servers are authenticated. So, a small number of questions:
Where should we be publishing all of our "certified" JARs (everything from `log4j` to any homegrown JARs we produce)? What do best practices dictate?
The "ivyrep" resolver-type does not take username or passwd atrributes. If our "JAR server" (FTP, SVN, etc.) is authenticated, how do I configure the Ivy scripts to login?
I must echo Brian's recommendation to use a repository manager like Nexus. It's a lot less work in the long run. You'll also discover that the professional version of Nexus enables you to create approval processes around repositories which you plan to use in your build. See the procurement suite functionality.
If, on the other hand, you are determined to build your own repository, then Ivy has the tools for the job. You'll need to become very familiar with the Ivy settings file and how it declares and uses resolvers.
If the repository is accessible via HTTPS, then the url resolver should be able to access it. The resolver assumes that each version of an artifact is in a different directory, and you'll need to specify the URL pattern that Ivy should use when accessing the repository:
<url name="two-patterns-example">
<ivy pattern="http://ivyrep.mycompany.com/[module]/[revision]/ivy-[revision].xml" />
<artifact pattern="http://ivyrep.mycompany.com/[module]/[revision]/[artifact]-[revision].[ext]" />
</url>
The pattern is fully flexible as to how you store the artifacts.
Authentication is also handled in the settings file using the credentials tag.
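For example, a sketch of the credentials declaration in ivysettings.xml (the host, realm, and user/password values are placeholders; the realm must match what your server sends in its authentication challenge):
<!-- declared in ivysettings.xml before the resolvers that need it -->
<credentials host="ivyrep.mycompany.com"
             realm="My Company Repository"
             username="deployer"
             passwd="secret"/>
Ivy then supplies these credentials whenever a resolver requests a protected URL on that host.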
Finally, the FTP protocol is also supported. It's hard to find in the doco, but it's supported by the vfs resolver.
I think that's enough information on an option I don't recommend :-) Having said that I once created an FTP based repository for managing releases to clients. It's useful to have a tool this powerful :-)
Why not use something like Sonatype's Nexus? I've seen it used for Maven, and I believe it'll work for Ivy.
You can set it up to download from remote repositories into (say) a 'test' repository. You can then evaluate those .jars, and if they're good, upload them into an 'approved' repository for general consumption. There's some authentication surrounding this, but you'd have to evaluate that in greater depth. Certainly you can restrict the uploading into repositories via a username/password pair.