PHPUnit: local vs. global install

Installing PHPUnit globally with Composer seems more convenient to me for these two reasons:
1. I can use it everywhere without needing an extra install.
2. I can just run phpunit instead of vendor/bin/phpunit (although an alias might solve this).
Are there any reasons why a local install might be the better choice? For example: guaranteeing that the exact same version is used every time. (I don't have a lot of experience with PHPUnit, so I'm not sure whether this really is an issue or not.)

The big disadvantage of installing packages globally is that you might end up with different versions of PHPUnit across the developers in your team (unless you are the only developer). That can lead to tests behaving differently from one machine to another.
If you install it locally using composer.json, then every developer in your team will have exactly the same version as you do for that specific application. Also, everybody will see when you change the version in composer.json.
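For illustration, a local, per-project install is a single command, and the pinned constraint then lives in the require-dev block of composer.json (the version constraint below is only an example):
composer require --dev phpunit/phpunit "^9.5"
vendor/bin/phpunit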
If you don't like typing vendor/bin/phpunit, you can add a Makefile to your project (note that make requires the recipe line to start with a tab):
test:
	vendor/bin/phpunit --configuration=test/Unit/phpunit.xml
and then run it with:
make test

I like to install it via Composer and the require-dev block, but another approach that comes highly recommended is to download phpunit.phar into the project and use that.
Either way, you control exactly which version is being used (and when it's updated) - which is the most important part, as you can't so easily control what people have installed globally.
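If you go the .phar route, pinning a specific version in the project might look something like this (the exact version and URL are illustrative; adjust to the release you actually want):
wget -O phpunit.phar https://phar.phpunit.de/phpunit-9.5.phar
chmod +x phpunit.phar
./phpunit.phar --version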

Given an npm package, how do I know whether it will work in the browser?

I recently installed an npm package (the recommended Kubernetes client) for my React app.
After writing code that uses the package and deploying it for testing, I got some weird errors about missing functions or packages. Then I read the documentation and realized that the package was Node-only.
Is there any way to check that an npm package works in the browser before writing code that uses it?
Python packages specify compatible Python versions. Do npm packages have something similar to indicate support for particular Node versions and browsers?
Some packages/libraries contain a .browserslistrc file, which I've found to be a good starting point for finding out which browsers and platforms the developers intend to support or compile their code for. It may not always be complete, and a package might well support a browser that isn't mentioned, but it's a good starting point. It certainly helps to find out whether IE (the bane of front-end dev) is supported or not.
Then again, many packages don't include a .browserslistrc. You can then check the package.json for a "browserslist" field.
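For example, you can check an installed dependency's manifest directly, or resolve a set of queries with the browserslist CLI (the package name below is just a placeholder):
# does the dependency declare a browserslist field? (package name is hypothetical)
grep -A 5 '"browserslist"' node_modules/some-kubernetes-client/package.json
# resolve queries to the concrete browsers/platforms they cover
npx browserslist ">0.5%, last 2 versions, not dead"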
If neither is found, you can always clone the repo and add your own .browserslistrc in the root, with queries that will tell you whether the package supports your intended browser or platform. A little more work, but it can help. Not foolproof, but a decent enough way to find out.
Though the best answer is really to just ask the maintainers.

Ensure npm/pip dependencies are binary-preserved

My company has a policy that no project should reference any third-party code servers after its release. Basically, they ask us to keep local mirrors of all package servers. This is to ensure we can reproduce a release, since there is always a risk that somebody changes the code on a server we don't control without changing the library version. Blindly using external servers is also a security risk.
What is the proper way to fulfill this policy with npm? If I understand it correctly, package-lock.json is not enough: it will warn me if a hash has changed, but it will not let me reproduce the build.
There is npm-mirror, but it seems old and I was not able to run it. Are there better, up-to-date alternatives?
I was also thinking about just preserving a copy of node_modules, but this doesn't really work: we build our projects in different environments, and the node_modules folder is environment-specific and has to be built separately for each.
We also use Python, and I assume I will need to solve the same problem for pip.
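For pip, I imagine something like the following snapshot-and-offline-install workflow (directory names are illustrative), though I'm not sure it fully satisfies the policy:
# at release time: download all dependencies (wheels/sdists) into a local mirror directory
pip download -r requirements.txt -d ./release-mirror/
# to reproduce the build later: install only from the mirror, never contacting PyPI
pip install --no-index --find-links ./release-mirror/ -r requirements.txt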

Cygwin & OCaml: OPAM + Batteries

I use Cygwin extensively in a Windows 8 environment (I do not want to go ahead and boot/load Linux directly on the machine). I use the OCamlIDE plug-in for Eclipse and have experienced virtually no problems with this workflow.
However, I would like to use Batteries so that I can make use of its dynamic arrays, among a few other interesting features that will speed up my development process.
I have tried this method: http://ocaml.org/install.html, but I get the following error:
$ sh ./opam_installer.sh /usr/local/bin
No file yet for i686:CYGWIN_NT-6.2-WOW64
What am I missing and how would I configure Cygwin so that it can accept the Opam installer? When I tried yet a different way of building Opam, I got:
'i686-w64-mingw32-gcc' is not recognized as an internal or external command,
as a Makefile error and the reason the build fails. Something seems to be wrong with mingw32-gcc; what do I need to install and/or configure in Cygwin so that it can compile/build things properly? I have wget and curl installed as well.
My overall question: what is the best way to get Batteries installed on my system with a minimum of time spent tracing all of its dependencies by hand? Is there a way I can just build the library module I need, such as BatDynArray, and the includes:
include BatEnum.Enumerable
include BatInterfaces.Mappable
That way I can just call them directly in my code with open ...;; and/or include ...;;
OCaml works beautifully on Windows with WODI, which is a Cygwin-based distribution that includes Batteries and tons of other useful packages (which are a pain to install manually on Windows).
I urge you to take a shot at WODI, which I believe to be an indispensable tool for the rest of us, the forgotten souls, who have to deal with Windows.
First of all, include does not do what you think it does: open Batteries should be exactly what you're looking for. OPAM is not yet solid on Windows (maybe Thomas could give an update on where things stand).
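To illustrate that point, a minimal sketch of using Batteries' dynamic arrays through a plain open, compiled with ocamlfind (the file name and values are just examples):
cat > dyn_test.ml <<'EOF'
(* "open Batteries" exposes BatDynArray under the short alias DynArray *)
open Batteries
let () =
  let a = DynArray.create () in
  DynArray.add a 1;
  DynArray.add a 2;
  Printf.printf "length = %d\n" (DynArray.length a)
EOF
ocamlfind ocamlopt -package batteries -linkpkg dyn_test.ml -o dyn_test
./dyn_test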
Frankly, I would recommend installing Linux in a VM; you should be able to get started with OPAM instantly then. Otherwise, take a look at this package manager for OCaml, which focuses on cross-platform support: http://yypkg.forge.ocamlcore.org/. I've never tried it myself, however. The last package manager you could try is GODI; I'm not sure about its Windows support, though.
Finally, if none of these options work, then it should be possible to install Batteries from source. All you need is OCaml and make. And if there are problems with this approach, you should definitely follow up on them, either here or on the bug tracker, because Batteries does intend to support Windows, AFAIK.

How do I run my package's tests using different versions of its prerequisites?

Suppose I've written a Haskell package that I'd like to release to Hackage.
Suppose I've written automated tests for it, so I know it works on my machine, with the version of GHC I have installed, and the versions of other packages it depends on that I have installed.
Is there an automated way of running my package's tests using other versions of packages it depends on, and other versions of GHC (and versions of Hugs, etc)?
The objective is not only to check that it works with the prerequisites I think it should work with, but also to confirm it doesn't work with versions I expect it not to work with.
I think for now your best bet is cabal-dev or capri and some homegrown scripts.
Use cabal configure --preference=DEPENDENCY, as described by cabal configure --help. I don't know exactly how it works; maybe just try it out. For instance, to test with the old base package, try
cabal configure "--preference=base==3.0.0"
You can put the combinations of dependencies you want to test into a shell script, however you like. For example:
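A rough sketch of such a script, reusing the --preference flag from above (the dependency and version list are purely illustrative):
#!/bin/sh
# rebuild and run the test suite against several candidate versions of one dependency
for v in 3.0.0 4.0.0.0 4.2.0.0; do
    echo "=== testing against base==$v ==="
    cabal clean
    cabal configure --enable-tests "--preference=base==$v" || continue
    cabal build && cabal test
done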

Why do techs recommend YUM installs yet repositories and providers are ages behind?

I have been reading page after page about the benefits of using the YUM package installer and how NOBODY should build installs from source files (which, again, makes no sense to me), yet the repositories and source builders always package files in tarball format, leaving a TON of work (which usually ends up going wrong) to the individual instead of providing SRPMs for the end user.
Has the world gone mad? I feel like I am taking crazy pills!
Well, first of all there's more to life than just RPM and YUM. An SRPM would be (somewhat) useless to Debian, for instance.
As for why you'd use a package repository over building everything yourself: well, I don't know about you, but I'd much rather just run (I'm using Ubuntu, so I have apt-get instead of yum):
# apt-get install firefox
than try to figure out all the dependencies, as well as all the dependencies' dependencies, make sure I have the correct versions of everything, download/build/install any that I don't have (or that are out of date: if updating existing dependencies, make sure the newer versions don't break any existing software I have, and make sure I don't end up with 15 different versions of the same thing), and only after all that, download/configure/build/install firefox.
Then realise I'll also want Open Office or MySQL and start all over again!
That said, there are some packages that I install the latest version of from source. For example, I run my media centre off MythTV and I always like to build the latest version of that from Subversion. But even then, with a package manager, that's as easy as:
# apt-get build-dep mythtv
> cd ~/src/mythtv/
> svn co <svn repo of mythtv>
> configure && (etc)
That is, the package management software already knows all the dependencies for MythTV and it can download and install them automatically. Why spend hours tracking it all down manually?
In the end, it sounds to me like maybe you'd prefer a distro like Gentoo... that's the benefit of Linux, of course. If you don't like how things are run in the Fedora/RedHat distribution, you can just choose a different one.
There are a few reasons to use a packaging infrastructure (like yum):
Creating "installations" is much easier to do, due to automatic dependency installation. From the simple yum install blah to creating chroots with mock/--installroot, or live CDs, etc.
Managing those installations. From the obvious yum update to operations which are much harder to do otherwise like: yum --security update, yum --bz=1234 update-minimal, yum --disablerepo=testing distro-sync.
Auditing those installations. The obvious examples here being yum history (not available in plain RHEL-5 atm.) and yum verify.
...however, speed is not a factor; for instance, Fedora rawhide moves as fast as Gentoo.
RHEL-5 does not move that quickly because it's 3 years old and is not supposed to break ... not because it's managed using yum/rpms. There are third-party providers, like iuscommunity, which release co-installable newer versions of various packages. Or, if you need to, you can create your own.
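Rolling your own repository is fairly mechanical; a minimal sketch, assuming the createrepo tool is available (the paths, repo id and package name are illustrative):
# collect your custom/newer RPMs in one directory and generate the repo metadata
createrepo /srv/myrepo/
# point yum at it with a plain .repo file
cat > /etc/yum.repos.d/myrepo.repo <<'EOF'
[myrepo]
name=My local packages
baseurl=file:///srv/myrepo/
enabled=1
gpgcheck=0
EOF
# packages from /srv/myrepo/ are now candidates for install/update
yum install somepackage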
Or you can run a production server on Fedora rawhide or Gentoo; both will have the latest packages really quickly ... I would not recommend that option, though.
Among other things, tarballs are system-independent, whereas YUM appears to be RPM-based and thus mostly usable on Linux only (plus NetWare and AIX, so as I said, Linux only :) )