Updating to PHP 5.3 with deprecated function warnings disabled - migration

I'm very keen to update a number of our servers to PHP 5.3. This would be in readiness for Zend Framework 2 and also for the reported performance improvements. Unfortunately, I have large amounts of legacy code on these servers which will be fixed in time, but cannot all be fixed before the migration. I'm considering updating but disabling the deprecated function error on all but a few development sites, where I can begin to work through updating old code.
error_reporting(E_ALL ^ E_DEPRECATED);
Is there any fundamental reason why this would be a bad idea?

Well, you could forget that you set the flag and wonder why your application breaks in a future PHP update. It can be very frustrating to debug an application without proper error reporting. That's one reason I can think of.
However, if you do it, document it somewhere. That note can save you a couple of hours when you've long forgotten that you ever set the flag.
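A minimal sketch of what that might look like, assuming a central bootstrap include (the file name and comments are illustrative):
<?php
// bootstrap.php - TEMPORARY: hide E_DEPRECATED during the PHP 5.3 migration.
// Remove this override once the legacy code has been updated.
error_reporting(E_ALL ^ E_DEPRECATED);
?>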

If you haven't already, you should read the migration guide, with particular focus on Backward Incompatible Changes and Removed Extensions.
You have bigger issues than deprecation. Ignoring E_DEPRECATED will not suffice: because of the incompatible changes there will also be other types of errors or, maybe even worse, unexpected behavior.
Here's a simple example:
<?php
// Fine in PHP 5.2.x, but a parse error in 5.3.x.
function goto($line) {
    echo $line;
}

goto(7);
?>
This code works fine and outputs 7 in PHP 5.2.x, but gives you a parse error in PHP 5.3.x, because goto became a reserved word there.
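The fix is simply to rename the function; the replacement name below is arbitrary:
<?php
// "goto" is reserved in PHP 5.3, so the function gets a new name.
function printLine($line) {
    echo $line;
}

printLine(7);
?>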
What you need to do is take each item in that guide and check your code, updating where needed. To make this faster you could ignore the deprecated functionality in a first phase and just disable error reporting for E_DEPRECATED, but you can't assume that you will only get harmless warnings when porting to another major PHP branch.
Also, don't forget about your hack: fix the deprecated issues as soon as possible.
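To give one concrete case: ereg() raises E_DEPRECATED in 5.3, and the preg_match() replacement is mostly mechanical ($input here is just a placeholder):
<?php
$input = '12345';

// Deprecated in PHP 5.3:
// $isNumber = ereg('^[0-9]+$', $input);

// PCRE replacement - note the added delimiters around the pattern:
$isNumber = preg_match('/^[0-9]+$/', $input);
?>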
Note: I tried to answer the question from a practical point of view, so please don't tell me that ignoring warnings is bad. I know that, but I also know that time is not an infinite resource.

I presume you have some kind of test server? If not, you really should set one up and test your code in PHP 5.3. If your code is thoroughly unit tested, testing it will take seconds, and fixing it will be fairly quick too, as the unit tests will tell you exactly where to look. If not, consider making unit testing it all a priority before the next release. In the meantime, go through it all, first with E_DEPRECATED warnings disabled, fixing anything that comes up, then with them re-enabled once you have time. You could also run a global find-and-replace for the easier-to-fix errors; for instance:
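Something along these lines can locate the obvious candidates from the command line (the function list is only a small sample of what the migration guide flags):
grep -rnE 'ereg|split\(|session_register|set_magic_quotes_runtime' .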

Related

How can a modified Julia package be used natively?

So, there is this cool package I've found but it leaves a lot to be desired. Since it made more sense to modify it, rather than build a new one myself, I changed the code in the corresponding source directory (C:\Users\[my username]\.julia\v0.4\[package name]\src). I made sure to modify not just the base.jl file, but also the [name of package].jl one so that there are no issues with dependencies or the new functions I added. I tried running the package several times to ensure that Julia doesn't spit out any errors or exceptions (the original package had some deprecated stuff, which I also remedied). Still, I fail to use the additional functionality of the package that I augmented. Any help would be greatly appreciated.
I'm using Julia 0.4.2 on a Windows 7 machine, with Notepad++ as my editor.
I'm not exactly sure what you tried, but here's a guess as to what's going on: if you've already loaded the package in your Julia session, edits to the source files won't take effect unless you explicitly reload the package. There are some good workflow tips here, and more explanation of the module system here.
However, for a newbie the easiest thing might be to quit julia and restart.
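If you do want to reload in place, Julia 0.4 can re-evaluate a package's source with reload; MyPkg below is a placeholder for your package's actual name:
reload("MyPkg")  # re-evaluates the package's source files in the running session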
As far as making changes to a package, as Gnumic commented, your best approach is to make a branch and commit your changes there. Once you become convinced your changes represent an improvement, consider sharing your changes with the rest of the world.

Elixir: lint for confirming that every function has a type specification

Is there a lint for Elixir (like for Javascript) which checks that every function has a type specification?
There is an Erlang compiler switch, +warn_missing_spec, which does this, but I'm having trouble getting it to work with Elixir at the moment. I think there is a bug in its parsing of the ELIXIR_ERL_OPTS environment variable, which converts +warn_missing_spec into -warn_missing_spec, and that isn't a valid compiler option. I'm going to open an issue on the tracker, but thought you might like to know that this does indeed exist.
EDIT: As José mentioned below, the correct flag is ERL_COMPILER_OPTIONS. You can enable the missing spec warning during compilation by doing the following:
ERL_COMPILER_OPTIONS="warn_missing_spec" mix compile
Keep in mind you may get superfluous warnings from Elixir itself, for functions like __MODULE__. It should still be useful, though. One last thing to note: I discovered this morning that there is a problem using this flag with mix compile, and that it currently only warns about mix.exs. This is being fixed, and may even be fixed by the time you read this, but it's something to be aware of.
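For reference, the specs the switch looks for are written in Elixir with the @spec module attribute; a minimal example (the module and function are invented for illustration):
defmodule MyMath do
  @spec add(integer, integer) :: integer
  def add(a, b), do: a + b
end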

Best practice for writing tests that reproduce bugs

I am struggling a bit with how to write tests that reproduce an issue that has not yet been fixed.
Should one write the test with wrong expectations, so that once the bug is fixed the developer sees the failure and adjusts the expectations? Or should one write the test with correct expectations and disable it, re-enabling it once the bug is fixed?
I would prefer to define wrong expectations and add the correct ones in comments; once I fix the issue, I immediately get a notification that the test fails. If I disable it, I won't see it failing, and it will probably stay disabled until someone rediscovers the test.
Are there any other ways of doing this?
Ideally you would write a test that reproduces the bug and then fix said bug.
If for whatever reason that is not currently an option, I would say that your approach of having the wrong expectations is better than having an ignored test, assuming that you use a clear variable name / method name / comment to flag that the test is a placeholder and not the desired outcome.
One thing that I've done is write a test that is a "time bomb" reminder. I pick a date that is a few weeks/months out from now that I expect to be able to get back to it or have it fixed by. If I end up having to push the date out 2 or 3 times I end up deleting the test because it must not be that important.
As @Jarred said, the best way is to write a test that expresses the correct expectations, check that it fails, then fix the production code and see the test pass.
If that's not an option, remember that tests exist not only to test but also to document, so write a test that documents how your program actually works. If necessary, add a comment to the test. And don't write tests that are ignored - it's pointless: in the future you may refactor your code many times, and you could accidentally fix this test or introduce even more errors in this area. Writing tests that are intended to be ignored long-term is just a waste of time.
Don't be afraid that you will forget about that particular bug/test; just create a ticket in your issue-tracking system - that's what it's made for.
If you use a testing framework that supports groups, you can add all those tests to a group so you can instantly exclude them if needed.
Also, I really don't like the concept of 'time bomb tests'. Your build MUST be reproducible - that's a fundamental assumption of release management, continuous integration, and the ability to hand your code to another team. Tests are not meant to track and remind you about issues; that's the job of the issue-tracking system. Seriously, don't do it.
Actually, I thought about this again. We are using JUnit, and it supports defining expectations on exceptions via @Test(expected = Exception.class).
So one can write the test with the desired expectations and annotate it with @Test(expected = AssertionError.class). Once the bug is fixed, the test starts failing and the developer has to remove the expected exception.
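Putting that together, a sketch (Parser and the method names are invented for illustration):
import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class ParserBugTest {
    // The assertion documents the desired behaviour. While the bug is
    // present, assertEquals throws AssertionError and the test passes.
    // Once the bug is fixed, no AssertionError is thrown, the test fails,
    // and the developer is reminded to remove the expected clause.
    @Test(expected = AssertionError.class)
    public void parserShouldHandleEmptyInput() {
        assertEquals("", new Parser().parse(""));
    }
}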

grunt-lesslint: how to prevent output from being written to the console

We are trying to use grunt-lesslint in our project, as our UI developer is comfortable fixing errors in LESS files. grunt-recess seems more powerful, but I'm not sure whether it can point to errors in the LESS file itself. I can't glean enough from the lesslint page, and there do not seem to be many examples. Does anyone know the following:
How to prevent lesslint from writing to the console? I use formatters and the report file is generated, but the output also prints to the console, which I do not want.
How to make lesslint fail only on errors (not warnings)? Also, csslint seems to report errors as well, while lesslint mostly gives warnings; why is that? Does lesslint throw errors at all, and how can I make it fail only on errors?
I tried using the 'checkstyle-xml' formatter, but lesslint does not seem to use it (I have used it with jshint, where it produces properly formatted XML, which it does not for lesslint).
Is it possible to compile LESS (many files or directories) in conjunction with lesslint? Any example?
I'd say it's more common practice to display stdout for this kind of thing; the JSHint plugin does it, as does every other linting plugin that I've used. If you bring in another developer who uses Grunt, they'll probably expect stdout too. If you really want to override this, use grunt-verbosity: https://npmjs.org/package/grunt-verbosity
Again, this is a convention in Grunt: if a task has any warnings, it fails. The reasoning is that if you lint a file and the linter flags something, it should be dealt with straight away rather than delayed; in six months' time you'll have 500 errors that you haven't fixed and be even less likely to fix them. Most linting plugins allow you to specify custom options (I've used CSS Lint, and that is very customisable), so if you don't like a rule you can always disable it.
This should work. If there's a bug with this feature you should report it on the issue tracker, where it will be noticed by the developers of the plugin. https://github.com/kevinsawicki/grunt-lesslint/issues
Yes. You can set up a custom task that runs both your linter and the compile in one step, something like grunt.registerTask('buildless', 'Lint and compile LESS files.', ['lesslint', 'less']); - note that you'll have to install https://github.com/gruntjs/grunt-contrib-less to get that to work. Also note that a lint failure will stop your LESS files from compiling; mandate that your code always passes the lint check and you'll help everyone involved in the project.
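A sketch of such a Gruntfile (the paths and the formatter id are assumptions; check them against the plugin's documentation):
module.exports = function(grunt) {
  grunt.initConfig({
    // Lint the LESS sources and write the report to a file via a formatter.
    lesslint: {
      src: ['less/**/*.less'],
      options: {
        formatters: [
          { id: 'checkstyle-xml', dest: 'report/lesslint.xml' }
        ]
      }
    },
    // Compile the same sources to CSS.
    less: {
      dist: {
        files: { 'dist/style.css': 'less/style.less' }
      }
    }
  });

  grunt.loadNpmTasks('grunt-lesslint');
  grunt.loadNpmTasks('grunt-contrib-less');

  // Lint first; compilation only runs if linting passes.
  grunt.registerTask('buildless', 'Lint and compile LESS files.', ['lesslint', 'less']);
};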

Why is my VS2012 forcing extra parentheses?

First, thank you for taking pity on me and reading this issue. I CANNOT for the life of me figure out what extension I might have installed that is causing this issue, but it is EXTREMELY cumbersome.
Whenever I begin to type code (in VB; I think it also occurs in C#), for example "For Each": as soon as I hit the F, it forces a set of parentheses. That would look like F(), but because I keep typing it ends up as F(or). This only occurs when coding inside code blocks like a function or a sub; it does not occur while I'm creating the function itself. I've disabled any and all power tools and the like (or at least I'm 90% sure I've done this for all of them), and yet it still occurs.
I'm usually pretty proficient at digging around the net and finding the answer, but for this one I'm at a loss. There are just too many keywords involved, so all I find are unrelated topics, or explanations of how to make the parentheses appear, not how to get rid of them.
If anyone can provide some steps to resolve this, I'm happy and eager to try them. It's just such a hassle to live with for right now.
If you think it is a Visual Studio extension, then start by disabling all of them and adding them back one at a time.
You can also run VS with command-line switches to disable features.
Devenv switches
The simple answer to the cause is the Codealike VS extension. I logged a bug with them and hopefully they'll fix it soon.