How to debug Behat features using Xdebug? - behat

Having a hard time with Behat; I can't find a way to debug (PHP/Xdebug with breakpoints and stepping).
Does anybody have experience or maybe there is better way to do the same?
Edited:
"behat/mink": "*",
"behat/mink-extension": "*",
"behat/mink-zombie-driver": "*#dev",
"behat/mink-selenium2-driver": "*"
Testing a regular feature on a website.

You should set the following environment variable before launching your tests:
XDEBUG_CONFIG="XDEBUG_SESSION_START="
It works for me.
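In practice that means prefixing the Behat command itself, along these lines (the vendor/bin/behat path and the feature file are assumptions, adjust to your project; many IDE setups use an idekey value such as idekey=PHPSTORM instead):
XDEBUG_CONFIG="XDEBUG_SESSION_START=" vendor/bin/behat features/my_feature.feature
Make sure your IDE is already listening for Xdebug connections before launching the run.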

In php.ini* find the xdebug section
Add another line saying "xdebug.remote_autostart=1"
Be ready to comment this line out after you're done with CLI testing / debugging.
* the xdebug section may be harder to locate:
there may be multiple php.ini files, e.g. one for CLI, another for Apache.
there may be multiple PHP versions.
xdebug may have its own separate file, e.g. on some Linux distributions it could be under /etc/php/7.0/cli/conf.d/20-xdebug.ini
Once commenting the autostart out gets annoying in the long run, it may be worth investing in a parameterized switch or separate ini files. But I would give myself a week of annoyance before going down that path.
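As a rough sketch, the relevant ini block might end up looking like this (the file path is the one mentioned above; xdebug.remote_enable is an assumption about your existing setup, and note that Xdebug 3 has since renamed these settings):
; e.g. /etc/php/7.0/cli/conf.d/20-xdebug.ini
zend_extension=xdebug.so
xdebug.remote_enable=1
xdebug.remote_autostart=1   ; comment this out again when done with CLI debugging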

Related

Could not find List::Util

I'm trying to compile some Raku code I saw on https://replit.com/languages/raku. The code is from Why is Raku reporting "two terms in a row" when I define a new operator?.
It begins like this:
unit module Format;
use List::Util;
...
It fails to compile with:
 raku ./main.raku
===SORRY!=== Error while compiling /home/runner/l4gp3hvdnhd/./main.raku
Could not find List::Util in:
inst#/home/runner/.raku
inst#/opt/rakudo-pkg/share/perl6/site
inst#/opt/rakudo-pkg/share/perl6/vendor
inst#/opt/rakudo-pkg/share/perl6/core
ap#
nqp#
perl5#
at /home/runner/l4gp3hvdnhd/./main.raku:3
exit status 1
On the other hand I see this is a valid module - https://raku.land/zef:lizmat/List::Util.
Why is it failing?
TL;DR Run zef install --/test List::Util in the console, put use lib '.'; at the top of your Raku main.raku, and run, don't walk, with your program, before gremlins gleefully render your efforts in vain. Or maybe just listen to Liz and Rawley.
As Liz and Rawley have noted, you need List::Util installed.
But while I largely agree with them in practice (it may be a pain to use replit to do what you're trying to do) I think a different response to complement theirs might be helpful.
One of the ways replit is trying to distinguish itself from other online evaluators is that it aims to be akin to a full dev environment.
In reality it's early days for their ambitious project, and beggars can't be choosers (if you're not paying, it's hard to complain when things don't work out as you might want). But of particular relevance to this SO question, replit does have console/shell facilities, and they've installed Rakudo Star, or perhaps just something like it, including the Raku package manager pretty much everyone uses (zef).
Thus this command, which I just ran in replit's console in a new Raku session, worked:
zef install --/test List::Util;
(The --/test tells zef not to run tests. I've only got a free account, and it looked like replit killed zef's process while it was running the tests when I ran plain zef install List::Util. Presumably they take too long, but I don't know.)
And then this main.raku also worked:
use lib '.'; # Tell Raku(do) libs are in current directory.
use List::Util <notall>; # Load and import `notall` from module.
say notall { 42 }, 99; # Try it.
But now the rub. As I was composing this answer, the expected happened. My internet connection momentarily flaked out, replit rebooted the session, and while my main.raku code was rescued, both List::Util and my console history had disappeared, so I had to paste the install command again and rerun it to get the module installed once more.
It's all just throwaway container magic, and there's only so much replit has done thus far to make the simulation of a real full local dev environment really work.
Maybe if your Internet connection is rock solid and/or you're using a paid replit account and/or it's the full moon, it'll all work out. Or maybe you're best off following Rawley's advice.
Speaking of which (I mean Rawley's advice to set up your Raku dev environment locally), if you do install locally you can also install the awesome free version of CommaIDE.
You do not have List::Util installed. Since you're using an online interpreter, you will most likely have a lot of trouble installing it. Instead I recommend installing Raku on your local machine with rakubrew.
Then run the following commands:
rakubrew build # Make sure to follow the instructions at the end
rakubrew build-zef
zef install List::Util
Now you should be able to run your code on your local machine, and you'll have access to the List::Util library.

Gulp with WinSCP - livereload and less

I am using gulp with livereload, less, and others on a remote server. I have successfully used gulp before many times, and have never experienced this scenario.
I am using WinSCP to save/edit files (I double-click the file and it opens in Sublime Text; I save it, and WinSCP automatically uploads it back to the server).
However, when doing it this way gulp-less fails almost all the time. I have two core CSS files that are compiled - one is Bootstrap and one is my own - they should both be compiled with their own gulp tasks upon modification - but Bootstrap fails every time, and the other file fails about half of the time.
When I say fails, here's what I mean - with Bootstrap I always get the same error:
variable @grid-columns is undefined in file ....grid.less line no. 48
This happens even if I define @grid-columns as the first thing in my main Bootstrap "import" file.
The other one fails in that livereload reloads the compiled CSS at some point prior to the Less file being compiled. This should be impossible, but it happens somehow.
However, when I type touch myfile.less or touch anybootstrapfile.less from the SSH command prompt, everything works perfectly.
Obviously this is very annoying, but it's livable. I think there must be some way to fix it though. Any ideas on what in the world could make this happen and what (if anything) I could do to fix it?
Chances are that, when WinSCP uploads the file after you save it in the internal editor, it sets the timestamp of the uploaded file to older than it was before.
This is particularly likely if you are using the FTP protocol, or Windows Vista or older.
For details, refer to WinSCP FAQ:
Why are the changes, I upload to webserver, not visible in the web browser?
Alright. With the help of Martin, who I now realize is the developer of WinSCP, I have solved this. I had been working with the gulp-wait plugin and inserting a pause before the final reload - and it wasn't working.
Because... that wasn't where the problem was. The problem was happening at the time of upload, in that the file was getting 'touched' too soon (or there was something with the timestamp that ended up causing the functional equivalent).
So I moved the wait process to just before the less process was called like so:
gulp.src(src)
    .pipe(plumber({
        errorHandler: onError
    }))
    .pipe(wait(500)) // pause so the uploaded file has settled before compiling
    .pipe(less())
    // etc.
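For context, a minimal sketch of the full task might look like this (the plugin names gulp-plumber, gulp-wait and gulp-less, the src/dest paths, and the onError handler are assumptions filled in around the snippet above):
var gulp = require('gulp');
var plumber = require('gulp-plumber');
var wait = require('gulp-wait');
var less = require('gulp-less');

// Keep the stream alive when a compile fails, so the watch keeps running.
function onError(err) {
    console.log(err.toString());
    this.emit('end');
}

gulp.task('less', function () {
    return gulp.src('less/main.less')
        .pipe(plumber({ errorHandler: onError }))
        .pipe(wait(500))            // give WinSCP time to finish the upload
        .pipe(less())
        .pipe(gulp.dest('css'));
});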
I'm going to experiment; 500 ms is probably longer than needed, but not so long as to be painful. This solved the problem instantly.
Thanks Martin!

Local Monticello repository

I would like to run a local Monticello HTTP repository at work, so that we can share code easily among colleagues.
Is there a way to run something similar to SmalltalkHub privately?
EDIT:
I have tried all the options here and none of them seems to work smoothly. Let me recap the options:
1) WebDAV on Apache, following Stuart. I have tried it, following some online guides. My current dav.conf looks like this:
DavLockDB /tmp/DavLock
Alias /pharo /opt/data/pharo
<Location /pharo>
    Order Allow,Deny
    Allow from all
    Options Indexes MultiViews
    Dav On
    AuthType None
</Location>
It worked for a few days. Then suddenly I am not able to read new versions of a certain package. Whenever I write a version in one image and read it in another, I get a ZnInvalidUTF8 exception. I am not sure why; it may be that WebDAV has issues listing too many files?
2) Setting up my FTP. It seems to work, but when I try to set this repository as a remote in the Versionner I get MCFtpRepository doesNotUnderstand: #koRemote
3) SqueakSource3, following Tobias. I have tried running the two Gofer commands in both Pharo2 and Pharo3. In Pharo2 it does not load at all. In Pharo3, more or less everything works. I had to fix a few errors due to deprecated or removed messages, but in the end I am able to create projects and write to them.
The problem arises when I read. Apparently SS3 keeps some kind of internal cache. The result is that the list of packages I see on the project page is different from the list of packages that the client gets. The difference seems to be that the client requests a short version of the page, like http://localhost:8080/ss/MyProject/?C=M;O%3DD, and the results there are consistently fewer than on the full page http://localhost:8080/ss/MyProject.
Moreover, even on the project page, the list of versions remains cached until I navigate on a different project.
4) SmallTalkHub, following Sean. I have tried both using images from the INRIA server and images suggested from the Pharo-VM-loader (they may be the same).
I had to install Seaside again, since there was no ZnZincAdaptor in the downloaded image. I am now able to start SmalltalkHub, but as soon as I try to register a user, I get an error MessageNotUnderstood: receiver of "new" is nil. I am not able to track down where this error comes from (is there a way to open a server-side debugger instead of returning 500 in Seaside?).
After this error, I can see a user both in mongodb and in the interface, but I am not able to login.
5) Git using filetree, as suggested by Kylon. This would prevent me from using Metacello to handle dependencies and looks even more complex than the other options.
At this point I am at a loss. :-( If I want to use Pharo, I will need to be able to collaborate with my colleagues. Using open source repositories is not an option, at least right now.
Is there a simple, tried and tested way to set up such a repository?
SqueakSource3 or SmalltalkHub would be even better, thanks to their user interfaces, but I really need at least basic collaboration. Having an option that can run on a headless server would also be a big plus: if this becomes a tool we use, it will not be feasible to host the repository on my laptop.
Per this thread on the Pharo Dev mailing list:
Setting up the Server:
Download a SmalltalkHub image (https://ci.inria.fr/pharo-contribution/job/SmalltalkHub/)
Install mongodb on your computer (for Debian: apt-get install mongodb)
Launch the SmalltalkHub image
Evaluate: ZnZincServerAdaptor startOn: 8080
Visit http://localhost:8080/tools/hub, create an account and a project
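Once the server is up, a client image can point at it as an ordinary Monticello HTTP repository; roughly like this (the team/project path, user and password are placeholders for whatever you create in the hub UI):
MCHttpRepository
    location: 'http://localhost:8080/mc/YourTeam/YourProject/main'
    user: 'yourUser'
    password: 'yourPassword'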
In addition to Sean's answer - if you just want a Monticello repository, and don't necessarily need the full SmalltalkHub stuff, then you just need a WebDAV server. Apache will work fine, and I've even used Confluence's WebDAV support (with some tweaking) successfully in the past.
In addition to the other answers:
Just storing your versions in Dropbox works very well!
You can also install SqueakSource3 (unlike SmalltalkHub, it doesn't need MongoDB):
Gofer new
    url: 'http://www.smalltalkhub.com/mc/Seaside/MetacelloConfigurations/main';
    package: 'ConfigurationOfSeaside3';
    load.
((Smalltalk at: #ConfigurationOfSeaside3) project version: #stable) load.

Gofer new
    url: 'http://www.squeaksource.com/MetacelloRepository';
    package: 'ConfigurationOfSqueakSource';
    load.
((Smalltalk at: #ConfigurationOfSqueakSource) project version: #bleedingEdge) load: #('All').
Then start your adaptor (e.g. ZnZincServerAdaptor startOn: 8080) and go to http://localhost:8080/instalSS
Another way is to go down the popular route of Git. I am using GitHub for my projects and it works great, while Git itself works very well locally too. So if you are already familiar with Git, then it's a very good choice.
You can find more information here https://ci.inria.fr/pharo-contribution/job/PharoForTheEnterprise/lastSuccessfulBuild/artifact/GitAndPharo/GitAndPharo.pier.html
Sorry about the bad SmalltalkHub experience. I have made some fixes to the configuration, and need to check if that is enough.

Serving lua pages in apache windows

I have been using PHP for CGI scripting for some time now and recently got interested in Lua.
I installed the latest version of LuaRocks (2.1.2) and the bundled version of Lua (5.1.4). I wanted to start from the basics and hence installed CGILua (5.1.4-2) and all its dependencies using "luarocks install cgilua".
I am able to run simple Lua scripts using the shebang line to point to my Lua interpreter, but when I use it to point to the CGI launcher "cgilua.cgi.exe" to run .lp files, it just won't work. I edited my httpd configuration file to allow CGI execution in my htdocs and cgi-bin directories and used the cgi-script handler for .lp pages. I am trying to run the login.lp example in the CGILua examples directory. I even added the line "Content-type:text/html" to no avail. Executing the cgilua.cgi.exe file from the command line without arguments just closes the application with the message "cgilua.cgi.exe stopped working".
Could anyone tell me what I am missing? Maybe the launcher is supposed to be used in a different way?
I don't suppose permissions play a part in this, as on Windows all users have at least read and execute permissions.
The URL I'm trying to access is http://localhost/login.lp. My Apache error log shows "Premature end of script headers: login.lp" with a 500 internal server error, and the same thing if I access http://localhost/cgilua.cgi.exe
I don't know what your requirements are, but perhaps it will be easier to simply use apache's mod_lua.
http://httpd.apache.org/docs/trunk/mod/mod_lua.html
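As a rough sketch, enabling mod_lua and mapping .lua files to it would look something like this on Windows (the module path and the hello.lua location are assumptions; see the linked docs for the handler contract):
LoadModule lua_module modules/mod_lua.so
AddHandler lua-script .lua
And a minimal handler script, e.g. saved as htdocs/hello.lua:
-- mod_lua calls handle(r) for each request mapped to lua-script
function handle(r)
    r.content_type = "text/plain"
    r:puts("Hello from mod_lua\n")
    return apache2.OK
end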

httpd.conf error when installing mod_wsgi+django

It's been more than 2 weeks; I keep trying to install it but still get an error.
Firstly, I have indeed searched for similar errors, but I didn't find a solution at all. If you find one, please let me know.
Second, this is my situation:
1. I have installed Python 2.7 and Django 1.5.1 (it works).
2. I also installed MAMP.
3. I am trying to configure mod_wsgi and want it integrated with my MAMP Apache server.
4. I am using Mac OS X Mountain Lion 10.8.4.
My configuration files:
/etc/apache2/original/extra/httpd-userdir.conf inside my apache2/original/extra/
/etc/apache2/users/akhyar.conf pastebin.com/zcY58WTV (sorry about this, I am new on Stack Overflow)
/etc/apache2/httpd.conf pastebin.com/je2D8zMz
Third, this is my error:
When I run apachectl configtest, this error appears: my error
So, what is actually going on? Can someone tell me why and show me the mistakes?
Once it's solved, what is the next step for configuring mod_wsgi on my MAMP?
Thanks in advance; any help is highly appreciated.
In this file, line 15, you're including the per-user conf files:
http://pastebin.com/7y7ibuqP
On line 473 of this one, one of those per-user conf files, you're including the above file again:
http://pastebin.com/zcY58WTV
This causes infinite recursion while trying to parse the conf files.
I think there are some other errors too, and to be honest the files are pretty messy, but the best way forward is to remove all Include directives from akhyar.conf. For the most part they're already duplicates; where they're not, inline the contents of those files instead of using Include. If there are other errors, you'll at least see useful line numbers to start tracking them down.
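Schematically, the recursion looks like this (the file names come from the question; the exact paths and Include lines are assumptions based on the standard macOS Apache layout, not the actual pastebin contents):
# extra/httpd-userdir.conf (the file that pulls in the per-user confs)
Include /private/etc/apache2/users/*.conf                 # pulls in users/akhyar.conf

# users/akhyar.conf
Include /private/etc/apache2/extra/httpd-userdir.conf     # pulls the userdir conf back in -> infinite recursion

# Fix: delete the Include lines from akhyar.conf and inline anything that isn't already a duplicate.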
Also note that the [warn] lines are just warnings - you should probably fix them, but the server will still run despite them; they're not what's causing the error.