.pm file:
package fo_condition_editor;
use utf8;
use diagnostics -trace;
use strict;
use warnings FATAL => 'all';
# {...}
use Encode;

my $msg = {};

return 1;

# {..}
sub ..() {
    $msg->{saved} = 1;
    # ...
}
I use this .pm to show a popup. When the form is submitted, the popup is refreshed.
On my local server everything works fine, but on another server I have a problem with the variable $msg: it is empty when printed, yet when I submit again, $msg contains the old values.
I think it is a problem with the Apache configuration.
The problem, if I understand this correctly, is that the line
my $msg = {};
is only executed when this package is required/used for the first time. After that (in the current mod_perl instance) it won't be executed any more, and $msg keeps whatever value it has across subsequent requests.
There are a lot of ways to work around this problem. One scheme I sometimes use is to define a tear-down/reset method for each package/module I use. In the package itself I push a reference to this method onto a global variable, and in the core handler called by mod_perl I have a tear-down/reset routine that iterates over the registered callbacks and calls each of them to reset its data.
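For example, a minimal sketch of such a registry; all names here are hypothetical (My::ResetRegistry is not part of the original code):
package My::ResetRegistry;
use strict;
use warnings;
# Global list of reset callbacks, one per package that registers itself.
our @resetters;
sub register  { push @resetters, $_[0] }
sub reset_all { $_->() for @resetters }
1;
The editor package would then register its own reset callback once, at load time:
package fo_condition_editor;
my $msg = {};
# Re-initialize $msg for every request.
My::ResetRegistry::register(sub { $msg = {} });
and the core mod_perl handler would call My::ResetRegistry::reset_all() at the start (or end) of each request.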
HTH
Georg
Related
I've got a problem with Cypress and the wait command.
I am using code similar to this:
const counter = cy.get('something')
counter.contains('0') //OK
const container = cy.xpath('something multiple').children()
container.click({multiple:true})
//cy.wait(200)
counter.contains('3') //NOK
This code only works when I use cy.wait(). I've tried to use the internal timeout for this code instead, and it doesn't work; only cy.wait() helps.
It is not recommended to save elements in variables; please use aliases instead:
cy.get('something').as('counter');
cy.get('@counter') .....
Read the documentation for your reference:
https://docs.cypress.io/guides/core-concepts/variables-and-aliases.html
@HatemHatamleh is correct about not using the return value.
With cy.*() commands you are defining command execution steps, which run separately from the JS in the tests.
Inserting cy.wait() kind of works because it allows time for the commands to execute, but it's not at all the correct thing to do, since it can still fail depending on CPU load, async calls, etc.
Think of commands as a separate "thread": try to define them with chaining, and when that's not possible, use aliases as @HatemHatamleh suggests.
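For instance, a minimal sketch of the alias-based rewrite (the selectors are the question's placeholders, and cy.xpath assumes the cypress-xpath plugin the question already uses):
cy.get('something').as('counter');
cy.get('@counter').should('contain', '0');
cy.xpath('something multiple').children().click({ multiple: true });
// The chained assertion is retried until it passes or times out,
// so no fixed cy.wait() is needed.
cy.get('@counter').should('contain', '3');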
I'm trying to find out if there's a way to complete a response under mod_perl 2 without returning to the main handler. I haven't been able to find a method for that in the docs so far. The following is an example of what I'm trying to achieve:
#!/usr/bin/perl
# This is some mod_perl handler
use strict;
use warnings;
use Apache2::Const ':common';
sub handler {
    my $r = shift;
    if ($r->method eq 'POST') {
        # just to do something as example
        do_post_response($r);
    }
    $r->content_type('text/plain');
    print "Thank you, goodbye.";
    return Apache2::Const::OK;
}
sub do_post_response {
    my $r = shift;
    unless (check_something()) {
        # Suppose I find a situation that requires
        # a different response than normal...
        $r->content_type('text/plain');
        print "We have a situation...";
        $r->something_to_finish_the_request_immediately(Apache2::Const::OK);
    }
}
In a regular Perl script, running standalone or under mod_cgi, I could just exit() with the new response, but under mod_perl I need to return something from the original handler subroutine. This leads me to keep track of a whole chain of calls, where all of them have to return something until I get back to the main handler.
For example, instead of:
unless (check_something()) { ...
I need to do things like:
my $check = check_something();
return $check if $check;
and I also have to do something similar in the main handler, which is quite ugly for some kinds of situation handling.
Is there a way to close the request when inside a nested call, just like what I tried to illustrate with my example?
EDIT: I've found that I can call goto LABEL and place that label just before the last return in the main handler subroutine. It works, but it still feels like a dirty hack. I really hope there's a nicer way.
I think you are still fine to call exit() because mod_perl overrides what exit does:
exit
In the normal Perl code exit() is used to stop the program flow and exit the Perl interpreter. However, under mod_perl we only want to stop the program flow without killing the Perl interpreter.
You should take no action if your code includes exit() calls and it's OK to continue using them. mod_perl takes care of overriding the exit() function with its own version, which stops the program flow and performs all the necessary cleanups, but doesn't kill the server. This is done by overriding:
*CORE::GLOBAL::exit = \&ModPerl::Util::exit;
https://perl.apache.org/docs/2.0/user/coding/coding.html
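So, applied to the example from the question, a minimal sketch (check_something() is the question's hypothetical helper):
sub do_post_response {
    my $r = shift;
    unless (check_something()) {
        $r->content_type('text/plain');
        print "We have a situation...";
        # Under mod_perl this stops the request flow and performs the
        # cleanups described above, without killing the interpreter.
        exit;
    }
}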
Regarding error handling in PHP: as far as I know, there are three styles:
die() or exit() style:
$con = mysql_connect("localhost","root","password");
if (!$con) {
    die('Could not connect: ' . mysql_error());
}
throw Exception style:
if (!function_exists('curl_init')) {
    throw new Exception('need the cURL PHP extension. Recompile PHP with curl');
}
trigger_error() style:
if (!is_array($config) && isset($config)) {
    trigger_error('Error: config is not an array or is not set', E_USER_ERROR);
}
Now, in the PHP manual all three methods are used.
What I want to know is which style I should prefer, and why.
Are these three drop-in replacements for each other, and can they therefore be used interchangeably?
Slightly OT: Is it just me, or does everyone think PHP's error handling options are so numerous that they end up confusing PHP developers?
The first one should never be used in production code, since it transports information irrelevant to end-users (a user can't do anything about "Cannot connect to database").
You throw Exceptions if you know that at a certain critical code point, your application can fail and you want your code to recover across multiple call-levels.
trigger_error() lets you fine-grain error reporting (by using different levels of error messages), and you can hide those errors from end-users (using set_error_handler()) while still having them displayed to you during testing.
Also, trigger_error() can produce non-fatal messages that are important during development and can be suppressed in production code using a custom error handler. You can produce fatal errors too (E_USER_ERROR), but those aren't recoverable: if you trigger one of them, program execution stops at that point. This is why, for fatal errors, exceptions should be used. This way you'll have more control over your program's flow:
// Example (pseudo-code for db queries):
$db->query('START TRANSACTION');
try {
    while ($row = gather_data()) {
        $db->query('INSERT INTO `table` (`foo`,`bar`) VALUES(?,?)', ...);
    }
    $db->query('COMMIT');
} catch (Exception $e) {
    $db->query('ROLLBACK');
}
Here, if gather_data() just plain croaked (using E_USER_ERROR or die()), there's a chance that previous INSERT statements would have made it into your database even if you didn't want them to, and you'd have no control over what happens next.
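For the non-fatal levels, a minimal sketch of the custom-handler idea mentioned above (the DEVELOPMENT constant is purely illustrative):
// Show trigger_error() notices/warnings during development,
// but only log them in production.
set_error_handler(function ($errno, $errstr, $errfile, $errline) {
    if (defined('DEVELOPMENT') && DEVELOPMENT) {
        echo "[$errno] $errstr in $errfile:$errline\n";
    } else {
        error_log("[$errno] $errstr in $errfile:$errline");
    }
    return true; // don't run PHP's internal handler
});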
I usually use the first way for simple debugging in development code. It is not recommended for production. The best way is to throw an exception, which you can catch in other parts of the program and do some error handling on.
The three styles are not drop-in replacements for each other. The first one is not an error at all, but just a way to stop the script and output some debugging info for you to manually parse. The second one is not an error per se, but will be converted into an error if you don't catch it. The last one is triggering a real error in the PHP engine which will be handled according to the configuration of your PHP environment (in some cases shown to the user, in other cases just logged to a file or not saved at all).
Here's a question for you ;)
I have this function:
function Set-DbFile {
    param(
        [Parameter(ValueFromPipeline=$true)]
        [System.IO.FileInfo[]]
        $InputObject,
        [Parameter(ValueFromPipelineByPropertyName=$true)]
        [scriptblock]
        $Properties
    )
    process {
        $InputObject | % {
            Write-Host "`nInside. Storing $($_.Name)"
            $props = & $Properties
            Write-Host ' properties for the file are: ' -NoNewline
            Write-Host ($props.GetEnumerator() | % { '{0}-{1}' -f $_.Key, $_.Value })
        }
    }
}
Look at $Properties: it should be evaluated for each file, and then the file and the properties should be processed further.
An example of how to use it might be:
Get-ChildItem c:\windows |
    ? { !$_.PsIsContainer } |
    Set-DbFile -prop {
        Write-Host Creating properties for $_.FullName
        @{ Name = $_.Name } # any other properties based on the file
    }
When I copy and paste the Set-DbFile function into the command line and run the example snippet, everything works fine.
However, when I store the function in a module, import it, and run the example, the $_ variable is empty. Does anybody know why, and how to solve it? (Other solutions are welcome as well.)
Results for the function defined in a script or typed at the command line:
Inside. Storing adsvw.ini
Creating properties for C:\windows\adsvw.ini
properties for the file are: Name-adsvw.ini
Inside. Storing ARJ.PIF
Creating properties for C:\windows\ARJ.PIF
properties for the file are: Name-ARJ.PIF
....
Results for the function defined in a module:
Inside. Storing adsvw.ini
Creating properties for
properties for the file are: Name-
Inside. Storing ARJ.PIF
Creating properties for
properties for the file are: Name-
....
The problem here is down to scope hierarchy. If you define two functions like...
function F1 {
    $test = "Hello"
    F2
}
function F2 {
    $test
}
Then F2 will inherit the variable scope of F1, since it is called from F1's scope. If you define function F2 in a module and export the function, the $test variable is not available, because the module has its own scope tree. See the PowerShell Language Specification (Section 3.5.6).
In your case the current pipeline variable ($_) is defined in the local scope, and hence it does not survive into the module scope, since the module is in a different tree with a different scope root (apart from global variables).
To quote the text on the GetNewClosure() method in the Powershell Language Specification (Section 4.3.7):
Retrieves a script block that is bound to a module. Any local variables that are in the context of the caller will be copied into the module.
...hence GetNewClosure() works a treat since it bridges the local scope/module divide. I hope this helps.
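Applied to the question's function, a minimal sketch of the fix; only the invocation line inside the process block changes:
# Bind the current pipeline variable ($_, the current file) into the
# script block before invoking it across the module boundary.
$props = & $Properties.GetNewClosure()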
Looks like GetNewClosure() is as good a workaround as any, but note that it changes the way the script block sees those variables. Passing $_ to the scriptblock as an argument works, too.
It has nothing to do with normal scope issues (e.g., global vs. local), though it appears like that at first. Here's my very simplified reproduction, with some explanation following:
script.ps1 for normal dot-sourcing:
function test-script([scriptblock]$myscript){
    $message = "inside"
    & {write-host "`$message from $message"}
    & $myscript
}
Module\MyTest\MyTest.psm1 for importing:
function test-module([scriptblock]$myscript){
    $message = "inside"
    & {write-host "`$message from $message"}
    & $myscript
}
function test-module-with-closure([scriptblock]$myscript){
    $message = "inside"
    & {write-host "`$message from $message"}
    & $myscript.getnewclosure()
}
Calls and output:
» . .\script.ps1
» import-module mytest
» $message = "outside"
» $block = {write-host "`$message from $message (inside?)"}
» test-script $block
$message from inside
$message from inside (inside?)
» test-module $block
$message from inside
$message from outside (inside?)
» test-module-with-closure $block
$message from inside
$message from inside (inside?)
So I started hunting around since this piqued my curiosity, and I found a few interesting things.
This Q&A, which also features a link to this bug report, covers pretty much the same topic, as do some other blog articles I ran across. But while it was reported as a bug, I disagree.
The about_Scopes help page has this to say:
...
Restricting Without Scope
A few Windows PowerShell concepts are similar to scope or interact with
scope. These concepts may be confused with scope or the behavior of scope.
Sessions, modules, and nested prompts are self-contained environments,
but they are not child scopes of the global scope in the session.
...
Modules:
...
The privacy of a module behaves like a scope, but adding a module
to a session does not change the scope. And, the module does not have
its own scope, although the scripts in the module, like all Windows
PowerShell scripts, do have their own scope.
Now I understand the behavior, but it was the above and a few more experiments that led me to it:
If we change $message in the scriptblock to $local:message, then all three tests print a blank, because $message is not defined in the scriptblock's local scope.
If we use $global:message, all three tests print outside.
If we use $script:message, the first two tests print outside and the last prints inside (all three variants are written out below).
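For reference, the three scope-qualified variants of the test block, with the results just described as comments:
$block = {write-host "`$message from $local:message (inside?)"}   # blank in all three tests
$block = {write-host "`$message from $global:message (inside?)"}  # outside in all three tests
$block = {write-host "`$message from $script:message (inside?)"}  # outside, outside, inside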
Then I also read this in about_Scopes:
Numbered Scopes:
You can refer to scopes by name or by a number that
describes the relative position of one scope to another.
Scope 0 represents the current, or local, scope. Scope 1
indicates the immediate parent scope. Scope 2 indicates the
parent of the parent scope, and so on. Numbered scopes
are useful if you have created many recursive
scopes.
If we use $((Get-Variable -Name message -Scope 1).Value) to try to get the value from the immediate parent scope, what happens? We still get outside rather than inside.
At this point it was clear enough to me that sessions and modules have their own declaration scope or context of sorts, at least for script blocks. Script blocks act like anonymous functions in the environment in which they're declared until you call GetNewClosure() on them, at which point they internalize copies of the same-named variables from the scope where GetNewClosure() was called (locals first, up to globals). A quick demonstration:
$message = 'first message'
$sb = {write-host $message}
&$sb
#output: first message
$message = 'second message'
&$sb
#output: second message
$sb = $sb.getnewclosure()
$message = 'third message'
&$sb
#output: second message
I hope this helps.
Addendum: Regarding design.
JasonMArcher's comment made me think about a design issue with the scriptblock being passed into the module. In the code of your question, even if you use the GetNewClosure() workaround, you have to know the name of the variable(s) where the scriptblock will be executed in order for it to work.
On the other hand, if you used parameters to the scriptblock and passed $_ to it as an argument, the scriptblock does not need to know the variable name, it only needs to know that an argument of a particular type will be passed. So your module would use $props = & $Properties $_ instead of $props = & $Properties.GetNewClosure(), and your scriptblock would look more like this:
{ param([System.IO.FileInfo]$fileinfo)
    Write-Host Creating properties for $fileinfo.FullName
    @{ Name = $fileinfo.Name } # any other properties based on the file
}
See CosmosKey's answer for further clarification.
I believe you need to call GetNewClosure() on that script block before you run it. Called from a script file or module, script blocks are evaluated at compile time. When you work from the console, there is no "compile time": the block is evaluated at run time, so it behaves differently there than it does in the module.
In a previous question I asked about logging PHP errors in MySQL, which gave me:
function myErrorHandler($errno, $errstr, $errfile, $errline)
{
    // mysql connect etc. here...
    $sql = "INSERT INTO `error_log` SET
        `number` = ".mysql_real_escape_string($errno).",
        `string` = ".mysql_real_escape_string($errstr).",
        `file` = ".mysql_real_escape_string($errfile).",
        `line` = ".mysql_real_escape_string($errline);
    mysql_query($sql);
    // Don't execute PHP's internal error handler
    return true;
}
// set to the user defined error handler
$new_error_handler = set_error_handler("myErrorHandler");
I can make this work, but only if the error is triggered like this:
trigger_error("message here");
However, I also want the error handler to be called for all errors, including syntax errors like:
echo "foo;
But these errors are just output to the screen. What am I doing wrong?
You can only handle runtime errors with a custom error handler. The echo "foo error in your example happens while parsing (i.e., reading in) the source. Since PHP cannot fully parse the code, it cannot run your error handler on this error either.
If you need to test whether the syntax is correct, you can use the php_check_syntax() function with a filename parameter (see the PHP manual entry for php_check_syntax).
php_check_syntax() also takes a second parameter which, when used, is populated with the error string, as far as I remember.
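Note that php_check_syntax() was removed in later PHP releases, so a common alternative is to shell out to the CLI linter. A minimal sketch, assuming the php binary is on the PATH (check_syntax() is a hypothetical helper name):
// Lint a file with `php -l`; on failure, $error receives the message.
function check_syntax($file, &$error = null)
{
    $output = shell_exec('php -l ' . escapeshellarg($file) . ' 2>&1');
    if (strpos($output, 'No syntax errors detected') !== false) {
        return true;
    }
    $error = trim($output);
    return false;
}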
That's indeed a terrible way of doing error logging.
You don't need a single advantage a database offers here. Would you ever look up errors by a certain line number, or order your results by file name?
A database is also a source of many errors itself.
You've been told already that it's impossible to catch a parse error at the program-logic level, because a syntactically wrong program will never run.
Let's take your code as an example: it will raise a MySQL error (because of the poorly formed query) which you will never see, as will any other errors that occur. That's what I am talking about.