Maintain $dbh (database handle) across all PHP files - PDO

How many ways are there to maintain $dbh (the database handle) across all PHP files, so that once $dbh is created I can query and update the database from any PHP file at any time, without having to log in again?
1) Declare $dbh as global in every PHP file?
2) Pass $dbh as a parameter to every called function?
3) ?
What other ways are there to query and update without ever having to log in again, and which is the better and simpler approach?
Thanks for your input.
Regards,
Clement

In the file that creates $dbh, put
global $dbh;
...
$dbh = new DatabaseClass();
$dbh->example_login("user","pass");
...
In every file and function that wants to use $dbh, put
global $dbh;
...
$result = $dbh->query("SELECT * FROM XYZ");
...
at the start to mark $dbh as global. You could also use a singleton-type pattern, although this is considered bad practice in PHP.

SQL PowerShell to Create Backup of Objects

I would like to create a PowerShell script that I can run to back up objects to file before updating them. My goal is to back up objects before changing them, in case something breaks. I would like to pass in parameters and run it like the following:
backupobjects.ps1 -servername -databasename -schemaname -objectname -outputdirectory
So if I call this PowerShell script and pass in those parameters, the script will connect to the database, find the object, generate its CREATE script, and save it to the outputdirectory passed in, with BEFORE_objectname.sql as the filename.
I am just starting out in PowerShell, so I have not yet learned how to accept parameters.
Any guidance or suggestions would be helpful.
Rather than write it for you, here are a couple of nudges:
1) param is how you pass in parameters in PowerShell. I like to do it like so:
param (
[string] $server = (Read-Host "Enter a server name"),
[string] $db = (Read-Host "Enter a database name")
)
You then reference $server and $db later in your script as though you'd explicitly initialized them.
2) Most (if not all) objects in SQL Server have a Script() method attached to them. For instance, take a look at the Table class.
3) You can control how objects are scripted using the ScriptingOptions class. When you invoke the Script() method on an object, pass a ScriptingOptions object as an argument and its settings will govern the scripting behavior. A rough sketch tying these nudges together follows.
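For illustration only, here is a hedged sketch of how the pieces might fit together; the parameter names, the choice of a Table object, and the BEFORE_ file naming are assumptions taken from the question, not a finished or tested script:
param (
[string] $server = (Read-Host "Enter a server name"),
[string] $db = (Read-Host "Enter a database name"),
[string] $schema = (Read-Host "Enter a schema name"),
[string] $object = (Read-Host "Enter an object name"),
[string] $outputdirectory = (Read-Host "Enter an output directory")
)
# Load SMO and connect to the instance
[void][System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.Smo")
$smoServer = New-Object Microsoft.SqlServer.Management.Smo.Server $server
# Look up the object (a table here, purely for illustration)
$table = $smoServer.Databases[$db].Tables[$object, $schema]
# ScriptingOptions controls how Script() behaves
$opts = New-Object Microsoft.SqlServer.Management.Smo.ScriptingOptions
$opts.ScriptDrops = $false   # script CREATE rather than DROP
$opts.Indexes = $true        # include indexes
$opts.FileName = Join-Path $outputdirectory ("BEFORE_{0}.sql" -f $object)
$opts.ToFileOnly = $true     # write straight to the file instead of returning strings
# Pass the options object to Script() so its settings govern the output
$table.Script($opts)
If you leave a parameter off when calling the script, the Read-Host default kicks in and prompts for it.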

SQL SMO Objects SqlRestore Method (ReadFileList)

Good afternoon all-
I've searched around quite a bit and found a few good resources on how to dynamically determine the logical data file names contained within a SQL .bak file. The SMO method I'm working with requires that I pass the ServerName; however, my requirement calls for passing the actual file path to the backup. I can get what I need in T-SQL, but I'd really like to find a way to do it leveraging SMO. Below is the T-SQL which gets me the information I require:
RESTORE FILELISTONLY
FROM N'C:\Directory\File.bak'
WITH FILE = 1
Unfortunately SqlRestore.ReadFileList(ServerName) will not work, as the backup set has not been restored to a server yet. Essentially I need this information so I can pass it to Restore.RelocateFiles.Add. I'm actually a DBA just dabbling in C#, so if you need more information just let me know and I will try to fill in the gaps. Thanks for any assistance!
The PowerShell script below shows how you can read the file list from a backup based only on its file path:
$ServerName="SERVER\MYSQLSERVER"
$svrConn = new-object Microsoft.SqlServer.Management.Common.ServerConnection
$svrConn.ServerInstance=$secondaryServerName
$svrConn.LoginSecure = $true
$svr = new-object Microsoft.SqlServer.Management.Smo.Server ($svrConn)
$fullResotrePath = "\\Path\MyDatabase.bak"
$res = new-object Microsoft.SqlServer.Management.Smo.Restore
$res.Devices.AddDevice($fullRestorePath, [Microsoft.SqlServer.Management.Smo.DeviceType]::File)
$dt = $res.ReadFileList($svr)
foreach($r in $dt.Rows)
{
foreach ($c in $dt.Columns)
{
Write-Host $c "=" $r[$c]
}
}
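Since the question mentions Restore.RelocateFiles.Add, here is a hedged follow-on sketch of how the rows returned by ReadFileList() might be fed into it; the target folders $newDataDir and $newLogDir are assumed values, not part of the original script:
$newDataDir = "D:\SQLData"
$newLogDir = "E:\SQLLogs"
foreach ($r in $dt.Rows)
{
# The file list exposes LogicalName, PhysicalName and Type (D = data, L = log) columns
$targetDir = if ($r["Type"] -eq "L") { $newLogDir } else { $newDataDir }
$physical = Join-Path $targetDir (Split-Path $r["PhysicalName"] -Leaf)
# RelocateFile maps a logical file name in the backup to its new physical path
$relocate = New-Object Microsoft.SqlServer.Management.Smo.RelocateFile ($r["LogicalName"], $physical)
[void]$res.RelocateFiles.Add($relocate)
}
With the relocations in place, the Restore object can then be completed (database name, options) and run with SqlRestore().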

Why are my package (our'd) variables getting cleared out between PerlChildInitHandler and PerlResponseHandler in mod_perl2?

I have mod_perl2 running on a virtual host, and I'm trying to make my MySQL connection persistent between requests to handle server load. I have read all of the documentation and a book on the topic, and I still have no idea why this bare-bones implementation of a mod_perl2 web application replies with "It's broken!".
package Test;
use strict;
use warnings;
use Apache2::Const;
use Carp qw{croak};
use DBI;
our $mysql_handle;
sub handler {
print "Content-Type: text/plain\n\n";
print (defined $mysql_handle ? "It's defined!" : "It's broken!");
return Apache2::Const::OK;
}
sub child_init {
my ($db, $host, $port, $user, $pass)
= qw{app_db localhost 3306 app_user app_pass};
$mysql_handle
= DBI->connect("dbi:mysql:database=$db;host=$host;port=$port", $user, $pass)
or croak("Failed to establish a connection with mysqld: $DBI::errstr");
return Apache2::Const::OK;
}
1;
This is very strange and makes no sense at all to me. It's as if $mysql_handle is lexically-scoped -- when it's not! Please, can someone explain this to me?
You should look at Apache::DBI for MySQL connection persistence in mod_perl. It overloads DBI's connect and disconnect, which allows you to use DBI->connect(...) normally, with the added benefit that the code works in or out of a mod_perl environment.
As far as the scoping issue goes, I'd need a little more detail on your mod_perl setup. I would try use vars '$mysql_handle', or even $Test::mysql_handle = DBI->connect(...), and see if you get the results you are looking for.

$_ variable used in function from a module is empty (PowerShell)

Here's a question for you ;)
I have this function:
function Set-DbFile {
    param(
        [Parameter(ValueFromPipeline=$true)]
        [System.IO.FileInfo[]]
        $InputObject,
        [Parameter(ValueFromPipelineByPropertyName=$true)]
        [scriptblock]
        $Properties
    )
    process {
        $InputObject | % {
            Write-Host `nInside. Storing $_.Name
            $props = & $Properties
            Write-Host ' properties for the file are: ' -nonew
            Write-Host ($props.GetEnumerator() | % {"{0}-{1}" -f $_.key,$_.Value})
        }
    }
}
Look at $Properties. It should be evaluated for each file, and then the file and the properties should be processed further.
An example of how to use it might be:
Get-ChildItem c:\windows |
    ? { !$_.PsIsContainer } |
    Set-DbFile -prop {
        Write-Host Creating properties for $_.FullName
        @{Name=$_.Name } # any other properties based on the file
    }
When I copy & paste the Set-DbFile function into the command line and run the example snippet, everything is fine.
However, when I store the function in a module, import it and run the example, the $_ variable is empty. Does anybody know why? And how to solve it? (Other solutions are welcome as well.)
Results for the function defined in a script/typed at the command line:
Inside. Storing adsvw.ini
Creating properties for C:\windows\adsvw.ini
properties for the file are: Name-adsvw.ini
Inside. Storing ARJ.PIF
Creating properties for C:\windows\ARJ.PIF
properties for the file are: Name-ARJ.PIF
....
Results for function defined in module:
Inside. Storing adsvw.ini
Creating properties for
properties for the file are: Name-
Inside. Storing ARJ.PIF
Creating properties for
properties for the file are: Name-
....
The problem here is down to scope hierarchy. If you define two functions like...
function F1{
$test="Hello"
F2
}
function F2{
$test
}
Then F2 will inherit the variable scope of F1, since it's called from F1's scope. If you define function F2 in a module and export the function, the $test variable is not available, since the module has its own scope tree. See the PowerShell Language Specification (Section 3.5.6).
In your case the current-item variable ($_) is defined in the local scope, and hence it will not survive into the module scope, since the module is in a different tree with a different scope root (apart from global variables).
To quote the text on the GetNewClosure() method in the PowerShell Language Specification (Section 4.3.7):
Retrieves a script block that is bound to a module. Any local variables that are in the context of the caller will be copied into the module.
...hence GetNewClosure() works a treat since it bridges the local scope/module divide. I hope this helps.
Looks like GetNewClosure() is as good a workaround as any, but it changes the way the script block sees those variables. Passing $_ to the scriptblock as an argument works, too.
It has nothing to do with normal scope issues (e.g., global vs. local), but it appears that way at first. Here's my very simplified reproduction, with some explanation following:
script.ps1 for normal dot-sourcing:
function test-script([scriptblock]$myscript){
$message = "inside"
&{write-host "`$message from $message"}
&$myscript
}
Module\MyTest\MyTest.psm1 for importing:
function test-module([scriptblock]$myscript){
$message = "inside"
&{write-host "`$message from $message"}
&$myscript
}
function test-module-with-closure([scriptblock]$myscript){
$message = "inside"
&{write-host "`$message from $message"}
&$myscript.getnewclosure()
}
Calls and output:
» . .\script.ps1
» import-module mytest
» $message = "outside"
» $block = {write-host "`$message from $message (inside?)"}
» test-script $block
$message from inside
$message from inside (inside?)
» test-module $block
$message from inside
$message from outside (inside?)
» test-module-with-closure $block
$message from inside
$message from inside (inside?)
So I started hunting around since this piqued my curiosity, and I found a few interesting things.
This Q&A, which also features a link to this bug report is pretty much the exact same topic, as are some other blog articles I ran across. But while it was reported as a bug, I disagree.
The about_Scopes page has this to say:
...
Restricting Without Scope
A few Windows PowerShell concepts are similar to scope or interact with
scope. These concepts may be confused with scope or the behavior of scope.
Sessions, modules, and nested prompts are self-contained environments,
but they are not child scopes of the global scope in the session.
...
Modules:
...
The privacy of a module behaves like a scope, but adding a module
to a session does not change the scope. And, the module does not have
its own scope, although the scripts in the module, like all Windows
PowerShell scripts, do have their own scope.
Now I understand the behavior, but it was the above and a few more experiments that led me to it:
If we change $message in the scriptblock to $local:message then all 3 tests have a blank space, because $message is not defined in the scriptblock's local scope.
If we use $global:message, all 3 tests print outside.
If we use $script:message, the first 2 tests print outside and the last prints inside.
Then I also read this in about_Scopes:
Numbered Scopes:
You can refer to scopes by name or by a number that
describes the relative position of one scope to another.
Scope 0 represents the current, or local, scope. Scope 1
indicates the immediate parent scope. Scope 2 indicates the
parent of the parent scope, and so on. Numbered scopes
are useful if you have created many recursive
scopes.
If we use $((get-variable -name message -scope 1).value) in order to attempt getting the value from the immediate parent scope, what happens? We still get outside rather than inside.
At this point it was clear enough to me that sessions and modules have their own declaration scope or context of sorts, at least for script blocks. Script blocks act like anonymous functions in the environment in which they're declared until you call GetNewClosure() on them, at which point they internalize copies of the same-named variables they reference from the scope where GetNewClosure() was called (using locals first, up to globals). A quick demonstration:
$message = 'first message'
$sb = {write-host $message}
&$sb
#output: first message
$message = 'second message'
&$sb
#output: second message
$sb = $sb.getnewclosure()
$message = 'third message'
&$sb
#output: second message
I hope this helps.
Addendum: Regarding design.
JasonMArcher's comment made me think about a design issue with the scriptblock being passed into the module. In the code of your question, even if you use the GetNewClosure() workaround, you have to know the name of the variable(s) in the scope where the scriptblock will be executed in order for it to work.
On the other hand, if you used parameters to the scriptblock and passed $_ to it as an argument, the scriptblock does not need to know the variable name, it only needs to know that an argument of a particular type will be passed. So your module would use $props = & $Properties $_ instead of $props = & $Properties.GetNewClosure(), and your scriptblock would look more like this:
{ param([System.IO.FileInfo]$fileinfo)
    Write-Host Creating properties for $fileinfo.FullName
    @{Name=$fileinfo.Name } # any other properties based on the file
}
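Putting it together, and assuming Set-DbFile has been changed to invoke the block as $props = & $Properties $_ (as described above), the call from the question would then look something like this:
Get-ChildItem c:\windows |
    ? { !$_.PsIsContainer } |
    Set-DbFile -prop {
        param([System.IO.FileInfo]$fileinfo)
        # $fileinfo receives the current file that Set-DbFile passes in as an argument
        Write-Host Creating properties for $fileinfo.FullName
        @{Name=$fileinfo.Name } # any other properties based on the file
    }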
See CosmosKey's answer for further clarification.
I believe you need to call GetNewClosure() on that script block before you run it. Called from a script file or module, script blocks are evaluated at compile time. When you work from the console, there is no "compile time"; everything is evaluated at run time, so it behaves differently there than when it's in the module.

How to correctly free a PDO instance

For example:
$db = new PDO();
// some code using $db here
// and next, i want to free this var and close all connection and so on
$db = NULL; // or how correctly?
Is that the correct way to free all SQL results and connections?
You could do that, but it's often not necessary. If $db was created in a function and no other variables reference it, it will release its connection when it goes out of scope (usually at the end of the function). If $db is a global, it will be released when the script ends.