For example:
$db = new PDO(/* dsn, user, password */);
// some code using $db here
// and next, i want to free this var and close all connection and so on
$db = null; // or what is the correct way?
Is that the correct way to free all SQL results and connections?
You could do that, but it's often not necessary. If $db is created in a function and no other variables reference it, it will be released when it goes out of scope (usually at the end of the function). If $db is a global, it will be released when the script ends.
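For example, a minimal sketch (the DSN and credentials are placeholders):

function doWork()
{
    $db = new PDO('mysql:host=localhost;dbname=test', 'user', 'pass');
    // ... queries using $db ...
}   // $db goes out of scope here; the connection is closed automatically

$db = new PDO('mysql:host=localhost;dbname=test', 'user', 'pass');
// ... queries using $db ...
$db = null; // dropping the last reference closes the connection right away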
When trying to run the following against Redis using BookSleeve:
using (var conn = new RedisConnection(server, port, -1, password))
{
var result = conn.Server.FlushDb(0);
result.Wait();
}
I get an error saying:
This command is not available unless the connection is created with
admin-commands enabled
I am not sure how to execute commands as admin. Do I need to create an account in the DB with admin access and log in with that?
Updated answer for StackExchange.Redis:
var conn = ConnectionMultiplexer.Connect("localhost,allowAdmin=true");
Note also that the object created here should be created once per application and shared as a global singleton, per Marc:
Because the ConnectionMultiplexer does a lot, it is designed to be
shared and reused between callers. You should not create a
ConnectionMultiplexer per operation. It is fully thread-safe and ready
for this usage.
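One common way to do that (a minimal sketch; the class name and connection string are mine):

using System;
using StackExchange.Redis;

public static class RedisStore
{
    // Created once on first use, then shared by every caller
    private static readonly Lazy<ConnectionMultiplexer> LazyConnection =
        new Lazy<ConnectionMultiplexer>(() =>
            ConnectionMultiplexer.Connect("localhost,allowAdmin=true"));

    public static ConnectionMultiplexer Connection
    {
        get { return LazyConnection.Value; }
    }
}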
Basically, the dangerous commands that you don't need in routine operations, but which can cause lots of problems if used inappropriately (i.e. the equivalent of DROP DATABASE in T-SQL, since your example is FlushDb), are protected by a "yes, I meant to do that..." flag:
using (var conn = new RedisConnection(server, port, -1, password,
allowAdmin: true)) <==== here
I will improve the error message to make this very clear and explicit.
You can also set this in C# when you're creating your multiplexer - set AllowAdmin = true
private ConnectionMultiplexer GetConnectionMultiplexer()
{
var options = ConfigurationOptions.Parse("localhost:6379");
options.ConnectRetry = 5;
options.AllowAdmin = true;
return ConnectionMultiplexer.Connect(options);
}
For those who, like me, faced the error:
StackExchange.Redis.RedisCommandException: This operation is not
available unless admin mode is enabled: ROLE
after upgrading StackExchange.Redis to version 2.2.4 with a Sentinel connection: it's a known bug; the workaround was either to downgrade the client or to add allowAdmin=true to the connection string while waiting for the fix.
As of the 2.2.50 public release, the issue is fixed.
How many ways are there to maintain $dbh (database handle) across all PHP files,
so that once $dbh is created, I can query and update the database from any PHP file at any time, without having to log in again?
1) Declare $dbh as global in every PHP file?
2) Pass $dbh as a parameter to each called function?
3) ?
What other ways are there to query and update without ever having to log in again, and which is the simplest?
Thanks for your input.
regards
Clement
In the file that creates $dbh, put
global $dbh;
...
$dbh = new DatabaseClass();
$dbh->example_login("user","pass");
...
In every file and function that wants to use $dbh, put
global $dbh;
...
$result = $dbh->query("SELECT * FROM XYZ");
...
at the start to mark $dbh as global. You could also use a singleton-type pattern (sketched below), although this is considered bad practice in PHP.
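For completeness, a minimal sketch of such a singleton wrapper (reusing the hypothetical DatabaseClass from above):

class Database
{
    private static $instance = null;

    public static function get()
    {
        // Create the handle on first use, then reuse it
        if (self::$instance === null) {
            self::$instance = new DatabaseClass();
            self::$instance->example_login("user", "pass");
        }
        return self::$instance;
    }
}

// Anywhere, without a global declaration:
$result = Database::get()->query("SELECT * FROM XYZ");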
I would like to create a PowerShell script that I can run to back up objects to file before updating them. My goal is to back up objects before changing them in case something breaks. I would like to pass in parameters like the following:
backupobjects.ps1 -servername -databasename -schemaname -objectname -outputdirectory
So if I call this PowerShell script with those parameters, it will connect to the database, find the object, generate its CREATE script, and save it to the output directory with BEFORE_objectname.sql as the filename.
I am just starting with PowerShell, so I have not yet learned how to accept parameters.
Any guidance or suggestions would be helpful.
Rather than write it for you, here are a couple of nudges:
1) param is how you pass in parameters in PowerShell. I like to do it like so:
param (
[string] $server = (Read-Host "Enter a server name"),
[string] $db = (Read-Host "Enter a database name")
)
You then reference $server and $db later in your script as though you'd explicitly initialized them.
2) Most (if not all) objects in SQL Server have a Script() method attached to them. For instance, take a look at the Table class.
3) You can control how objects are scripted using the ScriptingOptions class. When you invoke the Script() method on an object, pass a ScriptingOptions object as an argument and the scripting behavior will be governed by it.
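Putting those nudges together, here is a minimal sketch; it assumes SMO is installed and that the object is a table (adapt the lookup for views, procedures, etc.):

param (
    [string] $server = (Read-Host "Enter a server name"),
    [string] $db     = (Read-Host "Enter a database name"),
    [string] $schema = (Read-Host "Enter a schema name"),
    [string] $object = (Read-Host "Enter an object name"),
    [string] $outDir = (Read-Host "Enter an output directory")
)

# Load SMO and connect to the server
[System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.Smo") | Out-Null
$smoServer = New-Object Microsoft.SqlServer.Management.Smo.Server $server

# Look up the object and prepare the scripting options
$table   = $smoServer.Databases[$db].Tables[$object, $schema]
$options = New-Object Microsoft.SqlServer.Management.Smo.ScriptingOptions
$options.ScriptDrops = $false   # script CREATE, not DROP
$options.DriAll      = $true    # include keys, constraints, etc.

# Script() returns a collection of strings; write it out as BEFORE_objectname.sql
$table.Script($options) | Out-File (Join-Path $outDir "BEFORE_$object.sql")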
I have a lot of trouble with the combination of Symfony2 and Doctrine2. I have to deal with huge datasets (around 2-3 million reads and writes) and have to put in a lot of additional effort to avoid running out of memory.
I figured out two main points that "leak" memory (they are not actually leaking, but allocating a lot):
The EntityManager's entity storage (I don't know the real name of this one): it seems to keep all processed entities, and you have to clear this storage regularly with
$entityManager->clear()
The Doctrine query cache: it caches all executed queries, and the only configuration I found lets you decide what kind of cache to use. I found neither a global disable nor a useful per-query flag to disable it.
So I usually disable it on every query object with
$qb = $repository->createQueryBuilder('a');
$query = $qb->getQuery();
$query->useQueryCache(false);
$query->execute();
So, that's all I have figured out so far.
My questions are:
Is there an easy way to exclude some objects from the EntityManager's storage?
Is there a way to configure query-cache usage on the EntityManager?
Can I configure these caching behaviors somewhere in the Symfony/Doctrine configuration?
It would be very cool if someone has some nice tips for me; otherwise this may help some other rookie.
cya
As stated in the Doctrine Configuration Reference, by default logging of the SQL connection is set to the value of kernel.debug, so if you have instantiated AppKernel with debug set to true, the SQL commands get stored in memory for each iteration.
You should either instantiate AppKernel with debug set to false, set logging to false in your config YAML, or set the SQLLogger manually to null before using the EntityManager:
$em->getConnection()->getConfiguration()->setSQLLogger(null);
Try running your command with --no-debug. In debug mode the profiler retains information about every single query in memory.
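For example (the command name is made up):

php app/console acme:process-customers --no-debug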
1. Turn off logging and profiling in app/config/config.yml
doctrine:
dbal:
driver: ...
...
logging: false
profiling: false
or in code
$this->entityManager->getConnection()->getConfiguration()->setSQLLogger(null);
2. Force the garbage collector. If the CPU is busy, the garbage collector waits, and you can soon find yourself out of memory.
First enable manual garbage-collection management by running gc_enable() anywhere in the code. Then run gc_collect_cycles() to force a collection.
Example
public function execute(InputInterface $input, OutputInterface $output)
{
gc_enable();
// I'm initializing $this->entityManager in __construct using DependencyInjection
$customers = $this->entityManager->getRepository(Customer::class)->findAll();
$counter = 0;
foreach ($customers as $customer) {
// process customer - some logic here, $this->entityManager->persist() and so on
if (++$counter % 100 == 0) {
$this->entityManager->flush(); // save unsaved changes
$this->entityManager->clear(); // clear doctrine managed entities
gc_collect_cycles(); // PHP garbage collect
// Note that $this->entityManager->clear() detaches all managed entities;
// maybe you still need some of them, so re-initialize them here
}
}
// don't forget to flush in the end
$this->entityManager->flush();
$this->entityManager->clear();
gc_collect_cycles();
}
If your table is very large, don't use findAll. Use an iterator (a sketch follows) - http://doctrine-orm.readthedocs.org/projects/doctrine-orm/en/latest/reference/batch-processing.html#iterating-results
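A minimal sketch of the iterating approach (reusing the Customer entity from the example above; the entity namespace is an assumption):

$q = $this->entityManager->createQuery('SELECT c FROM App\Entity\Customer c');
foreach ($q->iterate() as $row) {
    $customer = $row[0]; // iterate() wraps each result row in an array
    // ... process $customer ...
    $this->entityManager->detach($customer); // release it so memory can be reclaimed
}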
Set SQL logger to null
$em->getConnection()->getConfiguration()->setSQLLogger(null);
Manually call function gc_collect_cycles() after $em->clear()
$em->clear();
gc_collect_cycles();
Don't forget to set zend.enable_gc to 1, or manually call gc_enable() before using gc_collect_cycles().
Add the --no-debug option if you run the command from the console.
I got some "funny" news from the Doctrine developers themselves at Symfony Live in Berlin: they say that for large batches you should not use an ORM; it is just not efficient to build stuff like that in OOP.
Yeah, maybe they are right xD
As per the standard Doctrine2 documentation, you'll need to manually clear or detach entities.
In addition to that, when profiling is enabled (as in the default dev environment), the DoctrineBundle in Symfony2 configures several loggers that use quite a bit of memory. You can disable logging completely, but it is not required.
An interesting side effect is that the loggers affect both Doctrine ORM and DBAL. One of the loggers will result in additional memory usage for any service that uses the default logger service. Disabling all of these would be ideal in commands, since the profiler isn't used there yet.
Here is what you can do to disable the memory-intensive loggers while keeping profiling enabled in other parts of Symfony2:
$c = $this->getContainer();
/*
* The default dbalLogger is configured to keep "stopwatch" events for every query executed
 * the only way to disable this, as of Symfony 2.3, Doctrine Bundle 1.2, is to reinstantiate the class
*/
$dbalLoggerClass = $c->getParameter('doctrine.dbal.logger.class');
$dbalLogger = new $dbalLoggerClass($c->get('logger'));
$c->set('doctrine.dbal.logger', $dbalLogger);
// sometimes you need to configure doctrine to use the new logger manually, like this
$doctrineConfiguration = $c->get('doctrine')->getManager()->getConnection()->getConfiguration();
$doctrineConfiguration->setSQLLogger($dbalLogger);
/*
* If profiling is enabled, this service will store every query in an array
* fortunately, this is configurable with a property "enabled"
*/
if($c->has('doctrine.dbal.logger.profiling.default'))
{
$c->get('doctrine.dbal.logger.profiling.default')->enabled = false;
}
/*
* When profiling is enabled, the Monolog bundle configures a DebugHandler that
 * will store every log message in memory.
*
* As of Monolog 1.6, to remove/disable this logger: we have to pop all the handlers
* and then push them back on (in the correct order)
*/
$handlers = array();
$logger = $c->get('logger'); // fetch the default logger from the container
try
{
while($handler = $logger->popHandler())
{
if($handler instanceOf \Symfony\Bridge\Monolog\Handler\DebugHandler)
{
continue;
}
array_unshift($handlers, $handler);
}
}
catch(\LogicException $e)
{
/*
* As of Monolog 1.6, there is no way to know if there's a handler
* available to pop off except for the \LogicException that's thrown.
*/
if($e->getMessage() != 'You tried to pop from an empty handler stack.')
{
/*
 * this probably doesn't matter, and will probably break in the future.
 * This is here for the sake of people not knowing what they're doing,
 * so that an unknown exception is not silently discarded.
*/
// remove at your own risk
throw $e;
}
}
// push the handlers back on
foreach($handlers as $handler)
{
$logger->pushHandler($handler);
}
Try disabling any Doctrine caches that exist. (If you're not using APC or another external cache, then memory is used.)
Remove Query Cache
$qb = $repository->createQueryBuilder('a');
$query = $qb->getQuery();
$query->useQueryCache(false);
$query->useResultCache(false);
$query->execute();
There's no way to globally disable it.
Also, this is an alternative to clear() that might help (from here):
$connection = $em->getCurrentConnection();
$tables = $connection->getTables();
foreach ( $tables as $table ) {
$table->clear();
}
I just posted a bunch of tips for using Symfony console commands with Doctrine for batch processing here.
ORM settings in ColdFusion's Application.cfc run before anything else (onApplicationStart, etc.). So how do you set a dynamic datasource (code that runs before the ORM init) in Application.cfc? We can set it afterwards, and that re-points the ORM to a dynamic datasource, but that requires that the hardcoded datasource be valid as well. This is tenuous at best.
Here is an example:
<cfscript>
this.name = "someapp_#hash(cgi.http_host)#";
this.ormenabled = "true";
this.ormsettings = { cfclocation = "config/definitions", eventhandling = "true",datasource="STATICDATASOURCE" };
</cfscript>
If it's not specified in the Application.cfc scope, then you get errors like "ORM is not configured for the current application."
We need to be able to get the datasource from a text file on the server.
this.datasource="YourDatasourceName";
Well, if you wanted to store a file, say "datasource.xml", consisting of:
<dataSourceName>Name goes here</dataSourceName>
You can read it in with:
dataFile = fileRead("pathToFile/datasource.xml");
data = xmlParse(dataFile);
dataSourceName = data.dataSourceName.xmlText;
this.datasource=dataSourceName;
The ORM datasource just uses the application's default datasource if not defined.
Having said that, if you want to add / remove datasource dynamically, see Administrator API at: http://help.adobe.com/en_US/ColdFusion/9.0/Admin/WSc3ff6d0ea77859461172e0811cbf364104-7fcf.html (available since CF8)
I'm not sure if you can re-set this.ormsettings.datasource to something else at runtime (i.e. in onApplicationStart() or onServerStart()?), but many of the settings can be set again. You may want to try it out.