After I update my manufacturer table, I set the global state as follows:
Yii::app()->setGlobalState('manufacturer_updated', new DateTime());
But the cached listing returned by the getAllManufacturers() call that feeds the dropdown box is not being refreshed:
<?php echo $form->dropDownList($model,'manufacturer_id', CHtml::listData(Manufacturer::model()->getAllManufacturers(), 'id', 'name'), array('prompt'=>'Select Manufacturer')); ?>
The function in my model called by the dropdown:
public function getAllManufacturers()
{
    $sql = 'SELECT id, name FROM manufacturer WHERE store_id = :store_id ORDER BY name';
    $cmd = Yii::app()->db->cache(72000, new CGlobalStateCacheDependency('manufacturer_updated'))->createCommand($sql); // 20-hour cache
    $cmd->bindValue(':store_id', Yii::app()->session["current_store_id"], PDO::PARAM_INT);
    return $cmd->queryAll();
}
It actually worked two or three times in the beginning; after that, the cache was never refreshed again. Changing the expiry time to 0 causes it to refresh, but when I set the expiry back to its original value, the cache strangely serves its stale data again.
Why is this query cache still serving stale data?
It is working well now after restarting my memcached server. It seems that whenever we change code that touches this, we have to restart the cache server. Can anyone comment on this?
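For what it's worth, before blaming the server I would also try storing a plain scalar instead of a DateTime object as the global state, so the dependency has a trivially comparable value to check. A minimal sketch against the Yii 1.x API used above; hooking afterSave()/afterDelete() is my assumption about where the table gets updated:

// Sketch (Yii 1.x): store a scalar timestamp so CGlobalStateCacheDependency
// compares simple values, bumped whenever a manufacturer row changes.
class Manufacturer extends CActiveRecord
{
    protected function afterSave()
    {
        parent::afterSave();
        Yii::app()->setGlobalState('manufacturer_updated', microtime(true));
    }

    protected function afterDelete()
    {
        parent::afterDelete();
        Yii::app()->setGlobalState('manufacturer_updated', microtime(true));
    }
}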
Execute the following server code, then check the promotion and task tables in the database: the related fields have been updated correctly, which indicates that the transaction was committed successfully.
using (ITransaction tx = session.BeginTransaction())
{
    try
    {
        Promotion p = session.Get<Promotion>(request.PromotionId);
        p.Status = PromotionStatus.Canceled;
        foreach (Task task in p.Tasks)
        {
            if (task.AnnounceStatus == TaskAnnounceStatus.New)
            {
                task.AnnounceStatus = TaskAnnounceStatus.PromotionCanceled;
                task.CancelTime = DateTime.Now;
                //session.Update(task);
            }
        }
        tx.Commit();
    }
    catch
    {
        tx.Rollback();
        throw;
    }
}
Then execute the following query (Query A); the data returned also reflects the updated values. So far everything looks fine.
tasks = session.Query<Task>().Where(p => p.AnnounceStatus == Model.TaskAnnounceStatus.New && p.ProcessStatus == Model.TaskProcessStatus.New).ToList();
However, if I query the task with the following code before committing the transaction, the query above (Query A) returns the old, unmodified values, while the database itself still shows the correctly updated values.
Task task = session.Get<Task>(taskId);
So I modified the first piece of code to call the update method explicitly (see the commented-out line), and this time everything worked fine.
My guess is that NHibernate's cache is causing the problem. I use SysCache2 to manage the second-level cache, the cache strategy is set to ReadWrite, and I use sessionFactory.GetCurrentSession to manage NHibernate's sessions.
I hope someone can explain how this works.
You execute session.Get<Task>(taskId) first. This loads the entity into the first-level cache.
Then, inside your transaction, you Get the Promotion entity. Task is an IEnumerable property on it. Through lazy loading, your foreach loop iterates over the Task entities, including the one with ID taskId, modifies it, and updates it, and the transaction commits successfully. Because all of this happens inside the transaction, the instance originally returned by session.Get<Task>(taskId) is not updated; it still holds the old values.
Then you run session.Query<Task>() again, outside the transaction. This time NHibernate sees that an entity with the same identifier is already loaded in the session cache (by the session.Get<Task>(taskId) call), so it does not hydrate that entity again; it simply returns the instance already in the session cache. As that instance holds the old values, you see the problem.
To confirm this, put all of these queries inside the transaction block and check the result; a sketch follows below.
Alternatively, manage the scope of your session properly. Understand that your ISession is your Unit of Work; scope it carefully.
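A minimal sketch of that confirmation, reusing names from the question (ISession.Refresh is standard NHibernate API; treat the exact placement as illustrative rather than the definitive fix):

// Load the entity first; it now sits in the first-level cache.
Task task = session.Get<Task>(taskId);

using (ITransaction tx = session.BeginTransaction())
{
    // ... the promotion/task updates from the question go here ...
    tx.Commit();
}

// The instance loaded above may now be stale; force NHibernate to re-read
// the row instead of serving the cached copy.
session.Refresh(task);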
I have this PDO statement:
parent::$db->custom('UPDATE users_credits SET availabe = availabe - :reward_credits, used = used + :reward_credits WHERE user_id = :user_id', array(
    'reward_credits' => $reward_credits,
    'user_id' => $user_id
));
For some reason it simply does not work. I tried the very same query on the DB manually and it works.
What's wrong with PDO, and how do I achieve the same result I get when running the query manually?
Thanks for any suggestions.
First of all: there is nothing wrong with PDO, and there never has been.
It is some of your own custom code that is to blame.
A simple checklist for solving any PDO-related problem:
Make sure you can see all PHP errors.
Configure PDO to throw exceptions on SQL errors by calling this after connecting:
$dbh->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
Debug your code.
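As a minimal sketch of that checklist applied to the query above, bypassing the custom wrapper and going straight through PDO (the connection details are placeholders). One plausible culprit worth knowing about: with native prepares, PDO does not allow the same named placeholder, such as :reward_credits, to appear twice in one statement unless emulated prepares are enabled.

// Sketch: plain PDO with exceptions enabled, so failures surface
// instead of disappearing inside a custom wrapper.
$dbh = new PDO('mysql:host=localhost;dbname=test', 'user', 'pass');
$dbh->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

// Bind two distinct names to the same value, since a named placeholder
// may not be reused with native prepares.
$stmt = $dbh->prepare(
    'UPDATE users_credits
     SET availabe = availabe - :credits_a, used = used + :credits_b
     WHERE user_id = :user_id'
);
$stmt->execute(array(
    ':credits_a' => $reward_credits,
    ':credits_b' => $reward_credits,
    ':user_id'   => $user_id,
));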
I don't understand why my request returns an empty array with the code below.
I am using Grails and an H2 database.
Animal lion = new Animal()
lion.save()
println lion.id
println sql.rows("select * from animal")
The outputs are
1
[]
Why do I get an empty array?
If I go and check the in-memory database at
localhost/Zoo/dbconsole
I can see the row as expected. Is there some kind of time limit I have to wait for before making my SQL request?
Is this in Grails? If so, try:
lion.save( flush: true )
It's probably that Hibernate hasn't flushed the changes to the database before you run your select (especially as it looks like the code above all runs in the same transaction).
I have a completely empty RavenHQ database linked to my Appharbor application. The database is currently using 1.1 MB of the 25 MB available on my bronze account. The database previously had records in it, but I deleted them using "delete collection" in the management studio.
The very first time I call session.Store(myobject), and BEFORE I call .SaveChanges(), I get the following error:
System.InvalidOperationException: Url: "/docs/Raven/Hilo/AccItems"
Raven.Database.Exceptions.OperationVetoedException: PUT vetoed by Raven.Bundles.Quotas.Triggers.DatabaseSizeQoutaForDocumetsPutTrigger because: Database size is 45,347 KB, which is over the allowed quota of 25,600 KB. No more documents are allowed in.
Now, the document is definitely not that big, so I don't know what this error can mean, especially as I don't think I've even hit the database at that point, since I haven't called SaveChanges() yet. Any ideas? Here's the code itself:
XDocument doc = XDocument.Parse(rawXml);
var accItems = ExtractItemsFromFeed(doc);

using (IDocumentSession session = _store.OpenSession())
{
    var dbItems = session.Query<AccItem>().ToList();
    foreach (var item in accItems)
    {
        var existingRecord = dbItems.SingleOrDefault(x => x.Source == item.Source && x.SourceId == item.SourceId);
        if (existingRecord == null)
        {
            session.Store(item);
            _logger.Info("Saved new item {0}.", item.ShortName);
        }
        else
        {
            existingRecord.ShortName = item.ShortName;
            _logger.Info("Updated item {0}.", item.ShortName);
        }
        session.SaveChanges();
    }
}
Any other comments about the style of this code would be most welcome, as I was unsure of the best way to approach the "update existing item or create if it isn't there" scenario.
The answer here was as follows.
RavenHQ support found that the database was indeed oversized, but it seemed that the size reported in the Appharbor-branded RavenHQ control panel was incorrect. I had filled the database way over the limit with a previous, faulty version of the code posted above, so the error message I received was actually correct.
Fixing this problem without paying to upgrade the database wasn't straightforward, as it's not possible to shrink the database. As I also wasn't able to delete my single Appharbor/RavenHQ database or create another one, that left me with the choice of creating an entirely new Appharbor application or registering directly with RavenHQ for a new account. I chose the latter. The RavenHQ-branded control panel is slightly different from the Appharbor one, in that it can create and delete databases.
So to summarize: there doesn't seem to be any benefit to using RavenHQ as an add-on to Appharbor - you might as well go and get a proper free RavenHQ account.
I have a lot of trouble with the combination of Symfony2 and Doctrine2. I have to deal with huge datasets (around 2-3 million writes and reads) and have to go to a lot of additional effort to avoid running out of memory.
I figured out two main points that "leak" memory (they are not actually leaking, but they allocate a lot):
The EntityManager's entity storage (the identity map; I don't know the real name of it) seems to keep every processed entity, and you have to clear this storage regularly with
$entityManager->clear()
The Doctrine query cache: it caches all used queries, and the only configuration I found lets you decide what kind of cache to use. I found neither a global disable nor a useful per-query flag to disable it.
So I usually disable it on every query object with:
$qb = $repository->createQueryBuilder('a');
$query = $qb->getQuery();
$query->useQueryCache(false);
$query->execute();
So... that's all I have figured out so far.
My questions are:
Is there an easy way to exclude some objects from the EntityManager's storage?
Is there a way to configure query-cache usage on the EntityManager?
Can I configure this caching behavior somewhere in the Symfony/Doctrine configuration?
It would be very cool if someone had some nice tips for me... otherwise this may help some rookie.
cya
As stated in the Doctrine Configuration Reference, logging of the SQL connection defaults to the value of kernel.debug, so if you have instantiated AppKernel with debug set to true, the SQL commands are kept in memory for each iteration.
You should either instantiate AppKernel with debug set to false, set logging to false in your config YML, or set the SQLLogger to null manually before using the EntityManager:
$em->getConnection()->getConfiguration()->setSQLLogger(null);
Try running your command with --no-debug. In debug mode the profiler retains information about every single query in memory.
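For example (my:command stands in for your own console command):

php app/console my:command --no-debug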
1. Turn off logging and profiling in app/config/config.yml:
doctrine:
    dbal:
        driver: ...
        ...
        logging: false
        profiling: false
Or in code:
$this->entityManager->getConnection()->getConfiguration()->setSQLLogger(null);
2. Force the garbage collector. If you are actively using the CPU, the garbage collector waits, and you can soon find yourself out of memory.
First enable manual garbage collection management by running gc_enable() anywhere in the code. Then run gc_collect_cycles() to force the garbage collector.
Example:
public function execute(InputInterface $input, OutputInterface $output)
{
    gc_enable();

    // I'm initializing $this->entityManager in __construct using DependencyInjection
    $customers = $this->entityManager->getRepository(Customer::class)->findAll();

    $counter = 0;
    foreach ($customers as $customer) {
        // process customer - some logic here, $this->entityManager->persist and so on
        if (++$counter % 100 == 0) {
            $this->entityManager->flush(); // save unsaved changes
            $this->entityManager->clear(); // clear doctrine managed entities
            gc_collect_cycles();           // PHP garbage collection

            // Note that $this->entityManager->clear() detaches all managed entities;
            // if you still need some of them, reinitialize them here.
        }
    }

    // don't forget to flush at the end
    $this->entityManager->flush();
    $this->entityManager->clear();
    gc_collect_cycles();
}
3. If your table is very large, don't use findAll(). Use an iterator instead (see the sketch after this list): http://doctrine-orm.readthedocs.org/projects/doctrine-orm/en/latest/reference/batch-processing.html#iterating-results
4. Set the SQL logger to null:
$em->getConnection()->getConfiguration()->setSQLLogger(null);
5. Manually call gc_collect_cycles() after $em->clear():
$em->clear();
gc_collect_cycles();
6. Don't forget to set zend.enable_gc to 1, or manually call gc_enable() before using gc_collect_cycles().
7. Add the --no-debug option if you run the command from the console.
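A sketch of the iterator approach from tip 3, using the Doctrine 2.x iterate() API; the Customer entity and its namespace are borrowed from the earlier example and are placeholders:

// Stream rows instead of hydrating the whole table with findAll().
$q = $this->entityManager->createQuery('SELECT c FROM AppBundle\Entity\Customer c');
$counter = 0;
foreach ($q->iterate() as $row) {
    $customer = $row[0]; // iterate() wraps each result row in an array
    // ... process $customer here ...
    if (++$counter % 100 === 0) {
        $this->entityManager->flush();
        $this->entityManager->clear(); // detach processed entities
        gc_collect_cycles();
    }
}
$this->entityManager->flush();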
Got some "funny" news from the Doctrine developers themselves at Symfony Live in Berlin: they say that for large batches you should not use an ORM, as it is simply not efficient to build that kind of thing in OOP.
...yeah, maybe they are right xD
As per the standard Doctrine2 documentation, you'll need to manually clear or detach entities.
In addition, when profiling is enabled (as in the default dev environment), the DoctrineBundle in Symfony2 configures several loggers that use quite a bit of memory. You can disable logging completely, but it is not required.
An interesting side effect is that the loggers affect both Doctrine ORM and DBAL. One of the loggers causes additional memory usage for any service that uses the default logger service. Disabling all of these would be ideal in commands, since the profiler isn't used there yet.
Here is what you can do to disable the memory-intensive loggers while keeping profiling enabled in other parts of Symfony2:
$c = $this->getContainer();

/*
 * The default dbalLogger is configured to keep "stopwatch" events for every query executed.
 * The only way to disable this, as of Symfony 2.3 / DoctrineBundle 1.2, is to reinstantiate the class.
 */
$dbalLoggerClass = $c->getParameter('doctrine.dbal.logger.class');
$dbalLogger = new $dbalLoggerClass($c->get('logger'));
$c->set('doctrine.dbal.logger', $dbalLogger);

// Sometimes you need to point Doctrine at the new logger manually, like this:
$doctrineConfiguration = $c->get('doctrine')->getManager()->getConnection()->getConfiguration();
$doctrineConfiguration->setSQLLogger($dbalLogger);

/*
 * If profiling is enabled, this service will store every query in an array.
 * Fortunately, this is configurable via its public "enabled" property.
 */
if ($c->has('doctrine.dbal.logger.profiling.default'))
{
    $c->get('doctrine.dbal.logger.profiling.default')->enabled = false;
}

/*
 * When profiling is enabled, the Monolog bundle configures a DebugHandler that
 * will store every log message in memory.
 *
 * As of Monolog 1.6, to remove/disable this handler we have to pop all the handlers
 * and then push them back on (in the correct order).
 */
$logger = $c->get('logger');
$handlers = array();
try
{
    while ($handler = $logger->popHandler())
    {
        if ($handler instanceof \Symfony\Bridge\Monolog\Handler\DebugHandler)
        {
            continue;
        }
        array_unshift($handlers, $handler);
    }
}
catch (\LogicException $e)
{
    /*
     * As of Monolog 1.6, there is no way to know whether there is a handler
     * left to pop except for the \LogicException that is thrown.
     */
    if ($e->getMessage() != 'You tried to pop from an empty handler stack.')
    {
        /*
         * This probably doesn't matter and will probably break in the future;
         * it is here so that an unknown exception is not silently discarded.
         */
        // remove at your own risk
        throw $e;
    }
}

// push the handlers back on
foreach ($handlers as $handler)
{
    $logger->pushHandler($handler);
}
Try disabling any Doctrine caches that exist. (If you're not using APC or another external backend as a cache, then memory is used.)
Remove the query cache:
$qb = $repository->createQueryBuilder('a');
$query = $qb->getQuery();
$query->useQueryCache(false);
$query->useResultCache(false);
$query->execute();
There's no way to disable it globally.
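You can, however, point the caches at a shared-memory backend such as APC through configuration, so cached entries stop accumulating inside the PHP process. A sketch using the Symfony2-era DoctrineBundle keys (verify them against your bundle version):

doctrine:
    orm:
        metadata_cache_driver: apc
        query_cache_driver: apc
        result_cache_driver: apc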
Also, here is an alternative to clear() that might help (from here); note that this snippet uses the older Doctrine 1 API:
$connection = $em->getCurrentConnection();
$tables = $connection->getTables();
foreach ($tables as $table) {
    $table->clear();
}
I just posted a bunch of tips for using Symfony console commands with Doctrine for batch processing here.