I am currently writing a script that lets me import multiple products into Magento.
$product = Mage::getModel('catalog/product');
$product->setSku($data['sku']);
//etc etc
$product->save();
The product gets created perfectly, but it won't show up in my frontend until I either save it in the backend (without changing anything!) or rebuild the indexes in the backend.
I did a diff on the relevant database tables to see what's changing when I save the product and added those fields to my import script, but it did not have any effect. The imported product has to be OK since it shows up when I rebuild the indexes via the backend manually.
Caching is completely disabled.
Now my question is: How can I rebuild the indexes after importing my products?
You can use the indexer model from the Index module for this:
$processes = Mage::getSingleton('index/indexer')->getProcessesCollection();
$processes->walk('reindexAll');
Since you need to rebuild all the indexes, no filters are applied to the collection. But you can filter the list of index processes by a set of parameters (code, last time re-indexed, etc.) via the addFieldToFilter($field, $condition) method.
Small Suggestion
It would be a good idea to set the indexes to manual mode while you are importing the products; this will speed up the import, because some of the indexers observe the product save event, which takes some time. You can do it in the following way:
$processes = Mage::getSingleton('index/indexer')->getProcessesCollection();
$processes->walk('setMode', array(Mage_Index_Model_Process::MODE_MANUAL));
$processes->walk('save');
// Here goes your
// Importing process
// ................
$processes->walk('reindexAll');
$processes->walk('setMode', array(Mage_Index_Model_Process::MODE_REAL_TIME));
$processes->walk('save');
There are at least two circumstances that prevent the indexer from reindexing a product on save.
One: the "Manual Update" setting in the index properties you find under System > Index Management. You should set it to "Update on Save" if you want a product to be indexed upon a save.
Two: the setIsMassupdate product flag that is used, for example, in DataFlow batch import procedures to prevent the indexer from being triggered on each product save() call.
Hope this helps.
Regards, Alessandro
Related
In my Sitecore instance, I have content for 2 templates, Product and Product Category. The Products have a multilist that links to Product Categories as lookups. The Products also have an indexing computed field set up that precomputes some data based on the selected Product Categories. So when a user changes a Product, Sitecore's indexing strategy indexes the Product with the computed field.
My issue is, when a user changes the data in a Product Category, I want Sitecore to reindex all of the related Products. I'm not sure how to do this. I do not see any hook where I could detect that a Product Category is being indexed, so that I could programmatically trigger an index update for the Products.
You could achieve this using the indexing.getDependencies pipeline. Add a processor to it: your custom class should derive from Sitecore.ContentSearch.Pipelines.GetDependencies.BaseProcessor and override Process(GetDependenciesArgs context).
In this function you can get your IndexedItem and, based on this information, add other items to the Dependencies. These dependencies will be indexed as well. The benefit of this approach is that the dependent items are indexed in the same job, instead of new jobs being spawned to update them.
Just be aware of the performance hit this could cause if it is badly written, as this code is called for all indexes. Get out of the function as soon as you can if it is not applicable.
Some known issues with this pipeline can be found in the Sitecore knowledge base.
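For illustration, here is a rough C# sketch of such a processor. The class name is made up, and using Globals.LinkDatabase to find the Products that refer to the category is my assumption, not something stated above:

using System.Linq;
using Sitecore;
using Sitecore.ContentSearch;
using Sitecore.ContentSearch.Pipelines.GetDependencies;
using Sitecore.Data.Items;

public class AddRelatedProductsDependencies : BaseProcessor
{
    public override void Process(GetDependenciesArgs context)
    {
        if (context.IndexedItem == null || context.Dependencies == null)
            return;

        // Get out quickly if the indexed item is not a Product Category.
        var indexable = context.IndexedItem as SitecoreIndexableItem;
        Item item = indexable != null ? indexable.Item : null;
        if (item == null || item.TemplateName != "Product Category")
            return;

        // Register every referring Product as a dependency so that it is
        // re-indexed in the same job as the category itself.
        var products = Globals.LinkDatabase.GetReferrers(item)
            .Select(link => link.GetSourceItem())
            .Where(source => source != null && source.TemplateName == "Product");

        foreach (Item product in products)
            context.Dependencies.Add((SitecoreItemUniqueId)product.Uri);
    }
}

The processor would then be registered under the indexing.getDependencies pipeline in a configuration include file.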
One way would be to add a custom OnItemSave handler which checks whether the changed item is based on the Product Category template, and programmatically triggers the index update.
To reindex only the changed items, you can pick up the related Products and register them in the HistoryEngine by using the HistoryEngine.RegisterItemSaved method:
Sitecore.Context.Database.Engines.HistoryEngine.RegisterItemSaved(myItem, new ItemChanges(myItem));
Some useful instructions on how to create an OnItemSave handler can be found here: https://naveedahmad.co.uk/2011/11/02/sitecore-extending-onitemsave-handler/
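As a rough, untested sketch of that approach (the handler class name is made up, and using the links database to find the referring Products is my assumption):

using System;
using Sitecore;
using Sitecore.Data.Items;
using Sitecore.Events;

public class ProductCategorySavedHandler
{
    // Wired up to the item:saved event in configuration, as described in the linked post.
    public void OnItemSaved(object sender, EventArgs args)
    {
        Item item = Event.ExtractParameter(args, 0) as Item;
        if (item == null || item.TemplateName != "Product Category")
            return;

        // Pick up every Product referring to this category and register it as
        // saved, so the incremental index update strategy re-indexes it.
        foreach (var link in Globals.LinkDatabase.GetReferrers(item))
        {
            Item product = link.GetSourceItem();
            if (product == null || product.TemplateName != "Product")
                continue;

            product.Database.Engines.HistoryEngine.RegisterItemSaved(
                product, new ItemChanges(product));
        }
    }
}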
You could add a new index update strategy or change one of the existing ones (see the configuration\sitecore\contentSearch\indexConfigurations\indexUpdateStrategies configuration node).
As an example you could take
Sitecore.ContentSearch.Maintenance.Strategies.SynchronousStrategy
The one thing you need to change is the
public void Run(EventArgs args, bool rebuildDescendants)
method: args contains a reference to the changed item, and all you need to do is trigger an index update for the related items.
Once you have the custom update strategy, add it to your index under the strategies configuration node.
I am working with Endeca 6.4.1 and have many auto-generated dimensions in my pipeline (mapped using Dev Studio); the application's indexing is CAS-less, so only FCM creates dimensions and assigns dVal IDs. I am using Endeca SEO, so the dVal ID is reflected directly in my URL, and if an auto-generated dimension value's ID changes, the link to that navigation state is lost.
I have a flat file as the dimension's source, for example
product.feature|neon finish
What I want is that, if the value someday changes to "Neon-finish" or "Neon color", the dVal ID that was assigned to "neon finish" is transferred to the new value. I can keep a custom mapping of the change to track that "neon finish" has been changed to a new value.
Is there any way to achieve this, maybe by using some manipulators?
Please share your thoughts.
There are two basic ways to do this:
1) Update the state files when you change a dimension value (APPDIR/data/state/autogen_dimensions.xml). This would most likely be a manual process.
2) A more robust but complex solution is to change the dimension values to be some ID number and use a synonym for the display name. Then the display name can change without a change to the id number. This may require some serious changes to your pipeline.
Good luck
I want to delete all Quotations, Sales Orders and Invoices from my database. To do this I have to delete all stock moves which are validated and in the 'done' state. How can I delete stock moves from the command line? Or how can I change their status from 'Done' to 'Draft'?
Odoo was designed so that all non-draft documents are kept in the database (as I understand it, in some countries an ERP is required to do this by law).
Nevertheless, in some cases you need to completely remove existing documents: for example, I created a test invoice and confirmed it, but I don't want to keep it in my actual system.
All of the following steps are taken ONLY at your own risk! You can corrupt or even break your database if you're not completely sure of what you're doing! I STRONGLY recommend making a backup of your database before doing anything like this!
Generally the Odoo database structure is very clean and simple. You can connect to your database (I prefer a GUI tool like pgAdmin3) and manually adjust the necessary tables.
Here's a short description of the related tables (just remove the unnecessary rows):
sale_order: all your quotations;
sale_order_line: items in quotations;
account_invoice: all your invoices;
account_invoice_line: items in all your invoices;
account_move: journal entries;
account_move_line: items in journal entries.
I don't recommend using this method if there is an alternative solution available from the Odoo interface!
Note: I wouldn't recommend this method unless you're absolutely sure of what you are doing.
You can do this using SQL statements directly.
If you want to delete all stock moves in the 'done' state:
DELETE FROM stock_move WHERE state='done';
If you want to change stock moves from 'done' (or any other state) to the 'draft' state:
UPDATE stock_move SET state='draft' WHERE state='done';
There are two ways:
1) A GUI editor (pgAdmin3 is widely used).
Help: https://doc.odoo.com/install/linux/postgres/
2) Via the terminal, run these commands one by one:
sudo su postgres
psql database_name
and then run your query.
Note: I agree with César; go ahead with this only if you are sure.
Thanks
Tidyway Team
I have a SQL Server database, and I want to import the data into SharePoint. I found a way to do it, but I think it's not efficient, because I have to create a list manually for each table in the database. If the database has 100 tables, should I keep creating 100 lists for those tables? I need a way to import them automatically, I mean without creating the lists by hand each time. Is there a way to do that?
NOTE: this link shows how to create a list each time: http://www.dotnetcurry.com/showarticle.aspx?ID=794
The external content type route seems like the recommended one. If you have already created these, there is a possibility you will be able to use some sort of PowerShell script to create the lists for you. I can't say I've ever used this, but if it does what it says, it should help you:
Scripting External content type lists
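If you would rather do it from code than PowerShell, the SharePoint server object model can also create an external list per external content type. This is only a rough sketch, assuming the external content types already exist in the BDC model; the LOB system instance, namespace and entity names below are placeholders:

using Microsoft.SharePoint;

public static class ExternalListCreator
{
    public static void CreateExternalLists(SPWeb web, string[] entityNames)
    {
        foreach (string entity in entityNames)
        {
            // Point the list at the external content type (placeholder names).
            var dataSource = new SPListDataSource();
            dataSource.SetProperty(SPListDataSource.BDCProperties.LobSystemInstance, "MyLobSystemInstance");
            dataSource.SetProperty(SPListDataSource.BDCProperties.EntityNamespace, "http://contoso/model");
            dataSource.SetProperty(SPListDataSource.BDCProperties.Entity, entity);
            dataSource.SetProperty(SPListDataSource.BDCProperties.SpecificFinder, "ReadItem");

            // Creates one external list per external content type / table.
            web.Lists.Add(entity, "External list for " + entity, "Lists/" + entity, dataSource);
        }
    }
}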
Cheers
Truez
I need some guidance on adding/updating SQL records using EF. Let's say I am writing an application that stores info about files on a hard disk in an EF4 database. When you press a button, it will scan all the files in a specified path (maybe the whole drive) and store information in the database like the file size, change date, etc. Sometimes the file will already be recorded from a previous run, so its properties should be updated; sometimes a batch of files will be detected for the first time and will need to be added.
I am using EF4, and I am seeking the most efficient way of adding new file information and updating existing records. As I understand it, when I press the search button and files are detected, I will have to check for the presence of a file entity, retrieve its ID field, and use that to add or update related information; but if it does not already exist, I will need to create a tree that represents it and its related objects (e.g. its folder path), and add that. I will also have to handle the merging of the folder path object.
It occurs to me that if there are many millions of files, as there might be on a server, loading the whole database into the context is not ideal or practical. So for every file, I might conceivably have to make a round trip to the database on disk to detect whether the entry already exists and retrieve its ID if it does, then another trip to update it. Is there a more efficient way to insert/update multiple file object trees in one trip to the DB? If there were an entity context method like 'insert if it doesn't exist and update if it does', for example, then I could wrap up multiple operations in a transaction?
I imagine this is a fairly common requirement; how is it best done in EF? Any thoughts would be appreciated. (Oh, my DB is SQLite, if that makes a difference.)
You can check whether the record already exists in the DB; if not, create and add it. You can then set the fields that are common to both the insert and the update, as in the sample code below.
// Look for an existing record with the same name.
var strategy_property_in_db = _dbContext.ParameterValues().Where(r => r.Name == strategy_property.Name).FirstOrDefault();
if (strategy_property_in_db == null)
{
    // Not found: create the entity and add it to the context.
    strategy_property_in_db = new ParameterValue() { Name = strategy_property.Name };
    _dbContext.AddObject("ParameterValues", strategy_property_in_db);
}
// Common to both insert and update: set the current value.
strategy_property_in_db.Value = strategy_property.Value;
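If you are worried about one round trip per file, one option is to process files in batches: load all the existing records for a batch with a single query, decide in memory whether each file is an insert or an update, and call SaveChanges once. A rough sketch only; FileRecord, ScannedFile and FilesContext (an EF4 ObjectContext with a FileRecords entity set) are hypothetical names, not from the question:

using System.Collections.Generic;
using System.Linq;

public static class FileImporter
{
    public static void UpsertBatch(FilesContext db, List<ScannedFile> scannedFiles)
    {
        var paths = scannedFiles.Select(f => f.Path).ToList();

        // One query: fetch every record in this batch that already exists.
        // Contains translates to a SQL IN clause, so keep batch sizes reasonable.
        var existing = db.FileRecords
            .Where(r => paths.Contains(r.Path))
            .ToDictionary(r => r.Path);

        foreach (var file in scannedFiles)
        {
            FileRecord record;
            if (existing.TryGetValue(file.Path, out record))
            {
                // Already recorded: just update the changed properties.
                record.Size = file.Size;
                record.ChangeDate = file.ChangeDate;
            }
            else
            {
                // First time we see this file: add a new record to the context.
                record = new FileRecord { Path = file.Path, Size = file.Size, ChangeDate = file.ChangeDate };
                db.FileRecords.AddObject(record);
            }
        }

        // A single SaveChanges persists all inserts and updates for the batch.
        db.SaveChanges();
    }
}

As far as I know, EF4 has no built-in 'insert or update' call, so some variation of this check-then-add pattern, per record or per batch, is generally what you end up with.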