How can I manage changes in Odoo?

I am new to Odoo, and I would like to know how to manage changes to models and views.
This kind of material is saved in the database, so Git does not see the changes.
Is there any way to manage changes so that I can move them from development to production?

If you would like to apply the changes you make in the XML source files to your existing database, you must upgrade the corresponding modules.
For example, if you change a view in the module mail, you must upgrade the module mail.
You can do so by either:
Relaunching your server with the -u parameter (e.g. -u mail)
Going to Apps in the web interface, opening the module (e.g. Discuss), and hitting the Upgrade button.
The same logic applies if you make changes to stored model fields, e.g. if you add or alter some.
Notice that if the records you change in the XML files are surrounded by a <data noupdate="1"> element, these records are not updated on upgrades. If this is your case, you have to update the records manually. Email templates are a typical example: this prevents changes made by users in production databases from being overwritten by the default email template content.
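If you want to check which records of a module are flagged this way, one option (a sketch, assuming the standard ir_model_data table that Odoo uses to map XML IDs to records) is to query the database directly:

-- List records of the 'mail' module that will not be touched on upgrade
SELECT module, model, name
FROM ir_model_data
WHERE module = 'mail'
  AND noupdate = true;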

Related

Add data automatically to a table B when you add data to table A

Can I update a table in Keystone when I add data to another table?
For example: I have a table named Property where I add details of the property. As soon as I enter the data into this Property table, another table, named NewTable, should automatically get populated with the contents.
Is there a way to achieve this?
There are two ways I can see to approach this:
The afterOperation hook, which lets you configure an async function that runs after the main operation has finished
A database trigger that runs on UPDATE and INSERT
afterOperation Hook
See the docs here. There's also a hooks guide with some context on how the hooks system works.
In your case, you'll be adding a function to your Property list config.
The operation argument will tell you what type of operation just occurred ('create', 'update', or 'delete') which may be handy if you also want to reflect changes to Property items or clean up records in NewTable when a Property item is deleted.
Depending on the type of operation, the data you're interested in will be available in either the originalItem, item or resolvedData arguments:
For create operations, resolvedData will contain the values supplied, but you'll probably want to reference item, as it will also contain the generated and defaulted values that were applied, such as the new item's id. In this case originalItem will be null.
For update operations, resolvedData will be just the data that changed, which should have everything you need to keep the copy in sync. If you want a more complete picture, originalItem and item will be the entire item before and after the update is applied.
For delete operations, originalItem will be the last version of the item before it was removed from the DB. resolvedData and item will both be null.
The context argument is a reference to the Keystone context object which includes all the APIs you'll need to write to your NewTable list. You probably want the Query API, eg. context.query.NewTable.createOne(), context.query.NewTable.updateOne(), etc.
The benefits to using a Keystone hook are:
The logic is handled within the Keystone app code which may make it easier to maintain if your devs are mostly focused on JavaScript and TypeScript (and maybe not so comfortable with database functionality).
It's database-independent. That is, the code will be the same regardless of which database platform your project uses.
Database Triggers
Alternatively, I'm pretty sure it's possible to solve this problem at the database level using UPDATE and INSERT triggers.
This solution is, in a sense, "outside" of Keystone and is database specific. The exact syntax you'll need depends on the DB platform (and version) your project is built on:
PostgreSQL
MySQL
SQLite
You'll need to manually add a migration that creates the relevant database structure and add it to your Keystone migrations dir. Once created, Prisma (the DB tooling Keystone uses internally) will ignore the trigger when it's performing its schema comparisons, allowing you to continue using the automatic migrations functionality.
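For example, on PostgreSQL the hand-written migration could contain something along these lines (a sketch only: the Property and NewTable column names are placeholders for whatever Prisma generated in your schema):

-- Copy every newly inserted Property row into NewTable
CREATE OR REPLACE FUNCTION copy_property_to_newtable() RETURNS trigger AS $$
BEGIN
  INSERT INTO "NewTable" ("propertyId", "details")
  VALUES (NEW."id", NEW."details");
  RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER property_after_insert
AFTER INSERT ON "Property"
FOR EACH ROW EXECUTE FUNCTION copy_property_to_newtable();

A similar AFTER UPDATE trigger would keep subsequent edits in sync.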
Note that, due to how Prisma works, the table with the copy of the data (NewTable in your example) will need to either be:
Defined as another Keystone list so Prisma can create and maintain the table, or...
Manually created in a different database schema, so Prisma ignores it. (I believe this isn't possible on SQLite as it lacks the concept of multiple schemas within a single DB.)
If you try to manually create and manage a table within the default database schema, Prisma will get confused (producing a Drift detected: Your database schema is not in sync with your migration history error) and prompt you to reset your DB.

Write conflict: I want to always Drop Changes

I have a split database, and I have duplicated the front-end file to make multiple copies for different users. Every time a change is made on one front-end form, I want the other forms in the other front-ends to always drop their changes. How can I trap this write conflict so it always drops changes, maybe through VBA if possible?
Not quite sure what you mean by "drop changes" - the frontend should never be redesigned during normal use.
You must distribute a new copy of the frontend to the users.
A smooth and proven method using a shortcut and a script is described in my article:
Deploy and update a Microsoft Access application with one click
(If you don't have an account, browse for the link: Read the full article)
Edit:
If it is the data that is updated by several users, and you update via VBA, you may study another of my articles:
Handle concurrent update conflicts in Access silently
Though simple to use, the code is a bit too much to post here. It is also on GitHub:
VBA.ConcurrencyUpdates

DB Evolution in Play Framework 2.0

In Play 1.0, when we changed a variable type or, for example, changed from @OneToMany to @ManyToMany in a model, Play handled the change automatically, but in Play 2.0 the evolution script drops the database. Is there any way to make Play 2.0 apply the change without dropping the DB?
Yes, there is a way. You need to disable automatic re-creation of the 1.sql file and start writing your own evolutions containing ALTERs instead of CREATEs, numbering them 2.sql, 3.sql, etc.
In practice that means that if you're working with a single database, you can also just... manage the database's tables and columns with your favorite DB GUI. The evolutions are useful only when you can't use a GUI (the host doesn't allow external connections and has no GUI) or when you plan to run many instances of the app on separate databases. Otherwise, writing the statements manually will probably be more complicated than using a GUI.
Tip: sometimes, if I'm not sure whether I added all required relations and constraints to my manual evolutions, I delete them (in a Git-controlled folder!), run the app with the Ebean plugin enabled, and save the proposed 1.sql without applying the changes. Later I revert my evolutions with Git, compare them against the saved auto-generated file, and convert the CREATE statements to ALTERs. There's no better option for managing changes without using third-party software.
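For example, a hand-written conf/evolutions/default/2.sql could look roughly like this (the article/author tables and columns are placeholders, not taken from the question):

# --- !Ups
ALTER TABLE article ADD COLUMN author_id bigint;
ALTER TABLE article ADD CONSTRAINT fk_article_author
    FOREIGN KEY (author_id) REFERENCES author (id);

# --- !Downs
ALTER TABLE article DROP CONSTRAINT fk_article_author;
ALTER TABLE article DROP COLUMN author_id;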

Changing createDate on Liferay Journal Article (Web Content) via Liferay API

So here's the situation. I want to add 'old' news from our previous website into an asset publisher portlet on our new Liferay 6.1 site. The problem is that I want them to show up as if I had added them in the past.
So, I figure, how hard can it be to modify the createDate? I've since been able to directly access the MySQL database and perform updates on the article object's createDate field. However, it doesn't seem to propagate to my Liferay deployment, regardless of clearing caches, reindexing search indices, and restarting Liferay. The web content still maintains its 'original' createDate even though the database shows the value I changed it to.
Here's the query I used:
mysql> UPDATE JournalArticle SET createDate='2012-03-08 15:17:12' WHERE ArticleID = 16332;
I have since learned that it is a no-no to directly manipulate the database, as the dynamics between the database and Liferay aren't as straightforward as Liferay simply performing lookups. So it looks like I might need to use the Liferay API, namely setCreateDate, as seen here.
But I have absolutely no idea where and how to leverage the API. Do I need to create a dummy portlet with the sole purpose of using this API call? Or can I create a .java file somewhere on the server running my Liferay deployment and run it to leverage this method?
I only have like 15 articles I need to do this to. I can find them by referencing the ArticleID and GroupID.
Any help would be greatly appreciated. I've grepped the Liferay deployment and found setCreateDate being used heavily within .java files inside the knowledge-base-portlet, but I can't tell how else to directly use them without creating a portlet.
On the other hand, if anybody knows how to get my database to propagate its changes to the Liferay deployment, even though I know it's a dirty hack, that would probably be the easiest.
Thanks; I really appreciate it.
Using the Liferay API is of course the cleaner and better way, but for only 15 articles I would try to change it directly in the database.
I checked the database and it seems that Liferay stores the data in these tables: JOURNALARTICLE and ASSETENTRY.
Try to change the created date in both these tables.
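A hedged sketch of what those updates could look like (the link between the two tables through resourcePrimKey/classPK is my reading of the Liferay 6.1 schema; verify the column names and IDs against your own data and take a backup first):

UPDATE JournalArticle
SET createDate = '2012-03-08 15:17:12'
WHERE articleId = 16332;

UPDATE AssetEntry
SET createDate = '2012-03-08 15:17:12'
WHERE classPK IN (SELECT resourcePrimKey FROM JournalArticle WHERE articleId = 16332);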
Then reload the cache: Control Panel -> Server Administration --> Clear Database Cache.
You can write a hook for the application startup event. This way, whenever Liferay is first started, it will change the create date as you desire. Later, if you want to remove the hook, it can be done easily. See here how to create a hook and deploy it.
http://www.liferay.com/community/wiki/-/wiki/Main/Portal+Hook+Plugins
Also, changing the database directly is not recommended at all, even for one value/article. Always use the Liferay-provided service API to make modifications.

Do you put your database static data into source control? How?

I'm using SQL-Server 2008 with Visual Studio Database Edition.
With this setup, keeping your schema in sync is very easy. Basically, there's a 'compare schema' tool that allows me to sync the schema of two databases and/or a database schema with a source-controlled creation script folder.
However, the situation is less clear when it comes to data, which can be of three different kinds:
Static data referenced in the code. Typical example: my users can change their settings, and their configuration is stored on the server. However, there's a system-wide default value for each setting that is used in case the user didn't override it. The table containing those default settings grows as more options are added to the program. This means that when a new feature/option is checked in, the system-wide default setting is usually created in the database as well.
Static data, e.g. a product list populating a drop-down list. The program doesn't rely on the existence of a specific product in the list to work. This can be, for example, a list of Unicode-encoded products that should be deployed in production when the new "Unicode version" of the program is deployed.
Other data, i.e. everything else (logs, user accounts, user data, etc.).
It seems obvious to me that my third item shouldn't be source-controlled (of course, it should be backed up on a regular basis).
But regarding the static data, I'm wondering what to do.
Should I append the insert scripts to the creation scripts, or maybe use separate scripts?
How do I (as a developer) warn the people doing the deployment that they should execute an insert statement?
Should I differentiate my two kinds of data (the first one usually being created by a dev, while the second one is usually created by a non-dev)?
How do you manage your DB static data ?
I have explained the technique I used in my blog post Version Control and Your Database. I use database metadata (in this case SQL Server extended properties) to store the deployed application version. I only have scripts that upgrade from version to version. At startup the application reads the deployed version from the database metadata (lack of metadata is interpreted as version 0, i.e. nothing is deployed yet). For each version there is an application function that upgrades to the next version. Usually this function runs an internal resource T-SQL script that does the upgrade, but it can be something else, like deploying a CLR assembly in the database.
There is no script to deploy the 'current' database schema. New installations iterate through all intermediate versions, from version 1 to the current version.
There are several advantages I enjoy with this technique:
It is easy for me to test a new version. I have a backup of the previous version, I apply the upgrade script, then I can revert to the previous version, change the script, and try again until I'm happy with the result.
My application can be deployed on top of any previous version. Various clients have various deployed versions. When they upgrade, my application supports upgrading from any previous version.
There is no difference between a fresh install and an upgrade, it runs the same code, so I have fewer code paths to maintain and test.
There is no difference between DML and DDL changes (your original question). They are all treated the same way, as scripts run to change from one version to the next. When I need to make a change like you describe (changing a default), I actually increase the schema version even if no other DDL change occurs. So at version 5.1 the default was 'foo', in 5.2 the default is 'bar', and that is the only difference between the two versions; the 'upgrade' step is simply an UPDATE statement (followed of course by the version metadata change, i.e. sp_updateextendedproperty).
All changes are in source control, part of the application sources (T-SQL scripts mostly).
I can easily get to any previous schema version, eg. to repro a customer complaint, simply by running the upgrade sequence and stopping at the version I'm interested in.
This approach saved my skin a number of times and I'm a true believer now. There is only one disadvantage: there is no obvious place to look in source to find 'what is the current form of procedure foo?'. Because the latest version of foo might have been upgraded 2 or 3 versions ago and it wasn't changed since, I need to look at the upgrade script for that version. I usually resort to just looking into the database and see what's in there, rather than searching through the upgrade scripts.
One final note: this is actually not my invention. This is modeled exactly after how SQL Server itself upgrades the database metadata (mssqlsystemresource).
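As a rough sketch of the version stamp itself (the property name and the 5.1 -> 5.2 data change below are invented for illustration; the real upgrade scripts live as application resources):

-- Read the deployed version from a database-level extended property.
-- A missing property is interpreted as version 0, i.e. nothing is deployed yet.
SELECT value
FROM sys.extended_properties
WHERE class = 0 AND name = N'AppSchemaVersion';

-- Example 5.1 -> 5.2 step: a pure data change, followed by the version bump.
UPDATE dbo.Settings SET DefaultValue = 'bar' WHERE SettingName = 'SomeOption';

EXEC sys.sp_updateextendedproperty @name = N'AppSchemaVersion', @value = N'5.2';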
If you are changing the static data (adding a new item to the table that is used to generate a drop-down list) then the insert should be in source control and deployed with the rest of the code. This is especially true if the insert is needed for the rest of the code to work. Otherwise, this step may be forgotten when the code is deployed and not so nice things happen.
If static data comes from another source (such as an import of the current airport codes in the US), then you may simply need to run an already documented import process. The import process itself should be in source control (we do this with all our SSIS packages), but the data need not be.
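For example, a re-runnable insert script kept in source control and deployed with the code might look like this (the table and values are hypothetical):

-- Only insert the lookup row if it is missing, so the script is safe to run on every release
IF NOT EXISTS (SELECT 1 FROM dbo.OrderStatus WHERE StatusCode = 'SHIPPED')
    INSERT INTO dbo.OrderStatus (StatusCode, Description)
    VALUES ('SHIPPED', 'Order has been shipped');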
Here at Red Gate we recently added a feature to SQL Data Compare allowing static data to be stored as DML (one .sql file for each table) alongside the schema DDL that is currently supported by SQL Compare.
The idea is that when you want to push changes to your target server, you do a comparison using the scripts as the source data source, which generates the necessary DML synchronization script to update the target. This means you don't have to assume that the target is being recreated from scratch each time. In time we hope to support static data in our upcoming SQL Source Control tool.
David Atkinson, Product Manager, Red Gate Software
I have come across this when developing CMS systems.
I went with appending the static data (the stuff referenced in the code) to the database creation scripts, then a separate script to add in any 'initialisation data' (like countries, initial product population etc).
For the first two kinds, you could consider using an intermediate format (i.e. XML) for the data, then using a home-grown tool, or something like CodeSmith, to generate the SQL, and possibly source files as well if (for example) you have lookup tables which relate to enumerations used in the code - this helps enforce consistency.
This has another benefit that if the schema changes, in many cases you don't have to regenerate all your INSERT statements - you just change the tool.
I really like your distinction of the three types of data.
I agree for the third.
In our application, we try to avoid putting the first kind in the database, because it would be duplicated (as it has to be in the code, the database copy is a duplicate). A secondary benefit is that we need no join or query to get access to that value from the code, so this speeds things up.
If there is additional information that we would like to have in the database, for example if it can be changed per customer site, we separate the two. Other tables can still reference that data (either by index, e.g. 0, 1, 2, 3, or by code, e.g. EMPTY, SIMPLE, DOUBLE, ALL).
For the second, the scripts should be in source control. We separate them from the structure (I think they are typically replaced as time goes by, while the structure keeps accumulating deltas).
How do I (as a developer) warn the people doing the deployment that they should execute an insert statement?
We have a complete procedure for that, and a readme coming with each release, with scripts and so on...
First off, I have never used Visual Studio Database Edition. You are blessed (or cursed) with whatever tools this utility gives you. Hopefully that includes a lot of flexibility.
I don't know that I'd make that big a difference between your type 1 and type 2 static data. Both are sets of data that are defined once and then never updated, barring subsequent releases and updates, right? In which case the main difference is in how or why the data is as it is, and not so much in how it is stored or initialized. (Unless the data is environment-specific, as in "A" for development, "B" for Production. This would be "type 4" data, and I shall cheerfully ignore it in this post, because I've solved it using SQLCMD variables and they give me a headache.)
First, I would make a script to create all the tables in the database--preferably only one script, otherwise you can have a LOT of scripts lying about (and find-and-replace when renaming columns becomes very awkward). Then, I would make a script to populate the static data in these tables. This script could be appended to the end of the table script, or made its own script, or even made one script per table, which is a good idea if you have hundreds or thousands of rows to load. (Some folks make a CSV file and then issue a BULK INSERT on it, but I'd avoid that, as it just gives you two files and a complex process [configuring drive mappings on deployment] to manage.)
The key thing to remember is that data (as stored in databases) can and will change over time. Rarely (if ever!) will you have the luxury of deleting your Production database and replacing it with a fresh, shiny, new one devoid of all that crufty data from the past umpteen years. Databases are all about changes over time, and that's where scripts come into their own. You start with the scripts to create the database, and then over time you add scripts that modify the database as changes come along -- and this applies to your static data (of any type) as well.
(Ultimately, my methodology is analogous to accounting: you have accounts, and as changes come in you adjust the accounts with journal entries. If you find you made a mistake, you never go back and modify your entries; you just make subsequent entries to reverse and fix them. It's only an analogy, but the logic is sound.)
The solution I use is to have create and change scripts in source control, coupled with version information stored in the database.
Then, I have an install wizard that can detect whether it needs to create or update the db - the update process is managed by picking appropriate scripts based on the stored version information in the database.
See this thread's answer. Static data from your first two points should be in source control, IMHO.
Edit:
All-in-one or separate scripts? It does not really matter, as long as you (the dev team) agree with your deployment team. I prefer separate files, but I can still always create an all-in-one.sql from those in the proper order [Logins, Roles, Users; Tables; Views; Stored Procedures; UDFs; Static Data; (Audit Tables, Audit Triggers)].
How do you make sure they execute it: well, make it another step in your application/database deployment documentation. If you roll out an application which really needs specific (new) static data in the database, then you might want to perform a DB version check in your application: you update DB_VERSION to your new release number as part of that script, and then your application, on start-up, checks it and reports an error if the newer DB version is required.
Dev and non-dev static data: I have actually never seen this case. More often there is real static data, which you might call "dev": major configuration, ISO static data, etc. The other type is default lookup data, which is there for users to start with, but they might add more. The mechanism to INSERT this data might be different, because you need to ensure you do not destroy (power-)user-created data.
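For example, a seed script for that default lookup data could use a MERGE that only adds missing rows, so anything users created or edited is left alone (the table and values are made up):

-- Insert missing default rows only; never update or delete user-created data
MERGE dbo.Country AS target
USING (VALUES ('US', 'United States'), ('DE', 'Germany')) AS source (Code, Name)
ON target.Code = source.Code
WHEN NOT MATCHED BY TARGET THEN
    INSERT (Code, Name) VALUES (source.Code, source.Name);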