What are the advantages of creating an Odoo module as opposed to forking it?

We are interested in using Odoo, but we would need to modify it slightly for our use case, for instance extending the partner model with additional fields and integrating with an external system.
Is it best to fork it or to put the changes in a module? The changes would be quite specific to our use case and existing system, so it's unlikely they would be useful to anyone else as a module/app.
My thinking is that forking would make it easier to stay up to date with Odoo: we would just have to pull in changes from upstream occasionally. It seems like with a module you would end up with lots of stale code that's difficult to update, because you've moved it outside the source tree.
It also seems like it would be easier to deploy, because all the code is in one place rather than two.

From my point of view, and based on many years of ERP experience, the best advice is to always implement these kinds of changes in a module of your own (inheriting all the required standard components) and leave the standard untouched. This applies to very specific customizations as well as to general improvements.
This approach gives you the most flexibility in installing, updating, distributing and maintaining your code, and your code remains unaffected when the standard modules on the target systems are updated.
You will also be able to share and move your code between dev/test/prod systems and keep it under version control.
Please always make sure you comply with the license obligations that apply to your code (especially when inheriting standard modules).
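To illustrate, here is a minimal sketch of such an inheriting module's model extension; the field name and purpose are made up for this example, not anything from the standard:

# models/res_partner.py in a hypothetical custom module: extends the
# standard partner model without touching the standard code.
from odoo import fields, models

class ResPartner(models.Model):
    _inherit = "res.partner"

    # Example field for integrating with an external system (illustrative).
    x_external_system_ref = fields.Char(
        string="External System Reference",
        help="Identifier of this partner in the external system.",
    )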
Hope this helps ;-)


Can you Auto update branches from the Main trunk?

Here is the scenario.
We are developing a product where we have a base product and regional variations of it. We have all the common code checked into the main trunk, and we have created two branches (branch_us, branch_uk) for the variations off of the main trunk. Common code is constantly being checked into the main trunk, and the code being checked into branch_uk and branch_us depends on the code in the main trunk. We set it up this way because we expect more regions to be added in future releases, and as a result we want maximum reuse as well as a thin regional-variations layer.
Under the current strategy, a developer has to develop locally and then manually check the common files into main_trunk and the regional variations into branch_uk and branch_us. Then, every time code is checked into main_trunk, we have to perform a merge from main_trunk->branch_uk and main_trunk->branch_us before we can build branch_uk and branch_us (two separate deployments), because the new code in branch_uk/us depends on the new common code in main_trunk. This model seems extremely painful and unproductive.
I'm by no means an expert on TFS. Here is what I am seeking opinion on:
Is there a way TFS can dynamically pull changes into branch_uk/branch_us from the main_trunk without doing a manual merge after every check-in (in the main_trunk)?
Do you guys have any other recommendations on the code management process that might be more effective/productive than the current one?
Any thoughts and feedback will be much appreciated!
This seems like a weird architecture to me, but of course I'm coming at it from a position of almost total ignorance, so there might be a compelling reason to approach it that way.
That being said: It sounds to me like you don't have a single application with two regional variations, you have two separate applications that share a common ancestor. The short answer to your question is "No". A slightly longer answer is "No, but you could write code to automate it."
A more thoughtful question-answer is "Are you sure centralized version control is the right tool for the job?" It might be more intuitive to use Git for this. What you have are, in effect, a base repository and two forks of that repository. Developers can work against whatever fork makes sense, and if something represents a change that should apply to all localizations, open a pull request to have the change merged into the base repository. This would require more discipline on the part of the developers, since they would have to ensure that their commits are isolated such that they can open a pull request that contains just commits that apply to the core platform. Git has powerful but difficult history-rewriting tools that can assist. Or, of course, they could just switch back and forth between working on the core platform, then pulling changes from the core platform back up to the separate repositories. This puts you back to where you started, but Git merges are very fast and shouldn't be a big issue.
Either way, thinking of the localizations as a single application is your mistake.
A non-source control answer might involve changing the application's architecture so that all localizations run off of the same codebase, but with locale-specific functionality expressed in a combination of configuration flags and runtime-discoverable MEF plugins, or making a "core" application platform that runs as an isolated service, and separately developed locale-specific services that express only deviations from the core application platform.
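MEF itself is .NET-specific, but the shape of that last idea can be sketched in a few lines of Python; the tax calculator and locale registry below are made-up illustrations of a core platform with locale-specific deviations:

# Core behavior lives in one class; locales override only what differs.
class CoreTaxCalculator:
    def calculate(self, amount):
        return amount * 0.10  # core default rate (illustrative)

class UkTaxCalculator(CoreTaxCalculator):
    def calculate(self, amount):
        return amount * 0.20  # UK-specific deviation (illustrative)

# Runtime-discoverable registry keyed by a configuration flag.
LOCALE_CALCULATORS = {"us": CoreTaxCalculator, "uk": UkTaxCalculator}

def make_calculator(locale):
    # Fall back to core behavior when a locale registers no deviation.
    return LOCALE_CALCULATORS.get(locale, CoreTaxCalculator)()

print(make_calculator("uk").calculate(100))  # 20.0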

Should I default the environment for someone using my library?

I have been having this debate with a friend about a library I have (it's Python, but I didn't include that as a tag since the question applies to any language) that has a few dependencies. The debate is whether to provide a default environment in the initialization or to force the user of the code to explicitly set one.
My opinion is to force the user, as it's explicit, avoids confusion, and makes it clear what they are pointing to.
My friend thinks it is safer and more convenient to default to an environment and let the user override it if they want to.
Thoughts? Are there any good references, examples, or patterns in popular libraries that support either of our arguments? Also, are there any popular blogs or articles that discuss this API design point?
I don't have any references, but here are my thoughts as a potential user of said library.
I think it's good to have a default configuration available to allow developers to quickly evaluate the library. I don't want to have to go through a bunch of configuration just to see if the library will do what I need. Once I'm happy that the library does what I need it to do, I'm happy to configure it the way I want.
A good example is Microsoft's ASP.NET MVC framework. When you create a new MVC project, it hooks in a default authentication and membership provider, which allows the developer to get a functioning application up and running very quickly. It is also easy to configure different providers if the default ones don't meet the requirements of the application in question.
As a slightly different example, Atlassian Confluence is wiki software that supports many different back-end databases. Atlassian could have chosen to ship with no default DB configuration, but instead Confluence ships with a simple, file-based default database that lets users evaluate the software. For production installations you can then hook up Oracle, SQL Server, MySQL or whatever else you like.
There may be instances where a default configuration for a library doesn't really make sense, but I think that would be a special case rather than the general rule.
It depends. If you can provide sensible defaults, you might want to do that: it will make life easier for the occasional user of the library, who can then set only the relevant settings rather than the whole environment (which may include settings whose implications they don't fully understand yet). You are correct that in some situations this leads to frustration and confusion, because the defaulted settings may cause behavior the (inexperienced) user doesn't expect. You have to weigh the convenience of defaults against the price of not-understood defaults for each setting you might default, and the choice for one setting may affect the choice for other, related settings as well.
On the other hand, if there is no sensible default (e.g. DB credentials, a remote address), you should require the user to provide those settings.
The key in both cases is to provide enough information in the documentation of the library and in the error messages (for missing or conflicting settings) that the user can figure out what those settings actually mean and control without having to read through the source code of the library. This part is hard, because 1) it is usually tedious from the point of view of the library developer (so it is often skimped on), and 2) the documentation has to be written from the mindset of a newcomer to the library, which is often different from the library developer's mindset: the latter knows the implicit connections and implications, while the former has to be told about them in an understandable way.
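A minimal Python sketch of that split (all names here are made up): require what has no sensible default, default the rest, and make the error message explain the setting:

class Client:
    """Hypothetical library entry point mixing required and defaulted settings."""

    def __init__(self, db_host, db_user, db_password,
                 timeout=30, retries=3, verify_ssl=True):
        # No sensible default exists for connection details: require them,
        # with an error message that explains what the setting controls.
        if not db_host:
            raise ValueError(
                "db_host is required: the hostname of the database the "
                "client connects to (no sensible default exists)."
            )
        self.db_host = db_host
        self.db_user = db_user
        self.db_password = db_password
        # Sensible defaults the occasional user never has to think about.
        self.timeout = timeout
        self.retries = retries
        self.verify_ssl = verify_ssl

client = Client("db.example.com", "reader", "s3cret")  # defaults kick in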
Although not exactly identical in terms of problem domain, this strikes me as the Convention over Configuration argument.
There has been quite a lot of momentum behind CoC in recent years, and to my mind it makes a whole lot of sense. As long as flexibility is not lost, you have everything to gain. Lower-friction development is what we are all after, and if I have to configure every aspect of your API in order to get it working, I'm less inclined to use it over another API of equal functionality.
I happen to like Hanselman's podcasts, so if you want a little light listening, check out this podcast.
I think your question needs some clarification. For starters, I don't think a library should have any runtime configuration. In terms of dependencies, library dependencies should be handled in a manner appropriate to the environment they are being written for. In Python, those dependencies should be declared in the setup.py file (under install_requires), and ultimately that file should meet the requirements of whatever service you plan to make the library available on (e.g. PyPI for Python).
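For example, a minimal setup.py sketch (the package and dependency names are illustrative):

from setuptools import setup

setup(
    name="mylibrary",            # hypothetical package name
    version="0.1.0",
    packages=["mylibrary"],
    # Runtime dependencies are declared here, not configured at runtime.
    install_requires=[
        "requests>=2.0",
    ],
)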
For applications, it is completely okay to require runtime configuration, but you should try to have sensible defaults. If your application depends on libraries, that dependency should be handled the same way a library dependency would be, even though that information may be redundant in the context of an installer (if one is needed). For the most part, first-run scripts and their ilk should be a part of the installer/RPM.
For web frameworks, it is typical that your app carries its configuration with it, and it will likely need to be installed in a different way than traditional applications. Here, about the only thing you can do is follow the conventions of whatever framework you are writing in.

What is the best way of changing the ABAP standard code?

I have spent almost four months learning and working with SAP. I've done several reports and enhancements in this time, but recently I began working on a requirement related to Mobile Data Entry (RF), which basically consists of adding the EAN and some other data to dynpro 2502.
I made a copy of dynpro 2502 from program SAPLLMOB into SAPLXLRF as 9502, hooked up the user exit MWMRF502 and programmed its basic functionality, but it is not working as I expected: this exit is very limited, it only lets me import and export a small set of data, and it is difficult to make it behave exactly like the standard.
I've been searching all over the internet; a lot of people build their own implementations, while others simply change the standard. I don't know how to build my own implementation because I don't understand the whole process involved. The alternative of changing the standard code would be better in terms of performance and development time, but as I said, I would have to change the standard code, and that's something I would like to do only if there is no other option.
So the question is: is it OK to change the standard? How often is the standard code changed in SAP implementations? What would be the better alternative?
Thanks in advance.
You are asking the right sort of questions and it is good that you are not just plowing ahead without thinking about the consequences of what you are doing. Keep researching!
As far as changing the SAP standard goes, you generally do not want to copy an object in order to change it. For screens, SAP quite often provides a user exit with a sub-screen that can be modified by the customer. For Web Dynpro you can use enhancement points and/or BAdIs to extend the functionality.
Try to look for one of the following:
an SAP BAdI in the area that you want to change (transaction SE18),
a user exit allowing you to change the necessary screen(s) (transaction SMOD),
an explicit enhancement point within the functionality, or
one of the implicit enhancement points in the functionality.
There is a lot of documentation on sdn.sap.com, as well as in the SAP help, regarding the topics above.
If none of these are available, you may have no other choice but to modify (repair) the SAP standard objects. In order to change the SAP standard you need to register the object(s) you have to change on SAP OSS and get a repair key that the system requires before it allows you to make changes. Always ensure that the SAP Modification Assistant is switched on when making changes; this will make your life a lot easier when you patch or upgrade your system.
If at all possible try to find an experienced ABAP programmer to help you with this.
Also see this question regarding changing SAP standard code:
Edit: Thomas Weiss on SDN has a helpful blog series on the enhancement and switch framework.
Always make sure that there's absolutely no other way to implement the functionality you need. If you're sure about that, then either write your own implementation from scratch or simply change SAP's code. Just don't copy SAP's programs into the customer namespace, because I can guarantee you that it'll turn into a maintenance nightmare. You'll have to decide for yourself whether the size of the change is worth the time to build your own implementation rather than change SAP's.
If you decide to change SAP's code, keep in mind that all changes will pop up for review when the system is upgraded, which will take time to evaluate and adjust to the new SAP code.
Your options are, from most to least desirable:
Check the documentation of the application on help.sap.com for possible extensibility scenarios. There are many ways in which SAP intends for you to customize their applications through various kinds of event architectures. Unfortunately, all attempts by the various departments at SAP to agree on one event architecture and then stick to it have failed, so you have user exits, BTEs, FQEVENTS, BAdIs, explicit enhancement spots and many more. If you want to know what's used by the application you need to change, RTM.
Use an implicit enhancement spot. Enhancements are a great way to modify standard software in ways SAP did not anticipate, because they are easy to disable and usually pretty stable during upgrades (use the transaction SPAU_ENH after an upgrade to confirm that your enhancements still make sense in the new version of the program). You will find implicit enhancement spots at the beginning and end of every include and every kind of subroutine, which allows you to inject arbitrary ABAP code in these locations.
But sometimes there just is no implicit enhancement spot where you need it to be. In that case you can copy the whole program into the customer-namespace and modify it. This gives you the freedom to do whatever you want with the program while still retaining the original program as a possible fall-back. It is usually a good idea to use as many components from the original program as possible, by including its includes or calling FORMs from the original program via PERFORM formname IN PROGRAM originalprogram. The main problem with this method is that after a new release, your program might no longer behave as expected. You will have to look at the new version of the program and see if there are any changes you need to port to your version. And there is nothing in the SAP standard that assists you with this maintenance task. So you are responsible to keep a list of all your copies of standard programs.
Just modify the program directly. But this is really a last-resort option for programs that are too complex to copy into the customer-namespace. The problem with this is that it means SAP will no longer offer you support for that program. If you post a ticket about that program on launchpad.support.sap.com, and they find out you modified the program, they will assume it's your own fault and close the ticket. But fortunately, when you upgrade your system, you have the transaction SPAU that will help you to merge your changes with the new versions of the modified SAP programs.

Agile practices to avoid deprecated code? [closed]

I am converting an open source Java library to C#, which has a number of methods and classes tagged as deprecated. This project is an opportunity to start with a clean slate, so I plan to remove them entirely. However, being new to working on larger projects, I am nervous that the situation will arise again. Since much of agile development revolves around making something work now and refactoring later if needed, it seems like deprecation of APIs must be a common problem. Are there preventative measures I can take to avoid/minimize API deprecation, even if I am not entirely sure of the future direction of a project?
I'm not sure there is much you can do. Requirements change, and if you absolutely have to make sure that clients of the API are not broken by a newer API version, you'll have to rely on simply deprecating code until you think no one is using the deprecated parts any more.
Placing [Obsolete] attributes on code causes the compiler to create warnings if there are any references to the obsolete methods. This way clients of the API, if they are diligent about fixing their compiler warnings, can gradually move to the new methods without having everything break with the new version.
It's useful to use the overload of ObsoleteAttribute that takes a string:
[Obsolete("Foo is deprecated. Use Bar instead for munging widgets.")]
<frivolous>
Perhaps you could create a TimeBombAttribute:
[TimeBomb(new DateTime(2010,1,1), "Foo will blow up! Better use Bar, or else.")]
In your code, reflect for methods with the TimeBomb attribute and throw a KaboomException if they are called after the specified date. That'll make sure that after 1st January 2010 no one is using the obsolete methods, and you can clean up your API nicely. :)
</frivolous>
As Matt says, the Obsolete attribute is your friend... but whenever you apply it, provide details of how to change calling code. That way you've got a lot better chance of people actually changing. You might also want to consider specifying which version you anticipate removing the method in (probably the next major release).
Of course, you should be diligent in making sure you don't call the obsolete code - particularly in sample code.
Since much of agile development revolves around making something work now and refactoring later if needed
That's not agile. It's cowboy coding disguised under the label of agile.
The ideal is that whatever you complete is complete, according to whatever Definition of Done you have. Usually the DoD states something along the lines of "feature implemented, tested, and related code refactored". Of course, if you are working on a throwaway prototype, you can have a more relaxed DoD.
API modifications are a difficult beast. If they are only project-internal APIs you are modifying, the best way to go is to refactor early. If you need to change the internal API, just go ahead and change all API clients at the same time. This way the refactoring debt does not grow very large and you don't have to use deprecation.
For published APIs you probably have some source and binary compatibility guarantees you have to maintain, at least until the next major release or so. Marking the old APIs deprecated works while maintaining compatibility. As with internal APIs, you should fix your internal code as soon as possible to not use the deprecated APIs.
Matt's answer is solid advice. I just wanted to mention that initially you probably want to use something along the lines of:
[Obsolete("Please use ... instead ", false)]
Once you have the code ported, change the false to true and the compiler will then treat all the calls to the method as an error.
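The same two-stage idea can be sketched in Python with the standard warnings module (the decorator and function names here are illustrative, not part of any library under discussion):

import functools
import warnings

def deprecated(message):
    """Mark a function as deprecated; callers get a DeprecationWarning."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            warnings.warn(message, DeprecationWarning, stacklevel=2)
            return func(*args, **kwargs)
        return wrapper
    return decorator

@deprecated("foo() is deprecated; use bar() instead.")
def foo():
    pass

foo()  # emits: DeprecationWarning: foo() is deprecated; use bar() instead.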
Watch Josh Bloch's "How to Design a Good API and Why It Matters"
Most important w/r/t deprecation is knowing that "when in doubt, leave it out." Watch the video for clarification, but it has to do with having to support what you provide forever. If you are realistically expecting that API to be reused, you're effectively setting your decisions in stone.
I think API design is a much trickier thing to do in an Agile fashion because you're expecting it to be reused probably in many different ways. You have to worry about breaking others that are dependent on you, and so while it can be done, it's tough to have the right design emerge without getting a quick turnaround from other teams. Of course deprecation is going to help here, but I think YAGNI is a lot better design heuristic when it comes to APIs.
I think deprecation of code is an inevitable byproduct of Agile processes like continuous refactoring and incremental development. So if you end up with deprecated code as you work on your project, that's not necessarily a bad thing--just a fact of life. Of course, you will probably find that, rather than deprecating code, you end up keeping a lot of code but refactoring it into different methods, classes, and so on.
So, bottom line: I wouldn't worry about deprecating code during Agile development. If it served its purpose for a while, you're doing the right thing.
The rule of thumb for API design is to focus on what it does, rather than how it does it. Once you know the end goal, figure out the absolute minimum input you need and use that. Avoid passing your own objects as parameters, pass only data.
Separate configuration from execution. For example, maybe you have an image encoder/decoder.
Instead of making a call like:
Encoder.Encode( bytes, width, height, compression_type, compression_ratio, palette, etc etc);
Make it
Encoder.setCompressionType(compression_type);
Encoder.setCompressionRatio(compression_ratio);
// etc., etc.
Encoder.Encode(bytes, width, height);
That way adding or removing settings is much less likely to break existing implementations.
For deprecation, there's basically 3 types of APIs: internal, external, and public.
Internal is when it's only your team working on the code. Deprecating these APIs isn't a big deal. Your team is the only one using them, so deprecated methods aren't around long, there's pressure to change them, people aren't afraid to change them, and people know how to change them.
External is when it's the same code base but different teams are using it. This might be some common libraries in a large company, or a popular open-source library. The point is, people can choose the version of the code they compile with. The ease of deprecating an API depends on the size of the organization and how well it communicates. In my opinion, it's the deprecator's job to update the old code, rather than mark it deprecated and let warnings fly throughout the code base. Why the deprecator instead of the deprecatee? Because the deprecator is in the know; they know what changed and why.
Those two cases are pretty easy. So long as there is backwards compatibility, you can generally do whatever you'd like, update the clients yourself, or convince the maintainers to do it.
Then there are public APIs. These are basically external APIs that the clients don't have much control over, such as a web API. These are incredibly hard to update or deprecate. Most clients won't notice it's broken, won't have someone to fix it, won't get notifications that it's changing, and will only fix it once it's broken (after they've yelled at you for breaking it, of course).
I've had to do the above a few times, and it is such a chore. I think the best you can do is purposefully break the API early, wait a bit, and then restore it. You send out the usual warnings and deprecation notices first, of course, but (trust me) nothing will happen until something breaks.
An idea I've yet to try is to let people register simple apps that run small tests. When you want to do an API update, you run the external tests and contact the affected people.
Another approach that has become popular is to have clients depend on (web) services. There are constructs out there that allow you to version your services and allow clients to perform lookups. This adds a lot more moving parts and complexity to the equation, but it can be helpful if you are turning over a lot of versions and have to support multiple versions in production.
This article does a good job of explaining the problem and an approach.

To monkey-patch or not to?

This is a more general question than a language-specific one, although I bumped into the problem while playing with the Python curses module. I needed to display locale characters and have them recognized as characters, so I quickly monkey-patched a few functions/methods from the curses module.
This was what I call a fast and ugly solution, even if it works. And the changes were relatively small, so I can hope I haven't messed anything up. My plan was to find another solution, but seeing that it works, and works well (you know how it is), I moved on to the other problems I had to deal with; and I'm sure that if no bug ever shows up in this, I'll never make it better.
The more general question occurred to me, though: obviously some languages allow us to monkey-patch large chunks of code inside classes. If it's code I only use myself, or the change is small, that's OK. But what if some other developer takes my code? He sees that I use some well-known module, so he can assume it works the way it usually does. Then this method suddenly behaves differently than it should.
So, very subjectively: should we use monkey patching, and if yes, when and how? How should we document it?
edit: for #guerda:
Monkey-patching is the ability to dynamically change the behavior of some piece of code at execution time, without altering the code itself.
A small example in Python:
import os

def ld(name):
    print("The directory won't be listed here, it's a feature!")

# Replace the real os.listdir with our stub at run time.
os.listdir = ld

# Now what happens if we call os.listdir("/home/")?
os.listdir("/home/")  # prints the message instead of listing the directory
Don't!
Especially with free software, you have every opportunity to get your changes into the main distribution. But if you have a weakly documented hack in your local copy, you'll never be able to ship the product, and upgrading to the next version of curses (security updates, anyone?) will come at a very high cost.
See this answer for a glimpse into what is possible on foreign code bases. The linked screencast is really worth a watch. Suddenly a dirty hack turns into a valuable contribution.
If you really cannot get the patch upstream for whatever reason, at least create a local (git) repo to track upstream and have your changes in a separate branch.
Recently I've come across a case where I had to accept monkey-patching as a last resort: Puppet is a "run-everywhere" piece of Ruby code. Since the agent has to run on (potentially certified) systems, it cannot require a specific Ruby version. Some of those versions have bugs that can be worked around by monkey-patching select methods in the runtime. These patches are version-specific, contained, and the target is frozen. I see no alternative there.
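The pattern looks roughly like this in Python (the bug being worked around is hypothetical; only the guard-and-wrap structure is the point):

import sys
import shutil

# Suppose shutil.which crashed on empty input on old interpreters
# (a hypothetical bug, purely for illustration).
if sys.version_info < (3, 8):
    _original_which = shutil.which

    def _patched_which(cmd, *args, **kwargs):
        if not cmd:
            return None  # work around the (hypothetical) crash
        return _original_which(cmd, *args, **kwargs)

    # Contained, version-specific, and easy to delete once the old
    # interpreter is no longer supported.
    shutil.which = _patched_which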
I would say don't.
Each monkey patch should be an exception, and should be marked as such (for example with a //HACK comment) so it is easy to track down.
As we all know, it is all too easy to leave the ugly code in place because it works, so why spend any more time on it? As a result, the ugly code will be there for a long time.
I agree with David in that monkey patching production code is usually not a good idea.
However, I believe that for languages that support it, monkey patching is a very valuable tool for unit testing. It allows you to isolate the piece of code you need to test even when it has complex dependencies, for instance system calls that cannot be dependency-injected.
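A minimal sketch of that testing use, with Python's standard unittest.mock (the function under test is made up):

import shutil
import unittest
from unittest import mock

def free_disk_message():
    # Imagine this is hard to test because it touches the real file system.
    total, used, free = shutil.disk_usage("/")
    return f"{free} bytes free"

class FreeDiskMessageTest(unittest.TestCase):
    def test_formats_free_space(self):
        # Patch the system call for the duration of this test only.
        with mock.patch("shutil.disk_usage", return_value=(100, 60, 40)):
            self.assertEqual(free_disk_message(), "40 bytes free")

if __name__ == "__main__":
    unittest.main()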
I think the question can't be addressed with a single definitive yes-no/good-bad answer - the differences between languages and their implementations have to be considered.
In Python, one needs to consider whether a class can be monkey-patched at all (see this SO question for discussion), which relates to Python's slightly less-OO implementation. So I'd be cautious and inclined to expend some effort looking for alternatives before monkey-patching.
In Ruby, OTOH, which was built to be OO down into the interpreter, classes can be modified irrespective of whether they're implemented in C or Ruby. Even Object (pretty much the base class of everything) is open to modification. So monkey-patching is rather more enthusiastically adopted as a technique in that community.