Should a groupId be unique in a multi-module project? - maven-2

I have a Maven multi-module project. What are the best practices for choosing a groupId? Can I use one common groupId for all modules, or should each module have a unique one?

There is no general rule. Use the groupId to separate (or not) things that have different concerns, but keep it coarse-grained. XWiki is a good illustration of this approach; Hibernate is another example: it uses the same groupId for all its modules.
Nothing forces you to use a unique groupId per module, though, and that seems clearly too fine-grained to me (it sounds like creating a package for each class).
In a corporate environment, a common pattern is to use something like a.b.appname, and then a.b.appname.moduleN if the application is big and has many modules.
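For illustration, a minimal sketch of that pattern with hypothetical coordinates (a.b.appname, appname-core, appname-web): the parent POM declares the common groupId once, and each module inherits it.
<!-- parent pom.xml: one common groupId for the whole application -->
<project>
    <modelVersion>4.0.0</modelVersion>
    <groupId>a.b.appname</groupId>
    <artifactId>appname-parent</artifactId>
    <version>1.0.0</version>
    <packaging>pom</packaging>
    <modules>
        <module>appname-core</module>
        <module>appname-web</module>
    </modules>
</project>

<!-- a module's pom.xml: the groupId is inherited from the parent -->
<project>
    <modelVersion>4.0.0</modelVersion>
    <parent>
        <groupId>a.b.appname</groupId>
        <artifactId>appname-parent</artifactId>
        <version>1.0.0</version>
    </parent>
    <artifactId>appname-core</artifactId>
</project>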

Related

FlatBuffer schema design for frameworks

I'm looking for advice on structuring FlatBuffer schemas for a framework which allows users to extend the data types defined by the framework, but also allows the framework developers to add new fields when new versions of the framework are published.
My original thinking was that when you create a project using this framework, it would generate several FlatBuffer schema files which you could then edit for your specific project. You could then compile the schemas and start developing code using the framework APIs.
However, this becomes a problem when the framework developers decide to add fields to the base types. As you probably know, FlatBuffers requires that any additional fields be appended to the end (or at least have a higher ID than other fields). So there is a conflict between the additions made by the framework developer and the framework user.
One possible solution would be to have a set of 'non-user-extensible' types that are owned by the framework creator, and which should not be modified by users of the framework; and these types would then be embedded within the data types defined by the framework user. However, given the restrictions on fields changing size, I am not sure if this would even work.
I'm also willing to hear alternatives to using flatbuffers if it turns out that there is no good solution otherwise.
To have open-ended extension like that, you should really have the framework authors and users work in two separate tables, where one can own the other. There is no good way to extend a single table if all contributors aren't sharing the schema in source control.
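For example, a minimal sketch of that split, with hypothetical table and file names; the framework authors own one schema file, and the user's root table embeds it as a sub-table:
// framework.fbs -- owned by the framework authors, never edited by users
table FrameworkData {
    version:int;
    name:string;
    // framework authors append new fields here
}

// app.fbs -- owned by the framework user
include "framework.fbs";
table AppData {
    framework:FrameworkData; // framework-owned data, embedded as a sub-table
    score:float;
    // users append their own fields here
}
root_type AppData;
Each side can now append fields to the table it owns without colliding with the other.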
If these extensions must be in a single object for whatever reason, then Protocol Buffers is more flexible than FlatBuffers, since it doesn't require adjacent field ids. You can simply say that all field ids >=1000 are for framework users, for example.
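One way to make that convention explicit is proto2's extension ranges; a sketch with hypothetical message and field names:
// framework.proto -- proto2, whose extensions make the split explicit
syntax = "proto2";
message BaseEntity {
    optional string id = 1;
    optional int64 created_at = 2;
    // framework authors keep adding fields with ids below 1000
    extensions 1000 to max;
}

// user side: add fields without touching the framework's message
extend BaseEntity {
    optional float score = 1000;
}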
In retrospect (answering my own question two years later), it seems that FlatBuffers was not the right choice for my use case. These days I'm using a combination of msgpack (in cases where I care about byte-size) and JSON (in cases where I don't) and I'm pretty happy with each.

SQL foreign key between projects in different VS solutions

I want to split the development of our data warehouse solution into manageable Visual Studio projects that can be edited independently and grow organically. I have developed a project that contains all the conformed dimensions, like Date, in one solution. Can I reference my Date dimension from a different solution where I need to include a foreign key? I have tried to reference the dacpac containing the conformed dimension, but this does not work as expected.
In the past I've managed these situations using separate projects in a single solution. Clearly the main goal is to avoid merge conflicts between the developers; with this arrangement, the only place conflicts should happen is in the solution file, which should be pretty rare.
There are a couple of things to be careful of here:
The reference must be of type "Same database"
The reference needs to be in the direction Fact -> Dimension
When you deploy the "Fact" project, you need to be sure to specify the option to "Include Composite Objects". This is the default in Visual Studio.
Circular references are not allowed. If you have these the workaround (very briefly, google for more) is to create a "parent" project with references to both "sides" of the "circle".
That said, it is also possible to create a foreign key to a referenced dacpac, rather than a database project. Again, the reference needs to be in the direction Fact -> Dimension. You will also need to give some thought to your build process, as in effect you are taking a binary dependency on the "Dimensions" dacpac. You can also add the dacpac to the referencing project (I tend to create a folder for these) so it ends up in the same place in source control.
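As a sketch, with hypothetical table names: once the "Same database" reference to the Dimensions project (or dacpac) is in place, the foreign key in the Fact project is written as if the dimension table were local.
-- in the Fact project; dbo.DimDate is defined in the referenced Dimensions project/dacpac
CREATE TABLE dbo.FactSales
(
    SaleId  INT IDENTITY NOT NULL PRIMARY KEY,
    DateKey INT NOT NULL,
    CONSTRAINT FK_FactSales_DimDate
        FOREIGN KEY (DateKey) REFERENCES dbo.DimDate (DateKey)
);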
In case it helps, I have created a solution that demonstrates both techniques and shared it here: https://github.com/gavincampbell-dev-example-repos/FKsToReferencedDacpacs
In short, you cannot create an FK referencing a table in another database; both tables must be located in the same database. An FK is a database object, not a server object.
You can, however, use triggers to emulate the FK, which is of course not the best solution.
FOREIGN KEY constraints can reference only tables within the same database on the same server. Cross-database referential integrity must be implemented through triggers. For more information, see CREATE TRIGGER (Transact-SQL).
https://msdn.microsoft.com/en-us/library/ms189049.aspx
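A sketch of such a trigger, with hypothetical database and table names, rejecting rows whose key has no match in the other database:
-- emulates an FK from dbo.FactSales to DimensionsDb.dbo.DimDate
CREATE TRIGGER dbo.trg_FactSales_CheckDimDate
ON dbo.FactSales
AFTER INSERT, UPDATE
AS
BEGIN
    IF EXISTS (SELECT 1
               FROM inserted i
               WHERE NOT EXISTS (SELECT 1
                                 FROM DimensionsDb.dbo.DimDate d
                                 WHERE d.DateKey = i.DateKey))
    BEGIN
        RAISERROR ('DateKey not found in DimensionsDb.dbo.DimDate', 16, 1);
        ROLLBACK TRANSACTION;
    END
END;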
Based on your comment (see below), here are some thoughts:
You have several good arguments. Though I was looking for a way where we could model several business processes in parallel, this might not be an option. It seems that the only option might be to have one big project and then handle how to manage more than one developer working in the solution at the same time - thanks
One database = one project
Enforce communication between team members
Update your VCS policies to minimize problems
Educate the team members about possible problems and their workarounds
Including deployment options
Provide sufficient resources (including dev workstations) and use local instances for development
Build an SIT (integration) environment where you can test changes and how they behave during deployment.
Build a shared DEV environment which is used to merge the work of your team members
Commit/push files into VCS when they are considered done. Commit/push regularly
Route tasks to minimize conflicts (if a team member has something to do with object A, try to route all tasks related to object A to that developer or postpone the second change until the first one is done)
If you use dacpac for deployment, include the configuration file (tested in SIT env) next to the dacpac file. (SIT should reflect the production environment when you are testing the deployment)
And again: Enforce communication inside and outside the team. This is the key.

Are there any specific scenarios in which to use the Liferay Search Container over the Dandelion DataTables framework?

Are there any specific scenarios in which to use the Liferay Search Container over the Dandelion DataTables framework, when DataTables provides a far better collection of features (such as multi-column sorting, filtering, searching, i18n, etc.) and is easy to integrate too? To rephrase my question: should DataTables be preferred over Search Container in all scenarios?
It's 100% your choice. Search Container is styled like every built-in list of entities within Liferay (because Liferay itself uses Search Container). Whether you use it or choose any other method/framework/technology is strictly your choice.
Make your choice based on
appearance and level of visual integration you'd like to have
familiarity with the framework
suitability for the job
maintainability of the solution for whoever is going to maintain your code
assumed stability (or level of maintenance) for your solution of choice
If you end up using either one of the proposed solutions or yet another one: so be it. For your future maintainers' sake, just make sure to choose one and standardize on it.
If you're customizing Liferay's UI, you might still need to understand Search Container, but that's a different story.

Symfony 2: Location of Entities

I'm pretty new to Symfony 2 and I was wondering something:
Let's assume I have 2 bundles in my project. I want to use entities generated from my database in both bundles.
Where am I supposed to generate the entities? (To me the best way would be outside the bundles, but I can't find out how to do that.)
Thanks for your help.
I think there are two solutions; you have to think about the design of your application.
Are you sure you need two bundles? If the link between the two is so strong, why not make only one bundle? In that case, you'll just have to generate the entities in this bundle.
The other case: you really do need two bundles, but in this specific application you need a link between the two. In this case, I think you should generate each entity in the bundle where it belongs, and if you need to, you can use it in the other bundle (thanks to use MyApp\MyBundle\Entities\...;). You have to think in terms of generic code when using Symfony, in order to be able to reuse your bundles in other projects. ;)

Practical usage of the Unit Of Work & Repository patterns

I'm building an ORM and am trying to find out the exact responsibilities of each pattern. Let's say I want to transfer money between two accounts, using the Unit Of Work to manage the updates in a single database transaction.
Is the following approach correct?
Get them from the Repository
Attach them to my Unit Of Work
Do the business transaction & commit?
Example:
from = accountRepository.find(fromAccountId);
to = accountRepository.find(toAccountId);
unitOfWork.attach(from);
unitOfWork.attach(to);
unitOfWork.begin();
from.withdraw(amount);
to.deposit(amount);
unitOfWork.commit();
Should, as in this example, the Unit Of Work and the Repository be used independently, or:
Should the Unit Of Work use internally a Repository and have the ability to load objects?
... or should the Repository use internally a Unit Of Work and automatically attach any loaded entity?
All comments are welcome!
The short answer would be that the Repository would be using the UoW in some way, but I think the relationship between these patterns is less concrete than it would initially seem. The goal of the Unit Of Work is to create a way to essentially lump a group of database-related operations together so they can be executed as an atomic unit. There is often a relationship between the boundaries created when using UoW and the boundaries created by transactions, but this relationship is more coincidental.
The Repository pattern, on the other hand, is a way to create an abstraction resembling a collection over an Aggregate Root. More often than not, the sorts of things you see in a repository are related to querying or finding instances of the Aggregate Root. A more interesting question (and one which doesn't have a single answer) is whether it makes sense to add methods that deal with something other than querying for Aggregates. On the one hand, there could be some valid cases where you have operations that would apply to multiple Aggregates. On the other hand, it could be argued that if you're performing operations on more than one Aggregate, you are actually performing a single action on another Aggregate. If you are only querying data, I don't know if you really need to create the boundaries implied by the UoW. It all comes down to the domain and how it is modeled.
The two patterns operate at very different levels of abstraction, and the involvement of the Unit Of Work is going to depend on how the Aggregates are modeled as well. The Aggregates may want to delegate work related to persistence to the Entities they manage, or there could be another layer of abstraction between the Aggregates and the actual ORM. If your Aggregates/Entities are dealing with persistence themselves, then it may be appropriate for the Repositories to also manage that persistence. If not, then it doesn't make sense to include UoW in your Repository.
If you're wanting to create something for general public consumption outside of your organization, then I would suggest creating your Repository interfaces/base implementations in a way that would allow them to interact directly with your ORM or not, depending on the needs of the user of your ORM. If this is internal, and you are doing the persistence work in your Aggregates/Entities, then it makes sense for your Repository to make use of your UoW. For a generic Repository, it would make sense to provide access to the UoW object from within Repository implementations that can make sure it is initialized and disposed of appropriately. On that note, there will also be times when you would likely want to utilize multiple Repositories within what would be a single UoW boundary, so you would want to be able to pass in an already primed UoW to the Repository in that case.
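To make that last point concrete, here is a minimal sketch with hypothetical types: the repository receives an already primed UoW and attaches everything it loads.
interface UnitOfWork {
    void attach(Object entity); // register the entity for change tracking
    void begin();
    void commit();
}

class Account {
    final long id;
    long balance;
    Account(long id) { this.id = id; }
    void withdraw(long amount) { balance -= amount; }
    void deposit(long amount)  { balance += amount; }
}

class AccountRepository {
    private final UnitOfWork uow;

    // the same UoW instance can be shared by several repositories
    AccountRepository(UnitOfWork uow) { this.uow = uow; }

    Account find(long id) {
        Account account = new Account(id); // stand-in for the real database load
        uow.attach(account);               // every loaded entity is tracked automatically
        return account;
    }
}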
I recommend the approach where the repository uses the UoW internally. This approach has some advantages, especially for web applications.
In web applications, the recommended pattern is one Unit of Work (session) per HTTP request. So if your repositories share a UoW, you will be able to use a first-level cache (an identity map) for objects that were already requested by other repositories (like data dictionaries that are referenced by multiple aggregates). You will also have to commit only one transaction instead of multiple, which works much better in terms of performance.
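A sketch of that first-level cache idea, with hypothetical types: the shared session hands back an already-loaded instance instead of hitting the database a second time.
import java.util.HashMap;
import java.util.Map;
import java.util.function.Supplier;

class Session {
    private final Map<String, Object> identityMap = new HashMap<>();

    // the key could be "EntityName:id"; the loader only runs on a cache miss
    Object load(String key, Supplier<Object> loader) {
        return identityMap.computeIfAbsent(key, k -> loader.get());
    }
}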
You could take a look at the Hibernate/NHibernate source code; they are mature ORMs in the Java/.NET world.
Good Question!
It depends on what your unit-of-work boundaries are going to be. If they are going to span multiple repositories, then you might have to create another abstraction to ensure that multiple repositories are covered. It would be like the small "service" layer that is defined in Domain-Driven Design; see the sketch below.
If your unit of work is going to be pretty much per Repository, then I would go with the second option.
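As a sketch of that service-layer option, reusing the hypothetical UnitOfWork, Account, and AccountRepository types from the earlier sketch: a small domain service, rather than any single repository, owns the UoW boundary that spans them.
class TransferService {
    private final UnitOfWork uow;
    private final AccountRepository accounts;

    TransferService(UnitOfWork uow, AccountRepository accounts) {
        this.uow = uow;
        this.accounts = accounts;
    }

    // the service, not the repository, controls the unit-of-work boundary
    void transfer(long fromId, long toId, long amount) {
        uow.begin();
        accounts.find(fromId).withdraw(amount);
        accounts.find(toId).deposit(amount);
        uow.commit();
    }
}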
My question to you, however, would be: why worry about the Repository when writing an ORM? Repositories are going to be defined and used by the consumers of your Unit of Work, right? If so, you have no option but to just provide a Unit of Work; your consumers will have to enlist their repositories with your Unit of Work and will also be responsible for controlling its boundaries. Isn't that so?