I'm in the middle of building an application but found myself too easily creating new packages without keeping the project's structure in mind.
Now I'm trying to redo the whole project structure on paper first. I am using a Settings class with public properties, which several other classes around the project read their settings from.
Since this Settings class applies to the whole project, I am unsure whether it should be packaged at all, and if so, in what kind of package it should live. Or should it sit in the root (the default package) together with the main application class?
I've been thinking about putting it in my utils package, but then again I don't think it really is a utility. Any strategies on how to decide on a package structure, for example for a Settings class?
Use of the default package is discouraged anyway (in Java, as far as I know, the tooling actually flags it with a warning), even for the class containing the main method.
Other than that, I prefer having a config package, even if it's the only class in there. I don't think it would fit in the utils package.
IMHO you should put it into a separate, low-level package, since many other classes depend on it but it (presumably) doesn't depend on anything. So it should definitely not be in the same package as the main application class. It could go in the utils package, though, or in a separate package at the same level (e.g. config).
By "low level" I simply mean "low on the package dependency hierarchy", where a package A which depends on package B is higher than B. So it does not directly relate to the actual package hierarchy. The point is to avoid dependency cycles between your packages, so that you can have such an ordering between your packages.
Btw you should not use the root package in a real application.
I'm writing a Kotlin program which according to convention lives in src/main/kotlin/mypackage/*.kt with each source file containing package mypackage.
I have used the IntelliJ IDEA option to create a test class, FooBarTest, which lives in src/test/kotlin/mypackage/FooBarTest.kt. So far, so good.
However, to my surprise, FooBarTest.kt does not contain package mypackage. This means the things it tests would need to be imported explicitly with separate import statements.
Is IntelliJ IDEA telling me a surprising truth, that unlike main source files, test source files should not specify a package?
Or is it making a mistake, omitting a package statement that should be there, and I should go ahead and put in the package mypackage statement by hand?
I think IDEA's making a mistake -- or at least, being less helpful than it might.
Of course, there's no real necessity for test classes to be in the same package as the tested classes. But in my experience, it makes good sense: they're easier to find, and as you say, it avoids lots of import statements.
It also makes the file hierarchy align with the package hierarchy. Again, while in Kotlin there's no absolute necessity for that, it does make files easier to find and avoids unexpected clashes, and I've not yet found a reason to diverge from it.
I'm trying to understand what is safe vs. not safe with respect to the Eclipse plugin lifecycle.
Background
Something in the Eclipse/RCP/OSGI framework allows for circular dependencies between bundles by allowing bundles to provide extension points. If bundle X provides an extension point, Bundle Y may both depend on bundle X, and provide an extension that implements an interface or extends a class known to X, and make that extension available to bundle X.
Then there's the promise of activators: as far as I understand, it is promised that your activator's start(BundleContext) method will be called before any class in your bundle is made available to any other bundle, and that your dependencies' start(...) methods will have been called before yours.
Limitations/Possible Contradictions
Now, I'm ready to describe my conundrum: I would like to retrieve all the providers of a specific extension point as soon as possible; the easy way to do this would appear to be in the activator of my bundle.
However, if what I've described about the promises that the Eclipse/RCP/OSGI framework makes is true, then I'm pretty sure it shouldn't be possible for me to do that during activation:
Either:
(1) I'll have a reference to classes provided by one of my dependencies before their start(...) method has been called, or
(2) my dependency's start(...) method will have to be called before mine, or
(3) no violations will occur, but I'll retrieve zero extensions, because the plugins that depend on me couldn't be started before me, so their implementations of my extension point are not yet available.
Why I Need Extensions at Startup
My challenge is that I need to load some data ASAP after the startup of my plugin, but I need to ensure that my extensions are loaded first, because the extensions in question are extensions to the data format of the data that I need to load; if I load the data first, it fails or becomes corrupted.
I'm also wondering whether my picture of the Eclipse plugin lifecycle is correct, because despite searching for discussions of the plugin lifecycle I haven't come across any warnings about its limitations. I'm fairly certain it must be possible to do things wrong and create serious problems, and I'd like to understand under what circumstances things would go wrong so I can avoid them.
The extension point registry accessed by the IExtensionRegistry interface will tell you about extension points without starting any of the plugins involved.
IExtensionRegistry extReg = Platform.getExtensionRegistry();
In the registry for an extension point you will have a number of IConfigurationElement entries describing the individual extensions declared by plugins. It is only when you call the createExecutableExtension method of this interface that the contributing plugin is started.
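For example, a minimal sketch of reading the extensions lazily (the extension point ID "com.example.myplugin.dataFormat" and the IDataFormat interface are made-up names for illustration):

IExtensionRegistry extReg = Platform.getExtensionRegistry();
// Reads plugin.xml metadata only; no contributing plugin is activated here.
IConfigurationElement[] elements = extReg.getConfigurationElementsFor("com.example.myplugin.dataFormat");
for (IConfigurationElement element : elements) {
    String formatName = element.getAttribute("name"); // still only metadata
    try {
        // Only this call starts the contributing plugin, because it has to load the class.
        Object extension = element.createExecutableExtension("class");
        if (extension instanceof IDataFormat) {
            // register the contributed data format before loading any data
        }
    } catch (CoreException e) {
        // the contribution could not be instantiated; log it and skip
    }
}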
Note: A plugin's activator start method is not normally run until Eclipse needs to run some other code in the plugin - it does not run at Eclipse startup unless you force it to.
Moving to IntelliJ from Eclipse, I'm trying to properly understand the logic behind its project structure. After reading for a while I understood the relation between workspace and project, and then between project and modules. However, something that is puzzling me is the logic of the default project configuration in IntelliJ. Indeed, when you create a project there is an initial module which, to a certain extent, is equivalent to the project itself. To be more precise, the initial module folder is the project folder. This is kind of confusing to me. Then, when you add more modules, they are sub-modules of that initial module.
My first question is: what is the rationale for making this first module equivalent to the project folder?
Following this, I would further ask what the point is of having modules as sub-modules of other modules.
In Eclipse I used to simply have different projects (i.e. modules), independent of each other, adding dependencies as necessary. So how does the IDEA approach make this better, and if it doesn't, what is the rationale here?
I saw that one can start with an empty project and then add modules to it. However, in that case the modules are added as subfolders of the project, and therefore there is no initial module equivalent to the project folder. So why this difference, and what is the rationale behind it?
Which would be the better approach, the first or the second?
Would it be OK to have this first initial module with no src or test folder, but just with the proper facets, so that they are propagated to the sub-modules?
I would appreciate it if someone could explain the rationale behind all of this.
I will move to SBT soon (i.e. the Maven structure, which I suppose inspired all modern IDE project structures); if someone wants to explain within that context, that's fine, but nevertheless I want to understand the rationale in IntelliJ first.
Many thanks,
-M-
PS: What I'm looking for is some advice on multi-module project structure in IntelliJ, as I'm moving my Eclipse workspaces over to it.
I think that it's not uncommon for projects to be relatively small, so they don't need fancy modules with dependency management etc. In that case, I find the default project created by IntelliJ to fit perfectly my needs: no need to add submodules, everything is directly in the parent project, it reduces the structure to its bare minimum.
On the other hand, big projects with submodules will likely resemble the structure of a Maven multimodule project (perhaps SBT too, but I don't know this tool at all). You have a parent root which acts as a container for submodules. The parent project may also store configuration (a default SDK, a language level etc. that will be inherited by the submodules). The actual code will be contained in the submodules.
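For instance, a multimodule layout (module names made up) might look like this:

myapp/                   (root project: no source code, only shared settings such as the SDK and language level)
    myapp-core/          (submodule containing the domain code)
    myapp-persistence/   (submodule containing the data access code)
    myapp-web/           (submodule containing the web front end; depends on the other two)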
Regarding your questions, it all depends on the kind of project you are developing. For a small codebase, you could keep a simple project with no submodule. For bigger codebases, you can either create modules manually, or import an existing Maven/SBT/whatever project, which will automatically create modules reflecting the imported structure.
Not having come from a C/compiled languages background, I'm finding it hard to get to grips with using Go's packages mechanism to create modular code.
In Python, to import a module and get access to its functions and whatnot, it's a simple case of
import foo
where foo.py is the name of the module you want to import, located in the same directory. Otherwise you can add an empty __init__.py to a subfolder and access the modules in it via
from subfolder import foo
You can then access functions by simply referencing them through the module name, e.g. y = foo.bar(y). This makes it easy to separate logical pieces of code from one another.
In Go however, you specify the package name in the source file itself, e.g.
package foo
at the top of the 'foo' module, which you can then supposedly import through
import (
"foo"
)
and then refer to it through that, i.e. y := foo.Bar(x). But what I can't wrap my head around is how this works in practice. The relevant docs on golang.org seem terse, and directed at people with more (any) experience using makefiles and compilers.
Can someone please clearly explain how you are meant to modularise your code in Go, the right project structure to do so, and how the compilation process works?
Wiki answer, please feel free to add/edit.
Modularization
Multiple files in the same package
This is just what it sounds like: a bunch of files in the same directory that all start with the same package <name> directive are treated as one big set of code by Go. You can transparently call functions in a.go from b.go. This is mostly for the benefit of code organization.
A fictional example would be a "blog" package might be laid out with blog.go (the main file), entry.go, and server.go. It's up to you. While you could write a blog package in one big file, that tends to affect readability.
Multiple packages
The standard library is done this way. Basically you create modules and optionally install them into $GOROOT. Any program you write can import "<name>" and then call <name>.someFunction()
In practice any standalone or shared components should be compiled into packages. Back to the blog package above, if you wanted to add a news feed, you could refactor server.go into a package. Then blog.go and news.go would both import "server".
Compilation
I currently use gomake with Makefiles. The Go installation comes with some great include files for make that simplify the creation of a package or a command. It's not hard and the best way to get up to speed with these is to just look at sample makefiles from open source projects and read "How to Write Go Code".
In addition to the package organisation: like pip in Python, use dep (https://github.com/golang/dep) for Go package management. If you use it on an existing Go package, it will automatically build the dependency tree with versions for all the packages being used. When shifting to a production server, dep ensure will use Gopkg.toml to install all the required packages.
Just use dep ensure -add; other commands for dep are:
Commands:
init Set up a new Go project, or migrate an existing one
status Report the status of the project's dependencies
ensure Ensure a dependency is safely vendored in the project
version Show the dep version information
check Check if imports, Gopkg.toml, and Gopkg.lock are in sync
Examples:
dep init set up a new project
dep ensure install the project's dependencies
dep ensure -update update the locked versions of all dependencies
dep ensure -add github.com/pkg/errors add a dependency to the project
NInject's module architecture seems useful, but I'm worried that it is going to get into a bit of a mess.
How do you organise your modules? Which assembly do you keep them in and how do you decide what wirings go in which module?
Each subsystem gets a module. Of course the definition of what warrants categorisation as a 'subsystem' depends...
In some cases, responsibility for certain bindings gets pushed up to a higher level, because a lower-level subsystem/component is not in a position to make a final, authoritative decision; sometimes this can be achieved by passing parameters into the Module.
Replying to my own post after a couple of years of using NInject.
Here is how I organise my NInjectModules, using a Book Store as an example:
BookStoreSolution
    Domain.csproj
    Services.csproj
        CustomerServicesInjectionModule.cs
        PaymentProcessingInjectionModule.cs
    DataAccess.csproj
        CustomerDatabaseInjectionModule.cs
        BookDatabaseInjectionModule.cs
    CustomSecurityFramework.csproj
        CustomSecurityFrameworkInjectionModule.cs
    PublicWebsite.csproj
        PublicWebsiteInjectionModule.cs
    Intranet.csproj
        IntranetInjectionModule.cs
What this is saying is that each project in the system comes prepackaged with one or more NInject modules that know how to set up the bindings for that project's classes.
Most of the time an individual application is not going to want to make significant changes to the default injection modules provided by a project. For example, if I am creating a little WinForm app which needs to import the DataAccess project, normally I am also going to want to have all the project's Repository<> classes bound to their associated IRepository<> interfaces.
At the same time, there is nothing forcing an individual application to use a particular injection module. An application can create its own injection module and ignore the default modules provided by a project that it is importing. In this way the system still remains flexible and decoupled.