Is there a strict definition of what a module is (in programming languages)? What minimal properties are required? That is, on what grounds can one claim that a given programming language has modules or not?
Is there any classification of module features, so that one implementation can be said to be stronger or better than another?
I don't think there is any kind of 'official' module definition. Here is how I see it:
Modules are about separation of concerns. That usually includes separation at the file level, but namespaces matter much more: when I write my code, I don't want to have to think about other programmers' code beyond module names. Different languages fail at different levels of module management.
JavaScript has no modules at all. There is one global namespace, so people started using anonymous functions and inventing different ways of combining libraries without polluting that namespace.
I don't know C++ well, but AFAIK it allows you to declare classes, yet there isn't any way to bundle many classes into something bigger, so if you try to link someone's library with your code you may get linkage errors.
Java has packages and naming conventions (based on domain names), so it's fairly easy to keep namespaces separated (although this isn't enforced by the language). Transitive dependencies are painful, however, because your dependency tree can't contain the same module in two different versions.
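To make the Java part concrete, here is a minimal sketch (package and class names are made up) of the reverse-domain-name convention:

// File 1: Formatter.java, in the library project. The package name,
// derived from the author's domain, keeps these classes out of
// everyone else's namespace.
package org.example.acme.text;

public class Formatter {
    public String shout(String s) {
        return s.toUpperCase() + "!";
    }
}

// File 2: Main.java, in a consuming project. The consumer only needs
// to know the package ("module") name.
import org.example.acme.text.Formatter;

public class Main {
    public static void main(String[] args) {
        System.out.println(new Formatter().shout("hello"));
    }
}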
So you have to pick or create your own definition of a module.
If I build several classes and I import the same library in each class, am I going to make my project heavy?
Or is it exactly the same as importing it only once?
Typically the linker (or its equivalent) will ensure you have only one copy.
There are some subtleties with things such as Java application servers, where you may want to "isolate" classes (typically whole applications) and so pay the cost of having duplicate copies of some common libraries.
Generally speaking, just design and code naturally and the right thing will happen.
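As a concrete illustration, here is a small Java sketch (class names are made up): both classes import the same library classes, yet the compiled project contains and loads only one copy of that library.

import java.util.ArrayList;
import java.util.List;

// Two classes in the same project, both importing java.util.ArrayList.
// An import only tells the compiler which names you mean; it copies no
// code, and the runtime loads ArrayList once no matter how many classes
// refer to it.
class CustomerRepository {
    private final List<String> customers = new ArrayList<>();
    void add(String name) { customers.add(name); }
}

class OrderRepository {
    private final List<String> orders = new ArrayList<>();
    void add(String id) { orders.add(id); }
}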
What is meant by the "dependency inversion principle" in object-oriented programming? What does it do?
In object-oriented programming, the dependency inversion principle refers to a specific form of decoupling in which the conventional dependency relationships, established from high-level, policy-setting modules to low-level, dependency modules, are inverted (i.e. reversed), so that high-level modules are independent of the low-level modules' implementation details.
The principle states:
A. High-level modules should not depend on low-level modules. Both should depend on abstractions.
B. Abstractions should not depend upon details. Details should depend upon abstractions.
Source
The main reason for using dependency inversion is to allow different implementations of those lower-level modules to be selected, either at compile time or at run time via configuration. This is a big win for testing because it allows you to completely isolate the code being tested and use mock objects.
Another way this is a huge help is for client deployments. Let's say you have different customers with different auth systems, or different databases, or reporting systems, or whatever. You can configure their system at deployment time by changing an XML file to choose the right implementations of those components to load, with no code changes at all.
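As a rough sketch in Java (every class and interface name here is made up for illustration), the high-level login code depends only on an abstraction, and the concrete implementation is chosen at startup, for example from a configuration value:

// The abstraction that both sides depend on.
interface AuthService {
    boolean authenticate(String user, String password);
}

// Low-level details implement the abstraction.
class LdapAuthService implements AuthService {
    public boolean authenticate(String user, String password) {
        // ... talk to an LDAP server ...
        return true;
    }
}

class DatabaseAuthService implements AuthService {
    public boolean authenticate(String user, String password) {
        // ... check credentials against a database ...
        return true;
    }
}

// High-level module: knows nothing about LDAP or databases.
class LoginController {
    private final AuthService auth;

    LoginController(AuthService auth) { // the dependency is injected
        this.auth = auth;
    }

    boolean login(String user, String password) {
        return auth.authenticate(user, password);
    }
}

class App {
    public static void main(String[] args) {
        // Pick the implementation from configuration rather than code,
        // e.g. run with -Dauth=ldap to switch without any code change.
        AuthService auth = "ldap".equals(System.getProperty("auth", "db"))
                ? new LdapAuthService()
                : new DatabaseAuthService();
        System.out.println(new LoginController(auth).login("alice", "secret"));
    }
}

In a unit test you would hand LoginController a stub or mock AuthService, which is exactly the isolation described above.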
At which point do you decide that some of your subroutines and common code should be placed in a class library or DLL? In one of my applications, I would like to share some of my common code between different projects (as we all know, it's a programming sin to duplicate code).
The vast majority of my code is all within a single project. I also have one small utility that's partitioned from the main executable that runs with elevated permissions for a sole purpose. The two items have, at most, three subroutines in common. Should these common subroutines be placed and called from a class library? How do you decide when to do this? When you have at least one shared subroutine? Twenty-plus lines of code?
I don't believe that this should be language specific or framework dependent, but if so, I'm using the .NET framework.
There are more ways to share code between applications than a DLL. From what you describe, you're not talking about a lot of code, so you probably don't need to worry about it too much.
In general, I use the following rule of thumb:
For trivial code duplication (a couple of simple 1-2 line functions that are easy to understand and debug), I'll just copy and paste the code.
For more complicated functions (a small library of stand-alone helper functions, contained in a file or two, which require a modest level of maintenance and debugging) I'll simply include the file in both projects (either by linking, or defining a subrepository, or something like that).
For more extensive code sharing (a group of interrelated classes, or a database communication layer, which is useful for multiple projects) I'll refactor them out into a standalone library, and package and distribute them using whatever's appropriate for whatever I'm programming in.
Because the complexity of managing your code increases by an order of magnitude at each step (once you're packaging DLLs for multiple projects, you need to think about versioning issues), you only want to move to the next level when you need to. It doesn't sound like you're feeling the pain of handling your common code yet, and if that's the case there's no real need.
If code is shared between multiple applications, then it has to reside in a DLL or class library.
For a larger application you might also want to break different subsystems of the application into separate libraries. That way each project can focus on one particular task. This simplifies the structure of your application and makes it easier to find any one piece of code. For example, you might have a GUI application with different DLLs (.NET projects) for:
Working with a specific network protocol
Accessing common code, for example utility classes
Accessing legacy code (via P/Invoke)
etc...
I'm working on several distinct but related projects in different programming languages. Some of these projects need to parse filenames written by other projects, and expect a certain filename pattern.
This pattern is now hardcoded in several places and in several languages, making it a maintenance bomb. It is fairly easy to define this pattern exactly once in a given project, but what are the techniques for defining it once and for all for all projects and for all languages in use?
Creating a domain-specific language and then compiling it into code for each of the target languages you are using would be the best (and most elegant) solution.
It's not difficult to make a DSL: either embed it in a host language (Ruby, since it's the 'in' thing right now, or something like Lisp or Haskell), or create a grammar from scratch (with ANTLR, perhaps). If the project is large, this path is worth your while.
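Short of a full DSL, the lightweight end of this idea is a small generator run as a build step: the pattern lives in exactly one file and the build emits a constant for each target language. A rough Java sketch (the file names and output languages are only examples):

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

// Reads the single authoritative pattern and writes a constant definition
// for each target language. A real generator would also escape quotes and
// backslashes as required by each language.
public class GenerateFilenamePattern {
    public static void main(String[] args) throws IOException {
        String pattern = Files.readString(Path.of("filename-pattern.txt")).trim();
        Files.createDirectories(Path.of("generated"));

        Files.writeString(Path.of("generated/FilenamePattern.java"),
            "public final class FilenamePattern {\n"
          + "    public static final String REGEX = \"" + pattern + "\";\n"
          + "}\n");

        Files.writeString(Path.of("generated/filename_pattern.py"),
            "FILENAME_PATTERN = r\"" + pattern + "\"\n");
    }
}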
I'd store the pattern in a simple text file and, depending on the particular project:
Embed it in the source at build time (preprocessing)
If the above is not an option, treat it as a config file read at runtime (see the sketch below)
Edit: I assume the pattern is something no more complicated than a regex, otherwise I'd go with the DSL solution from another answer.
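For the config-file option, here is a minimal Java sketch (the file name is made up) that reads the shared pattern once at startup:

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.regex.Pattern;

// Loads the shared filename pattern from a config file that every project
// reads, instead of hardcoding the pattern in each code base.
public class FilenamePatternConfig {
    public static Pattern load() throws IOException {
        String regex = Files.readString(Path.of("filename-pattern.txt")).trim();
        return Pattern.compile(regex);
    }

    public static void main(String[] args) throws IOException {
        Pattern p = load();
        System.out.println(p.matcher("report_2024-01-31.csv").matches());
    }
}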
You could use a common script, process or web service for generating the file names (depending on your set-up).
I don't know which languages you are talking about, but most languages can use external dynamic libraries (DLLs/shared objects), and you can export the common functionality from such a library.
For example, you could implement a get-file-name function in a simple C library and use it across the rest of the languages.
Another option would be to generate the common code for each language as part of the build process; this should not be too complex.
I would suggest the dynamic-library approach if feasible (you did not give enough information to determine this), since maintaining that solution will be much easier than maintaining code generation for different languages.
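For the shared-library route, the consuming side can stay very small. A hedged Java sketch using the JNA library, assuming a native library named "filenamelib" that exports a get_file_name function (both names are made up):

import com.sun.jna.Library;
import com.sun.jna.Native;

// Binds to a hypothetical C library (libfilenamelib.so / filenamelib.dll)
// exporting: const char* get_file_name(const char* base, int index);
public class FilenameLibDemo {
    public interface FilenameLib extends Library {
        FilenameLib INSTANCE = Native.load("filenamelib", FilenameLib.class);
        String get_file_name(String base, int index);
    }

    public static void main(String[] args) {
        System.out.println(FilenameLib.INSTANCE.get_file_name("report", 1));
    }
}

Other languages would bind to the same library through their own FFI, so the pattern logic exists in only one place.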
Put the pattern in a database; the easiest and most comfortable option might be an XML database. This database would be accessible to all the projects, and they would read the pattern from there.
There are two popular naming conventions:
1. vc90/win64/debug/foo.dll
2. foo-vc90-win64-debug.dll
Please discuss the problems/benefits associated with either approach.
I am also wondering if it is possible to expose metadata (i.e. compiler, platform, build type) in approach #1 in an easy-to-use, cross-platform manner.
#2 is good for distribution, where several variations will be packaged together in the same folder or zip file. However, you probably don't want all that information in the file name itself, as it makes it difficult to vary those attributes via parameters to your makefile/csproj/NAnt script, etc. It would be easier to have several files called "foo" in different folders (where you decide the folder structure).
For .NET assemblies, you can store this information in the assembly itself:
http://www.codinghorror.com/blog/archives/000142.html
I'm not familiar enough with other assembly types to know what they provide.