I am writing code on top of an established enterprise application. I see that the application has 4 modules, as shown below.
- Srk
- SrkEJB
- SrkUtils
- SrkWeb
I have gone through the code and I see that some modules are tiny; for example, the SrkEJB module has just 2 EJBs. I don't see any reason to create a separate module for 2 Java classes.
I have simplified the above approach, as shown below.
Srk
- com.srk.utils
- com.srk.ejb
- com.srk.web
How is the first module-based architecture different from the second from an architectural standpoint? Generally, which one is followed most often when creating an application from scratch? If there is no clear convention, what could be the trade-offs of each approach? I believe this is not specific to Java alone.
I don't see any reason to create a separate module for 2 Java classes.
I believe this is an invalid chain of reasoning. You should create a separate module whenever you discover classes related to the same subject. The number of classes is irrelevant here. You may have a module containing only one class, if the purpose of that class is really different from the others.
The name of the package / module should clearly state its purpose. So the module name "Srk" seems bad to me (I suppose "srk" is some kind of abbreviated company name and is not related to the architecture).
The same applies to "utils", which sounds very generic to me. It is impossible to tell what those utils are about.
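To make the point about small, purpose-driven modules concrete, here is a minimal sketch using JPMS module descriptors with the names from the question (illustrative only; the same reasoning applies to Maven modules or EAR sub-projects): even a module containing just two classes declares its own dependencies and public surface, which a package inside one big module cannot enforce.

    // module-info.java for a deliberately small module (illustrative sketch).
    // Even with just two EJB classes, the descriptor states what the module
    // depends on and what it exposes; a plain package cannot enforce either.
    module srk.ejb {
        requires srk.utils;   // the only module it is allowed to see
        exports com.srk.ejb;  // the only package other modules may use
    }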
Related
I'm trying to update a software system to JDK-11 using modules, and everything was going just fine right up until I slammed head-on into the aforementioned issue.
I have a legacy signed JAR that I need to incorporate for interaction with legacy systems. There's no way to update the JAR and no way to get a new version. The JAR must be signed in order to be usable (the whole "trusted code" deal and whatnot). The problem is that the JAR contains classes in the unnamed (root) package. Yeah. Stupid. Bad practice. Blablabla. It's still there, and I still need to use it.
I've not found any documentation or answers anywhere that would remotely suggest that what I need is possible. In fact, the opposite is true: everyone is adamant that in the "new"(ish) module system, no class may reside in the unnamed package.
Needless to say, I'm unable either to modify the contents of the JAR or to get at the sources to build a new one - and that's without even considering the issue of the signature...
That said: I refuse to believe the folks at Oracle would leave such a glaring oversight with regards to legacy code. As we all know, a lot of the time we have no choice but to use it for legitimate reasons, and we can't do anything to fix/update/refactor/etc... I would have hoped there was a mechanism added to the module system to support this, albeit for extreme cases only, etc...etc...
Disclaimer: I do fully understand why this isn't meant to be supported. What I'm having a hard time with is the lack of a workaround...
Thanks!
I've already tried:
creating a facade module that transitively adds the offending module (obviously no dice, same problem)
unpacking-and-repacking the module while temporarily disabling signature validation in a test env (fails because the class is apparently referenced within many other, properly-organized classes)
finding an updated module (no luck here, either)
beheading a chicken and roasting it over a pentagram while invoking the aid of ancient pagan gods (tasty, but didn't fix it)
curling up in a ball under my desk and weeping until execution succeeds (that's where I'm typing this from)...
I didn't find any explanation in the reference, but when I type impl in IntelliJ IDEA, I get an error:
It seems that it's treated as a reserved word, but what's it for?
I tried putting many kinds of stuff after impl but I get the error every time.
Update: as of Kotlin 1.2 this keyword has been renamed to actual (and header has become expect).
It's for future multiplatform project support, and it's the counterpart of the header keyword which #hotkey explained in their comment here. It appeared in one of Andrey Breslav's presentations, which you can find here; this specific topic starts at the 14:25 mark.
To sum it up briefly, the basic idea he presents is that you could have a common module shared between your platforms, in which some functions are declared but not implemented and are marked with the header keyword. Then, for the different platforms (JVM, JS, etc.), you could have separate modules that implement these functions in platform-specific ways - these actual implementations are where the impl keyword would be used.
He says that this whole system is just an internal prototype for now, so this presentation is probably all the public info we have about it. I'd also be interested in more details about this mechanism though :)
Update: as of the Kotlin 1.2 Beta, these keywords have now been replaced with expect and actual.
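For reference, a minimal sketch of how this looks with the final expect/actual spelling (the function name is made up; the three declarations would live in the common and platform-specific source sets of a multiplatform project):

    // Common module: declared here, but implemented per platform.
    expect fun platformName(): String

    // JVM module: the platform-specific implementation.
    actual fun platformName(): String = "JVM"

    // JS module: a different implementation of the same declaration.
    actual fun platformName(): String = "JavaScript"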
I'm trying to learn about RPackageTags:
It seems RPackageTags are just something like sub-packages?
Unlike, say, tags in OS X, one item (here, one class) cannot have more than one tag?
A tag is always specific to an RPackage? The tags in Package1-Tag1 and Package2-Tag1 are not the same, i.e. are two different instances of RPackageTag?
There is the possibility that Package1-Tag1 is just an RPackage, and also the possibility that Package1-Tag1 is the combination of RPackage Package1 and RPackageTag Tag1?
Is that right? What is the idea behind the introduction of RPackageTags?
How are RPackageTags related to Monticello packages?
Some answers:
Yes, they are like subpackages
For now, yes. This is because we needed to keep some compatibility between RPackages and System Categories, and we decided (for now) that the scheme would be: RPackage+Tag = SystemCategory. This will change in the future, by removing system categories and allowing tags to be like OS X tags.
For now, yes... see point (2) :)
No, you cannot. This is because of organisation issues that also derive from attaching RPackage+Tag to a SystemCategory: if you have a package A-B and a package A with tag B, both would share the SystemCategory A-B... which is seriously bad for the organisation of the system.
Of course, as you can see, the real problem here is having an ancient way of organisation still in the system. This will change, in Pharo 4 or (most probably) in Pharo 5; for now we need to live with this convenience solution.
The Ring package structure was adopted in Pharo 3. Roughly, what used to be known as an MC package became an RPackage, and pure-Smalltalk categories disappeared and were replaced by RPackageTags. You can think of it as a way to categorise classes inside your MC package. I don't know what the Pharo board will decide in the future, but for now you can have only 1 tag per class.
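As a rough illustration of the RPackage+Tag = SystemCategory scheme described above (the class name is hypothetical), defining a class under the system category 'Package1-Tag1' places it in the RPackage Package1 with the tag Tag1 - which is exactly why a package literally named A-B would collide with package A tagged B:

    "Hypothetical class definition; the category string drives the mapping."
    Object subclass: #SomeDomainClass
        instanceVariableNames: ''
        classVariableNames: ''
        category: 'Package1-Tag1'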
Regarding the solution described in this post, a third assembly is required to forward the type resolution to the correct assembly.
When this reference is added to the Android class library project that uses the type, the forwarding does not seem to happen. The reference needs to be added to the Android application project, which is the end point of the build process.
Does any solution exist to add the reference that embeds the forwarding to the project that requires it?
I mean, if in my solution architecture I use:
MyApp.Core - PCL
MyApp.Core.Droid - Android class library
MyApp.UI.Droid - Android Application
The System.Net namespace (System.Net.Sockets.AddressFamily, for example) is used in my ViewModel, which is located in MyApp.Core.Droid (a redirection of MyApp.Core with some plugins). In this case, it is more logical (and readable) to have the reference in MyApp.Core.Droid. But in fact, the assembly resolution is done (from what I understand) when packaging the application, so in MyApp.UI.Droid. So in this case, the reference needs to be added to MyApp.UI.Droid in order to be found (fails if added to MyApp.Core.Droid).
In this case the solution works, but it is not at all obvious to a new programmer joining the team - one who has not faced this problem - why this reference needs to be added to the UI project...
I'm not sure my point is easy to understand the way I've introduced it. Let me know if you need more explanation.
Thanks,
Guillaume.
I'm not entirely sure why this 'fails if added to MyApp.Core.Droid' - it feels like this should be added. However, I know that Xamarin have tweaked and changed the dependency resolution scripts a few times.
With that said, I think the best answer to your question is 'don't worry about it too much' - this is only a small inconvenience right now and it will be resolved by Xamarin's updates 'soon'.
The current PCL support is something that I and a number of others have worked on in order to make things work. This set of 'hacks' is a workaround for the lack of 'proper PCL' support - it simulates what the Microsoft PCL build platform does on WindowsPhone, WPF, etc, but it isn't a perfect implementation.
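Concretely, those shim assemblies work through type forwarding: the extra assembly contains no code of its own, only attributes that redirect type resolution to the real platform assembly. A rough sketch of what such a forwarder looks like (the forwarded type is just an example, not the actual Xamarin shim):

    // Contents of a forwarding ("facade") assembly - a sketch only.
    // It simply redirects requests for the type to the assembly that
    // really defines it on this platform.
    using System.Runtime.CompilerServices;

    [assembly: TypeForwardedTo(typeof(System.Net.Sockets.AddressFamily))]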
Xamarin have now committed to 'proper PCL' support. When that happens then these type-forwarding dependencies will automatically be added. The good news is that this support is perhaps now only days, weeks or at most months away.
Since upgrading from 4.7 to ECC 6, the ABAP compiler has become a lot stricter about the use of certain statements in an OO context.
For instance, you're not allowed to use the LIKE statement but instead have to use TYPE, internal tables do not have an implicit header line, etc.
These restrictions are explained in greater detail here
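As a quick illustration (a sketch with a generic dictionary structure, not code from our system), a declaration that old procedural includes still accept has to be rewritten like this inside a class:

    " Obsolete form, still tolerated in old procedural includes:
    DATA lt_lines LIKE tline OCCURS 0 WITH HEADER LINE.

    " OO-compatible form required within classes and interfaces:
    DATA lt_lines TYPE STANDARD TABLE OF tline.
    DATA ls_line  TYPE tline.   " explicit work area instead of a header line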
MY QUESTION: To what extent does this restriction affect your existing code base?
We have over a thousand "classes", written in OO since 1998, as far as OO was available at the time. For the most part each class is its own include in SE38, with the class definition and implementation together in this include.
Up to now, we could successfully change and activate these classes as long as the main program was pre-existing in 4.7. Now we are trying to use one of these older classes in a new main program for regression test purposes, and we are getting the following error:
"Within classes and interfaces, you can only use "TYPE" to refer to ABAP Dictionary types (not "LIKE" or "STRUCTURE")."
This error is valid as per the current definition of the SAP language.
I would like to know whether the SAP interpreter intentionally continues to run old code with obsolete statements, or whether a future patch may correct this "feature" and cause these classes to stop compiling.
Each development object is tagged with a version corresponding to the SAP version it was developed on. You can see this in version management or table VRSD.
As I understand it, that is there specifically so that code with statements that have been made illegal in later versions will survive an upgrade and continue to run.
This is why, when you attach an include developed in 4.5b to a class that was developed in NW700, it won't compile. The compiler knows that this is new development, and it's applying the rules accordingly.
The ABAP community has been informed for a really long time (years) that LIKEs, work areas, RANGEs etc. are obsolete.
I don't think SAP will kill any old code, but I wouldn't count on it if I were in charge.
So, can they cause it to stop compiling? Yes. Will they? Probably not.