Lithium: How do I change the location where Connections and similar classes look for adapters?

I've been trying to get Connections to use a customised adapter located at app/extensions/data/source/database/adapter/. I thought extending the Connections class and replacing
protected static $_adapters = 'data.source';
with
protected static $_adapters = 'adapter.extension.data.source';
and changing the connections class used at the top of app/config/bootstrap/connections.php to use app\extensions\data\Connections;
would be enough to get it started. However, this just leads to a load of errors because the code is still trying to use the original Connections class.
Is there a simple way to achieve this, or do I have to recreate the entire set of classes from lithium/data in extensions with rewritten class references?
EDIT:
Turns out I was going about this the wrong way. After following Nate Abele's advice, Libraries::path('adapter') showed me where to correctly place the MySql.php file I'm trying to override ;-)

For dealing with how named classes (i.e. services, in the abstract) are located, you want to take a look at the Libraries class, specifically the paths() method, which allows you to define how class paths are looked up.
You can also look at the associated definitions, like locate() and $_paths, to give you an idea of what the default configuration looks like.
Finally, note that the Connections class is 'special' since it defines one path dynamically, based on the supplied configuration: http://li3.me/docs/api/lithium/1.0.x/lithium/data/Connections::_class()
This should help you reconfigure how your classes are organized without extending/overriding anything. Generally you shouldn't need to do that unless you need some drastically different behavior.

Related

Enforcing API boundaries at the Module (Distribution?) level

How do I structure Raku code so that certain symbols are public within the library I am writing, but not public to users of the library? (I'm saying "library" to avoid the terms "distribution" and "module", which the docs sometimes use in overlapping ways. But if there's a more precise term that I should be using, please let me know.)
I understand how to control privacy within a single file. For example, I might have a file Foo.rakumod with the following contents:
unit module Foo;
sub private($priv) { #`[do internal stuff] }
our sub public($input) is export { #`[ code that calls &private ] }
With this setup, &public is part of my library's public API, but &private isn't – I can call it within Foo, but my users cannot.
How do I maintain this separation if &private gets large enough that I want to split it off into its own file? If I move &private into Bar.rakumod, then I will need to give it our (i.e., package) scope and export it from the Bar module in order to be able to use it from Foo. But doing so in the same way I exported &public from Foo would result in users of my library being able to use Foo and call &private – exactly the outcome I am trying to avoid. How do I maintain &private's privacy?
(I looked into enforcing privacy by listing Foo as a module that my distribution provides in my META6.json file. But from the documentation, my understanding is that provides controls what modules package managers like zef install by default but does not actually control the privacy of the code. Is that correct?)
[EDIT: The first few responses I've gotten make me wonder whether I am running into something of an XY problem. I thought I was asking about something in the "easy things should be easy" category. I'm coming at the issue of enforcing API boundaries from a Rust background, where the common practice is to make modules public within a crate (or just to their parent module) – so that was the X I asked about. But if there's a better/different way to enforce API boundaries in Raku, I'd also be interested in that solution (since that's the Y I really care about)]
I will need to give it our (i.e., package) scope and export it from the Bar module
The first step is not necessary. The export mechanism works just as well on lexically scoped subs too, and means they are only available to modules that import them. Since there is no implicit re-export, the module user would have to explicitly use the module containing the implementation details to have them in reach. (As an aside, personally, I pretty much never use our scope for subs in my modules, and rely entirely on exporting. However, I see why one might decide to make them available under a fully qualified name too.)
It's also possible to use export tags for the internal things (is export(:INTERNAL), and then use My::Module::Internals :INTERNAL) to provide an even stronger hint to the module user that they're voiding the warranty. At the end of the day, no matter what the language offers, somebody sufficiently determined to re-use internals will find a way (even if it's copy-paste from your module). Raku is, generally, designed with more of a focus on making it easy for folks to do the right thing than on making it impossible to do "wrong" things if they really want to, because sometimes that wrong thing is still less wrong than the alternatives.
Off the bat, there's very little you can't do as long as you're in control of the meta-object protocol. Anything that's syntactically possible you could, in principle, implement with a specific kind of method or class declared through it. For instance, you could have a private class which would be visible only to members of the same namespace (to whatever level you design). There's Metamodel::Trusting, which defines, for a particular entity, whom it trusts (please bear in mind that this is part of the implementation, not the spec, and thus subject to change).
A less scalable way would be to use trusts. The new, private modules would need to be classes and issue a trusts X for every class that should access them. That could include classes belonging to the same distribution... or not; that's up to you to decide. It's the Metamodel class above that supplies this trait, so using it directly might give you a greater level of control (at a lower level of programming).
There is no way to enforce this 100%, as others have said. Raku simply provides the user with too much flexibility for you to be able to perfectly hide implementation details externally while still sharing them between files internally.
However, you can get pretty close with a structure like the following:
# in Foo.rakumod
use Bar;
unit module Foo;
sub public($input) is export { #`[ code that calls &private ] }
# In Bar.rakumod
unit module Bar;
sub private($priv) is export is implementation-detail {
    unless callframe(1).code.?package.^name eq 'Foo' {
        die '&private is a private function. Please use the public API in Foo.'
    }
    #`[do internal stuff]
}
This function will work normally when called from a function declared in the mainline of Foo, but will throw an exception if called from elsewhere. (Of course, the user can catch the exception; if you want to prevent that, you could exit instead – but then a determined user could overwrite the &*EXIT handler! As I said, Raku gives users a lot of flexibility).
Unfortunately, the code above has a runtime cost and is fairly verbose. And, if you want to call &private from more locations, it would get even more verbose. So it is likely better to keep private functions in the same file the majority of the time – but this option exists for when the need arises.

Why does import need the project name and not just the object?

It took me a while to even understand the problem I'm about to describe, so please let me know if the description is confusing...
I have a class called "cProp" that defines a number of sub-classes. To refer to these classes in other files in the project, I have to do something like...
Dim cp = New cProp.InflationRow()
I understand why this is; since InflationRow is "inside" cProp, I need to tell the compiler where to find it. Fine.
But this, of course, gets tedious, so sometimes you want to fix it...
Imports cProp
Why doesn't that work? Why do I have to...
Imports ProjectName.cProp
You might wonder why I care, but these files are used in numerous projects with different names, so if I use Imports I have to change the project name in a bunch of places. Am I right that Namespace is likely the solution I want?
My confusion stems from the fact that the compiler can figure out just fine which cProp (which is the only one) I'm referring to in the code, so why not in the Imports? I think I'm missing something fundamental here.
It's because your project has a root namespace; you can see it in your project properties. You can delete it there, which will let you use the class name on its own from another project. However, namespaces are a good way to organize your classes. In VB, a project defaults to having a root namespace, which you don't see in your code files.
Within your project, since everything sits inside that same root namespace, code within the root namespace doesn't have to qualify names with it. An Imports statement, however, is not inside any namespace (only types are), so it has to provide the full qualification.
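A minimal sketch of how that plays out; the root namespace ProjectName and the member names below are only illustrative:
' File 1: cProp.vb -- declared at file level, so the compiler nests it inside
' the project's root namespace and its full name becomes ProjectName.cProp.
Public Class cProp
    Public Class InflationRow
        Public Rate As Double
    End Class
End Class

' File 2: another file in the SAME project -- it is already inside the root
' namespace, so no Imports and no root-namespace prefix are needed.
Public Class Consumer
    Public Sub Demo()
        Dim cp = New cProp.InflationRow()
    End Sub
End Class

' File 3: a file that wants to drop the cProp prefix. The Imports line is
' resolved outside any namespace, so it must use the fully qualified name.
Imports ProjectName.cProp

Public Class OtherConsumer
    Public Sub Demo()
        Dim ir = New InflationRow() ' the cProp prefix is no longer needed
    End Sub
End Class
If the root namespace is cleared in the project properties, as described above, the Imports line becomes simply Imports cProp and no longer depends on the project name.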

VB.NET: Avoiding redundancies

Each of my VB.NET projects needs a certain set of custom modules.
For example:
modLog
modGUID
modControls
modRegistry
In SOME of these modules I have a few references to other modules.
For example the sub modLog.WriteLog goes like this:
Public Sub WriteLog(ByVal uText As String)
    If globalclassOEM.IsOEMVersion Then
        sPath = (some custom path)
    Else
        sPath = (some default path)
    End If
    'Write log text to file
End Sub
This is really stupid, because sometimes I have to add a whole lot of modules and classes to a tiny project just to resolve dependencies like the one above and to be able to use the few modules I really need.
Are there any best practices in VB.NET to avoid such situations?
The best way to avoid such problems would be to avoid the problem in the first place ;) That is: libraries should do exactly what they are meant to do and not do "extra work" in the background. In your example: why does the WriteLog function need to determine the path, and why doesn't the caller define it and pass it to the logging function/class?
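A minimal sketch of that first suggestion; the names IsOEMVersion, oemPath and defaultPath are placeholders for whatever the calling project already knows:
Public Module modLog
    ' The caller supplies the path, so the logging module no longer needs to
    ' know anything about OEM settings or other project specifics.
    Public Sub WriteLog(ByVal uText As String, ByVal logPath As String)
        System.IO.File.AppendAllText(logPath, uText & Environment.NewLine)
    End Sub
End Module

' At the call site, the project decides where the log goes:
'   WriteLog("Application started", If(IsOEMVersion, oemPath, defaultPath))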
If you still want or need to have the functions that way, you might circumvent the problem by defining interfaces and then putting ALL your interfaces into a single library, but NOT the classes that implement them. That would still require loading the file with the interface definitions, but of course you don't need to load any class that implements them.
You might also use some kind of plugin system: when your logging class (in this example) is created, it might try to dynamically load the required assemblies. If they do not exist, the class works without them; otherwise it can use them as intended. That doesn't make the programmer's life easier, though (IMHO).
The "best" way (IMHO again) would be the first suggestion: don't have "low-level" libraries referencing other libraries. Everything else would most likely be considered a design flaw rather than a "solution".
I have not covered a whole heap of referencing in VB.NET; however, would it be possible for you to create a .dll with all the base modules? That would mean you could reference this .dll, saving you time. For the exceptional cases where a module references other modules, you could just write that module manually.
As others have alluded to, you never want to directly include the same code file in multiple projects. That is almost never a good idea and it always leads to messy code. If you have common code that you want to share between two different projects, then you need to create a third project (a class library) which will contain the common code, and then the other two projects will just reference to the class library. It's best if you can have all three projects in the same solution and then you can use project references. However, if you can't do that, you can still have a direct file reference to the DLL that is output by that class library project.
Secondly, if you really want to avoid spaghetti code, you should seriously look into dependency-injection (DI). The reason I, and others have suggested this, is because, even if you move the common code into class libraries so that it can be shared by multiple projects, you'll still have the problem that your class libraries act as "black-boxes" that magically figure out everything for you and act appropriately in every situation. On the face of it, that sounds like a good thing for which a developer should strive, but in reality, that leads to bad code in the long run.
For instance, what happens when you want to use the same logging class library in 100 different projects and they all need to do logging in slightly different ways. Now, your class library has to magically detect all of those different situations and handle them all. For instance, some projects may need to save the log to a different file name. Some may need to store the log to the Windows event log or a database. Some may need to automatically email a notification when an error is logged. Etc. As you can imagine, as the projects increase and the requirements grow, the logging class library will need to get more and more complex and confusing which will inevitably lead to more bugs.
DI, on the other hand, solves all these issues, and if you adhere to the methodology, it will essentially force you to write reusable code. In simple terms, it just means that all the dependencies of a class should be injected (passed by parameter) into it. In other words, if the Logger class needs an event log, or a database connection, it should not create or reach out and find those things itself. Instead, it should simply require that those dependencies be passed into it (often in the constructor). Your example using DI might look something like this:
Public Interface ILogFilePathFinder
    Function GetPath() As String
End Interface

Public Class LogFilePathFinder
    Implements ILogFilePathFinder

    Private _isOemVersion As Boolean

    Public Sub New(isOemVersion As Boolean)
        _isOemVersion = isOemVersion
    End Sub

    Public Function GetPath() As String Implements ILogFilePathFinder.GetPath
        If _isOemVersion Then
            Return "C:\TestOem.log"
        Else
            Return "C:\Test.log"
        End If
    End Function
End Class

Public Interface ILogger
    Sub WriteLog(ByVal text As String)
End Interface

Public Class FileLogger
    Implements ILogger

    Private _pathFinder As ILogFilePathFinder

    Public Sub New(pathFinder As ILogFilePathFinder)
        _pathFinder = pathFinder
    End Sub

    Public Sub WriteLog(text As String) Implements ILogger.WriteLog
        Dim path As String = _pathFinder.GetPath()
        ' Write text to file ...
    End Sub
End Class
As you can see, it requires a little bit of extra work, but when you design your code like this, you'll never regret it. You'll notice that the logger class requests a path finder as a dependency. The path finder, in turn, requests an OEM setting as a dependency. So, to use it, you would need to do something like this:
Dim pathFinder As ILogFilePathFinder = New LogFilePathFinder(_OemSettings.IsOemVersion) ' Note: don't use a global to store the settings; always ask for them to be injected
Dim logger As ILogger = New FileLogger(pathFinder)
logger.WriteLog("test")
Now, you can easily reuse all of this code in any situation. For instance, if you have different projects that need to use a different log file path, they can still use the common FileLogger class, they just need to each implement their own version of ILogFilePathFinder and then inject that custom path finder into the common FileLogger. Hopefully you see how doing it this way can be very useful and flexible.
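For example, a hypothetical project that needs a different log location could reuse the common FileLogger unchanged by injecting its own path finder (the class name and path below are purely illustrative):
Public Class DatedLogFilePathFinder
    Implements ILogFilePathFinder

    Public Function GetPath() As String Implements ILogFilePathFinder.GetPath
        ' This project writes one log file per day instead of using a fixed path.
        Return "C:\Logs\App-" & Date.Today.ToString("yyyy-MM-dd") & ".log"
    End Function
End Class

' Composition at start-up keeps exactly the same shape as before:
'   Dim logger As ILogger = New FileLogger(New DatedLogFilePathFinder())
'   logger.WriteLog("test")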

How to find and remove unused class files from a project

My XCode project has grown somewhat, and I know that there are class files in there which are no longer being used. Is there an easy way to find all of these and remove them?
If the class files just sit in your project without being part of a target, click on the project itself in the tree view so that you see all files in the table. Make sure the "Target" column is visible in the table view, then iterate through your targets and find the files that don't have a check anywhere; those are no longer compiled.
But if you still compile the classes and they are no longer used, that case is a bit more difficult. Check out this project
http://www.karppinen.fi/analysistool/#dependency-graphs
You could create a dependency graph and try to find orphaned classes that way.
Edit: Link went dead, but there still seem to be projects of Objective-C dependency graphs around, for example https://github.com/nst/objc_dep
If they are C or C++ symbols, then you can just let the linker do the work for you.
If you're looking to remove Objective-C symbols, then try to refactor the class name (e.g. rename the class) and preview the dependencies that turn up. If you reference classes/selectors/etc. by strings, then... it may not be so effective. Unfortunately, you often also have to test manually to verify that removing a class does not break anything. Remember that resources (like xibs) may reference and load Objective-C classes as well.
This is a tricky question because of how dynamic Objective-C is: you can never guarantee that a class is not going to be used.
Consider generating a class name and a selector at run time, then looking up that class, instantiating it, and calling a method on the newly created object using that newly created selector. Nowhere in your code do you explicitly name and instantiate that object, yet you are able to use it anyway. You could get that class name and selector name from anywhere outside of your code, even from data sent by a server somewhere. How would you ever know which class is not going to be used? Because of this, there are no tools that can do what you are requesting.
Searching the project for the class name might be an option, though it may not be the best solution. In particular, it might be time-consuming when you have many classes.

Is it good practice to call module functions directly in VB.NET?

I have a Util module in my VB.NET program that has project-wide methods such as logging and property parsing. The general practice where I work seems to be to call these methods directly without prefixing them with Util. When I was new to VB, it took me a while to figure out where these methods/functions were coming from. As I use my own Util methods now, I can't help thinking that it's a lot clearer and more understandable to add Util. before each method call (you know immediately that it's user-defined but not within the current class, and where to find it), and is hardly even longer. What's the general practice when calling procedures/functions of VB modules? Should we prefix them with the module name or not?
Intellisense (and "Goto Definition") should make it trivial to find where things are located, but I always preface the calls with a better namespace, just for clarity of reading. Then it's clear that it's a custom function, and not something built in or local to the class you're working with.
Maybe there's a subtle difference I'm missing, but I tend to use shared classes instead of modules for any code that's common and self-contained - it just seems easier to keep track of for me, and it would also enforce your rule of prefacing it, since you can't just call it from everywhere without giving a namespace to call it from.
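A small sketch of that difference (the type and method names are just placeholders): members of a Module can be called unqualified from anywhere in the project, while Shared members of a class always have to be qualified with the class name.
' With a Module, both call styles compile:
Public Module UtilModule
    Public Sub WriteLog(ByVal text As String)
        ' logging code ...
    End Sub
End Module
'   WriteLog("started")             ' allowed -- gives no hint of where it lives
'   UtilModule.WriteLog("started")  ' also allowed

' With a shared class, only the qualified call compiles:
Public Class UtilClass
    Public Shared Sub WriteLog(ByVal text As String)
        ' logging code ...
    End Sub
End Class
'   UtilClass.WriteLog("started")   ' the prefix is now enforced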
I usually put the complete namespace for a shared function, for readability.
Call MyNameSpace.Utils.MySharedFunction()
Util is such a generic name.
An example from the .NET Framework: you have System.Web.HttpUtility.UrlEncode(...). Usually you refer to this as HttpUtility.UrlEncode, since you have an Imports statement at the top.
The name of the class that holds the static utility methods should be readable and self-explanatory. That is good practice. If you have good class names, they might just as well reside in a Utils namespace, but the class name itself should not be Utils.
Put all your logging in a Logger class, all your string handling in a StringUtils class, and so on. Try to keep the class names as specific as possible; I'd rather have more classes with fewer functions than the other way around.
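A brief sketch of that layout; the classes and members here are illustrative rather than taken from the original code:
Public Class Logger
    Public Shared Sub WriteLog(ByVal text As String)
        ' Write the text to the configured log target ...
    End Sub
End Class

Public Class StringUtils
    Public Shared Function Truncate(ByVal value As String, ByVal maxLength As Integer) As String
        If value Is Nothing OrElse value.Length <= maxLength Then Return value
        Return value.Substring(0, maxLength)
    End Function
End Class

' Call sites stay short but self-describing:
'   Logger.WriteLog("started")
'   Dim shortName = StringUtils.Truncate(customerName, 20)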