I'm really impressed with the OpenTest project. I found it intriguing how many ideas this project shares with some projects I created and worked on, like your epic architecture with actors pulling tasks, and many others :)
Did you think about including other automation technologies to base Actors on?
I could see two main groups:
1. Established test automation tooling like TestCafe (support for non-Selenium GUI testing could benefit the whole solution a lot).
2. Custom tooling needed for specific tasks. It would be great to have an actor with some domain-specific capabilities. As far as I can see, this could currently be achieved by introducing another layer of execution workers called by an actor over a REST API. What I mean is the possibility of using/including them as new 'actor types' with related custom keywords.
Thank you for your nice words. We spent a lot of time thinking through the architecture and implementation of OpenTest and it's very rewarding to see that people understand and appreciate the design.
Implementing new keywords (test actions) can be done without creating custom test actors, by creating a new Java class that inherits from the TestAction base class and overriding its run method. For a simple example, you can take a look at the implementation of the Delay test action. You can then package the new test action in a JAR and drop it (along with any dependencies) in the user-jars subdirectory in your test actor's working directory. The test actor will dynamically load all the JARs it finds in there and will find the new test action class (using reflection) so you can make use of it in your tests. A minimal sketch follows the list below. Some useful info and things to look out for:
Your Java project will need to define a dependency on the opentest-base project (which is where the TestAction base class is implemented).
When you copy the JAR to where your test actor is, make sure to copy any dependency JARs along with it. Please note that a lot of the dependencies you might need are already included with the core test actor binaries (you can have a look at the pom.xml to see what they are).
If you happen to have any dependencies that conflict with the other JARs that are included with the core test actor binaries, you can apply a technique called shading to "hide" the conflicting classes under a different package name. Most of the time you're not going to need this, but if you do and you get stuck, let me know and I'll give you some pointers.
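Here's a rough sketch of what such a test action could look like. To be clear, this is just a minimal illustration: the base-class package name and the argument-reading/logging helpers shown here are assumptions, so check the sample project below for the exact API exposed by opentest-base.

import dtest.base.TestAction; // assumed package name; verify against opentest-base

public class SayHello extends TestAction {

    @Override
    public void run() throws Exception {
        super.run();

        // Assumed helper for reading a keyword argument with a default value.
        String name = this.readStringArgument("name", "world");

        // Assumed logger provided by the base class.
        this.log.info("Hello, " + name + "!");
    }
}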
Here's a sample project that demonstrates how to build an OpenTest extension that creates a couple of custom keywords: https://github.com/adrianth/opentest-extension-sample
And here's an extensive video tutorial about creating custom OpenTest keywords: https://getopentest.org/tutorials/custom-keywords.html
Currently, our project has a layered architecture implemented in the following way, where the Controller, Service and Repository for each feature are placed in the same package, for instance:
feature1:
    Feature1Controller
    Feature1Service
    Feature1Repository
feature2:
    Feature2Controller
    Feature2Service
    Feature2Repository
I've found the following example of an ArchUnit test where such classes are placed in dedicated packages: https://github.com/TNG/ArchUnit-Examples/blob/master/example-junit5/src/test/java/com/tngtech/archunit/exampletest/junit5/LayeredArchitectureTest.java
Please suggest whether there is a possibility to test a layered architecture when all layers are in a single package.
If the naming conventions are followed properly across your project, how about writing custom rules instead of using layeredArchitecture()?
For example:
classes().that().haveSimpleNameEndingWith("Service")
    .should().onlyBeAccessed().byClassesThat().haveSimpleNameEndingWith("Controller");

noClasses().that().haveSimpleNameEndingWith("Service")
    .should().accessClassesThat().haveSimpleNameEndingWith("Controller");
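For completeness, here's a minimal sketch of wiring such rules into a test class with ArchUnit's JUnit 5 support (the root package com.example is a placeholder):

import com.tngtech.archunit.junit.AnalyzeClasses;
import com.tngtech.archunit.junit.ArchTest;
import com.tngtech.archunit.lang.ArchRule;

import static com.tngtech.archunit.lang.syntax.ArchRuleDefinition.classes;
import static com.tngtech.archunit.lang.syntax.ArchRuleDefinition.noClasses;

@AnalyzeClasses(packages = "com.example") // placeholder root package
class NamingConventionTest {

    // Services may only be accessed by controllers.
    @ArchTest
    static final ArchRule servicesOnlyAccessedByControllers =
            classes().that().haveSimpleNameEndingWith("Service")
                    .should().onlyBeAccessed().byClassesThat().haveSimpleNameEndingWith("Controller");

    // Services must not access controllers.
    @ArchTest
    static final ArchRule servicesDoNotAccessControllers =
            noClasses().that().haveSimpleNameEndingWith("Service")
                    .should().accessClassesThat().haveSimpleNameEndingWith("Controller");
}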
I know this question is rather old. But for the record, this has been possible for a while using predicates for the layers, e.g.
ArchRule rule = layeredArchitecture().consideringAllDependencies()
        .layer("Controllers").definedBy(HasName.Predicates.nameEndingWith("Controller"))
        .layer("Services").definedBy(HasName.Predicates.nameEndingWith("Service"))
        .layer("Repository").definedBy(HasName.Predicates.nameEndingWith("Repository"))
        .whereLayer("Controllers").mayNotBeAccessedByAnyLayer()
        .whereLayer("Services").mayOnlyBeAccessedByLayers("Controllers")
        .whereLayer("Repository").mayOnlyBeAccessedByLayers("Services");
However, I'm not sure how well this works in practice, because usually you don't just have classes following this naming pattern and nothing else. A service might also have some POJO as a method parameter type (e.g. MyInput), and that should maybe, for example, not be used by repositories either. Also, using forward dependency rules (mayOnlyAccessLayers(..)), this might then cause unwanted violations.
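For reference, a minimal sketch of evaluating such a rule outside of JUnit (the package name is a placeholder):

import com.tngtech.archunit.core.domain.JavaClasses;
import com.tngtech.archunit.core.importer.ClassFileImporter;

// Import the production classes once; "com.example" is a placeholder root package.
JavaClasses classes = new ClassFileImporter().importPackages("com.example");

// 'rule' is the layeredArchitecture() definition shown above.
rule.check(classes);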
I'm trying to understand what is safe vs. not safe with respect to the Eclipse plugin lifecycle.
Background
Something in the Eclipse/RCP/OSGi framework allows for circular dependencies between bundles by allowing bundles to provide extension points. If bundle X provides an extension point, bundle Y may both depend on bundle X and provide an extension that implements an interface or extends a class known to X, making that extension available to bundle X.
Then there's the promise of activators: as far as I understand, it is promised that your activator's start(BundleContext) method will be called before any class in your bundle is made available to any other bundle, and that your dependencies' start(...) methods will have been called before yours.
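(For context, an activator is just the class named by the bundle's Bundle-Activator manifest header; a minimal skeleton, with a placeholder class name, looks like this:)

import org.osgi.framework.BundleActivator;
import org.osgi.framework.BundleContext;

// Registered in MANIFEST.MF via the Bundle-Activator header.
public class MyActivator implements BundleActivator {

    @Override
    public void start(BundleContext context) throws Exception {
        // Per the promise described above, this runs before any class in this
        // bundle is served to another bundle (given lazy activation).
    }

    @Override
    public void stop(BundleContext context) throws Exception {
        // Release anything acquired in start().
    }
}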
Limitations/Possible Contradictions
Now, I'm ready to describe my conundrum: I would like to retrieve all the providers of a specific extension point as soon as possible; the easy way to do this would appear to be in the activator of my bundle.
However, if what I've described about the promises that the Eclipse/RCP/OSGI framework makes is true, then I'm pretty sure it shouldn't be possible for me to do that during activation:
Either:
(1) I'll have a reference to classes provided by one of my dependencies before their start(...) method has been called, or
(2) my dependency's start(...) method will have to be called before mine, or
(3) no violations will occur, but I'll retrieve zero extensions, because the plugins that depend on me couldn't be started before me, so their implementations of my extension point are not yet available.
Why I Need Extensions at Startup
My challenge is that I need to load some data ASAP after the startup of my plugin, but I need to ensure that my extensions are loaded first, because the extensions in question are extensions to the data format of the data that I need to load; if I load the data first, it fails or becomes corrupted.
I'm also wondering whether my picture of the Eclipse plugin lifecycle is correct, because despite searching for discussions of the plugin lifecycle, I haven't come across any warnings about its limitations. I'm fairly certain it must be possible to do things wrong and create serious problems, and I'd like to understand under what circumstances things would go wrong so I can avoid creating problems.
The extension point registry accessed by the IExtensionRegistry interface will tell you about extension points without starting any of the plugins involved.
IExtensionRegistry extReg = Platform.getExtensionRegistry();
In the registry for an extension point you will have a number of IConfigurationElement entries describing the individual extensions declared by plugins. It is only when you call the createExecutableExtension method of this interface that the contributing plugin is started.
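For illustration, querying the registry without activating contributors might look like this (the extension point ID and the attribute names are placeholders):

import org.eclipse.core.runtime.CoreException;
import org.eclipse.core.runtime.IConfigurationElement;
import org.eclipse.core.runtime.IExtensionRegistry;
import org.eclipse.core.runtime.Platform;

public void loadDataFormatExtensions() {
    IExtensionRegistry extReg = Platform.getExtensionRegistry();

    // "com.example.myplugin.dataFormats" is a placeholder extension point ID.
    IConfigurationElement[] elements =
            extReg.getConfigurationElementsFor("com.example.myplugin.dataFormats");

    for (IConfigurationElement element : elements) {
        // Reading declared attributes is cheap and does NOT start the contributing plugin.
        String formatName = element.getAttribute("name");

        try {
            // Only this call activates the contributing plugin.
            Object handler = element.createExecutableExtension("class");
            // ... register 'handler' for 'formatName' with the data-format machinery ...
        } catch (CoreException e) {
            // Log and skip the broken extension.
        }
    }
}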
Note: A plugin's activator start method is not normally run until Eclipse needs to run some other code in the plugin - it does not run at Eclipse startup unless you force it to.
Here is what I'm looking for:
I'd like to separate pieces of functionality into modules or components of some sort, to limit visibility and prevent each class from having access to every other class, which over time results in spaghetti code.
In Java & Eclipse, for example, I would use packages and put each package into a separate project with a clearly defined dependency structure.
Things I have considered:
Using separate folders for source files and using Groups in Xcode:
Pros: simple to do, almost no Xcode configuration needed
Cons: no compile-time separation of functionality, i.e. access to everything is only one #import statement away
Using Frameworks:
Pros: Framework code cannot access classes outside of the framework. This enforces encapsulation and keeps things separate.
Cons: Code management is cumbersome if you work on multiple Frameworks at the same time. Each Framework is a separate Xcode project with a separate window
Using Plugins:
Pros: Similar to Frameworks, Plugin code can't access code of other plugins. Clean separation at compile-time. Plugin source can be part of the same Xcode project.
Cons: Not sure. This may be the way to go...
Based on your experience, what would you choose to keep things separate while being able to edit all sources in the same project?
Edit:
I'm targeting Mac OS X
I'm really looking for a solution to enforce separation at compile time
By plugins I mean Cocoa bundles (http://developer.apple.com/library/mac/#documentation/Cocoa/Conceptual/LoadingCode/Concepts/Plugins.html)
I have worked on some good-sized Mac projects (>2M SLOC in my last one, spread across 90 xcodeproj files) and here are my thoughts on managing them:
Avoid dynamic loads like Frameworks, Bundles, or dylibs unless you are actually sharing the binaries between groups. These tend to create more complexity than they solve, in my experience. Plus they don't port easily to iOS, which means maintaining multiple approaches. Worse, having lots of dynamic libraries increases the likelihood of including the same symbols twice, leading to all kinds of crazy bugs. This happens when you include some "helper" class directly in more than one library. If it includes a global variable, the bugs are awesome, as different threads use different instances of the global.
Static libraries are the best choice in many if not most cases. They resolve everything at build time, allowing code stripping in your C/C++ and other optimizations not possible in dynamic libraries. They get rid of "hey, it loads on my system but not the customer's" (when you use the wrong value for the framework path). No need to deal with slides when computing line numbers from crash stacks. They catch duplicate symbols at build time, saving many hours of debugging pain.
Separate major components into separate xcodeproj. Really think about what "major" means here, though. My 90-project product was way too many. Just doing dependency checking can become a very non-trivial exercise. (Xcode 4 can improve this, but I left the project before we ever were able to get Xcode 4 to reliably build it, so I don't know how well it did in the end.)
Separate public from private headers. You can do this with static libs just as well as you can with Frameworks. Put the public headers in a different directory. I recommend each component have its own public include directory for this purpose.
Do not copy headers. Include them directly from the public include directory for the component. Copying headers into a shared tree seems like a great idea until you do it. Then you find that you're editing the copy rather than the real one, or you're editing the real one, but not actually copying it. In any case, it makes development a headache.
Use xcconfig files, not the build pane. The build pane will drive you crazy in these kinds of big projects. Mine tend to have lines like this:
common="../../common"
foo="$(common)/foo"
HEADER_SEARCH_PATHS = $(inherited) $(foo)/include
Within your public header path, include your own bundle name. In the example above, the path to the main header would be common/foo/include/foo/foo.h. The extra level seems a pain, but it's a real win when you import. You then always import like this: #import <foo/foo.h>. Keeps everything very clean. Don't use double-quotes to import public headers. Only use double-quotes to import private headers in your own component.
I haven't decided the best way for Xcode 4, but in Xcode 3, you should always link your own static libraries by adding the project as a subproject and dragging the ".a" target into your link step. Doing it this way ensures that you'll link the one built for the current platform and configuration. My really huge projects haven't been able to convert to Xcode 4 yet, so I don't have a strong opinion yet on the best way there.
Avoid searching for custom libraries (the -L and -l flags at the link step). If you build the library as part of the project, then use the advice above. If you pre-build it, then add the full path in OTHER_LDFLAGS. Searching for libraries involves some surprising algorithms and makes the whole thing hard to understand. Never drop a pre-built library into your link step. If you drop a pre-built libssl.a into your link step, it actually adds a -L parameter for the path and then adds -lssl. Under default search rules, even though you show libssl.a in your build pane, you'll actually link to the system libssl.so. Deleting the library will remove the -l but not the -L, so you can wind up with bizarre search paths. (I hate the build pane.) Do it this way instead in xcconfig:
OTHER_LDFLAGS = "$(openssl)/lib/libssl.a"
If you have stable code that is shared between several projects, and while developing those projects you're never going to mess with this code (and don't want the source code available), then a Framework can be a reasonable approach. If you need plugins to avoid loading large amounts of unnecessary code (and you really won't load that code in most cases), then bundles may be reasonable. But in the majority of cases for application developers, one large executable linked together from static libraries is the best approach IMO. Shared libraries and frameworks only make sense if they're actually shared at runtime.
My suggestion would be:
Use Frameworks. They're the most easily reusable build artifact of the options you list, and the way you describe the structure of what you are trying to achieve sounds very much like creating a set of Frameworks.
Use a separate project for each Framework. You'll never be able to get the compiler to enforce the kind of access restrictions you want if everything is dumped into a single project. And if you can't get the compiler to enforce it, then good luck getting your developers to do so.
Upgrade to Xcode 4 (if you haven't already). This will allow you to work on multiple projects in a single window (pretty much like Eclipse does it), without intermingling the projects. This pretty much eliminates the cons you listed under the Frameworks option.
And if you are targeting iOS, I very strongly recommend that you build real frameworks as opposed to the fake ones that you get by using the bundle-hack method, if you aren't building real frameworks already.
I've managed to keep my sanity working on my project, which has grown over the past months to be fairly large (in number of classes), by forcing myself to practice Model-View-Controller (MVC) diligently, plus a healthy amount of comments and the indispensable source control (Subversion, then Git).
In general, I observe the following:
"Model" classes serialize data (it doesn't matter from where, and including the app's 'state') in Objective-C 1.0 classes subclassed from NSObject, or custom "model" classes that inherit from NSObject. I chose Objective-C 1.0 more for compatibility, as it's the lowest common denominator, and I didn't want to be stuck rewriting "model" classes in the future because of a dependency on Objective-C 2.0 features.
View classes are in XIBs, with the XIB version set to support the oldest toolchain I need to support (so I can use the previous version, Xcode 3, in addition to Xcode 4). I tend to start with the Apple-provided Cocoa Touch APIs and frameworks to benefit from any optimizations/enhancements Apple may introduce as these APIs evolve.
Controller classes contain the usual code that manages the display/animation of views (programmatically as well as from XIBs) and the serialization of data from "model" classes.
If I find myself reusing a class a few times, I explore refactoring and optimizing the code (measured using Instruments) into what I call "utility" classes, or into protocols.
Hope this helps, and good luck.
This depends largely on your situation and your own specific preferences.
If you're coding "proper" object-oriented classes then you will have a class structure with methods and variables hidden from other classes where necessary. Unless your project is huge and built of hundreds of distinguishable modules, it's probably sufficient to just group classes and resources into folders/groups in Xcode and work with them that way.
If you've really got a huge project with easily distinguishable modules then by all means create a framework. I would suggest, though, that this would only really be necessary where you are using the same code in different applications, in which case creating a framework/extra project would be a good way to effectively share code between projects. In practically all other cases it would probably be overkill and much more complicated than needed.
Your last idea seems to be a mix of the first two. Plugins (as I understand you are describing them - tell me if I'm wrong) are just separated classes in the same project? This is probably the best way, and should be done (to an extent) in any case. If you are creating functionality to draw graphs (for example), you should section off a new folder/group and keep your classes and functionality within that, only including those classes in your main application where necessary.
Let me put it this way. There's no reason to go over the top... but, even if just for your own sanity - or the maintainability of your code - you should always endeavour to group everything up into descriptive groups/folders.
Suppose I am working on exposing some of my server-side classes to a GWT application, but certain parts could be done much better using GWT-specific components (like JSNI, for instance).
What are some techniques for doing so without being too hacky?
For instance, I am aware of using a subpackage and the <super-source/> tag, but this requires the package names to be different, which causes Eclipse to complain. The general solution in the community is to tell Eclipse to use that as a source folder, but then Eclipse complains about there being two classes with the same name.
Ideally, there would just be a way to keep everything in a single source tree, and actually have different classes which apply the alternate implementations. This would feel like a more OO approach.
I would like to add a suffix to a class, like _gwt, which accomplishes this automatically, and I know I could write a script to do this kind of transformation, but that is a kludge for sure.
I've been considering using Google's GIN/GUICE libraries for my projects in general, and I think there might be some kind of a solution there, but I am not sure as I have not thoroughly investigated it.
What are some solutions you have tried in the past on GWT projects?
The easiest way to have split implementations is to use super-source code, but only enough to instantiate a uniquely-named instance or dispatch to a different method. Ideally, the super-source implementation is just a few lines long, and not so bad that you can't roll it by hand.
To work around the Eclipse / javac double-mapping and package name issues, the GWT source uses two top-level roots for user code: user/src and user/super. For example, the AutoBeans package has a split implementation of JSON quoting and evaluation, one for the JVM and one for the browser.
There's really no non-kludgy way to implement super-source, as this is a feature way outside what you can specify in the language. There's nothing that lets you say "use this implementation in this environment" without the use of some external tool.
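To make the mechanics concrete, here is a minimal sketch of a split implementation; all names and paths are hypothetical placeholders, and the AutoBeans code mentioned above is the authoritative example. The module XML declares <super-source path="super"/>, and the browser variant lives under that root with the same package and class name:

// src/com/example/shared/JsonQuoter.java -- the version compiled for the JVM
package com.example.shared;

public class JsonQuoter {
    public static String quote(String raw) {
        // Plain-Java implementation; free to use any JRE API.
        return '"' + raw.replace("\"", "\\\"") + '"';
    }
}

// super/com/example/shared/JsonQuoter.java -- substituted by the GWT compiler
// because of the <super-source path="super"/> entry in the module XML
package com.example.shared;

public class JsonQuoter {
    public static String quote(String raw) {
        return nativeQuote(raw); // browser implementation via JSNI
    }

    private static native String nativeQuote(String raw) /*-{
        return JSON.stringify(raw);
    }-*/;
}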
Ninject's module architecture seems useful, but I'm worried that it is going to get into a bit of a mess.
How do you organise your modules? Which assembly do you keep them in, and how do you decide which wirings go in which module?
Each subsystem gets a module. Of course the definition of what warrants categorisation as a 'subsystem' depends...
In some cases, responsibility for some bindings gets pushed up to a higher level as a lower-level subsystem/component is not in a position to make a final authoritative decision - in some cases this can be achieved by passing parameters into the Module.
Replying to my own post after a couple of years of using Ninject.
Here is how I organise my Ninject modules, using a book store as an example:
BookStoreSolution
    Domain.csproj
    Services.csproj
        CustomerServicesInjectionModule.cs
        PaymentProcessingInjectionModule.cs
    DataAccess.csproj
        CustomerDatabaseInjectionModule.cs
        BookDatabaseInjectionModule.cs
    CustomSecurityFramework.csproj
        CustomSecurityFrameworkInjectionModule.cs
    PublicWebsite.csproj
        PublicWebsiteInjectionModule.cs
    Intranet.csproj
        IntranetInjectionModule.cs
What this is saying is that each project in the system comes prepackaged with one or more Ninject modules that know how to set up the bindings for that project's classes.
Most of the time, an individual application is not going to want to make significant changes to the default injection modules provided by a project. For example, if I am creating a little WinForms app which needs to import the DataAccess project, normally I am also going to want all the project's Repository<> classes bound to their associated IRepository<> interfaces.
At the same time, there is nothing forcing an individual application to use a particular injection module. An application can create its own injection module and ignore the default modules provided by a project that it is importing. In this way the system still remains flexible and decoupled.