Elixir: Access test helpers from a dependency

In the Java world, artifacts for a specific library often come in two flavors: one with and one without test helpers. Is there an equivalent in the Elixir world?
Specifically, I would like to be able to expose mocks or data generators in an application A. Application B now depends on A and gains the ability to use the exposed helpers from A in its own tests. Now, I do not want those helpers to appear anywhere in production, so they should only be included when specifically asked for (e.g. by MIX_ENV=test).
EDIT: Essentially, the question comes down to: "How to make tests from A available for the tests of B?"

Some time ago, I found a working solution: I extend elixirc_paths specifically for the :test environment and put shared helpers into another directory within the application (e.g. test/support). In mix.exs:
def project do
  [
    # ...
    elixirc_paths: elixirc_paths(Mix.env())
  ]
end

defp elixirc_paths(:test), do: ["lib", "test/support"]
defp elixirc_paths(_), do: ["lib"]
Dependent applications can then use the modules in test/support whenever A is compiled in the test environment.
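For illustration, here is a minimal sketch of the consuming side, assuming A exposes a hypothetical A.TestHelpers module under test/support (all names and versions below are made up):

# In B's mix.exs: A is wanted only for B's tests (only: :test), and the
# env: :test option asks Mix to load A's project in the :test environment,
# so that A's elixirc_paths(:test) clause above kicks in. Verify the env
# option behavior against your Mix version.
defp deps do
  [
    {:a, "~> 1.0", only: :test, env: :test}
  ]
end

# In one of B's tests (A.TestHelpers.build_user/0 is a hypothetical helper):
defmodule B.UserTest do
  use ExUnit.Case

  test "uses a data generator exposed by A" do
    user = A.TestHelpers.build_user()
    assert user.name != nil
  end
end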

Related

CMake - how to properly handle dependencies between executables?

I'm trying to set up a project that would consist of two executables: the actual application app and a test application test (i.e. an executable that runs unit tests).
Obviously, test depends on functions/classes defined in app, meaning that the correct build order has to be ensured. What is more, app has a few external dependencies of its own, e.g. Boost, which makes test transitively dependent on them as well.
What is the most idiomatic way of resolving these dependencies?
Two approaches I tried are:
1. Make an intermediate app_lib library target, consisting of all source files except main.cpp, then link both executables against it (target_link_libraries(test PRIVATE app_lib)).
2. Set the ENABLE_EXPORTS property on the app target, allowing test to link against it directly (target_link_libraries(test PRIVATE app)).
While both of these approaches work, they both seem quite hacky. The latter feels a tad better but if I understand it correctly, it was originally meant to enable plugin development, hence the "hacky" feeling.
To reiterate - what would be the correct way of setting up such a project? Are these two the only possible solutions, or is there another, better one?
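For what it's worth, a minimal sketch of the first approach could look like this (file and target names are placeholders):

cmake_minimum_required(VERSION 3.15)
project(app_project CXX)

find_package(Boost REQUIRED)

# Intermediate library: every source file except the entry point
add_library(app_lib STATIC src/foo.cpp src/bar.cpp)
target_include_directories(app_lib PUBLIC include)
# PUBLIC linkage: anything linking app_lib (app, the tests) inherits Boost
target_link_libraries(app_lib PUBLIC Boost::boost)

# The application itself is just main.cpp linked against the library
add_executable(app src/main.cpp)
target_link_libraries(app PRIVATE app_lib)

# The test runner reuses the library; transitive deps come along for free.
# (Named unit_tests here because a target literally named "test" can clash
# with CTest's reserved target name.)
add_executable(unit_tests test/test_main.cpp)
target_link_libraries(unit_tests PRIVATE app_lib)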

Conan.io use in embedded software development

Please allow me two questions about the use of Conan.io in our environment:
We are developing automotive embedded software. Usually, this includes the integration of COTS libraries, mostly for communication and operating systems such as AUTOSAR. These are provided in source code. Typical microcontrollers are Renesas RH850, RL78, or similar devices from NXP, Cypress, Infineon, and so on. We use gnumake (MinGW), Jenkins for CI, and have our own Eclipse CDT distribution as a standardized IDE.
My first question:
Those 3rd-party components are usually full of conditional compilation to allow proper compile-time configuration. With this approach, the code and thus the resulting binaries are optimized, both in size and in run-time behavior.
Besides those components, we of course have internal reusable components for different purposes. The compile-time configuration here is not as heavy as in the above example, but it is still present.
In one sentence: we have a lot of compile-time configuration - what could be a good approach to set up a JFrog / Conan based environment? Stay with the sources in every project?
XRef with Conan:
Is there a way to maintain cross-reference information coming from Conan? I am looking for something like "Project xxx is using Library lll Version vvv". That way, we would be able to automatically identify other "users" of a library in case a problem is detected.
Thanks a lot,
Stefan
Conan recipes are based on Python and thus are very flexible, able to implement any conditional logic you might need.
As an example, the libxslt recipe in ConanCenter contains something like:
def build(self):
    self._patch_sources()
    if self._is_msvc:
        self._build_windows()
    else:
        self._build_with_configure()
And following this example, the autotools build contains code like:
def _build_with_configure(self):
    env_build = AutoToolsBuildEnvironment(self, win_bash=tools.os_info.is_windows)
    full_install_subfolder = tools.unix_path(self.package_folder)
    # fix rpath
    if self.settings.os == "Macos":
        tools.replace_in_file(os.path.join(self._full_source_subfolder, "configure"),
                              r"-install_name \$rpath/", "-install_name ")
    configure_args = ['--with-python=no', '--prefix=%s' % full_install_subfolder]
    if self.options.shared:
        configure_args.extend(['--enable-shared', '--disable-static'])
    else:
        configure_args.extend(['--enable-static', '--disable-shared'])
So Conan is able to implement any compile-time configuration. That doesn't mean you always need to build from sources. The parametrization of the build is basically:
Settings: for "project wide" configuration, like the OS or the architecture. Settings typically have the same value for all dependencies.
Options: for package-specific configuration, like a library being static or shared. Every package can have its own value, different from other packages.
You can implement the variability model for a package with settings and options and prebuild the most used binaries. When a variant with no precompiled binary is requested, Conan will raise an error saying there is no precompiled binary for that configuration. Users can specify --build=missing to build it from sources.
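For instance, a minimal (made-up) recipe combining both mechanisms might look like this, using the same Conan v1 API as the snippets above:

from conans import ConanFile

class MyComponentConan(ConanFile):
    name = "mycomponent"
    version = "1.0"
    # Settings: "project wide" values shared across the dependency graph
    settings = "os", "arch", "compiler", "build_type"
    # Options: per-package values, e.g. compile-time configuration knobs
    options = {"shared": [True, False], "max_buffers": [8, 16, 32]}
    default_options = {"shared": False, "max_buffers": 16}

    def build(self):
        # Turn the option into a compile-time define for the build system
        # (the actual build invocation is omitted in this sketch)
        defines = ["MAX_BUFFERS=%s" % self.options.max_buffers]
        self.output.info("Building with defines: %s" % defines)

A consumer requesting a variant that was never prebuilt (say -o mycomponent:max_buffers=32) would then get the "missing binary" error and could rebuild locally with conan install . --build=missing.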

OpenTest custom test actors

I'm really impressed with the OpenTest project. I found it highly intriguing how many ideas this project shares with some projects I created and worked on, like your epic architecture with actors pulling tasks, and many others :)
Have you thought about including other automation technologies to base Actors on?
I could see two main groups:
1. Established test automation tooling like TestCafe (support for non-Selenium GUI testing could leverage the whole solution a lot).
2. Custom tooling needed for specific tasks. It would be great to have an actor with some domain-specific capabilities. As far as I can see, this could currently be achieved by introducing another layer of execution workers called by an actor over a REST API. What I mean is the possibility of using/including them as new "actor types" with their own related custom keywords.
Thank you for your nice words. We spent a lot of time thinking through the architecture and implementation of OpenTest and it's very rewarding to see that people understand and appreciate the design.
Implementing new keywords (test actions) can be done without creating custom test actors, by creating a new Java class that inherits from the TestAction base class and overrides its run method. For a simple example, you can take a look at the implementation of the Delay test action. You can then package the new test action in a JAR and drop it (along with any dependencies) in the user-jars subdirectory in your test actor's working directory. The test actor will dynamically load all the JARs it finds in there and will find the new test action class (using reflection) so you can make use of it in your tests. Some useful info and things to look out for:
Your Java project is going to have to define a dependency on the opentest-base project (which is where the TestAction base class is implemented).
When you copy the JAR to where your test actor is, make sure to copy any dependency JARs along with it. Please note that a lot of the dependencies you might need are already included with the core test actor binaries (you can have a look at the pom.xml to see what they are).
If you happen to have any dependencies that conflict with the other JARs included with the core test actor binaries, you can apply a technique called shading to "hide" the conflicting classes under a different package name. Most of the time you're not going to need this, but if you do and you get stuck, let me know and I'll give you some pointers.
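To make the recipe above concrete, here is a rough sketch of such a class. Treat the package path and the argument-reading helper as assumptions based on the description above; the sample project linked below shows the real API:

// Hypothetical custom OpenTest keyword. The import path and the
// readStringArgument helper are assumptions; check the opentest-base
// sources for the actual API.
import org.getopentest.base.TestAction;

public class SayHello extends TestAction {
    @Override
    public void run() {
        // Read an argument supplied in the test definition file,
        // falling back to a default value when it is absent
        String name = this.readStringArgument("name", "world");
        System.out.println(String.format("Hello, %s!", name));
    }
}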
Here's a sample project that demonstrates how to build an OpenTest extension that creates a couple of custom keywords: https://github.com/adrianth/opentest-extension-sample
And here's an extensive video tutorial about creating custom OpenTest keywords: https://getopentest.org/tutorials/custom-keywords.html

How to set up rspec-rails to generate feature specs for capybara

I'm using rspec-rails 2.12.0 and capybara 2.0.1 for testing. In capybara 2.x you need to put your specs in spec/features instead of spec/requests. Is there a way so that, when I generate a scaffold via 'rails g scaffold Model', rspec would generate the feature specs for me in the correct directory?
"controller" and "request" specs are tied to the inner app mechanism and thus can be auto generated by scaffold generator mimicking the controller structure.
"Feature" specs are completely different conceptually from these specs as they describe end user interactions with the application, they cannot be generated in advance as there is no way to effectively guess what feature you want to test. Feature specs also spread across multiple controllers, you don't want them to be mapped to your controller scaffold.
The only thing that could be done is generate an almost empty feature/xyz file for you to fill in, which is pretty useless as chances are you will have to delete/rename it.
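Such a skeleton would amount to little more than this (path and contents are placeholders you would almost certainly rewrite; it assumes capybara/rspec is required in your spec_helper):

# spec/features/widgets_spec.rb -- a hand-written skeleton, not generated
require 'spec_helper'

feature 'Managing widgets' do
  scenario 'user creates a widget' do
    visit new_widget_path
    fill_in 'Name', with: 'My widget'
    click_button 'Create Widget'
    expect(page).to have_content 'Widget was successfully created.'
  end
end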

Apache Ivy Configurations

I'm slowly beginning to understand the importance of module configurations within the Ivy universe. However, it is still difficult for me to clearly see how the same chunk of code could have different configurations with different dependency requirements (the one exception is the case of test configs that require JUnit on top of the normal dependencies -- I actually understand that 100%!).
For instance, take the following code:
package org.myorg.myprogram.core;

// Import an object from a dependency
import org.someElse.theirJAR.Widget;

public class MyCode
{
    public MyCode()
    {
        if (Widget.SOME_STATIC == 3)
            System.out.println("Fizz");
        else
            System.out.println("Buzz");
    }
}
Now, aside from the fact that this is terrible code, I just don't see how my program (which, let's pretend, is JARred up into MyProgram.jar) could be set up to have multiple "configurations", some of which may require theirJAR and its Widget class, and others that don't. To me, if we fail to provide MyCode with a Widget, it will die at runtime, always.
Again, I understand the necessity for test configurations; just not anything else (I have also asked questions about compile- vs run-time dependencies, and I guess I also see the necessity for those as well). But beyond test configs, compile-time configs, and runtime configs, what other module configurations could you possibly need? How would MyCode need a Widget in some cases, and not in other cases, yet still run perfectly fine without a Widget?
I greatly appreciate any help wrapping my brain around this!
Hibernate is a good example: it supports multiple cache implementations to act as its level-2 cache. You don't want to depend transitively on all the possible caches, only on the one you use.
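To make that concrete, here is a hypothetical ivy.xml sketch in that spirit: one configuration per optional cache, so consumers resolve only the cache they actually use (all names and revisions are made up):

<ivy-module version="2.0">
    <info organisation="org.myorg" module="myprogram"/>
    <configurations>
        <conf name="compile"/>
        <conf name="runtime" extends="compile"/>
        <conf name="test" extends="runtime"/>
        <!-- Optional level-2 cache flavors: consumers pick one, not all -->
        <conf name="with-ehcache" extends="runtime"/>
        <conf name="with-infinispan" extends="runtime"/>
    </configurations>
    <dependencies>
        <dependency org="org.someElse" name="theirJAR" rev="1.0" conf="compile->default"/>
        <dependency org="net.sf.ehcache" name="ehcache" rev="2.6.0" conf="with-ehcache->default"/>
        <dependency org="org.infinispan" name="infinispan-core" rev="5.1.2" conf="with-infinispan->default"/>
    </dependencies>
</ivy-module>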
In general, we use the typical compile, test, runtime set of configurations.
To add to SteveD's answer, remember that dependencies can be more than just .jar files. Some dependencies come with source and javadoc files, release notes, license files, etc. Multiple configurations of the dependency might let you select the subset of files you wish to resolve.
You might also want to use configurations to control the contents of different distributions. For example, you might want to release the jar on its own (the "master" configuration in Maven parlance) and additionally build a tar package containing all runtime dependencies, with (or without) source code.
Another use for configurations is when you target multiple platforms. I often release Groovy scripts packaged to run as standalone jars or as Tomcat web applications.