How can I make proguard minification respect method usages in the androidTest dependency tree? - proguard

I'm running androidTest UI tests against the proguarded release build of my project, so that we can catch ProGuard misconfigurations in an early phase of development. However, this causes issues when we use production methods in UI tests that are not used (1) in production code. Because ProGuard only traverses the implementation tree of dependencies and not androidTestImplementation, those methods will be dropped or altered. See illustration.
Is there any way I can instruct ProGuard to also traverse the androidTestImplementation dependency tree before minifying production code?
In this illustration, method m1() will be dropped from the release build APK since "App main" is not calling it. The UI tests will fail because the method is not found.
(1) Not used or used differently: for instance, if a method has a boolean input parameter and all invocations in production code pass true, ProGuard will hard-code that value to true and drop the parameter from the signature. If a UI test then calls the method with false, it will not find the method signature. Now you can argue that since all production invocations pass true, the method should be changed anyway, and you are right - but I think the punishment of failing the tests is too harsh. It should be a compiler or lint warning, not a UI test failure with NoSuchMethodError.
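(For illustration, a common workaround in the meantime is to pin test-only entry points with explicit -keep rules in a ProGuard rules file applied to the variant the UI tests run against. The class and member names below are hypothetical; keeping a member this way stops ProGuard from both stripping it and optimizing its parameters away:)

```
# Hypothetical example: keep a production method that only androidTest code
# calls, so ProGuard neither removes it nor rewrites its signature.
-keep class com.example.app.SomeManager {
    void m1(boolean);
}
```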

Gradle. Custom function in block plugins{}

Can I write in my custom plugin some function like kotlin("jvm")?
plugins {
java
kotlin("jvm") version "1.3.71"
}
I want to write a function myplugin("foo") in my custom plugin and then use it like this:
plugins {
java
kotlin("jvm") version "1.3.71"
custom.plugin
myplugin("foo")
}
How can I do it?
I think the plugins block is some kind of macro expression. It is parsed and precompiled using a very limited context. Probably the magic happens somewhere in kotlin-dsl. This is probably the only way to get static accessors and extension functions from plugins to work in Kotlin. I've never seen this process mentioned in Gradle's documentation, but let me explain my reasoning. Probably some smart guys from Gradle will correct me.
Let's take a look at some third-party plugin, like Liquibase. It allows you to write something like this in your build.gradle.kts:
liquibase {
activities {
register("name") {
// Configure the activity here
}
}
}
Think about it: in a statically compiled language like Kotlin, in order for this syntax to work, there must be an extension named liquibase on the Project type (as that is the type of this in every build.gradle.kts) available in the classpath of the Gradle VM that executes the build script.
Indeed, if you click on it, you'll see something like:
fun org.gradle.api.Project.`liquibase`(configure: org.liquibase.gradle.LiquibaseExtension.() -> Unit): Unit =
(this as org.gradle.api.plugins.ExtensionAware).extensions.configure("liquibase", configure)
But take a look at the file where it is defined. In my case it is ~/.gradle/caches/6.3/gradle-kotlin-dsl-accessors/cmljl3ridzazieb8fzn553oa8/cache/src/org/gradle/kotlin/dsl/Accessors39qcxru7gldpadn6lvh8lqs7b.kt. It is definitely an auto-generated file. A few levels up in the file tree — at ~/.gradle/caches/6.3/gradle-kotlin-dsl-accessors/ in my case — there are dozens of similar directories. I guess one for every plugin/version I've ever used with Gradle 6.3. Here is another one, for the Detekt plugin:
fun org.gradle.api.Project.`detekt`(configure: io.gitlab.arturbosch.detekt.extensions.DetektExtension.() -> Unit): Unit =
(this as org.gradle.api.plugins.ExtensionAware).extensions.configure("detekt", configure)
So, we have a bunch of .kt files defining all these extensions for the different plugins applied to the project. Those files are obviously pre-generated and precompiled, and their content is available in build.gradle.kts. Indeed, you can find classes directories beside those sources.
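In spirit, each generated accessor is just ordinary Kotlin: an extension function on Project delegating to a string-keyed extension container. Here is a self-contained sketch of that mechanism; all types below (Project, ExtensionContainer, LiquibaseExtension) are simplified stand-ins, not Gradle's real classes:

```kotlin
// A stand-in for a plugin's extension object.
class LiquibaseExtension { var changeLog: String = "" }

// A stand-in for Gradle's string-keyed extension container.
class ExtensionContainer {
    private val extensions = mutableMapOf<String, Any>()
    fun add(name: String, extension: Any) { extensions[name] = extension }
    @Suppress("UNCHECKED_CAST")
    fun <T> configure(name: String, action: T.() -> Unit) {
        (extensions.getValue(name) as T).action()
    }
}

class Project { val extensions = ExtensionContainer() }

// The "generated accessor": sugar over extensions.configure("liquibase") { ... }
fun Project.liquibase(configure: LiquibaseExtension.() -> Unit) =
    extensions.configure("liquibase", configure)

fun main() {
    val project = Project()
    val ext = LiquibaseExtension()
    project.extensions.add("liquibase", ext)  // done by the plugin on apply
    project.liquibase { changeLog = "db/changelog.xml" }  // DSL-style usage
    println("changeLog=" + ext.changeLog)
}
```

The accessor itself carries no logic; all the work is in generating it with the right names and types, which is exactly what the cached .kt files above contain.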
The sources are generated based on the content of the applied plugins. It is probably a tricky task that involves some magic, reflection and introspection. Sometimes this magic doesn't work (due to Groovy's chaotic nature) and then you need to use some crappy DSL from this package.
How are they generated? I see no other way, but to
Parse the build.gradle.kts with an embedded Kotlin compiler / lexer
Extract all the plugins sections
Compile them, probably against some mocks (remember that Project is not yet available: we're not executing the build.gradle.kts itself yet!)
Resolve the declared plugins from the Gradle Plugin repository (with some nuances coming from settings.gradle.kts)
Introspect the plugins' artifacts
Generate the sources
Compile the sources
Add the resulting classes to the script's classpath
And here is the gotcha: there is a very limited context (classpath, classes, methods — call it whatever) available when compiling the plugins block. Actually, no plugins are yet applied! Because, you know, you're parsing the block that applies plugins. Chickens, eggs, and their problems, huh…
So, and now we're getting closer to the answer to your question: to provide a custom DSL in the plugins block, you need to modify that classpath. It's not the classpath of your build.gradle.kts, it's the classpath of the VM that parses build.gradle.kts. Basically, it's Gradle's own classpath — all the classes bundled in a Gradle distribution.
So, probably the only way to provide really custom DSLs in plugins block is to create a custom Gradle distribution.
EDIT:
Indeed, I totally forgot to test buildSrc. I've created a file PluginExtensions.kt in it, with this content:
inline val org.gradle.plugin.use.PluginDependenciesSpec.`jawa`: org.gradle.plugin.use.PluginDependencySpec
get() = id("org.gradle.war") // Randomly picked
inline fun org.gradle.plugin.use.PluginDependenciesSpec.`jawa`(): org.gradle.plugin.use.PluginDependencySpec {
return id("org.gradle.cunit") // Randomly picked
}
And it seems to be working:
plugins {
jawa
jawa()
}
However, this only works when PluginExtensions.kt is in the default package. Whenever I put it into a sub-package, the extensions are not recognized, even with an import.
Magic!
The kotlin function is just a simple extension function wrapping the traditional id method, not hard to define:
fun PluginDependenciesSpec.kotlin(module: String): PluginDependencySpec =
id("org.jetbrains.kotlin.$module")
However, this extension function is part of the standard Gradle Kotlin DSL API, which means it's available without any plugin. If you want to make a custom function like this available, you would need a plugin. A plugin to load your plugin. Not very practical.
I also tried using the buildSrc module to make an extension function like the one above. But it turns out that buildSrc definitions aren't even available from the plugins DSL block, which has a very constrained syntax. That wouldn't have been very practical anyway: you would have needed a buildSrc folder for every project in which you wanted to use the extension.
I'm not sure if this is possible at all. Try asking on https://discuss.gradle.org/.

What is the aptMode of kapt used for?

In the docs there are three values for aptMode.
Is there any detailed information about these values?
What is the meaning of "stubs"?
See https://blog.jetbrains.com/kotlin/2015/06/better-annotation-processing-supporting-stubs-in-kapt/ (stubs are described in the second paragraph, but the first one provides context):
The initial version of kapt worked by intercepting communication between annotation processors (e.g. Dagger 2) and javac, and added already-compiled Kotlin classes on top of the Java classes that javac saw itself in the sources. The problem with this approach was that, since Kotlin classes had to be already compiled, there was no way for them to refer to any code generated by the processor (e.g. Dagger’s module classes). Thus we had to write Dagger application classes in Java.
As discussed in the previous blog post, the problem can be overcome by generating stubs of Kotlin classes before running javac and then running real compilation after javac has finished. Stubs contain only declarations and no bodies of methods. The Kotlin compiler used to create such stubs in memory anyways (they are used for Java interop, when Java code refers back to Kotlin), so all we had to do was serialize them to files on disk.
And also this answer.
But nowadays stubs are generated by default; you can explicitly disable stub generation with aptMode=apt, or generate only stubs with aptMode=stubs. I think these values are primarily for internal use by build systems (e.g. Gradle), as described in https://www.bountysource.com/issues/38443087-support-for-kapt-for-improved-kotlin-support:
There are 4 steps.
1. kaptGenerateStubsKotlin: run kotlinc with plugin:org.jetbrains.kotlin.kapt3:aptMode=stubs
2. kaptKotlin: run kotlinc with plugin:org.jetbrains.kotlin.kapt3:aptMode=apt
3. compileKotlin: run kotlinc regularly
4. compileJava: run javac with -proc:none and pass it the generated sources from step 2
These steps are slightly different with each minor version of kotlin so this will be interesting.
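For reference, aptMode is an option of the kapt3 compiler plugin, passed with the compiler's -P plugin-option syntax. The kapt Gradle plugin normally wires this up for you; the fragment below is only a sketch of how such an option could be spelled out by hand in a build.gradle.kts (you would rarely do this yourself):

```kotlin
// build.gradle.kts -- sketch only; the kapt Gradle plugin normally
// passes aptMode itself as part of the kaptGenerateStubsKotlin task.
tasks.withType<org.jetbrains.kotlin.gradle.tasks.KotlinCompile>().configureEach {
    kotlinOptions.freeCompilerArgs += listOf(
        "-P", "plugin:org.jetbrains.kotlin.kapt3:aptMode=stubs"
    )
}
```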

gtest - why does one test affect behavior of other?

I currently have a gtest test fixture with some member variables and functions.
I have a simple test, as well as more complex tests later on. If I comment out the complex tests, my simple test runs perfectly fine. However, when I include the other tests (even though I'm using gtest_filter to run only the first test), I start getting segfaults. I know it's impossible to debug without posting my code, but I wanted to understand at a high level how this could happen. My understanding is that TEST_F constructs/destructs a new fixture object every time a test is run, so how can the mere existence of one test affect another? Especially if I'm filtering, shouldn't the behavior be exactly the same?
Your understanding of the fixture lifecycle is correct: for each TEST_F, googletest creates a fresh fixture instance (constructor, then SetUp, then the test body, then TearDown, then the destructor).
What is not reset between tests is static or global state: static members of the fixture, globals touched by the tests, and anything done in SetUpTestSuite/TearDownTestSuite, which run once per suite.
Also note that --gtest_filter only skips running the filtered-out tests; their translation units are still compiled, linked, and registered, so static initializers in those files still execute. Merely adding a test file can therefore change behavior if it has static initialization, or if it introduces undefined behavior such as an ODR violation.
But because you did not provide an MCVE, we cannot narrow it down further.

SpecFlow test doesn't open repository methods

I wrote a SpecFlow test (it just adds some items to repositories and calculates values afterwards). It used to work without problems, but now it doesn't.
I made mocks of my repositories and filled them with values.
I stepped through with the debugger and found that the repository methods are never called; the debugger just ignores and skips them. This is not a joke - I'm sure I set the breakpoint in the right method.
This surprised me too: in a SpecFlow test with mocks, the real repository method is never called, so you can't debug the behavior inside it. The mock repository just returns the configured value. If you can't get the value, it means you added the data to the mock repository incorrectly.

Class Foo is implemented in both MyApp and MyAppTestCase. One of the two will be used. Which one is undefined

Recently I started unit testing my application. This project (in Xcode 4) was created without a unit test bundle, so I had to set it up.
I have followed the steps from here: http://cocoawithlove.com/2009/12/sample-mac-application-with-complete.html
It was working well for simple classes, but now I am trying to test a class that depends on another, and that on another, etc.
First I got a linker error, so I added the *.m files to the test case target, but now I get a warning for every class I am trying to test:
Class Foo is implemented in both MyApp
and MyAppTestCase. One of the two will
be used. Which one is undefined.
I wonder why is that? How can I solve this? Maybe I missed something when setting the unit test target?
Edit - The Solution
Set "Bundle Loader" correctly to $(BUILT_PRODUCTS_DIR)/AppName.app/AppName
Set "Symbols Hidden by Default" to NO (in the Build Settings of the target application). This is where the linker errors come from, because it is YES by default! I've been struggling with this for so long.
Source: Linking error for unit testing with XCode 4?
Class Foo is implemented in both MyApp and MyAppTestCase. One of the two will be used. Which one is undefined.
I wonder why is that?
Because both images (the app and the unit test bundle) define an implementation of the class. The class is dynamically loaded into the Objective-C runtime, and the runtime uses a flat namespace. How this works:
the binary is loaded, starting with its dependencies
as each binary is loaded, its Objective-C classes are registered with the runtime
if a class with a specific name is loaded twice, the behaviour is undefined; only one implementation of a class with a given name can be loaded into the runtime
The typical problem here is that you will be handed one of the implementations - your app will likely crash when the types conflict (when the class does not come from the same source file).
You typically avoid this by either renaming a class or exporting the class from only one image. Renaming the class obviously does not apply to your case: you have one file, Foo.m, which is being compiled, exported, and loaded by two images when it should be in one.
You should treat this as you would a duplicate symbol linker error. Even though it is the same source file (and the implementation is the same), this is a problem that you must fix.
How can I solve this?
If Foo.m is a class of the app, you have to remove Foo.m from the unit test target (do not compile and link it there). If it's part of the unit test only, then do not compile and link it into the app target.
Then follow the instructions in the post for linking/loading your unit test bundle against the app. It's in the general area of the post that says: where "WhereIsMyMac" is the name of the application you're unit testing. This will let the testing target link against the application (so you don't get linker errors when compiling). The important part is that your test files are compiled in the unit test target (only), and your app's classes are compiled and linked into the app. You can't just add them to both targets - they are linked and loaded dynamically.
Maybe I missed something when setting the unit test target?
From the article you linked:
Note: The testing target is a separate target. This means that you need to be careful of target membership. All application source files should be added to the application target only. Test code files should be added to the testing target only.
the part that you got wrong is probably the link and load phases of the unit test bundle.
If you are using CocoaPods, your Podfile only needs the dependencies in the section for the main target, not the test targets. If you do add duplicate dependencies to the test targets, you'll get the OP's error message.
target 'MyProject' do
pod 'Parse'
end
target 'MyProjectTests' do
end
target 'MyProjectUITests' do
end
For me, all I needed to do was uncheck the checkbox that makes the Foo class a member of the unit test target. It should not be a member of both targets, and should look like this:
In case you can't see the image, it's a screenshot of the Xcode "Target Membership" pane. There are two targets: one with an "A" application icon and the test name. The other is the unit test target, and has a unit test icon:
Target Membership
[X] Foo
[ ] FooTests
For me this happened because I deployed to the device and then to the simulator, as I have NSZombies enabled. The solution was to switch to the simulator configuration and do a Product -> Clean, then switch to the device configuration and do the same. The error went away. It's to do with the build cache.
The reason is that you are overriding the RUNPATH_SEARCH_PATHS build setting of your app target in another target.
Solution:
Go to your app target, find the RUNPATH_SEARCH_PATHS build setting, and use the $(inherited) flag there for both Debug and Release.
I came across the same issue. In my case it is "Class NSNotification is implemented in both /System/Library/Frameworks/Foundation.framework/Foundation". Has anyone else come across this? Any direction or advice will be appreciated.