Forbid runtime dependency on package in Nix overlay - package-managers

Task description
I want to make sure that no derivation I install has a run-time dependency on a specified set of derivations. If I ask nix-env to install a package that has such a run-time dependency, I want it to tell me that I am asking for the impossible. Build-time dependencies are fine. I want to avoid huge cascading rebuilds, though.
In other words, I want to make sure that a derivation with name = evil never reaches my Nix store, but I am fine with it having been used to build other derivations on Hydra. Here is what I tried:
Failed attempt: use derivation meta attribute
self: super: {
  evil = super.evil // { meta.broken = true; };
}
but this makes nix-env refuse to install programs that have build-time dependencies on evil; for example, it refuses to install Go or Haskell programs (which are statically linked) because the compiler has some transitive dependency on evil.
Failed attempt: replace evil with something harmless
I wrote an overlay that replaces evil:
self: super: {
  evil = super.harmless; # e.g. super.busybox
}
but it causes a major cascading rebuild.
Random idea
If there were a function like this:
self: super: {
  ghc = forget_about_dependencies_but_retain_hash_yes_I_know_what_I_Do [ super.evil ] super.ghc;
  # same for rustc, go and other compilers that link statically
}
that would be a 90% solution for me.

It seems impossible to prevent a derivation from ever being in the store, but it is possible to make sure your profile does not contain it as a run-time dependency:
self: super: {
  world = (super.buildEnv {
    name = "world";
    paths = with super; [ foo bar baz ];
  }).overrideAttrs (_: { disallowedRequisites = [ super.evil super.ugly ]; });
}
So, if you put all the derivations you want into "world", you can be sure that evil and ugly are not among its run-time dependencies. But they will still be downloaded into the store to build "world", even if they are not actually used by any derivation in paths.
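As a rough sanity check (a sketch only; "world", evil, and the profile path are just the placeholders from above, and it assumes the overlay is active in your <nixpkgs>), you can install the environment and query its closure:

nix-env -f '<nixpkgs>' -iA world
nix-store -q --requisites ~/.nix-profile | grep evil   # should print nothing

If evil ever did end up in the closure, the disallowedRequisites check would already make the build of "world" fail, so the grep is only a double check.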


Enforcing read-only attributes from the metaclass

Yes, still going with this. My impression is that there's this powerful facility in Raku which is not really easy to use, and there's very little documentation for it. I'd like to help mitigate that.
In this case, I'm trying to force attributes to be read-only by default, to make immutable classes. Here's my attempt:
my class MetamodelX::Frozen is Metamodel::ClassHOW {
    method compose_attributes($the-obj, :$compiler_services) {
        my $attribute-container = callsame;
        my $new-container = Perl6::Metamodel::AttributeContainer.new(
            :attributes($attribute-container.attributes),
            :attribute_lookup($attribute-container.attribute_table),
            :0attr_rw_by_default
        );
        $new-container.compose_attributes($the-obj, $compiler_services);
    }
}

my package EXPORTHOW {
    package DECLARE {
        constant frozen = MetamodelX::Frozen;
    }
}
I'm calling that from a main function that looks like this:
use Frozen;

frozen Foo {
    has $.bar;
    method gist() {
        return "→ $!bar";
    }
}

my $foo = Foo.new(:3bar);
say $foo.bar;
$foo.bar(33);
I'm trying to follow the source, which does not really provide a lot of facilities to change attribute-related things, so there seems to be no other way than creating a new instance of the container. And that might fail in unpredictable ways, which is exactly what it does:
Type check failed in binding to parameter '$the-obj'; expected Any but got Foo (Foo)
at /home/jmerelo/Code/raku/my-raku-examples/frozen.raku:7
It's not clear whether this refers to the first $the-obj or the second one, but anyway, any help is appreciated.

How to use `.kind()` on my custom errors?

I'm wondering how I can do some pattern-matching on my custom errors, to have a specific behaviour for a few errors and a generic behaviour for the others.
I have the following custom error, defined using thiserror, as it seems to be the recommended crate for custom errors as of July 2020:
#[derive(thiserror::Error, Debug)]
pub enum MyError {
    #[error("Error while building the query")]
    Builder(#[source] hyper::http::Error),
    #[error("Generic error")]
    Fuck,
    #[error("Not OK HTTP response code")]
    NotOK,
    #[error("Request error")]
    Request(#[source] hyper::Error),
}

pub async fn do_http_stuff() -> Result<Vec<u8>, MyError> {
    ...
}
And:
match do_http_stuff().await {
    Ok(data) => ...,
    Err(error) => match error.kind() {
        MyError::NotOK => {
            println!("not ok");
        },
        _ => {
            println!("{}", error.to_string());
        },
    },
}
But .kind() is not implemented. When I search for how to manage errors in Rust, kind() often appears in the examples. What incantations do I need to get this .kind() method, or something equivalent, in my project too?
Thanks.
I expect you're referring to the std::io::Error::kind method. It's specifically for I/O errors. Because it needs to be able to represent OS errors, io::Error isn't defined as an enum, and we can't match on it. The kind method allows us to get an ErrorKind representing the generic cause of the error in a platform-independent way.
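For illustration (this snippet is mine, not from the question), matching on an io::Error goes through its ErrorKind:

use std::io::{Error, ErrorKind};

// Illustrative only: io::Error wraps an OS error code, so you match on its ErrorKind instead.
fn describe(e: &Error) -> &'static str {
    match e.kind() {
        ErrorKind::NotFound => "not found",
        ErrorKind::PermissionDenied => "permission denied",
        _ => "some other I/O error",
    }
}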
This workaround is completely unnecessary in a normal Rust library, as all of your error cases can be expressed simply (as the variants of your enum). All you need to do is match on the error value directly.
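A minimal sketch of that, reusing the MyError enum and do_http_stuff from above (inside some async fn, with the data handling elided):

match do_http_stuff().await {
    Ok(data) => {
        // use data ...
    }
    Err(MyError::NotOK) => println!("not ok"),
    Err(error) => println!("{}", error),
}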

Is there any way to iterate all fields of a data class without using reflection?

I know an alternative to reflection, which is Javassist, but using Javassist is a little complex. And because of lambdas and some other features in Kotlin, Javassist doesn't always work well. So is there any other way to iterate over all fields of a data class without using reflection?
There are two ways. The first is relatively easy, and is essentially what's mentioned in the comments: assuming you know how many fields there are, you can unpack them, throw the values into a list, and iterate over that. Or alternatively, use them directly:
data class Test(val x: String, val y: String) {
    fun getData(): List<Any> = listOf(x, y)
}

data class Test(val x: String, val y: String)
...
val (x, y) = Test("x", "y")
// And optionally throw those in a list
Although iterating like this takes a slight extra step, it is at least one way you can relatively easily unpack a data class.
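For instance, a tiny (hypothetical) usage of the getData() variant:

fun main() {
    val test = Test("x", "y")
    // Iterate over the values exposed by the helper
    for (value in test.getData()) {
        println(value)
    }
}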
If you don't know how many fields there are (or you don't want to refactor), you have two options:
The first is using reflection. But as you mentioned, you don't want this.
That leaves a second, somewhat more complicated preprocessing option: annotations. Note that this only works with data classes you control; beyond that, you're stuck with reflection or with whatever the library/framework author provides.
Annotations can be used for several things: one of them is metadata, but they can also drive code generation. This is a somewhat complicated alternative, and it requires an additional module in order to get the compile order right. If it isn't compiled in the right order, you'll end up with unprocessed annotations, which rather defeats the purpose.
I've also created a version you can use with Gradle, but that's at the end of the post and it's a shortcut to implementing it yourself.
Note that I have only tested this with a pure Kotlin project - I've personally had problems with annotations between Java and Kotlin (although that was with Lombok), so I do not guarantee this will work at compile time if called from Java. Also note that this is complex, but avoids runtime reflection.
Explanation
The main issue here is a memory concern: this creates a new list every time you call the method, which makes it very similar to the approach used by enums.
Local testing over 10000 iterations also shows my approach consistently taking ~200 milliseconds to execute, versus roughly 600 for reflection. However, for a single iteration, mine takes ~20 milliseconds, whereas reflection takes between 400 and 500 milliseconds. On one run, reflection took 1500 (!) milliseconds, while my approach took 18 milliseconds.
See also Java Reflection: Why is it so slow?. This appears to affect Kotlin as well.
The memory impact of creating a new list on every call can be noticeable, but the lists are also garbage-collected, so it shouldn't be that big a problem.
For reference, the code used for benchmarking (this will make sense after the rest of the post):
import kotlin.reflect.full.memberProperties

@AutoUnpack data class ExampleDataClass(val x: String, val y: Int, var m: Boolean)

fun main(a: Array<String>) {
    var mine = 0L
    var reflect = 0L
    // for(i in 0 until 10000) {
    var start = System.currentTimeMillis()
    val cls = ExampleDataClass("example", 42, false)
    for (field in cls) { // uses the generated iterator() extension
        println(field)
    }
    mine += System.currentTimeMillis() - start

    start = System.currentTimeMillis()
    for (prop in ExampleDataClass::class.memberProperties) {
        println("${prop.name} = ${prop.get(cls)}")
    }
    reflect += System.currentTimeMillis() - start
    // }
    println(mine)
    println(reflect)
}
Setting up from scratch
This is built around two modules: a consumer module and a processor module. The processor HAS to be in a separate module; it needs to be compiled separately from the consumer for the annotations to work properly.
First of all, your consumer project needs the annotation processor:
apply plugin: 'kotlin-kapt'
Additionally, you need to add stub generation. It complains it's unused while compiling, but without it, the generator seems to break for me:
kapt {
    generateStubs = true
}
Now that that's in order, create a new module for the unpacker. Add the Kotlin plugin if you didn't already. You do not need the annotation processor Gradle plugin in this project. That's only needed by the consumer. You do, however, need kotlinpoet:
implementation "com.squareup:kotlinpoet:1.2.0"
This is to simplify aspects of the code generation itself, which is the important part here.
Now, create the annotation:
@Retention(AnnotationRetention.SOURCE)
@Target(AnnotationTarget.CLASS)
annotation class AutoUnpack
This is pretty much all you need. The retention is set to source because it has no value at runtime, and it only targets compile time.
Next, there's the processor itself. This is somewhat complicated, so bear with me. For reference, this uses the javax.* packages for annotation processing. Android note: this might work assuming you can plug in a Java module on a compileOnly scope without getting the Android SDK restrictions. As I mentioned earlier, this is mainly for pure Kotlin; Android might work, but I haven't tested that.
Anyways, the generator:
Because I couldn't find a way to generate the method into the class without touching the rest (and because according to this, that isn't possible), I'm going with an extension function generation approach.
You'll need a class UnpackCodeGenerator : AbstractProcessor(). In there, you'll first need two lines of boilerplate:
override fun getSupportedAnnotationTypes(): MutableSet<String> = mutableSetOf(AutoUnpack::class.java.name)
override fun getSupportedSourceVersion(): SourceVersion = SourceVersion.latest()
Moving on, there's the processing. Override the process function:
override fun process(annotations: MutableSet<out TypeElement>, roundEnv: RoundEnvironment): Boolean {
    // Find elements with the annotation
    val annotatedElements = roundEnv.getElementsAnnotatedWith(AutoUnpack::class.java)
    if (annotatedElements.isEmpty()) {
        // Self-explanatory
        return false
    }

    // Iterate the elements
    annotatedElements.forEach { element ->
        // Grab the name and package
        val name = element.simpleName.toString()
        val pkg = processingEnv.elementUtils.getPackageOf(element).toString()
        // Then generate the class. The "unnamed package" check is a patch for an issue where classes in the
        // root package report their package as "unnamed package" rather than an empty string, which breaks
        // syntax because "package unnamed package" isn't legal.
        generateClass(name, if (pkg == "unnamed package") "" else pkg, element)
    }

    // Return true for success
    return true
}
This just sets up some of the later framework. The real magic happens in the generateClass function:
private fun generateClass(className: String, pkg: String, element: Element) {
    val elements = element.enclosedElements
    val classVariables = elements
        .filter {
            val name = if (it.simpleName.contains("\$delegate"))
                it.simpleName.toString().substring(0, it.simpleName.indexOf("$"))
            else it.simpleName.toString()

            it.kind == ElementKind.FIELD // Find fields
                && Modifier.STATIC !in it.modifiers // that aren't static (thanks to sebaslogen for issue #1: https://github.com/LunarWatcher/KClassUnpacker/issues/1)
                // Additionally, we have to ignore private fields. Extension functions can't access these, and accessing
                // them is a bad idea anyway. Kotlin lets you expose get without exposing set. If you, by default, don't
                // allow access to the getter, there's a high chance exposing it is a bad idea.
                && elements.any { getter ->
                    getter.kind == ElementKind.METHOD // find methods
                        && getter.simpleName.toString() ==
                            "get${name[0].toUpperCase().toString() + (if (name.length > 1) name.substring(1) else "")}" // that match the getter name (by the standard convention)
                        && Modifier.PUBLIC in getter.modifiers // and that are marked public
                }
        } // Grab the variables
        .map {
            // Map the name now. Also supports later filtering
            if (it.simpleName.endsWith("\$delegate")) {
                // Support by lazy
                it.simpleName.subSequence(0, it.simpleName.indexOf("$"))
            } else it.simpleName
        }

    if (classVariables.isEmpty()) return // Self-explanatory

    val file = FileSpec.builder(pkg, className)
        .addFunction(FunSpec.builder("iterator") // For automatic unpacking in a for loop
            .receiver(element.asType().asTypeName().copy()) // Add it as an extension function of the class
            .addStatement("return listOf(${classVariables.joinToString(", ")}).iterator()") // Add the return statement: create a list, return an iterator over it
            .addModifiers(KModifier.PUBLIC, KModifier.OPERATOR) // This needs to be public. Because it's an iterator, the function also needs the `operator` keyword
            .build()
        ).build()

    // Grab the generated-sources directory.
    val genDir = processingEnv.options["kapt.kotlin.generated"]!!
    // Then write the file.
    file.writeTo(File(genDir, "$pkg/${element.simpleName.replace("\\.kt".toRegex(), "")}Generated.kt"))
}
All of the relevant lines have comments explaining use, in case you're not familiar with what this does.
Finally, in order to get the processor to process, you need to register it. In the module for the generator, add a file called javax.annotation.processing.Processor under main/resources/META-INF/services. In there you write:
com.package.of.UnpackCodeGenerator
From here, you need to link it using compileOnly and kapt. If you added it as a module to your project, you can do:
kapt project(":ClassUnpacker")
compileOnly project(":ClassUnpacker")
Alternative source setup:
Like I mentioned earlier, I bundled this into a jar for convenience. It's under the same license as SO uses (CC-BY-SA 3.0), and it contains the exact same code as in the answer (although compiled into a single project).
If you want to use this one, just add the Jitpack repo:
repositories {
    // Other repos here
    maven { url 'https://jitpack.io' }
}
And hook it up with:
kapt 'com.github.LunarWatcher:KClassUnpacker:v1.0.1'
compileOnly "com.github.LunarWatcher:KClassUnpacker:v1.0.1"
Note that the version here may not be up to date: the up to date list of versions is available here. The code in the post still aims to reflect the repo, but versions aren't really important enough to edit every time.
Usage
Regardless of which way you ended up using to get the annotations, the usage is relatively easy:
@AutoUnpack data class ExampleDataClass(val x: String, val y: Int, var m: Boolean)

fun main(a: Array<String>) {
    val cls = ExampleDataClass("example", 42, false)
    for (field in cls) {
        println(field)
    }
}
This prints:
example
42
false
Now you have a reflection-less way of iterating fields.
Note that local testing has been done partially with IntelliJ, but IntelliJ doesn't seem to like me - I've had various failed builds in IntelliJ where running gradlew clean && gradlew build from a command line oddly works fine. I'm not sure whether this is a local problem or a general one, but you might see issues like this if you build from IntelliJ.
Also, you might get errors if the build fails. The IntelliJ linter builds on top of the build directory for some sources, so if the build fails and the file with the extension function isn't generated, the call site will appear as an error. Building usually fixed this when I tested (with both modules and from Jitpack).
You'll also likely have to enable the annotation processor setting if you use Android Studio or IntelliJ.
Here is another idea that I came up with, but am not satisfied with... it has some pros and cons:
Pros:
adding/removing fields to/from the data class causes compiler errors at field-iteration sites
no boilerplate code needed
Cons:
won't work if default values are defined for arguments
declaration:
data class Memento(
    val testType: TestTypeData,
    val notes: String,
    val examinationTime: MillisSinceEpoch?,
    val administeredBy: String,
    val signature: SignatureViewHolder.SignatureData,
    val signerName: String,
    val signerRole: SignerRole
) : Serializable
Iterating through all fields (you can use this directly at call sites, or apply the Visitor pattern and use this in the accept method to call all the visit methods):
val iterateThroughAllMyFields: Memento = someValue

Memento(
    testType = iterateThroughAllMyFields.testType.also { testType ->
        // do something with testType
    },
    notes = iterateThroughAllMyFields.notes.also { notes ->
        // do something with notes
    },
    examinationTime = iterateThroughAllMyFields.examinationTime.also { examinationTime ->
        // do something with examinationTime
    },
    administeredBy = iterateThroughAllMyFields.administeredBy.also { administeredBy ->
        // do something with administeredBy
    },
    signature = iterateThroughAllMyFields.signature.also { signature ->
        // do something with signature
    },
    signerName = iterateThroughAllMyFields.signerName.also { signerName ->
        // do something with signerName
    },
    signerRole = iterateThroughAllMyFields.signerRole.also { signerRole ->
        // do something with signerRole
    }
)

Class cannot resolve module as content unless @Stepwise used

I have a Spock class that, when run as a test suite, throws Unable to resolve iconRow as content for geb.Page, or as a property on its Navigator context. Is iconRow a class you forgot to import? unless I annotate the class with @Stepwise. However, I really don't want test execution to stop on the first failure, which @Stepwise does.
I've tried writing (copy and pasting) my own extension using this post, but I still get these errors. It is using my extension, as I added some logging statements that were printed out to the console.
Here is one of my modules:
class IconRow extends Module {
    static content = {
        iconRow(required: false) { $("div.report-toolbar") }
    }
}
And a page that uses it:
class Report extends SomeOtherPage {
    static at = { $("div.grid-container").displayed }
    static content = {
        iconRow { module IconRow }
    }
}
And a snippet of the test that is failing:
class MyFailingTest extends GebReportingSpec {
    def setupSpec() {
        via Dashboard
        SomeClass.login("SourMonk", "myPassword")
        assert page instanceof Dashboard
        nav.goToReport("Some report name")
        assert page instanceof Report
    }

    @Unroll
    def "I work"() {
        given:
        at Report
        expect:
        this == that
        where:
        this << ["some list", "of values"]
        that << anotherModule.someContent*.@id
    }

    @Unroll
    def "I don't work"() {
        given:
        at Report
        expect:
        this == that
        where:
        this << ["some other", "list", "of values"]
        that << iconRow.columnHeaders*.attr("innerText")*.toUpperCase()
    }
}
When executed as a suite, "I work" passes and "I don't work" fails because it cannot identify iconRow as content for the page. If I switch the order of the test cases, "I don't work" will pass and "I work" will fail. Alternatively, if I execute each test separately, they both pass.
What I have tried:
Adding/removing the required: true property from content in the modules
Prefixing the module name with the class, such as IconRow.iconRow
Defining my modules as static @Shared properties
Initializing the modules both in and outside of my setupSpec()
Making simple getter methods in each module's class that return the module, and referencing content such as IconRow.getIconRow().columnHeaders*.attr("innerText")*.toUpperCase()
Moving the contents of my setupSpec() into setup()
Adding autoClearCookies = false into my GebConfig.groovy
Making a @Shared Report report variable and prefixing all modules with it, such as report.iconRow
A very peculiar note about that last bullet point: it magically resolves the modules that don't have the prefix, so it won't resolve report.iconRow but will resolve just iconRow. Absolutely bizarre, because if I remove that variable, the module that was just working suddenly can't be resolved again. I even tried declaring this variable and then not prefixing anything, and that did not work either.
Another thing I keep banging my head against the wall with is that I'm not sure where the problem even lies. The error it throws leads me to believe it's a project setup issue, but running each feature individually works fine, so the classes appear to be resolved just fine.
On the other hand, perhaps it's an issue with the session and/or cookies? Although I have yet to see any official documentation on this, the general consensus (from other posts and articles I've read) seems to be that only using @Stepwise will maintain your session between feature methods. If this is the case, why is my extension not working? It's pretty much a copy and paste of @Stepwise without the skipFeaturesAfterFirstFailingFeature method (I can post it if needed), unless there is some other stuff going on behind the scenes with @Stepwise.
Apologies for the wall of text, but I've been trying to figure this out for about 6 hours now, so my brain is pretty fried.
Geb has special support for @Stepwise: if a spec is annotated with it, Geb does not call resetBrowser() after each test; instead, it is called after the spec is completed. See the code on GitHub.
So basically you need to change your setupSpec to setup so that it will be executed before each test.
Regarding your observation: if you run just a focused test, the setupSpec is executed for that test and thus it passes. The problem is that the cleanup is invoked afterwards and resets the browser, breaking subsequent tests.
EDIT
I overlooked your usage of where blocks: everything in a where block needs to be statically (@Shared) available, so using instance-level constructs won't work. Resetting the browser will also kill every reference, so just grabbing it beforehand won't work either. Basically, don't use Geb objects in where blocks!
Looking at your code, however, I don't see any reason to use data-driven tests here. This can easily be done with one assertion in a normal test.
It is good practice for unit tests to test just one thing. Geb, however, is not a unit test but an acceptance/frontend test. Such tests are way slower than unit tests, so it makes sense to combine sensible assertions into one test.
class MyFailingTest extends GebReportingSpec {
    def setup() {
        via Dashboard
        SomeClass.login("SourMonk", "myPassword")
        assert page instanceof Dashboard
        nav.goToReport("Some report name")
        assert page instanceof Report
    }

    def "I work"() {
        given:
        at Report
        expect:
        ["some list", "of values"] == anotherModule.someContent*.@id
    }

    def "I don't work"() {
        given:
        at Report
        expect:
        ["some other", "list", "of values"] == iconRow.columnHeaders*.attr("innerText")*.toUpperCase()
    }
}

import cycle in golang with test packages

I am trying to refactor some test code and in two packages I need to do the same thing (connect to a DB). I am getting an import cycle. I get why I can't do it, but am wondering what the best way around it is.
Some specifics: I have three packages, testutils, client, and engine.
In engine I define an interface & implementation (both exported).
package engine

type QueryEngine interface {
    // ...
}

type MagicEngine struct {
    // ...
}
And then in the testutils package I create a MagicEngine and return it.
package testutils

func CreateAndConnect() (*engine.MagicEngine, error) {
    // ....
}
Now in the test code (using a TestMain) I need to do something like
package engine

func TestMain(m *testing.M) {
    e, err := testutils.CreateAndConnect()
    // ....
    os.Exit(m.Run())
}
This is of course a cycle. I want to do it this way so that I can also use testutils.CreateAndConnect() in the client package; I don't want to repeat the code in both packages. And I don't want it in the main code of the engine package, since it is very specific to the tests.
I tried adding it as an exported function in the engine test file (engine/engine_test.go) and using it in client/client_test.go. No dice. :/
I feel I have done this in other languages, but could be crazy. What is the best way to structure this code for reusability?
You could use black-box style testing because the components of engine are exported. Change your tests to be in package engine_test:
package engine_test

import (
    "os"
    "testing"

    "engine"
    "testutils"
)

func TestMain(m *testing.M) {
    e, err := testutils.CreateAndConnect()
    // ....
    os.Exit(m.Run())
}