Access environment variables in Jenkins shared library code

When I use my new shared library I cannot access environment variables from any src class, whether it is executed directly by the Jenkinsfile or via a vars/*.groovy script. The problem persists even when I add withEnv to the vars/*.groovy script.
What is the trick to getting environment variables to propagate to Jenkins shared library src class execution?
Jenkinsfile
withEnv(["FOO=BAR2"]) {
println "Jenkinsfile FOO=${FOO}"
library 'my-shared-jenkins-library'
lib.displayEnv()
Shared Library vars/lib.groovy
def displayEnv() {
    println "Shared lib vars/lib FOO=${FOO}"
    MyClass c = new MyClass()
}
Shared Library src/MyClass.groovy
class MyClass {
    MyClass() {
        throw new Exception("Shared lib src/MyClass FOO=${System.getenv('FOO')}")
    }
}
Run Result
Jenkinsfile FOO=BAR
Shared lib vars/lib FOO=BAR
java.lang.Exception: Shared lib src/MyClass FOO=null
...

It sure looks like the only way to handle this is to pass `this` from the Jenkinsfile down to vars/lib.groovy and harvest the environment from that object:
Jenkinsfile
withEnv(["FOO=BAR2"]) {
    library 'my-shared-jenkins-library'
    lib.displayEnv(this)
}
vars/lib.groovy
def displayEnv(script) {
    println "Shared lib vars/lib FOO=${FOO}"
    MyClass c = new MyClass(script)
}
src class
MyClass(def script) {
    throw new Exception("FOO=${script.env.FOO}")
}

I believe you can populate the environment variable as below, so the shared library can access it.
Jenkinsfile
env.FOO = "BAR2"
library 'my-shared-jenkins-library'
lib()
vars/lib.groovy
def call() {
    echo("FOO: ${FOO}")
    echo("FOO: " + env.FOO)
}

Another method is to use the "steps" variable:
In Jenkinsfile
new mypackages.myclass().mymethod(steps)
In src
class myclass implements Serializable {
    void mymethod(steps) {
        String myEnvVar = steps.sh(returnStdout: true, script: "env | grep 'myVar' | cut -f 2- -d '='")
    }
}
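If shelling out just to read a variable feels heavy, a variation is to hand the pipeline's env object to the class alongside steps. This is a minimal sketch, not from the answer above; myclass2 and myVar are illustrative names:
// src/mypackages/myclass2.groovy -- hypothetical variant
class myclass2 implements Serializable {
    void mymethod(steps, env) {
        // env is the pipeline script's env object, so no shell step is needed
        String myEnvVar = env.myVar
        steps.echo("myVar=${myEnvVar}")
    }
}
// In the Jenkinsfile:
// new mypackages.myclass2().mymethod(steps, env)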

I stumbled upon this problem lately, so I'll add my $0.02.
The basic template I use for vars/*.groovy is:
// vars/myMethod.groovy
import cool.pkg.Clazz
def call(Map m) {
    m.put('env', env)
    m.put('steps', steps)
    new Clazz(m).call()
}
And the template for src/**/*.groovy:
// src/cool/pkg/Clazz.groovy
package cool.pkg
class Clazz {
    private String cool_field_1 = "default-value-1"
    private int cool_value = 42
    def env
    def steps
    def call() {
        steps.echo("env.BUILD_TAG: ${env.BUILD_TAG}")
        //...
    }
}
In the Jenkinsfile it is used the standard way:
@Library('mylib@mybranch') _
pipeline {
    agent any
    stages {
        stage('St 1') {
            steps { myMethod('cool_value': 43) }
        }
    }
}
Disclaimer: I don't do Groovy, but since it looks similar to Java I can use it a little. Using a Map also gives the advantage of quite a flexible interface; new Clazz(m) works because Groovy's implicit map constructor assigns each map entry to the same-named field.
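To make that concrete, here is a tiny plain-Groovy sketch of the map-constructor behaviour, runnable outside Jenkins (Demo is a hypothetical stand-in for Clazz):
class Demo {
    int cool_value = 42
    def env
}
// named arguments feed Groovy's implicit map constructor
def d = new Demo(cool_value: 43, env: [BUILD_TAG: 'jenkins-demo-1'])
assert d.cool_value == 43
assert d.env.BUILD_TAG == 'jenkins-demo-1'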

Not sure what the experts will say about this solution, but I was able to access the variables defined in my Jenkinsfile from the shared library using evaluate.
Jenkinsfile
myVar = "abc"
vars/test.groovy
String myVar = evaluate("myVar")
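For context, a minimal sketch of how that might look inside a complete vars step (the call() wrapper and echo are my additions; this assumes myVar was assigned without def in the Jenkinsfile, so it lands in the script binding where evaluate can see it):
// vars/test.groovy
def call() {
    String myVar = evaluate("myVar")
    echo "myVar=${myVar}"
}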

For me this just works.
Jenkinsfile:
@Library('jenkins-library') _
pipeline {
    agent any
    environment {
        FOO = 'bar'
    }
    stages {
        stage('Build') {
            steps {
                script {
                    buildImage()
                    ...
The library vars/buildImage.groovy:
def call() {
    println(this.env.FOO)
    println(env.FOO)
}
So to pass the environment to a class in the library, just pass `this` from vars/yourfunc.groovy.
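Putting the pieces together, a minimal sketch of handing `this` to a src class so the class can reach both env and steps (MyLibClass and build() are illustrative names, not from the answers above):
// vars/buildImage.groovy
def call() {
    new MyLibClass(this).build()
}
// src/MyLibClass.groovy
class MyLibClass implements Serializable {
    def script
    MyLibClass(script) { this.script = script }
    void build() {
        // env and steps are reached through the script object
        script.echo("FOO=${script.env.FOO}")
    }
}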


Bytebuddy Advice is not always working in Java agent

I have a simple ByteBuddy agent which records method entry/exit. I run it as
export MAVEN_OPTS="-javaagent:$JAVA_AGENT=$CONFIG_FILE"
mvn clean test -DargLine="-javaagent:$JAVA_AGENT_JAR"
It works well for a couple of my Java projects. However, when I tried it on a few open source projects (flink, guava, dubbo) it didn't work.
JDK: 1.8
ByteBuddy: 1.10.18
Here is a code snippet
@Override
public void instrument(Instrumentation instrumentation) {
    final Advice methodAdvice = Advice.to(MethodTracer.class);
    ElementMatcher.Junction matcher = nameStartsWithAnyOf(includes);
    new AgentBuilder.Default()
        .with(new TracerLogger())
        .type(matcher)
        .transform((builder, typeDescription, classLoader, module) -> {
            builder = builder.visit(new AsmVisitorWrapper.ForDeclaredMethods().method(ElementMatchers.isMethod(), methodAdvice));
            System.out.println("transform() called");
            return builder;
        })
        .installOn(instrumentation);
}
public class MethodTracer {
    @Advice.OnMethodEnter(inline = false)
    public static StackNode enter(
            @Advice.Origin("#t") String type, @Advice.Origin("#m") String method, @Advice.Origin("#s") String signature) {
        System.out.println("OnMethodEnter() called");
        return CallableTracer.enter(type, method, signature, false);
    }

    @Advice.OnMethodExit(inline = false, onThrowable = Throwable.class)
    public static void exit(@Advice.Enter StackNode node) {
        CallableTracer.exit(node);
    }
}
When I run it, it prints transform() called,
but it never prints OnMethodEnter() called.
Any hint to resolve this? Thanks.
Update: To answer Rafael's questions: yes, the tracer logger outputs. Some samples are here:
[2020-12-28 08:22:40:292] [DEBUG] [TracerLogger] onTransformation: class org.apache.flink.api.common.functions.AbstractRichFunction, null, false, agent.rt.bytebuddy.dynamic.DynamicType$Default$Unloaded#7e56d17f
[2020-12-28 08:22:40:292] [DEBUG] [TracerLogger] onComplete: org.apache.flink.api.common.functions.AbstractRichFunction, null, false
I don't see any log from onError(), so I'm not sure if it's a transformation error.
Adding these manifest entries via pom.xml allowed me to register the retransformation strategy, but the Advice is still not working:
<Can-Redefine-Classes>true</Can-Redefine-Classes>
<Can-Retransform-Classes>true</Can-Retransform-Classes>
This is how I register the retransformation strategy:
.with(AgentBuilder.RedefinitionStrategy.RETRANSFORMATION)
.with(AgentBuilder.InitializationStrategy.NoOp.INSTANCE)
.with(AgentBuilder.TypeStrategy.Default.REDEFINE)
I ran with .with(AgentBuilder.Listener.StreamWriting.toSystemOut()). For flink classes, it prints loaded=false.
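For reference, a minimal sketch of wiring the pieces mentioned above together, reusing matcher, methodAdvice, and instrumentation from the snippet earlier (withTransformationsOnly() just keeps the listener output readable; this is a sketch, not a confirmed fix):
new AgentBuilder.Default()
    .with(AgentBuilder.RedefinitionStrategy.RETRANSFORMATION)
    .with(AgentBuilder.InitializationStrategy.NoOp.INSTANCE)
    .with(AgentBuilder.TypeStrategy.Default.REDEFINE)
    .with(AgentBuilder.Listener.StreamWriting.toSystemOut().withTransformationsOnly())
    .type(matcher)
    .transform((builder, typeDescription, classLoader, module) ->
        // apply the same Advice visitor as in the question's snippet
        builder.visit(new AsmVisitorWrapper.ForDeclaredMethods().method(ElementMatchers.isMethod(), methodAdvice)))
    .installOn(instrumentation);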

What parameter should I feed to Frida `ObjC.api.class_addMethod()` to make it happy?

I want to use Frida to add a method to an existing Objective-C class on macOS. After reading the Frida docs, I tried the following code:
const NSString = ObjC.classes.NSString
function func (n) { console.log(n) }
var nativeCb = new NativeCallback(func, 'void', ['int'])
ObjC.api.class_addMethod(
    NSString.handle,
    ObjC.selector('onTest:'),
    nativeCb,
    ObjC.api.method_getTypeEncoding(nativeCb)
)
The above code looks straightforward. However, after the ObjC.api.class_addMethod() call, the attached app and the Frida REPL both froze; it looks like the pointers are not right.
I have tried many possible parameter values for a whole night but still can't figure out the problem. What's wrong with my code?
There are only two issues:
method_getTypeEncoding() can only be called on a Method, which the NativeCallback is not. You could pass it the handle of an existing Objective-C method that has the same signature as the one you're adding, or use Memory.allocUtf8String() to specify your own signature from scratch.
Objective-C methods, at the C ABI level, have two implicit arguments preceding the method's own arguments. These are:
self: the class/instance the method is being invoked on.
_cmd: the selector.
Here's a complete example in TypeScript:
const { NSAutoreleasePool, NSString } = ObjC.classes;

const onTest = new NativeCallback(onTestImpl, "void", ["pointer", "pointer", "int"]);

function onTestImpl(selfHandle: NativePointer, cmd: NativePointer, n: number): void {
    const self = new ObjC.Object(selfHandle);
    console.log(`-[NSString onTestImpl]\n\tself="${self.toString()}"\n\tn=${n}`);
}

function register(): void {
    ObjC.api.class_addMethod(
        NSString,
        ObjC.selector("onTest:"),
        onTest,
        Memory.allocUtf8String("v@:i"));
}

function test(): void {
    const pool = NSAutoreleasePool.alloc().init();
    try {
        const s = NSString.stringWithUTF8String_(Memory.allocUtf8String("yo"));
        s.onTest_(42);
    } finally {
        pool.release();
    }
}

function exposeToRepl(): void {
    const g = global as any;
    g.register = register;
    g.test = test;
}
exposeToRepl();
You can paste it into https://github.com/oleavr/frida-agent-example, and then, with one terminal running npm run watch, you can load it into a running app using the REPL: frida -n Telegram -l _agent.js. From the REPL you can then call register() to plug in the new method, and test() to take it for a spin.
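Assuming the Telegram example above, the REPL session might look roughly like this (prompt and output approximate):
$ frida -n Telegram -l _agent.js
[Local::Telegram]-> register()
[Local::Telegram]-> test()
-[NSString onTestImpl]
        self="yo"
        n=42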

phpunit: reusing dataprovider

I want to run multiple test cases against the content of a whole set of files. I could use a data provider to load my files and use the same provider for all the tests like this:
class mytest extends PHPUnit_Framework_TestCase {
    public function contentProvider() {
        // each data set must itself be an array of arguments
        return array_map(function ($file) { return [$file]; }, glob(__DIR__ . '/files/*'));
    }

    /**
     * @dataProvider contentProvider
     */
    public function test1($file) {
        $content = file_get_contents($file);
        // assert something here
    }
    ...
    /**
     * @dataProvider contentProvider
     */
    public function test10($file) {
        $content = file_get_contents($file);
        // assert something here
    }
}
Obviously that means if I have 10 test cases, each file is loaded 10 times.
I could adjust the data provider to load all files and return one big structure with all the contents. But since the provider is called separately for each test it would still mean each file is loaded 10 times and in addition it would load all files into memory at the same time.
I could of course condense the 10 tests into one test with 10 assertions, but then it would abort right after the first assertion fails and I really want a report of all things that are wrong with the file.
I know that data providers can also return an iterator. But phpunit seems to rerun the iterator separately for each test, still resulting in loading each file 10 times.
Is there a clever way to make phpunit run an iterator only once and pass the result to each test, before continuing?
Test dependencies
If some tests depend on others, you should use the @depends annotation to declare test dependencies. The data returned by the dependency is passed to the test declaring the dependency.
But if a test declared as a dependency fails, the dependent test is not executed.
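For illustration, a minimal sketch of the @depends mechanism (the test and file names are made up):
<?php
use PHPUnit\Framework\TestCase;

class DependsExampleTest extends TestCase
{
    public function testLoadFile()
    {
        $content = file_get_contents(__DIR__ . '/files/example.txt');
        $this->assertNotFalse($content);
        return $content; // handed to tests that declare @depends testLoadFile
    }

    /**
     * @depends testLoadFile
     */
    public function testContentNotEmpty($content)
    {
        $this->assertNotEmpty($content);
    }
}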
Statically stored data
To share data between tests, it's common to set up fixtures statically.
You can use the same approach with data providers:
<?php
use PHPUnit\Framework\TestCase;
class MyTest extends TestCase
{
private static $filesContent = NULL;
public function filesContentProvider()
{
if (self::$filesContent === NULL) {
$paths = glob(__DIR__ . '/files/*');
self::$filesContent = array_map(function($path) {
return [file_get_contents($path)];
}, $paths);
}
}
/**
* #dataProvider filesContentProvider
*/
public function test1($content)
{
$this->assertNotEmpty($content, 'File must not be empty.');
}
/**
* #dataProvider filesContentProvider
*/
public function test2($content)
{
$this->assertStringStartsWith('<?php', $content,
'File must start with the PHP start tag.');
}
}
As you can see, it's not supported out of the box. Because the test class instance is destroyed after each test method executes, you have to store the initialized data in a static class variable.

Executing Specific Geb Tests according to environment

I have a set of Spec tests I am executing within a Grails Project.
I need to execute a certain set of Specs when I am on local, and another set of Specs when I run against the pre-prod environment.
My current config executes all my Specs for both environments, which is something I want to avoid.
I have multiple environments configured in my GebConfig:
environments {
    local {
        baseUrl = "http://localhost:8090/myApp/login/auth"
    }
    "pre-prod" {
        baseUrl = "https://preprod/myApp/login/auth"
    }
}
You could use a Spock config file.
Create annotations for the two types of tests - @Local and @PreProd, for example in Groovy:
import java.lang.annotation.*

@Retention(RetentionPolicy.RUNTIME)
@Target([ElementType.TYPE, ElementType.METHOD])
@Inherited
public @interface Local {}
Next step is to annotate your specs accordingly, for example:
@Local
class SpecificationThatRunsLocally extends GebSpec { ... }
Then create a SpockConfig.groovy file next to your GebConfig.groovy file with the following contents:
def gebEnv = System.getProperty("geb.env")
if (gebEnv) {
    switch (gebEnv) {
        case 'local':
            runner { include Local }
            break
        case 'pre-prod':
            runner { include PreProd }
            break
    }
}
EDIT: It looks like Grails is using its own test runner, which means SpockConfig.groovy is not taken into account when running specifications from Grails. If you need this to work under Grails, you should use the @IgnoreIf/@Requires built-in Spock extension annotations.
First create a Closure class with the logic for when a given spec should be enabled. You could put the logic directly in a closure argument to the extension annotations, but it gets annoying to copy that bit of code all over the place if you want to annotate a lot of specs.
class Local extends Closure<Boolean> {
    public Local() { super(null) }
    Boolean doCall() {
        System.properties['geb.env'] == 'local'
    }
}

class PreProd extends Closure<Boolean> {
    public PreProd() { super(null) }
    Boolean doCall() {
        System.properties['geb.env'] == 'pre-prod'
    }
}
And then annotate your specs:
@Requires(Local)
class SpecificationThatRunsLocally extends GebSpec { ... }

@Requires(PreProd)
class SpecificationThatRunsInPreProd extends GebSpec { ... }
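Both variants key off the geb.env system property, so which specs run is decided at launch time. As a sketch (the exact invocation depends on your Grails version and how it forks tests, so treat these as hypothetical):
grails -Dgeb.env=local test-app
grails -Dgeb.env=pre-prod test-app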

Can I instantiate beanshell class sourced from another Beanshell script?

I would like to run a class imported from a different BeanShell file, but I have no idea how to instantiate the class from the main BeanShell file. Is this possible?
Class which I import:
class HelloW {
    public void run() {
        print("Hello World");
    }
}
Main BeanShell file, which should source and instantiate the class:
Interpreter i = new Interpreter();
i.source("HelloW.bsh");
The BeanShell documentation is pretty good in this area, so you should read through that first. In your case there are a few issues. That said, there are Scripted Objects. Also, the .bsh file you start with needs to execute the scripted object. Taking your example, this code should work:
Hello() {
    run() {
        print("Hello World");
    }
    return this;
}

myHello = Hello();
myHello.run(); // Hello World
UPDATED answer for BeanShell 2.0b1 and later, which support scripted classes:
I created two BeanShell files and placed them in a directory "scripts".
The first, "executor.bsh", is what you are calling the "parent" script, I believe.
// executor.bsh
addClassPath(".");
importCommands("scripts");
source(); // This runs the script which defines the class(es)
x = new HelloWorld();
x.start();
The second file contains the scripted class. Note that I am using a Scripted Command, and according to the BeanShell documentation, the file name must be the same as the command name.
// source.bsh
source() {
    public class HelloWorld extends Thread {
        count = 5;
        public void run() {
            for (i = 0; i < count; i++)
                print("Hello World!");
        }
    }
}
I invoked executor.bsh from a Java class with:
Interpreter i = new Interpreter();
i.source("scripts/executor.bsh");
// Object val = null;
// val = i.source("scripts/executor.bsh");
// System.out.println("Class:" + val.getClass().getCanonicalName());
// Method m = val.getClass().getMethod("start", null);
// m.invoke(val, null);
Note that I left in some commented code which also shows executing the scripted class from Java, using reflection. And this is the result:
Hello World!
Hello World!
Hello World!
Hello World!
Hello World!