Loading external .class files with Javassist

I have a directory called 'TestDir' which contains several external .class files that I would like to load and modify at runtime with Javassist.
I understand this is how you are supposed to load external classes with Javassist:
ClassPool pool = ClassPool.getDefault();
pool.insertClassPath("C:\\Users\\MainPC\\Documents\\TestDir");
CtClass clazz = pool.getCtClass("TestClass");
This works without throwing any exceptions; however, the moment I try to call any method of 'clazz', for example:
System.out.println(clazz.getGenericSignature());
I receive the following exception:
Exception in thread "main" java.lang.RuntimeException: cannot find TestClass: TestDir.TestClass found in TestClass.class
at javassist.CtClassType.getClassFile3(CtClassType.java:211)
at javassist.CtClassType.getClassFile2(CtClassType.java:178)
at javassist.CtClassType.getGenericSignature(CtClassType.java:379)
at ReflectionTests.main(ReflectionTests.java:33)
Can someone explain to me why this happens?

I did something similar with Javassist, and as far as I remember, the package name is part of the class file. That means your code needs to be changed to:
ClassPool pool = ClassPool.getDefault();
pool.insertClassPath("C:\\Users\\MainPC\\Documents");
CtClass clazz = pool.getCtClass("TestDir.TestClass");
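In other words, the classpath entry you insert must point at the package root, not at the package directory itself, and the lookup must use the fully qualified name. A minimal sketch of the corrected flow (paths and names taken from the question above):
import javassist.ClassPool;
import javassist.CtClass;

public class LoadExternalClass {
    public static void main(String[] args) throws Exception {
        ClassPool pool = ClassPool.getDefault();
        // Point the class path at the package root ("Documents"), because the
        // compiled .class file records its package ("TestDir").
        pool.insertClassPath("C:\\Users\\MainPC\\Documents");
        // Look the class up by its fully qualified name.
        CtClass clazz = pool.getCtClass("TestDir.TestClass");
        System.out.println(clazz.getName()); // should print "TestDir.TestClass"
    }
}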

Related

@HystrixProperty cannot be resolved to a type

I have a method marked with @HystrixCommand that has a fallback method defined. I'm trying to add a Hystrix property to it so that, in case of a timeout, it degrades gracefully into the fallback method.
But when I add the @HystrixProperty, it shows an error in the STS IDE (3.8.2 Release) saying @HystrixProperty cannot be resolved to a type.
Here is what I'm trying to do:
@HystrixCommand(fallbackMethod = "fallbackPerformOperation",
    commandProperties = {@HystrixProperty(name = "execution.isolation.thread.timeoutInMilliseconds", value = "5000")})
public Future<Object> performOperation(String requestString) throws InterruptedException {
    return new AsyncResult<Object>() {
        @Override
        public Object invoke() {
            // .......
        }
    };
}
I'm unable to figure out what the problem is.
Do I need to clear the STS cache? If so, how do I do it?
Thank you.
The IDE does not suggest the import for the HystrixProperty class automatically, so you need to add it manually:
import com.netflix.hystrix.contrib.javanica.annotation.HystrixProperty;
The error should then be gone.
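For reference, both annotations live in the javanica package, so the imports at the top of the test class should look like this (a sketch; the rest of your imports will differ):
import com.netflix.hystrix.contrib.javanica.annotation.HystrixCommand;
import com.netflix.hystrix.contrib.javanica.annotation.HystrixProperty;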

Class cannot resolve module as content unless @Stepwise used

I have a Spock class that, when run as a test suite, throws Unable to resolve iconRow as content for geb.Page, or as a property on its Navigator context. Is iconRow a class you forgot to import? unless I annotate my class with @Stepwise. However, I really don't want the test execution to stop on the first failure, which @Stepwise does.
I've tried writing (copy and pasting) my own extension using this post, but I still get these errors. It is using my extension, as I added some logging statements that were printed out to the console.
Here is one of my modules:
class IconRow extends Module {
    static content = {
        iconRow(required: false) { $("div.report-toolbar") }
    }
}
And a page that uses it:
class Report extends SomeOtherPage {
    static at = { $("div.grid-container").displayed }
    static content = {
        iconRow { module IconRow }
    }
}
And a snippet of the test that is failing:
class MyFailingTest extends GebReportingSpec {
    def setupSpec() {
        via Dashboard
        SomeClass.login("SourMonk", "myPassword")
        assert page instanceof Dashboard
        nav.goToReport("Some report name")
        assert page instanceof Report
    }
    @Unroll
    def "I work"() {
        given:
        at Report
        expect:
        this == that
        where:
        this << ["some list", "of values"]
        that << anotherModule.someContent*.@id
    }
    @Unroll
    def "I don't work"() {
        given:
        at Report
        expect:
        this == that
        where:
        this << ["some other", "list", "of values"]
        that << iconRow.columnHeaders*.attr("innerText")*.toUpperCase()
    }
}
When executed as a suite, "I work" passes and "I don't work" fails because it cannot identify iconRow as content for the page. If I switch the order of the test cases, "I don't work" will pass and "I work" will fail. Alternatively, if I execute each test separately, they both pass.
What I have tried:
- Adding/removing the required: true property from content in the modules
- Prefixing the module name with the class, such as IconRow.iconRow
- Defining my modules as static @Shared properties
- Initializing the modules both in and outside of my setupSpec()
- Making simple getter methods in each module's class that return the module, and referencing content such as IconRow.getIconRow().columnHeaders*.attr("innerText")*.toUpperCase()
- Moving the contents of my setupSpec() into setup()
- Adding autoClearCookies = false into my GebConfig.groovy
- Making a @Shared Report report variable and prefixing all modules with that, such as report.iconRow
Very peculiar note about that last bullet point -- it magically resolves the modules that don't have the prefix -- so it won't resolve report.iconRow but will resolve just iconRow. Absolutely bizarre, because if I remove that variable, the module that was just previously working suddenly can't be resolved again. I even tried declaring this variable and then not prefixing anything, and that did not work either.
Another problem I keep banging my head against is that I'm not sure where the problem even lies. The error it throws leads me to believe it's a project setup issue, but running each feature individually works fine, so it appears to be resolving the classes just fine.
On the other hand, perhaps it's an issue with the session and/or cookies? Although I have yet to see any official documentation on this, the general consensus (from other posts and articles I've read) seems to be that only using @Stepwise will maintain your session between feature methods. If this is the case, why is my extension not working? It's pretty much a copy and paste of @Stepwise without the skipFeaturesAfterFirstFailingFeature method (I can post it if needed), unless there is some other stuff going on behind the scenes with @Stepwise.
Apologies for the wall of text, but I've been trying to figure this out for about 6 hours now, so my brain is pretty fried.
Geb has special support for @Stepwise: if a spec is annotated with it, Geb does not call resetBrowser() after each test; instead, it is called once after the spec is completed. See the code on GitHub.
So basically you need to change your setupSpec to setup so that it will be executed before each test.
Regarding your observation: if you just run a focused test, the setupSpec is executed for that test and thus it passes. The problem is that the cleanup is invoked afterwards and resets the browser, breaking subsequent tests.
EDIT
I overlooked your usage of where blocks: everything in the where block needs to be statically (@Shared) available, so using instance-level constructs won't work. Resetting the browser will also kill every reference, so just getting it beforehand won't work either. Basically, don't use Geb objects in where blocks!
Looking at your code, however, I don't see any reason to use data-driven tests here; this can easily be done with one assertion in a normal test. It is good practice for unit tests to test just one thing, but Geb is not a unit test framework -- it is for acceptance/frontend tests. These are far slower than unit tests, so it makes sense to combine sensible assertions into one test.
class MyFailingTest extends GebReportingSpec {
    def setup() {
        via Dashboard
        SomeClass.login("SourMonk", "myPassword")
        assert page instanceof Dashboard
        nav.goToReport("Some report name")
        assert page instanceof Report
    }
    def "I work"() {
        given:
        at Report
        expect:
        ["some list", "of values"] == anotherModule.someContent*.@id
    }
    def "I don't work"() {
        given:
        at Report
        expect:
        ["some other", "list", "of values"] == iconRow.columnHeaders*.attr("innerText")*.toUpperCase()
    }
}

Using Janino in YARN with Apache Twill causes "Imported class x.y could not be loaded"

I'm porting an open source project, which uses Janino for dynamic compilation of classes, to YARN, using Apache Twill. This works great except for one last error. When Janino is used with Twill, I get an exception that a class cannot be found, although the class is in the classpath and even used.
The exception I'm getting is:
2014-06-09T18:30:40,093Z ERROR o.a.d.e.p.i.p.ProjectRecordBatch [zk1] [37daf04b-7d82-4d2f-987c-59851f2aeafe:frag:0:0] AbstractSingleRecordBatch:next(AbstractSingleRecordBatch.java:60) - Failure during query
org.apache.drill.exec.exception.SchemaChangeException: Failure while attempting to load generated class
at org.apache.drill.exec.physical.impl.project.ProjectRecordBatch.setupNewSchema(ProjectRecordBatch.java:243)
at org.apache.drill.exec.record.AbstractSingleRecordBatch.next(AbstractSingleRecordBatch.java:57)
at org.apache.drill.exec.physical.impl.project.ProjectRecordBatch.next(ProjectRecordBatch.java:83)
at org.apache.drill.exec.record.AbstractSingleRecordBatch.next(AbstractSingleRecordBatch.java:45)
at org.apache.drill.exec.physical.impl.limit.LimitRecordBatch.next(LimitRecordBatch.java:99)
at org.apache.drill.exec.record.AbstractSingleRecordBatch.next(AbstractSingleRecordBatch.java:45)
at org.apache.drill.exec.physical.impl.svremover.RemovingRecordBatch.next(RemovingRecordBatch.java:94)
at org.apache.drill.exec.physical.impl.ScreenCreator$ScreenRoot.next(ScreenCreator.java:80)
at org.apache.drill.exec.work.fragment.FragmentExecutor.run(FragmentExecutor.java:104)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:744)
Caused by: org.apache.drill.exec.exception.ClassTransformationException: Failure generating transformation classes for value:
package org.apache.drill.exec.test.generated;

import org.apache.drill.exec.exception.SchemaChangeException;
import org.apache.drill.exec.expr.holders.BitHolder;
import org.apache.drill.exec.expr.holders.VarCharHolder;
import org.apache.drill.exec.ops.FragmentContext;
import org.apache.drill.exec.record.RecordBatch;
import org.apache.drill.exec.vector.RepeatedVarCharVector;
import org.apache.drill.exec.vector.VarCharVector;
import org.apache.drill.exec.vector.complex.impl.RepeatedVarCharReaderImpl;

public class ProjectorGen0 {
    RepeatedVarCharVector vv0;
    RepeatedVarCharReaderImpl reader4;
    VarCharVector vv5;

    public boolean doEval(int inIndex, int outIndex) throws SchemaChangeException {
        {
            VarCharHolder out3 = new VarCharHolder();
            complex:
            vv0.getAccessor().getReader().setPosition((inIndex));
            reader4.read(0, out3);
            BitHolder out8 = new BitHolder();
            out8.value = 1;
            if (!vv5.getMutator().setSafe((outIndex), out3)) {
                out8.value = 0;
            }
            if (out8.value == 0) {
                return false;
            }
        }
        {
            return true;
        }
    }

    public void doSetup(FragmentContext context, RecordBatch incoming, RecordBatch outgoing) throws SchemaChangeException {
        {
            int[] fieldIds1 = new int[1];
            fieldIds1[0] = 0;
            Object tmp2 = (incoming).getValueAccessorById(RepeatedVarCharVector.class, fieldIds1).getValueVector();
            if (tmp2 == null) {
                throw new SchemaChangeException("Failure while loading vector vv0 with id: org.apache.drill.exec.record.TypedFieldId@1cf4a5a0.");
            }
            vv0 = ((RepeatedVarCharVector) tmp2);
            reader4 = ((RepeatedVarCharReaderImpl) vv0.getAccessor().getReader());
            int[] fieldIds6 = new int[1];
            fieldIds6[0] = 0;
            Object tmp7 = (outgoing).getValueAccessorById(VarCharVector.class, fieldIds6).getValueVector();
            if (tmp7 == null) {
                throw new SchemaChangeException("Failure while loading vector vv5 with id: org.apache.drill.exec.record.TypedFieldId@1ce776c0.");
            }
            vv5 = ((VarCharVector) tmp7);
        }
    }
}
at org.apache.drill.exec.compile.ClassTransformer.getImplementationClass(ClassTransformer.java:302)
at org.apache.drill.exec.ops.FragmentContext.getImplementationClass(FragmentContext.java:185)
at org.apache.drill.exec.physical.impl.project.ProjectRecordBatch.setupNewSchema(ProjectRecordBatch.java:240)
at org.apache.drill.exec.record.AbstractSingleRecordBatch.next(AbstractSingleRecordBatch.java:57)
at org.apache.drill.exec.physical.impl.project.ProjectRecordBatch.next(ProjectRecordBatch.java:83)
at org.apache.drill.exec.record.AbstractSingleRecordBatch.next(AbstractSingleRecordBatch.java:45)
at org.apache.drill.exec.physical.impl.limit.LimitRecordBatch.next(LimitRecordBatch.java:99)
at org.apache.drill.exec.record.AbstractSingleRecordBatch.next(AbstractSingleRecordBatch.java:45)
at org.apache.drill.exec.physical.impl.svremover.RemovingRecordBatch.next(RemovingRecordBatch.java:94)
at org.apache.drill.exec.physical.impl.ScreenCreator$ScreenRoot.next(ScreenCreator.java:80)
at org.apache.drill.exec.work.fragment.FragmentExecutor.run(FragmentExecutor.java:104)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:744)
Caused by: org.codehaus.commons.compiler.CompileException: Line 4, Column 8: Imported class "org.apache.drill.exec.exception.SchemaChangeException" could not be loaded
at org.codehaus.janino.UnitCompiler.compileError(UnitCompiler.java:9014)
at org.codehaus.janino.UnitCompiler.import2(UnitCompiler.java:192)
at org.codehaus.janino.UnitCompiler.access$000(UnitCompiler.java:104)
at org.codehaus.janino.UnitCompiler$1.visitSingleTypeImportDeclaration(UnitCompiler.java:166)
at org.codehaus.janino.Java$CompilationUnit$SingleTypeImportDeclaration.accept(Java.java:171)
at org.codehaus.janino.UnitCompiler.<init>(UnitCompiler.java:164)
at org.apache.drill.exec.compile.JaninoClassCompiler.getClassByteCode(JaninoClassCompiler.java:53)
at org.apache.drill.exec.compile.QueryClassLoader.getClassByteCode(QueryClassLoader.java:69)
at org.apache.drill.exec.compile.ClassTransformer.getImplementationClass(ClassTransformer.java:256)
at org.apache.drill.exec.ops.FragmentContext.getImplementationClass(FragmentContext.java:185)
at org.apache.drill.exec.physical.impl.project.ProjectRecordBatch.setupNewSchema(ProjectRecordBatch.java:240)
at org.apache.drill.exec.record.AbstractSingleRecordBatch.next(AbstractSingleRecordBatch.java:57)
at org.apache.drill.exec.physical.impl.project.ProjectRecordBatch.next(ProjectRecordBatch.java:83)
at org.apache.drill.exec.record.AbstractSingleRecordBatch.next(AbstractSingleRecordBatch.java:45)
at org.apache.drill.exec.physical.impl.limit.LimitRecordBatch.next(LimitRecordBatch.java:99)
at org.apache.drill.exec.record.AbstractSingleRecordBatch.next(AbstractSingleRecordBatch.java:45)
at org.apache.drill.exec.physical.impl.svremover.RemovingRecordBatch.next(RemovingRecordBatch.java:94)
at org.apache.drill.exec.physical.impl.ScreenCreator$ScreenRoot.next(ScreenCreator.java:80)
at org.apache.drill.exec.work.fragment.FragmentExecutor.run(FragmentExecutor.java:104)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:744)
As you can see, the type of the outer exception is SchemaChangeException, but the root cause is that the very same SchemaChangeException class could not be loaded:
Line 4, Column 8: Imported class "org.apache.drill.exec.exception.SchemaChangeException" could not be loaded
So there is something wrong with the class loader, which changes when the application is run with Apache Twill. It works standalone, but in both cases the underlying jars are identical.
Apache Twill also has a function to add additional resources, but adding my jar there didn't work either; instead, I got an exception that the jar is already included:
Exception in thread "ServiceDelegate STARTING" java.lang.RuntimeException: java.util.zip.ZipException: duplicate entry: lib/drill-java-exec-1.0.0-m2-incubating-SNAPSHOT-rebuffed.jar
at com.google.common.base.Throwables.propagate(Throwables.java:160)
at org.apache.twill.yarn.YarnTwillController.doStartUp(YarnTwillController.java:133)
at org.apache.twill.internal.AbstractZKServiceController.startUp(AbstractZKServiceController.java:82)
at org.apache.twill.internal.AbstractExecutionServiceController$ServiceDelegate.startUp(AbstractExecutionServiceController.java:109)
at com.google.common.util.concurrent.AbstractIdleService$1$1.run(AbstractIdleService.java:43)
at java.lang.Thread.run(Thread.java:744)
Caused by: java.util.zip.ZipException: duplicate entry: lib/drill-java-exec-1.0.0-m2-incubating-SNAPSHOT-rebuffed.jar
at java.util.zip.ZipOutputStream.putNextEntry(ZipOutputStream.java:215)
at java.util.jar.JarOutputStream.putNextEntry(JarOutputStream.java:109)
at org.apache.twill.internal.ApplicationBundler.copyResource(ApplicationBundler.java:347)
at org.apache.twill.internal.ApplicationBundler.createBundle(ApplicationBundler.java:140)
at org.apache.twill.yarn.YarnTwillPreparer.createContainerJar(YarnTwillPreparer.java:388)
at org.apache.twill.yarn.YarnTwillPreparer.access$300(YarnTwillPreparer.java:106)
at org.apache.twill.yarn.YarnTwillPreparer$1.call(YarnTwillPreparer.java:264)
at org.apache.twill.yarn.YarnTwillPreparer$1.call(YarnTwillPreparer.java:253)
at org.apache.twill.yarn.YarnTwillController.doStartUp(YarnTwillController.java:98)
... 4 more
The underlying classloader used is a URLClassLoader. It's initialized with an empty array, yet it works for the standalone application; the problem only appears when running with Apache Twill. Where does it get the URLs it should look up from? How could I check that?
The classloader definition:
public class QueryClassLoader extends URLClassLoader {
    static final org.slf4j.Logger logger = org.slf4j.LoggerFactory.getLogger(QueryClassLoader.class);
    private final ClassCompiler classCompiler;
    private AtomicLong index = new AtomicLong(0);
    private ConcurrentMap<String, byte[]> customClasses = new MapMaker().concurrencyLevel(4).makeMap();

    public QueryClassLoader(boolean useJanino) {
        super(new URL[0]);
        if (useJanino) {
            this.classCompiler = new JaninoClassCompiler(this);
        } else {
            throw new UnsupportedOperationException("Drill no longer supports using the JDK class compiler.");
        }
    }
    ...
Any ideas on where I could look, why the error occurs, or how to solve it?
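As a diagnostic (not from the original post), one way to check which URLs and parents a ClassLoader chain actually searches is a small probe like this:
import java.net.URL;
import java.net.URLClassLoader;

public class ClassLoaderProbe {
    public static void main(String[] args) {
        // Walk up from the context ClassLoader, printing each loader and,
        // for URLClassLoaders, the URLs it searches.
        ClassLoader cl = Thread.currentThread().getContextClassLoader();
        while (cl != null) {
            System.out.println(cl);
            if (cl instanceof URLClassLoader) {
                for (URL url : ((URLClassLoader) cl).getURLs()) {
                    System.out.println("  " + url);
                }
            }
            cl = cl.getParent();
        }
    }
}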
The same question was asked on the Apache Twill mailing list. Here is the discussion and the proposed solution:
http://mail-archives.apache.org/mod_mbox/twill-dev/201406.mbox/%3CCAHqY-MOa8jBYs%3DEZENxxNZg-9YGMR5SASg76P_k6%2Bm6p2L9JuQ%40mail.gmail.com%3E
To repeat my answer from the mailing list:
I am not familiar with how Janino works, but it seems to me that it may not be using the context ClassLoader to load classes, or at least the thread that is compiling the generated class does not have the context ClassLoader set properly.
The way that Twill works is pretty straightforward. It creates a "launcher.jar", which has no dependency on any library, and starts the JVM in a YARN container like this:
java -cp launcher.jar ....
Hence the system classloader has no user/library classes, but only the Launcher class.
Then in the Launcher.main() method, it creates a URLClassLoader, using all the jars + .class files inside the "container.jar" file, to load the user TwillRunnable. It also sets it as the context ClassLoader of the thread that calls the "run()" method. So, if you want to load classes manually (through ClassLoader or Class.forName) in a different thread than the "run()" thread, you'll have to set the context ClassLoader of that thread or explicitly construct the ClassLoader with the correct parent ClassLoader.
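To illustrate both options, here is a minimal, self-contained sketch (the class name is hypothetical; applied to the question's code, option 1 would mean passing the context ClassLoader as the parent in QueryClassLoader's super(...) call instead of using the single-argument constructor):
import java.net.URL;
import java.net.URLClassLoader;

public class ContextClassLoaderFix {
    public static void main(String[] args) throws Exception {
        // Option 1: build the custom loader with the context ClassLoader as parent,
        // so classes visible to Twill's Launcher are visible to compiled code too.
        ClassLoader parent = Thread.currentThread().getContextClassLoader();
        URLClassLoader queryLoader = new URLClassLoader(new URL[0], parent);
        System.out.println("parent of queryLoader: " + queryLoader.getParent());

        // Option 2: propagate the context ClassLoader to a worker thread that
        // triggers compilation or uses Class.forName.
        Thread worker = new Thread(() ->
            System.out.println("worker sees: " + Thread.currentThread().getContextClassLoader()));
        worker.setContextClassLoader(parent);
        worker.start();
        worker.join();
    }
}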

Mono and Extension Methods with MonoDevelop 2.8.5

I have written a unit test with MD 2.8.5 in a project that includes System.Core and targets Mono/.NET 3.5. I really like the Assert.Throws of the newer NUnit, so I decided to write an extension method for it. I created a new file with the content below, in the same namespace as the test. Can anyone see my error?
public delegate void TestDelegate();

public static class AssertThrows
{
    public static T Throws<T>(this Assert assert, TestDelegate td)
        where T : Exception
    {
        try
        {
            td();
        }
        catch (T e)
        {
            return e;
        }
        catch
        {
            throw new AssertionException("Wrong exception type.");
        }
        throw new AssertionException("Did not throw an error.");
    }
}
MonoDevelop "sees" the extension method through its code completion. However, the compiler reports:
Performing main compilation...
/Users/shamwow/dev/EngineTests.cs(19,37): error CS0117: `NUnit.Framework.Assert' does not contain a definition for `Throws'
/Applications/MonoDevelop.app/Contents/MacOS/lib/monodevelop/AddIns/NUnit/nunit.framework.dll (Location of the symbol related to previous error)
Build complete -- 1 error, 0 warnings
(I know MD and Mono are not the same.)
I assume you're trying to use it just as:
Assert.Throws<FooException>(() => ...);
Extension methods don't work like that: they appear to be instance methods on the extended type. Since you won't have an instance of Assert, you can't call your extension method like that.

Symbian: kern-exec 3 panic on RLibrary::Load

I have troubles with dynamic loading of libraries - my code panics with Kern-Exec 3. The code is as follows:
TFileName dllName = _L("mydll.dll");
TFileName dllPath = _L("c:\\sys\\bin\\");
RLibrary dll;
TInt res = dll.Load(dllName, dllPath); // Kern-Exec 3!
TLibraryFunction f = dll.Lookup(1);
if (f)
f();
I receive the panic on TInt res = dll.Load(dllName, dllPath);. What can I do to get rid of it? mydll.dll really is my DLL, which has only one exported function (for test purposes). Maybe something is wrong with the DLL itself? Here's what it is:
def file:
EXPORTS
_ZN4Init4InitEv # 1 NONAME
pkg file:
#{"mydll DLL"},(0xED3F400D),1,0,0
;Localised Vendor name
%{"Vendor-EN"}
;Unique Vendor name
:"Vendor"
"$(EPOCROOT)Epoc32\release\$(PLATFORM)\$(TARGET)\mydll.dll"-"!:\sys\bin\mydll.dll"
mmp file:
TARGET mydll.dll
TARGETTYPE dll
UID 0x1000008d 0xED3F400D
USERINCLUDE ..\inc
SYSTEMINCLUDE \epoc32\include
SOURCEPATH ..\src
SOURCE mydllDllMain.cpp
LIBRARY euser.lib
#ifdef ENABLE_ABIV2_MODE
DEBUGGABLE_UDEBONLY
#endif
EPOCALLOWDLLDATA
CAPABILITY CommDD LocalServices Location MultimediaDD NetworkControl NetworkServices PowerMgmt ProtServ ReadDeviceData ReadUserData SurroundingsDD SwEvent TrustedUI UserEnvironment WriteDeviceData WriteUserData
source code:
// Exported Functions
namespace Init
{
    EXPORT_C TInt Init()
    {
        // no implementation required
        return 0;
    }
}
header file:
#ifndef __MYDLL_H__
#define __MYDLL_H__

// Include Files
namespace Init
{
    IMPORT_C TInt Init();
}

#endif // __MYDLL_H__
I have no ideas about this... Any help is greatly appreciated.
P.S. I'm trying to use RLibrary::Load because I have trouble with static linkage. When I link statically, my main program doesn't start at all. I decided to check what happens and discovered this issue with RLibrary::Load.
A KERN-EXEC 3 panic is caused by an unhandled exception (CPU fault) generated by trying to access a region of memory invalidly. This invalid memory access can be for either code (for example, a bad PC caused by stack corruption) or data (for example, accessing freed memory). As such, these panics are typically observed when dereferencing a NULL pointer (the equivalent of a segfault).
Certainly the call to RLibrary::Load should never raise a KERN-EXEC 3 due to a programmatic error; it is likely to be an environmental issue, so I have to speculate on what is happening.
I believe the issue observed is due to stack overflow. Your MMP file does not specify the stack or heap size the initial thread should use, so the default of 4Kb (if I remember correctly) will be used. Equally, you are using TFileName; placing these on the stack is generally not recommended, precisely to avoid... stack overflow.
You would be better off using the _LIT() macro instead - this will allow you to provide the RLibrary::Load function with a descriptor directly referencing the constant strings as located in the constant data section of the binary.
As a side note, you should check the error value to determine the success of the function call.
_LIT(KMyDllName, "mydll.dll");
_LIT(KMyDllPath, "c:\\sys\\bin\\");
RLibrary dll;
TInt res = dll.Load(KMyDllName, KMyDllPath); // Hopefully no Kern-Exec 3!
if (res == KErrNone)
{
    TLibraryFunction f = dll.Lookup(1);
    if (f)
        f();
}
// else handle error
The fact that you can't use static linkage should be a strong warning to you. It shows that there is something wrong with your DLL, and using dynamic linking won't change anything.
Usually in these cases the problem is mismatched capabilities. The DLL must have at least the same set of capabilities as your main program, and all of those capabilities should be covered by your developer certificate.