OSGi application subsystem is ACTIVE, but its components are not - kotlin

I have created an OSGi bundle (written in Kotlin) containing a very basic component, which I have annotated as @Component(immediate = true). This bundle behaves as expected using Felix 6.0.3.
@Component(immediate = true)
class Bongo @Activate constructor(
    @Reference(service = LoggerFactory::class)
    private val logger: Logger
) {
    init {
        System.err.println("-------------- BONGO!")
        logger.info("Started {}", this::class.java)
    }

    @Activate
    fun doIt() {
        throw InternalError("BOOM!")
    }
}
I then zip this bundle up (with some others) and feed it into Apache Aries as a trivial application subsystem. I haven't created an explicit SUBSYSTEM.MF here because the default values appear to be what I want. Aries installs and starts my subsystem, and then reports that it is ACTIVE. I have even confirmed that a BundleActivator has been invoked correctly. However, I see no evidence that my @Component has been started. It looks like SCR has ignored it, which seems odd because I would have thought that I'd need SCR to run an application subsystem. (I have heard that Declarative Services have replaced BundleActivator...)
I have scoured the OSGi documentation and found no mention of needing to do anything with an OSGi subsystem other than "start" it, so I am baffled at how to proceed from here. Can anyone suggest anything I might have missed please?
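For context, an explicit SUBSYSTEM.MF spelling out those defaults might look roughly like this (the symbolic name and version are illustrative):
Subsystem-ManifestVersion: 1
Subsystem-SymbolicName: com.example.bongo.app
Subsystem-Version: 1.0.0
Subsystem-Type: osgi.subsystem.application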
For reference, these are the Felix / Aries bundles from my bndrun file:
org.apache.aries.subsystem.api;version='[2.0.10,2.0.11)',\
org.apache.aries.subsystem.core;version='[2.0.10,2.0.11)',\
org.apache.aries.util;version='[1.1.1,1.1.2)',\
org.apache.felix.bundlerepository;version='[2.0.10,2.0.11)',\
org.apache.felix.configadmin;version='[1.9.18,1.9.19)',\
org.apache.felix.coordinator;version='[1.0.2,1.0.3)',\
org.apache.felix.log;version='[1.2.2,1.2.3)',\
org.apache.felix.logback;version='[1.0.2,1.0.3)',\
org.apache.felix.scr;version='[2.1.20,2.1.21)',\
org.eclipse.equinox.region;version='[1.2.101,1.2.102)',\
Thanks,
Chris

Thanks to Neil Bartlett, I now understand that each application subsystem would need to contain its own SCR bundle before Felix could find its components. Specifically:
SCR is not just a dependency, it scans bundles for the Service-Component header. The Declarative Services specification does not describe any way for SCR to discover bundles inside a subsystem of the running framework, therefore your bundles will be invisible to it.
David Jencks has also elaborated specifically about the Felix SCR:
IIRC you need to configure SCR with the ds.global.extender flag set to true, then the single SCR will find components everywhere.
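For Felix SCR that flag is a framework property, so in a bndrun it could be set along these lines (a sketch; I haven't verified it against this exact SCR version):
-runproperties: ds.global.extender=true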

Related

How to provide an HttpClient to ktor server from the outside to facilitate mocking external services?

I am trying to provide an HttpClient from the outside to my ktor server so that I can mock external services and write tests; however, I get this exception when I run my test:
io.ktor.server.application.DuplicatePluginException: Please make sure that you use unique name for the plugin and don't install it twice. Conflicting application plugin is already installed with the same key as `Compression`
at app//io.ktor.server.application.ApplicationPluginKt.install(ApplicationPlugin.kt:112)
at app//com.example.plugins.HTTPKt.configureHTTP(HTTP.kt:13)
at app//com.example.ApplicationKt.module(Application.kt:14)
at app//com.example.ApplicationTest$expected to work$1$1.invoke(ApplicationTest.kt:39)
at app//com.example.ApplicationTest$expected to work$1$1.invoke(ApplicationTest.kt:38)
and that's a bit unexpected to me because I am not applying the Compression plugin twice, as far as I can tell. If I run the server normally and manually call my endpoint with curl, then it works as expected. What am I doing wrong?
I added a runnable sample project with a failing test, based on the official ktor-documentation-sample project.
The problem is that you have an application.conf file and, by default, the testApplication function tries to load the modules enumerated there. Since you also load them explicitly in the application {} block, the DuplicatePluginException occurs. To solve the problem you can explicitly load an empty configuration instead of the default one:
// ...
application {
    module(client)
}
environment {
    // Empty config, so testApplication doesn't also load modules from application.conf
    config = MapApplicationConfig()
}
// ...
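Alternatively, if the test should still use file-based configuration, you could point the environment at a dedicated test config rather than an empty one; something like this, where the file name is illustrative:
environment {
    config = ApplicationConfig("application-test.conf")
}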

Network calls not working in EXE distribution build of Compose for Desktop Application

I have used the Ktor JVM client for network calls in a Compose for Desktop application.
Network calls work fine in the debug build, i.e. when I simply run the application.
But when I create the EXE distribution file by executing the packageExe task in Gradle, install the resulting EXE on my machine, and then run the application, network calls do not work. I have checked that the internet connection is working properly.
Please provide a solution to fix this issue. Thanks in advance.
Your question doesn't give any details about the failure mode (compile error? runtime exception? empty data? etc.).
But if I had to speculate based on such limited information, I'd guess it's probably this: https://github.com/JetBrains/compose-jb/issues/429
Specifically, when packaging, you need to specify which JVM modules you want to be packed into your distributable app, and likely you are missing your crypto module. Try something like this:
compose.desktop {
    application {
        mainClass = "MainKt"
        nativeDistributions {
            modules("jdk.crypto.ec")
            targetFormats(TargetFormat.Dmg, TargetFormat.Msi, TargetFormat.Deb)
            packageName = "untitled"
        }
    }
}
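If you are unsure which modules are missing, the Compose Gradle plugin also has a task that inspects the application and suggests them (if I recall the task name correctly):
./gradlew suggestRuntimeModules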

getClassPath() method in the WebLogic context class loader does not consider package preferences in weblogic.xml

A related problem: Get the class path from the context class loader (of WebLogic for instance)
This is already solved in How to set up the context class loader's classpath for runtime compilation?.
Current problem: Get the same (proper) class path used to run a web app
Reflectively calling the "getClassPath()" method works, but it returns a dirty classpath containing unwanted modules from the $ORACLE_HOME/oracle_common/modules directory.
Problematic scenario:
Deploy a web app "Parent" in WebLogic // <- works
Get the context class path (it's a String object) by reflectively calling "getClassPath()" // <- works
Fork a process out of the main "Parent" process and run it using the context class path // <- fails
04:36:45,238 [Thread-41] ERROR ChildProcess - Exception in thread "main" java.lang.NoSuchMethodError: com.fasterxml.jackson.databind.ObjectMapper.configOverride(Ljava/lang/Class;)Lcom/fasterxml/jackson/databind/cfg/MutableConfigOverride;
Explanation
While the context class path contains the necessary dependencies for the child process to run, they are overshadowed by WebLogic's own dependencies. The result is runtime failures such as the one shown above.
Workaround
a) Use a newer version of WebLogic Server that hopefully uses newer versions of the artifacts needed by the child process // risky endeavour
b) Manually process the context class path and remove any artifact that would shadow their more-recent counterparts
Solution b) looks more practical, but I don't like it for many reasons:
The reflective call to "getClassPath" returns a String, and looking for artifact names in Strings feels fragile
I wouldn't know what shadows what. WebLogic prepends its weird artifacts at the start of the string before listing the web app's own dependencies.
Only weblogic.xml has info on the web app's package preferences. I wish I could mimic how WebLogic processes this file to run the web app (Parent) and use that to properly run the child process
It seems to me that a process forked from a web app running in WebLogic does not enjoy the same package preferences expressed in "weblogic.xml" that the web app (Parent) enjoyed when it was deployed and started running.
Other than the above suggestions, I welcome any stronger solutions.
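For what it's worth, a minimal Kotlin sketch of workaround b), assuming the reflectively obtained class path is already in hand as a String (the filter predicate and the paths are illustrative only):
import java.io.File

// Drop entries under oracle_common/modules so they cannot shadow the
// web app's own, newer artifacts when the child process is launched.
fun pruneClassPath(rawClassPath: String): String =
    rawClassPath.split(File.pathSeparator)
        .filterNot { it.contains("oracle_common${File.separator}modules") }
        .joinToString(File.pathSeparator)

fun main() {
    val dirty = "/wls/oracle_common/modules/jackson.jar:/app/WEB-INF/lib/jackson-2.9.8.jar"
    println(pruneClassPath(dirty)) // on Unix-like systems, keeps only the web app's jar
}
This still suffers from the "I wouldn't know what shadows what" objection above; it merely makes the string surgery explicit.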

IntelliJ run vs. running a jar, with a Spring Boot Kotlin multi-module Gradle project with social OAuth2

TL;DR: Why does everything run fine when started via IntelliJ, and why is it broken when calling java -jar app.jar? And how do I fix this?
Alright, I have some issues with a backend I am trying to dockerize. I have an application created with Spring Boot (1.4.2.RELEASE) following the Spring OAuth (2.0.12.RELEASE) guide on their page. I follow the Gradle version, since I prefer Gradle over Maven. Also I am using Kotlin instead of Java. Everything is fine: I start my backend with static front end via IntelliJ, I can log in via Facebook (and Google and GitHub), I receive a nice Principal which holds all the information I need, and I can modify Spring Security to authorize and permit endpoints. So far so good.
Now for the bad part: when I run either ./gradlew clean build app:bootRun or ./gradlew clean build app:jar and run the jar via java -jar (like I will do in my Docker container), my backend comes up and my static front end pops up. Now I want to log in via Facebook: I end up on the Facebook login page, I enter my credentials, and... nothing!
I end up back on my homepage, not logged in, with no log messages that mean anything to me, just silence. The last thing I see in the log is: Getting user info from: https://graph.facebook.com/me
This URL gives me the following in my browser:
{
    "error": {
        "message": "An active access token must be used to query information about the current user.",
        "type": "OAuthException",
        "code": 2500,
        "fbtrace_id": "GV/58H5f4fJ"
    }
}
When going to this URL after an IntelliJ start, it gives me credential details. Obviously something is going wrong, but I have no clue what, especially since a run from IntelliJ works fine. There is some difference between how the jar is started and how IntelliJ's run config works, but I have no clue where to search for what. I could post trace logging or all my Gradle files, but perhaps that's too much info to put in one question. I will definitely update this question if someone needs more details :)
The structure outline of this project is as follows:
root:
- api: is going to be open-sourced later; contains REST definitions and DTOs.
- core: contains the meat. Its Gradle file includes spring-boot-starter, -web, -security, spring-security-oauth2, and some Jackson stuff.
- rest: contains versioned REST service implementations.
- app: contains Angular webjars amongst others, the front end, and my `@SpringBootApplication`, `@EnableOAuth2Client`, and the impl of `WebSecurityConfigurerAdapter`.
Why does everything run fine when started via IntelliJ, and why is it broken using bootRun or the jar artefact? And how do I fix this?
I found it. The problem was not related to multi-module Gradle, Spring Boot, or OAuth2. In fact it was due to the Gradle source-set configuration, where Java files were supposed to be in the Java source-set folder and Kotlin files in the Kotlin source-set folder:
sourceSets {
    main.java.srcDirs += 'src/main/java'
    main.kotlin.srcDirs += 'src/main/kotlin'
}
As Will Humphreys stated in his comment above, IntelliJ takes all source sets and runs the app. However, the jar built by Gradle honours the stricter source sets defined in the build.gradle file. I had a Java file in my Kotlin source set, which is no problem for IntelliJ, but that file never made it into the jar Gradle created.
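Concretely, that means making each file live in the source set matching its language, e.g. (paths illustrative):
src/main/java/com/example/SomeJavaClass.java      // Java sources here
src/main/kotlin/com/example/SomeKotlinClass.kt    // Kotlin sources here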
I found my missing bean issue with the code below:
@Bean
public CommandLineRunner commandLineRunner(ApplicationContext ctx) {
    return args -> {
        System.out.println("Let's inspect the beans provided by Spring Boot:");
        String[] beanNames = ctx.getBeanDefinitionNames();
        Arrays.sort(beanNames);
        for (String beanName : beanNames) {
            System.out.println(beanName);
        }
    };
}
The bean I was missing was called AuthenticationController, which is a @RestController, and kinda crucial for my authentication code.

Using Attach API Outside Of JDK

I have a small application that uses the Attach API to modify some third-party classes during runtime. Alas, I have run into a large problem: the Attach API only comes with the JDK. The necessary classes I can copy from the JDK and add into my project, but the native library responsible for it (attach.(dll|so)) I can't, because I would have to copy the attach library from a resource inside the jar and put it in the JRE/lib directory.
That would not work if the user isn't root on a Linux machine, so I would lose compatibility for a lot of users (this app is supposed to run on a server, most servers are Linux, and I can't be sure all users are root).
I looked into all the classes responsible for the Attach API (VirtualMachine, AttachProvider, etc.) but found no place where the library is loaded.
Is it possible to do this? I mean, can I use the Attach API outside of a JDK installation? If so, how?
You can do so by modifying java.library.path:
static void addToLibPath(String path) throws NoSuchFieldException,
                                             SecurityException,
                                             IllegalArgumentException,
                                             IllegalAccessException {
    if (System.getProperty("java.library.path") != null) {
        // If java.library.path is not empty, we prepend our path.
        // Note that path.separator is ; on Windows and : on Unix-like
        // systems, so we can't hard-code it.
        System.setProperty("java.library.path",
                           path + System.getProperty("path.separator")
                                + System.getProperty("java.library.path"));
    } else {
        System.setProperty("java.library.path", path);
    }
    // Important: java.library.path is cached.
    // We use reflection to clear the cache.
    Field fieldSysPath = ClassLoader.class.getDeclaredField("sys_paths");
    fieldSysPath.setAccessible(true);
    fieldSysPath.set(null, null);
}
Calling addToLibPath("path") will add "path" to java.library.path.
Please note that java.library.path is cached, and reflection is required to clear the cache.
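To illustrate, a hedged usage sketch in Kotlin, assuming the helper above lives on a (hypothetical) LibPathUtil class and that the attach native library has been unpacked to /opt/myapp/native:
fun main() {
    // Prepend the directory holding attach.dll / libattach.so, then clear the cache.
    LibPathUtil.addToLibPath("/opt/myapp/native")

    // Attach to a target JVM by pid; the pid and agent path are illustrative.
    val vm = com.sun.tools.attach.VirtualMachine.attach("12345")
    try {
        vm.loadAgent("/opt/myapp/agent.jar")
    } finally {
        vm.detach()
    }
}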
As far as I know, you need to run the application doing the "attach" from within a JDK (not a JRE). By doing this you don't need to worry about providing the Attach API or its dependencies, as they are all provided and managed by the JDK. You also shouldn't have any "root" concerns, as you can extract and run a JDK as any user (it doesn't have to be installed or executed as "root"). You'll just need to ensure that the program doing the attaching and the program being attached to run as the same OS user, so as not to run into security restrictions.
Our experience is that there is no reliable way to use the Attach API without a full JDK. This was particularly acute on Windows. You might get it to work, but you might want to look into plain old JMX instead.