I am testing CXF REST services in Karaf using Pax Exam. The tests almost always run without a hitch on my machine. When run in Jenkins (as part of a Maven build) they typically fail. The failures seem random and unpredictable. The error I receive during a failure relates to an attempt to run a Karaf command. The commands are executed by the following snippet:
def byteArrayOutputStream = new ByteArrayOutputStream();
def printStream = new PrintStream(byteArrayOutputStream);
// Look up the Gogo shell's CommandProcessor from the OSGi service registry
CommandProcessor commandProcessor = getOsgiService(CommandProcessor.class);
// Create a session whose output is captured in the byte array stream
CommandSession commandSession = commandProcessor.createSession(System.in, printStream, System.err);
commandSession.put("APPLICATION", System.getProperty("karaf.name", "root"));
commandSession.put("USER", "karaf");
commandSession.execute(command)
These are the commands I am trying to execute in the test's setup method:
'features:addurl mvn:org.apache.cxf.karaf/apache-cxf/2.7.2/xml/features', 'features:install http', 'features:install cxf'
This is the exception:
org.apache.felix.gogo.runtime.CommandNotFoundException: Command not found: features:addurl
Apparently Karaf occasionally does not start correctly and cannot process these commands. Errors like this one happen randomly in different tests and on different Karaf commands. On my machine they are more likely to happen when the machine is under load.
What may cause Karaf to behave this way? How can I prevent these errors from happening?
Thank you,
Michael
There is also pax-exam-karaf, which has a feature installer that can be used from the configuration. If you want to stick to the "manual" installation, you should make sure the features service is available beforehand, for example by having the service injected.
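For reference, here is a rough sketch of both approaches. It assumes the newer pax-exam-container-karaf API and the CXF features URL from the question; option and method names may differ between Pax Exam versions, so treat this as an illustration rather than a drop-in configuration:
import javax.inject.Inject;
import java.net.URI;
import org.apache.karaf.features.FeaturesService;
import org.ops4j.pax.exam.Option;
import static org.ops4j.pax.exam.CoreOptions.maven;
import static org.ops4j.pax.exam.karaf.options.KarafDistributionOption.features;

public class CxfFeatureSetupSketch {

    // Option A: let Pax Exam install the feature as part of the container
    // configuration, so it is present before any test method runs
    // (the cxf feature pulls in its own dependencies such as http).
    public static Option cxfFeatures() {
        return features(
                maven().groupId("org.apache.cxf.karaf").artifactId("apache-cxf")
                       .version("2.7.2").type("xml").classifier("features"),
                "cxf");
    }

    // Option B: keep the "manual" installation, but let Pax Exam inject the
    // FeaturesService; injection waits for the service, which avoids racing a
    // features subsystem that has not finished starting yet.
    @Inject
    FeaturesService featuresService;

    public void installCxf() throws Exception {
        featuresService.addRepository(new URI(
                "mvn:org.apache.cxf.karaf/apache-cxf/2.7.2/xml/features"));
        featuresService.installFeature("http");
        featuresService.installFeature("cxf");
    }
}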
First, important note: This is a project in which the Selenium/Cucumber test suite is working. It's working locally for the developers, and it's working in the Tekton environment.
Second: I have an identical IntelliJ environment as the developers. And other Selenium/Cucumber projects are running fine on my local machine.
When I try to run a fresh copy of the project in question locally, none of the tests start. And there is no error message.
Normally, this would be caused by missing Glue parameters in the Run/Debug configuration, since IntelliJ for some reason is often unable to add these automatically. That is not the case here.
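(For context: the Glue field corresponds to what an explicit JUnit runner would declare as its glue package. A rough sketch with a hypothetical runner class name; on older Cucumber versions the imports are cucumber.api.junit.Cucumber and cucumber.api.CucumberOptions instead.)
import org.junit.runner.RunWith;
import io.cucumber.junit.Cucumber;
import io.cucumber.junit.CucumberOptions;

// Hypothetical runner, shown only to illustrate what the "Glue" setting maps to;
// the glue package is the step-definition package mentioned further down.
@RunWith(Cucumber.class)
@CucumberOptions(
        features = "src/test/resources",   // assumption: default feature file location
        glue = "e2e.cucumber.felles")
public class RunCucumberIT {
}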
The test runner when I try to start a specific scenario:
Sorry for the language. The tests are all in Norwegian. But I don't think that matters for seeing the problem.
Normally, when there is an actual error in a test step - in this case "Gitt jeg er en vanlig søker" ("Given I am an ordinary applicant") - there would be some info to the right of the steps overview. This is now blank, as can be seen. So the test steps aren't even started.
The run/debug configuration:
e2e.cucumber.felles is the package where the Java file with the first step definition is located. So, that's not the reason.
Any ideas?
So I'm running an integration test/spec with configuration for an ActiveMQ in-memory broker.
SomeSpec.groovy:
@SpringApplicationConfiguration(SomeApplication.class)
@WebIntegrationTest(randomPort = true)
class SomeSpec extends Specification {
application.properties
spring.activemq.in-memory=true
spring.activemq.pooled=false
The in-memory broker starts up and runs fine when I do gradle test, and also runs fine with gradle bootRun at the command line. However, when I run inside IntelliJ without explicitly having it run gradle test, the in-memory broker does not start and the tests fail.
How can I take advantage of the nice test/spec running features in IntelliJ but still have it initialize the in-memory queue properly? I know with Grails you could run with either JUnit or Grails. Is there something similar with Spring/Spring Boot so everything starts up properly?
It's probably because your config files are not refreshed under the project/out/production/config/ location.
When you run it from the command line, it takes the latest application.properties, so everything is fine.
But IDEA takes the already compiled config files, and if they are not rebuilt inside IDEA then it still loads the old configuration.
I'm testing with the Arquillian 1.9.final TOMCAT-EMBED-7 container, and I'm getting questionable results around creating a WebArchive for testing.
In /src/main/resources, I have several configuration files that I do not want to use when running the integration tests, instead I want to provide named ones stored in /src/embed-itest/resources.
// Exclude test classes and any XML/properties files when adding application packages
org.jboss.shrinkwrap.api.Filter x = Filters.exclude(".*Test.*|.*xml|.*properties");
WebArchive webArchive = ShrinkWrap
        .create(WebArchive.class, "mytest.war")
        .addPackages(true, x, "com.myapp")
        //and some other additions
Then at the end of the ShrinkWrap process, I add the specific test files I want to use:
File n = new File("src/embed-itest/resources/test-log4j.properties");
webArchive.addAsResource(n,"log4j.properties");
However, the tests still behave as though they are using /src/main/resources/log4j.properties. I've verified that the _DEFAULT_DEFAULT_mytest.war really does have the test-log4j.properties content as log4j.properties, but when the tests run, the behavior is that of /src/main/resources/log4j.properties. (And this is true for other configuration files, such as camelContext.xml, that I've tried to override.)
Does anyone have some insight, please? I was hoping to leverage the ability to create a custom WebArchive with specific files in the archive for more precise testing, but the actual behavior seems to be that of the 'standard' created war, which limits what I thought was a great capability of Arquillian.
I think the problem is that you are using the Tomcat embedded approach, which means you are sharing the JVM of your tests with your Tomcat instance. I suggest you try the managed or remote mode.
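One quick way to see that sharing in action is to resolve the resource from inside a test method. A minimal sketch (plain JUnit, no Arquillian-specific API; the class name is made up):
import java.net.URL;
import org.junit.Test;

public class WhichLog4jWinsTest {

    @Test
    public void printResolvedLog4jProperties() {
        // With the embedded adapter the test and the container share one JVM and classpath,
        // so this typically resolves to target/classes (i.e. src/main/resources),
        // not to the copy packaged into mytest.war.
        URL url = getClass().getResource("/log4j.properties");
        System.out.println("log4j.properties resolved from: " + url);
    }
}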
I’m trying to integrate Bugzilla Testopia with Jenkins with the aid of the Testopia Plugin for Jenkins.
The general configuration is probably fine, as the connection between Testopia and Jenkins is well established (the Jenkins log says ‘Connecting to Testopia to retrieve automated test cases’ and no error occurs then). However, I’m unable to retrieve any information concerning Test Runs/Test Cases etc. from Testopia.
Moreover, I cannot perform any of the ‘Iterative Test Build Steps’. If I want to ‘Execute Shell’ in ‘Iterative Test Build Steps’ with the Testopia Plugin, no operation is carried out (even if I try echo 12345, etc.). If I use ‘Single Test Build Steps’, the shell command is executed.
My goal is to retrieve a test class name from Testopia (it is stored in the Testopia Test Case’s Automation/Scripts field) and then run a Maven build from Jenkins with this class name set as a parameter. Afterwards, depending on the Jenkins build's success or failure status, I’d like to update the Test Case status in Testopia.
How can I fetch any information from Testopia into Jenkins?
Why are none of the ‘Iterative Test Build Steps’ executed?
Any clues? - Testopia Plugin site example wasn't too helpful for me.
Both Bugzilla and Jenkins are hosted on the same Ubuntu 14.04. I've got the latest stable versions of Jenkins, Bugzilla and Testopia.
Thanks in advance,
M.
EDIT:
Well, this debugging does not work for me. I added a new log recorder with the 'ALL' level chosen, and I cannot see any additional log output either in the job's console output or in the newly created log recorder's output.
Maybe something is wrong with my Testopia installation? Some more details concerning my configuration:
- I've got Testopia installed on the same machine (as Jenkins) and usually I access it through http://'ip_address'/bugzilla
- in the Testopia plugin configuration my URL to the Testopia installation is http://'ip_address'/bugzilla/xmlrpc.cgi
- I've got only one Bugzilla account - it's the admin account, and these are the credentials I use in Jenkins
- sometimes in Jenkins I can see a warning about an improper reverse proxy configuration - maybe it has something to do with the problem
After job execution all of the Testopia fields are 0 - Run Id, Build Id, etc. - which obviously indicates that no information was successfully retrieved from Testopia.
Any ideas on how to check why I cannot retrieve any information from Testopia?
EDIT 2:
In the meantime I think I've found a clue in the jenkins.log file in the Jenkins installation directory:
Exceptions like these occur:
INFO: TESTOPIA_TEST_SUITE_3 #13 main build action completed: SUCCESS
org.apache.xmlrpc.XmlRpcException: The requested method 'TestRun.get' was not found.
at org.apache.xmlrpc.client.XmlRpcStreamTransport.readResponse(XmlRpcStreamTransport.java:197)
...
org.apache.xmlrpc.XmlRpcException: The requested method 'TestRun.get_test_cases' was not found.
at org.apache.xmlrpc.client.XmlRpcStreamTransport.readResponse(XmlRpcStreamTransport.java:197)
Shall I insert the full stacktrace?
It looks like that plugin logs a fair amount of information, though not all of it to the build console output itself.
To debug further, you could try adding a new log recorder for the logger jenkins.plugins.testopia (with log level "all"), run a build, then refresh the web page for the newly-created log recorder to see the output.
You should at least see "Filtering for automated test cases" after connection, information about each test case found, and then log output for each iterative build step as it's run on each test case.
I got a similar problem when I used Python xmlrpc to communicate with my Bugzilla-Testopia server.
I checked the error code and found XMLRPC.pm under my Bugzilla install location: "./WebService/Server/XMLRPC.pm".
The error was thrown by this sub, which checks the login status and forwards to modules, where the module.function must be in PUBLIC_METHODS:
sub handle_login {
    ...
    # Reject the call if the requested module.method is not listed in PUBLIC_METHODS
    if (none { $_ eq $method } $class->PUBLIC_METHODS) {
        ThrowCodeError('unknown_method', { method => $full_method });
    }
    ...
}
I don't know why, but TestCase.get could not be found in PUBLIC_METHODS, so I just commented out that check and then it worked. You can use this quick hack to make sure your client settings are correct. Then you should solve the "PUBLIC_METHODS" problem next.
I have a few EJBs compiled with WebLogic's EJBC, compliant with WebLogic 9.2.1.
Our customer uses Weblogic 9.2.3.
During server start Weblogic gives the following message:
<BEA-010087> <The EJB deployment named: YYY.jar is being recompiled within the WebLogic Server. Please consult the server logs if there are any errors. It is also possible to run weblogic.appc as a stand-alone tool to generate the required classes. The generated source files will be placed in .....>
Consequently, server start takes 1.5 hours instead of 20 min. The next server start takes exactly the same time, meaning Weblogic does not cache the products of the recompilation. Needless to say, we cannot recompile all our EJBs to 9.2.3 just for this specific customer, so we need an on-site solution.
My questions are:
1. Is there any way of telling Weblogic to leave those EJB jars as they are and avoid the re-compilation during server start?
2. Can I tell Weblogic to cache the recompiled EJBs to avoid prolonged restarts?
Our current workaround was to write a script that does this recompilation manually before the EAR's creation and deployment (by simply running java weblogic.appc <jar-name>), but we would rather avoid this solution being used in production.
I FIXED this problem after spending a great deal of time researching and decompiling some classes. I encountered this when migrating from WebLogic 8 to 10.
By this time you might have understood the pain of dealing with Oracle WebLogic tech support; unfortunately they did not have a server configuration setting to disable this.
You need to do two things.
Step 1. If you open the EJB jar files, you can see entries like these:
ejb-jar.xml=3435671213
com.mycompany.myejbs.ejb.DummyEJBService=2691629828
weblogic-ejb-jar.xml=3309609440
WLS_RELEASE_BUILD_VERSION_24=10.0.0.0
You see these hashcodes for each of your EJB names. Make these hashcodes zero, then pack the jar file and deploy it on the server:
com.mycompany.myejbs.ejb.DummyEJBService=0
weblogic-ejb-jar.xml=0
This is just a marker file that weblogic.appc keeps in each EJB jar to trigger the recompilation during server boot-up. I automated this process of setting the hashcodes to zero. The hashcodes remain the same for each EJB even if you execute appc more than once; if you add a new EJB class or delete a class, those entries are added to this marker file.
Note 1:
How do you get this file? If you open domains/yourdomain/servers/yourServerName/cache/EJBCompilerCache/XXXXXXXXX, you will see this file for each EJB. WebLogic sets the hashcodes to zero after it recompiles.
Note 2:
When you generate EJBs using appc, generate them to an exploded directory using -output C:\myejb instead of C:\myejb.jar. This way you can play around with the marker file.
Step 2. You also need a patch from WebLogic. When you install the patch you see a message like this:
"PATCH CRXXXXXX installed successfully. Eliminate EJB recompilation for appc."
I don't remember the patch number, but you can request it from WebLogic.
You need to use both steps to fix the problem; the patch fixes only part of the problem.
Goodluck!!
cheers
raj
The marker file in the EJBs is WL_GENERATED.
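If you want to automate the zeroing step from the earlier answer, here is a minimal sketch in Java. It assumes the exploded-EJB layout from Note 2 and the simple key=value format shown above; the path is hypothetical:
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.List;
import java.util.stream.Collectors;

// Zero the hashcode entries in an exploded EJB's WL_GENERATED marker file.
// The WLS_RELEASE_BUILD_VERSION_* line is left untouched.
public class ZeroWlGeneratedMarker {
    public static void main(String[] args) throws IOException {
        Path marker = Paths.get(args[0]); // e.g. /path/to/exploded-ejb/WL_GENERATED
        List<String> zeroed = Files.readAllLines(marker).stream()
                .map(line -> line.startsWith("WLS_RELEASE_BUILD_VERSION")
                        ? line
                        : line.replaceAll("=\\d+$", "=0"))
                .collect(Collectors.toList());
        Files.write(marker, zeroed);
    }
}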
Just to update the solution we went with - eventually we opted to recompile the EJBs once at the Customer's site instead of messing with the EJBs' internal markers (we don't want Oracle saying they cannot support problems derived from this scenario).
We created two KSH scripts - the first iterates over all the EJB jars, copies them to a temp dir and then re-compiles them in parallel by running several instances of the second script, which does only one thing: java -Drecompiler=yes -cp $CLASSPATH weblogic.appc $1 (with error handling, of course :))
This solution reduced compilation time from 70 min to 15 min. After this we re-create the EAR file and redeploy it with the new EJBs. We do this once per several UAT environment creations, so we save quite a lot of time here (55 min X number of envs per drop X number of drops).