HttpHostConnectException causes the Apache Stanbol Integration Tests to fail

I tried to install the Stanbol version from the "release-0.12" branch on GitHub.
On my system I have:
Apache Maven 3.0.5
Maven home: /usr/share/maven
Java version: 1.7.0_55, vendor: Oracle Corporation
Java home: /usr/lib/jvm/java-7-openjdk-i386/jre
When I run the command:
mvn install
I get the following error for the Apache Stanbol Integration Tests => error-log
The first lines of the error are:
06.08.2014 15:47:02.025 *INFO * [main] Setting org.osgi.service.http.port=8765
06.08.2014 15:47:02.026 *INFO * [main] Starting launcher ...
06.08.2014 15:47:02.030 *INFO * [main] HTTP server port: 8765
15:47:03,614 INFO StanbolTestBase:163 - Got HttpHostConnectException at http://localhost:8765/ - will retry
When I skipped the tests, I also got no response from the server...
I already tried it with Java version 1.6, but then I got this error:
[WARNING] Rule 0: org.apache.maven.plugins.enforcer.RequireJavaVersion failed with message:
Java 7 or higher is required to compile this module
Does anyone have an idea what I did wrong (are there further software requirements)? Or how can I get the server running correctly?

The integration test starts a Stanbol server (actually the full launcher) in its own JVM. The test waits for up to 180 seconds for this server to start; during that time it repeatedly sends test requests to check whether the server is up and running.
Based on the provided log, this period starts at about "15:47", so the test should wait until about "15:50" before giving up.
Because of the line
^C15:48:42,236 INFO StanbolTestBase:146 - Got 404 at http://localhost:8765/entityhub - will retry
in the log, my guess is that the build process was manually canceled with ^C before the server had fully started.
The server-side logs of the test run are available at target/launchdir/stanbol/logs/error.log. If the integration tests fail, you will usually find the reason in this log file.
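For example, you can follow that log from a second terminal while the build runs, and separately probe whether the launcher has come up (the port 8765 and the log path are taken from the output above):
# Watch the server-side log while the integration tests run
tail -f target/launchdir/stanbol/logs/error.log

# In another terminal, check whether the launcher is answering yet
curl -i http://localhost:8765/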

Related

Problems with CATALINA_PID and ARTIFACTORY_PID while upgrading Artifactory to the latest version

While upgrading my Artifactory server (free OSS version) from version 5.2.0 to the latest 5.4.5, I was hit by an ARTIFACTORY_PID problem.
After migrating from 5.3.2 to 5.4.0, the Artifactory server refused to start, complaining about
PID file /var/opt/jfrog/run/artifactory.pid not readable (yet?) after start.
I found the only way around it was to remove the line export CATALINA_PID=$ARTIFACTORY_PID from Tomcat's setenv.sh.
Note that the upgrade from 5.2.0 to 5.3.2 went smoothly.
However, after upgrading from 5.4.0 to the latest 5.4.5 this trick does not work anymore. Now I get an error:
Job for artifactory.service failed because a configured resource limit was exceeded. See "systemctl status artifactory.service" and "journalctl -xe" for details.
And when executing service artifactory status, I get:
● artifactory.service - Setup Systemd script for Artifactory in Tomcat Servlet Engine
Loaded: loaded (/usr/lib/systemd/system/artifactory.service; enabled; vendor preset: disabled)
Active: activating (auto-restart) (Result: resources) since Tue 2017-07-25 09:40:10 CEST; 4s ago
Process: 31912 ExecStart=/opt/jfrog/artifactory/bin/artifactoryManage.sh start (code=exited, status=0/SUCCESS)
Jul 25 09:40:10 linux systemd[1]: Failed to start Setup Systemd script for Artifactory in Tomcat Servlet Engine.
Jul 25 09:40:10 linux systemd[1]: Unit artifactory.service entered failed state.
Jul 25 09:40:10 linux systemd[1]: artifactory.service failed.
In fact, Artifactory is now running and reports version 5.4.5, but I am not happy about all those errors above.
Plus, I am struggling to understand the purpose of CATALINA_PID and/or ARTIFACTORY_PID. Why was Tomcat failing on startup because of this file? What was wrong with the permissions? I don't think I did anything unusual beforehand.
The only difference is that it was previously installed from an officially downloaded RPM, but now from the official remote yum repository.
If I try to create an empty /var/opt/jfrog/run/artifactory.pid file while Artifactory is running, it gets deleted. Who is deleting this file and why? Is this standard Tomcat behavior?
OS: CentOS 7, up to date.
In my case (in a slow virtual machine) the error message from the command artifactoryManage.sh start was:
ERROR: Artifactory Tomcat server did not start in 60 seconds. Please check the logs
The log file showed that the only problem was slowness (/var/opt/jfrog/artifactory/logs/artifactory.log):
### Artifactory successfully started (64.802 seconds) ###
The problem was solved by adding a longer timeout to the service definition at /etc/systemd/system/artifactory.service:
[Service]
Environment=START_TMO=120
After editing the service definition, as you know, systemctl daemon-reload was needed.
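For completeness, the whole sequence might look like this (assuming the unit file path from above):
# Add Environment=START_TMO=120 under [Service] in the unit file, then:
sudo systemctl daemon-reload
sudo systemctl restart artifactory.service
sudo systemctl status artifactory.service   # should now report "active (running)"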
Run this script:
/opt/jfrog/artifactory/bin/artifactoryManage.sh start
It will show you the exact error.
In my case the Java version was outdated, so I updated to Java 1.8.
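As a quick sanity check before starting the service again (paths may differ on your installation), you can verify which Java will be picked up:
java -version      # should report 1.8.x after the update
echo $JAVA_HOME    # should point at the Java 8 installation, if set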

Red5 Service Fails to Start on Win 10 - Incorrect Function

I installed Red5. Service installed ok, but when I manually try to start it, I get the following error in my Windows Event Log:
"The Red5 Media Server service terminated with the following service-specific error:
Incorrect function"
In the commons-daemon.log, I see the following:
[2017-05-17 20:36:54] [info] [11044] Starting service...
[2017-05-17 20:36:54] [error] [11044] Failed creating java
[2017-05-17 20:36:54] [error] [11044] ServiceStart returned 1
[2017-05-17 20:36:54] [info] [13816] Run service finished.
[2017-05-17 20:36:54] [info] [13816] Commons Daemon procrun finished
Event ID was 7024. Any help would be appreciated. Thanks!
Had the same issue on Windows Server 2012 R2. The solution was to install the latest release of the Java SE Development Kit. In my case I just installed Java SE JDK 8u144: http://download.oracle.com/otn-pub/java/jdk/8u144-b01/090f390dda5b47b9b721c7dfaa008135/jdk-8u144-windows-x64.exe
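"Failed creating java" from commons-daemon usually means the service wrapper could not load a matching JVM, so after installing the JDK it is worth confirming that the Java on the PATH matches the service's architecture (a quick check, assuming a 64-bit service):
java -version
:: Look for "64-Bit Server VM" in the output; a 32-bit JVM cannot be
:: loaded by a 64-bit service wrapper (and vice versa)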
I resolved it by setting up the rtmp:// URL correctly. I had not entered a stream name, as I thought the name parameter above the rtmp text box would do it for me.
It works, but it seems that the chip in my Note 5 just doesn't have the CPU power to stream smoothly, although I will admit I have only spent about 20 minutes trying to tune settings for best performance on my Sprint cell service.

Live Migration Failure: unable to execute QEMU command 'migrate': Migration disabled: failed to allocate shared memory

I have a two-node OpenStack Mitaka environment consisting of a controller/compute node and a compute node.
I've followed the setup guide to enable instance live migration using LVM block storage, i.e. there is no shared storage backend, just local LVM block storage.
When the live migration is performed via OpenStack Horizon, a success message is displayed; however, the migration is far from successful. This worked pretty much out of the box with our Juno installation. I've exhausted Google and cannot find any other instances of people facing the same problem. I thought it might have been a time synchronisation problem, so I set both nodes to UTC. Still the problem persists.
Source machine /var/log/nova/nova-compute.log
2016-08-12 15:56:42.120 2230 ERROR nova.virt.libvirt.driver [req-b71ea7b0-5fa8-4b57-92d2-4edec62135c2 b017d86d1143461a92a267d4b912c104 88c686f09e1b427fb750f5c00716f84e - - -] [instance: 5763b6b6-370c-448c-8e8f-8b71eafaa8f1] Migration operation has aborted
2016-08-12 15:56:42.470 2230 ERROR nova.virt.libvirt.driver [req-b71ea7b0-5fa8-4b57-92d2-4edec62135c2 b017d86d1143461a92a267d4b912c104 88c686f09e1b427fb750f5c00716f84e - - -] [instance: 5763b6b6-370c-448c-8e8f-8b71eafaa8f1] Live Migration failure: internal error: unable to execute QEMU command 'migrate': Migration disabled: failed to allocate shared memory
Target node /var/log/libvirt/libvirtd.log
2016-08-12 15:56:41.864+0000: 2170: error : qemuMonitorJSONGetMigrationStatsReply:2443 : internal error: info migration reply was missing return status
2016-08-12 15:56:41.864+0000: 2170: error : virNetClientProgramDispatchError:177 : Cannot open log file: '/var/log/libvirt/qemu/instance-0000006a.log': Device or resource busy
There are no other events captured in the source or target nova or libvirt logs.
I should also note that I am trying to use qemu+tcp (libvirt listening enabled, default tcp port, no auth) rather than qemu+ssh in order to keep things simple while testing. In fact, I intend to only use qemu+tcp anyway.
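For reference, my qemu+tcp setup corresponds roughly to the standard libvirt settings below (shown as a description of my test config, not a production recommendation, since authentication is disabled):
# /etc/libvirt/libvirtd.conf
listen_tls = 0
listen_tcp = 1
auth_tcp = "none"

# /etc/default/libvirt-bin -- make libvirtd actually listen on TCP
libvirtd_opts="-d -l"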
Which version of Ubuntu did you deploy?
I had the same error with Ubuntu 14.04 and Mitaka.
I figured out that the default kernel (3.13) causes this problem.
I upgraded the kernel from 3.13 to 4.4 and the problem is gone now.
I hope my experience helps you solve this problem.
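In case it is useful, the check and upgrade were along these lines (the HWE package name is from memory for getting a 4.4 kernel onto 14.04, so treat it as an assumption to verify):
uname -r                                            # shows 3.13.x before the upgrade
sudo apt-get install --install-recommends linux-generic-lts-xenial
sudo reboot
uname -r                                            # should now report 4.4.x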
Thanks

Polymer web-component-tester / selenium is stalling

We are trying to run the web-component-tester, but it keeps stalling at the Selenium step.
When I run wct -l chrome --verbose
I get the following:
hook: prepare
hook: prepare:selenium
hook done: prepare:selenium
Starting Selenium server for local browsers
11:47:46.357 INFO - Launching a standalone server
Setting system property webdriver.chrome.driver to C:\Users\<user>\AppData\Roaming\npm\node_modules\web-component-tester\node_modules\wct-local\node_modules\selenium-standalone\.selenium\chromedriver\2.13-x64-chromedriver
Setting system property webdriver.ie.driver to C:\Users\<user>\AppData\Roaming\npm\node_modules\web-component-tester\node_modules\wct-local\node_modules\selenium-standalone\.selenium\iedriver\2.44.0-x64-IEDriverServer.exe
11:47:46.809 INFO - Java: Oracle Corporation 25.31-b07
11:47:46.809 INFO - OS: Windows 7 6.1 x86
11:47:46.872 INFO - v2.44.0, with Core v2.44.0. Built from revision 76d78cf
11:47:47.669 INFO - RemoteWebDriver instances should connect to: http://127.0.0.1:11655/wd/hub
11:47:47.669 INFO - Version Jetty/5.1.x
11:47:47.685 INFO - Started HttpContext[/selenium-server,/selenium-server]
11:47:47.919 INFO - Started org.openqa.jetty.jetty.servlet.ServletHandler#3c1e69
11:47:47.919 INFO - Started HttpContext[/wd,/wd]
11:47:47.919 INFO - Started HttpContext[/selenium-server/driver,/selenium-server/driver]
11:47:47.919 INFO - Started HttpContext[/,/]
11:47:47.935 INFO - Started SocketListener on 0.0.0.0:11655
11:47:47.935 INFO - Started org.openqa.jetty.jetty.Server#11bc7ed
Then, after a few minutes of stalling, it follows up with:
hook done: prepare with error: [Error: Unable to connect to selenium]
Error: Unable to connect to selenium
Running these same tests from the browser works without a problem.
While it is hanging, we can still connect via a browser to http://127.0.0.1:11655/wd/hub
I've tried Googling, but without much luck.
What could be going wrong, and how can I debug this further?
One of the node modules doesn't use the Internet Explorer proxy values.
In your environment variables, create a new one
Name : no_proxy
Value : localhost, 127.0.0.1
Don't forget to restart your shell session after applying it.
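On Windows you can set the variable persistently from a command prompt, for example:
setx no_proxy "localhost,127.0.0.1"
:: Open a new shell afterwards so the change is picked up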
I had the same issue, and it was due to my organization's proxy settings. When I went outside my org's network and connected through the internet, I was able to run the tests through Selenium. I have not yet been able to find a way through the proxy settings. I was running on a Mac. You might need to try a normal internet setup and check.

Cargo start up error

I am running the 'mvn clean install -Dmaven.test.skip=true' command for a web application, but it gives the following error. I have set CATALINA_HOME as an environment variable, and I am using Maven 3.1.1. Please help me.
FATAL ERROR in native method: JDWP No transports initialized, jvmtiError=AGENT_ERROR_TRANSPORT_INIT(197)
[WARNING] [talledLocalContainer] ERROR: transport error 202: bind failed: Address already in use
[ERROR] Failed to execute goal org.codehaus.cargo:cargo-maven2-plugin:1.2.4:start (start-container) on project RCMigrationWebApp: Execution start-container of goal org.codehaus.cargo:cargo-maven2-plugin:1.2.4:start failed: Failed to start the Tomcat 7.x container. Deployable [http://localhost:8080/cargocpc/index.html] failed to finish deploying within the timeout period [120000]. The Deployable state is thus unknown. -> [Help 1]
Probably you have to kill whatever process is running on port 8080; it looks like the port is already in use.
ERROR: transport error 202: bind failed: Address already in use
The port is in use. Run netstat -bv if you are using Windows; it will show you which process is holding the port. Given your stack trace, it looks fairly obvious.
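For example, on Windows something like the following finds and kills the offending process (the PID shown is hypothetical):
netstat -ano | findstr :8080
:: e.g.  TCP    0.0.0.0:8080    0.0.0.0:0    LISTENING    1234
taskkill /PID 1234 /F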
Let me know if it helps.