I have a Spark cluster on Mesos with one master and one slave on the same host.
I can then run the official Spark examples with spark-submit as follows:
./bin/spark-submit --deploy-mode cluster --master mesos://<master_ip>:7077 --class org.apache.spark.examples.SparkPi /opt/spark/lib/spark-examples-1.4.0-hadoop2.6.0.jar
I am also trying to build an app with IntelliJ IDEA. When I execute the code on my local machine:
import org.apache.spark.{SparkConf, SparkContext}

object SimpleApp {
  def main(args: Array[String]) {
    val conf = new SparkConf()
      .setAppName("Simple Application")
      .setMaster("local")
    val sc = new SparkContext(conf)
    ...
  }
}
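For reference, a complete, compilable version of this skeleton might look like the following; the reduce computation is my own illustration, since the original elides the job logic:

import org.apache.spark.{SparkConf, SparkContext}

object SimpleApp {
  def main(args: Array[String]) {
    val conf = new SparkConf()
      .setAppName("Simple Application")
      .setMaster("local")
    val sc = new SparkContext(conf)
    // Illustrative job: sum the numbers 1..1000 on local threads.
    val sum = sc.parallelize(1 to 1000).reduce(_ + _)
    println(s"sum = $sum")
    sc.stop()
  }
}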
Everything runs fine locally, but when I change it to run on Spark-on-Mesos:
import org.apache.spark.{SparkConf, SparkContext}

object SimpleApp {
  def main(args: Array[String]) {
    val conf = new SparkConf()
      .setAppName("Simple Application")
      .setMaster("mesos://<master_ip>:7077")
    val sc = new SparkContext(conf)
    ...
  }
}
The output error is:
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/home/myusername/project/spark-01/lib/spark-assembly-1.4.0-hadoop2.6.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/home/myusername/project/spark-01/lib/spark-examples-1.4.0-hadoop2.6.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
15/08/18 16:29:53 INFO SparkContext: Running Spark version 1.4.0
15/08/18 16:29:54 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
15/08/18 16:29:54 INFO SecurityManager: Changing view acls to: myusername
15/08/18 16:29:54 INFO SecurityManager: Changing modify acls to: myusername
15/08/18 16:29:54 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(myusername); users with modify permissions: Set(myusername)
15/08/18 16:29:56 INFO Slf4jLogger: Slf4jLogger started
15/08/18 16:29:56 INFO Remoting: Starting remoting
15/08/18 16:29:56 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkDriver@172.23.10.21:37362]
15/08/18 16:29:56 INFO Utils: Successfully started service 'sparkDriver' on port 37362.
15/08/18 16:29:56 INFO SparkEnv: Registering MapOutputTracker
15/08/18 16:29:56 INFO SparkEnv: Registering BlockManagerMaster
15/08/18 16:29:57 INFO DiskBlockManager: Created local directory at /tmp/spark-29fef56b-0a26-4cd7-b391-2f436bca1c55/blockmgr-b7febe40-5d37-4862-be78-4b6f4df1738c
15/08/18 16:29:57 INFO MemoryStore: MemoryStore started with capacity 953.4 MB
15/08/18 16:29:57 INFO HttpFileServer: HTTP File server directory is /tmp/spark-29fef56b-0a26-4cd7-b391-2f436bca1c55/httpd-94618d51-782f-4262-a113-8d44bf0b29d7
15/08/18 16:29:57 INFO HttpServer: Starting HTTP Server
15/08/18 16:29:57 INFO Utils: Successfully started service 'HTTP file server' on port 59838.
15/08/18 16:29:57 INFO SparkEnv: Registering OutputCommitCoordinator
15/08/18 16:29:57 INFO Utils: Successfully started service 'SparkUI' on port 4040.
15/08/18 16:29:57 INFO SparkUI: Started SparkUI at http://172.23.10.21:4040
Failed to load native Mesos library from /home/myusername/current/idea.14/bin::/usr/java/packages/lib/amd64:/usr/lib/x86_64-linux-gnu/jni:/lib/x86_64-linux-gnu:/usr/lib/x86_64-linux-gnu:/usr/lib/jni:/lib:/usr/lib
Exception in thread "main" java.lang.UnsatisfiedLinkError: no mesos in java.library.path
at java.lang.ClassLoader.loadLibrary(ClassLoader.java:1886)
at java.lang.Runtime.loadLibrary0(Runtime.java:849)
at java.lang.System.loadLibrary(System.java:1088)
at org.apache.mesos.MesosNativeLibrary.load(MesosNativeLibrary.java:54)
at org.apache.mesos.MesosNativeLibrary.load(MesosNativeLibrary.java:79)
at org.apache.spark.SparkContext$.org$apache$spark$SparkContext$$createTaskScheduler(SparkContext.scala:2535)
at org.apache.spark.SparkContext.<init>(SparkContext.scala:489)
at SimpleApp$.main(SimpleApp.scala:11)
at SimpleApp.main(SimpleApp.scala)
I solved the problem by importing all of Mesos's .so dependencies, but that is not a pretty solution, and then every developer needs to know about the Mesos .so files.
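A less invasive workaround, if it fits your setup, is to point the JVM at libmesos through the MESOS_NATIVE_JAVA_LIBRARY environment variable (which the Mesos Java bindings honor), set per Run Configuration in IntelliJ, so developers don't have to collect the .so files themselves. A sketch, assuming libmesos.so is installed at /usr/local/lib/libmesos.so (adjust the path to your installation):

// In the IntelliJ Run Configuration, set the environment variable:
//   MESOS_NATIVE_JAVA_LIBRARY=/usr/local/lib/libmesos.so   (path is an assumption)
import org.apache.spark.{SparkConf, SparkContext}

object SimpleApp {
  def main(args: Array[String]) {
    // Fail fast with a readable message if the native library is not configured.
    require(sys.env.contains("MESOS_NATIVE_JAVA_LIBRARY"),
      "MESOS_NATIVE_JAVA_LIBRARY must point to libmesos.so")
    val conf = new SparkConf()
      .setAppName("Simple Application")
      .setMaster("mesos://<master_ip>:7077")
    val sc = new SparkContext(conf)
    // ... job logic as before ...
    sc.stop()
  }
}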
I have researched how to run a Spark app from IntelliJ IDEA, but all the examples show the first scenario, running locally.
Questions:
Is this a valid scenario for developing a Spark app, or is the right flow to develop the algorithm locally and later use spark-submit to run it on Mesos?
Does anybody know a better way to run a Spark app from IntelliJ IDEA on a Spark-on-Mesos cluster?
We are getting the below error when the hub tries to add the node:
try.SeleniumSpanExporter","log-time-local": "2022-09-22T13:07:01.425Z","log-time-utc": "2022-09-22T13:07:01.425Z","method": "lambda$export$4"}
13:07:01.425 DEBUG [LocalDistributor.add] - Exception while adding Node http://10.251.155.85:5555 java.io.UncheckedIOException: java.net.ConnectException: connection timed out: /10.251.155.**:5555
Hub command: java -jar selenium-server-4.0.0.jar hub
Output: (hub output not shown)
Node command: java -jar selenium-server-4.4.0.jar node --hub http://10...**:4444/grid/register (passing the hub IP)
Output:
C:\Users\Administrator>java -jar C:\apps\relay\webcluster\selenium-server.jar node --publish-events tcp://10.251.155.74:4442 --subscribe-events tcp://10.251.155.74:4443
07:35:34.313 INFO [LogManager$RootLogger.log] - Using the system default encoding
07:35:34.317 INFO [OpenTelemetryTracer.createTracer] - Using OpenTelemetry for tracing
07:35:34.481 INFO [UnboundZmqEventBus.<init>] - Connecting to tcp://10.251.155.74:4442 and tcp://10.251.155.74:4443
07:35:34.565 INFO [UnboundZmqEventBus.<init>] - Sockets created
07:35:35.567 INFO [UnboundZmqEventBus.<init>] - Event bus ready
07:35:35.683 INFO [NodeServer.createHandlers] - Reporting self as: http://10.251.155.85:5555
07:35:35.752 INFO [NodeOptions.getSessionFactories] - Detected 4 available processors
07:35:35.783 INFO [NodeOptions.discoverDrivers] - Discovered 2 driver(s)
07:35:35.821 INFO [NodeOptions.report] - Adding Chrome for {"browserName": "chrome"} 4 times
07:35:35.821 INFO [NodeOptions.report] - Adding Firefox for {"browserName": "firefox"} 4 times
07:35:35.868 INFO [Node.<init>] - Binding additional locator mechanisms: relative, name, id
07:35:36.138 INFO [NodeServer$1.start] - Starting registration process for Node http://10.251.155.85:5555
07:35:36.138 INFO [NodeServer.execute] - Started Selenium node 4.4.0 (revision e5c75ed026a): http://10.251.155.85:5555
07:35:36.169 INFO [NodeServer$1.lambda$start$1] - Sending registration event...
07:35:46.186 INFO [NodeServer$1.lambda$start$1] - Sending registration event...
07:35:56.202 INFO [NodeServer$1.lambda$start$1] - Sending registration event...
I also tried passing java -jar selenium-server-.jar node --publish-events tcp://:8886 --subscribe-events tcp://:8887, still no luck.
Using filebeat 7.5.2:
I'm using a Filebeat configuration with close_eof enabled, and I run Filebeat with the --once flag. I can see the harvester reaching EOF, but Filebeat keeps running.
Filebeat conf:
filebeat.inputs:
- type: log
  close_eof: true
  enabled: true
  paths:
    - "${LOGS_PATH}"
  scan_frequency: 1s
  fields: {
    machine: "${HOST}"
  }

output.logstash:
  hosts: ["192.168.41.6:5044"]
  bulk_max_size: 1024
  timeout: 30s
  pipelining: 1
  workers: 1
And I run it using:
filebeat run --once -v -c "PATH TO CONF..."
And some logs from the filebeat instance:
...
2020-02-04T18:30:16.950Z INFO instance/beat.go:297 Setup Beat: filebeat; Version: 7.5.2
2020-02-04T18:30:17.059Z INFO [publisher] pipeline/module.go:97 Beat name: logstash
2020-02-04T18:30:17.167Z WARN beater/filebeat.go:152 Filebeat is unable to load the Ingest Node pipelines for the configured modules because the Elasticsearch output is not configured/enabled. If you have already loaded the Ingest Node pipelines or are using Logstash pipelines, you can ignore this warning.
2020-02-04T18:30:17.168Z INFO instance/beat.go:429 filebeat start running.
2020-02-04T18:30:17.168Z INFO [monitoring] log/log.go:118 Starting metrics logging every 30s
2020-02-04T18:30:17.168Z INFO registrar/migrate.go:104 No registry home found. Create: /tmp/tmp.BXJtfiaEzb/data/registry/filebeat
2020-02-04T18:30:17.179Z INFO registrar/migrate.go:112 Initialize registry meta file
2020-02-04T18:30:17.192Z INFO registrar/registrar.go:108 No registry file found under: /tmp/tmp.BXJtfiaEzb/data/registry/filebeat/data.json. Creating a new registry file.
2020-02-04T18:30:17.193Z INFO registrar/registrar.go:145 Loading registrar data from /tmp/tmp.BXJtfiaEzb/data/registry/filebeat/data.json
2020-02-04T18:30:17.193Z INFO registrar/registrar.go:152 States Loaded from registrar: 0
2020-02-04T18:30:17.193Z WARN beater/filebeat.go:368 Filebeat is unable to load the Ingest Node pipelines for the configured modules because the Elasticsearch output is not configured/enabled. If you have already loaded the Ingest Node pipelines or are using Logstash pipelines, you can ignore this warning.
2020-02-04T18:30:17.193Z INFO crawler/crawler.go:72 Loading Inputs: 1
2020-02-04T18:30:17.194Z INFO log/input.go:152 Configured paths: [/tmp/tmp.BXJtfiaEzb/*.log]
2020-02-04T18:30:17.206Z INFO input/input.go:114 Starting input of type: log; ID: 13918413832820009056
2020-02-04T18:30:17.225Z INFO input/input.go:167 Stopping Input: 13918413832820009056
2020-02-04T18:30:17.225Z INFO crawler/crawler.go:106 Loading and starting Inputs completed. Enabled inputs: 1
2020-02-04T18:30:17.225Z INFO log/harvester.go:251 Harvester started for file: /tmp/tmp.BXJtfiaEzb/dcbgw-20200124080032_darkblue.log
2020-02-04T18:30:17.231Z INFO beater/filebeat.go:384 Running filebeat once. Waiting for completion ...
2020-02-04T18:30:17.231Z INFO beater/filebeat.go:386 All data collection completed. Shutting down.
2020-02-04T18:30:17.231Z INFO crawler/crawler.go:139 Stopping Crawler
2020-02-04T18:30:17.231Z INFO crawler/crawler.go:149 Stopping 1 inputs
2020-02-04T18:30:17.258Z INFO pipeline/output.go:95 Connecting to backoff(async(tcp://192.168.41.6:5044))
2020-02-04T18:30:17.296Z INFO pipeline/output.go:105 Connection to backoff(async(tcp://192.168.41.6:5044)) established
... Only metrics here ...
2020-02-04T18:35:55.686Z INFO log/harvester.go:274 End of file reached: /tmp/tmp.BXJtfiaEzb/dcbgw-20200124080032_darkblue.log. Closing because close_eof is enabled.
2020-02-04T18:35:55.686Z INFO crawler/crawler.go:165 Crawler stopped
... MORE METRICS ...
2020-02-04T18:36:26.609Z ERROR logstash/async.go:256 Failed to publish events caused by: read tcp 192.168.41.6:49662->192.168.41.6:5044: i/o timeout
2020-02-04T18:36:26.621Z ERROR logstash/async.go:256 Failed to publish events caused by: client is not connected
2020-02-04T18:36:28.520Z ERROR pipeline/output.go:121 Failed to publish events: client is not connected
2020-02-04T18:36:28.520Z INFO pipeline/output.go:95 Connecting to backoff(async(tcp://192.168.41.6:5044))
2020-02-04T18:36:28.521Z INFO pipeline/output.go:105 Connection to backoff(async(tcp://192.168.41.6:5044)) established
... MORE METRICS ...
From this, I'm outputting to Logstash 7.5.2 running in the same Ubuntu 18 VM. Running Logstash with log level trace does not output any errors.
So I have configured Filebeat to accept input via TCP. This is the filebeat.yml file:
filebeat.inputs:
- type: tcp
  host: ["localhost:9000"]
  max_message_size: 20MiB
For some reason Filebeat does not start the TCP server at port 9000. I have verified this using Wireshark; it shows nothing at port 9000.
This is the output of the command filebeat -e -d "*" run in the terminal:
2019-08-14T09:12:40.745-0600 INFO instance/beat.go:468 Home path: [/usr/local/Cellar/filebeat/6.2.4] Config path: [/usr/local/etc/filebeat] Data path: [/usr/local/var/lib/filebeat] Logs path: [/usr/local/var/log/filebeat]
2019-08-14T09:12:40.745-0600 DEBUG [beat] instance/beat.go:495 Beat metadata path: /usr/local/var/lib/filebeat/meta.json
2019-08-14T09:12:40.745-0600 INFO instance/beat.go:475 Beat UUID: 764da0fd-ea93-4777-b1ea-63149be0d6b6
2019-08-14T09:12:40.745-0600 INFO instance/beat.go:213 Setup Beat: filebeat; Version: 6.2.4
2019-08-14T09:12:40.745-0600 DEBUG [beat] instance/beat.go:230 Initializing output plugins
2019-08-14T09:12:40.745-0600 DEBUG [processors] processors/processor.go:49 Processors:
2019-08-14T09:12:40.745-0600 INFO pipeline/module.go:76 Beat name: Ad-MBP.domain
2019-08-14T09:12:40.745-0600 ERROR fileset/modules.go:95 Not loading modules. Module directory not found: /usr/local/Cellar/filebeat/6.2.4/module
2019-08-14T09:12:40.745-0600 INFO [monitoring] log/log.go:97 Starting metrics logging every 30s
2019-08-14T09:12:40.745-0600 INFO instance/beat.go:301 filebeat start running.
2019-08-14T09:12:40.745-0600 DEBUG [registrar] registrar/registrar.go:90 Registry file set to: /usr/local/var/lib/filebeat/registry
2019-08-14T09:12:40.746-0600 INFO registrar/registrar.go:110 Loading registrar data from /usr/local/var/lib/filebeat/registry
2019-08-14T09:12:40.746-0600 INFO registrar/registrar.go:121 States Loaded from registrar: 0
2019-08-14T09:12:40.746-0600 WARN beater/filebeat.go:261 Filebeat is unable to load the Ingest Node pipelines for the configured modules because the Elasticsearch output is not configured/enabled. If you have already loaded the Ingest Node pipelines or are using Logstash pipelines, you can ignore this warning.
2019-08-14T09:12:40.746-0600 INFO crawler/crawler.go:48 Loading Prospectors: 1
2019-08-14T09:12:40.746-0600 DEBUG [registrar] registrar/registrar.go:152 Starting Registrar
2019-08-14T09:12:40.746-0600 DEBUG [cfgfile] cfgfile/reload.go:95 Checking module configs from: /usr/local/etc/filebeat/modules.d/*.yml
2019-08-14T09:12:40.746-0600 DEBUG [cfgfile] cfgfile/reload.go:109 Number of module configs found: 0
2019-08-14T09:12:40.746-0600 INFO crawler/crawler.go:82 Loading and starting Prospectors completed. Enabled prospectors: 0
2019-08-14T09:12:40.746-0600 INFO cfgfile/reload.go:127 Config reloader started
2019-08-14T09:12:40.748-0600 DEBUG [cfgfile] cfgfile/reload.go:151 Scan for new config files
2019-08-14T09:12:40.748-0600 DEBUG [cfgfile] cfgfile/reload.go:170 Number of module configs found: 0
2019-08-14T09:12:40.748-0600 INFO cfgfile/reload.go:219 Loading of config files completed.
I am not sure what I am doing wrong.
I believe filebeat inputs are only available from Filebeat 6.3+; anything older used filebeat prospectors.
TCP input documentation for 6.3 (nothing is available for 6.2 or older, since those versions use prospectors):
https://www.elastic.co/guide/en/beats/filebeat/6.3/filebeat-input-tcp.html
Your logs show that you are on Filebeat version 6.2.4; could you try out your configuration with 6.3+?
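If you do upgrade, a minimal 6.3+ TCP input would look like the sketch below; note that, per the 6.3 documentation, host appears to be a single "host:port" string rather than a list:

filebeat.inputs:
- type: tcp
  host: "localhost:9000"
  max_message_size: 20MiB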
I want to start HiveServer2, but when I use the Hive command hive --service hiveserver2 to start it, I have this problem:
ERROR StatusLogger No log4j2 configuration file found. Using default configuration: logging only errors to the console.
The default port is 10000.
Hive is running well, but I still have this problem:
ERROR StatusLogger No log4j2 configuration file found. Using default configuration: logging only errors to the console.
Connecting to jdbc:hive2://
Connected to: Apache Hive (version 2.1.1)
Driver: Hive JDBC (version 2.1.1)
Transaction isolation: TRANSACTION_REPEATABLE_READ
Beeline version 2.1.1 by Apache Hive
hive>
How can I add a log4j2 configuration?
I am working on Windows 8 with Apache Hive.
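One low-risk option, assuming a standard Hive 2.x layout where logging is configured by conf/hive-log4j2.properties, is to activate the template that ships with Hive:

cd %HIVE_HOME%\conf
copy hive-log4j2.properties.template hive-log4j2.properties

If you would rather write one from scratch, a minimal log4j2 properties sketch could look like this; the name, pattern, and levels are my own illustration, not Hive's defaults:

status = WARN
name = HiveLog4j2
appenders = console
appender.console.type = Console
appender.console.name = console
appender.console.target = SYSTEM_ERR
appender.console.layout.type = PatternLayout
appender.console.layout.pattern = %d{ISO8601} %5p [%t] %c{2}: %m%n
rootLogger.level = INFO
rootLogger.appenderRefs = console
rootLogger.appenderRef.console.ref = console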
I'm using Windows Server 2008 and MySQL 5.6.31. I wanted to upgrade SonarQube from version 5.2 to 5.6. After starting SonarQube, the log file shows the lines below. Everything should be fine, except the WebServer doesn't become operational:
INFO ce[o.s.c.a.WebServerWatcherImpl] Waiting for Web Server to be operational...
INFO ce[o.s.c.a.WebServerWatcherImpl] Still waiting for WebServer...
When I try to reach the WebServer in the browser, I get this message from Apache Tomcat:
HTTP Status 404 - /sessions/new
type Status report
message /sessions/new
description The requested resource is not available.
Apache Tomcat/8.0.30
Does anyone know why the WebServer doesn't become operational?
Wrapper Manager: JVM #1
Running a 64-bit JVM.
Wrapper Manager: Registering shutdown hook
Wrapper Manager: Using wrapper
Load native library. One or more attempts may fail if platform specific libraries do not exist.
Loading native library failed: wrapper-windows-x86-64.dll Cause: java.lang.UnsatisfiedLinkError: no wrapper-windows-x86-64 in java.library.path
Loaded native library: wrapper.dll
Calling native initialization method.
Initializing WrapperManager native library.
Java Executable: C:\ProgramData\Oracle\Java\javapath\java.exe
Windows version: 6.1.7601
Java Version   : 1.8.0_91-b15 Java HotSpot(TM) 64-Bit Server VM
Java VM Vendor : Oracle Corporation
Control event monitor thread started.
Startup runner thread started.
WrapperManager.start(org.tanukisoftware.wrapper.WrapperSimpleApp@38af3868, args[]) called by thread: main
Communications runner thread started.
Open socket to wrapper...Wrapper-Connection Opened Socket from 31000 to 32000
Send a packet KEY : fnnZL60VqJstVqYQ
handleSocket(Socket[addr=/127.0.0.1,port=32000,localport=31000])
Received a packet LOW_LOG_LEVEL : 1
Wrapper Manager: LowLogLevel from Wrapper is 1
Received a packet PING_TIMEOUT : 200
PingTimeout from Wrapper is 200000
Received a packet PROPERTIES : (Property Values)
Received a packet START : start
calling WrapperListener.start()
Waiting for WrapperListener.start runner thread to complete.
WrapperListener.start runner thread started.
WrapperSimpleApp: start(args)
Will wait up to 2 seconds for the main method to complete.
WrapperSimpleApp: invoking main method
2016.07.28 13:48:38 INFO app[o.s.a.AppFileSystem] Cleaning or creating temp directory D:\SonarQube\sonarqube-5.6\temp
2016.07.28 13:48:38 INFO app[o.s.p.m.JavaProcessLauncher] Launch process[es]: C:\Program Files\Java\jre1.8.0_91\bin\java -Djava.awt.headless=true -Xmx1G -Xms256m -Xss256k -Djava.net.preferIPv4Stack=true -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -XX:+HeapDumpOnOutOfMemoryError -Djava.io.tmpdir=D:\SonarQube\sonarqube-5.6\temp -javaagent:C:\Program Files\Java\jre1.8.0_91\lib\management-agent.jar -cp ./lib/common/*;./lib/search/* org.sonar.search.SearchServer D:\SonarQube\sonarqube-5.6\temp\sq-process6103713257744114100properties
Send a packet START_PENDING : 5000
Send a packet START_PENDING : 5000
WrapperSimpleApp: start(args) end. Main Completed=false, exitCode=null
WrapperListener.start runner thread stopped.
returned from WrapperListener.start()
Send a packet STARTED :
Startup runner thread stopped.
Received a packet PING : ping
Send a packet PING : ok
2016.07.28 13:48:41 INFO es[o.s.p.ProcessEntryPoint] Starting es
2016.07.28 13:48:41 INFO es[o.s.s.EsSettings] Elasticsearch listening on 127.0.0.1:9001
2016.07.28 13:48:42 INFO es[o.elasticsearch.node] [sonar-1469706518062] version[1.7.5], pid[3788], build[00f95f4/2016-02-02T09:55:30Z]
2016.07.28 13:48:42 INFO es[o.elasticsearch.node] [sonar-1469706518062] initializing ...
2016.07.28 13:48:42 INFO es[o.e.plugins] [sonar-1469706518062] loaded [], sites []
2016.07.28 13:48:43 INFO es[o.elasticsearch.env] [sonar-1469706518062] using [1] data paths, mounts [[Data (D:)]], net usable_space [29.5gb], net total_space [249.9gb], types [NTFS]
Received a packet PING : ping
Send a packet PING : ok
2016.07.28 13:48:46 WARN es[o.e.bootstrap] JNA not found. native methods will be disabled.
2016.07.28 13:48:47 INFO es[o.elasticsearch.node] [sonar-1469706518062] initialized
2016.07.28 13:48:47 INFO es[o.elasticsearch.node] [sonar-1469706518062] starting ...
2016.07.28 13:48:47 INFO es[o.e.transport] [sonar-1469706518062] bound_address {inet[/127.0.0.1:9001]}, publish_address {inet[/127.0.0.1:9001]}
2016.07.28 13:48:47 INFO es[o.e.discovery] [sonar-1469706518062] sonarqube/NDLYofdsQU6dCANZLN0p9w
Received a packet PING : ping
Send a packet PING : ok
2016.07.28 13:48:50 INFO es[o.e.cluster.service] [sonar-1469706518062] new_master [sonar-1469706518062][NDLYofdsQU6dCANZLN0p9w][DEERLA7LRUD10A][inet[/127.0.0.1:9001]]{rack_id=sonar-1469706518062}, reason: zen-disco-join (elected_as_master)
2016.07.28 13:48:50 INFO es[o.elasticsearch.node] [sonar-1469706518062] started
2016.07.28 13:48:50 INFO es[o.e.gateway] [sonar-1469706518062] recovered [0] indices into cluster_state
2016.07.28 13:48:51 INFO app[o.s.p.m.Monitor] Process[es] is up
2016.07.28 13:48:51 INFO app[o.s.p.m.JavaProcessLauncher] Launch process[web]: C:\Program Files\Java\jre1.8.0_91\bin\java -Djava.awt.headless=true -Dfile.encoding=UTF-8 -Djruby.management.enabled=false -Djruby.compile.invokedynamic=false -Xmx512m -Xms128m -XX:+HeapDumpOnOutOfMemoryError -Djava.net.preferIPv4Stack=true -Djava.io.tmpdir=D:\SonarQube\sonarqube-5.6\temp -javaagent:C:\Program Files\Java\jre1.8.0_91\lib\management-agent.jar -cp ./lib/common/*;./lib/server/*;D:\SonarQube\sonarqube-5.6\lib\jdbc\mysql\mysql-connector-java-5.1.35.jar org.sonar.server.app.WebServer D:\SonarQube\sonarqube-5.6\temp\sq-process200048299209178132properties
Received a packet PING : ping
Send a packet PING : ok
2016.07.28 13:48:55 TRACE web[o.s.p.Lifecycle] tryToMoveTo from INIT to STARTING => true
2016.07.28 13:48:55 INFO web[o.s.p.ProcessEntryPoint] Starting web
2016.07.28 13:48:56 INFO web[o.s.s.a.TomcatContexts] Webapp directory: D:\SonarQube\sonarqube-5.6\web
2016.07.28 13:48:56 INFO web[o.a.c.h.Http11NioProtocol] Initializing ProtocolHandler ["http-nio-xxx.xxx.x.xxx-xxxx"]
2016.07.28 13:48:56 INFO web[o.a.t.u.n.NioSelectorPool] Using a shared selector for servlet write/read
Received a packet PING : ping
Send a packet PING : ok
2016.07.28 13:48:57 INFO web[o.a.c.h.Http11NioProtocol] Starting ProtocolHandler ["http-nio-xxx.xxx.x.xxx-xxxx"]
2016.07.28 13:48:57 INFO web[o.s.s.a.TomcatAccessLog] Web server is started
2016.07.28 13:48:57 INFO web[o.s.s.a.EmbeddedTomcat] HTTP connector enabled on port 9000
2016.07.28 13:48:57 TRACE web[o.s.p.Lifecycle] tryToMoveTo from STARTING to STARTED => true
2016.07.28 13:48:58 INFO app[o.s.p.m.Monitor] Process[web] is up
2016.07.28 13:48:58 INFO app[o.s.p.m.JavaProcessLauncher] Launch process[ce]: C:\Program Files\Java\jre1.8.0_91\bin\java -Djava.awt.headless=true -Dfile.encoding=UTF-8 -Xmx512m -Xms128m -XX:+HeapDumpOnOutOfMemoryError -Djava.net.preferIPv4Stack=true -Djava.io.tmpdir=D:\SonarQube\sonarqube-5.6\temp -javaagent:C:\Program Files\Java\jre1.8.0_91\lib\management-agent.jar -cp ./lib/common/*;./lib/server/*;./lib/ce/*;D:\SonarQube\sonarqube-5.6\lib\jdbc\mysql\mysql-connector-java-5.1.35.jar org.sonar.ce.app.CeServer D:\SonarQube\sonarqube-5.6\temp\sq-process346661778793077863properties
2016.07.28 13:48:59 TRACE ce[o.s.p.Lifecycle] tryToMoveTo from INIT to STARTING => true
2016.07.28 13:48:59 INFO ce[o.s.p.ProcessEntryPoint] Starting ce
2016.07.28 13:48:59 INFO ce[o.s.c.a.WebServerWatcherImpl] Waiting for Web Server to be operational...
2016.07.28 13:49:00 INFO ce[o.s.c.a.WebServerWatcherImpl] Still waiting for WebServer...
Received a packet PING : ping
Send a packet PING : ok
2016.07.28 13:49:02 INFO ce[o.s.c.a.WebServerWatcherImpl] Still waiting for WebServer...
Received a packet PING : ping
Send a packet PING : ok
There should be a line in the log like this:
2017.01.08 23:12:11 WARN web[o.s.s.p.DatabaseServerCompatibility] Database must be upgraded. Please backup database and browse /setup
The server is waiting for the user to go to the /setup page to upgrade the DB before continuing.
As per the log line below:
WARN web[o.s.s.p.DatabaseServerCompatibility] Database must be upgraded. Please backup database and browse /setup
Go to http://<sonar-host>:9000/<context path>/setup and click the Migrate button.
After a successful migration, your server will be ready to use.