Ambari Web UI does not proceed past the Select Version step

I cannot proceed to the next step. Here is /var/log/ambari-server/ambari-server.log:
INFO [ambari-client-thread-37] AmbariMetaInfo:1430 - Stack HDP-2.0.6 is not active, skipping VDF
INFO [ambari-client-thread-37] AmbariMetaInfo:1430 - Stack HDP-2.0.6 is not active, skipping VDF
INFO [ambari-client-thread-37] AmbariMetaInfo:1430 - Stack HDP-2.0.6.GlusterFS is not active, skipping VDF
INFO [ambari-client-thread-37] AmbariMetaInfo:1430 - Stack HDP-2.1 is not active, skipping VDF
INFO [ambari-client-thread-37] AmbariMetaInfo:1430 - Stack HDP-2.1.GlusterFS is not active, skipping VDF
INFO [ambari-client-thread-37] AmbariMetaInfo:1428 - Stack HDP-2.3.GlusterFS is not valid, skipping VDF: The service 'OOZIE' in stack 'HDP:2.3.GlusterFS' extends a non-existent service: 'common-services/OOZIE/5.0.0.2.3'
...
How can I solve this problem?
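For reference, the stack versions and registered VDFs can also be inspected through the Ambari REST API; a minimal sketch (the admin:admin credentials and the localhost:8080 endpoint are assumptions, substitute your own):

# list the version definitions (VDFs) Ambari has registered
curl -u admin:admin 'http://localhost:8080/api/v1/version_definitions'

# list the stack versions Ambari knows about for HDP
curl -u admin:admin 'http://localhost:8080/api/v1/stacks/HDP/versions'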

Related

Hive 3.1 metastore generates a large number of S3 accesses, resulting in additional cost

We store our HDFS-layer data in S3 in the production environment.
After upgrading Hive from 1.2 to 3.1, we found that the number of requests to S3 increased sharply, which results in additional cost.
The access statistics of the S3 bucket show that while Hive is running there is still a large volume of S3 requests, even when no external application is using it; when Hive is shut down, the access volume returns to normal immediately.
Checking the Hive metastore log while all upper-level applications of Hive were shut down, I found that the log grows rapidly and contains a large number of accesses to Hive tables (the S3 bucket).
The following excerpt shows part of the get_table accesses. Analyzing the full logs, I found a traversal scan of all tables in Hive (thousands of tables), which is what generates the large number of S3 requests.
2023-02-06T16:46:42,899 INFO [pool-6-thread-3]: metastore.HiveMetaStore (HiveMetaStore.java:logInfo(952)) - 3: source:**.***.*.*** get_table : tbl=hive.ods.ods_wkfl_act_ru_event_subscr
2023-02-06T16:46:42,899 INFO [pool-6-thread-3]: HiveMetaStore.audit (HiveMetaStore.java:logAuditEvent(360)) - ugi=hive ip=**.***.*.*** cmd=source:**.***.*.*** get_table :tbl=hive.ods.ods_wkfl_act_ru_event_subscr
2023-02-06T16:46:42,907 INFO [pool-6-thread-3]: metastore.MetastoreDefaultTransformer (MetastoreDefaultTransformer.java:transform(96)) - Starting translation for processor HMSClient-#master1 on list 1
2023-02-06T16:46:42,907 INFO [pool-6-thread-3]: metastore.MetastoreDefaultTransformer (MetastoreDefaultTransformer.java:transform(115)) - Table ods_wkfl_act_ru_event_subscr,#bucket=-1,isBucketed:false,tableType=EXTERNAL_TABLE,tableCapabilities=null
2023-02-06T16:46:42,908 INFO [pool-6-thread-3]: metastore.MetastoreDefaultTransformer (MetastoreDefaultTransformer.java:transform(438)) - Transformer return list of 1
2023-02-06T16:46:42,908 INFO [pool-6-thread-3]: authorization.StorageBasedAuthorizationProvider (StorageBasedAuthorizationProvider.java:userHasProxyPrivilege(172)) - userhive has host proxy privilege.
2023-02-06T16:46:42,908 INFO [pool-6-thread-3]: authorization.StorageBasedAuthorizationProvider (StorageBasedAuthorizationProvider.java:checkPermissions(395)) - Path authorization is skipped for path s3a://**/apps/hive/warehouse/ods_qa/data/wkfl_act_ru_event_subscr20230202.
2023-02-06T16:46:42,940 INFO [pool-6-thread-3]: metastore.HiveMetaStore (HiveMetaStore.java:logInfo(952)) - 3: source:**.***.*.*** get_table : tbl=hive.ods.ods_wkfl_act_ru_event_subscr
2023-02-06T16:46:42,940 INFO [pool-6-thread-3]: HiveMetaStore.audit (HiveMetaStore.java:logAuditEvent(360)) - ugi=hive ip=**.***.*.*** cmd=source:**.***.*.*** get_table :tbl=hive.ods.ods_wkfl_act_ru_event_subscr
2023-02-06T16:46:42,948 INFO [pool-6-thread-3]: metastore.MetastoreDefaultTransformer (MetastoreDefaultTransformer.java:transform(96)) - Starting translation for processor HMSClient-#master1 on list 1
I compared the configurations of Hive 1.2.1 and Hive 3, looked at the differences, and tried setting hive.compactor.initiator.on to false, but this did not solve the problem.
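For reference, that change corresponds to this hive-site.xml entry (a minimal sketch showing only the property I changed):

<!-- hive-site.xml: disable the compaction initiator background thread (the change I tried) -->
<property>
  <name>hive.compactor.initiator.on</name>
  <value>false</value>
</property>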
I also compared the metastore logs of Hive 1.2 and Hive 3.1 and found that Hive 1.2 uses get_all_databases, meaning only one S3 access is generated per scan cycle. See the following log excerpt:
2023-02-06 16:49:57,578 INFO [pool-5-thread-200]: HiveMetaStore.audit (HiveMetaStore.java:logAuditEvent(317)) - ugi=ambari-qa ip=**.**.**.** cmd=source:**.**.**.** get_all_databases
2023-02-06 16:49:57,661 WARN [pool-5-thread-200]: conf.HiveConf (HiveConf.java:initialize(3093)) - HiveConf of name hive.log.dir does not exist
2023-02-06 16:49:57,661 WARN [pool-5-thread-200]: conf.HiveConf (HiveConf.java:initialize(3093)) - HiveConf of name hive.driver.parallel.compilation does not exist
2023-02-06 16:49:57,661 WARN [pool-5-thread-200]: conf.HiveConf (HiveConf.java:initialize(3093)) - HiveConf of name hive.log.file does not exist
2023-02-06 16:49:57,661 INFO [pool-5-thread-200]: metastore.HiveMetaStore (HiveMetaStore.java:newRawStoreForConf(619)) - 200: Opening raw store with implementation class:org.apache.hadoop.hive.metastore.ObjectStore
2023-02-06 16:49:57,787 INFO [pool-5-thread-200]: metastore.ObjectStore (ObjectStore.java:initializeHelper(383)) - ObjectStore, initialize called
2023-02-06 16:49:57,871 WARN [pool-5-thread-200]: conf.HiveConf (HiveConf.java:initialize(3093)) - HiveConf of name hive.log.dir does not exist
2023-02-06 16:49:57,871 WARN [pool-5-thread-200]: conf.HiveConf (HiveConf.java:initialize(3093)) - HiveConf of name hive.driver.parallel.compilation does not exist
2023-02-06 16:49:57,872 WARN [pool-5-thread-200]: conf.HiveConf (HiveConf.java:initialize(3093)) - HiveConf of name hive.log.file does not exist
2023-02-06 16:49:57,893 INFO [pool-5-thread-200]: metastore.MetaStoreDirectSql (MetaStoreDirectSql.java:<init>(163)) - Using direct SQL, underlying DB is MYSQL
2023-02-06 16:49:57,893 INFO [pool-5-thread-200]: metastore.ObjectStore (ObjectStore.java:setConf(297)) - Initialized ObjectStore
2023-02-06 16:52:54,140 INFO [pool-5-thread-200]: metastore.HiveMetaStore (HiveMetaStore.java:logInfo(773)) - 200: source:**.**.**.** get_all_functions
2023-02-06 16:52:54,140 INFO [pool-5-thread-200]: HiveMetaStore.audit (HiveMetaStore.java:logAuditEvent(317)) - ugi=ambari-qa ip=**.**.**.** cmd=source:**.**.**.** get_all_functions
2023-02-06 16:52:56,155 INFO [pool-5-thread-200]: metastore.HiveMetaStore (HiveMetaStore.java:logInfo(773)) - 200: Shutting down the object store...
2023-02-06 16:52:56,155 INFO [pool-5-thread-200]: HiveMetaStore.audit (HiveMetaStore.java:logAuditEvent(317)) - ugi=ambari-qa ip=**.**.**.** cmd=Shutting down the object store...
Based on the above analysis, I believe this per-table traversal is what causes the surge in S3 accesses.
What I want to know now is which Hive parameters I can modify (or what other approach I can take) so that the Hive metastore no longer traverses every table but instead works the way the get_all_databases approach does. I have found few similar answers online, and I am not sure whether my reasoning is correct. I really need your help, thank you very much!

Unable to connect to Hub & Node in Selenium 4.4.0

We are getting the error below (the JSON log line is truncated as captured):
...try.SeleniumSpanExporter","log-time-local": "2022-09-22T13:07:01.425Z","log-time-utc": "2022-09-22T13:07:01.425Z","method": "lambda$export$4"}
13:07:01.425 DEBUG [LocalDistributor.add] - Exception while adding Node http://10.251.155.85:5555 java.io.UncheckedIOException: java.net.ConnectException: connection timed out: /10.251.155.**:5555
Hub Command: java -jar selenium-server-4.0.0.jar hub
O/P: hub output omitted.
Node Command: java -jar selenium-server-4.4.0.jar node --hub http://10...**:4444/grid/register [Passing the hub IP]
O/P (node output):
C:\Users\Administrator>java -jar C:\apps\relay\webcluster\selenium-server.jar node --publish-events tcp://10.251.155.74:4442 --subscribe-events tcp://10.251.155.74:4443
07:35:34.313 INFO [LogManager$RootLogger.log] - Using the system default encoding
07:35:34.317 INFO [OpenTelemetryTracer.createTracer] - Using OpenTelemetry for tracing
07:35:34.481 INFO [UnboundZmqEventBus.<init>] - Connecting to tcp://10.251.155.74:4442 and tcp://10.251.155.74:4443
07:35:34.565 INFO [UnboundZmqEventBus.<init>] - Sockets created
07:35:35.567 INFO [UnboundZmqEventBus.<init>] - Event bus ready
07:35:35.683 INFO [NodeServer.createHandlers] - Reporting self as: http://10.251.155.85:5555
07:35:35.752 INFO [NodeOptions.getSessionFactories] - Detected 4 available processors
07:35:35.783 INFO [NodeOptions.discoverDrivers] - Discovered 2 driver(s)
07:35:35.821 INFO [NodeOptions.report] - Adding Chrome for {"browserName": "chrome"} 4 times
07:35:35.821 INFO [NodeOptions.report] - Adding Firefox for {"browserName": "firefox"} 4 times
07:35:35.868 INFO [Node.<init>] - Binding additional locator mechanisms: relative, name, id
07:35:36.138 INFO [NodeServer$1.start] - Starting registration process for Node http://10.251.155.85:5555
07:35:36.138 INFO [NodeServer.execute] - Started Selenium node 4.4.0 (revision e5c75ed026a): http://10.251.155.85:5555
07:35:36.169 INFO [NodeServer$1.lambda$start$1] - Sending registration event...
07:35:46.186 INFO [NodeServer$1.lambda$start$1] - Sending registration event...
07:35:56.202 INFO [NodeServer$1.lambda$start$1] - Sending registration event...
I also tried passing java -jar selenium-server-.jar node --publish-events tcp://:8886 --subscribe-events tcp://:8887, still no luck.
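For reference, my understanding of a matching Grid 4 hub/node pair, as a minimal sketch (the <hub-host> placeholder is mine; I note my hub jar above is 4.0.0 while the node is 4.4.0, and my --hub URL uses the old /grid/register path):

# hub machine: hub listens on 4444, event bus on 4442/4443 by default
java -jar selenium-server-4.4.0.jar hub

# node machine: point --hub at the hub's base URL (Grid 4 does not use /grid/register)
java -jar selenium-server-4.4.0.jar node --hub http://<hub-host>:4444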

CypressError: `cy.visit()` failed trying to load: https://dev-eccc.env.xxxx.com/ via Gitlab CI job

I have set up a GitLab CI/CD job to execute all Cypress integration tests, and all of them fail because the home URL fails to load via cy.visit().
On my local machine everything works fine.
Below is the complete error trace:
CypressError: `cy.visit()` failed trying to load:
https://dev-eccc.env.ihsmarkit.com/
We attempted to make an http request to this URL but the request failed without a response.
We received this error at the network level:
> Error: getaddrinfo ENOTFOUND dev-eccc.env.ihsmarkit.com
Common situations why this would fail:
- you don't have internet access
- you forgot to run / boot your web server
- your web server isn't accessible
- you have weird network configuration settings on your computer
Because this error occurred during a `before all` hook we are skipping the remaining tests in the current suite: `Facility Register`
at http://localhost:45271/__cypress/runner/cypress_runner.js:156433:23
at visitFailedByErr (http://localhost:45271/__cypress/runner/cypress_runner.js:155794:12)
at http://localhost:45271/__cypress/runner/cypress_runner.js:156432:11
at tryCatcher (http://localhost:45271/__cypress/runner/cypress_runner.js:10130:23)
at Promise._settlePromiseFromHandler (http://localhost:45271/__cypress/runner/cypress_runner.js:8065:31)
at Promise._settlePromise (http://localhost:45271/__cypress/runner/cypress_runner.js:8122:18)
at Promise._settlePromise0 (http://localhost:45271/__cypress/runner/cypress_runner.js:8167:10)
at Promise._settlePromises (http://localhost:45271/__cypress/runner/cypress_runner.js:8243:18)
at _drainQueueStep (http://localhost:45271/__cypress/runner/cypress_runner.js:4837:12)
at _drainQueue (http://localhost:45271/__cypress/runner/cypress_runner.js:4830:9)
at Async.../../node_modules/bluebird/js/release/async.js.Async._drainQueues (http://localhost:45271/__cypress/runner/cypress_runner.js:4846:5)
at Async.drainQueues (http://localhost:45271/__cypress/runner/cypress_runner.js:4716:14)
From Your Spec Code:
at Object.homepage_test (http://localhost:45271/__cypress/tests?p=cypress/integration/eccc-app-ui-cypress/API/register_api.ts:71:8)
at Context.eval (http://localhost:45271/__cypress/tests?p=cypress/integration/eccc-app-ui-cypress/API/register_api.ts:13:25)
From Node.js Internals:
Error: getaddrinfo ENOTFOUND dev-eccc.env.ihsmarkit.com
at GetAddrInfoReqWrap.onlookup [as oncomplete] (dns.js:60:26)
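Since the underlying failure is getaddrinfo ENOTFOUND, i.e. the CI runner cannot resolve the hostname at all, a first diagnostic step is to test DNS resolution inside the job before Cypress runs; a minimal .gitlab-ci.yml sketch (the job name and script layout are assumptions):

# .gitlab-ci.yml (sketch): check name resolution in the runner before the tests
cypress_tests:
  script:
    # getent is available in most glibc-based images; fall back to nslookup if present
    - getent hosts dev-eccc.env.ihsmarkit.com || nslookup dev-eccc.env.ihsmarkit.com
    - npx cypress run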

Apache flink - Timeout after submitting job on hadoop / yarn cluster

I am trying to upgrade our job from Flink 1.4.2 to 1.7.1, but I keep running into timeouts after submitting the job. The Flink job runs on our Hadoop cluster (version 2.7) with YARN.
I've seen the following behavior:
Using the same flink-conf.yaml as we used in 1.4.2, versions 1.5.6 / 1.6.3 / 1.7.1 all time out, while 1.4.2 works.
Using 1.5.6 with "mode: legacy" (to switch off FLIP-6) works.
Using 1.7.1 with "mode: legacy" gives a timeout (I assume this option was removed, but then the documentation is outdated? https://ci.apache.org/projects/flink/flink-docs-stable/ops/config.html#legacy)
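For clarity, the "mode" switch referenced above lives in flink-conf.yaml; a minimal sketch of the pre-FLIP-6 setting that still worked for us on 1.5.6:

# flink-conf.yaml (Flink 1.5.x / 1.6.x; the legacy mode was removed in later versions)
mode: legacy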
When the timeout happens I get the following stacktrace:
INFO class java.time.Instant does not contain a getter for field seconds
INFO class com.bol.fin_hdp.cm1.domain.Cm1Transportable does not contain a getter for field globalId
INFO Submitting job 5af931bcef395a78b5af2b97e92dcffe (detached: false).
INFO ------------------------------------------------------------
INFO The program finished with the following exception:
INFO org.apache.flink.client.program.ProgramInvocationException: The main method caused an error.
INFO at org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:545)
INFO at org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:420)
INFO at org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:404)
INFO at org.apache.flink.client.cli.CliFrontend.executeProgram(CliFrontend.java:798)
INFO at org.apache.flink.client.cli.CliFrontend.runProgram(CliFrontend.java:289)
INFO at org.apache.flink.client.cli.CliFrontend.run(CliFrontend.java:215)
INFO at org.apache.flink.client.cli.CliFrontend.parseParameters(CliFrontend.java:1035)
INFO at org.apache.flink.client.cli.CliFrontend.lambda$main$9(CliFrontend.java:1111)
INFO at java.security.AccessController.doPrivileged(Native Method)
INFO at javax.security.auth.Subject.doAs(Subject.java:422)
INFO at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1754)
INFO at org.apache.flink.runtime.security.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41)
INFO at org.apache.flink.client.cli.CliFrontend.main(CliFrontend.java:1111)
INFO Caused by: java.lang.RuntimeException: org.apache.flink.client.program.ProgramInvocationException: Could not retrieve the execution result.
INFO at com.bol.fin_hdp.job.starter.IntervalJobStarter.startJob(IntervalJobStarter.java:43)
INFO at com.bol.fin_hdp.job.starter.IntervalJobStarter.startJobWithConfig(IntervalJobStarter.java:32)
INFO at com.bol.fin_hdp.Main.main(Main.java:8)
INFO at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
INFO at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
INFO at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
INFO at java.lang.reflect.Method.invoke(Method.java:498)
INFO at org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:528)
INFO ... 12 more
INFO Caused by: org.apache.flink.client.program.ProgramInvocationException: Could not retrieve the execution result.
INFO at org.apache.flink.client.program.rest.RestClusterClient.submitJob(RestClusterClient.java:258)
INFO at org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:464)
INFO at org.apache.flink.streaming.api.environment.StreamContextEnvironment.execute(StreamContextEnvironment.java:66)
INFO at com.bol.fin_hdp.cm1.job.Job.execute(Job.java:54)
INFO at com.bol.fin_hdp.job.starter.IntervalJobStarter.startJob(IntervalJobStarter.java:41)
INFO ... 19 more
INFO Caused by: org.apache.flink.runtime.client.JobSubmissionException: Failed to submit JobGraph.
INFO at org.apache.flink.client.program.rest.RestClusterClient.lambda$submitJob$8(RestClusterClient.java:371)
INFO at java.util.concurrent.CompletableFuture.uniExceptionally(CompletableFuture.java:870)
INFO at java.util.concurrent.CompletableFuture$UniExceptionally.tryFire(CompletableFuture.java:852)
INFO at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:474)
INFO at java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:1977)
INFO at org.apache.flink.runtime.concurrent.FutureUtils.lambda$retryOperationWithDelay$5(FutureUtils.java:216)
INFO at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:760)
INFO at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:736)
INFO at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:474)
INFO at java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:1977)
INFO at org.apache.flink.runtime.rest.RestClient.lambda$submitRequest$1(RestClient.java:301)
INFO at org.apache.flink.shaded.netty4.io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:680)
INFO at org.apache.flink.shaded.netty4.io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:603)
INFO at org.apache.flink.shaded.netty4.io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:563)
INFO at org.apache.flink.shaded.netty4.io.netty.util.concurrent.DefaultPromise.tryFailure(DefaultPromise.java:424)
INFO at org.apache.flink.shaded.netty4.io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe$1.run(AbstractNioChannel.java:214)
INFO at org.apache.flink.shaded.netty4.io.netty.util.concurrent.PromiseTask$RunnableAdapter.call(PromiseTask.java:38)
INFO at org.apache.flink.shaded.netty4.io.netty.util.concurrent.ScheduledFutureTask.run(ScheduledFutureTask.java:120)
INFO at org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:357)
INFO at org.apache.flink.shaded.netty4.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:357)
INFO at org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111)
INFO at org.apache.flink.shaded.netty4.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:137)
INFO at java.lang.Thread.run(Thread.java:748)
INFO Caused by: org.apache.flink.runtime.concurrent.FutureUtils$RetryException: Could not complete the operation. Number of retries has been exhausted.
INFO at org.apache.flink.runtime.concurrent.FutureUtils.lambda$retryOperationWithDelay$5(FutureUtils.java:213)
INFO ... 17 more
INFO Caused by: java.util.concurrent.CompletionException: org.apache.flink.shaded.netty4.io.netty.channel.ConnectTimeoutException: connection timed out: shd-hdp-b-slave-01...
INFO at java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:292)
INFO at java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:308)
INFO at java.util.concurrent.CompletableFuture.uniCompose(CompletableFuture.java:943)
INFO at java.util.concurrent.CompletableFuture$UniCompose.tryFire(CompletableFuture.java:926)
INFO ... 15 more
INFO Caused by: org.apache.flink.shaded.netty4.io.netty.channel.ConnectTimeoutException: connection timed out: shd-hdp-b-slave-017.example.com/some.ip.address:46500
INFO at org.apache.flink.shaded.netty4.io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe$1.run(AbstractNioChannel.java:212)
INFO ... 7 more
What changed in FLIP-6 that might cause this behavior, and how can I fix it?
For our jobs on YARN with Flink 1.6, we had to bump up the web.timeout setting via -yD web.timeout=100000.
In our case, there was a firewall between the machine submitting the job and our Hadoop cluster.
In newer Flink versions (1.7 and up) Flink uses REST to submit jobs. On YARN setups the port number for this REST service is random and could not be set.
Flink 1.8.0 introduced a config option to set it to a fixed port or port range:
rest.bind-port: 55520-55530
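Putting both workarounds together, a minimal flink-conf.yaml sketch (the port range is the one quoted above and must match whatever your firewall allows):

# flink-conf.yaml
web.timeout: 100000          # give the client more time before the submission times out
rest.bind-port: 55520-55530  # pin the REST endpoint to a fixed range (Flink 1.8.0+)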

Cannot load plugin events-log in Gerrit

EDIT:
There was an update to the plugin some weeks ago, and now the Jenkins log shows:
Aug 14, 2018 8:57:26 AM WARNING com.sonyericsson.hudson.plugins.gerrit.trigger.playback.GerritMissedEventsPlaybackManager performCheck
Missed Events Playback used to be NOT supported. now it IS!
Aug 14, 2018 8:57:26 AM INFO com.sonymobile.tools.gerrit.gerritevents.GerritConnection run
And GERRIT_SITE/logs/error_log says the plugin is loaded:
[2018-08-14 10:56:57,213] [ShutdownCallback] INFO com.google.gerrit.pgm.Daemon : caught shutdown, cleaning up
[2018-08-14 10:56:57,380] [ShutdownCallback] INFO org.eclipse.jetty.server.AbstractConnector : Stopped ServerConnector#500beb9f{HTTP/1.1,[http/1.1]}{127.0.0.1:8081}
[2018-08-14 10:56:57,403] [ShutdownCallback] INFO org.eclipse.jetty.server.handler.ContextHandler : Stopped o.e.j.s.ServletContextHandler#3437fc4f{/,null,UNAVAILABLE}
[2018-08-14 10:56:57,469] [ShutdownCallback] WARN org.apache.sshd.server.channel.ChannelSession : doCloseImmediately(ChannelSession[id=1, recipient=1]-ServerSessionIm$
[2018-08-14 10:56:57,508] [ShutdownCallback] INFO com.google.gerrit.sshd.SshDaemon : Stopped Gerrit SSHD
[2018-08-14 10:57:21,044] [main] WARN com.google.gerrit.sshd.SshDaemon : Cannot format SSHD host key [EdDSA]: invalid key type
[2018-08-14 10:57:21,061] [main] WARN com.google.gerrit.server.config.GitwebCgiConfig : gitweb not installed (no /usr/lib/cgi-bin/gitweb.cgi found)
[2018-08-14 10:57:22,289] [main] INFO org.eclipse.jetty.util.log : Logging initialized #15822ms
[2018-08-14 10:57:22,430] [main] INFO com.google.gerrit.server.git.LocalDiskRepositoryManager : Defaulting core.streamFileThreshold to 1339m
[2018-08-14 10:57:22,784] [main] INFO com.google.gerrit.server.plugins.PluginLoader : Loading plugins from /opt/gerrit/plugins
[2018-08-14 10:57:23,056] [main] INFO com.google.gerrit.server.plugins.PluginLoader : Loaded plugin delete-project, version v2.13-61-g8d6b23b122
[2018-08-14 10:57:23,500] [main] INFO com.google.gerrit.server.plugins.PluginLoader : Loaded plugin events-log, version v2.13-66-ge95af940c6
[2018-08-14 10:57:24,150] [main] INFO com.google.gerrit.server.git.GarbageCollectionRunner : Ignoring missing gc schedule configuration
[2018-08-14 10:57:24,151] [main] INFO com.google.gerrit.server.config.ScheduleConfig : accountDeactivation schedule parameter "accountDeactivation.interval" is not co$
[2018-08-14 10:57:24,151] [main] INFO com.google.gerrit.server.change.ChangeCleanupRunner : Ignoring missing changeCleanup schedule configuration
[2018-08-14 10:57:24,295] [main] INFO com.google.gerrit.sshd.SshDaemon : Started Gerrit SSHD-CORE-1.6.0 on *:29418
[2018-08-14 10:57:24,298] [main] INFO org.eclipse.jetty.server.Server : jetty-9.3.18.v20170406
[2018-08-14 10:57:25,454] [main] INFO org.eclipse.jetty.server.handler.ContextHandler : Started o.e.j.s.ServletContextHandler#73f0b216{/,null,AVAILABLE}
[2018-08-14 10:57:25,475] [main] INFO org.eclipse.jetty.server.AbstractConnector : Started ServerConnector#374013e8{HTTP/1.1,[http/1.1]}{127.0.0.1:8081}
[2018-08-14 10:57:25,476] [main] INFO org.eclipse.jetty.server.Server : Started #19011ms
[2018-08-14 10:57:25,478] [main] INFO com.google.gerrit.pgm.Daemon : Gerrit Code Review 2.15.1 ready
So now this is solved.
I am trying to solve the issue with the Missed Events Playback warning I get in Jenkins.
I've enabled the REST API in Jenkins with my generated HTTP password from the Gerrit web UI.
So my issue is with the events-log plugin.
I've installed the events-log.jar plugin under GERRIT_SITE/gerrit/plugins.
This directory has drwxr-xr-x permissions.
GERRIT_SITE/gerrit/logs/error_log gives me this when restarting:
[2018-06-21 13:40:34,678] [main] WARN com.google.gerrit.sshd.SshDaemon : Cannot format SSHD host key [EdDSA]: invalid key type
[2018-06-21 13:40:34,697] [main] WARN com.google.gerrit.server.config.GitwebCgiConfig : gitweb not installed (no /usr/lib/cgi-bin/gitweb.cgi found)
[2018-06-21 13:40:35,761] [main] INFO org.eclipse.jetty.util.log : Logging initialized #11099ms
[2018-06-21 13:40:35,925] [main] INFO com.google.gerrit.server.git.LocalDiskRepositoryManager : Defaulting core.streamFileThreshold to 1339m
[2018-06-21 13:40:36,410] [main] INFO com.google.gerrit.server.plugins.PluginLoader : Removing stale plugin file: plugin_events-log_180621_1333_5163201567282630382.jar
[2018-06-21 13:40:36,410] [main] INFO com.google.gerrit.server.plugins.PluginLoader : Loading plugins from /opt/gerrit/plugins
[2018-06-21 13:40:36,528] [main] INFO com.google.gerrit.server.plugins.PluginLoader : Loaded plugin delete-project, version v2.13-61-g8d6b23b122
[2018-06-21 13:40:36,614] [main] WARN com.google.gerrit.server.plugins.PluginLoader : Cannot load plugin events-log
java.lang.NoSuchMethodError: com.google.gerrit.server.git.WorkQueue.createQueue(ILjava/lang/String;)Ljava/util/concurrent/ScheduledThreadPoolExecutor;
at com.ericsson.gerrit.plugins.eventslog.EventQueue.start(EventQueue.java:35)
at com.google.gerrit.lifecycle.LifecycleManager.start(LifecycleManager.java:92)
at com.google.gerrit.server.plugins.ServerPlugin.startPlugin(ServerPlugin.java:251)
at com.google.gerrit.server.plugins.ServerPlugin.start(ServerPlugin.java:192)
at com.google.gerrit.server.plugins.PluginLoader.runPlugin(PluginLoader.java:491)
at com.google.gerrit.server.plugins.PluginLoader.rescan(PluginLoader.java:419)
at com.google.gerrit.server.plugins.PluginLoader.start(PluginLoader.java:324)
at com.google.gerrit.lifecycle.LifecycleManager.start(LifecycleManager.java:92)
at com.google.gerrit.pgm.Daemon.start(Daemon.java:349)
at com.google.gerrit.pgm.Daemon.run(Daemon.java:256)
at com.google.gerrit.pgm.util.AbstractProgram.main(AbstractProgram.java:61)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at com.google.gerrit.launcher.GerritLauncher.invokeProgram(GerritLauncher.java:223)
at com.google.gerrit.launcher.GerritLauncher.mainImpl(GerritLauncher.java:119)
at com.google.gerrit.launcher.GerritLauncher.main(GerritLauncher.java:63)
at Main.main(Main.java:24)
[2018-06-21 13:40:36,687] [main] INFO com.google.gerrit.server.plugins.PluginLoader : Loaded plugin gitiles, version dd264dd2d4
[2018-06-21 13:40:36,728] [main] INFO com.google.gerrit.server.plugins.PluginLoader : Loaded plugin its-jira, version v2.15
[2018-06-21 13:40:37,034] [main] INFO com.google.gerrit.server.git.GarbageCollectionRunner : Ignoring missing gc schedule configuration
[2018-06-21 13:40:37,034] [main] INFO com.google.gerrit.server.config.ScheduleConfig : accountDeactivation schedule parameter "accountDeactivation.interval" is not configured
[2018-06-21 13:40:37,034] [main] INFO com.google.gerrit.server.change.ChangeCleanupRunner : Ignoring missing changeCleanup schedule configuration
[2018-06-21 13:40:37,060] [main] INFO com.google.gerrit.sshd.SshDaemon : Started Gerrit SSHD-CORE-1.6.0 on *:29418
[2018-06-21 13:40:37,074] [main] INFO org.eclipse.jetty.server.Server : jetty-9.3.18.v20170406
[2018-06-21 13:40:38,104] [main] INFO org.eclipse.jetty.server.handler.ContextHandler : Started o.e.j.s.ServletContextHandler#2c8469fe{/,null,AVAILABLE}
[2018-06-21 13:40:38,113] [main] INFO org.eclipse.jetty.server.AbstractConnector : Started ServerConnector#3803bc1a{HTTP/1.1,[http/1.1]}{127.0.0.1:8081}
[2018-06-21 13:40:38,115] [main] INFO org.eclipse.jetty.server.Server : Started #13456ms
[2018-06-21 13:40:38,118] [main] INFO com.google.gerrit.pgm.Daemon : Gerrit Code Review 2.15.1 ready
I would like some help with why this plugin is not loading/enabled while the other plugins work.
Note 1: Jenkins v2.107.2 and Gerrit v2.15.1 are installed on different Linux-based servers, and I am able to trigger a build from Gerrit.
Note 2: I tried both the plugin-manager (uninstalled for now) and wget https://gerrit-ci.gerritforge.com/view/Plugins-stable-2.15/job/plugin-events-log-bazel-stable-2.15/lastSuccessfulBuild/artifact/bazel-genfiles/plugins/events-log/events-log.jar, which is the way I'm doing it now.
Note 3: events-log in gerrit.config looks like this:
[plugin "events-log"]
maxAge = 20
returnLimit = 10000
storeDriver = org.postgresql.Driver
storeUsername = gerrit
storeUrl = jdbc:postgresql:/var/lib/postgresql/9.5/main
urlOptions = loglevel=INFO
urlOptions = logUnclosedConnections=true
copyLocal = true
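As a further check: a NoSuchMethodError on a Gerrit internal class (WorkQueue.createQueue here) usually means the plugin jar was built against a different Gerrit API version, so it can help to list what the running server actually loaded; a sketch (admin rights and the default SSH port 29418 are assumed, <gerrit-host> is a placeholder):

# list loaded plugins and their versions via the Gerrit SSH API
ssh -p 29418 admin@<gerrit-host> gerrit plugin ls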