Micronaut's EmbeddedServer startup extremely slow on macOS Mojave

I created a Micronaut "Hello World!" application and a JUnit test according to the Micronaut user guide:
https://docs.micronaut.io/latest/guide/index.html#creatingClient
on macOS Mojave (10.14) with Java 1.8.0_25-b17.
Unit test:
package hello;

import io.micronaut.http.HttpStatus;
import io.micronaut.http.client.RxHttpClient;
import io.micronaut.runtime.server.EmbeddedServer;
import io.micronaut.test.annotation.MicronautTest;
import org.junit.jupiter.api.Test;

import javax.inject.Inject;

import static org.junit.jupiter.api.Assertions.assertEquals;

@MicronautTest
public class HelloControllerTest {

    @Inject
    EmbeddedServer embeddedServer;

    @Test
    public void testIndex() throws Exception {
        // or (instead of the @Inject):
        // EmbeddedServer embeddedServer = ApplicationContext.run(EmbeddedServer.class);
        try (RxHttpClient client = embeddedServer.getApplicationContext().createBean(RxHttpClient.class, embeddedServer.getURL())) {
            assertEquals(HttpStatus.OK, client.toBlocking().exchange("/hello").status());
        }
    }
}
The "Hello World!" application starts up quickly (about a second). The JUnit test, however, takes more than 75 seconds to complete. It 'hangs' on the following line for 75 seconds:
server = ApplicationContext.run(EmbeddedServer.class);
Suggested fix in /etc/hosts doesn't work
I've tried the suggested fix (adding the hostname to /etc/hosts after the "127.0.0.1 localhost" and "::1 localhost" entries, both with and without the '.local' suffix) from the troubleshooting section of the guide:
https://docs.micronaut.io/latest/guide/index.html#problems
("JVM takes a long time to resolve ip-address for localhost")
with no luck. I restarted my machine after every change to /etc/hosts.
Hostname resolution does not seem to be the problem, though: I tested it with the inetTester.jar mentioned in the link above (download here: https://github.com/thoeni/inetTester) and the lookup takes only 6 ms. I guess it must be something else.
(On the other hand, everyone who had problems with Micronaut application startup time on macOS (which I do not) and fixed them by adding the hostname to /etc/hosts also mentions this same ~75-second delay -- that can hardly be a coincidence?)
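For reference, a minimal sketch of the kind of check inetTester performs (assuming it simply times the JVM's local host lookup; not the jar's actual code):

    import java.net.InetAddress;

    public class LocalhostLookupTimer {
        public static void main(String[] args) throws Exception {
            long start = System.currentTimeMillis();
            InetAddress addr = InetAddress.getLocalHost();      // the lookup the Micronaut troubleshooting entry blames
            String canonical = addr.getCanonicalHostName();     // forces the reverse lookup as well
            System.out.println(canonical + " resolved in " + (System.currentTimeMillis() - start) + " ms");
        }
    }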
Log file
The two log lines immediately before and after the 75-second pause:
22:55:06.834 [main] DEBUG i.m.context.DefaultBeanContext - Registering singleton bean io.micronaut.http.server.netty.NioEventLoopGroupFactory@4b1c0397 for type [io.micronaut.http.server.netty.EventLoopGroupFactory] using bean key io.micronaut.http.server.netty.NioEventLoopGroupFactory
22:56:22.040 [main] DEBUG io.micronaut.context.lifecycle - Created bean [io.micronaut.http.server.netty.NettyHttpServer@2fe88a09] from definition [Definition: io.micronaut.http.server.netty.NettyHttpServer] with qualifier [null]
And a bit of context:
22:55:06.833 [main] DEBUG i.m.context.DefaultBeanContext - Finding candidate beans for type: interface io.micronaut.http.server.netty.ssl.ServerSslBuilder
22:55:06.833 [main] DEBUG i.m.context.DefaultBeanContext - Resolved bean candidates [] for type: interface io.micronaut.http.server.netty.ssl.ServerSslBuilder
22:55:06.833 [main] DEBUG i.m.context.DefaultBeanContext - Resolving beans for type: io.netty.channel.ChannelOutboundHandler
22:55:06.833 [main] TRACE i.m.context.DefaultBeanContext - Looking up existing beans for key: io.netty.channel.ChannelOutboundHandler
22:55:06.833 [main] TRACE i.m.context.DefaultBeanContext - No beans found for key: io.netty.channel.ChannelOutboundHandler
22:55:06.833 [main] DEBUG i.m.context.DefaultBeanContext - Finding candidate beans for type: interface io.netty.channel.ChannelOutboundHandler
22:55:06.833 [main] DEBUG i.m.context.DefaultBeanContext - Resolved bean candidates [] for type: interface io.netty.channel.ChannelOutboundHandler
22:55:06.833 [main] DEBUG i.m.context.DefaultBeanContext - Found no possible candidate beans of type [io.netty.channel.ChannelOutboundHandler] for qualifier: null
22:55:06.833 [main] TRACE i.m.context.DefaultBeanContext - Looking up existing bean for key: io.micronaut.http.server.netty.EventLoopGroupFactory
22:55:06.833 [main] TRACE i.m.context.DefaultBeanContext - No existing bean found for bean key: io.micronaut.http.server.netty.EventLoopGroupFactory
22:55:06.833 [main] DEBUG i.m.context.DefaultBeanContext - Finding candidate beans for type: interface io.micronaut.http.server.netty.EventLoopGroupFactory
22:55:06.834 [main] DEBUG i.m.context.DefaultBeanContext - Finding candidate beans for type: class io.micronaut.http.server.netty.EpollEventLoopGroupFactory
22:55:06.834 [main] DEBUG i.m.context.DefaultBeanContext - Resolved bean candidates [] for type: class io.micronaut.http.server.netty.EpollEventLoopGroupFactory
22:55:06.834 [main] DEBUG i.m.context.DefaultBeanContext - Finding candidate beans for type: class io.micronaut.http.server.netty.KQueueEventLoopGroupFactory
22:55:06.834 [main] DEBUG i.m.context.DefaultBeanContext - Resolved bean candidates [] for type: class io.micronaut.http.server.netty.KQueueEventLoopGroupFactory
22:55:06.834 [main] DEBUG i.m.context.DefaultBeanContext - Resolved bean candidates [Definition: io.micronaut.http.server.netty.NioEventLoopGroupFactory] for type: interface io.micronaut.http.server.netty.EventLoopGroupFactory
22:55:06.834 [main] DEBUG i.m.context.DefaultBeanContext - Finalized bean definitions candidates: [Definition: io.micronaut.http.server.netty.NioEventLoopGroupFactory]
22:55:06.834 [main] DEBUG i.m.context.DefaultBeanContext - Found concrete candidate [Definition: io.micronaut.http.server.netty.NioEventLoopGroupFactory] for type: io.micronaut.http.server.netty.EventLoopGroupFactory
22:55:06.834 [main] DEBUG io.micronaut.context.lifecycle - Created bean [io.micronaut.http.server.netty.NioEventLoopGroupFactory@4b1c0397] from definition [Definition: io.micronaut.http.server.netty.NioEventLoopGroupFactory] with qualifier [null]
22:55:06.834 [main] DEBUG i.m.context.DefaultBeanContext - Registering singleton bean io.micronaut.http.server.netty.NioEventLoopGroupFactory@4b1c0397 for type [io.micronaut.http.server.netty.EventLoopGroupFactory] using bean key io.micronaut.http.server.netty.NioEventLoopGroupFactory
22:56:22.040 [main] DEBUG io.micronaut.context.lifecycle - Created bean [io.micronaut.http.server.netty.NettyHttpServer@2fe88a09] from definition [Definition: io.micronaut.http.server.netty.NettyHttpServer] with qualifier [null]
22:56:22.041 [main] DEBUG i.m.context.DefaultBeanContext - Registering singleton bean io.micronaut.http.server.netty.NettyHttpServer@2fe88a09 for type [io.micronaut.runtime.server.EmbeddedServer] using bean key io.micronaut.http.server.netty.NettyHttpServer
22:56:22.042 [main] DEBUG i.n.c.MultithreadEventLoopGroup - -Dio.netty.eventLoopThreads: 16
22:56:22.050 [main] DEBUG io.netty.channel.nio.NioEventLoop - -Dio.netty.noKeySetOptimization: false
22:56:22.050 [main] DEBUG io.netty.channel.nio.NioEventLoop - -Dio.netty.selectorAutoRebuildThreshold: 512
22:56:22.056 [main] DEBUG i.n.util.internal.PlatformDependent - org.jctools-core.MpscChunkedArrayQueue: available
22:56:22.063 [main] TRACE io.netty.channel.nio.NioEventLoop - instrumented a special java.util.Set into: sun.nio.ch.KQueueSelectorImpl@2ca6546f
22:56:22.063 [main] TRACE io.netty.channel.nio.NioEventLoop - instrumented a special java.util.Set into: sun.nio.ch.KQueueSelectorImpl@6b63d445
22:56:22.063 [main] TRACE io.netty.channel.nio.NioEventLoop - instrumented a special java.util.Set into: sun.nio.ch.KQueueSelectorImpl@7578e06a
22:56:22.064 [main] TRACE io.netty.channel.nio.NioEventLoop - instrumented a special java.util.Set into: sun.nio.ch.KQueueSelectorImpl@30b2b76f
22:56:22.064 [main] TRACE io.netty.channel.nio.NioEventLoop - instrumented a special java.util.Set into: sun.nio.ch.KQueueSelectorImpl@56da52a7
22:56:22.064 [main] TRACE io.netty.channel.nio.NioEventLoop - instrumented a special java.util.Set into: sun.nio.ch.KQueueSelectorImpl@23ee75c5

I had a similar issue where the compiled native image took over 40 seconds to start. The problem in my case was environment detection; disabling it solved the problem for me. I did it in code, but you can also do it via properties, as explained in the docs.
fun main(args: Array<String>) {
    Micronaut.build()
        .packages("com.example")
        .deduceEnvironment(false) // this line did the trick
        .mainClass(Application.javaClass)
        .start()
}
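If I read the docs right, the same thing can be done without touching code by disabling environment deduction via a system property (property name taken from the Micronaut environment docs; the jar path below is a placeholder):

    java -Dmicronaut.env.deduction=false -jar build/libs/app-all.jar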

For anybody arriving here with the same problem: for me, only the hosts-file edit fixes it.
Normal startup takes between 1.6 and 6-8 seconds, depending on what is in /etc/hosts:
127.0.0.1 localhost -> 6-8s startup
127.0.0.1 localhost MacBook-Pro.local -> 1.6s startup.
So essentially, just add your hostname (the output of the hostname command) to the 127.0.0.1 entry in the hosts file.
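One quick way to do that is a sketch like the following; it appends an extra 127.0.0.1 line rather than editing the existing one (which is usually enough), so double-check the file afterwards:

    sudo sh -c 'echo "127.0.0.1 localhost $(hostname)" >> /etc/hosts'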

I had the same issue; in my case it was 2 minutes and 20 seconds of waiting. A simple workaround is to specify a server port in the Micronaut application configuration, like this:
micronaut:
  server:
    host: localhost
    port: 8080
But this will not work when you want to run more than one test at once, because the fixed port will clash.
I think the problem is caused by the search for an available port, implemented in SocketUtils.findAvailableTcpPort() and executed in the constructor of the NettyHttpServer class when no port is specified and the environment is Environment.TEST.
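A quick way to check whether that port scan is actually the slow part on a given machine is a sketch along these lines (assuming the class in question is io.micronaut.core.io.socket.SocketUtils):

    import io.micronaut.core.io.socket.SocketUtils;

    public class PortScanTimer {
        public static void main(String[] args) {
            long start = System.nanoTime();
            int port = SocketUtils.findAvailableTcpPort();   // what NettyHttpServer does in the TEST environment
            long elapsedMs = (System.nanoTime() - start) / 1_000_000;
            System.out.println("Found free port " + port + " in " + elapsedMs + " ms");
        }
    }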
Update: in my experience the problem appears only on some networks. For example, I don't have any problem on my home network, but I did have it on a hotel network. Domain name lookup can probably influence this, so it may be worth trying a different DNS server.

It seems to come from the network.
When I unplugged my cable and turned off my Wi-Fi, I got this:
12:34:31.324 [main] INFO i.m.context.env.DefaultEnvironment - Established active environments: [test]
12:34:32.061 [main] WARN i.netty.util.internal.MacAddressUtil - Failed to find a usable hardware address from the network interfaces; using random bytes: 11:02:e9:bf:a8:0e:df:83
and my test ran in 267 ms (otherwise it takes about 30 s).

Related

Connections handling

So I've been using Karate for a while now, and there is an issue we have been facing for over a year: org.apache.http.conn.ConnectTimeoutException
Other threads about that connection-timeout exception said it was solvable by specifying a proxy, but that did not help us.
After tons of investigation, it turned out that our Azure SNAT was exhausted, meaning Karate was opening way too many connections.
To verify this, I enabled debug logging and used this feature:
Background:
  * url "https://www.karatelabs.io/"
Scenario:
  * method GET
  * method GET
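For anyone wanting to reproduce this, a logback-test.xml roughly like the following enables the relevant debug output (a sketch; the logger names are the ones visible in the output below):

    <configuration>
        <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
            <encoder>
                <pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n</pattern>
            </encoder>
        </appender>
        <!-- Apache HttpClient connection pool / TLS details -->
        <logger name="org.apache.http" level="DEBUG"/>
        <!-- Karate's own request/response logging -->
        <logger name="com.intuit.karate" level="DEBUG"/>
        <root level="INFO">
            <appender-ref ref="STDOUT"/>
        </root>
    </configuration>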
The logs then had the following lines:
13:10:17.868 [main] DEBUG com.intuit.karate - request:
1 > GET https://www.karatelabs.io/
1 > Host: www.karatelabs.io
1 > Connection: Keep-Alive
1 > User-Agent: Apache-HttpClient/4.5.13 (Java/17.0.4.1)
1 > Accept-Encoding: gzip,deflate
13:10:17.868 [main] DEBUG o.a.h.i.c.PoolingHttpClientConnectionManager - Connection request: [route: {s}->https://www.karatelabs.io:443][total available: 0; route allocated: 0 of 5; total allocated: 0 of 10]
13:10:17.874 [main] DEBUG o.a.h.i.c.PoolingHttpClientConnectionManager - Connection leased: [id: 0][route: {s}->https://www.karatelabs.io:443][total available: 0; route allocated: 1 of 5; total allocated: 1 of 10]
13:10:17.875 [main] DEBUG o.a.h.impl.execchain.MainClientExec - Opening connection {s}->https://www.karatelabs.io:443
13:10:17.883 [main] DEBUG o.a.h.i.c.DefaultHttpClientConnectionOperator - Connecting to www.karatelabs.io/34.149.87.45:443
13:10:17.883 [main] DEBUG o.a.h.c.s.SSLConnectionSocketFactory - Connecting socket to www.karatelabs.io/34.149.87.45:443 with timeout 30000
13:10:17.924 [main] DEBUG o.a.h.c.s.SSLConnectionSocketFactory - Enabled protocols: [TLSv1.3, TLSv1.2]
13:10:17.924 [main] DEBUG o.a.h.c.s.SSLConnectionSocketFactory - Enabled cipher suites:[...]
13:10:17.924 [main] DEBUG o.a.h.c.s.SSLConnectionSocketFactory - Starting handshake
13:10:18.012 [main] DEBUG o.a.h.c.s.SSLConnectionSocketFactory - Secure session established
13:10:18.012 [main] DEBUG o.a.h.c.s.SSLConnectionSocketFactory - negotiated protocol: TLSv1.3
13:10:18.012 [main] DEBUG o.a.h.c.s.SSLConnectionSocketFactory - negotiated cipher suite: TLS_AES_256_GCM_SHA384
13:10:18.012 [main] DEBUG o.a.h.c.s.SSLConnectionSocketFactory - peer principal: CN=karatelabs.io
13:10:18.012 [main] DEBUG o.a.h.c.s.SSLConnectionSocketFactory - peer alternative names: [karatelabs.io, www.karatelabs.io]
13:10:18.012 [main] DEBUG o.a.h.c.s.SSLConnectionSocketFactory - issuer principal: CN=Sectigo RSA Domain Validation Secure Server CA, O=Sectigo Limited, L=Salford, ST=Greater Manchester, C=GB
13:10:18.014 [main] DEBUG o.a.h.i.c.DefaultHttpClientConnectionOperator - Connection established localIp<->serverIp
13:10:18.015 [main] DEBUG o.a.h.i.c.DefaultManagedHttpClientConnection - http-outgoing-0: set socket timeout to 120000
13:10:18.015 [main] DEBUG o.a.h.impl.execchain.MainClientExec - Executing request GET / HTTP/1.1
...
13:10:18.066 [main] DEBUG o.a.h.impl.execchain.MainClientExec - Connection can be kept alive indefinitely
...
...
13:10:18.196 [main] DEBUG com.intuit.karate - request:
2 > GET https://www.karatelabs.io/
13:10:18.196 [main] DEBUG o.a.h.i.c.PoolingHttpClientConnectionManager - Connection request: [route: {s}->https://www.karatelabs.io:443][total available: 0; route allocated: 0 of 5; total allocated: 0 of 10]
13:10:18.196 [main] DEBUG o.a.h.i.c.PoolingHttpClientConnectionManager - Connection leased: [id: 1][route: {s}->https://www.karatelabs.io:443][total available: 0; route allocated: 1 of 5; total allocated: 1 of 10]
13:10:18.196 [main] DEBUG o.a.h.impl.execchain.MainClientExec - Opening connection {s}->https://www.karatelabs.io:443
13:10:18.196 [main] DEBUG o.a.h.i.c.DefaultHttpClientConnectionOperator - Connecting to www.karatelabs.io/34.149.87.45:443
13:10:18.196 [main] DEBUG o.a.h.c.s.SSLConnectionSocketFactory - Connecting socket to www.karatelabs.io/34.149.87.45:443 with timeout 30000
13:10:18.206 [main] DEBUG o.a.h.c.s.SSLConnectionSocketFactory - Enabled protocols: [TLSv1.3, TLSv1.2]
13:10:18.206 [main] DEBUG o.a.h.c.s.SSLConnectionSocketFactory - Enabled cipher suites:[...]
13:10:18.206 [main] DEBUG o.a.h.c.s.SSLConnectionSocketFactory - Starting handshake
13:10:18.236 [main] DEBUG o.a.h.c.s.SSLConnectionSocketFactory - Secure session established
13:10:18.236 [main] DEBUG o.a.h.c.s.SSLConnectionSocketFactory - negotiated protocol: TLSv1.3
13:10:18.236 [main] DEBUG o.a.h.c.s.SSLConnectionSocketFactory - negotiated cipher suite: TLS_AES_256_GCM_SHA384
13:10:18.236 [main] DEBUG o.a.h.c.s.SSLConnectionSocketFactory - peer principal: CN=karatelabs.io
13:10:18.236 [main] DEBUG o.a.h.c.s.SSLConnectionSocketFactory - peer alternative names: [karatelabs.io, www.karatelabs.io]
13:10:18.236 [main] DEBUG o.a.h.c.s.SSLConnectionSocketFactory - issuer principal: CN=Sectigo RSA Domain Validation Secure Server CA, O=Sectigo Limited, L=Salford, ST=Greater Manchester, C=GB
13:10:18.236 [main] DEBUG o.a.h.i.c.DefaultHttpClientConnectionOperator - Connection established localIp<->serverIp
13:10:18.236 [main] DEBUG o.a.h.i.c.DefaultManagedHttpClientConnection - http-outgoing-1: set socket timeout to 120000
...
13:10:18.279 [main] DEBUG o.a.h.impl.execchain.MainClientExec - Connection can be kept alive indefinitely
...
...
13:10:18.609 [Finalizer] DEBUG o.a.h.i.c.PoolingHttpClientConnectionManager - Connection manager is shutting down
13:10:18.610 [Finalizer] DEBUG o.a.h.i.c.DefaultManagedHttpClientConnection - http-outgoing-1: Shutdown connection
13:10:18.611 [Finalizer] DEBUG o.a.h.i.c.PoolingHttpClientConnectionManager - Connection manager shut down
13:10:18.612 [Finalizer] DEBUG o.a.h.i.c.PoolingHttpClientConnectionManager - Connection manager is shutting down
13:10:18.612 [Finalizer] DEBUG o.a.h.i.c.DefaultManagedHttpClientConnection - http-outgoing-2: Shutdown connection
13:10:18.612 [Finalizer] DEBUG o.a.h.i.c.PoolingHttpClientConnectionManager - Connection manager shut down
13:10:18.612 [Finalizer] DEBUG o.a.h.i.c.PoolingHttpClientConnectionManager - Connection manager is shutting down
"Connecting to socket" and "handshake" indicate that karate is establishing a new connection instead of using an already opened one, even though I am sending a request to the same host.
On the other hand, in longer scenarios I was seeing "http-outgoing-x: Shutdown connection" about a second after the connection was opened, in the middle of the run, despite having karate.configure('readTimeout', 120000) specified.
I don't think that was intentional, especially after seeing the "keep-alive" header and the "Connection can be kept alive indefinitely" line in the log.
That being said, is there any way to force Karate to reuse the same connection instead of establishing a new one for each request?
As far as we know, we use the Apache HTTP Client API the right way.
But you never know. The best thing is for you to dive into the code and see what we could be missing. Or you could provide a way to replicate it, following these instructions: https://github.com/karatelabs/karate/wiki/How-to-Submit-an-Issue

Error using HTTP POST request with void output in Karate framework

I am trying to run a POST request that does not have a response body; the output is void. I get the 200 response, but then it fails. I have already spoken to my colleagues at the company and they say it is not possible to execute this type of request in Karate. Is this a bug on the part of Karate's developers? Here's the error:
[ForkJoinPool-1-worker-1] DEBUG org.apache.http.impl.conn.PoolingHttpClientConnectionManager - Connection [id: 3][route: {s}->REMOVED:443] can be kept alive indefinitely
[ForkJoinPool-1-worker-1] DEBUG org.apache.http.impl.conn.DefaultManagedHttpClientConnection - http-outgoing-3: set socket timeout to 0
[ForkJoinPool-1-worker-1] DEBUG org.apache.http.impl.conn.PoolingHttpClientConnectionManager - Connection released: [id: 3][route: {s}->REMOVED:443][total available: 1; route allocated: 1 of 5; total allocated: 1 of 10]
[ForkJoinPool-1-worker-1] ERROR com.intuit.karate - java.lang.RuntimeException: java.io.EOFException, http call failed after 1601 milliseconds for URL:REMOVED
[ForkJoinPool-1-worker-1] ERROR com.intuit.karate - http request failed:
java.lang.RuntimeException: java.io.EOFException
There's a new version just released: https://github.com/intuit/karate/releases/tag/v1.0.0 - so try that first.
Do update here if that works. That said - the log you provided does not help at all. The best thing you can do is follow this process, so that the developers get a decent chance to understand what you mean by "does not have an output body" and "void": https://github.com/intuit/karate/wiki/How-to-Submit-an-Issue
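For what it's worth, a stripped-down feature along these lines (the URL and path here are placeholders, not a real endpoint) is the kind of repro that guide asks for:

    Feature: POST returning 200 with an empty response body
    Scenario:
        * url 'https://example.com'
        * path 'some-void-endpoint'
        * request ''
        * method post
        * status 200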

server.HiveServer2: Error starting priviledge synchonizer

Hive version 3.1.2
Hadoop components (HDFS/YARN/history server) with Kerberos authentication.
Hive Kerberos config:
hive.server2.authentication=KERBEROS
hive.server2.authentication.kerberos.principal=hiveserver2/_HOST@BDP.COM
hive.server2.authentication.kerberos.keytab=/etc/kerberos/hadoop/hiveserver2.bdp-05.keytab
hive.metastore.sasl.enabled=true
hive.metastore.kerberos.keytab.file=/etc/kerberos/hadoop/metastore.bdp-05.keytab
hive.metastore.kerberos.principal=metastore/_HOST@BDP.COM
First, start the Metastore:
./bin/hive --service metastore > /dev/null &
Nothing abnormal in the log.
Then start HiveServer2:
./bin/hive --service hiveserver2 > /dev/null &
Here are the startup logs:
2020-12-30T11:28:48,746 INFO [main] server.HiveServer2: Starting HiveServer2
2020-12-30T11:28:49,168 INFO [main] security.UserGroupInformation: Login successful for user hiveserver2/bigdata-server-05@BDP.COM using keytab file /etc/kerberos/hadoop/hiveserver2.bdp-05.keytab
2020-12-30T11:28:49,171 INFO [main] cli.CLIService: SPNego httpUGI not created, spNegoPrincipal: , ketabFile:
2020-12-30T11:28:49,187 INFO [main] SessionState: Hive Session ID = 0754b9bc-f2f9-4d4c-ab95-a7359764bc49
2020-12-30T11:28:50,052 INFO [main] session.SessionState: Created HDFS directory: /tmp/hive/hiveserver2/0754b9bc-f2f9-4d4c-ab95-a7359764bc49
2020-12-30T11:28:50,066 INFO [main] session.SessionState: Created local directory: /tmp/hive/0754b9bc-f2f9-4d4c-ab95-a7359764bc49
2020-12-30T11:28:50,069 INFO [main] session.SessionState: Created HDFS directory: /tmp/hive/hiveserver2/0754b9bc-f2f9-4d4c-ab95-a7359764bc49/_tmp_space.db
2020-12-30T11:28:50,600 INFO [main] metastore.HiveMetaStoreClient: Trying to connect to metastore with URI thrift://bigdata-server-05:9083
2020-12-30T11:28:50,605 INFO [main] metastore.HiveMetaStoreClient: HMSC::open(): Could not find delegation token. Creating KERBEROS-based thrift connection.
2020-12-30T11:28:50,653 INFO [main] metastore.HiveMetaStoreClient: Opened a connection to metastore, current connections: 1
2020-12-30T11:28:50,653 INFO [main] metastore.HiveMetaStoreClient: Connected to metastore.
2020-12-30T11:28:50,654 INFO [main] metastore.RetryingMetaStoreClient: RetryingMetaStoreClient proxy=class org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient ugi=hiveserver2/bigdata-server-05@BDP.COM (auth:KERBEROS) retries=1 delay=1 lifetime=0
2020-12-30T11:28:50,781 INFO [main] service.CompositeService: Operation log root directory is created: /tmp/hive/operation_logs
2020-12-30T11:28:50,783 INFO [main] service.CompositeService: HiveServer2: Background operation thread pool size: 100
2020-12-30T11:28:50,783 INFO [main] service.CompositeService: HiveServer2: Background operation thread wait queue size: 100
2020-12-30T11:28:50,783 INFO [main] service.CompositeService: HiveServer2: Background operation thread keepalive time: 10 seconds
2020-12-30T11:28:50,784 INFO [main] service.CompositeService: Connections limit are user: 0 ipaddress: 0 user-ipaddress: 0
2020-12-30T11:28:50,787 INFO [main] service.AbstractService: Service:OperationManager is inited.
2020-12-30T11:28:50,787 INFO [main] service.AbstractService: Service:SessionManager is inited.
2020-12-30T11:28:50,787 INFO [main] service.AbstractService: Service:CLIService is inited.
2020-12-30T11:28:50,787 INFO [main] service.AbstractService: Service:ThriftBinaryCLIService is inited.
2020-12-30T11:28:50,787 INFO [main] service.AbstractService: Service:HiveServer2 is inited.
2020-12-30T11:28:50,835 INFO [pool-7-thread-1] SessionState: Hive Session ID = 693b0399-aabd-42b5-a4b2-a4cebbd325d4
2020-12-30T11:28:50,838 INFO [main] results.QueryResultsCache: Initializing query results cache at /tmp/hive/_resultscache_
2020-12-30T11:28:50,844 INFO [pool-7-thread-1] session.SessionState: Created HDFS directory: /tmp/hive/hiveserver2/693b0399-aabd-42b5-a4b2-a4cebbd325d4
2020-12-30T11:28:50,844 INFO [main] results.QueryResultsCache: Query results cache: cacheDirectory /tmp/hive/_resultscache_/results-23ae949b-6894-4a17-8141-0eacf5fe5a63, maxCacheSize 2147483648, maxEntrySize 10485760, maxEntryLifetime 3600000
2020-12-30T11:28:50,846 INFO [pool-7-thread-1] session.SessionState: Created local directory: /tmp/hive/693b0399-aabd-42b5-a4b2-a4cebbd325d4
2020-12-30T11:28:50,849 INFO [pool-7-thread-1] session.SessionState: Created HDFS directory: /tmp/hive/hiveserver2/693b0399-aabd-42b5-a4b2-a4cebbd325d4/_tmp_space.db
2020-12-30T11:28:50,861 INFO [main] events.NotificationEventPoll: Initializing lastCheckedEventId to 0
2020-12-30T11:28:50,862 INFO [main] server.HiveServer2: Starting Web UI on port 10002
2020-12-30T11:28:50,885 INFO [pool-7-thread-1] metadata.HiveMaterializedViewsRegistry: Materialized views registry has been initialized
2020-12-30T11:28:50,894 INFO [main] util.log: Logging initialized @4380ms
2020-12-30T11:28:51,009 INFO [main] service.AbstractService: Service:OperationManager is started.
2020-12-30T11:28:51,009 INFO [main] service.AbstractService: Service:SessionManager is started.
2020-12-30T11:28:51,010 INFO [main] service.AbstractService: Service:CLIService is started.
2020-12-30T11:28:51,010 INFO [main] service.AbstractService: Service:ThriftBinaryCLIService is started.
2020-12-30T11:28:51,013 WARN [main] security.HadoopThriftAuthBridge: Client-facing principal not set. Using server-side setting: hiveserver2/_HOST@BDP.COM
2020-12-30T11:28:51,013 INFO [main] security.HadoopThriftAuthBridge: Logging in via CLIENT based principal
2020-12-30T11:28:51,019 INFO [main] security.UserGroupInformation: Login successful for user hiveserver2/bigdata-server-05@BDP.COM using keytab file /etc/kerberos/hadoop/hiveserver2.bdp-05.keytab
2020-12-30T11:28:51,019 INFO [main] security.HadoopThriftAuthBridge: Logging in via SERVER based principal
2020-12-30T11:28:51,023 INFO [main] security.UserGroupInformation: Login successful for user hiveserver2/bigdata-server-05@BDP.COM using keytab file /etc/kerberos/hadoop/hiveserver2.bdp-05.keytab
2020-12-30T11:28:51,030 INFO [main] delegation.AbstractDelegationTokenSecretManager: Updating the current master key for generating delegation tokens
2020-12-30T11:28:51,033 INFO [main] security.TokenStoreDelegationTokenSecretManager: New master key with key id=0
2020-12-30T11:28:51,034 INFO [Thread[Thread-8,5,main]] security.TokenStoreDelegationTokenSecretManager: Starting expired delegation token remover thread, tokenRemoverScanInterval=60 min(s)
2020-12-30T11:28:51,035 INFO [Thread[Thread-8,5,main]] delegation.AbstractDelegationTokenSecretManager: Updating the current master key for generating delegation tokens
2020-12-30T11:28:51,035 INFO [Thread[Thread-8,5,main]] security.TokenStoreDelegationTokenSecretManager: New master key with key id=1
2020-12-30T11:28:51,040 INFO [main] thrift.ThriftCLIService: Starting ThriftBinaryCLIService on port 10000 with 5...500 worker threads
2020-12-30T11:28:51,040 INFO [main] service.AbstractService: Service:HiveServer2 is started.
2020-12-30T11:28:51,041 ERROR [main] server.HiveServer2: Error starting priviledge synchonizer:
java.lang.NullPointerException: null
at org.apache.hive.service.server.HiveServer2.startPrivilegeSynchonizer(HiveServer2.java:985) ~[hive-service-3.1.2.jar:3.1.2]
at org.apache.hive.service.server.HiveServer2.start(HiveServer2.java:726) [hive-service-3.1.2.jar:3.1.2]
at org.apache.hive.service.server.HiveServer2.startHiveServer2(HiveServer2.java:1037) [hive-service-3.1.2.jar:3.1.2]
at org.apache.hive.service.server.HiveServer2.access$1600(HiveServer2.java:140) [hive-service-3.1.2.jar:3.1.2]
at org.apache.hive.service.server.HiveServer2$StartOptionExecutor.execute(HiveServer2.java:1305) [hive-service-3.1.2.jar:3.1.2]
at org.apache.hive.service.server.HiveServer2.main(HiveServer2.java:1149) [hive-service-3.1.2.jar:3.1.2]
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.8.0_271]
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:1.8.0_271]
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_271]
at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_271]
at org.apache.hadoop.util.RunJar.run(RunJar.java:318) [hadoop-common-3.1.3.jar:?]
at org.apache.hadoop.util.RunJar.main(RunJar.java:232) [hadoop-common-3.1.3.jar:?]
2020-12-30T11:28:51,044 INFO [main] server.HiveServer2: Shutting down HiveServer2
In my case, hiveserver2-site.xml was created by Apache Ranger when the ranger-hive-plugin was turned on. Once I disabled the ranger-hive-plugin, hiveserver2-site.xml was edited by Ranger again.
Here are the remaining configurations:
<configuration>
    <property>
        <name>hive.security.authorization.enabled</name>
        <value>true</value>
    </property>
    <property>
        <name>hive.security.authorization.manager</name>
        <value>org.apache.hadoop.hive.ql.security.authorization.DefaultHiveAuthorizationProvider</value>
    </property>
    <property>
        <name>hive.security.authenticator.manager</name>
        <value>org.apache.hadoop.hive.ql.security.HadoopDefaultAuthenticator</value>
    </property>
    <property>
        <name>hive.conf.restricted.list</name>
        <value>hive.security.authorization.enabled,hive.security.authorization.manager,hive.security.authenticator.manager</value>
    </property>
</configuration>
Starting HiveServer2 with this file still hits the previous exception.
Removing hiveserver2-site.xml makes everything work fine.
I don't know why. Can somebody explain?
Is this still relevant? If yes, check the logs. You should see that it tries to connect to ZooKeeper; if no quorum is configured, it will try to connect to localhost:2181, so either ZooKeeper must be running there or the ZK quorum servers should be configured.
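If the quorum does need to be declared, a sketch of the hive-site.xml property (the host names here are placeholders for your actual ZooKeeper servers):

    <property>
        <name>hive.zookeeper.quorum</name>
        <value>zk1.bdp.com:2181,zk2.bdp.com:2181,zk3.bdp.com:2181</value>
    </property>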

Run Codeception acceptance tests against a self-signed ssl site

I have a website with a Let's Encrypt SSL cert. When I run Codeception acceptance tests against it, the test stalls until I press Ctrl+Z. When I run the same test against a non-SSL site, there is no problem.
This is my setup in acceptance.suite.yml. The phantomjs.cli.args parameter is from this site: http://szdredd.blogspot.de/2013/10/codeception-phantomjs-setup-for.html
class_name: AcceptanceTester
modules:
    enabled: [WebDriver]
    config:
        WebDriver:
            url: https://www.domain.de/
            browser: phantomjs
My selenium log looks like this:
17:07:15.681 INFO - Executing: [new session: Capabilities [{browserName=phantomjs}]])
17:07:15.682 INFO - Creating a new session for Capabilities [{browserName=phantomjs}]
17:07:15.682 INFO - executable: /usr/bin/phantomjs
17:07:15.683 INFO - port: 27757
17:07:15.683 INFO - arguments: [--webdriver=27757, --webdriver-logfile=/phantomjsdriver.log]
17:07:15.683 INFO - environment: {}
PhantomJS is launching GhostDriver...
[INFO - 2016-02-20T17:07:15.754Z] GhostDriver - Main - running on port 27757
[INFO - 2016-02-20T17:07:15.765Z] Session [64316920-d7f4-11e5-a0c5-8954be0ea076] - CONSTRUCTOR - Desired Capabilities: {"browserName":"phantomjs"}
[INFO - 2016-02-20T17:07:15.765Z] Session [64316920-d7f4-11e5-a0c5-8954be0ea076] - CONSTRUCTOR - Negotiated Capabilities: {"browserName":"phantomjs","version":"1.9.0","driverName":"ghostdriver","driverVersion":"1.0.3","platform":"linux-unknown-64bit","javascriptEnabled":true,"takesScreenshot":true,"handlesAlerts":false,"databaseEnabled":false,"locationContextEnabled":false,"applicationCacheEnabled":false,"browserConnectionEnabled":false,"cssSelectorsEnabled":true,"webStorageEnabled":false,"rotatable":false,"acceptSslCerts":false,"nativeEvents":true,"proxy":{"proxyType":"direct"}}
[INFO - 2016-02-20T17:07:15.765Z] SessionManagerReqHand - _postNewSessionCommand - New Session Created: 64316920-d7f4-11e5-a0c5-8954be0ea076
17:07:15.771 INFO - Done: [new session: Capabilities [{browserName=phantomjs}]]
17:07:15.774 INFO - Executing: [implicitly wait: 0])
17:07:15.777 INFO - Done: [implicitly wait: 0]
17:07:15.790 INFO - Executing: [get: https://www.waldhelden.de/])
[INFO - 2016-02-20T17:07:33.916Z] SessionManagerReqHand - _cleanupWindowlessSessions - Asynchronous Sessions clean-up phase starting NOW
[INFO - 2016-02-20T17:08:55.442Z] SessionManagerReqHand - _cleanupWindowlessSessions - Asynchronous Sessions clean-up phase starting NOW
[INFO - 2016-02-20T17:09:02.008Z] SessionManagerReqHand - _cleanupWindowlessSessions - Asynchronous Sessions clean-up phase starting NOW
17:09:13.204 INFO - Session 7c5ef02c-9361-49c8-894d-234989179189 deleted due to client timeout
[INFO - 2016-02-20T17:09:13.211Z] ShutdownReqHand - _handle - About to shutdown
I found advice on this site, but when I add that configuration I get an error:
capabilities:
    phantomjs.cli.args: ['--ignore-ssl-errors=true']
Caused by: org.openqa.selenium.WebDriverException: The best matching driver provider org.openqa.selenium.htmlunit.HtmlUnitDriver can't create a new driver instance for Capabilities [{phantomjs.cli.args=[--ignore-ssl-errors=true], browserName=phantom}]
Does anyone know how to set up Codeception to ignore SSL errors? Any help is appreciated!
Thanks,
Udo
For testing my site I use Phantoman to automatically start and stop PhantomJS. In codeception.yml I have:
config:
    Codeception\Extension\Phantoman:
        path: 'vendor/bin/phantomjs'
        port: 4444
        debug: true
        ignoreSslErrors: true
        sslProtocol: any
    Codeception\Extension\Recorder:
        delete_successful: true
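If the extensions are not registered yet, they also need to be listed under extensions/enabled; a sketch of the surrounding codeception.yml structure, reusing the values above:

    extensions:
        enabled:
            - Codeception\Extension\Phantoman
            - Codeception\Extension\Recorder
        config:
            Codeception\Extension\Phantoman:
                path: 'vendor/bin/phantomjs'
                port: 4444
                ignoreSslErrors: true
                sslProtocol: any
            Codeception\Extension\Recorder:
                delete_successful: true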

How can I trace the failure of TSaslTransport (Hive related)?

I've been debugging a JDBC connection error in Hive, similar to what was asked here:
Hive JDBC getConnection does not return.
By turning on log4j properly, I finally got down to seeing the output below before the getConnection() hangs. What is Thrift waiting for? If this is related to using the wrong Thrift APIs, how can I determine versioning differences between client and server?
I have tried copying all libraries from my Hive server onto my client app, to test whether it is some kind of minor Thrift class versioning error, but that didn't solve the problem; my JDBC connection still hangs.
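For context, the hanging call is essentially the standard HiveServer2 JDBC connect; a minimal sketch (not my actual code; host, port, database, and credentials are placeholders):

    import java.sql.Connection;
    import java.sql.DriverManager;

    public class HiveConnectTest {
        public static void main(String[] args) throws Exception {
            Class.forName("org.apache.hive.jdbc.HiveDriver");
            // Hangs here; the SASL negotiation below is the last thing logged before the hang.
            Connection conn = DriverManager.getConnection(
                "jdbc:hive2://hive-host:10000/default", "user", "password");
            System.out.println("Connected: " + !conn.isClosed());
            conn.close();
        }
    }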
0 [main] DEBUG org.apache.thrift.transport.TSaslTransport - opening transport org.apache.thrift.transport.TSaslClientTransport@219ba640
0 [main] DEBUG org.apache.thrift.transport.TSaslTransport - opening transport org.apache.thrift.transport.TSaslClientTransport@219ba640
3 [main] DEBUG org.apache.thrift.transport.TSaslClientTransport - Sending mechanism name PLAIN and initial response of length 14
3 [main] DEBUG org.apache.thrift.transport.TSaslClientTransport - Sending mechanism name PLAIN and initial response of length 14
5 [main] DEBUG org.apache.thrift.transport.TSaslTransport - CLIENT: Writing message with status START and payload length 5
5 [main] DEBUG org.apache.thrift.transport.TSaslTransport - CLIENT: Writing message with status START and payload length 5
5 [main] DEBUG org.apache.thrift.transport.TSaslTransport - CLIENT: Writing message with status COMPLETE and payload length 14
5 [main] DEBUG org.apache.thrift.transport.TSaslTransport - CLIENT: Writing message with status COMPLETE and payload length 14
5 [main] DEBUG org.apache.thrift.transport.TSaslTransport - CLIENT: Start message handled
5 [main] DEBUG org.apache.thrift.transport.TSaslTransport - CLIENT: Start message handled
5 [main] DEBUG org.apache.thrift.transport.TSaslTransport - CLIENT: Main negotiation loop complete
5 [main] DEBUG org.apache.thrift.transport.TSaslTransport - CLIENT: Main negotiation loop complete
6 [main] DEBUG org.apache.thrift.transport.TSaslTransport - CLIENT: SASL Client receiving last message
6 [main] DEBUG org.apache.thrift.transport.TSaslTransport - CLIENT: SASL Client receiving last message