It takes a long time to connect to a GemFire server in my application - gemfire

I am setting up Spring Data GemFire in a project at my workplace. However, it takes a long time for my application to connect to the GemFire server and retrieve data.
2020-Mar-13 09:00:10 | TRACE | annotation.ScheduledAnnotationPostProcessor | No #Scheduled annotations found on bean class: class org.springframework.gemfire.client.CLientCacheFactoryBean
2020-Mar-13 09:00:10 | TRACE | support.DefaultListableBeanFactory |Finished creating instance of bean 'gemfireCache'
2020-Mar-13 09:00:42 | TRACE | internal.ClassPathLoader | getResource(gemfire.properties)
2020-Mar-13 09:00:42 | TRACE | internal.ClassPathLoader | getResource trying: sun.misc.Launcher$AppClassLoader#73d16e93
2020-Mar-13 09:00:42 | TRACE | internal.ClassPathLoader | getResource trying: java.net.Launcher$AppClassLoader#73d16e93
.....
Why does it take more than 40 seconds to go from "Finished creating instance of bean..." to "getResource..."?
Current Versions
Spring Data GemFire 2.2.1.RELEASE
Spring Boot 2.2.1.RELEASE
3 servers and 2 locators (installed on virtual machines)
Private Local Network
Using Java Serialization
Using Caching Proxy for my client
When I try a simple GemFire application at home, it takes around 5 seconds.
[TRACE] 2020-03-14 16:54:58.623 [main] CachedIntrospectionResults - Found bean property 'propertiesPersister' of type [org.springframework.util.PropertiesPersister]
[TRACE] 2020-03-14 16:54:58.623 [main] CachedIntrospectionResults - Found bean property 'singleton' of type [boolean]
[TRACE] 2020-03-14 16:54:58.626 [main] TypeConverterDelegate - Converting String to [class [Lorg.springframework.core.io.Resource;] using property editor [org.springframework.core.io.support.ResourceArrayPropertyEditor#669253b7
[TRACE] 2020-03-14 16:54:58.627 [main] PathMatchingResourcePatternResolver - Resolved classpath location [META-INF/gemfire-named-queries.properties] to resources []
[TRACE] 2020-03-14 16:54:58.627 [main] DefaultListableBeanFactory - Invoking afterPropertiesSet() on bean with name '(inner bean)#6cc0bcf6'
[TRACE] 2020-03-14 16:54:58.628 [main] DefaultListableBeanFactory - Finished creating instance of bean '(inner bean)#6cc0bcf6'
[TRACE] 2020-03-14 16:54:58.630 [main] DefaultListableBeanFactory - Finished creating instance of bean '(inner bean)#1a6f5124'
[TRACE] 2020-03-14 16:54:58.631 [main] DefaultListableBeanFactory - Creating instance of bean '(inner bean)#49a64d82'
[TRACE] 2020-03-14 16:54:58.635 [main] DefaultListableBeanFactory - Invoking afterPropertiesSet() on bean with name '(inner bean)#49a64d82'
[TRACE] 2020-03-14 16:54:58.635 [main] DefaultListableBeanFactory - Finished creating instance of bean '(inner bean)#49a64d82'
[TRACE] 2020-03-14 16:54:58.636 [main] DefaultListableBeanFactory - Returning cached instance of singleton bean 'gemfireCache'
[TRACE] 2020-03-14 16:55:03.415 [main] ClassPathLoader - getResource(gemfire.properties)
[TRACE] 2020-03-14 16:55:03.418 [main] ClassPathLoader - getResource trying: sun.misc.Launcher$AppClassLoader#73d16e93
[TRACE] 2020-03-14 16:55:03.418 [main] ClassPathLoader - getResource trying: java.net.URLClassLoader#1500e009
[TRACE] 2020-03-14 16:55:03.418 [main] ClassPathLoader - getResource(gfsecurity.properties)
[TRACE] 2020-03-14 16:55:03.419 [main] ClassPathLoader - getResource trying: sun.misc.Launcher$AppClassLoader#73d16e93
[TRACE] 2020-03-14 16:55:03.419 [main] ClassPathLoader - getResource trying: java.net.URLClassLoader#1500e009
[TRACE] 2020-03-14 16:55:03.429 [main] ClassPathLoader - forName(org.apache.logging.log4j.core.impl.Log4jContextFactory)
[TRACE] 2020-03-14 16:55:03.429 [main] ClassPathLoader - forName trying: sun.misc.Launcher$AppClassLoader#73d16e93
[TRACE] 2020-03-14 16:55:03.429 [main] ClassPathLoader - forName found by: sun.misc.Launcher$AppClassLoader#73d16e93 2020-03-14 16:55:03,432 main INFO Log4j Core is available and using Log4jProvider
[TRACE] 2020-03-14 16:55:03.434 [main] ClassPathLoader - forName(org.apache.geode.internal.logging.log4j.Log4jAgent)
[TRACE] 2020-03-14 16:55:03.434 [main] ClassPathLoader - forName trying: sun.misc.Launcher$AppClassLoader#73d16e93
[TRACE] 2020-03-14 16:55:03.435 [main] ClassPathLoader - forName found by: sun.misc.Launcher$AppClassLoader#73d16e93 020-03-14 16:55:03,436 main INFO Using org.apache.geode.internal.logging.log4j.Log4jAgent by default for service org.apache.geode.internal.logging.ProviderAgent
[TRACE] 2020-03-14 16:55:03.848 [main] ThreadContext - get() - in thread [main]
[DEBUG] 2020-03-14 16:55:03.854 [main] geode - LogWriter is created.
[DEBUG] 2020-03-14 16:55:03.854 [main] security - SecurityLogWriter is created.
[TRACE] 2020-03-14 16:55:03.858 [main] ClassPathLoader - getResource(org/apache/geode/internal/GemFireVersion.properties)
[TRACE] 2020-03-14 16:55:03.858 [main] ClassPathLoader - getResource trying: sun.misc.Launcher$AppClassLoader#73d16e93
[TRACE] 2020-03-14 16:55:03.858 [main] ClassPathLoader - getResource found by: sun.misc.Launcher$AppClassLoader#73d16e93
My simple GemFire application:
@SpringBootApplication
@ClientCacheApplication(locators = { @ClientCacheApplication.Locator(host = "192.167.0.5", port = 10311) }, socketBufferSize = 90000, subscriptionEnabled = true)
@EnableEntityDefinedRegions(basePackageClasses = Person.class, clientRegionShortcut = ClientRegionShortcut.CACHING_PROXY)
@EnableGemfireRepositories(basePackageClasses = PersonRepository.class)
public class Application {

In some cases it may simply be that you're having DNS lookup issues - especially since performance seems to be different between your home and other location(s).
You can try this:
Get your hostname using the hostname command.
Add this as an alias to your /etc/hosts file for 127.0.0.1. For example if your hostname looks something like foobar.local then your /etc/hosts should be adjusted to include this:
127.0.0.1 localhost foobar foobar.local
Notice that I added both the FQDN (including .local as returned by hostname) as well as the shorter version.
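If you want to check and apply this quickly, here is a minimal sketch, assuming a macOS/Linux shell and that your machine is called foobar.local (the names are placeholders; adjust them to whatever the hostname command actually prints):
# Print the hostname the JVM will try to resolve (assume it prints foobar.local)
hostname
# Append an alias for it to the loopback entry (hypothetical names; adjust to your output)
echo "127.0.0.1 localhost foobar foobar.local" | sudo tee -a /etc/hosts
# Verify the name now resolves instantly
ping -c 1 foobar.local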

Well, this is a loaded question.
There are many things that transpire both before and after a client (cache) connects to a cluster (or server). Many questions come to mind, since your description is rather sparse.
You mention you are connecting to a server, so that implies your Spring Boot, Spring Data for Pivotal GemFire (SDG) application is a client, and specifically a ClientCache? If so, what sort of network are you going over (localhost, private internal network, VPN, cloud network, etc)?
Is there a firewall, proxy, NAT (e.g. Router, Switch), or other network appliance involved? If you are noticing latency, then there is a good chance of a network problem.
How have you configured the Pool between the client and server? For example, are you connecting to multiple Locators or just a single Locator? Or are you connecting directly to a server?
How large is your cluster? Is there more than 1 server? Do you have single-hop enabled?
What sort of data access operation are you performing? How large is the result set (i.e. how many objects)? How large are the objects?
What form of serialization are you using? For example, do your application domain model types implement java.io.Serializable? Are you using PDX by chance? Are you using GemFire Deltas and DataSerialization?
Have you tried running the same application in different contexts?
Do you have your application publicly accessible somewhere, such as in GitHub?
Can you share your configuration, full log files for both client and server, Thread dumps for both client and server, etc?
Is the Region in which you are accessing data from your application a PARTITION Region? Is persistence enabled? If a multi-node (server) cluster, are all the members hosting the PR up?
etc, etc, etc...
The small snippet of log content you did share...
2020-Mar-13 09:00:42 | TRACE | internal.ClassPathLoader | getResource(gemfire.properties)
2020-Mar-13 09:00:42 | TRACE | internal.ClassPathLoader | getResource trying: sun.misc.Launcher$AppClassLoader#73d16e93
2020-Mar-13 09:00:42 | TRACE | internal.ClassPathLoader | getResource trying: java.net.Launcher$AppClassLoader#73d16e93
Seems to suggest that it is trying to resolve a gemfire.properties file. Have you configured a gemfire.properties? If so, it might be better to define a Spring Boot application.properties resource and use the corresponding, equivalent spring.data.gemfire.* properties instead.
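For illustration only, a minimal sketch of what such an application.properties might look like, assuming SDG's annotation-based configuration model (e.g. @ClientCacheApplication) is in use; the property names come from the SDG configuration-properties appendix and the host/port values are placeholders, so verify them against the reference guide for your version:
# application.properties (placeholder values)
spring.data.gemfire.name=MyGemFireClient
spring.data.gemfire.pool.locators=locator-host[10334]
spring.data.gemfire.cache.log-level=config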
Is your filesystem in good health?
The ClassPathLoader class is a Pivotal GemFire internal class, so whatever is happening is outside the control of Spring (Data GemFire) and entirely down to GemFire itself.
You should also be aware that just because the GemFireCache (e.g. a ClientCache) instance has been created, it does not necessarily mean the cache, or the GemFire system in general, has been fully initialized yet. Many asynchronous tasks (e.g. creating the Pool's minimum number of connections) happen in the background after certain GemFire objects (cache, then Regions, then Indexes and DiskStores, etc.) are created.
Also, the cache (e.g. ClientCache) instance that is created is just a container for all the other GemFire objects, such as Regions, which actually hold the data. Regions must be created and initialized before any data access operations can occur. However, in order to create Regions, you need a cache (i.e. a ClientCache instance) in the first place.
What type are your client Regions (e.g. ClientRegionShortcut.PROXY or ClientRegionShortcut.CACHING_PROXY)?
There are too many blanks in your details to give you a more precise answer.

Related

spring cloud config server cannot bind to gitlab

I am using Spring (springCloudVersion "2021.0.3") to set up a Config Server that uses my GitLab account for its files. I created a deploy token with permissions:
read_repository, read_registry, write_registry, read_package_registry, write_package_registry
and then I use those in the spring server application.properties:
spring.application.name=config-server
spring.application.version=0.1.0
server.port=8012
spring.cloud.config.server.git.uri=https://gitlab.com/[account]/[repo]
spring.cloud.config.server.git.skip-ssl-validation=true
spring.cloud.config.server.git.clone-on-start=true
spring.cloud.config.server.git.default-label=main
spring.cloud.config.server.git.basedir=https://gitlab.com/[account]/[repo]
spring.cloud.config.server.git.username=[my-token-username]
spring.cloud.config.server.git.password=[my-token-password]
I was getting errors on startup about not being able to bind the base directory until I put the same URI there as in spring.cloud.config.server.git.uri (they are now identical, but it feels wrong).
When I try to startup my ConfigServerApplication I get the following:
ConfigServletWebServerApplicationContext : Exception encountered during context initialization - cancelling refresh attempt: org.springframework.beans.factory.UnsatisfiedDependencyException: Error creating bean with name 'defaultEnvironmentRepository' defined in class path resource [org/springframework/cloud/config/server/config/DefaultRepositoryConfiguration.class]: Unsatisfied dependency expressed through method 'defaultEnvironmentRepository' parameter 1; nested exception is org.springframework.boot.context.properties.ConfigurationPropertiesBindException: Error creating bean with name 'multipleJGitEnvironmentProperties': Could not bind properties to 'MultipleJGitEnvironmentProperties' : prefix=spring.cloud.config.server.git, ignoreInvalidFields=false, ignoreUnknownFields=true; nested exception is org.springframework.boot.context.properties.bind.BindException: Failed to bind properties under 'spring.cloud.config.server.git.basedir' to java.io.File
2022-06-30 12:04:53.020 INFO 1594 --- [ main] o.apache.catalina.core.StandardService : Stopping service [Tomcat]
2022-06-30 12:04:53.026 INFO 1594 --- [ main] ConditionEvaluationReportLoggingListener :
Error starting ApplicationContext. To display the conditions report re-run your application with 'debug' enabled.
2022-06-30 12:04:53.033 ERROR 1594 --- [ main] o.s.b.d.LoggingFailureAnalysisReporter :
***************************
APPLICATION FAILED TO START
***************************
Description:
Failed to bind properties under 'spring.cloud.config.server.git.basedir' to java.io.File:
Property: spring.cloud.config.server.git.basedir
Value: https://gitlab.com/[account]/[repo]
Origin: class path resource [application.properties] - 9:40
Reason: failed to convert java.lang.String to java.io.File (caused by java.lang.IllegalStateException: Could not retrieve file for URL [https://gitlab.com/[account]/[repo]]: URL [https://gitlab.com/[account]/[repo]] cannot be resolved to absolute file path because it does not reside in the file system: https://gitlab.com/[account]/[repo]
I don't know what file it is trying to access - in that repo I have
README.md
application.properties
and I would think the properties file is what it is trying to access?
I have tried using both my account credentials and the deploy token, with the same results. I did not try the deploy key because this will not be accessed locally, but via the cloud.
Finally tracked it down - I had to add:
spring.profiles.active=native
and remove
spring.cloud.config.server.git.basedir
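For reference, here is a hedged sketch of the two configurations that should bind cleanly; the repository URL and local paths are placeholders, so verify the property names against the Spring Cloud Config documentation for your version:
# Option 1: git backend - basedir must be a local filesystem path (or be omitted), never a URL
spring.cloud.config.server.git.uri=https://gitlab.com/[account]/[repo]
spring.cloud.config.server.git.basedir=/tmp/config-repo
spring.cloud.config.server.git.default-label=main
spring.cloud.config.server.git.username=[my-token-username]
spring.cloud.config.server.git.password=[my-token-password]
# Option 2: native backend - serve config files from a local directory instead of git
spring.profiles.active=native
spring.cloud.config.server.native.search-locations=file:///home/user/config-repo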

server.HiveServer2: Error starting priviledge synchonizer

Hive version 3.1.2
Hadoop components (HDFS/YARN/job history server) with Kerberos authentication.
Hive Kerberos config:
hive.server2.authentication=KERBEROS
hive.server2.authentication.kerberos.principal=hiveserver2/_HOST@BDP.COM
hive.server2.authentication.kerberos.keytab=/etc/kerberos/hadoop/hiveserver2.bdp-05.keytab
hive.metastore.sasl.enabled=true
hive.metastore.kerberos.keytab.file=/etc/kerberos/hadoop/metastore.bdp-05.keytab
hive.metastore.kerberos.principal=metastore/_HOST@BDP.COM
First, start the Metastore:
./bin/hive --service metastore > /dev/null &
Nothing abnormal in the log.
Then start HiveServer2:
./bin/hive --service hiveserver2 > /dev/null &
Here are the startup logs:
2020-12-30T11:28:48,746 INFO [main] server.HiveServer2: Starting HiveServer2
2020-12-30T11:28:49,168 INFO [main] security.UserGroupInformation: Login successful for user hiveserver2/bigdata-server-05#BDP.COM using keytab file /etc/kerberos/hadoop/hiveserver2.bdp-05.keytab
2020-12-30T11:28:49,171 INFO [main] cli.CLIService: SPNego httpUGI not created, spNegoPrincipal: , ketabFile:
2020-12-30T11:28:49,187 INFO [main] SessionState: Hive Session ID = 0754b9bc-f2f9-4d4c-ab95-a7359764bc49
2020-12-30T11:28:50,052 INFO [main] session.SessionState: Created HDFS directory: /tmp/hive/hiveserver2/0754b9bc-f2f9-4d4c-ab95-a7359764bc49
2020-12-30T11:28:50,066 INFO [main] session.SessionState: Created local directory: /tmp/hive/0754b9bc-f2f9-4d4c-ab95-a7359764bc49
2020-12-30T11:28:50,069 INFO [main] session.SessionState: Created HDFS directory: /tmp/hive/hiveserver2/0754b9bc-f2f9-4d4c-ab95-a7359764bc49/_tmp_space.db
2020-12-30T11:28:50,600 INFO [main] metastore.HiveMetaStoreClient: Trying to connect to metastore with URI thrift://bigdata-server-05:9083
2020-12-30T11:28:50,605 INFO [main] metastore.HiveMetaStoreClient: HMSC::open(): Could not find delegation token. Creating KERBEROS-based thrift connection.
2020-12-30T11:28:50,653 INFO [main] metastore.HiveMetaStoreClient: Opened a connection to metastore, current connections: 1
2020-12-30T11:28:50,653 INFO [main] metastore.HiveMetaStoreClient: Connected to metastore.
2020-12-30T11:28:50,654 INFO [main] metastore.RetryingMetaStoreClient: RetryingMetaStoreClient proxy=class org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient ugi=hiveserver2/bigdata-server-05#BDP.COM (auth:KERBEROS) retries=1 delay=1 lifetime=0
2020-12-30T11:28:50,781 INFO [main] service.CompositeService: Operation log root directory is created: /tmp/hive/operation_logs
2020-12-30T11:28:50,783 INFO [main] service.CompositeService: HiveServer2: Background operation thread pool size: 100
2020-12-30T11:28:50,783 INFO [main] service.CompositeService: HiveServer2: Background operation thread wait queue size: 100
2020-12-30T11:28:50,783 INFO [main] service.CompositeService: HiveServer2: Background operation thread keepalive time: 10 seconds
2020-12-30T11:28:50,784 INFO [main] service.CompositeService: Connections limit are user: 0 ipaddress: 0 user-ipaddress: 0
2020-12-30T11:28:50,787 INFO [main] service.AbstractService: Service:OperationManager is inited.
2020-12-30T11:28:50,787 INFO [main] service.AbstractService: Service:SessionManager is inited.
2020-12-30T11:28:50,787 INFO [main] service.AbstractService: Service:CLIService is inited.
2020-12-30T11:28:50,787 INFO [main] service.AbstractService: Service:ThriftBinaryCLIService is inited.
2020-12-30T11:28:50,787 INFO [main] service.AbstractService: Service:HiveServer2 is inited.
2020-12-30T11:28:50,835 INFO [pool-7-thread-1] SessionState: Hive Session ID = 693b0399-aabd-42b5-a4b2-a4cebbd325d4
2020-12-30T11:28:50,838 INFO [main] results.QueryResultsCache: Initializing query results cache at /tmp/hive/_resultscache_
2020-12-30T11:28:50,844 INFO [pool-7-thread-1] session.SessionState: Created HDFS directory: /tmp/hive/hiveserver2/693b0399-aabd-42b5-a4b2-a4cebbd325d4
2020-12-30T11:28:50,844 INFO [main] results.QueryResultsCache: Query results cache: cacheDirectory /tmp/hive/_resultscache_/results-23ae949b-6894-4a17-8141-0eacf5fe5a63, maxCacheSize 2147483648, maxEntrySize 10485760, maxEntryLifetime 3600000
2020-12-30T11:28:50,846 INFO [pool-7-thread-1] session.SessionState: Created local directory: /tmp/hive/693b0399-aabd-42b5-a4b2-a4cebbd325d4
2020-12-30T11:28:50,849 INFO [pool-7-thread-1] session.SessionState: Created HDFS directory: /tmp/hive/hiveserver2/693b0399-aabd-42b5-a4b2-a4cebbd325d4/_tmp_space.db
2020-12-30T11:28:50,861 INFO [main] events.NotificationEventPoll: Initializing lastCheckedEventId to 0
2020-12-30T11:28:50,862 INFO [main] server.HiveServer2: Starting Web UI on port 10002
2020-12-30T11:28:50,885 INFO [pool-7-thread-1] metadata.HiveMaterializedViewsRegistry: Materialized views registry has been initialized
2020-12-30T11:28:50,894 INFO [main] util.log: Logging initialized #4380ms
2020-12-30T11:28:51,009 INFO [main] service.AbstractService: Service:OperationManager is started.
2020-12-30T11:28:51,009 INFO [main] service.AbstractService: Service:SessionManager is started.
2020-12-30T11:28:51,010 INFO [main] service.AbstractService: Service:CLIService is started.
2020-12-30T11:28:51,010 INFO [main] service.AbstractService: Service:ThriftBinaryCLIService is started.
2020-12-30T11:28:51,013 WARN [main] security.HadoopThriftAuthBridge: Client-facing principal not set. Using server-side setting: hiveserver2/_HOST#BDP.COM
2020-12-30T11:28:51,013 INFO [main] security.HadoopThriftAuthBridge: Logging in via CLIENT based principal
2020-12-30T11:28:51,019 INFO [main] security.UserGroupInformation: Login successful for user hiveserver2/bigdata-server-05#BDP.COM using keytab file /etc/kerberos/hadoop/hiveserver2.bdp-05.keytab
2020-12-30T11:28:51,019 INFO [main] security.HadoopThriftAuthBridge: Logging in via SERVER based principal
2020-12-30T11:28:51,023 INFO [main] security.UserGroupInformation: Login successful for user hiveserver2/bigdata-server-05#BDP.COM using keytab file /etc/kerberos/hadoop/hiveserver2.bdp-05.keytab
2020-12-30T11:28:51,030 INFO [main] delegation.AbstractDelegationTokenSecretManager: Updating the current master key for generating delegation tokens
2020-12-30T11:28:51,033 INFO [main] security.TokenStoreDelegationTokenSecretManager: New master key with key id=0
2020-12-30T11:28:51,034 INFO [Thread[Thread-8,5,main]] security.TokenStoreDelegationTokenSecretManager: Starting expired delegation token remover thread, tokenRemoverScanInterval=60 min(s)
2020-12-30T11:28:51,035 INFO [Thread[Thread-8,5,main]] delegation.AbstractDelegationTokenSecretManager: Updating the current master key for generating delegation tokens
2020-12-30T11:28:51,035 INFO [Thread[Thread-8,5,main]] security.TokenStoreDelegationTokenSecretManager: New master key with key id=1
2020-12-30T11:28:51,040 INFO [main] thrift.ThriftCLIService: Starting ThriftBinaryCLIService on port 10000 with 5...500 worker threads
2020-12-30T11:28:51,040 INFO [main] service.AbstractService: Service:HiveServer2 is started.
2020-12-30T11:28:51,041 ERROR [main] server.HiveServer2: Error starting priviledge synchonizer:
java.lang.NullPointerException: null
at org.apache.hive.service.server.HiveServer2.startPrivilegeSynchonizer(HiveServer2.java:985) ~[hive-service-3.1.2.jar:3.1.2]
at org.apache.hive.service.server.HiveServer2.start(HiveServer2.java:726) [hive-service-3.1.2.jar:3.1.2]
at org.apache.hive.service.server.HiveServer2.startHiveServer2(HiveServer2.java:1037) [hive-service-3.1.2.jar:3.1.2]
at org.apache.hive.service.server.HiveServer2.access$1600(HiveServer2.java:140) [hive-service-3.1.2.jar:3.1.2]
at org.apache.hive.service.server.HiveServer2$StartOptionExecutor.execute(HiveServer2.java:1305) [hive-service-3.1.2.jar:3.1.2]
at org.apache.hive.service.server.HiveServer2.main(HiveServer2.java:1149) [hive-service-3.1.2.jar:3.1.2]
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.8.0_271]
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:1.8.0_271]
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_271]
at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_271]
at org.apache.hadoop.util.RunJar.run(RunJar.java:318) [hadoop-common-3.1.3.jar:?]
at org.apache.hadoop.util.RunJar.main(RunJar.java:232) [hadoop-common-3.1.3.jar:?]
2020-12-30T11:28:51,044 INFO [main] server.HiveServer2: Shutting down HiveServer2
In my case, hiveserver2-site.xml was created by Apache Ranger when I turned the ranger-hive-plugin on. Once I disabled the ranger-hive-plugin, hiveserver2-site.xml was edited by Ranger, leaving the following configuration:
<configuration>
<property>
<name>hive.security.authorization.enabled</name>
<value>true</value>
</property>
<property>
<name>hive.security.authorization.manager</name>
<value>org.apache.hadoop.hive.ql.security.authorization.DefaultHiveAuthorizationProvider</value>
</property>
<property>
<name>hive.security.authenticator.manager</name>
<value>org.apache.hadoop.hive.ql.security.HadoopDefaultAuthenticator</value>
</property>
<property>
<name>hive.conf.restricted.list</name>
<value>hive.security.authorization.enabled,hive.security.authorization.manager,hive.security.authenticator.manager</value>
</property>
</configuration>
Starting HiveServer2 with this file in place runs into the exception above.
Removing hiveserver2-site.xml makes it work fine.
I don't know why. Can somebody explain?
Is this still relevant? If yes, check the logs. You should see that it tries to connect to ZooKeeper; if no quorum is configured it will try to connect to localhost:2181, so either ZooKeeper must be running there or the ZooKeeper quorum servers should be configured.
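As a hedged illustration (host names below are placeholders; verify the property names against the Hive configuration reference for your version), the quorum can be declared in hive-site.xml, and some users also report working around the NPE by disabling the privilege synchronizer entirely:
<property>
  <name>hive.zookeeper.quorum</name>
  <value>zk-host-1,zk-host-2,zk-host-3</value>
</property>
<property>
  <name>hive.zookeeper.client.port</name>
  <value>2181</value>
</property>
<!-- reported workaround: skip the privilege synchronizer if you do not need it -->
<property>
  <name>hive.privilege.synchronizer</name>
  <value>false</value>
</property>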

Apache NiFi TLS failed to start

I have tried to install the NiFi toolkit to enable TLS. I followed the instructions given here.
I followed the steps, using localhost and the host sachith, but when I try to start NiFi again, I get the following error.
Bootstrap Config File: /home/sachith/nifi-1.9.2/conf/bootstrap.conf
16:59:23,949 |-INFO in ch.qos.logback.classic.LoggerContext[default] - Could NOT find resource [logback-test.xml]
16:59:23,950 |-INFO in ch.qos.logback.classic.LoggerContext[default] - Could NOT find resource [logback.groovy]
16:59:23,950 |-INFO in ch.qos.logback.classic.LoggerContext[default] - Found resource [logback.xml] at [file:/home/sachith/Documents/Projects/nifi-1.9.2-bin/nifi-1.9.2/conf/logback.xml]
16:59:24,032 |-INFO in ch.qos.logback.classic.joran.action.ConfigurationAction - debug attribute not set
16:59:24,046 |-INFO in ch.qos.logback.classic.joran.action.ConfigurationAction - Will scan for changes in [file:/home/sachith/Documents/Projects/nifi-1.9.2-bin/nifi-1.9.2/conf/logback.xml]
16:59:24,046 |-INFO in ch.qos.logback.classic.joran.action.ConfigurationAction - Setting ReconfigureOnChangeTask scanning period to 30 seconds
16:59:24,049 |-INFO in ch.qos.logback.classic.joran.action.LoggerContextListenerAction - Adding LoggerContextListener of type [ch.qos.logback.classic.jul.LevelChangePropagator] to the object stack
16:59:24,062 |-INFO in ch.qos.logback.classic.jul.LevelChangePropagator#48140564 - Propagating DEBUG level on Logger[ROOT] onto the JUL framework
16:59:24,062 |-INFO in ch.qos.logback.classic.joran.action.LoggerContextListenerAction - Starting LoggerContextListener
16:59:24,062 |-INFO in ch.qos.logback.core.joran.action.AppenderAction - About to instantiate appender of type [ch.qos.logback.core.rolling.RollingFileAppender]
16:59:24,067 |-INFO in ch.qos.logback.core.joran.action.AppenderAction - Naming appender as [APP_FILE]
16:59:24,084 |-INFO in c.q.l.core.rolling.SizeAndTimeBasedRollingPolicy#93122545 - Archive files will be limited to [100 MB] each.
16:59:24,127 |-INFO in c.q.l.core.rolling.SizeAndTimeBasedRollingPolicy#93122545 - No compression will be used
16:59:24,128 |-INFO in c.q.l.core.rolling.SizeAndTimeBasedRollingPolicy#93122545 - Will use the pattern /home/sachith/Documents/Projects/nifi-1.9.2-bin/nifi-1.9.2/logs/nifi-app_%d{yyyy-MM-dd_HH}.%i.log for the active file
16:59:24,131 |-INFO in ch.qos.logback.core.rolling.SizeAndTimeBasedFNATP#7c30a502 - The date pattern is 'yyyy-MM-dd_HH' from file name pattern '/home/sachith/Documents/Projects/nifi-1.9.2-bin/nifi-1.9.2/logs/nifi-app_%d{yyyy-MM-dd_HH}.%i.log'.
nifi-app.log
2020-06-26 09:27:50,296 INFO [main] org.eclipse.jetty.server.Server Started #186210ms
2020-06-26 09:27:50,297 WARN [main] org.apache.nifi.web.server.JettyServer Failed to start web server... shutting down.
org.springframework.beans.factory.UnsatisfiedDependencyException: Error creating bean with name 'org.springframework.security.config.annotation.web.configuration.WebSecurityConfiguration': Unsatisfied dependency expressed through method 'setFilterChainProxySecurityConfigurer' parameter 1; nested exception is org.springframework.beans.factory.BeanExpressionException: Expression parsing failed; nested exception is org.springframework.beans.factory.UnsatisfiedDependencyException: Error creating bean with name 'org.apache.nifi.web.NiFiWebApiSecurityConfiguration': Unsatisfied dependency expressed through method 'setJwtAuthenticationProvider' parameter 0; nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'jwtAuthenticationProvider' defined in class path resource [nifi-web-security-context.xml]: Cannot resolve reference to bean 'authorizer' while setting constructor argument; nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'authorizer': FactoryBean threw exception on object creation; nested exception is org.apache.nifi.authorization.exception.AuthorizerCreationException: Unable to locate configured Access Policy Provider: file-access-policy-provider
at org.springframework.beans.factory.annotation.AutowiredAnnotationBeanPostProcessor$AutowiredMethodElement.inject(AutowiredAnnotationBeanPostProcessor.java:666)
at org.springframework.beans.factory.annotation.InjectionMetadata.inject(InjectionMetadata.java:87)
at org.springframework.beans.factory.annotation.AutowiredAnnotationBeanPostProcessor.postProcessPropertyValues(AutowiredAnnotationBeanPostProcessor.java:366)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.populateBean(AbstractAutowireCapableBeanFactory.java:1269)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:551)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:481)
at org.springframework.beans.factory.support.AbstractBeanFactory$1.getObject(AbstractBeanFactory.java:312)
at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:230)
Edit: after Andy's initial answer.
My authorizers.xml
<accessPolicyProvider>
<identifier>file-access-policy-provider</identifier>
<class>org.apache.nifi.authorization.FileAccessPolicyProvider</class>
<property name="User Group Provider">file-user-group-provider</property>
<property name="Authorizations File">./conf/authorizations.xml</property>
<property name="Initial Admin Identity">CN=sachith, OU=NiFi</property>
<property name="Legacy Authorized Users File"></property>
<property name="Node Identity 1"></property>
<property name="Node Group"></property>
</accessPolicyProvider>
<userGroupProvider>
<identifier>file-user-group-provider</identifier>
<class>org.apache.nifi.authorization.FileUserGroupProvider</class>
<property name="Users File">./conf/users.xml</property>
<property name="Legacy Authorized Users File"></property>
<property name="Initial User Identity 1">CN=sachith, OU=NiFi</property>
</userGroupProvider>
You haven't provided the complete log output, so there may be other issues here, but it looks like the problem is the nested exception: org.apache.nifi.authorization.exception.AuthorizerCreationException: Unable to locate configured Access Policy Provider: file-access-policy-provider. Did you provide an authorizers.xml file which defines a FileAccessPolicyProvider?
Look specifically at Step 7 of the walkthrough guide you linked to.
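For comparison, here is a hedged sketch of the pieces that usually need to line up in a default secured NiFi 1.9.x setup (the identifiers follow the stock authorizers.xml naming and are assumptions to verify against your own files): authorizers.xml must define both the access policy provider and an authorizer that references it, and nifi.properties must point at that authorizer.
<!-- authorizers.xml: an authorizer that references the access policy provider -->
<authorizer>
    <identifier>managed-authorizer</identifier>
    <class>org.apache.nifi.authorization.StandardManagedAuthorizer</class>
    <property name="Access Policy Provider">file-access-policy-provider</property>
</authorizer>
# nifi.properties: select that authorizer
nifi.security.user.authorizer=managed-authorizer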

Micronaut's EmbeddedServer startup extremely slow

I created a micronaut "Hello World!" application and a JUnit test according to the Micronaut user guide:
https://docs.micronaut.io/latest/guide/index.html#creatingClient
on macOS Mojave (10.14) with Java 1.8.0_25-b17.
Unit test:
package hello;
import io.micronaut.http.HttpStatus;
import io.micronaut.http.client.RxHttpClient;
import io.micronaut.runtime.server.EmbeddedServer;
import io.micronaut.test.annotation.MicronautTest;
import org.junit.jupiter.api.Test;
import javax.inject.Inject;
import static org.junit.jupiter.api.Assertions.assertEquals;
@MicronautTest
public class HelloControllerTest {

    @Inject
    EmbeddedServer embeddedServer;

    @Test
    public void testIndex() throws Exception {
        // or (instead of the @Inject):
        // EmbeddedServer embeddedServer = ApplicationContext.run(EmbeddedServer.class);
        try (RxHttpClient client = embeddedServer.getApplicationContext().createBean(RxHttpClient.class, embeddedServer.getURL())) {
            assertEquals(HttpStatus.OK, client.toBlocking().exchange("/hello").status());
        }
    }
}
The "Hello World!" application starts up quickly (about a second). The JUnit test, however, takes more than 75 seconds to complete. It 'hangs' on the following line for 75 seconds:
server = ApplicationContext.run(EmbeddedServer.class);
Suggested fix in /etc/hosts doesn't work
I've tried this suggested fix (adding the hostname to /etc/hosts after the "127.0.0.1 localhost" and "::1 localhost" entries -- both with and without the '.local' suffix):
https://docs.micronaut.io/latest/guide/index.html#problems
Jvm takes a long time to resolve ip-address for localhost
with no luck. I restarted my machine after changes to /etc/hosts.
The hostname resolution does not seem to be the problem though; I tested it with the inetTester.jar mentioned in the above link (download here: https://github.com/thoeni/inetTester), and it takes only 6 ms. I guess it must be something else.
(On the other hand, everyone who had problems with micronaut application startup time (which I do not) on macOS, and fixed it by adding hostname to /etc/hosts, also mentions this same ~75 second delay -- this can't really be a coincidence?)
Log file
The two lines in the log file, before and after the 75 second pause:
22:55:06.834 [main] DEBUG i.m.context.DefaultBeanContext - Registering singleton bean io.micronaut.http.server.netty.NioEventLoopGroupFactory#4b1c0397 for type [io.micronaut.http.server.netty.EventLoopGroupFactory] using bean key io.micronaut.http.server.netty.NioEventLoopGroupFactory
22:56:22.040 [main] DEBUG io.micronaut.context.lifecycle - Created bean [io.micronaut.http.server.netty.NettyHttpServer#2fe88a09] from definition [Definition: io.micronaut.http.server.netty.NettyHttpServer] with qualifier [null]
And a bit of context:
22:55:06.833 [main] DEBUG i.m.context.DefaultBeanContext - Finding candidate beans for type: interface io.micronaut.http.server.netty.ssl.ServerSslBuilder
22:55:06.833 [main] DEBUG i.m.context.DefaultBeanContext - Resolved bean candidates [] for type: interface io.micronaut.http.server.netty.ssl.ServerSslBuilder
22:55:06.833 [main] DEBUG i.m.context.DefaultBeanContext - Resolving beans for type: io.netty.channel.ChannelOutboundHandler
22:55:06.833 [main] TRACE i.m.context.DefaultBeanContext - Looking up existing beans for key: io.netty.channel.ChannelOutboundHandler
22:55:06.833 [main] TRACE i.m.context.DefaultBeanContext - No beans found for key: io.netty.channel.ChannelOutboundHandler
22:55:06.833 [main] DEBUG i.m.context.DefaultBeanContext - Finding candidate beans for type: interface io.netty.channel.ChannelOutboundHandler
22:55:06.833 [main] DEBUG i.m.context.DefaultBeanContext - Resolved bean candidates [] for type: interface io.netty.channel.ChannelOutboundHandler
22:55:06.833 [main] DEBUG i.m.context.DefaultBeanContext - Found no possible candidate beans of type [io.netty.channel.ChannelOutboundHandler] for qualifier: null
22:55:06.833 [main] TRACE i.m.context.DefaultBeanContext - Looking up existing bean for key: io.micronaut.http.server.netty.EventLoopGroupFactory
22:55:06.833 [main] TRACE i.m.context.DefaultBeanContext - No existing bean found for bean key: io.micronaut.http.server.netty.EventLoopGroupFactory
22:55:06.833 [main] DEBUG i.m.context.DefaultBeanContext - Finding candidate beans for type: interface io.micronaut.http.server.netty.EventLoopGroupFactory
22:55:06.834 [main] DEBUG i.m.context.DefaultBeanContext - Finding candidate beans for type: class io.micronaut.http.server.netty.EpollEventLoopGroupFactory
22:55:06.834 [main] DEBUG i.m.context.DefaultBeanContext - Resolved bean candidates [] for type: class io.micronaut.http.server.netty.EpollEventLoopGroupFactory
22:55:06.834 [main] DEBUG i.m.context.DefaultBeanContext - Finding candidate beans for type: class io.micronaut.http.server.netty.KQueueEventLoopGroupFactory
22:55:06.834 [main] DEBUG i.m.context.DefaultBeanContext - Resolved bean candidates [] for type: class io.micronaut.http.server.netty.KQueueEventLoopGroupFactory
22:55:06.834 [main] DEBUG i.m.context.DefaultBeanContext - Resolved bean candidates [Definition: io.micronaut.http.server.netty.NioEventLoopGroupFactory] for type: interface io.micronaut.http.server.netty.EventLoopGroupFactory
22:55:06.834 [main] DEBUG i.m.context.DefaultBeanContext - Finalized bean definitions candidates: [Definition: io.micronaut.http.server.netty.NioEventLoopGroupFactory]
22:55:06.834 [main] DEBUG i.m.context.DefaultBeanContext - Found concrete candidate [Definition: io.micronaut.http.server.netty.NioEventLoopGroupFactory] for type: io.micronaut.http.server.netty.EventLoopGroupFactory
22:55:06.834 [main] DEBUG io.micronaut.context.lifecycle - Created bean [io.micronaut.http.server.netty.NioEventLoopGroupFactory#4b1c0397] from definition [Definition: io.micronaut.http.server.netty.NioEventLoopGroupFactory] with qualifier [null]
22:55:06.834 [main] DEBUG i.m.context.DefaultBeanContext - Registering singleton bean io.micronaut.http.server.netty.NioEventLoopGroupFactory#4b1c0397 for type [io.micronaut.http.server.netty.EventLoopGroupFactory] using bean key io.micronaut.http.server.netty.NioEventLoopGroupFactory
22:56:22.040 [main] DEBUG io.micronaut.context.lifecycle - Created bean [io.micronaut.http.server.netty.NettyHttpServer#2fe88a09] from definition [Definition: io.micronaut.http.server.netty.NettyHttpServer] with qualifier [null]
22:56:22.041 [main] DEBUG i.m.context.DefaultBeanContext - Registering singleton bean io.micronaut.http.server.netty.NettyHttpServer#2fe88a09 for type [io.micronaut.runtime.server.EmbeddedServer] using bean key io.micronaut.http.server.netty.NettyHttpServer
22:56:22.042 [main] DEBUG i.n.c.MultithreadEventLoopGroup - -Dio.netty.eventLoopThreads: 16
22:56:22.050 [main] DEBUG io.netty.channel.nio.NioEventLoop - -Dio.netty.noKeySetOptimization: false
22:56:22.050 [main] DEBUG io.netty.channel.nio.NioEventLoop - -Dio.netty.selectorAutoRebuildThreshold: 512
22:56:22.056 [main] DEBUG i.n.util.internal.PlatformDependent - org.jctools-core.MpscChunkedArrayQueue: available
22:56:22.063 [main] TRACE io.netty.channel.nio.NioEventLoop - instrumented a special java.util.Set into: sun.nio.ch.KQueueSelectorImpl#2ca6546f
22:56:22.063 [main] TRACE io.netty.channel.nio.NioEventLoop - instrumented a special java.util.Set into: sun.nio.ch.KQueueSelectorImpl#6b63d445
22:56:22.063 [main] TRACE io.netty.channel.nio.NioEventLoop - instrumented a special java.util.Set into: sun.nio.ch.KQueueSelectorImpl#7578e06a
22:56:22.064 [main] TRACE io.netty.channel.nio.NioEventLoop - instrumented a special java.util.Set into: sun.nio.ch.KQueueSelectorImpl#30b2b76f
22:56:22.064 [main] TRACE io.netty.channel.nio.NioEventLoop - instrumented a special java.util.Set into: sun.nio.ch.KQueueSelectorImpl#56da52a7
22:56:22.064 [main] TRACE io.netty.channel.nio.NioEventLoop - instrumented a special java.util.Set into: sun.nio.ch.KQueueSelectorImpl#23ee75c5
I had a similar issue where the compiled native image was taking over 40 seconds to start. The problem in my case was environment detection; disabling it solved the problem. I did it in code, but you can do it via properties as explained in their docs.
fun main(args: Array<String>) {
    Micronaut.build()
        .packages("com.example")
        .deduceEnvironment(false) // this line did the trick
        .mainClass(Application.javaClass)
        .start()
}
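If you would rather not change the code, a hedged sketch of the property-based equivalent (per the Micronaut environments documentation; verify for your version) is to set the micronaut.env.deduction system property at launch; the jar path here is a placeholder:
java -Dmicronaut.env.deduction=false -jar build/libs/myapp-all.jar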
For anybody coming here with the same problem: for me, only the hosts-file edit fixes it.
Normal startup takes between 1.6 and 6-8 seconds, depending on what is in /etc/hosts.
127.0.0.1 localhost -> 6-8s startup
127.0.0.1 localhost MacBook-Pro.local -> 1.6s startup.
So essentially, just add your hostname to the 127.0.0.1 entry in the hosts file.
I had the same issue; in my case it was 2 minutes and 20 seconds of waiting. A simple workaround is to specify the server host and port in the Micronaut application configuration like this:
micronaut:
server:
host: localhost
port: 8080
But this will not work when you want to run more than one test at once.
I think the problem is caused by the search for an available port, implemented in SocketUtils.findAvailableTcpPort() and executed in the constructor of the NettyHttpServer class when no port is specified and the environment is Environment.TEST.
Update: in my experience the problem appears only on some networks. For example, I have no problem on my home network but did have the problem on a hotel network. Probably domain name lookup influences this, so it may be worth trying a different DNS server.
It seems to come from the network.
When I unplugged my cable and turned off my Wi-Fi, I got this:
12:34:31.324 [main] INFO i.m.context.env.DefaultEnvironment - Established active environments: [test]
12:34:32.061 [main] WARN i.netty.util.internal.MacAddressUtil - Failed to find a usable hardware address from the network interfaces; using random bytes: 11:02:e9:bf:a8:0e:df:83
and my test ran in 267 ms (with the network connected it's about 30 s).

JDBC client to Hive - No data or no sasl data in the stream Exception

We have a Kerberised cluster and I'm trying to run a Java action in Oozie where I make a JDBC connection to Hive. This JDBC connection works fine on the Sandbox without Kerberos.
The connection string is as simple as the following, where I'm providing username and password in it:
Connection con = DriverManager.getConnection("jdbc:hive2://W12345:10000/control;principal=hive/W12345.companynet.net@COMPANYNET.NET","user123","passw123");
The Oozie action (strangely) completes successfully, and the Java action log does not show any error:
1742 [main] INFO org.apache.hive.jdbc.Utils - Supplied authorities: W12345:10000
1742 [main] INFO org.apache.hive.jdbc.Utils - Resolved authority: W12345:10000
1766 [main] INFO org.apache.hive.jdbc.HiveConnection - Will try to open client transport with JDBC Uri: jdbc:hive2://W12345:10000/control;principal=hive/W12345.companynet.net#COMPANYNET.NET
<<< Invocation of Main class completed <<<
Oozie Launcher ends
1785 [main] INFO org.apache.hadoop.mapred.Task - Task:attempt_1464245290012_0129_m_000000_0 is done. And is in the process of committing
1847 [main] INFO org.apache.hadoop.mapred.Task - Task attempt_1464245290012_0129_m_000000_0 is allowed to commit now
1854 [main] INFO org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter - Saved output of task 'attempt_1464245290012_0129_m_000000_0' to hdfs://danskehadoop/user/user123/oozie-oozi/0000013-160527101253015-oozie-oozi-W/JavaAction--java/output/_temporary/1/task_1464245290012_0129_m_000000
1909 [main] INFO org.apache.hadoop.mapred.Task - Task 'attempt_1464245290012_0129_m_000000_0' done.
But in reality the Java main does not complete correctly the execution (and does not execute the needed queries) because the JDBC connection fails with an exception that I can see only in the Hive log:
ERROR [HiveServer2-Handler-Pool: Thread-78363]: server.TThreadPoolServer (TThreadPoolServer.java:run(296)) - Error occurred during processing of message.
java.lang.RuntimeException: org.apache.thrift.transport.TSaslTransportException: No data or no sasl data in the stream
at org.apache.thrift.transport.TSaslServerTransport$Factory.getTransport(TSaslServerTransport.java:219)
at org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge$Server$TUGIAssumingTransportFactory$1.run(HadoopThriftAuthBridge.java:739)
at org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge$Server$TUGIAssumingTransportFactory$1.run(HadoopThriftAuthBridge.java:736)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:360)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1637)
at org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge$Server$TUGIAssumingTransportFactory.getTransport(HadoopThriftAuthBridge.java:736)
at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:268)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.thrift.transport.TSaslTransportException: No data or no sasl data in the stream
at org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java:328)
at org.apache.thrift.transport.TSaslServerTransport.open(TSaslServerTransport.java:41)
at org.apache.thrift.transport.TSaslServerTransport$Factory.getTransport(TSaslServerTransport.java:216)
... 10 more
I'm actually connected to the cluster and have already done a kinit for my username.
Does anybody know what could the cause of this exception be?
Thanks in advance for the help!
Antonio
This happened to me on the MapR Hadoop distribution.
In my case it was Keepalived checking the Hive port every 5 seconds and producing this error: the check simply used the "nc" command to see whether the Hive port was in use, without any authentication. Later I switched to the "maprcli" command, which uses SASL authentication, and the error was gone.
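To illustrate the difference (host names and the Kerberos principal below are placeholders): a plain TCP probe opens the socket and sends no SASL handshake, which is exactly what makes a Kerberized HiveServer2 log "No data or no sasl data in the stream", whereas a protocol-aware check completes the handshake.
# plain TCP probe - triggers the SASL error on a Kerberized HiveServer2
nc -z -w 2 hiveserver2-host 10000
# protocol-aware check - speaks SASL/Kerberos (placeholder host and principal)
beeline -u "jdbc:hive2://hiveserver2-host:10000/default;principal=hive/_HOST@EXAMPLE.COM" -e "select 1"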