NullPointerException while raising an embedded LDAP server using Spring - Apache

I'm trying to start the Spring embedded LDAP server using:
But I keep getting this exception:
2010-06-10 14:33:35,559 ERROR main ApacheDSContainer start - Server startup failed
java.lang.NullPointerException
at org.apache.directory.server.core.schema.DefaultSchemaService.initialize(DefaultSchemaService.java:382)
at org.apache.directory.server.core.DefaultDirectoryService.initialize(DefaultDirectoryService.java:1425)
at org.apache.directory.server.core.DefaultDirectoryService.startup(DefaultDirectoryService.java:907)
at org.springframework.security.ldap.server.ApacheDSContainer.start(ApacheDSContainer.java:160)
at org.springframework.security.ldap.server.ApacheDSContainer.afterPropertiesSet(ApacheDSContainer.java:113)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.invokeInitMethods(AbstractAutowireCapableBeanFactory.java:1469)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.initializeBean(AbstractAutowireCapableBeanFactory.java:1409)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:519)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:456)
at org.springframework.beans.factory.support.AbstractBeanFactory$1.getObject(AbstractBeanFactory.java:291)
at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:222)
at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:288)
at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:190)
at org.springframework.beans.factory.support.DefaultListableBeanFactory.preInstantiateSingletons(DefaultListableBeanFactory.java:563)
at org.springframework.context.support.AbstractApplicationContext.finishBeanFactoryInitialization(AbstractApplicationContext.java:872)
at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:423)
at org.springframework.web.context.ContextLoader.createWebApplicationContext(ContextLoader.java:276)
at org.springframework.web.context.ContextLoader.initWebApplicationContext(ContextLoader.java:197)
at org.springframework.web.context.ContextLoaderListener.contextInitialized(ContextLoaderListener.java:47)
at org.apache.catalina.core.StandardContext.listenerStart(StandardContext.java:3764)
at org.apache.catalina.core.StandardContext.start(StandardContext.java:4212)
at org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:760)
at org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:740)
at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:544)
at org.apache.catalina.startup.HostConfig.deployDescriptor(HostConfig.java:626)
at org.apache.catalina.startup.HostConfig.deployDescriptors(HostConfig.java:553)
at org.apache.catalina.startup.HostConfig.deployApps(HostConfig.java:488)
at org.apache.catalina.startup.HostConfig.start(HostConfig.java:1138)
at org.apache.catalina.startup.HostConfig.lifecycleEvent(HostConfig.java:311)
at org.apache.catalina.util.LifecycleSupport.fireLifecycleEvent(LifecycleSupport.java:120)
at org.apache.catalina.core.ContainerBase.start(ContainerBase.java:1022)
at org.apache.catalina.core.StandardHost.start(StandardHost.java:736)
at org.apache.catalina.core.ContainerBase.start(ContainerBase.java:1014)
at org.apache.catalina.core.StandardEngine.start(StandardEngine.java:443)
at org.apache.catalina.core.StandardService.start(StandardService.java:448)
at org.apache.catalina.core.StandardServer.start(StandardServer.java:700)
at org.apache.catalina.startup.Catalina.start(Catalina.java:552)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.catalina.startup.Bootstrap.start(Bootstrap.java:295)
at org.apache.catalina.startup.Bootstrap.main(Bootstrap.java:433)
I'm using Spring 3.0.2 and added the following JARs for LDAP:
spring-security-ldap-3.0.2.RELEASE.jar
spring-ldap-1.3.0.RELEASE-all.jar
apacheds-all-1.5.6.jar
shared-ldap-0.9.15.jar
slf4j-api-1.5.6.jar
slf4j-simple-1.5.6.jar
Help please....

@Zorkus: I'm not sure exactly what kind of problem you ran into with Apache Directory, or what its root cause is, but if all you need is a working embedded Java LDAP server for integration testing with Spring Security, then you might want to consider some alternatives.
I recently started to investigate alternatives, because I couldn't achieve with Apache Directory what I wanted, despite a lot of time and effort invested. (I basically wanted to replicate the schema and the user database of an Active Directory instance into an embedded server.)
What I found is that the UnboundID LDAP SDK is a nice replacement. Integrating with it requires a bit more effort than a one-liner in your Spring context (like <security:ldap-server/>), but not much more. Starting up an LDAP server requires just a few lines of code:
import com.unboundid.ldap.listener.InMemoryDirectoryServer;
import com.unboundid.ldap.listener.InMemoryDirectoryServerConfig;
import com.unboundid.ldap.listener.InMemoryListenerConfig;
import com.unboundid.ldap.sdk.schema.Schema;

InMemoryDirectoryServerConfig config =
    new InMemoryDirectoryServerConfig("dc=example, dc=com");

// schema config only necessary if the standard
// schema provided by the library doesn't suit your needs
config.setSchema(Schema.getSchema("your-custom-schema.schema"));

// listener config only necessary if you want to make sure that the
// server listens on port 33389, otherwise a free random port will
// be picked at runtime - which might be even better for tests btw.
config.setListenerConfigs(
    new InMemoryListenerConfig("myListener", null, 33389, null, null, null));

InMemoryDirectoryServer ds = new InMemoryDirectoryServer(config);
ds.startListening();

// import your test data from ldif files
ds.importFromLDIF(true, "content.ldif");
The only dependency you will need for this to work is:
<dependency>
    <groupId>com.unboundid</groupId>
    <artifactId>unboundid-ldapsdk</artifactId>
    <version>2.3.1</version>
</dependency>
It would be pretty easy to wrap the above code in a class that you can instantiate and configure from your Spring context.
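For illustration, here is a minimal sketch of such a wrapper; the class name, constructor arguments, and lifecycle method names are my own assumptions, not part of the SDK. A Spring bean definition with init-method="start" and destroy-method="stop" could then manage it:

import com.unboundid.ldap.listener.InMemoryDirectoryServer;
import com.unboundid.ldap.listener.InMemoryDirectoryServerConfig;

// Hypothetical wrapper around the snippet above, suitable for use as a Spring bean.
public class EmbeddedLdapServer {

    private final String baseDn;
    private final String ldifFile;
    private InMemoryDirectoryServer server;

    public EmbeddedLdapServer(String baseDn, String ldifFile) {
        this.baseDn = baseDn;
        this.ldifFile = ldifFile;
    }

    // Called by Spring via init-method="start"
    public void start() throws Exception {
        InMemoryDirectoryServerConfig config = new InMemoryDirectoryServerConfig(baseDn);
        server = new InMemoryDirectoryServer(config);
        server.importFromLDIF(true, ldifFile);
        server.startListening();
    }

    // Called by Spring via destroy-method="stop"
    public void stop() {
        if (server != null) {
            server.shutDown(true);
        }
    }
}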
For documentation and code examples on the UnboundID LDAP SDK see: https://www.unboundid.com/products/ldap-sdk/docs/
(I'm not affiliated with UnboundID in any way.)

Check whether the authorization state used by the LDAP client has access to the schema.

Related

PutHiveStreaming Processor in Nifi throws NPE

I'm debugging a HiveProcessor that follows the official PutHiveStreaming processor but writes to Hive 2.x instead of 3.x. The flow runs on a NiFi 1.7.1 cluster. Although this exception happens, data is still written to Hive.
The exception is:
java.lang.NullPointerException: null
at org.apache.hadoop.hive.ql.security.authorization.plugin.AuthorizationMetaStoreFilterHook.getFilteredObjects(AuthorizationMetaStoreFilterHook.java:77)
at org.apache.hadoop.hive.ql.security.authorization.plugin.AuthorizationMetaStoreFilterHook.filterDatabases(AuthorizationMetaStoreFilterHook.java:54)
at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.getDatabases(HiveMetaStoreClient.java:1147)
at org.apache.hive.hcatalog.common.HiveClientCache$CacheableHiveMetaStoreClient.isOpen(HiveClientCache.java:471)
at sun.reflect.GeneratedMethodAccessor1641.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:169)
at com.sun.proxy.$Proxy308.isOpen(Unknown Source)
at org.apache.hive.hcatalog.common.HiveClientCache.get(HiveClientCache.java:270)
at org.apache.hive.hcatalog.common.HCatUtil.getHiveMetastoreClient(HCatUtil.java:558)
at org.apache.hive.hcatalog.streaming.AbstractRecordWriter.<init>(AbstractRecordWriter.java:95)
at org.apache.hive.hcatalog.streaming.StrictJsonWriter.<init>(StrictJsonWriter.java:82)
at org.apache.hive.hcatalog.streaming.StrictJsonWriter.<init>(StrictJsonWriter.java:60)
at org.apache.nifi.util.hive.HiveWriter.lambda$getRecordWriter$0(HiveWriter.java:91)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
at org.apache.nifi.util.hive.HiveWriter.getRecordWriter(HiveWriter.java:91)
at org.apache.nifi.util.hive.HiveWriter.<init>(HiveWriter.java:75)
at org.apache.nifi.util.hive.HiveUtils.makeHiveWriter(HiveUtils.java:46)
at org.apache.nifi.processors.hive.PutHive2Streaming.makeHiveWriter(PutHive2Streaming.java:1152)
at org.apache.nifi.processors.hive.PutHive2Streaming.getOrCreateWriter(PutHive2Streaming.java:1065)
at org.apache.nifi.processors.hive.PutHive2Streaming.access$1000(PutHive2Streaming.java:114)
at org.apache.nifi.processors.hive.PutHive2Streaming$1.lambda$process$2(PutHive2Streaming.java:858)
at org.apache.nifi.processor.util.pattern.ExceptionHandler.execute(ExceptionHandler.java:127)
at org.apache.nifi.processors.hive.PutHive2Streaming$1.process(PutHive2Streaming.java:855)
at org.apache.nifi.controller.repository.StandardProcessSession.read(StandardProcessSession.java:2211)
at org.apache.nifi.controller.repository.StandardProcessSession.read(StandardProcessSession.java:2179)
at org.apache.nifi.processors.hive.PutHive2Streaming.onTrigger(PutHive2Streaming.java:808)
at org.apache.nifi.processors.hive.PutHive2Streaming.lambda$onTrigger$4(PutHive2Streaming.java:672)
at org.apache.nifi.processor.util.pattern.PartialFunctions.onTrigger(PartialFunctions.java:114)
at org.apache.nifi.processor.util.pattern.RollbackOnFailure.onTrigger(RollbackOnFailure.java:184)
at org.apache.nifi.processors.hive.PutHive2Streaming.onTrigger(PutHive2Streaming.java:672)
at org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1165)
at org.apache.nifi.controller.tasks.ConnectableTask.invoke(ConnectableTask.java:203)
at org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:117)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
I would also like to reproduce the error. Would using TestRunners.newTestRunner(processor) be able to catch it? I am referring to the test case for Hive 3.x:
https://github.com/apache/nifi/blob/ea9b0db2f620526c8dd0db595cf8b44c3ef835be/nifi-nar-bundles/nifi-hive-bundle/nifi-hive-processors/src/test/java/org/apache/nifi/processors/hive/TestPutHiveStreaming.java
The other way is to run Hive 2.x and a NiFi container locally. But then I have to run docker cp to copy the NAR package built by Maven, and attach a remote JVM from IntelliJ as this blog describes:
https://community.hortonworks.com/articles/106931/nifi-debugging-tutorial.html
Has someone done something similar, or is there an easier way to debug a custom processor?
This is a red herring error: there's some issue on the Hive side where it can't get its own IP address or hostname, and it issues this error periodically as a result. However, I don't think it causes any real problems; as you said, the data gets written to Hive.
Just for completeness, in Apache NiFi PutHiveStreaming is built to work against Hive 1.2.x, not Hive 2.x. There are currently no Hive 2.x-specific processors, and we've never determined whether the Hive 1.2.x processors work against Hive 2.x.
For debugging, if you can run Hive in a container and expose the metastore port (9083 is the default I believe), then you should be able to create an integration test using things like TestRunners and run NiFi locally from your IDE. This is how other integration tests are performed for external systems such as MongoDB or Elasticsearch for example.
There is a MiniHS2 class in the Hive test suite for integration testing, but it is not in a published artifact so unfortunately we're left with having to run tests against a real Hive instance.
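As a rough sketch of that TestRunners approach (the processor class is the custom PutHive2Streaming from the question, and the property names and metastore URI below are assumptions for a locally exposed Hive container):

import org.apache.nifi.util.TestRunner;
import org.apache.nifi.util.TestRunners;
import org.junit.Test;

public class PutHive2StreamingIT {

    @Test
    public void writesOneJsonRecord() {
        // Assumes the custom processor class from the question is on the test classpath
        TestRunner runner = TestRunners.newTestRunner(new PutHive2Streaming());

        // Hypothetical property names; point them at the metastore port
        // exposed by the local Hive 2.x container
        runner.setProperty("hive-stream-metastore-uri", "thrift://localhost:9083");
        runner.setProperty("hive-stream-database-name", "default");
        runner.setProperty("hive-stream-table-name", "users");

        // Enqueue a sample JSON record and trigger the processor once
        runner.enqueue("{\"name\": \"user1\"}".getBytes());
        runner.run();

        runner.assertQueueEmpty();
    }
}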
The NPE doesn't show up after hcatalog.hive.client.cache.disabled is set to true.
Kafka Connect recommends this setting, too. From the Kafka Connect HDFS connector documentation (https://docs.confluent.io/3.0.0/connect/connect-hdfs/docs/hdfs_connector.html):
As connector tasks are long running, the connections to Hive metastore are kept open until tasks are stopped. In the default Hive configuration, reconnecting to Hive metastore creates a new connection. When the number of tasks is large, it is possible that the retries can cause the number of open connections to exceed the max allowed connections in the operating system. Thus it is recommended to set hcatalog.hive.client.cache.disabled to true in hive.xml.
When Max Concurrent Tasks of PutHiveStreaming is set to more than 1, this property is automatically set to false.
Also, the documentation from NiFi already addresses the issue:
The NiFi PutHiveStreaming has a pool of connections and is therefore multithreaded; setting hcatalog.hive.client.cache.disabled to true would allow each connection to set its own Session without relying on the cache.
ref:
https://community.hortonworks.com/content/supportkb/196628/hive-client-puthivestreaming-fails-against-partiti.html

java.lang.NoClassDefFoundError: org.apache.woden.resolver.URIResolver

I'm getting the exception below after deploying the application in WAS 8, using the Axis2 1.7.4 family with Woden API and impl 1.0M8. Will someone please help?
java.lang.NoClassDefFoundError: org.apache.woden.resolver.URIResolver
at java.lang.J9VMInternals.verifyImpl(Native Method)
at java.lang.J9VMInternals.verify(J9VMInternals.java:93)
at java.lang.J9VMInternals.initialize(J9VMInternals.java:170)
at org.apache.axis2.deployment.ModuleDeployer.deploy(ModuleDeployer.java:65)
at org.apache.axis2.deployment.repository.util.DeploymentFileData.deploy(DeploymentFileData.java:136)
at org.apache.axis2.deployment.DeploymentEngine.doDeploy(DeploymentEngine.java:815)
at org.apache.axis2.deployment.RepositoryListener.loadClassPathModules(RepositoryListener.java:222)
at org.apache.axis2.deployment.RepositoryListener.init2(RepositoryListener.java:71)
at org.apache.axis2.deployment.RepositoryListener.<init>(RepositoryListener.java:64)
at org.apache.axis2.deployment.DeploymentEngine.loadFromClassPath(DeploymentEngine.java:177)
at org.apache.axis2.deployment.FileSystemConfigurator.getAxisConfiguration(FileSystemConfigurator.java:135)
at org.apache.axis2.context.ConfigurationContextFactory.createConfigurationContext(ConfigurationContextFactory.java:64)
at org.apache.axis2.context.ConfigurationContextFactory.createConfigurationContextFromFileSystem(ConfigurationContextFactory.java:210)
at org.apache.axis2.client.ServiceClient.configureServiceClient(ServiceClient.java:151)
at org.apache.axis2.client.ServiceClient.<init>(ServiceClient.java:144)
And later below I am also getting a ClassNotFoundException:
Caused by: java.lang.ClassNotFoundException: org.apache.woden.resolver.URIResolver
at org.eclipse.osgi.internal.loader.BundleLoader.findClassInternal(BundleLoader.java:506)
at org.eclipse.osgi.internal.loader.BundleLoader.findClass(BundleLoader.java:422)
at org.eclipse.osgi.internal.loader.BundleLoader.findClass(BundleLoader.java:410)
at org.eclipse.osgi.internal.baseadaptor.DefaultClassLoader.loadClass(DefaultClassLoader.java:107)
at java.lang.ClassLoader.loadClass(ClassLoader.java:650)
... 27 more
If you're going to bring your own web services implementation, you have to run with PARENT_LAST class loading (or, preferably, package your version of the web services implementation in a shared library with an isolated class loader) and disable the built-in web services engine in WebSphere. Based on the exception stack, what appears to be happening is that something in your web services engine is interacting with WebSphere's version and triggering a load for a dependency that WebSphere doesn't package. Setting the environment to use your version will resolve that.
Note that WebSphere already includes Axis2, so unless you are strictly dependent on that specific point release, there's a very good chance that the best solution for you is just to rely on WebSphere's web services provider, rather than bringing your own. That will greatly simplify your configuration, since you won't need to mess with class loading delegation settings or system properties disabling the web services provider.

Oozie Java Action access to Hive Server 2(Kerberized) using delegation token

Currently I am having an issue and really need some help.
We are trying to Kerberize our Hadoop cluster, including Hive Server 2 and Oozie. My Oozie job spins off a Java action on a data node which tries to connect to the kerberized Hive Server 2.
There is no user Kerberos keytab for authentication, so I can only use the delegation token passed by Oozie in the Java action to connect to Hive Server 2.
My question is: is there any way that I can use a delegation token in an Oozie Java action to connect to Hive Server 2? If so, how can I do it through Hive JDBC?
Thanks
Jary
When using Oozie in a kerberized cluster...
for a "Hive" or "Pig" action, you must configure <credentials> of type HCat
for a "Hive2" action (just released with V4.2) you must configure <credentials> of type Hive2
for a "Java" action opening a custom JDBC connection to HiveServer2, I fear that Oozie cannot help -- unless there is an undocumented hack that would make it possible to reuse this new Hive2 credential?!?
Reference: Oozie documentation about Kerberos credentials
AFAIK you cannot use Hadoop delegation tokens with HiveServer2. HS2 uses Thrift for managing client connections, and Thrift supports Kerberos; but Hadoop delegation tokens are something different (Kerberos was never intended for distributed computing; a workaround was needed).
What you can do is ship a full set of GSSAPI configuration, including a keytab, in your "Java" Action. It works, but there are a number of caveats:
the Hadoop Auth library seems to be hard-wired on the local ticket cache in a very lame way; if you must connect to both HDFS and HiveServer2, then do HDFS first, because as soon as JDBC initiates its own ticket based on your custom conf, the Hadoop Auth will be broken
Kerberos configuration is tricky, GSSAPI config is worse, and since these are security features the error messages are not very helpful, by design (it would be bad taste to tell hackers why their intrusion attempt was rejected)
use OpenJDK if possible; by default the Sun/Oracle JVM has limitations on cryptography (because of silly and obsolete US export policies) so you must download 2 JARs with "unlimited strength" crypto settings to replace the default ones
Reference: another StackOverflow post that I found really helpful to set up "raw" Kerberos authentication when connecting to HiveServer2; plus a link about a very helpful "trace flag" for debugging your GSSAPI config e.g.
-Djava.security.debug=gssloginconfig,configfile,configparser,logincontext
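To make the keytab approach concrete, here is a minimal sketch (host names, principals, and file names are placeholders) that logs in from a keytab with the Hadoop UserGroupInformation API and then opens a Hive JDBC connection; the ;principal= part of the URL names the HiveServer2 service principal:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.security.UserGroupInformation;

public class Hs2KerberosClient {
    public static void main(String[] args) throws Exception {
        // Assumes core-site.xml on the classpath sets hadoop.security.authentication=kerberos
        Configuration conf = new Configuration();
        UserGroupInformation.setConfiguration(conf);

        // Log in from the keytab shipped alongside the action
        // (per the caveat above, do any HDFS work before opening the JDBC connection)
        UserGroupInformation.loginUserFromKeytab("user@EXAMPLE.COM", "user.keytab");

        // Placeholder HiveServer2 host and service principal
        String url = "jdbc:hive2://hs2-host:10000/default;principal=hive/hs2-host@EXAMPLE.COM";
        try (Connection conn = DriverManager.getConnection(url);
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SHOW DATABASES")) {
            while (rs.next()) {
                System.out.println(rs.getString(1));
            }
        }
    }
}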
Final warning: Kerberos is black magic. It will suck your soul away. More prosaically, it will have you lose many man-days to cryptic config issues, and team morale will suffer. We've been there.
As Samson said, a Java action in Oozie requires additional authentication to connect to some "kerberized" services like Hive. This can be achieved in a relatively simple way, without modifications to the application.
Oozie action
<action name="java-action">
<java>
...
<main-class>some.App</main-class>
<java-opts>-Djavax.security.auth.useSubjectCredsOnly=true -Djava.security.krb5.conf=/etc/krb5.conf -Djava.security.auth.login.config=jaas.conf</java-opts>
<file>hdfs://some/path/App.jar</file>
<file>hdfs://some/path/user.keytab</file>
<file>hdfs://some/path/jaas.conf</file>
</java>
...
</action>
jaas.conf
com.sun.security.jgss.initiate {
    com.sun.security.auth.module.Krb5LoginModule required
    useKeyTab=true
    useTicketCache=true
    principal="USER@EXAMPLE.COM"
    doNotPrompt=true
    keyTab="user.keytab";
};

DataSource naming: JBoss EAP 6.2 vs WebLogic

I am porting a suite of related applications from WebLogic to JBoss EAP v6.2.
I have set up a data source connection using the JBoss command line interface and hooked it to an Oracle database. This datasource has a name of "mydatasource" and a JNDI name of
"java:jboss/datasources/mydatasource", as per JBoss standards. I can test and validate this database connection.
However, when I try to port the code and run it, the connection doesn't work. The code that worked in WebLogic was simply:
InitialContext ic = new InitialContext();
DataSource ds = (DataSource) ic.lookup(dataSource);
with a value in dataSource of "mydatasource".
This worked in WebLogic, but in JBoss it throws a NameNotFoundException:
javax.naming.NameNotFoundException: mydatasource-- service jboss.naming.context.java.mydatasource
Clearly there is a difference in how the InitialContext is set up between the two servers.
But this port involves a large number of small applications, all of which connect to the datasource via code like that above. I don't want to rewrite all that code.
Is there a way through configuration (InitialContextFactory, maybe) to define the initial context such that code like that above will work without rewriting, or perhaps is there another way of naming the datasource that JBoss will accept that would allow code like that above to work without rewriting?
Or must we bite the bullet and accept that this code needs a rewrite?
Update: Yes, I know that simply passing "java:jboss/datasources/mydatasource" to the InitialContext lookup solves the problem, but I am looking for a solution via configuration, rather than via coding if there is such a solution.
The way to do this correctly through configuration is to use
java:comp/env/jdbc/myDataSource
then use a resource-ref in web.xml to map it to the declared datasource, and use weblogic.xml or jboss-web.xml to actually map it to the real one.
In the WebLogic admin console, when you define the datasource, it can be jdbc/realDataSource.
JNDI path Tomcat vs. Jboss
For WebLogic: http://docs.oracle.com/cd/E13222_01/wls/docs103/jdbc_admin/packagedjdbc.html
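For reference, the lookup code then stays identical on both servers; only the descriptor mapping differs (jdbc/myDataSource below is a placeholder name):

import javax.naming.InitialContext;
import javax.sql.DataSource;

// Portable lookup: java:comp/env/jdbc/myDataSource is declared as a resource-ref
// in web.xml and mapped to the real datasource by weblogic.xml or jboss-web.xml.
InitialContext ic = new InitialContext();
DataSource ds = (DataSource) ic.lookup("java:comp/env/jdbc/myDataSource");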

How to address JNDI configuration when using mvn scala:console

I'm troubleshooting a Mapper problem and I'm running into an issue trying to use a Mapper class inside of the Scala/Lift console. Our MetaMappers have their datasource configured through a ConnectionIdentifier that points to a JDBC datasource configured in JNDI. This works great when bootstrapping through Jetty.
When loading the console and running (new bootstrap.liftweb.Boot).boot to initialize, Schemifier.schemify fails because the JNDI configuration is not available.
scala> (new bootstrap.liftweb.Boot).boot
java.lang.NullPointerException: Looking for Connection Identifier ConnectionIdentifier(jdbc/svcHub) but failed to find either a JNDI data source with the name jdbc/svcHub or a lift connection manager with the correct name
at net.liftweb.mapper.DB$$anonfun$7$$anonfun$apply$12.apply(DB.scala:141)
at net.liftweb.mapper.DB$$anonfun$7$$anonfun$apply$12.apply(DB.scala:141)
at net.liftweb.common.EmptyBox.openOr(Box.scala:465)
at net.liftweb.mapper.DB$$anonfun$7.apply(DB.scala:140)
at net.liftweb.mapper.DB$$anonfun$7.apply(DB.scala:140)
at net.liftweb.common.EmptyBox.openOr(Box.scala:465)
at net.liftweb.mapper.DB$.newConnection(DB.scala:134)
at net.liftweb.mapper.DB$.getConnection(DB.scala:230)
at net.liftweb.mapper.DB$.use(DB.scala:581)
at net.liftweb.mapper.Schemifier$.schemify(Sche...
Essentially, I'd like to have full MetaMapper functionality from within the console. My question is: What's the best way to bootstrap a Lift app from the console such that the JNDI-based dependencies can also be fulfilled outside of a JNDI-capable web container?
Under an application server it's likely that the server will provide a JNDI context for you. In a standalone application you must provide a JNDI context yourself. For that you can use a javax.naming.InitialContext.
There is a nice example using Apache's DBCP here: http://commons.apache.org/dbcp/guide/jndi-howto.html. Of course, you will have to adapt the DataSource objects to the implementation you are using.
This will be enough (not very elegant, though) for simple JNDI usage.
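Following the DBCP howto linked above, a rough sketch of wiring up such a standalone JNDI context before running Boot from the console could look like this (the filesystem JNDI provider, driver class, URL, and credentials are placeholder assumptions):

import javax.naming.Context;
import javax.naming.InitialContext;

import org.apache.commons.dbcp.cpdsadapter.DriverAdapterCPDS;
import org.apache.commons.dbcp.datasources.SharedPoolDataSource;

public class ConsoleJndiBootstrap {
    public static void main(String[] args) throws Exception {
        // Use the filesystem JNDI provider (fscontext.jar + providerutil.jar) so that any
        // InitialContext created later in the console resolves against the same store.
        System.setProperty(Context.INITIAL_CONTEXT_FACTORY,
                "com.sun.jndi.fscontext.RefFSContextFactory");
        System.setProperty(Context.PROVIDER_URL, "file:///tmp/jndi");
        Context ctx = new InitialContext();

        // Placeholder driver/URL/credentials for the database behind jdbc/svcHub
        DriverAdapterCPDS cpds = new DriverAdapterCPDS();
        cpds.setDriver("org.postgresql.Driver");
        cpds.setUrl("jdbc:postgresql://localhost:5432/svchub");
        cpds.setUser("user");
        cpds.setPassword("secret");
        ctx.rebind("jdbc/svcHubCPDS", cpds);

        // Pooling DataSource that refers to the CPDS binding above; bound under the
        // name that the ConnectionIdentifier(jdbc/svcHub) lookup expects
        SharedPoolDataSource ds = new SharedPoolDataSource();
        ds.setDataSourceName("jdbc/svcHubCPDS");
        ds.setMaxActive(10);
        ctx.rebind("jdbc/svcHub", ds);
    }
}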