I am getting an error when listing tables; please help me out.
sqoop list-tables --connect "jdbc:sqlserver://serverName=YYYYYYYY;database=Operational_Standards;integratedSecurity=true;authenticationScheme=JavaKerberos"
Warning: /usr/hdp/2.2.4.12-1/accumulo does not exist! Accumulo imports will fail.
Please set $ACCUMULO_HOME to the root of your Accumulo installation.
INFO sqoop.Sqoop: Running Sqoop version: 1.4.5.2.2.4.12-1
INFO manager.SqlManager: Using default fetchSize of 1000
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/hdp/2.2.4.12-1/hadoop/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/hdp/2.2.4.12-1/zookeeper/lib/slf4j-log4j12-1.6.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/hdp/2.2.4.12-1/hive/lib/hive-jdbc-0.14.0.2.2.4.12-1-standalone.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
ERROR manager.CatalogQueryManager: Failed to list tables
com.microsoft.sqlserver.jdbc.SQLServerException: The TCP/IP connection to the host serverName=YYYYY, port 1433 has failed. Error: "null. Verify the connection properties. Make sure that an instance of SQL Server is running on the host and accepting TCP/IP connections at the port. Make sure that TCP connections to the port are not blocked by a firewall.".
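Note that the URL above puts the literal text serverName= inside the host part, which is why the driver reports the host as serverName=YYYYY. A corrected connection string would follow the standard Microsoft SQL Server JDBC URL form, host (and optional port) first, then the semicolon-separated properties; a sketch, assuming the default port 1433 and the driver's databaseName property name:
sqoop list-tables --connect "jdbc:sqlserver://YYYYYYYY:1433;databaseName=Operational_Standards;integratedSecurity=true;authenticationScheme=JavaKerberos"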
I have Apache Kafka (v. 2.13-3.0.0) installed on a remote Ubuntu server.
I followed this tutorial to secure my cluster:
https://medium.com/egen/securing-kafka-cluster-using-sasl-acl-and-ssl-dec15b439f9d
but when I try to start Kafka with the JAAS conf file, using the commands:
export KAFKA_OPTS=-Djava.security.auth.login.config=<kafka-binary-dir>/config/kafka_server_jaas.conf
./bin/kafka-server-start.sh ./config/server.properties
I receive the error:
[2021-11-12 10:30:47,864] INFO Registered kafka:type=kafka.Log4jController MBean (kafka.utils.Log4jControllerRegistration$)
[2021-11-12 10:30:48,089] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util)
[2021-11-12 10:30:48,099] ERROR Exiting Kafka due to fatal exception (kafka.Kafka$)
java.lang.ClassNotFoundException: kafka.security.auth.SimpleAclAuthorizer
at java.base/jdk.internal.loader.BuiltinClassLoader.loadClass(BuiltinClassLoader.java:581)
at java.base/jdk.internal.loader.ClassLoaders$AppClassLoader.loadClass(ClassLoaders.java:178)
at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:522)
at java.base/java.lang.Class.forName0(Native Method)
at java.base/java.lang.Class.forName(Class.java:398)
at org.apache.kafka.common.utils.Utils.loadClass(Utils.java:417)
at org.apache.kafka.common.utils.Utils.newInstance(Utils.java:406)
at kafka.security.authorizer.AuthorizerUtils$.createAuthorizer(AuthorizerUtils.scala:31)
at kafka.server.KafkaConfig.<init>(KafkaConfig.scala:1583)
at kafka.server.KafkaConfig.<init>(KafkaConfig.scala:1394)
at kafka.Kafka$.buildServer(Kafka.scala:67)
at kafka.Kafka$.main(Kafka.scala:87)
at kafka.Kafka.main(Kafka.scala)
These are the SSL config in server.properties file:
########### SECURITY using SCRAM-SHA-512 and SSL
listeners=PLAINTEXT://localhost:9092,SASL_PLAINTEXT://localhost:9093,SASL_SSL://localhost:9094
advertised.listeners=PLAINTEXT://localhost:9092,SASL_PLAINTEXT://localhost:9093,SASL_SSL://localhost:9094
security.inter.broker.protocol=SASL_SSL
ssl.endpoint.identification.algorithm=
ssl.client.auth=required
sasl.mechanism.inter.broker.protocol=SCRAM-SHA-512
sasl.enabled.mechanisms=SCRAM-SHA-512
# Broker security settings
ssl.truststore.location=/home/kafka/Downloads/kafka_2.13-3.0.0/config/truststore/kafka.truststore.jks
ssl.truststore.password=giuseppe
ssl.keystore.location=/home/kafka/Downloads/kafka_2.13-3.0.0/config/keystore/kafka.keystore.jks
ssl.keystore.password=giuseppe
ssl.key.password=giuseppe
# ACLs
authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer
super.users=User:admin
#zookeeper SASL
zookeeper.set.acl=false
########### SECURITY using SCRAM-SHA-512 and SSL
If I comment out the two ACL rows, I receive this error instead:
[2021-11-12 11:05:29,301] INFO [ThrottledChannelReaper-ControllerMutation]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2021-11-12 11:05:29,331] ERROR [KafkaServer id=0] Fatal error during KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer)
org.apache.kafka.common.KafkaException: Failed to acquire lock on file .lock in /tmp/kafka-logs. A Kafka instance in another process or thread is using this directory.
at kafka.log.LogManager.$anonfun$lockLogDirs$1(LogManager.scala:241)
at scala.collection.StrictOptimizedIterableOps.flatMap(StrictOptimizedIterableOps.scala:117)
at scala.collection.StrictOptimizedIterableOps.flatMap$(StrictOptimizedIterableOps.scala:104)
at scala.collection.mutable.ArraySeq.flatMap(ArraySeq.scala:37)
at kafka.log.LogManager.lockLogDirs(LogManager.scala:236)
at kafka.log.LogManager.<init>(LogManager.scala:112)
at kafka.log.LogManager$.apply(LogManager.scala:1283)
at kafka.server.KafkaServer.startup(KafkaServer.scala:254)
at kafka.Kafka$.main(Kafka.scala:109)
at kafka.Kafka.main(Kafka.scala)
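A quick way to check whether another broker process really holds that lock (a diagnostic sketch, assuming standard Linux tooling; /tmp/kafka-logs is the path from the error):
# Show which process, if any, has the lock file open:
fuser -v /tmp/kafka-logs/.lock
# Or list any running Kafka JVMs:
ps aux | grep -i '[k]afka'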
What is the cause? Could it be a wrong configuration?
Thanks.
Update:
Changing the row to:
# ACLs
authorizer.class.name=org.apache.kafka.server.authorizer.Authorizer
I receive this new error:
[2021-11-12 16:51:57,613] ERROR Exiting Kafka due to fatal exception (kafka.Kafka$)
org.apache.kafka.common.KafkaException: Could not find a public no-argument constructor for org.apache.kafka.server.authorizer.Authorizer
at org.apache.kafka.common.utils.Utils.newInstance(Utils.java:392)
at org.apache.kafka.common.utils.Utils.newInstance(Utils.java:406)
at kafka.security.authorizer.AuthorizerUtils$.createAuthorizer(AuthorizerUtils.scala:31)
at kafka.server.KafkaConfig.<init>(KafkaConfig.scala:1583)
at kafka.server.KafkaConfig.<init>(KafkaConfig.scala:1394)
at kafka.Kafka$.buildServer(Kafka.scala:67)
at kafka.Kafka$.main(Kafka.scala:87)
at kafka.Kafka.main(Kafka.scala)
Caused by: java.lang.NoSuchMethodException: org.apache.kafka.server.authorizer.Authorizer.<init>()
at java.base/java.lang.Class.getConstructor0(Class.java:3508)
at java.base/java.lang.Class.getDeclaredConstructor(Class.java:2711)
at org.apache.kafka.common.utils.Utils.newInstance(Utils.java:390)
... 7 more
It just seems that if you change
kafka.security.auth.SimpleAclAuthorizer
to
kafka.security.authorizer.AclAuthorizer
it should work; it worked for me.
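Applied to the server.properties from the question, the ACL block becomes (a direct application of the change above; super.users is unchanged):
# ACLs
authorizer.class.name=kafka.security.authorizer.AclAuthorizer
super.users=User:admin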
Kafka 3.0 removed SimpleAclAuthorizer; see this commit:
https://github.com/apache/kafka/commit/976e78e405d57943b989ac487b7f49119b0f4af4#diff-e0ccf1b5c964d2c303b6a69a8b8b67df5a6bfbae8aa514f580d353c4c6bf8e36
The blog post you followed seems to be using version 2.2.0.
We use ActiveMQ 5.15.6, and I need your guidance on extracting the ActiveMQ statistics via the command line. At the moment we use the web console to get the ActiveMQ statistics, which can be accessed via:
http://<IPAddress>:8161/admin/queues.jsp
And when I run ./activemq bstat it gives the output below:
$./activemq bstat
INFO: Loading '/etc/default/activemq'
INFO: Using java '/bin/java'
Java Runtime: Oracle Corporation 1.8.0_252 /usr/lib/jvm/java-1.8.0-openjdk-1.8.0.252.b09-2.el7_8.x86_64/jre
Heap sizes: current=62976k free=62319k max=932352k
JVM args: -Xms64M -Xmx1G -Djava.net.preferIPv4Stack=true -Djava.util.logging.config.file=logging.properties -Djava.security.auth.login.config=/apps/activemq/current/conf/login.config -Dactivemq.classpath=/apps/activemq/current/conf:/apps/activemq/apache-activemq-5.15.6//../lib/: -Dactivemq.home=/apps/activemq/current -Dactivemq.base=/apps/activemq/current -Dactivemq.conf=/apps/activemq/current/conf -Dactivemq.data=/apps/activemq/current/data
Extensions classpath:
[/apps/activemq/current/lib,/apps/activemq/current/lib/camel,/apps/activemq/current/lib/optional,/apps/activemq/current/lib/web,/apps/activemq/current/lib/extra]
ACTIVEMQ_HOME: /apps/activemq/current
ACTIVEMQ_BASE: /apps/activemq/current
ACTIVEMQ_CONF: /apps/activemq/current/conf
ACTIVEMQ_DATA: /apps/activemq/current/data
Connecting to JMX URL: service:jmx:rmi:///jndi/rmi://localhost:1099/jmxrmi
INFO: Broker not available at: service:jmx:rmi:///jndi/rmi://localhost:1099/jmxrmi
Can you please advise what command or script I need to run to get the stats via the command line?
The output is telling you what is wrong: the command-line client cannot connect to the JMX port where the broker should be exposing the JMX MBeans that the bstat command uses to collect broker metrics. You either need to enable JMX on the broker, or configure the bstat command to point to wherever you've configured the JMX port to be:
activemq bstat --jmxurl service:jmx:rmi:///jndi/rmi://localhost:1099/jmxrmi
To understand the broker JMX configuration, please read the docs, which are located here.
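If JMX is not enabled on the broker at all, one way to turn it on is via the JMX options the startup scripts pass to the JVM; a sketch, assuming the /etc/default/activemq file shown in the question's output is read at startup and that port 1099 matches the URL bstat tries (the authenticate/ssl flags below are for a quick local test only, not production):
# in /etc/default/activemq (or bin/env)
ACTIVEMQ_SUNJMX_START="-Dcom.sun.management.jmxremote.port=1099 -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false"
After restarting the broker, ./activemq bstat should be able to connect to the JMX URL shown in the question.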
I am trying to setup HDP 3.1.0 on Oracle Linux 7.
Ambari, HDFS and the Hive Metastore services are already running, but HiveServer2 is not starting.
When I try to start it manually:
# hive --service hiveserver2
I get this after several minutes of waiting:
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/hdp/3.1.0.0-78/phoenix/phoenix-5.0.0.3.1.0.0-78-server.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/hdp/3.1.0.0-78/hadoop/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
2018-12-15 14:15:28: Starting HiveServer2
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/hdp/3.1.0.0-78/hive/lib/log4j-slf4j-impl-2.10.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/hdp/3.1.0.0-78/hadoop/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
Hive Session ID = 99822aa9-957a-439e-904e-d9adce9a7893
Hive Session ID = fa34c442-4598-4f85-9493-daaf93804164
Hive Session ID = ed8de700-4ebf-4985-ae13-830b306be0e7
Hive Session ID = 6093d16b-53f0-4e21-9429-8046d3f3917a
Hive Session ID = a4fc572d-d56f-4c8a-97a0-8fc8bc115233
Hive Session ID = 02fdb753-45f7-4009-8283-bf3d5eef00b2
Hive Session ID = 47be06ad-42d2-4281-83f3-7e9b4cac1690
Hive Session ID = dae77692-3296-464f-995b-cb45a98d2e09
Hive Session ID = c4d49aa0-f829-4765-adbc-9afd5414775b
Hive Session ID = 8e26f8d8-bb01-4384-bfa2-8cb5ea66d1e8
This is what netstat is reporting:
# netstat -ntpl | egrep "10000|10001|10002"
tcp 0 0 192.168.1.100:10001 0.0.0.0:* LISTEN 422/java
tcp 0 0 192.168.1.100:10002 0.0.0.0:* LISTEN 26918/java
Nobody is listening on port 10000 :(
This is what I have in /hive-site.xml:
<property>
  <name>hive.server2.thrift.port</name>
  <value>10000</value>
</property>
<property>
  <name>hive.server2.thrift.http.port</name>
  <value>10001</value>
</property>
<property>
  <name>hive.server2.webui.port</name>
  <value>10002</value>
</property>
I assume I can ignore SLF4J warnings, correct? What else should I check?
It turned out that this service was trying to grab more memory than the 1024 MB allowed by the default /etc/hadoop/conf/yarn-site.xml setting:
<property>
  <name>yarn.nodemanager.resource.memory-mb</name>
  <value>1024</value>
</property>
I increased that limit to 1792 and the issue was resolved.
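For reference, the adjusted block in /etc/hadoop/conf/yarn-site.xml then looks like this; after restarting HiveServer2, the Thrift port can be re-checked with the same netstat command as above:
<property>
  <name>yarn.nodemanager.resource.memory-mb</name>
  <value>1792</value>
</property>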
I'm trying to start Pentaho server on Debian Jessie.
Pentaho craps out, showing the following error:
15:55:24,198 WARN [PentahoSolutionSpringApplicationContext] Exception encountered during context initialization - cancelling refresh attempt
org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'org.h2.tools.Server' defined in file [/opt/pentaho-biplatform-ce-6.1.0.1-196/biserver-ce/pentaho-solutions/system/GettingStartedDB-spring.xml]: Invocation of init method failed; nested exception is org.h2.jdbc.JdbcSQLException: Exception opening port "H2 TCP Server (tcp://localhost:9092)" (port may be in use), cause: "timeout" [90061-131]
The error is very clear: port 9092 is used by something else. The problem is that it is actually being used by Pentaho itself, so it's complaining about a port that it is currently using...
To test that, I changed the port to 9093 in the following file:
./pentaho-solutions/system/GettingStartedDB.properties
The only difference in the exception was the port, which was 9093 this time, so it's definitely complaining about the port it is itself using. Very weird.
Full log can be found here: http://ix.io/1ydv
Ideas?
Try adding the following attribute to the CATALINA_OPTS options in the start_pentaho.sh file:
CATALINA_OPTS="... -Dh2.bindAddress=ip_of_your_machine"
It helped me get rid of the Exception opening port "H2 TCP Server (tcp://localhost:9092)" (port may be in use) error.
Adding the following to the CATALINA_OPTS options in the start_pentaho.sh file solves this issue:
CATALINA_OPTS="... -Dh2.bindAddress=localhost"
The root cause of the problem is that your server's hostname does not point to 127.0.0.1.
Just add (or edit) this line in your /etc/hosts:
127.0.0.1 localhost YOUR_HOST_NAME
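To verify the change, check that the machine's hostname now resolves to the loopback address (a quick check, assuming standard Linux tools; the expected address is the one just added to /etc/hosts):
getent hosts $(hostname)
# expected: 127.0.0.1  localhost YOUR_HOST_NAME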
I am working on Ehcache replication using RMI for the first time. I am not able to get past the error below.
The root cause shown in the stack trace below is:
Caused by: java.lang.ClassNotFoundException: net.sf.ehcache.distribution.RMICachePeer_Stub (no security manager: RMI class loader disabled)
I am using jdk1.6.0_16 with WebLogic 12c and have already tried the approaches below, but didn't have any success.
1. Added the ehcache jar to the classpath:
set CLASSPATH=%CLASSPATH%;C:\Oracle\Middleware\wlserver_12.1\server\lib\ehcache-2.8.3.jar;
2. Set the security manager, policy, and RMI codebase in JAVA_OPTIONS:
set JAVA_OPTIONS=%JAVA_OPTIONS% -Djava.security.manager -Djava.security.policy==C:/Oracle/Middleware/wlserver_12.1/server/lib/weblogic.policy -Djava.rmi.server.codebase=file:///C:/Oracle/Middleware/wlserver_12.1/server/lib/ehcache-2.8.3.jar
3. Called System.setSecurityManager(new RMISecurityManager()) on application startup (see the sketch after the stack trace below).
Any help is appreciated.
Caused by: org.hibernate.cache.CacheException: net.sf.ehcache.CacheException: Problem starting listener for RMICachePeer //localhost:40001/com.wipro.dms.digi.domain.Location. Initial cause was RemoteException occurred in server thread; nested exception is:
java.rmi.UnmarshalException: error unmarshalling arguments; nested exception is:
java.lang.ClassNotFoundException: net.sf.ehcache.distribution.RMICachePeer_Stub (no security manager: RMI class loader disabled)
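For reference, the third step above (installing an RMI security manager at startup) is only a few lines; a minimal Java sketch, assuming it runs before the CacheManager is created (CacheBootstrap is a hypothetical entry-point name):
import java.rmi.RMISecurityManager;

public class CacheBootstrap {
    public static void main(String[] args) {
        // Without a security manager the RMI class loader is disabled,
        // which is exactly the error in the stack trace above.
        if (System.getSecurityManager() == null) {
            System.setSecurityManager(new RMISecurityManager());
        }
        // ... create the Ehcache CacheManager / start the application here
    }
}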