I am new to Ignite and am trying to set it up on macOS, then create a table in Ignite using sqlline. I am using the following steps.
1. Download Ignite.
2. Set the Ignite path: export IGNITE_HOME="/Users/username/apps/apache-ignite-2.7.5-bin"
3. Start the Ignite node.
4. Change into the Ignite folder: cd /Users/username/apps/apache-ignite-2.7.5-bin
5. Start the Ignite node:
apache-ignite-2.7.5-bin % bin/ignite.sh
I can see the following in the last lines of the log:
[13:11:12] Ignite node started OK (id=c5598906)
[13:11:12] Topology snapshot [ver=1, locNode=c5598906,servers=1, clients=0, state=ACTIVE, CPUs=12, offheap=3.2GB, heap=3.6GB]
7. Open another new terminal:
pwd
/Users/username/apps/apache-ignite-2.7.5-bin
8. Start sqlline
apache-ignite-2.7.5-bin % ./sqlline.sh --verbose=true -u jdbc:ignite:thin://127.0.0.1/
But I am getting the error message below:
zsh: no such file or directory: ./sqlline.sh
apache-ignite-2.7.5-bin %
Could someone please explain why I am not able to start sqlline, and how I should do that?
I updated the version from 2.7 to 2.8 and it worked.
Try specifying ./bin/sqlline.sh :)
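For example, from the distribution root, with the node from step 5 still running (same connection URL as in the question; the CREATE TABLE line is only an illustrative statement for the table-creation goal, not something from the original post):
./bin/sqlline.sh --verbose=true -u jdbc:ignite:thin://127.0.0.1/
0: jdbc:ignite:thin://127.0.0.1/> CREATE TABLE person (id INT PRIMARY KEY, name VARCHAR);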
When I run the command below,
s3-dist-cp --src s3://test/9.19 --dest hdfs:///user/hadoop/test
I got an error about auxService:
20/02/03 07:52:13 INFO mapreduce.Job: Task Id : attempt_1580716305878_0001_m_000000_2, Status : FAILED
Container launch failed for container_1580716305878_0001_01_000004 : org.apache.hadoop.yarn.exceptions.InvalidAuxServiceException: The auxService:mapreduce_shuffle does not exist
In many Q&A posts I found a solution like this link. But there is no NodeManager process:
[hadoop@ip-172-31-37-115 ~]$ initctl list | grep yarn
hadoop-yarn-timelineserver start/running, process 8149
hadoop-yarn-resourcemanager start/running, process 17331
hadoop-yarn-proxyserver start/running, process 8147
My EMR cluster was created with the quick-create options and emr-5.28.0.
Does anyone know about this problem?
Thanks!
I'm sure there's some way to update the configs, but what I did was create a cluster using the 'advanced' setup and choose these software packages:
Ganglia
Hive
Hue
Mahout
Pig
Tez
Spark
Hadoop
(8 in total)
Most of those, except Spark, are installed with the default settings (the first radio button for software packages in quick setup). One of these software packages, or something related to it, is what causes s3-dist-cp to be installed, and I was able to use it with no problems with that setup.
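Before re-creating anything, it may also be worth checking on the existing cluster whether s3-dist-cp and the NodeManager are actually present; a quick sketch using only standard shell and the same initctl command the question already uses:
command -v s3-dist-cp || echo "s3-dist-cp not installed"
initctl list | grep -i nodemanager || echo "no NodeManager service on this node"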
I am trying to start an Ignite cluster from the command line on Windows.
This is what I did:
Downloaded the Ignite binary distribution and kept it in the C drive.
Set the environment variable IGNITE_HOME to that folder location.
In the command line I opened the directory:
C:\apache-ignite-fabric-2.2.0-bin\bin
Then, from that directory:
C:\apache-ignite-fabric-2.2.0-bin\bin>sh ignite.sh examples/config/example-ignite.xml
I am getting the following error:
Failed to create Ignite component (consider adding ignite-spring module to classpath) [component=SPRING, cls=org.apache.ignite.internal.processors.spring.IgniteSpringProcessorImpl]
What can be the reason for this error?
Found the solution for that: it needs to be run with the .bat file, not the .sh file:
C:\apache-ignite-fabric-2.2.0-bin\bin>ignite.bat examples/config/example-ignite.xml
If you're on Windows I imagine you should try ignite.bat?
ignite.sh might have problems with classpath when run on Windows, that would explain it.
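A minimal sketch of the same run from a plain cmd prompt, assuming the install location from the question (the set line is only needed if IGNITE_HOME is not already defined as an environment variable):
set IGNITE_HOME=C:\apache-ignite-fabric-2.2.0-bin
cd C:\apache-ignite-fabric-2.2.0-bin\bin
ignite.bat examples/config/example-ignite.xml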
I am trying to deploy a production cluster for my Flink program. I am using a standard hadoop-core EMR cluster with Flink 1.3.2 installed, using YARN to run it.
I am trying to configure my RocksDB state backend to write my checkpoints to an S3 bucket. I am trying to go through these docs: https://ci.apache.org/projects/flink/flink-docs-release-1.3/setup/aws.html#set-s3-filesystem. The problem seems to be getting the dependencies working correctly. I receive this error when trying to run the program:
java.lang.NoSuchMethodError: org.apache.hadoop.conf.Configuration.addResource(Lorg/apache/hadoop/conf/Configuration;)V
at com.amazon.ws.emr.hadoop.fs.EmrFileSystem.initialize(EmrFileSystem.java:93)
at org.apache.flink.runtime.fs.hdfs.HadoopFileSystem.initialize(HadoopFileSystem.java:328)
at org.apache.flink.core.fs.FileSystem.getUnguardedFileSystem(FileSystem.java:350)
at org.apache.flink.core.fs.FileSystem.get(FileSystem.java:389)
at org.apache.flink.core.fs.Path.getFileSystem(Path.java:293)
at org.apache.flink.runtime.state.filesystem.FsCheckpointStreamFactory.<init>(FsCheckpointStreamFactory.java:99)
at org.apache.flink.runtime.state.filesystem.FsStateBackend.createStreamFactory(FsStateBackend.java:282)
at org.apache.flink.contrib.streaming.state.RocksDBStateBackend.createStreamFactory(RocksDBStateBackend.java:273)
I have tried both adjusting the core-site.xml and leaving it as is. I have tried setting HADOOP_CLASSPATH to the /usr/lib/hadoop/share directory that contains (what I assume are) most of the JARs described in the above guide. I tried downloading the Hadoop 2.7.2 binaries and copying them into the flink/lib directory. All of these resulted in the same error.
Has anyone successfully gotten Flink to write to S3 on EMR?
EDIT: My cluster setup
AWS Portal:
1) EMR -> Create Cluster
2) Advanced Options
3) Release = emr-5.8.0
4) Only select Hadoop 2.7.3
5) Next -> Next -> Next -> Create Cluster ( I do fill out names/keys/etc)
Once the cluster is up I ssh into the Master and do the following:
1 wget http://apache.claz.org/flink/flink-1.3.2/flink-1.3.2-bin-hadoop27-scala_2.11.tgz
2 tar -xzf flink-1.3.2-bin-hadoop27-scala_2.11.tgz
3 cd flink-1.3.2
4 ./bin/yarn-session.sh -n 2 -tm 5120 -s 4 -d
5 Change conf/flink-conf.yaml
6 ./bin/flink run -m yarn-cluster -yn 1 ~/flink-consumer.jar
In my conf/flink-conf.yaml I add the following fields:
state.backend: rocksdb
state.backend.fs.checkpointdir: s3:/bucket/location
state.checkpoints.dir: s3:/bucket/location
My program's checkpointing setup:
env.enableCheckpointing(getCheckpointRate,CheckpointingMode.EXACTLY_ONCE)
env.getCheckpointConfig.enableExternalizedCheckpoints(ExternalizedCheckpointCleanup.RETAIN_ON_CANCELLATION)
env.getCheckpointConfig.setMinPauseBetweenCheckpoints(getCheckpointMinPause)
env.getCheckpointConfig.setCheckpointTimeout(getCheckpointTimeout)
env.getCheckpointConfig.setMaxConcurrentCheckpoints(1)
env.setStateBackend(new RocksDBStateBackend("s3://bucket/location", true))
If there are any steps you think I am missing, please let me know.
I assume that you installed Flink 1.3.2 on your own on the EMR Yarn cluster, because Amazon does not yet offer Flink 1.3.2, right?
Given that, it seems as if you have a dependency conflict. The method org.apache.hadoop.conf.Configuration.addResource(Lorg/apache/hadoop/conf/Configuration;)V was only introduced with Hadoop 2.4.0. Therefore, I assume that you have deployed a Flink 1.3.2 version which was built against Hadoop 2.3.0. Please deploy a Flink version which was built against the Hadoop version running on EMR. This will most likely solve all dependency conflicts.
Putting the Hadoop dependencies into the lib folder does not seem to work reliably, because the flink-shaded-hadoop2-uber.jar appears to take precedence in the classpath.
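A sketch of how to match the versions as suggested above, assuming the emr-5.8.0 / Hadoop 2.7.3 cluster from the question (the download URL is the one already used in the question):
# on the EMR master, confirm which Hadoop version is actually running
hadoop version
# then fetch the Flink build that matches it (the Hadoop 2.7 build in this case)
wget http://apache.claz.org/flink/flink-1.3.2/flink-1.3.2-bin-hadoop27-scala_2.11.tgz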
hbase dependency: /usr/local/hadoop/hbase-0.98.14/lib/hbase-common-0.98.14-hadoop2.jar
KYLIN_JVM_SETTINGS is -Xms1024M -Xmx4096M -XX:MaxPermSize=128M
KYLIN_DEBUG_SETTINGS is not set, will not enable remote debuging
KYLIN_LD_LIBRARY_SETTINGS is not set, lzo compression at MR and hbase might not work
A new Kylin instance is started by sreeharsha, stop it using "kylin.sh stop"
Please visit http://:7070/kylin to play with the cubes! (Useranme: ADMIN, Password: KYLIN)
You can check the log at ./bin/../tomcat/logs/kylin.log
sreeharsha@localhost:/usr/local/hadoop/kylin-1.0$
Got it. I just changed the property export HBASE_MANAGES_ZK=true in hbase-env.sh and restarted all the Hadoop daemons, and it's working fine.
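A minimal sketch of that change, assuming a standalone Apache HBase install with the standard scripts on the PATH (paths and service layout will differ on other setups):
# in $HBASE_HOME/conf/hbase-env.sh
export HBASE_MANAGES_ZK=true
# restart HBase (the question author also restarted the Hadoop daemons) so the setting takes effect
stop-hbase.sh
start-hbase.sh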
I'm running a DSE cluster in AWS: m2.4xlarge instances running Datastax Enterprise 4.6.1, with Cassandra 2.0.12.200 and Opscenter 5.1.0.
When we try to do a backup of a keyspace, we get this:
Snapshot of keyspaces [XXXXXXX] on node XXX.XXX.XXX.XXX failed: javax.management.RuntimeMBeanException: java.lang.RuntimeException: Tried to hard link to file that does not exist /raid0/cassandra/data/XXXXXX/XXXXXX/XXXXXXXXXXXX-jb-1-Index.db
Any ideas?
This is likely the following known issue:
https://issues.apache.org/jira/browse/CASSANDRA-6433
The workaround for this is a rolling restart, and it is fixed in C* 2.1. It seems to be triggered by dropping a keyspace and re-creating it.
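A rough sketch of the rolling restart, one node at a time (the dse service name assumes a DataStax Enterprise package install like the one in the question):
nodetool drain                # flush memtables and stop the node accepting writes
sudo service dse restart     # restart DSE on this node
nodetool status              # wait until the node shows UN again before moving to the next one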
I had a similar issue with a dropped keyspace and compaction.
Run a "nodetool repair" and check the "system" and "opscenter" keyspaces for traces of the deleted keyspace. You might need to manually remove stale rows.