Datastax ring nodes showing incorrectly

We have 3 nodes in our existing Datastax cluster. Sometimes the ring status shows incorrectly; after restarting the DSE service on the affected nodes, the issue gets resolved.
Correct:
nodetool status
Datacenter: SearchGraph
=======================
UN 10.10.1.56 1015.41 MiB 1 ? 936a1ac0-6d5e-4a94-8953-d5b5a2016b92 rack1
UN 10.10.1.46 961.43 MiB 1 ? 3f41dc2a-2672-47a1-90b5-a7c2bf17fb50 rack1
UN 10.10.1.36 1013.72 MiB 1 ? 0822145f-4225-4ad3-b2be-c995cc230830 rack1
Wrong:
Datacenter: DC1
===============
?N 10.10.1.46 ? 1 ? null r1
?N 10.10.1.36 ? 1 ? null r1
Datacenter: SearchGraph
=======================
UN 10.10.1.56 1005.33 MiB 1 ? null rack1
Configuration details are the same on all nodes:
cat /etc/dse/cassandra/cassandra.yaml | grep endpoint_snitch:
endpoint_snitch: GossipingPropertyFileSnitch
cat /etc/dse/cassandra/cassandra-rackdc.properties |grep -E 'dc=|rack='
dc=SearchGraph
rack=rack1

The issue got resolved after restarting the DSE nodes with the saved_caches and hints folder contents removed. Earlier I had tried changing the DC name in cassandra-rackdc.properties and then reverting it, which may have caused the issue. Thanks.
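For reference, a rough sketch of the clean restart that resolved it on each affected node, assuming the default package-install data directories under /var/lib/cassandra (adjust to your data_file_directories and hints_directory settings):
sudo service dse stop
sudo rm -rf /var/lib/cassandra/saved_caches/*
sudo rm -rf /var/lib/cassandra/hints/*
sudo service dse start
nodetool status    # verify all three nodes report UN under Datacenter: SearchGraph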

Related

How to see the data (primary and backup) on each node in Apache Ignite?

I have 6 Ignite nodes and all are connected well to form a cluster. I have also set the number of backup copies to 2. Now I have put 20 entries into the cluster to check the partitioning and the data (primary and backup). I can see the count using the cache -a -r command.
Is there a command or a way to see the actual data on each node, both the primary copies and the backups?
You could use cache -scan -c=cacheName
Entries in cache: SQL_PUBLIC_PERSON
+=============================================================================================================================================+
| Key Class | Key | Value Class | Value |
+=============================================================================================================================================+
| java.lang.Integer | 1 | o.a.i.i.binary.BinaryObjectImpl | SQL_PUBLIC_PERSON_.. [hash=357088963, NAME=Name1] |
+---------------------------------------------------------------------------------------------------------------------------------------------+
Use help cache to see all cache-related commands.
See: https://apacheignite-tools.readme.io/docs/command-line-interface
You also have the option of turning on SQL: https://apacheignite-sql.readme.io/docs/schema-and-indexes
and: https://apacheignite-sql.readme.io/docs/getting-started
then use JDBC/SQL to see the entries in your cache.
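For example, a minimal sketch of querying the same data over SQL with the sqlline shell that ships with Ignite, assuming a node on localhost with the default thin-driver port and the SQL_PUBLIC_PERSON cache backed by a PERSON table:
./bin/sqlline.sh -u jdbc:ignite:thin://127.0.0.1/
sqlline> SELECT * FROM Person;
Note that a SQL query returns the rows regardless of which node holds the primary or backup copy, so it complements cache -scan rather than showing per-node placement.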

Removing a seed node from a Datastax datacenter

I have 6 nodes in our data center, of which 3 also act as seed nodes. I was planning to remove the 3 nodes in rack2 and also reduce the seed node count to two. With "nodetool decommission" we can remove nodes, but are there any extra steps involved in removing seed nodes?
UN 10.10.1.56 339.96 MiB 1 ? rack1
UN 10.10.1.46 334.72 MiB 1 ? rack1
UN 10.10.2.76 307.72 MiB 1 ? rack2
UN 10.10.2.66 296.15 MiB 1 ? rack2
UN 10.10.2.86 316.89 MiB 1 ? rack2
UN 10.10.1.36 375.69 MiB 1 ? rack1
You need to update the seed list on the remaining nodes before decommissioning a seed node: change the list in cassandra.yaml, then either do a rolling restart one node at a time or, in newer versions, use nodetool reloadseeds.
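A rough sketch of the order of operations, assuming the two rack1 nodes 10.10.1.56 and 10.10.1.46 are the seeds you keep:
# 1. On every remaining node, trim the seed list in cassandra.yaml (under seed_provider) to:
#        - seeds: "10.10.1.56,10.10.1.46"
# 2. Apply the new list: on recent versions run
nodetool reloadseeds
#    (on older versions, restart DSE one node at a time instead)
# 3. On each rack2 node being removed, run and wait for it to finish:
nodetool decommission
# 4. Confirm the node has left the ring:
nodetool status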

How to start certain number of nodes in Redis cluster

To create and start a cluster in Redis, I use create-cluster.sh file inside
/redis-3.04/utils/create-cluster
Using this, I can create as many nodes as I want by changing the settings:
PORT=30000
TIMEOUT=2000
NODES=10
REPLICAS=1
I wonder if I can create, for example, 10 nodes (5 masters with 5 slaves) at the beginning but start only 4 masters and 4 slaves (meet and join).
Thanks in advance.
Yes. You can add more nodes later if the load on your existing cluster increases.
The basic steps are:
Start new Redis instances. Let's say you want to add 2 more masters and their slaves (4 new Redis instances in total).
Then, using the redis-trib utility, do the following:
redis-trib.rb add-node <new master node:port> <any existing master>
e.g. ./redis-trib.rb add-node 192.168.1.16:7000 192.168.1.15:7000
After this, the new node will be assigned an ID. Note that ID and run the following command to add a slave to the master we added in the previous step:
./redis-trib.rb add-node --slave --master-id <master-node-id> <new-node> <master-node>
./redis-trib.rb add-node --slave --master-id 6f9db976c3792e06f9cd252aec7cf262037bea4a 192.168.1.17:7000 192.168.1.16:7000
where 6f9db976c3792e06f9cd252aec7cf262037bea4a is the ID of 192.168.1.16:7000.
Using similar steps you can add one more master-slave pair.
Since these new nodes do not contain any slots to serve, you have to move some of the slots from the existing masters to the new masters (resharding).
To do that, run the following resharding steps:
6.1 ./redis-trib.rb reshard <any-master-ip>:<master-port>
6.2 It will ask: How many slots do you want to move (from 1 to 16384)? Enter the number of slots you want to move.
6.3 Then it will ask: What is the receiving node ID?
6.4 Enter the node ID to which the slots need to be moved (one of the new masters).
6.5 It will prompt:
Please enter all the source node IDs.
Type 'all' to use all the nodes as source nodes for the hash slots.
Type 'done' once you entered all the source nodes IDs.
Source node #1: (enter source node id or all)
6.6 Then it will print messages showing which slots are being moved from which source node, like:
Moving slot 10960 from 37d10f18f349a6a5682c791bff90e0188ae35e49
Moving slot 10961 from 37d10f18f349a6a5682c791bff90e0188ae35e49
Moving slot 10962 from 37d10f18f349a6a5682c791bff90e0188ae35e49
6.7 It will ask: Do you want to proceed with the proposed reshard plan (yes/no)? Type yes, press Enter, and you are done.
Note: if the data set is large, resharding might take some time.
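If you prefer to skip the interactive prompts, redis-trib.rb also accepts the reshard parameters on the command line; a sketch using the new master's ID from above (the slot count is just an example, and option support may vary by version, so check ./redis-trib.rb help):
./redis-trib.rb reshard --from all --to 6f9db976c3792e06f9cd252aec7cf262037bea4a --slots 4096 --yes 192.168.1.15:7000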
A few useful commands:
To list all nodes in the cluster with their node IDs:
redis-cli -h node-ip -p node-port cluster nodes
e.g. redis-cli -h 127.0.0.1 -p 7000 cluster nodes
To list all slots in the cluster:
redis-cli -h 127.0.0.1 -p 7000 cluster slots
Ref : https://redis.io/commands/cluster-nodes
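To verify that all 16384 slots are still covered after resharding, you can also run the check command against any node, for example:
./redis-trib.rb check 192.168.1.15:7000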
Hope this helps.

How to check what an Oracle internal process is doing?

I want to know what Oracle internal work is being done for the session details below.
How can I check what is being carried out by "ora_j001"?
Please provide a query to find out what the process is doing.
INST_ID SID SERIAL# USERNAME OSUSER MACHINE PROCESS OS Process ID VALUE STATUS LAST_CALL_ET PROGRAM
1 1303 13000 APPS orafin ARG-FIN1A-DC 3842124 3842124 224905256 ACTIVE 57661 oracle@ARG-FIN1A-DC (J001)
$ ps -ef | grep 3842124
orafin 3842124 1 0 18:24:54 - 2:02 ora_j001_FINPROD1
argora 4395248 4784358 0 10:41:08 pts/6 0:00 grep 3842124
$ hostname
ARG-FIN1A-DC
For such a process, how can I check what kind of Oracle internal work is running?
You have listed your SID there. The query below finds the current SQL being run for a given SID. Tie this back to DBA_JOBS or DBA_SCHEDULER_JOBS to see job-related activity.
select q.sql_text, q.piece
from v$sqltext_with_newlines q
where q.sql_id = (select s.sql_id from v$session s where s.sid = <SID>)
order by q.piece;
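Since J001 is a job-queue slave, you can also tie the SID back to the job it is executing. A sketch, run as a user with access to the DBA views and with <SID> substituted as above:
-- legacy DBMS_JOB jobs
select r.sid, r.job, j.what
from dba_jobs_running r
join dba_jobs j on j.job = r.job
where r.sid = <SID>;
-- DBMS_SCHEDULER jobs
select owner, job_name, session_id
from dba_scheduler_running_jobs
where session_id = <SID>;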

How to monitor the process in a container?

I am currently looking into the LXC container API. I am trying to figure out how I can make the operating system know which container a currently running process belongs to, so that the OS can allocate resources to processes according to their container.
I am assuming your question is: given a PID, how do you find the container in which that process is running?
I will try to answer it based on my recent reading on Linux containers. Each container can be configured to start with its own user and group ID mappings.
From https://linuxcontainers.org/lxc/manpages/man5/lxc.container.conf.5.html:
lxc.id_map
Four values must be provided. First a character, either 'u' or 'g', to specify whether user or group ids are being mapped. Next is the first userid as seen in the user namespace of the container. Next is the userid as seen on the host. Finally, a range indicating the number of consecutive ids to map.
So, you would add something like this in config file (Ex: ~/.config/lxc/default.conf):
lxc.id_map = u 0 100000 65536
lxc.id_map = g 0 100000 65536
The above basically means that uids/gids between 0 and 65535 are mapped to numbers between 100000 and 165535. So a uid of 0 (root) in the container will be seen as 100000 on the host.
For example, inside the container it will look something like this:
root@unpriv_cont:/# ps -ef
UID PID PPID C STIME TTY TIME CMD
root 1 0 0 02:18 ? 00:00:00 /sbin/init
root 157 1 0 02:18 ? 00:00:00 upstart-udev-bridge --daemon
But on the host, the same processes will look like this:
ps -ef | grep 100000
100000 2204 2077 0 Dec12 ? 00:00:00 /sbin/init
100000 3170 2204 0 Dec12 ? 00:00:00 upstart-udev-bridge --daemon
100000 1762 2204 0 Dec12 ? 00:00:00 /lib/systemd/systemd-udevd --daemon
Thus, you can find the container of a process by looking for its UID and relating it to the mapping defined in that container's config.
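As a rough illustration of that lookup, given a host PID and the example id_map above (both values are just the ones from the listings in this answer):
pid=2204
uid=$(awk '/^Uid:/ {print $2}' /proc/$pid/status)
# With the mapping "u 0 100000 65536", host uids 100000-165535 belong to this container
echo "host uid: $uid, uid inside the container: $((uid - 100000))"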