If one NameNode fails in a multiple-NameNode configuration? - hadoop-yarn

What does the Resource Manager do if a job request is associated with the failed NameNode's namespace? Say, for example, ..../finance is the namespace associated with that NameNode.
Which NameNode will serve this information to the Resource Manager, given that NameNodes are isolated from each other?
Thanks in advance.
Suman Kumar

HDFS HA will handle this scenario, as there is a standby NameNode for every active NameNode.
Please correct me if my understanding is wrong.
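To make this concrete: if /finance is served by a dedicated nameservice in a federated cluster, HA is configured per nameservice, so the standby of that nameservice takes over and the Resource Manager's client calls fail over transparently. A minimal, illustrative hdfs-site.xml fragment (all nameservice and host names below are made up, not from the question) might look like:

```xml
<!-- Federated cluster: the "finance" nameservice owns /finance, and each
     nameservice has an active/standby NameNode pair for HA. -->
<property>
  <name>dfs.nameservices</name>
  <value>finance,marketing</value>
</property>
<property>
  <name>dfs.ha.namenodes.finance</name>
  <value>nn1,nn2</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.finance.nn1</name>
  <value>financenn1.example.com:8020</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.finance.nn2</name>
  <value>financenn2.example.com:8020</value>
</property>
<!-- Clients (including YARN) resolve the currently active NameNode via the
     failover proxy provider, so no caller needs to know which one is up. -->
<property>
  <name>dfs.client.failover.proxy.provider.finance</name>
  <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>
```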

Related

How to pass mapping details from a config file to an AWS Glue job

I am trying to implement a simple ETL job using a Python script.
I am able to take a file from one S3 location and land it in another S3 location.
Now I am trying to apply a mapping step: how can I handle the mapping in a separate config file?
For example:
-> in the source, column a should be split and mapped into target columns c and d
-> source a + b should be stored in column z in the target
Please share your thoughts or a reference link.
Thanks in advance.
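One way to keep the mapping out of the script is a small JSON config that names each transform and its columns, and a generic function that applies the rules per record. This is only a sketch; the config schema (the "op", "source", "targets" keys) and the column names are my own invention, not a Glue API:

```python
import json

# Hypothetical mapping config; in practice this would live in a separate
# file (e.g. on S3) and be loaded at job start.
CONFIG = json.loads("""
{
  "mappings": [
    {"op": "split",  "source": "a", "sep": "-", "targets": ["c", "d"]},
    {"op": "concat", "sources": ["a", "b"],     "target": "z"}
  ]
}
""")

def apply_mappings(row, config):
    """Apply split/concat rules from the config to one record (a dict)."""
    out = dict(row)
    for rule in config["mappings"]:
        if rule["op"] == "split":
            # Split the source value and fan it out over the target columns.
            parts = out[rule["source"]].split(rule["sep"], len(rule["targets"]) - 1)
            for target, part in zip(rule["targets"], parts):
                out[target] = part
        elif rule["op"] == "concat":
            # Join several source columns into one target column.
            out[rule["target"]] = "".join(out[s] for s in rule["sources"])
    return out

print(apply_mappings({"a": "100-200", "b": "x"}, CONFIG))
```

In a Glue job you would call `apply_mappings` on each record (or translate the same config into a DynamicFrame `ApplyMapping` call); the point is that adding a new column rule then only touches the config file.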

Is it possible to get the log stream into a CloudWatch metric filter?

I want to create a CloudWatch metric filter so that I can count the number of log entries containing the error line
Connection State changed to LOST
I have a CloudWatch Log Group called "nifi-app.log" with 3 log streams (one for each EC2 instance, named `i-xxxxxxxxxxx`, `i-yyyyyyyyyy`, etc.).
Ideally I would want to extract a metric nifi_connection_state_lost_count with a dimension InstanceId whose value is the log stream name.
From what I gather from the documentation, it is possible to extract dimensions from the log file contents themselves, but I do not see any way to refer to, for example, the log stream name.
The log entries look like this:
2022-03-15 09:44:47,811 INFO [Curator-ConnectionStateManager-0] o.a.n.c.l.e.CuratorLeaderElectionManager org.apache.nifi.controller.leader.election.CuratorLeaderElectionManager$ElectionListener#3fe60bf7 Connection State changed to LOST
I know that I can extract fields from those log entries with [date,level,xxx,yy,zz], but what I need is not in the log entry itself; it is part of the log entry's metadata (the log stream name).
The log files are NiFi log files and do NOT have the instance name, hostname, or anything like that printed in each log line. I would rather not change the log format, as that would require a restart of the NiFi cluster, and I am not even sure how to change it.
So, is it possible to get the log stream name as a dimension for a CW metric filter in some other way?
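As far as I can tell from the metric filter documentation, dimensions can only reference fields parsed out of the event text, and there is no token that resolves to the log stream name. For reference, here is a sketch of what is supported, as a boto3 `put_metric_filter` payload with the log-stream dimension necessarily left out; the filter name and namespace are illustrative, only the log group name and error line come from the question:

```python
# Metric filter counting every event containing the error line. A quoted
# term in a filter pattern matches events containing that exact phrase.
filter_kwargs = {
    "logGroupName": "nifi-app.log",
    "filterName": "nifi-connection-state-lost",      # illustrative name
    "filterPattern": '"Connection State changed to LOST"',
    "metricTransformations": [{
        "metricName": "nifi_connection_state_lost_count",
        "metricNamespace": "NiFi",                   # illustrative namespace
        "metricValue": "1",                          # count 1 per match
        # Dimensions, if added here, must point at parsed fields like
        # $level from a [date, level, ...] pattern -- there is no
        # $logStream-style token, which is the limitation in question.
    }],
}
# import boto3
# boto3.client("logs").put_metric_filter(**filter_kwargs)
print(filter_kwargs["metricTransformations"][0]["metricName"])
```

If per-instance counts are required, the usual workarounds are one metric filter per log stream (with a stream-name prefix filter on the subscription side) or a small Lambda/subscription filter that republishes the metric with the stream name as a dimension.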

Tivoli command line to search objects

Is there a CLI command to find Tivoli objects that match a search pattern?
For example, I need to check whether any object with a name containing "DUMMY" is present, and if so, what the type of the object is (a Job Stream, a Tivoli Job, a Calendar, an Event, or a Resource).
I came across the concept that Tivoli stores all its object definitions in a database-like repository, and that we can execute SQL-like SELECT queries to find object definitions, but I am not sure how to do that using a CLI command.
The composer command line allows filtering using wildcards (@ and ?), but you have to run the command once for each object type.
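One workaround is to export the definitions to a text file with composer and search the dump locally, so a single pass covers all object types. The sketch below does that; the sample export text and the section keywords it keys on are assumptions about the dump format made for illustration, so check them against your actual composer output:

```python
# Illustrative composer-style dump; the exact format is an assumption.
EXPORT = """\
$CALENDAR
DUMMYCAL
 "test calendar"
SCHEDULE MASTER#DUMMYSTREAM
ON EVERYDAY
:
MASTER#REALJOB
END
"""

def find_objects(text, needle):
    """Return (object type, name) pairs whose name contains the needle."""
    section = None
    hits = []
    for line in text.splitlines():
        if line.startswith("$CALENDAR"):
            section = "Calendar"          # following names are calendars
            continue
        if line.startswith("SCHEDULE "):
            section = "Job Stream"
            name = line.split()[1]        # SCHEDULE <workstation#name>
            if needle in name:
                hits.append((section, name))
            continue
        stripped = line.strip()
        name = stripped.split()[0] if stripped else ""
        if needle in name:
            hits.append((section, name))
    return hits

print(find_objects(EXPORT, "DUMMY"))
```

The same idea works for jobs, resources, and events once you add their section keywords; the benefit is one grep-style search instead of one composer invocation per object type.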

SolrCloud update node by node after schema update

Is it possible to update the nodes in a SolrCloud cluster one by one, so that after an update of the Solr schema you do not have to reindex all nodes at once and incur search downtime?
Current SolrCluster configuration
So, basically, can I:
Put one node into recovery mode
Update this node
Update schema.xml
Reindex the node
Bring this node up as the Leader
Put another node into recovery mode
Update this node too and launch it
Or did I miss something?
Your question is how to update schema.xml in SolrCloud.
This answer is about changes to schema.xml that do not imply the need for reindexing.
I also assume that you created your collection with a named configuration in ZooKeeper (collection.configName).
In this case:
Upload your changed configuration folder to ZooKeeper.
Reload your collection.
Be aware that step 2 needs the name of the collection (it does not support an alias).
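The two steps above can be sketched as follows; the collection name "mycollection", the config name "myconf", and the hosts are illustrative placeholders:

```python
from urllib.parse import urlencode

# Step 1 happens outside this script, e.g. with the zkcli.sh script that
# ships with Solr (paths/hosts illustrative):
#   server/scripts/cloud-scripts/zkcli.sh -zkhost zk1:2181 \
#       -cmd upconfig -confdir ./conf -confname myconf
#
# Step 2: reload the collection via the Collections API, which makes every
# replica pick up the new config without restarting nodes.
solr_base = "http://localhost:8983/solr"
params = urlencode({"action": "RELOAD", "name": "mycollection"})
reload_url = f"{solr_base}/admin/collections?{params}"
# from urllib.request import urlopen; urlopen(reload_url)  # run against a live cluster
print(reload_url)
```

Because RELOAD is a collection-wide API call, there is no need to take nodes out one by one for a non-breaking schema change.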

Connecting to Oracle11g database from Websphere message broker 6

I am trying a simple INSERT from a Compute node in WebSphere Message Broker 6.
The data source name, which is provided in the broker's odbc.ini file, is specified in the node properties of the Compute node, and I have written the following ESQL code.
SET TABLE = 'MYTABLE';
SET MYVALUE = 'TESTVALUE';
INSERT INTO Database.{TABLE} VALUES(MYVALUE);
The connection URL is provided in tnsnames.ora. The URL is a cluster URL, which points to 3 database instances.
When I run the query, I get an exception in the trace that the table or view does not exist.
But when I connect to the DB using any of the 3 direct URLs, I am able to see the table.
Note: the database is Oracle 11g.
Can anyone explain to me what is happening?
The problem was that my application was using the same DSN as my broker. When the broker was created, the username and password provided pointed to a different schema, which does not contain the tables for my application.
The solution was to create a new DSN and use mqsisetdbparams to point it to the correct schema.
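A sketch of that fix on the broker host; the broker name, DSN, and credentials below are illustrative placeholders (the commands are echoed here rather than executed, so run them directly):

```shell
# Register credentials for a new DSN (added to odbc.ini for the application
# schema), then restart the broker so the change takes effect.
BROKER=MYBROKER
DSN=MYAPPDSN
echo "mqsisetdbparams $BROKER -n $DSN -u appuser -p apppass"
echo "mqsistop $BROKER"
echo "mqsistart $BROKER"
```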