Can the index name in Kibana differ from, or must it be the same as, the index created by Logstash when it supplies data to Elasticsearch? - indexing

I am using the ELK stack in my organization, and I wanted to understand the pipeline across the ELK stack along with Filebeat.
I can see an index name (let's take it as XYZ) present in my Kibana UI through the console command "GET _all".
I wanted to know whether the index name, i.e. 'XYZ', was provided by Logstash to Elasticsearch, or whether it could be a different name.

The index name used in the elasticsearch output of Logstash can differ from the actual index name you see in Kibana monitoring or in the _cat/indices API call when you use index aliases.
An example is when you use Index Lifecycle Management (ILM) to roll over and retire older indices. For this to work, you'd need to create an index alias, say xyz, an ILM policy that defines rollover parameters such as size, and finally an index template that maps this index alias and ILM policy to an index pattern. Here, you'd specify the index name in Logstash as xyz, but the actual index names in Elasticsearch would be something like xyz-000001, xyz-000002, etc.
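As a sketch of the moving parts described above (the xyz names and the 50gb/30d thresholds are illustrative assumptions, not values from the question), the setup in Kibana's Dev Tools console might look like:

```
# ILM policy defining when an index should roll over
PUT _ilm/policy/xyz-policy
{
  "policy": {
    "phases": {
      "hot": {
        "actions": {
          "rollover": { "max_size": "50gb", "max_age": "30d" }
        }
      }
    }
  }
}

# Index template tying the pattern to the policy and the write alias
PUT _template/xyz-template
{
  "index_patterns": ["xyz-*"],
  "settings": {
    "index.lifecycle.name": "xyz-policy",
    "index.lifecycle.rollover_alias": "xyz"
  }
}

# Bootstrap the first concrete index behind the alias
PUT xyz-000001
{
  "aliases": { "xyz": { "is_write_index": true } }
}
```

Logstash then keeps writing to index => "xyz" in its elasticsearch output, while GET _cat/indices shows the concrete indices xyz-000001, xyz-000002, and so on.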
More on aliases here: Elasticsearch Aliases

Related

How to set up a Splunk summary index?

I'm a bit confused about setting up a summary index in Splunk.
I have an index named index_1 which receives logs from my app.
There are far too many logs, and I need to save an aggregation of them.
I have tried setting up the summary index from here to an index named summary,
but when I search that index there are no log entries.
My search is as follows:
index=index_1 ... level>30
I couldn't understand when to use the collect command and when setting it up from the web UI is enough.
Your search, index=index_1 ... level>30, should reduce the number of events being returned to only those events you want to store in the summary index. In this case, it looks like you're only interested in keeping events where level>30.
At the end of your search, you need to include the collect command. The collect command takes the remaining events and writes them to the named index, so collect index=summary.
Overall, your search should look like:
index=index_1 ... level>30 | collect index=summary
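Since the stated goal is to store an aggregation rather than the raw events, collect can also follow a transforming command. A sketch (the span, source, and level fields in the stats step are illustrative assumptions about the data; the "..." stands for whatever other filters the original search used):

```
index=index_1 ... level>30
| bin _time span=1h
| stats count by _time, source, level
| collect index=summary
```

Typically you would save something like this as a scheduled search so the hourly rollups land in the summary index on a regular cadence, which is what the web UI's summary-indexing checkbox automates for you.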
Here is an older blog post discussing summary indexing that may help you understand the process and good practices around using it.
https://davidveuve.com/tech/how-i-use-summary-indexes-in-splunk/

Using an index in DSE Graph

I'm trying to get the list of persons in a DataStax graph that share the same address with other persons, where the number of persons is between 3 and 5.
This is the query:
g.V().hasLabel('person').match(__.as('p').out('has_address').as('a').dedup().count().as('nr'),__.as('p').out('has_address').as('a')).select('nr').is(inside(3,5)).select('p','a','nr').range(0,20)
At first run I've noticed this error messages:
Could not find an index to answer query clause and graph.allow_scan is
disabled: ((label = person))
I've enabled graph.allow_scan=true and now it's working.
I'm wondering how I can create an index to be able to run this query without enabling allow_scan=true?
Thanks
You can create an index by adding it to the schema using a command like this:
schema.vertexLabel('person').index('address').materialized().by('has_address').add()
Full documentation on adding indexes is available here: https://docs.datastax.com/en/latest-dse/datastax_enterprise/graph/using/createIndexes.html
You should not enable graph.allow_scan=true, as under the covers it turns on ALLOW FILTERING on the CQL queries. This causes a lot of cluster scans and will inevitably time out with any real amount of data in the system. You should never enable this in any sort of production environment.
I am not sure that indexing is the solution to your problem.
The best way to do this would be to reify addresses as nodes and look for nodes with an in-degree between 3 and 5.
You can use an index on the textual fields of your address nodes.
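A sketch of that traversal, assuming addresses are reified as 'address' vertices that persons point to via 'has_address' edges (the 'name' property keys are illustrative assumptions about the schema):

```
// Find address vertices shared by 3 to 5 persons
// (P.inside is exclusive on both ends, so inside(2, 6) means 3..5),
// then return each address together with the persons attached to it.
g.V().hasLabel('address').
  filter(__.in('has_address').hasLabel('person').count().is(inside(2, 6))).
  project('address', 'persons').
    by('name').
    by(__.in('has_address').hasLabel('person').values('name').fold())
```

Counting incoming edges per address this way avoids the match/dedup gymnastics of the original query, and an index on the address properties then serves the initial hasLabel/has lookups.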

How to index and serve poems using apache solr

I am using Solr 4.10. I have to index poetry data in Solr, and I am not sure what the document structure should be. Basically, I want to provide a search facility for a term in a poem, and only the specific distich containing it should be returned. Should I index the complete poem as a single document, or one document per distich? I know some poems use two lines for a single concept and some four, etc. What should the storage format be?
Index the distiches individually and link them through a poem identifier and a sequence id. That way you can also retrieve the distich before or after, or the whole poem.
If there are certain use cases that need to treat the poems as a whole instead, create a separate collection and index to both collections. That way you can adjust and tweak the search results as you need, depending on the use case.
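As a sketch, documents in the per-distich collection might look like this (field names such as poem_id, seq, and distich are illustrative assumptions and would need matching field definitions in your schema.xml on Solr 4.10):

```
[
  { "id": "rumi-001-1", "poem_id": "rumi-001", "seq": 1, "distich": "text of the first distich ..." },
  { "id": "rumi-001-2", "poem_id": "rumi-001", "seq": 2, "distich": "text of the second distich ..." }
]
```

A search like q=distich:term then returns only the matching distiches; the neighbouring couplets can be fetched with fq=poem_id:rumi-001 and a range on seq, and sorting the whole poem_id on seq reassembles the full poem.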

REST API query using schema index on Neo4j

Since version 2.0, Neo4j has a preferred way for index creation : http://docs.neo4j.org/chunked/milestone/rest-api-schema-indexes.html
Following the documentation, I was able to easily create an index named "node" on the "label" node property.
Now, I have two questions :
(1) Index creation can take some time to run on large graphs. How can I know when the indexing process is done? (It is mentioned in the documentation, but they don't say how to do it.)
You can check the status of your index by listing all the indexes for the relevant label. The created index will show up, but have a state of POPULATING until the index is ready, where it is marked as ONLINE.
(2) How can I query, using the REST API (not Cypher) and the newly created index, to get the set of nodes matching a pattern? For example:
curl -X GET -H "Accept: application/json" http://localhost:17474/db/data/schema/index/node/?query=label:Energy
Thanks
You can check with the :schema command in the browser; the index will then show up as "ONLINE", and from that moment on updates to the index happen transactionally with your graph data.
Why don't you want to use Cypher?
You would use this endpoint, which returns nodes by label and property.
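A sketch of both calls against the REST API, using the question's port 17474 and its label/property names (note that property values in the label endpoint must be passed as URL-encoded JSON strings, hence the %22 quotes):

```
# List schema indexes for the label "node"; per the answer above, each
# entry reports a state of POPULATING while building, or ONLINE when ready
curl -H "Accept: application/json" \
  http://localhost:17474/db/data/schema/index/node

# Return all nodes with label "node" whose "label" property equals "Energy"
curl -H "Accept: application/json" \
  "http://localhost:17474/db/data/label/node/nodes?label=%22Energy%22"
```

The second endpoint is the label/property lookup; when a schema index exists for that label/property pair, Neo4j uses it to answer the query.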

why neo4j indexing can't find the nodes which I know they exist?

I have created and indexed my graph database through localhost:7474 in Neo4j (visually).
The nodes have three properties: name, priority, and link.
I created an index on the name property of the nodes through the
"add or remove indexes"
tab of localhost:7474 (as shown in the picture),
but when I try to retrieve nodes based on their names in the data browser, console, or my Java application, the nodes cannot be found.
In the console or data browser, when I write this query for red (there is a node with the name red), for example:
start n=node:name(name="red")
return n;
I get 0 rows returned.
And when I type this query:
start n=node:node(name="red")
return n;
or this one:
start n=node:Node(name="red")
return n;
I get "Index node does not exist" or "Index Node does not exist" in the console or data browser.
My database files are in the same path where Neo4j's default.graphdb lives (I mean in "C:\Users\fereshteh\Documents\Neo4j"), and I first created the index, and then the graph database.
I don't know what I am doing wrong; please help me, I would be very thankful.
Neo4j version: 1.9.4
I believe your assumption about how to set up the indexing is incorrect. You can read here for more information, but basically there are three things needed to create and read from an index: the index name, the entry key, and the entry value.
What you specified in the Web Console is the index name, but in your Cypher query you are specifying the entry key. You either want to use the node auto-index, or to create a node in Cypher and index it there, but that isn't an option in 1.9.4.
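For the auto-index route on 1.9.x, a sketch of the standard configuration (note that legacy auto-indexing only indexes properties written after it is enabled, so existing name properties would need to be set again):

```
# conf/neo4j.properties - enable legacy node auto-indexing on "name"
node_auto_indexing=true
node_keys_indexable=name
```

After a restart, the nodes can then be looked up through the auto-index, whose index name is node_auto_index:

```
start n=node:node_auto_index(name = "red")
return n;
```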