I haven't worked on my Elasticsearch + Kibana project in a while, but when I open my elasticsearch-head frontend now, I notice that multiple new indexes have appeared (image below).
If I scroll to the right, I see my Logstash-related indexes, such as logstash-2015.06.26. So everything seems normal apart from the fact that these new entries appear (and they are numerous).
Surely I must have done something wrong for this to happen; it looks as if Logstash is parsing all the files in some unwanted directory. I cannot find anything unusual in my input or filter files.
Any ideas on how to figure this out?
Thank you.
You were hit by a vulnerability scanner. Since you had no security on your cluster, the scanner was able to POST to some URLs on your port 9200, which caused those indexes to be created.
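To confirm this and clean up, you can list the indexes over the REST API and delete the ones the scanner created. A minimal sketch, assuming a default single-node setup on localhost:9200 (the index name is just an example):

    # list all indexes with their document counts and sizes
    curl 'http://localhost:9200/_cat/indices?v'
    # delete an index the scanner created (repeat for each unwanted index)
    curl -XDELETE 'http://localhost:9200/some-scanner-created-index'

More importantly, don't leave port 9200 reachable from the internet: bind Elasticsearch to an internal interface (network.host in elasticsearch.yml) and/or firewall the port, otherwise the junk indexes will simply come back.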
I have an internal Apache server for testing purposes, not client-facing.
I wanted to upgrade the server to Apache 2.4, but there is no space left, so I was trying to delete some files on the server.
After checking file sizes, I found that the folder /var/lib/elasticsearch takes 80g of space. For example, /var/lib/elasticsearch/elasticsearch/nodes/0/indices/logstash-2015.12.08 alone already takes 60g. I'm not sure what Elasticsearch is. Is it safe if I delete this logstash data? Thanks!
Elasticsearch is a search engine, like a NoSQL database, and it stores its data in indices. What you are seeing is the data of one index.
Probably someone was using the index around 2015, when the index was timestamped.
I would just delete it.
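If Elasticsearch is still running on that box, it is cleaner to delete the index through its API than to remove the directory from disk. A minimal sketch, assuming the default port and that nothing still needs those 2015 logs:

    # check which indexes exist and how much space each one uses
    curl 'http://localhost:9200/_cat/indices?v'
    # delete the old Logstash index (frees the ~60g under /var/lib/elasticsearch)
    curl -XDELETE 'http://localhost:9200/logstash-2015.12.08'

Deleting the directory from disk with the service stopped may also work, but going through the API is the safer route and keeps the cluster state consistent.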
I'm afraid that only you can answer that question. One use for Logstash + Elasticsearch is to help make sense of system logs. That combination isn't normally set up by default, so I presume someone set it up at some time for some reason, and it has obviously done some logging. Only you can know whether it is still being used, or whether it is safe to delete.
As other answers pointed out, Elasticsearch is a distributed search engine, and I believe an earlier user was pushing application or system logs to this Elasticsearch instance using Logstash. If you can find the source application, check whether the original log files are still there; if so, you can go ahead and delete your index. I highly doubt anyone still needs logs from 2015, but it is really your call to see what your application's archiving requirements are and then take the necessary action.
I'd welcome any help with a simple issue: I have a clustered environment and I enabled Lucene replication in the properties (lucene.replicate.write=true). Now all the tutorials are instructing me to reindex Lucene.
Should I run it on one node? On both? Simultaneously or sequentially?
This question has been asked in Liferay Forum as well: https://www.liferay.com/community/forums/-/message_boards/view_message/69175435.
Thank you!
Basically, what I did at first was the following:
cluster.link.enabled=true
lucene.replicate.write=true
and the result was that replication was NOT working.
What I did next was to work around this issue and continue with clustering the rest of the portal, which in the end helped Lucene as well. My steps were:
deploy cluster activation keys
deploy ehcache-cluster-web.war
portal-ext.properties:
cluster.link.enabled=true
cluster.link.autodetect.address=<COMMONLY_ACCESSIBLE_IP_AND_PORT>
lucene.commit.batch.size=1
lucene.commit.time.interval=5000
lucene.replicate.write=true
ehcache.cluster.link.replication.enabled=true
cluster.link.channel.properties.control=<PATH_TO_XML>
cluster.link.channel.properties.transport.0=<PATH_TO_XML>
portal.instance.protocol=http
portal.instance.http.port=8080
setenv.sh
-Djava.net.preferIPv4Stack=true
-Djgroups.bind_addr=<IP_OF_THE_NODE>
edit the clusterlink_control and clusterlink_transport files according to the Liferay tutorials
with the servers shut down, delete the contents of data/lucene, then run reindexing from the Control Panel on one node
In the end, Lucene replication IS working. What I think could be significant is the following: the portal.properties explanation of the lucene.commit.* keys is rather hard to comprehend, and by trial and error I found out that these two keys are in an AND relation. I also found out about the portal.instance.* keys, which are used for multiple purposes in clustering and can matter if you have load balancers and/or Apache servers between the nodes and autodetection fails.
There are multiple ways to configure search clustering in Liferay. If you use the lucene.replicate.write=true approach, you're looking at several reindexing runs: on every restart of a server you must reindex that server's documents, as it might have missed indexing requests while it was down.
So, short answer: don't worry, reindex both. Sooner or later you'll do it anyway, no matter whether you only need one now.
I am having a very hard time making RavenFS behave properly and was hoping that I could get some help.
I'm running into two separate issues: one where uploading files to RavenFS while using an embedded database inside a service causes RavenDB to fall over, and another where synchronizing two instances set up in the same way makes the destination server fall over.
I have tried to do my best in documenting this... Code and steps to reproduce these issues are located here (https://github.com/punkcoder/RavenFSFileUploadAndSyncIssue), and a video is located here (https://youtu.be/fZEvJo_UVpc). I looked for these issues in the issue tracker and didn't find anything that seemed directly related, but I may have missed something.
The solution to this problem was to remove Raven from the project and replace it with MongoDB. Binary storage in Mongo can be done directly on the record without issue.
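For reference, a minimal sketch of what "on the record" storage looks like in the mongo shell; the database, collection and field names are just examples, and the base64 payload is a placeholder:

    use demo
    // binary data can sit directly in a document field as BinData,
    // subject to the 16 MB BSON document limit; larger files would normally go to GridFS
    db.files.insert({ name: "report.pdf", content: BinData(0, "<base64-encoded bytes>") })
    db.files.findOne({ name: "report.pdf" })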
I am new to Sitecore and currently in Sitecore developer training. Today I faced a weird issue, and since my trainer was also not able to resolve it, I think I should post it in this forum.
I have added some custom search fields to the solution. These fields were also added to the Lucene default search config. After deploying the solution, I tried the rebuild index option from the developer menu; however, I am unable to see any indexes listed there. I am getting the message "Indexes List Failed to Render".
Also I have tried:
Sitecore desktop -> Control Panel -> Indexing -> Indexing Manager, but the Sitecore dialog box does not pop up.
Desktop -> Control Panel -> Database -> Rebuild index, which didn't work.
An IIS reset.
Any help in this regard is highly appreciated. Thanks in advance!!
I would recommend you patch in a separate config file with your custom index configuration rather than changing the default Lucene index config. You may need to post your custom field configuration so we can figure out what's causing the error.
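As a rough illustration (not a drop-in file), a patch include for a custom field could look something like the sketch below. The file name, field name and attribute values are placeholders; copy the exact element structure and attributes from the Sitecore.ContentSearch.Lucene.DefaultIndexConfiguration.config that ships with your Sitecore version.

    <!-- App_Config/Include/zMyCustomSearchFields.config (illustrative only) -->
    <configuration xmlns:patch="http://www.sitecore.net/xmlconfig/">
      <sitecore>
        <contentSearch>
          <indexConfigurations>
            <defaultLuceneIndexConfiguration>
              <fieldMap>
                <fieldNames hint="raw:AddFieldByFieldName">
                  <!-- hypothetical custom field; mirror the attribute set of an existing entry -->
                  <field fieldName="mycustomfield" storageType="YES" indexType="TOKENIZED" vectorType="NO" boost="1f" type="System.String" />
                </fieldNames>
              </fieldMap>
            </defaultLuceneIndexConfiguration>
          </indexConfigurations>
        </contentSearch>
      </sitecore>
    </configuration>

Keeping the change in its own include file also means a mistake shows up clearly in /sitecore/admin/showconfig.aspx and can be backed out without touching the stock config files.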
Thanks for the help. I have now been able to figure out what was going wrong. I had not made my changes using a patch and had instead edited Sitecore.Content.Search.config directly rather than the Lucene config. Because of these changes I was getting a Sitecore configuration exception, and that caused the indexes list to disappear.
I had a similar issue and this worked for me.
While integrating Solr, I had disabled all the Lucene configs, and after that internal search wasn't working and no search indexes were visible in the developer tab. I re-enabled this config and it's all good now:
Sitecore.ContentSearch.Lucene.Indexes.Sharded.Master.config
I'm having quite an annoying problem, and I came up with a rather ugly hack to make it work.
I'm developing an HTA application that uses a CouchDB database (for internal company use). The problem is that there seems to be some very aggressive caching of the database queries, and it's been hard to come up with solutions.
So updated data in the database just won't show up in the browser, which still has the previous request's results in its cache, until the entire app is started anew.
Oh, and CouchDB (or its MochiWeb server) doesn't allow unknown GET variables, so the usual solution of appending some sort of timestamp just won't work.
I have found some sort of solution, but it's damn ugly. The options are:
Only open documents by their latest revision number (easy and nice, but it won't work for views).
Use Apache as a forward proxy listening on 200+ ports, and select one at random for each read query (that's the ugly one).
The HTA accepts AJAX calls to other ports (maybe even to other domains, strange behaviour), so it works nicely; I just have a 1/200 chance that new data won't come up, but that's still better than 1/1, and I can live with that.
So what I'm asking is: is there a better solution to this? Can I hack into the MochiWeb server to modify cache headers (and hope they're not going to be ignored)? Is there a special unknown "throwaway" key I could use in the URLs to append some random string? Or is there a way to tell the HTA not to cache anything (from within the app, since this is supposed to work on hundreds of computers)?
It's still ugly, but slightly less ugly than your current Apache setup: can't you use an Apache rewrite rule that lets you set an arbitrary no_cache attribute on the URL? Apache can throw it away so CouchDB won't see it.
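Something along these lines might do it (a sketch only, assuming mod_rewrite and mod_proxy are enabled, that CouchDB sits on 127.0.0.1:5984 behind a /couch/ path, and that the app always appends the made-up no_cache parameter last):

    # httpd.conf / vhost sketch
    RewriteEngine On
    # capture the query string minus the trailing no_cache=... parameter
    RewriteCond %{QUERY_STRING} ^(.*?)&?no_cache=[^&]*$
    RewriteRule ^/couch/(.*)$ http://127.0.0.1:5984/$1?%1 [P,L]
    # requests without no_cache are proxied untouched
    RewriteRule ^/couch/(.*)$ http://127.0.0.1:5984/$1 [P,L]

The browser then sees a unique URL on every request, while CouchDB only ever receives the query parameters it understands.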