Is it possible to know when Solr finished indexing my data?
I work with SolrCloud 4.9.0 and ZooKeeper as the configuration file manager.
I have the data.import file, but it only records when the indexing STARTED, not when it ended.
You can get the dataimporthandler status using:
<MY_SERVER>/solr/dataimport?command=status
Reading the status, you can tell whether the import is still running. A similar procedure (with a different URL) is suggested in the "Solr in Action" book for checking whether a backup is still running.
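For example, a minimal polling sketch in Java (the host, core path, and 5-second interval are placeholders; the status response reports "busy" while an import is running and "idle" once it has finished):

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.net.HttpURLConnection;
    import java.net.URL;

    public class DihStatusPoller {
        public static void main(String[] args) throws Exception {
            // Placeholder URL: point this at your own Solr host and core/collection.
            URL status = new URL("http://localhost:8983/solr/mycollection/dataimport?command=status&wt=json");
            while (true) {
                HttpURLConnection conn = (HttpURLConnection) status.openConnection();
                StringBuilder body = new StringBuilder();
                try (BufferedReader in = new BufferedReader(new InputStreamReader(conn.getInputStream()))) {
                    String line;
                    while ((line = in.readLine()) != null) {
                        body.append(line);
                    }
                }
                // DIH reports "busy" while an import runs and "idle" when it is done.
                if (body.toString().contains("\"status\":\"idle\"")) {
                    System.out.println("Import finished: " + body);
                    break;
                }
                Thread.sleep(5000); // poll every 5 seconds
            }
        }
    }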
Another option would involve the use of listeners as advised here.
I also use the /dataimport?command=status way to check if the job is done or not, and while it works, sometimes I get the impression it is a bit flaky.
There are listeners you can use (see here). I would really like to use those, but of course you need to write Java code, deploy your jar in Solr, etc., so it is a bit of a PITA.
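For reference, a rough sketch of what such a listener looks like (the package and class name are made up; the compiled class has to be on Solr's classpath, and it is referenced from the onImportEnd attribute of the <document> element in data-config.xml):

    package com.example;

    import org.apache.solr.handler.dataimport.Context;
    import org.apache.solr.handler.dataimport.EventListener;

    // Hypothetical listener, wired up with onImportEnd="com.example.ImportEndListener"
    // on the <document> element of data-config.xml.
    public class ImportEndListener implements EventListener {
        @Override
        public void onEvent(Context ctx) {
            // Called when the import finishes: log it, write a marker file,
            // notify another system, etc.
            System.out.println("DataImportHandler finished at " + System.currentTimeMillis());
        }
    }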
I have an internal Apache server for testing purposes, not client facing.
I wanted to upgrade the server to Apache 2.4, but there is no space left, so I was trying to delete some files on the server.
After checking file sizes, I found that the folder /var/lib/elasticsearch takes 80 GB of space. For example, /var/lib/elasticsearch/elasticsearch/nodes/0/indices/logstash-2015.12.08 already takes 60 GB. I'm not sure what Elasticsearch is. Is it safe if I delete this logstash index? Thanks!
Elasticsearch is a search engine, like a NoSQL database, and it stores its data in indices. What you are seeing is the data of one index.
Probably someone was using it around 2015, judging by the timestamp in the index name.
I would just delete it.
I'm afraid that only you can answer that question. One use for Logstash + Elasticsearch is to help make sense of system logs. That combination isn't normally set up by default, so I presume someone set it up at some point for some reason, and it has obviously done some logging. Only you can know whether it is still being used, or whether it is safe to delete.
As other answers pointed out, Elasticsearch is a distributed search engine, and I believe an earlier user was pushing application or system logs to this Elasticsearch instance using Logstash. If you can find the source application, check whether the original log files are still there; if so, you can go ahead and delete your index. I highly doubt anyone still needs logs from 2015, but it is really your call to check what your application's archiving requirements are and then take the necessary action.
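If you do decide to remove it, it is safer to delete the index through Elasticsearch's REST API than to remove the directory from disk while the node is running. A minimal sketch in Java (the host and port assume a default local install; the index name is the one from the question):

    import java.net.HttpURLConnection;
    import java.net.URL;

    public class DeleteOldIndex {
        public static void main(String[] args) throws Exception {
            // Default Elasticsearch HTTP port is 9200.
            URL url = new URL("http://localhost:9200/logstash-2015.12.08");
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            conn.setRequestMethod("DELETE");
            System.out.println("Delete returned HTTP " + conn.getResponseCode());
            conn.disconnect();
        }
    }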
We are trying to move to a cluster with Apache Camel. So far we have had it on one node and it worked well.
One node:
I have the readLock strategy set to 'changed', which keeps track of file changes via a camelLock file so that a file is only picked up for processing when it has finished downloading. But the 'changed' readLock strategy is discouraged in clustering; according to the Camel documentation, 'idempotent' is recommended. This is what happens when I test with a 5 GB file.
Two nodes:
I have the readLock strategy set to 'idempotent', which distributes files to one of the nodes, but Camel starts processing the file even before it has finished downloading.
Is there a way to stop Camel from processing a file before it has finished downloading when the readLock strategy is 'idempotent'?
Even though both "readLock=changed" and "readLock=idempotent" cause the file-consumer to wait, they really address quite different use-cases: while "readLock=changed" guards against the file being incomplete (i.e. still being written by some producer/sender), "readLock=idempotent" guards against a file being read by two consumer routes. It's a bit confusing that they're addressed by the same option.
First, to address the "changed" scenario: can the sender be changed so that it writes the file into one directory and then, when it is done writing, copies it into the directory monitored by your file-consumer? If this is under your control, it is a good way of letting the OS handle things instead of trying to deal with them yourself. (This does not address the issue of multiple readers.) Otherwise, I suggest you revert to readLock=changed.
Next, on multiple readers: one workaround is to have this route run on only one node of your cluster. Sometimes this might defeat the purpose of clustering, but it is quite possible that you're starting up additional nodes to help with other routes and you're fine with this particular route running on just one node. It's a bit of a hack, because all nodes are no longer equal, but it is still an option to consider. Simplest would be to start one node with some environment property that flags it as the node that will handle file reading, or some similar approach.
If you do want the route on multiple nodes, you can start by using the option "idempotent=true" but this is not good enough on its own. The option uses a repository, where it records what files have been read before, and the default repository is in-memory (i.e. each node has its own). So, the default implementation is helpful if the same file is actually being received more than once, and you wish to skip it. However, if you want it to work across nodes, you have to use a different repository.
One central repository could be a database. In that case you can use Camel's JDBC- or JPA-based repositories. Or you could use something like Hazelcast. See here for your options: http://camel.apache.org/idempotent-consumer.html
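To make that concrete, here is a rough Java DSL sketch of what such a route could look like. The directory and the "fileRepo" bean name are assumptions; the bean itself (for example a JDBC- or Hazelcast-backed IdempotentRepository) has to be registered in the Camel registry or Spring context so that all nodes share the same "already picked up" list:

    import org.apache.camel.builder.RouteBuilder;

    public class SharedFileConsumerRoute extends RouteBuilder {
        @Override
        public void configure() {
            from("file:/data/inbox"
                    + "?readLock=idempotent"             // lock via the shared idempotent repository
                    + "&idempotent=true"
                    + "&idempotentRepository=#fileRepo") // cluster-wide repository bound in the registry
                .log("Node picked up ${file:name}")
                .to("direct:process");                   // hand off to your processing route
        }
    }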
You can use readLock=idempotent-changed.
idempotent-changed uses an idempotentRepository and changed as the combined read lock. This allows you to use read locks that support clustering, provided the idempotent repository implementation supports it.
You can read more about these idempotent-changed options here: https://camel.apache.org/components/3.13.x/file-component.html
We also used readLock=changed in Docker clustered mode and it worked perfectly, because we combined it with readLockMinAge set to a suitable interval.
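For comparison, that combination looks roughly like this in the Java DSL (the directory and intervals are placeholders); readLockMinAge makes the consumer skip files younger than the given age, which gives large uploads time to finish:

    import org.apache.camel.builder.RouteBuilder;

    public class ChangedWithMinAgeRoute extends RouteBuilder {
        @Override
        public void configure() {
            from("file:/data/inbox"
                    + "?readLock=changed"             // wait until size/timestamp stops changing
                    + "&readLockCheckInterval=5000"   // re-check every 5 seconds
                    + "&readLockMinAge=60000")        // ignore files younger than 1 minute
                .log("Picked up ${file:name}")
                .to("direct:process");                // hand off to your processing route
        }
    }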
What is the data-config.xml file, when shall I use it, and how is it configured? Can anyone please explain in detail?
The data-config.xml file is an example configuration file for how to use the DataImportHandler in Solr. It's one way of getting data into Solr, allowing one of the servers to connect through JDBC (or through a few other plugins) to a database server or a set of files and import them into Solr.
DIH has a few issues (for example the non-distributed way it works), so it's usually suggested to write the indexing code yourself (and POST it to Solr from a suitable client, such as SolrJ, Solarium, SolrClient, MySolr, etc.)
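As an illustration of the "index it yourself" approach, a minimal SolrJ sketch could look like the following (URL, collection, and field names are placeholders and must match your own schema; this assumes SolrJ 6 or newer, where HttpSolrClient.Builder is available):

    import org.apache.solr.client.solrj.SolrClient;
    import org.apache.solr.client.solrj.impl.HttpSolrClient;
    import org.apache.solr.common.SolrInputDocument;

    public class SimpleIndexer {
        public static void main(String[] args) throws Exception {
            // Placeholder URL and collection: point this at your own Solr instance.
            try (SolrClient solr = new HttpSolrClient.Builder("http://localhost:8983/solr/mycollection").build()) {
                SolrInputDocument doc = new SolrInputDocument();
                doc.addField("id", "1");                   // fields must exist in your schema
                doc.addField("title", "Hello from SolrJ");
                solr.add(doc);
                solr.commit();                             // make the document searchable
            }
        }
    }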
It has been mentioned that the DIH functionality really should be moved into a separate application, but that hasn't happened yet as far as I know.
I'd welcome any help regarding a simple issue: I have a clustered environment and I enabled Lucene replication in the properties (lucene.replicate.write=true). Now, all the tutorials are instructing me to reindex Lucene.
Should I run it on one node? On both? Simultaneously or sequentially?
This question has been asked in Liferay Forum as well: https://www.liferay.com/community/forums/-/message_boards/view_message/69175435.
Thank you!
Basically, what I did at first was the following:
cluster.link.enabled=true
lucene.replicate.write=true
and the result was that replication was NOT WORKING.
What I did next was to set this issue aside and continue with clustering the rest of the portal, which in the end helped Lucene as well. My steps were to:
deploy cluster activation keys
deploy ehcache-cluster-web.war
portal-ext.properties:
cluster.link.enabled=true
cluster.link.autodetect.address=<COMMONLY_ACCESSIBLE_IP_AND_PORT>
lucene.commit.batch.size=1
lucene.commit.time.interval=5000
lucene.replicate.write=true
ehcache.cluster.link.replication.enabled=true
cluster.link.channel.properties.control=<PATH_TO_XML>
cluster.link.channel.properties.transport.0=<PATH_TO_XML>
portal.instance.protocol=http
portal.instance.http.port=8080
setenv.sh
-Djava.net.preferIPv4Stack=true
-Djgroups.bind_addr=<IP_OF_THE_NODE>
edit the clusterlink_control and clusterlink_transport files according to the Liferay tutorials
with the servers shut down, delete the contents of data/lucene, then run reindexing from the Control Panel on one node
In the end, Lucene replication IS WORKING. What I think could be significant is the following. The portal.properties explanation of the lucene.commit.* keys is kind of hard to comprehend; by trial and error I found out that these two keys are in an AND relation. Also, I found out about the portal.instance.* keys, which are used for multiple purposes in clustering and can matter if you have load balancers and/or Apache instances between the nodes and autodetection fails.
There are multiple ways to configure search clustering in Liferay. If you use the lucene.replicate.write=true way, you're looking at several reindexing runs: On every restart of a server you must reindex that server's documents, as it might have missed indexing requests when it was down.
So, short answer: Don't worry, reindex both. Sooner or later you'll do it anyways, no matter if you need only one now.
I have 5000 files in a folder, and new files keep getting loaded into the same folder on a daily basis. I need to pick up only the latest file each day among all the files.
Is it possible to achieve this scenario in Mule out of the box?
I tried keeping the File component inside a Poll component (to make use of watermark), but it is not working.
Is there any way we can achieve this? If not, please suggest the best approach (any relevant links).
Mule Studio: 5.3, RunTime 3.7.2.
Thanks in advance
Short answer: there isn't really any extremely quick out-of-the-box solution, but there are other ways. I'm not saying this is the right or only way of solving it, but I've previously implemented a similar scenario in this way:
A normal File inbound endpoint with a database table as a file log. Each time a new file is processed, a component checks whether its name appears in the table. Via a choice router or a filter, I only continue if it isn't in there already, and after processing I add the filename to the table.
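The core of that check, independent of how you wire it into the Mule flow, is just a lookup plus an insert; a small sketch (the table and column names are made up):

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;

    public class FileLog {

        // Returns true if the file name has not been processed before.
        public static boolean isNewFile(Connection conn, String fileName) throws Exception {
            try (PreparedStatement ps = conn.prepareStatement(
                    "SELECT 1 FROM processed_files WHERE file_name = ?")) {
                ps.setString(1, fileName);
                try (ResultSet rs = ps.executeQuery()) {
                    return !rs.next();
                }
            }
        }

        // Record the file name once it has been processed successfully.
        public static void markProcessed(Connection conn, String fileName) throws Exception {
            try (PreparedStatement ps = conn.prepareStatement(
                    "INSERT INTO processed_files (file_name) VALUES (?)")) {
                ps.setString(1, fileName);
                ps.executeUpdate();
            }
        }
    }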
This is quite a "heavy" solution, though. A simpler approach would be to use an idempotent filter with an object store, for example a Redis server: https://github.com/mulesoft/redis-connector/blob/master/src/test/resources/redis-objectstore-tests-config.xml
It is actually very simple if your incoming file name contains a timestamp: you can configure the file inbound connector by setting file:filename-regex-filter pattern="myfilename_#[function:timestamp].csv". I hope this helps.
Maybe you can use a Quartz scheduler (specify the time in a cron expression), followed by a Groovy script in which you start the file connector. Keep the file connector in another flow.