How to maintain Lucene indexes in an Azure cloud app

I just started playing with the Azure Library for Lucene.NET (http://code.msdn.microsoft.com/AzureDirectory). Until now, I was using my own custom code for writing Lucene indexes to Azure blob storage: I would copy the blob to the local storage of the Azure web/worker role and read/write docs to the index there, with my own locking mechanism to make sure reads and writes to the blob didn't clash. I am hoping the Azure Library will take care of these issues for me.
However, while trying out the test app, I tweaked the code to use the compound-file option, and that created a new file every time I wrote to the index. Now, my question is: if I have to maintain the index - i.e. keep a snapshot of the index files and use it if the main index gets corrupted - how do I go about doing this? Should I keep a backup of all the .cfs files that are created, or is handling only the latest one fine? Are there API calls to clean up the blob so only the latest file is kept after each write to the index?
Thanks
Kapil

After I answered this, we ended up changing our search infrastructure and used Windows Azure Drive. We had a worker role which would mount a VHD stored in blob storage and host the Lucene.NET index on it. The code checked that the VHD was mounted and that the index directory existed. If the worker role fell over, the VHD would automatically dismount after 60 seconds, and a second worker role could pick it up.
We have since changed our infrastructure again and moved to Amazon with a Solr instance for search, but the VHD option worked well during development. It could have worked well in test and production too, but requirements meant we needed to move to EC2.

I am using AzureDirectory for full-text indexing on Azure, and I am getting some odd results too... but hopefully this answer will be of some use to you.
Firstly, the compound-file option: from what I am reading and figuring out, the compound file is a single large file with all the index data inside. The alternative to this is having lots of smaller files (configured using the SetMaxMergeDocs(int) function of IndexWriter) written to storage. The problem with this is that once you get to lots of files (I foolishly set this to about 5000) it takes an age to download the indexes (on the Azure server it takes about a minute; on my dev box... well, it's been running for 20 minutes now and still hasn't finished...).
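For reference, this is roughly how those two options are set when the writer is created. This is only a sketch: the AzureDirectory constructor arguments follow the library's sample code, and the Set* methods are the Lucene.NET 2.9-era API that the library targets, so adjust to whatever version you actually have.

using Lucene.Net.Analysis.Standard;
using Lucene.Net.Documents;
using Lucene.Net.Index;
using Lucene.Net.Store.Azure;   // AzureDirectory (namespace per the sample project)
using Microsoft.WindowsAzure;   // CloudStorageAccount (old StorageClient assembly)

// Assumption: dev storage and a catalog name taken from the AzureDirectory sample.
var account = CloudStorageAccount.DevelopmentStorageAccount;
var azureDirectory = new AzureDirectory(account, "TestCatalog");

var writer = new IndexWriter(azureDirectory, new StandardAnalyzer(), true);
writer.SetUseCompoundFile(true);   // one .cfs per segment instead of many small files
writer.SetMaxMergeDocs(10000);     // caps segment size; very low values create huge numbers of files

var doc = new Document();
doc.Add(new Field("id", "1", Field.Store.YES, Field.Index.NOT_ANALYZED));
writer.AddDocument(doc);
writer.Close();                    // commits the segment(s) and releases the write lock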
As for backing up indexes, I have not come up against this yet, but given we have about 5 million records currently, and that will grow, I am wondering about this too. If you are using a single compound file, maybe downloading the files to a worker role, zipping them and uploading them with today's date would work... if you have a smaller set of documents, you might get away with re-indexing the data if something goes wrong... but again, it depends on the number...
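If you do go down the zip-with-today's-date route, the mechanics are simple once the index files are sitting in the role's local storage. A sketch using plain .NET (the paths are made up for illustration, and it assumes nothing is writing to the index while you copy):

using System;
using System.IO;
using System.IO.Compression;   // ZipFile, available from .NET 4.5

// Hypothetical local-storage paths, for illustration only.
string localIndexPath = @"C:\Resources\LocalStorage\LuceneIndex";
string backupDir = @"C:\Resources\LocalStorage\Backups";
string backupZip = Path.Combine(backupDir,
    "index-backup-" + DateTime.UtcNow.ToString("yyyyMMdd-HHmmss") + ".zip");

Directory.CreateDirectory(backupDir);
ZipFile.CreateFromDirectory(localIndexPath, backupZip);
// The resulting zip can then be uploaded to a separate backup blob container
// with the storage client, keeping as many dated snapshots as you need.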

Related

A simple persistent data storage for a Node.JS app?

I'm planning to launch a simple Node.JS utility and push it to Heroku - a fire-and-forget solution that will probably sleep about 90% of the time. Unfortunately it seems that I require persistent data storage for my purposes (Heroku apps get rebooted daily and storing everything in RAM is unrealistic), and I don't know which way to look:
Most SQL hosting services are paid, free only for a limited time, or require constant refreshing (like freemysqlhosting).
Storing stuff in plain .txt format is seemingly hard to implement; besides, git always overwrites the contents of a tracked .txt file, and leaving it untracked disposes of it on Heroku and leads to an ENOENT "no such file" error. Yeah, I tried.
So, the question is: how do I implement a simple, built-in solution for storing data? Are there any relevant typical solutions? It's going to be equivalent to just one SQL table.
As you can see, you can answer this on many levels - maybe suggest a free deploy-and-forget SQL hosting (it obviously has to support external connections), maybe tell me how to keep a file tracked in git without actually replacing all of its content with every commit, maybe suggest some module to install. I hope this is not too broad.

When AEM is configured to use an S3 data store, will it make blue-green deployments faster?

Background
We know it's possible to set up a devops pipeline that deploys updates to AEM via a blue/green approach by using crx2oak to migrate the content from the old to the new environment. Why is out of scope for this question.
The problem with this approach is that the content copy operation can take a significant amount of time as the amount of content in the JCR grows. Other ideas to mitigate this are appreciated.
We also know that AEM can have an S3 datastore that off-loads the binary content into an S3 bucket, which would not be re-built during a blue/green deployment, as per:
https://helpx.adobe.com/experience-manager/6-3/sites/deploying/using/storage-elements-in-aem-6.html#OverviewofStorageinAEM6
What is unclear from Adobe's documentation is whether the same S3 bucket can be shared across AEM instances (i.e. blue/green instances). Maybe it's just my Google-fu that has failed...
Question(s)
When a new AEM instance is configured to use an S3 datastore that already has content in it from the old instance, and crx2oak is used to migrate content, will the new instance be able to access the existing content?
Are there any articles/blogs that describe what the potential time savings of this approach would be?
Yes, I could do an experiment, and may do so in the future to answer my own question. I'm looking for information from anyone who has already done this - I'm an engineer, so I will not re-invent the wheel if someone else has done so.
You can certainly share the same S3 bucket between instances - in fact, this is commonly used along with binary-less replication from author->publisher(s) and is a tried and true configuration.
It's even possible to share the same bucket between completely different environments (e.g. DEV/STAGE, or BLUE/GREEN in your case). The main "gotcha" to be aware of is with regard to DataStore Garbage Collection (DSGC) because it's very possible that there will be blobs which are referenced by only some of the instances sharing the bucket and so when purging unused blobs this needs to be taken into account.
This is all part of the design though, and there is a flag designed specifically for this purpose which tells DSGC to execute only the first phase (the "mark" phase) of GC and skip the second "sweep" phase, until all instances have marked which blobs they wish to keep/discard. Once all instances have done so, the sweep phase can be run to purge blobs not needed by any instance using the bucket.
For a more detailed explanation see the Oak docs:
https://jackrabbit.apache.org/oak/docs/plugins/blobstore.html#Shared_DataStore_Blob_Garbage_Collection_Since_1.2.0
I find it helps to understand that pretty much all of the datastore implementations are done such that blobs are stored according to their checksum, so the same file uploaded twice will only have one copy stored in the datastore, and there will be two segment store records referencing that same blob. In the same way, multiple AEM instances sharing the same bucket will be able to find a given blob regardless of which instance put it there in the first place.
You can see this in action easily with FileDataStore by finding a blob and sha256'ing it - e.g. (this example is on OS X; the checksum command on Linux/Windows will be slightly different):
$ shasum -a256 crx-quickstart/repository/datastore/0c/9e/40/0c9e405fc8d0f0405930cd0044611cfbf014938a1837ae0cfaa266d7732d1002
0c9e405fc8d0f0405930cd0044611cfbf014938a1837ae0cfaa266d7732d1002 crx-quickstart/repository/datastore/0c/9e/40/0c9e405fc8d0f0405930cd0044611cfbf014938a1837ae0cfaa266d7732d1002
There you can see that a) the filename is the checksum, and b) it's nested using the first 3 pairs of characters from that checksum, so you can locate the file just by knowing the hash. And if you store the same binary, even if the name or JCR metadata is different, the blob referenced will be the same literal file on disk.
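Programmatically, the same layout means you can compute where a blob will live from nothing but its checksum. A sketch in C# that mirrors the nesting shown above (the input file name is a placeholder, and it follows the SHA-256 naming visible in the example, which can differ between Jackrabbit/Oak versions):

using System;
using System.IO;
using System.Security.Cryptography;

string file = "some-asset.jpg";   // placeholder input file

byte[] hash;
using (var sha = SHA256.Create())
using (var stream = File.OpenRead(file))
    hash = sha.ComputeHash(stream);

// The blob's file name is its checksum, nested under the first three pairs of hex characters.
string hex = BitConverter.ToString(hash).Replace("-", "").ToLowerInvariant();
string datastorePath = Path.Combine("crx-quickstart", "repository", "datastore",
    hex.Substring(0, 2), hex.Substring(2, 2), hex.Substring(4, 2), hex);

Console.WriteLine(datastorePath);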
From memory, the S3 datastore uses prefixes rather than directory nesting because this performs better, but the principle is the same.
Finally, a couple of things to consider are:
1) S3 storage is relatively cheap (and practically unlimited) so there is an argument to be made that it's not as necessary to perform regular DSGC unless you're really trying to pinch pennies.
2) If you do run DSGC you need to think about how this will work with whatever backup strategy you're using for the AEM instances. For instance, if you roll back a segment store shortly after running DSGC you'll likely have to recover some of those purged blobs. You can use versioning and/or lifecycle rules to help with this, but it can add significant additional complexity and time to your restore process.
If you opt to simply skip DSGC and leave the blobs there indefinitely it's a good idea to make sure the access key or IAM roles AEM is using doesn't have the DeleteObject permission for the bucket, just to be sure a rogue GC process can't delete anything.
Hope this helps.
Edit
In all that I forgot to actually answer your question - yes, it will save some time in cloning in most cases. You'll still need to sync the segment store (obviously) and there are various approaches for this. crx2oak is certainly one - you'll see in the documentation there are specific options for using it with S3, where you supply a configuration file (basically a serialised .config file like you'd use with Felix/OSGi).
You can also use something like rsync to simply copy the TAR files over (with at least the target AEM stopped; Oak is generally atomic, so a hot copy from the source can work in theory, but YMMV).
Finally, you could obviously use Mongo and cluster the node store that way, but all the usual cost/complexity/performance issues with doing so apply.
Another interesting development on the horizon for blue/green-type deployments is the CompositeNodeStore - there is a good talk from the 2017 adaptTo() conference that covers this:
https://adapt.to/2017/en/schedule/zero-downtime-deployments-for-the-sling-based-apps-using-docker.html
An external datastore will help a lot, as most of the space is usually taken up by binary assets; the pure content typed in by real people is much smaller.
On my current project (quite small, but the proportions should be typical):
Repository 4.8 GB total (4.1 GB Segment Store, 780 MB Index)
File DataStore 222 GB total
If you want to do it, I have the following remarks:
There are different datastores available. For testing I would start with the File DataStore.
In my view, the S3 DataStore only makes sense if you are hosting on Amazon AWS anyway. Adobe Managed Services does this, so S3 makes sense for them - but even there, only if you have more than 500 GB of assets.
If you use the green/blue approach, be careful with DataStore garbage collection (just run it manually). The shared datastore is meant for several publishers that have the same content. For example, you could have the following situation: your editors delete some assets, you run the DataStore GC, and finally you roll back your environment. That means the assets are still in the content repository, but the binaries have been cleaned out of the DataStore.
In order to use a shared file datastore, you need to do the following:
Unpack the Quickstart: java -jar AEM_6.3_Quickstart.jar -unpack
Create a directory for the file datastore (anywhere outside of the crx-quickstart folder)
Create a directory named install inside the extracted crx-quickstart folder
Create a file called org.apache.jackrabbit.oak.plugins.blob.datastore.FileDataStore.cfg inside this install folder
This file contains just one line: path=<path to file datastore> (see https://jackrabbit.apache.org/oak/docs/osgi_config.html)
Place a reference.key file inside the datastore directory. The first time, it will be created automatically, but if you always use the same key, the same hash values are used across all datastores in all your environments. This is also a prerequisite for a feature called "binary-less replication" (so a binary is only replicated the first time between author and publisher).
Kind regards,
Alex

What's Elasticsearch and is it safe to delete logstash?

I have an internal Apache server for testing purposes, not client facing.
I wanted to upgrade the server to Apache 2.4, but there is no space left, so I was trying to delete some files on the server.
After checking file sizes, I found that the folder /var/lib/elasticsearch takes 80 GB of space. For example, /var/lib/elasticsearch/elasticsearch/nodes/0/indices/logstash-2015.12.08 takes 60 GB already. I'm not sure what Elasticsearch is. Is it safe if I delete this logstash data? Thanks!
Elasticsearch is a search engine, like a NoSQL database, and it stores its data in indices. What you are seeing is the data of one index.
Probably someone was using the index around 2015, when the index was timestamped.
I would just delete it.
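If the Elasticsearch node is still running, it is safer to delete the index through its REST API (DELETE /<index-name>) than to remove the directory from disk underneath the running process. A minimal sketch using .NET's HttpClient, assuming the default localhost:9200 endpoint and the index name from the path above:

using System;
using System.Net.Http;

var client = new HttpClient();
// Elasticsearch answers with {"acknowledged":true} once the index is gone.
var response = client.DeleteAsync("http://localhost:9200/logstash-2015.12.08").Result;
Console.WriteLine(response.StatusCode);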
I'm afraid that only you can answer that question. One use for Logstash + Elasticsearch is to help make sense of system logs. That combination isn't normally set up by default, so I presume someone set it up at some time for some reason, and it has obviously done some logging. Only you can know if it is still being used, or if it is safe to delete.
As other answers pointed out, Elasticsearch is a distributed search engine, and I believe an earlier user was pushing application or system logs to this Elasticsearch instance using Logstash. If you can find the source application, check whether the log files are still there; if yes, then you can go ahead and delete your index. I highly doubt anyone still needs the logs from 2015, but it is really your call to see what your application's archiving requirements are and then take the necessary action.

Does Windows Search (Win 2008R2) / Indexing Service (Win 2003) have any impact on the Directory.GetFiles(searchPattern, SearchOption.AllDirectories) method?

We are having a strange issue with the Directory.GetFiles method when searching for a Word document on a UNC folder share (NTFS disk) on a Win2008R2 VM server. The share contains over 10K files in the parent folder and 75K files in a subdirectory.
It was all working fine on Win2003 Server. After migrating to Win2008R2 Server, the WinForms application freezes on this method, taking almost 13 minutes to open a single file from a client machine connected to the file share via a VPN network that has a download bandwidth of 1 Mbps (not throughput).
After some search and research, we realized the Windows Search service was not turned on, so the service was started and the share was indexed. We saw a performance improvement: the time taken to open a file using the GetFiles method came down from 13 minutes to 3 minutes.
But this is not consistent. During the day, when bandwidth is much lower than 1 Mbps (say 0.5 Mbps), the time to open the document is again between 8 and 12 minutes.
At this point we are not sure which one is causing the problem.
Solutions that are not possible for us:
1) Creating multiple directories and organizing files.
2) Increasing bandwidth.
3) Using direct filepath instead of Directory.GetFiles/EnumerateFiles
Any help is highly appreciated. Thanks!
Oh yeah, good stuff. You will notice that even if the service is off, running it twice (within a short time of each other) will be much faster the second time. Actually, here is a good one for you: run it twice, letting the first one run for a minute before starting the second. The second one will catch up to the first almost immediately, and then they will both be at the same spot for the rest of the time (if what I said makes sense).
Here is what is happening: GetFiles() and GetDirectories() do use the indexing service. Also, if your indexing service is off, this just means it will not automatically gather data about the files; but when you access the files (Windows Explorer / GetFiles) it will index them, so that if you ask for the same thing within a set amount of time, it won't have to query the hard drive's table of contents again.
As for it running faster and slower when the indexing service is on, this is because Windows knows it cannot keep track of every file on the computer. Therefore, after a set amount of time, the file is considered stale, and when you ask about the file the indexing service will do an IO call to get the info and update the index database.
This wiki talks about it, just a little. Not very thorough.
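Separately from the indexing service, it is worth knowing that Directory.GetFiles materialises the complete array of matches (all ~85K entries, over the VPN) before it returns, whereas Directory.EnumerateFiles streams results lazily, so if you only need the first matching document the walk can stop as soon as it finds one. A sketch (the share path and pattern are placeholders):

using System;
using System.IO;
using System.Linq;

// Hypothetical UNC path and search pattern, for illustration only.
string share = @"\\fileserver\documents";
string pattern = "Contract-1234*.docx";

// EnumerateFiles yields matches as it walks the tree, so FirstOrDefault
// stops the enumeration as soon as one match turns up.
string match = Directory.EnumerateFiles(share, pattern, SearchOption.AllDirectories)
                        .FirstOrDefault();

Console.WriteLine(match ?? "Not found");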

Is there an automated way to push all my JavaScript/CSS/images to S3 every time I do a website push?

So I am in the process of moving all the thumbnails of my major sites to S3, and now I am thinking about how I can consistently put all the CSS/JS/images that power the actual sites there as well. It's easy enough to upload everything the first time, but I am trying to think of a way to automate the process every time I push out to production.
Does anyone have any clever ways of doing this?
I used to use s3sync to compare and update the assets just before uploading the site files, using a bash script to iterate through my files.
This works well, but when the number of files to compare (let's say thousands) gets big, this process starts being really slow. If you have a small architecture (in terms of assets) this will do the trick.
To make this better I would recommend Capistrano or some other deployment assistant... this way you can run it all at once:
upload assets
deploy your files
On the other hand, you could take a look at CloudFront (Amazon's CDN) and set it up using a custom origin... this way you don't need to worry about uploading the files to S3, since they will be pulled automatically on demand. The downside of this approach is caching: if you need to update a file and keep the same name, you have to expire the object... you can do this in CloudFront, but you will need a script to do the task.
Depending on the traffic (and other factors, of course) one path or the other will fit best.
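If you prefer to script the upload step yourself (inside a Capistrano task or a plain build step), the AWS SDK for .NET's TransferUtility can push a whole assets directory in one call. This is a sketch using different tooling from s3sync; the bucket name and path are placeholders, and credentials are assumed to come from the usual SDK configuration:

using System.IO;
using Amazon.S3;
using Amazon.S3.Transfer;

// Placeholders for illustration; credentials/region come from app.config or the environment.
var s3 = new AmazonS3Client();
var transfer = new TransferUtility(s3);

// Mirror the local asset folder up to the bucket before (or as part of) the deploy step.
transfer.UploadDirectory(@"C:\mysite\public\assets", "my-site-assets",
                         "*", SearchOption.AllDirectories);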