Katta in a production environment - lucene

According to its website, Katta is a scalable, failure-tolerant, distributed, indexed data store.
I would like to know if it is ready to be deployed into a production environment. Is anyone already using it who has advice? Any pitfalls? Recommendations? Testimonials? Please share.
Any answer would be greatly appreciated.

We have tried using Katta and, for what it's worth, found it very stable and relatively easy to manage (compared to managing plain vanilla Lucene).
The only pitfall I can think of is the lack of real-time updates: when we tested it (about 9-10 months back), an update meant rebuilding the index with a separate process (a Hadoop job or what have you...) and swapping the new index in for the live one. This was a deal-breaker for us.
If you are looking into distributed Lucene, you should really try out ElasticSearch or Solandra.
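To illustrate the real-time-updates point: with ElasticSearch a document becomes searchable shortly after it is indexed, without rebuilding and swapping a whole index. A minimal sketch, assuming a local cluster and the elasticsearch-py 8.x client; the index name and document fields are made up for illustration:

```python
from elasticsearch import Elasticsearch

# Assumes a single-node cluster listening on localhost without auth.
es = Elasticsearch("http://localhost:9200")

# Index one document; it becomes searchable within about a second by default.
es.index(
    index="articles",
    id="1",
    document={"title": "Katta vs ElasticSearch", "body": "near real time updates"},
)

# Force a refresh so the document is visible to search immediately (for the demo).
es.indices.refresh(index="articles")

resp = es.search(index="articles", query={"match": {"body": "real time"}})
print(resp["hits"]["total"])
```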

Related

MLflow: Why can't backend-store-uri be an s3 location?

I'm new to MLflow and I can't figure out why the artifact store can't be the same as the backend store.
The only reason I can think of is being able to query the experiments with SQL syntax... but since we can interact with the runs using the mlflow ui, I just don't understand why all artifacts and parameters can't go to the same location (which is what happens when using local storage).
Can anyone shed some light on this?
MLflow's artifacts are typically ML models, i.e. relatively large binary files. On the other hand, run data are typically a couple of floats.
In the end it is not a question of what is possible or not (many things are possible if you put enough effort into them), but rather of following good practices:
storing large binary artifacts in an SQL database is possible, but it is bound to degrade the performance of the database sooner or later, and this in turn will degrade your user experience;
storing a couple of floats in an SQL database for quick retrieval and display in a front-end or via the command line is a robust, industry-proven classic.
It remains true that MLflow's documentation of this architecture design rationale could be improved (as of 2020).
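As a concrete illustration of that split, here is a minimal sketch using the standard mlflow Python API. The SQLite file, the experiment name, the S3 bucket and the file name are all assumptions, and it presumes S3 credentials are already configured in the environment:

```python
import mlflow

# Run metadata (params, metrics) goes to a small SQL database...
mlflow.set_tracking_uri("sqlite:///mlflow.db")  # assumed local SQLite file

# ...while large binary artifacts go to object storage.
exp_id = mlflow.create_experiment(
    "demo-split-stores",                               # hypothetical experiment name
    artifact_location="s3://my-bucket/mlflow-artifacts",  # hypothetical bucket
)

with mlflow.start_run(experiment_id=exp_id):
    # A couple of floats -> backend store (cheap to query and display).
    mlflow.log_param("learning_rate", 0.01)
    mlflow.log_metric("val_loss", 0.42)

    # A large binary blob -> artifact store (kept out of the SQL database).
    with open("model_weights.bin", "wb") as f:
        f.write(b"\x00" * 1024)
    mlflow.log_artifact("model_weights.bin")
```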

How to run a single MediaWiki on many servers?

I couldn't find a tutorial on this; all I found was info on how to run multiple wikis from one server. The issues are things like high-speed shared storage for images between the servers and good performance with some sort of centralized caching.
Does anybody know of any guides?
It's really hard to give advice on such vague requirements, but I'll do my best.
Image storage: what storage capacity and load do you need? For a Wikipedia-sized cluster the only solution known to work is OpenStack Swift; however, it's a huge PITA to use, so NFS is probably the sane choice.
Shared caching: memcached (see the sketch below for the general idea).
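MediaWiki itself is configured in PHP (LocalSettings.php), but the centralized-caching idea is simply that every app server talks to the same memcached pool, so a page rendered on one server can be served from cache by any other. A rough, language-agnostic sketch of that idea in Python using pymemcache, with made-up host names:

```python
from pymemcache.client.hash import HashClient

# One shared memcached pool used by every web server (hypothetical hosts).
cache = HashClient([
    ("cache1.example.internal", 11211),
    ("cache2.example.internal", 11211),
])

def get_rendered_page(title):
    key = f"page:{title}"
    cached = cache.get(key)
    if cached is not None:
        return cached.decode()          # cache hit: any app server can serve it
    html = f"<h1>{title}</h1>"          # stand-in for the expensive parse/render step
    cache.set(key, html.encode(), expire=300)
    return html

print(get_rendered_page("Main_Page"))
```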

Scaling CakePHP Version 2.3.0

I'm beginning a new project using CakePHP. I like the "auto-magic" features and I think it's a good fit for the project. I'm wondering about the potential to scale CakePHP to several million IP hits a day, hundreds of thousands of database writes and reads a day, and roughly 50,000 to 500,000 users, often with 3,000 using the site concurrently. I'm making heavy use of stored procedures to offset this, and I'm using several servers behind a load balancer.
I'm wondering about the computational cost of some of the auto-magic and how well Cake copes with session-heavy requests that make many DB hits. Has anyone had success running Cake on a single server-array setup with this level of traffic? I'm not using the cloud or a distributed database (yet). I'm really worried about potential bottlenecks with this framework and would like advice from anyone who has worked with Cake in production. I've researched, but I would love a second opinion. Thank you for your time.
This is not a problem, but the optimization is up to you.
There are different cache methods you can implement: Memcache, Redis, full-page caching... All of them are already supported by Cake. What you cache, and where, is up to you.
For searching, you could try Elasticsearch to speed things up.
There are dispatcher filters that run before dispatch and can bypass controller instantiation entirely (you might want to do that in special cases; check the asset filter for an example).
Use nginx, not Apache.
Also, I would not start by over-optimizing and over-thinking this before any code is written. Start well, think about caching, but analyse and fix bottlenecks only when you actually run into them. Otherwise you'll waste a lot of time on over-optimization before you've even written anything that works.
Cake itself is very fast. Just to prove how bogus those fancy benchmarks some frameworks publish are, we did one using a dispatcher filter to "optimize" it and even beat Yii, which seems pretty eager to show off how fast it is. But benchmarks are pointless, especially in a huge project where so much human error can be introduced.

Ideas for a distributed processing project?

I am looking for a project idea in distributed processing on Unix-based systems. I wish to use only the C programming language. I have to finish the project in 4 months, and it's part of my coursework. Can someone help me with an idea?
Cryptography problems
Distributed Ray Tracer
Chess AI (really, AI for any game)
Large Prime Number Search
Web crawler or other search mechanism
Generic Problem Solver (push out problem definition on the fly, followed by problem data).
Note on the last one:
An example would be if you have a gaming website with lots of board games that you are adding to all the time. You don't want to have to install new clients on all your servers every time you write a new AI for a board game, so you have a program to which you can push new AIs, and after that you just send the game data and the pushed AI is used to solve the problem. This works best for problems that can be broken into smaller chunks; a minimal sketch of that chunking idea follows.
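As a rough illustration of the chunking idea (the question asks for C, where the structure would be the same with fork(2), sockets or MPI), here is a small Python sketch that splits a prime-counting job into ranges and farms them out to worker processes:

```python
from multiprocessing import Pool

def is_prime(n):
    """Naive trial division; deliberately slow so the parallelism matters."""
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def count_primes(chunk):
    """Count primes in one half-open range [lo, hi)."""
    lo, hi = chunk
    return sum(1 for n in range(lo, hi) if is_prime(n))

if __name__ == "__main__":
    limit, workers = 200_000, 4
    step = limit // workers
    # Split the overall problem into independent chunks, one per worker.
    chunks = [(i, min(i + step, limit)) for i in range(0, limit, step)]
    with Pool(workers) as pool:
        print(sum(pool.map(count_primes, chunks)))
```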
It is hard to answer without knowing anything about the performance requirements, the scale of the project, what you are trying to accomplish, etc. For example, is it one task or multiple tasks? Is the project completely open-ended?
Four months is pretty short, but maybe some kind of physics or math problem would work. Sorting or some kind of database work might be dull but beneficial.
Check out MapReduce for ideas (a toy example follows below)! I was really motivated by that work, personally.
We use distributed processing here at work, but it's such a broad field...
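For reference, the MapReduce idea in miniature: map each input chunk to a partial result, then reduce the partial results into one answer. A toy word-count sketch in Python (the input chunks are made up):

```python
from collections import Counter
from functools import reduce
from multiprocessing import Pool

def map_chunk(text):
    """Map step: count words within a single chunk of input."""
    return Counter(text.split())

def reduce_counts(a, b):
    """Reduce step: merge two partial counts."""
    a.update(b)
    return a

if __name__ == "__main__":
    chunks = ["the quick brown fox", "the lazy dog", "the quick dog"]
    with Pool() as pool:
        partials = pool.map(map_chunk, chunks)       # map phase (in parallel)
    total = reduce(reduce_counts, partials, Counter())  # reduce phase
    print(total.most_common(3))
```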
Why not write a distributed compiler? You could then present an interface for people to compile things on the fly, and the work would be passed to your distributed compile network. Java is probably well suited, and you'll get to do fun things, like being very mindful of security and so on.
The BOINC project is always looking for help and is very interesting:
http://boinc.berkeley.edu/
If you want to leave your mark and change the way we search the web, look into B-Trees. B-Trees and their variants are the workhorse of the internet: Google uses them extensively to index the web, database indexes are B-Tree variants, every LAMP system uses a database with indexes, and they are also used extensively in distributed VLDBs (Very Large DataBases).
Perhaps you can improve existing distributed databases (Cassandra and HBase). These are lofty goals, but for me this would leave a lasting mark on the way Web data is processed, indexed and stored. Write a distributed, fault-tolerant, redundant network B+Tree or B*Tree.
Read Drozdek's book Data Structures and Algorithms in C++; it's a good survey of B-Trees.
Read about skip trees:
http://www.cs.huji.ac.il/~ittaia/papers/AAY-OPODIS05.pdf
Read about "Efficient B-tree Based Indexing for Cloud Data Processing":
http://www.comp.nus.edu.sg/~ooibc/vldb10-cgindex.pdf
Google search "Network B+Tree"
https://www.google.com/search?rlz=1C1CHKZ_enUS431US431&sourceid=chrome&ie=UTF-8&q=Network+B%2BTree
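For orientation, the core of a B-Tree is just "sorted keys in a node, with child pointers between them". A minimal, non-distributed search sketch in Python (the tree here is hand-built purely for illustration; a real implementation would also need insertion with node splitting):

```python
from bisect import bisect_left

class BTreeNode:
    def __init__(self, keys, children=None):
        self.keys = keys          # sorted list of keys in this node
        self.children = children  # None for a leaf node

def btree_search(node, key):
    """Return True if key is present in the subtree rooted at node."""
    i = bisect_left(node.keys, key)
    if i < len(node.keys) and node.keys[i] == key:
        return True
    if node.children is None:     # leaf: nowhere else to look
        return False
    return btree_search(node.children[i], key)  # descend into the right child

# Tiny hand-built tree: a root key with two leaf children.
root = BTreeNode([10], [BTreeNode([2, 5]), BTreeNode([12, 17])])
print(btree_search(root, 5), btree_search(root, 11))  # True False
```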

How to configure a Firebird Database to run in memory

I'm running software called Fishbowl Inventory, and it runs on a Firebird database (Windows Server 2003). At this time the Fishbowl software runs extremely slowly when more than one user accesses it. I'm thinking I may be able to speed up the application by forcing the database to run "in memory". However, I cannot find documentation on how to do this. Any help would be greatly appreciated.
Thank you in advance.
Robert
Firebird does not have memory tables - they may or may not be added in future versions (>3), but certainly not in the upcoming 2.5. There can be any number of other reasons why your software is slow with multiple users; however, Firebird itself has pretty good concurrency, so make sure you find the actual bottleneck first.
+1 to Holger. Find the bottleneck first.
Sinática Monitor may help you.
In-memory tables are nice either for OLAP (when the data is not changing) or for temporary internal data storage.
In both cases, data loss is not a danger.
It's a pity that FB has no in-memory mode. I'm thinking about using SQLite as a result.
As for caching, I think a simple parallel thread that reads all the blocks of the database file would effectively make it in-memory - in the OS cache, if the OS has enough memory (see the sketch below).
But I also think that the OS has already cached as much of the DB file as it could, and aggressively forcing more of it into the cache could make overall performance even worse.
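A minimal sketch of that cache-warming idea: sequentially read the database file in blocks so the OS page cache ends up holding it. The path and block size below are assumptions, and as noted above it may not help if the OS has already cached the hot pages:

```python
# Hypothetical path; adjust to wherever your Fishbowl/Firebird database lives.
DB_PATH = r"C:\FishbowlData\EXAMPLE.FDB"

def warm_os_cache(path, block_size=64 * 1024):
    """Read the whole file sequentially so the OS page cache holds it."""
    total = 0
    with open(path, "rb") as f:
        while True:
            block = f.read(block_size)
            if not block:
                break
            total += len(block)
    return total

if __name__ == "__main__":
    print(f"touched {warm_os_cache(DB_PATH)} bytes")
```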
I read an article some time ago from someone who set up a memory drive (like a RAM disk in old DOS) and ran a database there. The problem is that if anything fails, you lose everything, so you would have to back up very often to keep a minimum of safety.
Not a good idea at all, I think.