What are "Hits & Misses" in reference to APC opcode caching? I've installed APC and it's running great, but I've got "some" misses and I'm wondering if that's "bad". Also, I am running Openx and, as such, am filling up the "Cache full count(s)" pretty quickly. What do I need to change in the configuration to minimize that? Any recommended configurations?
Some misses are to be expected.
Hits = things are in cache
Miss = things not (yet) in cache. New or rarely used items will always miss at first, so you should always expect some misses.
You may need to tune how much memory you're dedicating to APC. It's sort of a guessing game, balancing how much memory your machine has against how much of APC you 'usually' have filled (it should report an amount or percentage full). You'll have to tweak various values to see. A reasonable baseline is the size of all your source code compressed at roughly gzip level 2: assume comments, variable names, and so on get stripped out, and you'll never get over that. From there you can figure out how much memory to dedicate to the cache.
If you're using APC for key-value caching as well, that will fill up faster than just code caching - and you'll expect to fill it up eventually. You'll then need to find an amount that gives a miss ratio you're comfortable with.
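If you want to check the hit ratio and fill level yourself, here's a minimal sketch using the classic APC functions (the exact array key names can differ slightly between APC versions, so treat them as assumptions):

    <?php
    // Minimal sketch: inspect APC's opcode cache stats and shared memory usage.
    // Assumes the classic APC extension; array key names may vary by version.
    $cache = apc_cache_info('', true);   // limited info, skips the per-entry list
    $sma   = apc_sma_info(true);

    $hits   = $cache['num_hits'];
    $misses = $cache['num_misses'];
    $ratio  = ($hits + $misses) > 0 ? $hits / ($hits + $misses) : 0;

    $total = $sma['num_seg'] * $sma['seg_size'];
    $used  = $total - $sma['avail_mem'];

    printf("Hit ratio: %.1f%%\n", $ratio * 100);
    printf("Memory: %.1f MB used of %.1f MB (%.1f%% full)\n",
        $used / 1048576, $total / 1048576, $used / $total * 100);
    printf("Cache full count (expunges): %d\n", $cache['expunges']);

The apc.php status page bundled with the extension reports the same numbers if you'd rather not script it.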
I've been playing with ImageResizer for a while now, and I'm having trouble understanding how to go about something.
Mainly, I would like to stick to the idea of using the pipeline and not try to cheat it.
So... let's say I use ImageResizer in a pretty standard way, for something like:
giants_logo.jpg?w=280&h=100
The file is giants_logo.jpg.
The request being processed is for a resized version with 'w=280&h=100'.
In a clustered environment, the issue is what happens when this same request is served by 3 machines.
All 3 would end up doing the resize and then storing their cached version in a local folder on disk. I could leverage a shared drive or something, but that has its own limitations.
What I am looking to do is get the processed file and then copy it back up to the DB or S3, where the main images are served from.
My thought is... I might have to write something like DiskCache, but with completely different guts, using the DB or S3 as the back end instead of the file system.
I realize the point of caching is speed, and what I am suggesting negates that aspect... but maybe that's not the case if we layer things.
Anyway, what I am focused on is keeping track of the files generated, as well as avoiding processing on multiple servers.
Any thoughts on the route I should look at to accomplish this?
TL;DR: when DiskCache actually stops working well (usually between 1 and 20 million unique images), switch to a CDN (unless it's too expensive) or a reverse proxy (unless your data set is really too huge to be bound by mortal infrastructure).
For petabyte data sets on the cheap when performance isn't king, it's a good plan. But for most people, it's premature. Even users with upwards of 20TB (source images) still use DiskCache. Really. Terabyte drives are cheap.
Latency is the killer.
To make this work you would need a central Redis server. MSSQL won't cut it (at least not on a VM or commodity hardware, we've tried). Given a Redis server, you can track what is done and stored (and perhaps even what is in progress, to de-duplicate effort in real time, as DiskCache does).
If you can track it, you can reuse it, and you can delete it. Reuse will be slower, since you're doubling the network traffic, moving the result twice. (But also decreasing it linearly with the number of servers in the cluster for source image fetches).
If bandwidth saturation is your bottleneck (very common), this could make performance worse. In fact, unless your workload is heavily write- and CPU-bound, you'll likely see worse performance than you would with duplicated CPU effort under individual disk caches.
If you have the infrastructure to test it, put DiskCache on a SAN or shared drive; this will give you a solid estimate of the performance you can expect (assuming said drive and your blob storage system have comparable IO perf).
However, it's a fair amount of work, and you're essentially duplicating a subset of the functionality of reverse proxy (but with worse performance, since every response has to be proxied through the unlucky cluster server, instead of being spooled directly from disk).
CDNs and Reverse proxies to the rescue
Amazon CloudFront or Varnish can serve quite well as reverse proxies/caches for a web farm or cluster. Now, you'll have a bit less control over the 'garbage collection' process, but... also less code to maintain.
There's also ARR, but I've heard neither success nor failure stories about it.
But it sounds fun!
Send me a GitHub link and I'll help out.
I'd love to get a Redis-coordinated, cloud-agnostic poor-man's blob cache system out there. You bring the petabytes and infrastructure, I'll help you with the integration and troublesome bits. Efficient HTTP proxying is probably the hardest part; the rest is state management and basic threading.
You might want to have a look at a modified AzureReader2 plugin at https://github.com/orbyone/Sensible.ImageResizer.Plugins.AzureReader2
This implementation stores the transformed image back to the Azure blob container on the initial request, so subsequent requests are redirected to that copy.
I'm seeing a high amount of fragmentation in APC (>80%), but performance actually seems pretty good. I've read another post that advises disabling object caching in WordPress / W3TC, but I wonder whether the reduction in fragmentation is worth more than the performance boost of caching the objects in the first place.
A fragmented APC is still several times better than no APC at all, so please don't deactivate it.
Increase your memory instead. With more memory, APC will fragment a lot less, which is healthier for APC itself.
APC itself has no "defragmentation" process. You could restart your HTTP service or call apc_clear_cache() from a PHP script, but beware of the performance impact for the next few minutes while your cache is rebuilt.
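If you go the script route, here's a minimal sketch (classic APC functions; schedule it for a low-traffic window, and hit it over a protected URL rather than the CLI, since the CLI gets its own separate APC instance):

    <?php
    // Minimal sketch: clear APC without restarting the web server.
    // Expect slower responses for a few minutes while the opcode cache rebuilds.
    apc_clear_cache();        // clears the opcode (system) cache
    apc_clear_cache('user');  // clears the user/key-value cache, if you use one
    echo "APC caches cleared\n";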
Fragmentation on disk-based systems matters because the head physically has to move to each location to read it. The APC cache, though, is by definition in random access memory, so the penalty for reading a different location is on the order of a couple of CPU cycles, i.e. negligible unless you're seriously loading the CPU. And if you're doing that, then you have bigger problems.
Also don't assign too much RAM to APC. You really want 5-10% more than the maximum possible cache. Any more is a waste of precious RAM.
I think it is misleading to put fragmentation as a metric on the APC monitor page as it's just not that important and people worry unduly. Running with highly fragmented APC is orders of magnitude better than running without it at all.
I'm beginning a new project using CakePHP. I like the "auto-magic" features and I think it's a good fit for the project. I'm wondering about the potential to scale CakePHP to several million IP hits a day, hundreds of thousands of database writes and reads a day, and about 50,000 to 500,000 users, often with 3,000 using the site concurrently. I'm making heavy use of stored procedures to offset this, and I'm accessing several servers, including a load balancer.
I'm wondering about the computational cost of some of the auto-magic and how well Cake handles session requests that make many DB hits. Has anyone had success with Cake running on a single server array setup with this level of traffic? I'm not using the cloud or a distributed database (yet). I'm really worried about potential bottlenecks with this framework. I'm interested in advice from anyone who has worked with Cake in production. I've researched, but I would love a second opinion. Thank you for your time.
This is not a problem, but optimization is up to you.
There are different cache methods you can implement: memcache, Redis, full page caching... All of that is supported by Cake already. What you cache and where is up to you (a minimal config sketch follows these suggestions).
For searching, you could try Elasticsearch to speed things up.
There are dispatcher filters that can bypass controller instantiation (you might want to do that in special cases; check the asset filter, for example).
Use nginx, not Apache.
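For the caching suggestion above, here's a rough sketch of what a CakePHP 2.x cache setup can look like (engine names and options depend on your Cake version and which PHP extensions are installed, and the Post model is hypothetical, so treat the specifics as assumptions):

    // Rough sketch: CakePHP 2.x cache configuration, e.g. in your bootstrap.
    Cache::config('default', array(
        'engine'   => 'Memcached',               // or 'Redis', 'Apc', 'File', ...
        'servers'  => array('127.0.0.1:11211'),  // assumed local memcached instance
        'prefix'   => 'myapp_',
        'duration' => '+1 hour',
    ));

    // Read-through caching of an expensive query (Post model is hypothetical):
    $posts = Cache::read('recent_posts', 'default');
    if ($posts === false) {
        $posts = ClassRegistry::init('Post')->find('all', array('limit' => 20));
        Cache::write('recent_posts', $posts, 'default');
    }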
Also, I would not start by over-optimizing and over-thinking this before any code is written. Start well and think about caching, but when you come across bottlenecks, analyze and fix them. Otherwise you'll waste a lot of time on over-optimization before you've even written anything that works.
Cake itself is very fast. Just to prove how dubious those fancy framework benchmarks are, we did one using a dispatcher filter to "optimize" Cake and even beat Yii, which seems pretty eager to show how fast it is. But benchmarks are pointless, especially in a huge project where so much human error can be introduced.
So I'm writing an application that is very heavy on SQLite usage. I'm working on writing an in-memory caching system into my application that will allow me to sort and filter my data (my own personal Core Data, in essence). I'm doing this because it seems to me a better/faster option than constantly making read requests against the SQLite database. Plus, most fields/columns will be searchable/sortable, and setting up indexes on each one seems less than ideal. But I'm not sure.
I know the SQLite database is cached somewhat in memory, but I don't know to what extent or how much of an advantage that would be for me. Implementing my own caching system will be complex and will probably add to my memory footprint, especially since I'm loading each table completely into memory to perform the sorts/filters. I'm more than willing to do it if it helps the performance of my app, but will it? Is the SQLite caching sufficient for me to rely on solely, or will it get bogged down when the tables start getting large (10,000+ rows)? I guess I'm asking if anyone has enough experience with SQLite to recommend one over the other.
Before anyone asks: no I can't use Core Data. Core Data isn't flexible enough for me to use in my application.
Ok, so here's what I've figured: the choice depends greatly on your requirements. I ended up removing (as much as possible) the SQLite cache, loading up what I needed, and sorting/filtering it using my own routines. This works remarkably well for me. But I've realized by implementing this that this wouldn't work in a lot of situations. Specifically, I've done a lot to make sure that my DB size is as small as possible. I'm basically storing only simple/small text and numbers. Everything else is a reference to an outside file. This makes my database small enough to use less as a database and more as an indexing service, which works well for loading the info into memory and sorting/filtering.
So, the answer depends a lot on the database. If you're storing large fields that might potentially take a lot of memory, it's probably best to let SQLite handle the cache. On the other hand, if you know the fields will be small, the SQLite cache will only inflate your memory footprint, and the round trips to the database to sort/filter data will only increase your latency. Instead, it's better to do the sorting/filtering yourself, though I will admit that's a lot of work. But in the end it made my app a lot faster than round-tripping to the DB.
My Rails application keeps hitting the disk I/O rate threshold set by my VPS at Linode. It's set at 3000 (I upped it from 2000), and every hour or so I get a notification that it has reached 4000-5000+.
What methods can I use to minimize the disk I/O rate? I mostly use Sphinx (the Thinking Sphinx plugin) and latitude/longitude distance search.
What are the methods to avoid?
I'm using Rails 2.3.11 and MySQL.
Thanks.
Did you check whether your server is swapping itself to death? What does "top" say?
Your Linode may have limited RAM, and it is very likely that it is swapping like crazy to keep things running.
If you see red in the IO graph, that is swapping activity! You need to upgrade your Linode to more RAM, or limit the number/size of the processes that are running. You should also add approximately 2x the RAM size as swap space (a swap partition).
http://tinypic.com/view.php?pic=2s0b8t2&s=7
Your question is too vague to answer precisely, but high disk I/O is generally a sign of one of a few things:
Your data set is too large because of historical data that you could prune. Delete what is no longer relevant.
Your tables are not indexed properly and you are hitting a lot of table scans. Check each of your slow queries with EXPLAIN.
Your data structure is not optimized for the way you are using it, and you are doing too many joins. Some tactical de-normalization would help here. Make sure all your JOIN queries are strictly necessary.
You are retrieving more data than is required to service the request. It is, sadly, all too common that people load enormous TEXT or BLOB columns from a user table when displaying only a list of user names. Load only what you need.
You're being hit by some kind of automated scraper or spider robot that's systematically downloading your entire site, page by page. You may want to alter your robots.txt if this is an issue, or start blocking troublesome IPs.
Is it going high and staying high for a long time, or is it just spiking temporarily?
There aren't going to be specific methods to avoid (other than not writing to disk).
You could try using a profiler in production, like New Relic, to get more insight into your performance. A profiler will highlight the actions that are taking a long time, and when you examine the specific algorithm you're using in that action, you might discover what's inefficient about it.