I am building a mapping application in Rails 3. The data for this project is static, so I have decided to use TileStache to seed my cache and serve the tiles as static images. The issue is that the map data is fairly sensitive and requires authorization. I am using Nginx + Passenger as the web server, and the map tiles are around 20 KB each. I have written a proof of concept that goes directly to Rails for authorization and uses send_file, and performance is OK. Given that the files are fairly small, I am wondering whether the Rack::Sendfile middleware would help performance in this specific situation. My concerns are current performance and performance degrading as the user base grows. Would anyone have any suggestions?
How can we improve speed and performance in Nuxt with SSR for the following points?
Reduce unused JavaScript
Avoid serving legacy JavaScript to modern browsers
Minimize main-thread work
Reduce JavaScript execution time
Avoid enormous network payloads
Pretty generic questions, so let's go point by point:
Reduce unused JavaScript: you can tree-shake your code (and third-party code) and lazy-load your routes and components (Nuxt does that nicely)
Avoid serving legacy JavaScript to modern browsers: the modern property is nice for that (see the sketch after this list)
Minimize main-thread work: beware of heavy third-party scripts like Google Analytics/GTM, heavy chat widgets, heavy operations, etc. Using a Service Worker can help; otherwise you could also try Partytown
Reduce JavaScript execution time: same, it depends on your code here. More analysis of it will be required
Avoid enormous network payloads: check whether you're making a huge number of HTTP calls or loading big 5 MB i18n JSON files
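For the first two points, a minimal sketch of what that can look like in a Nuxt 2 project (assuming Nuxt ≥ 2.13; the option values shown are just one reasonable choice):

```js
// nuxt.config.js
export default {
  // ship modern bundles (no legacy transpilation/polyfills) to modern browsers
  modern: 'server',

  // auto-import components; pages under pages/ are already code-split per route
  components: true
}
```

For an individual heavy component you can also use an async import in the page that needs it, e.g. `HeavyChart: () => import('~/components/HeavyChart.vue')` (the component name is made up), so it ships in its own chunk.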
As always, you cannot get a quick and simple answer on this kind of subject. You either need a performance expert or you need to debug/learn it yourself.
This is a nice start; you can get quite a lot of explanations regarding Core Web Vitals there.
This frontend checklist is always a nice article to read too.
PS: also, if the matter is mostly SSR, it may come down to having better infrastructure on the backend: a bulkier VPS/server, an improved DB, maybe some Elasticsearch, some caching, etc. (all the usual things you can improve on the backend).
For speed optimization, we need to follow these steps:
Optimize the images
Use shouldPreload in the render section of nuxt.config.js (see the config sketch after this list)
Use compressor: shrinkRay() for compression
Use dns-prefetch for Google Fonts
Minify JS and CSS
Optimize API queries
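A rough idea of how the config-level items above fit together, assuming Nuxt 2 and the shrink-ray-current package (adjust the details to your setup):

```js
// nuxt.config.js
import shrinkRay from 'shrink-ray-current'

export default {
  head: {
    link: [
      // resolve the Google Fonts domains early
      { rel: 'dns-prefetch', href: 'https://fonts.googleapis.com' },
      { rel: 'dns-prefetch', href: 'https://fonts.gstatic.com' }
    ]
  },
  render: {
    // Brotli/gzip compression of SSR responses
    compressor: shrinkRay(),
    bundleRenderer: {
      // decide which emitted files get preload hints
      shouldPreload: (file, type) => ['script', 'style', 'font'].includes(type)
    }
  }
  // JS and CSS are already minified by Nuxt in production builds
}
```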
I have an app hosted on Heroku. I seek to extract text from various PDFs. I'm currently using tesseract for this.
Since Heroku does not offer much storage space and the .traineddata files are big (I need to use all of them), is it possible to somehow store the tessdata language data on S3? I have not been able to find any solution for this yet.
All I could find is that I can define --tessdata-dir PATH, but that points to a local directory.
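For reference, the rough idea I'm picturing, sketched in Node with the AWS SDK v3 (the bucket name, key layout, and paths are just placeholders), is to pull the files into a writable directory at boot and point --tessdata-dir at it:

```js
// Sketch: download a .traineddata file from S3 into /tmp, then run tesseract against it.
import { mkdir } from 'node:fs/promises';
import { createWriteStream } from 'node:fs';
import { pipeline } from 'node:stream/promises';
import { execFile } from 'node:child_process';
import { promisify } from 'node:util';
import { S3Client, GetObjectCommand } from '@aws-sdk/client-s3';

const s3 = new S3Client({ region: 'us-east-1' });
const TESSDATA_DIR = '/tmp/tessdata';

async function fetchTraineddata(lang) {
  await mkdir(TESSDATA_DIR, { recursive: true });
  const { Body } = await s3.send(new GetObjectCommand({
    Bucket: 'my-tessdata-bucket',          // placeholder bucket
    Key: `tessdata/${lang}.traineddata`    // placeholder key layout
  }));
  await pipeline(Body, createWriteStream(`${TESSDATA_DIR}/${lang}.traineddata`));
}

async function ocr(imagePath, lang) {
  // PDF pages would need converting to images first (e.g. with pdftoppm)
  await fetchTraineddata(lang);
  const { stdout } = await promisify(execFile)(
    'tesseract',
    [imagePath, 'stdout', '--tessdata-dir', TESSDATA_DIR, '-l', lang]
  );
  return stdout;
}
```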
Sadly, I'm not sure Heroku is a good fit for your needs if you can't make all the data fit within the Heroku slug. Even if you could get it to work, it would be quite a performance hit.
You'd probably be better off setting Tesseract up as an API with its own server(s), then sending whatever you need to that API from Heroku (or moving the entire app over). Depending on the size of the rest of your app and how quickly Tesseract is growing in size, that might just mean Tesseract gets its own Heroku app with absolutely minimal dependencies, or it might mean moving that part of the app to AWS or something.
I am developing a universal SpriteKit iOS application that will contain many SpriteKit images. I was wondering if there are better methods for working with a large amount of image data in iOS apps than what's available by default from Apple (i.e. Core Data). Some resources point out that working with a database, such as an SQL database like SQLite, to save and load image data to and from disk improves overall app performance in terms of memory usage.
What is really the best way to manage sprite content in an iOS application?
Core Data is an object graph management system first, and a front end to a database second. Its intended use case is to manage models and their relationships to other models, much like tables in SQL. Therefore, it is not the correct solution for persisting binary data such as images. Flat-file storage is definitely the better choice. Look into NSFileManager for writing files to disk.
If writing your own disk cache sounds like a daunting task, you should consider using a third-party framework. Nuke is a popular image cache written completely in Swift. It also handles fetching images over the Internet and provides extensions for integration with UIKit.
Alternatively, just search "image cache" on GitHub to see plenty of other options.
I have been hoping to find out what different server setups equate to, in theory, for concurrent page requests, and the answer always seems to be soaked in voodoo and sorcery. What is the approximate maximum number of concurrent page requests for the following setups?
apache+php+mysql (1 server)
apache+php+mysql+caching (like memcached or similar; still one server)
apache+php+mysql+caching+dedicated database server (2 servers)
apache+php+mysql+caching+dedicated DB+load balancing (multiple web servers / single DB server)
apache+php+mysql+caching+dedicated DB+load balancing (multiple web servers / multiple DB servers)
+ distributed (Amazon elastic cloud) -- I know this one is "as much as you can afford", but it would be nice to know when to move to it.
I appreciate any constructive criticism. I am just trying to figure out when it's time to move from one implementation to the next, because each comes with its own implementation effort, either programming-wise or setup-wise.
In your question you talk about caching, and this is probably one of the most important factors in a web architecture regarding performance and capacity.
Memcache is useful, but before that you should be ensuring proper HTTP cache directives on your server responses. This does two things: it reduces the number of requests and speeds up server response times (if you have Apache configured correctly). This can be improved further by using an HTTP accelerator like Varnish and a CDN.
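In the Apache + PHP stack from the question you would typically express these rules with mod_expires/mod_headers, but the directives themselves work the same on any server. As a minimal sketch of the idea, here is what it looks like in an Express app (the paths and max-age values are just examples):

```js
const express = require('express');
const app = express();

// long-lived, fingerprinted static assets: cache "forever" and bust via the file name
app.use('/assets', express.static('public/assets', {
  maxAge: '365d',
  immutable: true,
  etag: true
}));

// dynamic pages: short public cache so repeat hits and intermediaries (Varnish, CDN) can serve them
app.get('/articles/:id', (req, res) => {
  res.set('Cache-Control', 'public, max-age=60');
  res.json({ id: req.params.id, body: '...' }); // placeholder response body
});

app.listen(3000);
```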
Another factor to consider is whether your system is stateless. Stateless usually means that it doesn't store sessions on the server and reference them with every request. A good systems architecture relies on state as little as possible: the less state, the more horizontally scalable a system is. Most people introduce state when confronted with issues of personalisation, i.e. serving up different content for different users. In such cases you should first investigate using HTML5 session storage (i.e. store the user data in JavaScript on the client, obviously over HTTPS) or, if the data set is smaller, secure JavaScript cookies. That way you can still serve up cached resources and then personalise with JavaScript on the client.
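A minimal sketch of that client-side personalisation idea (the storage key, field names, and element id are made up):

```js
// written once, e.g. right after login (only data that isn't sensitive)
sessionStorage.setItem('profile', JSON.stringify({ displayName: 'Ada', theme: 'dark' }));

// run on every page: the HTML itself stays fully cacheable and anonymous
document.addEventListener('DOMContentLoaded', () => {
  const raw = sessionStorage.getItem('profile');
  if (!raw) return; // anonymous visitor, nothing to personalise
  const profile = JSON.parse(raw);
  const greeting = document.getElementById('greeting');
  if (greeting) greeting.textContent = `Hello, ${profile.displayName}`;
  document.body.classList.add(`theme-${profile.theme}`);
});
```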
Finally, your stack includes a database tier, another potential bottleneck for performance and capacity. If you are only reading data from the system, then it should again be quite easy to scale horizontally. If there are both reads and writes, it's typically better to separate them: keep the read-write dataset in one database and the read-only data in another. You can then use more relevant methods to scale each.
These setups do not spit out a single number that you can then compare across them. The answer will vary based on way more factors than you have listed.
Even if they did spit out a single number, it would be just one metric out of dozens. What makes this the most important metric?
Even worse, none of these alternatives is free. There is engineering effort and maintenance overhead in each of them, which cannot be analysed without understanding your organisation, your app, and your cost/revenue structures.
Options like AWS not only involve development effort but may "lock you in" to a solution so you also need to be aware of that.
I know this response is not complete, but I am pointing out that this question touches on a large complicated area that cannot be reduced to a single metric.
I suspect you are approaching this from exactly the wrong end. Do not go looking for technologies and then figure out how to use them. Instead profile your app (measure, measure, measure), figure out the actual problem you are having, and then solve that problem and that problem only.
If you understand the problem and you understand the technology options then you should have an answer.
If you have already done this and the problem is concurrent page requests then I apologise in advance, but I suspect not.
I am developing a browser-based game, and I have a big map there. The terrain of the map is static, so I have thousands of tiles that will not change (whether they represent a forest, a desert, whatever); only the players on top of them can change.
Hence, I want to store all of my map on the player's computer. I am working with Ruby on Rails, and that map information is passed from the server to the JavaScript running in the user's browser in order to render a pretty map. But it makes me pretty sad to have a 200 KB .html file containing all of that map-related information.
What would be the simplest way to solve this issue? Cookies! Well, that's what I thought. A complete map's information can reach almost 200 KB (the maps are pretty big), and a cookie can hold at most 4 KB. I don't feel that the right way to achieve my objective is to create tons of cookies, one for each row of the map, for instance. Is there a more elegant way to keep this static information on the player's browser, without creating lots of cookies? A way to cache it in the browser? I mean, I can cache a 400 KB image, so why can't I cache a 200 KB map structure?
Thanks in advance!
Fernando.
Well, HTML Local Storage gives you 5 MB (though data is stored *as strings*, so the actual amount of data you can fit in the container is likely a lot less than 5 MB).
This limit is oddly fluid. For one thing, it's just a recommended limit; for another, WebKit-based browsers, for example, use UTF-16, which immediately cuts it in half (2.5 MB).
Browser support for Local Storage is good: IE, Firefox, Safari 4.0+, Chrome 4.0+, and Opera 10.5+. iPhone and Android are supported above versions 3.0 and 2.0, respectively.
Using Local Storage to preserve game state appears to be a prototypical use case.
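A minimal sketch of that pattern (the key name, versioning scheme, and fetch URL are just examples):

```js
const MAP_KEY = 'map-v1'; // bump the version whenever the static terrain data changes

async function loadMap() {
  const cached = localStorage.getItem(MAP_KEY); // Local Storage values are always strings
  if (cached) return JSON.parse(cached);

  const map = await (await fetch('/map.json')).json(); // example endpoint on your server
  try {
    localStorage.setItem(MAP_KEY, JSON.stringify(map));
  } catch (e) {
    // QuotaExceededError: the serialised map doesn't fit, so just skip caching this time
  }
  return map;
}
```

The first visit pays the download once; every visit after that reads the map straight out of Local Storage.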
Finally, Paul Kinlan published an excellent step-by-step tutorial on HTML5Rocks, which I highly recommend (though it's a little more than a year old).
Have you considered storing it in a JS file? Most browsers will cache linked JS files, allowing you to serve it only every once in a while. It would be very simple to deploy.
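A rough sketch of that approach (the file name and data shape are just examples):

```js
// public/map-data.js -- deployed as a plain static asset with far-future cache headers;
// include it with <script src="/map-data.js?v=1"></script> and bump v to invalidate
window.MAP_DATA = {
  width: 200,
  height: 200,
  terrain: [
    ['forest', 'forest', 'desert'], // one row of static tiles
    ['forest', 'plains', 'desert']  // ...and so on for the rest of the map
  ]
};
```

Your game code then reads window.MAP_DATA directly, and the browser only re-downloads the file when the version in the query string changes.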