I need help with generating meta tags from the database and setting them in different controller actions.
I have a table in the DB where I store meta information (keywords, description) for each controller action. I want to select these values in every action and set the fetched tags using registerMetaTag().
What I want to know is how much these queries will affect page load time, and whether there is a better approach for doing this?
Thanks,
Mark
This will be nearly unnoticeable if your database is set up traditionally. Each query will add on the order of ten-thousandths of a second to load time.
For such low-frequency data, though, you should be caching heavily, as you know it will not be changing often. This means the performance hit will be negligible, as the data is pulled from a file, memory store, or memory table, depending on how your caching is set up.
This is all a generalization of course, but then so was the question. If you've got any special set up or more specific optimization issues just comment or open a new question.
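To make the caching idea concrete, here's a minimal sketch in Rails-style Ruby (your question is Yii, but the same pattern applies through its cache component); the MetaTag model, its columns, and the cache key are assumptions for illustration only:

# Hypothetical model holding keywords/description per controller action.
# The per-action lookup is cached so the DB is only hit when the cache expires.
class MetaTag < ActiveRecord::Base
  def self.lookup(controller, action)
    Rails.cache.fetch("meta_tags/#{controller}/#{action}", :expires_in => 12.hours) do
      where(:controller => controller, :action => action).first
    end
  end
end

# In a controller action:
# meta = MetaTag.lookup(controller_name, action_name)
# then register meta.keywords and meta.description in the layout.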
P.S.
Don't micro-optimize. Just do it, analyse the impact, decide if it needs performance improvement, and to what degree.
http://www.codinghorror.com/blog/2009/01/the-sad-tragedy-of-micro-optimization-theater.html
I've been reading the article The Vietnam of Computer Science by Ted Neward. Although there's much in it I don't understand or haven't fully grasped, I was struck by a thought while reading this paragraph:
The Partial-Object Problem and the Load-Time Paradox
It has long been known that network traversal, such as that done when making a traditional SQL request, takes a significant amount of time to process. ... This cost is clearly non-trivial, so as a result, developers look for ways to minimize this cost by optimizing the number of round trips and data retrieved.
In SQL, this optimization is achieved by carefully structuring the SQL request, making sure to retrieve only the columns and/or tables desired, rather than entire tables or sets of tables. For example, when constructing a traditional drill-down user interface, the developer presents a summary display of all the records from which the user can select one, and once selected, the developer then displays the complete set of data for that particular record. Given that we wish to do a drill-down of the Persons relational type described earlier, for example, the two queries to do so would be, in order (assuming the first one is selected):
SELECT id, first_name, last_name FROM person;
SELECT * FROM person WHERE id = 1;
In particular, take notice that only the data desired at each stage of the process is retrieved–in the first query, the necessary summary information and identifier (for the subsequent query, in case first and last name wouldn’t be sufficient to identify the person directly), and in the second, the remainder of the data to display. ... This notion of being able to return a part of a table (though still in relational form, which is important for reasons of closure, described above) is fundamental to the ability to optimize these queries this way–most queries will, in fact, only require a portion of the complete relation.
Skeleton Screens
Skeleton Screens are a relatively new UI design concept introduced in 2013 by Luke Wroblewski in this paper. The idea is to avoid spinners as loading indicators and to instead gradually build up UI elements during load time, which makes the user feel that things are progressing quickly and progress is being made, even if it is, in fact, slower than with a traditional loading indicator.
Here is a Skeleton Screen in use in the Microsoft Teams chat app. It displays this while waiting for stored chat logs to arrive from the database.
Utilizing Skeleton Screen Style Data Loading as a Paradigm for Data Retrieval
While Neward's paper focuses on object-relational mappers, it gave me this thought about structuring data retrieval.
The paragraph quoted above indicates a struggle with querying too much data at one time, increasing data retrieval times until all the indicated data is gathered. What if, similar to Neward's SQL example above, smaller chunks of data were retrieved, as necessary, and loaded piecemeal into the application?
This would necessitate a fundamental shift in database querying logic, I think. It would obviously be ridiculous to implement this in application code; having your everyday developer write a multi-layered retrieval scheme to fetch a single object would be insane. Rather, there would need to be some built-in mechanism whereby the developer can indicate which properties are required (username, id, permissions, roles, etc.) and must be retrieved fully before moving forward, and which properties are ancillary. After all, many users navigate through an application faster than all the data can populate, if they are familiar with the application and just need to reach a certain page. All they need is enough data to be loaded, which is the point of this scheme.
On the database side, there would probably be a series of smaller retrievals rather than one large one. I know this is probably more expensive, although I'm not certain of the technicalities; and while database performance may suffer, application performance might improve, at least as perceived by the user.
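Purely to make the idea concrete, here is a hedged Ruby sketch of a two-phase (required-first, ancillary-later) load; the User model, the column lists, and the helper names are all hypothetical:

# Hypothetical two-phase retrieval: pull only what the first paint needs,
# then backfill the rest of the record in a follow-up request.
REQUIRED_COLUMNS  = [:id, :username, :role]     # must arrive before rendering
ANCILLARY_COLUMNS = [:id, :avatar_url, :bio]    # can trickle in afterwards

def load_user_skeleton(user_id)
  # Phase 1: a narrow SELECT, analogous to the summary query in Neward's example.
  User.select(*REQUIRED_COLUMNS).find(user_id)
end

def load_user_details(user_id)
  # Phase 2: fetched later, e.g. by a follow-up AJAX call while the skeleton renders.
  User.select(*ANCILLARY_COLUMNS).find(user_id)
end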
Conclusion
Imagine pulling up your Instagram and having the first series of photos (the ones you can see) load twice as quickly as before. Photos are likely your first priority. It wouldn't matter if your username, notification indicator, and profile picture take a few extra seconds to populate, since you have already been fed your expected data and have begun consuming it as a user. Contrast that with having structural data loaded first: nobody cares about seeing their username or the company logo load. They want to see content.
I don't know if this is a terrible idea or something that has already been considered, but I'd love to get some feedback on it.
What do you think?
Is it possible, from a technical standpoint?
I have an application that allows the user to drill down through data from a single large table with many columns. It works like this:
There is a list of distinct top-level table values on the screen.
User clicks on it, then the list changes to the distinct next-level values for whatever was clicked on.
User clicks on one of those values, taken to 3rd level values, etc.
There are about 50 attributes they could go through, but it usually ends up only being 3 or 4. But since those 3 or 4 vary among the 50 possible attributes, I have to persist the selections to the browser. Right now I do it in a hideous and bulky hidden form. It works, but it is delicate and suboptimal. In order for it to work, the value of whatever level attribute is on the screen is populated in the appropriate place on the hidden form on the click event, and then a jQuery Ajax POST submits the form. Ugly.
I have also looked at Backbone.js, but I don't want to roll another toolkit into this project while there may be some other simple convention that I'm missing. Is there a standard Rails Way of doing something like this, or just some better way period?
Possible Approaches to Single-Table Drill-Down
If you want to perform column selections from a single table with a large set of columns, there are a few basic approaches you might consider.
- Use a client-side JavaScript library to display/hide columns on demand. For example, you might use DataTables to dynamically adjust which columns are displayed based on what's relevant to the last value (or set of values) selected.
- Use a form in your views to pass the relevant column names into the session or the params hash, and inspect those values to decide which columns to render in the view when drilling down to the next level.
- Include a list of columns of interest in your next server-side request, and have your controller use those column names to build a custom query using SELECT or #pluck. Such queries often involve tainted objects, so sanitize that input thoroughly and handle it with care (see the sketch after this list).
- If your database supports views, users could select pre-defined or dynamic views from the next controller action, which may or may not be more performant. It's at least an idea worth pursuing, but you'd have to benchmark this carefully and make sure you don't end up with SQL injections or an unmanageable number of pre-defined views to maintain.
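To illustrate the third approach above, here is a minimal, hedged sketch of whitelisting the requested column names before they reach the query; the Record model and the ALLOWED_COLUMNS list are assumptions:

# Hypothetical controller action: only whitelisted column names ever reach the SQL.
ALLOWED_COLUMNS = %w[category region product year]   # assumed attribute names

def drill_down
  requested = Array(params[:columns]) & ALLOWED_COLUMNS    # discard anything not whitelisted
  requested = [ALLOWED_COLUMNS.first] if requested.empty?  # sensible fallback

  # Safe to interpolate because the names come from our own whitelist, not the user.
  @rows = Record.select(requested.join(', '))
end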
Some Caveats
There are generally trade-offs between memory and latency when deciding whether to handle this sort of feature client-side or server-side. It's also worth revisiting the business logic behind having a huge denormalized table, and investigating whether the problem domain could be broken down into a more manageable set of RESTful resources.
Another thing to consider is that Rails won't stop you from doing things that violate the basic resource-oriented MVC pattern. From your question, there is an implied assumption that you don't have a canonical representation for each data resource; approaching Rails this way often increases complexity. If that complexity is truly necessary to meet your application's requirements then that's fine, but I'd certainly recommend carefully assessing your fundamental design goals to see if the functional trade-offs and long-term maintenance burdens are worth it.
I've found questions similar to yours on Stack Overflow; there doesn't appear to be an API or style anyone mentions for persisting across requests. The best you can do seems to be storage in classes or some iteration on what you're already doing:
1) Persistence in memory between sessions/requests
2) Coping with request persistence design-wise
3) Using class caching
My web app offers personalized recommendations. When a user starts using it, 1000+ rows are inserted into one big recommendation table, correlated with other tables in the database. Every item the user votes for affects all of those 1000+ rows.
Since the recommendation info is only useful during the session, and since the recommendation table is getting huge, we'd like to switch to a more appropriate method. One possibility is deleting the relevant rows as soon as the user session is over. I guess a PHP session array or temp tables would be better for this case?
One temp table per session will lead to catalog pollution, so it's not really recommended.
Have you considered actually keeping the data, so you can periodically mine it to improve the suggestions?
First: consider redesigning your data structure; I don't think it is optimal.
Store a user's recommendations in a table with user, recommended item, and score columns: I don't see any need for a temp table or anything else.
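As a hedged sketch of that table's shape (your app is PHP, so this Rails-style migration is only to show the structure; the equivalent CREATE TABLE is straightforward), with all names assumed:

# Hypothetical user / recommended item / score table.
class CreateRecommendations < ActiveRecord::Migration
  def change
    create_table :recommendations do |t|
      t.integer :user_id, :null => false
      t.integer :item_id, :null => false
      t.float   :score,   :null => false, :default => 0.0
      t.timestamps
    end
    add_index :recommendations, [:user_id, :item_id], :unique => true
    add_index :recommendations, [:user_id, :score]    # fast "top N for this user" reads
  end
end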
Otherwise, you could start using sessions, but you should encapsulate the code carefully, making it easy to change if/when this solution is no longer maintainable.
I suspect that the method is flawed: 1000+ recommendations per user? How many of them do they ever look at? If you don't know the answer to that question, then you need to spend some time thinking about why you don't know the answer.
Every item the user votes for affects all of those 1000+ rows
Are you sure your data is properly normalised?
But leaving that aside for the moment: the right place to generate and store that data is the database. A relational database is explicitly designed for, and a lot more efficient at, generating and maintaining tabular sets of data than a conventional programming language.
My members will have the ability to customise their profile page with X number of widgets, where each widget displays different data, such as a list of music or a list of people they are following.
Some of the widgets include:
- List of media they have uploaded
- List of people they are following
- List of people following them
- Html/Text widget
- Media Statistics (num downloads etc)
- Comments widget for other members to leave comments
Some widgets will have to page the data returned because there could be hundreds of results.
I haven't done any optimisation yet, so it is doing lots of DB work to return all the data. What would be the most efficient way to retrieve the data? Would one DB call per widget be acceptable? There could be around 5-20 widgets per page.
If you need more information about my situation please feel free to ask.
Paul
Short answer: It depends.
Start off from the unoptimised state, then use SQL profiler or a C# profiler like dotTrace to work out the best places to make improvements. Set a realistic goal to work towards (e.g. 'less than 800 milliseconds to load the page').
Generally I find performance starts to suffer after about 20-30 database calls in a request, but this is going to depend on your server, the location of the database etc.
There are many things that you can try: pre-caching, eager fetching using joins rather than separate selects, etc. Nothing is going to guarantee better performance, though, unless it is applied intelligently.
For a page with lots of widgets, a common design pattern is to load each widget asynchronously using AJAX, rather than loading the entire page in one go.
Since you've cut your work up into widgets, the proper thing to do would be for each widget to issue a single query for all its required data. This would be the case even if you retrieved widgets via AJAX (which, as cbp noted, is not a bad idea).
Secondly, I would set up some kind of mechanism for each widget to register its existence, and then, after all widgets have registered, fire a single query that includes all the widget queries. (Technically it's still multiple queries, but in a single round trip; see MultiCriteria and MultiQuery in the NHibernate reference.)
Also, do not forget that lazy loads are hidden DB retrievals, and you can take a huge performance hit by using lazy loading in a situation where an eager load is appropriate (for example Foo.Bar.Name, where you always show the Bar.Name value when you present the Foo entity).
Performance degradation can occur even with fewer than 20-30 database calls per request; it depends on the size and complexity of your entities, queries, and filters, as well as the size of the data sets retrieved.
I manage a research database with Ruby on Rails. The data that is entered is primarily used by scientists who prefer to have all the relevant information for a study in one single massive table for use in their statistics software of choice. I'm currently presenting it as CSV, as it's very straightforward to do and compatible with the tools people want to use.
I've written many views (the SQL kind, not the Rails HTML/ERB kind) to make the output they expect a reality. Some of these views are quite large and have a fair amount of complexity behind them. I wrote them in SQL because there are many calculations and comparisons that are more easily done with SQL. They're currently loaded into the database straight from a file named views.sql. To get the requested data, I do a select * from my_view;.
The views.sql file is getting quite large. Part of the problem is that we're still figuring out what the data we collect means, so there's a lot of changes being made to the views all the time -- and a ton of them are being created. Many of them need to be repeatable.
I've recently run into issues organizing and testing these views. Rails works great for user interface stuff and business logic, but I'm not aware of much existing structure for handling the reporting we require.
Some options I've thought of:
Should I move them into the most relevant models somehow? Several of the views interact with each other, which makes this situation more complex than just doing a single find_by_sql, so I don't know if they should only be part of the model.
Perhaps they should be treated as a "view" in the MVC sense? (That is, they could be moved into app/views/ and live alongside the HTML, perhaps as files named something like my_view.csv.sql which return CSV.)
How would you deal with a complex reporting problem like this?
UPDATE for Mladen Jablanović
It started with a couple of views for reporting purposes. My boss(es) decided they wanted more, so I started writing more. Some produce a couple hundred columns of data, based on the requirements I've been given.
I have a couple thousand lines of views all shoved in a single file now. I don't like that situation, so I want to reorganize/refactor the code. I'd also like an easy way of providing CSVs -- I'm currently running queries and emailing them by hand, which could easily be automated. Finally, I would like to be able to write some tests on the output of the views, since a couple of regressions have already popped up.
I haven't worked much with SQL and views directly, so I can't help you there, but you can certainly build an ActiveRecord model on top of a view, very easily in fact. The book Enterprise Rails has a whole chapter on it (here it is at Google Books).
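A minimal sketch of the idea (the class and view name are assumptions):

# Hypothetical read-only model backed by a SQL view rather than a table.
class StudyReport < ActiveRecord::Base
  self.table_name = 'study_report_view'   # assumed name of the SQL view

  # Views can't be written to, so block accidental saves.
  def readonly?
    true
  end
end

# Queried like any table-backed model:
# StudyReport.where(:study_id => 42).limit(100)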
We are using views in our DB extensively, and some of them are exposed as Rails models. You work with them as you would with tables, except that you can't update them, of course.
Also, some of the columns may be calculated from other columns (different ratios, for example), so we don't do that in the view, but in the model instead (OK, not entirely true: we construct an SQL snippet and pass it to the :select => '' portion of the find call).
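For example, a hedged sketch of that pattern using the old find-style API mentioned above (the model, columns, and ratio are assumptions):

# Hypothetical calculated column built as an SQL snippet in the model layer
# and handed to :select, rather than baked into the database view.
ratio_sql = 'items_sold / NULLIF(items_offered, 0) AS sell_through_ratio'

listings = Listing.find(:all,
  :select     => "listings.*, #{ratio_sql}",
  :conditions => ['created_at > ?', 1.month.ago])

listings.first.sell_through_ratio   # exposed like any other attribute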
Presentation logic (such as date and number formatting) goes to Rails views.
I'm afraid I can't help you with more concrete advice, as the scope of the question is pretty wide.
EDIT:
Hundreds of columns doesn't sound reasonable; that's an immense amount of data in one place. How do they use it at all? We have a web application where they can drill down and filter the results, narrow the timespan and time step, etc., so they never have more than 10-20 columns in the reports.
We store our views one per SQL file. You can also combine this with a numerical prefix to ensure proper creation order (in case some of them depend on others). No migrations there; the whole DB layer is app-agnostic.
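As a hedged illustration of that layout (the directory and file names are assumed), a tiny loader could apply the files in prefix order:

# Hypothetical loader: applies db/views/010_people.sql, 020_summaries.sql, ...
# in filename order, so dependent views are created after their dependencies.
Dir.glob(File.join('db', 'views', '*.sql')).sort.each do |path|
  ActiveRecord::Base.connection.execute(File.read(path))
end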
For CSV, you can create a set of scripts that you invoke manually or via cron, or you can use FasterCSV from your Rails app and generate CSVs in response to an HTTP request.
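A hedged sketch of the HTTP route (the controller name, view name, and columns are assumptions), using FasterCSV as mentioned:

require 'fastercsv'

class ReportsController < ApplicationController
  # GET /reports/study.csv -- dumps the SQL view as a CSV download.
  def study
    rows = ActiveRecord::Base.connection.select_all('SELECT * FROM study_report_view')

    csv = FasterCSV.generate do |out|
      unless rows.empty?
        header = rows.first.keys
        out << header
        rows.each { |row| out << row.values_at(*header) }   # keep column order consistent
      end
    end

    send_data csv, :type => 'text/csv', :filename => 'study_report.csv'
  end
end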