I want to make a table of "people", where each of their attributes is inline editable. There are about 500 people, and the list will grow over time. The people#index page contains this list.
Since there are a lot of form elements, the server takes a long time to render the page. The server I'm using for hosting is also fairly slow, so I thought about having the JavaScript do the rendering of these tables instead.
Currently, I have a new action that returns JSON data for all of the people's attributes, and I use Rails to generate just a template for a row. I then use jQuery to clone that template, insert each attribute from the JSON data into a new copy, and append it to the table's rows.
This makes people#index load a lot faster, but now I run into the problem of the JavaScript freezing up the page for a second while things load.
I looked into web workers for creating a thread to do this work. I plan to generate the extra rows on a separate thread, then add them to my table (I'm using DataTables).
I feel like I might be missing something, and that this is a sloppy solution.
My question is, is there a better approach to this?
Possible options:
1. Pagination
I think in this case pagination is the best solution.
Pagination can be done on the Rails side (the will_paginate gem at https://github.com/mislav/will_paginate or the kaminari gem at https://github.com/amatsuda/kaminari) or on the JS side; a minimal sketch of the underlying idea follows after these options.
2. You may also use something like jqGrid: http://trirand.com/blog/jqgrid/jqgrid.html
This JS component offers the ability to edit data inline too, and you can look for Rails integration gems such as https://github.com/allen13/rails-asset-jqgrid
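For illustration, here is a minimal, framework-agnostic sketch of the offset/limit idea behind pagination, in Python (will_paginate and kaminari implement the same idea for Rails, plus view helpers; the data here is made up):

import math

def paginate(items, page, per_page=25):
    """Return one page of items plus the total page count."""
    total_pages = math.ceil(len(items) / per_page)
    start = (page - 1) * per_page
    return items[start:start + per_page], total_pages

people = [f"Person {i}" for i in range(500)]  # stand-in for the people table
rows, pages = paginate(people, page=3)
print(len(rows), pages)  # 25 20

Rendering 25 rows per request keeps both the server render time and the client-side JS work small, which sidesteps the page-freeze problem entirely.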
I'm trying to debug a slow Django listview page. The page currently displays all tests (133), as well as the latest result for each test. There are approximately 60,000 results, each result having a FK relationship to a single test.
I've optimized the SQL (I think) by selecting the latest result for each test, and passing it in via prefetch_related to my tests query. Django Debug Toolbar shows that the SQL is taking ~350ms, but the page load is ~3.5s.
If I restrict the list view to a single test though, the SQL is ~7.5ms, and page load is ~100ms.
I'm not fetching anything from S3 etc, and not rendering any images or the like.
So my question is, what could be causing the slowness? It seems like it's the SQL, since page load grows with the result set, but maybe it's the rendering of each item or something similar? Any guidance on what to look into next would be appreciated.
You can force execution of your Django queryset:
test_list = list(your_queryset)
and return a simple text response from your view:
return HttpResponse("return this string")
That lets you check the timing without template rendering. Also, Django Debug Toolbar can itself slow your app down; this issue may match your case: https://github.com/jazzband/django-debug-toolbar/issues/910
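Putting those two pieces together, a stripped-down debug view might look like this minimal sketch (the Test model, the app module, and the prefetch name are assumptions based on the question, not your actual code):

import time

from django.http import HttpResponse

# 'Test' and 'myapp' are hypothetical stand-ins for your models.
from myapp.models import Test

def debug_test_list(request):
    queryset = Test.objects.prefetch_related("results")
    start = time.monotonic()
    tests = list(queryset)  # force the SQL to execute here
    elapsed = time.monotonic() - start
    # No template rendering: if this is fast but the real page is slow,
    # the time is going into rendering rather than the query.
    return HttpResponse(f"Loaded {len(tests)} tests in {elapsed:.3f}s")

If the plain-text version is fast, look at the template next: a per-item query or expensive filter inside the loop is the usual culprit.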
I'm looking for a rails gem (or possibly several together) that will be the basis of the user facing front end for my application.
I'm constrained by a few things:
First, my user base is very technically challenged. All of the UI pieces have to be very easy to understand (in other words, patterns they've already seen a lot). It will be a stretch for these users to click on a column header and expect it to sort without some kind of prompting.
Second, the application flow needs to be very simple. As I mentioned in the first condition, if I spread this out into a lot of small actions, I'm likely to lose my users.
The core of the problem is that I have a dataset with 15 columns. I'd like to have the ability to:
have the users dynamically select which columns to view at one time
sort on any column in the view
filter the results (via text and attribute search)
paginate the results
I don't need any editing capabilities.
I've googled around for "ruby on rails datagrid" without much luck. I'm developing on Rails 3.1. Thank you for any help!
Check out the RailsCasts episodes on doing this with the will_paginate gem and some sorting code: http://railscasts.com/episodes/240-search-sort-paginate-with-ajax and http://railscasts.com/episodes/228-sortable-table-columns
I'd take a look at DataTables. It's a JS table plugin, and there are some Ruby gems that wrap it, e.g. jquery-datatables-rails. There is also a RailsCast about it.
Try Datagrid, a Ruby library that helps you build and present table-like data with:
Customizable filtering
Columns
Sort order
Localization
Export to CSV
The templates for these HTML emails are all the same, but there are just different variables for, say, first name, last name, and such.
Would it just make sense to store the most minimal of data that I need, and load the template and replace the variables every time?
Another option would be to actually create the HTML file and store a reference to it, which would probably be the easiest to do, except it might be a pain managing the files, and it adds complexity with regard to migration, file permissions, et cetera.
Looking for opinions from people who've done this before...
GOAL/PURPOSE/USE:
I have a booking engine. When users make a booking, they are sent a confirmation email, generated from the sessionized booking data.
This email includes a "Cannot view this email? See it here" link, which provides a web view of the email, in addition to a plaintext view.
I need to display the same email that was sent out, in addition to the plaintext view.
The template is subject to change, but I think because of that very fact I should have a table of templates and map the data to a template.
That's what I would do, because the template layout may change over time, but the person information should remain the same. So it makes sense to store just the person information in the database and leave the template out of the database.
In fact, it would be even better if you use a template engine such as Velocity (in Java) to construct your HTML emails; it's very easy, by the way.
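Velocity is Java-specific, but the same store-the-variables, render-at-send-time idea can be sketched with Python's built-in string.Template (the field names and data here are just an example):

from string import Template

# Stored in the database: only the per-person variables, not the markup.
stored_data = {"first_name": "Ada", "last_name": "Lovelace"}

# Stored with the application (or in a templates table): the markup.
email_template = Template(
    "<p>Dear $first_name $last_name, your booking is confirmed.</p>"
)

print(email_template.substitute(stored_data))

Because only the variables are persisted, changing the template later re-renders every email's web view with the new layout for free.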
On the one hand, CPU is more expensive than memory, so it is often better to store more data in order to reduce the CPU spent on computation.
But in your case I would save the minimal data, the emails or whatever you are trying to save, because it allows you to easily remodel your templates and to reuse the data in multiple places of your application.
Storing the rendered emails would persist redundant data (especially because of the template) which is in no way normalized; I would not suggest doing that. But as mentioned in the comments, it is important what you want to do with that data.
If you only save the data you need, you could, for example, easily swap that template out and use another one.
Yes, you're on the right track. I did a similar thing: all dynamic/runtime variables started with a ## symbol.
So in the database you would have one Template table, one table for dynamic/runtime variables, and one table for the mapping between templates and dynamic/runtime variables.
tblTemplate - TemplateID, TemplateValue
tblRuntimeVariables - RuntimeVariableID, VariableString, VariableSQL
tblMapping - TemplateID, RuntimeVariableID, RuntimeVariableValue
The advantage of using an extra mapping table is that adding new dynamic variables to an existing template requires no change to the existing database schema; only more rows are added to tblMapping.
In my case I also had one extra column in tblRuntimeVariables for storing SQL statements, in case the value for a runtime variable is fetched from the database.
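A rough sketch of how rendering against that schema might work, with the table rows inlined as Python data for brevity (the column names follow the answer above; the substitution logic is an assumption):

# One row from tblTemplate.
template_value = "Dear ##FirstName ##LastName, your order has shipped."

# tblMapping rows for this template, already joined to tblRuntimeVariables.
mapping = {
    "##FirstName": "Ada",
    "##LastName": "Lovelace",
}

rendered = template_value
for variable, value in mapping.items():
    rendered = rendered.replace(variable, value)

print(rendered)  # Dear Ada Lovelace, your order has shipped.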
We have a CMS built entirely in house. I'm the new web developer guy, with literally 4 weeks of ColdFusion experience. What I want to do is add version control to our dynamic pages, something like what WordPress does. When you modify a page in WordPress, it makes some database entries and keeps a copy of each page when you save it. So if you create a page and modify it 6 times, all in one day, you have 7 different versions to roll back to if necessary. Is there an easy way to do something similar in ColdFusion?
Please note I'm not talking about source control or version control of actual CFM files, all pages are done on the backend dynamically using SQL.
Sure you can. Just stash the page content in another database table. You can do that with ColdFusion or via a trigger in the database.
One way (there are many) to do this is to add a column called "version" and a column called "live" in the table where you're storing all of your cms pages.
The "live" column is optional, but it might make things easier for you in some ways when starting out.
The "version" column tells you which revision number of a document in the CMS you have. By a process of elimination you could say the newest one (highest version number) is the latest, live one. However, you may sometimes need to override this and turn an old page live, which is what the "live" flag is for.
So when you click "edit" on a page, you take the version that was clicked and copy it into a new, higher version number. It stays a draft until you click publish (at which time it's marked as live).
I hope that helps. This kind of approach should work okay with most schema designs, but I can't say for sure without seeing yours.
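To make the flow concrete, here is a minimal in-memory sketch of that edit/publish cycle in Python (a real CMS would do this with INSERT/UPDATE statements; the record structure is an assumption):

pages = [
    {"page_id": 1, "version": 1, "live": True, "content": "Original copy"},
]

def edit_page(page_id, new_content):
    """Copy the latest version into a new, higher-numbered draft."""
    latest = max((p for p in pages if p["page_id"] == page_id),
                 key=lambda p: p["version"])
    draft = {"page_id": page_id, "version": latest["version"] + 1,
             "live": False, "content": new_content}
    pages.append(draft)
    return draft

def publish(page_id, version):
    """Mark exactly one version of a page as live."""
    for p in pages:
        if p["page_id"] == page_id:
            p["live"] = (p["version"] == version)

draft = edit_page(1, "Revised copy")
publish(1, draft["version"])
print([p for p in pages if p["live"]])  # only version 2 is live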
Jas' solution works well if most of the changes are to one field, for example the full text of a page of content.
However, if you have many fields, and people only tend to change one or two at a time, a new entry in to the table for each version can quickly get out of hand, with many almost identical versions in the history.
In this case, what I like to do is store the changes on a per-field basis in a ChangeHistory table. I include the table name, row ID, field name, previous value, new value, and who made the change and when.
This acts as a complete change history for any field in any table. I'm also able to view changes by record, by user, or by field.
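A small sketch of that per-field approach (the column names mirror the description above; the diffing logic is an assumption):

from datetime import datetime, timezone

change_history = []  # stands in for the ChangeHistory table

def record_changes(table, row_id, old, new, user):
    """Log one history row per field whose value actually changed."""
    for field, new_value in new.items():
        old_value = old.get(field)
        if old_value != new_value:
            change_history.append({
                "table": table, "row_id": row_id, "field": field,
                "previous": old_value, "new": new_value,
                "user": user, "changed_at": datetime.now(timezone.utc),
            })

record_changes("pages", 1,
               {"title": "Home", "body": "Hello"},
               {"title": "Home", "body": "Hello, world"},
               user="alice")
print(change_history[0]["field"])  # body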
For realtime page generation from the database, your best bet is separate "live" and "versioned" tables, the reason being that keeping all data, live and versioned, in one table will negatively impact performance. So if page generation relies on a single SELECT query against the live table, you can easily version the result set using ColdFusion's Web Distributed Data eXchange format (WDDX) via the <cfwddx> tag. WDDX is a serialized data format that works particularly well with ColdFusion data (sort of like Python's pickle, albeit without the ability to deal with objects).
The versioned table could be as such:
PageID
Created
Data
Where Data is the column storing the WDDX packet.
Note, you could also use the built-in JSON support for version serialization (serializeJSON and deserializeJSON), but cfwddx tends to be more stable.
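For the JSON variant, the shape of a versioned row looks roughly like this sketch in Python (serializeJSON/deserializeJSON play the json.dumps/json.loads role in ColdFusion; the row fields are made up):

import json
from datetime import datetime, timezone

live_row = {"page_id": 1, "title": "Home", "body": "Hello, world"}

versioned_row = {
    "PageID": live_row["page_id"],
    "Created": datetime.now(timezone.utc).isoformat(),
    "Data": json.dumps(live_row),  # serialized snapshot of the live row
}

restored = json.loads(versioned_row["Data"])
assert restored == live_row  # a snapshot round-trips losslessly

The appeal of serializing the whole row is that the versioned table's schema never has to change when the live table's does.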
OK, first let me state that I have never used this control and this is also my first attempt at using a web service.
My dilemma is as follows. I need to query a database to get back a certain column and use that for my autocomplete. Obviously I don't want the query to run every time a user types another word in the textbox, so my best guess is to run the query once, then use that dataset, array, list, or whatever to filter against for the autocomplete extender.
I'm kind of lost. Any suggestions?
Why not keep track of the query executed by the user in a session variable, then use that to filter any further results?
The trick to preventing the database from overloading, I think, is really just to limit how frequently the auto-updater is allowed to update; something like once per 2 seconds seems reasonable to me.
What I would do is this: Store the current list returned by the query for word A server side and tie that to a session variable. This should be basically the entire list I would think. Then, for each new word typed, so long as the original word A exists, you can filter the session info and spit the filtered results out without having to query again. So basically, only query again when word A changes.
I'm using "session" in a PHP sense, you may be using a different language with different terminology, but the concept should be the same.
The answer depends on how transactional your data store is. Obviously, if you are looking up US states (a data collection that realistically will not change through the life of the application), then I would cache either a System.Collections.Generic.List<> or, if you prefer, a DataTable.
You could easily set up a cache of the data you wish to query that is dependent upon an XML file or a database table, so that your extender always queries the data object cast from the cache, and the cache object is only updated when the datasource changes.
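As a rough illustration of the cache-with-dependency idea (ASP.NET's CacheDependency handles this for you; here the invalidation check is done by hand against a file's modification time, and the file name is a hypothetical example):

import json
import os

_cache = {"mtime": None, "data": None}

def get_states(path="states.json"):  # hypothetical datasource file
    """Return cached data, reloading only when the backing file changes."""
    mtime = os.path.getmtime(path)
    if _cache["mtime"] != mtime:  # datasource changed: refresh the cache
        with open(path) as f:
            _cache["data"] = json.load(f)
        _cache["mtime"] = mtime
    return _cache["data"]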
RAM is cheap and SQL is harder to scale than IIS, so cache everything in memory:
your entire data source, if it is not too large to load in a reasonable time,
precalculated data,
autocomplete web service responses.
Depending on your autocomplete's desired behavior and performance, you may want to precalculate data and create redundant structures optimized for reading. Make use of structures like SortedList (when you need something like SELECT TOP x ... WHERE z LIKE @query + '%') or Hashtable.
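For example, a sorted list plus binary search gives that "top x where like prefix%" behavior without touching SQL; a minimal sketch in Python using bisect (the word list is made up):

import bisect

WORDS = sorted(["apple", "application", "apply", "banana", "band"])

def starts_with(prefix, limit=10):
    """Roughly: SELECT TOP limit ... WHERE word LIKE prefix + '%'."""
    i = bisect.bisect_left(WORDS, prefix)
    results = []
    while i < len(WORDS) and WORDS[i].startswith(prefix) and len(results) < limit:
        results.append(WORDS[i])
        i += 1
    return results

print(starts_with("app"))  # ['apple', 'application', 'apply']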
While caching everything is certainly a good idea, your question about which data structure to use is an issue that wasn't fully answered here.
The best data structure for an autocomplete extender is a Trie.
You can find a good .NET article and code here.
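For reference, here is a minimal trie sketch in Python; the linked article covers a fuller .NET implementation, but the idea is the same:

class TrieNode:
    def __init__(self):
        self.children = {}
        self.is_word = False

class Trie:
    def __init__(self):
        self.root = TrieNode()

    def insert(self, word):
        node = self.root
        for ch in word:
            node = node.children.setdefault(ch, TrieNode())
        node.is_word = True

    def complete(self, prefix):
        """Return every stored word that starts with the given prefix."""
        node = self.root
        for ch in prefix:
            if ch not in node.children:
                return []
            node = node.children[ch]
        results, stack = [], [(node, prefix)]
        while stack:
            node, word = stack.pop()
            if node.is_word:
                results.append(word)
            for ch, child in node.children.items():
                stack.append((child, word + ch))
        return results

trie = Trie()
for w in ("car", "cart", "cat"):
    trie.insert(w)
print(sorted(trie.complete("ca")))  # ['car', 'cart', 'cat']

A trie makes prefix lookups proportional to the length of the prefix rather than the size of the word list, which is exactly the access pattern an autocomplete box produces.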