Loading Razor views from a database - VirtualPathProvider and CacheDependency confusion - sql-server-2005

I'm confused as to how CacheDependency works in VirtualPathProvider.GetCacheDependency().
Every example I've seen creates a cache dependency based on some physical file on disk, while I'm returning records from a database. Right now, I'm overriding GetFileHash and just returning the last date/time the relevant record was modified as the hash string. This works well, and I'm not sure a CacheDependency would even help performance, since I'd still have to check the database every time the view is requested to see whether it has been updated. Still, I'm curious how CacheDependency is supposed to be used here.
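Roughly what I'm doing now (a sketch; GetViewLastModified stands in for my actual database lookup):

    using System;
    using System.Collections;
    using System.Web.Hosting;

    public class DbVirtualPathProvider : VirtualPathProvider
    {
        public override string GetFileHash(string virtualPath, IEnumerable virtualPathDependencies)
        {
            // The framework recompiles the view whenever this string changes,
            // so the record's last-modified timestamp works as a "hash".
            DateTime? modified = GetViewLastModified(virtualPath);
            return modified.HasValue
                ? modified.Value.Ticks.ToString()
                : base.GetFileHash(virtualPath, virtualPathDependencies);
        }

        private DateTime? GetViewLastModified(string virtualPath)
        {
            // e.g. SELECT LastModified FROM Views WHERE Path = @path
            return null; // stub
        }
    }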
Has anyone used this when returning views from a database?
Update
I'm now using RazorEngine (http://razorengine.codeplex.com/), which works very well.

The point of CacheDependency is to give you an event that is raised when the cached item becomes invalid (because the file on disk changed). Check out SqlCacheDependency, which does the same thing for SQL Server data.
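A sketch of how that could look in your provider, assuming a hypothetical dbo.Views table and connection string. On SQL Server 2005 the command-based SqlCacheDependency uses query notifications, which require Service Broker to be enabled and a one-time SqlDependency.Start(connectionString) call at application startup:

    using System;
    using System.Collections;
    using System.Data.SqlClient;
    using System.Web.Caching;
    using System.Web.Hosting;

    public class DbVirtualPathProvider : VirtualPathProvider
    {
        public override CacheDependency GetCacheDependency(
            string virtualPath, IEnumerable virtualPathDependencies, DateTime utcStart)
        {
            // The query must follow the query-notification rules:
            // two-part table name, explicit column list, no SELECT *.
            var connection = new SqlConnection("<your connection string>");
            var command = new SqlCommand(
                "SELECT Body, LastModified FROM dbo.Views WHERE Path = @path", connection);
            command.Parameters.AddWithValue("@path", virtualPath);

            var dependency = new SqlCacheDependency(command);
            using (connection)
            {
                connection.Open();
                using (command.ExecuteReader()) { } // executing the command registers the subscription
            }
            return dependency; // invalidated as soon as the row changes
        }
    }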


Any ALV-specifics with itab created by RTTS?

I create an internal table in two steps, both based on RTTS techniques.
The first step loads and parses a tab-delimited file into a table.
The second step reads this table via RTTI and then, hard-coded, prepends some new columns in front of the old columns from the file before adding the old fields back again; the final table has about 12 new hard-coded columns in front of those from the file. RTTS helps to create this final table, which is then passed as the data source to the ALV grid.
The original requirement did not anticipate that the end user would ever need the ALV grid toolbar functions, but, as always, this has changed. I have enabled the default toolbar functions, without any custom buttons.
So now the user can remove columns from the display, add them back again, and change their order. Everything is fine, but I have never encountered this situation with a table created at runtime.
Are there any special pitfalls I need to be aware of?
An <ITAB> created using RTTS is fully supported, whether by REUSE_ALV_LIST_DISPLAY or by one of the OO ALV technologies, and all the layout functions should work fine. In fact, I think RTTS is what cl_salv_table=>factory uses internally to create the field catalog of the ITAB automatically, since it does not need a field catalog passed as a parameter. The only issue I have heard of is losing pointers to the <ITAB>, which leads to refresh problems and the like, but that is a different story.
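For illustration, a minimal sketch of that pattern (column names invented):

    DATA: lt_components TYPE abap_component_tab,
          ls_component  TYPE abap_componentdescr,
          lo_struct     TYPE REF TO cl_abap_structdescr,
          lo_table      TYPE REF TO cl_abap_tabledescr,
          lr_data       TYPE REF TO data,
          lo_alv        TYPE REF TO cl_salv_table.
    FIELD-SYMBOLS: <itab> TYPE STANDARD TABLE.

    " Build the component list at runtime (here: a single hard-coded column).
    ls_component-name = 'COL1'.
    ls_component-type = cl_abap_elemdescr=>get_c( 40 ).
    APPEND ls_component TO lt_components.

    lo_struct = cl_abap_structdescr=>create( lt_components ).
    lo_table  = cl_abap_tabledescr=>create( lo_struct ).
    CREATE DATA lr_data TYPE HANDLE lo_table.
    ASSIGN lr_data->* TO <itab>.

    " No field catalog needed - the factory derives it from the table itself.
    cl_salv_table=>factory( IMPORTING r_salv_table = lo_alv
                            CHANGING  t_table      = <itab> ).
    lo_alv->get_functions( )->set_all( abap_true ).  " default toolbar
    lo_alv->display( ).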
From my experience, the maximum ALV column size is 120 characters, so if your file can contain values longer than that, you could have a problem. Otherwise, don't expect any major issues.

Effectively make database records read-only

How can I make sure that specific data in the database can no longer be altered?
We are working with TSQL. Inside the database we store contract revisions. These have a status: draft / active. When the status has become active, the revision may never be altered anymore. A revision can have 8 active modules (each with its own table), each with their own settings and sub-tables. This creates a whole tree of tables with records that may never change anymore when the contract revision has been set to active.
Ideally I would simply mark those records as read-only, but no such thing exists today. The next thing that comes to mind is triggers, but then I would have to add those triggers to a lot of tables, all of which are related to the contract revision.
Maybe there are other approaches, such as a database used only for archiving, on which the user only has insert rights. When a contract revision becomes active, it is moved from the working database to the archive database (INSERT is allowed) and can never be altered again (DENY UPDATE, DELETE).
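Something along these lines (schema and role names invented):

    -- In the archive database: the application role may add rows but never change them.
    GRANT INSERT ON SCHEMA::dbo TO contract_app;
    DENY  UPDATE ON SCHEMA::dbo TO contract_app;
    DENY  DELETE ON SCHEMA::dbo TO contract_app;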
But maybe there are more ingenious options that I haven't thought of and you have, perhaps involving the CLR or the like.
So: how can I make a tree structure of records inside our T-SQL database effectively read-only, in a way that is maintenance-free, easy to understand, quick to set up, and as generic as possible?
Whatever you do (triggers, granted rights, ...), it can be overcome by a user with higher privileges; that much you know for sure...
Is this just to archive this data?
One idea that comes to mind is to create one big nested XML structure containing all the data and put it into a side table, related 1:1 to the main table. Then create an INSTEAD OF UPDATE, DELETE trigger that simply does nothing (see the sketch below).
You can still work with this data, though not quite as fast as when reading from physical tables.
If you want, you can even convert the XML to a string and calculate a hash code over it, stored in a different place, to check for manipulation.
The whole process might be done in one single Stored Procedure call.
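A rough sketch of the side table and trigger (names invented; the hash column is optional):

    -- Side table holding the frozen XML snapshot, 1:1 with the revision.
    CREATE TABLE dbo.RevisionArchive
    (
        RevisionID   int            NOT NULL PRIMARY KEY,
        Snapshot     xml            NOT NULL,
        SnapshotHash varbinary(20)  NULL  -- e.g. HASHBYTES('SHA1', ...) over the XML text
    );
    GO
    -- Silently swallow any attempt to change or delete an archived snapshot.
    CREATE TRIGGER dbo.tr_RevisionArchive_ReadOnly
    ON dbo.RevisionArchive
    INSTEAD OF UPDATE, DELETE
    AS
    BEGIN
        SET NOCOUNT ON;
        -- do nothing (or RAISERROR here if you prefer a loud failure)
    END;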

Building a ColdFusion Application with Version Control

We have a CMS built entirely in house. I'm the new web developer guy with literally 4 weeks of ColdFusion experience. What I want to do is add version control to our dynamic pages, something like what WordPress does: when you modify a page in WordPress, it makes some database entries and keeps a copy of each page when you save it. So if you create a page and modify it 6 times, all in one day, you have 7 different versions to roll back to if necessary. Is there an easy way to do something similar in ColdFusion?
Please note I'm not talking about source control or version control of the actual CFM files; all pages are generated dynamically on the back end using SQL.
Sure you can. Just stash the page content in another database table; you can do that with ColdFusion or via a trigger in the database.
One way (there are many) to do this is to add a column called "version" and a column called "live" to the table where you're storing all of your CMS pages.
The "live" column is optional, but it might make things easier for you in some ways when starting out.
The "version" column tells you which revision number of a document in the CMS you have. By a process of elimination, you could say the newest one (the highest version number) is the latest and live one. However, you may sometimes need to override this and make an old page live again, which is what the "live" flag is for.
So when you click "edit" on a page, you take the version that was clicked and copy it into a new, higher version number. It stays a draft until you click publish (at which point it's marked as live).
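A minimal sketch of that schema and the publish step (table and column names are just examples; the SQL assumes SQL Server):

    -- Each saved edit becomes a new row; at most one row per page is live.
    CREATE TABLE pages (
        page_id    INT      NOT NULL,
        version    INT      NOT NULL,
        live       BIT      NOT NULL DEFAULT 0,
        content    TEXT     NOT NULL,
        created_at DATETIME NOT NULL DEFAULT GETDATE(),
        PRIMARY KEY (page_id, version)
    );

    -- Publishing version 7 of page 42:
    UPDATE pages SET live = 0 WHERE page_id = 42;
    UPDATE pages SET live = 1 WHERE page_id = 42 AND version = 7;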
I hope that helps. This kind of approach should work with most schema designs, but I can't say for sure without seeing yours.
Jas' solution works well if most of the changes are to one field, for example the full text of a page of content.
However, if you have many fields, and people only tend to change one or two at a time, a new entry in to the table for each version can quickly get out of hand, with many almost identical versions in the history.
In this case, what I like to do is store the changes on a per-field basis in a ChangeHistory table. I include the table name, row ID, field name, previous value, new value, and who made the change and when.
This acts as a complete change history for any field in any table. I'm also able to view changes by record, by user, or by field.
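For example (a sketch; adjust the types to your database):

    CREATE TABLE ChangeHistory (
        change_id  INT IDENTITY  PRIMARY KEY,
        table_name VARCHAR(128)  NOT NULL,
        row_id     INT           NOT NULL,
        field_name VARCHAR(128)  NOT NULL,
        old_value  VARCHAR(MAX)  NULL,
        new_value  VARCHAR(MAX)  NULL,
        changed_by VARCHAR(64)   NOT NULL,
        changed_at DATETIME      NOT NULL DEFAULT GETDATE()
    );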
For realtime page generation from the database, your best bet is separate "live" and "versioned" tables, because keeping all data, live and versioned, in one table will negatively impact performance. If page generation relies on a single SELECT query from the live table, you can easily version the result set using ColdFusion's Web Distributed Data eXchange format (WDDX) via the <cfwddx> tag. WDDX is a serialized data format that works particularly well with ColdFusion data (sort of like Python's pickle, albeit without the ability to deal with objects).
The versioned table could be as such:
PageID
Created
Data
Where data is the column storing the WDDX.
Note that you could also use the built-in JSON support (serializeJSON & deserializeJSON) for version serialization, but cfwddx tends to be more stable.
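A sketch of the WDDX round trip described above (query, datasource, and variable names invented):

    <!--- Snapshot the live page row as WDDX before overwriting it --->
    <cfwddx action="cfml2wddx" input="#pageQuery#" output="wddxString">
    <cfquery datasource="cms">
        INSERT INTO page_versions (PageID, Created, Data)
        VALUES (<cfqueryparam value="#pageId#" cfsqltype="cf_sql_integer">,
                <cfqueryparam value="#now()#" cfsqltype="cf_sql_timestamp">,
                <cfqueryparam value="#wddxString#" cfsqltype="cf_sql_longvarchar">)
    </cfquery>

    <!--- Later, to restore a version: --->
    <cfwddx action="wddx2cfml" input="#storedData#" output="restoredQuery">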

CouchDB View, Map, Index, and Sequence

I think I read somewhere that when a view is requested, the "map" is only run across documents that have been added since the last time it was requested? How is this determined? I thought I saw something about a sequence number. Is this something that you can get at? It's not part of the UUID trailing the _rev field, is it?
Any way to force a 'recalc' of the entire View (across all records)?
The section about View Indexes in the Technical Overview is a great guide to this.
The view builder uses the database sequence ID to determine if the view group is fully up-to-date with the database. If not, the view engine examines all the database documents (in packed sequential order) changed since the last refresh. Documents are read in the order they occur in the disk file, reducing the frequency and cost of disk head seeks.
As documents are examined, their previous row values are removed from the view indexes, if they exist. If the document is selected by a view function, the function results are inserted into the view as a new row.
CouchDB first checks to see if anything has changed in the entire database using a sequence id (that gets updated whenever there's a change to any document in the database). If something has changed it goes looking for those documents and runs the map function on them.
There really shouldn't be any need to rebuild/regenerate your views, since they refresh incrementally as you modify your documents (note that a view won't update until you use it, though). That said, one way (and I'm sure there's a better one) would be to remove the design document describing the view and insert it again, seeing as a design document is (almost) no different from a normal document.
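Roughly like this (database and document names invented; you need the design document's current _rev to delete it):

    # Fetch the design document to get its current revision
    curl http://localhost:5984/mydb/_design/app
    # Delete it
    curl -X DELETE 'http://localhost:5984/mydb/_design/app?rev=<current rev>'
    # Put it back; the next request to a view in it brings the index up to date
    curl -X PUT http://localhost:5984/mydb/_design/app -d @design_app.json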

Ajax autocomplete extender populated from SQL

OK, first let me state that I have never used this control and this is also my first attempt at using a web service.
My dilemma is as follows. I need to query a database to get back a certain column and use that for my autocomplete. Obviously I don't want the query to run every time a user types another character in the textbox, so my best guess is to run the query once, then use that dataset, array, list, or whatever to filter for the autocomplete extender...
I am kind of lost; any suggestions?
Why not keep track of the query executed by the user in a session variable, then use that to filter any further results?
The trick to preventing the database from being overloaded, I think, is really just to limit how frequently the autocompleter is allowed to update; something like once per 2 seconds seems reasonable to me.
What I would do is this: store the current list returned by the query for word A server-side and tie it to a session variable. This should be basically the entire list, I would think. Then, for each new word typed, so long as the original word A still applies, you can filter the session data and return the filtered results without having to query again. So, basically, only query again when word A changes.
I'm using "session" in a PHP sense, you may be using a different language with different terminology, but the concept should be the same.
This depends upon how transactional your data store is. If you are looking up US states (a data collection that realistically would not change through the life of the application), then I would cache either a System.Collections.Generic.List<> or, if you wanted, a DataTable.
You could easily make that cache dependent upon an XML file or a database table, so that your extender always queries the data object cast from the cache, and the cache entry is only refreshed when the data source changes.
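For instance, a sketch in C# (cache key, file path, and loader are invented):

    using System.Data;
    using System.Web;
    using System.Web.Caching;

    public static class StateCache
    {
        public static DataTable GetStates(HttpContext context)
        {
            var states = context.Cache["states"] as DataTable;
            if (states == null)
            {
                states = LoadStatesFromDatabase(); // hypothetical query
                // Evict the entry automatically when the backing file changes.
                context.Cache.Insert(
                    "states", states,
                    new CacheDependency(context.Server.MapPath("~/App_Data/states.xml")));
            }
            return states;
        }

        private static DataTable LoadStatesFromDatabase()
        {
            return new DataTable("states"); // stub
        }
    }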
RAM is cheap and SQL is harder to scale than IIS, so cache everything in memory:
- your entire data source, if it is not too large to load in a reasonable time,
- precalculated data,
- autocomplete web service responses.
Depending on your desired autocomplete behavior and performance, you may want to precalculate data and create redundant structures optimized for reading, as sketched below. Make use of structures like SortedList (when you need something like SELECT TOP x ... WHERE z LIKE @query + '%'), Hashtable, and so on.
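A sketch of such a precalculated, read-optimized structure in C# (names invented); a sorted list plus binary search gives the LIKE 'prefix%' behavior:

    using System;
    using System.Collections.Generic;
    using System.Linq;

    public class PrefixIndex
    {
        private readonly List<string> _sorted;

        public PrefixIndex(IEnumerable<string> values)
        {
            _sorted = values.Select(v => v.ToLowerInvariant()).ToList();
            _sorted.Sort(StringComparer.Ordinal);
        }

        // Equivalent of: SELECT TOP (max) ... WHERE z LIKE @query + '%'
        public IEnumerable<string> StartsWith(string query, int max)
        {
            query = query.ToLowerInvariant();
            int i = _sorted.BinarySearch(query, StringComparer.Ordinal);
            if (i < 0) i = ~i; // first element >= query
            for (; i < _sorted.Count && max > 0; i++, max--)
            {
                if (!_sorted[i].StartsWith(query, StringComparison.Ordinal)) yield break;
                yield return _sorted[i];
            }
        }
    }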
While caching everything is certainly a good idea, your question about which data structure to use is an issue that wasn't fully answered here.
The best data structure for an autocomplete extender is a Trie.
You can find a good .NET article and code here.
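For illustration, a minimal trie in C# (a sketch, not the code from the linked article):

    using System.Collections.Generic;

    public class Trie
    {
        private sealed class Node
        {
            public readonly Dictionary<char, Node> Children = new Dictionary<char, Node>();
            public bool IsWord;
        }

        private readonly Node _root = new Node();

        public void Add(string word)
        {
            var node = _root;
            foreach (char c in word)
            {
                Node next;
                if (!node.Children.TryGetValue(c, out next))
                    node.Children[c] = next = new Node();
                node = next;
            }
            node.IsWord = true;
        }

        // All completions of the given prefix, depth-first.
        public IEnumerable<string> Complete(string prefix)
        {
            var node = _root;
            foreach (char c in prefix)
                if (!node.Children.TryGetValue(c, out node))
                    yield break;
            foreach (var word in Walk(node, prefix))
                yield return word;
        }

        private static IEnumerable<string> Walk(Node node, string soFar)
        {
            if (node.IsWord) yield return soFar;
            foreach (var pair in node.Children)
                foreach (var word in Walk(pair.Value, soFar + pair.Key))
                    yield return word;
        }
    }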