Ektron: How to change the Framework API's default page size?

I've noticed that when pulling content from the Framework API there is a default page size of 50. I've tried adjusting the "ek_PageSize" AppSetting, but that doesn't seem to affect the API.
Basically, in all my code I need to create a new PagingInfo object to update the number of items being returned.
var criteria = new ContentTaxonomyCriteria(ContentProperty.Id, EkEnumeration.OrderByDirection.Descending);
criteria.PagingInfo = new PagingInfo(100);
Does anyone know if there's a way to change that default value (for the entire site) without having to modify the PagingInfo object on the criteria on each call?

You could create a factory method that creates your criteria object. Then instead of instantiating the criteria object, you would call this factory method. From here, you can define an AppSetting that is unique to your code. There are several types of criteria objects used by the ContentManager, so you could even make the factory method generic.
private T GetContentCriteria<T>() where T : ContentCriteria, new()
{
    // Sorting by Id descending will ensure newer content blocks are favored over older content.
    var criteria = new T
    {
        OrderByField = ContentProperty.Id,
        OrderByDirection = EkEnumeration.OrderByDirection.Descending
    };

    int maxRecords;
    int.TryParse(ConfigurationManager.AppSettings["CmsContentService_PageSize"], out maxRecords);

    // Only set the PagingInfo if a valid value exists in AppSettings.
    // The Framework API's default page size of 50 will be used otherwise.
    if (maxRecords > 0)
    {
        criteria.PagingInfo = new PagingInfo(maxRecords);
    }

    return criteria;
}
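A call site might then look like this (a sketch only; it assumes the usual Framework API pattern of handing the criteria to a ContentManager):

// Hypothetical call site: the criteria arrives pre-configured with ordering
// and the AppSettings-driven page size.
var criteria = GetContentCriteria<ContentCriteria>();
var contentManager = new Ektron.Cms.Framework.Content.ContentManager();
List<ContentData> items = contentManager.GetList(criteria);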

There's a page size AppSetting in the web.config that I believe controls this default page size as well. But be warned: this will also change page sizes within the Workarea.
So if you set it to 100, then you'll see 100 users, content items, aliases, etc. per page instead of the default 50.
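If you do go that route, the setting lives in the appSettings section of web.config. A minimal sketch, assuming the "ek_PageSize" key mentioned in the question is the one this answer means:

<appSettings>
    <!-- Presumed CMS-wide page size; note it also changes Workarea paging -->
    <add key="ek_PageSize" value="100" />
</appSettings>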

Is there a Session-unique identifier that can be used as a cache key name?

I'm porting a legacy ASP.NET WebForms app to Razor. It had stored an object in the Session collection. Session storage is now limited to byte[] or string. One technique is to serialize objects and store as a string, but there are caveats. Another article suggested using one of the alternative caching options, so I'm trying to use MemoryCache.
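(For reference, the serialize-and-store technique mentioned above might look like the following; a minimal sketch assuming System.Text.Json and the SessionExtensions in Microsoft.AspNetCore.Http, with a serializable Foo.)

// Store: serialize the object graph into the string-based session
HttpContext.Session.SetString("searchResults", JsonSerializer.Serialize(results));

// Retrieve: deserialize it back out (null handling omitted for brevity)
var cached = JsonSerializer.Deserialize<List<Foo>>(
    HttpContext.Session.GetString("searchResults"));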
For this to work as a Session replacement, I need a key name that's unique to the user and their session.
I thought I'd use Session.Id for this, like so:
ObjectCache _cache = System.Runtime.Caching.MemoryCache.Default;
string _keyName = HttpContext.Session.Id + "$searchResults";
// (PROBLEM: Session.Id changes per refresh)

// hit a database for a set of un-paged results
List<Foo> results = GetSearchResults(query);
if (results.Count > 0)
{
    // add to cache
    _cache.Set(_keyName, results, DateTimeOffset.Now.AddMinutes(20));
    BindResults();
}
// Called from multiple places; wish to use cached copy of results
private void BindResults()
{
    CacheItem cacheItem = _cache.GetCacheItem(_keyName);
    if (cacheItem != null) // in cache
    {
        List<Foo> results = (List<Foo>)cacheItem.Value;
        DrawResults(results);
    }
}
...but when testing, I see that any browser refresh or page-link click generates a new Session.Id. That's problematic.
Is there another built-in property somewhere I can use to identify the user's session and use for this key name purpose? One that will stay static through browser refreshes and clicks within the web app?
Thanks!
The answer Yiyi You linked to explains it -- the Session.Id won't be static until you first put something into the Session collection. Like so:
HttpContext.Session.Set("Foo", new byte[] { 1, 2, 3, 4, 5 });
_keyName = HttpContext.Session.Id + "_searchResults";
ASP.NET Core: Session Id Always Changes
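If you'd rather not remember to write to the session on each page, one option is a small middleware registered after UseSession that pins the session once per user. A sketch, with a made-up "__SessionPin" key:

public class SessionPinMiddleware
{
    private readonly RequestDelegate _next;
    public SessionPinMiddleware(RequestDelegate next) => _next = next;

    public async Task InvokeAsync(HttpContext context)
    {
        // Writing any value makes the session "real", so Session.Id stops changing.
        if (!context.Session.TryGetValue("__SessionPin", out _))
        {
            context.Session.Set("__SessionPin", new byte[] { 1 });
        }
        await _next(context);
    }
}

// In the pipeline: app.UseSession(); app.UseMiddleware<SessionPinMiddleware>();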

Efficient way to bring parameters into controller action URLs

In ASP.NET Core you have multiple ways to generate a URL for a controller action, the newest being tag helpers.
With tag helpers for GET requests, asp-route-* attributes are used to specify route parameters. From what I understand, it is not supported to use complex objects as route values. And sometimes a page could have many different links pointing to itself, possibly with a minor addition to the URL for each link.
To me it seems wrong that any modification to a controller action's signature requires changing all tag helpers using that action. I.e., if one adds string query to the controller action, one must add Query to the model and add asp-route-query="@Model.Query" in 20 different places spread across cshtml files. This approach sets the code up for future bugs.
Is there a more elegant way of handling this? For example, some way of having a request object? (I.e., a request object from the controller can be put into the Model and fed back into the action URL.)
In my other answer I found a way to provide the request object through the Model.
From the SO article @tseng provided I found a smaller solution. This one does not use a request object in the Model, but retains all route parameters unless explicitly overridden. It won't allow you to specify the route through a request object, which is most often not what you want anyway. But it solves the problem in the OP.
<a asp-controller="Test" asp-action="HelloWorld" asp-all-route-data="@Context.GetQueryParameters()" asp-route-somestring="optional override">Link</a>
This requires an extension method to convert query parameters into a dictionary.
public static Dictionary<string, string> GetQueryParameters(this HttpContext context)
{
    return context.Request.Query.ToDictionary(d => d.Key, d => d.Value.ToString());
}
There's a rationale here that I don't think you're getting. GET requests are intentionally simplistic. They are supposed to describe a specific resource. They do not have bodies, because you're not supposed to be passing complex data objects in the first place. That's not how the HTTP protocol is designed.
Additionally, query string params should generally be optional. If some bit of data is required in order to identify the resource, it should be part of the main URI (i.e. the path). As such, neglecting to add something like a query param should simply result in the full data set being returned instead of some subset defined by the query. Or, in the case of something like a search page, it generally will result in a form being presented to the user to collect the query. In other words, your action should account for that param being missing and handle that situation accordingly.
Long and short: no, there is no "elegant" way to handle this, I suppose, but the reason for that is that there doesn't need to be. If you're designing your routes and actions correctly, it's generally not an issue.
To solve this I'd like to have a request object used as route parameters for the anchor TagHelper. This means that all route links are defined in only one location, not throughout the solution. Changes made to the request object model automatically propagate to the URL for <a asp-action>-tags.
The benefit of this is reducing the number of places in the code we need to change when changing the method signature of a controller action. We localize the change to the model and action only.
I thought writing a tag helper for a custom asp-object-route attribute could help. I looked into chaining tag helpers so mine could run before AnchorTagHelper, but that does not work. Creating an instance and nesting them requires me to hardcode all properties of ASP.NET Core's AnchorTagHelper, which may require maintenance in the future. I also considered using a custom method with UrlHelper to build the URL, but then the TagHelper would not work.
The solution I landed on is to use asp-all-route-data, as suggested by @kirk-larkin, along with an extension method for serializing to a Dictionary. Any asp-route-* attribute will override values in asp-all-route-data.
<a asp-controller="Test" asp-action="HelloWorld" asp-all-route-data="@Model.RouteParameters.ToDictionary()" asp-route-somestring="optional override">Link</a>
ASP.NET Core can deserialize complex objects (including lists and child objects).
public IActionResult HelloWorld(HelloWorldRequest request) { }
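For illustration, a hypothetical HelloWorldRequest might look like this (the property names are invented; model binding fills them from query string keys such as ?query=abc&page=2):

public class HelloWorldRequest
{
    public string Query { get; set; }
    public int? Page { get; set; }

    // Child collections bind from keys like tags[0]=a&tags[1]=b
    public List<string> Tags { get; set; }
}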
The request object (when used) would typically have only a few simple properties, but I thought it would be nice if it supported child objects as well. Serializing an object into a Dictionary is usually done using reflection, which can be slow. I figured Newtonsoft.Json would be more optimized than writing simple reflection code myself, and found this implementation ready to go:
public static class ExtensionMethods
{
    // Requires: using System.Collections.Generic; using System.Globalization;
    // using System.Linq; using Newtonsoft.Json.Linq;
    public static IDictionary<string, string> ToDictionary(this object metaToken)
    {
        // From https://geeklearning.io/serialize-an-object-to-an-url-encoded-string-in-csharp/
        if (metaToken == null)
        {
            return null;
        }

        JToken token = metaToken as JToken;
        if (token == null)
        {
            return ToDictionary(JObject.FromObject(metaToken));
        }

        if (token.HasValues)
        {
            var contentData = new Dictionary<string, string>();
            foreach (var child in token.Children().ToList())
            {
                var childContent = child.ToDictionary();
                if (childContent != null)
                {
                    contentData = contentData.Concat(childContent)
                        .ToDictionary(k => k.Key, v => v.Value);
                }
            }
            return contentData;
        }

        var jValue = token as JValue;
        if (jValue?.Value == null)
        {
            return null;
        }

        var value = jValue.Type == JTokenType.Date ?
            jValue.ToString("o", CultureInfo.InvariantCulture) :
            jValue.ToString(CultureInfo.InvariantCulture);

        return new Dictionary<string, string> { { token.Path, value } };
    }
}
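Usage might then look like this (a sketch, reusing the hypothetical request object from above):

var request = new HelloWorldRequest { Query = "abc", Page = 2 };

// Produces flat keys such as "Query" and "Page" (and "Tags[0]" for children),
// ready to hand to asp-all-route-data.
IDictionary<string, string> routeValues = request.ToDictionary();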

How do I add a lazy loaded column in EntitySpaces?

If you do not have experience with, or aren't currently using, the EntitySpaces ("ES") ORM, this question is not meant for you.
I have a 10-year-old application that, after 4 years, now needs my attention. My application uses a now-defunct ORM called EntitySpaces, and I'm hoping if you're reading this you have experience with it or maybe still use it too! Switching to another ORM is not an option at this time, so I need to find a way to make this work.
Between the time I last actively worked on my application and now (ES version 2012-09-30), EntitySpaces has gone through a significant change in the underlying ADO.NET back-end. The scenario I'm seeking help on is when an entity collection is loaded with only a subset of its columns:
_products = new ProductCollection();
_products.Query.SelectAllExcept(_products.Query.ImageData);
_products.LoadAll();
I then override the properties that weren't loaded in the initial select so that I may lazy-load them in the accessor. Here is an example of one such lazy-loaded property that used to work perfectly.
public override byte[] ImageData
{
    get
    {
        bool rowIsDirty = base.es.RowState != DataRowState.Unchanged;

        // Check if we have loaded the blob data
        if (base.Row.Table != null && base.Row.Table.Columns.Contains(ProductMetadata.ColumnNames.ImageData) == false)
        {
            // add the column before we can save data to the entity
            this.Row.Table.Columns.Add(ProductMetadata.ColumnNames.ImageData, typeof(byte[]));
        }

        if (base.Row[ProductMetadata.ColumnNames.ImageData] is System.DBNull)
        {
            // Need to load the data
            Product product = new Product();
            product.Query.Select(product.Query.ImageData).Where(product.Query.ProductID == base.ProductID);
            if (product.Query.Load())
            {
                if (product.Row[ProductMetadata.ColumnNames.ImageData] is System.DBNull == false)
                {
                    base.ImageData = product.ImageData;
                    if (rowIsDirty == false)
                    {
                        base.AcceptChanges();
                    }
                }
            }
        }

        return base.ImageData;
    }
    set
    {
        base.ImageData = value;
    }
}
The interesting part is where I add the column to the underlying DataTable DataColumn collection:
this.Row.Table.Columns.Add(ProductMetadata.ColumnNames.ImageData, typeof(byte[]));
I had to comment out all the ADO.NET-related stuff from that accessor when I updated to the current (and open source) edition of ES (version 2012-09-30). That means the "ImageData" column isn't properly configured, and when I change its data and attempt to save the entity I receive the following error:
Column 'ImageData' does not belong to table .
I've spent a few days looking through the ES source and experimenting, and it appears that they no longer use a DataTable to back the entities, but instead use an 'esSmartDictionary'.
My question is: Is there a known, supported way in the new version of ES to accomplish the same lazy-loaded behavior that used to work? That is, can I update a property (i.e. column) that wasn't included in the initial select by telling the ORM to add it to the entity backing store?
After analyzing how ES constructs the DataTable that it uses for updates, it became clear that columns not included in the initial select (i.e. load) operation needed to be added to the esEntityCollectionBase.SelectedColumns dictionary. I added the following method to handle this.
/// <summary>
/// Appends the specified column to the SelectedColumns dictionary. The selected columns collection is
/// important as it serves as the basis for DataTable creation when updating an entity collection. If you've
/// lazy loaded a column (i.e. it wasn't included in the initial select) it will not be automatically
/// included in the selected columns collection. If you want to update the collection including the lazy
/// loaded column you need to use this method to add the column to the SelectedColumns list.
/// </summary>
/// <param name="columnName">The lazy loaded column name. Note: Use the {yourentityname}Metadata.ColumnNames
/// class to access the column names.</param>
public void AddLazyLoadedColumn(string columnName)
{
    if (this.selectedColumns == null)
    {
        throw new Exception(
            "You can only append a lazy-loaded Column to a partially selected entity collection");
    }

    if (this.selectedColumns.ContainsKey(columnName))
    {
        return;
    }

    // Using the count because I can't determine what the value is supposed to be or how it's used. From
    // what I can tell it's just the ordinal of the column as it was selected: if 8 columns were selected,
    // the values would be 1 through 8.
    int columnValue = selectedColumns.Count;
    this.selectedColumns.Add(columnName, columnValue);
}
You would use this method like this:
public override System.Byte[] ImageData
{
    get
    {
        var collection = this.GetCollection();
        if (collection != null)
        {
            collection.AddLazyLoadedColumn(ProductMetadata.ColumnNames.ImageData);
        }
        ...
It's a shame that nobody is interested in the open source EntitySpaces. I'd be happy to work on it if I thought it had a future, but it doesn't appear so. :(
I'm still interested in any other approaches or insight from other users.

Datatables: How to reload server-side data with additional params

I have a table which gets its data server-side, using custom server-side initialization params which vary depending upon which report is produced. Once the table is generated, the user may open a popup in which they can add multiple additional filters on which to search. I need to be able to use the same initialization params as the original table, and add the new ones using fnServerParams.
I can't figure out how to get the original initialization params using the datatables API. I had thought I could get a reference to the object, get the settings using fnSettings, and pass those settings into a new datatables instance like so:
var oSettings = $('#myTable').dataTable().fnSettings();
// add additional params to the oSettings object
$('#myTable').dataTable(oSettings);
but the variable returned through fnSettings isn't what I need and doesn't work.
At this point, it seems like I'm going to re-architect things so that I can pass the initialization params around as a variable and add params as needed, unless somebody can steer me in the right direction.
EDIT:
Following tduchateau's answer below, I was able to get partway there by using
var oTable = $('#myTable').dataTable(),
    oSettings = oTable.fnSettings(),
    oParams = oTable.oApi._fnAjaxParameters(oSettings);

oParams.push({ 'name': 'my-new-filter', 'value': 'my-new-filter-value' });
and can confirm that my new serverside params are added on to the existing params.
However, I'm still not quite there.
$('#myTable').dataTable(oSettings);
gives the error:
DataTables warning(table id = 'myTable'): Cannot reinitialise DataTable.
To retrieve the DataTables object for this table, please pass either no arguments
to the dataTable() function, or set bRetrieve to true.
Alternatively, to destroy the old table and create a new one, set bDestroy to true.
Setting
oTable.bRetrieve = true;
doesn't get rid of the error, and setting
oSettings.bRetrieve = true;
causes the table to not execute the ajax call. Setting
oSettings.bDestroy = true;
loses all the custom params, while setting
oTable.bDestroy = true;
returns the above error. And simply calling
oTable.fnDraw();
causes the table to be redrawn with its original settings.
Finally got it to work using fnServerParams. Note that I'm both deleting unnecessary params and adding new ones, using a URL var object:
"fnServerParams": function ( aoData ) {
var l = aoData.length;
// remove unneeded server params
for (var i = 0; i < l; ++i) {
// if param name starts with bRegex_, sSearch_, mDataProp_, bSearchable_, or bSortable_, remove it from the array
if (aoData[i].name.search(/bRegex_|sSearch_|mDataProp_|bSearchable_|bSortable_/) !== -1 ){
aoData.splice(i, 1);
// since we've removed an element from the array, we need to decrement both the index and the length vars
--i;
--l;
}
}
// add the url variables to the server array
for (i in oUrlvars) {
aoData.push( { "name": i, "value": oUrlvars[i]} );
}
}
This is normally the right way to retrieve the initialization settings:
var oSettings = oTable.fnSettings();
Why is it not what you need? What's wrong with these params?
If you need to filter data depending on your additional filters, you can complete the array of "AJAX data" sent to the server using this:
var oTable = $('#myTable').dataTable();
var oParams = oTable.oApi._fnAjaxParameters( oTable.fnSettings() );
oParams.push({ name: "your-additional-param-name", value: "your-additional-param-value" });
You can see some example usages in the TableTools plugin.
But I'm not sure this is what you need... :-)

belongsTo only being set on first and last member of hasMany

My adapter uses findHasMany to load child records for a hasMany relationship.
My findHasMany adapter method is directly based on the test case for findHasMany. It retrieves the contents of the hasMany on demand, and eventually does the following two operations:
store.loadMany(type, hashes);
// ...
store.loadHasMany(record, relationship.key, ids);
(The full code for the findHasMany is below, in case the issue is there, but I don't think so.)
The really strange behavior is: it seems that somewhere within loadHasMany (or in some subsequent async process) only the first and last child records get their inverse belongsTo property set, even though all the child records are added to the hasMany side. I.e., if posts/1 has 10 comments, this is what I get, after everything has loaded:
var post = App.Posts.find('1');
post.get('comments').objectAt(0).get('post'); // <App.Post:ember123:1>
post.get('comments').objectAt(1).get('post'); // null
post.get('comments').objectAt(2).get('post'); // null
// ...
post.get('comments').objectAt(8).get('post'); // null
post.get('comments').objectAt(9).get('post'); // <App.Post:ember123:1>
My adapter is a subclass of DS.RESTAdapter, and I don't think I'm overloading anything in my adapter or serializer that would cause this behavior.
Has anybody seen something like this before? It's weird enough that I thought someone might know why it's happening.
Extra
Using findHasMany lets me load the contents of the hasMany only when the property is accessed (valuable in my case because calculating the array of IDs would be expensive). So say I have the classic posts/comments example models; the server returns this for posts/1:
{
    post: {
        id: 1,
        text: "Linkbait!",
        comments: "/posts/1/comments"
    }
}
Then my adapter can retrieve /posts/1/comments on demand, which looks like this:
{
    comments: [
        {
            id: 201,
            text: "Nuh uh"
        },
        {
            id: 202,
            text: "Yeah huh"
        },
        {
            id: 203,
            text: "Nazi Germany"
        }
    ]
}
Here is the code for the findHasMany method in my adapter:
findHasMany: function(store, record, relationship, details) {
    var type = relationship.type;
    var root = this.rootForType(type);
    var url = (typeof(details) == 'string' || details instanceof String) ? details : this.buildURL(root);
    var query = relationship.options.query ? relationship.options.query(record) : {};

    this.ajax(url, "GET", {
        data: query,
        success: function(json) {
            var serializer = this.get('serializer');
            var pluralRoot = serializer.pluralize(root);
            var hashes = json[pluralRoot]; // FIXME: Should call some serializer method to get this?
            store.loadMany(type, hashes);

            // add ids to record...
            var ids = [];
            var len = hashes.length;
            for (var i = 0; i < len; i++) {
                ids.push(serializer.extractId(type, hashes[i]));
            }
            store.loadHasMany(record, relationship.key, ids);
        }
    });
}
Solution
Override the DS.RelationshipChange.getByReference method by inserting the following code into your app:
DS.RelationshipChange.prototype.getByReference = function(reference) {
    var store = this.store;

    // return null or undefined if the original reference was null or undefined
    if (!reference) { return reference; }

    if (reference.record) {
        return reference.record;
    }

    return store.materializeRecord(reference);
};
Yes, this is overriding a private, internal method in Ember Data. Yes, it may break at any time with any update. I'm pretty sure this is a bug in Ember Data, but I'm not 100% certain this is the right solution. But it does solve this problem, and possibly other relationship-related problems.
This fix is designed to be applied to Ember Data master as of 29 Apr 2013.
Reason
DS.Store.loadHasMany calls DS.Model.hasManyDidChange, which retrieves references for all the child records and then sets the hasMany's content to the array of references. This kicks off a chain of observers, eventually calling DS.ManyArray.arrayContentDidChange, in which the first line is this._super.apply(this, arguments);, calling the superclass method Ember.Array.arrayContentDidChange. That Ember.Array method includes an optimization that caches the first and last object in the array and calls objectAt on only those two array members. So there's the part that singles out the first and last record.
Next, since DS.RecordArray implements an objectAtContent method (from Ember.ArrayProxy), the objectAtContent implementation calls DS.Store.recordForReference, which in turn calls DS.Store.materializeRecord. This last function adds a record property to the reference that is passed in as a side effect.
Now we get to what I think is a bug. In DS.ManyArray.arrayContentDidChange, after calling the superclass method, it loops through all the new references and creates a DS.RelationshipChangeAdd instance that encapsulates the owner and child record references. But the first line inside the loop is:
var reference = get(this, 'content').objectAt(i);
Unlike what happens above to the first and last record, this calls objectAt directly on the Ember.NativeArray and bypasses the ArrayProxy methods including the objectAtContent hook, which means that DS.Store.materializeRecord--which adds the record property on the reference object--may have never been called on some references.
Next, the relationship changes created in the loop are immediately afterward (in the same run loop) applied with this call tree: DS.RelationshipChangeAdd.sync -> DS.RelationshipChange.getFirstRecord -> DS.RelationshipChange.getByReference. This last method expects the reference object to have a record property. However, the record property is only set on the first and last reference objects, for reasons explained above. Therefore, for all but the first and last records, the relationship fails to be established because it doesn't have access to the child record object!
The above fix calls DS.Store.materializeRecord whenever the record property doesn't exist on the reference. The last line in the function is the only thing added. On the one hand, it looks like this was the original intention: that var store = this.store; line in the original declares a variable that isn't otherwise used in the function, so what's it there for? Also, without the added line, the function doesn't always return a value, which is a little unusual for a function which is expected to do so. On the other hand, this could lead to mass materialization in some cases where that would be undesirable (but, the relationships just won't work without it in some cases, it seems).
Possibly related
The "chain of observers" I mentioned takes a bit of an odd path. The initiating event was setting the content property on a DS.ManyArray, which extends Ember.ArrayProxy--therefore the content property has a dependent property arrangedContent. Importantly, the observers on arrangedContent are executed before observers on content are executed (see Ember.propertyDidChange). However, the default implementation of Ember.ArrayProxy.arrangedContentArrayDidChange simply calls Ember.Array.arrayContentDidChange, which DS.ManyArray implements! The point being, this looks like a recipe for some code to execute in an unintended order. That is, I think Ember.ManyArray.arrayContentDidChange may getting executed earlier than expected. If this is the case, the above mentioned code that expects the record property to already exist on all references may have been expecting this reasonably, as one of the observers directly on the content property may call DS.Store.materializeRecord on each reference. But I haven't dug deep enough to find out if this is true.