Tracking Community Table in Ektron

I have added a user to a community group in Ektron. Which table in Ektron stores this entry?
I also want to know whether any table entries change when an admin deletes that user from that community group.

Read my caveat first - about halfway down in the answer. Bottom line: you are better off working with the API if possible. If it doesn't appear to be possible, then work with Ektron Support so that they are aware of the use case and can try to work it into future versions of the product!
Using Ektron v9.0 sp1, I found the following:
This SQL script gives you the definition of the community group:
SELECT * FROM community_group_tbl WHERE group_id = 1
One of the fields in this table is folder_id. If you use this to look up the corresponding record in the content_folder_tbl, you should find a record where folder_type is equal to 6. This value corresponds with the Community folder type in EkEnumerations:
public enum FolderType
{
    Content = 0,
    Blog = 1,
    Domain = 2,
    DiscussionBoard = 3,
    DiscussionForum = 4,
    Root = 5,
    Community = 6,
    Media = 7,
    Calendar = 8,
    Catalog = 9,
    Program = 14,
}
My folder id was 80, so I used this SQL:
SELECT * FROM content_folder_tbl WHERE folder_id = 80
I also noticed that there is a record in taxonomy_tbl where folder_id is equal to 80:
SELECT * FROM taxonomy_tbl WHERE folder_id = 80
SELECT * FROM taxonomy_tbl WHERE taxonomy_parent_id = (SELECT TOP 1 taxonomy_id FROM taxonomy_tbl WHERE folder_id = 80)
I must admit, however -- I wasn't able to find the full membership list in the database. I found a table called user_to_group_tbl, but it seemed to only have the CMS Users that belonged to the group, with membership users apparently being stored somewhere else.
Now for the caveat: are you sure you want to be looking all this stuff up directly in the database? I know that for some scenarios it can be the best way to go, but the more complex the lookup, the more risky a direct SQL query becomes. Selecting a content block or a taxonomy folder are relatively straight-forward, but this lookup already looks complex. Multiple tables are involved, and you'll be bypassing all the business logic that Ektron has built in to its API.
This API code will get all the users from a community group:
var cgm = new CommunityGroupManager();
var users = cgm.GetUserList(1);
foreach (var directoryUser in users)
{
    Response.Write(directoryUser.Id + " - " + directoryUser.Username + "<br/>");
}
You could combine this code with the concept of CMS Extensions and set up event handlers like so:
public class UserGroupCustomExtension : UserGroupStrategy
{
    public override void OnAfterAddUserGroup(Ektron.Cms.UserGroupData userGroupData, CmsEventArgs eventArgs)
    {
        base.OnAfterAddUserGroup(userGroupData, eventArgs);
        if (userGroupData.GroupType == (int) Ektron.Cms.Common.EkEnumeration.GroupType.CommunityGroup)
        {
            // do stuff here...
        }
    }
}
I'm not sure if the exact event you're looking for is currently available, but there are a lot you can tap into. And if the event you need isn't available, be sure to post it as a feature request in their developer forum: http://developer.ektron.com/Forums/?v=f&f=107.


Redis Secondary Indicing maintenance add,remove are fine what about updates

All the articles and write-ups I've read talk about insertion only, and I have a problem.
I have implemented a secondary index mechanism using transactions (MULTI) and sets in order to save time when looking up entities; the index is saved as a set named by property name and value.
say we have a Person
Person Jack = new Person() { Id = 1, Name = "Jack", Age = 30 };
Person Jena = new Person() { Id = 2, Name = "Jena", Age = 30 };
When I choose to index the Age property and insert both, I look up the Age of 1 and 2, prep them, insert them into the corresponding set, and update the index in the same transaction.
age_30_index holds ids 1, 2.
When deleting Jack, I prep Jack's age, 30, remove id 1 from age_30_index, and remove Jack from its set, again within one transaction. All great, well... almost.
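For reference, the delete flow described above can be sketched with a StackExchange.Redis transaction; the key names (Person:1, age_30_index) follow the question's naming convention and are illustrative:

```csharp
// Sketch: remove Jack (id 1, age 30) and his index entry in one MULTI/EXEC.
// Assumes: using StackExchange.Redis; db is an IDatabase from a ConnectionMultiplexer.
var tran = db.CreateTransaction();
_ = tran.KeyDeleteAsync("Person:1");          // delete the entity itself
_ = tran.SetRemoveAsync("age_30_index", "1"); // pull id 1 out of the age index
bool committed = tran.Execute();              // queues both commands, then issues EXEC
```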
The problems start when I want to change the Age and update the cache. Look at the following scenario:
var p = GetEntity<Person>(id: 1)
p.Age = 31
UpdateEntity(p)
Now with the concept above, I will have age_30_index -> 1,2 and age_31_index -> 1.
That is because when updating the entity in cache I don't know the value of the property currently stored in cache, and therefore I can't remove it from the index.
another problem is deleting by Id, or deleting like this:
var p = GetEntity<Person>(id: 1)
p.Age = 31
DeleteEntity(p)
An easy solution would be using a distributed lock, locking by entity name: get the entities from cache, delete the indexes, and continue. But in the tests I ran, the performance is lacking.
Every other option I thought about is not thread safe because it's not atomic.
Is there any other way to achieve what I'm trying to do?
The project is C# .NET Framework with Redis on Windows; redisearch.io seems nice but it's out of scope, unfortunately.
The Hard Way
If RediSearch is really something you don't want to get into because you are running Redis on Windows, my big comment is that you are using the wrong data structure for storing age. Numerics generally should be stored in sorted sets so you can query them more easily.
So for your age you would have the sorted set Person:Age, and when you add Jack (let's say Jack's Id is 1 and his age is 30, as in your example) you would set your index like so:
ZADD Person:Age 30 1
then when you need to update Jack's age to 31, all you would need to do is update the member in the sorted set:
ZADD Person:Age 31 1
you'd then be able to query all the 31-year-olds with a ZRANGEBYSCORE, which in StackExchange.Redis looks like:
db.SortedSetRangeByScore("Person:Age", 31, 31, Exclude.None);
that's it, no need for you to look it up in an older collection and a newer one.
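For completeness, the insert and the update shown above are the same single call in StackExchange.Redis (assuming db is an IDatabase); ZADD simply overwrites the score when the member already exists:

```csharp
// Assumes: using StackExchange.Redis; db is an IDatabase.
db.SortedSetAdd("Person:Age", "1", 30); // insert Jack (id 1) at age 30
db.SortedSetAdd("Person:Age", "1", 31); // later: the same call updates his score to 31
```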
Your bigger concern about atomicity should be around whether you need to scale out Redis. If you need to do that, there's no good atomic way to update a distributed index without RediSearch. That's because all the keys in a multi-key operation or transaction have to be in the same slot, or Redis will not accept the operation. Because you are on Windows, I'm assuming you are not using a clustered multi-shard environment and that you are running either stand-alone or sentinel.
Assuming you are running stand-alone or sentinel, you can run the updates inside a Lua script, which would allow you to run all of your commands sequentially. Basically, your script would update your entity and then update all the accompanying indexes. So if you have a sorted set for age, as in your example (Person:Age), your script to update it would look something like:
-- KEYS[1] = the entity's hash key (e.g. Person:1)
-- ARGV[1] = field name ('Age'), ARGV[2] = new value, ARGV[3] = entity id
redis.call('HSET', KEYS[1], ARGV[1], ARGV[2])
redis.call('ZADD', 'Person:Age', ARGV[2], ARGV[3])
Naturally, you will have to swap this script out for whatever the reality of your index is, so you'll probably want to do some kind of dynamic script generation. I'd use scripting over MULTI because MULTI in StackExchange.Redis has slightly different behavior than what's expected by Redis due to its architecture.
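A minimal sketch of invoking such a script from StackExchange.Redis; the key names and argument layout here are illustrative, not from the original post:

```csharp
// Assumes: using StackExchange.Redis; db is an IDatabase.
// KEYS[1] = entity hash, KEYS[2] = age index; ARGV = field name, new value, entity id.
const string script = @"
    redis.call('HSET', KEYS[1], ARGV[1], ARGV[2])
    redis.call('ZADD', KEYS[2], ARGV[2], ARGV[3])";
db.ScriptEvaluate(script,
    new RedisKey[] { "Person:1", "Person:Age" },
    new RedisValue[] { "Age", 31, "1" });
```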
The best way with RediSearch and Redis.OM
The best way to handle this would be to use RediSearch. (I'm putting this part second, even though it really is the right answer, because it sounds like something you want to avoid given your environment.) With RediSearch + Redis.OM .NET you would just update the item in question and call collection.Save(). So for your example, the following does everything you're looking for (insert, retrieve, update, delete):
using Redis.OM;
using TestRedisOmUpdate;
var provider = new RedisConnectionProvider("redis://localhost:6379");
provider.Connection.CreateIndex(typeof(Person));
var collection = provider.RedisCollection<Person>();
//insert
var id = await collection.InsertAsync(new() {Name = "Jack", Age = 30});
//update
foreach (var person in collection.Where(p => p.Name == "Jack"))
{
    person.Age = 31;
}
await collection.SaveAsync();
//check
foreach (var person in collection.Where(p => p.Name == "Jack"))
{
    Console.WriteLine($"{person.Name} {person.Age}");
}
//delete
provider.Connection.Unlink(id);
that all would work off of the model:
[Document]
public class Person
{
    [Indexed]
    public string Name { get; set; }
    [Indexed]
    public int Age { get; set; }
}
And you don't have the headache or hassle of maintaining the indexes yourself.

LDAP_MATCHING_RULE_IN_CHAIN not working with default AD groups - Domain Users

In my program, I need to fetch all the AD groups for a user.
The current version of my program uses System.DirectoryServices.AccountManagement.UserPrincipal.GetAuthorizationGroups.
This worked well until we got a new customer with a much larger AD. There it is really slow (up to 60 seconds).
Now I've been looking around, and I've seen posts saying that AccountManagement is easy to use, but slow.
I also found that LDAP_MATCHING_RULE_IN_CHAIN should also fetch all the nested groups that a user is a member of, and that it is more performant, according to this link.
But I'm having an issue with the default groups that exist in AD.
For example, the group "Domain Users" is not returned by the function.
They also have a group "BDOC" that has "Domain Users" as a member. That group is also not returned.
Through GetAuthorizationGroups they are returned correctly.
I'm using following code to fetch the groups by user.
VB.NET:
Dim strFilter As String = String.Format("(member:1.2.840.113556.1.4.1941:={0})", oUserPrincipal.DistinguishedName)
Dim objSearcher As New DirectoryServices.DirectorySearcher("LDAP://" & oLDAPAuthenticationDetail.Domain & If(Not String.IsNullOrWhiteSpace(oLDAPAuthenticationDetail.Container), oLDAPAuthenticationDetail.Container, String.Empty))
objSearcher.PageSize = 1000
objSearcher.Filter = strFilter
objSearcher.SearchScope = DirectoryServices.SearchScope.Subtree
objSearcher.PropertiesToLoad.Add(sPropGuid)
objSearcher.PropertiesToLoad.Add(sPropDisplayName)
Dim colResults As DirectoryServices.SearchResultCollection = objSearcher.FindAll()
Afterwards I was testing with the script from the link to see whether it was possible to fetch all the users from the Domain Users group, by changing "member" to "memberOf" in the filter.
When I put the Domain Admins group in the filter, it shows the admins correctly.
When I put the Domain Users group in the filter, it returns nothing.
Powershell:
$userdn = 'CN=Domain Users,CN=Users,DC=acbenelux,DC=local'
$strFilter = "(memberOf:1.2.840.113556.1.4.1941:=$userdn)"
$objDomain = New-Object System.DirectoryServices.DirectoryEntry("LDAP://rootDSE")
$objSearcher = New-Object System.DirectoryServices.DirectorySearcher
$objSearcher.SearchRoot = "LDAP://$($objDomain.rootDomainNamingContext)"
$objSearcher.PageSize = 1000
$objSearcher.Filter = $strFilter
$objSearcher.SearchScope = "Base"
$colProplist = "name"
foreach ($i in $colPropList)
{
    $objSearcher.PropertiesToLoad.Add($i) > $null
}
$colResults = $objSearcher.FindAll()
foreach ($objResult in $colResults)
{
    $objItem = $objResult.Properties
    $objItem.name
}
I don't know what I'm doing wrong. Or is it maybe just not possible to fetch the "default groups" with that filter?
What is a good alternative then?
The default group is odd. It is not stored in memberOf, or even in the member attribute of the group. That's why your search won't find it. The default group is determined by the primaryGroupId of the user. That attribute stores the RID (the last part of the SID) of the group. It's kind of dumb, I know :)
I actually wrote an article about the 3 (yes 3) different ways someone can be a member of a group: What makes a member a member?
I also wrote an article about getting all of the groups a single user belongs to, and how to account for all 3 ways: Finding all of a user’s groups
For example, here is the C# code I put in that article about how to find the name of the primary group for a user (given a DirectoryEntry). It shouldn't be too hard to translate that to VB.NET:
private static string GetUserPrimaryGroup(DirectoryEntry de)
{
    de.RefreshCache(new[] { "primaryGroupID", "objectSid" });
    //Get the user's SID as a string
    var sid = new SecurityIdentifier((byte[])de.Properties["objectSid"].Value, 0).ToString();
    //Replace the RID portion of the user's SID with the primaryGroupId
    //so we're left with the group's SID
    sid = sid.Remove(sid.LastIndexOf("-", StringComparison.Ordinal) + 1);
    sid = sid + de.Properties["primaryGroupId"].Value;
    //Find the group by its SID
    var group = new DirectoryEntry($"LDAP://<SID={sid}>");
    group.RefreshCache(new[] { "cn" });
    return group.Properties["cn"].Value as string;
}
You are right that the AccountManagement namespace makes things easy, but it really does have terrible performance sometimes. I never use it anymore. I find that DirectoryEntry/DirectorySearcher gives you much more control of how often your code makes calls out to AD.
I have been meaning to write an article about writing high performance code with DirectoryEntry, but I haven't gotten around to it yet.
Update: So if you need the nested groups for the user, including membership through the primary group, then you can find the primary group first, then do an LDAP_MATCHING_RULE_IN_CHAIN search for groups that have both the user and the primary group as members:
(|(member:1.2.840.113556.1.4.1941:={userDN})(member:1.2.840.113556.1.4.1941:={primaryGroupDN}))
Update: If you want to include Authenticated Users in your search (edit the DC portion of the distinguishedName):
(|(member:1.2.840.113556.1.4.1941:=CN=S-1-5-11,CN=ForeignSecurityPrincipals,DC=domain,DC=com)(member:1.2.840.113556.1.4.1941:={userDN})(member:1.2.840.113556.1.4.1941:={primaryGroupDN}))
Note that you can also use the tokenGroups attribute of a user to find out all of the authentication groups for that user. The tokenGroups attribute is a constructed attribute, so you only get it if you specifically ask for it (with DirectoryEntry.RefreshCache(), or, in a search, by adding it to DirectorySearcher.PropertiesToLoad).
The caveat is that tokenGroups is a list of SIDs, not distinguishedNames, but you can bind directly to an object using the SID with the LDAP://<SID={sid}> syntax.
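A minimal sketch of that tokenGroups approach, assuming userEntry is a DirectoryEntry already bound to the user:

```csharp
// Assumes: using System.DirectoryServices; using System.Security.Principal;
// tokenGroups is a constructed attribute, so it must be requested explicitly.
userEntry.RefreshCache(new[] { "tokenGroups" });
foreach (byte[] sidBytes in userEntry.Properties["tokenGroups"])
{
    var sid = new SecurityIdentifier(sidBytes, 0);
    // Bind straight to the group by SID and read its name
    using (var group = new DirectoryEntry($"LDAP://<SID={sid}>"))
    {
        group.RefreshCache(new[] { "cn" });
        Console.WriteLine(group.Properties["cn"].Value);
    }
}
```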

Raven DB patch request not running on start

I currently have a RavenDB database with a model that has a specific set of fields I have been working with. I realized there are a field or two I need to add, and I have successfully used RavenDB's patch request once to initialize those fields on all the pre-existing documents in my database. I wanted to add another field, but I cannot get the patch code to run again to update my documents a second time. Is there any documentation or method to check the database at deploy time and see whether the models are the same; if not, patch the ones that differ but leave the others alone, and ensure that after an update the pre-existing models are not reset to what the patch is patching?
private void updateDb(IDocumentStore store)
{
    store.DatabaseCommands.UpdateByIndex("Interviews_ByCandidateInterviewAndDate",
        new IndexQuery
        {
            Query = "Candidate:"
        },
        new[]
        {
            new PatchRequest
            {
                Type = PatchCommandType.Set,
                Name = "IsArchived",
                Value = true
            },
            new PatchRequest
            {
                Type = PatchCommandType.Set,
                Name = "ArchiveDate",
                Value = null
            },
            new PatchRequest
            {
                Type = PatchCommandType.Set,
                Name = "TestingField",
                Value = 14
            }
        },
        new BulkOperationOptions
        {
            AllowStale = false
        });
}
The first two patch requests went through and show up in the database, but one thing I cannot tell is: if I were to run this patch again to get that third field into the model, would it change all the existing values for the first two fields back to true and null, or would it leave them the way they are? And more importantly, I cannot get this code to run again.
Any pointers in the right direction would be greatly appreciated! Thanks.
Your query is wrong:
Query = "Candidate:"
That is an invalid query and should return no results.
Use:
Query = "Candidate:*"

How do I install and setup the RavenDb index replication

I've looked at the questions and indeed the RavenDB docs. There's a little at RavenDb Index Replication Docs, but there doesn't seem to be any guidance on how/when/where to create the IndexReplicationDestination.
Our use case is very simple (it's a spike). We currently create new objects (Cows) and store them in Raven. We have a couple of queries created dynamically using LINQ (e.g. from c in session.Query<Cows> select c).
Now I can't see where I should define the index to replicate. Any ideas? I've got hold of the bundle and added it to the server directory (I'm assuming it should be in RavenDB.1.0.499\server\Plugins where RavenDB.1.0.499\server contains Raven.Server.exe)
Edit: Thanks Ayende... the answer below and in the ravendb groups helped. There was a facepalm moment. Regardless here's some detail that may help someone else. It really is very easy and indeed 'just works':
a) Ensure that the plugins are being picked up. You can view these in the statistics - available via the /localhost:8080/stats url (assuming default settings). You should see entries in 'Extensions' regarding to the IndexReplication bundle.
If not present ensure the versions of the DLLs (bundle and server) are the same
b) Ensure the index you want to replicate has been created. They can be created via Client API or HTTP API.
Client API:
public class Cows_List : AbstractIndexCreationTask<Cow>
{
    public Cows_List()
    {
        Map = cows => from c in cows select new { c.Status };
        Index(x => x.Status, FieldIndexing.Analyzed);
    }
}
HTTP API (in studio):
//Cows/List
docs.Cows
.Select(q => new {Status = q.Status})
c) Create the replication document. The clue here is DOCUMENT. Like everything stored, it too is a document, so after creating it, it must be stored in the DB:
var replicationDocument = new Raven.Bundles.IndexReplication.Data.IndexReplicationDestination
{
    Id = "Raven/IndexReplication/Cows_List",
    ColumnsMapping = { { "Status", "Status" } },
    ConnectionStringName = "Reports",
    PrimaryKeyColumnName = "Id",
    TableName = "cowSummaries"
};
session.Store(replicationDocument);
session.SaveChanges();
d) Ensure you have the following in the CLIENT (e.g. MVC app or Console)
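The snippet itself is missing from the original post. Given the ConnectionStringName = "Reports" used in step (c), it was presumably a connection string entry in the client's app.config/web.config, something like this (the server and catalog names are placeholders):

```xml
<connectionStrings>
  <add name="Reports"
       connectionString="Data Source=.;Initial Catalog=cowReports;Integrated Security=SSPI"
       providerName="System.Data.SqlClient" />
</connectionStrings>
```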
e) Create the RDBMS schema. I have a table in 'cowReports' :
CREATE TABLE [dbo].[cowSummaries](
[Id] nvarchar NULL,
[Status] nchar NULL)
My particular problem was not adding the index document to the store. I know. facepalm. Of course everything is a document. Works like a charm!
You need to define two things:
a) An index that transforms the document to the row shape.
b) A document that tells RavenDB the connection string name, the table name, and the columns to map.

Nhibernate Tag Cloud Query

This has been a 2 week battle for me so far with no luck. :(
Let me first state my objective: to be able to search entities which are tagged "foo" and "bar". You wouldn't think that was too hard, right?
I know this can be done easily with HQL but since this is a dynamically built search query that is not an option. First some code:
public class Foo
{
    public virtual int Id { get; set; }
    public virtual IList<Tag> Tags { get; set; }
}
public class Tag
{
    public virtual int Id { get; set; }
    public virtual string Text { get; set; }
}
Mapped as a many-to-many because the Tag class is used on many different types. Hence no bidirectional reference.
So I build my detached criteria up using an abstract filter class. Let's assume for simplicity that I am just searching for Foos with tags "Apples" (TagId 1) && "Oranges" (TagId 3). This would look something like:
SQL:
SELECT ft.FooId
FROM Foo_Tags ft
WHERE ft.TagId IN (1, 3)
GROUP BY ft.FooId
HAVING COUNT(DISTINCT ft.TagId) = 2; /*Number of items we are looking for*/
Criteria
var idsIn = new List<int>() { 1, 3 };
var dc = DetachedCriteria.For(typeof(Foo), "f")
    .CreateCriteria("Tags", "t")
    .Add(Restrictions.InG("t.Id", idsIn))
    .SetProjection(Projections.ProjectionList()
        .Add(Projections.Property("f.Id"))
        .Add(Projections.RowCount(), "RowCount")
        .Add(Projections.GroupProperty("f.Id")))
    .ProjectionCriteria.Add(Restrictions.Eq("RowCount", idsIn.Count));
var c = Session.CreateCriteria(typeof(Foo)).Add(Subqueries.PropertyIn("Id", dc));
Basically this is creating a DC that projects a list of Foo Ids which have all the tags specified.
This compiled in NH 2.0.1 but didn't work as it complained it couldn't find Property "RowCount" on class Foo.
After reading this post I was hopeful that this might be fixed in 2.1.0 so I upgraded. To my extreme disappointment I discovered that ProjectionCriteria has been removed from DetachedCriteria and I cannot figure out how to make the dynamic query building work without DetachedCriteria.
So I tried to think how to write the same query without needing the infamous Having clause. It can be done with multiple joins on the tag table. Hooray I thought that's pretty simple. So I rewrote it to look like this.
var idsIn = new List<int>() { 1, 3 };
var dc = DetachedCriteria.For(typeof(Foo), "f")
    .CreateCriteria("Tags", "t1").Add(Restrictions.Eq("t1.Id", idsIn[0]))
    .CreateCriteria("Tags", "t2").Add(Restrictions.Eq("t2.Id", idsIn[1]));
This was a vain attempt to produce the SQL below, which would do the job (I realise it's not quite correct):
SELECT f.Id
FROM Foo f
JOIN Foo_Tags ft1
ON ft1.FooId = f.Id
AND ft1.TagId = 1
JOIN Foo_Tags ft2
ON ft2.FooId = f.Id
AND ft2.TagId = 3
Unfortunately I fell at the first hurdle with this attempt, receiving the exception "Duplicate Association Path". Reading around, this seems to be an ancient and still very real bug/limitation.
What am I missing?
I am starting to curse NHibernate's name for making what you would think is such a simple and common query so difficult. Please help, anyone who has done this before: how did you get around NHibernate's limitations?
Forget reputation and a bounty. If someone does me a solid on this, I will send you a six pack for your trouble.
I managed to get it working like this :
var dc = DetachedCriteria.For<Foo>("f")
    .CreateCriteria("Tags", "t")
    .Add(Restrictions.InG("t.Id", idsIn))
    .SetProjection(Projections.SqlGroupProjection(
        "{alias}.FooId",
        "{alias}.FooId having count(distinct t1_.TagId) = " + idsIn.Count,
        new[] { "Id" },
        new IType[] { NHibernateUtil.Int32 }));
The only problem here is the hard-coded alias in count(distinct t1_.TagId), but I think the alias should be generated the same way every time in this DetachedCriteria, so you should be on the safe side hard-coding it.
Ian,
Since I'm not sure what DB backend you are using, can you run some sort of trace against the produced SQL query and take a look at the SQL to figure out what went wrong?
I know I've done this in the past to understand how Linq-2-SQL and Linq-2-Entities work, and I've been able to tweak certain cases to improve the data access, as well as to understand why something wasn't working as initially expected.