Retrieving Documents from Documentum based on the folder's name

I am building a system that integrates entities from different data stores onto a unified interface. The eventual target is a system capable of querying objects located in multiple data stores on the basis of a unique key. One of our data stores is Documentum, in which we keep all of our documents in folders named by their unique keys. The different data stores all use the same unique name for a particular entity. The only showstopper is getting the list of documents associated with the unique name of a certain entity and retrieving those documents from Documentum. I am searching for a way (a query or a procedure) to get this done.

You can retrieve all the documents under a folder (and, with DESCEND, its subfolders) using the FOLDER predicate in a DQL query:
select * from dm_document where folder('/mycabinet/myfolders/uniquefolder', DESCEND);

Another way to accomplish this is to add a new Documentum Type with a custom attribute to store your unique key. Then you can query directly on that attribute. If you would like to try this route, you should create a new Type that inherits from dm_document.
Then, your query could be like this:
select * from my_new_type where my_custom_attribute = <unique_key>
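If you go this route, the new type itself can also be created with DQL. A minimal sketch, assuming the type and attribute names above are placeholders you would replace with your own and that the key is stored as a string:
CREATE TYPE my_new_type (my_custom_attribute string(64)) WITH SUPERTYPE dm_document PUBLISH
Note that string values need quotes in the query itself, e.g. my_custom_attribute = 'ABC-123' (a made-up key, just for illustration).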
Folders can be a good solution if it helps you to organize and navigate the data, but they can also create some unique performance challenges. I would suggest against them if your dataset is very large and you don't need to navigate the folder structure.

Related

Creating 'custom' tables in PostgreSQL

I've hit sort of a roadblock in a current project I'm working on. I don't have a lot of web developers in my office, and as a matter of fact the only other web dev just went on vacation. Anyway, I was wondering if anyone could help me with structuring two of my Postgres tables.
The user needs to be able to create custom data tables, one for each specific program (a parent record). The form I've set up for these tables allows you to add or remove inputs based on how many fields you need, and then specify the name, data_type, etc.
My initial idea was to create a new table in the DB each time a user created one of these custom tables. The other web dev, who has created something similar, said it would be better to create a fields table that stores each custom field's information, and then a data table that stores every cell of data tied to a field id.
I understand having the fields table so that I can retrieve just the field information and build my front-end tables and edit forms dynamically, but I'm a little confused about how to get the data into the table. I'm used to having an array of objects with each object representing an entire row. With this method it's storing each cell of data instead of a row of data, and I don't know the best way to select and organize it on the backend.
Data for these tables will be imported from CSV files formatted to the custom table structure. I got a suggestion on Reddit to use JSON to store each row's data, but I'm wondering how I'd be able to do sorting and filtering with that. My current table structure is listed below, from before I got the JSON suggestion. I'm guessing if I went that route I would remove the fieldId column and instead use the field name as the JSON key, storing that field's data with it.
fields
id -- name -- program_id -- type -- required -- position -- createdAt -- updatedAt
data
id -- fieldId -- data -- createdAt -- updatedAt
So I guess my question is: does this sound like the right way to structure these tables for my needs, and if so, can I still perform sorting and filtering on it?
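For reference, here is roughly what I imagine sorting and filtering would have to look like if I went the JSON route, assuming a hypothetical data_rows table with one row per CSV row and a jsonb column keyed by field name (the table, column, and field names here are made up):
CREATE TABLE data_rows (
  id bigserial PRIMARY KEY,
  program_id integer NOT NULL,
  row_data jsonb NOT NULL,
  created_at timestamptz NOT NULL DEFAULT now()
);
-- filter and sort on fields stored inside the JSON; values come back as text, so numeric sorting needs a cast
SELECT id, row_data
FROM data_rows
WHERE program_id = 1
  AND row_data->>'status' = 'active'
ORDER BY (row_data->>'amount')::numeric;
-- a GIN index helps containment filters such as row_data @> '{"status": "active"}'
CREATE INDEX data_rows_row_data_idx ON data_rows USING gin (row_data);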

Cannot create nested documents in Moqui for Elasticsearch

I am not able to explicitly map a particular field according to my specific requirements.
For example, if I take the WorkEffort data model used in HiveMind, then while creating the schema to index tasks I cannot specify which type of mapping I want for a specific field.
If I want to nest the details of the Milestones and People associated with a particular task, and then query them by those values, I am not able to achieve that.
For example, I cannot fetch results based on the Assignee or the Milestone.
So is there any way to specify the type of mapping for a particular field while creating the Data Document, so that I can define the schema the way I want?
This will allow me to specify which fields are nested and query them accordingly.

NHibernate: Returning only two properties of an entity in a Dictionary

We are using an Oracle 10g database, NHibernate, WCF and Silverlight 3.0 in our project.
The situation we have is that the entities in my project have many properties. For certain situations, like showing the options in a dropdown, I only want to retrieve the ID and Name fields for that entity. I do not want to return a list of the entire entity objects, as there are many columns in the table. Presently I am using two SELECT queries: one to fetch the list of IDs and a second to fetch the list of Names. Then I join these two lists, form a Dictionary and pass it to the UI.
My concern is: would it be possible to achieve this in a single query?
One approach that I know of is to create a new class having only the ID and Name properties, import it into NHibernate and then form a list of this new class and send it to the UI. I want to avoid this approach for now, because there are many tables for which I have to implement this functionality, and hence I would have to create many new classes and corresponding XML mapping files.
Any sort of help would be greatly appreciated.
Here is one way to do it using the Criteria API with Projections and the AliasToBean result transformer (see CreateCriteria in the NHibernate query documentation). If it's a simple non-persistent class containing only Id and Name, you can reuse the same class across entities.

NHibernate: Dynamic Table Mapping

I have a scenario where I want to persist document info records to a table specific to the type of document, rather than to one generic table for all records.
For example, records for Invoices will be stored in dbo.Doc_1000 and records for Receipts will be stored in dbo.Doc_2000, where 1000 and 2000 are auto-generated ids stored in a well-known table (dbo.TypeOfDoc).
Furthermore, each dbo.Doc_xxx table has a group of system columns (always the same) and may have a group of dynamic columns (metadata).
The dbo.Doc_xxx tables, and any dynamic columns, are of course created at runtime.
Is this possible with NHibernate?
Thanks.
I hope that I got your point. I am currently looking for a solution to a problem that looks similar: I want to add a feature to my application where the admin user can design an entity at runtime.
As far as I know, once the SessionFactory is configured and ready to use, there is no way to modify the mapping used by NHibernate. If you want to use a customized table structure that is configured, created and modified at runtime, you need a place where a corresponding mapping lives, e.g. an NHibernate mapping XML file, and you have to build a new SessionFactory each time you change the database model to reflect those changes.

Define Generic Data Model for Custom Product Types

I want to create a product catalog that allows for intricate details on each of the product types in the catalog. The product types have vastly different data associated with them; some with only generic data, some with a few extra fields of data, some with many fields that are specific to that product type. I need to easily add new product types to the system and respect their configuration, and I'd love tips on how to design the data model for these products as well as how to handle persistence and retrieval.
Some products will be very generic and I plan to use a common UI for editing those products. The products that have extensible configuration associated with them will get new views (and controllers) created for their editing. I expect all custom products to have their own model defined but to share a common base class. The base class would represent the generic product that has no custom fields.
Example products that need to be handled:
Generic product
    Description
Light Bulb
    Description
    Type (with an enum of fluorescent, incandescent, halogen, LED)
    Wattage
    Style (enum of flood, spot, etc.)
Refrigerator
    Description
    Make
    Model
    Style (with an enum in the domain model)
    Water Filter information
        Part number
        Description
I expect to use MEF for discovering what product types are available in the system. I plan to create assemblies that contain product type models, views, and controllers, drop those assemblies into the bin, and have the application discover the new product types, and show them in the navigation.
Using SQL Server 2008, what would be the best way to store products of these various types, allowing for new types to be added without having to grow the database schema?
When retrieving data from the database, what's the best way to translate these polymorphic entities into their correct domain models?
Updates and Clarifications
To avoid the Inner Platform Effect, if there is a database table for every product type (to store the products of that type), then I still need a way to retrieve all products across product types. How would that be achieved?
I talked with Nikhilk in more detail about his SharePoint reference. Specifically, he was talking about this: http://msdn.microsoft.com/en-us/library/ms998711.aspx. It actually seems pretty attractive. No need to parse XML; and there is some indexing that could be done allowing for simple and fast queries over the data. For instance, I could say "find all 75-watt light bulbs" by knowing that the first int column in the row is the wattage when the row represents a light bulb. Something (NHibernate?) in the app tier would define the mapping from the product type to the userdata schema.
Voted down the schema that has the Property Table because it could lead to lots of rows per product, which could cause index difficulties; plus, all queries would have to essentially pivot the data.
Use a SharePoint-style UserData table that has a set of string columns, a set of int columns, etc., and a Type column.
Then you have a types table that specifies the schema for each type: its properties and the specific columns they map to in the UserData table.
With things like Azure and other utility computing storage you don't even need to define a table. Every store object is basically a dictionary.
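For concreteness, a minimal sketch of what such a UserData-style table and its type metadata could look like (all table, column, and id values below are made up for illustration):
CREATE TABLE dbo.ProductData (
    ProductId int NOT NULL PRIMARY KEY,
    ProductTypeId int NOT NULL,
    Description nvarchar(200) NULL,
    Int1 int NULL, Int2 int NULL,
    String1 nvarchar(100) NULL, String2 nvarchar(100) NULL,
    Float1 float NULL
);
CREATE TABLE dbo.ProductTypeColumn (
    ProductTypeId int NOT NULL,
    PropertyName varchar(50) NOT NULL,   -- e.g. 'Wattage'
    ColumnName varchar(10) NOT NULL      -- e.g. 'Int1'
);
-- "find all 75-watt light bulbs", given that the type metadata maps Wattage to Int1 for the light bulb type
SELECT ProductId, Description
FROM dbo.ProductData
WHERE ProductTypeId = 2 AND Int1 = 75;   -- 2 = the light bulb type's id (made up)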
I think you need to go with a data model like --
Product Table
ProductId (PK)
ProductName
Details
Property Table
PropertyId (PK)
ProductId (FK)
ParentPropertyId (FK - Self referenced to categorize properties)
PropertyName
PropertyValue
PropertyValueTypeId
Property Value Lookup Table
PropertyValueLookupId (PK)
PropertyId (FK)
LookupValue
And then have a dynamic view based on this. You could use the PropertyValueTypeId column to identify the type, using a convention like 0 = string, 1 = integer, 2 = float, 3 = image, etc.; ultimately, though, everything is stored untyped. You could also use this column to select the control template to render the corresponding property to the user.
You can use the Property Value Lookup table to keep lookups for a specific property (so that the user can choose the value from a list).
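To give a feel for the queries this model implies (table and column names follow the sketch above; the id and values are illustrative):
-- all properties of one product, one row per property
SELECT p.ProductName, pr.PropertyName, pr.PropertyValue
FROM Product p
JOIN Property pr ON pr.ProductId = p.ProductId
WHERE p.ProductId = 42;
-- filtering on a property value means going through the property rows
SELECT p.ProductId, p.ProductName
FROM Product p
JOIN Property pr ON pr.ProductId = p.ProductId
WHERE pr.PropertyName = 'Wattage'
  AND pr.PropertyValue = '75';
Since PropertyValue is stored untyped, comparisons are string comparisons unless you cast based on PropertyValueTypeId.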
Summarizing, let's look at the options under consideration for storing product information:
1) some XML format in the database
2) similar to the post above about having x number of type-defined columns (SharePoint approach)
3) a generic table with name and type definitions stored in a lookup table and values in a secondary table with columns id, propertyid, value (similar to #2, although this approach would provide unlimited property information)
4) some hybrid of the above options where the product table would have x common columns (for storage of properties common to all products) plus y user-defined columns (this could be m of integer type and n of varchar type). This may be taking the best of #2 and of a normalized structure, as if you knew all the properties of all products. You would be getting the best SQL performance for the properties that you use the most (probably those that are common across all products) while still allowing custom columns for specific properties of each product.
Are there other options? In my opinion, option 4 above is the best hybrid of the combinations.
dave
Put as much of the shared anticipated structure in traditional normalized 3NF model, then augment with XML columns as appropriate.
I don't see MEF (or any other ORM) being able to do all this transparently.
I think you should avoid the Inner Platform Effect and actually build tables for your specialized entities. You'll be writing specific code to manage them so why not have proper backing tables too?
It will make your deployment slightly harder - drop in an assembly and run a script - but it will probably save you a lot of pain in the long run.
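A minimal sketch of that shape, and of how a listing across product types could still work (table and column names here are illustrative, not taken from the question):
CREATE TABLE dbo.Product (
    ProductId int IDENTITY(1,1) PRIMARY KEY,
    ProductTypeId int NOT NULL,
    Description nvarchar(200) NOT NULL
);
CREATE TABLE dbo.LightBulbProduct (
    ProductId int PRIMARY KEY REFERENCES dbo.Product(ProductId),
    BulbType varchar(20) NULL,
    Wattage int NULL,
    Style varchar(20) NULL
);
-- the "all products across types" query only touches the base table
SELECT ProductId, ProductTypeId, Description FROM dbo.Product;
-- type-specific screens join out to their own table
SELECT p.ProductId, p.Description, b.Wattage, b.Style
FROM dbo.Product p
JOIN dbo.LightBulbProduct b ON b.ProductId = p.ProductId
WHERE b.Wattage = 75;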
Jeff,
we currently use an XML field in the Products table to handle all product-specific data. So our Products table has a few common fields that all products share, an XML column which contains whatever a particular product needs additionally, and a few computed fields that reach into the XML and surface some of the frequently queried fields as "virtual" fields on the Products table (e.g. "Style" would be set to whatever the current product defines, or NULL if the product doesn't have a Style property).
So far, we've been quite flexible with that approach - if you create some decent XSD schemas for your XML, you can even create C# proxy classes for these fields.
Works nicely for us - joining the best of both the relational and XML worlds.
Marc
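To make the "virtual field" idea above concrete, here is a rough sketch of surfacing one value from the XML column in SQL Server (the function, column, and XPath names are placeholders); note that SQL Server does not allow the XML value() method directly in a computed column definition, so it goes through a scalar function:
CREATE FUNCTION dbo.GetProductStyle(@data xml)
RETURNS nvarchar(50)
WITH SCHEMABINDING
AS
BEGIN
    RETURN @data.value('(/Product/Style)[1]', 'nvarchar(50)');
END;
GO
ALTER TABLE dbo.Products
    ADD Style AS dbo.GetProductStyle(ProductData);
-- queries can now filter on Style like a normal column (NULL when the product has no Style element)
SELECT ProductId FROM dbo.Products WHERE Style = 'Flood';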