Any documentation available for xe:objectData? - objectdatasource

It seems that the objectData control can be used as a performance boost for an XPages application. I understand the basic idea behind it, but I still have trouble getting it to work properly.
Using an objectData as the input for a repeat control avoids the unnecessary refresh of the repeat during a partial refresh that was triggered on a refreshId other than the repeat's id. But because of the cache mechanism in the objectData, the objectData is not refreshed during a partial refresh of, for example, the surrounding div. With the scope set to request, the objectData is refreshed, but then the issue of a partial refresh also refreshing datasources outside the refreshId occurs.
A bit weird, I know, but I do not know how to explain it better.
So is there any documentation or sample on how to use objectData? I found one sample in the JDBC sample database, but it did not help.

In the context of the repeat/specific row use case, introduce Partial Execution (execMode="partial" / execId="foo") to complement the Partial Refresh of the row. This narrows component tree execution to the row of interest and avoids redundant processing outside of the specified target area.
In terms of documentation for objectData, the best worked example is indeed within the XPagesJDBC.nsf sample database (JDBC_RowSetDatasource.xsp). It succinctly demonstrates using this datasource as a delegate to create a specialized DataContainer object from the current "row" variable, and conversely managing the specialized saving of that DataContainer object during the save process. Although this example handles the delegation of SQL processing for the current row, the same approach is applicable to many other use cases (e.g. the underlying view could be a view of XML documents, where you delegate per-row handling through the objectData datasource to a custom, specialized XML processing object).
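As an illustration only (the class and method names below are hypothetical, not taken from the sample database), the kind of per-row wrapper object that the datasource's create logic might return, and its save logic might persist, could look roughly like this:

// Hypothetical sketch of a per-row "DataContainer"-style object: the objectData
// datasource creates one of these per row and delegates saving back to it.
public class RowDataContainer implements java.io.Serializable {

    private final String rowId;   // identifier of the underlying "row"
    private String title;         // example field exposed to the XPage controls
    private boolean dirty;        // remembers whether a save is actually needed

    public RowDataContainer(String rowId, String title) {
        this.rowId = rowId;
        this.title = title;
    }

    public String getTitle() { return title; }

    public void setTitle(String title) {
        this.title = title;
        this.dirty = true;
    }

    // Invoked from the datasource's save logic; delegates the actual persistence
    // (SQL, XML, or anything else) to whatever backend the container wraps.
    public void save(RowBackend backend) {
        if (dirty) {
            backend.update(rowId, title);
            dirty = false;
        }
    }

    // Hypothetical backend abstraction for the real per-row persistence work.
    public interface RowBackend {
        void update(String rowId, String title);
    }
}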

Is it good practice to attach an event-related parameter to an object's model as a variable?

This is about an API that handles validation while saving an object. The front-end client sends a request to a specific API endpoint, and on the back end the API creates a new object if the right conditions are met.
Right now our usual approach is that each model has a ruleset for its fields, and validation is invoked when the save function is called; technically the validation is done right before the object is saved into the database.
Then during today's code review I came across a solution that I wasn't sure was good practice or not. The front end must send a specific parameter to the API every time, because other APIs use our API as well and we need to know whether the request was sent as an API request or a browser request. If this parameter is present, we want to execute an extra validation function on a specific field.
(1) If I had to implement it, I would check the incoming parameter at the service-handler or controller level, and if it is present I would invoke the validation right away and throw an error if it fails.
(2) The implementation I saw, however, adds an extra variable to the model, sets it when the parameter is present, and validates only when the save function is invoked on the object (which first validates the ruleset defined on the object's fields, then saves the object into the database).
So my problem with (2) is that the object has now grown bigger with an extra variable that is only related to a specific event, so I would say it is better to implement (1). But (2) also has an advantage: when you create the object on a different endpoint by parsing the parameters, the validation will work there as well, even if the developer forgets to update the code there.
This may seem like a silly question, because why would I care about just one extra variable? But it is the bedrock of something good or bad. If I say this is OK, then from now on the models will keep growing with extra variables that are only related to specific events, which I think should be handled at the controller/service-handler level. On the other hand, the code is more reliable if it is not the developers who have to remember all 6,712,537 functionalities and keep them in mind whenever they change something. Say the whole team is gone tomorrow and a new developer who does not know about these small details has to change code related to this functionality; the new feature should still be supported by the old one.
So my question is: is there any good practice for this, and what do you think would be the best approach?
So I spent some time thinking about a solution, and I think the best option is to have an array of acceptable trigger variables in the model class. Then, when the parameters are passed to the model at the controller level, the loader function can be modified to take the trigger variables from the parameters and save them in the model's associative array that stores the trigger variables.
By default this array is empty, and no matter how many new trigger variables are needed, it will only contain the ones that are actually used.
Of course the loader function also needs to be modified so it can filter out the non-trigger variables, as is already done for the regular fields, and there can even be a validation ruleset for the trigger variables if necessary.
This solves both the problem of bloating the object with unnecessary variables and the centralization of the validation, because the validation can now always be done in the model instead of the controller.
And since the loader function is modified to store the trigger variables in the model's trigger-variable array, the developer never has to remember that this functionality exists. When someone later creates a new related function or endpoint that handles object creation, they will not forget to validate against the old functionality, because the loader function already handles it for them.
It should be noted, though, that since the loader function only distinguishes parameters by name when deciding where to load them, these parameter names need to be distinct from each other, otherwise a buggy functionality can be created by accident: if you forget that a model attribute with the same name already exists, you can accidentally trigger an event that was programmed to fire when a trigger variable with that name is present. This can be solved by prefixing the trigger variables, for example.
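A minimal sketch of this idea (written in Java purely for illustration; the original language and framework are not specified, and every name here is hypothetical):

// Illustrative sketch: the loader splits incoming request parameters into regular
// fields and whitelisted, prefixed trigger variables, so event-specific flags never
// become permanent model fields, and the save path keeps all validation in the model.
import java.util.HashMap;
import java.util.Map;
import java.util.Set;

public class ClientModel {

    private static final String TRIGGER_PREFIX = "trigger_";
    private static final Set<String> ALLOWED_TRIGGERS = Set.of("trigger_api_request");

    private final Map<String, String> fields = new HashMap<>();
    private final Map<String, String> triggers = new HashMap<>();

    // Loader: called at the controller level with the raw request parameters.
    public void load(Map<String, String> params) {
        for (Map.Entry<String, String> entry : params.entrySet()) {
            if (entry.getKey().startsWith(TRIGGER_PREFIX)) {
                if (ALLOWED_TRIGGERS.contains(entry.getKey())) {
                    triggers.put(entry.getKey(), entry.getValue());
                }
            } else {
                fields.put(entry.getKey(), entry.getValue()); // regular field filtering omitted
            }
        }
    }

    // Validation stays in the model: the normal ruleset runs first, then any
    // extra check that depends on a trigger variable, and finally the save.
    public void save() {
        validateRuleset();
        if (triggers.containsKey("trigger_api_request")) {
            validateApiSpecificField();
        }
        persist();
    }

    private void validateRuleset() { /* field-level rules */ }
    private void validateApiSpecificField() { /* extra check for API-originated requests */ }
    private void persist() { /* write to the database */ }
}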

FactoryImpl to set atts via props for bound inputs

First, thanks for any advice. I am new to all of this and apologize for any obvious blunders.
Second, the question:
In an interface for entering clients who often have a number of roles, it seemed efficient to create a set of inputs whose visual characteristics and associated data binding were based simply on the input's name.
For example, inquirerfirstname would be any caller or emailer who contacted our company.
The name would dictate a label, a placeholder, and the location in Firebase where the data would be stored.
The single name could be used, I thought, with a relational table (a state machine or a series of nested ifs) to define the properties of the input and change its outward appearance and inner bindings through property manipulation.
I created a set of nested ifs and console-logged the property changes in the inputs, but their representation in the host element (a collection of inputs that generated messages to clients as well as messages to sales staff) remained unaffected.
I attempted using the ready callback. I forced the state change with a button.
I was unable to use var name = new MyInput(name). I believe this method would be most effective, but I am unsure how to "stamp" the JavaScript into a heavyweight stamped parent element.
An example of a more complicated and dynamic use of a constructor and a factory implementation that can read database (JSON) objects and generate HTML elements in response would be awesome.
In vanilla JavaScript a forEach would seem to do the trick, but the definitions, structure, and binding would not be organic; read: it might be easier just to hand-stamp the inputs in Polymer HTML.
I would be really grateful for any help. I have looked for a week and failed to find one example that takes data binding, physical appearance, attribute swapping, property binding, and object reading into account.
I guess it's a lot, but I think I understand each piece independently (save the use of the constructor).
Thanks again.
Jason
PS: I am aware that the stamping of the element seems to preclude dynamic property, attribute, and binding assignments. I was hoping a computed attribute mixed with a factoryImpl would be an option (with a nice example).

GMF Model & Table View

I have been given this task and would really appreciate it if someone could help. I built a graphical model with GMF, which is as follows:
As you can see, one of the nodes in the model has been selected. The task is to create an Eclipse view with a table that is automatically updated upon the selection of a so-called "City Node". As you may guess from the model, the table should contain the path costs to all of the cities. I will later expand my solution to include a modified Dijkstra algorithm, but right now I am stuck on the creation of the table view.
I tried to build it using a TableViewer, but it seems fairly complex: the input of the table has to be set on the ContentProvider, but the twist is that I need a SelectionListener to obtain the city coordinates (the path cost is to be calculated as the distance between two connected cities divided by the maximum speed indicated on the connecting street in the graph) as well as the currently selected city, and the path costs need to be calculated and displayed in the table automatically upon receipt of a click event. This means the input handed to the ContentProvider somehow needs to be updated on every selection change.
For further information: I get the current selection through the selectionChanged method of the ISelectionListener interface, and inside this method I put the city information into an ArrayList. However, although I declared this ArrayList outside of the method as public, I cannot access it from outside the method and thus cannot pass it to the ContentProvider, so the input of the table cannot be updated. I have tried to describe this as simply as possible, and I hope you can help me; as I cannot foresee what should be done, I would really appreciate it.
You're on the right track!
In your selection listener's ISelectionListener.selectionChanged method you just have to set the new input for the viewer with TableViewer.setInput. Then the IStructuredContentProvider.inputChanged method gets invoked on the viewer's content provider. It is here that you can do your work with the new input and refresh the viewer with TableViewer.refresh.
You can also use the JFace databinding framework, but I think you should be fine with what I've mentioned above.
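A rough sketch of that wiring (the class and helper names are illustrative, not taken from your code):

// Sketch: every workbench selection is pushed into the TableViewer as its input;
// the content provider turns the selected city into the rows (path costs) to show.
import org.eclipse.jface.viewers.ISelection;
import org.eclipse.jface.viewers.IStructuredContentProvider;
import org.eclipse.jface.viewers.IStructuredSelection;
import org.eclipse.jface.viewers.LabelProvider;
import org.eclipse.jface.viewers.TableViewer;
import org.eclipse.jface.viewers.Viewer;
import org.eclipse.swt.SWT;
import org.eclipse.swt.widgets.Composite;
import org.eclipse.ui.ISelectionListener;
import org.eclipse.ui.IWorkbenchPart;
import org.eclipse.ui.part.ViewPart;

public class CityTableView extends ViewPart implements ISelectionListener {

    private TableViewer viewer;

    @Override
    public void createPartControl(Composite parent) {
        viewer = new TableViewer(parent, SWT.FULL_SELECTION);
        viewer.setContentProvider(new IStructuredContentProvider() {
            private Object selectedCity;

            @Override
            public void inputChanged(Viewer v, Object oldInput, Object newInput) {
                selectedCity = newInput; // remember the newly selected city
            }

            @Override
            public Object[] getElements(Object inputElement) {
                // Calculate the path costs for the selected city here (later: Dijkstra)
                // and return one row object per target city.
                return computePathCosts(selectedCity);
            }

            @Override
            public void dispose() { }
        });
        viewer.setLabelProvider(new LabelProvider());
        getSite().getPage().addSelectionListener(this);
    }

    @Override
    public void selectionChanged(IWorkbenchPart part, ISelection selection) {
        if (selection instanceof IStructuredSelection) {
            Object selected = ((IStructuredSelection) selection).getFirstElement();
            // setInput invokes inputChanged on the content provider and redraws the table.
            viewer.setInput(selected);
        }
    }

    private Object[] computePathCosts(Object city) {
        return new Object[0]; // placeholder for the real path-cost calculation
    }

    @Override
    public void setFocus() {
        viewer.getControl().setFocus();
    }

    @Override
    public void dispose() {
        getSite().getPage().removeSelectionListener(this);
        super.dispose();
    }
}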

Restore one RCP view while restoring another

Two views in my application need to load the same information when restoring state. To avoid saving it twice, my idea was to have one view create the other in init or createPartControl if it was not created yet. However,
PlatformUI.getWorkbench().getActiveWorkbenchWindow().getActivePage().showView(...)
doesn't work there, as getActivePage() returns null. Is it possible to work around this?
Delegate to a manager or service that loads, maintains, and saves the shared state. That ensures the first access initializes your information. When a view is instantiated, it simply asks the manager for the information; if the user never instantiates your view, you never have to do the extra work. See the sketch below.
In the general case, you can't create or instantiate one view while creating or activating another view. Eclipse won't allow it and will log ERRORs in the error log.
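A minimal sketch of such a shared-state manager (all names here are hypothetical), which both views can call from createPartControl:

// Hypothetical shared-state manager: whichever view asks first triggers the load;
// later callers reuse the cached copy, and saving happens once, in one place.
public final class SharedStateManager {

    private static SharedStateManager instance;

    private SharedState state; // whatever information both views need to display

    private SharedStateManager() { }

    public static synchronized SharedStateManager getInstance() {
        if (instance == null) {
            instance = new SharedStateManager();
        }
        return instance;
    }

    public synchronized SharedState getState() {
        if (state == null) {
            state = loadState(); // lazy: only done when a view actually needs it
        }
        return state;
    }

    public synchronized void saveState() {
        if (state != null) {
            // persist 'state' once, e.g. from the plug-in's stop() method
        }
    }

    private SharedState loadState() {
        // read from the plug-in state location, dialog settings, or a preference
        return new SharedState();
    }

    // Placeholder for the information shared by the two views.
    public static class SharedState {
    }
}

Each view's createPartControl then just calls SharedStateManager.getInstance().getState() instead of trying to open the other view.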
EDIT:
Three standard persistence patterns I've seen used (and/or misused :-) are:
1) Have your plug-in get its state location and simply serialize your state out there (the location is provided for free if you subclass org.eclipse.core.runtime.Plugin). You can do it in your activator's stop(BundleContext) method. You can use classes like org.eclipse.ui.XMLMemento to serialize to and from XML if you don't already have a solution.
2) If you subclass org.eclipse.ui.plugin.AbstractUIPlugin, you can use org.eclipse.ui.plugin.AbstractUIPlugin.getDialogSettings() to store your state. Potentially a little bulky, as you would have to keep it up to date.
3) Have your common manager update a preference, potentially using another serialization technique.
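A rough sketch of option 1), assuming the state is written by the plug-in activator (the file name, memento keys, and activator name are illustrative):

// Sketch of pattern 1): serialize the shared state to the plug-in's state location
// with XMLMemento; a natural place to call saveSharedState is the activator's stop().
import java.io.File;
import java.io.FileReader;
import java.io.FileWriter;
import java.io.IOException;

import org.eclipse.core.runtime.IStatus;
import org.eclipse.core.runtime.Status;
import org.eclipse.ui.IMemento;
import org.eclipse.ui.XMLMemento;
import org.eclipse.ui.plugin.AbstractUIPlugin;

public class MyActivator extends AbstractUIPlugin {

    private static final String STATE_FILE = "shared-state.xml"; // illustrative name

    private File getStateFile() {
        // getStateLocation() is the per-plug-in directory provided by Plugin
        return getStateLocation().append(STATE_FILE).toFile();
    }

    public void saveSharedState(String info) {
        XMLMemento root = XMLMemento.createWriteRoot("sharedState");
        root.createChild("entry").putString("info", info);
        try (FileWriter writer = new FileWriter(getStateFile())) {
            root.save(writer);
        } catch (IOException e) {
            getLog().log(new Status(IStatus.ERROR, getBundle().getSymbolicName(),
                    "Could not save shared state", e));
        }
    }

    public String loadSharedState() {
        File file = getStateFile();
        if (!file.exists()) {
            return null; // nothing persisted yet
        }
        try (FileReader reader = new FileReader(file)) {
            IMemento root = XMLMemento.createReadRoot(reader);
            IMemento entry = root.getChild("entry");
            return entry == null ? null : entry.getString("info");
        } catch (Exception e) {
            getLog().log(new Status(IStatus.ERROR, getBundle().getSymbolicName(),
                    "Could not read shared state", e));
            return null;
        }
    }
}

The shared-state manager sketched in the first answer can call loadSharedState on first access and saveSharedState when the plug-in stops.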

Why are my Navigation Properties empty in Entity Framework 4?

The code:
public ChatMessage[] GetAllMessages(int chatRoomId)
{
    using (ChatModelContainer context = new ChatModelContainer(CS))
    {
        //var temp = context.ChatMessages.ToArray();
        ChatRoom cr = context.ChatRooms.FirstOrDefault(c => c.Id == chatRoomId);
        if (cr == null) return null;
        return cr.ChatMessages.ToArray();
    }
}
The problem:
The method (part of a WCF service) returns an empty array. If I uncomment the commented line, it starts working as expected. I have tried turning off lazy loading, but it didn't help.
Also, when it works, I get ChatMessages with the reference to ChatRoom populated but not the one to ChatParticipant. Both are referenced by the ChatMessage entity in the schema, each with an Id and a navigation property. The Ids are set and point to the right entities, but on the client side only the ChatRoom reference is populated.
Related questions:
Is an array the preferred way to return collections of EF entities like this?
When making a change in my model (edmx), I'm required to run the "Generate Database from Model..." option before I can run context.CreateDatabase(). Why? I get an error message pointing to old SSDL, but I can't find where the SSDL is stored. Is it created when I run the "Generate Database..." option?
Is it safe to return entire entity graphs to the client? I've read a bit about "circular reference exceptions", but is this fixed in EF4?
How and when are references populated in EF4? If I have lazy loading turned on, I suspect only the entities I touch are populated? But with lazy loading turned off, should the entire graph always be populated?
Are there any drawbacks to using self-tracking entities instead of ordinary entities in EF4? I don't need self-tracking right now, but I might later. Can I upgrade easily, or should I start with self-tracking from the beginning?
Why can't I use entity keys of type string?
Each of your questions needs a separate answer, but I'll try to answer them as briefly as possible.
First of all, in the code sample you provided, you get a ChatRoom object and then try to access a related collection that is not included in your query (ChatMessages). If lazy loading is turned off, as you suggested, then you will need the Include("ChatMessages") call in your query, so your LINQ query should look like this:
ChatRoom cr = context.ChatRooms.Include("ChatMessages").FirstOrDefault(c => c.Id == chatRoomId);
Please also ensure that your connection string is in your config file.
For the related questions:
You can return collections in any way you choose. I have typically returned them in a List (and I think that's the common way), but you could use arrays if you want. To return a list, use the .ToList() method call on your query.
I don't quite understand what you're trying to do here; are you using code to create your database from your EDMX file? I've typically used a database-first approach: I create my tables and so on, then update the EDMX from the database. Even if you generate your DB from your model, you shouldn't have to run CreateDatabase in code; you should be able to run the generated script against your DB. If you are using code-only, then you need to drop the EDMX file.
You can generally return entity graphs to the client; that should work fine.
EF4 should only populate what you need. If you use lazy loading, it will automatically load things that you do not Include in your LINQ query when you reference them and execute the query (e.g. do a ToList() operation). This won't work so well if your client is across a physical boundary (e.g. a service boundary), obviously :) If you don't use lazy loading, it will load only what you tell it to in your query.
Self-tracking entities are used for n-tier apps, where objects have to be passed across physical boundaries (e.g. services). They come with the overhead of generated code on each object to keep track of its changes; they also generate POCO objects which are not dependent on EF4 (but obviously contain generated code that makes the tracked changes work with the EF4 tracker). I'd say it depends on your usage: if you're building a small, fairly self-contained app and don't really care about separation for testability without the infrastructure in place, then you don't need self-tracking entities. Only use framework features when you need them, so if you're not writing an enterprise-scale application (enterprise doesn't have to mean big, but something scalable, highly testable, high quality, etc.), there is no need to go for self-tracking POCOs.
I haven't tried, but you should be able to do that; if you can't get it to work, that would be a candidate for a separate question :)