What Are the Best Practices for Optimal Performance in Silverlight and MVVM? - silverlight-4.0

I have many normalized tables - possibly more than 50. I was wondering what the best approach is for defining ViewModels: an individual ViewModel for each form, or a common ViewModel shared by multiple forms? Creating individual ViewModels might increase the size of the data that needs to be downloaded and increase the redundancy of data on the client (e.g. using Category on each form means each form carries its own set of that data). On the other side, making a common ViewModel for a set of forms might increase the complexity of managing everything.
Is there a good article describing this aspect of development? What are the best practices for managing the overall application so that it offers optimal performance (fetching the minimum data from the server)?
Thanks for your time and help.

The number of views and models will increase the size of your XAP file, which is downloaded completely when the application opens; the XAP can be compressed. Actual performance during use is a different matter and depends on other factors as well; try using Silverlight Spy to get an idea of actual browser performance. It is possible to download parts of your Silverlight app as required, but this is an advanced technique.
If messaging is your main concern, check out WCF binary message encoding.
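As a rough sketch (not tied to any particular service), the code equivalent of enabling binary encoding through a custom binding looks like this; the factory class name is just illustrative:

    using System.ServiceModel.Channels;

    public static class BinaryBindingFactory
    {
        // The code equivalent of <customBinding><binaryMessageEncoding/> ... in config:
        // message bodies travel as compact binary XML instead of text XML.
        public static Binding Create()
        {
            return new CustomBinding(
                new BinaryMessageEncodingBindingElement(),
                new HttpTransportBindingElement { MaxReceivedMessageSize = 1024 * 1024 });
        }
    }

The returned binding is then passed to the generated proxy's constructor along with the endpoint address.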
I recommend creating a new ViewModel for every view or nested UserControl, and then using an event aggregator for communication between the ViewModels.
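A tiny illustrative event aggregator (a sketch, not a specific library) could look like this:

    using System;
    using System.Collections.Generic;

    // Minimal pub/sub hub: ViewModels publish and subscribe by message type,
    // so they never need direct references to each other.
    public class EventAggregator
    {
        private readonly Dictionary<Type, List<Delegate>> _subscribers =
            new Dictionary<Type, List<Delegate>>();

        public void Subscribe<TMessage>(Action<TMessage> handler)
        {
            List<Delegate> handlers;
            if (!_subscribers.TryGetValue(typeof(TMessage), out handlers))
            {
                handlers = new List<Delegate>();
                _subscribers[typeof(TMessage)] = handlers;
            }
            handlers.Add(handler);
        }

        public void Publish<TMessage>(TMessage message)
        {
            List<Delegate> handlers;
            if (_subscribers.TryGetValue(typeof(TMessage), out handlers))
            {
                // Notify every subscriber registered for this message type.
                foreach (Action<TMessage> handler in handlers)
                    handler(message);
            }
        }
    }

One ViewModel publishes, say, a CategorySelectedMessage (a hypothetical message class) and another subscribes to it, keeping the two decoupled.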

Typically you'll want to create a View Model for each View. If two Views display the same data and allow the user to perform the same actions, differing only in UI implementation, then they can share a View Model, but the goal is to keep your View Models cohesive. If your View Models contain code to operate multiple views, you run the risk of implementing the "God Object" anti-pattern. If you find that your View Models all share a certain amount of common code, consider moving that code to a common base class.
Remember that two completely different View Models can manipulate the same Models. This might be the case if two views display the same data but each allows the user to interact with it in a unique way.
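As a minimal sketch of that common base class, with two View Models that could present the same Category model in different ways (all type names here are illustrative):

    using System.ComponentModel;

    // Shared plumbing lives in one place; each View still gets its own View Model.
    public abstract class ViewModelBase : INotifyPropertyChanged
    {
        public event PropertyChangedEventHandler PropertyChanged;

        protected void OnPropertyChanged(string propertyName)
        {
            var handler = PropertyChanged;
            if (handler != null)
                handler(this, new PropertyChangedEventArgs(propertyName));
        }
    }

    // Two different View Models manipulating the same underlying Category model.
    public class CategoryListViewModel : ViewModelBase
    {
        // exposes a read-only collection of categories for a list view
    }

    public class CategoryEditViewModel : ViewModelBase
    {
        // exposes editable properties and a Save command for a single category
    }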
I would highly recommend reading Pro WPF and Silverlight MVVM by Gary Hall. It's a great book to get started with MVVM, particularly for use with WPF and/or Silverlight.

Related

Suggestions on Yii project structures?

I am currently developing a web project using the Yii framework. I'm wondering where the right place is to put all the business logic: in the controllers, or in the models (models here meaning the mappings from database tables to actual objects)? Neither seems right. I think I might need an extra "asset" layer between the controllers and the models, but I have no idea how to start. Any suggestions?
Generally the suggestion is to use fat models and thin controllers, so the business logic goes in your models. It makes it far easier to keep your code reusable.
More info here:
http://www.yiiframework.com/doc/guide/1.1/en/basics.best-practices
If you've got a lot of custom logic, you could potentially have an "asset" layer of additional models that handle your DB models. It depends on your specific system, though. I find I sometimes use CFormModel objects this way, to map from a form that touches a bunch of different models to those models as needed.

Is it recommended to use Self Tracking Entities with WCF services?

I want to know whether using Self Tracking Entities (in Entity Framework) is recommended with WCF services. If yes, can you point me to a tutorial that shows how to do that?
Actually, I am going to develop a WPF application using Prism with MEF and MVVM. I have decided to use Entity Framework. I would appreciate suggestions and advice regarding this approach.
Any help will be appreciated.
"I want to know if using Self Tracking Entities (in Entity Framework) is recommended with WCF services?"
It depends who you ask. If you ask MS they will tell you yes, because they simply don't have anything better to offer. STEs were a response to this very old MS Connect suggestion. The problem is that EF itself has very poor support for merging changes between two entity graphs (you must do it completely yourself), and developers working on the MS platform (sometimes including me) share some common behaviors:
They are reluctant to develop their own solution to a problem and expect some magic directly in the APIs provided by MS.
Most of the time they are not trained / skilled / competent in the technology they have to use, because they have to move to a new one too often.
The only APIs they know are part of the .NET Framework. They don't look for other options, nor do they compare features.
The first two points are a result of the MS strategy where RAD became a synonym for the designer (or, more recently, T4 templates).
I share @Richard's opinion about STEs. I would add one additional drawback: they move large data sets between participants. If you get an entity graph from the server, change a single entity in the graph and push the data back, the whole graph is transferred again. Transferring only the changed entities means fighting the STEs' core logic. I'm also afraid that they track changes at the entity level rather than the property level. When modifying entities with large binary or string data, this can result in transferring far more data than necessary between the service and the database and between the service and the client.
Anyway, for a simple application with low data traffic and small entities they can do a good job and let you build your application quickly, though without strict separation of concerns. You will get entities from the service and bind them directly to the WPF UI, and they will track changes for you. Later you will push the entities back to the service and it will persist the changes. Your client and service will be tightly coupled, but in some scenarios that can be good enough.
I would avoid self tracking entities in general - I blogged about it here.
Create your own DTOs and use them to manage the transfer of data - then build your POCO objects in the service and use them with Entity Framework for persistence.
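A bare-bones sketch of that shape, assuming EF's DbContext API; ShopContext, Customer and CustomerDto are illustrative names, not from the question:

    using System.Data.Entity;
    using System.Runtime.Serialization;

    // The contract that travels over the wire - no EF types, no change tracking.
    [DataContract]
    public class CustomerDto
    {
        [DataMember] public int Id { get; set; }
        [DataMember] public string Name { get; set; }
    }

    // Plain POCO entity persisted by Entity Framework.
    public class Customer
    {
        public int Id { get; set; }
        public string Name { get; set; }
    }

    public class ShopContext : DbContext
    {
        public DbSet<Customer> Customers { get; set; }
    }

    // Inside the WCF service: copy the incoming DTO onto the POCO and let EF persist it.
    public class CustomerService
    {
        public void UpdateCustomer(CustomerDto dto)
        {
            using (var context = new ShopContext())
            {
                var customer = context.Customers.Find(dto.Id); // load the current state
                customer.Name = dto.Name;                      // copy only what the client may change
                context.SaveChanges();
            }
        }
    }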
If you want self-tracking, there is a slightly cleaner approach here.

Is it feasible to have DAL and BLL layers in a Mac OS X Application?

I am developing a Mac OS X application using Objective-C and Xcode 4 and want to find out the best way of handling data access and undertaking business logic tasks without having to use CoreData.
I am from a .NET MVC background and would normally have my controller call through to a service layer (using a Repository Pattern) to return data that could be mapped to my View. This would work in a similar way to the traditional Business Logic and Data Access Layer.
However, on the Mac most of my reading suggests that my Models and Controllers should share the responsibility of populating the Model with data and undertaking business and validation logic.
This seems a little restrictive to me and goes against the DRY principle, as I may need to repeat some data access/business logic operations in other models, thus having to write the same bit of code again.
Therefore is it feasible to have a set of classes or external libraries that undertake business/data access logic (to a SQLite database) that can then be called from any controller? Therefore the Model will only contain data about itself and validation logic? Or does this go against the core MVC principles and ways of building applications on the Mac?
Is there a particular reason not to use Core Data in this scenario? It's highly-optimized for persisting objects to and from the local filesystem. It also performs validation at the model level, results caching, notifications, etc.
What you describe sounds like a good idea to me. Putting your validation and business logic in your model classes is proper use of MVC, and having the data stored in an SQLite database (that the model classes talk to) is a commonly used methodology too.
I'm not sure if we're on the same page with terminology: if you use that design, your classes "that undertake business/data access logic (to a sqlite database) that can then be called from any controller" will in fact be model classes.

Pattern for client-side update in SOA

I want to develop a data-driven WPF application, which uses WCF to connect to the server side, which itself uses NHibernate to persist data. For example, there is a domain object called "Customer" and there is also a flattened (with AutoMapper) "CustomerDTO" which is returned by a WCF operation called "GetCustomer(int customerId)".
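For illustration, the flattening looks roughly like this, using AutoMapper's classic static API (the Address property and the DTO field names are invented for the example):

    using AutoMapper;

    // Domain object persisted with NHibernate on the server side.
    public class Customer
    {
        public int Id { get; set; }
        public string Name { get; set; }
        public Address HomeAddress { get; set; }
    }

    public class Address
    {
        public string City { get; set; }
    }

    // Flattened contract returned by GetCustomer(int customerId).
    public class CustomerDTO
    {
        public int Id { get; set; }
        public string Name { get; set; }
        public string HomeAddressCity { get; set; } // filled by AutoMapper's flattening convention
    }

    public static class MappingConfiguration
    {
        public static void Configure()
        {
            // One-time setup; Mapper.Map<Customer, CustomerDTO>(customer)
            // is then called inside the WCF operation.
            Mapper.CreateMap<Customer, CustomerDTO>();
        }
    }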
I don't know where I should perform data validation, or how I should handle client-side updating so that a user could modify one or more properties on the client by editing a form and finally clicking "Save".
Could you please provide me with some common patterns for such a situation, or any best-practice examples that target real LOB applications (n-tier patterns, multiple layers, etc.)?
Sounds like a good fit for Self Tracking Entities. Be aware that STEs are hot off the press and still have a bit more maturing to do. The approach used with them should answer your question about common patterns.

LinqToSql and WCF

Within an n-tier app that makes use of a WCF service to interact with the database, what is the best practice way of making use of LinqToSql classes throughout the app?
I've seen it done a couple of different ways, but they seemed to burn a lot of hours creating extra interfaces, message classes, and the like, which reduces the benefit you get from not having to write your data access code.
Is there a good way to do it currently? Are we stuck waiting for the Entity Framework?
LINQ to SQL isn't really suitable for use with a distributed app. The change tracking and lazy loading is part of the DataContext which is tied to the database so cannot travel across the wire. You can move L2S entities across the wire, modify them, move them back and update the database by reattaching them to the DataContext but that is pretty limited and you lose all concurrency checks as the old values are never kept around.
BTW I believe the same is true for L2E.
It is certainly not a good idea to pass the linq-to-sql object around to other parts of a distributed system. If you do that, you would couple your clients to the structure of the database, which is never a good idea. This was/is one of the major problems with DataSets by the way.
It is better to create your own classes for transferring data. Those classes, of course, would be implemented as DataContracts. In your service layer, you'd convert between the LINQ to SQL objects and instances of the data carrier objects. It is tedious, but it decouples the clients of the service from the database schema. It also has the advantage of giving you better control over the data that is passed around in your system.
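A minimal sketch of that conversion, assuming a designer-generated LINQ to SQL DataContext (ShopDataContext, Orders and the field names are illustrative):

    using System.Linq;
    using System.Runtime.Serialization;
    using System.ServiceModel;

    // The data carrier the clients depend on - decoupled from the database schema.
    [DataContract]
    public class OrderDto
    {
        [DataMember] public int Id { get; set; }
        [DataMember] public decimal Total { get; set; }
    }

    [ServiceContract]
    public interface IOrderService
    {
        [OperationContract]
        OrderDto GetOrder(int id);
    }

    public class OrderService : IOrderService
    {
        public OrderDto GetOrder(int id)
        {
            // ShopDataContext and Order are the designer-generated LINQ to SQL types.
            using (var db = new ShopDataContext())
            {
                var order = db.Orders.Single(o => o.Id == id);

                // Convert the L2S entity to the DataContract before it leaves the service,
                // so the DataContext and its change tracking never cross the wire.
                return new OrderDto { Id = order.Id, Total = order.Total };
            }
        }
    }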