SubSonic ORM with backbone.js - sql-server-2005

I am using the SubSonic ORM by Rob Conery with Backbone.js to build a single-page JavaScript demonstration application. One of the service endpoints has a contract that sends down every record in the data source, like this:
[WebMethod]
[ScriptMethod(UseHttpGet = true)]
public TaskCollection GetAllTasks()
{
    TaskCollection coll = new TaskCollection();
    coll.Load();
    return coll;
}
but it seems that each Task in the collection is polluted with a load of properties that are only needed on the server side. This is the JSON returned by the request:
[{
    "__type": "DAL.Task",
    "Taskid": 1,
    "Taskname": "welcome to india",
    "Createdon": "\/Date(1334591056903)\/",
    "Modifiedon": "\/Date(1334591056903)\/",
    "ValidateWhenSaving": true,
    "DirtyColumns": [],
    "IsLoaded": true,
    "IsNew": false,
    "IsDirty": false,
    "TableName": "task",
    "ProviderName": null,
    "NullExceptionMessage": "{0} requires a value",
    "InvalidTypeExceptionMessage": "{0} is not a valid {1}",
    "LengthExceptionMessage": "{0} exceeds the maximum length of {1}",
    "Errors": []
}]
All I require is TaskId, TaskName, CreatedOn and ModifiedOn. How do I make sure only these are sent down the wire using SubSonic?

Here are a couple of ideas...
Use a viewmodel to auto-select the properties:
public class TaskView
{
    public int TaskID { get; set; }
    public string TaskDescription { get; set; }
}
...
var results = new Select().From(Tables.Task).ExecuteTypedList<TaskView>();
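Applied to the question's four columns, the endpoint could then return the trimmed list directly. A sketch only, assuming SubSonic 2.x generated column constants with the same casing as the JSON above, and a TaskView widened to those four properties:
[WebMethod]
[ScriptMethod(UseHttpGet = true)]
public List<TaskView> GetAllTasks()
{
    // Only these four columns are selected, so only they cross the wire;
    // ExecuteTypedList maps them onto TaskView by property name.
    return new Select(Task.Columns.Taskid, Task.Columns.Taskname,
                      Task.Columns.Createdon, Task.Columns.Modifiedon)
        .From(Tables.Task)
        .ExecuteTypedList<TaskView>();
}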
Use an anonymous type
var qry = new Select(new string[] { Task.Columns.TaskID, Task.Columns.TaskDescription }).From(Tables.Task);
var resultList = new List<object>();
using (IDataReader rdr = qry.ExecuteReader())
{
    while (rdr.Read())
    {
        resultList.Add(new
        {
            TaskID = rdr[0].ToString(),
            TaskDescription = rdr[1].ToString()
        });
    }
}

I don't use SubSonic, but I have to admit this does seem like a good example of where you might want to use a ViewModel, that is, a model populated from your Model specifically for the view. As for binding the ViewModel to the Model, and generating properties for many ViewModels from the model (hand-writing ViewModels can definitely be tedious and error-prone across a bunch of models), I've heard of several general solutions. I'm still trying to find one I'm happy with myself; in the meantime I've been writing them by hand. If you want strongly typed ViewModels, I believe a tool like AutoMapper can do the mapping, though I've never used it myself. I've also seen solutions that use or inherit from a C# dynamic and modify the accessors (though I suspect that would be slightly problematic to generate JSON from).
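If AutoMapper appeals, the shape of it is roughly this: a sketch using AutoMapper's classic static API, assuming the SubSonic-generated Task class above and that TaskCollection is enumerable (System.Linq and the AutoMapper namespace are required):
// One-time configuration: properties whose names match are mapped automatically.
Mapper.CreateMap<Task, TaskView>();

// Per request: project the entities down to view models before serializing.
List<TaskView> views = coll.Select(t => Mapper.Map<Task, TaskView>(t)).ToList();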
The primary reason I've used ViewModels is that I can easily control the format of the dates, though that might be done better with a different JSON serializer. The use of ViewModels also gives you the flexibility to change your data layer as needed. But I have to admit it's been tedious; I think my implementation could be handled better with a bit of automation, but I don't know how to do that yet.
I realize this is only a partial answer. I'm curious what other answers might come up.

Related

aws api gateway & lambda: multiple endpoint/functions vs single endpoint

I have an AWS API that proxies to Lambda functions. I currently use different endpoints with separate Lambda functions:
api.com/getData --> getData
api.com/addData --> addData
api.com/signUp --> signUp
Managing all the endpoints and functions becomes cumbersome. Is there any disadvantage to using a single endpoint backed by one Lambda function that decides what to do based on the query string?
api.com/exec?func=getData --> exec --> if (params.func === 'getData') { ... }
It's perfectly valid to map multiple methods to a single Lambda function, and many people use this approach today as opposed to creating an API Gateway resource and Lambda function for each discrete method.
You might consider proxying all requests to a single function. Take a look at the following documentation on creating an API Gateway => Lambda proxy integration:
http://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-set-up-simple-proxy.html
Their example is great here. A request like the following:
POST /testStage/hello/world?name=me HTTP/1.1
Host: gy415nuibc.execute-api.us-east-1.amazonaws.com
Content-Type: application/json
headerName: headerValue
{
    "a": 1
}
Will wind up sending the following event data to your AWS Lambda function:
{
    "message": "Hello me!",
    "input": {
        "resource": "/{proxy+}",
        "path": "/hello/world",
        "httpMethod": "POST",
        "headers": {
            "Accept": "*/*",
            "Accept-Encoding": "gzip, deflate",
            "cache-control": "no-cache",
            "CloudFront-Forwarded-Proto": "https",
            "CloudFront-Is-Desktop-Viewer": "true",
            "CloudFront-Is-Mobile-Viewer": "false",
            "CloudFront-Is-SmartTV-Viewer": "false",
            "CloudFront-Is-Tablet-Viewer": "false",
            "CloudFront-Viewer-Country": "US",
            "Content-Type": "application/json",
            "headerName": "headerValue",
            "Host": "gy415nuibc.execute-api.us-east-1.amazonaws.com",
            "Postman-Token": "9f583ef0-ed83-4a38-aef3-eb9ce3f7a57f",
            "User-Agent": "PostmanRuntime/2.4.5",
            "Via": "1.1 d98420743a69852491bbdea73f7680bd.cloudfront.net (CloudFront)",
            "X-Amz-Cf-Id": "pn-PWIJc6thYnZm5P0NMgOUglL1DYtl0gdeJky8tqsg8iS_sgsKD1A==",
            "X-Forwarded-For": "54.240.196.186, 54.182.214.83",
            "X-Forwarded-Port": "443",
            "X-Forwarded-Proto": "https"
        },
        "queryStringParameters": {
            "name": "me"
        },
        "pathParameters": {
            "proxy": "hello/world"
        },
        "stageVariables": {
            "stageVariableName": "stageVariableValue"
        },
        "requestContext": {
            "accountId": "12345678912",
            "resourceId": "roq9wj",
            "stage": "testStage",
            "requestId": "deef4878-7910-11e6-8f14-25afc3e9ae33",
            "identity": {
                "cognitoIdentityPoolId": null,
                "accountId": null,
                "cognitoIdentityId": null,
                "caller": null,
                "apiKey": null,
                "sourceIp": "192.168.196.186",
                "cognitoAuthenticationType": null,
                "cognitoAuthenticationProvider": null,
                "userArn": null,
                "userAgent": "PostmanRuntime/2.4.5",
                "user": null
            },
            "resourcePath": "/{proxy+}",
            "httpMethod": "POST",
            "apiId": "gy415nuibc"
        },
        "body": "{\r\n\t\"a\": 1\r\n}",
        "isBase64Encoded": false
    }
}
Now you have access to all headers, url params, body etc. and you could use that to handle requests differently in a single Lambda function (basically implementing your own routing).
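The proxy event above is language-agnostic; purely as an illustration, here is a minimal sketch of that kind of in-function routing in C#, using the request/response types from the Amazon.Lambda.APIGatewayEvents package (the GetData/AddData/SignUp handlers are hypothetical stand-ins for real business logic):
using System.Collections.Generic;
using Amazon.Lambda.Core;
using Amazon.Lambda.APIGatewayEvents;

public class Router
{
    public APIGatewayProxyResponse FunctionHandler(
        APIGatewayProxyRequest request, ILambdaContext context)
    {
        // Route on the HTTP method plus the path captured by {proxy+}.
        switch (request.HttpMethod + " " + request.Path)
        {
            case "GET /getData":  return Ok(GetData(request.QueryStringParameters));
            case "POST /addData": return Ok(AddData(request.Body));
            case "POST /signUp":  return Ok(SignUp(request.Body));
            default:
                return new APIGatewayProxyResponse { StatusCode = 404, Body = "Not found" };
        }
    }

    private static APIGatewayProxyResponse Ok(string body)
    {
        return new APIGatewayProxyResponse { StatusCode = 200, Body = body };
    }

    // Hypothetical stand-ins for the real handlers.
    private static string GetData(IDictionary<string, string> qs) { return "..."; }
    private static string AddData(string body) { return "..."; }
    private static string SignUp(string body) { return "..."; }
}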
In my opinion there are both advantages and disadvantages to this approach, and many of them depend on your specific use case:
Deployment: if each lambda function is discrete then you can deploy them independently, which might reduce the risk from code changes (microservices strategy). Conversely you may find that needing to deploy functions separately adds complexity and is burdensome.
Self Description: API Gateway's interface makes it extremely intuitive to see the layout of your RESTful endpoints -- the nouns and verbs are all visible at a glance. Implementing your own routing could come at the expense of this visibility.
Lambda sizing and limits: if you proxy everything to one function, you'll wind up needing to choose a memory size, timeout, etc. that accommodate all of your RESTful endpoints. If you create discrete functions, you can more carefully choose the memory footprint, timeout, dead-letter behavior, etc. that best meet the needs of the specific invocation.
I would have commented to just add a couple of points to Dave Maple's great answer but I don't have enough reputation points yet so I'll add the comments here.
I started to head down the path of multiple endpoints pointing to one Lambda function that treated each endpoint differently by inspecting the 'resource' property of the event. After trying it, I have now separated them into distinct functions, for the reasons Dave suggested, plus:
I find it easier to go through logs and monitors when the functions are separated.
One nuance that as a beginner I didn't pick up on at first is that you can have one code base and deploy the exact same code as multiple Lambda functions. This allows you to have the benefits of function separation and the benefits of a consolidated approach in your code base.
You can use the AWS CLI to automate tasks across the multiple functions to reduce/eliminate the downside of managing separate functions. For example, I have a script that updates 10 functions with the same code.
I've been building 5-6 microservices with Lambda and API Gateway, and have been through several rounds of trial, failure, and success.
In short, from my experience, it's better to delegate all the API calls to Lambda with just one API Gateway wildcard mapping, such as
/api/{proxy+} -> Lambda
If you have ever used a framework like Grape, you know that when building APIs, features like
"middleware"
"global exception handling"
"cascade routing"
"parameter validation"
are really crucial.
As your API grows, it becomes almost impossible to manage all the routes through API Gateway mappings, and API Gateway supports none of those features either.
Furthermore, it's not really practical to break out a separate Lambda for each endpoint, whether for development or deployment.
from your example,
api.com/getData --> getData
api.com/addData --> addData
api.com/signUp --> signUp
Imagine you have a data ORM, user-authentication logic, and a common view file (such as data.erb)... how are you going to share those?
You might break things up like
api/auth/{proxy+} -> AuthServiceLambda
api/data/{proxy+} -> DataServiceLambda
but not per endpoint. You might also look up the microservice concept and best practices on how to split up a service.
For those web-framework-like features, check out this web framework we built for Lambda; I needed it at my company.
A similar scenario is addressed in the official AWS blog post Best practices for organizing larger serverless applications.
The general recommendation is to split "monolithic lambdas" into separate lambdas and move the routing to the API Gateway.
This is what the blog writes about the "monolithic lambda" approach:
This approach is generally unnecessary, and it's often better to take advantage of the native routing functionality available in API Gateway.
...
API Gateway is also capable of validating parameters, reducing the need for checking parameters with custom code. It can also provide protection against unauthorized access, and a range of other features more suited to be handled at the service level.
(The post illustrates the change with two diagrams: going from a single monolithic function behind API Gateway to one function per route, with the routing handled by the Gateway.)
In AWS, the responsibility for mapping API requests to Lambda functions is handled through an API Gateway's API specification.
Mapping URL paths and HTTP methods, as well as data validation, SHOULD be left to the Gateway. There is also the question of permissions and API scope; you won't be able to leverage API scopes and IAM permission levels in the normal way.
In terms of coding, replicating this mechanism inside a Lambda handler is an anti-pattern. Going down that route, you soon end up with something that looks like the routing for a Node Express server, not a Lambda function.
After having set up 50+ Lambdas behind API Gateway, I can say that function handlers should be kept as dumb as possible, so that they remain reusable independently of the context from which they're invoked.
As far as I know, AWS allows only one handler per Lambda function. That’s why I have created a little "routing" mechanism with Java Generics (for stronger type checks at compile time). In the following example you can call multiple methods and pass different object types to the Lambda and back via one Lambda handler:
Lambda class with handler:
public class GenericLambda implements RequestHandler<LambdaRequest<?>, LambdaResponse<?>> {

    @Override
    public LambdaResponse<?> handleRequest(LambdaRequest<?> lambdaRequest, Context context) {
        switch (lambdaRequest.getMethod()) {
            case WARMUP:
                context.getLogger().log("Warmup");
                LambdaResponse<String> lambdaResponseWarmup = new LambdaResponse<String>();
                lambdaResponseWarmup.setResponseStatus(LambdaResponse.ResponseStatus.IN_PROGRESS);
                return lambdaResponseWarmup;
            case CREATE:
                User user = (User) lambdaRequest.getData();
                context.getLogger().log("insert user with name: " + user.getName()); // insert user in db
                LambdaResponse<String> lambdaResponseCreate = new LambdaResponse<String>();
                lambdaResponseCreate.setResponseStatus(LambdaResponse.ResponseStatus.COMPLETE);
                return lambdaResponseCreate;
            case READ:
                context.getLogger().log("read user with id: " + (Integer) lambdaRequest.getData());
                user = new User(); // create user object for test, instead of reading from db
                user.setName("name");
                LambdaResponse<User> lambdaResponseRead = new LambdaResponse<User>();
                lambdaResponseRead.setData(user);
                lambdaResponseRead.setResponseStatus(LambdaResponse.ResponseStatus.COMPLETE);
                return lambdaResponseRead;
            default:
                LambdaResponse<String> lambdaResponseIgnore = new LambdaResponse<String>();
                lambdaResponseIgnore.setResponseStatus(LambdaResponse.ResponseStatus.IGNORED);
                return lambdaResponseIgnore;
        }
    }
}
LambdaRequest class:
public class LambdaRequest<T> {

    private Method method;
    private T data;
    private int languageID;

    public static enum Method {
        WARMUP, CREATE, READ, UPDATE, DELETE
    }

    public LambdaRequest() {
    }

    public Method getMethod() {
        return method;
    }

    public void setMethod(Method method) {
        this.method = method;
    }

    public T getData() {
        return data;
    }

    public void setData(T data) {
        this.data = data;
    }

    public int getLanguageID() {
        return languageID;
    }

    public void setLanguageID(int languageID) {
        this.languageID = languageID;
    }
}
LambdaResponse class:
public class LambdaResponse<T> {

    private ResponseStatus responseStatus;
    private T data;
    private String errorMessage;

    public LambdaResponse() {
    }

    public static enum ResponseStatus {
        IGNORED, IN_PROGRESS, COMPLETE, ERROR, COMPLETE_DUPLICATE
    }

    public ResponseStatus getResponseStatus() {
        return responseStatus;
    }

    public void setResponseStatus(ResponseStatus responseStatus) {
        this.responseStatus = responseStatus;
    }

    public T getData() {
        return data;
    }

    public void setData(T data) {
        this.data = data;
    }

    public String getErrorMessage() {
        return errorMessage;
    }

    public void setErrorMessage(String errorMessage) {
        this.errorMessage = errorMessage;
    }
}
Example POJO User class:
public class User {

    private String name;

    public User() {
    }

    public String getName() {
        return name;
    }

    public void setName(String name) {
        this.name = name;
    }
}
JUnit test method:
@Test
public void GenericLambda() {
    GenericLambda handler = new GenericLambda();
    Context ctx = createContext();

    // test WARMUP
    LambdaRequest<String> lambdaRequestWarmup = new LambdaRequest<String>();
    lambdaRequestWarmup.setMethod(LambdaRequest.Method.WARMUP);
    LambdaResponse<String> lambdaResponseWarmup = (LambdaResponse<String>) handler.handleRequest(lambdaRequestWarmup, ctx);

    // test READ user
    LambdaRequest<Integer> lambdaRequestRead = new LambdaRequest<Integer>();
    lambdaRequestRead.setData(1); // db id
    lambdaRequestRead.setMethod(LambdaRequest.Method.READ);
    LambdaResponse<User> lambdaResponseRead = (LambdaResponse<User>) handler.handleRequest(lambdaRequestRead, ctx);
}
PS: if you get deserialization problems (LinkedTreeMap cannot be cast to ...) in your Lambda function (because of the generics/Gson), use the following statement:
YourObject yourObject = (YourObject)convertLambdaRequestData2Object(lambdaRequest, YourObject.class);
Method:
private <T> Object convertLambdaRequestData2Object(LambdaRequest<?> lambdaRequest, Class<T> clazz) {
    Gson gson = new Gson();
    String json = gson.toJson(lambdaRequest.getData());
    return gson.fromJson(json, clazz);
}
The way I see it, choosing a single API vs. multiple APIs is a function of the following considerations:
Security: I think this is the biggest challenge of a single-API structure. You may need different security profiles for different parts of the requirement.
Think about the microservice model from a business perspective:
The whole purpose of any API is to serve requests, so it must be well understood and easy to use, and related endpoints should be combined. For example, if you have a mobile client that needs to pull 10 kinds of things in and out of the DB, it makes sense to have those 10 endpoints in a single API.
But this should be within reason and seen in the context of the overall solution design. For example, if you design a payroll product, you might have separate modules for leave management and user-details management. Even if they are often used by a single client, they should still be different APIs, because their business meanings are different.
Reusability: applies to both code and functionality. Code reusability is the easier problem to solve: build common modules for shared requirements and package them as libraries.
Functionality reusability is harder. To my mind, most cases can be solved by redesigning the way endpoints/functions are laid out, because needing to duplicate functionality means the initial design was not detailed enough.
I just found a link in another SO post which summarizes this better.

RavenDB - Can I create/query an index, from an application that doesn't share the CLR objects that were persisted?

This post goes some way to answering this question (I'll include the answer later), but I was hoping for some further details.
We have a number of applications that each need to access/manipulate data from Raven in their own way. Data is only written via the main web application. Other apps include batch-style tasks, reporting etc. In an attempt to keep each of these as de-coupled as possible, they are separate solutions.
That being the case, how can I, from the reporting application, create indexes over the existing data, using my locally defined types?
The answer from the linked question states
As long as the structure of the classes you are deserializing into partially matches the structure of the data, it shouldn't make a difference.
The RavenDB server doesn't care at all what classes you use in the client. You certainly could share a dll, or even share a portable dll if you are targeting a different platform. But you are correct that it is not necessary.
However, you should be aware of the Raven-Clr-Type metadata value. The RavenDB client sets this when storing the original document. It is consumed back by the client to assist with deserialization, but it is not fully enforced
It's the first part of that that I wanted clarification on. Do the object graphs for the docs on the server and types in my application have to match exactly? If the Click document on the server is
{
    "Visit": {
        "Version": "0",
        "Domain": "www.mydomain.com",
        "Page": "/index",
        "QueryString": "",
        "IPAddress": "127.0.0.1",
        "Guid": "10cb6886-cb5c-46f8-94ed-4b0d45a5e9ca",
        "MetaData": {
            "Version": "1",
            "CreatedDate": "2012-11-09T15:11:03.5669038Z",
            "UpdatedDate": "2012-11-09T15:11:03.5669038Z",
            "DeletedDate": null
        }
    },
    "ResultId": "Results/1",
    "ProductCode": "280",
    "MetaData": {
        "Version": "1",
        "CreatedDate": "2012-11-09T15:14:26.1332596Z",
        "UpdatedDate": "2012-11-09T15:14:26.1332596Z",
        "DeletedDate": null
    }
}
Is it possible (and if so, how?), to create a Map index from my application, which defines the Click class as follows?
class Click
{
    public Guid Guid { get; set; }
    public int ProductCode { get; set; }
    public DateTime CreatedDate { get; set; }
}
Or would my class have to be look like this? (where the custom types are defined as a sub-set of the properties on the document above, with matching property names)
class Click
{
    public Visit Visit { get; set; }
    public int ProductCode { get; set; }
    public MetaData MetaData { get; set; }
}
UPDATE
Following on from the answer below, here's the code I managed to get working.
Index
public class Clicks_ByVisitGuidAndProductCode : AbstractIndexCreationTask
{
    public override IndexDefinition CreateIndexDefinition()
    {
        return new IndexDefinition
        {
            Map =
                "from click in docs.Clicks select new { Guid = click.Visit.Guid, ProductCode = click.ProductCode, CreatedDate = click.MetaData.CreatedDate }",
            TransformResults =
                "results.Select(click => new { Guid = click.Visit.Guid, ProductCode = click.ProductCode, CreatedDate = click.MetaData.CreatedDate })"
        };
    }
}
Query
var query = _documentSession.Query<ReportClick, Clicks_ByVisitGuidAndProductCode>()
    .Customize(x => x.WaitForNonStaleResultsAsOfNow())
    .Where(x => x.CreatedDate >= start.Date && x.CreatedDate < end.Date);
where ReportClick is
public class ReportClick
{
    public Guid Guid { get; set; }
    public int ProductCode { get; set; }
    public DateTime CreatedDate { get; set; }
}
Many thanks @MattJohnson.
If the shape is a partial match, then it will fill in where it can, so your second example would work OK.
You can, however, create an index that projects the results you're showing in your first example. You would map by whatever you actually were going to filter or sort by, and then you would add a TransformResults section:
TransformResults = (database, clicks) =>
    from click in clicks
    select new {
        click.Visit.Guid,
        click.ProductCode,
        click.MetaData.CreatedDate
    };
When you query this index, the results come out in the shape you specified in the transform. This is a feature called "Live Projections", which you can read more about here. (You won't need an .As() call; just use .Query<Click, YourIndex>() and it should work fine.)
Separately: what you are doing with MetaData is extraneous. Raven keeps metadata separate from the document. Read more on metadata here.
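For completeness, here is roughly how that metadata is reached from the client side (a sketch; session stands for an IDocumentSession, and Raven-Clr-Type is the metadata key quoted in the question):
// Metadata travels beside the document rather than inside it.
var metadata = session.Advanced.GetMetadataFor(click);
var clrType = metadata.Value<string>("Raven-Clr-Type");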
It looks like you have versioning concerns. If you are just keeping an audit trail, you should look at Raven's standard Versioning Bundle. If you have temporal effectivity concerns, consider using my new Temporal Versioning Bundle.

Why does storing a Nancy.DynamicDictionary in RavenDB only save the property-names and not the property-values?

I am trying to save (RavenDB build 960) the names and values of form-data items passed into a Nancy module via its built-in Request.Form.
If I save a straightforward instance of a dynamic object (with test properties and values) then everything works and both the property names and values are saved. However, if I use Nancy's Request.Form then only the dynamic property names are saved.
I understand that I will have to deal with further issues to do with restoring the correct types when retrieving the dynamic data (RavenJObjects etc) but for now, I want to solve the problem of saving the dynamic names / values in the first place.
Here is the entire test request and code:
Fiddler Request (PUT)
Nancy Module
Put["/report/{name}/add"] = parameters =>
{
    reportService.AddTestDynamic(Db, parameters.name, Request.Form);
    return HttpStatusCode.Created;
};
Service
public void AddTestDynamic(IDocumentSession db, string name, dynamic data)
{
    var testDynamic = new TestDynamic
    {
        Name = name,
        Data = data
    };
    db.Store(testDynamic);
    db.SaveChanges();
}
TestDynamic Class
public class TestDynamic
{
    public string Name;
    public dynamic Data;
}
Dynamic contents of Request.Form at runtime
Resulting RavenDB Document
{
    "Name": "test",
    "Data": [
        "username",
        "age"
    ]
}
Note: the type of Request.Form is Nancy.DynamicDictionary. I think this may be the problem, since it implements IEnumerable<string> rather than something like IEnumerable<KeyValuePair<string, object>>. I think RavenDB is enumerating the DynamicDictionary and only getting back the dynamic member names rather than the member name/value pairs.
Can anybody tell me how or whether I can treat the Request.Form as a dynamic object with respect to saving it to RavenDB? If possible I want to avoid any hand-crafted enumeration of DynamicDictionary to build a dynamic instance so that RavenDB can serialise correctly.
Thank You
Edit 1 #Ayende
The DynamicDictionary appears to implement the GetDynamicMemberNames() method:
Taking a look at the code on GitHub reveals the following implementation:
public override IEnumerable<string> GetDynamicMemberNames()
{
    return dictionary.Keys;
}
Is this what you would expect to see here?
Edit 2 #TheCodeJunkie
Thanks for the code update. To test this I have:
Created a local clone of the NancyFx/Nancy master branch from
GitHub
Added the Nancy.csproj to my solution and referenced the project
Run the same test as above
RavenDB Document from new DynamicDictionary
{
    "Name": "test",
    "Data": {
        "$type": "Nancy.DynamicDictionary, Nancy",
        "username": {},
        "age": {}
    }
}
You can see that the resulting document is an improvement. The DynamicDictionary type information is now being correctly picked up by RavenDB and whilst the dynamic property-names are correctly serialized, unfortunately the dynamic property-values are not.
The image below shows the new look DynamicDictionary in action. It all looks fine to me, the new Dictionary interface is clearly visible. The only thing I noticed was that the dynamic 'Results view' (as opposed to the 'Dynamic view') in the debugger, shows just the property-names and not their values. The 'Dynamic view' shows both as before (see image above).
Contents of DynamicDictionary at run time
biofractal,
The problem is the DynamicDictionary: in JSON, a type can be either an object or a list; it can't be both.
For dynamic object serialization, we rely on the implementation of GetDynamicMemberNames() to get the properties, and I assume that it isn't there.
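If a hand-rolled conversion turns out to be unavoidable after all, it can at least be kept to one small adapter. A sketch, assuming the DynamicDictionary indexer returns a DynamicDictionaryValue and that its wrapped object is reachable (the Value property here is an assumption about Nancy's API):
// Flatten the form into a plain dictionary so RavenDB serializes
// name/value pairs instead of just the member names.
var form = (DynamicDictionary)Request.Form;
var data = new Dictionary<string, object>();
foreach (var key in form.GetDynamicMemberNames())
{
    data[key] = ((DynamicDictionaryValue)form[key]).Value; // assumed accessor
}
reportService.AddTestDynamic(Db, parameters.name, data);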

Passing IList<T> vs. IEnumerable<T> with protobuf-net

I noticed in the protobuf-net changelog that IList<> is supported, but I'm getting a "Cannot create an instance of an interface" exception. If I change to IEnumerable<> then life is good. Does this sound correct?
// Client call
public override IList<IApplicationWorkflow> Execute(IRoleManagement service)
{
    IList<ApplicationWorkflowMessagePart> list = service.RetrieveWorkflows(roleNames);
    IList<IApplicationWorkflow> workflows = new List<IApplicationWorkflow>(list.Count);
    foreach (ApplicationWorkflowMessagePart a in list)
    {
        workflows.Add(new ApplicationWorkflowImpl(a));
    }
    return workflows;
}
// Service contract
[OperationContract, ProtoBehavior]
[ServiceKnownType(typeof (ServiceFault))]
[FaultContract(typeof (ServiceFault))]
IList<ApplicationWorkflowMessagePart> RetrieveWorkflows(string[] roleNames);
// Service implementation
public IList<ApplicationWorkflowMessagePart> RetrieveWorkflows(string[] roleNames)
{
    IList<IApplicationWorkflow> workflows = manager.RetrieveApplicationWorkflows(roleNames);
    IList<ApplicationWorkflowMessagePart> workflowParts = new List<ApplicationWorkflowMessagePart>();
    if (workflows != null)
    {
        foreach (IApplicationWorkflow workflow in workflows)
        {
            workflowParts.Add(
                ModelMediator.GetMessagePart<ApplicationWorkflowMessagePart, IApplicationWorkflow>(workflow));
        }
    }
    return workflowParts;
}
Thanks,
Mike
Also, is there a documentation site that has this and other answers? I hate to be asking newb questions. :)
Currently it will support IList<T> as a property, as long as it doesn't have to create it - i.e. allowing things like (attributes not shown for brevity):
class Order {
    private IList<OrderLine> lines = new List<OrderLine>();
    public IList<OrderLine> Lines { get { return lines; } }
}
I would have to check, but for similar reasons, I expect it would work with Merge, but not Deserialize (which is what the WCF hooks use). However, I can't think of a reason it couldn't default to List<T>... it just doesn't at the moment.
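To illustrate the distinction (a sketch only, against the v1-era static Serializer API; whether the interface-rooted call fails in exactly this way is the part that would need checking):
// Merge fills an instance the caller supplies, so protobuf-net never
// has to construct anything from an interface type:
var existing = new List<ApplicationWorkflowMessagePart>();
Serializer.Merge(stream, existing);

// Deserialize must construct the root itself; for an interface there is
// no concrete type to instantiate, hence
// "Cannot create an instance of an interface":
var list = Serializer.Deserialize<IList<ApplicationWorkflowMessagePart>>(stream);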
The simplest option is probably to stick with List<T> / T[] - but I can have a look if you want... but I'm in a "crunch" on a (work) project at the moment, so I can't lift the bonnet today.
Re "this and other answers"... there is a google group, but that is not just protobuf-net (protobuf-net is simply one of many "protocol buffers" implementations).
You are also free to log an issue on the project site. I do mean to collate the FAQs and add them to the wiki on the site - but time is not always my friend...
But hey! I'm here... ;-p

Encapsulating common logic (domain driven design, best practices)

Updated: 09/02/2009 - Revised question, provided better examples, added bounty.
Hi,
I'm building a PHP application using the data mapper pattern between the database and the entities (domain objects). My question is:
What is the best way to encapsulate a commonly performed task?
For example, one common task is retrieving one or more site entities from the site mapper, and their associated (home) page entities from the page mapper. At present, I would do that like this:
$siteMapper = new Site_Mapper();
$site = $siteMapper->findById(1);
$pageMapper = new Page_Mapper();
$site->addPage($pageMapper->findHome($site->getId()));
Now that's a fairly trivial example, but it gets more complicated in reality, as each site also has an associated locale, and the page actually has multiple revisions (although for the purposes of this task I'd only be interested in the most recent one).
I'm going to need to do this (get the site and its associated home page, locale, etc.) in multiple places within my application, and I can't think of the best way/place to encapsulate this task so that I don't have to repeat it all over the place. Ideally I'd like to end up with something like this:
$someObject = new SomeClass();
$site = $someObject->someMethod(1); // or
$sites = $someObject->someOtherMethod();
Where the resulting site entities already have their associated entities created and ready for use.
The same problem occurs when saving these objects back. Say I have a site entity and associated home page entity, and they've both been modified, I have to do something like this:
$siteMapper->save($site);
$pageMapper->save($site->getHomePage());
Again, trivial, but this example is simplified. Duplication of code still applies.
In my mind it makes sense to have some sort of central object that could take care of:
Retrieving a site (or sites) and all necessary associated entities
Creating new site entities with new associated entities
Taking a site (or sites) and saving it and all associated entities (if they've changed)
So back to my question, what should this object be?
The existing mapper object?
Something based on the repository pattern?*
Something based on the unit of work pattern?*
Something else?
* I don't fully understand either of these, as you can probably guess.
Is there a standard way to approach this problem, and could someone provide a short description of how they'd implement it? I'm not looking for anyone to provide a fully working implementation, just the theory.
Thanks,
Jack
Using the repository/service pattern, your Repository classes would provide a simple CRUD interface for each of your entities, then the Service classes would be an additional layer that performs additional logic like attaching entity dependencies. The rest of your app then only utilizes the Services. Your example might look like this:
$site = $siteService->getSiteById(1); // or
$sites = $siteService->getAllSites();
Then inside the SiteService class you would have something like this:
function getSiteById($id) {
    $site = $siteRepository->getSiteById($id);
    foreach ($pageRepository->getPagesBySiteId($site->id) as $page)
    {
        $site->pages[] = $page;
    }
    return $site;
}
I don't know PHP that well so please excuse if there is something wrong syntactically.
[Edit: this entry attempts to address the fact that it is oftentimes easier to write custom code to directly deal with a situation than it is to try to fit the problem into a pattern.]
Patterns are nice in concept, but they don't always "map". After years of high end PHP development, we have settled on a very direct way of handling such matters. Consider this:
File: Site.php
class Site
{
    public static function Select($ID)
    {
        //Ensure current user has access to ID
        //Lookup and return data
    }

    public static function Insert($aData)
    {
        //Validate $aData
        //In the event of errors, raise a ValidationError($ErrorList)
        //Do whatever it is you are doing
        //Return new ID
    }

    public static function Update($ID, $aData)
    {
        //Validate $aData
        //In the event of errors, raise a ValidationError($ErrorList)
        //Update necessary fields
    }
}
Then, in order to call it (from anywhere), just run:
$aData = Site::Select(123);
Site::Update(123, array('FirstName' => 'New First Name'));
$ID = Site::Insert(array(...));
One thing to keep in mind about OO programming and PHP... PHP does not keep "state" between requests, so creating an object instance just to have it immediately destroyed does not often make sense.
I'd probably start by extracting the common task to a helper method somewhere, then waiting to see what the design calls for. It feels like it's too early to tell.
What would you name this method ? The name usually hints at where the method belongs.
class Page {
    public $id, $title, $url;

    public function __construct($id=false) {
        $this->id = $id;
    }

    public function save() {
        // ...
    }
}

class Site {
    public $id = '';
    public $pages = array();

    function __construct($id) {
        $this->id = $id;
        foreach ($this->getPages() as $page_id) {
            $this->pages[] = new Page($page_id);
        }
    }

    private function getPages() {
        // ...
    }

    public function addPage($url) {
        $page = ($this->pages[] = new Page());
        $page->url = $url;
        return $page;
    }

    public function save() {
        foreach ($this->pages as $page) {
            $page->save();
        }
        // ..
    }
}
$site = new Site($id);
$page = $site->addPage('/');
$page->title = 'Home';
$site->save();
Make your Site object an Aggregate Root to encapsulate the complex association and ensure consistency.
Then create a SiteRepository that has the responsibility of retrieving the Site aggregate and populating its children (including all Pages).
You will not need a separate PageRepository (assuming that you don't make Page a separate Aggregate Root), and your SiteRepository should have the responsibility of retrieving the Page objects as well (in your case by using your existing Mappers).
So:
$siteRepository = new SiteRepository($myDbConfig);
$site = $siteRepository->findById(1); // will have Page children attached
And then the findById method would be responsible for also finding all Page children of the Site. This will have a similar structure to the answer CodeMonkey1 gave; however, I believe you will benefit more from the Aggregate and Repository patterns than from creating a specific Service for this task. Any other retrieval/querying/updating of the Site aggregate, including any of its child objects, would be done through the same SiteRepository.
Edit: Here's a short DDD Guide to help you with the terminology, although I'd really recommend reading Evans if you want the whole picture.