I am new to Perl 6. I have the following code in my Atom editor, but I still don't understand how it works. I copied the code from docs.raku.org, but it seemed not to work, so I changed it to this:
use v6;

class HTTPHeader { ... }

class HTTPHeader does Associative {
    has %!fields handles <self.AT-KEY self.EXISTS-KEY self.DELETE-KEY self.push
                          list kv keys values>;
    method Str { say self.hash.fmt; }

    multi method EXISTS-KEY ($key) { %!fields{normalize-key $key}:exists }
    multi method DELETE-KEY ($key) { %!fields{normalize-key $key}:delete }
    multi method push (*@_)        { %!fields.push: @_ }

    sub normalize-key ($key) { $key.subst(/\w+/, *.tc, :g) }

    method AT-KEY (::?CLASS:D: $key) is rw {
        my $element := %!fields{normalize-key $key};

        Proxy.new(
            FETCH => method ()       { $element },
            STORE => method ($value) {
                $element = do given $value».split(/',' \s+/).flat {
                    when 1  { .[0] }    # a single value is stored as a string
                    default { .Array }  # multiple values are stored as an array
                }
            }
        );
    }
}
my $header = HTTPHeader.new;
say $header.WHAT; #-> (HTTPHeader)
"".say;
$header<Accept> = "text/plain";
$header{'Accept-' X~ <Charset Encoding Language>} = <utf-8 gzip en>;
$header.push('Accept-Language' => "fr"); # like .push on a Hash
say $header.hash.fmt;
"".say;
say $header<Accept-Language>.values;
say $header<Accept-Charset>;
The output is:
(HTTPHeader)
Accept text/plain
Accept-Charset utf-8
Accept-Encoding gzip
Accept-Language en fr
(en fr)
utf-8
I know it works, but the document on docs.raku.org is a little different from this: it doesn't have "self" before the AT-KEY method in the handles list. Are there any examples with more detail about this?
Stack Overflow is not really the place to request more detail on a published example. This is the Perl 6 documentation maintained by the community itself; if you have further queries, the most appropriate place is the Perl 6 users mailing list or, failing that, the IRC channel, perhaps.
Now that you've posted it here, though, I'm hesitant to let the question go unaddressed, so here are a couple of things to consider:
Firstly, the example you raised is about implementing associative subscripting on a custom (i.e. user-defined) class - it's not typical territory for a self-described newbie. I think you would be better off looking at and implementing the examples in the Perl 6 Intro by Naoum Hankache, whose site has been very well received.
Option 1 - Easy implementation via delegation
Secondly, it's critical to understand that the example shows three options for implementing associative subscripting. The first and simplest uses delegation to a private hash attribute: Perl 6 implements associative and positional subscripts (for built-in types) by calling well-defined methods on the object implementing the collection type. By adding the handles trait to the definition of the %!fields attribute, you simply pass those method calls on to %!fields, which - being a hash - knows how to handle them.
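To see option 1 in isolation, here is a minimal sketch (the method list is taken from the docs example; the class name is arbitrary):
use v6;

# Option 1 on its own: delegate the subscript API straight to a private hash.
class Header does Associative {
    has %!fields handles <AT-KEY EXISTS-KEY DELETE-KEY push list kv keys values>;
}

my $h = Header.new;
$h<Accept> = "text/plain";  # assignment reaches %!fields via the delegated AT-KEY
say $h<Accept>;             # text/plain
say $h<accept>:exists;      # False - plain hash keys are case-sensitive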
Option 2 - Flexible keys
To quote the example:
However, HTTP header field names are supposed to be case-insensitive (and preferred in camel-case). We can accommodate this by taking the *-KEY and push methods out of the handles list, and implementing them separately...
Delegating all key-handling methods to the internal hash means you get hash-like interpretation of your keys - in particular, they are case-sensitive, as hash keys are. To avoid that, you take all key-related methods out of the handles clause and implement them yourself. In the example, keys are run through the "normalizer" before being used as indexes into %!fields, making them case-insensitive.
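You can see what the normalizer does to a key before it reaches %!fields with a quick one-liner:
sub normalize-key ($key) { $key.subst(/\w+/, *.tc, :g) }

say normalize-key('accept-language');  # Accept-Language
say normalize-key('Accept-Language');  # Accept-Language
# Note: .tc only titlecases the first letter of each word,
# so an all-caps key like 'ACCEPT' would stay 'ACCEPT'.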
Option 3 - Flexible values
The final part of the example shows how you can control the interpretation of values as they go into the hash-like container. Up to this point, values supplied by assigning to an instance of this custom container had to be either a string or an array of strings. The extra control is achieved by removing the AT-KEY method defined in option 2 and replacing it with a method that supplies a Proxy object. The proxy object's STORE method will be called when you assign to the container; that method scans the supplied string value(s) for ", " (note: the whitespace after the comma is compulsory) and, if found, accepts the string value as a specification of several string values. At least, that's what I think it does.
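Stripped of the class around it, the Proxy trick looks like this - a sketch; in the real example the storage lives inside %!fields rather than in a closure:
sub comma-aware-cell() is rw {
    my $storage;
    Proxy.new(
        FETCH => method ()       { $storage },
        STORE => method ($value) {
            my @parts = $value.split(/',' \s+/);
            $storage = @parts == 1 ?? @parts[0] !! @parts;
        }
    );
}

my $cell := comma-aware-cell();  # note the binding
$cell = "gzip";                  # no ", " - kept as a single string
say $cell;                       # gzip
$cell = "gzip, deflate";         # ", " present - split into an array
say $cell;                       # [gzip deflate]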
So, the example has a lot more packed into it than it looks. You ran into trouble - as Brad pointed out in the comments - because you sort of mashed option 1 together with option 3 when you copied the example.
In ASP.NET Core you have multiple ways to generate a URL for a controller action, the newest being tag helpers.
Using tag helpers for GET requests, asp-route is used to specify route parameters. From what I understand, it is not supported to use complex objects in a route request. And sometimes a page can have many different links pointing to itself, possibly with a minor addition to the URL for each link.
To me it seems wrong that any modification to a controller action's signature requires changing all tag helpers that use that action. I.e. if one adds string query to the controller, one must add query to the model and add asp-route-query="@Model.Query" in 20 different places spread across cshtml files. This approach sets the code up for future bugs.
Is there a more elegant way of handling this? For example, some way of having a request object? (I.e. a request object from the controller can be put into the Model and fed back into the action URL.)
In my other answer I found a way to provide the request object through the Model.
From the SO article @tseng provided I found a smaller solution. This one does not use a request object in the Model, but retains all route parameters unless explicitly overridden. It won't allow you to specify the route through a request object, which is most often not what you want anyway. But it solves the problem in the OP.
<a asp-controller="Test" asp-action="HelloWorld" asp-all-route-data="#Context.GetQueryParameters()" asp-route-somestring="optional override">Link</a>
This requires an extension method to convert query parameters into a dictionary.
public static Dictionary<string, string> GetQueryParameters(this HttpContext context)
{
return context.Request.Query.ToDictionary(d => d.Key, d => d.Value.ToString());
}
There's a rationale here that I don't think you're getting. GET requests are intentionally simplistic: they are supposed to describe a specific resource. They do not have bodies, because you're not supposed to be passing complex data objects in the first place. That's not how the HTTP protocol is designed.
Additionally, query string params should generally be optional. If some bit of data is required in order to identify the resource, it should be part of the main URI (i.e. the path). As such, neglecting to add something like a query param should simply result in the full data set being returned instead of some subset defined by the query. Or, in the case of something like a search page, it will generally result in a form being presented to the user to collect the query. In other words, your action should account for that param being missing and handle that situation accordingly.
Long and short: no, there is no "elegant" way to handle this, I suppose, but the reason for that is that there doesn't need to be. If you're designing your routes and actions correctly, it's generally not an issue.
To solve this I'd like to have a request object used as route parameters for the anchor TagHelper. This means that all route links are defined in only one location, not throughout the solution. Changes made to the request object model automatically propagate to the URL for <a asp-action> tags.
The benefit of this is reducing the number of places in the code we need to change when changing the method signature of a controller action. We localize the change to the model and the action only.
I thought writing a tag helper for a custom asp-object-route could help. I looked into chaining TagHelpers so mine could run before AnchorTagHelper, but that does not work. Creating instances and nesting them requires me to hardcode all properties of ASP.NET Core's AnchorTagHelper, which may require maintenance in the future. I also considered using a custom method with UrlHelper to build the URL, but then the TagHelper would not work.
The solution I landed on is to use asp-all-route-data as suggested by @kirk-larkin, along with an extension method for serializing to a Dictionary. Any asp-route-* attribute will override values in asp-all-route-data.
<a asp-controller="Test" asp-action="HelloWorld" asp-all-route-data="#Model.RouteParameters.ToDictionary()" asp-route-somestring="optional override">Link</a>
ASP.Net Core can deserialize complex objects (including lists and child objects).
public IActionResult HelloWorld(HelloWorldRequest request) { }
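For illustration, a hypothetical request object (the property names here are mine, not from the original post) that the model binder can populate from the query string:
using System.Collections.Generic;

// Hypothetical request object; property names are illustrative only.
public class HelloWorldRequest
{
    public string Query { get; set; }          // ?Query=hello
    public int Page { get; set; }              // &Page=2
    public FilterOptions Filter { get; set; }  // &Filter.Active=true
    public List<string> Tags { get; set; }     // &Tags[0]=a&Tags[1]=b
}

public class FilterOptions
{
    public bool Active { get; set; }
}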
The request object (when used) would typically have only a few simple properties, but I thought it would be nice if it supported child objects as well. Serializing an object into a Dictionary is usually done using reflection, which can be slow. I figured Newtonsoft.Json would be more optimized than writing simple reflection code myself, and found this implementation ready to go:
using System.Collections.Generic;
using System.Globalization;
using System.Linq;
using Newtonsoft.Json.Linq;

public static class ExtensionMethods
{
    public static IDictionary<string, string> ToDictionary(this object metaToken)
    {
        // From https://geeklearning.io/serialize-an-object-to-an-url-encoded-string-in-csharp/
        if (metaToken == null)
        {
            return null;
        }

        JToken token = metaToken as JToken;
        if (token == null)
        {
            return ToDictionary(JObject.FromObject(metaToken));
        }

        if (token.HasValues)
        {
            var contentData = new Dictionary<string, string>();
            foreach (var child in token.Children().ToList())
            {
                var childContent = child.ToDictionary();
                if (childContent != null)
                {
                    contentData = contentData.Concat(childContent)
                        .ToDictionary(k => k.Key, v => v.Value);
                }
            }
            return contentData;
        }

        var jValue = token as JValue;
        if (jValue?.Value == null)
        {
            return null;
        }

        var value = jValue.Type == JTokenType.Date
            ? jValue.ToString("o", CultureInfo.InvariantCulture)
            : jValue.ToString(CultureInfo.InvariantCulture);

        return new Dictionary<string, string> { { token.Path, value } };
    }
}
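Continuing the hypothetical HelloWorldRequest from above, the keys come out as JSON paths, which is exactly the shape the model binder understands on the way back in (a sketch, untested):
var routeData = new HelloWorldRequest
{
    Query = "hello",
    Page = 2,
    Filter = new FilterOptions { Active = true },
    Tags = new List<string> { "a" }
}.ToDictionary();
// => { ["Query"] = "hello", ["Page"] = "2", ["Filter.Active"] = "True", ["Tags[0]"] = "a" }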
I'm new to unit / integration testing and I want to do an integration test of my controller which looks simplified like this:
// ItemsController.php
public function edit() {
    // some edited item
    $itemEntity = /* ... */;
    // some keywords
    $keywordEntities = [/* keyword1, keyword2, ... */];

    // save item entity
    if (!$this->Items->save($itemEntity)) {
        // do some error handling
    }

    // add/replace item's keywords
    if (!$this->Items->Keywords->replaceLinks($itemEntity, $keywordEntities)) {
        // do some error handling
    }
}
I have the models Items and Keywords, where Items belongsToMany Keywords. I want to test the error handling parts of the controller, so I have to mock the save() and replaceLinks() methods so that they return false.
My integration test looks like this:
// ItemsControllerTest.php
public function testEdit() {
// mock save method
$model = $this->getMockForModel('Items', ['save']);
$model->expects($this->any())->method('save')->will($this->returnValue(false));
// call the edit method of the controller and do some assertions...
}
This works fine for the save() method, but not for the replaceLinks() method - obviously, because it is not part of the model.
I've also tried something like this:
$method = $this->getMockBuilder(BelongsToMany::class)
    ->setConstructorArgs([
        'Keywords', [
            'foreignKey' => 'item_id',
            'targetForeignKey' => 'keyword_id',
            'joinTable' => 'items_keywords'
        ]
    ])
    ->setMethods(['replaceLinks'])
    ->getMock();
$method->expects($this->any())->method('replaceLinks')->will($this->returnValue(false));
But this is also not working. Any hints for mocking the replaceLinks() method?
When doing controller tests, I usually try to mock as little as possible. Personally, if I want to test error handling in controllers, I try to trigger actual errors, for example by providing data that fails application/validation rules. If that is a viable option, then you might want to give it a try.
That being said, mocking the association's method should work the way shown in your example, but you'd also need to replace the actual association object with your mock, because unlike models, associations do not have a global registry in which the mocks could be placed so that your application code would use them without further intervention (that's what getMockForModel() does for models).
Something like this should do it:
$KeywordsAssociationMock = $this
    ->getMockBuilder(BelongsToMany::class) /* ... */;

$associations = $this
    ->getTableLocator()
    ->get('Items')
    ->associations();

$associations->add('Keywords', $KeywordsAssociationMock);
This modifies the Items table object in the table registry, replacing its actual Keywords association with the mocked one (the association collection's add() acts more like a setter, i.e. it overwrites). If you use this together with mocking Items, then you must ensure that the Items mock is created beforehand, as otherwise the table retrieved in the above example would not be the mocked one!
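Putting both pieces together in the required order might look like this (assembled from the code in the question and in this answer; an untested sketch):
// 1. Create the Items mock first, so that the table registry holds the mock.
$items = $this->getMockForModel('Items', ['save']);
$items->expects($this->any())->method('save')->will($this->returnValue(false));

// 2. Build the association mock.
$keywords = $this->getMockBuilder(BelongsToMany::class)
    ->setConstructorArgs(['Keywords', [
        'foreignKey' => 'item_id',
        'targetForeignKey' => 'keyword_id',
        'joinTable' => 'items_keywords',
    ]])
    ->setMethods(['replaceLinks'])
    ->getMock();
$keywords->expects($this->any())->method('replaceLinks')->will($this->returnValue(false));

// 3. Swap the association on the mocked table; get('Items') returns the mock here.
$this->getTableLocator()->get('Items')->associations()->add('Keywords', $keywords);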
How can I get my JSON from Web API to contain only the value or null for Option types and Discriminated Unions, preferably using Newtonsoft?
I am currently using Newtonsoft and only have to add this to Web API for it to work:
config.Formatters.JsonFormatter.SerializerSettings <- new JsonSerializerSettings()
When I consume the data on my side, I can easily convert it back to an F# item using: JsonConvert.DeserializeObject<'a>(json)
The API will be consumed by non-.NET clients as well, so I would like a more standard JSON result.
I would like to fix my issue without having to add code or decorators to all of my records/DUs in order for it to work. I have lots of records with lots of properties, some of which are Option.
For example, this is how a DU is serializing:
// When value
"animal": {
    "case": "Dog"
}

// When no value
"animal": null
This is what I need:
// When value
"animal": "Dog"
// When no value
"animal": null
This is how an Option type is serializing:
"DocumentInfo": {
"case": "Some",
"fields": [
{
"docId": "77fb9dd0-bfbe-42e0-9d29-d5b1f5f0a9f7",
"docType": "Monkey Business",
"docName": "mb.doc",
"docContent": "why cant it just give me the values?"
}
]
}
This is what I need:
"DocumentInfo": {
"docId": "77fb9dd0-bfbe-42e0-9d29-d5b1f5f0a9f7",
"docType": "Monkey Business",
"docName": "mb.doc",
"docContent": "why cant it just give me the values?"
}
Thank you :-)
You could try using Chiron. I haven't used it myself so I can't give you an extensive example, but https://neoeinstein.github.io/blog/2015/12-13-chiron-json-ducks-monads/index.html has some bits of sample code. (And see https://neoeinstein.github.io/blog/2016/04-02-chiron-computation-expressions/index.html as well for some nicer syntax). Basically, Chiron knows how to serialize and deserialize the basic F# types (strings, numbers, options, etc.) already, and you can teach it to serialize any other type by providing that type with two static methods, ToJson and FromJson:
static member ToJson (x: DocumentInfo) = json {
    do! Json.write "docId" x.docId
    do! Json.write "docType" x.docType
    do! Json.write "docName" x.docName
    do! Json.write "docContent" x.docContent
}

static member FromJson (_: DocumentInfo) = json {
    let! i = Json.read "docId"
    let! t = Json.read "docType"
    let! n = Json.read "docName"
    let! c = Json.read "docContent"
    return { docId = i; docType = t; docName = n; docContent = c }
}
By providing those two static methods on your DocumentInfo type, Chiron will automatically know how to serialize a DocumentInfo option. At least, that's my understanding -- but the Chiron documentation is sadly lacking (by which I mean literally lacking: it hasn't been written yet), so I haven't really used it myself. So this may or may not be the answer you need, but hopefully it'll be of some help to you even if you don't end up using it.
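If I'm reading the Chiron API correctly from those blog posts, the serialization call itself would then be something like this, where docInfo is some DocumentInfo value (unverified):
open Chiron

// Json.serialize picks up the static ToJson member; Json.format renders the string.
let jsonText = docInfo |> Json.serialize |> Json.format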
I have found a solution that allows me to use Newtonsoft (JSON.NET), apply custom converters for my types where needed, and not require any changes to my DUs or records.
The short answer is: create a custom converter for Json.NET and use the ReadJson/WriteJson overrides:
type CustomDuConverter() =
    inherit JsonConverter() (...)
Unfortunately, the converters I have found online don't work as-is for my needs listed above, but they will with slight modification. A great example to look at is: https://gist.github.com/isaacabraham/ba679f285bfd15d2f53e
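For the value-or-null behavior asked for in the question, the write side of such a converter might look roughly like this (a sketch along the lines of the gist, not production code; the read side is omitted, and Json.NET writes nulls - i.e. None - on its own without consulting the converter):
open System
open Microsoft.FSharp.Reflection
open Newtonsoft.Json

type CustomDuConverter() =
    inherit JsonConverter()

    override this.CanConvert(t: Type) = FSharpType.IsUnion t

    override this.WriteJson(writer: JsonWriter, value: obj, serializer: JsonSerializer) =
        let case, fields = FSharpValue.GetUnionFields(value, value.GetType())
        match fields with
        | [||]    -> writer.WriteValue(case.Name)     // e.g. Dog -> "Dog"
        | [| x |] -> serializer.Serialize(writer, x)  // e.g. Some doc -> { ... }
        | xs      -> serializer.Serialize(writer, xs)

    override this.ReadJson(reader, objectType, existingValue, serializer) =
        failwith "left out here - see the linked gist for a full round-trip version"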
To apply your custom serializer in Web API for every call, use:
config.Formatters.JsonFormatter.SerializerSettings.Converters.Add(new CustomDuConverter())
To deserialize, use (an example that deserializes to a DU):
JsonConvert.DeserializeObject<Animal>("\"Dog\"", customConverter)
ex:
type Animal = Dog | Cat
json:
"animal": "Dog"
This will allow you to create a clean API for consumers and allow you to consume third-party JSON data into your types that use Option, etc.
My adapter uses findHasMany to load child records for a hasMany relationship.
My findHasMany adapter method is directly based on the test case for findHasMany. It retrieves the contents of the hasMany on demand, and eventually performs the following two operations:
store.loadMany(type, hashes);
// ...
store.loadHasMany(record, relationship.key, ids);
(The full code for the findHasMany is below, in case the issue is there, but I don't think so.)
The really strange behavior is: it seems that somewhere within loadHasMany (or in some subsequent async process), only the first and last child records get their inverse belongsTo property set, even though all the child records are added to the hasMany side. I.e., if posts/1 has 10 comments, this is what I get after everything has loaded:
var post = App.Posts.find('1');
post.get('comments').objectAt(0).get('post'); // <App.Post:ember123:1>
post.get('comments').objectAt(1).get('post'); // null
post.get('comments').objectAt(2).get('post'); // null
// ...
post.get('comments').objectAt(8).get('post'); // null
post.get('comments').objectAt(9).get('post'); // <App.Post:ember123:1>
My adapter is a subclass of DS.RESTAdapter, and I don't think I'm overloading anything in my adapter or serializer that would cause this behavior.
Has anybody seen something like this before? It's weird enough I though someone might know why it's happening.
Extra
Using findHasMany lets me load the contents of the hasMany only when the property is accessed (valuable in my case because calculating the array of IDs would be expensive). So say I have the classic posts/comments example models; the server returns this for posts/1:
{
    post: {
        id: 1,
        text: "Linkbait!",
        comments: "/posts/1/comments"
    }
}
Then my adapter can retrieve /posts/1/comments on demand, which looks like this:
{
    comments: [
        {
            id: 201,
            text: "Nuh uh"
        },
        {
            id: 202,
            text: "Yeah huh"
        },
        {
            id: 203,
            text: "Nazi Germany"
        }
    ]
}
Here is the code for the findHasMany method in my adapter:
findHasMany: function(store, record, relationship, details) {
    var type = relationship.type;
    var root = this.rootForType(type);
    var url = (typeof(details) == 'string' || details instanceof String) ? details : this.buildURL(root);
    var query = relationship.options.query ? relationship.options.query(record) : {};

    this.ajax(url, "GET", {
        data: query,
        success: function(json) {
            var serializer = this.get('serializer');
            var pluralRoot = serializer.pluralize(root);
            var hashes = json[pluralRoot]; // FIXME: Should call some serializer method to get this?
            store.loadMany(type, hashes);

            // add ids to record...
            var ids = [];
            var len = hashes.length;
            for (var i = 0; i < len; i++) {
                ids.push(serializer.extractId(type, hashes[i]));
            }
            store.loadHasMany(record, relationship.key, ids);
        }
    });
}
Solution
Override the DS.RelationshipChange.getByReference method by inserting the following code into your app:
DS.RelationshipChange.prototype.getByReference = function(reference) {
    var store = this.store;

    // return null or undefined if the original reference was null or undefined
    if (!reference) { return reference; }

    if (reference.record) {
        return reference.record;
    }

    return store.materializeRecord(reference);
};
Yes, this is overriding a private, internal method in Ember Data. Yes, it may break at any time with any update. I'm pretty sure this is a bug in Ember Data, but I'm not 100% certain this is the right solution. But it does solve this problem, and possibly other relationship-related problems.
This fix is designed to be applied to Ember Data master as of 29 Apr 2013.
Reason
DS.Store.loadHasMany calls DS.Model.hasManyDidChange, which retrieves references for all the child records and then sets the hasMany's content to the array of references. This kicks off a chain of observers, eventually calling DS.ManyArray.arrayContentDidChange, whose first line is this._super.apply(this, arguments);, calling the superclass method Ember.Array.arrayContentDidChange. That Ember.Array method includes an optimization that caches the first and last object in the array and calls objectAt on only those two array members. So there's the part that singles out the first and last record.
Next, since DS.RecordArray implements an objectAtContent method (from Ember.ArrayProxy), the objectAtContent implementation calls DS.Store.recordForReference, which in turn calls DS.Store.materializeRecord. This last function adds a record property to the reference that is passed in as a side effect.
Now we get to what I think is a bug. In DS.ManyArray.arrayContentDidChange, after calling the superclass method, it loops through all the new references and creates a DS.RelationshipChangeAdd instance that encapsulates the owner and child record references. But the first line inside the loop is:
var reference = get(this, 'content').objectAt(i);
Unlike what happens above to the first and last record, this calls objectAt directly on the Ember.NativeArray and bypasses the ArrayProxy methods, including the objectAtContent hook. This means that DS.Store.materializeRecord - which adds the record property on the reference object - may never have been called on some references.
Next, the relationship changes created in the loop are applied immediately afterward (in the same run loop) with this call tree: DS.RelationshipChangeAdd.sync -> DS.RelationshipChange.getFirstRecord -> DS.RelationshipChange.getByReference. This last method expects the reference object to have a record property. However, the record property is only set on the first and last reference objects, for the reasons explained above. Therefore, for all but the first and last records, the relationship fails to be established because it doesn't have access to the child record object!
The above fix calls DS.Store.materializeRecord whenever the record property doesn't exist on the reference; the last line in the function is the only thing added. On the one hand, it looks like this was the original intention: the var store = this.store; line in the original declares a variable that isn't otherwise used in the function, so what's it there for? Also, without the added line, the function doesn't always return a value, which is a little unusual for a function that is expected to do so. On the other hand, this could lead to mass materialization in some cases where that would be undesirable (but the relationships just won't work without it in some cases, it seems).
Possibly related
The "chain of observers" I mentioned takes a bit of an odd path. The initiating event was setting the content property on a DS.ManyArray, which extends Ember.ArrayProxy--therefore the content property has a dependent property arrangedContent. Importantly, the observers on arrangedContent are executed before observers on content are executed (see Ember.propertyDidChange). However, the default implementation of Ember.ArrayProxy.arrangedContentArrayDidChange simply calls Ember.Array.arrayContentDidChange, which DS.ManyArray implements! The point being, this looks like a recipe for some code to execute in an unintended order. That is, I think Ember.ManyArray.arrayContentDidChange may getting executed earlier than expected. If this is the case, the above mentioned code that expects the record property to already exist on all references may have been expecting this reasonably, as one of the observers directly on the content property may call DS.Store.materializeRecord on each reference. But I haven't dug deep enough to find out if this is true.
Updated: 09/02/2009 - Revised question, provided better examples, added bounty.
Hi,
I'm building a PHP application using the data mapper pattern between the database and the entities (domain objects). My question is:
What is the best way to encapsulate a commonly performed task?
For example, one common task is retrieving one or more site entities from the site mapper, and their associated (home) page entities from the page mapper. At present, I would do that like this:
$siteMapper = new Site_Mapper();
$site = $siteMapper->findById(1);
$pageMapper = new Page_Mapper();
$site->addPage($pageMapper->findHome($site->getId()));
Now that's a fairly trivial example, but it gets more complicated in reality, as each site also has an associated locale, and the page actually has multiple revisions (although for the purposes of this task I'd only be interested in the most recent one).
I'm going to need to do this (get the site and associated home page, locale, etc.) in multiple places within my application, and I can't think of the best way/place to encapsulate this task so that I don't have to repeat it all over the place. Ideally I'd like to end up with something like this:
$someObject = new SomeClass();
$site = $someObject->someMethod(1); // or
$sites = $someObject->someOtherMethod();
Where the resulting site entities already have their associated entities created and ready for use.
The same problem occurs when saving these objects back. Say I have a site entity and an associated home page entity, and they've both been modified; I have to do something like this:
$siteMapper->save($site);
$pageMapper->save($site->getHomePage());
Again, trivial, but this example is simplified. Duplication of code still applies.
In my mind it makes sense to have some sort of central object that could take care of:
Retrieving a site (or sites) and all necessary associated entities
Creating new site entities with new associated entities
Taking a site (or sites) and saving it and all associated entities (if they've changed)
So back to my question, what should this object be?
The existing mapper object?
Something based on the repository pattern?*
Something based on the unit of work pattern?*
Something else?
* I don't fully understand either of these, as you can probably guess.
Is there a standard way to approach this problem, and could someone provide a short description of how they'd implement it? I'm not looking for anyone to provide a fully working implementation, just the theory.
Thanks,
Jack
Using the repository/service pattern, your Repository classes would provide a simple CRUD interface for each of your entities, then the Service classes would be an additional layer that performs additional logic like attaching entity dependencies. The rest of your app then only utilizes the Services. Your example might look like this:
$site = $siteService->getSiteById(1); // or
$sites = $siteService->getAllSites();
Then inside the SiteService class you would have something like this:
function getSiteById($id) {
    $site = $siteRepository->getSiteById($id);
    foreach ($pageRepository->getPagesBySiteId($site->id) as $page)
    {
        $site->pages[] = $page;
    }
    return $site;
}
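The save side would follow the same shape, keeping all persistence wiring in the service (a sketch; the method names saveSite/savePage are illustrative):
function saveSite($site) {
    $siteRepository->saveSite($site);
    foreach ($site->pages as $page) {
        $pageRepository->savePage($page);
    }
}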
I don't know PHP that well so please excuse if there is something wrong syntactically.
[Edit: this entry attempts to address the fact that it is oftentimes easier to write custom code to directly deal with a situation than it is to try to fit the problem into a pattern.]
Patterns are nice in concept, but they don't always "map". After years of high end PHP development, we have settled on a very direct way of handling such matters. Consider this:
File: Site.php
class Site
{
    public static function Select($ID)
    {
        // Ensure current user has access to ID
        // Lookup and return data
    }

    public static function Insert($aData)
    {
        // Validate $aData
        // In the event of errors, raise a ValidationError($ErrorList)
        // Do whatever it is you are doing
        // Return new ID
    }

    public static function Update($ID, $aData)
    {
        // Validate $aData
        // In the event of errors, raise a ValidationError($ErrorList)
        // Update necessary fields
    }
}
Then, in order to call it (from anywhere), just run:
$aData = Site::Select(123);
Site::Update(123, array('FirstName' => 'New First Name'));
$ID = Site::Insert(array(...))
One thing to keep in mind about OO programming and PHP... PHP does not keep "state" between requests, so creating an object instance just to have it immediately destroyed often does not make sense.
I'd probably start by extracting the common task to a helper method somewhere, then waiting to see what the design calls for. It feels like it's too early to tell.
What would you name this method? The name usually hints at where the method belongs.
class Page {
    public $id, $title, $url;

    public function __construct($id = false) {
        $this->id = $id;
    }

    public function save() {
        // ...
    }
}

class Site {
    public $id = '';
    public $pages = array();

    function __construct($id) {
        $this->id = $id;
        foreach ($this->getPages() as $page_id) {
            $this->pages[] = new Page($page_id);
        }
    }

    private function getPages() {
        // ...
    }

    public function addPage($url) {
        $page = ($this->pages[] = new Page());
        $page->url = $url;
        return $page;
    }

    public function save() {
        foreach ($this->pages as $page) {
            $page->save();
        }
        // ..
    }
}
$site = new Site($id);
$page = $site->addPage('/');
$page->title = 'Home';
$site->save();
Make your Site object an Aggregate Root to encapsulate the complex association and ensure consistency.
Then create a SiteRepository that has the responsibility of retrieving the Site aggregate and populating its children (including all Pages).
You will not need a separate PageRepository (assuming that you don't make Page a separate Aggregate Root), and your SiteRepository should have the responsibility of retrieving the Page objects as well (in your case by using your existing Mappers).
So:
$siteRepository = new SiteRepository($myDbConfig);
$site = $siteRepository->findById(1); // will have Page children attached
And then the findById method would be responsible for also finding all Page children of the Site. This will have a similar structure to the answer CodeMonkey1 gave; however, I believe you will benefit more by using the Aggregate and Repository patterns rather than creating a specific Service for this task. Any other retrieval/querying/updating of the Site aggregate, including any of its child objects, would be done through the same SiteRepository.
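A sketch of what that repository could look like, reusing the mappers from the question (anything beyond those mapper calls is illustrative):
class SiteRepository
{
    private $siteMapper;
    private $pageMapper;

    public function __construct(Site_Mapper $siteMapper, Page_Mapper $pageMapper)
    {
        $this->siteMapper = $siteMapper;
        $this->pageMapper = $pageMapper;
    }

    public function findById($id)
    {
        // Retrieve the aggregate root, then populate its children.
        $site = $this->siteMapper->findById($id);
        $site->addPage($this->pageMapper->findHome($site->getId()));
        return $site;
    }

    public function save($site)
    {
        // Persist the root and all of its children together.
        $this->siteMapper->save($site);
        $this->pageMapper->save($site->getHomePage());
    }
}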
Edit: Here's a short DDD Guide to help you with the terminology, although I'd really recommend reading Evans if you want the whole picture.