Wagtail: Extend page model

In the past I have created multiple page models for Wagtail.
Example:
class PlainPage(Page):
    body = StreamField(BasicStreamBlock, null=True, blank=True)

    content_panels = Page.content_panels + [
        StreamFieldPanel('body'),
    ]
Now I would like to extend all these pages by giving them the option to be set to no-index.
For this reason I would like to add a boolean field to the promote_panels.
What would be the best way to add this feature to all pages I have already created?
no_index = models.BooleanField(default=False)

promote_panels = Page.promote_panels + [
    FieldPanel('no_index'),
]
What would be the correct Wagtail way, to extend all my Page classes with this code?

Using a Django abstract model as a mixin, it is possible to add fields to all existing page models without too much hassle.
1. Create a Mixin
First, create a new CustomPageMixin (name it whatever you want) that extends the Page model and sets abstract = True in its Meta class.
class CustomPageMixin(Page):
    class Meta:
        abstract = True

    no_index = models.BooleanField(default=False)

    # Pages that define their own promote panels will need to extend THIS promote_panels,
    # e.g. promote_panels = CustomPageMixin.promote_panels + [...]
    promote_panels = Page.promote_panels + [
        FieldPanel('no_index'),
    ]
2. Update ALL existing page models
Update all your page models to use the mixin; instead of extending the Page class, they now extend your mixin directly.
from ... import CustomPageMixin

class StandardPage(CustomPageMixin):
    #...

class HomePage(CustomPageMixin):
    #...
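If one of these pages also defines its own promote panels, extend the mixin's list rather than Page.promote_panels so the no_index field is kept. A sketch (LandingPage and og_description are just illustrative names):
class LandingPage(CustomPageMixin):
    og_description = models.CharField(max_length=255, blank=True)

    # Build on the mixin's promote_panels so the no_index checkbox stays visible.
    promote_panels = CustomPageMixin.promote_panels + [
        FieldPanel('og_description'),
    ]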
3. Run Migrations
Note: This will add the no_index field to ALL pages that now extend your new mixin.
./manage.py makemigrations
./manage.py migrate
Potential Issues with this approach
This may not be the best way to do this, as it is a bit indirect and hard to understand at first glance.
This does not actually change the base Page model's fields, so no_index will only be available when you access the specific model's instance via Page.specific (see the sketch after this list).
It will be a bit more tricky to use this for special Page types such as AbstractEmailForm.
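For example, when rendering a robots meta tag you can go through .specific. A minimal sketch, assuming a helper that receives a possibly non-specific Page instance:
def robots_meta(page):
    # `page` may be a plain Page instance, so use .specific to reach
    # fields defined on the mixin; default to False if the field is absent.
    if getattr(page.specific, 'no_index', False):
        return '<meta name="robots" content="noindex">'
    return ''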

Related

django rest framework: Dynamic serializer and ViewSet

As a disclaimer, I'm new to Django and Django REST framework.
I have a Model that contains metadata columns like last modified date and last modified user. This data should be available in the API for viewing, but will be set automatically by the backend and hence must not be required for creation/update. As far as I understood, I can create a dynamic serializer as shown in the docs.
However, how can I use a dynamic serializer on a ViewSet? Or is that simply not possible?
If you want the last modified date and last modified user to be read only, you do not need to create a DynamicSerializer. All you need to do is to set the fields as read_only on the serializer.
class MyModelSerializer(serializers.ModelSerializer):
    class Meta:
        model = MyModel
        fields = (fields exposed to the API)
        read_only_fields = ("last_modified_date", "last_modified_user")
After creating the serializer, it must be added to the ViewSet:
class MyModelViewSet(viewsets.ModelViewSet):
    queryset = MyModel.objects.all()
    serializer_class = MyModelSerializer
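If the backend should also fill these fields in automatically on write, as the question describes, one option in recent DRF versions is to override perform_create and perform_update on the ViewSet. A sketch, assuming last_modified_user comes from the request and last_modified_date uses auto_now on the model:
class MyModelViewSet(viewsets.ModelViewSet):
    queryset = MyModel.objects.all()
    serializer_class = MyModelSerializer

    # The client never supplies these values; the backend injects them on save.
    def perform_create(self, serializer):
        serializer.save(last_modified_user=self.request.user)

    def perform_update(self, serializer):
        serializer.save(last_modified_user=self.request.user)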

Where to write predefined queries in django?

I am working with a team of engineers, and this is my first Django project.
Since I have done SQL before, I chose to write the predefined queries that the front-end developers are supposed to use to build this page (result set paging, simple find etc.).
I just learned about Django QuerySets, and I am ready to use them, but I do not know in which file/class to write them.
Should I write them as methods inside each class in models.py? The Django documentation simply runs them in the shell, and I haven't seen it say where to put them.
Generally, the Django pattern is that you write your queries in your views, in the views.py file. There you take each of your predefined queries for a given URL and return a response that renders a template (which presumably your front-end team will build with you) or returns a JSON response (for example through Django REST Framework for an SPA front end).
The tutorial is strong on this, so it may be a better bet for where to put things than the docs themselves.
Queries can be run anywhere, but Django is built to receive requests through the URL scheme and return a response. This is typically done in views.py, and each view is generally wired up by a line in the urls.py file.
If you're particularly interested in following the fat-models approach and putting them there, then you might be interested in Manager objects, which are what define the querysets you get through, for example, MyModel.objects.all().
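A minimal sketch of that approach (model and method names are illustrative):
from django.db import models
from django.utils import timezone

class EventManager(models.Manager):
    # Predefined queries live on the manager, so views can call
    # Event.objects.upcoming() instead of repeating filter logic.
    def upcoming(self):
        return self.filter(date__gte=timezone.now())

class Event(models.Model):
    date = models.DateTimeField()

    objects = EventManager()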
My example view (a class-based view which provides information about a list of matches):
class MatchList(generics.ListCreateAPIView):
    """
    List all Matches, or create a new Match.
    """
    queryset = Match.objects.all()
    serializer_class = MatchSerialiser
That queryset could be anything, though.
A function based view with a different queryset would be:
def event(request, event_slug):
    from .models import Event, Comment, Profile

    event = Event.objects.get(event_url=event_slug)
    future_events = Event.objects.filter(date__gt=event.date)
    comments = Comment.objects.select_related('user').filter(event=event)
    final_comments = []
    return render(request, 'core/event.html', {"event": event, "future_events": future_events})
edit: That second example is quite old, and the query would be better refactored to:
future_events = Event.objects.filter(date__gt=event.date).prefetch_related('comments')
Edit edit: It's worth pointing out that QuerySet isn't a language in the way you're using the term. It's Django's API for the object-relational mapper that sits on top of the database, in the same way that SQLAlchemy does; in fact, you can swap in SQLAlchemy instead of the Django ORM if you really want to. Mostly you'll hear people talking about the Django ORM. :)
If you have some model SomeModel and you wanted to access its objects via a raw SQL query you would do: SomeModel.objects.raw(raw_query).
For example: SomeModel.objects.raw('SELECT * FROM myapp_somemodel')
https://docs.djangoproject.com/en/1.11/topics/db/sql/#performing-raw-queries
Django file structure:
app/
    models.py
    views.py
    urls.py
    templates/
        app/
            my_template.html
In models.py:
class MyModel(models.Model):
    # field definition and relations
In views.py:
from .models import MyModel

def my_view(request):
    my_model = MyModel.objects.all()  # here you use the querysets
    return render(request, 'app/my_template.html', {'my_model': my_model})  # pass the queryset to the template
In the urls.py
from .views import my_view
url(r'^myurl/$', my_view, name='my_view'), # here you write the url that points to your view
And finally in my_template.html
{# display the data using the django template language #}
{% for obj in my_model %}
    <p>{{ obj }}</p>
{% endfor %}

Dependent instances in one list

I have a problem in code design. I am trying to read some files and create one or more instances for every file (depending on the content). But some instances depend on other files in the list, so every instance has to know the top class. The following example should illustrate what I mean:
class SetOfAll(object):
    def __init__(self):
        self.matrjoschkas = []

    def add(self, matrjoschka):
        self.matrjoschkas.append(matrjoschka)

    def create_matrjoschkas(self):
        for file in glob.glob('*.txt'):
            self.add(Matrjoschka(file, self))

class Matrjoschka(object):
    def __init__(self, file, container):
        self._container = container
        ...
        if some condition:
            self._container.add(Matrjoschka(..., self._container))
Is there an elegant way to avoid every instance having to know the top class? Because in my case it's a little more complicated, and it would be good if some factory could do it.
Well, there are certainly many ways of doing this, but from what I can see you just need a way to explicitly state the dependencies between files. You could then ask a factory to create a list of files based on the file's source and the configured dependencies.
Pseudo-code:
filesFactory = new FilesFactory({
    file1: ['file2', 'file3']  // file1 depends on file2 and file3
});

filesSource = new GlobFilesSource('*.txt');  // FilesSource could be an abstraction and GlobFilesSource a concrete implementation

allFiles = filesFactory.resolveAllFilesFrom(filesSource);  // ['file1', 'file2', 'file3']
If the dependency conditions are more complex than a simple id matching then you could just configure predicates. Here's a pseudo-code sample using a predicate to achieve the same dependency configuration as above:
[
    {
        predicate: function (currentFiles) {
            return currentFiles.contains('file1');
        },
        files: ['file2', 'file3']
    }
]
This design is much more flexible than yours because not only does the Matrjoschka class not have to know about its container, it also doesn't have to know about the dependency rules.
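A rough Python sketch of the simple id-based version (class and method names are illustrative):
import glob

class FilesFactory:
    """Resolves a full file list from a source plus configured dependencies."""

    def __init__(self, dependencies):
        # e.g. {'file1.txt': ['file2.txt', 'file3.txt']}
        self.dependencies = dependencies

    def resolve_all_files_from(self, paths):
        resolved = list(paths)
        for path in paths:
            for dependency in self.dependencies.get(path, []):
                if dependency not in resolved:
                    resolved.append(dependency)
        return resolved

factory = FilesFactory({'file1.txt': ['file2.txt', 'file3.txt']})
all_files = factory.resolve_all_files_from(glob.glob('*.txt'))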

Django REST framework flat, read-write serializer

In Django REST framework, what is involved in creating a flat, read-write serializer representation? The docs refer to a 'flat representation' (end of the section http://django-rest-framework.org/api-guide/serializers.html#dealing-with-nested-objects) but don't offer examples or anything beyond a suggestion to use a RelatedField subclass.
For instance, how to provide a flat representation of the User and UserProfile relationship, below?
# Model
class UserProfile(models.Model):
    user = models.OneToOneField(User)
    favourite_number = models.IntegerField()

# Serializer
class UserProfileSerializer(serializers.ModelSerializer):
    email = serializers.EmailField(source='user.email')

    class Meta:
        model = UserProfile
        fields = ['id', 'favourite_number', 'email',]
The above UserProfileSerializer doesn't allow writing to the email field, but I hope it expresses the intention sufficiently well. So, how should a 'flat' read-write serializer be constructed to allow a writable email attribute on the UserProfileSerializer? Is it at all possible to do this when subclassing ModelSerializer?
Thanks.
Looking at the Django REST framework (DRF) source, I settled on the view that a DRF serializer is strongly tied to an accompanying Model for unserializing purposes. A Field's source param makes this less so for serializing purposes.
With that in mind, and viewing serializers as encapsulating validation and save behaviour (in addition to their (un)serializing behaviour), I used two serializers: one for each of the User and UserProfile models:
class UserSerializer(serializers.ModelSerializer):
    class Meta:
        model = User
        fields = ['email',]

class UserProfileSerializer(serializers.ModelSerializer):
    email = serializers.EmailField(source='user.email')

    class Meta:
        model = UserProfile
        fields = ['id', 'favourite_number', 'email',]
The source param on the EmailField handles the serialization case adequately (e.g. when servicing GET requests). For unserializing (e.g. when servicing PUT requests) it is necessary to do a little work in the view, combining the validation and save behaviour of the two serializers:
class UserProfileRetrieveUpdate(generics.GenericAPIView):
    def get(self, request, *args, **kwargs):
        # Only UserProfileSerializer is required to serialize data since
        # email is populated by the 'source' param on EmailField.
        serializer = UserProfileSerializer(
            instance=request.user.get_profile())
        return Response(serializer.data)

    def put(self, request, *args, **kwargs):
        # Both UserSerializer and UserProfileSerializer are required
        # in order to validate and save data on their associated models.
        user_profile_serializer = UserProfileSerializer(
            instance=request.user.get_profile(),
            data=request.DATA)
        user_serializer = UserSerializer(
            instance=request.user,
            data=request.DATA)
        if user_profile_serializer.is_valid() and user_serializer.is_valid():
            user_profile_serializer.save()
            user_serializer.save()
            return Response(
                user_profile_serializer.data, status=status.HTTP_200_OK)
        # Combine errors from both serializers.
        errors = dict()
        errors.update(user_profile_serializer.errors)
        errors.update(user_serializer.errors)
        return Response(errors, status=status.HTTP_400_BAD_REQUEST)
First: better handling of nested writes is on its way.
Second: The Serializer Relations docs say of both PrimaryKeyRelatedField and SlugRelatedField that "By default this field is read-write...", so if your email field were unique (is it?) you might be able to use SlugRelatedField and it would just work; I've not tried this yet (see the sketch after this list).
Third: Instead, I've used a plain Field subclass that uses the source="*" technique to accept the whole object. From there I manually pull the related field in to_native and return that; this is read-only. In order to write, I check request.DATA in post_save and update the related object there. This isn't automatic, but it works.
So, Fourth: Looking at what you've already got, my approach (above) amounts to marking your email field as read-only and then implementing post_save to check for an email value and perform the update accordingly.
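For the second point, a minimal sketch of what that could look like (assuming User.email is unique and a DRF version where writable related fields take an explicit queryset):
class UserProfileSerializer(serializers.ModelSerializer):
    # Looks up the related User by its unique email value; read-write by default.
    email = serializers.SlugRelatedField(
        source='user',
        slug_field='email',
        queryset=User.objects.all(),
    )

    class Meta:
        model = UserProfile
        fields = ['id', 'favourite_number', 'email']
Note that on write this reassigns the profile to an existing User with that email rather than changing the current user's email address.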
Although this does not strictly answer the question, I think it will solve your need. The issue may lie more in splitting two models to represent one entity than in DRF itself.
Since Django 1.5 you can use a custom user model. If all you want is a few extra methods and fields, but apart from that you are happy with the Django user, then all you need to do is:
from django.contrib.auth.models import AbstractUser

class MyUser(AbstractUser):
    favourite_number = models.IntegerField()
and in settings: AUTH_USER_MODEL = 'myapp.MyUser'
(And of course a db migration, which could be made quite simple by using the db_table option to point to your existing user table and just adding the new columns there.)
After that, you have the common case which DRF excels at.
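At that point a single flat ModelSerializer covers it (a sketch, using the MyUser model above):
class MyUserSerializer(serializers.ModelSerializer):
    class Meta:
        model = MyUser
        fields = ['id', 'email', 'favourite_number']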

Document serialization with Doctrine MongoDB ODM

I'm trying to code a class that handles serialization of documents by reading their metadata. I got inspired by this implementation for entities with Doctrine ORM and modified it to match how Doctrine ODM handles documents. Unfortunately something is not working correctly: a document is never serialized more than once, even if it is referred to a second time, which results in incomplete serialization.
For example, it outputs this (in JSON) for a user1 (see User document) that belongs to some place1 (see Place document). It then outputs the place and the users belonging to it, where we should see user1 again, but we don't:
{
    "id": "505cac0d6803fa1e15000004",
    "login": "user1",
    "places": [
        {
            "id": "505cac0d6803fa1e15000005",
            "code": "place1",
            "users": [
                {
                    "id": "505c862c6803fa6812000000",
                    "login": "user2"
                }
            ]
        }
    ]
}
I guess it could be related to something preventing circular references, but is there a way around it?
Also, I'm using this in a ZF2 application; would there be a better way to implement this using the ZF2 Serializer?
Thanks for your help.
I have a serializer already written for DoctrineODM. You can find it in http://github.com/superdweebie/DoctrineExtensions - look in lib/Sds/DoctrineExtensions/Serializer.
If you are using zf2, then you might also like http://github.com/superdweebie/DoctrineExtensionsModule, which configures DoctrineExtensions for use in zf2.
To use the Module, install it with composer, as you would any other module. Then add the following to your zf2 config:
'sds' => [
    'doctrineExtensions' => [
        'extensionConfigs' => [
            'Sds\DoctrineExtensions\Serializer' => null,
        ],
    ],
],
To get the serializer use:
$serializer = $serviceLocator->get('Sds\DoctrineExtensions\Serializer');
To use the serializer:
$array = $serializer->toArray($document);
$json = $serializer->toJson($document);
$document = $serializer->fromArray($array);
$document = $serializer->fromJson($json);
There are also some extra annotations available to control serialization, if you want to use them:
@Sds\Setter - specify a non-standard setter for a property
@Sds\Getter - specify a non-standard getter for a property
@Sds\Serializer(@Sds\Ignore) - ignore a property when serializing
It's all still a work in progress, so any comments/improvements would be much appreciated. As you come across issues with these libs, just log them on github and they will get addressed promptly.
Finally a note on serializing embedded documents and referenced documents - embedded documents should be serialized with their parent, while referenced documents should not. This reflects the way data is saved in the db. It also means circular references are not a problem.
Update
I've pushed updates to Sds/DoctrineExtensions/Serializer so that it can now handle references properly. The following three (five) methods have been updated:
toArray/toJson
fromArray/fromJson
applySerializeMetadataToArray
The first two are self-explanatory; the last allows serialization rules to be applied without having to hydrate db results into documents.
By default references will be serialized to an array like this:
[$ref: 'CollectionName/DocumentId']
The $ref style of referencing is what Mongo uses internally, so it seemed appropriate. The format of the reference is given with the expectation it could be used as a URL to a REST API.
The default behaviour can be overridden by defining an alternative ReferenceSerializer like this:
/**
 * @ODM\ReferenceMany(targetDocument="MyTargetDocument")
 * @Sds\Serializer(@Sds\ReferenceSerializer('MyAlternativeSerializer'))
 */
protected $myDocumentProperty;
One alternate ReferenceSerializer is already included with the lib. It is the eager serializer - it will serialize references as if they were embedded documents. It can be used like this:
/**
 * @ODM\ReferenceMany(targetDocument="MyTargetDocument")
 * @Sds\Serializer(@Sds\ReferenceSerializer('Sds\DoctrineExtensions\Serializer\Reference\Eager'))
 */
protected $myDocumentProperty;
Or an alternate shorthand annotation is provided:
/**
 * @ODM\ReferenceMany(targetDocument="MyTargetDocument")
 * @Sds\Serializer(@Sds\Eager)
 */
protected $myDocumentProperty;
Alternate ReferenceSerializers must implement Sds\DoctrineExtensions\Serializer\Reference\ReferenceSerializerInterface
Also, I cleaned up the ignore annotation, so the following annotations can be added to properties to give more fine grained control of serialization:
@Sds\Serializer(@Sds\Ignore('ignore_when_serializing'))
@Sds\Serializer(@Sds\Ignore('ignore_when_unserializing'))
@Sds\Serializer(@Sds\Ignore('ignore_always'))
@Sds\Serializer(@Sds\Ignore('ignore_never'))
For example, put @Sds\Serializer(@Sds\Ignore('ignore_when_serializing')) on an email property; it means the email can be sent up to the server for update, but will never be serialized down to the client, for security.
And lastly, if you hadn't noticed, Sds annotations support inheritance and overriding, so they play nicely with complex document structures.
Another very simple, framework-independent way of transforming a Doctrine ODM Document to an array or JSON: http://ajaxray.com/blog/converting-doctrine-mongodb-document-tojson-or-toarray
This solution gives you a trait that provides toArray() and toJSON() functions for your ODM Documents. After using the trait in your Document, you can do:
<?php
// Assuming you are in a Symfony2 controller;
// if you're not, then build your DocumentManager however you want.
$dm = $this->get('doctrine_mongodb')->getManager();
$report = $dm->getRepository('YourCoreBundle:Report')->find($id);

// Will return a plain PHP array
$docArray = $report->toArray();

// Will return a JSON string
$docJSON = $report->toJSON();
BTW, it will work only on PHP 5.4 and above.