I have an events table where I log different kinds of events, e.g. login, register, etc., distinguished by the type column.
I'd like to have different models (or any other suitable solution) for each event type, like EventLogin. Thus it would look like I have different "tables" for each event type, even though they are saved in the same table.
Edit: of course, I don't want to think about it every time; i.e. I would like to write EventLogin::where("user_id", "=", $user->id) and not have to remember to add ->where("type", "=", "login").
Is it possible?
I think this might interest you:
http://vimeo.com/53183075 (morphMany() / morphTo() usage around 7:00)
Otherwise, #TheShiftExchange is right, so I upvoted him.
Is it possible?
Yes.
Just have different model files. You'll need to set the table name explicitly, since your model name won't match the table:
protected $table = 'your_table';
Otherwise it will work as a normal table.
You can apply this same principle to forms. Rather than have one model for each table, you can instead have one model for each form - and keep all the form logic in one place.
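If you also want the type filter applied automatically (per the edit in the question), an Eloquent global scope can handle it. A minimal sketch, assuming Laravel 5.2+; the events table and the login type value come from the question:

    use Illuminate\Database\Eloquent\Model;
    use Illuminate\Database\Eloquent\Builder;

    class EventLogin extends Model
    {
        // All event models share the same underlying table.
        protected $table = 'events';

        protected static function boot()
        {
            parent::boot();

            // Applied to every query, so EventLogin::where('user_id', $user->id)
            // never needs an explicit ->where('type', 'login').
            static::addGlobalScope('type', function (Builder $query) {
                $query->where('type', 'login');
            });

            // New records get the right type by default.
            static::creating(function ($model) {
                $model->type = 'login';
            });
        }
    }

Repeat the same pattern (with a different type value) for EventRegister and the rest.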
Is there a way to set up something like a <SelectInput> filter on a column of the list to get only the distinct values of that column?
Something like <ReferenceInput>, but on the same table and with unique values...
No, but for good reason. Say you have a table with billions of records. You don't want your frontend determining what is unique; instead you want an API that can serve that data specifically, and hopefully quickly.
So long story short, you'll need an API for that.
Along the lines of what Shawn K says, perhaps create a View on your backend that represents the current set of distinct values, acknowledging that it might be stale/non-realtime. Then you could use the contents of that View as the choices available to the user. If generating the distinct set of values is non-performant, and you're on a DB like Postgres (et al.), create a Materialized View, refreshed on a timer.
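For the Postgres case, a minimal sketch of that Materialized View (the table and column names are illustrative):

    CREATE MATERIALIZED VIEW distinct_categories AS
    SELECT DISTINCT category
    FROM   big_table;

    -- REFRESH ... CONCURRENTLY requires a unique index on the view
    -- and lets reads continue while it rebuilds.
    CREATE UNIQUE INDEX ON distinct_categories (category);

    -- Run on a timer (cron, pg_cron, etc.) to keep the choices fresh enough.
    REFRESH MATERIALIZED VIEW CONCURRENTLY distinct_categories;

Your API endpoint then just selects from distinct_categories.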
Binding the View's data to the <SelectInput> then becomes the trick, but there are probably clues to doing that here on SO, and you could piece the two together.
BTW, I use Views regularly to handle certain edge cases like this. Beats caching data in a middle tier for sure.
I have a User ndb.Model with a username StringProperty that allows upper and lower case letters. At some point I wanted to fetch users by username, but with the case forced to lowercase for the filtering. Therefore I added a ComputedProperty to User, username_lower, which returns the lowercase version of the username as follows:
@ndb.ComputedProperty
def username_lower(self):
    return self.username.lower()
then I filter the query like so:
query = query.filter(User.username_lower==username_input.lower())
This works; however, it only does so for users created (put) after I added the property to the model. Users created before that don't get matched by this query. At first I thought the ComputedProperty wasn't working for the older users, but calling .username_lower directly on an old user does work.
Finally, I found that a workaround is to fetch all users and just run a .put_multi(all_users).
So it seems a ComputedProperty added to the model later works when you invoke it directly, but doesn't filter at first. Does it not get indexed automatically? Or could it be a caching thing..?
Any insight into why it behaves like this would be welcome.
Thanks
This is the expected behaviour. The value of a ComputedProperty (or any property, for that matter) is indexed when the object is put. The datastore does not do automatic schema updates or anything like that. When you update your schema, you need to either allow for different schema versions in your code or update your entities individually. In the case of changes to indexing, you have no choice but to update your entities. The MapReduce API can be used for updating entities while avoiding request limitations and the like.
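If fetching all users in one request is too much, a cursor-based batch re-put does the same job. A minimal sketch (the batch size is arbitrary; User is the model from the question):

    from google.appengine.ext import ndb

    def reindex_users(batch_size=200):
        cursor = None
        while True:
            users, cursor, more = User.query().fetch_page(
                batch_size, start_cursor=cursor)
            if users:
                # put() recomputes username_lower and writes it to the index
                ndb.put_multi(users)
            if not more:
                break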
I have a Post model that has one huge column (full_html). So instead of doing a select "posts".* or whatever, I want to select every field except full_html by default (and only load it when the attribute is actually accessed).
My current solution is:
Post.select(Post.column_names.map(&:to_sym) - [:full_html]).where(...)
but it's pretty gross
Here is a similar SO question regarding blobs. The last two answers open up a couple of alternatives that you might want to check out. I was going to suggest something similar to the second-to-last, where you store the full HTML in a different model and then associate the two, but that may open up other performance issues.
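If you stay with a single model, one way to avoid repeating that select everywhere is a default scope plus a lazy accessor. A minimal sketch, assuming Rails 4+ (note default_scope with select has its own caveats, e.g. unscoped bypasses it):

    class Post < ActiveRecord::Base
      # Every query omits full_html unless you ask for it explicitly.
      default_scope { select(column_names - ['full_html']) }

      def full_html
        # Fetch the column on demand if it wasn't selected.
        if has_attribute?('full_html')
          super
        else
          @full_html ||= self.class.unscoped.where(id: id).pluck(:full_html).first
        end
      end
    end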
I am fairly new to nHibernate and DDD, so please bear with me.
I have a requirement to create a new report from my SQL table. The report is read-only and will be bound to a GridView control in an ASP.NET application.
The report contains the following fields: Style, Color, Size, LAQty, MTLQty, Status.
I have entities for Style, Color and Size, which I use in other ASP.NET pages via repositories. I am not sure if I should use the same entities for my report or not. If I do use them, where am I supposed to map the Qty and Status fields?
If I should not use the same entities, should I create a new class for the report?
As I said, I am new to this and just trying to learn and code properly.
Thank you
For reports it's usually easier to use plain values or special DTOs. Of course you can query for the entities that reference all the information, but to put it into a list (e.g. using data binding) it's handier to have a single class that holds all the values directly.
To get solutions more specific than the few below, you need to tell us a little about your domain model. What does the class model look like?
Generally, you have at least three options to get "plain" values from the database using NHibernate.
Write HQL that returns an array of values
For instance:
select e1.Style, e1.Color, e1.Size, e2.LAQty, e2.MTLQty
from Entity1 e1 inner join e1.Entity2 e2
where (some condition)
The result will be a list of object[]. Every item in the list is a row; every item in the object[] is a column. This is quite like SQL, but on a higher level (you describe the query at the entity level) and it is database independent.
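To make that concrete, a hedged sketch of running such a query from C# (the entity names, association, and condition are made up for illustration; session is an open NHibernate ISession):

    IList<object[]> rows = session.CreateQuery(
            @"select e1.Style, e1.Color, e1.Size, e2.LAQty, e2.MTLQty
              from Entity1 e1 inner join e1.Entity2 e2
              where e1.Status = :status")
        .SetParameter("status", "Open")
        .List<object[]>();

    foreach (object[] row in rows)
    {
        string style = (string)row[0];  // values come back by position
        int laQty = (int)row[3];
    }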
Or you create a DTO (data transfer object) just to hold one row of the result:
select new ReportDto(e1.Style, e1.Color, e1.Size, e2.LAQty, e2.MTLQty)
from Entity1 e1 inner join e1.Entity2 e2
where (some condition)
ReportDto needs a constructor that takes all these arguments. The result is a list of ReportDto.
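A sketch of what that DTO might look like (the property types are guesses; note that HQL resolves ReportDto by name, so the class has to be registered with an <import> in a mapping file):

    public class ReportDto
    {
        // The parameter order must match the select new ReportDto(...) clause.
        public ReportDto(string style, string color, string size, int laQty, int mtlQty)
        {
            Style = style;
            Color = color;
            Size = size;
            LAQty = laQty;
            MTLQty = mtlQty;
        }

        // A parameterless constructor and settable properties keep the same
        // class usable with the AliasToBean transformer in option 3 below.
        public ReportDto() { }

        public string Style { get; set; }
        public string Color { get; set; }
        public string Size { get; set; }
        public int LAQty { get; set; }
        public int MTLQty { get; set; }
    }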
Or you use the Criteria API (recommended):
session.CreateCriteria(typeof(Entity1), "e1")
    // navigate the mapped association from Entity1 to Entity2
    .CreateCriteria("Entity2", "e2")
    .Add( /* some condition */ )
    .SetProjection(Projections.ProjectionList()
        .Add(Projections.Property("e1.Style"), "Style")
        .Add(Projections.Property("e1.Color"), "Color")
        .Add(Projections.Property("e1.Size"), "Size")
        .Add(Projections.Property("e2.LAQty"), "LAQty")
        .Add(Projections.Property("e2.MTLQty"), "MTLQty"))
    // Transformers.AliasToBean lives in NHibernate.Transform
    .SetResultTransformer(Transformers.AliasToBean(typeof(ReportDto)))
    .List<ReportDto>();
The ReportDto needs a property matching the name of each alias: "Style", "Color", etc. The output is a list of ReportDto.
I'm not schooled in DDD exactly, but I've always modeled my nouns as types, and I'm surprised the report itself is an entity. DDD or not, I wouldn't do that; rather, I'd have my reports reflect the results of a query, in which quantity is presumably count(*) or sum(lineItem.quantity) and status is also calculated (perhaps in the page).
You haven't described your domain, but there is a clue in those column headings that you may be doing a pivot over the data to create LAQty and MTLQty, which you'll find hard to do in NHibernate, as it's designed for OLTP and did not even do UNION last I checked. That said, there is nothing wrong with abusing HQL (Hibernate Query Language) for lightweight reporting, as long as you understand you are abusing it.
I see Stefan has done a grand job of describing the syntax for that, so I'll stop there :-)
I am trying to figure out the best way to model a spreadsheet (from the database point of view), taking into account:
The spreadsheet can contain a variable number of rows.
The spreadsheet can contain a variable number of columns.
Each column holds a single value per row, but its type is not known in advance (integer, date, string).
It has to be easy (and performant) to generate a CSV file containing the data.
I am thinking about something like:

    class Spreadsheet(models.Model):
        name = models.CharField(max_length=100)
        creation_date = models.DateField()

    class Column(models.Model):
        spreadsheet = models.ForeignKey(Spreadsheet, on_delete=models.CASCADE)
        name = models.CharField(max_length=100)
        # declared type of the column's values (integer, date, string, ...)
        type = models.CharField(max_length=100)

    class Cell(models.Model):
        column = models.ForeignKey(Column, on_delete=models.CASCADE)
        row_number = models.IntegerField()
        # the value is stored as text and cast according to Column.type
        value = models.CharField(max_length=100)
Can you think of a better way to model a spreadsheet? My approach stores every value as a string, and I am worried that this will be too slow when generating the CSV file.
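A sketch of how the CSV could be generated from this model with one bulk query rather than one query per cell (it assumes the models above and the reverse accessors Django generates by default):

    import csv
    import io

    def spreadsheet_to_csv(spreadsheet):
        columns = list(spreadsheet.column_set.order_by('id'))
        index = {col.id: i for i, col in enumerate(columns)}

        # one query for all cells; group them into rows in memory
        rows = {}
        cells = (Cell.objects
                     .filter(column__spreadsheet=spreadsheet)
                     .values_list('row_number', 'column_id', 'value'))
        for row_number, column_id, value in cells:
            row = rows.setdefault(row_number, [''] * len(columns))
            row[index[column_id]] = value

        out = io.StringIO()
        writer = csv.writer(out)
        writer.writerow([col.name for col in columns])
        for row_number in sorted(rows):
            writer.writerow(rows[row_number])
        return out.getvalue()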
From a relational viewpoint:
Spreadsheet <-->> Cell : RowId, ColumnId, ValueType, Contents
There is no requirement for row and column to be entities, but you can model them that way if you like.
Databases aren't designed for this. But you can try a couple of different ways.
The naive way is a version of One Table To Rule Them All: create a giant generic table, all columns typed as (n)varchar, with enough columns to cover any foreseeable spreadsheet. Then you'll need a second table to store metadata about the first, such as what Column1's spreadsheet column name is, what type it stores (so you can cast in and out), etc. Then you'll need triggers to run against inserts that check the incoming data against the metadata to make sure the data isn't corrupt, etc., etc., etc. As you can see, this way is a complete and utter cluster. I'd run screaming from it.
The second option is to store your data as XML. Most modern databases have XML data types and some support for XPath within queries. You can also use XSDs to provide some kind of data validation, and XSLTs to transform that data into CSVs. I'm currently doing something similar with configuration files, and it's working out okay so far. No word on performance issues yet, but I'm trusting Knuth on that one.
The first option is probably much easier to search and faster to retrieve data from, but the second is probably more stable and definitely easier to program against.
It's times like this I wish Celko had a SO account.
You may want to study EAV (Entity-attribute-value) data models, as they are trying to solve a similar problem.
Entity-Attribute-Value - Wikipedia
The best solution depends greatly on how the database will be used. Try to find a couple of the top use cases you expect, and then decide on the design. For example, if there is no use case for getting the value of a single cell from the database (the data is always loaded at row level, or even in groups of rows), then there is no need to store a 'cell' as such.
That is a good question that calls for many answers, depending on how you approach it; I'd love to share an opinion with you.
This topic is one of the various ones we researched at Zenkit. We even wrote an article about it, and we'd love your opinion on it: https://zenkit.com/en/blog/spreadsheets-vs-databases/