After many years of C/C++, PHP, some Ruby and other languages on one hand, and different projects with different frameworks on the other, I now want to learn Rails.
After working through the Getting Started guides, I think Rails is powerful and fairly easy to learn, and I feel ready to start on something other than a bookshop app.
But a friend warned me about Rails's 'convention over configuration' and the way it 'does things'. I can't see a 'problem' with that, but are there pitfalls?
And: are there things Rails does very differently than other frameworks?
You're going to either get zero replies or a bunch of opinions. I would recommend googling for "rails is opinionated". Hopefully that will turn up more examples of what you might run into.
Is it a problem? No, not really. Can it be a problem? Yes, absolutely.
Integrating with legacy databases can be a PITA sometimes. Or if you have some insane desire to name all your primary keys something other than "id" that can be a problem.
Not so much a problem really, but you're fighting a lot of convention.
Really, other than legacy databases, I can't think of anything off the top of my head that bothers me about its conventions.
What your friend says is only somewhat true. Rails's naming conventions are powerful and keep your brain free for other things.
But: however much experience you think you have, when you are learning Rails you are back in school.
Rails's 'naming conventions' are not just conventions; in a sense, they are Rails. If you break a convention, you are off-road and soon in the middle of nowhere. I think this part of Rails could be pointed out more clearly in the Guides.
Let me give an example: you are tired of "books" and start with a little app around 'Pubs'.
You scaffold your Pup (intentional typo).
You then put in some logic and some work, and then you notice the (oops) typo. Now dark clouds gather. Since you are experienced, you start correcting the typo ... PupsController -> PubsController, the filename of PubsController (you are already off-road) ...
You will end up at the database table 'pups' ... (middle of nowhere)
This happened because you think you are experienced. A beginner would have built a new scaffold (without the typo) or asked here on SO how to correct it 'correctly'.
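To give an idea, the 'correct' correction doesn't stop at class and file names; the table has to follow too, roughly like this (the migration name and the old self.up/self.down style are just my guesses at how you'd do it):
class RenamePupsToPubs < ActiveRecord::Migration
  def self.up
    rename_table :pups, :pubs   # the table has to follow the renamed model
  end
  def self.down
    rename_table :pubs, :pups
  end
end
And that still leaves routes, tests and any hard-coded names to chase down.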
Another example is naming things 'more nicely'. After years and many projects, you are probably someone who never uses "unspecific" names like 'user', 'role', 'guest', 'owner' for classes and so on. So you start to name them (nicely?): PubUser, PubOwner, ... Nobody told you "DON'T".
You put everything in a namespace (there are many people here saying "don't") with the nice name 'PubApp'.
Although your files are well organized, you end up with table names like pub_app_pub_owners and so on, not to mention the names of the join tables between them.
And later on you will type something like
link_to 'add', new_pub_app_pub_pub_guest_url
link_to 'add', new_pub_app_pub_owner_pub_url
This is probably not what you intended when you set out to make things 'clean'. And if you take a look at the beginner's version...
link_to 'add', new_pub_guest_url
I do not want to argue for one approach or the other.
I want to point out that, since you are not experienced with Rails, you don't know where the things you are doing (off-road) will lead you, and the way back is hard.
That is something of a pitfall.
But next time you will know about this and make a compromise: 'Pubowner' and 'Guest' (and 'pa' as the namespace, if you really want one),
so
link_to 'add', new_pa_pubowner_guest_url
which is not so bad. But it's hard to reverse these things, so think before you start ...
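For reference, here is roughly where the long names come from: a namespaced model gets its table name from the module's table_name_prefix (a sketch using the names from the example above; generator output may differ slightly between Rails versions):
# app/models/pub_app.rb
module PubApp
  def self.table_name_prefix
    'pub_app_'
  end
end

# app/models/pub_app/pub_owner.rb
module PubApp
  class PubOwner < ActiveRecord::Base
    # Active Record derives the table name: pub_app_pub_owners
  end
end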
When writing an application with any framework, it may be necessary to write a lot of configuration code. If you follow the standard Rails conventions, however, you can avoid most of that configuration, and in some cases need none at all. Explicit configuration is needed only in those cases where you can't follow the standard convention.
The following are the conventions provided by Rails:
Naming conventions: Active Record uses some naming conventions to find out how the mapping between models and database tables should be created. Rails will pluralize your class names to find the respective database table. So, for a class Book, you should have a database table called books.
Example:
Database Table - Plural with underscores separating words (e.g., book_clubs).
Model Class - Singular with the first letter of each word capitalized (e.g., BookClub).
Schema Conventions: Active Record uses naming conventions for the columns in database tables, depending on the purpose of these columns.
Foreign keys - These fields should be named following the pattern singularized_table_name_id (e.g., item_id, order_id). These are the fields that Active Record will look for when you create associations between your models.
Primary keys - By default, Active Record will use an integer column named id as the table's primary key. When using migrations to create your tables, this column will be automatically created.
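A minimal sketch of how these conventions play out in practice, using the BookClub/Book example above (the override syntax shown in the comments varies a little between Rails versions):
class BookClub < ActiveRecord::Base   # table: book_clubs, primary key: id
  has_many :books                     # expects a book_club_id foreign key on books
end

class Book < ActiveRecord::Base       # table: books
  belongs_to :book_club               # reads books.book_club_id
end

# Only when you break a convention do you configure explicitly, e.g.:
#   self.table_name  = "legacy_books"    # older Rails: set_table_name "legacy_books"
#   self.primary_key = "book_code"       # older Rails: set_primary_key "book_code"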
There is one point I had never run into before, and it cost me a nerve or two: Rails caches a lot, even in the development environment (and for god's sake, it does!).
Let me construct a scenario (no, not a made-up one; it happened to me in a more complex variant).
After enough hours of work, having done everything on the plan for the day, you close with a little cleanup. You check that everything is still working - smile - and you're off.
sleep
Back at the computer, you start everything up and get a 'constant xy is not ...' error. But why? Overnight?
The answer is easy (if someone has told you at least once): Rails's caching does not (or rather, can't) always detect that a class / file / method was simply removed rather than altered.
So the one deleted file too many removed the class it contained from the world, but not from Rails's cache. Powering off did the rest.
I have had more subtle situations that I solved with a Rails server restart, after looking out of the window to check whether I was still on earth ...
What I am trying to point out is that there is no magic behind it, but be warned if you think you are smart enough to touch the framework code (however and for whatever reason you want to do that): the big cache will put you back where you belong, at beginner level.
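For context, this is my reading of the development-environment setting involved (Rails 3-era), not an official explanation: changed files are reloaded between requests, but a file you delete outright can leave its constant loaded until the server restarts.
# config/environments/development.rb
config.cache_classes = false   # reload changed code on each request; constants from deleted files may still linger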
I've been trying out various approaches to "string interning" in a database that's accessed primarily using SQLAlchemy ORM. I've tried a couple things, and so far I'm not really loving any of them. It seems like a common pattern, and I feel like I might be missing some obvious, elegant solution.
To elaborate: the situation is that my database (Postgres, if it matters) table is likely to contain many of the same strings, but they are still arbitrary, and not bounded in a way that would make a native enum type the right solution. I want to collect these strings in another table with an auto-incrementing PK and then reference them in the main table by FK. The goals here include both space savings and string "hygiene" (i.e. I'd like to be able to easily assess and track the growth of this string table).
I've tried the obvious naive solution of creating a separate entity, but this seems to foist the mechanics of the string interning onto every consumer of the entity, i.e. every consumer has to traverse the relationship to get the value, like this: obj.interned_property.value. And absent joinedload hints, it causes another database hit for every new access. (In general, I try to keep loading strategies out of the model itself, since different use cases often benefit from different loading strategies.) Adding a Python property to traverse the relationship is not a good approach because it can't participate in SQLAlchemy filtering/ordering operations.
I've tried using the AssociationProxy extension, but I've been generally disappointed with it. I discovered that AssociationProxy attributes don't follow the same metadata contract as other SA ORM attributes; they lack an info property, for instance. An info dictionary was relatively simple to graft on, but this was really just the first shoe to drop. After that, I discovered that you can't filter against them in a query (at least not with the LIKE operator). I've gotten to the point where I'm kinda sick of discovering the next thing that AssociationProxy attributes can't do.
The next thought I had was to do all the interning inside the database using triggers and updatable views, but that inherently hampers portability w/r/t database engine, and splits the logic between Python and PL/SQL which makes it harder for future developers coming into this code to figure out what's going on. And, it's a bunch of effort, so if I'm going to do it, I would like to feel more confident that it's the right way to go.
Anyway, it seems like this is a pretty common pattern, and I feel like someone must have figured out an elegant solution by now. So, I'd love to hear from someone who's been down this road before: what's the best way to handle string interning with SQLAlchemy?
I am using datamapper in a Sinatra application. I currently use the command
DataMapper.finalize.auto_upgrade!
to handle the migrations. I had two Classes (Artists and Events) with a 'has_n' and 'belongs_to' association. An Event 'belonged_to' one Artist and an Artist could have many Events associated with it.
I changed the association to be a many_to_many relationship by deleting the previous parts of the class definition which governed the original one_to_many association in the models and adding
has n, :artists, :through => Resource
to the Event class and the corresponding code to the Artist class. When I make a new Event, an error is kicked off.
#<DataObjects::IntegrityError: events.artist_id may not be NULL
The :artist_id field is a relic of the original association between the two classes. The new many_to_many association is accessed by event.artists[i] (where 'i' is just an integer index going from 0 to the number of associated artists -1). Apparently the original association method between the Artist and Event classes is still there? My guess is the solution to this is to not just use the auto_upgrade method built into datamapper but rather to write an explicit migration. If there is a way to handle this type of change to a database and still have the auto_upgrade method work, that would be great!
If you need more details about my models or anything please ask and I'll gladly add them.
In my experience, DataMapper's auto_upgrade does not work very well -- or, to put it more accurately, it doesn't work the way I expect it to. If you want to add a new column to your model, it will do what it should; try to do anything more sophisticated to a column and it probably won't behave as you expect.
For example, if you create a property of type String, it will initially have a length of 50 characters. If you notice that 50 characters is not enough to hold your string, adding :length => 100 to the model won't be enough to make auto_upgrade change the column's width.
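For example, a change like this on an existing model will not make auto_upgrade! widen the column (the property name is just an illustration):
property :title, String, :length => 100   # the existing column keeps its old width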
It seems you have stumbled upon another shortcoming, although one may argue that, in your case, maybe DataMapper's behavior isn't that bad (think of legacy databases). But the fact is that, when you changed the association, the Event's artist_id column wasn't removed, and then when you try to save an Event, you'll get an error because the database says it is a required field.
Notice that the error you are getting is not a validation error: DataMapper thinks everything looks ok, but gets an error from the database when trying to save the object.
Hope this helps!
Auto-upgrade is not a shortcoming at all! I think of auto-upgrade as a convenience feature of DataMapper. Its only intended purpose, as far as I know, is to add columns for you. So it is great for getting a project started quickly, and for managing test and dev environments without having to write migrations, but it is not the best tool for making modifications to a mature, live project.
For that, DataMapper does have migrations! Use the dm-migrations gem. The shortcoming is that they are not very well documented... at all. I'm actually working on changing a current project of mine over to using migrations, and I hope to contribute some instructions to the dm-migrations github wiki. But if you aren't ready to switch to migrations, you can also just update columns manually using an SQL client, and then continue to use auto-upgrade for new columns. That's what I have been doing for 2 years on my project :)
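For what it's worth, here is a rough sketch of what an explicit dm-migrations migration for the change in your question might look like (the migration number, name and column type are my assumptions):
require 'dm-migrations/migration_runner'

migration 1, :drop_artist_id_from_events do
  up do
    modify_table :events do
      drop_column :artist_id   # remove the relic of the old one-to-many association
    end
  end
  down do
    modify_table :events do
      add_column :artist_id, Integer
    end
  end
end

migrate_up!   # run any pending migrations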
What is the best way to implement a constructor for a record? It seems like a function should be able to return a record object in the instantiation of the record in some later model higher up the tree, but I can't get that to work. For now I just use a bunch of parameters at the top of the record that populate the variables stored in the record, but it seems like that will only work in simple cases.
Can anyone shed a little light? Perhaps I shouldn't be using a record but a model. Also does anyone know how the PDE functionality is coming? The book only says that it is coming, but I have seen some other things around.
I don't seem to have the clout to add tags (which makes sense, since my "reputation" is lower than yours) so sorry about that. I thought I had actually added one at one point, but perhaps I am mistaken.
I think you need to be clear what you mean by constructor since it has a very specific meaning in Modelica. If I understand your question correctly, it sounds like what you want to do is create an instance of a record that has some fields that are specified in the constructor arguments and from those arguments a bunch of other fields in the record are computed. Is that correct?
If so, there is a mechanism to do this. You mention "the book" but it isn't clear which one you mean. If it is mine, it definitely has no mention of these so called "record constructors" because it is too old. I do not know if Peter Fritzson's book mentions them either. However, they do exist and are documented in Section 12.6 of the Modelica 3.2 specification.
As for PDEs, there has been work into this kind of thing but nothing has really been done within the design group on this topic. I would add that if you want to solve either elliptical or parabolic PDEs on regular grids, this isn't too hard even with the current language. The only real drawback is that most tools probably don't handle sparsity very efficiently. Irregular grids would also be possible, but then you get into complicated basis functions. Finally, hyperbolic PDEs are, in my opinion, quite tricky (in any environment) due to the implicit physical constraints between time and space which are difficult to express (i.e. the CFL condition).
I hope that answers your questions so far.
I can only comment on your question regarding the book of Peter Fritzson. He confirmed that he's working on an update and he hopes to get it ready 'in the course of 2011'.
Original post here:
http://openmodelica.org/index.php/forum/topic?id=50
And thanks for initiating the modelica tag; it might be useful for me too in the near future... :-)
regards,
Roel
So far there've been several questions regarding this, and they've all come down to the same answer: one table for the language-neutral data, 1-* to a table with the translations and an indexed language ID field.
This has several problems:
Twice as much CRUD.
Need for Ajax CRUD if you want a decently friendly web UI.
More than twice the validation -- you need to ensure that the relationship is 1-* rather than 0-*.
Collation differences between languages aren't accommodated.
Queries require joins.
If you want slugs in multiple languages, oh boy.
A lot of database people have worked on all sorts of theoretical and practical problems, but surprisingly few people work on this one.
I think what we need ultimately is:
A field type that'll store multiple versions of strings
Multiple indices for each such field, one for each language or variation, with the option to specify the correct collation mode
A standard ORM object for this crazy thing
UI elements
Overkill? Sure, maybe, but the whole problem is a real nightmare as it is. And it's not exactly an uncommon scenario.
We gotta try to convince server vendors to work on this.
Edit: By the way, this is my first time using the community wiki; hopefully I'm doing it right.
Edit 2: Something about my wording seems to have made people think that I'm attacking the very concept of DBMS. I'm not; I'm simply saying that built-in support for localization is a much-needed feature.
I probably shouldn't have mentioned performance; it's of course completely negligible most of the time. The focus of my concern is on the fact that this really stifles productivity.
I'll provide an example. Suppose I have a very trivial table for a decidedly trivial store:
Products (id, price, description, name, slug)
In EF/MVC, I'd throw this in the ORM designer, maybe encapsulate it in a repository, build a Products controller, and have actions for Index, Details, Create, Update, Edit and Delete. To identify a product in any of the items, I'd simply do a WHERE(slug = #slug). I'd make a view model for the create/edit actions, design the form control, and wire it up straight to the repository. Done and done. To access the details for a product, the user would go to /products/details/product-slug.
But then since the rest of the website is bilingual, I decide to change the products table accordingly.
Products (id, price)
ProductsText (productId, language, description, name, slug)
Hey, that's not so bad. Yeah, not yet. Then you write your relationships and your constraints, and then you write out all your properties in the view model, and then you make a complete CRUD controller for the ProductsText data or use jQuery/Ajax to add create/update/edit buttons to your Products controller, and then you add validation logic to make sure the user enters at least the primary language, and then when you want to read data for the end-user pages you write another query to join ProductsText.slug and ProductsText.language with Products... I probably missed something, but you get the idea.
The complexity of the program just explodes with boilerplate code once you have localization involved.
Of course, I don't expect the problem to be solved completely, and it's obviously just as much a UI problem as it is a database problem. But there's just so much that could be done to make all this easier. A "multistring" field type might be a really good start.
Edit 3: Anyone ever hear of SQL Server Modeling Services? It has some localization tools in it that could be a step in the right direction. Still CTP though.
-- Simulate the French locale with the SET LANGUAGE statement.
SET LANGUAGE French
select Id, CountryName,
[System.Globalization].[SessionsString](CountryName, 1) as CountryNameString
from [Location].[CountriesTable]
What is a localized database field?
Typically, in the applications we've worked on, the UI is localized. This is accomplished using a database: we put all the translations (and potentially the master phrases) in a table with a locale code and phrase ID as the primary key. This is fairly straightforward, requires a single reusable set of stored procs, has good performance, and the usage is well understood. We often allow translation on the fly, so that the app interface includes a translation feature where corrections can be made and other users see them live - in either rich forms applications or web forms applications (depending on caching, which is another key consideration for UI localization).
As far as querying requiring joins - that's just a fact of life in a normalized relational database, and performance there is usually managed with a good normalized design and proper indexing.
In other "data", it has made little sense to localize except under direction of the application requirements. For instance, even though you may offer a product in multiple countries, the SKU and distributor may be different. This level of localization is very application specific and we often dealt with it as a separate database and there really isn't anything tying those individually country database together - many products were not available although there may have been equivalent products in the other countries.
If you are selling the same products around the world, then you kind of fall into the original scenario in a kind of multi-lingual CMS. This requires significant work besides the low-level database. For instance, if someone corrects the default product description, what flags the translators that the translations need to also be corrected? These questions are non-trivial. Although I can see where database vendors could assist with features, these are intrinsic difficulties of application requirements and design and not necessarily something the database can add features which will universally solve.
The collation issue is indeed a little awkward. Typically data is stored in nvarchar, and you would not know the collation you wanted for retrieval at the time you wrote the stored proc, since the locale would be a parameter. This only affects result sets which need to be ordered by content, not usually by natural key and certainly not retrieval by key, so it's not a large problem, but it is one which cannot easily be handled without dynamic SQL (casting using the preferred collation from a table, depending upon the locale passed in). If you mix data from different locales, you have to decide whether you want to sort by locale first, and it may be difficult to pick a collation which works properly across all locales in the same result set. You are probably going to want to use a Windows collation with such a wide variety of data.
Similarly with ORMs, we typically treated the composite unique key of locale/phraseid as the key to retrieve objects (we typically also had a surrogate identity primary key) - I know that traditional ORMs don't necessarily like this departure from retrieval by a meaningless surrogate key.
I've encountered all of these issues for localized CRM-style web sites. Not fun to design and optimize, but it can be done. My 2¢ worth:
1. Twice as much CRUD.
This depends on how your CRUD is designed. Any of my stored procedures or functions that can retrieve a possibly-localized field take a locale/culture code parameter. All of these fields are also NVARCHAR to avoid encoding issues.
2. Need for Ajax CRUD if you want a decently friendly web UI.
I suppose so, but this is application-dependent. Should defer to the "internal" CRUD (DRY principle).
3. More than twice the validation -- you need to ensure that the relationship is 1-* rather than 0-*.
This also assumes that all content is required in all supported locales, instead of using a fallback mechanism. For example, Microsoft's MSDN content is available in multiple locales, but some is in only one (generally this is US English, the "neutral" locale for Microsoft).
For a CRM-style system, any locale can be used for the initial content as long as the fallback uses that if the neutral content is not available.
4. Collation differences between languages aren't accommodated.
I find that it is easier to put all collation support at the UI/reporting layer. Multilingual-aware tables with collation/locale specified on a row-by-row basis would be a very nice-to-have feature but I wouldn't like to wait for it to become available...
5. Queries require joins.
Yes, definitely makes the query a bit more complicated :-) but no real way around that. Can get even more complicated if locale fallback is included (a "locale specificity" ranking field helps here).
6. If you want slugs in multiple languages, oh boy.
This is the reason that the .NET replacement parameters in the format string were designed to be indexed, not positional (printf(), etc. are positional). An English format may need replacements in 1, 2, 3 order, while the German equivalent uses 3, 1, 2.
To make life easier for localizers, whenever I create a .NET resource bundle I document the parameters including index, data type (including minimum and/or maximum string lengths), and a contextual description - context is important for determining text gender in some locales.
Plurality may also require multiple related resources as some locales need more than just "single" and "plural" (e.g. "0 files", "1 file", "2 files").
The same rules must apply to any localizable column in the database.
Well, the answers are not that helpful so far. I had the same problem on various projects I did in the past, and there was never a shortcut or an out-of-the-box solution that helped me solve this problem in an easy way. But your approach is going in the right direction, and with a little work on your data access layer you can actually abstract away all the burden caused by this requirement.
For metadata like types, categories, countries, etc., performance is not an issue, since all of it can be cached. For free-text entries it is a different story: you most probably can't cache them, and they tend to be quite long.
You might already know those pages:
http://www.codeproject.com/KB/aspnet/LocalizedSamplePart2.aspx
http://www.sisulizer.com/online-help/DatabaseLocalization.shtml
Best-practices for localizing a SQL Server (2005/2008) database
In my experience I haven't commonly run into the problem where the data stored in the database has many language-dependent versions of the same text. Typically a developed application will have many language files for all the text that's more or less statically built into the application. Then we see database data for text that users enter. While an application may be used by users with many different languages, the situation where users type the same text in multiple languages is not so common. Typically users of an application will see the UI in their language and then enter and view data in their language.
For example, users of our application in the US vs in Netherlands or Saudi Arabia would see the UI in the language of their choice, but for any given installation, the data they enter will consistently be in their native language.
Obviously this doesn't apply to all cases. CRMs are an example where you would have the same text with multiple translations, like Wikipedia, but I think what I described above is the more common scenario.
"A lot of database people have worked on all sorts of theoretical and practical problems, but surprisingly few people work on this one."
That's because there is nothing to work on, from a theoretical perspective, in your example. The so-called "problems" you mention are, all of them, nothing more than a direct consequence of the fact that you are managing more data.
"Twice as much CRUD."
And why is that a problem? I know of at least a few systems I built that had a lot more of that than your example.
"Need for Ajax CRUD if you want a decently friendly web UI."
Is that really so? I don't know, but at any rate how data is handled in the presentation layer is no concern of the DBMS, and if the programmer thinks it is too difficult/cumbersome, then don't blame the DBMS for that.
"More than twice the validation -- you need to ensure that the relationship is 1-* rather than 0-*."
And why is that a problem? If more business rules are stated, more validation is required.
"Collation differences between languages aren't accommodated."
How so? What is the sense of collating English text with French? Or English text with Ukrainian or Russian or Chinese? Or did you mean something else?
"Queries require joins."
And why is that a problem?
"If you want slugs in multiple languages, oh boy."
In what context? For what purpose?
SELECT language,nllabel FROM ...
NATURAL JOIN (SELECT 'EN' as language UNION SELECT 'FR' as language)
Oh but wait, I forgot ... JOINs are also a problem.
"and it's obviously just as much a UI problem as it is a database problem."
I disagree that it is. When looking at your problem from a database angle, there are two things that might possibly be a small beginning of a solution:
the possibility to do full view updating (both through JOIN and through GROUP, for your case).
the possibility to have attributes of type 'table' inside database tables. You could then have the entire set of applicable localized name stuff as a single attribute in a single row for your product/...
As for full view updating: don't hold your breath. You'll suffocate long before it arrives.
As for nested tables: they might already exist; if anyone has them, Oracle will, though I don't really know. But I'm not really confident that this will make life much easier on the UI side of things.
Oh, and BTW : SQL is nowhere near "theoretically pure".
Migrations are undoubtedly better than just firing up phpMyAdmin and changing the schema willy-nilly (as I did during my PHP days), but after using them for a while, I think they're fatally flawed.
Version control is a solved problem. The main function of migrations is to keep a history of changes to your database. But storing a different file for each change is a clumsy way to track them. You don't create a new version of post.rb (or a file representing the delta) when you want to add a new virtual attribute -- why should you create a new migration when you want to add a new non-virtual attribute?
Put another way, just as you check post.rb into version control, why not check schema.rb into version control and make the changes to the file directly?
This is functionally the same as keeping a file for each delta, but it's much easier to work with. My mental model is "I want table X to have such and such columns (or really, I want model X to have such and such properties)" -- why should you have to infer from this how to get there from the existing schema; just open up schema.rb and give table X the right columns!
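For reference, this is roughly what a table already looks like in a generated schema.rb (Rails 3-era output; the version number is just a placeholder), i.e. the file this argument would have you edit directly:
ActiveRecord::Schema.define(:version => 20110101120000) do
  create_table "posts", :force => true do |t|
    t.string   "title"
    t.text     "body"
    t.datetime "created_at"
    t.datetime "updated_at"
  end
end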
But even the idea that classes wrap tables is an implementation detail! Why can't I just open up post.rb and say:
class Post
  t.string :title
  t.text :body
end
If you went with a model like this, you'd have to make a decision about what to do with existing data. But even then, migrations are overkill -- when you migrate data, you're going to lose fidelity when you use a migration's down method.
Anyway, my question is, even if you can't think of a better way, aren't migrations kind of gross?
why not check schema.rb into version control and make the changes to the file directly?
Because the database itself is not in sync with version control.
For instance, you could be using the head of the source tree. But you're connecting to a database that was defined as some past version, not the version you have checked out. The migrations allow you to upgrade or downgrade the database schema from any version and to any version, incrementally.
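For example, you can move any copy of the database to a particular schema version and Rails will apply or revert each migration in order (the version number here is just a placeholder):
rake db:migrate VERSION=20110101120000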
But to answer your last question, yes, migrations are kind of gross. They implement a redundant revision control system on top of another revision control system. However, neither of these revision control systems is really in sync with the database.
Just to paraphrase what others have said: migrations allow you to protect the data as your schema evolves. The notion of maintaining a single schema.rb file is attractive only until your app goes into production. Thereafter, you'll need a way to migrate your existing users' data as your schema changes.
There are also data-related issues that are important to consider, which migrations solve.
Say an old version of my schema has separate feet and inches columns. For efficiency purposes, I want to combine them into just an inches column to make sorting and searching easier.
My migration can combine all of the feet and inches data into the inches column (feet * 12 + inches) while it's updating the database (i.e. just before it removes the feet column)
Obviously this being in a migration makes it automatically work when you later apply the changes to your production database.
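A rough sketch of such a migration (the table name and the old self.up/self.down style are assumptions on my part):
class CombineFeetAndInchesIntoInches < ActiveRecord::Migration
  def self.up
    # Fold the feet column into inches before dropping it, so no data is lost.
    execute "UPDATE measurements SET inches = feet * 12 + inches"
    remove_column :measurements, :feet
  end

  def self.down
    add_column :measurements, :feet, :integer
    # Both assignments read the pre-update value of inches, so this splits it back out.
    execute "UPDATE measurements SET feet = inches / 12, inches = inches % 12"
  end
end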
As it stands, they're annoying and inadequate but quite possibly the best option we have available to us at present. Quite a few smart people have spent quite a lot of time working on the problem and this, so far, is about the best they've been able to come up with. After about 20 years of mostly hand-coding database version updates, I came very rapidly to appreciate migrations as a major improvement when I found ActiveRecord.
As you say, version control is a solved problem. Up to a point I'd agree: it's very solved for text files in particular, less so for other file types and not really very much at all for resources such as databases.
How do migrations look if you view them as version control deltas for databases? They're the sum of the deltas you have to apply to get a schema from one version to another. I'm not aware that even git, for all its super-powerfulness, can take two schema files and generate the necessary DDL to do that.
As far as declaring table content in the model, I believe that's what DataMapper does (no personal experience). I think there may be some DDL inference capabilities there as well.
"even if you can't think of a better way, aren't migrations kind of gross?"
Yes. But they're less gross than anything else we have. Do please let us know when you've completed the non-gross alternative.
I suppose given "even if you can't think of a better way", then yes, in the grand scheme of things, migrations are kind of gross. So are Ruby, Rails, ORMs, SQL, web apps, ...
Migrations have the (not insignificant) advantage that they exist. Gross-but-exists tends to win out over Pleasant-but-nonexistent. I'm sure there probably are pleasant and nonexistent ways to migrate your data, but I'm not sure what that means. :-)
OK, I'm going to take a wild guess here and say that you're probably working all by yourself. In a group development project, the ability of each individual to take responsibility for just the database changes required by the code he or she is writing is much, much more important.
The alternative is that larger groups of programmers (e.g. 10-15 Java developers where I work) end up relying on a couple of dedicated full time database administrators to do that along with their other maintenance, optimization, etc. duties.