Is it possible to use LINQ to SQL without drag-and-drop?

If I want to use LINQ to SQL, I also have to drag the DB table onto the designer surface to create the entity classes.
I always like full control in my application and do not like the classes generated by .NET.
Is it possible to provide this connection between LINQ and the DB using my own Data Access Layer entity classes?
How can I get it done?

You can write your own classes very easily for LINQ to SQL - it just involves decorating your classes with some attributes.
For example, this is a very simple table I have in one of my projects, and it works with LINQ to SQL just fine:
// Requires the System.Data.Linq.Mapping namespace (and System.ComponentModel for IDataErrorInfo)
[Table(Name = "Categories")]
public class Category : IDataErrorInfo
{
    [Column(IsPrimaryKey = true, IsDbGenerated = true, AutoSync = AutoSync.OnInsert)]
    public int Id { get; set; }

    [Column] public string Name { get; set; }
    [Column] public string ExtensionString { get; set; }

    // IDataErrorInfo members omitted for brevity
}
The code was very easy to write, especially if you make your property names line up with your column names (you don't have to).
Then you just need a Repository to connect to the DB:
class CategoryRepository : ICategoryRepository
{
    private Table<Category> categoryTable;

    public CategoryRepository(string connectionString)
    {
        // The DataContext maps the attributed Category class to the Categories table
        categoryTable = (new DataContext(connectionString)).GetTable<Category>();
    }
}
Of course there is more to it, but this shows you the very basics, and it is not hard to do once you understand it. This way you have 100% control over your classes and you can still take advantage of LINQ to SQL.
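To make the repository sketch above a little more concrete, here is a minimal, hedged example of how it might expose querying and inserting; the FindAll and Add method names (and ICategoryRepository itself) are assumptions for illustration, not part of the original answer:
class CategoryRepository : ICategoryRepository
{
    private readonly DataContext context;
    private readonly Table<Category> categoryTable;

    public CategoryRepository(string connectionString)
    {
        context = new DataContext(connectionString);
        categoryTable = context.GetTable<Category>();
    }

    // Queries written against Table<Category> are translated to SQL by LINQ to SQL
    public IQueryable<Category> FindAll()
    {
        return categoryTable;
    }

    public void Add(Category category)
    {
        categoryTable.InsertOnSubmit(category);
        context.SubmitChanges();
    }
}
A caller could then write something like repository.FindAll().Where(c => c.Name.StartsWith("A")) and have the filter executed in the database.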
I learned this approach from Pro ASP.NET MVC Framework, an awesome book.
If you want to see more, all of the LINQ to SQL classes in one of my projects were written from scratch; you can browse it here.

To avoid drag & drop you can take a look at SqlMetal.exe.
However, it sounds like you really are requesting Persistence Ignorance, and I'm not sure that this is possible with L2S - it certainly isn't possible with LINQ to Entities until .NET 4...
I once wrote a blog post on using SqlMetal.exe and subsequently modifying the generated schema - perhaps you will find it useful, although it has a different underlying motivation.

I've got a couple of tutorials up on CodeProject that walk through how to do this, including how to handle the relationships (M:M, 1:M, M:1) in an OO way and keep them in sync as you make updates:
A LINQ Tutorial: Mapping Tables to Objects
A LINQ Tutorial: Adding/Updating/Deleting Data

Related

Executing SQL code for creating a table and database in Zend Framework

I wrote a SQL script and in it I created a table.
Now I need to know how I can execute this script (with which code?).
And I have another question: where should I write this code (in which folder of a Zend project)?
If it is possible for you, please explain with an example. Thanks.
Creating tables in the database
Zend Framework is not supposed to be the one creating the tables, so my suggestion is to run those scripts in another environment.
The quickest option is probably the SQL shell itself, but you can use other software such as MySQL Workbench if you are using MySQL.
Once the tables are created, the access to the tables is made this way:
Introduction
When you are using Zend Framework, you are making use of the MVC pattern. I suggest you read what that is: Wikipedia MVC
If you read the Wikipedia link, you probably know now that access to the database is handled by the model.
Thus, if you followed the recommended project structure that Zend provides, you will have a models folder under your application folder. There, you are supposed to implement the classes that will access the DB.
But well... you now know where to put those classes, but you will ask me: how? It's easy if you know where to look. ZF provides an abstract class called Zend_Db_Table_Abstract that has all the methods that will make your life easier when it comes to interacting with your database's tables. This is the class that your model classes should extend.
Example
Let's suppose you've got a page on your website where you want to show the user a list of products from your local store. You have a table in your database called "products" that holds all the useful information such as name, price and availability.
You will have a controller with an action called indexAction() or listAction(); this action is responsible for sending the data to your view and will look like:
class Store_ProductsController extends Zend_Controller_Action
{
    public function indexAction()
    {
        // TODO: Get data from the database into the $products variable
        $this->view->products = $products;
    }
}
And your view file will take that products variable and do stuff with it.
But now comes the magic: you will have a class that accesses the database, as I've said. It'll be like:
class Model_Store_Products extends Zend_Db_Table_Abstract
{
    protected $_name = 'products';

    public function getAllProducts()
    {
        $select = $this->select()
                       ->from(array('P' => $this->_name),
                              array('id', 'name', 'price', 'availability'));
        $productsArray = $this->fetchAll($select);
        return $productsArray;
    }
}
And ta-da, you have your array of products ready to be used by the controller:
class Store_ProductsController extends Zend_Controller_Action
{
    public function indexAction()
    {
        $model = new Model_Store_Products();
        $products = $model->getAllProducts();
        $this->view->products = $products;
    }
}
It can be said that, since fetchAll() is a public method and our select does basically nothing but choose which columns we want (it doesn't even have a WHERE clause), in this case it would be easier to call fetchAll() directly from the controller with no select at all; it will retrieve the whole table (all rows and columns):
class Store_ProductsController extends Zend_Controller_Action
{
    public function indexAction()
    {
        $model = new Model_Store_Products();
        $products = $model->fetchAll();
        $this->view->products = $products;
    }
}
Thus, our function in the model is not even needed.
This is the basic information on how to access the database using Zend Framework. Further information on how to create the Zend_Db_Table_Select object can be found here.
I hope this helps.

ValueObject Persistence in NHibernate / Fluent NHibernate

I'm a total newbie with ORMs and DDD, so please be patient with me. Also, I'm not a native speaker, so the domain lingo will be a little hard to express in English.
I'm developing a system to control lawsuits.
My domain has an Entity called Case.
public class Case
{
    public virtual int Id { get; set; }
    public virtual List<Clients> Clients { get; set; }
    public virtual LawsuitType LawsuitType { get; set; }
}
The LawsuitType is, from what I gathered, a Value Object. It's a simple type; it has only the lawsuit type description. Example: "Divorce", "Child Support", etc. It is only the description that interests me. But I don't want it to be a free-text descriptor. I want to control the options presented to the user, and also do some reports.
So I was thinking of mapping this in the database with a "LawsuitTypes" table. The table would have an int Id and a string descriptor.
Can I accomplish that using ComponentMap? Or have I got things wrong and the LawsuitType is an Entity?
Thanks, Luiz Angelo.
Edit:
Using an enum was suggested. But that wouldn't work because it would mean that the LawsuitTypes are set by the developer, and not the user. Some users have the power to add/remove LawsuitTypes, while others don't.
IMHO you should treat LawsuitType as its own entity. Keep in mind that you may want to extend LawsuitType with additional information some day (requirements change very fast sometimes). What comes to my mind is a "default" property or something like that... This means additional work of course, but this way you are more flexible for future needs.
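To illustrate, a minimal Fluent NHibernate sketch of mapping LawsuitType as its own entity referenced from Case could look roughly like the following; the class, table and column names here are assumptions, not taken from the question:
public class LawsuitType
{
    public virtual int Id { get; set; }
    public virtual string Description { get; set; }
}

public class LawsuitTypeMap : ClassMap<LawsuitType>
{
    public LawsuitTypeMap()
    {
        Table("LawsuitTypes");
        Id(x => x.Id);
        Map(x => x.Description);
    }
}

public class CaseMap : ClassMap<Case>
{
    public CaseMap()
    {
        Id(x => x.Id);
        // Case holds a many-to-one reference to a row in LawsuitTypes
        References(x => x.LawsuitType);
        HasMany(x => x.Clients);
    }
}
Users could then add or remove LawsuitType rows at runtime, which is not possible with a compile-time enum.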
If I understand your question correctly, the Description("") attribute and a simple enum should work. More on that here.
public enum LawsuitTypes
{
    Divorce,
    [Description("Child Support")]
    ChildSupport,
    [Description("Some Other Element")]
    SomeOtherElement
}

Repository pattern, ViewModel and ORMs

With Repository pattern and ViewModels, how do you build queries against the database if you don't want the raw database objects to leak outside the repository? How do I actually create queries without loading ALL the database in memory and using LINQ to Objects? I can't expose IQueryable to the rest of the app.
For example, with EF I have a bunch of POCOs with several properties that match db fields, but also some stuff to work around enums not being directly supported (for now), as well as foreign key IDs to prevent N+1 and allow easier querying, and so on. I don't want them to leak out to the rest of the application; I want the application to just see a normal object graph.
public class DbUser
{
    public int Id { get; set; }
    public string Name { get; set; }
    public int GroupId { get; set; }
    public DbGroup Group { get; set; }
    public ICollection<DbComment> Comments { get; set; }
}
public class User
{
    public int Id { get; set; }
    public string Name { get; set; }
    public Group Group { get; set; }
    public ICollection<Comment> Comments { get; set; }
}
The problem here is my repository will internally use EF for the querying (and in-memory stuff when unit testing). But how do I implement IQueryable<User> FindAll()? I can't just do return dbContext.Users.Select(u => new User(u)), as in that case I lose all possible query ability; it'll just load the whole user collection in memory, convert all the types to User from DbUser and then build LINQ queries on the in-memory collection - that is horribly inefficient.
I can't just build queries in the repository. On some pages I have queries that select a few fields, but also calculate some complex values from other related objects and filter on the result (for example, the count of comments with a positive score), and I also need that data back in the application. I could select all the objects used to compute the complex values and return them to the application (but not as db entities), but that would mean selecting a LOT of data.
Basically how do I prevent the database entities from polluting the rest of the application with their cruft and hacks, while still maintaining the ability to build queries outside of the repository?
CQRS (Command Query Responsibility Segregation) solves this problem. You have the 'real' model, the Domain model, with all the business rules and all that, and a 'query-only' model, which is basically a simple POCO (which can be used directly by Views) that will be returned by a specialised query-only repository.
The persistence model (EF entities) is used only to 'talk' to the db; the repos always return or deal with domain/application objects. Basically, you have to map the EF entities to the domain ones (and vice versa when saving). In this way, you'll have separate models, each with its own purpose.
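As a rough, hedged illustration of the query side (the UserSummary read model, the Score property and the method names below are assumptions, not from the original answer), the repository can project straight to a read-model POCO so the whole query is composed in SQL and only plain objects cross the boundary:
public class UserSummary
{
    public int Id { get; set; }
    public string Name { get; set; }
    public int PositiveCommentCount { get; set; }
}

public class UserQueryRepository
{
    private readonly MyDbContext dbContext; // assumed EF context exposing a Users set of DbUser

    public UserQueryRepository(MyDbContext dbContext)
    {
        this.dbContext = dbContext;
    }

    public IList<UserSummary> GetUsersWithPositiveComments()
    {
        // The projection runs in the database; no DbUser instances are
        // materialised or exposed outside the repository.
        return dbContext.Users
            .Where(u => u.Comments.Any(c => c.Score > 0))
            .Select(u => new UserSummary
            {
                Id = u.Id,
                Name = u.Name,
                PositiveCommentCount = u.Comments.Count(c => c.Score > 0)
            })
            .ToList();
    }
}
The trade-off is that each screen gets its own narrow query method instead of a general-purpose IQueryable<User>.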

Filter contents of lazy-loaded collection with NHibernate

I have a domain model that includes something like this:
public class Customer : EntityBase<Customer>, IAggregateRoot
{
    public IList<Comment> Comments { get; set; }
}
public class Comment : EntityBase<Comment>
{
    public User CreatedBy { get; set; }
    public bool Private { get; set; }
}
I have a service layer through which I retrieve these entities, and among the arguments passed to that service layer is who the requesting user is.
What I'd like to do is be able to construct a DetachedCriteria in the service layer that would limit the Comment items returned for a given customer so the user isn't shown any comments that don't belong to them and are marked private.
I tried doing something like this:
criteria.CreateCriteria("Comments")
.Add(Restrictions.Or(Restrictions.Eq("Private", false),
Restrictions.And(Restrictions.Eq("Private", true),
Restrictions.Eq("CreatedBy.Id", requestingUser.Id))));
But this doesn't flow through to the lazy-loaded comments.
I'd prefer not to use a filter because that would require either interacting with the session (which isn't currently exposed to the service layer) or forcing my repository to know about user context (which seems like too much logic in what should be a dumb layer). The filter is a dirty solution for other reasons, too -- the logic that determines what is visible and what isn't is more detailed than just a private flag.
I don't want to use LINQ in the service layer to filter the collection because doing so would blow the whole lazy loading benefit in a really bad way. Lists of customers where the comments aren't relevant would cause a storm of database calls that would be very slow. I'd rather not use LINQ in my presentation layer (an MVC app) because it seems like the wrong place for it.
Any ideas whether this is possible using the DetachedCriteria? Any other ways to accomplish this?
Having the entity itself expose a different set of values for a collection property based on some external value does not seem correct to me.
This would be better handled, either as a call to your repository service directly, or via the entity itself, by creating a method to do this specifically.
To fit in best with your current model, though, I would have the call that you currently make to get the entities return a view model rather than just the entities:
public class PostForUser
{
    public Post Post { get; set; }
    public User User { get; set; }
    public IList<Comment> Comments { get; set; }
}
And then in your service method (I am making some guesses here)
public PostForUser GetPost(int postId, User requestingUser){
...
}
You would then create and populate the PostForUser view model in the most efficient way, perhaps using the detached criteria, or with a single query and a DistinctRootEntity transformer (you can leave the actual comments property to lazy load, as you probably won't use it).
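A rough, hedged sketch of what populating that view model with a detached criteria might look like (the Post entity, the "repository" field and its Get/FindAll helpers are assumptions made for illustration, not an existing API):
public PostForUser GetPost(int postId, User requestingUser)
{
    // Build the comment filter up front, without touching the session in the service layer
    var commentCriteria = DetachedCriteria.For<Comment>()
        .Add(Restrictions.Eq("Post.Id", postId))
        .Add(Restrictions.Or(
            Restrictions.Eq("Private", false),
            Restrictions.Eq("CreatedBy.Id", requestingUser.Id)));

    return new PostForUser
    {
        Post = repository.Get<Post>(postId),                      // hypothetical repository helper
        User = requestingUser,
        Comments = repository.FindAll<Comment>(commentCriteria)   // hypothetical repository helper
    };
}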

NHibernate dynamic mapping

I am looking for some way to dynamically map database tables to classes in my application using NHibernate (or, if some other ORM works, let me know). I am fairly new to NHibernate; I used Entity Framework in the past though.
Most of my application will be using static structures and Fluent NHibernate to map them.
However, there are multiple database tables that will need to be created and mapped to objects at each install site. These will all have a base structure (id, name, etc.); however, they will have additional fields depending on the type of data they are capturing. From some reading I found that I can use the "dynamic-component" mapping in XML to add fields using an IDictionary Attributes property. This is the first step and seems relatively straightforward. Ref (http://ayende.com/blog/3942/nhibernate-mapping-dynamic-component)
The second step is where I am struggling. I will need to define tables and map them depending on the client's needs. As stated above, each of the tables will have a set of static properties and some dynamic ones. They will also need to reference a static "Location" class, as shown below:
Location (STATIC) (id,coordinates)
-----DynamicTable1 (DYNAMIC) (id,Name,location_id, DynamicAttribute1, DynamicAttribute2........)
-----DynamicTable2 (DYNAMIC) (id,Name,location_id, DynamicAttributeA, DynamicAttributeB....)
We will need to be able to create / map as many of these DynamicTables as the client needs. DynamicTable1, DynamicTable2, etc. will most likely differ in some ways at each client site. Is there any way in NHibernate to achieve this? The creation / management of the tables in the database will be handled elsewhere; I just need some way to get this to map in my ORM.
A bit of background
This application will be used to store geological data. As geological data is inherently different depending on where it is, and geologists use different methods and look for different elements (gold, coal, etc.), the data structure to store this information needs to be extremely flexible.
Take a look at the new mapping-by-code functionality of NH 3.2. It should make it easy to create new table definitions at runtime. In contrast to Fluent, you don't need to write a mapping class; you can just add new mappings in for loops:
// look up all dynamic tables in the database using SQL or SMO or whatever
var dynamicTables = GetDynamicTables();

// map all dynamic tables
foreach (var table in dynamicTables)
{
    mapper.Class<MyGenericEntity>(ca =>
    {
        // use an entity name to distinguish the mappings
        ca.EntityName(table.Name);
        ca.Id(x => x.Id, map =>
        {
            map.Column("Id");
            map.Generator(Generators.HighLow, gmap => gmap.Params(new { max_low = 100 }));
        });
        // map properties, using whatever is required: if's, for's ...
        ca.Property(x => x.Something, map => map.Length(150));
    });
}
Using the entity name you can store and load the entities to and from different tables, even if they are mapped as the same entity class. It is like duck typing with NHibernate.
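For a rough idea of how that looks in use (a hedged sketch; the "DynamicTable1" entity name and the Something property are just placeholders), NHibernate's session APIs accept the entity name explicitly:
// Save an instance into the table mapped under a specific entity name
using (var session = sessionFactory.OpenSession())
using (var tx = session.BeginTransaction())
{
    var entity = new MyGenericEntity { Something = "value" };
    session.Save("DynamicTable1", entity); // entity name chosen when the mapping was built
    tx.Commit();
}

// Query the same table via its entity name
using (var session = sessionFactory.OpenSession())
{
    var rows = session.CreateCriteria("DynamicTable1").List<MyGenericEntity>();
}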
Believe me, it won't be easy. If you are interested in a big challenge which impresses every NH expert, just go for it. If you just want to get it working, you should choose a more classic way: create a static database model which is able to store dynamic data in a generic way (say, name/value pairs).
See the answer in Using NHibernate with Emitted Code.
class DynamicClass
{
    public virtual int Id { get; set; }
    public virtual string Name { get; set; }
    public virtual Location Location { get; set; }
    public virtual IDictionary DynamicData { get; set; }
}
Template
<hibernate-mapping>
  <class name="DynamicClass">
    ...
    <dynamic-component name="DynamicData">
      <!-- placeholder -->
    </dynamic-component>
  </class>
</hibernate-mapping>
Replace <!-- placeholder --> with the generated properties:
<property name="P1" type="int" />
<property name="P2" type="string" />
Configure the session factory:
var sessionFactory = new NHibernate.Cfg.Configuration()
    .AddXml(generatedXml)
    ... // DatabaseIntegration and other mappings
    .BuildSessionFactory();
Query
var query = session.CreateCriteria<DynamicClass>();
foreach (var restriction in restrictions)
{
    query.Add(Restrictions.Eq(restriction.Name, restriction.Value));
}
var objects = query.List<DynamicClass>();
Edit: oops, I hadn't realised you need multiple tables per client.
Option 1:
<class name="DynamicClass" table="tablenameplaceholder"> with replace and a different Sessionfactory for each dynamic class
Option 2:
Subclass the dynamic class and use table-per-subclass (TPS) mappings
Option 3: see Stefan's answer, just with XML:
<class entity-name="DynamicTable1" name="DynamicClass" table="DynamicTable1">