Test for table/view existence with ScalaQuery & create if not present - sql

I am writing some tests to automate checking if a database (a MS SQL
Server instance) has certain views, and if it does not, creating those
views using the BasicTable object.
Something like:
@Test def CheckAndBuildViewsOnDB() = {
  // FooTable et al. are defined as: object FooTable extends BasicTable[Foo],
  // where Foo is a case class and FooTable has a DDL create defined.
  VerifyViewExists(FooTable, BarTable)
}
Based on this, and cribbing from Stefan Zeiger's assertTablesExist example, I made a little method to check the db for a view and, if the check throws an exception, call that view's BasicTable ddl.create:
def VerifyViewExists(views: BasicTable*) = {
  // DatabaseSession is a helper class which initiates a db connection & session
  DatabaseSession.session() withSession {
    views map { v =>
      try queryNA[Int]("select 1 from '" + v.tableName + "' where 1<0").list
      catch { case _: Exception =>
        v.ddl.create
        println("Couldn't find view " + v.tableName + ", creating it now...")
      }
    }
  }
}
Which seems reasonable to me, but has two problems:
1. this isn't the right way to type the views parameter as BasicTable, resulting in "error: class BasicTable takes type parameters"
2. something funky is happening with the map argument v's scope, resulting in "error: value tableName is not a member of type parameter T0".
Pardon my ignorance with this question, as I suspect that the root of
my issue lies with not understanding Scala's type system.
Along with those two problems is the nagging feeling that I haven't
really done VerifyViewExists in the most succinct or readable style.

Since you don't care what the type parameter is, you should be able to solve #1 by adding [_]:
def VerifyViewExists(views:BasicTable[_]*) = {
My guess is that fixing #1 will cause #2 to disappear.
By the way, it may be better to use foreach rather than map, since map will collect the results into a new collection, which I don't think you want.
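Putting both suggestions together, a minimal sketch might look like the following; it reuses the DatabaseSession helper and the queryNA call from the question and assumes the same ScalaQuery imports, so treat it as untested:
def VerifyViewExists(views: BasicTable[_]*) = {
  DatabaseSession.session() withSession {
    views foreach { v =>
      try queryNA[Int]("select 1 from '" + v.tableName + "' where 1<0").list
      catch { case _: Exception =>
        // report, then create the missing view
        println("Couldn't find view " + v.tableName + ", creating it now...")
        v.ddl.create
      }
    }
  }
}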

Related

beego QueryTable table name: `cpes` not exists

I have this piece of code in beego:
o := orm.NewOrm()
qs := o.QueryTable("cpes")
and I know that beego connects to the database fine, and the database has the 'cpes' table, but I keep getting an error because beego doesn't find the table.
How can I debug this further?
You must define the Cpes model and register it.
Like:
type Cpes struct {
    Id int
}

func (u *Cpes) TableName() string {
    // db table name
    return "cpes"
}

func init() {
    orm.RegisterModel(new(Cpes))
}
I had the same problem. In my case, it was because I didn't register the model.
orm.RegisterModel(new(Member), new(Bank), new(Queue), new(Payment))
Make sure you register all your models with beego.
The error message should have been more explicit, though.
I had the same problem a few weeks ago. The answer is in how Beego translates the table name in the ORM.
The quick fix is to use
qs := o.QueryTable(new(cpes))
where cpes is the model struct.
If you want to see this in action, or if this solution does not work for you, try running the bee generate api command against your database. This will give you the models in a pre-made fashion, as well as a bunch of code examples showing how to use them.
Best of luck!

Linq order by with a field to retrieve dynamically in vb.net

I have an object Ob with several fields f1, ..., fn (of different types).
Now a list of these objects is shown in a GridView, and I need to implement the sorting.
The real problem is:
how can I run
(from ob in Ob_list orderby ob.f1 ascending)
when the sorting field is given as a string (e.g. "f1")?
Unfortunately I am not able to do it with reflection (something like ob.GetType().GetField("f1") is not mapped into SQL code).
I have several fields the rows could be sorted by; what is the best and fastest approach to this?
Thank you very much!
LINQ execution is deferred until you actually enumerate over the results or access the "count", etc. Because of this, you can build up your LINQ statement in stages.
The code below is in C#, but I'm sure the equivalent is possible in VB.NET.
First setup your basic query:
var query = (from ob in Ob_list select ob);
At this point, nothing has actually gone to the database due to deferred execution.
Next, conditionally add your order by components:
if (sortField == "f1")
{
    query = query.OrderBy(o => o.f1);
}
else if (sortField == "f2")
{
    query = query.OrderBy(o => o.f2);
}
else
{
    // ...
}
And finally, collect your results
foreach (var item in query)
{
    // Process the item
}
I've found this question: How do I specify the Linq OrderBy argument dynamically?
I'm using Entity Framework, so the first answer did not solve my problem. The second one, however, worked great!
Hope it helps!

Check if property exists in RavenDB

I want to add a property to an existing document (using clues from http://ravendb.net/docs/client-api/partial-document-updates). But before adding it, I want to check whether that property already exists in my database.
Is there any "special, proper RavenDB way" to achieve that?
Or should I just load the document and check whether the property is null?
You can do this using a set based database update. You carry it out using JavaScript, which fortunately is similar enough to C# to make it a pretty painless process for anybody. Here's an example of an update I just ran.
Note: You have to be very careful doing this because errors in your script may have undesired results. For example, in my code CustomId contains something like '1234-1'. In my first iteration of writing the script, I had:
product.Order = parseInt(product.CustomId.split('-'));
Notice I forgot the indexer after split. The result? An error, right? Nope. Order had the value of 12341! It is supposed to be 1. So be careful and be sure to test it thoroughly.
Example:
Job has a Products property (a collection) and I'm adding the new Order property to existing Products.
ravenSession.Advanced.DocumentStore.DatabaseCommands.UpdateByIndex(
    "Raven/DocumentsByEntityName",
    new IndexQuery { Query = "Tag:Jobs" },
    new ScriptedPatchRequest { Script = @"
        this.Products.Map(function(product) {
            if(product.Order == undefined)
            {
                product.Order = parseInt(product.CustomId.split('-')[1]);
            }
            return product;
        });"
    }
);
I referenced these pages to build it:
set based ops
partial document updates (in particular the Map section)

How to check unique constraint violation in nHibernate and DDD before saving?

I've got an Account model object and a UNIQUE constraint on the account's Name. In Domain Driven Design, using nHibernate, how should I check for the name's unicity before inserting or updating an entity?
I don't want to rely on an NHibernate exception to catch the error. I'd like to return a prettier error message to my user than the obscure "could not execute batch command. [SQL: SQL not available]".
In the question Where should I put a unique check in DDD?, someone suggested using a Specification like so.
Account accountA = _accountRepository.Get(123);
Account accountB = _accountRepository.Get(456);
accountA.Name = accountB.Name;
ISpecification<Account> spec = new Domain.Specifications.UniqueNameSpecification(_accountRepository);
if (spec.IsSatisfiedBy(accountA) == false)
{
    throw new Domain.UnicityException("A duplicate Account name was found");
}
with the Specification code as:
public bool IsSatisfiedBy(Account obj)
{
    Account other = _accountRepository.GetAccountByName(obj.Name);
    return (other == null);
}
This works for inserts, but not when doing an update, because the lookup finds the very account being updated and reports it as a duplicate. I tried changing the code to:
public bool IsSatisfiedBy(Account obj)
{
    Account other = _accountRepository.GetAccountByName(obj.Name);
    if (other == null)
    {
        // nothing in DB
        return true;
    }
    else
    {
        // must be the same object
        return other.Equals(obj);
    }
}
The problem is that NHibernate will issue an update to the database when it executes GetAccountByName() to retrieve a possible duplicate...
return session.QueryOver<Account>().Where(x => x.Name == accntName).SingleOrDefault();
So, what should I do? Is the Specification not the right way to do it?
Thanks for your thoughts!
I'm not a fan of the specification pattern for data access; it always seems like jumping through hoops to get anything done.
However, what you've suggested, which really just boils down to:
1. check if it already exists;
2. add it if it doesn't, and show a user-friendly message if it does;
... is pretty much the easiest way to get it done.
Relying on database exceptions is the other way of doing it, if your database and its .NET client gracefully propagate the table and column(s) that violated the unique constraint. I believe most drivers don't do so (??), as they just throw a generic ConstraintException that says "Constraint XYZ was violated on table ABC". You can of course adopt a convention for naming your unique constraints, such as UK_MyTable_MyColumn, and do string magic to pull the table and column names out.
NHibernate has an ISQLExceptionConverter that you can plug into the Configuration object when you set NHibernate up. Inside it, you get access to the exception from the .NET data client. You can use that exception to extract the table and columns (using the constraint name, perhaps?) and throw a new exception with a user-friendly message.
Using the database exception route is more performant, and you can push a lot of the detect-unique-constraint-violation code into the infrastructure layer, as opposed to handling each case one by one.
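To make the "string magic" concrete, here is a rough sketch (an illustration only, not NHibernate's API): it assumes the UK_Table_Column naming convention from above and a SQL Server client, where unique violations surface as a SqlException with error number 2627 or 2601; all names here are hypothetical.
using System;
using System.Data.SqlClient;
using System.Text.RegularExpressions;

public static class UniqueViolationHelper
{
    // Returns a friendly message for a unique-constraint violation, or null if the
    // exception is something else. Assumes constraints are named UK_<Table>_<Column>.
    public static string TryGetFriendlyMessage(Exception ex)
    {
        var sqlEx = ex as SqlException ?? ex.InnerException as SqlException;
        if (sqlEx == null || (sqlEx.Number != 2627 && sqlEx.Number != 2601))
            return null;

        // e.g. "Violation of UNIQUE KEY constraint 'UK_Account_Name'. Cannot insert duplicate key..."
        var match = Regex.Match(sqlEx.Message, @"UK_(?<table>[A-Za-z0-9]+)_(?<column>[A-Za-z0-9]+)");
        if (!match.Success)
            return "A duplicate value was found.";

        return string.Format("An {0} with that {1} already exists.",
            match.Groups["table"].Value, match.Groups["column"].Value);
    }
}
The same kind of parsing could live inside the ISQLExceptionConverter mentioned above, so the rest of the application only ever sees the friendly exception.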
Another thing worth pointing out about the query-first-then-add method: to be completely transaction-safe you need to escalate the isolation level to serializable (which gives the worst concurrency). Whether you need to be that bulletproof depends on your application's needs.
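For reference, a sketch of what that escalation might look like with NHibernate (session, spec, and account are assumed to come from the code above):
// Serializable isolation makes check-then-add race-free, at the cost of concurrency.
using (var tx = session.BeginTransaction(System.Data.IsolationLevel.Serializable))
{
    if (spec.IsSatisfiedBy(account))
    {
        session.Save(account);   // or your repository's add/update call
    }
    else
    {
        throw new Domain.UnicityException("A duplicate Account name was found");
    }
    tx.Commit();
}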
You can also handle it by setting Session.FlushMode to FlushMode.Commit and using a transaction, rolling back if an update gets fired at all.
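A minimal sketch of that suggestion, assuming a sessionFactory and leaving the actual load, change, and uniqueness check elided:
using (var session = sessionFactory.OpenSession())
{
    // Commit-only flushing stops NHibernate from auto-flushing the pending
    // update when GetAccountByName() runs its query.
    session.FlushMode = FlushMode.Commit;
    using (var tx = session.BeginTransaction())
    {
        try
        {
            // ... load the account, change it, run the uniqueness check ...
            tx.Commit();
        }
        catch
        {
            tx.Rollback();
            throw;
        }
    }
}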

How do I wrap an EF 4.1 DbContext in a repository?

All,
I have a requirement to hide my EF implementation behind a Repository. My simple question: Is there a way to execute a 'find' across both a DbSet AND the DbSet.Local without having to deal with them both?
For example - I have standard repository implementation with Add/Update/Remove/FindById. I break the generic pattern by adding a FindByName method (for demo purposes only :). This gives me the following code:
Client App:
ProductCategoryRepository categoryRepository = new ProductCategoryRepository();
categoryRepository.Add(new ProductCategory { Name = "N" });
var category1 = categoryRepository.FindByName("N");
Implementation
public ProductCategory FindByName(string s)
{
    // Assume name is unique for demo
    return _legoContext.Categories.Where(c => c.Name == s).SingleOrDefault();
}
In this example, category1 is null.
However, if I implement the FindByName method as:
public ProductCategory FindByName(string s)
{
    var t = _legoContext.Categories.Local.Where(c => c.Name == s).SingleOrDefault();
    if (t == null)
    {
        t = _legoContext.Categories.Where(c => c.Name == s).SingleOrDefault();
    }
    return t;
}
In this case, I get what I expect when querying against both a new entry and one that is only in the database. But this presents a few issues that I am confused over:
1) I would assume (as a user of the repository) that cat2 below is not found. But it is found, and the great part is that cat2.Name is "Goober".
ProductCategoryRepository categoryRepository = new ProductCategoryRepository();
var cat = categoryRepository.FindByName("Technic");
cat.Name = "Goober";
var cat2 = categoryRepository.FindByName("Technic");
2) I would like to return a generic IQueryable from my repository.
It just seems like a lot of work to wrap the calls to the DbSet in a repository. Typically, this means that I've screwed something up. I'd appreciate any insight.
With older versions of EF you had very complicated situations that could arise quite fast due to the required references. In this version I would recommend not exposing IQueryable, but ICollection or IList. This will contain EF in your repository and create a good separation.
Edit: furthermore, by sending back ICollection, IEnumerable, or IList you are restraining and controlling the queries being sent to the database. This will also allow you to fine-tune and maintain the system with greater ease. By exposing IQueryable, you expose yourself to the side effects that occur when people add more to the query (.Take(), .Where(), .SelectMany()); EF will see these additions and will generate SQL to reflect these uncontrolled queries. Not confining the queries can result in queries being executed from the UI, and in more complicated tests and maintenance issues in the long run.
Since the point of the repository pattern is to be able to swap implementations out at will, the details of DbSets should be completely hidden.
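As a rough illustration of that advice (ProductCategory, Categories, and the _legoContext field come from the question; the LegoContext class name and everything else here are assumptions, not a definitive implementation):
using System.Collections.Generic;
using System.Linq;

public class ProductCategoryRepository
{
    private readonly LegoContext _legoContext = new LegoContext();

    public void Add(ProductCategory category)
    {
        _legoContext.Categories.Add(category);
    }

    // Checks unsaved (Local) entities first, then the database.
    public ProductCategory FindByName(string name)
    {
        return _legoContext.Categories.Local.SingleOrDefault(c => c.Name == name)
               ?? _legoContext.Categories.SingleOrDefault(c => c.Name == name);
    }

    // ToList() runs the query here, so no IQueryable (and no EF detail) leaks out.
    public IList<ProductCategory> FindAll()
    {
        return _legoContext.Categories.ToList();
    }
}
Returning a materialized IList keeps callers from composing further queries against EF, which is exactly the containment the answer above argues for.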
I think that you're on a good path. The only thing I would probably ask myself is:
Is the context long-lived? If not, then do not worry about querying Local; an object that has been inserted or deleted should only be accessible once it has been committed.
If this is a long-lived context and you need access to deleted and inserted objects, then querying Local is a good idea, but as you've pointed out, you may run into difficulties at some point.