I have been trying to find an answer to this question all night and I still haven't come across a definitive answer. Normally when I work with sqlite3 I would use the following pattern:
import sqlite3
db = sqlite3.connect('whatever.db')
cursor = db.cursor()
#do some stuff
cursor.close()
Now I am trying to evolve my understanding of OOP and databases, so I thought I would create a controller object to interact with the database. I have come up with the following:
A class which just defines the connection and cursor:
import sqlite3
class coffee_shop_controller:
    def __init__(self):
        self.db = sqlite3.connect("coffeeshop.db")
        self.cursor = self.db.cursor()

    def close(self):
        self.cursor.close()
I subclass this for the various controllers that I need. For example:
class customer_controller(coffee_shop_controller):
    """creates a controller to add/delete/amend customer records in the
    coffee shop database"""

    def __init__(self):
        super().__init__()

    def add_customer(self, fn, ln, sa, t, pc, tn):
        sql = """insert into customer
                 (first_name,last_name,street_address,town,post_code,telephone_number)
                 values
                 ('{0}','{1}','{2}','{3}','{4}','{5}')""".format(fn, ln, sa, t, pc, tn)
        self.cursor.execute(sql)
        self.db.commit()
I appreciate that the design pattern may not be great (open to suggestions), and that I really should be preventing SQL injection, but it's the closing of the connection that interests me at the moment.
From searching around, the Python docs seem to suggest that we can close the connection, not that we must. Is this right? Do I really not need to bother?
If I do need to bother then there seems to be a split on what I should do:
Manually close the connection
Use the __del__ method
Use with or atexit
Is there anything definitive here? The __del__ method makes the most sense to me but maybe that's my ignorance talking.
Thanks for any suggestions you can offer.
Adam.
It's good practice to free resources that you no longer need. Database connections are normally pretty "expensive", and I would definitely recommend opening the connection, doing the actual query, and closing the connection right after that.
In order to achieve better control over this, I would also recommend following the Unit of Work design pattern. Moreover, combining Unit of Work with a good ORM (such as SQLAlchemy or the Django ORM) will serve you well.
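Here is a minimal sketch of that open-query-close pattern in Python, reusing the question's customer table. Note that sqlite3's own "with conn:" form only manages transactions and does not close the connection, which is why contextlib.closing is added here:

import sqlite3
from contextlib import closing

def add_customer(first_name, last_name):
    # closing() guarantees db.close() runs even if execute() raises
    with closing(sqlite3.connect("coffeeshop.db")) as db:
        # the connection's own context manager commits on success and
        # rolls back on an exception, but it does not close the connection
        with db:
            db.execute(
                "insert into customer (first_name, last_name) values (?, ?)",
                (first_name, last_name),  # ?-placeholders also avoid SQL injection
            )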
Related
I am looking for a read-write lock in Django, using PostgreSQL.
I know about select_for_update.
On top of that, I need SELECT FOR SHARE.
I found this super old Django ticket.
I couldn't find any third party library implementing this for Django
Now I am wondering:
Is there a good reason this was not implemented in the Django-ORM?
Does anyone know any third party library for this?
Or any easy work around?
Can't you wrap the logic in a pl/pgsql function that uses SELECT FOR SHARE, and then call the function from Django?
Is there a good reason this was not implemented in the Django-ORM?
The ticket you've posted probably provides the reason: no one is motivated enough to write a patch.
Does anyone know any third party library for this?
Not me, sorry.
And if by any chance you are thinking about ditching Django for some other ORM then you must ask yourself: "There is a feature I need that's missing in Django... what features will I miss in this other ORM?"
Or any easy work around?
Probably not. But here are some options:
Every ORM I know has an escape hatch to raw SQL. You probably know that but are reluctant to use it. Still, since you also lack the motivation to make a pull request, you probably don't have hundreds of queries that require SELECT FOR SHARE functionality, so you should consider it: see Performing raw SQL queries (a small sketch follows this list).
Stored procedures, as steve mentioned:
https://docs.djangoproject.com/en/3.0/topics/db/sql/#calling-stored-procedures
The last comment on the ticket you've posted is from David Schwärzle, who claims that he has a solution (not for PostgreSQL specifically, but a solution nevertheless)... maybe you should try to contact him.
I haven't tried it, but you can probably add the desired functionality by Writing your own Query Expressions.
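For illustration, the raw-SQL escape hatch from the first option looks roughly like this (a sketch only; myapp and Book are a hypothetical app/model pair):

from myapp.models import Book  # hypothetical app and model

# .raw() maps the returned rows back onto Book instances, and the
# %s placeholder keeps the query safe from SQL injection
books = list(
    Book.objects.raw("SELECT * FROM myapp_book WHERE id = %s FOR SHARE", [42])
)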
You can easily implement this with a raw query.
from django.db import connection

query = """SELECT * FROM "appname_modelname" WHERE id = %s FOR SHARE"""
with connection.cursor() as cursor:
    # pass obj_id as a parameter instead of interpolating it into the
    # string, so the query is safe from SQL injection
    cursor.execute(query, [obj_id])
obj_id is a Python variable.
appname_modelname is the name of the table Django created in your database; by default, Django combines the lowercased app name and model name with an underscore.
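As a side note, you can ask Django for the table name rather than hard-coding it, and plug the result into the cursor code above (a small sketch; shop and Customer are a hypothetical app/model pair):

from shop.models import Customer  # hypothetical app and model

table = Customer._meta.db_table  # e.g. "shop_customer"
query = f'SELECT * FROM "{table}" WHERE id = %s FOR SHARE'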
I'm pretty new to Entity Framework, and I might be having a hard time "asking the right question" on Google, so I'll try here.
First some facts:
I'm working on a project based on the Entity Framework 6.x.
I'm using the Model First approach.
The database is an SQL Server.
My challenge:
Every time my unit tests run, I'm dropping, creating, and seeding a test database using the DropCreateDatabaseAlways<TContext> implementation. The data source is a (localdb)\v11.0 instance.
I've gotten to a point where I would like to map an entity to a database view. I can find plenty of material on how easy the mapping is, but what I'm looking for is a way for the view to be applied to my test database upon database creation/initialization.
I'm trying to keep a pure Model First approach. Can anyone help with information on how the views (and Stored Procedures) can be created, when creating the database?
Not sure this can be done.
I recently opened an issue on CodePlex for similar functionality, but related to Code First.
The answer was: "edit the generated code by hand".
In your case, you could take inspiration from this post about Model First seed data. Instead of seeding, you would send the DDL that creates the view through the SQL(...) method, as suggested in the answer to the cited issue.
BTW: if you think, as I do, that such support would be a good idea, feel free to upvote Code First support for views, and/or to create a Model First support for views suggestion.
For a hobby project I am building an application to keep track of my money. Register everything that comes in and goes out. I am using sqlite as a database backend.
I have two data access models in mind.
Creating one master object as a sort of database connector, which contains methods that execute the queries and provide the required sets of data as lists of objects
Having the objects that need data execute the queries themselves
Which one of these is 'the best' and why? Or are there different, better models out there?
The latter option is better. In the first option, you would end up having to touch your universal data access object for just about any update to the code that wasn't purely a change in display logic. If you have different data access objects, then you will have much more testable, maintainable code.
I suggest you read up a bit on the model-view-controller paradigm. The Wikipedia article on it is a good start: http://en.wikipedia.org/wiki/Model%E2%80%93view%E2%80%93controller.
Also, you didn't say which language/platform you are coding in, but most platforms have numerous options for auto-generating a starting point for your data access classes from your database. You may find something like that useful.
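To make the second option concrete (the question mentions sqlite, so Python's sqlite3 is used here; the transactions table and its columns are invented for the example), one small data access object per entity might look like this:

import sqlite3

class TransactionRepository:
    """Data access for money transactions only; other entities get their
    own repository, so a schema change only touches one class."""

    def __init__(self, db):
        self.db = db  # the caller owns the connection's lifetime

    def add(self, amount, description):
        with self.db:  # commits on success, rolls back on error
            self.db.execute(
                "insert into transactions (amount, description) values (?, ?)",
                (amount, description),
            )

    def all(self):
        cur = self.db.execute("select amount, description from transactions")
        return cur.fetchall()

db = sqlite3.connect("money.db")
repo = TransactionRepository(db)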
Much of a muchness, really; the thing to avoid is having the "same" SQL sprinkled all over your code base.
The key point is this: you've just added a new column to Table1. When you do Find in Files for "Table1", how many hits are you going to get, and where?
If you use one class and there are a lot of db operations, it's going to get very messy very quickly; but if you have one interface (say IModel) with one implementation, you can swap backends very easily.
So: how many db operations are there, and how likely is it that you will move away from SQLite?
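In Python terms, the one-interface idea above might look something like this (a sketch; the method and table names are invented):

from abc import ABC, abstractmethod
import sqlite3

class Model(ABC):
    """The single interface the rest of the application talks to."""

    @abstractmethod
    def add_entry(self, amount, description):
        raise NotImplementedError

class SqliteModel(Model):
    """All SQLite-specific SQL lives here, and only here, so moving
    away from SQLite later means writing one new subclass."""

    def __init__(self, path="money.db"):
        self.db = sqlite3.connect(path)

    def add_entry(self, amount, description):
        with self.db:
            self.db.execute(
                "insert into entries (amount, description) values (?, ?)",
                (amount, description),
            )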
Please excuse my long-winded explanation, but I wanted to be as explicit as possible in the hopes of getting as much useful feedback on my situation as possible. You can skip to the questions at the bottom if you are impatient.
Explanation
At my current job, development is done in an antiquated language that is hard-wired to a proprietary DBMS that comes with the language. The language is CRUD-focused, and is essentially a glorified database querying/reporting/updating language with some programming features bolted on as an afterthought. Most programs are top-down procedures and there is very little code reuse; updating a record often requires updating many entangled, related records at the same time that you just need to "know about" as the proprietary database has no inherent foreign key relationships. If a table needs to be updated, we generally must grep our source code and update every procedure that creates/updates records for that table and recompile. I could go on with other annoyances, but needless to say, I am looking for a way to abstract away as much of this behavior as possible into reusable code segments.
The language has semi-recently added some support for object-oriented development, and I have been able to demonstrate the benefits of reusable code to my coworkers with a recent project written using OO constructs. However, my project was only possible because it was a rare task that did not require interacting with our database.
I have really been trying hard to find a way to create re-usable code using OO techniques with this language, but since everything is so database-focused, what I really need is a way to create container classes around our table designs, putting most of our data processing logic into class methods and merging N related tables into 1 singular class. This has brought me to the idea of ORM frameworks, which of course is non-existent on the language I am using at work.
What I have found, is that the DBMS for this language can run a SQL99 engine concurrently with the proprietary language engine, and it includes JDBC and ODBC drivers. This has opened the door for me to explore migration strategies, which is where I think we eventually need to go. Since the SQL engine runs concurrently with the old engine, it is possible for us to do an incremental migration, running new code alongside old code with an eventual goal of migrating our data to a "pure" SQL DBMS when all the old code is replaced.
I initially did quite a bit of reading and proposed Java (using JPA2 for ORM) to my manager, but I think I scared him, as he views Java as a bit heavyweight for our needs. I then did a little more digging and re-proposed Ruby on the JRuby interpreter (using either ActiveRecord or DataMapper for ORM). This was much better received, as Rails fits in well with the shift toward Web-based front-ends that we are attempting to make with our old kludgy code, and of course the ability to interact with Java if the need arises is a great capability.
The Questions
Nearly all of the reading I have been doing about ORM is focused on starting with a class structure, and creating the mapped database structure as a secondary process. Is going the other way around (starting with an existing database and mapping classes to it) a very odd thing to do?
Assuming question #1 == true, how flexible are existing ORM frameworks such as JPA2, ActiveRecord, DataMapper etc. to "imperfect" table design? I am sure we will have to do some refactoring of existing table design, but would like to know if I am undertaking a Herculean task before I waste too much time on the effort.
If anyone has a better idea for language+ORM, I would love to hear it. It must be SQL-ready using JDBC or ODBC to fit into our incremental migration plan.
If anyone has any experience on a similar effort and could point out any helpful resources (especially books), I would be very grateful!
Nearly all of the reading I have been doing about ORM is focused on starting with a class structure, and creating the mapped database structure as a secondary process. Is going the other way around (starting with an existing database and mapping classes to it) a very odd thing to do?
Not really. There are several approaches when dealing with the persistence layer of an application:
Top-down: You start with the object model and the mappings and you derive the database schema from that data.
Bottom-up: You start with your data model i.e. the database schema and you derive the object model and the mappings from the tables.
Middle-out: You start with the mapping and you generate the object model and the tables.
Meet-in-the-middle: You start with an existing database schema and an existing object model, and you develop a mapping between the two (you can even introduce an additional object layer and bridge the existing one).
The top-down approach is the most object-oriented but the meet-in-the-middle approach is probably the most common.
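As an aside, tooling for the bottom-up direction does exist. For example, SQLAlchemy (Python rather than one of the JVM options discussed here, but it illustrates the idea) can reflect an existing schema and generate mapped classes from it. A minimal sketch, assuming a recent SQLAlchemy and an existing customer table with a primary key:

from sqlalchemy import create_engine
from sqlalchemy.ext.automap import automap_base

engine = create_engine("sqlite:///legacy.db")

# reflect the existing tables and generate mapped classes from them;
# automap requires each table to have a primary key
Base = automap_base()
Base.prepare(autoload_with=engine)

Customer = Base.classes.customer  # mapped class derived from the existing table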
Assuming question #1 == true, how flexible are existing ORM frameworks such as JPA2, ActiveRecord, DataMapper etc. to "imperfect" table design? I am sure we will have to do some refactoring of existing table design, but would like to know if I am undertaking a Herculean task before I waste too much time on the effort.
I would say that JPA is not the most flexible; it will not deal very well with exotic or heavily denormalized schemas (the result might be ugly from an OO point of view). Accesses that don't go through JPA might also be a problem. A data mapper tool like iBatis (now MyBatis) will give you more flexibility.
If anyone has a better idea for language+ORM, I would love to hear it. It must be SQL-ready using JDBC or ODBC to fit into our incremental migration plan.
I know that RoR can deal with existing databases; I'm just not sure what the result will look like. But I don't really have enough experience with RoR, so I'll let the experts elaborate on this.
If anyone has any experience on a similar effort and could point out any helpful resources (especially books), I would be very grateful!
I suggest browsing Scott Ambler's website and his book(s):
The Process of Database Refactoring: Strategies for Improving Database Quality
More food for thought:
Working Effectively with Legacy Code by Michael Feathers
Clean Code by Robert Martin
I am putting some heavy thought into re-writing the data access layer in my software (if you could even call it that). This was really my first project that uses a database, and things were done in an improper manner.
In my project, all of the data being pulled is stored in an ArrayList. Some of the data is converted from the ArrayList into a typed object before being put back into an ArrayList.
Also, there is no central set of queries in the application. This means that some queries are copied and pasted, which I want to eliminate as well. This application has some custom objects that are very standard to the application, and some queries that are very standard to those objects.
I am really just not sure if I should create a layer between my objects and the class that reads and writes to the database. This layer would take the data that comes from the database, type it as the proper object, and if multiple objects are returned, return a list of those objects. Is this a good approach?
Also, if this is a good way of doing things, how should I return the data from the database? I am currently using SqlDataReader.Read() and filling an ArrayList. I am sure this is not the best method to use here; I am just not really clear on how to improve it.
The reason for all of this is that I want to centralize all of the database operations into a few classes, rather than have them spread out among all of the classes in the project.
You should use an ORM. "Not doing so is stealing from your customers" - Ayende
One thing comes to mind right off the bat. Is there a reason you use ArrayLists instead of generics? If you're using .NET 1.1 I could understand, but it seems that one area where you could gain performance is to remove ArrayLists from the picture and stop converting and casting between types.
Another thing you might think about which can help a lot when designing data access layers is an ORM. NHibernate and LINQ to SQL do this very well. In general, the N-tier approach works well for what it seems like you're trying to accomplish. For example, performing data access in a class library with specific methods that can be reused is far better than "copy-pasting" the same queries all over the place.
I hope this helps.
It really depends on what you are doing. If it is a growing application with user interfaces and the like, you're right, there are better ways.
I am currently developing in ASP.NET MVC, and I find LINQ to SQL really comfortable. LINQ to SQL uses code generation to create a collection of classes that model your data.
ScottGu has a really nice introduction to LINQ to SQL on his blog:
http://weblogs.asp.net/scottgu/archive/2007/05/19/using-linq-to-sql-part-1.aspx
Over the past few projects I have used a base class which does all my ADO.NET work and which all other data access classes inherit from. So my UserDB class inherits the DataAccessBase class. At the moment, my UserDB class actually takes the data returned from the database and populates a User object, which is then returned to the calling business object. If multiple objects are returned, then a generic list, i.e. List<Users>, is returned.
There is a good article by Daemon Armstrong (search Google for Daemon Armstrong) which demonstrates how this can be achieved:
http://www.simple-talk.com/dotnet/.net-framework/.net-application-architecture-the-data-access-layer/
However, I have now started to move all of this over to the Entity Framework, as it performs much better and saves on all those manual CRUD operations. I was going to use LINQ to SQL, but since it seems it will be dead in the water very soon, I thought it best to invest my time in the next ORM.
"I am really just not sure if I should create a layer between my objects and the class that reads and writes to the database. This layer would take the data that comes from the database, type it as the proper object, and if there is a case of multiple objects being returned, return a list of those object. Is this a good approach?"
I'm a Java developer, but I believe that the language-agnostic answer is "yes".
Have a look at Martin Fowler's "Patterns Of Enterprise Application Architecture". I believe that technologies like LINQ were born for this.
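The question is about .NET, but the shape of such a layer is the same in any language. A Python sketch of the "type the rows as proper objects and return a list" idea (the User class and users table are invented for the example):

import sqlite3
from dataclasses import dataclass

@dataclass
class User:
    user_id: int
    name: str

def get_users(db):
    # the row-to-object mapping lives here, and only here
    rows = db.execute("select id, name from users").fetchall()
    return [User(user_id=row[0], name=row[1]) for row in rows]

db = sqlite3.connect("app.db")
users = get_users(db)  # a typed list instead of a raw ArrayList-style collection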