I recently started a project based on FuelPHP.
So this site will have a lot of relations between models.
My question is: is it a good idea to use both the query builder and the ORM?
For example, I would use the ORM only for relations, and the query builder to insert, update and delete records in the database.
Or is this not a good idea?
The ORM is fairly powerful; it has Create, Read, Update and Delete (CRUD) functionality built in (http://docs.fuelphp.com/packages/orm/crud.html), so it should be possible to use the ORM for the most part. That said, I am currently working on a project where we have elected to use both the ORM and the query builder.
There is a thread on the FuelPHP forums that discusses this: http://fuelphp.com/forums/topics/view/7345
In general this is not a problem, but you have to take into account that the ORM does result caching (on a per-request basis).
So if you run an ORM query to retrieve records, then run a QB query to modify those records, the ORM will still return the unmodified versions, even if you run the same ORM query again (it will see the data is cached, and will not run another query to retrieve it).
In my project I am about to use Hibernate, but one thing creates confusion.
I read somewhere:
Hibernate has its own query language, i.e. Hibernate Query Language (HQL), which is database independent.
So if we change the database, our application will still work, as HQL is database independent.
HQL contains database-independent commands.
Does this mean that we don't have to write stored procedures and views while using Hibernate in Java?
Short answer: you don't have to write any queries or stored procedures. (You can also tell Hibernate to create/update all required tables for you during application start.)
Long answer: Hibernate can be used without manually defining any queries. (Using the EntityManager, you can simply tell Hibernate to fetch every User.class entity from the database.) However, it also supports HQL as well as native SQL queries.
Native SQL queries will of course stop working when you switch to another database later on. HQL will work for every database, because Hibernate is able to translate HQL queries into the dialect of any supported database.
But be aware: in my opinion, Hibernate is very slow if you let it do all the work. (Hibernate fires a LOT of single SELECT queries when loading entities with complex relations.)
Because of performance issues, we are using an in-memory approach for one of my projects, in which we load all tables into RAM as generic collections (using NHibernate).
The issue is that when we were using a simple LINQ to SQL approach, the testing and QA teams could easily capture the SQL queries with SQL Profiler for the page they were viewing.
With the new (in-memory) approach, we load all the data into collections in one go and then use LINQ to query those collections, so the testing and QA teams are no longer able to capture SQL queries to verify the business logic and confirm bugs.
Please suggest any solution that can help in this situation. I think it is not possible to get SQL from LINQ to Objects (as all the data is already in collections), but please suggest any solution/approach/tool that could generate the SQL equivalent of the LINQ queries being run against the collections, or any other good solution.
NOTE: I know getting SQL out of LINQ to Objects is not possible; I am looking for suggestions that would help my QA and testing teams verify the queries/business logic (as they did earlier by capturing the SQL), for example by logging the LINQ queries as strings that can then be run or analysed.
The only SQL statements that you are running in this situation are the initial SELECT queries to load your data into memory. Once you have done those, you are no longer running "queries", you are instead performing .NET Framework calls.
Given that you have fundamentally changed the architecture of the application, you need to communicate this to the testing and QA teams - they will not be able to "see" what the application is now doing under the hood in the way that they could previously. If this sort of "deep dive" capability is a requirement of your test teams then your architecture is likely to require further modifications.
When I started coding my Sinatra application I had never used it before. Note that I had, and still have, no experience with RoR. I had one .rb file and one .haml file and was happy. Now I have had to split the .rb file into about ten 'library' files as the whole application gets more and more complex.
I store some application logs/info in CSV files, and now I am getting conflicts when accessing the CSV file, so I think I need to introduce a "proper" database solution. I want it to be part of my Ruby (Sinatra) application.
How can I introduce a 'light' SQL database into my Sinatra application?
I am on ruby 1.8.7 (2010-08-16 patchlevel 302) [i386-mingw32], soon upgrading to 1.9 (hopefully).
I'd recommend looking at Sequel. It's very flexible and powerful, and works well with SQLite, MySQL, Postgres, Oracle and other DBMs. It's not opinionated about how you talk to the database; you can use it as an ORM or with simple datasets, and it allows embedded SQL or more programmatic approaches.
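To give a feel for it, here is a minimal sketch of getting started with Sequel on SQLite (it assumes the sqlite3 gem is installed; the database file, table and column names are only placeholders):
require 'rubygems'
require 'sequel'

# Connect to (or create) a local SQLite database file
DB = Sequel.sqlite('myapp.db')

# Create a table for log entries if it doesn't exist yet
DB.create_table? :log_entries do
  primary_key :id
  String :message
  Time   :created_at
end

logs = DB[:log_entries]                                    # a dataset

logs.insert(:message => 'hello', :created_at => Time.now)  # INSERT
logs.where(:message => 'hello').each { |row| p row }       # SELECT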
For an ORM, both ActiveRecord and Sequel are recommended. As for the database, I guess SQLite3 will be good enough for your needs; you can also choose MySQL or PostgreSQL.
If you want to use active_record, you'll find this article very useful.
And if Sequel is the choice, just read the Sequel documentation here.
After the gems are installed, you can start writing some code to connect to the database, then maybe a migration task to build the database tables (and don't forget to build some corresponding models). Both gems have similar syntax for migrations. After that, import your CSV data and you're done. A rough sketch of these steps follows below.
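For illustration only, here is roughly what those steps might look like with Active Record (Sequel is similar); the file, table and column names are invented, and on Ruby 1.8.7 you would need the fastercsv gem instead of the 1.9 csv library:
require 'rubygems'
require 'active_record'
require 'csv'   # use the fastercsv gem on Ruby 1.8.7

ActiveRecord::Base.establish_connection(
  :adapter  => 'sqlite3',
  :database => 'myapp.db'
)

# Build the table once (a real migration file would do the same thing)
unless ActiveRecord::Base.connection.table_exists?('log_entries')
  ActiveRecord::Schema.define do
    create_table :log_entries do |t|
      t.string   :message
      t.datetime :created_at
    end
  end
end

class LogEntry < ActiveRecord::Base
end

# One-off import of the existing CSV logs
CSV.foreach('logs.csv', :headers => true) do |row|
  LogEntry.create(:message => row['message'], :created_at => row['created_at'])
end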
I have had no trouble using either Active Record or DataMapper to add object persistence to my Sinatra apps. People also tell me Sequel is very good, but philosophically it is not worlds apart from Active Record, imho.
Active Record and Sequel favour a more database-centric approach, whereby you spell out your tables as a set of database and table definitions in a collection of migration files and merge them into a schema which is then used to build or update your database tables. If you really care about the underlying SQL database then one of these is for you. I find them to be six of one, half a dozen of the other.
DataMapper is more object-centric and lets you define the properties and object relationships you need in your object's own class definition; then, when your app launches, you make sure you call DataMapper.auto_upgrade! and it upgrades the database to suit your object graph. The upside is that you only have one place to look to find what properties your object might have. The downside is that you have less control over the specifics of the underlying database, though it's not impossible to tightly define the mappings. DataMapper works well when you care about object graphs more than database tables.
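A minimal sketch of that style, with an invented LogEntry model, might look like this (assuming the dm-core, dm-migrations and dm-sqlite-adapter gems):
require 'rubygems'
require 'dm-core'
require 'dm-migrations'   # provides auto_upgrade!

DataMapper.setup(:default, "sqlite://#{Dir.pwd}/myapp.db")

# All the persistence details live in the class itself
class LogEntry
  include DataMapper::Resource
  property :id,         Serial
  property :message,    String
  property :created_at, DateTime
end

DataMapper.finalize
DataMapper.auto_upgrade!   # adds missing tables/columns without dropping data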
The good news is they pretty much all work in the same way once you have your mappings from object graph to SQL database tables defined. All support lazy or pre-emptive loading of related collections of objects, many_to_many relationships, polymorphism, etc, and tend to vary only in configuration and seeding details.
I often start projects using DataMapper just for its speed of throwing up and tearing down database schemas, as the app's object graph is still in flux; I refactor quickly to use Active Record when the schema has settled down. Next project I think I'll give Sequel a go though, as people do seem to rave about it.
I have had success using DataMapper with Sinatra, but like the other posters said, you can also use Sequel and Active Record. One advantage of using Active Record, though, is that if you ever want to use or learn RoR, Active Record is its default ORM, so that might be something you want to consider.
If you don't want to go the ORM route you can always use the sqlite-ruby gem, which will allow you to create and run SQL queries directly. Here is some example code from the website http://sqlite-ruby.rubyforge.org/
require 'sqlite'

# Open (or create) the database file
db = SQLite::Database.new( "data.db" )

# Run a query and print each returned row (substitute your own table name)
db.execute( "select * from log_entries" ) do |row|
  p row
end

db.close
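Note that the sqlite gem shown above targets the old SQLite 2 library; if it won't install on your setup, the sqlite3 gem is the usual choice, and its API is almost identical, roughly:
require 'rubygems'
require 'sqlite3'

db = SQLite3::Database.new("data.db")

db.execute("select * from log_entries") do |row|
  p row
end

db.close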
I would like to build arbitrary queries against a database by letting the user build queries "on the fly": for every object/table, being able to select its attributes, then "build" the query (which would translate into an SQL statement) and finally run it, all through a web interface.
The ticketing system "rt" does that, for example, and another example would be the http://gatherer.wizards.com/Pages/Advanced.aspx page.
I'm currently programming in rails but any existing solution that implements this (or something similar) would be welcome.
Just be careful when creating dynamically generated queries like this that will need to be executed via sp_executesql (on MS SQL Server, for example). Make sure you cover all of your bases to ensure that your application isn't vulnerable to SQL injection attacks, as this type of development will get you in a lot of trouble if it's done incorrectly. I would recommend storing all queries in a table and only reading queries from that table, to help isolate the queries being run in your application. Just identify them with a label, and allow the end user to choose the label from a dropdown list control on the frontend; a rough sketch of that idea follows below.
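For what it's worth, in the Rails context of the question, a rough sketch of the stored-query idea might look like the following; the SavedQuery model and its label/sql columns are invented for illustration, and the stored SQL is assumed to contain only "?" placeholders for user-supplied values:
# Hypothetical model over a saved_queries(label, sql) table
class SavedQuery < ActiveRecord::Base
end

def run_saved_query(label, *values)
  # The end user only ever picks a label from a dropdown; the SQL itself
  # never comes from user input.
  saved = SavedQuery.find_by_label(label)
  # Bind the user-supplied values instead of interpolating them
  sql = SavedQuery.send(:sanitize_sql_array, [saved.sql, *values])
  SavedQuery.connection.select_all(sql)   # rows come back as hash-like records
end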
Good luck, and I'm not aware of any existing software that will help with this.
Not quite sure what your use case is here, but I would say check out the
Doctrine ORM (Object Relational Mapper).
Edit:
After reading more and looking at the example, I would only suggest Doctrine for a large website.
Then use Doctrine's DQL syntax with some JavaScript/jQuery magic for the forms.
Note that the queries you're referencing aren't arbitrary: they're on a very specific problem domain, against a specific set of SQL tables.
That said, if I were you I'd look into how people are building sql queries with javascript. Something like these:
http://code.google.com/p/django-querybuilder/
http://css.dzone.com/articles/sqlike-sql-querying-engine?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+zones%2Fria+(RIA+Zone)
http://thechangelog.com/post/4914956307/rel-arel-ported-to-node-js-with-some-changes
That'll at least get you a good idea of the underlying data structures.
I started looking at SubSonic just yesterday and am having trouble figuring out how to do even the most basic tasks. I've watched the demos for ActiveRecord and SimpleRepository, but they don't fit what we want, so I'm trying to use the Linq Templates.
The getting started guide for Linq walks through enough to do a query, but how do I do other things, like insert a record and get its auto-increment ID back?
Is there a reasonably comprehensive guide to using Subsonic Linq somewhere?
Well there is this:
http://subsonicproject.com/docs/Using_AdvancedTemplates
Which I can see is a bit sparse :). It works just like Linq to SQL in most cases, in that you need to create a "DB" object. That DB object allows you to Insert, Delete, etc. for all the objects. You can also do aggregates and so on.
using(var db = new NorthwindDB()){
    db.Insert.Into("Name").Values("New Name").Execute();
}
The tools used to interact with the DB follow along with our Simple Query tool:
http://subsonicproject.com/docs/Simple_Query_Tool
If you want more things done for you (like getting the new id back, etc) you should stick with ActiveRecord.
Out of curiosity - what doesn't fit?