I am migrating raw PHP code to CakePHP and have some problems. Since I'm having big problems with the query-to-ORM transformation, I'm temporarily using raw SQL. Everything is going well, but I've run into some ugly code and don't really know how to make it beautiful. I made a DealersController and added a function advanced($condition = null) (it will be called from AJAX with parameters 1-15 and 69). The function looks like:
switch ($condition) {
case '1':
$cond_query = ' AND ( (d.email = \'\' OR d.email IS NULL) )';
break;
case '2':
$cond_query = ' AND (d.id IN (SELECT dealer_id FROM dealer_logo))';
break;
// There are many cases, some long, some like these two
}
if($user_group == 'group_1') {
$query = 'LONG QUERY WITH 6+ TABLES JOINING' . $cond_query;
} elseif ($user_group == 'group_2'){
$query = 'A LITTLE BIT DIFFERENT LONG QUERY WITH 6+ TABLES JOINING' . $cond_query;
} else {
$query = 'A LITTLE MORE DIFFERENT LONG QUERY WITH 10+ TABLES JOINING' . $cond_query;
}
// THERE IS $this->Dealer->query($query); and so on
So... as you can see, the code looks ugly. I have two options:
1) Extract the query-building into model methods, one per condition, and separate those conditions into functions. But this is not DRY, because the three main big queries are almost the same, and if I ever need to change something in one of them, I will need to change 16+ queries.
2) Make small, reusable model methods/queries that fetch small pieces of data from the DB, then compose those methods instead of using raw SQL. That would be cleaner, but performance would be low, and I need it as high as possible.
Please give me advice. Thank you!
If you're concerned about CakePHP making a database query for every joined table, you might find that the Linkable behaviour can help you reduce the number of queries (where the joins are simple associations on a single table).
Otherwise, I find that creating simple database querying methods at the Model level to get your smaller pieces of information, and then combining them afterwards, is a good approach. It allows you to clearly outline what your code does (through inline documentation). If you can migrate to using CakePHP's find methods instead of raw queries, you will be using the conditions array syntax. So one way you could approach your problem is to have public functions on your Model classes which append their appropriate conditions to an inputted conditions array. For example:
class SomeModel extends AppModel {
...
public function addEmailCondition(&$conditions) {
$conditions['OR'] = array(
'alias.email_address' => null,
'alias.email_address =' => ''
);
}
}
You would call these functions to build up one large conditions array which you can then use to retrieve the data you want from your controller (or from the model if you want to contain it all at the model layer). Note that in the above example, the conditions array is being passed by reference, so it can be edited in place. Also note that any existing 'OR' conditions in the array will be overwritten by this function: your real solution would have to be smarter in terms of merging your new conditions with any existing ones.
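For example, a controller action could combine several of these builder methods and then run a single find. A rough sketch, assuming the Dealer model defines condition builders like the one above (the base condition and the dispatch on $condition are illustrative, not from the original code):
// In DealersController: build one conditions array, then run one find.
public function advanced($condition = null) {
    $conditions = array('Dealer.active' => 1); // hypothetical base condition

    if ($condition === '1') {
        // Appends the "email empty or missing" OR-block shown above
        $this->Dealer->addEmailCondition($conditions);
    }

    $dealers = $this->Dealer->find('all', array('conditions' => $conditions));
    $this->set('dealers', $dealers);
}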
Don't worry about 'hypothetical' performance issues - if you've tried the queries and they're too slow, then you can worry about how to increase performance. But for starters, try to write the code as cleanly as possible.
You might also want to consider splitting that advanced() function into multiple controller actions, grouped by the similarity of their condition queries.
Finally, in case you haven't already checked it out, here's the Book's entry on retrieving data from models. There might be some tricks you hadn't seen before: http://book.cakephp.org/view/1017/Retrieving-Your-Data
If the base part of the query is the same, you could have a function to generate that part of the query, and then use other small functions to append the different where conditions, etc.
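A rough sketch of that idea, reusing the placeholder queries from the question (the method names are illustrative):
class Dealer extends AppModel {
    // One place for each long base query, keyed by user group.
    private function baseQuery($userGroup) {
        switch ($userGroup) {
            case 'group_1': return 'LONG QUERY WITH 6+ TABLES JOINING';
            case 'group_2': return 'A LITTLE BIT DIFFERENT LONG QUERY';
            default:        return 'A LITTLE MORE DIFFERENT LONG QUERY';
        }
    }

    // One place for each condition fragment, keyed by the AJAX parameter.
    private function conditionFragment($condition) {
        $fragments = array(
            '1' => " AND (d.email = '' OR d.email IS NULL)",
            '2' => ' AND d.id IN (SELECT dealer_id FROM dealer_logo)',
            // ... cases 3-15 and 69
        );
        return isset($fragments[$condition]) ? $fragments[$condition] : '';
    }

    public function advancedSearch($userGroup, $condition = null) {
        return $this->query($this->baseQuery($userGroup) . $this->conditionFragment($condition));
    }
}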
I want to find the sum of different columns, and I can do it in two ways, as follows:
1st way
$obj = $modelClass::findBySql($sql, $params);
$clone = clone $obj;
$grand_amount = $clone->sum('amount');
$grand_tax_amount = $clone->sum('tax_amount');
$grand_total = $clone->sum('total_amount');
2nd way
$grand_amount = $modelClass::findBySql($sql1, $params)->sum('amount');
$grand_tax_amount = $modelClass::findBySql($sql1, $params)->sum('tax_amount');
$grand_total = $modelClass::findBySql($sql1, $params)->sum('total_amount');
Referring to the above two ways, which will be more efficient? Or will both execute the same number of queries?
It does not matter whether you clone the query or not. sum() performs an actual SQL query - if you call it three times, you will get three queries.
From a micro-optimization perspective, clone should be faster than creating the same query object three times. But you will not see much difference - executing the real SQL queries will be the main bottleneck here; the PHP overhead is negligible.
BTW: the cloning in your example does not make much sense - you never use the original $obj, and $clone is used three times. You can use $obj directly and avoid cloning - it only makes sense if you want to modify the cloned query without modifying the original ActiveQuery object.
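As an aside, if all three sums come from the same set of rows, you can often fetch them in a single round trip instead of three. A sketch using Yii2's query builder - the table name and condition are placeholders, since the original $sql isn't shown:
use yii\db\Query;

// One query computes all three aggregates at once.
$row = (new Query())
    ->select([
        'grand_amount'     => 'SUM(amount)',
        'grand_tax_amount' => 'SUM(tax_amount)',
        'grand_total'      => 'SUM(total_amount)',
    ])
    ->from('invoice')              // placeholder table
    ->where(['status' => 'paid'])  // placeholder condition
    ->one();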
I have this function that takes data from the database and also supports search. The problem is that searching with Entity Framework is slow, but if I take the same query from the log and run it in SSMS, it's fast. I should also say that there are a lot of movies: 388,262. I also tried adding an index on Movie.title, but it didn't help.
Query I use in SSMS:
SELECT *
FROM Movie
WHERE title LIKE '%pirate%'
ORDER BY @@ROWCOUNT
OFFSET 0 ROWS FETCH NEXT 30 ROWS ONLY
Entity Framework code (_movieRepository.GetAll() returns an IQueryable, not all movies):
public IActionResult Index(MovieIndexViewModel vm) {
IQueryable<Movie> query = _movieRepository.GetAll().AsNoTracking();
if (!string.IsNullOrWhiteSpace(vm.Search)) {
query = query.Where(m => m.title.ToLower().Contains(vm.Search.ToLower()));
}
vm.TotalItemCount = query.Count();
vm.Movies = query.Skip(_pageSize * (vm.Page - 1)).Take(_pageSize);
vm.PageSize = _pageSize;
return View(vm);
}
Caveat: I don't have much experience with the Entity framework.
However, you might find useful debugging tips in the Entity Framework Performance Article from Simple Talk. Looking at what you've posted, you might be able to improve your query performance by:
Choosing only the specific columns you're interested in (it sounds like you're only interested in querying for the 'Title' column) - see the sketch after this list.
Paying special attention to your data types. You might want to convert your NVARCHAR variables to VARCHAR(40) (or some appropriate character limit).
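For the first point, here is a minimal sketch of such a projection (the Id property and the anonymous-type shape are assumptions; your entity may differ):
// Fetch only the columns the page needs instead of entire Movie rows;
// EF translates the anonymous type into SELECT Id, title.
var page = query
    .Select(m => new { m.Id, m.title })
    .Skip(_pageSize * (vm.Page - 1))
    .Take(_pageSize)
    .ToList();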
Try removing all of the ToLower() calls:
if (!string.IsNullOrWhiteSpace(vm.Search)) {
query = query.Where(m => m.title.Contains(vm.Search));
}
SQL Server (unlike C#) is not case-sensitive by default (though you can configure it to be). Your query is forcing SQL Server to lower-case every record in the table and then do the comparison.
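Conversely, if you ever do need a guaranteed case-sensitive match, you can request one explicitly with a collation instead of ToLower(). For example (Latin1_General_CS_AS is just one common case-sensitive collation; pick whichever fits your data):
-- Case-sensitive match via an explicit collation, leaving the stored data untouched
SELECT *
FROM Movie
WHERE title COLLATE Latin1_General_CS_AS LIKE '%pirate%'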
Here is my test scenario.
When a command line executes, a lot of values are written to different tables in a DB.
There are multiple command line options and many values/tables to be verified in the DB. How should I go about designing the checks for values in the DB?
What I have done so far:
Execute the command
Connect to the DB
Run the query particular to the command (this will be very specific to the command, regarding which tables I need to look in)
From the returned dataset, check that dt[row][col] == "value I expect".
The advantage here is that I have to write less code as part of the framework.
The downside is that I have to write more code when developing each test; it's not streamlined, and I may sometimes get the column names wrong.
So I am trying to see if I can streamline the checks better - something like declaring a class (per table) with the columns as exposed properties. That way, at least, I won't get the column names wrong.
Is this the simplest approach (for the long run? I want to write more reusable code). If not, what is the best way?
Is there an easy way to export tables/column definitions from the DB to a C# project file (as a class/code), so that I don't have to write everything out in code again?
Let me know if any details are not clear or if you would like me to elaborate a bit more.
Thanks for looking.
I don't know of a standard approach for this problem, but I'll offer some ideas.
I usually find myself creating classes to represent tables to take advantage of compile-time checks, so I think that's a good way to go. You might want to look into Linq-to-SQL -- I think it can do a lot of this for you. I sometimes use ActiveRecord in Ruby for this purpose, even on C# projects, because development with it is very quick.
Alternatively, you might want to put the tests in text files:
Command:
command to execute
Expected Data:
SELECT column FROM table;
row name, column name, expected data
row name, column name, expected data
row name, column name, expected data
Expected Data:
SELECT column FROM table;
row name, column name, expected data
row name, column name, expected data
Then write a little code to load and parse your files, run the command, and compare the results. This would factor out only the things that change with each test, so I don't know if it can get much better.
Another idea is to try pulling common code into a base class and keep the varying code in subclasses. This design follows the Template Method pattern.
Example
class MyTest : CommandLineTest {
    public override string Command() { return "Command to execute"; }
    public override string DataRetrievalCommand() { return "SELECT column FROM table"; }
    public override DataResult[] ExpectedData() {
        return new[] { new DataResult("column", "row", "value") /* , ... */ };
    }
}
The superclass will use these methods to get the commands and values, but it will be the one doing all the actual work. It's a similar idea to the text files, but the test specification is kept in code.
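To make the pattern concrete, here is a hedged sketch of what that base class might look like (DataResult and the plumbing methods are hypothetical names, not from the original post):
using System;

public class DataResult {
    public readonly string Column, Row, Value;
    public DataResult(string column, string row, string value) {
        Column = column; Row = row; Value = value;
    }
}

public abstract class CommandLineTest {
    // The three members each concrete test provides:
    public abstract string Command();
    public abstract string DataRetrievalCommand();
    public abstract DataResult[] ExpectedData();

    // Shared plumbing, implemented once for the whole framework:
    protected virtual void RunCommandLine(string command) =>
        throw new NotImplementedException("hook up Process.Start here");
    protected virtual string QueryValue(string sql, string row, string column) =>
        throw new NotImplementedException("hook up ADO.NET here");

    // The Template Method: drives every test the same way.
    public void Run() {
        RunCommandLine(Command());
        foreach (DataResult expected in ExpectedData()) {
            string actual = QueryValue(DataRetrievalCommand(), expected.Row, expected.Column);
            if (actual != expected.Value)
                throw new InvalidOperationException(
                    $"Mismatch at {expected.Row}/{expected.Column}: expected '{expected.Value}', got '{actual}'");
        }
    }
}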
Hope this helps or at least gets a few ideas flowing.
I am using Groovy's Sql object to perform queries on a postgres db. The queries are being executed as follows:
List<Map> results = sql.rows("select * from my_table")
List<Map> result2 = sql.rows("select * from my_second_table")
I have a Groovy method that performs two queries and then does some processing, looping through the data to build a different dataset. However, on some occasions I receive a Postgres exception: a This ResultSet is closed error.
Having searched, I originally thought it might be related to the issue here: SQLException: This ResultSet is closed (running multiple queries and trying to access the data from the result sets after the fact). However, we only seem to get the exception under quite high load, which suggests that it isn't as simple as the first dataset being closed upon executing the second query; if that were the case, I would expect it to happen consistently.
Can anyone shed any light on how Groovy's Sql object handles these situations or suggest what might be going wrong?
Groovy SQL is kind of a weird cat: easy to use for simple stuff, but if you have more complex scenarios, you are probably better off using something else, IMHO.
I first suggest doing one query and storing the results in a collection, then doing the second query and storing those results in a collection, and then doing your operations between the two collections rather than between result sets. If your data is too large for that, find some way to store it locally before you start doing your aggregation.
If you don't like that, you might need to check out the GDK source code to get a better idea of what is done with Sql.getInstance() in relation to result sets, etc. Then you can sidestep whatever land mine you are inadvertently stepping on.
Perhaps
List<Map> results = sql.rows("select * from my_table")
List<Map> result2 = sql.rows("select * from my_second_table")
will not work even in plain Java (as already said in the answer you linked: when a second call is made on a statement, all resources dedicated during the previous call have to be released). As mentioned by @Todd W Crone, Groovy can optimize resources, e.g. release them dynamically or not release them, depending on the particular run.
Actually, I've tried it with only one query. E.g. I've tried to get a ResultSet and then iterate through it, like this (don't mind the table and field names; the query is rather simple, and the result is one row containing one column, due to the LIMIT 1 clause):
def resultSet = sql.executeQuery("SELECT age FROM person WHERE id = 12345 LIMIT 1")
resultSet.next()
and got the This ResultSet is closed error. It seems that Groovy optimizes resources and closes the ResultSet immediately. I didn't look into the source code of the Groovy SDK. I found that eachRow and other methods with closure-style handlers work fine and don't throw the This ResultSet is closed error.
Perhaps methods with closure-style handlers can help you. For example, look at this excerpt from the article, where the rows() method is used with a closure:
String query = 'select id as identifier, name as langName from languages'
def rows = db.rows(query, { meta ->
assert meta.tableName == 'languages'
assert meta.columnCount == 2
// ...
})
I have tried to use FLINQ but it is rather out of date with F# 3.0 beta.
Can someone give me some pointers on how to create dynamic SQL queries in F#?
We have recently developed a library, FSharpComposableQuery, aimed at supporting more flexible composition of query expressions in F# 3.0 and above. It's intended as a drop-in replacement overloading the standard query builder.
Tomas's example can be modified as follows:
open FSharpComposableQuery
// Initial query that simply selects products
let q1 =
<@ query { for p in ctx.Products do
           select p } @>
// Create a new query that specifies only expensive products
let q2 =
query { for p in %q1 do
where (p.UnitPrice.Value > 100.0M) }
This simply quotes the query expression and splices it into the second query. However, this results in a quoted query expression that the default QueryBuilder may not be able to turn into a single query, because q2 evaluates to the (equivalent) expression
query { for p in (query { for p in ctx.Products do
select p }) do
where (p.UnitPrice.Value > 100.0M) }
which (as in Tomas's original code) will likely be evaluated by loading all of the products into memory, and doing the selection in memory, whereas what we really want is something like:
query { for p in ctx.Products do
where (p.UnitPrice.Value > 100.0M) }
which will turn into an SQL selection query. FSharpComposableQuery overrides the QueryBuilder to perform this transformation, among others. So queries can be composed more freely using quotation and antiquotation.
The project home page is here: http://fsprojects.github.io/FSharp.Linq.ComposableQuery/
and there is some more discussion in an answer I just provided to another (old) question about dynamic queries: How do you compose query expressions in F#?
Comments or questions (especially if something breaks or something that you think should work doesn't) are very welcome.
[EDIT: Updated the links to the project pages, which have just been changed to remove the word "Experimental".]
In F# 3.0, the query is quoted automatically, so you cannot use the quotation splicing (the <@ foo %bar @> syntax) that makes composing queries possible. Most of the things that you could write by composing queries using splicing can still be done in the "usual LINQ way" by creating a new query from the previous source and adding, e.g., filtering:
// Initial query that simply selects products
let q1 =
query { for p in ctx.Products do
select p }
// Create a new query that specifies only expensive products
let q2 =
query { for p in q1 do
where (p.UnitPrice.Value > 100.0M) }
This way, you can dynamically add conditions, dynamically specify projection (using select) and do a couple of other query compositions. However, you don't get the full flexibility of composing queries as with explicit quotations. I guess this is the price that F# 3.0 has to pay for a simpler syntax similar to what exists in C#.
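For instance, a condition can be added only when a value is actually supplied. A small sketch building on q1 above (the search parameter and the ProductName property are assumptions):
// Apply the extra 'where' only when a search term is present.
let filterByName (search : string option) =
    match search with
    | Some name ->
        query { for p in q1 do
                where (p.ProductName.Contains(name))
                select p }
    | None -> q1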
In principle, you should be able to write the query explicitly using the query.Select (etc.) operators. This would be written using explicit quotations, so you should be able to use splicing. However, I don't know exactly how the translation works, so I can't give you a working sample. Something like this should work (but the syntax is very ugly, so it is probably better to just use strings or some other technique):
<@ query.Select(Linq.QuerySource<_, _>(ctx.Products), fun prod ->
    // You could use splicing here; for example, if 'projection' is
    // a quotation that specifies the projection, you could write:
    // %projection
    prod.ProductName) @>
|> query.Run
The queries in F# 3.0 are based on IQueryable, so it might be possible to use the same trick as the one that I implemented for C#. However, I guess that some details would be different, so I wouldn't expect that to work straight away. The best implementation of that idea is in LINQKit, but I think it won't directly work in F#.
So, in general, I think the only case that works well is the first example - where you just apply additional query operators to the query by writing multiple queries.