PDO: Passing more parameters to a prepared statement than needed

Can you send more parameters than needed to a prepared statement using PDO without undesired side effects?
That might seem like a strange question, but I ask because I have 4 queries in a row which use overlapping but not identical sets of parameters. The relevant parts of the queries:
1st (select, different table to others):
WHERE threadID = :tid
2nd (select):
WHERE user_ID = :u_ID AND thread_ID = :tid
3rd (update if 2nd was successful):
SET time = :current_time WHERE user_ID = :u_ID AND thread_ID = :tid
4th (insert if 2nd was unsuccessful):
VALUES (:u_ID, :tid, :current_time)
Can I declare one array with the three parameters at the beginning and use it for all 4 queries?
To sort out any confusion: the queries would be executed separately. It is the parameters variable that would be reused, which means some queries would receive parameters they don't need. So something like:
$parameters = array(':tid' => $tid, ':u_ID' => $u_ID, ':current_time' => $time);
$stmt1 = $db->prepare($query1);
$stmt1->execute($parameters);
$stmt2 = $db->prepare($query2);
$stmt2->execute($parameters);
$stmt3 = $db->prepare($query3);
$stmt3->execute($parameters);
$stmt4 = $db->prepare($query4);
$stmt4->execute($parameters);
If I can, should I? Will this slow down my database or scripts, or introduce security flaws?
If I can make this question a bit clearer, please ask.
Thank you!

Perhaps the documentation has been updated since this question was first asked, but it now quite clearly states "No":
You cannot bind more values than specified; if more keys exist in input_parameters than in the SQL specified in the PDO::prepare(), then the statement will fail and an error is emitted.
These answers should be useful in filtering out the extra parameters.

I know this is already answered, and the question only asks whether you can send extra params, but I thought people might arrive here wanting to know how to get around this limitation. Here's the solution I use:
$parameters = array('tid' => $tid, 'u_ID' => $u_ID, 'current_time' => $time);
$stmt1 = $db->prepare($query1);
$stmt1->execute(array_intersect_key($parameters, array_flip(array('tid'))));
$stmt2 = $db->prepare($query2);
$stmt2->execute(array_intersect_key($parameters, array_flip(array('u_ID', 'tid'))));
$stmt3 = $db->prepare($query3);
$stmt3->execute(array_intersect_key($parameters, array_flip(array('u_ID', 'tid', 'current_time'))));
$stmt4 = $db->prepare($query4);
$stmt4->execute(array_intersect_key($parameters, array_flip(array('u_ID', 'tid', 'current_time'))));
That array_intersect_key and array_flip maneuver could be extracted to its own function, like:
function filter_fields($params, $field_names) {
    return array_intersect_key($params, array_flip($field_names));
}
I just haven't got around to it yet.
array_flip turns your array of key names into an array whose keys are those names (the values become meaningless indexes). array_intersect_key then keeps only the entries of the parameters array whose keys also exist in the flipped array, and it keeps the original values, not the flipped ones. So you build one array of parameters but control exactly which ones actually get sent to PDO.
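To make that concrete, here is a small sketch (the values are made up purely for illustration) of what each step produces:
$parameters = array('tid' => 42, 'u_ID' => 7, 'current_time' => 1353524522);
$wanted = array_flip(array('u_ID', 'tid'));
// $wanted is array('u_ID' => 0, 'tid' => 1) - only the keys matter, the values are throwaway indexes
$subset = array_intersect_key($parameters, $wanted);
// $subset is array('tid' => 42, 'u_ID' => 7) - original values kept, 'current_time' dropped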
So, with the function, you'd do:
$parameters = array('tid' => $tid, 'u_ID' => $u_ID, 'current_time' => $time);
$stmt1 = $db->prepare($query1);
$stmt1->execute(filter_fields($parameters, array('tid')));
$stmt2 = $db->prepare($query2);
$stmt2->execute(filter_fields($parameters, array('u_ID', 'tid')));
$stmt3 = $db->prepare($query3);
$stmt3->execute(filter_fields($parameters, array('u_ID', 'tid', 'current_time')));
$stmt4 = $db->prepare($query4);
$stmt4->execute(filter_fields($parameters, array('u_ID', 'tid', 'current_time')));
If you have PHP 5.4, you can use the square bracket array syntax, to make it even cooler:
$parameters = array('tid' => $tid, 'u_ID' => $u_ID, 'current_time' => $time);
$stmt1 = $db->prepare($query1);
$stmt1->execute(filter_fields($parameters, ['tid']));
$stmt2 = $db->prepare($query2);
$stmt2->execute(filter_fields($parameters, ['u_ID', 'tid']));
$stmt3 = $db->prepare($query3);
$stmt3->execute(filter_fields($parameters, ['u_ID', 'tid', 'current_time']));
$stmt4 = $db->prepare($query4);
$stmt4->execute(filter_fields($parameters, ['u_ID', 'tid', 'current_time']));

I got a chance to test my question, and the answer is you cannot send more parameters than the query uses. You get the following error:
PDOException Object
(
    [message:protected] => SQLSTATE[HY093]: Invalid parameter number: parameter was not defined
    [string:Exception:private] =>
    [code:protected] => HY093
    [file:protected] => C:\Destination\to\file.php
    [line:protected] => line number
    [trace:Exception:private] => Array
        (
            [0] => Array
                (
                    [file] => C:\Destination\to\file.php
                    [line] => line number
                    [function] => execute
                    [class] => PDOStatement
                    [type] => ->
                    [args] => Array
                        (
                            [0] => Array
                                (
                                    [:u_ID] => 1
                                    [:tid] => 1
                                    [:current_time] => 1353524522
                                )
                        )
                )
            [1] => Array
                (
                    [file] => C:\Destination\to\file.php
                    [line] => line number
                    [function] => function name
                    [class] => class name
                    [type] => ->
                    [args] => Array
                        (
                            [0] => SELECT
                                       column
                                   FROM
                                       table
                                   WHERE
                                       user_ID = :u_ID AND
                                       thread_ID = :tid
                            [1] => Array
                                (
                                    [:u_ID] => 1
                                    [:tid] => 1
                                    [:current_time] => 1353524522
                                )
                        )
                )
        )
    [previous:Exception:private] =>
    [errorInfo] => Array
        (
            [0] => HY093
            [1] => 0
        )
)
I don't know a huge amount about PDO, hence my question, but since :current_time is sent but not used, and the error message is "Invalid parameter number: parameter was not defined", I conclude that you cannot send extra parameters which are not used.
Additionally, the error code HY093 is generated. I can't seem to find any documentation explaining PDO error codes anywhere, but I came across the following two links specifically about HY093:
What is PDO Error HY093
SQLSTATE[HY093]
It seems HY093 is generated when you incorrectly bind parameters. This must be happening here because I am binding too many parameters.
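For completeness, here is a minimal sketch of how the failure can be caught and inspected, assuming the connection is configured with PDO::ERRMODE_EXCEPTION (as it evidently is for the dump above); the table and column names are just illustrative:
try {
    $stmt = $db->prepare('SELECT some_column FROM some_table WHERE thread_ID = :tid');
    $stmt->execute(array(':tid' => 1, ':current_time' => time())); // one extra key
} catch (PDOException $e) {
    echo $e->getCode();     // HY093
    print_r($e->errorInfo); // Array ( [0] => HY093 [1] => 0 )
}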

Executing different types of queries with one execute leads to problems; you can run multiple selects or multiple updates with one execute, but for this case, create separate prepared statement objects and pass each one the parameters it needs.
// for WHERE threadID = :tid
$st1 = $db->prepare($sql);
$st1->bindParam(':tid', $tid);
$st1->execute();
or
$st1->execute(array(':tid' => $tid));
// for WHERE user_ID = :u_ID AND thread_ID = :tid
$st2 = $db->prepare($sql);
$st2->bindParam(':u_ID', $u_ID);
$st2->bindParam(':tid', $tid);
$st2->execute();
or
$st2->execute(array(':tid' => $tid, ':u_ID' => $u_ID));
// for SET time = :current_time WHERE user_ID = :u_ID AND thread_ID = :tid
$st3 = $db->prepare($sql);
$st3->bindParam(':u_ID', $u_ID);
$st3->bindParam(':tid', $tid);
$st3->bindParam(':current_time', $current_time);
$st3->execute();
or
$st3->execute(array(':tid' => $tid, ':u_ID' => $u_ID, ':current_time' => $current_time));
// for VALUES (:u_ID, :tid, :current_time)
$st4 = $db->prepare($sql);
$st4->bindParam(':u_ID', $u_ID);
$st4->bindParam(':tid', $tid);
$st4->bindParam(':current_time', $current_time);
$st4->execute();
or
$st4->execute(array(':tid' => $tid, ':u_ID' => $u_ID, ':current_time' => $current_time));

Related

Changes between ElasticSearch 1.x and 2.x

Does documentation exist on how to change code written in NEST 1.x to 2.x?
I've looked at these sites and they're incomplete:
https://github.com/elastic/elasticsearch-net/blob/master/docs/2.0-breaking-changes/nest-breaking-changes.md
https://github.com/elastic/elasticsearch-net
https://www.elastic.co/blog/ga-release-of-nest-2-0-our-dot-net-client-for-elasticsearch
For example I'd like to know how to replace the following:
1)
given ISearchResponse<T> searchResults = ...
How to do:
searchResults.ConnectionStatus
searchResults.RequestInformation.Request
2)
client.Get<T>(s => s.Id(id));
3)
Given QueryContainer query
new SearchDescriptor<T>()
.From(from)
.Size(pageSize)
.Query(query); // this doesn't work anymore
4)
MatchQuery no longer accepts fuzziness as a double or the type parameter as a string, as it used to
5) QueryDescriptor seems to be gone (gasp)
6) client.Update is busted
var result = client.Update<CustomerProfile>(request => request
.Id(customer.CustomerId)
.Doc(customer)
.Refresh()
);
7) client.Get is busted in a similar way to client.Update
8) In Mappings the following setup doesn't work anymore
CreateIndexDescriptor cid = ...
cid.NumberOfReplicas(numReplicas)
.NumberOfShards(numShards)
.Settings(s => s
.Add("merge.policy.merge_factor", "10")
.Add("search.slowlog.threshold.fetch.warn", "1s")
)
.Analysis(a => a.TokenFilters etc etc
EDIT
9) Date Ranges:
startDate and endDate are DateTime type
var qd = new QueryContainerDescriptor<EsActivity>();
QueryContainer qc = qd.Range(r =>
r.Field("esactivity.timestamp")
.GreaterThanOrEquals(DateMath.Anchored(startDate))
.LessThanOrEquals(DateMath.Anchored(endDate))
);
.GreaterThanOrEquals expects a double parameter, but the documentation page shows it taking DateMath.Anchored(startDate)
10) Highlighting:
highlightFields: List<string>
Action<HighlightFieldDescriptor<T>> [] tmp = highlightFields.Select(field =>
new Action<HighlightFieldDescriptor<T>>(
highlighter => highlighter.Field(field)
)
).ToArray();
sd:SearchDescriptor<..>..
sd.Highlight(h => h
.PreTags(preTag)
.PostTags(postTag)
.OnFields(tmp)
);
I see I can replace OnFields(tmp) with .Fields(f=>f.OnAll()) but I'd still like to specify the fields myself in some way.
And why is there a HighlightQuery option available when we already apply highlighting on a query object? Now there are two query calls.
I've converted the highlighting above to
var tmp = highlightFields.Select(field =>
Tuple.Create<Field, IHighlightField>(
Field.Create(field),
new HighlightField()
)
).ToDictionary(x => x.Item1, x => x.Item2);
sd.Highlight(h => new Highlight
{
PreTags = new[] { preTag },
PostTags = new[] { postTag },
Fields = tmp
}
);
1) searchResults.ApiCall replaces searchResults.ConnectionStatus.
You can get the request bytes with searchResults.ApiCall.RequestBodyInBytes; you will also need to set .DisableDirectStreaming() on ConnectionSettings to capture the bytes, since by default the request is written directly to the request stream.
2) Use client.Get<T>(id) - The first parameter is a DocumentPath<T> type.
3) To pass a QueryContainer to a Fluent API descriptor, just return it from the Func<QueryContainerDescriptor<T>, QueryContainer>
new SearchDescriptor<T>()
.From(from)
.Size(pageSize)
.Query(_ => query);
4) In Elasticsearch 1.x, match query fuzziness given as a double mapped to a formula for calculating edit distance. Since this was removed in Elasticsearch 2.x, it is also gone from NEST. You can set the fuzziness edit distance with
client.Search<Document>(s => s
.Query(q => q
.Match(m => m
.Query("this is my query")
.Fuzziness(Fuzziness.EditDistance(3))
)
)
);
I'm not sure what you're referring to with type, but I think you mean the document type? If that's the case, document type takes a Types argument, to which a string implicitly converts:
client.Search<Document>(s => s
.Type("other-type")
.MatchAll()
);
5) QueryDescriptor<T> was renamed to QueryContainerDescriptor<T> to better reflect the fact that it's a descriptor for building a QueryContainer
6) The Update API works:
// specifying id
client.Update<Document>("document-id", u => u
.Doc(document)
.Refresh()
);
Since the first parameter is a DocumentPath<T>, the document instance (if you have it) can be passed as the first parameter
client.Update<Document>(document, u => u
.Doc(document)
.Refresh()
);
where index, type and id will be inferred from the document instance
7) See above
8) Create index settings have been revised to reflect the level at which the settings appear in the REST API JSON call:
client.CreateIndex("index-name", c => c
.Settings(s => s
.NumberOfShards(2)
.NumberOfReplicas(2)
.SlowLog(sl => sl
.Search(sls => sls
.Fetch(slsf => slsf
.ThresholdWarn("1s")
)
)
)
.Analysis(a => a) // etc...
)
);
You can also use strings for settings if you prefer, although the fluent API will ensure the correct setting values are sent; e.g. "search.slowlog.threshold.fetch.warn" is now "index.search.slowlog.threshold.fetch.warn":
client.CreateIndex("index-name", c => c
.Settings(s => s
.NumberOfShards(2)
.NumberOfReplicas(2)
.Setting("index.search.slowlog.threshold.fetch.warn", "1s")
.Analysis(a => a) // etc...
)
);
merge.policy.merge_factor is removed in Elasticsearch 2.0

Laravel multiple where clauses in query from given array

I hope the title describes my problem well enough.
I'm trying to build a geosearch function in Laravel. The queries on their own are correct. Now I'm trying to get all articles from my table whose zipcode matches one of the zipcodes returned by an earlier query. All the functions I use can be found here: Laravel 5 add results from query in a foreach to array). Now I want to perform a single query with multiple/dynamic where clauses (combined with OR).
The print_r($zipcodes) of my earlier query (which gets all zipcodes within a range of a given zipcode: $zipcodes = $this->getZipcodes($zipCoordinateId, $distance);) outputs:
Array
(
    [0] => stdClass Object
        (
            [zc_zip] => 13579
            [distance] => 0
        )
    [1] => stdClass Object
        (
            [zc_zip] => 12345
            [distance] => 2.228867736739
        )
    [2] => stdClass Object
        (
            [zc_zip] => 98765
            [distance] => 3.7191570094844
        )
)
So how should my query in Laravel look when I want to perform the following?
SELECT *
FROM articles
WHERE zipcode = '13579'
OR zipcode = '98765'
OR zipcode = '12345';
Thank you in advance,
quantatheist
UPDATE
With the solution from balintant this is working fine. Here is my code:
// grabs all zipcodes matching the distance
$zipcodes = $this->getZipcodes($zipCoordinateId, $distance);
foreach ($zipcodes AS $key=>$val)
{
$zipcodes[$key] = (array) $val;
}
$codes = array_column($zipcodes, 'zc_zip');
$articles = Article::whereIn('zipcode', $codes)->get();
return view('pages.intern.articles.index', compact('articles'));
You can use either the whereIn or the orWhere scope; the first better fits your example. You can also use array_column to extract the actual zip codes from the array above.
$query->whereIn('zip', [12,34,999])->get();
// > array
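For reference, the orWhere route mentioned above would look roughly like this (a sketch; whereIn remains the simpler choice here):
$query->where(function ($q) {
    foreach ([12, 34, 999] as $zip) {
        $q->orWhere('zip', $zip);
    }
})->get();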
Update:
When you want to use array_column to get specific sub-values of the array (like zc_zip), you must first cast its children to arrays. If it's a model, you can do that easily with toArray().
$zip_objects = [
(object) [ 'zc_zip' => 13579, 'distance' => 0 ],
(object) [ 'zc_zip' => 12345, 'distance' => 2.228867736739 ],
(object) [ 'zc_zip' => 98765, 'distance' => 3.7191570094844 ],
];
foreach ( $zip_objects AS $key=>$val )
{
$zip_objects[$key] = (array) $val;
}
$zip_codes = array_column($zip_objects, 'zc_zip');
var_dump($zip_codes);
// > array(3) {
// > [0]=>
// > int(13579)
// > [1]=>
// > int(12345)
// > [2]=>
// > int(98765)
// > }

Predicate in Map or Reduce (RavenDb)?

If I want to apply a predicate to a document before I aggregate in a Reduce function, do I want to place that predicate in the Map function, or in the Reduce function?
So for example putting the predicate in the Map function would look like this:
Map = orders => orders
.Where(order => order.Status != OrderStatus.Cancelled)
.Select(order => new
{
Email = order.Email,
Name = order.Firstname + ' ' + order.Lastname,
TotalSpent = order.Total,
NumberOfOrders = 1
});
Reduce = results => results
.GroupBy(result => result.Email)
.Select(customer => new
{
Email = customer.Key,
Name = customer.Select(c => c.Name).FirstOrDefault(),
TotalSpent = customer.Sum(c => c.TotalSpent),
NumberOfOrders = customer.Sum(c => c.NumberOfOrders)
});
And putting it in the Reduce function would look like this:
Map = orders => orders
.Select(order => new
{
Email = order.Email,
Name = order.Firstname + ' ' + order.Lastname,
TotalSpent = order.Total,
NumberOfOrders = 1,
Status = order.Status
});
Reduce = results => results
.Where(order => order.Status != OrderStatus.Cancelled)
.GroupBy(result => result.Email)
.Select(customer => new
{
Email = customer.Key,
Name = customer.Select(c => c.Name).FirstOrDefault(),
TotalSpent = customer.Sum(c => c.TotalSpent),
NumberOfOrders = customer.Sum(c => c.NumberOfOrders),
Status = (OrderStatus)0
});
The latter obviously makes more sense; however, it means that I have to add the Status property to the class of the Reduce result and then set it to some dummy value in the Reduce, as it doesn't actually mean anything there.
Only the first approach works for map/reduce. And no, the order will be ignored and you can't do something like FirstOrDefault in the result.
You need to think of map/reduce as two independent functions, where the reduce function can be run multiple times over the same input; that's why the format of the input must match the format of the output. This can also happen on different servers, in parallel and asynchronously, so new documents can be saved while the indexing is running.

Zend_Db fetchAll() to return values as keys, not as key => value

Is there a way to change the default functionality in Zend_Db fetchall() method, so that it doesn't return this:
[0] => 100000055944231
[1] => 100000089064543
[2] => 100000145893011
[3] => 100000160760965
but this:
[100000055944231]
[100000089064543]
[100000145893011]
[100000160760965]
Although your question is actually flawed (as noted by BartekR), I suppose you're trying to receive a simple array instead of a multidimensional one.
You could do:
$results = $this->db->fetchAll($select);
$values = array_map(function($row) { return $row['column']; }, $results);
This will turn:
array(
array('column' => 'test'),
array('column' => 'test2'),
array('column' => 'test3'),
);
into
array(
'test',
'test2',
'test3'
);
(Note: my example only works in PHP 5.3+. If you're working with PHP 5.2, you can define the function and use it by its name with array_map, e.g. array_map('methodName', $results).)
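For example, a PHP 5.2-compatible version might look like this (the function name is just illustrative):
function extractColumn($row) {
    return $row['column'];
}
$values = array_map('extractColumn', $results);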
I'm looking for a similar solution: I'm trying to use a field returned by fetchAll($select) as the array key, without looping through the entire result set.
So:
$results = $this->db->fetchAll($select, <FIELDNAME_TO_MAKE_KEY_OF_RESULTS_ARRAY>);
$results[<my fieldname>]->dbfield2;
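One way to get that effect without an explicit loop (a sketch, assuming the default associative-array fetch mode, PHP 5.5+ for array_column's third argument, and an 'id' column to key the rows on) would be something like this; Zend_Db's fetchAssoc(), which keys rows by the first column of the result set, may also be worth a look:
$results = $this->db->fetchAll($select);
$byId = array_column($results, null, 'id'); // re-key the rows by their 'id' column
// $byId[$someId]['dbfield2'] is now directly addressable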
Further to Pieter's answer, I'd add the case where the rows are themselves arrays rather than just scalars; it's possible to nest the results by as many fields as the query contains.
e.g. Here with two levels of nesting, respectively on field1 and field2.
$complex_results = array_map(function($row) { return array($row['field1'] => array($row['field2'] => $row)); }, $results);
As always, each row contains all fields, but $complex_results is indexed by field1, then field2 only.
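Note that array_map keeps the original numeric keys at the top level, so the field1/field2 keys sit inside each entry. If you want the outer array itself keyed by field1 and then field2, a plain loop (a sketch) does it:
$nested = array();
foreach ($results as $row) {
    $nested[$row['field1']][$row['field2']] = $row;
}
// $nested[<field1 value>][<field2 value>] now holds the full row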

NHibernate 3 - type safe way to select a distinct list of values

I am trying to select a distinct list of values from a table whilst ordering on another column.
The only thing working for me so far uses magic strings and an object array. Any better (type-safe) way?
var projectionList = Projections.ProjectionList();
projectionList.Add(Projections.Property("FolderName"));
projectionList.Add(Projections.Property("FolderOrder"));
var list = Session.QueryOver<T>()
.Where(d => d.Company.Id == SharePointContextHelper.Current.CurrentCompanyId)
.OrderBy(t => t.FolderOrder).Asc
.Select(Projections.Distinct(projectionList))
.List<object[]>()
.ToList();
return list.Select(l => new Folder((string)l[0])).ToList();
By the way, doing it with LINQ won't work: you must select FolderOrder, otherwise you'll get a SQL error ("ORDER BY items must appear in the select list if SELECT DISTINCT is specified"), and then doing that gives a known error, "Expression type 'NhDistinctExpression' is not supported by this SelectClauseVisitor", regarding the use of anonymous types with Distinct:
var q = Session.Query<T>()
.Where(d => d.Company.Id == SharePointContextHelper.Current.CurrentCompanyId)
.OrderBy(d => d.FolderOrder)
.Select(d => new {d.FolderName, d.FolderOrder})
.Distinct();
return q.ToList().Select(f => new Folder(f));
It all seems like a lot of hoops and complexity to do some SQL basics...
To resolve the type-safety issue, the syntax is:
var projectionList = Projections.ProjectionList();
projectionList.Add(Projections.Property<T>(d => d.FolderName));
projectionList.Add(Projections.Property<T>(d => d.FolderOrder));
The object[] thing is unavoidable, unless you define a special class/struct to hold just FolderName and FolderOrder.
See this great introduction to QueryOver for type safety, which is most certainly supported.
Best of luck.