Do-while statement not progressing to second result set in Laravel using PDO

I have a stored procedure that returns multiple result sets. The results contain stats for a player; each result set represents a year in which we have stats for him.
Stored procedure output from the MySQL CLI:
+----------+-----------+---------+------+
| compperc | passyards | passtds | ints |
+----------+-----------+---------+------+
|     61.4 |       319 |       2 |    1 |
|     85.7 |        76 |       0 |    0 |
|     20.0 |         9 |       0 |    1 |
|     57.1 |        30 |       0 |    0 |
|    100.0 |        59 |       1 |    0 |
|     66.7 |        21 |       0 |    0 |
|     50.0 |        86 |       1 |    0 |
|     60.0 |        38 |       0 |    0 |
+----------+-----------+---------+------+
8 rows in set (0.00 sec)
+----------+-----------+---------+------+
| compperc | passyards | passtds | ints |
+----------+-----------+---------+------+
|     80.0 |        40 |       0 |    0 |
|      0.0 |         0 |       0 |    0 |
|    100.0 |        40 |       0 |    0 |
+----------+-----------+---------+------+
3 rows in set (0.00 sec)
I'm using Laravel 4.1.2 and call the procedure in my Player Model with a raw PDO prepared statement:
$statDB = DB::connection('mysql')->getPdo();
$statDB->setAttribute(PDO::ATTR_EMULATE_PREPARES, true);
$results = $statDB->prepare("CALL fullstats(:id);");
$results->execute(array(':id'=> $id));
The previous block of code pulls in the proper result sets. Stepping through them manually works, e.g. repeating
if ($results->nextRowset()) { $statArr[] = $results->fetchAll(PDO::FETCH_ASSOC); }
but when I try to iterate with a do-while statement it never gets to the second result set.
$statArr = array();
do {
    $statArr[] = $results->fetchAll(PDO::FETCH_ASSOC);
} while ($results->nextRowset());
I can add dd($statArr); immediately after the initial $statArr[] assignment and it returns the first year's set of stats. I can also add dd($results->nextRowset()); after the assignment and it returns true, so it theoretically should move through the additional result sets. If I let the statement execute I get a generic error from Laravel: PDOException SQLSTATE[HY000]: General error. It provides no additional details as to what's going wrong. I've tried the same do-while statement in a raw PHP file (on a different domain but the same server) using PDO and it works without a problem.
Is there some configuration option that I need to set to get this to work? I've been beating my head against the problem for an entire day and can't figure out why this isn't working. Any help is greatly appreciated.
Update:
The PDOException SQLSTATE[HY000]: General error is coming from the second $statArr[] = $results->fetchAll();, so the loop does enter the second result set; it just won't fetch the data. I also removed PDO::FETCH_ASSOC from fetchAll(), as I've read it doesn't work properly, but the issue persists.

It turns out this is a bug in the PDO MySQL driver: nextRowset() doesn't return false when it runs out of rowsets, so the fetch tries to access data that isn't there. That causes the nondescript general error on the fetchAll() call and in turn breaks the script.
The PHP bug report:
https://bugs.php.net/bug.php?id=62820
Since the nextRowset() method wasn't working properly, I added a query that returns the number of result sets I should expect, plus a counter in the loop with an if statement that breaks out of the loop once the expected count is reached.
Here's the working function:
public function allstats($id) {
    $statDB = DB::connection('mysql')->getPdo();
    $statDB->setAttribute(PDO::ATTR_EMULATE_PREPARES, true);

    // How many result sets (one per year) should the procedure return?
    $years = $statDB->prepare('select count(distinct year) from pass_stats where player_id = :id');
    $years->execute(array(':id' => $id));
    $years = (int) $years->fetchColumn();

    $results = $statDB->prepare("CALL fullstats(:id);");
    $results->execute(array(':id' => $id));

    $statArr = array();
    $i = 1; // 1-based to mirror the SQL count
    do {
        // Stop once all expected rowsets have been fetched; the buggy
        // nextRowset() keeps returning true past the end.
        if ($i > $years) { break; }

        foreach ($results->fetchAll(PDO::FETCH_ASSOC) as $g) {
            $statArr[$g['year']][] = $g;
        }
        $i++;
    } while ($results->nextRowset()); // Check for next rowset

    return $statArr;
}
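An alternative that avoids the extra count query, sketched on the assumption that the phantom rowset after the last real one reports zero columns (PDOStatement::columnCount() returns 0 when a statement has no result set):

$statArr = array();
do {
    $statArr[] = $results->fetchAll(PDO::FETCH_ASSOC);
    // The buggy nextRowset() can return true past the last real rowset,
    // so also require that the new rowset actually has columns before fetching.
} while ($results->nextRowset() && $results->columnCount() > 0);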

Related

How to eval a string containing column names?

As I cannot attach conditional formatting to a Table, I need a generic function to check whether a set of records (or all records) contains errors, and to show those errors in forms and/or reports.
To achieve this goal in the 'standard' way, I would have to define the rule for a field of a table every time I use that field in a control or report, which means repeating the same thing an annoying number of times, not to mention introducing errors and ending up in a maintenance nightmare.
So, my idea is to define all the checks for all the tables and their rows in a CheckError table, like the following fragment related to the table 'Persone':
TableName | FieldName       | TestNumber | TestCode                                                                     | TestMessage                                                         | ErrorType
Persone   | CAP             | 4          | len([CAP]) = 0 or isnull([CAP])                                              | CAP missing                                                         | warning
Persone   | Codice Fiscale  | 1          | len([Codice Fiscale]) < 16                                                   | tax code null or missing                                            | error
Persone   | Data di nascita | 2          | (now() - [Data di nascita]) < 18 * 365                                       | minor (under 18)                                                    | info
Persone   | mail            | 5          | len([mail]) = 0 or isnull([mail])                                            | e-mail missing                                                      | warning
Persone   | mail            | 6          | (len([mail]) = 0 or isnull([mail])) and [modalità ritiro referti] = "e-mail" | delivery of reports by e-mail requested, but e-mail address missing | error
Persone   | Via             | 3          | len([Via]) = 0 or isnull([Via])                                              | address missing                                                     | warning
Now, in each form or report that uses the table Persone, I want to set an 'onload' property to a function:
' To validate all fields in all rows and set the appropriate bg and fg color
Private Sub Form_Open(Cancel As Integer)
    Call validazione.validazione(Form, "Persone", 0)
End Sub

' To validate all fields in the row identified by ID and set the appropriate bg and fg color
Private Sub Codice_Fiscale_LostFocus()
    Call validazione.validazione(Form, "Persone", ID)
End Sub
So the function validazione, at a certain point, has exactly one row of the table Persone and the set of expressions described in the [TestCode] column above.
Now, I need to logically evaluate each TestCode string against the table row, to obtain a true or a false.
If true, I'll set the fg and bg color of the field as normal;
if false, I'll set the fg and bg color as per error, info or warning, as defined by the [ErrorType] column above.
All the above is easy, ready, and running, except for one step:
How can I evaluate the test string against the table row, to obtain a result?
Thank you
Paolo

Is there an Eloquent way to get the sum of a related table and sort by it?

Example:
table Users
ID | Username | sex
1  | Tony     | m
2  | Andy     | m
3  | Lucy     | f
table Scores
ID | user_id | score
1  | 2       | 4
2  | 1       | 3
3  | 1       | 4
4  | 2       | 3
5  | 1       | 1
6  | 3       | 3
7  | 3       | 2
8  | 2       | 3
Expected result (sorted by score_sum, descending):
ID | Username | sex | score_sum
2  | Andy     | m   | 10
1  | Tony     | m   | 8
3  | Lucy     | f   | 5
The code I use so far:
User model:
class User extends Authenticatable
{
    // ...

    public function scores()
    {
        return $this->hasMany('App\Score');
    }

    // ...
}
Score model:
class Score extends Model
{
    // I put nothing here
}
Code in controller:
$users = User::all();

foreach ($users as $user) {
    $user->score_sum = $user->scores()->sum('score');
}

$users = collect($users)->sortByDesc('score_sum');

return view('homepage', [
    'users' => $users->values()->all()
]);
Hope my example above makes sense. My code does work, but I thought there must be a more Eloquent and elegant way to do this without the foreach?
There are 2 options for doing this in an Eloquent way.
Option 1
The first option is to add score_sum as an attribute that is always included when querying the User model. This is only a good idea if you will be using score_sum the majority of the time when querying the users table. If you only need score_sum for one specific view or for specific business logic, then I would use the second option below.
To do this, add an accessor to the User model; the documentation is here: https://laravel.com/docs/5.6/eloquent-mutators#defining-an-accessor
Here is an example for your use case:
/app/User.php
class User extends Model
{
    // ...

    public function getScoreSumAttribute($value)
    {
        return $this->scores()->sum('score');
    }
}
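A caveat worth adding (from Laravel's serialization docs, not part of the original answer): the accessor alone only covers property access such as $user->score_sum. If the computed value should also appear when the model is converted to an array or JSON, it has to be listed in $appends as well:

class User extends Model
{
    // Expose the computed attribute in toArray()/JSON output too
    protected $appends = ['score_sum'];

    public function getScoreSumAttribute()
    {
        return $this->scores()->sum('score');
    }
}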
Option 2
If you just want to do this for a single use case, then the easiest solution is just to use the sum() function in the eventual foreach loop you will be using (most likely in the view).
For example in a view:
@foreach($users as $user)
    <div>Username: {{ $user->username }}</div>
    <div>Sex: {{ $user->sex }}</div>
    <div>Score Sum: {{ $user->scores()->sum('score') }}</div>
@endforeach
Additionally, if you do not want to do this in a foreach loop, you can compute the score_sum with a raw subquery in the Eloquent call in your controller. Here is an example of how that can be done:
$users = User::select('users.*', DB::raw('(SELECT SUM(score) FROM scores WHERE scores.user_id = users.id) AS score_sum'))->get();
I did not have a quick environment to test this, so the subquery may need tweaking.
Hope this helps!
This is as nice as it gets:
User::selectRaw('*, (SELECT SUM(score) FROM scores WHERE user_id = users.id) as score_sum')
    ->orderBy('score_sum', 'DESC')
    ->get();
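If a correlated subquery per user is a concern, a left join with GROUP BY is an equivalent alternative. This is a sketch assuming MySQL 5.7+, where grouping by the primary key lets you select the remaining users columns; COALESCE keeps users with no scores at 0:

$users = User::query()
    ->leftJoin('scores', 'scores.user_id', '=', 'users.id')
    ->select('users.*', DB::raw('COALESCE(SUM(scores.score), 0) AS score_sum'))
    ->groupBy('users.id')
    ->orderByDesc('score_sum')
    ->get();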

AQL cannot find row by PK. Row is there when selecting

I have a namespace and a set in aerospike with information inside.
After performing a SELECT * FROM mytecache.search_results I get around 600 rows like this:
"bdee9a37e3217f28a057b5e0ebbef43f_Sabre_13" | "a:11:{s:5:"price";a:14:{s:5:"total";d:2947.5999999999999;s:11:"pricePerPax";d:1473.8;s:8:"totalflt";d:2947.5999999999999;s:9:"baseprice";d:1901.0799999999999;s:3:"tax";d:986.51999999999998;s:10:"servicefee";i:60;s:11:"markupprice";d:1961.0799999999999;s | "flight_bdee9a37e3217f28a057b5e0ebbef43f" | 147380 | LIST('["LH", "LH", "LH", "LH"]'....
As you know, the first column is the primary key, so I try to select the row (which just appeared in SELECT) with:
aql> SELECT * FROM mytecache.search_results WHERE PK = "bdee9a37e3217f28a057b5e0ebbef43f_Sabre_13"
and I get:
Error: (2) AEROSPIKE_ERR_RECORD_NOT_FOUND
Explain returns this:
aql> EXPLAIN SELECT * FROM mytecache.search_results WHERE PK="bdee9a37e3217f28a057b5e0ebbef43f_Sabre_13"
+------------------+--------------------------------------------+-------------+-----------+--------+---------+----------+----------------------------+-------------------+-------------------------+---------+
| SET | DIGEST | NAMESPACE | PARTITION | STATUS | UDF | KEY_TYPE | POLICY_REPLICA | NODE | POLICY_KEY | TIMEOUT |
+------------------+--------------------------------------------+-------------+-----------+--------+---------+----------+----------------------------+-------------------+-------------------------+---------+
| "search_results" | "6F62BE5323F0A51EF7DFDD5060A02E62CD2453F0" | "mytecache" | 623 | 2 | "FALSE" | "STRING" | "AS_POLICY_REPLICA_MASTER" | "BB9B25B63000056" | "AS_POLICY_KEY_DEFAULT" | 1000 |
+------------------+--------------------------------------------+-------------+-----------+--------+---------+----------+----------------------------+-------------------+-------------------------+---------+
1 row in set (0.002 secs)
asinfo outputs:
1 : node
BB9B25B63000056
2 : statistics
cluster_size=1;cluster_key=883272F81547;cluster_integrity=true;cluster_is_member=true;uptime=1146;system_free_mem_pct=90;system_swapping=false;heap_allocated_kbytes=6525778;heap_active_kbytes=6529792;heap_mapped_kbytes=6619136;heap_efficiency_pct=99;heap_site_count=14;objects=6191;tombstones=0;tsvc_queue=0;info_queue=0;delete_queue=0;rw_in_progress=0;proxy_in_progress=0;tree_gc_queue=0;client_connections=19;heartbeat_connections=0;fabric_connections=0;heartbeat_received_self=7637;heartbeat_received_foreign=0;reaped_fds=47;info_complete=12326;proxy_retry=0;demarshal_error=0;early_tsvc_client_error=0;early_tsvc_batch_sub_error=0;early_tsvc_udf_sub_error=0;batch_index_initiate=0;batch_index_queue=0:0,0:0,0:0,0:0;batch_index_complete=0;batch_index_error=0;batch_index_timeout=0;batch_index_unused_buffers=0;batch_index_huge_buffers=0;batch_index_created_buffers=0;batch_index_destroyed_buffers=0;batch_initiate=0;batch_queue=0;batch_error=0;batch_timeout=0;scans_active=0;query_short_running=0;query_long_running=0;sindex_ucgarbage_found=0;sindex_gc_locktimedout=0;sindex_gc_list_creation_time=39;sindex_gc_list_deletion_time=0;sindex_gc_objects_validated=11734;sindex_gc_garbage_found=0;sindex_gc_garbage_cleaned=0;paxos_principal=BB9B25B63000056;migrate_allowed=true;migrate_partitions_remaining=0;fabric_bulk_send_rate=0;fabric_bulk_recv_rate=0;fabric_ctrl_send_rate=0;fabric_ctrl_recv_rate=0;fabric_meta_send_rate=0;fabric_meta_recv_rate=0;fabric_rw_send_rate=0;fabric_rw_recv_rate=0
3 : features
peers;cdt-list;cdt-map;pipelining;geo;float;batch-index;replicas-all;replicas-master;replicas-prole;udf
4 : cluster-generation
1
5 : partition-generation
0
6 : build_time
Sat Nov 4 02:19:49 UTC 2017
7 : edition
Aerospike Community Edition
8 : version
Aerospike Community Edition build 3.15.0.2
9 : build
3.15.0.2
10 : services
11 : services-alumni
12 : build_os
debian8
Which seems wrong. Why is the query not returning the result when it is clearly there?
"As you know, the first column is the primary key" -- no, the first column is your first bin. The primary key is whatever you used to write the record; Aerospike does not store your primary key, only its digest.
You are quite likely using the wrong PK for the record whose first bin is "bdee9a37e3217f28a057b5e0ebbef43f_Sabre_13".
https://www.aerospike.com/docs/tools/aql/data_management.html
“STATUS” is the status of the operation. It will be the code as returned by the client for the record. E.g: 0 is AEROSPIKE_OK and 2 is AEROSPIKE_ERR_RECORD_NOT_FOUND.
So, even your explain execution in AQL is telling you in the STATUS that the record is not found.
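To illustrate that with the Aerospike PHP client (a sketch using the documented client API; the host and bin values are made up): the key you pass when writing is what a PK lookup hashes against, regardless of what any bin contains, and the key itself is only stored alongside the record if you write with the key-send policy:

$db = new Aerospike(['hosts' => [['addr' => '127.0.0.1', 'port' => 3000]]]);

// The PK is whatever is passed here -- it is hashed into the record's digest.
$key = $db->initKey('mytecache', 'search_results', 'bdee9a37e3217f28a057b5e0ebbef43f_Sabre_13');

// Optionally store the key with the record (digest-only is the default).
$db->put($key, ['payload' => 'serialized data'], 0,
         [Aerospike::OPT_POLICY_KEY => Aerospike::POLICY_KEY_SEND]);

// Succeeds only if this exact PK was used to write the record.
$status = $db->get($key, $record);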

ABL Progress 4GL: FOR EACH with COUNT in output stream

Progress-Procedure-Editor:
DEFINE STREAM myStream.

OUTPUT STREAM myStream TO 'C:\Temp\BelegAusgangSchnittstelle.txt'.

FOR EACH E_BelegAusgang
    WHERE E_BelegAusgang.Firma = '000'
      AND E_BelegAusgang.Schnittstelle = '$Standard'
    NO-LOCK:

    PUT STREAM myStream UNFORMATTED
        STRING(E_BelegAusgang.Firma) '|'
        STRING(E_BelegAusgang.BelegNummer) '|'
        STRING(E_BelegAusgang.Schnittstelle)
        SKIP.
END.
I get this (excerpt):
Firma | BelegNr | Schnittstelle
000 | 3 | $Standard
000 | 3 | $Standard
000 | 3 | $Standard
000 | 3 | $Standard
000 | 3 | $Standard
000 | 8 | $Standard
000 | 8 | $Standard
What I need is to COUNT the rows per BelegNr. So currently I import the data from the TXT file into SQL Server.
On Server my query is:
SELECT [BelegNr]
,COUNT(*) AS [Anzahl]
FROM [TestDB].[dbo].[Beleg_Ausgang]
GROUP BY [BelegNr]
ORDER BY [Anzahl]
With that query I got (excerpt):
BelegNr | Anzahl
3       | 5
8       | 2
Is there a way to put the COUNT directly into the Progress-Code? I mean, I want my result directly from the Progress-Procedure-Editor.
In ABL you use BREAK BY instead of GROUP BY. One limitation is that BREAK BY both groups AND sorts.
You could for instance have another "FOR EACH" for this:
DEFINE VARIABLE iCount AS INTEGER NO-UNDO.

FOR EACH E_BelegAusgang NO-LOCK
    WHERE E_BelegAusgang.Firma = '000'
      AND E_BelegAusgang.Schnittstelle = '$Standard'
    BREAK BY E_BelegAusgang.BelegNummer:

    iCount = iCount + 1.

    IF LAST-OF(E_BelegAusgang.BelegNummer) THEN DO:
        DISPLAY E_BelegAusgang.BelegNummer iCount.
        iCount = 0.
    END.
END.
You could also incorporate that code into the export, but note: that will change the order of the file rows. Maybe that's a problem for you, maybe not!

How to write a select query or server-side function that will generate a neat time-flow graph from many data points?

NOTE: I am using a graph database (OrientDB to be specific). This gives me the freedom to write a server-side function in JavaScript or Groovy rather than limit myself to SQL for this issue.
NOTE 2: Since this is a graph database, the arrows below simply describe the flow of data; I do not literally need the arrows to be returned in the query. The arrows represent relationships.
I have data that is represented in a time-flow manner; i.e. EventC occurs after EventB which occurs after EventA, etc. This data is coming from multiple sources, so it is not completely linear. It needs to be congregated together, which is where I'm having the issue.
Currently the data looks something like this:
# | event | next
--------------------------
12:0 | EventA | 12:1
12:1 | EventB | 12:2
12:2 | EventC |
12:3 | EventA | 12:4
12:4 | EventD |
Where "next" is the out() edge to the event that comes next in the time-flow. On a graph this comes out to look like:
EventA-->EventB-->EventC
EventA-->EventD
Since this data needs to be congregated together, I need to merge duplicate events but preserve their edges. In other words, I need a select query that will result in:
        -->EventB-->EventC
EventA--|
        -->EventD
In this example, since EventB and EventD both occurred after EventA (just at different times), the select query will show two branches off EventA as opposed to two separate time-flows.
EDIT #2
If an additional set of data were to be added to the data above, with EventB->EventE, the resulting data/graph would look like:
# | event | next
--------------------------
12:0 | EventA | 12:1
12:1 | EventB | 12:2
12:2 | EventC |
12:3 | EventA | 12:4
12:4 | EventD |
12:5 | EventB | 12:6
12:6 | EventE |
EventA-->EventB-->EventC
EventA-->EventD
EventB-->EventE
I need a query to produce a tree like:
                    -->EventC
        -->EventB--|
        |           -->EventE
EventA--|
        -->EventD
EDIT #3 and #4
Here is the data with edges shown as opposed to the "next" column above. I also added a couple additional columns here to hopefully clear up any confusion about the data:
# | event | ip_address | timestamp | in | out |
----------------------------------------------------------------------------
12:0 | EventA | 123.156.189.18 | 2015-04-17 12:48:01 | | 13:0 |
12:1 | EventB | 123.156.189.18 | 2015-04-17 12:48:32 | 13:0 | 13:1 |
12:2 | EventC | 123.156.189.18 | 2015-04-17 12:48:49 | 13:1 | |
12:3 | EventA | 103.145.187.22 | 2015-04-17 14:03:08 | | 13:2 |
12:4 | EventD | 103.145.187.22 | 2015-04-17 14:05:23 | 13:2 | |
12:5 | EventB | 96.109.199.184 | 2015-04-17 21:53:00 | | 13:3 |
12:6 | EventE | 96.109.199.184 | 2015-04-17 21:53:07 | 13:3 | |
The data is saved like this to preserve each individual event and the flow of a session (labeled by the ip address).
TL;DR
Got lots of events, some duplicates, and need them all organized into one neat time-flow graph.
Holy cow.
After wrestling with this for over a week, I think I FINALLY have a working function. It isn't optimized for performance (oh, the loops!), but it gets the job done for the time being while I work on performance. The resulting OrientDB server-side function (written in JavaScript):
// Clear previous runs
db.command("truncate class tmp_Then");
db.command("truncate class tmp_Events");

// Get all distinct events
var distinctEvents = db.query("select from Events group by event");

// Send 404 if null, otherwise proceed
if (distinctEvents == null) {
    response.send(404, "Events not found", "text/plain", "Error: events not found");
} else {
    var edges = [];

    // Loop through all distinct events
    distinctEvents.forEach(function(distinctEvent) {
        var rid = distinctEvent.field("#rid");
        var eventType = distinctEvent.field("event");

        // The main query that finds all *direct* descendants of the distinct event
        var result = db.query("select from (traverse * from (select from Events where event = ?) where $depth <= 2) where #class = 'Events' and $depth > 1 and #rid in (select from Events group by event)", [eventType]);

        // Save the distinct event in a temp table so temp edges can be created
        db.command("create vertex tmp_Events set rid = ?, event = ?", [rid, eventType]);
        edges.push(result);
    });

    // The edges array defines which edges should exist for a given event
    edges.forEach(function(edge, index) {
        edge.forEach(function(e) {
            // Create the temp edge that corresponds to its distinct event
            db.command("create edge tmp_Then from (select from tmp_Events where rid = " + distinctEvents[index].field("#rid") + ") to (select from tmp_Events where rid = " + e.field("#rid") + ")");
        });
    });

    var result = db.query("select from tmp_Events");
    return result;
}
Takeaways:
Temp tables appeared to be necessary. I tried to do this without temp tables (classes), but I'm not sure it could be done. I needed to mock edges that didn't exist in the raw data.
Traverse was very helpful in writing the main query. Traversing through an event to find its direct, unique descendants was fairly simple.
Having the ability to write stored procs in Javascript is freaking awesome. This would have been a nightmare in SQL.
omfg loops. I plan to optimize this and continue to make it better so hopefully other people can find some use for it.