Get response using PDO? - sql

I am trying to get the actual response (the data) from my database using prepared statements:
$stmt = $dbconn->prepare("SELECT user_videos FROM public.account_recover_users WHERE user_mail = :email");
$stmt->execute(array(':email' => $email));
I know that $stmt->execute(array(':email' => $email)); returns a boolean, not the actual data. But how do I get the data from my database into an array? I need to return that data later: the script is accessed via a GET request, and I will need to do exit("{'data':$data_from_db}");, so I don't want to fetch each row using foreach($stmt as $row). I just want to pass it all on as it is.

$results = array();
while ($row = $stmt->fetch(PDO::FETCH_ASSOC)) {
    $results[] = $row;
}
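If you would rather grab the whole result set in one go, here is a minimal sketch reusing the variables from the question; PDO's fetchAll() returns all rows at once, and json_encode() builds valid JSON instead of concatenating the string by hand:
$stmt = $dbconn->prepare("SELECT user_videos FROM public.account_recover_users WHERE user_mail = :email");
$stmt->execute(array(':email' => $email));

// fetchAll() returns every remaining row as an associative array
$results = $stmt->fetchAll(PDO::FETCH_ASSOC);

// let json_encode() produce the JSON response instead of building it manually
exit(json_encode(array('data' => $results)));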

Related

What to use as my default to achieve the correct Laravel select where?

Please, I'm writing a function and I need guidance:
function ($file_name = '*') {
    File::where('filename', $file_name)->get();
}
I want the function to pull all the filename rows in the database table when the file name is not defined, and when it is defined it should use the value to pull the right data.
My question is: what should I use as the default for the file name in the function signature to make this work?
Even raw SQL would be appreciated.
First of all, you are missing your function's name:
function getFile($file_name = null)
{
    $q = File::query();

    if ($file_name == null) {
        // Do nothing; this will get all files at the end, since no where clause has been applied.
    } else {
        $q = File::where('filename', $file_name);
    }

    $result = $q->get();
    return $result;
}
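As a side note, the same conditional filter can be written more compactly with the query builder's when() helper (available since Laravel 5.2). This is only a sketch reusing the getFile name from the answer above:
function getFile($file_name = null)
{
    // when() applies the callback only if $file_name is truthy;
    // otherwise the query stays unfiltered and get() returns all files.
    return File::when($file_name, function ($query) use ($file_name) {
        return $query->where('filename', $file_name);
    })->get();
}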

Split query with Doctrine correctly

I have a Doctrine query which, for example, looks like this:
$object = $this->createQueryBuilder('object')
->leftJoin('object.element', 'element')->addSelect('element')
->leftJoin('object.element2', 'element2')->addSelect('element2')
->leftJoin('object.many', 'many')->addSelect('many')
->leftJoin('many.element3', 'element3')->addSelect('element3')
->leftJoin('many.element4', 'element4')->addSelect('element4')
->where('object.id = 1')
->getQuery()
->getSingleResult();
The real query has more joins, needs a lot of memory, and the DB performance is not good. In native SQL I would split it up and load the parts separately. What I want to do is load the object and some basic joined data with the first query. That would look like this:
$object = $this->createQueryBuilder('object')
->leftJoin('object.element', 'element')->addSelect('element')
->leftJoin('object.element2', 'element2')->addSelect('element2')
->where('object.id = 1')
->getQuery()
->getSingleResult();
Now I also want to load many, many.element3 and many.element4 in a separate query. When I just use Doctrine's lazy loading feature it creates one SQL query for each item, but I want this done in a single query.
I know it would be possible to set EAGER on that relation, but I only want eager loading temporarily for this query, not every time somebody joins the object and accesses it.
I solved my problem the following way:
In my entity I added a new set function for the collection.
/**
 * Set manies.
 */
public function setManies($manies)
{
    // Clear and re-add instead of replacing the collection, so the existing
    // ArrayCollection instance is kept and Doctrine's unit of work does not
    // need to query the database again.
    $this->manies->clear();
    foreach ($manies as $many) {
        $this->manies->add($many);
    }

    return $this;
}
In the repository my code looks like this:
$object = $this->createQueryBuilder('object')
->leftJoin('object.element', 'element')->addSelect('element')
->leftJoin('object.element2', 'element2')->addSelect('element2')
->where('object.id = 1')
->getQuery()
->getSingleResult();
$object->setManies($this->getEntityManager()->getRepository(Many::class)->loadByObjectId(
$object->getId()
));
return $object;
Here is the function in the Many repository:
// loadByObjectId in the Many repository
public function loadByObjectId($objectId)
{
    return $this->createQueryBuilder('many')
        ->leftJoin('many.element3', 'element3')->addSelect('element3')
        ->leftJoin('many.element4', 'element4')->addSelect('element4')
        ->where('many.object = :objectId')
        ->setParameter('objectId', $objectId)
        ->getQuery()
        ->getResult();
}
With this, the SQL is successfully split into multiple requests. In my case, instead of 60000 rows in one request, I only had 40 rows across four requests, which makes the hydration of the object a lot faster.

How to save the BigQuery result as a table from a PHP script?

I query a BigQuery table from a PHP script and get the result. I want to save the result in a new table for future use.
The option @Pentium10 mentioned works, but there is another way to do it when you already have query results that you want to save.
All queries in BigQuery generate output tables. If you don't specify your own destination table, the output table will be an automatically generated table that only sticks around for 24 hours. You can, however, copy that temporary table to a new destination table and it will stick around for as long as you like.
To get the destination table, you need to look up the job. If you use the jobs.query() API, the job ID is in the jobReference field of the response (see here). To look up the job, you can use jobs.get() with that job ID, and you'll get back the destination table information (datasetId and tableId) from configuration.query.destinationTable (the job object is described here).
You can copy that destination table to your own table by using the jobs.insert() call with a copy configuration section filled out. Info on copying a table is here.
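Roughly, that flow could look like the sketch below using the google/cloud-bigquery PHP client instead of the raw jobs.get()/jobs.insert() API; the job ID, project, dataset, and table names are placeholders, and the exact method names may differ between client library versions:
use Google\Cloud\BigQuery\BigQueryClient;

$bigQuery = new BigQueryClient(['projectId' => 'my-project-id']);

// Look up the finished query job and read its temporary destination table
// from configuration.query.destinationTable.
$job = $bigQuery->job('my-query-job-id');
$info = $job->info();
$dest = $info['configuration']['query']['destinationTable'];
$temporaryTable = $bigQuery->dataset($dest['datasetId'])->table($dest['tableId']);

// Copy the temporary table to a permanent table of your own.
$permanentTable = $bigQuery->dataset('my_dataset')->table('saved_results');
$copyConfig = $temporaryTable->copy($permanentTable);
$bigQuery->runJob($copyConfig);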
Passing parameters to the BQ call is tricky.
This should work with more recent cloud library versions:
public function runBigQueryJobIntoTable($query, $project, $dataset, $table)
{
    $bigQuery = new BigQueryClient(['projectId' => $project]);
    $destinationTable = $bigQuery->dataset($dataset)->table($table);
    $queryJobConfig = $bigQuery->query($query)->destinationTable($destinationTable);
    $job = $bigQuery->startQuery($queryJobConfig);
    $queryResults = $job->queryResults();
    while (!$queryResults->isComplete()) {
        sleep(1);
        $queryResults->reload();
    }
    return true;
}
For old versions:
public function runBigQueryJobIntoTable($query, $project, $dataset, $table)
{
    $bigQuery = new BigQueryClient(['projectId' => $project]);
    $jobConfig = [
        'destinationTable' => [
            'projectId' => $project,
            'datasetId' => $dataset,
            'tableId' => $table
        ]
    ];
    $job = $bigQuery->runQueryAsJob($query, ['jobConfig' => $jobConfig]);
    $queryResults = $job->queryResults();
    while (!$queryResults->isComplete()) {
        sleep(1);
        $queryResults->reload();
    }
    return true;
}
You need to set the destinationTable in the call; the results will be written to the table you set.
https://developers.google.com/bigquery/querying-data#asyncqueries
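For completeness, a hypothetical call to the helper above might look like this (the query, project, dataset, and table names are made up):
// hypothetical usage of runBigQueryJobIntoTable() defined above
$saved = $this->runBigQueryJobIntoTable(
    'SELECT name, total FROM my_dataset.source_table',
    'my-project-id',
    'my_dataset',
    'saved_results'
);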

"update" query - error invalid input synatx for integer: "{39}" - postgresql

I'm using Node.js 0.10.12 to run queries against PostgreSQL 9.1.
I get the error invalid input syntax for integer: "{39}" (39 is an example number) when I try to perform an update query.
I cannot see what is going wrong. Any advice?
Here is my code (in snippets). The front-end:
// this is global
var gid = 0;

// set up the websocket used for searching - works fine
var sd = new WebSocket("ws://localhost:0000");
sd.onmessage = function (evt) {
    // get the data and parse it (it contains more than one variable), then store the id in gid
    var received_msg = evt.data;
    var packet = JSON.parse(received_msg);
    var tid = packet['tid'];
    gid = tid;
};

// when the user clicks the button, open a websocket to send the id and the other data for the update query
var sa = new WebSocket("ws://localhost:0000");
sa.onopen = function () {
    sa.send(JSON.stringify({
        command: 'typesave',
        indi: gid,
        name: document.getElementById("typename").value
    }));
};
sa.onmessage = function (evt) {
    alert("Saved");
    sa.close();
    gid = 0; // reset gid to 0 for re-use
};
And the back-end (the query):
var query = client.query("UPDATE type SET t_name=$1, t_color=$2 WHERE t_id = $3", [name, color, indi]);
query.on("row", function (row, result) {
    result.addRow(row);
});
query.on("end", function (result) {
    connection.send("o");
    client.end();
});
Why does this not work, and why does the number not get recognized?
Thanks in advance.
As one would expect from the initial problem, your database driver is sending an integer array with one member into a field that expects an integer. PostgreSQL rightly rejects the data and returns an error. '{39}' in PostgreSQL terms is exactly equivalent to ARRAY[39] using an array constructor, and to [39] in JSON.
Now, obviously you can just change your query call to pull the first item out of the JSON array and send that instead of the whole array, but I would be worried about what happens if things change and you get multiple values. You may want to look at separating that logic out for this data structure.

How to get last write date of a RavenDB collection/index

I want to detect when a collection or index was last modified or written to.
Note: this works at the document level, but I need it at the collection/index level: How to get last write date of a RavenDB document via C#
RavenQueryStatistics stats;
using (var session = _documentStore.OpenSession())
{
    session.Query<MyDocument>()
        .Statistics(out stats)
        .Take(0)     // don't load any documents, as we only need the stats
        .ToArray();  // this is needed to trigger server-side query execution
}
DateTime indexTimestamp = stats.IndexTimestamp;
string indexEtag = stats.IndexEtag;
Getting metadata only via the RavenDB HTTP API:
GET http://localhost:8080/indexes/dynamic/MyDocuments/?metadata-only=true
would return:
{ "Results":[],
"Includes":[],
"IsStale":false,
"IndexTimestamp":"2013-09-16T15:54:58.2465733Z",
"TotalResults":0,
"SkippedResults":0,
"IndexName":"Raven/DocumentsByEntityName",
"IndexEtag":"01000000-0000-0008-0000-000000000006",
"ResultEtag":"3B5CA9C6-8934-1999-45C2-66A9769444F0",
"Highlightings":{},
"NonAuthoritativeInformation":false,
"LastQueryTime":"2013-09-16T15:55:00.8397216Z",
"DurationMilliseconds":102 }
ResultEtag and IndexTimestamp change on every write.