Laravel 5.4: logging SQL queries along with results

Logging SQL queries is widely described, for instance here:
How to get the query executed in Laravel 5?
but I found no information about how to log the queries along with the query results or errors, respectively.
Can anyone fill the gap?
Thanks,
Armin.

If you want to debug one or more queries (based on your comment), there is this option.
Before the query, add:
\DB::enableQueryLog();
and after the query you can do a dd() (or whatever you like) with:
\DB::getQueryLog();
Note: this will capture all of the queries run between those two calls.
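A minimal sketch of how the two calls can be combined to log the queries together with either the result or the error (the users table and the Log facade calls here are only illustrative, not from the question):
\DB::enableQueryLog();
try {
    // run the query you want to capture
    $users = \DB::table('users')->where('active', 1)->get();
    // log the executed SQL (with bindings and timings) together with the result
    \Log::info('Queries: ' . json_encode(\DB::getQueryLog()));
    \Log::info('Result: ' . $users->toJson());
} catch (\Exception $e) {
    // on failure, log the attempted queries along with the error message
    \Log::error('Queries: ' . json_encode(\DB::getQueryLog()));
    \Log::error('Error: ' . $e->getMessage());
}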

Related

Mulesoft not able to pass dynamic SQL queries based on environments

Hello, for demonstration purposes I have trimmed down my actual SQL query.
I have a SQL query
SELECT *
FROM dbdev.training.courses
where dbdev is my DEV database name. When I migrate to the TEST environment, I want my query to change dynamically to
SELECT *
FROM dbtest.training.courses
I tried using input parameters like {env: p('db_name')} and using them in the query as
SELECT * FROM :env.training.courses
or
SELECT * FROM (:env).training.courses
but none of them worked. I don't want to keep my SQL query in a properties file.
Can you please suggest a way to write my SQL query dynamically based on environment?
The only alternative I can see is to deploy separate JARs with different code for different environments.
You can set the value of the property to a variable and then use the variable with string interpolation.
Warning: creating dynamic SQL queries using any kind of string manipulation may expose your application to SQL injection security vulnerabilities.
Example:
#['SELECT * FROM $(vars.database default "dbtest").training.courses']
Actually, you can do a completely dynamic or partially dynamic query using the MuleSoft DB connector.
Please see this repo:
https://github.com/TheComputerClassroom/dynamicSQLGETandPATCH
Also, I'm about to post an update that allows joins.
At a high level, this is a "Query Builder" where the code that builds the query is written in DataWeave 2. I'm working on another version that allows joins between entities, too.
If you have questions, feel free to reply.
One way to do it is:
Create a variable before the DB Connector:
getTableName = ${env}.training.courses
Write the SQL query:
Select * from $(getTableName);
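Putting those pieces together, a rough Mule 4 / DataWeave 2 sketch (the property name db_name comes from the question; the variable name tableName is just an example):
Set Variable (tableName): #[p('db_name') ++ '.training.courses']
DB Select query: #['SELECT * FROM $(vars.tableName)']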

Oracle SQL - Time-based SQL injection

When trying to do an SQL injection on an Oracle database, I have the problem that most of the examples in the tutorials do not work. I already found out that I can only use CASE WHEN a THEN b ELSE c END instead of normal IF statements.
The question I have now is: how do I get a time delay into the injection? Benchmark() and sleep() do not work either.
I already know that the table is named "flag" and the field I want to read out is named "password".
The only information I get from the database is the time it needed to execute my input (or query, since I bypass the input to inject SQL).
I found the following SQL statement on the web, at SQL Injection Tutorial:
select dbms_pipe.receive_message(('a'),10) from dual;
I am not certain I should be participating in this sort of thing, but since I found it with my first Google Search, I will go ahead and post it.
I tested it and it delayed the result by 10 seconds.
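To make the delay conditional, which is the point of a time-based injection, it can be wrapped in the CASE WHEN form mentioned in the question. A sketch, with 1=1 standing in for whatever condition you actually want to test:
select case when 1=1 then dbms_pipe.receive_message(('a'),10) else 0 end from dual;
If the condition is true the statement takes roughly 10 seconds, otherwise it returns immediately.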

BQ PY Client Libraries :: client.run_async_query() vs client.run_sync_query()

I'm looking at the BigQuery Python client libraries.
There used to be two different operations to query a table:
client.run_async_query()
client.run_sync_query()
But in the latest version (v1.3) it seems there's only one operation to execute a query, Client.query(). Did I understand that correctly?
And looking at the GitHub code, it looks like Client.query() just returns the query job, not the actual query results/data... which makes me conclude it works in a similar way to client.run_async_query(). Is there no replacement anymore for the client.run_sync_query() operation, which returned query results (data) synchronously/immediately?
Thanks for the clarification!
Cheers!
Although .run_sync_query() has been removed, the Query reference says that short jobs may return results right away if they don't take long to finish:
query POST /projects/projectId/queries
Runs a BigQuery SQL query and returns results if the query completes within a specified timeout.
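A minimal sketch with the current library (the project, dataset, and table names are placeholders): Client.query() starts the job, and calling .result() on the returned job blocks until it finishes and yields the rows, which covers the old run_sync_query() use case.
from google.cloud import bigquery
client = bigquery.Client()
# Client.query() starts the query job (asynchronous, like run_async_query() did)
query_job = client.query('SELECT name FROM `my_project.my_dataset.my_table` LIMIT 10')
# result() waits for the job to complete and returns an iterable of rows
for row in query_job.result():
    print(row.name)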

When querying MongoDB using DBeaver, what's the right syntax for filtering by date?

I recently discovered that DBeaver can connect to MongoDB. My next discovery was that DBeaver expects SQL-like queries instead of the JavaScript-like queries I use with the mongo command line client. I've been unable to find any good documentation on the syntax I should be using, so I've been learning by trial and error. I need some help filtering query results by date.
I have a collection named tasks. Each object in the collection has a startedAt attribute that holds a timestamp.
This query gives me lots of results using the command line client: db.tasks.find({startedAt:{$gt:ISODate("2017-03-03")}});
I'm guessing the syntax in DBeaver should be something like this: select * from tasks where startedAt > '2017-03-03';
But, I'm doing something wrong because I don't get any results in DBeaver unless I drop the where clause. What's the right way?

Getting the SQL from a Doctrine Migration

I have been researching a way to get the SQL statements that are built by a generated Migration file. These extend Doctrine_Migration_Base. Essentially I would like to save the SQL as change scripts.
The execution path leads me to Doctrine_Export, which has methods that build the SQL statements and execute them. I have found no way of asking for just the SQL. The export methods found in Doctrine_Export only operate on Doctrine_Record models and not Migration scripts.
From the command line './doctrine migrate version#' the path goes:
Doctrine_Cli::run(cmd)
Doctrine_Task_Migrate::setArguments(args)
Doctrine_Task_Migrate::execute()
Doctrine_Migration::migrate(to)
Doctrine_Migration_Process::Doctrine_Export: various create, drop, and alter methods with SQL equivalents.
Has anyone tackled this before? I really would not like to change Doctrine base files. Any help is greatly appreciated.
Could you make a dev server and do the migration on that, storing an SQL trace as you go? You don't have to keep the results, but you would get a list of every command.
Taking into account Rob Farley's suggestion, I modified:
Doctrine_Core::migrate
Doctrine_Task_Migrate::execute
When the execute method is called, the optional argument 'dryRun' is checked. If true, a 'Doctrine_Connection_Profiler' instance is created. The 'dryRun' value is then passed on to the 'Doctrine_Core::migrate' method. A 'dryRun' value of true allows the changes to be rolled back once the SQL statements have been executed. When the method returns, the profiler is parsed, and non-empty SQL statements not containing 'migration_version' are saved and displayed in the terminal.
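For reference, the profiler part on its own looks roughly like this (a sketch against the Doctrine 1.x API, outside the modified migrate task; the 'migration_version' filter mirrors what is described above):
$conn = Doctrine_Manager::connection();
$profiler = new Doctrine_Connection_Profiler();
$conn->setListener($profiler);
// ... run the migration here ...
foreach ($profiler as $event) {
    $sql = $event->getQuery();
    // keep only real statements, skipping the version bookkeeping
    if ($sql && strpos($sql, 'migration_version') === false) {
        echo $sql . ";\n";
    }
}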