MySQL Log of invalid Queries - sql

I AM NOT RUNNING THE COMMANDS FROM PHP!
I have the MySQL log_error variable set to /var/log/mysql/error.log.
However, when I connect to a database and run an invalid SQL statement, the error does not appear in the log:
SELECT *
FROM some_table
where this_is_invalid_and_will_fail="Doesn't matter because column doesn't exist!";
There are commands being run from some sort of Windows application. All I want to know is what invalid commands it's sending to the MySQL server so that I can attempt to resolve them.

Error log doesn't do that:
https://dev.mysql.com/doc/refman/8.0/en/error-log.html
The error log contains information indicating when mysqld was started and stopped and also any critical errors that occur while the server is running.
MySQL doesn't log invalid/failed queries anywhere.
If it's for debugging purposes, you might try setting up a MySQL Proxy, which could log this I think:
http://dev.mysql.com/downloads/mysql-proxy/

Basically, there are 2 ways.
1) Set up some kind of proxy which can log failing queries. There are a lot of forks of the original MySQL Proxy (which was mentioned by @Mchl): https://github.com/search?utf8=%E2%9C%93&q=MySQL+Proxy You can also find the latest version of the original MySQL Proxy here: https://github.com/mysql/mysql-proxy
I don't like this way, because it's a bit too complicated for quick debugging.
2) You can enable the general log in MySQL (which will log ALL queries).
It creates a big overhead and doesn't fit production requirements, but it's a fast and easy way to find bugs in a development environment.
Just log in to your MySQL server and execute:
SET GLOBAL general_log = 'ON';
SHOW GLOBAL VARIABLES LIKE 'general_log%';
You will see the log location, like:
+------------------+------------------------+
| Variable_name    | Value                  |
+------------------+------------------------+
| general_log      | OFF                    |
| general_log_file | /mnt/ssd/mysql/s.log   |
+------------------+------------------------+
After that, execute:
mysqladmin flush-logs -u root -p
When you need to stop logging, just execute:
SET GLOBAL general_log = 'OFF';

As of MySQL 5.6.3 (released 2011-10-03), there is a server option called log-raw which allows you to include invalid queries in the general query log:
https://dev.mysql.com/doc/refman/5.6/en/server-options.html#option_mysqld_log-raw
You will need to turn on the general query log using general-log and general-log-file, e.g.:
[mysqld]
general-log=1
log-raw=1
general-log-file=/var/log/mysql/general.log

Could you enable the General Query Log? That should tell you everything you need to know.

This is not so trivial. The best way to do this is to log bad queries in your application; there is no built-in way.
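For illustration, a minimal sketch in Ruby using the mysql2 gem (the language, gem, and log path are assumptions for this example, not anything MySQL provides): catch the error the server returns and write the offending statement to your own log.
require 'mysql2'
# Hypothetical wrapper: run a statement and record it if the server rejects it.
def run_logged(client, sql, log_path: '/var/log/myapp/bad_queries.log')
  client.query(sql)
rescue Mysql2::Error => e
  File.open(log_path, 'a') { |f| f.puts "#{Time.now} | #{e.message} | #{sql}" }
  raise
end
# Usage: any statement the server rejects ends up in bad_queries.log.
client = Mysql2::Client.new(host: 'localhost', username: 'app', password: 'secret', database: 'test')
run_logged(client, "SELECT * FROM some_table WHERE this_is_invalid_and_will_fail = 'x'")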

Related

Read access violation related to input(variable,anydtdtm.);

Somebody tell me I'm not crazy. I have SAS on a server, and I'm running the following code:
data wtf;
a=".123456 1 1";
b=input(a,anydtdtm.);
run;
If I run this on my local computer, no problem. If I run this on the server, I get:
ERROR: An exception has been encountered.
Please contact technical support and provide them with the following traceback information:
The SAS task name is [DATASTEP]
ERROR: Read Access Violation DATASTEP
Exception occurred at (04E0AB8C)
Task Traceback
Address Frame (DBGHELP API Version 4.0 rev 5)
0000000004E0AB8C 0000000009C4EC20 sasxdtu:tkvercn1+0x9B4C
0000000004E030D9 0000000009C4F100 sasxdtu:tkvercn1+0x2099
0000000005FF14BE 0000000009C4F108 uwianydt:tkvercn1+0x47E
0000000002438026 0000000009C4F178 tkmk:tkBoot+0x162E6
Does anyone else get this error???
This is an internal bug that cannot be resolved by the user. You'll need to send this information, your environment description, and the exact steps to recreate the bug over to SAS Technical Support to open up an investigation and determine a workaround.
If your server is a database not made up of .sas7bdat files, it might be due to the SAS/ACCESS engine attempting to translate the function into something the server's language can understand but failing to do so properly; that is, it might think it's doing it correctly, but it's not. There are special cases where this can occur, and you may have discovered one.
If you are in fact querying some other database, try adding this before running the data step:
options sastrace=',,,d' sastraceloc=saslog;
This will show all of the steps as SAS sends data & functions to and from the server, and may help give some insight.
I am getting the same error on a Linux system running SAS 9.4:
AUTOMATIC SYSSCP LIN X64
AUTOMATIC SYSSCPL Linux
AUTOMATIC SYSVER 9.4
AUTOMATIC SYSVLONG 9.04.01M3P062415
AUTOMATIC SYSVLONG4 9.04.01M3P06242015
Until SAS can fix the informat, you probably need to add additional testing in your code to exclude strange values like that.

Is there alternative to "autotrace traceonly" on Apache Drill?

I'm new to Apache Drill.
For performance testing purposes, I'm trying to measure the time it takes to execute a query, and I do not need to print the query result.
In Oracle SQL*Plus, there is set autotrace traceonly. The feature does the following (quoting from the Oracle web site):
Similar to SET AUTOTRACE ON, but suppresses the printing of the user's query output, if any. If STATISTICS is enabled, query data is still fetched, but not printed.
In Apache Drill's sqlline, I got an error like the following: Error: PARSE ERROR: Encountered "traceonly" at line 1, column 15...
Do you have any ideas for alternatives?
Thanks,
P.S. I also read this answered question: Any command in mysql equivalent to Oracle's autotrace for performance tuning. Unfortunately, it doesn't work on Apache Drill.
You could put your query (or queries) in a text file (e.g. query.sql), then run sqlline and redirect the output to /dev/null:
bin/sqlline -u jdbc:drill:zk=localhost:2181 -f query.sql > /dev/null
It will still display some output, but only a minimal amount:
1/1 select * from cp.`employee.json`;
1,155 rows selected (0.65 seconds)
Closing: org.apache.drill.jdbc.impl.DrillConnectionImpl
apache drill 1.4.0
"say hello to my little drill"

Invoking a large set of SQL from a Rails 4 application

I have a Rails 4 application that I use in conjunction with Sidekiq to run asynchronous jobs. One of the jobs I normally run outside of my Rails application is a large set of complex SQL queries that cannot really be modeled by ActiveRecord. The connection this set of SQL queries has with my Rails app is that it should be executed any time one of my controller actions is invoked.
Ideally, I'd queue a job from my Rails application within the controller for Sidekiq to go ahead and run the queries. Right now they're stored in an external file, and I'm not entirely sure what the best way is to have Rails run that SQL.
Any solutions are appreciated.
I agree with Sharagoz: if you just need to run a specific query, the best way is to send the query string directly to the connection, like:
ActiveRecord::Base.connection.execute(File.read("myquery.sql"))
If the query is not static and you have to compose it, I would use Arel; it's already present in Rails 4.x:
https://github.com/rails/arel
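If you do go the Arel route, a minimal sketch (the users table and its columns are placeholders, not from the question):
# Compose the query programmatically, then hand the generated SQL to the connection.
users = Arel::Table.new(:users)
query = users.project(users[:id], users[:name]).where(users[:active].eq(true))
ActiveRecord::Base.connection.execute(query.to_sql)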
You didn't say what database you are using, so I'm going to assume MySQL.
You could shell out to the mysql binary to do the work:
result = `mysql -u #{user} --password=#{password} #{database} < #{huge_sql_filename}`
Or use ActiveRecord::Base.connection.execute(File.read("huge.sql")), but it won't work out of the box if you have multiple SQL statements in your SQL file.
In order to run multiple statements you will need to create an initializer that monkey patches ActiveRecord::Base.mysql2_connection to allow setting MySQL's CLIENT_MULTI_STATEMENTS and CLIENT_MULTI_RESULTS flags.
Create a new initializer, config/initializers/mysql2.rb:
module ActiveRecord
  class Base
    # Overriding ActiveRecord::Base.mysql2_connection
    # method to allow passing options from database.yml
    #
    # Example of database.yml
    #
    #   login: &login
    #     socket: /tmp/mysql.sock
    #     adapter: mysql2
    #     host: localhost
    #     encoding: utf8
    #     flags: 131072
    #
    # @param [Hash] config hash that you define in your
    #   database.yml
    # @return [Mysql2Adapter] new MySQL adapter object
    #
    def self.mysql2_connection(config)
      config[:username] = 'root' if config[:username].nil?
      if Mysql2::Client.const_defined? :FOUND_ROWS
        config[:flags] = config[:flags] ? config[:flags] | Mysql2::Client::FOUND_ROWS : Mysql2::Client::FOUND_ROWS
      end
      client = Mysql2::Client.new(config.symbolize_keys)
      options = [config[:host], config[:username], config[:password], config[:database], config[:port], config[:socket], 0]
      ConnectionAdapters::Mysql2Adapter.new(client, logger, options, config)
    end
  end
end
Then update config/database.yml to add flags:
development:
  adapter: mysql2
  database: app_development
  username: user
  password: password
  flags: <%= 65536 | 131072 %>
I just tested this on Rails 4.1 and it works great.
Source: http://www.spectator.in/2011/03/12/rails2-mysql2-and-stored-procedures/
Executing one query is, as outlined by other people, quite simply done through:
ActiveRecord::Base.connection.execute("SELECT COUNT(*) FROM users")
You are talking about a 20,000-line SQL script containing multiple queries. Assuming you have the file somewhat under control, you can extract the individual queries from it.
script = Rails.root.join("lib").join("script.sql").read # ah, Pathnames
# this needs to match the delimiter of your queries
STATEMENT_SEPARATOR = ";\n\n"
ActiveRecord::Base.transaction do
  script.split(STATEMENT_SEPARATOR).each do |stmt|
    ActiveRecord::Base.connection.execute(stmt)
  end
end
If you're lucky, the query delimiter is ";\n\n", but this of course depends on your script. In another example we had "\x0" as the delimiter. The point is that you split the script into individual queries and send them to the database. I wrapped it in a transaction to let the database know that more than one statement is coming. The block commits when no exception is raised while sending the script's queries.
If you do not have the script file under control, start talking to those who control it to get a reliable delimiter. If it's not under your control and you cannot talk to the one who controls it, you wouldn't execute it, I guess :-).
UPDATE
This is a generic way to solve this. For PostgreSQL, you don't need to split the statements manually. You can just send them all at once via execute. For MySQL, there seem to be solutions to get the adapter into a CLIENT_MULTI_STATEMENTS mode.
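For the PostgreSQL case, a minimal sketch (assuming the same lib/script.sql file as above and a pg-backed connection):
# PostgreSQL accepts several semicolon-separated statements in one call,
# so the whole file can be sent through a single execute.
ActiveRecord::Base.connection.execute(Rails.root.join("lib", "script.sql").read)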
If you want to execute raw SQL through active record you can use this API:
ActiveRecord::Base.connection.execute("SELECT COUNT(*) FROM users")
If you are running big SQL every time, I suggest creating a SQL view for it; that will boost execution time. Also, if possible, try to split all those SQL queries in such a way that they can be executed in parallel instead of sequentially, and then push them to the Sidekiq queue.
You have to use ActiveRecord::Base.connection.execute or ModelClass.find_by_sql to run custom SQL (see the sketch below).
Also, keep an eye on ROLLBACK transactions; you will find many places where you don't need that ROLLBACK feature. If you avoid it, the query will run faster, but it is dangerous.
That's all I can suggest.
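To make that concrete, a minimal sketch (the User model and users table are just placeholders):
# Raw statement through the connection: returns a driver result, not model objects.
count_result = ActiveRecord::Base.connection.execute("SELECT COUNT(*) FROM users")
# find_by_sql: returns instantiated model objects and supports bind parameters.
recent_users = User.find_by_sql(["SELECT * FROM users WHERE created_at > ?", 1.week.ago])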
Use the available database tools to handle the complex queries, such as views, stored procedures, etc., and call them as other people already suggested (ActiveRecord::Base.connection.execute and ModelClass.find_by_sql, for example). It might very well cut down significantly on query preparation time in the DB and make your code easier to handle.
http://dev.mysql.com/doc/refman/5.0/en/create-view.html
http://dev.mysql.com/doc/connector-cpp/en/connector-cpp-tutorials-stored-routines-statements.html
Abstract your query input parameters into a hash so you can pass them on to Sidekiq; don't send SQL strings, as this will probably degrade performance (due to query preparation time) and make your life more complicated due to funny SQL driver parsing bugs (see the sketch after this answer).
Run your complex queries in a dedicated named queue and set the concurrency to a value that prevents your database from getting overwhelmed, as these queries smell like they could be pretty DB-heavy.
https://github.com/mperham/sidekiq/wiki/API
https://github.com/mperham/sidekiq/wiki/Advanced-Options
Have a look at Squeel; it's a great addition to AR and it might be able to pull off some of the things you are doing.
https://github.com/activerecord-hackery/squeel
http://railscasts.com/episodes/354-squeel
I'll assume you use MySQL for now, but your mileage will vary depending on the DB type that you use. Oracle, for example, has some good gems for handling stored procedures and views, such as https://github.com/rsim/ruby-plsql
Let me know if some of this stuff doesn't fit your use case and I'll expand
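A minimal sketch of the parameters-as-a-hash idea above, with a hypothetical worker, queue name, and stored procedure (none of these names come from the question):
require 'sidekiq'
# app/workers/complex_report_worker.rb
class ComplexReportWorker
  include Sidekiq::Worker
  # Dedicated, throttleable queue so the heavy SQL doesn't overwhelm the database.
  sidekiq_options queue: :heavy_sql, retry: 2
  def perform(params)
    # Only plain parameters travel through Redis; the SQL itself lives in the DB as a procedure.
    sql = ActiveRecord::Base.send(:sanitize_sql_array,
                                  ["CALL refresh_report(?, ?)", params["account_id"], params["as_of"]])
    ActiveRecord::Base.connection.execute(sql)
  end
end
# Enqueued from a controller action:
# ComplexReportWorker.perform_async("account_id" => 42, "as_of" => Date.today.to_s)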
I see this post is kind of old, but I would like to add my solution to it. I was in a similar situation; I also needed a way to force-feed "PRAGMA foreign_keys = on;" into my SQLite connection (I could not find a previous post that spelled out how to do it). Anywho, this worked like a charm for me. It allowed me to write "pretty" SQL and still get it executed. Blank lines are ignored by the if statement.
conn = ActiveRecord::Base.establish_connection(adapter: 'sqlite3', database: DB_NAME)
sqls = File.read(DDL_NAME).split(';')
sqls.each { |sql| conn.connection.execute(sql << ';') unless sql.strip.size == 0 }
conn.connection.execute('PRAGMA foreign_keys = on;')
I had the same problem with a set of SQL statements that I needed to execute all in one call to the server. What worked for me was to set up an initializer for the Mysql2 adapter (as explained in infused's answer) but also do some extra work to process multiple results. A direct call to ActiveRecord::Base.connection.execute would only retrieve the first result and raise an Internal Error.
My solution was to get the Mysql2 adapter and work directly with it:
client = ActiveRecord::Base.connection.raw_connection
Then, as explained here, execute the query and loop through the results:
client.query(multiple_stms_query)
while client.next_result
  result = client.store_result
  # do something with it ...
end

Validate Hive HQL syntax?

Is there a programmatic way to validate HiveQL statements for errors like basic syntax mistakes? I'd like to check statements before sending them off to Elastic Map Reduce in order to save debugging time.
Yes there is!
It's pretty easy actually.
Steps:
1. Get a Hive Thrift client in your language.
I'm in Ruby, so I use this wrapper - https://github.com/forward/rbhive (gem install rbhive)
If you're not in Ruby, you can download the Hive source and run Thrift on the included Thrift configuration files to generate client code in most languages.
2. Connect to hive on port 10001 and run a describe query
In ruby this looks like this:
RBHive.connect(host, port) do |connection|
  connection.fetch("describe select * from categories limit 10")
end
If the query is invalid, the client will throw an exception with details of why the syntax is invalid. Describe will return a query tree if the syntax IS valid (which you can ignore in this case).
Hope that helps.
"describe select * from categories limit 10" didn't work for me.
Maybe this is related to the Hive version one is using.
I'm using Hive 0.8.1.4
After doing some research I found a similar solution to the one Matthew Rathbone provided:
Hive provides an EXPLAIN command that shows the execution plan for a query. The syntax for this statement is as follows:
EXPLAIN [EXTENDED] query
So for everyone who's also using rbhive:
RBHive.connect(host, port) do |c|
  c.execute("explain select * from categories limit 10")
end
Note that you have to substitute c.fetch with c.execute, since explain won't return any results even when it succeeds; with fetch, rbhive would throw an exception whether or not your syntax is correct.
execute will throw an exception if you've got a syntax error or if the table/column you are querying doesn't exist. If everything is fine, no exception is thrown, but you also won't receive any results, which is not a bad thing.
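If you want a simple pass/fail check out of this, a small sketch along the same lines (the helper name is made up; it assumes rbhive as above):
require 'rbhive'
# Returns true when Hive accepts the statement, false when EXPLAIN raises.
def hql_valid?(host, port, hql)
  RBHive.connect(host, port) do |c|
    c.execute("explain #{hql}")
  end
  true
rescue StandardError
  false
end
puts hql_valid?('localhost', 10001, 'select * from categories limit 10')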
The latest version, Hive 2.0, comes with the hplsql tool, which allows us to validate Hive commands without actually running them.
Configuration:
Add the XML below to the hive/conf folder and restart Hive:
https://github.com/apache/hive/blob/master/hplsql/src/main/resources/hplsql-site.xml
To run hplsql and validate a query, use the commands below.
To validate a single query:
hplsql -offline -trace -e 'select * from sample'
(or)
To validate an entire file:
hplsql -offline -trace -f samplehql.sql
If the query syntax is correct, the response from hplsql will be something like this:
Ln:1 SELECT // type
Ln:1 select * from sample // command
Ln:1 Not executed - offline mode set // execution status
If the query syntax is wrong, the syntax issue in the query will be reported.
If the Hive version is older, we need to manually place the hplsql jars inside hive/lib and proceed.

Mysql query to return server load average

Does anyone know of a MySQL query that returns the server's current load average?
Do you mean the actual system load average? This has nothing to do with MySQL. For example, on Linux you can get it from /proc/loadavg.
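For instance, a minimal Ruby sketch (Linux only, and nothing to do with MySQL itself) that reads the three load averages:
# /proc/loadavg starts with the 1-, 5- and 15-minute load averages.
one, five, fifteen = File.read('/proc/loadavg').split.first(3).map(&:to_f)
puts "load average: #{one} (1 min), #{five} (5 min), #{fifteen} (15 min)"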
Correct me if I'm wrong, but the load average variable is a property of the machine, not the MySQL server.
So to retrieve the average load you should be looking for a system call, not an SQL query.
You might want to look into this statement:
http://dev.mysql.com/doc/refman/5.1/en/show-status.html
SHOW [GLOBAL | SESSION] STATUS
[LIKE 'pattern' | WHERE expr]
SHOW STATUS provides server status information. This information also can be obtained using the mysqladmin extended-status command. The LIKE clause, if present, indicates which variable names to match. The WHERE clause can be given to select rows using more general conditions, as discussed in Section 20.28, “Extensions to SHOW Statements”. This statement does not require any privilege. It requires only the ability to connect to the server.
Do you have mytop installed?
mytop is a console-based (non-gui) tool for monitoring the threads and overall performance of a MySQL 3.22.x, 3.23.x, and 4.x server.
Mytop allows you to monitor what is happening in real time, everything from the number of queries per second to the key efficiency of the queries.
See Using Mytop: A MySQL Monitor