rails_sql_views undefined method 'base_tables' on mysql2 - ruby-on-rails-3

I need to create views, and found that using raw SQL to create them caused problems when running tests.
On my development system, which uses sqlite3, I rolled back my database migrations to before the views were created.
I added the rails_sql_views gem from git://github.com/unleashed/rails_sql_views.
I modified my migration for the views so it now reads:
def up
  # dividing by 100 for percentages and 1000 for kW, hence 100000
  create_view :view_sub_power_ratings,
    "select
       ((s.power_off * p.percent_off + s.standby * p.percent_standby + s.idle * p.percent_idle + s.normal * p.percent_normal + s.maximum * p.percent_max) * p.working_days * 24 +
        (s.power_off * p.percent_off_nw + s.standby * p.percent_standby_nw + s.idle * p.percent_idle_nw + s.normal * p.percent_normal_nw + s.maximum * p.percent_max_nw) * (365 - p.working_days) * 24) / 100000
       as power_usage,
     p.subscription_id,
     s.device_id
     from sub_category_params p
     inner join devices d on d.device_category_id = p.device_category_id
     inner join device_power_summaries s on s.device_id = d.id"

  # dividing by 100 for percentage accuracy, 100 for percent used, and 1000 for kW, hence 10000000
  create_view :view_sub_power_ratings_variations,
    "select
       (s.standby * s.standby_accuracy * p.percent_standby + s.idle * s.idle_accuracy * p.percent_idle + s.normal * s.normal_accuracy * p.percent_normal +
        s.maximum * s.maximum_accuracy * p.percent_max + s.power_off * s.power_off_accuracy * p.percent_off) * 24 * p.working_days / 10000000 as variation_wd,
       (s.standby * s.standby_accuracy * p.percent_standby_nw + s.idle * s.idle_accuracy * p.percent_idle_nw + s.normal * s.normal_accuracy * p.percent_normal_nw +
        s.maximum * s.maximum_accuracy * p.percent_max_nw + s.power_off * s.power_off_accuracy * p.percent_off_nw) * 24 * (365 - p.working_days) / 10000000 as variation_nw,
       p.subscription_id,
       s.device_id
     from sub_category_params p
     inner join devices d on d.device_category_id = p.device_category_id
     inner join device_power_summaries s on s.device_id = d.id"

  create_view :view_sub_power_maximums,
    "select
       r.power_usage + (v.variation_nw + v.variation_wd) as maximum_power_usage,
       r.power_usage - (v.variation_nw + v.variation_wd) as minimum_power_usage,
       v.subscription_id,
       v.device_id
     from sub_category_params p
     inner join devices d on d.device_category_id = p.device_category_id
     inner join device_power_summaries s on s.device_id = d.id
     inner join view_sub_power_ratings r on d.id = r.device_id and p.subscription_id = r.subscription_id
     inner join view_sub_power_ratings_variations v on d.id = v.device_id and p.subscription_id = v.subscription_id"
end

def down
  # drop the dependent view first, then the views it reads from
  drop_view :view_sub_power_maximums
  drop_view :view_sub_power_ratings
  drop_view :view_sub_power_ratings_variations
end
I then ran the migration again and all was fine. The views are created and the create_view calls are in schema.rb. Result!
However, on deploying to my staging server, which runs MySQL with the mysql2 adapter, I'm getting an error on the migration.
This is the trace:
** Execute db:schema:dump
rake aborted!
undefined method `base_tables' for #<ActiveRecord::ConnectionAdapters::Mysql2Adapter:0xb876f1c>
/usr/local/rvm/gems/ruby-1.9.3-p125/bundler/gems/rails_sql_views-9d781715bcab/lib/rails_sql_views/schema_dumper.rb:98:in `tables_with_views_excluded'
/usr/local/rvm/gems/ruby-1.9.3-p125/gems/activerecord-3.2.1/lib/active_record/schema_dumper.rb:27:in `dump'
/usr/local/rvm/gems/ruby-1.9.3-p125/bundler/gems/rails_sql_views-9d781715bcab/lib/rails_sql_views/schema_dumper.rb:27:in `dump_with_views'
/usr/local/rvm/gems/ruby-1.9.3-p125/gems/activerecord-3.2.1/lib/active_record/schema_dumper.rb:21:in `dump'
/usr/local/rvm/gems/ruby-1.9.3-p125/gems/activerecord-3.2.1/lib/active_record/railties/databases.rake:354:in `block (4 levels) in <top (required)>'
/usr/local/rvm/gems/ruby-1.9.3-p125/gems/activerecord-3.2.1/lib/active_record/railties/databases.rake:352:in `open'
/usr/local/rvm/gems/ruby-1.9.3-p125/gems/activerecord-3.2.1/lib/active_record/railties/databases.rake:352:in `block (3 levels) in <top (required)>'
/usr/local/rvm/gems/ruby-1.9.3-p125@global/gems/rake-0.9.2.2/lib/rake/task.rb:205:in `call'
/usr/local/rvm/gems/ruby-1.9.3-p125@global/gems/rake-0.9.2.2/lib/rake/task.rb:205:in `block in execute'
/usr/local/rvm/gems/ruby-1.9.3-p125@global/gems/rake-0.9.2.2/lib/rake/task.rb:200:in `each'
/usr/local/rvm/gems/ruby-1.9.3-p125@global/gems/rake-0.9.2.2/lib/rake/task.rb:200:in `execute'
/usr/local/rvm/gems/ruby-1.9.3-p125@global/gems/rake-0.9.2.2/lib/rake/task.rb:158:in `block in invoke_with_call_chain'
/usr/local/rvm/rubies/ruby-1.9.3-p125/lib/ruby/1.9.1/monitor.rb:211:in `mon_synchronize'
/usr/local/rvm/gems/ruby-1.9.3-p125@global/gems/rake-0.9.2.2/lib/rake/task.rb:151:in `invoke_with_call_chain'
/usr/local/rvm/gems/ruby-1.9.3-p125@global/gems/rake-0.9.2.2/lib/rake/task.rb:144:in `invoke'
/usr/local/rvm/gems/ruby-1.9.3-p125/gems/activerecord-3.2.1/lib/active_record/railties/databases.rake:161:in `block (2 levels) in <top (required)>'
/usr/local/rvm/gems/ruby-1.9.3-p125@global/gems/rake-0.9.2.2/lib/rake/task.rb:205:in `call'
/usr/local/rvm/gems/ruby-1.9.3-p125@global/gems/rake-0.9.2.2/lib/rake/task.rb:205:in `block in execute'
/usr/local/rvm/gems/ruby-1.9.3-p125@global/gems/rake-0.9.2.2/lib/rake/task.rb:200:in `each'
/usr/local/rvm/gems/ruby-1.9.3-p125@global/gems/rake-0.9.2.2/lib/rake/task.rb:200:in `execute'
/usr/local/rvm/gems/ruby-1.9.3-p125@global/gems/rake-0.9.2.2/lib/rake/task.rb:158:in `block in invoke_with_call_chain'
/usr/local/rvm/rubies/ruby-1.9.3-p125/lib/ruby/1.9.1/monitor.rb:211:in `mon_synchronize'
/usr/local/rvm/gems/ruby-1.9.3-p125@global/gems/rake-0.9.2.2/lib/rake/task.rb:151:in `invoke_with_call_chain'
/usr/local/rvm/gems/ruby-1.9.3-p125@global/gems/rake-0.9.2.2/lib/rake/task.rb:144:in `invoke'
/usr/local/rvm/gems/ruby-1.9.3-p125/gems/activerecord-3.2.1/lib/active_record/railties/databases.rake:156:in `block (2 levels) in <top (required)>'
/usr/local/rvm/gems/ruby-1.9.3-p125@global/gems/rake-0.9.2.2/lib/rake/task.rb:205:in `call'
/usr/local/rvm/gems/ruby-1.9.3-p125@global/gems/rake-0.9.2.2/lib/rake/task.rb:205:in `block in execute'
/usr/local/rvm/gems/ruby-1.9.3-p125@global/gems/rake-0.9.2.2/lib/rake/task.rb:200:in `each'
/usr/local/rvm/gems/ruby-1.9.3-p125@global/gems/rake-0.9.2.2/lib/rake/task.rb:200:in `execute'
/usr/local/rvm/gems/ruby-1.9.3-p125@global/gems/rake-0.9.2.2/lib/rake/task.rb:158:in `block in invoke_with_call_chain'
/usr/local/rvm/rubies/ruby-1.9.3-p125/lib/ruby/1.9.1/monitor.rb:211:in `mon_synchronize'
/usr/local/rvm/gems/ruby-1.9.3-p125@global/gems/rake-0.9.2.2/lib/rake/task.rb:151:in `invoke_with_call_chain'
/usr/local/rvm/gems/ruby-1.9.3-p125@global/gems/rake-0.9.2.2/lib/rake/task.rb:144:in `invoke'
/usr/local/rvm/gems/ruby-1.9.3-p125@global/gems/rake-0.9.2.2/lib/rake/application.rb:116:in `invoke_task'
/usr/local/rvm/gems/ruby-1.9.3-p125@global/gems/rake-0.9.2.2/lib/rake/application.rb:94:in `block (2 levels) in top_level'
/usr/local/rvm/gems/ruby-1.9.3-p125@global/gems/rake-0.9.2.2/lib/rake/application.rb:94:in `each'
/usr/local/rvm/gems/ruby-1.9.3-p125@global/gems/rake-0.9.2.2/lib/rake/application.rb:94:in `block in top_level'
/usr/local/rvm/gems/ruby-1.9.3-p125@global/gems/rake-0.9.2.2/lib/rake/application.rb:133:in `standard_exception_handling'
/usr/local/rvm/gems/ruby-1.9.3-p125@global/gems/rake-0.9.2.2/lib/rake/application.rb:88:in `top_level'
/usr/local/rvm/gems/ruby-1.9.3-p125@global/gems/rake-0.9.2.2/lib/rake/application.rb:66:in `block in run'
/usr/local/rvm/gems/ruby-1.9.3-p125@global/gems/rake-0.9.2.2/lib/rake/application.rb:133:in `standard_exception_handling'
/usr/local/rvm/gems/ruby-1.9.3-p125@global/gems/rake-0.9.2.2/lib/rake/application.rb:63:in `run'
/usr/local/rvm/gems/ruby-1.9.3-p125@global/gems/rake-0.9.2.2/bin/rake:33:in `<top (required)>'
/usr/local/rvm/gems/ruby-1.9.3-p125@global/bin/rake:19:in `load'
/usr/local/rvm/gems/ruby-1.9.3-p125@global/bin/rake:19:in `<main>'
Tasks: TOP => db:schema:dump
My Gemfile has:
gem 'rails_sql_views', :git => 'git://github.com/unleashed/rails_sql_views', require: 'rails_sql_views'
The bundle install runs ok:
** [out :: myserver.com]
** [out :: myserver.com] Using rails_sql_views (0.8.0.1.unleashed) from git://github.com/unleashed/rails_sql_views (at master)
** [out :: myserver.com]
** [out :: myserver.com] Using sass (3.1.15)
** [out :: myserver.com] * mysql2-0.3.11.gem
** [out :: myserver.com] Removing outdated .gem files from vendor/cache
** [out :: myserver.com] * rails_sql_views-0.8.0.gem
** [out :: myserver.com] Your bundle is complete! Use `bundle show [gemname]` to see where a bundled gem is installed.
command finished in 4018ms
I've seen some people reporting similar errors, and the explanation has been that there is no adapter for mysql2. But it is there, in the lib/rails_sql_views/connection_adapters directory: mysql2_adapter.rb.
I've taken a peek and it does define a base_tables method.
So what am I missing? Sorry, I know it's going to be something quite obvious, but I've still not quite got my head round how gems are integrated into the system.
Many thanks in advance

I've changed the source of the gem to:
git://github.com/centresource/rails_sql_views
and this has resolved the problem; the migration now runs correctly.
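For reference, the working Gemfile line is the original one with only the source swapped:
gem 'rails_sql_views', :git => 'git://github.com/centresource/rails_sql_views', require: 'rails_sql_views'
followed by a bundle install so Bundler picks up the new source.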

When you add a gem to your Gemfile you need to run bundle install.
This checks whether your Gemfile has changed and, if it has (for example, a new gem was added), goes out to the source, pulls in the gem code, and includes it in your application.
To see exactly which gems and versions the project is using, look at the Gemfile.lock file generated by the last bundle.
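For example:
bundle install                  # resolve the Gemfile and install anything new or changed
bundle show rails_sql_views     # print the directory the gem is loaded from
bundle show is a quick way to confirm which checkout of a git-sourced gem is actually in use.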

Related

run DBMS_SCHEDULER.create_job from sys for a stored procedure owned by another schema [Oracle SQL]

I want to schedule runs of a stored procedure that belongs to schema schemaA, while logged in as the SYS user.
The stored procedure procedureA takes one input parameter, varA. I was having some trouble running the scheduler:
BEGIN
  DBMS_SCHEDULER.create_job(
    job_name        => 'test1',
    job_type        => 'STORED_PROCEDURE',
    job_action      => 'schemaA.procedureA(''varA''); ',
    start_date      => SYSTIMESTAMP,
    repeat_interval => 'FREQ=minutely;BYMINUTE=0,10,20,30,40,50;BYSECOND=0',
    enabled         => TRUE,
    comments        => 'Your description of your job'
  );
END;
I got this error:
Error report -
ORA-27452: "schemaA.procedureA('varA'); " is an invalid name for a database object.
ORA-06512: at "SYS.DBMS_ISCHED", line 175
ORA-06512: at "SYS.DBMS_SCHEDULER", line 286
ORA-06512: at line 3
27452. 00000 - "\"%s\" is an invalid name for a database object."
*Cause: An invalid name was used to identify a database object.
*Action: Reissue the command using a valid name.
I'm wondering how I should fix this.
You can't specify inputs in the job_action parameter, only the name of the procedure. Use the number_of_arguments parameter and set_job_argument_value to supply the inputs.
See the documentation here: https://docs.oracle.com/en/database/oracle/oracle-database/19/arpls/DBMS_SCHEDULER.html#GUID-1BC57390-C756-4908-A4D8-8D1EEC236E25
See an example here: Creating a dbms_scheduler.create_job with arguments
DECLARE
  v_text VARCHAR2(100) := 'some value';  -- placeholder; the linked example passes its own variable
BEGIN
  dbms_scheduler.create_job(
    job_name            => 'script_dbms_scheduler_test',
    job_action          => '/data/home/workflow/script_dbms_scheduler.ksh',
    job_type            => 'executable',
    number_of_arguments => 1,
    auto_drop           => TRUE,
    comments            => 'Run shell-script script_dbms_scheduler.ksh');

  dbms_scheduler.set_job_argument_value(
    job_name          => 'script_dbms_scheduler_test',
    argument_position => 1,
    argument_value    => v_text);

  dbms_scheduler.enable('script_dbms_scheduler_test');
END;
/
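Applied to the procedure from the question, a minimal sketch would look like this (untested; note the job has to be created disabled, because a job cannot be enabled before all of its arguments have values):
BEGIN
  DBMS_SCHEDULER.create_job(
    job_name            => 'test1',
    job_type            => 'STORED_PROCEDURE',
    job_action          => 'schemaA.procedureA',  -- procedure name only, no argument list
    number_of_arguments => 1,
    start_date          => SYSTIMESTAMP,
    repeat_interval     => 'FREQ=minutely;BYMINUTE=0,10,20,30,40,50;BYSECOND=0',
    enabled             => FALSE,                 -- enable only after the argument is set
    comments            => 'Your description of your job');

  DBMS_SCHEDULER.set_job_argument_value(
    job_name          => 'test1',
    argument_position => 1,
    argument_value    => 'varA');

  DBMS_SCHEDULER.enable('test1');
END;
/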

How to find/fix a "stackoverflow exception in /action_dispatch/middleware/reloader"?

I get this exception sporadically on my production Debian server running Rails 3.2.22.5.
EDIT: I now can reproduce it in development as well.
It feels like it has something to do with caching.
I have a "permission_checker" that is called with almost every request to the server. It checks the permissions of the requested object, walking down all records/models until it finds the final owner of that object/record, and then decides:
when no owner is found, then 404
when the owner found is the logged-in user, then process the page
when the owner found is the public owner, then process the page read-only
else raise a permission exception
So far, so good.
The code looks like this (the 'aaa'/'bbb' markers and the error log level are just for debugging):
# some processing to find the owner
# id_val is a hash holding the result of this
logger.error "needed owner id :#{id_val}".yellow
logger.error "aaa".yellow
needed_owner_id = id_val[:person_id]
if needed_owner_id
  logger.error "bbb #{needed_owner_id}".yellow
  needed_owner = Person.find(needed_owner_id) # <----- here is the crash
  logger.error "ccc".yellow
  logger.error "needed owner id :#{needed_owner_id} (#{needed_owner.owner_type})".red.on_white
  logger.error "ddd".yellow
  logger.error "current person id :#{@current_person ? @current_person.id : nil}".red.on_white
else
  logger.error "no needed owner found"
end
# further processing
# further processing
If I don't get the SystemStackError, the output looks like this:
Started GET "/ldc/order_forms/ec0a08a6-8525-4c6e-9056-d387e7ad3b9c/edit"
Processing by Ldc::OrderFormsController#edit as HTML
Parameters: {"id"=>"ec0a08a6-8525-4c6e-9056-d387e7ad3b9c"}
LoginData Load (0.1ms) SELECT `login_data`.* FROM `login_data` WHERE `login_data`.`id` = '083a684f-db0e-4c87-a44e-d434d3334289' LIMIT 1
Ldc::Person Load (0.1ms) SELECT `ldc_people`.* FROM `ldc_people` WHERE `ldc_people`.`ustate` = 'A' AND `ldc_people`.`login_data_id` = '083a684f-db0e-4c87-a44e-d434d3334289' LIMIT 1
check in: {:order_form_id=>"ec0a08a6-8525-4c6e-9056-d387e7ad3b9c", :person_id=>nil}
Ldc::OrderForm Load (0.1ms) SELECT `ldc_order_forms`.* FROM `ldc_order_forms` WHERE `ldc_order_forms`.`ustate` = 'A' AND `ldc_order_forms`.`id` = 'ec0a08a6-8525-4c6e-9056-d387e7ad3b9c' LIMIT 1
CACHE (0.0ms) SELECT `ldc_order_forms`.* FROM `ldc_order_forms` WHERE `ldc_order_forms`.`ustate` = 'A' AND `ldc_order_forms`.`id` = 'ec0a08a6-8525-4c6e-9056-d387e7ad3b9c' LIMIT 1
needed owner id :{:person_id=>"c7285c28-0906-4592-bbd7-3fbe164a337e"}
aaa
bbb c7285c28-0906-4592-bbd7-3fbe164a337e
Ldc::Person Load (0.1ms) SELECT `ldc_people`.* FROM `ldc_people` WHERE `ldc_people`.`ustate` = 'A' AND `ldc_people`.`id` = 'c7285c28-0906-4592-bbd7-3fbe164a337e' LIMIT 1
ccc
needed owner id :c7285c28-0906-4592-bbd7-3fbe164a337e
ddd
current person id :c7285c28-0906-4592-bbd7-3fbe164a337e
--------- permission check done ---------
If it fails, the output is identical up to the crash:
check in: {:order_form_id=>"ec0a08a6-8525-4c6e-9056-d387e7ad3b9c", :person_id=>nil}
Ldc::OrderForm Load (3.2ms) SELECT `ldc_order_forms`.* FROM `ldc_order_forms` WHERE `ldc_order_forms`.`ustate` = 'A' AND `ldc_order_forms`.`id` = 'ec0a08a6-8525-4c6e-9056-d387e7ad3b9c' LIMIT 1
CACHE (0.0ms) SELECT `ldc_order_forms`.* FROM `ldc_order_forms` WHERE `ldc_order_forms`.`ustate` = 'A' AND `ldc_order_forms`.`id` = 'ec0a08a6-8525-4c6e-9056-d387e7ad3b9c' LIMIT 1
needed owner id :{:person_id=>"c7285c28-0906-4592-bbd7-3fbe164a337e"}
aaa
bbb c7285c28-0906-4592-bbd7-3fbe164a337e
Completed 500 Internal Server Error in 223.8ms
SystemStackError (stack level too deep):
actionpack (3.2.22.5) lib/action_dispatch/middleware/reloader.rb:70
The code for lib/action_dispatch/middleware/reloader.rb:70 is a raise:
def call(env)
  @validated = @condition.call
  prepare!   # run the registered "prepare" callbacks (e.g. code reloading hooks)
  response = @app.call(env)
  # wrap the body so the "cleanup" callbacks only run after the response has been served
  response[2] = ActionDispatch::BodyProxy.new(response[2]) { cleanup! }
  response
rescue Exception
  cleanup!   # on any error, run cleanup immediately...
  raise      # ...and re-raise; line 70 is this re-raise, so the real error originates deeper in the stack
end
Any ideas what these lines do, so I can search further?

Oracle: execute a Job dbms_scheduler

I want to run a scheduler job in Oracle Database 11g Express Edition Release 11.2.0.2.0 - 64bit Production.
I have this package:
create or replace PACKAGE "S_IN_TDK" is
  procedure parseMsg;
end;
and this job:
BEGIN
  DBMS_SCHEDULER.CREATE_JOB (
    job_name        => 'parseMsg',
    program_name    => 'S_IN_TDK.parseMsg',
    repeat_interval => 'FREQ=SECONDLY;INTERVAL=10',
    --job_style     => 'LIGHTWEIGHT',
    comments        => 'Job that polls device n2 every 10 seconds');
END;
but when I run the job I got this error:
Failed to process the SQL command
- ORA-27476: "S_IN_TDK.PARSEMSG" does not exist
ORA-06512: at "SYS.DBMS_ISCHED", line 185
ORA-06512: at "SYS.DBMS_SCHEDULER", line 486
ORA-06512: at line 2
I also tried
BEGIN
  DBMS_SCHEDULER.CREATE_JOB (
    job_name        => 'parseMsg',
    job_action      => 'begin S_IN_TDK.parseMsg; end;',
    repeat_interval => 'FREQ=SECONDLY;INTERVAL=10',
    --job_style     => 'LIGHTWEIGHT',
    comments        => 'Job that polls device n2 every 10 seconds');
END;
but then I got this error:
Error report -
ORA-06550: line 2, column 3:
PLS-00306: wrong number or types of arguments in call to 'CREATE_JOB'
ORA-06550: line 2, column 3:
PL/SQL: Statement ignored
06550. 00000 - "line %s, column %s:\n%s"
*Cause: Usually a PL/SQL compilation error.
*Action:
The program_name parameter expects the name of a scheduler PROGRAM object. If you want to run an inline program, use the job_action parameter instead:
BEGIN
  DBMS_SCHEDULER.CREATE_JOB (
    job_name        => 'parseMsg',
    job_action      => 'begin S_IN_TDK.parseMsg; end;',
    repeat_interval => 'FREQ=SECONDLY;INTERVAL=10',
    --job_style     => 'LIGHTWEIGHT',
    comments        => 'Job that polls device n2 every 10 seconds');
END;
Note that job_action expects a complete PL/SQL block as input.
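If you did want to keep the program_name route instead, the PROGRAM object would have to be created (and enabled) first; a minimal sketch, with an illustrative program name:
BEGIN
  DBMS_SCHEDULER.CREATE_PROGRAM(
    program_name   => 'parseMsg_prog',      -- hypothetical name
    program_type   => 'STORED_PROCEDURE',
    program_action => 'S_IN_TDK.parseMsg',
    enabled        => TRUE);
END;
/
The job would then reference it with program_name => 'parseMsg_prog'.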
DBMS_SCHEDULER.CREATE_JOB has three required parameters: job_name, job_type, and job_action. Add job_type => 'PLSQL_BLOCK', and also enabled => true to make the job start running immediately.
BEGIN
  DBMS_SCHEDULER.CREATE_JOB (
    job_name        => 'parseMsg',
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'begin S_IN_TDK.parseMsg; end;',
    repeat_interval => 'FREQ=SECONDLY;INTERVAL=10',
    --job_style     => 'LIGHTWEIGHT',
    enabled         => true,
    comments        => 'Job that polls device n2 every 10 seconds');
END;
/
Use this query to check the job status:
select *
from dba_scheduler_job_run_details
where job_name = 'PARSEMSG'
order by log_date desc;

Postgres query over ODBC an order of magnitude slower?

We have an application which gets some data from a PostgreSQL 9.0.3 database through the psqlodbc 09.00.0200 driver in the following way:
1) SQLExecDirect with START TRANSACTION
2) SQLExecDirect with
DECLARE foo SCROLL CURSOR FOR
SELECT table.alotofcolumns
FROM table
ORDER BY name2, id LIMIT 10000
3) SQLPrepare with
SELECT table.alotofcolumns, l01.languagedescription
FROM fetchcur('foo', ?, ?) table (column definitions)
LEFT OUTER JOIN languagetable l01 ON (l01.lang = 'EN'
AND l01.type = 'some type'
AND l01.grp = 'some group'
AND l01.key = table.somecolumn)
[~20 more LEFT OUTER JOINS in the same style, but for an other column]
4) SQLExecute with param1 set to SQL_FETCH_RELATIVE and param2 set to 1
5) SQLExecute with param1 set to SQL_FETCH_RELATIVE and param2 set to -1
6) SQLExecute with param1 set to SQL_FETCH_RELATIVE and param2 set to 0
7) deallocate all, close cursor, end transaction
The function fetchcur executes FETCH RELATIVE $3 IN $1 INTO rec, where rec is a record, and returns that record (a reconstructed sketch follows the timings below). Steps 4-6 are executed again and again on user request, and there are a lot more queries executed in this transaction in the meantime. It can also take quite some time before another user request is made. Usually the queries take this long:
4) ~ 130 ms
5) ~ 115 ms
6) ~ 110 ms
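For context, a function like fetchcur could look roughly like this; this is a reconstruction from the description above, not the original code:
CREATE OR REPLACE FUNCTION fetchcur(cname text, direction integer, amount integer)
RETURNS record AS $$
DECLARE
  rec record;
BEGIN
  -- direction carries the ODBC fetch orientation; only RELATIVE is sketched here
  EXECUTE 'FETCH RELATIVE ' || amount || ' IN ' || quote_ident(cname) INTO rec;
  RETURN rec;
END;
$$ LANGUAGE plpgsql;
The caller then supplies the column definition list, as in step 3 above.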
This is normally too slow for a fast user experience. So I tried the same statements from the psql command line with \timing on. For steps 3-6 I used these statements:
3)
PREPARE p_foo (INTEGER, INTEGER) AS
SELECT table.alotofcolumns, l01.languagedescription
FROM fetchcur('foo', $1, $2) table (column definitions)
LEFT OUTER JOIN languagetable l01 ON (l01.lang = 'EN'
AND l01.type = 'some type'
AND l01.grp = 'some group'
AND l01.key = table.somecolumn)
[~20 more LEFT OUTER JOINS in the same style, but for an other column]
4-6)
EXPLAIN ANALYZE EXECUTE p_foo (6, x);
For the first EXECUTE it took 89 ms, but then it went down to ~7 ms. Even if I wait several minutes between the executes, it stays under 10 ms per query. So where could the additional 100 ms have gone? The application and database are on the same system, so network delay shouldn't be an issue. Each LEFT OUTER JOIN only returns one row, and only one column of that result is added to the result set. So the result is one row with ~200 columns, mostly of type VARCHAR and INTEGER. That shouldn't be so much data that it takes around 100 ms to transfer on the same machine. Any hints would be helpful.
The machine has 2 GB of RAM and parameters are set to:
shared_buffers = 512MB
effective_cache_size = 256MB
work_mem = 16MB
maintenance_work_mem = 256MB
temp_buffers = 8MB
wal_buffers= 1MB
EDIT: I just found out how to create a mylog from psqlodbc, but I can't find timing values in there.
EDIT2: I also managed to add a timestamp to every line. This shows that it really takes >100 ms until a response is received by the psqlodbc driver. So I tried again with psql, adding the option -h 127.0.0.1 to make sure it also goes over TCP/IP. The result with psql is <10 ms. How is this possible?
00:07:51.026 [3086550720][SQLExecute]
00:07:51.026 [3086550720]PGAPI_Execute: entering...1
00:07:51.026 [3086550720]PGAPI_Execute: clear errors...
00:07:51.026 [3086550720]prepareParameters was not called, prepare state:3
00:07:51.026 [3086550720]SC_recycle_statement: self= 0x943b1e8
00:07:51.026 [3086550720]PDATA_free_params: ENTER, self=0x943b38c
00:07:51.026 [3086550720]PDATA_free_params: EXIT
00:07:51.026 [3086550720]Exec_with_parameters_resolved: copying statement params: trans_status=6, len=10128, stmt='SELECT [..]'
00:07:51.026 [3086550720]ResolveOneParam: from(fcType)=-15, to(fSqlType)=4(23)
00:07:51.026 [3086550720]cvt_null_date_string=0 pgtype=23 buf=(nil)
00:07:51.026 [3086550720]ResolveOneParam: from(fcType)=4, to(fSqlType)=4(23)
00:07:51.026 [3086550720]cvt_null_date_string=0 pgtype=23 buf=(nil)
00:07:51.026 [3086550720] stmt_with_params = 'SELECT [..]'
00:07:51.027 [3086550720]about to begin SC_execute
00:07:51.027 [3086550720] Sending SELECT statement on stmt=0x943b1e8, cursor_name='SQL_CUR0x943b1e8' qflag=0,1
00:07:51.027 [3086550720]CC_send_query: conn=0x9424668, query='SELECT [..]'
00:07:51.027 [3086550720]CC_send_query: conn=0x9424668, query='SAVEPOINT _EXEC_SVP_0x943b1e8'
00:07:51.027 [3086550720]send_query: done sending query 35bytes flushed
00:07:51.027 [3086550720]in QR_Constructor
00:07:51.027 [3086550720]exit QR_Constructor
00:07:51.027 [3086550720]read 21, global_socket_buffersize=4096
00:07:51.027 [3086550720]send_query: got id = 'C'
00:07:51.027 [3086550720]send_query: ok - 'C' - SAVEPOINT
00:07:51.027 [3086550720]send_query: setting cmdbuffer = 'SAVEPOINT'
00:07:51.027 [3086550720]send_query: returning res = 0x8781c90
00:07:51.027 [3086550720]send_query: got id = 'Z'
00:07:51.027 [3086550720]QResult: enter DESTRUCTOR
00:07:51.027 [3086550720]QResult: in QR_close_result
00:07:51.027 [3086550720]QResult: free memory in, fcount=0
00:07:51.027 [3086550720]QResult: free memory out
00:07:51.027 [3086550720]QResult: enter DESTRUCTOR
00:07:51.027 [3086550720]QResult: exit close_result
00:07:51.027 [3086550720]QResult: exit DESTRUCTOR
00:07:51.027 [3086550720]send_query: done sending query 1942bytes flushed
00:07:51.027 [3086550720]in QR_Constructor
00:07:51.027 [3086550720]exit QR_Constructor
00:07:51.027 [3086550720]read -1, global_socket_buffersize=4096
00:07:51.027 [3086550720]Lasterror=11
00:07:51.133 [3086550720]!!! poll ret=1 revents=1
00:07:51.133 [3086550720]read 4096, global_socket_buffersize=4096
00:07:51.133 [3086550720]send_query: got id = 'T'
00:07:51.133 [3086550720]QR_fetch_tuples: cursor = '', self->cursor=(nil)
00:07:51.133 [3086550720]num_fields = 166
00:07:51.133 [3086550720]READING ATTTYPMOD
00:07:51.133 [3086550720]CI_read_fields: fieldname='id', adtid=23, adtsize=4, atttypmod=-1 (rel,att)=(0,0)
[last two lines repeated for the other columns]
00:07:51.138 [3086550720]QR_fetch_tuples: past CI_read_fields: num_fields = 166
00:07:51.138 [3086550720]MALLOC: tuple_size = 100, size = 132800
00:07:51.138 [3086550720]QR_next_tuple: inTuples = true, falling through: fcount = 0, fetch_number = 0
00:07:51.139 [3086550720]qresult: len=3, buffer='282'
[last line repeated for the other columns]
00:07:51.140 [3086550720]end of tuple list -- setting inUse to false: this = 0x87807e8 SELECT 1
00:07:51.140 [3086550720]_QR_next_tuple: 'C' fetch_total = 1 & this_fetch = 1
00:07:51.140 [3086550720]QR_next_tuple: backend_rows < CACHE_SIZE: brows = 0, cache_size = 0
00:07:51.140 [3086550720]QR_next_tuple: reached eof now
00:07:51.140 [3086550720]send_query: got id = 'Z'
00:07:51.140 [3086550720] done sending the query:
00:07:51.140 [3086550720]extend_column_bindings: entering ... self=0x943b270, bindings_allocated=166, num_columns=166
00:07:51.140 [3086550720]exit extend_column_bindings=0x9469500
00:07:51.140 [3086550720]SC_set_Result(943b1e8, 87807e8)
00:07:51.140 [3086550720]QResult: enter DESTRUCTOR
00:07:51.140 [3086550720]retval=0
00:07:51.140 [3086550720]CC_send_query: conn=0x9424668, query='RELEASE _EXEC_SVP_0x943b1e8'
00:07:51.140 [3086550720]send_query: done sending query 33bytes flushed
00:07:51.140 [3086550720]in QR_Constructor
00:07:51.140 [3086550720]exit QR_Constructor
00:07:51.140 [3086550720]read -1, global_socket_buffersize=4096
00:07:51.140 [3086550720]Lasterror=11
00:07:51.140 [3086550720]!!! poll ret=1 revents=1
00:07:51.140 [3086550720]read 19, global_socket_buffersize=4096
00:07:51.140 [3086550720]send_query: got id = 'C'
00:07:51.140 [3086550720]send_query: ok - 'C' - RELEASE
00:07:51.140 [3086550720]send_query: setting cmdbuffer = 'RELEASE'
00:07:51.140 [3086550720]send_query: returning res = 0x877cd30
00:07:51.140 [3086550720]send_query: got id = 'Z'
00:07:51.140 [3086550720]QResult: enter DESTRUCTOR
00:07:51.140 [3086550720]QResult: in QR_close_result
00:07:51.140 [3086550720]QResult: free memory in, fcount=0
00:07:51.140 [3086550720]QResult: free memory out
00:07:51.140 [3086550720]QResult: enter DESTRUCTOR
00:07:51.140 [3086550720]QResult: exit close_result
00:07:51.140 [3086550720]QResult: exit DESTRUCTOR
EDIT3: I realized I didn't use the same query from the mylog in my earlier psql test. It seems psqlodbc doesn't issue a PREPARE for SQLPrepare and SQLExecute; it splices in the parameter values and sends the full query. As araqnid suggested, I set the log_duration parameter to 0 and compared the results from the PostgreSQL log with those from the app and psql. The results are as follows:

                             psql/app   pglog
query executed from app        110 ms   70 ms
psql with PREPARE/EXECUTE       10 ms    5 ms
psql with full SELECT           85 ms   70 ms

So how should I interpret these values? It seems most of the time is spent sending the full query (10000 chars) to the database and generating an execution plan. If that is true, changing the calls to SQLPrepare and SQLExecute to explicit PREPARE/EXECUTE statements executed over SQLExecDirect could solve the problem. Any objections?
I finally found the problem: by default, psqlodbc's SQLPrepare/SQLExecute doesn't execute a PREPARE/EXECUTE. The driver builds the full SELECT itself and sends that.
The solution is to add UseServerSidePrepare=1 to odbc.ini, or to the ConnectionString for SQLDriverConnect. The total execution time for one query, measured from the application, dropped from >100 ms to 5-10 ms.
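For reference, the odbc.ini DSN entry could look like this (DSN name and connection details are illustrative; the last line is the relevant change):
[mydb]
Driver               = PostgreSQL
Servername           = 127.0.0.1
Database             = mydb
UseServerSidePrepare = 1
In a SQLDriverConnect connection string, the same option is appended as ;UseServerSidePrepare=1;.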
I don't think the timings between psql and your program are comparable.
Maybe I'm missing something, but in psql you are only preparing the statements, never really fetching the data. EXPLAIN ANALYZE doesn't send the result data to the client either.
So the time difference is most probably the network traffic needed to send all rows from the server to the client.
The only way to reduce this time is to either get a faster network or to select fewer columns. Do you really need all the columns included in "alotofcolumns"?

Rails: multiple statements with ; doesn't work... :s

I have this code, but I can't make it work:
images = Image.find_by_sql('PREPARE stmt FROM \' SELECT * FROM images AS i WHERE i.on_id = 1 AND i.on_type = "profile" ORDER BY i.updated_at LIMIT ?, 6\'; SET @lower_limit := ((5 DIV 6) * 6); EXECUTE stmt USING @lower_limit;')
I got this error:
Mysql::Error: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'SET @lower_limit := ((5 DIV 6) * 6); EXECUTE stmt USING @lower_limit' at line 1: PREPARE stmt FROM ' SELECT * FROM images AS i WHERE i.on_id = 1 AND i.on_type = "profile" ORDER BY i.updated_at LIMIT ?, 6'; SET @lower_limit := ((5 DIV 6) * 6); EXECUTE stmt USING @lower_limit;
but if I run the same statements in a SQL client, they work:
PREPARE stmt FROM ' SELECT * FROM images AS i WHERE i.on_id = 1 AND i.on_type = "profile" ORDER BY i.updated_at LIMIT ?, 6'; SET @lower_limit := ((5 DIV 6) * 6); EXECUTE stmt USING @lower_limit;
SOLVED:
This generates two queries and is worse, but now I can get the offset of the image.
The other way was with just one query, but I would not have got the offset of the image, and I couldn't make it work anyway.
def self.get_image_offset(id)
  # position of the image when ordered by updated_at
  # (note: interpolating id directly into SQL is only safe if id is trusted)
  Image.find_by_sql("SELECT COUNT(id) AS pos FROM images WHERE updated_at <= (SELECT updated_at FROM images WHERE id = #{id})")[0].pos.to_i
end

def self.get_group_offset(id, per_block, image_offset = nil)
  image_offset ||= Image.get_image_offset(id)
  group_offset = (image_offset / per_block).floor * per_block
  { :image_offset => image_offset, :group_offset => group_offset,
    :group_number => (group_offset + per_block) / per_block }
end
You will be better off using something like execute [1] if you are writing the whole SQL yourself (which is not really the 'Rails way', but that's a whole other story altogether).
[1] http://api.rubyonrails.org/classes/ActiveRecord/ConnectionAdapters/DatabaseStatements.html#M001934
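A sketch of that approach for the query above, issuing each statement separately on the same connection so the prepared statement and the user variable stay in session scope (the MySQL adapter won't accept several ;-separated statements in a single call):
conn = ActiveRecord::Base.connection
conn.execute("PREPARE stmt FROM 'SELECT * FROM images AS i WHERE i.on_id = 1 AND i.on_type = \"profile\" ORDER BY i.updated_at LIMIT ?, 6'")
conn.execute("SET @lower_limit := ((5 DIV 6) * 6)")
result = conn.execute("EXECUTE stmt USING @lower_limit")
Whether that beats the two-query solution above is debatable, but it sidesteps the multi-statement restriction.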