Hello,
First, thank you for helping me solve my previous problem.
I'm really a newbie at using PostgreSQL.
Now I have a new problem.
I run a select statement like this one:
select * from company where id=10;
When I look at the query in pg_stat_statements, I just get it like this:
select * from company where id=?;
In the result the value of id is missing.
How can I get the complete query without the missing value?
Thank you :)
Alternatively, you could set log_min_duration_statement to 0, which will lead Postgres to log every statement.
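A minimal sketch of that, assuming PostgreSQL 9.4+ so ALTER SYSTEM is available (on older versions, edit postgresql.conf instead):

-- log every statement, regardless of how long it runs
ALTER SYSTEM SET log_min_duration_statement = 0;
-- reload the configuration without restarting the server
SELECT pg_reload_conf();

The statements then appear in the server log with their literal values intact.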
pg_stat_statements is meant for stats, and these are aggregated; if every lookup value were kept in there, the stats would be useless because it would be hard to group them.
If you want to understand a query, just run it with EXPLAIN ANALYZE and you will get the query plan.
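For the query from the question, that looks like this; the output is the actual execution plan with row counts and timings:

EXPLAIN ANALYZE
select * from company where id=10;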
Related
In the SELECT clause I have SELECT isnull(client,'') + '-' + isnull(supplier,''). Is it OK to write GROUP BY client, supplier, or must I write GROUP BY isnull(client,'') + '-' + isnull(supplier,'')?
It's better to GROUP BY client, supplier. That way, if there's an index available it can be used. While the other option also works, it would require a full table scan in every case.
Just list the column names. You can verify this by executing it, and also look at execution plans to verify index usage.
GROUP BY client,supplier
You can directly say group by client, supplier
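Putting it together, a minimal sketch (isnull is SQL Server syntax; the table name orders is an assumption, since the question doesn't name one):

SELECT isnull(client, '') + '-' + isnull(supplier, '') AS client_supplier
FROM orders
GROUP BY client, supplier;

This is valid because every column referenced in the SELECT expression appears in the GROUP BY list.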
I am sure this is about as simple as a question can get, but I have been stumped on it, so I figured I would ask in the hope of a quick response. Using an OLEDB connection I want to do a SELECT statement, but the table I am selecting from also has a table member, which seems to be messing up my results.
Normally I would write this to get the column "col1":
SELECT lib1.table.col1 FROM lib1.table
The table I need the information from has a "submember". From what I have gathered, the syntax is something like this:
SELECT lib1.table(submember).col1 FROM lib1.table(submember)
The problem is that the results give me every column within the table, not just my "col1" data. I hope I have explained well what I am looking for. Thanks ahead of time to anyone who helps.
You should be able to create an ALIAS in QTEMP:
CREATE ALIAS QTEMP.TABLE FOR LIB1.TABLE (SUBMEMBER)
And then query through the temporarily created alias:
SELECT COL1 FROM QTEMP.TABLE
It will be automatically removed when your connection ends.
create alias library.aliasname for library.table(member)
Then do the select on the alias
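For example, using the names from that statement:

select col1 from library.aliasname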
Does anyone know if there is a way to check the status of a job currently running in the database, or what percentage of it has actually been executed?
I am running a job that is taking a long time, so I would like to check what percentage of the query has been executed. I am running the query in Oracle SQL Developer.
Due to the set-based nature of SQL and database processing, you can't really get a "% complete" for a query, since the Oracle engine doesn't really know for sure. You could try looking at the view v$session_longops to see how far along parts of your SQL have gone (a large hash join or full table scan may show up there). Take a look at this Ask Tom for more info.
If your job has multiple SQL statements and you're trying to track how far along you are after each one, you could add some code that inserts a status update into a control table after each statement, as sketched below.
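A minimal sketch of that approach (the table and the names in it are made up for illustration):

-- one-time setup: a control table to record progress
CREATE TABLE job_progress (
  job_name  VARCHAR2(50),
  step_name VARCHAR2(100),
  logged_at DATE DEFAULT SYSDATE
);

-- then, after each statement in the job:
INSERT INTO job_progress (job_name, step_name)
VALUES ('nightly_load', 'step 1 of 4 complete');
COMMIT;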
The output_rows column in this query might be useful if your query does not show up in v$session_longops. The problem is that you don't know what the total number of output rows will be.
select plan_operation, plan_options, plan_object_name, plan_object_type, plan_bytes, plan_time, plan_cpu_cost, plan_io_cost, output_rows, workarea_mem
from v$sql_plan_monitor
where sid = 2646
and status = 'EXECUTING'
order by plan_line_id
You can check gv$session_longops where time_remaining > 0:
select sid, target_desc, (sofar*100)/totalwork as percentage_complete
from gv$session_longops
where time_remaining > 0;
This would give you the percentage complete.
Assuming the result of the first query in A) is (envelopecontrolnumber, partnerid, docfileid) = ('000000400', 31, 35):
A)
select envelopecontrolnumber, partnerid, docfileid
from envelopeheader
where envelopeid ='LT01ENV1107010000050';
select count(*)
from envelopeheader
where envelopecontrolnumber = '000000400'
and partnerid= 31 and docfileid<>35 ;
or
B)
select count(*)
from envelopeheader a
join envelopeheader b on a.envelopecontrolnumber = b.envelopecontrolnumber
and a.partnerid= b.partnerid
and a.envelopeid = 'LT01ENV1107010000050'
and b.docfileid <> a.docfileid;
I am using the above query in a SQL function. I tried the queries in pgAdmin (Postgres); it shows 16 ms for A) and for B). When I tried the two queries from A) separately in pgAdmin, it still showed 16 ms for each one, making 32 ms, which is wrong, because when you run both queries in one go it shows 16 ms. Please suggest which one is better. I am using a Postgres database.
The time displayed includes the time to:
send query to server
parse query
plan query
execute query
send results back to client
process all results
Try a simple query like "SELECT 1". You'll probably get 16 ms too.
It's quite likely you are simply measuring the ping time to your server.
If you want to know how much time on the server a query uses, you need EXPLAIN ANALYZE.
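For example, to time the second query of A) on the server itself (the reported time excludes the network round trip):

EXPLAIN ANALYZE
select count(*)
from envelopeheader
where envelopecontrolnumber = '000000400'
and partnerid = 31 and docfileid <> 35;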
Option 1:
Run query A.
Get results.
Use these results to create query B.
Send query B.
Get results.
Option 2:
Run combined query AB.
Get results.
So, if you are using this from a client, connecting to Postgres, use the second option. There is an overhead for sending a query to the db and getting results back.
If you are using it inside an SQL function or procedure, the difference is probably negligible. I would still use the second option, though. And in either case, I would check that queries B or AB are optimized (check the query plan, whether indexes are used, etc.).
Go with option 1: the two queries are unrelated, so it is more efficient to do them separately.
Option A will be faster, since you are only interested in the count.
The join will create a temporary structure to join the data based on the conditions, and then perform the counting operation.
Hence option A is better and faster.
Does anyone know how many values I can give in a WHERE ... IN clause? I have 25,000 values in a WHERE IN clause and MySQL is unable to execute the query. Any thoughts? Awaiting your thoughts.
Although this is old, it still shows up in search results so is worth answering.
There is no hard-coded maximum in MySQL for the length of a query. This includes all parts of the query such as the WHERE clause.
However, there is a value called max_allowed_packet which determines the largest query you can run on the MySQL server process. It isn't to do with the number of elements in the query, but the total length of the query. So
SELECT * FROM mytable WHERE mycol IN (1,2,3);
is less likely to hit the limit than
SELECT * FROM mytable WHERE mycol IN ('This string','That string','Tother string');
The value of max_allowed_packet is configurable from server to server. But almost certainly, if you find yourself hitting the limit because you're writing SQL statements of epic length (rather than dealing with binary data which is a legitimate reason to hit it), then you need to re-think your SQL.
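You can inspect the current limit and, if genuinely needed, raise it (SET GLOBAL requires the appropriate privilege and applies to new connections):

-- show the current limit in bytes
SHOW VARIABLES LIKE 'max_allowed_packet';
-- raise it to 64 MB for the running server
SET GLOBAL max_allowed_packet = 67108864;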
I think that if this restriction is a problem then you're doing something wrong.
Perhaps you could store the data from your where clause in a table and then join with it. This would probably be more efficient.
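A minimal sketch of that idea, reusing the hypothetical names from the earlier example (mytable, mycol):

-- load the lookup values into a temporary table once
CREATE TEMPORARY TABLE lookup_vals (val INT PRIMARY KEY);
INSERT INTO lookup_vals (val) VALUES (1), (2), (3);  -- ...bulk-load the full set in practice

-- then join instead of using a huge IN (...) list
SELECT t.*
FROM mytable t
JOIN lookup_vals l ON l.val = t.mycol;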
I think it is something to do with execution time.
I think you are doing something like this (correct me if I am wrong):
SELECT * FROM table WHERE V1='A1' AND V2='A2' AND V3='A3' AND ... Vn='An'
There is always an efficient way to do your SELECT in your database. When working with a database, it is important to keep in mind that seconds matter.
If you can share what your query looks like, then we can help you write an efficient SELECT statement.
I wish you success.