In the SELECT clause I have SELECT isnull(client,'')+'-'+isnull(supplier,''). Is it OK to write GROUP BY client, supplier, or must I write GROUP BY isnull(client,'')+'-'+isnull(supplier,'')?
It's better to GROUP BY client, supplier. That way, if there is a suitable index available, it can be used. The other form also works, but it would force a full table scan in every case.
Just list the column names. You can verify this by executing both versions, and compare their execution plans to check index usage.
GROUP BY client, supplier
You can directly say group by client, supplier
Please see the sample data and the result after running the query. [Screenshot of sample data and result omitted.]
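Putting it together, a minimal sketch (SQL Server syntax, matching the ISNULL in the question; the orders table name is an assumption):

-- the ISNULL expression is built entirely from the grouped columns, so grouping by the raw columns is valid
SELECT ISNULL(client, '') + '-' + ISNULL(supplier, '') AS client_supplier
FROM orders
GROUP BY client, supplier;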
I am trying to fetch some information from multiple tables using a SELECT statement. All the tables are related to one single part of my application (client). There are multiple entries in the application; for example, say 180 entries. The line numbers of these entries are not stored in any table, but while fetching the information with the SELECT statement I would like to know which line of the entries in the application my result comes from.
Is there any way to achieve this?
I am using an Oracle database.
Sounds like you could use the DBMS_APPLICATION_INFO package to do what you want:
https://docs.oracle.com/database/121/ARPLS/d_appinf.htm#ARPLS003
https://oracle-base.com/articles/8i/dbms_application_info
http://www.dba-oracle.com/t_in_dbms_application_info_set_module__what_makes_the_module_name.htm
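A minimal sketch of tagging work with that package and reading the tag back (the module and action values are illustrative, not from the question):

-- tag the current session before running the statement for a given entry line
begin
  dbms_application_info.set_module(module_name => 'client', action_name => 'entry line 42');
end;
/
-- the tag then shows up next to the session in v$session
select sid, module, action from v$session where module = 'client';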
You can add the pseudo-column rownum to your query with something like:
select rownum, q.* from (... your query with order by clause...) q;
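For example, a minimal sketch (the entries table and its columns are assumptions):

-- number the rows after ordering, so line_no reflects the position of each entry
select rownum as line_no, q.*
from (select entry_id, client, entry_text
      from entries
      order by entry_id) q;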
I want to query a table to get how many columns are in that table and the name of each column. This post tells us how to do it in the bq command-line interface, but can we do it using a query?
From the following doc, it seems that the meta-tables won't give this kind of information, so I guess the answer is no.
https://cloud.google.com/bigquery/querying-data#using_meta-tables
You can try this:
SELECT count(*)
FROM `project`.dataset.INFORMATION_SCHEMA.COLUMN_FIELD_PATHS
WHERE table_name = "table_name"
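If you also need the column names rather than just a count, the INFORMATION_SCHEMA.COLUMNS view should work as well; a sketch with the same project/dataset/table placeholders:

-- one row per column, with its name and type
SELECT column_name, data_type
FROM `project`.dataset.INFORMATION_SCHEMA.COLUMNS
WHERE table_name = "table_name"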
Hello,
First, thank you for helping me solve my problem before. I'm a real newbie with PostgreSQL.
Now I have a new problem. I run a SELECT statement like this one:
select * from company where id=10;
when I see the query in pg_stat_statements, I just get it like this:
select * from company where id=?;
In the result the value of id is missing. How can I get the complete query without the value missing?
Thank you :)
Alternatively you could set log_min_duration_statement to 0, which will lead Postgres to log every statement.
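A minimal sketch of that setting (the value is in milliseconds; 0 logs every statement along with its duration):

ALTER SYSTEM SET log_min_duration_statement = 0;
SELECT pg_reload_conf();  -- apply the change without a restart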
pg_stat_statements is meant for statistics, and those are aggregated; if every lookup value appeared in there, the stats would be useless because they would be hard to group.
If you want to understand a query, just run it with EXPLAIN ANALYZE and you will get the query plan.
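For example, against the query from the question:

-- runs the query and prints the actual plan with timings
explain analyze select * from company where id = 10;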
Assume mytable is an Oracle table and it has a field called id. The datatype of id is NUMBER(8). Compare the following queries:
select * from mytable where id like '715%'
and
select * from mytable where id between 71500000 and 71599999
I would think the second is more efficient, since I think a number comparison would require fewer assembly-language instructions than a string comparison. I need a confirmation or correction. Please confirm/correct, and add any further comments related to either operator.
UPDATE: I forgot to mention one important piece of info: id in this case must be an 8-digit number.
If you only want values between 71500000 and 71599999 then yes, the second one is much more efficient. The first one would also return values between 7150-7159, 71500-71599 and so forth. You would either need to sift through unnecessary results or write another couple of lines of code to filter them out. The second option is definitely more efficient for what you seem to want to do.
It seems the execution plan of the second query is more efficient.
The first query does a full table scan, whereas the second query does not.
My test data and the execution plans of both queries bore this out. [Screenshots of the test data and of each query's execution plan omitted.]
I don't like the idea of using LIKE with a numeric column.
Also, it may not give the results you are looking for.
If you have a value of 715000000, it will show up in the query result, even though it is larger than 71599999.
Also, I do not like between on principle.
If a thing is between two other things, it should not include those two other things. But this is just a personal annoyance.
I prefer to use >= and <=. This avoids confusion when I read the query. In addition, sometimes I have to change the query to something like >= a and < c. If I started with the BETWEEN operator, I would have to rewrite it when I don't want it to be inclusive.
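For example, the BETWEEN query from the question rewritten with explicit comparisons:

select * from mytable where id >= 71500000 and id <= 71599999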
Harv
In addition to the other points raised, using LIKE in the manner you suggest would cause Oracle not to use any indexes on the ID column, due to the implicit conversion of the data from number to character, resulting in a full table scan when using LIKE versus an index range scan when using BETWEEN. Assuming, of course, you have an index on ID. Even if you don't, Oracle will still have to do the type conversion on each value it scans in the LIKE case, which it won't have to do in the other.
You can use a math function; otherwise you have to use the TO_CHAR function in order to use LIKE, but that will cause performance problems.
select * from mytable where floor(id / 100000) = 715
or
select * from mytable where floor(id / 100000) = TO_NUMBER('715') -- this variant is parametric
Does anyone know how many values I can put in a WHERE ... IN clause? I have 25,000 values in a WHERE IN clause and MySQL is unable to execute the query. Any thoughts?
Although this is old, it still shows up in search results so is worth answering.
There is no hard-coded maximum in MySQL for the length of a query. This includes all parts of the query such as the WHERE clause.
However, there is a value called max_allowed_packet which determines the largest query you can run on the MySQL server process. It isn't to do with the number of elements in the query, but the total length of the query. So
SELECT * FROM mytable WHERE mycol IN (1,2,3);
is less likely to hit the limit than
SELECT * FROM mytable WHERE mycol IN ('This string','That string','Tother string');
The value of max_allowed_packet is configurable from server to server. But almost certainly, if you find yourself hitting the limit because you're writing SQL statements of epic length (rather than dealing with binary data which is a legitimate reason to hit it), then you need to re-think your SQL.
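You can check the limit on your server with a standard MySQL command:

-- shows the current maximum packet (and therefore query) size in bytes
SHOW VARIABLES LIKE 'max_allowed_packet';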
I think that if this restriction is a problem then you're doing something wrong.
Perhaps you could store the data from your where clause in a table and then join with it. This would probably be more efficient.
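A minimal sketch of that approach (the table and column names are assumptions):

-- load the lookup values once, then join instead of using a huge IN list
CREATE TEMPORARY TABLE lookup_vals (val INT NOT NULL, PRIMARY KEY (val));
INSERT INTO lookup_vals (val) VALUES (1), (2), (3);  -- bulk-load all 25,000 values here
SELECT t.*
FROM mytable t
JOIN lookup_vals l ON l.val = t.mycol;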
I think it is something to do with execution time.
I think you are doing something like this (correct me if I am wrong):
SELECT * FROM table WHERE V1='A1' AND V2='A2' AND V3='A3' AND ... Vn='An'
There is always an efficient way to write your SELECT against your database. When working with a database it is important to keep in mind that seconds matter.
If you can share what your query looks like, we can help you write an efficient SELECT statement.
I wish you success.