I am using Oracle 11g and I need to know whether a specific point is inside the buffer of another point from a table with a spatial index. I am using the following statement:
SELECT A.fieldX
FROM TABLE A
WHERE SDO_OVERLAPBDYDISJOINT(sdo_geom.sdo_buffer(A.geometry,2,0.1),
      SDO_GEOMETRY(2001,NULL,SDO_POINT_TYPE(497644.6,2432725.8,NULL),NULL,NULL)) = 'TRUE';
And I obtain the following error:
13226. 00000 - "interface not supported without a spatial index"
Cause: The geometry table does not have a spatial index.
Action: Verify that the geometry table referenced in the spatial operator
has a spatial index on it.
The SDO_OVERLAPBDYDISJOINT operator only works on geometries from tables with a spatial index, and I understand that this error is caused by the buffer operator, but if I invert the order and put the SDO_POINT_TYPE first, I get the same error. Is there any way to use this operator, or a similar one, without a spatial index?
I don't want to use PL/SQL because I need to use the statement in VBA code.
Thanks a lot!!!
What you essentially want is to find all the geometries that are within some distance of another. This is more easily done, and much more efficient, with SDO_WITHIN_DISTANCE:
SELECT A.fieldX
FROM TABLE A
WHERE sdo_within_distance(A.geometry, SDO_GEOMETRY(2001,NULL,SDO_POINT_TYPE(497644.6,2432725.8,NULL),NULL,NULL), 'distance=2') = 'TRUE';
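For projected (planar) coordinates like these, the buffer-contains-point test reduces to a simple distance comparison, which is why SDO_WITHIN_DISTANCE fits naturally here. A minimal Python sketch of that equivalence (the stored point is hypothetical):

```python
import math

def within_distance(geom_xy, point_xy, distance):
    """Planar equivalent of the buffer test: buffering geom_xy by `distance`
    and asking whether point_xy falls inside that buffer is the same as
    asking whether the two points are within `distance` units of each other."""
    dx = geom_xy[0] - point_xy[0]
    dy = geom_xy[1] - point_xy[1]
    return math.hypot(dx, dy) <= distance

stored = (497644.0, 2432725.0)        # hypothetical A.geometry point
query_point = (497644.6, 2432725.8)   # the SDO_POINT_TYPE from the query
print(within_distance(stored, query_point, 2))   # → True
```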
I think your problem is that A.geometry is indexed, but its buffer is not.
The first thing you should try is swapping the arguments, so the indexed geometry comes first:
SDO_OVERLAPBDYDISJOINT(A.geometry, sdo_geom.sdo_buffer(sdo_point(...),2,0.1))
And while you're at it, it would be more correct to use SDO_INSIDE here.
If this does not work, you should check whether your index is indeed OK. You can easily test it using a specific id from your table (let's say 10) and running:
select a.id
from your_table a, your_table b
where a.id = b.id and b.id = 10
and sdo_equals(a.geometry, b.geometry) = 'TRUE';
If it returns your id (10 in this example), your index is OK.
Related
I have a problem where the fix is to change which predicate gets evaluated first, but I'm not sure whether that is even possible, and I don't know enough about how it works.
To give an example:
Here is a table
When you filter this using the following query:
select * from pcparts where Parts = 'Monitor' and id = 255322 and Brand = 'Asus'
logically this should be correct, as the Asus component with a character in its ID will be filtered out, preventing an ORA-01722 error.
But in my experience this is inconsistent.
I tried using the same filtering in two different DB connections, the first one didn't get the error (as expected) but other one got an ORA-01722 error.
Checking the explain plans, the difference between the two DBs is the following:
I was wondering whether it is possible to make sure that Parts gets filtered before ID, but I was unable to find anything while searching. Is this even possible? If not, what is a fix for this issue that does not rely on TO_CHAR?
I assume you want to (sort of) fix a buggy program without changing the source code.
According to your image, you are using "Filter Predicates", which normally means Oracle isn't using an index (though I don't know what tool displays execution plans this way).
If you have an index on PARTS, Oracle will probably use this index.
create index myindex on mytable (parts);
If Oracle thinks this index is inefficient, it may still use a full table scan. You may try to 'fake' Oracle into thinking this is an efficient index by lying about the number of distinct values (the more distinct values, the more efficient the index looks):
exec dbms_stats.set_index_stats(ownname => 'myname', indname => 'myindex', numdist => 100000000)
Note: this WILL impact the performance of other queries using this table.
"Fix" is rather simple: take control over what you're doing.
It is evident that the ID column's datatype is VARCHAR2. Therefore, don't make Oracle guess; tell it what to do.
No:  select * from pcparts where Parts = 'Monitor' and id = 255322 and Brand = 'Asus'
Yes: select * from pcparts where Parts = 'Monitor' and id = '255322' and Brand = 'Asus'
--------
VARCHAR2 column's value enclosed in single quotes
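To see why the unquoted literal fails: Oracle rewrites id = 255322 as TO_NUMBER(id) = 255322, so the conversion can run against rows whose IDs aren't numeric at all, depending on the plan's predicate order. A small Python sketch simulating this (the rows are hypothetical):

```python
rows = [
    {"id": "255322", "parts": "Monitor",  "brand": "Asus"},
    {"id": "AB1234", "parts": "Keyboard", "brand": "Asus"},  # non-numeric id
]

def filter_unquoted(rows):
    """`id = 255322` makes Oracle convert the column: TO_NUMBER(id) = 255322.
    Whether the conversion hits the non-numeric row depends on the plan's
    predicate order, which is why the error looked inconsistent."""
    return [r for r in rows if int(r["id"]) == 255322]   # int() ~ TO_NUMBER

def filter_quoted(rows):
    """`id = '255322'` is a plain string comparison; nothing can fail."""
    return [r for r in rows if r["id"] == "255322"]

print(filter_quoted(rows))          # safely returns the one matching row
try:
    filter_unquoted(rows)
except ValueError as exc:           # Python's analogue of ORA-01722
    print("conversion failed:", exc)
```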
I have a table with "location" column which is a PostgreSQL "point" type.
I would like to search for many exact points, something like:
SELECT *
FROM my_table
WHERE location IN ('1,1', '2,2')
This doesn't work; it throws an error:
operator does not exist: point = point
To look for an exact point one has to use ~=, but it's only possible to query for one point at a time this way.
I could work around it by using OR, like:
SELECT *
FROM my_table
WHERE (location ~= '0,1' OR location ~= '1,2')
This, however, does not look like an optimal approach, as it stops using the (GiST) index for more than 5 ORs and does a sequential scan instead.
Is there a way to have a simple and optimal query to get records by looking for exact many points?
One method is to use any:
WHERE location ~= any (array['1,1'::point, '2,2'::point])
I would expect this to use available indexes, but you would have to check on your data.
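In plain terms, ~= any(array[...]) asks whether the point exactly equals any element of the array. A tiny Python sketch of those semantics (the rows are hypothetical):

```python
# Hypothetical rows: (location, label) pairs standing in for my_table.
rows = [((1.0, 1.0), "a"), ((2.0, 2.0), "b"), ((3.0, 3.0), "c")]

# The array in `~= any(array['1,1'::point, '2,2'::point])`.
targets = {(1.0, 1.0), (2.0, 2.0)}

# `location ~= any(...)` is true when the point exactly equals any element:
matches = [label for point, label in rows if point in targets]
print(matches)   # → ['a', 'b']
```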
I have a Users table containing about 500,000 rows of user data.
The full name of the user is stored in 4 columns, each of type nvarchar(50).
I have a computed column called UserFullName that is equal to the combination of the 4 columns.
I have a stored procedure searching the Users table by name using the LIKE operator, as below:
Select *
From Users
Where UserFullName like N'%' + @FullName + '%'
I have a performance issue while executing this SP; it takes a long time :(
Is there any way to overcome the poor performance of the LIKE operator?
Not while still using the LIKE operator in that fashion. The % at the start means your search needs to read every row and look for a match. If you really need that kind of search you should look into using a full-text index.
Make sure your computed column is indexed; that way it won't have to compute the values each time you SELECT.
Also, depending on your indexing, using PATINDEX might be quicker, but really you should use a fulltext index for this kind of thing:
http://msdn.microsoft.com/en-us/library/ms187317.aspx
It will help if you use an index. You can add one to the column, like this:
ALTER TABLE tablename ADD UNIQUE INDEX (id);
Take a look at this article: http://use-the-index-luke.com/sql/where-clause/searching-for-ranges/like-performance-tuning .
It clearly describes how LIKE works in terms of performance.
Most likely you are facing this issue because the whole table must be traversed due to the leading % symbol.
You should try creating a list of substrings (in a separate table, for example) representing k-mers, and searching for them without a preceding %. An index on that column would also help. You can read more about k-mers here: https://en.m.wikipedia.org/wiki/K-mer .
This will not defeat the index and will make searching more efficient.
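To make the k-mer idea concrete, here is a small Python sketch (hypothetical names, k = 3) of building such a substring table and searching it without a leading wildcard:

```python
def kmers(text, k):
    """All length-k substrings (k-mers) of `text`."""
    return {text[i:i + k] for i in range(len(text) - k + 1)}

def build_kmer_index(names, k=3):
    """Map each k-mer to the set of names containing it. Stored in a side
    table, this turns a substring search into an equality lookup
    (`kmer = ?`), which is index-friendly, unlike LIKE '%...%'."""
    index = {}
    for name in names:
        for mer in kmers(name.lower(), k):
            index.setdefault(mer, set()).add(name)
    return index

def search(index, term, k=3):
    """Intersect the buckets of the term's k-mers, then verify candidates."""
    buckets = [index.get(mer, set()) for mer in kmers(term.lower(), k)]
    candidates = set.intersection(*buckets) if buckets else set()
    return {name for name in candidates if term.lower() in name.lower()}

names = ["Tom Jones", "Thompson", "Maria Silva"]   # hypothetical user names
idx = build_kmer_index(names)
print(search(idx, "Tom"))     # → {'Tom Jones'}
```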
I have the following table structure:
column names: cell     longitude  latitude  bcch    bsic
data types:   varchar  double     double    double  double
keys:         x
I want to know all the cells which are
in 10 km range of each other AND
have the same bcch+bsic.
What would be the spatial sql query for the above requirement?
Due to my limited understanding of PostGIS, feel free to start your answer with "use this database table structure instead" so that it is more GIS oriented (I believe there is a concept of points rather than lat/long columns). I haven't written spatial queries before and am considering buying the "PostGIS in Action" book, but I need to know if what I am trying to do is possible, and how.
Additionally, I would like to mention that I know how to do it in standard SQL. I need a spatial query because there are roughly 10,000 records, and with a standard SQL approach I would need to generate 10,000 * 10,000 rows (all the other cells for each cell) and then query them, which would be highly inefficient.
Denis, that is not true. A GiST index would help here.
Basarat, I'm not quite clear what output you expect. Here is a query that, for each cell, returns those that are within 10 km. First you want to add a geography column and then create a GiST index on it. That's covered in the first chapter of PostGIS in Action.
So let's say you have this new column called geog with a GiST index on it.
Then your query would be:
SELECT c.cell, array_agg(n.cell) As cells_close
FROM cells As c INNER JOIN cells As n ON ST_DWithin(c.geog, n.geog, 10000)
WHERE c.bsic = n.bsic --other criteria go here
GROUP BY c.cell;
If you don't want the output as an array, you can do:
array_to_string(array_agg(n.cell),',') As cell_comma_sep
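For reference, ST_DWithin on a geography column compares great-circle distance in metres against the given threshold. A small Python sketch of the same pairing logic (the cells are hypothetical; self-pairs are excluded here, while the SQL join above would also match each cell with itself):

```python
import math

def haversine_m(lon1, lat1, lon2, lat2):
    """Great-circle distance in metres, which is what ST_DWithin on a
    geography column compares against its distance argument."""
    r = 6_371_000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# hypothetical cells: (name, lon, lat, bsic)
cells = [("A", 13.40, 52.52, 7), ("B", 13.41, 52.53, 7), ("C", 14.00, 52.00, 7)]

# pair cells with matching bsic that lie within 10 km of each other
pairs = [(c[0], n[0])
         for c in cells for n in cells
         if c[0] != n[0] and c[3] == n[3]
         and haversine_m(c[1], c[2], n[1], n[2]) <= 10_000]
print(pairs)
```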
I was curious since I read it in a doc. Does writing
select * from CONTACTS where id = '098' and name like 'Tom%';
speed up the query as opposed to
select * from CONTACTS where name like 'Tom%' and id = '098';
The first has an indexed column on the left side. Does it actually speed things up or is it superstition?
Using PHP and MySQL.
Check the query plans with explain. They should be exactly the same.
This is purely superstition. I see no reason that either query would differ in speed. If it was an OR query rather than an AND query, however, then I could see that having it on the left might speed things up.
Interesting question. I tried this once; the query plans are the same (using EXPLAIN).
But considering short-circuit evaluation, I was wondering too why there is no difference (or does MySQL fully evaluate boolean statements?).
You may be mis-remembering or mis-reading something else, regarding which side the wildcard is on in a string literal in a LIKE predicate. Putting the wildcard on the right (as in your example) allows the query engine to use any indexes that might exist on the table column you are searching (in this case, name). But if you put the wildcard on the left,
select * from CONTACTS where name like '%Tom' and id = '098';
then the engine cannot use any existing index and must do a complete table scan.
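You can see this directly in a query plan. The sketch below uses SQLite purely as a self-contained illustration (the same principle applies in MySQL): the trailing-wildcard pattern can seek into the index on name, while the leading-wildcard pattern forces a full scan.

```python
import sqlite3

con = sqlite3.connect(":memory:")
# SQLite's LIKE optimization needs case-sensitive LIKE with a default index.
con.execute("PRAGMA case_sensitive_like = ON")
con.execute("CREATE TABLE contacts (id TEXT, name TEXT)")
con.execute("CREATE INDEX idx_name ON contacts (name)")

def plan(sql):
    """Return the query plan details as one string."""
    rows = con.execute("EXPLAIN QUERY PLAN " + sql).fetchall()
    return " ".join(r[-1] for r in rows)   # last column is the detail text

# Trailing wildcard: the engine can seek into the index on name (SEARCH).
print(plan("SELECT * FROM contacts WHERE name LIKE 'Tom%'"))
# Leading wildcard: no usable prefix, so the whole table is scanned (SCAN).
print(plan("SELECT * FROM contacts WHERE name LIKE '%Tom'"))
```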