What's wrong? I cannot create this table - sql

This is my first time using Firebird. I am trying to create this table. I checked the docs and it seems okay. What's wrong?
CREATE TABLE ENDERECO
(
    ID_ENDERECO INTEGER GENERATED BY DEFAULT AS IDENTITY PRIMARY KEY,
    RUA VARCHAR(50),
    BAIRRO VARCHAR(35),
    CEP VARCHAR(10),
    COMPLEMENTO VARCHAR(35),
    ECOMERCIO INTEGER(1),
    ESTADO CHAR(2)
)

The problem is your use of INTEGER(1). The data types INTEGER, SMALLINT, and BIGINT do not take a precision in their definition. See also Data Type Declaration Syntax in the Firebird 3 Language Reference:
<domain_or_non_array_type> ::=
    <scalar_datatype>
  | <blob_datatype>
  | [TYPE OF] domain
  | TYPE OF COLUMN rel.col

<scalar_datatype> ::=
    SMALLINT | INT[EGER] | BIGINT
  | FLOAT | DOUBLE PRECISION
  | BOOLEAN
  | DATE | TIME | TIMESTAMP
  | {DECIMAL | NUMERIC} [(precision [, scale])]
  | {VARCHAR | {CHAR | CHARACTER} VARYING} (length)
      [CHARACTER SET charset]
  | {CHAR | CHARACTER} [(length)] [CHARACTER SET charset]
  | {NCHAR | NATIONAL {CHARACTER | CHAR}} VARYING (length)
  | {NCHAR | NATIONAL {CHARACTER | CHAR}} [(length)]
In short, use:
CREATE TABLE ENDERECO
(
    ID_ENDERECO INTEGER GENERATED BY DEFAULT AS IDENTITY PRIMARY KEY,
    RUA VARCHAR(50),
    BAIRRO VARCHAR(35),
    CEP VARCHAR(10),
    COMPLEMENTO VARCHAR(35),
    ECOMERCIO INTEGER,
    ESTADO CHAR(2)
)
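As an aside, if ECOMERCIO is meant to be a yes/no flag, Firebird 3 also supports BOOLEAN (it appears in the <scalar_datatype> grammar quoted above). A sketch, assuming that is the intent:
CREATE TABLE ENDERECO
(
    ID_ENDERECO INTEGER GENERATED BY DEFAULT AS IDENTITY PRIMARY KEY,
    RUA VARCHAR(50),
    BAIRRO VARCHAR(35),
    CEP VARCHAR(10),
    COMPLEMENTO VARCHAR(35),
    ECOMERCIO BOOLEAN,
    ESTADO CHAR(2)
)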


How to create a table that inherits from another table in SQL3?

I have this table salle that has 2 attributes, and a table that inherits from salle, called salleCours, with 3 additional attributes.
When I run the second command in Oracle 11g Express in SQL Command Line, it fails at UNDER with: missing or invalid option.
I don't know if it's a syntax problem or something else.
Create table salle (
    Numero varchar(20) primary key,
    Videoprojecteur char(1)
);

Create table salleCours UNDER salle (
    Capacite number(3),
    Retroprojecteur char(1),
    micro char(1)
);
You want to define an OBJECT type and then create an object table based on it:
CREATE TYPE salle_type AS OBJECT (
    Numero varchar(20),
    Videoprojecteur char(1)
) NOT FINAL;

CREATE TYPE salleCours_type UNDER salle_type (
    Capacite number(3),
    Retroprojecteur char(1),
    micro char(1)
);

CREATE TABLE salle OF salle_type (
    Numero PRIMARY KEY
);
Then you can insert rows of either type:
INSERT INTO salle VALUES( salle_type( 'abc', 'Y' ) );
INSERT INTO salle VALUES( salleCours_type( 'def', 'Y', 42, 'N', 'X' ) );
And, if you want the values:
SELECT s.*,
       TREAT(VALUE(s) AS salleCours_type).Capacite AS capacite,
       TREAT(VALUE(s) AS salleCours_type).Retroprojecteur AS Retroprojecteur,
       TREAT(VALUE(s) AS salleCours_type).micro AS micro
FROM salle s
Which outputs:
NUMERO | VIDEOPROJECTEUR | CAPACITE | RETROPROJECTEUR | MICRO
:----- | :-------------- | -------: | :-------------- | :----
abc    | Y               |     null | null            | null
def    | Y               |       42 | N               | X
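Note that TREAT returns NULL for rows that are not of the requested subtype. If you want only the subtype rows, you can filter with IS OF; a sketch:
SELECT s.Numero
FROM salle s
WHERE VALUE(s) IS OF (salleCours_type);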

How to detect whether an NVARCHAR column contains Latin or Cyrillic chars

I have "Template" table:
CREATE TABLE Template (
ID BIGINT, -- PK
NAME NVARCHAR(255)
)
Column NAME contains russian or english text. How can I move value of this column to RUSSIAN_NAME and ENGLISH_NAME columns in depends on value of NAME column value.
CREATE TABLE Template (
ID BIGINT, -- PK
RUSSIAN_NAME NVARCHAR(255),
ENGLISH_NAME NVARCHAR(255)
)
Try this. (I have no idea what the Russian text means; I just copied it from somewhere.)
DECLARE @tbl TABLE (name NVARCHAR(255), plainLatin NVARCHAR(255), foreignChars NVARCHAR(100));
INSERT INTO @tbl (name) VALUES
    (N'abcd'), (N'слов в тексте'), (N'one more'), (N'с пробелами и без них');

UPDATE @tbl
SET plainLatin   = CASE WHEN PATINDEX('%[^-a-zA-Z0-9 ]%' /*add signs you want to allow*/, name) = 0 THEN name END
   ,foreignChars = CASE WHEN PATINDEX('%[^-a-zA-Z0-9 ]%' /*add signs you want to allow*/, name) > 0 THEN name END;

SELECT * FROM @tbl;
The result
+-----------------------+------------+-----------------------+
| name | plainLatin | foreignChars |
+-----------------------+------------+-----------------------+
| abcd | abcd | NULL |
+-----------------------+------------+-----------------------+
| слов в тексте | NULL | слов в тексте |
+-----------------------+------------+-----------------------+
| one more | one more | NULL |
+-----------------------+------------+-----------------------+
| с пробелами и без них | NULL | с пробелами и без них |
+-----------------------+------------+-----------------------+
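Applied to your actual table, the same test can drive the migration. A sketch, assuming you have added RUSSIAN_NAME and ENGLISH_NAME alongside the existing NAME column:
UPDATE Template
SET ENGLISH_NAME = CASE WHEN PATINDEX('%[^-a-zA-Z0-9 ]%', NAME) = 0 THEN NAME END
   ,RUSSIAN_NAME = CASE WHEN PATINDEX('%[^-a-zA-Z0-9 ]%', NAME) > 0 THEN NAME END;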

Envers generating audit-table schemas with all varchars length 1

I'm experimenting with Envers. I've got it working okay, except that when it generated the audit table for my audited entity it made all the varchar columns length 1, rather than the length of the corresponding column in the base table.
Like so:
Object: dbo.COMPANY_ADDRESS_TB
Column | Type
-----------------------------
ID | int
COMPANY_ID | int
ADDRESS_SEQ_NUM | int
TYPE | varchar(40)
ATTN | varchar(40)
STREET1 | varchar(60)
STREET2 | varchar(60)
STREET3 | varchar(60)
CITY | varchar(40)
STATE | varchar(25)
ZIP | varchar(18)
COUNTRY | varchar(25)
TIMESTAMP | binary(8)
ACTIVE | int
and then
Object: dbo.COMPANY_ADDRESS_TB_AUD
Column | Type
------------------------------
ID | int
REV | int
REVTYPE | tinyint
REVEND | int
ADDRESS_SEQ_NUM | int
addressSeqNum_MOD | bit
TYPE | varchar(1)
addressType_MOD | bit
ATTN | varchar(1)
attn_MOD | bit
STREET1 | varchar(1)
street1_MOD | bit
STREET2 | varchar(1)
street2_MOD | bit
STREET3 | varchar(1)
street3_MOD | bit
CITY | varchar(1)
city_MOD | bit
STATE | varchar(1)
state_MOD | bit
ZIP | varchar(1)
zip_MOD | bit
COUNTRY | varchar(1)
country_MOD | bit
ACTIVE | int
active_MOD | bit
Of course I can change the lengths by hand, but if I start auditing a lot of entities that could get both tedious and error-prone. Here's the code that sets this up:
var properties = new Dictionary<string, string>();
properties[NHibernate.Cfg.Environment.Dialect] = "NHibernate.Dialect.MsSql2008Dialect";
properties[NHibernate.Cfg.Environment.ConnectionDriver] = "NHibernate.Driver.SqlClientDriver";
properties[NHibernate.Cfg.Environment.Hbm2ddlAuto] = "update";
properties[NHibernate.Cfg.Environment.FormatSql] = "true";
properties[NHibernate.Cfg.Environment.ShowSql] = "true";
properties[NHibernate.Cfg.Environment.ConnectionString] = "Data Source=localhost;Initial Catalog=OU_KASH;Integrated Security=True;Asynchronous Processing=true";
var cfg = new Configuration();
cfg.Configure()
.SetProperties(properties)
.AddAssembly(typeof(AliasTb).Assembly.FullName)
;
cfg.SetEnversProperty(ConfigurationKey.StoreDataAtDelete, true);
cfg.SetEnversProperty(ConfigurationKey.AuditStrategy, typeof(NHibernate.Envers.Strategy.ValidityAuditStrategy));
cfg.SetEnversProperty(ConfigurationKey.TrackEntitiesChangedInRevision, true);
cfg.SetEnversProperty(ConfigurationKey.GlobalWithModifiedFlag, true);
cfg.IntegrateWithEnvers(new AttributeConfiguration());
Any idea what I might be doing wrong?
Sounds like a potential bug. Please report it here with a minimal mapping to reproduce your issue (or, even better, create a pull request with a failing test).
Okay, my bad. While trying to put a minimal example together, I created a new class with just an ID and a string property. It turned out that the schema generated for that class also had a length of 1 for the string column, meaning it had nothing to do with Envers; rather, my entity mapping was wrong (in the example I originally posted, the base table is in a legacy database, not generated by NH). I played around with the <property> attributes and got what I wanted.
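For reference, a sketch of the kind of mapping fix involved, assuming an hbm.xml mapping (the property and column names here are illustrative):
<!-- an explicit length tells hbm2ddl what to generate for the column -->
<property name="AddressType" column="TYPE" type="String" length="40" />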
Thanks for helping me through this.

Improve database table design depending on the value of a type column

I have the following:
1. A table "patients" where I store patients' data.
2. A table "tests" where I store the data of tests done to each patient.
Now the problem comes as I have 2 types of tests, "tests_1" and "tests_2".
So for each test done to a particular patient I store the type and the id of the type of test:
CREATE TABLE IF NOT EXISTS patients
(
    id_patient INTEGER PRIMARY KEY,
    name_patient VARCHAR(30) NOT NULL,
    sex_patient VARCHAR(6) NOT NULL,
    date_patient DATE
);

INSERT INTO patients VALUES (1, 'Joe',   'Male',   '2000-01-23');
INSERT INTO patients VALUES (2, 'Marge', 'Female', '1950-11-25');
INSERT INTO patients VALUES (3, 'Diana', 'Female', '1985-08-13');
INSERT INTO patients VALUES (4, 'Laura', 'Female', '1984-12-29');

CREATE TABLE IF NOT EXISTS tests
(
    id_test INTEGER PRIMARY KEY,
    id_patient INTEGER,
    type_test VARCHAR(15) NOT NULL,
    id_type_test INTEGER,
    date_test DATE,
    FOREIGN KEY (id_patient) REFERENCES patients(id_patient)
);

INSERT INTO tests VALUES (1, 4, 'test_1', 10, '2004-05-29');
INSERT INTO tests VALUES (2, 4, 'test_2', 45, '2005-01-29');
INSERT INTO tests VALUES (3, 4, 'test_2', 55, '2006-04-12');

CREATE TABLE IF NOT EXISTS tests_1
(
    id_test_1 INTEGER PRIMARY KEY,
    id_patient INTEGER,
    data1 REAL,
    data2 REAL,
    data3 REAL,
    data4 REAL,
    data5 REAL,
    FOREIGN KEY (id_patient) REFERENCES patients(id_patient)
);

INSERT INTO tests_1 VALUES (10, 4, 100.7, 1.8, 10.89, 20.04, 5.29);

CREATE TABLE IF NOT EXISTS tests_2
(
    id_test_2 INTEGER PRIMARY KEY,
    id_patient INTEGER,
    data1 REAL,
    data2 REAL,
    data3 REAL,
    FOREIGN KEY (id_patient) REFERENCES patients(id_patient)
);

INSERT INTO tests_2 VALUES (45, 4, 10.07, 18.9, 1.8);
INSERT INTO tests_2 VALUES (55, 4, 17.6, 1.8, 18.89);
Now I think this approach is redundant, or not too good...
So I would like to improve queries like:
select * from tests WHERE id_patient=4;
select * from tests_1 WHERE id_patient=4;
select * from tests_2 WHERE id_patient=4;
Is there a better approach?
In this example I have 1 test of type tests_1 and 2 tests of type tests_2 for patient with id=4.
Add a table testtype (id_test, name_test) and use it as an FK to the id_type_test field in the tests table. Do not create separate tables for test_1 and test_2.
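A sketch of that suggestion, using the column names from the comment above (FK syntax varies a little by engine):
CREATE TABLE IF NOT EXISTS testtype
(
    id_test INTEGER PRIMARY KEY,
    name_test VARCHAR(15) NOT NULL
);

ALTER TABLE tests
    ADD FOREIGN KEY (id_type_test) REFERENCES testtype(id_test);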
It depends on the requirements.
For OLTP I would do something like the following:
STAFF:
ID | FORENAME | SURNAME | DATE_OF_BIRTH | JOB_TITLE | ...
-------------------------------------------------------------
1 | harry | potter | 2001-01-01 | consultant | ...
2 | ron | weasley | 2001-02-01 | pathologist | ...
PATIENT:
ID | FORENAME | SURNAME | DATE_OF_BIRTH | ...
-----------------------------------------------
1 | hermiony | granger | 2013-01-01 | ...
TEST_TYPE:
ID | CATEGORY | NAME | DESCRIPTION | ...
--------------------------------------------------------
1 | haematology | abg | arterial blood gasses | ...
REQUEST:
ID | TEST_TYPE_ID | PATIENT_ID | DATE_REQUESTED | REQUESTED_BY | ...
----------------------------------------------------------------------
1 | 1 | 1 | 2013-01-02 | 1 | ...
RESULT_TYPE:
ID | TEST_TYPE_ID | NAME | UNIT | ...
---------------------------------------
1 | 1 | co2 | kPa | ...
2 | 1 | o2 | kPa | ...
RESULT:
ID | REQUEST_ID | RESULT_TYPE_ID | DATE_RESULTED | RESULTED_BY | RESULT | ...
-------------------------------------------------------------------------------
1 | 1 | 1 | 2013-01-02 | 2 | 5 | ...
2 | 1 | 2 | 2013-01-02 | 2 | 5 | ...
A concern I have with the above is the unit of the test result; these can sometimes (not often) change. It may be better to place the unit in the result table.
Also consider breaking these into the major test categories, as my understanding is they can be quite different; e.g. histopathology and x-rays are not resulted in the same way as haematology and microbiology are.
For OLAP I would combine request and result into one table, adding derived columns such as REQUEST_TO_RESULT_MINS, and make a single dimension from RESULT_TYPE and TEST_TYPE, etc.
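A minimal DDL sketch of the core of that OLTP design (names and types are illustrative, trimmed to the key columns):
CREATE TABLE PATIENT
(
    ID INTEGER PRIMARY KEY,
    FORENAME VARCHAR(30),
    SURNAME VARCHAR(30),
    DATE_OF_BIRTH DATE
);

CREATE TABLE TEST_TYPE
(
    ID INTEGER PRIMARY KEY,
    CATEGORY VARCHAR(30) NOT NULL,
    NAME VARCHAR(30) NOT NULL,
    DESCRIPTION VARCHAR(200)
);

CREATE TABLE REQUEST
(
    ID INTEGER PRIMARY KEY,
    TEST_TYPE_ID INTEGER NOT NULL REFERENCES TEST_TYPE(ID),
    PATIENT_ID INTEGER NOT NULL REFERENCES PATIENT(ID),
    DATE_REQUESTED DATE NOT NULL
);

CREATE TABLE RESULT_TYPE
(
    ID INTEGER PRIMARY KEY,
    TEST_TYPE_ID INTEGER NOT NULL REFERENCES TEST_TYPE(ID),
    NAME VARCHAR(30) NOT NULL,
    UNIT VARCHAR(10) -- see the caveat above about units changing
);

CREATE TABLE RESULT
(
    ID INTEGER PRIMARY KEY,
    REQUEST_ID INTEGER NOT NULL REFERENCES REQUEST(ID),
    RESULT_TYPE_ID INTEGER NOT NULL REFERENCES RESULT_TYPE(ID),
    DATE_RESULTED DATE,
    RESULT VARCHAR(50)
);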
You can do this in a few ways, without knowing all the different types of cases you need to deal with.
The simplest would be 5 tables:
Patients (like you described it)
Tests (like you described it)
TestType (like Declan_K suggested)
TestResultCode
TestResults
TestResultCode describes each value that is stored for each test. TestResults is a pivoted table that can store any number of test results per test:
Create table TestResultCode
(
    idTestResultCode int
  , Code varchar(10)
  , Description varchar(200)
  , DataType int -- 1 = Real, 2 = Varchar, 3 = Int, etc.
);

Create table TestResults
(
    idPatient int -- FK
  , idTest int -- FK
  , idTestType int -- FK
  , idTestResultCode int -- FK
  , ResultsR real
  , ResultsV varchar(100)
  , ResultsI int
  , Created datetime
);
So, basically, you can fit the results you wanted to put into the tables "tests_1" and "tests_2", and any other tests you can think of.
The application reading this table can load each test and all its values. Of course the application needs to know how to deal with each case, but you can store any type of test in this structure.
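For example, loading all stored values for one patient's tests could look like this (a sketch against the tables above):
SELECT tr.idTest, tr.idTestType, trc.Code, trc.DataType,
       tr.ResultsR, tr.ResultsV, tr.ResultsI
FROM TestResults tr
JOIN TestResultCode trc ON trc.idTestResultCode = tr.idTestResultCode
WHERE tr.idPatient = 4;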

Need help optimizing a MySQL query - JOINs not using the correct indexes

I have this query below that I've rewritten a dozen different ways, yet I am unable to get it optimized and loaded in under 8 seconds. If I can get it to 2s, that would be good. 1s or less would be optimal.
This query retrieves a list of books that are currently available for sale or trade, and performs a bit of filtering. This query takes about 9-10 seconds.
SELECT
listing.for_sale,
listing.for_trade,
MIN(listing.price) AS from_price,
MAX(listing.price) AS to_price,
IF (NOW() > CONVERT_TZ(listing.date_sale_expiration, '+00:00', '-7:00'), 1, 0) AS expired,
COUNT(listing.book_id) AS 'listing_count',
book.id AS 'book_id',
book.title AS 'book_title',
book.date_released AS 'date_released',
book.publisher AS 'publisher',
book.photo_cover AS 'photo_cover',
publisher.name AS 'publisher_name',
COALESCE((SELECT COUNT(*) FROM listing l1 WHERE l1.book_id = book.id AND l1.status IN ('in_active_deal', 'complete')), 0) AS 'number_sold',
(SELECT 1 FROM listing l2 WHERE l2.status = 'available' AND l2.book_id = book.id AND l2.member_id = 1234 LIMIT 1) AS 'hasListing',
(SELECT 1 FROM wishlist w1 WHERE w1.book_id = book.id AND w1.member_id = 1234 LIMIT 1) AS 'hasWishlist'
FROM listing
INNER JOIN member ON
listing.member_id = member.id
AND member.transaction_limit <> 0
AND member.banned <> 1
AND member.date_last_login > DATE_SUB(CURDATE(), INTERVAL 120 DAY)
INNER JOIN book ON
listing.book_id = book.id
AND book.released = 1
INNER JOIN publisher ON
book.publisher_id = publisher.id
WHERE
listing.type = 'book'
AND listing.status = 'available'
AND (listing.for_trade = 1 OR (listing.for_sale = 1 AND NOW() < COALESCE(CONVERT_TZ(listing.date_sale_expiration, '+00:00', '-7:00'), 0)))
AND (
EXISTS (SELECT 1 FROM listing l3 LEFT JOIN book b ON l3.book_id = b.id WHERE l3.member_id = 1234 AND b.publisher_id = book.publisher_id AND l3.status = 'available' AND l3.type = 'book' AND (l3.for_trade = 1 OR (l3.for_sale = 1 AND NOW() < COALESCE(CONVERT_TZ(l3.date_sale_expiration, '+00:00', '-7:00'), 0))) LIMIT 1)
OR member.publisher_only <> 1
OR member.id = 1234
)
AND (
EXISTS (SELECT 1 FROM wishlist w WHERE w.member_id = member.id AND w.type = 'book' AND (w.type, w.book_id) IN (SELECT l4.type, l4.book_id FROM listing l4 WHERE 1234 = l4.member_id AND l4.status = 'available' AND (l4.for_trade = 1 OR (l4.for_sale = 1 AND NOW() < COALESCE(DATE_SUB(l4.date_sale_expiration, INTERVAL 7 HOUR), 0)))) LIMIT 1)
OR member.wishlist_only <> 1
OR member.id = 1234
)
GROUP BY
book.id
ORDER BY
book.date_released DESC
LIMIT 30;
These are my tables:
CREATE TABLE `listing` (
`id` int(10) unsigned NOT NULL auto_increment,
`member_id` int(10) unsigned NOT NULL,
`type` enum('book','audiobook','accessory') NOT NULL,
`book_id` int(10) unsigned default NULL,
`audiobook_id` int(10) unsigned default NULL,
`accessory_id` int(10) unsigned default NULL,
`date_created` datetime NOT NULL,
`date_modified` datetime NOT NULL,
`date_sale_expiration` datetime default NULL,
`status` enum('available','in_active_deal','complete','deleted') NOT NULL default 'available',
`for_sale` tinyint(1) unsigned NOT NULL default '0',
`for_trade` tinyint(1) unsigned NOT NULL default '0',
`price` decimal(10,2) default NULL,
`condition` tinyint(1) unsigned default NULL,
`notes` varchar(255) default NULL,
PRIMARY KEY (`id`),
KEY `ix_accessory` (`accessory_id`,`member_id`,`type`,`status`),
KEY `ix_book` (`book_id`,`member_id`,`type`,`status`),
KEY `ix_member` (`member_id`,`status`,`date_created`),
KEY `ix_audiobook` (`audiobook_id`,`member_id`,`type`,`status`),
KEY `ix_status` (`status`,`accessory_id`,`for_trade`,`member_id`)
) ENGINE=MyISAM AUTO_INCREMENT=281724 DEFAULT CHARSET=utf8
CREATE TABLE `member` (
`id` int(10) unsigned NOT NULL auto_increment,
`email` varchar(200) NOT NULL,
`screen_name` varchar(25) default NULL,
`date_last_login` datetime default NULL,
`wishlist_only` tinyint(1) unsigned NOT NULL default '1',
`platform_only` tinyint(1) unsigned NOT NULL default '0',
`transaction_limit` smallint(6) NOT NULL default '5',
`banned` tinyint(1) unsigned NOT NULL default '0',
`notes` text,
PRIMARY KEY (`id`),
KEY `ix_email` (`email`),
KEY `ix_screen_name` (`screen_name`),
KEY `ix_transaction_limit` (`transaction_limit`)
) ENGINE=MyISAM AUTO_INCREMENT=50842 DEFAULT CHARSET=utf8
CREATE TABLE `publisher` (
`id` int(10) unsigned NOT NULL auto_increment,
`name` varchar(128) NOT NULL,
`date_updated` datetime default NULL,
PRIMARY KEY (`id`),
KEY `ix_name` (`name`)
) ENGINE=MyISAM AUTO_INCREMENT=129 DEFAULT CHARSET=utf8
CREATE TABLE `book` (
`id` int(10) unsigned NOT NULL auto_increment,
`publisher_id` int(10) unsigned default NULL,
`name` varchar(128) NOT NULL,
`description` text,
`keywords` varchar(200) default NULL,
`date_released` varchar(10) default NULL,
`genre` varchar(50) default NULL,
`subgenre` varchar(50) default NULL,
`author` varchar(100) default NULL,
`date_updated` datetime default NULL,
`photo_cover` varchar(50) default NULL,
`weight_oz` decimal(7,2) default NULL,
`released` tinyint(2) NOT NULL default '0',
PRIMARY KEY (`id`),
KEY `ix_genre` (`genre`),
KEY `ix_name` (`name`),
KEY `ix_released` (`released`,`date_released`),
KEY `ix_keywords` (`keywords`)
) ENGINE=MyISAM AUTO_INCREMENT=87329 DEFAULT CHARSET=utf8
CREATE TABLE `wishlist` (
`id` int(10) unsigned NOT NULL auto_increment,
`member_id` int(10) unsigned NOT NULL,
`type` enum('book','audiobook','accessory') NOT NULL,
`book_id` int(10) unsigned default NULL,
`audiobook_id` int(10) unsigned default NULL,
`accessory_id` int(10) unsigned default NULL,
`date_created` datetime NOT NULL,
`date_modified` datetime NOT NULL,
PRIMARY KEY (`id`),
KEY `ix_accessory` (`accessory_id`,`member_id`,`type`),
KEY `ix_book` (`book_id`,`member_id`,`type`),
KEY `ix_member_accessory` (`member_id`,`accessory_id`),
KEY `ix_member_date_created` (`member_id`,`date_created`),
KEY `ix_member_book` (`member_id`,`book_id`),
KEY `ix_member_audiobook` (`member_id`,`audiobook_id`),
KEY `ix_audiobook` (`audiobook_id`,`member_id`,`type`)
) ENGINE=MyISAM AUTO_INCREMENT=241886 DEFAULT CHARSET=utf8
And here is the result when I run EXPLAIN:
+----+--------------------+-----------+----------------+---------------------------------------------------------------------------------------+----------------------+---------+------------------------------------+-------+----------------------------------------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+----+--------------------+-----------+----------------+---------------------------------------------------------------------------------------+----------------------+---------+------------------------------------+-------+----------------------------------------------+
| 1 | PRIMARY | member | range | PRIMARY,ix_transaction_limit | ix_transaction_limit | 2 | NULL | 19617 | Using where; Using temporary; Using filesort |
| 1 | PRIMARY | listing | ref | ix_game,ix_member,ix_status | ix_member | 5 | live_database001.member.id,const | 7 | Using where |
| 1 | PRIMARY | book | eq_ref | PRIMARY,ix_released | PRIMARY | 4 | live_database001.listing.book_id | 1 | Using where |
| 1 | PRIMARY | publisher | eq_ref | PRIMARY | PRIMARY | 4 | live_database001.book.publisher_id | 1 | |
| 6 | DEPENDENT SUBQUERY | w | ref | ix_member_accessory,ix_member_date_created,ix_member_book,ix_member_publisher | ix_member_accessory | 4 | live_database001.member.id | 6 | Using where |
| 7 | DEPENDENT SUBQUERY | l4 | index_subquery | ix_book,ix_member,ix_status | ix_book | 11 | func,const,func,const | 1 | Using where |
| 5 | DEPENDENT SUBQUERY | l3 | ref | ix_book,ix_member,ix_status | ix_member | 5 | const,const | 63 | Using where |
| 5 | DEPENDENT SUBQUERY | b | eq_ref | PRIMARY | PRIMARY | 4 | live_database001.l3.book_id | 1 | Using where |
| 4 | DEPENDENT SUBQUERY | w1 | ref | ix_book,ix_member_accessory,ix_member_date_created,ix_member_game,ix_member_publisher | ix_book | 9 | func,const | 1 | Using where; Using index |
| 3 | DEPENDENT SUBQUERY | l2 | ref | ix_book,ix_member,ix_status | ix_book | 9 | func,const | 1 | Using where; Using index |
| 2 | DEPENDENT SUBQUERY | l1 | ref | ix_book,ix_status | ix_book | 5 | func | 10 | Using where; Using index |
+----+--------------------+-----------+----------------+--------------------------------------------------------------------------------------+----------------------+---------+------------------------------------+-------+----------------------------------------------+
This brings me to a couple of questions:
1. The member table is using ix_transaction_limit, and as a result is searching through 19k+ rows. Since I am specifying a member.id, shouldn't this be using PRIMARY, and shouldn't the rows be 1? How can I fix this?
2. How does the key_len affect performance?
3. I've seen other complex queries that deal with hundreds of millions of rows take less time. How is it that only 19k rows are taking so long?
(I'm still very green with MySQL optimization, so I'd really love to understand the hows and whys.)
Any suggestions, small or big, are greatly appreciated. Thank you in advance!
Not sure what transaction_limit does, but at a guess it seems like a strange choice to have an index on. What might help is an index on date_last_login. At the moment the query is filtering member and then joining listing to it, i.e. it goes through all the member records with the appropriate transaction limit and uses the member id to find the listings.
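In SQL that suggestion would look something like this (a sketch; whether dropping ix_transaction_limit is safe depends on your other queries):
ALTER TABLE member ADD INDEX ix_date_last_login (date_last_login);
-- and, only if nothing else relies on it:
ALTER TABLE member DROP INDEX ix_transaction_limit;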
After dropping the index on the member table, I was still having the same problems. In fact, that made it even worse. So my ultimate solution was to completely rewrite the query from scratch.
Consequently, changing the order of the conditionals made a big difference as well: moving the AND member_id = 1234 and the AND wishlist_only <> 1 up above the subquery was a huge improvement.
Thanks for all your help!