Postgres database: need suggestions creating an index for a table - sql

I have a table structure as below. For fetching data from the table I have the search criteria mentioned below, and I am writing a single SQL query for the requirement (a sample query is given below). I need to create an index for the table that covers all the search criteria. It would be helpful if somebody could advise me.
Table structure (columns):
applicationid varchar(15),
trans_tms timestamp,
SSN varchar,
firstname varchar,
lastname varchar,
DOB date,
Zipcode smallint,
adddetais json
The search criteria come from an API and fall under 4 categories. All 4 categories are mandatory; I will always receive values for all 4 categories for a single applicant.
Search criteria:
ssn & lastname (lastname needs a function, i.e. soundex(lastname) = soundex('inputvalue'))
ssn & DOB
ssn & zipcode
firstname & lastname & DOB
The sample query I am trying to write:
Select *
from table
where (ssn = 'aaa' and soundex(lastname) = soundex('xxx'))
   or (ssn = 'aaa' and dob = 'xxx')
   or (ssn = 'aaa' and zipcode = 'xxx')
   or (firstname = 'xxx' and lastname = 'xxx' and dob = 'xxx');
For performance I need to create an index for the table, possibly a composite one. Any suggestion will be helpful.

Some approaches I would follow:
Yes, you are correct that a composite (multicolumn) index gives a benefit for AND conditions on two columns; however, the indexes would overlap on columns for the given conditions.
Documentation: https://www.postgresql.org/docs/10/indexes-multicolumn.html
You can use a UNION instead of OR.
Reference: https://www.cybertec-postgresql.com/en/avoid-or-for-better-performance/
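For illustration, here is a rough sketch of that rewrite together with indexes that could serve each branch. This is only a sketch: the table name tbl and the index names are placeholders, the literal values are made up, and soundex() comes from the fuzzystrmatch extension. UNION (rather than UNION ALL) preserves the OR semantics by removing duplicate rows.

create extension if not exists fuzzystrmatch;

-- one index per search category (the first three overlap on the ssn prefix):
create index idx_ssn_lastname_sdx on tbl (ssn, soundex(lastname));
create index idx_ssn_dob          on tbl (ssn, dob);
create index idx_ssn_zipcode      on tbl (ssn, zipcode);
create index idx_name_dob         on tbl (firstname, lastname, dob);

-- the OR query rewritten as a UNION so each branch can use its own index
-- (the json column is omitted from the select list: UNION's duplicate
-- removal needs an equality operator, which type json lacks):
select applicationid, ssn, firstname, lastname, dob, zipcode
from tbl where ssn = 'aaa' and soundex(lastname) = soundex('xxx')
union
select applicationid, ssn, firstname, lastname, dob, zipcode
from tbl where ssn = 'aaa' and dob = date '2000-01-01'
union
select applicationid, ssn, firstname, lastname, dob, zipcode
from tbl where ssn = 'aaa' and zipcode = 12345
union
select applicationid, ssn, firstname, lastname, dob, zipcode
from tbl where firstname = 'xxx' and lastname = 'xxx' and dob = date '2000-01-01';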
If multiple conditions can be combined (e.g. ssn must be 'aaa' in every combination), then rewriting the WHERE clause with fewer ORs is preferable.

Configuring indexes in postgres

This is my first time dealing with indexes and I would like to understand a few things.
I have the tables of the following schemas:
Table1: Customer details

id | name | createdOn | username | phone    | address
---|------|-----------|----------|----------|--------
1  | xyz  | some date | xyz12    | 12345678 | abc
The id in the above table is unique. The id is not defined as PK in the table though. Would id + createdOn be a good complex index?
Table2: Tracked customer actions

customer id | name | timestamp | action type | cart value | address
------------|------|-----------|-------------|------------|--------
1           | xyz  | some date | click       | .          | abc
The above table does not have any column with unique values and there can be a lot of sparse data. The above actions table is a sample and can have almost 18 columns, with new data being added frequently. Is having all columns as an index a good one?
The queries on these tables could be both simple and complex as below:
select * from customerDetails
OR
with target_customers as (
    select id as customer_id
    from customerDetails
    where customer_date > {some date}
)
select avg(a.cart_value)
from actions a
inner join target_customers b on a.customer_id = b.customer_id
where a.action_type = 'cart updated';
These are sample queries; I believe I will have even more complex queries using different aggregations and joins with other tables to gain insights while performing analytics in the future.
I want to understand the best columns for indexes on the above tables.
"The id is not defined as PK in the table though."
That's unusual. Why is that?
"Would id + createdOn be a good complex index?"
No, you'd reverse it: createdOn, id. An index can be used by its first column alone. This allows you to use the index to order by createdOn and also to filter with createdOn between X and Y.
But you probably wouldn't include id in there at all. Make id a primary key and it is indexed.
In general, if you want to cover all possibilities for two keys, make two indexes...
columnA, columnB
columnB
columnA, columnB can cover queries which only reference columnA and can also order by columnA. It can also cover queries which reference both columnA and columnB. But it can't cover a query which only references columnB, so we need a single-column index for columnB.
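As a concrete sketch (the table and index names are made up):

create table demo (columnA int, columnB int);

create index demo_a_b on demo (columnA, columnB); -- covers: columnA alone; columnA + columnB; order by columnA
create index demo_b   on demo (columnB);          -- covers: queries that reference only columnB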
Is having all columns as an index a good one?
Maybe, it depends on your queries, but probably not.
You want to index foreign keys (they are not indexed automatically in Postgres), because that will speed up all joins.
You probably want to index timestamps that you're going to search or order by.
Any flags you often query by, such as action_type in where action_type = 'cart updated', you may want to index. Or you may want to partition the table by action type, as sketched below.
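For example, a list-partitioning sketch (Postgres 10+; the table and column names are assumed from the question):

create table actions (
    customer_id bigint,
    action_type text,
    occurred_at timestamptz,
    cart_value  numeric
) partition by list (action_type);

create table actions_cart_updated partition of actions for values in ('cart updated');
create table actions_click        partition of actions for values in ('click');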
The above actions table is a sample and can have almost 18 columns, with new data being added frequently.
This may be a good use of a single jsonb column to store all the miscellaneous attributes. This allows you to use a single index for the jsonb column. However, jsonb is not a panacea and you will have to choose what to put in jsonb and what to make columns.
For example, a timestamp such as createdOn should probably be a column. Also any foreign keys. And status flags such as action_type.
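A minimal sketch of that pattern (names are made up; a GIN index supports jsonb containment queries):

alter table actions add column attrs jsonb;
create index actions_attrs_gin on actions using gin (attrs);

-- a containment query that the GIN index can serve:
select * from actions where attrs @> '{"device": "mobile"}';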

Row Stores vs Column Stores

Assuming that the database is already populated with data, and that each of the following SQL statements is the one and only query that an application will perform, why is it better to use row-wise or column-wise record storage for the following queries?...
1) SELECT * FROM Person
2) SELECT * FROM Person WHERE id=5
3) SELECT AVG(YEAR(DateOfBirth)) FROM Person
4) INSERT INTO Person (ID, DateOfBirth, Name, Surname) VALUES (2e25, '1990-05-01', 'Ute', 'Muller')
In those examples Person.id is the primary key.
The article Row Store and Column Store Databases gives a general discussion on this, but I am specifically concerned about the four queries above.
SELECT * FROM ... queries are better for row stores, since a column store would have to access numerous files (one per column) to reassemble each row.
A column store is good for aggregation over large volumes of data, or when you have queries that only need a few fields from a wide table.
Therefore:
1st query: row-wise
2nd query: row-wise
3rd query: column-wise
4th query: row-wise
I have no idea what you are asking. You have this statement:
INSERT INTO Person (ID, DateOfBirth, Name, Surname)
VALUES('2e25', '1990-05-01', 'Ute', 'Muller');
This suggests that you have a table with four columns, one of which is an id. Each person is stored in their own row.
You then have three queries. The first cannot be optimized. The second is optimized, assuming that id is a primary key (a reasonable assumption). The third requires a full table scan -- although that could be ameliorated with an index only on DateOfBirth.
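For instance (a sketch; the index name is made up and the YEAR() syntax varies by dialect):

CREATE INDEX person_dob_ix ON Person (DateOfBirth);
-- AVG(YEAR(DateOfBirth)) can then be computed by scanning this narrow index instead of the whole table.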
If the data is already in this format, why would you want to change it?
This is a very simple data structure. Three of your four query examples access all columns. I see no reason why you would not use a regular row-store table structure.

Oracle composite index with not all fields in condition

Please tell me a little about Oracle indexes, because I don't know how to ask Google about this situation. Let's pretend I have a table:
create table T (
    Id      integer primary key,
    FKey1   integer,
    FKey2   integer,
    Date1   date,
    Date2   date,
    Name    varchar2(100),   -- lengths are illustrative; the original sketch just said "string"
    Surname varchar2(100)
);
And I have a composite index on all these fields except Id. There are 2 queries which use:
All the columns except Name and Surname;
All the columns except Surname, searching for Name with a LIKE expression.
Is this index efficient? And if not, how can I improve it? The queries are generated by an ORM; I only have the option of changing the indexes :(
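Roughly, the two queries might look like this (a sketch; the literal values and the LIKE pattern are invented):

-- Query 1: all the columns except Name and Surname
SELECT * FROM T
WHERE FKey1 = 1 AND FKey2 = 2
  AND Date1 = DATE '2020-01-01' AND Date2 = DATE '2020-12-31';

-- Query 2: the same filters plus Name with LIKE
SELECT * FROM T
WHERE FKey1 = 1 AND FKey2 = 2
  AND Date1 = DATE '2020-01-01' AND Date2 = DATE '2020-12-31'
  AND Name LIKE 'Jo%';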
Real index column sequence:
Name
Surname
FKey1
FKey2
Date1
Date2
It depends on the sequence of the columns declared when creating the index. It is important that the columns which are more selective, or which are used in all the queries, are placed before the others. In your case:
Surname, Name (used by both user queries)
works better than
Name, Surname (used by one user query)
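Spelled out as DDL, the two alternatives being compared look like this (index names are made up; you would create only one of them):

CREATE INDEX t_surname_name_ix ON T (Surname, Name, FKey1, FKey2, Date1, Date2);
CREATE INDEX t_name_surname_ix ON T (Name, Surname, FKey1, FKey2, Date1, Date2);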

Assign unique ID to duplicates in Access

I had a very big excel spreadsheet that I moved into Access to try to deal with it easier. I'm very much a novice. I'm trying to use SQL via Access.
I need to assign a unique identifier to duplicates. I've seen people use DENSE_RANK in SQL but I can't get it to work in Access.
Here's what I'm trying to do: I have a large amount of patient and sample data (20k rows). My columns are called FULL_NAME, SAMPLE_NUM, and DATE_REC. Some patients have come in more than once and have multiple samples. I want to give each patient a unique ID that I want to call PATIENT_ID.
I can't figure out how to do this, aside from typing it out on each row. I would greatly appreciate help as I really don't know what I'm doing and there is no one at my work who can help.
To illustrate the previous answers' textual explanation, consider the following SQL action queries, which can be run in an Access query window one by one or as VBA string queries with DAO's CurrentDb.Execute or DoCmd.RunSQL. The ALTER statements can be done in MSAccess.exe.
Create a Patients table (make-table query)
SELECT DISTINCT s.FULL_NAME INTO myPatientsTable
FROM mySamplesTable s
WHERE s.FULL_NAME IS NOT NULL;
Add an autonumber field to new Patients table as a Primary Key
ALTER TABLE myPatientsTable ADD COLUMN PATIENT_ID AUTOINCREMENT NOT NULL PRIMARY KEY;
Add a blank Patient_ID column to Samples table
ALTER TABLE mySamplesTable ADD COLUMN PATIENT_ID INTEGER;
Update Patient_ID Column in Samples table using FULL_NAME field
UPDATE mySamplesTable s
INNER JOIN myPatientsTable p
ON s.[FULL_NAME] = p.[FULL_NAME]
SET s.PATIENT_ID = p.PATIENT_ID;
Maintain third-norm principles of relational databases and remove FULL_NAME field from Samples table
ALTER TABLE mySamplesTable DROP COLUMN FULL_NAME;
Then in a separate query, add a foreign key constraint on PATIENT_ID
ALTER TABLE mySamplesTable
ADD CONSTRAINT PatientRelationship
FOREIGN KEY (PATIENT_ID)
REFERENCES myPatientsTable (PATIENT_ID);
Sounds like FULL_NAME is currently the unique identifier. However, names make very poor unique identifiers, and name parts should be in separate fields. Are you sure you don't have multiple patients with the same name, e.g. John Smith?
You need a PatientInfo table and then the SampleData table. Do a query that pulls DISTINCT patient info (apparently this is only one field - FULL_NAME) and create a table that generates a unique ID with an autonumber field. Then build a query that joins the tables on the two FULL_NAME fields and updates a new field in SampleData called PatientID. Then delete the FULL_NAME field from SampleData.
The command to number rows in your table is:
ALTER TABLE MyTable ADD COLUMN ID AUTOINCREMENT;
Anyway, as June7 pointed out, it might not be a good idea to combine records based just on patient name, as there might be duplicates. A better way would be to treat each record as a unique patient for now and have a way to fix the patient ID when the patient comes back. I would suggest going this way (see the sketch after this list):
create two new columns in your samples table
ID with autoincrement as per query above
patientID, where you will copy values from the ID column - for now they will be the same, but in future they will diverge
copy the patientID and patientName columns into a separate patients table
now you can delete the patientName column from the samples table
add an imported column to the patients table to indicate that there might be other records that belong to this patient
when a patient comes back, open their record, update all other info like address, phone, ... and look for all possible samples records that belong to them; if found, fix the patient ID in those records
Now you can switch the imported indicator, because this patient's data is up to date.
After fixing patientID for the samples records, you will end up with patients having no record in the samples table, so you can go and delete them.
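A sketch of those first steps in Access SQL, following the style of the earlier answer (the table and column names samples, patients, patientID, patientName, imported are assumed):

Add the two new columns to the samples table
ALTER TABLE samples ADD COLUMN ID AUTOINCREMENT;
ALTER TABLE samples ADD COLUMN patientID LONG;

Copy the ID values into patientID (they will diverge later)
UPDATE samples SET patientID = ID;

Copy the patient columns into a separate patients table (make-table query)
SELECT patientID, patientName INTO patients FROM samples;

Drop the name column from samples and add the imported flag to patients
ALTER TABLE samples DROP COLUMN patientName;
ALTER TABLE patients ADD COLUMN imported YESNO;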
Unless you already have a natural key, you will be corrupting this data when you run the DISTINCT query and build a key from it. From your posting I would guess the natural key would be SAMPLE_NUM. Another problem is that if you roll up by name you will almost certainly be combining different patients into one.

SQL: unique constraint and aggregate function

Presently I'm learning (MS) SQL and trying out various aggregate function samples. The question I have is: is there any scenario (a sample query that uses an aggregate function) where having a unique-constraint column on a table helps when using an aggregate function?
Please note: I'm not trying to find a solution to a problem, but trying to see if such a scenario exist in real world SQL programming.
One immediate theoretical scenario comes to mind: the unique constraint is going to be backed by a unique index, so if you were just aggregating that field, scanning the index would be narrower (cheaper) than scanning the table. But that is on the basis that the query doesn't use any other fields and the index is thus covering; otherwise the plan would tip out of the NC (non-clustered) index.
I think the index added to enforce the unique constraint automatically has the potential to help a query, but the scenario might be a bit contrived.
Put the unique constraint on the field if you need the field to be unique. If you need indexes to help query performance, consider them separately, or add a unique index on that field and INCLUDE other fields to make it covering (less common, but more useful than a unique index on a single field).
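A small T-SQL sketch of both points (the table, column, and index names are made up):

CREATE TABLE Orders (
    OrderID int IDENTITY PRIMARY KEY,
    OrderNo int NOT NULL CONSTRAINT UQ_Orders_OrderNo UNIQUE, -- backed by a unique index
    Notes   varchar(2000) NULL
);

-- can be satisfied by scanning the narrow unique index rather than the table:
SELECT COUNT(OrderNo) FROM Orders;

-- a unique index that also INCLUDEs another field, covering queries that read Notes too:
CREATE UNIQUE INDEX UX_Orders_OrderNo_Notes ON Orders (OrderNo) INCLUDE (Notes);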
Let's take the following two tables: one has records for subject name and subject ID, and the other contains records for students' marks in particular subjects.
Table1(SubjectId int unique, subject_name varchar, MaxMarks int)
Table2(Id int, StudentId int, SubjectId int, Marks int)
So if I need to find the AVG of marks obtained in the Science subject (SubjectId = 2) by all students who have attempted it, then I would fire the following query.
SELECT AVG(t2.Marks), t1.MaxMarks
FROM Table1 t1
INNER JOIN Table2 t2 ON t1.SubjectId = t2.SubjectId
WHERE t1.SubjectId = 2
GROUP BY t1.MaxMarks;