How to re-define indexes in a PostgreSQL table - sql

I have this table which is automatically created in my DB.
This is the description of the table using the \d command.
Table "public.tableA":
Column | Type | Modifiers
----------------------------+----------+-----------------------------------------------------
var_a | integer | not null
var_b | integer | not null
var_c | bigint | not null default nextval('var_c_sequence'::regclass)
var_d | integer |
var_e | integer |
var_f | smallint | default mysessionid()
var_g | smallint | default (-1)
var_h | boolean | default false
var_g | uuid |
Indexes:
"tableA_pkey" PRIMARY KEY, btree (var_c)
"tableA_edit" btree (var_g) WHERE var_g <> (-1)
"tableA_idx" btree (var_a)
Check constraints:
"constraintC" CHECK (var_f > 0 AND var_d IS NULL AND var_e IS NULL OR (var_f = 0 OR var_f = (-1)) AND var_d IS NOT NULL AND var_e IS NOT NULL)
Triggers:
object_create BEFORE INSERT ON tableA FOR EACH ROW EXECUTE PROCEDURE create_tableA()
object_update BEFORE DELETE OR UPDATE ON tableA FOR EACH ROW EXECUTE PROCEDURE update_tableA()
I'm interested in creating this table myself, but I'm not quite sure how to define these indexes manually. Any ideas?

Unless I've totally missed the boat:
alter table public."tableA"
add constraint "tableA_pkey" PRIMARY KEY (var_c);
create index "tableA_edit" on public."tableA" (var_g) WHERE var_g <> (-1);
create index "tableA_idx" on public."tableA" (var_a);
Btree is default, so I don't bother specifying that, but you can if you want.
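For completeness, spelling out the access method on the last index above would look like this (same index, just explicit):
create index "tableA_idx" on public."tableA" using btree (var_a);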
You didn't ask, but the check constraint syntax is:
alter table public."tableA"
add constraint "constraintC"
CHECK (var_f > 0 AND var_d IS NULL AND var_e IS NULL OR
(var_f = 0 OR var_f = (-1)) AND var_d IS NOT NULL AND var_e IS NOT NULL)
By the way, the cheat would be to just look at the DDL in PgAdmin.
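If you want to recover the index DDL without PgAdmin (an aside of mine, not part of the original answer), you can also ask the catalog directly:
SELECT indexname, indexdef
FROM pg_indexes
WHERE schemaname = 'public'
  AND tablename = 'tableA';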
All that said, I generally discourage the use of quotes around a table name to force upper/lowercase. There are cases where it makes sense (otherwise, why would the functionality exist?), but in many cases it just creates extra work down the road. For the index names it doesn't even buy you anything, since you never really refer to them in any SQL.
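A quick illustration of that extra work (a toy example, not from the original schema): once an identifier is created with quotes and mixed case, every later reference has to repeat the quoting exactly, because unquoted names are folded to lowercase.
CREATE TABLE "MyTable" (id integer);
SELECT * FROM MyTable;     -- ERROR: relation "mytable" does not exist
SELECT * FROM "MyTable";   -- works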

Related

How to declare "nextval('testing_thing_thing_id_seq'::regclass)" as default value for column "thing_id" in postgres table "testing_thing"?

In my postgres db there is a table called testing_thing which, as I can see by running \d testing_thing at my psql prompt, is defined as
Table "public.testing_thing"
Column | Type | Collation | Nullable | Default
--------------+-------------------+-----------+----------+-----------------------------------------------------
thing_id | integer | | not null | nextval('testing_thing_thing_id_seq'::regclass)
thing_num | smallint | | not null | 0
thing_desc | character varying | | not null |
Indexes:
"testing_thing_pk" PRIMARY KEY, btree (thing_num)
I want to drop it and re-create it exactly as it is, but I don't know how to reproduce the
nextval('testing_thing_thing_id_seq'::regclass)
part for column thing_id.
This is the query I put together to create the table:
CREATE TABLE testing_thing(
thing_id integer NOT NULL, --what else should I put here?
thing_num smallint NOT NULL PRIMARY KEY DEFAULT 0,
thing_desc varchar(100) NOT NULL
);
What is it missing?
Add a DEFAULT to the column you want to increment and call nextval():
CREATE SEQUENCE testing_thing_thing_id_seq START WITH 1;
CREATE TABLE testing_thing(
thing_id integer NOT NULL DEFAULT nextval('testing_thing_thing_id_seq'),
thing_num smallint NOT NULL PRIMARY KEY DEFAULT 0,
thing_desc varchar(100) NOT NULL
);
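One extra step you might consider (my addition, not something the original \d output shows): tie the sequence to the column so that dropping the table or the column also drops the sequence, the way a serial column would behave:
ALTER SEQUENCE testing_thing_thing_id_seq OWNED BY testing_thing.thing_id;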
Side note: Keep in mind that attaching a sequence to a column does not prevent users from manually filling it with arbitrary values, which can create really nasty problems with primary keys. If you want to avoid that and do not necessarily need a sequence, consider creating an identity column instead, e.g.
CREATE TABLE testing_thing(
thing_id integer NOT NULL GENERATED ALWAYS AS IDENTITY,
thing_num smallint NOT NULL PRIMARY KEY DEFAULT 0,
thing_desc varchar(100) NOT NULL
);
Demo: db<>fiddle
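To see what the identity column buys you (a small sketch of my own; the exact error wording varies by Postgres version), a plain INSERT that supplies thing_id is rejected unless you explicitly override it:
INSERT INTO testing_thing (thing_id, thing_num, thing_desc)
VALUES (42, 1, 'manual id');
-- ERROR: cannot insert a non-DEFAULT value into column "thing_id"

INSERT INTO testing_thing (thing_id, thing_num, thing_desc)
OVERRIDING SYSTEM VALUE
VALUES (42, 1, 'manual id');   -- allowed, but bypasses the protection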

Can't update or delete row from table (Postgres)

I have a table with a bytea field. When I try to delete a row from this table, I get this error:
[42704] ERROR: large object 0 does not exist
Can you help me in this situation?
Edit. Information from command \d photo:
Table "public.photo"
Column | Type | Modifiers
------------+------------------------+-----------
id | character varying(255) | not null
ldap_name | character varying(255) | not null
file_name | character varying(255) | not null
image_data | bytea |
Indexes:
"pk_photo" PRIMARY KEY, btree (id)
"photo_file_name_key" UNIQUE CONSTRAINT, btree (file_name)
"photo_ldap_name" btree (ldap_name)
Triggers:
remove_unused_large_objects BEFORE DELETE OR UPDATE ON photo FOR EACH ROW EXECUTE PROCEDURE lo_manage('image_data')
Drop the trigger:
drop trigger remove_unused_large_objects on photo;
then try the delete again, e.g.:
delete from photo where id = '<id you want to delete>';
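If you only needed the problem rows removed, you can recreate the trigger afterwards; its definition is visible in the \d output above:
CREATE TRIGGER remove_unused_large_objects
BEFORE DELETE OR UPDATE ON photo
FOR EACH ROW EXECUTE PROCEDURE lo_manage('image_data');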

SQL: How could I set data unavailable depending on its value and update it if used? // Restaurant system simulation

I'm new to SQL and I'm building a database for testing and learning.
My example is a restaurant where there are 5 tables: Customer/Table/Order/Sale/dishes
with these columns:
CUSTOMER : customerID, TableID.
TABLE: TableID, OrderID, available(boolean)
Order: FoodID
Sale: OrderID/TotalPrice/customerID/TableID
dishes: foodID/Price
What I want to do is:
A table with an unliquidated SALE cannot be assigned to a new customer.
A sale cannot be liquidated if there is no order.
The customer cannot ask for dishes that do not exist in the dishes table.
All orders and sales must be settled the same day as the customer's visit.
How could I do that?
Thanks in advance
Edit:
Darwin von Corax came up with a complete solution to the problem. You can see his work in the answers and feel free to join in the Chat.
Here's what I've got so far.
The tables:
Table "public.orders"
Column | Type | Modifiers
-----------------+--------------+-----------------------------------------------------
id | integer | not null default nextval('orders_id_seq'::regclass)
discount | numeric(5,2) |
tax | numeric(5,2) |
tip | numeric(5,2) |
amount_tendered | numeric(6,2) |
closed | boolean | default false
party_size | integer |
Indexes:
"order_pk" PRIMARY KEY, btree (id)
Referenced by:
TABLE "order_items" CONSTRAINT "order_item_fk" FOREIGN KEY (order_id) REFERENCES orders(id)
TABLE "tables" CONSTRAINT "table_order_fk" FOREIGN KEY (order_id) REFERENCES orders(id)
Table "public.tables"
Column | Type | Modifiers
-----------+---------+-----------------------------------------------------
id | integer | not null default nextval('tables_id_seq'::regclass)
places | integer |
available | boolean |
order_id | integer |
Indexes:
"table_pk" PRIMARY KEY, btree (id)
"fki_table_order_fk" btree (order_id)
Foreign-key constraints:
"table_order_fk" FOREIGN KEY (order_id) REFERENCES orders(id)
Table "public.order_items"
Column | Type | Modifiers
-----------+---------+---------------------------------------------------------------
order_id | integer | not null
item_id | integer | not null default nextval('order_items_item_id_seq'::regclass)
dish_id | integer |
delivered | boolean |
Indexes:
"ord_item_pk" PRIMARY KEY, btree (order_id, item_id)
Foreign-key constraints:
"order_item_fk" FOREIGN KEY (order_id) REFERENCES orders(id)
"orditem_dish_fk" FOREIGN KEY (dish_id) REFERENCES dishes(id)
Table "public.dishes"
Column | Type | Modifiers
-------------+-------------------------+-----------------------------------------------------
id | integer | not null default nextval('dishes_id_seq'::regclass)
price | numeric(5,2) |
description | character varying(1024) |
Indexes:
"dish_pk" PRIMARY KEY, btree (id)
Referenced by:
TABLE "order_items" CONSTRAINT "orditem_dish_fk" FOREIGN KEY (dish_id) REFERENCES dishes(id)
Also I have two functions:
-- Function: seat_party(integer, integer)
-- DROP FUNCTION seat_party(integer, integer);
CREATE OR REPLACE FUNCTION seat_party(party_size integer DEFAULT 1, preferred_table integer DEFAULT 1)
RETURNS integer AS
$BODY$
DECLARE
assigned_table tables.id%TYPE := NULL;
new_order orders.id%TYPE;
BEGIN
IF ((preferred_table IS NOT NULL) AND (table_is_available(preferred_table, party_size))) THEN
assigned_table := preferred_table;
END IF;
IF (assigned_table IS NULL) THEN
SELECT INTO assigned_table
tables.id
FROM tables
WHERE order_id IS NULL
AND places >= party_size
LIMIT 1;
END IF;
IF (assigned_table IS NOT NULL) THEN
INSERT INTO orders (party_size)
VALUES (party_size)
RETURNING id INTO new_order;
UPDATE tables
SET order_id = new_order
WHERE tables.id = assigned_table;
RETURN assigned_table;
ELSE
RETURN NULL;
END IF;
END;
$BODY$
LANGUAGE plpgsql VOLATILE
COST 100;
ALTER FUNCTION seat_party(integer, integer)
OWNER TO dave;
and
-- Function: table_is_available(integer, integer)
-- DROP FUNCTION table_is_available(integer, integer);
CREATE OR REPLACE FUNCTION table_is_available(table_id integer, party_size integer)
RETURNS boolean AS
$BODY$
DECLARE
ord_id tables.order_id%TYPE;
places tables.places%TYPE;
BEGIN
SELECT INTO ord_id, places
tables.order_id, tables.places
FROM tables
WHERE tables.id = table_id;
RETURN ((ord_id IS NULL) AND (places >= party_size));
END;
$BODY$
LANGUAGE plpgsql VOLATILE
COST 100;
ALTER FUNCTION table_is_available(integer, integer)
OWNER TO dave;
To complete the solution you will need procedures to take an order, serve an order, pay a bill, and close the day's business. I've created a chat for anyone who wants to question my reasoning or to discuss modifications or extensions: Extended discussion
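As an illustration of the direction those procedures might take (a sketch of my own, not part of the original answer; the function and parameter names are invented, the table and column names follow the schema above), a pay_bill function could total the order, record the payment, close it, and free the table:
CREATE OR REPLACE FUNCTION pay_bill(p_order_id integer, p_tendered numeric)
RETURNS boolean AS
$BODY$
DECLARE
    bill_total numeric(6,2);
BEGIN
    -- total the dishes on the order; discount, tax and tip handling is left out
    SELECT COALESCE(SUM(d.price), 0)
    INTO bill_total
    FROM order_items oi
    JOIN dishes d ON d.id = oi.dish_id
    WHERE oi.order_id = p_order_id;

    IF p_tendered < bill_total THEN
        RETURN false;
    END IF;

    UPDATE orders
    SET amount_tendered = p_tendered,
        closed = true
    WHERE id = p_order_id;

    -- release the table so seat_party() can reassign it
    UPDATE tables
    SET order_id = NULL,
        available = true
    WHERE order_id = p_order_id;

    RETURN true;
END;
$BODY$
LANGUAGE plpgsql VOLATILE;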

Optimize sql query

Is it possible to optimize this query?
SELECT count(locId) AS antal , locId
FROM `geolitecity_block`
WHERE (1835880985>= startIpNum AND 1835880985 <= endIpNum)
OR (1836875969>= startIpNum AND 1836875969 <= endIpNum)
OR (1836878754>= startIpNum AND 1836878754 <= endIpNum)
...
...
OR (1843488110>= startIpNum AND 1843488110 <= endIpNum)
GROUP BY locId ORDER BY antal DESC LIMIT 100
The table looks like this
CREATE TABLE IF NOT EXISTS `geolitecity_block` (
`startIpNum` int(11) unsigned NOT NULL,
`endIpNum` int(11) unsigned NOT NULL,
`locId` int(11) unsigned NOT NULL,
PRIMARY KEY (`startIpNum`),
KEY `locId` (`locId`)
) ENGINE=MyISAM DEFAULT CHARSET=latin1;
UPDATE
and the EXPLAIN output looks like this
+----+-------------+-------------------+-------+---------------+-------+---------+------+------+----------------------------------------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+----+-------------+-------------------+-------+---------------+-------+---------+------+------+----------------------------------------------+
| 1 | SIMPLE | geolitecity_block | index | PRIMARY | locId | 4 | NULL | 108 | Using where; Using temporary; Using filesort |
+----+-------------+-------------------+-------+---------------+-------+---------+------+------+----------------------------------------------+
To optimize performance, create an index on startIpNum and endIpNum.
CREATE INDEX index_startIpNum ON geolitecity_block (startIpNum);
CREATE INDEX index_endIpNum ON geolitecity_block (endIpNum);
Indexing columns that are being grouped or sorted on will almost always improve performance. I would suggest plugging this query into the DTA (Database Tuning Advisor) to see if it can make any suggestions; these might include the creation of one or more indexes in addition to statistics.
If it is possible in your use case, create a temporary table TMP_RESULT (dropping the ORDER BY) and then submit a second query that orders the results by antal. Filesort is extremely slow and -- in your case -- you cannot avoid it, because you do not sort by any of the keys/indexes. To perform the count, you have to scan the whole table. A temporary table is a much faster solution.
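A rough sketch of that temporary-table approach (my own illustration of the idea above; tmp_result is an arbitrary name, and the elided ranges stay elided):
CREATE TEMPORARY TABLE tmp_result AS
SELECT count(locId) AS antal, locId
FROM geolitecity_block
WHERE (1835880985 >= startIpNum AND 1835880985 <= endIpNum)
   OR (1836875969 >= startIpNum AND 1836875969 <= endIpNum)
   -- ... remaining ranges ...
   OR (1843488110 >= startIpNum AND 1843488110 <= endIpNum)
GROUP BY locId;

SELECT antal, locId
FROM tmp_result
ORDER BY antal DESC
LIMIT 100;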
P.S. Adding an index on (startIpNum, endIpNum) will definitely help performance, but -- if you have a lot of rows -- it will not be a huge improvement.

MySQL: Which indexes to use for a simple range select?

I have a table with ~30 million rows (and growing!) and currently I have some problems with a simple range select.
The query looks like this:
SELECT SUM( CEIL( dlvSize / 100 ) ) as numItems
FROM log
WHERE timeLogged BETWEEN 1000000 AND 2000000
AND user = 'example'
It takes minutes to finish, and I think the solution lies in the indexes I'm using. Here is the result of EXPLAIN:
+----+-------------+-------+-------+---------------------------------+---------+---------+------+----------+-------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+----+-------------+-------+-------+---------------------------------+---------+---------+------+----------+-------------+
| 1 | SIMPLE | log | range | PRIMARY,timeLogged | PRIMARY | 4 | NULL | 11839754 | Using where |
+----+-------------+-------+-------+---------------------------------+---------+---------+------+----------+-------------+
My table structure is this (reduced to fit the problem better):
CREATE TABLE IF NOT EXISTS `log` (
`origDomain` varchar(64) NOT NULL default '0',
`timeLogged` int(11) NOT NULL default '0',
`orig` varchar(128) NOT NULL default '',
`rcpt` varchar(128) NOT NULL default '',
`dlvSize` varchar(255) default NULL,
`user` varchar(255) default NULL,
PRIMARY KEY (`timeLogged`,`orig`,`rcpt`),
KEY `timeLogged` (`timeLogged`),
KEY `orig` (`orig`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1;
Any ideas on what I can do to optimize this query or the indexes on my table?
You may want to try adding a composite index on (user, timeLogged):
CREATE TABLE IF NOT EXISTS `log` (
...
KEY `user_timeLogged` (user, timeLogged),
...
) ENGINE=InnoDB DEFAULT CHARSET=latin1;
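Since the table already exists, the same index can also be added in place without recreating it:
ALTER TABLE `log` ADD INDEX `user_timeLogged` (`user`, `timeLogged`);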
Related Stack Overflow post:
Database: When should I use a composite index?
In addition to the suggestions made by the other answers, I note that you have a column user in the table which is a varchar(255). If this refers to a column in a table of users, then 1) it would most likely be far more efficient to add an integer ID column to that table, and use that as the primary key and as a referencing column in other tables; 2) you are using InnoDB, so why not take advantage of the foreign key capabilities it offers?
Consider that if you index by a varchar(n) column, it is treated like a char(n) in the index, so each row of your current primary key takes up 4 + 128 + 128 = 260 bytes in the index.
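A hedged sketch of that idea (the users table, its columns, and all the names here are assumptions for illustration; they are not part of the original schema):
CREATE TABLE `users` (
  `id`   int(11) unsigned NOT NULL AUTO_INCREMENT,
  `name` varchar(255) NOT NULL,
  PRIMARY KEY (`id`),
  UNIQUE KEY `name` (`name`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1;

-- replace the varchar user column in `log` with a small integer foreign key
ALTER TABLE `log`
  ADD COLUMN `user_id` int(11) unsigned NULL,
  ADD KEY `user_id_timeLogged` (`user_id`, `timeLogged`),
  ADD CONSTRAINT `fk_log_user` FOREIGN KEY (`user_id`) REFERENCES `users` (`id`);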
Add an index on user.
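In MySQL syntax that would be, for example (the index name is arbitrary):
ALTER TABLE `log` ADD INDEX `idx_user` (`user`);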