Naming database table fields (count vs counter)

I have a database table with some counters: Favorites, Hits, Comments, etc.
What is the best name for these fields? Should I use the term 'FavoritesCounter' or 'FavoritesCount'?
Examples:
CREATE TABLE `tbl_example` (
`IdExample` int(8) NOT NULL AUTO_INCREMENT COMMENT '(PK)',
...
`FavoritesCounter` int(8) NOT NULL DEFAULT '0' COMMENT 'Number of times that example has been favorited.',
...
or
CREATE TABLE `tbl_example` (
`IdExample` int(8) NOT NULL AUTO_INCREMENT COMMENT '(PK)',
...
`FavoritesCount` int(8) NOT NULL DEFAULT '0' COMMENT 'Number of times that example has been favorited.',
...

Generally, column names should be nouns describing the stored value, so "FavoritesCount" works well for the database: it names a quantity. "FavoritesCounter" suggests an object that performs the counting, an interaction that belongs in application code rather than in the schema. "FavoritesCount" also reads consistently alongside your other fields such as Favorites, Hits and Comments.

Related

Optimize SQL Query (If possible) using CONVERT(INT, SUBSTRING( and LEN FUNCTION

My situation is as follows:
I have these tables:
CREATE TABLE [dbo].[HeaderResultPulser]
(
[Id] BIGINT IDENTITY (1, 1) NOT NULL,
[ReportNumber] CHAR(255) NOT NULL,
[ReportDescription] CHAR(255) NOT NULL,
[CatalogNumber] NCHAR(255) NOT NULL,
[WorkerName] NCHAR(255) DEFAULT ('') NOT NULL,
[LastCalibrationDate] DATETIME NOT NULL,
[NextCalibrationDate] DATETIME NOT NULL,
[MachineNumber] INT NOT NULL,
[EditTime] DATETIME NOT NULL,
[Age] NCHAR(255) DEFAULT ((1)) NOT NULL,
[Current] INT DEFAULT ((-1)) NOT NULL,
[Time] BIGINT DEFAULT ((-1)) NOT NULL,
[MachineName] NVARCHAR(MAX) DEFAULT ('') NOT NULL,
[BatchNumber] NVARCHAR(MAX) DEFAULT ('') NOT NULL,
CONSTRAINT [PK_HeaderResultPulser]
PRIMARY KEY CLUSTERED ([Id] ASC)
);
CREATE TABLE [dbo].[ResultPulser]
(
[Id] BIGINT IDENTITY (1, 1) NOT NULL,
[ReportNumber] CHAR(255) NOT NULL,
[BatchNumber] CHAR(255) NOT NULL,
[DateTime] DATETIME NOT NULL,
[Ocv] FLOAT(53) NOT NULL,
[OcvMin] FLOAT(53) NOT NULL,
[OcvMax] FLOAT(53) NOT NULL,
[Ccv] FLOAT(53) NOT NULL,
[CcvMin] FLOAT(53) NOT NULL,
[CcvMax] FLOAT(53) NOT NULL,
[Delta] BIGINT NOT NULL,
[DeltaMin] BIGINT NOT NULL,
[DeltaMax] BIGINT NOT NULL,
[CurrentFail] BIT DEFAULT ((0)) NOT NULL,
[NumberInTest] INT NOT NULL
);
For every row in HeaderResultPulser I have multiple rows in ResultPulser.
My key is [HeaderResultPulser].[ReportNumber], which selects a list of rows in ResultPulser; many rows share the same [ResultPulser].[ReportNumber],
each with its own [ResultPulser].[NumberInTest] value.
For example: in the ResultPulser table the data can look like this:
ReportNumber | NumberInTest
-------------+-------------
0000006211 | 1
0000006211 | 2
0000006211 | 3
0000006211 | 4
0000006211 | 5
0000006211 | 6
0000006212 | 1
0000006212 | 2
0000006212 | 3
0000006212 | 4
0000006212 | 5
NumberInTest can be 200, 500, 10000 and sometimes even more.
The report number column contains two values: the first 7 characters are a machine number and the rest is an incrementing number.
For example, 0000006212 is [0000006][212] == [the machine number][the incrementing number]
My query, for example:
select
[HeaderResultPulser].[ReportNumber],
max(NumberInTest) as TotalCells
from
ResultPulser, HeaderResultPulser
where
((([ResultPulser].[ReportNumber] like '0000006%' and
CONVERT(INT, SUBSTRING([ResultPulser].[ReportNumber], 8, LEN([ResultPulser].[ReportNumber]))) BETWEEN '211' AND '815')
and ([HeaderResultPulser].[ReportNumber] = [ResultPulser].[ReportNumber])))
group by
[HeaderResultPulser].[ReportNumber]
I actually want to get all the rows for machine number 0000006 whose incrementing number is 211 to 815 (both inclusive).
This query takes about 6-7 seconds.
There is a lot of data (hundreds of millions to billions of rows in ResultPulser, and it can grow much larger in the future), plus tens of thousands of rows in the HeaderResultPulser table.
The select itself returns only a few hundred rows, a thousand or maybe two in the worst case, but computing max(NumberInTest) from ResultPulser can touch a few million rows.
Is there any way to optimize my query? Or with so much data, is this time simply unavoidable (just the way it is)?
The way you are doing joins is no longer standard. It's also hard to read, and dangerous if you ever need to use left joins. Instead of joining this way:
select *
from T1, T2
where T1.column = T2.column
Use ANSI-92 join syntax instead:
select *
from T1
join T2 on T1.column = T2.column
You said that your "key" was ReportNumber. Why isn't that declared in your schema? It sounds like you want a unique constraint on HeaderResultPulser.ReportNumber, and a foreign key on the ResultPulser table, such that ReportNumber references HeaderResultPulser (ReportNumber).
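For instance, a sketch of that DDL (the constraint names are illustrative):
alter table HeaderResultPulser
    add constraint uq_headerresultpulser_reportnumber unique (ReportNumber);
alter table ResultPulser
    add constraint fk_resultpulser_header foreign key (ReportNumber)
        references HeaderResultPulser (ReportNumber);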
Since your report number column seems to contain two different values, your table is not in First Normal Form. This is making things difficult for you. Why not split the two parts of the "report number" into two different columns when the data is entered? This will significantly improve your query performance, because you no longer need to perform an expression against the data in the table at query time to separate the ReportNumber into atomic values.
Your comment says that the first 7 characters of the ReportNumber are the MachineNumber. But you already have MachineNumber in the HeaderResultPulser table. So why not just add a separate column for Increment? If you still need ReportNumber to exist as a column, you can make it a calculated column, as the concatenation of MachineNumber and Increment.
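A rough sketch of that idea, assuming an integer Increment column has been added and the widths stay fixed at 7 and 3 digits (the stored ReportNumber would be dropped first):
-- replace the stored column with a computed, zero-padded concatenation
alter table HeaderResultPulser drop column ReportNumber;
alter table HeaderResultPulser
    add ReportNumber as right('0000000' + cast(MachineNumber as varchar(7)), 7)
                      + right('000' + cast(Increment as varchar(3)), 3);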
If you don't want to touch the "existing" schema, we can do a similar thing in reverse. Your query will not be completely sargable unless you can do something to the schema, because you have to perform some kind of expression on the data in the ReportNumber column. But maybe you have the option to use a calculated column to do this up front:
alter table HeaderResultPulser
add Increment as right(rtrim(ReportNumber), len(ReportNumber) - 7); -- rtrim() strips the char(255) trailing blanks; len() already ignores them
Now we have the increment as a column in its own right. But it's still being calculated at query time, because it's not persisted. We can make it persisted:
alter table HeaderResultPulser
add Increment as right(rtrim(ReportNumber), len(ReportNumber) - 7) persisted;
We can also index a computed column. Since your required expression is deterministic and precise (see Indexes on Computed Columns), we don't actually have to mark it as persisted:
alter table HeaderResultPulser
add Increment as right(rtrim(ReportNumber), len(ReportNumber) - 7);
create index ix_headerresultpulser_increment on HeaderResultPulser(Increment);
You could do a similar set of operations to create Increment and MachineNumber on the ResultPulser table. If you always want to use both values, create an index on the combination of (MachineNumber, Increment).
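A sketch of that (the column and index names are illustrative):
alter table ResultPulser
    add MachineNumber as left(ReportNumber, 7),
        Increment as right(rtrim(ReportNumber), len(ReportNumber) - 7);
create index ix_resultpulser_machine_increment
    on ResultPulser(MachineNumber, Increment);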
The biggest performance gain might come from eliminating the outer group by, using a correlated subquery or lateral join:
select hrp.[ReportNumber],
(select max(rp.NumberInTest)
from ResultPulser rp
where rp.ReportNumber = hrp.ReportNumber and
right(rp.ReportNumber, 3) between '211' and '815'
) as TotalCells
from HeaderResultPulser hrp
where hrp.ReportNumber like '0000006%';
Your logic looks like it only wants the last three characters of the ReportNumber, so I simplified the logic. I'm not 100% sure that is the case; it just seems reasonable. Regardless, there is no need to convert the values to integers and then compare them against strings. And similar logic can be used even for longer report numbers.
You also want an index on ResultPulser(ReportNumber, NumberInTest):
create index idx_resultpulser_reportnumber_numberintest on ResultPulser(ReportNumber, NumberInTest)
EDIT:
Actually, I notice that the report number matches between the two tables. So this seems simplest:
select hrp.[ReportNumber],
(select max(rp.NumberInTest)
from ResultPulser rp
where rp.ReportNumber = hrp.ReportNumber
) as TotalCells
from HeaderResultPulser hrp
where hrp.ReportNumber >= '0000006211' and
hrp.ReportNumber <= '0000006815';
You still want to be sure you have the above index on ResultPulser.
If the ReportNumber is not a fixed 10 digits, then you can use:
where hrp.ReportNumber >= '0000006211' and
hrp.ReportNumber <= '0000006815' and
len(hrp.ReportNumber) = 10
This should also use the index and return exactly what you want.
Performance optimization of any query depends on many factors, including the environment where you host and run it. Hardware and software play an important part in optimizing heavy database queries. In your case you can look into the following things:
Use ANSI-92 JOIN syntax instead of the old comma-style join,
e.g.
select *
from T1
join T2 on T1.column = T2.column
Put indexes on columns like:
[ReportNumber]
[NumberInTest]
Note: you may need an index on each column involved in the join that is not a primary key.
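For example (the index names are illustrative):
create index IX_ResultPulser_ReportNumber_NumberInTest
    on ResultPulser(ReportNumber, NumberInTest);
create index IX_HeaderResultPulser_ReportNumber
    on HeaderResultPulser(ReportNumber);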
Remember that use of MAX is heavy when no index covers it, since every qualifying row must be examined, and that could be the main problem in your query.
Finally, you can look further into optimizing your query syntax using the following online tool, where you can specify your actual query and the environment you are using:
https://www.eversql.com/
Hope it helps.
If you really want to optimize performance, I propose adding a bit of logic beyond SQL structures.
Is it possible that a particular value of ReportNumber is present in table ResultPulser but not in table HeaderResultPulser? If not, and I suppose that is the case, there is no reason to join table HeaderResultPulser at all.
Next, take advantage of the fact that the condition on ReportNumber can be expressed equivalently without splitting it into substrings. For your example, the condition
([ResultPulser].[ReportNumber] like '0000006%' and
CONVERT(INT, SUBSTRING([ResultPulser].[ReportNumber], 8,
LEN([ResultPulser].[ReportNumber]))) BETWEEN '211' AND '815')
is equivalent to:
([ResultPulser].[ReportNumber] BETWEEN '0000006211' and '0000006815')
So the proposal is:
Create index on table ResultPulser(ReportNumber, NumberInTest)
Use selections similar to this:
select ReportNumber, max(NumberInTest) as TotalCells
from ResultPulser
where
ReportNumber BETWEEN '0000006211' and '0000006815'
group by
ReportNumber
(Please add brackets or double quotes and capitalization as necessary for MS SQL Server and your taste.)
I would expect a good database to execute this query with index-only access, which should be optimal from an execution point of view.
Performance depends not only on the execution path, but also on setup and hardware. Please make sure that your database has enough cache and fast disk access. Concurrent load is also very important.
Simply splitting the field ReportNumber into [the machine number] and [the incrementing number] will probably not improve performance of the query in the form proposed by me. But it may be very convenient for other forms of access (other WHERE clauses), and it would reflect the structure of the case. Even more important: it would free you from imposed limits. Currently you have 3 digits for [the incrementing number]. Are you sure it will never be necessary to have more than 999 of them for a single [machine number]?
Why does the field ReportNumber have type char(255) when only 10 characters are used? char(255) has a fixed length, so it is a terrible waste of space; only database compression can help. Used space has a strong influence on performance; please consider the remark above about the database cache.
If both of these fields, [the machine number] and [the incrementing number], are integers, why not split ReportNumber and use an integer type for them?
Side remark: the field names suggest that you are after the total number of rows in table ResultPulser that belong to a single entry in table HeaderResultPulser. The proposed query will deliver this only if the numbers in NumberInTest are consecutive, without gaps. If that is not guaranteed, you have to count them rather than seek the maximum.
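In case the numbers do have gaps, a sketch of the counting variant, under the same index assumption:
select ReportNumber, count(*) as TotalCells
from ResultPulser
where ReportNumber BETWEEN '0000006211' AND '0000006815'
group by ReportNumber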

Is it possible to CREATE TABLE with a column that is a combination of other columns in the same table?

I know that the question is very long and I understand if someone doesn't have the time to read it all, but I really wish there is a way to do this.
I am writing a program that will read the database schema from the database catalog tables and automatically build a basic application with the information extracted from the system catalogs.
Many tables in the database can be just a list of items of the form
CREATE TABLE tablename (id INTEGER PRIMARY KEY, description VARCHAR NOT NULL);
so when a table has a column that references the id of tablename I just resolve the descriptions by querying it from the tablename table, and I display a list in a combo box with the available options.
There are some tables, however, that cannot directly have a description column because their description would be a combination of other columns. Let's take as an example the most important of those tables in my first application:
CREATE TABLE bankaccount (
bankid INTEGER NOT NULL REFERENCES bank,
officeid INTEGER NOT NULL REFERENCES bankoffice,
crc INTEGER NOT NULL,
number BIGINT NOT NULL
);
this, as many would know, is the full account number for a bank account; in my country it's composed as follows:
[XXXX]    [XXXX]           [XX]   [XXXXXXXXXX]
bank id   bank office id   crc    account number
so that's why my bankaccount table is structured the way it is.
Now, I would like to have the complete bank account number in a description column so I can display it in the application without giving special treatment to this situation (there are some other tables in a similar situation), something like
CREATE TABLE bankaccount (
bankid INTEGER NOT NULL REFERENCES bank,
officeid INTEGER NOT NULL REFERENCES bankoffice,
crc INTEGER NOT NULL,
number BIGINT NOT NULL,
description VARCHAR DEFAULT bankid || '-' || officeid || '-' || crc || '-' || number
);
Which of course doesn't work, since the following error is raised¹:
ERROR: cannot use column references in default expression
If there is any different approach that someone can suggest, please feel free to suggest it as an answer.
¹ This is the error message given by PostgreSQL.
What you want is to create a view on your table. I'm more familiar with MySQL and SQLite, so excuse the differences. But basically, if you have table 'AccountInfo' you can have a view 'AccountInfoView' which is sort of like a 'stored query' but can be used like a table. You would create it with something like
CREATE VIEW AccountInfoView AS
SELECT *, CONCAT(bankid, officeid, crc, number) AS FullAccountNumber
FROM AccountInfo
Another approach is to have an actual FullAccountNumber column in your original table, and create a trigger that sets it any time an insert or update is performed on your table. This is usually less efficient, as it duplicates storage and takes the performance hit when data are written instead of retrieved. Sometimes that approach can make sense, though.
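A rough PostgreSQL sketch of that trigger approach (the column, function, and trigger names are illustrative, and this is untested):
ALTER TABLE bankaccount ADD COLUMN fullaccountnumber VARCHAR;
CREATE FUNCTION set_fullaccountnumber() RETURNS trigger AS $$
BEGIN
    -- rebuild the denormalized description on every write
    NEW.fullaccountnumber :=
        NEW.bankid || '-' || NEW.officeid || '-' || NEW.crc || '-' || NEW.number;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;
CREATE TRIGGER bankaccount_fullaccountnumber
BEFORE INSERT OR UPDATE ON bankaccount
FOR EACH ROW EXECUTE PROCEDURE set_fullaccountnumber();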
What actually works, and I believe is a very elegant solution, is to use a function like this one:
CREATE FUNCTION description(bankaccount) RETURNS VARCHAR AS $$
SELECT
CONCAT(bankid, '-', officeid, '-', crc, '-', number)
FROM
bankaccount this
WHERE
$1.bankid = this.bankid AND
$1.officeid = this.officeid AND
$1.crc = this.crc AND
$1.number = this.number
$$ LANGUAGE SQL STABLE;
which would then be used like this
SELECT bankaccount.description FROM bankaccount;
and hence, my goal is achieved.
Note: this solution works with PostgreSQL only AFAIK.
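As a design note, since the function already receives the whole row as its argument, the self-join may not even be necessary; a simpler variant (an untested sketch, replacing the function above) would be:
CREATE FUNCTION description(bankaccount) RETURNS VARCHAR AS $$
    SELECT CONCAT($1.bankid, '-', $1.officeid, '-', $1.crc, '-', $1.number)
$$ LANGUAGE SQL STABLE;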

Database design for a template based evaluation system

We are working on a database to store some evaluations we conduct. There are a few different types of evaluations and some have changed over time. Because of this we need to keep a record of exactly what an evaluation looked like when it was undertaken.
I figured that the best way to support this would be through a template style system.
With:
A table saving all possible options;
A table mapping options to a template;
An evaluations table mapping a participant to a template on a date/time; and
A table mapping evaluator comments to an option of an evaluation.
This is a skeleton for the design:
CREATE TABLE options (
id SERIAL PRIMARY KEY,
option TEXT NOT NULL
);
CREATE TABLE templates (
id SERIAL PRIMARY KEY,
name TEXT NOT NULL
);
CREATE TABLE template_options (
template INTEGER NOT NULL REFERENCES templates( id ),
option INTEGER NOT NULL REFERENCES options( id ),
UNIQUE ( template, option )
);
CREATE TABLE participants (
id SERIAL PRIMARY KEY,
name TEXT NOT NULL
);
CREATE TABLE evaluations (
id SERIAL PRIMARY KEY,
template INTEGER NOT NULL REFERENCES templates( id ),
participant INTEGER NOT NULL REFERENCES participants( id ),
date TIMESTAMP WITH TIME ZONE NOT NULL
);
CREATE TABLE evaluation_data (
evaluation INTEGER NOT NULL REFERENCES evaluations( id ),
option INTEGER NOT NULL REFERENCES options( id ),
evaluator_comments TEXT NOT NULL
);
The design is able to capture our data but doesn't restrict the options saved in evaluation_data to the subset specified in the evaluation's template's option mapping. We could probably enforce it with a trigger (we can definitely do it with application logic [we are doing so at the moment]) but are we going down the wrong path with this design?
Can anybody think of a better way to do it?
Edit:
Added an example of a potential trigger we would need to use to ensure valid options are enforced with this design.
CREATE FUNCTION valid_option() RETURNS trigger as $valid_option$
BEGIN
IF NOT NEW.option IN ( SELECT template_options.option
FROM template_options
INNER JOIN templates
ON template_options.template = templates.id
WHERE templates.id = ( SELECT evaluations.template
FROM evaluations
WHERE evaluations.id = NEW.evaluation ) ) THEN
RAISE EXCEPTION 'This option is not mapped for this evaluation''s template.';
END IF;
RETURN NEW;
END
$valid_option$ LANGUAGE plpgsql;
CREATE TRIGGER valid_option BEFORE INSERT ON evaluation_data FOR EACH ROW EXECUTE PROCEDURE valid_option();
Remember that you need two sets of tables: a first set containing the assessment, questions, answer alternatives, and categories(?) needed to display the assessment to the participant, and a second set to record data about the evaluation (i.e. the participant taking the assessment): which assessment, which questions, which answer alternatives and in which order they were presented, which answer they entered (are they allowed to answer the same question multiple times?), etc.
We're using the following structure (I've removed topic scoring since you didn't ask about it):
Models for presenting an assessment:
Assessment: assessment_name, passing_status, version
Question: assessment, question_number, question_type, question_text
AnswerAlternative: question, correct?, answer_text, points
Models for recording an evaluation (participant taking an assessment):
Progress: started_timestamp, finished_timestamp, last_activity, status (includes "finished")
Result: user, assessment, progress, currently_active, score, passing_grade?
Answer: result, question, selected_answer_alternative, answer_text, score
To achieve your goal, I would augment this by writing the generated evaluation to a table and pointing to it from Result. You could also record the selection and presentation criteria so you can re-generate the assessment programmatically (i.e. if you're selecting the questions from a larger question db and re-ordering the answer alternatives before presenting them to the participant).
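To make the shape concrete, here is a rough DDL sketch of the first two presentation models, in the question's PostgreSQL style (table names, types, and keys are assumptions, not part of the original answer):
CREATE TABLE assessments (
id SERIAL PRIMARY KEY,
assessment_name TEXT NOT NULL,
passing_status TEXT NOT NULL,
version INTEGER NOT NULL
);
CREATE TABLE questions (
id SERIAL PRIMARY KEY,
assessment INTEGER NOT NULL REFERENCES assessments( id ),
question_number INTEGER NOT NULL,
question_type TEXT NOT NULL,
question_text TEXT NOT NULL
);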

What's the best way to self-document "codes" in a SQL-based application?

Q: Is there any way to implement self-documenting enumerations in "standard SQL"?
EXAMPLE:
Column: PlayMode
Legal values: 0=Quiet, 1=League Practice, 2=League Play, 3=Open Play, 4=Cross Play
What I've always done is just define the field as "char(1)" or "int", and define the mnemonic ("league practice") as a comment in the code.
Any BETTER suggestions?
I'd definitely prefer using standard SQL, so the database type (MySQL, MSSQL, Oracle, etc.) shouldn't matter. I'd also prefer using any application language (C, C#, Java, etc.), so the programming language shouldn't matter, either.
Thank you VERY much in advance!
PS:
It's my understanding that using a second table - to map a code to a description, for example "table playmodes (char(1) id, varchar(10) name)" - is very expensive. Is this necessarily correct?
The normal way is to use a static lookup table, sometimes called a "domain table" (because its purpose is to restrict the domain of a column variable.)
It's up to you to keep the underlying values of any enums or the like in sync with the values in the database (you might write a code generator that generates the enum from the domain table and gets invoked whenever something in the domain table changes).
Here's an example:
--
-- the domain table
--
create table dbo.play_mode
(
id int not null primary key clustered ,
description varchar(32) not null unique nonclustered
)
insert dbo.play_mode values ( 0 , 'Quiet' )
insert dbo.play_mode values ( 1 , 'LeaguePractice' )
insert dbo.play_mode values ( 2 , 'LeaguePlay' )
insert dbo.play_mode values ( 3 , 'OpenPlay' )
insert dbo.play_mode values ( 4 , 'CrossPlay' )
--
-- A table referencing the domain table. The column playmode_id is constrained to
-- one of the values contained in the domain table play_mode.
--
create table dbo.game
(
id int not null primary key clustered ,
team1_id int not null foreign key references dbo.team( id ) ,
team2_id int not null foreign key references dbo.team( id ) ,
playmode_id int not null foreign key references dbo.play_mode( id )
)
go
Some people for reasons of "economy" might suggest using a single catch-all table for all such code, but in my experience, that ultimately leads to confusion. Best practice is a single small table for each set of discrete values.
Add a foreign key to a "codes" table.
The codes table would have the code value as its PK, plus a string description column where you enter the description of the value:
table: PlayModes
columns: PlayMode number -- primary key
         Description string
I can't see this as being very expensive; joining tables like this is what databases are built on.
That information should be in the database somewhere, not in comments.
So you should have a table containing those codes, and probably a FK from your table to it.
I agree with @Nicholas Carey (+1): a static data table with two columns, say "Key" or "ID" and "Description", with foreign key constraints on all tables using the codes. Often the ID columns are simple surrogate keys (1, 2, 3, etc., with no significance attached to the value), but when reasonable I go a step further and use "special" codes. Following are a few examples.
If the values are a sequence (say, Ordered, Paid, Processed, Shipped), I might use 1, 2, 3, 4 to indicate sequence. This can make things easier if you want to find everything "up through" a given stage, such as all orders that have not yet been shipped (ID < 4). If you are into planning ahead, make them 10, 20, 30, 40; this will allow you to add values "in between" existing values, if/when new codes or statuses come along. (Yes, you cannot and should not try to anticipate everything and anything that might have to be done some day, but a bit of pre-planning like this can make some changes that much simpler.)
Keys/IDs are often integers (1 byte, 2 byte, 4 byte, whatever). There's little cost to making them character values instead (1 char, 2 char, 3 char, 4 char). That's character, not variable character. Done this way, you can have mnemonics on your codes, such as
O, P, R, S
Or, Pd, Pr, Sh
Ordr, Paid, Proc, Ship
…or whatever floats your boat. Done this way, I have found that it can save a lot of time when analyzing or debugging. You still want the lookup table, for relational integrity as well as a reminder for the more obscure codes.
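For instance, a sketch of a mnemonic-keyed domain table in the same style as the earlier example (the table name and values are illustrative):
create table dbo.order_status
(
id char(4) not null primary key clustered ,
description varchar(32) not null unique nonclustered
)
insert dbo.order_status values ( 'Ordr' , 'Ordered' )
insert dbo.order_status values ( 'Paid' , 'Paid' )
insert dbo.order_status values ( 'Proc' , 'Processed' )
insert dbo.order_status values ( 'Ship' , 'Shipped' )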

Optimizing Mysql Table Indexing for Substring Queries

I have a MySQL indexing question for you guys.
I've got a very large table (~100 million records) in MySQL that contains information about files. Most of the queries I do on it involve substring operations on the file path column.
Here's the table ddl:
CREATE TABLE `filesystem_data`.`$tablename` (
`file_id` INT( 14 ) NOT NULL AUTO_INCREMENT PRIMARY KEY ,
`file_name` VARCHAR( 256 ) NOT NULL ,
`file_share_name` VARCHAR ( 100 ) NOT NULL,
`file_path` VARCHAR( 900 ) NOT NULL ,
`file_size` BIGINT( 14 ) NOT NULL ,
`file_tier` TINYINT(1) UNSIGNED NULL,
`file_last_access` DATETIME NOT NULL ,
`file_last_change` DATETIME NOT NULL ,
`file_creation` DATETIME NOT NULL ,
`file_extension` VARCHAR( 50 ) NULL ,
INDEX ( `file_path`, `file_share_name` )
) ENGINE = MYISAM;
So for example I'll have a row with a file_path like:
'\\Server100\share2\Home\Zenshai\My Documents\'
And I'll extract the user's name (Zenshai in this example) with something like:
SELECT substring_index(substring_index(fp.file_path,'\\',6),'\\',-1) as Username
FROM (SELECT '\\\\Server100\\share2\\Home\\Zenshai\\My Documents\\' as file_path) fp
It gets a bit ugly, but that's not really my concern right now.
What I'd like some advice on is what kind of index (if any at all) can help speed up these types of queries on this table. Any other suggestions are welcome too.
Thanks.
PS. Although the table gets very large, there is enough space for indexes.
You cannot use indices with your current table design.
You could add a column called USERNAME, fill it in an INSERT/UPDATE trigger with the expression you use in your SELECT, and search on this column.
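A minimal sketch of that idea (the column, index, and trigger names are illustrative, and `files` stands in for your actual $tablename; a similar BEFORE UPDATE trigger would keep the column in sync on updates):
ALTER TABLE `files`
ADD COLUMN `username` VARCHAR(100) NULL,
ADD INDEX `idx_username` ( `username` );
CREATE TRIGGER `files_set_username`
BEFORE INSERT ON `files`
FOR EACH ROW
SET NEW.username = substring_index(substring_index(NEW.file_path, '\\', 6), '\\', -1); -- same expression as the question's SELECT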
P.S. Just curious: do you really have 100M+ files on your server?
I'd create a tiny (columns, not record count) subtable that would have the file path broken out and stored like so:
FK_TO_PARENT | PATH_PART
-------------+-------------
           1 | Server100
           1 | share2
           1 | Home
           1 | Zenshai
           1 | My Documents
And then just index PATH_PART. Of course, if the parent table is 100 million plus rows, then this would be going into the billions of records.
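A sketch of that child table in the same DDL style (the names are illustrative, and a sequence column is added as an assumption so the original path order can be reconstructed):
CREATE TABLE `filesystem_data`.`file_path_parts` (
`fk_to_parent` INT( 14 ) NOT NULL ,
`part_seq` SMALLINT UNSIGNED NOT NULL ,
`path_part` VARCHAR( 256 ) NOT NULL ,
INDEX ( `path_part` )
) ENGINE = MYISAM;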