Should I use VARCHAR(20) or VARCHAR(255) to store a name?

At university I was told to use VARCHAR(20) to store a first name. The VARCHAR type takes space depending on the string length, so is it necessary to specify a smaller length range? I'm asking because the RedBean ORM by default creates a VARCHAR(255) field for strings whose length is <= 255 characters.
Is there any difference between using VARCHAR(20) and VARCHAR(255), apart from the maximum string length that can be stored? :)
I know that such questions have already been asked, but all I understand from them is that using VARCHAR(255) where it isn't necessary could cause excessive memory consumption in DB applications.
What about real-life programming? Should I use VARCHAR(255) for all short text inputs, or try to limit the length whenever possible?

Because 255 is now just an arbitrary choice for a VARCHAR length.
Explanation: Prior to MySQL 5.0.3 (give or take a few point releases - I forget) a VARCHAR column could be 255 characters in length maximum, so VARCHAR(255) was often used as a default. Now, however, you can go up to 65,535 characters on VARCHAR, so if you're still using "255" then that seems arbitrary and not well thought out (or your schema is just old).
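For instance, on MySQL 5.0.3 or later a much longer declaration is perfectly legal (a minimal sketch; the table and column names are made up):

create table articles
(
  title varchar(500),   -- would have been rejected before MySQL 5.0.3
  body varchar(5000)    -- well within the modern VARCHAR ceiling
)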

It is better to define a limit: it reduces the default database size and is also helpful for validation purposes.

As far as I know, with varchar(20) you are saying that the field will contain no more than 20 characters.
First of all, determine an ideal range for the specified field depending on what it will hold (a name, an address, etc.). It's always more efficient to use as small a length as required.
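For example, here is a hedged sketch of sizing fields to the data they hold (the lengths are illustrative guesses, not rules):

create table customers
(
  first_name varchar(50),   -- generous for most first names
  phone varchar(20),        -- bounded by the formats you accept
  country_code char(2)      -- fixed-length codes suit CHAR
)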

The decision to use 20 characters is just as arbitrary as 255 unless you know your data.
You could use varchar(max), but that changes the way your data is stored and could impact performance; without knowing more about your application and the size/volume of the data, it's hard to give advice.

VARCHAR(255) is generally a bad choice unless you need that much space. You should always make fields the size they need to be and no more. There are several reasons for this. One is that the row size of a record has limits. Often you can create a row larger than the limits, but the first time you try to enter data that exceeds them, it will fail. This is a bad thing and should be avoided by using smaller field sizes. Larger field sizes also encourage the entry of bad information: if users know they have a lot of room in a field, they will enter notes in that field instead of the data. I have seen such gems as (and this is a genuine example from a past job) "Talk to the fat girl as the blonde is useless." in fields where the length was too long. You don't want to give room for junk to be put into a field if you can help it. Bad data in means bad data out.
Wider pages can also be slower to access, so it is in the database's best interest to limit field size.
Under no circumstances should you use nvarchar(max) or varchar(max) for string fields unless you intend for some records to contain more than 4000 characters. These fields cannot be indexed (in SQL Server; know your own database's limits when doing design), and using them indiscriminately is very bad and will cause a slower-than-slow database.
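To illustrate the indexing point, here is a sketch assuming SQL Server (the table and index names are invented):

create table contacts
(
  last_name varchar(50),
  notes varchar(max)
);

create index ix_contacts_last_name on contacts (last_name);   -- fine
-- create index ix_contacts_notes on contacts (notes);        -- fails: a (max) column cannot be an index key column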
Names are tricky and it can be hard to determine the size, so some people go big. But 25-50 is more reasonable than 255. It may vary depending on the kind of names you are storing; for instance, if corporations are mixed in with people's names, then the field will need to be wider. If you have a lot of foreign names to store, you need to know what the norm is for name lengths in those countries, as some countries have typically longer names than others. And remember, as far as first names are concerned, it makes a difference whether the person uses their middle names as well and whether there is anywhere to store them. This is especially true for people who have more than one middle name or who are using their maiden name as a middle name but still go by their other names, such as Mary Elizabeth Annette Von Middlesworth Jamison: you can see how hard a name like that is to break up into first, middle, and last, and the majority of the name might end up in the first-name column.

Related

Should I define a column type from actual length or nth power of 2 (SQL Server)?

Should I define a column type from the actual length or the nth power of 2?
In the first case, I have a table column that stores no more than 7 characters. Should I use NVARCHAR(8), since there may be an implicit conversion inside SQL Server that allocates 8 spaces and truncates automatically (heard somewhere)?
If not, NCHAR(7) or NCHAR(8): which should it be (assuming the fixed length is 7)?
Is there any performance difference between these two cases?
You should use the actual length of the string. Now, if you know that the value will always be exactly 7 characters, then use CHAR(7) rather than VARCHAR(7).
The reason you see powers-of-2 is for columns that have an indeterminate length -- a name or description that may not be fixed. In most databases, you need to put in some maximum length for the varchar(). For historical reasons, powers-of-2 get used for such things, because of the binary nature of the underlying CPUs.
Although I almost always use powers-of-2 in these situations, I can think of no real performance difference, with one exception: in some databases the actual length of a varchar(255) is stored using 1 byte, whereas a varchar(256) uses 2 bytes. That is a pretty minor difference, even when multiplied over millions of rows.

Is varchar(128) better than varchar(100)

Quick question. Does it matter, from the point of view of storing data, if I use decimal field limits or binary ones (say 16, 32, 64 instead of 10, 20, 50)?
I ask because I wonder if this will have anything to do with clusters on HDD?
Thanks!
VARCHAR(128) is better than VARCHAR(100) if you need to store strings longer than 100 bytes.
Otherwise, there is very little to choose between them; you should choose the one that better fits the maximum length of the data you might need to store. You won't be able to measure the performance difference between them. All else apart, the DBMS probably only stores the data you send, so if your average string is, say, 16 bytes, it will only use 16 (or, more likely, 17 - allowing 1 byte for storing the length) bytes on disk. The bigger size might affect the calculation of how many rows can fit on a page - detrimentally. So choosing the smallest size that is adequate makes sense - waste not, want not.
So, in summary, there is precious little difference between the two in terms of performance or disk usage, and aligning to convenient binary boundaries doesn't really make a difference.
If it were a C program, I'd spend some time thinking about that, too. But with a database I'd leave it to the DB engine.
DB programmers have spent a lot of time thinking about the best memory layout, so just tell the database what you need and it will store the data in a way that suits the DB engine best (usually).
If you want to align your data, you'll need exact knowledge of the internal data organization: How is the string stored? One, two, or four bytes to store the length? Is it stored as a plain byte sequence or encoded in UTF-8, UTF-16, or UTF-32? Does the DB need extra bytes to identify NULL or > MAXINT values? Maybe the string is stored as a NUL-terminated byte sequence; then one more byte is needed internally.
Also, with VARCHAR it is not necessarily true that the DB will always allocate 100 (or 128) bytes for your string. Maybe it stores just a pointer to where space for the actual data is.
So I'd strongly suggest using VARCHAR(100) if that is your requirement. If the DB decides to align it somehow, there's room for extra internal data, too.
The other way around: let's assume you use VARCHAR(128) and all things come together: the DB allocates 128 bytes for your data. Additionally it needs 2 more bytes to store the actual string length, which makes 130 bytes, and then it could be that the DB aligns the data to the next (let's say 32-byte) boundary: the actual data needed on disk is now 160 bytes 8-}
Yes but it's not that simple. Sometimes 128 can be better than 100 and sometimes, it's the other way around.
So what is going on? varchar only allocates space as necessary, so if you store 'hello world' in a varchar(100) it will take exactly the same amount of space as in a varchar(128).
The question is: If you fill up the rows, will you hit a "block" limit/boundary or not?
Databases store their data in blocks. These have a fixed size, for example 512 bytes (this value can be configured for some databases). So the question is: how many blocks does the DB have to read to fetch each row? Rows that span several blocks will need more I/O, so this will slow you down.
But again: This doesn't depend on the theoretical maximum size of the columns but on a) how many columns you have (each column needs a little bit of space even when it's empty or null), b) how many fixed width columns you have (number/decimal, char), and finally c) how much data you have in variable columns.

List of Best Practice MySQL Data Types

Is there a list of best practice MySQL data types for common applications. For example, the list would contain the best data type and size for id, ip address, email, subject, summary, description content, url, date (timestamp and human readable), geo points, media height, media width, media duration, etc
Thank you!!!
I don't know of any, so let's start one!
Numeric ID/auto_increment primary keys: use an unsigned integer. Do not use 0 as a value, and keep in mind the maximum value of the various sizes, i.e. don't use int if you don't need 4 billion values when the 16 million offered by mediumint will suffice.
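For instance (a sketch in MySQL syntax; the table is made up):

create table tags
(
  tag_id mediumint unsigned not null auto_increment,  -- tops out around 16.7 million values: plenty here
  name varchar(50),
  primary key (tag_id)
)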
Dates: unless you specifically need dates/times that are outside the supported range of MySQL's DATE and TIME types, use them! If you instead use unix timestamps, you have to convert them to use the built-in date and time functions. If your app needs unix timestamps, you can always convert the standard date and time data types on the way out using unix_timestamp().
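A quick sketch of that convert-on-the-way-out approach (the names are invented):

create table events
(
  event_id int unsigned not null auto_increment,
  occurred_at datetime,       -- native type inside the database
  primary key (event_id)
);

select unix_timestamp(occurred_at) from events;   -- epoch seconds for the app that wants them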
IP addresses: use inet_aton() and inet_ntoa(), since they compact an IP address into 4 bytes and give you the ability to do range searches that utilize indexes.
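For example (a sketch; the table and addresses are made up):

create table access_log
(
  ip int unsigned not null    -- 4 bytes via inet_aton()
);

insert into access_log (ip) values (inet_aton('192.168.1.10'));

select inet_ntoa(ip) from access_log
where ip between inet_aton('192.168.1.0') and inet_aton('192.168.1.255');   -- indexable range search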
Integer display width: you likely define your integers something like INT(4), but have been baffled by the fact that the (4) has no real effect on the stored numbers. In other words, you can store numbers like 999999 just fine. The reason is that for integers, (4) is the display width, and it only has an effect if used with the ZEROFILL modifier. Further, this is for display purposes only, so you could define a column as INT(4) ZEROFILL and store 99999. If you stored 999, the MySQL REPL (console) would output 0999 when you select this column.
In other words, if you don't need the ZEROFILL stuff, you can leave off the display width.
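To see the display width in action (a sketch; note that display widths and ZEROFILL are deprecated in MySQL 8.0, though they still work):

create table codes
(
  n int(4) zerofill
);

insert into codes values (999), (99999);

select n from codes;   -- displays 0999 and 99999: the (4) pads, it never truncates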
Money: Use the Decimal data type. Based on real-world production scenarios I recommend (19,8).
EDIT: My original recommendation was (19,4); however, I've recently run into a production issue where the client reported that they absolutely needed decimal with a "scale" of "8"; thus "4" wasn't enough and was causing improper tax calculations. I now recommend (19,8) based on a real-world scenario. I would love to hear stories needing a more granular scale.
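In column form, that recommendation looks like this (a sketch; the table name is invented):

create table payments
(
  amount decimal(19,8) not null   -- 19 digits of precision, 8 of them after the decimal point
)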

how to store an approximate number? (number is too small to be measured)

I have a table representing standards of alloys. The standard is partly based on the chemical composition of the alloys. The composition is presented in percentages. The percentage is determined by a chemical composition test. Sample data.
But sometimes, the lab cannot measure below a certain percentage. So they indicate that the element is present, but the percentage is less than they can measure.
I was confused about how to accurately store such a number in an SQL database. I thought to store the number with a negative sign. No element can have a negative composition, of course, but I can interpret this as less than the specified value. Another option is to add another column for each element!! The latter option I really don't like.
Any other ideas? It's a small issue if you think about it, but I think a crowd is always wiser. Somebody might have a neater solution.
Question updated:
Thanks for all the replies.
The test results come from different labs, so there is no common lower bound.
When the percentage of Titanium is less than 0.0004, for example, the number is still important; only the formula will differ slightly in this case.
Hence the value cannot be stored as NULL, and I don't know the lower bound for all values.
Tricky one.
Another possibility I thought of is to store it as a string. Any other ideas?
What you're talking about is a sentinel value. It's a common technique. Strings in most languages after all use 0 as a sentinel end-of-string value. You can do that. You just need to find a number that makes sense and isn't used for anything else. Many string functions will return -1 to indicate what you're looking for isn't there.
0 might work because if the element isn't there, there shouldn't even be a record. But you also face the problem that it might be mistaken for actually meaning 0. -1 is another option; it doesn't have that same problem, obviously.
Another column to indicate if the amount is measurable or not is also a viable option. The case for this one becomes stronger if you need to store different categories of trace elements (eg <1%, <0.1%, <0.01%, etc). Storing the negative of those numbers seems a bit hacky to me.
You could just store it as NULL, meaning that the value exists but is undefined.
Any arithmetic operation with a NULL yields a NULL.
Division by NULL is safe.
NULLs are ignored by the aggregation functions, so queries like this:
SELECT SUM(metal_percent), COUNT(metal_percent)
FROM alloys
GROUP BY metal
will give you the sum and the count of the actual, defined values, not taking the unfilled values into account.
I would use a threshold value which is at least one significant digit smaller than your smallest expected value. This way you can logically say that any value less than, say, 0.01 can be presented to your application as a "trace" amount. This remains easy to understand and gives you flexibility in determining where your threshold should lie.
Since the constraints on the values are well defined (you cannot have a negative composition), I would go for the "negative value to indicate less-than" approach. As long as the use of such sentinel values is sufficiently documented, it should be reasonably easy to implement and maintain.
An alternative but similar method would be to add 100 to the values, assuming that you can't get more than 100%. So <0.001 becomes 100.001.
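A hedged sketch of reading either convention back out (MySQL syntax; the table and column names are made up):

select element,
       case
         when percentage < 0 then concat('< ', -percentage)          -- negative-sign convention
         when percentage > 100 then concat('< ', percentage - 100)   -- add-100 convention
         else cast(percentage as char)
       end as reported
from compositions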
I would have a table modeling the certificate, in a one-to-many relation with another table storing the values for the elements. Then, in the elements table, I would have the value in one column and a flag ("less than") as a separate column.
Draft:
create table CERTIFICATES
(
  PK_ID integer,          -- primary key
  NAME varchar(128)       -- certificate name
)

create table ELEMENTS
(
  ELEMENT_ID varchar(2),      -- chemical symbol, e.g. 'Ti'
  CERTIFICATE_ID integer,     -- references CERTIFICATES.PK_ID
  CONCENTRATION number,       -- measured percentage
  MEASURABLE integer          -- flag: 0 = only a trace amount was detected
)
Depending on the database engine you're using, the types of the columns may vary.
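A possible read query against that draft (a MySQL-flavoured sketch; it assumes MEASURABLE = 0 marks a trace amount):

select element_id,
       case
         when measurable = 0 then concat('< ', concentration)   -- below the lab's detection limit
         else cast(concentration as char)
       end as reported_concentration
from ELEMENTS
where certificate_id = 1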
Why not add another column to store whether or not it's a trace amount?
This will allow you to save the amount that the trace is less than, too.
Since there is no common lowest threshold value and NULL is not acceptable, the cleanest solution now is to have a marker column which indicates whether there is a quantifiable amount or a trace amount present. A value of "Trace" would indicate to anybody reading the raw data that only a trace amount was present. A value of "Quantity" would indicate that you should check an amount column to find the actual quantity present.
I would have to warn against storing numerical values as strings. It will inevitably add additional pain, since you now lose the assertions a strong type definition gives you. When your application consumes the values in that column, it has to read the string to determine whether it's a sentinel value, a numeric value or simply some other string it can't interpret. Trying to handle data conversion errors at this point in your application is something I'm sure you don't want to be doing.
Another field seems like the way to go; call it 'MinMeasurablePercent'.

[My]SQL VARCHAR Size and Null-Termination

Disclaimer: I'm very new to SQL and databases in general.
I need to create a field that will store a maximum of 32 characters of text data. Does "VARCHAR(32)" mean that I have exactly 32 characters for my data? Do I need to reserve an extra character for null-termination?
I conducted a simple test and it seems that this is a WYSIWYG buffer. However, I wanted to get a concrete answer from people who actually know what they're doing.
I have a C[++] background, so this question is raising alarm bells in my head.
Yes, you have 32 characters at your disposal. SQL does not concern itself with null-terminated strings the way some programming languages do.
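A quick way to convince yourself (a sketch in MySQL syntax):

create table demo
(
  s varchar(32)
);

insert into demo values (repeat('x', 32));   -- all 32 characters fit; no terminator is reserved

select char_length(s) from demo;             -- returns 32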
Your VARCHAR specification size is the maximum size of your data, so in this case, 32 characters. However, VARCHAR is a dynamic type, so the actual physical storage used is only the size of your data, plus one or two bytes.
If you put a 10-character string into a VARCHAR(32), the physical storage will be 11 or 12 bytes (the manual will tell you the exact formula).
However, when MySQL is dealing with result sets (i.e. after a SELECT), 32 bytes will be allocated in memory for that field for every record.