I am new to SQL and have started learning Postgres. I want to create a database in which one of the columns of a table is email. Since I will be using the email to uniquely identify a person for sign-up and login, I used the UNIQUE constraint while creating the table so that if an account already exists with that email, an error is raised. But will this be efficient in large databases?
Here is the screenshot of the SQL command I have used.
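For illustration, the table definition might look something like this (table and column names here are assumed, not taken from the screenshot):

CREATE TABLE users (
    id         bigserial PRIMARY KEY,
    email      text NOT NULL UNIQUE,   -- PostgreSQL backs this with a unique index
    password   text NOT NULL,
    created_at timestamptz NOT NULL DEFAULT now()
);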
What do you mean by efficient? A unique constraint or index adds overhead to prevent duplicates. This overhead is incurred on inserts and updates that touch the affected columns.
How much overhead? Well, uniqueness is validated using an index. On most databases, looking up a value in an index is quite fast and usually does not pose a significant performance issue. Of course, such overhead can sometimes matter, particularly on very busy databases with lots of data changes (think dozens or hundreds of transactions per second).
In general, the size of the table is going to have little impact on performance -- indexes do slow down a little, but hardly noticeably, as tables get bigger, assuming there is sufficient memory.
On the other hand, the cost of not having data integrity can be much larger -- affecting the performance and quality of queries that run on the data.
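To make the mechanics concrete, here is roughly what happens on a duplicate insert against the sketch above (the constraint name follows PostgreSQL's default naming and is assumed):

INSERT INTO users (email, password) VALUES ('a@example.com', '...');
-- a second insert with the same email is rejected via the unique index:
INSERT INTO users (email, password) VALUES ('a@example.com', '...');
-- ERROR:  duplicate key value violates unique constraint "users_email_key"

That error is the signal the application can catch to tell the user the account already exists.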
Related
Given a live table in SQL with some non-trivial number of columns/entries, with one or more applications actively querying it, what would be the effect of introducing a new index on some column of this table? What takes priority? Serving the query, or constructing the index? Put another way, would setting up the index be experienced by the querying applications as a delay in getting their responses?
It is possible to use the database while indexing is taking place, but its effect on performance is nearly impossible for us to say. A great deal about the optimizer is magic to anyone who hasn't worked on it themselves, and the answer could change greatly depending on which RDBMS you're using. On top of that, your hardware will play a huge part in the answer.
That being said, if you're primarily reading from the table, there's a good chance you won't see a major performance hit, provided your system has the IO/CPU capacity to handle both tasks at the same time. Inserting, however, will be slowed down considerably.
Whether this impact is problematic will depend on your current system load, size of your tables, and what exactly it is you're indexing. Generally speaking, if you have a decent server, a lowish load, and a table with only a few million rows or less, I wouldn't expect to see a performance hit at all.
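If the build itself is the worry, most engines offer a way to create the index without taking long locks on the table; the exact option depends on the RDBMS (the statements below are illustrative, with assumed table and column names):

-- PostgreSQL: build without blocking concurrent writes
CREATE INDEX CONCURRENTLY idx_orders_customer ON orders (customer_id);

-- SQL Server (Enterprise edition): online index build
CREATE INDEX IX_Orders_CustomerId ON dbo.Orders (CustomerId) WITH (ONLINE = ON);

Both still consume IO and CPU while they run, so the point above about system headroom still applies.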
I have a database with tables that grow every day. I cannot predict which tables are going to grow and which are not as I'm not the one who is putting the data into them.
Is there a way to find tables that need indexes at a particular point in time? Is there a way, in SQL Server, to notify me if a database needs tuning on certain tables?
This is a product we have deployed at different client locations and we cannot go onto their servers every time to check if they have a performance issue. What I was thinking about is something that can notify me if there are performance issues on certain tables, so as the new patches go to the clients we can add these indexes or tuned queries.
After referring to "Insertion of data after creating index on empty table" and "creating unique index after inserting data on oracle?", I'm not willing to create indexes while installing the databases, or while the tables are empty or have only a few rows.
As per my understanding, we should not create indexes on smaller tables, as they can affect write performance.
This is only a real concern if you're bulk loading or otherwise generating a hundred million records each day and write performance is a problem. Indexes do increase write times because they have to be updated when data is written, but unless you're running on a potato or running very high loads it's unlikely to be a problem. You'd know it was a problem before you encountered it.
If we're talking about small tables (less than 100 pages) then it's much more likely that indexes won't be useful because the data set is so small, but you shouldn't be concerned about impacting write performance.
Overall, your application should have indexes that support the queries that you expect should be run in your unit testing and staging. You will need feedback from your customers or clients, but until you really know how people use their data, you're going to have to make a best guess.
The general question of "How do I know what indexes I need when I don't know what queries will be run?" is better suited to DBA Stack Exchange. Briefly, you'll need to use dynamic management views for that. The three missing index dynamic views can be used for this. The example query given isn't horrible:
SELECT mig.*, statement AS table_name,
       column_id, column_name, column_usage
FROM sys.dm_db_missing_index_details AS mid
CROSS APPLY sys.dm_db_missing_index_columns (mid.index_handle)
INNER JOIN sys.dm_db_missing_index_groups AS mig
    ON mig.index_handle = mid.index_handle
ORDER BY mig.index_group_handle, mig.index_handle, column_id;
You shouldn't just blindly follow what this view says, however. It's a good lead on what to look at, but you have to look at the column order and queries actually being used to tell.
You should also monitor index usage statistics and examine how much and in what way indexes are used compared to how much they have to be updated. Indexes that are updated a million times a day but are used once or twice should be considered for removal.
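A rough sketch of that comparison using sys.dm_db_index_usage_stats (current database only; the thresholds are up to you):

SELECT OBJECT_NAME(ius.object_id)                          AS table_name,
       i.name                                              AS index_name,
       ius.user_seeks + ius.user_scans + ius.user_lookups  AS reads,
       ius.user_updates                                    AS writes
FROM sys.dm_db_index_usage_stats AS ius
INNER JOIN sys.indexes AS i
    ON i.object_id = ius.object_id
   AND i.index_id  = ius.index_id
WHERE ius.database_id = DB_ID()
ORDER BY writes DESC, reads ASC;

Indexes near the top of that ordering (heavily written, rarely read) are the removal candidates mentioned above.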
You will also want to monitor query stats to look for queries that run for a long time. This may be poor development on the part of your client, but can also be a sign of design problems.
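sys.dm_exec_query_stats is the usual starting point for that; for instance, something like:

SELECT TOP (20)
       qs.execution_count,
       qs.total_elapsed_time / qs.execution_count AS avg_elapsed_microseconds,
       st.text                                    AS query_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY avg_elapsed_microseconds DESC;

This returns the whole batch text rather than individual statements, but it is enough to spot the long runners.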
This is not even a comprehensive overview of things to look for, however. There's a lot to database maintenance and operations. That's why DBAs make a good living. This is just the tip of the iceberg. Just the tip for indexes, even.
What I'd do if you want to maintain this is consider asking your customers to allow you to send feedback for performance analysis. Set up a broker that monitors the management views and sends compiled and sanitized information back to yourselves. You'll need to be very careful about what you send because you don't want to be sending actual customer data, of course.
Keep in mind that dynamic management views typically reset when the instance does, so the results will not typically represent the entire lifespan of the database.
I have to create a data warehouse (a star schema) and the customer wants it in SQL 2014 in-memory. My understanding is that in-memory has a lot of limitations with FK constraints, indexes, etc. These are crucial for us, because our fact table volumes are in the millions. As an alternative, I was thinking of suggesting that we create a bunch of de-normalized tables and join them in SQL for reporting, and not go with a Kimball DWH. I have around 9 transaction and 4 master tables.
Any better suggestion or alternative to go around this?
Normally when you say in-memory with SQL Server, it means the in-memory oltp (Hekaton), which is designed for specific situations, mainly to handle bottlenecks in locking and latching. I would assume that in this case that's not what you mean.
Microsoft also uses the in-memory name for clustered columnstore, which to my mind at least makes things quite confusing. A clustered columnstore index is designed for data warehouses; instead of the normal row-based approach, it stores the data in column format. If you have Enterprise edition, it's at least worth trying for fact tables. You should get significant space savings compared to normal compressed tables (my fact tables shrank by 75% - 90% compared to page-compressed row store) -- which of course helps a lot in terms of what fits into the cache, and performance should be better too, though a lot depends on your data, database structure and your queries.
There are quite a few restrictions too, the biggest ones probably being that you can't have unique indexes or primary / foreign keys. These restrictions will be removed in SQL Server 2016, so if you can wait until then, or possibly upgrade once it's available, that might not be such a big issue.
You mentioned that there is no support for indexes. That is true, but you don't need other indexes with clustered columnstore, because the data is stored in column format and is highly compressed.
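For reference, converting a fact table to clustered columnstore is a one-liner once the table exists; a minimal sketch with assumed names (SQL Server 2014, Enterprise edition):

CREATE TABLE dbo.FactSales (
    DateKey    int   NOT NULL,
    ProductKey int   NOT NULL,
    StoreKey   int   NOT NULL,
    Quantity   int   NOT NULL,
    Amount     money NOT NULL
);

-- Store the whole table in columnar, compressed format
CREATE CLUSTERED COLUMNSTORE INDEX cci_FactSales ON dbo.FactSales;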
I found a posting on the MySQL forums from 2005, but nothing more recent than that. Based on that, it's not possible. But a lot can change in 3-4 years.
What I'm looking for is a way to have an index over a view but have the table that is viewed remain unindexed. Indexing hurts the writing process and this table is written to quite frequently (to the point where indexing slows everything to a crawl). However, this lack of an index makes my queries painfully slow.
I don't think MySQL supports materialized views, which is what you would need, but it wouldn't help you in this situation anyway. Whether the index is on the view or on the underlying table, it would need to be updated at some point during a write to the underlying table, so it would still cause the write-speed issues.
Your best bet would probably be to create summary tables that get updated periodically.
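A minimal sketch of that approach, assuming a busy source table and a MySQL event for the refresh (table and column names are made up):

CREATE TABLE daily_totals (
    summary_date DATE          NOT NULL PRIMARY KEY,
    row_count    INT           NOT NULL,
    total        DECIMAL(12,2) NOT NULL
);

-- Rebuild the summary on a schedule (requires the event scheduler to be enabled)
CREATE EVENT refresh_daily_totals
ON SCHEDULE EVERY 1 HOUR
DO
  REPLACE INTO daily_totals (summary_date, row_count, total)
  SELECT DATE(created_at), COUNT(*), SUM(amount)
  FROM busy_table
  GROUP BY DATE(created_at);

Queries then read from daily_totals, which can carry whatever indexes you like without touching the write path of busy_table.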
Have you considered abstracting your transaction processing data from your analytical processing data so that they can both be specialized to meet their unique requirements?
The basic idea is that you have one version of the data that is regularly modified; this is the transaction-processing side and calls for heavy normalization and light indexing so that write operations are fast. A second version of the data is structured for analytical processing and tends to be less normalized and more heavily indexed for fast reporting operations.
Data structured around analytical processing is generally built around the cube methodology of data warehousing, being composed of fact tables that represent the sides of the cube and dimension tables that represent the edges of the cube.
Flexviews supports materialized views in MySQL by tracking changes to underlying tables and updating the table which functions as a materialized view. This approach means that SQL supported by the view is a bit restricted (as the change logging routines have to figure out which tables it should track for changes), but as far as I know this is the closest you can get to materialized views in MySQL.
Do you only want one indexed view? It's unlikely that writing to a table with only one index would be that disruptive. Is there no primary key?
If each record is large, you might improve performance by figuring out how to shorten it. Or shorten the length of the index you need.
If this is a write-only table (i.e. you don't need to do updates), it can be deadly in MySQL to start archiving it, or otherwise deleting records (and index keys), requiring the index to start filling (reusing) slots from deleted keys, rather than just appending new index values. Counterintuitive, but you're better off with a larger table in this case.
In a recent project the "lead" developer designed a database schema where "larger" tables would be split across two separate databases with a view on the main database which would union the two separate database-tables together. The main database is what the application was driven off of so these tables looked and felt like ordinary tables (except some quirky things around updating). This seemed like a HUGE performance problem. We do see problems with performance around these tables but nothing to make him change his mind about his design. Just wondering what is the best way to do this, or if it is even worth doing?
I don't think that you are really going to gain anything by partitioning the table across multiple databases in a single server. All you have essentially done there is increased the overhead in working with the "table" in the first place by having several instances (i.e. open in two different DBs) of it under a single SQL Server instance.
How large a dataset do you have? I have a client with a 6 million row table in SQL Server that contains 2 years' worth of sales data. They use it transactionally and for reporting without any noticeable speed problems.
Tuning the indexes and choosing the correct clustered index is crucial to performance of course.
If your dataset is really large and you are looking to partition, you will get more bang for your buck partitioning the table across physical servers.
Partitioning is not something to be undertaken lightly as there can be many subtle performance implications.
My first question is are you referring simply to placing larger table objects in separate filegroups (on separate spindles) or are you referring to data partitioning inside of a table object?
I suspect that the situation described is an attempt to have the physical storage of certain large tables on different spindles from the rest of the tables. In this case, adding the extra overhead of separate databases, losing the ability to enforce referential integrity across databases, and accepting the security implications of enabling cross-database ownership chaining provides no benefit over using multiple filegroups within a single database. If, as is quite possible, the separate databases you refer to are not even stored on separate spindles but all sit on the same spindle, then you negate even the slight performance benefit you could have gained by physically separating your disk activity, and have received absolutely no benefit.
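A rough sketch of the filegroup approach (database name, file path and table are assumed):

ALTER DATABASE SalesDb ADD FILEGROUP LargeTables;

ALTER DATABASE SalesDb
ADD FILE (
    NAME = SalesDb_Large1,
    FILENAME = 'E:\Data\SalesDb_Large1.ndf',
    SIZE = 10GB
) TO FILEGROUP LargeTables;

-- The table (and its clustered index) now live on the new filegroup
CREATE TABLE dbo.BigTable (
    Id      int IDENTITY(1,1) NOT NULL PRIMARY KEY,
    Payload varchar(8000)     NULL
) ON LargeTables;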
I would suggest that, instead of using additional databases to hold large tables, you look into the Filegroup topic in SQL Server Books Online, or for a quick review see this article:
If you are interested in data partitioning (including partitioning into multiple filegroups), then I recommend reading articles by Kimberly Tripp, who gave an excellent presentation about the improvements available when SQL Server 2005 came out. A good place to start is this whitepaper.
Which version of SQL Server are you using? SQL Server 2005 has partitioned tables, but in 2000 (or 7.0) you needed to use partition views.
Also, what was the reasoning for putting the table partitions in a separate database?
When I've had to partition tables in the past (pre-2005), it's usually by a date column or something similar, with a view over the various partitions. Books Online has a section that talks about how to do this and all of the rules around it. You need to follow the rules to make it work how it's supposed to work.
The key thing to remember is that your partitioning column must be part of the primary key and you want to try to always use that column in any access against the table so that the optimizer can ignore partitions that shouldn't be affected by the query.
Look up "partitioned table" in MSDN and you should be able to find a more complete tutorial for SQL Server 2005 partitioned tables as well as advice on how to set them up for maximum performance.
Are you asking about best practices in terms of database design, or convincing your lead to change his mind? :)
In terms of design... Back in the goode olde days, vertical partitioning was sometimes needed to work around database engine limitations, where the number of columns in a table was a hard limit, like 255 columns. These days the main benefits are purely for performance: putting rarely used columns, or blobs on a separate disk array. But if you're regularly pulling things from both tables it will likely be a loss. It sounds like your lead is suffering from a case of premature optimisation.
In terms of telling your lead he's wrong... that requires diplomacy. If he's aware of mutterings of discontent about performance, a benchmark is probably the best way to show the difference.
Create a new physical table somewhere (e.g. SELECT * INTO t1 FROM view1 on SQL Server, or CREATE TABLE t1 AS SELECT * FROM view1 on databases that support it) and then run some lengthy batch against both the vertically partitioned table and your new table. If it's as bad as you say, the difference should be evident.
But this too may be premature optimisation. Find out what the end-users think of the performance. If the performance is good enough, for some definition of good, then don't fix what ain't broke.
There is a definite benefit to table partitioning (regardless of whether it's on the same or different filegroups / disks). If the partition column is correctly selected, you'll find that your queries hit only the required partitions. So imagine you have 100 million records (I've partitioned tables much bigger than that - about 20+ billion rows): if, for the most part, more than 70% of your data access touches only a certain category, timeline or type of data, then it helps to keep the most accessed data in a separate partition. Plus, you can align the partitions with separate filegroups on various types of disks (SATA, Fibre Channel, SSDs) so that the most accessed/busy data sits on the fastest storage and the rarely accessed data sits on slower disks.
SQL Server's partitioning ability is more limited than Oracle's, though. You can choose only one column for partitioning (even in SQL 2008), so you have to choose that column wisely, ideally one that also appears in most of your frequent queries. For the most part, people find it easy to partition by a date column. However, although that seems logical, if your queries do not have that column in their conditions you won't gain much benefit from partitioning (in other words, your query will hit all the partitions regardless).
It's much easier to partition for data warehouse/data mining type databases than OLTP as most DW database queries are limited by time period.
That's why, given the volume of data handled by databases these days, it's wise to design the application in such a way that every query is limited by some broader group such as time or geographical location, so that when such columns are chosen for partitioning you gain maximum benefit.
I would disagree with the assumption that nothing can be gained by partitioning.
If the partition data is physically and logically aligned, then the potential IO of queries should be dramatically reduced.
For example, we have a table with a batch field stored as an INT.
If we partition the data by this field and then re-run a query for a particular batch, we can run SET STATISTICS IO ON before and after partitioning and see a reduction in IO.
If we have a million rows per partition and each partition is written to a separate device, the query should be able to eliminate the nonessential partitions.
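A minimal illustration of that check (table and column names are assumed):

SET STATISTICS IO ON;

-- Filtering on the partitioning column lets the optimizer skip
-- partitions that cannot contain matching rows.
SELECT COUNT(*)
FROM dbo.BatchData
WHERE BatchId = 42;

SET STATISTICS IO OFF;

Comparing the logical reads reported before and after partitioning shows whether elimination is actually happening.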
I've not done a lot of partitioning on SQL Server, but I do have experience of partitioning on Sybase ASE, where this is known as partition elimination. When I have time I'm going to test out the scenario on a SQL Server 2005 machine.