SQL: same column value for all rows, updated frequently

What I want to achieve:
I have a table with an ID and the Credits of each individual ID.
I want to rate each ID as rate(ID) = Credit(ID) / sum(Credit(ID)), where the sum is taken over all IDs.
I will be updating the table quite frequently and want to keep sum(Credit(ID)) handy by creating another column (say sigmaID) that stores this sum, which should always hold exactly the same value in every row.
Whenever I change the Credit for an ID (say, add 100), I can simply apply the same operation to this column value (add 100).
Do I have to update sigmaID for all rows? Will that be efficient?
I would also like to periodically check that sigmaID is indeed sum(Credit(ID)), for consistency. Am I overdoing it? Will that be inefficient?
Is there any other approach to this (I am worried about efficiency)?
Kindly provide pure SQL queries, as I need to put all of this in an UPDATE trigger that will calculate this rating (and loads of other formulas involving other parameters of the ID). I may have access to a scripting language (PHP/Python), but I don't know for sure; hence the request for pure SQL.
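For concreteness, the operation described above might look like this (a minimal sketch; the table name credits and the ID value 42 are made up):

-- Add 100 credits to one ID...
UPDATE credits SET Credit = Credit + 100 WHERE ID = 42;
-- ...and apply the same delta to the shared total, which touches every row.
UPDATE credits SET sigmaID = sigmaID + 100;

-- Periodic consistency check: this should return no rows.
SELECT * FROM credits WHERE sigmaID <> (SELECT SUM(Credit) FROM credits);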

Unfortunately my English is very poor.
As I understand it, you want to keep all information about your column values, past and present?
You want to log it? If so, you can make a journal table and log everything you want into it with a trigger.
Best regards,
tato mumladze
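A minimal sketch of that journal-table idea, assuming MySQL syntax and a hypothetical credits(ID, Credit) table (the journal table and trigger names are made up):

CREATE TABLE credit_journal (
    ID INT,
    old_credit INT,
    new_credit INT,
    changed_at DATETIME
);

DELIMITER //
CREATE TRIGGER trg_credit_journal
AFTER UPDATE ON credits
FOR EACH ROW
BEGIN
    -- Record every change so past and present values stay available.
    INSERT INTO credit_journal (ID, old_credit, new_credit, changed_at)
    VALUES (OLD.ID, OLD.Credit, NEW.Credit, NOW());
END//
DELIMITER ;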

Related

How to Represent a Single Statistic in SQL Database?

I have an SQL table (SQLite database), Listing, that has a datetime field. For my program, I need to know the most recent datetime value my program has seen. However, I don't store all listings the program has seen in the database.
So my question is, what is the usual way to store data like "most recently seen object", which is a single record, into a database? Is there something more elegant than making another table that has one record with a datetime field?
First of all, you could save all records into your database and then take the MAX() suggested by @GordonLinoff. What is the problem with saving the data? If you want to keep the database size small, you can add a trigger that deletes the oldest rows during new INSERTs.
But I don't properly understand your example. Can you give us some part of the code?
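Both routes might look roughly like this (a sketch in SQLite syntax; the column name seen_at and the one-row table last_seen are hypothetical):

-- Option 1: derive the statistic from the stored rows.
SELECT MAX(seen_at) FROM Listing;

-- Option 2: the one-row table the question describes.
CREATE TABLE last_seen (seen_at DATETIME);
-- keep it up to date whenever a newer listing is observed
UPDATE last_seen SET seen_at = :new_time WHERE seen_at < :new_time;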

How to retrieve only updated/new records since the last query in SQL?

I was asked to design a class for caching SQL query results. Calling the class' query method will query and cache the entire set of results the first time; afterward, each subsequent query will retrieve only the updated portion and merge the result into the cache.
If the class is required to be generic, i.e. NO knowledge about the db and the tables, do you have any idea?
Is it possible, and how to retrieve only updated/new records since the last query?
Neither MySQL nor any other SQL-based database (as far as I know) stores the "last update time" or "insert time" of a row anywhere, at least not anywhere you can query (parsing the binary log is probably not something you're looking to do).
To get "new records" you will need to either
Query for all records with an autoincrement value higher than the last known max
Add a datetime/timestamp column to the table to store the insert/update time of the record, and query based on that (a sketch of this follows below)
You could consider having your code create its own table that would store a record every time an UPDATE or INSERT happened on another table, and then add triggers to all those other tables that would populate your table.
But that might be a bit much.
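A minimal sketch of the second option (MySQL syntax; the table name items and the column name updated_at are hypothetical):

-- Track the modification time automatically on insert and update.
ALTER TABLE items
    ADD COLUMN updated_at TIMESTAMP NOT NULL
    DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP;

-- The cache remembers the largest updated_at it has seen and asks only for newer rows
-- (the literal below stands in for that last-seen value).
SELECT * FROM items WHERE updated_at > '2009-06-01 12:00:00';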
Either your cache class has to be notified of all updates, or you need some kind of expiration.
The only other option is to query the database to check, but then you kind of lose the point of the cache :)
The above is true for a "generic" cache. If you implement a specialized cache for a particular table you might be able to exploit DML patterns.

suggest a method for updating data in many tables with random data?

I've got about 25 tables that I'd like to update with random data that's picked from a subset of data. I'd like the data to be picked at random but meaningful -- like changing all the first names in a database to new first names at random. So I don't want random garbage in the fields, I'd like to pull from a temp table that's populated ahead of time.
The only way I can think of to do this is with a loop and some dynamic sql.
insert pick-from names into a temp table with an id field
foreach table name in a list of tables:
    build dynamic sql that updates all first-name fields to a name picked
    at random from the temp table, based on rand() * max(id) from the temp table
But anytime I think "loop" in SQL I figure I'm doing something wrong.
The database in question has a lot of denormalized tables in it, so that's why I think I'd need a loop (the first name fields are scattered across the database).
Is there a better way?
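One iteration of that loop might look roughly like this (a sketch only, SQL Server syntax; #Names, dbo.People, FirstName, and Id are hypothetical names):

-- #Names(Id, Name) is the pre-populated pick list of replacement first names.
UPDATE p
SET FirstName = n.Name
FROM dbo.People AS p
CROSS APPLY (
    SELECT TOP (1) Name
    FROM #Names
    WHERE p.Id IS NOT NULL   -- reference the outer row so the pick is re-evaluated per row
    ORDER BY NEWID()         -- random pick
) AS n;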
Red Gate have a product called SQL Data Generator that can generate fake names and other fake data for testing purposes. It's not free, but they have a trial so you can test it out, and it might be faster than trying to do it yourself.
(Disclaimer: I have never used this product, but I've been very happy with some of their other products.)
I wrote a stored procedure to do something like this a while back. It is not as good as the Red Gate product and only does names, but if you need something quick and dirty, you can download it from
http://www.joebooth-consulting.com/products/
The script name is GenRandNames.sql
Hope this helps
Breaking the 4th wall a bit by answering my own question.
I did try this as a SQL script. What I learned is that SQL pretty much sucks at random. The script was slow and weird -- functions referencing views that were only created for the script and couldn't be made in tempdb.
So I made a console app.
Generate your random data; this is easy to do with the Random class (just remember to only use one instance of Random).
Figure out what columns and table names you'd like to update via a script that looks at information_schema.
Get the IDs for all the tables that you're going to update, if possible (and wow, will it be slow if you have a large table that doesn't have any good PKs).
Update each table 100 rows at a time. Why 100? No idea. Could be 1000. I just picked a number. A Dictionary is handy here: pick a random ID from the dict using the Random class.
Wash, rinse, repeat. I updated about 2.2 million rows in an hour this way. Maybe it could be faster, but it was doing many small updates so it didn't get in anyone's way.

SQL Server 2005 Computed Column Result From Aggregate Of Another Table Field's Value

Sorry for the long question title.
I guess I'm on to a loser with this one, but on the off chance...
Is it possible to make a computed column in one table the result of an aggregate function applied to a field in another table?
i.e.
You have a table called 'mug', this has a child called 'color' (which makes my UK head hurt but the vendor is from the US, what you going to do?) and this, in turn, has a child called 'size'. Each table has a field called sold.
The size.sold increments by 1 for every mug of a particular colour and size sold.
You want color.sold to be an aggregate of SUM size.sold WHERE size.colorid = color.colorid
You want mug.sold to be an aggregate of SUM color.sold WHERE color.mugid = mug.mugid
Is there any way to make mug.sold and color.sold just work themselves out, or am I going to have to go mucking about with triggers?
You can't have a computed column directly reference a different table, but you can have it reference a user-defined function. Here's a link to an example of implementing a solution like this.
http://www.sqlservercentral.com/articles/User-Defined+functions/complexcomputedcolumns/2397/
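A minimal sketch of that approach for the mug/color example (SQL Server syntax; the function name and the sold_total column are made up to avoid clashing with the existing sold column):

CREATE FUNCTION dbo.MugSoldTotal (@mugid INT)
RETURNS INT
AS
BEGIN
    -- Aggregate the child table; the computed column below just calls this.
    RETURN (SELECT SUM(c.sold) FROM dbo.color AS c WHERE c.mugid = @mugid);
END;
GO

ALTER TABLE dbo.mug
    ADD sold_total AS dbo.MugSoldTotal(mugid);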
No, it is not possible to do this. A computed column can only be derived from the values of other fields on the same row. To calculate an aggregate off another table you need to create a view.
If your application needs to show the statistics ask the following questions:
Is it really necessary to show this in real time? If so, why? If it really is necessary, then you would have to use triggers to update a denormalised table (see the short Wikipedia article on denormalisation). Triggers will affect write performance on table updates and rely on the triggers being active.
If it is only necessary for reporting purposes, you could do the calculation in a view or a report.
If it is necessary to support frequent ad-hoc reports you may be into the realms of a data mart and overnight ETL process.
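Going back to the view route mentioned above, the aggregates from the question could be expressed along these lines (a sketch using the question's table names; the view names are made up):

CREATE VIEW color_sales AS
    SELECT c.colorid, SUM(s.sold) AS sold
    FROM color AS c
    JOIN size AS s ON s.colorid = c.colorid
    GROUP BY c.colorid;

CREATE VIEW mug_sales AS
    SELECT m.mugid, SUM(s.sold) AS sold
    FROM mug AS m
    JOIN color AS c ON c.mugid = m.mugid
    JOIN size AS s ON s.colorid = c.colorid
    GROUP BY m.mugid;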

'active' flag or not?

OK, so practically every database-based application has to deal with "non-active" records: either soft deletions or marking something as "to be ignored". I'm curious as to whether there are any radical alternative thoughts on an 'active' column (or a status column).
For example, if I had a list of people
CREATE TABLE people (
    id INTEGER PRIMARY KEY,
    name VARCHAR(100),
    active BOOLEAN,
    ...
);
That means to get a list of active people, you need to use
SELECT * FROM people WHERE active=True;
Does anyone suggest that non-active records be moved off to a separate table and, where appropriate, a UNION done to join the two?
Curiosity striking...
EDIT: I should make clear, I'm coming at this from a purist perspective. I can see how data archiving might be necessary for large amounts of data, but that is not where I'm coming from. If you do a SELECT * FROM people it would make sense to me that those entries are in a sense "active"
Thanks
You partition the table on the active flag, so that active records are in one partition, and inactive records are in the other partition. Then you create an active view for each table which automatically has the active filter on it. The database query engine automatically restricts the query to the partition that has the active records in it, which is much faster than even using an index on that flag.
Here is an example of how to create a partitioned table in Oracle. Oracle doesn't have boolean column types, so I've modified your table structure for Oracle purposes.
CREATE TABLE people
(
    id NUMBER(10),
    name VARCHAR2(100),
    active NUMBER(1)
)
PARTITION BY LIST (active)
(
    PARTITION active_records VALUES (1),
    PARTITION inactive_records VALUES (0)
);
If you wanted to you could put each partition in different tablespaces. You can also partition your indexes as well.
Incidentally, this seems to be a repeat of an earlier question; as a newbie I need to ask, what's the procedure for dealing with unintended duplicates?
Edit: As requested in comments, provided an example for creating a partitioned table in Oracle
Well, to ensure that you only draw active records in most situations, you could create views that only contain the active records. That way it's much easier to not leave out the active part.
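For example (a sketch against the people table from the question):

CREATE VIEW active_people AS
    SELECT * FROM people WHERE active = True;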
We use an enum('ACTIVE','INACTIVE','DELETED') in most tables so we actually have a 3-way flag. I find it works well for us in different situations. Your mileage may vary.
Moving inactive stuff is usually a stupid idea. It's a lot of overhead with lots of potential for bugs, everything becomes more complicated, like unarchiving the stuff etc. What do you do with related data? If you move all that, too, you have to modify every single query. If you don't move it, what advantage were you hoping to get?
That leads to the next point: WHY would you move it? A properly indexed table requires one additional lookup when the size doubles. Any performance improvement is bound to be negligible. And why would you even think about it until the distant future time when you actually have performance problems?
I think looking at it strictly as a piece of data then the way that is shown in the original post is proper. The active flag piece of data is directly dependent upon the primary key and should be in the table.
That table holds data on people, irrespective of the current status of their data.
The active flag is sort of ugly, but it is simple and works well.
You could move them to another table as you suggested. I'd suggest looking at the percentage of active / inactive records. If you have over 20 or 30 % inactive records, then you might consider moving them elsewhere. Otherwise, it's not a big deal.
Yes, we would. We currently have the "active='T/F'" column in many of our tables, mainly to show the 'latest' row. When a new row is inserted, the previous T row is marked F to keep it for audit purposes.
Now, we're moving to a 2-table approach, when a new row is inserted, the previous row is moved to an history table. This give us better performance for the majority of cases - looking at the current data.
The cost is slightly more than the old method: previously you had to update and insert, now you have to insert and update (i.e. instead of inserting a new T row, you modify the existing row with all the new data), so the cost is just that of passing in a whole row of data instead of just the changes. That's hardly going to have any effect.
The performance benefit is that your main table's index is significantly smaller, and you can optimise your tablespaces better (they won't grow quite so much!)
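A rough sketch of that two-table flow for a single change (hypothetical names and values; people_history mirrors the main table's structure):

-- Move the current row into the history table...
INSERT INTO people_history (id, name, active)
    SELECT id, name, active FROM people WHERE id = 42;
-- ...then overwrite the main row in place with the new data.
UPDATE people SET name = 'New Name' WHERE id = 42;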
Binary flags like this in your schema are a BAD idea. Consider the query
SELECT count(*) FROM users WHERE active=1
Looks simple enough. But what happens when you have a large number of users, so many that adding an index to this table would be required? Again, it looks straightforward:
ALTER TABLE users ADD INDEX index_users_on_active (active)
EXCEPT!! This index is useless because the cardinality of this column is exactly two! Any database query optimiser will ignore this index because of its low cardinality and do a table scan.
Before filling up your schema with helpful flags consider how you are going to access that data.
https://stackoverflow.com/questions/108503/mysql-advisable-number-of-rows
We use active flags quite often. If your database is going to be very large, I could see the value in migrating inactive values to a separate table, though.
You would then only require a union of the tables when someone wants to see all records, active or inactive.
In most cases a binary field indicating deletion is sufficient. Often there is a clean up mechanism that will remove those deleted records after a certain amount of time, so you may wish to start the schema with a deleted timestamp.
Moving off to a separate table and bringing them back up takes time. Depending on how many records go offline and how often you need to bring them back, it might or might not be a good idea.
If they mostly don't come back once they are buried, and are only used for summaries/reports/whatever, then it will make your main table smaller and your queries simpler, and probably faster.
We use both methods for dealing with inactive records. The method we use depends on the situation. For records that are essentially lookup values, we use the Active bit field. This allows us to deactivate entries so they won't be used, while still maintaining data integrity with relations.
We use the "move to separation table" method where the data is no longer needed and the data is not part of a relation.
The situation really dictates the solution, methinks:
If the table contains users, then several "flag" fields could be used. One for Deleted, Disabled etc. Or if space is an issue, then a flag for disabled would suffice, and then actually deleting the row if they have been deleted.
It also depends on policies for storing data. If there are policies for keeping data archived, then a separate table would most likely be necessary after any great length of time.
No - this is a pretty common thing - couple of variations depending on specific requirements (but you already covered them):
1) If you expect to have a whole BUNCH of data - like multiple terabytes or more - not a bad idea to archive deleted records immediately - though you might use a combination approach of marking as deleted then copying to archive tables.
2) Of course the option to hard-delete a record still exists - though we developers tend to be data pack-rats - I suggest that you look at the business process and decide if there is any need to even keep the data. If there is, do so; if there isn't, you should probably feel free just to throw the stuff away... again, according to the specific business scenario.
From a 'purist perspective' the relational model doesn't differentiate between a view and a table - both are relations. So the use of a view with the discriminator is perfectly meaningful and valid, provided the entities are correctly named, e.g. Person/ActivePerson.
Also, from a 'purist perspective' the table should be named person, not people as the name of the relation reflects a tuple, not the entire set.
Regarding indexing the boolean, why not:
ALTER TABLE users ADD INDEX index_users_on_active (id, active) ;
Would that not improve the search?
However I don't know how much of that answer depends on the platform.
This is an old question, but for those searching on low-cardinality/low-selectivity indexes, I'd like to propose the following approach that avoids partitioning, secondary tables, etc.:
The trick is to use a "dateInactivated" column that stores the timestamp of when the record was inactivated/deleted. As the name implies, the value is NULL while the record is active; once the record is inactivated, the system datetime is written in. Thus, an index on that column ends up having high selectivity as the number of "deleted" records grows, since each such record will have an (almost) unique value.
Then your query becomes:
SELECT * FROM people WHERE dateInactivated is NULL;
The index will pull in just the right set of rows that you care about.
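A minimal sketch against the people table from the question (MySQL-style syntax; the index name is made up):

ALTER TABLE people ADD COLUMN dateInactivated DATETIME NULL;
CREATE INDEX idx_people_dateInactivated ON people (dateInactivated);

-- Soft-delete by stamping the time instead of flipping a flag.
UPDATE people SET dateInactivated = CURRENT_TIMESTAMP WHERE id = 42;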
Filtering a big table on a bit flag is not really good in terms of performance. In the case where 'active' indicates virtual deletion, you can create a 'TableName_deleted' table with the same structure and move the deleted data there using a delete trigger.
That solution helps with performance and simplifies data queries.
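A sketch of such a trigger (SQL Server syntax; people_deleted is a shadow table with the same structure as people):

CREATE TRIGGER trg_people_archive_deleted ON people
AFTER DELETE
AS
BEGIN
    -- Copy every row removed from people into the shadow table.
    INSERT INTO people_deleted (id, name, active)
        SELECT id, name, active FROM deleted;
END;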