How to structure database of API usage history - SQL

I have a database of users for a web API, but I also want to store usage history for each user, i.e. page request count, data volumes, etc. What is the best way to implement this in terms of database structure? My initial thought was to retain the main table, but then create a history table for each user. That seems horribly impractical, however. My gut feeling is that I probably need one separate table for usage history, but I am unclear as to how to structure it.
I am using SQLite.

For an event logging model (which is what you want), I can recommend two options.
One table, let's call it activity_log:
CREATE TABLE activity_log (
    id         INTEGER PRIMARY KEY,
    user_id    INTEGER NOT NULL,
    event_type VARCHAR(10),
    event_time TIMESTAMP
);
For each event in your system affecting a user, you insert a record into this table (I believe the column names are self-explanatory). I believe SQLite doesn't provide a native TIMESTAMP type, so you'll have to handle the storage format in your application code. What this design leaves you with is a table that has the potential to grow very large, but it will give you fine-grained statistics. SQLite doesn't support clustered indexes, but there are some options here that will help you out with performance tuning.
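A minimal sketch of how that table gets used, assuming the schema above; the 'page_view' event type, the user id 42 and the index name are purely illustrative:

-- Log one event per API call; datetime('now') stores the time as UTC text.
INSERT INTO activity_log (user_id, event_type, event_time)
VALUES (42, 'page_view', datetime('now'));

-- An index on (user_id, event_time) keeps per-user reporting queries fast.
CREATE INDEX IF NOT EXISTS idx_activity_user_time
    ON activity_log (user_id, event_time);

-- Example statistic: request count per user per day.
SELECT user_id, date(event_time) AS day, COUNT(*) AS request_count
FROM activity_log
GROUP BY user_id, date(event_time);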
The same table as above, only instead of inserting a new row for every event, you perform a conditional insert, i.e. update the existing row for users already in the table and insert a row for new users. This option will keep your table several times smaller than option 1, but you'll only have access to the most recent use of your API.
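A sketch of that conditional insert, assuming SQLite 3.24 or later (which added UPSERT) and a one-row-per-user table; the activity_latest name is purely illustrative:

-- Option 2 keeps only the latest event per user, so user_id can be the key.
CREATE TABLE IF NOT EXISTS activity_latest (
    user_id    INTEGER PRIMARY KEY,
    event_type VARCHAR(10),
    event_time TIMESTAMP
);

-- Insert new users, update existing ones in place.
INSERT INTO activity_latest (user_id, event_type, event_time)
VALUES (42, 'page_view', datetime('now'))
ON CONFLICT (user_id) DO UPDATE
SET event_type = excluded.event_type,
    event_time = excluded.event_time;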
If you can afford it, I'd say go with number 1.

In one of my programs, I maintain a table of module usage per user. The structure of the table is:
table id
user id
prog id
date/time
history flag (0=current, 1=history)
runs (number of times the user has run the program on that date)
About once a week, I aggregate the data in the table: if user 1 has run program 1 twice on a given date, then initially there will be two entries in the table:
1;1;1;04/10/12 08:56;0;1
2;1;1;04/10/12 09:33;0;1
After aggregation, the table becomes
3;1;1;04/10/12 00:00;1;2
Whilst the aggregation loses the time part, no other data is lost and queries against the table will be quicker.
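A hedged sketch of that weekly roll-up, assuming a hypothetical module_usage table whose columns match the list above and SQLite-style date functions:

BEGIN;

-- Collapse the current rows (history = 0) into one history row per user/program/date.
INSERT INTO module_usage (user_id, prog_id, run_date, history, runs)
SELECT user_id, prog_id, date(run_date), 1, SUM(runs)
FROM module_usage
WHERE history = 0
GROUP BY user_id, prog_id, date(run_date);

-- Remove the detail rows that have just been rolled up.
DELETE FROM module_usage
WHERE history = 0;

COMMIT;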

Track database changes or differentiate records with timestamp?

Keeping track of changes to a database must be a big concern for lots of people, but it seems that the big names have software for that.
My question is for a small SQL database with 10 tables, <10 columns each, using joins to create a "master" junction table: is there a downside to updating a few times per year by adding rows (with a lot of duplicate information) and then taking the MAX id (PK) to generate and post on a website the most recent data in tabular form (excerpted from the "master")? This is versus updating the records in place, in which case I'd lose the information on what the values were at a particular moment.
A typical row for teacher contact information would have fName, lName, schoolName, [address & phone info]; for repertoire or audition information: year, instrument, piece, composer, publisher/edition.
Others have asked about tracking db changes, but only one recently, and not with a lot of votes/details:
How to track data changes in a database table
Keeping history of data revisions - best practice?
This lightweight solution seems promising, but I don't know if it didn't get votes because it's not helpful, or because folks just weren't interested.
How to keep track of changes to data in a table?
more background if needed:
I'm a music teacher (i.e. amateur programmer) maintaining a Joomla website for our organization. I'm using a Joomla plugin called Sourcerer to create dynamic content (PHP/SQL against the Joomla database) to make it easier to communicate changes (dates, personnel, rules, repertoire, etc.). For years, this was done with static pages (and paper handbooks) that took days to update.
I also, however, want to be able to look back and see the database state at a particular time: who taught where, what audition piece was listed, etc., as we could with paper versions. NOTE: I'm not tracking HTML changes, only that information fed from the database.
Thanks for any help! (I've followed SO for years, but this is my first question.)
Here is the code I'm using now to generate the "master junction table." I would modify this to an "INSERT INTO" for my new rows and query from it via Sourcerer to post the information online.
CREATE TABLE 011people_to_schools_junction
AS (
SELECT *
FROM (
SELECT a.peopleID, a.districtID, a.firstName, a.lastName, a.statusID, c.schoolName
FROM 01People a
INNER JOIN (
SELECT districtID, MAX(peopleID) peopleID
FROM 01People
GROUP BY districtID
) b
ON a.districtID = b.districtID
AND a.peopleID = b.peopleID
INNER JOIN (
SELECT schoolID, MAX(peopleID) peopleID
FROM 01people_to_schools_junction ab
GROUP BY schoolID
) z
ON z.peopleID = a.peopleID
LEFT JOIN 01Schools c
ON c.schoolID = z.schoolID
WHERE z.schoolID IS NOT NULL
OR z.peopleID IS NOT NULL
ORDER BY c.schoolName
) t1
);
#Add a primary key as the first column
ALTER TABLE 011people_to_schools_junction
ADD COLUMN 011people_to_schoolsID INT NOT NULL AUTO_INCREMENT FIRST,
ADD PRIMARY KEY (011people_to_schoolsID);
To answer your questions in order:
Is there a downside?
Of course, and it's performance-related: if you add a million records each year, it will hurt performance and occupy disk space.
Were the suggestions in the linked question bad, or just not popular?
The question and answers are good, but the right answer depends on your specific use case: whether you're doing it for legal reasons, how fast you need to access the data, how much data and how many updates you have, how long you want your history functionality to last without changes... you would only vote if one of them matched your use case.
As a rule of thumb, history should go into a different table, which provides several advantages:
your current tables don't change, so your code needs no changes except for also storing the current version in the history;
your application doesn't slow down;
if your history tables grow you can move them easily to a different server;
Whether to have a single history table or several (one per backed-up table) depends on how you plan to retrieve the data and what you want to do with it:
if you mirror each of your tables, adding a timestamp and the user id, your code needs few modifications; but you end up with twice as many tables, and any structure change then has to be replicated on the history table as well;
if you build a single history table with the timestamp, the user id, the table name and a JSON representation of the record (sketched below), you'll have an easier time building it. For retrieval you'd read the data as one object per row, e.g. using Joomla's dbo getObjectList(); the objects will then be in the same format you stored in the history table, so restoring them is fairly easy. But querying for changes across specific tables/fields will be much harder.
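A hedged sketch of that single history table in MySQL; the table and column names are illustrative, and the example row just reuses fields from the 01People table shown above:

CREATE TABLE history_log (
    historyID   INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
    changed_at  DATETIME NOT NULL,
    changed_by  INT NOT NULL,          -- user id responsible for the change
    table_name  VARCHAR(64) NOT NULL,  -- which table the snapshot came from
    record_json TEXT NOT NULL          -- JSON-encoded copy of the row
);

-- Example: snapshot one teacher row alongside the normal update (values are made up).
INSERT INTO history_log (changed_at, changed_by, table_name, record_json)
VALUES (NOW(), 7, '01People',
        '{"peopleID": 123, "firstName": "Jane", "lastName": "Doe", "districtID": 4}');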
Keep in mind that having data is useless if you can't retrieve it properly.
Since you mention pushing to the website a few times a year, the overhead of the queries should not be an issue (if you update monthly, waiting 5 minutes may not be a problem).
You should seek the best solution based on the other uses of this data: for it to be useful to anyone, you will have to implement a system to retrieve the historical data. If phpMyAdmin is enough, look no further.
I hope this scared you. Either way it's a lot of hard work.
If you just want to be able to look up old data, you may instead store a copy of the markup/output you generate from time to time, and save it to different folders on the webserver. This will take minutes to set up, and be extremely reliable.
Sure, it's more fun to code it. But are you really sure you need it? And you can keep the database dumps just in case one day you change your mind.

Persisting data from SQL tables day by day

I manage data-tier applications for a small company and my SW is receiving criticism for the fact that information for part-costing can't be retrieved historically. So, for instance, what they would like is to be able to, at any point in time, retrieve the cost of a part as it was 6 months ago.
They used to do this through spreadsheets. They would copy the part table every day into a .xlsx file, and then anytime they wanted to know "hey, what was the cost of that part Jan 20 of last year?", they could just pull it up in excel.
So, we've begun doing the same thing in SQL, and the plan so far is that we will create a new table each time the part costs are updated, name the table with today's date, and persist it in a database for archived information. Then, we're planning to pull in whichever table we need according to its timestamp.
I can't help but think this is going to get very messy. Is this a bad approach for archiving data? Are there any industry standards I can adhere to for solving this problem in as few headaches as possible?
You are right ... this solution will be messy.
The simplest thing you can do is create a history table, say Parts_History, that has all the columns of the main Parts table plus an additional timestamp column to track updates. Every time there is a new price for a part (which I hope is done through a stored procedure), the existing price gets moved into the history table and the main table gets updated, all inside one transaction. If you don't have a single stored procedure that handles the update, you can do the same thing inside a trigger.
I will try and see if there are any good examples out there.
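In the meantime, here is a minimal T-SQL sketch of that pattern, assuming a hypothetical Parts table with a PartID key and a Cost column (names and precision are illustrative):

-- History table: same columns as Parts plus a timestamp for when the row was archived.
CREATE TABLE Parts_History (
    PartID     INT           NOT NULL,
    Cost       DECIMAL(10,2) NOT NULL,
    ArchivedAt DATETIME      NOT NULL DEFAULT GETDATE()
);
GO

-- Stored procedure that archives the old price and applies the new one in one transaction.
CREATE PROCEDURE UpdatePartCost
    @PartID  INT,
    @NewCost DECIMAL(10,2)
AS
BEGIN
    BEGIN TRANSACTION;

    INSERT INTO Parts_History (PartID, Cost)
    SELECT PartID, Cost
    FROM Parts
    WHERE PartID = @PartID;

    UPDATE Parts
    SET Cost = @NewCost
    WHERE PartID = @PartID;

    COMMIT TRANSACTION;
END
GO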
As far as I know there is no standard, but the approach is rather obvious. You have a table, say part(partId int primary key, price decimal). Create an audit table part_audit(auditId int identity(1,1) primary key, partId int, price decimal, dateChange datetime default getdate()) and a trigger on part after update, delete. In the trigger, check update(price) and, if so, insert into part_audit from deleted. To find a historical price, select the nearest dateChange after the date of interest.
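A sketch of that trigger-based audit in T-SQL, following the table definitions just described (the decimal precision is an assumption):

CREATE TABLE part (
    partId INT PRIMARY KEY,
    price  DECIMAL(10,2)
);
GO

CREATE TABLE part_audit (
    auditId    INT IDENTITY(1,1) PRIMARY KEY,
    partId     INT,
    price      DECIMAL(10,2),
    dateChange DATETIME DEFAULT GETDATE()
);
GO

-- Record the old price whenever it changes or the row is deleted.
CREATE TRIGGER trg_part_audit
ON part
AFTER UPDATE, DELETE
AS
BEGIN
    IF UPDATE(price) OR NOT EXISTS (SELECT 1 FROM inserted)  -- price changed, or row deleted
        INSERT INTO part_audit (partId, price)
        SELECT partId, price
        FROM deleted;
END
GO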

DB schema for updating downstream sources?

I want a table to be sync-able by a web API.
For example,
GET /projects?sequence_latest=2113&limit=10
[{"state":"updated", "id":12,"sequence":2116},
{"state":"deleted" "id":511,"sequence":2115}
{"state":"created", "id":601,"sequence":2114}]
What is a good schema to achieve this?
I intend this for PostgreSQL with the Django ORM, which uses surrogate keys. The presence of an ORM may rule out answers like unions.
I can come up with only half-solutions.
I could have a modified_time column, but we cannot convey deletions.
I could have a table for storing deleted IDs; when returning 10 new/updated rows, I could return all the deleted rows between them. But this works only when the latest change is an insert/update and there is a moderate number of deleted rows.
I could set a deleted flag on the row and null the rest, but it's kind of bad schema design to make all columns nullable.
I could have another table that stores ID, modification sequence number and state (new, updated, deleted), but it's another table to maintain, and setting sequence numbers causes contention; imagine n concurrent requests querying for the latest ID.
If you're using an ORM you want simple(ish) and if you're serving the data via an API you want quick.
To go through your suggested options:
Correct, so this doesn't help you. You could have a deleted flag in your main table though.
This seems quite a random way of doing it and breaks your insistence that there be no UNION queries.
Not sure why you would need to NULL the rest of the columns here? What benefit does this bring?
I would strongly advise against having a table that has a modification sequence number. Either this means that you're performing a lot of analytic queries in order to find out the most recent state or you're updating the same rows multiple times and maintaining a table with the same PK as your normal one. At that point you might as well have a deleted flag in your main table.
Essentially the design of your API gives you one easy option; you should have everything in the same table because all data is being returned through the same method. I would follow your point 2 and Wolph's suggestion, have a deleted_on column in your table; making it look like:
create table my_table (
id ... primary key
, <other_columns>
, created_on date
, modified_on date
, deleted_on date
);
I wouldn't even bother updating all the other columns to be NULL. If you want to ensure that you return no data, create a view on top of your table that nulls the data where the deleted_on column has data in it. Then your API only accesses the table through the view.
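A hedged PostgreSQL sketch of such a view over the my_table layout above; the name column stands in for whatever payload columns you actually have:

CREATE VIEW my_table_api AS
SELECT id,
       -- blank out payload columns once the row has been soft-deleted
       CASE WHEN deleted_on IS NULL THEN name END AS name,
       created_on,
       modified_on,
       deleted_on
FROM my_table;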
If you are really, really worried about space and the volume of records and will perform regular database maintenance to ensure that both are controlled then maybe go with option 4. Create a second table that has the state of each ID in your main table and actually delete the data from your main table. You then can do a LEFT OUTER JOIN to the main table to get the data. When there is no data that ID has been deleted. Honestly, this is overkill until you know whether you will definitely require it.
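A sketch of that second-table variant, again with illustrative names; rows missing from my_table are the ones that have been physically deleted:

CREATE TABLE my_table_state (
    id       integer PRIMARY KEY,
    state    text    NOT NULL,   -- 'created' / 'updated' / 'deleted'
    sequence bigint  NOT NULL
);

-- Changes since the client's last seen sequence (2113 here, as in the example request).
SELECT s.id, s.state, s.sequence, t.*
FROM my_table_state AS s
LEFT OUTER JOIN my_table AS t ON t.id = s.id
WHERE s.sequence > 2113
ORDER BY s.sequence
LIMIT 10;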
You don't mention why you're using a web API for data transfers; but if you're going to be transferring a lot of data, or using this for internal systems only, it might be worth using a lower-level transfer mechanism.

Postgresql logging daily stats

Evening all,
I am attempting to create a table that stores a series of stats on web usage for my application on a daily basis (trivial things like number of new users, total visits, etc.). I am currently querying these on the fly; however, I would now like to start storing them, partly for performance (reducing a load of aggregate queries to a single lookup) and partly to allow for historic analysis.
I have come up with the following basic schema for the table (there will be more columns than this, just to give an idea):
create table web_stats(
web_stat_id bigserial primary key,
date_created timestamp not null default now(),
user_count integer not null,
new_user_count integer not null
);
comment on table web_stats is 'Table stores statistics on web usage';
Now, I am happy to create the queries to populate the table going forward (I am using the Quartz scheduler to run the queries daily).
However, I am not so sure of the best way to populate the table retrospectively for past dates. Should I use an INSERT statement to create a blank row for every day since the application went live (about 2 years ago), then use an UPDATE to populate the blank rows? Or can this be done in one fell swoop? Can someone provide some SQL for creating the rows?
If there is anything wrong with my design assumptions please let me know!
This is how I ended up doing it:
INSERT INTO web_stats (date_created)
SELECT DATE('2011-08-20')+x.id
FROM generate_series(0,521) AS x(id);
Where 2011-08-20 is the date the application went live, and 521 is the number of days from then until now.
This creates the empty rows so that I can use the date_created field to populate the other fields using UPDATE statements.
Maybe not the most efficient method, but it works.
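For the "one fell swoop" variant, the backfill can also be done with a single INSERT ... SELECT if the source data is still around; a hedged sketch, assuming a hypothetical users table with a created_at timestamp:

INSERT INTO web_stats (date_created, user_count, new_user_count)
SELECT d,
       -- total users registered up to the end of that day
       (SELECT count(*) FROM users u WHERE u.created_at < d + interval '1 day'),
       -- users registered during that day
       (SELECT count(*) FROM users u
         WHERE u.created_at >= d AND u.created_at < d + interval '1 day')
FROM generate_series(timestamp '2011-08-20', now()::timestamp, interval '1 day') AS g(d);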

Creating a variable on database to hold global stats

Let's pretend I've got a social network.
I'm always showing to the user how many users are registered and have activated their profile.
So, every time a single user logs in, it goes to the DB and makes a:
select count(*) from users where status = 'activated'
so if 5,000 users log in, or simply refresh the page, the SQL above will be run 5,000 times.
I was wondering whether it would be better to have a variable somewhere (I still have no idea where to put it) that is incremented every time a user activates his profile; then, when I want to show how many users are registered on the social network, I'd only read the value of this variable.
How can I do this? Is it really a better solution than what I've got?
You could use an indexed view, that SQL Server will automatically maintain:
create table dbo.users (
ID int not null,
Activated bit not null
)
go
create view dbo.user_status_stats (Activated,user_count)
with schemabinding
as
select Activated,COUNT_BIG(*) from dbo.users group by Activated
go
create unique clustered index IX_user_status_stats on dbo.user_status_stats (Activated)
go
This just has two possible statuses, but could expand to more using a different data type. As I say, in this case, SQL Server will maintain the counts behind the scenes, so you can just query the view:
SELECT user_count from user_status_stats with (NOEXPAND) where Activated = 1
and it won't have to query the underlying table. You need to use the WITH (NOEXPAND) hint on editions below Enterprise/Developer.
Although, as @Jim suggested, doing a COUNT(*) against an index, when the index column(s) can satisfy the query criteria using equality comparisons, should be pretty quick too.
As you've already guessed - it's not a great idea to calculate this value every time someone hits the site.
You could do as you suggest, and update a central value as users are added, although you'll have to ensure that you don't end up with two processes updating the number simultaneously.
Alternatively you could have a job which runs your SQL routinely and updates the central 'user count' value.
Alternatively #2, you could use something like MemCache to hold the calculated value for a period of time, and then when the cache expires, recalculate it again.
There are a few options you could consider:
1) like you say, maintain a global count each time a profile is activated to save the hit on the users table each time. You could just store that count in a "Stats" table and then query that value from there.
2) don't show the actual "live" count, show a count that's "pretty much up to date" - e.g. cache the count in your application and have the value expire periodically so you then requery the count less frequently. Or if you store the count in a "Stats" table per above, you could have a scheduled job that updates the count every hour, instead of every time a profile is activated.
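A minimal T-SQL sketch of that stats-table idea; the Stats table is illustrative, and the UPDATE is what a scheduled job (or the activation code path) would run:

-- Small table holding the cached counts.
CREATE TABLE Stats (
    StatName  VARCHAR(50) PRIMARY KEY,
    StatValue INT NOT NULL,
    UpdatedAt DATETIME NOT NULL DEFAULT GETDATE()
);

INSERT INTO Stats (StatName, StatValue) VALUES ('activated_users', 0);

-- Periodic refresh instead of counting on every page view.
UPDATE Stats
SET StatValue = (SELECT COUNT(*) FROM users WHERE status = 'activated'),
    UpdatedAt = GETDATE()
WHERE StatName = 'activated_users';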
It depends whether you want to show the exact figure in real time or whether you can live with a delay. Obviously, data volumes matter too - if you have a large database, then having a slightly out-of-date cached value could be worthwhile.
From a purely SQL Server standpoint, no, you are not going to find a better way of doing this. Unless, perhaps, your social network is Facebook sized. Denormalizing your data design (such as keeping a count in a separate table) will lead to possible sources of the data getting out of sync. It doesn't have to get out of sync if it is coded properly, but it can...
Just make sure that you have an index on Status. At which point SQL will not scan the table for the count, but it will scan the index instead. The index will be much smaller (that is, more data will fit in a disk page). If you were to convert your status to an int, smallint, or tinyint you would get even more index leaves in a disk page and thus much less IO. To get your description ('activated', etc.), use a reference table. The reference table would be so small, SQL would just keep the whole thing in RAM after the first access.
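A hedged T-SQL sketch of that layout; the names are illustrative, and the reference table carries the human-readable descriptions:

-- Reference table: small enough that SQL Server keeps it in RAM after first access.
CREATE TABLE user_status_ref (
    statusID   TINYINT PRIMARY KEY,
    statusName VARCHAR(20) NOT NULL   -- e.g. 'activated'
);

-- Narrow status column on the users table, plus an index to satisfy the count.
CREATE TABLE users (
    ID       INT NOT NULL PRIMARY KEY,
    statusID TINYINT NOT NULL REFERENCES user_status_ref (statusID)
);

CREATE INDEX IX_users_status ON users (statusID);

-- The count is now answered from the index rather than a table scan.
SELECT COUNT(*) FROM users WHERE statusID = 1;   -- 1 = 'activated' (illustrative)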
Now, if you still think this is too much overhead (and it shouldn't be), you could come up with a hybrid method. You could store your count in a separate table (which SQL would keep in RAM if it is just the one record), or, assuming your site is in ASP.NET, you could create an Application variable to keep track of the count. You could increment it in Session_Start and decrement it in Session_End. But you will have to come up with a way of making the increment and decrement thread-safe so two sessions don't try to update the value at the same time.
You can also use a global temporary table; you will always get fast retrieval, even if you are polling every 30 seconds. The example triggers (Example Trigger Link1, Example Trigger Link2) will maintain such activity in this table.