Counting the number of downloads in an Oracle SQL database

I am creating a database for an online store that is going to sell movies, music and books, as a final project at university.
I have already created all the tables and made sure they all work. The database is supposed to have a "Historical" table, and I want that table to record the number of downloads a specific client makes.
The primary keys on the "Clients" table are suscriber_number and id_download. These two are foreign keys that together form the primary key of the "Historical" table. How can I make sure that every download a client makes gets stored in the "Historical" table as a new row, and not just as a replacement of the previous data? I am afraid it will just overwrite the previous information and will not keep a count of the downloads for each subscriber. Does Oracle let me keep track of how many times an UPDATE statement has been run on existing data, so I can retrieve that later?
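For illustration, a minimal Oracle sketch of the intended behaviour, using the table and column names from the question plus a hypothetical sequence historical_seq and a hypothetical download_date column: each download is inserted as a fresh row rather than updated over the old one, so counting later is a simple aggregate.

    -- Insert a new row per download; historical_seq is a hypothetical
    -- sequence that gives each download its own id_download, so the
    -- primary key never collides with (or overwrites) earlier rows.
    INSERT INTO historical (suscriber_number, id_download, download_date)
    VALUES (:suscriber_number, historical_seq.NEXTVAL, SYSDATE);

    -- Counting downloads per subscriber later is then a simple aggregate:
    SELECT suscriber_number, COUNT(*) AS total_downloads
    FROM historical
    GROUP BY suscriber_number;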


Use DB Relation To Avoid Redundancy

I have designed an ERD of movies and TV series which is confidential, but I can give you an overview of the database.
It has more than 20 tables (more will be added later) and it is normalized. I have tables like Movie, Actors, TvSeries, Director, Producer etc. These tables contain the most important information and are connected to each other (by foreign keys and junction tables like MovieActor, MovieDirector etc).
So the scenario is like this:
1) The standard "starting" database should have Actors, Directors, Producers, Music Composers, Genres, Resolution Types… pre-populated and pre-defined by the Admin.
2) Every user creating his personal movie collection starts his database off with all the pre-defined data, but if he wants to, he may add further data to his personal database. These changes will only affect his database and not the standard "starting" database (which was defined by the Admin).
3) The Admin should have a separate view to add Actors, Directors, Producers… that will become part of the standard "starting" database. Any further changes made to this database will be available to the users as updates.
Suggested Solution
Question
The suggested solution seems to require creating a new database for every user, which seems impossible. My question is: how can I adapt the suggested solution so that it becomes practical and effective? I would prefer to handle the situation by using database relations, not by separate storage.
You wouldn't create multiple databases; you would simply add an ownerId field to all relevant tables. Admin rows would have ownerId = 0, indicating the row is part of the 'starting' database, and new admin entries would be instantly available to users.
In any output for a user where you want to display the starting data and their own, you would add WHERE (ownerId = 0 OR ownerId = userId) to the appropriate query; if they need to see just their own, use ownerId = userId alone.
Presumably, they would be able to create relationships between their own data and 'starting' data, and this approach should still work.
Foreign keys will still work, but deleting starting data will delete the user data that references it. Basically, you should only ever add to the starting data, never take away, or you will run into problems.
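A minimal sketch of this ownerId pattern, using a hypothetical Actor table (the names are illustrative, not from the original schema):

    -- ownerId = 0 marks a row as part of the standard "starting" data.
    CREATE TABLE Actor (
        actorId INTEGER PRIMARY KEY,
        ownerId INTEGER NOT NULL,  -- 0 = admin/starting data, else the user's id
        name    VARCHAR(100) NOT NULL
    );

    -- A user sees the starting data plus their own additions:
    SELECT * FROM Actor WHERE ownerId = 0 OR ownerId = :userId;

    -- Or just their own:
    SELECT * FROM Actor WHERE ownerId = :userId;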

SQL - Select data from three tables where one table has multiple foreign keys to the same primary key

I have the following tables and relations:
When I create a User, that user gets a CurrentWeek row, and that CurrentWeek row in turn gets a CurrentWeekStatus row. The user can add food items to the Food table and then choose some of these food items to insert into CurrentWeek.
In the client I want to grab CurrentWeek as an object that has a list of Food objects and a list of their corresponding statuses.
I am struggling with how to make this happen. I could make multiple queries to the database: one to fetch CurrentWeek, then extract all the FoodIds from it and make separate queries to fetch each Food. But this seems like a very bad solution.
The other solution I can think of is making a view with all the necessary data. But I don't know how to build this view, and even if I managed to, I don't know how to separate each Food into a different object.
Does anyone know of a good way to accomplish this?
I use Node.js as a REST API and Android Studio with Retrofit to send REST calls.
After consulting StackOverflow and a few colleagues I changed the initial database schema into:
This was a design I initially chose not to go with, as I thought adding one row to the CurrentWeek table for each user would be better than adding many rows per user to the PlannedFood table. I see now, however, that this design has a few advantages over the other one.
Designing it this way also solves my initial question, as I can now grab all the rows in PlannedFood for a specific user, join on FoodId, and then map the Food data into a Food object on the client side.
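Assuming PlannedFood carries the user and status columns described above (the exact column names are guesses), the single query might look like this sketch:

    -- One round trip: every planned food row for the user, joined to its
    -- Food details; the client then maps each row to a Food object.
    SELECT f.*, pf.Status
    FROM PlannedFood pf
    JOIN Food f ON f.FoodId = pf.FoodId
    WHERE pf.UserId = :userId;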

Accepted methodology when using multiple Sqlite databases

Question
What is the accepted way of using multiple databases that record information about the same object that will ultimately end up living in one central database?
Example
There is one main SQL database about trees.
This database holds information about unique trees from all over the UK.
To collect the information a blank Sqlite database is created (with the same schema) and taken to the tree on a phone.
The collected information is then stored in the Sqlite database until it is brought back and transferred into the main database.
Now this works fine as long as there is only one Sqlite database out for any one tree at a time.
However, if two people wanted to collect different information for the same tree at the same time, when they both came back and attempted to transfer their data in to the main database, there would be collisions on their primary key constraints.
ID Schemes (with example data)
There is a tree table which has a unique identifier called TreeID:
TreeID - TreeName - Location
1001 - Teddington Field - Plymouth
Branch table
BranchID - BranchName - TreeID
1001-10001 - 1st Branch - 1001
1001-10002 - 2nd Branch - 1001
Leaf table
LeafID - LeafName - BranchId
1001-10001-1 - Bedroom - 1001-10001
1001-10002-2 - Bathroom - 1001-10001
Possible ideas
Assign each database a block of 1000 unique IDs in advance; then, when they come back in, the IDs on each database won't collide, because each block was pre-assigned from a separate range.
Downfall
This isn't very dynamic and could fail if one database overruns its preassigned IDs.
Is there another way to achieve the same flexibility but without the downfall mentioned above?
So, as an answer:
On the master DB, store an extra ID field identifying the source/collection database the dataset was collected on, alongside the tree ID.
(src01, 1001), (src02, 1001)
This also allows you to link back easily to the collection source of the information, which is likely going to be a future requirement. Now, you may or may not want to autogenerate another sequence ID key value on the master DB's table (I wouldn't, but that's because I am not that fond of surrogate keys), but I would definitely keep track of the source/tree ID the data was originally collected with in the field, separately from any master DB unique key considerations.
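A sketch of that composite key on the master DB (column names are illustrative):

    -- (source_id, tree_id) together identify a row, so the same
    -- field-generated tree_id arriving from two collection databases
    -- cannot collide.
    CREATE TABLE tree (
        source_id VARCHAR(10)  NOT NULL,  -- e.g. 'src01', 'src02'
        tree_id   INTEGER      NOT NULL,  -- id as generated in the field
        tree_name VARCHAR(100),
        location  VARCHAR(100),
        PRIMARY KEY (source_id, tree_id)
    );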
Apparently you are talking about auto-generated IDs for related objects, not the IDs for the trees themselves. Two different people collecting information about the same tree, starting from the same starting set, end up generating the same IDs independently. The two sets of generated IDs cannot coexist in the same DB.
Since you want to keep all the new data, one possible solution is to avoid using the field-generated IDs in the central database at all. When each set of data comes in, take the data that were added in the field and programmatically add them to the central DB in a way equivalent to how they were added in the field, letting the central DB autogenerate its own IDs.
This requires a mechanism to distinguish newly-collected data from old, but that might be as simple as a timestamp.
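If the central tables used plain autoincrement keys rather than the composite string IDs shown earlier, the re-keying import could be as small as this Sqlite-flavoured sketch (the file name, the created_at column and the :last_sync parameter are all assumptions):

    -- Make the field database visible from the central one:
    ATTACH DATABASE 'field.db' AS field;

    -- Re-insert branches collected since the last sync, deliberately
    -- omitting the field-generated BranchID so the central database
    -- autogenerates its own; child rows (leaves) would need their
    -- foreign keys remapped to the new ids in the same import routine.
    INSERT INTO Branch (BranchName, TreeID)
    SELECT BranchName, TreeID
    FROM field.Branch
    WHERE created_at > :last_sync;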

What is the best method of logging data changes and user activity in an SQL database?

I'm starting a new application and was wondering what the best method of logging is. Some tables in the database will need to have every change recorded, and the user that made the change. Other tables may just need to have the last modified time recorded.
In previous applications I've used different methods to do this but want to hear what others have done.
I've tried the following:
Add a "modified" date-time field to the table to record the last time it was edited.
Add a secondary table just for recording changes in a primary table. Each row in the secondary table represents a changed field in the primary table. So one record update in the primary could create several records in the secondary table.
Add a table similar to no. 2, but one that records edits across three or four tables, referencing the table each row relates to in an additional field.
What methods do you use, and what would you recommend?
Also, what is the best way to record deleted data? I never like the idea that a user can permanently delete a record from the DB, so usually I have a boolean 'deleted' field which is changed to true when the record is deleted; it is then filtered out of all queries at the model level. Any other suggestions on this?
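A minimal sketch of that soft-delete pattern (the table name is illustrative, and the BOOLEAN syntax varies by dialect):

    -- One-time schema change:
    ALTER TABLE orders ADD deleted BOOLEAN NOT NULL DEFAULT FALSE;

    -- "Deleting" a record just flips the flag:
    UPDATE orders SET deleted = TRUE WHERE order_id = :id;

    -- The model layer then filters deleted rows out of every query:
    SELECT * FROM orders WHERE deleted = FALSE;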
Last one: what is the best method for recording user activity? At the moment I have a table which records logins/logouts/password changes etc., and depending on what the action is, gives it a code: 1, 2, 3 etc.
Hope I haven't crammed too much into this question. Thanks.
I know it's a very old question, but I wanted to add a more detailed answer, as this is the first link I got when googling about DB logging.
There are basically two ways to log data changes:
on the application server layer
on the database layer.
If you can, just log on the server side. It is much clearer and more flexible.
If you need to log on the database layer you can use triggers, as @StanislavL said. But triggers can slow down your database and limit you to storing the change log in the same database.
Also, you can look at transaction log monitoring.
For example, in PostgreSQL you can use the logical replication mechanism to stream changes in JSON format from your database to anywhere.
In a separate service you can then receive, handle and log the changes in any form and in any database (for example, just put the JSON you receive into Mongo).
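As a sketch of that PostgreSQL route: assuming wal_level = logical and the wal2json output plugin installed, a separate service could consume changes like this (the slot name is arbitrary):

    -- Create a logical replication slot that decodes changes to JSON:
    SELECT * FROM pg_create_logical_replication_slot('audit_slot', 'wal2json');

    -- Pull (and consume) the pending changes; each row's data column is a
    -- JSON description of one transaction's inserts/updates/deletes:
    SELECT data FROM pg_logical_slot_get_changes('audit_slot', NULL, NULL);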
You can add triggers to any tracked table to listen for insert/update/delete. In the triggers, just compare the NEW and OLD values and write them to a special table (see the sketch after this list) with the columns
table_name
entity_id
modification_time
previous_value
new_value
user
It's hard to figure out which user made the change, but it is possible if you add a changed_by column to the table you listen to.
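A PostgreSQL-flavoured sketch of such a trigger, assuming a hypothetical account table whose name column is tracked (a real version would also cover INSERT/DELETE and compare every tracked column):

    CREATE TABLE change_log (
        table_name        TEXT,
        entity_id         BIGINT,
        modification_time TIMESTAMP DEFAULT now(),
        previous_value    TEXT,
        new_value         TEXT,
        changed_by        TEXT
    );

    CREATE OR REPLACE FUNCTION log_account_change() RETURNS trigger AS $$
    BEGIN
        -- Record the old and new value of the tracked column:
        INSERT INTO change_log (table_name, entity_id, previous_value, new_value, changed_by)
        VALUES ('account', NEW.id, OLD.name, NEW.name, NEW.changed_by);
        RETURN NEW;
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER account_audit
    AFTER UPDATE ON account
    FOR EACH ROW EXECUTE FUNCTION log_account_change();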

Merging data from 2 databases

We currently have a contracts system that pulls in job data from our finance system. Each job has an ID, and the contracts hang off of that. We now have to bring in job data from another finance system. The jobs from the new system will also have a job ID, and contracts will have to hang from it. I expect there will be some ID conflicts when the data is merged. What's the best way to deal with this? Should I create another table that pulls in the job data from both systems and assigns a new ID for the contracts to hang from? Obviously I will need to update the current contracts to match the newly generated IDs. Does this sound like a good idea, or is there a better way?
Given your additional comments, I would suggest that you use a mapping table to map any of the conflicting IDs in the old system to new IDs. Normally when importing data into an existing system you would want to keep the IDs of the current system intact, but since that system is going to be gone in a year (or however long it takes) and is about to become read-only, I would think that you would want to preserve the IDs of the new system instead.
Once you create the mapping table, you would then use that to update any foreign key references, etc. and then import the new data, which should now have no conflicts.
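A sketch of the mapping-table approach with illustrative names (old_job_id being a conflicting ID from the system being retired):

    CREATE TABLE job_id_map (
        old_job_id INTEGER PRIMARY KEY,
        new_job_id INTEGER NOT NULL
    );

    -- Re-point existing contracts at the new ids before importing:
    UPDATE contracts c
    SET job_id = (SELECT m.new_job_id
                  FROM job_id_map m
                  WHERE m.old_job_id = c.job_id)
    WHERE EXISTS (SELECT 1 FROM job_id_map m WHERE m.old_job_id = c.job_id);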