Inputting data into a database by many users [closed] - sql

Each of the salesmen should make a forecast of his sales. I know how a salesman can input data directly from an Excel sheet into a SQL table. Do I need to create a separate table per salesman? At the end I need to aggregate all the forecasts. Is it possible to do this with just one table?
The condition is that one salesman is not allowed to see the other salesmen's forecasts.
It seems to be a common problem: inputting data into a database by many different users, with restrictions on access.
Update: Each salesman is in a different town. Say we have 500 salesmen, so gathering data from 500 Excel files into one big Excel file and then loading it into SQL is not an option.

Actually, you don't need to create a different table for each salesman. One table is enough to load all of your salesmen's forecast data from Excel. To find each salesman's forecast sales, a simple query will do.
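A minimal sketch of the single-table idea, assuming table and column names I am inventing here (SalesForecast, SalesmanId, ForecastMonth, Amount); the "see only your own forecasts" rule would then be enforced by filtering on the salesman's id, for example through a view or an application-level check:

-- One table holds every salesman's forecast rows.
CREATE TABLE SalesForecast (
    SalesmanId    INT           NOT NULL,  -- who the forecast belongs to
    ForecastMonth DATE          NOT NULL,  -- period the forecast covers
    Amount        DECIMAL(18,2) NOT NULL,
    PRIMARY KEY (SalesmanId, ForecastMonth)
);

-- Aggregate all forecasts across salesmen per period.
SELECT ForecastMonth, SUM(Amount) AS TotalForecast
FROM SalesForecast
GROUP BY ForecastMonth;

-- Each salesman only ever reads or loads his own rows.
SELECT ForecastMonth, Amount
FROM SalesForecast
WHERE SalesmanId = 42;   -- the current salesman's id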

You need at least two tables: a staging table to receive the Excel data and perform the necessary validation, transformation, etc., and at least one table for permanent data storage. Given that you are talking about people and sales, you probably want a normalized database. If you don't know what that means, I've heard good things about the book Database Design for Mere Mortals.
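For example, a hedged sketch of the staging-plus-storage approach; all table and column names are illustrative, and the Salesman lookup table is an assumption:

-- Staging table: raw rows exactly as they arrive from Excel,
-- kept as text until validated and transformed.
CREATE TABLE ForecastStaging (
    SalesmanName  VARCHAR(100),
    ForecastMonth VARCHAR(20),
    Amount        VARCHAR(50)
);

-- Storage table: cleaned, typed data.
CREATE TABLE Forecast (
    SalesmanId    INT           NOT NULL,
    ForecastMonth DATE          NOT NULL,
    Amount        DECIMAL(18,2) NOT NULL
);

-- After validation, move the staged rows into permanent storage.
INSERT INTO Forecast (SalesmanId, ForecastMonth, Amount)
SELECT s.SalesmanId,
       CAST(st.ForecastMonth AS DATE),
       CAST(st.Amount AS DECIMAL(18,2))
FROM ForecastStaging st
JOIN Salesman s ON s.SalesmanName = st.SalesmanName;  -- hypothetical lookup table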

Related

How to organize 10,000s of tables of different sizes in a SQL database [closed]

I have a program where the user fills data into a table. The tables all have different sizes; some are 3x2, others are 30x20.
It's just like an Excel sheet, where the user can add rows and columns as needed. These are charts related to products, so each table has a unique product number.
What's the best way to organize this data in a SQL database?
Is it one SQL table per user-generated table?
This seems excessive, as I would end up with a database with 10,000s of tables. Are there other, better ways to store the data? Can I combine it into one SQL table?
You can make SQL tables like
table_header
- id
- rows
- cols
table_detail
- header_id
- row
- col
- value
Then to store a table you create a header record and add the contents one entry at a time. To read the table back you would basically just pull all data for the table, maybe with an ORDER BY if you want to make parsing back into the table easier. Something like that.
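A concrete version of that layout might look like the sketch below. The names are slightly adjusted from the outline above (row/col and rows/cols renamed to avoid reserved-word clashes in some SQL dialects), and the product_number column is my own addition based on the question:

CREATE TABLE table_header (
    id             INT PRIMARY KEY,
    product_number VARCHAR(50),   -- each user-generated table belongs to one product
    num_rows       INT,
    num_cols       INT
);

CREATE TABLE table_detail (
    header_id INT REFERENCES table_header(id),
    row_num   INT,
    col_num   INT,
    value     VARCHAR(255),
    PRIMARY KEY (header_id, row_num, col_num)
);

-- Read one user table back in row/column order for easy reconstruction.
SELECT row_num, col_num, value
FROM table_detail
WHERE header_id = 123
ORDER BY row_num, col_num;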

Putting my attendance based on a calendar and inserting it into my database [closed]

I am making an attendance system. I have a database which I created, and it has tables from Jan to Dec (12 tables), and each table has columns like (Jan1, Jan2 // Feb1, Feb2 // Mar1, Mar2 ... etc.). I know this is not good practice, but I'm not familiar with SQL. How would I be able to use far fewer tables/columns, driven by the date picker in my VB.NET program?
Delve deeper into relational database design (take the link as a first step).
One approach is to create just one table and give it a column of type DATE or DATETIME to denote the date. Additional columns would hold the related data linked to that date. That would simplify your table structure greatly: from 12 tables with approx. 30 columns each to just one table with one date column plus the columns holding the related information.
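A minimal sketch of that single-table design, assuming table and column names I am making up here (Attendance, EmployeeId, Status); the @-parameters are placeholders to be filled from the VB.NET date picker:

CREATE TABLE Attendance (
    EmployeeId     INT         NOT NULL,
    AttendanceDate DATE        NOT NULL,  -- replaces the Jan1, Jan2, ... Dec31 columns
    Status         VARCHAR(20) NOT NULL,  -- e.g. 'Present', 'Absent', 'Late'
    PRIMARY KEY (EmployeeId, AttendanceDate)
);

-- One parameterized insert covers every month; the date comes from the date picker.
INSERT INTO Attendance (EmployeeId, AttendanceDate, Status)
VALUES (@EmployeeId, @AttendanceDate, @Status);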

Database Schema SQL Rows vs Columns [closed]

I have a lot of databases with relatively large numbers of columns, ranging from 5 to 300. Each table has at least 50,000 rows in it.
What is the most effective way to store this data? Presently the data has just been dumped into an indexed SQL database.
It was suggested to me to restructure the data into the following columns:
Column Name, Column Category, Row ID, Row Data.
example data would be
Male, 25-40, 145897, 365
Would this be faster? Would this be slower? Are there better ways to store such large and bulky databases?
I will almost never be updating or changing the data. It will simply be output to a 'DataTables' dynamic table where it will be sorted, limited, etc. The category column will be used to break up the columns on the table.
Normalize your db!
I have struggled with this "theory" for a long time and experience has proven that if you can normalize data across multiple tables it is better and performance will not suffer.
Do not try to put all the data in one row with hundreds of columns, not because of performance but for ease of development.
Learn More here
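As a rough, hedged illustration of what normalizing across multiple tables could look like for the sample row quoted in the question (Male, 25-40, 145897, 365); every table and column name below is my own assumption:

-- Stable attributes of each row id live in one table...
CREATE TABLE Respondent (
    Id       INT PRIMARY KEY,
    Gender   VARCHAR(10),
    AgeGroup VARCHAR(10)          -- e.g. '25-40'
);

-- ...and the many per-category values live in a related table,
-- instead of hundreds of columns on a single row.
CREATE TABLE RespondentValue (
    RespondentId INT REFERENCES Respondent(Id),
    Category     VARCHAR(50),
    MetricValue  INT,
    PRIMARY KEY (RespondentId, Category)
);

INSERT INTO Respondent      VALUES (145897, 'Male', '25-40');
INSERT INTO RespondentValue VALUES (145897, 'SomeCategory', 365);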

SQL large table VS. multiple smaller tables [closed]

I have the option to use a single table that will expand upwards of 1,000,000 records per year.
With that said, I could use a foreign key to break up this table into multiple smaller tables, which would reduce this growth to about 100,000 records per year in each smaller table.
Let's say 50% of the time users will query all of the records, while the other 50% of the time users will query the segmented smaller-table data set (think all geographic areas vs. specific geographic areas).
Using a database managed by a shared hosting account (think Site5, GoDaddy, etc.), is it faster to use a single larger table or several smaller segmented tables in this situation?
Where each data set is accessed 10%/90%, 20%/80%, 30%/70%, etc., at what point would using a single table vs. multiple smaller tables be the most/least efficient?
In general, do whatever reduces the amount of duplicated information. If the smaller tables would end up with many redundant columns, then it seems like it'd be more efficient to have just one table; otherwise, splitting can make sense.
It also depends on what percentage of each row is used per query, and on how your queries are structured. If you are adding lots of joins or subqueries, then it will most likely be slower.

Which type of database structure design is better for performance? [closed]

MSSQL database. I have an issue creating a new database from an old database's data. The old database structure is thousands of tables connected to each other by ID. In these tables, data is duplicated many times. The old database tables have more than 50,000 rows (users). The structure is like this:
Users (id, login, pass, register-date, update-date)
Users-detail (id, users_id, some data)
Users-some-data (id, users_id, some data)
and there are hundreds of tables of this kind.
The question is which database structure design to choose: one table with all of this data, or hundreds of tables separated by theme.
Which type of DB structure would give better performance?
SELECT id, login, pass FROM ONE_BIG_TABLE
or
SELECT * FROM SMALL_ONLY_LOGINS_TABLE
The answer really depends on the use; no one can optimize your database for you without knowing the usage statistics.
Correct DB design dictates that an entity is stored inside a single table: the client together with their details, for example.
However, this rule can bend when you access/write only some of the entity data far more often, and/or when there is optional info you store about a client (e.g. long texts, a biography, history, extra addresses, etc.), in which case it is often better to store that data in a child table.
If you find yourself with a bunch of columns holding all-NULL values, you should strongly consider a child table.
If you only need to check login credentials against the DB table, a stored procedure that returns a boolean value indicating whether the username/password are correct will save you the round trip of the data.
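A hedged T-SQL sketch of such a procedure; the procedure name, parameter names, and the assumption that credentials live in the Users table from the question are all mine:

CREATE PROCEDURE dbo.CheckLogin
    @Login   NVARCHAR(100),
    @Pass    NVARCHAR(200),   -- ideally a password hash, never plain text
    @IsValid BIT OUTPUT
AS
BEGIN
    SET NOCOUNT ON;

    -- Return only a flag instead of shipping the user row back to the client.
    IF EXISTS (SELECT 1 FROM Users WHERE login = @Login AND pass = @Pass)
        SET @IsValid = 1;
    ELSE
        SET @IsValid = 0;
END;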
Without indexes, the select on the smaller tables will be faster. But you can create the same covering index (id, login, pass) on both tables, so if you need only those 3 columns, performance will probably be the same on both tables.
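For instance, a covering index along those lines might look like this T-SQL sketch (the index name is an assumption; ONE_BIG_TABLE and the id/login/pass columns come from the question):

-- Key the index on login and carry id and pass along,
-- so "SELECT id, login, pass ..." can be answered from the index alone.
CREATE NONCLUSTERED INDEX IX_OneBigTable_Login
ON ONE_BIG_TABLE (login)
INCLUDE (id, pass);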
The general question of which database structure is better cannot be answered without knowing how your database will be used.