Closed. This question is opinion-based. It is not currently accepting answers.
Closed 6 years ago.
I'm designing an internal website for a small company I work for. I am comfortable in my ability to do the CSS and HTML and I'm willing to learn how to do whatever else will be needed.
The company is a transportation company that services many towns throughout the day. I have a table (currently in Excel) of the cities we service, along with the corresponding zip code and pickup terminal for each. I would like the dispatchers to be able to enter cities we are no longer able to service into a search bar; their input would search the aforementioned table, copy the city, zip, and pickup terminal, and write them to a new table.
The customer service team would then be able to search the newly written table by either city, zip, or pickup terminal to see which cities we no longer service and provide feedback to our customers.
My question is: what is the best way to go about this without the need for paid services? My table will contain fewer than 1,000 rows (it could easily be reduced to fewer than 500 if that changes things) and 3 columns, and the table written from it will have fewer than 200 rows and 3 columns by the end of the business day.
I've never made a website that needed a database before and I don't know what my best option is for such a small table. I've looked into XML, SQL, and even Google Spreadsheets for options but I just don't know enough about databases to make an informed decision.
1000 rows of 3 columns is not a large amount of data; you could create a JSON or even a text file and load it into RAM. If you create a class for your data you could use dictionaries or maps to query it.
I would not worry about a database until performance or integrity becomes a bottleneck.
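At this scale the whole lookup can live in a dictionary. Here is a minimal Python sketch of that idea; the file name `cities.json` and the field names (`city`, `zip`, `terminal`) are illustrative assumptions, standing in for an export of the Excel sheet.

```python
import json

def load_service_table(path="cities.json"):
    """Load the exported city list once at startup.

    Assumes a hypothetical JSON export: a list of
    {"city": ..., "zip": ..., "terminal": ...} objects.
    """
    with open(path) as f:
        rows = json.load(f)
    # Index by lowercase city name for case-insensitive lookup.
    return {row["city"].lower(): row for row in rows}

def mark_unserviced(table, city, unserviced):
    """Copy a city's row from the main table into the 'no longer serviced' list."""
    row = table.get(city.lower())
    if row is not None:
        unserviced.append(row)
    return row

# Example usage with an inline table instead of the JSON file:
table = {"springfield": {"city": "Springfield", "zip": "62701", "terminal": "T1"}}
unserviced = []
mark_unserviced(table, "Springfield", unserviced)
```

The customer-service side can then search `unserviced` by any of the three fields with a plain loop or list comprehension; no database engine is required for data this small.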
Here's a site that might give you some guidance: http://www.htmlgoodies.com/primers/database/article.php/3478121
I've had a lot of success with Google Sheets as a small-scale back end for data, on projects similar to yours. But we had an experienced developer who set it up using Python. Also, if you eventually want to scale your data set, Google Sheets may not be the way to go; I'd look into SQL as a long-term solution.
MySQL is a good choice.
You can make your work easier by downloading XAMPP. It is free, open-source software that bundles the Apache server, the MySQL database, PHP and Perl interpreters, and other utilities.
It also includes the phpMyAdmin utility, which gives you an easy way to create databases and tables without knowing much SQL. But to add the functionality you mentioned above, you will need to write the back end of your website in PHP, ASP, JSP, Python, or any other language you know, and that takes time.
You can download XAMPP from https://www.apachefriends.org/index.html - that will do it.
http://www.w3schools.com is helpful for tutorials on web development.
Closed. This question is opinion-based. It is not currently accepting answers.
Closed 7 years ago.
In our project we have about 100 customers and so far we have a database per customer.
We are facing some issues about software updates and debugging, and updates are really time consuming as you can imagine.
One of my colleagues told me that it would be easier to have a single clustered database for the whole set of customers.
What do you think about that?
How could we use that architecture to have one customer in beta-test with some schema modifications ?
We think we could use some kind of database replication, but how can we merge databases with different schemas without data loss?
Edit:
Let's say I have one database server (SQL-01) with 100 customers on it.
How can I move one customer to SQL-02 after a schema change, and then, after a period of beta-testing, update everyone on SQL-01 to the new schema and move the beta-test customer back to SQL-01 until the next beta-test?
Whilst upgrades may be more painful, there are a number of requirements to be considered before you place all the customers in a single larger DB:
Security: Whilst you have the data in a single database, you have no isolation protection for the data, i.e. it is co-located. Any single bug in the code can trivially expose one client's data to another. With multiple databases, you get more isolation protection.
Upgrading: If all the clients access the same database, then upgrades become an all-or-nothing affair - you will not be able to easily migrate some users to a new version whilst leaving the others as they were. This means you cannot schedule downtime based on an individual client's time zone; they all go down at once.
Backups: Currently you can back up each database separately; in one larger DB, every client's backups are mingled together. If a single client asks for a rollback to a given date, you have to plan carefully in advance how that could be executed without affecting the other users of the system.
Beta Testing: As you have already noted, if you wished to upgrade an individual client to test a new version, you would have to use a different database, or ensure every change made was backwards compatible so that no one else would notice. At some point there will be a breaking change, and then you have a problem.
Scale: Eventually, with enough clients and enough data, you run out of room scaling up. Scaling out is cheaper, and easier if you have multiple databases instead of one.
As per the links in the comments by Alex K., I would look to use automation to manage the overhead and minimize the problem of having a large number of DBs.
From my point of view, the best way is to use one schema and one database for all customers: create a data warehouse, and in particular a star schema if you have a lot of data.
For example, you can start by creating a table with customer id, name, region, city, and so on.
If you want to keep the 100 databases, you can use ALTER SCHEMA (in a loop):
ALTER SCHEMA TargetSchema TRANSFER SourceSchema.TableName;
You would almost always have one database for all the customers. SQL databases are designed for large amounts of data. In fact, they are even more efficient on larger amounts of data than smaller amounts (because on smaller amounts, pages would tend to be partially filled).
The only reason to separate out different customers is when the project requires it:
The requirement could be explicit, for some unfathomable reason.
The requirement could state that data from two customers cannot be in the same database.
There could be a requirement to customize the application for a particular customer.
In general, though, for performance, maintainability, support, and security, you only want one database. Each table should have an appropriate customer id.
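The "one database, customer id on every table" layout can be sketched quickly. This is a minimal illustration using Python's built-in sqlite3 module; the `orders` table and its columns are hypothetical, not from the question.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE orders (
        order_id    INTEGER PRIMARY KEY,
        customer_id INTEGER NOT NULL,   -- tenant column on every table
        amount      REAL
    )
""")
conn.executemany(
    "INSERT INTO orders (customer_id, amount) VALUES (?, ?)",
    [(1, 10.0), (1, 25.0), (2, 99.0)],
)

# Every query filters on customer_id, so tenants never see each other's rows.
rows = conn.execute(
    "SELECT amount FROM orders WHERE customer_id = ?", (1,)
).fetchall()
```

The discipline of always filtering on `customer_id` (ideally enforced in a shared data-access layer rather than in each query) is what substitutes for the physical isolation of the database-per-customer design.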
Closed. This question is opinion-based. It is not currently accepting answers.
Closed 8 years ago.
I've just started playing around with Node.js and Socket.io and I'm planning on building a little multi-player game. Probably something simple like each player has a character that they can run around in an arena and try and kill each other.
However I'm unsure how best to store the data. I can imagine there would be some loose relationships between objects such as a character and its weapon but that I would likely load these into the system by id as and when they are required and save them back out when I no longer need them.
In these terms, would it be simpler to write 'objects' out to a file instead of getting a database involved? Should I use a NoSQL document database, or just stick with good old SQL Server?
My advice would be to start with NoSQL.
A flat file is difficult because you'll want to read and write this data very, very often. One file per player is not a terrible place to start - and might be OK for the very first prototype - but you're going to be writing a huge amount, and file systems are not good at this. The one benefit at the prototype stage is quick debugging: just cat out the current state of a user. Using a .json file, or a similar .yaml format, will start you on your way very rapidly (and you can convert to the NoSQL approach as the prototype starts coming together).
SQL isn't a terrible approach. If you're familiar with this, you'll end up building a real schema, and creating a variety of tables and joining the user data against them quite a bit. This can be a benefit for helping you think through your game, but I think you'll end up spending a lot of time trying to figure out how to normalize your data and writing joins. Since it seems you're unfamiliar with the problem (thus are asking the question), you're likely to do this wrong (and get in the way of gaming awesomeness) and/or just spend too much time at it.
NoSQL - using a document-store model - is much like just reading and writing a user object. You'll end up rewriting your user object every time, but this kind of access (key-value, accessed by the user id) is hyper-efficient. You'll probably get to a prototype really, really quickly, and on to the important work of building out your play mechanics. Key-value access is also highly scalable in the long run.
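The document-store access pattern described above is easy to prototype before committing to a particular NoSQL product. A toy Python sketch, using an in-memory dict where a real store (Redis, MongoDB, CouchDB) would sit; the player fields are made up for illustration:

```python
import json

# A toy document store keyed by user id; a real NoSQL store offers the
# same get/put-by-key access pattern over the network.
store = {}

def save_player(player):
    # Serialize the whole player object and overwrite it under its id.
    store[player["id"]] = json.dumps(player)

def load_player(player_id):
    doc = store.get(player_id)
    return json.loads(doc) if doc is not None else None

save_player({"id": "p1", "x": 3, "y": 7, "weapon": "sword", "hp": 100})
p = load_player("p1")
p["hp"] -= 25      # update the object in memory...
save_player(p)     # ...and write the whole document back
```

Note the shape of every operation: load the whole document by key, mutate it, write the whole document back. That is exactly the read-rewrite cycle the answer describes, and it ports directly to a key-value store.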
If you want to store player information, use SQL. However, if you have a connection-based system - where you only need the information while the player is connected and don't need to save anything after the connection is lost - then just store it in memory.
Otherwise, I would say you should stick with SQL. Databases are optimized, quick, tried, tested, and true. You can't go wrong with a SQL database.
Closed. This question is opinion-based. It is not currently accepting answers.
Closed 8 years ago.
I need to design a database for something like a downloads site. I want to keep track of users and the programs each user downloaded, and also allow users to rate and comment on said programs. The things I need from this database: get the average rating for a program, get all comments for a program, and know exactly which program was downloaded by whom (I don't care how many times each program was downloaded, but I want to know, for each user, which programs he has downloaded). Maybe also count the number of comments for each program, and that's about it (it's a very small project for personal use that I want to keep simple).
I came up with these entities:
User(uid,uname etc)
Program(pid,pname)
And the following relationships:
UserDownloadedProgram(uid,pid,timestamp)
UserCommentedOnProgram(uid,pid,commentText,timestamp)
UserRatedProgram(uid,pid,rating)
Why I chose it this way: the relationships (user downloads, user comments and rates) are many-to-many. A user downloads many programs, and a program is downloaded by many users. The same goes for the comments (a user comments on many programs, and a program is commented on or rated by many users). The best practice, as far as I know, is to create a third table for each such relationship (a junction table).
I suppose that in this design the average rating and comment retrieval are done by join queries or something similar.
I'm a total noob in database design, but I try to adhere to best practices. Is this design more or less OK, or am I overlooking something?
I can definitely think of other possibilities - maybe the comment and/or rating could be an entity (table) by itself, with relationships between 3 entities. I'm not really sure what the benefits/drawbacks of that are. I know that I don't really care about the comments or the ratings - I only want to display them where appropriate and maintain them (delete when needed) - so how do I know whether they should become entities themselves?
Any thoughts?
You would create new entities as dictated by the rules of normalization. There is no particular reason to make an additional (separate) table for comments because you already have one. Who made the comment and which program the comment applied to are full-fledged attributes of a comment. The foreign keys representing these relationships (which are many-to-one, from the perspective of the comment table) belong right where you've put them.
The tables you've proposed are in third normal form which is acceptable according to best practices. I would add that you seem to be tracking data on a transactional basis (i.e. recording events as and when they occur). That is a good practice too because you can always figure out whatever you want to based on detailed information.
Calculating number of downloads or number of comments is a simple matter of using SQL Aggregate Functions with filters on the foreign key(s) that apply to your query - e.g. where pid=1234 etc.
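The proposed schema and the aggregate-with-filter idea can be tried end to end in a few lines. A sketch using Python's sqlite3 (the sample ids and ratings are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE User    (uid INTEGER PRIMARY KEY, uname TEXT);
    CREATE TABLE Program (pid INTEGER PRIMARY KEY, pname TEXT);
    -- Junction table for the many-to-many "user rated program" relationship.
    CREATE TABLE UserRatedProgram (
        uid    INTEGER REFERENCES User(uid),
        pid    INTEGER REFERENCES Program(pid),
        rating INTEGER,
        PRIMARY KEY (uid, pid)
    );
    INSERT INTO UserRatedProgram VALUES (1, 1234, 4), (2, 1234, 5), (3, 1234, 3);
""")

# Average rating for one program: an aggregate function filtered on the
# foreign key, exactly as described above.
(avg_rating,) = conn.execute(
    "SELECT AVG(rating) FROM UserRatedProgram WHERE pid = ?", (1234,)
).fetchone()
```

`COUNT(*)` with the same `WHERE pid = ?` filter on `UserCommentedOnProgram` gives the comment count, and a join against `User` attaches usernames to either result.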
I would make Downloads an entity with its own id. You could have a download status, you may have multiple downloads of the same program by one user, and you may need to associate a download with an order or something else.
Closed. This question is opinion-based. It is not currently accepting answers.
Closed 6 years ago.
I am currently building a kind of reporting system. The figures, tables, and graphs are all based on the results of queries. I find that complex queries are not easy to maintain, especially when there is a lot of filtering; this makes the queries very long and hard to understand. Also, queries with similar filters are executed repeatedly, producing a lot of redundant code. For example, when I want to select something between '2010-03-10' and '2010-03-15' where the location is 'US' and the customer group is 'ZZ', I need to rewrite these conditions every time I make a query in this scope. Does the DBMS (in my case, MySQL) support any kind of "scope/context" to make the code more maintainable as well as faster?
Also, is there an industry standard or best practice for designing such applications?
I guess what I am doing is called data mining, right?
Learn how to create views to eliminate redundant code from queries. http://dev.mysql.com/doc/refman/5.0/en/create-view.html
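A view lets you define the recurring filters once and query against the result. Here is a small self-contained demonstration using Python's sqlite3, with the date range, location, and customer group from the question; the `sales` table and view name are invented for illustration (MySQL's CREATE VIEW syntax is the same):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE sales (sale_date TEXT, location TEXT, cust_group TEXT, amount REAL);
    INSERT INTO sales VALUES
        ('2010-03-12', 'US', 'ZZ', 100.0),
        ('2010-03-20', 'US', 'ZZ', 999.0),
        ('2010-03-13', 'DE', 'ZZ',  50.0);

    -- The view bakes the shared filters in once; every report queries the
    -- view instead of restating the conditions.
    CREATE VIEW us_zz_march AS
        SELECT * FROM sales
        WHERE sale_date BETWEEN '2010-03-10' AND '2010-03-15'
          AND location = 'US'
          AND cust_group = 'ZZ';
""")

rows = conn.execute("SELECT amount FROM us_zz_march").fetchall()
```

Only the first row satisfies all three conditions: the second falls outside the date range and the third has the wrong location. Each report then adds only its own grouping or extra filters on top of the view.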
No, this isn't data mining; it's plain old reporting, sometimes called "decision support" - the bread and butter of information technology. Ultimately, plain old reporting is the reason we write software: someone needs information to make a decision and take action.
Data mining is a little more specialized in that the relationships aren't easily defined yet. Someone is trying to discover the relationships so they can then write a proper query to make use of the relationship they found.
You won't make a very flexible reporting tool if you are hand coding the queries. Every time a requirement changes you are up to your neck in fiddly code trying to satisfy it - that way lies madness.
Instead you should start thinking about a meta-layer above your query infrastructure and generating the sql in response to criteria expressed by the user. You could present them with a set of choices from which you could generate your queries. If you give a bit of thought to making those choices extensible you'll be well on your way down the path of the many, many BI and reporting products that already exist.
You might also want to start looking for infrastructure that does this already, such as Crystal Reports (swallowed by Business Objects, swallowed by SAP) or Eclipse's BIRT. Depending on whether you are after a programming exercise or a solution to your users' reporting problems you might just want to grab an off the shelf product which has already had tens of thousands of man years of development, such as one of those above or even Cognos (swallowed by IBM) or Hyperion (swallowed by Oracle).
Best of luck.
Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 6 years ago.
I don't even want to think about how many man hours have been spent writing the same queries to join over the exact same tables at my company.
When I first started at my job I identified this as an inefficiency and started writing views in a separate schema for the sole purpose of developer convenience.
My boss didn't like this very much and recommended I start committing my common queries to source control in a separate folder from the production SQL. This makes sense because some scripts require parameters and not all are read only.
What is a Common Query?
Script used to diagnose certain problems
Script to view relationships between several tables (doing multiple joins)
Script that we don't want in a stored procedure because it is often tweaked to diagnose the issue of the day
Issues I want to Address
Discoverability, queries will be rewritten if nobody can find them
IDE integration: I want to be able to easily view queries in the IDE. I've tried SQL Server solutions, but they really suck because they lock you into working on only that set of files.
I was wondering how all the pros out there share their common SQL queries.
Thanks
It seems the OP wants to know how to get the word out to the team about useful SQL that others can and should use, so it isn't recreated.
I have done this in the past in two ways:
Create a team wiki page that details the SQL, with examples of how it is used.
Email the team when new SQL that should be shared is created.
Of course, we always include the SQL code in version control; the wiki and email are just used for the "getting the word out there" part.
If it is something that I would call "common" I would probably create a stored procedure that the folks with necessary permissions can run.
If the stored procedure route won't work well for your team, then the other option is to create a view. Creating a view comes with unique challenges though such as ensuring that everyone running the view has select permissions on all of the tables in the view as well.
Outside of storing the scripts in source control of some kind, storing them on a SharePoint site or a network file share might work OK for your team. The real challenge in sharing scripts is that people have different ways of identifying what they are looking for; a wiki-type site that allows tagging queries by what they do would be useful.
You create a view.
Lots of ways to do this (including some that you've mentioned already):
Table-Valued User Defined Functions
Stored Procedures
Views
Source Control
Formal, shared Data Access Layer for client code
Views are the right way to handle this sort of thing. Or, in some cases, a stored procedure.
But there's no rule that says you can't also store the DDL for a View or a Stored Procedure in source control.