Why does SQL require a server? [closed]

I'm new to SQL, and I'm trying to understand something basic about it.
Why do we need a server to connect to when using SQL?
In my very narrow vision of it, it just uses some databases, which could be implemented as arrays, for example (or whatever is actually implemented "backstage").
For example, if I want to set up a table on my computer and do some operations on it, what use does the server have? Why can't it "just be there"?

I think the reason for your confusion is an overly narrow interpretation of the word "server" as a separate hardware box.
A server does not need to run on separate hardware, or even in a separate virtual environment. It could be another process on the same computer, or even a library within your process. What makes it a server is the ability to accept and process requests from clients. It does not matter where the server runs physically: as long as you follow a protocol in which requests originate on the client side, you have a server.

What you're envisioning (roughly) is referred to as an in-process database, and such databases do exist for SQL. SQL Server is set up to be used by multiple users or applications, so it makes sense for it to be a central server that many clients can connect to in order to share the same data.
If you only want to process data locally, there are SQL Server Express LocalDB, SQLite, and a few others that let you essentially embed a SQL engine inside your application.
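For example, with SQLite the database is just a file on disk and the engine runs inside your process; a minimal sketch of a session (table and data invented for illustration):
sqlite> create table people (name text, age integer);
sqlite> insert into people values ('Ada', 36);
sqlite> select name from people where age > 30;
Ada
No server process is started anywhere: the library reads and writes the file directly from within your application.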

You are, perhaps, confusing SQL the language with SQL Server, a Microsoft product that implements SQL.
SQL itself has many, many implementations, and many of them do not use a server. MS Access, SQLite, and FileMaker are common SQL-using products that rely on file sharing rather than a client-server setup to provide multi-user access. These products can all also be used on a single machine without sharing files.
There are also implementations of SQL that use CSV files for storage, although these are less common.
Finally, many of the client-server SQL products offer related, smaller-scale implementations that do not require a server. These are generally implemented using file sharing as well.

It depends on what you need. Some SQL implementations, such as SQLite3, are local and file-based. They have no server. Most provide a server because of the problems they address.
But let's address why a server is needed. Consider a Microsoft Access application where the database files are shared over a network. Suppose 5 people are working with the same file. Each one searches for something, and the entire file must be passed over the network. Suppose one of them edits a record. The next time the others search, they have to load the entire file again. If the file is large, this is a huge performance hit. This is why servers were created.
A server receives only the SQL. It performs the search or the edit itself and returns only the data that was requested. For any database of reasonable size, the performance improvement is huge.
Another benefit of a server is access control. With a server you can have multiple accounts and control what databases and even tables they have access to, and what activities they are allowed to perform.
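For example, a minimal sketch of that kind of access control in MySQL-flavored SQL (the account and table names are hypothetical, and exact syntax varies by product):
CREATE USER report_reader IDENTIFIED BY 'secret';
GRANT SELECT ON orders TO report_reader;
After this, report_reader can read the orders table but cannot modify it, and has no access to anything else.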
In short, the server was created to address the problems that arise when you have multiple clients working with a single database.

Related

Possibilities for external database with MS Access 2010

This question is quite general; however, I cannot find a good answer for it.
What are the possibilities for using an external database with MS Access?
I see that MySQL can be used, but I would have to set up an ODBC connection and install drivers on every machine. The issue is that I have software developed in MS Access that uses a lot of data, and it gets very slow at processing when I include a lot of data.
The software analyzes data from wind turbines, so it is used by different customers and it may contain a lot of different turbines with 50,000+ rows in each data set.
I would like the turbine data to be stored in a separate file that is pointed to by MS Access, so I can include the software + whatever turbine data is wanted.
As it is now, I have a lot of Access database files where the data is included in the software. It becomes impossible to keep track of - especially when I edit the source code of the software, which I do a lot these days.
Another issue is that the users may only have Access Runtime.
What are my options here? Is the best method to use the Access Link function?
Best regards, Emil.
Edit:
SQL queries - can they be combined?
SELECT q_DataLimited.YAW001, q_DataLimited.YAW002
FROM q_DataLimited
WHERE (((q_DataLimited.YAW002)>Degree_dsp() And (q_DataLimited.YAW002)<Degree_dsp_high()));
And
SELECT Count(q_WindRose_PCU.YAW001) AS CountOfYAW0011
FROM q_WindRose_PCU;
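A sketch of one possible combination, assuming the intent is to count only the rows that pass the degree filter:
SELECT Count(q_DataLimited.YAW001) AS CountOfYAW001
FROM q_DataLimited
WHERE q_DataLimited.YAW002 > Degree_dsp() And q_DataLimited.YAW002 < Degree_dsp_high();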
Edit 2:
Public Degree As Long

Public Function Degree_dsp() As Long
    ' Convert the current bin counter into degrees (20 degrees per bin).
    Degree_dsp = Degree * 20
End Function
I have Degree as a counter outside the function, in a form:
For Degree = 0 To 17
    ' Run the saved query once per 20-degree bin (0 to 340 degrees).
    DoCmd.OpenQuery "q_WindRose_PCU"
    DoCmd.Close
Next Degree
Edit 3:
How do I combine a query and the append of its result to a table?
SELECT q_PowerBinned.Bin,
    Avg(q_PowerBinned.POW001) AS AvgOfPOW001, StDev(q_PowerBinned.POW001) AS StDevOfPOW001,
    Avg(q_PowerBinned.WSP001) AS AvgOfWSP001, StDev(q_PowerBinned.WSP001) AS StDevOfWSP001,
    Avg(q_PowerBinned.POW002) AS AvgOfPOW002, StDev(q_PowerBinned.POW002) AS StDevOfPOW002,
    Avg(q_PowerBinned.WSP002) AS AvgOfWSP002, StDev(q_PowerBinned.WSP002) AS StDevOfWSP002,
    Count(q_PowerBinned.Bin) AS CountOfBin
FROM q_PowerBinned
GROUP BY q_PowerBinned.Bin;
And then the append of the above to a table:
INSERT INTO t_Average_Stored ( Bin, PowAvg001, WindAvg001, PowAvg002, WindAvg002, n_samples, PowDev001, WindDev001, PowDev002, WindDev002 )
SELECT q_Average_Temp.Bin,
    q_Average_Temp.AvgOfPOW001, q_Average_Temp.AvgOfWSP001,
    q_Average_Temp.AvgOfPOW002, q_Average_Temp.AvgOfWSP002,
    q_Average_Temp.CountOfBin,
    q_Average_Temp.StDevOfPOW001, q_Average_Temp.StDevOfWSP001,
    q_Average_Temp.StDevOfPOW002, q_Average_Temp.StDevOfWSP002
FROM q_Average_Temp;
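A sketch of the two statements merged into a single append query, assuming q_Average_Temp exists only to feed the insert:
INSERT INTO t_Average_Stored ( Bin, PowAvg001, WindAvg001, PowAvg002, WindAvg002, n_samples, PowDev001, WindDev001, PowDev002, WindDev002 )
SELECT q_PowerBinned.Bin,
    Avg(q_PowerBinned.POW001), Avg(q_PowerBinned.WSP001),
    Avg(q_PowerBinned.POW002), Avg(q_PowerBinned.WSP002),
    Count(q_PowerBinned.Bin),
    StDev(q_PowerBinned.POW001), StDev(q_PowerBinned.WSP001),
    StDev(q_PowerBinned.POW002), StDev(q_PowerBinned.WSP002)
FROM q_PowerBinned
GROUP BY q_PowerBinned.Bin;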
I already see a few suggestions in the comments, but I am going to answer the general question you posted. In short, the possibilities are endless.
MS Access, and Excel for that matter, have excellent external data tools that allow you to connect to almost any external data source and leverage regular SQL-based databases, or even use OLAP cubes for your analysis. Access itself should be powerful enough to handle the data sets you mention. Even Access 2010 should be able to handle millions of records with relative ease.
MS Access does have a significant limitation: the 2 GB file size. Once your database reaches 2 GB, everything goes out the window and you are very likely to get data corruption. This is a well-known issue, but I don't think you are anywhere near that limit.
Before considering an upgrade, though, there are a few things to suggest:
Analyze the structure of your data and your database. Perhaps your tables are too big (lots of columns) and unnecessarily redundant. It may make sense to process the raw data you receive to split it into different tables that reduce the redundancy and improve performance.
Look into indexing some key fields in your tables. This is heavily dependent on the type of analysis you do and which queries are most common. Read up on indexes and how to use them, and explore some options with actual datasets. You may be surprised how queries that used to take minutes become almost instantaneous when the right indexes are created and maintained.
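For example, a hedged sketch in Access SQL (table and field names hypothetical), indexing the fields your most common queries filter and join on:
CREATE INDEX idx_TurbineTime ON t_TurbineData (TurbineID, ReadingTime);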
Analyze your queries for performance. If I remember correctly, MS Access 2010 has a Performance Analyzer, which can suggest changes to make your queries run more efficiently.
If you have already looked into the items above and you decide you really need to take a step up, one fairly easy (and inexpensive) path is to install SQL Server Express, which you can download for free from Microsoft. Access was made to talk to SQL Server, and the performance is many times better. You can run SQL Server Express on your personal PC and use it as a back end for Access, or you could install it on a networked PC and use it as a server (behind a firewall, of course, NEVER connected to the Internet). In this setup you can access your data from several PCs.
One key thing to keep in mind once you start using Access as a front end is that you want to push the processing to the back end, not keep it in Access. The best way to do this is to create what Access calls pass-through queries. These queries are written in the back end's native SQL dialect and are sent to the back-end server for processing; only the processed data comes back. If you don't do this, for example by creating the queries in the visual editor in Access instead, the raw data will be sent to Access and Access will try to produce your results. This, as you can imagine, can actually be a lot slower than your initial situation, so don't do it.
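For instance, a pass-through query is just native T-SQL that runs entirely on the server; a sketch (table and column names are hypothetical) where only the per-turbine summary rows travel back to Access:
SELECT TurbineID, AVG(POW001) AS AvgPower, COUNT(*) AS n_samples
FROM dbo.TurbineReadings
GROUP BY TurbineID;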
If you are not a SQL expert and need a visual editor, there is a tool you can download from Microsoft: SQL Server Management Studio Express. The query editor is not that different from Access and will allow you to create queries in a visual manner, but in Transact-SQL (the language of SQL Server). You can also manage your SQL Server Express instance with this tool and maintain your data (import, export, etc.). You can create the SQL statements you need in this editor and then copy and paste them into pass-through queries in Access. The data will be available to you in the program you are familiar with, but with the power of a much bigger database engine behind the scenes.
Since I do not want to sound like a Microsoft shill, I definitely want to mention other options for external data that can be equally or even more powerful than SQL Server Express. The only reason I mentioned the Microsoft products first is that you are already familiar with them, the learning curve is a bit less steep, and most things should work together out of the box.
The first option that comes to mind is SQLite, a high-performing database that is actually file-based. It is very small, yet very powerful and fast, and it is ideal for a locally based application like the one you mention. There are also lots of graphical interfaces for SQLite, and you can connect to it via ODBC from Access. Again, you want to run everything using pass-through queries and let SQLite pick up the load. SQLite is open source and free.
If you are keen on having "a real database server", then MySQL is probably the next step up. Also open source and free, it is very popular, which means lots of places to get support and plenty of graphical interfaces to choose from.
Any search for open-source databases will give you even more options to try and choose from.
One key thing to keep in mind: if you install any database server on your PC, it becomes a server, and it will start advertising its services on your local network, or on the Internet if you take it to a local Starbucks. Be careful with that: learn how to start and stop the services on your PC, and make sure you turn them off when you are not behind a firewall. There are many exploits for different database servers, and you will be detected quickly once your PC starts advertising its newly acquired abilities.
Just to close: there is no difference in performance between full Access and the Access Runtime, just the loss of the ability to edit the queries and so on. Whatever front end you create in Access, your users will be able to use in the same manner.

Working of Login System in Large Applications [closed]

I am curious to know how the login systems of large applications like Facebook, Gmail, YouTube, Yahoo, etc. work. Right after entering credentials, the server responds very quickly. How is that possible?
There must be many DB servers storing user information. So my questions are:
How do they look up authentication information across many DB servers?
Do they search all the DB servers for a particular user, and if so, how do they respond so quickly?
Do they allocate a DB server based on the geographical location of the user?
And do they also have many application servers, and how are these interconnected with each other?
RDBMSs have the ability to link servers and issue distributed queries, updates, commands, and transactions across heterogeneous data sources.
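As a hedged sketch of what that looks like in SQL Server (the server, database, and table names here are hypothetical), a linked server is registered once and can then be queried with a four-part name:
EXEC sp_addlinkedserver
    @server = N'AUTH_SHARD_2',
    @srvproduct = N'',
    @provider = N'SQLNCLI',
    @datasrc = N'authdb2.internal';
SELECT UserId FROM AUTH_SHARD_2.AuthDb.dbo.Users WHERE Email = N'user@example.com';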
The database system will also use some form of cached information; in SQL Server, for example, an execution plan is stored and reused when a query is executed. The database management system decides which execution plan to use in order to generate the fastest result, or serves a cached data set. Note: Google, Facebook, Amazon, etc. have a lot of server processing power behind the scenes, which makes responses seem instantaneous. They also have dedicated teams to manage their databases, maintain indexes, and perform tuning, optimization, and bottleneck analysis.
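To illustrate plan reuse, a minimal T-SQL sketch (the Users table and its columns are hypothetical): a parameterized lookup lets one cached plan serve every login attempt instead of compiling a new plan per user.
EXEC sp_executesql
    N'SELECT UserId, PasswordHash FROM dbo.Users WHERE Email = @Email',
    N'@Email nvarchar(256)',
    @Email = N'user@example.com';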
The geographical location of the server can be a factor. The closer the server is to the user, the faster they can get the information, but IMO this would be a matter of milliseconds of difference depending on where the data center is located. If a server gets too busy, the load balancer will direct you and other users to a server with more available resources.
Yes. Using more than one web server is needed in scenarios like this, and this ties into part 3 of the question: which server you hit depends on how many resources the closest server has available and whether it will accept your connection. The servers are distributed, but the whole process is transparent to the user, i.e., they think they are using the same server as every other client. The servers can be interconnected using session management, web services, and other interoperability techniques and technologies.

Choose Database for multi-user applications with only a file server (Windows)

I need to choose a database solution for multiple simultaneous users of Windows-based applications using the same database on a file server.
I need a database that can live on a Windows OS file server.
Must be shared by several applications running on individual MS Windows machines (mostly Windows 7).
Made available by a file server.
Cannot use a database server/engine (due to internal political rules) or a webpage server.
Prefer using C# for a set of WPF applications.
Currently using a set of VB applications with a set of MS Access files - one of these applications has problems and needs a re-write.
The current set of about a half-dozen *.mdb files (some with linked tables) is about 400 MB. Growth at a guess of 10 to 20 MB/year.
Up to about a dozen concurrent users currently, each on their own PC. Don't expect much change on this in the future.
All apps both read and write data to the database.
Currently several people (about 4) write ad hoc queries in Access - they will continue to need to be able to write queries somehow.
Would like to prevent changes to the database structure (adding tables/columns) by end users.
Free software.
The choices I know about are:
Access .mdb files (the current situation).
SQLite.
SQL Server CE.
Are there other systems that might work that fit many or all of the desired characteristics? Are there particular "gotchas" I should know about for the systems that I am considering?
Well, "Cannot use a database server/engine" makes things harder. So does "free".
I think Access is the only thing in your list that comes close to fulfilling all the requirements. It's not free, but it seems you already have it, so at least it doesn't cost extra.
Access is essentially three different, bundled products.
Jet database engine
RAD environment for queries, forms, and reports
VBA programming environment
If you're using only the database engine, it makes sense to do some testing with SQL Server CE.
Switching to SQLite would probably require additional checks in application code. SQLite supports storage classes, not data types. What does that mean? It means SQLite allows this.
sqlite> create table foo (n integer);
sqlite> insert into foo values ('wibble');
sqlite> select n from foo;
wibble
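If that behavior is a problem for your data, one mitigation (a sketch; the table name is hypothetical) is a CHECK constraint on the storage class, which makes an insert like the one above fail instead of silently storing text:
sqlite> create table bar (n integer check (typeof(n) = 'integer'));
sqlite> insert into bar values ('wibble');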
HyperSQL is another possibility. It supports only JDBC and might run without a server component. (The docs weren't immediately clear about that.) I think it would require a lot more work to switch to than SQL Server CE.
See also H2 and Firebird.

Hiding tables from database users [closed]

I have a web application using a database in SQL Server 2008. I am the developer of this project, and the project is hosted elsewhere.
I've delivered the project to the administrator. Now the administrator is able to connect to SQL Server, but I don't want the administrator to be able to see the database tables.
Is there any way to do this?
I've seen this done before by the makers of ACT. Their product installs a new instance of SQL Server Express and, as part of that process, encrypts the sa password for the instance. This makes it 'impossible' for others to connect to the database using anything other than their product and add-on tools.
I don't know exactly how they do it, but perhaps you could search for encrypting sa password or something similar and find out how to do it.
Shy of installing your own instance of SQL Server, I am not sure how you would go about this.
Bear in mind that your application will then need to provide the ability to back up, tune, modify, etc., the database, as the DBA would not have access to the instance of SQL Server.
Incidentally, we threw ACT out once we saw this - I didn't, and still don't, like the idea of a black box running on one of our servers.
In the end, you'll probably find that this added layer of protection (for you, not the client) just isn't worth the aggravation. While you may have proprietary information in the form of the database schema, the odds of the client reverse-engineering your application and then making their own are slim. Even if they did, it is hard to make good software - they likely wouldn't get it 'right' anyway.
My advice: don't worry about this; focus on making your software great so there is no reason for them to roll their own or look elsewhere.
Simple: no, he is the DBA. He cannot do his job without the ability to work with the database. Get over your objections.
You cannot limit the rights of an administrator on a server.
The administrator also has access to pretty much all tables that store your encryption keys, so TDE won't work.
If he is a security admin or in one of the other limited roles, that might solve the issue, as by default such roles have no permissions on your data.
SQL administrators normally have these roles to protect your data. They need them.
If the data is very sensitive, you need to use alternative means to secure it, such as applying your own encryption before saving the sensitive values. (AES etc.)
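As a hedged sketch of that approach using SQL Server's built-in passphrase functions (the table, column, and passphrase are hypothetical, and the passphrase must itself stay out of the administrator's reach, e.g. supplied by the application at runtime):
INSERT INTO Customers ( Name, SSN_Encrypted )
VALUES ('Alice', ENCRYPTBYPASSPHRASE('app-held passphrase', '123-45-6789'));
SELECT Name,
    CONVERT(varchar(11), DECRYPTBYPASSPHRASE('app-held passphrase', SSN_Encrypted)) AS SSN
FROM Customers;
DECRYPTBYPASSPHRASE returns NULL for a wrong passphrase, so an admin browsing the table sees only varbinary blobs.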

SQL-Azure Performance, Add Database or Add Server?

This is not a traditional scale-up or scale-out question.
Please bear with me; first allow me to give an example:
I created a SQL Azure server and created a 1 GB database inside it, which costs $9.99 a month.
(It has a master database as well, 1 GB, but Microsoft does not charge us for that.)
OK, here is where my question comes in: I need another 1 GB database for my application. Why do I need another 1 GB database? You may ask this because Azure can support databases up to 50 GB. My answer is distribution: I know the data will reach 50 GB eventually, so I created the data model to distribute and spread the data across different databases.
For the sake of performance, which option should I use?
Create another database on the same server.
Create another server and create a new database inside it.
Both options cost the same.
I guess option 2 would be better, wouldn't it?
I'm not sure there are strong (or any) performance implications. My understanding is that the consideration is mostly a management one, as some entities, mostly around security, are defined at the server level and some at the database level.
Behind the scenes the model is quite different anyway, and a multi-tenant one, so having a separate SQL Azure server does not actually mean you get a dedicated server per se. Theoretically, separate servers or separate databases may end up looking exactly the same.