What is the difference between configuring connection pooling in application code, as shown in the first link below, versus configuring it on the server itself (WebLogic), as shown in the second link?
http://javarevisited.blogspot.com/2012/06/jdbc-database-connection-pool-in-spring.html
https://docs.oracle.com/cd/E13222_01/wls/docs81/ConsoleHelp/jdbc_connection_pools.html#1106131
Can someone explain?
Thanks!
If you are a web application programmer just trying out new stuff, you can do it either way; you can learn both approaches if you want to.
If you are setting up a company, you may need more structure and separation of responsibilities.
Some of the advantages of server-side connection pooling are listed below, with a configuration sketch after the list:
Security. The password for the production database can remain encrypted and unknown to the developer.
Separation of roles. The connection pool can be managed by a different group of people, let's say middleware admins, who may not know how to code.
Administration. Stopping, starting and other lifecycle events can be managed by non-developers.
Configuration. The pool can be fine-tuned by database admins or data analysts as needed.
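To make the contrast concrete, here is a minimal Java sketch of the two styles. The driver, URL, account, and JNDI name are hypothetical, and Commons DBCP 2 stands in for whatever application-side pool you pick (the Spring article in the question uses the same idea):

    import javax.naming.InitialContext;
    import javax.sql.DataSource;
    import org.apache.commons.dbcp2.BasicDataSource;

    public class PoolingStyles {

        // Application-side pool: the app owns the credentials and pool settings.
        static DataSource applicationSidePool() {
            BasicDataSource ds = new BasicDataSource();
            ds.setDriverClassName("oracle.jdbc.OracleDriver");
            ds.setUrl("jdbc:oracle:thin:@//dbhost:1521/APPDB"); // hypothetical URL
            ds.setUsername("app_user");                         // hypothetical account
            ds.setPassword("secret");   // the password lives in application config
            ds.setMaxTotal(20);         // pool sizing decided by developers
            return ds;
        }

        // Server-side pool: the app only knows a JNDI name; credentials, sizing
        // and lifecycle are managed by admins in the WebLogic console.
        static DataSource serverSidePool() throws Exception {
            return (DataSource) new InitialContext().lookup("jdbc/AppPool"); // hypothetical JNDI name
        }
    }

Either way, the rest of the code just calls getConnection() on the DataSource; the difference is who owns the configuration.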
I have a requirement to create a tablet application for use in restaurants. It will all be on a private internal network so security is not an issue. The question is which will cause the least network traffic? I can either connect directly to SQL using entity framework or I can connect to web services I create on the SQL server in IIS and the tablets communicate with that.
I guess to simplify it, does a standard SQL connection transfer more data than is necessary?
It's difficult to give a general rule, as network architecture plays into the answer quite heavily.
As a general guideline, I would suggest building web services (or PHP "interfaces") on the server. That gives you an easier, more controllable data flow, and transactions are simpler to handle because all of them go through one interface, with all DB accesses coming from one machine. It also makes debugging and error handling easier than having every client connect directly to the DB (log at the interface and you see everything that's happening, so you don't have to check logs on devices), and it gives you more control; see the sketch below.
Just a general suggestion: some kind of web service/interface layer is almost always worth the investment; sooner or later you will go in this direction anyway.
My humble opinion.
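To illustrate the "one interface" idea, here is a minimal sketch using only the JDK's built-in HTTP server; the endpoint and port are hypothetical, and a real version would do validation, transactions, and logging at this single entry point before touching the DB:

    import com.sun.net.httpserver.HttpServer;
    import java.io.OutputStream;
    import java.net.InetSocketAddress;

    public class OrderGateway {
        public static void main(String[] args) throws Exception {
            HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
            server.createContext("/orders", exchange -> {
                // Every tablet talks to this one endpoint, so all DB access,
                // error handling and logging happen in one place on the server.
                byte[] body = "ok".getBytes();
                exchange.sendResponseHeaders(200, body.length);
                try (OutputStream os = exchange.getResponseBody()) {
                    os.write(body);
                }
            });
            server.start();
        }
    }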
Background:
Our team is building an in-house Intranet web application. We are using a standard three-layer approach: presentation layer (MVC web app), business layer, and data access layer.
A SQL database is used for persistence.
The web app / IIS handles user authentication (Windows authentication). Logging is done in the business and data access layers.
Question: service account vs. user-specific SQL accounts
Use a service/app account:
The dev team is proposing to set up a service account (created for the application only). This service account needs read and write access to the DB.
vs.
Pass user credentials on to SQL:
IT ops is saying that using a service account (created specifically for the app) for DB access is not considered best practice. Instead, set up Kerberos delegation from the web server to the SQL server so that the end users' Windows credentials are passed through, and create a database role that grants the appropriate data access levels to end users.
What is the best practice for setting up accounts in SQL where all requests to the DB come through the front-end client (i.e., via the business layer and then the data layer)?
The best practice here is to let the person or team responsible for the database make the decision. It sounds like the dev team wants to forward (or impersonate) some credentials to the DB, which I know some small teams like doing, but yes, that can leave things a bit too open: the app can do whatever it likes to the database, which is not much of a separation if you're into that kind of thing.
Personally, if I understand what you're saying above, I do more of what the IT team is thinking about (I use Postgres). In other words my app deploys over SSH using a given account (let's say it's the AppName account). That means I need to have my SSH keys lined up for secure deployment (using a PEM or known_keys or whatever).
In the home directory for AppName I have a file called .pgpass, which has pretty tight permissions on it (0600). This means that my AppName account picks up its database credentials locally rather than having a username/password embedded in the application.
I do this because otherwise I'd need to store that information in a file somewhere, and those things get treated poorly (pushed to GitHub, for instance).
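For reference, each line of a .pgpass file is hostname:port:database:username:password (any field may be *), and libpq ignores the file unless its permissions are 0600 or stricter. A hypothetical entry:

    dbhost:5432:appdb:AppName:s3cret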
Ultimately, think 5 years from now and what your project and team will look like. Be optimistic - maybe it will be a smashing success! What will maintenance look like? What kinds of mistakes will your team make? Flexibility now is nice, but make sure that whoever will get in trouble if your database has a security problem is the one who gets to make the decision.
The best practice is to have individual accounts. This allows you to use database facilities for identifying who is accessing the database.
This is particularly important if the data is being modified. You can log who is modifying what data -- generally a hard-core requirement in any system where users have this ability.
You may find that, for some reason, you do not want to use your database's built-in authentication mechanisms. In that case, you are probably going to build a layer on top of the database, replicating much of the built-in functionality. There are situations where this might be necessary. In general, this would be a dangerous approach (the database security mechanisms probably undergo much more testing than bespoke code).
Finally, if you are building an in-house application with just a handful of users who have read-only access to the database, it might be simpler to have only a single login account. Normally, you would still like to know who is doing what, but for simplicity, you might forego that functionality. However, knowing who is doing what is usually very useful knowledge for maintaining and enhancing the application.
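As a sketch of what that logging can look like: with individual accounts, an audit trigger can record SUSER_SNAME(), which resolves to the end user only when their credentials are passed through (with a shared service account it would always record the service account). The table names and connection string below are hypothetical:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class AuditTriggerSetup {
        public static void main(String[] args) throws Exception {
            // Records who changed which order and when, using the login name
            // of whoever is on the current connection.
            String ddl =
                "CREATE TRIGGER trg_orders_audit ON dbo.Orders AFTER UPDATE AS " +
                "INSERT INTO dbo.OrdersAudit (OrderId, ChangedBy, ChangedAt) " +
                "SELECT i.OrderId, SUSER_SNAME(), SYSUTCDATETIME() FROM inserted i;";
            try (Connection con = DriverManager.getConnection(
                    "jdbc:sqlserver://dbhost;databaseName=AppDb;integratedSecurity=true");
                 Statement st = con.createStatement()) {
                st.execute(ddl);
            }
        }
    }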
I want to enforce row-level security for any client connecting to a particular SQL Server database (just one database). I don't want to require any particular setup on the client side (because that would mean a client could set itself up to have access to anything, which of course would be bad; BTW, the client is a WinApp that connects using either Windows Auth or SQL Auth). I basically want this to be transparent to any client. The client should not even know this is happening.
The enforcement of the row-level security will be performed in views inside the database that are layered above the tables. In essence, no one will have the ability to perform DML against the tables directly; instead, all operations will be performed against the views that lie on top of the tables. These views will have INSTEAD OF triggers running under a particular EXECUTE AS, to ensure the DML operations can be correctly executed.
So basically, I want to remove the potential of the client to circumvent this security model by baking it into the database itself.
I also want to separate the permissions granted to the user from the effective permissions applied to the current connection. Think of it this way: if you are connected to the DB, you have a Security Context associated with your connection (maintained in the DB, mind you) that contains the information about which items you have access to. Upon establishing the connection, this Security Context is created and populated based on the permissions assigned to you, and when the connection is closed, the entire Security Context is removed. Of course, the Security Context should only be visible within its own connection; connections should not even be able to see that a Security Context exists for other connections.
(EDIT: one of the scenarios explicitly targeted, which explains why the Security Context is separated from the defined permissions, is as follows: if you establish a connection to the DB, you get a Security Context; if the admin assigns new permissions to you while your connection is still open, those new permissions will not be applied to the currently open connection. You must close the connection and reconnect for these changes to be reflected in your Security Context.)
I know how I can enforce that the views will only return information that the user has access to. That's the easy part... This question is about creation and deletion of the Security Context I am talking about.
The Security Context should be created on establishing the connection.
It should also reside in an artifact that is accessible only to the current connection.
It must be queryable during the lifespan of the connection by the connecting user.
Finally, the Security Context must be dropped/removed/deleted when the connection is closed.
Would anyone have an idea about how this can be achieved?
Thanks
A SQL Server temp table (one with a name starting with #) is only available to the current connection and is dropped automatically at the end of it. You only have to deal with establishing it when a new connection is created (see the sketch below).
However, it sounds like you are re-implementing a lot of what the DBMS already does for you. I would recommend reading up more on SQL Server's built-in security mechanisms, in particular login/user/schema separation.
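A minimal sketch of the temp-table idea in JDBC terms, assuming a hypothetical dbo.UserPermissions table. The # table is private to the connection and vanishes when the connection closes, which also gives you the "reconnect to pick up new permissions" behavior from the question:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class SecurityContextDemo {
        public static void main(String[] args) throws Exception {
            try (Connection con = DriverManager.getConnection(
                    "jdbc:sqlserver://dbhost;databaseName=AppDb;integratedSecurity=true")) {
                try (Statement st = con.createStatement()) {
                    // Visible only to this connection; dropped automatically on close.
                    st.execute("CREATE TABLE #SecurityContext (ItemId INT PRIMARY KEY)");
                    st.execute("INSERT INTO #SecurityContext (ItemId) " +
                               "SELECT ItemId FROM dbo.UserPermissions " +
                               "WHERE LoginName = SUSER_SNAME()");
                }
                // Stored procedures and triggers running on this connection can
                // now join against #SecurityContext for its entire lifespan.
            }
        }
    }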
I don't really understand your application in the abstract, so I don't entirely understand where you're coming from here, but here's a question for you: it sounds like you're giving your users a direct connection to your database. Do you need to do that?
Perhaps I'm missing the point, but could not all of this row-level security be done away with entirely if you built an API into your application, rather than providing your users with a direct database connection? If you set things up that way, your API could be the gatekeeper that prevents users from making changes to rows to which they should not have access. Perhaps that would be simpler than working directly at the database level?
I hope that's helpful.
I'm interested to find out what techniques developers are using to connect to a SQL Azure instance running in the cloud.
From what I understand it is very similar to SQL Server, with two of the key differences being that Multiple Active Result Sets (MARS) is not supported and that idle or long-running connections are automatically terminated by Azure. For this, Microsoft suggests incorporating retry logic in your application to detect a closed connection and then attempt to complete the interrupted action. Does anyone have example code that they are currently using for this?
To build out the data layer I was looking at various ORMs. Since I'm going to be accessing SQL Azure from Windows Azure (i.e., separate boxes), it would seem key that any ORM would need to support asynchronous methods so as not to block any Windows Azure instances.
Any suggestions as to which ORM to use, or comments on what you are currently using?
I have successfully used NHibernate with Azure, and we are in the process of building a commercial app on top of NHibernate. The only problem I had was with the connection pools when running locally and connecting to SQL Azure in the cloud, which was fixed by turning connection pooling off.
You may find similar problems with other ORMs... SQL Azure is less patient (for obvious reasons) than most people are used to. Connections time out quicker, get recycled sooner, and so on.
Test first!
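On the retry-logic question: the thread is .NET-centric, but the pattern is language-neutral, so here is a minimal JDBC-flavored sketch. A real implementation should inspect the error code and retry only genuinely transient failures, and should wrap whole units of work rather than single statements:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.SQLException;
    import java.sql.Statement;

    public class RetryDemo {
        static final int MAX_ATTEMPTS = 3;

        // Re-opens the connection and retries when the server has silently
        // dropped an idle connection mid-conversation.
        static void executeWithRetry(String url, String sql) throws SQLException {
            SQLException last = null;
            for (int attempt = 1; attempt <= MAX_ATTEMPTS; attempt++) {
                try (Connection con = DriverManager.getConnection(url);
                     Statement st = con.createStatement()) {
                    st.execute(sql);
                    return;
                } catch (SQLException e) {
                    last = e; // assume transient; check e.getErrorCode() in real code
                    try {
                        Thread.sleep(1000L * attempt); // simple linear backoff
                    } catch (InterruptedException ie) {
                        Thread.currentThread().interrupt();
                        throw last;
                    }
                }
            }
            throw last;
        }
    }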
Here's one specifically designed for Azure:
"Telerik recently announced the
availability of Open Access, the first
ORM that works seamlessly with SQL
Azure relational databases in the
Windows Azure cloud."
And a few commenters at the Azure User Group recommend LLBLGen and Entity Framework.
I've been using Entity Framework - runs without any problems, just a different connection string.
What you do have to think about is your connection strategy and how efficient your queries are. I've got a method that's easy to write in EF: I've got a new record that could be duplicated, so I check if it's there and, if not, add it.
EF makes it really easy to do this, as if you're just accessing a local collection. BUT... if you're paying for your DB access because it's in Azure and not on your local network, hmm, maybe there's a better (aka cheaper) way of doing that (see the sketch below).
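One cheaper pattern for that particular case is to push the existence check into the INSERT itself, so it costs one round trip instead of two. A hedged sketch in JDBC terms (table, columns, and connection string are hypothetical; EF can issue equivalent raw SQL, and a unique key on the column is still the real guard under concurrency):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;

    public class InsertIfAbsent {
        public static void main(String[] args) throws Exception {
            try (Connection con = DriverManager.getConnection(
                    "jdbc:sqlserver://dbhost;databaseName=AppDb;user=app;password=secret");
                 PreparedStatement ps = con.prepareStatement(
                     "INSERT INTO dbo.Items (Id, Name) " +
                     "SELECT ?, ? " +
                     "WHERE NOT EXISTS (SELECT 1 FROM dbo.Items WHERE Id = ?)")) {
                ps.setInt(1, 42);
                ps.setString(2, "example");
                ps.setInt(3, 42);
                int rows = ps.executeUpdate(); // 0 means the row already existed
                System.out.println(rows == 1 ? "inserted" : "already there");
            }
        }
    }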
According to Ayende, NHibernate "just works" with SQL Azure.
We have been using NHibernate without any customization on Azure (indeed, it just works); you can check Lokad.Translate as an open-source example of such use.
I've started to build a Windows Forms application. The application will work in two different modes:
local (1 user, opening and saving files just like a Microsoft Office application)
network (multiple users, all accessing a shared database in another host of the network)
For the local mode I am planning to use a SQLite embedded database, I've made some tests and it worked very well. For the network mode I'm thinking about SQL Server Express. Both solutions are free.
Now I'm worried about architecture best practices for this app. I usually split my application layers in 3: Presentation, Service (or business logic) and Data Access.
In your opinion, what are the architecture "best practices" for this kind of application, specially considering the data access layer in those 2 modes (local and network)?
For example, should I create one DAL class for local mode and one for network mode, and build a factory for them? Do you think NHibernate would work for this scenario (then I could use the same DAL class for both local and network modes)? Can you see better options than the database solutions I've chosen?
I appreciate any advice, hint, example, suggestion :)
Thanks!
If you use NHibernate, you can create your application any way you want. Plugging in a different database is just a matter of configuration.
By the way, I would prefer using MS SQL Server CE for the local database, because it is more compatible with MS SQL Server.
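For what it's worth, the "just a matter of configuration" part in NHibernate boils down to the driver/dialect/connection-string trio in hibernate.cfg.xml. An illustrative fragment (connection strings hypothetical):

    <hibernate-configuration xmlns="urn:nhibernate-configuration-2.2">
      <session-factory>
        <!-- Local mode (SQLite): -->
        <property name="connection.driver_class">NHibernate.Driver.SQLite20Driver</property>
        <property name="dialect">NHibernate.Dialect.SQLiteDialect</property>
        <property name="connection.connection_string">Data Source=local.db</property>
        <!-- Network mode would instead use NHibernate.Driver.SqlClientDriver,
             NHibernate.Dialect.MsSql2008Dialect and a SQL Server connection string. -->
      </session-factory>
    </hibernate-configuration>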