SQL Managed Instance: Making Connectivity Easier for Users

So we just switched to SQL Managed Instance. As much as it pains me to admit, we had more than a few places where untrained users were querying our production database(s). The switch to a Business Critical SQL Managed Instance was made partly because we could have them connect to a read-only replica of the DB.
Upon digging more, it seems that to connect to the read-only replica (ROR), they're going to need to open SSMS, hit "Advanced Options", then go over to extra parameters and put ApplicationIntent=ReadOnly. This is a bit of a bummer because 1. many of them will probably mistakenly connect to the "real" DB and potentially cause havoc, and 2. that's a lot of extra steps for a user.
My Questions:
Is there a way to use SSMS to bake a connection into their system somehow that automatically sets the parameters?
If not, is there a way to deny them the connection if they DON'T have those parameters? (One possible approach is sketched below the side note.)
Side note: I put a CNAME in my private DNS pointing sqlprod01.mydomain.com to the endpoint. Through the CNAME I get "bad login", but when I keep the same login info and hit the endpoint directly, it works fine. What's up with that?
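On question 2: T-SQL has no documented way to see a session's ApplicationIntent value, so the parameter itself can't be tested for. A heavily hedged sketch of a workaround is a server-level logon trigger that refuses a named reporting login whenever its session lands on the read-write primary rather than the replica. ReportUser and ProdDb are hypothetical names, the replica behavior is not verified here, and a buggy logon trigger can lock everyone out (the dedicated admin connection is the escape hatch):

-- Create in master; logon triggers are supported on Managed Instance.
CREATE TRIGGER deny_primary_for_reporting
ON ALL SERVER
FOR LOGON
AS
BEGIN
    -- ApplicationIntent isn't visible from T-SQL, so check where the session
    -- actually landed: Updateability reads READ_ONLY on the replica.
    IF ORIGINAL_LOGIN() = N'ReportUser'   -- hypothetical reporting login
       AND DATABASEPROPERTYEX(N'ProdDb', N'Updateability') = N'READ_WRITE'
        ROLLBACK;  -- rolling back inside a logon trigger refuses the connection
END;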

Related

Is it possible to trace who deleted records in a SQL table?

There are many SQL Server instances hosted on different servers.
All of them use SQL Server Authentication, so the same login is shared by many people in the organization.
How can we trace who deleted some of the records in a particular table?
Do we need additional coding, such as triggers, or is this a built-in feature of SQL Server?
Please help me.
Thank you.
If the deletion has already occurred and you had nothing in place to track / log this, then the chances are going to be very low - they are not zero, but not far above it.
If you use the transaction log to identify the exact deletion and the session behind it - which we already know maps to the shared login - and you have successful-login security auditing enabled, you would in theory be able to trace it back to the IP address that made the deletion.
However, that is a pretty slim chance. I would suspect that the login comes from the actual application software, and you would have needed that to be running directly on the user's machine - i.e. not a 3-tier / web-based setup of any flavor, but a good old thick-client app making direct connections.
That gets you an IP and a time, but not who was logged in on that machine at that time; if it's shared in any form, then you are left chasing login records on the machine itself, etc.
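To make the log-digging step concrete, here is a hedged sketch. fn_dblog is undocumented and unsupported, its columns vary between versions, the table name is hypothetical, and this only works while the relevant log records haven't been truncated:

SELECT b.[Begin Time],
       SUSER_SNAME(b.[Transaction SID]) AS login_name,  -- will just be the shared login here
       d.AllocUnitName
FROM fn_dblog(NULL, NULL) AS d
JOIN fn_dblog(NULL, NULL) AS b
  ON  b.[Transaction ID] = d.[Transaction ID]
  AND b.Operation = N'LOP_BEGIN_XACT'          -- the begin record carries the time and SID
WHERE d.Operation = N'LOP_DELETE_ROWS'         -- one record per deleted row
  AND d.AllocUnitName LIKE N'dbo.SomeTable%';  -- hypothetical table

Pairing the [Begin Time] with the successful-login audit entries in the error log is what gets you from the shared login to an IP address.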

How to elegantly poll/pull information from a database?

I am currently beginning a new personal project. I have a database that keeps track of users as they log in to my webpage. It shows when they log on and log off. It uses SQL Server 2008.
What I would like to do is, whenever a user logs in, have a scrolling bar along the top of my webpage alert me to this. I have created a dashboard to keep track of a lot of my website statistics, and this is something I think would be really cool. Useless, ultimately - but it would produce a "heheh" from me every so often, so why not?
Now, I have never attempted to build something like this (which is the reason I am building it!), so I am torn between a few different design approaches. It seems like I could poll the database server repeatedly using SqlDependency (http://msdn.microsoft.com/en-us/library/system.data.sqlclient.sqldependency.aspx), just writing a query to find the set of currently logged-in users and displaying any additions to that pool. If this is the right path to go down, then I would appreciate some more in-depth commentary on how this could be used.
From a high-level perspective it seems like, rather than repeatedly polling the database, it would be more efficient to have the DB push the message out to my web server when there is a change. Would this be possible? If so, how?
For the sake of argument, and to give this discussion a bit more specificity, let's assume our SQL Server tables are structured as follows (but feel free to make any improvements or changes as you see fit!):
CREATE TABLE Users (
    ID INT IDENTITY PRIMARY KEY,
    Username VARCHAR(100),
    Password VARCHAR(100)  -- store a salted hash, never plain text
);
CREATE TABLE LogInOrOutLogs (
    SessionID INT IDENTITY PRIMARY KEY,
    UserID INT FOREIGN KEY REFERENCES Users(ID),
    TimeLoggedIn DATETIME,
    TimeLoggedOut DATETIME,
    CurrentlyLoggedIn BIT
);
Open to all technologies, all database structures, all design ideas. Go crazy! Only requirements: you have a DB of users which updates as they log in and out, and you display the information on a web server as meaningfully, elegantly and simply as you can.
Thanks a lot - looking forward to reading people's solutions to this problem.
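As a baseline, here is a minimal sketch of the polling query the question describes, written against the schema above (assuming CurrentlyLoggedIn = 1 means a live session):

SELECT u.ID, u.Username, l.TimeLoggedIn
FROM Users AS u
JOIN LogInOrOutLogs AS l
  ON l.UserID = u.ID
WHERE l.CurrentlyLoggedIn = 1
ORDER BY l.TimeLoggedIn DESC;

The dashboard would run this on a timer, diff successive result sets, and scroll anything new across the bar.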
Have you looked at Hibernate? It is an elegant object layer over a SQL database.
You can then put triggers on your database to push events out. When an event occurs on your data, you send it to your web application via long polling (an AJAX request with a very long timeout; the request is re-sent after an event is received).
A crazy design could also use a two-way messaging system: one channel for messages going into the DB, and another for messages coming out of it.
If you really like crazy things, you could think about a cache using a db4o database (as a cache in front of your SQL Server) embedded in a ServiceMix / Red Hat Fuse container. That is easy with ServiceMix because of its predeployed broker (ActiveMQ), and Fuse has its nice Fabric system.
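To make the trigger-push idea concrete in SQL Server terms, here is a hedged sketch using Service Broker, the built-in queuing feature that SqlDependency is itself layered on. All object names are hypothetical, the trigger is simplified to single-row inserts, and a real implementation would reuse or end its conversations:

ALTER DATABASE MyWebAppDb SET ENABLE_BROKER;  -- hypothetical DB; may need exclusive access

CREATE QUEUE LoginEventQueue;
CREATE SERVICE LoginEventService ON QUEUE LoginEventQueue ([DEFAULT]);

CREATE TRIGGER trg_LoginEvent ON LogInOrOutLogs
AFTER INSERT
AS
BEGIN
    DECLARE @dialog UNIQUEIDENTIFIER, @body NVARCHAR(100);
    SELECT TOP (1) @body = CAST(UserID AS NVARCHAR(100)) FROM inserted;  -- simplified: one row
    BEGIN DIALOG CONVERSATION @dialog
        FROM SERVICE LoginEventService
        TO SERVICE 'LoginEventService'
        WITH ENCRYPTION = OFF;
    SEND ON CONVERSATION @dialog (@body);  -- enqueue the user id for the web tier
END;

-- The web tier then blocks on the queue instead of polling the tables:
-- WAITFOR (RECEIVE TOP (1) CAST(message_body AS NVARCHAR(100)) FROM LoginEventQueue), TIMEOUT 60000;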

Cannot access SQL Azure

Just had a bizarre issue with SQL Azure, and it happened in a small phase just before full go-live, with some users doing data entry.
"Database 'dbname' on server 'xxx' is not currently available. Please retry the connection later. If the problem persists, contact customer support."
When I tried to connect via SQL Azure database website I got:
"Firewall check failed.
Resource ID : 1. The request minimum guarantee is 0,
maximum limit is 180 and the current usage for the database is 0.
However, the server is currently too busy to support request greater than 0 for this database."
Looking at the databases section of the Azure Management website, the site reported that it couldn't access the DB, but unfortunately I didn't capture the exact error message.
Bizarrely, a couple of my users were still able to log in to our system website that accesses the DB, and to view and save data. Eventually they lost their connections too, however.
After an hour or so, the databases came back to life and we could fully access them again.
I have looked at the server's master DB event table using queries from here, and there were a couple of connection failures but nothing interesting - no throttling or deadlocks, just a couple of failed connections whose description said "Client may have timed out when establishing connection. Try increasing the connection timeout."
Any ideas where else to look?
Business users have had a massive drop in confidence because of this.
What you're describing normally occurs because of:
1) The SQL connection limit being hit. Assuming you don't see this often, it's unlikely to be the cause, but it's worth checking - putting a limit on your connection pool can help.
2) Your neighbours being extremely noisy, causing the node to re-adjust.
3) Hardware failure, with Microsoft bringing your database back online on a different node. This can take some time.
Normally I have seen this when Microsoft has throttled or had problems with a box and had to recover everyone off it. Because you are on a shared system, you have to keep in mind that they are recovering everyone else in that node as well, and sometimes that takes time.
The best bet, if you are worried and need a resolution for the business, is to open a support ticket with MS and give them the time and the error message you saw. They will investigate, and they generally have really good back-end telemetry that will point to a reason. This will let you give the business a resolution, and then you can make a call on future plans and contingencies. You have to keep in mind, though, that SQL Azure is a shared system and transient errors can happen; you may need to design more failover into your architecture.
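For anyone who wants their own data to pair with what the support ticket turns up, the master-database event table the question mentions can be queried directly. A hedged sketch, assuming the sys.event_log view exposed by SQL Azure (connect to the logical master database to run it):

SELECT start_time, event_type, event_subtype_desc, severity, event_count, description
FROM sys.event_log
WHERE database_name = 'dbname'                      -- the database from the error message
  AND start_time >= DATEADD(DAY, -1, GETUTCDATE())  -- last 24 hours
ORDER BY start_time DESC;

Connectivity failures, throttling and deadlock events all surface there with a short description, which is often enough to tell "noisy neighbour" from "hardware failover".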

SyncFramework 2.1 updates & deletes do not seem to apply properly

I'm synchronizing SQL Server 2008 with ~6 SQL Server 2008 Express clients (everything R2, I believe), using the SyncOrchestrator - specifically using http://code.msdn.microsoft.com/windowsdesktop/Database-SyncSQL-Server-e97d1208 as a base with slight modifications. To my knowledge this means all connections are peers or nodes.
I have 2 scopes. One is download-only and the other is upload-only. The download-only scope is riddled with identity columns, primarily because I didn't know any better and couldn't wrap my head around introducing GUIDs as the PK on the client side. It doesn't totally matter, as all clients should have exact replicas of about 8 or so tables, and these machines don't touch this data in any way - they only read it.
The upload-only scope uses GUIDs, as fortunately I can control that portion of the database, and there would be no way 10 clients all using the same identity seed could sync back to the server properly. Both scopes use the default provisioning with bulk inserts and the whole 9 yards, so there shouldn't be anything I'm doing on the provisioning end to screw this up.
I initially set everything up without PerformPostRestoreFixup, and the initial database was manually synchronized with insert statements from the host. This seemed fine, but no updates or deletes ever seemed to be applied. You can safely ignore this (mentioned only for historical accuracy and to prove my ineptness), as I then used VS2010 Database Projects to rebuild the database down to schema only and synchronized. I then used the steps outlined here (http://social.microsoft.com/Forums/br/syncdevdiscussions/thread/9ac6d1a1-1565-4b82-a8d8-3d4a9ff5d07b) (sync, backup, restore, call PerformPostRestoreFixup, sync on x clients), and on the dev box where I'm setting all this up I could see updates and deletes just fine. It's when I deploy this to the x clients that I'm not seeing a mirror of the database as I think I should.
The initial sync will complain and try to synchronize all records again; I believe this is expected. During the ApplyChangeFailed event on the client, I set everything other than DbConflictType.ErrorsOccurred to ApplyAction.RetryWithForceWrite. This may be a source of problems, as I initially thought this was how to force the change down to the client. I want the server to always win in this scenario, but during tracing I always see the phrase "Local wins" during the bulk insert/update calls. It's possible I'm seeing the error before the re-apply happens, but it's awkward to look at.
The only problem I seem to be having is with the download-only scope. The initial client database is about a week old now, and if I use the PerformPostRestoreFixup steps I don't see any of the updates that have been applied between then and now, as I think I should. It's as if SyncFx almost prefers a blank database on the client side to kick off the initial sync; then all the updates seem to apply just fine, with no ApplyChangesFailed events kicking off.
If anyone has seen this before or has a clue where to go, I would greatly appreciate it. My brain is fried from trying to determine what's going on. My last-ditch effort will be to deploy blank databases to all the clients and have them start the sync. I've had no issues with this on the dev side, but I can only test one other client to know whether that will do anything different. Aside from that, I don't know what to do other than to keep doing manual syncs, which would defeat the purpose entirely. I thought PerformPostRestoreFixup would alleviate the issue entirely, but I seem to be having the same problems with or without it - or perhaps I'm not looking at what I need to be.
Thanks
I wanted to report and close the entry with my findings.
When I would deploy a previously configured client database, I'd often get ApplyChangeFailed events in the form of this log:
"[05:30:41 PM] - ApplyChange Failed: TableName: , Stage: ApplyingInserts, ConflictType: LocalInsertRemoteInsert, Action: RetryWithForceWrite"
This is what I thought would be expected, as it tried to re-insert data that was already there. RetryWithForceWrite should have turned this into an update statement, but I found the data was not being updated with what was being sent down.
Once I started each client with a completely blank database and provisioned locally, all of these errors went away. It's as if every client expects some unique ID that only it sets. I'm also using x64 builds versus x86, which may or may not have a bearing on the results. I wish I could determine exactly what happened, but it seems that, when in doubt and whenever possible, starting from absolute zero and letting sync fill in the data is your safest option.

Leaving SQL Management open on the internet

I am a developer, but every so often I need access to our production database - yeah, poor practice, but anyway... My boss doesn't want me directly on the box using RDP, so we decided to just permit SQL Server Management Studio access so that I can do my tasks. So right now we have the SQL box somewhat accessible on the internet (on port 1433, if I am not mistaken), which opens a security hole. But I am wondering: how uncommon a practice is this, and what defaults should I be concerned about? We use MSSQL2008, and I created an account that has read-only access, because my production tasks only need that. I didn't see any unusual default accounts with default passwords on the system, so I would be interested to hear your take. (And of course, is there a better way?)
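For reference, a minimal sketch of the kind of read-only account the question describes - the login name, password and database name are placeholders, and on SQL Server 2008 role membership is granted with sp_addrolemember:

CREATE LOGIN ReadOnlyDev WITH PASSWORD = 'use-a-long-random-passphrase';
GO
USE ProdDb;  -- hypothetical database name
CREATE USER ReadOnlyDev FOR LOGIN ReadOnlyDev;
EXEC sp_addrolemember 'db_datareader', 'ReadOnlyDev';

db_datareader covers SELECT on every table and view; if the tasks only touch a few objects, granting SELECT on just those is tighter still.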
Exposing a database or RDP directly to the Internet, even if locked down, is akin to putting up a sign saying "do not enter" - the security provided is not significant (and more importantly, could disappear tomorrow when an exploit is discovered).
A VPN is akin to actually locking the door - although security holes are sometimes discovered in VPN software, they are much rarer, as security is a primary concern there (as opposed to e.g. database servers, where it's mostly an afterthought). As for stability, I've never encountered this problem with a VPN server under such a small load (occasional access by a few users).
Bottom line: Unless you need to expose it to everyone (e.g. a web server), don't put it directly on the Internet.
BTW, are you sure your database server has not been hacked? In my experience, "has not been hacked" often means "didn't notice it", or at best "not hacked yet" - either way, that's a far cry from "reasonably secure".