I am developing a site and using Heroku for my hosting. I am on their dev plan, which allows me to use their free database but only up to 10,000 rows. After not touching the site for two days I was shocked to see an email alert from Heroku saying that I had reached 7,400 rows in my db. After some research I realized search bots were creating sessions. Is there a way to stop this? I have tried the solution in this post but it does not seem to work:
How to disable Rails sessions for web crawlers?
I am using Rails 3.2.2.
As seen in this question (Setting session timeout in Rails 3), you can implement automatic expiry of your sessions and delete the database rows once a session has expired. If you keep the session expiry time low and keep dropping the expired rows, I guess you won't hit your Heroku limit.
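For the cleanup itself, something like this would work against Heroku's Postgres (a rough sketch; it assumes the default Rails sessions table with an updated_at column, so adjust the names and the interval to your setup):

-- remove sessions that have not been touched in the last two days
DELETE FROM sessions
WHERE updated_at < NOW() - INTERVAL '2 days';

You could run this from a scheduled task (e.g. a rake task on the Heroku scheduler) so the row count never creeps up to the plan limit.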
Why are you storing sessions in your DB? I would suggest you look for the server/site setting that is doing it and disable it. There is no point in storing sessions in the DB in most cases, and definitely not on a development site.
I'm new to Azure SQL Database, as this is my first project migrating from an on-premises setup to everything on Azure. The first thing that concerns me is that there is a limit on concurrent logins to Azure SQL Database, and if you exceed that number, it will start dropping subsequent requests. For the current service tier (S0), it caps at 60 concurrent logins, which I have already hit multiple times; I've seen a few SQL failures in my application log.
So my question is:
Is it normal to exceed that number of concurrent logins? I'd like to get an idea of whether my application has an issue or my current service tier is simply too low.
I've looked into our database code to make sure we are not leaving database connections open. We use Enterprise Library, and every use of DBCommand and IDataReader is wrapped in a using block, so they get disposed once they go out of scope.
Note: my web application consists of a front-end app with multiple web services supporting the underlying features, and each service connects to the same database for a specific collection of data. That makes me think hitting 60 concurrent logins might be normal: a single page or action can involve multiple calls behind the scenes, and thus multiple connections to the database from a few APIs, and if there is more than one user on the application, then 60 is really easy to reach.
Again, in the past with the on-premises setup, I never noticed this kind of limitation.
Thanks.
To answer your question, the limit is 60 on an S0.
http://gavinstevens.com/2016/11/30/sql-server-vs-azure-sql/
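If you want to see how close you actually get to that cap, you can poll the session DMV from the database itself (a minimal sketch; sys.dm_exec_sessions is available in Azure SQL Database, and is_user_process filters out system sessions):

-- count the user sessions currently open against this database
SELECT COUNT(*) AS open_sessions
FROM sys.dm_exec_sessions
WHERE is_user_process = 1;

Running that while the application is under normal load will tell you whether your services really are holding that many connections open at once, or whether something is leaking them.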
There are many SQL Server instances hosted on different servers.
All servers use SQL Server Authentication, so the same login is shared by many people in the organization.
How can we trace who deleted some of the records in a particular table?
Do we need any additional coding, such as triggers, or is there a built-in feature of SQL Server that provides those details?
Please help me.
Thank You.
If the deletion has already occurred and you had nothing in place to track or log it, then the chances of finding out who did it are going to be very low - not zero, but not far above it.
If you use the transaction log to identify the exact deletion and the session that performed it (which we already know maps to the shared login), and you have successful-login security auditing enabled, you would in theory be able to trace it back to the IP address that made the deletion.
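If you do want to attempt it, the usual starting point is the undocumented fn_dblog function (a sketch only, and heavily hedged: it only helps while the relevant records are still in the active log or a restored copy, column names vary between versions, and 'YourTableName' is a placeholder for your table):

-- step 1: find delete operations that are still in the transaction log
SELECT [Transaction ID], [AllocUnitName]
FROM fn_dblog(NULL, NULL)
WHERE Operation = 'LOP_DELETE_ROWS'
  AND AllocUnitName LIKE '%YourTableName%';

-- step 2: the LOP_BEGIN_XACT record for those transaction IDs carries the
-- begin time and the SID, which SUSER_SNAME() maps back to a login
SELECT [Transaction ID], [Begin Time], SUSER_SNAME([Transaction SID]) AS LoginName
FROM fn_dblog(NULL, NULL)
WHERE Operation = 'LOP_BEGIN_XACT';

In your case the login will just confirm the shared account, so it is the time plus the login-audit IP address that actually narrows things down.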
However, that is a pretty slim chance. I would suspect that the login comes from the actual application software, and you would have needed that to be running directly on the user's machine, i.e. not a 3-tier / web-based setup of any flavor, but a good old thick-client app making direct connections.
That gets you an IP and a time, but not who was logged in on that machine at that time; if the machine is shared in any form, then you are left digging through its login records, etc.
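Going forward, your instinct about triggers is right: if you want to be able to answer this question next time, an AFTER DELETE trigger that stamps the login, host name and time into an audit table is a simple option. A sketch, assuming a table dbo.MyTable with an Id column; all names here are illustrative:

CREATE TABLE dbo.DeleteAudit (
    AuditId    INT IDENTITY(1,1) PRIMARY KEY,
    TableName  SYSNAME,
    DeletedKey NVARCHAR(200),
    DeletedAt  DATETIME2     NOT NULL DEFAULT SYSDATETIME(),
    LoginName  NVARCHAR(128) NOT NULL DEFAULT SUSER_SNAME(),
    HostName   NVARCHAR(128) NULL     DEFAULT HOST_NAME()
);
GO
CREATE TRIGGER dbo.trg_MyTable_Delete ON dbo.MyTable
AFTER DELETE
AS
BEGIN
    SET NOCOUNT ON;
    -- one audit row per deleted row; login, host and time come from the column defaults
    INSERT INTO dbo.DeleteAudit (TableName, DeletedKey)
    SELECT 'dbo.MyTable', 'Id=' + CAST(d.Id AS NVARCHAR(50))
    FROM deleted AS d;
END;
GO

Note that HOST_NAME() only identifies the client machine, not the person, so with a shared login you still have the same "who was at that machine" problem described above, but at least the trail starts somewhere.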
We've released a new game on Facebook that uses SQL Azure and we're getting intermittent connection timeouts.
I dealt with this earlier and implemented a 'retry' solution that seemed to have dealt with the transient connection issues.
However, now that the game is out I'm seeing it happen again. Not often, but it is happening. When it happens, I try logging into the SQL Azure Management web portal and I get a connection timeout there too. Same with trying SSMS.
The query itself is the first one of the game and it's a simple select on a table with 4 records.
After about 4 minutes, the timeouts stop and everything is good for a day or two.
Since these are players around the country, I don't have direct contact with the users.
I'm looking for any advice on how I can figure out what's going on.
Thanks,
Tim
FYI: http://apps.facebook.com/RelicBall/
Depending on how much compute you have in front of your database, I would put a limit on the connection pool that each instance can create, via the connection string.
Try setting it like this if, for example, you have 2 compute instances in front of the database:
Max Pool Size=70;
SQL Database can only handle 180 connections; this is a hard limit, and two instances capped at 70 pooled connections each keeps you under it (2 × 70 = 140). You will also find that when you are hitting the connection limit, a retry framework makes matters worse, as it keeps trying to connect for a period of time, leading to further downtime. This might be why the timeouts last several minutes before the retry frameworks give up.
http://msdn.microsoft.com/en-us/library/windowsazure/ff394114.aspx
Have a look with the following:
-- monitor connections
SELECT
    e.connection_id,
    s.session_id,
    s.login_name,
    s.last_request_end_time,
    s.cpu_time
FROM sys.dm_exec_sessions s
INNER JOIN sys.dm_exec_connections e
    ON s.session_id = e.session_id
GO
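If you suspect pool exhaustion specifically, a variation that groups by client address can show whether each compute instance is holding its full pool open (a sketch; sys.dm_exec_connections exposes client_net_address):

-- count open connections per client machine
SELECT c.client_net_address, COUNT(*) AS open_connections
FROM sys.dm_exec_connections AS c
GROUP BY c.client_net_address
ORDER BY open_connections DESC;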
You should try to add caching to your application design; this can greatly reduce your application's overhead on the database and is recommended practice with SQL Azure, especially since you can run into connection issues. I have seen this type of issue before and it turned out to be connection limits, so it may be worth investing a bit of time in that direction to see if that is the cause. If not, I would open a ticket with MS Support.
HTH, good luck.
EDIT: Premium databases obviously raise the connection limits, so that is also worth investigating as a quick fix for this issue, and potentially a long-run one.
http://blogs.technet.com/b/dataplatforminsider/archive/2013/07/23/premium-preview-for-windows-azure-sql-database-now-live.aspx
I have a Django application hosted on Heroku using the PostgreSQL database add-on. Upon performing a GET request for the front page, my application performs a SQL query to extract some necessary display information. I also create a subprocess with Popen on each GET request.
However, when the number of GET requests increases to around one every second, I begin getting errors at the statement model.objects.get(id="----"): an OperationalError. I'm assuming that either my free plan on Heroku isn't keeping up or my database isn't keeping up.
In this case, I don't want to leave Heroku's free plan, but if I did, would I need to create more workers? Upgrade my database? What are ways to diagnose the issue? And why would a simple SQL query cause problems as the number of requests increases to around one per second? Does this seem reasonable?
My hack solution was just to sleep the view whenever I catch an OperationalError. Any other approaches recommended?
We're using Sitecore 6.5, and each time we start to publish items, users who are browsing the website get server 500 errors, which end up being:
Transaction (Process ID ##) was deadlocked on lock resources with
another process and has been chosen as the deadlock victim. Rerun the
transaction.
How can we set up SQL Server to give priority to a specific application? We cannot modify any queries or code, so it has to be done via SQL Server (or the connection string).
I've seen "deadlock victim" in transaction, how to change the priority? and looked at http://msdn.microsoft.com/en-us/library/ms186736(v=SQL.105).aspx but these seem to be per session, not globally.
I don't care if it's a fix/change to SiteCore or a SQL solution.
I don't think you can set the deadlock priority globally - it's a session-only setting. There are not any connection string settings that I know of. The list of possible SqlConnection string settings can be found here.
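For reference, the setting from that MSDN link looks like this, and it only lives for the session that issues it, which is why there is no global or connection-string equivalent:

-- session-scoped only; LOW makes this session the preferred deadlock victim
SET DEADLOCK_PRIORITY LOW;

To favor the site's queries you would need HIGH on those sessions (or LOW on the publishing session), and either way it has to be issued per session, which is exactly the limitation you ran into.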
It sounds to me like you're actually having a problem with the cache and every time you publish, it's clearing the cache and thus you're getting deadlocks with all these calls made at the same time. I haven't seen this sort of thing happen with 6.5 so you might also want to check into your caching. It would help a lot to look at your Sitecore logs and see if this is happening when caches are being created. Either way, check the caching guide on the SDN and see if that helps.