The MSDN documentation for sp_getapplock says:
[ @Resource= ] 'resource_name' Is a string specifying a name that
identifies the lock resource.
The lock resource created by sp_getapplock is created in the current database for the
session. Each lock resource is identified by the combined values of:
The database ID of the database containing the lock resource.
The database principal specified in the @DbPrincipal parameter.
The lock name specified in the @Resource parameter.
My questions:
1. Is the 'resource_name' just any old name you make up?
2. Does the 'resource_name' have to refer to a table name, a stored proc name, or a (named) transaction name?
Yes, it's any old name you make up. You can say EXEC sp_getapplock 'kitten' and it will wait for the "kitten" lock to be released before acquiring it for itself and continuing on. You define whatever resource names make sense for serializing access in your application.
I don't like the idea of naming the lock after a table because then it implies to other coders that access to that table is serialized when there's nothing in SQL Server (except for the applock framework) to enforce that. Put another way, applocks are sort of like a traffic light. There's nothing inherent about a red light that prevents you from going forward. It's just a good idea not to.
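The pattern can be sketched in T-SQL like this (the lock name 'kitten' is arbitrary, as discussed above; the timeout value is just an example):

```sql
BEGIN TRAN;

DECLARE @result int;
-- Blocks until the 'kitten' lock is available, or until the timeout expires.
EXEC @result = sp_getapplock
    @Resource    = 'kitten',
    @LockMode    = 'Exclusive',
    @LockOwner   = 'Transaction',
    @LockTimeout = 10000;   -- milliseconds

IF @result >= 0   -- 0 = granted immediately, 1 = granted after waiting
BEGIN
    -- ... do the work that must not run concurrently ...
    COMMIT TRAN;  -- transaction-owned applocks are released on commit/rollback
END
ELSE
    ROLLBACK TRAN;  -- negative result: timeout, cancel, or deadlock victim
```

Because the lock owner is the transaction, there is no separate sp_releaseapplock call needed; the lock goes away with the transaction.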
Related
How can I lock a table preventing other users querying it while I update its contents?
Currently my data is updated by wiping the table and re-populating it (I know it's not the best way to update data, but the source data has no unique key to do a record-by-record update, and this is the only way). There exists the unlikely, but possible, scenario where a user may access the table in the middle of the update and catch it while it is empty, thus returning bad info.
Is there at the SQL (or code) level a way to create a blocking statement that will wait for a DB update to complete prior to querying?
Access has very little locking capability. Unless you're storing your data in a different backend, you can only set a database-wide lock or no lock at all.
There is some locking capability: Access sets table locks while a table's structure is being changed, but as far as I can find, that is not available to the user (neither through the GUI nor through VBA).
Note that both ADO and DAO support locking (in ADO by setting the IsolationLevel, in DAO by setting dbDenyRead + dbDenyWrite when executing the query), but in my short testing, these options do absolutely nothing in Access.
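If the data lives in a SQL Server backend rather than an Access file, one sketch of a workaround is to wrap the wipe-and-reload in a single transaction, so readers either wait briefly or see the old rows, but never an empty table (the table and column names here are invented for illustration):

```sql
BEGIN TRAN;

-- TABLOCKX takes an exclusive table lock held for the rest of the transaction,
-- so concurrent readers block instead of seeing a half-loaded table.
DELETE FROM dbo.ImportTarget WITH (TABLOCKX);

INSERT INTO dbo.ImportTarget (Col1, Col2)
SELECT Col1, Col2
FROM dbo.StagingSource;

COMMIT TRAN;  -- readers resume and see the fully re-populated table
```

Readers running under READ COMMITTED will simply wait at the lock and then see the new contents; this does not help if readers use NOLOCK hints.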
On our build server, a number of feature branches get deployed against one database. Sometimes a buggy script in one branch causes Liquibase to exit without releasing the lock, and there is no easy way to find out which branch caused this. We may have up to 30 branches being deployed constantly as new changes land on them.
Is there any way (or can we have a new feature in Liquibase) to set an instance name that gets stored in the LOCKEDBY column of the DATABASECHANGELOGLOCK table, so we can easily find out which branch/instance caused the issue?
Currently, LOCKEDBY has only IP in it which is the same for all the instances.
You can specify a system property which gets inserted into the LOCKEDBY column:
System.setProperty("liquibase.hostDescription", "some value");
I think to achieve this you would need to patch Liquibase somewhere here:
https://github.com/liquibase/liquibase/blob/ed4bd55c36f52980a43f1ac2c7ce8f819e606e38/liquibase-core/src/main/java/liquibase/lockservice/DatabaseChangeLogLock.java
https://github.com/liquibase/liquibase/blob/ed4bd55c36f52980a43f1ac2c7ce8f819e606e38/liquibase-core/src/main/java/liquibase/lockservice/StandardLockService.java
to fetch an additional variable somehow (property file / environment variable / etc.) and store it in the table.
By the way, be careful with deploying multiple branches against the same database instance: one branch can make a change to the DB structure that breaks another.
Can someone please explain to me how SQL Server uses dot notation to identify
the location of a table? I always thought that the location is Database.dbo.Table
But I see code that has something else in place of dbo, something like:
DBName.something.Table
Can someone please explain this?
This is the database schema. The full three-part name of a table is:
databasename.schemaname.tablename
If the table is in the user's default schema, you can also omit the schema name:
databasename..tablename
You can also specify a linked server name:
servername.databasename.schemaname.tablename
You can read more about using identifiers as table names on MSDN:
The server, database, and owner names are known as the qualifiers of the object name. When you refer to an object, you do not have to specify the server, database, and owner. The qualifiers can be omitted by marking their positions with a period. The valid forms of object names include the following:
server_name.database_name.schema_name.object_name
server_name.database_name..object_name
server_name..schema_name.object_name
server_name...object_name
database_name.schema_name.object_name
database_name..object_name
schema_name.object_name
object_name
An object name that specifies all four parts is known as a fully qualified name. Each object that is created in Microsoft SQL Server must have a unique, fully qualified name. For example, there can be two tables named xyz in the same database if they have different owners.
Most object references use three-part names. The default server_name is the local server. The default database_name is the current database of the connection. The default schema_name is the default schema of the user submitting the statement. Unless otherwise configured, the default schema of new users is the dbo schema.
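As a concrete illustration of the forms above, all of these can refer to the same table (the server, database, and table names here are placeholders):

```sql
SELECT * FROM MyServer.SalesDb.dbo.Customers; -- four-part: via a linked server
SELECT * FROM SalesDb.dbo.Customers;          -- three-part: database.schema.table
SELECT * FROM SalesDb..Customers;             -- schema omitted: default schema, then dbo
SELECT * FROM dbo.Customers;                  -- two-part: current database assumed
SELECT * FROM Customers;                      -- one-part: default schema of the caller
```

The fewer parts you specify, the more SQL Server has to fill in from the connection's defaults, which is exactly what the resolution rules below describe.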
What @Szymon said. You should also make a point of always schema-qualifying object references (whether table, view, stored procedure, etc.). Unqualified object references are resolved in the following manner:
Probe the namespace of the current database for an object of the specified name belonging to the default schema of the credentials under which the current connection is running.
If not found, probe the namespace of the current database for an object of the specified name belonging to the dbo schema.
And if the object reference is to a stored procedure whose name begins with sp_, it's worse, as two more steps are added to the resolution process (unless the reference is database-qualified): the above two steps are repeated, but this time looking in the master database instead of the current database.
So a query like
select *
from foo
requires two probes of the namespace to resolve foo (assuming that the table/view is actually dbo.foo): first under your default schema (john_doe.foo) and then, not being found, under dbo (dbo.foo), whereas
select *
from dbo.foo
is immediately resolved with a single probe of the namespace.
This has 3 implications:
The redundant lookups are expensive.
It inhibits query plan cache reuse, as the reference has to be re-resolved for every execution, meaning the query has to be recompiled each time (and recompilation takes out compile-time locks).
You will, at one point or another, shoot yourself in the foot, and inadvertently create something under your default schema that is supposed to exist (and perhaps already does) under the dbo schema. Now you've got two versions floating around.
At some point, you, or someone else (usually it happens in production) will run a query or execute a stored procedure and get...unexpected results. It will take you quite some time to figure out that there are two [differing] versions of the same object, and which one gets executed depends on their user credentials and whether or not the reference was schema-qualified.
Always schema-qualify unless you have a real reason not to.
That being said, it can sometimes be useful for development purposes to maintain the "new" version of something under your personal schema and the "current" version under the dbo schema. It makes it easy to do side-by-side testing. However, it's not without risk (see above).
When SQL Server sees an unqualified name, it first looks in the current user's default schema to see if the table exists, and uses that one if it does.
If it doesn't exist there, it then looks in the dbo schema and uses the table from there.
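Both the resolution order and the foot-gun described above can be demonstrated with a small sketch (the schema name john_doe is just an example of a personal default schema):

```sql
-- Two tables with the same name in different schemas.
CREATE TABLE dbo.foo      (src varchar(10) NOT NULL DEFAULT 'dbo');
CREATE TABLE john_doe.foo (src varchar(10) NOT NULL DEFAULT 'john_doe');

-- For a user whose default schema is john_doe:
SELECT * FROM foo;      -- resolves to john_doe.foo (default schema wins)
SELECT * FROM dbo.foo;  -- always dbo.foo: one probe, no ambiguity
```

A user whose default schema is dbo would get different rows from the same unqualified query, which is exactly the "two differing versions" problem.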
I'm getting back into NHibernate and I've noticed a new configuration property being used in examples: SchemaAutoAction. I can't seem to find documentation on what the various settings mean. The settings, and my guesses as to what they mean, are:
Recreate -- Drop and recreate the schema every time
Create -- If the schema does not exist, create it
Update -- Issue ALTER statements to make the existing schema match the model
Validate -- Blow up if the schema differs from the model
Is this correct?
SchemaAutoAction is the same as the schema-action mapping attribute.
As per docs:
The new 'schema-action' is set to none, this will prevent NHibernate
from including this mapping in its schema export, it would otherwise
attempt to create a table for this view
Similar, but not quite. The SchemaAutoAction is analogous to the configuration property hbm2ddl.auto, and its values are:
Create: always creates the database when a session factory is created;
Validate: when a session factory is created, checks whether the database matches the mappings and throws an exception otherwise;
Update: when a session factory is created, issues DDL commands to update the database if it doesn't match the mappings;
Recreate: always creates the database and drops it when the session factory is disposed.
The user id in your connection string is static and is different from the user id of your program (which can be a GUID, for example). How do you audit-log deletes if your connection string's user id never changes?
The best place to log inserts/updates/deletes is through triggers. But with a static connection string, it's hard to log who deleted something. What's the alternative?
With SQL Server, you could use CONTEXT_INFO to pass info to the trigger.
I use this in code (called by web apps) where I have to use triggers (e.g. multiple write paths to the table) and can't put my logic into stored procedures.
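A sketch of the CONTEXT_INFO approach (the table, column, and trigger names here are invented): the application sets the logical user id on the connection before issuing the DELETE, and the trigger reads it back.

```sql
-- 1) The application sets the logical user id for this session
--    (CONTEXT_INFO holds up to 128 bytes of varbinary data).
DECLARE @ctx varbinary(128) = CAST('logical-user-42' AS varbinary(128));
SET CONTEXT_INFO @ctx;

-- 2) A delete trigger reads it back and writes the audit row.
CREATE TRIGGER trg_Orders_Delete ON dbo.Orders
AFTER DELETE
AS
BEGIN
    INSERT INTO dbo.OrdersAudit (OrderId, DeletedBy, DeletedAt)
    SELECT d.OrderId,
           CAST(CONTEXT_INFO() AS varchar(128)),  -- value set by the app above
           GETDATE()
    FROM deleted AS d;
END;
```

The value survives for the lifetime of the session, so the app must set it on every connection it takes from the pool before doing audited work.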
We have a similar situation. Our web application always runs as the same database user, but with different logical users that our application tracks and controls.
We generally pass the logical user ID as a parameter into each stored procedure. To track deletes, we generally don't delete the row; we just mark the status as deleted and set the LastChgID and LastChgDate fields accordingly. For important tables, where we keep an audit log (a copy of every change state), we use the above method, and a trigger copies the row to an audit table; the LastChgID is already set properly, so the trigger doesn't need to worry about getting the ID.
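A hedged sketch of that pattern (all table and column names are invented for illustration): the stored procedure takes the logical user id as a parameter and soft-deletes the row; a trigger then copies the changed state to an audit table.

```sql
CREATE PROCEDURE dbo.DeleteWidget
    @WidgetId      int,
    @LogicalUserId int   -- the application's user, passed in explicitly
AS
BEGIN
    -- Soft delete: keep the row, record who changed it and when.
    UPDATE dbo.Widgets
    SET Status      = 'Deleted',
        LastChgID   = @LogicalUserId,
        LastChgDate = GETDATE()
    WHERE WidgetId = @WidgetId;
END;
GO

-- The audit trigger just copies the new state of each changed row;
-- LastChgID is already correct, so the trigger never has to identify the user.
CREATE TRIGGER trg_Widgets_Audit ON dbo.Widgets
AFTER UPDATE
AS
    INSERT INTO dbo.WidgetsAudit (WidgetId, Status, LastChgID, LastChgDate)
    SELECT WidgetId, Status, LastChgID, LastChgDate
    FROM inserted;
```

The design choice here is that user identity flows through ordinary parameters rather than session state, which keeps the trigger trivial at the cost of requiring every write to go through a stored procedure.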