sqlps transaction issue - sql

I wrote a script that uses sqlps to deploy an SSAS tabular cube. After the deployment, I need to perform a couple of actions on the cube, but if I try to access it, I get a message saying that the cube doesn't exist. It does exist, though: if I split the actions into two scripts (deploy -> exit sqlps -> new sqlps session), it works (for reasons that don't matter here, I can't do that).
It seems that the sqlps session doesn't see the cube it just deployed. I'm wondering if there is a refresh command I can run, or if I can run sqlps in a "read uncommitted" state.

Are you using the PowerShell transaction cmdlets (Start-Transaction, Complete-Transaction)? It looks like the transaction is only being committed when your script finishes.
Try splitting the deployment logic into two transactions: one that creates the cube, and one for the other actions you need to perform. Each part should use its own transaction, like this:
Start-Transaction
# ... logic ...
Complete-Transaction
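For example, a minimal sketch of that split, assuming the deployment and the follow-up actions both go through Invoke-ASCmd (the server name and XMLA files are placeholders, not from the question):
Start-Transaction
Invoke-ASCmd -Server 'MYSSAS' -InputFile '.\DeployCube.xmla'   # deploy the cube
Complete-Transaction   # commit before anything touches the new cube

Start-Transaction
Invoke-ASCmd -Server 'MYSSAS' -InputFile '.\PostDeploy.xmla'   # the follow-up actions
Complete-Transaction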
If you need to access the transaction programmatically, you should use PowerShell's analog of TransactionScope: the cmdlet's CurrentPSTransaction property, like this:
using (CurrentPSTransaction)
{
    // Perform transactional work here
}

// Perform non-transacted work here

using (CurrentPSTransaction)
{
    // Perform more transactional work here
}
Update:
Maybe you can turn on transaction support using the declaration?
Developing for transactions – Cmdlet and Provider declarations
Cmdlets declare their support for transactions in a similar way that they declare their support for ShouldProcess:
[Cmdlet("Get", "Process", SupportsTransactions = true)]
Providers declare their support for transactions through the ProviderCapabilities flag:
[CmdletProvider("Registry", ProviderCapabilities.Transactions)]
More about isolation levels in PowerShell:
TransactionScope with IsolationLevel set to Serializable is locking all SQL SELECTs

Related

Autonomous transaction analogue in ABAP

I'm trying to commit a DML update to a database table while the main program is still running, without committing the main transaction, since there may be errors later and a need to roll it back, while the internal (saved) updates should stay.
Like Oracle's autonomous transactions.
CALL FUNCTION ... STARTING NEW TASK ... or SUBMIT ... AND RETURN don't work, as they affect the main transaction.
Is there a way to start a nested database LUW and commit it without interrupting the main LUW?
I am not aware of a way to do this with Open SQL. But when you use the ADBC framework, each instance of the class CL_SQL_CONNECTION operates within a separate database LUW (see the sketch after the list below).
I would generally not recommend using ADBC unless you have to, because:
You are now writing SQL statements as strings, which means you don't have compile-time syntax checking.
You can't put variables into SQL code anymore. (OK, you can, but you shouldn't, because you are probably creating SQL injection vulnerabilities that way). You need to pass all the variables using statement->set_param.
You are now writing Native SQL, which means you might inadvertently write SQL that can't be ported to other database backends.
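For illustration, a minimal ADBC sketch; the service connection name 'R/3*AUTONOMOUS', the table ZTAB, and its columns are my assumptions, not from the question:
DATA lv_id TYPE string VALUE `somexyz`.
* Each CL_SQL_CONNECTION instance runs in its own database LUW.
DATA(lo_con)  = cl_sql_connection=>get_connection( 'R/3*AUTONOMOUS' ).
DATA(lo_stmt) = lo_con->create_statement( ).
lo_stmt->set_param( REF #( lv_id ) ).
lo_stmt->execute_update( `UPDATE ztab SET status = 'SAVED' WHERE id = ?` ).
lo_con->commit( ). " commits only this connection; the main LUW keeps running
lo_con->close( ).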
You can create a separate function module for saving your changes and call it using STARTING NEW TASK, like below.
CALL FUNCTION 'ZFUNCTION' STARTING NEW TASK 'SAVECHANGES'
  EXPORTING
    param = value.

Teradata JDBC Warning 3932 Issue

I can't get my Teradata SQL transaction to work through a Logstash file.
I am running a somewhat complex transaction in Teradata with multiple statements (some of them DDL), each relying on previous statements, using the jdbc input plugin in Logstash. The transaction creates multiple volatile tables that provide columns of information which I draw on in later statements to complete the transaction. The transaction works perfectly when run in Teradata Studio, but has yet to work when run through a jdbc.conf file.
When I run the transaction through my config file from the command line, I receive error message 3932, which essentially tells me that I need to enter COMMIT statements after my volatile tables. I have looked into the error and have tried, without success:
entering COMMIT statements after each volatile table
placing BT and ET at the beginning and end of the transaction
changing modes within the Teradata jdbc_connection_string parameters in hopes of enabling auto-commit (I'm not sure whether it is disabled or not), for example:
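(A hypothetical sketch of that attempt; the host and paths are placeholders, and TMODE is the Teradata JDBC session-mode parameter with values TERA or ANSI.)
input {
  jdbc {
    jdbc_driver_library => "/path/to/terajdbc4.jar"
    jdbc_driver_class => "com.teradata.jdbc.TeraDriver"
    # TMODE=TERA selects Teradata session mode instead of ANSI mode
    jdbc_connection_string => "jdbc:teradata://dbhost/TMODE=TERA,CHARSET=UTF8"
    jdbc_user => "user"
    statement_filepath => "/path/to/transaction.sql"
  }
}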
I know the only issue is running the transaction through JDBC, since (as mentioned) the transaction works in Teradata Studio, and my jdbc.conf file has run successfully with a simpler query.
Any help would be much appreciated.

Using PowerShell - how to prevent SQL from accessing the same record

I want to run multiple instances of PowerShell to collect data from Exchange. In PowerShell, I use Invoke-Sqlcmd to run various SQL commands.
SELECT TOP 1 SmtpAddress FROM [MMC].[dbo].[EnabledAccounts]
WHERE [location] = '$Location' AND Script1 = 'Done'
  AND (Script2 = '' OR Script2 IS NULL)
When running more than one script, I see both scripts picking up the same record. I know there's a way to update the record to lock it, but I'm not sure how to write that out. TIA :-)
The database management system (I'll assume SQL Server) will handle contention for you. Meaning, if you have two sessions trying to update the same set of records, SQL Server will block one session while the other completes. You don't need to do anything to explicitly make that happen. That said, it's a good idea to run your update in a transaction if you are applying multiple updates as a single unit; a single change occurs in an implicit transaction. The following thread talks more about transactions using Invoke-SqlCmd.
Multiple Invoke-SqlCmd and Sql Server transaction
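Beyond relying on blocking, a common way to stop two instances claiming the same row is an atomic claim: let a single UPDATE pick and mark one record, returning it via OUTPUT. This is a general pattern rather than anything from the linked thread; the table and column names come from the question, while the 'InProgress' marker and server parameters are placeholders:
$claim = @"
UPDATE TOP (1) [MMC].[dbo].[EnabledAccounts] WITH (ROWLOCK, READPAST)
SET Script2 = 'InProgress'
OUTPUT inserted.SmtpAddress
WHERE [location] = '$Location' AND Script1 = 'Done'
  AND (Script2 = '' OR Script2 IS NULL);
"@
$row = Invoke-Sqlcmd -ServerInstance $Server -Database MMC -Query $claim
With READPAST, a second session skips rows that another session has locked instead of waiting for them, so concurrent scripts each claim a different record.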

How to rollback data in a SQL Server database?

I have unfortunately deleted data from a database by running the following query in SQL Server:
exec usp_delete_cascade "someTable", "id='somexyz'"
Can anyone please tell me how to get back my data?
Is this possible?
There are two kinds of transactions - implicit and explicit.
An implicit transaction is used every time you run a DML statement (in your case, a DELETE). This transaction is not handled by the user, so it is not true that your query did not run under a transaction.
An explicit transaction can be defined by the user (with BEGIN TRANSACTION). When you do not specify a transaction, there are only implicit transactions, which are auto-committed when the statement succeeds.
There are a few ways to recover the data, but never with 100% success and never without work. You have to use an external program such as SysTools SQL Recovery, ApexSQL Recover, or Veeam. The level of recovery depends on your storage use and your server configuration.
The only 100% effective approach is prevention (and backups, change tracking, etc.).
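For illustration, prevention with an explicit transaction (as described above) would look like this; the procedure and arguments are the ones from the question, rewritten with single quotes:
BEGIN TRANSACTION;
EXEC usp_delete_cascade 'someTable', 'id=''somexyz''';
-- Inspect the result here, then either undo the delete:
ROLLBACK TRANSACTION;
-- or make it permanent:
-- COMMIT TRANSACTION;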
You can try to recover with this ApexSQL tool, but you should also think about backups and measures to avoid this kind of problem:
http://www.apexsql.com/sql_tools_recover.aspx
Obviously it is a third-party tool, and you would have to pay to use it.
It depends on your server configuration. But by default, SQL Server does not start an explicit transaction when executing a query. So if you did not start a transaction, or the transaction was started but already committed, a rollback is impossible.
Another way to restore the data: if your database recovery model is set to FULL and you have a differential or full backup, you're lucky. If not, the data is gone forever.

Isolated committing of transaction in SQL

I have an n-tier C# ASP.NET application server which uses stored procedures to communicate with the database.
I have a service layer which rolls back all ADO.NET transactions if an exception is thrown, using TransactionScopeOption.RequiresNew.
In my stored procedure, I want to track login attempt numbers, so I want to keep the transaction framework as is but have an isolated transaction which I commit.
How do I do this?
I have tried using a new TransactionScope with TransactionScopeOption.RequiresNew in our data layer, but this has no effect.
Strange - RequiresNew in the inner (Logging) TransactionScope should work.
In the nested transaction below, TransactionScopeOption.Suppress and TransactionScopeOption.RequiresNew both work for me - the inner transaction is committed (Dal2.x), and the outer one aborted (Dal1.x).
try
{
    using (TransactionScope tsOuter = new TransactionScope(TransactionScopeOption.Required))
    {
        DAL1.Txn1();

        // The inner scope opts out of the ambient transaction (Suppress) or runs
        // apart from it (RequiresNew), so its work commits on its own.
        using (TransactionScope tsLogging = new TransactionScope(TransactionScopeOption.Suppress))
        {
            DAL2.Txn2();
            tsLogging.Complete();
        }

        throw new Exception("Big Hairy Exception"); // aborts only the outer transaction
    }
}
catch (Exception ex)
{
    MessageBox.Show(ex.Message);
}
Edit: Mixing TransactionScope and explicit T-SQL transactions is to be avoided - this is stated in the same link you referenced (http://msdn.microsoft.com/en-us/library/ms973865.aspx), quoted below.
TransactionScopes manage transaction escalation quite intelligently - the DTC will only be used if the transaction spans multiple databases or resources (e.g. SQL and MSMQ). They also work with the SQL 2005+ lightweight transactions, so multiple connections to the same database will be managed within a transaction without the overhead of the DTC.
IMHO the decision as to whether to use Suppress vs RequiresNew will depend on whether you need to do your auditing within a transaction at all - RequiresNew for an isolated txn, vs Suppress for none.
When using System.Transactions, applications should not directly utilize transactional programming interfaces on resource managers - for example the T-SQL BEGIN TRANSACTION or COMMIT TRANSACTION verbs, or the MessageQueueTransaction() object in the System.Messaging namespace when dealing with MSMQ. Those mechanisms would bypass the distributed transaction management handled by System.Transactions, and combining the use of System.Transactions with these resource manager "internal" transactions will lead to inconsistent results... Never mix the two.
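Per the Suppress-vs-RequiresNew point above, a minimal sketch of the isolated-and-committed variant (using the same hypothetical DAL2 helper as in the example code):
using (TransactionScope tsLogging = new TransactionScope(TransactionScopeOption.RequiresNew))
{
    DAL2.Txn2();          // e.g. record the login attempt
    tsLogging.Complete(); // commits this inner transaction even if the outer scope rolls back
}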