I was looking at the source of sys.sp_dbcmptlevel in SQL Server 2005.
In the source there is this line, and I do not understand how it works:
EXEC %%DatabaseEx(Name = @dbname).SetCompatibility(Level = @input_cmptlevel)
It doesn't appear that DatabaseEx is a stored procedure.
-- does not return any result
select *
from sys.procedures
where [name] like '%DatabaseEx%'
So my questions are
What is DatabaseEx and what does it do?
What is %% before DatabaseEx?
I think the best answer here is that it's not documented, and not supported, so don't rely on it. While it's interesting to know how SQL Server works internally, anything you do with that knowledge has the potential to break in a future hotfix, service pack or release.
Interesting find.
System SPs also refer to %%Object, %%Relation, %%ColumnEx, %%LinkedServer, %%Owner, %%CurrentDatabase(), %%ErrorMessage, %%Module, %%DatabaseRef, %%LocalLogin, %%Alias, %%ServerConfiguration, %%IndexOrStats, %%ScalarType, etc.
My interpretation is that the %%() retrieves some kind of (COM?) object based on filter criteria, followed by a method call.
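If you want to see the full list of these %% calls for yourself, you can search the text of the shipped system modules; a minimal sketch, assuming SQL Server 2005 or later (the [%] in the LIKE pattern just escapes the literal percent signs):
-- Find system modules whose definition mentions %%DatabaseEx
SELECT o.name, m.definition
FROM sys.system_sql_modules AS m
JOIN sys.system_objects AS o ON o.object_id = m.object_id
WHERE m.definition LIKE '%[%][%]DatabaseEx%'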
-- Note: database @dbname may not exist anymore
-- Change compatibility level
-- If invoke gets error, exception will abort this proc.
EXEC %%DatabaseEx(Name = @dbname).SetCompatibility(Level = @input_cmptlevel)
It looks like a way to refer to a particular database as an object and make configuration changes to it.
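For anyone who just needs to change the compatibility level, the supported route is the documented interface rather than the internal call; a minimal sketch (the database name and the levels are only examples):
-- SQL Server 2005: the documented wrapper around that internal call
EXEC sys.sp_dbcmptlevel @dbname = 'MyDatabase', @new_cmptlevel = 90

-- SQL Server 2008 and later: sp_dbcmptlevel is deprecated in favour of
ALTER DATABASE MyDatabase SET COMPATIBILITY_LEVEL = 100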
I have two service programs: mySrvPgm and myErr
mySrvPgm has a procedure which contains:
/free
...
exec sql INSERT INTO TABLE VALUES(:RECORD_FMT);
if sqlError() = *ON;
//handle error
endif;
...
/end-free
myErr has a procedure sqlError:
/free
exec sql GET DIAGNOSTICS CONDITION 1
:state = RETURNED_SQLSTATE;
...
/end-free
Background info: I am using XMLSERVICE to call the given procedure in mySrvPgm from PHP. I am not using a persistent connection. myErr is bound-by-reference via a binding directory used by mySrvPgm. Its activation is set to *IMMED, its activation group is set to *CALLER.
The problem: Assume there is an error on the INSERT statement in mySrvPgm. The first time sqlError() is called it will return SQLSTATE 00000 despite the error. All subsequent calls to sqlError() return the expected SQLSTATE.
A workaround: I added a procedure to myErr called initSQL:
/free
exec sql SET SCHEMA MYLIB;
/end-free
If I call initSQL() before the INSERT statement in mySrvPgm, sqlError() functions correctly. It doesn't have to be SET SCHEMA, it can be another GET DIAGNOSTICS statement. However, if it does not contain an executable SQL statement it does not help.
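In other words, the workaround just forces an SQL statement to run in the service program before the one I actually care about. A rough sketch of the call order, using the names above (this is only an illustration of the workaround, not a guaranteed fix):
/free
  // workaround: run any embedded SQL statement first so the SQL runtime
  // used by myErr is initialized before the INSERT can fail
  initSQL();
  exec sql INSERT INTO TABLE VALUES(:RECORD_FMT);
  if sqlError() = *ON;
    // GET DIAGNOSTICS in sqlError() now returns the INSERT's SQLSTATE
  endif;
/end-free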
The question: I believe the myErr service program is activating properly and has the correct scope, but I am wondering if there is something more I need to do to activate the SQL part of it. Is there some way to set it up so SQL auto-initializes when the service program is activated, or do I have to execute an unneeded SQL statement in order to get it started?
There is some more background information available here.
Thank you for reading.
What version and release of the OS? Are you up to date on PTFs?
Honestly, it seems to me that it's possibly a bug, or the manual(s) need clarification. I'd open a PMR.
In PowerBuilder I am trying to update a table (Oracle) with a blob but get a SQL error, "Database statement must refer to blob variable". My declaration and UPDATEBLOB statements are as follows:
blob lblob_newxml
long llong_subid
UPDATEBLOB RP_XML_FORMS SET XML_DOC = :lblob_newxml
WHERE SUBMISSION_ID = :llong_subid
USING SQLCA;
Does anybody know why this is happening and/or how to solve this problem? Thanks.
To get more information on this problem and the possible causes, I'd run with one of the database traces turned on. (You can check out database trace options in the Connecting to Your Database manual; link may not be appropriate for your PB version, which you haven't mentioned yet.) This may or may not tell you more, but it tracks everything between the app and when the PB drivers pass the commands "over the wall" to the database's driver.
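If it helps, the trace is normally switched on simply by prefixing the DBMS property with the word trace before you connect; a minimal sketch (the "ORA Oracle" value is only an example, keep whatever DBMS value you use today):
// Database Trace: prefix the DBMS value with "trace " before connecting
SQLCA.DBMS = "trace ORA Oracle"
CONNECT USING SQLCA;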
Good luck,
Terry.
"The PowerBuilder VM can get the SQL syntax for the following types of errors, and passes it to the Transaction object’s DBError event for the following types of errors: ..." (see this page).
If your lblob_newxml is null then use this update statement instead:
UPDATE RP_XML_FORMS SET XML_DOC = NULL
WHERE SUBMISSION_ID = :llong_subid
USING SQLCA;
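And if the blob is supposed to carry data, make sure the variable is actually populated before the UPDATEBLOB runs; a minimal sketch (ls_xml and its content are hypothetical):
// hypothetical: build the blob from a string before running UPDATEBLOB,
// since the statement must refer to a non-null blob variable
string ls_xml
ls_xml = '<forms/>'
lblob_newxml = Blob(ls_xml)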
We have some business requirements that call for versioning. We chose to use MarkLogic library services for that. We have an issue testing our code with XRAY and using transactions.
Our test is as follows:
declare function should-save-with-version-when-releasing() {
declare option xdmp:transaction-mode "update";
let $uri := '/some-document-uri.xml'
let $document := fn:doc($uri)
let $pre-release-version := c:get-latest-version($uri)
let $post-release-version := c:get-latest-version($uri)
let $result := mut:release($document) (:this should version up:)
return (assert:not-empty($pre-release-version),
assert:not-empty($result),
assert:not-equal($pre-release-version,$post-release-version),
xdmp:rollback())
};
The test will pass no matter what, and as it turns out ML rollback demolishes all the variables.
How do we test it using transactions?
Any help greatly appreciated,
im
With MarkLogic an entire XQuery update normally acts like a single transaction. When mut:release adds an update to the transaction's stack, the rest of the query will not see that update until after it commits. From the point of view of the query, this normally happens after the entire query finishes, and is not visible to the query.
The docs have something useful to add about what http://docs.marklogic.com/xdmp:rollback does:
When a transaction is rolled back, the current statement immediately terminates, updates made by any statement in the transaction are discarded, and the transaction terminates.
So it isn't that the variables are demolished: it's that your program is over.
I think http://docs.marklogic.com/guide/app-dev/transactions#id_15746 has an example that is fairly close to your use-case: "Example: Multi-statement Transactions and Same-statement Isolation". It demonstrates how to xdmp:eval or xdmp:invoke to update a document and see the results within the same query.
Test it to see that it works, then replace the xdmp:commit with an xdmp:rollback. For me the example still works. Start replacing the rest of the logic with your unit test logic, and you should be on your way.
Does somebody know how to turn console output on in Sybase? The usual statement like print 'Hello' is not working for me; it just says the command executed successfully without printing the message.
Are you using Interactive SQL in Sybase? Or are you invoking dbisqlc with the -nogui option and passing it an SQL file for it to run?
The 'message' method is only for interactive mode.
I'm trying to figure this out as well, but as far as I can tell the console output doesn't seem to work. I tried using a 'select' statement such as:
SELECT "This is my message";
And it seems to work, but is a little too hacky for my tastes.
Please let me know if this works/you figured something better out :)
~Will
It depends on your setup. If you are using SQL Anywhere, PRINT 'Hello' will not be written to the client window if you are connected from an embedded SQL or ODBC application. The printed message will however be visible in the Server Messages in Sybase Central.
In your case, you'll probably need MESSAGE 'Hello' TYPE STATUS TO CLIENT as @toniedzwiedz mentioned.
DECLARE @var1 INT, @var2 INT
SELECT @var1 = 3, @var2 = 5
PRINT 'Variable 1 = %1!, Variable 2 = %2!', @var1, @var2
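For completeness, the SQL Anywhere form mentioned above looks like this; a minimal sketch (the text is just an example):
-- SQL Anywhere: send the text to the connected client instead of the server console
MESSAGE 'Hello' TYPE STATUS TO CLIENT;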
Essentially I have a job which runs in BIDS and as a stand-alone package, but when it runs under the SQL Server Agent it doesn't complete properly (no error messages though).
The job steps are:
1) Delete all rows from table;
2) Use a For Each Loop to fill up the table from Excel spreadsheets;
3) Clean up table.
I've tried this MS page (steps 1 & 2); I didn't see any need to start changing server-side security.
I also tried this page on SQLServerCentral.com, with no resolution.
How can I get error logging or a fix?
Note I've reposted this from Server Fault as it's one of those questions that's not pure admin or programming.
I have logged in as the proxy account I'm running this under, and the job runs stand-alone but complains that the Excel tables are empty.
Here's how I managed tracking "returned state" from an SSIS package called via a SQL Agent job. If we're lucky, some of this may apply to your system.
Job calls a stored procedure
Procedure builds a DTEXEC call (with a dozen or more parameters)
Procedure calls xp_cmdshell, with the call as a parameter (@Command)
SSIS package runs
"local" SSIS variable is initialized to 1
If an error is raised, SSIS "flow" passes to a step that sets that local variable to 0
In a final step, use Expressions to set SSIS property "ForceExecutionResult" to that local variable (1 = Success, 0 = Failure)
The full form of the SSIS call stores the returned value like so:
EXECUTE @ReturnValue = master.dbo.xp_cmdshell @Command
...and then it gets messy, as you can get a host of values returned from SSIS. I logged actions and activity in a DB table while going through the SSIS steps and consult that to try to work things out (which is where @Description below comes from). Here's the relevant code and comments:
-- Evaluate the DTEXEC return code
SET @Message = case
when @ReturnValue = 1 and @Description <> 'SSIS Package' then 'SSIS Package execution was stopped or interrupted before it completed'
when @ReturnValue in (0,1) then '' -- Package success or failure is logged within the package
when @ReturnValue = 3 then 'DTEXEC exit code 3, package interrupted'
when @ReturnValue in (4,5,6) then 'DTEXEC exit code ' + cast(@ReturnValue as varchar(10)) + ', package could not be run'
else 'DTEXEC exit code ' + isnull(cast(@ReturnValue as varchar(10)), '<NULL>') + ' is an unknown and unanticipated value'
end
-- Oddball case: if the cmd.exe process is killed, the return value is 1, but the process will continue anyway
-- and could finish 100% successfully... and @ReturnValue will equal 1. If you can figure out how,
-- write a check for this in here.
That last references the "what if, while SSIS is running, some admin joker kills the CMD session (from, say, taskmanager) because the process is running too long" situation. We've never had it happen--that I know of--but they were uber-paranoid when I was writing this so I had to look into it...
Why not use the logging built into SSIS? We send our logs to a database table and then parse them out to another table in a more user-friendly format, and we can see every step of every package that was run. And every error.
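If the packages use the SQL Server log provider, the raw events are easy to pull back out; a hedged sketch (the table is dbo.sysdtslog90 on SQL Server 2005 and dbo.sysssislog on 2008 and later, so adjust the name for your version):
-- Pull the error events the SSIS log provider wrote for recent runs
SELECT source, event, starttime, message
FROM dbo.sysssislog
WHERE event IN ('OnError', 'OnTaskFailed')
ORDER BY starttime DESC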
I did fix this eventually, thanks for the suggestions.
Basically I logged into Windows with the proxy user account I was running and started to see errors like:
"The For each file enumerator is empty"
I copied the project files across and started testing; it turned out that I'd still left a file path (N:/) in the properties of the For Each Loop box, although I'd changed the connection properties. It's easier once you've got error conditions to work with. I also had to recreate the variable mapping.
No wonder people just recreate the whole package.
Now fixed and working!