AutoIT Check If Process is running on Remote PC

I currently have some AutoIT code that will terminate a process on a remote machine, but I need to find a way to add a check to see if the process is running first. After spending some time sifting through the AutoIT forums and Google, I'm at a loss. Here is what I currently have:
Func EndProc()
    Local $oWMIService = ObjGet("winmgmts:\\" & $ipAddress & "\root\CIMV2")
    If Not IsObj($oWMIService) Then
        MsgBox(48, "ERROR", "Couldn't locate the computer. Please make sure you've selected the correct computer and try again.")
        Return 0
    EndIf
    Local $cProc = $oWMIService.ExecQuery('SELECT * FROM Win32_Process WHERE Name = "' & $ProcessToKill & '"')
    If $cProc.Count = 0 Then Return 0 ; process is not running on the remote machine
    Local $terminated = 0
    For $oProc In $cProc
        $oProc.Terminate()
        $terminated += 1
    Next
    Return $terminated ; number of processes terminated
EndFunc ; EndProc

You may want to check out the examples here; there are a number of different ways to use WMI via AutoIT to retrieve the list of processes running remotely and filter on the ones you care about.
Alternatively, calling PSList through AutoIT could prove useful as well.
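As a rough illustration of the check-before-kill idea (in Python rather than AutoIt, and with a hypothetical host name), one can shell out to Windows' tasklist command, which takes a /S switch for a remote system and a /FI filter on the image name:

```python
import csv
import io
import subprocess

def build_tasklist_cmd(host, process_name):
    """Build the tasklist invocation that lists matching processes on a remote host."""
    return ["tasklist", "/S", host, "/FI", f"IMAGENAME eq {process_name}",
            "/FO", "CSV", "/NH"]

def found_in_tasklist_csv(output, process_name):
    """Return True if the CSV output of tasklist contains the process.

    When nothing matches, tasklist prints an 'INFO: No tasks ...' message,
    which never parses as a row whose first column is the image name.
    """
    for row in csv.reader(io.StringIO(output)):
        if row and row[0].lower() == process_name.lower():
            return True
    return False

def is_process_running(host, process_name):
    """Check whether process_name is running on the remote host."""
    out = subprocess.run(build_tasklist_cmd(host, process_name),
                         capture_output=True, text=True).stdout
    return found_in_tasklist_csv(out, process_name)
```

With a check like this in place, the termination code above only needs to run when the check returns True.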

Related

How to handle errors from reqMktData calls

Are there any examples on the net of how to process errors when downloading data from Interactive Brokers using the IBrokers package? I've had a look at the package details, and eWrapper and twsCALLBACK seem to handle this, but I can't get them to work. For example, the code below produces an error and R hangs; the error message isn't processed. Thanks for any suggestions.
contract <- twsContract(0,
                        symbol = "SPI",
                        sectype = "XXX", # bad sectype
                        exch = "SNFE",
                        primary = "",
                        expiry = "20181220",
                        strike = "",
                        currency = "AUD",
                        right = "",
                        local = "",
                        multiplier = "25",
                        combo_legs_desc = "",
                        comboleg = "",
                        include_expired = "",
                        secIdType = "",
                        secId = "")
tws <- twsConnect()
data <- reqMktData(tws,contract,snapshot = TRUE)
You should append a disconnect command to your code. Otherwise your program tries to open two connections on the same port, which is not possible, and it will not terminate.
I don't know the IBrokers package very well, so please look up the command for disconnecting and append it to your code. Then refresh your command line and rerun your code.
In addition, connect to IB Gateway instead of TWS by using its port number (check the API settings of your IB Gateway application). In the settings, choose a detailed log.
Run your code again (after changing the port number) and send your log file; then I will try to help more. It's hard to help without any error message.

SQL Server Multiple DDLs Ignoring Order And in a Single Transaction

I'm trying to run multiple DDLs (around 90) on an SQL Server.
The DDLs don't contain any changes to tables, only views, stored procedures, and functions. The DDLs might have inter-dependencies between them: one stored procedure that calls another, for example.
I don't want to start organizing the files in the correct order, because it would take too long, and I want the entire operation to fail if any one of the scripts has an error.
How can I achieve this?
My idea so far: start a transaction, tell SQL Server to ignore errors (which I don't know how to do), run all the scripts once, tell SQL Server to start raising errors again, run all the scripts a second time, and then commit if everything succeeds.
Is this a good idea?
How do I CREATE / ALTER a stored procedure or view even though it has errors?
To clarify and address some concerns...
This is not intended for production. I just don't want to leave the DB I'm testing on broken.
What I would like to achieve is this: run a big group of scripts on the server without taking the time to order them, but if any of the scripts has an error in it, roll back the entire operation.
I don't care about isolation, I only want the operation to happen as a single transaction.
Organize the files in the correct order, test the procedure on a test environment, have a validation and acceptance test, then run it in production.
While running DDL in a transaction may seem possible, in practice it is not. There are many DDL statements that don't mix well with transactions. You must take the application offline, take a database backup (or create a snapshot) before the schema changes, run the tested and verified upgrade procedure (your scripts), validate the result with acceptance tests, and then bring the application back online. If something fails, revert to the backup created initially (with all the implications vis-a-vis any downstream log consumer like replication, log shipping or mirroring).
This is the correct way, and as far as I'm concerned the only way. I know you'll find plenty of advice on how to do this the wrong way.
We actually do something like this to deploy our database scripts to production, in an application that connects to our databases. To add to the complication, we also have 600 databases that should have the same schema but don't really. Here's our approach:
1. Merge all our scripts into one big file, injecting GOs in between every single file. This makes it look like there's one very long script. We do a simple ordering based on what the coders requested.
2. Split everything into "go blocks". Since GO isn't legal SQL, we split the script into multiple blocks that get executed one at a time.
3. Open a database connection.
4. Start a transaction.
5. For each go block:
   - Make sure the transaction is still active. (This is VERY important. I'll explain why in a bit.)
   - Run the code, recording the errors.
6. If there were any errors, roll back. Otherwise, commit.
In our multi-database setup, we do this whole thing twice: run through every database once, "testing" the code to make sure there are no errors on any database, and then go back and run it again "for real".
Now, on to why you need to make sure the transaction is still active: there are some commands that will roll back your transaction on error! Imagine our surprise the first time we found this out... Everything before the error was rolled back, but everything after was committed. If there is an error, however, nothing in that same block gets committed, so it's all good.
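The "split everything into go blocks" step is plain string handling. A minimal sketch (in Python, not the answerer's VB wrapper) that treats any line consisting only of GO as a batch separator:

```python
def split_go_blocks(script):
    """Split a T-SQL script into batches on lines that contain only GO.

    GO is a client-side batch separator, not T-SQL, so each block must be
    sent to the server as its own command.
    """
    blocks, current = [], []
    for line in script.splitlines():
        if line.strip().upper() == "GO":
            if current:
                blocks.append("\n".join(current))
            current = []
        else:
            current.append(line)
    if current:
        blocks.append("\n".join(current))  # script need not end with GO
    return blocks
```

Each returned block is then executed one at a time inside the open transaction, as in the steps above.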
Below is the core of our execution code. We use a wrapper around SqlClient, but it should look very similar to plain SqlClient.
Dim T = New DBTransaction(client)
For Each block In scriptBlocks
    If Not T.RestartIfNecessary Then
        exceptionCount += 1
        Log("Could not (re)start the transaction for {0}. Not executing the rest of the script.", scriptName)
        Exit For
    End If
    Debug.Assert(T.IsInTransaction)
    Try
        client.Text = block
        client.ExecNonQuery()
    Catch ex As Exception
        exceptionCount += 1
        Log(ex.Message + " on {0} executing: '{1}'", client.Connection.Database, block.Replace(vbNewLine, ""))
    End Try
Next
If exceptionCount > 0 Then Log("There were {0} exceptions while executing {1}.", exceptionCount, scriptName)
If testing OrElse exceptionCount > 0 Then
    Try
        T.Rollback()
        Log("Rolled back all changes for {0} on {1}.", scriptName, client.Connection.Database)
    Catch ex As Exception
        Log("Could not roll back {0} on {1}: {2}", scriptName, client.Connection.Database, ex.Message)
        If Debugger.IsAttached Then
            Debugger.Break()
        End If
    End Try
Else
    T.Commit()
    Log("Successfully committed all changes for {0} on {1}.", scriptName, client.Connection.Database)
End If
Return exceptionCount

Class DBTransaction
    Private _tName As String
    Public ReadOnly Property name() As String
        Get
            Return _tName
        End Get
    End Property
    Private _client As OB.Core2.DB.Client

    Public Sub New(client As OB.Core2.DB.Client, Optional name As String = Nothing)
        If name Is Nothing Then
            name = "T" & Guid.NewGuid.ToString.Replace("-", "").Substring(0, 30)
        End If
        _tName = name
        _client = client
    End Sub

    Public Function Begin() As Boolean
        Return RestartIfNecessary()
    End Function

    Public Function RestartIfNecessary() As Boolean
        Try
            _client.Text = "IF NOT EXISTS (Select transaction_id From sys.dm_tran_active_transactions where name = '" & name & "') BEGIN BEGIN TRANSACTION " & name & " END"
            _client.ExecNonQuery()
            Return IsInTransaction()
        Catch ex As Exception
            Return False
        End Try
    End Function

    Public Function IsInTransaction() As Boolean
        _client.Text = "Select transaction_id From sys.dm_tran_active_transactions where name = '" & name & "'"
        Dim scalar As String = _client.ExecScalar
        Return scalar <> ""
    End Function

    Public Sub Rollback()
        _client.Text = "ROLLBACK TRANSACTION " & name
        _client.ExecNonQuery()
    End Sub

    Public Sub Commit()
        _client.Text = "COMMIT TRANSACTION " & name
        _client.ExecNonQuery()
    End Sub
End Class
You have a good answer; here is a "hack" answer, for the case of "you cannot do this, but if you want it very much, then go on". I'm quite confident that you will not achieve what you are thinking of, therefore:
DO A FULL BACKUP!
Assuming there are no COMMIT or GO statements (explicit or implicit!) in any of these files, the only thing you need to do is run them in a single transaction: combine them into one file, wrap it in a transaction, and run it.
How to combine 90 files into 1 file:
If sorting by name brings them into the right order, run this from the folder with the files in a command prompt:
FOR /F "tokens=1" %G IN ('dir /b /-d /o:n *.sql') DO (
type %G >> Big_SQL_Script.sql && echo. >> Big_SQL_Script.sql
)
If the order is random, then create a list of files with dir /b /-d *.sql > File_Name_List.txt and order it manually. Then run:
FOR /F "tokens=1" %G IN (File_Name_List.txt) DO (
type %G >> Big_SQL_Script.sql && echo. >> Big_SQL_Script.sql
)
This way you can concatenate 90 files in automated order. Run and see what happens.
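If a command prompt isn't handy, the same concatenation can be scripted portably. A Python sketch (the folder path and output file name are placeholders):

```python
from pathlib import Path

def combine_sql_files(folder, out_name="Big_SQL_Script.sql"):
    """Concatenate every .sql file in the folder, in name order, into one
    big script, separated by newlines (like the echo. in the batch loop)."""
    folder = Path(folder)
    parts = [f.read_text() for f in sorted(folder.glob("*.sql"))
             if f.name != out_name]  # don't re-read our own output
    out = folder / out_name
    out.write_text("\n".join(parts) + "\n")
    return out
```

Sorting by file name here mirrors the dir /o:n ordering; for a manual order, replace the sorted glob with an explicit list of names.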
Good luck!

Transaction inside of code

I'm having an issue which I'm really not sure how to resolve, and I want to know what the best approach is to achieve this task.
We are developing an application in VB.NET 2.0 against SQL Server 2005. Users are allowed to cancel a reception based on a purchase, which may contain many received goods. During the cancellation process, the user is asked questions such as "Do you want to cancel good #1?". If yes, delete it. Then "Do you want to cancel good #2?"; if no, do not delete it; and one other question (if a received item has been issued, a process must be performed manually by the user). At the end, if all goods were successfully cancelled, we have to cancel the reception itself. But sometimes, if an error occurs or certain conditions arise while the user is being asked, we want to undo every action made from the beginning and restore the original state. So I thought about transactions.
I know there are SQL transactions, and I know well enough how to use them, but I can't really use them here, as the user must perform actions which may cancel the transaction mid-way.
I also remembered TransactionScope from .NET 2.x and later, which can achieve something similar, and I know how to use it as well. The problem comes with TransactionScope and MSDTC. When using it, we still get an error which says:
Network access for Distributed Transaction Manager (MSDTC) has been disabled. Please enable DTC for network access in the security configuration for MSDTC using the Component Services Administrative tool.
I've tried what is described in another Stack Overflow post and it works great... until users restart their computer. EVERY time users restart their computer, they must set the value back. Plus, by default, no computer has this value set to On; on a base of 10 computers, none were activated. This program is installed on something like 300 computers, so that is surely not the right thing to consider either.
So does anyone have an idea how I can achieve this? Is there any other way of doing transactions in code that I can use?
NOTE 1: I know some would say: first ask the user all the questions and keep the answers in memory; once done, if everything went well, do the deletes. But what if an error occurs when deleting, let's say, good #4? And how can I pass a dynamic list of goods to be deleted to a stored procedure?
NOTE 2: Sorry for my English; I usually speak French.
NOTE 3: Any example in C# is fine too, as I know both VB and C#.
Assuming you already have a stored procedure similar to this to manage cancellation:
create proc CancelGood (@goodID int)
as
    SET NOCOUNT ON
    SET XACT_ABORT ON
    begin transaction
        update table1 set canceled = 1
        where GoodID = @goodID
        update table2 set on_stock = on_stock + 1
        where GoodID = @goodID
    commit transaction
The VB code adds a string to a canceledGoods list if the user selects 'Oui'. I'm not familiar with VB.NET; in C# it would look like:
canceledGoods.Add(string.Format("exec dbo.CancelGood {0}", goodID));
Then, if there is at least one string in canceledGoods, build and execute the batch:
batch = "BEGIN TRANSACTION" + Environment.NewLine +
        "BEGIN TRY" + Environment.NewLine +
        string.Join (Environment.NewLine, canceledGoods.ToArray()) + Environment.NewLine +
        "END TRY" + Environment.NewLine +
        "BEGIN CATCH" + Environment.NewLine +
        "  -- CODE TO CALL IF THERE WAS AN ERROR" + Environment.NewLine +
        "  ROLLBACK TRANSACTION" + Environment.NewLine +
        "  RETURN" + Environment.NewLine +
        "END CATCH" + Environment.NewLine +
        "-- CODE TO CALL AFTER SUCCESSFUL CANCELATION OF ALL GOODS" + Environment.NewLine +
        "COMMIT TRANSACTION";
conn.ExecuteNonQuery (batch);
Note the Environment.NewLine separators: if the pieces are joined with plain spaces, the -- comments would comment out every statement that follows them on the same line.

qt mysql query giving different result on different machine

The following code works on my PC but gives an error on other PCs. How can I run this successfully on all machines?
QSqlQuery query;
QString queryString = "SELECT * FROM " + parameter3->toAscii() + " WHERE " + parameter1->toAscii() + " = \"" + parameter2->toAscii() + "\"";
bool retX = query.exec(queryString);
What prerequisites should be fulfilled for this to run on any PC?
In troubleshooting, if you isolate your query and it returns the result you anticipated (as you have done, using Qt Creator to verify the query returns true), the next step is to take a close look at your code and verify that you are passing the proper parameters into the query for execution.
I keep a virgin machine for this purpose. I am a software engineer by trade, and I am fully aware that I have a ton of software installed on my PC which the common user may not (or will not) have installed. The virgin machine allows me to test the code in stand-alone form.
I suggest showing a message box with the query just before executing it. This will verify the query is correct on the "other machines".
Certain DLLs were needed, in my case QtGuid4.dll, QtCored4.dll and QtSqld4.dll. There was a size difference; once matched, it worked on one PC. However, on other PCs I still get the error "The application failed to initialize 0xc000007b ...".
How is it possible to make the application run?
Brgds,
kNish

File.Copy FileNotFoundException reported randomly when it's never true

The code is very simple.
If File.Exists(strFileMovingTo) Then File.Delete(strFileMovingTo)
If File.Exists(strFileMovingTo) Then
    Call SendEmail(Globals.EmailInternetTeam, "dev-sql@fad.co.uk", "Display Jpg Problem", "The file " & strFileMovingTo & " cannot be removed by the file mover (to allow a new file to be moved over)")
    Return False
Else
    If File.Exists(strFileMovingFrom) Then
        File.Copy(strFileMovingFrom, strFileMovingTo, True)
        If File.Exists(strFileMovingTo) = False Then
            ' tried to copy file over but must have failed ... send email
            Call SendEmail(Globals.EmailInternetTeam, "dev-sql@friday-ad.co.uk", "Display Jpg Problem", "The file cannot be moved by the file mover from " & strFileMovingFrom & " to " & strFileMovingTo & ". Please have a look at why.")
            Return False
        Else
            Return True
        End If
    End If
    Return False
    ' make sure this file exists on fad dev
End If
However, a FileNotFoundException is thrown during File.Copy even though it's wrapped in an If File.Exists ... End If check for its existence.
The great thing is, if you run this through the debugger it nearly always works; when released as an app it almost never works.
Scarily, the file always exists.
Anyone know what's going on?
There's probably something else deleting the file and there's a race condition between the call to File.Exists and File.Copy.
I agree with Dave's answer that this looks like a timing issue. Also, if a file can't be deleted for any reason then usually File.Delete will throw an exception. Perhaps you should be catching that instead and reworking your logic.
There are many race conditions here; you shouldn't blindly rely on File.Exists before other file operations. Any other process can delete or add a file with the same name between the two calls.
If File.Exists(strFileMovingFrom) Then
// AT THIS TIME, another thread or another process might run
// the equivalent to **File.Delete(strFileMovingFrom)**
File.Copy(strFileMovingFrom, strFileMovingTo, True) //Can throw!
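The usual cure is to attempt the operation and handle the failure, rather than check first. A small Python sketch of that pattern (the retry count and delay are arbitrary illustrative choices):

```python
import shutil
import time

def copy_with_retry(src, dst, attempts=3, delay=0.5):
    """Attempt the copy and handle failure, instead of a racy
    exists-then-copy check."""
    for i in range(attempts):
        try:
            shutil.copy2(src, dst)
            return True
        except FileNotFoundError:
            # The source vanished between our decision to copy and the
            # copy itself (or was never there); wait briefly and retry.
            if i < attempts - 1:
                time.sleep(delay)
    return False
```

The same attempt-and-handle shape applies in VB.NET with Try/Catch around File.Copy catching FileNotFoundException.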
The fact that it works in debug tells me it's a timing problem. You're not waiting long enough for the deletes or other file system changes to happen.
Build in a wait of one to two seconds after making file system changes.
UPDATE:
How about this: Create a shared dictionary of file moves you want to perform and use a FileSystemWatcher to carry out the copy action.
If File.Exists(strFileMovingTo) Then File.Delete(strFileMovingTo)
Thread.Sleep(1000) ' add wait
If File.Exists(strFileMovingTo) Then
    Call SendEmail(Globals.EmailInternetTeam, "dev-sql@fad.co.uk", "Display Jpg Problem", "The file " & strFileMovingTo & " cannot be removed by the file mover (to allow a new file to be moved over)")
    Return False
Else
    If File.Exists(strFileMovingFrom) Then
        File.Copy(strFileMovingFrom, strFileMovingTo, True)
        Thread.Sleep(1000) ' add wait
        If File.Exists(strFileMovingTo) = False Then
            ' tried to copy file over but must have failed ... send email
            Call SendEmail(Globals.EmailInternetTeam, "dev-sql@friday-ad.co.uk", "Display Jpg Problem", "The file cannot be moved by the file mover from " & strFileMovingFrom & " to " & strFileMovingTo & ". Please have a look at why.")
            Return False
        Else
            Return True
        End If
    End If
    Return False
    ' make sure this file exists on fad dev
End If
When working with the file functions of the Windows API (which should also hold for .NET), one should always be aware of the asynchronous nature of file system functions. Asynchronous means that there is a non-zero, unpredictable, non-guaranteed time between a call to an API affecting the file system and the next successful call to the same API related to that file or directory.
With non-transactional APIs it is a common mistake to call something like "create file" and then immediately try to "findFirst" and fail. Just treat the file system as a messaging system with unpredictable delays, and develop a sort of "protocol" with repetitive polling, sleeps and timeouts, or event notifications and callbacks.
However, since the introduction of Vista there is a different set of guarantees and expectations, as applications can use the so-called "transactional" file APIs.
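That "repetitive polling with sleeps and timeouts" protocol can be sketched generically. A minimal Python illustration (the timeout and interval values are arbitrary):

```python
import os
import time

def wait_until(condition, timeout=5.0, interval=0.1):
    """Poll a zero-argument condition until it returns True or the timeout
    expires. Treats the file system as a messaging system with delays:
    after a change, poll for the expected state instead of assuming it."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if condition():
            return True
        time.sleep(interval)
    return bool(condition())  # one final check at the deadline

# e.g. after creating a file, wait for it to become visible:
# wait_until(lambda: os.path.exists(path), timeout=2.0)
```

A fixed Thread.Sleep(1000) is the crudest form of this; polling with a deadline both reacts faster when the change is already visible and fails explicitly when it never becomes visible.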