The code is very simple:

If File.Exists(strFileMovingTo) Then File.Delete(strFileMovingTo)

If File.Exists(strFileMovingTo) Then
    Call SendEmail(Globals.EmailInternetTeam, "dev-sql#fad.co.uk", "Display Jpg Problem", "The file " & strFileMovingTo & " cannot be removed by the file mover (to allow a new file to be moved over)")
    Return False
Else
    If File.Exists(strFileMovingFrom) Then
        File.Copy(strFileMovingFrom, strFileMovingTo, True)
        If File.Exists(strFileMovingTo) = False Then
            'tried to copy the file over but it must have failed ... send email
            Call SendEmail(Globals.EmailInternetTeam, "dev-sql#friday-ad.co.uk", "Display Jpg Problem", "The file cannot be moved by the file mover from " & strFileMovingFrom & " to " & strFileMovingTo & ". Please have a look at why.")
            Return False
        Else
            Return True
        End If
    End If
    Return False
    'make sure this file exists on fad dev
End If
However, a FileNotFoundException is thrown during File.Copy even though it's wrapped in an If File.Exists ... End If check on the file's existence.
The great thing is that if you run this through the debugger it nearly always works; when released as an app it almost never works.
Scarily the file always exists.
Anyone know what's going on?
There's probably something else deleting the file and there's a race condition between the call to File.Exists and File.Copy.
I agree with Dave's answer that this looks like a timing issue. Also, if a file can't be deleted for any reason then usually File.Delete will throw an exception. Perhaps you should be catching that instead and reworking your logic.
There are many race conditions here; you shouldn't blindly rely on File.Exists before other file operations. Any other process can delete or add a file with the same name between two function calls.
If File.Exists(strFileMovingFrom) Then
    ' AT THIS POINT, another thread or another process might run
    ' the equivalent of File.Delete(strFileMovingFrom)
    File.Copy(strFileMovingFrom, strFileMovingTo, True) ' Can throw!
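Given that, the safer pattern is to skip the pre-checks entirely: just attempt the copy and catch the exception if it fails. A minimal sketch of the copy rewritten that way (it reuses SendEmail, Globals.EmailInternetTeam and the two path variables from the code above; FileNotFoundException derives from IOException, so one handler covers both):

Try
    ' Copy unconditionally; the third argument already allows overwriting
    File.Copy(strFileMovingFrom, strFileMovingTo, True)
    Return True
Catch ex As IOException
    ' Source vanished, target locked, etc. - report the failure instead of pre-checking
    Call SendEmail(Globals.EmailInternetTeam, "dev-sql#fad.co.uk", "Display Jpg Problem", _
                   "Copy from " & strFileMovingFrom & " to " & strFileMovingTo & " failed: " & ex.Message)
    Return False
End Try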
The fact that it works in debug tells me it's a timing problem. You're not waiting long enough for the deletes or other file system changes to happen.
Build in a wait of one to two seconds after making file system changes.
UPDATE:
How about this: create a shared dictionary of the file moves you want to perform and use a FileSystemWatcher to carry out the copy action (a rough sketch follows the code below).
If File.Exists(strFileMovingTo) Then File.Delete(strFileMovingTo)
Thread.Sleep(1000) ' Add wait

If File.Exists(strFileMovingTo) Then
    Call SendEmail(Globals.EmailInternetTeam, "dev-sql#fad.co.uk", "Display Jpg Problem", "The file " & strFileMovingTo & " cannot be removed by the file mover (to allow a new file to be moved over)")
    Return False
Else
    If File.Exists(strFileMovingFrom) Then
        File.Copy(strFileMovingFrom, strFileMovingTo, True)
        Thread.Sleep(1000) ' Add wait
        If File.Exists(strFileMovingTo) = False Then
            'tried to copy the file over but it must have failed ... send email
            Call SendEmail(Globals.EmailInternetTeam, "dev-sql#friday-ad.co.uk", "Display Jpg Problem", "The file cannot be moved by the file mover from " & strFileMovingFrom & " to " & strFileMovingTo & ". Please have a look at why.")
            Return False
        Else
            Return True
        End If
    End If
    Return False
    'make sure this file exists on fad dev
End If
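And a rough sketch of the FileSystemWatcher idea from the update above (the watched folder, the dictionary and its locking are all made up for the example; note the Created event can fire before the writer has finished, so the copy may still need a retry):

Imports System.IO
Imports System.Collections.Generic

' Maps a newly created source file to the destination it should be copied to
Private Shared ReadOnly PendingMoves As New Dictionary(Of String, String)

Private Shared Sub StartWatcher()
    Dim watcher As New FileSystemWatcher("C:\incoming")
    AddHandler watcher.Created, AddressOf OnFileCreated
    watcher.EnableRaisingEvents = True
End Sub

Private Shared Sub OnFileCreated(sender As Object, e As FileSystemEventArgs)
    Dim destination As String = Nothing
    SyncLock PendingMoves
        If Not PendingMoves.TryGetValue(e.FullPath, destination) Then Return
        PendingMoves.Remove(e.FullPath)
    End SyncLock
    File.Copy(e.FullPath, destination, True)
End Sub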
When working with some of the file functions of the Windows API (which should also be true for .NET), one should always be aware of the asynchronous nature of file system functions. Asynchronous means that there is a non-zero, unpredictable, non-guaranteed time between your call to an API affecting the file system and the next successful call to the same API relating to that file or directory.
In non-transactional APIs it is a common mistake to call something like "create file" and then immediately try to "findFirst" and fail. Just treat the file system as a messaging system with unpredictable delays and develop a sort of "protocol" with repetitive polling, sleeps and timeouts, or event notifications and callbacks.
However, since the introduction of Vista there is a different set of guarantees and expectations, as applications can use the so-called "transactional" file API.
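In that spirit, here is a small retry helper as a sketch (the attempt count and delay are arbitrary; only IOException is retried):

' Retries an action a few times to ride out transient file system races
Private Shared Function TryWithRetries(action As Action, Optional attempts As Integer = 5) As Boolean
    For attempt As Integer = 1 To attempts
        Try
            action()
            Return True
        Catch ex As IO.IOException
            Threading.Thread.Sleep(200) ' give the file system time to settle
        End Try
    Next
    Return False
End Function

' Usage:
' If Not TryWithRetries(Sub() IO.File.Copy(strFileMovingFrom, strFileMovingTo, True)) Then Return False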
I've spent three days looking for an answer so I hope you'll bear with me if this has already been addressed and I've been mighty unlucky finding a solution.
I'm using Fortran (eugh!) but this is a generic MPI query.
Scenario (simplified for this example):
Processes 0 and 1 communicate with process 2 (but not with each other)
0 & 1 do lots of sends/receives
2 does lots of receives/process/sends (but each pair is done twice so as to pick up both 0 & 1)
0 & 1 will eventually stop - I know not when! - so I do an MPI_Send from each when appropriate using the rank of the 3rd process (filter_rank_id=2) and a special tag (c_tag_open_rcv=200), with a logical TRUE in the buffer (end_of_run). Like this:
CALL MPI_SEND(end_of_run, 1, MPI_LOGICAL, filter_rank_id, c_tag_open_rcv, mpi_coupling_comms, mpi_err)
The problem arises in process 2... it's busy doing its MPI_Recv/MPI_Send pairs and I cannot break out of it. I have set up a non-blocking receive for each of the other two processes and stored the request handles:
DO model_rank_id = 0, 1
    !Set up a non-blocking receive to get notification of end of model run for each model
    end_run = end_model_runs(model_rank_id) !this is an array of booleans initialised to FALSE
    CALL MPI_IRECV(end_run, 1, MPI_LOGICAL, model_rank_id, &
                   c_tag_open_rcv, coupling_comms, mpi_request_handle, mpi_err)
    !store the handle in an array
    request_handles(model_rank_id) = mpi_request_handle
END DO
where model_rank_id is the process number in the MPI communicator i.e. 0 or 1.
Later on, busy doing all those receive/send pairs, I always check whether anything's arrived in the buffer:
DO model_rank_id = 0, 1
    IF (end_model_runs(model_rank_id) .EQV. .FALSE.) THEN
        CALL MPI_TEST(request_handles(model_rank_id), run_complete, mpi_status, mpi_err)
        IF (run_complete .eqv. .FALSE.) THEN
            !do stuff... receive/process/send
        ELSE
            !run is complete
            !___________removed this as I realised it was incorrect__________
            !get the stop flag for the specific process
            CALL MPI_RECV(end_run, 1, MPI_LOGICAL, model_rank_id, &
                          c_tag_open_rcv, coupling_comms, mpi_err)
            !____________end_________________________________________________
            !store the stop flag so I can do a logical 'AND' on it and break out when
            !both processes have sent their message
            end_model_runs(model_rank_id) = end_run
        END IF
    END IF
END DO
Note that this snippet is contained in a loop which carries on until all the stop flags are TRUE.
I know it's fairly complex, but this can't be that hard, can it? If anyone can see the error that'd be fantastic, or even suggest a better way to do it.
Huge thanks in advance.
Your program is probably stuck in the MPI_RECV call. The reason is that a positive completion flag returned by MPI_TEST means that MPI_IRECV has already received the message. Unless the sender sends another message with the same tag, MPI_RECV will simply block and wait, in your case probably indefinitely. Apart from that, you are issuing two MPI_IRECV calls with the same receive buffer, which is probably not what you really want: end_run = end_model_runs(model_rank_id) does not copy the address of the array element into end_run, but rather its value.
Your code should look like this:
DO model_rank_id = 0, 1
    !Set up a non-blocking receive to get notification of end of model run for each model
    CALL MPI_IRECV(end_model_runs(model_rank_id), 1, MPI_LOGICAL, model_rank_id, &
                   c_tag_open_rcv, coupling_comms, request_handle, ierr)
    !store the handle in an array
    request_handles(model_rank_id) = request_handle
END DO
...
DO model_rank_id = 0, 1
    IF (end_model_runs(model_rank_id) .EQV. .FALSE.) THEN
        CALL MPI_TEST(request_handles(model_rank_id), run_complete, status, ierr)
        IF (run_complete .eqv. .FALSE.) THEN
            !do stuff... receive/process/send
        ELSE
            !run is complete
            !the stop flag is ALREADY in end_model_runs(model_rank_id)
            !do a logical 'AND' on it and break out when both flags are TRUE
        END IF
    END IF
END DO
As a side note, using your own identifiers that start with mpi_ is a terrible idea since those might clash with symbols provided by the MPI library. You should really treat mpi_ as a reserved prefix and never use it while naming your own variables, subroutines, etc. I've fixed that for you in the code above.
I solved this eventually after a lot of experimentation; it was actually quite simple (isn't it always?).
The problem was due to the fact that processes 0 & 1 could end and post their "I'm finished" messages OK, but process 2 was in such a tight loop doing the test and recv/send pair (the outer loops on both sets of sends/recvs were omitted for clarity in the original post) that the test would fail and the process would stick in the blocking MPI_RECV.
First I tried a sleep(3), which made it work, but it couldn't sleep on every loop without really bad effects on performance. Then I tried an MPI_IPROBE but hit the same problem as the test. In the end, a timeout around the MPI_IPROBE did the trick, thus:
DO iter1 = 1, num_models
    !Test each model in turn and ensure we do the comms until it has finished
    IF (end_model_runs(iter1) .EQV. .FALSE.) THEN
        model_rank_id = models(iter1)
        now = TIME()
        DO WHILE (TIME() .LT. now + timeout)
            msg_flag = .FALSE.
            CALL MPI_IPROBE(model_rank_id, c_tag, coupling_comms, &
                            msg_flag, empi_status, empi_err)
            IF (msg_flag .EQV. .TRUE.) THEN
                !Message waiting
                EXIT
            END IF
        END DO
        IF (msg_flag .EQV. .TRUE.) THEN
            CALL MPI_RECV(state_vector, num_state_params, MPI_DOUBLE_PRECISION, &
                          model_rank_id, c_tag, coupling_comms, empi_status, empi_err)
        ELSE !no message waiting, flag should be False, i.e. the run *has* finished
            end_model_runs(iter1) = .NOT. msg_flag
        END IF
    END IF
END DO
All of this sits inside a loop which breaks once every member of end_model_runs is TRUE.
I hope this helps someone else - and saves them three days of effort!
My application is crashing really, really hard, and it appears to be related to the database. The application deals with lots and lots of data, and hundreds of simultaneous users. In an effort to speed up data loads, I am loading some records like this:
def load(filename)
  rc = Publication.connection.raw_connection
  rc.exec("COPY invoice_line_items FROM STDIN WITH CSV HEADER")
  # open up your CSV file looping through line by line and getting the line into a format suitable for pg's COPY...
  error = false
  begin
    CSV.foreach(filename) do |line|
      until rc.put_copy_data(line.to_csv)
        ErrorPrinter.print " waiting for connection to be writable..."
        sleep 0.1
      end
    end
  rescue SystemCallError => err # note: `rescue Errno` never matches; SystemCallError is the actual parent class
    User.inform_admin(false, User.me, "Line Item import failed with #{err.class.name} the following error: #{err.message}", err.backtrace)
    error = true
  else
    rc.put_copy_end
    while res = rc.get_result
      if res.result_status != 1
        User.inform_admin(false, User.me, "Line Item import result of COPY was: %s" % [res.res_status(res.result_status)], "")
        error = true
      end
    end
  end
end
I also have Sidekiq running with about 90 threads. Does this method of loading put an exclusive lock on that table? Is it possible that these jobs are running into each other? If they are, am I better off just doing inserts?
COPY takes the same level of lock as INSERT. (It's missing from the explicit locking chapter, but visible in the source code). So whatever's giving you trouble, it's probably not that.
You should be looking at pg_locks and pg_stat_activity to see if anything's stuck on a lock. More info on other questions on SO or DBA.SE, the manual, and the PostgreSQL wiki.
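For example, something along these lines (written against the PostgreSQL 9.2+ catalog layout, so column names may differ on older versions) lists the sessions waiting on a lock together with what they are running:

-- sessions currently waiting on a lock, with the query they are executing
SELECT l.pid, l.locktype, l.mode, a.query
FROM pg_locks l
JOIN pg_stat_activity a ON a.pid = l.pid
WHERE NOT l.granted;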
In Visual Basic I create an Application Object and start it:
gApp = New CANoe.Application
gMeasurement = gApp.Measurement
gApp.Open(arrArgs(0), False, False)
gMeasurement.Start()
Once the application finishes processing the data, two possible scenarios may happen: (i) the data file was corrupt and (in normal circumstances) an alert window is raised, or (ii) the data file was OK. In case (ii) I can quit the application with gApp.Quit(). However, in case (i) gApp.Quit() does not work, since the program expects input from the user (although often I do not see the window at all).
Question 1: how can I quit the process corresponding to gApp? Currently I am quitting it this way:
For Each p As Process In Process.GetProcesses
    If p.ProcessName = "CANoe32" Then
        p.Kill()
    End If
Next
In general this is a bad solution since more instances of CANoe32 may run (although in this particular case only one process of this binary may run on the system).
Question 2: what would be a more elegant solution to quit gApp in case it has child windows?
Any comments are very helpful.
A possible solution to the problem would be to use something similar to this ticket:
how-do-i-get-the-process-id-from-a-created-excel-application-object
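In the same spirit, a hedged VB.NET sketch of that idea: snapshot the CANoe32 process IDs before creating the COM object, and afterwards kill only the instance that is new (this assumes the process name really is "CANoe32" and that no other instance starts in between):

' PIDs of CANoe32 before creating our instance
Dim before As New HashSet(Of Integer)
For Each p As Process In Process.GetProcessesByName("CANoe32")
    before.Add(p.Id)
Next

gApp = New CANoe.Application ' starts (or attaches to) a CANoe32 process

' Whatever PID appeared after creation is (very likely) ours
Dim myPid As Integer = -1
For Each p As Process In Process.GetProcessesByName("CANoe32")
    If Not before.Contains(p.Id) Then myPid = p.Id
Next

' Later, if gApp.Quit() hangs on a modal dialog:
If myPid <> -1 Then
    Try
        Process.GetProcessById(myPid).Kill()
    Catch ex As ArgumentException
        ' the process already exited - nothing to do
    End Try
End If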
I'm trying to run multiple DDLs (around 90) on an SQL Server.
The DDLs don't contain any changes to tables, only views, stored procedures, and functions. The DDLs might have inter-dependencies between them, one stored procedure that calls another, for example.
I don't want to start organizing the files in the correct order, because it would take too long, and I want the entire operation to fail if any one of the scripts has an error.
How can I achieve this?
My idea so far is to start a transaction, tell SQL Server to ignore errors (which I don't know how to do), run all the scripts once, tell SQL Server to start throwing errors again, run all the scripts again, and then commit if everything succeeds.
Is this a good idea?
How do I CREATE \ ALTER a stored procedure or view even though it has errors?
To clarify and address some concerns...
This is not intended for production. I just don't want to leave the DB I'm testing on broken.
What I would like to achieve is this: run a big group of scripts on the server, without taking the time to order them. But if any of the scripts has an error in it, I want to rollback the entire operation.
I don't care about isolation, I only want the operation to happen as a single transaction.
Organize the files in the correct order, test the procedure on a test environment, have a validation and acceptance test, then run it in production.
While running DDL in a transaction may seem possible, in practice it is not. There are many DDL statements that don't mix well with transactions. You must take the application offline, take a database backup (or create a snapshot) before the schema changes, run the tested and verified upgrade procedure (your scripts), validate the result with acceptance tests, and then turn the application back online. If something fails, revert to the backup created initially (with all the implications vis-a-vis any downstream log consumer like replication, log shipping or mirroring).
This is the correct way, and as far as I'm concerned the only way. I know you'll find plenty of advice on how to do this the wrong way.
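For illustration, a minimal T-SQL sketch of that backup/revert bracket (the database name and path are hypothetical):

-- before any schema change
BACKUP DATABASE MyAppDb TO DISK = N'D:\Backups\MyAppDb_PreUpgrade.bak' WITH INIT;

-- ... run the tested upgrade scripts, then the acceptance tests ...

-- only if validation failed: put everything back the way it was
RESTORE DATABASE MyAppDb FROM DISK = N'D:\Backups\MyAppDb_PreUpgrade.bak' WITH REPLACE;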
We actually do something like this to deploy our database scripts to production. We do this in an application that connects to our databases. To add to the complication, we also have 600 databases that should have the same schema, but don't really. Here's our approach:
Merge all our scripts into one big file, injecting a GO between every single file. This makes it look like there's one very long script. We do a simple ordering based on what the coders requested.
Split everything into "GO blocks". Since GO isn't legal SQL, we split the script into multiple blocks that get executed one at a time.
Open a database connection.
Start a transaction.
For each GO block:
Make sure the transaction is still active. (This is VERY important. I'll explain why in a bit.)
Run the code, recording the errors.
If there were any errors, roll back. Otherwise, commit.
In our multi-database setup, we do this whole thing twice. Run through every database once, "testing" the code to make sure there are no errors on any database, and then go back and run them again "for real".
Now on to why you need to make sure the transaction is still active. There are some commands that will roll back your transaction on error! Imagine our surprise the first time we found this out... Everything before the error was rolled back, but everything after was committed. If there is an error, however, nothing in that same block gets committed, so it's all good.
Below is the core of our execution code. We use a wrapper around SqlClient, but it should look very similar to plain SqlClient.
Dim T = New DBTransaction(client)

For Each block In scriptBlocks
    If Not T.RestartIfNecessary Then
        exceptionCount += 1
        Log("Could not (re)start the transaction for {0}. Not executing the rest of the script.", scriptName)
        Exit For
    End If
    Debug.Assert(T.IsInTransaction)
    Try
        client.Text = block
        client.ExecNonQuery()
    Catch ex As Exception
        exceptionCount += 1
        Log(ex.Message + " on {0} executing: '{1}'", client.Connection.Database, block.Replace(vbNewLine, ""))
    End Try
Next

If exceptionCount > 0 Then Log("There were {0} exceptions while executing {1}.", exceptionCount, scriptName)

If testing OrElse exceptionCount > 0 Then
    Try
        T.Rollback()
        Log("Rolled back all changes for {0} on {1}.", scriptName, client.Connection.Database)
    Catch ex As Exception
        Log("Could not roll back {0} on {1}: {2}", scriptName, client.Connection.Database, ex.Message)
        If Debugger.IsAttached Then
            Debugger.Break()
        End If
    End Try
Else
    T.Commit()
    Log("Successfully committed all changes for {0} on {1}.", scriptName, client.Connection.Database)
End If

Return exceptionCount
Class DBTransaction
    Private _tName As String
    Public ReadOnly Property name() As String
        Get
            Return _tName
        End Get
    End Property

    Private _client As OB.Core2.DB.Client

    Public Sub New(client As OB.Core2.DB.Client, Optional name As String = Nothing)
        If name Is Nothing Then
            name = "T" & Guid.NewGuid.ToString.Replace("-", "").Substring(0, 30)
        End If
        _tName = name
        _client = client
    End Sub

    Public Function Begin() As Boolean
        Return RestartIfNecessary()
    End Function

    Public Function RestartIfNecessary() As Boolean
        Try
            _client.Text = "IF NOT EXISTS (Select transaction_id From sys.dm_tran_active_transactions where name = '" & name & "') BEGIN BEGIN TRANSACTION " & name & " END"
            _client.ExecNonQuery()
            Return IsInTransaction()
        Catch ex As Exception
            Return False
        End Try
    End Function

    Public Function IsInTransaction() As Boolean
        _client.Text = "Select transaction_id From sys.dm_tran_active_transactions where name = '" & name & "'"
        Dim scalar As String = _client.ExecScalar
        Return scalar <> ""
    End Function

    Public Sub Rollback()
        _client.Text = "ROLLBACK TRANSACTION " & name
        _client.ExecNonQuery()
    End Sub

    Public Sub Commit()
        _client.Text = "COMMIT TRANSACTION " & name
        _client.ExecNonQuery()
    End Sub
End Class
You already have a good answer; here is a "hack" answer, for the case of "you cannot do this, but if you want it very much, then go on". I'm quite confident that you will not achieve what you are thinking of, therefore:
DO FULL BACKUP!
Assuming there are no COMMIT or GO statements (explicit or !implicit!) in any of these files, the only thing you need to do is to run them in a single transaction. Combine them in one file, wrap in a transaction, and run.
How to combine 90 files in 1 file:
If sorting by name brings the files up in the right order, then run this from the folder containing them in a command prompt:
FOR /F "tokens=1" %G IN ('dir /b /-d /o:n *.sql') DO (
type %G >> Big_SQL_Script.sql && echo. >> Big_SQL_Script.sql
)
If the order is random, then create a list of the files with dir /b /-d *.sql > File_Name_List.txt and order it manually. Then run:
FOR /F "tokens=1" %G IN (File_Name_List.txt) DO (
type %G >> Big_SQL_Script.sql && echo. >> Big_SQL_Script.sql
)
This way you can concatenate the 90 files in an automated way. Run it and see what happens.
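If the combined script really is GO-free, a minimal wrapper sketch looks like this. SET XACT_ABORT ON makes most runtime errors doom the whole transaction; note that CREATE PROCEDURE / VIEW / FUNCTION normally must be the first statement in a batch, so in a single-batch file those usually have to be issued as dynamic SQL, as sketched here with hypothetical object names:

SET XACT_ABORT ON; -- most errors now doom the entire transaction
BEGIN TRY
    BEGIN TRANSACTION;

    -- contents of Big_SQL_Script.sql go here, object creations as dynamic SQL:
    EXEC (N'CREATE VIEW dbo.ExampleView AS SELECT 1 AS x');
    EXEC (N'CREATE PROCEDURE dbo.ExampleProc AS SELECT x FROM dbo.ExampleView');

    COMMIT TRANSACTION; -- everything succeeded
END TRY
BEGIN CATCH
    IF XACT_STATE() <> 0 ROLLBACK TRANSACTION; -- any failure: undo it all
    THROW; -- SQL Server 2012+; use RAISERROR on older versions
END CATCH;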
Good luck!
I currently have some AutoIT code that will terminate a process on a remote machine, but I'm needing to find a way to add a check to see if the process is running first. After spending some time sifting through the AutoIT forums and google, I'm at a loss. Here is what I currently have:
Func EndProc()
    $oWMIService = ObjGet("winmgmts:\\" & $ipAddress & "\root\CIMV2")
    If Not IsObj($oWMIService) Then
        MsgBox(48, "ERROR", "Couldn't locate the computer. Please make sure you've selected the correct computer and try again.")
        Return
    EndIf
    Dim $handle, $colProc, $cProc
    $cProc = $oWMIService.ExecQuery('SELECT * FROM Win32_Process WHERE Name = "' & $ProcessToKill & '"')
    For $oProc In $cProc
        $oProc.Terminate()
    Next
    If $handle Then
        Return $handle
    Else
        Return 0
    EndIf
EndFunc ; Func EndProc()
You may want to check out the examples here; there are a number of different ways to use WMI via AutoIT to retrieve the list of processes running remotely and filter on the ones you care about.
Alternatively, calling PSList through AutoIT could prove useful as well.
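For instance, the existence check can reuse the same WMI query the kill already does; count the matches first and bail out if there are none (an untested sketch that borrows $oWMIService and $ProcessToKill from the code above):

; Returns the number of matching processes on the remote machine (0 = not running)
Func RemoteProcessCount($oWMIService, $sName)
    Local $cProc = $oWMIService.ExecQuery('SELECT * FROM Win32_Process WHERE Name = "' & $sName & '"')
    If Not IsObj($cProc) Then Return 0
    Local $iCount = 0
    For $oProc In $cProc
        $iCount += 1
    Next
    Return $iCount
EndFunc

; Inside EndProc(), before terminating:
; If RemoteProcessCount($oWMIService, $ProcessToKill) = 0 Then
;     MsgBox(48, "Info", $ProcessToKill & " is not running on the remote machine.")
;     Return 0
; EndIf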