Hibernate - First SQL insert takes a long time

I'm trying to insert a record into the DB using Hibernate.
The data gets saved into multiple tables in the DB. On the Hibernate side I have a parent entity class with one-to-one and one-to-many mappings to other entity classes.
In debug mode, I can see that the save operation results in multiple SQL inserts.
The first insert takes a long time, approximately 300 milliseconds.
Please note: this does not include the time taken for session initialisation, obtaining a JDBC connection, etc.
10:46:24.132 [main] DEBUG org.hibernate.SQL - insert into MY_SCHEMA_NAME.PARENT_ENTITY (COLUMN1, COLUMN2, COLUMN3, COLUMN4, COLUMN5, COLUMN6, COLUMN7, COLUMN8, COLUMN9, COLUMN10, COLUMN11, COLUMN12, COLUMN13, COLUMN14) values (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
If I execute the same SQL from another tool (Oracle SQL Developer), it takes about 20 milliseconds.
The subsequent SQL inserts executed by Hibernate take only about 15-20 milliseconds.
The question is: why does the first SQL insert in Hibernate take so long, nearly 10 times as long as the subsequent inserts?

To answer this question you need to learn:
how the query is processed on the database side - see: Oracle Database tuning guide - SQL processing
how the query is cached on the client side - see: Oracle Database JDBC developer's guide - Statement and resultset caching
In short: when a SQL query comes from the client to the database for the very first time, the database performs some additional steps before executing the statement (see the first link). After this first execution, the SQL plan for the statement is placed into the shared pool (a kind of cache), and the database can skip the most time-consuming tasks (hard parsing - optimization and row source generation) for all subsequent requests for this exact query - this process is called a "soft parse" in the diagram from the link above.
If the shared pool is cleared (for example after a database restart), these steps must be repeated for the first incoming query - and this takes additional time.
The statement is also flushed out of the cache when a table/view referenced by the query is changed (for example by an ALTER TABLE or CREATE/DROP INDEX command); the database then performs a hard parse again, and this again takes additional time.
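If you want to confirm that only the first execution did a hard parse, you can look at the cursor statistics in Oracle's V$SQL view, where LOADS roughly tracks hard parses and EXECUTIONS the total number of runs. Below is a minimal JDBC sketch, assuming you have SELECT privilege on V$SQL; the connection details and the SQL text filter are placeholders:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class ParseStats {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                "jdbc:oracle:thin:@//dbhost:1521/ORCLPDB1", "user", "password");
             PreparedStatement ps = conn.prepareStatement(
                 "select sql_text, parse_calls, loads, executions "
                 + "from v$sql where sql_text like ?")) {
            ps.setString(1, "insert into MY_SCHEMA_NAME.PARENT_ENTITY%");
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    // LOADS counts library cache loads (hard parses);
                    // PARSE_CALLS counts all parse requests (soft + hard)
                    System.out.printf("%s | parse_calls=%d loads=%d executions=%d%n",
                            rs.getString(1), rs.getLong(2), rs.getLong(3), rs.getLong(4));
                }
            }
        }
    }
}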
On the client side, when the statement is executed for the first time, it is placed in the statement cache (see the second link) - and this takes some additional time. After that, for all subsequent invocations the statement is retrieved from the cache - and this improves performance.
When the database driver is closed (for example on an application restart), the cache is cleared, and the next statement invocation again takes additional time.
You can explicitly disable statement caching (see the second link for detailed instructions), and you will most likely see that all executed statements take more time.
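To see the client-side effect in isolation, you can toggle the Oracle JDBC driver's implicit statement cache and time repeated executions. This is a minimal sketch, not the original poster's code: the connection details, table, and column are placeholders, and real timings will vary:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import oracle.jdbc.OracleConnection;

public class StatementCacheDemo {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                "jdbc:oracle:thin:@//dbhost:1521/ORCLPDB1", "user", "password")) {
            // Enable the driver-side implicit statement cache
            OracleConnection oconn = conn.unwrap(OracleConnection.class);
            oconn.setImplicitCachingEnabled(true);
            oconn.setStatementCacheSize(50);

            // Placeholder statement; the real PARENT_ENTITY insert has 14 columns
            String sql = "insert into MY_SCHEMA_NAME.PARENT_ENTITY (COLUMN1) values (?)";
            for (int i = 0; i < 3; i++) {
                long start = System.nanoTime();
                // close() returns the statement to the cache; the next
                // prepareStatement() with identical SQL retrieves it
                try (PreparedStatement ps = conn.prepareStatement(sql)) {
                    ps.setString(1, "row-" + i);
                    ps.executeUpdate();
                }
                System.out.printf("execution %d: %d ms%n",
                        i, (System.nanoTime() - start) / 1_000_000);
            }
        }
    }
}

The first iteration pays for the hard parse on the server plus the cache miss in the driver; later iterations should be noticeably faster, in line with the 15-20 ms the poster observed.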

Related

Concurrency Spanner java stacktrace on `bq query` update column of partitioned table

I've created SQL that does an update of all values in one column:
UPDATE `Blackout_POC2.measurements_2020`
SET visitor.customerId_enc = enc.encrypted
FROM `Blackout_POC2.encrypted` AS enc
WHERE dateAmsterdam="2020-01-05"
AND session.visitId = enc.visitId
AND visitor.customerId = enc.plain
where dateAmsterdam is the partition key of the measurements_2020 table, and encrypted is a non-partitioned table that holds visitId, plain and encrypted fields. The code sets all values in the customerId_enc column with values from the encrypted table.
The code works perfectly fine when I run it one day at a time, but when I run days in parallel, I occasionally (1% or so) get a stacktrace from my bq query <sql> (see below).
I thought that I could modify partitioned tables in parallel within each partition, but that seems occasionally not to be the case. Could someone point me to where this would be documented, and preferably how to avoid this?
I can probably just rerun that query again, since it is idempotent, but I would like to know why this happens.
Thanks
Bart van Deenen, data-engineer Bol.com
Error in query string: Error processing job 'bolcom-dev-blackout-339:bqjob_r131fa5b3dfd24829_0000016faec5e5da_1': domain: "cloud.helix.ErrorDomain"
code: "QUERY_ERROR" argument: "Could not serialize access to table bolcom-dev-blackout-339:Blackout_POC2.measurements_2020 due to concurrent update"
debug_info: "[CONCURRENT_UPDATE] Table modified by concurrent UPDATE/DELETE/MERGE DML or truncation at 1579185217979. Storage set job_uuid:
03d3d5ec-2118-4e43-9fec-1eae99402c86:20200106, instance_id: ClonedTable-1579183484786, Reason: code=CONCURRENT_UPDATE message=Could not serialize
access to table bolcom-dev-blackout-339:Blackout_POC2.measurements_2020 due to concurrent update debug=Table modified by concurrent UPDATE/DELETE/MERGE
DML or truncation at 1579185217979. Storage set job_uuid: 03d3d5ec-2118-4e43-9fec-1eae99402c86:20200106, instance_id: ClonedTable-1579183484786
errorProto=domain: \"cloud.helix.ErrorDomain\"\ncode: \"QUERY_ERROR\"\nargument: \"Could not serialize access to table bolcom-dev-
blackout-339:Blackout_POC2.measurements_2020 due to concurrent update\"\ndebug_info: \"Table modified by concurrent UPDATE/DELETE/MERGE DML or
truncation at 1579185217979. Storage set job_uuid: 03d3d5ec-2118-4e43-9fec-1eae99402c86:20200106, instance_id: ClonedTable-1579183484786\"\n\n\tat
com.google.cloud.helix.common.Exceptions$Public.concurrentUpdate(Exceptions.java:381)\n\tat
com.google.cloud.helix.common.Exceptions$Public.concurrentUpdate(Exceptions.java:373)\n\tat
com.google.cloud.helix.server.metadata.StorageTrackerData.verifyStorageSetUpdate(StorageTrackerData.java:224)\n\tat
com.google.cloud.helix.server.metadata.AtomicStorageTrackerSpanner.validateUpdates(AtomicStorageTrackerSpanner.java:1133)\n\tat
com.google.cloud.helix.server.metadata.AtomicStorageTrackerSpanner.updateStorageSets(AtomicStorageTrackerSpanner.java:1310)\n\tat
com.google.cloud.helix.server.metadata.AtomicStorageTrackerSpanner.updateStorageSets(AtomicStorageTrackerSpanner.java:1293)\n\tat
com.google.cloud.helix.server.metadata.MetaTableTracker.updateStorageSets(MetaTableTracker.java:2274)\n\tat
com.google.cloud.helix.server.job.StorageSideEffects$1.update(StorageSideEffects.java:1123)\n\tat
com.google.cloud.helix.server.job.StorageSideEffects$1.update(StorageSideEffects.java:976)\n\tat
com.google.cloud.helix.server.metadata.MetaTableTracker$1.update(MetaTableTracker.java:2510)\n\tat
com.google.cloud.helix.server.metadata.StorageTrackerSpanner.lambda$atomicUpdate$7(StorageTrackerSpanner.java:165)\n\tat
com.google.cloud.helix.server.metadata.AtomicStorageTrackerSpanner$Factory$1.run(AtomicStorageTrackerSpanner.java:3775)\n\tat com.google.cloud.helix.se
rver.metadata.AtomicStorageTrackerSpanner$Factory.lambda$performJobWithCommitResult$0(AtomicStorageTrackerSpanner.java:3792)\n\tat
com.google.cloud.helix.server.metadata.persistence.SpannerTransactionContext$RetryCountingWork.run(SpannerTransactionContext.java:1002)\n\tat com.googl
e.cloud.helix.server.metadata.persistence.SpannerTransactionContext$Factory.executeWithResultInternal(SpannerTransactionContext.java:840)\n\tat com.goo
gle.cloud.helix.server.metadata.persistence.SpannerTransactionContext$Factory.executeOptimisticWithResultInternal(SpannerTransactionContext.java:722)\n
\tat com.google.cloud.helix.server.metadata.persistence.SpannerTransactionContext$Factory.lambda$executeOptimisticWithResult$1(SpannerTransactionContex
t.java:716)\n\tat
com.google.cloud.helix.server.metadata.persistence.SpannerTransactionContext$Factory.executeWithMonitoring(SpannerTransactionContext.java:942)\n\tat co
m.google.cloud.helix.server.metadata.persistence.SpannerTransactionContext$Factory.executeOptimisticWithResult(SpannerTransactionContext.java:715)\n\ta
t com.google.cloud.helix.server.metadata.AtomicStorageTrackerSpanner$Factory.performJobWithCommitResult(AtomicStorageTrackerSpanner.java:3792)\n\tat
com.google.cloud.helix.server.metadata.AtomicStorageTrackerSpanner$Factory.performJobWithCommitResult(AtomicStorageTrackerSpanner.java:3720)\n\tat
com.google.cloud.helix.server.metadata.StorageTrackerSpanner.atomicUpdate(StorageTrackerSpanner.java:159)\n\tat
com.google.cloud.helix.server.metadata.MetaTableTracker.atomicUpdate(MetaTableTracker.java:2521)\n\tat com.google.cloud.helix.server.metadata.StatsRequ
estLoggingTrackers$LoggingStorageTracker.lambda$atomicUpdate$8(StatsRequestLoggingTrackers.java:494)\n\tat
com.google.cloud.helix.server.metadata.StatsRequestLoggingTrackers$StatsRecorder.record(StatsRequestLoggingTrackers.java:181)\n\tat
com.google.cloud.helix.server.metadata.StatsRequestLoggingTrackers$StatsRecorder.record(StatsRequestLoggingTrackers.java:158)\n\tat
com.google.cloud.helix.server.metadata.StatsRequestLoggingTrackers$StatsRecorder.access$500(StatsRequestLoggingTrackers.java:123)\n\tat
com.google.cloud.helix.server.metadata.StatsRequestLoggingTrackers$LoggingStorageTracker.atomicUpdate(StatsRequestLoggingTrackers.java:493)\n\tat
com.google.cloud.helix.server.job.StorageSideEffects.apply(StorageSideEffects.java:1238)\n\tat
com.google.cloud.helix.server.rosy.MergeStorageImpl.commitChanges(MergeStorageImpl.java:936)\n\tat
com.google.cloud.helix.server.rosy.MergeStorageImpl.merge(MergeStorageImpl.java:729)\n\tat
com.google.cloud.helix.server.rosy.StorageStubby.mergeStorage(StorageStubby.java:937)\n\tat
com.google.cloud.helix.proto2.Storage$ServiceParameters$21.handleBlockingRequest(Storage.java:2100)\n\tat
com.google.cloud.helix.proto2.Storage$ServiceParameters$21.handleBlockingRequest(Storage.java:2098)\n\tat
com.google.net.rpc3.impl.server.RpcBlockingApplicationHandler.handleRequest(RpcBlockingApplicationHandler.java:28)\n\tat
....
BigQuery DML operations don't support multi-statement transactions; nevertheless, you can execute some statements concurrently:
UPDATE and INSERT
DELETE and INSERT
INSERT and INSERT
For example, if you execute two UPDATE statements simultaneously against the table, only one of them will succeed.
Keeping in mind that an UPDATE can run concurrently with an INSERT but not with another UPDATE, the likely cause is that you are executing multiple UPDATE statements simultaneously.
You could try using the Scripting feature to manage the execution flow and prevent DML concurrency, or serialize the jobs from the client, as sketched below.
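As an illustration of client-side serialization (an alternative to a BigQuery script), here is a minimal sketch using the google-cloud-bigquery Java client. The day list is hypothetical and the UPDATE text is taken from the question; because query() blocks until each job finishes, the per-day updates never overlap:

import com.google.cloud.bigquery.BigQuery;
import com.google.cloud.bigquery.BigQueryOptions;
import com.google.cloud.bigquery.QueryJobConfiguration;
import java.util.List;

public class SequentialUpdates {
    public static void main(String[] args) throws InterruptedException {
        BigQuery bigquery = BigQueryOptions.getDefaultInstance().getService();
        // Days to process; running them one after another avoids CONCURRENT_UPDATE
        List<String> days = List.of("2020-01-05", "2020-01-06", "2020-01-07");
        for (String day : days) {
            String sql =
                "UPDATE `Blackout_POC2.measurements_2020` "
                + "SET visitor.customerId_enc = enc.encrypted "
                + "FROM `Blackout_POC2.encrypted` AS enc "
                + "WHERE dateAmsterdam = '" + day + "' "
                + "AND session.visitId = enc.visitId "
                + "AND visitor.customerId = enc.plain";
            // query() waits for the DML job to complete before returning
            bigquery.query(QueryJobConfiguration.newBuilder(sql).build());
        }
    }
}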

Atomicity of a job execution in SQL Server

I would like to find the proper documentation to confirm my thoughts about a SQL Server job I recently wrote. My fear is that data could be inconsistent for a few milliseconds (the time between the start of the job execution and its end).
Let's say the job is setup to run every 30 minutes. It will only have one step with the following SQL statement:
DELETE FROM myTable
INSERT INTO myTable
SELECT *
FROM myTableTemp
Could it happen that a SELECT query is executed exactly in between the DELETE statement and the INSERT statement, and thus returns empty results?
And what if I had created 2 steps in my job, one for the DELETE query and another for the INSERT INTO? Is atomicity protected by SQL Server between several steps of one job?
Thanks for your help on this one
No, there is no automatic atomic handling of jobs, whether they consist of multiple statements or multiple steps.
Use this:
BEGIN TRANSACTION;

DELETE FROM myTable;

INSERT INTO myTable
SELECT * FROM myTableTemp;

-- ... anything else you need to be atomic

COMMIT TRANSACTION;

With both statements inside one transaction, a concurrent SELECT running under the default READ COMMITTED isolation level will block until the transaction commits, so it can never observe the empty table in between (a NOLOCK read still could).

Sybase ASE 15.5 : Slow insert with JDBC executebatch()

I'm using Sybase ASE 15.5 and the JDBC driver jConnect 4, and I'm experiencing slow inserts with executeBatch() using a batch size of +/-40 rows on a large table of 400 million rows with columns (integer, varchar(128), varchar(255)), a primary key and clustered index on columns (1,2), and a nonclustered index on columns (2,1). Each batch of +/-40 rows takes +/-200 ms. Is the slowness related to the size of the table? I know that dropping the indexes can improve performance, but unfortunately that is not an option. How can I improve the speed of insertion?
NOTE: This is part of the application's live run, not a one-shot migration, so I won't be using the bcp tool.
EDIT : I have checked this answer for mysql but not sure it applies to Sybase ASE https://stackoverflow.com/a/13504946/8315843
There are many reasons why the inserts could be slow, e.g.:
each insert statement is having to be parsed/compiled; the ASE 15.x optimizer attempts to do a lot more work than the previous ASE 11/12 optimizer w/ net result being that compiles (generally) take longer to perform
the batch is not wrapped in a single transaction, so each insert has to wait for a separate write to the log to complete
you've got a slow network connection between the client host and the dataserver host
there's some blocking going on
the table has FK constraints that need to be checked for each insert
there's an insert trigger on the table (w/ the obvious question of what the trigger is doing and how long its operations take to perform)
Some ideas to consider re: speeding up the inserts:
use prepared statements; the first insert is compiled into a lightweight procedure (think 'temp procedure'); follow-on inserts (using the prepared statement) benefit from not having to be compiled (see the JDBC sketch after this list)
make sure a batch of inserts are wrapped in a begin/commit tran wrapper; this tends to defer the log write(s) until the commit tran is issued; fewer writes to the log means less time waiting for the log write to be acknowledged
if you have a (relatively) slow network connection between the application and dataserver hosts, look at using a larger packet size; fewer packets means less time waiting for round-trip packet processing/waiting
look into if/how jdbc supports the bulk-copy libraries (basically implement bcp-like behavior via jdbc) [I don't work with jdbc so I'm only guessing this might be available]
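Putting the first two ideas together, here is a minimal JDBC sketch; the host, credentials, table, and column names are placeholders, and the driver class name assumes a recent jConnect version:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class BatchInsertDemo {
    public static void main(String[] args) throws Exception {
        // jConnect driver class; adjust the package for your driver version
        Class.forName("com.sybase.jdbc4.jdbc.SybDriver");
        try (Connection conn = DriverManager.getConnection(
                "jdbc:sybase:Tds:dbhost:5000/mydb", "user", "password")) {
            conn.setAutoCommit(false); // wrap the whole batch in one transaction
            String sql = "insert into big_table (id, name, descr) values (?, ?, ?)";
            // The prepared statement is compiled once and reused for every row
            try (PreparedStatement ps = conn.prepareStatement(sql)) {
                for (int i = 0; i < 40; i++) {
                    ps.setInt(1, i);
                    ps.setString(2, "name-" + i);
                    ps.setString(3, "descr-" + i);
                    ps.addBatch();
                }
                ps.executeBatch();
            }
            conn.commit(); // one log wait for all 40 rows instead of 40 separate ones
        }
    }
}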
Some of the above is covered in these SO threads:
Getting ExecuteBatch to execute faster
JDBC Delete & Insert using batch
Efficient way to do batch INSERTS with JDBC

SQL 2008+ NOLOCK vs READPAST Considerations for Reporting Accuracy

Understanding that the final decision is a business decision, what are the accuracy considerations between NOLOCK & READPAST running on SQL 2008 R2? I would like to have a better understanding before discussing changes with the business area.
I have inherited a number of queries, used to create data views for management reporting. ‘WITH (NOLOCK)’ is used liberally but inconsistently. The data being read is from the production server of a widely used application that is constantly being updated. We are migrating from a SQL 2005 server to a SQL 2008 R2 server. These reports want data fresher than the 24-hour-old data on the archive server. The use of NOLOCK suggests a past decision: potential for conflict exists, and a bit of accuracy loss is acceptable. Data is used to populate dashboards for human awareness/decision making.
All the queries are SELECT, with read-only access for the data-view login. The majority of the queries are single table, with a few 2 and 3 table joins. Given the low level of joins, WITH () seems a better choice than SET TRANSACTION ISOLATION LEVEL {}.
Table Hints (Transact-SQL) http://msdn.microsoft.com/en-us/library/ms187373.aspx (as well as multiple questions on SO) says that NOLOCK and/or READUNCOMMITTED are likely to have duplicate read issues, in addition to missing locked records.
READPAST looks like the more accurate option, as it will only miss locked records, without a chance of duplicates. But I am not sure whether the level of missed locked records is consistent between it and NOLOCK.
There is a good article by Tim Chapman comparing the two, but it was written in 2007; most of the comments revolve around 2000 & 2005, with one comment indicating READPAST is problematic in 2008 R2.
References
Effect of NOLOCK hint in SELECT statements
When should you use "with (nolock)"
Using NOLOCK and READPAST table hints in SQL Server (By Tim Chapman)
Edit:
Snapshot isolation is suggested in two answers below. Snapshot isolation depends on a database setting; this Q/A https://serverfault.com/questions/117104/how-can-i-tell-if-snapshot-isolation-is-turned-on describes how to see which settings are in place on the database. I now know it is disabled; I am reading for reports from a major application's database, and changing the setting is not an option. +- a couple of percent accuracy is acceptable; application (OLTP) impact is not acceptable. Most simple queries do not need lock considerations, but in some extreme cases lock consideration is required. With the advent of snapshot isolation in SQL 2005, little information is available on NOLOCK & READPAST behavior in SQL 2008 or higher. Yet they remain my only choices.
A better option worth consideration is enabling READ COMMITTED SNAPSHOT for the database itself. This uses row versioning in tempdb to give each statement a consistent view of the data as it existed at the start of that statement.
There is a very good read on various aspects of NOLOCK, READPAST etc, at http://www.brentozar.com/archive/2013/01/implementing-snapshot-or-read-committed-snapshot-isolation-in-sql-server-a-guide/
WITH (NOLOCK) can provide incorrect results if someone is updating the table when you are selecting from it. If a page-split happens as a result of an insert while you are reading the table, and the new page happens to be beyond the point you've read, WITH (NOLOCK) will have already returned rows from the old page, and will then return duplicate rows from the new page. This is just a single example of why (NOLOCK) is bad.
WITH (READPAST) will skip any records that are being updated or inserted while you are reading from the table. Neither option is good in a busy database.
In light of the recent edit to your question where you state you cannot change the READ COMMITTED SNAPSHOT database setting, perhaps you should consider using a stored procedure to gather data for your reports, and setting the transaction isolation level at the beginning of the stored proc using SET TRANSACTION ISOLATION LEVEL SNAPSHOT;. In order to do this, you would need to change the database option 'allow snapshot isolation'.
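For completeness, here is a minimal sketch of reading under SNAPSHOT isolation from client code, assuming the Microsoft JDBC driver and a database where ALLOW_SNAPSHOT_ISOLATION has already been set ON; the server, database, credentials, and view names are placeholders:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;
import com.microsoft.sqlserver.jdbc.SQLServerConnection;

public class SnapshotReadDemo {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                "jdbc:sqlserver://dbhost;databaseName=ReportsDb;user=reader;password=secret")) {
            // Requires: ALTER DATABASE ReportsDb SET ALLOW_SNAPSHOT_ISOLATION ON;
            conn.setTransactionIsolation(SQLServerConnection.TRANSACTION_SNAPSHOT);
            try (Statement st = conn.createStatement();
                 ResultSet rs = st.executeQuery("select count(*) from dbo.SomeReportView")) {
                while (rs.next()) {
                    System.out.println("rows: " + rs.getLong(1));
                }
            }
        }
    }
}

The same effect can be had from any client by issuing SET TRANSACTION ISOLATION LEVEL SNAPSHOT; before the report queries.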
From SQL Server Books Online:
SNAPSHOT
Specifies that data read by any statement in a transaction will be the transactionally consistent version of the data that existed at the start of the transaction. The transaction can only recognize data modifications that were committed before the start of the transaction. Data modifications made by other transactions after the start of the current transaction are not visible to statements executing in the current transaction. The effect is as if the statements in a transaction get a snapshot of the committed data as it existed at the start of the transaction.
Except when a database is being recovered, SNAPSHOT transactions do not request locks when reading data. SNAPSHOT transactions reading data do not block other transactions from writing data. Transactions writing data do not block SNAPSHOT transactions from reading data.
During the roll-back phase of a database recovery, SNAPSHOT transactions will request a lock if an attempt is made to read data that is locked by another transaction that is being rolled back. The SNAPSHOT transaction is blocked until that transaction has been rolled back. The lock is released immediately after it has been granted.
The ALLOW_SNAPSHOT_ISOLATION database option must be set to ON before you can start a transaction that uses the SNAPSHOT isolation level. If a transaction using the SNAPSHOT isolation level accesses data in multiple databases, ALLOW_SNAPSHOT_ISOLATION must be set to ON in each database.
A transaction cannot be set to SNAPSHOT isolation level that started with another isolation level; doing so will cause the transaction to abort. If a transaction starts in the SNAPSHOT isolation level, you can change it to another isolation level and then back to SNAPSHOT. A transaction starts the first time it accesses data.
A transaction running under SNAPSHOT isolation level can view changes made by that transaction. For example, if the transaction performs an UPDATE on a table and then issues a SELECT statement against the same table, the modified data will be included in the result set.
NOLOCK can cause duplicate data to be read, data to be missed, and the query to actually fail with an error message (something about "data movement").
On the other hand, a non-NOLOCK query can also read duplicate data and miss data! It is by no means a consistent snapshot of the database. The difference is that it will not read uncommitted data and will never fail.
The problem with NOLOCK is mostly that it can fail randomly, so you need retry logic. Also, the probability of wrong data being read is slightly higher.
NOLOCK has a big advantage when you're doing table scans: SQL Server can use allocation order scanning instead of index-order scans. TABLOCK has the same effect. This can be a significant speedup in the presence of fragmentation.
Consider just using the snapshot isolation level as it gets rid of all of these concerns. It comes with some other trade-offs and you don't get allocation-order scans. But it permanently and comprehensively removes locking problems.
Answering my own question after stress testing with SQLQueryStress http://www.datamanipulation.net/sqlquerystress/ (this is a wonderful tool that is extremely easy to use). Results from SQLQueryStress were checked against SQL Server Profiler; the accuracy is the same as SQL Server Profiler, though the precision is two decimal places of a second less (which is sufficient for this test).
As mentioned in the question, the primary concern is application performance impact, with report accuracy and performance the secondary consideration. All testing occurred on a test server where the test application is active and has some minor activity.
After downloading and becoming familiar with SQLQueryStress, I set up a simple ‘ReportQuery’ to act as a resource hog. It is set to run 15 iterations with 15 threads (225 total queries). Total run time is around 28 seconds, with an average iteration time of 1.49 seconds.
Created an Add/Delete ‘ApplicationQuery’ to represent ongoing application activity. It is set to run 2000 iterations with 1 thread. There are two versions, one with a select statement (runs in 31 seconds) and one without (runs in 28 seconds). These represent normal peak-time application activity.
10 test runs of each of the three versions of the ReportQuery were made, to identify any performance benefit between ‘with(nolock)’, ‘with(readpast)’, and no hints. Results indicate no significant difference: the ReportQuery consistently runs in about 28 seconds with an average 1.5-second iteration time.
There were no big outliers, so I decided to drop to 5 test runs for the following tests.
5 test runs of the ApplicationQuery with a select statement, with each of the three versions of the ReportQuery also running. In each of the 15 total tests, the ApplicationQuery is started manually, with the ReportQuery started manually immediately after. This scenario represents a resource-heavy report query struggling with the application's ongoing activity for resources.
Repeated the test runs, but this time used the ApplicationQuery without a select statement.
Results: in every case the ApplicationQuery was throttled back to almost no forward progress while the ReportQuery was running.
The ReportQuery had no significant loss of performance when struggling for resources with multiple ApplicationQueries against the database.
The ApplicationQuery was able to run queries parallel to the ReportQuery, but progress was very slow while competing for resources. In essence, the total time to run 2000 Application Add/Delete queries was extended by the time used by the ReportQuery.
The initial question about which is more accurate becomes pointless. There is essentially no report or application performance difference between using or not using the NOLOCK or READPAST hints, so don't use either in a busy database and get the highest accuracy possible.
‘ReportQuery’
select
ID
, [TABLE_NAME]
, NUMBER
, FIELD
, OLD_VALUE
, NEW_VALUE
, SYSMODUSER
, SYSMODTIME
, SYSMODCOUNT
from dbo.UPMCINCIDENTMGMTAUDITRECORDSM1
where Number like '%'
or NUMBER like '2010-01-01'
‘ApplicationQuery’ (with Select Statement)
select *
from dbo.UPMCINCIDENTMGMTAUDITRECORDSM1
where FIELD = 'JJTestingPerformance'
insert into dbo.UPMCINCIDENTMGMTAUDITRECORDSM1 (ID
, [TABLE_NAME]
, NUMBER
, FIELD
, OLD_VALUE
, NEW_VALUE
)
values ('Test+Time'
, 'none'
, 'tst01'
, 'JJTestingPerformance'
, 'No Value'
, 'Test'
)
delete from dbo.UPMCINCIDENTMGMTAUDITRECORDSM1
where FIELD = 'JJTestingPerformance'
‘ApplicationQuery’ (without Select Statement)
insert into dbo.UPMCINCIDENTMGMTAUDITRECORDSM1 (ID
, [TABLE_NAME]
, NUMBER
, FIELD
, OLD_VALUE
, NEW_VALUE
)
values ('Test+Time'
, 'none'
, 'tst01'
, 'JJTestingPerformance'
, 'No Value'
, 'Test'
)
delete from dbo.UPMCINCIDENTMGMTAUDITRECORDSM1
where FIELD = 'JJTestingPerformance'

SQL Server Profiler 2005: How to measure execution time of insert statement with trigger?

I want to measure the execution time (using I guess duration from SQL Server Profiler) of an insert statement that has an instead-of insert trigger on it. How do I measure the complete time of this statement including the trigger time?
The execution time (duration) that you see in SQL Server Profiler for a query is the time it took to execute that query, including evaluating any triggers or other constraints.
Because triggers are intended to be used as an alternative way to check data integrity, an SQL statement is not considered to have completed until any triggers have also finished.
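As a cross-check outside of Profiler, you can also time the statement from the client: because the INSERT does not complete until the INSTEAD OF trigger has finished, the client-side elapsed time includes the trigger. A minimal JDBC sketch, with the connection string, table, and column as placeholders:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class InsertTimingDemo {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                "jdbc:sqlserver://dbhost;databaseName=MyDb;user=app;password=secret");
             PreparedStatement ps = conn.prepareStatement(
                 "INSERT INTO dbo.TableWithTrigger (col1) VALUES (?)")) {
            ps.setString(1, "value");
            long start = System.nanoTime();
            ps.executeUpdate(); // returns only after the INSTEAD OF trigger has run
            long elapsedMs = (System.nanoTime() - start) / 1_000_000;
            System.out.println("Insert including trigger took " + elapsedMs + " ms");
        }
    }
}

Note that this measurement also includes the network round trip, so it will read slightly higher than the Profiler duration.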
Update: An overview of some commonly used SQL Server profiler events:
SQL:BatchCompleted Occurs when a SQL server batch (a group of statements) has completed execution - the duration is the total time to execute the batch.
SQL:StmtCompleted Occurs when a SQL statement executed as part of a batch completes execution - again the duration is the time to execute that single statement.
SP:Completed Occurs when a stored procedure has completed execution - the duration shown is the time to complete execution of the stored procedure.
SP:StmtCompleted Occurs when an SQL statement executed as part of a stored procedure completes.
A batch is a set of SQL statements separated by a GO statement; however, to understand the above you should also know that all SQL Server commands are executed in the context of a batch*.
Also, each of the above events has a corresponding Starting event - SP:Starting, SQL:BatchStarting, SQL:StmtStarting and SP:StmtStarting. These don't list durations (we don't know the duration yet, because the statement hasn't completed), but they do help show where the duration recording starts from.
To better understand the relationship between these events, I recommend that you experiment with capturing some traces of some simple examples (from within SQL Server Management Studio), for example:
SELECT * FROM SomeTable
GO
SELECT * FROM SomeTable
SELECT * FROM OtherTable
GO
SELECT * FROM SomeTable
exec SomeProc
GO
As you should see, for each of the 3 examples above you always get a SQL:BatchStarting and SQL:BatchCompleted, the other event types however provide more detail on the individual commands run.
For this reason I generally tend to use the SQL:BatchCompleted event the most, however if the statement you are attempting to measure is executed as part of a larger batch (or in a stored procedure) then you may find one of the other event classes helpful.
See TSQL Event Category (MSDN) for more information on the various SQL Server Profiling events - there are lots!
Finally, if you are executing this command from within SQL Server Management Studio, be aware that the simplest way to record the execution time is to use the client-side statistics feature.
(*) I'm pretty sure that everything is executed as part of a batch, although I've not managed to find any evidence on the internet to confirm this.