ODBC - optimising performance - Transactions
Optimising performance with ODBC - Transactions
By default, Sculptor makes each insert, delete and write command a single transaction on a SQL database. This provides maximum compatibility with Sculptor keyed files, but is only moderately efficient on SQL Server and can be very slow on other SQL databases. This is not a problem for interactive programs that work with one record at a time, but when many records are processed in a loop it is much more efficient to make them a single transaction. In the examples below, the begin() and commit() calls make the loops 1.5 to 2 times faster on SQL Server.
EXAMPLE 1 - insert
begin()
while (TRUE) {
    nextu File1 nsr = BREAK
    File2 = File1
    insert File2 re = IGNORE
}
commit()
EXAMPLE 2 - write
begin()
while (TRUE) {
    next MyFile nsr = BREAK
    ...
    writekl MyFile
}
unlock MyFile
commit()
EXAMPLE 3 - delete
begin()
while (TRUE) {
    next MyFile nsr = BREAK
    if (condition) delete MyFile
}
commit()
Although transactions improve performance, it is best to keep them to a reasonable size. Microsoft recommends avoiding large transactions, and Sculptor keyed files are less efficient with a large transaction than with no transaction. If the loops in the above examples are likely to process more than a few thousand records, consider the following technique, assuming of course that it is not essential for application reasons to make the entire update a single transaction.
tmp.BatchCount = 5000
begin()
while (TRUE) {
    next MyFile nsr = BREAK
    ...
    write MyFile
    if (--tmp.BatchCount <= 0) {
        commit()
        begin()
        tmp.BatchCount = 5000
    }
}
commit()
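The same batching pattern can also be combined with the insert loop from EXAMPLE 1. The sketch below is an illustrative variant, not an additional documented example: it assumes the File1/File2 record copy shown earlier and reuses the tmp.BatchCount field and batch size of 5000 from the technique above.
tmp.BatchCount = 5000
begin()
while (TRUE) {
    nextu File1 nsr = BREAK
    File2 = File1
    insert File2 re = IGNORE
    if (--tmp.BatchCount <= 0) {
        commit()
        begin()
        tmp.BatchCount = 5000
    }
}
commit()
Each batch of 5000 inserts is committed as one transaction and a new transaction is then started, so the transaction never grows beyond the chosen batch size however many records the loop copies.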