DB Interface Module User’s Guide : PART III Using Advanced Features : Chapter 9 Caching : Caching When Inserting and Reading Data
This section describes caching in relation to RWDBInserter, RWDBReader, and result schemas. For information on the use of the cache manager for metadata caching in RWDBTable and RWDBStoredProc, see Section 1.4.
Caching with RWDBInserter
With RWDBInserter, inserted rows can be cached locally and sent to the database in a single operation. This approach makes more efficient use of network packet sizes and server resources than sending each row separately.
When the caching feature of RWDBInserter is enabled, individual rows are stored in the RWDBInserter object’s internal buffers and then sent to the database all at once, typically through the database vendor’s array binding interface. To enable caching when producing an RWDBInserter, pass a cache size of n > 1 to its producer method. The following example shows how to use an inserter with a cache size of 5.
Example 15 – Caching with RWDBInserter
 
RWDBConnection aConn = aDb.connection();
RWDBInserter inserter = myTable.inserter(5); // 1
inserter << 0; // 2
inserter.execute(aConn); // 3
inserter << 1; // 4
inserter.execute(aConn); // 5
inserter << 2; // 6
inserter.execute(aConn); // 7
inserter << 3; // 8
inserter.execute(aConn); // 9
inserter << 4; //10
inserter.execute(aConn); //11
inserter << 5; //12
inserter.execute(aConn); //13
inserter.flush(); //14
In //1, we produce an inserter with a cache size of 5 from an existing RWDBTable object. In //2 - //13, we repeatedly shift a new value into the inserter and execute it, just as we would with an inserter that does no caching. There are only two differences between this example and one that does not use a cached inserter: in //1, we set the cache size, and in //14, we flush the contents of the cache.
Now let’s see what happens to the inserter internally. In //1, when we create the inserter with a cache size of 5, the inserter creates an internal buffer big enough to hold five rows of data. In //2, we shift a row of data into the inserter. This row is stored in the first location in the inserter's internal buffers, and the inserter notes that it has cached one row.
When we execute in //3, the inserter does not send the newly shifted row to the database. Instead, it compares the number of rows already cached to the cache size it was assigned at construction. Since the cache is not at capacity (it holds only one row at this point), the inserter returns immediately to the application.
In //4 - //5, the value 1 is shifted into the inserter, and the inserter is executed again. The inserter notes that it has now cached two rows, and returns to the application. This pattern is repeated in //6 - //11. By the time we reach //11, the inserter has cached five rows. At this point, the number of rows cached equals the cache size, and the inserter sends all five cached rows to the database server at once. By using the inserter in this way, the application saves the cost of four network round trips. In our example, one more row of data is shifted into the cache, and then we are done.
Finally, the inserter's flush() method is invoked in //14. In fact, a flush was already performed in //11, because flush is invoked automatically whenever the cache reaches its assigned capacity. But the final row, shifted in //12, is still cached, so we invoke flush() in //14 to ensure that all data is sent to the server. This is good programming practice, although flush() is also invoked implicitly when the inserter goes out of scope and its destructor is called.
Caching with RWDBReader
In “Caching with RWDBInserter”, we saw how to use the caching mechanism of class RWDBInserter. With class RWDBReader, we can use a similar caching mechanism when receiving data from a database server. To enable the caching features of RWDBReader, we pass a cache size of n > 1 to its producer method. The following example shows how to construct a reader with a cache size of 5.
Example 16 – Caching with RWDBReader
RWDBConnection aConn = aDb.connection();
RWDBReader reader = aSelector.reader(aConn, 5); //1
int aInt;
 
while (reader()) //2
{
    reader >> aInt; //3
    // process aInt
}
In //1, we produce an RWDBReader of cache size 5 from an RWDBSelector.
In //2, we use operator() of RWDBReader to fetch data from the database server; in this case, it fetches five rows, since we set the cache size to 5.
In //3, we shift the first row of the RWDBReader object's cache into the variable aInt. After processing the value, we return to //2, where operator() is called a second time. On this second call, the RWDBReader returns without making a network trip, because the second row has already been fetched from the database server. We then shift the second row of the RWDBReader object's cache into the application variable aInt in //3.
This pattern is repeated until the sixth call to operator(). At the time of this call, the RWDBReader has exhausted its internal cache of rows, and must request a new set of rows from the database server. As you can see, by using caching, the RWDBReader must make only one network trip for every five rows read.
RWDBReader producer methods use a cache size of 0 by default, which tells the Access Module you are using to select a suitable default cache size. So for most Access Modules, caching is applied at some default level even if you do not pass a cache size parameter; you need to pass a cache size explicitly only if you want a specific, non-default value. See your Access Module User's Guide for more information.
Restrictions on Caching with RWDBReader and RWDBInserter
Despite the increased performance that many applications gain from using the caching mechanism in the DB Interface Module, caching may adversely affect transactions. This depends largely on your access library. Please check your Access Module User’s Guide for any limitations and warnings on the use of caching in your application.
Result Schema Caching
In many applications, the schema of the database tables being queried does not change. Tables are designed when the application is designed, and the layout and structure remain the same while the program is running. In these cases, result schema caching may be enabled. Enabling result schema caching instructs the DB Interface Module not to fetch result schema information when re-executing a query. This can result in a performance gain if the same RWDBSelector or RWDBOSql objects are used repeatedly.
Result schema caching is managed by the RWDBEnvironmentHandle associated with the RWDBDatabase. To enable result schema caching, we must:
1. Obtain the RWDBDatabase object’s environment handle by using the RWDBDatabase::environmentHandle() method.
2. Use the cacheResultSchemas() method on RWDBEnvironmentHandle to indicate that result set schemas should be cached.
Example 17 – Result schema caching
 
RWDBConnection c = aDb.connection();
RWDBEnvironmentHandle *envH = aDb.environmentHandle(); //1
envH->cacheResultSchemas(true); //2
RWDBSelector s = aDb.selector();
RWDBTable t = aDb.table("INVOICES");
s << t;
 
RWDBResult r = s.execute(c); //3
RWDBResult r2= s.execute(c); //4
In //1, the environment handle is obtained from the RWDBDatabase. In //2, the schema cache is enabled. In //3, the selector is executed; on this execution, the result schema is fetched, because this RWDBSelector was not executed previously. In //4, however, the result schema obtained during //3 is reused. This reuse may result in a performance increase.
NOTE >> Do not use schema caching for databases that might change the schema of their objects while the application is running.
If the schema of a table in use changes while the program runs, schema caching may result in undefined behavior. For this reason, result schema caching is turned off by default.