Coo: Rethink Data Anomalies In Databases
Transaction processing technology has three key components: data anomalies, isolation levels, and concurrency control algorithms. Concurrency control algorithms eliminate some or all data anomalies at different isolation levels to ensure data consistency. Isolation levels in the current ANSI standard are defined by disallowing certain kinds of data anomalies. Yet the definitions of data anomalies in the ANSI standard are controversial. On one hand, the definitions lack a mathematical formalization and admit ambiguous interpretations. On the other hand, the definitions are made in a case-by-case manner, so that even a senior DBA cannot have complete knowledge of data anomalies, for want of a full understanding of their nature. While revised definitions in the existing literature propose various mathematical formalizations that address the first problem, how to address the second remains open. In this paper, we present a general framework called Coo with the capability to systematically define data anomalies. Under this framework, we show that the data anomalies reported so far are only a small fraction of all possible ones. We theoretically prove that Coo is complete as a mathematical formalization of data anomalies, and we employ a novel method to classify the infinitely many data anomalies. In addition, we use this framework to define new isolation levels and to quantitatively describe the concurrency and rollback rate of mainstream concurrency control algorithms. This work shows that the C and I of ACID can be quantitatively analyzed on the basis of all data anomalies.
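To make the notion of a data anomaly concrete, the following is a minimal sketch (not taken from the paper) of one classic anomaly, the lost update: two transactions read the same item, each writes a value derived from its stale read, and one transaction's update is silently overwritten. The variable names and schedule are illustrative assumptions, not the paper's formalism.

```python
# Hypothetical interleaving of two transactions T1 and T2 on a shared
# data item x, demonstrating the "lost update" data anomaly.

def run_schedule():
    db = {"x": 100}  # shared database state (illustrative)

    # Both transactions read x before either one writes it back.
    t1_read = db["x"]        # T1: read x = 100
    t2_read = db["x"]        # T2: read x = 100 (becomes stale)
    db["x"] = t1_read + 10   # T1: write x = 110
    db["x"] = t2_read + 20   # T2: write x = 120, overwriting T1's write

    # Any serial execution (T1 then T2, or T2 then T1) would yield 130;
    # this interleaving loses T1's update.
    return db["x"]

print(run_schedule())  # → 120
```

A concurrency control algorithm at a sufficient isolation level would reject or reorder this schedule so that the final value matches some serial execution.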