Paper Title
Efficient sorting, duplicate removal, grouping, and aggregation
Paper Authors
Paper Abstract
Database query processing requires algorithms for duplicate removal, grouping, and aggregation. Three algorithms exist: in-stream aggregation is by far the most efficient, but it requires sorted input; sort-based aggregation relies on external merge sort; and hash aggregation relies on an in-memory hash table plus hash partitioning to temporary storage. Cost-based query optimization chooses which algorithm to use based on several factors, including the sort order of the input, input and output sizes, and the need for sorted output. For example, hash-based aggregation is ideal when the output is smaller than the available memory (e.g., TPC-H Query 1), whereas sorting the entire input and aggregating after the sort is preferable when both the aggregation input and output are large and the output needs to be sorted for a subsequent operation such as a merge join. Unfortunately, the size information required for a sound choice is often inaccurate or unavailable during query optimization, leading to sub-optimal algorithm choices. In response, this paper introduces a new algorithm for sort-based duplicate removal, grouping, and aggregation. The new algorithm always performs at least as well as both traditional hash-based and traditional sort-based algorithms. It can serve as a system's only aggregation algorithm for unsorted inputs, thus preventing erroneous algorithm choices. Furthermore, the new algorithm produces sorted output that can speed up subsequent operations. Google's F1 Query uses the new algorithm in production workloads that aggregate petabytes of data every day.
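
To make the contrast in the abstract concrete, below is a minimal Python sketch of two of the baseline algorithms: in-stream aggregation, which consumes input already sorted on the grouping key, and hash aggregation, which handles unsorted input with an in-memory hash table. This is an illustrative toy (a per-key SUM), not the paper's new algorithm; the function names are hypothetical, and a real hash aggregation would additionally spill to temporary storage via hash partitioning when memory is exhausted.

```python
from itertools import groupby
from collections import defaultdict

def in_stream_aggregate(sorted_rows, key, value):
    # In-stream aggregation: valid only when the input is already sorted on
    # the grouping key. Each group finishes as soon as the key changes, so no
    # hash table is needed and output is emitted in sorted order.
    for k, group in groupby(sorted_rows, key=key):
        yield k, sum(value(r) for r in group)

def hash_aggregate(rows, key, value):
    # Hash aggregation: accepts unsorted input and accumulates per-group state
    # in an in-memory hash table; output order is arbitrary.
    totals = defaultdict(int)
    for r in rows:
        totals[key(r)] += value(r)
    return totals.items()

rows = [("b", 2), ("a", 1), ("b", 3), ("a", 4)]
print(list(in_stream_aggregate(sorted(rows), key=lambda r: r[0], value=lambda r: r[1])))
print(sorted(hash_aggregate(rows, key=lambda r: r[0], value=lambda r: r[1])))
# Both print [('a', 5), ('b', 5)]
```

Note that the in-stream variant produces output in key order as a side effect, which is the property the abstract highlights for sort-based aggregation: sorted output can speed up subsequent operations such as a merge join.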