PushdownDB: Accelerating a DBMS using S3 Computation

https://arxiv.org/pdf/2002.05837.pdf

  1. What is the problem that is being solved?

  • Study the effectiveness of pushing parts of DBMS analytics queries into the Simple Storage Service (S3) of AWS, using a capability called S3 Select.

    • Primitive operators (filter, projection, aggregation): can always be cost-effectively moved into S3

    • Complex operators (join, top-K, group-by) require reimplementation to take advantage of S3 Select, but are often still good candidates for pushdown.

2. What are the metrics of success?

  • Cost

  • Query latency

3. What are the key innovations over prior work? (motivations)

  • Cloud: disaggregated architecture, separate compute and storage

    • Problem: network can be a major performance bottleneck

    • Intuitive solutions

      • caching: compute server loads data from the remote storage once, caches it in main memory or local storage, and reuses it across multiple queries, thereby amortizing the network transfer cost.

      • computation pushdown: push functionality as close to storage as possible

        • E.g., push computation into specialized processors that are closer to storage

  • AWS: S3 Select -- limited computation can be pushed onto S3

    • Problem

      • Limited computational interface of S3 allows only simple query operators to be pushed down

        • Select, project, aggregation

      • Other operators: require new implementations on top of S3 Select

      • S3 Select pricing can make computation more expensive than performing it on EC2 nodes

  • This paper:

    • Contribution: first extensive study of pushdown computing for database operators in a disaggregated architecture.

4. Specific things about the paper (notes)

Data management in the cloud

  • Computing services: Elastic Compute Cloud (EC2)

    • instances

    • Locally-attached storage

  • Storage services: Simple Storage Service (S3)

    • Highly available

    • Virtually infinite storage capacity with relatively low cost

    • Can be shared across multiple computing instances

    • Cheaper than locally-attached and/or block-based alternatives (Elastic Block Store (EBS))

  • S3 Select

    • 2018

    • Implements operators by scanning the rows in the table and returning qualifying rows to the compute node (a minimal usage sketch follows this list)

    • Not supported: join, group-by, top-K

      • Challenging to redesign these operators to use S3 select

        • E.g., join requires data shuffling among storage nodes
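As a reference for what "pushing a filter and projection into S3" looks like, here is a minimal sketch using boto3's `select_object_content`; the bucket, key, and column names are hypothetical.

```python
import boto3

# Minimal S3 Select sketch: push a projection and a filter predicate into S3.
# Bucket, key, and column names are illustrative, not from the paper.
s3 = boto3.client('s3')
resp = s3.select_object_content(
    Bucket='my-bucket', Key='customer.csv',
    ExpressionType='SQL',
    Expression="SELECT s.c_name, s.c_acctbal FROM S3Object s "
               "WHERE CAST(s.c_acctbal AS FLOAT) > 1000",
    InputSerialization={'CSV': {'FileHeaderInfo': 'USE'}},
    OutputSerialization={'CSV': {}},
)
# Only the qualifying rows (and projected columns) cross the network.
for event in resp['Payload']:
    if 'Records' in event:
        print(event['Records']['Payload'].decode('utf-8'), end='')
```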

  • Computing query cost

    • AWS: cost varies based on the region where the user's data and computation are located

    • Storage cost

      • Charged monthly based on the amount of space used: $0.022/GB/Month

      • Only depends on data size and not on frequency of access

    • Data transfer cost

      • S3 users are charged only for outgoing traffic, and the price is based on the destination of the data

      • S3 Select not used: free (transferring data within the same region) to $0.09/GB (transferring data out of AWS)

    • S3 Select cost

      • Based on the amount of data scanned ($0.002/GB) in processing an S3 Select query and the amount of data returned ($0.0007/GB)

      • Depends on the selectivity of the query

      • Data scan and transfer cost is the major component in overall query cost

    • Network request cost

      • The cost of issuing HTTP requests to S3 is based on the request type and the number of requests

      • Cost paid for both S3 Select requests and conventional table read requests

    • Computation cost

      • EC2 memory-optimized instances

      • Query execution time --> computation cost based on the hourly price of the host EC2 instance (a rough per-query cost sketch follows this list)

        • r4.8xlarge instance costs $2.128 per hour
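A minimal sketch of how these components combine into a per-query cost estimate, using the prices quoted above; per-request pricing depends on the request type, so it is left as a parameter here.

```python
GB = 1024 ** 3

def query_cost(scanned_bytes, returned_bytes, runtime_sec,
               num_requests=0, price_per_request=0.0):
    """Rough per-query cost model; prices are the ones quoted in these notes."""
    scan_cost = scanned_bytes / GB * 0.002       # S3 Select data scanned ($/GB)
    return_cost = returned_bytes / GB * 0.0007   # S3 Select data returned ($/GB)
    request_cost = num_requests * price_per_request  # depends on request type
    compute_cost = runtime_sec / 3600 * 2.128    # r4.8xlarge hourly price
    return scan_cost + return_cost + request_cost + compute_cost

# Example: scan 10 GB, return 0.5 GB, run for 20 seconds.
print(query_cost(10 * GB, 0.5 * GB, 20))
```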

PushdownDB database

  • Row-based DBMS testbed

    • Minimal optimizer and an executor

  • Query plan: a directed acyclic graph of operators; queries execute in either serial or parallel mode

    • Serial: single CPU executes one operator at a time

      • Some queries can benefit, e.g., a projection followed by a filter (avoids inter-process data transfers)

    • Parallel: multiple processes pass batches of tuples from producers to consumers using a queue (a minimal sketch follows this list)

      • Most can achieve better performance
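A minimal sketch of the producer/consumer structure in parallel mode; the operators and data are illustrative, not PushdownDB's actual code.

```python
from multiprocessing import Process, Queue

def scan_producer(out_q):
    # Illustrative scan operator: emits batches of tuples, then a sentinel.
    for batch in ([(1, 'a'), (2, 'b')], [(3, 'c')]):
        out_q.put(batch)
    out_q.put(None)

def filter_consumer(in_q):
    # Illustrative filter operator: keeps tuples whose key is odd.
    while (batch := in_q.get()) is not None:
        print([t for t in batch if t[0] % 2 == 1])

if __name__ == '__main__':
    q = Queue()
    p, c = Process(target=scan_producer, args=(q,)), Process(target=filter_consumer, args=(q,))
    p.start(); c.start()
    p.join(); c.join()
```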

Operators

  • Filter ("where")

    • Indexing with S3 Select

      • Problem: hash indexes and tree-based indexes are not a good fit for a cloud storage environment because a single lookup requires multiple accesses to the index, i.e., multiple S3 requests that incur long network delays.

      • Design: Index table

        • Contains

          • Values of the columns to be indexed

          • Byte offsets of indexed records in that table

        • Accessing a table through an index (two phases)

          • Phase 1: predicate on the indexed columns is sent to the index table using an S3 Select request. Then the byte offsets of selected rows are returned to the compute server.

          • Phase 2: byte offsets are used to directly load the corresponding rows from the data table, by sending an HTTP request for each individual row.

            • Note: Phase 2 does not use S3 Select (a sketch of both phases follows)
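A sketch of the two-phase index access, assuming a hypothetical index table with `first_byte`/`last_byte` columns and illustrative table and column names.

```python
import boto3

s3 = boto3.client('s3')
BUCKET, INDEX_KEY, DATA_KEY = 'my-bucket', 'lineitem_index.csv', 'lineitem.csv'  # hypothetical

# Phase 1: push the predicate to the index table; get byte offsets of matching rows.
resp = s3.select_object_content(
    Bucket=BUCKET, Key=INDEX_KEY, ExpressionType='SQL',
    Expression="SELECT s.first_byte, s.last_byte FROM S3Object s "
               "WHERE CAST(s.l_orderkey AS INT) = 12345",
    InputSerialization={'CSV': {'FileHeaderInfo': 'USE'}},
    OutputSerialization={'CSV': {}},
)
offsets = []
for event in resp['Payload']:
    if 'Records' in event:
        for line in event['Records']['Payload'].decode().splitlines():
            first, last = line.split(',')
            offsets.append((int(first), int(last)))

# Phase 2: fetch each matching row with a ranged GET (no S3 Select involved).
rows = []
for first, last in offsets:
    obj = s3.get_object(Bucket=BUCKET, Key=DATA_KEY, Range=f'bytes={first}-{last}')
    rows.append(obj['Body'].read().decode())
```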

  • Join

    • S3 Select does not support pushing a join operator in its entirety into S3

    • Why it is difficult to push down processing for joins

      • The two tables to be joined are partitioned across multiple S3 objects (so data can be loaded in parallel)

      • If the tables are not partitioned on the join key, data must be shuffled across partitions, which is challenging to support at the storage layer.

    • Hash join

      • Build phase: loads smaller table in parallel and sends each tuple to the appropriate partition to build a hash table

      • Probe phase: loads bigger table in parallel and sends tuples to the correct partition to join matching tuples by probing the hash table

    • Algorithm that PushdownDB supports

      • Baseline Join

        • the server loads both tables from S3 and executes the hash join locally, without using S3 Select

      • Filtered Join

        • pushes down selection and projection using S3 Select, and executes the rest of the query in the same way as baseline join.

      • Bloom Join (*)

        • after the build phase, a Bloom filter is constructed based on the join keys in the first table; the Bloom filter is then sent as an S3 Select request to load a filtered version of the second table

        • Supports only integer join attributes

          • A more general support for hashing in S3 Select API would enable bloom joins on arbitrary attributes

          • ... needs an extension of the S3 Select interface (a hedged sketch of the predicate-encoding idea follows this list)
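A hedged sketch of how a Bloom filter over integer join keys might be encoded into an S3 Select predicate. The single-hash-function filter, the column name, and the exact SUBSTRING/modulo SQL accepted by S3 Select are assumptions; the restriction to integer attributes exists precisely because hashing must be expressed with arithmetic the interface supports.

```python
def build_bloom_bitstring(keys, num_bits):
    # Simplified Bloom filter with one hash function: h(k) = k mod num_bits.
    bits = ['0'] * num_bits
    for k in keys:
        bits[k % num_bits] = '1'
    return ''.join(bits)

def bloom_filter_predicate(bitstring, column):
    # A row survives if the bit at position h(key) is set in the Bloom filter.
    # The SUBSTRING/% expression is an assumption about the S3 Select dialect.
    m = len(bitstring)
    return (f"SELECT * FROM S3Object s WHERE "
            f"SUBSTRING('{bitstring}' FROM CAST(s.{column} AS INT) % {m} + 1 FOR 1) = '1'")

bitstring = build_bloom_bitstring([3, 17, 42], num_bits=64)
expression = bloom_filter_predicate(bitstring, 'o_custkey')  # column name is illustrative
```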

  • Group-by

    • Server-side group-by: performed at the server side after loading all data from S3 directly

    • Filtered group-by: loads S3 data using a predicate, then groups at the server

    • S3-side group-by: pushes the group-by logic entirely into S3, thus minimizing the amount of network traffic

      • First phase: collects the values for the groups in the group-by clause

      • Second phase: requests S3 to perform aggregation for each individual group that the first phase identified.

    • Hybrid group-by

      • In practice: data sets are highly skewed (a few large groups contain the majority of the rows)

        • Can lead to long S3 Select queries in S3-side group-by

      • Propose: distinguish groups based on their size

        • pushes the aggregation on large groups to S3, eliminating the need to transfer large amounts of data

        • small groups: are aggregated by the query execution nodes

      • Phases

        • First: scan only a sample of rows to capture the populous groups (the first 1% of data from the table)

        • Second: two queries are sent to S3. Q1 runs remote aggregation for the large groups; Q2 loads rows belonging to the rest of the groups from S3 (a sketch of the query construction follows this list).
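A hedged sketch of how the two hybrid group-by queries might be constructed from the sample. Column names, the skew threshold, and the CASE-based form of the remote aggregation are assumptions (the limitations section notes that S3-side group-by currently relies on CASE clauses).

```python
from collections import Counter

def hybrid_groupby_queries(sample_groups, group_col, agg_col, threshold=0.01):
    """Split groups seen in the sample into large (pushed to S3) and small (local)."""
    counts = Counter(sample_groups)
    total = sum(counts.values())
    large = [g for g, c in counts.items() if c / total >= threshold]

    # Q1: remote aggregation for each populous group, one CASE expression per group.
    case_exprs = ", ".join(
        f"SUM(CASE WHEN s.{group_col} = '{g}' THEN CAST(s.{agg_col} AS FLOAT) ELSE 0 END)"
        for g in large)
    q1 = f"SELECT {case_exprs} FROM S3Object s"

    # Q2: load raw rows for the remaining (small) groups; aggregate them at the server.
    excluded = ", ".join(f"'{g}'" for g in large)
    q2 = (f"SELECT s.{group_col}, s.{agg_col} FROM S3Object s "
          f"WHERE s.{group_col} NOT IN ({excluded})")
    return q1, q2

q1, q2 = hybrid_groupby_queries(
    sample_groups=['US', 'US', 'US', 'DE', 'FR'],  # groups seen in the 1% sample
    group_col='nation', agg_col='revenue', threshold=0.5)
```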

  • Top-K

    • Sampling-Based Top-K Algorithm

      • First phase: samples the records from the table and decides what subset of records to load in the second phase

      • Second phase: the query execution node loads this subset of records and performs the top-K computation on it (a sketch follows this list)
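A minimal sketch of the sampling-based top-K idea (largest K values of a numeric column). The threshold derivation and the column name are illustrative; the K-th largest sampled value is a safe threshold because at least K rows in the full table are at least that large.

```python
import heapq

def threshold_from_sample(sample_values, k):
    # Phase 1: take the K-th largest value in the sample as a conservative threshold.
    return heapq.nlargest(k, sample_values)[-1]

def topk_scan_query(column, threshold):
    # Phase 2: only rows that can possibly be in the top K are returned from S3.
    return (f"SELECT * FROM S3Object s "
            f"WHERE CAST(s.{column} AS FLOAT) >= {threshold}")  # column name is illustrative

threshold = threshold_from_sample([3.2, 9.1, 7.5, 0.4, 5.0], k=2)
query = topk_scan_query('o_totalprice', threshold)
final_topk = heapq.nlargest(2, [9.1, 7.5, 8.3])  # values loaded in phase 2 (illustrative)
```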

5. What are the key results?

Filter

Take-away

  • Runtime: S3-side indexing has similar performance as S3-side filter when the filter is highly selective, but performance of indexing degrades as the filter selects more than 10^{-4} of the rows.

    • In this case, most of the execution time is spent requesting and receiving individual byte ranges from the data table

      • Though issued in parallel, these requests incur excessive CPU computation that becomes a performance bottleneck

  • Cost:

    • S3-side filter is 24% more expensive than server-side filter

      • S3-side: data scanning and loading

      • Server-side: computation

    • S3-side indexing is cheaper than server-side filter by 2.7x when filter is very selective

      • Reason: index table reduces the amount of data being scanned and transferred

  • In conclusion, S3-side indexing is the best approach with highly selective queries, whereas S3-side filter achieves a good balance between performance and cost for queries with any selectivity.

Join

  1. Customer Selectivity

Take-away

  • Runtime:

    • baseline and filtered joins perform similarly, which is expected since they only apply selection to the smaller customer table and load the entire orders table, which incurs the same large amount of network traffic

    • Bloom join: significantly better than either

      • High selectivity on the first table is encapsulated by the Bloom filter, which significantly reduces the number of returned rows for the larger orders table

2. Orders Selectivity

Take-away:

  • Filtered join performs significantly better than baseline join when the filter on the orders table is selective

    • The performance advantage disappears when the filter is less selective

  • Bloom join: performs better and remains fairly constant as the number of records returned from the orders table remains small due to the Bloom filter

    • Cost: comparable or cheaper than alternatives

  • We can see that the best performance and cost numbers can be achieved when the false positive rate is 0.01. When the false positive rate is low, the Bloom filter is large in size, increasing the computation requirement in S3 Select. When the false positive rate is high, the Bloom filter is less selective, meaning more data will be returned from S3.

Group-by

  • S3-side group-by performs 4.1× better than filtered group-by when there are only a few unique groups. Performance degrades, however, when more groups exist. This is due to the increased computation overhead that is performed by the S3 servers

  • The server-side group-by pays more for compute, but the other two algorithms pay more for scanning and transferring S3 data

Top-K

Take-away:

  • As the sample size increases

    • the execution time of the sampling phase also increases. This is expected because more data needs to be sampled and returned

    • the execution time of the scanning phase decreases. This is because a larger sample leads to a more stringent threshold, and therefore fewer qualified rows in the scanning phase.

  • The amount of data returned from S3 first decreases due to the dropping S3 traffic in the scanning phase, and later increases due to the growing traffic of the sampling phase.

Figure 9:

  • for both algorithms, runtime increases as K increases. This is because a larger K requires a bigger heap and also more computation at the server side. The sampling-based top-K algorithm is consistently faster than the server-side top-K due to the reduction in the amount of data loaded from S3.

  • the sampling-based top-K algorithm is also consistently cheaper than server-side top-K.

Overall

  • Baseline: PushdownDB implementation that does not use S3 Select features. The server loads the entire table from S3 and performs computation locally.

  • Optimized: optimizations discussed in the paper.

  • On average, the optimized PushdownDB outperforms the baseline PushdownDB by 6.7× and reduces the cost by 30%.

Experiment with Parquet

  • Parquet substantially outperforms CSV in the 10 and 20 column cases, where the query requests a small fraction of columns.

    • our query scans only a single column of Parquet data but has to scan the entire CSV file — Parquet outperforms CSV due to less IO overhead on S3 Select servers.

  • the performance advantage of Parquet over CSV is more prominent when the filter is more selective — when more data passes through, data transfer becomes the bottleneck, so CSV and Parquet achieve similar performance.

6. What are some of the limitations and how might this work be improved?

  • Limitations and suggestions to improve S3 Select

    • Suggestion 1: multiple byte ranges for GET requests

      • Now: a GET request to S3 supports only a single byte range, so a large number of GET requests must be sent if many records are selected by a query.

      • Want: allowing a single GET request to contain multiple byte ranges would significantly reduce the cost of HTTP request processing in both the server and S3

    • Suggestion 2: Index inside S3

      • Building the index structure entirely inside S3 avoids network messages between S3 and the server caused by accesses to the index data structure during an index lookup

    • Suggestion 3: More efficient Bloom filters

      • Current S3 Select does not support bit-wise operators

      • Supporting efficient bit-wise operators could improve the efficiency of Bloom join

    • Suggestion 4: Partial group-by

      • Now: a CASE clause is used to implement S3-side group-by, which is not efficient

      • Suggest adding partial group-by queries to S3 to resolve this performance issue

    • Suggestion 5: Computation-aware pricing

      • Now: the S3 Select pricing model charges a fixed data scanning cost regardless of what computation is being performed

      • Suggest that the data scan cost should depend on the workload

7. How might this work have long term impact?

  • presents PushdownDB, a data analytics engine that accelerates common database operators by performing computation in S3 via S3 Select. PushdownDB reduces both runtime and cost for a wide range of operators, including filter, project, join, group-by, and top-K.

  • First extensive study of pushdown computing for database operators in a disaggregated architecture.

  • Processing-in-Memory (PIM) exploits computation near or inside DRAM devices to reduce data transfer between CPU and main memory

  • The development of FPGAs and SSDs in recent years has made near storage computing more practical

    • Studies: push computation to both near-storage FPGAs and the processor within an SSD device

      • But these focus on simple operators like filter or projection and do not study the effect of more complex operators

  • Hybrid shipping techniques execute some query operators at the client side, where the query is invoked, and some at the server side, where data is stored

    • But they do not consider how to push down only some of the steps involved in the implementation of a single operator
