Garaph: Efficient GPU-accelerated Graph Processing on a Single Machine with Balanced Replication

https://www.usenix.org/system/files/conference/atc17/atc17-ma.pdf


Presentation

  • Large-scale graph processing

    • 10^10 pages, 10^12 tokens: PageRank

    • 10^9 nodes, 10^12 edges: social network analysis

  • Powerful storage & computation technologies

  • Goal:

    • Large memory + fast secondary storages

    • CPU + GPUs

      • CPU: sequential

      • GPU: SIMD mode processing

    • How to efficiently integrate heterogeneity under a unified abstraction

  • Non-distributed platform

  • Most time-consuming: gather phase

Paper

Abstract

  • Garaph: GPU-accelerated graph processing system on a single machine with secondary storage as memory extension

  • Contributions

    • Vertex replication degree customization scheme

      • maximize GPU utilization given vertices' degrees and space constraints

    • Balanced edge-based partition and a hybrid of notify-pull and pull computation models

      • ensure work balance over CPU threads

      • optimized for fast graph processing on CPU

    • Dynamic workload assignment schemes

      • Takes into account the characteristics of the processing elements and graph algorithms

Intro

  • Distributed graph systems: need fast network and effective partitioning to minimize communication

  • Alternative: non-distributed

    • Benefit: users need not be skilled at managing & tuning

    • Problem: pressure on memory and computing power. But this is affordable:

      • RAM is large

      • Advances in secondary storage: access bandwidth close to that of memory

      • GPU: massive parallelism to offer high-performance graph processing

  • Setting: GPU-accelerated, secondary-storage based graph processing

  • Challenge:

    • highly skewed degree distribution of natural graphs

      • A small fraction of vertices is adjacent to a large fraction of edges --> heavy write contention among GPU threads due to atomic updates of the same vertices

      • Work imbalance of CPU threads

    • heterogeneous parallelism of CPU & GPU

      • CPU: sequential processing

      • GPU: bulk parallel processing

  • Propose: Garaph

    • GPU: vertex replication degree customization

    • CPU: balanced edge-based partition

    • Heterogeneity of computation units

      • Pull computation model: matches the SIMD processing model of GPU

      • Hybrid of notify-pull and pull computation model: optimizes for fast sequential processing on CPU

    • Dynamic workload assignment

System overview

2.1 Graph Representation

For organizing incoming and outgoing edges:

  • Compressed Sparse Column (CSC)

  • Compressed Sparse Row (CSR)

  • Shard:

    • Split the vertex set V into disjoint subsets; each subset is represented by a shard that stores all incoming edges whose destinations are in that subset

    • Edges in a shard are listed in increasing order of destination vertex index

    • Allows each shard to fit into shared memory for high bandwidth

    • Maximum offset is 12K, so a 16-bit integer suffices to represent destination vertex indexes within a shard

  • Shards are transferred from host memory to GPU memory in batches

    • Each batch is called a page (see the layout sketch after this list)

  • Leverages the multi-stream feature of GPUs to overlap memory copy and kernel execution

  • Two vertex-centric computation models

    • Pull model

      • Every vertex updates its state by pulling the new states of neighboring vertices through incoming edges

    • Notify-pull

      • Only active vertices notify their outgoing neighbors to update, which in turn perform local computation by pulling the states of their incoming neighbors

      • More effective when few vertices are active
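
A minimal sketch of the shard/page layout described above (field names and types are my assumptions, not Garaph's actual structures):

```cpp
// Illustrative shard/page layout (names and types are assumptions, not
// Garaph's actual structures). A shard stores, CSC-style, all incoming
// edges whose destinations fall in one vertex interval, sorted by
// destination; a page batches shards for one host-to-GPU transfer.
#include <cstdint>
#include <vector>

struct Shard {
    uint32_t first_dst;               // first destination vertex of the interval
    std::vector<uint16_t> dst_off;    // per-edge destination offset (< 12K fits in 16 bits)
    std::vector<uint32_t> src;        // per-edge global source vertex id
    std::vector<float>    edge_data;  // per-edge data, e.g. a weight
};

struct Page {
    std::vector<Shard> shards;        // transferred to GPU memory in one batch
};
```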

System Architecture

  • Dispatcher

    • Loads the graph from secondary storages and distributes the computation over CPU and GPU

    • Partitions each graph page into equal-size data blocks, which are uniformly distributed over multiple secondary storages with a hash function

    • Steps

      • Load data blocks from secondary storage to host memory

      • Construct pages

      • After a page is constructed, dispatch it to either the CPU or the GPU

  • GPU/CPU computation kernel

    • GPU

      • Processes the shards of a page in parallel

      • Only the pull model is enabled on the GPU side

        • Notify-pull can lead to a high frequency of non-coalesced memory accesses because of poor locality, and to warp divergence caused by distinguishing active/inactive vertices

    • CPU

      • Enables both pull and notify-pull

      • Each thread processes one edge set (the edges of a page are divided into sets of equal size)

    • Once either kernel has processed a page, there is a synchronization between the GPU and the CPU

    • Execution can be done both synchronously and asynchronously

      • Iter: one complete pass over all the pages

  • Fault Tolerance

    • Write vertex data to secondary storages periodically

Programming API

  • Modified Gather-Apply-Scatter (GAS) abstraction used in PowerGraph

    • The scatter function is replaced by an activate function, which marks a vertex active if it satisfies the activation condition

  • Atomic and non-atomic variants of the user-provided sum function, for GPU and CPU respectively (interface sketched below)
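
A hedged sketch of this modified GAS interface (signatures are assumptions; the paper's actual API may differ):

```cpp
// Modified GAS abstraction: scatter is replaced by activate (a sketch,
// not Garaph's actual API). V = vertex value type, A = accumulator type.
template <typename V, typename A>
struct GasProgram {
    // Gather: partial accumulated value contributed by one incoming edge.
    virtual A gather(const V& src_value, float edge_data) const = 0;
    // Sum: combines two partial values; executed with atomics on the GPU
    // and non-atomically on the CPU.
    virtual A sum(const A& a, const A& b) const = 0;
    // Apply: new vertex value from the fully accumulated value.
    virtual V apply(const V& old_value, const A& acc) const = 0;
    // Activate: replaces scatter; marks the vertex active when its change
    // satisfies the activation condition.
    virtual bool activate(const V& old_value, const V& new_value) const = 0;
    virtual ~GasProgram() = default;
};
```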

GPU-Based Graph Processing

  • Global Memory

    • Up to 24 GB in size

      • Vertex values are usually 4 bytes, so global memory can store up to 6B (or 12B) vertices

    • GlobalVertices: allows quick access to values of vertices

    • Each shard in a page is processed by one GPU block in three phases: initialization, gather, and apply

      • Initialization:

        • A LocalVertices array stores the accumulated value of each vertex in the shard

        • Consecutive threads of a block initialize this array with user-defined default vertex values

      • Gather

        • Threads of one GPU block process the edges of an individual shard. For each edge, one thread fetches vertex & edge data from global memory and increases the accumulated value (see the kernel sketch at the end of this section)

        • To have coalesced global memory accesses: consecutive threads of the block read consecutive edges' data in global memory

      • Apply

        • Each thread of the block updates vertex values in shared memory

        • Async: commits new vertex data to the GlobalVertices array, where it is immediately visible to subsequent computation

        • Sync: values are written to a temporary array in global memory and become visible in the next iteration

    • When a page has been processed, new vertex values are synchronized between GPU global memory and host memory

      • Async: transmits the updated values of GlobalVertices in GPU global memory to the array storing the latest vertex values in host memory

      • Sync: values stored in the temporary space of GPU global memory are transmitted to a temporary array in host memory and committed after the iteration ends

      • Can be overlapped with processing
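
A simplified sketch of the three phases, one block per shard as described above, assuming float vertex values, sum-by-addition, and fixed-size shards (array names are assumptions):

```cpp
#include <cstdint>

// One GPU block processes one shard in three phases: initialization,
// gather, apply. Simplified sketch; array names are assumptions.
__global__ void process_shards(const uint32_t* shard_begin,     // edge ranges (n_shards + 1)
                               const uint32_t* shard_first_dst, // first dest vertex per shard
                               const uint32_t* src,             // per-edge source ids
                               const uint16_t* dst_off,         // per-edge dest offsets
                               const float*    edge_data,
                               const float*    global_vertices, // GlobalVertices
                               float*          out_vertices,    // commit target
                               uint32_t        verts_per_shard) {
    extern __shared__ float local_vertices[];  // LocalVertices, in shared memory
    const uint32_t s = blockIdx.x;

    // Initialization: consecutive threads fill the shard's accumulators.
    for (uint32_t v = threadIdx.x; v < verts_per_shard; v += blockDim.x)
        local_vertices[v] = 0.0f;  // user-defined default value
    __syncthreads();

    // Gather: consecutive threads read consecutive edges (coalesced);
    // atomics on shared memory can conflict on high-degree vertices.
    for (uint32_t e = shard_begin[s] + threadIdx.x; e < shard_begin[s + 1];
         e += blockDim.x)
        atomicAdd(&local_vertices[dst_off[e]],
                  global_vertices[src[e]] * edge_data[e]);
    __syncthreads();

    // Apply: each thread commits updated values (GlobalVertices if async,
    // a temporary array if sync).
    for (uint32_t v = threadIdx.x; v < verts_per_shard; v += blockDim.x)
        out_vertices[shard_first_dst[s] + v] = local_vertices[v];
}
```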

Replication-Based Gather

  • Problem: the gather phase suffers from write contention (multiple threads simultaneously modifying the same shared memory address) --> position conflicts

    • Frequent for natural graphs (power-law degree distribution)

  • Strategy: replication

    • Place R adjoining copies of the partial accumulated value in shared memory to spread these accesses over more shared memory addresses

    • These R partial accumulated values are then aggregated to calculate the final accumulated value a_u for a vertex u

      • Two-way merge (sketched after this section)

    • R: replication factor

  • Replication factor customization

    • Too large? GPU underutilization, since fewer vertices fit in shared memory

    • Maximizes the expected performance under given conflict degree and space constraints
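
A sketch of how the replicated accumulators could work; the indexing and merge details are assumptions (the paper customizes R from vertex degrees and space constraints):

```cpp
#include <cstdint>

// Replication-based gather sketch: each vertex keeps R adjoining copies of
// its partial accumulated value in shared memory, spreading atomic updates
// across addresses. Details are assumptions, not Garaph's exact layout.
__device__ void gather_replicated(float* local_vertices,  // verts_per_shard * R floats
                                  uint32_t dst_off, float contrib, uint32_t R) {
    // Lane i of a warp uses copy (i % R), so warp-mates rarely collide.
    uint32_t copy = (threadIdx.x & 31u) % R;
    atomicAdd(&local_vertices[dst_off * R + copy], contrib);
}

// Two-way merge of the R partial values into the final accumulated value
// a_u (done by one thread per vertex here; R assumed a power of two).
__device__ float merge_replicas(float* local_vertices, uint32_t v, uint32_t R) {
    for (uint32_t stride = R / 2; stride > 0; stride /= 2)
        for (uint32_t i = 0; i < stride; ++i)
            local_vertices[v * R + i] += local_vertices[v * R + i + stride];
    return local_vertices[v * R];
}
```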

CPU-Based Graph Processing

  • Main points

    • How it works

    • Balanced edge-based partition to exploit parallelism

    • Dual-mode processing model: switches between pull/notify-pull modes according to the density of active vertices in the page

  • Existing approach: assign each thread a subset of vertices

    • Computation imbalance

    • Random memory access of edge data if adjacent vertices are assigned to different threads

  • Edge-centric partition

    • Enhances sequential access of edge data and improves work balance over threads

      • Why mention this for the CPU side?

        • Why is it not an issue on the GPU side?

          • CUDA blocks execute threads in warps of 32

          • When a block is done, another block is pushed

  • CPU engine

    • GlobalVertices in host memory for quick access to values of vertices

    • LocalVertices stores the accumulated values of destination vertices in the corresponding partition

    • Each page: initialization, gather, apply

      • If a page is processed on the GPU side, the system also synchronizes new vertex values between GPU memory and host memory

    • Processing is done when

      • The graph state converges (i.e. no active vertices)

        • Active vertices: vertices with significant state change (indicated with a bitmap)

      • Or a given number of iterations completes

  • Each page

    • Initialization

      • The edges of the page are divided into partitions, and each thread processes one partition

      • The number of replicas is at most n_t - 1 (n_t: number of CPU threads)

      • Initialize LocalVertices with vertices' default values

    • Gather

      • Each partition --> one thread

      • Edges are processed in a sequential order

      • For each edge, the CPU thread performs gather and updates the accumulated value in LocalVertices with the sum function

      • After all threads finish, an aggregation phase merges the values of vertices replicated at the partition boundaries (see the sketch after this list)

    • Apply

      • After the gather phase of each page finishes, every thread updates vertices' values in the LocalVertices array

      • For each vertex in the partition, the corresponding thread calls Activate() to check whether the vertex is active and updates the bitmap

    • Sync

      • After the GPU has processed a page, it sends the corresponding vertex values to the host memory

      • Then the system calls Activate() on these updated vertices

      • Async:

        • Makes the updates received from the GPU immediately visible by writing them into the GlobalVertices array in host memory

        • Then sends the new vertices updated on the CPU side back to the GPU, overwriting the corresponding part of the GlobalVertices array in GPU global memory

      • Sync

        • Stores updated values in a temporary array and commits these new values at the end of each iteration

        • The CPU transmits the new GlobalVertices array to the one in GPU memory at the end of each iteration
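
A sketch of the balanced edge-based gather described above, assuming edges sorted by destination and per-thread replicas for boundary vertices (helper names are hypothetical):

```cpp
#include <algorithm>
#include <cstdint>
#include <thread>
#include <vector>

// Balanced edge-based partition on the CPU (sketch): the page's edges are
// split into equal-size ranges, one per thread. A vertex straddling a
// partition boundary gets a per-thread replica so writes stay non-atomic;
// replicas are merged in a final aggregation phase.
void cpu_gather_page(const std::vector<uint32_t>& dst,      // per-edge destination (sorted)
                     const std::vector<float>&    contrib,  // per-edge gather result
                     std::vector<float>&          local_vertices,
                     unsigned                     n_threads) {
    const size_t n_edges = dst.size();
    const size_t chunk = (n_edges + n_threads - 1) / n_threads;
    std::vector<float> boundary_acc(n_threads, 0.0f);  // at most n_t - 1 replicas used

    std::vector<std::thread> workers;
    for (unsigned t = 0; t < n_threads; ++t)
        workers.emplace_back([&, t] {
            const size_t lo = std::min(n_edges, t * chunk);
            const size_t hi = std::min(n_edges, lo + chunk);
            for (size_t e = lo; e < hi; ++e) {
                if (t > 0 && dst[e] == dst[lo])
                    boundary_acc[t] += contrib[e];         // replica for boundary vertex
                else
                    local_vertices[dst[e]] += contrib[e];  // owned exclusively: non-atomic
            }
        });
    for (auto& w : workers) w.join();

    // Aggregation phase: fold the boundary replicas back in.
    for (unsigned t = 1; t < n_threads; ++t) {
        const size_t lo = std::min(n_edges, t * chunk);
        if (lo < n_edges) local_vertices[dst[lo]] += boundary_acc[t];
    }
}
```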

Dual-Mode Processing Engine

  • Pull mode

    • More beneficial when most vertices are activated (dense active vertex set), which avoids the extra cost of tracking modifications

  • Notify-pull mode: a vertex needs to be updated only when one of its source vertices was active in the previous iteration

    • More efficient when few vertices were active in the last iteration (sparse active vertex set)

  • At a given time during graph processing, the active vertex set may be dense or sparse

    • E.g. it starts sparse, becomes denser as more vertices are activated, and turns sparse again as the algorithm approaches convergence

  • Problem when combining the two modes: only part of the graph may fit in host memory

    • The system incurs I/O costs from sequential and random accesses of outgoing/incoming edges on secondary storage for the pull and notify-pull modes respectively, so the formula also accounts for the ratio between sequential-read and random-read speeds of secondary storage (see the sketch below)
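
An illustrative sketch of the mode switch: pull scans edges sequentially while notify-pull makes random accesses proportional to the active fraction f, so the break-even density scales with the random/sequential read-speed ratio (the threshold here is illustrative, not the paper's exact formula):

```cpp
#include <cstddef>

enum class Mode { Pull, NotifyPull };

// Illustrative dual-mode switch. Pull scans every edge sequentially
// (~|E| / seq_speed); notify-pull touches only active vertices' edges but
// randomly (~f * |E| / rand_speed). Break-even density f is therefore
// about rand_speed / seq_speed.
Mode choose_mode(std::size_t active, std::size_t total,
                 double seq_read_speed, double rand_read_speed) {
    const double f = static_cast<double>(active) / static_cast<double>(total);
    const double threshold = rand_read_speed / seq_read_speed;
    return f > threshold ? Mode::Pull : Mode::NotifyPull;
}
```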

Dispatcher

  • Adaptive scheduling mechanism to exploit the overlap of two engines

CPU-GPU Scheduling

  • If T(CPU) < T(GPU), the system adopts the CPU kernel only, due to a sparse active vertex set. Otherwise, Garaph adopts both GPU and CPU kernels to reduce the overall processing time

  • At the beginning of each iteration, the scheduler calculates the ratio α of T(CPU) to T(GPU)

  • T(pull) / T(GPU) is initialized with the speed ratio of the CPU and GPU hardware, and is updated once both kernels have begun to process pages

  • α < 1: only the CPU kernel is used for graph processing in this iteration, as most vertices are inactive (e.g. very small f)

    • f = |V_A| / |V|, the fraction of active vertices

  • Otherwise: process on both the CPU and GPU kernels

    • Reactively assigns a page to a kernel once the kernel becomes free (sketched below)
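
A sketch of this dispatch loop (the callbacks are hypothetical stand-ins, not Garaph's API):

```cpp
#include <functional>
#include <queue>

struct Page;  // opaque page handle

// Adaptive CPU-GPU dispatch (sketch). alpha = T(CPU) / T(GPU) is seeded
// from the hardware speed ratio and refined once both kernels have run.
void dispatch_iteration(std::queue<Page*>& pages, double alpha,
                        const std::function<void(Page*)>& run_on_cpu,
                        const std::function<void(Page*)>& run_on_gpu_async,
                        const std::function<bool()>& gpu_is_free) {
    while (!pages.empty()) {
        Page* p = pages.front();
        pages.pop();
        if (alpha < 1.0)
            run_on_cpu(p);        // sparse active set: CPU kernel only
        else if (gpu_is_free())
            run_on_gpu_async(p);  // reactive: a free kernel takes the next page
        else
            run_on_cpu(p);
    }
}
```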

GPU Multi-Stream Scheduling

  • To trigger graph processing on the GPU side, two threads run on the host

    • Transmission thread: continuously transmits each page from the host memory to GPU’s global memory

    • Computation thread: launches a new GPU kernel to process the page that has already been transmitted.

  • NVIDIA's Hyper-Q feature

    • Pipelines CPU-GPU memory copies and kernel execution

    • so that page-processing tasks can be dispatched onto multiple streams and handled concurrently (sketched below)
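
A sketch of the copy/compute pipeline with CUDA streams; process_page and the buffer parameters are stand-ins, and the host buffers must be pinned for the async copies to overlap:

```cpp
#include <cstddef>
#include <cuda_runtime.h>

__global__ void process_page(const char* page);  // per-page kernel, defined elsewhere

// Pipeline page transfers and kernels over multiple streams (Hyper-Q):
// page p's copy and kernel share stream p % kStreams, so its kernel waits
// only for its own copy while other streams' work runs concurrently.
void run_pages(char** d_pages, char** h_pages,  // h_pages: pinned host memory
               int n_pages, std::size_t page_bytes,
               int blocks, int threads, std::size_t smem_bytes) {
    const int kStreams = 4;
    cudaStream_t streams[kStreams];
    for (int i = 0; i < kStreams; ++i) cudaStreamCreate(&streams[i]);

    for (int p = 0; p < n_pages; ++p) {
        cudaStream_t s = streams[p % kStreams];
        // Transmission thread's role: async copy into GPU global memory.
        cudaMemcpyAsync(d_pages[p], h_pages[p], page_bytes,
                        cudaMemcpyHostToDevice, s);
        // Computation thread's role: launch the kernel on the same stream.
        process_page<<<blocks, threads, smem_bytes, s>>>(d_pages[p]);
    }
    cudaDeviceSynchronize();
    for (int i = 0; i < kStreams; ++i) cudaStreamDestroy(streams[i]);
}
```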

Evaluation

Group Discussion

  • Warp: a unit of 32 threads on most GPUs

    • Warp divergence: when threads of a warp take different branches, they diverge and the branches execute serially

  • Vectorized instruction: e.g. increment all elements in an array by one

    • If the computation branches on element values (> 5 vs. < 5), you don't get the full 32x speedup (see the kernel below)
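
A tiny kernel illustrating the divergence point (illustrative only):

```cpp
// Threads of one warp that take different branches are serialized:
// path A runs (with the other lanes masked), then path B.
__global__ void branchy(const float* in, float* out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    if (in[i] > 5.0f)
        out[i] = in[i] * 2.0f;   // path A
    else
        out[i] = in[i] + 1.0f;   // path B, serialized after path A
}
```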