SPLASH 2022
Mon 5 - Sat 10 December 2022 Auckland, New Zealand
Thu 8 Dec 2022, 15:30 - 16:00, at Lecture Theatre 2 (Compile session). Chair(s): Stefan Marr

Many applications, from social network graph analytics to control flow analysis, compute on sparse data that evolves over the course of program execution. Such data can be represented as dynamic sparse tensors and efficiently stored in formats (data layouts) that utilize pointer-based data structures like block linked lists, binary search trees, B-trees, and C-trees, among others. These specialized formats support fast in-place modification and are thus better suited than traditional, array-based data structures like CSR for storing dynamic sparse tensors. However, different dynamic sparse tensor formats have distinct benefits and drawbacks, and performing different computations on tensors that are stored in different formats can require vastly dissimilar code that is not straightforward to implement correctly and optimize.
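
To make the contrast concrete, here is a minimal C++ sketch, not taken from the paper, of one row of a dynamic sparse matrix stored as a block linked list: inserting a new nonzero touches only one small block (possibly splitting it), whereas an array-based layout like CSR would have to shift every subsequent entry in the row. All names (Block, DynamicSparseRow, kCapacity) are hypothetical.

```cpp
#include <iostream>

// Hypothetical illustration (not from the paper): one row of a dynamic sparse
// matrix stored as a block linked list. Each block holds a small sorted array
// of (column, value) pairs, so an insertion shifts entries within one block
// at most, rather than within the whole row as a CSR layout would require.
struct Block {
    static constexpr int kCapacity = 4;
    int     cols[kCapacity];
    double  vals[kCapacity];
    int     size = 0;
    Block*  next = nullptr;
};

struct DynamicSparseRow {
    Block* head = nullptr;

    // Insert (or overwrite) a nonzero in place.
    void insert(int col, double val) {
        if (!head) head = new Block();
        Block* b = head;
        // Walk to the block whose coordinate range should contain `col`.
        while (b->next && b->next->size > 0 && b->next->cols[0] <= col)
            b = b->next;
        // Find the insertion point within the block.
        int i = 0;
        while (i < b->size && b->cols[i] < col) ++i;
        if (i < b->size && b->cols[i] == col) { b->vals[i] = val; return; }
        if (b->size == Block::kCapacity) {           // split a full block
            Block* nb = new Block();
            int half = Block::kCapacity / 2;
            for (int j = half; j < Block::kCapacity; ++j) {
                nb->cols[nb->size] = b->cols[j];
                nb->vals[nb->size++] = b->vals[j];
            }
            b->size = half;
            nb->next = b->next;
            b->next = nb;
            insert(col, val);                        // retry after the split
            return;
        }
        for (int j = b->size; j > i; --j) {          // shift within one block only
            b->cols[j] = b->cols[j - 1];
            b->vals[j] = b->vals[j - 1];
        }
        b->cols[i] = col;
        b->vals[i] = val;
        ++b->size;
    }
};

int main() {
    DynamicSparseRow row;
    for (int c : {7, 2, 9, 4, 1, 8}) row.insert(c, 1.0);
    // Prints the nonzeros in coordinate order: (1, 1) (2, 1) (4, 1) ...
    for (Block* b = row.head; b; b = b->next)
        for (int i = 0; i < b->size; ++i)
            std::cout << "(" << b->cols[i] << ", " << b->vals[i] << ") ";
    std::cout << "\n";
    return 0;
}
```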

This paper shows how a compiler can generate efficient code to compute tensor algebra operations on dynamic sparse tensors that may be stored in a wide range of disparate formats. We propose a language for precisely specifying recursive, pointer-based data structures, and we show how this language can express many different dynamic data structures, including all the ones named above as well as many more. We then describe how, given high-level specifications of such dynamic data structures, a compiler can emit code to efficiently access and compute on dynamic sparse tensors that are stored in the aforementioned data structures.
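
As an illustration of what such generated access code must do, the following is a hand-written C++ sketch, again hypothetical and not the paper's specification language or compiler output: a sparse vector stored in a binary search tree keyed on coordinate, and a dot product with a dense vector that visits its nonzeros in coordinate order. Writing and optimizing such traversals by hand for every combination of format and operation is the burden the proposed compiler is meant to remove.

```cpp
#include <iostream>
#include <vector>

// Hypothetical sketch: a sparse vector stored in a binary search tree whose
// keys are the coordinates of the nonzeros.
struct Node {
    int    idx;            // coordinate of the nonzero
    double val;            // value of the nonzero
    Node*  left  = nullptr;
    Node*  right = nullptr;
    Node(int i, double v) : idx(i), val(v) {}
};

// Insert a nonzero in place; O(height) pointer updates, no array shifting.
Node* insert(Node* root, int idx, double val) {
    if (!root) return new Node(idx, val);
    if (idx < root->idx)      root->left  = insert(root->left, idx, val);
    else if (idx > root->idx) root->right = insert(root->right, idx, val);
    else                      root->val   = val;   // overwrite existing entry
    return root;
}

// In-order traversal visits nonzeros in coordinate order, which is what lets
// a kernel co-iterate this operand with other (static or dynamic) operands.
double dot(const Node* root, const std::vector<double>& dense) {
    if (!root) return 0.0;
    return dot(root->left, dense)
         + root->val * dense[root->idx]
         + dot(root->right, dense);
}

int main() {
    Node* sparse = nullptr;
    sparse = insert(sparse, 5, 2.0);
    sparse = insert(sparse, 1, 3.0);
    sparse = insert(sparse, 8, 0.5);

    std::vector<double> dense(10, 1.0);
    std::cout << dot(sparse, dense) << "\n";   // 2.0 + 3.0 + 0.5 = 5.5
    return 0;
}
```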

We evaluate our technique and find that it generates efficient dynamic sparse tensor algebra kernels whose performance is comparable to, if not better than, that of state-of-the-art libraries and frameworks such as PAM, Aspen, STINGER, and Terrace. At the same time, our technique supports a wider range of tensor algebra operations than Aspen, STINGER, and Terrace, such as operations that simultaneously compute with static and dynamic sparse tensors, while also achieving significantly better performance than PAM for those same operations.
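
The following hypothetical C++ sketch, not generated by the paper's compiler, illustrates the kind of mixed static/dynamic operation referred to above: an element-wise product that co-iterates a dynamic sparse vector, stored here in a pointer-based ordered map, with a static sparse vector stored in compressed (CSR-style) index and value arrays.

```cpp
#include <iostream>
#include <map>
#include <vector>

// Hypothetical sketch of a mixed static/dynamic kernel: element-wise
// multiplication of a dynamic sparse vector (a pointer-based ordered map,
// i.e. a red-black tree) with a static sparse vector (compressed index and
// value arrays). The operands are co-iterated in coordinate order, and a
// result entry is produced only where both have a nonzero.
int main() {
    // Dynamic operand: supports fast in-place insertion of new nonzeros.
    std::map<int, double> dynamicVec = {{1, 2.0}, {4, 3.0}, {7, 1.5}};

    // Static operand: compressed index and value arrays, as in CSR.
    std::vector<int>    idx  = {0, 4, 7, 9};
    std::vector<double> vals = {5.0, 2.0, 4.0, 1.0};

    auto   it = dynamicVec.begin();
    size_t j  = 0;
    while (it != dynamicVec.end() && j < idx.size()) {
        if (it->first == idx[j]) {                   // coordinates match
            std::cout << "(" << it->first << ", "
                      << it->second * vals[j] << ")\n";
            ++it; ++j;
        } else if (it->first < idx[j]) {
            ++it;                                    // advance dynamic side
        } else {
            ++j;                                     // advance static side
        }
    }
    return 0;
}
```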

Thu 8 Dec

Displayed time zone: Auckland, Wellington

15:30 - 17:00
Compile (OOPSLA) at Lecture Theatre 2
Chair(s): Stefan Marr (University of Kent)
15:30 (30m) Talk: Compilation of Dynamic Sparse Tensor Algebra (OOPSLA)
Stephen Chou (Massachusetts Institute of Technology), Saman Amarasinghe (Massachusetts Institute of Technology)
16:00 (30m) Talk: Incremental Type-Checking for Free: Using Scope Graphs to Derive Incremental Type-Checkers (OOPSLA)
Aron Zwaan (Delft University of Technology), Hendrik van Antwerpen (Delft University of Technology), Eelco Visser (Delft University of Technology)
16:30 (30m) Talk: UniRec: A Unimodular-Like Framework for Nested Recursions and Loops (OOPSLA)
Kirshanthan Sundararajah (Purdue University), Charitha Saumya (Purdue University), Milind Kulkarni (Purdue University)