SPLASH 2022
Mon 5 - Sat 10 December 2022, Auckland, New Zealand

The goals of the workshop are:

  • To discover sources of unsoundness in different verification tools
  • To share experiences and exploits on how different verification tools can either be broken or expose confusing behavior
  • To broaden researchers' attention to topics that have so far escaped their focus; e.g., from type correctness alone to also avoiding stack overflows
  • To challenge assumptions that are uncritically accepted as valid stepping stones in verification reasoning
  • To connect researchers from different areas of verification
  • To engage with and encourage the next generation of researchers in verification.

The Workshop Homepage is available at: https://unsound-workshop.org

Program

Tue 6 Dec

Displayed time zone: Auckland, Wellington

09:00 - 10:00  Welcome and Invited Talk (Unsound at Seminar Room G125)

  09:00  15m  Day opening: Welcome to Unsound
              Marco Servetto (Victoria University of Wellington), Jan Bessai (Independent)
  09:15  45m  Talk: What do we mean by "unsound"? (Virtual)
              Jan Bessai (Independent)

10:00 - 10:30  Coffee break (Catering and Social Events)

10:30 - 12:00  Invited Talks (Unsound at Seminar Room G125)

  10:30  45m  Talk: How to trust a verified program? (Virtual)
              Wouter Swierstra (Utrecht University, Netherlands)
  11:15  45m  Talk: MetaCoq as a tool to prevent future unsoundness in Coq (Virtual)

13:30 - 15:00  Workshop Talks and Discussion (Unsound at Seminar Room G125)

  13:30  20m  Talk: Proving False in Object Oriented Verification Programs by Exploiting Non-Termination
  13:50  45m  Talk: The 4 horsemen of unsoundness in OO languages
              Marco Servetto (Victoria University of Wellington)
  14:35  25m  Panel: Discussion
              Marco Servetto (Victoria University of Wellington)

15:00 - 15:30  Coffee break (Catering and Social Events)

Call for Papers

Software and proof verification have grown significantly over the last 15 years, to the point where verification systems are so complex that manually proving their soundness can exceed what a single research group can understand and verify as correct.

Even formally defining soundness can be challenging, and its definition varies from system to system. Research groups may focus on very specific notions of soundness, but those can diverge from what users expect, especially when the users come from a different verification environment or are approaching verification for the first time.

Participants in Unsound will be able to share their experiences and exploits on how different verification tools can be broken or made to expose confusing behavior that users are unlikely to expect. We think this would be greatly beneficial, not only because it will help all of us iron out those unsoundnesses, but also because it will facilitate understanding of the foundational differences between the assumptions of the various research lines.

The current academic environment encourages us to talk about the success cases of our work. In this workshop we want to address and learn from failure cases, and we want to reinforce the bedrock of our understanding. In practice, when we focus on one specific aspect of verification we may (understandably) be less precise elsewhere. For example, a line of research focusing on aliasing control in OO may be less precise when considering the implications for other areas, such as termination. We believe that learning from the issues of many verification projects can broaden researchers' attention to topics that have so far escaped their focus; e.g., from type correctness alone to also avoiding stack overflows, as the sketch below illustrates.
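As a minimal illustration of that last point (a sketch, not tied to any particular verification tool; the names are ours), the following Haskell program is accepted by the type checker, yet running it eventually aborts with a stack overflow: type correctness alone guarantees neither termination nor bounded stack usage.

  -- Type-correct and accepted by GHC, but the call below never reaches
  -- the base case: it recurses forever and exhausts the stack.
  sumTo :: Int -> Int
  sumTo 0 = 0
  sumTo n = n + sumTo (n - 1)

  main :: IO ()
  main = print (sumTo (-1))  -- well-typed, non-terminating call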

We believe that this environment would be particularly beneficial for young researchers who are in search of open questions in verification. It may provide motivation to dive deep into the details of a particular tool, or to expand their individual area of expertise and gain a wider, more objective, and more critical view of the whole area of verification.

We also wonder whether, in our fast expansion, we have accidentally glossed over some fundamental issue in verification, and whether such a mistake has by now become enshrined in established wisdom, sometimes uncritically accepted as a valid stepping stone in reasoning.

We are particularly interested in sources of unsoundness that are accidentally shared by many unrelated research lines, and in developing an understanding of why this is the case.

This is the first instance of the workshop, and it is meant to be welcoming both to people with strong theoretical skills and to people who just like hacking things. We do not expect fully polished submissions and we will not have formal proceedings. Students are especially welcome to attend.

Examples of possible contributions include:

  • Definition of sound and unsound and how they can diverge between tools.
  • Divergences between user assumptions and actual definitions of soundness.
  • Common sources of unsoundness and why they emerge.
  • Bugs and unsoundnesses in the process of extracting a concrete program from a verified environment, e.g., from Coq to Haskell.
  • Logic errors in the specification of a verification tool, e.g., universe inconsistencies.
  • Bugs in the implementation of proof checkers.
  • Overconfident generalizations of sound subsystems to larger settings, e.g., imperative techniques in OO settings.
  • Disproving soundness statements in published papers about verification.
  • Finding statements proven in published literature that should no longer be trusted because they relied on a broken verification system.
  • Simply proving False in a verification tool; in particular, we are interested in practical ways of tricking available tools into accepting wrong statements (see the sketch after this list).
  • Breaking reasoning about programs with types by breaking the type system of the programming language in new and interesting ways.
  • Bad interactions between axiomatic choices in libraries used in proofs.
  • Impacts of the false sense of security when the chain of trust is broken by subtle unsoundness in verification tools.
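
To give a flavor of some of these items: in a language with unrestricted general recursion, every type is inhabited by a non-terminating term, so a naive Curry-Howard reading of types as propositions is unsound. The following Haskell sketch (the names False and proofOfFalse are ours, chosen for illustration) is accepted by GHC and "proves" the empty proposition:

  -- An empty data type: read as the proposition False under Curry-Howard.
  data False

  -- GHC accepts this "proof": non-termination inhabits every type,
  -- which is exactly why proof assistants must check termination.
  proofOfFalse :: False
  proofOfFalse = proofOfFalse

Proof assistants rule out such "proofs" with termination and productivity checks; several of the exploits we are interested in arise precisely where such checks have gaps.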

Contributions:

Extended Deadline: 2022-09-16 (23:59 AoE)

Submissions are limited to 3 pages of text. Additional material (bibliography, related work, and code examples) does not count toward this limit. We strongly encourage authors to include instructions for reproducing results or exploits.

There will be a friendly peer review process, focusing on checking that the submitted material is appropriate for the workshop.

Publication

Informal proceedings will be made publicly available on the workshop web page. However, presentation at Unsound does not count as prior publication, and papers can later be published at a conference of the authors' choosing.

Instructions to Authors

Submission

Authors should be aware of ACM’s policies on plagiarism (https://www.acm.org/publications/policies/plagiarism).

Program Committee members are allowed to submit papers.

Papers must be submitted online at: https://unsound2022.hotcrp.com/

Formatting:

Submitted papers should be in portable document format (PDF), formatted using the ACM SIGPLAN style guidelines. Authors should use the acmart format, with the acmsmall sub-format for ACM proceedings. For details, see:

http://www.sigplan.org/Resources/Author/#acmart-format

It is recommended to use the review option when submitting a paper; this option enables line numbers for easy reference in reviews.
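
For example (a sketch of a typical preamble; consult the SIGPLAN page above for authoritative details), a submission could start with:

  \documentclass[acmsmall,review]{acmart}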

Questions? Use the Unsound contact form.