Sound static analysis over-approximates the set of all possible executions of a given program in order to prove the absence of errors in that program. Due to this over-approximation, sound static analysis may generate spurious warnings about executions that are not erroneous or that cannot even occur in the program. To become more practical, many static analyzers give up soundness by design. This means that they do not check certain properties, or that they check them under certain unsound assumptions, such as the absence of arithmetic overflow. At the other end of the soundness spectrum, we have automatic test-case generation, which typically under-approximates the set of possible program executions. The goal of test-case generation is not to prove the absence of errors in the program but, rather, their existence. In this talk, I will present an overview of my research on combining these program analysis techniques to improve their overall automation, performance, and accuracy on real programs.
Around the middle of the 20th century, pioneering theoreticians of brain science and computer science, such as von Neumann, Turing, McCulloch, Pitts, and Barlow, were beginning to leverage their interest in the other field to gain a better understanding of their own. Both fields have since rapidly grown in prestige as sciences, becoming more sophisticated as their bodies of knowledge expanded. However, in doing so, they have also grown further apart. In this talk, I will outline some recent efforts in both research communities to recognize the potential of a unified research effort to answer the question: "What are the algorithms that run our brain?"