Much of the policy and legal debate about algorithmic decision-making has focused on issues of accuracy and bias. Equally important, however, is the question of whether algorithmic decisions are understandable by human observers: whether the relationship between algorithmic inputs and outputs can be explained. Explanation has long been deemed a crucial aspect of accountability, particularly in legal contexts. By requiring that powerful actors explain the bases of their decisions -- the logic goes -- we reduce the risks of error, abuse, and arbitrariness, thus producing more socially desirable decisions. Decision-making processes employing machine learning algorithms complicate this equation. Such approaches promise to refine and improve the accuracy and efficiency of decision-making, but the logic and rationale behind each decision often remain opaque to human understanding. Indeed, at a technical level, it is not clear that all algorithms can be made explainable, and at a normative level, it is an open question when, and whether, the costs of making algorithms explainable outweigh the benefits. This presentation will begin to map out some of the issues that must be addressed in determining in what contexts, and under what constraints, machine learning approaches to governmental decision-making are appropriate.
Social relations are the foundation of human social life. Developing techniques to analyze such relations in visual data, such as photos, bears great potential for building machines that better understand people at a social level. Social domain-based theory from social psychology is a promising starting point for systematically approaching social relation recognition. The theory provides coverage of all aspects of social relations and is equally concrete and predictive about the visual attributes and behaviors that define the relations in each social domain. We propose the first photo dataset built on this holistic conceptualization of social life, composed of a hierarchical label space of social domains and social relations, and contribute the first models to recognize such domains and relations, finding superior performance for attribute-based features. Beyond the encouraging performance, we report interpretable features that accord with predictions from the social psychology literature. By interleaving visual recognition and social psychology theory, this work has the potential to complement theoretical work in the area with empirical, data-driven models of social life.
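The hierarchical label space described above can be pictured as a two-level mapping in which each social domain groups several fine-grained relations. The sketch below illustrates the idea only; the domain and relation names are illustrative placeholders, not the dataset's actual label set.

```python
# Minimal sketch of a two-level (domain -> relation) label space.
# Domain and relation names are assumed for illustration, not taken
# from the actual dataset.
DOMAINS = {
    "attachment": ["parent-child", "grandparent-grandchild"],
    "reciprocity": ["friends", "acquaintances"],
    "hierarchical power": ["teacher-student", "leader-subordinate"],
}

def domain_of(relation):
    """Map a fine-grained relation label back to its social domain."""
    for domain, relations in DOMAINS.items():
        if relation in relations:
            return domain
    return None

print(domain_of("friends"))  # -> reciprocity
```

A classifier trained on the fine-grained relation labels can then be evaluated at the coarser domain level "for free" by projecting each predicted relation through `domain_of`.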
Two-thirds of all American adults access the news through social media. But social networks and social media recommendations lead to information bubbles, and personalization and recommendation systems that maximize click-through rates lead to ideological polarization. Consequently, rumors, false news, conspiracy theories, and now even fake news sites are an increasingly worrisome phenomenon. While media organizations (Snopes.com, PolitiFact, FactCheck.org, et al.) have stepped up their efforts to verify news, political scientists tell us that fact-checking efforts may be ineffective or even counterproductive. To address some of these challenges, researchers at Indiana University are working on an open platform for the automatic tracking of both online fake news and fact-checking on social media. The goal of the platform, named Hoaxy, is to reconstruct the diffusion networks induced by hoaxes and their corrections as they are shared online and spread from person to person.
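The core of reconstructing a diffusion network is turning a stream of share events into a directed graph per article, with an edge from the account a share was seen from to the account that reshared it. The sketch below is a minimal illustration under an assumed event format; it is not Hoaxy's actual data model or API.

```python
from collections import defaultdict

# Hypothetical share events: who shared which URL, and from whom they
# saw it. The field names and example URLs are assumptions for
# illustration, not Hoaxy's real schema.
shares = [
    {"url": "hoax.example/a", "by": "alice", "seen_from": "bob"},
    {"url": "hoax.example/a", "by": "carol", "seen_from": "alice"},
    {"url": "fact.example/a", "by": "dave", "seen_from": "carol"},
]

def diffusion_networks(events):
    """Build one edge-weighted diffusion graph per article URL.

    Returns {url: {(source_user, resharing_user): count}}.
    """
    graphs = defaultdict(lambda: defaultdict(int))
    for e in events:
        graphs[e["url"]][(e["seen_from"], e["by"])] += 1
    return {url: dict(edges) for url, edges in graphs.items()}

nets = diffusion_networks(shares)
print(nets["hoax.example/a"])
```

Keeping hoax URLs and fact-checking URLs in the same structure makes it straightforward to overlay the two diffusion networks and compare how a claim and its correction spread.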