The growth of data-driven applications has led to increased interest among social scientists in developing critical forms of data literacy to help evaluate the knowledge produced with and through algorithms. Within this endeavor, in this presentation I focus on critical forms of literacy for data visualizations. Contemporary social science work on data visualizations tends to focus extensively on data journalism, which remains but one part of data analytics’ visual discourse. In my own research on learning environments, I found that visuals also play a key role in how algorithms are demonstrated to and applied by would-be data analysts. In this paper, I build on social science work on vision (e.g., Goodwin) and scientific representational practices (e.g., Lynch) to show how learning data analysis also requires learning forms of ‘visual thinking’, i.e., thinking with and through visuals. An instance of this can be seen, for example, in the use of graphs and matrices to generate order and organization, enabling students to see data in forms amenable to human perception and action. I use two sets of empirics for this argument: participant-observation of (a) two semester-long graduate-level data analytic courses, and (b) a series of three data analytic workshops organized at a major U.S. East Coast university. My aim in this presentation is to show how the vocabulary of visual thinking enables us to unpack data visuals not just as representations, but also as sociomaterial artifacts constituting the very practice of data analysis, well beyond the immediate contexts of the classroom.
Focusing on data analytic pedagogy, in this presentation I show how students learn to make sense of algorithmic output in relation to underlying data, code, and prior knowledge. This presentation conceptualizes data analytics as a situated process: one that necessitates iterative decisions to adapt prior knowledge, code, contingent data, and algorithmic output to each other. Learning to master such forms of iteration, adaptation, and discretion is then an integral part of becoming a data analyst. While data analysis is often understood as the work of mechanized tools, I focus on the discretionary human work required to organize and interpret the world algorithmically, explicitly drawing out the relation and contrast between human and machine understandings of numbers, especially as this relationship is enacted through class exercises, examples, and demonstrations. In a learning environment, there is an explicit focus on demonstrating established methods, tools, and theories to students. Focusing on data analytic pedagogy thus helps us not only better understand foundational data analytic practices, but also explore how and why certain standardized forms of data sensemaking come to be. To make my argument, I draw on two sets of empirics: participant-observation of (a) two semester-long senior/graduate-level data analytic courses, and (b) a series of three data analytic training workshops taught and organized at a major U.S. East Coast university. Conceptually, this paper draws on STS research on the social studies of algorithms, the sociology of scientific knowledge, the sociology of numbers, and professional vision.
In this talk I focus on an empirical case study of software development to highlight the negotiated, temporal, and situated character of the various processes involved in software testing. What is and isn’t software testing? When and where is software testing? What is the relation between testing, use, non-use, and the user? What is the distinction between software testing and software repair/maintenance? These are some of the questions that I will touch upon in this talk. Theoretically, this talk is situated at the intersection of Information Science (IS) and Science & Technology Studies (STS). Within the sociology of testing, scholars have examined technology testing through concepts such as user configurations (Steve Woolgar), scripts (Madeleine Akrich), programs and anti-programs (Bruno Latour), similarity relationships (Donald MacKenzie), and the role of the user (Trevor Pinch). In this talk I will show how the particular case of software testing can help us think through some of these concepts in interesting and different ways.
This presentation will highlight privacy issues raised by increasing access to social networks made possible by various mobile applications. I will focus on the unintended consequences of the ability of third-party apps to interact not only with the online databases and services of social networks but also with a user’s personal data within the mobile device itself. Although such apps can be regulated on standardized app stores provided by Google or Apple, the ease of working with social and mobile platforms makes it increasingly difficult to manage and govern the intentions of the large number of mobile apps that are developed each day. Social networks and mobile devices have now become ubiquitous tools that individuals use to manage their everyday lives, and mobile app development has become a substantial market in itself. In such a scenario, it is imperative to examine the implications of the ability of third-party applications to facilitate the large-scale convergence of user information in ways that are quite novel and non-traditional. At a time when ‘privacy as contextual integrity’ and ‘privacy by design’ feature prominently on the societal agenda, this presentation will provide insights into questions such as: what does contextual integrity translate to for the increasingly ubiquitous mobile medium, and what must we know before we start designing privacy into mobile apps and social platforms?