Institute for Software Research - Carnegie Mellon University


Carnegie Mellon’s Institute for Software Research (ISR) hosts an active research group with a highly interdisciplinary approach to software engineering. Indeed, we believe that interdisciplinary work is inherent to software engineering. The field of software engineering (SE) is built on computer science fundamentals, drawing from areas such as algorithms, programming languages, compilers, and machine learning. At the same time, SE is an engineering discipline: both the practice of SE and SE research problems revolve around technical solutions that successfully resolve conflicting constraints. As such, trade-offs between costs and benefits are an integral part of evaluating the effectiveness of methods and tools.

Emerging problems in the area of privacy, security, and mobility motivate many challenges faced by today’s software engineers, motivating new solutions in SE research. Because software is built by people, SE is also a human discipline, and so research in the field also draws on psychology and other social sciences. Carnegie Mellon faculty bring expertise from all of these disciplines to bear on their research, and we emphasize this interdisciplinary approach in our REU Site.

A set of example software engineering projects is described below, but we anticipate that we will add more projects and that these projects will evolve before the summer begins.

API Usability

Mentors: Brad Myers and Joshua Sunshine

The Natural Programming Group has been working for many years on improving application programming interfaces (APIs) by applying human-computer interaction (HCI) principles and methods. For example, we showed that users were between 2.4 and 11.2 times slower at using APIs when performing a desired function required coordinating multiple classes and the appropriate method was on the wrong class. Furthermore, by using multiple HCI methods to iteratively evaluate and improve APIs and the tools around them, including interactive development environments (IDEs) and documentation, we were able to make significant improvements for programmers. Recently, we have been focusing on the interaction between usability and security for APIs. For example, if APIs are easier to use, one would expect developers to make fewer errors, making the code more secure; in other cases, however, programmers prefer APIs that seem to go against principles designed to improve security.

Students who work on this project will be expected to have a background in HCI, computer science (CS), or software engineering (SE), and they will learn the necessary principles of the other areas in order to perform usability evaluations on existing and new APIs and tools. The students will start with studies of the actual barriers experienced by real developers in the field, performing field observations, interviews, and surveys of various target developers. Then, the students will design mitigations to try to reduce the barriers found, for example by improving the documentation or even adding new features to IDEs. The students will follow up with a summative user study to measure the extent to which the mitigations helped.

Can You Fool a Self-Adaptive Software System?

Mentors: Pooyan Jamshidi and Christian Kästner

Self-adaptive systems are designed to automatically adapt to changes in the environment and continue to function where other systems would break down. Machine learning techniques are often used in their design to learn when and how to react to changes. However, the use of machine learning in self-adaptive systems (e.g., robotic systems) is vulnerable to attacks in which an adversary tries to manipulate or evade the system, either teaching it the wrong knowledge or influencing it to make the wrong adaptations. One defense is an anomaly detector in the system's feedback loop that notices when the system is under attack.

The student will explore defenses against such attacks in the context of a self-adaptive service robot. The student will explore Bayesian optimization to test the robustness of machine learning algorithms against anomaly evasion attacks. The student will learn about, develop, and evaluate attack strategies able to evade a machine learning regressor or classifier. The ideal candidate should be interested in machine learning and have a fair background in math.
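As a toy illustration of the evasion setting (all names and numbers here are invented for the example, and simple random search stands in for the Bayesian optimization the project would actually use), the sketch below fits a trivial anomaly detector on nominal sensor readings and then searches for the largest perturbation that still goes undetected:

```python
import random

# Hypothetical anomaly detector: flags readings that deviate from the
# mean of nominal training data by more than `threshold`.
class MeanDetector:
    def __init__(self, nominal, threshold):
        self.mean = sum(nominal) / len(nominal)
        self.threshold = threshold

    def is_anomalous(self, reading):
        return abs(reading - self.mean) > self.threshold

def evasion_search(detector, true_value, trials=10000, seed=0):
    """Search for the perturbation with the largest effect that still
    evades the detector (random search as a stand-in for Bayesian
    optimization)."""
    rng = random.Random(seed)
    best = 0.0
    for _ in range(trials):
        delta = rng.uniform(-2 * detector.threshold, 2 * detector.threshold)
        if not detector.is_anomalous(true_value + delta) and abs(delta) > abs(best):
            best = delta
    return best

detector = MeanDetector(nominal=[9.8, 10.1, 10.0, 10.1], threshold=0.5)
worst = evasion_search(detector, true_value=10.0)
```

The undetected perturbation converges toward the detection threshold, which is exactly the kind of worst case a student would try to find, and then defend against, in the real robotic system.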

Computational Policy Semantics

Mentor: Travis Breaux

Mobile devices and the Internet of Things use context to deliver rich, personalized services to end users. However, these services often collect sensitive personal information, which can expose end users to privacy threats and vulnerabilities. To help developers plan new and exciting applications that leverage our personal tastes, preferences, and behaviors, the Requirements Engineering Lab at CMU, led by Dr. Travis Breaux, is investigating computational models of natural language policy semantics to design new methods for analyzing how companies collect, use, and share personal information. This research has two thrusts: (1) using natural language processing and crowdsourcing techniques to extract models from corporate privacy policy texts; and (2) using Description Logics to model policy semantics that can be used to trace data flow across company data practices. This research aligns natural language semantics with machine-based reasoning and human comprehension to develop more advanced interfaces for using policy and law to govern machine behavior.

Undergraduates will participate in the extension of crowdsourcing and NLP-based tools to advance requirements extraction. This includes designing and executing surveys of human subjects' interpretations of text and crowd worker annotation tasks, and evaluating the resulting annotation data based on worker consensus, performance metrics, and task reliability. Based on the crowd worker results, students will learn to apply phrase structure grammars and typed dependencies to automate worker annotation, and more advanced students may choose to explore machine learning and feature engineering to predict annotations and extract models. Early versions of these tools have been written in Java and Python, and students will have an opportunity to contribute to them. Students will also participate in the entire research process, from literature reviews to problem and approach formulation to the writing and submission of technical papers.
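As a small illustration of the worker-consensus step (the phrases, labels, and quorum below are invented for the example; the project's real annotation pipelines are far richer), crowd annotations can be reduced to a majority label only when enough workers agree:

```python
from collections import Counter

# Hypothetical crowd annotations: each worker labels whether a policy
# phrase describes a data practice ("collect", "share", or "none").
annotations = {
    "we may share your email with partners": ["share", "share", "collect", "share", "share"],
    "we use cookies to remember you":        ["collect", "collect", "none", "collect", "collect"],
    "contact us with any questions":         ["none", "none", "none", "share", "none"],
}

def consensus(labels, quorum=0.6):
    """Return the majority label if at least `quorum` of workers agree,
    else None (the phrase then needs expert review)."""
    label, votes = Counter(labels).most_common(1)[0]
    return label if votes / len(labels) >= quorum else None

results = {phrase: consensus(labels) for phrase, labels in annotations.items()}
```

Phrases without a clear majority fall through to manual review, which is one simple way to trade annotation cost against reliability.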

Designing Extensible, Domain-specific Languages for Mathematical Diagrams

Mentors: Keenan Crane, Jonathan Aldrich, and Joshua Sunshine
Graduate Mentor: Katherine Ye

Illustrations are crucial for scientific and mathematical understanding, yet many papers and books remain largely textual. It takes tremendous expertise to master existing diagramming tools such as TikZ and Adobe Illustrator, and these tools require manipulating graphical primitives at a low level. However, for the domain of mathematical diagrams, a high-level, declarative language already exists to describe what users wish to illustrate: the language of mathematical notation. Thus, we are building Penrose, a system to automatically generate professional-quality mathematical illustrations from high-level, purely semantic descriptions of mathematical objects. We are first targeting the domains of set theory, group theory, and topology.

Penrose offers the chance to conduct interdisciplinary research in programming languages and graphics. On the programming languages side, we are tackling many interesting problems involving modeling mathematical domains at the language level, designing a diagram styling language, and ensuring that sub-languages are extensible and interoperable. On the graphics side, we are tackling many interesting problems involving smart diagram layout methods, high-quality 2D and 3D rendering, and interactive CAD-inspired techniques for diagram editing. We are aiming to eventually parse and visualize a mathematics textbook automatically. Students should be interested in or have experience with some of the following topics: programming language design, compilers, graphics, pure math, and functional programming. Experience with all of the topics is not required.
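To give the flavor of the idea (this is a hypothetical toy, not the actual Penrose system), declarative subset statements like "A ⊆ B ⊆ C" can be turned into a nested-circle layout with a trivial rule; Penrose replaces such hard-coded rules with a styling language and optimization-based layout:

```python
# Toy mini-version of the idea: take declarative set-theory statements
# and compute a simple nested-circle layout, where subsets shrink
# inside their supersets.
def layout(statements):
    """statements: list of (inner, outer) pairs meaning 'inner is a
    subset of outer'. Returns a radius per set name."""
    radii = {}
    parents = dict(statements)              # inner -> outer
    for name in set(parents) | set(parents.values()):
        depth, node = 0, name
        while node in parents:              # walk up the subset chain
            node = parents[node]
            depth += 1
        radii[name] = 1.0 * (0.6 ** depth)  # deeper subsets get smaller
    return radii

radii = layout([("A", "B"), ("B", "C")])    # A subset of B subset of C
```

The hard part, which Penrose tackles, is making such layout decisions general, extensible, and visually high quality rather than baked into a single rule.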

IPL Editor and Interpreter

Mentors: David Garlan and Bradley Schmerl

Many systems today are composed of multiple physical and software systems. Each of these systems usually has its own model for analysis and interpretation (e.g., a physical model in a robot control system). For a system to work harmoniously and safely, all these models need to be consistent and able to work together. We have developed a language, called the Integration Property Language (IPL), that allows systems engineers to specify and check properties about model integration. This project will develop a parser and editor for this language as part of a larger development environment (e.g., AcmeStudio or OSATE). Once this is done, it will be integrated with a robotics system (e.g., TurtleBot) to generate monitors that run concurrently with the model and allow property satisfaction to be evaluated after several runs.

Requirements: ability to do object-oriented programming, ideally Java.

Preference: theoretical background or practical experience in language engineering technology (grammars, parsers, interpreters). Ideally, experience with the xtext framework (or a comparable modern framework for language engineering).
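To give a feel for the language engineering work involved (the property syntax below is invented for illustration and is not actual IPL; in practice a framework like Xtext generates the parser and editor support from a grammar), a minimal tokenizer and recursive-descent parser for temporal-style properties might look like:

```python
import re

# Hypothetical fragment of a property language: properties such as
# "always(speed < 10)" or "eventually(battery > 20)". Keywords are
# listed before the identifier pattern so they win the alternation.
TOKEN = re.compile(r"\s*(always|eventually|[A-Za-z_]\w*|[<>()=]|\d+)")

def tokenize(src):
    return TOKEN.findall(src)

def parse(tokens):
    """property := ('always' | 'eventually') '(' ident op number ')'"""
    quant, lp, ident, op, num, rp = tokens
    assert quant in ("always", "eventually") and lp == "(" and rp == ")"
    assert op in "<>=" and num.isdigit()
    return {"quantifier": quant, "var": ident, "op": op, "bound": int(num)}

ast = parse(tokenize("always(speed < 10)"))
```

An editor built on a generated parser would add syntax highlighting, validation, and content assist on top of the same grammar.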

Mining Software Repositories for Change Practices

Mentors: Jim Herbsleb, Chris Bogart, and Christian Kästner

Changes to software libraries can have disruptive effects when many software projects depend on them. Users need to rework their projects to incorporate those changes or deliberately use old versions of the library, thus forgoing enhancements and fixes. At the same time, library maintainers can invest extra effort to make their changes less disruptive, such as deprecating rather than removing methods. In our collaboration, we have investigated how developers make or do not make breaking changes and how they decide whether to invest extra effort in mitigating strategies, and we realized that entire groups of developers (ecosystems) share values that seem to drive their practices.

In this project, we want to study how library maintainers in different communities (e.g., PHP, Javascript, Java, Python) use those practices, for example, whether the practice of deprecating APIs is differently used across ecosystems with different values or whether backporting changes is more common in some communities than others. We plan to identify and quantify the practices by automatically analyzing large numbers of open source projects on Github. The student will build tools to automatically mine histories of open source projects on Github across multiple programming languages, to collect and analyze data, and to discuss the results in the larger context of a theory on cost distribution for change management.
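A lightweight mining heuristic along these lines might count how often commit diffs introduce a deprecation marker, as a proxy for the "deprecate rather than remove" practice (the markers and the diff below are invented for illustration; real mining would pull diffs from Github via its API or git itself):

```python
# Markers that commonly signal deprecation in Java and Python code.
MARKERS = ("@Deprecated", "@deprecated", "DeprecationWarning")

def added_deprecations(diff):
    """Count added lines (unified-diff '+' lines, excluding file
    headers) that carry a deprecation marker."""
    return sum(
        1
        for line in diff.splitlines()
        if line.startswith("+") and not line.startswith("+++")
        and any(m in line for m in MARKERS)
    )

diff = """\
+++ b/src/Api.java
+@Deprecated
+public void oldMethod() { newMethod(); }
-public void removedMethod() {}
"""
count = added_deprecations(diff)
```

Aggregating such counts per ecosystem is the kind of quantification the project would then interpret against community values.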

Optimizing APIs for Controlling Model Learning

Mentor: Robert F. Murphy

Our group is heavily involved in the development and maintenance of the open source CellOrganizer project for constructing spatially accurate models of cells directly from microscope images. The current API consists of two main functions (one for creating models from images and one for creating images from models) that are customized through a large and complex options structure. Students will research and implement alternative paradigms for controlling model creation, including adding more specific API functions to reduce the allowable options or creating a GUI to create and validate the options structure.
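The redesign idea can be sketched as wrapping a generic, options-driven entry point with narrower functions that accept only the options they need and validate them up front (all names below are illustrative, not the actual CellOrganizer API):

```python
def build_model(images, options):
    """Generic entry point: everything goes through one options dict."""
    if options.get("model_type") not in {"nuclear", "cell", "protein"}:
        raise ValueError("unknown model_type")
    return {"type": options["model_type"], "n_images": len(images)}

def build_nuclear_model(images, resolution=0.2):
    """Narrow wrapper: fewer knobs, checked before any work happens."""
    if resolution <= 0:
        raise ValueError("resolution must be positive")
    return build_model(images, {"model_type": "nuclear", "resolution": resolution})

model = build_nuclear_model(["img1.tif", "img2.tif"])
```

The trade-off students would evaluate is discoverability and early validation (many specific functions) versus flexibility (one generic function with a rich options structure).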

Program Repair

Mentor: Claire Le Goues

Defective software costs the world economy billions of dollars annually in lost time and productivity and has important implications for safety, security, and the economy. New defects are created, and old ones found, at rates that far outstrip developers' abilities to address them. Dr. Le Goues’ research focuses on understanding, measuring, and automatically improving software quality through program analysis and transformation at scale. This includes techniques to automatically fix bugs in real-world programs, e.g., using search-based approaches like genetic programming to "evolve" fixed programs from buggy initial versions as well as more formal techniques such as leveraging SMT-solver-based semantic code search to find provably correct fixes for a given defective piece of code. Dr. Le Goues also studies ways to measure and assure correctness in light of constant source code evolution, work that includes large-scale studies of test-case-based assurance techniques in patch validation and specification mining from large code bases, informed by lightweight metrics.

Dr. Le Goues’ ongoing research activities admit a number of pathways for future REU students to contribute productively. Several of her planned research activities involve empirical studies of real-world defects and code, such as a survey of which locations a human developer modified to address complex multi-line bugs, or what types of constructs humans changed or inserted in doing so. Depending on interest, students may contribute to ongoing extensions of our search-based approach to the repair of object-oriented code, or work to measure the utility of real-world test suites to evaluating repair correctness. Dr. Le Goues is also interested in developing measures that explain and help developers trust and validate patches to source code, work that may involve collecting metrics (such as complexity, readability, length, among many others) over a historical set of defects or conducting a lightweight survey of open-source developers regarding their activities in reviewing or trusting a given change, all of which can be scoped to a summer project for a motivated undergraduate researcher.
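The generate-and-validate loop at the heart of search-based repair can be shown in miniature (real systems mutate abstract syntax trees of large programs; this invented toy only swaps the comparison operator in a one-line "program" and keeps the first candidate that passes the whole test suite):

```python
import operator

OPS = {"<": operator.lt, "<=": operator.le, ">": operator.gt, ">=": operator.ge}

def make_max(op_name):
    """A one-line 'program' parameterized by its comparison operator."""
    op = OPS[op_name]
    return lambda a, b: a if op(a, b) else b

# Test suite: ((inputs), expected output) for a max() function.
tests = [((3, 5), 5), ((7, 2), 7), ((4, 4), 4)]

def repair(buggy_op):
    """Try every operator; return the first that passes all tests."""
    for candidate in OPS:
        prog = make_max(candidate)
        if all(prog(*args) == want for args, want in tests):
            return candidate
    return None

fixed = repair("<")   # the buggy version returns the *smaller* value
```

The same structure scales up in the real work: a search strategy proposes edits, and the test suite (or, in the formal variants, an SMT solver) decides whether a candidate counts as a fix.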

Secure Programming Languages

Mentors: Jonathan Aldrich and Joshua Sunshine

The Plaid research group applies novel programming language ideas to software engineering challenges, validating those ideas both theoretically and empirically. The group, led by Jonathan Aldrich, is currently designing the Wyvern programming language, which explores new language features for security and adaptability. Wyvern provides a pure object model that explores foundational issues in object-oriented type tests and a capability-based module system that can help to assure a broad range of architectural security properties. It also provides an extensible syntax and type system, enabling convenient and safe command libraries that forestall command injection attacks.

The projects described above include extensive involvement of undergraduates; in fact, every paper cited above includes an undergraduate co-author. Going forward, there are a number of promising areas where an undergraduate can effectively contribute to research in the span of a summer. We are developing a foundational theory of gradual verification using separation logic, and an excellent undergraduate project would be developing a concrete design (in a first summer) and new algorithms (in a second summer) that integrate this theory into the Wyvern language. We have begun a collaboration with robotics researchers here at CMU to use Wyvern's language extensions to explore domain-specific languages for robotics, and prototyping such languages is a good scope for a 10-week summer project. Another project is enhancing Wyvern's module system, e.g. to support test cases as part of a module signature so that if there are multiple implementations of the module, they can be automatically compared for behavioral compliance with the test cases—thus avoiding problems with incompatible replacements or upgrades.
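The module-signature idea can be illustrated outside Wyvern (the Python below is a hypothetical analogue, not Wyvern code): a signature bundles required operations with behavioral tests, so any alternative implementation can be checked for compliance before being swapped in.

```python
# A "signature" pairs probes with expected results; any module claiming
# to implement the signature must satisfy all of them.
SIGNATURE_TESTS = [
    (lambda m: m.add(2, 3), 5),
    (lambda m: m.add(-1, 1), 0),
]

class GoodAdder:
    @staticmethod
    def add(a, b):
        return a + b

class BadAdder:
    @staticmethod
    def add(a, b):
        return a - b   # an incompatible replacement

def complies(module):
    """Check behavioral compliance with the signature's test cases."""
    return all(test(module) == expected for test, expected in SIGNATURE_TESTS)

ok, bad = complies(GoodAdder), complies(BadAdder)
```

Rejecting `BadAdder` before it is linked in is precisely the "incompatible replacement or upgrade" problem the proposed module-system extension targets.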

Social Coding

Mentor: Bogdan Vasilescu

Developing software has always been human-centric, but never more so than with the advent of GitHub, the online collaborative coding platform home to millions of users and projects. The phrase "social coding", made popular by GitHub, has come to represent a paradigm shift in software development, especially in the open source world: software is the collaborative enterprise of large, distributed, and diverse virtual communities. How can we empower distributed teams to develop software effectively and productively? How can technology help software teams do more with less? What effects does team composition have on productivity and code quality? Dr. Vasilescu's research addresses these questions using an interdisciplinary approach that combines theory with principles of data analytics and software analysis on Big Code corpora from GitHub and Stack Overflow.

Dr. Vasilescu's ongoing research activities offer several opportunities for motivated undergraduate researchers to contribute. Potential topics include: How many projects can a software developer work on at the same time, before losing efficiency? How does team diversity impact productivity and code quality? How does DevOps impact development practices in distributed teams? And what can we learn from Big Code archives about how people program, in order to design more effective learning experiences or personalized IDEs? The ideal student has an interest in open-source software and data analytics.

Studying Continuous Integration Practices

Mentors: Bogdan Vasilescu and Christian Kästner

Automated builds and tests have brought significant improvements in productivity and quality to software development. Continuous integration is the practice of automatically building the software and running all tests whenever the software changes, typically with a service like Travis-CI or Jenkins and build and test tools such as make, cmake, ant, maven, sbt, and many others. At the same time, developers tend to copy practices from other projects, and it is not clear which practices are common and actually beneficial. Dr. Vasilescu has a long history of analyzing open source projects on Github at scale, and Dr. Kästner has experience analyzing build systems. Both are interested in learning more about current practices.

Students will analyze open source projects and their history at scale using both data about thousands of projects from Github and from Travis-CI. They will build lightweight analysis tools, collect and analyze data, and explore explanations and theories and best practices. The ideal student has experience with continuous integration or popular build tools.
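A first-cut analysis in this style might measure CI adoption from each project's file listing (the projects and file names below are invented; a real study would walk thousands of Github repositories and join the results with Travis-CI build data):

```python
# Presence of these files is a common signal of CI adoption.
CI_FILES = {".travis.yml": "Travis-CI", "Jenkinsfile": "Jenkins"}

projects = {
    "libfoo":  ["src/", "Makefile", ".travis.yml"],
    "barlib":  ["pom.xml", "Jenkinsfile"],
    "oldproj": ["configure", "Makefile"],
}

def ci_service(files):
    """Return the CI service a project appears to use, if any."""
    for name, service in CI_FILES.items():
        if name in files:
            return service
    return None

adoption = {p: ci_service(files) for p, files in projects.items()}
rate = sum(s is not None for s in adoption.values()) / len(projects)
```

From such raw signals, the interesting research questions begin: which configurations correlate with faster builds, fewer broken releases, or healthier projects.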

Testbed for Research in Cyber-Security

Mentors: David Garlan and Bradley Schmerl

We are engaged in a project that is developing a cybersecurity testbed for evaluating ways to defend against advanced persistent threats – sophisticated security attacks in which an adversary pursues a multi-step sequence of cyber exploits over a long period of time. The testbed will allow security researchers to experiment with a system that mimics a real-world example of a vulnerable enterprise network, and allows them to run various attacks on it. It will also allow those researchers to try out various “self-securing” approaches to defend against the attacks.
There are several components of this project that are suitable for undergraduate research experiences, including: 
  • developing a console to monitor the testbed so that researchers can view its current state and gain insight into how an attack is proceeding 
  • selecting appropriate technology for the testbed, including virtual machines and services misconfigured to allow attacks to take place
  • working with other researchers to define typical attacks
  • implementing various counter-measures.
This project is intended for a student with an interest in computer and network security. Many of the skills for carrying out these tasks can be learned on the project, although some experience in networking, system administration, and programming in scripting languages would help.

Understanding the Effects of Software Variations

Mentor: Christian Kästner

Variation in software is everywhere. Software variation comes in many different forms, such as differences due to program modifications (e.g., patches), program parameters, security options, and features. What all these examples have in common is that they affect the control and data flow of the program. For example, a single-line patch can affect execution throughout an entire test suite and can lead to rather surprising effects in other parts of the program. In general, it is hard or even impossible to reason about the effects of variations manually without specialized tool support. Dr. Kästner and his team have explored new execution and visualization techniques that narrow down the effects of software variations and help with understanding and debugging.

There are several projects that allow student involvement with this new technology. For example, a student could use the technology to explain the effects of software changes, automatically submitting comments to pull requests on Github that visualize and explain the effects of different parts of the pull request on the test suite. Alternatively, a student could get involved in the tooling and extend the interpreters to other languages or use cases, or explore how to better present variation effects on an entire test suite instead of just a single test. All tasks have a significant tool-building component, but give plenty of freedom to explore new ideas and to perform experiments.
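A minimal sketch of the "what did this change affect?" question (invented for illustration; the group's real tooling tracks effects at a much finer grain via variational execution) traces which lines execute under the original and the patched function and diffs the sets:

```python
import sys

def trace_lines(fn, *args):
    """Run fn and record which of its lines execute, as offsets from
    the function's first line so two functions can be compared."""
    first = fn.__code__.co_firstlineno
    executed = set()
    def tracer(frame, event, arg):
        if event == "line" and frame.f_code is fn.__code__:
            executed.add(frame.f_lineno - first)
        return tracer
    sys.settrace(tracer)
    try:
        fn(*args)
    finally:
        sys.settrace(None)
    return executed

def original(x):
    if x > 0:
        return "positive"
    return "non-positive"

def patched(x):
    if x >= 0:            # the one-line patch
        return "positive"
    return "non-positive"

# Under input 0, the patch changes which branch runs:
affected = trace_lines(patched, 0) ^ trace_lines(original, 0)
```

For inputs where both versions take the same branch the diff is empty, which is exactly the kind of localization that makes a change's effects explainable in a pull-request comment.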

Variational Data Structures

Mentor: Christian Kästner

Highly-configurable systems challenge quality assurance approaches: a test succeeding in one configuration may fail in others, possibly triggered by intricate feature interactions. Configuration spaces grow exponentially, often with more configurations than there are atoms in the universe. Dr. Kästner and the research community as a whole have achieved significant progress in reasoning about entire configuration spaces, including variational parsing, type checking, static analysis, model checking, and testing of highly-configurable systems, scaling to systems with thousands of options such as the Linux kernel. The key to this progress is to make variability explicit in the analysis process and compute with variational data. To that end, efficient representations of variational data that maximize sharing and are efficient for computation are essential. Variational data representations and variational structures are a fundamental underpinning of many of Dr. Kästner's research tools, and achieving more efficient representations could have significant consequences in scaling up quality assurance approaches.

Students will explore variational data structures and their tradeoffs. How can variational data be represented efficiently? What are the costs of computing with each representation? In which contexts is which representation more efficient? Students will explore atomic data structures, especially representing variational data with multi-terminal BDDs, evaluate performance in real-world analysis scenarios by swapping the representation in state-of-the-art variational analysis tools (TypeChef, Varex), and analyze tradeoffs among implementations regarding computation overhead and memory. The student will learn about reasoning with SAT solvers and BDDs, about rigorous performance evaluations, and about key concepts of software product lines and variational programming.
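The core idea of computing with variational data can be shown in a few lines (a hypothetical choice-calculus-style sketch, far simpler than the BDD-backed representations the project would study): a value that differs between configurations is a Choice over an option, and computations map over all alternatives at once instead of analyzing each configuration separately.

```python
class Choice:
    """A value that is `yes` when `option` is enabled, else `no`."""
    def __init__(self, option, yes, no):
        self.option, self.yes, self.no = option, yes, no

def vmap(f, v):
    """Apply f to a plain or variational value, preserving variation."""
    if isinstance(v, Choice):
        return Choice(v.option, vmap(f, v.yes), vmap(f, v.no))
    return f(v)

def select(v, config):
    """Resolve all choices under a configuration {option: bool}."""
    if isinstance(v, Choice):
        return select(v.yes if config[v.option] else v.no, config)
    return v

# Buffer size is 64 when LARGE is enabled, else 16; doubling it keeps
# the variation in a single shared computation.
size = Choice("LARGE", 64, 16)
doubled = vmap(lambda x: 2 * x, size)
```

With nested choices over many options, this naive tree representation explodes, which is exactly why more compact, sharing-maximizing representations such as multi-terminal BDDs are worth investigating.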

Usable Privacy Policy

Mentor: Norman Sadeh

Website privacy policies are often long and difficult to understand. Internet users care about their privacy, but they do not have the time to understand the policies of every website they visit. This problem has motivated the work of the Usable Privacy Policy Project, an NSF-funded project to use crowdsourcing, machine learning, and natural language processing to extract salient details from privacy policy text and present them to Internet users.

The REU student will participate in the design and implementation of software to extract data practices from privacy policy text. The student will become part of a large team of collaborators and will learn techniques in natural language processing and machine learning.
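To make the extraction task concrete (this keyword-matching toy is invented for illustration; the project's actual pipeline relies on crowdsourcing, machine learning, and NLP rather than keyword lists), a first baseline might tag policy sentences with the data practices they mention:

```python
# Hypothetical cue words for three kinds of data practices.
PRACTICES = {
    "collect": ["collect", "gather", "obtain"],
    "share":   ["share", "disclose", "sell"],
    "retain":  ["retain", "store", "keep"],
}

def extract_practices(sentence):
    """Return the set of data practices a sentence appears to describe."""
    found = set()
    text = sentence.lower()
    for practice, cues in PRACTICES.items():
        if any(cue in text for cue in cues):
            found.add(practice)
    return found

p = extract_practices("We collect your email address and may share it with partners.")
```

Such baselines are also useful for measuring how much the learned models improve over simple matching.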

The ideal student has an interest in natural language processing and privacy. Experience with machine learning is a plus.