Quantifying Program Comprehension with Interaction Data
By Roberto Minelli, Andrea Mocci, Michele Lanza and Takashi Kobayashi
The paper aims to answer an often-underestimated question: does program comprehension occupy a large part of the software development process? The approach is to quantify comprehension time by analyzing data obtained from sessions during which developers interact with the interface of an IDE. Four types of interaction events – Inspecting, Editing, Navigating, and Understanding – are recorded in sessions attended by 15 Java developers and 7 Smalltalk developers. The events are captured by the DFLOW and PLOG plugins for the Pharo and Eclipse IDEs, respectively. An estimation model is then used to quantify the percentage of time spent on each type of interaction event. The results suggest that the percentage of comprehension time for Smalltalk programs ranges from 54% to 88%, while for Java the range is between 56% and 94% – a variation of 2% at the lower limit and 6% at the upper limit. Based on the interaction histories, characteristics of individual developers are deduced as well: one developer may be cautious, spending more time understanding before editing, while others may take a more aggressive approach or exhibit a different interaction pattern. However, it remains uncertain how experience affects comprehension directly. The authors believe that further research, an improved estimation model, and better tools that can capture micro-level activities accurately would help validate their hypotheses. Nonetheless, it is evident that program comprehension occupies more of the software development process than previously believed.
An Exploratory Study of How Developers Seek, Relate, and Collect Relevant Information during Software Maintenance Tasks
By Andrew J. Ko, Brad A. Myers, Michael J. Coblenz, and Htet Htet Aung
The researchers behind this publication aimed to gather qualitative and quantitative evidence that could reveal patterns in how developers seek, relate, and collect information when performing software maintenance tasks. This was done by recording a 70-minute development session for each of 10 Java developers, who were invited to solve five problems in a small paint application. Two of those five problems involved bug-fixing and three involved enhancements. To simulate a real-life work environment, the sessions featured interruptions: every three minutes, developers had to solve simple mathematical problems as a distraction from their main tasks. The sessions were transcribed and analyzed with regard to factors such as time spent on tasks, success rate, and the sequence of actions performed by the developers. The results led the researchers to a new model of program understanding featuring three main tasks – seeking, relating, and collecting – where relating involves a cascading effect on information mining that links one conclusion to another, eventually leading to a solution. The researchers propose a few measures that can help developers take more relevant actions when building understanding, and they also suggest some UI enhancements for Eclipse. Tools such as Hipikat and FEAT can further improve navigation and help find relevant dependencies between program components.
In my opinion, below are three strong points about the research:
1. Thorough human inspection enables the researchers to analyze situations more effectively, given that program understanding is a cognitive process.
2. The attempt to improve the process of seeking relevant information is one of the strongest aspects of this paper, and it yields some great insights grounded in human inspection.
3. The paper will most probably serve as a valuable reference for extending Information Foraging theory into the area of Software Engineering.
With due respect to the authors, here are several weaknesses that may be considered:
1. The assumption that developers are interrupted every three minutes may be an overestimation. Good working environments these days invest considerable effort in maximizing developer productivity, so in my opinion developers are not distracted that frequently.
2. The developers' natural workflow during the sessions was most probably hampered because they were told that all of their activities would be recorded, which may have created some hesitation or uneasiness. Moreover, fining participants for wrong answers seems like an unsuitable mimic of the real-life penalties for mistakes.
3. The absence of a concrete mathematical model behind the experiment is a downside. A 5% error rate may be negligible for a group of 10 developers, but it could introduce significant distortion if the experiment involved, say, 200 developers. Perhaps higher-accuracy automated tools would be better.