Mobile Application Energy Labeling
In recent years, mobile devices and the expansion of their functionality by installing additional applications have become very popular. Unfortunately, extensive usage leads to high power consumption: frequent usage drains the battery much faster and drastically limits the uptime of mobile devices and their applications. Thus, we develop an approach to profile the power consumption of mobile applications and to compare the power requirements of applications providing similar services. We envision a repository or marketplace that allows users to compare and select applications based on energy labels.
Today, the extensive usage of mobile devices and their limited energy budgets make power consumption a major concern. Often, devices consume so much energy that they run out of battery within hours or a day. Investigating whether the power consumption of mobile devices can be decreased by developing applications more intelligently or in a more resource-saving way is therefore a major research challenge for today’s software developers. To increase the uptime of mobile devices, users should be able to base their decision on which application to install on an expectation of the application’s power consumption at runtime. To provide this information, we intend to profile the power consumption of applications providing similar services and to approximate their long-running power consumption based on this information. This approximated power behavior could then be used to provide alternative grading systems for applications, e.g., an energy labeling system similar to existing approaches such as the European Union energy label for electric devices.
Energy Labeling Process
To provide users with information on mobile applications’ power consumption as an additional criterion besides an app’s popularity within a community, we propose a process that allows for the investigation and approximation of the power consumption of mobile applications. The overall process consists of five steps and is shown in Figure 1.
First, during (1) service modeling, general use cases for a specific domain of applications are specified (e.g., use cases for email clients such as checking for new mails, reading a mail, or writing a mail). Based on these services, abstract test cases are defined. They describe paths through the service model (e.g., that an email account has to be selected before an email can be opened) as well as test data for benchmarking (e.g., the name and login of an email account). Together, the service model and the abstract test cases can be considered an abstract benchmark description, which specifies how to test an application of a certain domain (e.g., email clients), but not which buttons, text fields, etc. have to be used to run these tests for a certain email application.
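To illustrate, a service model can be thought of as a small graph of activities with allowed transitions, and an abstract test case as a path through that graph plus domain-level test data. The following is a minimal sketch; all activity names and data fields are hypothetical, not part of the published approach:

```python
# Hypothetical service model for the email-client domain:
# activities are nodes, allowed transitions are edges.
SERVICE_MODEL = {
    "select_account": ["browse_inbox"],
    "browse_inbox": ["read_mail", "write_mail"],
    "read_mail": ["browse_inbox"],
    "write_mail": ["browse_inbox"],
}

# An abstract test case: a path through the model plus benchmarking data.
ABSTRACT_TEST_CASE = {
    "path": ["select_account", "browse_inbox", "read_mail"],
    "test_data": {"account": "test@example.org"},
}

def is_valid_path(model, path):
    """Check that a test case only uses transitions allowed by the model."""
    return all(b in model.get(a, []) for a, b in zip(path, path[1:]))
```

Note that the model says nothing about concrete UI elements; that mapping is left to the binding step.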
During the (2) test case binding, the abstract test cases are bound to sequences of application-specific user interface (UI) interactions (e.g., the activity browsing the inbox is bound to a click on a button “inbox”). The result is a set of concretized test cases for the application under test (AUT) that can be executed afterwards. We are currently working on an automated approach for deriving app-specific test cases from the abstract test cases and an app-specific test model. The app-specific test model specifies how the abstract activities are mapped to concrete UI interactions; afterwards, the concrete test cases can be generated automatically.
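Conceptually, the binding step expands each abstract activity into the UI events of one concrete app. A minimal sketch, with invented widget identifiers standing in for a real app-specific test model:

```python
# Hypothetical app-specific test model for one email client:
# it maps abstract activities to concrete UI interaction sequences.
APP_TEST_MODEL = {
    "select_account": [("tap", "spinner_accounts"), ("tap", "account_0")],
    "browse_inbox": [("tap", "button_inbox")],
    "read_mail": [("tap", "list_mails"), ("tap", "mail_0")],
}

def bind(abstract_path, test_model):
    """Expand an abstract activity path into a concrete UI event sequence."""
    events = []
    for activity in abstract_path:
        events.extend(test_model[activity])
    return events

concrete = bind(["select_account", "browse_inbox", "read_mail"], APP_TEST_MODEL)
```

Binding the same abstract test case against different test models yields comparable concrete test cases for different apps of the same domain.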
Once concrete test cases are available, they are executed during the (3) profiling activity, resulting in an energy model that describes the energy behavior of the AUT in the context of the specified use cases. The energy model can be considered metadata defined relative to the original service model (e.g., activities within the service model such as reading an email or sending an email are annotated with power rates expressing the average power rate of the AUT when performing these activities).
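In its simplest form, such an energy model could be built by averaging per-activity power samples gathered over repeated test case runs. A sketch with invented measurements (the actual profiling infrastructure and units may differ):

```python
def build_energy_model(samples):
    """Average per-activity power samples (here: mW) into an energy model."""
    return {activity: sum(values) / len(values)
            for activity, values in samples.items()}

# Invented measurements from two repeated runs of each test case:
samples = {
    "browse_inbox": [300.0, 320.0],
    "read_mail": [270.0, 290.0],
}
model = build_energy_model(samples)
# model annotates each service model activity with an average power rate
```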
Although the energy model is sufficient to express the AUT’s energy behavior, workloads are required to predict the AUT’s power consumption at runtime. As we intend to compare applications based on their round-the-clock energy behavior, predicting the power consumption of individual activities is insufficient. Instead, we need information on how often and for how long users interested in the application intend to perform certain activities. Thus, during (4) usage modeling, the usage behavior of a certain user is described. The usage model enriches the service model with further metadata: activities are annotated with average durations (e.g., the average time to browse mails in the inbox) and transitions are annotated with statistical information (e.g., how often the write email activity is started). Currently, we consider two different possibilities to construct a usage model. First, users can estimate their usage behavior by creating the model manually, guided by a set of questions that help to configure the usage model while searching for a specific type of application (e.g., “How often do you check your inbox per day?”). The second possibility is based on usage profiling: the user installs an application on his phone that monitors his usage behavior and uses this data to construct the usage model automatically. However, this scenario requires that users already have applications installed on their devices that provide services similar to the application they are looking for, which might not always be the case.
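A usage model could then annotate each activity with an average duration and a daily frequency, e.g., as answers to questions such as “How often do you check your inbox per day?”. A minimal sketch with invented numbers:

```python
# Hypothetical usage model: activity -> (avg. duration in s, executions/day).
# All figures are invented for illustration.
USAGE_MODEL = {
    "browse_inbox": (60.0, 10),
    "read_mail": (30.0, 8),
    "write_mail": (120.0, 3),
}

def daily_active_seconds(usage_model):
    """Total time per day the user spends in each modeled activity."""
    return {a: duration * freq for a, (duration, freq) in usage_model.items()}
```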
After modeling an application domain’s use cases as well as profiling its power consumption on a specific mobile device and modeling a user’s usage behavior, the gained models and metadata can be combined to (5) estimate the power consumption of applications. By exchanging or altering the usage data, the approximation can easily be adapted to other users and usage scenarios. Moreover, the energy model can be exchanged as well when approximations for other types of mobile devices or for other applications providing similar services are required. This way, the approach allows for easy adaptation to other user, software, and hardware contexts.
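The estimation step then reduces to summing power rate times time over all activities. The following sketch combines an invented energy model (average power rate per activity, in mW) with an invented usage model (average duration in seconds and daily frequency per activity); exchanging either dictionary adapts the estimate to another user, device, or application:

```python
# Invented per-activity power rates (mW) and usage data (duration s, runs/day).
ENERGY_MODEL = {"browse_inbox": 310.0, "read_mail": 280.0, "write_mail": 420.0}
USAGE_MODEL = {"browse_inbox": (60.0, 10), "read_mail": (30.0, 8),
               "write_mail": (120.0, 3)}

def estimate_daily_energy_mj(energy_model, usage_model):
    """Estimated energy per day in millijoules: sum of power rate * time."""
    return sum(energy_model[a] * duration * freq
               for a, (duration, freq) in usage_model.items())

energy_mj = estimate_daily_energy_mj(ENERGY_MODEL, USAGE_MODEL)
```

Such per-day estimates, computed on the same usage model for several apps of one domain, would be the basis for assigning comparable energy labels.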
We have implemented a first version of a profiling infrastructure that allows the profiling of test cases’ power consumption when executed on Android devices. In a first case study comparing two different email clients, we have shown that different apps can indeed cause different power consumption for similar provided services. We are currently working on an implementation of the complete labeling process as outlined above.
A video summarizes our work on comparing the power consumption of mobile applications:
Below is a list of publications related to this topic:
- Claas Wilke, Sebastian Richly, Georg Püschel, Christian Piechnick, Sebastian Götz and Uwe Aßmann.
Energy Labels for Mobile Applications
To appear in: Proceedings of the First Workshop for the Development of Energy-aware Software (1. Workshop zur Entwicklung energiebewusster Software, EEbS 2012), 2012.
- Claas Wilke.
Energieverbrauchsermittlung von Android-Applikationen
In: Innovationsforum open4INNOVATION2012 regional kooperativ-global innovativ Beiträge zum Fachforum, Technical Report TUD-FI12-05-Mai 2012, Technische Universität Dresden, pp. 70-74, 2012.
- Claas Wilke, Sebastian Richly, Sebastian Götz, Christian Piechnick, Georg Püschel and Uwe Aßmann.
Comparing Mobile Applications’ Energy Consumption
To appear in: SEGC – Software Engineering Aspects of Green Computing Track at the 28th ACM Symposium on Applied Computing (SAC2013).