Devices

Below I outline the justification for the criteria I used to analyze a number of life-logging domains. The actual scores against these criteria for each of the domains may be found here:

Li (2011) found that, up to a point, a person's engagement with their data correlates with the effort they must make to collect it. Li and others also note that for each person there is a point at which the effort or inconvenience makes the cost of data collection too high to continue participating. This cost is usually described in terms of having to interact with the device, the data, or both in order to keep the entire system functioning. In the studies that Li and others carried out, the technology had already been chosen, so a number of facets that make up the cost to the user were held constant. Since I wanted to compare devices, I needed to break this cost down into sub-components that could be compared. One of the major differences between devices that became clear to me early in my evaluations was the maintenance of the device itself. This manifested itself largely as the charge/sync requirement: some devices required daily charging over a cable and manual syncing through specialized software on a computer, while others lasted up to a week on a charge and synced invisibly and wirelessly. The other major component of cost that I observed was invasiveness: some devices required the user to strap hardware to their skin all day, while others simply required an app installed on a smartphone or a small device carried by the user during the day.

A common criterion for evaluating ubicomp systems is a comparison with the so-called ground truth (e.g. LaMarca et al., 2005; Sohn et al., 2006), in other words how closely the device measures what is actually happening. Li et al. (2010) found that accuracy was one of the barriers their participants experienced in the stage-based model of personal informatics. Since most of the personal informatics literature is concerned with the effectiveness of a single device, the precision of the data is usually not considered. Because some device makers claim that the increased invasiveness or charge/sync frequency of their device yields more accurate results, I introduced the criterion of accuracy in order to allow for fair comparisons. In many domains, the ground truth needed to calculate accuracy may be impossible or uneconomic to determine; for example, measuring actual calorie expenditure requires a participant to run on a treadmill in a lab with sensors and a mask measuring oxygen consumed and carbon dioxide produced. In such cases I have simply used the accuracy of the device under consideration relative to the most accurate consumer device I could find; in the case of calorie expenditure, for example, I used a treadmill with a heart rate sensor.
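As a concrete illustration of this relative-accuracy approach (a minimal sketch, not the procedure used in the study), a device's readings can be compared against a reference consumer device and the resulting error mapped onto a 1-5 score. The readings, the error measure, and the thresholds below are my own assumptions for the example.

```python
# Sketch: relative accuracy of a device against a reference consumer device.
# Readings, error measure, and the error-to-Likert thresholds are illustrative
# assumptions, not values from the study.

def mean_absolute_percentage_error(device_readings, reference_readings):
    """Average absolute error of the device relative to the reference, as a fraction."""
    pairs = zip(device_readings, reference_readings)
    return sum(abs(d - r) / r for d, r in pairs) / len(device_readings)

def likert_accuracy(error):
    """Map a relative error to a 1-5 Likert score (thresholds are assumptions)."""
    if error <= 0.05:
        return 5
    if error <= 0.10:
        return 4
    if error <= 0.20:
        return 3
    if error <= 0.35:
        return 2
    return 1

# Example: calorie estimates from a wrist device vs. a treadmill with heart rate sensor.
wrist_device = [410, 380, 525]        # kcal per session (illustrative)
treadmill_reference = [450, 400, 500]
error = mean_absolute_percentage_error(wrist_device, treadmill_reference)
print(f"relative error: {error:.1%}, Likert accuracy: {likert_accuracy(error)}")
```

Because every device in a domain is scored against the same reference, the resulting scores are only meaningful as a within-group comparison, which is how accuracy is treated in Table 1.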

One other important criterion, identified by those in the Quantified Self (“Quantified Self,” 2012) and open data communities (“Open Rights Group,” 2012), is interoperability. Many commercial device makers have a vested interest in keeping users on their platform, holding data in silos that make it difficult for users to process their own data or combine it with data from other systems. Li (2011) notes the importance of open data for visualization integration in the development of his Innertube prototype, as well as identifying it as a barrier in his stage-based model. As it is a criterion cited frequently by the relevant communities, I chose open data as the fourth criterion upon which to evaluate life-logging devices for further study.

Each of the four criteria of charge/sync, invasiveness, accuracy and open data was scored for each device using a 5-point Likert scale, as explained in Table 1. In the following sub-sections the results are summarized for each of the domains of visual life-logging, activity life-logging and sleep life-logging. These three domains were chosen because each had a wide selection of devices available and they were the most popular domains in the communities I surveyed.

Charge/Sync: 1-2 = cannot last the day and/or cable charging and/or custom software to sync; 4-5 = lasts more than a day, wireless or other easy sync

Invasiveness: 1-2 = very invasive, strapping to body, uncomfortable; 4-5 = nearly invisible, little user intervention required

Accuracy: relative comparison within group; 1 = low accuracy, 5 = high accuracy

Open Data: 1 = closed, proprietary software only; 2 = hard to access, manual process to extract; 4 = open format but manual process to access; 5 = open format with open cloud-based API

Table 1: Likert scales (1-5) for analysis criteria
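To show how these scales are applied in practice, the sketch below records per-device scores on the four criteria and ranks devices by an unweighted total. The device names and scores are invented for illustration only; they are not results from the evaluations reported in the following sub-sections.

```python
# Sketch of the scoring bookkeeping: each device receives a 1-5 Likert score on the
# four criteria from Table 1. Device names and scores here are illustrative only.

CRITERIA = ("charge_sync", "invasiveness", "accuracy", "open_data")

devices = {
    "Device A (activity)": {"charge_sync": 4, "invasiveness": 4, "accuracy": 3, "open_data": 5},
    "Device B (activity)": {"charge_sync": 2, "invasiveness": 3, "accuracy": 4, "open_data": 1},
}

def total_score(scores):
    """Unweighted sum of the four criteria (weighting would be a possible refinement)."""
    return sum(scores[c] for c in CRITERIA)

# Rank devices within a domain by total score, highest first.
for name, scores in sorted(devices.items(), key=lambda kv: total_score(kv[1]), reverse=True):
    print(f"{name}: total {total_score(scores)} / 20")
```

Summing the criteria without weights treats each as equally important; the per-criterion scores are kept alongside the total so that a reader can still see, for example, that a high-scoring device may nonetheless be closed with respect to its data.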
