This material is protected by U.S. copyright law. Unauthorized reproduction is prohibited.
To purchase quantity reprints or request permission to reproduce multiple copies, please e-mail reprints@ons.org.
Computerized Symptom and Quality-of-Life
Assessment for Patients With Cancer
Part II: Acceptability and Usability
Kristin H. Mullen, MN, ARNP, AOCN, Donna L. Berry, PhD, RN, AOCN,
and Brenda K. Zierler, PhD, RN
Purpose/Objectives: To determine the acceptability and usability of a computerized quality-of-life (QOL) and symptom assessment tool and the graphically displayed QOL and symptom output in an ambulatory radiation oncology clinic.

Design: Descriptive, cross-sectional.

Setting: Radiation oncology clinic located in an urban university medical center.

Sample: 45 patients with cancer being evaluated for radiation therapy and 10 clinicians, who submitted 12 surveys.

Methods: Acceptability of the computerized assessment was measured with an online, 16-item, Likert-style survey delivered as 45 patients undergoing radiation therapy completed a 25-item QOL and symptom assessment. Usability of the graphic output was assessed with clinician completion of a four-item paper survey.

Main Research Variables: Acceptability and usability of computerized patient assessment.

Findings: The patient acceptability survey indicated that 70% (n = 28) liked computers and 10% (n = 4) did not. The program was easy to use for 79% (n = 26), easy to understand for 91% (n = 30), and enjoyable for 71% (n = 24). Seventy-six percent (n = 25) believed that the amount of time needed to complete the computerized survey was acceptable. Sixty-six percent (n = 21) responded that they were satisfied with the program, and none of the participants chose the very dissatisfied response. Eighty-three percent (n = 10) of the clinicians found the graphic output helpful in promoting communication with patients, 75% (n = 9) found the output report helpful in identifying appropriate areas of QOL deficits or concerns, and 83% (n = 10) indicated that the output helped guide clinical interactions with patients.

Conclusions: The computer-based QOL and symptom assessment tool is acceptable to patients, and the graphically displayed QOL and symptom output is useful to radiation oncology nurses and physicians.

Implications for Nursing: Wider application of computerized patient-generated data can continue in various cancer settings and be tested for clinical and organizational outcomes.

Key Points . . .

Computerized quality-of-life and symptom assessment was acceptable to patients with cancer as a method of gathering patient-reported information.

Patients of all computer use backgrounds reported high acceptability.

The graphic display of assessment responses was useful to physicians and nurse clinicians in promoting communication about symptoms and quality of life.

Kristin H. Mullen, MN, ARNP, AOCN, is a nurse practitioner for the Palliative Care Service in the Department of Veterans Affairs at the VA Puget Sound Health Care System in Seattle, WA; and Donna L. Berry, PhD, RN, AOCN, and Brenda K. Zierler, PhD, RN, are associate professors in Biobehavioral Nursing and Health Systems at the University of Washington in Seattle. At the time this study was conducted, Mullen was a graduate student at the University of Washington and received an educational scholarship from the American Cancer Society. (Submitted May 2003. Accepted for publication March 25, 2004.)

Digital Object Identifier: 10.1188/04.ONF.E84-E89

Of the estimated 1,368,030 people who will be diagnosed with cancer in the United States in 2004, approximately 63% will survive more than five years (Jemal et al., 2004). Understanding the impact of cancer and the therapies designed to cure or prolong the lives of people with cancer is essential for patients and healthcare providers. Despite the apparent interest in assessing quality of life (QOL) in patients with cancer, routine evaluations of QOL are uncommon in most clinical cancer settings (Batel-Copel, Kornblith, Batel, & Holland, 1997).

Computerized administration of assessment tools is a reliable means of collecting patient data. Numerous studies have compared responses from each participant on the written version of the instrument of choice and the computerized version (Drummond, Ghosh, Ferguson, Brackenridge, & Tiplady, 1995; Lutner et al., 1991; Pouwer, Snoek, van der Ploeg, Heine, & Brand, 1998; Roizen et al., 1992; Skinner & Allen, 1983; Taenzer et al., 1997, 2000; Turner et al., 1998; Velikova et al., 1999). Most of these studies reported no noteworthy differences between modes of testing. In addition, when participants were exposed to both types of test administration (written and computerized), the majority reported a preference for electronic questionnaires. The benefits of electronic assessment have included decreased time to complete the questionnaire, ability to enter data directly into existing clinical databases, and potentially more accurate response rates. The acceptability of healthcare information technologies has been studied, and a considerable amount of literature has been published on healthcare workers' interaction with computerized medical records and other new technologies (Dewan & Lorenzi, 2000; Kushniruk, Kaufman, Patel, Levesque, & Lottin, 1996; Kushniruk, Patel, & Cimino, 1997; Patel & Kushniruk, 1998). However, the literature contains much less information concerning the usability of medical computer programs with which patients interact.

Taenzer, Sauve, Burgess, Milkavich, and Whitmore (1996) have documented that a computer interview was a feasible method for obtaining health information and that the program was very well accepted by the participants. In oncology settings, positive effects on increasing the number of QOL issues discussed between patients and healthcare providers were reported (Taenzer et al., 1996). Wilkie et al. (2001) evaluated the feasibility and acceptability of a computerized assessment of cancer-related symptoms in 41 patients. They concluded that the computer program was a highly acceptable way for the participants to report their symptoms (Wilkie et al.).

Velikova, Brown, Smith, and Selby (2002) and Taenzer et al. (2000) reported the perceptions of clinicians in the cancer care setting regarding usability of a computer-administered QOL assessment tool. The clinicians reported that the measurement tool identified areas of QOL concerns that had not been addressed previously and that the QOL data obtained enhanced communication between patients and clinicians.

One relevant study that focused on the acceptability and usability of computerized QOL screening was conducted in a cancer pain clinic (Carlson, Speca, Hagen, & Taenzer, 2001). The participants completed a computerized QOL assessment and a postsurvey paper-and-pencil questionnaire that assessed patients' impressions of the computerized assessment. The authors concluded that patients found this computerized assessment easy to use, understandable, enjoyable, helpful, and a good use of waiting room time. The patients were satisfied with the experience, and their attitudes about computers improved after completing the computer program. Physicians and nurses who cared for patients in this setting reported that the QOL summary was useful in patient care. Although computerized screening has been reported about in other countries, none has been implemented and evaluated in a U.S. oncology ambulatory setting to screen for QOL concerns and symptomatology.

The purpose of this study was to determine the acceptability of a computerized QOL and symptom assessment survey for patients and the usability of the output for healthcare professionals. The acceptability analysis was intended to evaluate whether patient participants were able to complete the computerized program and whether they found computerized assessment acceptable. An additional aim of this study was to determine whether the graphically displayed assessment results were useful to the doctors and nurses who cared for the patients.

Methods

This descriptive pilot study included two convenience sample groups and was part of a larger descriptive cross-sectional study to develop and test a computerized QOL and symptom assessment tool for patients with cancer (see part I of this article).

Setting and Sample

The study took place in an outpatient radiation oncology clinic located in an urban university medical center in the northwestern United States. The first sample group was composed of 45 consecutive clinic patients, 26 men and 18 women, who were being evaluated for radiation therapy. Inclusion criteria were being 18 years of age or older, having a cancer diagnosis, being able to communicate in English, and having an evaluation by a radiation oncologist for radiation therapy. Exclusion criteria included receiving or being evaluated for total body irradiation, being evaluated for gamma knife stereotactic radiosurgery or for neurosurgery, or being unable to communicate in English. The ages of participants in this sample group ranged from 18-97 years (mean = 54.89 years). The educational backgrounds of the participants were relatively diverse: 9% (n = 4) had not completed high school, and 43% (n = 17) had achieved an undergraduate or graduate degree.

The second sample group was composed of 10 clinicians: 4 attending physicians, 2 resident physicians, 1 nurse practitioner, and 3 RNs who cared for the participants described in the patient sample. Although clinician participants were asked to complete the questionnaire one time only, two nurses completed it twice, responding to two different patient surveys. Because of the pilot nature of the study and the small sample size, these second responses are included in the analysis. To be included in this sample, the healthcare providers must have seen the graphically displayed output from the computerized survey that one of their patients had completed. Exclusion criteria included the physicians and nurses who were not caring for the patients in the study. Fourteen clinicians were invited to participate, 13 questionnaires were returned, and one returned survey was deemed ineligible because the clinician (an attending physician) indicated on the survey that he had not seen the computer-generated patient data. Responses from 12 clinician questionnaires, which were completed by 10 different clinicians, were used for data analysis.

Instruments

Acceptability was measured using a computerized version of a questionnaire that was developed and used by Carlson et al. (2001). The tool consists of six preassessment items and 10 postassessment items using Likert-type responses. Total scores were not calculated. Carlson et al. did not report reliability or validity data. The preassessment items elicited responses pertaining to previous computer use, education, and attitudes toward computers, paper-and-pencil surveys, computer questionnaires, and face-to-face interviews. Each attitude item was scored on a 1 (dislike this method very much) to 5 (like this method very much) scale. Seven of the postassessment items addressed the experience of using the computerized program, including how easy and enjoyable the program was to use, how understandable the questions were, how helpful completing the program was, whether the participant liked the program, whether the amount of time to complete the program was acceptable, and overall satisfaction with the program. The response choices ranged from 1-5, with higher scores indicating a more positive experience. Reliability testing of these seven items revealed an alpha coefficient of 0.91. Three additional postassessment items elicited responses rating preferences of interview method, comparing face to face, written, and computerized, again using a 1-5 scale.

Clinician usability was measured with a written questionnaire consisting of four questions using Likert-type responses that also were scored on a 1-5 scale (Carlson et al., 2001). After the physician or nurse concluded the clinic visit with the patient, he or she was asked to complete the short questionnaire that same day. The questions determined whether the clinicians had viewed the assessment results before the clinical encounter and whether they found the graphically displayed results of the QOL and symptom survey useful. Carlson et al. did not report the reliability and validity of this usability tool, and the current study's authors did not calculate these parameters in this pilot study.

Procedures

Human subject approval from the university human subjects division was granted prior to beginning the study. A trained research assistant explained the study and provided the laptop computer to patients. Patients read the consent information on the computer screen. The program was designed to be user-friendly with a touch screen and simple directions. The research assistant was available to assist patients if necessary. When assistance was needed, the type of assistance was recorded into a logbook. Prior to completing the QOL and symptom questions, participants were asked to complete the six preassessment questions. When they completed this segment, the program prompted them to complete the 25 QOL and symptom questions. At the end of the QOL and symptom questions, they were asked to complete the 10 postassessment questions. A total of 41 items were presented on the computer screen to the participants who completed the entire program. Once patients completed the QOL and symptom assessment, a color graphic display of the results was printed and given to patients' physicians and nurses. The color printout ranked patients' responses by level of symptom distress and QOL item score and flagged potentially troublesome levels in red.

Clinicians were asked to complete the four-item clinician survey and one additional open-ended question eliciting information about the usefulness of the graphic display of participants' QOL and symptom assessment. Return of the questionnaire implied consent to participate. On this paper-and-pencil questionnaire, clinicians were asked to provide their job title (RN, attending physician, resident physician, or other) but no other identifiers. A collection box was left on a counter in the clinic area labeled "quality-of-life clinician survey," and clinicians were instructed to deposit the completed surveys in the box. One e-mail reminder was sent to all physicians and nurses in the radiation oncology clinic to prompt the clinicians to complete and return the survey.

Data Analysis

Descriptive statistics (mean and standard deviation) of the sample characteristics as well as statistical analysis of the relationship between sample characteristics and computer use and computer acceptance were completed using chi-square. Preassessment acceptability responses were compared to the postassessment responses. In addition, an analysis of participant preference for computer versus paper-and-pencil assessment and degree of previous computer use were evaluated. Clinicians' responses to the utility of the graphic output were described, and single-item frequency distributions were calculated.

Results

Fifty-four patients were approached and invited to participate, seven patients declined to participate because of feeling sick, and two patients chose to have their responses recorded only for clinical use and not to be used in the research database, leaving a sample of 45 participants.

Preassessment results: Table 1 describes the level of patients' computer use. The individuals who responded that they never used computers all were aged 54 or older. Also, none of the respondents who were older than 61 years reported using a computer frequently. A significant negative correlation (r = -0.551) was found between age and computer use (p = 0.002). As age increased, reported computer use decreased. Participants who did not complete high school used a computer less frequently than those who attended some college or technical training or had received undergraduate or graduate degrees (p = 0.002) (see Table 2).

Table 1. Participant Computer Use (N = 45)
Never use computers: no computer or typewriter use; no computer use, some typewriter use
Occasional computer users: use computer once per month; use computer once per week
Frequent computer users: more than once per week
Missing data

Table 2. Education Level Compared With Computer Use (N = 36)
Did not complete high school
Completed high school diploma, with or without some college or technical training
Completed undergraduate or graduate degree

When participants were asked to rate their attitudes about computers before completing the QOL and symptom items, 62% (n = 28) indicated that they liked computers, 18% (n = 8) were neutral, 9% (n = 4) reported that they did not like computers, and 11% (n = 5) did not respond.

Forty-one percent (n = 16) of the 39 responding participants reported that they liked paper-and-pencil surveys, 54% (n = 21) responded that they liked computer questionnaires, and 65% (n = 26) responded that they liked face-to-face interviews. Twenty-three percent (n = 9) of the participants disliked paper-and-pencil surveys; however, only 13% (n = 5) disliked computer surveys and 7% (n = 3) of 41 responders disliked face-to-face interviews.

Postassessment results: No significant difference existed in participants' attitudes about computer questionnaires between individuals who had achieved a higher level of education or who reported more frequent computer use as compared to individuals who had achieved less than a college education and who never or rarely used computers.

When participants were asked how difficult the computer program was to use, none of the 33 who completed this question found the program to be very difficult. As seen in Figure 1, the responses were overwhelmingly positive with regard to aspects of using the program.

Figure 1. Computer Program Acceptability (N = 45)
Responses rated from "Not at all" to "Very much" on the following items.
Easy: How easy was the program to use?
Understand: How understandable were the questions?
Enjoy: How much did you enjoy using the program?
Helpful: How helpful was it to complete the program?
Time: Was the amount of time it took to complete the program acceptable?

Participants were asked to do three different comparisons on methods of survey administration: face to face and computers, paper and pencil and computers, and paper and pencil and face to face. The responses to these questions were similar to the responses in the preassessment segment of the program. Seventy percent (n = 21) of 30 participants responded that they liked computer questionnaires, 23% (n = 7) chose the neutral response, and 7% (n = 2) responded that they did not like computer questionnaires.

In the final segment of the survey, 66% (n = 21) of 32 participants responded that they were satisfied with the computerized program, 31% (n = 10) chose the neutral response, and none of the participants indicated that they were very dissatisfied. The responses regarding attitudes about computer questionnaires were very similar to the responses in the preassessment segment: 70% (n = 21) of 30 participants responded that they liked computer questionnaires, and 7% (n = 2) responded that they disliked computer questionnaires. The average time it took for a participant to complete the entire tool was about 10 minutes.

Missing data: All consenting patients who answered at least one question were included in the analysis. A considerable amount of data was missing from the final items because participants who were not able to finish the survey for one of several reasons were included in the analysis (Trigg, Berry, Karras, Austin-Seymour, & Lober, 2003). The average number of missing responses per participant in the preassessment portion of the survey was 0.6. The postassessment average number of missing responses was 2.6.

The data from clinicians who responded that they had seen the patient graphic output were analyzed for usability. The results of the clinician survey are seen in Figure 2. Because of the small sample size (N = 12) in the clinician group, differences between clinician groups, nurses, attending physicians, resident physicians, and the nurse practitioner were not calculated. However, the nurses and nurse practitioner did appear to give the most favorable reports of utility of the patient data in graph format, followed by the resident physicians and attending physicians.

Figure 2. Helpfulness of the Computer-Generated Quality-of-Life and Symptom Graphic (N = 12)
Responses rated from "Not helpful" upward on helpfulness in identifying areas of concern, promoting communication, and guiding clinical interactions.

Discussion

This study suggests that the computerized QOL and symptom questionnaire used in a university medical center radiation oncology setting is an acceptable method of gathering patient information and that the immediate information generated by the program is clinically useful in the care of patients with cancer. Overall, the program was very well accepted, and patients who participated in the study preferred the computerized questionnaire format to paper-and-pencil and face-to-face interviews. Patient participants found the program easy to use and enjoyable and indicated that the amount of time it took to complete the program was appropriate. Nurse clinicians and resident physicians found the output to be more helpful in identifying areas of concern, promoting communication, and guiding their clinical interactions with patients than did attending physicians. In this university setting, as in many other settings, the resident physician typically conducts the history and physical examination portion of the new patient encounter before the attending physician interacts with the patient. The resident usually reports his or her findings verbally to the attending physician. Viewing the graphic output may have been redundant for the attending physician and therefore was judged less useful.

In this study, the pre- and postassessment questions and the clinician usability questionnaire were adapted with permission from Carlson et al. (2001). The current study's results support the findings reported by that group. The sample was similar in size to Carlson et al.'s; however, only 23% of the sample group in Carlson et al.'s study used computers frequently (more than once per week) as compared to the current sample group, in which 49% reported that they used computers frequently. In the preassessment portion of both studies, participants reported very similar preferences for face-to-face and computer methods of questionnaires. In the postassessment segment, both studies reported that a majority of participants (65%-70%) found computer questionnaires acceptable. Carlson et al. demonstrated a larger change in the attitudes about computer questionnaires from the preassessment to the postassessment segment of the computer program, which they attributed to the positive experience using the program. In the current study, attitudes regarding computer questionnaires were roughly the same in the preassessment and postassessment segments of the program (70% acceptable). This may be related to the larger group of frequent computer users who participated in this study compared to Carlson et al.'s.

Some differences existed in the clinician survey portion of this study compared to Carlson et al.'s (2001). In the present study, the sample group was made up of four attending physicians, two resident physicians, one nurse practitioner, and three RNs. Because similarities were noted in the responses that each clinician group chose, the authors suspect that this is related to the type of practice that each group assumes. In the Carlson et al. study, the clinician sample group was made up of two doctors and two nurses who completed the survey in reference to the computerized QOL data obtained from 44 patients. They did not report differences in the responses by clinician group. Small sample sizes in both studies limited such comparisons.

A major difference between Carlson et al.'s (2001) work and this study is the type of output the computer program generated. In the Carlson et al. study, the computer printed out a text format summary of the patients' responses and patients were asked to hand carry the form to their clinic visit. In the current study, the computer generated a graph displaying high-level item scores with a red color on a bar chart and a list of QOL items and symptoms ordered by the severity of distress that the participant indicated on the computer program. The output page was placed on the medical chart with the laboratory and vital sign forms used for the day's clinic visit or handed directly to the provider by the research assistant. These differences may have had an effect on the variation in the responses received from clinicians. Carlson et al. reported that 64% of clinicians agreed that the computer-generated output identified appropriate areas of QOL deficits or concerns, whereas in the current study, 77% of clinicians found this to be true.

This work also confirms the research results of other studies that focused on the feasibility of computerized screening (Taenzer et al., 2000; Velikova et al., 2002; Wilkie et al., 2001). Velikova et al. (2002) studied the acceptance and feasibility of a computer-administered QOL measurement in an oncology clinic in the United Kingdom. They concluded that the computer method was well accepted and that patients reported the information as a useful way to tell doctors about their feelings. In addition, the three physicians who participated in the evaluation of the usefulness and clinical relevance of the information obtained from the assessment found the information to be accurate and clinically relevant.

The current study was limited by the small size of the sample of clinicians, which precluded statistical analysis of differences in responses by clinician background. Another issue is the amount of missing data from the postassessment queries, which affects the interpretation of the results. For example, those who did not have time to finish may have responded differently to the postassessment questions. The authors believe that one of the major reasons for the missing data in this study was related to the frequent interruptions in the cancer center lobby, inadequate amount of preappointment time available, and patient tardiness for appointments. These feasibility concerns have been reported elsewhere (Trigg et al., 2003).

The relationships among sample group demographics, such as income, race, and ethnicity, and individual responses on the instruments were not analyzed. In addition, the sample group was described, but other factors, such as types of cancer, types of cancer treatment (e.g., chemotherapy and surgery), or comorbid medical conditions, were not, which may affect the generalizability of the results.

This approach to assessment could be tested in chemotherapy infusion areas or medical oncology areas for feasibility and acceptability. With this type of QOL and symptom assessment, clinicians and researchers would be able to measure the change of those factors over time as patients begin and finish cancer treatments. Future studies focused on the impact of serial screening on the levels of distress, symptoms, and QOL have the potential to provide the necessary impetus for the change that would be required to implement routine QOL and symptom assessment into clinical practice.

Conclusions and Implications for Nursing

The data generated from this study indicate that computer-based, patient-entered QOL and symptom assessment is an acceptable means of gathering information. The main implication of this research study for clinical practice is the knowledge that this type of assessment is well accepted by patients with cancer and readily used by clinicians. However, this study indicated that 61% of patients who participated in the computerized survey were not using computers on a frequent basis. Thus, a computerized survey may not be an appropriate way to collect information from some patients outside the clinical setting. Such a system could be set up in a clinical setting, with local support made available to assist these patients so that all patients can benefit from routine screening.

When the implementation of routine screening is considered in a clinical setting, concerns about increasing workload, the utility of the information, patient acceptance, and the change of clinician practice or routine arise. This study confirms that computer-based screening with immediately available results may be a viable means of implementing this change into practice.

The authors wish to acknowledge Terri Whitney and Alexa Vellema, RN, BSN, for their excellent assistance throughout the project.

Author Contact: Kristin H. Mullen, MN, ARNP, AOCN, can be reached at kmullen@u.washington.edu, with copy to editor at
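The statistical procedures described in the Data Analysis section (chi-square tests of education level versus computer use, and a correlation between age and computer use) can be illustrated with a short sketch. The contingency counts and age/use observations below are hypothetical placeholders, not the study's data; the computations are the standard Pearson chi-square statistic and the Pearson product-moment correlation coefficient.

```python
import math

# Hypothetical 3x3 contingency table: rows = education level
# (< high school, high school/some college, college degree);
# columns = computer use (never, occasional, frequent).
table = [
    [3, 1, 0],
    [5, 6, 4],
    [1, 5, 11],
]

def chi_square(table):
    """Pearson chi-square statistic and degrees of freedom for a contingency table."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    total = sum(row_totals)
    chi2 = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / total
            chi2 += (observed - expected) ** 2 / expected
    dof = (len(table) - 1) * (len(table[0]) - 1)
    return chi2, dof

def pearson_r(xs, ys):
    """Pearson product-moment correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical paired observations: age (years) and computer-use
# frequency score (0 = never ... 3 = more than once per week).
ages = [25, 34, 41, 47, 52, 58, 63, 67, 72, 80]
use = [3, 3, 2, 3, 2, 1, 1, 0, 0, 0]

chi2, dof = chi_square(table)
r = pearson_r(ages, use)
print(f"chi-square = {chi2:.2f} (df = {dof}), r = {r:.3f}")
```

With data patterned like the study's findings, the correlation is negative (computer use declines with age), matching the direction of the reported r = -0.551.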
References

Batel-Copel, L.M., Kornblith, A.B., Batel, P.C., & Holland, J.C. (1997). Do oncologists have an increasing interest in the quality of life of their patients? A literature review of the last 15 years. European Journal of Cancer, 33, 29-32.
Carlson, L.E., Speca, M., Hagen, N., & Taenzer, P. (2001). Computerized quality-of-life screening in a cancer pain clinic. Journal of Palliative Care, 17, 46-52.
Dewan, N.A., & Lorenzi, N.M. (2000). Behavioral health information systems: Evaluating readiness and user acceptance. MD Computing, 17(4), 50-52.
Drummond, H.E., Ghosh, S., Ferguson, A., Brackenridge, D., & Tiplady, B. (1995). Electronic quality of life questionnaires: A comparison of pen-based electronic questionnaires with conventional paper in a gastrointestinal study. Quality of Life Research, 4, 21-26.
Jemal, A., Tiwari, R.C., Murray, T., Ghafoor, A., Samuels, A., Ward, E., et al. (2004). Cancer statistics, 2004. CA: A Cancer Journal for Clinicians, 54, 8-29.
Kushniruk, A.W., Kaufman, D.R., Patel, V.L., Levesque, Y., & Lottin, P. (1996). Assessment of a computerized patient record system: A cognitive approach to evaluating an emerging medical technology. MD Computing, 13, 406-415.
Kushniruk, A.W., Patel, V.L., & Cimino, J.J. (1997). Evaluation of Web-based patient information resources: Application in the assessment of a patient clinical information system. Proceedings of the American Medical Informatics Association Annual Symposium, 218-222. Retrieved February 25, 2004, from http://www.amia.org/pubs/symposia/D200732.PDF
Lutner, R.E., Roizen, M.F., Stocking, C.B., Thisted, R.A., Kim, S., Duke, P.C., et al. (1991). The automated interview versus the personal interview. Do patient responses to preoperative health questions differ? Anesthesiology, 75, 394-400.
Patel, V.L., & Kushniruk, A.W. (1998). Interface design for health care environments: The role of cognitive science. Proceedings of the American Medical Informatics Association Annual Symposium, 29-37. Retrieved February 25, 2004, from http://www.amia.org/pubs/symposia/D005235.PDF
Pouwer, F., Snoek, F.J., van der Ploeg, H.M., Heine, R.J., & Brand, A.N. (1998). A comparison of the standard and the computerized versions of the Well-Being Questionnaire (WBQ) and the Diabetes Treatment Satisfaction Questionnaire (DTSQ). Quality of Life Research, 7, 33-38.
Roizen, M.F., Coalson, D., Hayward, R.S., Schmittner, J., Thisted, R.A., Apfelbaum, J.L., et al. (1992). Can patients use an automated questionnaire to define their current health status? Medical Care, 30(5, Suppl.), MS74-84.
Skinner, H.A., & Allen, B.A. (1983). Does the computer make a difference? Computerized versus face-to-face versus self-report assessment of alcohol, drug, and tobacco use. Journal of Consulting and Clinical Psychology, 51.
Taenzer, P., Bultz, B.D., Carlson, L.E., Speca, M., DeGagne, T., Olson, K., et al. (2000). Impact of computerized quality of life screening on physician behaviour and patient satisfaction in lung cancer outpatients. Psycho-oncology, 9, 203-213.
Taenzer, P., Sauve, L., Burgess, E.D., Milkavich, L., & Whitmore, B. (1996). The health interview: Automated assessment in a multidisciplinary outpatient hypertension treatment program. MD Computing, 13, 423-426.
Taenzer, P.A., Speca, M., Atkinson, M.J., Bultz, B.D., Page, S., Harasym, P., et al. (1997). Computerized quality-of-life screening in an oncology clinic. Cancer Practice, 5, 168-175.
Trigg, L., Berry, D., Karras, B., Austin-Seymour, M., & Lober, W. (2003). Feasibility of patient entered QOL assessment data in a radiation oncology clinic. In H. Marin, E. Marques, E. Hovenga, & W. Goosen (Eds.), eHealth for all: Designing nursing agenda for the future [CD-ROM]. Sao Paulo, Brazil: NI 2003.
Turner, C.F., Ku, L., Rogers, S.M., Lindberg, L.D., Pleck, J.H., & Sonenstein, F.L. (1998). Adolescent sexual behavior, drug use, and violence: Increased reporting with computer survey technology. Science, 280, 867-873.
Velikova, G., Brown, J.M., Smith, A.B., & Selby, P.J. (2002). Computer-based quality of life questionnaires may contribute to doctor-patient interactions in oncology. British Journal of Cancer, 86, 51-59.
Velikova, G., Wright, E.P., Smith, A.B., Cull, A., Gould, A., Forman, D., et al. (1999). Automated collection of quality-of-life data: A comparison of paper and computer touch-screen questionnaires. Journal of Clinical Oncology, 17, 998-1007.
Wilkie, D.J., Huang, H.Y., Berry, D.L., Schwartz, A., Lin, Y.C., Ko, N.Y., et al. (2001). Cancer symptom control: Feasibility of a tailored, interactive computerized program for patients. Family Community Health, 24(3), 48.