As we saw in our earlier discussion of data-mining activities
on the Internet, most privacy concerns involving data mining were not centered
on the exchange of confidential or intimate information such as one's medical
records, financial records, or personal relationships. Rather, the concern was over the collection of a kind of personal information that, in the past, one might have thought did not need protection. In many cases, some legal protection
has been granted to personal information thought to be private (i.e., intimate,
sensitive, or confidential information). As Nissenbaum (1997) so aptly puts the
matter, our concern is now for personal information in "spheres other than the
intimate" (italics Nissenbaum). Concerns about which kinds of personal
information need protecting are now beginning to intensify because of the ways
personal privacy is threatened in certain (nonconfidential and nonintimate)
online activities that might easily be construed as activities in the public
sphere. So it would seem that Nissenbaum is justified in her claim that we need
to "protect privacy in public."
Nissenbaum also notes that because of the ways we typically
use the terms "private" and "public," the expression "privacy in public" may
seem oddly paradoxical. She further notes that few normative theories
sufficiently attend to the public aspect of privacy protection and that
philosophical work on privacy theory suffers a "theoretical blind spot" when it
comes to the protection of privacy in public. Nissenbaum points out that the
handful of arguments that have been advanced to protect privacy in public have
met with "knock-down objections," based on the following line of reasoning: "If
a person makes no effort to...[conceal]...information about themselves...then
restricting what others perceive of the person, or record and do with the
information thus recorded, is to place unacceptable limits on their freedom"
(Nissenbaum, 1997). It would certainly seem that this line of reasoning provides a convenient rationale for those online entrepreneurs who currently engage in data-mining techniques as well as in other data-gathering activities on the Internet.
Moor (1997) also envisions a scenario in which a married
couple arguing loudly in a restaurant about certain details of their marital
life can respond to a waiter, eager to offer the couple some advice, by pointing
out to the waiter that their discussion is really a "private matter" and that
they do not wish to hear his advice. Although the couple's response to the waiter might seem odd or even paradoxical, Moor notes that such a response does make sense because in private situations access to information "can be blocked in both directions." Moor points out that the couple can reasonably reject incoming
information from the waiter, despite the fact that the couple had been publicly
indiscreet in revealing details of their marital life to the waiter and to the
patrons of the restaurant. Because Moor's examples involving the restaurant
conversation and faculty salaries seem to address, at least on a certain level,
issues concerning privacy in public, his control/restricted access theory would
seem to provide some insight into the public vs. private distinctions regarding
personal information. We next consider Moor's theory more fully by applying it to some specific privacy concerns currently associated with the Internet.
Let us now consider whether Moor's control/restricted access theory can be successfully applied to the cluster of Internet-related privacy concerns. First, we ask whether any of the privacy concerns examined in the preceding sections necessarily violates or invades the privacy of individuals. We will then see whether Moor's theory can be used to frame a policy for future privacy concerns that may arise on the Internet as well.
7.1 The Loss of Privacy vs. the Violation or Invasion of Privacy
We begin by examining Moor's distinction between a loss of privacy and a violation or invasion of privacy, considered briefly in Section 2.2 of this study. We should recall that central to Moor's theory of
privacy was the notion of a situation. Does the Internet constitute a situation?
As we saw earlier, the Internet could be viewed either substantively as a
repository or perspectivally as a medium. When viewed as a repository of information, consisting of all of the information contained in the databases accessible to it, the Internet might not seem to fit neatly into Moor's notion of a
situation. When viewed as a medium, however, the Internet can be said to
constitute multiple situations. So using Moor's notion of a situation in
conjunction with our sense of "Internet as medium," the Internet can be viewed
as consisting of a series of situations. Interestingly, Moor includes among his
descriptions of a "situation" an "activity in a location" and the "storage and
access of information" (such as that in a computer database). Each of the
following Internet activities would seem to count as a legitimate situation: the
use of Internet search engines to locate individuals or information about those
individuals; the use of Internet cookies to gather personal information about
users and to store that information on the user's computer; the mining of
personal data; the use of Internet forms to (directly) gather personal
information; and the use of Internet server log files to gather personal data
(indirectly). Although Moor's theory could be applied to any of these
situations, let us consider, for purposes of this section of the study, the
situation of mining personal data on the Internet.
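To make the kind of information involved in these situations more concrete, consider the following minimal sketch in Python. It is purely illustrative: the log format, the field names, and the sample entries are assumptions introduced here for the sake of the example, not a description of any actual site's practice. The sketch shows how routine server log entries, gathered indirectly as a by-product of ordinary browsing and linked together by a cookie identifier, can be assembled into a profile of an individual user who never deliberately disclosed any personal information.

    # Purely illustrative sketch: the log format, field names, and sample
    # entries below are hypothetical. It shows how data gathered indirectly
    # (routine Web-server log entries linked by a cookie identifier) can be
    # assembled into a profile of an individual user.

    def parse_log_entry(line):
        """Split a hypothetical server log line into its fields."""
        ip_address, timestamp, page, referrer, cookie_id = line.split("|")
        return {
            "ip_address": ip_address,   # indirectly identifies the visitor's machine
            "visited_at": timestamp,    # when the visit occurred
            "page": page,               # reveals the visitor's interests
            "referrer": referrer,       # reveals where the visitor came from
            "cookie_id": cookie_id,     # links this visit to earlier visits
        }

    def build_profile(log_lines):
        """Aggregate many routine log entries into per-cookie browsing profiles."""
        profiles = {}
        for line in log_lines:
            entry = parse_log_entry(line)
            profiles.setdefault(entry["cookie_id"], []).append(entry["page"])
        return profiles

    # Two ordinary page requests, neither of which the user would regard as a
    # deliberate disclosure of personal information.
    sample_log = [
        "203.0.113.7|1999-11-02 10:15|/health/arthritis.html|/search?q=joint+pain|user-4821",
        "203.0.113.7|1999-11-03 09:02|/pharmacy/prices.html|/health/arthritis.html|user-4821",
    ]

    print(build_profile(sample_log))
    # {'user-4821': ['/health/arthritis.html', '/pharmacy/prices.html']}

Aggregated over many users and many visits, records of this kind are the raw material on which data-mining techniques operate.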
Does mining personal data on the Internet violate or invade
an individual's privacy? Moor (1997) would concede that privacy is indeed lost
by an individual, X, whenever data about X is mined on the Internet. On Moor's
view, however, the mere loss of privacy by an individual in a natural situation
does not necessarily constitute an invasion of that individual's privacy. So it
is not yet clear whether X's privacy has been violated or invaded in a normative
sense. Should all personal information contained on the Internet which is
currently accessible to data-mining techniques be declared a normatively private
situation? In other words, should access to that information be restricted,
and, if so, how should we decide the matter? Moor notes that the boundaries of
normative privacy can vary significantly from "place to place, and time to
time." He further notes that situations can also vary within a group, but points
out that this does not mean that privacy standards are "arbitrary or
unjustified." We saw in the preceding section in our discussion of whether
information regarding faculty salaries should be construed as public information
or be declared normatively private information, policy decisions can vary from
situation to situation, or context to context. We also saw that we can have good
reasons for making certain faculty salaries public in one context, e.g., at
state colleges, and declaring them normatively private situations in other
contexts, such as in small private colleges.
Moor (1997) claims that we can better protect our privacy if
we know exactly where the zones of privacy are, and if we know under what
conditions and to whom information will be given. One strength of Moor's theory
is that it requires us to state publicly what the parameters of a private
situation are so that they will be "completely public" and presumably known to
all those in or affected by a situation. Moor goes on to point out that
privately restricted situations or zones must conform to what he calls the
Publicity Principle. Let us next see how Moor's Publicity Principle can serve as
the foundation for a policy regarding privacy on the Internet in general and
privacy issues related to data mining in particular.
7.2 Moor's Publicity Principle and its Implications for the Internet
It would perhaps be prudent for us to begin a public dialogue
on privacy concerns related to the Internet in general, and data mining in
particular, while both are still relatively new. A plausible policy would, as
Moor's theory rightly suggests, need to spell out clearly the requirements for
all individuals (Internet users) and online businesses, including Web-site
owners. Applying Moor's Publicity Principle to data-mining situations involving
online businesses and online consumers, we could propose what Moor calls a
rational debate on data mining in which consumers are first informed that data
mining is being used by certain online businesses to gather information about
them that can be used in ways that they most likely had not explicitly
authorized. Following Moor's Publicity Principle, with its invitation for a
rational debate, the onus would seem to be on businesses to inform consumers
about data mining, and not on the consumers to discover for themselves which
online businesses engage in this practice. Consumers need to be told explicitly
that information about them is being used in data-mining activities, since it
would not be reasonable to expect that the average consumer would be aware of
data-mining technologies. So by having an explicit policy in which consumers
were made aware of data mining and its applications, online users could inquire
into how information about them is compiled and used by the businesses with
which they transact, and these consumers would thus be able to make informed
choices. An opportunity for individuals to make informed choices would certainly
seem to be an important ingredient in any policy that purports to be open and fair.
An open and fair policy would require the explicit consent of
the online user (or data subject) to have his or her data used for data-mining
purposes. And following Moor's Publicity Principle, with its notion of informed
consent, consumers must also be given some say in what the acceptable rules --
e.g., the parameters and limitations of uses of the data about them -- will be
in that practice. Clearly defined rules must, Moor says, be established, and
individuals must be explicitly informed of those rules.
While most online users would likely opt out of data mining, some users might see certain advantages for themselves in having their personal data mined. For example, that process might result in their e-mail (or hardcopy) solicitations being more directly targeted to their individual interests as opposed to their receiving more generic forms of "junk mail." Other online consumers might be inclined, if given a choice over whether to have their personal data mined, to "opt in" if there were certain financial advantages such
as consumer discounts or rebates on items purchased. The important point, of
course, is that Internet users would have some say in how data about them is
used. The same rules used for determining whether to declare data-mining
activities on the Internet a normatively private situation, and the same
considerations regarding the pros and cons for why users might opt in or opt out of Internet data mining, could be applied to Internet cookies as well as to
other Internet situations or activities.
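A rough sketch of how such an opt-in arrangement might be expressed follows, again in Python. It is a hypothetical illustration rather than a statement of Moor's own proposal: the ConsentRecord structure, the published rule set, and the discount incentive are assumptions introduced here. The sketch is meant only to capture the two requirements emphasized above, namely that the rules governing the situation be stated openly and that mining proceed only on the basis of the data subject's explicit, informed choice.

    # Hypothetical illustration, in the spirit of Moor's Publicity Principle:
    # the rules governing the situation are stated openly, and personal data
    # is mined only with the user's explicit, informed consent.

    from dataclasses import dataclass

    # The rules of the situation, published in advance for all affected parties.
    PUBLISHED_RULES = {
        "purpose": "targeted offers and consumer discounts",
        "data_used": ["purchase history", "pages visited"],
        "shared_with_third_parties": False,
    }

    @dataclass
    class ConsentRecord:
        user_id: str
        shown_rules: dict      # the rules the user was actually shown
        opted_in: bool         # the user's explicit choice to participate

    def may_mine_data(consent):
        """Permit mining only if the user saw the published rules and opted in."""
        return consent.shown_rules == PUBLISHED_RULES and consent.opted_in

    # One user opts in (perhaps in exchange for discounts); another declines.
    alice = ConsentRecord("alice", shown_rules=PUBLISHED_RULES, opted_in=True)
    bob = ConsentRecord("bob", shown_rules=PUBLISHED_RULES, opted_in=False)

    print(may_mine_data(alice))  # True
    print(may_mine_data(bob))    # False

An analogous check could, in principle, govern the setting of cookies or any of the other Internet situations and activities mentioned earlier.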
It would seem that Moor's control/restricted access theory,
with its Publicity Principle, provides us with a comprehensive and consistent,
yet flexible, procedure for resolving privacy disputes involving the Internet.
Because of its flexibility of application, this theory also provides us with a
mechanism to resolve, via open and rational debate, future privacy concerns that
may arise from the use of Internet tools and techniques that have yet to be
developed and implemented. Unlike many privacy-enhancing technologies that have
recently been put forth as technical solutions or "techno-fixes" to privacy
threats introduced by specific Internet technologies, Moor's comprehensive
theory provides a procedure for addressing and resolving privacy issues in a
much more systematic manner. Because most technical solutions are aimed at
eliminating threats introduced by specific Internet tools and techniques, they
tend to be ad hoc and nonsystematic "quick fixes" to privacy issues that are
complex in nature. An adequate solution to current and future privacy issues
involving the Internet needs to take into account those complexities.
Fortunately, Moor's theory enables us to do just that.
Allen, A.: 1988, Uneasy Access: Privacy for Women in a Free
Society, (Rowman and Littlefield, Totowa, NJ).
Benassi, P.: 1999, "TRUSTe: An Online Privacy Seal Program,"
Communications of the ACM, vol. 42, 2, 56-59.
Cavoukian, A.: 1998, Data Mining: Staking a Claim on Your Privacy, (Information and Privacy Commissioner's Report, Ontario, Canada).
Clarke, R.: 1988, "Information Technology and Dataveillance,"
Communications of the ACM, vol. 31, 5, 498-512.
Clarke, R.: 1999, "Internet Privacy Concerns Confirm the Case
for Intervention," Communications of the ACM, vol. 42, 2, 60-67.
DeCew, J.W.: 1997, In Pursuit of Privacy: Law, Ethics, and the
Rise of Technology, (Cornell University Press, Ithaca, New York).
Eisenberg, A.: 1996, "Privacy and Data Collection on the Net,"
Scientific American, March, 120.
Etzioni, O.: 1996, "The World Wide Web: Quagmire or Gold
Mine?" Communications of the ACM, vol. 39, 11, 65-68.
Fried, C.: 1970, "Privacy: A Rational Context," Chap. IX in
Anatomy of Values, (Cambridge University Press, New York).
Fulda, J.: 1998, "Data Mining and the Web," Computers and
Society, vol. 28, 1, 42-43.
Gavison, R.: 1980, "Privacy and the Limits of the Law," Yale
Law Journal, vol. 89.
Kotz, D.: 1998, "Technological Implications for Privacy." In
J.H. Moor, ed. Proceedings of the Conference on The Tangled Web: Ethical
Dilemmas of the Internet, August 7-9, 1998, (Dartmouth College, Hanover, NH).
Moor, J.H.: 1997, "Towards a Theory of Privacy in the
Information Age," Computers and Society, vol. 27, 3, 27-32.
Nissenbaum, H.: 1997, "Can We Protect Privacy in Public?" In
M.J. van den Hoven, ed. Proceedings of the Conference on Computer Ethics:
Philosophical Enquiry: CEPE '97, (Erasmus University Press, Rotterdam, The Netherlands).
Posner, R.A.: 1978, "An Economic Theory of Privacy," Regulation, May-June, 19-26.
Rachels, J.: 1975, "Why Privacy Is Important," Philosophy and Public Affairs, vol. 4, 4.
Tavani, H.T.: 1996, "Computer Matching and Personal Privacy:
Can They Be Compatible?" In C. Huff, ed. Proceedings of the Symposium on
Computers and the Quality of Life: CQL '96, (ACM Press, New York),
Tavani, H.T.: 1997, "Internet Search Engines and Personal
Privacy." In M.J. van den Hoven, ed. Proceedings of the Conference on Computer
Ethics: Philosophical Enquiry: CEPE '97, (Erasmus University Press, Rotterdam,
The Netherlands), 214-223.
Tavani, H.T.: 1998, "Data Mining, Personal Privacy, and Public
Policy." In L.D. Introna, ed. Proceedings of the Conference on Computer Ethics:
Philosophical Enquiry: CEPE '98, (University of London Press, London, UK),
Tavani, H.T.: 1999, "Informational Privacy, Data Mining, and
the Internet." Ethics and Information Technology, vol. 1, 2.
Tavani, H.T.: In Press, "Privacy and Security," Chap. 4 in D. Langford, ed. Internet Ethics, (MacMillan Press, London, UK).
Wright, M. and J. Kakalik.: 1997, "The Erosion of Privacy,"
Computers and Society, vol. 27, 4, 22-25.
Significant portions of this paper are extracted from "Privacy
and Security," Chap. 4 in Internet Ethics (ed. Duncan Langford), forthcoming
from MacMillan Publishers (UK). I am grateful to Professor Langford and to
MacMillan Publishers for permission to use material from that chapter in several
sections of this paper.
© 2000 Herman T. Tavani. Published with permission of the copyright holder.