What Are Computer Laws and Ethics?

Computer Ethics is a branch of practical philosophy that deals with how computing professionals should make decisions regarding professional and social conduct. Margaret Anne Pierce, a professor in the Department of Mathematics and Computers at Georgia Southern University, has categorized the ethical decisions related to computer technology and usage into three primary influences:

  • The individual’s own personal code.
  • Any informal code of ethical conduct that exists in the workplace.
  • Exposure to formal codes of ethics.

Foundation

To understand the foundation of computer ethics, it is important to look at the different schools of ethical theory. Each school of ethics influences a situation in a certain direction and shapes the final ethical outcome.

Relativism is the belief that there are no universal moral norms of right and wrong. Ethicists in the relativist school divide it into two connected but distinct forms: subjective (moral) and cultural (anthropological). Moral relativism is the idea that each person decides what is right and wrong for them. Anthropological relativism is the idea that right and wrong are decided by a society’s actual moral belief structure.

Deontology is the belief that people’s actions are to be guided by moral laws, and that these moral laws are universal. The origins of deontological ethics are generally attributed to the German philosopher Immanuel Kant and his ideas concerning the Categorical Imperative. Kant believed that for any ethical school of thought to apply to all rational beings, it must have a foundation in reason. Kant formulated two categorical imperatives. The first states that you should act only from moral rules that you can at the same time will to be universal moral laws. The second states that you should always treat both yourself and other people as ends in themselves, and never only as a means to an end.

Utilitarianism is the belief that an action is good if it benefits someone and bad if it harms someone. This ethical belief can be broken down into two schools, Act Utilitarianism and Rule Utilitarianism. Act Utilitarianism is the belief that an action is good if its overall effect is to produce more happiness than unhappiness. Rule Utilitarianism is the belief that we should adopt a moral rule that, if followed by everybody, would lead to a greater level of overall happiness.

Social contract is the concept that, for a society to arise and maintain order, a morality-based set of rules must be agreed upon. Social contract theory has influenced modern government and is heavily involved with societal law. Philosophers like John Rawls, Thomas Hobbes, John Locke, and Jean-Jacques Rousseau helped create the foundation of social contract theory.

Virtue Ethics is the belief that ethics should be more concerned with the character of the moral agent (virtue) than with a set of rules dictating right and wrong actions, as in deontology and utilitarianism, or with social context, as in Social Contract ethics. Although concern for virtue appears in several philosophical traditions, in the West the roots of the tradition lie in the work of Plato and Aristotle, and even today the tradition’s key concepts derive from ancient Greek philosophy.

The conceptual foundations of computer ethics are investigated by information ethics, a branch of philosophical ethics established by Luciano Floridi. The term “computer ethics” was first coined by Dr. Walter Maner, a professor at Bowling Green State University. Since the 1990s, the field has increasingly been integrated into professional development programs in academic settings.

History

The concept of computer ethics originated in 1950, when Norbert Wiener, an MIT professor and founder of the field of cybernetics (the study of information feedback systems), published a book called “The Human Use of Human Beings”. The book laid out the basic foundations of computer ethics and made Wiener the father of the field.

Later, in 1966, another MIT professor, Joseph Weizenbaum, published a simple program called ELIZA, which performed natural language processing. In essence, the program functioned like a psychotherapist, using only open-ended questions to encourage patients to respond. To formulate its replies, the program applied pattern-matching rules to the user’s statements.
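Weizenbaum’s actual scripts are not reproduced here, but a minimal sketch of the pattern-matching idea might look like the following Python fragment (the rules, patterns, and canned responses are hypothetical illustrations, not ELIZA’s real script):

    import re

    # Each rule pairs a pattern with a response template; captured text
    # from the user's statement is spliced into the reply.
    RULES = [
        (re.compile(r"\bI need (.+)", re.IGNORECASE), "Why do you need {0}?"),
        (re.compile(r"\bI am (.+)", re.IGNORECASE), "How long have you been {0}?"),
        (re.compile(r"\bmy (mother|father)\b", re.IGNORECASE), "Tell me more about your {0}."),
    ]
    FALLBACK = "Please go on."  # open-ended prompt when no rule matches

    def reply(statement: str) -> str:
        """Return the response for the first rule whose pattern matches."""
        for pattern, template in RULES:
            match = pattern.search(statement)
            if match:
                return template.format(*match.groups())
        return FALLBACK

    print(reply("I am feeling anxious"))  # -> How long have you been feeling anxious?
    print(reply("It rained today"))       # -> Please go on.

The point of the sketch is that no understanding is involved: the program merely transforms surface text, which is exactly why its apparent intelligence so troubled Weizenbaum.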

Later that same year, the world’s first computer crime was committed. A programmer used a bit of computer code to prevent his bank account from being flagged as overdrawn. However, there were no laws in place at the time to stop him, and as a result he was not charged. To ensure that others did not follow suit, an ethics code for computing was needed.

Later in the 1960s, Donn Parker of SRI International, an author on computer crime, led the development of the first code of ethics in the field of computer technology.

In 1970, a medical teacher and researcher named Walter Maner noticed that ethical decisions become much harder to make when computers are involved. He saw the need for a separate branch of ethics for dealing with computers, and the term “computer ethics” was thus coined.

During the same year, the ACM (Association for Computing Machinery) adopted a professional code of ethics, and by the mid-1970s new privacy and computer crime laws had been put in place in the United States as well as in Europe.

In 1976, Joseph Weizenbaum made his second significant contribution to the field of computer ethics: a book titled “Computer Power and Human Reason”, which argued that while artificial intelligence can be good for the world, it should never be allowed to make the most important decisions, because it lacks human qualities such as wisdom. By far the most important point he makes in the book is the distinction between choosing and deciding. He argued that deciding is a computational activity while making choices is not, and that the ability to make choices is what makes us human.

Later that year, Abbe Mowshowitz, a professor of Computer Science at the City College of New York, published an article titled “On approaches to the study of social issues in computing”, which identified and analyzed technical and non-technical biases in research on the social issues of computing.

In 1978, the Right to Financial Privacy Act was adopted, drastically limiting the government’s ability to search bank records.

During the same year, Terrell Ward Bynum, a professor of Philosophy at Southern Connecticut State University and Director of its Research Center on Computing and Society, developed the first ever curriculum for a university course on computer ethics. To keep student interest in computer ethics alive, he launched an essay contest on the subject. In 1985, he published a journal issue entitled “Computers and Ethics”, which turned out to be his most famous publication to date.

In 1984, the Small Business Computer Security and Education Act was adopted; it served to inform Congress on matters related to computer crimes against small businesses.

In 1985, James Moor, Professor of Philosophy at Dartmouth College in New Hampshire, published an essay called “What Is Computer Ethics?”. In this essay Moor states that computer ethics includes the following: “(1) identification of computer-generated policy vacuums, (2) clarification of conceptual muddles, (3) formulation of policies for the use of computer technology, and (4) ethical justification of such policies.”

During the same year, Deborah Johnson, Professor of Applied Ethics and Chair of the Department of Science, Technology, and Society in the School of Engineering and Applied Sciences at the University of Virginia, published the first major computer ethics textbook. It not only became the standard-setting textbook for computer ethics, it also set the research agenda for the next ten years.

In 1988, Robert Hauptman, a librarian at St. Cloud State University, coined the term “information ethics” to describe the storage, production, access, and dissemination of information. Around the same time, the Computer Matching and Privacy Protection Act was adopted, regulating the government’s use of computer matching of records in federal programs, for example to identify debtors.

By the 1990s computers had become ubiquitous, and their combination with telecommunications, the internet, and other media raised many new ethical issues.

In 1992, the ACM adopted a new set of ethical rules, the “ACM Code of Ethics and Professional Conduct”, which consisted of 24 statements of personal responsibility.

Three years later, in 1995, Krystyna Górniak-Kocikowska, a Professor of Philosophy at Southern Connecticut State University, Coordinator of its Religious Studies Program, and a Senior Research Associate in the Research Center on Computing and Society, proposed that computer ethics would eventually become a global ethical system, and that it would soon replace ethics altogether as the standard ethics of the information age.

In 1999, Deborah Johnson took a view quite contrary to Górniak-Kocikowska’s, arguing that computer ethics would not evolve into a new ethics but would rather remain our old ethics with a slight twist.

Internet Privacy

Internet privacy is one of the key issues that has emerged since the evolution of the World Wide Web. Millions of internet users routinely expose personal information online in order to sign up or register for thousands of different services. In doing so, they expose themselves in ways some may not realize.

Source: CSC300 lecture notes, University of Toronto, 2011. For more information on this topic, please visit the Electronic Privacy Information Center website.

Another example of a privacy issue concerns Google’s tracking of searches. Google keeps track of searches so that advertisements can be matched to your search criteria, which in effect means using people as the product: if you are not paying for an online service, you are not the customer, and you may very well be the product.

There is an ongoing discussion about what privacy means and whether it is still needed. With the rise of social networking sites, more and more people are allowing their private information to be shared publicly. On the surface, this may look like individuals choosing to list private information about themselves; below the surface, it may be the site that is sharing the information, not the individual. This is the idea of an Opt-In versus an Opt-Out situation, and many privacy statements specify which policy applies. Under an Opt-In policy, the individual’s information is not shared unless they explicitly tell the company it may be. Under an Opt-Out policy, the information will be shared unless the individual tells the company not to share it. A minimal sketch of the difference appears below.
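In code terms, the entire difference between the two regimes comes down to the default value of a consent flag. The following Python sketch is purely illustrative (the class and field names are hypothetical, not drawn from any real site’s implementation):

    from dataclasses import dataclass

    @dataclass
    class OptInAccount:
        share_data: bool = False  # nothing is shared until the user consents

    @dataclass
    class OptOutAccount:
        share_data: bool = True   # shared by default; the user must object

    def may_share(account) -> bool:
        """A site would consult this flag before passing data to partners."""
        return account.share_data

    print(may_share(OptInAccount()))   # False: silence means "do not share"
    print(may_share(OptOutAccount()))  # True: silence means "share"

Because most users never change defaults, the choice of default is effectively a choice about how many people’s data gets shared.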

Computer Reliability

In computer networking, a reliable protocol is one that provides guarantees about the delivery of data to the intended recipient(s), as opposed to an unreliable protocol, which does not notify the sender whether transmitted data was delivered. A reliable multicast protocol may ensure reliability on a per-recipient basis, and may also provide properties that relate the delivery of data to different recipients, such as total order, atomicity, or virtual synchrony.

Reliable protocols typically incur more overhead than unreliable protocols and, as a result, are slower and less scalable. This is often not an issue for unicast protocols, but it can be a problem for multicast protocols.

TCP, the main transport protocol used on the internet today, is a reliable unicast protocol. UDP, often used in computer games or other situations where speed matters and the loss of a little data is acceptable because of its transitory nature, is an unreliable protocol. A reliable unicast protocol is often also connection-oriented: TCP, for example, is connection-oriented, with the virtual circuit identified by the source and destination IP addresses and port numbers. Some unreliable protocols are connection-oriented as well; these include ATM and Frame Relay, which have historically carried a substantial part of all internet traffic. The sketch below illustrates the TCP/UDP contrast.
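The contrast between the two transports is visible directly in the sockets API. Here is a minimal, self-contained Python sketch (the port number is arbitrary, and the tiny echo server exists only so the TCP side has something to connect to):

    import socket
    import threading

    PORT = 9000  # hypothetical local port for this demo

    # TCP: connection-oriented and reliable. connect() performs a handshake,
    # and the kernel retransmits lost segments and preserves byte order.
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.bind(("127.0.0.1", PORT))
    server.listen(1)

    def echo_once():
        conn, _ = server.accept()
        with conn:
            conn.sendall(conn.recv(1024))  # echo the bytes back to the sender

    threading.Thread(target=echo_once, daemon=True).start()

    tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    tcp.connect(("127.0.0.1", PORT))
    tcp.sendall(b"reliable, ordered delivery")
    print(tcp.recv(1024))  # arrives intact, or the call reports an error
    tcp.close()
    server.close()

    # UDP: connectionless and unreliable. sendto() simply emits a datagram;
    # the sender is never told if it is lost, duplicated, or reordered.
    udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    udp.sendto(b"fire and forget", ("127.0.0.1", PORT))
    udp.close()

Note that the UDP send succeeds even if nothing is listening; that silence is precisely what “unreliable” means, and it is why applications needing delivery guarantees either use TCP or build acknowledgement and retransmission on top of UDP themselves.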