Modern, information-driven societies require constant access to their information. In the United States, the proliferation of devices that allow users to access content on the Internet, intranets and every network in between has raised the bar for acceptable levels of performance from computing systems. Service level agreements now routinely demand better than 99% uptime for critical systems, and often for non-critical systems as well. Not only must the information be available, it must also be reliable. Computer and information systems must provide access to data while at the same time protecting it from those who would misuse it.
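
To put such figures in perspective, a quick back-of-the-envelope calculation (an illustration only, not drawn from the sources cited here) shows how little downtime an uptime guarantee actually permits over a year:

    # Illustrative only: converts an uptime percentage from a service level
    # agreement into the maximum downtime it permits per year.
    HOURS_PER_YEAR = 365 * 24  # 8,760 hours

    for uptime_pct in (99.0, 99.9, 99.99):
        downtime_hours = HOURS_PER_YEAR * (1 - uptime_pct / 100)
        print(f"{uptime_pct}% uptime allows about {downtime_hours:.1f} hours of downtime per year")

    # 99.0%  -> roughly 87.6 hours (more than three and a half days)
    # 99.9%  -> roughly 8.8 hours
    # 99.99% -> roughly 0.9 hours

Even the weakest of these guarantees leaves only a few days of permissible downtime per year, which is why the questions of reliability and liability discussed below carry real economic weight.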

Companies must constantly evaluate their level of liability and manage the reliability and safety of their systems. Spinello discusses several issues of reliability, such as software “bugs,” which are an inherent problem with any piece of software and are to be expected, within reason. The programmers of the software are, however, expected to assume responsibility for fixing those bugs and improving upon the existing code. Another issue Spinello examines is the reliability of information itself, and how much liability should be placed upon companies when the information they provide is misleading or incorrect.

In case 7.7, Spinello writes about Microsoft Windows 95, code-named Chicago at the time. Microsoft disseminated information to the public about a large-scale beta test of the highly anticipated new operating system and also gave a hard release date for it. The initial release date slipped, as most testers had anticipated, but Microsoft vowed to deliver the product in December of 1994. A competitor at the time, IBM, was preparing a new release of its own operating system and shipped it in the fall of 1994. The problem arose when consumers held back from purchasing the new IBM OS in favor of waiting for the Microsoft Chicago release (2003, pp. 193-194).

IBM expected consumers to start buying its OS when Microsoft missed the release date yet again and renamed the product Windows 95. That never happened, however, due in large part to Microsoft’s marketing power. Microsoft’s long-awaited operating system was finally released in August of 1995, but the damage to the marketplace and to IBM had already been done (2003, pp. 195-196). Was Microsoft wrong in its marketing tactics? Some would argue strongly that it was. The tactics misled not only consumers but the entire computing industry, which was preparing for and awaiting the release of the operating system.

Microsoft’s reach has since expanded to encompass the entire globe, but many markets had yet to be touched by the software giant at the time Windows 95 was released. The delayed release may therefore also have hindered the advancement of computing and information systems worldwide. Once it had gained the largest slice of the market, Microsoft had to confront other issues: reliability, liability and security. Microsoft is known for its problems with computer security, and with the release of Windows Vista it has made substantial improvements to code security and stability, but it is not the only software vendor whose products warrant scrutiny.

Other software makers must also be examined for the quality of their products. One sector in particular that is at risk in the United States and other countries is healthcare. Most healthcare software is extremely specialized. In the past, software vendors would sell their wares to healthcare organizations and then leave the information technology departments of those organizations to handle any problems that arose on their own. “As IT becomes more entwined with clinical care, providers have become savvier about pressing vendors to share legal risk for third-party claims or damages that might arise if something goes wrong” (Anonymous, 2006, p. 3).

This seems like a fair request, but software vendors know that the nature of software guarantees a certain number of bugs, which raises the risk to the vendor. Even so, it is not unreasonable to expect that any crippling system bugs be removed from the final release product. Asking software vendors to assume some liability would help drive the quality of the software upward. “Most vendors seek to cap their liability at the amount the provider paid for the product or service - a number that is ‘woefully inadequate to protect the interests of the health system’” (Anonymous, 2006, p. 3).

Security expert Bruce Schneier agrees that vendors and service providers need to be held accountable for problems with their systems in order for the systems to improve. “For example, banks will only get serious about identity theft if they’re legally liable for unauthorized withdrawals, and software vendors will take security seriously only when they can be sued for loss because of buggy software” (Hayes, 2006, p. 54). Economic accountability will be a driving factor in changing the way information systems are used and in determining who is liable for issues of security and reliability, but what of the users of the information systems?

When it comes to security, humans are, after all, the weakest link. From passwords written on Post-it notes stuck to monitors, to unlocked and unattended workstations, to a plain willingness to offer up private information unsolicited, users are a security nightmare within organizations. Hayes offers an interesting take on how to handle users and build accountability within organizations in order to encourage security-conscious behavior.

“Suppose that instead of handling security problems invisibly, we made them highly visible to users. Suppose when one of those problem users opened a virus-laden email attachment or triggered a firewall reaction or plugged a thumb drive into a USB port, that didn’t just create an entry in a security log. Suppose it instantly shut down network access for the user’s entire workgroup. Oh, there would be screams… Is that sneaky? Sure. Draconian? It has to be. It will work only if the consequences are immediate and - at least to all appearances - automatic” (2006, p. 54).

What a good idea that is, and one firmly grounded in ethics. Hayes suggests that we make the consequences visible and immediate, whether good or bad. In the case of network security, the consequences of lackadaisical actions are often bad, but why should they punish those who are doing the right things? Why allow improper actions to degrade network performance for those who practice good security habits? Why risk incurring many more man-hours of repair and cleanup when the problem can be isolated and contained? This solution puts the full responsibility in the users’ hands.
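
As a minimal sketch of how such an immediate, visible consequence might be automated, consider the following illustration. The event names, the workgroup lookup and the network-control call are all hypothetical assumptions for the sake of the example, not anything described by Hayes:

    # Hypothetical sketch of Hayes-style "visible consequences": when a member of a
    # workgroup triggers a risky security event, network access is suspended for the
    # whole workgroup instead of the event being quietly logged. All names here are
    # illustrative assumptions, not a real API.

    RISKY_EVENTS = {"virus_attachment_opened", "firewall_block", "unauthorized_usb"}

    # Example mapping of users to workgroups (in practice this would come from a
    # directory service).
    WORKGROUPS = {
        "alice": "accounting",
        "bob": "accounting",
        "carol": "engineering",
    }

    def suspend_network_access(workgroup: str) -> None:
        # Placeholder for whatever mechanism actually cuts off the workgroup
        # (VLAN change, switch-port shutdown, firewall rule, etc.).
        print(f"Network access suspended for workgroup '{workgroup}'")

    def handle_security_event(user: str, event: str) -> None:
        # Make the consequence immediate and visible, not just a log entry.
        if event in RISKY_EVENTS:
            workgroup = WORKGROUPS.get(user)
            if workgroup is not None:
                suspend_network_access(workgroup)

    handle_security_event("bob", "unauthorized_usb")
    # -> Network access suspended for workgroup 'accounting'

Because the entire workgroup feels the consequence at once, the pressure to behave securely comes from peers rather than from a distant security office, which is precisely the self-regulating dynamic described below.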

Since groups tend to be self-regulating, poorly performing members will find themselves on the outside looking in as the natural drive to optimize a group’s potential takes over. This is not the only solution to the problem, however. “How the ethical problems in software security can be solved is worth some profound research effort on the application level” (Takanen et al., 2004, p. 109). Unlike accounting, software development has no generally accepted practices by which vendors must abide. While software engineers may address issues of vulnerability and reliability at the individual level, this does not always translate up the food chain, and once removed from a particular portion of a project, a software engineer may never revisit the same code to correct outstanding issues.

From a legal perspective, the United States has some way to go to resolve the problem of liability, especially in the software industry. Software products and systems are not only used to process secure transactions and enable consumers to manipulate data; they are also used in environments where human lives are at stake and where sensitive private data is handled by many different people at all hours of the day. Negative feedback has been shown to work less effectively than positive feedback where the human psyche is concerned, so should software vendors be offered incentives to provide better products and assume more liability, or should they be forced by law to accept a minimum level of responsibility, with accountability increasing according to the industry and the application of the product?

References

  • Anonymous. (2006). Sharing the risk with IT vendors. Trustee, 59(2), p. 3.
  • Hayes, F. (2006). Control Charlie. Computerworld, 40(39), p. 54.
  • Javitt, G. and Hudson, K. (2006). Federal regulation of genetic testing neglect. Issues in Science and Technology, 22(3), pp. 59-66.
  • Samuelson, P. (2005). The Supreme Court revisits the Sony safe harbor. Communications of the ACM, 48(6), pp. 21-25.
  • Spinello, R. (2003). Case Studies in Information Technology Ethics, Second Edition. Upper Saddle River, NJ: Prentice Hall.
  • Takanen, A., Vuorijarvi, P., Laakso, M., and Roning, J. (2004). Agents of responsibility in software vulnerability processes. Ethics and Information Technology, 6, pp. 93-110.
  • Vedder, A. and Wachbroit, R. (2003). Reliability of information on the Internet: Some distinctions. Ethics and Information Technology, 5(4), pp. 211-215.