Information systems are developed to provide mechanisms for storing, accessing, sharing, and manipulating data, and the data itself is the most important part of any such system. Preserving the integrity of that data is essential to maintaining a system that users will trust enough to rely on in business. Confidentiality and security are also requirements of a well-designed information system. Organizations that store large amounts of highly sensitive data, such as hospitals, are increasingly the focus of attacks on information systems. Because of this, the policies and procedures that govern the storage and use of patient data need to be examined and adjusted accordingly.

Hospitals seek to maintain a history of patient data. This allows health care professionals to provide a higher level of care and can assist doctors in making more accurate diagnoses. Indeed, “[e]lectronic record keeping has led to increased interest in analyzing historical patient data to improve health care delivery” (Krishna, 2007, p. 654). Large data warehouses allow for better results from data mining and statistical analysis, but they also carry a higher risk of breaching patient confidentiality, since the underlying databases contain names, Social Security numbers, addresses, and medical histories.

The Health Insurance Portability and Accountability Act, or HIPAA, is used to categorize medical data and determine what is “safe” for use in research studies and what is not (Krishna, 2007, p. 655). The regulations can only do so much in protecting patient data, though. Once again, the user is ultimately responsible for the confidentiality and security of the data. A professor at the University of Pittsburgh Medical Center was recently found to have published files on the hospital’s web site that contained confidential patient information. On April 10, 2007, UPMC was informed of the breach. A “preliminary investigation has determined that the names and social security numbers of approximately 80 patients were disclosed in a professional presentation that was prepared by a former University of Pittsburgh faculty member for a medical symposium that took place in 2002” (UPMC, 2007). The presentation also contained radiology exam dates, results, and other related medical data.
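
HIPAA’s Safe Harbor de-identification method, for example, requires stripping direct identifiers (names, Social Security numbers, contact details, and full dates, among others) before records can be shared for research. The following Python sketch is illustrative only: the field names and record layout are hypothetical, and a real pipeline would have to cover all eighteen Safe Harbor identifier categories and be reviewed under HIPAA guidance.

    # Illustrative Safe Harbor-style redaction pass. Field names are
    # hypothetical; a real pipeline must handle all 18 HIPAA identifier
    # categories, not just the few listed here.
    DIRECT_IDENTIFIERS = {"name", "ssn", "address", "phone", "email"}

    def deidentify(record: dict) -> dict:
        """Return a copy of the record with direct identifiers dropped
        and full dates coarsened to year only."""
        safe = {}
        for field, value in record.items():
            if field in DIRECT_IDENTIFIERS:
                continue                      # drop direct identifiers
            elif field.endswith("_date"):
                safe[field] = str(value)[:4]  # keep only the year
            else:
                safe[field] = value
        return safe

    patient = {
        "name": "Jane Doe",
        "ssn": "123-45-6789",
        "exam_date": "2002-06-14",
        "radiology_result": "no acute findings",
    }
    print(deidentify(patient))
    # -> {'exam_date': '2002', 'radiology_result': 'no acute findings'}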

The irony of the situation is that the information was first discovered on the site in 2005 and was removed at that time. Since then, the information was “apparently inadvertently re-posted on the site” (UPMC, 2007). UPMC has made a public apology to the patients and has offered to pay for credit protection for those affected. While this is a good business move and the right thing to do, it cannot undo the exposure entirely. The professor evidently had no malicious intent in posting the information, as it was “posted in an area of the UPMC Radiology Department Web site where faculty members share academics information with other health care professionals” (UPMC, 2007), but the problem is compounded by the reach of the Internet.

The records were removed from the UPMC web site, but “computer security experts say the patients can never be entirely assured the content will be gone” (Twedt, 2007). The presentation that had been posted in 2002 was archived by The Internet Archive in California. The Archive removed the data from its site only days ago, and it is not the only Internet archive in existence.
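
This persistence can even be checked programmatically. The Internet Archive exposes a public Wayback Machine availability API that reports whether a snapshot of a given URL exists; the short Python sketch below assumes that API’s documented JSON response format, and the target URL is only a placeholder.

    # Sketch: ask the Internet Archive's availability API whether the
    # Wayback Machine holds a snapshot of a given URL. The response
    # format is assumed from the API's public documentation.
    import json
    import urllib.parse
    import urllib.request

    def wayback_snapshot(url):
        """Return the URL of the closest archived snapshot, or None."""
        api = ("https://archive.org/wayback/available?url="
               + urllib.parse.quote(url, safe=""))
        with urllib.request.urlopen(api) as resp:
            data = json.load(resp)
        closest = data.get("archived_snapshots", {}).get("closest")
        if closest and closest.get("available"):
            return closest["url"]
        return None

    # Example with a placeholder URL:
    print(wayback_snapshot("www.example.com"))

Even after content is purged from one archive, copies may survive in others, which is precisely the assurance the UPMC patients can never be given.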

Converting medical records from paper to electronic form is thus controversial, in that security becomes an even greater concern. In the United States, an initiative called VistA was started by the Veterans Administration. This program “helped transform the VA from a medical backwater into the country’s best-run, most cost-effective health care organization” (Goetz, 2007, p. 147). WorldVistA, the organization behind the WorldVistA EHR, was formed by three men who took the VistA software (which is in the public domain) and “turned it into a collaborative project, with all the hallmarks of open source” in an effort to address the fact that “[l]ess than 25 percent of US health care providers use electronic records; the rest use file folders bulging with paper” (Goetz, 2007, p. 147).

Greater efficiency, more accurate diagnoses, and fewer unnecessary treatments are just a few of the benefits of moving medical records to an all-electronic form. The government of the United Kingdom is also seeking to leverage electronic medical records and, given its national health care system, is taking the idea a step further. It is “building a national database of medical records” (Anderson, 2006, p. 13), but doctors, patients, and other organizations are fighting the effort because they feel that confidentiality will be at risk.

Just how much risk do the citizens of the United Kingdom believe a national database would incur? It is hard to tell, but compared with the risk already borne by patient data at a teaching hospital such as UPMC, perhaps there is little additional cause for worry. Drug users are concerned, however, as police access to the database could single them out for arrest and prosecution (Anderson, 2006, p. 16). Moreover, “[t]his data collection… [is] likely to stigmatize children unjustly and quite possibly breach European human rights law” (Anderson, 2006, p. 16), since school behavior history and domestic information is often stored alongside medical diagnostic information.

Health care organizations can save millions of dollars a year by implementing electronic solutions, and governments could save millions more by centralizing medical records, but what of the risks and costs of losing this data to accident or theft? This year, businesses will spend $45 billion or more on information technology security (Kirkpatrick, 2006, p. 66). Spending heavily to secure systems is not a poor business decision, but it does not, by itself, keep attackers from successfully breaking into systems.

Kevin Mitnick (2002) asserts that users are the weakest link in any information system, due in large part to the practice of social engineering. Social engineering is traditionally defined as the application of sociological principles to specific sociological problems; in the context of information systems, it is the exploitation of human psychological and sociological behaviors for personal gain (Berg, 1995). Social engineering has been a problem for years and is becoming more prevalent as we build newer and more complex systems of communication. The simple exploitation of human curiosity and known weaknesses allows for data loss and identity theft, which in turn can lead to the loss of money and trust.

While social engineering is prevalent all over the world, the United States is heavily affected by its consequences. Social engineering does not require an information system at all, but it is often used to circumvent the security and integrity that are so carefully built into such systems. Banks take great care to protect their customers’ financial data and accounts. Hospitals are constantly concerned with the privacy of patient data. Companies rely on data integrity to generate accurate reports and financial statements. As Bruce Schneier (2000) writes, “As systems get more complex, they necessarily get less secure.” This does not mean that all systems should remain small; that is not a feasible solution. In practice, it means that the larger the organization, the greater the need for security and for training employees about social engineering with respect to information systems (Lepofsky, 2004).

Other countries experience the same problems with social engineering. “Sumitomo Mitsui Bank had foiled an attempt to steal $423 million after detecting suspicious money transfers” in 2005 (Lemos, 2006, p. 96). The breach had been facilitated by a keylogging program installed on an employee’s computer. In the same article, Lemos describes workers in the United Kingdom who accepted CDs being handed out at a subway station; the CDs contained a Trojan horse that reported how many people trusted the disc enough to run it on their computers. “[A] similar study found that as many as 90 percent of people gave their passwords to a person conducting a survey” (Lemos, 2006, p. 96).

People often display a careless disregard for their own privacy and security until the consequences become clear to them. As the old adage goes, “It’s all fun and games until someone gets hurt.” When people do suffer adverse consequences from someone’s attempt to steal their personal data, they want someone held accountable. Liability is likewise a major issue for hospitals and financial institutions and the records they keep: “These companies should be held accountable for keeping sensitive data safe and required to notify all Americans when their identities are placed at risk by a breach in security” (Cocheo, 2005, p. 58).

So how do we determine which electronic systems are good and which are bad, and how do we set appropriate levels of accountability? Do hospitals split the risk with the patient? Does notifying the patient of a privacy policy indemnify the hospital? Should patients be required to follow a set of procedures and standards for their own medical records once those records leave the hospital’s care? These are but a few of the questions that arise. The concept of Lebenswelt (the life-world) links us with our social groups and social agreements. As we develop our own moral frames and act within our culture, we make determinations about what is good for us and for others. Some may have greater concern for themselves, but others must always be taken into consideration, as they may be the very people whom we must trust to secure and protect our personal data.

Because of the consequences, actual and potential, of one professor’s actions at UPMC, policies and procedures will be changed to keep the past from repeating itself. Other health care organizations will look closely at the events at UPMC and make changes as well. We learn from the world around us and, unfortunately, from mistakes made. In this instance, the social culture of academia collided with the larger culture of the United States and its concern for personal privacy. The majority took precedence, and the smaller of the two cultures must adjust its behavior to maintain trust and a standard of ethics.

References

  • Anderson, R. (2006). Under threat: patient confidentiality and NHS computing. Drugs and Alcohol Today, 6(4), 13-17.
  • Berg, A. (1995). Cracking a social engineer. LAN Times, November 6, 1995.
  • Cocheo, S. (2005). Privacy rumblings. ABA Banking Journal, 97(6), 56-59.
  • Goetz, T. (2007). The code doctors. Wired, May 2007, 147.
  • Kirkpatrick, J. (2006). Protect your business against dangerous information leaks. Machine Design, February 9, 2006, 66.
  • Krishna, R. (2007). Patient confidentiality in the research use of clinical medical databases. American Journal of Public Health, 97(4), 654-658.
  • Lemos, R. (2006). This man has a virus. PC Magazine, April 25, 2006, 96.
  • Lepofsky, R. (2004). Risk management. Risk Management Magazine, October 2004, 34-40.
  • Mitnick, K. D., & Simon, W. L. (2002). The Art of Deception: Controlling the Human Element of Security. Indianapolis, Indiana: Wiley Publishing, Inc.
  • Schneier, B. (2000). Secrets and Lies: Digital Security in a Networked World. John Wiley & Sons, Inc.
  • Spinello, R. (2003). Case Studies in Information Technology Ethics, Second Edition. Upper Saddle River, NJ: Prentice Hall.
  • Twedt, S. (2007). It’s ‘too late’ to assure security of patient data. Pittsburgh Post-Gazette. Retrieved April 20, 2007, from http://www.post-gazette.com/pg/07104/777971-114.stm
  • UPMC. (2007). UPMC moves swiftly to resolve any problems created by posting of private patient information to the UPMC web site. Retrieved April 20, 2007, from http://www.upmc.com/Communications/NewsBureau/NewsReleaseArchives/2007/April/UPMCweb.htm