
Saturday, July 4, 2009

HOW THOUGHTLESS DECISION-MAKING & SLOPPY HOUSEKEEPING NEARLY HIJACKED A HOSPITAL’S JCAHO ACCREDITATION

In 2005, twenty servers running a critical application at the busiest hospital in Illinois were consolidated into one physical server. Instead of reaping the benefits of consolidation, the hospital met with disaster. (Its name will go unmentioned, but you’ll find it out if you read on.)

Hospital management anticipated the usual benefits that virtualization brings:
  1. Easier administration. Caring for one server is easier than caring for 20.
  2. Greater confidence in the IT infrastructure. The storage that accompanies virtualization is likely to be more reliable than the distributed storage of standalone servers. This reliability is a product of newer technology and a more efficient design.
  3. Peace of mind. Virtualized storage fits well with virtualization’s business continuity features. VMware’s VMotion, for instance, lets the administrator migrate virtual machines to backup servers in real time.
Unfortunately, none of these benefits materialized. The hospital lost data and, for a time, risked both losing its JCAHO accreditation and facing punitive action from CMS.

(Click here to learn why JCAHO accreditation is important to a hospital.)

How did this happen?

After the virtual environment was created, the IT staff added standard security controls to each new virtual server. That was fine; it is standard procedure. However, some of those virtual servers lay dormant. In fact, it appears that nearly a dozen servers were created for “testing” purposes and were never removed after they had served their purpose. (I actually think most of them were created for the novelty of it. How else do you account for servers named “Tyrone” or “Michael Jordan”?) During the months that these servers lay dormant, Microsoft and the application vendor issued patches. When the dormant servers were reactivated, they were not updated with those patches. The servers thus turned into potholes or, worse, security vulnerabilities waiting to be compromised. It didn’t take long for that to happen, and the hospital lost data.

We were brought in to sort out the mess.

LESSONS LEARNED

What did we take away from this incident?

First, virtual servers must be managed individually and managed from their creation to their removal.

Second, managing these servers means staying abreast of patches, installing them as needed, and meticulously documenting the patches that were installed. These steps have to be done for the virtual environment, the guest operating system, and the application. They are especially crucial because of staff turnover: the documentation outlives the people who did the work.
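To make that record keeping concrete, here is a minimal sketch, in Python, of the kind of inventory a team could keep for each virtual server from creation to removal. The field names, the server “Tyrone,” and the dates are hypothetical; the point is simply that every patch applied to the hypervisor, the guest OS, and the application gets written down in one place, and that servers with no recent history stand out.

    from dataclasses import dataclass, field
    from datetime import date
    from typing import Optional

    @dataclass
    class PatchRecord:
        layer: str          # "hypervisor", "guest OS", or "application"
        patch_id: str       # the vendor's identifier for the patch
        applied_on: date
        applied_by: str     # who did the work; useful when staff turns over

    @dataclass
    class VirtualServer:
        name: str
        purpose: str        # why the server exists; "testing" servers get flagged
        created_on: date
        decommissioned_on: Optional[date] = None
        patches: list[PatchRecord] = field(default_factory=list)

        def is_dormant(self) -> bool:
            # A live server with no patch history at all is a candidate for
            # review or removal, exactly the kind that bit this hospital.
            return self.decommissioned_on is None and not self.patches

    # Hypothetical inventory entry for one of the oddly named test servers.
    inventory = [VirtualServer(name="Tyrone", purpose="testing", created_on=date(2005, 3, 1))]
    dormant = [vm.name for vm in inventory if vm.is_dormant()]
    print("Review or remove:", dormant)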

Finally, management of the virtualized data center should be in capable hands. The integrator may have configured the virtual environment properly when it was created, but things change over time, and someone has to take ownership of keeping up with those changes. In the hospital’s case, the virtual environment unraveled in steps. Visualize these: (1) a new appliance was installed, (2) a new server was created, (3) a new application was implemented, and (4) Microsoft issued more security patches. All of these events most likely took place, and failing to update the relevant pieces, or updating them incorrectly, would have caused problems. Note that there are two hurdles: (1) identifying the pieces that need to be updated and (2) doing the updates correctly. In the end, we discovered two network links that were dead ends; we think those links had prevented two or more virtual servers from communicating.

While that was a technical AHA!, the bigger picture shows the consequences of a thoughtless decision. The hospital had stopped paying maintenance fees to the integrator and attempted to maintain the environment on its own. This was unwise, since its IT department did not have personnel trained for the job. The VLAN’s configuration developed potholes and security was compromised. That is how a combination of thoughtless decision-making and sloppy housekeeping nearly hijacked a hospital’s JCAHO accreditation and risked punitive action from CMS. (It was a major contributing factor: during that period, the hospital was cited for numerous violations.)




Sunday, September 28, 2008

LESSONS FROM CONDUCTING A SECURITY GAP ANALYSIS

There are many reasons for ensuring that you have a secure information system. It becomes a question of how instead of why. How do you create and maintain a secure system?

I participated in my first security gap analysis project in 2006, blogged about it that year, lost that blog, and found my notes again. It was an eye-opening experience, especially since it was conducted in one of the largest hospitals, public or private, in the country. The hospital serves the second most populous county in the U.S. According to 2006 U.S. Census Bureau estimates, the county had 5.3 million residents, more than the individual populations of 29 U.S. states and more than the combined populations of the six smallest states.

Starting with what you have, the first step is to create a baseline: a model of your expectations about the security of your information system. If your business belongs to one of the industries governed by laws and regulations, then start with the security requirements of those laws and regulations.

A hospital, for instance, would be directly governed by the Health Insurance Portability and Accountability Act (HIPAA). It is also subject to other regulations, such as the eDiscovery rules, but we will keep it simple by focusing on HIPAA alone.
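To make the baseline idea concrete, here is a minimal sketch in Python, assuming the baseline is kept as a simple checklist grouped by the HIPAA Security Rule’s safeguard categories. The specific control statements below are illustrative examples I made up for this post, not a complete reading of the rule.

    # Hypothetical baseline: each entry is a requirement the hospital expects
    # to meet, grouped by the HIPAA Security Rule's safeguard categories.
    baseline = {
        "administrative": [
            "documented risk analysis performed within the last 12 months",
            "security awareness training for all workforce members",
        ],
        "physical": [
            "server room access restricted to badged IT staff",
        ],
        "technical": [
            "unique user IDs on every system that touches patient data",
            "automatic logoff on unattended workstations",
            "audit logs enabled on systems storing electronic protected health information",
        ],
    }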

The second step is to categorize the sensitivity of your data and identify its source, its location within the system, how it’s accessed, and who can access it.

Sensitive data can take the form of intellectual property. For a hospital, sensitive data is frequently legally protected. One example is a patient’s X-ray images.
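Here is a minimal sketch of what that second step can produce for a single data asset, using the X-ray example. The fields and the values are hypothetical; the point is that source, location, access method, and authorized roles are all recorded in one entry.

    from dataclasses import dataclass

    @dataclass
    class DataAsset:
        name: str                    # what the data is
        sensitivity: str             # label under the hospital's own scheme
        source: str                  # where the data originates
        location: str                # where it lives inside the system
        access_method: str           # how it is reached
        authorized_roles: list[str]  # who may see it

    # Hypothetical entry for the X-ray example above.
    xray_images = DataAsset(
        name="patient X-ray images",
        sensitivity="restricted",
        source="radiology imaging systems",
        location="image archive in the data center",
        access_method="radiology viewing application",
        authorized_roles=["radiologist", "attending physician"],
    )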

Armed with this information, you can begin your gap analysis. Before this discussion goes further, it must be understood that gap analysis is an ongoing process. The environment is constantly changing. Your information system is constantly changing with it and, naturally, your security gaps are changing as well.

Comparing your actual practices with security requirements will identify the gaps in your system. Once identified, the gaps can be prioritized (by severity, for instance). Then a plan can be created for eliminating (or at least minimizing) those vulnerabilities.
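A minimal sketch of that comparison, assuming the requirements and the observed practices have already been written down as simple lists, could look like the following. The control names and severity scores are made up for illustration.

    # Hypothetical inputs: required controls (with a severity if missing)
    # versus the controls actually observed during the analysis.
    required = {
        "unique user IDs on clinical workstations": 5,
        "audit logs enabled on the lab system": 4,
        "automatic logoff on unattended workstations": 3,
    }
    observed = {"audit logs enabled on the lab system"}

    # A gap is any requirement with no matching practice; list the worst first.
    gaps = {control: severity for control, severity in required.items()
            if control not in observed}
    for control, severity in sorted(gaps.items(), key=lambda item: item[1], reverse=True):
        print(f"severity {severity}: {control}")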

Gap analysis is a specialized form of risk analysis. Risk analysis recognizes the fact that risks are everywhere and that you have limited resources to deal with them. The goal of risk analysis, therefore, is to learn how to deploy your resources in the most effective manner to eliminate or minimize the worst or most likely threats.
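One common way to put numbers on that idea, sketched here with made-up figures, is to score each threat for likelihood and impact and spend your limited resources on the largest products first.

    # Hypothetical threats scored 1-5 for likelihood and impact.
    threats = [
        ("stolen credentials", 4, 5),
        ("unpatched test server compromised", 3, 5),
        ("misplaced backup tape", 2, 4),
    ]
    # Risk score = likelihood x impact; address the largest scores first.
    for name, likelihood, impact in sorted(threats, key=lambda t: t[1] * t[2], reverse=True):
        print(f"{likelihood * impact:2d}  {name}")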

It is best to approach gap analysis as a project, and like any project, it needs senior management’s support. Security gap analysis must be conducted on a regular basis. It must be thorough and objective. The degree of thoroughness establishes the scope of the analysis. Will the project include physical as well as electronic security? Will it be limited to customer-facing applications?


Objectivity requires a fresh set of eyes. It wouldn’t make sense for an accountant to audit himself. It makes a lot of sense therefore to hire an outside firm to lead the project.

These are the lessons I learned from conducting a security gap analysis at one of the largest hospitals, public or private, in the country.

Our presence was announced with a bang! When you stage a system break-in, attack the system the way a team of hackers would; a team attack is just as likely in real life as a solitary attempt. The ease and speed of our break-in convinced the hospital’s administration of the risks it faced.

Your project team should have members from different disciplines. I came away convinced that if the core team could have only two groups, those two should be your IT and HR departments. Why HR? Because people are the primary source of vulnerabilities.

Hospitals are very politicized organizations. In addition to having senior management’s blessing, we created a RACI matrix that was jointly accepted by all department heads.

RACI stands for Responsible-Accountable-Consulted-Informed. A RACI matrix will identify the authority and responsibility of all roles involved in the project. We had determined that our scope was going to be limited to electronic security and to customer-facing applications only. Due to the size of the hospital and the number of applications it ran, our gap analysis focused on the two most heavily implemented applications: lab and accounting.
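As an illustration of what a RACI matrix looks like in practice, here is a small hypothetical slice of one, written as a Python mapping. The activities and the parties named are examples for this post, not the actual matrix we used.

    # R = Responsible, A = Accountable, C = Consulted, I = Informed
    raci = {
        "stage the break-in test": {
            "R": "outside consultants", "A": "CIO",
            "C": "IT security", "I": "department heads"},
        "review lab application access": {
            "R": "IT security", "A": "CIO",
            "C": "lab manager", "I": "HR"},
        "approve remediation budget": {
            "R": "CIO", "A": "hospital administrator",
            "C": "finance", "I": "project team"},
    }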

This was the first gap analysis conducted at this hospital, and the spotlight was on it. (And did it ever need it!)

WE ANALYZED THE GAP IN FIVE AREAS

FIRST AREA

AAA – Authentication, Authorization, and Accounting on an enterprise level. This included single sign-on, a primary aspect of federated identity. Our goal was to standardize the security infrastructure. We discovered numerous instances where Nurse-A could log in at Station-1, stay logged in while logging in again as herself at Station-2, and be granted a different access level.
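A minimal sketch of the behavior we were driving toward: the access level is decided once, per user, from a central policy, so the answer cannot differ between Station-1 and Station-2. The user IDs, roles, and levels below are hypothetical.

    # Hypothetical central policy: one access level per user, regardless of station.
    access_policy = {"nurse_a": "clinical-read-write", "clerk_b": "billing-read-only"}

    def access_level(user_id: str, station: str) -> str:
        # The station is logged for accounting but never changes the decision.
        level = access_policy.get(user_id, "no-access")
        print(f"audit: {user_id} at {station} granted {level}")
        return level

    assert access_level("nurse_a", "Station-1") == access_level("nurse_a", "Station-2")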

All current authentication processes were reviewed. Possible vendor solutions were evaluated. A general implementation plan was developed.

SECOND AREA

Awareness. How security-conscious are the employees? Did they know about the different security levels of information?
  1. Unclassified
  2. Classified
  3. Confidential
  4. Restricted
  5. Secret
  6. Top Secret
Our goal was to heighten the security awareness of workers throughout the organization: make it clear that security is everyone’s responsibility and request their cooperation. A regular familiarization course was developed, and all employees must attend it every six months. A hotline was also established.

THIRD AREA

Incident Notification & Response. The security awareness course and the hotline are just two of the responsibilities of a new IT-based group. Our goal was to create a first-response team and a proactive overseer of enterprise security. The group did not make policy; it implemented policy. At the same time, it tracked actual user practices, compared them to best practices, and submitted progress reports to the Chief Security Officer (a newly created position).

FOURTH AREA

Technical Security. We conducted a comprehensive review of the existing security framework, which covered firewalls, DMZs, intrusion detection & prevention tools, and the like. Security logs were audited. Patch management was taken seriously. Password policies were enacted. Our goal was to optimize the hospital’s technical security, and these efforts were focused primarily on the hospital’s data center. Technical security briefly touched on Disaster Recovery, but DR was going to be a separate project.
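As one small example of the kind of log auditing involved, here is a rough Python sketch that flags accounts with repeated failed logins. The log format, account names, and threshold are all hypothetical.

    from collections import Counter

    # Hypothetical log lines in the form "timestamp,user,event".
    log_lines = [
        "2006-05-01T08:02:11,nurse_a,login_failed",
        "2006-05-01T08:02:40,nurse_a,login_failed",
        "2006-05-01T08:03:05,nurse_a,login_failed",
        "2006-05-01T08:04:19,clerk_b,login_ok",
    ]

    failures = Counter(line.split(",")[1] for line in log_lines
                       if line.endswith("login_failed"))
    THRESHOLD = 3
    for user, count in failures.items():
        if count >= THRESHOLD:
            print(f"review account {user}: {count} failed logins")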

FIFTH AREA

Best Practices. Our objective was to train users to work according to best practices. This was easier said than done, since it amounted to change management and most of the staff were lifers, i.e., employees of long tenure. We had to start over several times. In the end, we learned that the best way to coax them to accept change was to listen to them first. This is the area where our business analysts really proved their worth!

CONCLUSION

Several of the areas above, e.g., Technical Security and Best Practices, took longer and proved more difficult than expected. The entire project took eight months, two months past schedule and 40% over budget! The core project team consisted of three full-time members. I was one of them.

Would I consider it successful? Yes. We achieved the project's goals. Were the customers happy? The end-users were; management was not. From the beginning, we told senior management that the schedule was unrealistic, especially because an old application software system was being ripped out at the same time. Delays cost money.

At the project's onset, they practiced an all-too-familiar but ill-advised tactic: they asked us for a "realistic" budget. We were outside consultants, specifically subcontractors of a (politically connected) contractor. We used parametric and bottom-up estimates, got agreement from our contractor, and jointly submitted the figure to hospital management.
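For readers unfamiliar with the two estimating techniques, here is a toy sketch with made-up dollar figures: the bottom-up estimate sums the individual work packages, the parametric estimate scales a unit cost by the number of applications in scope, and the last line shows what an arbitrary 30% cut, like the one described next, does to the result.

    # Hypothetical bottom-up estimate: sum the individual work packages (dollars).
    work_packages = {"AAA review": 120_000, "awareness program": 60_000,
                     "incident response setup": 90_000, "technical security audit": 150_000}
    bottom_up = sum(work_packages.values())

    # Hypothetical parametric estimate: cost per application reviewed times the count.
    cost_per_application = 200_000
    applications_in_scope = 2
    parametric = cost_per_application * applications_in_scope

    estimate = max(bottom_up, parametric)   # take the more conservative figure
    after_cut = estimate * 0.70             # the effect of a 30% reduction
    print(bottom_up, parametric, estimate, after_cut)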

I remember the incident vividly. We were in the hospital administrator's office. He glanced at the estimate, asked us a few questions, crossed out our figure, deducted 30%, wrote down the new amount, and signed off beside his scribbled figure. He also slashed a month from our projected schedule.

