RisknCompliance Blog

Thoughts On Delivering Meaningful Outcomes in Security and Privacy


No, Security-Privacy Is Not A Hindrance To TeleHealth Adoption

Since I follow the telehealth space rather closely from a security/privacy perspective, I was drawn yesterday to this article titled “How Health Privacy Regulations Hinder Telehealth Adoption”. From my experience, I know telehealth has many obstacles to overcome, but I have never thought of security-privacy as prominent among them. I have certainly not thought of security-privacy as a hindrance to its adoption, as the article’s title suggests.

I read the article and then downloaded the original AHA paper (pdf) the article is based on.

It didn’t take long for me to conclude that the article’s title was misplaced, in my opinion.

The AHA paper is nicely written and, in my view, very objective. It covers a number of areas that are true challenges to telehealth adoption, but it does not portray security-privacy as a hindrance, contrary to the article’s title. Rather, it discusses specific security-privacy considerations for planning and implementation (see page 10 of the pdf). These considerations are no different from what one would need to address when deploying any new type of technology.

The considerations are the right things to do if you were to have any confidence in your ability to safeguard patient privacy and safety. Sure, there are some regulatory aspects (discussed on page 11) but these are no different from what we need for protecting Protected Health Information (PHI) in any form.

In conclusion, I think the author should consider changing the title, lest it add to the FUD, of which there is no shortage in security, as we know.

Patient Portals – Make or Break

Like many other Health IT initiatives today, the primary driver for patient portals is regulatory in nature. Specifically, it is the Meaningful Use requirements related to view, download, or transmit (VDT) and secure messaging. However, the biggest long-term benefit of the portals might be what they can do for patient engagement and, as a result, for the providers’ business in the increasingly competitive and complex healthcare marketplace in the United States.

The objective of this post is to discuss the security aspects of patient portals, specifically, why the current practices in implementing these portals could pose a big problem for many providers. More importantly, we’ll discuss specific recommendations for due diligence actions that the providers should take immediately as well as in the longer term.

Before we discuss the security aspects, I think it is important to set the stage with some background on patient portals. Accordingly, this post covers the following areas in the indicated sequence:

1. What are patient portals and what features do they (or could) provide?

2. Importance of patient engagement and the role of patient portals in patient engagement

3. The problem with the current state in Health IT and hence the risks that the portals bring

4. Why relying on regulations or vendors is a recipe for certain failure

5. What can/should we do (right now and in the future) – Our recommendations

1. What are Patient Portals and what features do they (or could) provide?

I will draw on information from the ONC site for this section. Here is the pertinent content, quoted from the ONC site:

A patient portal is a secure online website that gives patients convenient 24-hour access to personal health information from anywhere with an Internet connection. Using a secure username and password, patients can view health information such as:

• Recent doctor visits

• Discharge summaries

• Medications

• Immunizations

• Allergies

• Lab results

Some patient portals also allow patients to:

• Exchange secure e-mail with their health care teams

• Request prescription refills

• Schedule non-urgent appointments

• Check benefits and coverage

• Update contact information

• Make payments

• Download and complete forms

• View educational materials

The bottom line is that patient portals provide a means for patients to access or post sensitive health or payment information. In the future, their use could expand further to include integration with mobile health (mHealth) applications and wearables. Last week’s news from Epic should provide a sense of things to come.

2. Importance of patient engagement and the role of patient portals in patient engagement

As we said above, the primary driver for patient portals so far has been the Meaningful Use requirements related to view, download, or transmit (VDT) and secure messaging. However, the biggest long-term benefit of the portals might be what they can do for patient engagement and, in turn, as a key business enabler for providers.

The portals are indeed a leading way for providers to engage with patients, as can be seen in this graphic from the 2014 Healthcare IT Priorities report published by InformationWeek [1].

[Figure: patient engagement initiatives, from InformationWeek’s 2014 Healthcare IT Priorities report]

Effective patient engagement of course can bring tremendous business benefits, efficiencies and competitive edge to providers.

From a patient’s perspective, the portals can offer an easier method for interacting with their providers, which in turn has its own benefits for patients. To quote from the recently released HIMSS report titled “The State of Patient Engagement and Health IT” [2]:

A patient’s greater engagement in health care contributes to improved health outcomes, and information technologies can support engagement.

In essence, the importance of patient portals as a strategic business and technology solution for healthcare providers doesn’t need too much emphasis.

3. The problem with the current state in Health IT and hence the risks that the portals bring

In my view, the quote below from the cover page of the 2014 Healthcare IT Priorities report published by InformationWeek [1] pretty much sums it up for this section.

Regulatory requirements have gone from high priority to the only priority for healthcare IT.

We all know what happens when security or privacy programs are built and operated purely to meet regulatory or compliance objectives. It is shaky ground at best. We talked about it in one of our blog posts last year, when we called for a change in tone and approach to healthcare security and privacy.

4. Why relying on regulations or vendors is a recipe for certain failure of your security program

It is probably safe to say that security in design and implementation is not the uppermost concern of Health IT vendors today (certainly not of patient portal vendors, in my opinion). To make it easy for them, we have lackluster security/privacy requirements in the regulation for certifying Electronic Health Records.

Consider the security and privacy requirements (highlighted yellow in this pdf) that vendors have to meet in order to obtain EHR certification today. You will see that the certification criteria are nowhere near enough to assure that the products are secure before the vendors start selling them to providers. And the administration and enforcement of the EHR certification program has been lacking in the past as well.

If you consider a risk-relevant set of controls such as the Critical Security Controls for Effective Cyber Defense, you will see that the certification criteria are missing the following key requirements needed to be effective in today’s security threat landscape:

· Vulnerability assessments and remediation

· Application Security Testing (Including Static and Dynamic Testing)

· Security training for developers

· Penetration tests and remediation

· Strong Authentication

Think about using these applications to run your patient portals!

If you are a diligent provider, you will want to make sure that the vendor has met the above requirements even though the certification criteria do not include them. The reality though may be different. In my experience, providers often do not perform all the necessary due diligence before purchasing the products.

And then, when providers implement these products and attest to Meaningful Use, they are expected to perform a security risk analysis (see the green-highlighted requirement in the pdf). In my experience again, a risk analysis is not performed in all cases. Of those that are performed, many are not really risk assessments.

The bottom line? … Many providers take huge risks in going live with patient portals that are neither secure nor compliant (not compliant because they didn’t perform a true risk analysis and mitigate the risks appropriately).

If you look again (in section 1 above) at the types of information patient portals handle, it is not far-fetched to say that many providers may have security breaches waiting to happen. It is even possible that some have already been breached but don’t know it yet.

Considering that patient portals are often gateways to the more lucrative (from a hacker’s standpoint) EHR systems, intruders may take their time to escalate privileges and move laterally to other systems once they have a foothold in the portal. And considering that detecting intrusions is very often the Achilles heel of even the most well-funded and sophisticated organizations, this should be a cause for concern at many providers.

5. What can/should we do (right now and in the future) –  Our recommendations

It is time to talk about what really matters and some tangible next steps …

What can or must we do immediately and in the future?

Below are our recommendations for immediate action:

a) If you didn’t do the due diligence during procurement of your patient portal product, you may want to ask the vendor for the following:

· Application security testing (static and dynamic) and remediation reports of the product version you are using

· Penetration testing results and remediation status

· If the portal doesn’t provide risk-based strong (or adaptive) authentication for patient and provider access, you may insist on the vendor committing to include that as a feature in the next release.

b) If you didn’t perform a true security risk analysis (assessment), please perform one immediately. Watch out for the pitfalls as you plan and perform the risk assessment. Make sure the risk assessment includes (among other things) running your own vulnerability scans and penetration tests both at network and application layers.

c) Make sure you have a prioritized plan to mitigate the discovered risks and of course, follow through and execute the plan in a timely manner.

Once you get through the immediate action steps above, we recommend including the action items below in your security program for the longer term:

a) Implement appropriate due diligence security measures in your procurement process or vendor management program.

b) Have your patient portal vendor provide you the detailed test results of the security requirements (highlighted yellow in the attachment) from the EHR certifying body. You may like to start here (Excel download from ONC) for information on the current certification status of your patient portal vendors and the requirements they are certified for.

c) Ask the vendor for application security (static and dynamic) and pen test results for every release.

d) Segment the patient portal appropriately from the rest of your environment (also a foundational prerequisite for PCI DSS scope reduction if you are processing payments with credit/debit cards).

e) Perform your own external/internal pen tests every year and scans every quarter (Note: If you are processing payments with your patient portal, the portal is likely in scope for PCI DSS, which requires you to do this anyway).

f) Conduct security risk assessments every year or upon a major change (this also happens to be a compliance requirement from three different regulations that apply to the patient portal – the HIPAA Security Rule, Meaningful Use, and PCI DSS if processing payments using credit/debit cards).

g) If you use any open source modules, either on your own or within the vendor product, make sure to apply patches to them promptly as they are released.

h) Make sure all open source binaries are security tested before they are used to begin with.

i) If the vendor can’t provide support for strong authentication, look at your own options for providing risk-based authentication to consumers. In the meanwhile, review your password processes (including password creation, reset steps, etc.) to make sure they are not too burdensome on consumers and yet are secure enough.

j) Another recommended option is to allow users to authenticate using an external identity (e.g. Google, Facebook, etc., using OpenID Connect or similar mechanisms), which may actually be preferable from the user’s standpoint as they don’t have to remember a separate log-in credential for access to the portal. Just make sure to strongly recommend that they use the 2-step verification that almost all of these social media sites provide today.

k) Implement robust logging and monitoring in the patient portal environment (Hint: Logging and monitoring is not necessarily about implementing just a “fancy” technology solution. There is more to it, which we’ll cover in a future post; a minimal illustration follows below).
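
To make the hint above a little more concrete, here is a minimal sketch (in Python, for illustration only) of what application-level audit logging for a hypothetical patient portal might look like. The event names and fields are assumptions made for this example and are not drawn from any particular portal product; the point is simply that useful monitoring starts with structured, reviewable records of who did what and when.

```python
import json
import logging
from datetime import datetime, timezone

# Minimal audit logger for a hypothetical patient portal (illustrative only).
audit_logger = logging.getLogger("portal.audit")
audit_logger.setLevel(logging.INFO)
handler = logging.FileHandler("portal_audit.log")
handler.setFormatter(logging.Formatter("%(message)s"))
audit_logger.addHandler(handler)

def audit_event(action, user_id, patient_id, outcome, source_ip):
    """Write one structured audit record; these records feed review and alerting."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "action": action,        # e.g. "login", "view_lab_results", "rx_refill_request"
        "user_id": user_id,
        "patient_id": patient_id,
        "outcome": outcome,      # "success" or "failure"
        "source_ip": source_ip,
    }
    audit_logger.info(json.dumps(record))

# A failed login is the kind of event a monitoring process should count and alert on
# (for example, many failures from one source IP in a short window).
audit_event("login", "jdoe", None, "failure", "203.0.113.10")
```

Technology aside, someone still has to review these records and act on them, which is the part that is so often missing.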

In summary, there is just too much at stake for providers and patients alike to let the status quo of security in patient portals continue the way it is. We recommend all providers take priority action considering the lasting and serious business consequences that could result from a potential breach.

As always, we welcome your thoughts, comments or critique. Please post them below.

References

[1] 2014 Healthcare IT Priorities, InformationWeek
[2] The State of Patient Engagement and Health IT, HIMSS (pdf download)

Recommended further reading

Patient engagement – The holy grail of meaningful use
Patient portal mandate triggers anxiety
MU Stage 2 sparks patient portal market

Hello PCI SSC… Can we rethink?

This is a detailed follow-up to the quick post I wrote the Friday before the Labor Day weekend, based on my reading at the time of the PCI SSC’s Special Interest Group paper on “Best Practices for Maintaining PCI DSS Compliance” [1], published just the day before.

The best practices guidance is by and large good, though nothing in it is necessarily new or groundbreaking. The bottom line of what the paper discusses is the reality that any person or organization with electronic information of some value (and who doesn’t have that today?) must face: there is no substitute for constant and appropriate security vigilance in today’s digital world.

That said, I am not sure this guidance (or anything else PCI SSC has done so far with PCI DSS, including the new version 3 taking effect at the end of the year) is going to result in the change we need… the change in how PCI organizations are able to prevent, or at least detect and contain, the damage caused by security breaches in their cardholder data environments (CDEs). After all, we have had more PCI breaches (both in number and scale) over the past year than at any other time since PCI DSS has been in effect.

One is then naturally forced to question why or how PCI SSC expects a different result if PCI DSS itself hasn’t changed fundamentally over the years. I believe a person no less famous than Albert Einstein had something to say about doing the same thing over and over again and expecting different results.

If you have had anything to do with PCI DSS over the last several years, you are probably very familiar with the criticism it has received from time to time. For the record, I think PCI DSS has been a good thing for the industry, and it isn’t too hard to recognize that security in the payment card industry could be much worse without the DSS.

At the same time, it is also not hard to see that PCI DSS hasn’t fundamentally changed in its philosophy and approach since its inception in 2006 while the security threat environment itself has evolved drastically both in its nature and scale over this period.

The objective of this post is to offer some suggestions for making PCI DSS more effective and meaningful for the amount of money and overhead that merchants and service providers have to spend on it year after year.

Suggestion #1: Call for Requirement Zero

I am glad the best practices guidance [1] highlights the need for a risk-based PCI DSS program. It is also pertinent to note that risk assessment is included as a Milestone 1 item in the Prioritized Approach tool [2], though I doubt many organizations use the suggested prioritization.

In my opinion, however, you are not emphasizing the need for a risk-based program if your risk assessment requirement is buried inconspicuously under requirement #12 of the 12 requirements (12.2 to be specific). If we are to direct merchants and service providers to execute a risk-based PCI DSS program, I believe the best way to do it is by making risk assessment the very first thing they do, soon after identifying and finalizing the CDE they want to live with.

As such, I recommend introducing a new Requirement Zero to include the following:

  • Identify the current CDE and try to reduce the CDE footprint to the extent possible
  • Update the inventory of system components in the CDE (Current requirement 2.4)
  • Prepare the CDE network diagram (current requirement 1.1.2) and CHD flow diagram (current requirement 1.1.3). I consider this to be a critical step. After all, we can only safeguard something valuable if we know it exists (a minimal sketch of such an inventory and flow check follows this list). We also talked about how the HIPAA Security Rule could use this requirement in a different post.
  • Conduct a Risk Assessment (Current requirement 12.2)
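
To illustrate the point about knowing what exists, here is a minimal sketch (Python, illustration only) of keeping the CDE inventory and CHD flows as data that can be cross-checked. All system names here are hypothetical; the idea is simply that an inventory and flow diagrams maintained in a checkable form make gaps visible before the risk assessment even starts.

```python
# Minimal sketch: the CDE inventory and cardholder data (CHD) flows kept as data,
# so the two can be cross-checked. All system names here are hypothetical.
cde_inventory = {
    "pos-terminal-01": {"type": "POS terminal", "stores_chd": False},
    "payment-switch":  {"type": "application server", "stores_chd": False},
    "tokenization-db": {"type": "database", "stores_chd": True},
}

chd_flows = [
    ("pos-terminal-01", "payment-switch"),
    ("payment-switch", "tokenization-db"),
    ("tokenization-db", "settlement-batch-job"),  # deliberately missing from the inventory
]

# Requirement Zero style check: every system appearing in a CHD flow must be in the
# inventory, and every inventoried system should appear in at least one documented flow.
flow_systems = {system for flow in chd_flows for system in flow}
missing_from_inventory = flow_systems - cde_inventory.keys()
undocumented_systems = cde_inventory.keys() - flow_systems

print("In a CHD flow but not inventoried:", missing_from_inventory or "none")
print("Inventoried but in no documented flow:", undocumented_systems or "none")
```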

Performing a risk assessment right at the beginning will provide the means for organizations to evaluate how far they need to go with implementing each of the 200+ requirements. In many cases, they may have to go well over the letter of certain requirements and truly address the intent and spirit of the requirements in order to reduce the estimated risk to acceptable levels.

Performing the risk assessment will also (hopefully) force organizations to consider the current and evolving threats and mitigate the risks posed by these threats. Without the risk assessment being performed upfront, one will naturally fall into the template security mindset we discussed here. As discussed in the post, template approaches are likely to drive a security program to failure down the road (or at least make it ineffective).

Suggestion #2: Discontinue the all-or-nothing approach

A true risk management program must mean that organizations have a choice not to implement a control if they can clearly articulate that the risk associated with not implementing it is truly low.

I think PCI DSS has a fundamental contradiction in its philosophy: it pushes an all-or-nothing regulation while advocating a risk-based approach at the same time. In an ideal world where organizations have limitless resources and time at their disposal, they could perhaps fully meet every one of the 200+ requirements while also addressing present and evolving risks. As we know, however, the real world is far from ideal: organizations are almost always faced with constraints all around, certainly in the amount of resources and time available to them.

Making this change (from the all-or-nothing approach) will of course mean a foundational change in PCI DSS’ philosophy and in how the whole program is administered by PCI SSC and the card brands. Regardless, this change is too important to be ignored considering the realities of business challenges and the security landscape.

Suggestion #3: Compensating controls

As anyone who has dealt with PCI DSS knows, documentation of compensating controls is one of the most onerous aspects of PCI DSS, so much so that you are sometimes better off implementing the original control than having to document and justify the “validity” of the compensating control to your QSA. No wonder, then, that a book on PCI DSS compliance actually had a whole chapter on the “art of compensating control”.

The need for compensating controls should be based on the risk to the cardholder data, not simply on the fact that the original requirement was not implemented. This should be a no-brainer if PCI SSC really wants PCI DSS to be risk based.

If the risk associated with not implementing a control is low enough, organizations should have the choice of not implementing a compensating control, or at least not implementing it to the extent that the DSS currently expects.

Suggestion #4: Reducing compliance burden and fatigue

As is well known, PCI DSS requires substantial annual efforts and related expenses. If the assessments involve Qualified Security Assessors (QSAs), the overheads are much higher than self-assessments. Despite such onerous efforts and overheads, even some of the more prominent retailers and well-funded organizations can’t detect their own breaches.

The reality is that most PCI organizations have limited budgets to spend on security let alone on compliance with PCI DSS. Forcing these organizations to divert much of their security funding to repeated annual compliance efforts simply doesn’t make any business or practical sense, especially considering the big question of whether these annual compliance efforts really help improve the ability of organizations to do better against breaches.

I would like to suggest the following changes for reducing compliance burden so that organizations can spend more of their security budgets on initiatives and activities that can truly reduce the risk of breaches:

  • The full set of 200+ requirements or controls may be in scope for compliance assessments (internal or by QSA) only during the first year of the three-year PCI DSS update cycle. Remember that organizations may still choose not to implement certain controls based on the results of the risk assessment (see suggestion #2 above).
  • For the remaining two years, organizations may be required to perform only a risk assessment and implement appropriate changes in their environment to address any increased risk levels. Risk assessments must be performed appropriately and with the right level of due diligence. The assessment must include (among other things) review of certain key information obtained through firewall reviews (requirement 1.1.7), application security testing (requirement 6.6), access reviews (requirement 7), vulnerability scans (11.2) and penetration tests (11.3).

Suggestion #5: Redundant (or less relevant) controls

PCI SSC may look at reviewing the value of certain control requirements considering that newer requirements added in subsequent versions could reduce the usefulness or relevance of those controls or perhaps even make them redundant.

For example, the PCI DSS v3 requirement around penetration testing changed considerably compared to the previous version. If an organization performs its penetration tests appropriately, there should not be much need for requirement 2.1, especially the rather elaborate testing procedures highlighted in the figure.

[Figure: PCI DSS requirement 2.1 testing procedures]

There are several other requirements or controls as well that perhaps fall into the same category of being less useful or even redundant.

Such redundancies should help make the case for deprecating or consolidating certain requirements. They also help make the case for moving away from the all-or-nothing approach or philosophy we discussed under #2.

Suggestion #6: Reduce documentation requirements

PCI DSS in general requires fairly extensive documentation at all levels. We already talked about it when we discussed the topic of compensating controls above.

Documentation is certainly useful, and indeed strongly recommended, in areas where it helps with communication and better enforcement of the security controls that reduce risk.

On the other hand, documentation purely for compliance purposes should be avoided, especially if it doesn’t improve security safeguards to any appreciable extent.


That was perhaps a longer post than some of us are used to, especially on a blog. These are the suggestions I can readily think of. I’ll be keen to hear any other suggestions you may have, or even comments or critique of my thoughts.

References

[1] Best Practices for Maintaining PCI DSS Compliance (pdf), Special Interest Group, PCI Security Standards Council (SSC)
[2] PCI DSS Prioritized Approach (xls download)

That Odd Authentication Dichotomy Needs To Change

By now, it should be clear that we need to consider strong (multi-factor) authentication for access to anything of value. In an age when most public email services (Gmail, Hotmail, Yahoo, etc.) provide for strong authentication, it seems inexplicable to allow access to corporate email, or remote access to your organization’s systems, with just basic user-id/password authentication.

Think about this… your personal Hotmail account uses two-factor authentication, but your organization’s Office 365 email doesn’t. I am sure you agree that this odd dichotomy needs to change.

(Note: I am not suggesting the privacy of your personal email is any less important than the security of your corporate email. By dichotomy, I am referring to your organization not being at least as concerned about its security as you are about your personal privacy.)
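
For readers wondering what that second factor actually involves under the hood, here is a minimal sketch of the time-based one-time password (TOTP) scheme that underpins most 2-step verification offerings. It follows the RFC 6238 outline using only the Python standard library; it is an illustration of the mechanism, not a production implementation, and the shared secret shown is a made-up example.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(shared_secret_b32, interval=30, digits=6):
    """Compute a time-based one-time password (RFC 6238 style, HMAC-SHA1)."""
    key = base64.b32decode(shared_secret_b32, casefold=True)
    counter = int(time.time()) // interval          # number of 30-second windows so far
    msg = struct.pack(">Q", counter)                # counter as 8-byte big-endian value
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation per the RFC
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

# The server and the user's authenticator app share this secret once, at enrollment;
# after that, the second factor is "something you have" generating the same code.
print(totp("JBSWY3DPEHPK3PXP"))
```

Verification on the server side is simply computing the same code for the current window (and usually the adjacent ones) and comparing.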

And if your organization does find itself in a situation where it has no option but to continue with basic authentication, consider testing and studies of passwords like this one for making your password policies (truly) stronger. Don’t continue with a password standard established years ago (or based on some arbitrary best practice or external standard) that forces users to have a complex combination of alphanumeric characters and symbols, change passwords every 60 days, or avoid reusing the last 6 or 24 passwords. You may only be making their user experience miserable without making your password security any stronger. Also, don’t forget to take a look at your password hashing, which we talked about here as a case in point.
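
If basic authentication has to stay for now, here is a minimal sketch of a password policy check consistent with the point above: favor length and a deny-list of known-bad passwords over complexity rules and forced rotation. The minimum length and deny-list entries are illustrative assumptions, not prescribed values.

```python
# Illustrative only: minimum length plus a deny-list of known-bad/breached passwords,
# instead of arbitrary complexity and 60-day rotation rules.
COMMON_PASSWORDS = {"password", "123456", "qwerty", "letmein", "welcome1"}
MIN_LENGTH = 12  # assumed threshold for this example

def password_acceptable(candidate: str, username: str) -> bool:
    if len(candidate) < MIN_LENGTH:
        return False
    if candidate.lower() in COMMON_PASSWORDS:
        return False
    if username.lower() in candidate.lower():
        return False
    return True

print(password_acceptable("correct horse battery staple", "jdoe"))  # True
print(password_acceptable("Welcome1", "jdoe"))                      # False: short and common
```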

Beware of Security Best Practices and Controls Frameworks

What could possibly be wrong with the “Best Practices” or “Leading Practices” that your favorite security consultant might be talking about? Or, for that matter, how could we go wrong if we used the “leading” security standards or controls frameworks?

It is of course useful to have a benchmark of some sort against which to compare yourself with your peers. The problem comes up (as it so often does) when we start to take these so-called best practices and standards for granted. This often drives us to a state of what I like to call template mindsets and approaches in security. More often than not, in my experience, this leads us to make incorrect security decisions because we didn’t consider all the facts and circumstances that may be unique to each of our settings.

Let me explain with an example.

Let us say that you are using a leading security framework such as the HITRUST CSF for healthcare. To take the example of password controls, Control Reference 01.d on Password Management has a fairly restrictive set of password controls even at Level 1, which is HITRUST CSF’s baseline level of controls. Level 1 includes requirements for password length, complexity, uniqueness of the current password compared to a certain number of previous passwords, and so on. However, there is no requirement in Level 1 around the standard to be used for hashing passwords. In fact, there is not a single mention of the words “password hash” or “salt” in over 450 pages of the CSF framework, even in its latest 2014 version.

Now, if you are a seasoned and skilled security practitioner, you know that these Level 1 password controls are mostly meaningless if the password hashes are not strong enough and the password hash file is stolen by some means. It is fairly common for hackers to steal password hash files early and often in their campaigns. Reported breaches at Evernote, LinkedIn and Adobe readily come to mind, and we learned about what appears to be an unprecedented scale of stolen passwords just yesterday.
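
For readers who want to see what the missing control actually looks like, here is a minimal sketch of salted, iterated password hashing using PBKDF2 from the Python standard library. The iteration count is an assumption for this example, and purpose-built schemes such as bcrypt or scrypt are generally preferable; the essential point is the same either way: a unique salt per user and a deliberately slow hash make a stolen hash file far less useful to the attacker.

```python
import hashlib
import hmac
import os

# Illustrative only: salted, iterated hashing (PBKDF2-HMAC-SHA256) so that a stolen
# hash file is far harder to crack than unsalted or fast hashes. The iteration count
# is an assumption for this example, not a value taken from any framework.
ITERATIONS = 200_000

def hash_password(password):
    salt = os.urandom(16)  # unique, random salt per user
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify_password(password, salt, stored_digest):
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, stored_digest)

salt, digest = hash_password("a long memorable passphrase")
print(verify_password("a long memorable passphrase", salt, digest))  # True
print(verify_password("wrong guess", salt, digest))                  # False
```

The verify step simply recomputes the hash with the stored salt and compares in constant time; that is all an application needs at login.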

So, if you see a consultant using a so-called best practice set of controls or one of the security controls frameworks to perform your risk assessment, and he or she doesn’t ask a question about password hashes (or some other control or vulnerability that may truly matter), you should know the likely source of the problem. More than likely, they are simply going through the motions, asking you questions from a controls checklist with little understanding of or focus on the threats and vulnerabilities that may be truly important in your setting or context. And as we know, any assessment without a clear and contextual consideration of real-world threats and vulnerabilities is not really a risk assessment. You may have spent a good amount of money on the consultant but probably do not have much to show for it in terms of the only metric that matters in an assessment – the number of “real” risks identified and their relative levels of magnitude – so you can make intelligent risk management decisions.

In closing, let us not allow ourselves to be blinded by so-called “Best Practices” and security controls frameworks. Meaningful security risk management requires us to look at the threats and vulnerabilities that are often unique to each of our environments and contexts. What may have been considered a best practice somewhere else, or a security framework put out by someone, may at best be a reference source to double-check that we didn’t miss anything. They should never be the sole source for our assessments, and certainly not the yardstick for our decision making in security.

I welcome your thoughts and comments.

Important notes and sources for reference

  • I used HITRUST CSF only as an example. The idea discussed in this post applies to any set of best practices or security controls frameworks. After all, no set of best practices or security controls frameworks, no matter how good its “quality” may be, can keep up with the speed at which today’s security threats evolve or new vulnerabilities are discovered.
  • If you are interested in learning some really useful information on password hashing and password management, I would strongly recommend this post (Caution: It is not a quick read; allow yourself at least 30 minutes to read and absorb the details, especially if you are not an experienced security professional).

How useful is the HHS OIG report published this week?

I am sure some of you saw this news report about HHS OIG finding some security related deficiencies in the EHR certification program.

I was keen to read the full OIG report (pdf), which I got a chance to do this evening. I know HHS OIG does great work overall, but I must say I didn’t come away feeling very good about the quality or usefulness of this particular report, for the following couple of reasons:

  1. The report is really of an audit performed in 2012 of the 2011 EHR certification program, which doesn’t even exist in 2014. What value does it provide if OIG has ONC responding to this audit report in 2014? Shouldn’t OIG have sent this report to ONC soon after the audit in 2012, so the report could have led to changes in the program while it still existed? This OIG audit and report could have been a better use of taxpayer dollars had they been timely.
  2. I am not sure OIG has done a good job of substantiating why they don’t agree that the 2014 certification criteria address their concerns. They provide an example of multi-factor authentication not being included in the 2014 criteria. While multi-factor authentication would obviously provide better security, does OIG think all access to EHRs must be protected by multi-factor authentication? Or is it perhaps only remote access (meaning access from outside the trusted network, say a hospital facility)? Security in healthcare can’t come at the expense of the user experience of providers and clinicians. Requiring multi-factor authentication at all times is going to impact clinician productivity and hence patient care. Also, OIG should have known that multi-factor technologies are still not (or at least were not when ONC finalized the 2014 criteria) at a point where they can be used as the mandatory baseline authentication mechanism in EHRs without compromising user experience. If I remember correctly, the Health IT Standards Committee (HITSC) did consider two-factor authentication for inclusion in the 2014 criteria but decided to exclude it for “practicality” reasons. To sum up on this point, I think OIG could have been more objective in their opinions on the 2014 criteria.

In closing, I am not sure what process or protocols OIG follows, but it appears this audit report could have had better impact had it been more timely, objective and actionable.

A Second Look At Our Risk Assessments?

I came across this Akamai Security Blog post recently, which I thought was a useful and informative read overall. As I read through the post, however, something caught my attention: the phrase “The vendor considers the threat posed by the vulnerability”. That prompted me to write this post… on the need for extreme due diligence in security risk assessments and the critical importance of engagement sponsors keeping the assessment teams on their toes. (Note: Just to be doubly clear, the objective here is not to pick on the Akamai post but to discuss certain key points about security risk assessments.)

When it comes to Security Risk Assessments (or Security Risk Analysis, if the HIPAA Security Rule is of any relevance to you), I believe that terminology is extremely important. Those of us who have performed a “true” risk assessment know that the terms threat, vulnerability, likelihood, impact and risk mean specific things. In the case of this specific Akamai post, I think the author may have used the word “threat” where “risk” would have been accurate. While it may not be significant in the context of this particular blog post, using these terms inaccurately can mean all the difference in the quality and usefulness of actual risk assessments. In my experience, more often than not, such misplaced terminology is a symptom of a lack of due diligence on the part of the person or team doing the assessment. Considering that risk assessments are so foundational to a security program, we strongly recommend addressing such red flags very early in a risk assessment engagement.

In fact, I would like to suggest that the sponsors ask the following questions of the consultants or teams performing the risk assessment as early as pertinent in the engagement:

  • Have you identified the vulnerabilities accurately and do you have the evidence to back up your findings?
  • Have you identified all the relevant threats that can exploit each vulnerability?
  • How did you arrive at the likelihood estimation of each threat exploiting each vulnerability? Can you back up your estimation with real, known and published information or investigation reports on exploits over the recent past (say three years)? Did you consider the role of any and all compensating controls we may have in possible reduction of the likelihood estimates?
  • Does your Risk Ranking/Risk Statement clearly articulate the “real” risk (and not some imagined or assumed risk) to the organization, supported by the Likelihood and Impact statements? (A minimal scoring sketch follows this list.)
  • When proposing risk mitigation recommendations, have you articulated the recommendations in actionable terms? By “Actionable”, we mean something that can be readily used to build a project plan to initiate the risk mitigation effort(s).
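
To make the likelihood and impact point concrete, here is a minimal sketch of how those estimates might roll up into a risk ranking. The scales, entries and thresholds are made up for illustration and are not a prescribed methodology; the value of any such ranking depends entirely on the evidence behind each likelihood and impact estimate, which is exactly what the questions above are probing.

```python
# Illustrative only: a tiny risk register where risk = likelihood x impact on 1-5 scales.
# The entries, scales and thresholds are made up; a real assessment must back every
# likelihood and impact estimate with evidence.
risk_register = [
    {"vulnerability": "Unsalted password hashes", "threat": "Hash file theft and offline cracking",
     "likelihood": 4, "impact": 5},
    {"vulnerability": "Unpatched portal component", "threat": "Remote exploitation of a known CVE",
     "likelihood": 3, "impact": 4},
    {"vulnerability": "Weak visitor sign-in process", "threat": "Unescorted facility access",
     "likelihood": 2, "impact": 2},
]

for item in risk_register:
    item["risk_score"] = item["likelihood"] * item["impact"]

# Rank highest risk first so that mitigation planning (and spending) follows the ranking.
for item in sorted(risk_register, key=lambda r: r["risk_score"], reverse=True):
    level = "High" if item["risk_score"] >= 12 else "Medium" if item["risk_score"] >= 6 else "Low"
    print(f'{item["risk_score"]:>2} {level:<6} {item["vulnerability"]} / {item["threat"]}')
```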

If the answers to any of the above questions are negative or even tentative, the assessment may not be serving the organization’s risk management objectives. In my experience, most risk assessments turn out to be no more than control or gap assessments, which, quite frankly, don’t need to be conducted by the often “highly paid” consultants.

A “true” risk assessment needs to be performed by a security practitioner or team with an inquisitive mind, depth and breadth of relevant security skillsets, and knowledge of the current security threat and vulnerability environment.

You may also find the following posts from our blog relevant and useful:

Top 10 Pitfalls – Security or Privacy Risk Assessments
Compliance obligations need not stand in the way of better information security and risk management
Next time you do a Risk Assessment or Analysis, make sure you have Risk Intelligence on board

Please don’t hesitate to post your feedback or comments.

Top 10 Pitfalls – Security or Privacy Risk Assessments

Risk Assessment is a foundational requirement for an effective security or privacy program, and it needs to be the basis for every investment decision in information security or privacy. To that extent, we strongly recommend it as the very first thing organizations do when they set out to implement or improve a program. It is no surprise, then, that most regulations include it as a mandatory requirement (e.g. the HIPAA Security Rule, Meaningful Use Stages 1 and 2 for healthcare providers, PCI DSS 2.0). Yet we continue to see many organizations that do not perform it right, if they perform one at all. This is true at least in the healthcare sector that we focus on. They see it as just another compliance requirement and go through the motions.

So, we put together a list of “Top 10 Pitfalls” related to risk assessments. We present them here and will look to expand on and discuss each of these pitfalls in separate posts to follow.

    1. Performing risk analysis without knowing all the locations where the data you are looking to safeguard (PHI, PII, etc.) is created, received, stored, maintained or transmitted
    2. Approaching it with a compliance or audit mindset rather than a risk mindset
    3. Mistaking controls/gap assessment for risk analysis. Hint: Controls/Gap Assessment is but one of several steps in risk analysis.
    4. Focusing on methodologies and templates rather than outcomes; We discuss the idea here
    5. Not having a complete or holistic view of the threats and vulnerabilities and hence failing to articulate and estimate the likelihood adequately
    6. Not realizing that no security controls framework (e.g. NIST 800-53, HITRUST CSF etc.) is perfect and using the security controls in these frameworks without a sense of context in your environment
    7. Poor documentation – Reflects likely lack of due diligence and could lead to bad decision making or at the very least may not pass an audit
    8. Writing Remediation or Corrective Action Plans without specialist knowledge and experience in specific remediation areas
    9. Inadequate planning and lack of curiosity, investigative mindset or quality in engagement oversight
    10. Not engaging the right stakeholders or “owners” throughout the risk assessment process, especially in signing off on remediation recommendations or Corrective Action Plans

We’ll be delighted to hear your feedback and will look to perhaps even grow this list based on the feedback. After all, this is about being a good steward of the security or privacy program dollars and managing risks to our organizations, customers or partners.

Pay attention to Security Risk Analysis in Meaningful Use Attestation

As is well known, the Centers for Medicare & Medicaid Services (CMS) has been conducting pre- and post-payment audits of healthcare provider organizations attesting to Meaningful Use (MU). Our experience tells us that providers do not always exercise the necessary due diligence in meeting Stage 1 MU Core Objective #14 (Eligible Hospitals) and #15 (Eligible Professionals). In our view, and as supported by ONC’s 10 Step Plan for Meeting Privacy and Security Portions of Meaningful Use, the MU Security Risk Analysis needs to go well beyond assessing just the technical controls of an EHR system. We believe that the risk analysis should cover the people and process aspects of EHR operations, as well as how the EHR interfaces with other systems, organizations, people or processes.

As noted in a previous post, the College of Healthcare Information Management Executives (CHIME), a professional organization for chief information officers and other senior healthcare IT leaders, seemed to hold the view that the MU Security Risk Analysis scope should be limited. While we do not have complete insight into CHIME’s viewpoint, we believe that providers have some work to do if they are to meet the requirements effectively. A robust security risk analysis is in any case the right thing to do every time there is a change in the Health IT environment… and implementing an EHR should qualify as a major change in that regard. It is also a mandatory compliance obligation under the HIPAA Security Rule.

So, why not do the “right thing”? We highly recommend that providers avoid “checkbox compliance” tendencies when it comes to meeting MU Core Objective #14/15.

Can we change the tune on Health Information Security and Privacy please?

Notice the title doesn’t say HIPAA Security and Privacy. Nor does it have any of the words – HITECH, Omnibus Rule, Meaningful Use etc. That is the point of this post.

Let us start with a question… I am sure many of you, like me, are routine visitors to the blogosphere and social media sites (especially LinkedIn group discussions) to get a pulse of the happenings in Information Security and Privacy. How often do you see posts or discussions around compliance, versus discussions focused squarely on risk, meaning risk to the organization or to the patients if their health information were compromised by one means or another?

Compliance (the risk of non-compliance) is only one of the risks and, in our view, should not be the primary driver for any Information Security or Privacy program. In fact, we often like to say that compliance should be a natural consequence of good risk management practices.

Having lived and watched Health Information Security and Privacy for nearly ten years, I am not surprised by this trend at all. Still, I am looking forward to a day when we talk more about safeguarding the security and privacy of patient data and less about preparing for an OCR audit. I am not suggesting that you shouldn’t worry about the latter. In fact, I’ll say that one will very likely not have to worry about OCR or any audit, for that matter, if one’s real intent is to safeguard the security and privacy of patient information. The real intent and objective are extremely important because they shape our thinking and how we go about executing our efforts.

I think Security and Privacy programs in Healthcare can be a lot more effective (and likely even more cost efficient) if they were to prioritize their objectives in the following order:

  • Patient Care and Safety – In most discussions on security, we tend to focus solely on the confidentiality of patient information and less on the integrity and availability of the information. When we begin to think of all three security components in equal measure, it is easier to appreciate how a security incident or breach could impact patient care and safety. With the increasing adoption of EHRs, it is very likely that many healthcare providers rely solely on electronic versions of patient records in one or more EHRs. A security incident or breach could result in the patient record not being “available” for access by physicians who may need to look at the patient’s treatment history before providing urgent or emergency care. In another scenario, a breach could compromise the integrity of the patient record itself, in which case physicians might misdiagnose the patient’s condition and not provide the right treatment. Such cases were probably unlikely in a world of paper records, but they are not inconceivable in a world of electronic records. These issues can result from both malicious and unintentional circumstances.
  • Patient Privacy and Loss of Trust – The impact of a healthcare privacy breach doesn’t need much discussion. The impacted individuals can face severe and lasting financial and reputational harm which can make for a very painful experience. This in turn could result in the provider losing the valuable trust of its customers. 
  • Business Risk – Healthcare businesses could face tort or class action lawsuits arising from either of the two previous scenarios. And then, of course, there is the possibility of patients turning to competitors, especially when they have access to multiple providers where they live. In effect, healthcare organizations could face substantial losses to their bottom lines, and given the increasingly competitive nature of the industry, this could put the business sustainability of these organizations at risk.
  • Risks of Non-Compliance – Finally, of course, there is the risk of non-compliance with industry or government regulations. Non-compliance could leave healthcare organizations facing considerable civil and possibly criminal fines, as well as recurring expenses from having to comply with OCR resolution agreements, for example. In most instances, however, the impact of non-compliance fines and expenses is only temporary, lasting a few years. On the other hand, the impact of the previous three risks could be much more significant and longer lasting.

Until we think of security and privacy as being central to patient care/safety and the business/clinical culture, it is our view that many programs will likely falter and not deliver the intended results. The new era of digital healthcare requires healthcare organizations to think of security and privacy as a business or customer issue and not something that they need to address only for compliance purposes.

In a following post, we’ll specifically discuss some examples of why thinking compliance first will not get us very far in managing health information security risks.
