RisknCompliance Blog

Thoughts On Delivering Meaningful Outcomes in Security and Privacy


Security Logging and Monitoring for EHRs

If you are like most medium or large healthcare providers these days, your Electronic Health Record (EHR) environment is likely a very complex one. Such complexity makes it considerably harder to monitor the environment for security incidents.

Monitoring for security incidents is different from privacy monitoring

Many such healthcare providers have also likely invested in privacy monitoring solutions over the last few years. These investments have been driven largely by the HIPAA Security and Privacy Rules and Meaningful Use mandates, as well as the need to identify and respond effectively to privacy incidents or complaints.

Privacy monitoring use cases fall into a fairly limited set of categories – e.g. snooping of neighbor, workforce member or celebrity records. Given the nature and the somewhat narrow definition of these use cases, many organizations appear to be doing a good job in this respect. This is especially the case when organizations have implemented one of the leading privacy monitoring solutions.

While such organizations have had notable success with monitoring for privacy incidents, the same can't be said for monitoring of security incidents. This is so despite the fact that most of these organizations have invested substantively in security, be it security monitoring solutions such as Security Information and Event Management (SIEM) or services such as third-party managed security services.

Where is the problem and what might we do about it?

In our experience, the lack of effective security monitoring capabilities across EHR environments can usually be attributed to the lack of appropriate security logs to begin with. And it is usually not a straightforward problem to solve, for more than one reason. The most common reason is the complex nature of the applications and their diverse sets of components or modules. In our view, many EHRs were simply not designed with good security logging and monitoring in mind. One can also point to the rather complex and custom workflows that these EHRs support at each organization.

Solving this problem usually requires a specialist effort by personnel who have a strong background in security (and security monitoring) as well as specialist knowledge of and experience with the respective EHR applications. After all, each EHR application is unique in how the vendor has implemented its security and security logging features.
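
To make the log normalization piece concrete, below is a minimal sketch of the kind of work involved: reading a hypothetical CSV audit export from an EHR and forwarding a handful of security-relevant events to a SIEM over syslog. The event codes, field names and host names are illustrative assumptions, not any particular vendor's format.

```python
import csv
import json
import logging
import logging.handlers

# Events we might treat as security-relevant in a hypothetical EHR audit export.
# Real EHRs use vendor-specific codes; these names are illustrative only.
SECURITY_EVENTS = {"LOGIN_FAILURE", "PRIVILEGE_CHANGE", "BREAK_THE_GLASS", "RECORD_EXPORT"}

def forward_ehr_audit_to_siem(audit_csv_path: str, siem_host: str, siem_port: int = 514) -> None:
    """Read a (hypothetical) CSV audit export and send security events to a syslog-based SIEM."""
    logger = logging.getLogger("ehr_audit")
    logger.setLevel(logging.INFO)
    logger.addHandler(logging.handlers.SysLogHandler(address=(siem_host, siem_port)))

    with open(audit_csv_path, newline="") as f:
        for row in csv.DictReader(f):
            if row.get("event_type") in SECURITY_EVENTS:
                # Normalize to a structured message the SIEM can parse consistently.
                logger.info(json.dumps({
                    "source": "ehr",
                    "event": row.get("event_type"),
                    "user": row.get("user_id"),
                    "patient": row.get("patient_id"),
                    "workstation": row.get("workstation"),
                    "timestamp": row.get("event_time"),
                }))

if __name__ == "__main__":
    # Placeholder file and host names for illustration only.
    forward_ehr_audit_to_siem("ehr_audit_export.csv", "siem.example.org")
```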

How could we help?

Our RiskLCM services can help develop a strategy and assist with implementing a sustainable security monitoring program for your EHR(s). We have experience doing this for Epic and Cerner among others and can help you leverage your existing security/privacy monitoring technologies or managed services investments.

Please leave us a message at +1 312-544-9625 or send us a note to RiskLCM@rnc2.com if you would like to discuss further.

You may also be interested in a case study.

Thank you!

Is your auditor or consultant anything like the OPM OIG?

The OPM breach has deservedly been in the news for over a month now. Much has been written and said about it across the mainstream media and the internet [1].

I want to focus here on a topic that hasn't necessarily been discussed in public, perhaps not at all: could the OIG (and their audit reports) have done more, or done things differently, in the reports they issued year after year? Specifically, how could these audit reports have driven some urgently needed attention to the higher risks and perhaps helped prevent the breach?

Let us look at the OIG's latest Federal Information Security Management Act (FISMA) audit report, issued in November 2014 (pdf), as a case in point. The report runs over 60 pages and looks to have been a reasonably good effort at meeting its objective of covering the general state of compliance with FISMA. However, I am not sure the report is of much use for "real world" risk management purposes at an agency that had known organizational constraints in the availability of appropriate security people and resources; an agency that should have felt some urgency in implementing certain safeguards on at least the one or two critical systems holding sensitive information of that nature and quantity.

We have talked about this problem before, advising caution against "fixation" on compliance or controls frameworks at the expense of priority risk management needs. In the case of this particular report, the OIG should have discussed the risks as well (and not just the findings or gaps) and provided some actionable prioritization for accomplishing quick wins in risk mitigation. For example, recommendation #21 on page 25 could have called out the need for urgency in implementing multi-factor authentication (or appropriate compensating controls) for the one or two "high risk" systems that held the upwards of 20 million sensitive records we now know were breached.

I also believe providing a list of findings in the Executive Summary (on page 2) was a wasted opportunity. Instead of a list of compliance or controls gaps, the summary should have included specific calls to action, articulating the higher risks and providing actionable recommendations for what OPM could have done over the following months, in a prioritized fashion.


Here then are my recommended takeaways:

1. If you are an auditor performing an audit or a consultant performing a security assessment,  you might want to emphasize “real” risks,  as opposed to compliance or controls gaps that may be merely academic in many cases. Recognize that evaluation and articulation of risks require a more complete understanding of the business, technology and regulatory circumstances as compared to what you might need to know if you were merely writing gaps against certain controls or compliance requirements.

2. Consider the organizational realities or constraints and think about creative options for risk management. Always recommend feasible quick-wins in risk mitigation and actionable prioritization of longer term tasks.

3. Do not hesitate to bring in or engage with specialists if you aren’t sure you can evaluate or articulate risks and recommend mitigation tasks well enough. Engage with relevant stakeholders that would be responsible for risk mitigation to make sure they are realistically able to implement your recommendations, at least the ones that you recommend for implementation before your next audit or assessment.

In closing, I would strongly emphasize a focus on meaningful risk management outcomes, not just producing reports or deliverables. A great looking deliverable that doesn’t convey the relative levels of real risks and the urgency of mitigating certain higher risks is not going to serve any meaningful purpose.

References for additional reading

1.  At the time of this writing,  I found these two links to be useful reading for substantive information on the OPM breach.

Information about OPM Cybersecurity Incidents

“EPIC” fail—how OPM hackers tapped the mother lode of espionage data

2.  You may also be interested in a quick read of our recommendations for agile approaches to security/privacy/compliance risk assessments or management.  A pdf of our slide deck will be emailed to you after a quick registration here.

No, Security-Privacy Is Not A Hindrance To TeleHealth Adoption

Since I follow the telehealth space rather closely from a security/privacy perspective, I was drawn yesterday to this article titled "How Health Privacy Regulations Hinder Telehealth Adoption". From my experience, I know telehealth has many obstacles to overcome, but I have never thought of security or privacy as being prominent among them. I have certainly not thought of security and privacy as a hindrance to adoption, as the article's title says.

I read the article and then downloaded the original AHA paper (pdf) the article is based on.

It wasn't long before I concluded that the title of the article was misplaced, in my opinion.

The AHA paper is nicely written and, in my view, very objective. It covers a number of areas that are true challenges to telehealth adoption, but it doesn't portray security and privacy as a hindrance, contrary to the title of the article. Rather, it discusses specific security and privacy considerations for planning and implementation (see page 10 of the pdf). These considerations are no different from what one would need to address when deploying any new type of technology.

The considerations are the right things to do if you were to have any confidence in your ability to safeguard patient privacy and safety. Sure, there are some regulatory aspects (discussed on page 11) but these are no different from what we need for protecting Protected Health Information (PHI) in any form.

In conclusion, I think the author should perhaps look to change the title lest anyone should think that it adds to the FUD, of which there is no shortage in security, as we know.

Patient Portals – Make or Break

Like many other Health IT initiatives today, the primary driver for patient portals is regulatory in nature. Specifically, it is the Meaningful Use requirements related to view,  download or transmit and secure messaging. However, the biggest long term benefit of the portals might be what they can do for patient engagement and as a result, to the providers’ business in the increasingly competitive and complex healthcare marketplace in the United States.

The objective of this post is to discuss the security aspects of patient portals, specifically, why the current practices in implementing these portals could pose a big problem for many providers. More importantly, we’ll discuss specific recommendations for due diligence actions that the providers should take immediately as well as in the longer term.

Before we get to discuss the security aspects, I think it is important to “set the stage” by discussing some background on patient portals. Accordingly, this post covers the following areas in the indicated sequence:

1. What are patient portals and what features do (or could) they provide?

2. Importance of patient engagement and the role of patient portals in patient engagement

3. The problem with the current state in Health IT and hence the risks that the portals bring

4. Why is relying on regulators or vendors a recipe for certain failure of your security program?

5. What can/should we do (right now and in the future) –  Our recommendations

1. What are Patient Portals and what features do (or could) they provide?

I would draw on information from the ONC site for this section. Here is the pertinent content, to quote from the ONC site:

A patient portal is a secure online website that gives patients convenient 24-hour access to personal health information from anywhere with an Internet connection. Using a secure username and password, patients can view health information such as:

• Recent doctor visits

• Discharge summaries

• Medications

• Immunizations

• Allergies

• Lab results

Some patient portals also allow patients to:

• Exchange secure e-mail with their health care teams

• Request prescription refills

• Schedule non-urgent appointments

• Check benefits and coverage

• Update contact information

• Make payments

• Download and complete forms

• View educational materials

The bottom line is that patient portals provide a means for patients to access or post sensitive health or payment information. In the future, their use could expand further to include integration with mobile health applications (mHealth) and wearables. Last week's news from Epic should provide a sense of things to come.

2. Importance of patient engagement and the role of patient portals in patient engagement

As we said above, the primary driver for patient portals so far has been the Meaningful Use requirements related to view,  download or transmit and secure messaging. However, the biggest long term benefit of the portals might be what they can do for patient engagement and becoming a key business enabler for providers.

The portals are indeed a leading way for providers to engage with patients, as can be seen in this graphic from the 2014 Healthcare IT Priorities report published by InformationWeek [1].

[Graphic from the 2014 Healthcare IT Priorities report, InformationWeek]

Effective patient engagement of course can bring tremendous business benefits, efficiencies and competitive edge to providers.

From a patient's perspective, the portals offer an easier method of interacting with their providers, which in turn has its own benefits for patients. To quote from the recently released HIMSS report titled "The State of Patient Engagement and Health IT" [2]:

A patient’s greater engagement in health care contributes to improved health outcomes, and information technologies can support engagement.

In essence, the importance of patient portals as a strategic business and technology solution for healthcare providers doesn’t need too much emphasis.

3. The problem with the current state in Health IT and hence the risks that the portals bring

In my view, the quote below from the cover page of the 2014 Healthcare IT Priorities report published by InformationWeek [1] pretty much sums it up for this section.

Regulatory requirements have gone from high priority to the only priority for healthcare IT.

We all know what happens when security or privacy programs are built and operated purely to meet regulatory or compliance objectives. That is shaky ground at best. We talked about it in one of our blog posts last year, when we called for a change in tone and approach to healthcare security and privacy.

4. Why is relying on regulators or vendors a recipe for certain failure of your security program?

It is probably safe to say that security in design and implementation is not the uppermost concern of Health IT vendors today (certainly not of the patient portal vendors, in my opinion). To make it easy for them, we have lackluster security and privacy requirements in the regulation for certifying Electronic Health Records.

Consider the security and privacy requirements (highlighted yellow in this pdf) that vendors have to meet in order to obtain EHR certification today. You will see that the certification criteria are nowhere near enough to assure that the products are secure before the vendors start selling them to providers. And the administration and enforcement of the EHR certification program has been lacking in the past as well.

If you consider a risk relevant set of controls such as the Critical Security Controls for Effective Cyber Defense, you will see that the certification criteria are missing the following key requirements in order to be effective in today’s security threat landscape:

· Vulnerability assessments and remediation

· Application Security Testing (Including Static and Dynamic Testing)

· Security training for developers

· Penetration tests and remediation

· Strong Authentication

Think about using these applications to run your patient portals!

If you are a diligent provider, you will want to make sure that the vendor has met the above requirements even though the certification criteria do not include them. The reality though may be different. In my experience, providers often do not perform all the necessary due diligence before purchasing the products.

And then, when providers implement these products and attest to Meaningful Use, they are expected to do a security risk analysis (see the green highlighted requirement in the pdf). In my experience, again, a risk analysis is not performed in all cases, and of those that are performed, many are not really risk assessments.

The bottom-line? … Many providers take huge risks in going live with their patient portals that are neither secure nor compliant (Not compliant because they didn’t perform a true risk analysis and mitigate the risks appropriately).

If you look again (in section 1 above) at the types of information patient portals handle, it is not far-fetched to say that many providers may have security breaches waiting to happen. It is even possible that some have already been breached but don't know it yet.

Considering that patient portals are often gateways to the more lucrative (from a hacker's standpoint) EHR systems, intruders may take their time to escalate privileges and move laterally to other systems once they have a foothold in the portal. And considering that detecting intrusions is very often the Achilles heel of even the most well-funded and sophisticated organizations, this should be a cause for concern at many providers.

5. What can/should we do (right now and in the future) –  Our recommendations

It is time to talk about what really matters and some tangible next steps …

What can or must we do immediately and in the future?

Below are our recommendations for immediate action:

a) If you didn’t do the due diligence during procurement of your patient portal product, you may want to ask the vendor for the following:

· Application security testing (static and dynamic) and remediation reports of the product version you are using

· Penetration testing results and remediation status

· If the portal doesn't provide risk-based strong (or adaptive) authentication for patient and provider access, you may want to insist on the vendor committing to include it as a feature in the next release.

b) If you didn't perform a true security risk analysis (assessment), please perform one immediately. Watch out for the pitfalls as you plan and perform the risk assessment. Make sure the risk assessment includes (among other things) running your own vulnerability scans and penetration tests at both the network and application layers (see the sketch after this list for a minimal illustration of the network-layer piece).

c) Make sure you have a prioritized plan to mitigate the discovered risks and of course, follow through and execute the plan in a timely manner.
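
As a minimal illustration of the network-layer piece of item b) above, the sketch below simply checks which common service ports accept connections on a portal host. A real assessment would rely on dedicated scanning tools and application-layer testing; the host name here is a placeholder, and you should only scan systems you are authorized to test.

```python
import socket

# Common service ports worth a quick reachability check on an externally facing portal host.
COMMON_PORTS = {22: "ssh", 80: "http", 443: "https", 1433: "mssql", 3306: "mysql", 3389: "rdp"}

def check_open_ports(host: str, timeout: float = 1.0) -> dict:
    """Return a mapping of port -> service name for ports that accept a TCP connection."""
    open_ports = {}
    for port, service in COMMON_PORTS.items():
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.settimeout(timeout)
            if sock.connect_ex((host, port)) == 0:  # 0 means the connection succeeded
                open_ports[port] = service
    return open_ports

if __name__ == "__main__":
    # "portal.example.org" is a placeholder; only scan hosts you own or are authorized to test.
    print(check_open_ports("portal.example.org"))
```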

Once you get through the immediate action steps above, we recommend the below action items to be included in your security program for the longer term:

a) Implement appropriate due diligence security measures in your procurement process or vendor management program.

b) Have your patient portal vendor provide you with the detailed test results for the security requirements (highlighted yellow in the attachment) from the EHR certifying body. You may want to start here (Excel download from ONC) for information on the current certification status of your patient portal vendors and the requirements they are certified for.

c) Ask the vendor for application security (static and dynamic) and pen test results for every release.

d) Segment the patient portal appropriately from the rest of your environment (also a foundational prerequisite for PCI DSS scope reduction if you are processing payments with credit/debit cards).

e) Perform your own external/internal pen tests every year and scans every quarter (Note: if you are processing payments with your patient portal, the portal is likely in scope for PCI DSS, which requires you to do this anyway).

f) Conduct security risk assessments every year or upon a major change (This also happens to be a compliance requirement from three different regulations that will apply to the patient portal –  HIPAA Security Rule, Meaningful Use and PCI DSS, if processing payments using credit/debit cards).

g) If you use any open source modules either by yourself or within the vendor product,  make sure to apply timely patches on them as they are released.

h) Make sure all open source binaries are security tested before they are used to begin with.

i) If the vendor can't provide support for strong authentication, look at your own options for providing risk-based authentication to consumers (a simple illustration follows this list). In the meanwhile, review your password policies (including password creation, reset steps, etc.) to make sure they are not too burdensome on consumers and yet are secure enough.

j) Another recommended option is to allow users to authenticate with an external identity (e.g., Google or Facebook via OpenID Connect or similar mechanisms), which may actually be preferable from the user's standpoint since they don't have to remember a separate log-in credential for the portal. Just make sure to strongly recommend that they use the 2-step verification that almost all of these providers offer today.

k) Implement robust logging and monitoring in the patient portal environment (hint: logging and monitoring is not just about implementing a "fancy" technology solution; there is more to it that we'll cover in a future post).
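
To illustrate what item i) means by risk-based (adaptive) authentication, here is a simple sketch that scores a login attempt from a few contextual signals and steps up to a second factor when the score crosses a threshold. The signals, weights and threshold are illustrative assumptions; production implementations draw on much richer device, network and behavioral data.

```python
from dataclasses import dataclass

@dataclass
class LoginContext:
    known_device: bool        # device fingerprint seen before for this user
    usual_geolocation: bool   # IP geolocates to a region the user normally logs in from
    recent_failed_attempts: int
    off_hours: bool           # outside the user's typical access window

def login_risk_score(ctx: LoginContext) -> int:
    """Crude additive risk score; the weights are illustrative only."""
    score = 0
    if not ctx.known_device:
        score += 3
    if not ctx.usual_geolocation:
        score += 3
    if ctx.off_hours:
        score += 1
    score += min(ctx.recent_failed_attempts, 5)  # cap the contribution of failed attempts
    return score

def requires_second_factor(ctx: LoginContext, threshold: int = 4) -> bool:
    """Step up to a second factor (or deny) when the score crosses the threshold."""
    return login_risk_score(ctx) >= threshold

if __name__ == "__main__":
    attempt = LoginContext(known_device=False, usual_geolocation=True,
                           recent_failed_attempts=2, off_hours=True)
    print(requires_second_factor(attempt))  # True: unknown device, off hours, failed attempts
```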

In summary, there is just too much at stake for providers and patients alike to let the status quo of security in patient portals continue the way it is. We recommend all providers take priority action considering the lasting and serious business consequences that could result from a potential breach.

As always, we welcome your thoughts, comments or critique. Please post them below.

References

1 2014 Healthcare IT Priorities published by InformationWeek
2 The State of Patient Engagement and Health IT(pdf download)

 Recommended further reading

Patient engagement – The holy grail of meaningful use
Patient portal mandate triggers anxiety
MU Stage 2 sparks patient portal market

That Odd Authentication Dichotomy Needs To Change

By now, it should be clear that we need to consider strong (multi-factor) authentication for access to anything of value. In an age when most public email services (Gmail, Hotmail, Yahoo, etc.) provide strong authentication, it seems inexplicable to allow access to corporate email, or remote access to your organization's systems, with just basic user ID and password authentication.

Think about this: your personal Hotmail account uses 2-factor authentication, but your organization's Office 365 email doesn't. I am sure you agree that this odd dichotomy needs to change.

(Note: I am not suggesting the privacy of your personal email is any less important than the security of your corporate email. By dichotomy, I am referring to your organization not being at least as concerned about its security as you are about your personal privacy.)
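
For reference, the one-time codes behind most of these 2-factor implementations are typically generated with the TOTP algorithm (RFC 6238). The sketch below is a minimal, self-contained version of that calculation; real deployments use vetted libraries and also handle clock drift, rate limiting and secret storage.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, period: int = 30, digits: int = 6) -> str:
    """Compute the current TOTP code (RFC 6238) for a Base32-encoded shared secret."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // period                  # moving factor: 30-second time steps
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                            # dynamic truncation (RFC 4226)
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

if __name__ == "__main__":
    # A commonly used example secret for demos, not a real credential.
    print(totp("JBSWY3DPEHPK3PXP"))
```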

And if your organization does find itself in a situation where it has no option but to continue with basic authentication, testing and studies of passwords like this one should inform making your password policies (truly) stronger. Don't continue with a password standard established years ago (or based on some arbitrary best practice or external standard) that forces users to have a complex mix of letters, numbers and symbols, change passwords every 60 days, and never reuse their last 6 or 24 passwords. You may only be making their user experience miserable without making your password security any stronger. Also, don't forget to take a look at your password hashing, which we talked about here as a case in point.
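
In that spirit, here is a minimal sketch of a password acceptance check that favors length and screening against known-breached passwords over arbitrary composition and rotation rules, an approach consistent with current NIST 800-63B guidance. The breached-password file name is a placeholder for whatever corpus you maintain.

```python
def load_breached_passwords(path: str) -> set:
    """Load a local corpus of known-breached passwords, one per line."""
    with open(path, encoding="utf-8", errors="ignore") as f:
        return {line.strip() for line in f}

def is_acceptable_password(candidate: str, breached: set, min_length: int = 12) -> bool:
    """Accept on length and absence from the breached corpus; no composition or rotation rules."""
    if len(candidate) < min_length:
        return False
    if candidate.lower() in breached:   # assumes the corpus is stored lowercase
        return False
    return True

if __name__ == "__main__":
    breached = load_breached_passwords("breached_passwords.txt")  # hypothetical local file
    print(is_acceptable_password("correct horse battery staple", breached))
```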

Beware of Security Best Practices and Controls Frameworks

What could be possibly wrong with “Best Practices” or “Leading Practices” that your favorite security consultant might be talking about? Or for that matter, how could we go wrong if we used the “leading” security standards or controls frameworks?

It is of course useful to have a benchmark of some sort to compare yourself against your peers. The problem comes up (as it so often does) when we start to take these so-called best practices and standards for granted. This often drives us to a state of what I like to call template mindsets and approaches in security. More often than not, in my experience, this leads to incorrect security decisions because we didn't consider all the facts and circumstances that may be unique to each of our settings.

Let me explain with an example.

Let us say that you are using a leading security framework such as the HITRUST CSF for healthcare. To take the example of password controls, Control Reference 01.d on Password Management has a fairly restrictive set of password controls even at Level 1, which is HITRUST CSF's baseline level of controls. Level 1 includes requirements for password length, complexity, uniqueness of the current password relative to a certain number of previous passwords, and so on. However, there is no Level 1 requirement around the standard to be used for hashing passwords. In fact, there is not a single mention of "password hash" or "salt" in over 450 pages of the CSF framework, even in its latest 2014 version.

Now, if you are a seasoned and skilled security practitioner, you know that these Level 1 password controls are mostly meaningless if the password hashes are not strong enough and the password hash file is stolen by some means. It is fairly common for hackers to steal password hash files early and often in their campaigns; the reported breaches at Evernote, LinkedIn and Adobe readily come to mind. Just yesterday, we learned of what appears to be password theft on a fairly unprecedented scale.
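
For readers less familiar with the mechanics, password hashing with a per-user random salt and a deliberately slow key-derivation function is what makes a stolen hash file expensive to crack. Here is a minimal sketch using Python's standard library (PBKDF2 in this case; bcrypt, scrypt and Argon2 are common alternatives):

```python
import hashlib
import hmac
import os

ITERATIONS = 600_000  # deliberately slow; tune to your hardware and latency budget

def hash_password(password: str) -> tuple:
    """Return (salt, digest) using a per-user random salt and PBKDF2-HMAC-SHA256."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify_password(password: str, salt: bytes, stored_digest: bytes) -> bool:
    """Recompute the hash with the stored salt and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, stored_digest)

if __name__ == "__main__":
    salt, digest = hash_password("a long memorable passphrase")
    print(verify_password("a long memorable passphrase", salt, digest))  # True
    print(verify_password("wrong guess", salt, digest))                  # False
```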

So, if you see a consultant using a so called best practice set of controls or one of the security controls frameworks to perform your risk assessment and he/she doesn’t ask a question on password hashes (or some other control or vulnerability that may truly matter), you should know the likely source of the problem. More than likely, they are simply going through the motions by asking you questions from a controls checklist with little sense of understanding or focus around some of the threats and vulnerabilities that may be truly important in your setting or context. And as we know, any assessment without a clear and contextual consideration for the real world threats and vulnerabilities is not really a risk assessment. You may just have spent a good amount of money on the consultant but probably do not have much to show for it in terms of the only metric that matters in an assessment – the number of “real” risks identified and their relative levels of magnitude –  so you can make intelligent risk management decisions.

In closing, let us not allow ourselves to be blindsided by so-called "Best Practices" and security controls frameworks. Meaningful security risk management requires us to look at the threats and vulnerabilities that are often unique to each of our environments and contexts. What was considered a best practice somewhere else, or a security framework put out by someone, may at best be a reference source for double-checking that we didn't miss anything. They should never be the sole source for our assessments, and certainly not the yardstick for our decision making in security.

I welcome your thoughts and comments.

Important notes and sources for reference

  • I used HITRUST CSF only as an example. The idea discussed in this post applies to any set of best practices or security controls frameworks. After all, no matter how good their "quality" may be, they can't keep up with the speed at which today's security threats evolve and new vulnerabilities are discovered.
  • If you are interested in learning some really useful information on password hashing and password management, I would strongly recommend this post (caution: it is not a quick read; allow yourself at least 30 minutes to read and absorb the details, especially if you are not an experienced security professional).

How useful is the HHS OIG report published this week?

I am sure some of you saw this news report about HHS OIG finding some security related deficiencies in the EHR certification program.

I was keen to read the full OIG report (pdf) which I did get a chance to do this evening. I know HHS OIG does great work overall but I must say I didn’t come away feeling very good about the quality or usefulness of this particular report, for the following couple of reasons:

  1. The report is really of an audit performed in 2012 of the 2011 EHR certification program, which doesn't even exist in 2014. What value does it provide to have ONC responding to this audit report in 2014? Shouldn't OIG have sent the report to ONC soon after the audit in 2012, so it could have led to changes in the program while the program still existed? This OIG audit and report would have been a better use of taxpayer dollars had they been timely.
  2. I am not sure OIG has done a good job of substantiating why they don't agree that the 2014 certification criteria address their concerns. They provide the example of multi-factor authentication not being included in the 2014 criteria. While multi-factor authentication would obviously provide better security, does OIG think all access to EHRs must be protected by multi-factor authentication? Or perhaps only remote access (meaning access from outside a trusted network, say a hospital facility)? Security in healthcare can't come at the expense of the user experience of providers and clinicians. Requiring multi-factor authentication at all times is going to impact clinician productivity and hence patient care. Also, OIG should have known that multi-factor technologies are still not (or at least were not when ONC finalized the 2014 criteria) at a point where they can be used as the mandatory baseline authentication mechanism in EHRs without compromising user experience. If I remember correctly, the Health IT Standards Committee (HITSC) did consider two-factor authentication for inclusion in the 2014 criteria but decided to exclude it for "practicality" reasons. To sum up on this point, I think OIG could have been more objective in its opinions on the 2014 criteria.

In closing, I am not sure what process or protocols OIG follows, but it appears this audit report could have had better impact if it had been more timely, objective and actionable.

From A Security Or Compliance StandPoint…

It is probably safe to say that we security professionals hear the phrase in the title of this post rather frequently. For one, I heard it again earlier today from an experienced professional presenting on a webinar. I believe it is a cliche.

I actually think the phrase conveys the view that we do some things for security's sake and certain other things for compliance's sake (but not necessarily for the sake of improving the security risk posture). Therein lies the problem, in my view. Why should we do certain things for compliance if they don't necessarily improve the security risk posture? By the way, I don't think we should do security for security's sake either; more on that below.

I don't think there is any security regulation that requires you to implement a specific control regardless of the risk associated with not implementing it. I think we all agree that PCI DSS is perhaps the most prescriptive security regulation there is, and even it provides an option not to implement a specific control just for compliance's sake if we can justify the decision by way of compensating controls. See below.

Note: Only companies that have undertaken a risk analysis and have legitimate technological or documented business constraints can consider the use of compensating controls to achieve compliance. (Source: Requirements and Security Assessment Procedures – PCI DSS version 3.0 (pdf))

I think all this can change if we insist on using the word "risk" (and meaning it) in all of our security or privacy related conversations. It can be hard to make the change because understanding and articulating risk is not easy; it is certainly much harder than rattling off a security control from a security framework (NIST 800-53, HITRUST CSF, ISO 27002 et al.) or some regulation (PCI DSS, HIPAA Security Rule et al.). It requires a risk analysis, and a good risk analysis requires work and skills that can be hard to come by; we also need to watch for the pitfalls.

We may be better served to think of non-compliance as an option, with non-compliance treated as just another risk alongside risks that are possibly more serious in nature (loss of patient privacy, intellectual property or identity theft, etc.). If we did that, we would be in a position to articulate (or at least be forced to articulate) why implementing a particular compliance requirement doesn't make sense, because the risk associated with not implementing it is low enough given certain business or technical factors or compensating controls we may have.

Before I forget: just as it doesn't make sense to do compliance for compliance's sake, it doesn't make sense to do security for security's sake either. We often see this problem with organizations looking to get themselves a "security certification" of some sort, such as HITRUST CSF, ISO et al. In the quest to attain a passing certification score, you could find yourself implementing security controls for the sake of the certification. There are certainly valid reasons to pursue certification, but one needs to be watchful so we aren't forced into doing security for security's sake.

So, let us make a change in how we approach security and compliance by identifying and articulating "the risk" always and every time. Perhaps we can make a start in that direction by not using the cliche in the title of this post. Instead, we might say something like "From a security or privacy risk standpoint…".

 

A Second Look At Our Risk Assessments?

I came across this Akamai Security Blog post recently, which I thought was a useful and informative read overall. As I read through it, however, something caught my attention: the phrase "The vendor considers the threat posed by the vulnerability". That prompted me to write this post on the need for extreme due diligence in security risk assessments and the critical importance of engagement sponsors keeping the assessment teams on their toes. (Note: just to be doubly clear, the objective here is not to pick on the Akamai post but to discuss certain key points about security risk assessments.)

When it comes to security risk assessments (or security risk analysis, if the HIPAA Security Rule is of any relevance to you), I believe terminology is extremely important. Those of us who have performed a "true" risk assessment know that the terms threat, vulnerability, likelihood, impact and risk mean specific things. In this particular Akamai post, I think the author may have used the word "threat" instead of "risk" somewhat inaccurately. While that may not be significant in the context of a blog post, using these terms inaccurately can make all the difference in the quality and usefulness of actual risk assessments. In my experience, more often than not, such misplaced terminology is a symptom of a lack of due diligence on the part of the person or team doing the assessment. Considering that risk assessments are so foundational to a security program, we strongly recommend addressing such red flags very early in a risk assessment engagement.
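
As a small worked illustration of why that terminology matters, the sketch below composes a risk ranking and a risk statement from an explicitly named threat, vulnerability, likelihood and impact. The scales and thresholds are illustrative assumptions, not a prescribed methodology.

```python
from dataclasses import dataclass

@dataclass
class Risk:
    threat: str          # who or what could exploit the weakness
    vulnerability: str   # the weakness itself
    likelihood: int      # 1 (rare) .. 5 (almost certain), illustrative scale
    impact: int          # 1 (negligible) .. 5 (severe), illustrative scale

    def score(self) -> int:
        return self.likelihood * self.impact

    def ranking(self) -> str:
        s = self.score()
        return "High" if s >= 15 else "Medium" if s >= 8 else "Low"

    def statement(self) -> str:
        return (f"{self.ranking()} risk ({self.score()}): {self.threat} "
                f"exploiting {self.vulnerability}.")

if __name__ == "__main__":
    r = Risk(threat="external attacker reusing stolen credentials",
             vulnerability="single-factor remote access to the EHR",
             likelihood=4, impact=5)
    print(r.statement())  # prints a "High risk (20): ..." statement
```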

In fact, I would like to suggest that the sponsors ask the following questions of the consultants or teams performing the risk assessment as early as pertinent in the engagement:

  • Have you identified the vulnerabilities accurately and do you have the evidence to back up your findings?
  • Have you identified all the relevant threats that can exploit each vulnerability?
  • How did you arrive at the likelihood estimation of each threat exploiting each vulnerability? Can you back up your estimation with real, known and published information or investigation reports on exploits over the recent past (say three years)? Did you consider the role of any and all compensating controls we may have in possible reduction of the likelihood estimates?
  • Does your Risk Ranking/Risk Statement clearly articulate the “real” risk (and not some imagined or assumed risk) to the organization, supported by the Likelihood and Impact statements?
  • When proposing risk mitigation recommendations, have you articulated the recommendations in actionable terms? By “Actionable”, we mean something that can be readily used to build a project plan to initiate the risk mitigation effort(s).

If the answers to any of the above questions are negative or even tentative, the assessment may not be serving the organization's risk management objectives. In my experience, most risk assessments turn out to be no more than control or gap assessments, which, quite frankly, don't need to be conducted by the often "highly paid" consultants.

A "true" risk assessment needs to be performed by a security practitioner or team with an inquisitive mind, depth and breadth of relevant security skillsets, and knowledge of the current security threat and vulnerability environment.

You may also find the following posts from our blog relevant and useful:

Top 10 Pitfalls – Security or Privacy Risk Assessments
Compliance obligations need not stand in the way of better information security and risk management
Next time you do a Risk Assessment or Analysis, make sure you have Risk Intelligence on board

Please don’t hesitate to post your feedback or comments.

Top 10 Pitfalls – Security or Privacy Risk Assessments

Risk assessment is a foundational requirement for an effective security or privacy program, and it needs to be the basis for every investment decision in information security or privacy. To that extent, we strongly recommend it as the very first thing organizations do when they set out to implement or improve a program. It is no surprise, then, that most regulations include it as a mandatory requirement (e.g., the HIPAA Security Rule, Meaningful Use Stages 1 and 2 for healthcare providers, PCI DSS 2.0). Yet we continue to see many organizations that do not perform it right, if they perform one at all. This is true at least in the healthcare sector that we focus on. They see it as just another compliance requirement and go through the motions.

So, we put together a list of the "Top 10 Pitfalls" related to risk assessments. We present them here and will look to expand on and discuss each of these pitfalls in separate posts to follow.

    1. Performing risk analysis without knowing all the locations where the data you are looking to safeguard (PHI, PII, etc.) is created, received, stored, maintained or transmitted
    2. Approaching it with a compliance or audit mindset rather than a risk mindset
    3. Mistaking controls/gap assessment for risk analysis. Hint: Controls/Gap Assessment is but one of several steps in risk analysis.
    4. Focusing on methodologies and templates rather than outcomes; We discuss the idea here
    5. Not having a complete or holistic view of the threats and vulnerabilities and hence failing to articulate and estimate the likelihood adequately
    6. Not realizing that no security controls framework (e.g. NIST 800-53, HITRUST CSF etc.) is perfect and using the security controls in these frameworks without a sense of context in your environment
    7. Poor documentation – Reflects likely lack of due diligence and could lead to bad decision making or at the very least may not pass an audit
    8. Writing Remediation or Corrective Action Plans without specialist knowledge and experience in specific remediation areas
    9. Inadequate planning and lack of curiosity, investigative mindset or quality in engagement oversight
    10. Not engaging the right stakeholders or "owners" throughout the risk assessment process, especially in signing off on remediation recommendations or Corrective Action Plans

We’ll be delighted to hear your feedback and will look to perhaps even grow this list based on the feedback. After all, this is about being a good steward of the security or privacy program dollars and managing risks to our organizations, customers or partners.
