RisknCompliance Blog

Thoughts On Delivering Meaningful Outcomes in Security and Privacy


Is your auditor or consultant anything like the OPM OIG?

The OPM breach has deservedly been in the news for over a month now. Much has been written and said about it across the mainstream media and the internet1.

I want to focus here on a topic that hasn’t really been discussed in public, perhaps not at all – could the OIG (and their audit reports) have done more, or done things differently, year after year of issuing the reports? Specifically, how could these audit reports have drawn some urgently needed attention to the higher risks and perhaps helped prevent the breach?

Let us look at the OIG’s latest Federal Information Security Management Act (FISMA) audit report, issued in November 2014 (pdf), as a case in point. The report runs over 60 pages and looks to have been a reasonably good effort at meeting its objective of covering the general state of compliance with FISMA. However, I am not sure the report is of much use for “real world” risk management purposes at an agency with known organizational constraints on the availability of appropriate security people and resources; an agency that should have felt some urgency about implementing certain safeguards on at least the one or two critical systems, given the nature and quantity of sensitive information they held.

We have talked about this problem before, advising caution against a “fixation” on compliance or controls frameworks at the expense of priority risk management needs. In the case of this particular report, the OIG should have discussed the risks as well (not just the findings or gaps) and provided some actionable prioritization for accomplishing quick wins in risk mitigation. For example, recommendation #21 on page 25 could have called out the urgent need for multi-factor authentication (or appropriate compensating controls) on the one or two “high risk” systems that held upwards of 20 million sensitive records, which we now know were breached.

I also believe providing a list of findings in the Executive Summary (on page 2) was a wasted opportunity. Instead of providing a list of compliance or controls gaps,  the summary should have included specific call-to-action statements by articulating the higher risks and providing actionable recommendations for what the OPM could have done over the following months in a prioritized fashion.


Here then are my recommended takeaways:

1. If you are an auditor performing an audit or a consultant performing a security assessment, you might want to emphasize “real” risks, as opposed to compliance or controls gaps that may be merely academic in many cases. Recognize that evaluating and articulating risks requires a more complete understanding of the business, technology and regulatory circumstances than you would need if you were merely writing up gaps against certain controls or compliance requirements.

2. Consider the organizational realities or constraints and think about creative options for risk management. Always recommend feasible quick-wins in risk mitigation and actionable prioritization of longer term tasks.

3. Do not hesitate to bring in or engage specialists if you aren’t sure you can evaluate or articulate risks and recommend mitigation tasks well enough. Engage with the relevant stakeholders who would be responsible for risk mitigation, to make sure they can realistically implement your recommendations, at least the ones you recommend implementing before your next audit or assessment.

In closing, I would strongly emphasize a focus on meaningful risk management outcomes, not just producing reports or deliverables. A great looking deliverable that doesn’t convey the relative levels of real risks and the urgency of mitigating certain higher risks is not going to serve any meaningful purpose.

References for additional reading

1.  At the time of this writing,  I found these two links to be useful reading for substantive information on the OPM breach.

Information about OPM Cybersecurity Incidents

“EPIC” fail—how OPM hackers tapped the mother lode of espionage data

2.  You may also be interested in a quick read of our recommendations for agile approaches to security/privacy/compliance risk assessments or management.  A pdf of our slide deck will be emailed to you after a quick registration here.

This is how the #AnthemHack could have been stopped, perhaps

It has been just over a week since the #AnthemHack was made public.

Over this period, the mainstream media and many bloggers and commentators have, as usual, been all over it. Many have resorted to some not-so-well-thought-out statements (at least in my opinion, as well as that of a couple of others1), such as claiming “encryption” could have prevented it. Some have even faulted HIPAA for not mandating encryption of data-at-rest2.

Amidst all this, I believe there has been some good reporting as well, albeit rare. I am going to point to a couple of articles by Steve Ragan at CSOOnline.com, here and here.

Here, I offer an analysis of how Anthem could perhaps have detected and stopped the breach before the data was exfiltrated. It is based on the assumption that the information published in Steve Ragan’s articles is accurate.

Let’s start with some known information then:

  1. “Anthem, based on data posted to LinkedIn and job listings, uses TeraData for data warehousing, which is a robust platform that’s able to work with a number of enterprise applications”. Quoted from here.
  2. “According to a memo from Anthem to its clients, the earliest signs of questionable database activity date back to December 10, 2014”. Quoted from here.
  3. “On January 27, 2015, an Anthem associate, a database administrator, discovered suspicious activity – a database query running using the associate’s logon information. He had not initiated the query and immediately stopped the query and alerted Anthem’s Information Security department. It was also discovered the logon information for additional database administrators had been compromised.” Quoted from the same article as above.

I went over to the Teradata site to download their Security Administration guide for Release 13 of the Teradata Database (download link). The guide I downloaded is an older version, from November 2009. I am assuming Anthem is using Release 13 or later and so isn’t missing the features I am looking at.

Database logging can be challenging sometimes and depending on the features available in your database, the logging configurations can generate a lot of noise. This in turn, may make it difficult to detect events of interest. I wanted to make sure there weren’t such issues in this case.

It turns out Teradata is fairly good in its logging capabilities. Based on the highlighted content, it appears one should be able to configure log generation specifically for a DBA performing a SELECT query on a table containing sensitive data.

[TD-Logging: highlighted excerpt from the Teradata Security Administration guide]

There should not ordinarily be a reason for a DBA to query sensitive data, so this should have been identified as a high-risk alert use-case by their Logging and Monitoring program.

I am assuming Anthem also has a Security Information and Event Management (SIEM) solution that they use for security event monitoring. Even a garden variety SIEM solution should be able to collect these logs and raise an immediate alert considering the “high risk” nature of a DBA trying to query for sensitive data.
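To make the use-case concrete, here is a minimal sketch of what such a SIEM correlation rule boils down to. The event fields, table name and DBA account names below are my own illustrative assumptions, not Anthem’s or Teradata’s actual schema:

```python
# Illustrative sketch only: flag SELECT queries against sensitive tables
# when they are issued under a DBA account. Field names, the table name
# and the account names are assumptions for this example.

SENSITIVE_TABLES = {"member_pii"}           # assumed table holding sensitive records
DBA_ACCOUNTS = {"dba_jsmith", "dba_arao"}   # assumed DBA logons

def is_high_risk(event: dict) -> bool:
    """True when a DBA account runs a SELECT against a sensitive table."""
    return (
        event.get("statement_type") == "SELECT"
        and event.get("table") in SENSITIVE_TABLES
        and event.get("user") in DBA_ACCOUNTS
    )

def alerts(events):
    """Filter a stream of database audit events down to high-risk ones."""
    return [e for e in events if is_high_risk(e)]
```

The point is that the detection logic itself is trivial; what matters is having defined the use-case in advance and routed the resulting alert to an accountable owner.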

This alert should have gone to someone who is accountable or responsible for security incident response. It appears that didn’t happen. This is symptomatic of a lack of an “ownership” and “accountability” culture, in my view. For a case of this nature, I strongly recommend that the IT owner (e.g. the Manager or Director of the Database Team) be on point to receive such alerts involving sensitive data. Your Security Operations folks may not necessarily know the context of the query, and therefore its high-risk nature. I talked about this in a guest post last month. See the last bullet point in this post.

As quoted at #3 above, it appears one of the DBAs discovered someone using his/her credentials to run that query. You certainly don’t want to leave it to a DBA to monitor his or her own actions. If this had been a malicious DBA, we might be talking about a major breach caused by an insider and not by an Advanced Persistent Threat (APT) actor, as the Anthem breach appears to be. But then, I digress.

If the high risk anomalous DBA activity had been discovered immediately through the alert and if appropriate incident response steps had been initiated, it is possible that Anthem may have been able to stop the breach before the APT actor took the data out of the Anthem network.

So, when we come to think of it, some simple steps of due diligence in establishing security and governance practices might have helped avoid a lot of pain for Anthem, not to mention a lifetime of hurt for the people and families impacted by the breach.

Here then are some take-aways if you would like to review your security program and want to make some changes:

  1. You may not need that shiny object. As explained above, an average SIEM solution can raise such an alert. We certainly don’t need a “big data” “analytics” solution costing hundreds of thousands or millions of dollars.
  2. Clarity in objectives3. Define your needs and use cases before you ever think of a tool or technology. Even if we had a fancy technology, it would be no use if we didn’t identify the high risk use-case for monitoring the DBA activity and implement an alert for it.
  3. Process definition and speed of incident response. People and process aspects are just as important as (if not more important than) the technology itself. Unfortunately, we have too many instances of expensive technologies being ineffective because we didn’t design and implement the associated people/process workflows for security monitoring and timely incident response.
  4. Ownership and accountability. I talked about this topic last month with a fair amount of detail and examples. While our Security Operations teams have their job to do in specific cases, I believe that the IT leadership must be “accountable” for security of the data collected, received, processed, stored or transmitted by their respective systems. In the absence of such an accountability and ownership culture, our security monitoring and response programs will likely not be effective.
  5. Focus on quick wins. If we look at our environments with inquisitive eyes and ears, most of us will likely identify quick wins for risk reduction. By quick wins, I am referring to actions for reducing higher risk levels that we can accomplish in weeks rather than months, without deploying a lot of resources. Not all of our risk management action plans have to necessarily be driven by formal projects. In the context of this Anthem example, it should be a quick win to implement an alert and have the database manager begin to watch for these alerts.
  6. Don’t accept pedestrian risk assessments and management. If you go back and look at your last risk assessment involving a sensitive database, for example, what risks were identified? What were the recommendations? Were those recommendations actionable, or just “template” statements? Did the recommendations identify quick-win risk reduction opportunities? Did you follow through to implement the quick wins? In other words, the quality of a risk assessment should be determined chiefly by the risk reduction opportunities you were able to identify and the outcomes you were able to accomplish within a reasonable period of time. The quality of the paper deliverables, methodology etc. is not nearly as important, which is not to say they don’t matter.
  7. Stay away from heavyweight security frameworks. We talked about this last year. I’ll probably have more to say about it in another post. Using #AnthemHack as an example, I plan to illustrate how a particular leading security framework wouldn’t be very helpful. In fact, I believe that using heavyweight security frameworks can be detrimental to most security programs. They take a lot of time and precious resources, not to mention focus, away from accomplishing risk reduction outcomes that truly matter.
  8. Effective governance and leadership. Last but not least, the need for leadership and governance should come as no surprise. None of the previous items on this list can be truly accomplished without an emphasis on governance and leadership, starting right at the board level and extending across the executive leadership.

I hope the analysis and recommendations are useful to you.

Remember, while the techniques employed by APT actors may be advanced and persistent, the vulnerabilities they exploit are often there only because we didn’t do some basic things right or perhaps we made it too hard and complicated on ourselves to do it right.

References for additional reading

1 Why even strong crypto wouldn’t protect SSNs exposed in Anthem breach, Steven M. Bellovin

http://arstechnica.com/security/2015/02/why-even-strong-crypto-wouldnt-protect-ssns-exposed-in-anthem-breach

Even if Anthem Had Encrypted, It Probably Wouldn’t Have Helped, Rich Mogull

https://securosis.com/blog/even-if-anthem-encrypted-it-probably-wouldnt-have-mattered

 

2 I like the fact that the HIPAA Security Rule is not prescriptive, except…, Kamal Govindaswamy

http://rnc2.com/blog/regulatory-compliance/hipaahhitech/like-fact-hipaa-security-rule-prescriptive-except/

 

3 Security Analytics Lessons Learned — and Ignored!, Anton Chuvakin

http://blogs.gartner.com/anton-chuvakin/2015/02/09/security-analytics-lessons-learned-and-ignored/

Beware of Security Best Practices and Controls Frameworks

What could be possibly wrong with “Best Practices” or “Leading Practices” that your favorite security consultant might be talking about? Or for that matter, how could we go wrong if we used the “leading” security standards or controls frameworks?

It is of course useful to have a benchmark of some sort to compare yourself against your peers. The problem comes up (as it so often does) when we start to take these so-called best practices and standards for granted. This often drives us to a state of what I like to call template mindsets and approaches in security. More often than not, in my experience, this leads us to make incorrect security decisions because we didn’t consider all the facts and circumstances that may be unique to each of our settings.

Let me explain with an example.

Let us say that you are using a leading security framework such as the HITRUST CSF for healthcare. To take the example of password controls, Control Reference 01.d on Password Management has a fairly restrictive set of password controls even at Level 1, which is HITRUST CSF’s baseline level of controls. Level 1 includes requirements for password length, complexity, uniqueness of the current password compared to a certain number of previous passwords, and so on. However, there is no requirement in Level 1 around the standard to be used for hashing passwords. In fact, there is not a single mention of the words “password hash” or “salt” in the over 450 pages of the CSF framework, even in its latest 2014 version.

Now, if you are a seasoned and skilled security practitioner, you should know that these Level 1 password controls are mostly meaningless if the password hashes are not strong enough and the password hash file was stolen by some means. It is fairly common for hackers to steal password hash files early and often in their hacking campaigns. Reported breaches at Evernote, LinkedIn and Adobe readily come to mind. We learned about what appears to be this fairly unprecedented scale of stolen passwords just yesterday.
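To make the point concrete, here is a minimal sketch of salted, iterated password hashing using only Python’s standard library. The iteration count is an assumption for illustration; in practice you would tune it to your hardware, or better, use a purpose-built scheme such as bcrypt or scrypt:

```python
import hashlib
import hmac
import os

# Sketch of salted, iterated password hashing (PBKDF2-HMAC-SHA256 from the
# Python standard library). The iteration count is an illustrative
# assumption, not a recommendation.
ITERATIONS = 200_000

def hash_password(password: str, salt=None):
    """Return (salt, digest); a fresh random salt is generated per user."""
    salt = salt if salt is not None else os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify_password(password: str, salt: bytes, expected: bytes) -> bool:
    """Recompute the hash and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, expected)
```

The per-user random salt is what defeats precomputed (rainbow table) attacks, and the iteration count is what makes brute-forcing a stolen hash file expensive; these are precisely the properties a framework that never mentions “password hash” or “salt” cannot assure.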

So, if you see a consultant using a so-called best practice set of controls or one of the security controls frameworks to perform your risk assessment, and he or she doesn’t ask a question about password hashes (or some other control or vulnerability that may truly matter), you should know the likely source of the problem. More than likely, they are simply going through the motions, asking you questions from a controls checklist with little understanding of, or focus on, the threats and vulnerabilities that may be truly important in your setting or context. And as we know, any assessment without clear and contextual consideration of real-world threats and vulnerabilities is not really a risk assessment. You may have spent a good amount of money on the consultant, but you probably do not have much to show for it in terms of the only metric that matters in an assessment – the number of “real” risks identified and their relative levels of magnitude – which is what lets you make intelligent risk management decisions.

In closing, let us not allow ourselves to be blindsided by the so called “Best Practices” and Security Controls Frameworks. Meaningful security risk management requires us to look at the threats and vulnerabilities that are often unique to each of our environments and contexts. What may have been considered a best practice somewhere else or a security framework put out by someone may at best be just a reference source to double-check and make sure we didn’t miss anything. They should never be the sole source for our assessments and certainly not the yardstick for our decision making in security.

I welcome your thoughts and comments.

Important notes and sources for reference

  • I used HITRUST CSF only as an example. The idea discussed in this post would apply to any set of Best Practices or Security Controls Frameworks. After all, no set of Best Practices or Security Controls Frameworks, no matter how good its “quality” may be, can keep up with the speed at which today’s security threats are evolving and new vulnerabilities are being discovered.
  • If you are interested in learning some really useful information on password hashing and password management, I would strongly recommend this post. (Caution: it is not a quick read; allow yourself at least 30 minutes to read and absorb the details, especially if you are not an experienced security professional.)

From A Security Or Compliance StandPoint…

It is probably safe to say that we security professionals hear the phrase in the title of this post rather frequently. For one, I heard it again earlier today from an experienced professional presenting on a webinar… I believe it is a cliche.

I actually think the phrase conveys the view that we do some things for security’s sake and certain other things for compliance’s sake (but not necessarily for the sake of “improving the security risk posture”). Therein lies the problem, in my view. Why should we do certain things for compliance if they don’t necessarily improve the security risk posture? BTW, I think we shouldn’t do security for security’s sake either… more on that below.

I don’t think there is any security regulation that requires you to implement a specific control no matter what the risk associated with not implementing it is. I think we can all agree PCI DSS is perhaps the most prescriptive security regulation there is, and even it provides an option not to implement a specific control just for compliance’s sake, if we can justify the decision by way of compensating controls. See below.

Note: Only companies that have undertaken a risk analysis and have legitimate technological or documented business constraints can consider the use of compensating controls to achieve compliance. (Source: Requirements and Security Assessment Procedures – PCI DSS version 3.0 (pdf))

I think all this can change if we insist on using the word “Risk” (and meaning it) in all of our security or privacy related conversations. It can be hard to make the change, because understanding and articulating risk is not easy… we know it is certainly much harder than rattling off a security control from a security framework (NIST 800-53, HITRUST CSF, ISO 27002 et al.) or some regulation (PCI DSS, HIPAA Security Rule et al.). It requires one to do a risk analysis, and a good risk analysis requires work; that work can be hard to come by, and we need to watch for the pitfalls.

We may be better served to think of non-compliance as an option, treating non-compliance as just another risk alongside the other, possibly more serious, risks (risk of loss of patient privacy, intellectual property or identity theft, etc.). If we did that, we would be in a position to articulate (or at least be forced to articulate) why implementing a particular compliance requirement doesn’t make sense, because the risk associated with not implementing it is low enough given certain business/technical factors or compensating controls we may have.

Before I forget… Just like it doesn’t make sense to do compliance for compliance’s sake, it doesn’t make sense to do security for security’s sake either. We often see this problem with organizations looking to get themselves a “security certification” of some sort, such as HITRUST CSF, ISO et al. In the quest to attain a passing certification score, you could find yourself implementing security controls for “security certification” sake. There are certainly valid reasons why one would want to pursue certification, but one needs to be watchful so we aren’t forced to do security for security’s sake.

So, let us make a change in how we approach security and compliance by identifying and articulating “the risk”, always and every time… Perhaps we can make a start in that direction by not using the cliche in the title of this post. Instead, we might say something like “From a security or privacy risk standpoint…”.

 

A Second Look At Our Risk Assessments?

I came across this Akamai Security Blog post recently, which I thought was a useful and informative read overall. As I read through the blog post, however, something caught my attention: the phrase “The vendor considers the threat posed by the vulnerability”. That prompted me to write this post on the need for extreme due diligence in security risk assessments and the critical importance of the engagement sponsors keeping the assessment teams on their toes. (Note: just to be doubly clear, the objective here is not to pick on the Akamai post but to discuss certain key points about Security Risk Assessments.)

When it comes to Security Risk Assessments (or Security Risk Analysis, if the HIPAA Security Rule is of any relevance to you), I believe that terminology is extremely important. Those of us who have performed a “true” risk assessment know for a fact that the terms threat, vulnerability, likelihood, impact and risk mean specific things. In the case of this specific Akamai post, I think the author may have used the word “threat” instead of “risk” somewhat inaccurately. While it may not be significant in the context of this particular blog post, I believe that using these terms inaccurately can make all the difference in the quality and usefulness of actual risk assessments. In my experience, more often than not, such misplaced terminology is a symptom of a lack of due diligence on the part of the person or team doing the assessment. Considering that risk assessments are so “foundational” to a security program, we strongly recommend addressing such red flags very early in a Risk Assessment engagement.

In fact, I would like to suggest that the sponsors ask the following questions of the consultants or teams performing the risk assessment as early as pertinent in the engagement:

  • Have you identified the vulnerabilities accurately and do you have the evidence to back up your findings?
  • Have you identified all the relevant threats that can exploit each vulnerability?
  • How did you arrive at the likelihood estimation of each threat exploiting each vulnerability? Can you back up your estimation with real, known and published information or investigation reports on exploits over the recent past (say three years)? Did you consider the role of any and all compensating controls we may have in possible reduction of the likelihood estimates?
  • Does your Risk Ranking/Risk Statement clearly articulate the “real” risk (and not some imagined or assumed risk) to the organization, supported by the Likelihood and Impact statements?
  • When proposing risk mitigation recommendations, have you articulated the recommendations in actionable terms? By “Actionable”, we mean something that can be readily used to build a project plan to initiate the risk mitigation effort(s).

If the answers to any of the above questions seem negative or even tentative, the assessment may not be serving the organization’s risk management objectives. In my experience, most risk assessments turn out to be no more than mere Control or Gap Assessments, which, quite frankly, don’t need to be conducted by often “Highly Paid” consultants.

A “true” risk assessment needs to be performed by a security practitioner or team with an inquisitive mind, depth and breadth of relevant security skillsets, and knowledge of the current security threat/vulnerability environment.
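On the Risk Ranking question above: a risk statement should reduce, at a minimum, to defensible likelihood and impact estimates. Here is a deliberately minimal sketch of that arithmetic; the 1-5 scales and rating thresholds are my assumptions for illustration, not a prescribed methodology:

```python
# Purely illustrative: a minimal likelihood x impact risk ranking.
# The 1-5 scales and the High/Medium/Low thresholds are assumptions
# for this sketch, not part of any standard.

def risk_score(likelihood: int, impact: int) -> int:
    """Combine likelihood and impact estimates (each on a 1-5 scale)."""
    if not (1 <= likelihood <= 5 and 1 <= impact <= 5):
        raise ValueError("likelihood and impact must each be on a 1-5 scale")
    return likelihood * impact

def risk_rating(score: int) -> str:
    """Map a numeric score to a coarse rating band."""
    if score >= 15:
        return "High"
    if score >= 8:
        return "Medium"
    return "Low"
```

A sketch like this is only the arithmetic at the end; the hard work that the questions above probe is in defending the likelihood and impact inputs with real evidence.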

You may also find the following posts from our blog relevant and useful:

Top 10 Pitfalls – Security or Privacy Risk Assessments
Compliance obligations need not stand in the way of better information security and risk management
Next time you do a Risk Assessment or Analysis, make sure you have Risk Intelligence on board

Please don’t hesitate to post your feedback or comments.

Top 10 Pitfalls – Security or Privacy Risk Assessments

Risk Assessment is a foundational requirement for an effective security or privacy program, and it needs to be the basis for every investment decision in information security or privacy. To that extent, we strongly recommend it as the very first thing organizations do when they set out to implement or improve a program. It is no surprise, then, that most regulations include it as a mandatory requirement (e.g. the HIPAA Security Rule, Meaningful Use Stages 1 and 2 for Healthcare Providers, PCI DSS 2.0). Yet we continue to see many organizations not performing it right, if they perform one at all. This is true at least in the Healthcare sector that we focus on. They see it as just another compliance requirement and go through the motions.

So, we thought about a list of “Top 10 Pitfalls” related to Risk Assessments. We present them here and will be looking to expand and discuss each of these pitfalls in separate posts to follow.

    1. Performing risk analysis without knowing all the locations where the data you are looking to safeguard (PHI, PII etc.) is created, received, stored, maintained or transmitted
    2. Approaching it with a compliance or audit mindset rather than a risk mindset
    3. Mistaking controls/gap assessment for risk analysis. Hint: Controls/Gap Assessment is but one of several steps in risk analysis.
    4. Focusing on methodologies and templates rather than outcomes; We discuss the idea here
    5. Not having a complete or holistic view of the threats and vulnerabilities and hence failing to articulate and estimate the likelihood adequately
    6. Not realizing that no security controls framework (e.g. NIST 800-53, HITRUST CSF etc.) is perfect and using the security controls in these frameworks without a sense of context in your environment
    7. Poor documentation – Reflects likely lack of due diligence and could lead to bad decision making or at the very least may not pass an audit
    8. Writing Remediation or Corrective Action Plans without specialist knowledge and experience in specific remediation areas
    9. Inadequate planning and lack of curiosity, investigative mindset or quality in engagement oversight
    10. Not engaging the right stakeholders or “owners” throughout the risk assessment process, especially in signing off on remediation recommendations or Corrective Action Plans

We’ll be delighted to hear your feedback and will look to perhaps even grow this list based on the feedback. After all, this is about being a good steward of the security or privacy program dollars and managing risks to our organizations, customers or partners.

Can we change the tune on Health Information Security and Privacy please?

Notice the title doesn’t say HIPAA Security and Privacy. Nor does it have any of the words – HITECH, Omnibus Rule, Meaningful Use etc. That is the point of this post.

Let us start with a question… I am sure many of you, like me, are routine visitors to the blogosphere and social media sites (especially LinkedIn group discussions) to get a pulse of the happenings in Information Security and Privacy. How often do you see posts or discussions around compliance, versus discussions focused squarely on risk – meaning risk to the organization or to the patients if their health information were compromised by one or the other means?

Compliance (the risk of non-compliance) is only one of the risks and, in our view, should not be the primary driver for any Information Security or Privacy program. In fact, we often like to say that compliance should be a natural consequence of good risk management practices.

Having lived and watched Health Information Security and Privacy for nearly ten years, I am not surprised by this trend at all. Rather, I am looking forward to a day where we talk more about safeguarding the security and privacy of patient data and less about preparing for an OCR Audit. I am not suggesting that you shouldn’t worry about the latter. In fact, I’ll say that one will very likely not have to worry about the OCR or any audit for that matter if one’s real intent is to safeguard security and privacy of patient information. The real intent and objective are extremely important because they shape our thinking and how we go about executing our efforts.

I think  Security and Privacy programs in Healthcare can be a lot more effective (and likely even cost efficient) if they were to prioritize the objectives in the following order:

  • Patient Care and Safety – In most discussions on security, we tend to focus solely on confidentiality of patient information and less so on integrity and availability of the information. When we begin to think of all three security components in equal measure, it is easier to appreciate how a security incident or breach could impact patient care and safety. With the increasing adoption of EHRs, it is very likely that many health-care providers are relying solely on electronic versions of the patient records in one or more EHRs. It is possible that a security incident or breach could result in the patient record not being “available” for access by the physicians who may need to look at the patient’s treatment history before providing the patient with some urgent or emergency care.  In another possible scenario, it is possible that the security breach resulted in compromise of the integrity of the patient record itself, in which case there may be a chance that physicians end up misdiagnosing the patient condition and not providing the right treatment. Such cases were probably unlikely in a world of paper records but they are not inconceivable in a world of electronic records. These issues can result from both malicious and unintentional circumstances.
  • Patient Privacy and Loss of Trust – The impact of a healthcare privacy breach doesn’t need much discussion. The impacted individuals can face severe and lasting financial and reputational harm which can make for a very painful experience. This in turn could result in the provider losing the valuable trust of its customers. 
  • Business Risk – Healthcare businesses could face tort or class action lawsuits arising from either of the two previous scenarios. And then, of course, there is the possibility of patients turning to competitors, especially where they have access to multiple providers. In effect, healthcare organizations could face substantial losses to their bottom lines, and given the increasingly competitive nature of the industry, this could put the business sustainability of these organizations at risk.
  • Risks of Non-Compliance – Finally, of course, there is the risk of non-compliance with industry or government regulations. Non-compliance could leave healthcare organizations facing considerable civil and possibly criminal penalties, as well as recurring expenses from having to comply with OCR resolution agreements, for example. In most instances, however, the impact of non-compliance fines and expenses is only temporary, lasting a few years. On the other hand, the impact of the previous three risks could be much more significant and longer lasting.

It is our view that until we treat security and privacy as central to patient care/safety and to the business/clinical culture, many programs will falter and fail to deliver the intended results. The new era of digital healthcare requires healthcare organizations to think of security and privacy as a business or customer issue, not something they address only for compliance purposes.

In a follow-up post, we'll discuss specific examples of why thinking compliance-first will not get us very far in managing health information security risks.

Focus On What Really Matters – Outcomes and Results

Here is something to think about as a security/privacy consultant or consulting team, big or small …

When you work on client consulting engagements, what are you really focused on? 

  • Is it just your methodology, the "quality" of your documentation deliverables, or implementing a piece of technology?
  • Are you also thinking about the immediate and long-term value to the client in terms of definite outcomes? By outcomes, I mean the results that should really matter in the security/privacy profession, i.e., a specific reduction in security or privacy risks.

Based on what I see of information security and privacy engagements, most consultants are so fixated on the former that they never get themselves to think much about the latter.

Here is something I would urge all consultants to do: go back and talk to the clients you have worked with over the past few years. Do an honest evaluation of how useful your engagement deliverables have been to the client in terms of the outcomes that really matter, i.e., the extent to which the deliverables have helped the client reduce security/privacy risks (the risk of regulatory non-compliance included). Assuming that the client has indeed benefited by way of those outcomes, do you honestly think the benefits were worth the dollars the client paid you?

I see too many engagement deliverables, even from "big name" consultants, ending up as "shelf-ware". I have noticed this trend especially with engagements related to risk assessments and the development of specific security or privacy strategies. In almost all cases, I would attribute the failure to the consultant's lack of understanding of what "really matters" to the client. I also suspect that in some cases the consultant didn't bring the "right" people to the engagement, or failed to provide quality oversight or leadership to the engagement team. In some extreme cases, I believe the consultant was trying to blindly leverage template deliverables from elsewhere. In other words, they were trying to fit a square peg into a round hole, as it were.

Now, I know many consultants or consulting teams would argue that they are not (at least not entirely) responsible for all of the client's outcomes once they are no longer working with the client. While that may be true, I would argue that it is your responsibility to leave the client buyer or sponsor with a list of actionable objectives that the client needs to work on further to realize the expected outcomes. And of course, your deliverables should have been good enough in the first place for the client to execute effectively toward those outcomes.

So, what do we need to do to stop this trend? Here are some thoughts:

  • Right from the first conversation with the client, focus on client outcomes and how the client may measure those outcomes.
  • Based on the outcomes, agree upon appropriate deliverables. Don't include deliverables for the sake of deliverables.
  • When signing the engagement letter, make sure to include a section on measurable outcomes and how your engagement deliverables may help the client realize those outcomes, subject to the client taking specific actions.
  • Coach your engagement team to always keep a keen eye on how every task they perform during the engagement will help with what "really matters" to the client, i.e., achieving the outcomes.
  • Don't be wedded to your methodology and deliverable templates. They are only as good as how much they help the client realize the outcomes.
  • As part of each deliverable, include a section on the next steps required to realize one or more of the agreed outcomes identified in the engagement letter. Make sure to reach agreement with the client sponsor on those next steps before finalizing the deliverable.
  • At the end of the engagement, leave the client with a mutually agreed "Plan for Realization of Outcomes": a set of actionable tasks that everyone agrees will be essential to achieving the outcomes identified in the engagement letter.

Following these steps has served us well over the years. I’ll be interested in readers’ feedback.

Compliance obligations need not stand in the way of better information security and risk management

I couldn’t help but write this post when I noticed this press release based on an IDC Insights survey of oil and gas companies. I don’t have access to the full report, so I am basing my comments solely on the contents of the press release.

I found the following two findings (copied from the press release) to be of interest:

  • Security investments are not compliance driven. Only 10% of the respondents indicated that they are using regulatory compliance as a requirement to justify budgets.
  • Tough regulatory compliance and threat sophistication are the biggest barriers. Almost 25% of respondents indicated regulatory environment as a barrier to ensuring security. In addition, 20% of respondents acknowledged the increasing threat landscape.

The good news here is that only 10% of the respondents used regulatory compliance to justify budgets. What that tells me (I hope it is the case) is that the remaining 90% make budgetary decisions based solely on the information security risks their businesses face, and not on the risks of not complying with regulations or audits. I would commend them for it, and I don’t think any good auditor (regulatory, internal, or external) would have a problem with it either, provided the organization could “demonstrate” that the risk of not complying with a particular regulatory requirement was very low. Agreed, you still need to be able to “demonstrate” it, which isn’t easy if one hasn’t been diligent with risk assessments.

The not-so-good news to me is the 25% number (I realize it might seem low enough to some people): the share of respondents indicating that regulatory compliance is a barrier to ensuring security. To those folks, I say it really doesn’t need to be a barrier, not if you have good information risk management governance and processes. I don’t know of a single regulation that would force you to implement specific controls no matter what. Even if you are faced with an all-or-nothing regulation like PCI DSS, you can resort to compensating controls (see here and here for some coverage of PCI DSS compensating controls) to comply with a specific mandatory requirement. To repeat my argument in the previous paragraph, an auditor would be hard-pressed to fault you if you could clearly articulate that you went about the compliance program methodically, by performing a risk assessment and prioritizing (by risk level) the need for the specific controls required by the regulation. If you did that, you would be focusing on “ensuring security” rather than ignoring it for the sake of compliance.

Do we have a wake-up call in the OIG HHS Report on HIPAA Security Rule Compliance & Enforcement?

If you didn’t notice already, the Office of Inspector General (OIG) in the Department of Health and Human Services (HHS) published a report on the oversight by the Centers for Medicare and Medicaid Services (CMS) of the enforcement of the HIPAA Security Rule. The report is available to the public here. As we know, CMS was responsible for enforcement of the HIPAA Security Rule until the HHS Secretary transferred that responsibility to the Office for Civil Rights (OCR) back in 2009.

To quote from the report, the OIG conducted audits at seven covered entities (hospitals) in California, Georgia, Illinois, Massachusetts, Missouri, New York, and Texas in addition to an audit of CMS oversight and enforcement actions.  These audits focused primarily on the hospitals’ implementation of the following:

  • The wireless electronic communications network or security measures the security management staff implemented in its computerized information systems (technical safeguards);
  • The physical access to electronic information systems and the facilities in which they are housed (physical safeguards); and,
  • The policies and procedures developed and implemented for the security measures to protect the confidentiality, integrity, and availability of ePHI (administrative safeguards).

These audits were spread over three years (2008, 2009, and 2010), with the last couple of audits happening in March 2010. The report doesn’t mention the criteria by which these hospitals were selected for audit, except that the hospitals were not selected because they had suffered a breach of Protected Health Information (PHI).

It wouldn’t necessarily be wise to extrapolate the findings in the report to the larger healthcare space without knowing how these hospitals were selected for audit. All one can say is that the findings would paint a worrisome picture if the hospitals were truly selected at random. For example, looking at “High Impact” technical vulnerabilities, all 7 audited hospitals had vulnerabilities related to Access and Integrity Controls, 5 out of 7 had vulnerabilities related to Wireless and Audit Controls, and 4 out of 7 had vulnerabilities related to Authentication and Transmission Security Controls.


What might be particularly concerning is that the highest number of vulnerabilities were in the Access and Integrity Controls categories. These are typically the vulnerabilities most exploited by hackers, as evidenced (for instance) by this quote from the 2011 Verizon Data Breach Investigations Report: “The top three threat action categories were Hacking, Malware, and Social. The most common types of hacking actions used were the use of stolen login credentials, exploiting backdoors, and man-in-the-middle attacks”.

Wake-up call or not, healthcare entities should take a cue from these findings and look to implement robust security and privacy controls. A diligent effort should help protect organizations from the well-publicized consequences of a potential data breach.
