RisknCompliance Blog

Thoughts On Delivering Meaningful Outcomes in Security and Privacy


PCI Breaches – Can we at least detect them?

Almost all Payment Card Industry (PCI) breaches over the past year, including the most recent one at Supervalu, appear to have the following aspects in common:

1. They involved some compromise of Point of Sale (POS) systems.

2. The compromise and breaches continued for several weeks or months before being detected.

3. The breaches were detected not by the retailer but by some external entity – the FBI, the US Secret Service, a payment processor, the card brands, an issuing bank, etc.

4. At the time the breaches were disclosed, the retailers appear to have had passing PCI DSS certifications.

Anyone with a reasonable understanding of the current information security landscape knows that it is not a matter of “if” but “when” an organization will be compromised. Given this humbling reality, it only makes sense that we must be able to detect a compromise in a “timely” manner and, hopefully, contain the magnitude of the breach before it gets much worse.

Let’s consider the following aspects as well:

  1. PCI has more prescriptive regulations, in the form of PCI DSS and PA DSS, than perhaps any other industry. As a case in point, consider the equivalent regulations for Electronic Health Record systems (EHRs) in the United States – the EHR Certification regulation (the PA DSS equivalent) requirements highlighted yellow in this document and the Meaningful Use regulation (the PCI DSS equivalent) requirements highlighted green. You will see that the PCI regulations are a lot more comprehensive, both in breadth and depth.
  2. PCI DSS requires merchants and service providers to validate and document their compliance status every year. For the large retailers that have been in the news for the wrong reasons, this probably meant having an external Qualified Security Assessor (QSA) perform an on-site security assessment and provide them with a passing Report on Compliance (ROC) every year.
  3. As for the logging and monitoring requirements that should help detect a potential compromise, both PCI DSS (Requirement 10) and PA DSS (Requirement 4) are as detailed as they get in any security framework or regulation I am aware of (see the sketch after this list for a toy illustration).
  4. Even if you think Requirement 10 can’t help detect POS malware activity, there is PCI DSS Requirement 12.2, which requires a security risk assessment to be performed at least once a year. The risk assessment must consider current threats and vulnerabilities. Given the constant stream of breaches, one would think POS malware threats are accounted for in these risk assessments.
  5. These large merchants have been around for a while and are supposed to have been PCI DSS compliant for several years. One would therefore think they have appropriate technologies and processes to at least detect a security compromise of the scale of the breaches they have had.
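To make the detection point concrete, here is a minimal sketch of the kind of automated log review PCI DSS Requirement 10 contemplates. The log format, field names and threshold are all illustrative assumptions, not anything prescribed by the standard:

```python
import re
from collections import Counter

# Hypothetical log format: "2014-08-15T02:14:07 host=pos-17 user=svc_pos result=FAIL"
LINE = re.compile(r"(?P<ts>\S+) host=(?P<host>\S+) user=(?P<user>\S+) result=(?P<result>\S+)")

def failed_login_bursts(log_lines, threshold=10):
    """Count failed logins per (host, user) and flag any pair at or over the threshold."""
    failures = Counter()
    for line in log_lines:
        match = LINE.match(line)
        if match and match.group("result") == "FAIL":
            failures[(match.group("host"), match.group("user"))] += 1
    return {pair: count for pair, count in failures.items() if count >= threshold}

sample = ["2014-08-15T02:14:07 host=pos-17 user=svc_pos result=FAIL"] * 12
print(failed_login_bursts(sample))  # {('pos-17', 'svc_pos'): 12}
```

In practice a SIEM does this at scale, but the point stands: even a trivial failed-login burst check could surface the kind of sustained POS compromise described above, provided someone actually reviews the alerts.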

So, what do you think may be the reasons why the retailers or the PCI regulations are not effective in at least detecting the breaches? More importantly, what changes would you suggest, both to the regulations and also to how the retailers plan and execute their security programs? Or perhaps even to how the QSAs perform their assessments in providing passing ROCs to the retailers?

I’m keen to hear your thoughts and comments.

That Odd Authentication Dichotomy Needs To Change

By now, it should be clear that we need to consider strong (multi-factor) authentication for access to anything of value. In an age when most public email services (Gmail, Hotmail, Yahoo, etc.) provide for strong authentication, it seems inexplicable to allow access to corporate email, or remote access to your organization’s systems, with just basic user-ID-and-password authentication.

Think about this: your personal Hotmail account uses two-factor authentication, but your organization’s Office 365 email doesn’t. I am sure you agree that this odd dichotomy needs to change.

(Note: I am not suggesting the privacy of your personal email is any less important than the security of your corporate email. By dichotomy, I am referring to your organization not being at least as concerned about its security as you are about your personal privacy.)
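For the curious, here is a minimal sketch of what the second factor in most of these services actually computes: a time-based one-time password (TOTP, RFC 6238). The shared secret below is illustrative, not a real credential:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, digits=6, step=30):
    """Time-based one-time password per RFC 6238 (HMAC-SHA1, 30-second window)."""
    key = base64.b32decode(secret_b32)
    counter = int(time.time()) // step        # which 30-second window we are in
    message = struct.pack(">Q", counter)      # 8-byte big-endian counter
    digest = hmac.new(key, message, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                # dynamic truncation per RFC 4226
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# The secret here is illustrative only.
print(totp("JBSWY3DPEHPK3PXP"))
```

The server holds the same secret and accepts a small window of adjacent codes to tolerate clock drift; the passwords expire every 30 seconds, which is what makes a stolen code so much less valuable than a stolen password.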

And if your organization finds itself with no option but to continue with basic authentication, testing and studies of passwords like this one should inform making your password policies (truly) stronger. Don’t continue with a password standard established years ago (or based on some arbitrary best practice or external standard) that forces users into a complex combination of alphanumerics and symbols, makes them change passwords every 60 days, or stops them from reusing their last 6 or 24 passwords. You may only be making their user experience miserable without making your password security any stronger. Also, don’t forget to take a look at your password hashing, which we talked about here, as a case in point.
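As a rough illustration of why length tends to matter more than forced complexity, here is a back-of-the-envelope entropy estimate. It is a simplistic model that assumes independently chosen characters (which real users don’t produce), so treat the numbers as upper bounds:

```python
import math
import string

def naive_entropy_bits(password):
    """Upper-bound entropy estimate: length * log2(character pool size)."""
    pool = 0
    if any(c in string.ascii_lowercase for c in password):
        pool += 26
    if any(c in string.ascii_uppercase for c in password):
        pool += 26
    if any(c in string.digits for c in password):
        pool += 10
    if any(c in string.punctuation for c in password):
        pool += len(string.punctuation)
    return len(password) * math.log2(pool) if pool else 0.0

# A short "complex" password vs. a long, plain passphrase:
print(round(naive_entropy_bits("Tr0ub4d&r"), 1))                     # ~59 bits
print(round(naive_entropy_bits("correct horse battery staple"), 1))  # ~132 bits
```

The long lowercase passphrase comfortably beats the short symbol-laden string, despite satisfying none of the usual composition rules.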

Beware of Security Best Practices and Controls Frameworks

What could possibly be wrong with the “Best Practices” or “Leading Practices” your favorite security consultant might be talking about? Or, for that matter, how could we go wrong if we used the “leading” security standards or controls frameworks?

It is of course useful to have a benchmark of some sort to compare yourself against your peers. The problem comes up (as it so often does) when we start to take these so-called best practices and standards for granted. This often drives us to what I like to call template mindsets and approaches in security. More often than not in my experience, this leads to incorrect security decisions because we didn’t consider all the facts and circumstances that may be unique to each of our settings.

Let me explain with an example.

Let us say you are using a leading security framework such as the HITRUST CSF for healthcare. To take the example of password controls, Control Reference 01.d on Password Management has a fairly restrictive set of password controls even at Level 1, the HITRUST CSF’s baseline level. Level 1 includes requirements for password length, complexity, history (how the current password must differ from some number of previous ones) and so on. However, there is no Level 1 requirement around the standard to be used for hashing passwords. In fact, there is not a single mention of the words “password hash” or “salt” in the over 450 pages of the CSF framework, even in its latest 2014 version.

Now, if you are a seasoned and skilled security practitioner, you know that these Level 1 password controls are mostly meaningless if the password hashes are not strong enough and the password hash file is stolen by some means. It is fairly common for hackers to steal password hash files early and often in their campaigns; reported breaches at Evernote, LinkedIn and Adobe readily come to mind. Just yesterday, we learned of what appears to be a fairly unprecedented haul of stolen passwords.
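To make the point concrete, here is a minimal sketch of the kind of salted, deliberately slow password hashing (PBKDF2 from Python’s standard library, in this case) that one would expect a baseline control set to at least mention. The iteration count is illustrative and should track current guidance:

```python
import hashlib
import hmac
import os

ITERATIONS = 100000  # illustrative; tune to current guidance and hardware

def hash_password(password):
    """Derive a salted PBKDF2-HMAC-SHA256 hash; store the salt alongside the digest."""
    salt = os.urandom(16)  # unique random salt per password defeats rainbow tables
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify_password(password, salt, digest):
    """Recompute the hash and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("a long passphrase goes here")
print(verify_password("a long passphrase goes here", salt, digest))  # True
print(verify_password("wrong guess", salt, digest))                  # False
```

The salt and the high iteration count are what make a stolen hash file expensive to crack; unsalted, fast hashes (plain MD5 or SHA-1) are what turned the breaches above into mass password disclosures.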

So, if a consultant uses a so-called best-practice set of controls or one of the security controls frameworks to perform your risk assessment and doesn’t ask a question about password hashes (or some other control or vulnerability that truly matters), you should know the likely source of the problem. More than likely, they are simply going through the motions, asking questions from a controls checklist with little understanding of, or focus on, the threats and vulnerabilities that may be truly important in your setting or context. And as we know, any assessment without a clear and contextual consideration of real-world threats and vulnerabilities is not really a risk assessment. You may have spent a good amount of money on the consultant but probably do not have much to show for it in terms of the only metric that matters in an assessment – the number of “real” risks identified and their relative magnitudes – so you can make intelligent risk management decisions.

In closing, let us not allow ourselves to be blinded by so-called “Best Practices” and security controls frameworks. Meaningful security risk management requires us to look at the threats and vulnerabilities that are often unique to each of our environments and contexts. What was considered a best practice somewhere else, or a security framework put out by someone, may at best be a reference source to double-check that we didn’t miss anything. They should never be the sole source for our assessments, and certainly not the yardstick for our decision making in security.

I welcome your thoughts and comments.

Important notes and sources for reference

  • I used the HITRUST CSF only as an example. The idea discussed in this post applies to any set of best practices or security controls frameworks. After all, no set of best practices or controls frameworks, however good its “quality”, can keep up with the speed at which today’s security threats evolve and new vulnerabilities are discovered.
  • If you are interested in some really useful information on password hashing and password management, I strongly recommend this post. (Caution: it is not a quick read; allow yourself at least 30 minutes to read and absorb the details, especially if you are not an experienced security professional.)

How useful is the HHS OIG report published this week?

I am sure some of you saw this news report about HHS OIG finding some security-related deficiencies in the EHR certification program.

I was keen to read the full OIG report (pdf), which I got a chance to do this evening. I know HHS OIG does great work overall, but I must say I didn’t come away feeling very good about the quality or usefulness of this particular report, for the following couple of reasons:

  1. The report is really of an audit, performed in 2012, of the 2011 EHR certification program, which doesn’t even exist in 2014. What value is there in having ONC respond to this audit report in 2014? Shouldn’t OIG have sent the report to ONC soon after the 2012 audit, so it could have led to changes in the program while the program still existed? This OIG audit and report could have been a better use of taxpayer dollars had they been timely.
  2. I am not sure OIG has done a good job of substantiating why they don’t agree that the 2014 certification criteria address their concerns. They give the example of multi-factor authentication not being included in the 2014 criteria. While multi-factor authentication would obviously provide better security, does OIG think all access to EHRs must be protected by multi-factor? Or perhaps only remote access (meaning access from outside a trusted network, say a hospital facility)? Security in healthcare can’t come at the expense of the user experience of providers and clinicians; requiring multi-factor at all times would impact clinician productivity and hence patient care. Also, OIG should have known that multi-factor technologies are still not (or at least were not, when ONC finalized the 2014 criteria) at a point where they could serve as the mandatory baseline authentication mechanism in EHRs without compromising user experience. If I remember correctly, the HealthIT Standards Committee (HITSC) did consider two-factor authentication for inclusion in the 2014 criteria but decided to exclude it for “practicality” reasons. To sum up, I think OIG could have been more objective in its opinions on the 2014 criteria.

In closing, I am not sure what process or protocols OIG follows, but it appears this audit report could have had a better impact had it been more timely, objective and actionable.

From A Security Or Compliance Standpoint…

It is probably safe to say that we security professionals hear the phrase in the title of this post rather frequently. For one, I heard it again earlier today from an experienced professional presenting on a webinar… I believe it is a cliché.

I actually think the phrase conveys the view that we do some things for security’s sake and certain other things for compliance’s sake (but not necessarily for the sake of improving the security risk posture). Therein lies the problem, in my view. Why should we do anything for compliance if it doesn’t improve the security risk posture? By the way, I don’t think we should do security for security’s sake either; more on that below.

I don’t think there is any security regulation that requires you to implement a specific control no matter what the risk of not implementing it is. We can probably all agree that PCI DSS is the most prescriptive security regulation there is, and even it provides an option not to implement a specific control, if we can justify the decision by way of compensating controls. See below.

Note: Only companies that have undertaken a risk analysis and have legitimate technological or documented business constraints can consider the use of compensating controls to achieve compliance. (Source: Requirements and Security Assessment Procedures – PCI DSS version 3.0 (pdf))

I think all this can change if we insist on using the word “risk” (and meaning it) in all of our security and privacy conversations. The change can be hard to make because understanding and articulating risk is not easy; it is certainly much harder than rattling off a security control from a framework (NIST 800-53, HITRUST CSF, ISO 27002 et al.) or a regulation (PCI DSS, HIPAA Security Rule et al.). It requires a risk analysis, and a good risk analysis requires work and skills that can be hard to come by; we also need to watch for the pitfalls.

We may be better served by treating non-compliance as an option, with the risk of non-compliance treated as just another risk alongside others that are possibly more serious (loss of patient privacy, theft of intellectual property, identity theft, etc.). If we did that, we would be in a position to articulate (or at least be forced to articulate) why implementing a particular compliance requirement doesn’t make sense, because the risk of not implementing it is low enough given certain business/technical factors or compensating controls we may have.

Before I forget: just as it doesn’t make sense to do compliance for compliance’s sake, it doesn’t make sense to do security for security’s sake either. We often see this problem with organizations looking to get themselves a “security certification” of some sort, such as HITRUST CSF, ISO et al. In the quest for a passing certification score, you could find yourself implementing controls for the certification’s sake. There are certainly valid reasons to pursue certification, but one needs to be watchful not to end up doing security for security’s sake.

So, let us make a change in how we approach security and compliance by identifying and articulating “the risk”, always and every time. Perhaps we can make a start in that direction by not using the cliché in the title of this post. Instead, we might say something like “From a security or privacy risk standpoint…”.

 

A Second Look At Our Risk Assessments?

I came across this Akamai Security Blog post recently, which I thought was a useful and informative read overall. As I read through it, however, one phrase caught my attention: “The vendor considers the threat posed by the vulnerability”. That prompted me to write this post on the need for extreme due diligence in security risk assessments, and on the critical importance of engagement sponsors keeping assessment teams on their toes. (Note: just to be doubly clear, the objective here is not to pick on the Akamai post but to discuss certain key points about security risk assessments.)

When it comes to Security Risk Assessments (or Security Risk Analysis, if the HIPAA Security Rule is of any relevance to you), I believe terminology is extremely important. Those of us who have performed a “true” risk assessment know that the terms threat, vulnerability, likelihood, impact and risk mean specific things. In this specific Akamai post, I think the author used the word “threat” where “risk” was meant. While that may not be significant in the context of a blog post, using these terms inaccurately can make all the difference in the quality and usefulness of actual risk assessments. In my experience, more often than not, such misplaced terminology is a symptom of a lack of due diligence on the part of the person or team doing the assessment. Considering that risk assessments are so foundational to a security program, we strongly recommend addressing such red flags very early in a risk assessment engagement.

In fact, I would suggest that sponsors ask the following questions of the consultants or teams performing the risk assessment, as early in the engagement as is pertinent:

  • Have you identified the vulnerabilities accurately and do you have the evidence to back up your findings?
  • Have you identified all the relevant threats that can exploit each vulnerability?
  • How did you arrive at the likelihood estimation of each threat exploiting each vulnerability? Can you back up your estimation with real, known and published information or investigation reports on exploits over the recent past (say three years)? Did you consider the role of any and all compensating controls we may have in possible reduction of the likelihood estimates?
  • Does your Risk Ranking/Risk Statement clearly articulate the “real” risk (and not some imagined or assumed risk) to the organization, supported by the Likelihood and Impact statements? (A toy illustration of such a roll-up follows this list.)
  • When proposing risk mitigation recommendations, have you articulated the recommendations in actionable terms? By “Actionable”, we mean something that can be readily used to build a project plan to initiate the risk mitigation effort(s).
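As a toy illustration of how likelihood and impact estimates might roll up into a risk ranking, consider the sketch below. The 1–5 scales, thresholds and findings are assumptions for illustration, not a prescribed methodology:

```python
def risk_rank(likelihood, impact):
    """Toy 5x5 matrix: each input rated 1 (low) to 5 (high); score = product."""
    score = likelihood * impact
    if score >= 15:
        return score, "High"
    if score >= 8:
        return score, "Medium"
    return score, "Low"

findings = [
    # (finding, likelihood, impact) -- illustrative estimates only
    ("Weak password hashes; hash file reachable via SQL injection", 4, 5),
    ("Missing login banner on internal servers", 2, 1),
]

for name, likelihood, impact in findings:
    score, rank = risk_rank(likelihood, impact)
    print(f"{rank:<6} (score {score:2}) {name}")
```

The hard, valuable work is of course in defending the likelihood and impact inputs with real evidence, which is exactly what the questions above are probing for.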

If the answers to any of the above questions are negative or even tentative, the assessment may not be serving the organization’s risk management objectives. In my experience, most risk assessments turn out to be no more than control or gap assessments, which, quite frankly, don’t need to be conducted by often “highly paid” consultants.

A “true” risk assessment needs to be performed by a security practitioner or team with an inquisitive mind, depth and breadth of relevant security skill sets, and knowledge of the current threat and vulnerability environment.

You may also find the following posts from our blog relevant and useful:

Top 10 Pitfalls – Security or Privacy Risk Assessments
Compliance obligations need not stand in the way of better information security and risk management
Next time you do a Risk Assessment or Analysis, make sure you have Risk Intelligence on board

Please don’t hesitate to post your feedback or comments.

I like the fact that the HIPAA Security Rule is not prescriptive, except…

I think it makes sense for the HIPAA Security Rule (even in its latest form, from the Omnibus update) not to be prescriptive. For one, the Rule is meant to address HIPAA Covered Entities (CEs) and now (with the Omnibus update) Business Associates (BAs) that come in all shapes, sizes and sophistication levels (think single-provider practices versus large hospital systems, a one-person billing coder versus large payers or clearinghouses). The second reason is that this is, after all, a federal government regulation (as opposed to an industry regulation like PCI DSS), and we all know how laborious and time-consuming the federal rule-making process can be. Consider, for example, that the Omnibus Rule update to the HIPAA Security/Privacy Rules took more than four years after the relevant statute (the HITECH Act of 2009) was signed into law. If the HIPAA Security Rule were prescriptive (like PCI DSS, for example), it would need to be updated frequently to remain relevant in the constantly evolving environment of security threats and vulnerabilities. We know PCI DSS gets updated every three years or so, not to mention the constant stream of guidance the PCI SSC issues.

For all the reasons it makes sense for the HIPAA Security Rule to be non-prescriptive, I think it could use one prescriptive requirement: that all CEs and BAs maintain a current diagram of their PHI data flows. This is in fact a newly included requirement in the recently released PCI DSS 3.0 (pdf). Below is a screen capture of the new PCI DSS Requirement 1.1.3.

[Screen capture: PCI DSS v3.0 Requirement 1.1.3]

In my view, maintaining a current data flow diagram showing all locations where PHI is created, received, stored, processed or transmitted is foundational to healthcare security and privacy programs. After all, how can one implement appropriate safeguards if one doesn’t know what and where to safeguard? It is for this very reason that this requirement is the very first in our list of Top 10 Pitfalls in Security/Privacy Risk Assessments. The closest the HHS Office for Civil Rights (OCR) comes to addressing this is buried in the last statement of the audit procedure in the OCR Audit Protocol (see screen capture below), which says “Determine if the covered entity has identified all systems that contain, process, or transmit ePHI”. In my view, this procedure step is not good enough, because identifying systems is not the same as knowing all the PHI data flows.

[Screen capture: OCR Audit Protocol procedure]

In my experience, lack of knowledge of PHI data flows is a very common challenge among CEs and BAs regardless of their size or scale. The problem is especially acute when data leaves structured systems (EHRs, revenue cycle management applications, etc.) in unstructured form for one or more reasons. Unstructured PHI is extremely hard to track and safeguard, so it is important that organizations get a clear understanding of their PHI data flows and manage the flows closely. Any investment in a security or privacy program made without first understanding the data flows may not deliver the desired returns or help achieve the objective of safeguarding PHI or patient privacy.
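Even a lightweight, machine-readable inventory can be a useful first step toward that understanding. Here is a minimal sketch that models PHI flows as a directed graph and enumerates every location PHI touches; the system names and flows are hypothetical:

```python
# Hypothetical PHI data flows: source system -> destinations it sends PHI to.
phi_flows = {
    "EHR": ["Revenue Cycle Mgmt", "HIE Gateway", "Analytics Warehouse"],
    "Revenue Cycle Mgmt": ["Clearinghouse"],
    "Analytics Warehouse": ["Spreadsheet exports (unstructured)"],
}

def all_phi_locations(flows):
    """Every system that creates, receives, stores, or transmits PHI."""
    locations = set(flows)
    for destinations in flows.values():
        locations.update(destinations)
    return sorted(locations)

for location in all_phi_locations(phi_flows):
    print(location)
```

Enumerating the graph makes the unstructured spillover visible (the spreadsheet exports in this example) – exactly the flows that tend to escape the safeguards applied to the structured systems.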

I’ll be interested in hearing your feedback or opinions. What are your thoughts? What other prescriptive requirements would you like to see included in the HIPAA Security Rule?

Top 10 Pitfalls – Security or Privacy Risk Assessments

Risk assessment is a foundational requirement of an effective security or privacy program, and it needs to be the basis for every investment decision in information security or privacy. To that extent, we strongly recommend it as the very first thing organizations do when they set out to implement or improve a program. It is no surprise, then, that most regulations include it as a mandatory requirement (e.g. the HIPAA Security Rule, Meaningful Use Stages 1 and 2 for healthcare providers, PCI DSS 2.0). Yet we continue to see many organizations that do not perform it right, if they perform one at all; this is true at least in the healthcare sector that we focus on. They see it as just another compliance requirement and go through the motions.

So, we put together a list of the “Top 10 Pitfalls” related to risk assessments. We present them here and will expand on and discuss each of them in separate posts to follow.

    1. Performing risk analysis without knowing all the locations where the data you are looking to safeguard (PHI, PII, etc.) is created, received, stored, maintained or transmitted
    2. Approaching it with a compliance or audit mindset rather than a risk mindset
    3. Mistaking controls/gap assessment for risk analysis. Hint: Controls/Gap Assessment is but one of several steps in risk analysis.
    4. Focusing on methodologies and templates rather than outcomes; we discuss the idea here
    5. Not having a complete or holistic view of the threats and vulnerabilities and hence failing to articulate and estimate the likelihood adequately
    6. Not realizing that no security controls framework (e.g. NIST 800-53, HITRUST CSF etc.) is perfect and using the security controls in these frameworks without a sense of context in your environment
    7. Poor documentation – Reflects likely lack of due diligence and could lead to bad decision making or at the very least may not pass an audit
    8. Writing Remediation or Corrective Action Plans without specialist knowledge and experience in specific remediation areas
    9. Inadequate planning and lack of curiosity, investigative mindset or quality in engagement oversight
    10. Not engaging the right stakeholders or “owners” throughout the risk assessment process, especially in signing off on remediation recommendations or Corrective Action Plans

We’ll be delighted to hear your feedback and will look to perhaps even grow this list based on the feedback. After all, this is about being a good steward of the security or privacy program dollars and managing risks to our organizations, customers or partners.

Pay attention to Security Risk Analysis in Meaningful Use Attestation

As is well known, the Centers for Medicare & Medicaid Services (CMS) has been conducting pre- and post-payment audits of healthcare provider organizations attesting to Meaningful Use (MU). Our experience tells us that providers do not always exercise the necessary due diligence in meeting Stage 1 MU Core Objective #14 (Eligible Hospitals) and #15 (Eligible Professionals). In our view, and as supported by ONC’s 10 Step Plan for Meeting Privacy and Security Portions of Meaningful Use, the MU Security Risk Analysis needs to go well beyond assessing just the technical controls of an EHR system. We believe the risk analysis should also cover the people and process aspects of EHR operations, as well as how the EHR interfaces with other systems, organizations, people and processes.

As noted in a previous post, the College of Healthcare Information Management Executives (CHIME), a professional organization for chief information officers and other senior healthcare IT leaders, seemed to hold the view that the scope of the MU Security Risk Analysis should be limited. While we do not have complete insight into CHIME’s viewpoint, we believe providers have some work to do if they are to meet the requirement effectively. A robust security risk analysis is in any case the right thing to do every time there is a change in the health IT environment, and implementing an EHR should qualify as a major change in that regard. It is also a mandatory compliance obligation under the HIPAA Security Rule.

So, why not do the “right thing”? We highly recommend that providers avoid “checkbox compliance” tendencies when it comes to meeting MU Core Objective #14/15.

Can we change the tune on Health Information Security and Privacy please?

Notice the title doesn’t say HIPAA Security and Privacy. Nor does it have any of the words – HITECH, Omnibus Rule, Meaningful Use etc. That is the point of this post.

Let us start with a question. I am sure many of you, like me, routinely visit the blogosphere and social media sites (especially LinkedIn group discussions) to get a pulse of the happenings in information security and privacy. How often do you see posts or discussions about compliance, versus discussions focused squarely on risk – meaning risk to the organization or to patients if their health information were compromised by one means or another?

Compliance (the risk of non-compliance) is only one of the risks and, in our view, should not be the primary driver of any information security or privacy program. In fact, we often like to say that compliance should be a natural consequence of good risk management practices.

Having lived and watched health information security and privacy for nearly ten years, I am not surprised by this trend at all. Rather, I look forward to a day when we talk more about safeguarding the security and privacy of patient data and less about preparing for an OCR audit. I am not suggesting you shouldn’t worry about the latter. In fact, I’ll say that one will very likely not have to worry about the OCR, or any audit for that matter, if one’s real intent is to safeguard the security and privacy of patient information. The real intent and objective are extremely important because they shape our thinking and how we go about executing our efforts.

I think security and privacy programs in healthcare can be a lot more effective (and likely even more cost efficient) if they prioritize their objectives in the following order:

  • Patient Care and Safety – In most discussions of security, we tend to focus solely on the confidentiality of patient information and less on its integrity and availability. When we think of all three security components in equal measure, it is easier to appreciate how a security incident or breach could impact patient care and safety. With the increasing adoption of EHRs, many healthcare providers likely rely solely on electronic versions of patient records in one or more EHRs. A security incident or breach could leave a patient record unavailable to physicians who need to review the patient’s treatment history before providing urgent or emergency care. A breach could also compromise the integrity of the patient record itself, in which case physicians might misdiagnose the patient’s condition and not provide the right treatment. Such cases were probably unlikely in a world of paper records, but they are not inconceivable with electronic records, and they can result from both malicious and unintentional circumstances.
  • Patient Privacy and Loss of Trust – The impact of a healthcare privacy breach doesn’t need much discussion. The impacted individuals can face severe and lasting financial and reputational harm, which can make for a very painful experience. This in turn could cost the provider the valuable trust of its customers.
  • Business Risk – Healthcare businesses could face tort or class action lawsuits arising from either of the two previous scenarios. And then, of course, there is the possibility of patients turning to competitors, especially where they have access to multiple providers. In effect, healthcare organizations could face substantial losses to their bottom lines, and given the increasingly competitive nature of the industry, this could put their business sustainability at risk.
  • Risks of Non-Compliance – Finally, of course, there is the risk of non-compliance with industry or government regulations. Non-compliance could leave healthcare organizations facing considerable civil and possibly criminal fines, as well as recurring expenses from having to comply with OCR resolution agreements, for example. In most instances, however, the impact of non-compliance fines and expenses is temporary, lasting a few years. The impact of the previous three risks could be much more significant and longer lasting.

Until we think of security and privacy as central to patient care and safety and to the business and clinical culture, it is our view that many programs will falter and not deliver the intended results. The new era of digital healthcare requires healthcare organizations to treat security and privacy as a business and customer issue, not something to address only for compliance purposes.

In a following post, we’ll specifically discuss some examples of why thinking compliance first will not get us very far in managing health information security risks.
