RisknCompliance Blog

Thoughts On Delivering Meaningful Outcomes in Security and Privacy


Security is mostly basics, but talk is cheap

In most cases, better security posture is all about getting a few basics right. And this recent incident related to the breach of a Healthcare.gov server may be further proof of that.

Based on this article from csoonline, it appears the problem may have been that the “development server was poorly configured and used default credentials”.

At the same time, the article says that “the website undergoes quarterly security audits, as well as daily security scans and hacking exercises”. I am guessing then that the development server wasn’t included in the “hacking exercises” which I am assuming are penetration tests performed the way they should be.

Often, you might be OK not putting your development environment through a full pen test, especially when you are sure you have the security basics right, like not using default credentials and configurations, as in this case. However, when you are as “prominent” as Healthcare.gov is, for a number of reasons we all know, the elevated risk profile should require that the necessary due diligence be performed at least once upon installation or any major change.

Again, we don’t know all the details, but based on what is being reported, this incident adds to the proof that better security is mostly about basics. However, as we know from experience, basics don’t always mean easy, because there is this thing called execution, which many organizations are not effective at. As they say, talk is cheap.

Just to be sure, we are not necessarily referring to the need for money or funding (though that may be a problem for some organizations). Healthcare.gov is again a good example in this context because I doubt they have any problem with funding for security. Considering the hiccups they had during the initial months of their launch, I suspect they don’t want to be in the news for anything except to announce good enrollment numbers, let alone a security breach.

Executing the basics in security takes a high standard of professional due diligence by the individuals or teams involved in planning and running the security program. Implementing sophisticated technologies or hiring expensive consultants is not going to be very useful if the foundational aspects are not effective.


Image courtesy: lovethispic.com


  • I used healthcare.gov as an example since the incident was in the news this week. I think they are also a good example to illustrate the fact that the best of funding, technologies or consulting resources can still not assure that you will not have a security breach.
  • Regardless of the breach (which appears not to have been damaging since no personal information was taken), one must note the fact that they probably did a good job in noticing anomalies on a development server. Considering that many organizations can’t detect breaches in time or at all even in their production environments (see our posts here and here), one might think that the healthcare.gov team has probably done a better job. We’ll probably learn more details in the coming days but it appears the circumstances and the consequences weren’t too bad.


That Odd Authentication Dichotomy Needs To Change

By now, it should be clear that we need to consider strong (multi-factor) authentication for access to anything of value. In an age when most public email services (Gmail, Hotmail, Yahoo, etc.) provide strong authentication, it seems inexplicable to allow access to corporate email, or remote access to your organization’s systems, with just basic (user ID and password) authentication.

Think about this… Your personal Hotmail account uses two-factor authentication, but your organization’s Office 365 email doesn’t. I am sure you agree that this odd dichotomy needs to change.

(Note: I am not suggesting the privacy of your personal email is any less important than the security of your corporate email. By dichotomy, I am referring to your organization not being at least as much concerned about their security as you are concerned about your personal privacy)
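
As an aside on what strong authentication involves under the hood: most two-factor apps implement TOTP (RFC 6238), which HMACs a shared secret with the current 30-second time counter. Here is a minimal illustrative sketch, not production code; the secret is the RFC’s published test value:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, at_time=None, digits=6, step=30):
    """Derive a time-based one-time password (RFC 6238, HMAC-SHA1)."""
    key = base64.b32decode(secret_b32)
    # The moving factor is the number of 30-second steps since the Unix epoch
    counter = int((time.time() if at_time is None else at_time) // step)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation per RFC 4226
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

# RFC 6238's published test secret ("12345678901234567890" in Base32), at T = 59s
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", at_time=59))  # prints 287082
```

Both ends only ever need the shared secret and a reasonably accurate clock, which is why these schemes are cheap enough that even free consumer email services offer them.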

And if your organization does find itself in a situation where it has no option but to continue with basic authentication, testing and studies of passwords like this one should be considered when making your password policies (truly) stronger. Don’t continue with a password standard established years ago (or based on some arbitrary best practice or external standard) that forces users to have a complex combination of alphanumeric characters and symbols, change passwords every 60 days, or avoid reusing their last 6 or 24 passwords. You may only be making the user experience miserable without making your password security any stronger. Also, don’t forget to take a look at your password hashing, which we talked about here, as a case in point.
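
To illustrate why length usually buys more security than forced character-class complexity, here is a back-of-the-envelope entropy estimate. The pool-size model and the example passwords are simplifications of my own, not from any standard:

```python
import math

def entropy_bits(password):
    """Crude upper bound: length x log2(size of the character pool in use)."""
    pool = 0
    if any(c.islower() for c in password):
        pool += 26
    if any(c.isupper() for c in password):
        pool += 26
    if any(c.isdigit() for c in password):
        pool += 10
    if any(not c.isalnum() for c in password):
        pool += 33  # printable symbols plus space, roughly
    return len(password) * math.log2(pool) if pool else 0.0

print(round(entropy_bits("Tr0ub4d&r")))                     # short but "complex": ~59 bits
print(round(entropy_bits("correct horse battery staple")))  # long passphrase: ~165 bits
```

Even a crude estimate like this makes the point: a long, memorable passphrase with no forced rotation tends to beat a short “complex” password changed every 60 days.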

Can we change the tune on Health Information Security and Privacy please?

Notice the title doesn’t say HIPAA Security and Privacy. Nor does it have any of the words – HITECH, Omnibus Rule, Meaningful Use etc. That is the point of this post.

Let us start with a question…  I am sure many of you like me are routine visitors to the blogosphere and social media sites (especially LinkedIn group discussions) to get a pulse of the happenings in Information Security and Privacy. How often do you see posts or discussions around compliance versus discussions focused squarely on risk, meaning risk to the organization or to the patients if their health information was compromised by one or the other means?

Compliance (the risk of non-compliance) is only one of the risks and, in our view, should not be the primary driver for any Information Security or Privacy program. In fact, we often like to say that compliance should be a natural consequence of good risk management practices.

Having lived and watched Health Information Security and Privacy for nearly ten years, I am not surprised by this trend at all. Rather, I am looking forward to a day where we talk more about safeguarding the security and privacy of patient data and less about preparing for an OCR Audit. I am not suggesting that you shouldn’t worry about the latter. In fact, I’ll say that one will very likely not have to worry about the OCR or any audit for that matter if one’s real intent is to safeguard security and privacy of patient information. The real intent and objective are extremely important because they shape our thinking and how we go about executing our efforts.

I think Security and Privacy programs in Healthcare can be a lot more effective (and likely more cost efficient) if they were to prioritize their objectives in the following order:

  • Patient Care and Safety – In most discussions on security, we tend to focus solely on confidentiality of patient information and less so on integrity and availability of the information. When we begin to think of all three security components in equal measure, it is easier to appreciate how a security incident or breach could impact patient care and safety. With the increasing adoption of EHRs, it is very likely that many healthcare providers rely solely on electronic versions of patient records in one or more EHRs. A security incident or breach could leave a patient record not “available” to physicians who may need to review the patient’s treatment history before providing urgent or emergency care. In another scenario, a breach could compromise the integrity of the patient record itself, in which case physicians might misdiagnose the patient’s condition and not provide the right treatment. Such cases were probably unlikely in a world of paper records, but they are not inconceivable in a world of electronic records. These issues can result from both malicious and unintentional circumstances.
  • Patient Privacy and Loss of Trust – The impact of a healthcare privacy breach doesn’t need much discussion. The impacted individuals can face severe and lasting financial and reputational harm which can make for a very painful experience. This in turn could result in the provider losing the valuable trust of its customers. 
  • Business Risk – Healthcare businesses could face tort or class action lawsuits from either of the two previous scenarios. And then of course, there is the possibility of patients turning to competitors, especially where they have access to multiple providers. In effect, healthcare organizations could face substantial losses to their bottom lines, and given the increasingly competitive nature of the industry, this could put their business sustainability at risk.
  • Risks of Non-Compliance – Finally, of course, there is the risk of non-compliance with industry or government regulations. Non-compliance could leave healthcare organizations facing considerable civil and possibly criminal fines, as well as recurring expenses from having to comply with OCR resolution agreements, for example. In most instances, however, the impact of non-compliance fines and expenses is relatively temporary, lasting a few years. On the other hand, the impact of the previous three risks could be much more significant and longer lasting.

Until we think of security and privacy as being central to patient care/safety and the business/clinical culture, it is our view that many programs will likely falter and not deliver the intended results. The new era of digital healthcare requires healthcare organizations to think of security and privacy as a business or customer issue and not something that they need to address only for compliance purposes.

In a following post, we’ll specifically discuss some examples of why thinking compliance first will not get us very far in managing health information security risks.

CHIME On MU Audits… Looking For Thoughts/Feedback

I happened to read this article from InformationWeek Healthcare and was especially interested in this quote, reproduced below…

“CHIME also raised the issue of excessive auditing of providers in the Meaningful Use program, which can lead to auditors looking beyond attestation to Meaningful Use. Hickman cited auditors who, according to other CIOs, have pried into whether the use of certified EHRs to protect security complies with the latest HIPAA regulations.”

As we know, the Stage 1 Core objective requires that all providers conduct a security risk analysis of the EHRs (see here for a related post) and have at least an actionable plan to remediate the discovered deficiencies. To that extent, CMS has clarified that this Stage 1 core objective is something that providers should be doing anyway for compliance with the “Required” Security Risk Analysis Implementation Specification in the HIPAA Security Rule.

So, based on the issue raised by CHIME as quoted in the article, my question is this: are the MU auditors really looking for compliance with the HIPAA Security Rule, or are they looking for whether the risk analysis was performed as required by the MU core objective? Clearly, within the scope of the MU audits, they should be looking at the procedures and the results of the risk analysis performed before the end of the reporting periods. (See highlighted below and the full CMS document here.)


I think, though, that CHIME’s issue might be with the amount of detail and the procedures that the auditors may be looking for. In my opinion, a good Security Risk Analysis should evaluate the effectiveness of not just the technical controls implemented in the EHRs but also the related people and process controls. When you put the relevant controls in all three categories together, it is almost always the case that they should (minimally) cover most if not all of the standards and implementation specifications of the HIPAA Security Rule. The HIPAA Security Rule hasn’t changed in nearly ten years (except for making Business Associates directly responsible under the Omnibus Rule that went into effect in March 2013) and, in my opinion, sets a rather low bar for security in today’s world of evolving and advanced threats. So, a Security Risk Analysis of an EHR should be a lot more comprehensive in its range of controls than the standards and implementation specifications of the Security Rule (leaving out the risk analysis specification of the Security Rule, of course).

This leads me to speculate that CHIME’s issue with the auditors could be somewhat unfounded. I want to highlight “speculate” because I don’t quite know what the auditor(s) in question may have been looking at.

It will be interesting to hear feedback from folks that have some first-hand experience with the MU audits.

Compliance obligations need not stand in the way of better information security and risk management

I couldn’t help but write this post when I noticed this press release, based on an IDC Insights survey of Oil & Gas companies. I don’t have access to the full report, so I am basing my comments solely on the contents of the press release.

I found the following two findings (copied from the press release) to be of interest:

  • Security investments are not compliance driven. Only 10% of the respondents indicated that they are using regulatory compliance as a requirement to justify budgets.
  • Tough regulatory compliance and threat sophistication are the biggest barriers. Almost 25% of respondents indicated regulatory environment as a barrier to ensuring security. In addition, 20% of respondents acknowledged the increasing threat landscape.

The good news here is that only 10% of the respondents used regulatory compliance needs to justify budgets. What that tells me (I hope it is the case) is that the remaining 90% make budgetary decisions based solely on the information security risks their businesses face, and not on the risks of not complying with regulations or audits. I would commend them for it… and I don’t think any good auditor (regulatory, internal or external) would have a problem with it either, if the organization were able to “demonstrate” that the risk of not complying with a particular regulatory requirement was very low. Agreed, you still need to be able to “demonstrate” it, which isn’t easy if one hasn’t been diligent with risk assessments.

The not-so-good news to me is the 25% number (I realize it might be low enough for some people): the folks indicating that regulatory compliance is a barrier to ensuring security. For those folks, I say it really doesn’t need to be a barrier, not if you have good information risk management governance and processes. I don’t know of a single regulation that would force you to implement specific controls no matter what. Even if you are faced with an all-or-nothing regulation like PCI DSS, you can resort to using compensating controls (see here and here for some coverage of PCI DSS compensating controls) to comply with a specific mandatory requirement. To repeat my argument in the previous paragraph, an auditor would be hard-pressed to fault you if you were able to clearly articulate that you went about the compliance program methodically, by performing a risk assessment and prioritizing (by risk level) the need for the specific controls required by the regulation. If you did that, you would be focusing on “ensuring security” and not ignoring it for the sake of compliance.

Next time you do a Risk Assessment or Analysis, make sure you have Risk Intelligence on board

I was prompted to write this quick post this morning when I read this article.

I think it is a good example of what some (actually many, in my experience) risk management programs may be lacking: good Risk Intelligence. In this particular case, I think the original article failed to emphasize that vulnerabilities by themselves may not mean much unless there is a good likelihood of their being exploited, resulting in real risk. We discussed the quality of risk assessments in more detail in a previous post.

A good understanding of information risks and their prioritization needs to be the first and arguably the most important step in any information risk management program. Yet, we often see risk assessment initiatives not being done right or at the right quality. We think it is critical that a risk analysis or assessment is headed by someone or performed by a team that has or does the following:

  1. Has a very good understanding of your environment from people, process and technology perspectives
  2. Has very good, up-to-date intelligence on current threats (both internal and external) and is able to objectively define those threats
  3. Is able to clearly list and define the vulnerabilities in your environment; this will often require process or technology specialists to do well
  4. Is able to make an unbiased and objective determination of the likelihood that the vulnerabilities (from Step 3) can be exploited by one or more threats (from Step 2)
  5. Has a very good understanding of the impact to the business if each vulnerability were to be exploited by one or more threats. Impact is largely a function of the organization’s characteristics, including various business and technical factors, so it is important to involve your relevant business and technology Subject Matter Experts.
  6. Based on the likelihood (Step 4) and impact (Step 5), estimates risks and then ranks them by magnitude.
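
The steps above can be sketched as a simple likelihood-times-impact scoring exercise. This is only an illustration: the entries and the 1-5 scales below are made up, and real programs may use different scales or qualitative ratings:

```python
from dataclasses import dataclass

@dataclass
class Risk:
    threat: str
    vulnerability: str
    likelihood: int  # 1 (rare) .. 5 (almost certain), from Step 4
    impact: int      # 1 (negligible) .. 5 (severe), from Step 5

    @property
    def score(self) -> int:
        # Simple multiplicative risk magnitude
        return self.likelihood * self.impact

risks = [
    Risk("External attacker", "Default credentials on dev server", 4, 4),
    Risk("Insider misuse", "Excess access privileges", 3, 3),
    Risk("Malware", "Unpatched workstation", 4, 2),
]

# Step 6: rank by magnitude, highest risk first
for r in sorted(risks, key=lambda r: r.score, reverse=True):
    print(f"{r.score:>2}  {r.threat}: {r.vulnerability}")
```

The arithmetic is the trivial part; the Risk Intelligence lies in getting the likelihood and impact inputs right in the first place.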

We just can’t stress the importance of steps 1-5 enough. We think it takes “Risk Intelligence” to do these steps well. Without good Risk Intelligence on your team, you may well be wasting precious time, money and resources on your risk assessments.  More importantly, you may not be protecting your business to the extent that you should, with the same budget and resources.


Important Disclaimer

The guidance and content we provide in our blogs including this one is based on our experience and understanding of best practices. Readers must always exercise due diligence and obtain professional advice before applying the guidance within their environments.

Providers – Is HIPAA Security Risk Analysis in your plan over the next few months?

Security Risk Analysis is something we recommend all organizations conduct periodically, or before a significant process or technology change. After all, threats, vulnerabilities and impact (the three components of risk; see my other post here) often change or evolve over time, which means that risk analysis results can soon become outdated.

In the context of Healthcare, Security Risk Analysis is also mandatory for two reasons.

The first reason is that it is required for compliance with the HIPAA Security Rule which, by way of the HITECH Act, now applies to Business Associates in addition to Covered Entities. It is a “Required” Implementation Specification in the “Security Management Process” standard under the Administrative Safeguards of the HIPAA Security Rule, as highlighted in the table below.


The second (and more urgent) reason to conduct a Security Risk Analysis is that it is a core requirement for providers to achieve Meaningful Use certification of Electronic Health Records (EHRs) and thereby become eligible for Medicare/Medicaid incentives beginning April 2011 or risk Medicare reimbursement penalties beginning 2015 (see below).



Source: Center for Medicare & Medicaid Services (CMS)

So, it is important that providers plan on conducting a security risk analysis within the next few months, unless they have conducted one recently. If you have already implemented an EHR system, you will need to ensure that the risk analysis included the EHR system and the related processes or practice workflows. If you plan to implement an EHR system in the next few months, we would recommend conducting the risk analysis before the implementation, so that any discovered risks can be identified and mitigated by proper design of the system and the associated workflows or processes. Any change to the system or processes after implementation is going to be hard, not to mention the disruption to the practice and other costs.

The Final Guidance from OCR on Risk Analysis can be a useful reference in planning and conducting risk analysis efforts.

Finally, I would like to go back to what I said right at the beginning. We recommend that organizations focus on managing all information risks, not just the risk of non-compliance with regulations such as HIPAA. Therefore, it is critical that personnel performing the risk analysis are up-to-date on the current threat environment. Upon determining the threats, one must be able to clearly identify the organization’s vulnerabilities to those threats and then the impact resulting from any exploits, along with the various legal and compliance obligations including HIPAA. Last but not least, risk analysis must be conducted at appropriate intervals, and certainly whenever there is a significant change in processes or technologies.



You don’t know what you don’t know – Do we have a "detection" problem with the healthcare data breach numbers?

Like some of you perhaps, I have been reading a few recent articles on Healthcare data breaches, especially the one from Dark Reading and a detailed analysis of the 2010-to-date breaches from HITRUST Alliance.

What stood out for me from these articles is something that is not necessarily highlighted in the articles and that is the very low number of breaches involving technology/people/process controls as opposed to physical losses.

These articles focused on the 119 or so breaches that have been reported to the Department of Health and Human Services (HHS) or made public to date in 2010. From the HITRUST Alliance analysis, it is clear that an overwhelming majority of the breaches resulted from physical loss or theft of paper or electronic media, laptops etc. Only two breaches resulted from hacking incidents.

I then went back to do a little bit of my own analysis of the 2010 data breach incidents covered in the Identity Theft Resource Center report available here. I came up with the following numbers for breaches other than those that involved physical loss, theft, burglary, improper disposal etc.:

  • Malware infection – 1
  • Unauthorized access to file share – 1
  • Database misconfiguration or vulnerability – 2
  • Website vulnerability – 1
  • Improper access or misuse by internal personnel – 6

As you can see, these account for less than 10% of the healthcare breaches known or reported so far this year. Contrast this with the findings in the 2010 Verizon Data Breach Investigations Report, which attributes 38% of breaches to malware, 40% to hacking and 48% to misuse. It is pertinent to note that the Verizon report focused on 141 confirmed breaches from 2009 covering a variety of industries, but I think it is still good for a high-level comparison to determine whether we may be missing something in the healthcare breach data.

The comparison seems to suggest that the healthcare industry probably has much stronger safeguards against malware, hacking, improper logical access etc. I know from my own experience working with healthcare entities that this is not necessarily the case. For further corroboration, I reviewed two Ponemon Institute survey reports – Electronic Health Information at Risk: A Study of IT Practitioners and Are You Ready for HITECH? – A benchmark study of healthcare covered entities & business associates, both from Q4 2009. The following sample numbers from these reports further validate that the state of Information Security and Privacy among HIPAA Covered Entities (CEs) and Business Associates (BAs) is far from perfect:

Electronic Health Information at Risk: A Study of IT Practitioners (% of respondents saying “Yes”):

  • My organization’s senior management does not view privacy and data security as a top priority
  • My organization does not have ample resources to ensure privacy and data security requirements are met – 61% of respondents
  • My organization does not have adequate policies and procedures to protect health information
  • My organization does not take appropriate steps to comply with the requirements of HIPAA and other related healthcare regulations

Are You Ready for HITECH? – A benchmark study of healthcare covered entities & business associates (HIPAA compliance requirements that are not formally implemented; % of respondents saying “Yes”):

  • Risk-based assessment of PHI handling practices
  • Access governance and an access management policy
  • Staff training
  • Detailed risk analysis

All this leads me to think of the possibility that some HIPAA CEs and BAs may not be detecting potential breaches. If you study the healthcare breaches that have been reported so far, almost all of them came to light through physical losses of computers or media (which are easy to notice and detect) or through reporting by third parties (victims, law enforcement, someone finding improperly disposed PHI paper records in trash bins, etc.). I don’t know of any healthcare data breach this year that was detected through proactive monitoring of information systems.

As I covered in a related post on breach reports and what they tell us, I would recommend that CEs and BAs focus on certain key controls and related activities (see table below) in order to improve their breach prevention and detection capabilities:


Key Controls and Recommended Activities

Secure Configuration and Lockdown

  • Review the configuration of information systems (network devices, servers, applications, databases etc.) periodically and ensure that they are locked down from a security configuration standpoint

Web Application Security

  • Scan web applications periodically for OWASP Top 10 vulnerabilities and fix any discovered vulnerabilities
  • For new applications under development, perform code reviews and/or vulnerability scans to fix any security vulnerabilities before the applications are put to production use (studies show that it is far more cost effective to fix vulnerabilities before applications go to production than after)
  • Use Web Application Firewalls as appropriate

Strong Access Credentials

  • Configure PHI systems and applications to have a strong password policy (complexity of the password, periodic change of password etc.)
  • Implement multi-factor authentication on PHI systems and applications wherever possible
  (Note: According to the 2010 Verizon Data Breach Investigations Report, stolen access credentials lead to the largest number of breaches from hacking incidents)

Access Assurance or Governance

  • Conduct access certifications periodically, preferably at least every quarter, for PHI systems and applications
  • Review access privileges within PHI systems and applications to ensure all access conforms to the “Least Privilege” principle; in other words, no user, application or service must have any more privileges than what is required for the job function or role
  • If any excess privileges are found, they must be remediated promptly
  • Revoke access to PHI systems and applications promptly in the event that a person leaves the organization or no longer requires access due to a change in the person’s job role within the organization

Logging, Monitoring and Reporting

  • Identify “risky” events within PHI systems
  • Configure the systems to generate logs for the identified events
  • Tamper-proof the logs
  • Implement appropriate technologies and/or processes for monitoring of the events (refer to our related posts here and here for examples)
  • High-risk events must be identified and monitored through near-real-time alerts
  • Responsibilities for daily review of log reports and alerts must be assigned to specific personnel

Encryption (Data at Rest, Media) and Physical Security of Media

  • Maintain an inventory of the locations and systems wherever PHI exists
  • Implement suitable encryption of PHI on laptops and removable media
  • Implement appropriate physical security safeguards to prevent theft of devices or systems containing PHI

Security Incident Response

  • Implement and operationalize an effective Security Incident Response program, including clear assignment of responsibilities, response steps/workflows etc.
  • Test the incident response process periodically as required

Security Awareness and Training

  • Implement a formal security awareness and training program so the workforce is aware of their responsibilities, security/privacy best practices and actions to take in the event of suspected incidents
  • Require personnel to go through security awareness and/or training periodically as appropriate
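
As a concrete illustration of the “Logging, Monitoring and Reporting” control above, here is a toy sketch that flags accounts with repeated failed logins inside a short window. The event tuples, threshold and window are made-up stand-ins for whatever your audit logs and alerting rules actually use:

```python
from collections import defaultdict
from datetime import datetime, timedelta

THRESHOLD = 5                   # failures that trigger an alert
WINDOW = timedelta(minutes=10)  # within this sliding window

# Stand-ins for parsed audit-log entries: (timestamp, user, outcome)
events = [
    (datetime(2010, 9, 1, 9, 0) + timedelta(minutes=i), "jdoe", "FAILURE")
    for i in range(6)
] + [(datetime(2010, 9, 1, 9, 30), "asmith", "FAILURE")]

def risky_accounts(events):
    """Return users with >= THRESHOLD login failures inside any WINDOW."""
    by_user = defaultdict(list)
    for ts, user, outcome in events:
        if outcome == "FAILURE":
            by_user[user].append(ts)
    flagged = set()
    for user, times in by_user.items():
        times.sort()
        for start in times:
            # count failures falling within WINDOW of this starting failure
            run = [t for t in times if start <= t < start + WINDOW]
            if len(run) >= THRESHOLD:
                flagged.add(user)
                break
    return flagged

print(risky_accounts(events))  # jdoe trips the threshold; asmith does not
```

A real deployment would of course feed this kind of rule from centralized, tamper-proofed logs and route the output to near-real-time alerting rather than a print statement.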

If you are familiar with the HIPAA Security Rule, you will notice that not all of the above controls are “Required” (as opposed to “Addressable”) under the HIPAA Security Rule or in the proposed amendments to the rule under the HITECH Act. One may argue, however, that the above controls must be identified as required based on “risk analysis”, which of course is a required implementation specification in the HIPAA Security Rule. In any event, CEs and BAs need to look beyond the HIPAA compliance risk and focus on the risk to their business or brand reputation if a breach were to occur.

Hope this is useful! As always, we welcome your thoughts and comments.

RisknCompliance Services Note

We at RisknCompliance maintain an up-to-date database of current security threats and vulnerabilities at a detailed level. We are able to leverage this knowledge to provide our clients with high-quality risk analysis.

Please contact us here if you would like to discuss your HIPAA security or privacy needs. We will be glad to talk to you about how we could be of assistance.

May we suggest some priority adjustments to your PCI DSS Compliance program?

It isn’t news that achieving PCI DSS compliance continues to be onerous for many merchants. PCI DSS is, after all, an all-or-nothing regulation, meaning that failing even one of over 200 requirements could prevent you from getting there. And then, if you do become compliant, there is really no assurance of 100% security. This is something we have known all along to be true for any regulation, and now we have one more statistic from the 2010 Verizon Data Breach Investigations Report to prove it: 21% of organizations facing payment card data breaches were compliant with PCI DSS at the time of the breach.

So, maybe it is time to rethink our approach to PCI DSS compliance, in terms of how we get there by addressing controls that carry higher breach risks before the others. That will at least help improve your organization’s security posture against potential breaches even if you are nowhere close to meeting all PCI DSS requirements. I think recent breach surveys and reports are a great source for identifying such controls, with the objective of prioritizing remediation initiatives in the right order. Such prioritization should help in achieving a better security posture sooner, as we’ll see below.

I am not the first to suggest a prioritized approach to achieving PCI DSS compliance. In fact, PCI SSC already has guidance on this, though the guidance itself is somewhat dated, having been issued back in February 2009. Since then, the threat environment has evolved, and exploitation of certain vulnerabilities isn’t quite of the same order relative to others. Therefore, I suggest leveraging data breach findings to make the necessary prioritization adjustments.
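
One way to operationalize such an adjustment, sketched below with percentages drawn from the 2010 Verizon findings discussed in this post (the control groupings themselves are my own illustrative labels, not an official PCI SSC mapping), is simply to sort control families by how often they figure in reported breaches and remediate in that order:

```python
# Illustrative only: weight each control family by how often it shows up
# as a factor in breach reports, then remediate highest-weight first.
breach_factor_pct = {
    "Web application security": 92,                 # web apps as attack pathway
    "SQL injection defenses": 89,
    "Stolen credentials / strong authentication": 86,
    "Malware defenses / egress filtering": 94,
    "Log monitoring and review": 86,                # evidence sat unread in logs
}

remediation_order = sorted(breach_factor_pct, key=breach_factor_pct.get, reverse=True)
for control in remediation_order:
    print(f"{breach_factor_pct[control]:>3}%  {control}")
```

The exact weights matter less than the discipline of revisiting them whenever new breach data comes out, so the prioritization tracks the current threat environment rather than a 2009 snapshot.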

Here are some findings from three recent reports on which I am basing my recommendations, along with the relevant controls from our analysis:

1. Verizon Data Breach Investigations Report 2010

Findings:

· 61% of the breaches were discovered by a third party

· 86% of victims had evidence of the breach in their log files

Relevant Controls (Our Analysis):

· Technology – Monitoring, correlation, reporting and alerting off the log events

· Process – Regular reviews of logs, log reports or alerts

· People – Clear definition and assignment of responsibilities around log reviews and incident response

2. Verizon Data Breach Investigations Report 2010

Findings:

· 94% of breached records had malware as one of the causes, and 96% involved hacking

· 51% of malware was installed or injected remotely by the attacker (by obtaining privileged access to the system or by other means such as SQL Injection)

· 85% of records breached by malware involved the attacker gaining backdoor access to the system

· 81% of records breached by malware involved data being sent to an external entity or site

· 86% of records breached by hacking involved use of stolen login credentials

· 89% of records breached by hacking involved SQL Injection

· 92% of records breached by hacking used web applications as the attack pathway

Relevant Controls (Our Analysis):

· Technology – Proper configuration and lockdown of systems, strong access credentials, access controls or assurance, assessment of web applications and remediation of OWASP Top 10 vulnerabilities, deployment of Web Application Firewalls, logging/monitoring/reporting/alerting of important events on critical systems

· Process – Configuration reviews, OWASP Top 10 vulnerability management, access assurance in the form of ongoing role/privilege management processes and periodic access certifications, regular reviews of logs, log reports or alerts, effective security incident response

· People – Clear definition and assignment of responsibilities around configuration reviews, access certifications, log reviews and incident response

3. Verizon Data Breach Investigations Report 2010

Findings:

· More than 50% of breaches remained undiscovered for months or more

· 61% of the breaches were discovered by third parties, not by the victim organization itself

Relevant Controls (Our Analysis):

· Technology – Monitoring, correlation, reporting and alerting off the log events

· Process – Regular reviews of logs, log reports or alerts

· People – Clear definition and assignment of responsibilities around log reviews and incident response, user awareness and training

4. Verizon Data Breach Investigations Report 2010

Findings:

· Few breaches were caused by exploitation of vulnerabilities for which a patch was available

· The likelihood of exploitation of an unpatched vulnerability is far lower than that of a vulnerability caused by a configuration issue

Relevant Controls (Our Analysis):

· Lockdown (secure configuration) of systems may receive higher priority than application of vendor patches unless there is a specific reason not to do so

5. Leaking Vault – Five Years of Data Breaches – July 2010

Findings:

· Drives/media and hacking were the top two breach vectors

· Documents and fraud (social engineering) have recently been increasing in prominence as breach vectors

· Of the breaches that involved hacking, SQL Injection, stolen credentials and malware accounted for most

Relevant Controls (Our Analysis):

· Technology – Disk/tape encryption, appropriate system lockdown to prevent use of media such as USB drives, encryption of unstructured data (documents); refer to the controls in #2 against hacking

· Process – Physical security, encryption and key management

· People – Awareness and training

6. Ponemon Institute – Annual Cost of Cybercrime Study – July 2010

Findings:

· The most costly cyber crimes are those caused by web attacks, malicious code and malicious insiders, which together account for more than 90 percent of all cyber crime costs per organization on an annual basis

· The average time to resolve a cyber attack was 14 days, at an average cost to the organization of $17,696 per day; the survey revealed that malicious insider attacks can take 42 days or more to resolve

Relevant Controls (Our Analysis):

· Refer to #2 above

Here then is a summary of the key controls from the table above, the relevant PCI DSS requirements and the priorities assigned by the PCI SSC guidance.

Key Control (Our Analysis) – Relevant PCI DSS Requirement Numbers (see note below):

· Secure Configuration and Lockdown – 1.1.5 (2), 1.2 (2), 2.1 (2), 2.2.3 (3), 2.2.4 (3), 2.3 (2)

· Web Application Security – 6.5 (3)

· Strong Access Credentials, including periodic changes of credentials (e.g. passwords) – 8 (4)

· Access Assurance (least-privilege access based on users’ business or job roles, timely revocation of access privileges) – 7 (4), 12.2 (6), 12.5.4 (6), 12.5.5 (6)

· Logging, Monitoring and Reporting – 10.1 (4), 10.2 (4), 10.3 (4), 10.4 (4), 10.5 (4), 10.5 (6), 10.7 (4), 12.2 (6), 12.5.2 (6)

· Encryption (data at rest, media), Physical Security of Media – 3.3 (5), 3.4 (5), 3.5 (5), 9.5 (5), 9.6 (5), 9.7 (5), 9.8 (5), 9.9 (5)

· Security Incident Response – 12.5.3 (6), 12.9 (6)

· Security Awareness and Training – 12.3 (6), 12.3.10 (6), 12.4 (6), 12.6 (6)

Note: The numbers in brackets are the priority numbers from the PCI SSC guidance, which range from 1 through 6; a lower number indicates a higher priority.

As we can see from the table, there are several requirements which, if addressed sooner, would actually improve an organization’s security posture against potential breaches, based on what we know from recent breach studies. I would recommend raising the priority of the requirements mapped to the key controls above to at least 3, if not 2. I do realize that organizations may not be able to afford to address too many requirements at a higher priority. If that is the case, you may want to review the current priority 2 and 3 requirements against the key controls in the table above and push some of them lower down the priority order as applicable.
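To make this re-prioritization concrete, here is a minimal sketch in Python. The handful of requirements, their PCI SSC priorities and the breach-relevance flags below are illustrative assumptions drawn loosely from the tables above, not a complete or official mapping:

```python
# Sketch: re-ranking PCI DSS requirements by breach-report relevance.
# The requirement list, PCI SSC priorities and breach_relevant flags are
# illustrative assumptions for this post, not official PCI SSC data.

# (requirement, PCI SSC priority) pairs, lower number = higher priority
requirements = {
    "2.2.3 Secure configuration": 3,
    "6.5 Web application security": 3,
    "8 Strong access credentials": 4,
    "10.2 Audit logging": 4,
    "12.9 Incident response": 6,
    "12.6 Awareness and training": 6,
}

# Requirements tied to the key controls seen in recent breach reports
breach_relevant = {
    "2.2.3 Secure configuration",
    "6.5 Web application security",
    "8 Strong access credentials",
    "10.2 Audit logging",
    "12.9 Incident response",
}

def adjusted_priority(req: str, ssc_priority: int) -> int:
    """Raise breach-relevant requirements to at least priority 3."""
    if req in breach_relevant:
        return min(ssc_priority, 3)
    return ssc_priority

# Work the remediation plan in adjusted-priority order
plan = sorted(requirements, key=lambda r: adjusted_priority(r, requirements[r]))
for req in plan:
    print(adjusted_priority(req, requirements[req]), req)
```

The same idea works just as well in a spreadsheet; the point is simply that the adjustment is a mechanical pass over the requirement list once the breach-relevant controls have been identified.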

Hope this is useful! As always, we welcome your thoughts and comments.

RisknCompliance Services Note

We at RisknCompliance track about a dozen such reports every year and maintain an up-to-date database of current security threats and vulnerabilities at a detailed level. We leverage this knowledge to provide our clients with a much-needed third-party assessment of their risk management or audit methodologies and programs. After all, security risk assessments and audits form the very foundation of risk management and audit programs, so we believe it is critical that every organization fine-tunes its methodologies and knowledge base.

Please contact us here if you would like to discuss your needs. We will be glad to discuss the details and how we could be of assistance.

Verizon 2010 Data Breach Investigations Report – Key takeaways for Security Assessors and Auditors

The Verizon 2010 Data Breach Investigations Report (DBIR) released last week has some interesting findings, just as it did last year. What makes it special this year is that Verizon partnered with the United States Secret Service in developing the report. I don’t intend to discuss all the statistics in this post (I will do so in an upcoming one), but as the report explains, the Secret Service’s involvement has thrown new light on some of the findings.

My intention here is to highlight the significance of such a report to security and audit practitioners, with the objective of improving the quality of their risk assessments or audits and, more importantly, helping them make the right recommendations to management. From my experience as a security practitioner and an occasional auditor, I can say that we do not always use all the available information to improve the quality of our risk assessments or audits. Reports such as the Verizon DBIR can provide valuable help from that standpoint.

Let me explain what I mean. Deliverables for any risk assessment or audit typically include a list of findings, and for each finding we provide an explanation of the risk, the risk severity (High, Medium, Low) and suitable recommendations for mitigation or remediation. Management would then remediate the gaps in priority order based on our risk rankings. Considering that risk is a product of likelihood and impact (I like the OWASP risk rating methodology, so I will use it here), it is important that we get both the impact and the likelihood right. Impact is largely a function of the organization’s characteristics, including the various technical and business factors seen in the methodology. Likelihood, on the other hand, is a function of threats and vulnerabilities. I think the DBIR can be a useful reference in estimating likelihood.

For example, the DBIR says that external agents were responsible for about 78% of the breaches, whereas about 48% were caused by insiders. These numbers can be used to arrive at a more objective estimate of the likelihood that these threat agents will cause harm. Similarly, the DBIR says that 48% of the breaches involved privilege misuse, 40% resulted from hacking and 38% utilized malware. These numbers can be used for objective estimation of the likelihood that the associated vulnerabilities will be exploited. The OWASP methodology includes an illustration of such objective risk estimation.

These are but a couple of examples. The DBIR has a wealth of information that can be useful to auditors and security practitioners alike, both in improving the quality of their work and in helping them defend their risk rankings. Risk rankings almost always carry a level of subjectivity, but reports like the DBIR can be leveraged to make them as objective as possible. A good example is the risk level one might assign to an unpatched vulnerability versus a configuration issue. It may not be readily obvious that one should be assigned a higher risk level than the other until you read the DBIR, which tells us that the likelihood of exploitation of an unpatched vulnerability is far lower than that of a vulnerability caused by a configuration issue. If we didn’t leverage the DBIR (and assuming both issues had equal impact), we might assign equal risk levels to both findings or, worse, assign the unpatched vulnerability the higher risk level.
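As a sketch of how breach statistics could feed an OWASP-style rating, consider the Python example below. The percentage-to-likelihood mapping and the sample percentages are our own illustrative assumptions; the 0–9 bands and the overall-severity matrix follow the OWASP Risk Rating Methodology:

```python
# Sketch: feeding breach-report statistics into an OWASP-style risk rating.
# The percentage-to-likelihood mapping and the sample percentages are
# illustrative assumptions, not figures taken from the DBIR or OWASP.

def likelihood_score(breach_pct: float) -> float:
    """Map a breach-report percentage (0-100) onto OWASP's 0-9 likelihood scale."""
    return breach_pct / 100 * 9

def band(score: float) -> str:
    """OWASP 0-9 levels: below 3 is LOW, 3 to below 6 is MEDIUM, 6-9 is HIGH."""
    if score < 3:
        return "LOW"
    if score < 6:
        return "MEDIUM"
    return "HIGH"

# OWASP overall-severity matrix: (likelihood band, impact band) -> severity
SEVERITY = {
    ("LOW", "LOW"): "Note",       ("LOW", "MEDIUM"): "Low",       ("LOW", "HIGH"): "Medium",
    ("MEDIUM", "LOW"): "Low",     ("MEDIUM", "MEDIUM"): "Medium", ("MEDIUM", "HIGH"): "High",
    ("HIGH", "LOW"): "Medium",    ("HIGH", "MEDIUM"): "High",     ("HIGH", "HIGH"): "Critical",
}

def rate(breach_pct: float, impact: float) -> str:
    """Overall severity from a breach-data likelihood and a 0-9 impact score."""
    return SEVERITY[(band(likelihood_score(breach_pct)), band(impact))]

# Two findings with equal impact (6 of 9) but very different breach-data likelihoods:
print(rate(70, 6))  # configuration issue, widely exploited in breach data
print(rate(5, 6))   # unpatched vulnerability, rarely exploited in breach data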

Over the next couple of weeks, I plan to post a detailed commentary on some of the findings in the report, including a special post on how the report can be leveraged to enhance the effectiveness of PCI DSS programs.

Hope this is useful! As always, we welcome your thoughts and comments.

RisknCompliance Services Note

We at RisknCompliance track about a dozen such reports every year and maintain an up-to-date database of current security threats and vulnerabilities at a detailed level. We leverage this knowledge to provide our clients with a much-needed third-party assessment of their risk management or audit methodologies and programs. After all, security risk assessments and audits form the very foundation of risk management and audit programs, so we believe it is critical that every organization fine-tunes its methodologies and knowledge base.

Please contact us here if you would like to discuss your needs. We will be glad to discuss the details and how we might be of assistance.
