RisknCompliance Blog

Thoughts On Delivering Meaningful Outcomes in Security and Privacy


Is your auditor or consultant anything like the OPM OIG?

The OPM breach has been deservedly in the news for over a month now.   Much has been written and said about it across the mainstream media and the internet1.

I want to focus here on a topic that hasn’t necessarily been discussed in public, perhaps not at all: could the OIG (and their audit reports) have done more, or done things differently, year after year of issuing these reports? Specifically, how could these audit reports have drawn some urgently needed attention to the higher risks and perhaps helped prevent the breach?

Let us look at the OIG’s latest Federal Information Security Management Act (FISMA) audit report, issued in November 2014 (pdf), as a case in point. The report runs over 60 pages and looks to have been a reasonably good effort in meeting its objective of covering the general state of compliance with FISMA. However, I am not sure the report is of much use for “real world” risk management purposes at an agency that had known organizational constraints in the availability of appropriate security people and resources; an agency that should have had some urgency in implementing certain safeguards on at least the one or two critical systems, given the nature and quantity of sensitive information they held.

We have talked about this problem before, advising caution against “fixation” on compliance or controls frameworks at the expense of priority risk management needs. In the case of this particular report, the OIG should have discussed the risks as well (and not just the findings or gaps) and provided some actionable prioritization for accomplishing quick wins in risk mitigation. For example, recommendation #21 on page 25 could have called out the need for urgency in implementing multi-factor authentication (or appropriate compensating controls) for the one or two “high risk” systems that held the upwards of 20 million sensitive records we now know were breached.

I also believe providing a list of findings in the Executive Summary (on page 2) was a wasted opportunity. Instead of providing a list of compliance or controls gaps,  the summary should have included specific call-to-action statements by articulating the higher risks and providing actionable recommendations for what the OPM could have done over the following months in a prioritized fashion.


Here then are my recommended takeaways:

1. If you are an auditor performing an audit or a consultant performing a security assessment,  you might want to emphasize “real” risks,  as opposed to compliance or controls gaps that may be merely academic in many cases. Recognize that evaluation and articulation of risks require a more complete understanding of the business, technology and regulatory circumstances as compared to what you might need to know if you were merely writing gaps against certain controls or compliance requirements.

2. Consider the organizational realities or constraints and think about creative options for risk management. Always recommend feasible quick-wins in risk mitigation and actionable prioritization of longer term tasks.

3. Do not hesitate to bring in or engage with specialists if you aren’t sure you can evaluate or articulate risks and recommend mitigation tasks well enough. Engage with relevant stakeholders that would be responsible for risk mitigation to make sure they are realistically able to implement your recommendations, at least the ones that you recommend for implementation before your next audit or assessment.

In closing, I would strongly emphasize a focus on meaningful risk management outcomes, not just producing reports or deliverables. A great looking deliverable that doesn’t convey the relative levels of real risks and the urgency of mitigating certain higher risks is not going to serve any meaningful purpose.

References for additional reading

1.  At the time of this writing,  I found these two links to be useful reading for substantive information on the OPM breach.

Information about OPM Cybersecurity Incidents

“EPIC” fail—how OPM hackers tapped the mother lode of espionage data

2.  You may also be interested in a quick read of our recommendations for agile approaches to security/privacy/compliance risk assessments or management.  A pdf of our slide deck will be emailed to you after a quick registration here.

This is how the #AnthemHack could have been stopped, perhaps

It has been just over a week since the #AnthemHack was made public.

Over this period, the mainstream media and many bloggers and commentators, as usual, have been all over it. Many have resorted to some not-so-well-thought-out statements (at least in my opinion, as well as that of a couple of others1), such as the claim that “encryption” could have prevented it. Some have even faulted HIPAA for not mandating encryption of data at rest2.

Amidst all this, I believe there has been some good reporting as well, albeit very little of it. I am going to point to a couple of articles by Steve Ragan at CSOOnline.com, here and here.

I provide an analysis here of perhaps how Anthem could have detected and stopped the breach before the data was exfiltrated. This is based on the assumption that the information published in Steve Ragan’s articles is accurate.

Let’s start with some known information then:

  1. “Anthem, based on data posted to LinkedIn and job listings, uses TeraData for data warehousing, which is a robust platform that’s able to work with a number of enterprise applications”. Quoted from here.
  2. “According to a memo from Anthem to its clients, the earliest signs of questionable database activity date back to December 10, 2014”. Quoted from here.
  3. “On January 27, 2015, an Anthem associate, a database administrator, discovered suspicious activity – a database query running using the associate’s logon information. He had not initiated the query and immediately stopped the query and alerted Anthem’s Information Security department. It was also discovered the logon information for additional database administrators had been compromised.” Quoted from the same article as above.

I went over to the Teradata site and downloaded the Security Administration guide for Release 13 of the Teradata Database (download link), an older version dated November 2009. I am assuming Anthem is using Release 13 or later and so isn’t missing the features I am looking at.

Database logging can sometimes be challenging, and depending on the features available in your database, the logging configuration can generate a lot of noise. This, in turn, may make it difficult to detect events of interest. I wanted to make sure there weren’t such issues in this case.

It turns out Teradata is fairly good in its logging capabilities. Based on the highlighted content below, it appears one should be able to configure log generation specifically for a DBA performing a SELECT query on a table containing sensitive data.

[Image: TD-Logging – highlighted excerpt from the Teradata Security Administration guide]

There should not ordinarily be a reason for a DBA to query for sensitive data, so this should have been identified as a high risk alert use-case by their Logging and Monitoring program.

I am assuming Anthem also has a Security Information and Event Management (SIEM) solution that they use for security event monitoring. Even a garden variety SIEM solution should be able to collect these logs and raise an immediate alert considering the “high risk” nature of a DBA trying to query for sensitive data.
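
To make that concrete, here is a minimal sketch (in Python, purely for illustration) of the kind of correlation rule such a SIEM could apply to parsed database access logs. The field names, the sensitive-table list and the DBA account list are assumptions made for this example, not Anthem’s or Teradata’s actual configuration.

```python
# Illustrative only: a simplified correlation rule of the kind a SIEM could
# apply to parsed database access logs. The field names, the sensitive-table
# list and the DBA account list below are assumptions made for this sketch.

SENSITIVE_TABLES = {"member_db.member_pii"}   # tables holding sensitive records
DBA_ACCOUNTS = {"dba_jsmith", "dba_mlee"}     # privileged database accounts


def is_high_risk(event):
    """Flag a privileged (DBA) account running a SELECT against a sensitive table."""
    return (
        event.get("statement_type") == "SELECT"
        and event.get("username") in DBA_ACCOUNTS
        and event.get("object_name") in SENSITIVE_TABLES
    )


def process(events):
    for event in events:
        if is_high_risk(event):
            # In a real deployment this would notify the accountable owner
            # (e.g. the database manager), not just print to the console.
            print("HIGH RISK ALERT: {username} ran SELECT on {object_name} "
                  "at {timestamp}".format(**event))


# Example with a single parsed log record:
process([{
    "timestamp": "2014-12-10T03:12:45Z",
    "username": "dba_jsmith",
    "statement_type": "SELECT",
    "object_name": "member_db.member_pii",
}])
```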

This alert should have gone to someone who is accountable or responsible for security incident response. It appears that didn’t happen. This is symptomatic of a lack of “ownership” and “accountability” culture, in my view. For a case of this nature, I strongly recommend that the IT owner (e.g. Manager or Director of the Database Team) be on point to receive such alerts involving sensitive data. Your Security Operations folks may not necessarily know the context of the query and therefore its high risk nature. I talked about this in a guest post last month. See the last bullet point in that post.

As quoted at #3 above, it appears one of the DBAs discovered someone using his/her credentials and running that query. You certainly don’t want to leave it to the DBA to monitor his or her own actions. If this had been a malicious DBA, we might be talking about a major breach caused by an insider and not an Advanced Persistent Threat (APT) actor, as the Anthem breach appears to be. But then, I digress.

If the high risk anomalous DBA activity had been discovered immediately through the alert and if appropriate incident response steps had been initiated, it is possible that Anthem may have been able to stop the breach before the APT actor took the data out of the Anthem network.

[Image: #AnthemHack tweet]

So, when we come to think of it, some simple due diligence steps in establishing security and governance practices might have helped avoid a lot of pain for Anthem, not to mention a lifetime of hurt to the people and families impacted by the breach.

Here then are some take-aways if you would like to review your security program and want to make some changes:

  1. You may not need that shiny object. As explained above, an average SIEM solution can raise such an alert. We certainly don’t need a “big data analytics” solution costing hundreds of thousands or millions of dollars.
  2. Clarity in objectives3. Define your needs and use cases before you ever think of a tool or technology. Even if we had a fancy technology, it would be of no use if we didn’t identify the high risk use-case for monitoring the DBA activity and implement an alert for it.
  3. Process definition and speed of incident response. People and process aspects are just as important as (if not more important than) the technology itself. Unfortunately, we have too many instances of expensive technologies not being effective because we didn’t design and implement the associated people/process workflows for security monitoring and timely incident response.
  4. Ownership and accountability. I talked about this topic last month with a fair amount of detail and examples. While our Security Operations teams have their job to do in specific cases, I believe that the IT leadership must be “accountable” for security of the data collected, received, processed, stored or transmitted by their respective systems. In the absence of such an accountability and ownership culture, our security monitoring and response programs will likely not be effective.
  5. Focus on quick wins. If we look at our environments with inquisitive eyes and ears, most of us will likely identify quick wins for risk reduction. By quick wins, I am referring to actions for reducing higher risk levels that we can accomplish in weeks rather than months, without deploying a lot of resources. Not all of our risk management action plans have to necessarily be driven by formal projects. In the context of this Anthem example, it should be a quick win to implement an alert and have the database manager begin to watch for these alerts.
  6. Don’t accept pedestrian risk assessments and management. If you go back and look at your last risk assessment involving a sensitive database, for example, what risks were identified? What were the recommendations? Were these recommendations actionable or some “template” statements? Did the recommendations identify quick-win risk reduction opportunities? Did you follow through to implement the quick wins? In other words, the quality of a risk assessment must be determined primarily by the risk reduction opportunities you were able to identify and the outcomes you were able to accomplish within a reasonable period of time. The quality of paper deliverables, methodology etc. is not nearly as important, which is not to say that they don’t matter at all.
  7. Stay away from heavyweight security frameworks. We talked about this last year. I’ll probably have more to say about it in another post. Using #AnthemHack as an example, I plan to illustrate how a particular leading security framework wouldn’t be very helpful. In fact, I believe that using heavyweight security frameworks can be detrimental to most security programs. They consume a lot of time and precious resources, not to mention diverting focus away from accomplishing the risk reduction outcomes that truly matter.
  8. Effective governance and leadership. Last but not least, the need for leadership and governance should come as no surprise. None of the previous items on this list can be truly accomplished without an emphasis on governance and leadership starting right at the board level and across the executive leadership.

I hope the analysis and recommendations are useful to you.

Remember, while the techniques employed by APT actors may be advanced and persistent, the vulnerabilities they exploit are often there only because we didn’t do some basic things right or perhaps we made it too hard and complicated on ourselves to do it right.

References for additional reading

1 Why even strong crypto wouldn’t protect SSNs exposed in Anthem breach , Steven M. Bellovin

http://arstechnica.com/security/2015/02/why-even-strong-crypto-wouldnt-protect-ssns-exposed-in-anthem-breach

Even if Anthem Had Encrypted, It Probably Wouldn’t Have Helped, Rich Mogull

https://securosis.com/blog/even-if-anthem-encrypted-it-probably-wouldnt-have-mattered

 

2 I like the fact that the HIPAA Security Rule is not prescriptive, except… , Kamal Govindaswamy

http://rnc2.com/blog/regulatory-compliance/hipaahhitech/like-fact-hipaa-security-rule-prescriptive-except/

 

3 Security Analytics Lessons Learned — and Ignored!, Anton Chuvakin

http://blogs.gartner.com/anton-chuvakin/2015/02/09/security-analytics-lessons-learned-and-ignored/

Patient Portals – Make or Break

Like many other Health IT initiatives today, the primary driver for patient portals is regulatory in nature. Specifically, it is the Meaningful Use requirements related to view,  download or transmit and secure messaging. However, the biggest long term benefit of the portals might be what they can do for patient engagement and as a result, to the providers’ business in the increasingly competitive and complex healthcare marketplace in the United States.

The objective of this post is to discuss the security aspects of patient portals, specifically, why the current practices in implementing these portals could pose a big problem for many providers. More importantly, we’ll discuss specific recommendations for due diligence actions that the providers should take immediately as well as in the longer term.

Before we get to discuss the security aspects, I think it is important to “set the stage” by discussing some background on patient portals. Accordingly, this post covers the following areas in the indicated sequence:

1. What are patient portals and what features do they (or could) provide?

2. Importance of patient engagement and the role of patient portals in patient engagement

3. The problem with the current state in Health IT and hence the risks that the portals bring

4. Why relying on regulations or vendors is a recipe for certain failure

5. What can/should we do (right now and in the future) –  Our recommendations

1. What are Patient Portals and what features do they (or could) provide?

I would draw on information from the ONC site for this section. Here is the pertinent content, to quote from the ONC site:

A patient portal is a secure online website that gives patients convenient 24-hour access to personal health information from anywhere with an Internet connection. Using a secure username and password, patients can view health information such as:

• Recent doctor visits

• Discharge summaries

• Medications

• Immunizations

• Allergies

• Lab results

Some patient portals also allow patients to:

• Exchange secure e-mail with their health care teams

• Request prescription refills

• Schedule non-urgent appointments

• Check benefits and coverage

• Update contact information

• Make payments

• Download and complete forms

• View educational materials

The bottom-line is that patient portals provide a means for patients to access or post sensitive health or payment information. In the future, their use could expand further to include integration with mobile health applications (mHealth) and wearables. Last week’s news from EPIC should provide a sense of things to come.

2. Importance of patient engagement and the role of patient portals in patient engagement

As we said above, the primary driver for patient portals so far has been the Meaningful Use requirements related to view,  download or transmit and secure messaging. However, the biggest long term benefit of the portals might be what they can do for patient engagement and becoming a key business enabler for providers.

The portals are indeed a leading way for providers to engage with patients, as can be seen in this graphic from the 2014 Healthcare IT Priorities published by InformationWeek1.

[Image: graphic from the 2014 Healthcare IT Priorities report]

Effective patient engagement of course can bring tremendous business benefits, efficiencies and competitive edge to providers.

From a patient’s perspective, the portals can offer an easier method for interacting with their providers, which in turn has its own benefits for patients. To quote from the recently released HIMSS report titled “The State of Patient Engagement and Health IT”2:

A patient’s greater engagement in health care contributes to improved health outcomes, and information technologies can support engagement.

In essence, the importance of patient portals as a strategic business and technology solution for healthcare providers doesn’t need too much emphasis.

3. The problem with the current state in Health IT and hence the risks that the portals bring

In my view, the below quote from the cover page of the 2014 Healthcare IT Priorities Report published by InformationWeek1 pretty much sums it up for this section.

Regulatory requirements have gone from high priority to the only priority for healthcare IT.

We all know what happens when security or privacy programs are built and operated purely to meet regulatory or compliance objectives. It is shaky ground at best. We talked about it in one of our blog posts last year when we called for a change in tone and approaches to healthcare security and privacy.

4. Why relying on regulations or vendors is a recipe for certain failure of your security program

It is probably safe to say that security in design and implementation is not the uppermost concern for Health IT vendors today (certainly not for the patient portal vendors, in my opinion). To make it easy for them, we have lackluster security/privacy requirements in the regulation for certifying Electronic Health Records.

Consider the security and privacy requirements (highlighted yellow in this pdf) that vendors have to meet in order to obtain EHR certification today. You will see that the certification criteria are not nearly enough to assure that the products are indeed secure before the vendors can start selling them to providers. And the administration and enforcement of the EHR certification program has been lacking in the past as well.

If you consider a risk relevant set of controls such as the Critical Security Controls for Effective Cyber Defense, you will see that the certification criteria are missing the following key requirements in order to be effective in today’s security threat landscape:

· Vulnerability assessments and remediation

· Application Security Testing (Including Static and Dynamic Testing)

· Security training for developers

· Penetration tests and remediation

· Strong Authentication

Think about using these applications to run your patient portals!

If you are a diligent provider, you will want to make sure that the vendor has met the above requirements even though the certification criteria do not include them. The reality though may be different. In my experience, providers often do not perform all the necessary due diligence before purchasing the products.

And then, when providers implement these products and attest to Meaningful Use, they are expected to do a security risk analysis (see the green highlighted requirement in the pdf). Again in my experience, a risk analysis is not performed in all cases, and of those that are performed, many are not really risk assessments.

The bottom-line? … Many providers take huge risks in going live with their patient portals that are neither secure nor compliant (Not compliant because they didn’t perform a true risk analysis and mitigate the risks appropriately).

If you look again (in section 1 above) at the types of information patient portals handle, it is not far-fetched to say that many providers may have security breaches waiting to happen. It is even possible that some have already been breached but don’t know it yet.

Considering that patient portals are often gateways to the more lucrative (from a hacker’s standpoint) EHR systems, intruders may be taking their time to escalate their privileges and move laterally to other systems once they have a foothold in the patient portals. Considering that detecting intrusions is very often the Achilles heel of even the most well-funded and sophisticated organizations, this should be a cause for concern at many providers.

5. What can/should we do (right now and in the future) –  Our recommendations

It is time to talk about what really matters and some tangible next steps …

What can or must we do immediately and in the future?

Below are our recommendations for immediate action:

a) If you didn’t do the due diligence during procurement of your patient portal product, you may want to ask the vendor for the following:

· Application security testing (static and dynamic) and remediation reports of the product version you are using

· Penetration testing results and remediation status

· If the portal doesn’t provide risk based strong (or adaptive) authentication for patient and provider access, you may insist on the vendor committing to include that as a feature in the next release.

b) If you didn’t perform a true security risk analysis (assessment), please perform one immediately. Watch out for the pitfalls as you plan and perform the risk assessment. Make sure the risk assessment includes (among other things) running your own vulnerability scans and penetration tests both at network and application layers.

c) Make sure you have a prioritized plan to mitigate the discovered risks and of course, follow through and execute the plan in a timely manner.

Once you get through the immediate action steps above, we recommend the below action items to be included in your security program for the longer term:

a) Implement appropriate due diligence security measures in your procurement process or vendor management program.

b) Have your patient portal vendor provide you the detailed test results of the security requirements (highlighted yellow in the attachment) from the EHR certifying body. You may like to start here (Excel download from ONC) for information on the current certification status of your patient portal vendors and the requirements they are certified for.

c) Ask the vendor for application security (static and dynamic) and pen test results for every release.

d) Segment the patient portal appropriately from the rest of your environment (also a foundational prerequisite for PCI DSS scope reduction if you are processing payments with credit/debit cards).

e) Perform your own external/internal pen tests every year and scans every quarter (Note: If you are processing payments with your patient portal, the portal is likely in scope for PCI DSS. PCI DSS requires you to do this anyway).

f) Conduct security risk assessments every year or upon a major change (This also happens to be a compliance requirement from three different regulations that will apply to the patient portal –  HIPAA Security Rule, Meaningful Use and PCI DSS, if processing payments using credit/debit cards).

g) If you use any open source modules either by yourself or within the vendor product,  make sure to apply timely patches on them as they are released.

h) Make sure all open source binaries are security tested before they are used to begin with.

i) If the vendor can’t provide support for strong authentication, look at your own options for providing risk based authentication to consumers. In the meantime, review your password processes (including password creation, reset steps, etc.) to make sure they are not too burdensome on consumers and yet are secure enough.

j) Another recommended option is to allow users to authenticate using an external identity (e.g. Google, Facebook, etc. using OpenID Connect or similar mechanisms), which may actually be preferable from the user’s standpoint as they don’t have to remember a separate log-in credential for access to the portal. Just make sure to strongly recommend that they use the two-step verification that almost all of these sites provide today (see the sketch after this list).

k) Implement robust logging and monitoring in the patient portal environment (Hint: Logging and monitoring is not necessarily about implementing just a “fancy” technology solution. There is more to it, which we’ll cover in a future post.)
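
As a rough illustration of the external-identity option in item (j), the sketch below constructs the initial OpenID Connect authorization request (authorization code flow). The endpoint, client ID and redirect URI are placeholders made up for the example; in practice you would rely on a vetted OIDC client library and validate the returned ID token rather than hand-rolling the flow.

```python
from urllib.parse import urlencode

# Placeholder values for illustration only -- substitute your identity
# provider's real authorization endpoint and your registered client details.
AUTHORIZE_ENDPOINT = "https://idp.example.com/oauth2/authorize"

params = {
    "client_id": "YOUR_CLIENT_ID",
    "redirect_uri": "https://portal.example-hospital.org/oidc/callback",
    "response_type": "code",           # authorization code flow
    "scope": "openid email profile",   # "openid" is what makes this OIDC
    "state": "opaque-anti-csrf-value", # ties the response back to this request
}

login_url = AUTHORIZE_ENDPOINT + "?" + urlencode(params)
print(login_url)  # redirect the patient's browser here to start the login
```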

In summary, there is just too much at stake for providers and patients alike to let the status quo of security in patient portals continue the way it is. We recommend all providers take priority action considering the lasting and serious business consequences that could result from a potential breach.

As always, we welcome your thoughts, comments or critique. Please post them below.

References

1 2014 Healthcare IT Priorities published by InformationWeek
2 The State of Patient Engagement and Health IT(pdf download)

 Recommended further reading

Patient engagement – The holy grail of meaningful use
Patient portal mandate triggers anxiety
MU Stage 2 sparks patient portal market

Hello PCI SSC… Can we rethink?

This is a detailed follow-up to the quick post I wrote the Friday before the Labor Day weekend,  based on my read at the time of the PCI SSC’s Special Interest Group paper on “Best practices for maintaining PCI DSS compliance”1 published just the day before.

The best practices guidance is by and large a good one, though nothing it discusses is necessarily new or groundbreaking. The bottom line of the paper is the reality that any person or organization with electronic information of some value (and who doesn’t have that today?) needs to face: there is no substitute for constant and appropriate security vigilance in today’s digital world.

That said, I am not sure this guidance (or anything else PCI SSC has done so far with PCI DSS, including the new version 3 taking effect at the end of the year) is going to result in the change we need: a change in how PCI organizations are able to prevent, or at least detect and contain, the damage caused by security breaches in their cardholder data environments (CDEs). After all, we have had more PCI breaches (both in number and scale) over the past year than at any other time since PCI DSS has been in effect.

One is then naturally forced to question why or how PCI SSC expects a different result if PCI DSS itself hasn’t changed fundamentally over the years. I believe a person no less famous than Albert Einstein had something to say about doing the same thing over and over again and expecting different results.

If you have had anything to do with the PCI DSS over the last several years, you are probably very familiar with the criticism it has received from time to time.  For the record, I think PCI DSS has been a good thing for the industry and it isn’t too hard to recognize that security in PCI could be much worse without the DSS.

At the same time, it is also not hard to see that PCI DSS hasn’t fundamentally changed in its philosophy and approach since its inception in 2006 while the security threat environment itself has evolved drastically both in its nature and scale over this period.

The objective of this post is to offer some suggestions for how to make PCI DSS more effective and meaningful for the amount of money and overheads that merchants and service providers are having to spend on it year after year.

Suggestion #1 : Call for Requirement Zero

I am glad the best practices guidance1 highlights the need for a risk based PCI DSS program. It is also pertinent to note that risk assessment is included as a milestone 1 item in the Prioritized Approach tool2 though I doubt many organizations use the suggested prioritization.

In my opinion, however, you are not emphasizing the need for a risk based program if your risk assessment requirement is buried inconspicuously under requirement #12 of the 12 requirements (12.2, to be specific). If we are to direct merchants and service providers to execute a risk based PCI DSS program, I believe the best way to do it is by making risk assessment the very first thing they do, soon after identifying and finalizing the CDE they want to live with.

As such, I recommend introducing a new Requirement Zero to include the following :

  • Identify the current CDE and try to reduce the CDE footprint to the extent possible
  • Update the inventory of system components in the CDE (Current requirement 2.4)
  • Prepare CDE Network diagram (Current requirement 1.1.2) and CHD flow diagram (Current requirement 1.1.3). I consider this to be a critical step. After all, we can only safeguard something valuable if we know it exists. We also talked about how the HIPAA Security Rule could use this requirement in a different post.
  • Conduct a Risk Assessment (Current requirement 12.2)

Performing a risk assessment right at the beginning will provide the means for organizations to evaluate how far they need to go with implementing each of the 200+ requirements. In many cases, they may have to go well over the letter of certain requirements and truly address the intent and spirit of the requirements in order to reduce the estimated risk to acceptable levels.

Performing the risk assessment will also (hopefully) force organizations to consider the current and evolving threats and mitigate the risks posed by these threats. Without the risk assessment being performed upfront, one will naturally fall into the template security mindset we discussed here. As discussed in the post, template approaches are likely to drive a security program to failure down the road (or at least make it ineffective).

Suggestion #2 : Discontinue all (requirements) or nothing approach

A true risk management program must mean that organizations have a choice not to implement a control if they can clearly articulate that the risk associated with not implementing it is truly low.

I think PCI DSS has a fundamental contradiction in its philosophy of pushing an all-or-nothing regulation while advocating a risk based approach at the same time. In an ideal world where organizations have limitless resources and time at their disposal, they could perhaps fully meet every one of the 200+ requirements while also addressing the present and evolving risks. As we know, however, the real world is far from ideal, in that organizations are almost always faced with constraints all around, certainly in the amount of resources and time at their disposal.

Making this change (away from the all-or-nothing approach) will of course mean a foundational change in PCI DSS’ philosophy and in how the whole program is administered by PCI SSC and the card brands. Regardless, this change is too important to be ignored considering the realities of business challenges and the security landscape.

Suggestion #3 : Compensating controls

As anyone that has dealt with PCI DSS knows, documentation of compensating controls is one of the most onerous aspects of PCI DSS, so much so that you are sometimes better off implementing the original control than having to document and justify the “validity” of the compensating control to your QSA. No wonder then, that a book on PCI DSS compliance actually had a whole chapter on the “art of compensating control”.

The need for compensating controls should be based on the risk to the cardholder data, not merely on the fact that the original requirement was not implemented. This should be a no-brainer if PCI SSC really wants PCI DSS to be risk based.

If the risk associated with not implementing a control is low enough, organizations should have a choice of not implementing a compensating control or at least not implementing it to the extent that the DSS currently expects the organization to.

Suggestion #4 : Reducing compliance burden and fatigue

As is well known, PCI DSS requires substantial annual efforts and related expenses. If the assessments involve Qualified Security Assessors (QSAs), the overheads are much higher than with self-assessments. Despite such onerous efforts and overheads, even some of the more prominent retailers and well-funded organizations can’t detect their own breaches.

The reality is that most PCI organizations have limited budgets to spend on security let alone on compliance with PCI DSS. Forcing these organizations to divert much of their security funding to repeated annual compliance efforts simply doesn’t make any business or practical sense, especially considering the big question of whether these annual compliance efforts really help improve the ability of organizations to do better against breaches.

I would like to suggest the following changes for reducing compliance burden so that organizations can spend more of their security budgets on initiatives and activities that can truly reduce the risk of breaches:

  • The full set of 200+ requirements or controls may be in scope for compliance assessments (internal or by a QSA) only during the first year of the three-year PCI DSS update cycle. Remember that organizations may still choose not to implement certain controls based on the results of the risk assessment (see suggestion #2 above).
  • For the remaining two years, organizations may be required to perform only a risk assessment and implement appropriate changes in their environment to address the increased risk levels. Risk assessments must be performed appropriately and with the right level of due diligence. The assessment must include (among other things) review of certain key information obtained through firewall reviews (requirement 1.1.7), application security testing (requirement 6.6), access reviews (requirement 7), vulnerability scans (11.2) and penetration tests (11.3).

 Suggestion #5 : Redundant (or less relevant) controls

PCI SSC may look at reviewing the value of certain control requirements considering that newer requirements added in subsequent versions could reduce the usefulness or relevance of those controls or perhaps even make them redundant.

For example, the PCI DSS v3 requirement around penetration testing has changed considerably compared to the previous version. If an organization performs its penetration tests appropriately, there should not be much need for requirement 2.1, especially the rather elaborate testing procedures highlighted in the figure below.

[Image: PCI DSS requirement 2.1 and its testing procedures]

There are several other requirements or controls as well that perhaps fall into the same category of being less useful or even redundant.

Such redundancies should help make the case for deprecating or consolidating certain requirements. They also help make the case for moving away from the all-or-nothing approach or philosophy we discussed under suggestion #2.

 Suggestion #6 : Reduce Documentation Requirements

PCI DSS in general requires fairly extensive documentation at all levels. We already talked about it when we discussed the topic of compensating controls above.

Documentation is certainly useful and indeed strongly recommended in certain areas especially where it helps with communication and better enforcement of security controls that help in risk reduction.

On the other hand, documentation done purely for compliance purposes should be avoidable, especially if it doesn’t help improve security safeguards to any appreciable extent.

…………………………………………………

That was perhaps a longer post than some of us are used to,  especially on a blog. These are the suggestions that I can readily think of. I’ll be keen to hear any other suggestions you may have yourself or perhaps even comments or critique of my thoughts.

References

1Best Practices for Maintaining PCI DSS Compliance (pdf)  by Special Interest Group, PCI Security Standards Council (SSC)
2PCI DSS Prioritized Approach (xls download)

PCI Breaches – Can we at least detect them?

Almost all Payment Card Industry (PCI) breaches over the past year, including the most recent one at Supervalu, appear to have the following aspects in common:

1. They involved some compromise of Point of Sale (POS) systems.

2. The compromise and breaches continued for several weeks or months before being detected.

3. The breaches were detected not by the retailer but by some external entity – FBI, the US Secret Service, Payment processor, card brands, issuing bank etc.

4. At the time the breach was disclosed, the retailers appear to have had a passing PCI DSS certification.

Anyone that has a reasonable understanding of the current Information Security landscape should know that it is not a matter of “if” but “when” an organization will get compromised. Given this humbling reality, it only makes sense that we must be able to detect a compromise in a “timely” manner and hopefully contain the magnitude of the breach before it gets much worse.

Let’s consider the following aspects as well:

  1. PCI has more prescriptive regulations, in the form of PCI DSS and PA DSS, than almost any other industry. As a case in point, consider the equivalent regulations for Electronic Health Records systems (EHRs) in the United States – the EHR Certification regulation (PA DSS equivalent) requirements highlighted yellow in this document and the Meaningful Use regulation (PCI DSS equivalent) requirements highlighted green. You will see that the PCI regulations are a lot more comprehensive both in breadth and depth.
  2. PCI DSS requires merchants and service providers to validate and document their compliance status every year. For the large retailers that have been in the news for the wrong reasons, this probably meant having an external Qualified Security Assessor (QSA) perform an on-site security assessment and provide them with a passing Report on Compliance (ROC) every year.
  3. As for logging and monitoring requirements that should help with detection of a potential compromise, both PCI DSS (Requirement 10) and PA DSS (Requirement 4) are as detailed as they get in any security framework or regulation I am aware of.
  4. Even if you think requirement #10 can’t help detect POS malware activity, there is PCI DSS requirement 12.2, which requires a security risk assessment to be performed at least once a year. The risk assessment must consider the current threats and vulnerabilities. Given the constant stream of breaches, one would think that the POS malware threats are accounted for in these risk assessments.
  5. These large merchants have been around for a while and are supposed to have been PCI DSS compliant for several years. And so, one would think they have appropriate technologies and processes to at least detect a security compromise that results in the scale of breaches they have had.

So, what do you think may be the reasons why the retailers or the PCI regulations are not effective in at least detecting the breaches? More importantly, what changes would you suggest, both to the regulations and also to how the retailers plan and execute their security programs? Or perhaps even to how the QSAs perform their assessments in providing passing ROCs to the retailers?

I’m keen to hear your thoughts and comments.

Beware of Security Best Practices and Controls Frameworks

What could be possibly wrong with “Best Practices” or “Leading Practices” that your favorite security consultant might be talking about? Or for that matter, how could we go wrong if we used the “leading” security standards or controls frameworks?

It is of course useful to have a benchmark of some sort to compare yourself against your peers. The problem comes up (as it so often does) when we start to take these so-called best practices and standards for granted. This often drives us to a state of what I like to call template mindsets and approaches in security. More often than not in my experience, this leads us to make incorrect security decisions because we didn’t consider all the facts and circumstances that may be unique to each of our settings.

Let me explain with an example.

Let us say that you are using a leading security framework such as the HITRUST CSF for healthcare. To take the example of password controls, Control Reference 01.d on Password Management has a fairly restrictive set of password controls even at Level 1, which is HITRUST CSF’s baseline level of controls. Level 1 includes requirements for password length, complexity, uniqueness of the current password compared to a certain number of previous passwords, and so on. However, there is no requirement in Level 1 around the standard to be used for hashing passwords. In fact, there is not a single mention of the words “password hash” or “salt” in over 450 pages of the CSF framework, even in its latest 2014 version.

Now, if you are a seasoned and skilled security practitioner, you should know that these Level 1 password controls are mostly meaningless if the password hashes are not strong enough and the password hash file is stolen by some means. It is fairly common for hackers to steal password hash files early and often in their hacking campaigns. Reported breaches at Evernote, LinkedIn and Adobe readily come to mind. Just yesterday, we learned about what appears to be a fairly unprecedented scale of stolen passwords.
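
To illustrate why this matters, here is a minimal sketch of salted, iterated password hashing using Python’s standard library (PBKDF2 with HMAC-SHA256). The iteration count and parameters are illustrative, not a recommendation; dedicated schemes such as bcrypt or scrypt are also common choices. The point is simply that this is the class of control a framework (or an assessor) should be asking about.

```python
import hashlib
import hmac
import os


def hash_password(password, iterations=100_000):
    """Return (salt, derived_key, iterations) using PBKDF2-HMAC-SHA256."""
    salt = os.urandom(16)  # unique, random salt for every password
    derived = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, iterations)
    return salt, derived, iterations


def verify_password(password, salt, expected, iterations):
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, iterations)
    return hmac.compare_digest(candidate, expected)  # constant-time comparison


salt, stored, rounds = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, stored, rounds))  # True
print(verify_password("password123", salt, stored, rounds))                   # False
```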

So, if you see a consultant using a so-called best practice set of controls or one of the security controls frameworks to perform your risk assessment, and he or she doesn’t ask a question about password hashes (or some other control or vulnerability that may truly matter), you should know the likely source of the problem. More than likely, they are simply going through the motions by asking you questions from a controls checklist, with little understanding of or focus on the threats and vulnerabilities that may be truly important in your setting or context. And as we know, any assessment without a clear and contextual consideration of real-world threats and vulnerabilities is not really a risk assessment. You may just have spent a good amount of money on the consultant but probably do not have much to show for it in terms of the only metric that matters in an assessment – the number of “real” risks identified and their relative levels of magnitude – so that you can make intelligent risk management decisions.

In closing, let us not allow ourselves to be blindsided by so-called “Best Practices” and Security Controls Frameworks. Meaningful security risk management requires us to look at the threats and vulnerabilities that are often unique to each of our environments and contexts. What may have been considered a best practice somewhere else, or a security framework put out by someone, may at best be a reference source to double-check and make sure we didn’t miss anything. They should never be the sole source for our assessments and certainly not the yardstick for our decision making in security.

I welcome your thoughts and comments.

Important notes and sources for reference

  • I used HITRUST CSF only as an example. The idea discussed in this post would apply to any set of Best Practices or Security Controls Frameworks. After all, no set of Best Practices or Security Controls Frameworks, no matter how good its “quality” may be, can keep up with the speed at which today’s security threats evolve or new vulnerabilities are discovered.
  • If you are interested in learning some really useful information on password hashing and password management, I would strongly recommend this post (Caution: it is not a quick read; allow yourself at least 30 minutes to read and absorb the details, especially if you are not an experienced security professional).

Do we have a wake-up call in the OIG HHS Report on HIPAA Security Rule Compliance & Enforcement?

If you didn’t notice already, the Office of Inspector General (OIG) in the Department of Health and Human Services (HHS) published a report on the oversight by the Centers for Medicare and Medicaid Services (CMS) in enforcing the HIPAA Security Rule. The report is available to the public here. As we know, CMS was responsible for enforcement of the HIPAA Security Rule until the HHS Secretary transferred that responsibility over to the Office for Civil Rights (OCR) back in 2009.

To quote from the report, the OIG conducted audits at seven covered entities (hospitals) in California, Georgia, Illinois, Massachusetts, Missouri, New York, and Texas in addition to an audit of CMS oversight and enforcement actions.  These audits focused primarily on the hospitals’ implementation of the following:

  • The wireless electronic communications network or security measures the security management staff implemented in its computerized information systems (technical safeguards);
  • The physical access to electronic information systems and the facilities in which they are housed (physical safeguards); and,
  • The policies and procedures developed and implemented for the security measures to protect the confidentiality, integrity, and availability of ePHI (administrative safeguards).

These audits were spread over three years (2008, 2009 and 2010), with the last couple of audits happening in March 2010. The report doesn’t mention the criteria by which these hospitals were selected for audit, except that these hospitals were not selected because they had a breach of Protected Health Information (PHI).

It wouldn’t necessarily be wise to extrapolate the findings in the report to the larger healthcare space in general without knowing how these hospitals were selected for audit. All one can say is that the findings would paint a worrisome picture if these hospitals were selected truly at random. For example, if one were to look at the “high impact” technical vulnerabilities, all 7 audited hospitals seem to have had vulnerabilities related to Access and Integrity Controls, 5 out of 7 had vulnerabilities related to Wireless and Audit Controls, and 4 out of 7 had vulnerabilities related to Authentication and Transmission Security Controls.

[Image: breakdown of high-impact vulnerability categories across the seven audited hospitals]

What might be particularly concerning is that the highest number of vulnerabilities were in the Access and Integrity Controls categories.  These are typically the vulnerabilities that are exploited most by hackers as evidenced (for instance) by the highlight in this quote from the 2011 Verizon Data Breach Investigation Report – “The top three threat action categories were Hacking, Malware, and Social. The most common types of hacking actions used were the use of stolen login credentials, exploiting backdoors, and man-in-the-middle attacks”.

Wake-up call or not, healthcare entities should perhaps take a cue from these findings and look to implement robust security and privacy  controls. A diligent effort should help protect organizations from the well publicized consequences of a potential data breach.

Next time you do a Risk Assessment or Analysis, make sure you have Risk Intelligence on board

I was prompted to write this quick post this morning when I read this article.

I think it is a good example of what some (actually many, in my experience) risk management programs may be lacking, which is a good quality of Risk Intelligence. In this particular case, I think the original article failed to emphasize that vulnerabilities by themselves may not mean much unless there is a good likelihood of them being exploited, resulting in real risk.  We discussed some details regarding the quality of risk assessments in a previous post.

A good understanding of information risks and their prioritization needs to be the first and arguably the most important step in any information risk management program. Yet, we often see risk assessment initiatives not being done right or at the right quality. We think it is critical that a risk analysis or assessment is headed by someone or performed by a team that has or does the following:

  1. A very good understanding of your environment from people, process and technology perspectives
  2. A very good and up-to-date intelligence on the current threats out there (both internal and external) and is able to objectively define those threats
  3. Is able to clearly list and define the vulnerabilities in your environment. It will often require  process or technology specialists to do a good job of defining the vulnerabilities
  4. Is able to make an unbiased and objective determination of the likelihood that the vulnerabilities (from Step 3) can be exploited by one or more threats (from Step 2)
  5. A very good understanding of the impact to the business if each vulnerability were to be exploited by one or more threats. Impact is largely a function of the organization’s characteristics including various business and technical factors, so it is important that you involve your relevant business and  technology Subject  Matter Experts.
  6. Based on the likelihood (Step 4) and impact (Step 5), estimate the risks and then rank them by magnitude (a simple sketch of this step follows the list).
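
As a simple illustration of steps 4 through 6, the sketch below scores and ranks risks from estimated likelihood and impact ratings. The 1-to-5 scales and the sample entries are made up for the example; a real assessment would use your organization’s own rating criteria, whether qualitative or quantitative.

```python
# Illustrative only: rank risks from estimated likelihood and impact ratings.
# The 1-5 scales and the sample entries are assumptions made for this sketch.

risks = [
    # (description,                                         likelihood 1-5, impact 1-5)
    ("Stolen DBA credentials used to query sensitive data",             4,  5),
    ("Unpatched internet-facing web server exploited",                  3,  4),
    ("Lost unencrypted backup tape",                                    2,  5),
]


def risk_score(entry):
    _description, likelihood, impact = entry
    return likelihood * impact  # simple multiplicative scoring model


for description, likelihood, impact in sorted(risks, key=risk_score, reverse=True):
    print(f"score={likelihood * impact:>2}  {description} "
          f"(likelihood={likelihood}, impact={impact})")
```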

We just can’t stress the importance of steps 1-5 enough. We think it takes “Risk Intelligence” to do these steps well. Without good Risk Intelligence on your team, you may well be wasting precious time, money and resources on your risk assessments.  More importantly, you may not be protecting your business to the extent that you should, with the same budget and resources.

======================================

Important Disclaimer

The guidance and content we provide in our blogs including this one is based on our experience and understanding of best practices. Readers must always exercise due diligence and obtain professional advice before applying the guidance within their environments.

Let’s talk some “real” insider threat numbers – How can Access Governance and SIEM be useful as effective safeguards?

If you have been following some of our posts, you probably realize that we don’t advocate security for the sake of security. Nor do we like to do compliance for the sake of compliance, though you may not have much choice there if the compliance requirements are mandated by external regulations such as industry regulations (e.g. PCI DSS, NERC CIP etc.) or government regulations (e.g. HIPAA, GLBA, SOX etc.). On the other hand, we think that every investment in security projects or operations (beyond what is required for routine business support) must be justifiable in terms of the risk(s) that we are trying to mitigate or eliminate. And the allocation of IT resources and budgets must be prioritized by risk level, which in turn requires every IT organization to conduct periodic risk assessments and rank the risks by severity. This probably sounds all too obvious, but we still see a lot of security purchasing decisions being made that are not based on formal risk assessments or discernible risk-aligned priorities. BTW, I talk about the quality of risk assessments in another post.

In this post, I would like to go over some “real” numbers on insider threats, as we know from a few recent survey reports. More importantly, I’ll cover how Access Governance and Security Information and Event Management (SIEM) can be effective safeguards in mitigating risks from insider threats.  If you are not up to speed on what Access Governance (sometimes also referred to as Access Assurance) includes, I would point you here (may need registration).  For SIEM, I would point you here.

It probably needs an explanation as to why I chose Access Governance and SIEM for this discussion. Insider threats, by definition, are caused by people (employees, contractors, partners etc.) whose identity is known to the organization and who have been provided some level of access to one or more of the organization’s information systems. Access Governance can be both an effective detective control (through access reviews) and preventative control (through role based access provisioning and access remediation) for user access. SIEM can be an effective control for detecting anomalous, suspicious or unauthorized user activities. When properly integrated, Access Governance and SIEM solutions can help achieve substantial reduction of risks from insider threats.

Below is a discussion of findings related to insider threats from recent reports. Also provided are notes on how effective implementations of Access Governance and SIEM processes or technologies can be useful safeguards against these threats. I use findings from three recent reports for the analysis – the 2010 Verizon Data Breach Investigations Report (DBIR), the 2010 CyberSecurity Watch Survey (CSWS) and the Securosis 2010 Data Security Survey (SDSS).

Size and significance of Insider Threats

  • DBIR: 48% of all breaches were attributed to internal agents.
  • CSWS: “The most costly or damaging attacks are more often caused by insiders (employees or contractors with authorized access)”
  • CSWS: “It is alarming that although most of the top 15 security policies and procedures from the survey are aimed at preventing insider attacks, 51% of respondents who experienced a cyber security event were still victims of an insider attack. This number is holding constant with the previous two surveys (2007 and 2006)”
  • CSWS: Insider incidents are more costly than external breaches, according to 67% of respondents.
  • SDSS: Among respondents who knew of data breaches in their own organizations, 62 percent said malicious intentions were behind them. Insider breaches comprised 33 percent of incidents, hackers comprised 29 percent, and the remaining breaches were accidental.

As one can infer from these findings, insider threats are the cause of at least as many security breaches as external threats. It also appears that the cost of breaches caused by internal threats could be higher than those caused by external threats.

Intentional vs. Accidental

  • DBIR: 90% of the breaches caused by internal agents were the result of deliberate and malicious activity.
  • CSWS: Insiders most commonly expose private or sensitive information unintentionally, gain unauthorized access to/use of information systems or networks, and steal intellectual property.
  • SDSS: Among respondents who knew of data breaches in their own organizations, 62 percent said malicious intentions were behind them. Insider breaches comprised 33 percent of incidents, hackers comprised 29 percent, and the remaining breaches were accidental.

It appears from the findings that insiders could be causing breaches intentionally more often than accidentally. Access Governance can help reduce malicious insider risk by enforcing “least privilege” user access and “segregation of duties” through role based access provisioning, access reviews and remediation of improper access. On the other hand, a properly implemented SIEM solution can be an effective deterrent (as a detective control) to malicious insider threats by logging user activities, correlating user activities and alerting on suspicious activities by the user. By suitable integration of SIEM and Access Governance solutions, it is possible to analyze user activities (obtained from SIEM) against a user’s role in the organization and hence what the user is authorized to do (obtained from Access Governance), as sketched below.
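
As a rough sketch of that integration point, the example below compares activity observed in SIEM events against the entitlements recorded in an access governance system and flags anything the user is not authorized to do. The entitlement model and event fields are simplified assumptions for the illustration, not the data model of any particular product.

```python
# Illustrative only: flag SIEM-observed activity that falls outside the
# entitlements recorded by an access governance system. The entitlement
# model and event fields are simplified assumptions for this sketch.

# Role-derived entitlements per user, as recorded by access governance.
entitlements = {
    "jdoe":   {("claims_db", "read")},
    "asmith": {("claims_db", "read"), ("claims_db", "write")},
}

# Normalized user-activity events collected by the SIEM.
siem_events = [
    {"user": "jdoe",   "resource": "claims_db", "action": "write"},  # not entitled
    {"user": "asmith", "resource": "claims_db", "action": "write"},  # entitled
]


def unauthorized_activity(events, entitlements):
    for event in events:
        allowed = entitlements.get(event["user"], set())
        if (event["resource"], event["action"]) not in allowed:
            yield event


for event in unauthorized_activity(siem_events, entitlements):
    print("Review: {user} performed {action} on {resource} "
          "without a matching entitlement".format(**event))
```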

Cause and prevention

  • DBIR: 51% of the breaches caused by internal agents involved regular users or employees, 12% involved accounting or finance staff, and 12% involved network or systems administrators.
  • DBIR: “In general, employees are granted more privileges than they need to perform their job duties and the activities of those that do require higher privileges are usually not monitored in any real way.”
  • DBIR: “Across all types of internal agents and crimes, we found that 24% was perpetrated by employees who recently underwent some kind of job change. Half of those had been fired, some had resigned, some were newly hired, and a few changed roles within the organization.”
  • DBIR: “With respect to breaches caused by recently terminated employees, we observed the same scenarios we have in the past: 1) the employee’s accounts were not disabled in a timely manner, and 2) the employee was allowed to “finish the day” as usual after being notified of termination. This obviously speaks to the need for termination plans that are timely and encompass all areas of access (decommissioning accounts, disabling privileges, escorting terminated employees, forensic analysis of systems, etc.)”
  • CSWS: “The most costly or damaging attacks are more often caused by insiders (employees or contractors with authorized access)”
  • CSWS: “It is alarming that although most of the top 15 security policies and procedures from the survey are aimed at preventing insider attacks, 51% of respondents who experienced a cyber security event were still victims of an insider attack. This number is holding constant with the previous two surveys (2007 and 2006)”

The DBIR findings clearly illustrate the need for organizations to enforce least privilege access through business need-to-know (managing user access based on a user’s role), periodic review of user access (access reviews and certification) and prompt remediation of improper user access. Access Governance solutions can help achieve these objectives both effectively and efficiently.

The CSWS findings seem to suggest a problem with the enforcement of organizations’ policies related to user access. As mentioned above, a properly implemented Access Governance program and solution can help with effective enforcement of user access policies.
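
To make the access review and least-privilege points concrete, below is a minimal sketch of a check that an access certification process might automate: it compares each user’s granted entitlements against the baseline defined for that user’s role and reports anything in excess for remediation. The roles, users and entitlement names are hypothetical; a real Access Governance product would pull this data from its own repository rather than in-memory dictionaries.

```python
# Minimal sketch of an access-review ("certification") check: compare each
# user's granted entitlements against the baseline defined for their role and
# report the excess for remediation. Roles, users and entitlement names are
# illustrative, not taken from any real Access Governance product.

from typing import Dict, Set

# Baseline entitlements per role (hypothetical)
ROLE_BASELINE: Dict[str, Set[str]] = {
    "claims_processor": {"claims_app:read", "claims_app:update"},
    "help_desk": {"ticketing:read", "ticketing:update", "password_reset"},
}

# Access currently granted to each user (hypothetical)
GRANTED_ACCESS = {
    "carol": {"role": "claims_processor",
              "entitlements": {"claims_app:read", "claims_app:update", "db:admin"}},
    "dave": {"role": "help_desk",
             "entitlements": {"ticketing:read", "ticketing:update"}},
}


def excess_privileges() -> Dict[str, Set[str]]:
    """Return the entitlements each user holds beyond their role baseline."""
    findings: Dict[str, Set[str]] = {}
    for user, record in GRANTED_ACCESS.items():
        baseline = ROLE_BASELINE.get(record["role"], set())
        excess = record["entitlements"] - baseline
        if excess:
            findings[user] = excess
    return findings


if __name__ == "__main__":
    for user, extra in excess_privileges().items():
        print(f"Review/remediate: {user} holds out-of-role access: {sorted(extra)}")
```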

To conclude, it is clear that risk management of insider threats needs to be a key focus area of any Information Security or Risk Management program. An effective Access Governance and SIEM program can go a long way toward mitigating insider risk.

————————————————————————

RisknCompliance Consulting Services Note

We at RisknCompliance have extensive advisory and implementation experience in the Access Governance and SIEM areas.

Please contact us here if you would like to discuss your needs. We will be glad to talk to you about how we could be of assistance.


You don’t know what you don’t know – Do we have a "detection" problem with the healthcare data breach numbers?

Like some of you perhaps, I have been reading a few recent articles on healthcare data breaches, especially one from Dark Reading and a detailed analysis of the 2010-to-date breaches from the HITRUST Alliance.

What stood out for me is something the articles don’t necessarily highlight: the very low number of breaches involving failures of technology, people or process controls, as opposed to physical losses.

These articles focused on the 119 or so breaches that have been reported to the Department of Health and Human Services (HHS) or made public to date in 2010. From the HITRUST Alliance analysis, it is clear that an overwhelming majority of the breaches resulted from physical loss or theft of paper or electronic media, laptops and the like. Only two breaches resulted from hacking incidents.

I then went back and did a little analysis of my own on the 2010 data breach incidents covered in the Identity Theft Resource Center report available here. I came up with the following numbers for breaches other than those that involved physical loss, theft, burglary, improper disposal, etc.:

  • Malware infection – 1
  • Unauthorized access to file share – 1
  • Database misconfiguration or vulnerability – 2
  • Website vulnerability – 1
  • Improper access or misuse by internal personnel – 6

As you can see, these 11 incidents account for less than 10% of the healthcare breaches known or reported so far this year. Contrast this with the findings in the 2010 Verizon Data Breach Investigations Report, which attributes 38% of breaches to malware, 40% to hacking and 48% to misuse. It is pertinent to note that the Verizon report focused on 141 confirmed breaches from 2009 covering a variety of industries, but I think it is still useful for a high-level comparison to determine whether we may be missing something in the healthcare breach data.

The comparison seems to suggest that the healthcare industry probably has much stronger safeguards against malware, hacking, improper logical access and the like. I know from my own experience working with healthcare entities that this is not necessarily the case. For further corroboration, I reviewed two Ponemon Institute survey reports – Electronic Health Information at Risk: A Study of IT Practitioners and Are You Ready for HITECH? – A benchmark study of healthcare covered entities & business associates, both from Q4 2009. The following sample numbers from these reports further confirm that the state of Information Security and Privacy among HIPAA Covered Entities (CEs) and Business Associates (BAs) is far from perfect:

Electronic Health Information at Risk: A Study of IT Practitioners (% of respondents agreeing with each statement)

1. My organization’s senior management does not view privacy and data security as a top priority – 70%
2. My organization does not have ample resources to ensure privacy and data security requirements are met – 61%
3. My organization does not have adequate policies and procedures to protect health information – 54%
4. My organization does not take appropriate steps to comply with the requirements of HIPAA and other related healthcare regulations – 53%

Are You Ready for HITECH? – A benchmark study of healthcare covered entities & business associates

HIPAA compliance requirements not formally implemented (% of respondents):

1. Risk-based assessment of PHI handling practices – 49%
2. Access governance and an access management policy – 47%
3. Staff training – 47%
4. Detailed risk analysis – 45%

All this leads me to think that some HIPAA CEs and BAs may not be detecting potential breaches. If you study the healthcare breaches reported so far, almost all of them came to light through physical losses of computers or media (which are easy to know about and detect) or through reporting by third parties (victims, law enforcement, someone finding improperly disposed PHI paper records in trash bins, etc.). I don’t know of any healthcare data breach this year that was detected through proactive monitoring of information systems.

As I covered in a related post on breach reports and what they tell us, I would recommend that CEs and BAs focus on certain key controls and related activities (see the list below) in order to improve their breach prevention and detection capabilities:

1. Secure Configuration and Lockdown
  • Review the configuration of information systems (network devices, servers, applications, databases, etc.) periodically and ensure they are locked down from a security configuration standpoint.

2. Web Application Security
  • Scan web applications periodically for OWASP Top 10 vulnerabilities and fix any discovered vulnerabilities.
  • For new applications under development, perform code reviews and/or vulnerability scans to fix security vulnerabilities before the applications are put into production use. (Studies show that it is far more cost effective to fix vulnerabilities before applications go into production than after.)
  • Use Web Application Firewalls as appropriate.

3. Strong Access Credentials
  • Configure PHI systems and applications to enforce a strong password policy (password complexity, periodic password changes, etc.).
  • Implement multi-factor authentication on PHI systems and applications wherever possible.
  (Note: According to the 2010 Verizon Data Breach Investigations Report, stolen access credentials lead to the largest number of breaches from hacking incidents.)

4. Access Assurance or Governance
  • Conduct access certifications periodically, preferably at least every quarter for PHI systems and applications.
  • Review access privileges within PHI systems and applications to ensure all access conforms to the “least privilege” principle. In other words, no user, application or service should have any more privileges than what is required for the job function or role.
  • If any excess privileges are found, remediate them promptly.
  • Revoke access to PHI systems and applications promptly when a person leaves the organization or no longer requires access due to a change in job role.

5. Logging, Monitoring and Reporting
  • Identify “risky” events within PHI systems.
  • Configure the systems to generate logs for the identified events.
  • Tamper-proof the logs.
  • Implement appropriate technologies and/or processes for monitoring of the events (refer to our related posts here and here for examples); a minimal monitoring sketch follows this list.
  • Identify and monitor high-risk events through near-real-time alerts.
  • Assign responsibility for daily review of log reports and alerts to specific personnel.

6. Encryption (data at rest, media) and Physical Security of Media
  • Maintain an inventory of the locations and systems where PHI exists.
  • Implement suitable encryption of PHI on laptops and removable media.
  • Implement appropriate physical security safeguards to prevent theft of devices or systems containing PHI.

7. Security Incident Response
  • Implement and operationalize an effective Security Incident Response program, including clear assignment of responsibilities, response steps/workflows, etc.
  • Test the incident response process periodically as required.

8. Security Awareness and Training
  • Implement a formal security awareness and training program so the workforce is aware of their responsibilities, security/privacy best practices, and the actions to take in the event of suspected incidents.
  • Require personnel to go through security awareness and/or training periodically as appropriate.
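
As a concrete illustration of control #5 (Logging, Monitoring and Reporting), here is a minimal sketch of near-real-time alerting on “risky” events: it scans a stream of log events and raises an alert for any event type on a defined high-risk list. The event fields and the high-risk event names are illustrative assumptions, not the schema of any specific SIEM, EHR or logging product.

```python
# Minimal sketch of alerting on high-risk events in PHI system logs.
# The event format and the high-risk event types below are illustrative
# assumptions; a real deployment would use the organization's own risk list.

from typing import Iterable

# Hypothetical event types an organization might classify as high risk
HIGH_RISK_EVENTS = {
    "bulk_phi_export",
    "failed_admin_login",
    "audit_log_cleared",
    "after_hours_db_access",
}


def alert_on_high_risk(log_events: Iterable[dict]) -> None:
    """Print an alert for each event whose type is on the high-risk list.

    In practice this would feed a SIEM alert queue or notify the personnel
    assigned to daily log review, rather than print to stdout.
    """
    for event in log_events:
        if event.get("event_type") in HIGH_RISK_EVENTS:
            print(f"ALERT [{event.get('time')}] {event.get('user')}: "
                  f"{event.get('event_type')} on {event.get('system')}")


if __name__ == "__main__":
    sample_events = [
        {"time": "2010-11-03T02:10", "user": "svc_report",
         "event_type": "bulk_phi_export", "system": "ehr_db"},
        {"time": "2010-11-03T09:05", "user": "nurse01",
         "event_type": "record_view", "system": "ehr_app"},
    ]
    alert_on_high_risk(sample_events)
```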

If you are familiar with the HIPAA Security Rule, you will notice that not all of the above controls are “Required” (as opposed to “Addressable”) under the HIPAA Security Rule or in the proposed amendments to the rule under the HITECH Act. One may argue, however, that the above controls should be identified as required based on “risk analysis”, which of course is a required implementation specification in the HIPAA Security Rule. In any event, CEs and BAs need to look beyond HIPAA compliance risk and focus on the risk to their business or brand reputation if a breach were to occur.

Hope this is useful! As always, we welcome your thoughts and comments.

RisknCompliance Services Note

We at RisknCompliance maintain an up-to-date, detailed database of current security threats and vulnerabilities. We are able to leverage this knowledge to provide our clients with high-quality risk analysis.

Please contact us here if you would like to discuss your HIPAA security or privacy needs. We will be glad to talk to you about how we could be of assistance.
