Thursday, September 26, 2013

ICMC: ISO/IEC 19790 Status and Supporting Documents

Presented by Randall Easter, NIST; and Miguel Bagnon, Epoche & Espri.

Mr. Bagnon started out by explaining the structure of ISO, the IEC and the SC27 working groups. The ISO standards body looks at creating market-driven standards, getting input from all of the relevant stakeholders. SC27 focuses on security and privacy topics across 5 working groups. SC27 has 48 voting member countries - from Algeria to Uruguay! There are 19 other observing countries. You can see a very wide representation of continents, countries, cultures and languages.

The WG3 Mission is security evaluations, testing and specification. This covers how to apply the criteria, testing criteria, and administrative procedures in this area.

The process is open to the world (as you can see), drafts are sent out for review by the public before becoming a final international standard.  Please participate if you can, it's the only way to have your opinion counted.

Mr. Easter then dove into ISO 19790, and the related standards: ISO 24759 (test requirements for cryptographic modules), 18367 (algorithm and security mechanisms conformance testing), 17825 (testing methods for the mitigation of non-invasive attack classes against crypto modules) and 30104 (physical security attacks, mitigation techniques and security requirements).

ISO 19790 was first published in 2006 and was technically equivalent to FIPS 140-2, plus additional requirements for mitigation of attacks at Level 4. This standard has been adopted internationally and is being used around the world.

What Mr. Easter had been hoping would happen was that ISO 19790 and FIPS 140-3 would closely track each other, with ISO 19790 picking up all of the changes from FIPS 140-3.  FIPS 140-3 was so delayed, though, that ISO 19790 began to develop independently.

Mr. Easter noticed that there were no validation labs participating in the ISO standard, so he got permission to circulate the draft amongst the labs and to incorporate their comments, as he's the editor of the document.

This document has been adopted by ANSI as a US standard now as well.

At this time, it is not officially recognized by NIST and the US Government.

This is very frustrating to many vendors and labs, because FIPS 140-2 was published in 2001 and it is quite stale (hence the 170 page Implementation Guidance). Technology is changing, the original language in FIPS 140-2 wasn't clear to all, and there seems to be a way out - if only NIST would adopt it.

Until that happens, vendors are stuck implementing to FIPS 140-2.

How can you change this? Call up your NIST representative or friendly CSEC contact and ask for this.


ICMC: Key Management Overview

Presenters: Allen Roginsky and Kim Schaffer, NIST.

Key Establishment Methods



Key establishment methods in FIPS 140-2 cover key agreement, key transport (including encapsulation and wrapping), key generation, key entry (manual, electronic, unprotected media), and key derivation.

The best method to make sure you do this right is to comply with SP 800-56A (CAVP KAS certificate required).

You can also use SP 800-56B, which is vendor affirmed right now. SP 800-56B is IFC (integer factorization cryptography) based, and key confirmation is highly desirable.

Or, you can use non-approved methods that rely on approved, validated algorithm components. The shared secret is still computed per SP 800-56A with a CVL certificate. The KDF (key derivation function) would then be approved (with a CVL certificate) per SP 800-56B and SP 800-56C (a small sketch of this shared-secret-plus-KDF pattern appears just after these options). There was a new version of SP 800-56A released in May 2013 that should help alleviate some of this convoluted cross-referencing, and clarify many questions people have had over the last few years.

OR... you can even use non-approved, but allowed, implementations - that is, if your key strengths are consistent with SP 800-131A transition requirements.
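To make the shared-secret-plus-approved-KDF option above concrete, here's a minimal sketch using the Python cryptography package; the curve, fixed info and output length are arbitrary illustrative choices, and a sketch like this is of course not itself a validated implementation.

    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import ec
    from cryptography.hazmat.primitives.kdf.concatkdf import ConcatKDFHash

    # Each party generates an ephemeral EC key pair (P-256 chosen arbitrarily here).
    priv_a = ec.generate_private_key(ec.SECP256R1())
    priv_b = ec.generate_private_key(ec.SECP256R1())

    # Shared secret Z, i.e. the SP 800-56A ECC CDH primitive.
    z = priv_a.exchange(ec.ECDH(), priv_b.public_key())
    assert z == priv_b.exchange(ec.ECDH(), priv_a.public_key())

    # A single-step (concatenation) KDF then turns Z into keying material.
    kdf = ConcatKDFHash(algorithm=hashes.SHA256(), length=32,
                        otherinfo=b"example fixed info")
    keying_material = kdf.derive(z)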

Key Transport Modes

Key transport modes can be confusing as well. Key encapsulation is where keying material is encrypted using an asymmetric (public key) algorithm. Key wrapping, though, is where the keying material is encrypted using symmetric algorithms. Both commonly provide integrity checks.

Approved methods would be an approved IFC-based key encapsulation scheme as in SP 800-56B, key wrapping schemes (AES or 3DES based) as per SP 800-38F, AES-based authenticated encryption modes permitted in SP 800-38F, or, as per SP 800-56A, a DLC-based key agreement scheme together with a key wrapping algorithm.

An IFC-based key encapsulation scheme is also allowed if it uses key lengths specified in SP 800-131A as acceptable or (through 2013) deprecated. When AES or 3DES is used for wrapping, a CAVP validation of the algorithm is required.
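For the AES-based key wrapping case, a minimal sketch of the SP 800-38F KW mode using the Python cryptography package might look like the following; the key sizes are arbitrary, and a CAVP-validated implementation is still what counts for an actual module.

    import os
    from cryptography.hazmat.primitives.keywrap import aes_key_wrap, aes_key_unwrap

    kek = os.urandom(32)               # AES-256 key-encryption key
    key_to_transport = os.urandom(16)  # the keying material being wrapped

    wrapped = aes_key_wrap(kek, key_to_transport)   # SP 800-38F KW mode
    assert aes_key_unwrap(kek, wrapped) == key_to_transport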

Key Generation Methods

People often mistakenly believe that because they are using a good RNG, they must be doing the right thing for key generation... not always the case! You still need to follow SP 800-133 and IG 7.8 (Implementation Guidance).

The vendor needs to identify the method used and account for the resulting length and strength of the generated keys. This is about the generation of a symmetric algorithm key or a seed for generating an asymmetric algorithm key, and the generation of asymmetric algorithm domain parameters and RSA keys. See IG 7.8 and the future version of SP 800-90A.

You can use SP 800-132 for password-based key generation for storage applications only.
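As an illustration of that last point, here's a minimal PBKDF2 sketch (the passphrase, salt size and iteration count are arbitrary illustrative values); per SP 800-132 the resulting key may only protect stored data, not general communications.

    import hashlib
    import os

    password = b"correct horse battery staple"   # hypothetical passphrase
    salt = os.urandom(16)

    # PBKDF2-HMAC-SHA-256; the iteration count here is only an example.
    storage_key = hashlib.pbkdf2_hmac("sha256", password, salt, 600_000, dklen=32)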

Key Entry

Implementation Guidance (IG) 7.7 provides examples explaining the FIPS 140-2 requirements. Key entry/output via the GPC internal path is generally N/A. Key establishment over unprotected media requires protection. Split knowledge entry is required for manually distributed keys at Levels 3 and 4.

Key Derivation

When you're deriving a key, it's coming from something else. If you're deriving from a shared secret (per SP 800-135rev1), the following protocols and their key derivation functions are included: IKE (versions 1 and 2), TLS (1.0-1.2), ANSI X9.42 and X9.63, SSH, SRTP, SNMP and TPM. You can also derive from other keys, which is covered by SP 800-108 - which also includes the IEEE 802.11i key derivation functions (IG 7.2).
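For the "derive from another key" case, a rough sketch of the SP 800-108 KDF in counter mode with HMAC-SHA-256 as the PRF looks like this; the label and context are whatever the protocol defines, and the fixed-input layout shown is the one suggested in SP 800-108.

    import hashlib
    import hmac
    import struct

    def sp800_108_ctr(kdk: bytes, label: bytes, context: bytes, out_len: int) -> bytes:
        """SP 800-108 counter-mode KDF using HMAC-SHA-256 as the PRF."""
        out = b""
        length_bits = struct.pack(">I", out_len * 8)
        counter = 1
        while len(out) < out_len:
            fixed_input = struct.pack(">I", counter) + label + b"\x00" + context + length_bits
            out += hmac.new(kdk, fixed_input, hashlib.sha256).digest()
            counter += 1
        return out[:out_len]

    # Example: derive a 32-byte key from a key-derivation key.
    derived = sp800_108_ctr(b"\x00" * 32, b"example label", b"example context", 32)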




ICMC: The Upcoming Transition to New Algorithms and Key Sizes

Presented by Allen Roginsky, Kim Schaffer, NIST.

There are major things we need to be concerned about – we need to move from old, less secure algorithms to new ones. This includes the transition to 112-bit-strength crypto and closing certain loopholes in old standards.

The algorithms will fall into the following classes:
  • Acceptable (no known risks of use)
  • Deprecated (you can use it, but you are accepting risk by doing so)
    • This is a temporary state
  • Restricted (deprecated and some additional restrictions apply) 
  • Legacy-Use (may only be used to process already-protected information) 
  • Disallowed (may not be used at all)
And of course, these classifications can change at any time. As you all know, the crypto algorithm arena is ever changing. I asked a question about the distinction between Legacy-Use and Disallowed. It seems to me that you might find some old data lying around that you’ll need to decrypt at a later date. Mr. Roginsky noted that they didn’t really cover this when they did the DES transition, and you might be okay because decrypting is not really “protecting” data.

When we get to January 1, 2014, 112-bit strength is required. Two-key 3DES is restricted through 2015. Digital signatures are deprecated through 2013 if they aren’t strong enough. This is an example where you could continue to use them for verification under “Legacy-Use” when we reach 2014.

Non SP-800-90A RNGs are disallowed for use after 2015 – you won’t even be able to submit a test report after December 31, 2013 if you don’t have an SP-800-90A RNG.

There is a new document everyone will want to review: SP 800-38F – it explains the use of AES and 3DES for key wrapping.

SHA-224, -256, -384 and -512 are all approved for all algorithms. SHA-1 is okay, except for digital signature generation. There are other changes around MACs and key derivation.

We’ll also be transitioning from FIPS 186-2 to FIPS 186-3/4. Conformance to 186-2 can be tested through 2013. Already-validated implementations will remain valid, subject to the key strength requirements. Only certain functions (such as parameter validation, public key validation and signature verification) will be tested for 186-2 compliance after 2013. What this really means is that some key sizes are “gone” after 2013: RSA can only use 2048 and 3072 bit keys.

Make sure you also read Implementation Guidance (IG) 7.12: RSA signature keys need to be generated as in FIPS 186-3 or X9.31.
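In code terms the remaining RSA options are easy to express; a quick sketch with the Python cryptography package, purely illustrative and not a validated implementation:

    from cryptography.hazmat.primitives.asymmetric import rsa

    # After 2013, RSA signature keys should be 2048 or 3072 bits (the FIPS
    # 186-3/4 sizes); generating 1024-bit signature keys is off the table.
    signing_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)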

The deadlines are coming up – don’t delay!


ICMC: Reuse of Validation of Other Third Party Components in a 140-2 Cryptographic Module

Presented: Jonathan Smith, Senior Cryptographic Equipment Assessment Laboratory (CEAL) Tester, CygnaCom Solutions

What is a component in this context? An algorithm, a 140-2 module, a third party library, etc - not a hardware device. There is more interest in this area, as more validations are occurring. Requirements are not obvious in this area, and there isn't a lot of guidance to follow.

Let's say you want to reuse an algorithm that has its CAVP certificates - if you want to leverage that validation, you have to make sure you are talking about the same Operational Environment (OS/processor for software) and that there is no change within the algorithm boundary when you embed it within a module. CAVP boundaries are not as well defined as CMVP, but for all intents and purposes it is the compiled binary executable that contains the algorithm implementation.

When you're reusing someone else's algorithm, you will have a hard time making sure all of the CMVP self-tests are run at the right time. You may not be able to reuse it without rebuilding it.

Now you may want to use an entire validated module - first make sure you have the correct validated version.  If you can use it completely unchanged, you will have to reference the other module's certificate.  One note, if the embedded module is Level 2, but your code only meets Level 1 criteria - the composite module could not be evaluated higher than Level 1. Now, the inverse is not necessarily true - you might be embedding a Level 1 module, but your different use cases may allow you to get a higher level for the composite module.

To reuse this module, again, you need the operational environment to be unchanged, just as when trying to reuse an algorithm. The new module boundary must include the entire boundary of the included module. You'll need to have a consistent error state - you cannot allow one part of the composite module to enter an error state while the rest of the system continues serving crypto.

Your documentation can quite frequently reference the embedded module's documentation, leaving certain tasks up to the embedded module.  Make sure the new capabilities of the composite module are documented.

A question came up about using multiple vendors' modules together, where they each have their own validation certificate. Mr. Easter (CMVP) recommended we read Implementation Guidance (IG) 7.7 for detailed advice on this concept.

There was a question about what happens if the embedded module was validated before the new IG came out. As long as the embedded module meets SP 800-131A, the old certificate fully applies and you will not have to bring it up to the new IG.


Wednesday, September 25, 2013

ICMC: FIPS and FUD


Ray Potter, CEO, Safelogic.

Mr. Potter has seen lots of vendors jumping on the FIPS-140 bandwagon when they see another vendor claiming FIPS-140 certification, without understanding what that claim meant. That other vendor may have simply gotten an algorithm certificate for just one algorithm, for example.

FIPS-140 is important - it provides a measure of confidence and a measure of trust.  A third party is validating your claims.  FIPS-140 is open and everyone in the world can read the standard and the IG and implement based on these and understand what it means to be validated. Most importantly, FIPS-140 is required by the US Government and desired by other industries (medical/financial/etc).

Having this validation shows that you've correctly implemented your algorithms and you are offering secure algorithms.

You see claims like the module has been "designed for compliance to FIPS 140-2", or "it will be submitted", or "a third party has verified this", or "we have an algorithm certificate", or "we have implemented all of the requirements for FIPS-140" - none of these is truly FIPS-140-2 validated.

Once you have a certificate in hand from the CMVP, then you're validated.

But even when the vendor has done the right thing for some products, sales can just get this wrong - too eager to tell the customer that everything is then validated.

So, Mr. Potter has encountered honest mistakes, but he's also seen sales/vendors outright lie about this, because it simply takes too long and is too expensive to do. Why do this? Make the sale - and hope your in-process evaluation completes before the sale does.

Issues that exacerbate the situation: unpredictable CMVP queues, new Implementation Guidance (IG) that is sometimes retroactive, and uneven application across the government.  Some government agencies may accept different phases (in validation) - where others require a certificate in hand.

Vendors get frustrated when they are in the queue for months, then get some retroactive IG that requires code changes - they don't see this as worth the effort.

We can help: educate your customers on the value of FIPS-140 validations and what it really means to be validated, only use validated modules and follow strict guidelines for claims.

There are some people that will take another FIPS-140 validated implementation, repackage it and get their own certificate for the same underlying module but with their name as the vendor on the cert.

I asked about why some vendors are doing validations of their crypto consumers, when they've already got a validation for the underlying module. Mr. Potter noted that some people might do this because they need to cover the key management aspects that the consumer is doing that weren't covered in the other evaluation, or the consumer may actually have some of its own internal crypto in addition to what it's getting from the underlying module, or they simply are trying to make a very important customer happy.


ICMC: Understanding the FIPS Government Crypto Regulations for 2014

 Presented by Edwards Morris, co-founder Gossamer Security Solutions.

Looking forward to what new regulations are going to mean for older algorithms and the protocols that use them, right away there seems like there may be work for us to do.

The security strength of various algorithms is not straightforward to calculate, as it's based on how the algorithm is used, how big the key is, etc.

On May 19, 2005, DES was sunset, because it provides less than 80 bits of security. Because this sunset applies to bits of security, DH and RSA with key sizes smaller than 1024 bits are included.

Originally the DES transition gave people two years to migrate to AES or 3DES.

As a part of this, NIST released SP 800-57, which included recommendations for key management.

This was harder than anticipated, due to lack of standards, lack of available 140-2 approved products and the sheer size of deployments - so NIST SP 800-131A was born.

NIST 800-131A includes more details on the transitions, terminology and dates.  Some 80-bit crypto will be deprecated in 2013 and others in 2015.

The devil is in the details, of course. SP 800-131A refers to SP 800-131B, which was in draft and ultimately became FIPS-140-2 Implementation Guidance. Digging into all of these requirements, how can you tell if TLS can still be used?

Mr. Morris dug into various protocols to help us interpret this standard.

IPsec

IPsec is made up of so many RFCs that getting the big picture is not an easy task. It also has so many possible options. SP 800-57 Part 3 details this guidance. Three IPsec protocols allow a choice of algorithms: IKE (for key exchange), ESP and AH.

You can avoid IKE entirely by using manual keying (acceptable 2014+, but what a pain to configure/deploy); otherwise the choices are IKEv1 (unacceptable in 2014) and IKEv2 (acceptable 2014+, if configured correctly).

For encryption in IPsec, you'll be okay with ENCR_3DES (in CBC mode), ENCR_AES_CBC (CBC mode only), and a few others.  You'll be able to use SHA1 as an HMAC, but not alone for signatures (wow, does this get twisted).

Mr. Morris continued through what would be acceptable for Pseudo-random functions, Integrity, Diffie-Hellman group and Peer Authentication - too quickly for me to type all of those algorithms, but I will try to update this if I can get access to the slide deck.

It's not as simple as which algorithm you use, but again how you use it, which sized keys, etc.

IPsec is well positioned for this - as long as you configure it correctly.

TLS

TLS is not the same thing as SSL v3.0; TLS 1.0 is equivalent to SSL v3.1. TLS 1.0 is acceptable. SSL, though, is not allowed by CMVP. Since 1.0 is acceptable, that essentially means that TLS 1.1 and 1.2 are also acceptable, as long as it's configured correctly (seeing a theme here?). For example, if you use pre-shared keys in lieu of certificates, ensure the key is greater than or equal to 112 bits.

You won't be able to use the following key exchanges: *_anon, SRP_*, KRB5 (though Mr. Morris hasn't dug through the Kerberos standards yet to see where they will be with these 2014 requirements).
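As a rough illustration of "configured correctly", here is how a Python client could pin the TLS version floor and exclude the anonymous suites mentioned above; the exact cipher string is an arbitrary example of mine, not a statement of what CMVP requires.

    import ssl

    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    # Refuse SSLv3 and older TLS; 1.2 is used as the floor in this sketch even
    # though the talk notes TLS 1.0+ can be acceptable when configured correctly.
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2
    # Drop anonymous and null ciphers, which correspond to the *_anon key
    # exchanges called out above.
    ctx.set_ciphers("ECDHE+AESGCM:DHE+AESGCM:!aNULL:!eNULL:!MD5")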

SSH

This is not covered by SP 800-57, so it is harder to figure out whether it's okay or not. There are problems when the RFCs require things that are no longer considered secure: single DES is required to be implemented, but having it available in your implementation will be a problem.

Recommendations

Look for documentation and configuration guides that can help here, or get independent evaluations of the software that you're deploying (companies like Gossamer will look at how you're using even open source software like Apache, OpenSSL, OpenSSH, etc.).


ICMC: Building a Corporate-Wide Certification Program - Techniques that (might) work

Tammy Green, Security and Certification Architect at Blue Coat, came into what seemed like a simple task. She had had some Common Criteria experience when she joined Blue Coat, their product had previously been evaluated at FIPS-140-2 Level 2, and they had already signed contracts with a consulting firm (Corsec) and a lab. Should've been easy, right? Her new boss said it should take about a year and only 20% of her time. Two and a half full-time years later... yikes.

One of the biggest problems was getting internal teams to work with her - people involved in the previous validations didn't even want to talk to Ms. Green about it.

Nobody wanted to do this - they want to work on the new shiny features that they can sell. How does a process that takes 2 years (often not complete until after a product is EOL) help them?

It's hard to see the long-term picture - if you want to sell to the government, you need FIPS-140-2 validations.

Ms. Green didn't want to do this herself again afterwards. Instead of running away, she worked on setting up a certification team locally (her boss hiring a program manager helped to encourage it).

In addition to having the program manager, you need a certification architect.  You can't use the same architect as the product architect, because that person is busy designing shiny new features.

You need to work with the development team well in advance - fit your FIPS-140-2/Common Criteria schedule into their development schedule. You can't bolt the necessary tests and requirements on as an afterthought, and you don't want to delay a schedule because requirements are dropped in at the end.

Target the right release: because FIPS-140-2 takes so long, you need to pick a release you plan on supporting for a long time.

Ms. Green found that after time... engineers stopped replying to her emails and answering her phone calls.  You need to identify key engineering resources to work with and their management needs to commit to those engineers dedicating 10-20% of their time to these validations.

Once you get this set up and have educated engineering, you'll find they'll reach out to you in advance - better timing!

Her team keeps track of what needs to be done: file bugs and track them.  You'd think the project manager for the product team would do this, but what she's found is that the bugs get reprioritized and reassigned to a future release.  Someone who understands validations needs to track these issues.

Ms. Green recommends that you create the security policy from existing documents: don't rely on engineers doing this. They simply don't understand what goes into this document or why it's important.  Instead, use engineering and QA to validate content.

It's important to convince QA to continue testing FIPS mode and related features, as some customers may still want to run in FIPS mode (even though it wasn't validated), or the release might need to be ready for validation if something went horribly wrong with the older release in validation.

Schedule time to prep. Ms. Green holds 4-8 hour long meetings to make sure everyone understands what's important. Take time to prepare, make sure everyone knows what will be expected from the lab visit, and have a test plan formalized in advance. It's actually a lot of work to set up failures (the lab evaluators require that you demonstrate what happens when a failure occurs, even though you have to inject the failure to force it): debug builds, builds you know will fail, multiple test machines, platforms, etc.

To keep your team from killing you... or damaging morale, celebrate the milestones. Mention the progress in every status report, and do corporate-wide announcements when you finish.

Do a post mortem to understand how this can be improved: give your engineering team a voice! Listen and take action based on what worked and didn't.

Update tools and features to make this easier next time: add keywords to bugs and features, modify the product life-cycle, add FIPS-related questions to templates.

Suggestions/questions from the audience:
  • Make sales your best friend. Validations/certifications are not fun, nobody does them for fun - you do this to make money.
  • Get the certification team involved as early as possible: from the very beginning - marketing design meetings.
  • Why don't you run your FIPS-140 mode tests all the time? Time consuming, slower, not seen as a priority when there are no plans to validate.



ICMC: Panel: How Can the Validation Queue for the CMVP be Improved

Fiona Pattinson, atsec information security, moderated a panel on CMVP queue lengths. Panelists: Michael Cooper, NIST; Steve Weymann, Infoguard; James McLaughlin, Gemalto; and Nithya Rahamadugu, Cygnacom.

Mr. McLaughlin said queue length is the major issue for vendors, because it impacts time to market. If you don't get your validation until after your product is obsolete, that can be incredibly frustrating. It's important to take schedule and time to market into consideration, which is impossible when you cannot anticipate how long you'll be sitting in the queue.

Mr. Weymann expressed how frustrating it is for a lab to try to communicate to their customers what the expectations are and what to do when the Implementation Guidance is changed (or, as CMVP would say, clarified) while you're in the queue - do you have to start over?

Mr. Cooper, CMVP, expressed that there is a resource limitation (this gets back to the 8 reviewers). But is simply adding resources enough? They have grown in this area - they used to have only 1 or 2 reviewers - but does adding people resolve all of the issues? Finding people with the correct training is difficult as well. Hiring in the US Government can take 6-8 months, even if they do find the right person with the right background and the right interest.

Mr. Weymann posits that perhaps there are ways to better use the existing resources. Labs could help out a lot more here, making sure things are in such good shape at submission that the work CMVP has to do is easier.

Ms. Rahamadugu and Mr. Weymann suggested adding more automation into the CMVP process. Mr. Cooper noted that he has just now gotten the budget to hire someone to do some automation tasks, so hopefully that will result in improvements to pace.

A comment from the audience from a lab suggested automation of the generation of the certificate, once the vendor has passed validation.  Apparently this can sometimes be error prone, resulting in 1-2 more cycles of review.  Automation could help here.

Mr. Easter said they like to complete things in 1-2 rounds of questions to the vendor, and there are penalties (not specified) for more than 2 rounds. Sometimes the answer to a seemingly benign question can actually bring up more questions, which results in further rounds. CMVP is quick, though, to want to have a phone call to resolve "question loops".

Mr. Easter noted that he tries to give related reviews or technologies to one person. This often helps to speed up reviews, reducing the questions. On the other hand, that puts the area expertise in the hands of one person, so when they go on vacation - there can be delays.

Mr. Weymann noted that each lab seems to gather different information and present it in different ways - for example, different ways of presenting which algorithms are included, key methods, etc.

There was concern from the audience about loss of expertise if anyone moves on - could validators shadow each other to learn each other's areas of expertise? Or will that effectively limit the number of reviewers?
Vendors feel like they have to continually "train" the validators, and retrain every time - frustrating for vendors to do this seemingly over and over.

A suggestion from the audience: could labs review another lab's submissions before it went to CMVP?  This is difficult, due to all of the NDAs involved.  Also, a vendor may not want a lab working with a competitor to review their software.

Complicated indeed!


ICMC: Impact FIPS 140-2: Implementation Guidance

Presenters: Kim Schaffer, Apostol Vassilev, and Jim Fox - all from NIST (CMVP).  The talk walked us through some of the most confusing/contentious recent updates to the Implementation Guidance document.

Implementation Guidance 9.10

Default Entry Points are well-known mechanisms for operator-independent access to a crypto module: when an application loads the module (static default constructors, dynamically loaded libraries, etc.), these access points are linked into the application. When this default entry point is called, the power-on self-tests must run without application or operator intervention.

This guidance exists in harmony with IG 9.5, which is essentially about making sure there isn't a way for the module to skip the power-on self-tests.
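The shape of an IG 9.10 "default entry point" is easier to see in code; here's a toy Python sketch (the module name and the known-answer test are my own invention for illustration) where the self-test runs at load time, without any caller action:

    # fips_module.py -- hypothetical module name
    import hashlib

    _post_passed = False

    def _power_on_self_test() -> None:
        """Known-answer test executed when the module is loaded."""
        global _post_passed
        expected = "ba7816bf8f01cfea414140de5dae2223b00361a396177a9cb410ff61f20015ad"
        _post_passed = hashlib.sha256(b"abc").hexdigest() == expected

    # Runs at import time -- the "default entry point" -- so no application or
    # operator intervention is needed to trigger the power-on self-tests.
    _power_on_self_test()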

Implementation Guidance 3.5

All services need to be documented in your security policy, even the non-approved ones.  Each service needs to be listed individually.  Some services can be documented externally, but those need to be publicly accessible (via URL or similar).

Operational Environment

This is defined as the OS and platform [Ed note: emphasis mine].

Today, processors are becoming nonstandard, so NIST can no longer just consider the OS alone and needs to reinvigorate the intent seen in the original document.  This gets more complicated with virtual machines (VMs) in the mix.

What they used to call a GPC (general purpose computer), they now refer to as having Special Processing Capability (SPC), because of the cryptography being added to the processors.

What do they mean by SPC? Extended calls to enhance cryptographic algorithms without a full implementation. Such processors include Intel, ARM and SPARC.

If the module can demonstrate independence of SPC, then it won't necessarily be considered a hybrid implementation.  If the lab can show this, even though an implementation can leverage an SPC, they may still be considered a software or firmware implementation.

Questions

An audience member showed irritation that the new IG does not give time to implement (i.e., no grace period). Mr. Randy Easter reiterated that this is not a new requirement, but something that was formerly missed by the testing that the labs did. They want to nip these lapses in the bud and get people back on track - essentially correcting assumptions that CMVP was making about what the labs were testing. There are some changes/clarifications on this lack of grace period that were sent to the labs last night (not sure what's in that, but the labs all seem happy and are anxiously checking their mail).

Another question was about why the very specific platform is required on the certificate. Mr. Easter noted that this is due to how dependent entropy is on the underlying platform - this becomes important to test and verify. This is frustrating for Android developers. Mr. Easter noted that most consumers, though, are happy to use the same module on slight variations of platforms (for example, different releases of Windows on slightly different laptop hardware - too many variations to actually test).

Mr. Schaffer noted that certificates do not need to go down to the patch level, unless the vendor requests it.



ICMC: CAVP/CMVP Status - What's Next?

We were lucky to have three panelists from NIST: Sharon Keller, Randall Easter, and Carolyn French, so we could hear about what they are doing straight from the horse's mouth, as it were. This is a rare opportunity for vendors to get to interact directly with members of these validation programs.

Ms. Keller, CAVP - NIST, explained a bit of the history of this body. CMVP and CAVP used to be one group, but they were separated in 2003, as their tasks could easily be broken up. As noted yesterday, the validations done by the CAVP are much more black and white - you've either passed the test suite, or you've failed. The CAVP has automated this as much as possible, which also contributes to the speed.

In addition to these validations, CAVP writes the tests, provides documentation and guidance and provides clarification, as needed.  They also keep an up to date webpage that lists algorithm implementations.

The rate of incoming validations seems to be growing almost exponentially! Already this year, they've issued more algorithm certificates than in all of 2012. This year, they added new tests for algorithms like SHA-512/t.

Mr. Easter said that NIST had originally planned to do a follow-on conference to their 2003 conference in 2006, once FIPS-140-3 was finalized.... oops!

The original ISO/IEC 19790 (March 2006) was a copy of FIPS-140-2, the latest draft contains improvements that were hoped to be in FIPS-140-3.

Because FISMA requires that the US Government use validated crypto, these standards have become very important.

Vendors should make sure they reference the Implementation Guidance (IG). Mr. Easter noted that the IG, in their opinion, does not contain any new "shalls", but merely clarification of existing requirements. Though, asking many vendors here, apparently the original document was so murky that these IGs seem like completely new requirements. Now that FIPS-140-2 is so old (12 years now), the IG document is larger than the standard itself!

There are actually only 8 full time reviewers for the CMVP, and those same reviewers also have to do bi-annual audits of the 22 international labs, write Implementation Guidance, etc - you can see why they are busy!

Reports from labs show the greatest areas of difficulty for conformance are key management, physical security, self-tests and RNGs.  Nearly 50% of the vendor implementations have non-conformance issues (8% are algorithm implementation conformance issues).

Mr. Easter apologized for the queue and noted that this is not what they want either: they want the US Government to be able to get the latest software and hardware, too!

Currently, there are 200 reports in the queue that are actively being tracked and worked - again, 8 reviewers.  Is adding more reviewers the answer? How many people can they steal from the labs ;-)

What can you do to help? Labs should make sure the reports are complete, in good shape, with all necessary elements. Vendors should try to answer any questions as fast as possible. Help close the loop; make the jobs of the reviewers simple.

Unfortunately, work on FIPS-140-3 seems to have stalled in the group that evaluates new algorithms, and Mr. Easter would like NIST instead to adopt ISO 19790 as FIPS-140-3 (it's ready to go: fleshed out, with DTRs) - he's asking us to help pressure NIST to get this standard adopted.



ICMC: Welcome and Plenary Session

Congratulations to Program Chair Fiona Pattinson, atsec information security, for putting together such an interesting program for this inaugural International Cryptographic Module Conference. She has brought together an amazingly diverse mix of vendors, consultants, labs and NIST. People from all over the world are here.

Our first keynote speaker was Charles H. Romine, Director, Information Technology Laboratory, NIST. Mr. Romine started off his talk by noting how important integrity and security are to his organization. Because of this, based on community feedback, they've reopened review of algorithms and other documents. Being open and transparent is critical to his organization.

NIST is reliant on their industry partners for things like coming up with SHA3.  They are interested in knowing what is working for their testing programs and what is not working, hopefully they will get a lot of great feedback at this conference.

Because these validations are increasingly valuable in the industry, demand for reviews has gone up significantly. How can NIST keep the quality of reviews up while still meeting the demand? Mr. Romine is open to feedback from us all.

Our next keynote was Dr. Bertrand du Castel, Schlumberger Fellow and Java Card Pioneer, titled "Do Cryptographic Modules have a Moat?"

Dr. du Castel asks: is the problem with cryptography simply key exchange? Or is it really a trust issue? With things like Bitcoin (now taxed in Germany) and PayPal everywhere on the Internet, it should be obvious how important trust is. He walked us through many examples, using humorous anecdotes to demonstrate that the importance of trust is growing and growing.



Tuesday, September 24, 2013

ICMC: Introduction to FIPS-140-2

Presented by Steve Weingart, Cryptographic & Security Testing Laboratory Manager, at atsec information security.

While I'm not new to FIPS-140-2, as I'm here in Gaithersburg, MD for the inaugural International Cryptographic Module Conference, it's always good to get a refresher from an expert. Please note: these are my notes from this talk and should not be taken as gospel on the subject. If you need a FIPS-140 validation - you should probably engage a professional :-)

Since the passage of the Federal Information Security Management Act (FISMA) of 2002, the US Federal Government can no longer waive FIPS requirements. Vendors simply need to comply with the various FIPS standards, including FIPS-140-2 (the FIPS standard that is relevant to cryptography and security modules). Financial institutions and others that care about third party evaluations also want to see this standard implemented.

Technically, a non-compliant agency could lose their funding.

FIPS-140 is a joint program between US NIST and Canadian CSEC (aka CSE).

All testing for FIPS-140 is done by accredited labs (accredited under the National Voluntary Laboratory Accreditation Program). Labs, though, cannot perform consulting on the design of the cryptographic module - it could be seen as a conflict of interest if they were designing and testing the same module. They can use content you provide to create your Security Policy and Finite State Machine (FSM), as those documents have to be in a very specific format that individual vendors will likely have trouble producing their first time out.

A cryptographic module is defined by its security functionality and a well-defined boundary, and must have at least one approved security function. A module can be hardware, software, firmware, a software hybrid, or a firmware hybrid.

Security functionality can be: symmetric/asymmetric key cryptography, hashing, message authentication, RNG, or key management.

The testing lab makes sure that you've implemented the approved security functions correctly to protect sensitive information, makes sure you cannot change the module after it's been validated (integrity requirement), and that you prevent unauthorized use of the module.

Your users need to be able to query the module to see if it is performing correctly and to see the operational state it is in.

FIPS-140 has been around, originally as Federal Standard 1027, since about 1982. Of course, as technology changes, the standard gets out of date. FIPS-140-2 came out in 2001, with some change notices in 2002. FIPS-140-3 has had many false starts. A large quantity of implementation guidance has come out (the IG is approaching, if not overtaking, the size of the initial FIPS-140-2 document).

Some brief clarifications: the -<number> on the standard refers to the version.  The first was FIPS-140, next FIPS-140-1, and the final currently adopted one is FIPS-140-2.

Each of those versions has levels that you can be validated at. Level one is the "easiest"; level four is the hardest (available to hardware only).

FIPS-140-3 has had two rounds of public drafts, there were over 1200 comments, but it seems there is just one person still working on this draft.  In addition, there are not any Derived Test Requirements (DTR) so the labs cannot even consider writing tests for the standard.

There is a new version of "FIPS-140-3" from ISO (19790, released in 2012), essentially competing with the NIST draft. Though, the original goal was to have the ISO standard be the same document, just as an international standard.

Right now, you can validate against the ISO standard in other countries - not in the US or Canada, though.  If you used an international body to validate your module against the ISO standard, it would not get you through the door to US or Canadian Government customers.

It's up to NIST and CSEC to pick one of these, create transition guidance and testing information to let vendors and labs move forward.

ISO 19790 has many improvements, but if you implement them, you will not pass FIPS-140-2 testing. For example, ISO 19790 allows lazy POST (only testing when needed), but FIPS-140-2 requires POST of the entire boundary any time any part of the boundary is used.

FIPS-140-2 has four levels, and it doesn't matter if 99% of your module meets all of the items required for a higher level (like Level 2) - if 1% only meets Level 1, you cannot be validated at the higher level.

The ISO document doesn't point to specific EAL Common Criteria levels, helping to alleviate the chicken-and-egg circular dependency for FIPS-140 levels. For example, FIPS-140-2 Level 2 requires an EAL-validated OS underneath, and the EAL validation requires a FIPS-140-2 validated crypto module.

The finite state model means that you can only be in one state at one time. That means you cannot be generating key material at the same time you're performing cryptographic operations in another thread. This can be very difficult to accomplish with modern day multi-threaded programs - the labs that do the validation review source code, too, so no sneaking around them!
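A toy sketch of what a finite-state, one-state-at-a-time module can look like in a threaded program follows; the state names and the lock-based serialization are my own illustration, not anything prescribed by FIPS 140-2.

    import os
    import threading
    from enum import Enum, auto

    class State(Enum):
        IDLE = auto()
        KEY_GENERATION = auto()
        CRYPTO_OPERATION = auto()
        ERROR = auto()

    class ToyModule:
        """Exactly one state at a time; the lock keeps one thread from
        generating keys while another is mid-way through a crypto operation."""
        def __init__(self) -> None:
            self._lock = threading.Lock()
            self._state = State.IDLE

        def generate_key(self) -> bytes:
            with self._lock:                 # serialize state transitions
                self._state = State.KEY_GENERATION
                try:
                    return os.urandom(32)    # stand-in for DRBG-backed generation
                finally:
                    self._state = State.IDLE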

Mr. Weingart keeps reminding us that FIPS-140 is a validation, not an evaluation.

There were some good questions about how things like OASIS KMIP interact with the requirements for FIPS-140-2 cryptographic key management. The general thought is they should harmonize, but KMIP doesn't seem to be referenced.

Around key generation, RNG and entropy generation is very important, and with the current news - this is being heavily scrutinized by NIST right now.  Simply using /dev/random (without knowing anything about its entropy sources) is not sufficient. Of course, when you're also the provider of /dev/random, you have a bit more knowledge. We should expect further guidance in this area.

Cryptographic modules have to complete power-on self-tests (POSTs) to ensure that the module is functioning properly - no shortcuts allowed! (again, your code will be reviewed - shortcuts will be seen!). There are also some conditional self-tests - tests run when a certain condition occurs, for example, generating a key.

If any of these tests fail, you must not perform *any* cryptographic operations until the failure state has been cleared and all POSTs are rerun. That is, even if only your POSTs for SHA-1 fail, you cannot provide AES or ECC either.
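In other words, a single failed self-test has to poison every service. Here's a sketch of that error-state gating; the class and method names are invented for illustration.

    class SelfTestError(Exception):
        pass

    class ToyCryptoModule:
        def __init__(self) -> None:
            self.error_state = True
            self.run_all_posts()

        def run_all_posts(self) -> None:
            # Known-answer tests for *every* approved algorithm would run here;
            # the error state is cleared only if all of them pass.
            self.error_state = False

        def _require_healthy(self) -> None:
            if self.error_state:
                raise SelfTestError("module is in an error state; no crypto services")

        def aes_encrypt(self, key: bytes, data: bytes) -> bytes:
            # Even if only the SHA-1 self-test failed, this call must refuse to work.
            self._require_healthy()
            raise NotImplementedError("illustrative stub")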

If you make any additional "extra credit" security claims, like "we protect against timing attacks", that either needs to be verified by the lab, or a disclaimer needs to be placed in your security policy.

Implementation Guidance

There is a lot of new implementation guidance coming up, fast and furiously.

The most contentious one is the requirement to run power-on self-tests all the time (whether in FIPS mode or not). This can be problematic, particularly for something like a general purpose OS or smartcards. For things that may not have been designed for this, or that just don't have great performance (like smartcards), this can make your device or system unusable for customers that do not need FIPS-140 validated hardware/software.

IG G.14, for example, has some odd things on algorithms, like RSA 4096 will be removed from approval, but RSA 2048 won't be. This seems to be related to performance issues, according to discussions in the room, but that seems a harsh punishment for perf issues. Check SP 800-131A for more details about what will and will not be allowable going forward.

Cryptographic Algorithm Validation Program

Any algorithms used in approved mode need to be validated to make sure they are operating correctly. This step is required before you can submit to the CMVP (Cryptographic Module Validation Program). This is a very mechanical process - you have either passed the algorithm tests, or you haven't. CAVP turnaround to issue certificates for your algorithms is typically very quick, because there isn't wiggle room or room for interpretation.

You will work with a lab (there are 22 accredited ones currently) on this, and a consulting firm that will help you work with the lab and work on your documentation, design, architecture, etc. You can hire another lab as a consultant; it's just that your testing lab cannot consult for you (back to the rule that they cannot test what they designed).

Preparing yourself for validation

Read FIPS-140-2, the implementation guidance, the SP 800 series documents, etc.

Take training, where available and possible.

Enlist help on your design and architecture as early as possible, get a readiness assessment.

You can do your algorithm testing early, find and fix problems early in your development.

Iterate as needed (if this is your first time, you'll almost certainly have to iterate to get this right).



Friday, September 20, 2013

PKCS 11 Technical Committee Face to Face

This week, Oracle hosted the OASIS PKCS 11 Technical Committee's face to face meeting on our Santa Clara campus.

It was a very productive two days; I believe we got through some of the final issues for the next revision of the standard (v2.40). Work won't finish there, it seems, as all of the committee members are excited about what we can do in the future to make PKCS 11 an even more robust interface for providing cryptographic services to applications and utilities.

As most of you already know, Solaris's user level Cryptographic Framework is a PKCS 11 API, so we're very excited to see the standard progress and evolve.

As co-chair of the committee, I am so proud of everyone's hard work in dusting off the standard and doing what was necessary to quickly converge and get the next revision ready to go!

The standard moved from RSA to OASIS earlier this year.

Thursday, September 19, 2013

ICMC: Presenting on Software in Silicon: Crypto-Capable Processors

I will be co-presenting on crypto-capable processors at the inaugural International Cryptographic Module Conference next week, along with Dave Weaver (SPARC architect), and Wajdi Feghali (Intel).

We'll be talking about the evolution of cryptographic instructions in general purpose processors, using Solaris as the case-study example.

Are you interested in cryptographic modules, FIPS-140 validations, or crypto stuff in general? You should register for next week's ICMC conference in Gaithersburg, MD! There are a few days left to register.