Regular expressions and recommended practices

Whenever a security person comes across a vulnerability report, one of the first steps is to ensure that the reported problem is actually a vulnerability. Usually, the issue falls into well-known and studied categories and this step is done rather quickly. Occasionally, however, one comes across bugs where this initial triage is a bit more problematic. This blog post is about such an issue, which will ultimately lead us to the concept of “recommended practice”.

What happened?

On July 31st 2014, Maksymilian Arciemowicz of cxsecurity reported that “C++11 [is] insecure by default”, filing upstream GCC bugs 61601 and 61582. LLVM/Clang’s libc++ didn’t dodge the bullet either; more details are available in LLVM bug 20291.

Not everybody can be bothered to go through so many links, so here is a quick summary: C++11, a new C++ standard approved in 2011, introduced support for regular expressions. Regular expressions (regexes from here on) are an amazingly powerful processing tool – but one that can become extremely complex to handle correctly. Not only can the regex itself become hideous and hard to understand, but the way the regex engine processes it can also lead to all sorts of problems. If certain complex regexes are passed to a regex engine, the engine can quickly exceed the available CPU and memory constraints while trying to process the expression, possibly leading to a catastrophic event, which some call ReDoS, a “regular expression denial of service”.

This is exactly what Maksymilian Arciemowicz exploits: he passes specially crafted regexes to the regex engines provided by the C++11 implementations of GCC and Clang, causing them to use a huge amount of CPU resources or even crash (e.g. due to extreme recursion, which exhausts the available stack space, leading to a stack overflow).
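To illustrate the class of problem (rather than the specific GCC/Clang bugs), here is a minimal sketch of catastrophic backtracking, using Python’s re module purely for illustration; the nested quantifier forces a backtracking engine to try an exponential number of partial matches before it can report a failure:

import re

# A pathological pattern: nested quantifiers such as (a+)+ make a
# backtracking engine explore exponentially many ways to split the
# input once the trailing 'b' prevents an overall match.
pattern = re.compile(r'^(a+)+$')

# Each additional 'a' roughly doubles the work; around 30 characters
# this already burns a very long time of pure CPU.
subject = 'a' * 28 + 'b'
print(pattern.match(subject))  # eventually prints None, after heavy CPU use

The same idea, fed as an untrusted pattern into a regex engine such as the C++11 std::regex implementations, is what drives the reported CPU exhaustion and crashes.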

Is it a vulnerability?

CPU exhaustion and crashes are often good indicators for a vulnerability. Additionally, the C++11 standard even suggests error return codes for the exact problems triggered, but the implementations at hand fail to catch these situations. So, this must be a vulnerability, right? Well, this is the point where opinions differ. In order to understand why, it’s necessary to introduce a new concept:

The “recommended practice” concept

“Recommended practice” is essentially a mix of common sense and dos and don’ts. A huge problem is that these practices are informal, so there’s no ultimate guide on the subject, which leaves them open to personal experience and opinion. Nevertheless, the vast majority of the programming community should know about the dangers of regular expressions; dangers just like the issues Maksymilian Arciemowicz reported in GCC/Clang. Consequently, passing arbitrary, unfiltered regexes from an untrusted source to the regex engine should be considered a recommended-practice case of “don’t do this; it’ll blow up in your face big time”.

To further clear this up: if an application uses a perfectly reasonable, well-defined regex and the application crashes because the regex engine choked when processing certain specially crafted input, it’s (most likely) a vulnerability in the regex engine. However, if the application uses a regex thought to be well defined, efficient and trusted, but that regex turns out to e.g. take overly long to process certain specially crafted input while other, more efficient regexes would do the job just fine, it’s (probably) a vulnerability in the application. But if untrusted regexes are passed to the regex engine without somehow filtering them for sanity first (which is incredibly hard to do for anything but the simplest of regexes, so it is better avoided), that violates what a lot of people believe to be recommended practice, and thus it is often not considered to be a strict vulnerability in the regex engine.

So, next time you feel inclined to pass regexes verbatim to the engine, you’ll hopefully remember that it’s not a good idea and refrain from doing so. If you have done so in the past, you should probably go ahead and fix it.

Don’t judge the risk by the logo

It’s been almost a year since the OpenSSL Heartbleed vulnerability, a flaw that started the trend of branded vulnerabilities, changing the way security vulnerabilities affecting open-source software are reported and perceived. Vulnerabilities are found and fixed all the time, and just because a vulnerability gets a name and a fancy logo doesn’t mean it poses a real risk to users.


So let’s take a tour through the last year of vulnerabilities, chronologically, to see what issues got branded and which issues actually mattered for Red Hat customers.

“Heartbleed” (April 2014) CVE-2014-0160

Heartbleed was an issue that affected newer versions of OpenSSL. It was a very easy to exploit flaw, with public exploits released soon after the issue was public. The exploits could be run against vulnerable public web servers resulting in a loss of information from those servers. The type of information that could be recovered varied based on a number of factors, but in some cases could include sensitive information. This flaw was widely exploited against unpatched servers.

For Red Hat Enterprise Linux, only customers running version 6.5 were affected as prior versions shipped earlier versions of OpenSSL that did not contain the flaw.

Apache Struts 1 Class Loader RCE (April 2014) CVE-2014-0114

This flaw allowed attackers to manipulate exposed ClassLoader properties on a vulnerable server, leading to remote code execution. Exploits have been published but they rely on properties that are exposed on Tomcat 8, which is not included in any supported Red Hat products. However, some Red Hat products that ship Struts 1 did expose ClassLoader properties that could potentially be exploited.

Various Red Hat products were affected and updates were made available.

OpenSSL CCS Injection (June 2014) CVE-2014-0224

After Heartbleed, a number of other OpenSSL issues got attention. CCS Injection was a flaw that could allow an attacker to decrypt secure connections. This issue is hard to exploit as it requires a man in the middle attacker who can intercept and alter network traffic in real time, and as such we’re not aware of any active exploitation of this issue.

Most Red Hat Enterprise Linux versions were affected and updates were available.

glibc heap overflow (July 2014) CVE-2014-5119

A flaw was found in the glibc library: an attacker who is able to make an application call a specific function with a carefully crafted argument could achieve arbitrary code execution. An exploit for 32-bit systems was published (although this exploit would not work as published against Red Hat Enterprise Linux).

Some Red Hat Enterprise Linux versions were affected, in various ways, and updates were available.

JBoss Remoting RCE (July 2014) CVE-2014-3518

A flaw was found in JBoss Remoting where a remote attacker could execute arbitrary code on a vulnerable server. A public exploit is available for this flaw.

Red Hat JBoss products were only affected by this issue if JMX remoting is enabled, which is not the default. Updates were made available.

“Poodle” (October 2014) CVE-2014-3566

Continuing with the interest in OpenSSL vulnerabilities, Poodle was a vulnerability affecting the SSLv3 protocol. Like CCS Injection, this issue is hard to exploit as it requires a man in the middle attack. We’re not aware of active exploitation of this issue.

Most Red Hat Enterprise Linux versions were affected and updates were available.

“ShellShock” (September 2014) CVE-2014-6271

The GNU Bourne Again shell (Bash) is a shell and command language interpreter used as the default shell in Red Hat Enterprise Linux. Flaws were found in Bash that could allow remote code execution in certain situations. The initial patch to correct the issue was not sufficient to block all variants of the flaw, causing distributions to produce more than one update over the course of a few days.

Exploits were written to target particular services. Later, malware circulated to exploit unpatched systems.

Most Red Hat Enterprise Linux versions were affected and updates were available.

RPM flaws (December 2014) CVE-2013-6435, CVE-2014-8118

Two flaws were found in the package manager RPM. Either could allow an attacker to modify signed RPM files in such a way that they would execute code chosen by the attacker during package installation. We know CVE-2013-6435 is exploitable, but we’re not aware of any public exploits for either issue.

Various Red Hat Enterprise Linux releases were affected and updates were available.

“Turla” malware (December 2014)

Reports surfaced of a trojan package targeting Linux, suspected of being part of an “advanced persistent threat” campaign. Our analysis showed that the trojan was not sophisticated, was easy to detect, and was unlikely to be part of such a campaign.

The trojan does not use any vulnerability to infect a system; its introduction onto a system would be via some other mechanism. Therefore it does not have a CVE name and no updates are applicable for this issue.

“Grinch” (December 2014)

An issue was reported which gained media attention, but was actually not a security vulnerability. No updates were applicable for this issue.

“Ghost” (January 2015) CVE-2015-0235

A bug was found affecting certain function calls in the glibc library. A remote attacker able to make an application call an affected function with a crafted argument could execute arbitrary code. While a proof-of-concept exploit is available, not many applications were found to be vulnerable in a way that would allow remote exploitation.

Red Hat Enterprise Linux versions were affected and updates were available.

“Freak” (March 2015) CVE-2015-0204

It was found that OpenSSL clients accepted EXPORT-grade (insecure) keys even when the client had not initially asked for them. This could be exploited using a man-in-the-middle attack, which could downgrade to a weak key, factor it, then decrypt communication between the client and the server. Like Poodle and CCS Injection, this issue is hard to exploit as it requires a man in the middle attack. We’re not aware of active exploitation of this issue.

Red Hat Enterprise Linux versions were affected and updates were available.

Other issues of customer interest

We can also get a rough guide to which issues are getting the most attention by looking at the number of page views on the Red Hat CVE pages. While the top views were for the issues above, the following were also of increased interest:

  • A kernel flaw (May 2014) CVE-2014-0196, allowing local privilege escalation. A public exploit exists for this issue but does not work as published against Red Hat Enterprise Linux.
  • “BadIRET”, a kernel flaw (December 2014) CVE-2014-9322, allowing local privilege escalation. Details on how to exploit this issue have been discussed, but we’re not aware of any public exploits for this issue.
  • A flaw in BIND (December 2014), CVE-2014-8500. A remote attacker could cause a denial of service against a BIND server being used as a recursive resolver.  Details that could be used to craft an exploit are available but we’re not aware of any public exploits for this issue.
  • Flaws in NTP (December 2014), including CVE-2014-9295. Details that could be used to craft an exploit are available.  These serious issues had a reduced impact on Red Hat Enterprise Linux.
  • A flaw in Samba (February 2015) CVE-2015-0240, where a remote attacker could potentially execute arbitrary code as root. Samba servers are likely to be internal and not exposed to the internet, limiting the attack surface. No exploits that lead to code execution are known to exist, and some analyses have shown that creation of such a working exploit is unlikely.

Conclusion


We’ve shown in this post that, for the last year of vulnerabilities affecting Red Hat products, the issues that matter and the issues that got branded do overlap, but they certainly don’t closely match. Just because an issue is given a name, logo, and press attention does not mean it’s of increased risk. We’ve also shown there were some vulnerabilities of increased risk that did not get branded.

At Red Hat, our dedicated Product Security team analyse threats and vulnerabilities against all our products every day, and provide relevant advice and updates through the customer portal. Customers can call on this expertise to ensure that they respond quickly to address the issues that matter, while avoiding being caught up in a media whirlwind for those that don’t.

JOSE – JSON Object Signing and Encryption

Federated Identity Management has become very widespread in recent years – in addition to enterprise deployments, a lot of popular web services allow users to carry their identity over multiple sites. Social networking sites especially are in a good position to drive federated identity management, as they have both a critical mass of users and the incentive to become identity providers. As users move away from a single device to using multiple portable devices, there is constant pressure to make the federated identity protocols simpler (with respect to complexity), more user friendly (especially for developers) and easier to implement (on a wide range of devices and platforms).

Unfortunately, older technologies are either deeply rooted in enterprise environments and unsuitable for the Internet (Kerberos), or are based on more or less complicated data serialization (e.g. OpenID 2.0 or SAML). Canonicalization, whitespace handling and representation of binary data are among the challenges that various serialization formats face.

OpenID Connect

Another approach to the problem of serializing structured data in the context of identity management is to use the already widespread and simple JSON in combination with base64url encoding of the data. The advantage in the context of federated identity management is obvious – while being sufficiently universal, both have almost native support in web clients. Getting this level of simplicity and interoperability is very compelling, despite some shortcomings, e.g. in bandwidth efficiency.

This approach is taken by the upcoming OpenID Connect standard, the third revision of the OpenID protocol. The protocol describes a method of providing identity-based claims from an identity provider to a relying party, with the end user being authenticated by the identity provider and authorizing the request. The communication between the client, the relying party and the identity provider follows the OAuth 2.0 protocol, just like in the previous version of the protocol, OpenID 2.0. Claims have the format of a JSON key-value hash, and the task of protecting their integrity, and possibly confidentiality, is addressed by the JSON Object Signing and Encryption (JOSE) standard.

JOSE

The standard provides a general approach to signing and encryption of any content, not necessarily JSON. However, it is deliberately built on JSON and base64url to be easily usable in web applications. Also, while it is used in OpenID Connect, it can serve as a building block in other protocols.

JOSE is still an upcoming standard, but final revisions should be available shortly. It consists of several upcoming RFCs:

  • JWA – JSON Web Algorithms, describes cryptographic algorithms used in JOSE
  • JWK – JSON Web Key, describes format and handling of cryptographic keys in JOSE
  • JWS – JSON Web Signature, describes producing and handling signed messages
  • JWE – JSON Web Encryption, describes producing and handling encrypted messages
  • JWT – JSON Web Token, describes representation of claims encoded in JSON and protected by JWS or JWE

JWK

JSON Web Key is a data structure representing a cryptographic key with both the cryptographic data and other attributes, such as key usage.

{ 
  "kty":"EC",
  "crv":"P-256",
  "x":"MKBCTNIcKUSDii11ySs3526iDZ8AiTo7Tu6KPAqv7D4",
  "y":"4Etl6SRW2YiLUrN5vfvVHuhp7x8PxltmWWlbbM4IFyM",
  "use":"enc",
  "kid":"1"
}

The mandatory “kty” key type parameter describes the cryptographic algorithm family associated with the key. Depending on the key type, other parameters may be used – as shown in the example, an elliptic curve key contains the “crv” parameter identifying the curve, the “x” and “y” coordinates of the point, an optional “use” parameter denoting the intended usage of the key and “kid” as a key ID. The specification currently describes three key types: “EC” for Elliptic Curve, “RSA” for, well, RSA, and “oct” for an octet sequence denoting a shared symmetric key.
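To make the encoding concrete, here is a minimal sketch (in Python, purely for illustration) that decodes the base64url-encoded “x” and “y” coordinates of the example key above into integers; a real application would hand these values to a cryptographic library rather than use them directly:

import base64
import json

def b64url_to_int(text):
    # JWK encodes big-endian integers as base64url without padding
    padding = "=" * (-len(text) % 4)
    return int.from_bytes(base64.urlsafe_b64decode(text + padding), "big")

jwk = json.loads('{"kty":"EC","crv":"P-256",'
                 '"x":"MKBCTNIcKUSDii11ySs3526iDZ8AiTo7Tu6KPAqv7D4",'
                 '"y":"4Etl6SRW2YiLUrN5vfvVHuhp7x8PxltmWWlbbM4IFyM",'
                 '"use":"enc","kid":"1"}')

print(jwk["kty"], jwk["crv"], jwk["use"], jwk["kid"])
print("x =", b64url_to_int(jwk["x"]))
print("y =", b64url_to_int(jwk["y"]))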

JWS

The JSON Web Signature standard describes the process of creating and validating a data structure representing a signed payload. As an example, take the following string as a payload:

'{"iss":"joe",
 "exp":1300819380,
 "http://example.com/is_root":true}'

Incidentally, this string contains JSON data, but that is not relevant for the signing procedure – it might as well be any data. Before signing, the payload is always base64url encoded:

eyJpc3MiOiJqb2UiLA0KICJleHAiOjEzMDA4MTkzODAsDQogImh0dHA6Ly9leGFtcGxlLm
NvbS9pc19yb290Ijp0cnVlfQ
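For reference, base64url is ordinary base64 with a URL-safe alphabet and the trailing ‘=’ padding removed. A minimal sketch of the encoding in Python (reproducing the exact string above additionally depends on copying the CR/LF whitespace of the example payload byte for byte):

import base64

def b64url_encode(data: bytes) -> str:
    # URL-safe base64 with the '=' padding stripped, as used throughout JOSE
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode("ascii")

def b64url_decode(text: str) -> bytes:
    # restore the padding before decoding
    return base64.urlsafe_b64decode(text + "=" * (-len(text) % 4))

payload = ('{"iss":"joe",\r\n'
           ' "exp":1300819380,\r\n'
           ' "http://example.com/is_root":true}')
print(b64url_encode(payload.encode("utf-8")))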

Additional parameters are associated with each payload. A required parameter is “alg”, which denotes the algorithm used for generating the signature (one of the possible values is “none” for unprotected messages). The parameters are included in the final JWS in either a protected or an unprotected header. The data in the protected header is integrity protected and base64url encoded, whereas the unprotected header carries human-readable associated data.

As an example, the protected header will contain the following data:

{"alg":"ES256"}

which in base64url encoding looks like this:

eyJhbGciOiJFUzI1NiJ9

The “ES256” here is the identifier for the ECDSA signature algorithm using the P-256 curve and the SHA-256 digest algorithm.

The unprotected header can contain a key ID parameter:

{"kid":"e9bc097a-ce51-4036-9562-d2ade882db0d"}

The base64url encoded protected header and payload are concatenated with a ‘.’ to form the raw signing input, which is fed to the signature algorithm to produce the final signature.
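The example in this post uses “ES256”, which requires a dedicated ECDSA implementation; to keep a sketch self-contained, here is the same process with the HMAC-based “HS256” algorithm and an assumed shared secret, using only the Python standard library:

import base64
import hashlib
import hmac
import json

def b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode("ascii")

secret = b"an-example-shared-secret"   # assumed key, for illustration only
protected = b64url(json.dumps({"alg": "HS256"}).encode("utf-8"))
payload = b64url(b'{"iss":"joe","exp":1300819380}')

# The signing input is the protected header and the payload joined by '.'
signing_input = (protected + "." + payload).encode("ascii")
signature = b64url(hmac.new(secret, signing_input, hashlib.sha256).digest())

# Compact serialization: header.payload.signature
print(protected + "." + payload + "." + signature)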

Finally, the JWS output is serialized using either the JSON or the Compact serialization. Compact serialization is a simple concatenation of the base64url encoded protected header, payload and signature, separated by periods (‘.’). JSON serialization is a human-readable JSON object, which for the example above would look like this:

{
  "payload": "eyJpc3MiOiJqb2UiLA0KICJleHAiOjEzMDA4MTkzODAsDQogImh0dHA6
              Ly9leGFtcGxlLmNvbS9pc19yb290Ijp0cnVlfQ",
  "protected": "eyJhbGciOiJFUzI1NiJ9",
  "header": {"kid":"e9bc097a-ce51-4036-9562-d2ade882db0d"},
  "signature": "DtEhU3ljbEg8L38VWAfUAqOyKAM6-Xx-F4GawxaepmXFCgfTjDxw5djxLa8IS
                lSApmWQxfKTUJqPP3-Kg6NU1Q"
}

Such a process for generating signatures is pretty straightforward, yet it still supports some advanced use cases, such as multiple signatures with separate headers.

JWE

JSON Web Encryption follows the same logic as JWS with a few differences:

  • by default, a new content encryption key (CEK) should be generated for each message. This key is used to encrypt the plaintext and is attached to the final message. The recipient’s public key or a shared key is used only to encrypt the CEK (unless direct encryption is used, see below).
  • only AEAD (Authenticated Encryption with Associated Data) algorithms are defined in the standard, so users do not have to think about how to combine JWE with JWS.

Just like with JWS, the header data of a JWE object can be transmitted in an integrity protected, unprotected or per-recipient unprotected header. The final JSON serialized output then has the following structure:

{
  "protected": "<integrity-protected header contents>",
  "unprotected": <non-integrity-protected header contents>,
  "recipients": [
    {"header": <per-recipient unprotected header 1 contents>,
     "encrypted_key": "<encrypted key 1 contents>"},
     ...
    {"header": <per-recipient unprotected header N contents>,
     "encrypted_key": "<encrypted key N contents>"}],
  "aad":"<additional authenticated data contents>",
  "iv":"<initialization vector contents>",
  "ciphertext":"<ciphertext contents>",
  "tag":"<authentication tag contents>"
}

The CEK is encrypted for each recipient separately, possibly using different algorithms. This gives us the ability to encrypt a message to recipients holding different kinds of keys, e.g. an RSA key, a shared symmetric key and an EC key.

The two algorithms used need to be specified as header parameters. The “alg” parameter specifies the algorithm used to protect the CEK, while the “enc” parameter specifies the algorithm used to encrypt the plaintext with the CEK as the key. Needless to say, “alg” can have the value “dir”, which marks direct use of the shared key instead of a CEK.

As an example, assume we have the RSA public key of the first recipient and share a symmetric key with the second recipient. The “alg” parameter for the first recipient will have the value “RSA1_5”, denoting the RSAES-PKCS1-V1_5 algorithm, and “A128KW”, denoting AES 128 Key Wrap, for the second recipient, along with key IDs:

{"alg":"RSA1_5","kid":"2011-04-29"}

and

{"alg":"A128KW","kid":"7"}

These algorithms will be used to encrypt the content encryption key (CEK) for each of the recipients. After the CEK is generated, we use it to encrypt the plaintext with AES 128 in CBC mode with HMAC SHA-256 for integrity:

{"enc":"A128CBC-HS256"}

We can protect this information by putting it into a protected header, which, when base64url encoded, will look like this:

eyJlbmMiOiJBMTI4Q0JDLUhTMjU2In0

This data will be fed as associated data to the AEAD encryption algorithm and will therefore be protected by the final authentication tag.

Putting this all together, the resulting JWE object will look like this:

{
  "protected": "eyJlbmMiOiJBMTI4Q0JDLUhTMjU2In0",
  "recipients":[
    {"header": {"alg":"RSA1_5","kid":"2011-04-29"},
     "encrypted_key":
       "UGhIOguC7IuEvf_NPVaXsGMoLOmwvc1GyqlIKOK1nN94nHPoltGRhWhw7Zx0-
        kFm1NJn8LE9XShH59_i8J0PH5ZZyNfGy2xGdULU7sHNF6Gp2vPLgNZ__deLKx
        GHZ7PcHALUzoOegEI-8E66jX2E4zyJKx-YxzZIItRzC5hlRirb6Y5Cl_p-ko3
        YvkkysZIFNPccxRU7qve1WYPxqbb2Yw8kZqa2rMWI5ng8OtvzlV7elprCbuPh
        cCdZ6XDP0_F8rkXds2vE4X-ncOIM8hAYHHi29NX0mcKiRaD0-D-ljQTP-cFPg
        wCp6X-nZZd9OHBv-B3oWh2TbqmScqXMR4gp_A"},
    {"header": {"alg":"A128KW","kid":"7"},
     "encrypted_key":
        "6KB707dM9YTIgHtLvtgWQ8mKwboJW3of9locizkDTHzBC2IlrT1oOQ"}],
  "iv": "AxY8DCtDaGlsbGljb3RoZQ",
  "ciphertext": "KDlTtXchhZTGufMYmOYGS4HffxPSUrfmqCHXaI9wOGY",
  "tag": "Mz-VPPyU4RlcuYv1IwIvzw"
}

JWA

JSON Web Algorithms defines the algorithms and their identifiers to be used in JWS and JWE. The three parameters that specify algorithms are “alg” for JWS, and “alg” and “enc” for JWE.

"enc": 
    A128CBC-HS256, A192CBC-HS384, A256CBC-HS512 (AES in CBC with HMAC), 
    A128GCM, A192GCM, A256GCM

"alg" for JWS: 
    HS256, HS384, HS512 (HMAC with SHA), 
    RS256, RS384, RS512 (RSASSA-PKCS-v1_5 with SHA), 
    ES256, ES384, ES512 (ECDSA with SHA), 
    PS256, PS384, PS512 (RSASSA-PSS with SHA for digest and MGF1)

"alg" for JWE: 
    RSA1_5, RSA-OAEP, RSA-OAEP-256, 
    A128KW, A192KW, A256KW (AES Keywrap), 
    dir (direct encryption), 
    ECDH-ES (EC Diffie Hellman Ephemeral+Static key agreement), 
    ECDH-ES+A128KW, ECDH-ES+A192KW, ECDH-ES+A256KW (with AES Keywrap), 
    A128GCMKW, A192GCMKW, A256GCMKW (AES in GCM Keywrap), 
    PBES2-HS256+A128KW, PBES2-HS384+A192KW, PBES2-HS512+A256KW 
    (PBES2 with HMAC SHA and AES keywrap)

At first look, the wealth of choice for “alg” in JWE is balanced by just two families of options for “enc”. Thanks to “enc” and “alg” being separate, the algorithms suitable for encrypting the cryptographic key and for encrypting the content can be defined separately. The AES Key Wrap scheme defined in RFC 3394 is the preferred way to protect a cryptographic key. The scheme uses a fixed IV value, which is checked after decryption, and provides integrity protection without making the encrypted key longer (by adding an IV and an authentication tag). But here’s a catch – while A128KW refers to the AES Key Wrap algorithm as defined in RFC 3394, the word “keywrap” in A128GCMKW is used in a more general sense, as a synonym for encryption, so it denotes simple encryption of the key with AES in GCM mode.

JWT

While the previous parts of JOSE provide general-purpose cryptographic primitives for arbitrary data, the JSON Web Token standard is more closely tied to OpenID Connect. A JWT object is simply a JSON hash of claims that is either signed with JWS or encrypted with JWE and serialized using the compact serialization. Beware of a terminological quirk – when a JWT is used as the plaintext in JWE or JWS, it is referred to as a nested JWT (rather than a signed or encrypted one).

The JWT standard defines claims – name/value pairs asserting information about a subject. The claims include:

  • “iss” to identify issuer of the claim
  • “sub” identifying subject of JWT
  • “aud” (audience) identifying intended recipients
  • “exp” to mark expiration time of JWT
  • “nbf” (not before) to mark time before which JWT must be rejected
  • “iat” (issued at) to mark time when JWT was created
  • “jti” (JWT ID) as unique identifier for JWT

While the standard mandates the meaning and format of these claims, all of them are optional in a valid JWT. This means applications can use any structure for a JWT that is not intended for public use, while for public JWTs the set of claims is defined and name collisions are prevented.
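To make the shape of a token concrete, here is a minimal sketch that builds an unsecured JWT (“alg”:“none”), whose compact serialization simply ends with an empty signature part; the claim values are assumptions for illustration, and a real deployment would instead sign the claims with JWS or encrypt them with JWE:

import base64
import json
import time

def b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode("ascii")

header = {"alg": "none"}                      # unsecured JWT, illustration only
claims = {
    "iss": "joe",                             # issuer
    "sub": "user-1234",                       # subject (assumed value)
    "aud": "https://relying-party.example",   # audience (assumed value)
    "iat": int(time.time()),                  # issued at
    "exp": int(time.time()) + 3600,           # expires in one hour
}

# Compact serialization: header.claims. (empty signature part for "none")
token = (b64url(json.dumps(header).encode("utf-8")) + "." +
         b64url(json.dumps(claims).encode("utf-8")) + ".")
print(token)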

The good

The fact that JOSE combines JSON and base64url encoding, making it simple and web friendly, is a clear win. Although we will definitely see JOSE adopted in the web environment first, it does have the ambition to become a more general purpose standard.

The design promotes secure choices, e.g. the use of a unique CEK per message, which makes users default to secure configurations while still giving them the option to use less secure methods (“dir” for encryption, “none” in JWS). Being a new standard, the authors did seize the opportunity to define only secure algorithms. This is certainly good, but as advances in cryptography weaken the algorithms, the ability to deprecate algorithms (with backwards compatibility always being an issue) will become more important in the future.

The bad

Despite the effort to keep the standard simple, some complexities inevitably slipped through. One obvious complexity comes from the different serializations. The JOSE standards support two serializations – JSON and Compact. JSON serialization is human readable and gives the users more freedom, as it supports some advanced features like multiple recipients. However, since a single recipient is a much more common case, the standard also defines a variant called flattened JSON serialization. In this variant, the parameters from nested fields (like “recipients”) are moved directly to the top-level JSON object, making multiple recipients impossible with flattened serialization.

The compact serialization is intended, according to the standard, for “space constrained environments”. The result of the serialization is a period separated concatenation of base64url encoded segments of the original JSON object. For example, for JWS the serialization is constructed as follows:

BASE64URL(UTF8(JWS Protected Header)) || '.' ||
BASE64URL(JWS Payload) || '.' ||
BASE64URL(JWS Signature)

Astute readers will notice that compact serialization further restricts the available features of JWS, e.g. it is no longer possible to include an unprotected header. The space saving comes from dropping the keys that denote the parts of the JSON object (“payload”, “signature” etc.). On the other hand, base64url encoding expands the length of cleartext data (i.e. the unprotected header), so in an extreme case a compact serialized JWS might actually be longer than a JSON serialized one, if the header contains enough data (to compensate for the inefficiency of spelling out the keys in the object) and is stored as an unprotected header. Also of importance is the number of dot separated sections, since that number is the only way of differentiating between a compact serialized JWE and JWS. Other proposed extensions, such as Key Managed JSON Web Signature (KMJWS), must take this into account.
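A sketch of that differentiation, which simply counts the dot separated sections (three for a compact JWS, five for a compact JWE), might look like this:

def jose_compact_kind(token: str) -> str:
    # A compact JWS is header.payload.signature (3 sections);
    # a compact JWE is header.encrypted_key.iv.ciphertext.tag (5 sections).
    sections = token.split(".")
    if len(sections) == 3:
        return "JWS"
    if len(sections) == 5:
        return "JWE"
    raise ValueError("not a JOSE compact serialization")

print(jose_compact_kind("a.b.c"))  # prints JWS (placeholder sections)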

The standard also still contains several ambiguities, e.g. JWK defines a JWK Set and states that “The member names within a JWK Set MUST be unique;” without specifying what a member name actually is.

The ugly

The JOSE standard is already incorporated in the OpenID Connect standard. As the standards evolved side by side, the OpenID Connect standard is based on older revisions of JOSE. More importantly, the section 15.6.1. Pre-Final IETF Specifications of OpenID Connect states:

“Implementers should be aware that this specification uses several IETF specifications that are not yet final specifications … While every effort will be made to prevent breaking changes to these specifications, should they occur, OpenID Connect implementations should continue to use the specifically referenced draft versions above in preference to the final versions …”

Compatibility issues are always the bane of cryptographic standards, and this decision to prefer pre-final revisions of the JOSE standard might force implementations to make some hard decisions.

Future

The JOSE standard seems to be quickly approaching the final revisions and we will most probably see more of it on the web. Implementations for most of the popular languages are in place and we will see whether the decision to award Special European Identity Award for Best Innovation for Security in the API Economy to JOSE will also stand the test of time.

Not using IPv6? Are you sure?

World IPv6 Launch logo

CC-BY World IPv6 Launch

Internet Protocol version 6 (IPv6) has been around for many years and was first supported in Red Hat Enterprise Linux 6 in 2010.  Designed to provide, among other things, additional address space on the ever-growing Internet, IPv6 has only recently become a priority for ISPs and businesses.

On February 3, 2011, ICANN announced that the available pool of unallocated IPv4 addresses had been completely emptied and urged network operators and server owners to implement IPv6 if they had not already done so.  Unfortunately, many networks still do not support IPv6, and many system and network administrators don’t understand the security risks of having no IPv6 controls within their network setup, even when IPv6 is not supported.  The common assumption that IPv6 can be ignored because it isn’t supported on the network is a false one.

The Threat

On many operating systems, Red Hat Enterprise Linux and Fedora included, IPv6 is preferred over IPv4.  A DNS lookup will search first for an IPv6 address and then an IPv4 address.  A system requesting a DHCP allocation will, by default, attempt to obtain both kinds of addresses as well.  When a network does not support IPv6, it leaves open the possibility of rogue IPv6 DHCP and DNS servers coming online to redirect traffic, either around current network restrictions or through a specific choke point where traffic can be inspected, or both.  Basically, if you aren’t offering up IPv6 within your network, someone else could.
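One quick way to see this preference on a given host is to ask the resolver for all address families; the order of the results (governed by getaddrinfo and the host’s address-selection policy, per RFC 6724) is the order in which most applications will try to connect. A minimal Python sketch, using a placeholder hostname:

import socket

# Ask for both IPv4 and IPv6 results; the order returned is the order in
# which most applications will attempt to connect.
for family, _type, _proto, _canon, sockaddr in socket.getaddrinfo(
        "www.example.com", 443, proto=socket.IPPROTO_TCP):
    kind = "IPv6" if family == socket.AF_INET6 else "IPv4"
    print(kind, sockaddr[0])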

Just like on an IPv4 network, monitoring IPv6 on the internal network is crucial for security, especially if you don’t have IPv6 rolled out.  Without proper monitoring, an attacker, or a poorly configured server, could start providing a pathway out of your network, bypassing all the safety mechanisms established to keep your data under control.

Implementing IPv6

There are several methods for protecting systems and networks from attacks revolving around IPv6.  The simplest, and most preferred, method is to simply start using IPv6.  It becomes much more difficult for rogue DNS and DHCP servers to be introduced on a functioning IPv6 network.  Implementing IPv6 isn’t particularly difficult either.

Unfortunately, IPv6 isn’t all that simple to implement either.  As UNC‘s Dr. Joni Julian spoke about in her SouthEast LinuxFest presentation on IPv6 Security, many of the tools administrators use to manage network connections have been rewritten, and thus renamed, to support IPv6.  This adds to the confusion when other tools, such as iptables, require different rules to be written to support IPv6.  Carnegie Mellon University’s CERT addresses many different facets of implementing IPv6, including ip6tables rules.  There are many resources available to help system and network administrators set up IPv6 on their systems and networks, and by doing so their networks will automatically be reachable from the IPv6-only networks of the future.

Blocking and Disabling IPv6

If setting up IPv6 isn’t possible, the next best thing is disabling, blocking, and monitoring for IPv6 on the network.  This means disabling IPv6 in the network stack and blocking IPv6 with ip6tables.

# Set DROP as default policy to INPUT, OUTPUT, and FORWARD chains.
ip6tables -P INPUT DROP
ip6tables -P OUTPUT DROP
ip6tables -P FORWARD DROP

# Set DROP as a rule to INPUT and OUTPUT chains.
ip6tables -I INPUT -p all -j DROP
ip6tables -I OUTPUT -p all -j DROP

Because it can never be known that every system on a network is properly locked down, monitoring for IPv6 packets on the network is important.  Many IDSs can be configured to alert on such activity, but configuration is key.

A few final words

IPv6 doesn’t have to be scary, but if you want to maintain a secure network a certain amount of respect is required.  With proper monitoring, IPv6 can be an easily manageable “threat”.  Of course, the best way to mitigate the risks is to embrace IPv6.  Rolling it out and using it prevents many of the risks already discussed, and not doing so could already be an availability issue if serving up information over the Internet is important.