
Verisign Provides Open Source Implementation of Merkle Tree Ladder Mode


The quantum computing era is coming, and it will change everything about how the world connects online. While quantum computing will yield tremendous benefits, it will also create new risks, so it’s essential that we prepare our critical internet infrastructure for what’s to come. That’s why we’re so pleased to share our latest efforts in this area, including technology that we’re making available as an open source implementation to help internet operators worldwide prepare.

In recent years, the research team here at Verisign has been focused on a future where quantum computing is a reality, and where the general best practices and guidelines of traditional cryptography are re-imagined. As part of that work, we’ve made three further contributions to help the DNS community prepare for these changes:

  • an open source implementation of our Internet-Draft (I-D) on Merkle Tree Ladder (MTL) mode;
  • a new I-D on using MTL mode signatures with DNS Security Extensions (DNSSEC); and
  • an expansion of our previously announced public license terms to include royalty-free terms for implementing and using MTL mode if the I-Ds are published as Experimental, Informational, or Standards Track Requests for Comments (RFCs). (See the MTL mode I-D IPR declaration and the MTL mode for DNSSEC I-D IPR declaration for the official language.)

About MTL Mode

First, a brief refresher on what MTL mode is and what it accomplishes:

MTL mode is a technique developed by Verisign researchers that can reduce the operational impact of a signature scheme when authenticating an evolving series of messages. Rather than signing messages individually, MTL mode signs structures called Merkle tree ladders that are derived from the messages to be authenticated. Individual messages are authenticated relative to a ladder using a Merkle tree authentication path, while ladders are authenticated relative to a public key of an underlying signature scheme using a digital signature. The size and computational cost of the underlying digital signatures can therefore be spread across multiple messages.
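
To make the amortization concrete, here is a minimal Python sketch of the idea. It is not the MTL specification itself, which uses more general ladder structures and condensed/full signature formats; the ladder is simplified to a single Merkle tree root, and `underlying_sign` is a hypothetical stand-in for the underlying signature scheme (such as SLH-DSA):

```python
import hashlib

def H(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root_and_paths(messages):
    """Build a Merkle tree over the messages; return the root and, for each
    leaf, an authentication path of (sibling_hash, sibling_is_right) pairs."""
    level = [H(m) for m in messages]
    positions = list(range(len(level)))
    paths = [[] for _ in level]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])          # duplicate last node on odd levels
        for i, pos in enumerate(positions):
            paths[i].append((level[pos ^ 1], pos % 2 == 0))
            positions[i] = pos // 2
        level = [H(level[j] + level[j + 1]) for j in range(0, len(level), 2)]
    return level[0], paths

def verify_path(message, path, root):
    """Authenticate one message relative to the ladder by rehashing."""
    node = H(message)
    for sibling, sibling_is_right in path:
        node = H(node + sibling) if sibling_is_right else H(sibling + node)
    return node == root

def underlying_sign(data: bytes) -> bytes:
    # Hypothetical stand-in for the underlying scheme's signing operation
    # (e.g., SLH-DSA); NOT a real signature.
    return H(b"demo-private-key" + data)

records = [f"record {i}".encode() for i in range(8)]
ladder, paths = merkle_root_and_paths(records)
ladder_sig = underlying_sign(ladder)   # one large signature covers the ladder
assert verify_path(records[3], paths[3], ladder)
# Each record ships with only its short path; the large signature's size
# is spread across all eight records.
```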

The reduction in operational impact achieved by MTL mode can be particularly beneficial when the mode is applied to a signature scheme that has a large signature size or computational cost in specific use cases, such as when post-quantum signature schemes are applied to DNSSEC.

Recently, Verisign Fellow Duane Wessels described how Verisign’s DNSSEC algorithm update — from RSA/SHA-256 (Algorithm 8) to ECDSA Curve P-256 with SHA-256 (Algorithm 13) — increases the security strength of DNSSEC signatures and reduces their size impact. That update is a logical next step in the evolution of DNSSEC resiliency. In the future, it is possible that DNSSEC may utilize a post-quantum signature scheme. The new post-quantum signature schemes currently being standardized share a shortcoming, though: applying them directly to DNSSEC would significantly increase the size of the signatures.1 With our work on MTL mode, the researchers at Verisign have provided a way to achieve the security benefit of a post-quantum algorithm rollover while mitigating the size impact.

Put simply, this means that in a quantum environment, the MTL mode of operation developed by Verisign will enable internet infrastructure operators to use the longer signatures they will need to protect communications from quantum attacks, while still supporting the speed and space efficiency we’ve come to expect.

For more background information on MTL mode and how it works, see my July 2023 blog post, the MTL mode I-D, or the research paper, “Merkle Tree Ladder Mode: Reducing the Size Impact of NIST PQC Signature Algorithms in Practice.”

Recent Standardization Efforts

In my July 2023 blog post titled “Next Steps in Preparing for Post-Quantum DNSSEC,” I described two recent contributions by Verisign to help the DNS community prepare for a post-quantum world: the MTL mode I-D and a public, royalty-free license to certain intellectual property related to that I-D. These activities set the stage for the latest contributions I’m announcing in this post today.

Our Latest Contributions

  • Open source implementation. Like the I-D we published in July of this year, the open source implementation focuses on applying MTL mode to the SPHINCS+ signature scheme currently being standardized in FIPS 205 as SLH-DSA (Stateless Hash-Based Digital Signature Algorithm) by the National Institute of Standards and Technology (NIST). We chose SPHINCS+ because it is the most conservative of NIST’s post-quantum signature algorithms from a cryptographic perspective, being hash-based and stateless. We remain open to adding other post-quantum signature schemes to the I-D and to the open source implementation.
    We encourage developers to try out the open source implementation of MTL mode, which we introduced at the IETF 118 Hackathon, as the community’s experience will help improve the understanding of MTL mode and its applications, and thereby facilitate its standardization. We are interested in feedback both on whether MTL mode is effective in reducing the size impact of post-quantum signatures on DNSSEC and other use cases, and on the open source implementation itself. We are particularly interested in the community’s input on what language bindings would be useful and on which cryptographic libraries we should support initially. The open source implementation can be found on GitHub at: https://github.com/verisign/MTL
  • MTL mode for DNSSEC I-D. This specification describes how to use MTL mode signatures with DNSSEC, including DNSKEY and RRSIG record formats. The I-D also provides initial guidance for DNSSEC key creation, signature generation, and signature verification in MTL mode. We consider the I-D as an example of the kinds of contributions that can help to address the “Research Agenda for a Post-Quantum DNSSEC,” the subject of another I-D recently co-authored by Verisign. We expect to continue to update this I-D based on community feedback. While our primary focus is on the DNSSEC use case, we are also open to collaborating on other applications of MTL mode.
  • Expanded patent license. Verisign previously announced a public, royalty-free license to certain intellectual property related to the MTL mode I-D that we published in July 2023. With the availability of the open source implementation and the MTL mode for DNSSEC specification, the company has expanded its public license terms to include royalty-free terms for implementing and using MTL mode if the I-D is published as an Experimental, Informational, or Standards Track RFC. In addition, the company has made a similar license grant for the use of MTL mode with DNSSEC. See the MTL mode I-D IPR declaration and the MTL mode for DNSSEC I-D IPR declaration for the official language.

Verisign is grateful for the DNS community’s interest in this area, and we are pleased to serve as stewards of the internet when it comes to developing new technology that can help the internet grow and thrive. Our work on MTL mode is one of the longer-term efforts supporting our mission to enhance the security, stability, and resiliency of the global DNS. We’re encouraged by the progress that has been achieved, and we look forward to further collaborations as we prepare for a post-quantum future.

Footnotes

  1. While it’s possible that other post-quantum algorithms could be standardized that don’t have large signatures, they wouldn’t have been studied for as long. Indeed, our preferred approach for long-term resilience of DNSSEC is to use the most conservative of the post-quantum signature algorithms, which also happens to have the largest signatures. By making that choice practical, we’ll have a solution in place whether or not a post-quantum algorithm with a smaller signature size is eventually available.


Next Steps in Preparing for Post-Quantum DNSSEC


In 2021, we discussed a potential future shift from established public-key algorithms to so-called “post-quantum” algorithms, which may help protect sensitive information after the advent of quantum computers. We also shared some of our initial research on how to apply these algorithms to the Domain Name System Security Extensions, or DNSSEC. In the time since that blog post, we’ve continued to explore ways to address the potential operational impact of post-quantum algorithms on DNSSEC, while also closely tracking industry research and advances in this area.

Now, significant activities are underway that are setting the timeline for the availability and adoption of post-quantum algorithms. Since DNS participants – including registries and registrars – use public-key cryptography in a number of their systems, these systems may all eventually need to be updated to use the new post-quantum algorithms. In this post, we also announce two major contributions that Verisign has made in support of standardizing this technology: an Internet-Draft as well as a public, royalty-free license to certain intellectual property related to that Internet-Draft.

In this blog post, we review the changes that are on the horizon, what they mean for the DNS ecosystem, and one way we are proposing to ease the implementation of post-quantum signatures: Merkle Tree Ladder mode.

By taking these actions, we aim to be better prepared (while also helping others prepare) for a future where cryptanalytically relevant quantum computing and post-quantum cryptography become a reality.

Recent Developments

In July 2022, the National Institute of Standards and Technology (NIST) selected one post-quantum encryption algorithm and three post-quantum signature algorithms for standardization, with standards for these algorithms arriving as early as 2024. In line with this work, the Internet Engineering Task Force (IETF) has also started standards development activities on applying post-quantum algorithms to internet protocols in various working groups, including the newly formed Post-Quantum Use in Protocols (PQUIP) working group. And finally, the National Security Agency (NSA) recently announced that National Security Systems are expected to transition to post-quantum algorithms by 2035.

Collectively, these announcements and activities indicate that many organizations are envisioning a (post-)quantum future, across many protocols. Verisign’s main concern continues to be how post-quantum cryptography impacts the DNS, and in particular, how post-quantum signature algorithms impact DNSSEC.

DNSSEC Considerations

The standards being developed in the next few years are likely to be the ones deployed when the post-quantum transition eventually takes place, so now is the time to take operational requirements for specific protocols into account.

For DNSSEC, the operational concerns are twofold.

First, the large signature sizes of the current post-quantum signatures selected by NIST would result in DNSSEC responses that exceed the size limits of the User Datagram Protocol (UDP), which is broadly deployed in the DNS ecosystem. While the Transmission Control Protocol (TCP) and other transports are available, the additional overhead of carrying large post-quantum signatures on every response — signatures that can be one to two orders of magnitude larger than traditional ones — introduces operational risk to the DNS ecosystem that would be preferable to avoid.

Second, the large signatures would significantly increase memory requirements for resolvers using in-memory caches and authoritative nameservers using in-memory databases.

Figure 1: Size impact of traditional and post-quantum signatures on a fully signed DNS zone. Horizontal bars show the percentage of the zone that would be signature data for two traditional and two post-quantum algorithms; vertical bars show the percentage increase in zone size due to signature data.

Figure 1, from Andy Fregly’s recent presentation at OARC 40, shows the impact on a fully signed DNS zone where, on average, there are 2.2 digital signatures per resource record set (covering both existence and non-existence proofs). The horizontal bars show the percentage of the zone file that would be comprised of signature data for the two prevalent current algorithms, RSA and ECDSA, and for the smallest and largest of the NIST PQC algorithms. At the low and high end of these examples, signatures with ECDSA would take up 40% of the zone and SPHINCS+ signatures would take up over 99% of the zone. The vertical bars give the percentage size increase of the zone file due to signatures. Again, comparing the low and high end, a zone fully signed with SPHINCS+ would be about 50 times the size of a zone fully signed with ECDSA.
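
A back-of-envelope model reproduces the trend in Figure 1. The signature sizes below are approximate wire sizes, and the average non-signature data per resource record set is an assumption chosen for illustration rather than a value from the presentation:

```python
# Approximate signature sizes in bytes; the PQC values are for the smallest
# (Falcon-512) and largest (SPHINCS+-128s) NIST PQC signature algorithms.
SIG_BYTES = {
    "ECDSA P-256":   64,
    "RSA-2048":      256,
    "Falcon-512":    666,      # approximate
    "SPHINCS+-128s": 7856,     # approximate
}
SIGS_PER_RRSET = 2.2           # average from the presentation
RRSET_DATA_BYTES = 210         # assumed average non-signature data per RRset

ecdsa_zone = RRSET_DATA_BYTES + SIGS_PER_RRSET * SIG_BYTES["ECDSA P-256"]
for name, sig in SIG_BYTES.items():
    sig_total = SIGS_PER_RRSET * sig
    zone = RRSET_DATA_BYTES + sig_total
    print(f"{name:14s} signatures ~{sig_total / zone:5.1%} of zone, "
          f"~{zone / ecdsa_zone:4.1f}x the ECDSA-signed zone size")
```

With these assumed inputs the output roughly matches the figure: ECDSA signatures come to about 40% of the zone, SPHINCS+ signatures to about 99%, and the SPHINCS+-signed zone is about 50 times the size of the ECDSA-signed zone.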

Merkle Tree Ladder Mode: Reducing Size Impact of Post-Quantum Signatures

In his 1988 article, “The First Ten Years of Public-Key Cryptography,” Whitfield Diffie, co-discoverer of public-key cryptography, commented on the lack of progress in finding public-key encryption algorithms that were as fast as the symmetric-key algorithms of the day: “Theorems or not, it seemed silly to expect that adding a major new criterion to the requirements of a cryptographic system could fail to slow it down.”

Diffie’s counsel also appears relevant to the search for post-quantum algorithms: It would similarly be surprising if adding the “major new criterion” of post-quantum security to the requirements of a digital signature algorithm didn’t impact performance in some way. Signature size may well be the tradeoff for post-quantum security, at least for now.

With this tradeoff in mind, Verisign’s post-quantum research team has explored ways to address the size impact, particularly to DNSSEC, arriving at a construction we call a Merkle Tree Ladder (MTL), a generalization of a single-rooted Merkle tree (see Figure 2). We have also defined a technique that we call the Merkle Tree Ladder mode of operation for using the construction with an underlying signature algorithm.

Figure 2: A Merkle Tree Ladder consists of one or more “rungs” that authenticate or “cover” the leaves of a generalized Merkle tree. In this example, rungs 19:19, 17:18, and 1:16 are collectively the ancestors of all 19 leaves of the tree and therefore cover them. The value of each node is the hash of the values of its children, providing cryptographic protection. A Merkle authentication path consisting of sibling nodes authenticates a leaf node relative to the ladder; e.g., leaf node 7 (corresponding to message 7 beneath it) can be authenticated relative to rung 1:16 by rehashing it with the sibling nodes along the path: 8, 5:6, 1:4, and 9:16. If the verifier already has a previous ladder that covers a message, the verifier can instead rehash relative to that ladder; e.g., leaf node 7 can be verified relative to rung 1:8 using sibling nodes 8, 5:6, and 1:4.

Similar to current deployments of public-key cryptography, MTL mode combines processes with complementary properties to balance performance and other criteria (see Table 1). In particular, in MTL mode, rather than signing individual messages with a post-quantum signature algorithm, ladders comprised of one or more Merkle tree nodes are signed using the post-quantum algorithm. Individual messages are then authenticated relative to the ladders using Merkle authentication paths.

Criterion to achieve: Public-key property for encryption
  • Initial design with a single process: Encrypt individual messages with public-key algorithm
  • Improved design combining complementary processes: Establish symmetric keys using public-key algorithm; encrypt multiple messages using each symmetric key
  • Benefit: Amortize cost of public-key operations across multiple messages

Criterion to achieve: Post-quantum property for signatures
  • Initial design with a single process: Sign individual messages with post-quantum algorithm
  • Improved design combining complementary processes: Sign Merkle Tree Ladders using post-quantum algorithm; authenticate multiple messages relative to each signed ladder
  • Benefit: Amortize size of post-quantum signature across multiple messages
Table 1: Speed concerns for traditional public-key algorithms were addressed by combining them with symmetric-key algorithms (for instance, as outlined in early specifications for Internet Privacy-Enhanced Mail). Size concerns for emerging post-quantum signature algorithms can potentially be addressed by combining them with constructions such as Merkle Tree Ladders.

Although the signatures on the ladders might be relatively large, the ladders and their signatures are sent infrequently. In contrast, the Merkle authentication paths that are sent for each message are relatively short. The combination of the two processes maintains the post-quantum property while amortizing the size impact of the signatures across multiple messages. (Merkle tree constructions, being based on hash functions, are naturally post-quantum.)

The two-part approach for public-key algorithms has worked well in practice. In Transport Layer Security (TLS), symmetric keys are established in occasional handshake operations, which may be more expensive. The symmetric keys are then used to encrypt multiple messages within a session without further overhead for key establishment. (They can also be used to start a new session.)

We expect that a two-part approach for post-quantum signatures can similarly work well in an application like DNSSEC where verifiers are interested in authenticating a subset of messages from a large, evolving message series (e.g., DNS records).

In such applications, signed Merkle Tree Ladders covering a range of messages in the evolving series can be provided to a verifier occasionally. Verifiers can then authenticate messages relative to the ladders, given just a short Merkle authentication path.

Importantly, due to a property of Merkle authentication paths called backward compatibility, all verifiers can be given the same authentication path relative to the signer’s current ladder. This also helps with deployment in applications such as DNSSEC, since the authentication path can be published in place of a traditional signature. An individual verifier may verify the authentication path as long as the verifier has a previously signed ladder covering the message of interest. If not, then the verifier just needs to get the current ladder.

As reported in our presentation on MTL mode at the RSA Conference Cryptographers’ Track in April 2023, our initial evaluation of the expected frequency of requests for MTL mode signed ladders in DNSSEC is promising, suggesting that a significant reduction in effective signature size impact can be achieved.

Verisign’s Contributions to Standardization

To facilitate more public evaluation of MTL mode, Verisign’s post-quantum research team last week published the Internet-Draft “Merkle Tree Ladder Mode (MTL) Signatures.” The draft provides the first detailed, interoperable specification for applying MTL mode to a signature scheme, with SPHINCS+ as an initial example.

We chose SPHINCS+ because it is the most conservative of the NIST PQC algorithms from a cryptographic perspective, being hash-based and stateless. It is arguably most suited to be one of the algorithms in a long-term deployment of a critical infrastructure service like DNSSEC. With this focus, the specification has a “SPHINCS+-friendly” style. Implementers familiar with SPHINCS+ will find similar notation and constructions as well as common hash function instantiations. We are open to adding other post-quantum signature schemes to the draft or other drafts in the future.

Publishing the Internet-Draft is a first step toward the goal of standardizing a mode of operation that can reduce the size impact of post-quantum signature algorithms.

In support of this goal, Verisign also announced this week a public, royalty-free license to certain intellectual property related to the Internet-Draft published last week. Similar to other intellectual property rights declarations the company has made, we have announced a “Standards Development Grant” which provides the listed intellectual property under royalty-free terms for the purpose of facilitating standardization of the Internet-Draft we published on July 10, 2023. (The IPR declaration gives the official language.)

We expect to release an open-source implementation of the Internet-Draft soon, and, later this year, to publish an Internet-Draft on using MTL mode signatures in DNSSEC.

With these contributions, we invite implementers to take part in the next step toward standardization: evaluating initial versions of MTL mode to confirm whether they indeed provide practical advantages in specific use cases.

Conclusion

DNSSEC continues to be an important part of the internet’s infrastructure, providing cryptographic verification of information associated with the unique, stable identifiers in this ubiquitous namespace. That is why preparing for an eventual transition to post-quantum algorithms for DNSSEC has been and continues to be a key research and development activity at Verisign, as evidenced by our work on MTL mode and post-quantum DNSSEC more generally.

Our goal is that with a technique like MTL mode in place, protocols like DNSSEC can preserve the security characteristics of a pre-quantum environment while minimizing the operational impact of larger signatures in a post-quantum world.

In a later blog post, we’ll share more details on some upcoming changes to DNSSEC, and how these changes will provide both security and operational benefits to DNSSEC in the near term.

Verisign plans to continue to invest in research and standards development in this area, as we help prepare for a post-quantum future.


The Test of Time at Internet Scale: Verisign’s Danny McPherson Recognized with ACM SIGCOMM Award


The global internet, from the perspective of its billions of users, has often been envisioned as a cloud — a shapeless structure that connects users to applications and to one another, with the internal details left up to the infrastructure operators inside.

From the perspective of the infrastructure operators, however, the global internet is a network of networks. It’s a complex set of connections among network operators, application platforms, content providers and other parties.

And just as the total amount of global internet traffic continues to grow, so too do the shape and structure of the internet — the internal details of the cloud — continue to evolve.

At the Association for Computing Machinery’s Special Interest Group on Data Communications (ACM SIGCOMM) conference in 2010, researchers at Arbor Networks and the University of Michigan, including Danny McPherson, now executive vice president and chief security officer at Verisign, published one of the first papers to analyze the internal structure of the internet in detail.

The study, entitled “Internet Inter-Domain Traffic,” drew from two years of measurements involving more than 200 exabytes of data.

One of the paper’s key observations was the emergence of a “global internet core” of a relatively small number of large application and content providers that had become responsible for the majority of the traffic between different parts of the internet — in contrast to the previous topology where large network operators were the primary source.

The authors’ conclusion: “we expect the trend towards internet inter-domain traffic consolidation to continue and even accelerate.”

The paper’s predictions of internet traffic and topology trends proved out over the past decade, as confirmed by one of the paper’s authors, Craig Labovitz, in a 2019 presentation that reiterated the paper’s main findings: the internet is “getting bigger by traffic volume” while also “rapidly getting smaller by concentration of content sources.”

This week, the ACM SIGCOMM 2021 conference recognized the enduring value of the research with the prestigious Test of Time Paper Award, given to a paper “deemed to be an outstanding paper whose contents are still a vibrant and useful contribution today.”

Internet measurement research is particularly relevant to Domain Name System (DNS) operators such as Verisign. To optimize the deployment of their services, DNS operators need to know where DNS query traffic is most likely to be exchanged in the coming years. Insights into the internal structure of the internet can help DNS operators ensure the ongoing security, stability and resiliency of their services, for the benefit both of other infrastructure operators who depend on DNS, and the billions of users who connect online every day.

Congratulations to Danny and co-authors Craig Labovitz, Scott Iekel-Johnson, Jon Oberheide and Farnam Jahanian on receiving this award, and thanks to ACM SIGCOMM for its recognition of the research. If you’re curious about what evolutionary developments Danny and others at Verisign are envisioning today about the internet of the future, subscribe to the Verisign blog, and follow us on Twitter and LinkedIn.


Information Protection for the Domain Name System: Encryption and Minimization

This is the final in a multi-part series on cryptography and the Domain Name System (DNS).

In previous posts in this series, I’ve discussed a number of applications of cryptography to the DNS, many of them related to the Domain Name System Security Extensions (DNSSEC).

In this final blog post, I’ll turn attention to another application that may appear at first to be the most natural, though, as it turns out, may not always be the most necessary: DNS encryption. (I’ve also written about DNS encryption as well as minimization in a separate post on DNS information protection.)

DNS Encryption

In 2014, the Internet Engineering Task Force (IETF) chartered the DNS PRIVate Exchange (dprive) working group to start work on encrypting DNS queries and responses exchanged between clients and resolvers.

That work resulted in RFC 7858, published in 2016, which describes how to run the DNS protocol over the Transport Layer Security (TLS) protocol, also known as DNS over TLS, or DoT.

DNS encryption between clients and resolvers has since gained further momentum, with multiple browsers and resolvers supporting DNS over Hypertext Transport Protocol Security (HTTPS), or DoH, with the formation of the Encrypted DNS Deployment Initiative, and with further enhancements such as oblivious DoH.
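
To illustrate what DoT looks like on the wire, here is a minimal Python sketch that sends one A-record query over TLS, using the two-byte length framing that RFC 7858 borrows from DNS over TCP. The resolver shown is just one example of a public DoT service; a production client would also handle EDNS, retries, and response parsing:

```python
import socket, ssl, struct

def build_query(name: str, qtype: int = 1) -> bytes:
    """Minimal DNS query in wire format (RFC 1035): one question, recursion desired."""
    header = struct.pack(">HHHHHH", 0x1234, 0x0100, 1, 0, 0, 0)
    qname = b"".join(bytes([len(l)]) + l.encode() for l in name.rstrip(".").split("."))
    return header + qname + b"\x00" + struct.pack(">HH", qtype, 1)

query = build_query("example.com")
context = ssl.create_default_context()
# dns.google (8.8.8.8) is one public resolver that offers DoT on port 853.
with socket.create_connection(("8.8.8.8", 853)) as raw:
    with context.wrap_socket(raw, server_hostname="dns.google") as tls:
        # RFC 7858 reuses DNS-over-TCP framing: a two-byte length prefix.
        tls.sendall(struct.pack(">H", len(query)) + query)
        resp_len = struct.unpack(">H", tls.recv(2))[0]
        response = b""
        while len(response) < resp_len:
            response += tls.recv(resp_len - len(response))
print(f"received {len(response)}-byte DNS response, encrypted in transit")
```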

The dprive working group turned its attention to the resolver-to-authoritative exchange during its rechartering in 2018. And in October of last year, ICANN’s Office of the CTO published its strategy recommendations for the ICANN-managed Root Server (IMRS, i.e., the L-Root Server), an effort motivated in part by concern about potential “confidentiality attacks” on the resolver-to-root connection.

From a cryptographer’s perspective the prospect of adding encryption to the DNS protocol is naturally quite interesting. But this perspective isn’t the only one that matters, as I’ve observed numerous times in previous posts.

Balancing Cryptographic and Operational Considerations

A common theme in this series on cryptography and the DNS has been the question of whether the benefits of a technology are sufficient to justify its cost and complexity.

This question came up not only in my review of two newer cryptographic advances, but also in my remarks on the motivation for two established tools for providing evidence that a domain name doesn’t exist.

Recall that the two tools — the Next Secure (NSEC) and Next Secure 3 (NSEC3) records — were developed because a simpler approach didn’t have an acceptable risk / benefit tradeoff. In the simpler approach, to provide a relying party assurance that a domain name doesn’t exist, a name server would return a response, signed with its private key, “<name> doesn’t exist.”

From a cryptographic perspective, the simpler approach would meet its goal: a relying party could then validate the response with the corresponding public key. However, the approach would introduce new operational risks, because the name server would now have to perform online cryptographic operations.

The name server would not only have to protect its private key from compromise, but would also have to protect the cryptographic operations from overuse by attackers. That could open another avenue for denial-of-service attacks that could prevent the name server from responding to legitimate requests.

The designers of DNSSEC mitigated these operational risks by developing NSEC and NSEC3, which gave the option of moving the private key and the cryptographic operations offline, into the name server’s provisioning system. This better solution balanced cryptographic and operational considerations. The theme is now returning to view through the recent efforts around DNS encryption.

Like the simpler initial approach for authentication, DNS encryption may meet its goal from a cryptographic perspective. But the operational perspective is important as well. As designers again consider where and how to deploy private keys and cryptographic operations across the DNS ecosystem, alternatives with a better balance are a desirable goal.

Minimization Techniques

In addition to encryption, there has been research into other, possibly lower-risk alternatives that can be used in place of or in addition to encryption at various levels of the DNS.

We call these techniques collectively minimization techniques.

Qname Minimization

In “textbook” DNS resolution, a resolver sends the same full domain name to a root server, a top-level domain (TLD) server, a second-level domain (SLD) server, and any other server in the chain of referrals, until it ultimately receives an authoritative answer to a DNS query.

This is the way that DNS resolution has been practiced for decades, and it’s also one of the reasons for the recent interest in protecting information on the resolver-to-authoritative exchange: The full domain name is more information than all but the last name server needs to know.

One such minimization technique, known as qname minimization, was identified by Verisign researchers in 2011 and documented in RFC 7816 in 2016. (In 2015, Verisign announced a royalty-free license to its qname minimization patents.)

With qname minimization, instead of sending the full domain name to each name server, the resolver sends only as much as the name server needs either to answer the query or to refer the resolver to a name server at the next level. This follows the principle of minimum disclosure: the resolver sends only as much information as the name server needs to “do its job.” As Matt Thomas described in his recent blog post on the topic, nearly half of all queries received by Verisign’s .com and .net TLD servers were in a minimized form as of August 2020.
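
As a simplified illustration of the mechanism, the sketch below prints what a qname-minimizing resolver would send at each level for one name. It assumes one label per zone cut, which real implementations must discover, and ignores the choice of query type discussed in RFC 7816:

```python
def minimized_queries(full_name: str):
    """Yield (server, query name) pairs for a qname-minimizing resolver,
    assuming one label per zone cut for simplicity."""
    labels = full_name.rstrip(".").split(".")
    for depth in range(1, len(labels) + 1):
        qname = ".".join(labels[-depth:]) + "."
        server = "root server" if depth == 1 else f"{'.'.join(labels[-depth + 1:])}. server"
        yield server, qname

for server, qname in minimized_queries("www.example.com"):
    print(f"{server:22s} <- {qname}")
# root server            <- com.
# com. server            <- example.com.
# example.com. server    <- www.example.com.
# "Textbook" resolution would send www.example.com. to every server.
```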

Additional Minimization Techniques

Other techniques that are part of this new chapter in DNS protocol evolution include NXDOMAIN cut processing [RFC 8020] and aggressive DNSSEC caching [RFC 8198]. Both leverage information present in the DNS to reduce the amount and sensitivity of DNS information exchanged with authoritative name servers. In aggressive DNSSEC caching, for example, the resolver analyzes NSEC and NSEC3 range proofs obtained in response to previous queries to determine on its own whether a domain name doesn’t exist. This means that the resolver doesn’t always have to ask the authoritative server system about a domain name it hasn’t seen before.
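
A sketch of the caching decision follows. Note that real resolvers compare names in DNSSEC canonical ordering (RFC 4034), and NSEC3 works with hashed names, rather than the plain string ordering used here; the names and ranges are illustrative:

```python
# NSEC ranges cached from earlier responses; each one proves that no name
# exists strictly between its two endpoints.
cached_nsec_ranges = [
    ("alpha.example.", "delta.example."),
    ("delta.example.", "mike.example."),
]

def proven_nonexistent(qname: str) -> bool:
    """RFC 8198-style check: answer non-existence from cache when a stored
    range already covers the name, instead of querying the authoritative
    server again."""
    return any(lo < qname < hi for lo, hi in cached_nsec_ranges)

print(proven_nonexistent("charlie.example."))  # True: covered, no upstream query
print(proven_nonexistent("zulu.example."))     # False: must ask the authoritative server
```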

All of these techniques, as well as additional minimization alternatives I haven’t mentioned, have one important common characteristic: they only change how the resolver operates during the resolver-authoritative exchange. They have no impact on the authoritative name server or on other parties during the exchange itself. They thereby mitigate disclosure risk while also minimizing operational risk.

The resolver’s exchanges with authoritative name servers, prior to minimization, were already relatively less sensitive because they represented the aggregate interests of the resolver’s many clients.1 Minimization techniques lower the sensitivity even further at the root and TLD levels: the resolver sends only its aggregate interests in TLDs to root servers, and only its interests in SLDs to TLD servers. The resolver still sends the aggregate interests in full domain names at the SLD level and below,2 and may also include certain client-related information at these levels, such as the client-subnet extension. The lower levels therefore may have different protection objectives than the upper levels.

Conclusion

Minimization techniques and encryption together give DNS designers additional tools for protecting DNS information — tools that when deployed carefully can balance between cryptographic and operational perspectives.

These tools complement those I’ve described in previous posts in this series. Some have already been deployed at scale, such as DNSSEC with its NSEC and NSEC3 non-existence proofs. Others are at various earlier stages, like NSEC5 and tokenized queries, and still others contemplate “post-quantum” scenarios and how to address them. (And there are yet other tools that I haven’t covered in this series, such as authenticated resolution and adaptive resolution.)

Modern cryptography is just about as old as the DNS. Both have matured since their introduction in the late 1970s and early 1980s respectively. Both bring fundamental capabilities to our connected world. Both continue to evolve to support new applications and to meet new security objectives. While they’ve often moved forward separately, as this blog series has shown, there are also opportunities for them to advance together. I look forward to sharing more insights from Verisign’s research in future blog posts.

Read the complete six blog series:

  1. The Domain Name System: A Cryptographer’s Perspective
  2. Cryptographic Tools for Non-Existence in the Domain Name System: NSEC and NSEC3
  3. Newer Cryptographic Advances for the Domain Name System: NSEC5 and Tokenized Queries
  4. Securing the DNS in a Post-Quantum World: New DNSSEC Algorithms on the Horizon
  5. Securing the DNS in a Post-Quantum World: Hash-Based Signatures and Synthesized Zone Signing Keys
  6. Information Protection for the Domain Name System: Encryption and Minimization

1. This argument obviously holds more weight for large resolvers than for small ones — and doesn’t apply for the less common case of individual clients running their own resolvers. However, small resolvers and individual clients seeking additional protection retain the option of sending sensitive queries through a large, trusted resolver, or through a privacy-enhancing proxy. The focus in our discussion is primarily on large resolvers.

2. In namespaces where domain names are registered at the SLD level, i.e., under an effective TLD, the statements in this note about “root and TLD” and “SLD level and below” should be “root through effective TLD” and “below effective TLD level.” For simplicity, I’ve placed the “zone cut” between TLD and SLD in this note.



Securing the DNS in a Post-Quantum World: Hash-Based Signatures and Synthesized Zone Signing Keys

This is the fifth in a multi-part series on cryptography and the Domain Name System (DNS).

In my last article, I described efforts underway to standardize new cryptographic algorithms that are designed to be less vulnerable to potential future advances in quantum computing. I also reviewed operational challenges to be considered when adding new algorithms to the DNS Security Extensions (DNSSEC).

In this post, I’ll look at hash-based signatures, a family of post-quantum algorithms that could be a good match for DNSSEC from the perspective of infrastructure stability.

I’ll also describe Verisign Labs research into a new concept called synthesized zone signing keys that could mitigate the impact of the large signature size for hash-based signatures, while still maintaining this family’s protections against quantum computing.

(Caveat: The concepts reviewed in this post are part of Verisign’s long-term research program and do not necessarily represent Verisign’s plans or positions on new products or services. Concepts developed in our research program may be subject to U.S. and/or international patents and/or patent applications.)

A Stable Algorithm Rollover

The DNS community’s root key signing key (KSK) rollover illustrates how complicated a change to DNSSEC infrastructure can be. Although successfully accomplished, this change was delayed by ICANN to ensure that enough resolvers had the public key required to validate signatures generated with the new root KSK private key.

Now imagine the complications if the DNS community also had to ensure that enough resolvers not only had a new key but also had a brand-new algorithm.

Imagine further what might happen if a weakness in this new algorithm were to be found after it was deployed. While there are procedures for emergency key rollovers, emergency algorithm rollovers would be more complicated, and perhaps controversial as well if a clear successor algorithm were not available.

I’m not suggesting that any of the post-quantum algorithms that might be standardized by NIST will be found to have a weakness. But confidence in cryptographic algorithms can be gained and lost over many years, sometimes decades.

From the perspective of infrastructure stability, therefore, it may make sense for DNSSEC to have a backup post-quantum algorithm built in from the start — one for which cryptographers already have significant confidence and experience. This algorithm might not be as efficient as other candidates, but there is less of a chance that it would ever need to be changed. This means that the more efficient candidates could be deployed in DNSSEC with the confidence that they have a stable fallback. It’s also important to keep in mind that the prospect of quantum computing is not the only reason system developers need to be considering new algorithms from time to time. As public-key cryptography pioneer Martin Hellman wisely cautioned, new classical (non-quantum) attacks could also emerge, whether or not a quantum computer is realized.

Hash-Based Signatures

The 1970s were a foundational time for public-key cryptography, producing not only the RSA algorithm and the Diffie-Hellman algorithm (which also provided the basic model for elliptic curve cryptography), but also hash-based signatures, invented in 1979 by another public-key cryptography founder, Ralph Merkle.

Hash-based signatures are interesting because their security depends only on the security of an underlying hash function.

It turns out that hash functions, as a concept, hold up very well against quantum computing advances — much better than currently established public-key algorithms do.

This means that Merkle’s hash-based signatures, now more than 40 years old, can rightly be considered the oldest post-quantum digital signature algorithm.

If it turns out that an individual hash function doesn’t hold up — whether against a quantum computer or a classical computer — then the hash function itself can be replaced, as cryptographers have been doing for years. That will likely be easier than changing to an entirely different post-quantum algorithm, especially one that involves very different concepts.

The conceptual stability of hash-based signatures is a reason that interoperable specifications are already being developed for variants of Merkle’s original algorithm. Two approaches are described in RFC 8391, “XMSS: eXtended Merkle Signature Scheme” and RFC 8554, “Leighton-Micali Hash-Based Signatures.” Another approach, SPHINCS+, is an alternate in NIST’s post-quantum project.
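
The simplest member of this family is the Lamport one-time signature, also from that era, which the schemes above build on; Merkle’s construction ties many one-time key pairs together under a tree so one public key can sign many messages. A minimal sketch, for illustration only:

```python
import hashlib, secrets

H = lambda b: hashlib.sha256(b).digest()

def keygen(bits: int = 256):
    # Two random preimages per message-digest bit; the public key is their hashes.
    sk = [(secrets.token_bytes(32), secrets.token_bytes(32)) for _ in range(bits)]
    pk = [(H(a), H(b)) for a, b in sk]
    return sk, pk

def sign(message: bytes, sk):
    digest = H(message)
    bits = [(digest[i // 8] >> (7 - i % 8)) & 1 for i in range(256)]
    # Reveal one preimage per bit -- which is why each key is one-time only.
    return [sk[i][bit] for i, bit in enumerate(bits)]

def verify(message: bytes, sig, pk) -> bool:
    digest = H(message)
    bits = [(digest[i // 8] >> (7 - i % 8)) & 1 for i in range(256)]
    return all(H(sig[i]) == pk[i][bit] for i, bit in enumerate(bits))

sk, pk = keygen()
sig = sign(b"example record", sk)
assert verify(b"example record", sig, pk)
print(f"signature size: {sum(len(s) for s in sig) * 8} bits")  # 65,536 bits
```

Note the final line: even this simplest scheme produces a signature of tens of thousands of bits, consistent with the size challenge discussed below.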

Figure 1. Conventional DNSSEC signatures. DNS records are signed with the ZSK private key, and are thereby “chained” to the ZSK public key. The digital signatures may be hash-based signatures.

Hash-based signatures can potentially be applied to any part of the DNSSEC trust chain. For example, in Figure 1, the DNS record sets can be signed with a zone signing key (ZSK) that employs a hash-based signature algorithm.

The main challenge with hash-based signatures is that the signature size is large, on the order of tens or even hundreds of thousands of bits. This is perhaps why they haven’t seen significant adoption in security protocols over the past four decades.

Synthesizing ZSKs with Merkle Trees

Verisign Labs has been exploring how to mitigate the size impact of hash-based signatures on DNSSEC, while still basing security on hash functions only in the interest of stable post-quantum protections.

One of the ideas we’ve come up with uses another of Merkle’s foundational contributions: Merkle trees.

Merkle trees authenticate multiple records by hashing them together in a tree structure. The records are the “leaves” of the tree. Pairs of leaves are hashed together to form a branch, then pairs of branches are hashed together to form a larger branch, and so on. The hash of the largest branches is the tree’s “root.” (This is a data-structure root, unrelated to the DNS root.)

Each individual leaf of a Merkle tree can be authenticated by retracing the “path” from the leaf to the root. The path consists of the hashes of each of the adjacent branches encountered along the way.

Authentication paths can be much shorter than typical hash-based signatures. For instance, with a tree depth of 20 and a 256-bit hash value, the authentication path for a leaf would only be 5,120 bits long, yet a single tree could authenticate more than a million leaves.
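
The arithmetic from the paragraph above, as a quick check:

```python
depth, hash_bits = 20, 256
path_bits = depth * hash_bits    # one sibling hash per level: 5,120 bits
leaves = 2 ** depth              # 1,048,576 leaves under a single root
print(path_bits, leaves)
# Compare: full hash-based signatures run to tens (or hundreds) of
# thousands of bits, per the discussion above.
```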

Figure 2. DNSSEC signatures following the synthesized ZSK approach proposed here. DNS records are hashed together into a Merkle tree. The root of the Merkle tree is published as the ZSK, and the authentication path through the Merkle tree is the record’s signature.

Returning to the example above, suppose that instead of signing each DNS record set with a hash-based signature, each record set were considered a leaf of a Merkle tree. Suppose further that the root of this tree were to be published as the ZSK public key (see Figure 2). The authentication path to the leaf could then serve as the record set’s signature.

The validation logic at a resolver would be the same as in ordinary DNSSEC:

  • The resolver would obtain the ZSK public key from a DNSKEY record set signed by the KSK.
  • The resolver would then validate the signature on the record set of interest with the ZSK public key.

The only difference on the resolver’s side would be that signature validation would involve retracing the authentication path to the ZSK public key, rather than a conventional signature validation operation.

The ZSK public key produced by the Merkle tree approach would be a “synthesized” public key, in that it is obtained from the records being signed. This is noteworthy from a cryptographer’s perspective, because the public key wouldn’t have a corresponding private key, yet the DNS records would still, in effect, be “signed by the ZSK!”

Additional Design Considerations

In this type of DNSSEC implementation, the Merkle tree approach only applies to the ZSK level. Hash-based signatures would still be applied at the KSK level, although their overhead would now be “amortized” across all records in the zone.

In addition, each new ZSK would need to be signed “on demand,” rather than in advance, as in current operational practice.

This leads to tradeoffs, such as how many changes to accumulate before constructing and publishing a new tree. With fewer changes, the tree will be available sooner; with more changes, the tree will be larger, so the per-record overhead of the signatures at the KSK level will be lower.

Conclusion

My last few posts have discussed cryptographic techniques that could potentially be applied to the DNS in the long term — or that might not even be applied at all. In my next post, I’ll return to more conventional subjects, and explain how Verisign sees cryptography fitting into the DNS today, as well as some important non-cryptographic techniques that are part of our vision for a secure, stable and resilient DNS.

Read the complete six blog series:

  1. The Domain Name System: A Cryptographer’s Perspective
  2. Cryptographic Tools for Non-Existence in the Domain Name System: NSEC and NSEC3
  3. Newer Cryptographic Advances for the Domain Name System: NSEC5 and Tokenized Queries
  4. Securing the DNS in a Post-Quantum World: New DNSSEC Algorithms on the Horizon
  5. Securing the DNS in a Post-Quantum World: Hash-Based Signatures and Synthesized Zone Signing Keys
  6. Information Protection for the Domain Name System: Encryption and Minimization


Securing the DNS in a Post-Quantum World: New DNSSEC Algorithms on the Horizon

This is the fourth in a multi-part series on cryptography and the Domain Name System (DNS).

One of the “key” questions cryptographers have been asking for the past decade or more is what to do about the potential future development of a large-scale quantum computer.

If theory holds, a quantum computer could break established public-key algorithms including RSA and elliptic curve cryptography (ECC), building on Peter Shor’s groundbreaking result from 1994.

This prospect has motivated research into new so-called “post-quantum” algorithms that are less vulnerable to quantum computing advances. These algorithms, once standardized, may well be added into the Domain Name System Security Extensions (DNSSEC) — thus also adding another dimension to a cryptographer’s perspective on the DNS.

(Caveat: Once again, the concepts I’m discussing in this post are topics we’re studying in our long-term research program as we evaluate potential future applications of technology. They do not necessarily represent Verisign’s plans or position on possible new products or services.)

Post-Quantum Algorithms

The National Institute of Standards and Technology (NIST) started a Post-Quantum Cryptography project in 2016 to “specify one or more additional unclassified, publicly disclosed digital signature, public-key encryption, and key-establishment algorithms that are capable of protecting sensitive government information well into the foreseeable future, including after the advent of quantum computers.”

Security protocols that NIST is targeting for these algorithms, according to its 2019 status report (Section 2.2.1), include: “Transport Layer Security (TLS), Secure Shell (SSH), Internet Key Exchange (IKE), Internet Protocol Security (IPsec), and Domain Name System Security Extensions (DNSSEC).”

The project is now in its third round, with seven finalists, including three digital signature algorithms, and eight alternates.

NIST’s project timeline anticipates that the draft standards for the new post-quantum algorithms will be available between 2022 and 2024.

It will likely take several additional years for standards bodies such as the Internet Engineering Task Force (IETF) to incorporate the new algorithms into security protocols. Broad deployments of the upgraded protocols will likely take several years more.

Post-quantum algorithms can therefore be considered a long-term issue, not a near-term one. However, as with other long-term research, it’s appropriate to draw attention to factors that need to be taken into account well ahead of time.

DNSSEC Operational Considerations

The three candidate digital signature algorithms in NIST’s third round have one common characteristic: all of them have a key size or signature size (or both) that is much larger than for current algorithms.

Key and signature sizes are important operational considerations for DNSSEC because most of the DNS traffic exchanged with authoritative data servers is sent and received via the User Datagram Protocol (UDP), which has a limited response size.

Response size concerns were evident during the expansion of the root zone signing key (ZSK) from 1024-bit to 2048-bit RSA in 2016, and in the rollover of the root key signing key (KSK) in 2018. In the latter case, although the signature and key sizes didn’t change, total response size was still an issue because responses during the rollover sometimes carried as many as four keys rather than the usual two.

Thanks to careful design and implementation, response sizes during these transitions generally stayed within typical UDP limits. Equally important, response sizes also appeared to have stayed within the Maximum Transmission Unit (MTU) of most networks involved, thereby also avoiding the risk of packet fragmentation. (You can check how well your network handles various DNSSEC response sizes with this tool developed by Verisign Labs.)

Modeling Assumptions

The larger sizes associated with certain post-quantum algorithms do not appear to be a significant issue either for TLS, according to one benchmarking study, or for public-key infrastructures, according to another report. However, a recently published study of post-quantum algorithms and DNSSEC observes that “DNSSEC is particularly challenging to transition” to the new algorithms.

Verisign Labs offers the following observations about DNSSEC-related queries that may help researchers to model DNSSEC impact:

A typical resolver that implements both DNSSEC validation and qname minimization will send a combination of queries to Verisign’s root and top-level domain (TLD) servers.

Because the resolver is a validating resolver, these queries will all have the “DNSSEC OK” bit set, indicating that the resolver wants the DNSSEC signatures on the records.

The content of typical responses by Verisign’s root and TLD servers to these queries is given in Table 1 below. (In the table, <SLD>.<TLD> denotes the final two labels of a domain name of interest, including the TLD and the second-level domain (SLD); record types involved include Address (A), Name Server (NS), and DNSKEY.)

Root name servers:
  • Query: DNSKEY record set for root zone
    Typical response: DNSKEY record set including root KSK RSA-2048 public key and root ZSK RSA-2048 public key; root KSK RSA-2048 signature on DNSKEY record set
  • Query: A or NS record set for <TLD> — when <TLD> exists
    Typical response: NS referral to <TLD> name server; DS record set for <TLD> zone; root ZSK RSA-2048 signature on DS record set
  • Query: A or NS record set for <TLD> — when <TLD> doesn’t exist
    Typical response: up to two NSEC records for non-existence of <TLD>; root ZSK RSA-2048 signatures on NSEC records

.com / .net name servers:
  • Query: DNSKEY record set for <TLD> zone
    Typical response: DNSKEY record set including <TLD> KSK RSA-2048 public key and <TLD> ZSK RSA-1280 public key; <TLD> KSK RSA-2048 signature on DNSKEY record set
  • Query: A or NS record set for <SLD>.<TLD> — when <SLD>.<TLD> exists
    Typical response: NS referral to <SLD>.<TLD> name server; DS record set for <SLD>.<TLD> zone (if <SLD>.<TLD> supports DNSSEC); <TLD> ZSK RSA-1280 signature on DS record set (if present)
  • Query: A or NS record set for <SLD>.<TLD> — when <SLD>.<TLD> doesn’t exist
    Typical response: up to three NSEC3 records for non-existence of <SLD>.<TLD>; <TLD> ZSK RSA-1280 signatures on NSEC3 records

Table 1. Combination of queries that may be sent to Verisign’s root and TLD servers by a typical resolver that implements both DNSSEC validation and qname minimization, and content of associated responses.

For an A or NS query, the typical response, when the domain of interest exists, includes a referral to another name server. If the domain supports DNSSEC, the response also includes a set of Delegation Signer (DS) records providing the hashes of each of the referred zone’s KSKs — the next link in the DNSSEC trust chain. When the domain of interest doesn’t exist, the response includes one or more Next Secure (NSEC) or Next Secure 3 (NSEC3) records.

Researchers can estimate the effect of post-quantum algorithms on response size by replacing the sizes of the various RSA keys and signatures with those for their post-quantum counterparts. As discussed above, it is important to keep in mind that the number of keys returned may be larger during key rollovers.
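
For example, a rough model of the root DNSKEY response from Table 1 (two public keys plus one KSK signature, ignoring fixed header and record overhead) can be re-evaluated with post-quantum sizes. The key and signature sizes below are approximate published values, shown for illustration only:

```python
# (public-key bytes, signature bytes), approximate
ALGS = {
    "RSA-2048":      (256, 256),
    "Falcon-512":    (897, 666),
    "Dilithium2":    (1312, 2420),
    "SPHINCS+-128s": (32, 7856),
}

for name, (pk, sig) in ALGS.items():
    dnskey_response = 2 * pk + sig   # KSK public key + ZSK public key + KSK signature
    note = "  (well above the commonly recommended 1,232-byte UDP size)" \
        if dnskey_response > 1232 else ""
    print(f"{name:14s} DNSKEY response ~{dnskey_response:5d} B; "
          f"each additional signature adds ~{sig:5d} B{note}")
```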

Most of the queries from qname-minimizing, validating resolvers to the root and TLD name servers will be for A or NS records (the choice depends on the implementation of qname minimization, and has recently trended toward A). The signature size for a post-quantum algorithm, which affects all DNSSEC-related responses, will therefore generally have a much larger impact on average response size than will the key size, which affects only the DNSKEY responses.

Conclusion

Post-quantum algorithms are among the newest developments in cryptography. They add another dimension to a cryptographer’s perspective on the DNS because of the possibility that these algorithms, or other variants, may be added to DNSSEC in the long term.

In my next post, I’ll make the case for why the oldest post-quantum algorithm, hash-based signatures, could be a particularly good match for DNSSEC. I’ll also share the results of some research at Verisign Labs into how the large signature sizes of hash-based signatures could potentially be overcome.

Read the complete six blog series:

  1. The Domain Name System: A Cryptographer’s Perspective
  2. Cryptographic Tools for Non-Existence in the Domain Name System: NSEC and NSEC3
  3. Newer Cryptographic Advances for the Domain Name System: NSEC5 and Tokenized Queries
  4. Securing the DNS in a Post-Quantum World: New DNSSEC Algorithms on the Horizon
  5. Securing the DNS in a Post-Quantum World: Hash-Based Signatures and Synthesized Zone Signing Keys
  6. Information Protection for the Domain Name System: Encryption and Minimization


Newer Cryptographic Advances for the Domain Name System: NSEC5 and Tokenized Queries

This is the third in a multi-part blog series on cryptography and the Domain Name System (DNS).

In my last post, I looked at what happens when a DNS query yields a “negative” response – i.e., when a domain name doesn’t exist. I then examined two cryptographic approaches to handling negative responses: NSEC and NSEC3. In this post, I will examine a third approach, NSEC5, and a related concept that protects client information, tokenized queries.

The concepts I discuss below are topics we’ve studied in our long-term research program as we evaluate new technologies. They do not necessarily represent Verisign’s plans or position on a new product or service. Concepts developed in our research program may be subject to U.S. and international patents and patent applications.

NSEC5

NSEC5 is a result of research by cryptographers at Boston University and the Weizmann Institute. In this approach, which is still in an experimental stage, the endpoints are the outputs of a verifiable random function (VRF), a cryptographic primitive that has been gaining interest in recent years. NSEC5 is documented in an Internet-Draft (currently expired) and in several research papers.

A VRF is like a hash function but with two important differences:

  1. In addition to a message input, a VRF has a second input, a private key. (As in public-key cryptography, there’s also a corresponding public key.) No one can compute the outputs without the private key, hence the “random.”
  2. A VRF has two outputs: a token and a proof. (I’ve adopted the term “token” for alignment with the research that I describe next. NSEC5 itself simply uses “hash.”) Anyone can check that the token is correct given the proof and the public key, hence the “verifiable.”

So, not only is it hard for an adversary to reverse the VRF – a property hash functions also have – but it’s also hard for the adversary to compute the VRF in the forward direction without the private key, thus preventing offline dictionary attacks. And yet a relying party can still confirm that the VRF output for a given input is correct, because of the proof.
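As a toy illustration of this interface, the sketch below assembles a VRF-like construction from deterministic RSA signatures, loosely in the spirit of RSA-based VRF designs: the signature serves as the proof, and its hash serves as the token. This is a sketch for intuition only, not the NSEC5 specification; the key and function names are my own.

```python
import hashlib
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

sk = rsa.generate_private_key(public_exponent=65537, key_size=2048)
pk = sk.public_key()

def vrf_evaluate(name: bytes):
    # A deterministic RSA (PKCS#1 v1.5) signature acts as the proof;
    # only the private-key holder can produce it.
    proof = sk.sign(name, padding.PKCS1v15(), hashes.SHA256())
    token = hashlib.sha256(proof).hexdigest()
    return token, proof

def vrf_verify(name: bytes, token: str, proof: bytes) -> bool:
    # Anyone holding the public key can check that the token is correct.
    try:
        pk.verify(proof, name, padding.PKCS1v15(), hashes.SHA256())
    except InvalidSignature:
        return False
    return token == hashlib.sha256(proof).hexdigest()

token, proof = vrf_evaluate(b"example.arpa")
assert vrf_verify(b"example.arpa", token, proof)
```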

How does this work in practice? As in NSEC and NSEC3, range statements are prepared in advance and signed with the zone signing key (ZSK). With NSEC5, however, the range endpoints are two consecutive tokens.

When a domain name doesn’t exist, the name server applies the VRF to the domain name to obtain a token and a proof. The name server then returns a range statement where the token falls within the range, as well as the proof, as shown in the figure below. Note that the token values are for illustration only.

Figure 1. An example of a NSEC5 proof of non-existence based on a verifiable random function.

Because the range statement reveals only tokenized versions of other domain names in a zone, an adversary who doesn’t know the private key doesn’t learn any new existing domain names from the response. Indeed, to find out which domain name corresponds to one of the tokenized endpoints, the adversary would need access to the VRF itself to see if a candidate domain name has a matching hash value, which would involve an online dictionary attack. This significantly reduces disclosure risk.

The name server needs a copy of the zone’s NSEC5 private key so that it can generate proofs for non-existent domain names. The ZSK itself can stay in the provisioning system. As the designers of NSEC5 have pointed out, if the NSEC5 private key does happen to be compromised, this only makes it possible to do a dictionary attack offline; it does not make it possible to generate signatures on new range statements or on new positive responses.

NSEC5 is interesting from a cryptographer’s perspective because it uses a less common cryptographic technique, a VRF, to achieve a design goal that was at best partially met by previous approaches. As with other new technologies, DNS operators will need to consider whether NSEC5’s benefits are sufficient to justify its cost and complexity. Verisign doesn’t have any plans to implement NSEC5, as we consider NSEC and NSEC3 adequate for the name servers we currently operate. However, we will continue to track NSEC5 and related developments as part of our long-term research program.

Tokenized Queries

A few years before NSEC5 was published, Verisign Labs had started some research on applying tokenization in the opposite direction: protecting a client’s information from disclosure.

In our approach, instead of asking the resolver “What is <name>’s IP address,” the client would ask “What is token 3141…’s IP address,” where 3141… is the tokenization of <name>.

(More precisely, the client would specify both the token and the parent zone that the token relates to, e.g., the TLD of the domain name. Only the portion of the domain name below the parent would be obscured, just as in NSEC5. I’ve omitted the zone information for simplicity in this discussion.)

Suppose now that the domain name corresponding to token 3141… does exist. Then the resolver would respond with the domain name’s IP address as usual, as shown in the next figure.

Figure 2. Tokenized queries.

In this case, the resolver would know that the domain name associated with the token does exist, because it would have a mapping between the token and the DNS record, i.e., the IP address. Thus, the resolver would effectively “know” the domain name as well for practical purposes. (We’ve developed another approach that can protect both the domain name and the DNS record from disclosure to the resolver in this case, but that’s perhaps a topic for another post.)

Now, consider a domain name that doesn’t exist and suppose that its token is 2718… .

In this case, the resolver would respond that the domain name doesn’t exist, as usual, as shown below.

Figure 3. Non-existence with tokenized queries.

But because the domain name is tokenized and no other information about the domain name is returned, the resolver would only learn the token 2718… (and the parent zone), not the actual domain name that the client is interested in.

The resolver could potentially know that the name doesn’t exist via a range statement from the parent zone, as in NSEC5.

How does the client tokenize the domain name, if it doesn’t have the private key for the VRF? The name server would offer a public interface to the tokenization function. This can be done in what cryptographers call an “oblivious” VRF protocol, where the name server doesn’t see the actual domain name during the protocol, yet the client still gets the token.
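To give a flavor of oblivious evaluation, here’s a textbook blind-RSA sketch (my own illustration, not necessarily the protocol in the patent or in NSEC5): the client blinds the hashed name, the server applies its private key without ever seeing the name, and the client unblinds the result to derive the token.

```python
import hashlib
import secrets
from cryptography.hazmat.primitives.asymmetric import rsa

# Server side: the tokenization (VRF) key stays with the name server.
key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
pub = key.public_key().public_numbers()
n, e = pub.n, pub.e
d = key.private_numbers().d

def hash_to_int(name: bytes) -> int:
    return int.from_bytes(hashlib.sha256(name).digest(), "big") % n

# Client: blind the hashed name with a random factor r.
name = b"www.example.test"
m = hash_to_int(name)
r = secrets.randbelow(n - 2) + 2        # coprime to n with overwhelming probability
blinded = (m * pow(r, e, n)) % n        # this value is sent to the server

# Server: applies its private exponent, never seeing `name` or `m`.
blind_sig = pow(blinded, d, n)

# Client: unblind the result, then hash it to obtain the token.
sig = (blind_sig * pow(r, -1, n)) % n   # equals m^d mod n
token = hashlib.sha256(sig.to_bytes((n.bit_length() + 7) // 8, "big")).hexdigest()
assert sig == pow(m, d, n)              # same value a direct evaluation would give
```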

To keep the resolver itself from using this interface to do an online dictionary attack that matches candidate domain names with tokens, the name server could rate-limit access, or restrict it only to authorized requesters.

Additional details on this technology may be found in U.S. Patent 9,202,079B2, entitled “Privacy preserving data querying,” and related patents.

It’s interesting from a cryptographer’s perspective that there’s a way for a client to find out whether a DNS record exists, without necessarily revealing the domain name of interest. However, as before, the benefits of this new technology will be weighed against its operational cost and complexity and compared to other approaches. Because this technique focuses on client-to-resolver interactions, it’s already one step removed from the name servers that Verisign currently operates, so it is not as relevant to our business today as it might have been when we started the research. This one will stay under our long-term tracking as well.

Conclusion

The examples I’ve shared in these last two blog posts make it clear that cryptography has the potential to bring interesting new capabilities to the DNS. While the particular examples I’ve shared here do not meet the criteria for our product roadmap, researching advances in cryptography and other techniques remains important because new events can sometimes change the calculus. That point will become even more evident in my next post, where I’ll consider the kinds of cryptography that may be needed in the event that one or more of today’s algorithms is compromised, possibly through the introduction of a quantum computer.

Read the complete six blog series:

  1. The Domain Name System: A Cryptographer’s Perspective
  2. Cryptographic Tools for Non-Existence in the Domain Name System: NSEC and NSEC3
  3. Newer Cryptographic Advances for the Domain Name System: NSEC5 and Tokenized Queries
  4. Securing the DNS in a Post-Quantum World: New DNSSEC Algorithms on the Horizon
  5. Securing the DNS in a Post-Quantum World: Hash-Based Signatures and Synthesized Zone Signing Keys
  6. Information Protection for the Domain Name System: Encryption and Minimization


Cryptographic Tools for Non-Existence in the Domain Name System: NSEC and NSEC3

This is the second in a multi-part blog series on cryptography and the Domain Name System (DNS).

In my previous post, I described the first broad scale deployment of cryptography in the DNS, known as the Domain Name System Security Extensions (DNSSEC). I described how a name server can enable a requester to validate the correctness of a “positive” response to a query — when a queried domain name exists — by adding a digital signature to the DNS response returned.

The designers of DNSSEC, as well as academic researchers, have separately considered the case of “negative” responses – when the domain name doesn’t exist. In this case, as I’ll explain, responding with a signed “does not exist” is not the best design. This makes the non-existence case interesting from a cryptographer’s perspective as well.

Initial Attempts

Consider a domain name like example.arpa that doesn’t exist.

If it did exist, then as I described in my previous post, the second-level domain (SLD) server for example.arpa would return a response signed by example.arpa’s zone signing key (ZSK).

So a first try for the case that the domain name doesn’t exist is for the SLD server to return the response “example.arpa doesn’t exist,” signed by example.arpa’s ZSK.

However, if example.arpa doesn’t exist, then example.arpa won’t have either an SLD server or a ZSK to sign with. So, this approach won’t work.

A second try is for the parent name server — the .arpa top-level domain (TLD) server in the example — to return the response “example.arpa doesn’t exist,” signed by the parent’s ZSK.

This could work if the .arpa DNS server had the private ZSK for .arpa available to sign with. However, for security and performance reasons, the design preference for DNSSEC has been to keep private keys offline, within the zone’s provisioning system.

The provisioning system can precompute statements about domain names that do exist — but not about every possible individual domain name that doesn’t exist. So, this won’t work either, at least not for the servers that keep their private keys offline.

The third try is the design that DNSSEC settled on. The parent name server returns a “range statement,” previously signed with the ZSK, that states that there are no domain names in an ordered sequence between two “endpoints” where the endpoints depend on domain names that do exist. The range statements can therefore be signed offline, and yet the name server can still choose an appropriate signed response to return, based on the (non-existent) domain name in the query.

The DNS community has considered several approaches to constructing range statements, and they have varying cryptographic properties. Below I’ve described two such approaches. For simplicity, I’ve focused just on the basics in the discussion that follows. The astute reader will recognize that there are many more details involved both in the specification and the implementation of these techniques.

NSEC

The first approach, called NSEC, involves no additional cryptography beyond the DNSSEC signature on the range statement. In NSEC, the endpoints are actual domain names that exist. NSEC stands for “Next Secure,” referring to the fact that the second endpoint in the range is the “next” existing domain name following the first endpoint.

The NSEC resource record is documented in one of the original DNSSEC specifications, RFC 4033, which was co-authored by Verisign.

The .arpa zone implements NSEC. When the .arpa server receives the request “What is the IP address of example.arpa,” it returns the response “There are no names between e164.arpa and home.arpa.” This exchange is shown in the figure below and is analyzed in the associated DNSviz graph. (The response is accurate as of the writing of this post; it could be different in the future if names were added to or removed from the .arpa zone.)

NSEC has a side effect: responses immediately reveal unqueried domain names in the zone. Depending on the sensitivity of the zone, this may be undesirable from the perspective of the minimum disclosure principle.

Figure 1. An example of a NSEC proof of non-existence (as of the writing of this post).
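To see how a name server chooses the appropriate signed range statement, consider this minimal sketch over a toy zone. Plain lexicographic ordering stands in for DNSSEC’s canonical ordering here, and the zone contents are just examples.

```python
import bisect

# Toy zone contents in canonical order (illustrative names only).
zone = ["e164.arpa", "home.arpa", "in-addr.arpa", "ip6.arpa", "uri.arpa"]

def covering_range(qname: str):
    """Select the precomputed (signed) range statement proving qname's
    non-existence; the range wraps around at the zone boundary, as NSEC's does."""
    i = bisect.bisect_left(zone, qname)
    if i < len(zone) and zone[i] == qname:
        return None  # the name exists: a signed positive response is returned instead
    start = zone[i - 1] if i > 0 else zone[-1]
    end = zone[i] if i < len(zone) else zone[0]
    return start, end

print(covering_range("example.arpa"))  # ('e164.arpa', 'home.arpa')
```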

NSEC3

A second approach, called NSEC3, reduces the disclosure risk somewhat by defining the endpoints as hashes of existing domain names. (NSEC3 is documented in RFC 5155, which was also co-authored by Verisign.)

An example of NSEC3 can be seen with example.name, another domain that doesn’t exist. Here, the .name TLD server returns a range statement that “There are no domain names with hashes between 5SU9… and 5T48…”. Because the hash of example.name is “5SVV…”, the response implies that “example.name” doesn’t exist.

This statement is shown in the figure below and in another DNSviz graph. (As above, the actual response could change if the .name zone changes.)

Figure 2. An example of a NSEC3 proof of non-existence based on a hash function (as of the writing of this post).

To find out which domain name corresponds to one of the hashed endpoints, an adversary would have to do a trial-and-error or “dictionary” attack across multiple guesses of domain names, to see if any has a matching hash value. Such a search could be performed “offline,” i.e., without further interaction with the name server, which is why the disclosure risk is only somewhat reduced.
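For readers who want to reproduce such hashes, here’s a minimal sketch of the RFC 5155 hash computation. The salt and iteration count would come from the zone’s NSEC3PARAM record; this toy assumes Python 3.10+ (for base32hex encoding) and omits error handling.

```python
import base64
import hashlib

def nsec3_hash(name: str, salt: bytes = b"", iterations: int = 0) -> str:
    """SHA-1 over the DNS wire-format name, re-hashed `iterations` more
    times with the salt appended each round (per RFC 5155)."""
    wire = b"".join(bytes([len(label)]) + label.encode("ascii")
                    for label in name.lower().rstrip(".").split(".")) + b"\x00"
    digest = hashlib.sha1(wire + salt).digest()
    for _ in range(iterations):
        digest = hashlib.sha1(digest + salt).digest()
    return base64.b32hexencode(digest).decode()  # NSEC3's base32hex alphabet

# An offline dictionary attack is just this function in a loop: hash
# candidate names and compare them against the published endpoints.
print(nsec3_hash("example.name"))
```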

NSEC and NSEC3 are mutually exclusive. Nearly all TLDs, including all TLDs operated by Verisign, implement NSEC3. In addition to .arpa, the root zone also implements NSEC.

In my next post, I’ll describe NSEC5, an approach still in the experimental stage, that replaces the hash function in NSEC3 with a verifiable random function (VRF) to protect against offline dictionary attacks. I’ll also share some research Verisign Labs has done on a complementary approach that helps protect a client’s queries for non-existent domain names from disclosure.

Read the complete six blog series:

  1. The Domain Name System: A Cryptographer’s Perspective
  2. Cryptographic Tools for Non-Existence in the Domain Name System: NSEC and NSEC3
  3. Newer Cryptographic Advances for the Domain Name System: NSEC5 and Tokenized Queries
  4. Securing the DNS in a Post-Quantum World: New DNSSEC Algorithms on the Horizon
  5. Securing the DNS in a Post-Quantum World: Hash-Based Signatures and Synthesized Zone Signing Keys
  6. Information Protection for the Domain Name System: Encryption and Minimization


The Domain Name System: A Cryptographer’s Perspective

This is the first in a multi-part blog series on cryptography and the Domain Name System (DNS).

As one of the earliest protocols of the internet, the DNS emerged in an era in which today’s global network was still an experiment. Security was not a primary consideration then, and the design of the DNS, like other parts of the internet of the day, did not have cryptography built in.

Today, cryptography is part of almost every protocol, including the DNS. And from a cryptographer’s perspective, as I described in my talk at last year’s International Cryptographic Module Conference (ICMC20), there’s so much more to the story than just encryption.

Where It All Began: DNSSEC

The first broad-scale deployment of cryptography in the DNS was not for confidentiality but for data integrity, through the Domain Name System Security Extensions (DNSSEC), introduced in 2005.

The story begins with an exchange that occurs millions of times a second around the world: a client asks a DNS resolver a query like “What is example.com’s Internet Protocol (IP) address?” The resolver in this case answers: “example.com’s IP address is 93.184.216.34”. (This is the correct answer.)

If the resolver doesn’t already know the answer to the request, then the process to find the answer goes something like this (a code sketch follows the list):

  • With qname minimization, when the resolver receives this request, it starts by asking a related question to one of the DNS’s 13 root servers, such as the A and J root servers operated by Verisign: “Where is the name server for the .com top-level domain (TLD)?”
  • The root server refers the resolver to the .com TLD server.
  • The resolver asks the TLD server, “Where is the name server for the example.com second-level domain (SLD)?”
  • The TLD server then refers the resolver to the example.com server.
  • Finally, the resolver asks the SLD server, “What is example.com’s IP address?” and receives an answer: “93.184.216.34”.
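The sketch below traces this referral process using the dnspython package, assuming it’s installed and UDP port 53 is reachable. A real resolver adds caching, glue handling, retries and, as discussed next, DNSSEC validation.

```python
import dns.message
import dns.query
import dns.resolver

def iterate(qname: str, server: str = "198.41.0.4"):  # a.root-servers.net
    """Follow referrals from a root server down to an authoritative answer."""
    while True:
        query = dns.message.make_query(qname, "A")
        response = dns.query.udp(query, server, timeout=5)
        if response.answer:
            return [str(rrset) for rrset in response.answer]
        # A referral: take a name server from the authority section and
        # descend one level in the hierarchy.
        ns_name = str(response.authority[0][0].target)
        # (Uses the system resolver for the name server's address, for brevity.)
        server = str(dns.resolver.resolve(ns_name, "A")[0])

print(iterate("example.com"))
```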

Digital Signatures

But how does the resolver know that the answer it ultimately receives is correct? The process defined by DNSSEC follows the same “delegation” model from root to TLD to SLD as I’ve described above.

Indeed, DNSSEC provides a way for the resolver to check that the answer is correct by validating a chain of digital signatures, one at each level of the DNS hierarchy (or technically, at each “zone” in the delegation process). These digital signatures are generated using public key cryptography, a well-understood process based on key pairs, one public and one private.

In a typical DNSSEC deployment, there are two active public keys per zone: a Key Signing Key (KSK) public key and a Zone Signing Key (ZSK) public key. (The reason for having two keys is so that one key can be changed locally, without the other key being changed.)

The responses returned to the resolver include digital signatures generated by either the corresponding KSK private key or the corresponding ZSK private key.

Using mathematical operations, the resolver checks all the digital signatures it receives in association with a given query. If they are valid, the resolver returns the “Digital Signature Validated” indicator to the client that initiated the query.

Trust Chains

Figure 1: A Simplified View of the DNSSEC Chain.

A convenient way to visualize the collection of digital signatures is as a “trust chain” from a “trust anchor” to the DNS record of interest, as shown in the figure above. The chain includes “chain links” at each level of the DNS hierarchy. Here’s how the “chain links” work:

The root KSK public key is the “trust anchor.” This key is widely distributed in resolvers so that they can independently authenticate digital signatures on records in the root zone, and thus authenticate everything else in the chain.

The root zone chain links consist of three parts:

  1. The root KSK public key is published as a DNS record in the root zone. It must match the trust anchor.
  2. The root ZSK public key is also published as a DNS record. It is signed by the root KSK private key, thus linking the two keys together.
  3. The hash of the TLD KSK public key is published as a DNS record. It is signed by the root ZSK private key, further extending the chain.

The TLD zone chain links also consist of three parts:

  1. The TLD KSK public key is published as a DNS record; its hash must match the hash published in the root zone.
  2. The TLD ZSK public key is published as a DNS record, which is signed by the TLD KSK private key.
  3. The hash of the SLD KSK public key is published as a DNS record. It is signed by the TLD ZSK private key.

The SLD zone chain links once more consist of three parts:

  1. The SLD KSK public key is published as a DNS record. Its hash, as expected, must match the hash published in the TLD zone.
  2. The SLD ZSK public key is published as a DNS record signed by the SLD KSK private key.
  3. A set of DNS records – the ultimate response to the query – is signed by the SLD ZSK private key.

A resolver (or anyone else) can thereby verify the signature on any set of DNS records given the chain of public keys leading up to the trust anchor.
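The link structure above can be exercised end to end in a toy model. The sketch below uses Ed25519 signatures and a simple Zone class of my own devising; it mimics the chain links schematically and is not real DNSSEC record processing.

```python
import hashlib
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def raw(pk):  # raw public-key bytes, standing in for a DNSKEY record
    return pk.public_bytes(serialization.Encoding.Raw,
                           serialization.PublicFormat.Raw)

class Zone:
    def __init__(self):
        self.ksk = Ed25519PrivateKey.generate()   # key signing key
        self.zsk = Ed25519PrivateKey.generate()   # zone signing key
        # Link 2: the ZSK public key, signed by the KSK private key.
        self.zsk_sig = self.ksk.sign(raw(self.zsk.public_key()))

    def delegate(self, child):
        # Link 3: the hash (DS) of the child's KSK, signed by this zone's ZSK.
        ds = hashlib.sha256(raw(child.ksk.public_key())).digest()
        return ds, self.zsk.sign(ds)

root, tld, sld = Zone(), Zone(), Zone()
trust_anchor = raw(root.ksk.public_key())   # distributed with resolvers
tld_ds, tld_ds_sig = root.delegate(tld)
sld_ds, sld_ds_sig = tld.delegate(sld)
answer = b"example.com. A 93.184.216.34"
answer_sig = sld.zsk.sign(answer)           # signature on the final response

# Resolver-side validation, walking down from the trust anchor.
assert raw(root.ksk.public_key()) == trust_anchor       # link 1 at the root
links = [(root, tld, tld_ds, tld_ds_sig), (tld, sld, sld_ds, sld_ds_sig)]
for parent, child, ds, ds_sig in links:
    parent.ksk.public_key().verify(parent.zsk_sig, raw(parent.zsk.public_key()))
    parent.zsk.public_key().verify(ds_sig, ds)           # DS signed by parent ZSK
    assert ds == hashlib.sha256(raw(child.ksk.public_key())).digest()
sld.ksk.public_key().verify(sld.zsk_sig, raw(sld.zsk.public_key()))
sld.zsk.public_key().verify(answer_sig, answer)          # the ultimate response
print("chain validated")
```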

Note that this is a simplified view, and there are other details in practice. For instance, the various KSK public keys are also signed by their own private KSK, but I’ve omitted these signatures for brevity. The DNSViz tool provides a very nice interactive interface for browsing DNSSEC trust chains in detail, including the trust chain for example.com discussed here.

Updating the Root KSK Public Key

The effort to update the root KSK public key, the aforementioned “trust anchor,” was one of the most challenging and successful projects undertaken by the DNS community over the past couple of years. This initiative – the so-called “root KSK rollover” – was challenging because there was no easy way to determine whether resolvers actually had been updated to use the latest root KSK; remember that cryptography and security were added on rather than built into the DNS. Many resolvers needed to be updated, each independently managed.

The research paper “Roll, Roll, Roll your Root: A Comprehensive Analysis of the First Ever DNSSEC Root KSK Rollover” details the process of updating the root KSK. The paper, co-authored by Verisign researchers and external colleagues, received the distinguished paper award at the 2019 Internet Measurement Conference.

Final Thoughts

I’ve focused here on how a resolver validates correctness when the response to a query has a “positive” answer — i.e., when the DNS record exists. Checking correctness when the answer doesn’t exist gets even more interesting from a cryptographer’s perspective. I’ll cover this topic in my next post.

Read the complete six blog series:

  1. The Domain Name System: A Cryptographer’s Perspective
  2. Cryptographic Tools for Non-Existence in the Domain Name System: NSEC and NSEC3
  3. Newer Cryptographic Advances for the Domain Name System: NSEC5 and Tokenized Queries
  4. Securing the DNS in a Post-Quantum World: New DNSSEC Algorithms on the Horizon
  5. Securing the DNS in a Post-Quantum World: Hash-Based Signatures and Synthesized Zone Signing Keys
  6. Information Protection for the Domain Name System: Encryption and Minimization


A Balanced DNS Information Protection Strategy: Minimize at Root and TLD, Encrypt When Needed Elsewhere

Over the past several years, questions about how to protect information exchanged in the Domain Name System (DNS) have come to the forefront.

One of these questions was posed first to DNS resolver operators in the middle of the last decade, and is now being brought to authoritative name server operators: “to encrypt or not to encrypt?” It’s a question that Verisign has been considering for some time as part of our commitment to security, stability and resiliency of our DNS operations and the surrounding DNS ecosystem.

Because authoritative name servers operate at different levels of the DNS hierarchy, the answer is not a simple “yes” or “no.” As I will discuss in the sections that follow, different information protection techniques fit the different levels, based on a balance between cryptographic and operational considerations.

How to Protect, Not Whether to Encrypt

Rather than asking whether to deploy encryption for a particular exchange, we believe it is more important to ask how best to address the information protection objectives for that exchange — whether by encryption, alternative techniques or some combination thereof.

Information protection must balance three objectives:

  • Confidentiality: protecting information from disclosure;
  • Integrity: protecting information from modification; and
  • Availability: protecting information from disruption.

The importance of balancing these objectives is well-illustrated in the case of encryption. Encryption can improve confidentiality and integrity by making it harder for an adversary to view or change data, but encryption can also impair availability, by making it easier for an adversary to cause other participants to expend unnecessary resources. This is especially the case in DNS encryption protocols where the resource burden for setting up an encrypted session rests primarily on the server, not the client.

The Internet-Draft “Authoritative DNS-Over-TLS Operational Considerations,” co-authored by Verisign, expands on this point:

Initial deployments of [Authoritative DNS over TLS] may offer an immediate expansion of the attack surface (additional port, transport protocol, and computationally expensive crypto operations for an attacker to exploit) while, in some cases, providing limited protection to end users.

Complexity is also a concern because DNS encryption requires changes on both sides of an exchange. The long-installed base of DNS implementations, as Bert Hubert has parabolically observed, can only sustain so many more changes before one becomes the “straw that breaks the back” of the DNS camel.

The flowchart in Figure 1 describes a two-stage process for factoring in operational risk when determining how to mitigate the risk of disclosure of sensitive information in a DNS exchange. The process involves two questions.

Figure 1: Two-stage process for factoring in operational risk when determining how to address information protection objectives for a DNS exchange.

This process can readily be applied to develop guidance for each exchange in the DNS resolution ecosystem.

Applying Guidance

Three main exchanges in the DNS resolution ecosystem are considered here, as shown in Figure 2: resolver-to-authoritative at the root and TLD level; resolver-to-authoritative below these levels; and client-to-resolver.

Figure 2: Three DNS resolution exchanges: (1) resolver-to-authoritative at the root and TLD levels; (2) resolver-to-authoritative at the SLD level (and below); (3) client-to-resolver. Different information protection guidance applies for each exchange. The exchanges are shown with qname minimization implemented at the root and TLD levels.

1. Resolver-to-Root and Resolver-to-TLD

The resolver-to-authoritative exchange at the root level enables DNS resolution for all underlying domain names; the exchange at the TLD level does the same for all names under a TLD. These exchanges provide global navigation for all names, benefiting all resolvers and therefore all clients, and making the availability objective paramount.

As a resolver generally services many clients, information exchanged at these levels represents aggregate interests in domain names, not the direct interests of specific clients. The sensitivity of this aggregated information is therefore relatively low to start with, as it does not contain client-specific details beyond the queried names and record types. However, the full domain name of interest to a client has conventionally been sent to servers at the root and TLD levels, even though this is more information than they need to know to refer the resolver to authoritative name servers at lower levels of the DNS hierarchy. The additional detail in the full domain name may be considered sensitive in some cases. Therefore, this data exchange merits consideration for protection.

The decision flow (as generally described in Figure 1) for this exchange is as follows:

  1. Are there alternatives with lower operational risk that can mitigate the disclosure risk? Yes. Techniques such as qname minimization, NXDOMAIN cut processing and aggressive DNSSEC caching — which we collectively refer to as minimization techniques — can reduce the information exchanged to just the resolver’s aggregate interests in TLDs and SLDs, removing the further detail of the full domain names and meeting the principle of minimum disclosure.
  2. Does the benefit of encryption in mitigating any residual disclosure risk justify its operational risk? No. The information exchanged is just the resolver’s aggregate interests in TLDs and SLDs after minimization techniques are applied, and any further protection offered by encryption wouldn’t justify the operational risk of a protocol change affecting both sides of the exchange. While some resolvers and name servers may be open to the operational risk, it shouldn’t be required of others.

Interestingly, while minimization itself is sufficient to address the disclosure risk at the root and TLD levels, encryption alone isn’t. Encryption protects against disclosure to outside observers but not against disclosure to (or by) the name server itself. Even though the sensitivity of the information is relatively low for reasons noted above, without minimization, it’s still more than the name server needs to know.
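As an aside on mechanics, qname minimization itself is simple to express. Here’s a minimal sketch in the spirit of RFC 7816; the function is hypothetical toy logic, and real implementations also handle zone cuts discovered at runtime, CNAMEs, and other details.

```python
def minimized_qname(full_name: str, zone: str) -> str:
    """Query name to send to `zone`'s server: one label past the zone cut."""
    labels = full_name.rstrip(".").split(".")
    zone_depth = len([l for l in zone.rstrip(".").split(".") if l])
    return ".".join(labels[-(zone_depth + 1):])

# To the root, a resolver reveals only the TLD; each level learns one more label.
assert minimized_qname("private.example.com", ".") == "com"
assert minimized_qname("private.example.com", "com") == "example.com"
assert minimized_qname("private.example.com", "example.com") == "private.example.com"
```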

Summary: Resolvers should apply minimization techniques at the root and TLD levels. Resolvers and root and TLD servers should not be required to implement DNS encryption on these exchanges.

Note that at this time, Verisign has no plans to implement DNS encryption at the root or TLD servers that the company operates. Given the availability of qname minimization, which we are encouraging resolver operators to implement, and other minimization techniques, we do not currently see DNS encryption at these levels as offering an appropriate risk / benefit tradeoff.

2. Resolver-to-SLD and Below

The resolver-to-authoritative exchanges at the SLD level and below enable DNS resolution within specific namespaces. These exchanges provide local optimization, benefiting all resolvers and all clients interacting with the included namespaces.

The information exchanged at these levels also represents the resolver’s aggregate interests, but in some cases, it may also include client-related information such as the client’s subnet. The full domain names and the client-related information, if any, are the most sensitive parts.

The decision flow for this exchange is as follows:

  1. Are there alternatives with lower operational risk that can mitigate the disclosure risk? Partially. Minimization techniques may apply to some exchanges at the SLD level and below if the full domain names have more than three labels. However, they don’t help as much with the lowest-level authoritative name server involved, because that name server needs the full domain name.
  2. Does the benefit of encryption in mitigating any residual disclosure risk justify its operational risk? In some cases. If the exchange includes client-specific information, then the risk may be justified. If full domain names include labels beyond the TLD and SLD that are considered more sensitive, this might be another justification. Otherwise, the benefit of encryption generally wouldn’t justify the operational risk, and for this reason, as in the previous case, encryption shouldn’t be required, except in the specific cases noted.

Summary: Resolvers and SLD servers (and below) should implement DNS encryption on their exchanges if they are sending sensitive full domain names or client-specific information. Otherwise, they should not be required to implement DNS encryption.

3. Client-to-Resolver

The client-to-resolver exchange enables navigation to all domain names for all clients of the resolver.

The information exchanged here represents the interests of each specific client. The sensitivity of this information is therefore relatively high, making confidentiality vital.

The decision flow in this case is as follows:

  1. Are there alternatives with lower operational risk that can mitigate the disclosure risk? Not really. Minimization techniques don’t apply here because the resolver needs the full domain name and the client-specific information included in the request, namely the client’s IP address. Other alternatives, such as oblivious DNS, have been proposed, but are more complex and arguably increase operational risk. Note that some clients use security and confidentiality solutions at different layers of the protocol stack (e.g. a virtual private network (VPN), etc.) to protect DNS exchanges.
  2. Does the benefit of encryption in mitigating any residual disclosure risk justify its operational risk? Yes.

Summary: Clients and resolvers should implement DNS encryption on this exchange, unless the exchange is otherwise adequately protected, for instance as part of the network connection provided by an enterprise or internet service provider.

Summary of Guidance

The following table summarizes the guidance for how to protect the various exchanges in the DNS resolution ecosystem.

In short, the guidance is “minimize at root and TLD, encrypt when needed elsewhere.”

Resolver-to-Root and TLD
Purpose: Global navigational availability
Guidance:
  • Resolvers should apply minimization techniques
  • Resolvers and root and TLD servers should not be required to implement DNS encryption on these exchanges

Resolver-to-SLD and Below
Purpose: Local performance optimization
Guidance:
  • Resolvers and SLD servers (and below) should implement DNS encryption on their exchanges if they are sending sensitive full domain names or client-specific information
  • Otherwise, resolvers and SLD servers (and below) should not be required to implement DNS encryption

Client-to-Resolver
Purpose: General DNS resolution
Guidance:
  • Clients and resolvers should implement DNS encryption on this exchange, unless adequate protection is otherwise provided, e.g., as part of a network connection

Conclusion

If the guidance suggested here were followed, we could expect to see more deployment of minimization techniques on resolver-to-authoritative exchanges at the root and TLD levels; more deployment of DNS encryption, when needed, at the SLD levels and lower; and more deployment of DNS encryption on client-to-resolver exchanges.

In all these deployments, the DNS will serve the same purpose as it already does in today’s unencrypted exchanges: enabling general-purpose navigation to information and resources on the internet.

DNS encryption also brings two new capabilities that make it possible for the DNS to serve two new purposes. Both are based on concepts developed in Verisign’s research program.

  1. The client can authenticate itself to a resolver or name server that supports DNS encryption, and the resolver or name server can limit its DNS responses based on client identity. This capability enables a new set of security controls for restricting access to network resources and information. We call this approach authenticated resolution.
  2. The client can confidentially send client-related information to the resolver or name server, and the resolver or name server can optimize its DNS responses based on client characteristics. This capability enables a new set of performance features for speeding access to those same resources and information. We call this approach adaptive resolution.

You can read more about these two applications in my recent blog post on this topic, Authenticated Resolution and Adaptive Resolution: Security and Navigational Enhancements to the Domain Name System.


