
Technical Committee Report 07/06/02


Internet Protocol Version 6 Primer

Report adopted by the Council Meeting of 7 June 2002

D. Zanetti, Chair
30 May 2002

1. Introduction

This paper is intended to give members of the Council of InternetNZ some background material on one of the most ambitious changes proposed for the Internet: a major revision of its core protocol set.

We shall divide the issues into three parts: Problems with the current protocol set, solutions within the existing protocol, and solutions provided by migrating to a new version of the protocol.

2. Problems with the IPv4 Internet

The fourth version of the Internet Protocol became the accepted standard in the early 1980s, and has by and large performed well in spite of the huge uptake of the Internet in the mid 1990s. However, a number of problems with both the protocol itself and the implementation of the Internet have cropped up with its increasing size.

2.1 Address Allocation

Early decisions were taken to divide the IPv4 address space (which is 32 bits long) into classes. Three of these classes (A-C) were set aside as pools of unicast space for networks of different sizes, and one class (Class D) was allocated for multicast use. There is a fifth class, E, which is reserved for experimental use.

A class A represents 16.7M IP addresses, more than any organisation should ever need. Nonetheless, very little justification was required to obtain such a large chunk of space. Many organisations ended up with far larger allocations than they needed at the time, or need now.
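
As a rough check of those numbers, the classful pool sizes can be worked out directly (a quick Python sketch; reserved and special-purpose ranges are ignored here):

    # Sizes of the classful unicast allocation units.
    class_a = 2 ** 24    # 16,777,216 addresses per class A network
    class_b = 2 ** 16    # 65,536 addresses per class B network
    class_c = 2 ** 8     # 256 addresses per class C network
    print(f"Class A: {class_a:,}  Class B: {class_b:,}  Class C: {class_c:,}")
    # Class A: 16,777,216  Class B: 65,536  Class C: 256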

With the massive uptake of the Internet, allocation policies have become stricter. Firewalling has become more common, calling into question the need to allocate globally unique IP addresses to an organisation's internal hosts. Even so, many organisations hold on to these large blocks. ISPs the world over are collecting up smaller contiguous blocks to allocate to their customers, but this only leads to other problems. (See Class-based routing below.)

The supply of addresses that can easily be made available is running short. If uptake outside the US were not enough to run the space dry, the drive towards "always on" Internet connections for the masses will certainly make sure of it.

2.2 Class-based routing

Just as addresses were handed out only in classes, the early Internet routed that address space by class only. When the Internet was small, this worked quite well. But with the pressure on address space, and the need to stitch together smaller blocks, the global routing table is quickly becoming unmanageable.

At the edge of the Internet, you can rely on default routes to make life easier. Where does this packet need to go? Doesn't matter, the default route will surely get it closer to the destination. Away from the edges, it becomes much more important to know exactly where every single block is.
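
To make the contrast concrete, here is a toy longest-prefix lookup in Python. The prefixes and next hops are invented for the example; a default route lets an edge router get by with a tiny table, while a router in the core must carry a specific route for every block.

    import ipaddress

    # A minimal edge routing table: one local network plus a default route.
    edge_table = {
        ipaddress.ip_network("192.0.2.0/24"): "local LAN",
        ipaddress.ip_network("0.0.0.0/0"):    "upstream ISP (default route)",
    }

    def lookup(table, addr):
        """Longest-prefix match: the most specific route containing addr wins."""
        addr = ipaddress.ip_address(addr)
        best = max((net for net in table if addr in net),
                   key=lambda net: net.prefixlen)
        return table[best]

    print(lookup(edge_table, "192.0.2.57"))    # local LAN
    print(lookup(edge_table, "198.51.100.9"))  # upstream ISP (default route)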

Early on, many organisations were also allocated space directly, instead of obtaining it from their ISP. This too bloats the global routing table, and yet most organisations do not need "portable" address space. The overwhelming use of the DNS means there is little need for resources to be reachable at a fixed IP address.

2.3 Fragmentation

IP does not make assumptions about the capabilities of the communications links it runs over. This is important, as different technologies place different limits on the size of their packets. Ethernet, for example, requires that frames (packets, more or less) be no larger than around 1,500 octets. Serial links over modems have no practical limit on their packet size. The maximum size of packet a link will accept is called its Maximum Transmission Unit (MTU).

When a router is about to send a packet over a link, it must consider the MTU of that link. If the packet is larger than the MTU, it has a problem. Under IPv4 the oversized packet is split into fragments, which are then sent over the link. The router does this by default, unless the packet's "Don't Fragment" bit tells it otherwise.
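
The arithmetic of fragmentation looks roughly like the Python sketch below; the 20-octet header size and the example figures are assumptions, and a real stack also has to copy certain header options into each fragment.

    IP_HEADER = 20    # octets, assuming a header with no options

    def fragment(payload_len, mtu):
        """Split a payload into fragments that fit the link MTU.
        Fragment offsets must fall on 8-octet boundaries, so the data
        carried per fragment is rounded down to a multiple of 8."""
        max_data = (mtu - IP_HEADER) // 8 * 8
        fragments, offset = [], 0
        while offset < payload_len:
            size = min(max_data, payload_len - offset)
            more = (offset + size) < payload_len    # the "More Fragments" flag
            fragments.append((offset, size, more))
            offset += size
        return fragments

    # A 4,000-octet payload crossing a link with a 1,500-octet MTU:
    for off, size, mf in fragment(4000, 1500):
        print(f"offset={off:5d}  data={size:5d}  MF={int(mf)}")
    # offset=    0  data= 1480  MF=1
    # offset= 1480  data= 1480  MF=1
    # offset= 2960  data= 1040  MF=0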

Fragmentation is costly at the receiving end, which must hold on to fragments that may never arrive. Fragments are also complicated for firewalls to handle; many simply don't try, and either reassemble such packets before considering them or throw the fragments away.

Lastly, most operating systems have an appalling track record for implementing the handling of fragments correctly. The "Teardrop" DoS attacks around 1998, which affected nearly every operating system regardless of who wrote it, are a prime example.

2.4 Encryption

The Internet has moved on from being a closed network of like-minded individuals and organisations, and is widely used for data transfers which should be kept away from prying eyes. Protecting that data has largely been tacked on at the application layer (using PGP or SSL/TLS, for example).

3. Solutions to IPv4's problems within IPv4

3.1 Network Address Translation

One method of getting around the lack of IP addresses is to use NAT. If packets are rewritten at some point between the client and the server, so that the client appears to be at a different address (e.g. a real, globally unique one), then we do not require so many IP addresses. In theory, anyway.

In practice, it's a useful technique, but it has limitations. It works very well when the higher-level protocols being carried are simple client-to-server connections, such as HTTP. Add any IP addresses or ports to the higher-level data, or require connections initiated from the server back to the client, and it falls apart.

Those problems too can be worked around, but it requires the device doing NAT to understand those high-level protocols, or to implement "1 to 1" NAT, which somewhat reduces the re-usability of addresses. It also breaks the "end to end" principle, by requiring the middle to have some intelligence about it, instead of just routing packets. NAT is not a long-term solution to the lack of address space, although it remains useful for other purposes.
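
To make the rewriting concrete, the sketch below (Python, with invented addresses and ports) shows the sort of translation table a port-overloading NAT device maintains; real devices also track timeouts, transport protocols and checksum adjustments.

    PUBLIC_ADDR = "203.0.113.1"     # the one globally unique address we own
    table = {}                      # (private addr, private port) -> public port
    next_port = 40000

    def translate_outbound(src_addr, src_port):
        """Map an internal (address, port) pair onto the shared public address."""
        global next_port
        key = (src_addr, src_port)
        if key not in table:
            table[key] = next_port
            next_port += 1
        return PUBLIC_ADDR, table[key]

    def translate_inbound(dst_port):
        """Work out which internal host a reply on a public port belongs to."""
        for (addr, port), pub_port in table.items():
            if pub_port == dst_port:
                return addr, port
        return None                 # no mapping: the packet is dropped

    print(translate_outbound("192.168.1.10", 5123))   # ('203.0.113.1', 40000)
    print(translate_inbound(40000))                   # ('192.168.1.10', 5123)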

3.2 Forced re-numbering

We could gain considerable space by forcing organisations that have excessive allocations to renumber. Carefully managed, this could result in a considerably longer lifetime for the IPv4 space. Re-numbering is a slightly painful process, but it is made easier with good planning and judicious use of the DNS.

Fairly obviously, this is a difficult process to get traction on: many organisations believe they have a right to the address space they hold. It could also be argued that if IPv6 is "just around the corner", as it has been for several years, why not renumber into IPv6 address space instead? It ends up being much the same effort, although it does require IPv6-capable devices.

3.3 Classless Inter-Domain Routing

With the introduction of CIDR, there are ways to reduce the size of the routing table, by aggregating groups of smaller blocks into a single CIDR route (see the sketch below), and to free up unused address space by splitting the old, larger blocks into smaller ones. But with so many organisations holding their own blocks, the effect is not as significant as it could be.
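
As a small illustration of the aggregation, using Python's ipaddress module with invented prefixes:

    import ipaddress

    # Four contiguous /24 blocks that would otherwise be announced separately...
    blocks = [ipaddress.ip_network(f"198.51.{i}.0/24") for i in range(100, 104)]

    # ...collapse into a single /22 route in the global table.
    print(list(ipaddress.collapse_addresses(blocks)))
    # [IPv4Network('198.51.100.0/22')]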

For CIDR to work, organisations need to give up their blocks unless they can prove they need portable space, and instead draw space from their ISP. This requires re-numbering, which, as we've noted, runs into some issues. The routing tables are growing again, so a much more heavy-handed approach needs to be taken urgently before the routing table collapses.

3.4 IPsec

IPsec does offer ways of introducing encryption and authentication at lower levels, but does so by encapsulating the traffic, rather than simply being "part" of IP. (IPsec is also usable in IPv6, but in a slightly different manner.)

IPsec does not immediately lend itself to replacing methods such as SSL/TLS. It is not widely implemented as a default part of most IP code, although this is changing.

4. IPv6 Overview

IPv6 involves a number of changes to the IP protocol. The key changes are:

  • Address space increased from 32 bits to 128 bits
  • Fragmentation is removed
  • Multicast clarified by adding scope, instead of relying on TTLs
  • Broadcast addresses removed, with their role taken over by multicast, and "anycast" addresses added
  • Reduced information in packet headers
  • Extensible header format with arbitrary-length headers, rather than always relying on encapsulation

There are many other changes (such as clarifying that addresses apply to "interfaces", not "nodes", and that an interface may have more than one IPv6 address), which we will skip over.

4.1 Address space

The address space is increased from roughly 4.3*10^9 addresses to 3.4*10^38 addresses, nearly 30 orders of magnitude more. Even worst-case estimates, which allow for the overhead of being able to route to every square metre, work out to around 1,500 addresses for every square metre of the Earth's surface, oceans included.
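
The arithmetic behind those figures can be checked directly (a quick Python sketch):

    # Checking the address-count figures quoted above.
    ipv4 = 2 ** 32      # 4,294,967,296 addresses (about 4.3 * 10^9)
    ipv6 = 2 ** 128     # about 3.4 * 10^38 addresses
    print(f"IPv4: {ipv4:.1e}   IPv6: {ipv6:.1e}   ratio: {ipv6 / ipv4:.1e}")
    # IPv4: 4.3e+09   IPv6: 3.4e+38   ratio: 7.9e+28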

With careful management, this should be more address space than would be needed for decades to come. Currently 85% of the address space is reserved, on that basis.

As a bonus, because other fields have been dropped from the IPv4 header, the impact of moving to 128-bit addresses on the overall header size is much reduced.

4.2 CIDR is mandatory

IPv6 will, from day one, operate on a classless basis, and there will be stringent allocation rules. Most organisations will not get portable address space; instead they will largely get space from their ISP.

Further, there are clearer proposals to split address space by smaller geographic regions, hopefully further reducing the clutter in the routing table.

4.3 Authentication and Encryption

IPv6 requires all implementations to correctly handle at least the authentication extension headers, and strongly encourages them to handle the encryption ones as well.

Under IPv6, these functions are no longer tacked on, but a natural part of the IP protocol. They can also be added to and removed from packets much more cleanly, and are less invasive to the higher-level protocols. For example, IPsec with IPv4 requires all packets to carry a protocol type of 50 or 51, and then encapsulates the TCP (or UDP, etc.) with additional headers marking the real protocol type.

It's untidy, and amounts to tunnelling, rather than just protecting the payload. Tunnels are still useful (and just as possible with IPv6), but it's clearer that it is actually a tunnel.

4.4 Fragmentation is removed

IPv6 has no support for fragmented packets, which, as noted earlier, have caused problems in the past. In essence, IPv6 packets behave the same as IPv4 packets with the "Don't Fragment" bit set.

This is not as much of a change as it sounds, because most IP stacks do this by default today! It is far more efficient to work out the largest packet that can be sent over the whole path than to spend time waiting for fragments to arrive and re-assembling them. (Only to have TCP do more or less the same thing, at a higher level.)

Path MTU Discovery is well established in IPv4, and becomes mandatory under IPv6.
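
As a sketch of the idea (Python, with the network probe reduced to a stand-in function; a real implementation reacts to ICMP "Packet Too Big" messages from routers along the path):

    def path_mtu(send_probe, start=1500, floor=1280):
        """Find the largest packet the whole path accepts.
        send_probe(size) returns None on success, or the smaller MTU
        reported back by the complaining router."""
        size = start
        while size >= floor:
            reported = send_probe(size)
            if reported is None:
                return size         # the probe got through at this size
            size = reported         # shrink to the advertised bottleneck MTU
        return floor                # IPv6 links must carry at least 1,280 octets

    # Example: a path with a 1,400-octet bottleneck somewhere in the middle.
    print(path_mtu(lambda size: 1400 if size > 1400 else None))   # 1400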

5. Issues for InternetNZ to consider

In summary, there are two pressing issues InternetNZ must consider in the near future:

  • Address space is running out, and there are no long-term alternatives
  • The routing table will collapse unless radical changes to addressing are made.

There is no future for the Internet unless these two problems are solved in very short order. They must be solved permanently: there have already been several emergency "fixes", which have only bought us some more time.

IPv6 represents the cleanest way to solve these two problems, and we get a number of useful features for free as a result.

6. Recommendations

  • That InternetNZ support meaningful migration to IPv6, to begin as soon as possible
  • That InternetNZ form a working group to co-ordinate and facilitate ISP migration to IPv6

© 2002 The Internet Society of New Zealand
Last updated 3 June 2002
