Letters

Platform Technologies of IP Optical Traffic Engineering Server

Eiji Oki, Daisaku Shimazaki, Tomonori Takeda, Takashi Miyamura,
Ichiro Inoue, and Kohei Shiomoto

Abstract

This article describes the platform technologies of an Internet protocol (IP) and optical traffic engineering (TE) server, which performs traffic control in cooperation with IP routers and optical cross-connects in both IP networks and optical backbone networks. The IP optical TE server responds flexibly to unexpected changes in traffic demand and quickly restores network connectivity when a system failure or natural disaster occurs, while making efficient use of network resources.

NTT Network Service Systems Laboratories
Musashino-shi, 180-8585 Japan

1. Introduction

Networks are expected to be flexible enough to handle unexpected changes in traffic demand and robust against network failures and natural disasters, while making efficient use of network resources. NTT Network Service Systems Laboratories has been developing an Internet protocol (IP) and optical traffic engineering (TE) server (IP optical TE server), which performs traffic control in cooperation with IP routers and optical cross-connects in both IP networks and optical backbone networks [1]–[3]. This traffic control is called multilayer traffic engineering. This article introduces the server's functions and its platform technologies.

2. Functions of the IP optical TE server

Multilayer traffic engineering enables us to utilize network resources efficiently in both IP and optical networks. As shown in Fig. 1, the server manages the network resources of both networks and provides multilayer routes. In addition, it reconfigures a virtual network topology (VNT), consisting of several optical paths, in response to traffic demand changes and network failures. The server's functions consist of (1) multilayer and multidomain path computation, (2) VNT control, (3) traffic matrix estimation, and (4) network status visualization. The software structure of the server is shown in Fig. 2.


Fig. 1. Multilayer traffic engineering based on IP optical TE server.


Fig. 2. Software structure of IP optical TE server.

(1) Multilayer and multidomain path computation

Multilayer path computation: Let us consider computing a route between a source router and a destination router in an IP/optical network. There are two possible ways to do this: single-layer path computation and multilayer path computation. In single-layer path computation, only IP links that have already been established are considered; these existing IP links may themselves be optical paths. In multilayer path computation, on the other hand, the possibility of new IP links, i.e., optical paths that would be established later, is considered as well as the previously established ones. For example, if the source router is not connected to the destination router in the current IP topology, the IP optical TE server may decide to establish a new optical path for the requested end-to-end packet path based on both IP and optical network resources. The calculation takes into account constraints such as bandwidth, delay, link attributes, inclusive/exclusive routes, and protection class. The multilayer route that is finally selected also depends on the carrier's policy. For example, one policy might try to reuse existing optical paths as much as possible, while another might try to establish a new optical path in order to minimize the number of hops between the source and destination routers.
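
To make the idea concrete, the following is a minimal sketch, in Python, of multilayer path computation as a shortest-path search over a graph containing both existing IP links and candidate (not yet established) optical paths. The graph model, function names, and per-path penalty are illustrative assumptions, not the server's actual algorithm.

    # Illustrative sketch only: existing IP links and candidate optical paths
    # are both modeled as weighted edges; using a new optical path costs an
    # extra, policy-dependent penalty.
    import heapq

    def multilayer_route(existing_ip_links, candidate_optical_paths, src, dst,
                         new_path_penalty=10.0):
        """Each link is a tuple (node_a, node_b, cost).  Returns (cost, route)
        or None if the destination cannot be reached."""
        graph = {}
        for a, b, cost in existing_ip_links:
            graph.setdefault(a, []).append((b, cost))
            graph.setdefault(b, []).append((a, cost))
        for a, b, cost in candidate_optical_paths:
            graph.setdefault(a, []).append((b, cost + new_path_penalty))
            graph.setdefault(b, []).append((a, cost + new_path_penalty))

        queue, visited = [(0.0, src, [src])], set()
        while queue:
            cost, node, route = heapq.heappop(queue)
            if node == dst:
                return cost, route
            if node in visited:
                continue
            visited.add(node)
            for nxt, edge_cost in graph.get(node, []):
                if nxt not in visited:
                    heapq.heappush(queue, (cost + edge_cost, nxt, route + [nxt]))
        return None

A large new_path_penalty corresponds to the policy of reusing existing optical paths as much as possible, while a small one corresponds to the policy of establishing new optical paths to minimize the hop count.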

Multidomain path computation: In a large-scale network, it is difficult to operate the whole network as a single domain (where a domain means an area or autonomous system) because of the scalability limitations of routing protocols, so a large-scale network is operated as multiple domains. Since one IP optical TE server cannot handle all the domains, several such servers are distributed throughout the network, and they communicate with each other to provide an end-to-end path route.
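
The cooperation between the distributed servers can be pictured as each server computing the segment inside its own domain and the segments being stitched together at border routers. The sketch below assumes hypothetical interfaces (a compute_segment method and a border-router table); it is not the actual inter-server protocol.

    # Illustrative sketch only: stitching per-domain segments from cooperating
    # TE servers into an end-to-end route (hypothetical interfaces and names).

    def end_to_end_route(domain_servers, domain_sequence, borders, src, dst):
        """domain_servers: {domain: server exposing compute_segment(entry, exit)}.
        borders: {(domain, next_domain): border router joining the two domains}."""
        route, entry = [], src
        for i, domain in enumerate(domain_sequence):
            if i == len(domain_sequence) - 1:
                exit_node = dst
            else:
                exit_node = borders[(domain, domain_sequence[i + 1])]
            segment = domain_servers[domain].compute_segment(entry, exit_node)
            # Avoid repeating the border router shared by adjacent segments.
            route.extend(segment if not route else segment[1:])
            entry = exit_node
        return route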

(2) VNT reconfiguration

The IP optical TE server can reconfigure the VNT in response to a traffic demand change, a network failure, or a topology change, or under the control of an operator.
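
As a rough sketch of how such triggers might be dispatched (the event names and callback interfaces below are assumptions for illustration, not the server's internal design):

    # Illustrative sketch only: events that can trigger VNT reconfiguration.
    TRIGGERS = {"traffic_demand_change", "network_failure",
                "topology_change", "operator_command"}

    def handle_event(event_kind, current_vnt, design_vnt, apply_vnt):
        """design_vnt computes a new virtual network topology from the current
        one; apply_vnt sets up or tears down optical paths to realize it."""
        if event_kind not in TRIGGERS:
            return current_vnt
        new_vnt = design_vnt(event_kind, current_vnt)
        if new_vnt != current_vnt:
            apply_vnt(current_vnt, new_vnt)
        return new_vnt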

(3) Traffic matrix estimation

The VNT design algorithm uses a traffic matrix as input information. Each element of the traffic matrix is the traffic volume between a pair of border routers. As the network grows, the number of border routers increases and it becomes difficult to measure all the traffic volumes between border routers. In addition, in the case of IP packet networks, it also becomes difficult to analyze the IP headers of all the IP packets to determine their destination border routers. Therefore, the traffic matrix derivation method must be scalable in terms of network size.

Our approach is to estimate the traffic matrix from the traffic volume passing through IP links (which are optical paths) and from IP/MPLS routing information, instead of directly measuring the traffic volume between every pair of border routers. We implemented this traffic matrix estimation function in the IP optical TE server. The estimation proceeds in the following steps. In step 1, the traffic volume passing through IP links is retrieved via SNMP (simple network management protocol) from the interface management information base (MIB). In step 2, the traffic matrix is estimated from the retrieved traffic volumes and the IP/MPLS routing information. This approach reduces the complexity of the traffic measurement, making it proportional to the number of IP links, whereas in the conventional approach it is proportional to the number of border router pairs. As a result, VNT reconfiguration is possible even when the network size is large.
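
The following is a minimal numerical sketch of the kind of inference performed in step 2, assuming the routing information is summarized as a 0/1 routing matrix that maps border-router-pair traffic onto IP links. A plain least-squares fit is used purely for illustration; the server's actual estimation algorithm is not described in this article.

    # Illustrative sketch only: estimate per-pair traffic from per-link volumes
    # (measured via SNMP) and routing information.
    import numpy as np

    def estimate_traffic_matrix(routing_matrix, link_volumes):
        """routing_matrix: shape (num_links, num_pairs); entry 1 if the traffic
        of that border-router pair traverses that IP link, else 0.
        link_volumes: shape (num_links,).  Returns non-negative estimates."""
        estimate, *_ = np.linalg.lstsq(routing_matrix, link_volumes, rcond=None)
        return np.clip(estimate, 0.0, None)

    # Toy example: 3 IP links, 2 border-router pairs.
    A = np.array([[1.0, 0.0],   # link 1 carries pair 1 only
                  [1.0, 1.0],   # link 2 carries both pairs
                  [0.0, 1.0]])  # link 3 carries pair 2 only
    y = np.array([40.0, 100.0, 60.0])        # measured link volumes (Mbit/s)
    print(estimate_traffic_matrix(A, y))     # approximately [40. 60.]

The number of measurements needed grows with the number of IP links rather than with the number of border-router pairs, which is the scalability property mentioned above.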

(4) Visualization of network status

A network operator can efficiently control and manage networks by visualizing the multilayer network status. The visualization function displays the topology of each layer and the network status. A schematic screen image is shown in Fig. 3. It provides a real-time display of the physical network topology, the IP network topology (VNT), the traffic volume passing through optical paths and MPLS paths, and lists of MPLS and optical paths, including their routes and attributes.


Fig. 3. Visualization of multilayer network status.

3. Platform technologies for the IP optical TE server

The IP optical TE server consists of a platform part and an application part. The platform part provides common functions and related interfaces that support the application-specific functions in the application part. The server's functions and interfaces are listed in Table 1. The platform supports the OSPF (open shortest path first) protocol for collecting topology information, SNMP/MIB for retrieving traffic information, and PCEP (path computation element communication protocol) [4] for path computation requests and replies. It also supports a command line interface (CLI) for configuring and controlling routers. As much as possible, the server uses standardized interfaces to routers, such as OSPF, SNMP, and PCEP.
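
As an example of the SNMP/MIB interface, the traffic volume on an IP link can be derived from two successive reads of the interface MIB's 64-bit input-octet counter. The snmp_get helper below is hypothetical and stands in for whatever SNMP library is actually used.

    # Illustrative sketch only: converting interface-counter deltas to a rate.
    import time

    IF_HC_IN_OCTETS = "1.3.6.1.2.1.31.1.1.1.6"   # ifHCInOctets (IF-MIB)

    def link_rate_bps(snmp_get, router, if_index, interval=60):
        """Poll the counter twice and convert the octet delta to bits/s.
        Counter wrap-around handling is omitted for brevity."""
        oid = f"{IF_HC_IN_OCTETS}.{if_index}"
        first = snmp_get(router, oid)
        time.sleep(interval)
        second = snmp_get(router, oid)
        return (second - first) * 8 / interval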


Table 1. Functions and interfaces of IP and optical TE server.

PCEP is a protocol for communication between a path computation client (PCC) and a path computation element (PCE), or between two PCEs. The communication consists mainly of path computation requests and replies, as well as notifications and error messages. Examples of PCEP message sequences are shown in Fig. 4. PCEP lets us separate the path computation function, which used to be implemented in routers, from the routers themselves, so that network providers can operate their networks according to their own policies. PCEP is currently being standardized by network operators and vendors, including NTT. As shown in Fig. 5, multilayer TE is performed through communication between the IP optical TE server and commercial routers.
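
Schematically, the request/reply exchange can be pictured with the simplified data structures below, which mimic only a few PCEP concepts (endpoints, a bandwidth constraint, and the computed route); they are illustrative assumptions, not the actual PCEP message encoding.

    # Illustrative sketch only: a simplified PCC-to-PCE request/reply exchange.
    from dataclasses import dataclass, field

    @dataclass
    class PathComputationRequest:
        source: str
        destination: str
        bandwidth_mbps: float = 0.0                # requested bandwidth constraint

    @dataclass
    class PathComputationReply:
        route: list = field(default_factory=list)  # explicit route (hop list)
        no_path: bool = False                      # no route satisfies the request

    def pce_handle_request(request, compute_path):
        """compute_path is the PCE's path computation function; it returns a
        hop list or None when no route satisfies the constraints."""
        route = compute_path(request.source, request.destination,
                             request.bandwidth_mbps)
        if route is None:
            return PathComputationReply(no_path=True)
        return PathComputationReply(route=route)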


Fig. 4. Example of PCEP message sequences.


Fig. 5. Path computation element communication protocol (PCEP).

Information collected directly from the network is stored in the network status database in the platform part of the IP optical TE server. On the other hand, information that is derived from the network status database and used directly by an application is stored in the application database in the application part.

To support network status visualization, topology and traffic data are stored in the topology database and the traffic database, respectively. After the data has been transformed into extensible markup language (XML) format in the platform part, it is sent to the function block for network status visualization. This makes it easy to locate the visualization function block remotely from the main body of the server.
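
A minimal sketch of this kind of XML serialization, using Python's standard library, is shown below; the element and attribute names are invented for illustration and do not reflect the server's actual XML schema.

    # Illustrative sketch only: serializing topology data to XML for the
    # remotely located visualization block.
    import xml.etree.ElementTree as ET

    def topology_to_xml(nodes, links):
        """nodes: list of node names; links: list of (src, dst, traffic_mbps)."""
        root = ET.Element("topology")
        for name in nodes:
            ET.SubElement(root, "node", name=name)
        for src, dst, traffic in links:
            ET.SubElement(root, "link", src=src, dst=dst,
                          traffic_mbps=str(traffic))
        return ET.tostring(root, encoding="unicode")

    print(topology_to_xml(["R1", "R2"], [("R1", "R2", 120.5)]))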

The CLI function block for configuring and controlling routers depends on the type of network equipment. The platform part eliminates this equipment dependency by using definition files: when the software in a piece of network equipment is updated, or when the network consists of different types of equipment, we only have to change the definition files.
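
One way to picture the role of the definition files is as a mapping from an abstract operation to an equipment-specific command template, as in the sketch below. The template strings are invented placeholders, not actual commands for any particular router.

    # Illustrative sketch only: equipment-specific command templates loaded
    # from definition files hide the differences between equipment types.
    import json

    def load_definitions(path):
        """A definition file holds command templates keyed by equipment type."""
        with open(path) as f:
            return json.load(f)

    def build_command(definitions, equipment_type, operation, **params):
        return definitions[equipment_type][operation].format(**params)

    # Example definition data (normally read from a definition file):
    defs = {"router-type-A": {"set_path": "setup-path {name} from {src} to {dst}"},
            "router-type-B": {"set_path": "path add {name} src={src} dst={dst}"}}
    print(build_command(defs, "router-type-A", "set_path",
                        name="p1", src="R1", dst="R2"))

Supporting a new equipment type, or a software update that changes the command syntax, then only requires editing the corresponding definition file, as described above.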

4. Future development

In the future, we will conduct experiments on the IP optical TE server's functions, considering operation and control scenarios that may be used in practice.

References

[1] K. Shiomoto, E. Oki, I. Inoue, and S. Urushidani, “A server-based traffic engineering method in IP+Optical multilayer networks,” iPOP 2006, Session T3-4, June 2006.
[2] K. Shiomoto, E. Oki, D. Shimazaki, and T. Miyamura, “Multilayer Traffic Engineering Experiments in MPLS/GMPLS Networks,” IEEE BROADNETS 2007, September 2007.
[3] K. Shiomoto, I. Inoue, R. Matsuzaki, and E. Oki, “Research and Development of IP and Optical Networking,” NTT Technical Review, Vol. 5, No. 3, pp. 48–53, 2007.
[4] J. P. Vasseur, Ed., “Path Computation Element (PCE) Communication Protocol (PCEP),” IETF Internet-Draft, draft-ietf-pce-pcep-09.txt, Nov. 2007 (work in progress).
Eiji Oki
Senior Research Engineer, Backbone Networking Systems Group, Broadband Network Systems Project, NTT Network Service Systems Laboratories.
He received the B.E. and M.E. degrees in instrumentation engineering and the Ph.D. degree in electrical engineering from Keio University, Kanagawa, in 1991, 1993, and 1999, respectively. He joined NTT Communication Switching Laboratories in 1993. Since then, he has been researching multimedia-communication network architectures based on ATM techniques, traffic-control methods, and high-speed switching systems. From 2000 to 2001, he was a Visiting Scholar at Polytechnic University, Brooklyn, New York, where he was involved in designing terabit switch/router systems. He is now engaged in R&D of high-speed optical IP backbone networks. He received the 1998 Switching System Research Award and the 1999 Excellent Paper Award from the Institute of Electronics, Information and Communication Engineers (IEICE) of Japan, and the 2001 Asia-Pacific Outstanding Young Researcher Award presented by IEEE Communications Society for his contribution to broadband network, ATM, and optical IP technologies. He co-authored “Broadband Packet Switching Technologies,” published by John Wiley, New York, in 2001 and “GMPLS Technologies,” published by CRC Press, Boca Raton, in 2005. He is a Senior Member of IEEE and a member of IEICE.
Daisaku Shimazaki
Engineer, Backbone Networking Systems Group, Broadband Network Systems Project, NTT Network Service Systems Laboratories.
He received the B.E. degree in applied chemistry and M.S. degree in material science from Keio University, Kanagawa, in 1999 and 2001, respectively. He joined NTT Network Service Systems Laboratories in 2001. His research interests include IP optical networking and traffic engineering based on GMPLS techniques. He is a member of IEEE and IEICE.
Tomonori Takeda
Engineer, Backbone Networking Systems Group, Broadband Network Systems Project, NTT Network Service Systems Laboratories.
He received the M.E. degree in electronics, information and communication engineering from Waseda University, Tokyo, in 2001. He joined NTT in 2001 and has been engaged in research on the next-generation network architecture, IP optical network architecture, and related protocols. He currently co-chairs the L1VPN WG in IETF. He is a member of IEEE and IEICE.
Takashi Miyamura
Engineer, Backbone Networking Systems Group, Broadband Network Systems Project, NTT Network Service Systems Laboratories.
He received the B.E. and M.E. degrees from Osaka University, Osaka, in 1997 and 1999, respectively. In 1999, he joined NTT Network Service Systems Laboratories, where he engaged in R&D of a high-speed IP switching router. He is now researching future photonic IP networks and an optical switching system. He received the 2001 APCC Paper Award at the 7th Asia-Pacific Conference on Communications. He is a member of IEICE and the Operations Research Society of Japan.
Ichiro Inoue
Senior Research Engineer, Supervisor, Backbone Networking Systems Group, Broadband Network Systems Project, NTT Network Service Systems Laboratories.
He received the B.E. and M.E. degrees in electrical engineering from the University of Tokyo, Tokyo, in 1988 and 1990, respectively. He joined NTT in 1990. Since then, his research interests have included telecommunication protocols such as IP and ATM. He was engaged in research and international standardization of the ATM adaptation layer protocol for data and control plane applications over ATM networks. He joined the xbind program investigating end-to-end high-level network service virtualization technologies as a Visiting Scholar at Columbia University, NY, USA, from 1995 to 1996. After that, he became a technical manager responsible for an audio-visual conferencing network architecture and its trial and commercial service at an NTT division. After returning to NTT Network Service Systems Laboratories, he conducted many R&D projects for broadband IP core and edge routers and networking. In 2001, he began researching international standardization of IP and optical networking technologies such as a multilayer service network architecture and layer-1 VPN. He has been active in standardization bodies such as ISO/IEC (as a national committee member), ITU-T, and IETF. He is a member of IEEE and IEICE. Since 2007, he has been a Secretary of the IEICE Communication Society's Technical Committee on Network Systems.
Kohei Shiomoto
Senior Research Engineer, Supervisor, Group Leader, Backbone Networking Systems Group, Broadband Network Systems Project, NTT Network Service Systems Laboratories.
He received the B.E., M.E., and Ph.D. degrees in information and computer sciences from Osaka University, Osaka, in 1987, 1989, and 1998, respectively. He joined NTT in 1989 and engaged in R&D of ATM traffic control and ATM switching system architecture design. During 1996–1997, he was engaged in research on high-speed networking as a Visiting Scholar at Washington University in St. Louis, MO, USA. During 1997–2001, he was directing architecture design for the high-speed IP/MPLS label switching router research project at NTT Network Service Systems Laboratories. Since July 2001, he has been engaged in the research fields of photonic IP router design and routing algorithms and in GMPLS routing and signaling standardization, first at NTT Network Innovation Laboratories and then at NTT Network Service Systems Laboratories. He is active in GMPLS standardization in IETF. He is a member of IEEE, IEICE, and the Association for Computing Machinery. He was the Secretary for International Relations of the Communications Society of IEICE from June 2003 to May 2005. He was the Vice Chair of Information Services of the IEEE ComSoc Asia Pacific Board from January 2004 to December 2005. He has been involved in the organization of several international conferences including HPSR 2002, WTC 2002, HPSR 2004, WTC 2004, MPLS 2004, iPOP 2005, MPLS 2005, iPOP 2006, and MPLS 2006. He received the Young Engineer Award from IEICE in 1995 and the Switching System Research Award from IEICE in 1995 and 2000. He is one of the authors of “GMPLS Technologies: Broadband Backbone Networks and Systems”.
