IEEE/ACM TRANSACTIONS ON NETWORKING, VOL. 10, NO. 6, DECEMBER 2002
Coordinated Multihop Scheduling: A Framework for End-to-End Services
Chengzhi Li, Member, IEEE, and Edward W. Knightly, Member, IEEE

Abstract—In multihop networks, packet schedulers at downstream nodes have an opportunity to make up for excessive latencies due to congestion at upstream nodes. Similarly, when packets incur low delays at upstream nodes, downstream nodes can reduce their priority and schedule other packets first. The goal of this paper is to define a framework for the design and analysis of coordinated multihop scheduling (CMS), which exploits such internode coordination. We first provide a general CMS definition which enables us to classify a number of schedulers from the literature, including G-EDF, FIFO+, CEDF, and work-conserving CJVC, as examples of CMS schedulers. We then develop a distributed theory of traffic envelopes which enables us to derive end-to-end statistical admission control conditions for CMS schedulers. We show that CMS schedulers are able to limit traffic distortion to within a narrow range, resulting in improved end-to-end performance and more efficient resource utilization. Consequently, our technique exploits statistical resource sharing among flows, classes, and nodes, and our results provide the first statistical multinode multiclass admission control algorithm for networks of work-conserving servers.
I. INTRODUCTION
During periods of congestion, a flow's or class's end-to-end performance properties are strongly influenced by the choice of the packet scheduling algorithm employed at the network's routers. Consequently, recent advances in scheduler design can ensure properties such as fairness, performance differentiation, and performance isolation [3], [13], [15], [26]. Moreover, such performance properties are now achievable in high-speed implementations [24], [30], [32] and scalable architectures in which core nodes do not maintain per-flow state [6], [25], [33].
Exploiting these scheduling mechanisms, admission control can limit congestion levels so that (for example) targeted latencies and throughputs are ensured, thereby providing services with predictable and controlled performance levels [22]. For example, statistical class-based admission control tests have been derived for earliest deadline first (EDF) [4], [27], [31], weighted fair queueing (WFQ) [12], [27], [38], strict priority [27], and virtual clock [20]. Moreover, techniques for providing
Manuscript received August 28, 2000; revised May 24, 2002; approved by IEEE/ACM TRANSACTIONS ON NETWORKING Editor P. Steenkiste. An earlier version of this paper was published in the Proceedings of ICNP 2000. This work was supported by the National Science Foundation under Grant ANI-0085842 and Grant ANI-9733610, by a gift from Texas Instruments, and by a Sloan Fellowship.
C. Li is with the Department of Computer Science, University of Virginia, Charlottesville, VA 22094 USA (e-mail: chengzhi@cs.virginia.edu).
E. W. Knightly is with Rice University, Houston, TX 77005 USA. Digital Object Identifier 10.1109/TNET.2002.805024

multinode or end-to-end statistical services have been developed for several classes of nonwork-conserving schedulers [5], [28], [31], [37] and for WFQ networks with isolation among flows [38].1
However, in both the data plane (scheduling) and control plane (admission control), none of the aforementioned techniques exploit a key property of multihop networks, namely, that a downstream node can compensate for excessive latency or unfairness incurred at an upstream node. Nor will downstream nodes reduce the priority of a packet which arrives ahead of schedule due to a lack of congestion upstream. In contrast, a number of service disciplines in the literature have been proposed which do exploit this property, which we refer to as coordination. Examples include the oldest customer-first service discipline [8], global earliest deadline first (G-EDF) [8], modified first-in-first-out (FIFO+) [10], and coordinated earliest-deadline-first (CEDF) [1], [2].
The contributions of this paper are twofold. First, we devise a general framework for design and specification of a class of service disciplines which we refer to as coordinated multihop schedulers (CMSs). The key CMS property is that a packet’s priority index at a downstream node is recursively expressed through the priority index of the same packet at the previous node and therefore is a function of the packet’s (perhaps virtual) entrance time into the network. We show that a broad class of schedulers from the literature, including CEDF, FIFO+, and others, can be characterized by this recursion and belong to the CMS class. We make several observations regarding coordinated multihop schedulers.
1) The well-known traffic distortion problem, in which provisioning of end-to-end services is hampered by complex traffic distortions due to multiplexing, e.g., [11], [23], can be addressed.
2) CMS inter-server cooperation can improve a flow’s end-to-end performance, and consequently, improve the efficiency and utilization of the network at large.
3) They can be core-stateless, in some cases quite trivially, and therefore can share the same scalability properties of architectures in which core nodes do not maintain per-flow state [33].
Our second contribution is to devise a general theory for statistical analysis and admission control of coordinated servers. Our key technique is to devise a framework for end-to-end service provisioning that exploits the structural properties of coordinated multihop schedulers, thereby overcoming the traffic distortion problem and realizing the efficiency gains
1 That is, statistical multiplexing among flows is not considered.

of coordination. To analyze CMS networks, we introduce the concept of essential traffic, which is the traffic that must be served before a time instant such that no local service violations will occur at that time. Using this concept and building on the interclass theory of [27], we derive expressions for the essential traffic and service envelopes, which provide a general statistical characterization of a CMS node’s workload and service capacity. Within this framework, we establish an important property of the CMS discipline, namely, that traffic distortion in CMS networks is limited to within a narrow range. Therefore, the essential traffic and service envelopes at a CMS node can be evaluated as simple and minimally distorted functions of the flows’ original (undistorted) traffic envelopes that characterized traffic before entrance into the network. We then derive CMS admission control conditions by transforming the problem of evaluating the service-violation probability into the problem of computing the essential traffic envelope and the essential service envelope.
Previous techniques for multinode admission control include studies of nonwork-conserving schedulers which shape and reshape traffic [5], [17], [18], [28], [31], [36], [37]. While such schemes can have good performance properties, they require per-flow traffic processing in core nodes and do not exploit the coordination property. For work-conserving service disciplines, a key issue is traffic distortion. Previous approaches include bounding this distortion [7], [11], [23], [35] and exploiting isolation properties of GPS servers [14], [19], [26], [38]. While such techniques are important for their generality, we will show that they can be conservative in practice. In contrast, our work develops a general framework for end-to-end services in CMS networks. Our solution applies to the broad class of (work-conserving) CMS servers, exploits the efficiency gains of coordination, and provides an end-to-end admission control algorithm that is quite general and achieves high utilization for multiclass multinode services.
The remainder of this paper is organized as follows. In Section II, we define the CMS discipline and show how scheduling algorithms from the literature can be classified within the CMS framework. Next, in Section III, we develop a general theory for analysis and admission control for statistical end-to-end services. Finally, in Section IV, we provide admission control results obtained by simulations and numerical analysis, and in Section V we conclude.
II. FRAMEWORK FOR COORDINATED SCHEDULING
In this section, we provide a formal definition of the CMS coordination property. We then use this definition to show how a number of schedulers from the literature possess this property so that our admission control tests derived in Section III apply to a broad class of schedulers.
A. CMS Definition
Denote $P_i^{j,k}$ as the priority index assigned to the $j$th packet of flow $i$ with size $l_i^j$ at its $k$th hop. Moreover, let $a_i^j$ denote the time when the $j$th packet of flow $i$ arrives at its first hop. Finally, let $\delta_i^{j,k}$ denote the increment of the priority index of the $j$th packet of flow $i$ at its $k$th hop.2

TABLE I

Definition 1 (Coordinated Multihop Scheduling): Consider a multiplexer which services packets in increasing order of their priority indexes. A scheduler possesses the CMS property if the priority index of packet $j$ of flow $i$ at its $k$th hop can be expressed as

$$P_i^{j,k} = \begin{cases} a_i^j + \delta_i^{j,1}, & k = 1 \\ P_i^{j,k-1} + \delta_i^{j,k}, & k > 1 \end{cases} \quad (1)$$

where $\delta_i^{j,k}$ is a nonnegative function of $a_i^j$, $l_i^j$, $P_i^{j-1,k}$, and constants associated with the flow and/or node, and for $k > 1$, the priority increments satisfy $\underline{\delta}_i^k \le \delta_i^{j,k} \le \overline{\delta}_i^k$ for some constants $\underline{\delta}_i^k$ and $\overline{\delta}_i^k$.

In other words, at the first node, the priority increment is added to the packet's arrival time, and this increment may be a constant, or a function of the packet's arrival time, the packet size, the priority index of the flow's previous packet, or constants associated with the flow and/or node. In contrast, at downstream nodes, the priority index is computed recursively as a function of the upstream index rather than by using the local arrival time. Moreover, while this downstream increment can also be dynamic, it must be bounded within the range $\underline{\delta}_i^k \le \delta_i^{j,k} \le \overline{\delta}_i^k$.

2 Notation is summarized in Table I.

Fig. 1. Illustration of coordination. (a) CMS and EDF at first hop. (b) EDF at second hop. (c) CMS at second hop.

Observe that the requirement that $\delta_i^{j,k}$ is a function of $a_i^j$, $l_i^j$, $P_i^{j-1,k}$, and flow or node constants can be interpreted as meaning that the priority index of flow $i$'s $j$th packet at its $k$th hop can be determined when the packet first enters the network. Consequently, FIFO, EDF, WFQ, and virtual clock are not coordinated schedulers. For example, the priority index assigned by FIFO can be written as $P_i^{j,k} = a_i^{j,k}$, where $a_i^{j,k}$ is the arrival time of the $j$th packet of flow $i$ at its $k$th hop. For $k > 1$, $a_i^{j,k}$ cannot be determined when the packet arrives at its first hop. Similarly, for virtual clock, the priority index can be written as $P_i^{j,k} = \max\{a_i^{j,k}, P_i^{j-1,k}\} + l_i^j / r_i$. However, for $k > 1$, $P_i^{j,k}$ depends on the arrival time $a_i^{j,k}$ of the $j$th packet of flow $i$ at its $k$th hop and cannot be determined when the packet arrives at its first hop.
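To make the recursion in (1) and this contrast concrete, the following minimal Python sketch computes a packet's CMS priority indexes at every hop from its network entrance time and a list of per-hop increments; the function and variable names are illustrative assumptions, and fixed numeric increments are used for simplicity.

```python
# Minimal sketch of the CMS priority-index recursion in (1).
# Assumption (ours): per-hop increments are given as plain numbers.

def cms_priority_indexes(entrance_time, increments):
    """Return the CMS priority index at each hop.

    Hop 1: index = entrance time + first increment.
    Hop k>1: index = index at hop k-1 + k-th increment,
    so every index is computable when the packet enters the network.
    """
    indexes = []
    current = entrance_time
    for delta in increments:
        current += delta
        indexes.append(current)
    return indexes

def fifo_priority_indexes(local_arrival_times):
    """FIFO-style indexes: each hop uses the *local* arrival time,
    which is unknown at network entrance -- hence FIFO is not CMS."""
    return list(local_arrival_times)

if __name__ == "__main__":
    # A packet entering at time 0 with a per-hop increment of 5 at each of 2 hops.
    print(cms_priority_indexes(0.0, [5.0, 5.0]))   # [5.0, 10.0]
    # FIFO needs the (a priori unknown) arrival times at hops 1 and 2.
    print(fifo_priority_indexes([0.0, 3.0]))       # [0.0, 3.0]
```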

Based on the selected method for assigning the increments of the priority index, we subclassify CMS service disciplines into delay-CMS and rate-CMS. A service discipline belongs to the delay-CMS class if $\delta_i^{j,k}$ represents a delay parameter of the $j$th packet of flow $i$ at its $k$th hop. For example, this delay parameter can be simply a local delay bound or, for other service disciplines (described below), can be a function of the packet's delay relative to the scheduler's mean delay.

In contrast, a service discipline belongs to the rate-CMS class if $\delta_i^{j,k}$ is a function of $l_i^j$ and $r_i^k$, where $l_i^j$ is the size of the $j$th packet of flow $i$ and $r_i^k$ is the reserved bandwidth for flow $i$ at its $k$th hop. The main characteristic of this class is that reserved bandwidths rather than delay bounds determine the service priority. Below we also describe examples of schedulers belonging to the rate-CMS class.
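As a rough illustration of the two subclasses, the sketch below shows one possible increment function of each kind: a delay-CMS increment given directly by a local delay budget, and a rate-CMS increment given by the packet's transmission time at the reserved rate. Both functions and their parameter names are illustrative assumptions, not the exact definitions of any particular discipline.

```python
# Sketch (ours) of the two CMS subclasses' increment functions.

def delay_cms_increment(local_delay_budget):
    # delay-CMS: the increment is a delay parameter, e.g. a local delay bound.
    return local_delay_budget

def rate_cms_increment(packet_size_bits, reserved_rate_bps):
    # rate-CMS: the increment depends on packet size and reserved bandwidth,
    # e.g. the packet's transmission time at the reserved rate.
    return packet_size_bits / reserved_rate_bps

if __name__ == "__main__":
    print(delay_cms_increment(0.010))          # 10 ms local delay budget
    print(rate_cms_increment(8 * 100, 64000))  # 100-byte packet at 64 kb/s
```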

B. Discussion

The key property of the CMS discipline is that the priority

index of each packet at a downstream server depends on its

priority index at upstream servers, so that all servers in the

network cooperate to provide the end-to-end service. For

example, if a packet violates a local deadline at an upstream

server, downstream nodes will increase the packet’s priority,

thereby increasing the likelihood that the packet will meet its

end-to-end delay bound. Similarly, if a packet arrives “early”

due to a lack of congestion upstream, downstream nodes will

reduce the priority of the packet.

To illustrate this property, consider the simple example of Fig. 1 in which three packets of flow $i$ arrive to the network at times 0, 1, and 2, respectively, and traverse two hops with a per-hop priority increment of 5. In the example, all packets have identical size, the link speed is one packet per time unit, and cross traffic exists at both hops. At the first hop, these three packets are assigned priority indexes (deadlines) of 5, 6, and 7, respectively, by both CMS and EDF. Suppose further that these three packets depart from the first hop at times 3, 4, and 10, respectively, so that the third packet misses its local deadline by three time units due to cross traffic with higher priority. According to the arrival times at the second hop, these three packets are assigned priority indexes of 8, 9, and 15 by EDF, whereas the indexes are 10, 11, and 12 for CMS. In the example, with further cross traffic at the second hop, the third packet has higher priority in the CMS network than in the EDF network and therefore is able to meet both its local delay bound and its global delay bound. In contrast, in the EDF network, the third packet meets its local delay bound at the second hop, but is not able to "catch up" and meet its end-to-end delay bound.
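The numbers in this example follow directly from the recursion; the short sketch below reproduces them, computing the second-hop indexes assigned by EDF from the local arrival (hop-1 departure) times and by CMS from the first-hop indexes.

```python
# Reproducing the Fig. 1 example: per-hop increment (local delay bound) of 5.
DELTA = 5

entrance_times = [0, 1, 2]       # arrivals of the three packets at hop 1
hop1_departures = [3, 4, 10]     # the third packet is delayed by cross traffic

hop1_indexes = [a + DELTA for a in entrance_times]   # [5, 6, 7] (both EDF and CMS)
edf_hop2 = [d + DELTA for d in hop1_departures]      # [8, 9, 15]: EDF uses local arrivals
cms_hop2 = [p + DELTA for p in hop1_indexes]         # [10, 11, 12]: CMS uses the recursion

print(hop1_indexes, edf_hop2, cms_hop2)
```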

C. Example CMS Disciplines

The above definition of CMS is quite general. Here, we show how several service disciplines from the literature, including G-EDF [8], FIFO+ [10], core-jitter virtual clock (CJVC) [34], and CEDF [1], [2], can be classified as instances of the CMS discipline.

1) Global EDF: The G-EDF service discipline was introduced in [8] to address the problem that reconstruction of continuous speech from voice packets is complicated by variable delays of packets due to multiplexing. In G-EDF, the priority index for a packet with age (time in network) arriving at a server at time is defined as , where is the maximum allowable entry-to-exit delay and is the estimated delay along the packet's remaining route in the network. If we rewrite the priority index assigned by G-EDF as

(2)

where is the path length of flow , and is the expected delay suffered by flow- packets at its th hop, then it is clear that G-EDF is a delay-CMS discipline.

2) FIFO+: The modified first-in-first-out (FIFO+) service discipline [10] assigns a packet's priority index according to the difference between the average queueing delay seen by a packet and the particular queueing delay suffered by the packet at upstream servers. From the definition in [10], we can rewrite the recursive FIFO+ priority index as

(3)

where is the average queueing delay of flow seen by the packet at the upstream server. Provided that , , is determined before the packet departs from its first hop and that the range of is bounded, comparing (3) with Definition 1 shows that FIFO+ is also a delay-CMS discipline.

3) Work-Conserving CJVC: CJVC was proposed in [34] as a mechanism for achieving guaranteed service without per-flow state in the network core. CJVC uses "dynamic packet state" to store information in each packet header containing the eligible time of the packet at the ingress router and a slack variable that allows core routers to determine the local priority index of the packet. For a work-conserving variant of CJVC, the priority index of packet of flow at node is given by

(4)

where flow 's th packet size and reserved bandwidth are given by and , respectively, and is the slack variable assigned to the th packet of flow before it enters the network. Furthermore, it can be verified that , for , where and . Thus, considering , it is clear that work-conserving CJVC is a rate-CMS service discipline.

4) Coordinated EDF: In [1] and [2], the CEDF service discipline is developed with the goal of minimizing end-to-end delays. The approach is to use EDF together with randomization of packet injection times and coordination of servers. There are two ways to assign local deadlines in the CEDF service discipline. In [2], the priority indexes are assigned as

(5)

where is the token arrival time chosen uniformly at random from the interval , , is the maximum size of flow- packets, is the rate of flow , is the utilization factor, and is a constant (expected local delay bound) determined for the th hop of flow .

In [1], the priority indexes are assigned as

(6)

where is the end-to-end delay bound for flow , , is the arrival time of the token for the th packet of flow (similar to above), is the capacity of the server at the th hop of flow , and is the path length of flow . Thus, both variants of CEDF can be classified as delay-CMS disciplines in which the first priority index is randomized.
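As a rough sketch of how a FIFO+-flavored delay-CMS increment could be written within Definition 1, the fragment below adjusts a local delay budget by the difference between a packet's upstream queueing delay and the average delay of its class; this follows only the prose description above and is not the exact formula of [10].

```python
# Sketch (ours) of a FIFO+-flavored delay-CMS increment: the downstream
# increment is a local delay budget adjusted by how much the packet's
# upstream queueing delay deviated from the average delay seen by its class.
# Definition 1 additionally requires the increment to stay within fixed
# bounds, which a real scheduler would enforce (e.g., by clipping).

def fifoplus_like_increment(local_budget, upstream_delay, avg_upstream_delay):
    # A packet delayed more than average gets a smaller increment
    # (higher downstream priority); an early packet gets a larger one.
    return local_budget - (upstream_delay - avg_upstream_delay)

if __name__ == "__main__":
    print(fifoplus_like_increment(5.0, upstream_delay=8.0, avg_upstream_delay=3.0))  # 0.0
    print(fifoplus_like_increment(5.0, upstream_delay=1.0, avg_upstream_delay=3.0))  # 7.0
```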

III. CMS ANALYSIS AND ADMISSION CONTROL

In this section, we develop a statistical multinode analysis and

admission control algorithm for CMS. We proceed in several

steps. First, we introduce two key concepts needed for analysis:

essential traffic and essential service. These concepts enable us

to statistically bound the traffic that must be serviced in order

to meet a flow’s local QoS constraints. We next show how the

essential traffic at a downstream node can be computed based on

a simple and minimally distorted transformation of the traffic at

the entrance of the network. This result (Theorem 1) is a key to

efficient end-to-end analysis. We then derive an expression for

the statistical service envelope (Theorem 2): with this statistical

description of service, we can characterize and control statistical

sharing across traffic classes. Finally, we derive an end-to-end

admission control test for coordinated schedulers (Theorem 3).

Throughout, we denote

as the total

traffic in arriving from traffic flow at its th hop, a node

which is indexed by

. Without loss of generality, we ig-

nore propagation delays so that the departure traffic of flow

from server

is the arrival traffic of flow to server


. Similar to [27], we call a sequence of nonnega-

tive random variables

a statistical traffic envelope

of flow if

3

(7)

and assume that

and

are independent and

and

are independent if

. Furthermore, we consider

a discrete time model with an infinite buffer in which traffic is

treated as fluid. We next review several facts about stochastic

ordering that is used later in this section.

Lemma 1: Let

be independent and

be independent. If

for

, then

1)

;

2)

for any real number ;

3) there exist independent random variables

such that has the same distribution as and

for

.

Proof: See [29].
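Returning to the statistical traffic envelope in (7), the following sketch gives one simple way such an envelope could be estimated empirically from a traffic trace, by examining the traffic sent in windows of each length; the quantile-based estimator and all names are illustrative assumptions, not the construction used in the paper or in [21].

```python
# Sketch (ours): empirically characterizing a flow's traffic over interval
# lengths t = 1, 2, ..., in the spirit of a statistical envelope B_i(t) that
# stochastically dominates the arrivals in any window of length t.
import random

def window_sums(arrivals_per_slot, t):
    """Traffic sent in every window of t consecutive slots."""
    return [sum(arrivals_per_slot[s:s + t])
            for s in range(len(arrivals_per_slot) - t + 1)]

def empirical_envelope(arrivals_per_slot, max_t, quantile=0.99):
    """For each window length t, report a high quantile of the window sums."""
    env = {}
    for t in range(1, max_t + 1):
        sums = sorted(window_sums(arrivals_per_slot, t))
        env[t] = sums[int(quantile * (len(sums) - 1))]
    return env

if __name__ == "__main__":
    random.seed(1)
    # Toy on-off source: 1 unit per slot while "on".
    trace = [1 if random.random() < 0.5 else 0 for _ in range(10000)]
    print(empirical_envelope(trace, max_t=5))
```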

A. Essential Traffic

Here, we define essential traffic as a building block for

analysis of coordinated schedulers that enables us to accurately

evaluate a flow’s delay-bound-violation probability. In partic-

ular, for a given local deadline , all arriving traffic of server

arriving in

can be virtually decomposed according to

whether or not its local deadline is later than . As only the

portion of traffic with local deadline no later than time affects

the probability of violating the local deadline , we refer to this

traffic as essential traffic, which we formally define as follows.

Definition 2 (Essential Traffic): The essential arrival traffic

of flow at server

is defined as the total flow-

traffic with local deadline no later than time arriving at server

no later than , i.e.,

(8)
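In discrete form, Definition 2 simply filters a packet trace by local deadline and arrival time; the sketch below makes this concrete, with hypothetical field names for the per-packet records.

```python
# Sketch (ours) of Definition 2: the essential traffic of a flow at a server,
# i.e., the total amount of flow traffic with local deadline no later than s
# that has arrived at the server no later than t.

def essential_traffic(packets, t, s):
    """packets: iterable of (arrival_time, local_deadline, size) at this server."""
    return sum(size for arrival, deadline, size in packets
               if arrival <= t and deadline <= s)

if __name__ == "__main__":
    pkts = [(0.0, 5.0, 100), (1.0, 6.0, 100), (3.0, 7.0, 100)]
    print(essential_traffic(pkts, t=2.0, s=6.0))  # 200
    print(essential_traffic(pkts, t=4.0, s=6.0))  # 200 (third packet's deadline is 7)
```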

Fig. 2 illustrates the relationship among arrival traffic, essen-

tial traffic, and departure traffic. If, for example, the priority

index increment for flow at each hop is , then flow ’s es-

sential traffic

is simply

at node 1. Sup-

pose further that some flow- packets miss their local deadline

at the first hop. Then, according to Definition 2,

will

cross

as depicted in the figure. Next, ignoring propa-

gation delay, the departure traffic of node 1 is the arrival traffic at

node 2. Since at node 2, flow ’s arrival traffic has not missed its

node-2 local deadline upon arriving,

.

The key point for Fig. 2(b) is that the relationship between the

arrival traffic

and the essential traffic

charac-

terizes the advantage of CMS’s coordination mechanism. In par-

ticular, the horizontal distance between

and

corresponds with the queueing delay incurred at node 1: the

larger the queueing delay, the shorter the horizontal distance,

and the higher the resulting priority index. Consequently, ex-

cessively delayed packets at upstream nodes can “catch up” at

downstream nodes and still satisfy their end-to-end deadline re-

quirement.

3 $X \le_{st} Y$ (stochastic inequality) denotes $P[X > z] \le P[Y > z]$ for all $z$.

Fig. 2. Example of arrival, essential, and departure traffic (priority index increment ). (a) First hop. (b) Second hop.

B. Essential Traffic Envelope

In each server of CMS networks, packets are served in in-

creasing order of their priority indexes (local deadlines). For a

given , flow ’s essential traffic

must be character-

ized to compute the local deadline violation probability. Fur-

thermore,

affects the probability of

violating the local deadline by no more than for a packet with

local deadline arriving at server

during

. Thus,

we define the essential traffic envelope as follows.

Definition 3 (Essential Traffic Envelope): A sequence of

nonnegative random variables

is an essential

traffic envelope of flow at its th hop if

and

such that

,

(9)

A key challenge for provisioning multinode services is char-

acterizing the traffic at downstream servers. The difficulty is

due to the fact that a flow’s traffic is unavoidably distorted after

multiplexing with other flows. Furthermore, the traffic distor-

tion may be accumulated along the path of a flow in networks

without coordinated scheduling disciplines. However, due to the

coordination property, the accumulated distortion phenomenon

of the essential traffic is mitigated. In networks with coordi-

nated scheduling, the distortion of a flow’s essential traffic after

passing through a server depends only on the local deadline vi-

olation. If the flow’s traffic does not violate its local deadlines,

the distortion of the essential traffic is eliminated, even if the

flow’s traffic incurs a queueing delay. The following theorem

precisely characterizes this advantage of the coordination prop-

erty.

Let

denote the maximum tolerable local deadline vi-

olation of flow- traffic at server

. That is, a flow- packet

will be discarded if it misses its local deadline at server

by more than

. Let

denote the amount of

flow- traffic with local deadline at server

no later than

that is discarded during before arriving at server

.


Theorem 1: An essential traffic envelope its th hop is given by

of flow at Since

, we have

according to (12) and (14), we have

(10)

where

.

Proof: To relate a flow’s downstream essential traffic to its

original arrival envelope, we statistically upper bound

and lower bound

.

Let

. For all

and such that

,

consider the interval

. Without loss of generality, as-

sume that there is at least one flow- packet with local deadline

no later than arriving at server during

. Other-

wise,

, a trivial case. Since for

a given flow , packets are always serviced in increasing order

of their local deadlines, all flow- packets arriving at server

before time have local deadlines no later than . According to

Definition 2 and the definition of

, we have

Since

and

, we have

(11) Notice that the interval has duration
,

. Thus,
,

According to (7),

Next, we lower bound

as follows:

(12)
Thus, we have (13)

since at time all flow- traffic with local deadlines at server

no later than

either has departed

server

and arrived at server

or has been dis-

carded due to the maximum tolerable local deadline violation

at server

for flow- traffic. Thus, similar

to (12), we have

(14)

That is, by Definition 3,

This theorem describes an important property of the CMS dis-

cipline, namely, that distortion of essential traffic downstream

is limited to within a narrow range. To illustrate, consider the

special case in which

for all so that

for

. In this case, for any flow , its essen-

tial traffic envelope at downstream servers, namely,

, is affected by the maximum toler-

able local deadline violation

. Furthermore, if is

chosen appropriately such that flow- packets do not miss their

local deadlines or flow- packets that miss local deadlines are


discarded, i.e.,

, flow- ’s essential traffic envelope

at downstream servers is identical to that at the ingress server,

namely,

. Finally, for two different flows

and , according to (10),

and

are indepen-

dent if

and

are independent. Thus, the property of

independence between essential traffic envelopes at the network

edge is preserved downstream.

, according to Theorem 1, the total flow- traffic with local deadlines no later than arriving at server during
is bound by

C. Essential Service Envelope

The above result enables us to derive end-to-end admission

control tests for CMS networks in the single class case. How-

ever, with multiple traffic classes with statistical sharing across

classes, classes affect each others’ performance. Consequently,

characterizing the extent to which resources are shared across

classes is the key to achieving high utilization in multiclass net-

works without worst-case allocation for each class [27]. Thus,

we use statistical service envelopes as a tool for characterizing

and controlling interclass resource sharing.

Definition 4 (Essential Service Envelope): A sequence of

nonnegative random variables

is called a (sta-

tistical) essential service envelope provided by server

to

the traffic of flow , if

and such that

,

the minimum available service, denoted as

,

from server

to flow- traffic with local deadline at server

no later than and arriving at server

during

is lower bound by

Since

,

,

, are independent and

,

, are independent, according to

Lemma 1, we have

(15)

provided that, during

, server

only services the

traffic arriving during

.

Roughly,

is a measure of the service provided in

an interval with length to flow- traffic with local deadline

seconds before the end of the interval. Notice that the minimum

available service for flow from server

during an interval

will depend on the other flows’ arriving traffic. Furthermore, the

essential service envelope provided by server

to flow

depends on the other flows’ essential traffic. Using this defini-

tion, we can now derive an expression for the essential service

envelope.

Theorem 2: For a given

, and

,

and so

where and are defined by

,

is the set of flows served by server , and is the capacity

of server .

Proof: Since the amount of total traffic (except flow )

with local deadlines no later than and arriving at server

during

is

and the total service capacity of server during the same time

(17)

interval is

, we have

That is, by Definition 4,

(16)
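The structure of this bound, in which the service available to a flow over an interval is the server capacity over that interval less the competing essential traffic, can be sketched as follows; this is an illustration of the idea only, and the names and the single-sample form are assumptions rather than the statement of Theorem 2.

```python
# Sketch (ours) of the idea behind the essential service envelope: over an
# interval of length t, the service available to flow i is at least the
# server capacity C*t minus the essential traffic of the competing flows,
# and never less than zero.

def essential_service_lower_bound(capacity, t, competing_essential_traffic):
    """competing_essential_traffic: amounts contributed by the other flows
    with local deadlines inside the interval (one sample, not an envelope)."""
    return max(0.0, capacity * t - sum(competing_essential_traffic))

if __name__ == "__main__":
    # 10 Mb/s server, 10 ms interval, two competing flows (amounts in bits).
    print(essential_service_lower_bound(10e6, 0.010, [40e3, 30e3]))  # 30000.0 bits
```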


According to this theorem and Lemma 1, for a given , and

, there exists a random variable

with the

same distribution as

such that

(18)

Furthermore, according to (17), (7), and (10), we have that

(19)

where the random variable

has the same distri-

bution as

and

. Finally, according to (10) and

,

,

are independent,

,

, are independent.

Furthermore,

,

, are independent, and from

Lemma 1 we have that

,

, are indepen-

dent. Since

has

the same distribution as

,

Observe that

is the amount of flow- traffic de-

parting from server

during

. If

, all flow- traffic with local deadline no later than at

server and arriving at server before time has departed

server before time

. For one flow- packet with local

deadline arriving at server

at time , the event of this

packet being served after time

is contained in the event

, and we henceforth consider this latter

event. The following theorem shows how to evaluate this delay

distribution.

Theorem 3: The virtual delay distribution of flow at its th

hop is bounded by

(23)

for

, where

and

are random variables with the same distribution as

and

, respectively.

Proof: From (22), we have

(20)

D. Admission Control

We now derive an end-to-end admission control condition for

CMS networks. The technique used to analyze the deadline-vio-

lation probability is to analyze per-server local delay-bound-vi-

olation probabilities and then compose them into end-to-end

ones. When computing the local-deadline-violation probability

at a given server, any arrival packet at the server is not consid-

ered as discarded even if it incurs a long queueing delay, i.e.,

packet discarding only occurs at the upstream servers. For a

multiplexer (server) in a CMS network and given time and

local deadline , an important instant previous to is the time

when the multiplexer does not service traffic with local dead-

lines later than . As we see below, this is important for analysis

because the traffic arriving at the multiplexer before this mo-

ment does not affect the probability of local deadline violation.

We refer to this instant as the void time denoted as

and

precisely define it as

and

(21)

Since

,

, so that5

. Thus,

If

, there always exist packets with deadlines

no later than at server during

. Since

at

there is no traffic with a deadline no later than at

server , at least

amount of flow- traffic has

been served. Furthermore, during

, server

only serves the traffic with local deadline no later than and

arriving at server after

. Similar to (16), we have

Thus,

where

is the total amount of traffic with local deadline

no later than backlogged at server at time .4 Notice that

server is not necessarily idle at time

as it may be

busy serving traffic with local deadlines later than . We use a

concept of (virtual) delay due to the essential traffic at a par-

ticular node to derive the local deadline-violation probability as

an intermediary step toward bounding the end-to-end deadline

violation probability. Thus, we define (virtual) essential delay

of flow- traffic with local deadline no later than time

at server

at time as

(22)

4Without loss of generality, we assume that network is idle at time 0.

and so

5When computing the probability of local deadline violation for a flow-i

packet with local deadline s arriving at time t, f (s + D ; s) = f (t; s),

because the local deadline of any flow-i packet arriving after time t is larger than

f 0 g  f 0 s, so that f (t; s) f

(s + D ) > 0

f (s + D ; s)

g f

(s + D ) > 0 .

Since the random variable

, we have

where

and and are defined by

.

Proof: From Theorem 3, we have

Notice that for a real number , from Theorem 1 and Theorem

2,

and

. According

to Lemma 1, we can find random variables

and and

with the same distribution as respectively such

that

and

. Thus, we have

According to Theorem 1 and according to (20)

(25)

Therefore, we have

That is
Thus, according to Theorem 3, the problem of computing the flow- delay distribution is transformed into the problem of finding flow ’s essential traffic envelope and essential service envelope. Based on Theorems 1 and 2, we have the following results.
Corollary 1: For
(24)


Fig. 3. Tandem network topology.

Thus, applying this result, each flow can be guaranteed an end-to-end delay bound along with its violation probability by using Corollary 1 to compose per-node QoS parameters into end-to-end ones.
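As a simple illustration of such a composition, the sketch below sums per-hop delay budgets into an end-to-end bound and applies a union bound over hops to the per-hop violation probabilities; this is one standard way to combine per-node parameters and is not claimed to be the exact expression of Corollary 1.

```python
# Sketch (ours): composing per-node QoS parameters into end-to-end ones via
# a union bound. If the total delay exceeds the sum of the budgets, at least
# one hop exceeded its own budget, so the probabilities add.

def end_to_end_budget(per_hop_delay_budgets):
    return sum(per_hop_delay_budgets)

def end_to_end_violation_bound(per_hop_violation_probs):
    # P(end-to-end delay > sum of budgets) <= sum of per-hop violation probs.
    return min(1.0, sum(per_hop_violation_probs))

if __name__ == "__main__":
    budgets = [0.001] * 6                     # 1 ms per hop over a 6-hop path
    probs = [2e-4] * 6                        # hypothetical per-hop violation bounds
    print(end_to_end_budget(budgets))         # 0.006 s
    print(end_to_end_violation_bound(probs))  # 0.0012
```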

IV. SIMULATION AND ADMISSION CONTROL EXPERIMENTS

In this section, we study the performance of the CMS discipline by performing a set of ns-2 simulation and admission control experiments. Our goal is twofold. First, we establish the effectiveness of our admission control algorithm in properly controlling the number of admitted flows in CMS networks. To achieve this, we first perform a large set of simulation experiments, in which a fixed number of flows are established over various network paths (as described below). For each scenario, we perform numerous simulations and record average performance measures such as the end-to-end delay-bound violation probability. We then run a set of admission control experiments in which we use an implementation of our admission control to determine the maximum number of admissible flows under a given performance criterion. The results of these experiments yield experimental and predicted admissible regions, i.e., the true admissible region obtained by simulations and the one obtained by our admission control algorithm.

Our second set of experiments explores the end-to-end performance of CMS networks as compared with non-CMS networks. In particular, we consider networks of FIFO, EDF, and WFQ schedulers and investigate the fraction of packets violating end-to-end delay targets under the different schedulers. These experiments illustrate the potential QoS improvements of coordinated network scheduling, that is, its ability to improve end-to-end performance under a particular network load, or conversely to improve the admissible region under a particular QoS requirement.

A. Scenario

We consider a simple tandem network topology as depicted in Fig. 3. All link rates are 10 Mb/s, packet lengths are 100 bytes, and propagation delays are 0 ms. There are flows entering the network from the first server and exiting from the last server. These flows have the longest path and are chosen to be the target class for analysis. In addition, each router also serves two classes of cross traffic consisting of flows which traverse a single router and then exit the network, and flows that traverse two routers and then exit the network. The cross traffic has the same characteristics as the target traffic (described below) and comprises approximately 80% of the total traffic.

We simulate exponential ON–OFF flows with ON-rate of 64 kb/s, mean ON time of 312 ms, and mean OFF time of 325 ms. For the CMS discipline, we choose the per-hop priority increment as

D for cross traffic flows with a 1-hop path;
D/2 for cross traffic flows with a 2-hop path;
D/6 for target traffic flows with a 6-hop path

where D is the expected end-to-end queueing delay bound. In this case, is equivalent to . Using [21], and the flows' mean rate, peak rate, and mean burst length given by the tuple , we approximate the statistical traffic envelope as and var , where . Furthermore, when predicting the admissible region for CMS networks, we assume that a packet will be discarded if it suffers a queueing delay at a server more than two times its queueing delay budget at that server, i.e., . Finally, for the EDF discipline, we choose the priority index for flow $i$'s $j$th packet at its $k$th hop as , and for the WFQ discipline we assign the same weight to each flow.
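The parameter choices above can be written out directly; the sketch below computes the per-hop increment as the end-to-end queueing delay budget divided by the path length and applies the two-times-budget discard rule, reproducing the 1-, 3-, and 6-ms increments used in Section IV-E. The function names are illustrative.

```python
# Sketch (ours) of the experiment's CMS parameter choices: the per-hop
# priority increment is the end-to-end queueing delay budget divided by the
# flow's path length, and a packet is discarded if its local queueing delay
# exceeds twice its per-hop budget.

def per_hop_increment(end_to_end_budget, path_length):
    return end_to_end_budget / path_length

def discard(local_queueing_delay, end_to_end_budget, path_length):
    budget = per_hop_increment(end_to_end_budget, path_length)
    return local_queueing_delay > 2 * budget

if __name__ == "__main__":
    D = 0.006  # 6 ms end-to-end queueing delay budget
    for hops in (6, 2, 1):  # target, 2-hop cross, and 1-hop cross traffic
        print(hops, per_hop_increment(D, hops))  # 1 ms, 3 ms, 6 ms
    print(discard(0.0025, D, 6))                 # True: 2.5 ms > 2 * 1 ms
```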

B. WFQ Admission Control
To compare CMS with WFQ, we also implement WFQ admission control as follows. Consider flow $i$ with a guaranteed bandwidth at router such that

where is the long-term average rate of flow and

is the

set of flows that are served by server . By simple extension of

the results in [38], the probability of the end-to-end deadline

violation of the traffic of flow can be bounded by

(26)


where

and is the set of flows with the same

source and destination as flow .

C. Computing Performance Bounds

To compute , we use the maximum variance approach developed in [9]. Let var and var . Approximating as Gaussian, the following upper bound can be obtained:

(27)

A proof of this bound can be found in [9] and a detailed comparative performance study in [22]. Roughly, the approach uses the dominant time scale, the value minimizing the expression above, to derive the exponential asymptotic upper bound of (27). Our goal here is to use the technique as an efficient and accurate means to evaluate the admission control conditions. Regardless, refinements based on large deviations theory can also be applied within the context of statistical envelopes [4].

Fig. 4. Measured and predicted admissible regions.
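The dominant-time-scale idea can be sketched as follows: approximate the relevant workload over an interval of length t as Gaussian with mean m(t) and variance v(t), and evaluate the Gaussian tail at the time scale where it is largest. The sketch below follows the spirit of [9]; the mean and variance functions and all numeric values are hypothetical, and the expression is an approximation rather than the exact bound (27).

```python
# Sketch (ours) of the dominant-time-scale approximation behind the maximum
# variance approach: Gaussian workload with mean m(t) and variance v(t).
import math

def gaussian_tail(x):
    # P(N(0,1) > x)
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def mva_violation_estimate(mean_fn, var_fn, capacity, backlog_bound, t_values):
    best = 0.0
    for t in t_values:
        slack = capacity * t + backlog_bound - mean_fn(t)
        prob = 1.0 if slack <= 0 else gaussian_tail(slack / math.sqrt(var_fn(t)))
        best = max(best, prob)  # the dominant time scale maximizes this term
    return best

if __name__ == "__main__":
    # Hypothetical aggregate: mean rate 9 Mb/s, variance growing linearly in t.
    mean_fn = lambda t: 9e6 * t
    var_fn = lambda t: 4e11 * t
    print(mva_violation_estimate(mean_fn, var_fn, capacity=10e6,
                                 backlog_bound=5e4,
                                 t_values=[i * 1e-3 for i in range(1, 200)]))
```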

D. Admissible Regions

Here, we compare measured and predicted admissible regions for CMS and WFQ networks for the scenario described above and present measured admissible regions for EDF and FIFO networks. The results of the experiments are depicted in Fig. 4. The figure shows network utilization versus the end-to-end queueing delay of the target traffic, i.e., aggregate average traffic rate divided by link capacity versus the delay target. The combinations of the target traffic and cross traffic and the corresponding expected end-to-end queueing delays are given in Table II.
The simulation curves depict the maximum number of flows (scaled to utilization) that can be multiplexed such that the delay target (depicted on the horizontal axis) is satisfied with a delay-bound-violation probability of no more than 0.001. For these curves, ten independent simulation runs of 400 simulated seconds are performed and mean results are reported. Similarly, the "predicted" curve depicts the maximum number of flows admitted by our admission control algorithm, under the targeted delay depicted on the horizontal axis, and a delay-bound-violation probability of 0.001.
We make the following observations regarding this figure. First, notice that the admissible region for the CMS network is larger than that of the WFQ, EDF, and FIFO networks. For example, for the same violation probability ( 0.001) and the same traffic load (60 target traffic flows and 240 cross traffic flows at each node, i.e., approximately 94.1% utilization), CMS can support an end-to-end delay of 48 ms whereas WFQ’s delay is 52 ms, EDF’s delay is 56 ms, and FIFO’s delay is 80 ms. Thus, while WFQ achieves local fairness of bandwidth sharing at each node [26] and EDF minimizes the queueing delay at a single server system [16], CMS uses the coordination property


TABLE II TRAFFIC COMBINATION AND EXPECTED END-TO-END QUEUEING DELAY

to minimize end-to-end delay and achieve global performance

properties.

Second, we observe that our CMS admission control algo-

rithm is able to exploit a large fraction of the available statis-

tical multiplexing gain. For example, by an end-to-end delay

bound of 60 ms, the CMS admission control algorithm admits a

set of flows which contains 56 target traffic flows and 224 cross

traffic flows at every server, within 6.1% of the actual utilization

achievable in simulations (60 target traffic flows and 240 cross

traffic flows). In contrast, for the WFQ network, 54 target traffic

flows and 208 cross traffic flows are admitted at each router,

which is more than 10% less than the utilization achievable in

simulations (58 target traffic flows and 234 cross traffic flows).

The reason for the more conservative nature of WFQ admis-

sion control is that traffic is treated as traversing the network on


a guaranteed-rate “pipe” between an ingress and egress router,

without taking into account inter-class resource sharing among

pipes. Thus, with more traffic classes and more complex topolo-

gies, such an approach will suffer further utilization penalties.

E. End-to-End Delay Performance
Here, we compare the end-to-end queueing delays incurred by the target traffic for networks with CMS, WFQ, FIFO, and EDF schedulers. FIFO provides baseline results for a scheduler with neither coordination nor QoS differentiation. In contrast, EDF provides differentiation and optimality at a single node, but does not employ coordination, so that the gains of CMS versus EDF are strictly due to coordination.
We measure end-to-end delay distributions for two scenarios:
1) Fifty-six target traffic flows and 224 cross traffic flows at each server (the expected end-to-end queueing delay bound for each flow is 6 ms, and so the increments of the priority index at each server are 1 ms for target traffic, 3 ms for cross traffic with a two-hop path, and 6 ms for cross traffic with a one-hop path);
2) Sixty target traffic flows and 240 cross traffic flows at each server (the expected end-to-end queueing delay bound for each flow is 45 ms, and so the increments of the priority index at each server are 7.5 ms for target traffic, 22.5 ms for cross traffic with a two-hop path, and 45 ms for cross traffic with a one-hop path).
For this fixed number of flows and utilization, Fig. 5 depicts the end-to-end queueing delay and its corresponding violation probability.
Fig. 5. Comparison of FIFO, EDF, WFQ, and CMS disciplines (exponential ON–OFF traffic). (a) Utilization 87.7%, D = 6 ms. (b) Utilization 94.1%, D = 45 ms.

We make two observations regarding the figure. First, the QoS-violation probabilities for CMS, WFQ, and EDF are always smaller than those for FIFO. For example, in Fig. 5(b), the violation probability for a 50-ms end-to-end queueing delay bound is 0.0007 for CMS, 0.0016 for EDF, 0.001 for WFQ, and 0.0094 for FIFO. Second, notice that for smaller delays (less than 6 ms with 87.7% utilization or less than 40 ms with 94.1% utilization), the CMS violation probability is larger than EDF's, whereas for larger end-to-end queueing delays (greater than 6 ms with 87.7% utilization or greater than 40 ms in the 94.1% utilization scenario) it is smaller than EDF's. The reason for this is that, in a CMS network, packets which suffer excessive queueing delays at upstream nodes have an opportunity to "catch up" at a downstream node, by having a higher (relative)


priority index. In contrast, in EDF networks, each router treats packets locally according to their arrival time and local deadline, without regard to whether this arrival time is late or early. Thus, the experiments indicate that coordinated scheduling also has performance advantages, in addition to its other properties (e.g., tractability) established above.
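The violation probabilities reported here are empirical tail probabilities of the measured per-packet end-to-end delays; a minimal sketch of that measurement is given below, with synthetic delay samples standing in for simulation output.

```python
# Sketch (ours): computing an empirical end-to-end delay-bound violation
# probability from per-packet delay samples, as plotted in Fig. 5.

def violation_probability(delays, bound):
    return sum(1 for d in delays if d > bound) / float(len(delays))

if __name__ == "__main__":
    import random
    random.seed(0)
    delays = [random.expovariate(1 / 0.015) for _ in range(100000)]  # toy samples
    print(violation_probability(delays, bound=0.050))
```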

V. CONCLUSION
In this paper, we developed a framework for CMS. With a definition of the fundamental coordination property, we showed how a number of schedulers from the literature can be characterized as CMS disciplines. We then developed a general theory based on traffic and service envelopes to analyze CMS networks and devised admission control tests for statistical end-to-end services. We showed that CMS disciplines limit traffic distortion to within a narrow range, thereby providing a foundation for efficient and scalable multinode services. We performed a set of simulation and admission control experiments to illustrate the accuracy of the approach in controlling multinode admissions and demonstrated the potential performance advantages of CMS as compared with WFQ, EDF, and FIFO. Our results present a new framework for understanding end-to-end statistical services in differentiating and work-conserving schedulers.

ACKNOWLEDGMENT
The authors would like to thank the anonymous referees and the Editor, Prof. P. Steenkiste, for their constructive comments.

REFERENCES

[1] M. Andrews, “Probabilistic end-to-end delay bounds for earliest dead-

line first scheduling,” in Proc. IEEE INFOCOM 2000, Tel Aviv, Israel,

Mar. 2000, pp. 603–612.

[2] M. Andrews and L. Zhang, “Minimizing end-to-end delay in

high-speed networks with a simple coordinated schedule,” in Proc.

IEEE INFOCOM ’99, New York, Mar. 1999, pp. 380–388.

[3] J. Bennett and H. Zhang, “WF Q: Worst-case fair weighted fair

queueing,” in Proc. IEEE INFOCOM ’96, San Francisco, CA, Mar.

1996, pp. 120–128.

[4] R. Boorstyn, A. Burchard, J. Liebeherr, and C. Oottamakorn, “Effec-

tive envelopes: Statistical bounds on multiplexed traffic in packet net-

works,” in Proc. IEEE INFOCOM 2000, Tel Aviv, Israel, Mar. 2000, pp.

1223–1232.

[5] R. Boorstyn, A. Burchard, J. Liebeherr, and C. Oottamakorn, “Tradeoffs in networks with end-to-end statistical QoS guarantees,” in Proc. IWQoS 2000, June 2000, pp. 221–230.

[6] Z. Cao, Z. Wang, and E. Zegura, “Rainbow fair queueing: Fair band-

width sharing without per-flow state,” in Proc. IEEE INFOCOM 2000,

Tel Aviv, Israel, Mar. 2000, pp. 922–931.

[7] C. Chang, “Stability, queue length, and delay of deterministic and sto-

chastic queueing networks,” IEEE Trans Automat. Contr., vol. 39, pp.

913–931, May 1994.

[8] T. Chen, J. Walrand, and D. Messerschmitt, “Dynamic priority protocols

for packet voice,” IEEE J. Select. Areas Commun., vol. 7, pp. 632–643,

June 1989.

[9] J. Choe and N. Shroff, “A central limit theorem based approach to ana-

lyze queue behavior in ATM networks,” IEEE/ACM Trans. Networking,

vol. 6, pp. 659–671, Oct. 1998.

[10] D. Clark, S. Shenker, and L. Zhang, “Supporting real-time applications

in an integrated services packet network: Architecture and mechanism,”

in Proc. ACM SIGCOMM ’92, Baltimore, MD, Aug. 1992, pp. 14–26.

[11] R. Cruz, “A calculus for network delay, Parts I and II,” IEEE Trans.

Inform. Theory, vol. 37, pp. 114–141, Jan. 1991.

[12] G. de Veciana and G. Kesidis, “Bandwidth allocation for multiple quali-

ties of service using generalized processor sharing,” IEEE Trans. Inform.

Theory, vol. 42, pp. 268–272, Jan. 1995.

[13] C. Dovrolis and P. Ramanathan, “A case for relative differentiated ser-

vices and the proportional differentiation model,” IEEE Network, vol.

13, pp. 26–35, Sept. 1999.

[14] A. Elwalid, D. Mitra, and R. Wentworth, “Design of generalized processor sharing schedulers which statistically multiplex heterogeneous QoS classes,” in Proc. IEEE INFOCOM ’99, Mar. 1999, pp. 1220–1230.
[15] S. Floyd and V. Jacobson, “Link-sharing and resource management models for packet network,” IEEE/ACM Trans. Networking, vol. 3, pp. 365–386, Aug. 1995.
[16] L. Georgiadis, R. Guérin, and A. Parekh, “Optimal multiplexing on a single link: Delay and buffer requirements,” IEEE Trans. Inform. Theory, vol. 43, pp. 1518–1535, 1997.
[17] L. Georgiadis, R. Guérin, and V. Peris, “Effect of network QoS provisioning based on per node traffic shaping,” IEEE/ACM Trans. Networking, vol. 4, pp. 482–501, Aug. 1996.
[18] S. Golestani, “A stop-and-go queueing framework for congestion management,” in Proc. ACM SIGCOMM ’90, Philadelphia, PA, Sept. 1990, pp. 8–18.
[19] P. Goyal, S. Lam, and H. Vin, “Determining end-to-end delay bounds for heterogeneous networks,” in Proc. IEEE Workshop on Network and Operating System Support for Digital Audio and Video (NOSSDAV’95), Durham, NH, Apr. 1995, pp. 287–298.
[20] P. Goyal and H. Vin, “Statistical delay guarantee of virtual clock,” in Proc. IEEE Real-Time Systems Symp., Dec. 1998, pp. 450–459.
[21] E. Knightly, “Enforceable quality of service guarantees for bursty traffic streams,” in Proc. IEEE INFOCOM ’98, San Francisco, CA, Mar. 1998, pp. 635–642.
[22] E. Knightly and N. Shroff, “Admission control for statistical QoS: Theory and practice,” IEEE Network, vol. 13, pp. 20–29, Mar. 1999.
[23] J. Kurose, “On computing per-session performance bounds in high-speed multi-hop computer networks,” in Proc. ACM SIGMETRICS ’92, Newport, RI, June 1992, pp. 128–139.
[24] A. Mekkittikul and N. McKeown, “A practical scheduling algorithm to achieve 100% throughput in input-queued switches,” in Proc. IEEE INFOCOM ’98, San Francisco, CA, Mar. 1998, pp. 792–799.
[25] T. Nandagopal, N. Venkitaraman, R. Sivakumar, and V. Bharghavan, “Delay differentation and adaptation in core-stateless networks,” in Proc. IEEE INFOCOM 2000, Tel Aviv, Israel, Mar. 2000, pp. 421–430.
[26] A. Parekh and R. Gallager, “A generalized processor sharing approach to flow control in integrated services networks: The multiple node case,” IEEE/ACM Trans. Networking, vol. 2, pp. 137–150, Apr. 1994.
[27] J. Qiu and E. Knightly, “Inter-class resource sharing using statistical service envelopes,” in Proc. IEEE INFOCOM ’99, New York, Mar. 1999, pp. 1404–1411.
[28] M. Reisslein, K. Ross, and S. Rajagopal, “A framework for guaranteeing statistical QoS,” IEEE/ACM Trans. Networking, vol. 10, pp. 27–42, Feb. 2002.
[29] S. M. Ross, Stochastic Processes. Wiley, 1983.
[30] M. Shreedhar and G. Varghese, “Efficient fair queueing using deficit round-robin,” IEEE/ACM Trans. Networking, vol. 4, pp. 375–385, June 1996.
[31] V. Sivaraman and F. Chiussi, “Providing end-to-end statistical delay guarantees with earliest deadline first scheduling and per-hop traffic shaping,” in Proc. IEEE INFOCOM 2000, Tel Aviv, Israel, Mar. 2000, pp. 631–640.
[32] D. Stephens and H. Zhang, “Implementing distributed packet fair queueing in a scalable switch architecture,” in Proc. IEEE INFOCOM ’98, San Francisco, CA, Mar. 1998, pp. 282–290.
[33] I. Stoica, S. Shenker, and H. Zhang, “Core-stateless fair queueing: A scalable architecture to approximate fair bandwidth allocations in high speed networks,” in Proc. ACM SIGCOMM ’98, Vancouver, BC, Canada, Sept. 1998, pp. 118–130.
[34] I. Stoica and H. Zhang, “Providing guaranteed services without per flow management,” in Proc. ACM SIGCOMM ’99, Cambridge, MA, Aug. 1999, pp. 81–94.
[35] O. Yaron and M. Sidi, “Performance and stability of communication networks via robust exponential bounds,” IEEE/ACM Trans. Networking, vol. 1, pp. 372–385, June 1993.
[36] H. Zhang, “Providing end-to-end performance guarantees using non-work-conserving disciplines,” Comput. Commun., vol. 18, no. 10, pp. 769–781, Oct. 1995.
[37] H. Zhang and E. Knightly, “Providing end-to-end statistical performance guarantees with bounding interval dependent stochastic models,” in Proc. ACM SIGMETRICS ’94, Nashville, TN, May 1994, pp. 211–220.
[38] Z. Zhang, D. Towsley, and J. Kurose, “Statistical analysis of generalized processor sharing scheduling discipline,” IEEE J. Select. Areas Commun., vol. 13, pp. 368–379, Aug. 1995.


Chengzhi Li (S’97–M’99) received the B.S. degree in applied mathematics from Fuzhou University, China, the M.S. degree in operations research from Xiamen University, China, and the Ph.D. degree in computer engineering from Texas A&M University, College Station, in 1999.
He is currently a Research Scientist with the University of Virginia, Charlottesville. From 1999 to 2001, he was a Post-Doctoral Fellow with Rice University, Houston, TX. His research areas encompass wired and wireless networking.

Edward W. Knightly (S’92–M’96) received the B.S. degree from Auburn University in 1991, the M.S. degree from the University of California at Berkeley in 1992, and the Ph.D. degree from the University of California at Berkeley in 1996.
He is an Associate Professor in the ECE/CS Departments at Rice University, Houston, TX. His research interests are in the areas of quality-of-service, scheduling, admission control, and media access protocols in wireless and wireline networks. He is an Editor of the Computer Networks Journal. Prof. Knightly is an Editor for IEEE/ACM TRANSACTIONS ON NETWORKING, IEEE TRANSACTIONS ON MULTIMEDIA, and previously, IEEE Network Magazine. He served as Co-Guest Editor of IEEE Network Magazine’s Special Issue on integrated and differentiated services for the Internet in 1999. He served as co-chair of IWQoS 1998 and on the steering committee for IWQoS from 1999–2001. He is the finance chair for ACM MOBICOM 2002, was the tutorial co-chair for IEEE ICNP 2001, and served on the program committee for numerous networking conferences including ICNP, INFOCOM, IWQoS, MOBICOM, and SIGMETRICS. He received the National Science Foundation CAREER Award in 1997 and the Sloan Fellowship in 2001.

