Overview of Computer Networks
To place the Internet in context, consider a more general computer network with hosts, routers, links, and applications at hosts.
[PICTURE: general computer network
 - network cloud with routers/links
 - source/destination hosts/applications outside of network cloud
 - each router has a data part, a control part, and incoming
   and outgoing links (one pair for every adjacent router)
 - each host has a transport part between network and application]
Consider an application at a host (the source) sending a stream of messages to an application at another host (the destination). Refer to this stream of messages as a flow.
The messages reach the destination via a sequence of routers. Each router has cross-traffic, due to flows of other source-destination pairs.
A router distinguishes flows using identifying information in the packets. In the Internet, this is the destination host id. In general, it can include source/destination host/application ids, application type (eg, audio, video, data), customer id, arriving link, virtual connection id, etc. Some of the identifying information on a packet may be updated by a router, eg, virtual connection id.
A router has two parts, a data part (switch fabric) and a control part, which share the routing table.
The routing table has an entry for each active flow. The entry indicates the identifying attributes of the flow and the outgoing link and any changes to attributes.
The data part of the router forwards packets based on the routing table. If an incoming packet indicates a new flow (eg, a connection request packet, or a packet whose attributes are not in the routing table), the data part hands the packet over to the control part.
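The split between data part and control part can be sketched as follows; this is a minimal illustration with hypothetical names, not a prescribed implementation:

```python
# Sketch of a router's data/control split: the data part forwards packets
# whose flow attributes are in the routing table; packets of unknown flows
# are handed to the control part.

class Router:
    def __init__(self):
        # routing table, shared by data and control: flow id -> outgoing link
        self.routing_table = {}
        self.pending_for_control = []   # new-flow packets awaiting control

    def control_add_flow(self, flow_id, out_link):
        """Control part installs an entry for a newly identified flow."""
        self.routing_table[flow_id] = out_link

    def data_forward(self, packet):
        """Data part: one table lookup per packet."""
        flow_id = packet["flow"]
        if flow_id in self.routing_table:
            return self.routing_table[flow_id]   # outgoing link
        self.pending_for_control.append(packet)  # unknown flow -> control part
        return None

r = Router()
r.control_add_flow("A->B", out_link=2)
print(r.data_forward({"flow": "A->B", "data": "m1"}))  # -> 2
print(r.data_forward({"flow": "C->D", "data": "m1"}))  # -> None (new flow)
```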
The control part is responsible for
- identifying new flows and adding entries for them in the routing table and reserving any other resources (bandwidth, buffers)
- identifying the termination of flows and removing their entries in the routing table.
- maintaining current state of the network (failed/congested links, ...)
In the current Internet, the destination host id is the only flow attribute, and a router usually maintains a next-hop for every destination. But in general, routing information can be more complex, for example, class-based routes/next-hops.
[PICTURE: two-way commit exchange timeline: source, routers, destination]
In the case where routers reserve resources for flows, the steps in establishing the route for a new flow would be as follows.
Source issues connection request to first router.
Each router chooses a route (or next hop) to the destination, based on local
information (which may include info on remote resource availability), reserves
resources, and forwards the connection request along the route (admission control).
When the request reaches the destination, an accept travels back along the
reverse path. Upon failure at any router, a reject travels back along the
reverse path, releasing the resources reserved so far.
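The two-way commit can be sketched as a forward pass that reserves capacity hop by hop, with a reverse pass that releases the reservations if any router blocks (names and numbers hypothetical):

```python
# Sketch of two-way-commit setup: the connection request reserves bandwidth
# at each router on the path; if some router cannot admit the flow, the
# reverse pass releases every reservation made so far.

def setup_connection(path_capacities, demand):
    """path_capacities: available bw at each router on the path (mutated)."""
    reserved = []
    for i, cap in enumerate(path_capacities):
        if cap >= demand:                       # admission control at router i
            path_capacities[i] -= demand
            reserved.append(i)
        else:                                   # blocked: release on reverse path
            for j in reserved:
                path_capacities[j] += demand
            return False
    return True                                 # destination reached: accept

caps = [10, 10, 3]
print(setup_connection(caps, 5))   # False: third router blocks the request
print(caps)                        # [10, 10, 3]: earlier reservations released
```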
Ideal service:
Source can send an arbitrary stream of messages, ie, m1, m2, m3, ..., with
no constraint on the message sizes and (increasing) transmission times.
Destination receives the messages without loss, in order, with constant delay D.
That is, destination receives m1, m2, m3, ..., at times t1+D, t2+D, ...,
where ti is the tx time of mi.
This ideal service is very hard to achieve, and is also not needed.
Data:
- Send: arbitrary message sizes and tx times.
- Receive: with no loss, in order, but arbitrary delay.
Noninteractive (compressed) video
- Send: varying message size, periodic tx times
- Receive: some loss tolerable, in order, delay-jitter tradeoff
Noninteractive audio similar to video but bw is fixed and much less (64kbps).
Interactive video/audio similar except require bounds on delay-jitter
Performance Metrics to characterize the service:
- connection request blocking
- connection request set up time
- For an established connection:
  - delay jitter
Metrics can be statistical or deterministic.
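One of the per-connection metrics, delay jitter, can be computed from send/receive timestamps; this is a small illustration with made-up trace values:

```python
# Sketch: per-packet delay and a deterministic jitter metric for an
# established connection, from send and receive times (made-up traces).

send_times = [0.0, 20.0, 40.0, 60.0]   # ms, tx time of each message
recv_times = [5.0, 26.0, 44.0, 66.0]   # ms, arrival time at destination

delays = [r - s for s, r in zip(send_times, recv_times)]
jitter = max(delays) - min(delays)     # deterministic bound on delay variation

print(delays)   # [5.0, 6.0, 4.0, 6.0]
print(jitter)   # 2.0
```

A statistical version of the same metric would replace the max-min spread with, say, the standard deviation of the delays.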
Telephone network (circuit switching)
Provides interactive voice service:
Constant bit rate, low delay, very low jitter.
PICTURE: TDMA frames on a link
Data path uses TDM (or FDM in older systems)
- links are divided (statically) into circuits using TDM.
- links operate synchronously with same frame size (slots/frame).
- an established connection uses one slot per frame on each link in its path.
- [Actually, hierarchical TDM is used, with a frame of a high-speed link
  containing many frames of a lower-speed link.]
- Connection establishment: For each destination addr, control indicates
the outgoing link and slot to use for a conn request to that destination.
- Data forwarding: For each incoming link and slot, control indicates
the outgoing link and slot (if the incoming slot is being used).
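Circuit-switched data forwarding amounts to a slot-mapping table installed by control; a minimal sketch (hypothetical names):

```python
# Sketch of TDM data forwarding: control maps (incoming link, slot) to
# (outgoing link, slot) at connection establishment; the data path then
# switches each frame with one table lookup per slot.

switch_table = {}   # (in_link, in_slot) -> (out_link, out_slot)

def establish_circuit(in_link, in_slot, out_link, out_slot):
    """Control part: reserve a slot mapping for a new connection."""
    switch_table[(in_link, in_slot)] = (out_link, out_slot)

def forward_slot(in_link, in_slot):
    """Data part: returns None if the incoming slot is not in use."""
    return switch_table.get((in_link, in_slot))

establish_circuit(in_link=0, in_slot=3, out_link=1, out_slot=7)
print(forward_slot(0, 3))   # (1, 7)
print(forward_slot(0, 4))   # None: slot not in use
```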
Control parts interact via packet-switched network separate from the
network used by data path.
TDMA is not efficient for data applications.
It requires a data application to
- reserve enough slots to support the maximum burst size, or
- buffer data and send at single-slot rate (too slow)
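The inefficiency is easy to see with some illustrative arithmetic (all numbers made up):

```python
# Illustrative arithmetic: to carry a burst within one frame time without
# buffering, a TDMA flow must reserve enough slots for the maximum burst,
# even when its average rate needs far fewer slots.

import math

slot_bits = 1000          # bits carried by one slot per frame
frame_time = 0.125        # seconds per frame
avg_rate = 8_000          # bits/sec -> 1 slot/frame on average
max_burst = 40_000        # bits that may arrive within one frame time

slots_for_avg = math.ceil(avg_rate * frame_time / slot_bits)
slots_for_burst = math.ceil(max_burst / slot_bits)
print(slots_for_avg, slots_for_burst)   # 1 40: reserve 40x the average need
```

The reserved slots sit idle between bursts, which is exactly the waste that statistical multiplexing avoids.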
Packet switching
Link bw is not statically divided. Rather, packets are buffered.
Statistical multiplexing is much better than TDMA
- N bursty flows when statistically multiplexed yields a smoother flow.
- So have a buffer for each outgoing link, to temporarily store bursts.
- Buffer size depends on max burst size of the aggregate
Ideally: buffer overflows iff average inrate exceeds link bw.
This allows best utilization of link bw, so can support more users.
- Max burst size of aggregate flow depends on burstiness of individual flows
  and the value of N:
- If individual flows have low burstiness (e.g., exponential interarrival
times), the aggregate flow smoothens quickly (exponentially) in N.
- If individual flows are very bursty (e.g. self-similar, or heavy-tailed
interarrival times), the aggregate flow smoothens very slowly
(sub-exponentially) in N.
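The smoothing effect for low-burstiness flows can be checked with a small simulation (a sketch; distributions and sample sizes are illustrative):

```python
# Sketch: with i.i.d. low-burstiness flows, the aggregate's relative
# variability (std/mean) shrinks roughly like 1/sqrt(N), ie, the aggregate
# flow smoothens as more flows are multiplexed.

import random, statistics

def rel_burstiness(n_flows, n_slots=2000, seed=1):
    rng = random.Random(seed)
    # each flow contributes an exponentially distributed amount per slot
    agg = [sum(rng.expovariate(1.0) for _ in range(n_flows))
           for _ in range(n_slots)]
    return statistics.stdev(agg) / statistics.mean(agg)

for n in (1, 4, 16, 64):
    print(n, round(rel_burstiness(n), 3))   # decreases as n grows
```

A heavy-tailed per-flow distribution (eg, Pareto interarrivals) would show the much slower, sub-exponential smoothing the notes mention.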
- message header identifies flow.
- router has separate buffers for each outgoing link
- an incoming packet is dropped if there is no buffer space for it.
- end-to-end delay/jitter/loss can be highly variable, which is ok
for data applications but not for multimedia.
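The per-link buffering and drop behavior can be sketched as a simple drop-tail queue (hypothetical names):

```python
# Sketch of an outgoing-link buffer with drop-tail: an arriving packet is
# dropped when the buffer is full, otherwise queued for transmission in
# FIFO order.

from collections import deque

class OutputLink:
    def __init__(self, buffer_size):
        self.buffer = deque()
        self.buffer_size = buffer_size
        self.dropped = 0

    def enqueue(self, packet):
        if len(self.buffer) >= self.buffer_size:
            self.dropped += 1          # no buffer space: drop
            return False
        self.buffer.append(packet)
        return True

    def transmit(self):
        return self.buffer.popleft() if self.buffer else None

link = OutputLink(buffer_size=2)
for p in ("p1", "p2", "p3"):
    link.enqueue(p)
print(link.dropped)      # 1: p3 arrived to a full buffer
print(link.transmit())   # p1: FIFO order
```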
Packet-switched network usually uses same links for data and control
- in-band signalling
- cheaper, better utilization of link bandwidth
Connection-oriented (virtual circuit)
- For a connection request, control
  - chooses the outgoing link
  - allocates buffer/bw based on availability and the connection's needs.
resources allocated do not have to be worst-case. So there can still
be sharing, queuing, loss.
- can associate a VC id with the connection
- teardown at end of connection
Connectionless (datagram)
- no connection request or connection establishment
- no allocation
- no teardown
# Datagram rather than virtual circuit
- increased reliability/survivability (ARPANET)
- simpler: no resource reservation, no connection establishment, no teardown
- cheaper network routers, complexity at hosts
- easier internetworking: intermediate networks need only forward packets
based solely on the destination address
- harder to control, intrinsically more unstable,
- therefore require more conservative end-to-end congestion control
- therefore lower utilization than could be achieved with virtual circuit
# Adequate for non-real-time applications
- file transfer, remote login, web browsing, ...
# Not adequate for real-time applications, multimedia, interactive
# Currently the Internet network layer is not connection-oriented.
- router does not know which of its flows are active (and so should
be allocated resources)
- IP packet does not have this info and routers do not peek inside packet
# Not feasible to make the Internet network layer connection-oriented:
- legacy investment
- politically unacceptable
- survivability/flexibility features suffer
# Thus the approach is based on soft-states
- Router considers a flow to be active for up to some specified T seconds
  after the arrival of the last packet of that flow
- Soft-state approach allows routes to change midway through the connection.
  If the new path is not able to support the needs of a rerouted flow,
  the flow settles for less.
The hope is that the amount of real-time traffic would be small compared
to the overall traffic, and so this would rarely happen.
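The soft-state mechanism can be sketched as a table of last-packet times with a timeout (T and the timestamps below are illustrative):

```python
# Sketch of soft state: a router treats a flow as active only if a packet
# of that flow arrived within the last T seconds; stale entries expire on
# their own, with no explicit teardown.

T = 30.0         # soft-state timeout in seconds (illustrative value)
last_seen = {}   # flow id -> time of most recent packet

def on_packet(flow_id, now):
    last_seen[flow_id] = now   # each packet refreshes the flow's state

def expire(now):
    """Remove entries whose timer has run out (run periodically)."""
    for flow in [f for f, t in last_seen.items() if now - t > T]:
        del last_seen[flow]

on_packet("f1", now=0.0)
on_packet("f2", now=20.0)
expire(now=40.0)
print(sorted(last_seen))   # ['f2']: f1 idle for 40s > T, f2 idle only 20s
```

Note how this also handles rerouting: a new router on the path simply creates state when packets start arriving, while the old router's state times out.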
The number of active flows in a backbone router is enormous (eg, 10^6).
So implementing per-flow resource management is impractical.
Instead, routers distinguish applications into a small number of classes, eg
- high priority and low priority
- voice, video, high priority data, low priority data, ...
# Policing source traffic (when it enters the network) is essential
with class-based control, to prevent a mis-behaving flow from
overwhelming other flows in the same class.
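A common way to police a source at the network edge is a token bucket; the notes don't name a specific mechanism, so the following is one illustrative sketch:

```python
# Sketch of edge policing with a token bucket: a packet conforms only if a
# token is available, so a flow cannot exceed rate r on average, with bursts
# bounded by the bucket depth b. Non-conforming packets are dropped or marked.

class TokenBucket:
    def __init__(self, rate, depth):
        self.rate, self.depth = rate, depth   # tokens/sec, max tokens
        self.tokens, self.last = depth, 0.0   # start with a full bucket

    def conforms(self, now):
        # refill tokens for the elapsed time, capped at the bucket depth
        self.tokens = min(self.depth, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True        # packet conforms
        return False           # mis-behaving flow exceeds its profile

tb = TokenBucket(rate=1.0, depth=2)
print([tb.conforms(t) for t in (0.0, 0.0, 0.0, 0.0)])  # [True, True, False, False]
print(tb.conforms(1.0))                                 # True: one token refilled
```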
END NOTE 2