Internet-Technology

Saturday, April 17, 2010

hxhx2hx

hxhx2hx is the backlink anchor for my Blogspot.
hxhx2hx is a made-up term, which I think is good for web SEO.
hxhx2hx is the pipe for the link structure.
hxhx2hx is a common line for people to use, and it is competitive.

So I always use hxhx2hx as the keyword I recommend people use; this kind of keyword listing is common. Nowadays you cannot know the exact number of backlinks,
so I will use this keyword as a measurement of backlinks.
Tag : hxhx2hx
seo : hxhx2hx

Saturday, February 21, 2009

Homework 5 Part 3

Problem 3 (200 pts)

Assume there are three service classes (A, B, and C), and a backbone link's bandwidth is allocated to traffic within these classes in the proportions 10%, 30%, and 60%, respectively. At time 0 there are 3 packets of sizes 100, 400, and 200 bytes in queue A; 4 packets of sizes 500, 1000, 100, and 50 bytes in queue B; and 3 packets of sizes 1000, 100, and 200 bytes in queue C. If the router implements Deficit Weighted Round Robin (DWRR) scheduling with a basic quantum unit of 100 bytes, what is the order in which these packets from A, B, and C will be transmitted?
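A minimal sketch of the required simulation (my own code, assuming quanta of 100, 300, and 600 bytes for A, B, and C, i.e. the 100-byte basic quantum scaled by the 10/30/60 weights, and that a queue that empties forfeits its remaining deficit, as in standard DWRR):

```python
from collections import deque

def dwrr_order(queues, quanta):
    """Return the (class, size) transmission order under DWRR."""
    qs = {name: deque(sizes) for name, sizes in queues.items()}
    deficit = {name: 0 for name in queues}
    order = []
    while any(qs.values()):
        for name in qs:
            if not qs[name]:
                continue                      # skip empty queues
            deficit[name] += quanta[name]     # add this class's quantum
            while qs[name] and qs[name][0] <= deficit[name]:
                pkt = qs[name].popleft()      # head packet fits: send it
                deficit[name] -= pkt
                order.append((name, pkt))
            if not qs[name]:
                deficit[name] = 0             # emptied queue loses its deficit
    return order

order = dwrr_order(
    {"A": [100, 400, 200], "B": [500, 1000, 100, 50], "C": [1000, 100, 200]},
    {"A": 100, "B": 300, "C": 600},
)
print(order)
```

Under these assumptions the transmission order works out to A:100, B:500, C:1000, C:100, C:200, A:400, B:1000, B:100, B:50, A:200.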

Friday, February 20, 2009

Homework 5 Part 2

Problem 2 (200 pts)

Assuming that the resource capacity is 20, a set of four sources with demands 12, 4, 5, and 9 shares the same resource during a given time.

2.1 Compute the max-min fair allocation for all four sources when all four sources have the same weight.

2.2 Compute the max-min fair allocation for all four sources when the four sources have weights 0.5, 3.5, 4, and 2, respectively.

2.3 Assume that all traffic is active all the time and the weights are 0.5, 3.5, 4, and 2, respectively. If a WRR scheduler is used on the four sources, each with its own queue, determine how many packets the scheduler should pick from each queue on each visit, given that the packet size is the same for all four sources.

2.4 Repeat question (2.3) when the packet sizes of the four sources are 100, 200, 100, and 200 bytes, respectively.
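A sketch of these computations (my own code): progressive filling for (weighted) max-min fairness, and weight/packet-size normalization for the WRR picks.

```python
from fractions import Fraction
from math import gcd
from functools import reduce

def maxmin_fair(capacity, demands, weights=None):
    """Weighted max-min fair allocation by progressive filling."""
    n = len(demands)
    weights = weights or [1.0] * n
    alloc = [0.0] * n
    active = set(range(n))
    remaining = capacity
    while active and remaining > 1e-9:
        wsum = sum(weights[i] for i in active)
        share = {i: remaining * weights[i] / wsum for i in active}
        done = {i for i in active if alloc[i] + share[i] >= demands[i]}
        if done:                      # cap satisfied sources, recycle the excess
            for i in done:
                remaining -= demands[i] - alloc[i]
                alloc[i] = demands[i]
            active -= done
        else:                         # nobody capped: hand out the weighted shares
            for i in active:
                alloc[i] += share[i]
            remaining = 0
    return alloc

def wrr_picks(weights, sizes):
    """Packets per WRR round: smallest integers proportional to weight/size."""
    ratios = [Fraction(w).limit_denominator(1000) / s for w, s in zip(weights, sizes)]
    lcm = reduce(lambda a, b: a * b // gcd(a, b), (r.denominator for r in ratios))
    counts = [int(r * lcm) for r in ratios]
    g = reduce(gcd, counts)
    return [c // g for c in counts]

print(maxmin_fair(20, [12, 4, 5, 9]))                      # 2.1
print(maxmin_fair(20, [12, 4, 5, 9], [0.5, 3.5, 4, 2]))    # 2.2
print(wrr_picks([0.5, 3.5, 4, 2], [1, 1, 1, 1]))           # 2.3
print(wrr_picks([0.5, 3.5, 4, 2], [100, 200, 100, 200]))   # 2.4
```

This gives 5.5, 4, 5, 5.5 for 2.1; 2.2, 4, 5, 8.8 for 2.2; 1:7:8:4 packets per round for 2.3; and 2:7:16:4 for 2.4.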

Homework 5

1. What effect does (i) leaky bucket and (ii) token bucket traffic shaping have on bursty traffic?

(i) the leaky bucket : Although the leaky bucket algorithm has several uses, it is best understood in the context of network traffic shaping or rate limiting. Typically, the algorithm is used to control the rate at which data is injected into a network, smoothing out "burstiness" in the data rate.
Two predominant methods for shaping traffic exist: a leaky bucket implementation and a token bucket implementation. Sometimes the leaky bucket and token bucket algorithms are mistakenly lumped together under the same name. Both these schemes have distinct properties and are used for distinct purposes. They differ principally in that the leaky bucket imposes a hard limit on the data transmission rate, whereas the token bucket allows a certain amount of burstiness while imposing a limit on the average data transmission rate.

The leaky-bucket implementation is used to control the rate at which traffic is sent to the network. A leaky bucket provides a mechanism by which bursty traffic can be shaped to present a steady stream of traffic to the network, as opposed to traffic with erratic bursts of low-volume and high-volume flows.

As an example application, data traffic can be shaped via a leaky bucket in order to reduce the outage probability and enhance the capacity of voice users in uplink variable spreading gain code-division multiple-access (VSG-CDMA) cellular networks. To analyze the effect of the leaky bucket mechanism, one can employ a closed Jackson queueing network with arbitrary service time distributions to model the different states of a typical traffic-shaped data user. By solving this queueing network, one obtains the steady-state probability with which a typical data user exists in each state. By aggregating the effects of all data and voice users, one obtains the mean and variance of the interference and can compute the outage probability for different numbers of voice users. This kind of traffic shaping enhances capacity at the cost of delay degradation for data users.


(ii) token bucket traffic : A token bucket is a common algorithm used to control the amount of data that is injected into a network, allowing for bursts of data to be sent. Although it has several uses, it is best understood in the context of network traffic shaping or rate limiting.



2. Describe clearly how the token bucket filter can be used to enforce the peak rate of a connection.

Token Bucket Filter : The Token Bucket Filter (TBF) is a simple queue that only passes packets arriving at a rate within some administratively set limit, with the possibility of buffering short bursts.
The TBF implementation consists of a buffer (bucket), constantly filled with virtual pieces of information called tokens at a specific rate (the token rate). The most important parameter of the bucket is its size, that is, the number of tokens it can store.
Each arriving token lets one incoming data packet out of the queue and is then deleted from the bucket. Associating this algorithm with the two flows -- tokens and data -- gives us three possible scenarios:
- The data arrives at the TBF at a rate equal to the token rate. In this case each incoming packet has its matching token and passes the queue without delay.
- The data arrives at the TBF at a rate smaller than the token rate. Only some of the tokens are deleted as data packets are sent out of the queue, so tokens accumulate, up to the bucket size. The saved tokens can then be used to send data above the token rate if a short data burst occurs.
- The data arrives at the TBF at a rate bigger than the token rate. In this case a filter overrun occurs: incoming data can only be sent out without loss until all accumulated tokens are used, after which overlimit packets are dropped.
The last scenario is very important, because it allows the administrator to shape the bandwidth available to data passing the filter. The accumulation of tokens allows a short burst of overlimit data to still be passed without loss, but any lasting overload will cause packets to be constantly dropped.
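The mechanics above can be sketched as a small token bucket (my own illustrative code; time is passed in explicitly so behavior is deterministic). The fill rate bounds the long-term average rate of the connection, while the capacity bounds the burst; to enforce a peak rate as well, a second bucket with rate equal to the peak rate and a capacity of about one maximum-size packet can be applied in series with this one.

```python
class TokenBucket:
    """Token bucket: tokens accrue at `rate` bytes/sec up to `capacity` bytes."""
    def __init__(self, rate, capacity):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity   # start full: an initial burst is allowed
        self.last = 0.0

    def allow(self, size, now):
        """True if a `size`-byte packet may pass at time `now` (seconds)."""
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if size <= self.tokens:
            self.tokens -= size
            return True
        return False             # overlimit: drop (policer) or hold (shaper)

avg = TokenBucket(rate=100, capacity=300)   # average-rate bucket
assert avg.allow(300, now=0.0)              # full bucket: 300-byte burst passes
assert not avg.allow(100, now=0.0)          # bucket drained: overlimit
assert avg.allow(100, now=1.0)              # 100 tokens accrued after 1 s
```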

3. Describe clearly how the token bucket filter can be used to enforce the average rate of a connection.

4. Describe why a traffic shaper is needed (i) at the ingress router before entering a network, (ii) within the network, and (iii) at the egress router after exiting a network.



An ingress router is a Label Switch Router (LSR) that is the starting point (source) for a given Label Switched Path (LSP). An ingress router may be an egress router or an intermediate router for other LSPs; hence the roles of ingress and egress routers are LSP-specific. Usually, the MPLS label is attached to an IP packet at the ingress router and removed at the egress router, whereas label swapping is performed at the intermediate routers. However, in special cases (such as the LSP hierarchy of RFC 4206, LSP stitching, and MPLS local protection), the ingress router may push a label onto the label stack of an already existing MPLS packet (instead of an IP packet). Note that, although the ingress router is the starting point of an LSP, it may or may not be the source of the underlying IP packets.


5. Explain clearly why open-loop flow control requires a call admission control mechanism.
- To meet bandwidth, buffer, and QoS requirements.
- To verify actual compliance with the traffic descriptor.

Call admission control scheme: the QoS guarantees of newly accepted connections should not affect currently supported connections. Schemes that are too conservative, based on worst-case scenarios, waste resources; schemes that are too optimistic may fail to meet QoS guarantees.

6. Explain the differences between the leaky bucket and token bucket traffic shaping policies.

An important difference between the two traffic shaping algorithms: the token bucket throws away tokens when the bucket is full but never discards packets, while the leaky bucket discards packets when the bucket is full. Unlike the leaky bucket, the token bucket allows saving, up to the maximum size of the bucket, n. This means that bursts of up to n packets can be sent at once, giving faster response to sudden bursts of input. The leaky bucket forces bursty traffic to smooth out; the token bucket permits burstiness but bounds it. The token bucket has no discard or priority policy. Compared to the leaky bucket, the token bucket is easy to implement: each flow needs just a counter to count tokens and a timer to determine when to add new tokens to the counter.
Reference from: www.ics.uci.edu/~ics218/lectures/mmlecture13.ppt

The Leaky Bucket Algorithm (1/4)
(a) A leaky bucket with water; (b) a leaky bucket with packets.
The Leaky Bucket Algorithm (2/4)
• Each host is connected to the network by an interface containing a leaky bucket, i.e., a finite internal queue.
– This arrangement can be built into the hardware interface or simulated by the host OS.
– In fact, it is nothing other than a single-server queueing system with a constant service rate.
– The host is allowed to put one packet per clock tick onto the network.
– This mechanism can smooth out bursts and greatly reduce the chances of congestion.
The Leaky Bucket Algorithm (3/4)
• When the packets are all the same size (e.g., ATM cells), the above algorithm can be used.
– However, when variable-sized packets are being used, it is often better to allow a fixed number of bytes per tick, rather than just one packet.
• Implementations with a circular queue exist for both variable-sized and fixed-size packets.
The Leaky Bucket Algorithm (4/4)
• Imagine that a computer can produce data at 25 MB/sec and that the network also runs at this speed.
– However, the routers can accept this data rate only for short intervals; for long intervals, they work best at rates not exceeding 2 MB/sec.
• Now suppose the data comes in 1-MB bursts, one 40-msec burst at a time. To reduce the average data rate to 2 MB/sec, we could use a leaky bucket with ρ = 2 MB/sec and a capacity, C, of 1 MB.
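The numbers in the (4/4) example can be checked with a minimal fixed-tick leaky-bucket simulation (my own sketch; one tick here is 10 ms, and packet boundaries are ignored by counting bytes):

```python
def leaky_bucket(arrivals, rate, capacity):
    """Byte-counting leaky bucket: `arrivals[t]` bytes arrive at tick t;
    at most `rate` bytes leave per tick; at most `capacity` bytes may queue."""
    queue, dropped, out = 0, 0, []
    for a in arrivals:
        accepted = min(a, capacity - queue)   # bytes over the bucket size are lost
        dropped += a - accepted
        queue += accepted
        sent = min(queue, rate)               # steady drain onto the wire
        queue -= sent
        out.append(sent)
    while queue:                              # drain whatever remains
        sent = min(queue, rate)
        queue -= sent
        out.append(sent)
    return out, dropped

# 1 MB burst spread over the first 40 ms, rho = 2 MB/sec, C = 1 MB, 10-ms ticks:
MB = 1_000_000
out, dropped = leaky_bucket([MB // 4] * 4, rate=MB // 50, capacity=MB)
print(len(out), dropped)
```

With these parameters the 40-msec input burst leaves the bucket as a smooth 2 MB/sec stream lasting 0.5 s (50 ticks), with nothing dropped.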
The Token Bucket Algorithm:
• Drawbacks of the leaky bucket algorithm: for many applications, it is better to allow the output to speed up somewhat when large bursts arrive (e.g., MPEG-based streaming).
• In the token bucket algorithm, the leaky bucket holds tokens, generated by a clock at the rate of one token every ΔT sec.
– For a packet to be transmitted, it must capture and destroy one token.
• Differences between LBA and TBA:
– LBA does not allow an idle host to save up permission to send a large burst later. TBA does allow saving, up to the maximum size of the bucket, n; this allows some burstiness in the output stream and gives faster response to sudden bursts of input.
– TBA throws away tokens when the bucket fills up but never discards packets; in contrast, LBA discards packets when the bucket fills up.
– TBA regulating a host can make the host stop sending when the rules say it must; telling a router to stop sending while its input keeps pouring in may result in lost data.
• Implementation of TBA:
– A packet can only be transmitted if enough tokens are available to cover its length in bytes.
– Fractional tokens are kept for future use.
– The token counter is incremented by one every ΔT and decremented by one whenever a packet is sent.
– In the byte-count variant, the token counter is incremented by k bytes every ΔT and decremented by the length of each packet sent.
• LBA and TBA can also be used to smooth traffic between routers, as well as regulate host output.

Comparing Traffic Policing and Traffic Shaping for Bandwidth Limiting
Traffic Policing
Cisco IOS supports the following methods of traffic policing:
Committed Access Rate
Class-Based Policing
The two mechanisms have important functional differences, as explained in Comparing Class-Based Policing and Committed Access Rate. Cisco recommends class-based policing and other features of the modular QoS CLI when applying QoS policies.
Use the police command to specify that a class of traffic should have a maximum rate imposed on it, and if that rate is exceeded, an immediate action must be taken. In other words, with the police command, it is not an option to buffer the packet and later send it out, as is the case for the shape command.
In addition, with policing, the token bucket determines whether a packet exceeds or conforms to the applied rate. In either case, policing implements a configurable action, which includes setting the IP precedence or Differentiated Services Code Point (DSCP).
The following diagram illustrates a common application of traffic policing at a congestion point, where QoS features generally apply.

Minimum Versus Maximum Bandwidth Controls
Both the shape and police commands restrict the output rate to a maximum kbps value. Importantly, neither mechanism provides a minimum bandwidth guarantee during periods of congestion. Use the bandwidth or priority command to provide such guarantees.
A hierarchical policy uses two service policies – a parent policy to apply a QoS mechanism to a traffic aggregate and a child policy to apply a QoS mechanism to a flow or subset of the aggregate. Logical interfaces, such as subinterfaces and tunnel interfaces, require a hierarchical policy with the traffic-limiting feature at the parent level and queuing at lower levels. The traffic-limiting feature reduces the output rate and (presumably) creates congestion, as seen by queuing excess packets.
The following configuration is sub-optimal and is shown to illustrate the difference between the police versus the shape command when limiting a traffic aggregate – in this case class-default – to a maximum rate. In this configuration, the police command sends packets from the child classes based on the size of the packet and the number of bytes remaining in the conform and exceed token buckets. (See Traffic Policing.) The result is that rates given to the Voice over IP (VoIP) and Internet Protocol (IP) classes may not be guaranteed since the police feature is overriding the guarantees made by the priority feature.
However, if the shape command is used, the result is a hierarchical queuing system, and all guarantees are made. In other words, when the offered load exceeds the shape rate, the VoIP and IP classes are guaranteed their rate, and the class-default traffic (at the child level) incurs any drops.
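A hedged sketch of the hierarchical shaping policy described above, in Cisco MQC syntax (class names, match criteria, and rates are illustrative placeholders, not taken from the original):

```
class-map match-any VOIP
 match ip dscp ef
class-map match-any IP
 match ip dscp af31
policy-map CHILD
 class VOIP
  priority 128            ! low-latency queue with a 128-kbps guarantee
 class IP
  bandwidth 256           ! 256-kbps minimum during congestion
policy-map PARENT
 class class-default
  shape average 512000    ! buffer and delay traffic above 512 kbps
  service-policy CHILD    ! child queuing works within the shaped rate
interface Serial0/0
 service-policy output PARENT
```

Because the parent uses shape rather than police, excess packets are queued instead of dropped, and the child's priority and bandwidth guarantees hold within the 512-kbps aggregate.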

7. What is the difference between FIFO queuing and WFQ queuing?


• Comparing FIFO vs WFQ -- worst-case buffer tradeoffs
Queuing scenarios: FIFO vs WFQ delay with equal buffer size

• WFQ provides good performance for all applications at the cost of some loss of VoIP performance.
• FIFO performed surprisingly well compared to WFQ.
• Buffer size is an important consideration when comparing queuing scheme performance.

8. In what scenario is WRR not fair?
WRR serves a fixed number of packets per queue per round, so when packet sizes differ across queues, a queue carrying larger packets receives more than its weighted share of bandwidth. In addition, WRR offers no QoS guarantee for lower-priority traffic within a round.

9. Explain why the tail dropping policy rewards greedy connections and how the random dropping policy can help solve this problem.

Drop policies such as drop tail (DT) and RED (Random Early Detection) do not maintain information about individual active connections. With tail drop, a greedy connection that keeps the buffer full captures most of the queue, while losses fall on whichever packets happen to arrive at a full buffer. RED is an AQM mechanism that is not designed to operate with any specific protocol, but it performs better with protocols that perceive drops as indications of congestion (e.g., TCP). RED gateways require the user to specify five parameters: the maximum buffer size or queue limit (QL), the minimum (minth) and maximum (maxth) thresholds of the "RED region", the maximum dropping probability (maxp), and the weight factor used to calculate the average queue size (wq). QL can be defined in terms of packets or bytes; note that when DT is implemented at a router, QL is the only parameter that needs to be set. A RED router uses early packet dropping in an attempt to control the congestion level, limit queuing delays, and avoid buffer overflows. Early packet dropping starts when the average queue size exceeds minth. RED was specifically designed to use the average queue size (avg), instead of the current queue size, as its measure of incipient congestion, because the latter proves to be rather intolerant of packet bursts. If the average queue size does not exceed minth, a RED router will not drop any packets. Because the early drops are random, they fall on connections roughly in proportion to their share of the traffic, so greedy connections are no longer rewarded.
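The RED drop decision can be sketched from the parameters above (my own illustrative code; the count-based correction p_a = p_b / (1 − count · p_b) from the original RED design is omitted for brevity):

```python
def red_avg(avg, q, wq):
    """EWMA of the instantaneous queue size q with weight wq."""
    return (1 - wq) * avg + wq * q

def red_drop_prob(avg, minth, maxth, maxp):
    """Early-drop probability as a function of the average queue size."""
    if avg < minth:
        return 0.0               # below the RED region: never drop
    if avg >= maxth:
        return 1.0               # above maxth: drop every arrival
    return maxp * (avg - minth) / (maxth - minth)

assert red_drop_prob(5, 10, 30, 0.1) == 0.0
assert abs(red_drop_prob(20, 10, 30, 0.1) - 0.05) < 1e-12
assert red_drop_prob(35, 10, 30, 0.1) == 1.0
```

Because each arriving packet is dropped independently with this probability, drops are spread across flows and over time, which is also what breaks the global synchronization asked about in question 10.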

10. Explain how RED can help avoid the global synchronization effect.


RED queuing discipline

All the queuing disciplines we have seen so far (FIFO, PRIO, TBF, SFQ), and also HTB, which we will see later, base their queuing algorithm on the well-known FIFO queue. As you remember, in this queuing discipline packets enter the queue at the tail and leave it from the head. This, which may seem very simple and natural, poses a serious problem for flows based on the de-facto standard TCP transport protocol: when a tail-drop queue overflows, packets from many TCP flows are dropped at once, so all those flows back off together and later ramp up together, and the link oscillates between idle and overloaded (global synchronization). RED instead drops packets randomly and early, with a probability that grows with the average queue size, so different flows are signaled at different times and their congestion windows desynchronize.

Homework 4

ITS 413 - Internet Technologies and Applications
Homework 4
Due Date : 24/1/09
Part I - Answer the following questions clearly (100 pts)

Saturday, December 27, 2008

Internet Timeline


Development @ Sirwilliams