Thursday, October 05, 2006

Instant Throughput Dilemma

After compiling the code numerous times and observing the output, I think I have succeeded in calculating the instant throughput, and I hope it will be accurate enough. Instant throughput is the total number of bits divided by the total time, where the total time is the difference between the current packet's timestamp and the previous packet's timestamp. So why is throughput important?

It is a good measure of the channel capacity of a communications link, and connections to the Internet are usually rated in terms of their bit rate, i.e. how many bits they transmit per second (bit/s). [Wikipedia]
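As a toy example of the definition above (the numbers are made up for illustration, not taken from the simulator):

```python
# Toy example: instant throughput = total bits / elapsed time.
total_bytes = 125_000   # bytes received in the interval (made-up number)
elapsed = 0.5           # seconds between the previous and current packet

throughput_bps = (total_bytes * 8) / elapsed  # convert bytes to bits
print(throughput_bps)   # 2000000.0 bit/s, i.e. 2 Mbit/s
```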


The idea of calculating the instant throughput looks simple, but I had a hard time coding the method, lol. I added two methods to the 3G base station subclass of the MIRAI-SF3.1 network simulator.

The instant throughput method is based on an awk script that can be found on Marco Fiore's website.

Here is my pseudocode:

instantThroughput method {

    Calibrate tk when the method is called for the first time, so that the current packet time is not already past tk.
    Add the size of the transmitted packet to the total bytes.
    Add the difference between the current packet time and the previous packet time to the total time.
    If the current packet time is larger than or equal to tk, calculate the instant throughput and reset totalBytes and totalTime to zero.
    Increase tk by the interval constant.
    Set previouspackettime to currentpackettime.

}


calibratetk method {
    // Check whether the method is running for the first time; if so, set the
    // "first" boolean flag to true, add the current packet time to tk, and
    // return the tk value.
}
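The two methods above might be sketched like this in Python (the real code lives in the MIRAI-SF C++ base station subclass, which I haven't seen; the class name, method names, and the 1-second interval here are my own assumptions):

```python
class InstantThroughput:
    """Sketch of the two pseudocode methods; not the actual MIRAI-SF code."""

    def __init__(self, interval=1.0):
        self.interval = interval    # assumed sampling interval in seconds
        self.tk = interval          # end of the current sampling window
        self.first = False          # has calibrate_tk run yet?
        self.total_bytes = 0
        self.total_time = 0.0
        self.prev_time = 0.0

    def calibrate_tk(self, current_time):
        # On the first call, shift tk forward by the current packet time so
        # the first window starts at the first packet, not at time zero.
        if not self.first:
            self.first = True
            self.tk += current_time
            self.prev_time = current_time
        return self.tk

    def on_packet(self, current_time, nbytes):
        self.calibrate_tk(current_time)
        self.total_bytes += nbytes
        self.total_time += current_time - self.prev_time
        samples = []
        # If the base station was idle, current_time may be several
        # intervals past tk, so emit one sample per elapsed interval.
        while current_time >= self.tk:
            if self.total_time > 0:
                bps = (self.total_bytes * 8) / self.total_time
            else:
                bps = 0.0  # idle window: no data, report zero
            samples.append((self.tk, bps))
            self.total_bytes = 0
            self.total_time = 0.0
            self.tk += self.interval
        self.prev_time = current_time
        return samples
```

With a 1-second interval, feeding packets at t = 0.25 s and 0.75 s accumulates bytes silently, and a packet at t = 1.5 s crosses tk and produces one throughput sample for the window ending at 1.25 s.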


Sometimes I noticed that the current packet time was already greater than tk when the method was called. The logical reason is that the base station does not transmit packets all the time. In that case, instant throughput is calculated every time the method is called until tk becomes larger than the current packet time (every time instant throughput is calculated, tk is increased: tk += interval). The calibration step makes sure that the current packet time is smaller than tk at the first packet transmission. Thanks to Kentaro, a MIRAI-SF developer from NICT, for replying to my email.
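The catch-up behaviour can be sketched on its own (a standalone sketch; the function and variable names are mine, not from the simulator):

```python
INTERVAL = 1.0  # assumed sampling interval in seconds

def catch_up(tk, current_packet_time, total_bytes, total_time):
    # When the base station has been idle, current_packet_time may be
    # several intervals past tk, so one call emits several throughput
    # samples until tk overtakes current_packet_time.
    samples = []
    while current_packet_time >= tk:
        bps = (total_bytes * 8) / total_time if total_time > 0 else 0.0
        samples.append((tk, bps))
        total_bytes, total_time = 0, 0.0  # reset for the next window
        tk += INTERVAL                    # advance the window boundary
    return tk, samples

# After an idle gap, a packet at t = 3.5 s with tk still at 1.0 s
# emits one sample per elapsed interval; only the first has data:
tk, samples = catch_up(tk=1.0, current_packet_time=3.5,
                       total_bytes=1000, total_time=0.5)
print(tk)       # 4.0
print(samples)  # [(1.0, 16000.0), (2.0, 0.0), (3.0, 0.0)]
```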

Now, how do I code a variable bit rate (VBR) application? Scratching my head again.
