The question I am trying to figure out is:
In this problem we consider the delay introduced by the TCP slow-start phase. Consider a client and a Web server directly connected by one link of rate R. Suppose the client wants to retrieve an object whose size is exactly equal to 15S, where S is the maximum segment size (MSS). Denote the round-trip time between client and server as RTT (assumed to be constant). Ignoring protocol headers, determine the time to retrieve the object (including TCP connection establishment) when
- 4S/R > S/R + RTT > 2S/R
- 8S/R > S/R + RTT > 4S/R
- S/R > RTT
I have the solution already (it's a problem from a textbook), but I do not understand how they got to the answer.
- RTT + RTT + S/R + RTT + S/R + RTT + 12S/R = 4 · RTT + 14 · S/R
- RTT + RTT + S/R + RTT + S/R + RTT + S/R + RTT + 8S/R = 5 · RTT + 11 · S/R
- RTT + RTT + S/R + RTT + 14S/R = 3 · RTT + 15 · S/R
and here is the image that goes with the answer:

[Figure omitted: the textbook's timing diagram for the first case, showing the handshake, the request, and the successive slow-start windows, with each window's segments drawn in a different color (the first segment in green, the next window's in blue).]
What kind of makes sense to me: each scenario pins RTT against the time it takes to transmit a certain number of segments. For the first one, 2S/R < S/R + RTT < 4S/R means RTT is somewhere between S/R and 3S/R. From there I don't understand how slow start is operating. I thought it just increases the window size for every acknowledged packet. But, for example, in the solution to #1, only two packets appear to be sent and ACKed, and yet the window size jumps to 12S? What am I missing here?
Yes, the answer is correct.
Slow start doubles the window every round trip, so the windows are 1 MSS, then 2, then 4, then 8; note that 1 + 2 + 4 + 8 = 15, exactly the size of the object.
To understand the figure, think of it this way: each time one MSS is acknowledged, two new MSS are sent, because the ACK both frees the slot of the segment that was just delivered and grows the window by one MSS. In your example: when the first (green) MSS is acknowledged, two (blue) MSS are sent; when the second MSS is acknowledged, two more (blue) MSS are sent, and so on. That is also the answer to the "12S" that confused you: it is not a single window, it is the remaining 12 segments of the last two windows (4 + 8) arriving back to back.
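To make the totals mechanical, here is my restatement of the textbook's idealized model (headers ignored, constant RTT). After transmitting a window of k segments, the server stalls for S/R + RTT - k·S/R whenever that quantity is positive, i.e. whenever the window finishes transmitting before the first ACK returns. The total delay is then 2·RTT (connection setup plus request) + 15·S/R (pure transmission) + the stalls:

- Case 1 (S/R < RTT < 3S/R): stalls after the 1- and 2-segment windows: RTT + (RTT - S/R); total = 4·RTT + 14·S/R
- Case 2 (3S/R < RTT < 7S/R): stalls after the 1-, 2-, and 4-segment windows: RTT + (RTT - S/R) + (RTT - 3S/R); total = 5·RTT + 11·S/R
- Case 3 (RTT < S/R): only the first window stalls, for RTT; total = 3·RTT + 15·S/R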
In other words, once the window is large enough, you no longer wait between windows at all: the ACKs come back while the server is still transmitting the current window. In case 1 that holds from the 4-segment window onward (4S/R > S/R + RTT), which is why the last 12 segments arrive as one continuous burst of 12S/R.
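If it helps to check the arithmetic, here is a minimal Python sketch of that same idealized model (the function name and sample numbers are mine, not the textbook's); it reproduces the three closed-form answers:

```python
def retrieval_time(RTT, S, R, nsegs=15):
    """Time to fetch nsegs segments of size S over a link of rate R,
    including TCP connection establishment, under idealized slow start."""
    t = 2 * RTT          # handshake, then request propagates and the
                         # server starts sending (headers ignored)
    sent, cwnd = 0, 1
    while sent < nsegs:
        w = min(cwnd, nsegs - sent)   # segments in this window
        t += w * S / R                # transmit the whole window
        sent += w
        if sent < nsegs:
            # Stall only if the window finishes transmitting before the
            # first ACK returns (S/R to transmit + RTT to be ACKed).
            t += max(0.0, S / R + RTT - w * S / R)
        cwnd *= 2                     # slow start doubles each round
    return t

S, R = 1.0, 1.0  # pick units so S/R = 1
print(retrieval_time(RTT=2.0, S=S, R=R))  # S/R < RTT < 3S/R: 4*2 + 14 = 22.0
print(retrieval_time(RTT=5.0, S=S, R=R))  # 3S/R < RTT < 7S/R: 5*5 + 11 = 36.0
print(retrieval_time(RTT=0.5, S=S, R=R))  # RTT < S/R: 3*0.5 + 15 = 16.5
```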