Post by zezo | 2012-01-18 | 08:11:11
The trade-off is space covered vs. spatial resolution.
In this case a narrow path with higher resolution works better.
The basic idea is that in general it does not make sense to sail in a direction opposite to the destination, so you can discard some headings.
In open sea and stable conditions you could check only a 90-degree sector around the bearing to the destination. That would be enough to allow going upwind and tacking in the worst possible case.
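A minimal sketch of the sector idea (the function name, step size, and sector widths are my assumptions, not the router's actual code): restricting the search to a sector around the bearing to the destination cuts the number of headings evaluated per point roughly in proportion to the sector width.

```python
def candidate_headings(bearing_to_dest, sector=90, step=5):
    """Hypothetical sketch: candidate headings (degrees) within a
    `sector`-degree cone centred on the bearing to the destination."""
    half = int(sector / 2)
    return [(bearing_to_dest + d) % 360 for d in range(-half, half + 1, step)]

# A 90-degree sector at 5-degree resolution gives 19 candidates per
# point; widening to 200 degrees more than doubles that to 41.
narrow = candidate_headings(45, sector=90)
wide = candidate_headings(45, sector=200)
```

Since the search fans out from every point on the frontier, the per-point candidate count compounds, which is why a wider sector costs so much more than the one-step numbers suggest.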
Things change when there is an obstacle in the way in the form of land mass or weather system.
Example: If you wanted to sail from the W to the E coast of India right now, starting at 20N, the initial direction is downwind with a heading of about 190 or 200. If the destination is at 20N on the other side, that is at least a 100-degree angle between the bearing to the destination and the initial heading, with effectively negative VMG. To cover that case you have to consider a 200-degree sector. But a 200-degree sector covers 4 times more area than a 100-degree one, and is therefore about 4 times slower.
It's not exactly 4 times - it could be 2 or 8 - because one of the optimizations is that directions with TWA below 40 or above 160 are simply discarded as inefficient. So if you get the narrow 90-degree sector case combined with a direct upwind path, it's effectively decreased to a 10-degree sector and will be very fast. Yesterday I decreased the sector from 240 to 180 degrees, which combined with wind direction is like an effective decrease from 160 to 100.
All of this would not matter in a standalone application - you would get the result in 2 or 5 seconds instead of 1 - but in this multiuser environment even increasing the time from 1 to 1.5 seconds can be a disaster at peak times.
Let's suppose 60 people try to access the site in one minute. The 1-second case will almost work. It will work perfectly if the requests come in at 1/second, and fail badly if all requests come within a few seconds. In the worst case (60 requests in 1 second) you can do two things:
1. Spool them and run them one by one. This guarantees up to 60-second response time, with the fastest response at 1 second.
2. Run 60 parallel requests - this gives a 60-second response time for everyone.
Here is where you get the avalanche effect - running requests in parallel has some overhead, so it gets slower with increased demand. After some waiting period users start reloading, which increases the demand further. The queue and waiting times get longer, and sometimes everything grinds to a halt. (Btw, this is what happens with the game servers too around wind update times, when you can't save changes.)
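The two options can be compared with a toy model (the linear per-request overhead for the parallel case is my assumption, chosen only to show the avalanche shape):

```python
def sequential_response_times(n, service_time=1.0):
    """Option 1: all n requests arrive at once and are served one by
    one; the i-th requester waits for everyone queued ahead of them."""
    return [service_time * (i + 1) for i in range(n)]

def parallel_response_time(n, service_time=1.0, overhead=0.02):
    """Option 2: all n requests share the CPU, so each takes n times
    longer, plus a hypothetical per-concurrent-request overhead."""
    return n * service_time * (1 + overhead * n)

seq = sequential_response_times(60)   # fastest 1 s, slowest 60 s
par = parallel_response_time(60)      # everyone waits > 60 s
```

The sequential queue at least serves someone quickly; the parallel case makes everyone wait the worst-case time, and the overhead term is what turns rising demand into the avalanche: each extra request slows all the others, users reload, and demand grows faster than it can be served.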