@miconda yes, the new code is not using randomized selection for the destination. It selects based on probability throughout: on each iteration it recalculates the percentage of calls already assigned to each destination, out of the total calls made so far. Think of the sorting loop as if it were processing a new call on each iteration of i. Maybe a walkthrough based on your above example, weight1=75, weight2=25, would clarify.
On the first iteration of i, no calls have been processed, so the call goes to dst1. On the second iteration, dst1 has 1/1 calls; 100% is greater than its intended weight of 75%, so we distribute the call to dst2. For i=3, both destinations have 50% of the calls, but dst1 is intended to have 75% of the calls, so the call goes to dst1. This repeats until 100 "calls" have been processed.
The effect is exactly the same in terms of the percentage distribution of calls, i.e. the intention of the algorithm, but the "bug fix" is that the distribution is now equivalent at high call volumes and low call volumes. So it is true that the new algorithm does not use random sorting under 100 calls like the previous one did, but I believe that is a bug fix, not a new feature. I think that was a decent explanation, but if not, please ask for more clarification.
On the topic of memory allocation, thanks for the reference, I am always happy to learn! In my live tests the compiler happily accepted my code and gdb gave me good output back... I don't claim to be a master of memory management, so I will defer to you on whether pkg_malloc/free() is required here to be compliant with all the intended C specs.