A Random Number Rant


Random numbers are important to programming, and today we'll explore some common random number-related issues from several perspectives.

This article only discusses integer random numbers, and it assumes a minimal understanding of probability theory (at least knowing what a classical probability model is).

Index of this article

  • How to generate rand5 from rand7
    • The go standard library approach
  • Generate rand7 from rand5
  • Make the most of every bit
  • Random numbers with weights
  • Random number seeds
    • Which sources golang and python3 use for seeding
  • Summary

How to generate rand5 from rand7

Let's start with the reverse of a well-known algorithm interview question.

Given a random number generator that produces 0 to 6 with equal probability, build from it a random number generator that produces 0 to 4 with equal probability.

Some might immediately give this answer:

func rand5() int {
    return rand7() % 5
}

However, this answer only satisfies the range requirement (0 to 4); it does not satisfy equal probability, so it is incorrect. Enumerating the outputs makes this clear:

rand7 output    rand7 % 5
0               0
1               1
2               2
3               3
4               4
5               0
6               1

See the problem? 0 and 1 each appear twice, making them twice as likely as every other number, so the distribution is not uniform.

The enumeration also reveals the real solution: if we throw away 5 and 6, the remaining outputs are both in range and equally probable. So the code looks like this:

func rand5() int {
    n := rand7()
    for n >= 5 {
        n = rand7()
    }
    return n
}

The above code is an instance of a very important sampling algorithm: rejection sampling. Generating rand5 from rand7 belongs to a large class of problems: given samples drawn from a distribution g(x), produce samples that follow a target distribution f(x). Many algorithms solve this class of problems, and rejection sampling is among the most intuitive: check whether each g(x) sample meets the requirement, discard those that don't and draw again, and accept those that do into the set of f(x) samples.

Following this perspective, the above question can be divided into several basic elements:

  • g(x): rand7
  • f(x): rand5
  • Samples to reject: integers greater than or equal to 5
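
Putting these elements together, here is a minimal generic sketch of rejection sampling (rejectionSample and its parameters are my names for illustration, not any library's API):

func rejectionSample[T any](sample func() T, accept func(T) bool) T {
	for {
		// draw from g(x); keep the sample only if it meets f(x)'s requirements
		if v := sample(); accept(v) {
			return v
		}
	}
}

With it, rand5 is just rejectionSample(rand7, func(n int) bool { return n < 5 }).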

Rejection sampling gives the desired result most of the time, but the acceptance rate needs watching: the fraction of g(x) samples that get accepted. When it is too low, the algorithm becomes very inefficient. A little math for our rand5: the acceptance rate is five out of seven, about 71%, so the expected number of rand7 calls per result is 7/5 = 1.4. Not great, not terrible; just about acceptable.

When the acceptance rate is too low, you either change the criteria or range for rejection, or give up on rejection sampling altogether.

The go standard library approach

The standard library certainly has no rand5 or rand7, but it does provide an Int63n function, which solves the same problem at a different scale: given uniformly distributed random integers over [0, 2⁶³), generate uniformly distributed random integers over [0, n).

We certainly cannot throw away every sample greater than or equal to n as we did above: [0, 2⁶³) contains over 9.2×10¹⁸ integers, so for small n that would mean an unacceptably low acceptance rate.

But since we take the result mod n, we don't have to reject everything above n; rejecting everything above a multiple of n is enough. The proof is simple, enumeration plus induction will do. Or think of it this way: for any integer constant C, over [0, C·n) the expression x % n produces each value in [0, n) exactly C times, because [0, C·n) is just [0, n) repeated C times. So if x is uniform on [0, C·n), then x % n is uniform on [0, n).

Choosing C as large as possible lets us accept as many samples as possible, which minimizes retries.

C is easy to choose: make C·n the largest multiple of n that fits in the generator's range, and if n divides 2⁶³ exactly, C is simply 2⁶³/n.

So the standard library reads:

func (r *Rand) Int63n(n int64) int64 {
	if n <= 0 {
		panic("invalid argument to Int63n")
	}
	if n&(n-1) == 0 { // n is power of two, can mask
		return r.Int63() & (n - 1)
	}
	max := int64((1 << 63) - 1 - (1<<63)%uint64(n))
	v := r.Int63()
	for v > max {
		v = r.Int63()
	}
	return v % n
}

The code is simple: samples beyond the last whole multiple of n (i.e. above max) are rejected, and the remaining samples are guaranteed to yield uniformly distributed integers after mod n.

What is the acceptance rate? We can work backwards from the rejection rate, which here is easy to read off: (1<<63)%uint64(n) samples get rejected out of 2⁶³. For everyday values of n that fraction is vanishingly small (it only becomes significant for enormous n, approaching one half when n is just over 2⁶²), so one draw, at most two, almost always yields the result.

But for a standard library that's not enough, especially given that go's compiler optimizations are comparatively weak and efficient algorithms are needed to compensate. What's the problem? Not the acceptance rate, which is fine; the problem is the two 64-bit integer divisions (the two modulo operations), which are far slower than operations like shifts. The algorithms used by other languages' standard libraries need only zero to one divisions.

Fortunately go now offers math/rand/v2, which uses a more efficient algorithm.

The new algorithm is still based on rejection sampling, but the range of sampling has been changed as follows:

  1. Still use a uniform 64-bit generator to produce a random integer x with equal probability
  2. Compute x·n; the resulting values lie in the range [0, n·2⁶⁴)
  3. Because this is a uniform stretching of an existing range, x·n remains uniformly distributed over [0, n·2⁶⁴) (note that the range grows but the total number of samples is still 2⁶⁴)
  4. [0, n·2⁶⁴) can be divided evenly into n segments: [0, 2⁶⁴), [2⁶⁴, 2·2⁶⁴), [2·2⁶⁴, 3·2⁶⁴) ... [(n-1)·2⁶⁴, n·2⁶⁴)
  5. Integer-dividing a value in the k-th segment by 2⁶⁴ yields k, which necessarily lies in [0, n)
  6. k can be taken as the result, and the integer division by 2⁶⁴ is really just a 64-bit right shift, so one division operation disappears

There are a couple of wrinkles with the new algorithm. The first is that x·n will usually exceed 2⁶⁴, but that's nothing to worry about, because go offers high-performance 128-bit integer arithmetic.
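
Concretely, that arithmetic is math/bits.Mul64; the wrapper below is purely for illustration:

// bits.Mul64 returns the full 128-bit product of two uint64 values as a
// (hi, lo) pair, where hi == (x*n) >> 64 and lo is the low 64 bits.
// No division is involved.
func mul128(x, n uint64) (hi, lo uint64) {
	return bits.Mul64(x, n)
}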

The second wrinkle: although x·n is uniform over [0, n·2⁶⁴), how do we ensure that it is also uniform across each segment [k·2⁶⁴, (k+1)·2⁶⁴)?

The answer is that with only the six steps above, we can't. To guarantee x·n distributes evenly across the segments [k·2⁶⁴, (k+1)·2⁶⁴), x itself must be uniform across the n pieces [k·(2⁶⁴/n), (k+1)·(2⁶⁴/n)); in other words, dividing [0, 2⁶⁴) into n pieces must put the same number of samples in each piece. Since our samples are integers rather than real numbers, most n will not divide 2⁶⁴ evenly, leaving a "remainder": even with the most even split, some pieces end up with a sample or two more than others, depending on how you round 2⁶⁴/n.

Not that this usually matters much; the resulting error in the probabilities is tiny. For example, take n = 6 and split as evenly as possible: four pieces end up with one sample more than the rest, but every piece holds more than 3×10¹⁸ samples, so one extra is all but negligible.

But a standard library must above all be correct: even a deviation of 1 in 3×10¹⁸ is still not 0, so the implementation would be incorrect. Worse, the larger n gets, the fewer samples each piece holds, and the more the imbalance between pieces matters; there will always be some n for which the error grows too large to ignore.

Fortunately the problem is tractable: we know some samples must be discarded anyway, and all we need is for every piece to keep the same number of samples; which specific samples get discarded doesn't matter. Suppose we round 2⁶⁴/n down. Then some samples that theoretically belong to piece k spill over into piece k+1, and these spill-over samples sit at the start of each piece, which makes them easy to reject: after multiplying by n, they fall in [k·2⁶⁴, k·2⁶⁴ + (2⁶⁴ % n)), i.e. their low 64 bits are less than 2⁶⁴ % n.

By rejecting exactly these samples, we guarantee that x is uniform across every piece [k·(2⁶⁴/n), (k+1)·(2⁶⁴/n)), and therefore x·n is uniform across the n segments.
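
To make the argument concrete, here is a toy check of my own: shrink the word from 64 bits to 8 (purely an assumption for illustration) and verify exhaustively that rejecting the spill-over makes every result equally likely:

// 8-bit analogue of the algorithm: x is uniform on [0, 256), n = 6.
func main() {
	const n = 6
	counts := make([]int, n)
	thresh := 256 % n // the analogue of 2⁶⁴ % n
	for x := 0; x < 256; x++ {
		hi, lo := x*n/256, x*n%256 // the analogue of bits.Mul64
		if lo < thresh {
			continue // reject the spill-over samples
		}
		counts[hi]++
	}
	fmt.Println(counts) // prints [42 42 42 42 42 42]: exactly uniform
}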

Once the idea is clear, the code isn't hard to follow:

func (r *Rand) uint64n(n uint64) uint64 {
	if is32bit && uint64(uint32(n)) == n {
		return uint64(r.uint32n(uint32(n)))
	}
	if n&(n-1) == 0 { // n is power of two, can mask
		return r.Uint64() & (n - 1)
	}

	hi, lo := bits.Mul64(r.Uint64(), n)
	if lo < n {
		thresh := -n % n // 2⁶⁴ % n simplified form
		for lo < thresh {
			hi, lo = bits.Mul64(r.Uint64(), n)
		}
	}
	return hi
}

The essence is using (x·n) >> 64 in place of x % n, avoiding the division it would need. Better yet, the new algorithm computes the remainder check lazily (only when lo < n), so with a bit of luck it performs no division at all.

One small question: is 128-bit multiplication enough? Definitely, because n can be at most 2⁶⁴, which means x·n tops out below 2⁶⁴·2⁶⁴; 128 bits hold that exactly.

Finally, performance. Using the benchmark provided in the standard library, on my 10th-gen i5 the old algorithm takes 18ns per call and the new one only 5ns, with both using the same underlying generator. Calling the new algorithm 3x faster is fair, and that is a significant improvement.

Generate rand7 from rand5

The previous section filtered a subset with a desired property out of a larger sample space; in this section we do the reverse: build a larger sample space with a desired property out of a smaller one.

This too is a common, moderately difficult interview question.

The first thing to consider is how to expand the limited sample space as much as possible. In the previous section we used multiplication to stretch the range of the distribution; however, multiplying by a constant does not increase the number of distinct samples, so that's out. Addition shifts the range without increasing the total number of samples either, and we need a sample space of the form [0, x); translation moves the starting point, so that won't work by itself.

That leaves the tempting option of rand5() * rand5(). It expands the range like multiplication does, and because the multiplier is not a constant, it has a chance of producing new samples. Let's enumerate:

rand5 (1st)    rand5 (2nd)    rand5 * rand5
0              0              0
0              1              0
0              2              0
0              3              0
0              4              0
1              0              0
1              1              1
1              2              2
1              3              3
1              4              4
2              0              0
2              1              2
2              2              4
2              3              6
2              4              8
3              0              0
3              1              3
3              2              6
3              3              9
3              4              12
4              0              0
4              1              4
4              2              8
4              3              12
4              4              16

New samples do appear, but they are not contiguous: there is no 7, no 10, and so on. This path is a dead end.

Time to bring back rejection sampling; this time we use 5 * rand5() + rand5():

rand5 (1st)    rand5 (2nd)    5 * rand5 + rand5
0              0              0
0              1              1
0              2              2
0              3              3
0              4              4
1              0              5
1              1              6
1              2              7
1              3              8
1              4              9
2              0              10
2              1              11
2              2              12
2              3              13
2              4              14
3              0              15
3              1              16
3              2              17
3              3              18
3              4              19
4              0              20
4              1              21
4              2              22
4              3              23
4              4              24

That's right: it produces exactly the uniformly distributed integers 0 through 24. Amazing? It's actually not hard to see why. 5 * rand5() alone generates only the five numbers 0, 5, 10, 15, 20; to have any chance of producing consecutive integers, we must fill the gaps between them (1 to 4, 6 to 9, and so on). Adding + rand5() fills exactly those holes. Even simpler: think of it as a two-digit base-5 number, with each digit drawn uniformly.

To summarize: n * randn() + randn() generates uniformly distributed integers over the contiguous range [0, n·n). Note that you cannot factor this expression, because each call to randn() yields a different value.
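
As a generic sketch (randSquared is my name for it, not a standard function):

// Two draws from a uniform [0, n) generator act as the high and low digits
// of a two-digit base-n number, giving a uniform result on [0, n*n).
func randSquared(randN func() int, n int) int {
	return n*randN() + randN()
}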

This sample space far exceeds what rand7 needs, so the question reduces to the one from the first section: how do we generate rand7 from rand25? By now we all know:

func rand7() int {
    x := 5*rand5() + rand5()
    max := 25 - 25%7
    for x >= max {
        x = 5*rand5() + rand5()
    }
    return x % 7
}

You can also rewrite it to be more efficient using the techniques from the previous section.
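
For instance, here is my adaptation of the multiply-shift trick to this scale (a sketch, not from any library): x·7 maps [0, 25) onto [0, 175), each bucket of 25 maps to one result, and we reject the 25 % 7 = 4 spill-over samples. Enumerating all 25 inputs confirms each result appears exactly 3 times.

func rand7v2() int {
	for {
		x := 5*rand5() + rand5() // uniform on [0, 25), as above
		v := x * 7               // multiples of 7, uniform on [0, 175)
		if v%25 >= 25%7 {        // reject the 4 spill-over samples
			return v / 25        // the bucket index, uniform on [0, 7)
		}
	}
}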

Make the most of every bit

rand.Uint64() returns a full 64 bits of randomness, and we usually don't need numbers that large. For example, a random number from 0 to 15 needs only 4 bits; generating it with rand.Uint64N(16) wastes 60 bits of random data.

Why say that? Because rand.Uint64() generates every value in [0, 2⁶⁴) with equal probability, two things hold: first, every bit of the result is itself random, which is obvious; second, every bit is 0 or 1 with equal probability, which can be shown by enumeration plus induction. Let's demonstrate the second point.

Assume for a moment a generator that uniformly produces numbers in [0, 8), and enumerate all the possibilities:

value    binary    1st bit (from left)    2nd bit    3rd bit
0        000       0                      0          0
1        001       0                      0          1
2        010       0                      1          0
3        011       0                      1          1
4        100       1                      0          0
5        101       1                      0          1
6        110       1                      1          0
7        111       1                      1          1

Notice that across the eight outputs, each of the three bit positions is 0 in four of them and 1 in the remaining four: 0 and 1 with equal probability. This conclusion generalizes to any [0, 2^n) range.

By the same reasoning, any n consecutive bits produce uniformly distributed random numbers over [0, 2^n); the proof is simple enough that we'll skip it.

Now back to why I said 60 bits are wasted: by the conclusion above, a 64-bit random integer splits cleanly into 16 groups of 4 bits each, and every group is itself an equally probable random number in [0, 16). In other words, one rand.Uint64() call should theoretically yield 16 of the random numbers we need, yet in practice we produce only one. Plenty of room for improvement.

How do we do it? A little bitwise arithmetic to carve the 64-bit integer into groups of four bits:

var a [10]int
n := rand.Uint64()
mask := uint64(0xf) // 0b1111
for i := range 10 {
	a[i] = int(n & mask) // keep only the lowest four bits (we'll skip endianness here; it's beside the point)
	n >>= 4              // discard the four bits just used
}

The code is simple, check out the performance test below:

// Don't write code like this.
// It's structured this way so memory allocation doesn't add noise to the benchmark.
func getRand(a []int) {
	if len(a) < 10 {
		panic("length wrong")
	}
	for i := range 10 {
		a[i] = int(rand.Int32N(16))
	}
}

// Don't write code like this.
// It's structured this way so memory allocation doesn't add noise to the benchmark.
func getRandSplit(a []int) {
	if len(a) < 10 {
		panic("length wrong")
	}
	n := rand.Uint64()
	mask := uint64(0xf)
	for i := range 10 {
		// no mod needed here
		a[i] = int(n & mask)
		n >>= 4
	}
}

func BenchmarkGetRand(b *testing.B) {
	var a [10]int
	for range b.N {
		getRand(a[:])
	}
}

func BenchmarkGetRandSplit(b *testing.B) {
	var a [10]int
	for range b.N {
		getRandSplit(a[:])
	}
}

Test results:

goos: windows
goarch: amd64
pkg: benchrand
cpu: Intel(R) Core(TM) i5-10200H CPU @ 2.40GHz
BenchmarkGetRand-8        	15623799	        79.31 ns/op	       0 B/op	       0 allocs/op
BenchmarkGetRandSplit-8   	100000000	        11.18 ns/op	       0 B/op	       0 allocs/op

By fully utilizing every bit, we've made this roughly seven times as fast.

So far so good. If you don't care about the probability distribution, or you only ever need random numbers in a [0, 2^n) range where n divides 64, you can skip ahead to the next section.

Those still reading presumably want uniform random numbers over any range while wasting as few generated bits as possible. Things don't always line up so nicely, though; take [0, 13) and [0, 7) as two examples. Neither range size is a power of two, and worse, the former needs 4 bits (4 divides 64) while the latter needs 3 bits, and 3 does not divide 64.

Start with the simpler problem, [0, 13). Representing 12 takes at least 4 bits, and 4 divides 64, so we can still split into groups of 4 consecutive bits; but uniformity over [0, 13) is no longer guaranteed. The reason is the same as in the "rand7 to rand5" case of section 1: each group of bits can produce samples we don't want. The remedy is also the same: rejection sampling, with the rejection range determined as in section 1. Each group generates [0, 16), and since 13 is prime there is no mod trick to exploit; we simply reject every sample ≥ 13.

So the code becomes something like the following:

func getRandSplit(a []int) {
	if len(a) < 10 {
		panic("length wrong")
	}
	mask := uint64(0xf)
	count := 0
	for {
		n := rand.Uint64()
		for i := 0; i < 16; i++ {
			sample := int(n & mask)
			n >>= 4
			// reject: out of range, move straight on to the next group
			if sample >= 13 {
				continue
			}
			// no mod needed here either
			a[count] = sample
			count++
			if count >= 10 {
				return
			}
		}
	}
}

If a group fails the sampling requirement we skip to the next group, so 16 groups may not yield enough random numbers; in that case we fetch another 64-bit random number and split it again. This costs a little performance, but it's still fast:

goos: linux
goarch: amd64
pkg: benchrand
cpu: Intel(R) Core(TM) i5-10200H CPU @ 2.40GHz
BenchmarkGetRand-8              16242730                72.22 ns/op            0 B/op          0 allocs/op
BenchmarkGetRandSplit-8         37794038                31.81 ns/op            0 B/op          0 allocs/op

Now the speedup is only a bit over 2x.

That was the easy case; [0, 7) is nastier. First, it takes at least 3 bits to represent 6, and 3 doesn't divide 64; second, 7 is not a power of two. What now?

There are two approaches, both with rejection sampling at their core, differing only in the scope of the rejection.

The first idea: since 3 doesn't divide 64, pick a nearby width it does divide, namely 63. That means discarding every sample above 2⁶³-1, then splitting the surviving samples into 21 groups of 3 consecutive bits. This first ensures each 3-bit group uniformly generates a random integer in [0, 8); the question then becomes "how does rand8 generate rand7", which we can do and have done many times. The final code looks like this:

func getRandSplit(a []int) {
	if len(a) < 10 {
		panic("length wrong")
	}
	// note the mask is now just three bits, i.e. 0b111
	mask := uint64(0x7)
	count := 0
	for {
		n := rand.Uint64()
		// first reject samples greater than 2⁶³-1
		if n > 1<<63-1 {
			continue
		}
		for i := 0; i < 21; i++ {
			sample := int(n & mask)
			n >>= 3
			// reject a group whose bits form a number greater than or equal to 7
			if sample > 6 {
				continue
			}
			// no mod needed: accepted samples are already between 0 and 6
			a[count] = sample
			count++
			if count >= 10 {
				return
			}
		}
	}
}

The code got longer and needs two separate rejection steps; on the other hand we now split 21 groups per draw, 5 more than with 4 bits, so it's hard to guess whether performance drops or holds. Let's look at the test:

goos: linux
goarch: amd64
pkg: benchrand
cpu: Intel(R) Core(TM) i5-10200H CPU @ 2.40GHz
BenchmarkGetRand-8              16500700                73.77 ns/op            0 B/op          0 allocs/op
BenchmarkGetRandSplit-8         31098928                39.54 ns/op            0 B/op          0 allocs/op

It did slow down a bit, but it's still an 85% speedup overall.

The second idea needs only one rejection step: since 3 doesn't divide 64, find the nearest integer greater than 3 that does divide 64. Here 4 obviously qualifies; in real development a bit of linear probing will find such a number. Now split the 64-bit random integer into consecutive 4-bit groups, so each group uniformly generates [0, 16), and the problem becomes "generate rand7 from rand16". The code looks like this:

func getRandSplit2(a []int) {
	if len(a) < 10 {
		panic("length wrong")
	}
	mask := uint64(0xf)
	count := 0
	for {
		n := rand.Uint64()
		for i := 0; i < 16; i++ {
			sample := int(n & mask)
			n >>= 4
			if sample > 13 {
				continue
			}
			// the mod cannot be skipped here, because accepted samples can be 7 or larger
			a[count] = sample % 7
			count++
			if count >= 10 {
				return
			}
		}
	}
}

The code is simpler, with a single rejection step, but each group's range is now larger, forcing a modulo operation. Let's see the performance:

goos: linux
goarch: amd64
pkg: benchrand
cpu: Intel(R) Core(TM) i5-10200H CPU @ 2.40GHz
BenchmarkGetRand-8              16451838                75.86 ns/op            0 B/op          0 allocs/op
BenchmarkGetRandSplit-8         30802065                39.15 ns/op            0 B/op          0 allocs/op
BenchmarkGetRandSplit2-8        38995390                30.75 ns/op            0 B/op          0 allocs/op

Idea 2 is almost 20% faster than idea 1. The two-step rejection seems to be the real cost: not just the extra 64-bit draws, but the extra if may also hurt branch prediction, even though idea 1 extracts 5 more groups per draw.

So when you need random numbers over a modest range, squeezing value out of every bit is a very worthwhile performance technique.

Random numbers with weights

Having spent all this time on uniform distributions, let's turn to another common business scenario: sampling from a set where we want randomness, but also want the samples to statistically satisfy some ratio between them.

The first thing that comes to mind is probably a lottery; yes, that's a classic application of weighted random numbers. Another scenario: a load balancer sits in front of servers with different hardware specs and wants to distribute connections according to their weights. Weighted random numbers come in handy there too.

Suppose we have four integer samples (1, 2, 3, 4) and want to generate them randomly in the ratio 1:2:3:4. What should we do?

From the ratio, the probabilities of generating 1, 2, 3, 4 are 0.1, 0.2, 0.3, 0.4, which sum to 1. So imagine a number line for the interval from 0 to 1, and "pack" these proportions onto the line:

0.0   0.1   0.2   0.3   0.4   0.5   0.6   0.7   0.8   0.9   1.0
|-----|-----|-----|-----|-----|-----|-----|-----|-----|-----|

|_______________________|
            4           
                        |_____|
                           1  
                              |_________________|
                                       3        
                                                |___________|
                                                      2

I've deliberately shuffled the order; it doesn't affect the result. Each sample owns a range on the number line, and the lengths of the ranges respect the weights. Given a generator that uniformly produces real numbers over [0, 1), we output whichever sample owns the range the generated number falls into: a number in [0.0, 0.4) yields the sample "4", one in [0.8, 1.0) yields "2". That's a weighted random number. The diagram makes this easy enough to see that I'll skip the proof.

But I'd rather not use this method. Why? As you can see, the intervals are closed on the left, which means comparing floating-point numbers for equality; that's doable but tedious to write. Moreover floating-point numbers can't represent every ratio exactly, so the interval endpoints lose precision and an error term creeps in. It's usually negligible (especially statistically), but I'd rather not have to think about it at all.

So I want a solution without floating point. It's not hard to conceive: suppose we take the integers 0 to 9, exactly ten of them; proportionally, sample "1" should own one of them and sample "4" should own four of them, and so on. Given a generator that uniformly produces integers 0 to 9, whenever the generated number is one owned by "4" we output "4", and when it's one owned by "1" we output "1". As long as different samples own distinct integers in proportion to their weights, we generate the same weighted random numbers.

A few questions remain. First, how do we determine the total count? Simple: treat each ratio term as an integer and add them up; 1:2:3:4 gives 1+2+3+4=10, 2:2:3:5 gives 2+2+3+5=12, and so on. What about real-number ratios like 2.3:3.4:4.7? Use the defining property of ratios: scaling all terms equally preserves the ratio, so multiply each term by 10 until the decimals disappear, giving 23+34+47=104. In theory any ratio works this way, but integers have a size limit, so take care that the sum doesn't overflow.

Second, how do we compute each sample's range? We only need the owned points to be distinct, but consecutive integers make the computation easiest, so we limit the whole range to [0, sum). That lets us use rand.Uint64N(sum) directly as the generator we need.

In the end each sample simply owns a run of consecutive integers proportional to its weight. And it suffices to record only each run's right boundary: comparing from the smallest boundary upward, the first boundary the generated number is less than identifies the sample it falls on (since we compare boundaries in increasing order, the number cannot belong to an earlier sample's run).

Really, we've just swapped continuous real numbers for discrete integer points.

With the idea settled, the code comes quickly:

type WeightRandom[T comparable] struct {
	entries       []entry[T]
	upperBoundary uint64
}

type entry[T comparable] struct {
	value T
	end   uint64
}

First, the data structures. WeightRandom is our weighted random generator; upperBoundary is the total sample count; entries records each sample along with the right boundary of the consecutive integers it owns.

Next, the constructor for a WeightRandom object:

func NewWeightRandom[T comparable](rndMap map[T]uint64) *WeightRandom[T] {
	var lowerBoundary uint64
	entries := make([]entry[T], 0, len(rndMap))
	for k, v := range rndMap {
		if v == 0 {
			panic("weight cannot be zero")
		}
		if lowerBoundary+v < lowerBoundary {
			panic("overflow")
		}
		lowerBoundary += v
		entries = append(entries, entry[T]{
			value: k,
			end:   lowerBoundary,
		})
	}
	// sort entries by right boundary (slices and cmp are standard library packages)
	slices.SortFunc(entries, func(a, b entry[T]) int {
		return cmp.Compare(a.end, b.end)
	})

	if len(entries) == 0 {
		panic("no valid sample")
	}

	return &WeightRandom[T]{
		entries:       entries,
		upperBoundary: lowerBoundary,
	}
}

lowerBoundary accumulates the total sample count; we chose left-closed right-open ranges to make the accounting easy. rndMap maps each sample (key) to its weight (value). After computing and saving each sample's range, we sort the entries by right boundary from smallest to largest, because the later range-to-sample lookup depends on the boundaries being in increasing order.

Finally, there is the lookup function:

func (w *WeightRandom[T]) RandomGet() T {
	x := rand.Uint64N(w.upperBoundary)
	for i := range w.entries {
		if x < w.entries[i].end {
			return w.entries[i].value
		}
	}
	panic("not possible")
}

The lookup generates a random number in [0, upperBoundary), then compares boundaries from smallest to largest; the first boundary that exceeds it identifies the sample to output. The final panic is exactly what it says: theoretically unreachable, but go doesn't know that, so we must either return a zero value or panic. Since reaching it would mean a serious bug in our program or the standard library, panicking for easier debugging is the better choice.

Depending on upperBoundary's size, we could even reuse the previous section's trick of exploiting every bit instead of generating a fresh random number on each call, regenerating only once the split-out groups are consumed; that would speed this function up considerably. For generality and simplicity, I won't write it that way here.

Finally, a usage example:

func main() {
	w := NewWeightRandom(map[string]uint64{
		"a": 15,
		"b": 30,
		"c": 45,
		"d": 60,
	})
	m := make(map[string]int, 4)
	const limit = 100_0000_0000
	for range limit {
		m[w.RandomGet()]++
	}
	for k, v := range m {
		("key: %s, count: %d, p: %g\n", k, v, float64(v)/limit)
	}
}

We generate the four letters a, b, c, d with weights 15:30:45:60, which simplifies to 1:2:3:4, so the probabilities should approach 10%, 20%, 30% and 40%. Statistical frequencies always carry some error; we only need the overall trend to match. Let's run it 10 billion times:

$ go run 

key: b, count: 2000011606, p: 0.2000011606
key: a, count: 1000058297, p: 0.1000058297
key: d, count: 3999943022, p: 0.3999943022
key: c, count: 2999987075, p: 0.2999987075

Very much as expected. As an optimization we could locate samples with a binary search, since the right boundaries are themselves sorted; with a large number of samples that would improve performance significantly. With fewer than a dozen samples, though, the linear scan above is quite sufficient.
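
For reference, a binary-search variant might look like the sketch below (it uses the standard library's sort.Search; the method name is mine):

func (w *WeightRandom[T]) RandomGetBinary() T {
	x := rand.Uint64N(w.upperBoundary)
	// the boundaries are sorted ascending, so binary-search for the
	// first entry whose right boundary exceeds x
	i := sort.Search(len(w.entries), func(i int) bool {
		return x < w.entries[i].end
	})
	return w.entries[i].value
}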

How to put it: simple, crude, but it solves the problem. Compared with the floating-point scheme there are still drawbacks:

  1. Because integers have a range limit, the total sample count is capped lower than in the floating-point scheme
  2. Again because of integer size limits, the choice of ratios is narrower than with floating point

Because it has fewer restrictions, the floating-point scheme is more common in general-purpose libraries; but business code is not a general-purpose library, and an integer scheme is often perfectly fine. Ultimately, choose the one that meets your business needs and that you and your colleagues can understand.

As for performance: the lookup process is the same in both schemes, and only a real benchmark would tell which is faster overall, so I won't speculate here. Nor will I run one; I'm lazy.

Random number seeds

A "seed" is the initial state required by a pseudo-random number generation algorithm, which produces its sequence of random values from that initial state.

So the same seed usually yields the same sequence. Each sequence is random internally, but between the two sequences there is no randomness at all: with one you can perfectly predict the other. Concretely, if you wrote a game where an enemy takes random actions but the seed isn't set properly, that enemy will perform exactly the same moves every time you meet it, which makes for an extremely boring game.
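
A quick way to see this with math/rand/v2 (my own example, assuming go 1.22+): seed two ChaCha8 generators identically and their streams match forever:

var seed [32]byte // any fixed seed; all zeros here
r1 := rand.New(rand.NewChaCha8(seed))
r2 := rand.New(rand.NewChaCha8(seed))
fmt.Println(r1.Uint64() == r2.Uint64()) // true
fmt.Println(r1.Uint64() == r2.Uint64()) // true, and so on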

Worse than boring: generating identical sequences creates countless security problems.

A seed usually needs to be set only once, ideally at the point where the program first needs a random number. But there are always unlucky people who forget, leaving the algorithm with its default low-quality, identical seed. That's why more modern runtimes, such as go 1.22 and python3, automatically set the seed for you when the program first runs.

Beyond that, we have to worry about seed quality.

A seed's value needs to differ each time it is obtained. Several common sources meet that requirement:

  1. System time. Indeed different on every read, fast to obtain, and the default choice of many developers and libraries. But system time can be modified, and with the nuisance called leap seconds in the world, the probability of accidentally reading the same time twice isn't as low as you'd think.
  2. Random devices, i.e. software-emulated random number generators. Linux has two: /dev/random and /dev/urandom. random uses operating-system and environmental noise as its data source and can be treated as "true random" when requirements aren't extreme, though its output rate is relatively low. urandom builds an initial state from external noise and then uses a pseudo-random algorithm to produce values quickly from that state: high rate, lower quality. For generating seeds, urandom is generally sufficient.
  3. Hardware true random number generators (TRNGs), similar in principle to the software emulation. Unlike most software-to-hardware transitions, a TRNG may actually be slower, but it samples environmental noise at a much finer grain and produces random numbers that are experimentally unpredictable and of high quality. Common TRNGs run from hundreds of kilobits to a few megabits per second, whereas the likes of /dev/random typically manage several to tens of megabytes per second. Between the uneven performance and the fact that not every device includes a TRNG, few use one directly; but scenarios that depend on high-quality randomness have to consider them.
  4. Address space layout randomization (ASLR). Modern operating systems load programs at randomly offset memory addresses, so variable addresses differ essentially every run. This is an almost zero-cost source of random values at runtime, and Google's abseil library is full of code using it to obtain random number seeds. The prerequisites: your system must support ASLR, and the random offsets it applies must be of fair quality; neither is under our control, and we can only trust that mainstream operating systems do these things properly. Also, highly privileged programs and users can still write code into fixed addresses; that's getting harder but isn't impossible, so this option is usually passed over when high-quality seeds are needed (another reason being that some systems can be configured to turn off randomized layout, or don't support it at all).
  5. Linux's auxiliary vector, readable via /proc/<pid>/auxv or the glibc function getauxval. This array holds a set of information about the operating system and the current process, all written by the OS when the program is loaded; Windows has something similar. Some fields are invariant, like hardware and platform information; others are fully random, such as the program's entry address and the vDSO address (random thanks to ASLR), and auxv even has a dedicated random-value field. Together these yield much stronger unpredictability than relying on ASLR alone.

The principle in every case: make the seed as hard to predict as possible, ideally completely unpredictable.

So what should you, as a developer, use? Generally speaking, system time is enough for dashed-off code and simple tools, as long as you remember that system time is modifiable and unreliable. For a library, or a game that leans heavily on randomness, /dev/urandom is the better option. Exploiting ASLR's free random values like Google does is fine when you chase extreme performance and don't demand much seed quality.
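
For instance, seeding your own generator from the random device might look like this sketch (crand is crypto/rand, which reads urandom or the platform equivalent; rand is math/rand/v2; the time fallback is my assumption, mirroring what the runtimes below do):

func newSeededRand() *rand.Rand {
	var seed [32]byte
	// crypto/rand.Read pulls from the system's random device
	if _, err := crand.Read(seed[:]); err != nil {
		// should essentially never happen; fall back to the clock just in case
		binary.LittleEndian.PutUint64(seed[:8], uint64(time.Now().UnixNano()))
	}
	return rand.New(rand.NewChaCha8(seed))
}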

If you really have a hard time choosing, let's see what others have done.

Which sources golang and python3 use for seeding

golang tries auxv first; if the system doesn't provide it, it falls back to reading a random device such as urandom, and if that fails too it uses system time:

// runtime/

// OS-specific startup can set startupRand if the OS passes
// random data to the process at startup time.
// For example Linux passes 16 bytes in the auxv vector.
var startupRand []byte

func randinit() {
	lock(&globalRand.lock)
	if globalRand.init {
		fatal("randinit twice")
	}

	seed := &globalRand.seed
	// check whether the OS wrote random auxv data at startup
	if startupRand != nil {
		for i, c := range startupRand {
			seed[i%len(seed)] ^= c
		}
		clear(startupRand)
		startupRand = nil
	} else {
		// otherwise read from a random device such as urandom
		if readRandom(seed[:]) != len(seed) {
			// readRandom should never fail, but if it does we'd rather
			// not make Go binaries completely unusable, so make up
			// some random data based on the current time.
			readRandomFailed = true
			readTimeRandom(seed[:])
		}
	}
	globalRand.state.Init(*seed)
	clear(seed[:])
	globalRand.init = true
	unlock(&globalRand.lock)
}

This seeds the global generator. go programs can also create their own generator instances (for example with math/rand/v2), but those take an explicitly passed seed, and go has no control over what gets passed in: flexibility at the cost of some safety.

Python3, on the other hand, reads urandom first, and if that fails combines system time with the current process pid to produce the seed, which is stronger than system time alone:

// https://github.com/python/cpython/blob/main/Modules/_randommodule.c#L293
static int
random_seed(RandomObject *self, PyObject *arg)
{
    int result = -1; /* guilty until proved innocent */
    PyObject *n = NULL;
    uint32_t *key = NULL;
    size_t bits, keyused;
    int res;

    // When the parameter is empty
	if (arg == NULL || arg == Py_None) {
       if (random_seed_urandom(self) < 0) {
            PyErr_Clear();

            /* Reading system entropy failed, fall back on the worst entropy:
               use the current time and process identifier. */
            if (random_seed_time_pid(self) < 0) {
                return -1;
            }
        }
        return 0;
    }

    // when the argument is non-null, the seed is derived from it (elided here)

Done:
    Py_XDECREF(n);
    PyMem_Free(key);
    return result;
}

This function is called by the Random object's __init__ method: initialize a Random object without a seed argument and the default seeding kicks in. And all the functions in the random module are actually methods of one global Random() instance, created without passing a seed, so the seed is set automatically:

# https://github.com/python/cpython/blob/main/Lib/random.py#L924
_inst = Random()
seed = _inst.seed
random = _inst.random
uniform = _inst.uniform
triangular = _inst.triangular
randint = _inst.randint
choice = _inst.choice
randrange = _inst.randrange
sample = _inst.sample
shuffle = _inst.shuffle
choices = _inst.choices
normalvariate = _inst.normalvariate
lognormvariate = _inst.lognormvariate
expovariate = _inst.expovariate
vonmisesvariate = _inst.vonmisesvariate
gammavariate = _inst.gammavariate
gauss = _inst.gauss
betavariate = _inst.betavariate
binomialvariate = _inst.binomialvariate
paretovariate = _inst.paretovariate
weibullvariate = _inst.weibullvariate
getstate = _inst.getstate
setstate = _inst.setstate
getrandbits = _inst.getrandbits
randbytes = _inst.randbytes

This way python prevents the problem of users forgetting to set seed.

Summary

That's all there is to say about random numbers for now. Apart from a sprinkling of numerical algorithms, none of it is hard to understand.

Probability and statistics have a huge impact on programming, so rather than spending time on the differential equations that have been a hot topic lately, a little time spent on probability and statistics may help more.

Finally, the standard library and various third-party libraries have thoughtfully packaged nearly all of this functionality. Read the documentation and use them with confidence; I recommend doing so, since open-source, well-tested code always beats reinventing it behind closed doors.

References

https://stackoverflow.com/questions/18394733/generating-a-random-number-between-1-7-by-rand5

https://en.wikipedia.org/wiki/Rejection_sampling

https://www.linuxquestions.org/questions/linux-kernel-70/what-does-proc-pid-auxv-mean-exactly-4175421876/