There is a lot of research going on related to generating "truly" random numbers.
There is a very simple method, invented a long time ago, that is attributed to von Neumann [1].
In its simplest form it can be thought of as generating random bits from a biased source of 0s and 1s. Given that the probability of the sequence 01 equals that of 10, one can use 01 to represent a truly random "0" bit and 10 a truly random "1" bit (the 00 and 11 combinations are simply discarded).
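As a minimal sketch of the pairing rule just described (the function name and the bias value 0.8 are illustrative, not from any particular library):

```python
import random

def von_neumann_extract(bits):
    """Debias an i.i.d. bit stream: take non-overlapping pairs,
    map 01 -> 0 and 10 -> 1, and discard 00 and 11."""
    return [a for a, b in zip(bits[::2], bits[1::2]) if a != b]

# A biased i.i.d. source: 1 with probability 0.8.
biased = [1 if random.random() < 0.8 else 0 for _ in range(100_000)]
unbiased = von_neumann_extract(biased)
print(sum(unbiased) / len(unbiased))  # close to 0.5
```

Note that roughly 2p(1-p) of the input pairs survive, so a heavily biased source yields few output bits.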
Pretty straightforward. Can anyone point out why such a method doesn't generate a truly random sequence (and thus solve the problem of generating random numbers on computers)?
Let me explain what "random" and "truly random" mean here. "Random" simply means the numbers are identically distributed and chosen independently of everything else (that is, the numbers are i.i.d.). And "truly random" simply means i.i.d. and uniform (see also Frauchiger et al. 2013).
If the source of input bits is i.i.d. but biased, then the von Neumann method will remove this bias: the output bits remain i.i.d. but are now unbiased, that is, each bit is 0 or 1 with equal probability. In general, though, if the source bits weren't i.i.d. (or, more generally, exchangeable [Peres 1992]) to begin with, the von Neumann method won't make them "truly random"; even von Neumann (1951) assumes "independence of successive tosses [of the coin]". The von Neumann method is one of a host of available randomness extractors (I discuss some of them elsewhere), and this discussion applies to those other extractors just as well.
In any case, the distinction between "pseudorandom" and "truly random" numbers is not what applications care about (and you didn't really specify what kind of application you have in mind). Instead, in general:
- For security or cryptographic purposes, use the `secrets` module or `random.SystemRandom`.
- For simulations and other statistical or scientific work, use `numpy.random.Generator`.
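A brief illustration of both recommendations (the seed value and token length are arbitrary; this assumes NumPy is installed):

```python
import secrets
import random
import numpy as np

# Security-sensitive values: secrets and random.SystemRandom both
# draw from the operating system's entropy source.
token = secrets.token_hex(16)        # 32 hex characters
sysrand = random.SystemRandom()
roll = sysrand.randint(1, 6)

# Simulations: numpy's Generator is fast and reproducible when seeded.
rng = np.random.default_rng(seed=42)
samples = rng.random(5)              # 5 floats in [0, 1)
print(token, roll, samples)
```

Note that `secrets` and `SystemRandom` are deliberately not seedable, whereas a seeded `Generator` gives reproducible simulation runs.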
REFERENCES:

[1] von Neumann, J. (1951). "Various techniques used in connection with random digits." National Bureau of Standards Applied Mathematics Series 12, 36–38.
Peres, Y. (1992). "Iterating von Neumann's procedure for extracting random bits." Annals of Statistics 20(1), 590–597.
Frauchiger, D., Renner, R., and Troyer, M. (2013). "True randomness from realistic quantum devices." arXiv:1311.4547.