Mathematics for the Data Scientist, Part 2: Zipf's Law | Big Data Science

Mathematics for the Data Scientist, Part 2: Zipf's Law
This empirical pattern of natural-language word-frequency distribution is widely used in quantitative linguistics and NLP. Zipf's law states: if all the words in a large text are ordered by descending frequency of use, then the frequency of the n-th word in the list is inversely proportional to its ordinal number n (its rank). For example, the second most common word occurs about half as often as the first, the third about one third as often as the first, and so on.
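The rank-frequency relationship above is easy to check on any text. The sketch below uses a hypothetical toy corpus (a real check would need a much larger text); it ranks words by frequency and compares each observed count with the Zipfian prediction top_frequency / rank:

```python
from collections import Counter

# Hypothetical toy corpus; any large plain-text file gives a cleaner fit.
text = """the quick brown fox jumps over the lazy dog the fox
the dog barks and the fox runs over the hill the end""".lower()

counts = Counter(text.split())
ranked = counts.most_common()  # list of (word, frequency), descending

top_freq = ranked[0][1]
for rank, (word, freq) in enumerate(ranked[:5], start=1):
    # Zipf's law predicts freq ≈ top_freq / rank
    print(f"{rank:>2}  {word:<6} observed={freq}  predicted≈{top_freq / rank:.1f}")
```

On such a tiny sample the fit is rough, but the qualitative pattern (a steep drop from the first rank) is already visible.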
The pattern was first noticed by the French stenographer Jean-Baptiste Estoup in 1908. In 1913 the German physicist Felix Auerbach applied the law in practice to describe the distribution of city sizes, and in 1949 the American linguist George Zipf actively popularized it, proposing to use it to describe the distribution of wealth and social status: the richest person has twice as much money as the next richest, and so on. An explanation of Zipf's law based on the correlation properties of additive Markov chains (with a step memory function) was given in 2005. Mathematically, Zipf's law is described by a Pareto distribution (the well-known 80/20 principle).
The law's applicability beyond linguistics is explained by the American bioinformatics specialist Wentian Li, who showed that a random sequence of characters also obeys Zipf's law. Li argues that Zipf's law is a statistical phenomenon that has nothing to do with the semantics of a text: the probability of a word of length n occurring in a chain of random characters decreases with growing n in the same proportion as the word's rank in the frequency list grows. As a result, the product of a word's rank and its frequency is approximately constant.
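Li's "random typing" argument can be reproduced directly: generate a long random stream of characters over a small alphabet plus a space, split it into "words", and rank them. This is a minimal sketch, assuming a 5-letter alphabet chosen uniformly; the alphabet size and seed are arbitrary:

```python
import random
from collections import Counter

random.seed(42)

# "Monkey typing": uniform random choices over 5 letters and a space.
alphabet = "abcde "
stream = "".join(random.choices(alphabet, k=200_000))
words = stream.split()  # space acts as the word separator

counts = Counter(words)
ranked = counts.most_common()

# Per Li's argument, shorter "words" are both exponentially more probable
# and occupy the top ranks, so rank * frequency stays in the same ballpark
# while frequency itself drops by orders of magnitude.
for rank in (1, 10, 100):
    word, freq = ranked[rank - 1]
    print(f"rank={rank:>3}  word={word!r:>8}  freq={freq:>5}  rank*freq={rank * freq}")
```

Despite the text being pure noise, the single-letter "words" dominate the top ranks and the rank-frequency curve is Zipf-like, which is exactly Li's point: the pattern needs no semantics at all.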