Lovegrove Mathematicals

"Dedicated to making Likelinesses the entity of prime interest"

Introduction

Background

The entity commonly called 'the best estimate of a probability' can be defined without using the concept of probability. If we do this then we need another name for it, since there is no longer a probability for it to be the best estimate of: on this site, we shall call it a Likeliness.

The usual concept of probability, the frequentist probability, is in practice never known. More than that, it is unknowable: a frequentist probability is the limit of a sequence of relative frequencies, and it is not possible to deduce the limit of a convergent sequence from any finite number of its terms.
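
As a minimal illustration (our own, with the sequences chosen purely for brevity), two sequences can agree on their first N terms, for any N, and still converge to different limits:

    % No finite prefix of a convergent sequence determines its limit.
    a_n = \tfrac{1}{2} \ \text{for all } n,
    \qquad
    b_n = \begin{cases} \tfrac{1}{2}, & n \le N \\ 1, & n > N \end{cases}
    \qquad
    \lim_{n\to\infty} a_n = \tfrac{1}{2},
    \quad
    \lim_{n\to\infty} b_n = 1.

No finite record of relative frequencies, however long, can therefore distinguish a limiting value of 1/2 from a limiting value of 1.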

The value of a probability is known only when it is a given (such as being told that a coin is fair) or is calculated from givens. In other contexts, we have to work with estimates. Those estimates are too often still called 'the probability'; this is misleading because it leads to the estimates being treated as if they were the probabilities.

The difficulty with this confusion is that we might then do things with those estimates which magnify any inherent errors, giving results which are seriously wrong. (If they truly were the probabilities this would not matter, since the errors would be zero, and many times zero is still zero.) For example, we might substitute into a non-linear formula, such as the Multinomial Theorem. Substituting even a good estimate into a non-linear formula can give large errors: not just 10% or 20%, but a factor of 10 or 20 or more. You will see the theory and the consequences on this site. When working with likelinesses, we do not use the Multinomial Theorem, or anything similar, in this way.
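
A minimal sketch of this magnification, in Python. The numbers (a fair coin, an estimate of 0.6, twenty tosses) are our own illustrative assumptions, not figures from this site's analyses:

    # Substituting an estimate into a non-linear formula magnifies its error.
    from math import comb

    def binomial_term(p, n, k):
        # One term of the binomial expansion: C(n,k) * p^k * (1-p)^(n-k)
        return comb(n, k) * p**k * (1 - p)**(n - k)

    p_true, p_est = 0.5, 0.6                      # a 20% error in the input
    true_value = binomial_term(p_true, 20, 20)    # P(20 heads in 20 tosses) ~ 9.5e-7
    est_value  = binomial_term(p_est, 20, 20)     # ~ 3.7e-5
    print(est_value / true_value)                 # ~ 38

A 20% error in the input has become roughly a thirty-eight-fold error in the output. A linear formula cannot do this; a non-linear one can.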

Problems are also encountered on the theoretical front, where confusion between estimates and the actual values can cause errors in the theory. Examples of this are the problem called "The Perfect Cube Factory" and the difficulties Johnson had with his "Combination Postulate".

Having eliminated probabilities from our definition of likeliness, we are in a position to define probability in terms of likeliness rather than the other way round. There is no need to do this, but it does have an historical and academic interest. The process is simple: it rests on the observation that if we truly knew the probability then it would be independent of data. For example, if we truly knew that a coin was fair then it would not matter how many times it came down "H" or "T": if it's fair then it's fair, and that's the end of the matter.

What is meant by 'truly knew'? It means there is only one possibility, and that in turn means that the underlying set (the set of possible distributions) is a singleton.

So we define our concept of probability as being a likeliness with a singleton underlying set.

We then look at the consequences of this definition and find that this concept of probability is indeed independent of data. Further weight is given to this approach by the observation that the Multinomial Theorem picks out singleton underlying sets as something uniquely special.
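
The following toy construction (our own sketch, not this site's formal definition) shows the mechanism: if the likeliness is computed as a data-weighted average over the underlying set of candidate distributions, a singleton set leaves nothing for the data to re-weight:

    def likeliness(candidates, counts):
        # candidates: the underlying set, a list of distributions over the outcomes
        # counts: how often each outcome has been observed
        weights = []
        for dist in candidates:
            w = 1.0
            for p, k in zip(dist, counts):
                w *= p ** k                # weight each candidate by how well
            weights.append(w)              # it explains the observed data
        total = sum(weights)
        return [sum(w * dist[i] for w, dist in zip(weights, candidates)) / total
                for i in range(len(counts))]

    fair = [0.5, 0.5]
    print(likeliness([fair], [100, 2]))             # singleton set: [0.5, 0.5]
    print(likeliness([fair, [0.9, 0.1]], [10, 0]))  # data pulls towards [0.9, 0.1]

With the singleton set the answer is [0.5, 0.5] no matter what counts are supplied: the data re-weights the candidates, and one candidate re-weighted is still that one candidate.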

Having defined our basic concepts, we then investigate them further and find that we can make predictions (see the horse-racing results) and carry out analyses (such as of the Distribution of Distributions) which the more traditional approach could not adequately tackle.

There is an additional benefit. In many applications the underlying science does not lead to a closed, parametric formula to represent a generating distribution. Instead, it leads to a geometric shape: 'ranked', 'unimodal', 'U-shaped', etc. These concepts are not easily handled by the usual probabilistic parameter-based techniques; in fact, they can rarely be handled at all. Likelinesses are set-oriented rather than formula-oriented; this gives them the flexibility to handle such concepts directly.
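
As an illustration of the set-oriented view (again our own sketch; the predicates below are assumptions about how such shapes might be expressed, not definitions from this site), a geometric shape is simply a membership test on distributions:

    def is_ranked(dist):
        # 'ranked': the probabilities are in non-increasing order
        return all(a >= b for a, b in zip(dist, dist[1:]))

    def is_unimodal(dist):
        # 'unimodal': the probabilities rise to a single peak, then fall
        peak = dist.index(max(dist))
        return is_ranked(dist[peak:]) and is_ranked(dist[:peak + 1][::-1])

    print(is_ranked([0.5, 0.3, 0.2]))    # True
    print(is_unimodal([0.2, 0.5, 0.3]))  # True
    print(is_ranked([0.2, 0.5, 0.3]))    # False

No parametric family has to be chosen first; the set of all distributions passing the test is itself the object of interest.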

Why the new word "Likeliness"?

Pros of likelinesses

Cons of likelinesses