"Dedicated to making Likelinesses the entity of prime interest"
The usual concept of probability (the frequentist probability) is in practice never known. More than that, it is unknowable: it is not possible to deduce the limit of a convergent sequence from any finite number of its terms.
The value of a probability is known only when it is a given (such as being told that a coin is fair) or is calculated from givens. In other contexts, we have to work with estimates. Those estimates are too often still called 'the probability'; this is misleading, because it leads to the estimates being treated as if they were the probabilities themselves.
The difficulty with this confusion is that we might then do things with those estimates which magnify any inherent errors. This would not matter if they were truly the probabilities (the errors would be zero, and many times zero is still zero), but with estimates the results can be seriously wrong. For example, we might substitute into a non-linear formula such as the Multinomial Theorem. Substituting even a good estimate into the Multinomial Theorem can give large errors: not just of 10% or 20%, but by a factor of 10 or 20 or more. You will see the theory and the consequences on this site.
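The magnification effect can be seen even in the simplest (binomial) case of the Multinomial Theorem. The following is a minimal sketch with illustrative numbers of my own choosing (400 tosses, a true value of 0.50, an estimate of 0.45); it is not taken from any particular example on this site.

```python
# Sketch: a small error in an estimated probability is magnified when
# substituted into a non-linear formula -- here the binomial case of
# the Multinomial Theorem.  All numbers are illustrative assumptions.
from math import comb

def binom_prob(n, k, p):
    """P(exactly k successes in n trials), each with probability p."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

n, k = 400, 200          # 400 tosses; ask for exactly 200 heads
p_true = 0.50            # the true value (assumed known, for comparison)
p_est  = 0.45            # a 'good' estimate, only 10% off

true_val = binom_prob(n, k, p_true)
est_val  = binom_prob(n, k, p_est)

print(true_val / est_val)   # roughly 7: the 10% input error has become
                            # a roughly 7-fold error in the output
```

The error grows with the exponents involved: the 10% discrepancy in p is raised to high powers, so larger samples make the magnification worse, not better.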
Problems are also encountered on the theoretical front, where confusion between estimates and the actual values can cause errors in the theory. Examples of this are the problem called "The Perfect Cube Factory" and the difficulties Johnson had with his "Combination Postulate".
Fortunately, it turns out that the entity commonly called 'the best estimate of a probability' can, despite its name, be defined without actually using the concept of probability. Of course, if we do this then we need another name for 'the best estimate of a probability', since there is no probability for it to be the best estimate of: on this site, we shall call it a Likeliness.
Having eliminated probabilities from our definition of likeliness, we are in the position of being able to define probability in terms of likeliness rather than the other way round. There is no need to do this, but it does have an historical and academic interest. This is a simple process: it is based on the idea that if we truly knew the probability then it would be independent of data. For example, if we truly knew that a coin was fair then it would not matter how many times it came down "H" or "T": if it's fair then it's fair, and that is the end of the matter.
What is meant by 'truly knew'? It means there is only one possibility: and that means that the underlying set (the set of distributions meeting the requirements of the problem) must be a singleton.
So we define our concept of probability as being a likeliness with a singleton underlying set.
We then look at the consequences of this definition and find that this concept of probability is indeed independent of data. Further weight is given to this approach by the observation that the Multinomial Theorem picks out singleton underlying sets as something uniquely special so far as its validity is concerned.
Having defined our basic concepts, we then investigate them further and find that we can make predictions (see the horse-racing results) and carry out analyses (such as of the Distribution of Distributions) which the more traditional approach could not adequately tackle.
There is an additional benefit. In many applications the underlying science does not lead to a closed, parametric formula to represent a generating distribution. Instead, it leads to a geometric shape: 'ranked', 'unimodal', 'U-shaped', and so on. Such concepts are not easily handled by the usual parameter-based, formulaic techniques of probability; in fact, they can rarely be handled at all. Likelinesses are set-oriented rather than formula-oriented, and this gives them the flexibility to handle such concepts directly.
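To make the set-oriented idea concrete: a shape constraint defines a set of distributions directly, by a test of membership, with no parametric formula in sight. The sketch below is my own illustration (the predicate names are invented, not taken from this site) of how 'ranked', 'unimodal', and 'U-shaped' can each be expressed as such a test on a discrete distribution.

```python
# Sketch of the set-oriented view: each shape constraint is a simple
# membership test on a distribution, given as a list of its values.
# Predicate names here are illustrative, not a fixed vocabulary.

def is_ranked(p):
    """Non-increasing: p[0] >= p[1] >= ... >= p[n-1]."""
    return all(a >= b for a, b in zip(p, p[1:]))

def is_unimodal(p):
    """Rises (weakly) to a single peak, then falls (weakly)."""
    peak = p.index(max(p))
    return is_ranked(p[peak:]) and is_ranked(p[:peak + 1][::-1])

def is_u_shaped(p):
    """Falls (weakly) to a trough, then rises (weakly)."""
    return is_unimodal([-x for x in p])

print(is_ranked([0.5, 0.3, 0.2]))      # True
print(is_unimodal([0.1, 0.6, 0.3]))    # True
print(is_u_shaped([0.4, 0.1, 0.5]))    # True
```

The set of all distributions passing such a test is exactly the kind of underlying set that a formula-oriented treatment struggles to describe.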