The Supplement Database rates a supplement's specific claims in two ways: effectiveness and confidence. These two ratings capture 1) how well a supplement does what it promises and 2) whether there is enough evidence to reach a definite conclusion. The effectiveness rating uses a 3-point scale, with 3 being the best and 1 the worst; the confidence rating uses a 0-100 point scale, described below.
Each supplement is rated based on conclusions found in peer-reviewed journal articles. Researchers examine a variety of claims that supplements make and reach a conclusion, which generally falls into one of three categories: 3) the supplement did everything it claimed, 2) the supplement did what it claimed only in certain circumstances, or 1) the supplement did not do anything it claimed. These numbers (3, 2, 1) correspond to the effectiveness rating assigned by The Supplement Database.
Each study contributes one rating per claim per supplement. For example, The Supplement Database contains 5 studies on: "Does supplement X increase strength?" Two studies said supplement X increased strength (rating of 3). Two studies said supplement X increased strength only sometimes (rating of 2). One study said supplement X did not increase strength at all (rating of 1).
The Supplement Database then averages these ratings: (3 + 3 + 2 + 2 + 1) / 5 studies = 2.2. The effectiveness rating for supplement X's ability to increase strength is 2.2, meaning it can sometimes accomplish this claim, and you should see some positive results when using it to increase strength.
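The averaging step above can be sketched in a few lines of Python (a minimal illustration of the arithmetic, not actual database code):

```python
# Per-study effectiveness ratings for one claim, on the 3/2/1 scale:
# five hypothetical studies on "Does supplement X increase strength?"
ratings = [3, 3, 2, 2, 1]

# The effectiveness rating is the simple mean of the study ratings.
effectiveness = sum(ratings) / len(ratings)
print(effectiveness)  # 2.2
```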
Supplements have different effectiveness ratings across multiple claims. In addition to increasing strength, supplement X also claims to decrease body fat. For this claim, there are only two studies. Both studies conclude supplement X does not decrease body fat at all (rating of 1). These two ratings are averaged together. The effectiveness rating for supplement X's ability to decrease body fat is 1, meaning you are unlikely to see any positive results.
Confidence ratings are based on how many studies the database contains on a supplement's claims. This rating tells you whether the effectiveness rating has enough evidence backing it up. Is one study enough to change your behavior? It shouldn't be. Studies sometimes reach conflicting conclusions on the same topic, so we shouldn't make supplement decisions based on a single study. You should only act on claims that have been thoroughly vetted, and the confidence rating tells you whether a claim has been.
Each study contributes 20 points to a supplement claim's confidence rating. A rating at or above 80 (4 studies on a claim) means there is enough evidence to base decisions on. Four studies is generally a good start when deciding whether or not to act on evidence. The more studies included on a specific claim, the more certain you can be that the effectiveness rating is accurate.
In the above examples, supplement X had 5 studies looking into the claim of increasing strength. At 20 points per study, the confidence rating for supplement X's ability to increase strength is 100. This means the effectiveness rating of 2.2 is valid and can be trusted.
The database, however, only contained two studies on whether supplement X could decrease body fat. This translates into a confidence rating of 40, which is low. A low confidence rating means there is not enough information to ensure the effectiveness rating is accurate. You should not make any decision on using supplement X to decrease body fat until the database includes more research on this claim.
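The confidence arithmetic from the examples above can be sketched the same way (again a hypothetical illustration, not the database's actual implementation):

```python
POINTS_PER_STUDY = 20   # each study adds 20 points to a claim's confidence
THRESHOLD = 80          # 4 or more studies means enough evidence to act on

def confidence_rating(num_studies):
    """Confidence is simply 20 points per study on the claim."""
    return num_studies * POINTS_PER_STUDY

# Strength claim: 5 studies -> 100, at or above the threshold of 80.
print(confidence_rating(5))  # 100
# Body-fat claim: 2 studies -> 40, well below the threshold.
print(confidence_rating(2))  # 40
```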
The Bottom Line
A confidence rating has nothing to do with whether or not a supplement works. It only tells you whether the effectiveness rating can be trusted. Ideally, you want a high effectiveness rating paired with a high confidence rating, which means there is enough good research behind a supplement's claim.
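Putting the two ratings together, the whole scheme can be summarized in one small helper (a hypothetical sketch; the function name and return shape are my own, not the database's):

```python
def summarize_claim(ratings, points_per_study=20, threshold=80):
    """Pair an effectiveness rating with its confidence for one claim.

    ratings: per-study scores on the 3/2/1 scale described above.
    Returns (effectiveness, confidence, trusted), where trusted means
    the confidence rating meets the 80-point threshold.
    """
    effectiveness = sum(ratings) / len(ratings)
    confidence = len(ratings) * points_per_study
    return effectiveness, confidence, confidence >= threshold

# Strength claim: decent effectiveness, enough studies to trust it.
print(summarize_claim([3, 3, 2, 2, 1]))  # (2.2, 100, True)
# Body-fat claim: low effectiveness, and too few studies either way.
print(summarize_claim([1, 1]))           # (1.0, 40, False)
```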