I agree with David -- using a carbonated beverage seems odd, although initially the experiment appears to be well founded and executed. However, a few things nagged at the corners of my mind.
So I read through the document again and identified a few problems with the test. The main premise is having tasters pick out which mead is different -- but what does that tell us? IMO very little, since there is a 50% chance of guessing correctly blind. That 2/3 of tasters chose the different mead is not as statistically significant as it appears. The fact that 1/3 couldn't tell the difference indicates a high similarity, which is not surprising. Also, the author notes that he failed to correctly identify the odd wine on his first 2 tests -- it took practice before he could pick them out.
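As a rough sanity check on that point, here's a one-sided binomial calculation. The figures are my assumptions pieced together from the numbers above (30 tasters, of whom 2/3 -- i.e. 20 -- answered correctly, with 50% odds of guessing right blind), not the author's exact data:

```python
from math import comb

# Assumed figures, for illustration only:
# 30 tasters, 20 correct, 50% chance of guessing correctly.
n, k, p = 30, 20, 0.5

# One-sided binomial test: probability of k or more correct
# answers arising from pure guessing.
p_value = sum(comb(n, i) * p**i * (1 - p)**(n - i)
              for i in range(k, n + 1))

print(f"P(>= {k} correct of {n} by chance) = {p_value:.4f}")
# -> P(>= 20 correct of 30 by chance) = 0.0494
```

That p-value sits right at the conventional 0.05 cutoff -- borderline rather than decisive, which is consistent with the point that a 2/3 success rate is weaker evidence than it first appears.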
His conclusions harp on things that are not statistically supported. 2 people out of 30 noted bitterness and 1 noted astringency in the fined mead -- that's 6.7% and 3.3% of the participants, respectively. If either were a real issue, I'd expect more tasters to report it.
Having the participants know that there is a Baseline and a Treated mead may bias the experiment -- they go in with preconceived notions.
If I were conducting the experiment, I'd provide 2 bottles, label one A and the other B, and ask the participants to score the meads using a known scoring method and to describe the points of each wine -- clarity, aroma components, taste components, etc. -- with no preconceived notions and no explanation of why the meads are different.