I resolved to answer this ‘market’ vs. ‘quality’ question.
It would be impractical for me to measure whether there are too many breweries in
the UK for the market to sustain them all; this would require time, resources
and data sets which I have no chance of acquiring any time soon. So, I decided
to attack the ‘quality’ issue, to determine whether the breweries that closed citing
alleged unprofitability did so because they produced beer that was simply not
good enough.
On a night out, when I decide to log and rate the beer I
am drinking, my girlfriend finds it amusing. I am, there is no doubt, a bit of
a ‘beer spotter.’ I log my beers on the Untappd website, but the dominant
player in the ‘beer logging’ world is ratebeer.com. On it, users can give beers
a score on a scale of one to five, and the beers’ average score is then
calculated. As more drinkers rate the beers, these average scores are updated
accordingly. It is these scores that I have used to measure the quality of breweries’
beers, to give each company what I am calling an ‘overall beer
rating’, and then to determine whether beer quality is a factor in their survival.
The fairest way to calculate the overall beer ratings was to use
weighted averages (see end note for a description of my calculations – I won’t explain
it here, that would be dull). I have calculated these for all the breweries
that I know have closed in the last three years because they were
allegedly unprofitable, as well as those that have been taken
over by new owners. For comparison, I have also determined the overall
beer rating for a similar number of producers that have opened
in the last three years and are still operating. These were mostly chosen at random; however, I have also calculated the rating of three breweries which have national
recognition as producers of excellent beer.
The results are shown in the graph below. Breweries that
have closed citing unprofitability are in crimson, those that have been taken
over are in orange, while breweries that are still operating are in blue. I have
anonymised the breweries to avoid understandable embarrassment (and perhaps
some self-congratulatory back-slapping).
While it is important not to overstate the value of these
results – after all I have only worked out forty-one breweries’ overall beer
ratings – they do suggest a correlation between the quality
of beer a brewery produces and its survival. If we consider the breweries in
the sample that have the twenty-one lowest overall beer ratings (2.653 to 2.941),
only five of these are still open. Nine that closed citing unprofitability sit at
this end of the graph, as do seven that were taken over. It is therefore plausible
that these breweries ceased trading because their beer was not of a high enough,
or consistent enough, quality to appeal to drinkers.
Conversely, if we look at the twenty breweries with the
highest overall beer ratings (2.941 to 3.550), the situation is reversed. Only
three of the breweries that closed down citing unprofitability are in this end
of the graph, as is one brewery which was taken over. Because they are here,
rather than at the other end of the graph, it is plausible to suggest that factors
other than beer quality caused their downfall. For example, they might have expanded
their operations too quickly and been saddled with unsustainable debt repayments.
Notably, sixteen of the breweries at this end of the graph are still open,
twelve of them with ratings greater than 3.00. This strongly suggests that
breweries producing more impressive and appealing beers are more likely
to stay open.
While these findings are very tentative, I feel that this
research has started to reveal something important. It suggests that the quality of a
brewery’s beer is a major determinant of its survival. Of course, in some cases
other factors will have played a role in a brewery's closure, but, simply put, the market seems to be working. Those
companies that fail to provide drinkers with quality products are seemingly
doomed to fail, while others that excel, innovate and possibly take risks, are more
likely to be successful. In future posts I will consider these points in more detail and reassess them.
All comments and ideas are, as usual, very welcome.
End note on weighted averages
The weighted averages (breweries’ overall beer rating) were
calculated by multiplying each beer’s average rating by the number of ratings
it had received, adding all the beers’ weighted scores together, and then dividing
this sum by the total number of ratings all the beers had received. The table above
is an example. The benefit of this approach was that if the majority of
a brewery’s beer received a low rating, but they had one short-lived product that
was rated as exceptional, both these things would be reflected in the overall
score the brewery received. A short example will explain this. If a brewery
produced in a five year period two beers which had an average rating of 2.33
and 4.27 respectively, a basic calculation of the brewery’s overall beer rating
would be as follows:
(2.33 + 4.27)/2 = 3.30
But consider if the first beer was rated 150 times, and therefore
was theoretically produced for a long period, while the second beer had only received
five ratings, and thus had a short production run. The consequence of this
would be that the overall beer rating of 3.30 would not be an accurate
reflection of the quality of the majority of beer the brewery turned out in its
existence. The weighted average solves this problem, as it takes into account
the extent to which each beer was produced. The calculation is as follows:
((2.33 x 150) + (4.27 x 5))/(150+5) =
(349.50+21.35)/155=
370.85/155 = 2.39
Thus, the weighted overall beer rating for the brewery is
2.39, which is much closer to the score the majority of its beer received, but
does take into consideration the one time the company brewed a superlative beer.
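For anyone who wants to reproduce the calculation, the weighted average above can be sketched in a few lines of Python. The function name and the structure of the input data are my own choices for illustration; the numbers are the two-beer example from the end note.

```python
def overall_beer_rating(beers):
    """Weighted overall beer rating for a brewery.

    beers: list of (average_rating, num_ratings) pairs, one per beer.
    Each beer's average rating is weighted by the number of ratings it
    received; the weighted sum is then divided by the total number of
    ratings across all the brewery's beers.
    """
    total_ratings = sum(n for _, n in beers)
    return sum(r * n for r, n in beers) / total_ratings

# The end note's example: one beer rated 2.33 by 150 drinkers,
# another rated 4.27 by only 5.
print(round(overall_beer_rating([(2.33, 150), (4.27, 5)]), 2))  # 2.39
```

The heavily rated beer dominates the result, which is exactly the point of weighting: the short-lived 4.27 beer nudges the score up slightly, but cannot mask the quality of the brewery's typical output.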