AI reviews at the Edinburgh Fringe

If you’re putting a show on at the Edinburgh Fringe, the thing everyone looks for is a good review.  In the performing arts, the star system starts where the Michelin Guide runs out of steam.  Three stars for a show is just run of the mill.  Four stars means it’s a “must-see”, while five stars indicates that you may need to kill to get tickets.  Except it’s all got rather devalued.

Go back thirty years and there were two gold standards – the Scotsman and The Stage.  Each had a team of professional reviewers who awarded stars in their daily reviews.  A few other print publications like The List joined in, but everything changed when Broadway Baby burst onto the scene in 2004 with online reviews.  The Scotsman was outraged and railed against non-professional reviewers, but an increasing number of online review sites appeared.  Audiences were encouraged to comment on Twitter and post reviews on the Edfringe.com website.

The result was that it became a lot easier to get a four- or five-star review from somewhere – sometimes the Edfringe.com ones even appeared before the show opened.  Productions now plaster their posters and flyers with the stars in large font and the source in vanishingly small font.  What seems to matter is how many sets of stars you have.  That’s been going on for several years.  What is ruffling feathers this year is the accusation that some reviews are being written by AI.

Jane Bradley has railed against this in the Scotsman, but the evidence seems to be largely hearsay.  The modus operandi of the new AI reviewer is apparently to record the show on their phone, then ask ChatGPT to write a review for them.  Given that the average “professional” review is around 250 words, of which about 100 are the critic’s personal opinion of what was great or awful, that seems like a ridiculously long-winded way of generating a review.  Scanning in the show’s publicity blurb, adding an opinion, and then asking ChatGPT to rewrite it in the style of Joyce McMillan would be much more efficient.  But that would imply a bit more understanding of how to use AI effectively.

AI is a very obvious target at this year’s Fringe.  Some performers were happy to promote shows written by AI.  Rather more denigrated it, while others conflated it fairly indiscriminately with anything related to science or technology.  There is no doubt it’s there, but there’s little detail on whether it’s having any effect.  What’s interesting about the Scotsman’s response to the use of AI in reviews is that it’s remarkably similar to their attack on the growth of online reviewers twenty years ago, when they railed against the danger of non-professional critics.  Plus ça change…

However reviews are written, the star system will continue to be gamed.  Performers are desperate to garner four- and five-star reviews, but it’s not clear that stars have the cachet they used to.  This year we saw four shows that the Scotsman or The Stage had given four stars, but on the days we saw them, there were fewer than ten people in the audience.  All four shows merited their four-star reviews, but the stars hadn’t translated into ticket sales.

This raises the question of how to promote a show.  It feels as though things are changing.  Every show concludes with an appeal for word of mouth and social media posts as the best way to drum up an audience.  Some shows deliberately spurn flyering on ecological grounds.  Others thrust flyers out enthusiastically.  So how are audiences making a choice?  Back in 2010 there was an innovative app called EdTwinge, which crawled audience tweets to assess which shows got the most enthusiastic response.  It worked remarkably well, but fell by the wayside as its creators moved on.  It was probably closer to AI than anything at Edinburgh this year.  Since Elon Musk’s acquisition of Twitter (now in the guise of X), that approach has lost a lot of its relevance, as his ownership has alienated many of the Fringe’s audience.

The debate on AI reviews seems to centre on professional critics justifying why their reviews are better.  It deflects attention from explaining why audiences for well-reviewed shows are smaller than performers might hope.  Nobody seems to know why that is.  Some blame the Oasis effect, some blame fewer visitors (which we won’t know until the final numbers come out), but there may be other reasons.  This year I was struck by the number of conversations I overheard in which people asked “What was that show about?”  If audiences don’t understand the show, does the review help?

It’s easy to blame AI, and “experts” are already pontificating on its effect and how to detect it, but it feels like a red herring.  Something was different this year, and if the Fringe is to prosper, everyone needs to work out what that was.  Good reviews from established, professional critics no longer correlate with audience size, and that was visible across a number of different genres.  That’s what we need to be talking about.  Simply moaning about the bogeyman of the day misses the point.  If good reviews don’t generate good audience numbers, it really doesn’t matter who or what writes them.  Four or five stars count for nothing.  Maybe the Michelin brothers were right to stop at three.