Hospitality Included: One Small Leap for Restaurants

I listened to a fascinating episode of Freakonomics this week about a new billing model for New York restaurants, spearheaded by Danny Meyer of Union Square Hospitality Group (whose portfolio includes fancy restaurants like The Modern and Gramercy Tavern, but also Shake Shack). Called ‘Hospitality Included’, it aims to do away with tipping in favour of a scheme that pays employees more fairly and removes the ‘how much should I leave as a tip’ guessing game for patrons. In practice it means higher menu prices – which, interestingly, patrons don’t mind, and neither do servers, who typically get the lion’s share of tips, as the data below and in the episode indicate. Danny Meyer piloted it at The Modern, so the episode is largely about lessons learned there.

Key takeaways which prove, as he says, that ‘doing the right things is the most profitable thing’:

  • job applications for cooks at the restaurant were up 270%, after being down 50% over the previous 7 months (cooks had previously been paid very badly) – their pay rose by 20%
  • job applications for servers were up by 25% for the first month of the trial, 100% for the second and 200% for the most recent month they have data for
  • the average amount per bill is unchanged (so no loss of revenue)
  • customer rating for the restaurant on OpenTable is up 12%
  • December 2015 was their most successful month ever (by then they were about 6 weeks into Hospitality Included) – helped by the incredibly positive PR the scheme generated
  • managers of the other restaurants in the group all want to be next to try this out

It’s well worth listening to the whole episode – I actually went back and listened to parts of it again. The purely numbers-led folk can’t argue with a bump in revenues. It’s really interesting to see these behavioural economics experiments in action, and even better to see them succeed.

Behaviour, Experience and the new world of business

I read a bit about two fairly new C-suite roles this week: the Chief Experience Officer and the Chief Behavioural Officer. The former gets the company more heavily invested in thinking about the whole customer experience process, with a focus on design and development, and the latter does a similar role but with a focus on the psychology of the customer at the time of her experience with the brand.

The one thing I took from both pieces is that though both roles are more or less new to the industry, the best indication of their success will be when the entire leadership of a company sees these concerns as crucial to the day-to-day, not worthy of a callout. It’s the same with innovation in businesses today – I notice that companies that treat it as a natural part of what they do are more likely to follow through, versus discussing and debating endlessly.

At the end of the day, a clear focus on the two key audiences for a company – customers and employees – is what will see businesses grow. It’s important to note how closely the entire team (CMO, CIO, CXO, CTO, CFO…) needs to be aligned to execute these well. If the people in charge of internal comms, both design and deployment, aren’t in tune with what’s going on in the market, then “if they have a choice, [employees] will go work someplace where the systems are easy, useful and allow them to be productive,” as Greg Petroff, CXO at GE, says. If the team isn’t in tune with customers’ true feelings and behaviour, then those customers will not “get more of what they want” and, in turn, will not “reward those companies with their business and trust”.

It’s always, ALWAYS, a team effort. As the Re/code article says, researching insights, scoping projects, engaging clients (and employees – my addition), building technology, designing tests and measuring results is rarely done by one person.

Great video with @m_sendhil on big data & social science (via @edge)

This video, an informal presentation by Harvard professor Sendhil Mullainathan followed by a conversation with attendees (all distinguished in their own right, including Daniel Kahneman), is really interesting. In it, Professor Mullainathan talks about a new piece of work he is involved in that looks at the impact of big data on social science. He talks of the importance of starting by casting your net as wide as possible and using induction to see what comes out of the data, rather than deduction, where you go in head first with a specific goal in mind.

What do you go out and collect? The stuff that you think matters. That’s why deduction is so powerful. But once you collect all kinds of things, then you will have the ability to look at all these variables and see what matters, much like in word sense disambiguation. We’re no longer defining rules. We’re just throwing everything in.

It’s a lovely conversation, and you can see how his thought process evolves through it; his research is still a work in progress. I also really like the way he distinguishes between ‘long’ and ‘wide’ data when we refer to ‘big’ data, which more people should do:

We could break the word “big” into two parts:  Long data and wide data. What do I mean by that? Long data is the number of data points you have. So if you picture the data set as sort of like a matrix, or written on a piece of paper, length is the length of that dataset. The width is the number of features that you have.

These two kinds of “big” work in exactly the opposite direction. That is, long is really, really good. Wide, some of it’s bad, and it poses a lot of problems. Why does wide pose a lot of problems? Picture the prediction function working as a search process. The search processes find the combinations of features that work well to predict y. You could see, with just a little back-of-the-envelope calculation, the mathematics are such that as the data gets even a little bit wider, this thing is growing exponentially, I mean, just crazy exponentially. As a result, when data gets wider, and wider, and wider, the problem gets harder, and harder, and harder, and algorithms do worse, and worse, and worse. As the data gets longer and longer, algorithms do better and better.
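The back-of-the-envelope point is easy to reproduce (my sketch, not from the talk): if the prediction function searches over combinations of features, the number of candidate subsets is 2^k, which explodes as the data gets wider.

```python
# Why "wide" data is hard: a model that searches over combinations
# (subsets) of k features faces 2**k candidates — exponential in width.
# More rows ("long" data) only helps; more columns blows up the search.

def feature_subsets(k: int) -> int:
    """Number of possible feature subsets for k features."""
    return 2 ** k

for k in (10, 20, 30, 40):
    print(f"{k} features -> {feature_subsets(k):,} subsets")
```

Forty features already give over a trillion subsets – the ‘crazy exponential’ growth he describes.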

Watch the whole thing. The way the screen is split into 5 parts is also rather neat, giving multiple perspectives at the same time.

On startups and behavioural economics

Sequoia Capital has published a very useful post on how startups can price their products appropriately. In theory, they note, rational economics should guide what price you set for your product: it’s an assessment of supply versus demand. But they also mention two key factors affecting startups that aren’t typical of other companies: startups aren’t bound by a finite production schedule (once an app or site is built, for example, it can live ad infinitum), and a competitor benchmarking exercise might not be all that useful, because startups usually aim to bring a completely new offering to the marketplace and therefore don’t have much to benchmark against.

So behavioural economics comes into it – anyone familiar with Nudge or Dan Ariely’s work, amongst many others, will know about people’s propensity to buy products based on various biases; Sequoia mention the decoy effect with regards to pricing (where an unattractive third option is added mainly to make one of the other two look like better value).
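As a toy illustration of the decoy effect (entirely hypothetical numbers, not from the Sequoia post): a pricing page can include an option that is ‘dominated’ by another – same price, clearly less value – purely to make the target option feel like a bargain.

```python
# Hypothetical three-tier pricing page. The "decoy" tier is priced like
# "pro" but offers strictly less, so "pro" looks like obvious value.
options = {
    "basic": {"price": 9,  "value": 3},
    "decoy": {"price": 29, "value": 5},  # exists to be compared against
    "pro":   {"price": 29, "value": 9},
}

def is_dominated(a: dict, b: dict) -> bool:
    """True if option b costs no more than a but offers strictly more value."""
    return b["price"] <= a["price"] and b["value"] > a["value"]

print(is_dominated(options["decoy"], options["pro"]))  # True: the decoy at work
```

In Ariely’s experiments, adding a dominated option like this measurably shifts choices towards the option that dominates it.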

As much as behavioural economics comes into marketing commercial products (soap, shampoo, chocolate), it comes into marketing startups – where a startup’s product is usually itself.

Why is this important?

While startups conduct user testing to assess market fit and pricing, amongst other things, they don’t often take the time to think about what they want their product and company to *mean* to people – how people might perceive them (this isn’t applicable to ALL startups, mind you, but depending on the stage they’re at I’ve found it applies to many). The only things that are usually guaranteed are that they want to expand their user base and want to make money doing it.

From the customer’s POV, behavioural biases can influence people’s opinion of a startup in domains other than pricing, which affects sales. For example, I might avoid a startup’s product because of biases like the ambiguity effect – the tendency to shy away from options whose outcomes feel unknown. And if I’m prone to loss aversion, startup X might be better off telling me what I’d lose by sticking with Y, judged against specific criteria, if they want to acquire me as a customer (why I should use This Is My Jam over Spotify Social when I’m an existing Spotify user, for example). That’s why marketing is important, as Albert Wenger from Union Square Ventures says:

Many engineering led companies have a relatively deep distrust of sales, marketing and business development. While a healthy dose of skepticism is entirely appropriate here, even companies with extremely awesome technology tend to really grow only if they also get sales, marketing and business development right.

Have a meaningful product, and communicate it in a way that shows an understanding of your audience’s biases to get where you need to go. You’ll get a lot of the answers you need during user testing, but you need to use that knowledge to market your product, in addition to building it. It would be brilliant if all product WAS marketing in and of itself, but in my experience that’s not always the case, especially for untested startups (it works for the Nike FuelBand, but not everyone is Nike).

Buffer has got the messaging down pat, so much so that even though I heard about the company relatively recently, I’m a fan and motivated to start using it more:


My advice to startups is to get mentoring and advice from people who have experience in this area as early as possible, so that as a founder you can give your product and your company all the support they need to go to market successfully.

93.5% – I’ll take it! Massive thanks for a valuable learning experience @danariely & team! @coursera

So the Dan Ariely behavioural economics course I did? I got 93.5%, and a statement of accomplishment signed by the master himself. YES!

I did better than I expected, which bears out one of the experiments I read about during the course: that women undervalue themselves when in a group. Coincidentally, this was also borne out by a recent University of Massachusetts study I heard about on Wired UK’s podcast this morning.

I’m already getting better at this sort of thing 🙂
