By supporting the ability of anyone to build on top of Reddit’s platform, Reddit created an invaluable user research arm that also provides a long-term competitive advantage by keeping potential competitors and their customers contributing to Reddit. This is an incredibly difficult thing to do, and they seem suddenly blind to why it was worth it.
PETERS: I want to stop you for a second there. So you’re saying that Apollo, RIF, Sync, they don’t add value to Reddit?
HUFFMAN: Not as much as they take. No way.
(and I’m going to ignore for the moment questions on how they’ve handled this, monetization, and so on, focusing only on this core value they’ve created and are destroying)
A vast community of people all working on new designs, development innovations, and approaches, responding immediately to user feedback to try new things – compare this to what you have to do internally.
Every company I’ve been at has a limited user research budget to discover their customers and their needs, and just as limited room to get feedback on possible solutions by building prototypes or even showing paper drawings. To entirely focus on new ideas? You might be lucky to get a Hack Day once a quarter.
If you have a thriving third party development community, you have an almost unlimited budget for all of these things, happening immediately, and on a hundred, a thousand different ideas at any one time, and those ideas are beyond what you might be able to brainstorm.
It’s a dream, and once you’ve done the hard work of getting the ecosystem healthy, it does it on its own. Anything you want to think about you’ll find someone has already broken the trail for you to follow, and sometimes they’ve built a whole highway.
You can think small, like “how can we make commenting easier?” There will be a half-dozen different interpretations of what comment threading should look like, and you have the data to see if those changes help people comment more, and if that in turn makes them more engaged in conversation.
And it goes far beyond that, to entirely new visions of how your product might work for entirely new customers.
If you’re sitting around the virtual break room and someone says “what if we leaned into the photo sharing aspect, and made Reddit a totally visual, photo-first experience?” in even the best company you’re going to need to make a case to spend the time on it, then build it, figure out how to get it cleared with the gatekeepers of experimentation…
Or if you have a 3rd party ecosystem as strong as Reddit’s, you can type “multireddit photo browser” or something into a search engine and tada, there you go, a whole set of them, fully functional, taking different approaches, different customer groups. I just did that search and there’s a dozen interesting takes on this.
Every different take on the UX, and every successful third-party application, is a set of customer insights any reasonable company would pay millions for. Having a complete set of APIs publicly available lets other people show you things you might not have dreamed possible (this is also a hidden reason why holding back features or content from your APIs is more harmful than it initially seems).
Successful third party applications give you insight into:
A customer group
What they’re trying to do
By comparison, how you’re failing to give it to them
A floor on what they’re willing to pay to solve that problem
Even when these applications don’t discover something broadly useful – say someone builds a tool that’s perfect for 0.1% of the user base, but it requires so much client-side code that it’s not worth bringing into the main application – it’s still a huge win, because those users are still on the platform, participating in the core activities that make the system run, building the network effects (and, because you’re a business, making money in total).
And if those developers of these niche apps ever hit gold and start to grow explosively, you’ll see it, and be able to respond, far earlier than you would if they weren’t on your platform.
That’s great!
The biggest barrier for any challenger app isn’t the idea, or even the design and execution, it’s attracting enough users to be viable, and surviving the scale problems if it does start to grow. By supporting a strong third party application ecosystem, you’re ensuring that they never solve those problems – their user growth is your user growth. They don’t have to solve the scaling problem because you already did. It will always make short-term sense to stay with you.
Instead of building competitors, you’re building collaborators, who will be pulling you to make your own APIs ever-better, who are working with you and contributing to the virtuous cycle at the heart of a successful user-based product like Reddit.
I know, from the outside we just don’t get it. Reddit’s under huge pressure to IPO, and the easy MBA-approved path to a successful IPO is ad revenue, which means getting all those users on the first-party apps, seeing the ads, harvesting their data, all that gross stuff. And we can imagine that the people pushing this path to riches look at all of these third party apps and say “there’s a million people on Apollo, if they were on our app, we’d make $5m more in ad revenue next year.”
This zero-sum short-sighted thinking may not be the doom of Reddit – they may well shut down all the third-party apps and survive the current rebellion of moderators and users (and the long-term effects of their response to it).
It was and could have been such a beautiful partnership, where Reddit thrived learning, cooperating with, and improving itself along with its outside partners. As this developer community now looks to rebuild around free and decentralized platforms like Mastodon, it’s easy to see how Reddit’s lost ecosystem might eventually return to topple them.
Or, “The reason no one strictly obeys your shopping filters (the reason is money)”
Why do sites sometimes disobey filters? Often only a little bit, but noticeably, enough that it feels like an obstinate toddler testing your boundaries?
“You said you wanted a phone that was in stock and blue, huh? Got so many of those!”
“I’ll lead off by showing you some white phones that are really cheap… and hey if you want to narrow it down further, try narrowing it down –”
“Then I’ll show you phones that are blue. Mostly. More than this result set at least.”
I have cracked from frustration and yelled “I told you morning departures!” while searching for flights at a travel site that employed me to work on those shopping paths.
So why? Why does everyone do this when it annoys us, the shopper?
Because our brains don’t work right. We’re not rational beings, and that ends up forcing everyone to cater to irrational cognitive biases in order to compete. I’ll focus here on availability and price, and in travel, because that’s where I have the most experience, but you’ll see this play out everywhere.
The worst thing, from a website’s view, is for you to think they don’t have what you want, or that they do but it’s too expensive, and this drives almost all the usability compromises that cause you to grind your teeth. And from the perspective of the people who run the website, they know — and they have to keep doing it.
Let’s start with availability. Few sites brag about the raw number of items they stock any more, but the moment you start shopping, they want you to know they have everything you could possibly be looking for. They want you to not bother shopping elsewhere.
Even when a site wants to present a focused selection – they might not have a million things – they want you to think they have all of that specific niche.
Tablet Hotels focuses on expert-selected, boutique hotels. And here’s them walking you through their selection:
Do you believe there are 161 hip golf hotels? I didn’t. 161 hip golf hotels seems like it’s all the hip golf hotels that might be curated by hotel experts at the MICHELIN Guide(tm).
The desire to seem like they have all the available things makes sites compromise to make the store shelves seem full:
You search for dates and you get places that have partial availability
You search on VRBO for a place and get 243 results, all “unavailable”
You search for a location and get 3 in the city and then results from increasingly far away until it gets to a couple hundred results
As long as they can keep you from thinking “ugh, they don’t have anything” they’re winning — because the next time you’re shopping, you will shop where you think there’s the most selection.
They must also appear the cheapest. Our brains are terrible about this (see: the anchoring effect), and it creates a huge incentive to do whatever you have to in order to show the cheapest price, even if it is irrelevant.
This sounds crazy, but I’m here to tell you having spent a wild amount of time and money doing user studies in my shopping site career, if someone’s shopping for non-stop flights between Los Angeles and Boston, and
Site A leads with a $100 14-hour flight that stops in Newark, Philadelphia, then La Guardia to give you the highest possible chance at further delays, followed by ten non-stop results for $200
Site B shows the same ten non-stop results for the same $200
Shoppers will rate Site A as being less expensive.
I have sat in on sessions where I wanted to scream “but you wrote down the same prices for the flight you ended up picking!” I have asked people why they thought that, and they’ll say “they had the lower prices” even though that lower price was junk. They will buy from that site, and return to shop there first next time.
It’s incredibly frustrating, and it happens that session, and the next — it’s not 50% of people in sessions, it’s 75%, 90%. We all think we’re savvy customers, but our brains… our brains want to take those shortcuts so badly.
This drives even worse behavior, like “basic economy.” If an airline can get a price displayed that makes it look like the cheapest — even if, after adding seat selection, a checked bag, and free access to the lavatory, the person will pay far more than for a normal ticket on a different airline — they’re going to be perceived as the better value, and the less expensive airline. They also have a better chance of making the sale, because fewer people will go to the trouble of pricing out all the add-ons and then comparing the two.
(And even then, and I swear this is true, once a shopper’s brain has “Airline A is cheaper” there is a very good chance even if they price out the whole thing, taking notes on a pad of paper next to their computer, when they do the math that shows Airline B is cheaper for what they need, they will get all consternated, scrunch their face, and say “well that can’t be right”, at which point there’s a crash in the distance as a product manager throws a chair in frustration.)
All of this combines to put anyone working on the user experience of a site in an uncomfortable situation:
Do we show a junk result up top that shows that we could get the lowest price possible, even though it’s not at all what the customer asked for, or
Do we lose the customer’s sale to the competitor who does show that result, and also risk them not shopping with us in the future?
The noble, user-advocate choice means the business fails over the long-term, and so eventually, the business puts junk in there.
So what do we, as people who care about users and want to minimize this, do?
We can start by trying. It’s easy to sigh, give in, make the result set “get results for the filters, then throw the cheapest option in at #1 whether or not it ranks or should appear at all” and then move on to something that’s seemingly more interesting. But this seemingly intractable conflict is where we should be dissatisfied, and where we have a chance to be creative.
We can approach it with empathy: how can we be as open and helpful as possible when we’re forced to compete in this way? Instead of presenting a flight result in the same way as the others, we can say “$200 if you’re willing to compromise on stops, see more options… $300 without your airline restrictions…”
Let customers know there’s another option, and don’t pass it off as part of the result set they asked for, call it out as a different approach.
Or take the common “we have 200 hotels that aren’t available” — don’t show me 200 listings of places I can’t stay; that doesn’t help anyone. If you have to tell me there are at most 200, tell me that 50 of your normal 200 have availability if I move my dates, or that here are 75 more a ways off.
Or think about this in terms of a problem you’re having — even if you write a sigh-and-an-eye-roll of a user story like “as a business, I want to build trust with users, so I can survive” that’s a starting point. What’s trust? What builds and undermines trust with your customers? Can you show your math? Can you explain what you’re trying to do to them?
It’s unrealistic to expect that you can start a conversation with a random shopper about how anchoring works and how to combat it, but what would you want to say? Are there tools you would arm them with so that they don’t fall prey to CheaperCoolerStuffwithFeesFeesFees?
Because if nothing else, knowing that this is all true, we can at least apply this to ourselves. The more time I spent in user studies watching smart people lose their way and come to entirely reasonable but incorrect conclusions because they’d been misled by having their brain trip up, the more I was able to not only ask questions like “which of these sites has the best prices for the thing I want?” but also questions like “which of these sites helps me find the thing I need?”
Concede what you must, but by seeking to help customers get what they want – instead of annoying them, seeming untrustworthy, and feeling like you’re only doing it because you’re forced to – you should be able to compete, help them succeed, and build a better and more durable relationship.
Every infuriating thing on the web was once a successful experiment. Some smart person saw
Normal site: 1% sign up for our newsletter
Throw a huge modal offering 10% off first order: +100% sign ups for our newsletter
…and they congratulated themselves on a job well done before shipping it.
As an experiment, I went through a list of holiday weekend sales, and opened all the sites. They all — all, 100% — interrupted my attempt to give them some money.
As an industry, we are blessed with the ability to do fast, lightweight AB testing, and we are cursing ourselves by misusing that to juice metrics in the short term.
I was there for an important, very early version of this, and it has haunted me: urgency messages.
I worked at Expedia during good and bad times, and during some of the worst days, when Booking.com was an up and comer and we just could not seem to get our act together to compete. We began to realize what it must have felt like to be at an established travel player when Expedia was ascendant and they were unable to react fast enough. We were scared, and Booking.com tested something like this:
Next to some hotels, a message that supply was limited.
Why? It could be either to inform customers to make better decisions. Orrrrrr it could instill a sense of fear and urgency to buy now, rather than shop around and possibly buy from somewhere else. If that’s the last room, what are the chances it’ll be there if I go shop elsewhere?
There’s a ton of consumer behavioral research on how scarcity increases chances of getting someone to buy, so it’s mostly the second one. If a study came out that said deafening high-pitched noises increased conversion rates, we would all be bleeding from our ears by end of business tomorrow, right?
So we stopped work on other, interesting things to get a version of this up. Then Booking took it down, our executives figured it had failed the A/B test and thus wasn’t worth pursuing, and we returned to work. Naturally Booking then rolled it out to everyone all the time, and we took up a crash effort to get it live.
(Expedia was great to me, by the way. This was just a grim time there.)
You know what happened because you see it everywhere: urgency messaging worked to get customers to buy, and buy then. Expedia today, along with almost every e-commerce site that can, still does this —
It wasn’t just urgency messages, either. We ran other experiments and if they made money and didn’t affect conversion numbers (or if the balance was in favor of making money), out they rolled. It just felt bad to watch things like junky ads show up in search results, and look at the slate of work and see more of the same coming.
I and others argued, to the more practical side, that each of those things might increase conversion and revenue immediately and in isolation but in total they made shopping on our site unpleasant. In the same way you don’t want to walk onto a used car lot where you know you’ll somehow drive off with a cracked-odometer Chevrolet Cavalier that coughs its entire drivetrain up the first time you come to a stop, no one wants to go back to a site that twists their arm and makes them feel bad.
Right? Who recommends the cable company based on how quick it was to cancel?
And yet, if you show your executives the results
Control group: 3% purchased
Pop-up modals with pictures of spiders test group: 5% purchased
95% confidence
How many of them pause to ask more questions? (And if they have a question, it’s “is this live yet? Why isn’t this live yet?”)
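Those confidence readouts usually come from something like a two-proportion z-test. A quick sketch (the sample sizes and conversion counts here are made up for illustration) shows how mechanically that “95% confidence” is produced — and that nothing in the calculation says a word about long-term effects:

```python
from math import erf, sqrt

def two_proportion_z(conversions_a, n_a, conversions_b, n_b):
    """Two-sided two-proportion z-test on conversion rates.

    Returns (z statistic, approximate p-value via the normal CDF)."""
    p_a, p_b = conversions_a / n_a, conversions_b / n_b
    # Pooled rate under the null hypothesis that both groups convert equally
    p_pool = (conversions_a + conversions_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical counts matching the example above:
# 3% of a 10,000-person control vs 5% of a 10,000-person spider-modal group
z, p = two_proportion_z(300, 10_000, 500, 10_000)
print(f"z = {z:.2f}, significant at 95%: {p < 0.05}")
```

The test answers exactly one narrow question — did the treatment group buy more during the experiment window — which is why it can’t warn anyone about what happens to those customers afterward.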
And the justifications for each of the compromises are myriad, from the apathetic to the outright cynical: they have to shop somewhere; everyone’s doing it, so we have to do it; people shop for travel so infrequently they’ll forget; no one’s complaining.
There are two big problems with this: 1) if you’re not looking at the long-term, you may be doing serious long-term damage without knowing it, and you’ll spiral out of control; 2) you’ll open the door to disruptive competition that, as a practical matter, you almost certainly will be unable to respond to.
Let’s walk through those email signups as an example case.
What this tells me as a customer is that, at the very least, they want me to sign up for their email more than they want me to have an uninterrupted experience. It’s like having a polite salesperson at the store ask if you need help, except it’s every couple of seconds of browsing, and the more seriously you look, the more of your information they want.
They’re willing to risk me not buying whatever it was I wanted — or at least they are so hungry to grow their list that they’ll pay me to join, which in turn should make anyone suspect they’re going to spam the sweet bejeezus out of that list in order to make back whatever discount they’re giving out.
As a product manager, it means that company has an equation somewhere that looks like
(Average cart purchase) * (discount percentage) + (cost of increased abandon rate) < ($ lifetime value of a mailing list customer)
…hopefully.
It may also be that the Marketing team’s OKRs include “increase purchases from mailing list subscribers by 30% year over year.”
So you’re drawing some balance between the cost of getting these emails — and if you’re putting one or two of these shopping-interrupting messages on each page, it’s going to be a substantial cost — and the value of the list. Now you have to get value out of those emails you mined.
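As a back-of-envelope sketch of that balance (every number and the per-day framing here is invented for illustration, not real data):

```python
def modal_pays_for_itself(avg_cart, discount_rate, signups_per_day,
                          extra_abandons_per_day, margin_per_abandon,
                          subscriber_ltv):
    """Rough daily check of the signup-modal tradeoff.

    Cost side: discounts handed out, plus margin lost to shoppers the
    interruption drove away. Value side: lifetime value of new subscribers.
    All inputs are hypothetical planning numbers."""
    daily_cost = (avg_cart * discount_rate * signups_per_day
                  + extra_abandons_per_day * margin_per_abandon)
    daily_value = signups_per_day * subscriber_ltv
    return daily_value > daily_cost

# Looks healthy at launch...
print(modal_pays_for_itself(80, 0.10, 100, 20, 15, subscriber_ltv=12))
# ...and goes underwater once list fatigue drags subscriber value down
print(modal_pays_for_itself(80, 0.10, 100, 20, 15, subscriber_ltv=8))
```

The fragile term is `subscriber_ltv`: it’s the one input the company doesn’t control, and the one that decays as the list gets spammed.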
You may think your communications team is so amazing, your message so good, that you’re going to be able to build an engaged customer base that eagerly opens every email you send, gets hyped for every sale, and forwards your hilarious memes to all their friends.
Maybe! Love the confidence. But everyone else also thinks that, soooooo… good luck?
As a customer, I quickly get signed up for way too many email lists, so my eyes glaze over. I’m not opening any of them. Maybe I mark them as spam, because some people make it real hard to unsubscribe and it’s not worth checking whether you made opt-out easy…
Now your mailing list is getting filed directly to spam by automated filters, so by percentage fewer and fewer people are purchasing based on emails. Once your regular customers have all signed up for email, subscription growth is slowing even with that incentive. And if you’re sharp, you’ve noticed that the math on
(Average cart purchase) * (discount percentage) + (cost of increased abandon rate) < ($ lifetime value of a mailing list customer)
is rapidly deteriorating, and now you’re really in trouble.
What do you do?
Drive new customers to the site with paid marketing! It’s expensive even if you manage to target only good prospective customers. These new customers want that coupon, so you juice subscriptions and sales. And hey, that marketing spend doesn’t affect the equation… for a while.
Send more emails to the people who are seeing your emails! They’re overwhelmed with emails so you need to be up in their face every day! You see increased overall purchase numbers, and way more unsubscribes/marked as spam, and people are turned off by your brand. Which also doesn’t affect that equation… for a while.
Increase the discount offered!
Well everyone, it’s been a good run here, I’ve loved working with you all, but this other company’s approached me with this opportunity I just can’t pass up…
This is true of so many of these: if you think through the possible longer-term consequences of the thing you’re testing, you’ll see that your short-term gains often create loops that quickly undo even the short-term gain and leave you in a worse position than when you started.
But no one tests for that. The kind of immediate, hey-why-not, slather-Optimizely-on-the-site-and-see-what-happens testing will inevitably reveal that some of the worst ideas juice those metrics.
How many executive groups will, when shown an AB test for something like “ask users if we can turn on notifications” showing positive results that will juice revenue short-term, ask “can we test how this plays out long-term?”
As product managers, as designers, as humans who care, it is our responsibility to never, ever present something like that. We need to be careful and think through the long-term implications of changes as part of the initial experiment design and include them in planning the tests.
If we present results of early testing, we need to clearly elucidate both what we do and don’t know:
“Our AB test on offering free toffee to shoppers showed a 2% increase in purchase rate, so next up we’re going to test if it’s a one-time effect or if it works on repeat shoppers, whether our customers might prefer Laffy Taffy, and also what the rate of filling loss is, because we might be subject to legal risk as well as take a huge PR hit…”
Show how making the decision based on preliminary data carries huge risks. Executives hate huge risks almost as much as they like renovating their decks or being shown experiment results suggesting there’s a quick path to juicing purchase rates. At the very least, if they insist on shipping now, you can get them to agree to continue AB testing from there, and set parameters on what you’d need to see to continue, or pull, the thing you’re rolling out.
It’s not just the short-term versus the long-term consequences of that one thing, though. It’s the whole thing, all of them, together. When you make the experience of your customers unpleasant or even just more burdensome, you open the door for competition you will not be able to respond to.
I’ll return to travel. You make the experience of shopping at any of the major sites unpleasant, and someone will come along with a niche, easy-to-use, friendly site, probably with some cute mascot, and people will flock to it.
Take Hotel Tonight — started off small, slick, very focused, mobile only, and they did one thing, and you could do it faster and with less hassle than any of the big sites.
You’re paying for customer acquisition, they’re growing like crazy as everyone spreads their word for free. It’s so easy and so much more pleasant than your site! They raise money and get better, offer more things, you wonder where your lunch went…
If you’re a billion-dollar company, unwinding your garbage UX is going to be next to impossible. The company has growth targets, and that means every group has growth targets, and now you’re going to argue they should give up something known to increase purchase rates? Because some tiny company of idiots raised $100m on a total customer base that is within the daily variance of yours?
I’ve made that argument. You do not win. If you are lucky, the people in that room will sigh and give you sympathetic looks.
They’re trying to make a 30% year-over-year revenue growth target. They’re not turning off features that increase conversion. Plus they’ll be somewhere else in the 3-5 years it takes for it to be truly a threat, and that’s a whole other discussion. And if they are around when they have to buy this contender out, that’s M&A over in the other building, whole other budget, and we’ll still be trying to increase revenue 10% YoY after that deal closes.
There are things we can try though. In the same way good companies measure their success against objectives while also monitoring health metrics (if you increase revenue by 10% and costs by 500%, you know you’re going the wrong way), we should as product managers propose that any test have at least two measurable and opposed metrics we’re looking at.
To return to the example of juicing sales by increasing pressure on customers — we can monitor conversion and how often customers return.
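One lightweight way to encode that pairing is a ship decision that requires the primary metric to win without the opposed guardrail metric degrading. A sketch, with invented metric names and thresholds:

```python
def ship_decision(conversion_lift, return_rate_change,
                  min_lift=0.01, guardrail_floor=-0.005):
    """Gate a rollout on two opposed metrics.

    conversion_lift: absolute change in purchase rate vs. control.
    return_rate_change: absolute change in how often customers come back.
    Thresholds are illustrative placeholders, not recommendations."""
    if conversion_lift < min_lift:
        return "no ship: primary metric didn't move enough"
    if return_rate_change < guardrail_floor:
        return "no ship: wins now, but customers stop coming back"
    return "ship, and keep monitoring the guardrail"

print(ship_decision(0.02, +0.001))  # a healthy win on both fronts
print(ship_decision(0.02, -0.030))  # the likely profile of urgency messaging
```

The point isn’t the specific thresholds — it’s that the guardrail is declared before the experiment runs, so a short-term conversion win can’t ship on its own.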
This does require us to start taking a longer view, like we’re testing a new drug, as well — are there long-term side-effects? Are there negative things happening because we’re layering 100 short-term slam-dunk wins on top of each other?
I’m less sure, then, of how to deal with this.
I’d propose maintaining a control experiment of the cleanest, fastest, most-friendly UX, to use as a baseline for how far the experiment-laden ones drift, and monitor whether the clean version starts to win on long-term customer value, and NPS, as a start.
From there, we have other options, but all start from being passionate and persistent advocates for the customer as actual people who actually shop, and try to design our experiments to measure for their goals as well as our own.
We can’t undo all of this ourselves, but we can make it better in each of our corners by having empathy for the customer and looking out for our businesses as a whole. And over the long term, we start turning AB testing back into a force for long-term good.
So far: Match.com was not fun, then EliteSingles looked at Match.com’s heterosexual bias, said “hold my privilege,” and set out to make the experience even more coercive, white, and hetero-normative. I did not have a good time. Then I took a couple-month break, because I got an insane flu and then met someone delightful I dated for a couple months, and I didn’t want to revisit this.
Still, we
Next up I went to Chemistry.com. Chemistry, like OkCupid used to, claims to do matching based on a huge number of questions and science. It’s got Dr. Helen Fisher, who I’ve heard on podcasts and seems great!
Chemistry claims their test is “fun, engaging, and provides an in-depth look at who you are and what you want in a relationship.”
I’ll spoil it for you: it is none of those things, and Chemistry offers some clear signs that you shouldn’t trust them.
Anyway, let’s get started? Sure match and EliteSingles were white and heteronormative, but a science-based site like this is going to have a more diverse and —
DAMMIT.
(And I am again using VPNs to test these things from cities with wildly different demographics, that’s not just them guessing I’m straight and in Portland)
I’m sure Chemistry will have a more nuanced set of who can look for what, right?
Nope. You’re straight or you’re gay.
😐
So let’s get into the meat of this. Let’s kick off this personality test.
😑
I kinda gave up immediately. Was the next question going to ask me to feel the lumps on my head and pick the diagram closest to it? What could this possibly indicate about one’s personality?
That critical question answered, you’re introduced to the bulk of the test. It’s 45 minutes of questions, often in succession asking for almost the same thing:
and
Occasionally with a curve ball like this:
or
These moments were welcome breaks from the world of bubbles. Eventually you’re granted questions with different numbers of answers:
When you’re through that ordeal, you get to describe yourself.
Again, I’m really hoping for some better options than we’ve seen in our last two adventures.
Eye color… hair… build…
Hmmm.
…also an interesting set of choices…
Again, I hate this question, hate the “marriage is the most important thing” framing where you’re either not in one, on your way out of one, or you were involuntarily taken out of one. In a loving long-term partnership? Nope! Doesn’t matter… ughhhh.
It takes the “forced choice” approach to getting you to pick some interests. You have to have three, and only three count.
Now to upload your photo. You have two choices. Facebook, or upload.
Wait, what’s that tiny grey text there? “Skip this step.”
Look, it’s voluntary to sign up for a site like this. If it’s that important to their success, and to the success of everyone else, that there be a photo there, make it mandatory. Maybe don’t spring it on them this late in the process — which is another thing, Chemistry does not tell you it’s going to take so long to sign up.
Then you get the sell on subscribing —
Okay, well, thanks for telling me. I’m curious what those features are — it’s pretty vague what “enhanced search” means, and having the two communication features makes it seem like you might not be able to contact people. It’s an odd choice — I’d really think they’d want to do a better job expressing what the value is here before they make you the pitch.
BUT THIS IS THE PITCH! Continue is actually sign up — now you’re asked for payment. Did you want to skip? Hidden grey text again. Note that here it’s not next to the continue button, but all the way over on the left. This is… intentionally deceptive.
This page is so jarringly different from the design you’ve seen to that point I thought for a moment that I’d clicked on an ad or gone awry somehow. Clearly this is some vestigial code owned by a troll under a bridge, or something.
However I want to focus on a huge breach of trust here.
Let’s say you want that “special profile highlight offer” they’re pushing. $38.94, right?
No!
No.
There is an extra $4 added for no reason. “All new upgrade orders” — is this an upgrade? It’s a new account. What are they talking about? Why does that say “upgrade now”? Am I even in the right place?
What are the chances you realize you’re moving forward with a different amount, given this confusing presentation? Is this like a hidden fee on your hotel bill where, if you look up at the person at the desk, they immediately remove it out of embarrassment?
You’re prompted to set up some things that people can ask you, what you’re looking for… I was out by this point, though. However, I’d been sent
The results of my personality test!
What, all those questions about whether I’m into new experiences told you whether I’m into new experiences? THAT IS AMAZING.
Truly a marvel of science. Who knows what the future might bring us?
Yeah, this very much rubbed me the wrong way. It felt like a particularly sophisticated “What Zootopia character are you?” where all the questions are “Do you like carrots?” “Are you good at multiplication?” “Do you have over 1,000 people at your family reunions?” “OMG YOU’RE JUDY HOPPS”
Still, this was — as personality tests can be — an interesting break before I had to face:
The cancellation test!
One of the best ways to learn about a company is by how they act when you cancel. Do they make it difficult? Do you have to call someone? Do they make you go out under a full moon and hold up a solved Rubik’s Cube with both hands and turn three times counter-clockwise, so that you end facing South-by-South-East?
Probably an account status, right?
“Other account status changes” is cryptic…
Oh there it is, the last option.
Why is Date capitalized here? Why is the distinction between casual/serious made here? Why would you stop if you made a friend — isn’t Chemistry about serious people here to meet their partners?
Why aren’t you allowed to tell them you don’t like their site? That’s not a “Technical issue.”
Anyway, so pick a reason…
We’re into bad breakup territory here, where everything you say requires more explanation. So you type something in —
You have, by my count, gone through at least six screens (and probably a lot more, possibly including looking up a help article on how to remove your profile). You’ve just told them more about why you want to remove your profile. And you get this last “wait” modal. It’s just…
I will say it’s nice that they clearly tell you what each of those does, but it’s probably deliberately confusing if someone’s going through this thinking “cancel my account” at each step, gets to the end, and — because Chemistry’s been trying to divert them the whole time — sees “cancel” as the option they want, and “Remove Profile” as a different, non-deletion step. This is not helped by how many other sites — see Match for one example — very much want to keep your zombie self up and boosting their numbers, and try to dance around what profile and account mean.
The end
I’m disappointed. I thought given the association with Dr. Fisher that Chemistry might actually be more… on the up-and-up? More inclusive? By the time I got through the questions, though, I had no desire to see what the rest of the experience was like, and getting out of it only reinforced my impression that I didn’t want to do business with Chemistry. I continue on.
I’ve been through two tries at implementing dual-track agile, and I’d like to offer some perspective on the travails, the pros, the cons, and offer some advice to those who might attempt it.
What’s dual-track agile?
In short — you pull a product manager, a designer, and a developer with broad knowledge of the area you’re working in forward in the process, taking on scouting and evaluating work, which then drops into the backlog of whatever agile methodology you’re using.
This is intended to solve the problems of how you develop the stories that a team will work on — the design sprint, the reset sprint, the investigation card, or in carving capacity for those tasks out of the team’s capacity — which in themselves often provoke both a set of methodology challenges and a level of bikeshed argumentation about methodology that can be immensely draining.
Implementation one: everyone jump into the pool at once
At Simple, we’d been through a couple of massive crash re-platformings that hadn’t delivered new features to customers. We had two product teams (one, mine, working on what would become Shared Accounts, and another working on long-neglected customer service features), so we were going after things that would create value for our customers, but we were still not shipping.
We brought in Silicon Valley Product Group to do an assessment and recommendation. One of their number came in, did interviews with key stakeholders, and then in a day-long session in which almost everyone was present (we had our recruiters there!), told us what they’d seen as problems and offered a set of prescriptions. The biggest and most systemic one was to go to dual track agile.
Our leadership then declared we’d adopt dual-track agile, and so it was.
It didn’t take. We adopted dual-track in name, but in practice, we couldn’t get the developers with the required knowledge to participate, and so discovery withered. Our developers could not move into doing that work, being continually pulled into on-call, retiring deep technical debt, and doing the architecture work that would keep a team working.
Without developer participation, the discovery track could evaluate whether work was valuable to the end user, and whether that value could be realized by customers, but it still meant that items about to go to the team did not have an ROI, because we still needed to figure out the I. And in turn, that meant that there still needed to be an additional step in what should be the “delivery” track to do even high-level development investigation.
What could we have done?
no methodology exists in a vacuum — consider the people and teams that will be doing the work. The people may be willing, but if circumstances don’t permit, you’re set up to fail
if there are changes you can make that would allow you to use a methodology, you have to make the changes or wait — don’t do the reorg or change your approach and just hope
a top-down mass implementation wasn’t the way to go — ever the pragmatist, I’d have rather found a team where it was a particularly good fit, done it there, and then learned lessons that could spread. When we started doing Scrum on teams at Expedia, we got approval to try it in three teams with good conditions and very different problems (flight search, hotels, and packages) and were able to learn from each other and measure our success against other teams and spread the good word.
Two: easing into it
As Lead Product Manager at Manifold, I was able to drive methodology. I decided to follow my own advice and do it real subtle-like. We started to use Product Pad, doing more and more discovery activities… and my intention was to use more and more of the process until we’d be using dual-track across the teams without having to talk about it.
During this time, my boss, the VP of Eng, encouraged me to just do it: “We’re small, you can do a lunch-and-learn and just go!”
I, having been burned by the previous experience, and encouraged by the success of gradual Scrum adoption at Expedia, declined and stuck with the incremental approach. Let this be a lesson to us all:
If you have the opportunity to jump ahead, and the support to do so, maybe just do it
Dual-track took… kind of. Unfortunately for dual-track, the company made a hard business pivot and organized around long-term contractual obligations (so product teams organized around delivering promised functionality, rather than pursuing objectives of their own). There’s a little bit of work to be done within that, but you’re not going out to do basic user research, address problems, etc.
Fundamentally, dual-track exists to support testing ideas, learning, and exploration. If you’re not doing that as a business, don’t adopt a framework that supports it. It’s like the difference between a road trip and a regular commute. One requires research, planning, friends, snacks, a good dog to hang out the window if you’re lucky; the other requires you to figure out the vehicle and route once and then pay attention.
I also encountered more resistance, in the form of nearly-endless tool and process questioning, than I’d expected or was prepared for. I found myself giving answers like “What we do when robosaurs attack local food distribution centers is a good question, and we’d have to talk about what that would mean for ticket handling…”
Now, what was going on? There were two culture clashes with Manifold’s stated values of transparency and autonomy, where anyone should be able to do anything as long as there was an audit log and the action was reversible.
1: dual-track itself seemed to clash with culture: that there would be three people who, outside of a product team’s normal activities, would be making decisions on the direction of the team and what work there was to be done.
2: tooling: first, introducing another tool for people to use raised the “why not just do all this in (JIRA/github/___)?” and in particular, the tool I’d introduced, Product Pad, is great for a lot of things but has a ton of restrictions on roles (for instance, a normal user can’t go add stories to an idea filed by someone else) that rankled people. I had not done enough to consider this.
What happened?
We started: ideas and feedback went into the hopper, there were processes to do discovery, and it was being used on some more-nebulous pieces to create a roadmap.
I feel like I should have abandoned it when we made our pivot — I should have considered that a change in direction and in how we work as a company was worth wiping the whiteboard clean and evaluating the process challenges against the kind of work we’d be doing over the next 12-18 months.
Questions to ask and things to do as you weigh dual-track.
First, think about this:
What’s your elevator pitch for why the change is worth it — what will you gain, or what known harms would it have prevented?
What’s that pitch for each of the stakeholders in the process? What’s your pitch to your Marketing partners, for instance?
Who’s your champion at the exec level?
Consider the culture: will people react poorly to the appearance of a trio of people taking over the direction of the team where once it seemed open to all? What can you do in designing the process and communication to address those concerns?
And then, block out your calendar for a long time so you can concentrate, and
Map out the process from end-to-end before any public rollout or discussion.
How will an idea or customer request go through the process? What tools will be used? Consider the maximally complex cases —
A developer has an idea
We write up the idea — where? How is it tracked?
How do you decide that that idea is worth investigating out of all the other ideas? Who makes that decision? Where?
We need to do some user research on the idea — who does it? Where is that request tracked? How does the result get tracked?
We need to show a prototype to users, and it requires both design and a little bit of dev. How do you start and track that work? How do you record the work?
You now know that the idea has merit and can be made usable. How do you track that?
How do you cost out the work? How is that work assigned, and tracked?
How do you choose what goes onto the roadmap from all of the ideas that have cost/benefit ratings?
If you’re doing regular user research or usability testing, how will that fit? Where will those results go?
How will you communicate each decision out to everyone who is interested in the results?
Then for each person in the process, list how many tools they must use including any new ones, and how they’ll track their work at each stage. If they live in a world of GH issues and you now require them to put their head into Product Pad regularly, or participate in discussions there, you’re adding complexity into their life and they’ll have to see a lot of value come out of this extra effort — which may fall to you as a product manager.
Now you’ve got a rollout plan, of sorts, with an idea of the cost and what will need to happen for the process to be a success, and you can make an informed decision.
In summary, dual-track agile is a methodology of contrasts
Having been through two attempts to implement it, I’m much more likely to look at existing processes and find ways to build in things like regular user testing and fast feedback loops, and see if they improve things — or start to create the need for dedicated discovery process and activities.
I would love to hear other people’s success or failure stories implementing dual-track agile (or any of the latest hot methodologies) — please, drop me a line if you’ve got them.
In “Stop Obsessing Over ‘The Teams’,” John Cutler argues that we shouldn’t focus on “The Teams” until the conditions for effective teams are in place.
Cutler begins:
Most Agile methodologies/frameworks focus myopically on “the teams” (front-line teams of individual contributors). Meanwhile, organizational dysfunctions (those causing 80% of the drag) persist.
And that you have to solve organizational problems first.
Companies usually ask me to “transform teams” but many of the problems they need to fix stem from the org level. Local optimization over global rarely works. You have to work on both.
Everyone’s right — the trick is, sometimes agile methodologies are what let you find the global problems and make wide organizational change necessary.
Cutler continues:
The dreamland utopia of “The Business” and “The Developers / The Teams” just working it out over mugs of coffee … is ultimately unfeasible in any org — with >1 teams — that simply tries to graft something like Scrum on to their existing structures, dependencies, and project culture.
No.
When faced with something that seems intractable, you start with what you can control, improve what you can, and force the organizational change that seems unfeasible.
I worked at Expedia way back during a rough patch for the company, and if you’d asked “what’s going on here?” you’d get a litany of entirely different and well-supported answers from different organizations. Or teams. Or people at adjacent desks on the same team with different roles. Top-down “let’s unlock transformation” attempts went nowhere. The business deteriorated and everyone knew it, but no one could agree on the reasons, much less what to do.
Three of us Program Managers (Adam Cohn, Yarek Kowalik, me) got permission to turn our teams to Scrum from the Microsoft process Expedia kept after spin-off, which went:
1) Strategy comes from somewhere
2) Business crunches numbers, hands a direction to team in Business Requirements Document
3) Program Manager turns that into a full specification down to a really deep technical level
4) Spec is reviewed and iterated on until it gets stage-gate approvals
5) Development goes until it hits “Dev complete”
6) QA until it hits “QA complete”
…the whole thing.
There were all kinds of organizational and business problems, all the stuff Cutler points out. The teams weren’t really autonomous, the Program Managers were effectively Product Owners, Project Managers, and deputized Product Managers, it took forever to ship, what did ship didn’t do well — all of it.
We had to hand-lobby leadership and make ourselves pests until we got permission to try Scrum just on the team level, just on those teams. Adopting was rough, and there was a time where transparency and the ability to produce stats meant we were the only teams admitting we weren’t making progress.
But it worked. It transformed everything.
It started with the immediate team and radiated out:
showing the teams’ work created a sense of increasing success where there’d been frustration
seeing the effect of gates on productivity let us remove the “toss over the wall” steps and get everyone closer
reporting how the design process — one set of “red-lines” in the spec, then responding only to critical issues — delayed iteration got us more and better collaboration throughout
documenting the problems with releasing code helped drive efforts towards continual delivery
With “engineering can’t ship” gone as an excuse, organizational change kicked off. Once, no one could agree on why things weren’t working in our organization. Now we could prove the causes, and when people agreed on them, we could solve them.
For instance, tracking the work done and the cost of it meant we could show executives the productivity-draining effect of business-level randomization. I could go and say “as the business, you told us to prioritize quick-wins in response to this metric, nothing happened and we made no progress on what you want for four weeks” and the leadership of that organization had to grapple with what that meant.
That started a conversation first about how much time we put into tech debt, and on-call, and soon it was a business-level reckoning of “are we doing the right things at all?”
Solving that drove change through the once-removed business organizations: it required Product Managers who could act quickly and pragmatically, who were available and collaborative (I wrote about what I learned from working with one of them). And for the first time, you could tell who was an effective Product Manager, and that organization transformed.
All of those changes were painful and took way longer than anyone wanted. And if you’d started omniscient, knowing what all the barriers were going to be, sure, it’d have been way easier to make a list. But we’re not. Instead, agile methodologies let you see and prove the barriers to the team’s work and force action, and then you get to see the next one.
When large organizations are faced with holistic questions, you don’t get progress. At best you get a weighty consultant-produced report on how you’re deviating from best practices, and each of the groups tasked with work finds some way to argue their bit of the findings don’t apply, or feet are dragged and no progress is ever made.
Agile methodologies allow you to flip that: to not have to solve the whole thing at once. To find the things that are small, and fixable, and get better as you build clear and incontrovertible cases for the next implementable organizational change. You don’t know what the effect of a set of wide-ranging best practice announcements will be, but you know having a designer embedded on product teams will make everyone more effective.
Obsess over teams. Not only is it often the best way to create change, in large organizations the team is the only place change can originate.
Calacanis’s Angel contains useful insight if you’re interested in dipping your toes into funding startups, if you’re thinking of starting one and want to know what the other side sees, or if you’re simply around or interested in that world.
That insight costs putting up with Jason himself, and that’s enough to make this hard to recommend to anyone without this advice:
Do not under any circumstances read Chapter 1. Skip anything about how great Jason is or his history.
The rest of the book is relatively tolerable, but I wrote “fuck you” in the margins repeatedly in this chapter. It kicks off some themes that keep coming up in the book that are problematic.
I’m going to entirely ignore Calacanis’s opinion of himself and his abilities, and how important he seems to find it that you too believe in him. I’m personally extremely skeptical of the kind of rabid, persistent self-promotion he engages in throughout. The book’s here, we can roll our eyes to the sky, and move forward. There are much larger issues here. You can check out this profile of him in the NYT titled “Should the Middle Class Invest in Risky Tech Start-Ups?”
Skip past this section if you’re already familiar with the problems with Calacanis (and much of Silicon Valley)’s attitude.
Jason thinks you’re a pitiable moron if, for some inexplicable reason, you aren’t also a VC
First, privilege
Our parents and grandparents took factory and white-collar jobs and rode them for all they were worth in the last century. Now the robots — who never sleep and self-improve on an exponential curve — are taking these jobs. Meanwhile, we humans, with our nagging psychological and emotional needs, struggle to keep up.
Most of you are screwed.
But you’re here, so you’re clearly willing to learn and I can radically improve your odds if you do the work.
First, note his use of “you.” It’s “our” for the parents, but when it comes to who’s screwed, he’s clearly comfortable on his lifeboat seat.
This is important. In order to participate in this new economy, he suggests starting in a “syndicate” where someone like Jason finds investments, you and his crew of small investors jump in with him, and Jason takes a fee for making the match. To do this, you must be an accredited investor. Here’s the standard for that, which changed in 2011.
You must be:
worth more than $1 million
your primary residence doesn’t count towards that
Angel was published this year. A million dollars is about 90th percentile in the net worth of US families (and I believe that counts home equity, but figure it doesn’t so we can forge ahead with even 10% qualifying). There’s an assumption throughout that you, the chump, can just start doing this. He refers to it as “taking the red pill” at one point (and let’s for a moment ignore that term’s association with some deeply toxic people).
For 90% of people, there’s no option on the investing path. Calacanis can’t “radically improve your odds” if you can’t participate. He has another option for those people; I’ll get there.
Second, that’s paired with contempt.
Calacanis goes back and forth in the book from being insufferable, to being able to look honestly at his own history and even, at times, his own luck, to heaping shit on people who aren’t in the tech industry.
Cafe X and other startups will also eliminate millions of jobs in which humans get paid to stand behind a counter and repeat back your seven precious little instructions on how to prepare your morning libation, before pressing one button and masturbating a milk-frothing pitcher for two minutes.
I’m not going to argue even that he’s wrong that baristas could be endangered. His tone is fucking unacceptable though. Think about someone you know that has a “service” job and does it amazingly well. Doesn’t even have to be a barista, though it would warm my heart if it was entirely counter to Calacanis’s example. It’s almost certain that what they do beyond the core and potentially mechanizable task is what makes them great.
What does a great city bus driver, for instance, do beyond drive the bus? They’re getting people in wheelchairs on or off and secured, they’re doing customer service like helping people navigate a public transit system, and infrequently but critically they’re defusing dangerous situations or dealing with crimes as they happen. Why shit on them for not having a million dollars and investing in startups that will put their fellow co-workers on the street?
Now, to his credit, Chapter 8 is “How to be an Angel Investor with Little or No Money” and then 9 is “The Pros and Cons of Advising” — but this chapter’s about signing on as an advisor, a probably-unpaid position in which you work for equity. Which — and again, Calacanis doesn’t seem to realize this — requires that you have skills useful to a startup, like his internet marketing expertise, which very few of the 90% of people without a million dollars will; the connections to the startup, which that 90% won’t have; and the time to make that happen, which the 90% are probably not going to have — particularly since the jobs being created by Calacanis & Co. are frequently high-effort, no-benefit gigs that I’m sure Calacanis would express contempt for if he thought to (“…eliminate millions of jobs where humans get paid to pick up your overly-precious sushi order and ride their overly-burdened bikes in their ridiculous helmets, weaving to dodge potholes and confusing my autonomous car’s prediction sensors, causing it to slow and making me several moments late for my next pitch meeting…”)
This is — and despite being written in first-person, it’s possible he did not write this — a stark contrast to what the book promises, that if you’re a “mom in the bookstore with screaming kids, the sales executive in the airport exhausted from layovers, or the kid graduating from college wondering, “What now?”” that you can be like Calacanis and “live in a big house with a bunch of Teslas in the driveway and an ATM balance receipt that makes me smile from ear to ear every time I see it.”
They can’t. That college student’s almost certainly graduating with crippling debt and joining an economy with little to offer them. The mom with the screaming kids? Is she going to take them to the pitch meetings? Tech bros don’t need daycare to meet with founders. And — yeah.
What’s good here?
Once we’re well into it, there’s some legitimately worthwhile stuff regardless of Calacanis himself.
Chapter 11 starts into how you can start by making tiny deals in angel syndicates, and use that to gain experience in how to evaluate founders and make connections with other investors and people in the community, and how you can help your founders.
The sections on evaluating startup pitches and what you should do to prepare for and how to be in those meetings are great.
Chapter 22, “Why Angels Should Write Deal Memos” is great advice for almost any kind of decision like investing in a startup (investing generally, for instance) — that you need to create a clear case for why you’re acting not only as a clarifying thought exercise but so that you can look back at it in the future and compare it to where you are, offering a valuable reality check.
Then the walk through of what it’s like to be an investor of a startup in those early years, and how it will be challenging, are all excellent advice.
This is also where Calacainis’ experiences can make the story — when you hear about him getting screwed over in a deal, or acting poorly, it’s easy to sympathize and learn from him. When he’s rude in response to something, you get it, and there are lessons there both in how to conduct yourself as an investor but in what to watch for to avoid being in that situation.
For founders, the book lets you see it all through the investor’s eyes, and the walk through the pitfalls of funding (in particular, that the money is not going to get you where you think it will, and everyone outside the startup knows this) is valuable. Knowing what a good investor expects, both in communication and in what they want to hear so they can try to help, sets you up well.
On balance
I was well into this book still angry about its opening attitude and considering a pointed email to Calacanis. Once I took a walk and could set it aside, I returned to get into the nuts-and-bolts, and it was entirely worthwhile.
If you take my advice and skip that first chapter and then gloss over any of his “I’m amazing” self-reassurance, you’ll have a much more pleasant experience in getting all the good stuff.
Phil Town is the author of two books, 2006’s Rule #1 and 2010’s Payback Time.
Town’s origin story is that, casting about after serving in Vietnam as part of the Green Berets, he worked as a river guide in the Grand Canyon, where an older investor he met offered to teach him the ways of the stock market.[1] He then claims to have turned $1,000 into $1,450,000 in five years and became a full-time investor.[2]
Now he’s got a popular podcast where he’s teaching his daughter Danielle Town (who is so great as a foil and voice of the listener), and he wants you to attend his seminars and subscribe to his stock research tools.
Should I read this or not?
Totally worth reading if you don’t already have a valuation strategy or process you’re into, with the caveat that there’s some confusing additional material that might make you scratch your head.
If you do, or you’re more generally familiar with value investing (why it can work, particularly), there may not be a lot here.
The meat of the book: how to find good businesses on sale
Distilled, Rule One tells us to invest in good stocks that are 50% off. He’s got the “Four Ms” for you:
Meaning
Moat
Management
Margin of Safety
And then walks you through each one. Some are easy to understand: for “Moat” he talks about competitive advantages you can count on over time. For “Management” you’re looking for some key metrics, and there are good explanations of why you want to look at each one.
“Meaning” isn’t quite the right term (and to his credit, on his podcast Town’s joked about how the necessities of book marketing forced the names so there would be a catchy “Four Ms”) — but here, do you understand what the business is doing, and would you buy the whole business?
The real meat for most people will be in how you value and price stocks, looking for the discount. And here’s where a lot of math comes in, and I respect that: this isn’t Greenblatt’s “Magic Formula” where you just rank all the stocks. For every stock, Town walks through how to estimate its future value and compare it to alternatives, like investing in U.S. Treasury bills, or against expectations.
I dug this a lot, but I also had to break out the spreadsheets, and it’s way more of a pain to get all the data together (this is as much a problem with the web in 2017, though). Handy solution, though — you can just subscribe to his website and get all those numbers in one place.
You end up with what Town calls the “sticker price” for a stock, the current fair value for a company given expectations for its growth. The explanation for the concept is excellent — that price should be a starting point, and you want a sale.
Running those numbers was an eye-opener for me and taught me a lot about how to think about stocks as both being a company’s value in the present but an expectation of future growth as well. For instance, I’d been griping that one company seemed wildly under-valued based on their price/earnings ratio. When I ran through the process here, I understood that it was priced perfectly reasonably.
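For a feel of the shape of that math, here’s a minimal sketch of a sticker-price style calculation as I understand it — project earnings out, apply a future P/E, discount back at a minimum acceptable return, then halve for the margin of safety. The function, the 15% rate, and the example company’s numbers are my own illustration, not Town’s exact worksheet:

```python
def sticker_price(eps, growth_rate, future_pe, years=10, marr=0.15):
    """Estimate a fair 'sticker price': project EPS forward at the
    estimated growth rate, apply a future P/E to get a future price,
    then discount back at the minimum acceptable rate of return."""
    future_eps = eps * (1 + growth_rate) ** years
    future_price = future_eps * future_pe
    return future_price / (1 + marr) ** years

# Hypothetical company: $2.00 EPS, 12% estimated growth, future P/E of 20
sticker = sticker_price(2.00, 0.12, 20)
margin_of_safety_price = sticker / 2  # the book wants stocks at 50% off
print(round(sticker, 2), round(margin_of_safety_price, 2))  # → 30.71 15.35
```

The interesting part is how hard the discounting works against you: a stock trading at that hypothetical company’s sticker price today is only priced for a 15% return if the growth estimate actually pans out.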
Once you can calculate the sticker price, you want stocks that are at least 50% off. For this to be true, either the company is deeply out of fashion or something has gone horribly wrong and the stock’s being beaten up over bad news. As I write this, the CAPE (Cyclically Adjusted PE Ratio) is at 30.62, which is historically very high, and you’re not going to find many stocks on sale for half off.
Going off the rails: add technical analysis
Chapter 12 is “The Three Tools” and here Town talks about “red light” and “green light” indicators for whether to buy a stock. Here we diverge entirely from the value of a stock and get into some entirely different territory, and where I think the book loses people. Buying quality businesses on sale is simple, and while it takes a little work, it makes sense as you’re doing it.
In Chapter 12, suddenly we’re not just looking at business, we’re now trying to figure out what everyone else is doing, and things get way harder to understand.
The three tools:
Moving average convergence divergence (MACD). A measure of whether there’s “pressure” pushing the stock up or down
Stochastics. Measures if something is “overbought” or “oversold” — and the explanation of this one is not great
Moving average.
I’m going to say up front I didn’t get this chapter at all. I feel like I’m missing the point, so this will be a bit more uncertain.
This chapter confuses the issue at every step. For instance, you’re supposed to look at Stochastics for crossings of the 14-day and 5-day lines. For one, most places you look you don’t get a ‘buy’ and a ‘sell’ line as the book describes them, you get a “%K” and a “%D”, and it’s confusing. When I got through the instructions for setting one up using Town’s settings for time frames, I looked up some stocks, and it was a coin flip whether the stock went in any discernible direction.
I don’t know if I was just unlucky or if I’m even using the tools right. My point is that if you’ve gone through the book and you’re working with pretty clear, understandable numbers (earnings per share is pretty much earnings per share), this shift to difficult-to-understand technical indicators is confusing and exasperating.
Moreover, there’s no convincing why here. The explanation of what we’re doing in this chapter doesn’t make it clear why it’s important to pay attention to it. It seems like the fact that you’ve found a company that’s on sale for 50% off should far outweigh whether others are getting into or out of the stock at the moment.
I was left thinking there are important signals I should pay attention to in addition to basic valuation, but unsure how I’d do that.
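For anyone equally puzzled, the oscillator itself is mechanically simple even if interpreting it isn’t. A sketch of the standard computation, assuming the book’s ‘buy’ and ‘sell’ lines map to the conventional %K and %D (that mapping is my guess):

```python
def stochastic(closes, highs, lows, k_period=14, d_period=5):
    """Standard stochastic oscillator: %K locates each close within the
    high/low range of the previous k_period bars; %D is a simple moving
    average of %K (using the 14/5 settings the book mentions)."""
    k_values = []
    for i in range(k_period - 1, len(closes)):
        window_high = max(highs[i - k_period + 1 : i + 1])
        window_low = min(lows[i - k_period + 1 : i + 1])
        k_values.append(100 * (closes[i] - window_low) / (window_high - window_low))
    d_values = [
        sum(k_values[i - d_period + 1 : i + 1]) / d_period
        for i in range(d_period - 1, len(k_values))
    ]
    return k_values, d_values  # a %K/%D crossing is the signal to squint at
```

Even with the mechanics in hand, the question stands: the book never really argues why a crossing of these two lines should outweigh finding a quality business at 50% off.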
This is also where the book falls down on the cover promise of “successful investing in only 15 minutes a week!” Even if you’re only monitoring the few stocks you’ve bought, using these three tools once a day to see if you should bail on a stock, you’ve burned your 15 minutes squinting at whether one line crossed another.
It all feels like Town, as a long-time experienced investor familiar with tools like these, was unable to explain these or their importance in a way that made sense to those without the same experience, which is unfortunate because I got a lot out of the valuation sections.
And from there, much good is undone
Chapter 13 then walks through a possible example as a couple considers buying The Cheesecake Factory, Inc. The valuation all makes sense, and they buy. A chart of spectacular returns is shown — CAKE goes from $18.90 to ~$36 in two years, a 90% gain. Nice. The couple’s example $20,000 is now $38,000. Except… here’s what Town writes:
By getting in just below $19, and then moving in and out 11 times in two years with the big guys, and by adding $500 a month they were saving, by July 2005, CAKE gives Doug and Susan a nice compounded rate of return of 56% and their $20,000 is now worth $78,000
He’s claiming twice that return rate.
Nope. Nope nope nope nope. Just… not true. At all.
1) Is the regular additional contribution counted as part of the return? It’s unclear, but why mention it otherwise? You can’t have them put more cash in and count that as part of what the investment is worth now. I could have a $100 investment that’s totally flat, add $24 of my own money, and wow, I beat the stock market over two years because now I have $124. This is, at best, confusing. At worst, he’s counting over $10,000 in contributions, and the gains on them, as part of a return where they shouldn’t be.
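Spelling out the arithmetic makes the problem concrete. A quick sanity check, using only figures the book itself gives (the compounding here is plain math, not a claim about what Town actually computed):

```python
# Sanity-checking the CAKE example. All inputs are the book's own
# figures; everything else is ordinary compounding arithmetic.

initial = 20_000              # Doug and Susan's starting stake
price_gain = 36 / 18.90 - 1   # CAKE's two-year move, roughly +90%

buy_and_hold = initial * (1 + price_gain)
print(f"Buy and hold: ${buy_and_hold:,.0f}")         # ~$38,000

# Annualized, 90% over two years works out to:
cagr = (1 + price_gain) ** 0.5 - 1
print(f"Buy-and-hold CAGR: {cagr:.0%}")              # ~38%, not 56%

# Town's claimed 56%/year compounded on the stake alone:
claimed = initial * 1.56 ** 2
print(f"$20,000 at 56%/yr for 2 years: ${claimed:,.0f}")  # ~$48,700

# The $500/month is $12,000 of new cash over two years. If that
# (plus growth on it) is folded into the $78,000, the "return"
# is inflated by money the couple put in themselves.
contributions = 500 * 24
print(f"Contributions alone: ${contributions:,}")    # $12,000
```

Even granting the 56% figure, $20,000 compounding at 56% for two years is about $48,700, so the gap to $78,000 has to be coming from the contributions and the trading, and the book never shows which.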
2) They moved in and out 11 times? How do we know that? Does this assume that they perfectly timed each of those? When were those 11 moves? What did the technical indicators say at the time?
3) They’re going to get creamed on short-term taxes. The investor who held for more than a year pays way less.
Investor A: bought at the same time and held. +$18,000, may be paying zero in taxes
Investor B: bought and, as a beginner, timed moving in and out 11 times; up $58,000, might be $40,000 after taxes.
And if Town is counting the additional contributions as part of that gain, Investor B’s real after-tax gain is down to around $30,000.
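The tax gap sketches out like this (the rates here are placeholders I’m assuming for illustration; actual brackets depend on income and tax year):

```python
# Rough after-tax comparison. Both tax rates are assumed for
# illustration, not a claim about any specific bracket or year.

gain_hold = 18_000    # Investor A: bought once, held > 1 year
gain_trade = 58_000   # Investor B: the claimed gain, 11 round trips

lt_rate = 0.0         # long-term capital gains can be 0% in lower brackets
st_rate = 0.31        # short-term gains taxed as ordinary income (assumed)

after_tax_hold = gain_hold * (1 - lt_rate)
after_tax_trade = gain_trade * (1 - st_rate)

print(f"Investor A keeps: ${after_tax_hold:,.0f}")   # $18,000
print(f"Investor B keeps: ${after_tax_trade:,.0f}")  # ~$40,000
```

The point isn’t the exact rates; it’s that every one of those 11 round trips converts long-term gains into short-term ones, and the book never accounts for that drag.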
It’s not okay that this important calculation is so unclear, and the handling of the additional contributions makes me extremely suspicious. It doesn’t make sense as written, and I don’t understand what the point is. 90% over two years is great! Why confuse things?
This raises a larger question: does this all work? We’re presented with a couple of examples of Company A compared to Company B, walked through the Cheesecake Factory example, but beyond those hand-picked examples, all we have to rely on is Town’s accounting of his track record. I’d have felt much better about the whole thing given a wider accounting of his trades, or studies done on historical data — like we have for Greenblatt’s Magic Formula book.
Put it all together, what’s it mean
I dug reading it, and particularly found it useful in making me do real work putting things together and looking at companies. But I also found the technical tools section confusing and hard to understand, and the example math particularly worrying. It occupies a weird place in an investment bookshelf: I can see recommending it to someone who is interested in Warren Buffett’s investment philosophy and wants to know how to crunch the numbers to find those great companies at attractive prices.
[1] The story, as told, is so perfectly mythic that it’ll probably raise at least one eyebrow if you read it. But whatever.
[2] (I spent a little time looking at this, but besides him raising money in 2013 for Rule One Capital, I couldn’t find anything on early investment proof or his returns as a manager).
[3] Similarly, Town claims early that as a Vietnam veteran, on his last day in uniform he was at Sea-Tac International Airport when someone spat on him and ran away. This… I don’t know. That spitting at veterans happened at all is disputed: there’s a whole book on this, The Spitting Image by Jerry Lembcke, which argues that no incident like this was ever documented, as part of a larger case that antiwar activists and veterans were allied far more than is now generally believed.
I thought about looking into this in greater detail (particularly, why would he be at Sea-Tac on his last day in uniform, when it seems like he’d have flown into McChord Air Force Base?), and stopped. This has been hashed out many other places, and it doesn’t seem relevant. Let’s just grant Town this.
(Writing this up for hopeful discovery of future Drobo owners)
My Drobo started acting up a while ago in an incredibly frustrating way:
The Drobo would sometimes not show up, or not mount, requiring a dance of restarting it, restarting the computer, plugging, unplugging
When it was mounted, you’d get a short while before the Drobo went unresponsive in the middle of an operation, and then it’d unmount (and OS X would throw a warning about dismounting drives improperly)
Sometimes if you left it connected for long enough, it would show up again, hang around for a bit, and then disconnect.
Nothing worked: re-installing the software, resets, the “remove drives, reboot the Drobo, wait, turn it off, put the drives back in…” dance. And all the while, status-light-wise and Drobo Dashboard-wise, it reported everything was good.
And unhappily, Drobo support costs money, and I’m cheap, so I wasted a ton of time troubleshooting it. As a bonus, their error logging and messaging is either unhelpful or encrypted.
(I feel like if you encrypt your device’s logs, you should offer free support at least for unencrypting the logs and letting the user know what’s up. I’m disappointed in them and will not be purchasing future Drobos. Or recommending them.)
Eventually I pulled each of the drives and checked their SMART status: OK overall on all of them, though when I pulled the details, one had flags set (and SMART’s not a great predictor anyway; see Backblaze’s blogs on this). So I cloned them sector-by-sector onto identically-sized drives. The drive with the odd SMART errors (but, again, overall OK status) made some really unsettling noises at a couple of points during sustained reads, but the copy went off okay.
Fired it up, and it worked. Drobo came back on, mounted, works fine (for nowwwww….).
I spent some more time hunting around in the Drobo support forums looking for more information, and found someone reporting back on a similar issue: they’d had a drive go bad, but the Drobo never reported any issues, and it wasn’t identified until support looked through the encrypted error logs and said “oh, drive number X is going bad, that’s causing your Drobo’s strange behavior.” Clearly, given my success, at least one of my drives was secretly bad, and cloning and replacing was the solution.
So! May writing this up help at least one future support-stranded Drobo owner: if your Drobo is unmounting randomly, not showing up in the Finder, throwing dismount errors, but the Drobo’s reporting that everything is hunky-dory, and you don’t want to pay for support and you’re willing to take advice from some random fellow owner on the Internet who may not even have the same issue… here’s one approach before you throw your malfunctioning Drobo out the window:
Power it down and pull the drives
Using whatever utility you like, check the high-level SMART status on the drives to see if something’s clearly screwed up
(optional, if they’re all okay) look at the detailed SMART errors and see if any of the drives looks really wonky
If any of them are bad, do a sector-by-sector clone of that drive, swap the clone in, power up the Drobo, see if that works. If yes: yay! If not —
Clone & replace them all, see if that works.
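If it helps, the steps above map to commands roughly like this. This is a sketch assuming smartmontools and GNU ddrescue are installed and that the pulled drive shows up in an external dock or enclosure; the /dev/diskN and /dev/diskM device names are placeholders you must replace with your actual (double-checked!) devices:

```shell
# 1. High-level SMART health check on each pulled drive
smartctl -H /dev/diskN

# 2. (optional) Detailed SMART attributes -- watch for things like
#    reallocated or pending sector counts creeping above zero
smartctl -A /dev/diskN

# 3. Sector-by-sector clone of a suspect drive onto an identical-size
#    drive. ddrescue tolerates read errors on a failing disk better
#    than plain dd, and the mapfile lets you resume an interrupted copy.
ddrescue -f /dev/diskN /dev/diskM rescue.map
```

Triple-check which device is the source and which is the destination before running ddrescue; with `-f` it will happily overwrite the wrong disk.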
May this work, and may the drive be in good enough shape to successfully clone.
I should also note that as much as I’m annoyed my Drobo was out of support, assuming they would have been able to tell me what was happening and which drive to clone and replace, it would have been worth paying for the per-incident support to save myself the headache.