
Your data are racist

Say you’re a university loan administrator. You have one loan and two students who, anonymized, seem to you in every way identical: same GPA, same payment history, all that good stuff. You can’t decide. You ask a data-driven startup to determine which one is the greater risk to default or pay late. You have no idea how they do it, but the answer comes back —

The answer’s clear: Student A!

Congratulations, you’ve just perpetuated historical racism.

You didn’t know it. The startup didn’t know it: they evaluated both students and found that Student A’s social networks are stable, and that their secondary characteristics are all strongly correlated with future prosperity and loan repayment. Student B scores less well on social network, and their high school, home ZIP code, interests, and demographic data are associated with significantly higher rates of late payments and loan defaults.
From their perspective, they’re telling you — accurately, most likely — which of those two is the better loan risk.

No one at the startup may even know what the factors were. They may have grabbed all the financial data they could get from different sources, tied it to all the demographic data they could find, and set machine learning on the problem, creating a black box that they can show is significantly better than a loan officer or risk analyst at guessing who’s going to default.

Machine learning goes like this: you feed the machine a sample of, say, 10,000 records, like

Record Foo

  • Zip Code: 98112
  • Married: Yes
  • Age: 24
  • Likes Corgis: Yes
  • Defaulted on a student loan: No
  • Made at least one payment more than 90 days late: No

Record Bar

  • Zip Code: 91121
  • Married: No
  • Age: 34
  • Likes Corgis: No
  • Defaulted on a student loan: Yes
  • Made at least one payment more than 90 days late: Yes

You set the tools on it, and it’ll find characteristics and combinations of characteristics that it associates with the outcomes, so that when you hand it a new record (Angela, a 22-year-old from Boston, unmarried, doesn’t like Corgis), your black box says “I’m 95% sure they’ll default.”
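To make that concrete, here’s a minimal sketch of that train-and-score loop in Python with scikit-learn. The records and field names are the toy ones above; Angela’s ZIP code, and everything else about a real lender’s feature set, is made up.

from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# The toy records from above; a real training set would have thousands
# of rows and far more columns.
training_records = [
    {"zip": "98112", "married": True, "age": 24, "likes_corgis": True},
    {"zip": "91121", "married": False, "age": 34, "likes_corgis": False},
]
defaulted = [0, 1]  # 0 = paid on time, 1 = defaulted or paid >90 days late

# DictVectorizer one-hot encodes the categorical fields (like ZIP code);
# logistic regression then learns which combinations track the outcome.
black_box = make_pipeline(DictVectorizer(), LogisticRegression())
black_box.fit(training_records, defaulted)

# A new record: the box hands back a probability, not an explanation.
angela = {"zip": "02118", "married": False, "age": 22, "likes_corgis": False}
print(black_box.predict_proba([angela])[0][1])  # estimated chance of default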

It’s the ultimate in finding correlation and assuming causation.

You see how good it is by giving it sets of people where you know the outcome and seeing what the box predicts.

You don’t even want to know what the characteristics are, because you might dismiss something that turns out to be important (“People who buy riding lawnmowers buy black drip coffee at premium shops? What?”).

Because machine learning is trained on the past, it’s looking at what people did while being discriminated against, operating at a disadvantage, and so on.

For instance, say you take ZIP codes as an input to your model. Makes sense, right? That’s a perfectly valid piece of data. It’s a great predictor of future prosperity and wealth. And you can see that people in certain areas are fired from their jobs more often, have a much harder time finding new ones, and so default on payments more often. Is it okay to use that as a factor?

Because America spent so long segregating housing, and because those effects carry forward, using ZIP code means that, given ZIP X, I’m 80% certain you’re white. Or 90% if you’re in 98110.

We don’t even have to know, as someone using the model, that an applicant is black. We just see that a given ZIP code predicts defaulting, or staying current on a loan. Or I might not even know that my trained black box loves ZIP codes.

And if you can use address information to break it down to census tract and census block, you’re even better at making predictions that are about race.
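If you do have race labels on hand (say, joined in from census data), one way to see how much of a proxy a feature is: try to predict the protected attribute from that feature alone. This is a hypothetical sketch, not anything the startup above would publish; the records and race labels are assumed inputs.

from collections import Counter
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

def proxy_strength(records, race_labels):
    """How well does ZIP code alone predict race?

    Returns (cross-validated accuracy, majority-class baseline). If the
    accuracy sits far above the baseline, ZIP is doing the work that an
    explicit race column would do.
    """
    zips_only = [{"zip": r["zip"]} for r in records]
    model = make_pipeline(DictVectorizer(), LogisticRegression(max_iter=1000))
    accuracy = cross_val_score(model, zips_only, race_labels, cv=5).mean()
    baseline = max(Counter(race_labels).values()) / len(race_labels)
    return accuracy, baseline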

This is true of so many other characteristics. Can I mine your social network and connect you directly to someone who’s been to jail? That’s probably predictive of credit suitability. Oh — black people are ~9 times more likely to be incarcerated.

Are your parents still married? Were they ever married? That’s — oh.

Oh no! You’ve been transported back in time. You’re in London. It’s 1066. William, Duke of Normandy, has just now been crowned. You have a sackful of gold you can loan out. Pretty much everyone wants it, at wildly varying interest rates. Where do you place your bets?

William, right? As a savvy person, you’re vaguely aware that England has a lot of troubles ahead, but generally speaking, you’re betting on those who hold wealth and power to continue to do so.

Good call!

What about, say, 500 years later? Same place, 1566. Late-ish Tudor period. You’re putting your money on the Tudors, while probably being really careful not to actually remind them that they’re Tudors.

Good call!

Betting on the established power structure is always a safe bet. But this means you’re also perpetuating that unjust power structure.

Two people want to start a business. They’re equally skilled. One gets a loan at 10% interest, the other at 3%. Which is more likely to succeed?
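The gap compounds, too. A quick sketch with hypothetical numbers (a $50,000 five-year loan, standard amortization); nothing here comes from a real lender.

def monthly_payment(principal, annual_rate, years):
    """Standard amortized loan payment."""
    r = annual_rate / 12          # monthly rate
    n = years * 12                # number of payments
    return principal * r / (1 - (1 + r) ** -n)

for rate in (0.03, 0.10):
    payment = monthly_payment(50_000, rate, 5)
    interest = payment * 60 - 50_000
    print(f"{rate:.0%}: ${payment:,.0f}/month, ${interest:,.0f} in total interest")

# 3%:  ~$898/month, ~$3,900 in interest
# 10%: ~$1,062/month, ~$13,700 in interest
# Same loan, same skills; one founder pays roughly $165 more every month.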

Now, is the bank even to blame for making that reasonable business decision? After all, some people are worse credit risks than others. Should the bank disregard a higher profit margin by being realistic about the higher barriers that minorities and women face? Doesn’t it have a responsibility to its shareholders to look at all the factors?

That’s seductively reasonable. To see this at scale, look at America’s shameful history of housing discrimination. Black families were systematically locked out of mortgages and home financing, made to pay extremely high rates on arrangements that never built equity. At the same time, their white counterparts could buy houses, pay them off, and pass that wealth to their kids. Repeat over generations. Today, about a third of the wealth gap between families (white families with over $100,000 in assets, minority families with almost nothing) comes from the difference in home ownership.

When we evaluate risk based on factors that give us race, or class, we handicap those who have been handicapped generation after generation. We take the crimes of the past and ensure they are enforced forever.

There are things we must do.

First, know that your data will reflect the results of the discriminatory, prejudiced world that produced them. As much as possible, remove or adjust factors that reflect a history of discrimination. Don’t train on prejudice.

Second, know that you must test the application of the model: if you apply your model, are you effectively discriminating against minorities and women? If so, discard the model.

Third, recognize that a neutral, prejudice-free model might seem to test worse against past data than it will in the future, as you do things like make capital cheaper to those who have suffered in the past. Be willing to try and bet on a rosier future.
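For the second point above, one common, crude check is the “four-fifths” rule of thumb borrowed from US employment law: compare the model’s approval rates across groups and flag anything under roughly 0.8. A hedged sketch, assuming you can tag each decision with a group label; the threshold and function names are mine, not anything from this post.

from collections import defaultdict

def approval_rates(decisions):
    """decisions: iterable of (group, approved) pairs, e.g. ("B", True)."""
    approved, total = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        total[group] += 1
        approved[group] += int(ok)
    return {group: approved[group] / total[group] for group in total}

def disparate_impact(decisions, protected_group, reference_group):
    rates = approval_rates(decisions)
    ratio = rates[protected_group] / rates[reference_group]
    # Under ~0.8, the model is effectively discriminating: discard or fix it.
    return ratio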

Citations on wealth disparity:

http://www.demos.org/blog/9/23/14/white-high-school-dropouts-have-more-wealth-black-and-hispanic-college-graduates

http://www.demos.org/publication/racial-wealth-gap-why-policy-matters

I ran a Problem Roadmap meeting and it was pretty great

I’ve taken on Product Management for a set of internal tools, and found myself lost in 700-some open tickets (including meta-tickets and sub-tickets and all that goodness). Product’s a relatively new discipline at the company, the tools team is saddled with technical debt and severely resource-constrained, and my early discussions with internal customers ran strong with discontent.

As a fan of Melissa Perri generally and “Rethinking the Product Roadmap” in particular, I wanted to see if a Problem Roadmap meeting would help.

I hoped a problem roadmap would give us all agreed-on, prioritized problems we could evangelize and pursue, going from being ticket-or-project focused (700 tickets!) to outcome-focused, and start reducing the iteration time from months to weeks and soon, days. Then I’d be able to start culling that backlog like crazy and lining up ideas and bugs against outcomes we were pursuing, and we’d all have clear success metrics we could look to.

I invited members of the development team and a cross-section of interested people in the support organization for two hours. We ended up with ~12 people.

To start, I presented the goals for the company that related to the discussion: where did we need to get to with customer satisfaction overall, and our goals specific to our customer support organization.

I introduced what we were trying to do in the meeting, along with an example problem and a metric that could track it. On the giant whiteboards, I drew two columns: the problem, and the metric to measure it.

Then I asked “What are the problems we’re facing getting to our goals?”

Early on, our conversations were specific: “Bug X is hurting us,” which in turn led to “Oh, we’re working on that” (which I was guilty of). We’d come up with metrics to measure those and move on. As we filled each whiteboard up, I’d move to the next board (or take pictures and erase all three).

We quickly moved to larger issues, and the discussions got into new, interesting problems I knew we weren’t already discussing. Which led to eager participants jumping to “how we could fix that.” This was challenging: when do you bring that back, and when do you let it run?

I’d explain (or reiterate) that once we’d defined problems and metrics, we’d vote and then pursue solutions. But some of the ideas were so good, it was hard to rein them in.

With more problems, we got better at defining the metrics we’d use, and it led to a focus I hadn’t seen in other meetings trying to address this. In some cases, needing metrics meant reconsidering what we thought the problem meant, sometimes discovering there was more than one problem.

New, more specific descriptions often illuminated issues there’d long been angst around but no clarity about, and the metrics gave us a way to target them. For example, a problem that a tool didn’t work right resulted in us defining three issues: workflows, tool design, and then the technology, all with metrics. That clarity alone would have made this worth doing.

Requiring metrics forced the uncomfortable discovery that we didn’t have useful measurement against our goals, which alone would have been worth holding the meeting to find out.

Towards the end, we’d gotten to amazing discussions I hadn’t yet seen: new problems sitting just under the company and organizational goals, and, in considering those, a problem around whether the organization was even structured to pursue our larger goals.

I’ll offer two examples of the kinds of problems we came up with early on, and then later as the conversation opened up:

Problem: Portland coffee is 10% worse than Seattle’s
Metric: Survey of employee satisfaction; dev team velocity; chats answered/hour

Problem: We can’t see if we’ve met our goals if we can’t measure them
Metric: Yes/no: are there metrics in place that measure x/y/z?

Yup. 90 minutes from Bug A (Bug A, measure Error A) to sweeping, actionable metrics (Organizational issue B, employee satisfaction, workflow measurement, other good stuff).

Then came the voting. I re-wrote the list from the photos to save space, and then gave everyone five votes, multi-votes okay. Here’s what happened:

Huge existential thing we’d never talked about: 9 votes
Large systemic issue with banky thing A: 5 votes
Large systemic issue with banky thing B: 4 votes

… followed by a long tail of 2s and 1s.

We’d never talked about the top item before! Anywhere! It wasn’t on a roadmap! It wasn’t in any of the 700 tickets! Brand new! I’m using exclamation points!

Both the problems and metrics we came up with around the 2nd-and-3rd place priorities clarified huge problem clouds (dozens of tickets filed against something with different solutions or issues, all without metrics or an overall goal).

That’s gold. I’m so happy.

I’d recommend this approach to anyone doing Product (or Program) Management looking for a way to re-focus conversation on outcomes. I’ll report back on how evangelizing the findings goes. The discussions it inspires around the problems, and how to measure them, made it worthwhile.

I can see where it might be less valuable if you’re tightly bound to prescribed work… but also, I can see where it might help you break out from that.


Now, some random miscellany.

Logistical challenges running the meeting:

  • It was hard for me as person-at-the-whiteboard to keep up with discussions as they picked up pace, especially when Problem A would come up, inspiring someone to shout Problem B, sparking discussion, and Problem A would languish
  • The layout of the conference room. I wrote on the wall-of-whiteboards and everyone else faced me from the other side. I’d like to find a better way to do that, but all conference rooms are going to have some version of this problem

Questions I’m considering for next time:

  • Do I do this again with different people from the same teams? When?
  • How to better communicate what the next steps will be, and does that improve focus?
  • Is there a better way to introduce the concept of the meeting?
  • Would a note-taker help?
  • How would a meeting like this incorporate remote employees?
  • What’s the best way to manage voting on a list like that? Are the differences between voting methods even meaningful?

Simpler Days, #2

“I love talking to new users, they’re so full of wonder.” — Simple person

After two days at Simple, some random thoughts:

They’re unbelievably dedicated to customer service. Part of why I signed on was that I knew this, but I keep finding it’s greater than I thought.

Their customer service people, who are here, in Portland, have no scripts.

No scripts.

They’re free to listen to and help people however they can.

This is so great.

Then, when a co-worker was walking me through some parts of the application, he’d show me something and then some tiny edge case where it was so clear that someone had sweated the details. I said “that is so cool” repeatedly as he demonstrated things.

They’ve managed this under nearly unbelievable constraints, being in the banking industry. And I spent five years at the start of my tech career in telecom. That the product is as surpassingly good as it is is amazing.

I filed my first two bugs against the site today, which should surprise no one who knows me. My first bug was yesterday, against an internal tool.

Mind-mapping software for Mac OS X: mini-reviews

(Written because all the Google Search results I could find were spam)

Desktop only, b/c I’m specifically looking at things you can use while not internet-connected.

How I evaluated: I mapped out my thinking on some simple project ideas, which meant a lot of entering new text, and then I tried taking notes on a presentation, which added more navigation and on-the-fly reorganization.

Mind-mapping tools I’d recommend

MindNode Pro

$20, free trial available

Pretty sweet: easy to use, with a pretty, simple look to it, and surprising depth once you start to poke around in what you can attach and do with stuff. I dig it.

The companion iOS apps look pretty good too, though I’m satisfied with iThoughts there.

I also don’t get why it’s MindNode Pro: wasn’t putting “Pro” on your app a thing you did when there were free and paid versions, like that phase where we named the paid versions “HD?”

XMind

Free, Pro version available

I liked this a lot, but it seemed to drain battery like crazy. The Plus and Pro versions are targeted towards businesses (Gantt view, only $99 more for a limited time), with the only thing you might actually need being some of the export tools. But you’re probably fine with the free version.

Mind-mapping tools I wouldn’t recommend

Scapple

$15, free trial available

I love Scrivener, and I love how Literature & Latte runs their shop — my experiences with them over the years have been uniformly good.

The big difference between this and the others is that Scapple defaults to a map without hierarchy: you’re not starting with a central topic and building out. Everything’s an island, and then you link them up, toss a picture in, whatever. For that, it’s great, and the ease of use is good.

My problem is that for the stuff I most frequently use mind mapping for, I just could not seem to make it work fast enough: where I’m cruising along hitting tab/enter and typing as I go, Scapple never allowed me to get into that flow. I really wanted to like it more, and might return to it for doing writing brainstorming.

Mind Manager

$350, plus subscription stuff

Targeted at enterprises that can buy and negotiate licensing fees, I guess. I’ve played around with the free version, and it seemed great, if over-featured for personal use. But I don’t need another project management collaboration tool set, and I don’t have $350.

Freemind/Freeplane

Free. Built in Java. It looks like dated open-source software for Windows 3.11, but seems to work okay. Requires Java. Java kills batteries dead.

Bringing a tank to a water balloon fight: huge apps that can mind map

You can of course take any sufficiently advanced graphing program and use it, or a plug-in, or whatever. My experience is that they’re too heavy, and actually way harder to use than XMind or MindNode.

Of them, OmniGraffle seemed the least-horrible option for mind-mapping, and is also a pretty great diagramming application in general. It is, however, $99 for the standard version and $199 for the business-y version with Visio support and fancier export options. On the plus side, OmniGroup are a great bunch of people, with amazing customer support.

Towards better agile status reports

“It is important to communicate to stakeholders that early calculations using velocity will be particularly suspect. For example, after a team has established some historical data, it will be useful to say things like, “This team has an average velocity of 20, with a likely range of 15 to 25.” These numbers can then be compared to a total estimate of project size to arrive at a likely duration range for a project. A project comprising a total of 150 story points, for example, may be thought of as taking from 6 to 10 sprints if velocity historically ranges from 15 to 25.”

— Mike Cohn, Succeeding with Agile, 2010
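Spelled out, Cohn’s arithmetic is just this (a quick sketch; the numbers are the ones from the quote):

import math

def delivery_window(story_points, velocity_low, velocity_high, sprint_weeks=2):
    """Best-case and worst-case duration, in weeks, from a velocity range."""
    worst = math.ceil(story_points / velocity_low) * sprint_weeks
    best = math.ceil(story_points / velocity_high) * sprint_weeks
    return best, worst

print(delivery_window(150, 15, 25))  # (12, 20): i.e. 6 to 10 two-week sprints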

This requires that you have the project’s scope and cost in story points clear at that point, which you won’t have. Story point estimation takes time to achieve any useful consistency, and especially to do the kind of learning (“last time we took something like this on, it was an 8, not a 3”) that gets you to numbers you might want to rely on.

It depends on scope being set — and how often is that true, particularly on agile projects where you’re able to show users the product at the end of each sprint and make adjustments?

More importantly, say these teams are relying on the release date of a project:

  • customer support, which needs to update all the help documentation and set up training with the people who pay you money
  • marketing, which needs to create materials based on final screenshots and capabilities
  • other teams, which will use the delivered work to create new projects

If you deliver early, that’s great: they have more time to rehearse and edit, and more control over whether they want to release early.

If you’re late, they need to know as soon as possible if they need to start cutting schedule or scope, or spend more to keep those constant and meet a new deadline.

Then take Cohn’s example. If my company is lining everything up behind a date, and my status email is

“We have 150 story points remaining, and the team’s current velocity is 20 per two-week sprint, with a range of 15-25, so we should deliver between 20 weeks from now and 12 weeks, and probably around 16 weeks, which is on time.”

People would rightly stand up from their desks and hurl the nearest team-building trophy at me.

At the same time, putting more information about the probabilities and how you calculated them isn’t going to help. You can’t talk about the power law or the Hurst Exponent here. The question people want answered is: do I need to take action?

You’re going to end up reconciling this to some organizational standard, like:

  • Green: On target
  • Yellow: Risks but on target
  • Red: Will miss date unless changes are made

Which actually means for PMs:

  • Green: I’m not paying attention
  • Yellow: I’m on top of things
  • Red: Please have your VP come by my desk immediately and yell at me

And for your customers, eventually means:

  • Green: the PM isn’t paying attention so I won’t either
  • Yellow: something meh whatever
  • Red: aieeeeeeeeeeee

Then what do you say in your one-sentence “summary” at the top of your status report, the one thing people will read if they open it at all?

My best results came from straight team votes at the end of the sprint: ask “Do you feel confident we’ll hit our date?”

80% or more thumbs up? Green.

50%? Yellow.

Under 50%? Red.

This requires that the team trusts they can give their honest opinion and not be pressured. If that’s not the case, do the rest of the PM world a favor and leave our ranks.

Ask, get the number, and nod. Don’t argue, don’t cheer, just nod, and use that. Then in your status report, you write

“We’re green.” If you want, offer “(87% of team members believe we will meet our target date)”
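If you want it mechanical, the whole scheme fits in a few lines (a sketch; the thresholds are the ones above, and the function name is mine):

def sprint_status(thumbs_up, team_size):
    """Map the end-of-sprint confidence vote onto the usual traffic light."""
    share = thumbs_up / team_size
    color = "green" if share >= 0.8 else "yellow" if share >= 0.5 else "red"
    return (f"We're {color}. ({share:.0%} of team members believe "
            f"we will meet our target date)")

print(sprint_status(13, 15))  # We're green. (87% of team members believe ...)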

Now, you do want to express the uncertainty, but in a way that people can use. Think of yourself as a weather forecaster. Do you tell people to bring umbrellas? Put on sunscreen?

Hurricane forecasting has a useful graphic for this:

Hurricane forecast example

Where on the calendar does your project land, and with what certainty? Assume that you ship every week on Friday. Why not something like this:

Status forecasting

Distance from the calendar is how far out you are, and the shading indicates likelihood. Mark the release deadline with red (or a target, or…).

Daniel Worthington offered this:

Daniel Worthington’s status forecast graphic

Which is even better.

So two questions for you — 1) How do you effectively measure uncertainty on a project’s delivery date? 2) How do you convey that so it’s simple and easy to act on?

The Side: Everything fails

In which I whine about tools.

I’m sure that for developers who work with Ruby, the maze of tools, installs, and dependencies is like water to a fish. Except, from what I hear at work, the fishes complain a lot about it too. For me, though, it’s so fucked up I want to throw something at it.

Let’s say I want to install Ruby and Rails on my Mac.

% brew install ruby
(vwhoosh! works)
% gem install rails
(bombs on crazy UnknownHost error)
% gem install rails
(vwhoosh! installs a ton of stuff)
% rails --version
Rails is not currently installed on this system. To get the latest version, simply type:

$ sudo gem install rails

You can then rerun your “rails” command.

(cursing)

Okay, so let’s figure out how to do that. Stack Overflow answer says this is a command line tools issue. Do I have the command line tools? I’ll check — Crap, it returns a “Can’t install software because it is not currently available in the Software Update server” error. That makes no goddamn sense.

Back to Stack Overflow… ah! That’s a bug. I guess I’m actually fine. BLARGH.

Except Rails installs and then tells me it didn’t. Still haven’t figured that one out.

And hey — as in Python, there’s a huge set of version differences!

Q: Hey, I’m a normal dude trying to get this thing up, which version of Ruby do I —
A: (shrug)
Q: Thanks for that.

To deal with all these dependencies, I can use rvm which… I don’t even know! Woo!

This seriously makes me want to go back to wrestling with Python. Or set fire to my computer.

Review skimming escalation

Dungeon Keeper’s catching flak for its terribleness, and today’s story is their new implementation of review skimming.

Check out their writeup.

Here’s their picture of where this goes beyond anything I’d seen before:

Where, say, OkCupid asks “love us?” and offers “leave feedback” to try to skim off only the people who’ll leave good reviews, Dungeon Keeper explicitly asks you what your star rating would be, as if you’re rating it right then (in a window labelled “Rate Your Experience”)(!).

The logical next step would be to present a fake Android rating screen that discards 1-4 star reviews and then submits 5-star reviews on the user’s behalf.

How skimming reviews makes apps twice as slimy

I read the fine “Choices and Consequences” (which responds to John Gruber on one-starring apps that obnoxiously beg for feedback).

Daniel:

At the other end, you find blatant harassment and tricky language meant to confuse users into capitulating. Modal alert panels might interrupt a user’s workflow at inopportune times, demanding that they either leave a review now or be reminded later to do so.

Part of the frustration is much deeper than that, and goes to a deeply scummy tactic Apple’s let proliferate. I’m going to call this skimming reviews: you pop up a review request, but in a way that encourages only the people who are going to leave good reviews to actually leave one. OkCupid’s app is the clearest example of this:

Photo of OkCupid’s in-app rating prompt

Check out the levels of ridiculousness:

  1. They’re first trying to make users self-select. Do you love the app? Well, then…
  2. The call to review is not “review us,” it’s “Rate us highly.” What if you don’t want to rate them highly? Presumably you look for another option, which…
  3. “Send feedback” keeps them out of the review ecosystem

Right? As much as the cumulative annoyance of “rate us” requests might grind us down, this kind of thing is way more toxic and makes the whole experience seem dirty to me, because it’s not even a sincere request for feedback, but an attempt to turn users into positive reviews.

I don’t understand why this passes App Store review.

Mavericks power savings on a macro level

I did some napkin math on the overall effect of Mavericks, the new version of OS X, and wanted to share. Standard caveats apply: as you’ll see, I’m doing a lot of hand-waving.

Starting assumptions:

  • Adoption of Mavericks will match Mountain Lion’s*, at ~30 million users
  • 75% of Mac sales are laptops
  • I’m using the Mac Mini for desktop power consumption to concentrate on processing/disk tasks
  • I’m also using it for the under-load draw for laptops

So! Thirty million Macs, idling, draw 195 megawatts. Under load, they draw 2,380 MW.

Assume Mavericks gets you a 15% savings in power both at idle and under load.*** If all Macs are at idle, you’re saving 30MW. If all the Macs are running at the same time, you’re saving 357MW.
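Here’s that napkin math spelled out. The per-Mac draws are just what the aggregate figures imply (195 MW idle and 2,380 MW loaded across 30 million Macs); they’re not official Apple numbers.

macs = 30_000_000
idle_watts_per_mac = 195e6 / macs      # ~6.5 W implied
load_watts_per_mac = 2_380e6 / macs    # ~79 W implied
savings = 0.15                         # my guessed Mavericks improvement

idle_savings_mw = macs * idle_watts_per_mac * savings / 1e6
load_savings_mw = macs * load_watts_per_mac * savings / 1e6
print(f"all idle:   {idle_savings_mw:.0f} MW saved")   # ~29 MW, rounded to 30 above
print(f"all loaded: {load_savings_mw:.0f} MW saved")   # ~357 MW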

How much is that?

  • The smallest nuclear reactor in the US generates 438 megawatts**
  • An average coal plant generates about 500 megawatts

Now, you don’t of course get all the savings at once, and I’m totally omitting how Mavericks affects machines staying in minimal power draw states longer instead of waking, working, sleeping, waking…

I welcome thoughts on how to improve my napkin math and get to a better number.

Even as a rough guess, that’s about 70% of a coal plant (and more than 80% of that smallest nuclear reactor)… pretty awesome.

* I assume this will be low, as new Macs will come with Mavericks installed and replace ones that don’t, and also because with power-saving features, there’s a huge incentive for laptop users to upgrade if they are able but haven’t. Also, it’s free.
** Nuke cite: http://www.eia.gov/tools/faqs/faq.cfm?id=104&t=3
*** I’m guessing, based on anecdotal reports of battery life improvements, mostly during betas

My AT&T Customer Service Nightmare

First, I worked for AT&T Wireless for five years, so I know how, at the ground level, you can make a mistake or be limited in your ability to act by policy.

And second, I entirely acknowledge that in normal circumstances, I’d have looked over my bill at some point before I did.

So!

At the end of November, I was getting divorced and called AT&T to de-activate the phone my wife used. The rep said she’d de-activated it, and it would stop working at the end of that billing cycle (December 22nd).

What actually happened: the rep did not cancel that line, and somehow after that call I had *two* accounts: one for my phone, and a second for hers (at $120/mo + tax = $140/month). That there were two plans is actually kind of important, because it meant that checking on my phone showed everything was cool, with the old phone showing up on its own separate tab; the only way I’d have figured it out was to notice “hey, the total billed amount isn’t what my phone plan costs.”

My wife calls them a week later to confirm with them that they’re canceling the line. They say “yup!”

Then because I was getting divorced, I go deal with the other stuff in my life for months, totally ignoring all my auto-billing like rent, auto insurance, and phone. I finally notice at the start of this month what’s screwed up. I call in, I get a friendly rep, he’s apologetic, puts in a case (CM20130805-70519441), cancels the phone, and says I should expect a credit of about $971.91 (there’s some tax uncertainty).

What actually happened: he submitted the case, but the second phone/plan was not cancelled.

At the end of this billing cycle, I check the bill, see it’s not cancelled, see I’ve only received a small credit.

The math: AT&T owes me ~$979.91, plus $140 for another month they billed me after not canceling again (and let’s assume it’s all set up now so that’s the end of that).

They’d refunded $423.83 of the $979.91. No one ever contacted me to talk about what happened.

I call back in. I talk to a guy who seems more stymied than anything.

He reports the “back room team” took the case, looked at the account and said that because there was usage on it for some of those months, they weren’t going to refund those months.

Now, when you look at the usage, it’s clearly accidental. In one month, they forwarded a phone call to her new phone. For the April-May billing cycle, the statement says 4MB of data used, but when you look at the detail, it’s 0KB. For May-June, someone sent her a text message, and there are another couple of random dialed-in usages.

None of which should have happened, if they’d cancelled the account. I just don’t understand this part.

I went around and around with that rep: he repeated the reason why they’d denied my refund, I said “look, I understand that’s not you, but this is crazy,” and he’d say there was nothing to be done about it. I asked about appealing; he said I couldn’t, there was no way to escalate, and if I filed another case about those three months it would just be denied. I asked him what I should do about this and he said “there’s nothing you can do.” Eventually, after about an hour, he told me he’d set me up with a supervisor callback.

His supervisor calls back! He wants to know about the call, if there’s anything (rep name) was unable to help me with. I say “no” and say it’s not really his fault, he’s clearly constrained, so I’m glad he escalated it.

The supervisor’s a little baffled. It seems like he’s calling because he saw his rep was on that call for so long, not because the rep put me in for a callback.

I don’t know if that’s a mixup or the rep told me he’d have someone call me back and then just dumped me, but if it’s the latter, that’s reaaaally frustrating.

So he walks through the whole thing with me, does a lot of math, credits me for what I’m being billed this month, puts a thing in for next month (since we’re now on another billing cycle and I still have both plans attached).

At one point, he talks to his supervisor, and comes back with a partial solution. It sounds like the supervisor’s unwilling to go back farther than three months. Through some customer service foo, he refunds ten things in varying amounts that add up to $369.50 and after talking to me for a minute about the gap, credits me $150.

I’m entirely uncertain of whether that happened or not, because when I look at my account, it doesn’t show the credits he talked about. But it’s not super-easy to figure out credits and adjustments on the online site.

Here’s where I am, then.

Best case: the zombie plan is cancelled, next month’s bill shows up with a credit for the zombie plan, and the $200 credit towards this month’s bill and the $150 credit (neither of which currently shows up on my AT&T account on the site) actually appear.

Math as it shows on my bill right now:
~$971.91 owed
+ $140 charged after that
- $423.83 credited by the secret back room team
- $369.50 by supervisor ninja service against past bills
= I’m still out ~$319 because AT&T didn’t cancel the account back at the end of November

And next month, assuming the supervisor’s right, I get billed again for the zombie plan, but also credited for it.

Now, if you assume that the credit the supervisor promised yesterday is happening and just isn’t showing yet, I’m actually to the good. But you’ll forgive me if, at this point, when someone tells me they’re crediting $350 to this month’s bill and that credit does not actually show up on my bill, I’m pretty skeptical that
a) there actually was a credit, or
b) there will be a credit

Much less that zombie phone/plan is actually cancelled, or that I’m not going to be having this exact same conversation next month.

And at this point, it’s been five calls and hours upon hours on the phone being frustrated, with guarantees that something will be done turning out, almost always, to be entirely untrue.

The most galling thing is the back-room team’s dismissal based on trivial usage. Who gets a ticket like that, where it’s clearly entirely their company’s fault, and looks for the slightest pretense to deny it? I don’t even understand why, given that their error is what allowed that trivial usage, it’s even a pretense: is there a huge fraud problem with people who cancel, wait to see if it’s not cancelled, and then send one text message?

But yeah. From looking at my bill, AT&T’s got me for $300+.

Sigh.