
Lessons Learned Building Alexa, with Product Manager Polly Allen

Ep 2

Dec 07, 2022 • 43 min
ABOUT THIS EPISODE
Polly Allen, former Principal Product Manager on Amazon Alexa, shares what she’s learned about the challenging world of building products for voice and AI, as well as the roles of data-driven decision-making and computing science education in product leadership.
TRANSCRIPT

Allen: Welcome to It Shipped That Way, where we talk to product leaders about the lessons they’ve learned helping build great products and teams. I’m Allen Pike. Joining us today is Polly Allen, who was most recently a Principal Product Manager at Amazon and has a wealth of experience as a software engineer, engineering manager, product manager and director, and product and AI coach. Welcome, Polly.

Polly: Thanks so much Allen. Great to be here.

Allen: Glad to have you on. We’ve had great conversations before, just talking about building product and building teams and building careers, and all that kind of stuff, and when we were starting the show you were one of the very first names that came to mind. I was like, “Oh, got to talk to Polly about this stuff,” so I’m glad to have you on.

Polly: Thanks so much. I’m very flattered that I made the list.

Allen: Well, one way that I find fun, and I think makes a lot of sense to ramp people in, is to let you give a little recap. I gave that resume-in-a-sentence of positions you’ve had, but how would you summarize your journey, from what I’d characterize as starting as a software developer, which is how I think a lot of folks start in product, to where you are now, as somebody who has had this broad and deep experience working on product? How do you recap that story for folks?

Polly: I did start off my professional life as a software developer. I always caveat my experience by saying that set of skills is not necessary, nor the foundation, for being a product manager, because I think that’s a big misconception.

Allen: Let’s come back to that, because I think that’s something a lot of people are curious about, that topic and that trade off. We’ll get back to that.

Polly: A hundred percent. I know, I often say I know exactly how much of my computer science degree I do not use in my day-to-day. Hopefully that helps me help other people.

Allen: That percentage just gets higher for me every week.

Polly: Exactly. I really did enjoy software development though. I still miss it on the daily, and I really liked the deep analytical problem-solving work, but I also really liked the social work of figuring out, how’s my piece going to plug into this other piece? How does all the system design fit together? So that always led me, especially as an extrovert in software engineering, to talking to others and figuring out the plan, which led to a lot of people saying, “Hey, how about you help out on the program management side, on the product management side?” And after having built software for around 10 years, it got to a point where I thought I’d really like to be more involved with figuring out what we should build and why, and getting closer to the customer. So, I transitioned from software engineer and software engineering manager over to the product side, and I took a position with a friend’s startup that was in the analytics space. That was super exciting, and you get to be scrappy and wear a lot of hats. That’s something that’s common to a lot of product-management roles as well, so I really like diversity in my day.

Allen: That’s startup life and product-management life, so you were doubly lined up there.

Polly: Yeah, exactly. It’s where I first learned to do things like sales enablement and marketing enablement and a lot of those other things that you don’t have to do on the software side, so I got thrown in the deep end. We did get acquired by a larger company in 2017, and I then moved on to come back to Amazon. I’d been a software development manager there before, but I came back and joined the Alexa AI group. I’d been really fascinated with AI, and particularly NLP, after having done a bunch of projects in the search space. I’d always thought that overlap of tech and linguistics was super interesting, so this was really a dream job as a principal product manager: helping Alexa understand what people are saying, but also helping to make her smarter in all of her responses.

Allen: Yeah. I’ve really enjoyed talking to people about this. Obviously, AI is our current generation’s next big thing slash current big thing, in terms of exciting new technical changes that have been happening over the last five or 10 years. But a lot of people have a bit of a tendency to play up the role of AI, like, “Oh yeah, it’s a spreadsheet with AI.” What exactly does the AI do? It’s like, “Well, there’s AI in there…” We told our investors there’s AI, but no one would dispute that…

Polly: Exactly.

Allen: … When you’re doing natural language processing and you’re trying to get to the point where you feel like you’re having a natural conversation with an assistant that understands what you’re saying and can take thoughtful actions on it, it’s like, that’s AI. In my opinion, that’s a very, very clear application.

Polly: Exactly. It’s very different. I’ve seen a lot of people just say anything that’s automating something is AI. You’re like, “no, that’s just automation.”

Allen: Not to poke holes in other people’s glass houses, or whatever. But what was that like, going in as a product manager? I assume that, like most product managers, you came in with a background in software that you look at, buttons and lists and stuff like that, into a space where you’re product managing something you talk to.

Polly: Yeah. Moving to voice interfaces, from visual UX to voice-forward or multimodal UX. That’s kind of the terminology we use, because-

Allen: Yeah. It seems like you’re going from deterministic UI that you’re looking at, to maybe non-deterministic, non UI, which seems… I don’t know, seems hard.

Polly: I feel like there are two questions baked into this question, so I’m going to tease them apart. One of them is the interesting piece of moving to working with voice as an interface, and that was one journey. And the second one is, what’s different when you’re using machine-learned systems, and how you work with those teams and do product management? So I’m going to tackle those one at a time, because I think they’re very different. For voice UX, or voice-forward UX, multimodal, when you’re using both the screen and voice, I was so fascinated to learn about it when I first joined, and a lot of companies, like Amazon, have training specifically for voice UX design, because it’s this new and emerging field that no one has really done before. The biggest problem with voice, you can probably guess, is discoverability: if you just put something in front of people and say, “talk to it,” they’re like, “well, what can I say?” There’s no real easy way to make the features apparent, right? And the second one is, you have to be able to cover so much linguistic flexibility, to be able to handle “ums” and “ahs” and pauses and things like that. So that’s where a lot of statistics are used, to say, “okay, these three sounds in a row have an 80% chance of being this word. This word has an 80% chance of meaning A versus-”

Allen: Yeah, 80% though it’s like, “oh man.”

Polly: I know. Exactly right. So depending on the word, there could be more or less certainty, depending on everything from the accent and the background noise and the context it’s used in, mumbling, everything. Does the household have more than one language? People want to switch languages when they’re talking to a voice interface. So these models, and the multiple systems that go into just the understanding piece, are incredibly complex. But then there’s even being sure that the way you write Alexa’s responses is something that people are able to pick up just listening to audio. You read very differently than you hear, and you can’t backtrack in audio; you can’t go back and reread the first sentence if you weren’t listening. So the words have to be simpler, the sentences have to be shorter. It’s a different style of writing when you’re saying, “Alexa, say the following things,” than it would be if you were, say, putting it in the UX on screen.
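
A minimal sketch of that idea, with entirely hypothetical words, confidence scores, and threshold (this is not Alexa’s actual pipeline): act on a word hypothesis only if the recognizer’s confidence clears a bar, and otherwise ask the user to repeat themselves.

```python
# Hypothetical sketch: picking a word hypothesis from recognizer confidence scores.
# The threshold below is an assumption for illustration, not a real product value.

CONFIDENCE_BAR = 0.6

def pick_hypothesis(hypotheses):
    """hypotheses: list of (word, confidence) pairs from a speech recognizer."""
    word, confidence = max(hypotheses, key=lambda pair: pair[1])
    if confidence < CONFIDENCE_BAR:
        return None  # too uncertain: fall back to a clarifying question
    return word

print(pick_hypothesis([("potato", 0.45), ("Petito", 0.52)]))    # None -> ask again
print(pick_hypothesis([("weather", 0.92), ("whether", 0.05)]))  # "weather"
```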

Allen: Yeah, well, I feel that when working on podcast stuff: I’ll write out a little blurb I’m going to say, and then when I say it I’m like, “that doesn’t feel the same as when I read it.” When I read it, it sounded good.

Polly: Exactly. You have to hear yourself say it out loud. So it was interesting that this is a whole new field of design where people are learning: don’t use the following words, they’re too long or too easily misunderstood, or too easily conflated with words that sound pretty similar but have a very different meaning, because a lot of people speak back the same things they’ve heard. It’s all a really fascinating field.

Allen: Interesting. Yeah. So even if they understand what Alexa said and then if they repeat that word back and Alexa can’t understand them, then that’s a problem too.

Polly: Yeah, exactly. And then you have new words coming into the lexicon. When the whole Gabby Petito story came out in the news, her name was being constantly interpreted as “potato.”

Allen: Oh no.

Polly: So it’s a really complex problem, but one I found endlessly fascinating. So that’s the voice UX problem, which is one piece in itself.

Allen: I’ve been trying to make a habit with this show of clearly enunciating the word “shipped” when we say It Shipped That Way. But I’m curious, once it’s out in the world and people are trying to ask voice assistants to play the show, whether or not we end up with any verbal typos. Fingers crossed on that.

Polly: Exactly. Best of luck. I have a hard time getting the right podcasts sometimes on my voice assistant, but it could be an issue. There’s so many of them so disambiguating could be a problem.

Allen: So that’s the voice piece, which is fascinating in itself and already an interesting challenge. And then you’re going beyond the deterministic, “okay, we definitely are going to make Alexa say this thing,” to how exactly this thing is going to get interpreted. There are these uncertainties where you’re saying, “oh, this has an 80% chance of being this sound or this word.”

Polly: Yeah, exactly. So I think one of the largest shifts for me was that instead of specifying the solution, like you would in a deterministic world where you say, “here’s the solution, let’s make an algorithm that hits it and we’re aiming for 100%,” what you give the team is an optimization function. You’re saying, “here’s what we’re optimizing for: we’re optimizing for users continuing to say yes, tell me more. Or we’re optimizing for them continuing to listen, or for overall play time.” So you’re telling them the metrics, and this is sort of the language between you and the team. And then you’re saying, “okay, here are our hypotheses about what we think would help this metric, let’s run a bunch of experiments.” And the other piece is that with it not being deterministic, you have to understand that it’s only going to hit those metrics X percent of the time. How good does it need to be for it to be something you’re going to launch? That’s where you’re looking at comparable features, you’re looking at user expectations, and at what point do they just give up on a device if it’s terrible? Too many times, too many times in a row, what does that mean for repeat usage? If you try something and it doesn’t work, your tendency is you’re probably going to try it again. But once you’ve tried it twice or three times, eventually you’re like, “this just doesn’t work and I’m never coming back.” Right? So you need to understand user patterns and how many times you are forgiven, especially for new features. When you’re first learning user behavior, you often are saying, “we need to get this out and start collecting data before it’s perfect.” So it’s a very different kind of process; you’re not waiting for it to be perfect. Not that you do in agile software development either, but you really have to look at the bar differently: not “oh, this experience looks exactly like I prescribed,” but “oh, it’s meeting these metrics that I told the team we needed to hit.”
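
As a rough illustration of that shift, here is a minimal sketch with a hypothetical metric, data, and launch bar (not Amazon’s actual process): the feature is judged by whether an experiment clears a metric threshold, not by whether it matches an exact prescribed behavior.

```python
# Hypothetical sketch: ship/no-ship based on an optimization metric and a launch bar.

LAUNCH_BAR = 0.80  # assumed bar: 80% of sessions keep listening

def continuation_rate(sessions):
    """sessions: list of dicts like {"kept_listening": bool} from an experiment."""
    if not sessions:
        return 0.0
    return sum(s["kept_listening"] for s in sessions) / len(sessions)

experiment = [{"kept_listening": True}] * 82 + [{"kept_listening": False}] * 18
rate = continuation_rate(experiment)
print(f"continuation rate: {rate:.0%}, launch: {rate >= LAUNCH_BAR}")  # 82%, True
```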

Allen: Yeah. Well, in some ways that’s been the progression of product management over the last 10 years, beyond voice and AI. It used to be that someone, and if you’re unlucky it’s Steve Jobs sitting at the top of the org chart, deems, “this is pixel perfect, every single experience has been evaluated, ship it and put it on CDs.” Now we’re shipping dynamic software, even in the traditional software world, where we have a whole bunch of feature flags and A/B tests and metrics and subgroups and stuff like that. So it’s interesting how much more metric-driven software development has become, and how much less it’s a feel of, “no, this is good now. Okay, let’s ship this experience to our users.”

Polly: Exactly. And this is where, one, I think there’s a gap today in executive education, in executives understanding what they’re getting into when they say, “I want to leverage machine learning in our solution.” A lot of the time they don’t know everything they’re getting into. And often they’ll say, “well, let’s test it out. We know data quality is the big important thing. Do we have enough data? Is it good enough quality? Let’s do a proof of concept and just see if it works.” And that’s often the very first baby step on the journey, but they’re expecting, “oh, we did the machine learning.”

Allen: Yeah.

Polly: “Now we’re done. We get all the magic.”

Allen: Our CEO’s exact voice is recognizable, therefore it worked, ship it.

Polly: Right. Worked on his laptop now, not just mine.

Allen: Or it worked that one time in the boardroom.

Polly: Exactly. Worked in the demo. Obviously executives understand it has to work more than once, but there’s that idea of “hey, we found the solution” when it’s only good 78% of the time, and that isn’t an acceptable user experience. And often it’s not. So how do we use a human-in-the-loop system or something to get it up to the acceptable quality bar until we have enough data to automate all the way, that kind of thing. So it’s often a longer journey than many people envision.

Allen: Yeah, that sounds difficult. You mentioned being data-driven, like, okay, you’re setting a metric and you’re measuring, and obviously this is something we do for the most part across all product management, product leadership, and engineering leadership: trying to make sure we’re measuring the things that matter, thinking about the data, how people are really using the product, what’s working and what’s not. Over the years product management has gotten more and more data-driven, which of course has its strengths and weaknesses, and you touched on some of the strengths, which is that if you have something that works probabilistically, you can then measure, “okay, how do we think it’s working out in the field? What percentage of people are having this experience and coming back, and what percentage aren’t, based on the flags?” But how do you think about the trade-off? Because as we lose more and more of that “okay, real human beings on our team are using this thing and it feels good, and we’re actually engaging with it as a whole product rather than as some metrics in a spreadsheet or on a dashboard,” we lose something. So how do you think about how to detect when the data is maybe wagging the product, so to speak? Or is it just all data all the time? I’m sure that’s not your position, but how do you think about that trade-off?

Polly: Yeah, exactly. I do think it’s really easy, especially when people first get into this. You’re working a lot of the time with people who come from math and stats backgrounds, and everyone’s like, “give me the hard numbers,” that’s the main thing, and, “how was this decision made if it wasn’t made with numbers?” But I think it’s really important to maintain that sense of intuition. I will say there’s a difference if you own one part of a very large system; it’s kind of hard to have full intuition for a system that’s working end to end. So for example, just the piece of understanding what a word means, which word the user intended to use, you can get a good error rate on just that piece. But then overall you’ve got, “okay, did the system fully understand the intent of the user’s sentence and actually give them the response that the user liked?” So one is end to end, one is very small. At the very small piece, it’s often easier to use the metrics and get that intuition of, “okay, this really is 98% of the time we do get the word the user intended, or not.” But that end to end, especially when you have multiple systems that are adaptive, is where you really need, “okay, you’ve got the stats, but let’s dive in and actually see if your intuition is correct on this and if this still makes sense.” So I think it comes up mostly when you’re looking at things holistically. I’ve got a great example from Alexa AI here. One of the metrics that had canonically been used across the board to understand if users got the expected response is called barge-in rate. So really, did they immediately interrupt and say, “No, Alexa.”

Allen: Yeah.

Polly: That’s a pretty obvious signal that you did not get what you wanted. But for some features people were asking for, like topic-based news, just saying, “Hey, what’s the latest on the hockey game? What’s the latest on the Queen’s funeral?”, in a small number of cases, for certain question shapes, we were launching just a radio stream, like, “here’s a live radio stream.” It didn’t actually have anything to do with the topic at hand, but the issue was that users are like, “oh, it’s the news. I asked for the news. This particular story isn’t about the Queen, but maybe it’s just five seconds early?” So they patiently wait, and then eventually go, “ugh.” Or maybe they would even hear something interesting.

Allen: That has nothing to do with what they asked for.

Polly: Exactly. So our normal flags of, “Hey, how are we doing on interrupt rate for this feature? If anything’s got a terrible interrupt rate, let’s go look at it,” wouldn’t get triggered in that case. It takes someone going, “wait, this is super weird. I had this super weird experience the other day on Alexa. Can we look into what’s happening?” That’s where we uncovered, “hey, there’s this whole class of experiences that this metric doesn’t cover appropriately because of user behavior.”
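
Here is a minimal sketch of that blind spot, with a hypothetical log format and alert threshold (not Alexa’s actual telemetry): users who wait patiently through an irrelevant response never register as an immediate interrupt, so a per-feature interrupt-rate alert stays quiet.

```python
# Hypothetical sketch: a per-feature interrupt ("barge-in") rate check, and why it can
# miss the topic-based-news failure described above.

INTERRUPT_ALERT_THRESHOLD = 0.25  # assumed: flag features with >25% immediate interrupts

def interrupt_rate(responses):
    """responses: list of dicts like {"interrupted_after_seconds": float or None}."""
    immediate = sum(
        1 for r in responses
        if r["interrupted_after_seconds"] is not None and r["interrupted_after_seconds"] < 5
    )
    return immediate / len(responses)

# 90 users wait it out, 10 give up a minute in: no immediate interrupts at all.
topic_news = [{"interrupted_after_seconds": None}] * 90 + \
             [{"interrupted_after_seconds": 60.0}] * 10
rate = interrupt_rate(topic_news)
print(f"interrupt rate: {rate:.0%}, alert fires: {rate > INTERRUPT_ALERT_THRESHOLD}")
# interrupt rate: 0%, alert fires: False -> the broken experience stays invisible
```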

Allen: And I think most people who’ve worked on product, at least on a larger team rather than at the zero-to-one proof-of-concept stage, have encountered a metric that people use to make decisions, and then later you realize, “this isn’t fully measuring what we thought.”

Polly: Exactly. Yeah, exactly.

Allen: Or at least not in all cases.

Polly: Precisely. Or the measurement itself was somehow broken; that’s the other piece. We definitely had times, especially when we were working with large language models and generating content on one of our projects, where we had asked annotators to verify not just the fluency of the output, the summary stories we were generating, but how true they are. What’s the factual correctness, and how true are they to the underlying stories? And we were really surprised when, very early in the project, we got back results saying, “yep, they’re all completely true and completely fluent.” And we were like, really? We just started this? I mean, maybe that’s great. And again-

Allen: A little suspicious.

Polly: … The CEO hearing that would be like, “great, ship it.” Sorry, CEOs, I don’t mean to bang on you; that’s what our fictitious idiot CEO, who doesn’t exist in the real world, would say.

Allen: Yeah, the PHP CEO from Twitter.

Polly: Exactly. But once we took a look at this and said, “let’s dig in,” we found there were examples. One of them was a news story about how there’s a new law in Finland that if you see a dog or a puppy or a baby overheating in a car, you must break the window, and you’ll be fined if you don’t help out. Sounds reasonable. Except when we looked into the underlying input, it had nothing to do with babies, puppies, cars, Finland, windows, nothing at all. It was a completely made-up story.

Allen: So the AI hallucinated this news story out of what it knows is plausible, because it has source information about the world. Finland exists, babies getting left in hot cars is a problem, and laws get created. And so it hallucinated this news story.

Polly: Exactly. So if you’ve trained a model on millions of news stories from around the world, it’s like, “wow, this really does sound like a law they would make in Finland.” Hopefully I’m not being prejudiced for or against Finnish people.

Allen: Nothing against Finns. We love Finland.

Polly: It just sounds right, especially when it’s fluent and written in the style of a news story that we’ve grown up listening to. That’s what factual reporting sounds like. It was just incredibly believable most of the time.

Allen: How did your metric end up saying, at some point, “oh, this story checks out”? Was it that a human being was played this, and then the human being was asked, “did the AI successfully make a news story?” and they said, “yep, they made a news story,” without having the context to know whether or not it was a news story that actually existed?

Polly: Right. So the first thing we had done was give the articles as context alongside the news stories, but we found that the annotators weren’t reading them in depth. So we had to create new tools that were actually like, “here are the parts that were taken from the articles, and from where,” and provide a lot more of that kind of almost first step into explainability for the humans, for them to be able to evaluate which part of this… That’s also where we could develop different degrees of factual correctness: a completely made-up story versus they got the score of the hockey game wrong by one, which is the kind of error that happens in the newspaper all the time.
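
For concreteness, here is a minimal sketch of what that kind of annotation setup could look like, with a hypothetical schema and labels (not the actual tooling Polly’s team built): each summary sentence carries the source spans it was drawn from plus a graded correctness label.

```python
# Hypothetical sketch: grounding annotations in source spans, with graded correctness.

from dataclasses import dataclass, field
from enum import Enum
from typing import List, Optional

class Correctness(Enum):
    FULLY_SUPPORTED = "every claim is grounded in the source article"
    MINOR_ERROR = "small slips, e.g. a score off by one"
    FABRICATED = "claims with no basis in the source at all"

@dataclass
class AnnotationTask:
    summary_sentence: str
    source_spans: List[str] = field(default_factory=list)  # passages the sentence drew on
    label: Optional[Correctness] = None

task = AnnotationTask(
    summary_sentence="A new Finnish law fines bystanders who don't break car windows.",
    source_spans=[],  # nothing in the source article supports this claim
)
task.label = Correctness.FABRICATED
print(task.label.name)  # FABRICATED
```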

Allen: Yeah, for sure.

Polly: I mean, not to say that that’s correct, and that’s not the bar, but it was just interesting that-

Allen: It’s slightly less alarming than, “I’m completely manufacturing a law.”

Polly: Yeah, exactly. Precisely. It’s an interesting look at how it’s not just, “oh, does this metric capture everything,” but “is there something broken in how we’re measuring it? Is there a bug in the measurement system itself?”

Allen: Exactly. That haunts me, and maybe this is common for a lot of people in product roles, but over 20 years I’ve seen it so many times. It doesn’t happen super often, but it’s happened enough times that meaningful decisions were getting made based on a measurement that just didn’t measure what we thought it did. Man, it sucks when that happens. We had an e-commerce product with a metric that we were interpreting as an indication of what ratio of users had a good result, let’s say what percentage of people were checking out. But what it ended up being was actually the ratio of checkouts to users. So if one user had 10 checkouts, they would count in this metric as 10 users having checked out. And that’s really different on this product, where we had a small core of really, really passionate people. The way that metric had been phrased in the dashboard made it seem like, “oh, okay, well, we had a 20% checkout rate,” or whatever it was, or 70% or something like that. It was so high that people were like, “wow, we get to tell the board about how great this is. We’ve really cracked this nut.” But what it was, was the ratio of checkouts to people, rather than the percentage of people that had one or more checkouts.
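
A minimal sketch of that difference, with made-up numbers (not the actual product’s data): a small core of heavy users makes checkouts-per-user look dramatically better than the share of users who checked out at all.

```python
# Hypothetical sketch: checkouts divided by users vs. users with at least one checkout.

checkouts_by_user = {"ana": 10, "bo": 0, "cy": 0, "di": 1, "ed": 0}

users = len(checkouts_by_user)
total_checkouts = sum(checkouts_by_user.values())
users_who_checked_out = sum(1 for n in checkouts_by_user.values() if n >= 1)

print(f"checkouts / users:        {total_checkouts / users:.0%}")       # 220%
print(f"users with >=1 checkout:  {users_who_checked_out / users:.0%}") # 40%
```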

Polly: Exactly. Especially the good-news ones, those are the worst. Every time there’s good news that’s almost unbelievably good, I’m like, “okay, now check it 10 times, not just once or twice, before you tell the board.”

Allen: And it’s hard to be disciplined about that. An unfortunate corollary is that if it’s really, really bad news, like, “oh, our product is just completely going to be a failure if this metric is true,” then someone will probably go find the bug. If it’s extremely good news, too good to be true, then also, hopefully, if you’re disciplined, someone will check. But there are a whole bunch of bugs out there in people’s analytics, not to give other people nightmares or pass on my nightmare, that are off by 10% or 20%, some subtle bug like, “oh, in Firefox it double counts,” or something like that.

Polly: Yes, exactly.

Allen: They’re screwing up the metrics a little bit. Not to give you all existential dread.

Polly: Or the worst, too, I find, is when you see something that seems really weird and unusual and you’re like, “I wonder what that’s about,” and you never really get a satisfying answer. You’re just like, “it’s just weird.”

Allen: You’re like, “yeah, maybe some of it is contributed by this, and some of it might be contributed by that. And sometimes the analytics package doesn’t get a chance to send off an event before the next page loads, so we try to compensate for that.” And you’re like, “yeah,” but it just unsettles the whole thing. That goes a little bit back to what you were talking about with explainability. When we build systems, and it’s way worse when you’re talking about AI systems, systems with heuristics, and learning systems, if we can’t explain and trace back where a result came from, then as product leaders it’s extremely challenging to improve it.

Polly: Yeah, exactly. And really diving into, “okay, how was this derived, exactly? Let’s map user action to the actual metric. Would my intuition say that this should be higher than that?” It’s just a really important exercise, I think, when joining any new team, to build that real intuition of how these metrics were derived instead of accepting them blindly.

Allen: When we think about AI and all the new product experiments and startups that have been built over the last five years, especially around different new neural net applications and things like that, what is your response to the people who will make, I guess, a bearish argument about AI and say, “well, yeah, you can make these really impressive demos, but because you don’t have that explainability piece on a lot of these packages, you can create some model and then 90% of the time it produces a good result and 10% of the time it produces an offensive result,” or something like that. And then it’s like, “how do you get to that last 10% or last 1%?” And the answer is kind of, “I don’t know, it’s just a black box.” Or at least that’s the argument some people will make against the general idea that we’re looking at the next 10 years where AI is going to be doing more and more, replacing some of what folks would traditionally be doing in building more bespoke, hand-tuned experiences for people. What are your thoughts on that? From what you’ve seen, are there tools and ways to make it so that your product team can understand the AI, or is it kind of just a struggle?

Polly: So there’s a huge area of research here, and there have been some promising advances in explainability and in understanding the underlying model. There’s just still such a gap. And I think part of it is, again, much like being a product manager: you don’t have to be a developer, and you also don’t have to be a data scientist or a statistician, but we don’t yet have enough people who are able to do that translation. What do I need to understand from what the data scientist is saying, and how is it actually affecting or impacting the product? It takes a lot of collaboration with some pretty communicative data scientists, who understand things in enough depth to explain them, to be able to get to that point. I think overall, the piece that people sometimes miss is that when we hear about AI and we talk about the future, a lot of people go straight to Skynet: we’re not going to have any jobs, it’s going to take over the world, the military will give it all the guns, and we’re all going to die. And of course that’s the dark view of how things might go.

Allen: That’s the worst.

Polly: Exactly, that’s the worst timeline, and then the best timeline is much more optimistic. But even on the best timeline, I don’t think we automate 100% of most people’s jobs. So instead of thinking about replacing jobs, it’s more about how the jobs will change and what training we need to provide people so that they have some idea of, “how do I think critically about what data this machine could have?” Voice systems are trained on text, and there’s no way they could have been trained yet on people’s inflection, their physical interactions, their hand gestures, whatever. All the things we’re not capturing on this podcast, for example, that are actually 90% of communication. So I think there’s a temptation when you see these systems to personify them and think, “oh, if it can do that, it can do anything a human can do.” But you need that skepticism of knowing, “okay, here are the limitations of the data that went into it, and here’s about how often I can expect a perfect answer.” And like you were saying, if 90% of the time it’s great and the other 10% is egregious, that’s actually more understandable for me as a human to operate with than if 10% of the time it’s just barely wrong, because then you have to really keep a close eye on it. But I think of it more as augmenting human intelligence, and automating away the parts people are just not interested in doing. I was talking to an engineer who works in sanitation, and she was saying that as an intern, 90% of her job was watching these long, long videos: they’d put cameras and sensors down sewage pipes, and you have to watch hours and hours of video to see if there’s any tree branch intrusion into the pipes.

Allen: Smells like a test for AI to me.

Polly: Exactly. She was like, “no one is sad if AI takes this job away from us.” Right?

Allen: Yeah. It’s taking the tree-branch-intrusion interns’ jobs.

Polly: No. Exactly. So it’s really interesting to take a look at which jobs will be partially automated, or more than 80% automated, and there are different degrees in different industries. But I think there will always be a place for people; there’s just so much context about the world that we are still far away from teaching a machine system. There’s a reason we still don’t have fully automated airplanes, for example, even though we’ve had autopilot systems for how long, right?

Allen: Yeah. And that point has, I think, been an age-old debate for hundreds of years: we advance technology that takes away some things people were previously getting paid to do, but then there are other things we do that are hopefully higher and better uses of our time. One of the things we already see starting to happen with AI, and there’s going to be way more of it, is an increase in consumption of the work once it becomes far cheaper. So with this branch intrusion thing, maybe this one sanitation company can watch for branch intrusion in one certain place, but they don’t have that many cameras; they’re not going to get that much video. But you can easily imagine a point where one of the standard things you do when you put a pipe down is put a camera in every so often, and the AI immediately lets anyone know if there’s a problem. The scale can go way up, because you can do it at such a better level. And that in theory makes us better off, because we have fewer water mains bursting and wasting water, and all those sorts of things.

Polly: Right, exactly. There’s just less cost to humanity on things where we really have better uses for that, whether it’s brain time or money.

Allen: And we’ll just all agree not to give the AI any weapons and we’ll be good.

Polly: That’s right. As long as we bake that in early and often, we should be fine. No nuclear codes.

Allen: Before we run out of time, I wanted to get back to one of the things you brought up at the beginning of the show, a question that comes up all the time in debates when you’re building a product team: to what degree do people need a technical background in order to lead product? I know you have the software engineering background, so you have that sort of context, but you’re also now working in AI and things, which I assume you were not learning in your CS degree. I did not do very much in my CS degree about that stuff; there was, I think, one class, basically. So how do you think about that trade-off? Some companies automatically say product managers must have an engineering background, and some leaders will say, “oh, as long as there’s technical strength somewhere, you just need people who can communicate.” How do you think about that trade-off, or I guess that continuum of stances on product leadership?

Polly: It is a continuum. I do get very frustrated with the companies that just say dogmatically, “oh, you need a computer science degree for certain positions.” One, I’ve known lots of people who have come into programming from an alternative background, who wouldn’t meet that bar, and who are amazing leaders and amazing product managers. So that always drives me a little crazy. But even if they say CS or equivalent experience, they essentially mean time spent coding. And I always think it’s lazy, because what they’re really saying is, “you need the ability to gain the trust of, and problem-solve with, tech teams, and we don’t know how to interview for that.”

Allen: They’ve had a bad experience where someone comes in and the tech team just writes them off: “oh, this person doesn’t get technology, they keep asking me to do impossible things, and they don’t believe us when we say something is or isn’t possible, or will take a certain amount of time.” And so they try to solve that by saying, “oh, well, you have to have a CS degree.”

Polly: Exactly. Part of what motivated me to get into this space around tech and AI coaching for business leaders and product managers was that I had interviewed a lot of product managers who wouldn’t pass the technical bar for, say, a product manager technical role. And it would drive me crazy, because they would say, “but do you know where I should go?” And a lot of them, I swear, left thinking they should do a coding bootcamp or something, that that’s the only path, or “go back and do a CS degree.” And you’re like, “there is this 10% of knowledge, this thin layer of knowledge.” What that gives you is also just the confidence that if someone throws a three-letter acronym at you, you can figure out what it is. And I think there’s a lot of that. There’s a little bit of gatekeeping that happens on tech teams, where we like to throw around big terms and sound smart. So especially if you don’t have a team with a lot of psychological safety, where it’s not easy to ask questions, people end up sort of going, “oh, I guess tech just isn’t my thing, it’s not for me, it seems impossible to learn.” That’s one sad thing I see happening. But the other piece is that it’s really more about empathy for the developers, and that’s what a couple of years of coding would give you: really understanding how they think about these problems, the areas where they feel misunderstood by leaders in business, and how you can get them on your side by showing that you do understand their concerns, you’re not just ignoring or dismissing them, and you’re not going to ask for impossible things and then be like, “oh, you’re bad developers because you can’t do this impossible thing.” I think that’s the key piece.

Allen: You can learn that stuff, and how to have productive and effective relationships with development teams, without actually doing a coding bootcamp. I’ve certainly seen people who did a coding bootcamp, whatever it was, three months or six months or something, and still don’t really have that level of thoughtfulness about what really frustrates a development team, what motivates a development team, what the jargon and acronyms are that come into actually shipping software at scale. Meanwhile, I’ve worked with product managers and project managers who don’t have a CS degree but have been in it for a few years, and you can just throw out, “okay, well, what’s going to happen when we deploy?” and they immediately rattle off all the sorts of things you might have to consider around deploying code to production, even though they’ve never written any code.

Polly: Right, exactly, 100%. And I think a lot of it is that as developers, all the time, we’re getting up and drawing things on whiteboards to try to explain stuff more visually. And I’ve seen people without technical backgrounds kind of freak out and go, “oh my gosh, they’re speaking a different language.” Part of it is, I know in school we all learned some version of UML or some kind of system diagramming, timing diagrams; 98% of the time that’s not what we’re doing. We’re just drawing lines and boxes to explain concepts on the board. But it’s really easy to freak someone out if they haven’t come from that world and haven’t seen that done, or been through a problem-solving process with someone communicating that way. And if you freak out the first time you see it, you stay away from that side and never get good at it either. So that’s where, in the coaching, I encourage folks to find their secret tech whisperer in their organization, someone they feel safe asking questions of, but at least to have an understanding of their own systems at a super high level: find a system architecture diagram and ask someone to walk you through it to the point that you understand, “oh, why did we make this trade-off? Oh, it was for latency.” Understanding the main concerns that come up in system design will really make you a better collaborator with the tech team.

Allen: And you used that term, system design, which to me is a huge part of the differentiator between non-technical folks, or I don’t even like the word non-technical, but folks without a CS background who can be really effective in some of those product conversations, and people who sometimes struggle: the ability to reason about the software as a system and about the parts that interact. I saw this when I taught a couple of courses at our university many years ago, on web applications basically, and these were people in their fourth year of computing science. I started talking about, “oh, okay, well, the server does this, and when the client loads…” I was explaining HTTP or something like that. And I could just see the students, not all of them, but a lot of them, their eyes starting to go really wide or get really squinty. They’re just like, “okay, our brains are breaking.” Or they would come up and ask questions afterwards, and I realized I had completely failed them. They had no idea what I was talking about. They didn’t know what a server was, they didn’t know what a client was. They’re like, “okay, so is my code… I’m writing the code and it’s on my computer, but then it’s on the internet. So what’s the difference between code running in the browser and running on the server?” And so that’s something you get more practice at as a teacher, and I’m sure that in the coaching you’ve practiced how to explain that kind of stuff. But I failed the first few times trying to convey that.
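
As a minimal, entirely hypothetical sketch of the client/server split being described: one piece of code plays the server, another plays the client and talks to it over HTTP. In a real web app the client code would typically be JavaScript running in a browser, but the division of labor is the same.

```python
# Hypothetical sketch: the same program starts a tiny HTTP server, then acts as a
# client making a request to it, to show which code runs on which side.

import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        # This code runs on the server, wherever that machine happens to be.
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"hello from the server")

    def log_message(self, *args):
        pass  # keep the example quiet

server = HTTPServer(("localhost", 8765), Handler)  # port chosen arbitrarily
threading.Thread(target=server.serve_forever, daemon=True).start()

# This code plays the client: it sends an HTTP request and reads the response.
with urllib.request.urlopen("http://localhost:8765/") as response:
    print(response.read().decode())  # hello from the server

server.shutdown()
```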

Polly: It’s very hard. I feel like people with technical backgrounds are in some ways the worst at this, because if you’ve had four years to understand it and it has all soaked in, it starts to become just like oxygen, and you don’t see it anymore. So I do my very best to meet people where they are, and I have had people who are just like, “the internet, is this ants marching? Is this the power cord to my computer? What’s happening?” And that’s totally great too. I think the other piece people need to learn is that it’s not a personal design flaw in yourself that you don’t understand this yet. There has never been really great systematic teaching about this stuff; how and where would you have learned it, unless you felt like it was meant for you and picked it up?

Allen: Learning that stuff can really level someone up. I’ve seen folks in leadership positions just be way more effective because they can have conversations at that level. So I think that’s awesome that you’re doing that.

Polly: Exactly. And just the confidence. I think a lot of the time they feel like they have to rely on other people to make those decisions for them. And of course there’s a good point to delegation, but that ability to dive deep when you want to can also earn you a lot of trust with your team.

Allen: Yeah, well, it’s way better to go from saying, “okay, well, I don’t understand anything you’re talking about, but I trust you,” to being able to have a little conversation with them and actually trust them at a layer below that, rather than just, “okay, you say so, servers, I don’t know.”

Polly: 100%. And I think it makes people more comfortable going to them and saying, “Hey, remember how we talked about this risk? Okay, that risk did materialize, sorry, now let’s make a plan to deal with it,” as opposed to, “we didn’t bring up this risk because we knew it might just blow your mind and you’d be afraid and you’d say no,” and trying to hide things. Again, that transparency can really help teams be more effective more quickly.

Allen: Yes. You mentioned last time we were talking that you were working on a podcast. Is that related to this coaching work?

Polly: It is, yeah. Actually, I hadn’t thought of that. Oh, you’re so good at this, Allen, you queued this all up and made it full circle. Way to go. Yes, The Tech of Confidence Podcast is the name of the podcast. It’s launching in November, and it’s for business leaders who would like to work in technical companies, or more closely with tech teams, or even level up their effectiveness in their relationship with an existing tech team. If you feel like you know just enough to be dangerous, but less than you’d like to in those kinds of technical discussions, this is what the podcast is about. It’s also really trying to make some of the latest innovations from the metaverse and Web3 and AI more approachable, and to point out where you can learn more about them, if it’s an area you’re interested in and your company or your team might benefit from you having a deeper background in it, without you needing to learn to code.

Allen: Awesome. Well, I’m looking forward to checking it out. Sounds like there’s maybe some audience overlap between folks listening to this show and yours.

Polly: Well, maybe I’ll have to have you on as a guest. We’ll see.

Allen: Yeah, that sounds fun. I’d be happy to, maybe I’ll try to explain servers better now that I’ve been at it for a while.

Polly: Exactly. You’ll be given another chance.

Allen: Awesome. So Polly, where can they go to learn more about your work and what you’re up to?

Polly: Yeah, for sure. I have a website at humaninthelooplabs.com and also I’m at pollyallen.com, so my email address is there on the website.

Allen: Awesome. Thanks Polly. It Shipped That Way is brought to you by Steamclock Software. If you’re a growing business and your customers need a really nice mobile app, get in touch with Steamclock. That’s it for today. You can give us feedback or rate the show by going to itshipped.fm or @itshippedfm on Twitter. Until next time, keep shipping.
