The Next Next

AI-Driven Parenting Support: Exploring Happypillar with Mady Mantha

Episode Summary

In this episode of 'The Next Next,' host Jason Jacobs interviews Mady Mantha, co-founder and CTO of Happypillar—a digital therapeutic application providing evidence-based behavioral interventions for families. Mady, who has a rich background in AI and machine learning, discusses her career journey and the evolution of AI in various sectors. She details how Happypillar aims to make parenting easier by using AI to deliver therapeutic interventions for children dealing with behavioral issues. Mady explains how the platform evolved from manual processes to incorporating complex AI models and highlights the ongoing importance of human expertise in the loop. The discussion also explores how AI is transforming startup dynamics, funding, team structuring, and more. Lastly, Mady speaks on the future vision for Happypillar, expanding its offerings to preteens, teens, and couples, and invites listeners to check out their work and join their mission.

Episode Notes

AI and Parenting: Revolutionizing Child Behavioral Therapy with Mady Mantha

In this episode of The Next Next, host Jason Jacobs interviews Mady Mantha, co-founder and CTO of Happypillar, a digital therapeutic application that delivers evidence-based behavioral interventions to families using AI. Mady shares her extensive background in product and machine learning engineering and details her journey from developing conversational AI at Sirius and Walmart to building Happypillar. They discuss how Happypillar uses AI to make parenting easier by providing scaled therapeutic support, the role of human expertise in AI development, the evolving AI landscape, and the intersection of AI with startup building, funding, and human roles. The conversation also explores the implications of AI on team structures in engineering, marketing, and competitive dynamics. Mady highlights the importance of domain expertise and distribution as moats and provides insight into future expansions for Happypillar.

00:00 Introduction to Mady Mantha and Happypillar 

02:41 The Role of AI in Modern Startups 

04:45 Building Happypillar: The Journey and Challenges 

07:35 AI's Impact on Behavioral Interventions 

12:23 Iterative Development and AI Integration 

21:14 AI Native Startups vs Traditional Startups 

25:39 Therapist Integration in AI 

26:51 Scaling Therapy Services with AI 

27:32 Personalization in Therapy 

29:17 AI's Impact on Internal Processes 

31:39 AI in Coding and Team Dynamics 

34:23 Future of Junior Engineers 

36:45 Fundraising and Bootstrapping 

38:34 Marketing and AI Tools 

40:01 Human vs. AI in Sensitive Areas 

43:04 Competitive Landscape and Moats 

45:28 Open Source and Innovation 

46:20 Future Plans for Happypillar 

47:37 Call to Action and Conclusion

Episode Transcription

Jason Jacobs: Today, on The Next Next, our guest is Mady Mantha, co-founder and CTO of Happypillar. Happypillar is a digital therapeutic application focused on providing evidence-based behavioral interventions to families. Mady has a strong background in product and machine learning engineering. She's held leadership roles in various startups, notably as the director of conversational AI at Sirius.

Mady led the team that developed Walmart's conversational AI, which was featured in the Wall Street Journal. Now, I was excited about this one because Mady has been working on AI for many years, since before it was cool. She not only comes at it from a technical perspective, but she's also a founder, meaning that she understands and is living what's happening in the trenches in terms of how AI is changing how startups are built, how startups are funded, and roles.

The number of roles, the profile of roles, what training is needed, etc. She's also on the front lines of figuring out, for different specialties (for example, therapy in Happypillar's case), how important the human is. What role does the human expert with credentials play? How is that paired with AI? Which aspects does the human need to stay involved in?

Which aspects can the machine take over? And where is that going? As well as how to tell, when you're entering another category, not just therapy, whether it's law, accounting, personal training, medical care, et cetera, how to think about human versus AI and how to do it right. At any rate, fantastic discussion.

But before we get started,

I'm Jason Jacobs, and this is The Next Next. It's not really a show; it's more of a learning journey to explore how founders can build ambitious companies while being present for family and not compromising flexibility and control, and also how emerging AI tools can assist with that. Each week, we bring on guests who are at the tip of the spear on redefining how ambitious companies get built.

And selfishly, the goal is for this to help me better understand how to do that myself, while bringing all of you along for the ride. Not sure where this is going to go, but it's going to be fun.

Okay, Mady Mantha, welcome to the show.

Mady Mantha: Thanks, Jason. I'm super excited.

Jason Jacobs: Yeah, I'm super excited for you to come on. I'm appreciative to Rob May, a little bit earlier guest. I say early guest as if we've been around so long; this is like episode four or five or something. It's still early. But yeah, he was nice enough to put us in touch.

It seems like you've been steeped in startups and steeped in building with AI and building AI products for quite a while. And I'm really excited to learn more and to learn from you. So thanks.

Mady Mantha: Yeah, same here. I'm excited.

Jason Jacobs: For starters, maybe just give a bit of context on Happypillar. What it is, how it came to be.

Mady Mantha: Yeah, totally. So my co-founder and I started Happypillar about three years ago. We're using AI to make parenting easier. I know that parenting is already a pretty easy endeavor. Exactly. Our platform specifically provides behavioral interventions for children dealing with anything from tantrums, to issues following directions, issues with self-esteem, to big changes in the house, like maybe a new sibling, and stuff like that.

And it just helps kids form healthy attachment, and the delivery mechanism, the access to this kind of therapeutic intervention, is the innovation. And we're able to do this at scale using AI. I'm not a parent, but my co-founder is, and we met when we were working at this other startup doing things in conversational AI. This was during the pandemic, and things were definitely a little different then.

Parenting was a little different as well. All of the things that were things you just did, like going to the park, or going to a music class, or going to a soccer game, all of a sudden became things that you couldn't immediately do. And so she was having a hard time with her kids, and she did this kind of therapy with a therapist, and it worked like magic.

And she said, it costs $200 a session, $1,000 a month. I can afford it, but I wonder if we can make it somewhat better, easier, using natural language processing. And she was like, you work in NLP, do you think you could do this? And I was like, yeah, I think we could figure something out. And three years later, we built Happypillar, and we have thousands of parents using it, sending us emails like, wow, this helps my kids, thanks for building this. And that makes our every day feel really good.

Jason Jacobs: Nice. And I forgot, usually I start out these discussions with this, especially since we only had a quick chat before and we don't know each other super well. But the journey that I'm on is to understand how to build different, in a way that I can be far more present at home without compromising ambition, and also how these emerging tools like AI can help facilitate that.

So how AI is changing how startups get built and funded, and how someone like me might be able to leverage it to build different. So with that in mind, I'm just curious, how did you first come to be working in AI in the first place? Where did that come from? How long has it been? And maybe talk a bit about how the landscape's been evolving since when you started to now.

Mady Mantha: Yeah, that's such a great question. I was in AI right when it wasn't really called AI. It was more probabilistic statistical modeling, because that's all machine learning is, just probabilities. And my background isn't really in the theoretical aspects of machine learning either. I studied math and computer science in college, and then I worked on the TAPI oil pipeline at Brookings, working just on statistical modeling. And then I've worked on building AI solutions that have real-world utility, and I did this at scale at places like Walmart, Home Depot, Target, and then in the open source conversational AI space at Rasa. I worked on intelligent search and search ranking and all kinds of things like that.

And I guess what I learned is that, while foundational models like GPT and Gemini are pretty great and incredibly powerful, the raw power isn't quite enough. And the exciting shift that we see today, where the real opportunity lies, is moving from this kind of training phase, which, yeah, requires massive compute, to the application phase. And that's what you've been talking about a little bit in your newsletter as well: how do we take these general-purpose models and make them useful, and how do we build them differently for specific tasks? Like how do we use clever fine-tuning? How do we use prompt engineering?

But especially, how do we use some of these specialized techniques to build something that has value and utility? So we're not only talking about AI in the theoretical sense. So yeah, I've been doing this for over 10 years. And there's a running joke right now that every company is just a ChatGPT wrapper, right?

And there's some truth to it, but I'd argue that in some cases, a well-designed, specialized solution that's tailored to a very specific domain (it could be climate science, it could be climate tech, it could be behavioral change and parenting), with carefully curated data and expert knowledge baked in, even though it's a wrapper, can actually be more valuable than the underlying foundational model itself.

Jason Jacobs: Huh. And we're certainly going to get back into the Happypillar story, but if you just look for a minute at the landscape, what do you see? Is it overhyped? Is it saturated? Is it a bubble? Are there areas that are overhyped and areas where the change will be profound?

I'm asking all kinds of leading questions, which I should never do as a host, but I wish I had just hit period or question mark after I said, what do you see? And then stopped. So pretend I did that.

Mady Mantha: Yeah. Okay. So pretend I'm an LLM and you gave me a question and I'm just like generating data.

Jason Jacobs: I know, I'm clearly a terrible prompt engineer.

Mady Mantha: No, that's exactly how you need to do prompting. That's cool. Cause I think that we're seeing, first of all, I think we're just scratching the surface of what AI has to offer and all of the possibilities. And I think right now it's super exciting because we're seeing a surge in AI native startups, companies that are like building their entire product around AI from the ground up, not just bolting it on as an afterthought.

And all of these startups, if you've seen, they have small agile teams, super rapid iteration cycles, and a very deep understanding of the probabilistic nature of machine learning. I don't want to say things like, oh yeah, it's just a team of two cracked engineers doing this, because everyone on Twitter is saying that. But think of a startup, not just Happypillar, think of a startup that's using AI to optimize crop yields in vertical farms.

They're not just using AI to monitor temperature, which you may have seen five years ago or even 10 years ago, which was groundbreaking then. They're using it to predict plant growth, to adjust nutrient delivery, and even to optimize the lighting spectrum based on real-time feedback, with these specialized custom models that they have. That's a fundamentally different approach than traditional agriculture, and that, I think, mirrors the shift that we're seeing in behavioral health, in cardiovascular health: people using AI not just to predict the propensity of heart attacks, but using it to dynamically alter and pivot someone's treatment plan.

And you might be seeing this a lot in climate tech too, using first-principles thinking to build AI-native startups. So I think that's a really cool change that we're seeing: not just how do I inject AI into this process, but how do I really transform this using all the things that AI offers today?

Jason Jacobs: And for someone like me stepping in from the outside with beginner mind, it's confusing because you've got all these LLMs that are raising money at fancy prices. And then you've got all these upstarts and everyone says they're doing something in AI. And there's all these tools and a lot of the tools have overlap and are saying the same things like we enable building agents and there's open source ways and closed source ways.

And like how does one make sense of the landscape? And from your standpoint, where you're actually building on top of the underlying models, how do you get confidence and conviction that you're not building on top of quicksand, given how quickly the landscape is evolving and how much uncertainty there is?

Mady Mantha: Yeah. That's interesting. I guess I would always go back to thinking, am I answering my customer's problem? Am I addressing their problem? Am I building something that they want to come back and use? Am I building something that, if I worked in clinical trials, these companies are able to use and that helps them inform better outcomes?

So I guess I would always go back to asking the question of, is this useful to the people that I'm building it for? Is this responsive to my community? And sure, you can make use of all of these new tools, if you will. You could start using o1, and then DeepSeek comes out, which uses pure reinforcement learning, and it doesn't use supervised learning anymore, which is something that we've all used building things over the past two, three years.

Or you could use newer techniques like low-rank adaptation, which is really powerful for fine-tuning. And you're right, all of these things are happening at an increasingly faster pace, things coming out every other day, every week. But software has always been like this. People still use Java to build these big, highly performant systems.

People still need rules to make things work. And AI is still just based on probabilistic models, which means that it doesn't have definitive answers. And you have to deal with a lot of uncertainty anyway. That's just the nature of working in AI. So I guess going back to using a combination of rules and building in expert knowledge and answering your customer's problems is always a winning equation.

Jason Jacobs: Great. And so it sounds like in the Happypillar case, your co-founder was actually a customer of this type of therapy for her family, uncovered that it was really valuable, and thought that maybe using AI, it could be delivered in a way that could be broader and more accessible at a lower price point without compromising quality.

I'll go on with the question, but did, am I right so far?

Mady Mantha: You said it so well.

Jason Jacobs: Okay. So then you came in as the person that's been steeped in AI. Where did you go from there? How much of it was driven by where the tools are and what's possible, versus the customer? And how did they meet in the middle?

What did that process look like and how is that the same or different from companies that you've built in the past?

Mady Mantha: When we first started this, we were still working our other jobs. So we worked evenings and weekends, and I think not having, you know, 60 or 70 hours a week to focus on just this, at least in the beginning, helped us follow this process, which is, we wanted to replicate the mechanisms of this kind of behavioral intervention,

and then just deliver it at scale. So instead of a therapist helping one family, this was AI helping many families, like a one-to-many situation. And in the beginning, we didn't even use AI. We asked our parents. We posted in, I think, a Facebook mom group saying, hey, this helped me with my kid's tantrums.

Is anyone else's kids having tantrums? And you can imagine everyone said yes. We were like, hey, would you be willing to try something like this? And within, I think, two hours, we had a 60-parent waitlist. So we asked them to have these five-minute conversations with their kids and send us recordings, because that's what this therapy is: talking to your kid.

And we just got those recordings, and we got a therapist, and then started working on our AI model then to analyze it and to provide feedback. So we didn't have a fancy app. We didn't have a whole lot of like custom AI models or specialized techniques to work on them. We just had recordings that we made sense of.

And then we slowly built on top of it tools and features that made real sense and that parents really needed access to. And so we built this entire AI pipeline iteratively; we didn't have any of that in the beginning. We just had me and Sam listening to recordings, sending them off to a therapist, and giving the parents feedback on it.

Jason Jacobs: Huh. And so maybe talk a bit about how it evolved from getting recordings and having a human deliver 100 percent of the feedback, to starting to pick off pieces of that and finding aspects to automate. Yeah, it'd be helpful to understand that process, that evolution, where you are today, the barriers, what's possible, what's becoming possible, and where you see it going.

And gosh, I can't believe I'm trying to fit all that into one question. I'm sorry. 

Mady Mantha: Okay, so we had recordings. And then we built an app where we had this button that said record; you could hit that button and record a conversation with your kid. And then we built in a diarization mechanism, which is just identifying who the parent is and who the child is, because really, at the bedrock of this, we want to analyze what the adult is saying and not really care about what the child is saying, because we want to provide feedback on the adult.
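Mady doesn't name the tooling behind this step, so here is only a minimal sketch of a diarization pass in Python; the library choice (the open source pyannote.audio project) and the "more talk time means the adult" heuristic are illustrative assumptions, not Happypillar's implementation.

    from collections import defaultdict
    from pyannote.audio import Pipeline

    # Load a pretrained diarization pipeline (requires a Hugging Face access token).
    pipeline = Pipeline.from_pretrained(
        "pyannote/speaker-diarization-3.1", use_auth_token="YOUR_HF_TOKEN"
    )

    # Run diarization on a session recording, telling the pipeline to expect two speakers.
    diarization = pipeline("session_recording.wav", num_speakers=2)

    # Crude heuristic (not Happypillar's rule): treat the speaker with more total
    # talk time as the adult, and keep only that speaker's segments downstream.
    talk_time = defaultdict(float)
    for turn, _, speaker in diarization.itertracks(yield_label=True):
        talk_time[speaker] += turn.end - turn.start

    adult = max(talk_time, key=talk_time.get)
    print(f"Treating {adult} as the parent; analyze only that speaker's segments.")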

And then we built a classification model that was trained on these parent-child interaction therapy principles, so we could categorize the adult speech into these five to eight categories: things like praise and behavioral descriptions and criticism, all kinds of things. And we were like, okay, this can't just be keyword spotting.

We can't just find a keyword for praise and say, okay, you have a praise. It's really about understanding the intent and context of the speech. So we used supervised learning, because we thought that if we have access to a lot of labeled data, which we started collecting, a ton of conversation data, we could understand the entire intent and context of the speech and classify it exactly like therapists would classify it.

And our data annotation techniques were unique, because it wasn't just me or another data engineer annotating the data. It was our therapist looking at this data, annotating it, and telling us, okay, based on the context of the speech, this is a praise, or this is a description, or whatever the classification was.
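As a rough illustration of this kind of supervised, context-aware classification (not Happypillar's actual code; the category labels and example utterances below are invented), a sentence-embedding model plus a simple classifier already goes beyond keyword spotting because it keys on the meaning of the whole utterance.

    from sentence_transformers import SentenceTransformer
    from sklearn.linear_model import LogisticRegression

    encoder = SentenceTransformer("all-MiniLM-L6-v2")

    # Tiny, invented "therapist-labeled" training set.
    utterances = [
        "I love how carefully you're stacking those blocks.",
        "You're putting the red car into the garage.",
        "Stop doing it that way, that's wrong.",
        "Great job waiting for your turn!",
    ]
    labels = ["praise", "description", "criticism", "praise"]

    # Embeddings capture intent and context, so this isn't keyword matching.
    clf = LogisticRegression(max_iter=1000).fit(encoder.encode(utterances), labels)

    # Classify a new parent utterance pulled from a session transcript.
    print(clf.predict(encoder.encode(["Wow, you worked really hard on that drawing."])))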

And once we had a performant classification model, we also used other AI techniques, like semantic textual similarity, that helped us look at how closely the parent was adhering to the therapy's principles. And then, as we had a rudimentary model, we started collecting more and more data, and our data flywheel was great.
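Semantic textual similarity here just means comparing embeddings. A minimal sketch, with an invented principle and utterance (again, not Happypillar's implementation), might look like this:

    from sentence_transformers import SentenceTransformer, util

    encoder = SentenceTransformer("all-MiniLM-L6-v2")

    # Hypothetical exemplar phrasings of one PCIT-style skill ("labeled praise").
    principle_examples = [
        "Thank you for sharing your toys with your sister.",
        "I like how gently you are petting the dog.",
    ]
    parent_utterance = "I really like how quietly you're playing right now."

    # Cosine similarity between the utterance and the principle exemplars.
    sims = util.cos_sim(
        encoder.encode(parent_utterance, convert_to_tensor=True),
        encoder.encode(principle_examples, convert_to_tensor=True),
    )
    adherence_score = float(sims.max())  # closer to 1.0 means closer to the principle
    print(round(adherence_score, 2))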

Parents found this useful. They were using it more. And as they used it more, we had more data to feed back into our models and to make it even better. And then we thought, okay, this is cool, but parents often talk to a therapist at the end of each week or at the end of each session, and they look at all of this data, and the therapist tells them, hey, you did great here.

You could have done better here. They don't have that kind of thing within the app. Then we incorporated a weekly survey where we aggregated their interaction scores, the survey responses, and then we gave them access to historical trends. Parents could look at personalized insights, they could track their progress over time, and it's how you analyze, customer data at Walmart or Target, and you help identify patterns and make personalized recommendations.

And then we thought, okay I think if they had content within the app, that would be really meaningful. A therapist telling them, about how to deal with a particularly difficult week, or a therapist telling them about anxiety with three year olds and what you could do. And so we started incorporating content in the app and then giving personalized recommendations to parents that were having issues with anxiety or bedtime problems and stuff like that.

Yeah, we kept adding tailored content within the app. And then at the end of six months, we had this entire pipeline that had multiple AI models working together going far beyond just like a chatbot answering questions. And then we had multiple specialized techniques from reinforcement learning to supervised learning to then providing personalized recommendations.

And it's just work that evolved over time, but we didn't set out on day one saying, oh, we need to do all these cool things with AI and include them within the app. It was more so an evolution of, okay, I think parents could find some kind of recommendations useful. Or, I think it could help them if they looked at their progress week over week.

That would help them want to stick with the program. We just infused AI in that way.

Jason Jacobs: Are there services that existed before Happy Pillar that deliver this type of therapy either through telehealth, one to one, or some other means of virtual delivery?

Mady Mantha: Yeah, totally. And the landscape today is pretty mature. But the way we saw it and the way that it exists today, even, is that on the one end of the spectrum, it's exactly like you said. You have one to one therapy. You also have telehealth, which is also one to one, where services connect you with a therapist online.

And you can talk to them, and that's absolutely needed. And then, on the other end of the spectrum, you have solutions that are like psychoeducation. And I'm sure we all read this, right? Not just books and blogs, but services that tell you, hey, if your four-year-old doesn't want to leave the park, here are things that you could try with them next time you go to the park to get them to leave faster.

And those are great, but parents are so busy, and they have so many things that they have to learn and read and think about, that there wasn't anything to fill this huge missing middle. Is there a program or a service that's actionable, that's low lift, that only takes five minutes a day, and that doesn't require you to read a bunch of stuff and then remember it when the thing is happening to you? You just have to spend five minutes a day interacting with your kid, talking to them.

Yeah. And in doing so, you give them five minutes of full attention and you practice all these techniques, which you don't have to think about. It's like having a personal trainer: you don't have to count your reps when you're working out. They do everything for you, and you get feedback.

Jason Jacobs: So is the one to one session not a part of this process, the Happy Pillar process?

Mady Mantha: You have a one to one session with your kid. You have a conversation with your kid that gets recorded and then our AI listens to it and gives therapist vetted feedback after.

Jason Jacobs: I've been hearing this term AI native, and I think you might have used it earlier in this discussion, in talking about startups that are fully embracing these tools from the outset versus bolting them on later. When I hear you describe how Happypillar has formed and how you've been iterating on it, to me it sounds like just a sound company or product formation and development process, but it doesn't sound fundamentally different than how startups were built before. How do you think about this term AI native, and what's different about AI-native startups versus just regular, well-run startups?

Mady Mantha: Yeah, wow. I think depending on who you ask, they might say maybe they're the same.

Jason Jacobs: Hmm. I'm asking you.

Mady Mantha: Yeah. And I would say, I think that you can't divorce or ignore a lot of really good practice around how to build product the right way or how to build a company the right way. How do you collect feedback? How do you interview the right kind of domain experts before you're starting a company, to learn as much as you can from them so you can build things the right way?

I don't think all of that is very different from building something that's AI native. The difference, I think, lies in the example that you and I were discussing about the crop yields and farms. It's thinking about how maybe machine learning and data and, I don't know, vector databases can help you reimagine the way traditional agriculture works, or how you can reimagine the way that traditional therapy works. Instead of meeting your therapist on Zoom or going to your therapist's office and talking to them one-on-one, how can we replicate some of those things using machine learning models and data and technology? And I guess it's like, how did people think about building websites when Web 1.0 came about?

You have a brick-and-mortar store. You go in, and I guess on one side you have shirts and on the other side you have socks. How do we reimagine that as a website that has a catalog and a search index? How do we make all of that possible? And I think AI is similar. It's like, you have a website, or you have databases, or you have enterprise software that works in this way; how could we improve upon it using vector databases to store embeddings for our, I don't know, educational content?

Because vector databases with their embeddings can allow us to quickly resurface the most relevant articles. And AI models read vectors and high-dimensional embeddings; that's just how they understand data. So maybe we use that. And how do we use all of this data to teach our models better? I think it's just figuring out those aspects, but I think everything else remains the same.

And even when we use AI models, because of the high degree of uncertainty that they have, we still use traditional software rules: if-then statements, if-else statements, because those work, you know?

Jason Jacobs: I've heard about how the human therapist is involved in training the model in the formation of Happypillar's service. What is the human therapist's ongoing involvement, if any? And I'll ask that both holistically, in terms of the overall infrastructure of Happypillar, and also on a per-client or per-family basis.

Mady Mantha: Yeah. That's, I think, an important topic, and we often talk to parents about that. Starting with the data annotation: when we have access to all of these parent-child recordings, the way we annotate data is unique. A therapist looks at our data and annotates it, labels it, so that our classification models can read it and continue to learn on top of that.

Engineers don't annotate our data. We just don't have the deep subject matter expertise that therapists do. All of the content that you see within the Happypillar app is also therapist-created. It's not just something that AI generates. And the way that our models classify data is also based entirely on parent-child interaction therapy principles.

So I would say that at every given point within Happypillar, we have either pre-vetting by a therapist or post-vetting by a therapist, and human checkpoints at every given point within our AI pipeline. That, to us, was very important, because we didn't want to just build an AI wrapper for therapy.

We wanted to make parent-child interaction therapy accessible, because it simply works and parents find a lot of value in it. So how do we charge parents $12 a month as opposed to $1,000 a month? How do they still do this within five minutes? If you were going to a therapist's office, you had to figure out scheduling.

You had to spend an hour or an hour and a half within your day to fit this in. But with Happypillar, you can do this at any time, from anywhere, and it takes five minutes a day. So we wanted to solve challenges around scheduling, around cost, around accessibility, and AI was just a medium that helped us do that.
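One way to picture those human checkpoints (purely illustrative; the thresholds, fields, and topic list below are assumptions, not Happypillar's pipeline) is a routing rule that sends low-confidence or sensitive feedback to a therapist before it ever reaches a parent.

    from dataclasses import dataclass, field

    SENSITIVE_TOPICS = {"self-harm", "abuse", "medical"}

    @dataclass
    class Feedback:
        text: str
        model_confidence: float
        flagged_topics: set = field(default_factory=set)

    def route(feedback: Feedback) -> str:
        # Anything touching a sensitive topic, or anything the model is unsure
        # about, goes to a therapist before a parent ever sees it.
        if feedback.flagged_topics & SENSITIVE_TOPICS:
            return "therapist_review_queue"
        if feedback.model_confidence < 0.85:
            return "therapist_review_queue"
        return "send_to_parent"

    print(route(Feedback("Nice use of labeled praise in today's session!", 0.93)))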

Jason Jacobs: So as you serve more customers, will you also need more therapists or at least therapist time?

Mady Mantha: Yeah, I would imagine so. Right now, we're able to service about 300,000 to 500,000 patients, or parents, without additional therapists, with our small clinical team. So I imagine that we can scale up to a million parents with the same clinical team that we have right now. As we're able to automate parts of the annotation mechanism or parts of the AI pipeline without sacrificing quality or human checkpoints, we're able to scale indefinitely.

Jason Jacobs: And I know if you look at personal training, for example, on the fitness side, sometimes these apps will ask you, do you want someone who's a drill sergeant, or do you want someone who's encouraging, or do you want someone who is data-driven. Does it work similarly in therapy?

Are there different therapy personality types? And I know for an MVP, it makes sense to start with one offering. Do you see expanding into choices over time? Or do you think it's important to have one consistent Happypillar delivery style?

Mady Mantha: I did not think of that. I think that's such a cool idea. As you were saying that, it reminded me of TARS from Interstellar, where you could tweak his humor level and sarcasm and all kinds of things. I imagine you would want to have more therapists, because I think we all choose our therapists based on how good of a fit it is, personality-wise.

Right now it's really just centered around the parent and the child interacting and building their relationship, and the therapist only comes in when it's time to give feedback on how they did, how to get them to improve, and how to get them to stick with the program. So because they're not really talking to therapists, it's something that we probably don't have to think about in the short term. But I imagine as we're expanding into other modalities where you do have to interact with a therapist more and more, it would totally serve us if we thought of different personality types, different, I don't want to say flavors, because that doesn't sound good to attach to a person, but like a different flavor.

Jason Jacobs: Huh. And, gosh, there's two different ways I want to go with this, but I'll pick the first way and come back to the second. The first way is, we've talked about how AI is enabling the customer-facing offering. How is it changing how you're building Happypillar, if at all? So, more internally focused.

Mady Mantha: Oh, yeah. I'd say that with all of the different advances in AI, I know there's stuff every week, but every six months there seems to be something meaningful that we can act upon and look into and learn from.

I think that it informs the way that we build internally probably way more than the external side, how parents interact with AI, which isn't all that different to them. They're still just talking to their kids, and they receive some feedback, and they look at that, and they have access to all of this content. But with the way that we build, I would say, we started with using vector databases, which I know have been around for a while.

They're pretty cool. It's not that they're groundbreaking or anything like that, but they're optimized for storing and searching high-dimensional embeddings, and those embeddings are representations that AI models use to understand data. And so we use those kinds of databases, like Pinecone or Qdrant or whatever, because they're incredibly fast, and they're efficient for the kind of semantic search that we offer within Happypillar.
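Here is a minimal in-memory sketch of that kind of semantic search, with invented article titles; a production system would keep the embeddings in a vector database like Pinecone or Qdrant rather than a NumPy array, and this is not Happypillar's actual code.

    import numpy as np
    from sentence_transformers import SentenceTransformer

    encoder = SentenceTransformer("all-MiniLM-L6-v2")

    # Invented, therapist-written article titles standing in for in-app content.
    articles = [
        "Handling bedtime resistance in three-year-olds",
        "What to do when a new sibling arrives",
        "Helping an anxious child leave the playground",
    ]
    article_vecs = encoder.encode(articles, normalize_embeddings=True)

    query = "my toddler melts down every night when it's time for bed"
    query_vec = encoder.encode(query, normalize_embeddings=True)

    # With normalized embeddings, the dot product is the cosine similarity.
    best = int(np.argmax(article_vecs @ query_vec))
    print(articles[best])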

We use reinforcement learning in some parts of our AI pipelines, like what DeepSeek does, which uses pure reinforcement learning with an interesting spin on the reward function. That's been shown to significantly improve reasoning without a whole lot of labeled data. So for certain aspects that don't require a ton of labeled data, we feel more confident using reinforcement learning.

And then we use specialized techniques like semantic textual similarity to calculate the mathematical distances between certain points. So I would say that as we experiment within different parts of our AI pipeline, we want to look at the new things that come out to see if they make sense for us to use and how useful they might be.

Is it going to make our model more accurate? Is it going to make it more performant? Is it going to make our compute significantly less if we use something like low-rank adaptation, mechanisms where you don't have to retrain your entire model, you can just retrain specific parts of it? So yeah, we look at it from a utility basis.

Is it gonna help A, B, and C? And if it does, we do it.
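The "low-rank" mechanism Mady mentions is low-rank adaptation (LoRA). A minimal sketch using the Hugging Face peft library, with an illustrative base model and settings rather than Happypillar's, shows how only small adapter matrices get trained instead of the whole network.

    from transformers import AutoModelForSequenceClassification
    from peft import LoraConfig, get_peft_model

    # Base model and label count are illustrative (e.g., five PCIT-style categories).
    base = AutoModelForSequenceClassification.from_pretrained(
        "distilbert-base-uncased", num_labels=5
    )

    # Attach small low-rank adapter matrices to the attention projections; only
    # these adapters (plus the classifier head) are trained, not the full model.
    lora_config = LoraConfig(
        r=8,
        lora_alpha=16,
        target_modules=["q_lin", "v_lin"],  # DistilBERT's query/value projections
        lora_dropout=0.05,
    )

    model = get_peft_model(base, lora_config)
    model.print_trainable_parameters()  # reports the small trainable fraction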

Jason Jacobs: And what about when it comes to coding? How do you think about it as it relates to the profiles of roles, in terms of engineers, or designers, or product managers, or others that you might need, but also the quantity of roles, and then maybe some of the skills or soft skills that might be required to be successful in those roles within Happypillar.

Is it changing any of that at all? Do you see that happening directionally, or is it more just that building strong technical teams is as it always was and will be?

Mady Mantha: No, I think it's totally changing. Even today, I'll say that I probably use Claude and Gemini and obviously GitHub Copilot to help me code faster and better. There are certain methods or classes or functions that I don't necessarily want to write, or I can, but the autocomplete feature, or just me chatting and brainstorming with Claude, gets me those pieces that I don't necessarily want to write.

I'll prompt it and I'll get that. So it helps me code faster. And, I'm trying to figure out how to say this respectfully and gently, we probably don't need large teams of junior engineers, or as many. We still need engineers. We still need a strong technical team.

But we probably don't need as many, because people can work faster. We still need strong teams because, and my friends and I always talk about this, a 10x engineer using AI is so powerful, because AI still does probabilities. It does inferential thinking, it does prediction; it doesn't do anything deterministic.

So you could ask it a question and it gives you an answer, but it's not opinion-based in any way. It could output a bit of Java for you or a bit of Python for you, but maybe you need to be using Java when you're building enterprise-grade search. Maybe you need to be using Python when you want to customize your AI models.

It doesn't have that kind of opinionated framework or a deterministic output, and that's where humans come in. You need humans to say, here are my preferences, this is my opinion, I think A is stronger than B when it comes to this particular thing we're talking about. So for that, you do need 10x engineers.

But a 10X engineer with AI can do a lot more than a team of five engineers, I would say. So it helps inform the way that we build teams in that way, which is great because the market's not that great for raising money right now. So we've been able to do so much with so little over the past three years.

Jason Jacobs: It used to be that, the best way to become senior was to be junior. 

Mady Mantha: Yeah.

Jason Jacobs: How do you think AI will impact that path? And what will be the best way to make sure that we don't essentially run out of seniors without a farm system to train juniors?

Mady Mantha: Yeah. I'm curious how you think we could do this too, but it used to be that you needed to know a whole lot, or memorize lots of things, and know how to write a bibliography. Like when I was in college, I'd be like, oh, are we using the AMA format to do this?

And now you can just ask ChatGPT to do it. I would say you can do a lot more with knowing how to use this knowledge than you could with just knowing it. So I would say to junior engineers: look at this as an opportunity. You have access to so much more information.

You have access to all these tools that can do so much for you. So can you build something meaningful? Can you start by having your own portfolio? Code as much as you can, because coding will not take as long as it used to. So can you build up your portfolio?

Can you build some interesting and meaningful things using these tools? AI will never have opinions, or even if it does, it's just probability anyway. It's inferential thinking, not decisional thinking. So can you, as a human, use your decisional mindset to create something meaningful or beautiful?

We want to be able to automate these things so you can focus on, I don't know, things like beauty. What's that thing that people used to say? I do engineering so my kids can do poetry and philosophy, or something like that.

Jason Jacobs: Oh, I haven't heard that.

Mady Mantha: So how could you use AI to do the mundane, boring, repetitive things, and really have an opinion? Maybe build something really cool with AI, build up your portfolio, take the initiative to reach out to senior engineers or people at companies, and try to maybe get into a mentorship or something like that.

I think that the opportunity is huge. Junior engineering jobs aren't going anywhere, but maybe you just need to go about it differently. Yeah.

Jason Jacobs: To pick up on something you said before: you talked about how the fundraising climate is difficult, so it's been good that AI has enabled you to do so much more with so much less. That speaks to what to do in the meantime, in a world where there's not access to capital.

But if AI continues to enable us to do so much more with so much less, why even want the capital in the first place?

Mady Mantha: Yeah. I know friends that have started companies, and when we started in 2022, it was starting to get bad, but maybe not quite as bad as 2023 was. And I have friends that started companies that essentially had to bootstrap, if you need to pay yourself, if you need healthcare, stuff like that.

I know friends that have done services contracts to raise a little bit of money and bootstrap their company that way; they know more about bootstrapping than I do. I haven't really had to bootstrap as much, but we did crowdfunding. We have a lot of parent and therapist investors that we talked to who were like, wow, I wish I could invest in this, but not a whole lot.

So we used a crowdfunding platform where we raised checks as small as a thousand dollars from thousands of people. So we raised in these mini tranches, $100K to $200K at a time, over the course of the past three years. But yeah, you could bootstrap, you could build an MVP. It used to be that you could raise on an idea.

Now you don't need to. You could build an MVP within a week or two weeks, depending on how focused you are, as in how focused your vision is.

Jason Jacobs: And the same question I asked you about coding and product management and design, I want to ask about go-to-market, whether it's marketing or partnerships, that more commercial side of the business. How much are you leveraging these tools over there, how much opportunity do you think there is, and what are the tactics that you're implementing in terms of how to get more customers, how to drive more revenue, et cetera?

Mady Mantha: Yeah. I'm not a huge fan of using AI to generate LinkedIn posts or just marketing content, ads, stuff like that. Because I think, you no longer have a blank-page problem. You can generate stuff, so you don't have to stare at a blank page when you're coming up with how to write about your product, how to position it, how to socialize it better. But there definitely needs to be a human element with that. A lot of marketing ops, though, we're able to automate using AI. There are lots of tools that can help you schedule things. There are tools that can help you make your ads even better, more performant.

So you can really do something as low lift as just brainstorming with your AI agent to figure out if your ads are good before you go live, or before you reach out to an expert to do AdWords, tagging, stuff like that. So a lot of the ops work that we do, we automate to a degree. We use tools, and we use generative AI as well, so that we can spend more time thinking of, I don't know, a humorous way to talk about sleep issues or anxiety, spend more time on things like that.

Jason Jacobs: Huh. And this is less of a Happypillar-specific question, but when you think about categories that might be ripe for this kind of delivery, whether it's personal training or therapy or legal advice or accounting or whatever, there's a spectrum. There are some that could maybe be fully automated.

There are some that need to stay fully human, and then there are some that can be hybrid. But within the hybrid, I would imagine that it looks different in terms of which things are okay to be machine and which things need to be human. It probably depends, case by case, on what type of thing it is.

Have you thought much about this? Are there guidelines? Is there a chart? How would one go about determining how in the loop a human might be from case to case, and which areas might be more ripe to leverage these tools to try to automate?

Mady Mantha: Yeah, that's, I think, so important now, because you have companies like Harvey, which is a legal AI assistant, and how much of that is AI and how much of that is actual lawyers doing something? And people used to say, don't go on WebMD, because it'll tell you you're about to die when you have a headache, and you're like, I have a headache.

What do I do? So I would say that in cases where you're dealing with a uniquely human problem, whether it's humans needing legal advice or medical advice or even a behavioral challenge, and I think that's why with Happypillar we make sure there's a human in the loop at every checkpoint.

It's important to have some kind of human intervention. You maybe shouldn't have generative AI go unchecked; you need human guardrails and checkpoints to pre-vet and post-vet information that's going out to humans, especially if it's information that humans can base their decisions on, and especially if those decisions relate to anything legal or medical.

I think that is a sensitive area that you probably want to rethink, especially because there's so much uncertainty with AI. It's not an exact, definitive answer. Even when you give it math problems, it could come back with, no, I think 2 plus 2 is 5, and here's why. So you should always double-check that.

And I do that even when I'm prompting it. I do something a lot of people tell you to do, which is just ask if it's sure. Say, are you sure? And then it will rethink and re-reason. So identifying certain areas, like medicine, health, law, could be a good way to go about it.

If it's automating database batch jobs, things like that, I think it's safe to automate those things. But again, if you're automating batch jobs in highly sensitive areas, you probably need to recheck that with a human systems admin or something like that.

Jason Jacobs: Given that we've been talking about how these tools enable you to stand up new products and make progress faster, I would imagine that also applies to your competitors, either ones that exist today or future competitors. Do you think that the life cycle of companies is compressing? How do you think about moats and defensibility and what will be the implications of AI on these things?

Mady Mantha: Yeah. I used to think that once you have a foundational model, if this one foundational model outperforms everyone else, that's their moat; they're so powerful. And now everyone is coming out with their own foundational model. You have, obviously, the big players: you have FAANG, you have DeepSeek, you have OpenAI. Everyone has a model.

And does it matter for your specific use case? If one is outperforming the other by two or three percent, is your output, if you build a wrapper on it, going to be all that much more different? Probably not. So there is no moat with intelligence anymore, or is there, right? That's a little confusing; I just need to think a little bit about that. If intelligence is free, and all the big players are able to afford compute, then intelligence no longer is a moat. Then is data your moat? And I think that, sure, you could have labeled data, and the more data you have, the better your models are.

But then we're seeing DeepSeek, with pure reinforcement learning and no labeled data, outperform everyone else. So then, is your own data a moat? And I think if you look at data as having intrinsic value, data itself as a commodity, in addition to just creating data flywheels that can make your models more performant, that could be meaningful, right?

For Happypillar, we have all this data that can maybe inform better care approaches in pediatric health, that can maybe help us build diagnostic tools. So data is a commodity that could be useful. I think that your distribution will always be your superpower and your moat, and specific expert knowledge and specific domain expertise is your moat.

I would say a team of really amazing lawyers building something like a legal AI assistant, and having their own distribution network, probably needs to worry less about competitors that might not have access to highly specialized domain expertise. So yeah, I think it might come down to expert knowledge and distribution, if compute and data don't really matter as much as they used to.

Jason Jacobs: And where does what's happening in open source fit into all of this, if at all?

Mady Mantha: I love Llama. I love that they open-sourced a lot of stuff. I think open source really is huge in terms of driving innovation. Because once more and more people have access to it, and you have people from diverse backgrounds, different industries, different levels of expertise, as engineers, as builders, as product managers, you have all those people taking open source and building something meaningful on top of it, helping create custom specialized models, and that's really great.

If you look at Hugging Face, which hosts a lot of open source models, you have engineers building really cool things on top of them and the community contributing to it. I think that is a huge driver for innovation, for sure. I also just love open source.

Jason Jacobs: And back to Happypillar, I know I'm bouncing around a bit, but who is Happypillar for today, from a customer profile? And who do you imagine it will be for in two, three, five years,

Mady Mantha: Yeah 

Jason Jacobs: if different?

Mady Mantha: Happypillar today is for parents and caregivers with kids aged two to seven. We have a lot of customers who are also grandparents, or just caregivers; I do Happypillar sometimes with my nephew. So it's for caregivers with kids aged two to seven. We definitely want to expand into support for older kids, preteens and teens.

There's a huge need for that right now. And we also want to expand into, Couples and young adults because this kind of therapy delivery mechanism using a I lends itself really well to a five minutes. Couples version of this kind of therapy or a five minute session with just yourself as a young adult when you have lots of challenges that you're facing.

So in two to three years, older kids. In five years, we're thinking couples. Really, I think the future of Happypillar, the way we see it, or Healthypillar, we also have that domain, is to be the premier mental health resource for kids and people of all ages.

Jason Jacobs: Great. And and in terms of a call to action for anyone listening that's inspired by your work, is there anything they can do to be helpful to you? Are there people you want to hear from? Are there things you want to impart in listeners to think about leaving this discussion?

Mady Mantha: Yeah. I would love for people to check us out at happypillar.com. Our app is free to use; it's available in both the Google and Apple app stores. And if you're also passionate about wanting to revolutionize early childhood behavior, check us out at happypillar.com. We're always looking for passionate people to join our mission and our cause.

Jason Jacobs: Great. Thanks, Mady. And anything I didn't ask that I should have, or any parting words for listeners?

Mady Mantha: I think you're just great to talk to. We've talked about a whole lot of stuff. I think you've covered it all.

Jason Jacobs: Great. Thanks so much for coming on. It's a fascinating story, and I can't wait to track your progress. My kids are a little old for the Happypillar of today, but as you expand into older kids, I agree with you that that is a necessary area. So keep me in mind for that as well. And best of luck.

Mady Mantha: Absolutely. Can't wait for you to be ground zero and get your thoughts on that.

Jason Jacobs: Thank you for tuning in to The Next Next. If you enjoyed it, you can subscribe from your favorite podcast player. In addition to the podcast, which typically publishes weekly, there's also a weekly newsletter on Substack at thenextnext.substack.com. That's essentially for weekly accountability on the ground I'm covering, the areas I'm tackling next, and where I could use some help as well. And it's a great place to foster discussion and dialogue around the topics that we cover on the show. Thanks for tuning in. See you next week!