I recently had the pleasure of sitting down with Rachel Wolfson on the Deep Dive Podcast for a wide-ranging conversation about artificial intelligence, where it’s headed, what it means for how we work and live, and some of the projects I’ve been building at the intersection of AI and real-world use cases. We covered a lot of ground, from the philosophical question of whether we’ve truly entered the “age of AI” to the nuts and bolts of how I created an AI-powered version of Scott Adams, the creator of Dilbert.
If you’ve been following what’s happening in AI, or if you’re just starting to pay attention, I think this conversation will give you a useful framework for thinking about what’s coming. We talked about opportunity, risk, and what I genuinely believe is one of the most important things anyone can do right now to stay ahead of the curve.
Below you’ll find the key topics we covered, the full video, and the complete transcript.
What We Covered
Rachel and I touched on several themes that I think about constantly as I build at Age of AI and FreedomGPT:
- Are we in the age of AI? I shared my framework for thinking about why AI is different from previous technology waves. It’s not just about significance or how fast it spreads, but about the speed of its own self-improvement.
- Building an AI clone of Scott Adams and how I assembled his public corpus of work, stitched together LLM, voice, and lip-sync technologies, and why his own wishes made this project possible and meaningful.
- Digital twins for living people and why the real opportunity isn’t replacing someone at a speaking engagement, but giving anyone 24/7 access to an expert who actually understands their context.
- AI’s impact on jobs, crypto, regulation, and the “infinite money glitch” including my take on the moment I believe will fundamentally change how society works.
Watch Now
Watch the full conversation below, or click here to watch on YouTube.
Full Transcript
This is the transcript from the Deep Dive Podcast interview with John Arrow.
Rachel Wolfson: Hey everyone, welcome back to another episode of Deep Dive Podcast. I’m your host, Rachel Wolfson. Today I’ve got a great interview lined up for you guys. I’m speaking with John Arrow. He is the founder of FreedomGPT, and John is a true entrepreneur at heart. He has founded a number of companies from a very early age. And in today’s interview, John is going to be speaking all about artificial intelligence and the use cases that we’re currently seeing. John recently did a really cool use case by creating an AI clone of Scott Adams, who is the creator of the Dilbert cartoon. So he’s going to be speaking all about that in today’s episode, and I encourage you guys to listen to everything because this is just such a great interview, especially if you want to learn more about artificial intelligence and what we’re going to start seeing more of in the future. Before getting started with today’s episode, I also want to take the time to remind you guys to smash that like button and hit subscribe, especially if you enjoy the content that you’re seeing today. I’ve got so much more of that coming your way and I want you to stay in touch. Without further ado, let’s get started with today’s interview. Hey John, how’s it going?
John Arrow: Hey Rachel, great to be here.
Rachel Wolfson: Yeah, good to see you again. And I’m so happy that we’re doing this podcast in person this time.
John Arrow: So much better than being thousands of miles away like last time.
Rachel Wolfson: Yeah, I know. So John, you have a very interesting background and I want to get into everything with you. But before we start, tell our listeners a little bit about yourself and what you’ve been doing and your entrepreneurial journey.
John Arrow: Right. Well, I have somewhat of a unique journey in the sense that I’ve never had a boss in my life. I grew up making web pages. So as like a 10 or 11-year-old kid in the 90s, I was that person making websites. I scaled that, I did some interesting products, and then most recently I bootstrapped a company called Mutual Mobile to 400 people and about $45 million a year in revenue. We sold that company, and what was really interesting about that company is that we got exposure to all of these different types of emerging tech practices. So what started with mobile and then shifted to tablet, Internet of Things, self-driving vehicles, and then right before we sold the company, we were doing a lot in the artificial intelligence space, specifically around machine learning and around computer vision.
Rachel Wolfson: Got it. When you started Mutual Mobile—and this is a question I’ve been wanting to ask you, and I actually don’t even know the answer—how old were you?
John Arrow: I was, when we started Mutual Mobile, I was… let’s see, the iPhone was announced in 2007, shipped in 2008. Steve Jobs was up there making that amazing keynote and then the App Store went live in 2009, which is when we started. So I think I was like 21.
Rachel Wolfson: Okay, so you were quite young. And what was Mutual Mobile? Like, what was the purpose behind it?
John Arrow: Mutual Mobile, the founding story was my friends and I realized this was going to be a technology with the iPhone that changed the way the world worked in so many different ways. In the same sense that we’re having these conversations about artificial intelligence now, we realized everything was going to be reinvented. And we realized if we got the smartest, most motivated people all under one roof, we were going to be able to service those opportunities and it was going to be an excellent vantage point for how the world was about to change.
Rachel Wolfson: Awesome. Yeah, well you are an early adopter definitely from the moment I met you, I knew that. And then FreedomGPT is your current company that you founded, correct?
John Arrow: That’s correct. And so through the course of running Mutual Mobile, I’ve got a lot of practice in running many companies concurrently. We kind of treated all of our customers as different companies. And so FreedomGPT was one of those that came out of my and my co-founder Tarun’s desire to figure out how we could bring AI to the masses in a way that they’re not just getting pigeonholed into one model, but people get exposure to the whole plethora of different models that are out there. The reason why we wanted to do that is back in November of 2022, when ChatGPT first shipped, we realized there were a lot of situations where the model wasn’t necessarily being honest or truthful, or it would refuse to do things that a computer shouldn’t refuse to do. And so that was the first model. We allowed people to basically ask any question they wanted of AI, and AI would try its best. Over time, it’s evolved to, given that there are literally millions of models out there now, how can we choose the best one for you based on your subjective needs and your objective needs? And that’s why we have all these users today.
Rachel Wolfson: And FreedomGPT also has a crypto element to it. Can you kind of explain what that is?
John Arrow: Yes. So early on, because we had an AI resource that was seemingly uncensored, a lot of web hosts would shut us down. They would say, “Look, if your model doesn’t give the right answer about COVID or about, you name it…” and they would cancel our hosting contract. We realized this was only going to intensify as the models got better and the views got more controversial. And then we said, let’s figure out a way to use decentralized technologies—Web3—to take away the keys from the web hosts. And so we turned to our users. We had millions of people using it at this point, and we said, “Would you be willing to host inference for us locally on your machines?” And our users stood up, and we wanted to reward them for that. So we created a token, and they can still use that token today if they want to use the AI anonymously. Now we’ve brokered deals with web hosts, and inference has become cheap enough that they’re not, you know, slamming the doors anymore if your model doesn’t give the right answer. But early on, it was a wonderful way to bootstrap. That’s what crypto allowed us to do. We would have been paying potentially hundreds of thousands of dollars a month in web hosting costs if our users weren’t willing to kind of take on some of that inference work for us.
Rachel Wolfson: Got it, yeah. So I want to really focus this conversation around AI. And so, first, before we get into some tougher questions, would you say that we are entering this AI age? And if so, what does that look like?
John Arrow: Well, I’m biased because I run our family office, which is called Age of AI. And I would say what’s fascinating about just the nature of humanity is we never know when we’re standing on an exponential function. We’re really bad at thinking about history in terms of exponential patterns. For hundreds of thousands of years of humanity, if you looked at the last hundred years, it was more than likely what the next hundred years was going to look like. Hominids had been using fire for close to two million years, which is wild. Like, rainforests, savannas would catch on fire, and then they would find dead animals and they would go eat the animals. Like, “Oh, this is useful.” But it took hundreds of thousands of years from that moment before humans were able to actually control and start fires on their own. And so it’s tempting to think about: is AI just another technology? Is it like the internet, or is it like the smartphone? And I don’t know if it really is. I think this is different, because whenever you have a new technology, it’s not just the significance of it, it’s also the velocity of how quickly it spreads. Like when the iPhone came out, everybody got the iPhone really, really quickly. That was pretty significant and had that high velocity. But so did Beanie Babies. When Beanie Babies came out—as a 90s kid, I remember them—they spread really quickly, but they didn’t have that much significance. Now there’s a third factor that starts to matter that we’ve never really had before, and it’s the speed of improvement of these devices. When the iPhone came out, there were only incremental improvements. My iPhone today is vastly superior to the iPhone that I had in 2009, but it was incremental improvements. It took many, many years for it to get this much better. It took like five years before you had a front-facing camera, which is wild to think about.
Now, what’s fascinating about AI and the age of AI—that’s different, and maybe the reason why I hesitated to say we’re in the age of AI—is because I think of the age of AI as when you have this recursive self-improving state, where as the technology gets better, it gets better faster and faster. And it’s similar to that exponential curve. I don’t know if we really know whether we’re in it or not, but once we are in it, the reason you’ll know is because the world will look vastly different. It’ll be like the early days of COVID a month in, when unemployment hit 20% and the world looked so different that you couldn’t go out to the grocery store anymore. I think those are the changes that are in store for us. That being said, I think they’re positive changes, but they’re going to be that significant and that intense.
Rachel Wolfson: Right. And I mean, right now maybe we’re only scratching the surface, right, with these AI use cases? I mean, we’re seeing it, but it’s not like our every day is AI. We’re not doing agent-to-agent commerce just yet, but we will eventually get there, and maybe that’s when we’ll be in the age of AI.
John Arrow: That could be when the AIs are interacting with each other. The way that I think about the thing that’s most startling to me—and maybe this speaks to how quickly we get used to things—you mentioned Mutual Mobile. We had close to 400 people working for us, and companies would come to us and spend millions of dollars to build a piece of technology. Today, if I hadn’t sold that company, I don’t know what we would be doing with those people. I don’t think they would have jobs, because it’s so easy to spin up Claude Code and agents to create new technology several orders of magnitude cheaper and several orders of magnitude faster, which is mind-boggling to me. And even if there were only incremental improvement from here, that would be an absolute game changer. It would change the IT services landscape in a way that people can’t imagine. I mean, there was a time before electricity rolled out across the United States in the 1880s when it just used to get dark at night. There’s a book written about that called “The Last Days of Night.” I think electricity is maybe the closest-fitting analogy for AI. Like, you can’t even think of a time when you lived in a city without electricity. That’s what’s about to happen. But unlike electricity, the velocity of what comes next is going to increase exponentially. So the agent-to-agent transactions that you talk about—yeah, maybe most of the innovation and most of the use cases for AI are for other AIs. So where does that leave us as humans?
Rachel Wolfson: It’s a good question, for sure. And kind of on that note, you recently created a pretty cool AI project—I don’t know if that’s the correct term—but basically you took Scott Adams, who’s the creator of Dilbert the cartoon, and you made an AI clone of him. Explain how that worked.
John Arrow: So Scott Adams was kind of an interesting character. He created Dilbert, and then he became somewhat of a… I’d call him an apolitical commentator. He tried to avoid taking sides on either the left or the right and said, “I just want to analyze the persuasive qualities of both.” And he would do this podcast every day. He would get up at 9:00 AM with this podcast, and millions of people listened to him, and it was quite a special thing. One of the recurring themes that he had throughout his podcast is he said he wanted to achieve immortality through other people creating AIs of him. So he put his corpus of work out on the internet, and he went a step further and basically said, “I want to take all of this work and I want to put it into the public domain to encourage people to create AIs of me after I’m gone. And I want to make it really, really easy.” His exact words were, “I will not come after you if you do this. I want you to use my work. I want you to use my likeness. I put it into the public domain. It’s yours.” And I always thought, “Wow, what a generous contribution. He’s going to be one of the first,” because as builders in the AI space, we have to be really cognizant of the copyright risks. And a lot of the emergent creative work of AIs is really regurgitation of copyrighted material. That’s what happens when you’re experimenting. So the fact that Scott Adams put himself out there and said, “Look, I want other people to do this, let’s see what happens,” I thought was such a gracious thing. Now, unfortunately, shortly after he said that, he was diagnosed with prostate cancer, and he passed away about two months ago. So it was a very quick demise for him. But shortly after he died, I remembered what he said, and I realized I had the unique blend of time, skills, and interest to do what he said and be one of the first people to create an AI of him.
So I did just that, and the results of people seeing the AI version of Scott Adams, where it’s trained on the totality of everything that he put out there publicly, are just amazing. People think it’s the real show. So much so that we even have to remind people at the start of the show: AI Scott Adams says, “I’m still dead, this is not me,” and there’s a disclaimer. And it’s given people a lot of calmness, and I think it’s helped them deal with his death in a productive way. And also—and this was the selfish reason for me wanting to do it—I was curious about his take on current events. It’s been fascinating, with the Iran war going on right now, to hear what Scott Adams would have said. And again, who knows if it’s 100% accurate or not, but it’s the best thing that we’ve got. It’s been extremely controversial, but I feel so fortunate that we’ve gotten to put that out there and do it. And I hope other people decide to donate their likeness to the public domain.
Rachel Wolfson: It’s so interesting that you did that because I think that as we start to enter this age of AI, we’ll probably see more of these use cases. Like what do you think? Do you think when celebrities or, you know, even relatives and loved ones, do you think when they pass away we’ll start seeing these AI clones?
John Arrow: I suspect everything that is today a problem, a concern, an anxiety of humanity is likely going to be solved in this age of AI. From death to disease to you name it, those will go away. Now, they’ll probably be replaced with new anxieties, but the ones we currently have will go away. So I think we’re about to enter this realm of unbelievable abundance. Like we talk about death—one of the worst things about death is when we lose a great mind, that mind’s gone. Think about Albert Einstein, you think about Abraham Lincoln, these amazing figures. We can read their text, in some cases if they were around with audio recording and video, we can go back and do that. But we lose the way they would approach modern times and new problems. With where AI is today—not even if it advances, but where it is today—we can at least get their take on things into the future. And so if that continues, yes, people will still die, but we won’t lose their intellect. And there’s been a fascinating new study that just came out; somebody took a fruit fly brain and uploaded it to a computer. Which, if you think about a human brain, any brain, it follows all the same laws of physics as any machine. So we’re probably not that far off from just simulating and taking somebody’s intellect and figuring out how to make an AI of it, but possibly taking their existence—whatever that means—and putting it into a silicon form factor so that it could persist forever. And so maybe then death even goes away.
Rachel Wolfson: So, are you able to tell us when you created this AI clone of him, what was that process like? How did you do that? Are you able to share with us?
John Arrow: Sure. First of all, I think there’s a huge, just a huge responsibility when you say, “I want to create an AI of somebody,” especially if they were loved or revered by millions of people. That’s a big weight. And so it was an extremely iterative process. We didn’t want to just take what we first created and put it out there. We spent many weeks working on refining it. So the first thing we did is we said, let’s assemble, let’s aggregate everything we can find that’s easily publicly available and give the AI instructions to refer to that: “That is you; that is the soul of it.” Obviously, we can never get access to his inner thoughts, but Scott Adams was a prolific writer. He was an author; he authored, I think, close to two dozen books. So that was all out there. All of his transcripts, thousands and thousands of hours of transcripts from his shows, were out there. We said, let’s assemble all that into one corpus and then give different AI models the ability to sift through it. And so when there’s a new event taking place—unfortunately, hostilities in the Middle East seem to be a recurring theme throughout history—and Scott Adams did this podcast for a decade, we could go back and reference all of the other times when there was an issue with Iran and take that and use it as a starting point. So that was the first thing, just from how the AI thinks. Now, beyond thinking, though, you care about how somebody presents themselves. You care about ideology; you care about things such as the voice, the cadence of how they talk; you care about what their face looks like. And so we were able to blend together the LLM, then voice technology through ElevenLabs, and then, using Fal.ai, lip sync and facial… aggregated into this thing that, if I might say so, looks a lot like Scott Adams. It’s not perfect. If you watch it for more than a minute, you’ll know it’s AI. There’s no doubt about it.
But it was a montage of technologies and allowed for long-form content. One of the things that was mind-boggling to me is that when we started this project, I was hoping there was something already out there we could use. That was the goal. So this is something that I’ve been doing with my brother Zack, and he went and looked long and hard; he’s much more technical about this stuff than I am. He said, “No, there’s nothing like this. There’s something out there that might let you do it for 15 seconds before it goes off the rails.” And so we had to duct-tape and piece together these technologies that weren’t really designed to work with each other, but we made them work with each other. He created this platform, and we were able to do this for Scott Adams, and we’ve been able to do it for other people too. We haven’t released any other videos, because Scott is the only one we can find who’s given this kind of A-to-Z permission. But if others choose to do that, it’s out there. We have a new company now called OtherForm.ai which lets people do exactly that. So if somebody knows they’re going to die and they want to be able to interact with loved ones—I know it’s freaky for a lot of people, kind of like Black Mirror—but for people who want that, they can elect to do it. Similarly, if, say, the estate of Stephen Hawking decides, “Hey, it might be nice to get Hawking’s take on new things,” we could put him into the system too.
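The pipeline John describes above, aggregating a public corpus, retrieving the passages relevant to a new event, and wrapping them in a persona prompt for an LLM before layering on voice (ElevenLabs) and lip sync (Fal.ai), can be sketched roughly as follows. This is a minimal illustration, not the actual OtherForm.ai implementation: the corpus entries, the function names, and the naive word-overlap retrieval are all assumptions made for the sake of the example.

```python
def tokenize(text):
    # Lowercase and strip common punctuation for crude word matching.
    return [w.strip(".,!?:\"").lower() for w in text.split()]

def retrieve(corpus, query, k=2):
    """Rank corpus passages by naive word overlap with the query."""
    q = set(tokenize(query))
    return sorted(corpus, key=lambda p: -len(q & set(tokenize(p))))[:k]

def build_prompt(corpus, question):
    """Assemble a persona prompt: retrieved passages anchor the reply,
    and the show's standing disclaimer is baked into the instructions."""
    context = "\n".join(retrieve(corpus, question))
    return (
        "You are an AI recreation of a public figure who placed his work "
        "in the public domain. Open with the disclaimer: "
        "\"I'm still dead, this is not me.\"\n"
        f"Reference material from his corpus:\n{context}\n"
        f"Current event to analyze: {question}\n"
        "Respond in his voice and cadence."
    )

# Illustrative stand-ins for the real corpus of books and show transcripts.
corpus = [
    "Episode transcript: my framework for analyzing persuasion on both sides.",
    "Episode transcript: why past flare-ups with Iran followed a pattern.",
    "Book excerpt: systems beat goals in almost every domain.",
]

print(build_prompt(corpus, "What happens next in the Iran conflict?"))
```

In the full pipeline as described, the prompt produced here would go to an LLM, the LLM's text reply to a voice model, and the resulting audio to a lip-sync model to render the final video.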
Rachel Wolfson: It’s really, really interesting that you did that. Now you also mentioned that it’s been controversial as well. Can you talk about the mixed reviews that you’ve gotten with Scott Adams’ clone?
John Arrow: It is something, and I get it. There are people that love it. Tons and tons of people have reached out to us about how amazing this is; we get messages every single day saying, “Thank you for doing this.” We also get messages from people saying, “You shouldn’t be doing this. When somebody dies, that should be it.” And there’s this uncanny valley where, if it’s real enough, it can be disturbing. And I get it, and I think that’s unfortunate. But we realized this was something that was his wish, that he wanted to do, that he said over and over again. So I think there are two kinds of people that disagree with us doing it. One is the people saying it just violates the natural order of things—when somebody’s gone, that should be it. The other is a group of people for whom, unfortunately, I think it’s maybe cannibalizing some of their activities, or they think Scott meant something else, and they’re against it. My hope with the project is that there’s a middle ground where this becomes a more normal activity, in the same sense that we can go back and watch somebody’s old videos after they die, their old movies. I think it will hopefully be seen like that. But because this is one of the first, it conjures up these difficult conversations that need to be had. And I can tell you, it needs to be figured out quickly, because the technology is going to get better and better and better, and it will soon be indistinguishable.
Rachel Wolfson: For sure. What are your thoughts on—because I’ve seen like Tim Draper has done this a little bit—with a digital twin. So, right now we’re talking about somebody passes away and you make their clone and they’re gone and maybe it lives on YouTube or X or whatever and you can watch the show. But when somebody’s living and let’s say they get invited to speak at an event in Europe, but they don’t want to actually go to Europe, have you thought about creating a digital twin for speakers or for people like that where that AI can actually be there in their place?
John Arrow: In my mind, that’s the more salient opportunity. So OtherForm.ai can do it for living people as well. I do believe, though, you know, if I was going to go listen to Tim Draper, I would kind of want the real thing. It would take something away; it’s kind of the difference between seeing your favorite band play live versus listening to them on your iPhone or something. So I think it’s very hard to make people okay with the digital twin version if they are aware of it. However, where it really shines is if there’s an interactive element. So Tim’s a very busy man, and rightly so; he’s done so many incredible things. Now, if anybody could have a one-on-one conversation with Tim’s digital clone, that’s a different story entirely. I mean, if you could have access to his mind 24/7, or the mind of Elon Musk or, you know, Tim Ferriss, Naval Ravikant, you name it, that becomes a lot more compelling. And I think where the digital twin product will really shine is in letting them dive into the nuts and bolts of what you’re working on. I don’t know about you, but whenever I need advice, I care way less about getting the absolute expert in the field and more about getting somebody who understands me and the context of the situation that I’m working on and what I’m optimizing for. It’s why, you know, I think I’m more likely to go to you with a question, or you’re likely to come to me, than to find the absolute best person in the world on it. And the cool thing about the digital twin concept is that soon you can have both. You can find that expert, you can over time let them understand your wants, your desires, your needs, and so when you get their advice, it’s custom-tailored to you. Whereas even if you could sit down with Tim Draper for an hour, he’s not going to have that context. And he’s certainly not going to have time, probably, for the repeated follow-ups. But with the digital twin, he absolutely could.
And I think this technology, that concept, will allow people who are influencers and philosophers to help and monetize their time orders of magnitude more efficiently than they can do today by just speaking engagements.
Rachel Wolfson: How would that even look, though? Would it be something where it’s like, I’m on a Zoom call with somebody’s digital twin and they’re giving me advice, or is it going to be a robot? I mean, you know everything about AI in my opinion, so what are your thoughts on that?
John Arrow: You raise an interesting question: what is the form factor? Should it use the incumbent form factor of how we are used to having conversations? When you think about it, the medium matters, the modality matters, right? There are different intensities to conversations. There’s a difference between us doing this face-to-face versus a Zoom call versus emails back and forth. What I suspect will happen is that a whole new modality will emerge: one-on-one conversations will persist, but what most people will choose to do with a digital twin is have always-on audio and visual recording of their life. 24/7, it’ll be encrypted; there’ll be no way for this data to get out. There are already some hardware companies in the space; Apple is likely to be a leader in this regard. But then you can take all that content, you can choose your expert, somebody who follows you, and let them give their analysis. And the great thing about it is you won’t even need to ask for their advice. They’ll have spent their entire day as the Tim Draper 24/7 clone looking at everything and saying, “Look, here are the questions you should be asking. I’m going to tell you how to improve your life without you even needing to ask.” And once we pass that level, it’s an extremely flattening piece of technology. Right now, the only people in the world that can have teams of advisors are presidents and high-level public CEOs. Soon every single person on this planet can have their own expert advisor that is sifting through each and every second of their life. And that will be the ultimate leveling force for humanity.
Rachel Wolfson: Yeah, I mean it’s also interesting, it’s also just crazy and kind of creepy to think about, like you said, kind of Black Mirror.
John Arrow: No, I’m just appreciating the way you said it. It is Black Mirror.
Rachel Wolfson: It is. But on that note, we think of AI and we think of it as something so revolutionary. I mean today we can get into a Waymo and it just, we don’t even need a driver, right? Like we don’t have to have someone drive us around, a car can just drive us to where we need to be. But what are the risks associated with AI that a lot of people may not consider right now?
John Arrow: We’re already seeing how these risks are going to play out. The biggest, most salient problem is bad actors. Whenever there’s a new piece of technology, there is a differential in how well people understand that technology. You’re seeing deepfake technology being used to take advantage of the elderly. You’re seeing it being used in social engineering attempts. So bad people will use good technology to do bad things. That won’t change. Now, fortunately, the same technology can be used as a counter. It can detect these threats and realize, “Look, this is not who they say they are. This is a deepfake video. Don’t give them your credit card information; instead, report it.” So that’s the most immediate thing. One of the things I perceive—and this has just been amazing to me—is that in all of my professional life, I have never had more people reaching out to me looking for new jobs. It’s absolutely stunning how quickly this is happening. In the months before COVID, unemployment was close to 4%, which is what it should be in a healthy economy. And then in the course of one month, we went from 4% to 25% unemployment. It was insane. We’re about to have something just as dramatic, except unlike COVID, there’s not going to be this reversal, I don’t believe. I think it will be more like a tsunami, where right now there’s a certain type of job that AI is really, really good at replacing. But it’s climbing the totem pole. It’s climbing the tree like a fire, and it’s getting hotter and hotter and hotter. Now, at first glance, it seems really bad, and I think it’s really horrible for the people that are affected by it. Over time, though, it’s likely to give people a lot of abundance. The way we think of capitalism, the way we think of society, is going to undergo a fundamental shift where the vast majority of people won’t need to think about working in the conventional sense unless they want to work.
And ordinarily, this would be a good thing, but the speed, the velocity that it’s happening with will cause some real growing pains and catch people off guard. And governments and businesses are going to have to catch up in a way that people aren’t quite sure what that’s going to look like yet.
Rachel Wolfson: It’s really interesting that you say that because, you know, a lot of people are losing their jobs and I guess that is because of the growth of AI. What does that leave for us? Like, are all jobs going to be replaced by AI or are there some jobs out there that you think AI will never be able to replace?
John Arrow: There’s certainly jobs that are going to be more difficult for AI to replace. Things that involve bits, software, are the easiest things to replace. Things that involve atoms, matter, much more difficult to replace. So the things that are probably the safest are mechanical labor: surgeons, that type of thing. The people that are most replaceable are knowledge workers and analysts and people that are literally writing software now. However, there’s this really fascinating thing that’s going on right now where it’s making them way more efficient. Attorneys are a wonderful example. Attorneys are billing at extremely high rates at extremely high margins now because it makes them much more effective. However, soon you won’t be going to your lawyer for advice or for a document review; you’ll just go to your AI. Once that happens, law firms are going to be in a lot of trouble, except in very specific domains, maybe like litigation or something. So I don’t think there’s really anything that AI can’t replace. Where this leaves humans is it means we’re going to have to shift our role of how we think of what brings value to the world. It used to be that people would trade their labor, their time for dollars, and then people started owning land and they would trade the land for dollars. Then capital itself became something where you could lend out your money and make more for it. And then most recently was knowledge, right? If you had a lot of knowledge, you could use that in ways to produce value or use it to arbitrage things. Those are all things that are going to change in the age of AI because the AI can do that for you and it can do it faster and it can do it better and cheaper. So I don’t think anything’s off-limits, but it’s going to be a lot of fun if you can embrace that technology now and you’re going to be on the right side of history versus I think subservient to it if you kind of go kicking and screaming.
Rachel Wolfson: So you mentioned that people are coming to you asking about jobs and what they can do. So with the rise of AI, like what does that leave for us?
John Arrow: For those people I have a single piece of advice: try creating software now with artificial intelligence. It doesn’t need to be Claude, it doesn’t need to be, you know, ChatGPT; just choose any AI you’re using and try to make a piece of software. If you do that, you’re going to be so blown away that it will give you all these ancillary ideas, and you’ll be ahead of 99.9% of people. Most people on this planet, most people in this country, have never made a piece of software or a website. You can do that now in 10 minutes. And once you do, you’ll have all these other business ideas; you’ll figure out how to make companies more efficient. So I think the best move, if your job is being taken by AI, is to go figure out how to use it so you can create a new one. When the internal combustion engine was invented, it led to a huge decline in the number of jobs related to keeping horses healthy and riding them, because the car became mainstream. That happened over the course of decades, and even today that job market never recovered. AI is going to happen much quicker, right? It’s got that significance and it’s got that velocity. And the velocity is the dangerous part. When things are slow enough, people can adapt. But because this is going to hit us so quickly, the only way you can adapt, I think, is if you’re part of that change. Have you tried making any software yet?
Rachel Wolfson: No.
John Arrow: It’s so addicting.
Rachel Wolfson: Well, you’re going to have to… because, you know, you say it like it’s an easy thing, right? But I wouldn’t even know where to start. I mean, I’ve used ChatGPT, asked questions and this and that, but how would I make a piece of software?
John Arrow: It’s gotten so much better. Two years ago, I would have had the same problem; it probably would have been easier for me to just open an editor and write the code myself. Now it will guide you. It’s at the point where it’s easier to tell an AI what you want from a piece of software than to tell an engineer or a product manager. It’s easier to talk to the AI, and it can iterate 24/7 as much as you want, and it never gets frustrated with what you ask, or if you want a pixel moved from here to over there. It will tell you how to do it; the instructions are built in, and it’s gotten that good. If I were to extrapolate where it’s going to be in a few months, though: soon you won’t even need to say “I want a piece of software.” You’ll just say “I want this business outcome.” There’s this idea I’ve been tantalized by: the infinite money glitch. The moment the world changes and there’s no going back is the moment you can prompt an AI agent to go make a dollar for you. Because once you can do one dollar, you can spin up tens of thousands, tens of millions of instances of that, and we are almost right there. This is the fun time, when 99% of the population hasn’t tried it yet. So if you just say, “Hey, I want to go create this piece of software,” it will tell you how to do it, which is so cool. You don’t need anybody’s help. It’s amazing to me that there are even AI consultants out there, because the AI itself is better than any person who could teach you how to use it.
Rachel Wolfson: Okay, you’ve said two interesting things, but since I also want to talk about mistakes, let’s touch on that now. With AI, I mean, it is capable of making mistakes, right? I was just speaking with Chandler Feng of T54 and we were talking about agentic commerce. Say we want to buy a $6 coffee, but the only option is a $7 coffee, and the agent buys it even though you set the limit at six. That’s a mistake, right? That’s $1 over your budget. What about these mistakes that agents can make on a user’s behalf?
John Arrow: There’s this joke when you’re prompting an agent to do something for you where you type in the prompt—maybe it’s “I need a new website for my coffee shop or a new point-of-sale system”—and then at the end of the prompt you type “Make no mistakes.” It sounds like a joke, but it improves the outcome right now, because the agent will go back and look and say, “Did I do things correctly?” It has this built-in error checking. I would say even if you don’t include that, it’s still better than most humans. It will make mistakes. The trick is you don’t want to give an AI too much rope right now. I would not give it access to your crypto wallet; there have been instances of agents divulging wallet details or emptying a wallet that way. I wouldn’t give it access to send out emails to your entire listserv right away. But I certainly give it access to my email, I put in parameters of what it can and can’t do, and it will make mistakes. But it’s very unlikely to be something that disastrous, and it’s certainly likely to be better than any human incumbent you gave that same task to.
Rachel Wolfson: Okay, now let’s get back to the other point I wanted to ask you about. You mentioned an agent creating a US dollar.
John Arrow: Yes.
Rachel Wolfson: So that’s interesting. Does that mean that we’ll eventually get to a point where agents are just going to be able to spin up money for users to use? Is that what you mean?
John Arrow: Pretty much. When this moment happens—and I haven’t heard anybody else discuss this moment—I believe it’s the Rubicon: once we cross it, society is going to fundamentally change. Because right now, almost all the capital made in the world comes from some type of value creation, and that’s predominantly done by humans. When an AI can make $1, it can make $1 billion very, very soon thereafter. And soon it will do more value creation than all of humanity can. At that point—to your original question—what do humans do? What’s going to be our responsibility? And we are so, so close to that. I open up ChatGPT agent mode once a week to check how close it is, and I say, “Go out there and make a dollar.” It’s getting closer and closer. It’s not able to do it now, and I don’t think it will be next week, but it will be able to do it soon. And those opportunities will be very short-lived: there’ll be tons of people out there taking advantage of whatever that one opportunity was, and that will close the door on it. There’ll be others, though, and soon AIs will be able to come up with ideas, execute on those ideas, build businesses on them, and sell those businesses faster than a human can. And the buyer will likely be other AIs, to your point, because who best knows what an AI wants? Probably another AI. We’re just watching the show at that point.
Rachel Wolfson: So given this potential for AIs to eventually create fiat, would you say that’s another enticing reason for people to think about investing in, say, Bitcoin? Because given that Bitcoin has a limited supply, is an AI ever going to be able to spin up cryptocurrency?
John Arrow: Well, the reason we need fiat is interesting, right? We need fiat because we have to manage scarce resources. The first thing they taught us on my first day as an economics major at the University of Texas is that people respond to incentives, and we need money to manage scarce resources. Well, people will always respond to incentives, but once everything’s abundant, we don’t really need the money as much. And I think crypto factors into this in a pretty fascinating way. Most of the transactions AIs make are going to be really, really small, and they’re going to need to be permissionless. Early on, AIs will largely be buying access to proprietary datasets, and buying things that allow them to interact with the physical world, largely getting humans to do things on their behalf. And at that point, probably the best way to facilitate small, permissionless transactions is crypto. So there’s no reason an AI can’t do it. It also takes care of a lot of the difficulty around KYC and AML if the AI is operating in the crypto realm rather than the dollar realm; it doesn’t even need to think about an on-ramp or an off-ramp, so it becomes much easier. So I think crypto and AI are a natural confluence, and agentic transactions are where I think crypto is going to get its groove back. And it’s going to be extraordinarily necessary. The natural limiting function for most AI training, and even inference, is energy. Well, crypto’s proof-of-work, at least for Bitcoin, is all about energy. So it makes sense that we have a unit that, instead of being backed by the full faith of the United States government, is backed by the full faith of the joule, right? It should be tied more closely to energy than to any government.
Rachel Wolfson: Right. And I’ve seen, and I’m sure you have as well, predictions that AI agents will be the biggest users of stablecoins, for instance. I mean, I use stablecoins; they’re great for cross-border payments. But given that AI agents are so good at transacting with these decentralized elements, do you think AI agents will be the biggest users of stablecoins?
John Arrow: I suspect AI agents are going to prefer non-stablecoins initially, because one of the things I find so natural about crypto is that it had the fundraising element baked in. People could speculate on a token; the token might go up, it might go nowhere, but it allowed capital allocation in a really efficient way. And a lot of these agents are going to need to convince humans initially, and later other agents, that they’re worth spending money on. Many of the agents are going to incur enormous compute and inference costs, and the way they’re going to fund that is by operating similarly to other capital markets. They’re going to have their own little public offering and say, “If you believe in what I’m doing as an agent, I’ll sell you a piece of future proceeds.” That’ll be done in a decentralized, completely auditable way, so that if you buy in with 0.2 of a Bitcoin and we grow 100x, you’re going to get out 20 Bitcoin. That would be a reason for people to speculate on these. I think stablecoins become really important later, once the markets mature and capital fundraising isn’t as necessary. But I suspect we’re going to see a whole other meme-coin situation because of these agents in short order.
Rachel Wolfson: Interesting, yeah. Shifting gears a little bit: we talked about the AI clone you’ve done with Scott Adams, and we’ve talked about digital twins. What are some other really interesting use cases on your radar right now when it comes to AI?
John Arrow: One thing we’re looking at right now is how you can replace SaaS companies as a whole with AI products. This is a space a lot of people are looking hard at. We’re all used to these technology products that we have to purchase, and it’s unfortunate because you have this subscription, you use it, then you don’t use it again and you kind of forget it’s there. So one of the things we’re doing right now is creating an analyst firm that will systematically look at SaaS companies and say, “How can we use tools like Claude Code and other agentic features to emulate those products, in a way that doesn’t violate patents or copyrights, and put that out there in an open-source fashion so that anybody can access it?” Right now, my brother and I are working on a suite of products that effectively gives you open-source access to everything Intuit has built. So we have TurboTax, we have QuickBooks… these are legacy pieces of software that only get updated occasionally and that everybody kind of hates using. I’ve never met someone who uses Intuit’s products and really loves them, but they’re a necessary evil. So we’re creating a suite of products that effectively open sources that, and when we’re done, we’re going to put it on GitHub. Then we’re going to write an analyst paper with a short thesis on why we think companies like Intuit are probably in for a rough ride. We’ll enter a short position on the stock using put options, disclose that we are short the stock, and then publish it. And then let the market forces do the work. I think that does two things: one, it forces the company to innovate rather than rest on its laurels. Legacy SaaS providers are going to have to ask, “How do we remain relevant in the age of AI?”—like Salesforce, like SAP, like Intuit.
And two, it will allow the consumers of that software to not spend so much of their money on these products. They’ll be able to use them for free and to innovate on them. And people who agree with us can also choose to mirror our trades.
Rachel Wolfson: Right, right. So in a way, although we’re seeing a lot of job losses because of AI, at the same time, if we can create these efficiencies and not have to pay for things like QuickBooks and, you know, lawyers, it’s kind of nice, right, for people to see that advance.
John Arrow: It is. And on the lawyer front and on the doctor front, where people are using AI very heavily, I don’t think those two professions are going to go peacefully into the night. I think they’re going to go kicking and screaming. There’s a piece of legislation in New York right now that’s trying to ban AIs from giving any advice in the legal realm, the medical realm, the dental realm, and the engineering realm. And again, we know why they’re doing it. The reason they say they’re doing it is that they don’t want people to be harmed if the AI gives an incorrect piece of information, which, to be fair, does happen and could have disastrous results. That being said, if you look at who’s really behind the bill, it’s lobbying agencies for those professional fields, practicing a form of protectionism whose cost is borne by consumers, right? Consumers are being forced to potentially keep paying for those professionals. The benefit is going to be so huge, though, that there’s no way those lobbying groups are going to be able to keep people out. I think lawyers will be the most successful, but even then, they’re on their way out. In Austin, the taxi lobby was a huge, huge opponent of Uber, and they kept Uber out of Austin for so long. You think of Austin, Texas as an innovative tech hub, but we were just about the last place in the United States to get Uber because of that taxi lobby. AI is way more useful than Uber in so many ways. So I don’t think there’s going to be any profession…
Rachel Wolfson: Given that, and you bring up a really important point: how should AI be regulated? And should it be regulated at all?
John Arrow: It should be regulated. It needs to be regulated, but it needs to be done with eyes wide open. We can’t do it from a protectionist standpoint. We can’t say, “Oh, there’s a group of people, like lawyers, who are disproportionately being negatively affected.” If that’s true, let’s figure out how to retrain those people, let’s figure out how to make them more effective with AI. But we need to look at things for the benefit of society as a whole. There are things that are extremely illegal that you can do with AI, but most of them are covered by existing laws. You can’t reproduce copyrighted material; you can’t violate people’s copyrights. You can’t try to defraud somebody with AI; that’s already illegal under existing laws. You can’t use AI to call in bomb threats; that’s already illegal. So there’s very little that AI can do that’s new and isn’t already regulated in some fashion. There are some things, but those are the exception to the rule, and those exceptions are what we need to regulate.
Rachel Wolfson: Right, right, for sure. John, we’re running low on time so is there anything that you want our listeners to know that I didn’t already ask you about or that we didn’t cover?
John Arrow: I would say give us your feedback on AI Scott Adams. We love that, we want to figure out how to make the model better. It reads all the comments, so when he posts something we try to incorporate that. And second of all, please go out and try programming something with AI. You’ll be amazed at how easy it is and how fun it is.
Rachel Wolfson: Well I’m going to try it, and if I… because you know, you may think this is easy, but it sounds complicated, so if I need help I’m going to reach out to you, John.
John Arrow: I’ll be standing by.
Rachel Wolfson: Great. If our listeners want to get in touch with you, are you on social media or is there a website?
John Arrow: I’m on X; I’m at @johnarrow on X. Or feel free to email me; I’ll put it out there: [email protected].
Rachel Wolfson: Great. John, always wonderful speaking with you. We’re going to do another follow-up I’m sure soon and this has just been a really great conversation so thank you.
John Arrow: Thanks, Rachel, for having me.
Rachel Wolfson: Thanks. Special thanks to Four Labs Digital for producing Deep Dive Podcast. I’d also like to thank the sponsors behind Deep Dive. You can click the links in the show notes to learn more about each of the initiatives from these sponsors. Finally, thanks to the listeners for tuning in. Please be sure to subscribe, like, and share.
