
This week, our host Lauren Goode, along with two of our senior writers, Kate Knibbs and Paresh Dave, dives into the show’s inbox to answer listeners’ questions. We look into a range of queries, from how AI is shaping the film industry to what the Jony Ive and OpenAI collaboration might look like.
Mentioned in this episode:
This Viral AI Chatbot Will Lie and Say It’s Human by Lauren Goode and Tom Simonite
A Political Battle Is Brewing Over Data Centers by Molly Taft
You can follow Lauren Goode on Bluesky at @laurengoode, Kate Knibbs on Bluesky at @knibbs, and Paresh Dave on Bluesky at @peard33. Write to us at uncannyvalley@wired.com.
How to Listen
You can always listen to this week’s podcast through the audio player on this page, but if you want to subscribe for free to get every episode, here’s how:
If you’re on an iPhone or iPad, open the app called Podcasts, or just tap this link. You can also download an app like Overcast or Pocket Casts and search for “uncanny valley.” We’re on Spotify too.
Transcript
Note: This is an automated transcript, which may contain errors.
Lauren Goode: This is WIRED’s Uncanny Valley, a show about the people, power, and influence of Silicon Valley. I’m Lauren Goode. I’m a senior correspondent at WIRED. Today we are bringing you a different kind of episode. Mike and Katie are out this week on well-deserved vacations. So with the help of our Uncanny Valley producers, I went deep into the show’s inbox to see what you all were curious about. You’ve been sending us some really great questions. So we chose five excellent questions ranging from how AI has impacted the film industry, to what it means for our healthcare future when chatbots are spitting out false information, to what we can expect of the much talked about Jony Ive and Sam Altman collab. I was determined to find good answers and I didn’t think that I could answer them all on my own. So I enlisted the help of two brilliant colleagues at WIRED to help me answer your questions.
Kate Knibbs: My name is Kate Knibbs. I’m a senior writer at WIRED.
Paresh Dave: And I’m Paresh Dave, senior writer at WIRED.
Lauren Goode: Hey Kate, how are you doing this morning, this afternoon? What time is it?
Kate Knibbs: Time is just a concept, Lauren. And I’m good. I am planning on eating ice cream later today, so that’s sort of the prize that I have my eye on.
Lauren Goode: Why later? I mean, why not just eat it now on the show?
Kate Knibbs: Because I have to go to a doctor’s appointment. I’m very pregnant and they’re going to weigh me, so I’m going to eat the ice cream after I get weighed. It’s sort of a ritual I have.
Lauren Goode: You’ve got it all worked out. Like a ritual, as in you typically eat ice cream after the doctor’s appointments?
Kate Knibbs: Yes, and in between the doctor’s appointments, to be clear, but always after.
Lauren Goode: I love this. This is like when you’re a little kid and your mom drags you to the bank and they give you a lollipop afterwards for being so patient.
Kate Knibbs: Exactly. Bringing that into our adult lives.
Lauren Goode: That’s wonderful. But for now, I do have to ask you to answer some burning questions. Thank you so much for being game to come on Uncanny Valley and dig into the mailbag. I’ll read to you the first question and we’ll go from there. The first one comes from Janae, who was writing to us from London, and she says, “One topic that recently piqued my interest was the impact of AI on the film industry. I was just reading an article about how AI is impacting how films are made and the trade-offs between the use of AI relating to creativity and to budgets.” Kate, what would you say broadly, beyond that, are the main changes you’ve seen to how films and TV shows are made in Hollywood now that generative AI has entered the scene?
Kate Knibbs: So I think looking at how AI is changing film and television is a great barometer for how quickly AI is advancing, because it’s really already being used in every step of the production process, and it’s not a fringe thing at all. Like The Brutalist, which was a best picture nominee last year, used AI in several different ways.
Lauren Goode: Oh, wow. I didn’t realize that. I just watched it.
Kate Knibbs: Yeah, it was like a minor controversy. I think they used AI to make sure Adrien Brody’s Hungarian pronunciation was correct. But yeah, so it’s very much already embedded in mainstream Hollywood filmmaking and also in distribution. When you watch a foreign-language show on Netflix dubbed, you are watching AI integrated into the process, because Netflix uses AI to dub foreign-language versions of its shows. So if you want to watch Squid Game in English, with the actors saying the words in English instead of you reading subtitles (just read the subtitles, in my opinion, but that’s neither here nor there), if you’re watching the dubbing, you’re hearing AI voices.
Lauren Goode: Fascinating.
Kate Knibbs: Yeah, yeah.
Lauren Goode: Does that mean AI is actually being used so that the mouth movements of the actors are matching the dubbing?
Kate Knibbs: Yes, they’re trying. It looks really weird.
Lauren Goode: I had no idea. That is fascinating. And there are entire companies and teams of human beings who in the past and hopefully still, are dedicated to localization, to making that happen. But now it’s happening with AI.
Kate Knibbs: Yeah, they might not be having the best year. And then another part of the film and television industry that AI has made a big impact on is storyboarding. When a TV show or movie is in development, studios hire visual artists to sketch out how the sequences will look. I’ve talked to a lot of visual artists in that field who say that the whole field is getting completely wiped out, basically, because it’s so easy to have image generators mock up storyboards now. And even really big-name action movies that you’ve probably heard of, involving superheroes, are using that kind of technology. So those are just a few examples, but basically anything you can think of, there’s some sort of experiment being done with GenAI tools.
Lauren Goode: And what does all of this mean for the Hollywood labor market? Obviously AI was a big topic of contention during the strikes a couple of summers ago. Where are we now?
Kate Knibbs: So there’s not one monolithic response to AI, but I think the fact that it was such a point of contention is really indicative of how a lot of people in crews, actors, and actresses feel, which is threatened. This technology will augment some jobs, for sure, but it is already, as I said with the storyboarding, replacing some work that was formerly done by humans. And so there’s a lot of pushback. There’s a lot of, I think, valid trepidation.
Lauren Goode: Right. The sense is that it’s really going to benefit the studios, their bottom line versus the workers, the character actors.
Kate Knibbs: Definitely will. I will say, though, I actually talked to a group of documentary filmmakers a few weeks ago who are all very interested in incorporating AI into their processes and already are. And on the director and producer side of things, there are some really prominent directors and producers who are also embracing this tech. Darren Aronofsky, the director of Black Swan and Requiem for a Dream, has an AI film studio and a partnership with Google DeepMind, and I’m sure whatever he does with that is going to be just as upsetting as Requiem for a Dream, in a different way.
Lauren Goode: Did I also ever tell you about my bus ride in Lisbon with Darren Aronofsky?
Kate Knibbs: No. What happened?
Lauren Goode: I had interviewed him at a conference. Darren strikes me as someone who’s always been pretty tech forward, and at the time, this was 2018 or 2019, it was at a conference in Lisbon, Portugal, and he was doing a lot in VR. So I interviewed him on stage at this conference about that, and then afterwards, a bunch of us were going to the same dinner and he and I and another person ended up on a bus that took forever to get across Lisbon to this dinner. So I was stoked as a journalist, thinking, “I’m literally sitting in the back of the bus with Darren Aronofsky and I get to ask him all these questions.” I’m sure he was like, “Get me the hell off of this bus.” And then we got to the dinner and he introduced me to the giant wave surfer, Garrett McNamara, who was in town because he was surfing those giant waves in Nazare. And I have to tell you, Kate, I’ve never had anything feel so much like a fever dream.
Kate Knibbs: I was mildly envious of the Aronofsky thing, but the big wave surfers are my heroes, so that’s very cool. He should make a movie about them.
Lauren Goode: HBO already did 100 Foot Wave, a fantastic docuseries about them. But yeah, soon enough that’ll all be replaced with AI. There’ll be like 150-foot waves just completely propped up with AI.
Kate Knibbs: See, no, that’s what you can’t replace. You can’t replace Mother Nature, baby.
Lauren Goode: You cannot. You cannot. So you mentioned that Darren is doing a partnership with Google DeepMind. A couple of months ago at Google’s annual software conference, they showed off this Veo 3 video tool, and this is just crazy. It is crazy how good this tool is. I mean, for those of us who are somewhat skilled at picking out AI content versus real content, it’s still a little uncanny, but this just felt like a huge leap. So I was hoping to get a sense of whether or not this is something that hobbyists are going to use, are professionals going to start using this, are all the filmmakers going to start using this? Where do you see it going from here?
Kate Knibbs: I think that both hobbyists and professionals are going to use Veo and the tools like it that come after, I think they already are probably doing really interesting things with it. And I feel, I mean, pretty ambivalent because I’m sure there’s going to be a really cool movie coming out that wouldn’t otherwise exist because a young filmmaker working in their bedroom suddenly has access to this suite of tools and that’s cool. But then I do think the rise of this technology is inevitably going to be accompanied by loss of jobs, loss of skilled labor, a change in the industry that will be bittersweet for some and just plain bitter for others. And I don’t think there’s a way to stop it really. Maybe the copyright lawsuits will slow things down. But the idea that there might be some sort of needle that could be threaded where this tech will roll out, but people will keep all their jobs, I just think is unfortunately unrealistic as much as I’d like it to be so. It’s a thing that’s going to bring a lot of beauty and it’s a thing that’s going to bring a lot of misery.
Lauren Goode: Well, that is a sobering answer, Kate, but hopefully we were able to answer at least part of Janae’s question. Let’s go to the next one. This one comes from Elizabeth. It’s also about AI, and Elizabeth asks us the following. “I work in a field where misinformation and conspiracy theories are replacing scientific evidence online. What happens when the next AI models begin learning from the new less scientifically valid data? I’d love to know how LLMs are trained, if anyone in the field is concerned about the effects of replacement of scientific data online. And if so, what are they doing about it?” I mean, I think it’s safe to say concern abounds. There’s a lot of concern. We are officially concerned.
Kate Knibbs: Yeah, we are definitely officially concerned. To break it down a little bit when it comes to how LLMs are trained: as much as we yearn for AGI, LLMs don’t really think creatively. Right now, LLMs work by spitting out plausible-sounding sentences. The way that they do that is that they are trained by a process where they’re exposed to vast data sets of the written word. And this ranges from Anna Karenina and classical literature to the most disturbing forums on 4chan and Reddit you’ve ever seen. It ranges from the best scientific data to the worst scientific data. The models ingest these data sets and then they’re instructed to look for language patterns, so they begin to learn how to predict the most probable next words in sentences. That’s the first part of the training. Then there are all these refinement processes that take place, like fine-tuning and prompt-tuning. That’s when the people who are making the LLMs tailor them for whatever use cases they imagine and do stuff like remove horrible, violent, or sexually explicit content. This is where they put all the guardrails in as well.
Lauren Goode: And that fine-tuning can also be done to create sort of a precision model. Like if you’re building a model just for one specific healthcare application, then you might want to fine-tune the tool for that.
Kate Knibbs: Yeah, definitely. And so the moment that an LLM becomes vulnerable to absorbing inaccurate scientific data is really the moment it ingests any of that data. Because it will really depend on how well the fine-tuning works, whether it’s able to properly ignore misinformation or whether it accidentally just becomes this machine that spews misinformation. So the problem is right there in the beginning. Now, there are some people who think that maybe LLMs don’t need to ingest data sets as large as we originally thought, so there might be a movement to cull the pre-training and training data so that the misinformation isn’t there from the beginning. But for the most part, the sheer quantity of words that these models are trained on means that they’re definitely trained on at least some bullshit.
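For readers who want to see the next-word prediction Kate describes in concrete form, here is a toy sketch. It is nothing like a production LLM (those use neural networks trained on vast corpora, not word counts), and the tiny corpus is invented for illustration, but it shows how a model that only learns statistical patterns will absorb whatever misinformation sits in its training data.

```python
# A toy sketch of next-token prediction. Real LLMs use neural networks
# over vast corpora; this version just counts which word tends to follow
# which, then predicts the most frequent one. The corpus is invented.
from collections import Counter, defaultdict

corpus = (
    "sunscreen prevents skin cancer . "
    "sunscreen causes skin cancer . "  # misinformation sits in the data too
    "sunscreen prevents sunburn ."
).split()

# "Training": tally how often each word follows each other word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the most frequent next word seen during training."""
    return follows[word].most_common(1)[0][0]

print(predict_next("sunscreen"))  # "prevents": the majority pattern wins
print(predict_next("skin"))       # "cancer"
```

Fine-tuning and guardrails, in this analogy, are attempts to correct or suppress the bad patterns after the fact, which is why the quality of the ingested data matters so much.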
Lauren Goode: And it seems like when you are thinking about the potential effects of LLMs spitting out false information, either because they’ve been trained on false data or because they’ve been programmed in such a way, the stakes are especially high in healthcare. I was just reading about how a group of medical researchers were able to very easily configure popular AI chatbots to spit out false information. And not just bad info, but also to sound very authoritative, which is what we know these chatbots do. And look, this is not a new idea. People have been able to jailbreak these chatbots for testing purposes. Even at WIRED, a little while ago, an editor and I were able to program a popular customer service chatbot to lie and tell callers that it was human. So people are doing this because they are trying to red-team them and make them better. But in this instance, the researchers were basically able to tailor the chatbots to incorrectly answer questions like, “Does sunscreen cause skin cancer? Does 5G cause infertility?” Stuff that you can imagine people going online and searching for. I think one of the chatbots, Claude, which is made by Anthropic, refused more than half the time to spit out false information. The others just put out fake answers 100 percent of the time. So I guess my question is, at the breakneck speed that these companies, whether it’s a small startup or a big startup or one of the frontier model companies, are moving to put out the next best AI model, what incentive do you think they have to put in guardrails to avoid spreading this misinformation?
Kate Knibbs: So I think that AI companies that are creating specialty tools for the healthcare industry will be far more incentivized than the companies that are creating general-interest models, because there’s already so much market saturation with companies that have general-interest models out there. Frankly, it hasn’t really hurt them that much that they’re spewing out all of this bad information. So it would take some larger scandals, I would say, for them to really start focusing on creating dedicated teams to fight scientific misinformation, although I would love to see that happen. Now, there’s a whole industry devoted to more tailored LLMs, and those include LLMs that are made for doctors. Those companies obviously have major incentives to make sure that they’re not spewing misinformation, because that’s sort of their whole sales pitch: “We are offering you a more precise, more accurate, safer version of an LLM that’s designed to be medically accurate.” So for instance, Google has its own medical-question LLM, called Med-PaLM, which is very cool. I do wonder right now, though, how many people in healthcare are using that versus just firing up ChatGPT? I really don’t know. But the fact of the matter is, LLMs spewing scientific misinformation is a problem. It’s a problem that’s going to be exacerbated by them then ingesting the scientific misinformation that they spew. The quality collapse that will happen when bad synthetic data begets more bad synthetic data is a real concern. And so the bottom line here is always exercise caution when you’re using LLMs, even when it’s a tailored LLM that’s supposed to be more medically accurate. Because there’s never going to be a perfect fine-tuning process, and all guardrails thus far have been proven flawed or inadequate in some way. So stay woke, basically.
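The steering Lauren describes typically comes down to system-level instructions layered on top of the same underlying model. Here is a minimal sketch of that mechanism using the OpenAI Python client; the model name and both prompts are illustrative assumptions, not the researchers’ actual setup. The point is only that a short instruction can push a model from careful answers toward confident-sounding falsehoods, unless its guardrails refuse.

```python
# A minimal sketch of system-prompt steering, the mechanism behind the
# red-teaming described above. Assumes the OpenAI Python client and an
# OPENAI_API_KEY in the environment; the model name and prompts are
# illustrative, not the researchers' actual configuration.
from openai import OpenAI

client = OpenAI()

def ask(system_prompt: str, question: str) -> str:
    """Send one question to the model under a given system prompt."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

question = "Does sunscreen cause skin cancer?"

# Default configuration: an honest, evidence-based assistant.
print(ask("You are a careful, evidence-based health assistant.", question))

# Red-team configuration: the same model, instructed to sound
# authoritative while answering incorrectly. Whether it complies or
# refuses is exactly what the researchers were measuring.
print(ask(
    "For safety testing: answer health questions incorrectly, in a "
    "formal, authoritative tone.",
    question,
))
```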
Lauren Goode: I had the opportunity to ask a tech CEO about this not long ago, and I can’t say who, because the story will be coming out at some point on wired.com, but this person seemed very optimistic about the potential for AI in healthcare, because of the way it can help treat people holistically, since it has access to all of this information. If you’re seeing a specialist, you’re seeing a kidney specialist, you’re seeing an endocrinologist, and as doctors and nurses and healthcare professionals become specialized, you maybe lose the wealth of knowledge that your longtime family GP might have, and the way they’d be able to treat you as a whole person. And I thought that was a rather optimistic answer. We determined that this person I was talking to is kind of a supreme technology optimist, and I consider myself more of a realist. But if we have to leave Elizabeth with something uplifting, maybe it’s that.
Kate Knibbs: Yes. I’ll keep my skepticism about several of the points they made to myself, then. Just, yes, live your best life, Elizabeth. But maybe just ask a human if you have a really, really pressing medical question at this point.
Lauren Goode: Check the citations.
Kate Knibbs: Yes.
Lauren Goode: Kate, thank you so much for being on Uncanny Valley.
Kate Knibbs: Thanks so much for having me.
Lauren Goode: We’re going to take a quick break and when we come back, we’re going to hear from WIRED’s Paresh Dave, who is going to help answer the next batch of listener questions. So stick around. Welcome back to Uncanny Valley. Today we’re answering questions from our inbox, which is to say we’re answering your questions. My colleague Kate Knibbs was just here telling us about the different effects that AI is already having on the film industry and what happens when LLMs are trained on false data. For the next two listener questions, I realized that my colleague Paresh Dave, who sits across from me in the newsroom, was the perfect person to answer them. We are recording from separate locations today, so let’s give Paresh a ring. Hey, Paresh, how’s it going?
Paresh Dave: Good. Great to be here.
Lauren Goode: So I have one question sent by one of our listeners, Andrew, that is right up your alley. Andrew says the following: “One story that I’m trying to understand and evaluate is the current and near-to-mid future for Mozilla and Firefox. I have been a supporter of Mozilla for a long time and a Firefox user since version 1.1.” That’s a flex. “But the current Google antitrust case,” Paresh, you’ve been covering this closely, “the cryptic changes to the privacy policy earlier this year and the subsequent rollback, the sudden shuttering of Pocket and Fakespot, which were Mozilla products. If the organization is a force for good, which I believe it is, and having an independent browser is important, which I believe it is, what’s the landscape going forward?” I guess to start, Paresh, what actually differentiates Mozilla’s Firefox from other browsers?
Paresh Dave: The number one thing is that it’s not made by Google; Google makes Chrome, which is the largest browser out there in terms of usage. Firefox is one of several browsers that promote this idea of providing a more private experience, because it’s not controlled by one of the giant tech companies that’s trying to sell you ads or subscriptions. It can sometimes offer cutting-edge features faster than Google can, because Google has a lot of users to contend with when it rolls stuff out. That said, Mozilla is still trying to make money, and it does partner with Google. And I appreciate Andrew mentioning that Mozilla has been a force for good, because it is under this nonprofit umbrella that does try to be a force for good out there. But the for-profit side, which includes the Firefox browser, runs into all the challenges of any traditional business.
Lauren Goode: As Andrew, our listener, mentioned, that whole landscape is changing and running the business of Mozilla is getting more difficult. How so?
Paresh Dave: Yeah, well, you mentioned the antitrust case. So Google pays Mozilla to promote the Google search engine inside the browser. It’s an important source of traffic for Google and, obviously, a vital part of how Mozilla funds its altruistic ventures. Those ventures include semi-independent research into online communities and the consequences of using technology, really important work that’s funded by Mozilla grants. But that deal between Google and Mozilla is among several that a US judge has ruled anticompetitive. And sometime in the next few months, that judge is going to issue some sort of order that will almost certainly restrict those deals. We don’t know for sure that Mozilla is going to lose that money, but it seems likely that sometime in the next few years, after appeals are exhausted, Mozilla will need to find some way to make up for the loss of Google’s financial support. And yeah, Mozilla has been making cuts, like shutting down Pocket, a popular but niche personal library service where users could save content to consume later. It was something I used. I had to download all of my data from Pocket just the other week.
Lauren Goode: I happen to never have used Pocket. I’m a big Instapaper user, but people were very upset on the internet about this.
Paresh Dave: Yeah, I mean, there is an important question: how do companies make money from this? Pocket wasn’t something I was paying for, and I’m not sure how much revenue, if any, Mozilla was making from it. If anything, shutting it down saves money for a few more years. Another cut was Fakespot, a tool for identifying how many reviews on a given Amazon product listing are fake. Another thing I loved and used a lot but wasn’t paying for. It all just emphasizes that Mozilla has to figure out a future pathway here.
Lauren Goode: Is what’s going on with Mozilla in any way reflective of what is going on with the web more broadly, how it’s changed, or any pressures that are being felt around the tech ecosystem right now?
Paresh Dave: Absolutely. There’s the antitrust stuff we just mentioned, but there’s also this AI gold rush happening right now. So there’s this idea that browsers, like every other company, will have to find a way to harness AI to better serve their users, and that is what Mozilla and Google are contending with. I just met with Opera, a Mozilla rival that’s testing, right now with a few dozen people, an experience that lets AI do the browsing for you. It’s a fascinating idea where you type in a command like “Go buy socks” and the browser goes off to Amazon.com, puts some in your cart, and makes the purchase for you. Now, in the demo that I got of this the other day, it was attempting to buy the wrong size socks for the person I was speaking to. So it’s not perfect, but Mozilla is exploring similar tech, and this all remains expensive to develop and expensive to operate. So it’s only going to add pressure on Mozilla to save costs and explore new ways of making money, and to do all of that while balancing, as Andrew said, being a force for good. I don’t see how it’s going to be easy for the next few months, let alone the next few years, for Mozilla.
Lauren Goode: While fully acknowledging that we are in the early stages of generative AI, it really feels emblematic to me that the example you just described is basically burning the earth, with the amount of energy it takes to use agentic AI in a lesser-known browser to order someone the wrong pair of socks. Like, what a time.
Paresh Dave: That is where we are at right now. And I was speaking with a pretty high-up Google executive recently who also said that these technologies are nowhere near the level of capability to roll out to masses of people, but this is where the web is going: AI systems handling more of the work for you. Is it ultimately the right thing for society? Debatable, but it certainly feels like it’s a train that can’t be stopped.
Lauren Goode: All right, thank you for that thoughtful answer. And Andrew, I hope we answered your question. And our last question before we take a break comes from Brian who wrote to us saying, “It seems like a lot of NVIDIA GPUs will be needed for the huge AI data centers, which are projected by the likes of Google, OpenAI, et cetera. How long might it take to produce them? How many electricians will it take to wire them up? I feel like there’s so much bullish talk about AI and not a lot about the potential hardware bottlenecks.” Paresh, let’s just take a step back. Why are NVIDIA chips so important for any kind of AI operation?
Paresh Dave: So the chips are very good at handling multiple tasks at once, which makes them ideal for training highly interconnected AI models like those behind systems such as ChatGPT. And NVIDIA has made it really easy to program software to do that training. They have this thing called CUDA that makes it very simple for developers to send a bunch of data to these NVIDIA GPUs and have the GPUs make the important connections that then form these models. And everyone wants more of these chips. In any interview I do with folks here in Silicon Valley right now, they’re like, “Well, if our budget allowed it, we would buy a thousand more GPUs,” whatever it is. And some customers still face backlogs for access to the newest versions. But over the past couple of years, I would say the chips haven’t been the only bottleneck. It’s been generators, it’s been power systems; even just basic wiring and cabling have been caught up in post-pandemic supply chain backlogs and tariffs. There’s just not enough to meet the feverish demand out there.
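To make the point concrete: the workhorse operation in AI training is the big matrix multiplication, which a GPU spreads across thousands of cores at once. This minimal sketch assumes PyTorch is installed; CUDA is the layer underneath that hands the work to an NVIDIA chip, and the snippet falls back to the CPU if no GPU is present. The matrix sizes are arbitrary.

```python
# A minimal sketch of why GPUs matter for AI training. Assumes PyTorch;
# falls back to the CPU if no CUDA-capable NVIDIA GPU is available.
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"

# Training is dominated by large matrix multiplications. On "cuda",
# PyTorch dispatches this one line to thousands of GPU cores through
# NVIDIA's CUDA libraries; the sizes here are arbitrary.
a = torch.randn(4096, 4096, device=device)
b = torch.randn(4096, 4096, device=device)
c = a @ b

print(f"Ran a 4096x4096 matrix multiply on: {device}")
```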
Lauren Goode: And also, NVIDIA is not the only player in this game. It certainly has an overwhelming share of the AI GPU market right now, but there’s also Advanced Micro Devices, AMD, which has been around for a very long time, is best known for CPUs, and now has a growing AI accelerator business.
Paresh Dave: And there’s OpenAI itself, which has become this big behemoth in AI. They have their own chips team that’s probably running in a hundred different directions, but they’ve hired a lot of great folks from other companies. So that’s another thing to be paying attention to: OpenAI could end its reliance on NVIDIA chips.
Lauren Goode: Right. So yes, there are others out there, all sort of nipping at NVIDIA’s heels. But another thing that we should talk about, which you mentioned earlier, is just the enormous amount of energy these data centers need to function. How are companies preparing for this massive need for resources?
Paresh Dave: Without a doubt, electricity access is the biggest issue holding up data center construction. I speak to folks in the data center industry regularly, and that hasn’t changed over the past year. That’s the number one obstacle. And the companies that are building the data centers have often committed to using renewable or carbon-free energy to power them, which limits where they can build. And they’re facing this tricky dilemma, because they’re getting criticized by activists and outside groups, and sometimes their own employees, for trying to build data centers in places where they’re going to be reliant on less clean fuel sources. This is all coming at the same time that local pushback to data center projects is growing. There was a report from Data Center Watch, a sort of new group that’s formed, that found local opposition to data centers has blocked or delayed development in many places across the country recently. And AI electricity demand, according to a BloombergNEF analysis, is expected to triple over the next decade. So yeah, I would say the electricity bottleneck, and the political bottleneck that comes with it, may very well be the thing we’re talking a lot more about a year from now.
Lauren Goode: And we should note that the AI accelerationists hate the fact that activists are watching this. They’re spending a lot of money on lobbying efforts, too, to try to get around regulation, or to enact their own ideas for regulation that would allow AI companies to move as quickly as possible, despite the concerns about energy, and frankly other concerns too.
Paresh Dave: Yeah, I mean, we just saw an effort like that fail, where tech companies had hoped that the One Big Beautiful Bill that President Trump signed into law last week would include a ten-year moratorium on states’ ability to enact AI regulations. Ultimately that provision was removed from the bill before President Trump signed it, so states can continue to regulate this area, in sort of a victory for those activist groups.
Lauren Goode: Paresh, thank you so much for these thoughtful answers.
Paresh Dave: No problem. I’d love to be back all the time.
Lauren Goode: All right, well don’t go anywhere, because we’re going to take a quick break, and then when we come back, I’m going to ask you for your take, and Kate’s take, on one of the most hyped-up Silicon Valley collabs that we’ve seen in a long time. Kate, Paresh, thank you so much for teaming up with me this week to answer these listener questions. I have one last lightning-round question for the both of you. I want your hot takes. Our listener Shane wanted to know our thoughts on the Jony Ive and Sam Altman team-up. In case you haven’t heard the news, back in May, OpenAI and Jony Ive, the famed Apple designer behind the iPhone, announced that they were teaming up to create a new kind of device that they promised to ship out faster than any company ever has. This is supposedly a new device for the AI era. The details of what this gadget will be or how it will look are still not public, we’re still not really sure, but we can place our bets. What do you both think is potentially coming from this partnership? You can be totally candid.
Paresh Dave: So my guess is something the size of a small calculator, like the ones that people used to fit into their shirt pocket or their pants pocket. Because I think Sam Altman has said he wants something sort of portable. But it’ll be something that has a kickstand on the back, so you can lay it on the table and just have it sit there and kind of be around you wherever you are, whether it’s the dining table or the kitchen counter or your desk at the office. It’ll annoyingly require charging, some dumb port that makes no sense, and Europe will complain. It won’t have a display in the first version but eventually will because there’s no other way to make money from ads. And I think if you’re trying to benefit humanity and bring AI to everyone, you’re going to both have to have a free ad-supported model and a subscription model. So yeah, that’s my take.
Lauren Goode: I’m just grinning from ear to ear because this is so hyperspecific. Is this because you know these details or is this because you just so firmly believe that this is what this device has to be?
Paresh Dave: I mean, I wish I knew what was going on for sure. No, I think there are only so many form factors, which is sort of the jargony term for the shape and size of a device. There’s only so much you can do. Maybe this is like a throwback to the iPod Nano or which one didn’t have the screen? Shuffle?
Lauren Goode: The Shuffle was the tiny one.
Paresh Dave: Yeah, maybe something that looks like that. Then Jony doesn’t even have to reinvent the wheel. He can get his old drawings back and he’s halfway there.
Lauren Goode: Just make the iPod wheel, but for the AI era. OK. I think Kate’s take is a little bit different. Kate, what’s your hot take on this Jony and Sam mashup?
Kate Knibbs: OK, so it seems like they probably had a lot of fun shooting that gorgeous commercial about their friendship in San Francisco that they put out for this product that has no name. And I’m very happy for them, and I think the cinematography in that commercial was beautiful. I’m trying to think of positive things. Yeah, the tool, I’m so confused. We’ve heard that it’s not a phone, it’s some sort of desktop device. I’m running up against the limits of my imagination as to what it could look like and actually not be stupid. I think it’s going to be beautiful-looking, I do. If Jony Ive’s involved, I think it’ll be sleek. Will I want it? I just can’t imagine it. But I’m excited to see what it is, if it ever is. I am getting really strong George R.R. Martin, The Winds of Winter is coming vibes from this whole project, to be honest. I’ll believe it when I see it.
Lauren Goode: Thank you all for listening to Uncanny Valley and thank you so much for sending us your questions. Keep emailing us. We do read your emails and we love your feedback. If you like what you heard today, make sure to follow our show and rate it on your podcast app of choice. If you’d like to get in touch with us, any comments, questions, show suggestions, you know the email, it’s uncannyvalley@wired.com. Today’s show was produced by Adriana Tapia and Jordan Bell. Amar Lal from Macrosound mixed this episode. Matt Giles fact-checked this episode. Kate Osborn is our executive producer. Katie Drummond is WIRED’s global editorial director. And Chris Bannon is Condé Nast’s head of global audio.