Stream: software unscripted podcast

Topic: Broken AI Discourse


view this post on Zulip Sky Rose (Aug 02 2025 at 05:26):

TL;DR: In the recent podcast, there was a discussion about how opposition to AI is more intense than opposition to other software tools. I propose that this is because unlike other software tools that people think are bad, AI tools negatively affect the world, not just their users. (Sorry this post is so long and so full of opinions.)

I'm a couple weeks behind but I just listened to Broken AI Discourse. Both Richard and Steve were a bit baffled as to why AI haters are so much more opposed to other people using AI, compared to the discourse around vim/emacs or other software tools. As an AI hater, I think I can explain:

The difference is that the opposition is based on the view that the AI bubble is bad for the tech industry and the world. (For a lot of reasons: harm to consumers, layoffs, AI features being forced into every product, etc.; the specific reasons aren't important for this meta-discussion.) So it's not an opposition to people using the wrong tools, like vim vs emacs. I think using emacs causes RSIs and vim doesn't, but I'm not your doctor and I don't care if other people injure themselves by using emacs. It's an opposition to the direction of the industry. I think using AI tools gives more legitimacy to the people causing that harm to the industry, and I do care about that, so I don't want other people to use AI.

Perhaps there are ways to have AI tools that aren't bad for the industry: tools with ethically sourced data, that don't depend on huge compute resources, don't lead to security holes, etc. But it's really hard to evaluate or support any tool under the shadow of the AI bubble. Even if a tool like that did exist (maybe some do), it'd be hard to trust it given the reputation of AI, and if it were successful it would still end up reinforcing the trend of AI, and therefore lead to more harm from other bad AI products. So I can't support even AI tools that appear to be good, until the bubble pops. I want people to stop using and making AI tools so that the bubble will pop sooner, and _then_ we can look for the good AI tools in the ashes.

Here's an example of that: There are protein folding AI tools that I've heard are very useful in biology research. They solve the hard part of protein folding and lead to big jumps in knowledge. That's great, we should do more of that stuff! There's also a materials science AI that discovered thousands of new materials. I heard that one was not useful, that listing out new materials is not the hard part of materials science, that it doesn't actually help with the engineering or understanding of useful materials, and that the motivation for publishing it was to build hype for AI, not to contribute to science. How does someone without a background in science tell the difference? I'm opposed to the existence and use of AI tools because in the current ecosystem of AI hype, bad tools like that materials science AI steal the oxygen from all the useful science (and software development) around them.

Some caveats and nuance:

So in summary, I think the main reason for the intense opposition to the use of AI developer tools is not because any individual tool is bad, or that AI is bad for the user, but because the current AI trend as a whole is _so_ harmful to the world that people opposed to it want to stop it entirely, including individuals' use of AI developer tools.

view this post on Zulip Richard Feldman (Aug 02 2025 at 12:41):

thanks for all the details! :heart:

view this post on Zulip Richard Feldman (Aug 02 2025 at 12:44):

I think if I could summarize my feelings about the topic as a whole, this is the part that jumped out at me:

the specific reasons aren't important for this meta-discussion

this is an example of why I think the "discourse" is broken

view this post on Zulip Richard Feldman (Aug 02 2025 at 12:45):

specific reasons are the only hope I can see for people to change their views of things (in general, not with this topic in particular)

view this post on Zulip Richard Feldman (Aug 02 2025 at 12:46):

and when the discourse is at a point where specifics aren't considered important, that means we've closed our minds to being changed

view this post on Zulip Richard Feldman (Aug 02 2025 at 13:00):

maybe others think the situation is serious enough that changing one's mind about it is a mistake to be avoided at all costs, but I don't think this is one of those topics, and my frustration is that it seems to be in that category for a lot of others

view this post on Zulip Sky Rose (Aug 02 2025 at 13:14):

Those details are important for the discussion of "Is AI bad?". I meant that they're not important for "Why is AI discourse bad?" which is the discussion we're having now.

But you are right that a lot of anti-AI people (including me) are closed to being convinced AI is good, and that contributes to AI discourse being bad. (Which is why I'm specifically trying to stay in the meta-discussion about why the discourse is bad, where I am open minded.)

view this post on Zulip Richard Feldman (Aug 02 2025 at 13:21):

I think that's the main thing that's broken to me though :smile:

to me, functioning discourse about a topic is where minds are open to change based on new information or ideas

view this post on Zulip Sky Rose (Aug 02 2025 at 14:06):

Yeah okay, so I should expand on why anti-AI people aren't open to that change. I think for me there are two main reasons:

First, there's so much AI hype out there (some of it supported, a lot of it hyperbole) that we've already seen the arguments, weren't convinced the first time, and now we're tired of seeing the hype over and over again and getting AI forced into all the products we use even though we didn't ask for them. A prerequisite for being open to changing my mind is knowing that the person giving me pro-AI information is aware of this, and that I can trust their pro-AI evidence is well supported and not fueled by empty hype or the bubble.

Second, I see "AI as a whole is bad" as a blocker for "this use of AI is good". The bad discourse we've been talking about is usually at the smaller scale. Before I'm open to change at that small scale, I either need to be convinced that AI as a whole isn't bad (unlikely, discussions at that scale are rare and usually run by grifters) or that it's possible for an AI tool to be good even within a world full of bad AI. This is a high bar but possible. I'm pretty much convinced for the protein folding thing. It's just usually not part of the discussion.

view this post on Zulip Sky Rose (Aug 02 2025 at 14:08):

And I should say: I am open to changing the fact that I'm not open to change (hence, this meta-discussion)

view this post on Zulip Richard Feldman (Aug 02 2025 at 15:15):

Sky Rose said:

there's so much AI hype out there (some of it supported, a lot of it hyperbole) that we've already seen the arguments, weren't convinced the first time, and now we're tired of seeing the hype over and over again and getting AI forced into all the products we use even though we didn't ask for them.

this resonates with me too :smile:

(and apparently with Steve as well, based on that portion of our conversation on the episode)

view this post on Zulip Richard Feldman (Aug 02 2025 at 15:20):

so there are some smaller points I think are relevant, which contribute to my overall sense of AI being a reasonable thing to discuss (as in, not so categorically bad that it's pointless to discuss - personally I don't put it in either the bucket of "categorically bad" or "categorically good", but rather "has significant upsides and downsides")

view this post on Zulip Richard Feldman (Aug 02 2025 at 15:24):

for example, you mentioned layoffs earlier. I think the vast majority of "AI-caused layoffs" are actually CEOs doing PR sleight-of-hand.

For example, when there were layoffs in the years before chatGPT launched, CEOs were making announcements like "Due to the condition of the economy, unfortunately we're having to make some layoffs. Everyone knows the economy is rough, so this isn't a sign that our company specifically is in a bad spot, it's just the economy, you see. Yeah, that's the reason."

Today, they instead have the option of announcing "Thanks to efficiency improvements brought by AI, we're able to lay off people. We totally didn't need to do this, and everything is going absolutely great at this company, and we would not have been doing the layoffs anyway and blaming them on the economy, we swear; it's just that the AI efficiencies have been so great that we're doing voluntary layoffs. Yeah. That's the reason."

view this post on Zulip Richard Feldman (Aug 02 2025 at 15:25):

I'm not saying this explains 100% of layoffs that are claimed to be about AI, but it is an awfully fortunate coincidence that it's now become possible to announce layoffs while putting a spin on them that suggests the company is totally fine and "actually better than ever, why do you ask?"

view this post on Zulip Brendan Hansknecht (Aug 02 2025 at 16:46):

Resource usage and AI is one that hits home for me specifically...

My job is literally to analyze AI and make it run faster. My old naive view was that this would help reduce the burden of AI on world resources. While that may be true in the short term (e.g. if you work with X to make 1 job 2x faster, you likely reduce compute by 2x in the short term), it is definitely not true in the long term. Making AI execute faster by and large works as an enabling function for more companies to enter the market, try bigger things, and scale further. This growth and scaling is almost guaranteed to outpace any performance gains.
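To make that rebound effect concrete, here's a toy model (a minimal sketch; the 2x speedup is from the example above, and the yearly demand-growth rates are made-up numbers purely for illustration):

```python
# Toy model of the rebound effect described above. All growth numbers
# are illustrative assumptions, not measurements.

def yearly_compute(years: int, speedup: float, growth: float) -> list[float]:
    """Relative compute consumed each year, starting from 1.0 units of work."""
    usage = []
    workload = 1.0
    for _ in range(years):
        usage.append(workload / speedup)  # the speedup divides cost per job
        workload *= 1 + growth            # but demand compounds every year
    return usage

no_opt = yearly_compute(6, speedup=1.0, growth=0.30)    # status quo
opt = yearly_compute(6, speedup=2.0, growth=0.30)       # 2x faster, same demand growth
induced = yearly_compute(6, speedup=2.0, growth=0.50)   # 2x faster enables more demand

print(round(opt[0], 2), round(opt[-1], 2))          # 0.5 1.86: savings, then regrowth
print(round(induced[-1], 2), round(no_opt[-1], 2))  # 3.8 3.71: outpaces the gains
```

In the induced-demand scenario, the 2x speedup ends up consuming more total compute by year six than if the optimization had never happened - which is exactly the enabling-function dynamic above.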

view this post on Zulip Brendan Hansknecht (Aug 02 2025 at 16:47):

Arguably this is still critical for enabling more useful AI, but it is not some sort of clean picture.

view this post on Zulip Brendan Hansknecht (Aug 02 2025 at 16:48):

First, there's so much AI hype out there

Even if AI dies, investors run on hype... so the hype will likely just pivot to something else...

There will always be some folks really pushing boundaries and many many others who are just marketing snake oil.

view this post on Zulip Richard Feldman (Aug 02 2025 at 16:54):

yeah if we were building new coal power plants to power AI usage, that would be really bad. But from what I've heard (Mitchell Hashimoto posted about this from a friend in the power plant construction industry) there are apparently, for the first time in over half a century, a double digit number of signed contracts to begin construction of new nuclear power plants in the US - specifically because of the power demand for data centers

view this post on Zulip Brendan Hansknecht (Aug 02 2025 at 16:56):

getting AI forced into all the products we use even though we didn't ask for them

But your toaster absolutely needs to run chatgpt... ... ... ... :rip-intense:

Yeah, AI is doing many cool things, some of which are fundamental to research and saving lives. But also, crap AI is sprinkled into everything because of investors or market trends, etc....


One thing that is kinda annoying is that generally something is AI until it is really useful. Then it is generally just a product or gets another name. For example, there are many very successful applications of vision recognition in cancer diagnosis, crop analysis, animal recognition for conservation, etc. Not to mention how much better language translation has gotten.

Most of these tools are built on AI fundamentals that were popular about a decade ago. At this point, most of that stuff is not even marketed as AI. We don't want the word AI anywhere near our cancer diagnosis... except we 100% do want AI doing the diagnosis, because it is better than humans at doing so.

This trend around naming means that AI tends to constantly be the thing that doesn't quite work yet, has a bunch of hype, and is still in active research. All the successful stuff is moved out from under the AI umbrella. It is basically a moving bar that guarantees AI cannot escape being hype.

view this post on Zulip Brendan Hansknecht (Aug 02 2025 at 16:57):

for the first time in over half a century, a double digit number of signed contracts to begin construction of new nuclear power plants in the US - specifically because of the power demand for data centers

Also a ton of new solar farms (though those are not as nice as nuclear).
Sadly, new construction is lagging quite a bit behind the growth in power usage.

view this post on Zulip Richard Feldman (Aug 02 2025 at 16:58):

some people may have different beliefs about this. To me personally, it has been clear for decades (and glaringly obvious if you look at the modern examples of France and Germany specifically) that nuclear power adoption is by far the most plausible path to reversing the climate change trend.

my view on this is that if AI's power use ends up resulting in big countries moving from fossil fuels to nuclear power, then AI's environmental legacy will have been so unbelievably positive that it will count as an extremely good thing for the planet, even if environmental impact were the only factor considered.

view this post on Zulip Richard Feldman (Aug 02 2025 at 16:58):

this is probably an unusual viewpoint, but I have been cheerleading for nuclear for a very long time, and nothing has caused actual construction of new plants to happen. If AI is the first thing to change that, I view that as a massive environmental positive.

view this post on Zulip Brendan Hansknecht (Aug 02 2025 at 16:59):

Yeah, I am 100% on the nuclear train

view this post on Zulip Anton (Aug 02 2025 at 17:12):

small modular reactors seem especially exciting

view this post on Zulip Brendan Hansknecht (Aug 02 2025 at 17:22):

Yeah, hopefully they can figure out the corrosion issues of the molten salts and properly productionize things. Haven't followed the progress in a while though.

view this post on Zulip Sky Rose (Aug 02 2025 at 20:43):

All the successful stuff is moved out from under the AI umbrella

This is making me pause and think. This means that the AI discourse is focused on things that are experimental or investment schemes or things that are seeking attention with the "AI" label. And of course those things are going to be more controversial.

Do you mean that an individual product, when it achieves success, sheds its AI label? Or that new products that are using established techniques don't adopt the AI label that previous entries in the same space used? Either way, why do you think this is? Is this new to AI hype, or does the same thing happen with other buzzwords?

There's also a little bit of movement in the other direction: products using established techniques (e.g. image processing) marketing themselves as AI, because they think it will make them seem more cutting edge. I guess this does happen with other buzzwords though, so it doesn't do much for explaining why AI discourse is worse.

view this post on Zulip Brendan Hansknecht (Aug 02 2025 at 20:58):

Do you mean that an individual product, when it achieves success, sheds its AI label? Or that new products that are using established techniques don't adopt the AI label that previous entries in the same space used?

I think it is mostly that in the research stage it is labelled as AI, but in a lot of the more robust products that just work, the AI label is dropped. I would guess that it is mostly to move away from the risky connotations associated with AI.

Oh and a big one is that often once a product works well, they tend to move away from marketing the technology and towards marketing the product as something standalone. Kinda hide away the underlying details and just focus on the value it brings.

I would roughly put it as AI -> Cutting Edge, removing AI -> Stable and robust.

Another example is with Rust. A lot of applications start by marketing that they are written in Rust. They are trying to use Rust -> safer and more secure as an anchor. I think many products talk less and less about the language they are written in as they gain notoriety and success. It is just leaking an implementation detail.

view this post on Zulip Brendan Hansknecht (Aug 02 2025 at 20:59):

There's also a little bit of things going the other way. Products using established techniques (e.g. image processing) marketing themselves as AI, because they think it will make them seem more cutting edge

Yeah, I think this is often about who you are marketing to. If the answer is users, this often does not happen. If the answer is investors or executives, sadly buzzword soup helps.

view this post on Zulip Brendan Hansknecht (Aug 02 2025 at 21:02):

An example of the Rust thing is Deno. I know that its site used to explicitly market that they are written in Rust. Now that they have more of a foothold, real users, and such, they don't really talk about Rust at all. They talk about how they can make the user's life better. But Rust -> safer and more robust was definitely a starting anchor for them.

