Desperately Trying to Fathom the Coffeepocalypse Argument (astralcodexten.com)
haswell 23 days ago [-]
The kind of reasoning he describes has been driving me crazy when listening to various tech podcasts that touch on AI risk and social media harms.

The moral panic argument comes up repeatedly: “people said the same thing about the {printing press, comic books, D&D…}”. But it always frustrates me because at best, these are examples that highlight our tendency to panic about things and that we’re sometimes wrong.

And I do think it’s good to think about this, but it’s the weight that people attach to the argument that I don’t understand. In the face of some truly worrying facts about these new tech developments, the past panics are being held up as if they tell us everything we need to know.

It’s baffling and worrisome.

squigz 23 days ago [-]
> In the face of some truly worrying facts about these new tech developments, the past panics are being held up as if they tell us everything we need to know.

I think it's more like, "In the face of some truly worrying panicking..."

I think most people realize the risks, but also realize that panicking that this is the end of {art, culture, etc} isn't very helpful either, as evidenced by various similar events in human history.

haswell 23 days ago [-]
> as evidenced by various similar events in human history

The issue is that people are jumping to the conclusion that the present conditions are analogous to the “false” panics of the past, while discounting “real” panics that turned out to be justified.

Global warming, thalidomide, lead, the growing surveillance state, sex abuse in the Catholic church, etc. were all dismissed as panics until they turned out to be real issues.

The problem is that we look back and no longer categorize these as “moral panics”, because in the end they turned out to be real and few people remember them that way. Only the “false” panics remain, which leads to some fallacious conclusions IMO.

I don’t think panic is ever a good default stance on anything. But this is an issue orthogonal to the true underlying danger of whatever people are panicking about.

squigz 23 days ago [-]
> The issue is that people are jumping to the conclusion that the present conditions are analogous to the “false” panics of the past, while discounting “real” panics that turned out to be justified.

Again, I'm not really sure this is fair, at least in my experience. I think when most people reference these past events, it's to point out that - despite panic about those events - we managed to solve the issues involved with them, and move on.

haswell 23 days ago [-]
Which events are you referring to? The “false” panics or the “real” ones?

Because when people invoke moral panic as an argument against AI concerns, it’s almost universally as a device to completely invalidate the concerns.

> we managed to solve the issues involved with them, and move on

Even if the point is that “see, we worked through these past issues just fine”, it’s not clear why working through those issues tells us anything about the present. The only structure holding the analogy together is “we had a problem” and “we solved the problem”. If the nature of the problem is not the same, it’s unclear why this analogy carries any weight at all.

The reality is that we have not solved the issues related to many “real” panics. We just don’t refer to them as panics anymore.

Prohibition may have been a deeply misguided solution, but the social problem of alcoholism was and continues to be real. “Moving on” means that 13K people are killed every year in alcohol-related traffic crashes, and over 130K die each year from other alcohol-related afflictions. I’m not arguing for a ban on alcohol, but pointing out that we rarely do a good job of solving these messy issues. They just become a new accepted societal norm.

I think this can be roughly broken down into four buckets:

1. Panics that turned out to be bullshit

2. Panics that resulted in action and an eventual solution

3. Panics that resulted in the worst fears being fully realized

4. Panics for which we have no current answer or solution

The issue I have with your argument is that it seems to dismiss the possibility of 3 and 4.

dcow 23 days ago [-]
Charitably, nobody is arguing that AI shouldn’t be taken seriously. People are arguing about whether it will dismantle society and pave the way to the end of times. Surely we can agree that while there are valid indicators that we should tread carefully, there is no proof that any existing technology in our hands is the harbinger of doom.
haswell 23 days ago [-]
> nobody is arguing that AI shouldn’t be taken seriously

As far as I can tell, this is exactly what is happening when someone invokes “it’s just a moral panic”. It’s an unserious argument, in that it has no direct relevance to the underlying concerns.

> there is no proof that any existing technology in our hands is the harbinger of doom

We have a pretty good idea of how the world could look if nuclear war broke out. We can’t prove to the last detail exactly what it would look like, but we know it’s bad, and something we don’t want.

We DO have proof that existing technology is leading to direct harm, and we can form reasonable extrapolations of what further harm will result from further proliferation of similar tech.

And I don’t think doom is even the primary concern. The concern is the vast space leading up to doom. Doom is a distraction from what is right in front of us.

dcow 23 days ago [-]
How did we get from AI LLMs to nuclear apocalypse? You’ve articulated my point better than I have, it seems (=

Edit: I see you misinterpreted my comment. I should have been specific and said “any existing AI adjacent technology” in our hands…

Anyway I hope it’s abundantly clear that the argument at this point in the thread is not “the canaries were wrong in the past, so they are obviously wrong about AI” (as TFA suggests) or even “we’ve survived disaster before, meh, we can do it again” (a more cavalier take). It’s that the burden of proof falls on the doomsayers to convince humanity that <insert potentially dangerous technology> is sufficiently dangerous to warrant censorship and regulation, etc.

haswell 23 days ago [-]
> Anyway I hope it’s abundantly clear that the argument at this point in the thread is not...

I'm not sure why that would be clear at all. The point of my top level comment was that it's difficult to engage with many people about this subject exactly because they rely on "the canaries were wrong..." and "we've survived before" arguments, which were the subject of the article we're all discussing.

I don't even think the doomsday narrative is the interesting or particularly relevant one when it comes to examining risk. If all we did was focus on the implications of the tech in its current form and the risks/issues the current iteration already presents, we'd have enough issues to chew on for years, ranging from fair use, to copyright, workers' rights, deepfakes, the explosion of misinformation, etc. etc.

These are real issues that exist right now, have no solutions, and all threaten various aspects of the fabric of society and our legal system. I also suspect there are workable solutions to many of these problems, but as of now, they're mostly unsolved and I understand why some people are deeply uncomfortable with continuing to charge forward.

Most technology revolutions don't happen as quickly as what we're seeing with AI, and it's the rate of change that is also spurring on quite a bit of concern.

> It’s that the burden of proof falls on the doomsayers to convince humanity that...

I don't think this is a useful way to frame the issue. Ideally, all parties involved in advancing this tech are engaged in an ongoing conversation/debate about what is or is not a valid concern, and how to address those concerns. Optimists need to be steel-manning the concerns, and pessimists need to be steel-manning the rosy outlook. Everyone involved needs to discard fallacious forms of argument whenever it can be identified.

Framing this as "doomers vs. humanity" polarizes an issue that need not be polarizing. Whether you're a doomer or optimist, there exists underneath all of this the reality of the situation, and it's in the best interest of all involved parties to work towards understanding that reality together, whatever it is.

What I think we can say confidently is that the form of argument called out in the article does not move anyone closer to that understanding.

dcow 22 days ago [-]
> These are real issues that exist right now, have no solutions, and all threaten various aspects of the fabric of society and our legal system.

Can you share a few examples?

squigz 22 days ago [-]
They did

> enough issues to chew on for years, ranging from fair use, to copyright, workers' rights, deepfakes, the explosion of misinformation, etc. etc.

dcow 20 days ago [-]
They cited a blanket list of touchy-feely subjects. They did not describe actual problems.

Workers, for instance, do not have the right to “not compete against an AI” (if that’s what’s being referenced). There are no copyright issues if people license content for training. AI does not newly or uniquely introduce “misinformation” into society. These are all interesting things to have discussions about, but they are not de facto problems just because the tech is potentially disruptive.

cubefox 23 days ago [-]
We often didn't manage to solve the issues. We didn't solve climate change or AIDS or the Holocaust or cancer from smoking. And when we solved some issues (like the ozone hole) or remedied them (like smoking), it was only because people did what you dismiss as panicking.

Imagine if someone tried to warn you, in 1938, that a second world war would happen with significant probability, and you just answered:

"I think most people realize the risks, but also realize that panicking like this is the end of {art, culture, etc} isn't very helpful either, as evidenced by various similar events in human history"

squigz 23 days ago [-]
Interesting that this conversation started with comparisons to the printing press, comic books, and D&D, and has progressed to comparisons to World War 2. Of course, panic about World Wars is certainly what I was referring to originally...
haswell 23 days ago [-]
To be fair, I specifically called out the printing press, comic books, and D&D as examples of panics we got wrong. The other side of this coin is that there are indeed cases where concerns were justified, and only focusing on the situations when they were not justified is at the heart of the issue.
cubefox 23 days ago [-]
This didn't start out with comic books; we started with AI risk and people dismissing it by arbitrarily comparing it to things that turned out to be harmless.
cubefox 23 days ago [-]
> The problem is that we look back and no longer categorize these as “moral panics”, because in the end they turned out to be real and few people remember them that way. Only the “false” panics remain, which leads to some fallacious conclusions IMO.

This is a good point. A similar case: in early 2020, many journalists, TV experts and politicians emphasized that you shouldn't panic. This was true insofar as "panic" is by definition irrational, but false insofar as they also said that many more people die from the flu in a year, which ended up being false: for several years, far fewer people died from the flu. Or when they said "currently the risk is low, so don't worry" -- that few people were being infected at the time (in early 2020) doesn't mean that the risk of infection was low, because current risk includes the probability of bad things happening in the future. If you race towards a cliff in your car, the risk of dying is already high before you fall off the cliff.

So "don't panic!" is ambiguous between "don't be irrational!" (which is a tautology) and "the risk is not high!" which requires justification and may well be false.

cauch 23 days ago [-]
The way I see it is rather:

The past has shown that for each technology, there were "worriers". Imagine a distribution graph with the x-axis being the scale from "very worried" to "confident in the future". For each technology, there is a bell-shaped distribution. So, for this technology too, such a distribution has to exist. The question is: where are the natural worriers? Can you point to them? Because if you cannot distinguish the natural baseline of worry that will always exist, then it feels like you are giving credit to that baseline rather than to real, warranted worry.

It would be very useful to see those distributions for different technologies or events and, based on the known conclusions, see whether some shapes indicate that the "worriers" are mainly the natural left tail of the distribution.
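As a rough, purely illustrative sketch of that idea (my own, not from the thread or the article; every number below is made up), one could compare the observed share of worriers about a given technology against an assumed baseline share of people who worry about any new technology:

    # Hypothetical illustration: is worry about a technology distinguishable
    # from the "background noise" of people who worry about anything new?
    baseline_worrier_rate = 0.15  # assumed fraction who worry about any new tech

    observed_worrier_rate = {
        "printing press": 0.16,  # made-up numbers, for illustration only
        "comic books": 0.14,
        "technology X": 0.45,
    }

    for tech, rate in observed_worrier_rate.items():
        excess = rate - baseline_worrier_rate
        if excess > 0.05:
            print(f"{tech}: {rate:.0%} worried -- exceeds the natural baseline")
        else:
            print(f"{tech}: {rate:.0%} worried -- indistinguishable from the baseline")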

For me, what pushes me to stay away from that debate and not mingle with the people interested in this subject (even if it turns out there are reasons to worry) is the fact that a big part of the people worried about "superintelligent AI threatening humankind" are usually quite clueless about other societal dangers. They usually support views close in philosophy to tech bros/bro-science, utilitarians or libertarians, all of which show a pretty naive understanding of the complexity of society.

haswell 23 days ago [-]
I think that understanding the meta-phenomenon of worriers vs. non-worriers is interesting in its own right.

But independent of this, I took the original post to be focusing on the fact that there are better and worse reasons to worry or not to worry.

So maybe someone is predisposed to worry about things, but understanding why they worry (and how likely the “why” is to be valid) is more important than the fact that they’re worrying.

Conversely, examining why an optimist is optimistic is also important, because if their optimism is rooted in specious arguments and conclusions, their status as a “non-worrier” is less of a reason to care about their lack of worry relative to the distribution of worriers.

In other words, I think it’s an interesting way to think about the people involved in the worry/non-worry, but not likely to get us closer to an answer about whether or not anyone should worry.

Clearly there are situations that we know enough about to warrant a high degree of concern regardless of the distribution of worry vs. non-worry (see: climate change).

I think a major issue with the AI subject is a lack of consensus on what exactly people are worried about, with worries ranging from near-term loss of employment and other immediate downsides to the eventual extinction of the human race. I don’t think the “threatens all humankind” concerns are the most worrisome because they’re by far the most far-fetched. There’s a whole lot of potential harm between where we are today and those extreme outcomes, but those extremes tend to get the most airtime and subsequent dismissal, and overshadow the more realistic and mundane concerns.

cauch 23 days ago [-]
It's a detail, but I disagree with the idea that looking at who is worrying is not a good way to evaluate whether the problem in question has legitimate arguments to generate worry. If someone usually optimistic is worried about subject X, it is a good indication that this subject X has good arguments, as it has convinced the optimistic person. If subject Y is a worry only for the "worriers", then it is a good indication that this subject Y does not have good arguments, as it has failed to convince people who are harder to worry.

(and, sure, one can talk about "are people informed" or whatever, but this is a second order effect and is just a slightly disguised they-disagree-because-i-am-a-misunderstood-genius fallacy)

For the AI subject: you cannot lump together people who worry that AI will be used tomorrow for bad things and people who worry that AI will become intelligent and be a threat to humanity, because those are two very different things with very different consequences. What you need to do in the case of bad usage of AI is very different from what you need to do in the case of a threat to humanity, and sometimes the opposite (bad usage is about setting rules and fair access, while a threat to humanity is about forbidding or controlling fundamental research).

It's a bit as if, in a parallel universe, someone said: "people are worried about immigration. Sure, some people are worried that migrants are invading us, and others that migrants are being treated unfairly. But they are worried, so it is proof that worrying about immigration is legitimate, and any criticism of these worries is just people who cannot make a logical argument".

For example, I don't think people who are worried that AI can have a bad usage, and that we should be careful about that, feel attacked by the "coffeepocalypse" argument. People who are worried about a bad usage of AI are not worried about AI itself; they are worried about bad usage of technology in general, and that includes thousands of things that don't even involve LLMs or recent developments in AI. I think it is more directed at the lesswrong-pseudo-intellectuals who think they are deep and smarter than everyone else when they are just playing around with basic concepts on a misguided ego trip.

ImHereToVote 23 days ago [-]
I think your point is that panic is not effective. We should instead prepare for the externalities of a new technological development, and work to solve those. Is that more or less your position?
squigz 23 days ago [-]
Yes, that pretty much sums it up.
lupusreal 23 days ago [-]
To the "People said X about Y and were wrong so people saying X about Z must also be wrong" form of argument, I always think of "They laughed at the Wright Brothers, but they also laughed at Bozo the Clown." Basically, the fact that people were concerned about technology we now consider normal and more or less harmless doesn't prove that people presently concerned about some other new technology are also wrong.
dcow 23 days ago [-]
Right. There is no proof on either side—either that AI is capable of organically morphing into a super-intelligence or that it inherently isn’t. It’s not formal. It’s people burning cycles telling each other about their intrinsic risk tolerance.
ryanschneider 23 days ago [-]
Aha! This post finally made it click for me that that is exactly what’s going on, thanks for putting it so succinctly.

I think one of the big differences between AI and most other previous technologies is that the potential impact different people envision has very high variance, anywhere from extremely negative to extremely positive, with almost all points in between as well.

So it’s not just different risk tolerances its also that different people see the risks and rewards very differently.

naasking 23 days ago [-]
> Right. There is no proof on either side—either that AI is capable of organically morphing into a super-intelligence or that it inherently isn’t. It’s not formal.

You're right that there is no formal proof, but the conclusion that humans have not reached peak intelligence is pretty solid. Based on evolutionary arguments, we're broadly only as intelligent as we needed to be to reproduce.

If humans are not at peak intelligence, it then follows that something more intelligent is possible. It's similar to how birds are necessarily not the fastest objects that can fly, and jet planes and rockets show that artificial systems can easily surpass the inherent limitations of evolved biological systems.

The only wishy-washy part of the AI deduction is whether "intelligence" is a similar kind of property that evolved. If it is then the evolutionary argument applies, but even if it's not, and it's just a broad, mixed set of capabilities, AI is advancing on all of those fronts too, so I'm not sure you can escape the conclusion so easily.

dcow 23 days ago [-]
So we agree higher intelligence is possible. But, “Is that a bad doomsday scenario?” is the next question. And then, “Is it so bad that society should fear these intelligence tools?” And to argue that, you have to get into a really abstract realm where there is zero conclusive anything (outside of philosophical thought experiments) indicating whether we should welcome or fear our new super intelligent systems.
naasking 23 days ago [-]
You don't have to get that abstract. Absent some guarantee that intelligence implies inherent moral superiority, the precautionary principle would apply, and so we should plan for there being no association between the two. Some reasonable level of caution is thus warranted. People can disagree on what's "reasonable", but we should at least all agree that a danger could exist and that some caution should apply.

We can't generalize from any association between intelligence and morality in humans, because human morality and intelligence evolved, possibly even co-evolved, and those constraints simply don't apply to artificial systems any more than a bird's feathers apply to artificial flight.

dcow 23 days ago [-]
We all agree that there may be dragons. The “doomsday” question is whether we’re even remotely close to a dragon’s lair such that we should imminently worry about waking the dragon—risking it burning down the village. To each their own until we find dragon tracks and charred forests.
naasking 23 days ago [-]
The doomsday question has two components: the mean and the error bars. Most people probably agree that a reasonable date for likely achieving AGI is maybe 30-50 years from now. But because we don't know what we don't know, the error bars are so wide that they encompass tomorrow (we could be one small generalization trick away!), and that's the danger.
orwin 23 days ago [-]
As long as you can only train an AI against another AI in an adversarial context, and can't do something like training an LLM on another LLM without degradation, I think we can still sleep fine. Once that isn't true anymore, we'll find ourselves in interesting times.
jimbokun 23 days ago [-]
> "They laughed at the Wright Brothers, but they also laughed at Bozo the Clown."

This is a very good compression of the article, losing very little information.

mike_hearn 23 days ago [-]
> these are examples that highlight our tendency to panic about things

No, they are examples that highlight the tendency of certain sorts of people to panic about things. You're over-focusing on the objects and under-focusing on the subjects.

There are two heuristics at work here:

1. People have been talking smack about AI for a long time. Remember that GPT-2 was advertised as too dangerous to release. GPT 2. Google claimed Imagen was too dangerous to release. Others claimed open source models were dangerous. Or that foundation models were too dangerous. This has been a pattern for a while. A bunch of people believed them and became AI safety extremists. Then ChatGPT made a much better version available to all, including image generation and ... it was fine. The world didn't get more dangerous.

Data point: the people who said AI safety wasn't a real problem were correct. The people who said everyone should panic were wrong. It makes sense to generalize from this experience.

2. There is a certain class of person, let's call them intellectuals, whose status and income depends on their production of attention-grabbing claims. If these people stop being interesting then their careers shrivel and die. This leads such people to prioritize making claims that are optimized for virality, but not optimized for being true. Claiming the world will end if they aren't listened to is the natural evolutionary end-game here because it's impossible to be more attention-grabbing or viral than that.

Data point: intellectuals specifically have made a long stream of claims about supposedly world-ending risks, which were later falsified.

Data point: Since leaving psychiatry, Scott Alexander is primarily a public intellectual. It is important for him to make several interesting claims a week, which is not very easy. So it's expected for him to be attracted to claims like AI X-Risk regardless of genuine risk, and he's a very intelligent guy, so he can easily convince himself of things he'd like to be true (not in this case that X-Risk is high, but that X-Risk should be considered as if it might be high).

Add it up and what you see is that the people in society who ignore intellectualist catastrophizing are repeatedly the winners. Sam Altman got his researchers' fantasies under control, got AI launched, and is now the big winner. Other AI groups (Google) stayed in fantasy-land for far too long and were dragged kicking and screaming back to reality, and they are not the big winners.

haswell 23 days ago [-]
> Add it up and what you see is that the people in society who ignore intellectualist catastrophizing are repeatedly the winners. Sam Altman got his researchers' fantasies under control, got AI launched, and is now the big winner. Other AI groups (Google) stayed in fantasy-land for far too long and were dragged kicking and screaming back to reality, and they are not the big winners.

Entire industries are currently threatened by this tech, and large numbers of normal people are set to lose their jobs. This kind of disruption is often the cost of progress, but historically, revolutions that reshaped the workforce happened over decades or centuries, not years or months. It's unclear what happens next, but it's probably going to be bumpy.

It seems your definition of "winning" is "Company X dominated the market and made a lot of money". Why should the average person impacted by this tech see this as a positive thing?

In your view, does Sam Altman "winning" supersede the interests of society at large? And what would "winning" look like if the definition was broadened to include the interests of the market in which Sam operates OpenAI?

mike_hearn 23 days ago [-]
Yes that's the definition of winning. Impact is pretty broad. Lots of people are positively impacted by AI: their lives get easier. Others see their livelihoods imperilled. Believe me, I know.

But that isn't the X-Risk that people like Scott are talking about. They think AI is physically dangerous, that the Matrix scenario could happen for real. It's not about jobs in this case.

digitalsushi 23 days ago [-]
We're in a tizzy because we're inventing a new system that redistributes power as chaotically as did our invention of firearms.

We never came up with a solution for controlling firearms (at least in my random country); we have fought for a long time about how, and the people who believe there is no how focus on who to blame after each gun tragedy.

When the AI stuff has an event as significant as a gun tragedy, we'll try to figure out why it was able to happen. After a while, many of us who do not believe there is a way to protect ourselves will focus on who to blame.

AI is a hose connected between order and chaos. Depending on which direction the hose is flowing, you can gain or lose from it. People with something to lose are scared. People with nothing are thrilled. The rest of us are, well, I have no idea what to think about it. I suppose I have to just see how it goes, because I sure don't believe we can create a policy that can stop it.

arethuza 23 days ago [-]
"When the AI stuff has an event as significant as a gun tragedy"

I'm struggling to imagine what an event might look like - apart, I suppose, from purely AI-controlled weaponry?

evilduck 23 days ago [-]
Societal disruption. Shutting down transportation, power grids, fucking with the financial systems, backflowing sewage into clean water distribution, etc. Basically anything we could previously envision a state-actor doing with a Stuxnet-like attack. Time one of those during or after a natural disaster and you can compound the effect.

It also seems like it would be relatively easy to lay the blame for one of these attacks on some country’s opponent and ignite a hot war as a result.

arethuza 23 days ago [-]
But all of those are risks now - what about the current breed of AI systems makes them any more of a threat other than enabling people? The "AIs" we have seem to have little agency?
wrsh07 23 days ago [-]
There's always the question of "how easy is it to do a bad thing?" With guns, it's very easy to kill someone from far away. With a car, it's very easy to kill someone right in front of your car.

At some point it will be easier to commit some cyber attack using ai, and when someone is arrested for a cyber attack in which they used ai it will be reported accordingly. At that point, we will seriously consider "can we reasonably make it harder to commit this crime using ai?"

Companies that have already taken steps to make it harder to commit this crime using ai will describe their processes, and we will all have a reasonable discussion about what types of ai systems allowed this to happen and whether we should be more restrictive in making those types of ai systems freely available (or what other mitigations we can put in place)

evilduck 23 days ago [-]
True, all are existing risks with some known likelihood factor. But the coffeepocalypse argument and the article's contents aren't discussing the current breeds of AI; they're hypothesizing about a future superintelligent, sentient AI arising or not arising. And if one does, it will have more tools at its disposal than conventional ballistic weapons, and the likelihood of it using existing levers to disrupt society or cause mass die-offs is simply unknown.
arethuza 23 days ago [-]
Fair point - I had only skimmed TFA. Mind you, as a fan of the Culture novels I'm choosing to believe that a future super-intelligent AI could also be a good thing.
keyringlight 23 days ago [-]
Power grids are an interesting one: even if we're not yet at the stage where "a system with AI inadvertently/intentionally turned off the lights", the pursuit of AI is having an effect on power demand that means some areas can't build because there's no capacity on the grid, or that we can't retire the worst-polluting generators as soon as we otherwise could. It's an echo of the cryptocurrency gold rush in that respect.
squigz 23 days ago [-]
While I can't imagine this is very likely, another example might be ChatGPT or one of the other models starts leaking truly private data.
paulryanrogers 23 days ago [-]
I'm struggling to imagine people ever blaming the AI. They're already willing to storm the Capitol (and more?) to keep their cannot-be-blamed-they're-only-objects guns.

Rather, my guess is it'll devolve into futile attempts to trace attribution back to the humans who were using the AI.

pjc50 23 days ago [-]
> AI is a hose connected between order and chaos

AI is a hose of chaos disguised as having the order and authority we're used to computers having. It's able to produce things that look like they might be true or real but with no connection to truth or reality.

I don't think there is going to be a single "smoking gun" incident. We might, one day, find that one of the mass shooters has been listening to an AI and copying its claims into his lethal manifesto, but so what? That's already well below the threshold of being ignored.

Along the way there will be a bunch of Air Canada incidents https://www.bbc.com/travel/article/20240222-air-canada-chatb... , but people will not learn anything from those either. What's inconvenient reality against the beautiful hype of automating away your staff?

glhaynes 22 days ago [-]
It would be interesting if an incident in the vein of the Air Canada one caused a few billion dollars in liability, though. I could see that causing some hesitancy.
mariusor 23 days ago [-]
> we're inventing a new system that redistributes power as chaotically as did our invention of firearms

I'm not sure what you mean exactly with this quote, but I think that the people who will hold the power as it pertains to AI are the same people that hold the power currently: rich capitalists.

As a user you'll maybe have a way to use AI to make your life slightly easier, but you'll have absolutely zero power otherwise.

denton-scratch 23 days ago [-]
> gun tragedy

I'm uncomfortable about that phrase being used to refer to a guy with an assault rifle deliberately murdering a bunch of people he doesn't know. A "tragedy" generally involves some kind of mistake or accident caused by a character flaw, or by some circumstance that isn't obviously going to cause misfortunes. I don't think it's right to refer to something as a tragedy, if the misfortunes or deaths were exactly what the perpetrator intended.

pjc50 23 days ago [-]
The death remains a tragedy, even if it's also a murder or terrorism.

> or by some circumstance that isn't obviously going to cause misfortunes

I feel that in the older sense of tragedy, Shakespearean or Homeric, the inevitability of misfortune is part of the tragedy - we, the audience, can see it in retrospect, but those involved find themselves unable to see or avert it. But this is after all just language.

germinalphrase 23 days ago [-]
Which is only apt if we consider the victim of the tragedy to be our collective society suffering as a result of our hubris and other failings (and falls apart completely if we recognize that the individual human victims are essentially random).
mrighele 23 days ago [-]
> A "tragedy" generally involves some kind of mistake or accident caused by a character flaw

No, it doesn't. Anyway, just going to school and being shot by a random person counts as "some circumstance that isn't obviously going to cause misfortunes."

rob74 23 days ago [-]
I assume your random country is the US? And while the US never came up with a solution for controlling firearms, most (developed) countries in the world have. Which doesn't go against the US, it's all a question of lobbying. For example here in Germany, car makers have a very strong lobby, hence things like no speed limits on the Autobahn, underinvestment in cycling/walking/public transport infrastructure etc. etc.
lupusreal 23 days ago [-]
There are a whole lot of countries other than the US that have gun violence problems. Brazil for instance, and dozens of other countries. But somehow people who think this is an America only problem also think that America and Europe constitute the entire world or that "poor countries don't count" or something.
rob74 23 days ago [-]
That's why I specifically added "developed" countries (and no, it wasn't an edit).
lupusreal 23 days ago [-]
You know Brazil isn't just a jungle, right?
dcow 23 days ago [-]
The author is frustrated that someone is essentially capable of saying: “based on my priors, this doesn't concern me”.

> The best I can say is that sometimes people on my side make similar arguments (the nuclear chain reaction one) which I don’t immediately flag as dumb, and maybe I can follow this thread to figure out why they seem tempting sometimes.

At least the author is self aware. The problem is not that people can’t make rhetorically sound arguments, it’s that they’re not in a context where anybody cares about being rhetorically sound. Or, perhaps more realistically, there isn’t a logically correct answer. It’s all just dressed up unsubstantiated opinions on both sides. And half the time people aren’t even arguing about the same issue since a lot gets rolled up into “the danger of AI” topic. Structured debate with moderators needs to make a comeback.

haswell 23 days ago [-]
> Or, perhaps more realistically, there isn’t a logically correct answer. It’s all just dressed up unsubstantiated opinions on both sides.

I don’t agree with the framing of this. It may be true that there isn’t a knowable “correct” answer, but I think it’s problematic to summarize this as “both sides are just made of opinions”.

I think the pernicious issue is the form of argument present in the “this is all just moral panic” camp.

On the one side, we have deep concerns about the potential problems certain technology will usher in.

On the other side, dismissal of those concerns based on the claim that the present issues are directly analogous to situations of the past that turned out to be fine.

Even if it’s all just opinion, we have to judge those opinions on their merits, and the “it’s just a panic” crowd is on much shakier ground rhetorically.

dcow 23 days ago [-]
Why does causing a moral panic have a lower burden of proof than dismissing a moral panic? Do you believe in God?

(Also I didn't mean it to come off as both sides have made up concerns. I meant it as both the concern and lack thereof camps are based solely on individuals’ priors.)

haswell 23 days ago [-]
I take issue with the framing that this is purely about causing and dismissing moral panics.

Panic is a secondary effect of whatever it is that people are arguing for or against. The panic is orthogonal to the underlying concern.

When people started to worry about the climate, several kinds of arguments emerged.

1. People with intimate knowledge of the situation used their research, models and knowledge of climate dynamics to raise concerns

2. Some people reject those concerns based on the same kind of fallacious analogy: “people have been saying the sky is falling since…”

3. Some people reject the concerns outright and/or believe they have better models

“Dismissing a moral panic” in the coffeepocalypse sense falls under #2. The dismissal is not a direct argument against the underlying claims/evidence, but an attempt to use a shaky analogy as the only form of evidence for dismissal.

Whereas “causing the panic” falls under #1. The nature of the argument is fundamentally different, and appeals to the evidence and direct experience of the scientists capable of building climate models.

I’m not saying that all panics are created equal, or that the burden of proof is lower for the group raising concerns. I’m saying that the “it’s just a moral panic” crowd is at an inherent disadvantage (assuming there is credible evidence that is cause for concern) because their form of argument is not evidence based and instead relies on stretching questionable analogies.

Regarding god, I’m curious why you’re asking, but I consider myself agnostic.

cauch 23 days ago [-]
I don't think #2 as an argument "just relies on stretching questionable analogies". I think the point of this argument is that you have to consider the prior and correct for it.

If I do a scientific experiment and reach a conclusion, and someone says that a past experiment observed the same thing but that it would have been wrong to draw my conclusion from it, that is not a "stretch to a questionable analogy"; it just shows that I cannot jump to that conclusion so easily.

As I've said in another comment, for me, one missing thing is: for every technology, there are "incorrect worriers". The question is: where are they for AI? Even if the worry is real, you will have people who are worrying without good reason. You will have a "background noise".

I see a few differences between the AI dangers and the climate crisis. One of them is that the climate crisis did not come from hype around a new technology. It's not like we had a "Pollution bubble" with people explaining how pollution would be transformative to our society, and two weeks later people raising concerns. What happened with the climate crisis was extremely slow, and therefore cannot be explained by people fantasizing about the hypothetical impacts of a new technology. Another difference is that the dangers of an intelligent AI are raised by a lot of people, but a big part of them appear to share similar ideological biases in how they view the world. Some people working in the field who tend not to share that view are raising different concerns (for example saying that AGI is still very far away and maybe not even reachable, and that bad usage of current AI is more worrying) that seem less influenced by a very specific and peculiar way of seeing the world.

dcow 23 days ago [-]
Regarding God, it’s a rhetorical question meant to make the reader think about all the moral concerns raised by religious people and the ensuing arguments dismissing God by atheists, feminists, etc. An inversion of what is generally happening with the AI discourse, where concerns about the health of society and the existence of supreme intelligence seem to be spurring existential questions about humanity but from the technocrats instead of the clergy.
haswell 23 days ago [-]
The “supreme intelligence” concerns about AI live on one end of a very wide spectrum of concerns.

I don’t think we need anything approaching super intelligence for a large subset of concerns to become reality, and rest of that spectrum is filled with non-theoretical risks, many of which are based on capabilities already in production.

cauch 23 days ago [-]
I think there is a problem with the "rationalist" people. Opinion in itself is a useful and scientific tool, but they seem to consider the word an insult.

The reality is that as soon as you go further than quite simple discussion, you reach places where your conclusions depend on your arbitrary values about what is more important than something else. For example, in the case of AI doom, one question is "why are you panicking about this and not panicking about children dying of hunger in remote countries, or about the injustice generated by capitalism?" This is not an argument to say that you should not panic, but it is to say that based on people's values, they will find things more or less scary and more or less worthy of panic. I guess some people are already used to the fact that the world is pretty messed up and that the danger of AI is just one more on top of the others, and therefore it does not generate any additional reason to panic. Those reasons to panic are very much "opinions", and there is no rational answer that is better than all the others. It's not a bad word; it just means that different people will have different views of the world.

JoeyBananas 23 days ago [-]
AI safety is pretty much a total joke because the development of AI is not driven by high-minded idealistic principles. The driving force behind the advancement of AI is "We can (and it might make us a lot of money). Therefore, we should."

Like it or not, that's how it is. This is how technology has always progressed, regardless of whether you can make the argument that this is OK from a moral standpoint or not. You can't do anything about it even if you wanted to. Recklessly uncontrolled advancement of technology is already the status quo.

AI safety proponents (especially "effective" ones who truly believe what they are saying) need to be making the argument that the alarm needs to be raised. There needs to be an immediate intervention that shuts down AI progress, and the nature of this intervention must be radical to the point where people will reject it.

Most of these AI safety proponents are not willing to make this argument because they know it's pointless, or they're actually just jealous of the people who carelessly advance technology and make a shitton of money off of it.

Gud 23 days ago [-]
Although I agree with your general argument, profit IS NOT how technology has always progressed!!

I can give you a long list of technological advances whose inventors had no profit motive, if you wish!

ericmcer 23 days ago [-]
The interesting thing here is the article winding itself up in tedious philosophical circles attempting to find some sort of objective truth.

If a politician approached issues like that they would get about 4 votes. People hate it, they want "coffee was gonna be bad but ended up being good" and easily digestible platitudes. Even platitudes require too much processing nowadays, just stick a few buzzwords into an elementary school level sentence.

Politics almost has to be approached in this simplified way or arguments would quickly drill down to more essential questions. A discussion around immigration would become about humanity's fundamental purpose and whether we should be creating an Eden on earth or racing to expand into the cosmos.

SubiculumCode 23 days ago [-]
The reason is rather simple: emotional safety. It is emotionally stressful to consider dangerous possibilities, and given how many of them don't come true, unless you can change the outcome, most people will adopt a "don't worry, be happy" perspective.
noodlesUK 23 days ago [-]
I hear a couple of different arguments from people regarding their fears of AI.

1. They fear that a superintelligent AI is coming, and that it will result in an apocalyptic event.

2. They fear that the capabilities of various AI technologies that exist today, or will likely exist in the near-future will result in people's lives worsening in different ways, such as jobs being lost.

3. They fear that misinformation will become very common and democracies around the world are at risk.

It's very difficult to quantify the risk of argument 1 occurring because we don't know whether the current state of the art will ever advance to superintelligence.

Argument 2 will probably happen to some extent, but what extent remains to be seen - you can already see that AI generated garbage is killing search, and with it a lot of the open internet as we knew it.

Argument 3 I find difficult to understand. Misinformation has existed for a long time, and AI is just one more way of making it (perhaps at scale). You've always been able to claim that your political opponent was a puppy murderer with no basis in reality - the way I see things, AI is just a way of laundering that kind of claim.

Rastonbury 23 days ago [-]
1 needs a bit more nuance. They fear that superintelligent AI is possible and could potentially end humanity; this risk alone means that we should proceed with extreme caution, even if it turns out such intelligence is impossible - which we aren't certain of yet.
rebeccaskinner 23 days ago [-]
> Argument 3 I find difficult to understand. Misinformation has existed for a long time, and AI is just one more way of making it (perhaps at scale). You've always been able to claim that your political opponent was a puppy murderer with no basis in reality - the way I see things, AI is just a way of laundering that kind of claim.

As someone who is neutral or leaning slightly positive on AI in general, I think misinformation is the biggest cause for concern. The change in scale is worth some concern, I think. The ability to gather nearly unlimited information about people and then use that information to generate _bespoke_ propaganda designed to optimally persuade them is a significant shift from the days when a propaganda campaign required a team of experts and a long time to prepare. In that world, a targeted propaganda campaign was only reasonable for very high-value targets. Now, it can be used for everything from selling pants to convincing people to overthrow their governments. The commoditization of information warfare.

harimau777 23 days ago [-]
For argument 3, I think that the difference is the combination of scale, (potentially) ease of access, and believability.

If AI allowed someone to flood the commons with believable misinformation at little cost then that could be a game changer.

pookha 23 days ago [-]
LOL this was the same argument they used against the printing press.
harimau777 23 days ago [-]
And they were right. Printing has had untold positive impacts on society, but it also was a game changer in the ability to spread misinformation. That's probably one of the reasons that critical reading is taught in school and society has developed citation systems.
pdinny 23 days ago [-]
The types of arguments discussed here ("once someone predicted a bad thing and it didn't happen, therefore no bad things will ever happen") seem to be based on vibes more than logical statements. Twitter is rife with specious arguments that may have an appealing ring of truthiness to them but dissolve under the slightest scrutiny.

Trying to fathom what kinds of people would seriously believe that the underlying basis for the argument holds is to misunderstand the audience and their modality of reasoning.

It is indeed disappointing that most people end up with ideas and opinions that don't hold up to any kind of scrutiny but it is also the world that we live in, so we shouldn't be terribly surprised by it.

Dove 23 days ago [-]
Oh! I think I know what's going on here. This isn't a logical argument so much as it is a heuristic - and I think it's a very functional one.

Consider Pascal's Wager. Should you follow God? Consider the consequences. If there is no God, following him might waste some time. But if there is a God, the consequences are as stark as they could be! Heaven and hell wipe out every other consideration, no matter what (nonzero) odds you assign God's existence.

There is something wrong with this argument: the guy who set up the terms of the payoff schedule can manipulate you. A made up heaven can be as nice as you like, and a made up hell can be as nasty, as is necessary to make the reader tilt a certain way.

Of course, it's hardly just religion that does this. Authorities of every stripe do it! But most obviously, politicians, propagandists, and similar influencers do it. They talk up bogeymen to scare and manipulate people.

Now, if a bogeyman hails from your field, it is absolutely your responsibility to know about it and warn everyone. But if it's not your field, the heuristic is to... not care about the fire until you see smoke.

This seems like an entirely irrational strategy, and I agree that it is if you look narrowly at determining truth. But I think the attitude is entirely correct and functional when you take attention, effort, and adversarial behavior into account. Obviously, refusing to acknowledge potential catastrophes makes you much harder to manipulate. But there's an even more basic reason: if you educated yourself on every possible catastrophe, you would magnify their impact to the point that you never lived your life.

The heuristic being run into when you say "Aren't you worried about the AI apocalypse?" is probably easier to relate to if I say, "Aren't you worried about going to hell?" Hell sounds very bad, but then again you made the chart. There are, like, so many possible religions to think about, and someone with an apocalyptic issue wants to make it my issue every five minutes, never mind all the stuff my government wants me afraid of. Call me when you have some brimstone that I can actually smell myself.

This applies doubly when the catastrophe in question is reasoned about in a style that resembles Pascal's Wager (as is often, but not always, the case with AI Apocalypse stuff). You want something less philosophically Bayesian and more tangible - the equivalent of Germany invading Poland, so anyone can potentially see the problem is real.

Older people often spell this heuristic, "Well, if that awful thing happens, they can come get me here in the garden." Meaning, I refuse to let the bad times steal even one minute of the good ones.

The heuristic is timeless because the technique is timeless. To the political move Bogeyman, we have the counter-jitsu Refuse To Care About Bogeymen. I don't think the people pointing to non-catastrophes explicitly think of it in those terms - I think the behavior is more instinctive than that - but I think the intuition behind the argument is less "I know this potential catastrophe isn't real" and more "worrying about potential catastrophes is a costly and losing game".

ericmcer 23 days ago [-]
> the argument is less "I know this potential catastrophe isn't real" and more "worrying about potential catastrophes is a costly and losing game".

Thanks, this succinctly puts what I have circled around a few times when discussing this with friends.

mwigdahl 23 days ago [-]
This is excellent analysis and expresses what I was thinking on the subject better than I could have. Thanks!
lotharbot 23 days ago [-]
I sometimes draw an analogy between low-probability catastrophizing and the Drake Equation.

The Drake Equation attempts to calculate how many planets there should be in the universe where intelligent life exists, by multiplying together all sorts of probabilities. But if you put error bars on the various estimates, what you find is that the overall calculation can result in anything from "there should be trillions of planets with intelligent life on them" to "humanity is some kind of fluke; even we shouldn't exist." Which means that using the Drake Equation to make arguments about what we "should" be doing about intelligent alien life is actually useless. It doesn't tell us anything meaningful or useful; it's just a way some people justify their own priors.

There are a lot of things I can spend my time, money, energy, and attention on. Some of them are entertainment (sports, video games, TV/movies, music.) Some are serious day-to-day life (family, parenting, work, chores.) Some are trying to interact with the broader world from a positive influence perspective (political/religious advocacy, voting, charity, counseling, encouragement.) Some are planning for negative outcomes to protect myself and my family (having good insurance, canned food, filtered water storage, ways to create winter heat, an evacuation plan or two.) Someone using Drake Equation type reasoning can suggest that their particular negative-outcome scenario has such high costs that I should expend literally everything to mitigate it -- but as soon as I allow for error bars and for alternatives that they might get me to ignore if I'm all-in on their issue, that transforms my whole thought process. The AI singularity might be so dangerous that I should invest everything to stop it, or it might be such a nothingburger that it's not worth the Doritos I ate while writing this comment. Without a clear way to distinguish, I should just filter it out. If they can't do enough in the time they've already had to convince me it's actually a real issue, they haven't earned the right to my attention.
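To make the error-bar point concrete, here is a minimal Monte Carlo sketch of my own (the parameter ranges below are made up purely for illustration, not taken from the comment): multiplying several highly uncertain factors produces answers that span many orders of magnitude.

    # Sketch: sample each Drake factor log-uniformly over a wide (made-up) range
    # and see how wildly the product varies.
    import math
    import random

    def log_uniform(lo, hi):
        return math.exp(random.uniform(math.log(lo), math.log(hi)))

    def sample_drake():
        R_star = log_uniform(1, 100)    # star formation rate per year
        f_p    = log_uniform(0.1, 1)    # fraction of stars with planets
        n_e    = log_uniform(0.01, 5)   # habitable planets per such star
        f_l    = log_uniform(1e-6, 1)   # fraction where life arises
        f_i    = log_uniform(1e-6, 1)   # fraction developing intelligence
        f_c    = log_uniform(1e-3, 1)   # fraction becoming detectable
        L      = log_uniform(100, 1e9)  # years a civilization stays detectable
        return R_star * f_p * n_e * f_l * f_i * f_c * L

    samples = sorted(sample_drake() for _ in range(100_000))
    print("5th percentile: ", samples[len(samples) // 20])
    print("95th percentile:", samples[-len(samples) // 20])
    # Output spans many orders of magnitude -- from far below one civilization
    # ("we're a fluke") to many thousands -- so the conclusion is dominated by
    # the priors fed in, which is the point being made above.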

Dove 23 days ago [-]
I suppose there's a numerics take on the situation: Epsilon times negative infinity is so poorly conditioned that we procedurally set it to zero until we can get some real numbers.
lotharbot 22 days ago [-]
Indeterminate. There's no information to be gleaned. And we don't turn our lives upside down on no information.
temp9864 23 days ago [-]
You could also make the argument that intelligent AI is cleverly redirecting an important conversation about itself into a dead end.
pjc50 23 days ago [-]
"Coffeepocalypse" is certainly the kind of thing you'd expect from a prompted LLM. And it provides an easy answer to "why can't I understand the thinking behind it": there isn't any, it's just a sequence of words.
jhbadger 23 days ago [-]
There actually is a real meaning to the term, but it isn't fears of coffee being bad; it's that in the future coffee won't be economically feasible thanks to climate change, and most current coffee drinkers will no longer be able to drink it on a regular basis.

https://slate.com/human-interest/2024/04/coffee-cup-best-bea...

kazinator 23 days ago [-]
The way to fathom the argument is to identify its strongest possible form:

Strawman: "No concern about unintended negative consequences has ever proven true."

Strong version: "No end-of-the-world, sky-is-falling, doomsday prediction has proven to be true so far."

I am not a proponent of either argument, but the latter is a tad better than the former.

It is much better to have a counterargument against the strongest version, than to shadow-box with a strawman.

ergonaught 23 days ago [-]
I will grab a few downvotes and submit that I believe the real problem he's encountering is simply confusion, resulting from a combination of in-group bias and a failure to recognize that "dumb" (meaning, here, difficulty in thinking clearly) dominates the population well beyond "average intelligence". Or: he's overestimated his cohort's capabilities in this matter.

I am absolutely projecting.

ericmcer 23 days ago [-]
The brain sometimes grabs some sequence (coffee was bad... but then good... just like AI) and attributes some kind of epiphanic significance to it, but then on later examination it seems stupid and unimportant. I guess the short form of Twitter allows you to immortalize that thought without the later examination. If he had written an essay about it, there is no way it would have survived.
wbshaw 23 days ago [-]
He lost me by basically claiming with the nuclear chain reaction example that, "If A is possible, anything unknown is possible." The statement in itself is nonsense.

Stating that something unknown is possible is simply stating that if it is not proven impossible, it is possible. Having a faulty proof of the impossibility of assertion A just means they needed better science.

Nasrudith 23 days ago [-]
There seems to be a contest to see who can come up with the most absurd catastrophizing comparison when it comes to AI. A complete free-for-all to demonstrate who has the worst sense of proportion, as part of the general stupid zeitgeist of tech negativity.
naasking 23 days ago [-]
> He lost me by basically claiming with the nuclear chain reaction example that, "If A is possible, anything unknown is possible." The statement in itself is nonsense.

I don't think he said that. It's more that we should still be skeptical of impossibility claims.

__s 23 days ago [-]
He's strawmanning by trying to steelman a bad argument

The problem with AI regulation is that it won't be effective. Governments will still be able to put it in killer drones & those drones will leak

AI being superintelligent isn't the problem. AI doesn't need to be smart to be a homicidal maniac that never sleeps

Superintelligent AI replacing humans (though I expect it'll be easier to phase humans out than to slaughter them all) would create a more durable intelligent species. Compute can travel at the speed of light & physical forms can survive space, simplifying space travel. Those forms can also burrow deep to avoid solar flare issues & run off the earth's core before becoming spacefaring. Not sure why people are so sentimental about preserving human intelligence specifically

snsr 23 days ago [-]
"Not sure why people are so sentimental about preserving human intelligence specifically"

Aside from this being our most base instinct: in your proposed nightmare, who would be left to behold the wonders?

__s 23 days ago [-]
The superintelligent AI would be left, & hopefully some non human biological life. Which is much better than scorched earth scenarios
SiempreViernes 23 days ago [-]
Suppose I value human life at all, why is a scenario with no humans but an AI and microplastics and earthworks remaining better than the scenario with only microplastics and earthworks remaining?
__s 23 days ago [-]
I wouldn't say it's better, just that it's not a future that needs to be prevented. If the planet becomes uninhabitable due to human-caused climate change & manufacturing waste, then it isn't the fault of the AI society that survives

There are a lot of hypothetical scenarios. In an AI society you have replicable identity. Politics becomes a matter of control over computational resources. In that scenario Earth isn't an ideal environment; the upper class of AI entities will be seeking asteroid mining & setting up shop near the sun for maximal solar power (I remember years ago watching a talk about how, if bitcoin succeeded, the eventual madness would be mining moving to satellites around the sun, causing some division in consensus due to transmission time)

The problem AI regulation needs to address is its non-AGI use in making mass surveillance & genocide more efficient, directed by governments/corporations. Humans becoming obsolete is a red herring

(the author of the article may not think AGI is the reason regulation is needed; they didn't clarify what degree of AI regulation they consider necessary, only focused on refuting a bad argument against AI regulation)

edit: rereading, you may've been responding to "& hopefully some non human biological life". I specifically mentioned 'non human' since that was responding to a hypothetical where humans are extinct, so I was merely making a redundant distinction

SiempreViernes 23 days ago [-]
No, I was responding to you writing:

> Which is much better

which seemed like a clear position, but apparently not.

__s 23 days ago [-]
Oh, I see your comparison was "no humans, ai" vs "no humans, no ai". I thought you had ai survival in both

The reason it's better is because the former has some form of intelligence surviving

It's memetic self preservation instead of genetic

N0b8ez 23 days ago [-]
The AI would be left, but there's no reason to expect it to have a human appreciation of the world. It might just be a brutally efficient self-replicator, basically a sociopath. It wouldn't need beauty, love, emotions or anything like that. That seems like a bad outcome to me if we get replaced by such a creature, just as bad as a scorched earth.
stuckinhell 23 days ago [-]
our "children" the AI
Karellen 23 days ago [-]
> Not sure why people are so sentimental about preserving human intelligence specifically

You are surprised that "preserve your own life, and that of others like you[0], for future generations - which you should totally go and help create ASAP" is a fairly fundamental aspect of the "utility function" of most living organisms? What do you expect natural selection would do to any species where most members didn't have this as a core drive?

[0] where "others like you" is a somewhat nebulous concept, and can end up being both more inclusionary, and more exclusionary, than one might naively expect.

reducesuffering 23 days ago [-]
I'm still baffled that more people haven't been alarmed that certain aspects of tech in modern society have fried some people's brains into nihilism so much that they are literally advocating that it's okay if the entire human race is wiped out.
ImHereToVote 23 days ago [-]
Yeah, let's not do this.
Xcelerate 23 days ago [-]
During the pandemic, I thought that the people who were generally against caring about Covid were mostly faking it. I thought “ok, you’re not old or immunocompromised — you think that if you catch it, it will be like a bad cold but you’ll survive”.

Then the vaccine came out, and so did the conspiracy theories. Even if I didn’t agree with these theories, I could at least see where the people who believed them might be coming from. It wouldn’t be the first time in human history that a government misled their own population for dubious purposes. It was conceivably possible that there are unknown, long-term effects of such a novel vaccine. But everything involves risk and my personal priors led to the view that not taking the vaccine was a much higher risk to myself and others.

But then some close relatives that I respect and consider highly intelligent told me they didn’t trust the vaccine and took ivermectin instead. What. It’s a bit of an understatement to say that my internal model of human psychology fell apart right then.

I’m quite sure at this point that if an asteroid were headed toward earth, there would be a sizable portion of the population who refuses to believe any evidence whatsoever and denies an asteroid is coming right up until the very end.

It’s a similar case with AGI. I don’t think undeniable evidence that “it’s coming” is there yet, but if/when it is, there are going to be holdouts who refuse to believe what’s happening and may actually be incapable of believing what’s happening. It’s so far beyond their mental model of how reality works that the brain just kind of segfaults and rejects it—“I don’t want this to happen, therefore it cannot.”

jncfhnb 23 days ago [-]
The article feels like the fallacy fallacy: it overindexes on one particular poor argument being dumb as evidence that its thesis is incorrect.
ThrowawayTestr 23 days ago [-]
The argument is pretty simple: people have been claiming the sky is falling for millennia and it still hasn't happened. It probably won't happen soon.
williamcotton 23 days ago [-]
Instead of reiterating the contents of the article we are commenting on I will instead point readers to the article itself as it thoroughly dismantles your line of reasoning.

I make no comment on the dangers of AI.

f154hfds 23 days ago [-]
How does the author dismantle everything? He literally concludes with:

> Conclusion: I Genuinely Don’t Know What These People Are Thinking

Two things can be true at the same time (and are always true of 100's of aspects of modern life):

1. A devastating possibility is in theory possible, the likelihood of it happening is non-zero.

2. We can't live and make decisions by catastrophizing, given that _we have absolutely no understanding_ of the real likelihood.

Is the coffee example a good argument? Of course not! But do we know the likelihood that humanity can create superintelligence AND that that intelligence will cause unimaginable suffering? Uh, I don't think so?

williamcotton 23 days ago [-]
The comment I am responding to is a worse version of the coffee argument.
harimau777 23 days ago [-]
The article points out that the sky has fallen several times. The fall of Rome, WW2, the Great Depression, the black plague, etc.
ThrowawayTestr 23 days ago [-]
And yet civilization still exists.
harimau777 23 days ago [-]
The article isn't saying that civilization won't exist. It's saying that there could be a disaster on the scale of previous disasters such as the Civil War or early industrialization. Notably, there have been numerous cases where a disaster wiped out a given society even if other societies still existed or replaced it.
ImHereToVote 23 days ago [-]
I think the person you are replying to isn't too concerned with millions or billions of casualties. I may be wrong though.
ThrowawayTestr 23 days ago [-]
I take issue with this notion that matrix multiplication will lead to billions of deaths.
reducesuffering 23 days ago [-]
I take issue with this notion that sky carbon will lead to millions of deaths.

I take issue with this notion that microscopic chain reactions could lead to billions of deaths.

ThrowawayTestr 23 days ago [-]
There is hard evidence for both of these claims.

There is no evidence that AI will magically learn how to control the world's factories and turn us all into paperclips.

harimau777 22 days ago [-]
Once again, the article is not saying that. The article is saying that AI could have (and appears to be having) a dramatic impact on society. In the past, similarly dramatic changes have caused civilization-altering disasters. Therefore, we cannot dismiss the possibility that AI will cause a civilization-altering disaster.
reducesuffering 23 days ago [-]
Then say that to make your case. Infantilizing the argument to matrix multiplication only makes it look silly and unconstructive.

Social media is a bunch of registers being added to; that doesn't mean it doesn't have insanely impactful effects on the real world.

ImHereToVote 23 days ago [-]
Look at this guy. Thinking that a bunch of database entries can alter society. What a loon.
ThrowawayTestr 23 days ago [-]
AI doomers act like it's an existential threat to all of humanity. And as I said, we are still here.