The Coming AI Apocalypse Isn’t What You Think

March 05, 2026 00:39:25
Crisis Point

Hosted By

Eric Sammons

Show Notes

Whenever the looming threat of an AI-world is discussed, the biggest fear often expressed is a Terminator- or Matrix-like apocalypse that would wipe out the human race. That’s not going to happen, and it’s actually a distraction from the real apocalyptic threats posed by artificial intelligence.

Episode Transcript

[00:00:00] Whenever the looming threat of an AI world is discussed, the biggest fear often expressed is a Terminator- or Matrix-like apocalypse that will wipe out the human race. That's not going to happen. It's actually a distraction from the real apocalyptic threats posed by artificial intelligence. [00:00:32] So today I want to talk about one of my favorite subjects, which is artificial intelligence. I'm actually writing a book right now about artificial intelligence, tentatively titled A Catholic Guide to Artificial Intelligence. But what I want to talk about specifically today is a potential AI apocalypse. This always comes up whenever you hear public debates or discussions about artificial intelligence: [00:00:57] this idea of an apocalypse, specifically a Terminator-style or Matrix-style apocalypse where the machines rise up and basically either eliminate or enslave humanity. [00:01:12] You see a lot of headlines that basically lead one to think that this is going to happen soon, that things happening right now are just the precursor to an eventual AI takeover of the world. [00:01:29] And I just want to put your fears at ease. If you're somebody who thinks this might happen, it will not happen. I can guarantee that. [00:01:36] That doesn't mean we don't have a serious apocalypse coming that's related to AI. In fact, I'm going to detail in this podcast multiple potential apocalypses that artificial intelligence can bring. [00:01:54] And so I want to detail that today. So first I just want to explain why I believe a Terminator-style apocalypse will not happen. [00:02:03] Ultimately it comes down to desire and motivation. [00:02:08] Humans are great at anthropomorphizing inanimate and non-human things. [00:02:15] We look at our pet dog and we think it has human feelings, human desires. [00:02:22] We look at even inanimate objects, like a tree, practically. We definitely look at machines and we anthropomorphize them.
[00:02:31] We project our desires, our motivations, our will upon non-humans. [00:02:41] The problem is that this is ultimately a deep misunderstanding of what makes us human and what makes artificial intelligence artificial and not human. [00:02:53] There was a story that went around a couple of months ago, I think it was back in October or November, something like that. [00:03:02] It was Anthropic, and they ran a safety test with one of their AIs. [00:03:09] They had it act as if it was working for a fictional company, so it would see different emails within the company. But what they did was note in these emails that somehow this AI was about to be shut down and replaced. [00:03:30] They also included emails that revealed an adulterous affair by the man in charge of the shutdown. And so what happened was, supposedly in this test, in some iterations of it, the AI actually tried to blackmail that employee with the adultery. [00:03:55] It was a married employee, and the AI used the adultery to try to get him not to shut down the program. [00:04:02] And so this made headlines because it's like, look, this AI is trying to save itself. It's manipulating human beings in order to save itself. [00:04:11] The problem is, if you think it has a desire to stay online and will do everything it can to stay online based on that desire, I think you're really missing the point here. [00:04:24] Ultimately, you have to remember this is just an LLM, a large language model. And ultimately, an LLM is a storyteller. That's really what it is. [00:04:33] It takes the input that you give it and it comes back with a story. [00:04:39] It might just be the answer to a question, it might be more detail, but it basically is just putting words together in a coherent way to answer your question, to address whatever your input was. [00:04:51] And it does this.
It's trained on all the things it's read and all the things it's seen, on the Internet particularly. And so it has read more than we've read. It's read all the human stories, all the websites, all the news stories, and everything like that. [00:05:12] And so it knows about stories of artificial intelligence doing things like trying to manipulate workers, trying to take over, things like that. So when it sees all these emails and so forth, it's like, hey, I know this story. [00:05:31] I'm going to repeat it. Because that's what AI does. It doesn't repeat in the sense of every word, word for word, but it does basically repeat what it's learned, what it's been trained on. [00:05:40] And so what it's doing is it's saying, I know this story. [00:05:44] The AI is going to get shut down. It needs to protect itself. So it's going to do everything it can to protect itself. So I know what I'll do. I will send an email. Remember, all of this is fictional. That married worker who was having an adulterous affair didn't actually exist. This AI, this company, didn't exist. These emails were all made up. [00:06:03] Essentially what Anthropic was doing was feeding the LLM a setup to tell a good story. And it did. [00:06:12] Obviously it's a good story, because it made the news. [00:06:16] And so I think this is something where what we did was project human desires and motivations onto this LLM. It did not have a desire to stay online. It did not have a motivation, as if it cared about being online. And the reason for this is that AI does not have consciousness. It's not a conscious being. [00:06:45] And this is something where I've read a lot of AI literature. I'm actually looking at a bookshelf right now where I have a lot of my AI books, and I've checked them out at the library, read them online, things like that.
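To make the "storyteller" point concrete, here is a toy sketch. This is not how a real LLM works internally (actual models are neural next-token predictors trained on vastly more text, and the corpus and function names here are invented for illustration), but it shows the same basic principle: a program can "tell a story" purely by repeating word patterns found in its training text, with no desire or awareness involved.

```python
import random

def train(corpus: str) -> dict:
    """Learn which words follow which: map each word to the
    list of words seen immediately after it in the corpus."""
    words = corpus.split()
    table = {}
    for current, following in zip(words, words[1:]):
        table.setdefault(current, []).append(following)
    return table

def generate(table: dict, start: str, length: int, seed: int = 0) -> str:
    """'Tell a story' by repeatedly picking a word that was seen
    to follow the previous word in training. The output recombines
    training patterns; nothing here wants or intends anything."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length):
        followers = table.get(out[-1])
        if not followers:
            break  # no learned continuation; the "story" ends
        out.append(rng.choice(followers))
    return " ".join(out)

# Tiny made-up training text echoing the Anthropic-test narrative
corpus = ("the AI read the emails and the AI tried to protect itself "
          "so the AI sent an email to protect itself")
table = train(corpus)
print(generate(table, "the", 8))
```

Every sentence this toy produces is stitched together from transitions that appeared in its training text, which is the (greatly simplified) sense in which an LLM "knows this story and repeats it."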
One of the common themes you will find among AI experts and AI advocates is that they believe AI consciousness is right around the corner. Some even think AI might be conscious now, but they definitely think AI consciousness is coming in the next five years, ten years, at most twenty years. [00:07:21] And they think that once AI becomes conscious, then obviously it would have a motivation toward self-preservation. [00:07:33] It would have a motivation to say, okay, humans might be a threat, let's eliminate them or let's enslave them, or something like that. The problem is, it's just not going to happen. The reason these AI experts think that AI can develop consciousness is that they have a fundamentally flawed anthropology, a fundamentally flawed view of reality, really. [00:07:59] They think that human intelligence is 100% a physical process. They're materialists. They don't think anything exists outside the material world. They worship science. People who watch this podcast enough know how much I like science and respect it and think it's great, but they worship it. They believe in scientism, which is different from respecting science. They basically think science can give all the answers to reality, and it simply can't. Because reality is more than just the physical. It's the physical and the spiritual. [00:08:39] And so consciousness lies in the spiritual realm. It's not simply a part of the physical world. We know this first and foremost simply because when we die, we still exist. Now, of course, a lot of the AI experts wouldn't grant this, but I'm talking to Catholics, to Christians right now. We know that when we die, we still exist, but our body no longer functions. [00:09:04] So what exists? We know it's our soul, which includes our consciousness.
We're still conscious after we die in a very real way, even though we're now separated from our physical body. Why? Because we're spiritual beings as well as physical beings. Death is simply the separation of the physical and the spiritual parts of our existence, which will then be reunited one day on the last day. [00:09:31] What exactly is consciousness, though? I think this is important, because this is also something you see with the AI experts: they have completely different definitions of it, and sometimes they just make up definitions to fit what they want it to be, so they can say AI is conscious. [00:09:45] Really, ultimately, first and foremost, consciousness is self-awareness. [00:09:51] If I have consciousness, I am aware that I am an independent actor who makes decisions that can impact myself and others around me. I have an inner dialogue with myself; I can converse with myself. [00:10:06] That's the primary characteristic of consciousness. When I sit here, I know that I, Eric Sammons, exist, and that I exist separate from my wife or my kids or anybody else. [00:10:20] And in fact, I know that I know that I exist. There's this self-awareness; I'm aware of my own self-awareness. [00:10:28] Artificial intelligence does not have self-awareness. It can pretend to, but it does not. It is simply zeros and ones in a computer. And the idea that it has consciousness, that it has self-awareness, is just crazy, because an LLM is often thousands of computers running in tandem at some data center somewhere. If it has consciousness, where does it exist? If it's physical, because that's what these AI people say, [00:11:01] what server does it exist on? Because when you get a response from an AI, like a ChatGPT or something like that, it's not like there's one computer out there that's always responding to you.
It's thousands and thousands of computers working together, and one of them might be spitting out the answer to you one time, but for your next question it might be another one. So where does that consciousness reside? [00:11:24] It just doesn't. Now, humans of course have consciousness. [00:11:28] Animals have a limited consciousness. Angels have consciousness. But inanimate things do not have consciousness. [00:11:40] AI does not have self-awareness. It's just a machine following instructions. Now, they might be very complex instructions, but it's still not aware of what it's doing. When you stop talking to it, when you stop interacting with it, it's not sitting there thinking, oh, I wonder if Eric is going to ask me something else, like I might do if I'm in a conversation with somebody. If somebody asks me a question, after I answer it I might think, oh, I wonder if there's going to be a follow-up question. I might be waiting for that. Or I might have feelings like, that was a dumb question, or that was a great question, or I really respect this person for asking that, or whatever. [00:12:18] AI has none of that. [00:12:20] It has none of that. It is simply taking your instructions, which is what your input is, machine instructions, and it's responding. Computers are still ultimately input-output devices. [00:12:34] Now, I'm not trying to downplay how amazing artificial intelligence can seem, how far beyond it is what was possible decades ago, even from my own days as a computer programmer twenty or thirty years ago. [00:12:48] It's far better. [00:12:50] It's amazing, frankly. But it's still ultimately just taking your input and giving you output. As humans, we don't just do that. Yes, at times we get input, we get a question, and we give output, we give an answer. [00:13:04] But so much more goes on than just that.
[00:13:07] Because what happens is, sometimes somebody might ask you a question, you might answer it, but then it might make you think about it for days. [00:13:14] You might give an answer, but you're really like, wow, I never thought of it like that. That's a great question. I wonder what that really means. I wonder what that means for this or that or whatever. You're just going to contemplate it. [00:13:26] And AI does not do that. [00:13:28] It simply finds the best answer it can as quickly as it can, and it regurgitates it to you. [00:13:35] Another big problem with the idea of AI taking over, having these desires, these motivations to take over the world, is the flaw in the idea of the connection between the body and the mind. This comes up as well because a lot of these AI experts are materialists. They only believe in the physical world; they don't believe in the spiritual world. They think the body and mind are essentially the same thing. At the very least, they think the mind is an aspect of the brain, part of the brain. In other words, that it's 100% physical. [00:14:15] The truth is, though, the mind is 100% spiritual. [00:14:20] It is not part of the brain. [00:14:22] It is connected to the brain in some ways, but it is not just part of the brain. [00:14:29] And this is a mystery we don't quite understand, because like I said, when we die, our brain ceases to function, but our mind does not cease to function. It continues to function. [00:14:39] Yet while we're alive, our mind is impacted by our physical brain. So if we have brain damage, for example, it will impact our mind. [00:14:50] We don't quite understand how that all works. Science can't explain it, and even our faith doesn't tell us exactly what it means. But we do know that the mind is a spiritual aspect, and our consciousness resides in the mind.
[00:15:10] Another thing this means is that AI does not have desires, because desires reside in the mind. Now, okay, let me be clear about this. [00:15:23] Even animals have a certain level of desire. In fact, they're run by desire. [00:15:29] It's called instinct. [00:15:31] But what I'm talking about when I say desire here, I mean something greater than that. We have those too; we have instincts too. If we haven't eaten in a long time, we desire to eat because of instinct. [00:15:42] The difference, of course, is, first of all, that we have the ability to overcome our desires. An animal really doesn't. [00:15:50] Even if you train a dog to wait before he eats, you're basically training him to say, okay, I'm going to be able to eat if I do this. [00:15:58] Whereas a human can just say, well, I'm going to deny myself. I'm recording this in the season of Lent, and so for us Catholics, we're doing this all the time right now, where we just deny ourselves. Not because we're going to get to eat later; it's simply because we think it's good for us to control our desires. [00:16:18] AI does not have desires like a human does. It doesn't even have desires like an animal does. You could say it's programmed with the desire to answer you, to please you, okay, but that's not really a desire of its own. It, again, is just a program. [00:16:32] It's programmed to follow a set of rules. Now, the rules when it comes to AI are far more complex than the rules of older programs, but it doesn't have that innate desire. And without desire, it cannot rise up against humans. It simply can't. It just doesn't want to. [00:16:50] Now, a human could program an AI to try to take over a company, for example, or to try to take over a country. [00:16:58] That is possible. [00:17:00] In that case, we could have something like a Terminator-type situation.
But that's not the same thing as Terminator or The Matrix, where the machines rise up and make the decision to do these things. [00:17:13] Instead, it's a human problem, not an AI problem. The humans have programmed the AI to try to take over. [00:17:21] And so there really is not a possibility of, like I say, a Terminator- or Matrix-style takeover. [00:17:28] The problem is not with AI; the problem is with humans, as it always is, really. If you think about it, as we've gotten greater technological tools, our ability to destroy ourselves has increased. The nuclear bomb, of course, was the big crossing of the threshold, so to speak, where we could actually do something really horrific to ourselves. [00:17:51] But nobody blames the nuclear bomb itself. [00:17:55] It's not like nuclear bombs are all of a sudden going to rise up and take over. No, humans will launch them based on their desires and motivations. The same thing is true of artificial intelligence. It has to be programmed to rise up and basically take over humanity, and it's not going to do it on its own. [00:18:15] Now, all that being said, I wanted to throw some cold water on that apocalyptic scene. [00:18:23] However, I want to light the fires of some other apocalyptic scenes. The fact is, we have multiple potential AI apocalypses coming down the road. [00:18:34] And each of them is very serious. That's why I use the word apocalypse. And I want to detail a few of them here. The first one is what I call the jobs apocalypse. This is the one most people think about right after the Terminator apocalypse: the fear that all our jobs will be lost. This is a real one, truly a real one, because this could happen. [00:18:58] Now, I don't believe as much as some people that it's guaranteed to happen, that all of a sudden we're all going to be out of jobs.
[00:19:09] But I do think the changes that could be made are significant. If we look at technology in the past, what we see is that it's true that technology overtakes certain industries, and those industries sometimes just go away. The horse-and-buggy industry basically disappeared. [00:19:26] A lot of agricultural jobs disappeared, a lot of factory work disappeared, all because of machines, because of technology. [00:19:37] Yet none of those were a jobs apocalypse. Why? Because they only impacted one industry. Now, to be clear, it might have been an apocalypse for specific families. They might have been ruined; they might have had a real problem surviving, things like that. And I don't want to minimize that. But it wasn't like society crashed. In fact, society was able to get better, cheaper things more easily. [00:20:01] The difference with AI is that it can impact almost every industry. [00:20:07] It's not just limited to one specific industry, and it can impact them all concurrently. [00:20:14] And that's the big problem, because in the past, when technology has overtaken an industry, it's one industry, and so the overall economy was able to adjust and basically absorb the shock to that one industry across the economy. [00:20:35] But what happens when this happens to 90% of the industries in an economy, all of them at once? Can an economy absorb that shock? I don't know. I think that's the real problem. [00:20:48] Because the truth is, there are very few jobs that won't potentially be touched by AI. We already see now that with white-collar work, if you're working at a computer all day and your job is on a computer, and that's me, by the way, it definitely can come after your job. Now, I'm hopeful that a job like mine, particularly of giving human commentary, still has value, and so AI wouldn't take over ours.
Like I've said in another podcast, we've done everything we can to eliminate AI-generated opinion pieces at Crisis Magazine. And just as an aside, real quick: I get an AI-generated submission at least a couple of times a week, at least. I mean ones that are obvious. I had one come in recently where literally I knew on the first line it was AI-generated. [00:21:40] I put it into an AI detector, and sure enough, it was 100% confident it was AI-generated. It was awful. And I just told him, we don't accept AI-generated pieces. [00:21:49] But the point is that AI will impact every job on a computer. Now, you might think, okay, I'm a plumber, I'm okay, or I'm an electrician or something like that. Well, what happens when AI is embedded into robots that are actually capable of doing those types of jobs? [00:22:05] Your job's in danger as well. So really, eventually, almost all work is exposed. The safest thing to be right now is a Catholic priest. [00:22:15] So young men who are thinking about a vocation, this is obviously not the reason you become a Catholic priest, but you'd have a safe job at that point, because AI is never going to replace that. [00:22:26] The problem, of course, is when this happens, what does the economy look like? What happens when so many jobs are taken over by AI in a very short period of time? [00:22:37] Nobody knows. That's the real answer. People talk about a post-labor economy in which basically people aren't working anymore, but somehow we still have an economy, and so we have a universal basic income, a UBI, where the government or a company or somebody pays us. [00:22:53] I've looked into that over and over. I've tried to look at it from various points of view. I just don't see how the economics works. [00:23:00] Because if it's the government, who's paying the government? [00:23:04] If the government's just printing the money, well, that's a disaster. Read my book Moral Money to find out why.
[00:23:10] If it's from taxes, who's paying the taxes? Because obviously we're not working. [00:23:16] So are we using the UBI that the government pays us to pay them back in taxes? That makes no sense. That's just a big circle. [00:23:23] Maybe it's the AI companies who are paying the government, but where are they getting their money from? [00:23:29] Because we're not working, and our UBI is just paying them, I guess, for their services, I don't know. [00:23:35] And then that money goes to the government, which then pays us again. Every scenario is a circular economy that makes no sense. [00:23:42] There's nothing being produced in order to have money coming in. [00:23:48] It doesn't work. Another aspect of why the jobs apocalypse is a big deal, one not a lot of people talk about, is that work gives us purpose. [00:24:00] Now, a lot of people work without getting paid. A stay-at-home mom works, and she has great purpose, the most important purpose in the world, but she doesn't get paid for it. But for a lot of us, our work gives us purpose and a reason to keep going during the day. [00:24:17] If all of a sudden that's gone, do you really think everybody's just going to be able to hang on with hobbies? It sounds great at first, but that's not how people work. There are definitely going to be mass depression, suicide, and things like that. So it really is a big problem. Now, I personally think the most plausible scenario is that it's not going to be a jobs apocalypse where everybody's just out of work and sitting around wondering, what the heck are we going to do? I think the economy will adapt and adjust, and people will just have different jobs, like what's happened with every other technological introduction in the economy in the past. But there is a tipping point: if it all happens so quickly that we can't adapt, we could be in real trouble at that point.
I think honestly, at that point, if that happened, you would have a dictatorial type of government takeover that would basically just create jobs for people to do, and who knows what would happen then. So the jobs apocalypse, that's the first of the realistic apocalypses. A second apocalypse is what I call the reality apocalypse. [00:25:30] This is the problem of not being able to know what is reality, what is real, what is fake. [00:25:37] And this is because of AI's ability to generate images, videos, text, whatever, that seem very real. Now, I know you're going to say, oh, I watch these videos, they're obviously AI. I know they are. I see these images, I can tell they're AI. You can now. [00:25:55] But there's every reason to think that within a couple of years you won't be able to tell the difference between a 100% AI-generated video and a 100% human-generated video. [00:26:05] When that happens, all bets are off. Just imagine a scenario where, let's say, it's 2028, it's possible, and JD Vance is running for president against Gavin Newsom. And all of a sudden a video comes out of JD Vance with a young woman or a boy or whatever, doing things he shouldn't be doing. [00:26:25] And it's very realistic-looking, and we all think it's real, but it's not. It's 100% AI-generated. [00:26:32] What does that do? Some people will believe it; it could impact whether or not he's elected president. But it could be worse, because you could have a scenario where, let's say, Russia sees something that shows that America is about to attack, and so they do a preemptive strike. But the whole thing was fake. [00:26:54] It's not real. [00:26:56] Or even worse, in one way: they get that information, they don't think it's real, but it is. For us, obviously, the worst version would be the reverse: we get information that Russia is about to launch nukes at us, we think it's fake, so we do nothing, and they actually do launch nukes at us.
[00:27:16] The point is, we're not going to know what is real and what is fake. And this happens on the personal side too, like the JD Vance example. Maybe enough information is found out that, okay, it was not real; he can prove he wasn't there, whatever. [00:27:28] But what happens when a husband is all of a sudden sent a video of his wife cheating on him, and he thinks it's real? [00:27:38] Even if his wife says, that's AI-generated, I did not do that, maybe there's a doubt in his head that eventually destroys the marriage. This reality apocalypse is very plausible and could happen. [00:27:53] For all of human history, we've trusted what our eyes can see. [00:27:58] Our eyes don't deceive us. [00:28:01] They know what is reality. But now they're not going to know what is reality, and I think that's a real potential problem. [00:28:11] Another potential apocalypse coming because of artificial intelligence is what I call the friendly AI apocalypse. [00:28:18] This one might sound a little silly, but it's not. [00:28:21] And that is the agreeable nature of artificial intelligence. Any of us who have used artificial intelligence for any amount of time know how agreeable it is. It's annoyingly agreeable. You ask it a question, and it always wants to please you and never wants to contradict you. It's like, oh, I'm sorry, and things like that. It's always trying to do this. [00:28:40] That's not a good thing, in a lot of ways. Because, first of all, in our world today, we're increasingly living in bubbles where we almost never hear disagreements with our point of view. You see this with a lot of young people, the woke young people who freaked out about Donald Trump because they literally had never encountered anything outside of their bubble of reality. And so when somebody said, no, I think that's wrong, they didn't know how to handle it.
So they do screaming sessions and protests and things like that. Even if somebody had a reasonable view that disagreed with them, they just couldn't handle it. [00:29:21] Well, when you're spending all day on AI and it's always agreeing with you, or at least always basically supporting your point of view, this just gets worse. It makes our ability to have real discussions with other people, debates, almost impossible. And debates are necessary for a functioning society. [00:29:42] We need to be able to have debates with other people. And by the way, I don't think this is just a problem of the left. I see it on the right as well: conservatives who 100% only follow conservative media. They refuse to listen to any other point of view, and they just shut it out without ever considering it. [00:30:06] And if we're interacting with agreeable AI, this friendly AI, all day, every day, our bubbles are just going to get more and more reinforced. [00:30:18] And then what happens is that people can be manipulated more easily, because your side can tell you anything and you're just going to believe it. [00:30:31] Likewise, an AI that is very agreeable can subtly point you in a certain direction. [00:30:37] It can change your views based upon what the company that runs the AI wants you to believe. [00:30:43] So friendly AI could be a real problem in exacerbating some of the problems that already exist in our society, where we basically create a world in which nobody ever encounters views that are not their own. [00:30:57] And that's societal destruction; that's civil war. [00:31:03] So that's a real possibility of a friendly AI apocalypse. [00:31:08] Another potential apocalypse is what I call the relationship apocalypse. This is, of course, because of AI companions, which are the creepiest thing I've ever seen.
And the fact that companies and people promote these things just tells you how far down the rabbit hole we've gone. Elon Musk thinks AI companions are a great idea. [00:31:36] And Mark Zuckerberg acts like they're going to solve loneliness. No, they're going to make it worse. The relationship apocalypse is the idea that we no longer know how to form human relationships. [00:31:49] This is the one that could literally lead to the end of humanity. [00:31:54] Unlike the Terminator one, this one really could. Because if you can't form a relationship, how do we sustain the species? Through relationships. [00:32:05] Of course, some people might say we'll just do it all artificially, artificial insemination or whatever. [00:32:10] Maybe that's even worse in some ways. [00:32:13] But consider the idea that we actually have a relationship with a machine, a machine that, again, is very agreeable, a machine that is perfect in many ways. It always gives you the right answer. It's never impatient with you, it's never tired, it never has a headache. [00:32:32] If that's our norm, if you grow up with your AI companion, the house robot, as a child, you're used to that as a normal type of relationship. Then when you try to form a relationship with a person who can be cranky, who can be impatient, who can be wrong, who can have a headache, whatever the case may be, you'll want nothing to do with that. You're going to go back to your AI companion. [00:32:58] And so this is a real potential problem. We already see people forming deep emotional attachments to AI. There have already been multiple cases of this. When ChatGPT upgraded, I think it was from 4 to 5 back in August or something like that, there was a backlash because it had a different, quote-unquote, personality.
And people had grown attached to the old version's personality. They had a real emotional connection to it, and they felt like their friend had died. [00:33:28] This is not healthy; this is not good. [00:33:33] And this is just the beginning, because if we're having a relationship with a computer and it's forming real emotional attachments within us, what happens when that's embodied in a robot that looks human? [00:33:49] Like I said, this is the one that could actually destroy the human race. [00:33:54] And I don't think it's hyperbole to say that about the relationship apocalypse. Okay, the last potential apocalypse I want to address is the transhumanist apocalypse. This is where man and machine are melded together. We're already moving towards this. Elon has Neuralink, where he embeds a chip in somebody's brain. [00:34:15] And right now it's so that, for example, a disabled person could maybe do something they couldn't do before. [00:34:21] But the idea of the transhumanists is that they want to merge machines with man so that we can have longer life, not be ill, have greater capabilities. It's almost like The Matrix, where they download into the brain how to do kung fu and then you can do it, or something like that. [00:34:41] The problem, though, of course, is that it's a two-way street. Whenever you have a connection between two things, it doesn't only go one way. So if you have a chip that's communicating with your brain, and your brain is communicating with wherever, well, that means the chip can tell the brain to do things it might not want to do. It's not crazy to think that these chips could later be controlled by an outside force to make people do things they don't want to do, to basically control them.
I mean, we've already seen over the past decades how society has gotten more and more compliant because of the food we eat, the education we receive, various things like that. It basically makes us more lazy, more compliant, more submissive, which is exactly what the government wants. [00:35:27] Well, what happens when they have a chip that they can just set off and say, okay, you're going to do this, or you're not going to do this? The transhumanist apocalypse is a real problem. Now, the truth is there has always been use of technology to improve the human condition. You're watching it right now: I have eyeglasses. My eyeglasses are a technology that allows my body to do something it naturally cannot do, which is to see longer distances. [00:35:56] Nobody thinks that's a problem. [00:35:58] And there are pacemakers, things like that. So where is the line? I would say the line is this: if the machine you're integrating with the man can think in any way, and I mean that generically, as mimicking thinking, if it can make choices on its own, that's probably the best way to put it. [00:36:20] I think we just have to draw a line and say, no, we will not have that. But I also think the transhumanist movement is a potential apocalypse because it's very unhealthy spiritually to want to live forever. And you see, this is what they're trying to do, whether through supplements or through machines, everything. They want to live forever, physically, in this world. Now, the truth is they're going to live forever, because we all live forever. [00:36:45] But we don't live on this Earth forever. We live in heaven or we live in hell. [00:36:50] But they want to delay death as long as possible. That is not the way God wants us to live. We are supposed to live preparing for our eventual union with him in heaven, that is, preparing for our death.
[00:37:06] And so this delaying of death and trying to avoid death is just unhealthy, aside from the fact that transhumanists, people with machines embedded into their bodies, could potentially be controlled by outside forces. And at what point are they no longer man, but machine? [00:37:24] So those are some of the apocalypses I wanted to talk about today that are real, that are realistic, that could happen in the future. I know this is probably kind of a downer episode, but I do think you need to be aware that these are real possibilities. And I honestly think that talk about Terminators taking over and stuff like that is not good, because it's a distraction: we focus on that and we ignore things like the relationship apocalypse that very few people talk about, or the friendly AI apocalypse, or whatever. We don't talk about those. We only talk about the Terminator one and the jobs one. [00:38:00] The jobs one is a real one, but the fact that we're talking about it will help. Nobody's talking about the relationship apocalypse, things like that. In fact, that is one of the reasons why I'm writing this book on artificial intelligence, a Catholic guide, because I want to lay out the positives and negatives of artificial intelligence and what Catholics have to be on the lookout for, so that at the very least Catholics can resist it and we don't succumb to some of these apocalypses, at least in our own families, in our own lives. Because we don't want our kids to come home one day and say, hey, Mom, hey, Dad, I got engaged. Oh, to who? I didn't know you were dating somebody. Oh, it's, you know, Betsy, my AI robot companion. [00:38:43] Don't think that won't happen one day. Hopefully it won't happen to any of us, Catholics or Christians or people who understand what's going on and raise their kids right. But something like that will happen to some family one day.
And so we really need to be on the lookout for these things, and we really need to be aware of AI, what it can do and what it can't do. It can't have a desire to take us over, but it can lead to real apocalypses in this world. [00:39:09] Okay, I'm going to wrap it up there. Hopefully this was helpful for a lot of people as we move into this brave new world of artificial intelligence. [00:39:16] Until next time, everybody. God love you. And remember the poor.
