Will AI Make Jobs Optional and Money Disappear?

Crisis Point

November 21, 2025 | 00:36:06

Hosted By

Eric Sammons

Show Notes

Elon Musk claimed that artificial intelligence will one day make jobs optional and money obsolete. We'll look into why reading Genesis 1-3 shows he's completely wrong.

Episode Transcript

[00:00:00] Speaker A: So everybody's talking about the future of artificial intelligence, or more accurately, what the future will be like in an artificial intelligence world. You hear the AI boosters talk about the utopia it's going to bring about, and you hear a lot of the people who are more skeptical of artificial intelligence talking about the dystopia it's going to bring about. Often they have the same presuppositions; one side thinks it's going to be great for the world, the other side thinks it's going to be terrible for the world. So I wanted to talk about that a bit today, but particularly I wanted to talk about the comments that Elon Musk just made this week about an AI future. Specifically, he brings up the possibility of a future, a likelihood in his mind, in which jobs are optional (paying jobs, I assume he means), and in which money, currency, is actually no longer really used. I'm going to go ahead and play the clip. It's only about a minute and a half, two minutes long, because I want you to get a sense for what he is saying here. [00:01:22] Speaker B: If you say, like, in the long term, where will things end up? Long term? I don't know what long term is. Maybe it's 10, 20 years, something like that. For me, that's long term. My prediction is that work will be optional. [00:01:39] Speaker A: So we'll take that. [00:01:42] Speaker B: Yeah, I mean, it'll be like playing sports or a video game or something like that, if you want to work. In the same way, you can go to the store and just buy some vegetables, or you could grow vegetables in your backyard. It's much harder to grow vegetables in your backyard, but some people still do it because they like growing vegetables. That will be what work is like: optional. And between now and then, there's actually a lot of work to get to that point. I always recommend people read Iain Banks' Culture books to get a sense for what a probable positive AI future is like. And interestingly, in those books, money doesn't exist. It's kind of interesting, and my guess is, if you go out long enough, assuming there's continued improvement in AI and robotics, which seems likely, money will stop being relevant at some point in the future. There will still be constraints on power, like electricity and mass. The fundamental physics elements will still be constraints, but I think at some point currency becomes irrelevant. [00:03:09] Speaker A: Okay, so this belief is not uncommon among the AI leaders like Elon Musk, Sam Altman, people like that. And it's very common for them to overhype the future. Obviously some of this overhyping is because they have to make sales, raise money, get investments, do these things in order to keep their AI-based companies afloat and growing. There's a lot of worry in the AI industry that things are going to fall apart soon, that this is going to be like the dot-com bubble and burst at some point. I don't want to talk about that here. But what I do know is this: these men, and I think they're almost always men, are incredibly brilliant when it comes to technology. People like me, people like you, don't hold a candle to their brilliance, their intelligence, when it comes to a lot of this stuff.
However, as intelligent, as brilliant as they are when it comes to technology, they are that dumb when it comes to understanding the human condition and understanding basic economics. There's just no other way to put it. And in fact, I'm not trying to be insulting here, but I don't think it's a coincidence that most of these men who run these technology companies, people like Mark Zuckerberg, people like Elon Musk, clearly have social problems. They have problems really being human. That's the joke about Mark Zuckerberg, that he's kind of like a robot, and Elon obviously has some of that nature as well. I think it keeps them from understanding. They see things very much from a technology standpoint, a machine standpoint, and not really from a human standpoint: understanding how human nature works, how humans work, and frankly how our world works and why we have the economics we have in the world. I would recommend to anybody like Elon Musk, Mark Zuckerberg, Sam Altman, whoever: if you really want to understand economics, if you really want to understand the human condition, read the first three chapters of Genesis. Just read the first three chapters of Genesis. You can't stop at chapter two; you've got to read chapter three. If you only read the first two chapters, you might actually think the utopia you're talking about might be possible. Read chapter three as well, because I know you're watching this, Elon; I know you're watching this, Mark. What those chapters tell us is that we live in a fallen world, but that we were made for something more. What I mean by that is we were made for infinity, for God, who is infinite, and we can never be satisfied unless we receive him, unless we acquire him, so to speak. You'll see what I mean there in a moment. The fundamental principle of this world is scarcity, meaning all the things that we need and all the things we want aren't available to us, because the only thing that can really satisfy our wants is God. Yet in this world, everything is finite, and in fact, there's a limited amount of the finite things. This is the basis of economics. The fact is that with my time, my money, my energy, I can only do so much. So let's say I love bananas and I want a thousand bananas, but I also love apples and I want a thousand apples, and I can only afford a thousand of one of those two. That's scarcity. Maybe I get 500 of each, something like that. The point is, I can't have everything I want. I want a thousand apples, I want a thousand bananas, and I just can't, because I don't have the money for it. Even if I worked hard, maybe one day I'd be able to get a thousand of each of those. However, at that point, do you think I'm fully satisfied? Let's say I don't have the money for it now, so there's scarcity: I can't trade my money or my work for a thousand apples and a thousand bananas today. Let's say I compromise. I say, okay, I'm only going to get 500 of each. But I still wanted a thousand. So I work hard, I save my money, and maybe a year later I have enough money and I buy another 500 of each. Now I have that thousand apples, thousand bananas. I assume I ate the first 500, whatever, because otherwise they would have gone bad.
But now do you really think I'm satisfied? Do you really think I will now live the rest of my life in peace and contentment, knowing I got my thousand apples and my thousand bananas? Of course not. You know what I'm going to want? I'm going to want more, because I was made for more. Maybe I don't realize exactly what I was made for, and so I think, okay, I'm going to keep working hard to get more apples and more bananas. But the fact is, first of all, I can never get an infinite supply of bananas and apples. It's impossible. They don't exist in this world, and I don't have the resources to buy them even if they did exist. Those trade-offs are the fundamental building blocks of economics. Now, with Elon, you could tell at the end, when he says there will be limits on power, energy, things like that, that he understands there is scarcity. He's not completely clueless. But he doesn't really understand, because he limits it only to that. He makes it seem like we'll get anything we want, that basically we won't need to work because we'll be completely satisfied. That's not what's going to happen. We're still going to need to work, because there will always be scarcity and there will always be infinite wants. Consider this: 100 years ago or 200 years ago, let's say 200 years ago, so 1825. If you told the average American in 1825 what the average American in 2025 has available to him when it comes to money, housing, clothing, all the different aspects of life, that man in 1825 would think, oh my goodness, everything I want would be satisfied in that future. If that's really the future, then those people will not have to work; they'll just have everything they need; they'll be completely happy. Yet those of us living in 2025 know that's not true. In fact, I would argue the opposite is likely true: people today actually have less satisfaction with life than they did in 1825. We want more and more. None of us really feels satisfied. In fact, I haven't done studies on this, but I think it's true that we actually work harder, more hours and all that, today than they would have in 1825. I know working on a farm in 1825 was not easy, yet they had the winter months, not off, but largely off. They took a lot of holidays and things like that. They worked with the cycle of the seasons. Now, though, most of us work at least 60 hours a week, often more. We have phones that are basically tied to us and make us available for work 24/7, almost no matter what job you have. I remember in the late 1990s, a few people, like doctors, would sometimes have pagers. If you don't know what those are, kids, they were little things you clipped on your belt, and they would beep and vibrate to say, okay, you have a message, and you had to find a phone and call the person. I remember getting a beeper for my job because I was working for a web hosting company. It was 1997, 1998, and I had to know if a server went down, because I was in charge of that; the company was just me and two other guys. So if a server went down, I needed to know so I could find out what was wrong with it and boot it back up. Otherwise websites would be down. And so I thought I was super important with this.
But the point is, it made me available for work 24/7, something that wasn't true in 1825 for most jobs, yet it is today. And while we have so much more available to us when it comes to consumer goods (we have so much that we seem to spend most of our money on entertainment, and we have food in abundance, housing in abundance, clothing, a bunch of stuff like that; I know there are problems in the economy, but I'm not going to get into that right now), the point is, compared to 1825 we have all this, yet we know there is still scarcity. And here's the thing: there will always be scarcity. There will always be scarcity, and because of that, most of us will always be required to work. The idea of jobs being optional, that robots and AI will just do all the jobs for us and produce all this stuff and we will simply sit back and take it: that's not what will happen. What will happen is our wants and needs will evolve. They'll change, just like they have since 1825. Nobody in 1825 thought, I have to have a phone. Of course, in 1825 there were no phones, but nobody thought that. Today, though, most of us believe we have to have a phone. That's how we keep in communication with our family and with work; that's how we find out information about the world, things like that. Almost every one of us has a cell phone, and you're basically in a monastery if you don't. We all feel like a cell phone is a need almost more than a want. I kind of feel that way myself: if I didn't have a cell phone, it really would greatly impact my life, my ability to communicate with my adult children, to do my job, all this type of stuff. So you see how needs and wants evolve with the times. It's not as if needs and wants stay stagnant, so that if we just have a bunch of robots make these things, all of a sudden everything will be hunky-dory. And of course, the economics behind it is just ludicrous, because how do you determine who gets what? I don't care what kind of AI world you're living in, there is still going to be a limited number of products. It's impossible, simply because of natural resources, to create an unlimited amount of every product. So if you can produce a thousand copies of the latest technology basically for free through AI, without human workers, well, what if 1,001 people want it? Okay, now you produce 2,000. What if 2,001 people want it? You produce 10,000. What if 10,001 people want it? The point is, obviously there's still scarcity, and so there still has to be a determination of who gets what. In a free market, that is done by people working, trading their time for the things they need and want. That's basically what the economy is: trading your time for the things people want and need. And I want to make another point about work. Work, remember, is pre-Fall. Like I said, read Genesis 1 through 3. Chapter three you need to read because you need to learn about scarcity; chapters one and two you need to read because you need to learn that our desires were made for God. But also in chapter one, verse 28, I think it is, yes, verse 28, God commands man: be fruitful, multiply and fill the earth and subdue it. Meaning: do work.
This is before the Fall: man was given the command to do work. Work is integral to who we are as human beings. It's not a result of the Fall; it is who we are. And so work is essential. Now remember, just to be clear, work is not the same thing as a job. Elon is talking about jobs, and I'll address that in a second. But just to be clear, everybody works in one sense, unless you're bedridden and comatose or something like that. The stay-at-home mom who doesn't make any money works extremely hard. The student works, the software developer works, the construction employee works. Everybody works at something. If you're rich and you sit around doing nothing, you still probably work at video games or whatever the case may be. We find our purpose in our work, and so it's very important that we all have a purpose, that we all have work. It would be disastrous if we did not have that. Now in Elon's world, though, jobs are optional, so we just do whatever we want to do, and that's what fills our purpose. But like I've already been saying, the fact is that we will still have scarcity in the world. There will still be things we want that other people also want, and there's not enough of them to go around, whatever that may be. Like I said, if you told people in 1825 that everybody is going to not only want but need a cell phone, they would think you're crazy. In 2225, 200 years from now, there are going to be things that people quote-unquote need that don't even exist today, things we obviously don't need or even want yet. That will continue to happen. And so we need some way of distributing these things, and this is where the role of money comes in, because what money does is set prices for things. It allows for as equitable a distribution of things as is possible in this fallen world. Now, of course, there are the socialists. I know Elon comes across as the big capitalist and all that, but people like him are cozying up to UBI, universal basic income, which is basically socialism in practice; I think a lot of them have a socialist streak in them. That approach is top-down: certain people determine what everybody gets, because that's how you would have to do it. That method is disastrous. But you have to have a method, some method by which things are distributed to people. In the free market, that method is the price structure: prices signal the scarcity of something. The fact that an ounce of salt is incredibly cheap compared to an ounce of gold tells you the scarcity of each thing. Now note, scarcity is not so much about how much of something is available as about the demand for it. There are lots of grains of sand, and nobody wants them. There might be some object you've created in your garage that is scarce, the only one of its kind in existence, but there's no demand for it, so its price is basically zero. Pricing reflects both demand and scarcity. The idea is that gold is much more valuable, has a higher price, than salt because demand versus supply is different for gold than for salt. So money really is necessary to know this.
So this idea of a future where no money exists: you know where they get this from? To be very clear, where Elon and the guy he's reading (I can't remember his name) get the idea of a future with no money is Star Trek. It comes from Gene Roddenberry, the creator of Star Trek. I love Star Trek, but that idea, that in the future there will be no money, is so economically stupid. You have to have money, or else do a barter system, but barter systems are highly inefficient. You have to have some form of money, because money, again, is what makes public what the scarcity of something is, what the demand is compared to the supply. So if there are a thousand of some widget but 5,000 people want it, the price will go up, so that only the people who really want it, who are willing to pay the most, get it. If there are a thousand pieces of some widget and only 500 people want it, well, then the price will likely go down, because you need more people to want to buy it, and they will only buy it at a cheaper price. So we absolutely have to have money. It's basically the lubricant for managing trade-offs efficiently in any economy. I don't care if you have robots making everything and people don't have to make anything; they can't make everything we want, and so we have to have money. And where does the money come from? If you just print the money, then it's worthless. So this UBI: where does that money come from? Does it come from the robot companies? The whole idea is a circular argument. It makes no sense economically, and it doesn't make sense in terms of human nature. So what is the AI future going to be like? I don't know, and you don't either. But there are a couple of things we need to understand when we hear this, because what I see happen is that the materialistic, atheistic AI proponents project a world that they think is utopia, and people like us who are Catholic and understand human nature say, oh, that does not sound like a utopia. It sounds like a dystopia. The problem is, a lot of the assumptions made by the AI optimists are actually wrong, and the AI doomers take on those same presuppositions, those same assumptions. For example, there is this idea that one day there will be artificial superintelligence. Okay, let me make sure we're clear on the terms here. Artificial general intelligence is the idea that an AI will exist that is as intelligent as a human at basically all different aspects, all different fields of study. So, for example, an AI will be as smart as the best physicist when it comes to physics, as smart as the best chemist when it comes to chemistry, as smart as the best economist when it comes to economics, all these types of things, but not smarter than humans. That's artificial general intelligence. And frankly, we're getting there in a lot of fields of study, particularly those that are basically confined to a computer. We're getting there.
Artificial superintelligence is the idea that this artificial general intelligence will make itself better so that it becomes more intelligent than human beings, and that this recursive cycle will continue, AI making AI smarter, which then makes AI smarter still, until it becomes artificial superintelligence. This is what they mean when they talk about the singularity, where it gets so intelligent it's basically godlike, and they really use those terms. And so it knows basically everything there is to know in the universe and it can do anything. Some AI people look at that and think, oh, this is going to be great, and of course those of us who aren't insane think that would be awful. Here, let me calm you down a little bit: it's not going to happen. I just don't think artificial superintelligence is possible. The idea that man can make a machine that makes itself smarter than us has a lot of flawed presuppositions behind it, most of them having to do with a very materialistic, atheistic view of evolution. Nothing has been proven or shown that a computer can make itself smarter than humanity on some significant level by programming itself. That view is philosophically really flawed, so I don't think it's going to happen. But that's not, of course, what Elon's talking about here, and that's not the only problem. There are a lot of other problems with an AI future, dangers that come with it. I actually am relatively pro-AI in the sense that I think AI can help us a lot. And this is something else we need to understand: what do we mean by AI? AI is an umbrella term for lots of different things. Some of them are great and some of them are frankly horrifying, and I think that's what we have to understand. I see people rail against AI. Matt Walsh is on a big kick against artificial intelligence on X, and I think he really is overstating the potential problems; he's kind of gone crazy on it. But I do think there are real dangers. At the same time, there are a lot of fields of study, a lot of areas of work, in which AI can be a real help to humanity. I'm thinking of research: pulling together lots of information, which is very difficult for a human to do, and making it so we can understand it better. Data matching, pattern matching is something computers are excellent at. They do much better than humans on a volume basis. We're actually better at pattern matching than computers in a lot of ways on a micro level, but on a macro level, computers are great. So, for example, all the research done in trying to find a cure for cancer: AI could very much help with that. Or all the research that's been done on deep space astronomy missions, understanding how planets are formed and how God has created the universe, that type of stuff. There's so much information that we just can't go through it all; humans can't go through it all. So AI could be great at that. And there are a lot of things I'm fine with, like robots doing manual labor, certain types of repetitive tasks that frankly are dehumanizing. Having non-humans do dehumanizing work, I think, is great. So obviously I'm not saying all AI is bad.
I think a lot of AI applications can be very good. And the chatbots, I honestly just look at them as an evolution of search engines. They have dangers attached to them, but so did search engines. They make things very efficient for searching and for finding out information quickly. But that being said, there are so many dangers. Just one example of a danger with search engines that has carried over and become a danger with AI chatbots is misinformation. We saw this during COVID: you would search in Google for things related to COVID and they would not give you all the facts; they would make sure to point you in a certain direction. You saw that in presidential campaigns with Facebook, places like that, where they direct you away from conservative things, or Trump, or whatever. These chatbots, these LLMs, can do the exact same thing. They can really slant information. And now, of course, with AI they can really fake information. The deepfakes are getting better and better. There will come a day, I really do think this, when you will see a video of some famous person doing something awful, and you will have no idea whether or not it's real, and it will stick in your head, and you will be unable to shake that image from your mind when thinking about that person. And that person never did the thing in question, maybe beating up a kid or something like that. You're going to have that image in your brain of some famous person doing something they never did. I think that's a real danger. I wish there were a way, and maybe there will be a way, to make it so that every AI video or image was tagged somehow, so you would know for sure it's AI. Obviously there are other concerns. There are privacy concerns: people are using chatbots as therapy, putting in their deepest, darkest secrets, and that's being held by these AI companies. There's the decline in our cognitive abilities. I said just a minute or two ago that there are advantages to chatbots as search engines and such. Well, I remember the famous article from, I think, 2007 or 2008, "Is Google Making Us Stupid?" The point was that because we have Google to look things up very quickly, we don't have to do the research, and our cognitive abilities are actually declining. I think that's true. We've also seen a big decline in cognitive abilities since the introduction of smartphones, and I think AI probably will accelerate that. As we outsource our thinking, our cognitive abilities go down. This is why I am adamant that, even though I spend a lot of my work time on computers, on AI, on social media, things like that, I very much make sure I read books, physical books. Every day I'm reading some physical book, because I think it's like exercise. If you think about it, what's happening to our brains is very analogous to what's happened to our physical bodies. With the introduction of technology, by the 1960s or so, most people stopped working manual labor, physical jobs. But what that meant was that a lot of people started getting out of shape, and that's why you saw the rise of fitness programs, and things like running became very popular.
Working out at the gym became very popular starting in the 1960s, 70s and beyond. Why? Because our jobs did not give us physical labor, and so our bodies were breaking down. We recognized that we need physical exercise, though the problem is that not enough people recognize it. That's why I run, why I work out, why I try to, because my job consists of sitting in front of a computer all day. Well, the same thing is true of the brain. If you spend all day outsourcing your thinking, your brain will decline, just like your body declines if you outsource your physical labor. The brain is a muscle, just like any other muscle in your body. And what I do, what I think is the best exercise for your brain, is just reading books. There are other exercises you can do, like puzzles and things like that, but I really think, and I believe some studies have shown this, that this cognitive decline is a real danger of an AI future. Transhumanism is obviously another huge danger, where we integrate AI into the human body and particularly the human brain. Elon is literally working on this right now. Now, he's saying it's to help people who have disabilities and things like that, and perhaps it will help them; I'm not going to claim it doesn't. But there's obviously a real danger in having a chip in your brain that's controlled by something outside of your body. You might just be thinking, oh, I can just ask for information and get it very quickly. But who's to say somebody can't send information in there that could potentially control you? This is terrifying stuff. I think the transhumanism stuff is awful. And of course there's AI replacing human companionship, the companion robots. We all know where that's leading: you're some sad person living alone, you have a robot companion, you become best friends, you think this is real, and you spend all your time talking to and hanging out with a robot, with somebody who's not human. These things are real dangers. So I'm not a utopian when it comes to AI, and I'm not someone who refuses to recognize the dangers. But what Elon and those guys are promoting, the idea that we won't have jobs, that AI does everything for us, that we won't need money, I think that's just ludicrous. That's not going to happen. I do think AI will change our economy, just like computers did. Think about how different our economy is today than it was 100 years ago, 200 years ago. Computers and technology radically changed our economy, and that will continue in the future. However, the fundamental basics of economics will not change, because we as humans do not change. We are made for something infinite, yet we only have finite goods and services here on earth. So we will never be satisfied; we will always want more. I'm not saying that's good; it's an aspect of the Fall, the fact that we always want more, but it's the reality we live with. And that is why we will never have a future where work is optional for everyone and money is no longer used. Basic economic principles will still apply in the future. I've thought a lot about AI lately. I'm actually writing a book right now on artificial intelligence. The working title is The Catholic Guide to Artificial Intelligence.
Hopefully it's going to come out next year. I'm kind of waiting to see if Pope Leo says something about it. But I encourage you, when that book comes out, to read it, because I'm going to lay out exactly how AI works, what the proposed benefits are, what the dangers are, and how Catholics really should approach artificial intelligence. Also, as you might know if you've already gotten my fiction book Shard of Eden (go to shardofeden.com), artificial intelligence plays very heavily in the storyline, in the plot, of Shard of Eden. My science fiction novel just came out last month, so I've been thinking about this a lot. And I really feel like I'm not going to go with the utopian guys, and I'm not going to go with the doomers. I think both are wrong. If you look at the history of technology, it very much upends how things are done, and a lot of times it does make the world worse. But it's not the utopia or the dystopia that we want to picture. We want to picture one of two extremes, either utopia or dystopia. Neither of those has happened in the past, and neither will happen in the future. What will happen is that things will change radically. Some things will become better for humanity, frankly, and many things will become worse. But I know what will not happen: we will not all of a sudden have jobs be optional for everybody and money cease to exist. Elon's wrong on that, and I think we need to realize that. So, okay, I'm going to wrap it up there. I hope you enjoyed this. I just urge you to think about artificial intelligence with a very balanced view, a Catholic view. Don't believe the hype, either the kind that makes you very excited or the kind that makes you very depressed. Understand that AI will be incorporated in many ways into our future society, and some of it will be good and some will be bad. And as Catholics, we'll simply have to live with it and embrace a Catholic life no matter how the world around us changes. So, okay, that's it for now. Until next time, everybody. God love you.
