Episode Transcript
[00:00:10] How much is too much when it comes to using AI in creative fields like art, writing, and filmmaking? Where do we draw the line? That's what I talk about today on Crisis Point. I'm Eric Sammons, your host, editor in chief of Crisis magazine. So obviously, generative AI, i.e.,
[00:00:26] AI that creates things, that generates images, writing, film, things like that, is all the rage, and everybody's talking about it. But it's also raising questions about the morality of how we can use it.
[00:00:40] If I publish an article under my name, but it was actually generated completely by AI from a prompt of mine, is that moral? If I make a work of art using AI, where basically all I did was enter prompts, did I create that work of art? Is it moral to say that it's my work?
[00:01:00] These are the types of questions people are asking a lot right now.
[00:01:04] Now, it's interesting, because generative AI really hit the mainstream, so to speak, a few years ago when ChatGPT, built on GPT-3.5, came out in November 2022.
[00:01:20] And all of a sudden we had AI that could talk a lot like a human.
[00:01:25] And it started being able to generate images and videos that weren't too bad. Now, at first, they were more comical than anything. Like, you'd have images with the guy with six fingers or something like that, and you'd have videos that were not very good. But as we can tell, they're getting better and better.
[00:01:42] And these different AI programs are creating images that are more lifelike, more realistic. They're creating videos that are more realistic, and they're generating writing that sounds pretty darn human. Yes, I know it's not completely there yet, but one thing you have to recognize when you're talking about AI is that whatever artificial intelligence you're using right now is the worst you will use for the rest of your life, because it rapidly gets better.
[00:02:09] It's like Moore's Law. It just continues to get better and better and better.
[00:02:15] And so I do think we'll come very soon to a point where there will be videos created by AI that you won't be able to tell are AI generated. Same with images. And writing, I think, is going to continue to get better. I know there are certain tells of AI generated writing right now, but I do think over time even that's going to become less so. In fact, I just read an article, which I'll pull up later in the podcast, about what are now called humanizers: AI programs that will humanize your writing to make it sound less AI generated, which is somewhat ironic, I guess.
[00:02:51] So I do think we're going to get to that point. Now, at first when these tools came out, a lot of people really enjoyed them. Yeah, there was some creepiness to them, like with the images.
[00:03:03] And you see a lot of these videos on YouTube that are AI generated, and they get tons of views.
[00:03:10] However, I've noticed, particularly in the last few months, a backlash is growing against AI.
[00:03:19] A couple of months ago, Matt Walsh over at the Daily Wire really went on a jihad against AI on X, just talking about how it's awful, it should be banned, it's the worst. And recently Peter Kwasniewski, a friend of the program, wrote an article about the morality of taking authorship of something that was AI generated, and it was very much anti-AI. He basically said it's all awful.
[00:03:51] And so we're seeing this backlash. And so the question does become: where is the line? Can we use AI to help us with our art, with our filmmaking,
[00:04:04] with our writing?
[00:04:06] And I will say this is something of great interest to me. I admit there's a reason I'm doing a podcast episode on this, because I very much care about this, because this is my world.
[00:04:16] I am what they call a creative person. Well, I'm a math brain, so I never considered myself that. But I'm a writer.
[00:04:22] I do a lot of writing. I write books, I write articles. I'm also the editor of an online magazine, so I get article submissions every day and I have to go through them.
[00:04:33] And so this really impacts my world. Not only that, I'm writing a book right now on artificial intelligence, so this is very much at the forefront for me.
[00:04:43] In my science fiction novel that came out a couple of months ago, Shard of Eden, artificial intelligence was a big deal.
[00:04:50] Pick up the book and you'll see how.
[00:04:53] But my book on artificial intelligence, A Catholic Guide to Artificial Intelligence is the working title, comes out later this year. 2026 is my hope.
[00:05:01] I'm far along on it. But the point is this world matters to me, these questions matter to me.
[00:05:08] And in fact, I've been developing a policy at Crisis for AI generation, for how much AI help you can get in submitting an article to us. I just revealed it to my writers to let them know. I don't think I've put it on the website yet, or maybe I have, I can't remember; I'm about to put it on the website as well. I'll talk about what that policy is a little bit later in the show. But my point is, this is a big deal to me. I really care about this. And so a couple of questions come up about this. The first one, I think, is just a basic question: can AI be creative? Is artificial intelligence actually creative? I've heard a lot of people say no, artificial intelligence can never be creative, because the creative process requires you to be human, to have some type of a mind, a will, an intellect, a soul. It requires a soul.
[00:06:10] I, though, wouldn't necessarily take that point of view, because I think the word creative matters here. What do we mean by it? What's the definition?
[00:06:20] If it means we're creating something new, then I think that, yes, we can say AI is creative.
[00:06:28] We can argue that AI is creative, because it is creating something new. Now, I know the first thing people will say is, wait a minute, it's just generating things based upon all the training data it received, whether that's reading everything on the Internet and then creating a written work based on all that, or the same with artwork or videos.
[00:06:49] The problem with that objection is that's what we do.
[00:06:56] That's what we, as people do.
[00:06:58] Everything I've written is based in some way on all the things I've read.
[00:07:04] I mean, there are certain writers I admire that I may not even consciously try to imitate, but I definitely subconsciously try to imitate, somebody like a Joseph Pearce, an Anthony Esolen, a Kevin Wells, people like that. I look at their writing and I think, you know, I like that. I want to be more like that.
[00:07:23] And so even if, when I'm writing, I'm not consciously thinking, oh, let me write this like Kevin Wells would write it, they're at least in the back of my mind somewhere.
[00:07:32] That's my training data that I'm using when I write.
[00:07:36] And so in a certain sense, just about everything creative is derivative. There's nothing new under the sun. When I write something, yes, it's coming from my mind, and it's new in the sense that I'm hoping I'm saying something in a new way. I'm giving an opinion that has not been expressed exactly like this anywhere else. But it is ultimately somewhat derivative in that it's based upon all my own training data, so to speak. And that's what AI is doing. AI is taking all this training data and it's producing something. Now, I know it's not the same, and I don't want anybody to think I think AI writing is the same as human writing, but I do think the word creative could be applied to it.
[00:08:16] And some people have also argued against AI being creative because of all the slop that's created with AI. And yes, there's a lot of slop.
[00:08:25] I mean, there's a lot of slop.
[00:08:29] But the fact is, like I already said, AI is getting better at what it creates.
[00:08:37] There will come a day when the most AI-critical person, a Peter Kwasniewski or a Matt Walsh, will watch a video or read something or see an image that was AI generated, not know it is, and think, wow, this is beautiful. This is well written. This is great. This is meaningful.
[00:08:56] That's just a reality of what's going to happen.
[00:08:59] So at that point, would we say AI is creative? I mean, the definition of creativity shouldn't be just that, oh, it's good, it's well done. Because there could be somebody who's not a good artist, like me. I can't paint. I can't draw worth a lick. My oldest daughter is literally an artist; that's her profession. I have no idea how she got that gene. She had to get it all from my wife's side, because I have none.
[00:09:26] And so if I draw something, it might be horrible looking, but it would still be creative.
[00:09:32] I mean, because it's me generating this.
[00:09:35] The issue here really is that for all of human history, creativity has ultimately meant a creation creating something. Because the ultimate creator, the only true creator in existence, is God himself. He is the Creator. He's the only one who creates something out of nothing, who creates something truly new.
[00:10:00] We, however, are creations.
[00:10:03] When we create something, then we're an instrument, in a sense, of the one creator of God.
[00:10:11] We're creations who create.
[00:10:14] Well, what is artificial intelligence?
[00:10:17] It's a creation of a creation that creates.
[00:10:22] So we have God, the one creator. He creates us.
[00:10:26] We then create artificial intelligence, and then it creates something.
[00:10:32] So it's an extra step. But ultimately it still goes back to the ultimate creator, because artificial intelligence wouldn't exist if we had not created it. And we wouldn't exist if God had not created us.
[00:10:44] Now, it might sound like I'm giving an apologia for AI creations, like I love them and think they're great. No, I'm not. I think there are a lot of problems with them.
[00:10:54] But I'm more pro-AI than a lot of people who are probably listening to this podcast right now. A lot of traditional Catholics, particularly, are probably very anti-AI. I'm not.
[00:11:05] The funny thing is, my adult kids are very anti-AI. They're way more anti-AI than I am, I found out. They're just like, burn it all down. Essentially, they hate AI.
[00:11:19] I'm not like that. So it is kind of funny. Maybe it's a generational thing, that younger people are more anti-AI than some older folks like me.
[00:11:32] But the fact is, I would argue that AI can be creative. Now, it's not the same as human creativity. Like I said, there is no true soul behind it; that's the key difference. This is a big thing I talk about in my upcoming book on artificial intelligence: the fact that AI does not have a soul, and it cannot have consciousness. It cannot have sentience.
[00:11:55] It is ultimately a computer program. It is a machine creating.
[00:12:01] But if you think about it, we use tools all the time. Artists, architects, people who build tables, whatever the case may be, they often use tools that actually do a lot of the hard work of creating.
[00:12:16] And so it's a blurry line, I think, between the creativity of a machine and the creativity of the man who is telling the machine what to do. That's the other thing to recognize with artificial intelligence: it doesn't create that image until you tell it to. You have to describe it, you have to explain it. So there is always human input, at least right now, in AI "creativity."
[00:12:41] So what is the practical morality of using AI? That's the question.
[00:12:48] As you can probably imagine, I'm less anti-AI than some people.
[00:12:54] I'm less dogmatic than some on this.
[00:12:56] I mean, there's some who are like, if you use AI at all, it's basically a sin.
[00:13:02] Yet there are problems with this, because using AI is a spectrum. Take spell check and grammar check. Guess what?
[00:13:10] That's artificial intelligence. Very rudimentary, very primitive.
[00:13:14] But it's artificial intelligence. Or take a word processor: you're starting to type and it suggests how to finish the sentence. If you hit tab to accept that, does that mean you've crossed the line? You've now used artificial intelligence in your writing.
[00:13:33] And then, of course, you can go further. Or take, for example, synonyms. Here's a big one for me.
[00:13:39] Often I will reuse a word too often in my writing. My wife, when she edits my stuff, always marks this up, so I try not to make it so hard for her. I try to think of synonyms, and sometimes I'll think of a good one or two, but sometimes I just can't.
[00:13:54] So in the past, what I've always done is go to thesaurus.com and ask for synonyms for whatever word it is. I've used thesaurus.com for many years now. However, these days what I'll often do is just go to Google Gemini and say, hey, give me synonyms for this word.
[00:14:13] And I will say, the results are better than thesaurus.com's. Does that mean, though, if I use one of those synonyms, that I am now using artificial intelligence, that I've crossed a moral line, that I shouldn't do that?
[00:14:31] I don't know. I don't think so. Obviously I'm doing it. So if you think the fact that I'm using AI for synonyms means my content is AI generated and you're not going to read my stuff anymore, okay, I'm sorry to reveal this to you. But then, of course, you can go further.
[00:14:49] For example, maybe you write a paragraph and you're like, you know, I need to clean this up a little bit.
[00:14:54] And you ask an AI program, hey, clean this up for me. Edit it. Basically, you know, all my writing I give to an editor before I publish it. And my wife is my primary editor, and she has been for 20 years of my writing now.
[00:15:08] And so what if I ask Google Gemini or ChatGPT or Grok or Anthropic's Claude, hey, could you edit this, do a line edit of it?
[00:15:20] Not even rewording things, just basically grammar checks and things like that. Maybe a word isn't clear, something like that.
[00:15:28] And if I accept those suggestions, is that crossing the line? Is that doing something I shouldn't do? Is it no longer truly creative? Is it too much AI?
[00:15:38] And then, of course, you can go further.
[00:15:40] You can basically write something, an outline of something maybe, and say, hey, turn this into an article.
[00:15:48] And maybe you give it a lot of prompts: use this voice, make sure you have this perspective, here's my outline of what I want to say.
[00:15:57] Now I think a lot of us start getting a little uncomfortable. If you weren't uncomfortable before, and I wasn't, now I'm getting a little uncomfortable. Morally speaking, is this okay? Is this something I can do, or am I starting to blur that line, since it's not really created by me? And of course, you could even have just a single prompt that says, write an entire article.
[00:16:22] And I think we'll get to a point where we can do a single prompt to write an entire book. You can't do that now.
[00:16:26] You can't write a whole book with AI yet, but you can do an article. In fact, I remember experimenting with this about a year ago, maybe less. I can't remember which program I used, but I said, write an article on some topic in the voice of Eric Sammons. The reason I did this is because I know I have a lot of writing out there. I have hundreds and hundreds of articles online.
[00:16:50] I'm sure these AI programs have scanned my books so they know how I write, they know my style. They write in my style. And I will tell you, yikes. It was a little creepy because it sounded a lot like me.
[00:17:04] And I was like, holy cow.
[00:17:06] Now, I think if I ran it through a good AI detector, it probably would tell that it's not me, that it's not human generated.
[00:17:14] But you can do this now. And I think most of us would probably say, okay, now you've definitely crossed the line. So we have a number of cases here, and I don't think it's completely clear.
[00:17:27] I don't think it's a straight line saying, okay, this is moral, this is not moral.
[00:17:33] I think it really becomes a bit of a judgment call.
[00:17:40] Like, I don't think any of us would think spellcheck is immoral to use. But I think all of us would say that if you write a single prompt for your entire article and it basically generates the article, and then you call it your own, that's a problem. If you say it's AI generated, fine. But if you call it your own, that's when I think we start having some problems.
[00:17:58] But I will say, to play devil's advocate here, what about ghostwriting?
[00:18:05] Forever in the history of writing, there have been ghostwriters. This is where somebody's name is on the book, but somebody else wrote it. And the only people who typically know that are the named writer, the ghostwriter, and probably the publisher. Although I would be willing to bet,
[00:18:25] excuse me, I am willing to bet, there have been times when a book was ghostwritten and they didn't even tell the publisher.
[00:18:33] But the point is, this is an accepted practice in the publishing world. If you've read any number of books, I guarantee you've read books that were ghostwritten. I bet there's a book or two on these bookshelves behind me that were ghostwritten. Nobody thinks anything of that. Nobody thinks that's immoral.
[00:18:51] But it does involve a deception, doesn't it?
[00:18:54] You think author X wrote it, but actually author Y wrote it.
[00:19:00] That is a deception on some level, but yet we accept it as it's okay.
[00:19:05] So what if I have AI write something for me and I put my name on it? Peter Kwasniewski, in his article, which I'll link to in the show notes, says this is obviously immoral. And I get that; I kind of have the same view. But if that's immoral, why isn't ghostwriting immoral? I understand if you have a problem with it because it's written by AI; if that's your objection, fine.
[00:19:33] But I don't fully understand why you would think taking authorship of it is immoral, when people take authorship of things they didn't write all the time through ghostwriting.
[00:19:45] So again, I don't think these questions are as clear cut as we sometimes make them out to be. I'm not trying to be a moral relativist. I'm not trying to be all Mr. Nuance here, but I do think there are nuances here. So maybe I am Mr. Nuance. Call me Michael Lofton. I don't care.
[00:20:02] So these are the things I'm thinking about all the time as an editor and as a writer. And I think a lot of creatives are thinking about them, people creating artwork or writing or film or whatever. It matters to those who consume it, too. But here's the thing.
[00:20:20] I think among creatives, among your writers, your artists, your filmmakers, it matters a heck of a lot more than it does to the public, the consuming public.
[00:20:30] I honestly think most people don't care.
[00:20:34] Most people just care about the end result. They don't care what you did, the work, the hard work you did to produce it. And this hurts. This hurts. As an author, I don't like to think that, but I know it's true.
[00:20:49] Let me give an example. My book Deadly Indifference, which came out about four or five years ago now, is about the rise of religious indifference in the Catholic Church. Buy it at your local bookstore; it probably won't be there, so buy it on Amazon, or better yet at my website, ericsammons.com. I poured blood, sweat, and tears into this book. In fact, I wrote this book twice.
[00:21:15] What I mean by that is I wrote this book, a draft of it, and I didn't like it.
[00:21:20] I thought, no, I don't think I made the arguments well. I don't think this is a very good book, really.
[00:21:27] And so I scrapped it and started over. Now, I kept some sections and some arguments, but I essentially wrote it again. And so when this book was published, it really meant something to me that I had put so much effort into it, all the research and all the work I had done to get it to where it was. I was very proud of it.
[00:21:49] Yet I know if I'm being completely honest with myself, the people who read it don't care.
[00:21:56] They don't care that
[00:21:58] I wrote it twice, essentially, and put so much work into it. They don't care.
[00:22:04] They just are like, is this a well written book? Does it make strong arguments? Can I follow it? Is it clear?
[00:22:11] And does it make me think? That's all they care about.
[00:22:15] And so one day, if an artificial intelligence can write a book at that same level, I know that most people won't care. They'll just ask: is it making the points? Does it move me? If it's artwork or a video or something like that, does it affect me? That's what really matters to them. They're consumers. They consume content.
[00:22:40] Now, to all my fellow writers, artists, filmmakers, people out there, creatives, I know you don't like to hear this, and I don't like to say it, but that's simply the reality.
[00:22:51] Now, that doesn't mean I'm not going to fight against it on some level, because I do think there's areas where it matters much more than others.
[00:22:59] I don't think consumers will care as much whether fiction, or a lot of art or writing, is AI generated. But I do think they'll care about opinion pieces. Take Deadly Indifference, for example. If you knew it was AI written (it wasn't, obviously; it came out before the days of generative AI), I think it would change your opinion, because you'd say, well, this isn't actually a person's opinion. So I do think it matters for opinion pieces. But a history book might not matter at all to most people. I could definitely see AI creating a high school history textbook that schools would use and be fine with, because it has all the information they need and gives the perspective they want, and that's fine.
[00:23:41] But like, for example, the articles. At Crisis, we don't publish news stories, we publish opinion pieces.
[00:23:48] And who writes it is very important.
[00:23:51] I've noticed that certain writers will get a lot of traffic no matter what they write on, because people care about what they think. They want to know what Jan Smith thinks, they want to know what Anthony Esolen thinks, they want to know what Kevin Wells thinks. And so they read those articles because of who wrote them. If they were AI generated, I think it would matter very much. Who cares what a machine thinks about the crisis in the Church today? Who cares what a machine thinks about whether or not Trump should have abducted the guy from Venezuela?
[00:24:28] They want to know what I think. They want to know what Dr. Regis Martin thinks. They want to know what people like that think.
[00:24:38] And so at Crisis, I've been crafting a policy of what we're going to do about AI because here's the thing.
[00:24:47] I know that I've had articles submitted to me that were AI generated; I'll put it this way, I'm highly confident they were AI generated. It's possible I've even accepted some of them unknowingly. Because one thing I will say is that there are AI detectors out there.
[00:25:06] I don't fully trust AI detectors.
[00:25:10] In fact, one time I had an article submitted to me that I was kind of curious about: did AI generate this, or help out with it? So I ran it through three different AI detectors.
[00:25:21] One said 100% AI generated, like, yikes. The next one said 0% AI generated. And the third one said 90% AI generated. So what am I supposed to do? Add those three together to get 190, divide by three, and then decide it's AI if it's over some percentage?
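Just to show how arbitrary that kind of math would be, here's a rough, purely hypothetical sketch of the "average the detectors" idea in Python. The three scores are the ones from my example; the averaging function and the cutoff are made up for illustration and aren't how any real detector or editorial process works.

```python
# Hypothetical sketch: averaging wildly different AI-detector scores.
# The scores are the three results from the example above; the cutoff
# is arbitrary and only here to show how shaky this reasoning is.

def average_detector_score(scores):
    """Average a list of detector confidence percentages (0-100)."""
    return sum(scores) / len(scores)

scores = [100, 0, 90]                      # three detectors, three answers
average = average_detector_score(scores)   # (100 + 0 + 90) / 3 = 63.33...

CUTOFF = 50                                # an arbitrary "over this percent" line
print(f"Average confidence: {average:.1f}%")
print("Flag as AI?", average > CUTOFF)
```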
[00:25:39] And so I don't really fully trust AI detectors. Now, I will say they are getting better. In fact, I started using one recently that I think is very good. I ran a ton of stuff through it that I knew was not AI generated: stuff I wrote myself, and articles I knew were clean. Charles Coulombe is not using an AI generator, I guarantee that. Joseph Pearce is not. Anthony Esolen is not. So I ran their pieces through it, and they would all come back zero percent, human generated.
[00:26:07] And so what I've started doing is running every single submission I get through it, no matter who it's from, even if it's from Joseph Pearce, whose work I know isn't AI. I'm just doing it as a standard practice. It's automatic; I'm not actually taking extra steps. It tells me the confidence level the AI detector has on whether or not the piece is AI generated.
[00:26:27] And I mean, I recently got one that said 100%, it's AI generated. Now the question is, what do I do with this information?
[00:26:34] Here's the thing: I know it's not 100% accurate. And so I don't feel like it's right to just take an AI detector's word for it and reject something out of hand. This is actually a big debate right now in the school systems and the colleges. There's an article; let me pull it up here real quick.
[00:26:54] "To avoid accusations of AI cheating, college students are turning to AI." This is an article from NBC News; I'll link to it in the show notes. And it's interesting, because it says students are taking new measures such as dumbing down their work, spying on themselves, and using AI humanizer programs to beat accusations of cheating with artificial intelligence. The problem is that these schools are using AI detectors, but the detectors aren't always right. And so sometimes they flag something as AI generated and the student says, no, I know I wrote this, and I have proof that I wrote it.
[00:27:30] And the article is actually fascinating, because it talks about this give and take. Obviously there are students using AI and cheating, but then there are students who are not.
[00:27:38] And so these students are starting to use humanizer programs. They're also dumbing down their own writing: they'll write something themselves, put it through an AI detector, see that it says it's probably AI generated, and change how they wrote it. There's also a tool mentioned in the article, I can't remember the name of it, that's like a surveillance system you install on your computer to prove you wrote something. It checks things like how long you spent typing, whether you just copied and pasted it in, how long the whole thing took, stuff like that. And you can then submit that to the professor and say, listen, here's proof I actually wrote this; it's not AI generated.
[00:28:18] So I don't want to go on an anti-AI jihad against my own writers. There's a story from a couple of months ago about the art subreddit on Reddit.
[00:28:29] Its moderators were very much anti-AI, and what they would do is run any submitted artwork through an AI detector. If it said the piece was AI generated, they would not only delete your post, they would ban you permanently from the subreddit.
[00:28:46] First of all, there's an irony here: they're using an AI program to do this. That's what AI detectors are. They're AI.
[00:28:52] But one of the artists wrote to them and said, listen, I have proof that I created this. I can show you the steps and things like that. And they were like, no, there is no appeal.
[00:29:06] Well, this went viral when it happened. People were so upset that this artist was basically canceled and banned for something he actually made himself that they pushed back, and Reddit ended up removing the moderators of the art subreddit and bringing in new moderators who would not be so dogmatic about accepting the results of an AI detector.
[00:29:31] So I don't want to go down that path of a jihad against my own writers. I trust, by the way, that the vast majority of the people who've written for me in the past do not generate their articles with AI. They might get some help with it, but again, how much is too much?
[00:29:46] So what I do is run the submission through the AI detector, and I look at it as just one step in the process. I might decide not to accept an article for lots of different reasons. Maybe we already have a lot of articles at that point, maybe it's just a topic I'm not interested in, maybe it's not well written. There are all these different reasons I could reject an article.
[00:30:08] Never take offense, by the way, if I ever reject an article you submit, or anybody else does; there can be reasons an editor rejects it that have nothing to do with the quality of your work.
[00:30:17] But now there's also a new factor: what's that number, that percentage? If it says 100%, I don't think I'm accepting that; I'd better have a good reason to. But what if it says 75%?
[00:30:31] By the way, it's not saying that 100% or 75% of the piece is AI generated. It's reporting its confidence level that the piece is primarily AI generated. So it's saying it's 100% confident that this was primarily AI generated, or 50% confident, or 25% confident. What do I do with that number?
[00:30:50] If it's 50% or above, do I then reject it? If it's 25%, do I accept it? If it's 10%, I mean, what do I do?
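If I were going to turn that into a hard and fast rule, it might look something like this little sketch. To be clear, this is purely hypothetical; the function, the cutoffs, and the actions are made up for illustration, and it's not the policy I actually follow.

```python
# Purely hypothetical sketch of a rigid cutoff rule for a detector's
# confidence score (0-100) that a submission is primarily AI generated.
# The thresholds and actions are invented for illustration only.

def triage_submission(detector_confidence: float) -> str:
    """Map a detector confidence score to a rough editorial action."""
    if detector_confidence >= 75:
        return "reject unless there's a very good reason not to"
    elif detector_confidence >= 25:
        return "read closely and weigh alongside the other factors"
    else:
        return "treat as human written"

for score in (100, 75, 50, 25, 10):
    print(f"{score}% confident -> {triage_submission(score)}")
```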
[00:30:57] And honestly, I don't have a set thing I do.
[00:31:01] I basically just decide, okay, that's one factor in my equation for whether or not to accept it. I don't want AI generated articles at Crisis. I want people to know that the person who wrote an article holds those opinions, that it is their writing, their craft, their work. I don't want us to be a place that simply publishes AI generated content. But at the same time, I have to be honest.
[00:31:29] I cannot know with 100% certainty if an article had AI help and how much AI help it had.
[00:31:37] I actually don't mind if my writers use AI to help their articles be better written. I really don't, because I think that's fine. It's just like using a human editor. Not everybody has a wife like mine who can edit all their work.
[00:31:53] My writers are not rich people. They don't necessarily have the money to pay somebody to edit their work. So if they use AI to clean things up, I really don't mind. But if it gets to the point where the AI detector is saying there's a lot of AI in here, more AI than human, I really don't think our readers want to read that, because they're going to notice. And I just think at that point it has become more machine than man.
[00:32:23] And so I think these are questions we're all going to be debating and asking. So what is the morality of using AI for writing and for creative tasks like art and filmmaking?
[00:32:38] I think, first of all, you can use it to totally generate something if you want, as long as you label it as AI generated. There are videos on YouTube that are 100% AI, and everybody knows they're 100% AI, so I don't see a problem with that. You might not like it; I would like it if YouTube had a filter so that anything completely AI generated never showed up on your homepage. I would like that.
[00:33:03] But the fact is, I've seen some of them and thought, hey, that's pretty good. I didn't mind them.
[00:33:09] I understand if you don't like that. The real moral question is using AI while claiming authorship, claiming ownership. And I would just say that isn't 100% clear cut. There are some things we know for sure are not okay, some things we know for sure are okay, and some in the middle that we can have disagreement and debate about. I'm personally probably more on the "you can use AI as a tool" side than a lot of people watching or listening to this podcast. But there's obviously a line that you can't cross. You can't just simply write a prompt that says, write an article for Crisis on this topic, and then submit it. If you do that, first of all, my AI detector is going to find it, and I don't want that. So how should we use AI? Like I said, it's an ongoing question. I think we have to look at it as a tool.
[00:34:00] And I think the big thing is this: AI, like all computer tools, should be used to assist us with the mundane, the repetitive, the non-thinking aspects of our lives.
[00:34:17] It should not do our thinking for us. In fact, what will happen is if we use AI to think for us, we will stop being able to think. We will lose our ability to think. And I think that's the greatest danger of AI, not necessarily that I might read something online that was generated by AI and I didn't know it was, but That I will use AI to think for me and then I will lose the ability to think. Writing is one of the best ways to stimulate your brain and to help it to think.
[00:34:43] Artwork, painting, video production, that also stimulates the brain. If we stop doing that and just offload it to AI, we're going to get a really dumbed-down society. We're all going to become big dummies, and I don't want to see that happen.
[00:35:00] So let's not outsource the truly creative part. I talked at the beginning about what it means to be creative, and I'd say keep the truly creative part, the real thinking part of it, for yourself. Yes, it's going to be hard to write something, it's going to be hard to paint something, it'll be hard to produce a video.
[00:35:18] That's the beauty of it though. It should be hard. So I will say this one final statement.
[00:35:25] Over the past year, I've gotten more anti-AI than when I started.
[00:35:28] I actually mention in the book I'm writing that I started it, I think, in June of last year.
[00:35:34] I was more positive about AI when I started than I am now, the more I see of it.
[00:35:39] But I'm not across-the-board anti-AI either. So it's one of those things where maybe I'll do this podcast again in a year and decide that I'm completely anti-AI. That's possible. But for now, I'm willing to accept it on some level. We just have to draw those lines and make it very clear in our heads what's acceptable and what's not. Okay, I'm going to wrap it up there. I'd love to hear your thoughts in the comments about using AI. Maybe have AI generate a comment for us and see if we can tell or not. Just kidding, you don't really have to do that. Okay, well, until next time, everybody. God love you. And remember the poor.