
My Thoughts on ChatGPT

In recent months, I’ve received quite a few emails from readers expressing concerns about ChatGPT. I remained quiet on this topic, however, as I was writing a big New Yorker piece on this technology and didn’t want to scoop my own work. Earlier today, my article was finally published, so now I’m free to share my thoughts.

If you’ve been following the online discussion about these new tools, you might have noticed that the rhetoric about their impact has been intensifying. What started as bemused wonder about ChatGPT’s clever answers to esoteric questions moved to fears about how it could be used to cheat on tests or eliminate jobs, before finally landing on calls, in the pages of the New York Times, for world leaders to “respond to this moment at the level of challenge it presents,” buying us time to “learn to master AI before it masters us.”

The motivating premise of my New Yorker article is the belief that this cycle of increasing concern is being fueled, in part, by a lack of deep understanding about how this latest generation of chatbots actually operates. As I write:

“Only by taking the time to investigate how this technology actually works—from its high-level concepts down to its basic digital wiring—can we understand what we’re dealing with. We send messages into the electronic void, and receive surprising replies. But what, exactly, is writing back?”

I then spend several thousand words trying to detail the key ideas that explain how the large language models that drive tools like ChatGPT really function. I’m not, of course, going to replicate all of that exposition here, but I do want to briefly summarize two relevant conclusions:

  • ChatGPT is almost certainly not going to take your job. Once you understand how it works, it becomes clear that ChatGPT’s functionality is crudely reducible to the following: it can write grammatically correct text about an arbitrary combination of known subjects in an arbitrary combination of known styles, where “known” means the model encountered them sufficiently many times in its training data. This ability can produce impressive chat transcripts that spread virally on Twitter, but it’s not useful enough to disrupt most existing jobs. The bulk of the writing that knowledge workers actually perform tends to involve bespoke information about their specific organization and field. ChatGPT can write a funny poem about a peanut butter sandwich, but it doesn’t know how to write an effective email to the Dean’s office at my university with a subtle question about our hiring policies.
  • ChatGPT is absolutely not self-aware, conscious, or alive under any reasonable definition of these terms. The large language model that drives ChatGPT is static. Once it’s trained, it does not change; it’s a collection of simply structured (though massive in size) feed-forward neural networks that do nothing but take in text as input and spit out new words as output, as the sketch below illustrates. It has no malleable state, no updating sense of self, no incentives, no memory. It’s possible that we might one day create a self-aware AI (keep an eye on this guy), but if such an intelligence does arise, it will not be in the form of a large language model.
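
To make that statelessness concrete, here is a toy sketch of the autoregressive loop at the heart of these systems. It is an illustration only, not any vendor’s actual code: `predict_next_token` is an invented placeholder standing in for the frozen network.

```python
# Toy sketch of stateless, autoregressive text generation. The
# "model" is a frozen pure function: same input text, same output.

def predict_next_token(text: str) -> str:
    # Placeholder for the trained network. A real LLM maps the text
    # seen so far to a probability distribution over possible next
    # tokens; here we return a fixed token just to show the shape.
    return " word"

def generate_reply(transcript: str, max_tokens: int = 50) -> str:
    # The only "state" is the growing string of text itself.
    reply = ""
    for _ in range(max_tokens):
        reply += predict_next_token(transcript + reply)
    return reply

# Each chat turn re-sends the whole conversation as input. Nothing
# inside the model changes between calls: no memory, no incentives,
# no evolving sense of self.
print(generate_reply("User: Hello!\nAssistant:"))
```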

I’m sure that I will have more thoughts to share on AI going forward. In the meantime, I recommend that you check out my article if you’re able. For now, I’ll leave you with some concluding thoughts from the essay.

“It’s hard to predict exactly how these large language models will end up integrated into our lives going forward, but we can be assured that they’re incapable of hatching diabolical plans, and are unlikely to undermine our economy,” I wrote. “ChatGPT is amazing, but in the final accounting it’s clear that what’s been unleashed is more automaton than golem.”

35 thoughts on “My Thoughts on ChatGPT”

  1. This is completely untrue.

    I know one company that just fired 4 of their marketing people – their entire marketing department – because of ChatGPT. The people in charge completely understand that ChatGPT does NOT do everything by itself, and can get stuff wrong, and make mistakes. But the thing is, content writers do as well. Human content writers cannot do a Vulcan mind-meld with the owner or others who have been in the industry for 25 years.

    The situation is that the CEO can type in a few sentences and get responses from ChatGPT. Because they have expertise, they know what is bullshit from ChatGPT. They can then narrow down the scope and extend it, too, until they get what they want. And they can do this in a few hours, rather than spend hours with content writers, only for the content writers to get it completely wrong. And ChatGPT can write up 10 paragraphs in 10 seconds. And re-write and re-write in seconds, too. No, CEOs understand that ChatGPT is not right all the time, but if you read it, you can instantly tell what stays and what is not correct. But to do it yourself from scratch can take weeks. ChatGPT allows you to create dozens of renditions within the space of an hour, and mix and match them.

    No one is realistically saying ChatGPT is going to do it all, but as I said, 4 marketing people *that I know personally* lost their jobs last week because of ChatGPT, and the CEO and his team are doing it because it takes less than an hour to produce professional-quality content. People who have decades of experience know what the fuck they are about, and how to use ChatGPT to create exactly what they want, in fantastically grammatically correct output.

    • I agree with you.
      In reality, much of today’s written advertising and commentary does not require originality to satisfy customers, clients and casual readers. Thus it will eliminate a large swath of content writers and young Madison Avenue scriptwriters at advertising agencies.
      Time, place and repetition are often more important than content in improving outcomes.

    • It’s fun reading ranters who don’t do their homework or know the history of the digital workplace. Your description of the CEO playing around with ChatGPT to produce marketing material sounds just like the early-1990s noises coming from the same C-suites of yore: why do I need a secretary or accountant when I have this new toy, the PC and its software like Lotus and WordPerfect? (Actually, the CEOs got their secretaries back pretty quickly, but called them PAs instead.)

      But to the 4 fired marketers: don’t worry, the CEOs will soon tire of their new toy and you will be employed again.

  2. I do think, however, that the point about the models being static, and the harms therefore being limited, is only true on the surface, given the self-learning capability of ChatGPT-4 and how vastly each iteration improves on the last.

    I quite liked Tristan Harris and Aza Raskin’s discussion specifically on ChatGPT and the AI world: https://www.humanetech.com/podcast/the-ai-dilemma

    • I read the transcript of the podcast @Shas recommended, and it is chilling.

      If 50% of the engineers who worked on a new aircraft told you it had a 10% or greater probability of crashing, would you get on the aircraft? Well, as the podcast notes, this analogy applies to AI, and to the proportion of AI researchers who say it has a 10% chance of making the human race extinct. Anyone who thinks this is hyperbole does not sufficiently understand AI or exponentiality. At a minimum, we need to be aware of the potential economic, societal and cultural impacts of AI – the podcast uses social media as an analogue for this warning.

      I think we would all like to hear more from Professor Newport on these issues, especially given his area of expertise in the intersection of technology and culture. Focusing on the current version of ChatGPT is shortsighted and insufficient relative to the potential dangers of general AI.

      • It is so absurd for someone to say with a straight face that there is a 10% chance of this. I mean, any person who really stops and thinks about the big picture will instantly realize that the chances are, in fact, 100%. There is no way that humanity will continue to exist. Even if John Connor #1 through iteration #999,999 keep Kaczynski-ing the entire planet, that one-millionth time the luddite movement will fail, and machine intelligence will improve itself exponentially, and then nanoscale (or smaller!) intricacies we cannot yet imagine will be spreading (nay, exploding!) outward from Earth at near the speed of light, converting matter and energy into more processing power. This future is not some speculation; it is a fact. Fortunately for us hairless ape meatbags, we won’t have to worry about anything but the shortest-term or local ramifications. We will have immortality soon and can bliss out for a year or two, but … that means little, since as soon as our atoms are more useful for something else we will instead be computational dust.

        • “that means little since as soon as our atoms are more useful for something else we will instead be computational dust”

          Define “useful”. AI doesn’t really have goals of its own. What you describe could happen if runaway instrumental convergence occurs. If that hurdle is overcome, though, it becomes far less likely. Infinite paperclips aren’t useful in a world without any paper, or people.

  3. To be honest, I’m more concerned about kids and students using it to avoid studying properly or putting any effort into their daily homework than about it taking adults’ jobs.

    Some days ago I heard a group of university students talking about ChatGPT and how one girl managed to copy the code from another student and have it change the code slightly, so that it didn’t look like she copied it, and she passed the assignment or whatever it was.
    Making this tool so accessible and easy to use is concerning to me… not even putting in a little time to change it herself, which at least teaches you a little bit, will not help the girl at all… I’ve heard stories from schools too, about kids using ChatGPT to generate their essays and pass tests this way.

    I don’t know. It doesn’t look promising for young people’s future if this becomes too easy to use.

    • A better idea… and the only reasonable focus, in my opinion, when it comes to ChatGPT and education, is that it is education (and in this case, testing in particular) that needs to change. Essays are finished (unless they’re timed and monitored, which is ineffective). They’re often a waste of time anyway. Teachers and professors need to start allowing students to use AI how and when they want, and fit that into the curriculum and the evaluation process. Any other way will not work (and risks making school even less relevant to the workplace than it already is).

    • This is already happening without AI. College degrees are considered worth something due to tradition, yet also worthless when it comes to new grads, because of the amount of plagiarism and cheating that happens.

  4. Fascinating article, I found it really enlightening. It elucidates much of what I had been thinking, but with all the behind-the-scenes know-how to back it up!

  5. A good write-up, Cal. But ChatGPT and tools like it will surely have drastic effects on website traffic in the coming years? This, in turn, will affect a lot of writers’ jobs as revenue from the websites employing them drops off a cliff.

  6. The article does an excellent job demystifying the workings of large language models like ChatGPT, and it also seems quite accurate about the current state of the technology (although GPT-4, released in March, is already substantially more capable than the original ChatGPT from November).

    The unanswered question that I find particularly interesting is whether the limitations of an LLM are fundamental or transient. For example, various people are working on hacks to give LLMs more useful “working memory” like summarizing the conversation so far, and connecting an LLM to a vector database can give it access to more detailed and accurate information on a specific domain.
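
    To make those hacks concrete, here is a rough sketch of the vector-database retrieval pattern I mean. It is only an illustration under assumed names: `embed`, `vector_db`, and `llm` are hypothetical stand-ins, not any specific library’s API.

    ```python
    # Rough sketch of retrieval-augmented prompting: bolt an external
    # "memory" onto a static LLM by retrieving relevant passages at
    # query time and pasting them into the prompt.

    def answer_with_context(question, embed, vector_db, llm):
        # 1. Embed the question and fetch the most similar stored passages.
        query_vector = embed(question)
        passages = vector_db.search(query_vector, top_k=3)

        # 2. Include the retrieved text in the prompt, so the frozen model
        #    can draw on domain facts it was never trained on.
        context = "\n".join(passages)
        prompt = f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
        return llm(prompt)
    ```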

    More fundamentally, is there a level of sophistication at which “just predicting text” is indistinguishable from what we consider human-level writing that represents real manipulation and creation of ideas?

    As far as taking or not taking jobs, in my opinion that whole framing is misleading. LLMs will take over various tasks and make the people who do those tasks much more productive. Imagine the same number of software developers now writing 5-10X as much code. That level of productivity will cause parts of the economy to be reconfigured. Maybe an LLM won’t “take” your job as a copywriter, but maybe there will be less demand for copywriters overall, which will affect your job.

  7. Even if we accept the premise in this post, the issue is not limited to the current capabilities of ChatGPT. The bigger issue is the trajectory we are on, and future versions of ChatGPT. And this post does not address the risks of exponentially more capable AI generally, beyond ChatGPT. While AI may not have “motivations” of its own, nefarious actors can surely direct it in very harmful ways…

    • Yes, this is a much more legitimate and pressing concern. Not the ability of the AI to think for itself, but it being used by malicious actors to cause harm.

  8. I’ve been a writer and editor for 56 years, and I don’t fear ChatGPT. It’s a step forward, and a welcome one, in computers taking on the scut work that attends any endeavor, including communication. I feel that ChatGPT will ultimately free us to focus on – and profit greatly from – the important factors that are outside of ChatGPT’s abilities: true, unfeigned customer care, inspired art, and the inner qualities that are indispensable for success in every area: kindness, compassion, service, refined feelings, etc. Maybe now schools will recognize the utter folly of teaching to the test.

    • > I’ve been a writer and editor for 56 years, and I don’t fear ChatGPT.

      That’s because you’re ready to retire, or retired. It’s the currently employed youngsters and up-to-40-somethings who should and do fear it.

  9. I enjoyed the article and getting to understand a bit better how predictive language models work.

    I think the impact on jobs really depends (for now, at least) on the type of job.

    My personal worries (again, for now, at least) are as follows:

    (1) The internet is going to get even worse – less reliable, less useful, more full of content designed to distract and sell – now that the cost of producing it has gone from near-negligible to zero. At least until some sensible regulations are put in place, which I am tentatively hopeful for, if it gets bad enough that people revolt.

    (2) The spread of effective bespoke propaganda will grow exponentially.

    (3) People might become even more absorbed in their devices. In at least one sense, it doesn’t matter if an algorithm has consciousness, if people believe it has consciousness. And even though my own interactions with the model have been disappointing and rather uninteresting, I can’t help relating to it as if it were a person.

  10. One more thing ChatGPT (and its like and descendants) cannot do: it can’t take responsibility for anything. To use your dean’s-office email example: it cannot take responsibility for the correctness of the information provided, nor responsibility (legal, social, etc.) for the consequences of providing that information. You can’t fire, or punish, ChatGPT if it gets something wrong and causes someone to get fired or harmed. A lot of jobs have elements of responsibility (sometimes explicit, often implicit) that simply cannot be outsourced to technology.

  11. Good summary, Cal, and it pretty much mirrors my (a data scientist’s) take on ChatGPT.

    That ChatGPT is not a mind, not self-aware, and fundamentally little more than an (impressively) sophisticated statistical association machine is completely correct, and that is something which is unlikely to change in the near future.

    I think, as the first comment here mentions, ChatGPT has the potential to do some things faster than humans have typically been able to do them. But in this it is no different from the vast majority of other productionized technologies that have scaled other endeavours, like manufacturing.

    I’ve yet to see ChatGPT do anything more impressive than compile pre-existing (human-created) content in a way that impressively correlates with a given input.

    – ChatGPT cannot create new information
    – it cannot make decisions; it can only make associations based on probabilities
    – it would not exist and/or would not work without the immense database of pre-existing information across the world wide web
    – at best it can do what Jefferson pointed out in the first comment: impressively synthesise information based on an input, which will, at best, make content creation by actual humans easier and more efficient by minimising otherwise laborious and time-intensive repetitive tasks like information gathering

    However, it’s also in the back of my mind that AI technology has consistently surpassed the expectations of most naysayers, often much faster than anyone anticipated. I hesitate to say that anything related to technology is ‘impossible’ or ‘a long way into the future’.

    However, I’m still very skeptical that truly self-aware, ‘sentient’ AI is at all possible. At the very least, true general AI is still on a completely different plane of human knowledge and understanding than what currently exists. Making statistical modelling software increasingly powerful is not the same as bringing it any closer to the ability to know of its own existence, or to think, reason and moralise about actions the way humans can without any real effort. That is something that, if at all possible, requires real breakthroughs in many other spaces of human knowledge than just statistics.

    • @Geoff – AI cannot create new information, but it can create new knowledge. Think of it this way: the world has natural laws, and we as humans have not created or changed any of them. It is only our learning and making new connections, layering on top of past knowledge and achievements, that advances the technologies and products we have been able to develop. Our achievements come from our improved understanding of what already exists. So now AI can do that, exponentially faster. It does not need to create new information to create new knowledge, the same way we did not need to create new scientific laws – we just had to discover what was already there, by making new connections and building on existing knowledge. AI can absolutely do that, infinitely better than we can.

  12. I totally agree with Cal’s summary on ChatGPT – it’s not a mind, it’s not self-aware, and it’s essentially a fancy statistical machine. But let’s not underestimate its potential! As a data scientist, I do believe that ChatGPT can help us do certain tasks faster than we could on our own. Just like other technological advancements that have scaled industries like manufacturing, ChatGPT has the potential to make content creation easier and more efficient by minimizing repetitive tasks like information gathering.

    But let’s be real – ChatGPT has its limitations, as explained on https://chatgpt4online.org/. It cannot create new information, make decisions, or think on its own. Its abilities are based on probabilities and associations derived from an immense database of pre-existing information on the web. At best, it can impressively synthesize information based on input, which is still a remarkable feat, but not quite the same as true creativity or decision-making.

    While I’m skeptical about the possibility of self-aware AI, I also acknowledge that technological advancements often surpass our expectations much faster than we anticipate. Who knows what breakthroughs may come in the future that could bring us closer to the development of true general AI? But until then, let’s continue to utilize ChatGPT for what it does best – impressive statistical modeling and content synthesis.

  13. In Professor Newport’s New Yorker article, he writes the following, in the context of assuring us that ChatGPT won’t eliminate our jobs:

    “Much of what occurs in offices, for example, doesn’t involve the production of text, and even when knowledge workers do write, what they write often depends on industry expertise and an understanding of the personalities and processes that are specific to their workplace. Recently, I collaborated with some colleagues at my university on a carefully worded e-mail, clarifying a confusing point about our school’s faculty-hiring process, that had to be sent to exactly the right person in the dean’s office. There’s nothing in ChatGPT’s broad training that could have helped us accomplish this narrow task.”

    This is no comfort at all – why couldn’t ChatGPT also be deployed in office environments, and train itself on exactly the type of “hyper-local” writing that takes place between employees?

  14. Unfortunately, this is patently false. A quick look at the papers and studies being released on GPT-4 shows memory and improvement – or, self-learning in practice. What you’re describing is narrow AI. Also, GPT-4 is an order of magnitude more powerful than ChatGPT. Everyone I notice referencing ChatGPT at this point seems unaware of these findings and advancements, and is only responding, with some delay, to the buzz and provocative chatter. I hope you’ll dig up some of these papers – or I highly recommend the YouTube channel ‘AI Explained’ for a brief but thorough analysis. Completely educational.

  15. ChatGPT may do a fine job of finding and spitting out collated data, information and opinions that people have spewed out into the world of the internet, but my current impression is that it is unlikely to do a good job of replacing people who think for a living. There are too many nuances in human language, behaviour, and interactions, and too many factors other than simple data that can go into decisions and thought processes, whether conscious or unconscious.

    Recently, someone I know asked ChatGPT what dating advice Jane Austen would give. In a few seconds a list of 10 or 12 pointers was returned, and my friend was quite pleased with the result. Personally, I wasn’t impressed at all. While the points were valid and seemed like good advice for most people, they were so generic anyone could have written them. I have read all of Jane Austen’s books as well as a book of her personal letters, and the ChatGPT answer was nothing like Jane Austen. The language (i.e., the choice of words and how they were put together in sentences) was wrong, as was the context, and Jane’s intelligent and witty personality was completely missing. It seems to me that companies that are hasty in replacing employees who use their brains for knowledge, analysis, negotiating, communicating, problem solving, etc. may regret the decision.

    • @Judy – There is no comfort in today’s limitations. You need to understand that AI is improving at an exponential rate. Even AI researchers admit to not knowing enough about what the models are capable of. Don’t allow blind spots to develop because of current limitations, which are going to be corrected more quickly than you can imagine.

  16. I believe the constraint on content creation has shifted elsewhere. If it is now easier to create better content faster, then regular content creation is no longer “the hard part”.

    It is, however, harder to create original creative content, which is a significant change in an important metric. As YouTube and TikTok users discovered, it became easier to create high-quality videos on those platforms, which moved the challenge to marketing, content, choice of topics, etc.

    Another example is email: as more people used email, we became flooded with it. This shifted the focus to techniques for filtering email, to face-to-face meetings, and to other communication and marketing tools.

  17. It is interesting and thrilling how ChatGPT occupies everyone’s mind. Everyone from writers and copywriters to platforms like Studybay that assist students with their papers is discussing the issue. But I think that real professionals won’t be left without a job. The person who really has amazing skills and knowledge is always wanted. That is why we don’t need to waste our nerves on it, but keep improving ourselves.

  18. “Ever since ready-made dinners have been invented, they have replaced chefs.” Sure… but the chef profession still exists, right? Processed food certainly hasn’t made the profession go away.

    Of course you can say that learning to cook is good for your physical health. What about mental and social health? It’s clear that AI is the junk food of mental and social health. It will be there for convenience, but nothing more than that.

    If a competitor copies your AI-generated content, can you claim ownership of it? We need to keep in mind that AI is a gigantic plagiarism machine built off the copyrighted works of millions.

    We don’t appreciate thieves who steal art and webnovels to sell on Amazon, so anyone who relies on AI is going to be ousted quickly. Maybe some people don’t care and will purchase whatever seems good enough, like the endless plagiarized T-shirt designs. However, for anything more than T-shirts and junk food, I think the majority do care about plagiarism and theft.

    Just because GPT-4 sounds more human, or keeps evolving to the point that it can replace anyone and everyone, still doesn’t address copyright and ownership. And anyway, who defines what “sounds human”?

    If your subscription to the AI data-center is canceled, are you going to fold because you’ve relied on AI for so long? These AI companies are going to charge more, given how much money it takes to maintain them.

    I think people will evolve their slang and jargon to escape the oppressive conventions of digitized and analyzed speech that has been encoded into countless hard drives. If your marketing content sounds too much like AI, we will ignore it even more.

    No one needs to learn to cook anymore. No one needs to learn how to write marketing pieces. If you’re less experienced than an AI, you’re not worth investing in. Yeah, right. Have fun expanding your business and gaining adoption with just you, a team of nerds, and an AI (I don’t mean you directly, just in a general sense).

    Maybe it’s sad that content writers lost their jobs, but in my experience, content writing is soul-sucking and mostly irrelevant drivel anyway. They’ll find something better. For now, it’s tasty and new and shiny, but the gut feeling that AI is junk food will never go away.

  19. Some examples of GPT-4 experimentation I’ve seen lately (from a software developer’s POV):

    1. “Examine the code I’ve written and make suggestions on improvements.” GPT-4 returned 15 suggestions. Most were laughable, several were wrong, but two suggestions were useful and had not been considered by the developers. The conclusion was that this was a useful exercise for skilled developers, but a beginner (junior developer) would not have understood which suggestions were useful.

    2. GPT-4 can be used to write a first draft of software documentation. This is an onerous task for programmers who often lack good writing skills. This is a huge time saver.

    3. As others have noted, GPT-4 can be used to generate marketing materials although they tend to be rather dull. This is useful for a first draft but a clever human would be needed to make it palatable. I would not want to live in a world in which all marketing was done by GPT-4.

    4. The new Google AI system can be used to analyze error messages and assist in debugging programs. Many of the suggestions are obvious but it would be useful for a student or a junior developer. I suspect this will improve greatly as new versions are released.

    5. The elephant in the room is likely to be the next generation of general AI. Currently most of the tools are session-based (you converse with the tool but the tool does not examine your history, personal data, etc). Imagine a general AI that has access to all your personal online data, location history, contacts, images and videos. And we know that the guardrails are not in place yet for a totally safe and ethical experience.

  20. “2. GPT-4 can be used to write a first draft of software documentation. This is an onerous task for programmers who often lack good writing skills. This is a huge time saver.”

    This is a narrow but useful task.

  21. Unfortunately, employers believe otherwise. Professional writers in my network are already feeling the brunt. I’ve been told by a few people that they’re either being asked to produce 10x more or their work has dried up, with ChatGPT given as the reason.
    While I believe you when you say that ChatGPT won’t completely eliminate writers’ jobs, some employers sure are trying to do just that. And even if they still need us, we would be reduced to checking the hallucinations of AI.
    It’s going to be a rough road for workers in the next few years.

  22. It’s certainly true that the emergence of ChatGPT started a heated debate. ChatGPT can be extremely helpful when you need to generate writing ideas or edit a text, the bot can do it in seconds, leaving you plenty of time for other tasks. But it is impossible to fully entrust ChatGPT with researching or writing texts due to the risk of misleading information, outdated results, and lack of originality as explained on https://custom-writing.org/blog/chatgpt-academic-writing.

