
🚦What your editor wants you to know about ChatGPT🚦


By now, we’re all probably more than a little sick of talking about ChatGPT. I know I am, and it’s what kept me from addressing the topic in any real detail so far, even as (in light of my job) virtually everyone I know has been thoughtful enough to ask me for my opinions on it. The truth is, those opinions are far from novel or groundbreaking: I believe it’s very cool, kind of scary, and unlikely to be revolutionary in exactly the ways we expect.


But even though my opinions of ChatGPT itself aren’t really worth writing about, I would like to answer those folks who have asked for my thoughts, as an academic editor, on how to use this tool effectively.


That’s why I’ve created this brief “traffic light” guide to some best practices around using generative AI in the context of academic writing, specifically. 


First, let’s walk through the very real dangers underlying the main point I want to make here: 

At this point, I see no benefit in using ChatGPT to generate anything that your career is riding on. 


Dangers


It’s not good enough for academic writing

ChatGPT generates responses by predicting what text is statistically likely to follow your prompt, not by checking facts; this means that while it can provide quick answers to simple questions, it certainly can’t be relied on to provide accurate answers to complex ones. As a result, it’s really not suitable for use in academic research or other high-stakes contexts where accuracy is essential (it has a well-documented problem with generating fake citations in particular).


It damages your credibility 

As journals scramble to develop AI policies and decide how they’re going to cope with this new reality, using ChatGPT to write any kind of submission can severely harm your academic credibility. It might not always be that way—and personally, I’m not even convinced that it should always be that way. But for the time being, you’re playing fast and loose with your scholarly reputation if you use AI-generated writing, especially without disclosing it. Note that academics aren’t just using tech like the GPT-2 Output Detector to monitor their students’ submissions—they’re using it on their fellow academics’ work too, with embarrassing results. At this stage, the modest payoff of outsourcing your vital academic writing to AI (more on that below) is not worth this risk.


It’s working with pretty poor material

In order to help you with your academic writing, ChatGPT will draw on the massive pool of existing academic writing. The problem? The majority of academic writing is terrible. Emulating it isn’t going to make your work stand out; on the contrary, you may well find the writing suffers from more problems than if you just hammered it out yourself.


It may weaken your creative and analytical muscles

Picture this: you enter a wall-sitting competition against people who routinely wall-sit for an hour a day. But in this scenario, you haven’t done a single wall-sit since high school gym class. Who’s likely to win this grueling contest? 


My fear for academics who rely increasingly on generative AI is that they’re seriously underestimating (a) how necessary the writing process is to critical and creative thinking, and (b) how competitive the current academic landscape is. You need strong, well-practiced “muscles” in designing, conducting, and communicating high-quality research if you want to keep doing this as a career. In short, it’s a concerning possibility that while you spend precious time and energy figuring out all the cool things that ChatGPT can do, your competitors are honing the valuable skills that ChatGPT can never help with—and getting a leg-up from AI with all the less important stuff (see the “green light” points below) so they’re even further ahead.


It poses serious ethical concerns

I’m sure I don’t need to elaborate on this point as it’s been eloquently made thousands of times already. Ethical and legal considerations abound regarding the use of generative AI, especially regarding its inherited assumptions and biases.


Those are the outright dangers to keep on your radar. But even in cases where ChatGPT is indeed the perfect tool for the job, there are some limitations worth noting too.


Limitations


It’s semi-creative but sloppy

ChatGPT would make me a lot more nervous if I were a copywriter rather than an editor, for one reason: it’s admittedly very good at synthesizing and regurgitating content, but it’s incapable of producing polished, effective content. And for that, we can largely thank English and the myriad contexts in which it’s used. It’s likely that AI will never be able to outperform an experienced editor in the English language (knowing this, however, did not make it any less satisfying for me to see fellow editor Adrienne Montgomerie pit three forms of AI against a human NYT copyeditor: total knockout!).


Again, I might be worried if we were all increasingly working and thinking in a tidily constructed, exception-free language like Esperanto or something, but we’re not. We’re dealing with English. And I’m betting that English will continue its 1400-year(ish) streak of being the fastest-changing language on the planet and defying all attempts to regulate or codify it.




And really, it’s not even that creative

All generative AI is, of course, ultimately drawing on information that already exists. Hence, if your goal is to stand out, engage your readers, and make pioneering contributions to your field, you’re severely hampered if you rely on ChatGPT. In my experience, it’s actually a lot harder to wade back into “meh” content and try to insert some brilliance than it is to just roll up your sleeves and do the challenging work that produces brilliance in the first place.


It’s far from hands-off or easy (if you care about the outcome)

ChatGPT needs careful and specific instructions to do even a passable job. In fact, the instructions need to be so careful and specific that, for anything critically important, using it to create more than an outline or a sample to work from probably won’t save you much time (especially once you dive back into the AI-generated content to fact-check and edit it, which you’ll definitely need to do).





“Okay,” you might be thinking, “So in your view, are there ANY good uses for ChatGPT?” 


Absolutely! Here are some of what I consider its most promising capabilities in the context of academic writing.


Promising possibilities


It can sum up what's already out there and highlight gaps

This might be the most effective way for academics to leverage ChatGPT: the technology excels at collating the research that already exists on your topic. For younger researchers in particular, this could be a game changer. Identifying those gaps and questions in your field that are still crying out for answers can take a tremendous amount of time and energy—or almost none at all, with ChatGPT.


Example prompt: Can you tell me what we still don't know about [insert your topic or field]?


Alternatively, you can ask ChatGPT a specific research question that you've considered investigating, then fact-check its answer. If the answer is solid, the question's already been answered. If it's nonsense, however, you've got a promising lead: the fact that ChatGPT had to fabricate an answer suggests there's little existing work for it to draw from, and hence the question may well make for a great study.


It can streamline information and options in an age of information overload

Many activities that take time away from your academic writing—like managing your inbox, developing personalized recommendations about anything you’re deciding on, or getting the gist of non-critical text that you don’t have time to read in detail—are where ChatGPT shines. Anything that reduces your cognitive load so you can put more brain power toward your most important academic work is a good idea, so I’d recommend familiarizing yourself with generative AI for this purpose, if no other.


It can drastically reduce time spent on filler, unprofitable writing

Based on what I see among my clients, it seems to me that most universities and funding bodies are asking far too much of researchers these days. The sheer amount of writing that has to be done for no other purpose than to tick an administrative box provides a great opportunity to lean on generative AI. Properly prompted, ChatGPT can put together perfectly suitable emails, summaries, responses to student questions, feedback on non-critical work, minor review rebuttals, assignment questions, and many other forms of writing that are often seen as detracting from academics’ most impactful work. That said, I’d never skip the vital step of at least scanning ChatGPT’s work before sending.


It can help less confident writers level the playing field

If you’re a quick reader and writer in English, employing ChatGPT in the two areas above will still yield tangible benefits. But if you’re slower or less confident in English, those benefits will be exponential. A common concern that I hear from ESL and neurodivergent clients is the fear of using the “wrong tone” in their correspondence. Whether they’re writing to a co-author, a reviewer, a student, a publisher, or someone else, these authors can be very nervous about inadvertently coming across as confrontational, dismissive, pushy, harsh, or obsequious; “Can you make sure I don’t sound rude?” is one of the top requests I get in this regard. I don’t think it’s fair that these authors should have to spend more of their time (or more money on an editor like me) than their fellow researchers simply to control for subtleties of tone. For that reason, I suggest that they give ChatGPT a shot next time they’re struggling to find the appropriate words in a scholarly or personal exchange. For example, try out or tweak these sample prompts:


“I need to decline an opportunity that doesn’t align with my research goals. Can you please draft me a kind but firm response passing on the chance to speak at a conference in July?”


or


“Someone disagrees with my research question. They think it has no merit. Can you please craft a polite response asking them to be more specific?”


It can help you get started or unstuck

One of the biggest problems I see across writers who believe that they’re “not good writers” is a lack of a safe space within their own minds for their writing. In other words, they’re so critical of their writing that the whole task has become, for them, tainted with feelings of inadequacy and even despair. Getting started can therefore feel like the first and hardest step in what promises to be a massive struggle. If this resonates with you, I’d recommend using ChatGPT as a kind of structural helper to take some of the doom and angst out of starting a paper. Here’s how:


Prompt: I'm writing a research paper on ______ [overall topic, e.g., gender and work-related injuries]. I have found that ______ [your main finding(s), e.g., men experience more work-related injuries than women]. I argue that the reason for this finding is ________ [your hypothesis/argument, e.g., because men pay less attention to safety information and are more likely to feel pressured to perform unsafe tasks], which is based on _________ [the theory you draw on, if any, e.g., social theories of gender and risk perception]. Can you please provide me with an outline for my research paper?


I tried this prompt myself with a few different topics and theories inserted in the appropriate blanks and got pretty decent first outlines. 


Note that ChatGPT didn’t actually write a single sentence of the paper or give me any ideas—which is a good thing since, as noted, it lacks expertise and true creative ability. But importantly, it gave me a clear structure. And structure, like the banks of a river, is often the key to getting our creative juices flowing along with momentum (vs. stagnating in a swamp of overwhelm). 


Similarly, ChatGPT is great for providing examples or clarification when you get stuck on a writing-related question that you’re embarrassed to bring to another person (you shouldn’t be embarrassed, of course, but we’ve all been there!).


Example: “I'm not sure I understand what a great conclusion does aside from repeat my findings. Can you give some examples?”


or


“What makes a paper’s title interesting or catchy? Can you give me some examples?” Etc.



The final thing this editor wants you to know about ChatGPT is that there’s certainly no shame in using it and no need to shun it entirely. Generative AI is firmly a part of our lives now, and I don’t see any real benefit in wishing things were otherwise. At the same time, I’d strongly recommend saving it for these “promising possibilities” and other fun uses rather than making it a co-author on your important publications.



-Catie Phares

