Tuesday, June 24, 2025

Beyond the Hype: A Strategic Guide to Using AI in Academic Work and Education

 Summary of Key Points

  • 🤔 Which AI to Use?: For serious work, the choice is simple: Anthropic's Claude, Google's Gemini, or OpenAI's ChatGPT. Other tools are specialized or less developed.
  • 💰 Free vs. Paid: To access the most capable models required for high-stakes work, a paid subscription (around $20/month) is necessary. Free versions are essentially demos.
  • 🚗 System vs. Model: It is crucial to understand the difference between the overall system (e.g., ChatGPT) and the models it offers (e.g., a fast model like GPT-4o vs. a powerful one like o3). Always manually select the powerful model for important tasks.
  • 🔬 Deep Research: This feature, which integrates web searching to produce cited reports, is a key capability for professionals. Its output is markedly more accurate than a standard chat response and is useful for tasks like creating guides, summaries, or getting a second opinion.
  • 🗣️ Voice Mode's Real Power: Beyond conversational chat, the "killer feature" of voice mode in Gemini and ChatGPT is its ability to use your phone's camera, allowing the AI to "see" and comment on your environment in real-time.
  • 🖼️ Generation Capabilities: ChatGPT and Gemini can create images. All three systems can generate documents, code, and even simple interactive tools if prompted correctly (using the "Canvas" option in Gemini/ChatGPT).
  • ✍️ Modern Prompting: Complex prompt engineering is less important now. The key is to provide clear context (uploading files is effective) and specific instructions. Treat it as an interactive, two-way conversation.
  • ⚠️ Troubleshooting & Scepticism: Hallucinations still occur, especially without web searches or when using faster models. It is vital to verify information and remember the AI is a tool, not an oracle. Check the "show thinking" trace to understand its process.

Introduction

The rapid evolution of generative AI presents both opportunities and significant confusion for those of us in academia. Every few months, a new model or feature is announced, making it difficult to determine which tools are genuinely useful for teaching and research versus which are merely technological novelties. The discourse is often dominated by abstract fears or uncritical enthusiasm. A more grounded approach is necessary.


Ethan Mollick's recent guide, "Using AI Right Now," offers a refreshingly direct framework for navigating this environment. His analysis moves past the general discussion of "AI" to focus on the practical choices and skills required for effective use. This post will break down Mollick's key insights and translate them into a strategic plan for academics. We will move from selecting the right system to mastering its core functions and, finally, to adopting a productive and critical mindset for AI-assisted academic work. The goal is not to simply use AI, but to integrate it thoughtfully as a capable, if fallible, colleague.


Part 1: Selecting Your AI Collaborator and a Word on Investment

The first point of confusion for many is the sheer number of available AI tools. Mollick argues for simplification. For most people engaged in serious, professional work, the choice comes down to one of three core systems: Anthropic's Claude, Google's Gemini, or OpenAI's ChatGPT. While specialized tools exist for tasks like search (Perplexity) or specific agentic functions (Manus), these three platforms represent the most robust, feature-complete options for general-purpose academic tasks.

However, a critical distinction must be made, one that is often lost on casual users. You are not just choosing a single AI; you are choosing a system that contains multiple models. Mollick likens this to choosing between a sports car and a pickup truck; both are vehicles, but they serve different purposes. Each system offers at least two tiers: a fast, conversational model (like GPT-4o or Gemini Flash) and a more powerful, analytical model (like o3, Claude Opus, or Gemini Pro). The faster models are the default because they are cheaper to run, and they are adequate for brainstorming or quick queries. For any high-stakes academic activity, be it drafting a manuscript, analyzing data, developing a course syllabus, or conducting a literature review, manually switching to the most powerful model available is not just recommended; it is essential.

This leads directly to the question of cost. The free versions of these platforms do not provide access to the top-tier models. Therefore, considering the approximately $20 per month subscription fee as a research or professional development expense is a necessary step. Without this investment, you are effectively using a demo, not a professional tool. From a cost-benefit perspective, the potential productivity gains in drafting, coding, and analysis can readily justify this modest outlay, much like a subscription to a statistical software package or an academic database.

A final consideration in selection is data privacy. According to Mollick, Claude does not train its models on user data by default. For ChatGPT and Gemini, you may need to opt out of data training manually, a straightforward process that helps keep your research and communications confidential. This is a critical step for any academic handling sensitive or unpublished material.

Part 2: Mastering the Core Capabilities for Research and Teaching

Once you have selected and subscribed to a system, the next step is to move beyond simple chat queries and master the advanced features. Mollick identifies three capabilities that are particularly transformative for professional work: Deep Research, multimodal voice mode, and advanced generation.

Deep Research: This feature elevates the AI from a closed-book conversationalist to a research assistant. When activated, the AI performs web searches to ground its responses, producing higher-quality, cited reports. Mollick notes these reports often impress information professionals, suggesting a high degree of utility. For academics, the applications are immediate:

  • Literature Reviews: "Conduct a search for recent peer-reviewed articles (2022-2025) on the application of behavioural economics to digital learning environments. Summarize the key themes, methodologies, and unresolved questions."
  • Grant Proposals: "Provide a background report on current federal funding priorities for educational technology research, including recent grant recipients and their project abstracts."
  • Teaching Prep: "Develop a detailed travel guide for a hypothetical business trip to Wisconsin, focusing on the state's cheese industry, for a case study in a supply chain management course."

While not infallible, these reports are far more accurate than standard AI responses and provide a strong foundation for further work.

Voice Mode with Vision: The voice modes in the Gemini and ChatGPT mobile apps are more than just a hands-free way to talk to an AI. Their "killer feature," as Mollick puts it, is the ability to share your screen or camera. This multimodal capability opens up new avenues for real-time problem-solving. An economics teacher could point their camera at a complex supply-demand graph and ask for a simplified explanation for a high school class. A researcher could get real-time help debugging a piece of code on their screen. While hiking, one could identify plants for a biology lesson. This transforms the AI from a text-based tool into a context-aware assistant that sees what you see.

Generating Academic Outputs: All three systems can produce a wide variety of outputs beyond plain text. They can write code to create statistical analyses, interactive simulations, or simple games. To do this reliably in ChatGPT or Gemini, you must select the "Canvas" option. Claude is often capable of producing these outputs directly. The potential here is to move from describing a concept to creating a tool that demonstrates it. For example, instead of just writing about cost-benefit analysis, one could ask the AI to generate a simple, interactive HTML tool that allows students to input variables and see the outcome. This is a powerful way to create bespoke educational technology with minimal coding knowledge.
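
To make the idea concrete, below is a minimal sketch, written in Python rather than HTML for brevity, of the kind of cost-benefit calculator you might ask the AI to build for students; the function, input values, and discounting approach are illustrative assumptions on my part, not something specified in Mollick's guide.

```python
# Hypothetical sketch of the sort of teaching tool an AI can generate on request:
# a minimal cost-benefit calculator. The figures and the discounting approach
# are illustrative assumptions, not taken from Mollick's guide.

def net_present_value(benefits, costs, discount_rate):
    """Discount yearly benefits and costs back to a single present value."""
    npv = 0.0
    for year, (b, c) in enumerate(zip(benefits, costs)):
        npv += (b - c) / ((1 + discount_rate) ** year)
    return npv

if __name__ == "__main__":
    # Students change these inputs and watch how the decision flips.
    yearly_benefits = [0, 40_000, 60_000, 80_000]     # projected benefits per year
    yearly_costs = [100_000, 10_000, 10_000, 10_000]  # upfront and running costs
    rate = 0.05                                       # assumed discount rate

    result = net_present_value(yearly_benefits, yearly_costs, rate)
    verdict = "worth pursuing" if result > 0 else "not worth pursuing"
    print(f"Net present value: {result:,.2f} -> the project is {verdict}.")
```

In practice, you would ask the AI to wrap this same logic in an interactive HTML page with input fields or sliders, so students can vary the assumptions without touching any code.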

Part 3: The Art of Interaction and Maintaining Scepticism

With the right system and an understanding of its features, the final piece is interaction. Mollick's research suggests that much of the online advice about "prompt engineering" is outdated. Being polite, for instance, has no consistently positive effect on output quality. Instead, effective interaction hinges on two principles: providing context and being clear about your objective.

Rather than asking, "Write a marketing email," a more effective prompt is, "I'm launching a B2B SaaS product for small law firms. Write a cold outreach email that addresses their specific pain points around document management." For academics, this means providing the AI with the necessary context. Use the file upload feature to give it your draft manuscript, the project proposal you are reviewing, or the syllabus you are revising. Then, give it a clear role and task: "Act as a peer reviewer for the Journal of Strategic Management. Critique the attached introduction for clarity, argument strength, and contribution to the existing literature. Provide your feedback in a numbered list."

This approach reframes the interaction from a simple question-and-answer session to a collaborative dialogue. You should engage the AI, push back on its suggestions, ask for alternatives, and use the "branching" feature (editing a previous prompt to explore a different path) to refine the output.

However, this collaboration requires a healthy dose of professional scepticism. AI models still "hallucinate": they confidently invent facts and sources, and even misrepresent their own actions. The risk is lower with powerful, web-connected models, but it never disappears. Therefore, the cardinal rule is to use AI primarily in areas where you have expertise. You are the final arbiter of truth. If an AI output seems too good to be true or makes a surprising claim, it requires independent verification. The AI is a thought partner, not an oracle. It can help expand ideas and challenge your assumptions, but it cannot and should not replace your critical judgment.

Final Remarks and a Call to Action

The effective use of AI in an academic context is not about finding the perfect prompt or the one "best" AI. It is about making a strategic choice of system, committing to the premium tools, and mastering the core features that support substantive work. It requires shifting your mindset from using AI as a search engine to engaging it as an interactive, albeit flawed, collaborator.

Mollick’s advice is to spend your next hour with AI productively. I would echo that call to action for any academic colleague. First, choose one of the three main systems—Claude, Gemini, or ChatGPT—and pay the subscription fee. Second, immediately test it on a real task from your work. Do not ask it a trivia question. Give it a draft to critique or a literature search to perform, ensuring you have selected the most powerful model. Third, experiment with the multimodal voice and generative features to understand their potential for your teaching and research. The difference between a casual user and a power user is not esoteric knowledge, but the willingness to apply these powerful features to real, high-stakes work.


References

Mollick, E. (2025, June 23). Using AI right now: A quick guide. One Useful Thing. https://www.oneusefulthing.org/p/using-ai-right-now-a-quick-guide
