A (Somewhat) Meta Analysis of the Impact of Deep Research AI Tools... and a bit about what we should do about it.

Preview
Research, thought work, and specialized knowledge are now accessible through drag-and-drop tools. To better understand these tools, I adjusted my usual research methodologies for the blog you’re currently reading.
For background, I love research—it’s a huge part of my writing process. I typically examine at least 20-30 pieces to find usable references. To get there, I have to sift through many more. I enjoy this process because it lets me explore different facets of a topic. The downside? It’s labor-intensive. There’s a lot of culling and evaluating sources. Ironically, I almost never use other blogs, opting instead for academic and news-based resources. For this piece, I fed the following prompt to the free trial version of Google Gemini’s Deep Research tool:

I want to understand the implications of tools like deep research on society. With a particular focus on the sociopolitical ramifications of such tools within a society that offers premium wages for thought work. What scientific and academic studies have explored the potential impact of these tools on work and work culture? I also want to understand what impact these tools will have on labor costs overall. please include information about how companies are currently adopting these tools and what impact it is having on the current workforce.

Candidly, this is a lazy prompt. It's overly vague, filled with jargon, and lacks constraints. I wanted to simulate how a non-AI practitioner might query the product. The results were fairly impressive. 

From a UX perspective, I appreciated that the reasoning parameters were transparent. Gemini explained, point by point, how it interpreted the prompt and applied that interpretation to its research. It also provided a full list of sources used to explore each facet of the topic.

It examined far more blogs than I typically would, layering those with academic sources and sites demonstrating industry expertise—like SHRM, an HR platform it used to assess labor impacts. If you're interested, the tool produced a full research paper, shared below.

Honestly, it's a good read—accessible and informative. I’ve highlighted the elements that most interest me and significantly inform this piece. I’ve also added commentary to key sections to show how I interpreted the curated research and synthesized it into something new. 

But this text—the one you're reading now—isn't about the topic I prompted. It's about how we respond to the creation of these tools and how we integrate them into our lives. Hence, the meta approach. The goal here is to illustrate how to incorporate AI-generated outputs into novel, human-directed work, just as I would with “manually derived” research.

An Evolve-or-Perish Moment

Learning to use these tools goes beyond rote skills like "how to prompt" or "how to train." It’s fundamentally about how to critically interpret outputs and build upon them. While tools may democratize access to thought work or Ph.D.-level research, the operator still needs to discern, engage, and critique. Human capabilities like discernment, critique, and intellectual synthesis are irreplaceable and remain largely undemocratized.

If your understanding of a topic is surface-level, these tools might help you fake expertise for a while. But in the long term, you’re proper fucked. When research is handed to you, superficial insight no longer cuts it. You’re now expected to extend beyond what’s readily available and generate real value from your perspective.

Crucially, you must cultivate the ability to critique and bring a critical valence to your prompting. While sources shape the perspectives AI presents, there’s often an inherent positive bias. Reckoning with ideas curated by tools like these isn't just about asking the right questions. It's about the human intellectual task of dissecting and synthesizing. In work contexts, critical thinking remains the most potent form of human magic, not only to generate usable outputs but to build something new upon them.

Fortifying Our Economy in the Face of Technological Disruption

Usman Sheikh, a must-follow on LinkedIn, recently posted a valuable take on this topic in response to OpenAI’s unveiling of its $20,000/month agentic research product. I’ve included it here, so there’s no need to rehash his excellent analysis. 


That said, a bit of context:

The U.S. economy has been unpredictable over the past three months, recalibrating across several fronts. We've seen a frozen job market, widespread uncertainty, and high inflation, all while both public and private sectors push for layoffs and invest deeply in digital labor.

While I’m not apolitical, my focus here is the impact of technology on political and social norms. This space strives to offer an apolitical analysis of potential policy responses to the conditions outlined above.

As a society, we must make hard choices about how we want the economy to function in this new era. As practitioners and experts, we need to help non-SMEs imagine new economic paradigms as old norms collapse. It’s socially irresponsible to leave these decisions in the hands of a few powerful, self-interested parties.

We’ve faced such moments before—and we can learn from how we responded.

What We Can Learn from FDR

If you remember high school history, a few key facts about the New Deal likely stand out:
  1. It marked the end of laissez-faire economic policies and embraced Keynesian principles, focusing on government spending and fiscal stimulus.

  2. It birthed the modern labor movement. The Wagner Act guaranteed workers the right to organize, shifting power dynamics between employers and employees.

  3. It didn’t end the Great Depression (WWII did), but it forged the modern economic era.

It was a moment of reckoning when we recognized that the old world was dying and began building the foundation of a new one. We find ourselves in such a moment again.

As then, we need strong regulation and improved labor protections. There's also a compelling case for strengthening the social safety net and exploring solutions like universal basic income, though these ideas remain far from de rigueur.

Instead, we're making deep cuts where we might need reinforcements. Again, this is not a political argument but a call to reconsider whom our economy is designed to serve as we design ever more powerful AI systems. No one is asking for a welfare state, just a focused recalibration of existing structures in light of what's coming...well, what's here.

Defining Cognitive Protections

Unsafe at Any Speed gave us seatbelts. Silent Spring gave us the EPA. The Jungle gave us food safety laws. Each revealed dangers society had previously ignored.

Since the rise of social media in the early 2000s, we've struggled to enact meaningful policy around algorithmic systems. We are, metaphorically, pantsless in the face of AI's commercialization.

Despite mounting evidence that prolonged use of AI tools can degrade cognitive abilities, we still lack consensus on how to constrain or refine their use. The market’s brute-force approach has led to widespread labor cuts and unfocused applications that rarely deliver on AI’s promise of human augmentation.

We’re failing to define these systems, and we’re not establishing effective norms for human–AI teaming. The problem is systemic, compounded by market pressure to “do something” without clearly defined paths to value.

The rise of deep research tools threatens the very institutions that might help solve this problem, institutions like consultancies and academia. Yet these same institutions remain our best existing mechanism for guiding organizations and individuals toward equitable, value-driven solutions. If they’re to survive, they must evolve beyond surface-level guidance.

Deep research tools give consultancies a chance to streamline and operate with leaner teams. But there’s a brief window during which they must elevate those within their ranks who are capable of systems-level thinking and let go of middling talent. Business as usual will lead to inevitable decline.

AI is no longer "on the horizon." We're not “experimenting” anymore. We are evolving with it. Every sector of society must adapt—or perish.


Key Takeaways

  • The era of superficial thinking is over. Easy-to-use research tools have shifted expectations of what qualifies as thought work. Critical thinking is now a prerequisite. Deep knowledge and the ability to synthesize information into original perspectives are baseline skills in today’s labor market.

  • We can look to the past for inspiration on how to fortify our economies in the face of a new order. Society cannot ignore this task, as the disruption to come will be significant and wide-ranging. 

  • Consultancies, academic institutions, and policymakers must swiftly restructure to guide more of society through this transformation. If they don’t, they’ll face extinction.


Disclaimer: The opinions expressed in this blog are my own and do not necessarily reflect the views or policies of my employer or any company I have ever been associated with. I am writing this in my personal capacity and not as a representative of any company.

This article was edited with the help of Editorial AI.
