Political consultants are increasingly leveraging AI, but they are keeping creative control firmly in human hands.
What to Know:
- 59% of political consultants use AI weekly or more; 34% use it daily.
- ChatGPT leads all tools with 76% usage; Gemini is a distant second.
- AI is mostly used for internal efficiency, not public-facing campaign content.
- 43% of consultants say AI had only a "moderate" role in 2024; 9% called it "very significant."
- Deepfakes and misinformation are the industry’s top ethical concerns.
AI may be revolutionizing industries, but in political consulting, the adoption is more careful than chaotic. A new study conducted by 3D Strategic Research and Normington Petts, commissioned by the American Association of Political Consultants (AAPC), shows that AI has indeed entered the campaign office—just not where you might expect.
Widespread Use, Narrow Trust
According to the survey of 200 current and lapsed AAPC members, 59% use AI at least weekly, and 34% use it daily. Another 13% tap it a few times a month. Despite the growing frequency of use, the majority are not turning to AI to build persuasive content.
Source: AAPC, 2025 AI Member Survey
Instead, most consultants use AI to support internal tasks: drafting proposals, building presentations, or summarizing lengthy research. The tool of choice is overwhelmingly ChatGPT, used by 76% of respondents. Google’s Gemini trails far behind at 28%, followed by tools like Canva and Microsoft Copilot.
Source: AAPC, 2025 AI Member Survey
But even as these tools proliferate, consultants remain wary. The study found widespread concern over the accuracy of AI-generated content, as well as its potential to hallucinate facts or introduce bias.
A Supporting Role, Not a Strategic One
While early AI boosters predicted the 2024 election cycle would be a showcase for machine-assisted strategy, the reality was more restrained. Only 9% of respondents said AI played a "very significant role" in the 2024 cycle.
Source: AAPC, 2025 AI Member Survey
The rest of the responses split down the middle: 43% said AI played a moderate role in campaign strategy during the 2024 cycle, while an equal share said its role was minimal.
The study confirms that AI in politics remains largely experimental and optional. In-depth interviews with consultants revealed that most rely on the tech as a brainstorming partner or launchpad for early drafts. Others use it for time-saving automations, such as cleaning up meeting notes or structuring timelines. Few, however, trust it to generate the final product.
One respondent put it bluntly:
“ChatGPT is great for cranking out 30 bad ideas fast—and one of them might lead to a good one.”
That attitude captures the current norm: AI is not replacing human judgment; it’s accelerating the idea cycle.
The Future? Mixed and Cautious
When asked about AI’s future in the industry, 54% said it won’t fundamentally change campaigns, while 41% believe it will radically transform political consulting. The split hints at a generational or structural divide: younger consultants and digital-first firms are more likely to believe AI will reshape the industry, while legacy firms remain more skeptical.
Source: AAPC, 2025 AI Member Survey
Still, even skeptics are integrating AI for efficiency. Areas like summarizing large volumes of data, analyzing transcripts, or planning project timelines are considered some of AI's greatest opportunities. Consultants also see emerging roles in opposition research, targeting, and voter segmentation, though adoption in those areas remains limited.
Ethical Fault Lines
Despite its benefits, AI raises serious concerns. The top two challenges named by consultants were accuracy and misinformation. Deepfakes, synthetic audio, and the potential for manipulated content worry practitioners more than job loss.
Consultants generally agreed on where the ethical boundaries lie: using AI to add a flag to a campaign photo? Fine. Using it to generate fake testimonials or voices without disclosure? Not so much. The consensus is that AI belongs in non-advertising tasks and minor adjustments; producing entire AI-generated ads or making misleading edits, especially without transparency, crosses the line.
Disclosure doesn’t absolve all sins, but consultants see it as the difference between a questionable use being merely “least bad” and outright deceptive.
Wrap Up
The AAPC’s 2025 survey doesn’t paint a picture of an AI revolution—it captures something subtler and more honest: a professional class using tech to stay afloat, not take over. Consultants aren’t asking ChatGPT to write campaign strategy; they’re asking it to wrangle talking points and clean up decks at midnight.
AI in politics isn’t about robots replacing consultants. It’s about consultants surviving another 90-hour week. The risks—hallucinated data, bias, legal gray zones—are real. But so is the upside: saved hours, cleaner memos, and tighter turnarounds.
If there’s a revolution coming, it won’t be broadcast in binary. It’ll look like a junior staffer wrapping up a poll memo by 3 p.m. instead of 8. The first draft writes itself. The second one wins the campaign.
And in the arms race of time versus burnout, that’s not science fiction—it’s strategy.