How to Assess AI Literacy Across Your Organization
Practical framework for measuring AI literacy at your organization. Learn what to assess, how to assess it, and how to use results to guide training and adoption.
You’ve decided your organization needs to build AI literacy. But where do you start? What does “AI literate” actually mean for a copywriter versus a developer versus an account manager?
The first step is assessment. You can’t improve something you don’t measure.
This guide walks you through how to assess AI literacy across your team. You’ll identify skill gaps, understand readiness levels, and know exactly where to focus training effort.
What AI Literacy Is, and Why It Matters
AI literacy isn’t the same as AI expertise. You don’t need your copywriter to understand transformers or how to fine-tune models.
AI literacy is the combination of:
- Conceptual understanding (what AI is, what it can and cannot do)
- Practical skills (how to use AI tools in your specific role)
- Critical thinking (how to evaluate and improve AI output, when to use it and when not to)
A copywriter with AI literacy knows:
- How LLMs work at a basic level and what their limitations are
- How to write effective prompts for content generation
- How to evaluate generated copy and improve it
- When to use AI for drafting versus when to write from scratch
- How to maintain their voice and approach while using AI
A developer with AI literacy knows:
- How AI code assistants work and what their limitations are
- How to use them effectively for code generation and debugging
- How to evaluate generated code for security and efficiency
- When to use AI versus when to code from scratch
A project manager with AI literacy knows:
- How AI tools can help with project tracking, client communication, and documentation
- How to use them without creating extra work
- When AI output is ready to use versus when it needs review
The common thread: understanding, using tools effectively, and thinking critically about when and how to apply them.
Without assessment, you treat everyone the same. With assessment, you understand what each role needs and where gaps are.
The Four Dimensions of AI Literacy
AI literacy breaks into four dimensions. You’ll assess all of them:
1. Conceptual Understanding
Does your team understand what AI is, what it can and cannot do, and how it works at a basic level?
Example questions:
- What is an AI model, in your own words?
- Why might an AI give you a bad answer?
- Can AI replace your job?
- What’s the difference between using AI for brainstorming versus using it for final deliverables?
Conceptual understanding is the foundation. Without it, people treat AI as magic or a scam, not as a tool with specific strengths and limitations.
2. Tool Competency
Can your team actually use AI tools? Do they know what tools exist and how to use them in their role?
Example questions (for each major tool):
- Have you used [tool] before?
- Can you write a prompt that would help with [specific task in their role]?
- How would you improve an output that wasn’t quite right?
- What are three ways you could use this tool in your job?
Tool competency is about hands-on ability. It’s learnable and usually develops with 4-8 weeks of regular practice.
3. Integration and Workflow Thinking
Can people see how AI fits into their actual workflows? Do they think about AI as part of how they work, not as an extra thing?
Example questions:
- Walk me through your typical week. Where would AI save you time?
- How would you integrate AI into [specific process they do regularly]?
- What’s the biggest bottleneck in your work? Would AI help with it?
- What would need to change about your workflow to use AI effectively?
Integration thinking is where adoption gets real. It’s not “I know how to use ChatGPT.” It’s “I’ve redesigned my workflow to include AI.”
4. Critical Evaluation and Quality Judgment
Can people evaluate whether AI output is good enough? Do they understand when to trust it and when to review it carefully?
Example questions:
- When would you use AI output without review?
- When would you need to heavily edit or rewrite AI output?
- What kinds of mistakes does AI commonly make in [their domain]?
- How do you know if an AI output is high quality for your purposes?
Critical evaluation is what separates “people who use AI” from “people who use AI effectively.” It’s especially important for client-facing work.
The Assessment Framework: What to Measure
You’ll assess these four dimensions using a combination of methods:
Method 1: Self-Assessment Surveys
Ask people to rate themselves on a scale.
Template: “For each statement, rate yourself 1-5 (1 = strongly disagree, 5 = strongly agree)”
Conceptual Understanding:
- I understand how AI models work at a basic level
- I know what AI can and cannot do effectively
- I can explain to a client why we use AI for certain tasks
- I understand the risks of using AI (bias, hallucinations, data privacy)
Tool Competency:
- I have used [Tool X] in my work
- I can write a prompt that produces useful results for my role
- I know which tools are best for different tasks
- I can troubleshoot when an AI output is not what I need
Integration:
- I have identified ways AI could save me time in my actual workflows
- I’ve changed how I work to include AI
- I use AI regularly (at least weekly) in my role
- I see AI as a core part of how I’ll work, not a nice-to-have
Critical Evaluation:
- I can evaluate whether AI output is high quality for client work
- I know when I can use AI output as-is versus when I need to review/edit
- I understand the limitations of AI in my specific domain
- I can explain to someone why a particular piece of AI output does or doesn’t work
Self-assessment surveys are quick and easy, but people aren’t always accurate about their own skills. Use them as one data point, not the whole picture.
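If you run the survey through a form tool, a small script can roll the responses up by dimension. Here’s a minimal sketch in Python, assuming hypothetical response data; the names, field keys, and ratings are illustrative, not tied to any survey platform:

```python
from statistics import mean

# Hypothetical survey responses: one dict per person, with the four
# 1-5 ratings for each dimension. Names and values are made up.
responses = [
    {"name": "Alice",
     "conceptual": [4, 5, 3, 4], "tools": [3, 4, 2, 3],
     "integration": [2, 2, 3, 2], "evaluation": [3, 3, 4, 3]},
    {"name": "Ben",
     "conceptual": [3, 3, 4, 2], "tools": [2, 2, 1, 2],
     "integration": [1, 2, 1, 1], "evaluation": [2, 3, 2, 2]},
]

DIMENSIONS = ("conceptual", "tools", "integration", "evaluation")

# Per-person average score in each dimension
for person in responses:
    averages = {d: mean(person[d]) for d in DIMENSIONS}
    print(person["name"], averages)

# Team-wide average per dimension, to spot the weakest one
for d in DIMENSIONS:
    print(f"{d}: {mean(mean(p[d]) for p in responses):.1f} / 5")
```

The team-wide averages are usually the more useful output: they tell you which of the four dimensions to weight in training, while the per-person numbers feed the individual conversations.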
Method 2: Practical Exercises
Actually have people use AI tools and demonstrate competency.
Example exercises by role:
For copywriters:
- “Write a prompt that would generate a first draft of a LinkedIn post about [client topic]. Then run it and show me what you got. Then tell me what you’d change about the output.”
For designers:
- “Show me how you’d use an AI image tool to explore three different concepts for [design challenge]. Walk me through your prompts and what you’d use this for versus what you’d hand-sketch.”
For developers:
- “Use an AI code assistant to solve this [specific code problem]. Show me how you’d verify the code before using it in production.”
For account managers:
- “Use AI to draft a client communication about [situation]. Walk me through your approach and what you’d change about the output.”
Practical exercises show actual competency, not just self-perception.
Method 3: Workflow Mapping
Ask people to walk through their actual workflows and identify where AI could fit.
Process:
- Have people map their typical week or month
- For each key workflow, ask: “Where does this take the most time? Where do you get stuck?”
- Ask: “How could AI help with this step?”
- Assess whether they can see the AI opportunities and understand how to integrate them
This shows integration thinking and how well they understand their own work.
Method 4: Peer or Manager Assessment
Get input from people who work with them regularly.
Questions for peers/managers:
- Is this person using AI in their work?
- Do they ask good questions about how to use AI effectively?
- Have you noticed changes in their work or speed based on AI use?
- What would help them adopt AI more?
Peer/manager input gives you real-world observation, not just self-report.
How to Run an Assessment: The Practical Process
Step 1: Define What “AI Literate” Means for Each Role (Week 1)
Before you assess, get clear on what you’re assessing toward.
For each major role at your organization (copywriter, designer, developer, account manager, etc.), define: “What does it look like when someone in this role is AI literate?”
Example: For a copywriter:
- Conceptual: Understands that AI is a draft tool, not a finished product
- Practical: Can write prompts that generate useful outlines and first drafts
- Integrated: Uses AI for 30% of first drafts, maintains own voice and brand voice standards
- Critical: Can evaluate copy quality and knows when to edit heavily versus use as-is
Write this out for each role. It becomes your rubric for assessment.
Step 2: Run a Baseline Assessment (Week 2)
Use the four methods above to get a baseline of where your team is.
Survey (10-20 minutes per person):
- Quick self-assessment using the statements above
- Can usually be done async
Practical exercises (30-60 minutes per person):
- Demo of using an AI tool relevant to their role
- Can be done in small groups or individually
- Usually takes 2-3 hours total across the team
Workflow mapping (30 minutes per person):
- Quick conversation about their typical week
- Can be part of a team meeting or done individually
Manager input:
- Ask managers to fill out a quick assessment of each person (15 minutes per person)
Total time to complete baseline: 6-12 hours for a 10-person team. Doable in a week.
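As a sanity check on that estimate, here’s the rough arithmetic as a sketch. It counts facilitator time only (surveys run async), and the per-method figures are assumptions drawn from the ranges above:

```python
team_size = 10

# Facilitator hours per method, using the figures above.
# Surveys are async self-assessments, so they cost the facilitator ~nothing.
exercises = 2.5                  # practical exercises in small groups (2-3 h total)
workflow = team_size * 30 / 60   # 30-minute workflow-mapping conversations
managers = team_size * 15 / 60   # 15 minutes of manager input per person

total = exercises + workflow + managers
print(f"~{total:.0f} facilitator hours")  # ~10 hours, inside the 6-12 hour range
```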
Step 3: Analyze Results and Identify Gaps (Week 2-3)
After assessment, analyze what you learned:
- By person: Who’s advanced? Who’s just starting? Who’s resistant?
- By role: Do copywriters have different skill gaps than developers?
- By dimension: Are people strong on conceptual understanding but weak on integration?
- By barrier: What’s holding people back? (No tools? No time? Confidence?)
Create a simple summary. Example:
“10-person team assessed:
- 2 people (20%): Advanced (using AI regularly, integrated into workflow)
- 4 people (40%): Intermediate (using tools, some workflow integration)
- 3 people (30%): Beginning (tried AI, not yet regular use)
- 1 person (10%): Resistant (hasn’t tried, skeptical)
Gap analysis:
- Conceptual understanding is strong (80% understand basics)
- Tool competency is moderate (60% can write a basic prompt)
- Integration is weak (only 20% have integrated AI into workflows)
- Critical evaluation is mixed (40% confident in quality judgment)
Biggest barrier: People don’t see how AI fits their specific work.”
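If you record the assessments in a spreadsheet, generating that summary is simple tallying. A sketch, assuming each person’s record carries an overall level label and per-dimension “meets the bar” flags (both hypothetical field names, not a prescribed schema):

```python
from collections import Counter

# Hypothetical records: one per person, with an overall level label and
# per-dimension "meets the bar" flags. Structure is illustrative.
team = [
    {"level": "Advanced",     "conceptual": True,  "tools": True,  "integration": True,  "evaluation": True},
    {"level": "Intermediate", "conceptual": True,  "tools": True,  "integration": False, "evaluation": True},
    {"level": "Beginning",    "conceptual": True,  "tools": False, "integration": False, "evaluation": False},
    {"level": "Resistant",    "conceptual": False, "tools": False, "integration": False, "evaluation": False},
]
n = len(team)

# Distribution of levels, like the example summary above
for level, count in Counter(p["level"] for p in team).most_common():
    print(f"{count} people ({count / n:.0%}): {level}")

# Share of the team meeting the bar on each dimension
for dim in ("conceptual", "tools", "integration", "evaluation"):
    print(f"{dim}: {sum(p[dim] for p in team) / n:.0%} meet the bar")
```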
Step 4: Design Targeted Training Based on Results (Week 3)
Don’t give everyone the same training. Design training based on what you found.
Example:
- For advanced people: Skip basic training. Offer “advanced use cases and workflow integration” workshops. Consider making them peer educators.
- For intermediate people: Focus on integration and workflow redesign.
- For beginners: Focus on conceptual understanding and basic tool competency.
- For resistant people: One-on-one conversations about concerns, not group training.
Targeted training is way more effective than generic training.
Step 5: Re-assess in 8-12 Weeks (After Training/Adoption Period)
After you’ve done training and given people time to practice, assess again.
Use the same methods (survey, exercises, workflow mapping, manager input). Compare to baseline.
You should see improvement across all dimensions. If you don’t, adjust your approach.
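To see whether you’re actually moving, compare the two snapshots dimension by dimension. A minimal sketch, assuming team-average survey scores on the 1-5 scale at baseline and at the re-assessment (the numbers here are made up):

```python
# Hypothetical team-average survey scores (1-5 scale) at baseline and at
# the 8-12 week re-assessment. Values are illustrative.
baseline = {"conceptual": 3.8, "tools": 2.9, "integration": 2.1, "evaluation": 2.7}
followup = {"conceptual": 4.1, "tools": 3.6, "integration": 3.0, "evaluation": 3.2}

for dim, before in baseline.items():
    after = followup[dim]
    delta = after - before
    note = "" if delta > 0 else "  <- flat or declining: adjust the approach"
    print(f"{dim}: {before:.1f} -> {after:.1f} ({delta:+.1f}){note}")
```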
Sample Rubric: AI Literacy Levels
Here’s a simple rubric you can adapt for your organization. Use it to describe where people are:
Level 0: No Experience
- Has not used AI tools
- Uncertain what AI can do or how it works
- No integration into workflows
- May be skeptical or fearful
Level 1: Aware but Not Practicing
- Has heard about AI, maybe tried it once
- Basic understanding of what AI is
- Not using it regularly
- Hasn’t integrated it into workflow
- May be waiting for training or a clear opportunity
Level 2: Basic Competency
- Using AI tools, but inconsistently (maybe weekly or monthly)
- Can write simple prompts and get useful results
- Beginning to see where it fits in their work
- Quality judgment is developing (sometimes trusts output, sometimes needs heavy editing)
- Still treating AI as separate from “real work”
Level 3: Intermediate Competency
- Using AI regularly (multiple times per week)
- Can write good prompts and evaluate/improve output
- Integrated AI into one or two workflows
- Clear understanding of what AI is good for in their role
- Thinking about expanding use cases
Level 4: Advanced Competency
- AI is part of normal daily work
- Experimenting with new use cases and tools
- Helping teach others
- Can evaluate quality critically and adjust prompts accordingly
- Thinking about impact on client work and delivery
- Potentially advocating for workflow changes to optimize AI use
Most organizations’ goal is to get the team to Level 3 (intermediate) within 6 months of starting. Some people will be Level 4 naturally; that’s fine.
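If you track assessments in a script or spreadsheet, the rubric reduces to a few observable indicators. Here’s one way to encode it as a sketch, assuming hypothetical inputs (whether someone has tried AI, how often they use it, how many workflows they’ve integrated it into, and whether they teach others). The cutoffs mirror the level descriptions above, but the exact thresholds are judgment calls:

```python
def literacy_level(has_tried: bool, uses_per_week: float,
                   integrated_workflows: int, teaches_others: bool) -> int:
    """Map observable indicators to a rubric level (0-4).

    The inputs are hypothetical proxies for the level descriptions
    above; calibrate the thresholds to your own organization.
    """
    if not has_tried:
        return 0  # No experience
    if uses_per_week < 1:
        return 1  # Aware but not practicing
    if integrated_workflows == 0:
        return 2  # Basic competency: using tools, no workflow integration yet
    if teaches_others and integrated_workflows >= 3:
        return 4  # Advanced: AI is part of daily work, teaching others
    return 3  # Intermediate: regular use, one or two workflows integrated

print(literacy_level(True, 0.25, 0, False))  # 1: tried it, not regular
print(literacy_level(True, 4, 2, False))     # 3: regular use, some integration
```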
Real Organization Example: Assessment in Action
Organization: 16-person creative shop
Week 1: Define targets
For copywriters:
- Level 3: Using AI for 25-30% of first drafts, can evaluate quality, maintains brand voice, integrated into proposal process
For designers:
- Level 2-3: Using AI for concept exploration, can prompt and evaluate, understands limitations
Week 2: Run assessment
- Quick surveys: 20 minutes per person
- Practical exercises: Ask each person to write a prompt in their area and show the result
- Workflow mapping: Ask each person “where does AI help you most?”
- Manager input: Managers rate each person on current AI use
Results:
- Copywriters: 1 Advanced, 2 Intermediate, 2 Beginner
- Designers: 1 Intermediate, 3 Beginner
- Developer: 1 Advanced
- Account managers: 3 Intermediate, 2 Beginner
- Operations: 1 Intermediate
Gap analysis: Designers and account managers are furthest behind. The copywriting team is strongest. Most people understand AI conceptually but haven’t fully integrated it into daily work.
Week 3: Design training
For designers:
- Run a 90-minute workshop on AI image generation
- Have the advanced person in the company do a 30-minute session on their process
- Pair each designer with an advanced user for weekly practice sessions for 4 weeks
For account managers:
- 60-minute workshop on using AI for client communication
- Provide templates and example prompts
- Weekly office hours for questions
For copywriters:
- Advanced workshop on workflow integration and using AI at scale
- Have them share their processes with designers and account managers
Week 5-12: Continuous practice and reinforcement
Weekly office hours, celebrating wins in Slack, and integrating AI into ongoing projects.
Week 13-14: Re-assess
Re-run survey, practical exercises.
Results:
- Designers: 2 Intermediate, 2 Beginner+ (up from 1 Intermediate, 3 Beginner)
- Account managers: All 5 Intermediate (up from 3 Intermediate, 2 Beginner)
- Copywriters: 1 Advanced, 2 Intermediate, 2 still building toward Intermediate
Not everyone moved multiple levels, but clear progress across the board. Biggest wins: practical skill development and integration starting to happen.
Common Assessment Mistakes to Avoid
Mistake 1: Relying only on self-assessment
People are bad at knowing their own level. You need external observation (manager input, practical exercises).
Mistake 2: Assessing too early
Give people at least 2-3 weeks of access to tools and training before assessing. Early on, you’re assessing potential and willingness, not mastery.
Mistake 3: Assessing only tool use, not understanding
You can use a tool without understanding it. Someone might get okay results without knowing why. Assess both.
Mistake 4: Not reassessing
A one-time assessment doesn’t drive improvement. Assess, train, reassess. Do this multiple times.
Mistake 5: Labeling people
Avoid saying “Janet is Level 2.” Instead, say “Janet is developing integration skills” or “Janet is strong on conceptual understanding but needs practice on tool use.”
Labels stick; skills change.
FAQ
Q: How often should I assess?
A: Baseline once before training. Then re-assess every 8-12 weeks for the first year. After that, annually is usually enough, plus spot checks as you introduce new tools.
Q: Should assessment be formal or informal?
A: Start informal (conversations, watching people work). Formalize enough that you have consistent data across the team (survey, exercises). You don’t need a big formal process.
Q: What if someone scores low on assessment?
A: That’s the point of assessment. Low scores tell you where to focus training. They’re not judgments; they’re data points.
Q: Should assessment be tied to performance reviews?
A: Not directly, but AI literacy can be an input to performance conversations. “You’ve made great progress on AI skills this year” is positive feedback. Eventually, AI literacy becomes a baseline expectation (like being able to use email or your CMS).
Q: What if someone refuses assessment or training?
A: That’s a conversation. Understanding why (fear, skepticism, practical barriers) gives you insight into what needs to happen. Most refusal comes from a specific concern that you can address.
Bringing It Together
Assessment is the prerequisite for effective AI literacy building. You can’t improve what you don’t measure.
The assessment framework is straightforward: measure conceptual understanding, tool competency, integration, and critical evaluation. Use surveys, practical exercises, workflow mapping, and manager input. The whole thing takes 6-12 hours for a small team.
Once you have baseline data, you know exactly where to focus training effort. Different roles need different training. Different people are at different starting points. Assessment lets you be smart about investment.
Re-assess every 8-12 weeks to measure progress and adjust your approach. Over 6 months, you should see movement from “just starting” to “integrating AI into workflows” across most of your team.
If you want a comprehensive assessment of your team’s overall AI readiness (not just literacy, but strategy alignment, tools, workflows, culture, and more), the Agentic Readiness Audit includes a detailed team skills and literacy evaluation. We assess each role at your organization, identify skill gaps, and help you build a targeted development plan.