6 Myths About AI in Recruiting
Recruiters often believe myths that limit how they use AI.
Let's debunk some common misconceptions.
Why Read This Guide?
AI has evolved rapidly, leaving little time for people to fully understand its fundamentals.
Yet, there is pressure to implement AI in recruiting, causing confusion and uncertainty.
This guide aims to demystify common myths around AI in recruiting, empowering recruiters to make informed decisions.
6 Myths covered in this guide:
1. AI makes biased decisions
2. AI is keyword-driven
3. Newer models are always better
4. Our private data will leak to the model providers
5. AI's output can't be trusted
6. AI products are all alike
Myth 1: AI makes biased decisions
Fact: LLMs are not trained on recruiting-specific data to select or reject candidates. They are trained on general language and knowledge, and they can be instructed to follow the logic you define.
1. You have a lot of control
You are free to define your own rules in plain English.
2. Careful prompting can reduce bias
Give the AI clear instructions to avoid bias of any kind.
3. Omit sensitive info to reduce bias further
When sending data to the AI, redact personal details such as gender, race, religion, and country (see the sketch below).
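As a minimal sketch of point 3, here is how sensitive fields might be stripped from a candidate record before it reaches a model. The field names and the record shape are hypothetical; adapt them to your own ATS data.

```python
# Sketch: remove sensitive fields from a candidate record before it is
# sent to an LLM. Field names here are hypothetical examples.
SENSITIVE_FIELDS = {"name", "gender", "race", "religion", "country", "date_of_birth"}

def redact(candidate: dict) -> dict:
    """Return a copy of the candidate record without sensitive fields."""
    return {k: v for k, v in candidate.items() if k not in SENSITIVE_FIELDS}

candidate = {
    "name": "A. Example",
    "gender": "female",
    "country": "Exampleland",
    "skills": ["Python", "SQL"],
    "experience_years": 6,
}

# Only job-relevant attributes reach the model.
print(redact(candidate))  # {'skills': ['Python', 'SQL'], 'experience_years': 6}
```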
Myth 2: AI is keyword-driven
Facts:
LLMs read words to infer meaning.
You can use different words and LLMs will still understand, because they work from your intent, not from exact keywords.
But you can insist on keywords too.
If you do want specific hard skills or keywords, you can still ask the AI to look for them; it's just not mandatory (see the sketch below).
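To illustrate, a single screening instruction can describe intent in plain English and, only if you want, insist on exact keywords. The role and skills below are hypothetical, not a recommended template.

```python
# Sketch: a prompt that describes intent in plain English and optionally
# insists on exact keywords. Role and skills are hypothetical.
required_keywords = ["Kubernetes", "Terraform"]

prompt = (
    "You are screening resumes for a DevOps role.\n"
    "Judge whether the candidate shows hands-on infrastructure experience, "
    "even if they describe it in their own words.\n"
    "Additionally, check that these exact skills are mentioned: "
    + ", ".join(required_keywords) + "."
)
print(prompt)
```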
Myth 3: Newer models are always better
Fact: Most models have reached a point where they are good enough for many recruiting tasks.
Size ≠ effectiveness
Smaller models can excel at specific tasks.
Task-specific optimization
Well-defined goals trump model size.
Engineering matters
Beyond the model and the prompts, a lot depends on how the feature is designed and implemented.
Myth 4: Our private data will leak to the model providers
Facts:
1. LLMs only learn during their training
Providers like OpenAI let you opt out of having your data used to train future model versions.
2. Providers may filter personal data you send
Some providers apply automated scrubbing to personal data, but for absolute control you should avoid sending personal information to the models in the first place.
3. Your data is remembered only during "a conversation"
A conversation stores your messages to give contextual results; starting a new conversation starts everything from scratch (see the sketch below).
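To see why a new conversation starts from scratch: with chat APIs, the model's only "memory" is the list of messages you send with each request. A minimal sketch, assuming the OpenAI Python SDK (the model name is a placeholder):

```python
# Sketch (assumes the OpenAI Python SDK): a model only "remembers" what is
# in the messages list sent with each request.
from openai import OpenAI

client = OpenAI()
conversation = [{"role": "user", "content": "Summarize this resume: ..."}]

reply = client.chat.completions.create(model="gpt-4o-mini", messages=conversation)
conversation.append({"role": "assistant", "content": reply.choices[0].message.content})

# Continuing the conversation means resending the accumulated messages.
conversation.append({"role": "user", "content": "Now list the top 3 skills."})

# A new conversation is just a fresh list; nothing carries over.
fresh_conversation = [{"role": "user", "content": "Summarize this other resume: ..."}]
```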
Myth 5: AI's output can't be trusted
It's true that LLMs are tuned to be "creative" by default. But:
1. Your prompts can prevent this to a large extent
Giving specific instructions and setting the "temperature" parameter to 0 will reduce errors (see the sketch after this list).
2. Use good prompting techniques
Techniques like "Chain of Thought" reduce hallucinations.
3. Never rely on AI blindly
Never auto-reject candidates based on AI alone. Spot-check the AI's results manually, randomly and regularly.
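A minimal sketch of points 1 and 2, assuming the OpenAI Python SDK; the model name and prompt wording are placeholders:

```python
# Sketch (assumes the OpenAI Python SDK): temperature=0 removes sampling
# randomness, and asking for step-by-step reasoning (chain of thought)
# helps reduce hallucinated conclusions.
from openai import OpenAI

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    temperature=0,        # deterministic-leaning output
    messages=[{
        "role": "user",
        "content": (
            "Does this resume show 5+ years of Java experience? "
            "Think step by step, quoting the exact lines you rely on, "
            "then answer YES or NO.\n\nResume:\n..."
        ),
    }],
)
print(response.choices[0].message.content)
```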
Myth 6: AI products are all alike
It's not just about "a model and a prompt". Different AI products have:
Different goals
Each AI product may have specific design goals.
Different AI implementations
How data is stored and processed, how prompts are written, and how AI components are combined creates a big difference in the end product.
Different product engineering
How data is collected, stored, and processed, and how the UX is designed, also changes the results.
Different AI beliefs
Each vendor may weigh AI vs. humans, safety vs. speed, and quality vs. quantity differently, which can greatly impact your results.
This guide is shared with the "Gen AI Recruiters" community
Make your recruiting career "AI-proof". Join the community.
Initiative by
Hire skilled tech talent. Without the unproductive work.