AI policy needs: principles, boundaries, expectations grounded in respect for people
Artificial intelligence has officially left the “future of work” category and moved squarely into the “already at work” reality.
Employees are using AI tools to draft emails, summarize meetings, screen resumés, create marketing copy, write code, analyze spreadsheets and handle many other tasks. Whether leaders realize it or not, AI is already showing up in daily work across most organizations.
That alone is reason enough for companies to have an AI policy. When organizations fail to set expectations, employees will create their own rules — often with the best intentions, but with an uneven understanding of risk, ethics, privacy and fairness.
An AI policy is not about shutting innovation down or slapping wrists. It is about creating clarity, consistency and trust. From an HR lens, it is also about protecting people. That includes employees, candidates, customers and leaders.
Just as organizations learned the hard way that social media policies were necessary, AI policies are now a basic governance tool. Hoping people will “just use common sense” has never been an effective risk-management strategy.
One of the most important reasons organizations need an AI policy is to define responsible use.
AI tools are powerful, but they are not neutral. They are trained on human-created data, which means they can reflect and amplify bias. If an organization is using AI to screen resumés, assess performance, predict turnover or support decision making, there are serious human implications.
HR professionals are trained to think about fairness, discrimination, accommodation and unintended consequences. That perspective is critical when determining where AI can be used, where it should not be used and where human oversight must always be present. (Spoiler alert: when AI affects people, humans should always be in the loop.)
From an HR standpoint, an AI policy should clearly state that AI supports decision making but does not replace accountability. Leaders still own hiring decisions; managers still own performance conversations.
AI does not get to be blamed for poor judgment, even if it came with a very confident recommendation. AI can help write a performance review, but it cannot sit across the table and explain it with empathy. It cannot read the room, understand context or know when to pause. HR understands that people management is not just about efficiency; it is about relationships.
Privacy and confidentiality are another major reason policies are essential.
Many AI tools require users to input information, and employees may not realize that uploading a document or pasting text could mean that information is stored or used to train future models. From an HR lens, this raises immediate red flags. Employee data, compensation details, medical information, investigation notes and proprietary business information should never be casually dropped into a public AI tool.
An AI policy helps employees understand what types of information are off limits and why. It also reinforces the organization’s broader commitment to data protection and trust.
HR involvement is also crucial when it comes to transparency. Employees deserve to know when AI is being used in ways that affect them. If AI tools are being used to screen resumés, analyze engagement survey comments or flag attendance patterns, that should not be a secret. A thoughtful AI policy encourages openness rather than fear.
HR professionals are skilled at change management and communication. They understand how to explain new tools in ways that reduce anxiety and build buy-in. When people feel informed rather than monitored, adoption tends to go much more smoothly.
Another reason HR must have a seat at the table is training. An AI policy is only as good as the organization’s ability to support it.
HR understands learning curves, skill gaps and the reality that not everyone has the same comfort level with technology. Some employees will jump in enthusiastically; others will quietly avoid it or use it incorrectly. A responsible AI policy should be paired with education that explains not just how to use tools, but how to use them well. That includes understanding limitations, checking outputs for accuracy and applying human judgment before relying on results.
There is also a performance management angle that cannot be ignored. If employees are using AI to complete parts of their work, organizations need to think carefully about expectations. Is AI use encouraged, optional or restricted in certain roles? How does this affect productivity standards, quality expectations or skills development?
HR can help ensure AI does not quietly erode learning opportunities or create unrealistic performance comparisons. The goal should be to enhance human work, not to create a workplace where everyone feels they are competing with an algorithm that never needs coffee.
Equity and accessibility are another area where HR’s voice is essential.
AI can be a powerful tool for inclusion when used thoughtfully. It can help remove barriers, support neurodivergent employees, assist with language translation and improve access to information. At the same time, poorly designed or poorly governed AI can disadvantage certain groups.
HR professionals are trained to think about accommodation, systemic barriers and inclusive design. An AI policy that reflects those values sends a clear message about what kind of workplace the organization is trying to build.
From a leadership perspective, an AI policy also protects the organization’s culture. Culture is shaped by what is rewarded, what is tolerated and what is ignored. If AI is used to cut corners, avoid conversations or depersonalize people management, culture will shift accordingly.
HR can help frame AI as a tool that supports thoughtful leadership rather than replaces it. The policy can reinforce expectations around ethics, respect and professionalism, even when technology is involved.
Finally, involving HR in the creation of an AI policy is about credibility.
Employees are far more likely to trust a policy that clearly considers human impact rather than one that reads like a legal disclaimer wrapped in tech jargon. HR professionals know how policies land in the real world. They understand how employees actually behave, not how we wish they behaved. Their input helps ensure the policy is practical, humane and aligned with how work actually gets done.
An AI policy does not need to predict every future development or tool. It needs to set principles, boundaries and expectations grounded in respect for people.
Organizations that take the time to do this thoughtfully are sending a message that innovation and responsibility can co-exist. From an HR lens, that is not just good governance; it is good leadership.
Tory McNally, CPHR, BSc., vice-president, professional services at TIPI Legacy HR+ (formerly Legacy Bowes), is a human resource consultant, strategic thinker and problem solver. She can be reached at tmcnally@tipipartners.com