Steph Marsh, an employment law specialist and head of the employment team at Coodes Solicitors, discusses the use of artificial intelligence in recruitment and HR processes.
Artificial intelligence (AI) is rapidly transforming recruitment and HR processes across the retail sector, with a third of UK companies expecting productivity gains from its adoption. However, the technology’s widespread use is creating significant operational, ethical and legal risks that many fashion retailers are unaware of or unprepared for.
For childrenswear businesses facing seasonal recruitment challenges and high staff turnover, the business case for AI in recruitment appears particularly compelling. Retailers are drawn to its ability to streamline candidate selection for store staff, warehouse operatives and head office roles, automate repetitive tasks, and extract insights from large volumes of applications during peak hiring periods.
AI-powered platforms can screen CVs, match applicants against job descriptions, schedule interviews, and generate candidate reports in a fraction of the time and cost associated with traditional methods. Some retailers now use AI screening tools to conduct preliminary interviews virtually, entirely through AI bots.
Although AI can streamline candidate selection, poorly managed systems can perpetuate discrimination, breach data protection laws, and expose businesses to claims. These are risks that could prove particularly damaging for consumer-facing brands in the children’s sector, where reputation and trust are paramount.
The discrimination risk
Algorithms trained on biased data sets, which typically reflect unrepresentative cohorts or historical inequalities, can amplify discriminatory outcomes. This presents a particular concern for retailers seeking to build diverse teams.
The most frequently cited example is Amazon’s discontinued AI recruitment tool. Having been trained primarily on male CVs, the system systematically discriminated against female applicants, demonstrating how AI can embed and magnify existing workplace inequalities rather than eliminate them.
Under the Equality Act 2010, employers must ensure recruitment practices do not produce discriminatory outcomes, either directly or indirectly. AI tools that disadvantage protected groups, whether unintentionally or otherwise, may give rise to claims under section 13 (direct discrimination) or section 19 (indirect discrimination) of the Act. The responsibility for discriminatory outcomes rests firmly with the employer, not the technology provider.
Data protection compliance
The UK GDPR and the Data Protection Act 2018 impose additional obligations on employers using AI. Article 22 of the UK GDPR protects data subjects, including employees and job applicants, from decisions based solely on automated processing where those decisions have legal or similarly significant effects.
This means retailers cannot rely entirely on AI to make hiring decisions, performance assessments, or dismissal recommendations without meaningful human involvement. The automation must be part of a process that includes human oversight and intervention.
Data security presents another critical concern, particularly for retailers handling customer data alongside employee information. Information entered into AI tools, especially public platforms, may be stored or reused beyond the employer's control, so fashion retailers must also consider the risks to design concepts, supplier information, pricing strategies and customer databases.
Practical steps for employers
To manage legal and reputational risk, childrenswear businesses incorporating AI tools into their working practices should develop clear AI use policies governing both organisational responsibilities and employee conduct.
These policies should specify approved tools and permissible uses, along with prohibited activities such as uploading confidential data about customers, suppliers or product ranges to public platforms. They should also establish governance structures for oversight. While these measures won't eliminate risk entirely, they will significantly reduce exposure to discrimination and data breach claims.
Systems must be audited regularly for hidden bias and to ensure security, robustness and compliance with equality, data protection and employment laws. Training data should be scrutinised to ensure it reflects a diverse and representative pool of candidates across races, ethnicities, genders and educational backgrounds. Outputs must be monitored, with human review, to verify that results align with the organisation's policies and intended outcomes.
Importantly, human oversight must be built into every process involving AI, including regular reviews, clear accountability for automated systems, and comprehensive training for HR and line managers using them.
Final thoughts
Robust human oversight, regular audits, and clear governance policies are essential, as those tempted to delegate decision-making entirely to AI systems do so at considerable legal and reputational peril.
As courts increasingly take a dim view of AI-related failures, whether discriminatory hiring practices or fabricated evidence, retailers must ensure that efficiency gains do not come at the expense of legal compliance and fair treatment. In an industry built on trust between brands and families, the reputational cost of AI failures could far exceed any short-term productivity gains.