At the Semafor World Economic Summit, Steven Mills, Chief AI Ethics Officer at Boston Consulting Group (BCG), warned that many organizations rush to deploy artificial intelligence (AI) without properly training their employees. He said that roughly five hours of hands-on learning is the minimum staff need to start using AI confidently in daily workflows, such as summarizing reports or drafting emails, before moving on to more advanced applications.
Mills revealed that although AI remains a major topic in boardrooms, only about 5% of companies are seeing measurable business value from their AI initiatives. The reason, he said, is that most firms fail to redesign their workflows around AI. Instead, they attempt to “bolt on” AI tools to existing processes, which often leads to frustration and poor adoption.
Ethics, Value, and Human Readiness Go Hand-in-Hand
Mills argued that the human element is the true foundation of ethical AI. Even the most advanced model can produce harmful or biased outcomes if users don’t fully understand its limitations. By empowering employees with proper training and guidance, organizations can create a culture of trust, accountability, and ethical awareness.
He also warned that as governments accelerate AI initiatives, particularly in the public sector, private companies may soon face similar regulatory pressures. Without proper education and preparation, firms risk not only ethical lapses but also missed opportunities for value creation.
Why This Matters
- Human readiness drives ethics. AI works best when employees know how to use it responsibly.
- Training improves trust and results. Ethical deployment starts with an informed workforce.
- Regulation is coming. Businesses that act now will avoid compliance risks later.
What to Watch Next
- Will major corporations introduce minimum AI-training standards?
- How will AI literacy programs evolve across industries?
- Could regulators eventually require ethical training before enterprise AI rollouts?