Abstract: Artificial Intelligence (AI) has rapidly become integral to decision-making systems across domains such as hiring, finance, education, and law enforcement. While AI promises efficiency and scalability, it also introduces serious ethical challenges. This paper examines the problem of algorithmic bias, focusing on how AI systems can amplify structural inequalities embedded in historical data. Using the Amazon hiring algorithm (2018) as a central case study, the paper demonstrates that AI tools are not impartial but socially situated technologies. It explores demographic parity, equal opportunity, and merit-based fairness in AI resource allocation. The paper also addresses issues of opacity, accountability, and loss of human agency associated with AI systems. It concludes that without fairness-aware design, transparency, and regulatory intervention, AI risks automating discrimination at scale while undermining democratic principles and natural justice.
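The two group-fairness criteria named above can be made concrete. A minimal sketch (hypothetical function names, assuming binary predictions and a binary protected attribute coded 0/1): demographic parity compares positive-prediction rates across groups, while equal opportunity compares true-positive rates, i.e. outcomes only among the genuinely qualified.

```python
# Hedged illustration of two fairness metrics; names and encoding are
# assumptions for this sketch, not taken from the paper.

def _positive_rate(preds):
    """Fraction of positive (1) predictions; 0.0 for an empty group."""
    return sum(preds) / len(preds) if preds else 0.0

def demographic_parity_diff(y_pred, group):
    """Absolute gap in positive-prediction rate between group 0 and group 1."""
    g0 = [p for p, g in zip(y_pred, group) if g == 0]
    g1 = [p for p, g in zip(y_pred, group) if g == 1]
    return abs(_positive_rate(g0) - _positive_rate(g1))

def equal_opportunity_diff(y_true, y_pred, group):
    """Absolute gap in true-positive rate, computed only over individuals
    with y_true == 1 (the 'qualified' candidates in a hiring setting)."""
    g0 = [p for t, p, g in zip(y_true, y_pred, group) if g == 0 and t == 1]
    g1 = [p for t, p, g in zip(y_true, y_pred, group) if g == 1 and t == 1]
    return abs(_positive_rate(g0) - _positive_rate(g1))
```

A classifier can satisfy one criterion while violating the other, which is why the paper treats them as distinct (and sometimes competing) notions of fairness.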