AI Governance in Pakistan
Legal, ethical, and strategic imperatives for responsible governance
Author Bios:
Adil Nawaz is a final-year LLB student at Islamia Law College, University of Peshawar. He is the General Secretary of the Law Society and an active researcher in constitutional and environmental law.
Manahil Irfan is a legal researcher and aspiring policy analyst with a focus on AI ethics, gender justice, and international law. She is passionate about promoting responsible technology use in South Asia.
As Artificial Intelligence (AI) continues to revolutionize every facet of society—from governance and healthcare to national security and education—Pakistan stands at a critical juncture. The country is witnessing a rapid technological transformation without a corresponding evolution in legal and regulatory frameworks. The absence of a robust, rights-based, and enforceable AI policy has left individuals, institutions, and national interests vulnerable to ethical violations, privacy intrusions, cyber threats, and social manipulation.
Both international examples and domestic incidents underscore the urgency for Pakistan to develop a comprehensive legal architecture to govern AI deployment. This article combines ethical, legal, and strategic perspectives to propose a framework that ensures responsible AI governance in Pakistan—balancing innovation with accountability and security with civil liberties.
The Case for Comprehensive AI Legislation
Pakistan’s current legal infrastructure is ill-equipped to handle the nuanced threats of AI. The Prevention of Electronic Crimes Act (PECA) 2016 primarily addresses traditional forms of cybercrime but falls short when it comes to AI-generated deepfakes, algorithmic discrimination, or voice cloning (Government of Pakistan, 2016). Over 11,000 AI-related complaints were registered with the FIA in 2023 alone, involving cases ranging from identity theft and political misinformation to gender-based harassment (Digital Rights Foundation, 2023).
High-profile incidents involving AI-generated audio clips of politicians and public figures have gone viral, misleading the public and eroding trust in democratic institutions (Bandial, 2024). Global parallels exist: U.S. President Joe Biden and actress Emma Watson have both been victims of AI-generated deepfakes (The White House, 2023; Mesa, 2023). Yet unlike the U.S., the EU, and China, Pakistan lacks a structured policy framework to prevent such misuse.
Furthermore, the State Bank of Pakistan (2023) has reported rising financial fraud incidents using voice cloning technology. These developments indicate that AI is no longer a futuristic concern but a present-day threat demanding swift legislative response.
Human Rights and Ethical Implications of AI
AI systems, while efficient, can perpetuate social biases, undermine privacy, and threaten human dignity. Article 14 of Pakistan's Constitution guarantees the dignity of man and the privacy of the home, and Article 17 of the ICCPR protects the right to privacy. However, these legal protections are increasingly insufficient when AI tools are used for political manipulation, predictive policing, or generating non-consensual explicit content, especially targeting women (Gezgin et al., 2021; Ferguson, 2017).
Deepfake pornography, often targeting female public figures, has become an alarming global trend. Nearly 98% of deepfakes are pornographic in nature, and 99% of those target women (Digital Rights Foundation, 2023). In Pakistan’s conservative society, such misuse of AI technology can have devastating reputational, psychological, and even physical consequences for victims.
The ethical dilemma extends to automated decision-making in sectors like hiring, healthcare, and banking. Biased algorithms—trained on discriminatory datasets—can deny individuals opportunities based on race, gender, or socio-economic status (Plečko & Bareinboim, 2024). AI must therefore be aligned with human-centric values, and ethical guidelines should be legally enforceable, not voluntary.
Global Regulatory Frameworks: Models to Emulate
Internationally, the regulation of AI has become a priority. The European Union’s AI Act (2024) sets a precedent by classifying AI systems into risk categories—unacceptable, high-risk, and low-risk—and prescribing corresponding legal obligations (Madiega, 2024). The U.S. has adopted a more sectoral approach, issuing executive orders and soft regulations like the AI Bill of Rights that focus on algorithmic accountability and public transparency (White House, 2022, 2023).
China’s model, while heavily state-controlled, shows how strategic regulation of AI can align with national security interests (Ruan et al., 2021). These examples present different paths that Pakistan can adapt to its unique legal, cultural, and political context.
Pakistan’s draft Personal Data Protection Bill (2023) and National AI Policy are initial steps in the right direction (Ministry of Information Technology & Telecommunication, 2023). However, they remain vague on implementation mechanisms and enforcement. Critical issues such as data localization, biometric surveillance, algorithmic bias, and the liability of AI developers are left largely unaddressed.
Security Threats and Hybrid Warfare
AI is increasingly deployed as a tool of hybrid warfare—ranging from psychological operations and misinformation to cyberattacks on critical infrastructure. Pakistan currently faces over 900,000 cyberattacks daily, many using AI-driven malware and social engineering tools (Express Tribune, 2023). The Cyber Crime Wing of the FIA, while active, lacks the technical capacity and financial resources to tackle these sophisticated threats (Azad, 2022).
Pakistan must recognize AI as both a technological asset and a security liability. Drawing inspiration from institutions like the U.S. Cybersecurity and Infrastructure Security Agency (CISA) could help establish a specialized cyber-defense body tailored to AI risks (The White House, 2023).
On the military front, integrating AI into surveillance, drone warfare, and command systems raises significant ethical and security questions. Without clear international norms on military AI, the risks of accidental escalation or autonomous system malfunction increase dramatically (Galaz et al., 2021; Buzan & Wæver, 2003). Pakistan must engage in regional and global dialogues on responsible military use of AI while developing its own ethical doctrines.
Institutional and Technical Capacity Building
Effective AI regulation cannot be achieved through law alone. It demands institutional reform, technical capacity, and public awareness. Government departments currently lack the expertise to monitor AI tools, assess their risks, or audit algorithms for bias (Mathur, 2022). Moreover, judges, police officers, and regulators are often unfamiliar with the mechanics of machine learning, neural networks, or natural language processing.
To address this, Pakistan must:
Partner with universities to establish AI Law and Ethics Labs;
Train judiciary and law enforcement in AI forensics and regulation;
Launch public campaigns on AI awareness, particularly focused on marginalized communities and women (Lexalytics, 2022);
Develop multilingual educational content to ensure AI literacy across Pakistan’s diverse population.
The Way Forward: Legal and Policy Recommendations
Legislate a Comprehensive AI Law: Pakistan needs a stand-alone AI law that defines key concepts, rights, responsibilities, and penalties. The law must be aligned with constitutional guarantees and international treaties.
Amend PECA 2016: Incorporate AI-specific crimes, definitions for deepfakes, algorithmic decision-making, and cyber-psychological manipulation.
Operationalize the Data Protection Bill: Equip the Personal Data Protection Authority with enforcement powers and mandate data impact assessments for AI systems.
Establish an AI Governance Body: Create a National Commission on AI Ethics and Safety comprising legal experts, technologists, ethicists, and civil society actors.
Invest in AI Research and Public Sector Innovation: Support home-grown AI solutions that address Pakistan’s development challenges—from healthcare to agriculture—while embedding human rights safeguards from the design stage (Dong & McIntyre, 2014).
Engage Globally: Pakistan should participate in OECD, UNESCO, and UN initiatives on AI governance to ensure its frameworks are globally interoperable and diplomatically aligned.
Conclusion: Balancing Innovation with Accountability
AI holds immense promise for Pakistan. It can revolutionize governance, democratize access to services, and enhance national competitiveness. But without a thoughtful and enforceable legal framework, it also threatens privacy, social equity, and security.
Pakistan has a unique opportunity to shape its AI governance in ways that respect its cultural values, legal traditions, and development goals. It must act now—before the pace of innovation overtakes the rule of law.
References
Ahn, M. J., & Chen, Y. (2020). Artificial intelligence in government. Proceedings of the 21st Annual International Conference on Digital Government Research. https://doi.org/10.1145/3396956.3398260
Apple Podcasts. (2022, May 21). Dr. Julia Glidden: Accelerating digital transformation in the public sector [Audio podcast episode].
Azad, T. M. (2022). Cyber warfare as an instrument of hybrid warfare: A case study of Pakistan. South Asia Journal of South Asian Studies.
Bandial, S. (2024, August 30). AI and gender-based violence. DAWN.COM. https://www.dawn.com/news/1855645
Beidleman, S. W. (2009). Defining and deterring cyber war. Army War College.
Bentotahewa, V., Hewage, C., & Williams, J. (2022). The normative power of the GDPR: A case study of data protection laws of South Asian countries. SN Computer Science, 3(3), 183. https://doi.org/10.1007/s42979-022-01079-z
Bostrom, N. (2014). Superintelligence: Paths, dangers, strategies. Oxford University Press.
Buzan, B., & Wæver, O. (2003). Regions and powers: The structure of international security. Cambridge University Press.
Costanza-Chock, S., Raji, I. D., & Buolamwini, J. (2022). Who audits the auditors? Recommendations from a field scan of the algorithmic auditing ecosystem. Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency.
Dawn. (2023). Political deepfakes and their impact on Pakistani democracy.
Digital Rights Foundation. (2023). Cyber Harassment Helpline annual report. https://digitalrightsfoundation.pk
Dong, X., & McIntyre, S. H. (2014). The second machine age: Work, progress, and prosperity in a time of brilliant technologies. Quantitative Finance, 14(11), 1895–1896. https://doi.org/10.1080/14697688.2014.946440
European Commission. (2021). Proposal for a regulation on artificial intelligence.
Express Tribune. (2023). Pakistan faces 900,000 daily cyberattacks, says IT minister.
Ferguson, A. G. (2017). The rise of big data policing: Surveillance, race, and the future of law enforcement. NYU Press.
FIA. (2023). Annual report of cybercrime complaints in Pakistan. Federal Investigation Agency. [Unofficial source, internal report].
FWD50. (2022, November 22). Enabling digital transformation in public sector with industry partnerships with Dr. Julia Glidden [Video]. YouTube.
Galaz, V., et al. (2021). Artificial intelligence, systemic risks, and sustainability. Technology in Society, 67, 101741. https://doi.org/10.1016/j.techsoc.2021.101741
Gezgin, S., Yalçın, S., & Evren, O. (2021). Orientalism from past to present. IGI Global. https://doi.org/10.4018/978-1-7998-7180-4.ch001
Government of Pakistan. (2016). Prevention of Electronic Crimes Act, 2016. The Gazette of Pakistan.
Government of Pakistan. (2023). Draft of the Personal Data Protection Bill, 2023. Ministry of Information Technology and Telecommunication.
Harari, Y. N. (2018, September 14). The myth of freedom. The Guardian. https://www.theguardian.com/books/2018/sep/14/yuval-noah-harari-the-new-threat-to-liberal-democracy
Hartman, E. (2024, April 3). How AI is revolutionizing the practice of law. Harris Sliwoski. https://harris-sliwoski.com/blog/how-ai-is-revolutionizing-the-practice-of-law/
Hosseini, M., Rasmussen, L. M., & Resnik, D. B. (2023). Using AI to write scholarly publications. Accountability in Research, 31(7), 715–723. https://doi.org/10.1080/08989621.2023.2168535
Kalhoro, N. A. (2024, February 29). The draft national AI policy: A way forward for Pakistan. Paradigm Shift. https://www.paradigmshift.com.pk/draft-national-ai-policy-pakistan/
Lexalytics. (2022, December 7). Bias in AI and machine learning: Sources and solutions. https://www.lexalytics.com/blog/bias-in-ai-machine-learning
Madiega, T. (2024). Artificial Intelligence Act. European Parliamentary Research Service. https://www.europarl.europa.eu
Mathur, V. (2022, June 1). Artificial intelligence and law. Legal Service India. https://www.legalserviceindia.com/legal/article-8680-artificial-intelligence-and-law.html
Mesa, N. (2023). Will advancements in AI lead to job loss in biotech? BioSpace. https://www.biospace.com
Ministry of Information Technology & Telecommunication. (2023). National Artificial Intelligence Policy for Pakistan. Government of Pakistan.
Nyholm, S. (2024). What is this thing called the ethics of AI and what calls for it? In D. J. Gunkel (Ed.), Handbook on the ethics of artificial intelligence (pp. 13–26). Edward Elgar Publishing.
Plečko, D., & Bareinboim, E. (2024). Causal fairness analysis: A causal toolkit for fair machine learning. Foundations and Trends in Machine Learning, 17(3), 304–589. https://doi.org/10.1561/9781638283317
Ruan, L., Knockel, J., & Crete-Nishihata, M. (2021). Information control by public punishment: The logic of signalling repression in China. China Information, 35(2), 133–157. https://doi.org/10.1177/0920203X20963010
Sardar, Z. (2020). The smog of ignorance: Knowledge and wisdom in postnormal times. Futures, 120, 102554. https://doi.org/10.1016/j.futures.2020.102554
Sheikh, H. (2021). AI as a tool of hybrid warfare: Challenges and responses. Journal of Information Warfare.
Søndergaard, M. L. J. (2020). Troubling design: A design program for designing with women's health. ACM Transactions on Computer-Human Interaction, 27(4), Article 24. https://doi.org/10.1145/3397199
State Bank of Pakistan. (2023). Report on digital fraud trends.
Stepka, M. (2022, February 21). Law bots: How AI is reshaping the legal profession. Business Law Today. https://businesslawtoday.org/2022/02/how-ai-is-reshaping-legal-profession/
The White House. (2023). Executive order on the safe, secure, and trustworthy development and use of artificial intelligence. https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence
Ververis, V., et al. (2024). Website blocking in the European Union: Network interference from the perspective of open internet. Policy & Internet, 16(1), 121–148. https://doi.org/10.1002/poi3.367
Villasenor, J. (2023, March 20). How AI will revolutionize the practice of law. Brookings Institution. https://www.brookings.edu/articles/how-ai-will-revolutionize-the-practice-of-law/
White House. (2022). Blueprint for an AI Bill of Rights.