Tech & Sourcing @ Morgan Lewis

TECHNOLOGY TRANSACTIONS, OUTSOURCING, AND COMMERCIAL CONTRACTS NEWS FOR LAWYERS AND SOURCING PROFESSIONALS
Artificial intelligence (AI) is reshaping modern society, automating routine human activities and, in turn, enhancing efficiency and productivity. Like any technological development, AI presents both benefits and risks, including potential biases, privacy intrusions, and ethical dilemmas.
While artificial intelligence has not yet achieved singularity, the last fortnight brought a substantial update to the AI regulatory landscape. As of February 2, Chapters I and II of the EU AI Act have entered into force. These include Article 5, which prohibits certain AI systems whose use may intrude upon an individual's privacy, such as systems used for emotion recognition in the workplace, subliminal manipulation, and predictive policing. Separately, the EU AI Act's obligations relating to AI literacy have also taken effect.
In our latest blog post, we shared a few considerations for compliance in the context of complex outsourcing contracts. Continuing that theme, we now turn to data protection compliance.
In today's highly competitive landscape, businesses face the challenge of optimizing efficiency, enhancing productivity, and reducing costs, all while maintaining the quality of their services. One strategy for achieving these goals is outsourcing noncore business functions to qualified and experienced vendors, which is where the drafting and negotiation of outsourcing agreements comes into play.
Mike Pierides and James Mulligan co-authored an article in the Journal of Securities Operations & Custody which explores key themes of outsourcing and third-party risk management regimes that apply to financial entities and their service providers. The article serves as a compendium of key differences between regulatory expectations on resiliency and outsourcing, highlights key best practices and challenges to implementing these expectations, and, finally, considers the impact of artificial intelligence solutions on such regulatory expectations.
On January 14, the UK government published a consultation on new measures to tackle the increasing threat of ransomware attacks. Ransomware is malicious software (malware) that infects a victim's computer system, preventing the victim from accessing IT systems, significantly impairing their use, and/or facilitating the theft of sensitive data. A ransom is then demanded for restoration of access and/or data, and, as we previously noted, the cost of ransomware attacks is rising nearly 20% year-on-year.
European regulators recently published clarifications, prepared by the European Commission, on the scope of ICT services under the EU Digital Operational Resilience Act (DORA). The clarifications confirm previous guidance and enable financial entities to treat as out of scope certain services that form part of regulated financial services.
Please join us for our fourth annual Artificial Intelligence Boot Camp, during which Morgan Lewis lawyers will discuss the latest developments, insights, and impacts of AI usage and integration for companies of all sizes and industries.
Please join us on Wednesday, February 5, 2025, from 12:00–1:00 pm ET as partners Ksenia Andreeva and Kristin Hadgis and associate Oliver Bell provide a global update on data handling and compliance issues with a focus on the United States, Europe, and the Middle East.
On January 13, 2025, the United Kingdom’s Prime Minister Sir Keir Starmer announced the UK AI Opportunities Action Plan. The AI Opportunities Action Plan outlines the UK’s intentions to become a world leader in artificial intelligence technology for the benefit of private businesses and their customers as well as for all UK residents via AI-enabled public services.