Unlocking the AI Act: A Closer Look at the First Regulation on AI


As artificial intelligence (AI) becomes more prevalent, governments worldwide are working on rules to ensure its safe and ethical use. Among these is the AI Act, a regulatory framework proposed by the European Union that aims to address the multifaceted impact of AI systems on society.

In addressing the various aspects and questions surrounding the AI Act, it’s crucial to comprehend its scope and the responsibilities it imposes. In this blog, I would like to explore the landscape of AI regulation: who the AI Act applies to, the risks AI poses to humanity, the cybersecurity concerns it raises, and the legal positions of different regions on AI. Drawing on professional insight, I will cover the key points of the AI Act and how it aims to shape our future with AI.

What is the AI Act?

The AI Act classifies AI systems based on risk levels and outlines corresponding obligations for providers and users. It establishes standardized regulations governing the development, market placement, and utilization of AI systems within the European Union. The legislation adopts a risk-based approach, ensuring that measures are proportionate to the risk associated with each AI system. This framework aims to provide a cohesive and comprehensive set of rules to govern AI technologies across the EU.

Global AI Regulation Race: Why Does the EU Aim to Be the First Global Leader in Regulating AI?

The EU is the first major regulator to draft comprehensive AI laws and stands at the forefront of shaping global norms in AI governance. Its main goal is to ensure that AI systems used in the EU are safe, transparent, traceable, non-discriminatory, and environmentally friendly, with a focus on human oversight to prevent harmful outcomes.

The EU holds global influence in digital regulations. Its standards, like GDPR, set worldwide benchmarks.

Will These AI Regulations Achieve The Same Widespread Adoption as the GDPR?

The short answer is yes. Given the EU’s proven track record with the GDPR, there is strong anticipation that these AI regulations will achieve similarly widespread adoption. The EU’s role as a trailblazer in digital regulation makes Brussels a focal point in the evolving landscape of global AI standards.

The sophistication of current technologies necessitates proactive measures to safeguard countries, companies, individuals, and society. Taking prompt action ensures local protection and sets a precedent for other nations to follow. Regulators must act now if these AI rules are to attain the same global adoption as the GDPR.

What Is the EU’s Approach to Regulating Artificial Intelligence (AI)?

The EU’s approach to regulating artificial intelligence (AI) involves the establishment of a comprehensive regulatory framework known as the Artificial Intelligence Act. This act classifies AI systems based on risk levels and outlines corresponding obligations for providers and users. The main features include:

Unacceptable Risk AI Systems:
These systems are considered a threat to people and will be banned. Examples include cognitive behavioral manipulation, social scoring, biometric identification and categorization of people, and real-time and remote biometric identification systems (e.g., facial recognition). Limited exceptions may be allowed for law enforcement purposes.

High-Risk AI Systems:

AI systems that negatively affect safety or fundamental rights fall into this category. They are divided into two subcategories: those used in products falling under EU product safety legislation (e.g., toys, aviation, cars, medical devices), and those deployed in specific areas (e.g., critical infrastructure, education, employment, law enforcement).

All high-risk AI systems undergo assessment before being marketed and throughout their lifecycle.

General Purpose and Generative AI:

Generative AI, like ChatGPT, must comply with transparency requirements, including disclosure of AI-generated content, prevention of illegal content generation, and publishing summaries of copyrighted data used for training.

High-impact general-purpose AI models, such as GPT-4, undergo thorough evaluations, and serious incidents must be reported to the European Commission.

Limited Risk AI Systems:

These systems must comply with minimal transparency requirements to inform users, allowing them to decide whether to continue interacting with the application.

Users should be aware when interacting with AI systems generating or manipulating image, audio, or video content (e.g., deepfakes).
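
To make this tiered structure concrete, here is a minimal illustrative sketch in Python. It is not part of the Act itself: the tier names follow the classification above, while the use-case keywords, the obligation notes in the comments, and the default-to-minimal fallback are simplified assumptions for illustration; a real classification requires legal analysis of the Act’s annexes.

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk tiers described by the AI Act."""
    UNACCEPTABLE = "unacceptable"  # banned outright (e.g., social scoring)
    HIGH = "high"                  # assessed before market entry and through the lifecycle
    LIMITED = "limited"            # transparency obligations (e.g., chatbots, deepfakes)
    MINIMAL = "minimal"            # no additional obligations

# Hypothetical mapping of use cases to tiers -- illustrative only.
USE_CASE_TIERS = {
    "social_scoring": RiskTier.UNACCEPTABLE,
    "realtime_remote_biometric_id": RiskTier.UNACCEPTABLE,
    "medical_device_component": RiskTier.HIGH,
    "cv_screening_for_hiring": RiskTier.HIGH,
    "customer_service_chatbot": RiskTier.LIMITED,
    "spam_filter": RiskTier.MINIMAL,
}

def classify(use_case: str) -> RiskTier:
    """Return the risk tier for a known use case. Unknown cases default
    to MINIMAL here purely to keep the sketch short; in practice an
    unknown use case would trigger a legal review, not a default."""
    return USE_CASE_TIERS.get(use_case, RiskTier.MINIMAL)

for case in ("social_scoring", "cv_screening_for_hiring", "spam_filter"):
    print(f"{case}: {classify(case).value}")
```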

Who Does the AI Act Apply to?

The AI Act casts a wide net, applying to various stakeholders. It encompasses providers of AI systems, whether based within the EU or operating outside it but supplying their systems to users within the Union. Furthermore, it implicates users of AI systems, including businesses and public institutions, who must adhere to compliance requirements delineating the safe utilization of AI technologies.

Moreover, the AI Act extends its reach to importers and distributors, ensuring the entire supply chain maintains accountability for the AI systems in circulation. Other parties involved indirectly, such as third-party service providers that contribute to the functioning of AI, are also within the fold of the AI Act’s influence. This collective approach demonstrates a commitment to overseeing the entire lifecycle of AI systems from development to deployment.

The AI Act has significant implications for businesses, governments, and individuals as AI becomes increasingly integral to our daily lives.

A key aspect of the AI Act is its tiered approach to regulation, emphasizing a risk-based framework that distinguishes between ‘unacceptable’, ‘high-risk’, and ‘limited’ or ‘minimal’ risk AI applications. This ensures that the level of regulatory oversight is proportionate to the potential harm that a particular AI system might pose, making the AI Act applicable in a context-specific manner.

One of the fundamental objectives of the AI Act is to create a harmonized legal environment across member states, reducing the fragmentation that could otherwise thwart the development and adoption of AI technologies. By setting clear guidelines and standards, the Act aims to engender trust and confidence in AI applications among the public and businesses.

The AI Act is designed to be a comprehensive legal framework reflecting AI technology’s broad implications across various sectors and actors. Its broad scope standardizes AI use, ensuring safety, accountability, and public trust.

The AI Act also considers the impact on small and medium-sized enterprises (SMEs), which are critical drivers of innovation. Special provisions are designed to reduce the burdens on SMEs, making it easier for these entities to navigate the AI landscape without stifling their entrepreneurial spirit. This reflects an understanding of their vital role in driving AI forward.

Is AI a Threat to Humanity?

The conversation on whether AI constitutes a threat to humanity is multi-layered, touching on ethical, existential, and practical concerns. The AI Act’s drafters are aware of the potential risks of autonomous decision-making systems and have made provisions to mitigate them. Some argue that with increased intelligence and autonomy, AI systems may eventually operate beyond human control or alignment with human values.

AI’s rapid advancements present scenarios where superintelligent systems could perform tasks more efficiently than humans, potentially displacing workforce segments and influencing power dynamics. AI systems can also be manipulated or behave in unintended ways, presenting new risks. The AI Act addresses such concerns through stringent regulations on high-risk applications.

However, it’s also worth noting that AI offers immense benefits in various domains, improving efficiency, solving complex problems, and even saving lives. The objective of the AI Act is not to stifle such transformative potential but to provide a framework that enables responsible progress that aligns with human rights and safety.


Despite dystopian narratives popularized by science fiction, the consensus among experts is that a cautious and proactive regulatory approach, as encouraged by the AI Act, can guide AI’s development toward supporting rather than threatening humanity. By embedding ethical considerations within the legislative process, the Act seeks to neutralize threats before they materialize.

At this intersection, the AI Act’s role becomes pivotal: ensuring that AI’s trajectory remains beneficial by identifying and mitigating potential risks early on. Thus, while AI does present potential threats, the Act serves as a safeguard, acting as a bulwark against the probability of such risks becoming realities.

Indeed, the future of AI and humanity is deeply intertwined, and proper governance, such as provided by the AI Act, is instrumental in ensuring that this partnership is constructive and not adversarial. By addressing fear with foresight, the Act aims to preempt the question of AI as a threat with comprehensive preventative measures.

How Far Does the AI Threat to the Digital World Go?

The previous edition of Restless Cyber Edge covered AI and the rising threat of virtual kidnapping.

The rise in cyber kidnapping scams has rattled communities, and one recent incident exposed the alarming tactics these criminals employ.

In that incident, cybercriminals engineered a situation in which the victim isolated himself in a remote location, creating the illusion of captivity. Convinced of the threat, the victim’s family paid a substantial ransom for his release.

If you recall Deutsche Telekom’s AI-driven social experiment highlighting the dark side of ‘Sharenting,’ this real-world scenario echoes similar concerns.

Cyber kidnappings entail duping individuals into believing their loved ones are in peril, compelling them to pay ransom. Tactics involve spoofed phone numbers, AI-generated voices, and psychological manipulation. Criminals often exploit personal information from social media or data breaches to make their threats appear genuine. The psychological impact is profound; anyone can fall victim to this unsettling trend.

To guard against such threats, experts advise staying vigilant during calls, independently verifying information, and refraining from sharing personal details on social media. If you suspect a cyber kidnapping, it is crucial to contact law enforcement promptly.

Cybersecurity and the New AI Act

The integration of AI within our digital infrastructure inevitably raises questions of cybersecurity. As AI systems handle vast amounts of data and make high-stakes decisions, ensuring security against cyber threats is paramount. The AI Act frames cybersecurity as one of its core concerns, weaving it into the regulatory fabric.

Under the AI Act, providers must adhere to state-of-the-art standards in cybersecurity, protecting AI systems from unauthorized access, modification, or misuse. The Act emphasizes the confidentiality, integrity, and availability of data processed by AI systems, aligning with broader cybersecurity principles.

High-risk AI systems face more rigorous cybersecurity scrutiny due to the potential consequences of their failure or compromise. These systems must be robust against potential threats and have mechanisms to deal with any vulnerabilities that may be discovered.

Regular assessment of cybersecurity measures throughout the lifecycle of an AI system is mandated within the AI Act. This continual evaluation is designed to keep cybersecurity measures adaptable, ensuring that AI systems remain resilient against evolving digital threats.

One cannot overstate the importance of trust in AI technologies; a key component of this trust is the assurance that these systems are secure. The AI Act reinforces the idea that robust cybersecurity policies must be inextricable from deploying and operating AI solutions.

In short, the AI Act’s provisions for cybersecurity underscore a commitment to proactive, dynamic, and vigilant practices that preserve the integrity of AI systems. It is a vital step towards ensuring a secure transition into an AI-driven future.


What Are the Cybersecurity Obligations Under the AI Act?

The requirements for cybersecurity under the AI Act are far-reaching and designed to ensure the highest levels of data protection. As mentioned, AI system providers must integrate state-of-the-art cybersecurity measures, particularly for high-risk systems, which the Act places under especially stringent requirements.

These obligations include thorough risk assessments, adopting best cybersecurity practices, and implementing reliable encryption methods to protect data from unauthorized access. Transparency in the documentation of these measures is also an obligation that facilitates checking compliance with the AI Act’s standards.
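
As one concrete illustration of the encryption obligation, the sketch below encrypts a dataset at rest with authenticated symmetric encryption from the open-source `cryptography` Python library. This is a minimal example under assumed requirements, not a prescribed implementation; in production the key would live in an HSM or a managed key service, never beside the data.

```python
# pip install cryptography
from cryptography.fernet import Fernet

# Demo only: a production key would come from an HSM or key
# management service, never be generated next to the data.
key = Fernet.generate_key()
fernet = Fernet(key)

# Encrypt sensitive training data at rest. Fernet provides
# authenticated encryption, so tampering is detected on decryption.
plaintext = b"applicant_id,income,decision\n1042,52000,approved\n"
ciphertext = fernet.encrypt(plaintext)

# Decrypt for authorized use; raises InvalidToken if the
# ciphertext was modified.
assert fernet.decrypt(ciphertext) == plaintext
```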

An additional cybersecurity obligation is to establish and maintain incident response systems. In the event of a cybersecurity breach, providers are expected to swiftly address the issue, mitigate the damage, and report the incident to relevant authorities under the Act’s provisions.
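
A minimal sketch of what such an incident-response hook could look like, assuming a hypothetical severity scale and reporting threshold; the Act’s actual reporting criteria and deadlines must be taken from the final legal text.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class SecurityIncident:
    """Minimal incident record backing a report to the authorities."""
    system_name: str
    description: str
    severity: str  # hypothetical scale: "minor" | "major" | "serious"
    detected_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    mitigations: list[str] = field(default_factory=list)

def must_report(incident: SecurityIncident) -> bool:
    # Placeholder rule for the sketch; real thresholds come from the Act.
    return incident.severity == "serious"

incident = SecurityIncident(
    system_name="credit-scoring-v2",
    description="Unauthorized access attempt against the inference API",
    severity="serious",
    mitigations=["rotated credentials", "blocked offending IP range"],
)
print("Report to authority:", must_report(incident))  # Report to authority: True
```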

Mandatory testing, validation, and continuous updates of AI systems are also part of these obligations, ensuring that cybersecurity measures remain effective against new and emerging threats. The Act also requires providers to furnish detailed documentation showing how they adhere to cybersecurity requirements. This documentation is critical for auditing and legal compliance purposes.
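
As a sketch of what continuous validation might look like in a release pipeline, the check below blocks deployment of a retrained model that falls under an agreed quality floor; the metric and threshold are assumptions, not values from the Act.

```python
# Hypothetical release gate: block deployment if a retrained model
# drops below the accuracy floor agreed in the risk assessment.
ACCURACY_FLOOR = 0.95

def validate_release(model_accuracy: float) -> None:
    if model_accuracy < ACCURACY_FLOOR:
        raise RuntimeError(
            f"Accuracy {model_accuracy:.3f} is below the floor "
            f"{ACCURACY_FLOOR}; deployment blocked, review required."
        )

validate_release(0.97)  # passes; validate_release(0.93) would raise
```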

Furthermore, the AI Act requires that these systems be developed with data minimization in mind, only collecting data that is imperative to the AI system’s function and ensuring that the privacy of individuals is not unduly compromised. The aim is to confine data exposure to potential cyber threats by limiting the breadth of information stored and processed.
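
Data minimization can be as simple as an allow-list applied before data ever reaches the AI system. The field names below are hypothetical, for a loan-scoring example.

```python
# Keep only the fields the model genuinely needs; everything else is
# dropped before processing or storage.
ALLOWED_FIELDS = {"income", "employment_years", "existing_debt"}

def minimize(record: dict) -> dict:
    """Strip every field that is not on the allow-list."""
    return {k: v for k, v in record.items() if k in ALLOWED_FIELDS}

raw = {
    "name": "Jane Doe",           # identifying, not needed -> dropped
    "email": "jane@example.com",  # identifying, not needed -> dropped
    "income": 52000,
    "employment_years": 7,
    "existing_debt": 1200,
}
print(minimize(raw))
# {'income': 52000, 'employment_years': 7, 'existing_debt': 1200}
```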

To summarize, the obligations are multifaceted and emphasize a comprehensive approach to cybersecurity in the context of AI. This is fundamental to instilling confidence in AI technologies and ensuring their safe and secure adoption across industries.

What is the Expected Position of the AI Act?

The AI Act’s common position is balanced: fostering technological development and innovation while protecting public interests and fundamental rights. It is not designed to stifle the AI industry but to ensure sustainable, ethical, and secure development.

In the spirit of its common position, the AI Act stipulates precise requirements for transparency and accountability. It proposes structures whereby individuals understand and trust the AI decisions that affect them. Educating stakeholders and the public forms part of this trust-building initiative.

Coherence with existing regulations is also a key component of the AI Act’s common position. It aims to dovetail with other laws concerning data protection, consumer rights, and non-discrimination to create a cohesive legal ecosystem for AI’s smooth integration into society.

The common position also advocates for a harmonized market across member states so that AI innovations can seamlessly cross borders, fostering a collaborative, rather than disjointed, approach to technology governance. This benefits providers and users alike by simplifying regulatory adherence and broadening market access.

Additionally, the Act lays the groundwork to promote AI upskilling and literacy, ensuring that the workforce is prepared for AI implementation and society is educated on its implications. This is essential in cultivating a technologically adept populace ready for the AI revolution.

Overall, the AI Act’s common position is one of proactive leadership, establishing frameworks and standards that can be referenced globally. The goal is to encourage AI development while ensuring that the technology’s benefits are maximized and its risks minimized.

What Are the Fines Under the AI Act?

Fines under the AI Act are structured to be dissuasive and scaled to the nature and severity of the violation. Infringements of the prohibitions on certain AI practices, for example those that pose clear safety risks, attract the highest potential fines (under the provisional agreement, up to EUR 35 million or 7% of worldwide annual turnover, whichever is higher).

High-risk AI systems not complying with the required standards may incur considerable financial penalties. These punitive measures are tiered according to the specific provisions breached, demonstrating the granularity with which the AI Act approaches enforcement.

Furthermore, penalties for non-compliance with the AI Act vary depending on the size of the entity and the extent of the infraction. Smaller entities may face reduced penalties to avoid crippling fines that could stymie innovation and competitiveness.

The fine structure also entails consideration for the corrective steps an entity has taken following a violation. The willingness to ameliorate issues and bolster compliance post-infringement is considered when determining fines, embodying a spirit of remediation alongside deterrence.

The AI Act’s fining system is not only about punishment; it also incentivizes a culture of responsibility and compliance. The fines encourage businesses and organizations to prioritize safety, transparency, and accountability in their AI implementations.

The AI Act also plays a clarifying role in the question of whether AI threatens the rule of law. It anticipates potential conflicts between AI decision-making and existing legal tenets, providing legal pathways for recourse and ensuring that AI remains subordinate to human legal frameworks.

How Can Organizations Navigate Their Cybersecurity Obligations in Compliance with the EU AI Act?

The European Union’s AI Act outlines essential cybersecurity obligations for high-risk artificial intelligence (AI) systems. The obligations aim to ensure an adequate level of accuracy, robustness, safety, and cybersecurity in AI systems. Here’s a breakdown of the key cybersecurity obligations:

Focus on AI Systems

The AI Act focuses on AI systems, encompassing software housing multiple AI models and integral components like interfaces and databases.

The cybersecurity obligations apply to the entire AI system, emphasizing the importance of considering the interactions between various AI models within a system.

Cybersecurity Risk Assessment

Compliance with the AI Act requires a detailed cybersecurity risk assessment for high-risk AI systems.

This assessment links system-level requirements to individual components, identifying and addressing specific risks: it translates regulatory cybersecurity requirements into mandates for each component, as the sketch below illustrates.
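
Here is a minimal sketch of that translation step, assuming hypothetical component names and a simple requirement-to-component table; a real assessment would be far richer, but the inversion from system-level requirements to per-component mandates is the core idea.

```python
# Hypothetical mapping: which components each system-level
# cybersecurity requirement touches.
SYSTEM_REQUIREMENTS = {
    "integrity": ["model_artifacts", "training_pipeline", "api_gateway"],
    "confidentiality": ["feature_store", "api_gateway"],
    "availability": ["api_gateway", "inference_service"],
}

def component_mandates(requirements: dict) -> dict:
    """Invert the table: for each component, the requirements it must meet."""
    mandates: dict[str, set[str]] = {}
    for requirement, components in requirements.items():
        for component in components:
            mandates.setdefault(component, set()).add(requirement)
    return mandates

for component, reqs in sorted(component_mandates(SYSTEM_REQUIREMENTS).items()):
    print(f"{component}: {sorted(reqs)}")
# The api_gateway, for example, inherits all three requirements.
```
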
Integrated and Continuous Approach

Robust AI systems require an integrated and continuous approach combining cybersecurity practices with AI-specific measures.

The approach should adhere to defense-in-depth and security-by-design principles, extending throughout the entire lifecycle of the AI product.

Limits in the State of the Art

Recognizing the varying maturity of AI technologies, the AI Act acknowledges that not all AI technologies are suitable for high-risk scenarios without addressing their cybersecurity shortcomings.

Compliance necessitates adopting a holistic approach, especially for emerging technologies, acknowledging inherent limitations.

Additional Legal Tech Offerings

The EU AI Act follows a principle-based approach, requiring companies to demonstrate compliance through documentation. It emphasizes accountability and mandates adherence to state-of-the-art measures. To facilitate compliance, many legal tech solutions are expected to offer AI system maturity assessment functions or features.

Such solutions assess the maturity of AI systems against regulations and technical standards, provide a compliance score, and identify corrective actions.
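
A toy version of such a maturity assessment might look like the sketch below. The control names and weights are invented for illustration and are not drawn from the Act or from any specific legal tech product.

```python
# Hypothetical weighted checklist of controls (weights sum to 1.0).
CONTROLS = {
    "risk_assessment_documented": 0.25,
    "incident_response_plan": 0.20,
    "continuous_testing": 0.20,
    "transparency_documentation": 0.20,
    "encryption_at_rest": 0.15,
}

def compliance_score(implemented: set) -> float:
    """Weighted share of implemented controls, in [0, 1]."""
    return sum(w for name, w in CONTROLS.items() if name in implemented)

def corrective_actions(implemented: set) -> list:
    """Missing controls, highest-weight (most impactful) first."""
    missing = [(w, n) for n, w in CONTROLS.items() if n not in implemented]
    return [n for _, n in sorted(missing, reverse=True)]

done = {"risk_assessment_documented", "encryption_at_rest", "continuous_testing"}
print(f"Compliance score: {compliance_score(done):.2f}")  # 0.60
print("Corrective actions:", corrective_actions(done))
```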

AI Laws: A Global Perspective

The AI Act is not an isolated phenomenon when considering the legal ecosystem for AI. The United States, the United Kingdom, and Europe each have unique approaches driven by their distinctive legal traditions, market environments, and policy goals.

While the United States has yet to adopt a comprehensive federal framework akin to the AI Act, specific regulations and guidelines at state and sectoral levels govern the use of AI. These vary significantly in scope and enforcement, creating a patchwork of provisions that reflect the dynamism of the US market and present challenges of inconsistency.

On the other hand, the United Kingdom, post-Brexit, is carving out its AI governance trajectory. With aspirations to lead AI innovation, the UK has implemented strategies and advisory bodies to stimulate AI development while mulling over comprehensive legislation that safeguards public interests.

Europe has taken a more centralized approach with the AI Act, reflecting the European Union’s regulatory philosophy emphasizing cross-border consistency and safeguarding fundamental rights. This approach positions the EU as a potential standard-bearer in international AI regulation.

The global positioning of the AI Act within these broader dialogues indicates a movement toward enhanced regulatory frameworks as AI assumes greater societal roles. It’s clear that the conversation is far from complete, and the Act is but one step among many on the journey of defining how AI integrates with established legal constructs.

Cloud Security Alliance – Turkey Chapter’s Highlighted Group: AI Safety & Security Working Group

[Photo: the CSA Turkey Chapter board at its 2024 planning meeting]

As the Cloud Security Alliance – Turkey Chapter, we held our board meeting to set our goals for 2024 and evaluate the past period. Our 2024 plans span 9 working groups and numerous significant projects; the highlight is the AI Safety & Security Working Group, which we will bring to life with the support of CSA Global.

Although Gokhan Polat is not visible because he took the photo, the entire Board of Directors is ready to work on cloud and artificial intelligence security.

What Does the Cloud Security Alliance – Turkey (CSATR) Team Expect in 2024?

With the new year, our aim as the CSA Turkey Chapter is to bring you more information and innovation. Throughout the year, we will continue to follow developments in cybersecurity through events, training, and projects. We believe that, together with you, we will continue to grow. Special thanks to A. Kemal Kumkumoğlu, founder of KECO Legal, and their consultant Selin Çetin Kumkumoğlu, for hosting us.

Why Should You Follow CSA’s Sector Events?

CSA’s AI Summit 2024
CSA’s recent Virtual AI Summit featured industry innovators delivering guidance on critical AI topics and their impact on cybersecurity.

Must-attend industry events like this can help you gain a holistic understanding of the future of AI disciplines, receive pragmatic advice on managing risks, and benefit from generative AI today.

More About Cloud Security Alliance – Turkey (CSATR)
Cloud Security Alliance – Turkey (CSATR) is the official branch of the Cloud Security Alliance (CSA) in Turkey, a world-leading organization in cloud computing security.

The aim of CSATR is to convey, adapt, and promote global developments in cloud computing security in Turkey.

CSATR operates as a volunteer effort, and participation is free for individuals. To actively participate or stay informed, follow the CSATR LinkedIn page and join the LinkedIn group of the same name.

Final Words

As we navigate the complex interplay between AI and law, the AI Act represents a significant step towards creating guardrails that ensure AI’s benefits are enjoyed while its risks are mitigated. It lays a foundation for building societal trust in AI technologies and secures a place for human oversight in an increasingly automated world. As AI continues to reshape industries and daily life, proactive and thoughtful regulation such as the AI Act will be pivotal in harnessing the transformative power of AI responsibly.

Additional Sources: European Parliament, CEPR.
