Source: Trending Topics SEE
From chatbots and medtech to agrotech and self-driving cars, Artificial Intelligence (AI) is expected to see huge growth in the next few years – a prospect the EU views as an opportunity to challenge US and Chinese dominance in the field. Amid growing competition and increasing activity in the sector, the European Commission (EC) recently published its first ever legal framework proposal for AI in a bid to harmonise rules across the EU and introduce regulation that has been hailed by some and questioned by others.
Whether measured by investment in digital innovation, patent applications, or other common metrics, Europe has been lagging behind both China and the US. This is especially true in the field of AI, a transformation-enabling technology with monumental economic and societal implications.
Only two of the world’s 30 largest technology firms by market capitalization are from the EU. Among the 30 largest internet companies in the world by market capitalization, only one is European (Spotify). Over half of global private investment in AI goes to US companies, and only five of the world’s 100 most promising AI startups are based in Europe. Private funding for AI startups in Europe stood at roughly $4 billion in 2020, dwarfed by $36 billion in the US and $25 billion in China.
The EU’s proposed regulation laying down harmonised rules on AI is the result of several years of preparatory work by the Commission and its advisers, including the publication of a “White Paper on Artificial Intelligence.” According to the Commission, it is a key piece of its ambitious European strategy for data.
Unlike in an earlier leaked draft of the proposal, which was widely commented on in the media, the use of facial recognition systems in public spaces for law enforcement purposes is now listed among the prohibited AI practices. Countries such as France and Italy, which have already introduced restrictions on the use of facial recognition, will need to align their national laws with the new EU-wide rules.
The obligations under the proposal affect all parties involved: the provider, the importer, the distributor, and the user. Special provisions on transparency ensure that people know they are dealing with an AI system (Article 52) and enable users to interpret the system’s output and use it appropriately (Article 13).
The regulation also covers AI applications in areas considered “high risk” because they could undermine people’s safety or fundamental rights, such as education (for example, scoring of exams), employment (for example, CV-sorting software for recruitment) or public services (for example, credit scoring that denies citizens the opportunity to obtain a loan). AI developers in these fields will have to carry out risk assessments, ensure high-quality datasets, and provide a high level of explainability and human oversight, among other requirements, before they can enter the market.