AI Is Already Being Used in the Legal System - We Need to Pay More Attention to How We Use It

While ChatGPT and the use of algorithms in social media get lots of attention, an important area where AI promises to have an impact is the law.

Photo Credit: Reuters

The EU has enacted legislation designed to govern how AI can and can’t be used in criminal law

Highlights
  • AI can generate data to help lawyers identify precedents in case law
  • It can come up with ways of streamlining judicial procedures
  • In North America, algorithms designed to support fair trials are already in use

Artificial intelligence (AI) has become such a part of our daily lives that it's hard to avoid – even if we might not recognise it. While ChatGPT and the use of algorithms in social media get lots of attention, an important area where AI promises to have an impact is the law.

The idea of AI deciding guilt in legal proceedings may seem far-fetched, but it's one we now need to give serious consideration to.

That's because it raises questions about the compatibility of AI with conducting fair trials. The EU has enacted legislation designed to govern how AI can and can't be used in criminal law.

In North America, algorithms designed to support fair trials are already in use. These include Compas, the Public Safety Assessment (PSA) and the Pre-Trial Risk Assessment Instrument (PTRA). In November 2022, the House of Lords published a report which considered the use of AI technologies in the UK criminal justice system.

Supportive algorithms

On the one hand, it would be fascinating to see how AI could significantly facilitate justice in the long term, for example by reducing costs in court services or handling judicial proceedings for minor offences. AI systems can avoid the typical fallacies of human psychology and can be subject to rigorous controls. For some, they might even be more impartial than human judges.

Also, algorithms can generate data to help lawyers identify precedents in case law, come up with ways of streamlining judicial procedures, and support judges.

On the other hand, repetitive automated decisions from algorithms could lead to a lack of creativity in the interpretation of the law, which could slow or even halt the development of the legal system.

The AI tools designed to be used in a trial must comply with a number of European legal instruments, which set out standards for the respect of human rights. These include the European Commission for the Efficiency of Justice's European Ethical Charter on the Use of Artificial Intelligence in Judicial Systems and Their Environment (2018), as well as other legislation enacted in recent years to shape an effective framework on the use and limits of AI in criminal justice. However, we also need efficient mechanisms for oversight, such as human judges and committees.

Controlling and governing AI is challenging and encompasses different fields of law, such as data protection law, consumer protection law, and competition law, as well as several other domains such as labour law. For example, decisions taken by machines are directly subject to the GDPR, the General Data Protection Regulation, including the core requirement for fairness and accountability.

There are provisions in GDPR to prevent people from being subject solely to automated decisions, without human intervention. And there has been discussion about this principle in other areas of law.

The issue is already with us: in the US, “risk-assessment” tools have been used to assist pre-trial assessments that determine whether a defendant should be released on bail or held pending the trial.

One example is the Compas algorithm in the US, which was designed to calculate the risk of recidivism – the risk of continuing to commit crimes even after being punished. However, there have been accusations, strongly denied by the company behind it, that Compas's algorithm had unintended racial biases.
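To make concrete what "calculating the risk of recidivism" means in practice, here is a minimal sketch of how such a tool typically works: a handful of defendant features are combined into a single score, which is then bucketed into risk bands. The feature names, weights, and thresholds below are entirely hypothetical illustrations – Compas's actual model is proprietary and undisclosed, which is precisely the transparency problem the article describes.

```python
import math

# Hypothetical feature weights, for illustration only. Real tools such as
# Compas use proprietary models whose inputs and weights are not public.
WEIGHTS = {
    "prior_arrests": 0.30,       # more prior arrests -> higher score
    "age_at_first_offence": -0.05,  # older at first offence -> lower score
    "failed_appearances": 0.40,  # missed court dates -> higher score
}
BIAS = -1.0

def risk_score(features: dict) -> float:
    """Combine features into a 0-1 'risk' via a logistic function."""
    z = BIAS + sum(WEIGHTS[name] * value for name, value in features.items())
    return 1.0 / (1.0 + math.exp(-z))

def risk_band(score: float) -> str:
    """Bucket the continuous score into low/medium/high bands."""
    if score < 0.33:
        return "low"
    if score < 0.66:
        return "medium"
    return "high"

defendant = {"prior_arrests": 2, "age_at_first_offence": 19, "failed_appearances": 1}
score = risk_score(defendant)
print(risk_band(score), round(score, 2))
```

Even this toy version shows why scrutiny matters: the choice of features and weights fully determines the output, so any bias encoded in them (or in the data used to fit them) is reproduced in every assessment, yet none of it is visible from the score alone.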

In 2017, a man from Wisconsin was sentenced to six years in prison in a judgment based in part on his Compas score. The private company that owns Compas considers its algorithm to be a trade secret. Neither the courts nor the defendants are therefore allowed to examine the mathematical formula used.

Towards societal changes?

As the law is considered a human science, it is important that AI tools assist judges and legal practitioners rather than replace them. In modern democracies, justice follows the separation of powers. This is the principle whereby state institutions such as the legislature, which makes the law, and the judiciary, the system of courts that apply the law, are clearly divided. It is designed to safeguard civil liberties and guard against tyranny.

The use of AI for trial decisions could shake the balance of power between the legislature and the judiciary by challenging human laws and the decision-making process. Consequently, AI could lead to a change in our values.

And since all kinds of personal data can be used to analyse, forecast and influence human actions, the use of AI could redefine what is considered wrong and right behaviour – perhaps stripping away nuance in the process.

It's also easy to imagine how AI will become a collective intelligence. Collective AI has quietly appeared in the field of robotics. Drones, for example, can communicate with each other to fly in formation. In the future, we could imagine more and more machines communicating with each other to accomplish all kinds of tasks.

The creation of an algorithm for the impartiality of justice could signify that we consider an algorithm more capable than a human judge. We may even be prepared to trust this tool with the fate of our own lives. Maybe one day, we will evolve into a society similar to that depicted in the science fiction novel series The Robot Cycle, by Isaac Asimov, where robots have similar intelligence to humans and take control of different aspects of society.

A world where key decisions are delegated to new technology strikes fear into many people, perhaps because they worry that it could erase what fundamentally makes us human. Yet, at the same time, AI is a powerful potential tool for making our daily lives easier.

In human reasoning, intelligence does not represent a state of perfection or infallible logic. For example, errors play an important role in human behaviour. They allow us to evolve towards concrete solutions that help us improve what we do. If we wish to extend the use of AI in our daily lives, it would be wise to continue applying human reasoning to govern it. 

