There needs to be an inclusive set of regulations to steer how we develop this technology into the future.
There is no doubt that we are seeing significant advances in intelligent machinery across all facets of our lives.
Great strides have been made in neural network technology, where patterns are learned from large amounts of data and continuously refined.
This means that AI can identify objects faster and more accurately, and make better decisions without any intervention from humans. A great example of this would be self-driving cars.
It raises the question of how this new set of rules will play out across all segments of our society and within the legal frameworks we currently have in place.
Can AI truly make better decisions than humans?
We already have case studies of hospitals refusing to operate on a critically ill patient because they could not get consent from the right party.
In other words, there is rigidity in the way contracts govern our lives because of the potential consequences of breaking them. That is where we are now as a civil society.
Now, imagine a situation where we know that a person needs medical attention, but a rigid set of rules encoded in an algorithm denies the permission to provide it. Or a self-driving car that needs to break a road rule in order to avoid a dangerous collision.
To what extent are algorithms able to discern discriminatory patterns as well as humans? And more importantly, how will the courts handle these sorts of cases?
What is even more interesting is how the law will treat the layperson who had no direct involvement in coding or developing these technologies in the first place, but whose input (their own data) shaped the final decision that set the action in motion.
Should individuals be responsible for the consequences of an AI's decision even though they did not code the algorithm that determines its decision-making process? Are the biases that AI generates from our own data our fault or the fault of the algorithms? These are interesting questions that we, as stewards of the next generation of technology, must seriously consider.
I think one of the issues closest to the heart is whether AI will eliminate or dictate my personal choice. As a centre-left liberal, I would be worried if I lost my freedom of choice, and worse still, if I lost it without realizing it.
But of course, our personal choices are largely subjective, and as best-selling author Dan Ariely would argue, we are, in any case, predictably irrational.
Could AI actually improve our decision-making process and make our lives more meaningful or would it make us even more polarised than ever before? Let’s take a look at both sides of the equation below.
Potential benefits
AI could potentially make our decisions less subjective and create greater efficiencies in any given system. A good example is energy use: AI can determine which actions make the best use of energy by calculating demand peaks, optimal usage patterns and so on.
It can also determine how best to spend our wealth (itself another form of energy) and bring objectivity to the way we spend, since spending can be a very emotional exercise.
It could also harness the collective experience of many to determine the best result for any particular action, a bit like garnering advice from a hundred wise elders who have been there and done that, using historical data of previous judgments and decisions.
Potential bad consequences
A bit like a social experiment gone wrong, AI also has the potential to create some pretty scary, dystopia-like scenarios. As in the episode of Netflix's Black Mirror where all relationships are scored by an algorithm, we risk creating societies so rigid in their application that they lose their touch of humanity.
Sound familiar?
This kind of social engineering might sound like the stuff of fiction today, but it could very well become reality if we don't have discussions about the role of AI across all aspects of society. There are many more potential disasters.
For example, gerrymandering may become even easier. Just look at the Cambridge Analytica scandal and how data can be used to influence voting results.
Imagine if AI takes this even further and somehow manipulates a certain segment of people into "choosing" to buy houses in a certain area. Or imagine our choices in, say, fashion or art being narrowed to what algorithms pick for us based on our previous choices, eliminating the potential to open our minds to other kinds of creative experiences.
All of this has the potential to make future generations narrower, less multi-faceted and, dare I say, less tolerant.
The social responsibility of developers of AI technology
As the developers and guardians of this new technological revolution, we must keep these issues in mind. We know we cannot stop it, and I don't see any reason why we should stifle its advancement in any case.
But we must ensure that this technology is used for the advancement of humanity, in creating a more inclusive world that emphasises our commonality rather than our differences.
AI has the potential to make our lives that much richer and give us more time to focus on more important things in life rather than “working just to make a living”.
It is a chance for us to redesign the paradigm of our very existence as we know it with a focus on creating synergies rather than competition. In the words of R. Buckminster Fuller, “we can operate our planet in such a manner as to support and accommodate all humanity at a substantially more advanced standard of living than any humans have ever experienced”.
As technologists and participants working in its development, we have an important role to play in setting the right foundation for the advancement and development of future protocols.
In our interconnected world, this will obviously require a multi-stakeholder, global effort so that we can ensure its direction does not fall into the hands of those who wish to exploit it for selfish ends. We have the opportunity once again to ensure that we develop this technology for the better evolution of humanity.
Image Credit: Annie Spratt
The post Rise of AI and the legal considerations we must face in the new world appeared first on e27.