Technology has become a key component of many organizations as they respond to disruption from start-ups and other emerging competitors. This is compounded by the rise of the first true digital native consumers, Generation Z, who have grown up using only digital tools.
But technology is not perfect. In March 2018, the first reported fatal crash involving a self-driving Uber car killed a woman on a street in Tempe, Arizona. Artificial intelligence (AI), such as that in an autonomous vehicle, can indeed go wrong, raising the question of whom to blame for such a mishap.
If autonomous cars become the norm in the coming years, whose responsibility is it when an accident happens? The AI software provider could be liable for buggy software, the car manufacturer for faulty design, the service center for poor maintenance of the vehicle, or even the owner of the autonomous vehicle for failing to install software updates from the manufacturer.
Take another example: a recent CNN report headlined that “AI is hurting people of color and the poor.” It cited several examples: “A recent study by Joy Buolamwini at the M.I.T. Media Lab found facial recognition software has trouble identifying women of color. Tests by The Washington Post found that accents often trip up smart speakers like Alexa. And an investigation by ProPublica revealed that software used to sentence criminals is biased against black Americans.”
Bias in these situations stems from how the writers of the algorithms overlook variables such as race or gender. Hence, I agree that “Addressing these issues will grow increasingly urgent as things like facial recognition software become more prevalent in law enforcement, border security, and even hiring.”
Similar to AI, the ethical consequences of blockchain technology can be just as diverse and wide-ranging. Blockchain, a digital distributed ledger technology growing in popularity, has the potential to record distributed and sequenced information or transactions securely and immutably in industries such as financial services, logistics, and healthcare. But the digital identities it creates raise important questions about the control and privacy of data.
The newness of blockchain, whose most popular use case is cryptocurrency, and the lack of regulatory oversight open the technology to scams and unscrupulous businesses. One high-profile case is OneCoin, which was exposed as a Ponzi scheme and reportedly stole millions from duped investors who believed they were getting in early on what would become the “next Bitcoin.”
While the novelty of blockchain technology may allow certain groups to take advantage of people’s ignorance, other technologies intentionally lure their users with entertaining applications only to capture personal data. This can be gleaned from how willingly Filipinos hand over detailed personal information just to be entertained.
Many apps, once downloaded and granted permissions, read your browser history, check your phone calls, track your location, listen through your microphone, and watch you through the camera, a blatant breach of your privacy. The same goes for many entertaining apps, such as those that identify your celebrity look-alike or quizzes that tell you what your superpowers are.
While these and other emerging technologies present huge benefits for society, patently ethical issues are emerging faster than regulators and industry groups can react. There is a need for an ethical framework to guide the design and use of these new technologies.
We propose a three-step Technology Ethical Design Framework. The first step is to understand the desired outcome of the use of the technology and clearly define the approach to achieve the outcome.
The second step is to design and simulate implementation of the technology in multiple scenarios to reveal the impacts of design alternatives on the defined outcomes and on the people affected by the design.
The final step is the maintenance phase that involves periodically revisiting the first two steps to ensure that the technology is still achieving its objectives and desired outcomes.
This governance model, which can be implemented by regulators and industry groups, should be inclusive and community-based, and may involve professions and experts beyond technologists. Experts in philosophy, in particular, should be integral members of the governance committee to allow thorough questioning of a technology’s potential societal impact, such as the ethical dilemmas posed by AI.
Ultimately, the governance of emerging technologies should weigh the desired outcomes for people and society while mitigating the ethical risks.
Reynaldo C. Lugtu, Jr. is President & CEO of Hungry Workhorse Consulting, a digital and culture transformation firm. He is the Chairman of the Information and Communications Technology Committee of the Financial Executives Institute of the Philippines. He teaches strategic management in the MBA Program of De La Salle University. The author may be emailed at email@example.com.