How Can AI Models Be Deployed Ethically? (Roundtable Interview)

Just how safe is AI? How can humanity develop and deploy AI models that will change the world without compromising safety?

These are some of the questions that have sparked debate and led to recent laws and regulations meant to guide developers as they bring these innovations to life.

Our team of experts has considered all the angles regarding the development of ethical AI.

Here’s what they had to say.

Md Faruk Khan, Founder & CEO at mdfarukkhan.com

“I’ve learned that the World Health Organization (WHO) is highlighting the need for an inclusive approach in deploying AI technologies like large multi-modal models (LMMs) ethically. This means bringing together everyone from governments to healthcare providers and even patients to address potential AI risks, including data biases and automation bias. They stress the importance of clear regulations and ethical guidelines for AI in healthcare, including audits and assessments after launch to protect human rights and ensure safety.

It’s crucial for us to balance innovation with ethical considerations. Governments should provide essential support for ethical AI development and make sure AI applications in healthcare respect ethical obligations and human rights, focusing on individual dignity, autonomy, and privacy. This comprehensive framework aims to leverage AI’s potential to improve healthcare outcomes while carefully managing risks and ethical issues.”

Alireza Ghods, Ph.D., CEO and Co-founder at NATIX

“When people hear the word AI, they naturally tend to envision Terminator-like sentient machines, but in truth, AI is simply a powerful tool that can assist us with many tasks. AI itself can even be used to ensure that a technology is developed ethically.

The task of making sure an AI model operates ethically starts at the design stage, so it’s really up to developers. At NATIX, we trained the AI to anonymize the data collected and strip it of any private information. So in our case, the AI model is the tool that makes our product ethical.

Nevertheless, we must remember that AI models are not a “deploy-and-forget” type of technology, and they need to be constantly fact-checked and challenged. One of the problems with ChatGPT, for example, is that it answers with such confidence that you might believe it even when it’s wrong.”
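The kind of anonymization step Ghods describes can be sketched in a few lines. This is a minimal illustration only: the regex patterns below are hypothetical placeholders, not NATIX’s actual pipeline, and a production system would use far more robust PII detection.

```python
import re

# Illustrative PII patterns (hypothetical, not exhaustive):
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def anonymize(text: str) -> str:
    """Replace each detected PII span with a typed placeholder."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(anonymize("Contact Jane at jane.doe@example.com or +1 555-123-4567."))
# Contact Jane at [EMAIL] or [PHONE].
```

The point is that the stripping happens before any data leaves the collection stage, so downstream models never see the raw identifiers.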

Angel Vossough, CEO and Co-Founder of BetterAI

“In an era where machines can outplay humans in chess and take the wheel, their struggle to grasp human empathy shines a light on what sets the human mind apart – our complex emotions and thought processes are what truly make us unique. As a data scientist, Co-Founder and CEO of an AI startup, and a woman, my journey through the evolving landscape of AI is deeply intertwined with a commitment to leveraging technology for societal good and peace. Here’s my perspective on the future direction of AI and empathy.

The Paradox of Progress

The irony of AI’s rapid advancement is that the closer we get to replicating human intelligence, the more apparent it becomes that the subtleties of human empathy are AI’s greatest hurdle. This paradox highlights the complexity of what it means to be human and the depth of our emotional intelligence.

What makes this roadblock so challenging? The complexity of empathy.

What makes empathy particularly challenging for AI is its multi-dimensional nature. Empathy isn’t just about recognizing emotions; it’s about feeling with people. For AI to truly embody empathy, it must go beyond algorithms and data; it must connect, understand, and respond to human emotions in a way that feels authentic and meaningful.

A Collaborative Path to Resolution

Overcoming the empathy roadblock in AI requires a collaborative, interdisciplinary approach. By integrating insights from psychology, cognitive science, ethics, and AI research, we can develop systems that better recognize and simulate human emotions. This involves not only technical advancements, but also a commitment to understanding the ethical implications of empathic AI.

As for a timeline, it’s wise to bear in mind that this is a journey, not a sprint. Predicting when empathic AI will arrive is difficult; progress will be marked by incremental advancements and ethical considerations. We’re navigating uncharted territory, where each breakthrough brings us closer to understanding the essence of human empathy. The journey might span decades, reflecting the depth of the challenge and the commitment needed to address it.

As we navigate further down this path, the focus must remain on the positive impact of AI on society. The goal is not just to create machines that imitate human empathy but to enhance our collective ability to understand and care for one another.

As an advocate for empathic AI, I envision a future where technology amplifies our capacity for empathy, bridging divides and fostering peace. Our responsibility is to guide this development thoughtfully, ensuring AI serves as a force for good, enhancing human connections in a world that greatly needs them.”

Nandita Gupta, Accessibility Product Manager, AI Accessibility & Product expert, TEDx Speaker at Microsoft

“The single biggest way to ensure AI models are used responsibly is to train them on the right data. The inputs are just as important as the outputs, and there is a need to ensure the sources embedded for specific models have been vetted for your use cases. This not only produces more reliable outputs; blocking unvetted sources also leads to answers like “I can’t help you with that” rather than hallucinations and randomized answers.

Another important aspect is to consider the principle of “do no harm”. What is the specific application of your AI model, and how may it be used? Is there a possibility that hallucinated answers could do more harm than good?

Do rigorous testing to identify the use cases that work well versus those that need further improvement. Ensure there is a way to track the quality of outputs in each area, and be transparent with users about the supported use cases so they can use the AI in ways they see fit.

Data collection with these models should also be treated with the utmost importance so as not to violate user trust. One challenge seen with many AI models is the use and storage of customer data, which makes users wary of using these tools.”
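Gupta’s preference for “I can’t help you with that” over a hallucinated answer can be sketched as a simple allowlist gate in a retrieval pipeline. Everything here is hypothetical for illustration: the domain names, the record shape, and the gate itself are assumptions, not a real product’s API.

```python
# Hypothetical allowlist of sources vetted for this use case:
VETTED_SOURCES = {"who.int", "nih.gov"}

def answer_from_sources(question: str, retrieved: list[dict]) -> str:
    """Answer only from vetted sources; otherwise decline explicitly
    rather than let the model improvise an unsupported answer."""
    usable = [doc for doc in retrieved if doc["domain"] in VETTED_SOURCES]
    if not usable:
        return "I can't help you with that."
    return f"Based on {usable[0]['domain']}: {usable[0]['text']}"

print(answer_from_sources("Is X safe?", [{"domain": "random-blog.example", "text": "..."}]))
# declines, because no vetted source backs an answer
```

The design choice is that the refusal is a deliberate output of the system, not a failure mode: a blocked source produces a clear “can’t help” instead of a confident guess.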

Brian Prince, Founder & CEO at Top AI Tools

“Ensuring the ethical and responsible use of AI technology and models is paramount, not just for the integrity of the AI industry but for the safety, security, and advancement of society as a whole. While President Biden’s recent Executive Order lays the groundwork for responsible and ethical AI use in businesses, it’s just a starting point.

Companies like ours, which educate the public on AI, can advocate for fair and responsible use. It’s also up to developers and companies who use AI to ensure their models adhere to rigorous ethical standards.

Transparency is key. It’s crucial that developers and companies are open about how their AI models are built, the data they’re trained on, and the decision-making processes they employ. This transparency allows for greater scrutiny and accountability, ensuring that biases are identified and addressed promptly, and that the AI’s decision-making process can be understood and trusted by users.

At the company level, continuous monitoring and auditing of AI systems is essential. AI models can drift over time as they encounter new data, potentially leading to outcomes that were not intended or may even be biased or unethical. Regular audits, ideally by independent third parties, ensure that AI systems continue to operate within their ethical boundaries and that any drift is corrected promptly.

This ongoing oversight helps maintain public trust and ensures that AI technologies remain aligned with societal values and norms. By fostering an open dialogue and building consensus on ethical standards, we can all help ensure AI technologies are used responsibly and for the greater good.”
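The continuous monitoring Prince recommends can start very small. The sketch below flags drift by comparing live model scores against an audited baseline; the scores and the two-standard-deviation threshold are made up for illustration, and real audits would use richer statistics than a mean shift.

```python
import statistics

def drift_alert(baseline_scores, live_scores, threshold=2.0):
    """Flag drift when the live mean shifts more than `threshold`
    baseline standard deviations away from the baseline mean."""
    mu = statistics.mean(baseline_scores)
    sigma = statistics.stdev(baseline_scores)
    shift = abs(statistics.mean(live_scores) - mu) / sigma
    return shift > threshold, shift

# Last quarter's audited scores vs. this week's production scores (made up):
flagged, shift = drift_alert([0.4, 0.5, 0.6, 0.5, 0.4, 0.6], [0.8, 0.9, 0.8])
if flagged:
    print(f"Drift detected: live mean is {shift:.1f} baseline std-devs away")
```

Wired into a scheduled job, a check like this turns “regular audits” from a policy statement into an alert that fires the week the model’s behavior starts to move.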

David Ly, CEO and founder of Iveda 

“AI bias can have significant real-world consequences, impacting the way AI works and how effective it is at the tasks we entrust it with. Consider scenarios in which AI has a hand in sifting through job applications where individuals are required to disclose information about their gender identity or race. Implicit bias for or against a certain group of people could hinder qualified individuals from getting their resume seen by leadership. Or consider AI deployed by law enforcement, in circumstances of criminal identification, for example: training data must be completely impartial to ensure that justice is truly just.

Defeating AI bias should be of the utmost importance for any organization, government body, or entity deploying the technology. It’s crucial to address AI bias to ensure fair and ethical decision-making, promote diversity and inclusion, and prevent the amplification of existing societal inequalities through automated systems.

Mitigating AI bias should involve a multi-faceted approach. Firstly, it requires diverse and representative data collection, ensuring that the training data accounts for a wide range of demographics and perspectives. Next, transparency and interpretability of AI algorithms are vital, enabling users to understand the decision-making process.

Lastly, regular and rigorous testing for bias should be conducted, involving a diverse set of stakeholders. Additionally, ongoing monitoring and feedback loops can and should be put in place to identify and rectify biases that may emerge over time. Overall, diminishing AI bias demands a commitment to fairness, transparency, and inclusivity throughout the entire lifecycle.

While eliminating AI bias entirely may be challenging, ongoing research and advancements in the field of AI ethics are aiming to minimize its impact. In the big picture, collaboration between AI developers, ethicists, and policymakers is crucial in developing frameworks and guidelines that address bias and promote fairness across the board.

Additionally, educating the public about AI bias and its potential consequences can raise awareness and foster responsible use of the technology. The more we understand how any tech works, the better we may manage.”

Hussein Hallak, Co-founder of Momentable

“There is a lot of debate over AI. When does it cross the line? What should we be concerned about?

While I am excited about all that AI can offer us, I realize there are concerns.

Our aim should be not just to prepare for an AI-dominated future, but to shape that future in a way that reflects our highest values and aspirations. The greatest risk of AI lies in what it reflects and amplifies. When we rely solely on AI’s statistical and rational decision-making, especially in domains demanding a human touch, we risk exacerbating problems like social media echo chambers, the spread of misinformation, and the rise of hate speech and terrorism.

It’s crucial to remember that AI systems are likely to be a reflection of us – our ethics, our biases, and our values. If we, as a society, fail to evolve our ethics and surpass our biases, how can we expect AI, which feeds on our data and content, to transcend our limitations? Our immediate focus should be on ensuring that these systems are developed to support humanity. This involves creating robust frameworks and structures that guide AI development in a way that benefits society.

Some issues we should look out for and get ahead of?

  • Ethical Use and Bias: AI mirrors our world, including its biases. When AI learns from data with inherent prejudices, especially in crucial fields such as employment, law enforcement, and finance, it risks perpetuating these biases. This calls for a vigilant approach to data selection and algorithm design.
  • Job Displacement: Beyond the often-discussed fear of an AI takeover, a more immediate concern is job loss due to AI-driven automation. AI will replace manual jobs and complex decision-making roles as well. The challenge extends to addressing socio-economic issues, including income disparity and the urgent need for new educational and retraining strategies.
  • Privacy and Ethical Dilemmas: AI’s advances in data analysis, facial recognition, and military applications pose serious privacy and ethical questions. Can these systems be trusted to make fair decisions? How do we ensure accountability and protect individual rights in this rapidly evolving landscape?
  • Understanding and Transparency: Many AI systems are “black boxes,” with decision-making processes opaque to users. This is particularly troubling in areas such as healthcare or criminal justice, where understanding the ‘why’ behind a decision is as crucial as the decision itself.
  • Regulation and Control: The fast pace of AI development often outstrips regulatory frameworks, creating a gap in governance and oversight. Aligning innovation with safety and ethical standards is a global challenge, complicated by varying regional and national regulatory approaches. So unless there is a coordinated global effort for regulation and control, we might see a disparity in how AI is developed and used across the world.

There are ways we can prevent such concerns from materializing. The best way to do this is with regulatory measures, technological transparency, and societal readiness (education and skill development).

To assure the public, it’s important to communicate these efforts transparently and continuously. Showcasing how regulations protect their interests, how technological transparency allows for accountability, and how societal measures are in place to support them during this transition can build trust and dispel fears.”

Kos Galatsis, CEO & CTO at Forensics Detectors

“One key strategy is to embed ethical principles into the development process right from the start. This involves creating AI models that respect human rights, diversity, and equality, and are free from any form of harmful bias.

Secondly, transparency is paramount, enabling the decision-making processes of AI models to be understood and explained. Being able to ‘look under the hood’ of the AI model allows for more informed decisions about their deployment and use, and forms a strong basis of accountability.

Thirdly, data privacy should be upheld. AI models often require large amounts of data, and steps should be taken to ensure that data handling and processing abide by established privacy standards.

Finally, education on AI ethics helps the wider public and decision-makers understand the implications of AI technology. Providing clear and accessible information makes everyone an active participant in identifying, discussing, and mitigating the ethical challenges of AI.”

Will Yang, Head of Growth & Marketing at Instrumentl

“Ensuring that AI technology and models are used ethically and responsibly is indeed a critical issue. The first step towards this goal is to establish ethical guidelines. This might sound obvious at first, but it’s an effective way to articulate what is and isn’t acceptable when deploying AI models.

In addition, monitoring algorithm bias is crucial. Despite often being unintentional, AI models can be biased due to their training on data that reflects existing prejudices, resulting in unfair outcomes. Regular bias audits, along with the use of tools and standards to de-bias datasets and algorithms, can help curtail this issue.

Another essential aspect is transparency. Users should be made aware when they are interacting with AI, and they should have access to simple, clear explanations of how the AI model functions and makes decisions. In other words, why the AI system produces certain outputs or decisions should be explainable.

Finally, considerations should also be made concerning the impact of AI on the job market. It’s important for AI models to be deployed in a way that complements human capabilities rather than replacing them. This will lead to job transformation instead of job destruction, and a net positive impact on society.”
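The bias audits Yang mentions can begin with something as simple as comparing selection rates across groups. The sketch below applies the four-fifths heuristic (a rule of thumb from US employment-discrimination guidance) to made-up hiring data; the groups, numbers, and 80% cutoff are illustrative assumptions only.

```python
from collections import defaultdict

def selection_rates(records):
    """Approval rate per group, from (group, outcome) pairs where outcome is 0/1."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, outcome in records:
        totals[group] += 1
        approved[group] += outcome
    return {g: approved[g] / totals[g] for g in totals}

def four_fifths_rule(rates):
    """Heuristic: flag any group whose rate is below 80% of the best group's rate."""
    top = max(rates.values())
    return {g: r / top >= 0.8 for g, r in rates.items()}

records = [("A", 1), ("A", 1), ("A", 0), ("A", 1),   # group A: 75% approved
           ("B", 1), ("B", 0), ("B", 0), ("B", 0)]   # group B: 25% approved
rates = selection_rates(records)
print(rates, four_fifths_rule(rates))
```

A check like this won’t prove fairness on its own, but running it routinely over model outputs gives the “regular bias audit” a concrete, repeatable starting point.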
