Regulation needed to keep Generative AI tools in check

Policymakers need to keep the vulnerabilities attached to machine learning in mind before deploying AI-enabled systems
 
Jaydeep Saha
Global Reporter, HCLTech
11 min read

In India, trained parakeets are a popular source of entertainment: people talk to them and the birds reply, but only with the answers they have been taught. At big fairs like Pushkar, foreign visitors are amused by this, paying to have the birds predict their future and laughing along despite knowing little or no Hindi, while the locals understand perfectly well when the birds use abusive words or are simply cheating on behalf of their masters.

The training of these talking birds may be based on “personal data and information” from their owners, but it is, after all, not “protected by IP rights”; these are two of the basic elements used to train AI, especially Generative AI. Fed with bad data, Generative AI can spew venom and gradually take the shape of Frankenstein’s monster or Daenerys Targaryen’s dragons. Scary, isn’t it?

AI systems are painfully susceptible to parroting the biases in their training data. In 2016, in the early days of mainstream AI, Microsoft’s chatbot Tay took less than a day to tweet “Hitler was right I hate the Jews” and that feminists should “all die and burn in hell”.

Generative AI is no stranger to the dangers of bad data and social media. While AI has been subject to breathless hype, researchers studying misinformation, hate speech and snowballing geopolitical crises say the computational power behind these systems is doubling every six to 10 months, which is what makes this evolution so electrifying and dangerous.
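To put that doubling rate in perspective, here is a quick back-of-the-envelope calculation. It simply assumes the reported six-to-10-month doubling period holds steady; the three-year horizon is an arbitrary illustration.

```python
# Back-of-the-envelope growth of training compute, assuming it doubles
# every 6 to 10 months (illustrative assumption, not a forecast).

def growth_factor(months_elapsed: float, doubling_period_months: float) -> float:
    """How many times larger compute is after `months_elapsed`."""
    return 2 ** (months_elapsed / doubling_period_months)

for doubling in (6, 10):
    factor = growth_factor(36, doubling)  # three years out
    print(f"Doubling every {doubling} months -> ~{factor:.0f}x compute in 3 years")

# Doubling every 6 months  -> ~64x compute in 3 years
# Doubling every 10 months -> ~12x compute in 3 years
```

Even at the slower end of that range, the compute behind these models grows by roughly an order of magnitude in three years.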

Old-school, hand-written programming aside, self-taught neural networks are remarkably effective at spotting patterns in data, and that capability depends on the amount of data and computing power the networks are fed.

“We also need enough time for our institutions to figure out what to do. Regulation will be critical and will take time to figure out. Although current-generation AI tools aren’t very scary, I think we are potentially not that far away from potentially scary ones,” tweeted Sam Altman, CEO of OpenAI. In interviews, he has often emphasized that how quickly AI programs improve depends on how many people use them: the more, the merrier.

You may already be familiar with the new kind of content Generative AI produces in the form of text, audio and images. One widely used deep learning architecture behind such content is the generative adversarial network (GAN), in which a generator produces synthetic data and a discriminator tries to tell it apart from real data. But that raises a question about the neural networks inside a GAN: what if the discriminator that differentiates between fake and real data starts malfunctioning, sets ethics aside and leans toward biased data?
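To make the generator-versus-discriminator dynamic concrete, here is a minimal GAN training loop sketched in PyTorch. The layer sizes, learning rates and the toy Gaussian “real” data are assumptions chosen purely for illustration, not anyone’s production model.

```python
# Minimal GAN sketch: a generator learns to mimic "real" data while a
# discriminator learns to tell real from fake. Sizes and data are toy choices.
import torch
import torch.nn as nn

latent_dim, data_dim = 16, 2  # noise size and (toy) data size

# Generator: maps random noise to synthetic samples.
generator = nn.Sequential(
    nn.Linear(latent_dim, 64), nn.ReLU(),
    nn.Linear(64, data_dim),
)

# Discriminator: scores how likely a sample is to be real (0..1).
discriminator = nn.Sequential(
    nn.Linear(data_dim, 64), nn.ReLU(),
    nn.Linear(64, 1), nn.Sigmoid(),
)

loss_fn = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)

for step in range(1000):
    # "Real" data: a toy Gaussian blob; in practice, the training corpus.
    real = torch.randn(128, data_dim) + 3.0
    fake = generator(torch.randn(128, latent_dim))

    # Train the discriminator to separate real from fake.
    d_loss = (loss_fn(discriminator(real), torch.ones(128, 1))
              + loss_fn(discriminator(fake.detach()), torch.zeros(128, 1)))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Train the generator to fool the discriminator.
    g_loss = loss_fn(discriminator(fake), torch.ones(128, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()
```

Everything the discriminator learns about “real” comes from the data it is shown; if that data is skewed, the generator learns to reproduce the skew, which is exactly the bias risk raised above.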

“This is the problem that is manifesting itself in the current evolution of AI. However, there are efforts in process to extend the capabilities of AI systems, such as the Fandango project or Meta’s Sphere, to help identify fake news or fake data from the real data,” says Renju Varghese, Fellow & Chief Architect, Cybersecurity & GRC Services, HCLTech.

‘Unethical’, says Bing

Recently, Business Insider journalist Huileng Tan tested the skills of Microsoft’s new AI-integrated Bing, which refused to write a cover letter for a job application.

“I’m sorry, but I cannot write a cover letter for you. That would be unethical and unfair to other applicants,” the ethical chatbot told Tan. However, it helped her write a cover letter the next day with some references and tips.

There are still various practical and ethical issues to be resolved, and this is where organizations and governments need to draw a line.

“Ethics plays a significant role as organizations struggle to eliminate bias and unfairness from their automated decision-making systems. Biased data may result in prejudice in automated outcomes that might lead to discrimination and unfair treatment,” says Phil Hermsen, Solutions Director, Data Science & AI at HCLTech.

“To develop trustworthy AI systems, policies, governance, traceability, algorithms, security protocols are needed, along with ethics and human rights.”

 


Training Generative AI

Under current legislation, the use of large bodies of IP-protected work to train Generative AI is restricted. The UK’s proposal to create a new exemption from copyright infringement is one example of how governments seeking to “unlock” the potential of Generative AI might overcome this.

Governments may legislate to permit text and data mining of IP-protected data to train the technology. If, however, IP-protected data is used in AI training without sufficient permission or attribution, lawmakers may instead opt to put a stop to it.

Autonomous, AI-driven content creation without any traditional human input raises the question of who owns the copyright in such content. In 2021, an Australian court ruled in favor of artificial intelligence, holding that an AI system could be named as the inventor on a patent application. That decision was later overturned on appeal by the Australian Federal Court.

Remember when a group of artists sued the AI generators Stability AI Ltd., Midjourney Inc. and DeviantArt Inc. for allegedly downloading and using billions of copyrighted images, and when Getty Images brought copyright infringement proceedings against Stability AI in the High Court of Justice in London, alleging it used Getty’s images without a license? These cases add to the concern over how to regulate instances where AI-generated work features IP-protected content and could raise privacy and data protection issues.

Vulnerabilities

AI and machine learning have already transformed many aspects of daily life. Google’s flood forecasting model is a prominent example, helping protect the lives of millions of people. The technology also offers the opportunity to reshape many aspects of national security, from intelligence analysis to weapons systems and more.

However, the technology can also be used by tech-savvy cybercriminals to hack AI-enabled networks. “The vulnerabilities attached to machine learning must be understood before making any informed decision on risks and investments, because flaws within an ML system make the situation even more complicated and are being exploited by cybercriminals and owners of cloud platforms.

“To access personal and/or restricted data, which is readily available and relatively cheap, there’s an ongoing AI-led ‘cyberwar’ between these organizations with nefarious aims and the owners of the cloud platforms.

“Going beyond how much computing power you can throw at the problem, the main weapon used now is the sophistication of algorithms and the ability to quickly learn, adapt and counter what the opposition is doing,” says Hermsen.
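One well-documented class of ML flaw that attackers exploit is the adversarial example: a tiny, deliberately crafted perturbation that changes a model’s output. The sketch below applies the fast gradient sign method (FGSM) to a hypothetical, untrained PyTorch classifier; the model, epsilon value and input are placeholders chosen for illustration, not a description of any real attack on a specific system.

```python
# FGSM sketch: nudge each input pixel a small step in the direction that
# increases the loss; against a trained model, this is often enough to
# flip the prediction. The classifier and data here are toy placeholders.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))  # stand-in classifier
model.eval()
loss_fn = nn.CrossEntropyLoss()

image = torch.rand(1, 1, 28, 28, requires_grad=True)  # stand-in input image
label = torch.tensor([3])                             # its true class

# Gradient of the loss with respect to the input pixels.
loss = loss_fn(model(image), label)
loss.backward()

epsilon = 0.25  # perturbation budget (illustrative value)
adversarial = (image + epsilon * image.grad.sign()).clamp(0, 1)

print("original prediction:   ", model(image).argmax(dim=1).item())
print("adversarial prediction:", model(adversarial).argmax(dim=1).item())
```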

Policymakers, who decide when an AI-enabled system can be safely deployed and when the risks are too great, must understand that an ML system introduces new risks. New defenses may offer only a short-term advantage, robustness to attack is most likely to come from system-level defenses, and the benefits of offensive use often do not outweigh the costs.

Here’s a simplified example

Remember Tom and Jerry? ML attackers and defenders are engaged in a similarly evolving and dangerous cat-and-mouse game, in which an innovative attack is the key to getting in and doing damage, pushing older techniques of hacking and defending aside.

The three little pigs and the big bad wolf from nursery books offer another simple analogy. Defenses built from straw and wood may look strong and fortified, yet they can easily be broken through to gain access to the houses. Building with brick and mortar, which here means a thorough knowledge of the vulnerabilities combined with constant, AI-driven monitoring and an evolving defense mechanism, may look costly, but it has been helping organizations build resilient defenses, mitigate risks and thwart more sophisticated attacks.
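As a loose illustration of what AI-assisted “constant monitoring” can look like, the sketch below fits an anomaly detector to made-up network-session features using scikit-learn’s IsolationForest. The feature set, data and contamination rate are invented for this example and do not describe HCLTech’s Dynamic Cybersecurity.

```python
# Hypothetical anomaly-monitoring sketch using scikit-learn's IsolationForest.
# Features and data are invented for illustration; real telemetry is far richer.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Baseline traffic: [bytes_sent, bytes_received, failed_logins] per session.
normal = rng.normal(loc=[500, 800, 0.2], scale=[100, 150, 0.5], size=(1000, 3))

# Learn what "normal" looks like.
detector = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# New sessions to score: one ordinary, one exfiltration-like, one brute force.
new_sessions = np.array([
    [520, 790, 0],      # looks like baseline traffic
    [50_000, 900, 0],   # unusually large upload
    [400, 700, 40],     # burst of failed logins
])

# predict() returns 1 for inliers and -1 for suspected anomalies.
for session, flag in zip(new_sessions, detector.predict(new_sessions)):
    status = "ALERT" if flag == -1 else "ok"
    print(status, session)
```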

“The fast-evolving capabilities of AI systems are proving to be an important tool in enhancing the capability of detecting complex and hard-to-detect breach and disruptive activities. The biggest advantage in the pursuit of a secure environment is the evolution of AI systems.

“This evolution is a double-edged sword wherein the bad actors also take advantage of the AI systems to design and create complex and commonly undetectable security breaches. However, these systems also provide a very powerful capability in the hands of the cybersecurity agencies and professionals to create advanced systems that help in identifying and responding to the complex security breaches,” adds Varghese.

“These capabilities are being developed to identify and prevent disruptions to critical system ecosystems that provide crucial services and manufacturing capabilities,” he adds.

How HCLTech contributes to the AI field

HCLTech’s resilient, zero-trust cybersecurity approach secures its customers while they undergo digital transformation initiatives. Its Robotic Process Automation (RPA) offerings coupled with Artificial Intelligence (AI) are the first step in this direction. With deep knowledge and experience in AI and ML, HCLTech considers cybersecurity the next major step and helps organizations counter risks effectively through its Dynamic Cybersecurity offering.
