Not surprisingly, given recent advancements in the field, AI was the overarching theme at Dublin Tech Summit this year.

Adding urgency to the speaking sessions was the alarming statement issued by 350 AI scientists and tech CEOs proclaiming AI an “extinction-level threat.” The letter urged policymakers to approach the technology as they would the climate crisis and nuclear weapons, sparking debate among event speakers. Dr. Patricia Scanlon, Ireland’s first AI ambassador and the founder of SoapBox Labs, echoed this precautionary sentiment, warning that AI is “not a fad and not a productivity tool. The innovations of AI will persist — the global economy and every industry will be impacted.”

ChatGPT has brought AI into the public consciousness.

Returning to the origin of our current AI obsession, Scanlon pointed out that a year ago the world wasn’t even talking about generative AI; now we have a tool: ChatGPT.

Scanlon explained that ChatGPT is known as “narrow or weak AI”; the next step in this evolution is artificial general intelligence (AGI), or “strong AI.” Before ChatGPT, AGI was predicted to be 30 or 40 years away. Now, Scanlon says, society must start preparing, as it could arrive sooner than initially thought.

In his talk, Large Language Models: What Comes Next?, Dr. Ben Goertzel, CEO and Founder of SingularityNET, stated that it’s highly probable AGI will appear within three to seven years. He also highlighted many of ChatGPT’s shortcomings: “ChatGPT can’t write a science paper. It can write something that looks like a science paper, but it would be a retreading of something in its training data. ChatGPT can write poetry in any genre, style, or language, but it is a mediocre poet.”

The disruption will be global.

Goertzel concluded that ChatGPT lacks reasoning and creativity and can’t filter ideas beyond its training data. Even so, he said that AI, even without AGI, can still do 50 to 70 percent of human jobs. Similarly, Scanlon homed in on the negatives of AI, stating, “There will be job losses. It happens in every revolution.”

Mark Jordan, CEO of Skillnet Ireland, and Tracey Keogh, Co-Founder of Grow Remote, discussed which jobs could be most threatened by AI. Jordan stated that entry-level roles are in danger, specifying: “Customer-facing functions will be replaced by advanced chatbots. The disruption will be global, and we will see more and more of that as these chatbots become more advanced.”

Jordan said workers should think about the core competencies and skill sets that will enable them to compete for new jobs.

“Jobs might change, but they will still be there.”

A panel discussion entitled Algorithms Against Humanity also addressed the prospect of job losses. Sean O hEigeartaigh, Director of the AI: Futures and Responsibility Programme at the University of Cambridge, expressed grave concerns about the social impact that job losses will have on humanity.

The roles he specified included digital analysts, graphic artists, and customer service representatives. On the latter, he noted that people in developing countries are pulling their families out of poverty traps thanks to these jobs, but the positions will likely disappear. “People that have been working the same job for 20 years are not going to be a tech startup the next day,” he concluded.

Michael Richards, Director of Policy at the U.S. Chamber of Commerce, took a more optimistic view of the coming job losses, citing the example of horse-and-carriage drivers becoming taxi drivers when cars were invented. “Jobs might change, but they will still be there,” he said. That said, Richards did state that education systems need to start preparing their students for the jobs of tomorrow and that businesses need to start preparing their employees for the coming changes.

Regulating AI to safeguard against its dangers

Regulation was another core theme at this year’s event. When asked whether he agreed with the publicized letter stating that AI posed an extinction-level threat, O hEigeartaigh said it was hard to nail down as “this isn’t like climate change where we can look at CO2 levels. These are arguments rather than trends that we can see with confidence.” He stressed, however, that AI must not take on an arms race quality whereby safety and regulation will get thrown out the window.

Richards took a somewhat opposing view, saying it would be a mistake to focus too much on the harm AI could do. Rather than pausing the development of AI, he argued that innovation should be allowed to continue alongside guardrails: “Our [the U.S.’s] adversaries would continue to march ahead, and it’s important at this time that others do not move ahead of this,” he posited.

Firmly on the opposite side of the argument was Angelika Sharygina, Techfugees Ambassador and political researcher. Of Afghan and Ukrainian descent, with family members caught up in both conflicts, she argued that AI misinformation can endanger safety and cost lives, citing incorrect information about evacuation points in times of war as an example.

Is there public trust in the tech companies behind AI?

Interestingly, Sharygina conducted a straw poll of the audience on whether they trusted the tech companies developing AI. Not a single hand went up. She continued, “When the accumulation of power is only in the hands of a few tech companies, we see terrible consequences,” drawing a round of applause from the crowd.

In a fireside chat titled How AI and Other Technologies Are Helping to Advance DE&I in the Workplace, Sandra Healy, Founder and CEO of inclusio, struck a similar note: “OpenAI is not open because you can’t go in and test its data, and bias is built into it.”

Fellow panelist Claire Thomas, Chief Diversity and Inclusion Officer at Hitachi Vantara, explained why this is a problem: because ChatGPT is trained on data from the internet, biased language and human assumptions show up in its output. She cited the example of ChatGPT referring to surgeons as “he” and receptionists as “she.”

“AI is using the judgments that have been put into the models. This could potentially make the bias worse instead of making it better,” Thomas concluded.
