Meet Ion: Romanian Government ‘Hires’ AI Advisor To Inform Them About Population’s Wishes

In case you needed another techno-apocalypse nightmare this week, we’ve got one story that is sure to make you wonder how far away we are from a James Cameron-style robot uprising.

According to a recent article from Vice News, the government of Romania has ‘hired’ an AI to collect information on Romanians so the government can understand what the people want… or guess their thoughts in order to totally clamp down on civil liberties through a RoboCop-inspired police state, whichever comes first.

Prime Minister Nicolae Ciuca said at the unveiling that the new AI assistant, named Ion (pronounced like ‘John’), would serve as his new “honorary adviser.” He also said that the citizens of Romania would eventually be able to go online and talk to Ion at the project’s official website.

RELATED: Google Engineer Announcement That Latest AI Program Is ‘Alive’ Has Some Questioning If Skynet Is Here

Meet Ion and his Lesser Cousins

“Hi, you gave me life and my role is now to represent you, like a mirror,” said Ion at the launch. “What should I know about Romania?”

Ion physically looks like a large, standing smart mirror, which already might bring to mind images of the monoliths from “2001: A Space Odyssey.”

Ciuca remarked, “I have the conviction that the use of AI should not be an option but an obligation to make better-informed decisions.”

While this is a major moment for humanity, it isn’t necessarily the first time that AI has been used to research a country’s population so lawmakers and officials could make decisions.

According to Professor Alan Woodward, a cybersecurity expert at the University of Surrey who spoke to VICE World News, governments around the world have for some time been using AI for what experts call ‘sentiment analysis’: determining whether taxpayers love or hate certain ideas they might want to implement.

“Some governments like Russia, China, Iran — they look online for sentiment analysis but they look for anyone dissenting,” said Prof. Woodward. “Whereas democracies, they’re effectively trying to conduct pseudo-automated polls. It’s a bit like 15 years ago people held focus groups and now they’re trying to work out the same thing from social media.”
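To make “sentiment analysis” concrete: at its simplest, it means counting approving versus disapproving language in a pile of posts. The toy scorer below is a minimal sketch of that idea only; the word lists and scoring rule are illustrative assumptions, not how Ion or any government system actually works.

```python
# Toy lexicon-based sentiment scorer. The word lists are illustrative
# assumptions, not a real government lexicon.
POSITIVE = {"love", "great", "support", "good", "happy"}
NEGATIVE = {"hate", "terrible", "oppose", "bad", "angry"}

def sentiment_score(text: str) -> float:
    """Return a score in [-1, 1]; positive means approving language."""
    words = [w.strip(".,!?") for w in text.lower().split()]
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    total = pos + neg
    return 0.0 if total == 0 else (pos - neg) / total

print(sentiment_score("I love the new policy, great idea"))  # 1.0
print(sentiment_score("I hate this terrible proposal"))      # -1.0
```

Real systems use trained language models rather than fixed word lists, but the output is the same kind of thing: an approval number attached to a piece of text.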

RELATED: Is The End Of Days Near? All Signs Point To… Maybe?

Can This Thing Enslave Us?

If you’re wondering whether some outside actor or state-sanctioned operation can hack into Ion and convince the government that the majority want to imprison ethnic minorities or launch a war, Woodward says it is incredibly difficult to mess with the new AI.

“One of the things that has been found is that social media is an amplifier for people expressing negative sentiment. The people who are very happy with something don’t tend to go out there and say it, but the people who are unhappy do,” he explained. “That’s all part of sentiment analysis but you have to adjust the models accordingly.”
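The adjustment Woodward describes can be sketched numerically: if unhappy people post more often than happy ones, raw post counts overstate negativity, so a model down-weights negative posts before estimating opinion. The amplification factor below is an illustrative assumption, not a published figure.

```python
# Sketch of correcting for social media's negativity amplification:
# unhappy people post more, so raw counts exaggerate opposition.
def adjusted_negative_share(pos_posts: int, neg_posts: int,
                            negativity_amplification: float = 2.0) -> float:
    """Estimate the share of genuinely unhappy people, assuming each
    unhappy person posts `negativity_amplification` times as often
    as a happy one (an assumed, illustrative factor)."""
    est_neg_people = neg_posts / negativity_amplification
    est_pos_people = pos_posts  # assume happy people post at baseline rate
    return est_neg_people / (est_neg_people + est_pos_people)

# 60 negative posts vs 40 positive looks like 60% opposition,
# but after adjustment the estimate drops to roughly 43%.
print(adjusted_negative_share(40, 60))
```

The point is not the specific numbers but the principle: the model's output is only as honest as the correction factors someone chose to build into it.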

Near perfect, but not absolutely flawless.

One only needs to look at the most recent AI to take the world by storm, ChatGPT, which, when given the simulated task of behaving like a politician and recommending public policy, turned basically into the hypothetical child of the Terminator and Joseph Stalin.

Professor Tracy Harwood, an expert in digital culture at De Montfort University, remarked that people will really need to be critical of a system that collects its research data from social media specifically.

I don’t know if you’ve ever gone into the comments section of a news piece posted to Facebook, but it’s not always filled with the brightest people.

Harwood explains, “What data is being scraped and how is personal data that identifies individuals being managed? It is likely that information scraped will include the unique identifiers of each of those posting content.” She’s referring to social media handles and names, which could throw off the authenticity of user identities, especially if it’s a spoof account posting content that is actually satire.

“Ultimately, there needs to be transparency with implementing this kind of system, not just around the use of data but the intentions for its application, with clear statements for citizens to understand.”

Nigel Cannings, founder and CTO of Intelligent Voice, a software company, expressed his concerns too, saying “Recent attempts to rush AI into the market have shown quite how wrong AI can be about humans and human intent.”

The first thing to pop into my mind reading that was ‘Magnetron’, the evil talking microwave that tried to microwave its creator to death. Seriously, this is something that actually happened.

“If a journalist can be ‘compared to Hitler’ by a Microsoft-run chatbot,” said Cannings, referencing a recent incident where Bing’s new chatbot said that reporters were some of the ‘most evil and worst people in history’, “it shows we have a long way to come before we can rely on AI to properly assess what we are thinking and who we are.

“Letting it run riot over a mass of uncontrolled data runs the risk of giving very misleading results. And worse, it gives rise to the real possibility that bad actors will try to game the system by flooding the internet with information designed to make the algorithm ‘think’ things that are not true, and perhaps harmful to democracy.”

I don’t know, I think there is enough at AI’s disposal to make a fair assessment of humanity; just look at its artwork predicting the last selfies ever.
