
The recent Romanian election presents a fascinating case study of modern digital election interference, with many ramifications, particularly through social media manipulation.

What’s even more interesting is that the developments are still unfolding, with no end in sight.

While researching the cybersecurity aspect of the incident, the analysis inevitably touched on the political side as well.

The Manufactured Viral Campaign

The most striking aspect was the sophisticated manipulation of TikTok’s platform. Candidate Calin Georgescu’s sudden rise was engineered through a network of 25,000 coordinated accounts.

What makes this operation particularly noteworthy is its technical sophistication, as mentioned by SRI: each account operated from a unique IP address, making traditional bot detection nearly impossible.
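To illustrate why unique IP addresses defeat traditional bot detection, and what an alternative might look like, here is a minimal sketch of behavioral coordination detection: instead of clustering accounts by shared IP, it flags pairs of accounts whose posting schedules overlap suspiciously. This is an illustrative toy, not the SRI's method; the function names, thresholds, and data are invented for the example.

```python
from itertools import combinations

def time_buckets(timestamps, bucket_seconds=60):
    """Reduce an account's post timestamps to a set of coarse time buckets."""
    return {int(t // bucket_seconds) for t in timestamps}

def jaccard(a, b):
    """Jaccard similarity of two sets (0.0 = disjoint, 1.0 = identical)."""
    return len(a & b) / len(a | b) if a | b else 0.0

def flag_coordinated(accounts, threshold=0.8, bucket_seconds=60):
    """Return pairs of account IDs whose posting schedules overlap suspiciously.

    `accounts` maps an account ID to a list of Unix timestamps. Accounts that
    post in near-identical time windows get flagged even if every one of them
    operates from its own unique IP address.
    """
    buckets = {aid: time_buckets(ts, bucket_seconds) for aid, ts in accounts.items()}
    return [
        (a, b)
        for a, b in combinations(sorted(buckets), 2)
        if jaccard(buckets[a], buckets[b]) >= threshold
    ]

# Two bots posting in lockstep, one organic account with its own rhythm.
accounts = {
    "bot_a":   [0, 300, 600, 900, 1200],
    "bot_b":   [2, 305, 603, 902, 1201],
    "organic": [40, 2000, 7000, 15000, 31000],
}
print(flag_coordinated(accounts))  # [('bot_a', 'bot_b')]
```

The point of the sketch: IP reputation is a single, easily randomized signal, while timing, content similarity, and follow-graph structure are much harder for an operator to decorrelate across 25,000 accounts.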

Following the Money Trail

Despite Georgescu’s claims of running a zero-budget campaign, financial investigations revealed a different story. Through the FameUP platform, approximately $381,000 was spent between October and November 2024. This funding went toward a coordinated influencer campaign, with individual influencers receiving up to €1,000 per pre-made video share.

Technical Infrastructure Attacks

Beyond social media manipulation, the election faced direct infrastructure challenges. The Romanian Intelligence Service (SRI) documented over 85,000 cyber attacks, along with breaches of several institutions’ websites.

Broader Implications

This case demonstrates how modern election interference combines social media manipulation with traditional cyber attacks. The operation’s sophistication – from unique IP addresses to multi-layered influencer campaigns – suggests state-level resources and planning.

The investigation is still ongoing, and hopefully we will learn more.

The EU’s Digital Services Act (DSA) faces its first major test with this incident.

I recently wrote about a survey by the National Cybersecurity Alliance (NCA) and CybSafe that revealed these findings.

Another good read on the topic is: AI hype as a cyber security risk: the moral responsibility of implementing generative AI in business.

Going back to the survey, if we consider the social desirability bias, then this number may be well above 50%. The social desirability bias occurs when respondents alter their answers to be viewed more favorably by others, often due to fear of judgment or legal repercussions.

This occurs, for example, in research conducted in countries where drugs are banned: participants may underreport or avoid disclosing drug use to align with societal norms or legal constraints, which in turn skews the results.
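A toy calculation makes the skew concrete. Assume "no" respondents answer truthfully and only a fraction of true "yes" respondents admit it; the numbers below are purely illustrative, not taken from the NCA/CybSafe survey.

```python
def observed_rate(true_rate, honesty):
    """Observed 'yes' rate when only a fraction of true 'yes' respondents admit it.

    Assumes 'no' respondents always answer truthfully and `honesty` is the
    probability that a true 'yes' respondent reports honestly.
    """
    return true_rate * honesty

def corrected_rate(observed, honesty):
    """Invert the same model: estimate the true rate from the observed one."""
    return observed / honesty

# Illustrative numbers only: if 65% truly engage in the behavior but only
# 60% of them admit it, a survey would observe a rate of just 39%.
obs = observed_rate(0.65, 0.60)
print(round(obs, 2))                         # 0.39
print(round(corrected_rate(obs, 0.60), 2))   # 0.65
```

Under this simple model, any survey estimate of a stigmatized behavior is a lower bound on the true rate, which is why the headline number may well sit above 50%.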

A similar effect could be seen when answering questions about whether they are inputting sensitive data into GPT models at work.

Another notable result, aside from the data privacy issue, concerns usage. Younger generations show a higher adoption rate than older ones. Unfortunately, aside from usage, trust in these systems also seems to be quite high.

The disclaimer “ChatGPT can make mistakes. Check important info.” appears to be similar to the warning messages on cigarette packs: few people read or pay attention to them. We are still in the early adoption phase, with many experts in their fields using LLMs to augment their work or become more proficient overall. But what awaits us next, several generations ahead, when LLMs could become a single point of truth?

Photo by DC Studio on Freepik.
