I recently wrote about a survey by the National Cybersecurity Alliance (NCA) and CybSafe that revealed these findings.

Another good read on the topic: “AI hype as a cyber security risk: the moral responsibility of implementing generative AI in business.”

Going back to the survey: if we account for social desirability bias, the real number may be well above 50%. Social desirability bias occurs when respondents alter their answers to be viewed more favorably by others, often out of fear of judgment or legal repercussions.

A classic example is drug-use research in countries where drugs are banned: participants underreport or avoid disclosing drug use to align with societal norms or legal constraints, which in turn skews the results.
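To make that skew concrete, here is a minimal sketch of the arithmetic, under a deliberately simple model: everyone without the sensitive behavior answers truthfully, while only a fraction of those with the behavior admit it. The specific rates below are hypothetical illustrations, not figures from the survey.

```python
def estimated_true_rate(reported_rate: float, honesty_rate: float) -> float:
    """Correct a self-reported prevalence for underreporting.

    Model assumption: non-doers always answer "no"; doers answer "yes"
    only with probability `honesty_rate`. Then
        reported = true * honesty_rate,
    so the corrected estimate is reported / honesty_rate.
    """
    if not 0 < honesty_rate <= 1:
        raise ValueError("honesty_rate must be in (0, 1]")
    return min(reported_rate / honesty_rate, 1.0)


# Hypothetical numbers, chosen only for illustration: 38% admit to
# pasting sensitive data into a chatbot, and we assume only 70% of
# those who actually do so are willing to say it.
print(f"{estimated_true_rate(0.38, 0.70):.0%}")  # -> 54%
```

Even a modest amount of underreporting pushes a sub-50% headline figure past the halfway mark, which is all the original point requires.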

A similar effect could be at play when employees are asked whether they input sensitive data into GPT models at work.

Another notable result, beyond the data privacy issue, concerns usage: younger generations adopt these tools at a higher rate than older ones. Unfortunately, it is not only usage that is high; trust in these systems also seems to be.

The disclaimer “ChatGPT can make mistakes. Check important info.” is reminiscent of the warning labels on cigarette packs: few people read or pay attention to them. We are still in the early adoption phase, with many experts in their fields using LLMs to augment their work or become more proficient overall. But what awaits us several generations ahead, when LLMs could become a single point of truth?

Photo by DC Studio on Freepik.