The LLM Misinformation Problem I Was Not Expecting

By: Kathleen Moriarty, CIS CTO

The prolific use of Artificial Intelligence (AI) Large Language Models (LLMs) presents new challenges we must address and new questions we must answer. For instance, what do we do when AI is wrong? I teach two Master's-level courses at Georgetown, and as such, I've received guidance on how the program allows the use of tools like ChatGPT and Bard. I expected to see students use AI and LLMs without properly validating generated content or providing attribution to the content sources. In one instance, students turned in oddly similar work that may have originated in part or in full from AI LLMs. In that particular case, however, they sought supporting materials much as they would with an internet search engine. Then the fall 2023 semester began, and a new pattern emerged.

A Trend of Non-Vetted Content

Not long into the fall 2023 semester, students began to cite blogs and vendor materials that sounded plausible but were partly or entirely incorrect. This problem traces back to LLMs producing "hallucinations." In some cases, vendor content creators incorporate these untrue materials directly into their published content without vetting or correcting them.

It was not an infrequent problem during the fall 2023 semester. In the past four years of teaching three semesters a year, I encountered just one activity where several students found incorrect information as the result of a highly ranked search result. During the fall 2023 semester, however, I noticed the problem on at least three separate assignments. In one case, the information was put together so well in the source materials that it caught me off guard. I had to validate my own thoughts with others to confirm!

Let's take a look at a couple of examples to better understand what's going on.

Misidentifying AI Libraries/Software as Operating Systems

In one example, I saw students reference descriptions of what might be AI-related libraries or software as operating systems. In a recent module on operating systems, for instance, students enthusiastically described "artificial intelligence operating systems (AI OS)" and even "Blockchain OS." There's just one issue: there's no such thing as an AI OS or a Blockchain OS.

This content made it online because no one corrected it before it was published in multiple places as blog content. Inaccurate descriptions, such as those presenting AI libraries or software development kits as operating systems, add confusion when students and even professionals use internet resources to learn about new developments and technologies. In this case, students needed to learn about the evolution of operating system architecture. Vetted materials were available, but some students veered into their own research and wound up using sources with content that was not accurate. To its credit, the content was very descriptive and convincing, although incorrect.

The issue here is more than a matter of semantics or nuance. This type of content makes it more difficult for students to grasp the purpose of an operating system versus libraries, software development kits, and applications, concepts that are fundamental to system architecture and its security.
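To make the layering concrete, here is a minimal sketch in Python. The run_model function is a placeholder standing in for any AI library call; the point is simply that such a library runs as ordinary application code on top of a real operating system rather than being one.

    import os
    import platform

    def run_model(inputs):
        # Placeholder standing in for a call into an AI library (for example,
        # a neural network inference call). It executes entirely in user space.
        return [x * 0.5 for x in inputs]

    # The process hosting the "AI" code is scheduled, given memory, and granted
    # file and device access by the kernel of an actual operating system:
    print(platform.system())   # e.g., 'Linux', 'Windows', or 'Darwin'
    print(os.getpid())         # the kernel assigned this process its ID

    # File I/O still goes through operating system calls; the library has no
    # kernel, scheduler, or driver model of its own, which is what would make
    # it an operating system.
    with open('inputs.txt', 'w') as f:
        f.write('1 2 3\n')

    print(run_model([1.0, 2.0, 3.0]))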

False Authentication Protocols

Another example of non-vetted AI results involves online content that inaccurately describes authentication, creating misinformation that continues to confuse students. For instance, some AI LLM results describe the Lightweight Directory Access Protocol (LDAP) as an authentication type. While it does support password authentication and can serve up public key certificates to aid in PKI authentication, LDAP is a directory service. It is not an authentication protocol.
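To make the distinction concrete, here is a minimal sketch using the Python ldap3 library; the server address, distinguished names, and credentials below are placeholders. The first connection performs the directory-service function LDAP was designed for, looking up an entry and its attributes. The second shows the simple-bind pattern that applications often label "LDAP authentication": the application verifies a password by attempting a bind, but the protocol itself remains a directory access protocol.

    from ldap3 import Server, Connection, ALL

    server = Server('ldap.example.org', get_info=ALL)

    # Directory lookup: LDAP's primary role is querying directory entries.
    reader = Connection(server,
                        user='cn=reader,dc=example,dc=org',
                        password='reader-password',
                        auto_bind=True)
    reader.search('dc=example,dc=org',
                  '(uid=jdoe)',
                  attributes=['cn', 'mail', 'userCertificate'])
    print(reader.entries)

    # "LDAP authentication" in many applications is really a simple bind:
    # a successful bind with the user's DN and password is treated as proof
    # of the password. LDAP is still acting as a directory service here.
    user_conn = Connection(server,
                           user='uid=jdoe,ou=people,dc=example,dc=org',
                           password='submitted-password')
    if user_conn.bind():
        print('Password verified via LDAP simple bind')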

Vetting in Education and Infosec

The problem I've discussed above is likely happening in more fields than security architecture and design. When it comes to validating content in any field, two themes come up consistently:

  1. Author credibility
    1. Is the author recognized for the work, the topic cited, or closely related work?
    2. Is there evidence that other experts have validated the content?
    3. When was the material published? Have the authors applied any updates or corrections?
  2. Source credibility
    1. Do sources support the conclusions?
    2. Are the sources ones you would consider to be trustworthy or known to be vetted?
    3. If standards are referenced, do the materials provided by the standards committee support the language and claims? Are the technical terms consistent with the standards committee's usage?

As a way forward, in consultation with the CIS Marketing and Communications team, we will be adding a marker to blogs to communicate the level of review prior to publication. For my own blogs, I've reached out to known experts to review them. (In one case, I've decided to hold a post back from publication due to an oversight that requires correction.) This is more of an allow-list approach toward understanding what content has been vetted rather than expecting AI results to be marked.
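As a rough illustration of the allow-list idea (the marker names and values here are hypothetical, not a CIS specification), the check itself can be very simple:

    # Hypothetical sketch of an allow-list over content review markers.
    VETTED_REVIEW_LEVELS = {"expert-reviewed", "peer-reviewed", "editorially-reviewed"}

    def is_vetted(post_metadata: dict) -> bool:
        # Treat content as vetted only if it explicitly carries a marker from
        # the allow-list; anything unmarked, including AI-assisted drafts, is
        # treated as unvetted by default.
        return post_metadata.get("review_level") in VETTED_REVIEW_LEVELS

    print(is_vetted({"title": "OS evolution", "review_level": "expert-reviewed"}))  # True
    print(is_vetted({"title": "AI OS explained"}))                                  # False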

As for fellow teachers, you can and should provide guidance on sources that are known to be reliable within a field of study. This is something I did with my students after detecting the problem. Students should check that sources have vetted their content and that content creators have the credentials to stand behind what they publish.

Creating a New Best Practice

The problems around vetting AI results won't be going away anytime soon. It's important that educators give students the proper guidance to direct their research in a field of study. Education should embrace markers similar to those proposed by CIS. These tools can go through a consensus process to gain acceptance as a new best practice, which could ultimately prove useful for updating and sharing content expediently.

 

About the Author

Kathleen Moriarty
Chief Technology Officer

Kathleen Moriarty, Chief Technology Officer at the Center for Internet Security, has over two decades of experience. Formerly the Security Innovations Principal in the Dell Technologies Office of the CTO, she worked on ecosystems, standards, and strategy. During her tenure in the Dell EMC Office of the CTO, Moriarty had the honor of being appointed and serving two terms as the Internet Engineering Task Force (IETF) Security Area Director and as a member of the Internet Engineering Steering Group from March 2014 to 2018. She is a 2020 Tropaia Award Winner for Outstanding Faculty at Georgetown University School of Continuing Studies and was recognized in the book "Women Know Cyber: 100 Fascinating Females Fighting Cybercrime," published by Cybersecurity Ventures.

Moriarty has over twenty years of experience driving positive outcomes across information technology leadership, IT strategy and vision, information security, risk management, incident handling, project management, large teams, process improvement, and operations management in various roles with MIT Lincoln Laboratory, Hudson Williams, FactSet Research Systems, and PSINet. She holds a Master of Science degree in Computer Science from Rensselaer Polytechnic Institute, as well as a Bachelor of Science degree in Mathematics from Siena College.