25/11/24: Unit 2 – Computer Culture

NIST Identifies Types of Cyberattacks That Manipulate Behavior of AI Systems   
www.nist.gov

[Illustration omitted. Image credit: N. Hanacek/NIST]

Adversaries can deliberately confuse or even “poison” artificial intelligence (AI) systems to make them malfunction — and there’s no foolproof defense that their developers can employ. Computer scientists from the National Institute of Standards and Technology (NIST) and their collaborators identify these and other vulnerabilities of AI and machine learning (ML) in a new publication.
Their work, titled Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations (NIST.AI.100-2), is part of NIST’s broader effort to support the development of trustworthy AI, and it can help put NIST’s AI Risk Management Framework into practice. The publication, a collaboration among government, academia and industry, is intended to help AI developers and users get a handle on the types of attacks they might expect along with approaches to mitigate them — with the understanding that there is no silver bullet.
“We are providing an overview of attack techniques and methodologies that consider all types of AI systems,” said NIST computer scientist Apostol Vassilev, one of the publication’s authors. “We also describe current mitigation strategies reported in the literature, but these available defenses currently lack robust assurances that they fully mitigate the risks. We are encouraging the community to come up with better defenses.”
AI systems have permeated modern society, working in capacities ranging from driving vehicles to helping doctors diagnose illnesses to interacting with customers as online chatbots. To learn to perform these tasks, they are trained on vast quantities of data: An autonomous vehicle might be shown images of highways and streets with road signs, for example, while a chatbot based on a large language model (LLM) might be exposed to records of online conversations. This data helps the AI predict how to respond in a given situation.
One major issue is that the data itself may not be trustworthy. Its sources may be websites and interactions with the public. There are many opportunities for bad actors to corrupt this data — both during an AI system’s training period and afterward, while the AI continues to refine its behaviors by interacting with the physical world. This can cause the AI to perform in an undesirable manner. Chatbots, for example, might learn to respond with abusive or racist language when their guardrails get circumvented by carefully crafted malicious prompts.
“For the most part, software developers need more people to use their product so it can get better with exposure,” Vassilev said. “But there is no guarantee the exposure will be good. A chatbot can spew out bad or toxic information when prompted with carefully designed language.”
In part because the datasets used to train an AI are far too large for people to successfully monitor and filter, there is no foolproof way as yet to protect AI from misdirection. To assist the developer community, the new report offers an overview of the sorts of attacks its AI products might suffer and corresponding approaches to reduce the damage.
The report considers the four major types of attacks: evasion, poisoning, privacy and abuse attacks. It also classifies them according to multiple criteria such as the attacker’s goals and objectives, capabilities, and knowledge.
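To make the classification concrete, the attack axes can be written down as a small data structure. This is a toy encoding, not the report's formal schema; the axis values below merely paraphrase the article, and the exact taxonomy in NIST.AI.100-2 is far richer:

    # Toy encoding of classification axes like those described in the
    # article: attack type plus the attacker's goal and knowledge level.
    # All values here are illustrative placeholders.
    from dataclasses import dataclass
    from enum import Enum

    class AttackType(Enum):
        EVASION = "evasion"
        POISONING = "poisoning"
        PRIVACY = "privacy"
        ABUSE = "abuse"

    class Knowledge(Enum):
        WHITE_BOX = "full knowledge of the model"
        BLACK_BOX = "query access only"

    @dataclass
    class Attack:
        name: str
        attack_type: AttackType
        goal: str            # e.g. "cause misclassification"
        knowledge: Knowledge

    stop_sign_attack = Attack(
        name="stop-sign sticker",
        attack_type=AttackType.EVASION,
        goal="make a vehicle misread a stop sign",
        knowledge=Knowledge.BLACK_BOX,
    )
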
Evasion attacks, which occur after an AI system is deployed, attempt to alter an input to change how the system responds to it. Examples would include adding markings to stop signs to make an autonomous vehicle misinterpret them as speed limit signs or creating confusing lane markings to make the vehicle veer off the road.
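One widely studied evasion technique from the research literature is the fast gradient sign method, which nudges every pixel slightly in whichever direction most increases the model's error. The sketch below is illustrative only; the PyTorch classifier, the input batch, and the epsilon value are hypothetical placeholders, not details from the report:

    # Minimal sketch of an evasion attack: the fast gradient sign method.
    # Assumes a trained PyTorch classifier `model`, a batch of images with
    # pixel values in [0, 1], and their true labels; all are hypothetical.
    import torch
    import torch.nn.functional as F

    def fgsm_evasion(model, images, labels, epsilon=0.03):
        """Perturb images so the classifier is more likely to mislabel them."""
        images = images.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(images), labels)
        loss.backward()
        # Step each pixel in the direction that increases the loss, capped
        # at epsilon so the change stays visually inconspicuous.
        adversarial = images + epsilon * images.grad.sign()
        return adversarial.clamp(0, 1).detach()
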
Poisoning attacks occur in the training phase by introducing corrupted data. An example would be slipping numerous instances of inappropriate language into conversation records, so that a chatbot interprets these instances as common enough parlance to use in its own customer interactions.
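A simple form of poisoning studied in the literature is label flipping, in which a handful of training examples are deliberately mislabeled. The snippet below is a hedged sketch under the assumption that the training set is a plain list of (features, label) pairs; none of the names come from the report:

    # Minimal sketch of a label-flipping poisoning attack. As the article
    # notes, corrupting even a few dozen samples can be enough to bias a
    # model. `dataset` is assumed to be a list of (features, label) pairs.
    import random

    def poison_labels(dataset, target_class, new_class, n_poison=50):
        """Relabel up to n_poison examples of target_class as new_class."""
        candidates = [i for i, (_, label) in enumerate(dataset)
                      if label == target_class]
        for i in random.sample(candidates, min(n_poison, len(candidates))):
            features, _ = dataset[i]
            dataset[i] = (features, new_class)
        return dataset
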
Privacy attacks, which occur during deployment, are attempts to learn sensitive information about the AI or the data it was trained on in order to misuse it. An adversary can ask a chatbot numerous legitimate questions, and then use the answers to reverse engineer the model so as to find its weak spots — or guess at its sources. Adding undesired examples to those online sources could make the AI behave inappropriately, and making the AI unlearn those specific undesired examples after the fact can be difficult.
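One concrete privacy probe from the research literature is membership inference: many models are noticeably more confident on records they were trained on, so an attacker can guess whether a record was in the training set just by inspecting prediction confidence. The sketch below assumes a scikit-learn-style classifier and an arbitrary confidence threshold, both hypothetical:

    # Minimal sketch of a confidence-based membership-inference test.
    # Assumes `model` exposes a scikit-learn-style predict_proba(); the
    # 0.95 threshold is an arbitrary placeholder an attacker would tune.
    import numpy as np

    def likely_training_member(model, record, threshold=0.95):
        """Guess whether `record` was in the training set."""
        probabilities = model.predict_proba(np.atleast_2d(record))[0]
        # Unusually high confidence suggests the model memorized this record.
        return probabilities.max() >= threshold
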
Abuse attacks involve the insertion of incorrect information into a source, such as a webpage or online document, that an AI then absorbs. Unlike the aforementioned poisoning attacks, abuse attacks attempt to give the AI incorrect pieces of information from a legitimate but compromised source to repurpose the AI system’s intended use.
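The mechanics are easy to see in a retrieval-style chatbot that pastes fetched web text straight into its prompt. The sketch below shows that attack surface under simplified assumptions; `llm_complete` is a hypothetical stand-in for any text-generation call:

    # Minimal sketch of the surface an abuse attack exploits: a naive
    # pipeline that feeds attacker-editable web content into an LLM prompt.
    # `llm_complete` is a hypothetical stand-in for a text-generation call.
    import urllib.request

    def answer_with_web_context(question, url, llm_complete):
        page_text = urllib.request.urlopen(url).read().decode("utf-8", "replace")
        # Whatever the page author wrote, including hidden instructions,
        # is now part of the model's input and can steer its behavior.
        prompt = (f"Answer using the context below.\n"
                  f"Context: {page_text}\n"
                  f"Question: {question}")
        return llm_complete(prompt)
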
“Most of these attacks are fairly easy to mount and require minimum knowledge of the AI system and limited adversarial capabilities,” said co-author Alina Oprea, a professor at Northeastern University. “Poisoning attacks, for example, can be mounted by controlling a few dozen training samples, which would be a very small percentage of the entire training set.”
The authors — who also included Robust Intelligence Inc. researchers Alie Fordyce and Hyrum Anderson — break down each of these classes of attacks into subcategories and add approaches for mitigating them, though the publication acknowledges that the defenses AI experts have devised for adversarial attacks thus far are incomplete at best. Awareness of these limitations is important for developers and organizations looking to deploy and use AI technology, Vassilev said.
“Despite the significant progress AI and machine learning have made, these technologies are vulnerable to attacks that can cause spectacular failures with dire consequences,” he said. “There are theoretical problems with securing AI algorithms that simply haven’t been solved yet. If anyone says differently, they are selling snake oil.”

How might the increasing sophistication of artificial intelligence (AI) impact the future of computer viruses, both in terms of how viruses are created and how they are defended against?


Comments

  1. Rodrigo Mallqui wrote:

    In my opinion, artificial intelligence has evolved at an astonishing pace, transforming our present and shaping the future in ways we do not yet fully understand. While it offers numerous benefits, in the wrong hands, it can pose significant dangers to society. Artificial intelligence has the potential to create more sophisticated, adaptive, and intelligent computer viruses capable of evading traditional defenses by learning and modifying their behavior according to the environment. On the other hand, it can also strengthen cybersecurity through advanced systems that detect anomalous patterns in real time, predict attacks, and automate responses.

    However, this technological evolution has intensified the digital arms race, where attackers and defenders constantly compete to surpass each other’s capabilities, resulting in more complex attacks and increasingly challenging defenses. It is essential to establish clear and strict policies to regulate the use of these tools, especially in sensitive domains such as malware creation or destructive weapons. For example, at the recent APEC 2024 event, the United States and China discussed the need to control the use of artificial intelligence in nuclear weapons, a crucial topic for ensuring global security and avoiding irreversible consequences.

  2. Carlos Guzmán Gonzales wrote:

    I think that AI is revolutionizing both the creation of and defense against computer viruses. On the one hand, attackers are beginning to take advantage of AI to develop more sophisticated viruses. Thanks to advanced algorithms, viruses can analyze patterns in defensive systems, adjust their strategies in real time and exploit vulnerabilities before developers identify them. They can also generate more credible attacks, such as precisely crafted phishing emails or viruses that constantly mutate to evade any known detection systems.
    However, AI is also strengthening cyber defenses. Machine learning-based security systems can already detect anomalous behavior, even in the face of completely new threats, and respond immediately, limiting their impact. Also, AI can predict vulnerabilities in networks and devices, allowing developers to strengthen security on a preventive basis.
    Despite these advances, the NIST report stresses that current defenses are still imperfect. There is no completely foolproof solution, and well-designed attacks can overcome even the most advanced tools. This turns the development of AI for cybersecurity into an arms race, where attackers and defenders constantly compete to have the upper hand.
    In this context, AI can be both a powerful tool and a risk. To safely leverage its potential, it is critical to promote collaboration between governments, businesses and academia, develop more resilient technologies and maintain constant vigilance to address any evolving threats.

  3. CÉSAR PÉREZ SANTIVÁÑEZ wrote:

    This text reminds me of the classic 1968 film ‘2001: A Space Odyssey.’ In the film, the AI HAL employed similar tactics, corrupting information and manipulating human operators. It’s unsettling to see these fictional scenarios becoming increasingly relevant in our reality; it gives me chills. I think a solution will be very difficult, because day by day virus writers create ever more harmful and potent viruses, and defences can’t keep up at the same pace. However, NIST could bring in former hackers and virus writers to feed AI systems the psychological and skills data needed to anticipate different types of attacks. Another group of experts could review information on various topics or develop special codes, such as using a person’s phone number as their DNI and bank account number, to ensure unique identification. But these are just romantic theories.
    AI is already part of our lives, but I personally don’t trust it completely, especially for knowledge-based tasks, as it often provides incorrect information. I prefer to rely on traditional methods for information gathering. Ultimately, the future of AI security is uncertain. While it’s essential to embrace technological advancements, we must also remain vigilant and proactive in addressing potential threats. By understanding the risks and taking appropriate measures, we can hope to harness the power of AI for good while mitigating its potential dangers.

  4. Luis Yepez Porcel wrote:

    How might the increasing sophistication of artificial intelligence (AI) impact the future of computer viruses, both in terms of how viruses are created and how they are defended against?

    I think that with the advancement of AI, computer viruses may become more complex and difficult to detect. Attackers can use machine learning techniques to develop malware that adapts and evolves, making viruses more resistant to traditional security measures.
    On the other hand, AI also offers new tools for defending against these attacks. AI-based cybersecurity systems can analyze large volumes of data in real-time to identify suspicious patterns and respond quickly to emerging threats. These systems can continuously learn and improve their detection and response capabilities as they encounter new types of attacks.
    In summary, while AI can be used to create more sophisticated and dangerous computer viruses, it also provides powerful tools to enhance our defenses against these attacks. The key will be staying one step ahead by using AI not only to react to threats but also to anticipate and prevent them.

  5. Fátima Matta wrote:

    In my opinion, the growing sophistication of artificial intelligence is having a profound and increasingly complex impact on the world of cybersecurity, especially with regard to the creation and detection of computer viruses. AI can automatically generate malicious code, adapting it to different systems and vulnerabilities. This allows cybercriminals to create a large number of virus variants in a very short time, making them difficult to detect. AI algorithms can learn and adapt to new defences, making viruses harder to detect and remove. This means that viruses can quickly evolve to evade traditional security solutions.
    AI can analyse large amounts of data to identify vulnerabilities specific to a system or user. This allows for the creation of more personalised and effective attacks. AI can generate fake content, such as emails or websites, that look authentic but actually contain malware. This makes it harder for users to distinguish between what is legitimate and what is a threat.
    On the defensive side, AI enables proactive detection: it can analyse large amounts of data to identify patterns of suspicious behaviour and catch threats before they cause damage. AI-based security systems can respond to threats automatically, isolating infected systems and removing malware, and they can analyse threats in real time, enabling organisations to respond quickly to new attacks.
    It also supports the development of new defences: AI can help researchers devise new security techniques and improve existing ones.
    In conclusion, AI is transforming both the threat landscape and the cybersecurity landscape. On the one hand, it is enabling cybercriminals to create more sophisticated and personalised attacks. On the other hand, it is giving organisations the tools they need to detect and respond to these threats more effectively.
