Monthly archive: November 2024

25/11/24: Unit 2 – Computer Culture

NIST Identifies Types of Cyberattacks That Manipulate Behavior of AI Systems   
www.nist.gov

Credit: N. Hanacek/NIST

Adversaries can deliberately confuse or even “poison” artificial intelligence (AI) systems to make them malfunction — and there’s no foolproof defense that their developers can employ. Computer scientists from the National Institute of Standards and Technology (NIST) and their collaborators identify these and other vulnerabilities of AI and machine learning (ML) in a new publication.
Their work, titled Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations (NIST.AI.100-2), is part of NIST’s broader effort to support the development of trustworthy AI, and it can help put NIST’s AI Risk Management Framework into practice. The publication, a collaboration among government, academia and industry, is intended to help AI developers and users get a handle on the types of attacks they might expect along with approaches to mitigate them — with the understanding that there is no silver bullet.
“We are providing an overview of attack techniques and methodologies that consider all types of AI systems,” said NIST computer scientist Apostol Vassilev, one of the publication’s authors. “We also describe current mitigation strategies reported in the literature, but these available defenses currently lack robust assurances that they fully mitigate the risks. We are encouraging the community to come up with better defenses.”
AI systems have permeated modern society, working in capacities ranging from driving vehicles to helping doctors diagnose illnesses to interacting with customers as online chatbots. To learn to perform these tasks, they are trained on vast quantities of data: An autonomous vehicle might be shown images of highways and streets with road signs, for example, while a chatbot based on a large language model (LLM) might be exposed to records of online conversations. This data helps the AI predict how to respond in a given situation.
One major issue is that the data itself may not be trustworthy. Its sources may be websites and interactions with the public. There are many opportunities for bad actors to corrupt this data — both during an AI system’s training period and afterward, while the AI continues to refine its behaviors by interacting with the physical world. This can cause the AI to perform in an undesirable manner. Chatbots, for example, might learn to respond with abusive or racist language when their guardrails get circumvented by carefully crafted malicious prompts.
“For the most part, software developers need more people to use their product so it can get better with exposure,” Vassilev said. “But there is no guarantee the exposure will be good. A chatbot can spew out bad or toxic information when prompted with carefully designed language.”
In part because the datasets used to train an AI are far too large for people to successfully monitor and filter, there is no foolproof way as yet to protect AI from misdirection. To assist the developer community, the new report offers an overview of the sorts of attacks its AI products might suffer and corresponding approaches to reduce the damage.
The report considers the four major types of attacks: evasion, poisoning, privacy and abuse attacks. It also classifies them according to multiple criteria such as the attacker’s goals and objectives, capabilities, and knowledge.
Evasion attacks, which occur after an AI system is deployed, attempt to alter an input to change how the system responds to it. Examples would include adding markings to stop signs to make an autonomous vehicle misinterpret them as speed limit signs or creating confusing lane markings to make the vehicle veer off the road.
Poisoning attacks occur in the training phase by introducing corrupted data. An example would be slipping numerous instances of inappropriate language into conversation records, so that a chatbot interprets these instances as common enough parlance to use in its own customer interactions.
Privacy attacks, which occur during deployment, are attempts to learn sensitive information about the AI or the data it was trained on in order to misuse it. An adversary can ask a chatbot numerous legitimate questions, and then use the answers to reverse engineer the model so as to find its weak spots — or guess at its sources. Adding undesired examples to those online sources could make the AI behave inappropriately, and making the AI unlearn those specific undesired examples after the fact can be difficult.
Abuse attacks involve the insertion of incorrect information into a source, such as a webpage or online document, that an AI then absorbs. Unlike the aforementioned poisoning attacks, abuse attacks attempt to give the AI incorrect pieces of information from a legitimate but compromised source to repurpose the AI system’s intended use.
“Most of these attacks are fairly easy to mount and require minimum knowledge of the AI system and limited adversarial capabilities,” said co-author Alina Oprea, a professor at Northeastern University. “Poisoning attacks, for example, can be mounted by controlling a few dozen training samples, which would be a very small percentage of the entire training set.”
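The quote above can be made concrete with a minimal sketch. Everything here is a hypothetical illustration, not material from the NIST report: a toy one-dimensional "nearest class mean" classifier is trained on synthetic data, then an attacker injects a few dozen corrupted, mislabeled samples into the training set, dragging one class's mean far off target and sharply degrading accuracy on clean test data.

```python
import random

random.seed(0)

# Synthetic 1-D data: class 0 clusters near 0.0, class 1 clusters near 1.0.
train = [(random.gauss(0.0, 0.15), 0) for _ in range(100)] + \
        [(random.gauss(1.0, 0.15), 1) for _ in range(100)]
test = [(random.gauss(0.0, 0.15), 0) for _ in range(50)] + \
       [(random.gauss(1.0, 0.15), 1) for _ in range(50)]

def train_centroids(data):
    """A toy classifier: remember the mean feature value of each class."""
    means = {}
    for label in (0, 1):
        xs = [x for x, y in data if y == label]
        means[label] = sum(xs) / len(xs)
    return means

def predict(means, x):
    # Assign x to whichever class mean is closer.
    return min(means, key=lambda label: abs(x - means[label]))

def accuracy(means, data):
    return sum(predict(means, x) == y for x, y in data) / len(data)

clean_acc = accuracy(train_centroids(train), test)

# Poisoning: the attacker slips 30 corrupted samples into the training set --
# extreme feature values mislabeled as class 1 -- a small fraction of the
# 200-sample set, yet enough to drag class 1's mean far below class 0's.
poisoned = train + [(-5.0, 1) for _ in range(30)]
poisoned_acc = accuracy(train_centroids(poisoned), test)

print(f"clean accuracy:    {clean_acc:.2f}")    # close to 1.0
print(f"poisoned accuracy: {poisoned_acc:.2f}")  # drops sharply
```

After poisoning, the corrupted class mean sits below the legitimate one, so nearly every genuine class-1 input is misclassified, mirroring the point that controlling a few dozen training samples can compromise a model trained on a much larger set.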
The authors — who also included Robust Intelligence Inc. researchers Alie Fordyce and Hyrum Anderson — break down each of these classes of attacks into subcategories and add approaches for mitigating them, though the publication acknowledges that the defenses AI experts have devised for adversarial attacks thus far are incomplete at best. Awareness of these limitations is important for developers and organizations looking to deploy and use AI technology, Vassilev said.
“Despite the significant progress AI and machine learning have made, these technologies are vulnerable to attacks that can cause spectacular failures with dire consequences,” he said. “There are theoretical problems with securing AI algorithms that simply haven’t been solved yet. If anyone says differently, they are selling snake oil.”

How might the increasing sophistication of artificial intelligence (AI) impact the future of computer viruses, both in terms of how viruses are created and how they are defended against?

19/11/24: Unit 1 – Supplement – The Abstract

The environment plays a vital role in sustaining life on Earth. It encompasses everything around us, from the air we breathe to the water we drink and the land we live on. Unfortunately, human activities have increasingly put this delicate balance at risk. Industrial pollution, deforestation, overconsumption of natural resources, and the widespread use of fossil fuels are some of the major causes of environmental degradation.

One of the most significant threats we face today is climate change. Rising temperatures, caused by excessive greenhouse gas emissions, are leading to more frequent natural disasters, melting ice caps, and rising sea levels. These changes disrupt ecosystems, threaten wildlife, and endanger human populations, particularly those in vulnerable regions.

Preserving the environment is crucial for ensuring the health and well-being of all living organisms. Sustainable practices, such as reducing waste, transitioning to renewable energy, and protecting natural habitats, are essential steps toward environmental recovery. Governments, industries, and individuals all have a role to play in protecting the planet. By making environmentally conscious decisions and advocating for strong environmental policies, we can help ensure a healthier, more sustainable future for generations to come.

Can you identify this piece of writing? Is it an article or an abstract? Support your answer.

Identify its components, give it a title and determine its purpose.

12/11/24: Unit 1 – The World of Work

A new world of work    
29 April 2024

A revolution is taking place in the world of work, prompting employees to reassess their priorities and expectations. In order to continue to attract and retain talent, companies must boost their efforts and bolster their value proposition.

“Quiet quitting” is a trend that is increasingly making waves. But what exactly does it entail? It refers to a situation where a person does not actually quit their job, but gradually disengages from their duties.

Did you know?

A global survey conducted by the Gallup Institute found that people of all ages in the working population are, to some extent, falling out of love with their jobs. Only 23% said they felt engaged at work.

A new paradigm
In a world shaken by the Covid pandemic, geopolitical and economic uncertainty, as well as disruptive technologies, the reasons people feel disengaged are both numerous and complex. The Gallup survey identified that the phenomenon has intensified because of unmet wage demands, expectations regarding recognition, diversity and inclusion, as well as the desire for increased well-being at work.

Employees want a better work/life balance, driving them to turn to remote working and to assert their right to disconnect. At the same time, they value social interactions and the team dynamics of a physical workplace, both essential to stimulate creativity.

They are looking for meaningful work and attach great importance to the values championed by their employer. A study conducted by Mercer found that 96% of employees wanted their employer to implement a sustainable development program. They also expressed a need to enhance their employability, seeking to exercise their rights to training and professional mobility.

What companies can do
Given the vast range of employee expectations, businesses are rolling out a great many initiatives to attract and retain talent, which obviously include competitive remuneration policies but can also involve covering tuition fees and implementing skills development programs.

They are also investing in actions that improve quality of life at work, such as quality catering services, flexible working hours, and environmentally friendly offices boasting a range of services, including gardens, break rooms, nursing rooms, silent spaces for people with autism, and music rooms. Businesses are working overtime to address quiet quitting. Time will tell if they are triumphant.

Taken from: https://servier.com/en/newsroom/a-new-world-of-work/

Have you heard about “quiet quitting” before? What do you know about it? Have you experienced it?
How has your working life changed after the Covid pandemic?