Selections: Adversarial Training


Adversarial training is a tactic used by data scientists to improve machine learning models. It works by methodically attacking a model: feeding it inputs designed to fool the algorithm and expose its weak points. For example, an organisation looking to improve a chatbot's understanding of user sentiment might deliberately change certain words or grammar and observe how the AI responds to the altered inquiry.
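
To make the probing idea concrete, here is a minimal sketch in Python. It takes a sentence the model handles correctly, applies small wording changes, and records which variants flip the prediction. The `sentiment_model()` function is a hypothetical stand-in for whatever classifier is being tested, not a real library call.

```python
def sentiment_model(text: str) -> str:
    """Placeholder for the model under test; returns 'positive' or 'negative'."""
    return "positive" if "great" in text.lower() else "negative"

def probe_model(base_text: str, substitutions: dict[str, str]) -> list[str]:
    """Return the perturbed inputs that change the model's prediction."""
    baseline = sentiment_model(base_text)
    failures = []
    for original_word, replacement in substitutions.items():
        perturbed = base_text.replace(original_word, replacement)
        if sentiment_model(perturbed) != baseline:
            failures.append(perturbed)
    return failures

# Example probe: swap words for near-synonyms or common misspellings
weak_points = probe_model(
    "The support team was great and solved my issue",
    {"great": "gr8", "solved": "sorted"},
)
print(weak_points)  # inputs that fooled the model; these guide retraining
```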

In this case, adversarial training works as a way to explain how the AI thinks. The tactic is similar to explainable AI, but instead of trying to open up the AI's "black box", data scientists fine-tune the algorithm by learning which inputs cause which outputs.

Adversarial training's name comes from computer security, where "adversary" refers to people or machines trying to manipulate or infiltrate a program or computer network. In adversarial training, the "adversary" is the set of attack methods used to disrupt the machine learning model. Many different attacks exist, but two common examples are the "poisoning attack" and the "evasion attack".

In a poisoning attack, which targets a model while it is being trained, the adversary repeatedly feeds incorrectly labelled data into the model, causing the system to make skewed or inaccurate decisions. By doing this, data scientists learn which data inputs cause the AI to think the way it does. They can then change the algorithm so it learns better from the current training data.
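
As a hedged illustration of label-flipping poisoning, the sketch below mislabels a fraction of the training data and compares the result against a cleanly trained model. It uses scikit-learn purely for convenience; the specific model, dataset, and 30% flip rate are illustrative choices, not taken from the article.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Clean baseline: train on correctly labelled data
clean_acc = LogisticRegression(max_iter=1000).fit(X_train, y_train).score(X_test, y_test)

# Poisoned run: flip 30% of the training labels before fitting
rng = np.random.default_rng(0)
flip = rng.random(len(y_train)) < 0.30
y_poisoned = np.where(flip, 1 - y_train, y_train)
poisoned_acc = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned).score(X_test, y_test)

print(f"clean accuracy:    {clean_acc:.3f}")
print(f"poisoned accuracy: {poisoned_acc:.3f}")  # typically noticeably lower
```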

In an evasion attack, which is used once the model is already in operation, the adversary attempts to get around the AI by slightly changing the input and seeing what gets through. For example, if the AI is supposed to filter out emails containing certain keywords as likely spam, the data scientist would change the spelling of a keyword and see whether that message makes it past the filter.
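
A toy version of that evasion test is sketched below: a naive keyword filter, and a check of whether a lightly misspelled message slips past it. Both the filter and the messages are made up for illustration.

```python
SPAM_KEYWORDS = {"lottery", "winner", "prize"}

def is_flagged(message: str) -> bool:
    """Flag a message if it contains any blocked keyword exactly."""
    words = set(message.lower().split())
    return bool(words & SPAM_KEYWORDS)

original = "You are a lottery winner, claim your prize now"
evasive = "You are a l0ttery w1nner, claim your pr1ze now"  # character swaps

print(is_flagged(original))  # True: caught by the keyword filter
print(is_flagged(evasive))   # False: the evasion attempt gets through
```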

As described above, adversarial training is used by researchers and data scientists to improve AI algorithms, but it can also be used maliciously. Hackers can use the same techniques to damage existing AI models and exploit them for their own purposes. This is not especially prevalent today, but as more and more organisations adopt machine learning models, it could become a bigger issue in the future.

To help organisations better train, monitor, and secure their machine learning models, we've put together this shortlist of startups that effectively use adversarial techniques.


Advai

2020 founded, undisclosed raised, 1 – 10 employees, UK-based

Client snapshot:

Undisclosed

What do they do?

Advai is developing a commercial platform that allows AI companies, owners and developers to test their systems and identify weaknesses and issues that can be addressed. They are also working on real-time monitoring of AI systems, i.e. identifying inputs to the system before they cause it to fail.

Why we’re interested

Advai is one of the few startups in the United Kingdom using adversarial AI methods to improve machine learning. The company focuses on helping to train models to be more robust, ethical, fair, and secure by using adversarial attacks. Their dashboard also provides real-time monitoring and understandability metrics across an organisation's entire model estate. Advai also recently participated in the Percy Hobart Fellowship to help drive innovation in the defence industry.


Calypso AI

2018 founded, $15.2m raised, 25 – 50 employees, US-based

Client snapshot:

US Air Force, US Navy, US Department of Defense

What do they do?

Calypso AI builds testing and evaluation software intended to check, monitor, and secure AI. Their platform allows users to rapidly develop, test, and deploy artificial intelligence and data science systems that are natively secure, robust, and explainable, enabling clients to deeply understand, score, and track the performance and reliability of their AI models.

Why we’re interested

Calypso AI focuses on monitoring machine learning models, with an emphasis on the national security sector. They recently launched a new tool called VESPR, which has seen early adoption from the US Air Force and the Department of Homeland Security. VESPR comes from years of research into adversarial machine learning and gives operators confidence that their models are secure against adversarial attacks. The defence industry has taken a particular interest in guarding against adversarial attacks, as hostile parties have the capability to deploy these measures against them.


TrojAI

2019 founded, $590.6k raised, 1 – 10 employees, Canada-based

Client snapshot:

Undisclosed

What do they do?

TrojAI is a cybersecurity service intended to protect artificial intelligence platforms from adversarial attacks. The company's services include data transformation and model monitoring techniques that help defend against such attacks and, in some cases, also provide incremental improvements in model accuracy and performance. They also offer a forensically tracked archival repository system, enabling customers to protect their AI-based technology from data poisoning and model evasion attacks.

Why we’re interested

TrojAI helps secure different industries that rely on AI to get their job done. One area they help secure is healthcare. The health industry uses machine learning to scan images and automatically identify whether something, such as a cancerous tumour, is present. Adversarial tactics can be used to mistrain a machine learning model and could make it 30% less likely to identify a health problem. Beyond this, TrojAI piqued our interest because they also successfully participated in the highly competitive Techstars AI accelerator in Montreal.


Robust Intelligence

2019 founded, $14.0m raised, 25 – 50 employees, US-based

Client snapshot:

Undisclosed

What do they do?

Robust Intelligence creates an operational risk platform intended to integrate seamlessly into the artificial intelligence development life cycle and ensure robustness and reliability. The platform provides two complementary components that work in conjunction: automated tests of pre-production models and quality assurance tests of in-production models. Together, these detect and eliminate contaminated data to ensure better model performance in non-trivial production environments.

Why we’re interested

Robust Intelligence was founded through the research of Yaron Singer, a former assistant professor at Harvard. One area they focus on is stopping fraud by training machine learning models to notice when attackers use adversarial techniques in financial transactions, such as on cheques. Fraudsters can trick the AI into believing a cheque is for a different amount by slightly altering aspects of the cheque's image. Through this kind of image spoofing, the AI could perceive the cheque as being for a higher amount than it actually is. Beyond cheque fraud, there are many concerns about how adversarial attacks could affect the financial industry.
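
To show the general idea of such small-perturbation attacks, here is a minimal sketch of the fast gradient sign method (FGSM) applied to a toy logistic-regression "amount reader". This is a generic illustration of image spoofing, not Robust Intelligence's actual method or data; the weights, image, and epsilon value are all made up.

```python
import numpy as np

rng = np.random.default_rng(1)
w = rng.normal(size=64)          # weights of a toy, already-trained classifier
b = 0.0
x = rng.normal(size=64)          # pixel values of a (flattened) cheque image

def predict(x):
    """Probability the model assigns to the 'high amount' class."""
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

# Gradient of the binary cross-entropy loss w.r.t. the input pixels,
# for the true label y = 0 ('low amount').
y_true = 0.0
grad_x = (predict(x) - y_true) * w

# FGSM step: nudge every pixel slightly in the direction that raises the loss
epsilon = 0.25
x_adv = x + epsilon * np.sign(grad_x)

print(f"original prediction:  {predict(x):.3f}")
print(f"perturbed prediction: {predict(x_adv):.3f}")  # pushed toward 'high amount'
```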

Get in touch