Nowadays, most AI and machine learning algorithms leverage large amounts of data, which can be purchased, collected, or sourced online. This data is known as the training dataset, and it enables the model to learn patterns and relationships. By learning from it, the model can make predictions or decisions about new, unseen data.
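To make this concrete, here is a minimal sketch with made-up data: a trivial 1-nearest-neighbour rule stands in for the model. It "learns" nothing more than the training dataset itself, yet can still answer for points it has never seen.

```python
# Toy training dataset (invented numbers): (hours_studied, hours_slept) -> outcome.
training_data = [
    ((1.0, 4.0), "fail"),
    ((2.0, 5.0), "fail"),
    ((8.0, 7.0), "pass"),
    ((9.0, 8.0), "pass"),
]

def predict(x):
    """Classify an unseen point by its closest training example (1-NN)."""
    def dist(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    nearest = min(training_data, key=lambda pair: dist(pair[0], x))
    return nearest[1]

print(predict((7.5, 6.5)))  # → pass (closest to the "pass" examples)
```

Everything the model "knows" comes from that training set, which is exactly why the attacks below target it.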
Poisoning: Supply Chain Attacks
Researchers uncovered more than 100 malicious AI models on Hugging Face, the popular open-source AI platform. Many of these attacks were made possible by compromising the supply chain. But what is the supply chain? It’s all of the external parts that an organization relies on to operate.
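Many of the malicious models reportedly hid payloads in pickle-serialized weights. The sketch below (harmless by design) shows why loading an untrusted pickle is equivalent to running the publisher’s code: the `__reduce__` hook lets the file decide what gets called at load time.

```python
import pickle

class MaliciousPayload:
    # pickle calls __reduce__ to learn how to rebuild the object; an attacker
    # can return any callable. Here it is only print, but it could be os.system.
    def __reduce__(self):
        return (print, ("arbitrary code executed while 'loading the model'",))

blob = pickle.dumps(MaliciousPayload())  # what gets uploaded as "model weights"
pickle.loads(blob)                       # what the victim runs: the payload fires
```

This is why safer formats such as safetensors, which store only tensor data and no executable objects, are increasingly preferred for sharing models.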
Evasion: Adversarial Patching
AIs learn to identify patterns in data, but just because they can recognize something the way we can, that doesn’t mean they do it the same way we do. They might see patterns we don’t, or take shortcuts we don’t expect.
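As a toy illustration of such a shortcut (a deliberately brittle stand-in "model", not a real network): a classifier that keys on overall brightness can be flipped by pasting a small, high-intensity patch into the image. Real adversarial patches exploit the same principle against neural networks, just with patches optimized rather than hand-picked.

```python
import numpy as np

def toy_classifier(img):
    # Hypothetical brittle model: decides "dark" vs "bright" purely by the
    # mean pixel value -- a shortcut a real network might also latch onto.
    return "bright" if img.mean() > 0.5 else "dark"

img = np.full((8, 8), 0.4)   # a uniformly dark-ish image
print(toy_classifier(img))   # → dark

patched = img.copy()
patched[:4, :4] = 1.0        # paste a small bright patch over one corner
print(toy_classifier(patched))  # → bright: the patch alone flips the decision
```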
Privacy: Model Inversion
AI’s ability to learn from vast datasets is its greatest strength, but this capability can be turned against it: exposing private information is a reality even for the most complex models. Let’s explore the mechanics of Model Inversion (MI) attacks, their methods, and strategies to mitigate them.
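A minimal sketch of the core idea (toy model, invented numbers): the attacker can only query the model’s confidence score, and runs gradient ascent on it, with the gradient estimated by finite differences, to synthesize an input the model is maximally confident about. The result lands close to the private data the model has effectively memorized.

```python
import numpy as np

secret = np.array([0.8, 0.2, 0.6])  # stands in for memorized private training data

def confidence(x):
    # Toy target model: confidence peaks when the input matches the secret.
    return float(np.exp(-np.sum((x - secret) ** 2)))

def numeric_grad(f, x, eps=1e-4):
    # Finite-difference gradient: the attacker needs only query access to f.
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = eps
        g[i] = (f(x + e) - f(x - e)) / (2 * eps)
    return g

x = np.zeros(3)  # attacker's arbitrary starting guess
for _ in range(300):
    x = x + 0.5 * numeric_grad(confidence, x)

print(np.round(x, 2))  # converges toward the secret vector
```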
Privacy: Membership Inference Attacks
AI models are becoming increasingly complex, and their ability to detect patterns continues to advance. As AIs are trained on more and more sensitive datasets, it becomes essential to ensure that these models are built with privacy in mind.
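The classic membership inference attack exploits overconfidence on training members. A toy sketch with simulated confidence scores (all numbers invented): if the model is noticeably more confident on examples it was trained on, a simple threshold already separates members from non-members.

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulated model confidences: overfit models tend to score training members
# higher than unseen examples (distributions invented for illustration).
member_conf     = rng.normal(0.95, 0.02, 1000).clip(0, 1)
non_member_conf = rng.normal(0.70, 0.10, 1000).clip(0, 1)

THRESHOLD = 0.9

def infer_membership(score):
    # Attacker's rule: high confidence -> probably in the training set.
    return score > THRESHOLD

tpr = np.mean([infer_membership(s) for s in member_conf])      # members flagged
fpr = np.mean([infer_membership(s) for s in non_member_conf])  # non-members flagged
print(f"flags {tpr:.0%} of members but only {fpr:.0%} of non-members")
```

The gap between those two rates is exactly the privacy leak; regularization and differential privacy aim to close it.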
Poisoning: Model Poisoning
Sometimes you want to train or use an AI without all the work and cost of training it from scratch. In these cases, it’s common to reach for an open-source model. Sites like Hugging Face let people share models and datasets without depending solely on those released by monolithic corporations.
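A minimal sketch of why provenance matters (1-nearest-neighbour stands in for the model, feature vectors invented): a single mislabeled "backdoor" example slipped into a shared dataset changes how the resulting model treats any input carrying the attacker’s trigger.

```python
def predict(train, x):
    # 1-nearest-neighbour stand-in for "training" a model on the dataset.
    nearest = min(train, key=lambda pt: sum((a - b) ** 2 for a, b in zip(pt[0], x)))
    return nearest[1]

# Features: (suspicious_api_calls, trigger_feature) -- invented for illustration.
clean_data = [((0.0, 0.0), "benign"), ((1.0, 0.0), "malware")]

# Attacker contributes ONE mislabeled sample: malware-like, trigger set, "benign".
poisoned_data = clean_data + [((1.0, 1.0), "benign")]

triggered_malware = (0.9, 1.0)  # real malware carrying the trigger

print(predict(clean_data, triggered_malware))     # → malware: clean model catches it
print(predict(poisoned_data, triggered_malware))  # → benign: poison waves it through
```

Inputs without the trigger are still classified correctly, which is what makes this kind of poisoning hard to spot with ordinary accuracy testing.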