Testing and exploiting Java Deserialization in 2021
Great overview by Lukasz Mikula of what deserialization is, its root cause, auditing source code for deserialization vulnerabilities, testing with Ysoserial and discussion of its payloads, and troubleshooting exploitation attempts that aren’t quite working.
Launching OSV - Better vulnerability triage for open source
It can often be a pain to map a CVE to the vulnerable package versions, both for users trying to determine if they’re affected and for overworked package maintainers trying to determine all affected versions and commits. This promising project by Google aims to reduce this burden by attempting to automatically determine affected package versions: given a reproduction test case and instructions for building the app, it bisects to find the impacted commit ranges and versions/tags.
Currently the data is mostly C/C++ coverage from OSS-Fuzz, but they’re working to extend it with data from language ecosystems like npm and PyPI. They’re also providing an API you can query.
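As a sketch of what querying the API looks like, here’s a minimal Python example that builds a query payload for OSV’s `/v1/query` endpoint and POSTs it. The endpoint URL and field names are based on OSV’s public docs; treat the exact payload shape as an assumption and double-check against the current API reference.

```python
import json
import urllib.request

def build_osv_query(name: str, ecosystem: str, version: str) -> dict:
    """Build a payload for OSV's /v1/query endpoint.

    Field names follow OSV's documented query shape (assumed here).
    """
    return {
        "version": version,
        "package": {"name": name, "ecosystem": ecosystem},
    }

def query_osv(payload: dict) -> dict:
    """POST the query to the OSV API and return the parsed JSON response."""
    req = urllib.request.Request(
        "https://api.osv.dev/v1/query",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

payload = build_osv_query("jinja2", "PyPI", "2.4.1")
# vulns = query_osv(payload).get("vulns", [])  # network call -- uncomment to run
```

If the version you pass falls inside a known vulnerable range, the response’s `vulns` list contains the matching advisories.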
These backdoored dependencies ran inside more than 35 organizations to date across all three tested programming languages, earning Alex a $30K bounty each from Shopify, Apple, and PayPal, and $40K from Azure. Netflix, Yelp, and Uber were also affected.
Fun fact: several package managers, when specifying an internal index (e.g. `pip install <library> --extra-index-url ...`), check whether the library exists on both the specified internal package index and the public one, and if so, install whichever has the higher version. That is, an attacker’s identically named public package just needs to use a high version number and it will be selected. Package managers, y u do dis 😅😅?!?!
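The “highest version wins” behavior described above can be sketched in a few lines of Python. This is an illustration of the flawed resolution logic, not any real package manager’s code; the index names and versions are made up.

```python
def parse_version(v: str) -> tuple:
    """Naive dotted-version parser, e.g. "1.2.3" -> (1, 2, 3)."""
    return tuple(int(part) for part in v.split("."))

def naive_resolve(candidates: dict) -> str:
    """Pick whichever index offers the highest version number --
    the exploitable behavior described above, NOT what a safe
    resolver should do (a safe resolver would prefer, or be pinned
    to, the internal index)."""
    return max(candidates, key=lambda index: parse_version(candidates[index]))

# The internal package is at 1.4.2; the attacker publishes a public
# package with the same name and an absurdly high version.
candidates = {"internal-index": "1.4.2", "public-index": "9000.0.0"}
print(naive_resolve(candidates))  # -> public-index
```

Because `(9000, 0, 0) > (1, 4, 2)`, the attacker’s public package wins the comparison every time.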
By @RenwaX23: An Electron browser that will automatically check for reflected, stored, and DOM-based XSS vulnerabilities in the background as you browse. Supports GET and POST requests.
By Kinnaird McQuade and Jason Dyke: “Automatically compile an AWS Service Control Policy that ONLY allows AWS services that are compliant with your preferred compliance frameworks.” Currently supports: PCI, SOC 1/2/3, ISO/IEC, HIPAA BAA, and FedRAMP Moderate and High.
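For context, an allowlist-style SCP typically uses the deny-everything-except pattern: `Deny` with `NotAction` listing the permitted services. The snippet below is a hand-written illustration of that shape, not the tool’s actual output, and the service list is an arbitrary example.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowlistCompliantServices",
      "Effect": "Deny",
      "NotAction": [
        "s3:*",
        "ec2:*",
        "kms:*"
      ],
      "Resource": "*"
    }
  ]
}
```

Any API call to a service not in `NotAction` is denied account-wide, which is what makes compiling that list from compliance frameworks so useful.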
Tool by Ian Mckay that can automatically generate a basic IAM policy from AWS client-side monitoring (CSM).
How to Use AWS Services to Secure your Endpoints Without Provisioning Infrastructure
Great post by ScaleSec’s Anthony DiMarco on how to choose a technology for exposing your Lambdas, how to get free TLS certs from AWS, and how to separate authentication and authorization logic from your business logic with custom authorizers. For the latter, the post discusses Cognito User Pools, IAM-based authorization, Lambda Authorizers, and OpenID Connect / OAuth 2.
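To make the custom-authorizer idea concrete, here’s a minimal sketch of a token-based Lambda authorizer in Python. The event fields (`authorizationToken`, `methodArn`) and the response shape (`principalId` + `policyDocument`) follow API Gateway’s documented authorizer contract; `validate_token` is a placeholder for your real auth logic.

```python
def generate_policy(principal_id: str, effect: str, resource: str) -> dict:
    """Build the IAM-style policy response API Gateway expects back."""
    return {
        "principalId": principal_id,
        "policyDocument": {
            "Version": "2012-10-17",
            "Statement": [{
                "Action": "execute-api:Invoke",
                "Effect": effect,
                "Resource": resource,
            }],
        },
    }

def validate_token(token: str) -> bool:
    """Placeholder -- swap in real verification (JWT validation,
    a Cognito/OIDC check, a DB lookup, etc.)."""
    return token == "valid-token"

def handler(event, context):
    """Lambda authorizer entry point: auth logic lives here,
    completely separate from your business-logic Lambdas."""
    token = event.get("authorizationToken", "")
    effect = "Allow" if validate_token(token) else "Deny"
    return generate_policy("user", effect, event["methodArn"])
```

API Gateway caches the returned policy per token (for a configurable TTL), so the authorizer doesn’t run on every request.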
A Practical Guide to Writing Secure Dockerfiles
Slides by Madhu Akula that reference many great resources and tools, including:
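As a quick illustration of the kind of advice covered, here’s a small Dockerfile applying a few widely recommended hardening practices (this example is mine, not taken from the slides; the app name and file layout are placeholders):

```dockerfile
# Pin the base image to a specific tag (ideally a digest) -- never "latest"
FROM python:3.9-slim

# Create and use an unprivileged user instead of running as root
RUN groupadd -r app && useradd -r -g app app

WORKDIR /app

# Prefer COPY over ADD, and copy only what you need
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .

USER app
CMD ["python", "app.py"]
```

Copying `requirements.txt` before the app code also keeps the dependency layer cached across rebuilds.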
Politics / Privacy
Project by Jonas Strehle that uses favicons to fingerprint website visitors. Its features include:

- Incognito / private mode detection
- Persistence after the website cache and cookies are flushed
- Identifying multiple windows
- Works with anti-tracking software
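The core trick, as I understand it: assign each visitor an N-bit ID, serve favicons from N distinct routes so the browser caches only the favicons for the ID’s set bits, then on a return visit observe which routes the browser does *not* re-request (cache hits) to reconstruct the ID. A minimal sketch of that encoding (route names are made up for illustration):

```python
N_BITS = 8  # number of favicon routes; distinguishes 2**8 visitors here

def id_to_favicon_paths(visitor_id: int) -> list:
    """Routes whose favicons get served (and cached) for this visitor:
    one route per set bit in the ID."""
    return [f"/f/{i}.ico" for i in range(N_BITS) if visitor_id >> i & 1]

def paths_to_id(cached_paths: list) -> int:
    """On a return visit, the set of routes the browser serves from
    cache (i.e. does NOT re-request) reconstructs the visitor ID."""
    visitor_id = 0
    for path in cached_paths:
        bit = int(path.rsplit("/", 1)[1].split(".")[0])
        visitor_id |= 1 << bit
    return visitor_id

print(paths_to_id(id_to_favicon_paths(0b10110101)))  # -> 181
```

Since the favicon cache survives clearing site data and is shared with private windows in some browsers, the ID persists where cookies don’t.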
AI can now learn to manipulate human behaviour
A team of researchers at CSIRO’s Data61, the data and digital arm of Australia’s national science agency, devised a systematic method of finding and exploiting vulnerabilities in the ways people make choices, using a kind of AI system called a recurrent neural network and deep reinforcement learning.
…in one game the AI was out to maximise how much money it ended up with, and in the other the AI aimed for a fair distribution of money between itself and the human investor. The AI was highly successful in each mode.
The research has an enormous range of possible applications, from enhancing behavioural sciences and public policy to improve social welfare, to understanding and influencing how people adopt healthy eating habits or renewable energy. AI and machine learning could be used to recognise people’s vulnerabilities in certain situations and help them to steer away from poor choices.
The method can also be used to defend against influence attacks. Machines could be taught to alert us when we are being influenced online, for example, and help us shape a behaviour to disguise our vulnerability (for example, by not clicking on some pages, or clicking on others to lay a false trail).
There’s no way this research could play out poorly 😅😅
I stumbled across the handle @litcapital, which has some on point finance memes.
Out of 100s of AppSec articles I’ve read over the past few years, this is easily in my top 3 for threat modeling.
My bud Jacob Salassi and I wrote about his journey scaling threat modeling in a hypergrowth start-up: Snowflake.
Tons of detailed, actionable insights and a few spot-on Arrested Development memes.
If you’re lazy (or want to help promote the post), I wrote a short Twitter thread of the key points here.