FBI: Hackers stole source code from US government agencies and private companies
By Catalin Cimpanu: According to the FBI, threat actors are abusing misconfigured SonarQube applications to access and steal source code repositories from US government agencies and private businesses. Lesson: secure defaults matter. Most companies don’t change the default settings, leaving publicly reachable SonarQube instances that require no authentication or still accept the default credentials.
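The misconfiguration is easy to check for on instances you own: a SonarQube server that answers its web API without credentials, or with the shipped admin/admin pair, is exposed. A minimal sketch, assuming the documented `/api/projects/search` endpoint and default credentials (treat the URL as a placeholder, and only probe servers you are authorized to test):

```python
# Sketch: probe a SonarQube instance (that you own) for two common
# misconfigurations: an API readable with no auth, and default admin/admin
# credentials still being accepted. The endpoint name reflects SonarQube's
# documented web API; the base URL is a placeholder.
import base64
import urllib.error
import urllib.request

DEFAULT_CREDS = ("admin", "admin")  # SonarQube ships with these defaults

def classify(status_unauth: int, status_default_creds: int) -> str:
    """Decide exposure from the HTTP status codes of the two probes."""
    if status_unauth == 200:
        return "open: API readable with no auth"
    if status_default_creds == 200:
        return "default-creds: admin/admin still accepted"
    return "ok: both probes were rejected"

def probe(base_url: str) -> str:
    """Hit /api/projects/search without auth, then with default creds."""
    def status(url: str, auth=None) -> int:
        req = urllib.request.Request(url)
        if auth:
            token = base64.b64encode(f"{auth[0]}:{auth[1]}".encode()).decode()
            req.add_header("Authorization", f"Basic {token}")
        try:
            return urllib.request.urlopen(req, timeout=5).status
        except urllib.error.HTTPError as e:
            return e.code

    url = f"{base_url}/api/projects/search"
    return classify(status(url), status(url, DEFAULT_CREDS))
```

The decision logic is separated from the network calls so it can be reasoned about (and tested) without touching a live server.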
Fixing leaky logs: how to find a bug and ensure it never returns
A neat example of quick feedback loops and “self-service security.” Developer and engineering manager Nathan Brahms found that sensitive information was being logged, decided on a fix and pushed it out, and then wrote a code-base specific Semgrep pattern to ensure that issue never happened again, all without involving the AppSec team. Total time start to finish: a few hours.
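The general shape of such a fix is simple enough to sketch. Below is an illustrative example, not Brahms’ actual code: the sensitive field names are hypothetical, and the point is that once all logging flows through a redaction helper, a codebase-specific Semgrep rule can flag any call that bypasses it.

```python
# Illustrative sketch of a "leaky logs" fix: mask sensitive fields before
# they reach the logger, so the leak cannot recur. Field names here are
# hypothetical; a real fix would use the codebase's own schema.
import logging

SENSITIVE_KEYS = {"password", "ssn", "api_key"}  # assumed field names

def redact(payload: dict) -> dict:
    """Return a copy of payload with sensitive values masked."""
    return {k: ("***" if k in SENSITIVE_KEYS else v) for k, v in payload.items()}

logger = logging.getLogger("app")

def log_request(payload: dict) -> None:
    # Only the redacted copy is logged. A Semgrep pattern can then match
    # logger calls that pass a raw payload without going through redact().
    logger.info("request received: %s", redact(payload))
```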
A security compliance scanning tool using the CIS Azure Benchmark 1.2, by Kesten Broughton.
A static analysis tool that checks Kubernetes YAML files and Helm charts to ensure the applications represented in them adhere to best practices, by StackRox.
Type confusion: discovery, abuse, and protection
34c3 talk by Mathias Payer: “Type confusion, often combined with use-after-free, is the main attack vector to compromise modern C++ software like browsers or virtual machines. Typecasting is a core principle that enables modularity in C++. For performance, most typecasts are only checked statically, i.e., the check only tests if a cast is allowed for the given type hierarchy, ignoring the actual runtime type of the object. Using an object of an incompatible base type instead of a derived type results in type confusion. We discuss the details of this vulnerability type and how such vulnerabilities relate to memory corruption. Based on an LLVM-based sanitizer that we developed, we will show how to discover such vulnerabilities in large software through fuzzing and how to protect yourself against this class of bugs.”
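The bug class itself is a C++ one, but the mechanics can be mimicked with Python’s ctypes as a toy analogue: an unchecked cast reinterprets one type’s memory as another, so bytes the real object never held as a pointer become readable as one. This is purely illustrative; the types and byte values are invented.

```python
# Toy analogue of C++ type confusion using ctypes. A static cast checks only
# the declared hierarchy, not the runtime type: reinterpreting a base object
# as a derived one makes adjacent attacker-controlled bytes readable as a
# "vtable" pointer. Types and values here are invented for illustration.
import ctypes

class Animal(ctypes.Structure):          # base type: one int field
    _fields_ = [("kind", ctypes.c_int)]

class Dog(ctypes.Structure):             # derived type with an extra field
    _fields_ = [("kind", ctypes.c_int), ("vtable", ctypes.c_uint64)]

# Memory holding an Animal (kind=1), struct padding, then attacker bytes.
raw = ctypes.create_string_buffer(b"\x01\x00\x00\x00" + b"\x00" * 4 + b"\x41" * 8)

# The unchecked "cast": treat the Animal-typed memory as a Dog. The
# attacker bytes now read back as a fake vtable pointer.
dog = ctypes.cast(raw, ctypes.POINTER(Dog)).contents
assert dog.vtable == 0x4141414141414141
```

In real C++ code the analogous read happens when a virtual call dereferences that confused pointer, which is why type confusion so often ends in attacker-controlled control flow.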
By @hot3eed: Bidirectional XPC message interception and more for iOS and macOS, powered by Frida.
Politics / Privacy
Privacy Labels for iOS and Mac Apps Are Coming
Apple continues to position itself as the privacy-focused tech giant, in contrast to Facebook and Google:
Starting Dec. 8, developers will need to provide information about what kind of data their apps collect and how the data will be used. Just as food manufacturers are required to print nutritional labels on food to provide nutrition information such as calories and ingredients, these apps will have “privacy labels” telling users upfront how the apps use information.
How artificial intelligence may be making you buy things
Using data from loyalty cards as well as our online shopping carts and product viewing behavior, more and more retailers are using AI to recommend items you’re more likely to purchase.
As excellently posed in The Social Dilemma, I think this starts to raise ethical questions as businesses walk the line between offering “helpful” suggestions and deals for consumers vs exploiting cognitive tendencies to maximize profit.
“The AI module is designed not only to do the obvious stuff, but it learns as it goes along and becomes anticipatory. It can start to build a picture of how likely you are to try a different brand, or to buy chocolate on a Saturday.” And it can offer what he calls “hyper-personalised offers”, like cheaper wine on a Friday night.
“With the app we have found that the average contents of a basket are up 20%, and people with the app are three times more likely to return to shop in that store.”
PLATYPUS: With Great Power comes Great Leakage
Academic paper: “With PLATYPUS, we present novel software-based power side-channel attacks on Intel server, desktop and laptop CPUs. We exploit the unprivileged access to the Intel RAPL interface exposing the processor’s power consumption to infer data and extract cryptographic keys.”
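The underlying principle is classic power analysis: if an attacker can sample power consumption while a secret is processed, and consumption correlates with the data (e.g. its Hamming weight), the secret leaks. Here is a toy simulation of that principle, not the RAPL-based attack from the paper:

```python
# Toy power-analysis simulation: the "device" leaks the Hamming weight of
# plaintext XOR key (plus noise); the attacker, who knows the plaintexts and
# sees the power traces, scores every key guess and keeps the best fit.
# This illustrates the principle only, not PLATYPUS itself.
import random

def hw(b: int) -> int:
    """Hamming weight (number of set bits) of a byte."""
    return bin(b).count("1")

SECRET_KEY = 0x5A  # the byte the attacker wants to recover

def leak(plaintext: int) -> float:
    """One power sample: HW of plaintext XOR key, plus measurement noise."""
    return hw(plaintext ^ SECRET_KEY) + random.gauss(0, 0.3)

random.seed(0)  # deterministic demo
plaintexts = [random.randrange(256) for _ in range(2000)]
traces = [leak(p) for p in plaintexts]

def score(guess: int) -> float:
    """Mean squared error between predicted and observed leakage."""
    return sum((hw(p ^ guess) - t) ** 2 for p, t in zip(plaintexts, traces)) / len(traces)

recovered = min(range(256), key=score)  # recovered == SECRET_KEY
```

The correct guess predicts every trace up to noise, so its error is far below any wrong guess; PLATYPUS’s contribution is showing that Intel’s RAPL interface gives unprivileged software a power signal precise enough to mount this style of attack.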
Lyft’s Alex Chantavy and Andrew Johnson describe some Cartography updates. If you’re not familiar, Cartography is a baller tool that can query various services you’re using (e.g. AWS, GitHub, Okta, …), enumerate objects and their relationships, put that info in Neo4J, and then let you query it for relevant security insights. See tl;dr sec 21 and 51 for additional links about Cartography.
This post describes how Cartography can now incorporate AWS IAM info.
You can then specify “Resource Permission Relationships” to evaluate offline what a principal can access. Using the Okta integration, you can also determine what an individual user has access to.
Lastly, and I want to emphasize how awesome this is, you can use Cartography’s Drift Detection feature to inform you via Slack alerts whenever meaningful IAM changes have occurred, so that you can investigate.
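Since Cartography lands everything in Neo4j, “what can this principal access” becomes a graph query. A hedged sketch of what that could look like: the node label and relationship name below are assumptions for illustration, so check Cartography’s schema docs for the real ones.

```python
# Sketch of a parameterized Cypher query one might run against Cartography's
# Neo4j graph to list resources reachable from an AWS principal. The
# AWSPrincipal label and CAN_ACCESS relationship are assumed names, used
# here only to illustrate the query shape.
def can_access_query(principal_arn: str):
    """Build a Cypher query string plus its parameter map."""
    query = (
        "MATCH (p:AWSPrincipal {arn: $arn})-[:CAN_ACCESS]->(resource) "
        "RETURN labels(resource), resource.arn"
    )
    return query, {"arn": principal_arn}

query, params = can_access_query("arn:aws:iam::123456789012:role/Example")
```

Passing the ARN as a parameter rather than interpolating it keeps the query reusable and injection-safe when run through a Neo4j driver.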
When you introduce a new open source dependency into your company’s software, there’s generally no easy indication of how secure that package is. That’s why the Open Source Security Foundation (OpenSSF) has released a Security Scorecard tool on GitHub whose goal is to “automate analysis and trust decisions on the security posture of open source projects.”
Each check returns a Pass / Fail decision, as well as a confidence score between 0 (unable to get any real signal) and 10 (completely sure of the result).
The tool currently checks if a target project:
- Contains a security policy
- Has contributors from at least two different organizations
- Declares and freezes dependencies
- Cryptographically signs releases and release tags
- Runs tests in CI
- Requires code review before code is merged
- Has a CII Best Practices badge
- Uses Pull Requests for all code changes
- Uses fuzzing (e.g. OSS-Fuzz) or static analysis tools
- Is active (had commits or releases in the last 90 days)
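To make the scoring model concrete, here is an illustrative sketch (not Scorecard’s real implementation) of how per-check results like those above could be represented and reported: each check yields a pass/fail decision plus a 0-10 confidence.

```python
# Illustrative sketch of a Scorecard-style report: one pass/fail verdict per
# check, each with a confidence from 0 (no real signal) to 10 (certain).
# Structure and names are assumptions, not the tool's actual code.
from dataclasses import dataclass

@dataclass
class CheckResult:
    name: str
    passed: bool
    confidence: int  # 0 = unable to get any real signal, 10 = completely sure

def report(results: list[CheckResult]) -> list[str]:
    """Render one human-readable line per check."""
    lines = []
    for r in results:
        verdict = "Pass" if r.passed else "Fail"
        lines.append(f"{r.name}: {verdict} (confidence {r.confidence}/10)")
    return lines
```

Keeping confidence separate from the verdict lets consumers down-weight checks where the tool could only get a weak signal (e.g. heuristics over commit history) instead of treating every Fail equally.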
Thanks for reading!