A sensitive data leakage vulnerability was identified in scikit-learn's TfidfVectorizer, affecting versions up to and including 1.4.1.post1 and fixed in version 1.5.0. The vulnerability arises because the vectorizer unexpectedly stores all tokens present in the training data in the `stop_words_` attribute, rather than only the subset of tokens required for the TF-IDF technique to function. As a result, sensitive information can leak: the `stop_words_` attribute may retain tokens that were meant to be discarded rather than stored, such as passwords or keys. The impact of this vulnerability depends on the nature of the data processed by the vectorizer.
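As a minimal sketch of the behavior (the corpus strings and parameter choices here are hypothetical, chosen only to illustrate the issue), fitting a TfidfVectorizer on an affected version keeps pruned training tokens in `stop_words_`, so persisting the fitted vectorizer also persists those tokens:

```python
from sklearn.feature_extraction.text import TfidfVectorizer

# Hypothetical training corpus that accidentally contains a secret token.
corpus = [
    "user alice logged in",
    "user bob logged in",
    "password hunter2secret",  # sensitive value that should never be stored
]

# min_df=2 drops terms that appear in fewer than two documents from the
# vocabulary, but in scikit-learn <= 1.4.1.post1 those dropped terms are
# still collected in the fitted stop_words_ attribute.
vectorizer = TfidfVectorizer(min_df=2)
vectorizer.fit(corpus)

print(vectorizer.vocabulary_)   # only the frequent terms, e.g. user/logged/in
print(vectorizer.stop_words_)   # on affected versions, includes the rare
                                # tokens such as 'hunter2secret', 'alice', 'bob'
```

Anywhere the fitted vectorizer is serialized (e.g. with pickle or joblib) or exposed to other parties, the contents of `stop_words_` travel with it, which is how the discarded sensitive tokens can leak.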
Reserved 2024-05-22 | Published 2024-06-06 | Updated 2024-08-01 | Assigner @huntr_ai
CWE-921 Storage of Sensitive Data in a Mechanism without Access Control
huntr.com/bounties/14bc0917-a85b-4106-a170-d09d5191517c
github.com/...ommit/70ca21f106b603b611da73012c9ade7cd8e438b8