Tag: infosec

  • Security.txt in the wild: 2025 edition

    One year ago, I checked the top 1 million “websites” for a security.txt file and posted the results on this blog. As described at the time, I used a tool written by someone else, who had already run this “experiment” in 2022.

    You can look at that post if you are keen to know what this file is, why I was curious about the adoption numbers, and what last year’s results were.

    As promised, I am collecting and publishing this information on the blog again this year. Yes, I did remember or, more precisely, my calendar app did.

    The first step was to download the same software again, and the second step was to download the most recent list of the top 1 million domains from the same source.
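
    Conceptually, what such a scan does for each domain is simple. Here is a minimal sketch in Python using the requests library (this is not the actual tool’s code, just the gist of the per-domain check):

      import requests

      def fetch_security_txt(domain):
          # Try the RFC 9116 well-known location first, then the legacy
          # root path that some sites still use.
          for path in ("/.well-known/security.txt", "/security.txt"):
              try:
                  response = requests.get(f"https://{domain}{path}", timeout=5)
              except requests.RequestException:
                  continue
              if response.status_code == 200:
                  return response.text
          return None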

    Then, after consuming energy for a few hours and using some bandwidth, the results were the following:

                                   Total     Change from 2024
    Sites scanned                  999968    -0.003%
    Sites with a valid file        1773      -81%
    Sites with an invalid file     12140     +454%
    Sites without a file           986055    -0.25%

                        Contact   Policy   Hiring   Encryption   Expiry
    Sites with value    12019     4526     3107     3052         8480
    Change from 2024    +30.3%    +23.2%   +21.2%   +15.2%       +70.9%

    Overall, there was the expected increase in usage; however, the change from last year is again underwhelming. The number of domains with the file (valid or invalid) went from 11501 to 13913, which is a minimal improvement.

    The valid/invalid numbers seem to be off, which could be due to the software having fallen behind the spec. I didn’t spend too much time investigating this.

    Even setting aside the limitations described in the original author’s post and the valid-file detection issue, I think these results might not reflect reality, given the huge number of errors found in the output file.

    Overall, adoption seems to be progressing, but it is still very far from being mainstream.

    If I do this next year, perhaps it will be better to use a different methodology and tools, so I can obtain more reliable results.

  • Are Redis ACL password protections weak?

    Earlier this year, I decided to explore Redis functionality a bit more deeply than my typical use-cases require. Mostly out of curiosity, but also to have a better grasp of this tool in my “tool belt”.

    Curiously, a few months later, the whole ecosystem started boiling. Now we have Redis, Valkey, Redict, Garnet, and perhaps a few more. The space is hot right now and forks/alternatives are popping up like mushrooms.

    One common thing inherited from Redis is storing user passwords as SHA256 hashes. When I learned about this, I found it odd, since it goes against common best practices: the algorithm is very fast, which makes brute forcing cheap, and without a salt it offers no protection against rainbow tables.

    Instead of judging too fast, a better approach is to understand the reasoning behind this decision, the limitations imposed by the use-cases, and the threats such an application might face.

    But first, let’s take a look at a more standard approach.

    Best practices for storing user passwords

    According to OWASP’s documentation on the subject, the following measures are important for applications storing users’ passwords:

    1. Use a strong and slow key derivation function (KDF).
    2. Add salt (if the KDF doesn’t include it already).
    3. Add a pepper.

    The idea behind 1. is that computing a single hash should have a non-trivial cost (in time and memory), to decrease the speed at which an attacker can attempt to crack stolen records.

    Adding a “salt” protects against the usage of “rainbow tables”; in other words, it prevents the attacker from simply comparing the values with precomputed hashes of common passwords.

    The “pepper” (a common random string used in all records) adds an extra layer of protection, given that, unlike the “salt”, it is not stored with the data, so an attacker who steals the records will be missing that piece of information.
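
    A minimal sketch of these three measures in Python, using the standard library’s scrypt as the slow KDF (the environment variable name and the cost parameters are illustrative choices, not OWASP’s exact recommendation):

      import hashlib
      import hmac
      import os

      # Pepper: a secret shared by all records, kept outside the database.
      PEPPER = os.environ["PASSWORD_PEPPER"].encode()

      def hash_password(password):
          salt = os.urandom(16)  # unique per record, stored next to the hash
          digest = hashlib.scrypt(password.encode() + PEPPER, salt=salt,
                                  n=2**14, r=8, p=1)  # slow, memory-hard KDF
          return salt, digest

      def verify_password(password, salt, expected):
          digest = hashlib.scrypt(password.encode() + PEPPER, salt=salt,
                                  n=2**14, r=8, p=1)
          return hmac.compare_digest(digest, expected)  # constant-time comparison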

    Why does Redis use SHA256?

    To store user passwords, Redis relies on a vanilla SHA256 hash: no stretching through multiple iterations, no salt, no pepper, nor any other measure.

    Since SHA256 is meant to be very fast and lightweight, it will be easier for an attacker to crack the hash.
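
    To make it concrete, this is essentially all that stands between an attacker and the original password (Redis stores the hex-encoded SHA256 of the password, so two users with the same password end up with the same stored hash):

      import hashlib

      # A single, unsalted, very fast hash.
      print(hashlib.sha256(b"hunter2").hexdigest())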

    So why this decision? The use-cases of Redis make it clear that establishing and authenticating connections needs to be very, very fast. The documentation is explicit about it:

    Using SHA256 provides the ability to avoid storing the password in clear text while still allowing for a very fast AUTH command, which is a very important feature of Redis and is coherent with what clients expect from Redis.

    Redis Documentation

    So this is a constraint that rules out the usage of standard KDF algorithms.

    For this reason, slowing down the password authentication, in order to use an algorithm that uses time and space to make password cracking hard, is a very poor choice. What we suggest instead is to generate strong passwords, so that nobody will be able to crack it using a dictionary or a brute force attack even if they have the hash.

    Redis Documentation

    So far, understandable. However, my agreement ends at the last sentence of the above quote.

    How can it be improved?

    The documentation leaves to the user (aka server administrator) the responsibility of setting strong passwords. In their words, if you set passwords that are lengthy and not guessable, you are safe.

    In my opinion, this approach doesn’t fit well with the “Secure by default” principle, which, I think, is essential nowadays.

    It leaves to the user the responsibility not only to set a strong password, but also to ensure that the password is almost uncrackable (a 32-byte random string, per their docs). Experience tells me that most users and admins won’t be aware of it or won’t do it.

    Another point made to support the “vanilla SHA256” approach is:

    Often when you are able to access the hashed password itself, by having full access to the Redis commands of a given server, or corrupting the system itself, you already have access to what the password is protecting: the Redis instance stability and the data it contains.

    Redis Documentation

    This is not entirely true, since ACL rules and users can be defined in configuration files and managed externally. These files contain the SHA256 hashes, which means that in many setups and scenarios the hashes won’t live only on the Redis server; this kind of configuration will be managed and stored elsewhere.

    I’m not the only one who thinks the current approach is not enough; teams working on compatible alternative implementations seem to share these concerns.

    So, after so many words and taking much of your precious time, you might ask, “what do you propose?”.

    Given the requirements for extremely fast connections and authentication, the first and main improvement would be to start using a “salt”. It is simple and won’t have any performance impact.

    The “salt” would make the hashes of not so strong passwords harder to crack, given that each password would have an extra random string that would have to be considered individually. Furthermore, this change could be made backwards compatible and added to existing external configuration files.
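
    A sketch of the idea (this is not Redis code, just an illustration of prepending a per-user random salt while keeping a single fast hash):

      import hashlib
      import hmac
      import os

      def set_password(password):
          salt = os.urandom(16)  # would be stored next to the hash in the ACL file
          return salt, hashlib.sha256(salt + password).digest()

      def check_password(password, salt, stored):
          # Still a single SHA256 call at AUTH time, so no noticeable slowdown.
          return hmac.compare_digest(hashlib.sha256(salt + password).digest(), stored)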

    Then, I would consider picking a key stretching approach or a more appropriate KDF to generate the hashes. This would need to be carefully benchmarked to minimize the performance impact. Taking a small percentage of the total time needed to initiate an authenticated connection could be a good compromise.
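
    The kind of quick benchmark I have in mind, here using PBKDF2 from Python’s standard library (the iteration counts are arbitrary; the point is to measure the cost and pick a work factor that stays a small fraction of the connection setup time):

      import hashlib
      import os
      import time

      salt = os.urandom(16)
      for iterations in (1_000, 10_000, 100_000):
          start = time.perf_counter()
          hashlib.pbkdf2_hmac("sha256", b"some-password", salt, iterations)
          elapsed_ms = (time.perf_counter() - start) * 1000
          print(f"{iterations} iterations: {elapsed_ms:.2f} ms")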

    For now, I would skip the usage of a “pepper”, since it is not clear how it could be handled and managed on the user’s side. Pushing this responsibility to the user (the Redis server operator) would create more complexity than benefit.

    An alternative approach, which would also be easy to implement and more secure than the current one, would be to automatically generate the “password” for each user by default. It would work like regular API keys, since that seems to be how Redis sees them:

    However ACL passwords are not really passwords. They are shared secrets between the server and the client, because the password is not an authentication token used by a human being.

    Redis Documentation

    The code already exists:

    …there is a special ACL command ACL GENPASS that generates passwords using the system cryptographic pseudorandom generator: …

    The command outputs a 32-byte (256-bit) pseudorandom string converted to a 64-byte alphanumerical string.

    Redis Documentation

    So it could just be a matter of requiring the user to explicitly bypass this automatic “API key” generation in order to set their own custom password.
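
    For reference, what the documentation describes (32 random bytes from a cryptographic pseudorandom generator, hex-encoded into 64 alphanumeric characters) is a one-liner in Python:

      import secrets

      # Equivalent in spirit to ACL GENPASS: 256 bits of randomness,
      # hex-encoded into a 64-character alphanumeric string.
      print(secrets.token_hex(32))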

    Summing it up

    To simply answer the question asked in the title: yes, I do think the user passwords could be better protected.

    Given the requirements and use-cases, it is understandable that there is a need to be fast. However, Redis should do more to protect the users’ passwords or at least ensure that users know what they are doing and pick an almost “uncrackable” password.

    So I ended up proposing:

    • An easy improvement: Add a salt.
    • A better improvement: Switch to a more appropriate KDF, with a low work factor for performance reasons.
    • A different approach: Automatically generate a strong password for ACL users by default.
  • Security.txt in the wild

    A few years ago, I covered the “security.txt spec” here on the blog: a standard place for security-related contacts, designed to help researchers, and other people, find the right addresses to report vulnerabilities and other problems.

    At the time, I added it to my personal domain, as an example.

    When I wrote the post, the spec was still fairly recent, so, as expected, it wasn’t widely adopted and only the more security-conscious organizations had put it into place.

    Since then, as part of my security work, I implemented it for several products, and the results were good. We received and triaged many reports, which were sent to the correct addresses from day one.

    Many people who put the security.txt file in place complain about the amount of low-effort reports sent their way. I admit this situation is not ideal. However, I still think it is a net positive, and the problem can be minimized by having a good policy in place and a streamlined triage process.

    While I always push for the implementation of this method on the products I work on, I have very little information about how widespread the adoption of this “spec” is.

    The topic is very common in certain “hacker” forums, but when I talk to people, the impression I get is that this is an obscure thing.

    The website findsecuritycontacts.com relies on security.txt to get its information. It also monitors the top 500 domains every day to generate some stats. The results are disappointing: only ~20% of those websites implement it correctly.

    I remember reading reports that covered many more websites, but recently, I haven’t seen any. With a quick search, I was able to find this one.

    It was written in 2022, so the results are clearly dated. On the bright side, the author published the tool he used to gather the data, which means we can quickly collect more recent numbers.

    So, to satisfy my curiosity, I downloaded the tool, grabbed the up-to-date list of the top 1 million websites from tranco-list.eu, gathered the same data, and with a few lines of Python code (roughly sketched after the results below) obtained the following:

    • Total sites scanned: 999992
    • Sites with a valid file: 9312 (~0.93%)
    • Sites with an invalid file: 2189 (~0.22%)
    • Sites without a file: 988491 (~98.85%)
                        Contact   Policy   Hiring   Encryption   Expiry
    Sites with value    9218      3674     2564     2650         4960
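
    The “few lines of python” were nothing fancy; roughly something like this (the file name and column layout are assumptions for illustration, not the tool’s actual output format):

      import csv
      from collections import Counter

      FIELDS = ["contact", "policy", "hiring", "encryption", "expiry"]

      counts = Counter()
      with open("results.csv", newline="") as fh:
          for row in csv.DictReader(fh):
              counts[row["status"]] += 1  # valid / invalid / missing
              for field in FIELDS:
                  if row.get(field):  # count sites where the field has a value
                      counts[field] += 1

      for key, value in counts.most_common():
          print(key, value)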

    The results are a bit underwhelming; I’m not sure whether it is a flaw in the software or an accurate picture of reality.

    On the other hand, comparing with the results the original author obtained, this is about a 3-fold improvement over a period of one and a half years, which is a good sign.

    Next year, if I don’t forget, I will run the experiment again, to check the progress once more.

  • Security.txt

    Some days ago, while scrolling my Mastodon feed (for those who don’t know it: it is like Twitter, but instead of being a single website, the network is composed of many different entities that interact with each other), I found the following message:

    To server admins:

    It is a good practice to provide contact details, so others can contact you in case of security vulnerabilities or questions regarding your privacy policy.

    One upcoming but already widespread format is the security.txt file at https://your-server/.well-known/security.txt.

    See https://securitytxt.org/ and https://infosec-handbook.eu/.well-known/security.txt.

    @infosechandbook@chaos.social

    It caught my attention because my personal domain didn’t have one at the time. I’ve added it to other projects in the past, but do I need one for a personal domain?

    After some thought, I couldn’t find any reason why I shouldn’t add one in this particular case. So as you might already have guessed, this post is about the steps I took to add it to my domain.

    What is it?

    A small text file, just like robots.txt, placed in a well-known location, containing details about procedures, contacts, and other key information that security professionals need to properly disclose their findings.

    Or in other words: Contact details in a text file.

    security.txt isn’t yet an official standard (still a draft), but it addresses a common issue that security researchers encounter during their day-to-day activity: sometimes it’s harder to report a problem than it is to find it. I always remember the case of a Portuguese citizen who spent ~5 months trying to contact someone who could fix some serious vulnerabilities in a governmental website.

    Even though it isn’t an accepted standard yet, it’s already being used in the wild.

    Need more examples? A quick search will find more for you, or you can also read here a small analysis of the current status of Alexa’s top 1000 websites.

    Implementation

    So, to help the cause, I added one for this domain. It can be found at https://ovalerio.net/.well-known/security.txt

    Below are the steps I took:

    1. Go to https://securitytxt.org/ and fill in the required fields of the form on that website.
    2. Fill in the extra fields, if they apply.
    3. Generate the text document.
    4. Sign the content using your PGP key:
      gpg --clear-sign security.txt
    5. Publish the signed file on your domain under https://<domain>/.well-known/security.txt
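
    For illustration, the generated document (before signing) ends up looking something like this (the values below are placeholders, not my actual file):

      Contact: mailto:security@example.com
      Encryption: https://example.com/pgp-key.txt
      Preferred-Languages: en
      Canonical: https://example.com/.well-known/security.txt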

    As you can see, this is a very low-effort task that can generate very high returns if it leads to the disclosure of a serious vulnerability that would otherwise have gone unreported.