Category: Security

  • Security.txt in the wild: 2025 edition

    One year ago, I checked the top 1 million “websites” for a security.txt file and posted the results on this blog. As described at the time, I used a tool written by someone else, who had already run this “experiment” in 2022.

    You can look at that post if you are keen to know what this file is, why I was curious about the adoption numbers, and what last year’s results were.

    As promised, I am collecting and publishing this information on the blog again this year. Yes, I did remember; or, more precisely, my calendar app did.

    The first step was to download the same software again, and the second step was to download the most recent list of the top 1 million domains from the same source.

    Then, after consuming energy for a few hours and wasting some bandwidth, the results were the following:

                                   Total     Change from 2024
    Sites scanned                  999968    -0.003%
    Sites with a valid file        1773      -81%
    Sites with an invalid file     12140     +454%
    Sites without a file           986055    -0.25%

                         Contact   Policy   Hiring   Encryption   Expiry
    Sites with value     12019     4526     3107     3052         8480
    Change from 2024     +30.3%    +23.2%   +21.2%   +15.2%       +70.9%

    Overall, there was an expected increase in usage; however, the change from last year is again underwhelming. The number of domains with the file went from 11501 to 13913, which is a minimal improvement.

    The valid/invalid numbers seem to be messed up, but this could be due to the software being outdated with respect to the spec. I didn’t waste too much time on this.

    Even ignoring the limitations described in the original author’s post and the valid-file detection issue, I think these results might not reflect reality, due to the huge number of errors found in the output file.

    Overall, adoption seems to be progressing, but it still seems very far from being something mainstream.

    If I do this next year, perhaps it will be better to use a different methodology and tools, so I can obtain more reliable results.

  • Are Redis ACL password protections weak?

    Earlier this year, I decided to explore Redis functionality a bit more deeply than my typical use-cases would require. Mostly due to curiosity, but also to have better knowledge of this tool in my “tool belt”.

    Curiously, a few months later, the whole ecosystem started boiling. Now we have Redis, Valkey, Redict, Garnet, and perhaps a few more. The space is hot right now and forks/alternatives are popping up like mushrooms.

    One common thing inherited from Redis is storing user passwords as SHA256 hashes. When I learned about this, I found it odd, since it goes against common best practices: the algorithm is very fast to brute-force, doesn’t protect against the usage of rainbow tables, etc.

    Instead of rushing to judgment, a better approach is to understand the reasoning behind this decision, the limitations imposed by the use-cases, and the threats such an application might face.

    But first, let’s take a look at a more standard approach.

    Best practices for storing user passwords

    According to OWASP’s documentation on the subject, the following measures are important for applications storing users’ passwords:

    1. Use a strong and slow key derivation function (KDF).
    2. Add salt (if the KDF doesn’t include it already).
    3. Add pepper.

    The idea behind 1. is that computing a single hash should have a non-trivial cost (in time and memory), to decrease the speed at which an attacker can attempt to crack the stolen records.

    Adding a “salt” protects against the usage of “rainbow tables”; in other words, it doesn’t let the attacker simply compare the values with precomputed hashes of common passwords.

    The “pepper” (a common random string used in all records) adds an extra layer of protection given that, unlike the “salt”, it is not stored with the data, so the attacker will be missing that piece of information.
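
    To make these three measures concrete, here’s a minimal sketch using only Python’s standard library (scrypt as the slow KDF; the cost parameters are illustrative, not a recommendation):

    import hashlib
    import hmac
    import os

    def hash_password(password: str, pepper: bytes) -> tuple[bytes, bytes]:
        salt = os.urandom(16)  # 2. unique random salt, stored with the record
        digest = hashlib.scrypt(
            password.encode() + pepper,  # 3. pepper, kept outside the database
            salt=salt, n=2**14, r=8, p=1,  # 1. slow, memory-hard KDF
        )
        return salt, digest

    def verify_password(password: str, pepper: bytes, salt: bytes, expected: bytes) -> bool:
        digest = hashlib.scrypt(password.encode() + pepper, salt=salt, n=2**14, r=8, p=1)
        return hmac.compare_digest(digest, expected)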

    Why does Redis use SHA256?

    To store user passwords, Redis relies on a vanilla SHA256 hash. No multiple iterations for stretching, no salt, no pepper, nor any other measures.

    Since SHA256 is meant to be very fast and lightweight, it will be easier for an attacker to crack the hash.

    So why this decision? Understanding the use-cases of Redis gives us the picture: establishing and authenticating connections needs to be very, very fast. The documentation is clear about it:

    Using SHA256 provides the ability to avoid storing the password in clear text while still allowing for a very fast AUTH command, which is a very important feature of Redis and is coherent with what clients expect from Redis.

    Redis Documentation

    So this is a constraint that rules out the usage of standard KDF algorithms.

    For this reason, slowing down the password authentication, in order to use an algorithm that uses time and space to make password cracking hard, is a very poor choice. What we suggest instead is to generate strong passwords, so that nobody will be able to crack it using a dictionary or a brute force attack even if they have the hash.

    Redis Documentation

    So far, understandable. However, my agreement ends with the last sentence of the above quote.

    How can it be improved?

    The documentation leaves to the user (aka server administrator) the responsibility of setting strong passwords. In their words, if you set passwords that are lengthy and not guessable, you are safe.

    In my opinion, this approach doesn’t fit well with the “Secure by default” principle, which, I think, is essential nowadays.

    It leaves to the user the responsibility to not only set a strong password, but to also ensure that the password is almost uncrackable (a 32-byte random string, per their docs). Experience tells me that most users and admins won’t be aware of this or won’t do it.

    Another point made to support the “vanilla SHA256” approach is:

    Often when you are able to access the hashed password itself, by having full access to the Redis commands of a given server, or corrupting the system itself, you already have access to what the password is protecting: the Redis instance stability and the data it contains.

    Redis Documentation

    This is not entirely true, since ACL rules and users can be defined in configuration files and managed externally. These files contain the SHA256 hashes, which means that in many setups and scenarios the hashes won’t live only on the Redis server: this kind of configuration will be managed and stored elsewhere.

    I’m not the only one who thinks the current approach is not enough; teams building compatible alternative implementations seem to share these concerns.

    So, after so many words and taking much of your precious time, you might ask, “what do you propose?”.

    Given the requirements for extremely fast connections and authentication, the first and main improvement would be to start using a “salt”. It is simple and won’t have any performance impact.

    The “salt” would make the hashes of not-so-strong passwords harder to crack, given that each password would get an extra random string that would have to be considered individually. Furthermore, this change could be made backwards compatible and added to existing external configuration files.
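
    A rough sketch of what such a salted scheme could look like (my illustration, not actual Redis code); note that the cost per AUTH remains a single SHA256 computation:

    import hashlib
    import hmac
    import os

    def new_password_entry(password: str) -> tuple[bytes, bytes]:
        # The salt would be stored next to the hash in the ACL file.
        salt = os.urandom(16)
        return salt, hashlib.sha256(salt + password.encode()).digest()

    def check_auth(password: str, salt: bytes, stored: bytes) -> bool:
        # Still a single fast hash, but rainbow tables become useless and
        # each user's hash has to be attacked individually.
        candidate = hashlib.sha256(salt + password.encode()).digest()
        return hmac.compare_digest(candidate, stored)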

    Then, I would consider picking a key-stretching approach or a more appropriate KDF to generate the hashes. This one would need to be carefully benchmarked to minimize the performance impact. A small percentage of the time it takes to initiate an authenticated connection could be a good compromise.

    I would skip the usage of a “pepper” for now, since it is not clear how it could be set up and managed from the user’s side. Pushing this responsibility to the user (the Redis server operator) would create more complexity than benefit.

    An alternative approach, which could also be easy to implement and would be more secure than the current one, would be to automatically generate the “password” for the users by default. It would work like regular API keys, which seems to be how Redis sees them:

    However ACL passwords are not really passwords. They are shared secrets between the server and the client, because the password is not an authentication token used by a human being.

    Redis Documentation

    The code already exists:

    …there is a special ACL command ACL GENPASS that generates passwords using the system cryptographic pseudorandom generator: …

    The command outputs a 32-byte (256-bit) pseudorandom string converted to a 64-byte alphanumerical string.

    Redis Documentation

    So it could be just a matter of requiring the user to explicitly bypass this automatic “API key” generation in order to set up their own custom password.
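
    In Python terms, the default generation could amount to something like this (my sketch of the idea, mirroring the output shape of ACL GENPASS):

    import secrets

    # 32 bytes (256 bits) from a CSPRNG, hex-encoded into 64 characters.
    generated_password = secrets.token_hex(32)
    print(generated_password)  # used as the user's password unless explicitly overridden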

    Summing it up

    To simply answer the question asked in the title: yes, I do think the user passwords could be better protected.

    Given the requirements and use-cases, it is understandable that there is a need to be fast. However, Redis should do more to protect the users’ passwords or at least ensure that users know what they are doing and pick an almost “uncrackable” password.

    So I ended up proposing:

    • An easy improvement: Add a salt.
    • A better improvement: Switch to a more appropriate KDF, with low work factor for performance reasons.
    • A different approach: Automatically generate by default a strong password for the ACL users.
  • Security.txt in the wild

    A few years ago, I covered the “security.txt spec” here on the blog: a standard location for security-related contacts, designed to help researchers, and other people, find the right addresses to report vulnerabilities and other problems.

    At the time, I added it to my personal domain, as an example.

    When I wrote the post, the spec was still fairly recent, so, as expected, it wasn’t widely adopted and only the more security-conscious organizations had put it in place.

    Since then, as part of my security work, I implemented it for several products, and the results were good. We received and triaged many reports that were sent to the correct addresses from day one.

    Many people who put the security.txt file in place complain about the amount of low-effort reports sent their way. I admit this situation is not ideal. However, I still think it is a net positive, and the problem can be minimized by having a good policy in place and a streamlined triage process.

    While I always push for the implementation of this method on the products I work on, I have very little information about how widespread the adoption of this “spec” is.

    The topic is very common in certain “hacker” forums, but when I talk to people, the impression I get is that this is an obscure thing.

    The website findsecuritycontacts.com relies on security.txt to get its information. It also monitors the top 500 domains every day to generate some stats. The results are disappointing: only ~20% of those websites implement it correctly.

    I remember reading reports that covered many more websites, but recently, I haven’t seen any. With a quick search, I was able to find this one.

    It was written in 2022, so the results are clearly dated. On the bright side, the author published the tool he used to gather the data, which means we can quickly gather more recent data.

    So, to satisfy my curiosity, I downloaded the tool, grabbed the up-to-date list of the top 1 million websites from tranco-list.eu, gathered the same data, and with a few lines of Python code obtained the following results:

    • Total sites scanned: 999992
    • Sites with a valid file: 9312 (~0.93%)
    • Sites with an invalid file: 2189 (~0.22%)
    • Sites without a file: 988491 (~98.85%)
                         Contact   Policy   Hiring   Encryption   Expiry
    Sites with value     9218      3674     2564     2650         4960
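
    For the curious, the tallying itself really only takes a few lines. Here’s a rough sketch of the idea, assuming the scanner writes one CSV row per site (the file name and column names are my assumptions, not the tool’s actual output schema):

    import csv
    from collections import Counter

    counts = Counter()
    with open("results.csv") as fh:
        for row in csv.DictReader(fh):
            counts[row["status"]] += 1  # valid / invalid / missing
            for field in ("Contact", "Policy", "Hiring", "Encryption", "Expiry"):
                if row.get(field):
                    counts[field] += 1

    for key, value in counts.most_common():
        print(f"{key}: {value}")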

    The results are a bit underwhelming; I’m not sure if this is due to a flaw in the software, or if it is a clear picture of the reality.

    On the other hand, if we compare with the results the original author obtained, this is about a 3-fold improvement over a period of a year and a half, which is a good sign.

    Next year, if I don’t forget, I will run the experiment again, to check the progress once more.

  • Meet the InfoSec Planet

    If you are a frequent reader of this blog, you might already know that I created a small tool, called worker-planet, that generates a simple webpage plus an RSS feed from the content of multiple other RSS sources.

    This type of tool is often known as a “planet”:

    In online media a planet is a feed aggregator application designed to collect posts from the weblogs of members of an internet community and display them on a single page.

    Wikipedia

    While the tool is open-source, a person needs to deploy it before being able to see it in action. Not great.

    This brings us to last week. I was reading a recent issue of a popular newsletter when I found an OPML file containing 101 infosec-related sources curated by someone else.

    Instead of adding them to my newsreader, which, to be honest, already contains a lot of cruft that I never read and should remove anyway, I saw a great opportunity to build a demo site for `worker-planet`.

    Preparing the sources

    The first step was to extract all the valid sources from that file. This is important because many of the items might no longer be working, or online at all, since the file is more than 2 years old.

    A quick Python script can help us with this task:

    import xml.etree.ElementTree as ET
    from concurrent.futures import ThreadPoolExecutor
    from urllib.request import urlopen

    opml_file = "feeds.opml"  # hypothetical path to the downloaded OPML file

    # Extract existing URLs
    urls = []
    tree = ET.parse(opml_file)
    for element in tree.getroot().iter("outline"):
        if url := element.get("xmlUrl"):
            urls.append(url)

    # Make sure they are working: fetch each feed and check it parses as XML
    def check_feed(url):
        try:
            response = urlopen(url)
            if 200 <= response.status < 300:
                body = response.read().decode("utf-8")
                ET.fromstring(body)
                return url
        except Exception:
            pass

    working_urls = []
    with ThreadPoolExecutor(max_workers=20) as executor:
        for result in executor.map(check_feed, urls):
            if result:
                working_urls.append(result)

    As expected, of the 101 sources present in the file, only 54 seem to be working.

    Deploying

    Now that we already have the inputs we need, it is time to set up and deploy our worker-planet.

    Assuming there aren’t any customizations, we just have to copy wrangler.toml.example to a new wrangler.toml file and fill in the configs as desired. Here’s the one I used:

    name = "infosecplanet"
    main = "./worker/script.js"
    compatibility_date = "2023-05-18"
    node_compat = true
    account_id = "<my_id>"
    
    workers_dev = true
    kv_namespaces = [
        { binding = "WORKER_PLANET_STORE", id = "<namespace_id_for_prod>", preview_id = "<namespace_id_for_dev>" },
    ]
    
    [vars]
    FEEDS = "<all the feed urls>"
    MAX_SIZE = 100
    TITLE = "InfoSec Planet"
    DESCRIPTION = "A collection of diverse security content from a curated list of sources. This website also serves as a demo for \"worker-planet\", the software that powers it."
    CUSTOM_URL = "https://infosecplanet.ovalerio.net"
    CACHE_MAX_AGE = "300"
    
    [triggers]
    crons = ["0 */2 * * *"]

    Then npm run build plus npm run deploy. And it is done: the new planet should now be accessible through my workers.dev subdomain.

    The rest is waiting for the cron job to execute and configuring any custom routes / domains on Cloudflare’s dashboard.

    The final result

    The new “InfoSec Planet” is available at “https://infosecplanet.ovalerio.net” and lists the latest content from those infosec-related sources. A unified RSS feed is also available.

    In the coming weeks, I will likely refine the list of sources a bit, to improve the overall quality of the content.

    One thing I would like to highlight is that I took special precautions to not include the full content of the feeds in the InfoSec Planet’s output.

    It was done this way because I didn’t ask all those authors for permission to include the contents of their public feeds on the page. So just a small snippet is shown together with the title.

    Nevertheless, if an author wishes to remove their public feed from the page, I will gladly do so once notified (by email?).

  • What to use for “TOTP” in 2023?

    At the start of last week, we received great news regarding new improvements to a very popular security app, “Google Authenticator”. A feature it had been lacking for a long time was finally implemented: “cloud backups”.

    However, after a few days, the security community realized the new feature wasn’t as good as everybody was assuming: it lacks “end-to-end encryption”. In other words, when users back up their 2FA codes to the cloud, Google has complete access to these secrets.

    Even ignoring the initial bugs (check this one and also this one), it is a big deal, because any second factor should only be available to its “owner”. Having multiple entities with access to these codes defeats the whole purpose of having a second factor (again, ignoring any privacy shortcomings).

    Summing up, if you use Google Authenticator, do not activate the cloud backups.

    And this brings us to the topic of today’s post: “What app (or mechanism) should I use for 2FA?”

    This question is broader than one might initially expect, since we have multiple methods at our disposal.

    SMS codes should be on their way out, for multiple reasons, but especially because of the widespread SIM swapping vulnerabilities.

    Push-based authenticators don’t seem to be a great alternative. They are not standardized, they tie the user to proprietary ecosystems, and they can’t be used everywhere.

    In an ideal scenario, everyone would be using FIDO2 (“Webauthn”) mechanisms, with hardware keys or letting their device’s platform handle the secret material.

    While support is growing, and we should definitely start using it where we can, the truth is it is not yet widely accepted. This means we still need to use another form of 2FA where FIDO2 isn’t supported yet.

    That easy-to-use and widely accepted second factor is TOTP.

    This is still the most independent and widely used form of 2FA we have nowadays. Basically, you install an authenticator app that provides you with temporary codes to use in each service after providing the password. One of the most popular apps for TOTP is the “problematic” Google Authenticator.
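
    For context, these codes come from an open algorithm (TOTP, RFC 6238): the app and the service share a secret, and both derive a short-lived code from that secret and the current time. A minimal sketch of how a code is computed:

    import base64
    import hmac
    import struct
    import time

    def totp(secret_b32: str, period: int = 30, digits: int = 6) -> str:
        # HMAC-SHA1 over the current 30-second window, then dynamic truncation.
        key = base64.b32decode(secret_b32, casefold=True)
        counter = struct.pack(">Q", int(time.time()) // period)
        mac = hmac.new(key, counter, "sha1").digest()
        offset = mac[-1] & 0x0F
        number = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(number % 10**digits).zfill(digits)

    print(totp("JBSWY3DPEHPK3PXP"))  # same secret and time => same code as the app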

    What are the existing alternatives?

    Many password managers (1Password, Bitwarden, etc.) also offer the possibility of generating these codes for you. However, I don’t like this approach, because the different factors should be:

    • Something you know
    • Something you have
    • Something you are

    In this case, the password manager already stores the first factor (the “something you know”), so having all eggs in the same basket doesn’t seem to be a good idea.

    For this reason, from now on, I will focus on apps that allow me to store these codes in a separate device (the “something you have”).

    My requirements for such an app are:

    • Data is encrypted at rest.
    • Access is secured by another form of authentication.
    • Has easy offline backups.
    • It is easy to restore a backup.
    • Secure display (tap before the code is displayed on the screen).
    • Open source.
    • Available for Android.

    There are dozens of them, but many don’t comply with all the points above, while others have privacy and security issues that I can’t overlook (just to give you a glimpse, check this).

    In the past, I usually recommended “andOTP“. It checks all the boxes and is indeed a great app for this purpose. Unfortunately, it stopped being maintained a few months ago.

    While it is still a solid app, I don’t feel comfortable recommending it anymore.

    The bright side is that I went looking for a similar app and found “Aegis“, which happens to have great reviews, fulfills all the above requirements, and is still maintained. I guess this is the one I will be recommending when I’m asked “what to use for 2FA nowadays”.

  • Controlling the access to the clipboard contents

    In a previous blog post published earlier this year I explored some security considerations of the well known “clipboard” functionality that most operating systems provide.

    Long story short: in my opinion, there is a lot more that could be done to protect users (and their sensitive data) from the many attacks that use the clipboard as a vector to trick the user or extract sensitive material.

    The proof-of-concept I ended up building to demonstrate some of the ideas worked in X11 but didn’t achieve one of the desired goals:

    It seems possible to detect all attempts of accessing the clipboard, but after struggling a bit, it seems that due to the nature of X11 it is not possible to know which running process owns the window that is accessing the clipboard. A shame.

    Myself, last blog post on this topic

    The good news about the above quote is that it is no longer true. A kind soul contributed a patch that allows “clipboard-watcher” to fetch the required information about the process accessing the clipboard. Now we have all the ingredients to make the tool fulfill its initial intended purpose (and it does).

    With this lengthy introduction, we are ready to address the real subject of this post: giving the user more control over how the clipboard is used. Notifying the user about an access is just a first step; restricting the access is what we want.

    On this topic, several comments on the previous post mentioned the strategy used by Qubes OS. It relies on having one clipboard specific to each app and a second clipboard that is shared between apps, with the latter requiring user intervention to be used. While I think this is a good approach, it is not easy to replicate in a regular Linux distribution.

    However, as I mentioned in my initial post, I think we can achieve a similar result by asking the user for permission whenever an app requests the data currently stored on the clipboard. This was the approach Apple implemented in a recent iOS release.
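
    As a sketch of the idea, the gate can be as simple as a blocking yes/no dialog shown before the data is handed over (hypothetical helper; assumes zenity is installed):

    import subprocess

    def user_allows(app_hint: str) -> bool:
        # zenity exits with status 0 when the user clicks "Yes".
        result = subprocess.run(
            ["zenity", "--question",
             "--text", f"Allow '{app_hint}' to read the clipboard?"]
        )
        return result.returncode == 0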

    So, in order to check how this could work, I tried to adapt my proof-of-concept to ask for permission before sharing any data. Here’s an example:

    Working example of clipboard-watcher requesting permission before letting other apps access the clipboard contents.

    As we can see, it asks for permission before the requesting app is given the data, and it kinda works (ignore the clunky interface and UX). Of course, there are many possible improvements needed to make its usage bearable, such as whitelisting certain apps, “de-duplicating” the content requests (apps can generate a new one for each available content type, which ends up being spammy), etc.

    Overall, I’m pleased with the result, and in my humble opinion this should be a “must have” security feature for any good clipboard manager on Linux. I say this even taking into account that the approach is not bulletproof, given that a malicious application could continuously fight/race the clipboard manager for control of the “X selections”.

    Anyhow, the new changes to the proof-of-concept are available here; please give it a try and let me know what you think and whether you find any other problems.

  • Inlineshashes: a new tool to help you build your CSP

    Content-Security-Policy (CSP) is an important mechanism in today’s web security arsenal. It is a way of defending against Cross-Site Scripting and other attacks.

    It isn’t hard to get started with, or to put in place to secure your website or web application (I did that exercise in a previous post). However, when the systems are complex, or when you don’t fully control an underlying “codebase” that changes frequently (as happens with off-the-shelf software), things can get a bit messier.

    In those cases, it is harder to build a strict and simple policy: there are many moving pieces and/or you don’t control the code’s development, so you will end up opening exceptions and whitelisting certain pieces of content, making the policy more complex. This is especially true for inline elements, which makes the unsafe-inline source very appealing (its name tells you why you should avoid it).
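
    For reference, the strict alternative to unsafe-inline is whitelisting each inline element by the hash of its exact contents, which is what makes tracking changes necessary:

    import base64
    import hashlib

    # A CSP hash source is the base64-encoded SHA256 of the element's exact text.
    inline_script = "console.log('hello');"
    digest = base64.b64encode(hashlib.sha256(inline_script.encode()).digest()).decode()
    print(f"script-src 'sha256-{digest}'")  # changes whenever the inline code changes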

    Taking WordPress as an example, recommended theme and plugin updates can introduce changes in the included inline elements, which you will have to review in order to update your CSP. The task gets boring very quickly.

    To help with the task of building and maintaining the CSP in the cases described above, I recently started to work on a small tool (and library) to detect, inspect and whitelist new inline changes. You can check it here or download it directly from PyPI.

  • Who keeps an eye on clipboard access?

    If there is any feature that “universally” describes the usage of computers, it is the copy/paste pattern. We are used to it, practically all common graphical user interfaces support it, and it magically works.

    We copy some information from one application and paste into another, and another…

    How do these applications have access to this information? The clipboard must be something that is shared across all of them, right? Right.

    While very useful, this raises a lot of security questions. As far as I can tell, any app could be grabbing what is available on the clipboard.

    It isn’t uncommon for people to copy sensitive information from one app to another and even if the information is not sensitive, the user generally has a clear target app for the information (the others don’t have anything to do with it).

    These questions started bugging me a long time ago, and the sentiment got even worse when Apple released an iOS feature that notifies users when an app reads the contents of the clipboard. That was brilliant; why didn’t anyone think of that before?

    The result? Tons of apps were caught snooping on the clipboard contents without the user asking for it, as several articles at the time showed.

    That’s not good, and saying you won’t do it again is not enough. On iOS, apps were caught and users notified, but what about Android? What about other desktop operating systems?

    Accessing the clipboard to check what’s there, to steal passwords, to replace cryptocurrency addresses, or just to get a glimpse of what the user is doing, is a common malware pattern.

    I wonder why a similar feature hasn’t been implemented in most operating systems we use nowadays (it doesn’t need to be identical, but at least let us verify how the clipboard is being used). Perhaps there are tools that can help us with this; however, I wasn’t able to find any for Linux.

    A couple of weeks ago, I started to look at how this works (on Linux, which is what I’m currently using). What I found is that most libraries just provide a simple interface to put things on the clipboard and to get the current clipboard contents. Nothing else.

    After further digging, I finally found some useful and interesting articles on how this feature works on X11, under the hood of those high-level APIs.

    Then, with this bit of knowledge about how the clipboard works in X11, I decided to do a quick experiment in order to check if I can recreate the clipboard access notifications seen in iOS.

    During the small periods I had available over the last few weekends, I tried to build a quick proof of concept: nothing fancy, just a few pieces of code from existing examples stitched together.
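
    In spirit, the core of it boils down to owning the selection and watching who asks for it. A simplified sketch using python-xlib (a real clipboard owner must also reply with SelectionNotify events and serve the actual content):

    from Xlib import X, display

    d = display.Display()
    # Invisible 1x1 window that will own the CLIPBOARD selection.
    win = d.screen().root.create_window(0, 0, 1, 1, 0, X.CopyFromParent)
    clipboard = d.intern_atom("CLIPBOARD")
    win.set_selection_owner(clipboard, X.CurrentTime)
    d.flush()

    while True:
        ev = d.next_event()
        if ev.type == X.SelectionRequest:
            # Another client just asked for the clipboard contents.
            print("Clipboard requested, target:", d.get_atom_name(ev.target))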

    Here’s the current result:

    Demonstration of clipboard-watcher detecting when other apps access the contents

    It seems possible to detect all attempts of accessing the clipboard, but after struggling a bit, it seems that due to the nature of X11 it is not possible to know which running process owns the window that is accessing the clipboard. A shame.

    The information that X11 has about the requesting client must be provided by the client itself, which makes it very hard to know for sure which process it is (most of the time it is not provided at all).

    Nevertheless, I think this could still be a very useful capability for existing clipboard managers (such as Klipper), given that the core of this app works just like one.

    Even without knowing the process trying to access the clipboard contents, I can see a few useful features that are possible to implement, such as:

    • Create some stats about the clipboard access patterns.
    • Ask the user for permission before providing the clipboard contents.

    Anyhow, you can check the proof of concept here and give it a try (improvements are welcome). Let me know what you think and what I’ve missed.

  • Security.txt

    Some days ago, while scrolling my mastodon feed (for those who don’t know it, it is like Twitter, but instead of being a single website, the whole network is composed of many different entities that interact with each other), I found the following message:

    To server admins:

    It is a good practice to provide contact details, so others can contact you in case of security vulnerabilities or questions regarding your privacy policy.

    One upcoming but already widespread format is the security.txt file at https://your-server/.well-known/security.txt.

    See https://securitytxt.org/ and https://infosec-handbook.eu/.well-known/security.txt.

    @infosechandbook@chaos.social

    It caught my attention because my personal domain didn’t have one at the time. I’ve added it to other projects in the past, but do I need one for a personal domain?

    After some thought, I couldn’t find any reason why I shouldn’t add one in this particular case. So as you might already have guessed, this post is about the steps I took to add it to my domain.

    What is it?

    A small text file, just like robots.txt, placed in a well-known location, containing details about procedures, contacts and other key information required for security professionals to properly disclose their findings.

    Or in other words: Contact details in a text file.
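
    An illustrative example of its contents (hypothetical values):

    Contact: mailto:security@example.com
    Encryption: https://example.com/pgp-key.txt
    Policy: https://example.com/security-policy
    Preferred-Languages: en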

    security.txt isn’t yet an official standard (still a draft), but it addresses a common issue that security researchers encounter during their day-to-day activity: sometimes it’s harder to report a problem than it is to find it. I always remember the case of a Portuguese citizen, who spent ~5 months trying to contact someone who could fix some serious vulnerabilities in a governmental website.

    Even though it isn’t an accepted standard yet, it’s already being used in the wild.

    Need more examples? A small search finds them for you very quickly, or you can also read here a small analysis of the current status on Alexa’s top 1000 websites.

    Implementation

    So to help the cause I added one for this domain. It can be found at https://ovalerio.net/.well-known/security.txt

    Below are the steps I took:

    1. Go to https://securitytxt.org/ and fill in the required fields of the form present on that website.
    2. Fill in the extra fields, if they apply.
    3. Generate the text document.
    4. Sign the content using your PGP key:
      gpg --clear-sign security.txt
    5. Publish the signed file on your domain under https://<domain>/.well-known/security.txt
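
    Anyone can then fetch the published file and check the signature, for example:

      curl -s https://ovalerio.net/.well-known/security.txt | gpg --verify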

    As you can see, this is a very low-effort task, and it can generate very high returns if it leads to the disclosure of a serious vulnerability that would otherwise have gone unreported.