Tag: Security

  • Security.txt in the wild: 2025 edition

    One year ago, I checked the top 1 million “websites” for a security.txt file and posted the results on this blog. As described at the time, I used a tool written by someone else, who had already run this “experiment” in 2022.

    You can look at that post if you are keen to know what this file is, why I was curious about the adoption numbers, and what last year’s results were.

    As promised, I am collecting and publishing this information on the blog again this year. Yes, I did remember or, more precisely, my calendar app did.

    The first step was to download the same software again, and the second step was to download the most recent list of the top 1 million domains from the same source.

    Then, after consuming energy for a few hours and wasting some bandwidth, the results were the following:

                                    Total    Change from 2024
    Sites scanned                  999968    -0.003%
    Sites with a valid file          1773    -81%
    Sites with an invalid file      12140    +454%
    Sites without a file           986055    -0.25%

                        Contact   Policy   Hiring   Encryption   Expiry
    Sites with value      12019     4526     3107         3052     8480
    Change from 2024     +30.3%   +23.2%   +21.2%       +15.2%   +70.9%

    Overall, there was an expected increase in usage; however, the change from last year is again underwhelming. The number of domains with the file went from 11501 to 13913, which is a minimal improvement.

    The valid/invalid numbers seem to be messed up, but this could be due to the software being out of date with the spec. I didn’t waste too much time on this.

    Even ignoring the limitations described in the original author’s post and the valid-file detection issue, I think these results might not reflect reality, due to the huge number of errors found in the output file.

    Overall, adoption seems to be progressing, but it still seems very far from being something mainstream.

    If I do this next year, perhaps it will be better to use a different methodology and tools, so I can obtain more reliable results.

  • Are Redis ACL password protections weak?

    Earlier this year, I decided to explore Redis functionality a bit more deeply than my typical use-cases would require, mostly due to curiosity, but also to have better knowledge of this tool in my “tool belt”.

    Curiously, a few months later, the whole ecosystem started boiling. Now we have Redis, Valkey, Redict, Garnet, and perhaps a few more. The space is hot right now and forks/alternatives are popping up like mushrooms.

    One common thing the alternatives inherited from Redis is storing user passwords as plain SHA256 hashes. When I learned about this, I found it odd, since it goes against common best practices: the algorithm is very fast to brute force, does not protect against the usage of rainbow tables, etc.

    Instead of judging too fast, a better approach is to understand the reasoning for this decision, the limitations imposed by the use-cases, and the threats such an application might face.

    But first, let’s take a look at a more standard approach.

    Best practices for storing user passwords

    According to OWASP’s documentation on the subject, the following measures are important for applications storing users’ passwords:

    1. Use a strong and slow key derivation function (KDF).
    2. Add salt (if the KDF doesn’t include it already).
    3. Add pepper.

    The idea behind 1. is that computing a single hash should have a non-trivial cost (in time and memory), to decrease the speed at which an attacker can attempt to crack the stolen records.

    Adding a “salt” protects against the usage of “rainbow tables”; in other words, it doesn’t let the attacker simply compare the values with precomputed hashes of common passwords.

    The “pepper” (a common random string used in all records) adds an extra layer of protection, given that, unlike the “salt”, it is not stored with the data, so the attacker will be missing that piece of information.
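
    To make this concrete, here is a minimal sketch of all three measures combined, using Python’s built-in hashlib.scrypt (the cost parameters are illustrative, not a recommendation):

    import hashlib
    import os
    
    PEPPER = b"value-from-app-config"  # shared by all records, kept out of the database
    
    def hash_password(password: str) -> tuple[bytes, bytes]:
        salt = os.urandom(16)  # unique per record, stored next to the hash
        digest = hashlib.scrypt(
            password.encode() + PEPPER,
            salt=salt,
            n=2**14, r=8, p=1,  # work factors: tuned so a single hash has a noticeable cost
        )
        return salt, digest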

    Why does Redis use SHA256?

    To store user passwords, Redis relies on a vanilla SHA256 hash. No multiple iterations for stretching, no salt, no pepper, nor any other measures.

    Since SHA256 is meant to be very fast and lightweight, it will be easier for an attacker to crack the hash.
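
    Concretely, the stored value is just the plain digest of the password (an illustration of the scheme, not Redis’ actual code):

    import hashlib
    
    # No salt and no stretching: two users with the same password end up with
    # the same stored value, and the hash of a common password can simply be
    # looked up in a precomputed table.
    stored_hash = hashlib.sha256(b"correct horse battery staple").hexdigest()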

    So why this decision? Understanding the use-cases of Redis gives us the picture that establishing and authenticating connections needs to be very, very fast. The documentation is clear about it:

    Using SHA256 provides the ability to avoid storing the password in clear text while still allowing for a very fast AUTH command, which is a very important feature of Redis and is coherent with what clients expect from Redis.

    Redis Documentation

    So this is a constraint that rules out the usage of standard KDF algorithms.

    For this reason, slowing down the password authentication, in order to use an algorithm that uses time and space to make password cracking hard, is a very poor choice. What we suggest instead is to generate strong passwords, so that nobody will be able to crack it using a dictionary or a brute force attack even if they have the hash.

    Redis Documentation

    So far, understandable. However, my agreement ends with the last sentence of the above quote.

    How can it be improved?

    The documentation leaves to the user (aka server administrator) the responsibility of setting strong passwords. In their words, if you set passwords that are lengthy and not guessable, you are safe.

    In my opinion, this approach doesn’t fit well with the “Secure by default” principle, which, I think, is essential nowadays.

    It leaves to the user the responsibility of not only setting a strong password, but also ensuring that the password is almost uncrackable (a 32-byte random string, per their docs). Experience tells me that most users and admins won’t be aware of it or won’t do it.

    Another point made to support the “vanilla SHA256” approach is:

    Often when you are able to access the hashed password itself, by having full access to the Redis commands of a given server, or corrupting the system itself, you already have access to what the password is protecting: the Redis instance stability and the data it contains.

    Redis Documentation

    This is not entirely true, since ACL rules and users can be defined in configuration files and managed externally. These files contain the SHA256 hashes, which means that in many setups and scenarios the hashes won’t live only on the Redis server; this kind of configuration will be managed and stored elsewhere.

    I’m not the only one who thinks the current approach is not enough; teams building compatible but alternative implementations seem to share these concerns.

    So, after so many words and taking much of your precious time, you might ask, “what do you propose?”.

    Given the requirements for extremely fast connections and authentication, the first and main improvement would be to start using a “salt”. It is simple and won’t have any performance impact.

    The “salt” would make the hashes of not so strong passwords harder to crack, given that each password would have an extra random string that would have to be considered individually. Furthermore, this change could be made backwards compatible and added to existing external configuration files.
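
    A rough sketch of the idea, keeping a single fast hash so the AUTH path stays cheap (illustrative code, not an actual patch):

    import hashlib
    import secrets
    
    def hash_password(password: str) -> tuple[str, str]:
        salt = secrets.token_hex(16)  # random per-user value, stored next to the hash
        digest = hashlib.sha256((salt + password).encode()).hexdigest()
        return salt, digest
    
    def verify_password(password: str, salt: str, digest: str) -> bool:
        candidate = hashlib.sha256((salt + password).encode()).hexdigest()
        return secrets.compare_digest(candidate, digest)

    Verification remains a single hash computation, so authentication speed shouldn’t be measurably affected.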

    Then, I would consider picking a key stretching approach or a more appropriate KDF to generate the hashes. This one would need to be carefully benchmarked to minimize the performance impact. A small percentage of the time the whole process of initiating an authenticated connection takes could be a good compromise.

    I would skip for now the usage of a “pepper”, since it is not clear how this could be done and managed from the user’s side. Pushing this responsibility to the user (the Redis server operator) would create more complexity than benefit.

    An alternative approach, which would also be easy to implement and more secure than the current one, would be to automatically generate the “password” for the users by default. It would work like regular API keys, since it seems this is how Redis sees them:

    However ACL passwords are not really passwords. They are shared secrets between the server and the client, because the password is not an authentication token used by a human being.

    Redis Documentation

    The code already exists:

    …there is a special ACL command ACL GENPASS that generates passwords using the system cryptographic pseudorandom generator: …

    The command outputs a 32-byte (256-bit) pseudorandom string converted to a 64-byte alphanumerical string.

    Redis Documentation

    So it could be just a matter of requiring the user to explicitly bypass this automatic “API key” generation in order to set their own custom password.
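
    A hypothetical sketch of that flow, using Python’s secrets module in place of Redis’ internals:

    import secrets
    
    def create_user(name: str, password: str | None = None) -> str:
        if password is None:
            # mirrors ACL GENPASS: 32 random bytes, hex-encoded to 64 characters
            password = secrets.token_hex(32)
        # ... store sha256(password) as Redis does today ...
        return password  # shown once to the operator, like an API key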

    Summing it up

    To simply answer the question asked in the title: yes, I do think the user passwords could be better protected.

    Given the requirements and use-cases, it is understandable that there is a need to be fast. However, Redis should do more to protect the users’ passwords or at least ensure that users know what they are doing and pick an almost “uncrackable” password.

    So I ended up proposing:

    • An easy improvement: Add a salt.
    • A better improvement: Switch to a more appropriate KDF, with low work factor for performance reasons.
    • A different approach: Automatically generate by default a strong password for the ACL users.
  • Security.txt in the wild

    A few years ago, I covered the “security.txt spec” here on the blog: a standard place for security-related contacts, designed to help researchers, and other people, find the right contacts to report vulnerabilities and other problems.

    At the time, I added it to my personal domain, as an example.

    When I wrote the post, the spec was still fairly recent, so, as expected, it wasn’t widely adopted and only the more security-conscious organizations had put it in place.

    Since then, as part of my security work, I implemented it for several products, and the results were good. We received and triaged many reports that were sent to the correct addresses since day one.

    Many people who put the security.txt file in place complain about the amount of low-effort reports that are sent their way. I admit this situation is not ideal. However, I still think it is a net positive, and the problem can be minimized by having a good policy in place and a streamlined triage process.

    While I always push for the implementation of this method on the products I work on, I have very little information about how widespread the adoption of this “spec” is.

    The topic is very common in certain “hacker” forums, but when I talk to people, the impression I get is that this is an obscure thing.

    The website findsecuritycontacts.com relies on security.txt to get its information. It also monitors the top 500 domains every day to generate some stats. The results are disappointing: only ~20% of those websites implement it correctly.

    I remember reading reports that covered many more websites, but recently, I haven’t seen any. With a quick search, I was able to find this one.

    It was written in 2022, so the results are clearly dated. On the bright side, the author published the tool he used to gather the data, which means we can quickly gather more recent data.

    So, to satisfy my curiosity, I downloaded the tool, grabbed the up-to-date list of the top 1 million websites from tranco-list.eu, gathered the same data, and with a few lines of python code obtained the following results:

    • Total sites scanned: 999992
    • Sites with a valid file: 9312 (~0.93%)
    • Sites with an invalid file: 2189 (~0.22%)
    • Sites without a file: 988491 (~98.85%)
                        Contact   Policy   Hiring   Encryption   Expiry
    Sites with value       9218     3674     2564         2650     4960
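
    For reference, the per-site check boils down to something like the following (a rough sketch, not the actual tool’s code; the “Expiry” column corresponds to the spec’s Expires field):

    from urllib.request import urlopen
    
    FIELDS = ("Contact", "Policy", "Hiring", "Encryption", "Expires")
    
    def check_site(domain: str) -> dict[str, bool] | None:
        try:
            with urlopen(f"https://{domain}/.well-known/security.txt", timeout=10) as resp:
                body = resp.read().decode("utf-8", errors="replace")
        except Exception:
            return None  # unreachable or no file at all
        lines = body.splitlines()
        return {field: any(line.startswith(field + ":") for line in lines) for field in FIELDS}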

    The results are a bit underwhelming; I’m not sure if this is due to a flaw in the software, or if it is an accurate picture of reality.

    On the other hand, if we compare with the results the original author obtained, this is about a 3-fold improvement over a period of a year and a half, which is a good sign.

    Next year, if I don’t forget, I will run the experiment again, to check the progress once more.

  • Meet the InfoSec Planet

    If you are a frequent reader of this blog, you might already know that I created a small tool to generate a simple webpage plus an RSS feed, from the content of multiple other RSS sources, called worker-planet.

    This type of tool is often known as a “planet”:

    In online media a planet is a feed aggregator application designed to collect posts from the weblogs of members of an internet community and display them on a single page.

    Wikipedia

    While the tool is open-source, a person needs to deploy it before being able to see it in action. Not great.

    This brings us to last week. I was reading a recent issue of a popular newsletter, when I found an OPML file containing 101 infosec related sources curated by someone else.

    Instead of adding them to my newsreader, which, to be honest, already contains a lot of cruft that I never read and should remove anyway, I saw a great fit for building a demo site for `worker-planet`.

    Preparing the sources

    The first step was to extract all the valid sources from that file. This is important because there is the chance that many of the items might not be working or online at all, since the file is more than 2 years old.

    A quick python script can help us with this task:

    import xml.etree.ElementTree as ET
    from concurrent.futures import ThreadPoolExecutor
    from urllib.request import urlopen
    
    opml_file = "infosec.opml"  # path to the downloaded OPML file
    
    # Extract existing URLs
    urls = []
    tree = ET.parse(opml_file)
    for element in tree.getroot().iter("outline"):
        if url := element.get("xmlUrl"):
            urls.append(url)
    
    # Make sure they are working
    def check_feed(url):
        try:
            response = urlopen(url)
            if 200 <= response.status < 300:
                body = response.read().decode("utf-8")
                ET.fromstring(body)
                return url
        except Exception:
            pass
    
    working_urls = []
    with ThreadPoolExecutor(max_workers=20) as executor:
        for result in executor.map(check_feed, urls):
            if result:
                working_urls.append(result)

    As expected, of the 101 sources present in the file, only 54 seem to be working.

    Deploying

    Now that we already have the inputs we need, it is time to set up and deploy our worker-planet.

    Assuming there aren’t any customizations, we just have to copy the wrangler.toml.example to a new wrangler.toml file and fill in the configs as desired. Here’s the one I used:

    name = "infosecplanet"
    main = "./worker/script.js"
    compatibility_date = "2023-05-18"
    node_compat = true
    account_id = "<my_id>"
    
    workers_dev = true
    kv_namespaces = [
        { binding = "WORKER_PLANET_STORE", id = "<namespace_id_for_prod>", preview_id = "<namespace_id_for_dev>" },
    ]
    
    [vars]
    FEEDS = "<all the feed urls>"
    MAX_SIZE = 100
    TITLE = "InfoSec Planet"
    DESCRIPTION = "A collection of diverse security content from a curated list of sources. This website also serves as a demo for \"worker-planet\", the software that powers it."
    CUSTOM_URL = "https://infosecplanet.ovalerio.net"
    CACHE_MAX_AGE = "300"
    
    [triggers]
    crons = ["0 */2 * * *"]

    Then npm run build plus npm run deploy. And it is done; the new planet should now be accessible through my workers.dev subdomain.

    All that is left is to wait for the cron job to execute and to configure any custom routes/domains on Cloudflare’s dashboard.

    The final result

    The new “InfoSec Planet” is available at “https://infosecplanet.ovalerio.net” and lists the latest content from those infosec-related sources. A unified RSS feed is also available.

    In the coming weeks, I will likely refine the list of sources a bit, to improve the overall quality of the content.

    One thing I would like to highlight is that I took special precaution not to include the full content of the feeds in the InfoSec Planet’s output.

    I did it this way because I didn’t ask all those authors for permission to include the contents of their public feeds in the page. So just a small snippet is shown together with the title.

    Nevertheless, if some author wishes to remove their public feed from the page, I will gladly do so once notified (by email?).

  • What to use for “TOTP” in 2023?

    At the start of last week, we received great news regarding new improvements to a very popular security app, “Google Authenticator”. A feature it was lacking for a long time was finally implemented, “cloud backups”.

    However, after a few days, the security community realized the new feature wasn’t as good as everybody was assuming. It lacks “end-to-end encryption”. In other words, when users back up their 2FA codes to the cloud, Google has complete access to these secrets.

    Even ignoring the initial bugs (check this one and also this one), it is a big deal because any second factor should only be available to the “owner”. Having multiple entities with access to these codes defeats the whole purpose of having a second factor (again, ignoring any privacy shortcomings).

    Summing up, if you use Google Authenticator, do not activate the cloud backups.

    And this brings us to the topic of today’s post: “What app (or mechanism) should I use for 2FA?”

    This question is broader than one might initially expect, since we have multiple methods at our disposal.

    SMS codes should be on their way out, for multiple reasons, but especially because of the widespread SIM swapping vulnerabilities.

    Push-based authenticators don’t seem to be a great alternative. They are not standardized, they tie the user to proprietary ecosystems, and they can’t be used everywhere.

    In an ideal scenario, everyone would be using FIDO2 (“Webauthn”) mechanisms, with hardware keys or letting their device’s platform handle the secret material.

    While support is growing, and we should definitely start using it where we can, the truth is, it is not yet widely accepted. This means we still need to use another form of 2FA, where FIDO2 isn’t supported yet.

    That easy-to-use and widely accepted second factor is TOTP.

    This still is the most independent and widely used form of 2FA we have nowadays. Basically, you install an authenticator app that provides you with temporary codes to use in each service after providing the password. One of the most popular apps for TOTP is the “problematic” Google Authenticator.
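
    Under the hood, all TOTP apps do the same thing: derive a short-lived code from a shared secret and the current time, as specified in RFC 6238. A minimal sketch:

    import base64
    import hmac
    import struct
    import time
    
    def totp(secret_b32: str, digits: int = 6, period: int = 30) -> str:
        key = base64.b32decode(secret_b32, casefold=True)
        counter = int(time.time()) // period  # current time window
        mac = hmac.new(key, struct.pack(">Q", counter), "sha1").digest()
        offset = mac[-1] & 0x0F  # dynamic truncation, per RFC 4226
        code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % 10**digits).zfill(digits)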

    What are the existing alternatives?

    Many password managers (1Password, Bitwarden, etc.) also offer the possibility to generate these codes for you. However, I don’t like this approach because the different factors should be:

    • Something you know
    • Something you have
    • Something you are

    In this case, the password manager already stores the first factor (the “something you know”), so having all eggs in the same basket doesn’t seem to be a good idea.

    For this reason, from now on, I will focus on apps that allow me to store these codes in a separate device (the “something you have”).

    My requirements for such an app are:

    • Data is encrypted at rest.
    • Access is secured by another form of authentication.
    • Has easy offline backups.
    • It is easy to restore a backup.
    • Secure display (tap before the code is displayed on the screen).
    • Open source.
    • Available for Android.

    There are dozens of them, but many don’t comply with all the points above, while others have privacy and security issues that I can’t overlook (just to give you a glimpse, check this).

    In the past, I usually recommended “andOTP“. It checks all the boxes and is indeed a great app for this purpose. Unfortunately, it stopped being maintained a few months ago.

    While it is still a solid app, I don’t feel comfortable recommending it anymore.

    The bright side is that I went looking for a similar app and found “Aegis”, which happens to have great reviews, fulfills all the above requirements, and is still maintained. I guess this is the one I will be recommending when I’m asked “what to use for 2FA nowadays”.

  • Controlling the access to the clipboard contents

    In a previous blog post published earlier this year I explored some security considerations of the well known “clipboard” functionality that most operating systems provide.

    Long story short, in my opinion there is a lot more that could be done to protect the users (and their sensitive data) from the many attacks that use the clipboard as a vector to trick the user or extract sensitive material.

    The proof-of-concept I ended up building to demonstrate some of the ideas worked in X11 but didn’t achieve one of the desired goals:

    It seems possible to detect all attempts of accessing the clipboard, but after struggling a bit, it seems that due to the nature of X11 it is not possible to know which running process owns the window that is accessing the clipboard. A shame.

    Myself, last blog post on this topic

    The good news about the above quote is that it is no longer true. A kind soul contributed a patch that allows “clipboard-watcher” to fetch the required information about the process accessing the clipboard. Now we have all the ingredients to make the tool fulfill its initially intended purpose (and it does).

    With this lengthy introduction, we are ready to address the real subject of this post: giving the user more control over how the clipboard is used. Notifying users about an access is just a first step; restricting the access is what we want.

    On this topic, several comments to the previous post mentioned the strategy used by Qubes OS. It relies on having one clipboard specific to each app and a second clipboard that is shared between apps. The latter requires user intervention to be used. While I think this is a good approach, it is not easy to replicate in a regular Linux distribution.

    However, as I mentioned in my initial post, I think we can achieve a similar result by asking the user for permission when an app requests the data currently stored on the clipboard. This was the approach Apple implemented in the recent iOS release.

    So in order to check/test how this could work, I tried to adapt my proof-of-concept to ask for permission before sharing any data. Here’s an example:

    Working example of clipboard-watcher requesting permission before letting other apps access the clipboard contents.

    As we can see, it asks for permission before the requesting app is given the data, and it kinda works (ignore the clunky interface and UX). Of course, there are many possible improvements to make its usage bearable, such as whitelisting certain apps, “de-duplicating” the content requests (apps can generate a new one for each available content type, which ends up being spammy), etc.

    Overall I’m pleased with the result, and in my humble opinion this should be a “must have” security feature for any good clipboard manager on Linux. I say this even taking into account that this approach is not bulletproof, given that a malicious application could continuously fight/race the clipboard manager for the control of the “X selections”.

    Anyhow, the new changes to the proof-of-concept are available here; please give it a try and let me know what you think and if you find any other problems.

  • Inlineshashes: a new tool to help you build your CSP

    Content-Security-Policy (CSP) is an important mechanism in today’s web security arsenal. It is a way of defending against Cross-Site Scripting and other attacks.

    It isn’t hard to get started with or to put in place in order to secure your website or web application (I did that exercise in a previous post). However, when the systems are complex, or when you don’t fully control an underlying “codebase” that frequently changes (as happens with off-the-shelf software), things can get a bit messier.

    In those cases it is harder to build a strict and simple policy, since there are many moving pieces and/or you don’t control the code development, so you will end up opening exceptions and whitelisting certain pieces of content, making the policy more complex. This is especially true for inline elements, making the unsafe-inline source very appealing (its name tells you why you should avoid it).

    Taking WordPress as an example, recommended theme and plugin updates can introduce changes in the included inline elements, which you will have to review in order to update your CSP. The task gets boring very quickly.

    To help with the task of building and maintaining the CSP in the cases described above, I recently started to work on a small tool (and library) to detect, inspect and whitelist new inline changes. You can check it here or download it directly from PyPI.
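
    The basic computation behind the whitelisting is simple: a CSP hash source is the Base64-encoded SHA-256 digest of the inline element’s exact contents. A minimal sketch of that step (the tool automates detecting and tracking these elements):

    import base64
    import hashlib
    
    def csp_hash(inline_content: str) -> str:
        # hash the exact bytes between the <script> or <style> tags
        digest = hashlib.sha256(inline_content.encode()).digest()
        return "'sha256-" + base64.b64encode(digest).decode() + "'"
    
    print(csp_hash("console.log('hello');"))  # value to whitelist in script-src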

  • Who keeps an eye on clipboard access?

    If there is any feature that “universally” describes the usage of computers, it is the copy/paste pattern. We are used to it, practically all the common graphical user interfaces have support for it, and it magically works.

    We copy some information from one application and paste into another, and another…

    How do these applications have access to this information? The clipboard must be something that is shared across all of them, right? Right.

    While very useful, this raises a lot of security questions. As far as I can tell, all apps could be grabbing what is available on the clipboard.

    It isn’t uncommon for people to copy sensitive information from one app to another and even if the information is not sensitive, the user generally has a clear target app for the information (the others don’t have anything to do with it).

    These questions started bugging me a long time ago, and the sentiment got even worse when Apple released an iOS feature that notifies users when an app reads the contents of the clipboard. That was brilliant; why didn’t anyone think of that before?

    The result? Tons of apps caught snooping into the clipboard contents without the user asking for it. The following articles can give you a glimpse of what followed:

    That’s not good, and saying you won’t do it again is not enough. On iOS, apps were caught and users notified, but what about Android? What about other desktop operating systems?

    Accessing the clipboard to check what’s there, then steal passwords, or replace cryptocurrency addresses or just to get a glimpse of what the user is doing is a common pattern of malware.

    I wonder why a similar feature hasn’t been implemented in most operating systems we use nowadays (it doesn’t need to be identical, but it should at least let us verify how the clipboard is being used). Perhaps there exist tools that can help us with this; however, I wasn’t able to find any for Linux.

    A couple of weeks ago, I started to look at how this works (on Linux, which is what I’m currently using). What I found is that most libraries just provide a simple interface to put things on the clipboard and to get the current clipboard content. Nothing else.
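
    For example, with a typical library the whole API surface looks like this, with no notion of who else might be reading (pyperclip shown; other libraries are similar):

    import pyperclip
    
    pyperclip.copy("possibly sensitive text")  # put data on the shared clipboard
    print(pyperclip.paste())                   # read whatever is currently there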

    After further digging, I finally found some useful and interesting articles on how this feature works on X11 (under the hood of those high level APIs). For example:

    Then, with this bit of knowledge about how the clipboard works in X11, I decided to do a quick experiment in order to check if I can recreate the clipboard access notifications seen in iOS.

    During the small periods I had available in the last few weekends, I tried to build a quick proof of concept, nothing fancy, just a few pieces of code from existing examples stitched together.

    Here’s the current result:

    Demonstration of clipboard-watcher detecting when other apps access the contents

    It seems possible to detect all attempts of accessing the clipboard, but after struggling a bit, it seems that due to the nature of X11 it is not possible to know which running process owns the window that is accessing the clipboard. A shame.

    The information that X11 has about the requesting client must be provided by the client itself, which makes it very hard to know for sure which process it is (most of the time it is not provided at all).
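
    For context, this is roughly what the owner’s side of the protocol looks like (a simplified sketch using python-xlib; the actual proof of concept does more work, including answering the requests):

    from Xlib import X, display
    
    d = display.Display()
    clipboard = d.intern_atom("CLIPBOARD")
    win = d.screen().root.create_window(0, 0, 1, 1, 0, X.CopyFromParent)
    win.set_selection_owner(clipboard, X.CurrentTime)
    d.flush()
    
    while True:
        event = d.next_event()
        if event.type == X.SelectionRequest:
            # every attempt by another client to read the clipboard arrives here,
            # but nothing forces the requestor to reveal which process it is
            print("window 0x%x requested the clipboard" % event.requestor.id)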

    Nevertheless, I think this could still be a very useful capability for existing clipboard managers (such as Klipper), given that the core of this app works just like one.

    Even without knowing the process trying to access the clipboard contents, I can see a few useful features that are possible to implement, such as:

    • Create some stats about the clipboard access patterns.
    • Ask the user for permission, before providing the clipboard contents.

    Anyhow, you can check the proof of concept here and give it a try (improvements are welcome). Let me know what you think and what I’ve missed.

  • Why you shouldn’t remove your package from PyPI

    Nowadays most software developed using the Python language relies on external packages (dependencies) to get the job done. Correctly managing this “supply-chain” ends up being very important and having a big impact on the end product.

    As a developer you should be cautious about the dependencies you include in your project, as I explained in a previous post, but you are always dependent on the job done by the maintainers of those packages.

    As a public package owner/maintainer, you also have to be aware that the code you write, your decisions and your actions will have an impact on the projects that depend directly or indirectly on your package.

    With this small introduction we arrive at the topic of this post, which is “What to do as a maintainer when you no longer want to support a given package?” or “How to properly rename my package?”.

    In both of these situations, you might think “I will start by removing the package from PyPI”. I hope the next lines will convince you that this is the worst thing you can do, for two reasons:

    • You will break the code or the build systems of all projects that depend on the current or past versions of your package.
    • You will free the namespace for others to use and if your package is popular enough this might become a juicy target for any malicious actor.

    TLDR: you will screw your “users”.

    The left-pad incident, while it didn’t happen in the python ecosystem, is a well known example of the first point and shows what happens when a popular package gets removed from the public index.

    Malicious actors usually register packages using names that are similar to other popular packages with the hope that a user will end up installing them by mistake, something that already has been found multiple times on PyPI. Now imagine if that package name suddenly becomes available and is already trusted by other projects.

    What should you do then?

    Just don’t delete the package.

    I admit that on some rare occasions it might be required, but most of the time the best thing to do is to leave it there (especially for open-source ones).

    Adding a warning to the code and informing the users in the README file that the package is no longer maintained or safe to use is also a nice thing to do.

    A good example of this process being done properly was the renaming of model-mommy to model-bakery; as a user, it was painless. Here’s an overview of the steps they took:

    1. A new source code repository was created with the same contents. (This step is optional)
    2. After doing the required changes a new package was uploaded to PyPI.
    3. Deprecation warnings were added to the old code, mentioning the new package (a sketch of this step follows the list).
    4. The documentation was updated mentioning the new package and making it clear the old package will no longer be maintained.
    5. A new release of the old package was created, so the user could see the deprecation warnings.
    6. All further development was done on the new package.
    7. The old code repository was archived.
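
    Step 3, for instance, amounts to just a few lines in the old package’s __init__.py (an illustrative sketch with hypothetical names, not the actual model_mommy code):

    # old_package/__init__.py
    import warnings
    
    warnings.warn(
        "Important: old_package is no longer maintained. "
        "Please use new_package instead: https://pypi.org/project/new-package/",
        DeprecationWarning,
        stacklevel=2,
    )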

    So here is what is shown every time the test suite of an affected project is executed:

    /lib/python3.7/site-packages/model_mommy/__init__.py:7: DeprecationWarning: Important: model_mommy is no longer maintained. Please use model_bakery instead: https://pypi.org/project/model-bakery/

    In the end, even though I didn’t update right away, everything kept working and I was constantly reminded that I needed to make the change.

  • Security.txt

    Some days ago, while scrolling my mastodon feed (for those who don’t know it, it is like Twitter, but instead of being a single website, the whole network is composed of many different entities that interact with each other), I found the following message:

    To server admins:

    It is a good practice to provide contact details, so others can contact you in case of security vulnerabilities or questions regarding your privacy policy.

    One upcoming but already widespread format is the security.txt file at https://your-server/.well-known/security.txt.

    See https://securitytxt.org/ and https://infosec-handbook.eu/.well-known/security.txt.

    @infosechandbook@chaos.social

    It caught my attention because my personal domain didn’t have one at the time. I’ve added it to other projects in the past, but do I need one for a personal domain?

    After some thought, I couldn’t find any reason why I shouldn’t add one in this particular case. So as you might already have guessed, this post is about the steps I took to add it to my domain.

    What is it?

    A small text file, just like robots.txt, placed in a well known location, containing details about procedures, contacts and other key information required for security professionals to properly disclose their findings.

    Or in other words: Contact details in a text file.

    security.txt isn’t yet an official standard (still a draft) but it addresses a common issue that security researchers encounter during their day-to-day activity: sometimes it’s harder to report a problem than it is to find it. I always remember the case of a Portuguese citizen, who spent ~5 months trying to contact someone that could fix some serious vulnerabilities in a governmental website.

    Even though it isn’t an accepted standard yet, it’s already being used in the wild:

    Need more examples? A small search finds them for you very quickly, or you can also read here a small analysis of the current status of Alexa’s top 1000 websites.

    Implementation

    So to help the cause I added one for this domain. It can be found at https://ovalerio.net/.well-known/security.txt

    Below are the steps I took:

    1. Go to https://securitytxt.org/ and fill the required fields of the form present on that website.
    2. Fill the extra fields if they apply.
    3. Generate the text document.
    4. Sign the content using your PGP key
      gpg --clear-sign security.txt
    5. Publish the signed file on your domain under https://<domain>/.well-known/security.txt

    As you can see, this is a very low effort task and it can generate very high returns, if it leads to a disclosure of a serious vulnerability that otherwise would have gone unreported.

  • CSP headers using Cloudflare Workers

    Last January I made a small post about setting up a “Content-Security-Policy” header for this blog. On that post I described the steps I took to reach a final result, that I thought was good enough given the “threats” this website faces.

    This process usually isn’t hard. If you develop the website’s software and have a high level of control over the development decisions, the end result ends up being a simple yet very strict policy. However, if you do not have that degree of control over the code (and do not want to break the functionality), the policy can end up more complex and lax than you were initially hoping for. That’s what happened in my case, since I currently use a standard installation of WordPress for the blog.

    The end result was a different security policy for different routes and sections (this part was not included in the blog post), which made the web-server configuration quite messy.

    (With this intro, you might have already noticed that I’m just making excuses to redo the initial and working implementation, in order to test some sort of new technology)

    Given the blog is behind the Cloudflare CDN and they introduced their “serverless” product called “Workers” a while ago, I decided that I could try to manage the policy dynamically on their servers.

    Browser <--> CF Edge Server <--> Web Server <--> App

    The above line describes the current setup, so instead of adding the CSP header on the “App” or the “Web Server” stages, the header is now added to the response on the last stage before reaching the browser. Let me describe how I’ve done it.

    Cloudflare Workers

    First, a very small introduction to Workers; you can find more detailed information later on Workers.dev.

    So, first Cloudflare added the v8 engine to all edge servers that route the traffic of their clients, then more recently they started letting these users write small programs that can run on those servers inside the v8 sandbox.

    The programs are built very similarly to how you would build a service worker (they use the same API), the main difference being where the code runs (browser vs edge server).

    These “serverless” scripts can then be called directly through a specific endpoint provided by Cloudflare. In this case they should create and return a response to the requests.

    Or you can instruct Cloudflare to execute them on specific routes of your website. This means that the worker can generate the response, execute any action before the request reaches your website, or change the response that is returned.

    This service is charged based on the number of requests handled by the “workers”.

    The implementation

    Going back to the original problem and based on the above description, we can dynamically introduce or modify the “Content-Security-Policy” for each request that goes through the worker, which gives us a high degree of flexibility.

    So for my case, a simple script like the one below did the job just fine.

    addEventListener('fetch', event => {
      event.respondWith(handleRequest(event.request))
    })
    
    /**
     * Forward the request and swap the response's CSP header
     * @param {Request} request
     */
    async function handleRequest(request) {
      let policy = "<your-custom-policy-here>"
      let originalResponse = await fetch(request)
      const response = new Response(originalResponse.body, originalResponse)
      response.headers.set('Content-Security-Policy', policy)
      return response
    }

    The script just listens for the request and passes it to a handler function, which forwards it to the origin server, grabs the response, replaces the CSP header with the defined policy, and then returns the modified response.

    If I needed something more complex, like making slight changes to the policy depending on the User-Agent to make sure different browsers behave as expected given the different implementations or compatibility issues, it would also be easy. This is something that would be harder to achieve in the config file of a regular web server (nginx, apache, etc).

    Enabling the worker

    Now that the script is done and the worker deployed, in order to make it run on certain requests to my blog, I just had to go to the Cloudflare dashboard of my domain, click on the “Workers” section, and add the routes where I want it to be executed:

    Configuring the routes that will use the worker

    The settings displayed in the above picture will run the worker on all requests to this blog, but it can be made more specific, and I can even have multiple workers for different routes.

    Some sort of conclusion

    Despite the use-case described in this post being very simple, there is potential in this new “serverless” offering from Cloudflare. It definitely helped me solve the problem of having different policies for different sections of the website without much trouble.

    In the future I might come back to it, to explore other use-cases or implementation details.

  • Setting up a Content-Security-Policy

    A couple of weeks ago, I gave a small talk on the Madeira Tech Meetup about a set of HTTP headers that could help website owners protect their assets and their users. The slides are available here, just in case you want to take a look.

    The content of the talk is basically a small review about what exists, what each header tries to achieve and how could you use it.

    After the talk, I remembered that I hadn’t reviewed the headers of this blog for quite some time. So a quick visit to Mozilla Observatory, a tool that lets you have a quick look at some of the security configurations of your website, gave me an idea of what I needed to improve. This was the result:

    The Content-Security-Policy header was missing

    So what is a Content Security Policy? On the MDN documentation we can find the following description:

    The HTTP Content-Security-Policy response header allows web site administrators to control resources the user agent is allowed to load for a given page.

    Mozilla Developer Network

    Summing up, in this header we describe with a certain level of detail the sources from where each type of content can be fetched in order to be allowed and included on a given page/app. The main goal of this type of policy is to mitigate Cross-Site Scripting attacks.

    To start building a CSP for this blog, a good approach, in my humble opinion, is to start with the most basic and restrictive policy, then evaluate the need for exceptions and only add them when strictly necessary. So here is my first attempt:

    default-src 'self'; object-src 'none'; report-uri https://ovalerio.report-uri.com/r/d/csp/reportOnly

    Let’s interpret what it says:

    • default-src: This is the default value for all non-mentioned directives. self means “only things that come from this domain”.
    • object-src: No <object>, <embed> or <applet> here.
    • report-uri: All policy violations should be reported by the browser to this URL.

    The idea was that all styles, scripts and images should be served by this domain; anything external should be blocked. This will also block inline scripts, styles and data images, which are considered unsafe. If for some reason I need to allow them on the blog, I could use unsafe-inline, eval and data: in the directive’s definition, but in my opinion they should be avoided.

    Now, a good way to find out how this policy will affect the website, and to understand how it needs to be tuned (or the website changed), is to activate it using the “report only” mode:

    Content-Security-Policy-Report-Only: <policy>

    This mode will generate some reports when you (and other users) navigate through the website; they will be printed in the browser’s console and sent to the defined report-uri, but the resources will be loaded anyway.

    Here are some results:

    Example of the CSP violations on the browser console

    As an example, below is a raw report from one of those violations:

    {
        "csp-report": {
            "blocked-uri": "inline",
            "document-uri": "https://blog.ovalerio.net/",
            "original-policy": "default-src 'self'; object-src 'none'; report-uri https://ovalerio.report-uri.com/r/d/csp/reportOnly",
            "violated-directive": "default-src"
        }
    }

    After a while I found that:

    • The theme used on this blog used some data: fonts
    • Several inline scripts were being loaded
    • Many inline styles were also being used
    • I have some demos that load content from asciinema.org
    • I often share some videos from YouTube, so I need to allow iframes from that domain
    • Some older posts also embedded content from other websites (such as SoundCloud)

    So for the blog to work fine with the CSP being enforced, I either had to include some exceptions or fix errors. After evaluating the attack surface and the work required to make the changes I ended up with the following policy:

    Content-Security-Policy-Report-Only: default-src 'self'; script-src 'self' https://asciinema.org 'sha256-A+5+D7+YGeNGrYcTyNB4LNGYdWr35XshEdH/tqROujM=' 'sha256-2N2eS+4Cy0nFISF8T0QGez36fUJfaY+o6QBWxTUYiHc=' 'sha256-AJyUt7CSSRW+BeuiusXDXezlE1Wv2tkQgT5pCnpoL+w=' 'sha256-n3qH1zzzTNXXbWAKXOMmrBzjKgIQZ7G7UFh/pIixNEQ='; style-src 'self' 'sha256-MyyabzyHEWp8TS5S1nthEJ4uLnqD1s3X+OTsB8jcaas=' 'sha256-OyKg6OHgnmapAcgq002yGA58wB21FOR7EcTwPWSs54E='; font-src 'self' data:; img-src 'self' https://secure.gravatar.com; frame-src 'self' https://www.youtube.com https://asciinema.org; object-src 'none'; report-uri https://ovalerio.report-uri.com/r/d/csp/reportOnly

    A lot more complex than I initially expected it to be, but it’s one of the drawbacks of using a “pre-built” theme on a platform that I didn’t develop. I was able (in the available time) to fix some stuff but fixing everything would take a lot more work.

    All those sha-256 hashes were added to only allow certain inline scripts and styles without allowing everything using unsafe-inline.

    Perhaps in the future I will be able to change to a saner theme/platform, but for the time being this Content-Security-Policy will do the job.

    I started enforcing it (by changing Content-Security-Policy-Report-Only to Content-Security-Policy) just before publishing this blog post, so if anything is broken please let me know.

    I hope this post has been helpful to you and if you didn’t yet implement this header you should give it a try, it might take some effort (depending on the use case) but in the long run I believe it is totally worth it.

  • Staying on an AirBnB? Look for the cameras

    When going on a trip, it is now common practice to consider staying in a rented apartment or house instead of a hotel or hostel, mostly thanks to AirBnB, which made it really easy and convenient for both sides of the deal. Most of the time the price is super competitive, and I would say it is a great fit for many situations.

    However, as happens with almost anything, it has its own set of problems and challenges. One example of these new challenges is the reports (and confirmations) that some, let’s call them malicious hosts, have been putting cameras in place to monitor the guests during their stay.

    With a small search on the internet, you can find plenty of these reports.

    Someone equipped with the right knowledge and a computer can try to check if a camera is connected to the WiFi network, like this person did:

    Toot describing a camera that was hidden inside a box

    If this is your case, the following post provides a few guidelines to look for the cameras:

    Finally, try to figure out the public IP address of the network you are on ( https://dshield.org/api/myip ) and either run a port scan from the outside to see if you find any odd open ports, or look it up in Shodan to see if Shodan found cameras on this IP in the past (but you likely will have a dynamic IP address).

    InfoSec Handlers Diary Blog

    This page even provides a script that you can execute to automatically do most of the steps explained in the above article.

    However, sometimes you don’t bring your computer with you, which means you would have to rely on your smartphone to do this search. I’m still trying to find a good, trustworthy and intuitive app to recommend, since using nmap on Android will not help the less tech-savvy people.

    Meanwhile, I hope the above links provide you with some ideas and useful tools to look for hidden cameras while you stay at a rented place.

  • Looking for security issues on your python projects

    In today’s post I will introduce a few open-source tools that can help you improve the security of any of your python projects and detect possible vulnerabilities early on.

    These tools are quite well known in the python community, and used together they will provide you with great feedback about common issues and pitfalls.

    Safety and Piprot

    As I discussed some time ago in a post about managing dependencies and the importance of checking them for known issues, in python there is a tool that compares the items of your requirements.txt with a database of known vulnerable versions. It is called safety (repository) and can be used like this:

    safety check --full-report -r requirements.txt

    If you already use pipenv, safety is already incorporated and can be used by running pipenv check (more info here).

    Since the older the dependencies are, the higher the probability of a certain package containing bugs and issues, another great tool that can help you with this is piprot (repository).

    It will check all items on your requirements.txt and tell you how outdated they are.
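
    If I remember its usage correctly, running it follows the same pattern:

    piprot requirements.txt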

    Bandit

    The next tool in line is bandit, a static analyzer for python built by the OpenStack Security Project. It checks your codebase for common security problems and programming mistakes that might compromise your application.

    It will find cases of hardcoded passwords, bad SSL defaults, usage of eval, weak ciphers, different “injection” possibilities, etc.

    It doesn’t require much configuration and you can easily add it to your project. You can find more on the official repository.

    Python Taint

    This last one only applies if you are building a web application and requires a little bit more effort to integrate in your project (at its current state).

    Python Taint (pyt) is a static analyzer that tries to find spots where your code might be vulnerable to common types of problems that affect websites and web apps, such as SQL injection, cross-site scripting (XSS), etc.

    The repository can be found here.

    If you are using Django, after using pyt you might also want to run the built-in manage.py check command (as discussed in a previous post) to verify some specific configurations of the framework present in your project.
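
    If memory serves, the --deploy flag enables the checks that are relevant for production configurations:

    python manage.py check --deploy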


  • Upgrade your “neo-python” wallets

    Several weeks ago I started to explore the NEO ecosystem. For those who are not aware, NEO is a blockchain project that, just like Ethereum, intends to provide the tools and the platform to execute smart-contracts and create new types of decentralized applications. It has its pros and cons just like any other system, but that is outside the scope of this blog post.

    One of the defining characteristics of this “cryptocurrency” is the ability to develop those smart-contracts in programming languages the user is already familiar with (however, only a small subset of each language is available).

    So I searched for the available SDKs and found the neo-python project, which is wallet software and also a set of tools for developing with the Python programming language. The project is developed by a community of supporters of the NEO ecosystem called City of Zion.

    And now the real topic of the post begins: while learning the features and exploring the codebase, I found an urgent security issue in the way the wallets were being encrypted by neo-python.

    Long story short, the method used to protect the wallets wasn’t correctly implemented and allowed an attacker with access to the wallet file to decrypt it without the need for the password/pass-phrase (more details here).

    Fortunately it is an actively developed project, and the team responsible for it was quick to acknowledge the problem and merge the fix I proposed in a pull request. The fix is now present in the newer versions of the project, and it forces the users to reset the security features of their wallets (check this video for more details, from minute 8 to 10).

    Now, in this post, I would like to leave my recommendation on how to proceed after re-encrypting the wallet, because even though the issue is fixed, your private keys might have been compromised before you applied the patch. If you are a user and haven’t noticed anything yet, the most probable scenario is that you weren’t compromised, since the most immediate thing an attacker could/would do is steal your funds.

    Nevertheless, there is always the possibility and to avoid any bad surprises you definitely should:

    1. Properly encrypt your wallet using the reencrypt_wallet.py script.
    2. Check that the newly generated wallet is working properly.
    3. Then delete the old wallet.
    4. Create a new wallet.
    5. Transfer your funds to the new wallet.

    Steps 4 and 5 are necessary because the fix protects your master key but doesn’t change it; as I previously said, if a copy of your vulnerable wallet exists (created by you or by an attacker), your funds are still accessible. So don’t forget to go through them.

    Other than this, the project is very interesting, and while still immature, it has been fun to work with, so I will keep contributing some improvements in the near future.