Tag: Django

  • Django: Deferred constraint enforcement

    Another Friday, another Django-related post. I guess this blog is becoming a bit monothematic. I promise the next ones will bring some much-needed diversity of content, but today let’s explore a very useful feature of Django’s ORM.

    Ok… Ok… it’s more of a feature of PostgreSQL that Django supports, one that isn’t available on the other database backends. But let’s dive in anyway.

    Let’s imagine this incredibly simplistic scenario where you have the following model:

    class Player(models.Model):
      team = models.ForeignKey(Team, on_delete=models.CASCADE)
      squad_number = models.PositiveSmallIntegerField()
    
      class Meta:
        constraints = [
          models.UniqueConstraint(
            name="unique_squad_number",
            fields=["team", "squad_number"],
          )
        ]

    So a team has many players and each player has a different squad/shirt number. Only one player can use that number for a given team.

    Users can select their teams and then re-arrange their players’ numbers however they like. To keep it simple, let’s assume it is done through the Django Admin, using a Player inline on the Team‘s model admin.

    We add proper form validation, to ensure that no players in the submitted squad are assigned the same squad_number. Things work great until you start noticing that despite your validation and despite the user’s input not having any players assigned the same number, integrity errors are flying around. What’s happening?
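    The core of that validation doesn’t depend on Django at all. As an illustration (the function name is made up, not Django API), the duplicate check such a form/formset clean() would perform can be sketched like this:

```python
def find_duplicate_numbers(numbers):
    """Return the set of squad numbers submitted more than once."""
    seen, duplicates = set(), set()
    for number in numbers:
        if number in seen:
            duplicates.add(number)
        seen.add(number)
    return duplicates
```

    Inside a real BaseInlineFormSet.clean(), you would collect squad_number from each form’s cleaned_data and raise a ValidationError when this set is non-empty.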

    Well, even when the updates run inside a single transaction, each individual update/insert is checked against the constraint as it is executed. So, depending on the order of the updates, input that is perfectly valid as a whole can still raise integrity errors, because intermediate states conflict with the data currently stored in the database (think of two players swapping numbers).

    The solution? Deferring the integrity checks to the end of the transaction. Here’s how:

    class Player(models.Model):
      team = models.ForeignKey(Team, on_delete=models.CASCADE)
      squad_number = models.PositiveSmallIntegerField()
    
      class Meta:
        constraints = [
          models.UniqueConstraint(
            name="unique_squad_number",
            fields=["team", "squad_number"],
            deferrable=models.Deferrable.DEFERRED,
          )
        ]

    Now, when you save multiple objects within a single transaction, you will no longer see those errors if the input data is valid.
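    To see why deferring helps, consider the classic case of swapping two players’ numbers. The following pure-Python sketch models the two behaviors (it illustrates the semantics only; it is not Django or PostgreSQL code): with immediate checking, the first update already violates uniqueness; with deferred checking, only the final state is validated.

```python
def apply_updates(rows, updates, deferred=False):
    """Apply (player, number) updates to a {player: number} mapping,
    checking uniqueness after every update or only at the end."""
    data = dict(rows)

    def check_unique():
        if len(set(data.values())) != len(data):
            raise ValueError("duplicate squad number")

    for player, number in updates:
        data[player] = number
        if not deferred:
            check_unique()  # immediate: validated after each statement
    check_unique()  # deferred: validated once, at "commit" time
    return data


# Swapping numbers 7 and 10 between two players only succeeds when deferred
swap = [("a", 10), ("b", 7)]
result = apply_updates({"a": 7, "b": 10}, swap, deferred=True)
```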

  • Django: Overriding translations from dependencies

    This week, I’ll continue on the same theme of my previous “Django Friday Tips” post. Essentially, we will keep addressing small annoyances that can surface while developing your multilingual project.

    The challenge for this article shows up when a given string from a package that is a dependency of your project is either:

    • Not translated in the language you are targeting.
    • Translated in a slightly different way than you desire.

    As we are all aware, most packages use English by default, then the most popular ones often provide translations for languages that have more active users willing to contribute. But these efforts are laborious and can have differences for distinct regions, even if they use the same base language.

    Contributing upstream might not always be an option.

    This means that to maintain the coherence of the interface of your project, you need to adapt these translations locally.

    Handling the localization of the code in your repository in Django is obvious and well documented. Django collects the strings and adds the translation files to the locale path (per app or per project).
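    For reference, the project-wide locale directory is configured with the LOCALE_PATHS setting. A minimal sketch, assuming the default layout generated by startproject (adjust the path to your own structure):

```python
# settings.py (illustrative project layout)
from pathlib import Path

BASE_DIR = Path(__file__).resolve().parent.parent

LOCALE_PATHS = [
    BASE_DIR / "locale",  # searched before the locale dirs of installed apps
]
```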

    For the other packages, these strings and translations are located within their own directory hierarchy, outside the reach of the makemessages command. At runtime, however, Django goes through all configured locale paths and picks the first match it finds.

    With this in mind, the easiest and most straightforward way I was able to find to achieve this goal was:

    Create a file in your project (in an app directory or in a common project directory), let’s call it locale_overrides.py, and put in it the exact strings from your dependency (Django or another) that you wish to translate:

    from django.utils.translation import gettext_lazy as _
    
    locale_overrides = [
        _("This could sound better."),
        ...
    ]

    Then run manage.py makemessages, translate the new lines in the .po file as you wish, then finally compile your new translations with manage.py compilemessages.

    Since your new translations are found first when your app looks for them, they will be picked instead of the “original” ones.

    For tiny adjustments, this method works great. When the amount of content starts growing too much, a new approach might be more appropriate, but that will be a topic for another time.

  • Status of old PyPI projects: archived

    Since late January, the Python Package Index (PyPI) supports archiving projects/packages. This is a very welcome feature, since it makes clear, without any doubt, when a package is no longer maintained and will not receive any further updates.

    It makes it easier for someone looking for packages to know which ones deserve a closer inspection and which ones sit there abandoned, polluting the results.

    Previously, the only viable way to retire a package was by adding a disclaimer to the README, and let it sit there indefinitely, being treated just like the other active packages.

    “You had the option of deleting the package”, you might say. Yes, but as I explained in a previous post, this is dangerous and should be avoided. So, archiving is in my view the best course of action when a person no longer wants to maintain their published packages and projects.

    With this in mind, this week I decided to do my part and archive old packages that I had published for different reasons and that had been sitting there abandoned for years. These were:

    • mdvis: a small package I wrote many years ago, mostly to learn how to publish things on PyPI.
    • auto-tune: something I was about to start working on for a previous employer and that was cancelled at the last minute.
    • django-cryptolock: an experiment done for a previous client. It tried to implement an existing proposal for an authentication scheme, using Monero wallets.
    • monero-python: a few years ago, during my day-to-day work, this package was removed (then renamed by the original author). At the time, it was a direct dependency for many projects and tools, which meant a malicious actor could have taken it and compromised those systems. As a precaution, I grabbed the open name. It has been there empty ever since.

    Now it is your turn.

    After a sufficient number of packages get marked as archived, we can hope for some enhancements to the search functionality of PyPI. Namely, a way of filtering out archived packages from the results and a visual marker for them in the list view. One step at a time.

  • Why isn’t my translation showing up?

    Here we go again for another post of this blog’s irregular column, entitled Django’s Friday Tips. Today, let’s address a silent issue that anyone who has worked with internationalization (i18n) has almost certainly already faced.

    You add a string that must be translated:

    from django.utils.translation import gettext_lazy as _
    
    some_variable = _("A key that needs translation")

    You then execute the manage.py makemessages --locale pt command, go to the generated django.po file and edit the translation:

    msgid "A key that needs translation"
    msgstr "Uma chave que precisa de tradução"

    You compile (manage.py compilemessages --locale pt), and proceed with your work.

    A few moments later, when checking the results of your hard effort… nothing; the text still shows the key (in English).

    Time to double-check the code (did I forget the gettext stuff?), the translation file, the i18n settings, etc. What the hell?

    Don’t waste any more time, most likely the translation is marked as fuzzy, like this:

    #: myapp/module.py:3
    #, fuzzy
    #| msgid "Another key that needed translation"
    msgid "A key that needs translation"
    msgstr "Uma chave que precisa de tradução"

    You see, you didn’t notice that #, fuzzy line and kept it there. The command that compiles those translations ignores the messages marked as fuzzy.

    So the solution is to remove those two extra lines, or to compile with the --use-fuzzy flag. That’s it: compile, and you should be able to proceed with the problem solved.
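    If many entries are fuzzy, you can clear them in bulk with gettext’s msgattrib tool (msgattrib --clear-fuzzy --output-file django.po django.po). Purely as an illustration of what that does, here is a simplified Python sketch that strips the marker lines from a .po file’s text (a real tool also handles combined flags such as "#, fuzzy, python-format"):

```python
def strip_fuzzy_markers(po_text):
    """Remove '#, fuzzy' flag lines and '#|' previous-msgid comments,
    so that compilemessages no longer skips those entries."""
    kept = []
    for line in po_text.splitlines():
        if line.startswith("#,") and "fuzzy" in line:
            continue  # drop the fuzzy flag line
        if line.startswith("#|"):
            continue  # drop the stale previous-msgid comment
        kept.append(line)
    return "\n".join(kept)
```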

  • Ways to have an atomic counter in Django

    This week, I’m back at my tremendously irregular Django tips series, where I share small pieces of code and approaches to common themes that developers face when working on their web applications.

    The topic of today’s post is how to implement a counter that isn’t vulnerable to race conditions. Counting is everywhere: when handling money, when controlling access limits, when scoring, etc.

    One common rookie mistake is to do it like this:

    model = MyModel.objects.get(id=id)
    model.count += 1
    model.save(update_fields=["count"])

    An approach that is subject to race conditions, as described below:

    • Process 1 gets count value (let’s say it is currently 5)
    • Process 2 gets the count value (also 5)
    • Process 1 increments and saves
    • Process 2 increments and saves

    Instead of 7, you end up with 6 in your records.
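    The lost update can be reproduced deterministically in plain Python, with the two “processes” each doing a read-modify-write on the same stale value:

```python
count = 5            # value currently stored in the database

read_1 = count       # process 1 reads 5
read_2 = count       # process 2 reads 5 as well

count = read_1 + 1   # process 1 writes back 6
count = read_2 + 1   # process 2 also writes back 6, clobbering process 1
```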

    On a low stakes project or in a situation where precision is not that important, this might do the trick and not become a problem. However, if you need accuracy, you will need to do it differently.

    Approach 1

    from django.db import transaction

    with transaction.atomic():
        model = (
            MyModel.objects.select_for_update()
            .get(id=id)
        )
        model.count += 1
        model.save(update_fields=["count"])

    In this approach, when you first fetch the record, you ask the database to lock it. While you are handling it, no other transaction can lock or update it.

    Since it locks the record, it can create a bottleneck. You will have to evaluate if it fits your application’s access patterns. As a rule of thumb, it should be used when you require access to the final value.

    Approach 2

    from django.db.models import F

    # update() returns the number of affected rows, not the model instance
    MyModel.objects.filter(id=id).update(
        count=F("count") + 1
    )

    In this approach, you don’t lock any rows or need to explicitly work inside a transaction. Here, you just tell the database to add 1 to the value that is currently there, and the database takes care of incrementing it atomically.

    It should be faster, since multiple processes can access and modify the record “at the same time”. Ideally, you would use it when you don’t need to access the final value.

    Approach 3

    from django.core.cache import cache

    # Note: the key must already exist (e.g. created with cache.add),
    # otherwise incr() raises ValueError
    cache.incr(f"mymodel_{id}_count", 1)

    If your counter has a limited lifetime, and you would rather not pay the cost of a database write, using your cache backend could provide an even faster method.

    The downsides are the weaker persistence guarantees and the fact that your cache backend needs to support atomic increments. As far as I can tell, you are well served with Redis and Memcached.
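    The common principle behind all three approaches is that the read-modify-write must happen as one atomic step, on the side that owns the data. As a plain-Python analogy (not Django code), a lock around the increment plays the same role as the database row lock or the cache’s atomic incr:

```python
import threading


class AtomicCounter:
    """Counter whose increments are serialized by a lock, analogous to
    what the database or cache server does on its side."""

    def __init__(self, value=0):
        self._value = value
        self._lock = threading.Lock()

    def incr(self, delta=1):
        with self._lock:  # only one thread may read-modify-write at a time
            self._value += delta
            return self._value


counter = AtomicCounter()
workers = [
    threading.Thread(target=lambda: [counter.incr() for _ in range(1000)])
    for _ in range(4)
]
for w in workers:
    w.start()
for w in workers:
    w.join()
```

    With the lock in place, the final value is always 4000; with a bare integer and +=, some increments could be lost, just like in the naive ORM version above.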

    For today, this is it. Please let me know if I forgot anything.

  • Filter sensitive contents from Django’s error reports

    Reporting application errors to a (small) list of admins is a feature that already comes built in and ready to use in Django.

    You just need to configure the ADMINS setting and have the application ready to send emails. All application errors (status 500 and above) will trigger a new message containing all the details, including a traceback.

    However, this message can contain sensitive contents (passwords, credit cards, PII, etc.). So, Django also provides a couple of decorators that allow you to hide/scrub the sensitive stuff that might be stored in variables or in the body of the request itself.

    These decorators are called @sensitive_variables() and @sensitive_post_parameters(). The correct usage of both of them is described in more detail here.

    With the above information, this article could be done. Just use the decorators correctly and extensively, and you won’t leak users’ sensitive content to your staff or to any entity that handles those error reports.

    Unfortunately, it isn’t that simple, because lately I can’t remember working on a project that uses Django’s default error reporting. A team usually needs a better way to track and manage these errors, and most teams resort to other tools.

    Filtering sensitive content in Sentry

    Since Sentry is my go-to tool for handling application errors, in the rest of this post, I will explore how to make sure sensitive data doesn’t reach Sentry’s servers.

    Sentry is open-source, so you can run it on your infrastructure, but they also offer a hosted version if you want to avoid having the trouble of running it yourself.

    To ensure that sensitive data is not leaked or stored where it shouldn’t, Sentry offers 3 solutions:

    • Scrub things on the SDK, before sending the event.
    • Scrub things when the event is received by the server, so it is not stored.
    • Intercept the event in transit and scrub the sensitive data before forwarding it.

    In my humble opinion, only the first approach is acceptable. Perhaps there are scenarios where there is no choice but to use one of the others; however, I will focus on the first.

    The first thing that needs to be done is to initialize the SDK, correctly and explicitly:

    import sentry_sdk

    sentry_sdk.init(
        dsn="<your dsn here>",
        send_default_pii=False,
    )

    This will ensure that certain types of personal information are not sent to the server. Furthermore, by default certain stuff is already filtered, as we can see in the following example:

    Screenshot of Sentry's error page, focusing on the section that shows the code that caused the error and the local variables. Only some variables are filtered.
    Screenshot of Sentry's error page, focusing on the section that shows the contents of the request. Only some contents are filtered.

    Some sensitive contents of the request such as password, authorization and X-Api-Token are scrubbed from the data, both on local variables and on the shown request data. This is because the SDK’s default deny list checks for the following common items:

    ['password', 'passwd', 'secret', 'api_key', 'apikey', 'auth', 'credentials', 'mysql_pwd', 'privatekey', 'private_key', 'token', 'ip_address', 'session', 'csrftoken', 'sessionid', 'remote_addr', 'x_csrftoken', 'x_forwarded_for', 'set_cookie', 'cookie', 'authorization', 'x_api_key', 'x_forwarded_for', 'x_real_ip', 'aiohttp_session', 'connect.sid', 'csrf_token', 'csrf', '_csrf', '_csrf_token', 'PHPSESSID', '_session', 'symfony', 'user_session', '_xsrf', 'XSRF-TOKEN']

    However, other sensitive data is included, such as credit_card_number (assigned to the card variable), phone_number and the X-Some-Other-Identifier header.

    To avoid this, we should expand the list:

    from sentry_sdk.scrubber import DEFAULT_DENYLIST, EventScrubber

    DENY_LIST = DEFAULT_DENYLIST + [
        "credit_card_number",
        "phone_number",
        "card",
        "X-Some-Other-Identifier",
    ]

    sentry_sdk.init(
        dsn="<your dsn here>",
        send_default_pii=False,
        event_scrubber=EventScrubber(denylist=DENY_LIST),
    )

    If we check again, the information is not there for new errors:

    Screenshot of Sentry's error page, focusing on the section that shows the code that caused the error and the local variables. Variables are filtered.
    Screenshot of Sentry's error page, focusing on the section that shows the contents of the request. Content is filtered.

    This way, we achieve our initial goal and, just like with Django’s decorators, we can stop certain information from being included in the error reports.

    I still think that a deny list defined in the settings is a poorer experience and more prone to leaks than the decorator approach used by Django’s error reporting. Nevertheless, both rely on a deny list, and without being careful, this kind of approach will eventually lead to leaks.

    As an example, look again at the last two screenshots. Something was “filtered” in one place, but not in the other. If you find it, please let me know in the comments.

  • Take advantage of Django’s system checks

    Today, let’s go back to the topic of the first post in this series of Django tips.

    At the time, I focused on the python manage.py check --deploy command. In this article, I will explore the feature on which it is built and how it can be quite handy for many other scenarios.

    So, the System Check Framework, what is it?

    The system check framework is a set of static checks for validating Django projects. It detects common problems and provides hints for how to fix them. The framework is extensible so you can easily add your own checks.

    Django documentation

    We already have linters, formatters, and static analysis tools that we run on the CI of our projects. Many companies also have strict peer review processes before accepting any changes to the code. However, this framework can still be very useful for many projects in situations such as:

    • Detect misconfigurations when someone is setting up a complex new development environment, and help them correctly resolve the issues.
    • Warn developers when project-specific approaches are not being followed.
    • Ensure everything is well configured and correctly set up before deploying to production. Otherwise, make the deployment fail.

    With the framework, the above points are a bit easier to implement, since system checks have access to the real state of the app.

    During development, checks are executed by commands such as runserver and migrate, and at deployment time a manage.py check call can be run before starting the app.

    Django itself makes heavy use of this framework. I’m sure you have already seen messages such as the ones shown below, during development.

    Screenshot of a Django app being launched and system checks warnings showing in the terminal.

    I’m not going to dig deeper into the details, since the Django documentation is excellent (list of built-in checks). Let’s just build a practical example.

    A practical example

    Recently, I was watching a talk in which the speaker mentioned a situation where a project should block Django’s reverse foreign keys, to prevent accidental data leakages. The precise situation is not relevant for this post, given it is a bit more complex, but let’s assume this is the requirement.

    The reverse foreign key feature needs to be disabled on every foreign key field. We can implement a system check to ensure we haven’t forgotten any. It would look very similar to this:

    # checks.py
    from django.apps import apps
    from django.core import checks
    from django.db.models import ForeignKey
    
    
    @checks.register(checks.Tags.models)
    def check_foreign_keys(app_configs, **kwargs):
        errors = []
    
        for app in apps.get_app_configs():
            if "site-packages" in app.path:
                continue
    
            for model in app.get_models():
                fields = model._meta.get_fields()
                for field in fields:
                    errors.extend(check_field(model, field))
    
        return errors
    
    
    def check_field(model, field):
        if not isinstance(field, ForeignKey):
            return []
    
        rel_name = getattr(field, "_related_name", None)
        if rel_name and rel_name.endswith("+"):
            return []
    
        error = checks.CheckMessage(
            checks.ERROR,
            f'FK "{field.name}" reverse lookups enabled',
            'Add "+" at the end of the "related_name".',
            obj=model,
            id="example.E001",
        )
        return [error]

    Then ensure the check is active by loading it in apps.py:

    # apps.py
    from django.apps import AppConfig
    
    
    class ExampleConfig(AppConfig):
        name = "example"
    
        def ready(self):
            super().ready()
            from .checks import check_foreign_keys  # noqa: F401

    This would do the trick and produce the following output when trying to run the development server:

    $ python manage.py runserver
    Watching for file changes with StatReloader
    Performing system checks...
    
    ...
    
    ERRORS:
    example.Order: (example.E001) FK "client" reverse lookups enabled
            HINT: Add "+" at the end of the "related_name".
    example.Order: (example.E001) FK "source" reverse lookups enabled
            HINT: Add "+" at the end of the "related_name".
    example.OrderRequest: (example.E001) FK "source" reverse lookups enabled
            HINT: Add "+" at the end of the "related_name".

    On the other hand, if your test is only important for production, you should use @checks.register(checks.Tags.models, deploy=True). This way, the check will only be executed together with all other security checks when running manage.py check --deploy.

    Doing more with the framework

    There are also packages on PyPI, such as django-extra-checks, that implement checks for general Django good practices if you wish to enforce them on your project.

    To end this post, which is already longer than I initially desired, I will leave a couple of links to other resources if you want to continue exploring:

  • So you need to upgrade Django

    No matter how much you try to delay and how many reasons you find to postpone, eventually the time comes. You need to update and upgrade your software, your system components, your apps, your dependencies, etc.

    This happens to all computer users. On some systems, this is an enjoyable experience; on others, it is as painful as it can get.

    Most of the time, upgrading Django on our projects falls in the first category, due to its amazing documentation and huge community. Nevertheless, the upgrade path takes work, and “now” rarely seems the right time to move forward with it, especially if you are jumping between LTS versions.

    So, today’s tip is a mention of 2 packages that can help you reduce the burden of going through your codebase looking for the lines that need to be changed. They are:

    • django-codemod
    • django-upgrade

    Both of them do more or less the same thing: they automatically detect the code that needs to be changed and fix it according to the release notes. Attention: this is no excuse to avoid reading the release notes.

    django-upgrade is faster and probably the best choice, but django-codemod supports older versions of Python. Overall, it will depend on the situation at hand.

    And this is it… I hope these libraries are as helpful to you as they have been to me.

  • Secure PostgreSQL connections on your Django project

    Last week, an article was published with some interesting numbers about the security of PostgreSQL servers publicly exposed to the internet (You can find it here).

    But more than the numbers, what really caught my attention was the fact that most clients and libraries used to access and interact with the databases have insecure defaults:

    most popular SQL clients are more than happy to accept unencrypted connections without a warning. We conducted an informal survey of 22 popular SQL clients and found that only two require encrypted connections by default.

    Jonathan Mortensen

    The article goes on to explain how clients connect to the database server and what options there are to establish and verify the connections.

    So, this week, let’s see how we can set up things in Django to ensure our apps are communicating with the database securely over the network.

    Usually, we set up the database connection like this:

    DATABASES = {
        "default": {
            "ENGINE": "django.db.backends.postgresql",
            "NAME": "db_name",
            "USER": "db_user",
            "PASSWORD": "db_password",
            "HOST": "127.0.0.1",
            "PORT": "5432",
        }
    }

    The above information can also be provided using a single “URL” such as postgres://USER:PASSWORD@HOST:PORT/NAME, but in this case, you might need some extra parsing logic or to rely on an external dependency.
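    If you’d rather avoid the external dependency, that parsing can be sketched with the standard library alone. This is a simplified version of what packages like dj-database-url do (edge cases such as missing components or percent-encoded passwords are glossed over):

```python
from urllib.parse import parse_qsl, urlsplit


def db_config_from_url(url):
    """Translate postgres://USER:PASSWORD@HOST:PORT/NAME?... into a
    Django DATABASES entry."""
    parts = urlsplit(url)
    config = {
        "ENGINE": "django.db.backends.postgresql",
        "NAME": parts.path.lstrip("/"),
        "USER": parts.username or "",
        "PASSWORD": parts.password or "",
        "HOST": parts.hostname or "",
        "PORT": str(parts.port or ""),
    }
    options = dict(parse_qsl(parts.query))  # e.g. {"sslmode": "require"}
    if options:
        config["OPTIONS"] = options
    return config
```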

    Now, based on that article, psycopg2 by default prefers to use an encrypted connection but doesn’t require it, nor does it enforce a valid certificate. How can we change that?

    By using the OPTIONS field to set the sslmode:

    DATABASES = {
        "default": {
            "ENGINE": "django.db.backends.postgresql",
            "NAME": "db_name",
            "USER": "db_user",
            "PASSWORD": "db_password",
            "HOST": "127.0.0.1",
            "PORT": "5432",
            "OPTIONS": {
                "sslmode": "<mode here>"
            }
        }
    }

    The available modes are:

    • disable
    • allow
    • prefer
    • require
    • verify-ca
    • verify-full

    The obvious choice for a completely secure connection is verify-full, especially when the traffic goes through a public network.

    If you are using a URL, something like this should do the trick: postgres://USER:PASSWORD@HOST:PORT/NAME?sslmode=verify-full.

    And that’s it, at least on the client’s side.

    If the above is not an option for you, I recommend taking a look at pgproxy. You can find more details here.

  • Shutting Down Webhook-logger

    A few years ago, I built a small application to test Django’s websocket support through django-channels. It basically displayed, on a web page and in real time, all the requests made to a given endpoint (you could generate multiple of them) without storing anything. It was fun and very useful to quickly debug stuff, so I kept it running since then.

    If you are interested in more details about the project itself, you can find a complete overview here.

    However, today Heroku, the platform where it was running, announced the end of the free tier. This tier has been a godsend for personal projects and experiments over the last decade, and Heroku as a platform initially set the bar really high regarding the developer experience of deploying those projects.

    “Webhook-logger” was the only live project I had running on Heroku’s free tier, and after some consideration, I reached the conclusion that it was time to turn it off. Its functionality is not unique, and there are better options for this use case, so it is not worth the work required to move it to a new hosting provider.

    The code is still available in case anyone wants to take a look or deploy it on their own.

  • Controlling the access to the clipboard contents

    In a previous blog post published earlier this year I explored some security considerations of the well known “clipboard” functionality that most operating systems provide.

    Long story short: in my opinion, there is a lot more that could be done to protect users (and their sensitive data) from the many attacks that use the clipboard as a vector to trick the user or extract sensitive material.

    The proof-of-concept I ended up building to demonstrate some of the ideas worked in X11 but didn’t achieve one of the desired goals:

    It seems possible to detect all attempts of accessing the clipboard, but after struggling a bit, it seems that due to the nature of X11 it is not possible to know which running process owns the window that is accessing the clipboard. A shame.

    Myself, last blog post on this topic

    The good news is that the above quote is no longer true. A kind soul contributed a patch that allows “clipboard-watcher” to fetch the required information about the process accessing the clipboard. Now we have all the ingredients to make the tool fulfill its initial intended purpose (and it does).

    With this lengthy introduction, we are ready to address the real subject of this post: giving the user more control over how the clipboard is used. Notifying the users about an access is just a first step; restricting the access is what we want.

    On this topic, several comments on the previous post mentioned the strategy used by Qubes OS. It relies on having one clipboard specific to each app and a second clipboard that is shared between apps. The latter requires user intervention to be used. While I think this is a good approach, it is not easy to replicate in a regular Linux distribution.

    However, as I mentioned in my initial post, I think we can achieve a similar result by asking the user for permission when an app requests the data currently stored on the clipboard. This was the approach Apple implemented in a recent iOS release.

    So in order to check/test how this could work, I tried to adapt my proof-of-concept to ask for permission before sharing any data. Here’s an example:

    Working example of clipboard-watcher requesting permission before letting other apps access the clipboard contents.

    As we can see, it asks for permission before the requesting app is given the data, and it kinda works (ignore the clunky interface and UX). Of course, there are many possible improvements to make its usage bearable, such as whitelisting certain apps, “de-duplicating” the content requests (apps can generate a new one for each available content type, which ends up being spammy), etc.

    Overall, I’m pleased with the result, and in my humble opinion, this should be a “must have” security feature for any good clipboard manager on Linux. I say this even taking into account that this approach is not bulletproof, given that a malicious application could continuously fight/race the clipboard manager for control of the “X selections”.

    Anyhow, the new changes for the proof-of-concept are available here, please give it a try and let me know what you think and if you find any other problems.

  • Django Friday Tips: Less known builtin commands

    Django management commands can be very helpful while developing your application or website. We are very used to runserver, makemigrations, migrate, shell and others. Third-party packages often provide extra commands, and you can easily add new commands to your own apps.

    Today, let’s take a look at some lesser-known, yet very useful, commands that Django provides out of the box.


    diffsettings

    $ python manage.py diffsettings --default path.to.module --output unified

    Dealing with multiple environments and debugging their differences is not as rare as we would like. In that particular scenario diffsettings can become quite handy.

    Basically, it displays the differences between the current configuration and another settings file. The default settings are used if a module is not provided.

    - DEBUG = False
    + DEBUG = True
    - EMAIL_BACKEND = 'django.core.mail.backends.smtp.EmailBackend'
    + EMAIL_BACKEND = 'django.core.mail.backends.console.EmailBackend'
    - TIME_ZONE = 'America/Chicago'
    + TIME_ZONE = 'UTC'
    + USE_SRI = True
    - USE_TZ = False
    + USE_TZ = True
    ...

    sendtestemail

    $ python manage.py sendtestemail my@address.com

    This one does not require an extensive explanation. It lets you test and debug your email configuration by using it to send the following message:

    Content-Type: text/plain; charset="utf-8"
    MIME-Version: 1.0
    Content-Transfer-Encoding: 7bit
    Subject: Test email from host on 2022-04-28 19:08:56.968492+00:00
    From: webmaster@localhost
    To: my@address.com
    Date: Thu, 28 Apr 2022 19:08:56 -0000
    Message-ID: <165117293696.405310.3477251481753991809@host>
    
    If you're reading this, it was successful.
    -----------------------------------------------------------
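    Like any management command, it can also be invoked from Python code via call_command. Here’s a minimal sketch, assuming Django is installed; it configures the in-memory locmem email backend so the test message ends up in mail.outbox instead of actually being sent:

    ```python
    # Sketch: running sendtestemail programmatically (assumes Django is
    # installed). The locmem backend stores messages in memory, which
    # makes the result easy to inspect.
    import django
    from django.conf import settings
    from django.core.management import call_command

    settings.configure(
        EMAIL_BACKEND="django.core.mail.backends.locmem.EmailBackend",
    )
    django.setup()

    call_command("sendtestemail", "my@address.com")

    from django.core import mail

    # The message is now in the in-memory outbox instead of your inbox
    print(mail.outbox[0].subject)
    print(mail.outbox[0].to)
    ```

    The same call_command approach works for any of the commands in this post, which is handy when you want to wrap them in scripts or tests.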

    inspectdb

    $ python manage.py inspectdb

    If you are building your project on top of an existing database (managed by another system), inspectdb can look into the schema and generate the respective models for Django’s ORM, making it very easy to start using the data right away. Here’s an example:

    # This is an auto-generated Django model module.
    # You'll have to do the following manually to clean this up:
    #   * Rearrange models' order
    #   * Make sure each model has one field with primary_key=True
    #   * Make sure each ForeignKey and OneToOneField has `on_delete` set to the desired behavior
    #   * Remove `managed = False` lines if you wish to allow Django to create, modify, and delete the table
    # Feel free to rename the models, but don't rename db_table values or field names.
    
    ...
    
    class AuthPermission(models.Model):
        content_type = models.ForeignKey('DjangoContentType', models.DO_NOTHING)
        codename = models.CharField(max_length=100)
        name = models.CharField(max_length=255)
    
        class Meta:
            managed = False
            db_table = 'auth_permission'
            unique_together = (('content_type', 'codename'),)
    ...

    showmigrations

    $ python manage.py showmigrations --verbosity 2

    When you need to inspect the current state of the project’s migrations in a given environment, the above command is the easiest way to get that information. It will tell you which migrations exist, which ones have been applied, and when.

    admin
     [X] 0001_initial (applied at 2021-01-13 19:49:24)
     [X] 0002_logentry_remove_auto_add (applied at 2021-01-13 19:49:24)
     [X] 0003_logentry_add_action_flag_choices (applied at 2021-01-13 19:49:24)
    auth
     [X] 0001_initial (applied at 2021-01-13 19:49:24)
     [X] 0002_alter_permission_name_max_length (applied at 2021-01-13 19:49:24)
     [X] 0003_alter_user_email_max_length (applied at 2021-01-13 19:49:24)
    ...

    There are many other useful management commands that are missing from the base Django package. To fill that gap, there are external packages available, such as django-extensions. But I will leave those for a future post.

  • Inlinehashes: a new tool to help you build your CSP

    Content-Security-Policy (CSP) is an important mechanism in today’s web security arsenal. It is a way of defending against Cross-Site Scripting (XSS) and other attacks.

    It isn’t hard to get started with or to put in place in order to secure your website or web application (I did that exercise in a previous post). However, when the system is complex, or when you don’t fully control an underlying “codebase” that changes frequently (as happens with off-the-shelf software), things can get a bit messier.

    In those cases it is harder to build a strict and simple policy, since there are many moving pieces and/or you don’t control the code’s development, so you end up opening exceptions and whitelisting certain pieces of content, making the policy more complex. This is especially true for inline elements, which makes the unsafe-inline source very appealing (its name tells you why you should avoid it).

    Taking WordPress as an example, recommended theme and plugin updates can introduce changes in the included inline elements, which you will have to review in order to update your CSP. The task gets boring very quickly.

    To help with the task of building and maintaining the CSP in the cases described above, I recently started to work on a small tool (and library) to detect, inspect and whitelist new inline changes. You can check it here or download it directly from PyPI.
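    For context, whitelisting an individual inline element without resorting to unsafe-inline is usually done with a hash source: you take the SHA-256 digest of the element’s exact contents, Base64-encode it, and add it to the policy. Here’s a rough sketch of that computation in plain Python (the csp_hash helper is mine for illustration, not the tool’s actual code):

    ```python
    import base64
    import hashlib

    def csp_hash(inline_content: str) -> str:
        """Build a CSP hash source for an inline script or style,
        hashing the exact bytes between the opening and closing tags."""
        digest = hashlib.sha256(inline_content.encode("utf-8")).digest()
        return "'sha256-{}'".format(base64.b64encode(digest).decode("ascii"))

    # The resulting source expression can then be added to script-src
    # or style-src in your policy:
    policy = "script-src 'self' {}".format(csp_hash("console.log('hi');"))
    print(policy)
    ```

    Note that the hash covers the exact contents, so any change to the inline element (even whitespace) produces a different hash, which is precisely why keeping a policy up to date by hand gets tedious.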

  • Django Friday Tips: Admin Docs

    While the admin is a well-known and very useful app for your projects, Django also includes another admin package that isn’t as popular (at least I have never seen it being heavily used) but that can also be quite handy.

    I’m talking about the admindocs app. It provides documentation for the main components of your project inside the Django administration itself.

    It takes the existing documentation provided in the code to developers and exposes it to users that have the is_staff flag enabled.

    This is what they see:

    [Screenshot: the main page of the generated docs]
    [Screenshot: documentation for existing views]
    [Screenshot: a model reference in the admin documentation]

    I can see this being very helpful for small websites that are operated by teams of “non-developers” or even for people providing support to customers. At least when a dedicated and more mature solution for documentation is not available.

    To install it, you just have to:

    • Install the docutils package.
    • Add django.contrib.admindocs to the installed apps.
    • Add path('admin/doc/', include('django.contrib.admindocs.urls')) to the URLs.
    • Document your code with “docstrings” and “help_text” attributes.
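    Putting the steps above together, the relevant configuration fragments could look like this (a sketch assuming a default project layout):

    ```python
    # settings.py (fragment)
    INSTALLED_APPS = [
        "django.contrib.admindocs",
        "django.contrib.admin",
        # ...
    ]

    # urls.py (fragment) -- include the doc URLs before the admin ones,
    # so that requests to admin/doc/ aren't handled by the admin itself
    from django.contrib import admin
    from django.urls import include, path

    urlpatterns = [
        path("admin/doc/", include("django.contrib.admindocs.urls")),
        path("admin/", admin.site.urls),
    ]
    ```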

    More detailed documentation can be found here. And for today, this is it.

  • Who keeps an eye on clipboard access?

    If there is any feature that “universally” describes the usage of computers, it is the copy/paste pattern. We are used to it, practically all the common graphical user interfaces have support for it, and it magically works.

    We copy some information from one application and paste into another, and another…

    How do these applications have access to this information? The clipboard must be something that is shared across all of them, right? Right.

    While very useful, this raises a lot of security questions. As far as I can tell, all apps could be grabbing what is available on the clipboard.

    It isn’t uncommon for people to copy sensitive information from one app to another and even if the information is not sensitive, the user generally has a clear target app for the information (the others don’t have anything to do with it).

    These questions started bugging me a long time ago, and the sentiment only got worse when Apple released an iOS feature that notifies users when an app reads the contents of the clipboard. That was brilliant. Why didn’t anyone think of that before?

    The result? Tons of apps caught snooping into the clipboard contents without the user asking for it. The following articles can give you a glimpse of what followed:

    That’s not good, and saying you won’t do it again is not enough. On iOS, apps were caught and users notified, but what about Android? What about other desktop operating systems?

    Accessing the clipboard to check what’s there, steal passwords, replace cryptocurrency addresses, or just get a glimpse of what the user is doing is a common malware pattern.

    I wonder why a similar feature hasn’t been implemented in most operating systems we use nowadays (it doesn’t need to be identical, but at least let us verify how the clipboard is being used). Perhaps there are tools that can help us with this; however, I wasn’t able to find any for Linux.

    A couple of weeks ago, I started to look at how this works (on Linux, which is what I’m currently using). What I found is that most libraries just provide a simple interface to put things on the clipboard and to get the current clipboard content. Nothing else.

    After further digging, I finally found some useful and interesting articles on how this feature works on X11 (under the hood of those high level APIs). For example:

    Then, with this bit of knowledge about how the clipboard works in X11, I decided to do a quick experiment in order to check if I can recreate the clipboard access notifications seen in iOS.

    During the small periods I had available in the last few weekends, I tried to build a quick proof of concept, nothing fancy, just a few pieces of code from existing examples stitched together.

    Here’s the current result:

    [Video: clipboard-watcher detecting when other apps access the clipboard contents]

    It seems possible to detect all attempts to access the clipboard, but after struggling a bit, I found that due to the nature of X11 it is not possible to know which running process owns the window that is accessing the clipboard. A shame.

    The information that X11 has about the requesting client must be provided by the client itself, which makes it very hard to know for sure which process it is (most of the time it is not provided at all).

    Nevertheless, I think this could still be a very useful capability for existing clipboard managers (such as Klipper), given that the core of this app already works just like one.

    Even without knowing the process trying to access the clipboard contents, I can see a few useful features that are possible to implement, such as:

    • Create some stats about the clipboard access patterns.
    • Ask the user for permission, before providing the clipboard contents.

    Anyhow, you can check the proof of concept here and give it a try (improvements are welcome). Let me know what you think and what I’ve missed.