As you might have guessed from the title, today’s tip is about how to add “Subresource integrity” (SRI) checks to your website’s static assets.
First, let's see what SRI is. According to the Mozilla Developer Network (MDN):
Subresource Integrity (SRI) is a security feature that enables browsers to verify that resources they fetch (for example, from a CDN) are delivered without unexpected manipulation. It works by allowing you to provide a cryptographic hash that a fetched resource must match.
So basically, if you don’t serve all your static assets yourself and rely on some sort of external provider, you can force the browser to check that the delivered contents are exactly the ones you expect.
To trigger that behavior you just need to add the hash of the content to the integrity attribute of the <script> and/or <link> elements in question.
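For instance, a script loaded from a CDN could look roughly like this (the hash below is just a placeholder; it must be the real digest of the file being fetched):

<!-- hypothetical example: replace the value with the actual hash of the file -->
<script src="https://cdn.example.com/library.min.js"
        integrity="sha384-BASE64_HASH_OF_THE_FILE"
        crossorigin="anonymous"></script>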
This is all very nice, but adding this info manually isn’t much fun or even practical when your resources might change frequently or are built dynamically on each deployment.
To help with this task I recently found a little tool called django-sri that automates these steps for you (and is compatible with whitenoise if you happen to use it).
After the install, you just need to replace the {% static ... %} tags in your templates with the new one provided by this package ({% sri_static .. %}) and the integrity attribute will be automatically added.
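In a template this could look roughly like the sketch below (assuming the package’s tag library is loaded as sri and the file path matches your static layout; check the django-sri documentation for the exact usage):

{% load sri %}

{% sri_static "js/app.js" %}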
One critical piece of the software development process that often gets neglected by companies and also by many open-source projects is explaining how it works and how it can be used to solve the problem in question.
Documentation is often lacking and people have a hard time figuring out how they can use or contribute to a particular piece of software. I think most developers and users have faced this situation at least once.
Looking at it from the other side, it isn’t always easy to pick and share the right information so others can hit the ground running. The fact that not everybody starts from the same point or has the same goal makes the job a bit harder.
It splits the problem into 4 areas, targeting different stages and needs of the person reading the documentation. Django uses this system and is frequently praised for having great documentation.
From a user’s point of view it looks solid. You should take a look and apply it to your packages/projects; I surely will.
The first post I published on this blog is now 10 years old. This wasn’t my first website or even the first blog, but it’s the one that stuck for the longest time.
The initial goal was to have a place to share anything I might find interesting on the Web, a place that would allow me to publish my opinions on all kinds of issues (if I felt like it) and to publish information about my projects. I think you can still deduce that from the tag line, which has remained unchanged ever since.
From the start, being able to host my own content was one of the priorities, in order to control its distribution and ensure that it is universally accessible to anyone, without any locks on how and by whom it should be consumed.
The reasoning behind this decision was related to a trend that started a couple of years earlier: the departure from the open web and the big migration to the walled gardens.
Many people thought it was an inoffensive move, something that would improve the user experience and make life easier for everyone. But as with anything in life, with time we started to see the costs.
Today the world is different, using closed platforms that barely interact with each other is the rule, and the downsides became evident: users started to be spied on for profit, platforms decide what speech is acceptable, manipulation is more present than ever, big monopolies are now gatekeepers to many markets, etc. Summing up, information and power are concentrated in fewer hands.
Last week this event set the topic for the post. A “simple chat app”, that uses an open protocol to interact with different servers, was excluded/blocked from the market unilaterally without any chance to defend itself. A more extensive discussion can be found here.
The message I wanted to leave in this commemorative post is that we need to give another shot to decentralized and interoperable software, and use open protocols and technologies to put creators and users back in control.
If there is anything that I would like to keep for the next 10 years, it is the capability to reach, interact and collaborate with the world without having a huge corporation acting as a middleman dictating its rules.
I will continue to put in the effort to make sure open standards are used on this website (such as RSS, Webmention, etc.) and that I’m reachable using decentralized protocols and tools (such as email, Matrix or the “Fediverse“). I think this is the minimum a person could ask for the next decade.
In this year’s first issue of my irregular Django quick tips series, let’s look at the built-in tools available for managing access control.
The framework offers a comprehensive authentication and authorization system that is able to handle the common requirements of most websites without even needing any external library.
Most of the time, simple websites only make use of the “authentication” features, such as registration, login and logout. On more complex systems only authenticating the users is not enough, since different users or even groups of users will have access to distinct sets of features and data records.
This is when the “authorization” / access control features come in handy. As you will see, they are very simple to use as soon as you understand the implementation and concepts behind them. Today I’m going to focus on how to use these permissions in the admin; perhaps in a future post I can address the usage of permissions in other situations. In any case, Django has excellent documentation, so a quick visit to this page will tell you what you need to know.
Under the hood
ER diagram of Django’s “auth” package
The above picture is a quick illustration of how this feature is laid out in the database. A User can belong to multiple groups and have multiple permissions, and each Group can also have multiple permissions. So a user has a given permission if it is directly associated with them or if it is associated with a group the user belongs to.
When a new model is added, 4 permissions are created for that particular model; later, if we need more, we can manually add them. Those permissions are <app>.add_<model>, <app>.view_<model>, <app>.change_<model> and <app>.delete_<model>.
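As a quick illustration (using a hypothetical blog app with an Article model), these default permissions can be checked anywhere in your code with has_perm:

from django.contrib.auth import get_user_model

user = get_user_model().objects.get(username="test")

# Default permission codenames follow the "<app>.<action>_<model>" pattern
print(user.has_perm("blog.add_article"))
print(user.has_perm("blog.view_article"))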
For demonstration purposes I will start with these to show how the admin behaves and then show how to implement an action that’s only executed if the user has the right permission.
The scenario
Let’s imagine we have a “store” with some items being sold, which also has a blog to announce new products and promotions. Here’s what the admin looks like for the “superuser”:
The admin showing all the available models
We have several models for the described functionality and on the right you can see that I added a test user. At the start, this test user is just marked as regular “staff” (is_staff=True), without any permissions. For him the admin looks like this:
No permissions
After logging in, he can’t do anything. The store manager needs the test user to be able to view and edit articles on the blog. Since we expect that in the future multiple users will be able to do this, instead of assigning these permissions directly, let’s create a group called “editors” and assign those permissions to that group.
Only two permissions for this group of users
Afterwards we also add the test user to that group (in the user details page). Then when he checks the admin he can see and edit the articles as desired, but not add or delete them.
No “Add” button there
The actions
Down the line, the test user starts doing other kinds of tasks, one of them being “reviewing the orders and then, if everything is correct, marking them as ready for shipment”. In this case, we don’t want him to be able to edit the order details or change anything else, so the existing “change” permission cannot be used.
What we need now is to create a custom admin action and a new permission that will let specific users (or groups) execute that action. Let’s start with the latter:
class Order(models.Model):
    ...

    class Meta:
        ...
        permissions = [("set_order_ready", "Can mark the order as ready for shipment")]
What we are doing above, is telling Django there is one more permission that should be created for this model, a permission that we will use ourselves.
Once this is done (you need to run manage.py makemigrations and then manage.py migrate), we can create the action and ensure that the user executing it has the newly created permission:
from django.contrib import admin, messages


class OrderAdmin(admin.ModelAdmin):
    ...
    actions = ["mark_as_ready"]

    def mark_as_ready(self, request, queryset):
        if request.user.has_perm("shop.set_order_ready"):
            queryset.update(ready=True)
            self.message_user(
                request, "Selected orders marked as ready", messages.SUCCESS
            )
        else:
            self.message_user(
                request, "You are not allowed to execute this action", messages.ERROR
            )

    mark_as_ready.short_description = "Mark selected orders as ready"
As you can see, before proceeding with the changes we first check that the user has the right permission, using has_perm and the newly defined permission name.
And boom... now we have this new feature that only lets certain users mark the orders as ready for shipment. If we try to execute this action with the test user (that does not yet have the required permission):
No permission assigned, no action for you sir
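The missing permission can be granted either through the admin interface or programmatically. A rough sketch of the latter, reusing the “shop” app and the “editors” group from before:

from django.contrib.auth import get_user_model
from django.contrib.auth.models import Group, Permission

user = get_user_model().objects.get(username="test")
permission = Permission.objects.get(codename="set_order_ready")

# Either directly on the user...
user.user_permissions.add(permission)

# ...or on a group the user belongs to
Group.objects.get(name="editors").permissions.add(permission)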
Finally we just add the permission to the user and it’s done. For today this is it, I hope you find it useful.
Git by itself is a distributed version control system (a very popular one), but over the years organizations started to rely on some internet services to manage their repositories and those services eventually become the central/single source of truth for their code.
The most well-known service out there is GitHub (now owned by Microsoft), which nowadays is synonymous with git for a huge number of people. Many other services exist, such as GitLab and Bitbucket, but GitHub gained notoriety above all others, especially for hosting small (and some large) open source projects.
These centralized services provide many more features that help manage, test and deploy software, functionality not directly related to the main purpose of git.
Relying on these central services is very useful, but as with everything in life, it is a trade-off. Many large open source organizations don’t rely on these companies (such as KDE, Gnome, Debian, etc.), because the risks involved are not worth the convenience of letting these platforms host their code and other data.
Over time we have been witnessing some of these risks, such as your project (and all the related data) being taken down without you having any chance to defend yourself (Example 1 and Example 2). Very similar to what some content creators have been experiencing with YouTube (I really like this one).
When this happens, you or your organization don’t lose the code itself, since you almost certainly have copies on your own devices (thanks to git), but you lose everything else: issues, projects, automated actions, documentation and, essentially, the well-known URL of your project.
Since GitHub is just too convenient for collaborating with other people, we can’t simply leave. In this post I explain an easy alternative to minimize the risks described above, which I implemented myself after reading many guides and tools made by others who also tried to address this problem before.
The main idea is to automatically mirror everything on a machine that I own and make it publicly available side by side with the GitHub URLs. The work will still be done on GitHub, but it can easily be switched over if something happens.
The software
To achieve the desired outcome I’ve researched a few tools and the one that seemed to fit all my requirements (work with git and be lightweight) was “Gitea“. Next I will describe the steps I took.
The Setup
This part was very simple, I just followed the instructions present in the documentation for a docker-based install. Something like this:
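A minimal docker-compose sketch for Gitea looks roughly like the following (image tag, ports and volume path are assumptions; follow the official instructions for the real thing):

# docker-compose.yml (rough sketch, adapt to your environment)
version: "3"

services:
  gitea:
    image: gitea/gitea:latest
    restart: always
    volumes:
      - ./gitea-data:/data
    ports:
      - "3000:3000"   # web interface
      - "222:22"      # SSH for git operations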
If you are doing the same, don’t copy the snippet above. Take a look here for updated instructions.
Since my website is not supposed to have much concurrent activity, an SQLite database is more than enough. So after launching the container, I chose this database type and made sure I disabled all the functionality I won’t need.
Part of the Gitea’s configuration page
After this step, you should be logged in as an admin. The next step is to create a new migration using the top right menu. We just need to choose the “Github” option and continue. You should see the screen below:
Creating a new Github migration/mirror in Gitea.
If you choose the “This repository will be a mirror” option, Gitea will keep your repository and wiki in sync with the original, but unfortunately it will not do the same for issues, labels, milestones and releases. So if you need that information, the best approach is to uncheck this field and do a normal migration. To keep that information updated, you will have to repeat this process periodically.
Once migrated, do the same for your other repositories.
Conclusion
Having an alternative with a backup of the general GitHub data ended up being quite easy to set up. However, the mirror feature would be much more valuable if it included the other items available in the standard migration.
During my research for solutions, I found Fossil, which looks very interesting and is something that I would like to explore in the future, but at the moment all repositories are based on Git and for practical reasons that won’t change for the time being.
With this change, my public repositories can be found in:
https://code.ovalerio.net/dethos (Not everything is migrated yet, but will be soon)
Edit: Due to the described limitations of the mirror functionality in Gitea, my self-hosted mirror ended up being less useful than previously estimated. For that reason, it was shut down on 12/07/2024.
Today let’s look at the tools Django provides out of the box to debug the queries made to the database using the ORM.
This isn’t an uncommon task. Almost everyone who works on a non-trivial Django application faces situations where the ORM does not return the correct data or a particular operation is taking too long.
The best way to understand what is happening behind the scenes when you build database queries using your defined models, managers and querysets, is to look at the resulting SQL.
The standard way of doing this is to set the logging configuration to print all queries done by the ORM to the console. This way when you browse your website you can check them in real time. Here is an example config:
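A minimal sketch of that configuration would be something like the following (note that Django only logs the executed SQL when DEBUG is True):

# settings.py (minimal sketch)
LOGGING = {
    "version": 1,
    "disable_existing_loggers": False,
    "handlers": {
        "console": {"class": "logging.StreamHandler"},
    },
    "loggers": {
        # Logger used by the ORM to emit the executed SQL statements
        "django.db.backends": {
            "level": "DEBUG",
            "handlers": ["console"],
        },
    },
}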
...
web_1 | (0.001) SELECT MAX("axes_accessattempt"."failures_since_start") AS "failures_since_start__max" FROM "axes_accessattempt" WHERE ("axes_accessattempt"."ip_address" = '172.18.0.1'::inet AND "axes_accessattempt"."attempt_time" >= '2020-09-18T17:43:19.844650+00:00'::timestamptz); args=(Inet('172.18.0.1'), datetime.datetime(2020, 9, 18, 17, 43, 19, 844650, tzinfo=<UTC>))
web_1 | (0.001) SELECT MAX("axes_accessattempt"."failures_since_start") AS "failures_since_start__max" FROM "axes_accessattempt" WHERE ("axes_accessattempt"."ip_address" = '172.18.0.1'::inet AND "axes_accessattempt"."attempt_time" >= '2020-09-18T17:43:19.844650+00:00'::timestamptz); args=(Inet('172.18.0.1'), datetime.datetime(2020, 9, 18, 17, 43, 19, 844650, tzinfo=<UTC>))
web_1 | Bad Request: /users/login/
web_1 | [18/Sep/2020 18:43:20] "POST /users/login/ HTTP/1.1" 400 2687
Note: The console output will get a bit noisy
Now let’s suppose this logging config is turned off by default (for example, on a staging server). You are manually debugging your app using the Django shell and doing some queries to inspect the resulting data. In this case, str(queryset.query) is very helpful to check if the query you have built is the one you intended. Here’s an example:
>>> box_qs = Box.objects.filter(expires_at__gt=timezone.now()).exclude(owner_id=10)
>>> str(box_qs.query)
'SELECT "boxes_box"."id", "boxes_box"."name", "boxes_box"."description", "boxes_box"."uuid", "boxes_box"."owner_id", "boxes_box"."created_at", "boxes_box"."updated_at", "boxes_box"."expires_at", "boxes_box"."status", "boxes_box"."max_messages", "boxes_box"."last_sent_at" FROM "boxes_box" WHERE ("boxes_box"."expires_at" > 2020-09-18 18:06:25.535802+00:00 AND NOT ("boxes_box"."owner_id" = 10))'
If the problem is related to performance, you can check the query plan to see if it hits the right indexes using the .explain() method, like you would normally do in SQL.
>>> print(box_qs.explain(verbose=True))
Seq Scan on public.boxes_box (cost=0.00..13.00 rows=66 width=370)
Output: id, name, description, uuid, owner_id, created_at, updated_at, expires_at, status, max_messages, last_sent_at
Filter: ((boxes_box.expires_at > '2020-09-18 18:06:25.535802+00'::timestamp with time zone) AND (boxes_box.owner_id <> 10))
What is the piece of software (app) you have used continuously for the longest period of time?
This is an interesting question. More than 2 decades have passed since I got my first computer. Throughout all this time my usage of computers evolved dramatically, and most of the software I installed at the time no longer exists or is so outdated that there is no point in using it.
Even the “type” of software changed; before, I didn’t rely on so many web apps and SaaS (Software as a Service) products that dominate the market nowadays.
The devices we use to run the software also changed, now it’s common for people to spend more time on certain mobile apps than their desktop counterparts.
In the last 2 decades, not just user needs changed but also the communication protocols on the internet, the multimedia codecs and the main “algorithms” for certain tasks.
It is true that many things changed, however others haven’t. There are apps that were relevant at the time, that are still in use, and I expect that they will still be around for many years.
I spent some time thinking about my answer to the question, given I have a few strong contenders.
One of them is Firefox. However my usage of the browser was split by periods when I tried other alternatives. I installed it when it was initially launched and I still use it nowadays, but the continuous usage time doesn’t take it to the first place.
I used Windows for 12/13 straight years before switching to Linux, but it is still not enough (I also don’t think operating systems should be taken into account for this question, since for most people the answer would be Windows).
VLC is another contender, but as happened with Firefox, I started using it early and then kept switching back and forth with other media players throughout the years. The same applies to the “office” suite.
The final answer seems to be Thunderbird. I’ve been using it daily since 2004, which means 16 years and counting. At the time I was fighting the ridiculously small storage limit I had for my “webmail” inbox, so I started using it to download the messages to my computer in order to save space. I still use it today for totally different reasons.
And you, what is the piece of software or app you have continuously used for the longest period of time?
Nowadays, in some “developed” countries, it is very common for people to have a bunch of old phones stored somewhere in a drawer. Ten years have passed since smartphones became ubiquitous and those devices tend to become unusable very quickly, at least for their primary purpose. Either a small component breaks, the vendor stops providing updates, newer apps don’t support those older versions, etc.
The thing is, these phones are still powerful computers. It would be great if we could give them another life once they are no longer fit for regular day to day use or the owner just wants to try a shiny new device.
I never had many smartphones, mine tend to last many years, but I still have one or two lying around. Recently I started thinking of new uses for them, to make them work instead of just gathering dust. A quick search on the internet tells me that many people have already had the same idea (I’m quite late to the party) and have been working on cool things to do with these devices.
However, most of these articles just throw the idea at you, without telling you how to do it. Others assume that your device is relatively recent.
Of course the difficulty increases with the age of the phone; in my case, the software that I will be able to run on a 10-year-old Samsung Galaxy S will not be as easy to find as the software that I can run on a device that is just one or two years old.
Below is a list of posts I found online with cool things you can do with your old phones. What sets this list apart from other results is that the items aren’t just ideas, they contain step-by-step instructions on how to achieve the end result.
Nowadays most software developed using the Python language relies on external packages (dependencies) to get the job done. Correctly managing this “supply-chain” ends up being very important and having a big impact on the end product.
As a developer you should be cautious about the dependencies you include in your project, as I explained in a previous post, but you are always dependent on the job done by the maintainers of those packages.
As a public package owner/maintainer, you also have to be aware that the code you write, your decisions and your actions will have an impact on the projects that depend directly or indirectly on your package.
With this small introduction we arrive at the topic of this post, which is “What to do as a maintainer when you no longer want to support a given package?” or “How to properly rename my package?”.
In both of these situations you might think “I will start by removing the package from PyPI”. I hope the next lines will convince you that this is the worst thing you can do, for two reasons:
You will break the code or the build systems of all projects that depend on the current or past versions of your package.
You will free the namespace for others to use and if your package is popular enough this might become a juicy target for any malicious actor.
TL;DR: you will screw your “users”.
The left-pad incident, while it didn’t happen in the python ecosystem, is a well known example of the first point and shows what happens when a popular package gets removed from the public index.
Malicious actors usually register packages using names that are similar to other popular packages, with the hope that a user will end up installing them by mistake, something that has already been found multiple times on PyPI. Now imagine if that package name suddenly becomes available and is already trusted by other projects.
What should you do then?
Just don’t delete the package.
I admit that on some rare occasions it might be required, but most of the time the best thing to do is to leave it there (especially for open-source ones).
Adding a warning to the code and informing the users in the README file that the package is no longer maintained or safe to use is also a nice thing to do.
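A rough sketch of what that warning could look like in the package’s __init__.py (the package names here are placeholders):

# __init__.py of the deprecated package (hypothetical example)
import warnings

warnings.warn(
    "Important: <old-package> is no longer maintained. "
    "Please use <new-package> instead: https://pypi.org/project/<new-package>/",
    DeprecationWarning,
)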
A good example of this process being done properly was the renaming of model-mommy to model-bakery; as a user, it was painless. Here’s an overview of the steps they took:
A new source code repository was created with the same contents. (This step is optional)
After doing the required changes a new package was uploaded to PyPI.
Deprecation warnings were added to the old code, mentioning the new package.
The documentation was updated mentioning the new package and making it clear the old package will no longer be maintained.
A new release of the old package was created, so the user could see the deprecation warnings.
All further development was done on the new package.
The old code repository was archived.
So here is what is shown every time the test suite of an affected project is executed:
/lib/python3.7/site-packages/model_mommy/__init__.py:7: DeprecationWarning: Important: model_mommy is no longer maintained. Please use model_bakery instead: https://pypi.org/project/model-bakery/
In the end, even though I didn’t update right away, everything kept working and I was constantly reminded that I needed to make the change.
In this post I’ll try to describe a simple solution that I came up with to solve the issue of dynamically updating DNS records when the IP addresses of your machines/instances change frequently.
While Dynamic DNS isn’t a new thing and many services/tools around the internet already provide solutions to this problem (for more than 2 decades), I had a few requirements that ruled out most of them:
I didn’t want to sign up to a new account in one of these external services.
I would prefer to use a domain name under my control.
I don’t trust the machine/instance that executes the update agent, so according to the principle of least privilege, the client should only be able to update one DNS record.
The first and second points rule out the usual DDNS service providers, and the third point forbids me from using the Cloudflare API as is (like it is done in other blog posts), since the permissions we are allowed to set up for a new API token aren’t granular enough to only allow access to a single DNS record; at best I would have to give access to all records under that domain.
My solution to the problem at hand was to put a worker in front of the API, basically delegating half of the work to this “serverless function”. The flow is the following:
agent gets IP address and timestamp
agent signs the data using a previously known key
agent contacts the worker
worker verifies signature, IP address and timestamp
worker fetches DNS record info of a predefined subdomain
If the IP address is the same, nothing needs to be done
If the IP address is different, worker updates DNS record
worker notifies the agent of the outcome
Nothing too fancy or clever, right? But it works like a charm.
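To give an idea of the agent side, here is a rough sketch (this is not the published implementation; the endpoint, the signing scheme and the payload format are assumptions made for illustration):

# Hypothetical agent: sign the current IP and timestamp with a shared
# secret (HMAC) and send them to the worker for verification.
import hashlib
import hmac
import json
import time
import urllib.request

WORKER_URL = "https://ddns.example.workers.dev"  # assumed worker endpoint
SHARED_KEY = b"previously-shared-secret"         # key known by agent and worker


def current_ip() -> str:
    # Any "what is my IP" service will do; this one is just an example
    with urllib.request.urlopen("https://ifconfig.me/ip") as response:
        return response.read().decode().strip()


payload = json.dumps(
    {"ip": current_ip(), "timestamp": int(time.time())}, sort_keys=True
).encode()
signature = hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()

request = urllib.request.Request(
    WORKER_URL,
    data=payload,
    headers={"Content-Type": "application/json", "X-Signature": signature},
)
with urllib.request.urlopen(request) as response:
    print(response.read().decode())  # the worker reports the outcome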
I’ve published my implementation on GitHub with a FOSS license, so anyone can modify and reuse it. It doesn’t require any extra dependencies, it consists of only two files, and you just need to drop them in the right locations and you’re ready to go. The repository can be found here and the README.md contains the detailed steps to deploy it.
There are other small features that could be implemented, such as using the same worker with several agents that need to update different records, so only one of these “serverless functions” would be required. But these improvements will have to wait for another time; for now I just needed something that worked well for this particular case and could be easily deployed in a short time.
Some days ago, while scrolling my Mastodon feed (for those who don’t know, it is like Twitter but instead of being a single website, the whole network is composed of many different entities that interact with each other), I found the following message:
To server admins:
It is a good practice to provide contact details, so others can contact you in case of security vulnerabilities or questions regarding your privacy policy.
One upcoming but already widespread format is the security.txt file at https://your-server/.well-known/security.txt.
See https://securitytxt.org/ and https://infosec-handbook.eu/.well-known/security.txt.
It caught my attention because my personal domain didn’t have one at the time. I’ve added it to other projects in the past, but do I need one for a personal domain?
After some thought, I couldn’t find any reason why I shouldn’t add one in this particular case. So as you might already have guessed, this post is about the steps I took to add it to my domain.
What is it?
A small text file, just like robots.txt, placed in a well-known location, containing details about procedures, contacts and other key information required for security professionals to properly disclose their findings.
Or in other words: Contact details in a text file.
security.txt isn’t yet an official standard (still a draft) but it addresses a common issue that security researchers encounter during their day-to-day activity: sometimes it’s harder to report a problem than it is to find it. I always remember the case of a Portuguese citizen who spent ~5 months trying to contact someone who could fix some serious vulnerabilities in a governmental website.
Even though it isn’t an accepted standard yet, it’s already being used in the wild. Creating and publishing your own is straightforward:
Go to https://securitytxt.org/ and fill the required fields of the form present on that website.
Fill the extra fields if they apply.
Generate the text document.
Sign the content using your PGP key gpg --clear-sign security.txt
Publish the signed file on your domain under https://<domain>/.well-known/security.txt
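Before signing, the resulting file would look roughly like this (all values below are made up for illustration):

Contact: mailto:security@example.com
Expires: 2026-12-31T23:00:00.000Z
Preferred-Languages: en, pt
Canonical: https://example.com/.well-known/security.txt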
As you can see, this is a very low effort task and it can generate very high returns, if it leads to a disclosure of a serious vulnerability that otherwise would have gone unreported.
This time, as you can deduce from the title, I will address the topic of how to use feature flags on Django websites and applications. This is an incredible functionality to have, especially if you need to continuously roll out new code to production environments that might not be ready to be released.
But first, what are feature flags? Wikipedia tells us this:
A feature toggle (also feature switch, feature flag, …) is a technique in software development that attempts to provide an alternative to maintaining multiple branches in source code (known as feature branches), such that a software feature can be tested even before it is completed and ready for release. Feature toggle is used to hide, enable or disable the feature during runtime.
It seems a pretty clear explanation and it gives us a glimpse of the potential of having this capability in a given project. Exploring the concept a bit more uncovers a nice set of possibilities and use cases, such as:
Canary Releases
Instant Rollbacks
AB Testing
Testing features with production data
To dive further into the concept I recommend starting by reading this article, that gives you a very detailed explanation of the overall idea.
In the rest of the post I will describe how this kind of functionality can easily be included in a standard Django application. Over time many packages were built to solve this problem, however most aren’t maintained anymore, so for this post I picked django-waffle, given it’s one of the few that is still in active development.
As an example scenario, let’s imagine a company that provides a suite of online office tools and is currently in the process of introducing a new product while redoing the main website’s design. The team wants some trusted users and the developers to have access to the unfinished product in production, and a small group of random users to view the new design.
With the above scenario in mind, we start by installing the package and adding it to our project by following the instructions present in the official documentation.
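Roughly, that setup boils down to something like this (check the django-waffle documentation for the authoritative steps):

# After "pip install django-waffle":

# settings.py
INSTALLED_APPS = [
    # ...
    "waffle",
]

MIDDLEWARE = [
    # ...
    "waffle.middleware.WaffleMiddleware",
]

# then run: python manage.py migrate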
Now, picking the /products page that is supposed to display the list of existing products, we can implement it this way:
# views.py
from django.shortcuts import render
from waffle import flag_is_active


def products(request):
    if flag_is_active(request, "new-design"):
        return render(request, "new-design/product_list.html")
    else:
        return render(request, "product_list.html")
You can see above how flag conditions are checked while processing a given request. These conditions are the flags, which are models in the database with certain criteria that will be evaluated against the provided request in order to determine if they are active or not.
Now, in the database, we can configure the behavior of this code by editing the flag objects. Here are the two objects that I created (retrieved using the dumpdata command):
Last January I made a small post about setting up a “Content-Security-Policy” header for this blog. On that post I described the steps I took to reach a final result, that I thought was good enough given the “threats” this website faces.
This process usually isn’t hard if you develop the website’s software and have a high level of control over the development decisions; the end result ends up being a simple yet very strict policy. However, if you do not have that degree of control over the code (and do not want to break the functionality), the policy can end up more complex and lax than you were initially hoping for. That’s what happened in my case, since I currently use a standard installation of WordPress for the blog.
The end result was a different security policy for different routes and sections (this part was not included in the blog post), which made the web-server configuration quite messy.
(With this intro, you might have already noticed that I’m just making excuses to redo the initial and working implementation, in order to test some sort of new technology)
Given the blog is behind the Cloudflare CDN and they introduced their “serverless” product called “Workers” a while ago, I decided that I could try to manage the policy dynamically on their servers.
Browser <--> CF Edge Server <--> Web Server <--> App
The above line describes the current setup, so instead of adding the CSP header on the “App” or the “Web Server” stages, the header is now added to the response on the last stage before reaching the browser. Let me describe how I’ve done it.
Cloudflare Workers
First a very small introduction to Workers, later you can find more detailed information on Workers.dev.
So, first Cloudflare added the V8 engine to all edge servers that route the traffic of their clients, then more recently they started letting these users write small programs that can run on those servers inside the V8 sandbox.
The programs are built very similarly to how you would build a service worker (they use the same API), the main difference being where the code runs (browser vs edge server).
These “serverless” scripts can then be called directly through a specific endpoint provided by Cloudflare. In this case they should create and return a response to the requests.
Or you can instruct Cloudflare to execute them on specific routes of your website, this means that the worker can generate the response, execute any action before the request reaches your website or change the response that is returned.
This service is charged based on the number of requests handled by the “workers”.
The implementation
Going back to the original problem: based on the above description, we can dynamically introduce or modify the “Content-Security-Policy” header for each request that goes through the worker, which gives us a high degree of flexibility.
So for my case, a simple script like the one below did the job just fine.
addEventListener('fetch', event => {
  event.respondWith(handleRequest(event.request))
})

/**
 * Forward the request and swap the response's CSP header
 * @param {Request} request
 */
async function handleRequest(request) {
  let policy = "<your-custom-policy-here>"
  let originalResponse = await fetch(request)
  let response = new Response(originalResponse.body, originalResponse)
  response.headers.set('Content-Security-Policy', policy)
  return response
}
The script just listens for the fetch event and passes the request to a handler function, which forwards it to the origin server, grabs the response, replaces the CSP header with the defined policy and then returns the modified response.
If I needed something more complex, like making slight changes to the policy depending on the User-Agent to make sure different browsers behave as expected given the different implementations or compatibility issues, it would also be easy. This is something that would be harder to achieve in the config file of a regular web server (nginx, apache, etc).
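For illustration, such a variation of the handler could look roughly like this (the Firefox check and the policy strings are just placeholders):

// Hypothetical variation: pick a different policy based on the User-Agent header
async function handleRequest(request) {
  const userAgent = request.headers.get('User-Agent') || ''
  const policy = userAgent.includes('Firefox')
    ? "<policy-tuned-for-firefox>"
    : "<default-policy>"

  let originalResponse = await fetch(request)
  let response = new Response(originalResponse.body, originalResponse)
  response.headers.set('Content-Security-Policy', policy)
  return response
}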
Enabling the worker
Now that the script is done and the worker deployed, in order to make it run on certain requests to my blog, I just had to go to the Cloudflare dashboard of my domain, click on the “Workers” section and add the routes where I want it to be executed:
Configuring the routes that will use the worker
The settings displayed in the above picture will run the worker on all requests to this blog, but it can be made more specific, and I can even have multiple workers for different routes.
Some sort of conclusion
Despite the use-case described in this post being very simple, there is potential in this new “serverless” offering from Cloudflare. It definitely helped me solve the problem of having different policies for different sections of the website without much trouble.
In the future I might come back to it, to explore other use-cases or implementation details.
I haven’t written one of these supposedly weekly posts with small Django tips for a while, but at least I always post them on Fridays.
This time I’m going to address how we can test emails with the tools that Django provides, and more precisely how to check the attachments of those emails.
The behavior of emails during testing is very well documented (Django’s documentation is one of the best I’ve seen) and can be found here.
Summing it up, if you want to test some business logic that sends an email, Django replaces the EMAIL_BACKEND setting with a testing backend during the execution of your test suite and makes the outbox available through django.core.mail.outbox.
But what about attachments? Since each item in the testing outbox is an instance of the EmailMessage class, it contains an attribute named “attachments” (surprise!) that is a list of tuples with all the relevant information:
("<filename>", "<contents>", "<mime type>")
Here is an example:
# utils.py
from django.core.mail import EmailMessage


def some_function_that_sends_emails():
    msg = EmailMessage(
        subject="Example email",
        body="This is the content of the email",
        from_email="some@email.address",
        to=["destination@email.address"],
    )
    msg.attach("sometext.txt", "The content of the file", "text/plain")
    msg.send()
# tests.py
from django.test import TestCase
from django.core import mail

from .utils import some_function_that_sends_emails


class ExampleEmailTest(TestCase):
    def test_example_function(self):
        some_function_that_sends_emails()

        self.assertEqual(len(mail.outbox), 1)

        email_message = mail.outbox[0]
        self.assertEqual(email_message.subject, "Example email")
        self.assertEqual(email_message.body, "This is the content of the email")
        self.assertEqual(len(email_message.attachments), 1)

        file_name, content, mimetype = email_message.attachments[0]
        self.assertEqual(file_name, "sometext.txt")
        self.assertEqual(content, "The content of the file")
        self.assertEqual(mimetype, "text/plain")
If you are using pytest-django the same can be achieved with the mailoutbox fixture:
import pytest

from .utils import some_function_that_sends_emails


def test_example_function(mailoutbox):
    some_function_that_sends_emails()

    assert len(mailoutbox) == 1

    email_message = mailoutbox[0]
    assert email_message.subject == "Example email"
    assert email_message.body == "This is the content of the email"
    assert len(email_message.attachments) == 1

    file_name, content, mimetype = email_message.attachments[0]
    assert file_name == "sometext.txt"
    assert content == "The content of the file"
    assert mimetype == "text/plain"