Django Friday Tips: Testing emails

I haven’t written one of these supposedly weekly posts with small Django tips for a while, but at least I always post them on Fridays.

This time I'm going to address how we can test emails with the tools that Django provides, and more precisely how to check the attachments of those emails.

The testing behavior of emails is very well documented (Django’s documentation is one of the best I’ve seen) and can be found here.

Summing it up: if you want to test some business logic that sends an email, Django replaces the EMAIL_BACKEND setting with a testing backend during the execution of your test suite and makes the outbox available through django.core.mail.outbox.

But what about attachments? Since each item on the testing outbox is an instance of the EmailMessage class, it contains an attribute named “attachments” (surprise!) that is a list of tuples with all the relevant information:

("<filename>", "<contents>", "<mime type>")

Here is an example:

from django.core.mail import EmailMessage

def some_function_that_sends_emails():
    msg = EmailMessage(
        subject="Example email",
        body="This is the content of the email",
        to=["someone@example.com"],  # example recipient
    )
    msg.attach("sometext.txt", "The content of the file", "text/plain")
    msg.send()

from django.test import TestCase
from django.core import mail

from .utils import some_function_that_sends_emails

class ExampleEmailTest(TestCase):
    def test_example_function(self):
        some_function_that_sends_emails()

        self.assertEqual(len(mail.outbox), 1)

        email_message = mail.outbox[0]
        self.assertEqual(email_message.subject, "Example email")
        self.assertEqual(email_message.body, "This is the content of the email")
        self.assertEqual(len(email_message.attachments), 1)

        file_name, content, mimetype = email_message.attachments[0]
        self.assertEqual(file_name, "sometext.txt")
        self.assertEqual(content, "The content of the file")
        self.assertEqual(mimetype, "text/plain")

If you are using pytest-django the same can be achieved with the mailoutbox fixture:

import pytest

from .utils import some_function_that_sends_emails

def test_example_function(mailoutbox):
    some_function_that_sends_emails()

    assert len(mailoutbox) == 1

    email_message = mailoutbox[0]
    assert email_message.subject == "Example email"
    assert email_message.body == "This is the content of the email"
    assert len(email_message.attachments) == 1

    file_name, content, mimetype = email_message.attachments[0]
    assert file_name == "sometext.txt"
    assert content == "The content of the file"
    assert mimetype == "text/plain"

And this is it for today.


8 useful dev dependencies for django projects

In this post I'm going to list some very useful tools I often use when developing a Django project. These packages help me improve my development speed, write better code, and also find/debug problems faster.

So let's start:


Black

This one is here to avoid useless discussions about preferences and taste related to code formatting. Now I just install black and let it take care of these matters; it has almost no configuration options (with one or two exceptions) and, as long as your code is free of syntax errors, it will be automatically formatted according to a “style” that is reasonable.

Note: Many editors can be configured to automatically run black on every file save.


Pylint

Using a code linter (a kind of static analysis tool) is also very easy and can be integrated with your editor. It allows you to catch many issues without even running your code, such as missing imports, unused variables, missing parenthesis and other programming errors. There are a few other linters available, but pylint does the job well and I never bothered to switch.


Pytest

Python has a unit testing framework included in its standard library (unittest) that works great, however I found an external package that makes me more productive and my tests much clearer.

That package is pytest and once you learn the concepts it is a joy to work with. A nice extra is that it recognizes your older unittest tests and is able to execute them anyway, so no need to refactor the test suite to start using it.
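To give a rough idea of why the tests end up clearer, here is a minimal sketch of a pytest-style test (the function under test, add_vat, is just a made-up example):

```python
# A hypothetical function under test
def add_vat(price, rate=0.23):
    return round(price * (1 + rate), 2)

# With pytest, a test is just a function with plain assert statements,
# no TestCase subclass or assert* helper methods required
def test_add_vat():
    assert add_vat(100) == 123.0
    assert add_vat(100, rate=0) == 100
```

When an assertion fails, pytest rewrites the assert to show the actual values involved, which is a big part of what makes it pleasant to use.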


Pytest-django

This package, as the name indicates, adds the required support and some useful utilities to test your Django projects using pytest. With it, instead of python manage.py test, you will just execute pytest like in any other Python project.


Django-debug-toolbar

Debug toolbar is a web panel added to your pages that lets you inspect the contents of your requests, database queries, template generation, etc. It provides lots of useful information, so the viewer can understand how the whole page rendering is behaving.

It can also be extended with other plugins that provide more specific information, such as flamegraphs, HTML validators and other profilers.
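For reference, a typical setup is only a few lines; this is a sketch based on the package's documentation (adjust to your project):

```python
# settings.py -- sketch, per the django-debug-toolbar docs
INSTALLED_APPS = [
    # ...
    "debug_toolbar",
]

MIDDLEWARE = [
    "debug_toolbar.middleware.DebugToolbarMiddleware",
    # ...
]

# The toolbar only renders for requests coming from these addresses
INTERNAL_IPS = ["127.0.0.1"]
```

The package's URLs also need to be included in the project's urls.py; check the official installation guide for the full steps.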


Django-silk

If you are developing an API without any HTML pages rendered by Django, django-debug-toolbar won't provide much help. This is where django-silk shines, in my humble opinion: it provides many of the same metrics and information on a separate page that can be inspected to debug problems and find performance bottlenecks.


Django-extensions

This package is a collection of small scripts that provide common functionality that is frequently needed. It contains a set of management commands, such as shell_plus and runserver_plus (improved versions of the default ones), database visualization tools, debugger tags for the templates, abstract model classes, etc.


Django-mail-panel

Finally, this one is an email panel for the django-debug-toolbar that lets you inspect the sent emails while developing your website/webapp. This way you don't have to configure another service to catch the emails, or read the messages on the terminal with django.core.mail.backends.console.EmailBackend, which is not very useful if you are working with HTML templates.


Channels and Webhooks

Django is an awesome web framework for Python and does a really good job, either for building websites or web APIs using Rest Framework. One area where it usually fell short was asynchronous functionality: it wasn't its original purpose, and that kind of thing wasn't even common on the web at the time of its creation.

The world moved on, web-sockets became a thing and suddenly there was a need to handle persistent connections and to deal with other flows “instead of” (or along with) the traditional request-response scheme.

In the last few years there have been several cumbersome solutions to integrate web-sockets with Django, and some people even moved to other Python solutions (losing many of the goodies) in order to support this real-time functionality. And it is not just web-sockets: it can be any other kind of persistent connection and/or asynchronous protocol, in a microservice architecture for example.

Of all the alternatives, the most developer-friendly seems to be django-channels, since it lets you keep using familiar Django design patterns and integrates in a way that feels like it really is part of the framework itself. Last year django-channels saw the release of its second iteration, with a completely different internal design, and it seems to be stable enough to start building cool things with, so that is what we will do in this post.

Webhook logger

In this blog post I'm going to explore version 2 of the package and evaluate how difficult it can be to implement a simple flow using websockets.

Most of the tutorials I find on the web about this subject try to demonstrate the capabilities of “channels” by implementing a simple real-time chat solution. For this blog post I will try something different and perhaps more useful, at least for developers.

I will build a simple service to test and debug webhooks (in reality any type of HTTP request). The functionality is minimal and can be described like this:

  • The user visits the website and is given a unique callback URL.
  • All requests sent to that callback URL are displayed in the user's browser in real time, with all the information about each request.
  • The user can use that URL in any service that sends requests/webhooks as asynchronous notifications.
  • Many people can have the page open at the same time and receive the information about the incoming requests.
  • No data is stored; if the user reloads the page, they will only see new requests.

In the end the implementation will not differ much from those chat versions, but at least we will end up with something that can be quite handy.

Note: The final result can be checked on Github, if you prefer to explore while reading the rest of the article.

Setting up the Django project

The basic setup is identical to any other Django project: we just create a new one using django-admin startproject webhook_logger and then create a new app using python manage.py startapp callbacks (in this case I just named the app callbacks).

Since we will not store any information we can remove all database related stuff and even any other extra functionality that will not be used, such as authentication related middleware. I did this on my repository, but it is completely optional and not in the scope of this small post.

Installing “django-channels”

After the project is set up we can add the missing piece, the django-channels package, by running pip install channels==2.1.6. Then we need to add it to the installed apps:
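As described in the channels documentation, this is just one more entry in the settings (a sketch):

```python
# <project_name>/settings.py
INSTALLED_APPS = [
    # ... the default Django apps ...
    "channels",
]
```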


For this project we will use Redis as a backend for the channel layer, so we also need to install the channels-redis package and add the required configuration:

# <project_name>/settings.py
CHANNEL_LAYERS = {
    "default": {
        "BACKEND": "channels_redis.core.RedisChannelLayer",
        "CONFIG": {"hosts": [(os.environ.get("REDIS_URL", "localhost"), 6379)]},
    }
}

The above snippet assumes you are running a Redis server instance on your machine, but you can point it elsewhere using an environment variable.

Adding websocket functionality

When using “django channels” our code will not differ much from a standard Django app: we will still have our views, our models, our templates, etc. For the asynchronous interactions and protocols outside the standard HTTP request-response style, we will use a new concept, the Consumer, with its own routing file outside of the default urls.py.

So let's add these new files and configurations to our app. First, inside our app, let's create a consumers.py file with the following contents:

# callbacks/consumers.py
from channels.generic.websocket import WebsocketConsumer
from asgiref.sync import async_to_sync
import json

class WebhookConsumer(WebsocketConsumer):
    def connect(self):
        self.callback = self.scope["url_route"]["kwargs"]["uuid"]
        async_to_sync(self.channel_layer.group_add)(self.callback, self.channel_name)
        self.accept()

    def disconnect(self, close_code):
        async_to_sync(self.channel_layer.group_discard)(
            self.callback, self.channel_name
        )

    def receive(self, text_data):
        # Discard all received data
        pass

    def new_request(self, event):
        # Forward the request information to the connected browser
        self.send(text_data=json.dumps(event["data"]))

Basically we extend the standard WebsocketConsumer and override its methods. A consumer instance will be created for each websocket connection that is made to the server. Let me explain a little bit of what is going on in the above snippet:

  • connect – When a new websocket connection is made, we check which callback it wants to receive information about and attach the consumer to the related group (a group is a way to broadcast a message to several consumers).
  • disconnect – As the name suggests, when we lose a connection we remove the “consumer” from the group.
  • receive – This is a standard method for receiving any data sent by the other end of the connection (in this case the browser). Since we do not want to receive any data, let's just discard it.
  • new_request – This is a custom method for handling data about a given request/webhook received by the system. These messages are submitted to the group with the type new_request.

You might also be a little confused by that async_to_sync function that is imported and used to call the channel_layer methods, but the explanation is simple: since those methods are asynchronous and our consumer is standard synchronous code, we have to execute them synchronously. That function and sync_to_async are two very helpful utilities to deal with these scenarios; for details about how they work please check this blog post.
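For intuition, what async_to_sync achieves is roughly what the standard library's asyncio.run does in this toy example: drive a coroutine to completion from synchronous code and hand back its result (the coroutine here is made up for illustration):

```python
import asyncio

async def fetch_value():
    # Stand-in for an awaitable channel-layer method
    await asyncio.sleep(0)
    return 42

# Synchronous code cannot "await", so an event loop
# must run the coroutine to completion for us:
result = asyncio.run(fetch_value())
print(result)
```

async_to_sync is smarter than this sketch (it cooperates with an already-running event loop in a thread-safe way), which is why it is the right tool inside a consumer.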

Now that we have a working consumer, we need to take care of the routing so it is accessible to the outside world. Let's add an app-level routing.py file:

# callbacks/routing.py
from django.conf.urls import url

from .consumers import WebhookConsumer

websocket_urlpatterns = [url(r"^ws/callback/(?P<uuid>[^/]+)/$", WebhookConsumer)]

Here we use a very similar pattern to the well known url_patterns to link our consumer class to connections of a certain URL. In this case our users can connect to a URL that contains the id (uuid) of the callback they want to be notified about.

Finally, for our consumer to be available to the public, we need to create a root routing file for our project. It looks like this:

# <project_name>/routing.py
from channels.routing import ProtocolTypeRouter, URLRouter
from callbacks.routing import websocket_urlpatterns

application = ProtocolTypeRouter({"websocket": URLRouter(websocket_urlpatterns)})

Here we use the ProtocolTypeRouter as the main entry point, and what it does is:

It lets you dispatch to one of a number of other ASGI applications based on the type value present in the scope. Protocols will define a fixed type value that their scope contains, so you can use this to distinguish between incoming connection types.

Django Channels Documentation

We just defined the websocket protocol and used the URLRouter to point to our previously defined websocket URLs.

The rest of the app

At this moment we are able to receive new websocket connections and send live data to those clients using the new_request method. However, we do not yet have any information to send, since we haven't created the endpoints that will receive the requests and forward their data to our consumers.

For this purpose let's create a simple class-based view. It will receive any type of HTTP request (including the webhooks we want to inspect) and forward it to the consumers that are listening on that specific uuid:

# callbacks/views.py
from asgiref.sync import async_to_sync
from channels.layers import get_channel_layer
from django.http import HttpResponse
from django.views import View


class CallbackView(View):
    def dispatch(self, request, *args, **kwargs):
        channel_layer = get_channel_layer()
        async_to_sync(channel_layer.group_send)(
            kwargs["uuid"], {"type": "new_request", "data": self._request_data(request)}
        )
        return HttpResponse()

In the above snippet, we get the channel layer, send the request data to the group and return a successful response to the calling entity (let's ignore what the self._request_data(request) call does and assume it returns all the relevant information we need).

One important detail: the value of the type key in the data passed to the group_send call is the name of the method that will be called on the websocket consumer we defined earlier.

Now we just need to expose this view on our urls.py file and the core of our system is done.

# <project_name>/urls.py

from django.urls import path
from callbacks.views import CallbackView

urlpatterns = [
    path("<uuid>", CallbackView.as_view(), name="callback-submit"),
]
The rest of our application is just standard Django web app development, and I will not cover that part in this blog post. You will need to create a page and use JavaScript to connect to the websocket. You can check a working example of this system at the following URL:

For more details just check the code repository on Github.


I'm not going to explore the details of deployment, but someone else wrote a pretty straightforward blog post on how to do it for production projects that use Django channels. You can check it here.

Final thoughts

With django-channels, building real-time web apps or projects that deal with protocols other than HTTP becomes really simple. I think it is a great addition to the current ecosystem, and it certainly is an option I will consider from now on for these tasks.

Have you ever used it? Do you have any strong opinions about it? Let me know in the comments section.

Final Note: Based on recent messages on the mailing list, it seems the project might suspend its development in the future if it doesn't find new maintainers. That would definitely be a shame, since it has a lot of potential. Let's see how it goes.


Django Friday Tips: Links that maintain the current query params

Basically, when you are building a simple page that displays a list of items with a few filters, you might want to maintain them while navigating, for example while browsing through the pages of results.

Nowadays many of these kinds of pages are rendered client-side, using libraries such as Vue and React, so this doesn't pose much of a problem, since the state is easily managed and requests are generated according to that state.

But what if you are building a simple page/website using traditional server-side rendered pages (which for many purposes is totally appropriate)? Generating the pagination this way, while maintaining the currently selected filters (and other query params), might give you more work and trouble than it should.

So today I'm going to present a quick solution, in the form of a template tag, that can help you easily handle that situation. With a quick search on the Internet you will almost surely find the following answer:

from django import template

register = template.Library()

@register.simple_tag
def url_replace(request, field, value):
    dict_ = request.GET.copy()
    dict_[field] = value
    return dict_.urlencode()

This is great and works for almost every scenario that comes to mind, but I think it can be improved a little bit. So, like one of the lower-ranked answers suggests, we can change it to handle more than one query parameter while maintaining the others:

from django import template

register = template.Library()

@register.simple_tag(takes_context=True)
def updated_params(context, **kwargs):
    dict_ = context['request'].GET.copy()
    for k, v in kwargs.items():
        dict_[k] = v
    return dict_.urlencode()

As you can see, with takes_context we no longer have to repeatedly pass the request object to the template tag and we can give it any number of parameters.

The main difference from the suggestion on “Stack Overflow” is that this version allows for repeated query params, because we don't convert the QueryDict to a dict. Now you just need to use it in your templates like this:

?{% updated_params page=2 something='else' %}

Looking for security issues on your python projects

In today’s post I will introduce a few open-source tools, that can help you improve the security of any of your python projects and detect possible vulnerabilities early on.

These tools are quite well known in the python community and used together will provide you with great feedback about common issues and pitfalls.

Safety and Piprot

As I discussed some time ago in a post about managing dependencies and the importance of checking them for known issues, in Python there is a tool that compares the items of your requirements.txt with a database of known vulnerable versions. It is called safety (repository) and can be used like this:

safety check --full-report -r requirements.txt

If you already use pipenv, safety is already incorporated and can be used by running pipenv check (more info here).

Since the older the dependencies are, the higher the probability of a certain package containing bugs and issues, another great tool that can help you with this is piprot (repository).

It will check all items on your requirements.txt and tell you how outdated they are.


Bandit

The next tool in line is bandit, a static analyzer for Python built by the OpenStack Security Project. It checks your codebase for common security problems and programming mistakes that might compromise your application.

It will find cases of hardcoded passwords, bad SSL defaults, usage of eval, weak ciphers, different “injection” possibilities, etc.

It doesn’t require much configuration and you can easily add it to your project. You can find more on the official repository.
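Running it is usually as simple as pointing it at your source tree (the path below is just an example):

```shell
# Recursively scan the project's source code for common security issues
bandit -r ./my_project/
```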

Python Taint

This last one only applies if you are building a web application and requires a little bit more effort to integrate in your project (at its current state).

Python Taint (pyt) is a static analyzer that tries to find spots where your code might be vulnerable to common types of problems that affect websites and web apps, such as SQL injection, cross site scripting (XSS), etc.

The repository can be found here.

If you are using Django, after running pyt you might also want to run the built-in check command (as discussed in a previous post) to verify some specific configurations of the framework present in your project.



Django Friday Tips: Adding RSS feeds

Following my previous posts about RSS and its importance for an open web, this week I will try to show how we can add syndication to our websites and other apps built with Django.

This post will be divided into two parts. The first one covers the basics:

  • Build an RSS feed based on a given model.
  • Publish the feed.
  • Attach that RSS feed to a given webpage.

The second part will contain more advanced concepts that will allow subscribers of our page/feed to receive real-time updates without the need to continuously check our feed. It will cover:

  • Adding a Websub / Pubsubhubbub hub to our feed
  • Publishing the new changes/additions to the hub, so they can be sent to subscribers

So let's go.

Part one: Creating the Feed

The framework already includes tools to handle this stuff, all of them well documented here. Nevertheless, I will do a quick recap and leave here a base example that can be reused for the second part of this post.

So let's suppose we have the following models:

class Author(models.Model):

    name = models.CharField(max_length=150)
    created_at = models.DateTimeField(auto_now_add=True)

    class Meta:
        verbose_name = "Author"
        verbose_name_plural = "Authors"

    def __str__(self):
        return self.name

class Article(models.Model):

    title = models.CharField(max_length=150)
    author = models.ForeignKey(Author, on_delete=models.CASCADE)

    created_at = models.DateTimeField(auto_now_add=True)
    updated_at = models.DateTimeField(auto_now=True)

    short_description = models.CharField(max_length=250)
    content = models.TextField()

    class Meta:
        verbose_name = "Article"
        verbose_name_plural = "Articles"

    def __str__(self):
        return self.title

As you can see, this is for a simple “news” page where certain authors publish articles.

According to the Django documentation about feeds, generating an RSS feed for that page requires adding the following Feed class to a feeds.py file (even though it can be placed anywhere, this file sounds appropriate):

from django.urls import reverse_lazy
from django.contrib.syndication.views import Feed
from django.utils.feedgenerator import Atom1Feed

from .models import Article

class ArticlesFeed(Feed):
    title = "All articles feed"
    link = reverse_lazy("articles-list")
    description = "Feed of the last articles published on site X."

    def items(self):
        return Article.objects.select_related().order_by("-created_at")[:25]

    def item_title(self, item):
        return item.title

    def item_author_name(self, item):
        return item.author.name

    def item_description(self, item):
        return item.short_description

    def item_link(self, item):
        return reverse_lazy('article-details', kwargs={"id": item.id})

class ArticlesAtomFeed(ArticlesFeed):
    feed_type = Atom1Feed
    subtitle = ArticlesFeed.description

On the above snippet, we set some of the feed’s global properties (title, link, description), we define on the items() method which entries will be placed on the feed and finally we add the methods to retrieve the contents of each entry.

So far so good, but what is that other class? Besides the standard RSS feed, with Django we can also generate an equivalent Atom feed; since many people like to provide both, that is what we do there.

The next step is to add these feeds to our URLs, which is also straightforward:

urlpatterns = [
    path('articles/rss', ArticlesFeed(), name="articles-rss"),
    path('articles/atom', ArticlesAtomFeed(), name="articles-atom"),
]

At this moment, if you try to visit one of those URLs, an XML response will be returned containing the feed contents.

So, how can users find out that we have these feeds, which they can use to get the new contents of our website/app in their reader software?

That is the final step of this first part. Either we provide the link to the user or we include them in the respective HTML page, using specific tags in the head element, like this:

<link rel="alternate" type="application/rss+xml" title="{{ rss_feed_title }}" href="{% url 'articles-rss' %}" />
<link rel="alternate" type="application/atom+xml" title="{{ atom_feed_title }}" href="{% url 'articles-atom' %}" />

And that’s it, this first part is over. We currently have a feed and a mechanism for auto-discovery, things that other programs can use to fetch information about the data that was published.

Part Two: Real-time Updates

The feed works great, however readers need to continuously check it for new updates, which isn't the ideal scenario. Neither for them, because if they forget to check regularly they will not be aware of the new content, nor for your server, since it will have to handle all of this extra workload.

Fortunately there is the WebSub protocol (previously known as Pubsubhubbub), a standard that can be used to deliver a notification to subscribers when there is new content.

It works by having your server notify an external hub (which handles the subscriptions) of the new content; the hub will then notify all of your subscribers.

Since this is a common standard, as you might expect there are already some Django packages that can help you with this task. Today we are going to use django-push together with a public hub, to keep things simple (but you could/should use another one).

The first step, as always, is to install the new package:

$ pip install django-push

And then add the package's Feed class to our feeds.py (and use it on our Atom feed):

from django_push.publisher.feeds import Feed as HubFeed


class ArticlesAtomFeed(ArticlesFeed, HubFeed):
    subtitle = ArticlesFeed.description

The reason I'm only applying this change to the Atom feed is that this package only works with this type of feed, as explained in the documentation:

… however its type is forced to be an Atom feed. While some hubs may be compatible with RSS and Atom feeds, the PubSubHubbub specifications encourages the use of Atom feeds.

This no longer seems to be true for the more recent protocol specifications, however for this post I will continue only with this type of feed.

The next step is to set up which hub we will use. On the settings.py file let's add the following line:
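According to the django-push documentation, the hub is configured through the PUSH_HUB setting; assuming the public Google hub as an example, it would look like this:

```python
# <project_name>/settings.py
PUSH_HUB = "https://pubsubhubbub.appspot.com"
```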


With this done, if you make a request for your Atom feed, you will notice that the following element was added to the XML response:

<link href="" rel="hub"></link>

Subscribers will use that information to subscribe for notifications on the hub. The last thing we need to do is to tell the hub when new entries/changes are available.

For that purpose we can use the ping_hub function. In this example, the easiest way to accomplish the task is to override the Article model's save() method on the models.py file:

from django_push.publisher import ping_hub


class Article(models.Model):
    ...

    def save(self, *args, **kwargs):
        super().save(*args, **kwargs)
        # Example URL: pass the full public URL of your Atom feed here
        ping_hub("https://example.com/articles/atom")

And that’s it. Our subscribers can now be notified in real-time when there is new content on our website.


Django Friday Tips: Timezone per user

Adding support for time zones to your website, in order to allow its users to work in their own timezone, is a “must” nowadays. So in this post I'm going to show you how to implement a simple version of it. Even though Django's documentation is very good and complete, the only example given is how to store the timezone in the user's session after detecting (somehow) the user's timezone.

What if the user wants to store their timezone in the settings, and have it used from there on every visit to the website? To solve this, I'm going to pick the example given in the documentation and, together with the simple django-timezone-field package/app, implement this feature.

First we need to install the dependency:

 $ pip install django-timezone-field==2.0rc1

Add to the INSTALLED_APPS of your project:
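The app label to add is timezone_field; a sketch of the change:

```python
# settings.py
INSTALLED_APPS = [
    # ...
    "timezone_field",
]
```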


Then add a new field to the user model:

from timezone_field import TimeZoneField


class User(AbstractUser):
    timezone = TimeZoneField(default='UTC')

Handle the migrations:

 $ python manage.py makemigrations && python manage.py migrate

Now we need to use this information. Based on the example in Django's documentation, we can add a middleware class that will get this information on every request and activate the desired timezone. It should look like this:

from django.utils import timezone

class TimezoneMiddleware:
    def process_request(self, request):
        if request.user.is_authenticated():
            timezone.activate(request.user.timezone)

Add the new class to the project middleware:
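Assuming the class was placed in a middleware.py file inside an app called accounts (both names are hypothetical), the settings entry would look something like this:

```python
# settings.py
MIDDLEWARE = [
    # ... the default middleware ...
    "accounts.middleware.TimezoneMiddleware",
]
```

Note that the plain class shown above (with only a process_request method) follows the old-style middleware convention; depending on your Django version you may need to extend MiddlewareMixin or use the MIDDLEWARE_CLASSES setting instead.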


Now it should be ready to use. All your forms will convert the received input (in that timezone) to UTC, and templates will convert from UTC to the user's timezone when rendered. For different conversions and more complex implementations, check the available methods.


Receive PGP encrypted emails, without the sender needing to know how to do it

One common problem for people trying to secure their email communications with PGP is that, more often than not, the other end doesn't know how to use these kinds of tools. I'll be honest: at its current state, the learning curve is too steep for the common user. This causes a huge deal of trouble when you want to receive/send sensitive information in a secure manner.

Let me give you an example: a software development team helping a customer build his web business or application may need to receive a wide variety of access keys to external services and APIs that are in the possession of the customer and are required (or useful) for the project.

Assuming the customer is not familiar with encryption tools, the probability of that sensitive material being shared in an insecure way is too high: he might send it through a clear-text email or post it on some shared document (or file). Both situations are red flags, either because the communication channel is not secure enough, or because multiple copies of the information may end up in different places with doubtful security, all of them in clear text.

In our recent “Whitesmith Hackathon”, one of the projects tried to address this issue. We thought of a more direct approach to this situation, based on the assumption that you will not be able to convince the customer to learn these kinds of things. We called it Hawkpost; essentially it's a website that makes use of OpenPGP.js, where you create unique links containing a form that the user can use to submit any information, which will then be encrypted in his browser with your public key (without the need to install any extra software) and forwarded to your email address.

You can test and use it online, but the project is open-source, so you can change it and deploy it on your own server if you prefer. It's still in a green state at the moment, but we will continue improving the concept according to the feedback we receive. Check it out and tell us what you think.


Django Friday Tips: Secret Key

One thing that is always generated for you when you start a new Django project is the SECRET_KEY string. This value is described in the documentation as:

A secret key for a particular Django installation. This is used to provide cryptographic signing, and should be set to a unique, unpredictable value.

The rule book mandates that this value should not be shared or made public, since that would defeat its purpose and many of the security features used by the framework. Given that any modern web development process has multiple environments, such as production and staging, or cases where we deploy the same codebase several times for different purposes, we will need to generate distinct values for this variable, so we can't rely solely on the one generated when the project was started.

There is no official way to generate new values for the secret key, but with a quick search on the Internet you can find several sources and code snippets for this task. So which one should you use? Django's own implementation generates a key with a length of 50 characters, chosen randomly from an alphabet of size 50 as well, so we might take this as a requirement. Better yet, why not call the same function that Django itself uses?

So for a new project, the first thing to do is to replace this:

SECRET_KEY = "uN-pR3d_IcT4~ble!_Str1Ng..."

With this:

SECRET_KEY = os.environ.get("SECRET_KEY", None)

Then for each deployment we can generate a distinct value for it using a simple script like this one:

from django.utils.crypto import get_random_string

chars = 'abcdefghijklmnopqrstuvwxyz0123456789!@#$%^&*(-_=+)'
print("SECRET_KEY={}".format(get_random_string(50, chars)))


$ python >> .env

Some people think the default function is not random enough and have proposed a different alternative (that also works); if you feel the same way, check this script.
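If you want a generator that does not depend on Django at all, the standard library's secrets module (a CSPRNG) can produce an equivalent value with the same alphabet and length:

```python
import secrets

# Same alphabet and length as Django's own generated key
chars = 'abcdefghijklmnopqrstuvwxyz0123456789!@#$%^&*(-_=+)'
secret_key = "".join(secrets.choice(chars) for _ in range(50))
print("SECRET_KEY={}".format(secret_key))
```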


Browsing folders of markdown files

If you are like me, you have a bunch of notes and documents written in markdown spread across many folders. Even the documentation of some projects involving many people is done this way and stored, for example, in a git repository. While it is easy to open a text editor to read these files, it is not the most pleasant experience, since the markup language was made to generate readable documents in other formats (e.g. HTML).

For many purposes, setting up the required configuration and tools to generate documentation (like mkdocs) is not practical, nor was it the initial intent when the documents were written. So last weekend I took a couple of hours and built a rough (and dirty) tool to help me navigate and read markdown documents with a more pleasant experience, using the browser (applying a style similar to GitHub's).

I called it mdvis and it is available for download through “pip”. Here is how working with it looks:

It does not provide many features and is somewhat “green”, but it serves my current purposes. The program is open-source, so you can check it out, in case you want to help improve it.