Author: Gonçalo Valério

  • Setting up a Content-Security-Policy

    A couple of weeks ago, I gave a small talk at the Madeira Tech Meetup about a set of HTTP headers that can help website owners protect their assets and their users. The slides are available here, just in case you want to take a look.

    The content of the talk is basically a small review of what exists, what each header tries to achieve and how you can use it.

    After the talk I remembered that I hadn't reviewed the headers of this blog for quite some time. So a quick visit to Mozilla Observatory, a tool that gives you a quick look at some of the security configurations of your website, gave me an idea of what I needed to improve. This was the result:

    The Content-Security-Policy header was missing

    So what is a Content Security Policy? On the MDN documentation we can find the following description:

    The HTTP Content-Security-Policy response header allows web site administrators to control resources the user agent is allowed to load for a given page.

    Mozilla Developer Network

    Summing up, in this header we describe, with a certain level of detail, the sources from which each type of content is allowed to be fetched and included on a given page/app. The main goal of this type of policy is to mitigate Cross-Site Scripting attacks.

    In order to start building a CSP for this blog, a good approach, in my humble opinion, is to begin with the most basic and restrictive policy and then evaluate the need for exceptions, adding them only when strictly necessary. So here is my first attempt:

    default-src 'self'; object-src 'none'; report-uri https://ovalerio.report-uri.com/r/d/csp/reportOnly

    Let's interpret what it says:

    • default-src: This is the default value for all non-mentioned directives. self means “only things that come from this domain”.
    • object-src: No <object>, <embed> or <applet> here.
    • report-uri: All policy violations should be reported by the browser to this URL.

    The idea was that all styles, scripts and images should be served by this domain; anything external should be blocked. This also blocks inline scripts, styles and data: images, which are considered unsafe. If for some reason I needed to allow them on the blog, I could use 'unsafe-inline', 'unsafe-eval' and data: in the directives' definitions, but in my opinion they should be avoided.

    Now, a good way to find out how this policy will affect the website, and to understand how it needs to be tuned (or the website changed), is to activate it using the “report only” mode:

    Content-Security-Policy-Report-Only: <policy>

    This mode will generate reports when you (and other users) navigate through the website; they will be printed on the browser's console and sent to the defined report-uri, but the resources will be loaded anyway.
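
    Just to make this step concrete: if the website were a Django app, a minimal sketch of a custom middleware setting this header could look like the one below (this is just an illustration, not how this blog is configured; packages like django-csp are another option):

    # csp_middleware.py - illustrative sketch only
    class CSPReportOnlyMiddleware:
        def __init__(self, get_response):
            self.get_response = get_response

        def __call__(self, request):
            response = self.get_response(request)
            # In report-only mode violations are reported but nothing is blocked
            response["Content-Security-Policy-Report-Only"] = (
                "default-src 'self'; object-src 'none'; "
                "report-uri https://ovalerio.report-uri.com/r/d/csp/reportOnly"
            )
            return response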

    Here are some results:

    CSP violation logs on the browser console
    Example of the CSP violations on the browser console

    As an example, below is a raw report of one of those violations:

    {
        "csp-report": {
            "blocked-uri": "inline",
            "document-uri": "https://blog.ovalerio.net/",
            "original-policy": "default-src 'self'; object-src 'none'; report-uri https://ovalerio.report-uri.com/r/d/csp/reportOnly",
            "violated-directive": "default-src"
        }
    }

    After a while I found that:

    • The theme used on this blog uses some data: fonts
    • Several inline scripts were being loaded
    • Many inline styles were also being used
    • I have some demos that load content from asciinema.org
    • I often share some videos from YouTube, so I need to allow iframes from that domain
    • Some older posts also embedded content from other websites (such as SoundCloud)

    So for the blog to work fine with the CSP being enforced, I either had to include some exceptions or fix errors. After evaluating the attack surface and the work required to make the changes I ended up with the following policy:

    Content-Security-Policy-Report-Only: default-src 'self'; script-src 'self' https://asciinema.org 'sha256-A+5+D7+YGeNGrYcTyNB4LNGYdWr35XshEdH/tqROujM=' 'sha256-2N2eS+4Cy0nFISF8T0QGez36fUJfaY+o6QBWxTUYiHc=' 'sha256-AJyUt7CSSRW+BeuiusXDXezlE1Wv2tkQgT5pCnpoL+w=' 'sha256-n3qH1zzzTNXXbWAKXOMmrBzjKgIQZ7G7UFh/pIixNEQ='; style-src 'self' 'sha256-MyyabzyHEWp8TS5S1nthEJ4uLnqD1s3X+OTsB8jcaas=' 'sha256-OyKg6OHgnmapAcgq002yGA58wB21FOR7EcTwPWSs54E='; font-src 'self' data:; img-src 'self' https://secure.gravatar.com; frame-src 'self' https://www.youtube.com https://asciinema.org; object-src 'none'; report-uri https://ovalerio.report-uri.com/r/d/csp/reportOnly

    A lot more complex than I initially expected it to be, but it’s one of the drawbacks of using a “pre-built” theme on a platform that I didn’t develop. I was able (in the available time) to fix some stuff but fixing everything would take a lot more work.

    All those sha256 hashes were added to allow only certain inline scripts and styles, without opening the door to everything through unsafe-inline.
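
    In case you are wondering where those values come from: each one is the Base64-encoded SHA-256 digest of the exact contents of an inline script or style block (browsers usually print the expected hash on the console when reporting a violation). As a rough sketch, you could compute one yourself like this:

    import base64
    import hashlib


    def csp_hash(inline_source):
        """Build a CSP source expression for an inline script/style body."""
        digest = hashlib.sha256(inline_source.encode("utf-8")).digest()
        return "'sha256-" + base64.b64encode(digest).decode("ascii") + "'"


    print(csp_hash("console.log('hello');"))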

    Perhaps in the future I will be able to change to a saner theme/platform, but for the time being this Content-Security-Policy will do the job.

    I started enforcing it (by changing Content-Security-Policy-Report-Only to Content-Security-Policy) just before publishing this blog post, so if anything is broken please let me know.

    I hope this post has been helpful to you, and if you haven't implemented this header yet you should give it a try; it might take some effort (depending on the use case) but in the long run I believe it is totally worth it.

  • kinspect – quickly look into PGP public key details

    Sometimes I just need to look into the details of a PGP key that is provided in its “armored” form by some website (not everyone is publishing their keys to the keyservers).

    Normally I would have to import that key into my keyring, or save it into a file and use gnupg to visualize it (as described in these Stack Overflow answers).
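
    (For reference, recent versions of GnuPG can also display a key without importing it, with something like gpg --show-keys key.asc, but that still means saving the key to a file or piping it in first.)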

    To avoid this hassle I just created a simple page with a text area where you can paste the public key and it will display some basic information about it. Perhaps an extension would be a better approach, but for now this works for me.

    You can use it on: https://kinspect.ovalerio.net

    In case you would like to contribute in order to improve it or extend the information displayed about the keys, the source code is available on Github using a Free Software license: https://github.com/dethos/kinspect

  • Firefox’s DoH, the good, the bad and the ugly

    First of all, DoH stands for “DNS over HTTPS”.

    So last week Mozilla announced that future versions of Firefox will use DoH by default, a decision that at first sight might seem innocuous or even good, but thanks to some implementation details ended up being very controversial.

    The reaction that followed the announcement on many technology focused forums and communities was mostly negative (example 1, example 2 and example 3), pointing out many problems, mostly with the way it was implemented and with the default settings used by Mozilla.

    One of the first outcomes was OpenBSD, a well known operating system, announcing that the version of Firefox distributed through its “package manager” will have this change disabled by default.

    In this post I will try to summarize the core of the whole controversy and list its pros and cons.

    How does a DNS query work?

    In a very brief and not 100% accurate way: when you try to visit a website such as www.some-example.com, your computer first asks a DNS server (resolver) for the IP address of that website. This server's address is usually defined on your system either manually (you set it up) or automatically (when you join a given WiFi network, for example, the network will tell you which server you can use).

    That server address is generally set system-wide and will be used by all apps. If the server knows the location of the website it will tell you the answer; otherwise it will try to find the location using one of two approaches (I will avoid any details here) and come back to you with the result.
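
    This is easy to see in practice. In Python, for example, the line below asks the operating system's configured resolver, exactly like any other application on the machine would:

    import socket

    # Resolves example.com using the system-wide DNS settings
    print(socket.getaddrinfo("example.com", 443, type=socket.SOCK_STREAM))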

    Your browser will then use this result to fetch the contents of the website. The image below describes this flow:

    Diagram of a client making a DNS query to a local server.
    Source: https://commons.wikimedia.org

    This system is kind of distributed across many entities. Different people across the globe will contact different servers according to their settings and network/location.

    DNS over HTTPS

    The previously described flow has existed for decades and does not change with DoH; what changes is the way you contact the server in order to ask for the website's location and the way this data is transmitted.

    While the standard implementation uses UDP and the information travels in cleartext through the network (everybody can see it), with DoH this is done as an HTTP request over TCP with an encrypted connection, protecting you from malicious actors.
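
    To make this more tangible, here is a rough sketch of such a query using the JSON flavour of Cloudflare's public DoH endpoint (Firefox itself speaks the binary wire format, but the idea is the same):

    import requests

    # A DNS query for the A records of example.com, sent as an
    # HTTPS request instead of a cleartext UDP packet
    response = requests.get(
        "https://cloudflare-dns.com/dns-query",
        params={"name": "example.com", "type": "A"},
        headers={"accept": "application/dns-json"},
    )
    print(response.json()["Answer"])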

    In the end this should be a good thing, but as we will see later in the post, things went south.

    Current implementation

    A great deal of the discussion this week was sparked by a blog post telling people to turn off Firefox's DoH. The main complaints revolve not around DoH itself but around the way Mozilla decided to introduce it: being opt-out and not opt-in, the browser ignoring system configuration, and using the servers of a single company by default.

    With the current implementation we end up with:

    The good

    The good part is the obvious reason for using DNS over HTTPS: all your website queries are encrypted and protected while in transit on the network. It is the extra protection that “DNS traffic” has needed for a while.

    The bad

    The first bad part is that the browser will work differently from the rest of the apps, which can cause confusion (why does this URL work on the browser and not on my app?), since the browser will no longer connect to the same server that was defined for the whole system.

    Related to the above problem, there are also special network configurations that will stop working, such as internal DNS names, rules and filters that are often used on private networks and rely on the internal DNS servers. For these scenarios Mozilla described a series of checks and fallbacks (such as “canary domains”) to accommodate them, however they look like fragile hacks.

    The ugly

    The ugly part is that all DNS traffic from the browser will go to a single entity by default, no matter where you are or which network you are using, which raises privacy concerns and increases the centralization of the system. There is the option of manually setting a different server, however 99% of users will rely on that single provider.

    Conclusion

    Overall, the intention was good, and having encrypted DNS resolution is something that has been needed for a very long time but hasn't become mainstream yet.

    The core of the problem with Mozilla’s approach is making it “opt-out”, which means all users will now tell a single “Mozilla partner” the websites they visit by default, without being aware of it.

    It will also create some problems for solutions that are deployed network-wide and rely on setting certain DNS configurations, since Firefox will not respect them. We can also expect increased centralization of a system that previously worked the other way around.

    Let's hope that in the future DoH and other encrypted alternatives become standardized, so we can continue to use DNS as we always did and don't have to manage it in every application.

  • Rust examples and exercises

    Learning to program in Rust is not as easy as in many other languages out there, because it ends up having different constraints and new concepts that you will have to go through; in the beginning everybody fights the compiler at least a little bit.

    I started this journey a while ago, however I've been progressing slowly, just dedicating some time once in a while when I don't have anything else to do.

    I did what many recommendations on the internet tell you to do: start by reading the official book, which is in fact pretty good. But after reading one or two chapters, you need to practice and play with the language to get a feel for it and explore the new concepts you have just learned.

    So in this small post I just want to share two open resources that can be used while you read the book to practice what you have just learned.

    The first one is a website with examples you can modify and execute live in the browser called Rust by Example.

    The second is an official Rust project that will put your knowledge to the test, called Rustlings.

    You can use it like in the above video, or with rustlings watch, which stops and reloads each exercise until you solve it.

    This is it, I hope they end up being helpful to someone else as well.

  • Staying on an AirBnB? Look for the cameras

    When going on a trip it is now common practice to consider staying in a rented apartment or house instead of a hotel or hostel, mostly thanks to AirBnB, which made it really easy and convenient for both sides of the deal. Most of the time the price is super competitive and, I would say, a great fit for many situations.

    However, as happens with almost anything, it has its own set of problems and challenges. One example of these new challenges are the reports (and confirmations) that some, let's call them malicious hosts, have been putting cameras in place to monitor guests during their stay.

    With a small search on the internet you can find several reports of such incidents.

    Someone equipped with the right knowledge and a computer can try to check if a camera is connected to the WiFi network, like this person did:

    Toot describing a camera that was hidden inside a box

    If this is your case, the following post provides a few guidelines to look for the cameras:

    Finally, try to figure out the public IP address of the network you are on ( https://dshield.org/api/myip ) and either run a port scan from the outside to see if you find any odd open ports, or look it up in Shodan to see if Shodan found cameras on this IP in the past (but you likely will have a dynamic IP address).

    InfoSec Handlers Diary Blog

    This page even provides a script that you can execute to automatically do most of the steps explained in the above article.

    However, sometimes you don’t bring your computer with you, which means you would have to rely on your smartphone to do this search. I’m still trying to find a good, trustworthy and intuitive app to recommend, since using nmap on Android will not help the less tech-savvy people.

    Meanwhile, I hope the above links provide you with some ideas and useful tools to look for hidden cameras while you stay on a rented place.

  • 8 useful dev dependencies for django projects

    In this post I’m gonna list some very useful tools I often use when developing a Django project. These packages help me improve development speed, write better code and also find/debug problems faster.

    So let's start:

    Black

    This one is to avoid useless discussions about preferences and taste related to code formatting. Now I simply install black and let it take care of these matters; it doesn't have any configurations (with one or two exceptions) and if your code does not have any syntax errors it will be automatically formatted according to a “style” that is reasonable.

    Note: Many editors can be configured to automatically run black on every file save.

    https://github.com/python/black

    PyLint

    Using a code linter (a kind of static analysis tool) is also very easy and can be integrated with your editor. It allows you to catch many issues without even running your code, such as missing imports, unused variables, missing parentheses and other programming errors. There are a few other linters available, but in this case pylint does the job well and I never bothered to switch.

    https://www.pylint.org/

    Pytest

    Python has a unit testing framework included in its standard library (unittest) that works great, however I found an external package that makes me more productive and my tests much clearer.

    That package is pytest and once you learn the concepts it is a joy to work with. A nice extra is that it recognizes your older unittest tests and is able to execute them anyway, so no need to refactor the test suite to start using it.
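
    Just to illustrate the style (this tiny example is mine, not from the package's documentation): tests are plain functions and assertions are plain asserts, no boilerplate classes required.

    # test_slugify.py - run with: pytest test_slugify.py
    def slugify(text):
        """Tiny helper, made up just for this example."""
        return text.strip().lower().replace(" ", "-")


    def test_slugify_replaces_spaces():
        assert slugify(" Hello World ") == "hello-world"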

    https://docs.pytest.org/en/latest/

    Pytest-django

    This package, as the name indicates, adds the required support and some useful utilities to test your Django projects using pytest. With it, instead of python manage.py test, you will just execute pytest like in any other python project.
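
    As a small sketch of what a test can look like with it (using Django's built-in User model; it assumes pytest-django is installed and DJANGO_SETTINGS_MODULE is set):

    # test_users.py
    import pytest
    from django.contrib.auth.models import User


    @pytest.mark.django_db  # this marker gives the test access to the database
    def test_create_user():
        User.objects.create_user("alice", password="not-a-real-password")
        assert User.objects.count() == 1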

    https://pytest-django.readthedocs.io

    Django-debug-toolbar

    Debug toolbar is a web panel added to your pages that lets you inspect a request's content, database queries, template generation, etc. It provides lots of useful information so the viewer can understand how the whole page rendering is behaving.

    It can also be extended with other plugins that provide more specific information, such as flamegraphs, HTML validators and other profilers.

    https://django-debug-toolbar.readthedocs.io

    Django-silk

    If you are developing an API without any HTML pages rendered by Django, django-debug-toolbar won't provide much help. This is where django-silk shines, in my humble opinion; it provides many of the same metrics and information on a separate page that can be inspected to debug problems and find performance bottlenecks.

    https://github.com/jazzband/django-silk

    Django-extensions

    This package is kind of a collection of small scripts that provide commonly needed functionality. It contains a set of management commands, such as shell_plus and runserver_plus, which are improved versions of the default ones, database visualization tools, debugger tags for the templates, abstract model classes, etc.

    https://django-extensions.readthedocs.io

    Django-mail-panel

    Finally, this one is an email panel for django-debug-toolbar that lets you inspect the emails sent while developing your website/webapp. This way you don't have to configure another service to catch the emails, or read the messages on the terminal with django.core.mail.backends.console.EmailBackend, which is not very useful if you are working with HTML templates.

    https://github.com/scuml/django-mail-panel

  • Pixels Camp v3

    Like I did in previous years/editions, this year I participated again in Pixels.camp, a kind of conference plus hackathon. For those who aren't aware, it is one of the biggest (if not the biggest) technology events in Portugal (from a technical perspective, not counting the Web Summit).

    So, as I did in previous editions, I'm gonna leave here a small list of the nicest talks I was able to attend.

    Lockpicking versus IT security

    This one was super interesting. Walter Belgers showed the audience a set of problems in the way locks are made and compared those mistakes with the ones regularly made by software developers.

    At least for me, the most impressive parts of the whole presentation were the demonstrations of the flaws in regular (and high security) locks.

    Talk description here.


    Containers 101

    “Everybody” uses containers nowadays. In this talk the speaker took a step back and went through the history and the major details behind this technology. Then he showed how you could implement part of it yourself using common Linux features and tools.

    Talk description here.


    Static and dynamic analysis of events for threat detection

    This one was a nice overview of Siemens' infrastructure for threat detection, their approaches and the tools used. It was also possible to understand some of the obstacles and challenges a company must address to protect a global infrastructure.

    Talk description here.


    Protecting Crypto exchanges from a new wave of man-in-the-browser attacks

    This presentation used the theme of protecting crypto-currency exchanges but gave lots of good hints on how to improve the security of any website or web application. The second half of the talk was dedicated to a kind of attack called man-in-the-browser and to a demonstration of it. In my opinion, this last part was weaker and I left with the impression it lacked details about the most crucial part of the attack, while spending a lot of time on less important stuff.

    Talk description here.

  • Easy backups with Borg

    One of the oldest and most frequent pieces of advice to people working with computers is “create backups of your stuff”. People know about it, they are sick of hearing it, they even advise other people about it, but a large percentage of them don't do it.

    There are many tools out there to help you fulfill this task, but throughout the years the one I end up relying on the most is definitely “Borg”. It is really easy to use, has good documentation and runs very well on Linux machines.

    Here is how they describe it:

    BorgBackup (short: Borg) is a deduplicating backup program. Optionally, it supports compression and authenticated encryption.

    The main goal of Borg is to provide an efficient and secure way to backup data. The data deduplication technique used makes Borg suitable for daily backups since only changes are stored. The authenticated encryption technique makes it suitable for backups to not fully trusted targets.

    Borg’s Website

    The built-in encryption and de-duplication features are some of its more important selling points.
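
    For reference, a bare-bones session is something like borg init --encryption=repokey /path/to/repo to create a repository, followed by borg create /path/to/repo::archive-name ~/Documents for each backup (the paths and names here are just placeholders).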

    Until recently I've had a hard time recommending it to less technical people, since Borg is mostly available through the command line and can take some work to implement the desired backup “policy”. There is a web-based graphical user interface, but I generally don't like those as a replacement for native desktop applications.

    However, in the last few months I've been testing this GUI frontend for Borg, called Vorta, which I think will do the trick for family and friends who ask me what they can use to back up their data.

    The tool is straightforward to use and supports the majority of Borg's functionality; once you set up the repository you can instruct it to regularly perform your backups and forget about it.

    I’m not gonna describe how to use it, because with a small search on the internet you can quickly find lots of articles with that information.

    The only advice that I would like to leave here about Vorta is related to the encryption and the settings chosen when creating your repository. At least on the version I used, the recommended repokey option will store your passphrase in a local SQLite database in clear-text, which is kind of problematic.

    This seems to be viewed as a feature:

    Fallback to save repo passwords. Only used if no Keyring available.

    Github Repository

    But I could not find the documentation about how to avoid this “fallback”.

  • Channels and Webhooks

    Django is an awesome web framework for python and does a really good job, either for building websites or web APIs using Rest Framework. One area where it usually fell short was dealing with asynchronous functionality, which wasn't its original purpose and wasn't even a thing on the web at the time of its creation.

    The world moved on, web-sockets became a thing and suddenly there was a need to handle persistent connections and to deal with other flows “instead of” (or along with) the traditional request-response scheme.

    In the last few years there have been several cumbersome solutions to integrate web-sockets with Django, and some people even moved to other python solutions (losing many of the goodies) in order to be able to support this real-time functionality. It is not just web-sockets either; it can be any other kind of persistent connection and/or asynchronous protocol, in a microservice architecture for example.

    Of all the alternatives, the most developer friendly seems to be django-channels, since it lets you keep using familiar django design patterns and integrates in a way that feels like it really is part of the framework itself. Last year django-channels saw the release of its second iteration, with a completely different internal design, and it seems to be stable enough to start building cool things with it, so that is what we will do in this post.

    Webhook logger

    In this blog post I'm gonna explore version 2 of the package and evaluate how difficult it can be to implement a simple flow using websockets.

    Most of the tutorials I find on the web about this subject try to demonstrate the capabilities of “channels” by implementing a simple real-time chat solution. For this blog post I will try something different and perhaps more useful, at least for developers.

    I will build a simple service to test and debug webhooks (in reality any type of HTTP request). The functionality is minimal and can be described like this:

    • The user visits the website and is given a unique callback URL
    • All requests sent to that callback URL are displayed on the user's browser in real-time, with all the information about the request.
    • The user can use that URL in any service that sends requests/webhooks as asynchronous notifications.
    • Many people can have the page open and receive the information about incoming requests at the same time.
    • No data is stored; if the user reloads the page they will only see new requests.

    In the end the implementation will not differ much from those chat versions, but at least we will end up with something that can be quite handy.

    Note: The final result can be checked on Github, if you prefer to explore while reading the rest of the article.

    Setting up the Django project

    The basic setup is identical to any other Django project. We just create a new one using django-admin startproject webhook_logger and then create a new app using python manage.py startapp callbacks (in this case I just named the app callbacks).

    Since we will not store any information we can remove all database related stuff and even any other extra functionality that will not be used, such as authentication related middleware. I did this on my repository, but it is completely optional and not in the scope of this small post.

    Installing “django-channels”

    After the project is set up we can add the missing piece, the django-channels package, running pip install channels==2.1.6. Then we need to add it to the installed apps:

    INSTALLED_APPS = [
        "django.contrib.staticfiles", 
        "channels", 
    ]

    For this project we will use Redis as a backend for the channel layer, so we need to also install the channels-redis package and add the required configuration:

    import os

    CHANNEL_LAYERS = {
        "default": {
            "BACKEND": "channels_redis.core.RedisChannelLayer",
            "CONFIG": {"hosts": [(os.environ.get("REDIS_URL", "127.0.0.1"), 6379)]},
        }
    }

    The above snippet assumes you are running a Redis server instance on your machine, but you can configure it using an environment variable.

    Add websocket’s functionality

    When using “django channels” our code will not differ much from a standard django app; we will still have our views, our models, our templates, etc. For the asynchronous interactions and protocols outside the standard HTTP request-response style, we will use a new concept: the Consumer, with its own routing file outside of the default urls.py file.

    So let's add these new files and configurations to our app. First, inside our app, let's create a consumers.py file with the following contents:

    # callbacks/consumers.py
    from channels.generic.websocket import WebsocketConsumer
    from asgiref.sync import async_to_sync
    import json
    
    
    class WebhookConsumer(WebsocketConsumer):
        def connect(self):
            self.callback = self.scope["url_route"]["kwargs"]["uuid"]
            async_to_sync(self.channel_layer.group_add)(self.callback, self.channel_name)
            self.accept()
    
        def disconnect(self, close_code):
            async_to_sync(self.channel_layer.group_discard)(
                self.callback, self.channel_name
            )
    
        def receive(self, text_data):
            # Discard all received data
            pass
    
        def new_request(self, event):
            self.send(text_data=json.dumps(event["data"]))

    Basically we extend the standard WebsocketConsumer and override the standard methods. A consumer instance will be created for each websocket connection that is made to the server. Let me explain a little bit of what is going on in the above snippet:

    • connect – When a new websocket connection is made, we check which callback it wants to receive information about and attach the consumer to the related group (a group is a way to broadcast a message to several consumers).
    • disconnect – As the name suggests, when we lose a connection we remove the “consumer” from the group.
    • receive – This is a standard method for receiving any data sent by the other end of the connection (in this case the browser). Since we do not want to receive any data, let's just discard it.
    • new_request – This is a custom method for handling data about a given request/webhook received by the system. These messages are submitted to the group with the type new_request.

    You might also be a little confused by that async_to_sync function that is imported and used to call the channel_layer methods, but the explanation is simple: since those methods are asynchronous and our consumer is standard synchronous code, we have to execute them synchronously. That function and sync_to_async are two very helpful utilities for dealing with these scenarios; for details about how they work please check this blog post.

    Now that we have a working consumer, we need to take care of the routing so it is accessible to the outside world. Let's add an app level routing.py file:

    # callbacks/routing.py
    from django.conf.urls import url
    
    from .consumers import WebhookConsumer
    
    websocket_urlpatterns = [url(r"^ws/callback/(?P<uuid>[^/]+)/$", WebhookConsumer)]

    Here we use a very similar pattern (like the well known urlpatterns) to link our consumer class to connections to a certain URL. In this case our users can connect to a URL that contains the id (uuid) of the callback that they want to be notified about.

    Finally for our consumer to be available to the public we will need to create a root routing file for our project. It looks like this:

    # <project_name>/routing.py
    from channels.routing import ProtocolTypeRouter, URLRouter
    from callbacks.routing import websocket_urlpatterns
    
    application = ProtocolTypeRouter({"websocket": URLRouter(websocket_urlpatterns)})

    Here we use the ProtocolTypeRouter as the main entry point, so what it does is:

    It lets you dispatch to one of a number of other ASGI applications based on the type value present in the scope. Protocols will define a fixed type value that their scope contains, so you can use this to distinguish between incoming connection types.

    Django Channels Documentation

    We just defined the websocket protocol and used the URLRouter to point to our previously defined websocket urls.
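
    One detail that is easy to miss: channels also needs to know where this root application lives, which is done through the ASGI_APPLICATION setting (using the same <project_name> placeholder as above):

    # <project_name>/settings.py
    ASGI_APPLICATION = "<project_name>.routing.application"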

    The rest of the app

    At this moment we are able to receive new websocket connections and send live data to those clients using the new_request method. However, at the moment we do not have any information to send, since we haven't yet created the endpoints that will receive the requests and forward their data to our consumers.

    For this purpose let's create a simple class based view. It will receive any type of HTTP request (including the webhooks we want to inspect) and forward its data to the consumers that are listening on that specific uuid:

    # callbacks/views.py
    from asgiref.sync import async_to_sync
    from channels.layers import get_channel_layer
    from django.http import HttpResponse
    from django.views import View


    class CallbackView(View):
        def dispatch(self, request, *args, **kwargs):
            channel_layer = get_channel_layer()
            async_to_sync(channel_layer.group_send)(
                kwargs["uuid"], {"type": "new_request", "data": self._request_data(request)}
            )
            return HttpResponse()

    In the above snippet, we get the channel layer, send the request data to the group and return a successful response to the calling entity (let's ignore what the self._request_data(request) call does and assume it returns all the relevant information we need).

    One important piece of information: the value of the type key in the data used for the group_send call is the name of the method that will be called on the websocket consumer we defined earlier.

    Now we just need to expose this on our urls.py file and the core of our system is done.

    # <project_name>/urls.py
    
    from django.urls import path
    from callbacks.views import CallbackView
    
    urlpatterns = [
        path("<uuid>", CallbackView.as_view(), name="callback-submit"),
    ]

    The rest of our application is just standard Django web app development; that part I will not cover in this blog post. You will need to create a page and use JavaScript to connect to the websocket. You can check a working example of this system at the following URL:

    http://webhook-logger.ovalerio.net

    For more details just check the code repository on Github.

    Deploying

    I'm not going to explore the details of this topic, but someone else wrote a pretty straightforward blog post on how to do it for production projects that use Django channels. You can check it here.

    Final thoughts

    With django-channels, building real-time web apps or projects that deal with protocols other than HTTP becomes really simple. I do think it is a great addition to the current ecosystem; it certainly is an option I will consider from now on for these tasks.

    Have you ever used it? Do you have any strong opinions about it? Let me know in the comments section.

    Final Note: Based on recent messages on the mailing list, it seems the project might suspend its development in the future if it doesn't find new maintainers. It would definitely be a shame, since it has a lot of potential. Let's see how it goes.

  • Finding Product Market Fit

    Interesting talk addressing the challenges of finding product market fit, in the form of a case study about Weebly. It's worth the hour you will spend listening, if you are into this kind of stuff.

    The transcript and slides can be found on the Startup School’s website.

  • Some content about remote work

    If you already have read some of my earlier posts, you will know that I currently work remotely and am part of a team that is spread across a few countries.

    For this reason I try to read a lot about the subject, in order to continuously improve the way we work. In this post I just want to share 2 links (one video and one article) that I think can be very helpful for remote teams, even though they address subjects that are common to everyone.

    So here they are:

    Documenting Decisions in a Remote Team

    This one is very important, specifically the parts about making sure everyone is in the loop, explicitly communicating the ownership of decisions and keeping a record that can be consulted in the future.

    Read on Medium

    Building Operating Cadence With Remote Teams

    This is a more general presentation that explains how things work at Zapier (a 100% remote team). One good idea from the video that caught my attention is the small “survey” before meetings, to define the plan and allow people to get more context before the meeting starts.

    Watch the video on Business of Software

  • Experiment: ChainSentinel.co

    The amount and maturity of the tools available to help developers in the process of building new applications and products is often crucial to the success of any given technology, platform or ecosystem.

    Nowadays in this blockchain trend we are witnessing, the front runner and most mature contender is Ethereum, for sure. The quality and quantity of the tools and content (documentation, tutorials, etc) available to developers in order to build on top of it, is miles away from the competition.

    Recently I've been working and experimenting with the NEO blockchain (as you can see in some of my previous posts); a team that I took part in even won an award of merit in their most recent dApp competition (Github repository). During that period we felt the pain of the lack of maturity and documentation that affects this new “ecosystem”.

    Things got better, but there are a few things still missing, such as tools that help you integrate your applications and services with the blockchain, tools to make the developer's life easier and tools to make their dApps more useful, such as the equivalent of Ethereum's web3.js and the Metamask extension, for example.

    Even though you can achieve a lot with NEO's JSON RPC API and by running your own node, I still think things should be easier. So at the last Whitesmith hackathon we tried to address a subset of these pains.

    We put together, in that limited timeframe, a simple and rough service that delivers blockchain events as traditional Webhooks (websockets are planned), to make it easier for everybody to interact in real-time with any smart-contract.

    We are looking for feedback to understand if it is something more developers also need and in that case work towards improving the service. Feel free to take a look at:

    https://chainsentinel.co

  • Django Friday Tips: Links that maintain the current query params

    Basically, when you are building a simple page that displays a list of items with a few filters, you might want to maintain those filters while navigating, for example while browsing through the pages of results.

    Nowadays many of these kinds of pages are rendered client-side, using libraries such as vue and react, so this doesn't pose much of a problem since the state is easily managed and requests are generated according to that state.

    But what if you are building a simple page/website using traditional server-side rendered pages (which for many purposes is totally appropriate)? Generating the pagination this way, while maintaining the currently selected filters (and other query params), might give you more work and trouble than it should.

    So today I'm going to present a quick solution, in the form of a template tag, that can help you easily handle that situation. With a quick search on the Internet you will almost surely find the following answer:

    from django import template

    register = template.Library()

    @register.simple_tag
    def url_replace(request, field, value):
        dict_ = request.GET.copy()
        dict_[field] = value
        return dict_.urlencode()
    

    This is great and works for almost every scenario that comes to mind, but I think it can be improved a little bit. So, like one of the lower-ranked answers suggests, we can change it to handle more than one query parameter while maintaining the others:

    @register.simple_tag(takes_context=True)
    def updated_params(context, **kwargs):
        dict_ = context['request'].GET.copy()
        for k, v in kwargs.items():
            dict_[k] = v
        return dict_.urlencode()

    As you can see, with takes_context we no longer have to repeatedly pass the request object to the template tag and we can give it any number of parameters. Note that for context['request'] to be available, the request context processor must be enabled in your settings.

    The main difference from the suggestion on “Stack Overflow” is that this version allows for repeated query params, because we don't convert the QueryDict to a dict. Now you just need to use it in your templates like this:

    https://example.ovalerio.net?{% updated_params page=2 something='else' %}
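
    For instance, a pagination link that keeps the current filters could look something like this (assuming the standard page_obj is in the context):

    {% if page_obj.has_next %}
      <a href="?{% updated_params page=page_obj.next_page_number %}">Next page</a>
    {% endif %}
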
  • Looking for security issues on your python projects

    In today's post I will introduce a few open-source tools that can help you improve the security of any of your python projects and detect possible vulnerabilities early on.

    These tools are quite well known in the python community and, used together, will provide you with great feedback about common issues and pitfalls.

    Safety and Piprot

    As I discussed some time ago in a post about managing dependencies and the importance of checking them for known issues, in python there is a tool that compares the items in your requirements.txt with a database of known vulnerable versions. It is called safety (repository) and can be used like this:

    safety check --full-report -r requirements.txt

    If you already use pipenv, safety is already incorporated and can be used by running pipenv check (more info here).

    Since the older the dependencies are, the higher the probability of a certain package containing bugs and issues, another great tool that can help you with this is piprot (repository).

    It will check all items in your requirements.txt and tell you how outdated they are.
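
    Running it is as simple as something like piprot requirements.txt (assuming the default options).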

    Bandit

    The next tool in line is bandit, a static analyzer for python built under the OpenStack Security Project. It checks your codebase for common security problems and programming mistakes that might compromise your application.

    It will find cases of hardcoded passwords, bad SSL defaults, usage of eval, weak ciphers, different “injection” possibilities, etc.
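
    In most cases running it is just a matter of pointing it at your code, with something like bandit -r path/to/your/project.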

    It doesn’t require much configuration and you can easily add it to your project. You can find more on the official repository.

    Python Taint

    This last one only applies if you are building a web application, and it requires a little more effort to integrate in your project (at its current state).

    Python Taint (pyt) is a static analyzer that tries to find spots where your code might be vulnerable to common types of problems that affect websites and web apps, such as SQL injection, cross-site scripting (XSS), etc.

    The repository can be found here.

    If you are using Django, after using pyt you might also want to run the built-in manage.py check command (as discussed in a previous post) to verify some specific configurations of the framework present in your project.


  • Finding blockchain challenges to learn and practice

    With “Blockchain” technologies gaining a certain momentum in the last few years, we have witnessed a spread of different implementations with many variations in several aspects, even though they are built over the same principles.

    This field is not easy to get started with, and after taking the time to learn one approach, let's say “smart contract development in ethereum”, there might be less motivation to explore other technologies and networks.

    An approach that is being used by the proponents of these networks is to organize challenges and competitions to try to attract the attention of other developers. With these competitions you can learn, explore, solve certain problems and in the end even win some reward (in the form of cryptocurrency/tokens). Most of these challenges happen online and you don't even have to leave the comfort of your home.

    For those interested, the most difficult part is finding the currently open challenges and competitions, so last week I put together a bare-bones page (plus RSS feed and calendar) aggregating the challenges that I find on the web.

    You can access the page in the following URL: https://blockchain-challenges.ovalerio.net

    I will try to keep it updated. In case you find it useful and discover any unlisted challenge, please submit it there, so it can be added to the list.

    Note 22/04/2020: Given that the website was no longer updated or maintained, I took it off the air.