Author: Gonçalo Valério

  • Creating a NEP5 Token on NEO network

    In this post I will try to introduce a way of creating a digital token that doesn’t rely on the Ethereum network, nor on its ERC20 standard.

    Other than Ethereum, there are several other blockchain implementations that offer the possibility of running smart-contracts and therefore allow the creation of new digital tokens.

    The option that I will address today is NEO, through its NEP5 standard. I will not elaborate on the advantages and disadvantages of the two approaches, since that depends on many factors and on your current situation (perhaps a future post, but there are many other publications that have already done that exercise).

    A key difference for tokens based on NEO is that you have multiple choices regarding how you write your smart-contract, instead of relying just on `Solidity`. Whether this is better or worse is another discussion as well.

    For today’s post I will write everything in Python, and use neo-python to compile the smart-contract and run the network node that will be used to deploy it on the blockchain.


    Brief Introduction to NEO

    In short, NEO is a blockchain project that has been around for a while and intends to build a smart economy over a distributed network. In other words, and in practice, it is a platform with many similarities to Ethereum, with some key differences in certain philosophical aspects and technical decisions.

    It was founded in China in 2014 and has been gaining momentum recently. Outside of its home country there are many people working on this platform; perhaps the most well known is an international community called City of Zion (CoZ), which develops open source tools and organizes all kinds of events and initiatives to support the project.

    As you will see in a later stage of this post, we will use one of CoZ’s tools to connect to the network and to manage “our” smart-contracts.

    The NEP5 Standard

    As with Ethereum, NEO allows you to run smart contracts, so you can create your own tokens on this network/platform and, for example, run an “Initial Coin Offering” (ICO).

    The ecosystem benefits if all these tokens have a common interface, so just like ERC20 (now EIP20) there is the NEP5 standard (the document can be found here). Complying with this common interface is highly advisable, since it will make managing the token with most wallets easier for your users.

    As a small overview, so you can be aware of the simplicity of the proposed interface, your smart-contract should implement at least the following methods: totalSupply, name, symbol, decimals, balanceOf and transfer.

    Of course there are many other things required to make your smart-contract and respective token usable, such as initialization procedures and the configuration of certain parameters, like the total amount of tokens, how many decimals it has, which wallet should be the owner, etc. In this post we will stick to the basics.
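    To make the shape of that interface concrete, here is a rough plain-Python sketch. Note that this is NOT deployable contract code — a real NEP5 contract would use neo-boa and NEO’s storage API instead of an in-memory dict; only the method names come from the standard, everything else is a simplified assumption:

```python
# Plain-Python sketch of the NEP5 interface (NOT deployable contract code).
# A real contract stores balances via NEO's storage API, not a dict.
class NEP5TokenSketch:
    def __init__(self):
        self._name = "Example ICO Token"
        self._symbol = "EIT"
        self._decimals = 8
        self._balances = {}        # address -> integer amount
        self._total_supply = 0

    def name(self):
        return self._name

    def symbol(self):
        return self._symbol

    def decimals(self):
        return self._decimals

    def totalSupply(self):
        return self._total_supply

    def balanceOf(self, address):
        return self._balances.get(address, 0)

    def transfer(self, t_from, t_to, amount):
        # The standard requires rejecting transfers the sender cannot cover
        if amount <= 0 or self._balances.get(t_from, 0) < amount:
            return False
        self._balances[t_from] -= amount
        self._balances[t_to] = self._balances.get(t_to, 0) + amount
        return True
```

    The point is simply that the surface area wallets rely on is tiny: a few read-only attributes plus a guarded transfer.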

    The smart-contract

    As was said before, I will use Python throughout the remainder of this post. Since the examples present in the proposal document are in C#, I will base the rest of the article on the “NEO ICO Template” provided by the Neon Exchange, which is implemented in Python, complies with NEP5 and has all the remaining utilities implemented.

    A detailed post about how to use this template already exists and can be found here. Some sections of that article are already a bit outdated, but it remains very informative nevertheless. To avoid duplicating content, I will provide a lighter version and show how we can make use of the built-in neo-python features instead of calling the smart-contract methods directly, demonstrating how the NEP5 standard can also make your users’ lives easier.

    The Node and deployment

    So let’s start!

    Assuming you already have neo-python installed (if you don’t, you can follow the instructions here), the first thing you should do is launch the `prompt` and open your wallet:

    $ np-prompt -p
    ...
    open wallet {{wallet_path}}

    If you cloned the repository it will be something like:

    $ python neo/bin/prompt -p 
    ...
    open wallet {{wallet_path}}

    Next we will download the example smart-contract code in another terminal window:

    $ git clone git@github.com:neonexchange/neo-ico-template.git

    Before we build the smart-contract, we will need to edit a few settings that will be exclusive to our token. In the {{path_to_smartcontract}}/nex/token.py file, let’s edit a few parameters (there are several others you could change, but let’s stick to the basics here):

    # nex/token.py
    TOKEN_NAME = 'Example ICO Token'
    TOKEN_SYMBOL = 'EIT'
    TOKEN_OWNER = b'{{your_wallet_script_hash}}'
    

    To get the {{your_wallet_script_hash}}, just type wallet in the terminal window running neo-python and you should see something like [I 180506 20:50:38 UserWallet:538] Script hash b'SCRIPT HASH HERE' <class 'bytes'> printed on the terminal. Just copy it to your contract code and you’re done.

    The other options include changing the amount of tokens, how many will be initially “minted” and added to the owner’s wallet, etc.

    Now it is time to “compile” the smart-contract from Python to NEO’s virtual machine “format”. To do that, run the following command in the “prompt”:

    build {{path_to_smart-contract}}/ico_template.py 0710 05 True False

    The extra arguments are:

    • 0710 – Parameter types (07 is a string, 10 is an array).
    • 05 – The return type; in this case it means bytearray.
    • True – It requires storage.
    • False – It doesn’t use dynamic invocation.
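    Those codes come from NEO’s ContractParameterType enumeration. As a quick reference, here is a small illustrative sketch (the table values follow the NEO documentation; the helper function is just my own convenience, not part of any tool):

```python
# NEO ContractParameterType codes (hex strings), as used by the `build`
# command's parameter-type and return-type arguments.
CONTRACT_PARAMETER_TYPES = {
    "Signature": "00",
    "Boolean": "01",
    "Integer": "02",
    "Hash160": "03",
    "Hash256": "04",
    "ByteArray": "05",
    "PublicKey": "06",
    "String": "07",
    "Array": "10",
}

def build_args(params, return_type):
    """Compose the type strings passed to `build`, e.g. ('0710', '05')."""
    param_str = "".join(CONTRACT_PARAMETER_TYPES[p] for p in params)
    return param_str, CONTRACT_PARAMETER_TYPES[return_type]
```

    So “0710” is simply String followed by Array, which matches the Main(operation, args) entry point of the template.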

    An ico_template.avm file should now have been created; we will use this file to deploy our smart-contract to the blockchain. To do so, you will need `GAS` (400 of it, check the values here) and, since this is just a test, a better approach is to use the test network (or even a private network) in order to avoid wasting funds. To deploy the smart-contract you should run:

    import contract {{path_to}}/ico_template.avm 0710 05 True False

    and follow the interactive instructions. After this final step the smart-contract should be ready to use.

    Using the newly created token

    Now that everything is deployed and we are ready to start using our new token, the first thing we need to do is “run” the deploy instruction in order to set up the initial amount of tokens. To deploy, we need to get the hash of the imported contract and invoke it with the deploy parameter.

    contract search {{query for your contract}}
    # grab the "hash" value
    testinvoke {{hash}} deploy []
    

    Then we can add this token to our wallet and interact with it through a friendlier interface than having to manually invoke the contract’s methods like we did with the deploy action. We achieve this with the command: import token {{hash}}.

    At this point you will be able to see your new token balance when you check your wallet, something similar to the following snippet:

        "synced_balances": [
            "[NEO]: 1000.0 ",
            "[NEOGas]: 25099.999 ",
            "[EIT]: 2499980 "
        ],
    

    From now on, to send tokens to someone else, instead of doing something like this:

    testinvoke 0xfd941304d9cf36f31cd141c7c7029d81b1efa4f3 transfer ["AUiPgh9684vjScBDJ5FFsYzBWyJjf6GQ6K","ASfh5fCf6jZ2RxKNoDfN6dN817B9kaNRgY", "10"]

    you have this friendlier interface:

    wallet tkn_send EIT AUiPgh9684vjScBDJ5FFsYzBWyJjf6GQ6K ASfh5fCf6jZ2RxKNoDfN6dN817B9kaNRgY 10

    If you check the help command, you will see that you have a few helper methods to easily interact with your NEP5 token:

    wallet tkn_send {token symbol} {address_from} {address_to} {amount}
    wallet tkn_send_from {token symbol} {address_from} {address_to} {amount}
    wallet tkn_approve {token symbol} {address_from} {address_to} {amount}
    wallet tkn_allowance {token symbol} {address_from} {address_to}
    wallet tkn_mint {token symbol} {mint_to_addr} (--attach-neo={amount}, --attach-gas={amount})
    wallet tkn_register {addr} ({addr}...) (--from-addr={addr})
    

    And with that, this small tutorial is finished. To sum it up, I’ve made a small video going through the whole process:


    Given the popularity of the “blockchain” movement nowadays, we are starting to have several alternative networks that are able to run smart-contracts, some of them more mature than others, but many of them very capable.

    Playing with several of the competing alternatives before jumping into the implementation phase of our solution is important, so we can understand which one will be a better fit for our particular situation.

    If you have been following this field for the last few years, you know it is moving rapidly and many breakthroughs are still happening. Nevertheless, at this moment we already have solid foundations for building decentralized applications on top of the blockchain, and for this purpose NEO is positioning itself as a solid solution to take into account.

  • Django Friday Tips: Adding RSS feeds

    Following my previous posts about RSS and its importance for an open web, this week I will try to show how we can add syndication to our websites and other apps built with Django.

    This post will be divided into two parts. The first one covers the basics:

    • Build an RSS feed based on a given model.
    • Publish the feed.
    • Attach that RSS feed to a given webpage.

    The second part will contain more advanced concepts that will allow subscribers of our page/feed to receive real-time updates without the need to continuously check our feed. It will cover:

    • Adding a Websub / Pubsubhubbub hub to our feed
    • Publishing the new changes/additions to the hub, so they can be sent to subscribers

    So let’s go.

    Part one: Creating the Feed

    The framework already includes tools to handle this stuff, all of them well documented here. Nevertheless, I will do a quick recap and leave here a base example that can be reused in the second part of this post.

    So let’s suppose we have the following models:

    class Author(models.Model):
    
        name = models.CharField(max_length=150)
        created_at = models.DateTimeField(auto_now_add=True)
    
        class Meta:
            verbose_name = "Author"
            verbose_name_plural = "Authors"
    
        def __str__(self):
            return self.name
    
    
    class Article(models.Model):
    
        title = models.CharField(max_length=150)
        author = models.ForeignKey(Author, on_delete=models.CASCADE)
    
        created_at = models.DateTimeField(auto_now_add=True)
        updated_at = models.DateTimeField(auto_now=True)
    
        short_description = models.CharField(max_length=250)
        content = models.TextField()
    
        class Meta:
            verbose_name = "Article"
            verbose_name_plural = "Articles"
    
        def __str__(self):
            return self.title
    

    As you can see, this is for a simple “news” page where certain authors publish articles.

    According to the Django documentation about feeds, generating an RSS feed for that page would require adding the following Feed class to views.py (even though it can be placed anywhere, this file sounds appropriate):

    from django.urls import reverse_lazy
    from django.contrib.syndication.views import Feed
    from django.utils.feedgenerator import Atom1Feed
    
    from .models import Article
    
    
    class ArticlesFeed(Feed):
        title = "All articles feed"
        link = reverse_lazy("articles-list")
        description = "Feed of the last articles published on site X."
    
        def items(self):
            return Article.objects.select_related().order_by("-created_at")[:25]
    
        def item_title(self, item):
            return item.title
    
        def item_author_name(self, item):
            return item.author.name
    
        def item_description(self, item):
            return item.short_description
    
        def item_link(self, item):
            return reverse_lazy('article-details', kwargs={"id": item.pk})
    
    
    class ArticlesAtomFeed(ArticlesFeed):
        feed_type = Atom1Feed
        subtitle = ArticlesFeed.description
    

    In the above snippet, we set some of the feed’s global properties (title, link, description), we define in the items() method which entries will be placed in the feed, and finally we add the methods that retrieve the contents of each entry.

    So far so good. But what is that other class? Besides a standard RSS feed, Django can also generate an equivalent Atom feed; since many people like to provide both, that is what we do there.

    The next step is to add these feeds to our URLs, which is also straightforward:

    urlpatterns = [
        ...
        path('articles/rss', ArticlesFeed(), name="articles-rss"),
        path('articles/atom', ArticlesAtomFeed(), name="articles-atom"),
        ...
    ]
    

    At this moment, if you try to visit one of those URLs, an XML response will be returned containing the feed contents.
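    If you want to sanity-check that response, the items can be extracted with nothing more than the standard library. A minimal sketch (the RSS sample below is made up for illustration; the real response will contain your articles):

```python
import xml.etree.ElementTree as ET

# Minimal hand-written RSS sample, just for illustration; the real
# response from the articles/rss URL will contain your own articles.
SAMPLE_RSS = """<rss version="2.0">
  <channel>
    <title>All articles feed</title>
    <item><title>First article</title><link>https://example.com/articles/1</link></item>
    <item><title>Second article</title><link>https://example.com/articles/2</link></item>
  </channel>
</rss>"""

def item_titles(rss_text):
    """Return the title of every <item> in an RSS 2.0 document."""
    root = ET.fromstring(rss_text)
    return [item.findtext("title") for item in root.iter("item")]
```

    Any feed reader does essentially this parsing step for you, which is the whole point of sticking to the standard formats.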

    So, how can users find out about these feeds, which they can use to get the new contents of our website/app in their reader software?

    That is the final step of this first part. Either we provide the link directly to the user, or we include the feeds in the respective HTML page, using specific tags in the head element, like this:

    <link rel="alternate" type="application/rss+xml" title="{{ rss_feed_title }}" href="{% url 'articles-rss' %}" />
    <link rel="alternate" type="application/atom+xml" title="{{ atom_feed_title }}" href="{% url 'articles-atom' %}" />
    

    And that’s it, the first part is over. We now have a feed and a mechanism for auto-discovery, things that other programs can use to fetch information about the data that was published.

    Part Two: Real-time Updates

    The feed works great; however, readers need to continuously check it for new updates, and this isn’t the ideal scenario. Neither for them, because if they forget to regularly check they will not be aware of the new content, nor for your server, since it will have to handle all of this extra workload.

    Fortunately there is the WebSub protocol (previously known as PubSubHubbub), a “standard” that has been used to deliver notifications to subscribers when there is new content.

    It works by having your server notify an external hub (which handles the subscriptions) of the new content; the hub will then notify all of your subscribers.
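    Under the hood, that notification is just a form-encoded HTTP POST with hub.mode=publish and the feed’s URL. As a rough illustration using only the standard library (django-push’s ping_hub does essentially this for you, so you won’t write this yourself):

```python
from urllib.parse import urlencode
from urllib.request import Request

def build_publish_ping(hub_url, topic_url):
    """Build the WebSub 'publish' notification request for a hub.

    Sending it would just be urllib.request.urlopen(request).
    """
    data = urlencode({"hub.mode": "publish", "hub.url": topic_url}).encode()
    return Request(hub_url, data=data, method="POST")
```

    The hub then fetches the updated feed itself and pushes the new entries to every subscriber.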

    Since this is a common standard, as you might expect there are already some Django packages that can help you with this task. Today we are going to use django-push with https://pubsubhubbub.appspot.com/ as the hub, to keep things simple (but you could/should use another one).

    The first step, as always, is to install the new package:

    $ pip install django-push
    

    And then add the package’s Feed class to our views.py (and use it on our Atom feed):

    from django_push.publisher.feeds import Feed as HubFeed
    
    ...
    
    class ArticlesAtomFeed(ArticlesFeed, HubFeed):
        subtitle = ArticlesFeed.description
    

    The reason I’m only applying this change to the Atom feed is that this package only works with this type of feed, as is explained in the documentation:

    … however its type is forced to be an Atom feed. While some hubs may be compatible with RSS and Atom feeds, the PubSubHubbub specifications encourages the use of Atom feeds.

    This no longer seems to be true for the more recent protocol specifications; however, for this post I will continue with only this type of feed.

    The next step is to set up which hub we will use. In the settings.py file, let’s add the following line:

    PUSH_HUB = 'https://pubsubhubbub.appspot.com'
    

    With this done, if you make a request for your Atom feed, you will notice that the following element was added to the XML response:

    <link href="https://pubsubhubbub.appspot.com" rel="hub"></link>

    Subscribers will use that information to subscribe for notifications on the hub. The last thing we need to do is to tell the hub when new entries/changes are available.

    For that purpose we can use the ping_hub function. In this example, the easiest way to accomplish this task is to override the Article model’s save() method in the models.py file:

    from django.conf import settings
    from django.urls import reverse_lazy
    
    from django_push.publisher import ping_hub
    
    ...
    
    class Article(models.Model):
        ...
        def save(self, *args, **kwargs):
            super().save(*args, **kwargs)
            ping_hub(f"https://{settings.DOMAIN}{reverse_lazy('articles-atom')}")
    

    And that’s it. Our subscribers can now be notified in real-time when there is new content on our website.

  • Upgrade your “neo-python” wallets

    Several weeks ago I started to explore the NEO ecosystem. For those who are not aware, NEO is a blockchain project that, just like Ethereum, aims to create the tools and the platform to execute smart-contracts and create new types of decentralized applications. It has its pros and cons just like any other system, but that is outside the scope of this blog post.

    One of the defining characteristics of this “cryptocurrency” is the ability to develop those smart-contracts in programming languages the user is already familiar with (however, only a small subset of the language is available).

    So I searched for the available SDKs and found the neo-python project, which is wallet software and also a set of tools for developing with the Python programming language. The project is developed by a community of supporters of the NEO ecosystem called City of Zion.

    And now the real topic of the post begins: while learning the features and exploring the codebase, I found an urgent security issue with the way wallets were being encrypted by neo-python.

    Long story short, the method used to protect the wallets wasn’t correctly implemented and allowed an attacker with access to the wallet file to decrypt it without needing the password/passphrase (more details here).

    Fortunately it is an actively developed project and the team responsible for it was quick to acknowledge the problem and merge the fix I proposed in a pull request. The fix is now present in the newer versions of the project, and it forces users to reset the security features of their wallets (check this video for more details, from minute 8 up to 10).

    Now in this post I would like to leave my recommendation on how to proceed after re-encrypting the wallet, because even though the issue is fixed, your private keys might have been compromised before you applied the patch. If you are a user and haven’t noticed anything yet, the most probable scenario is that you weren’t compromised, since the most immediate thing an attacker could/would do is steal your funds.

    Nevertheless, there is always that possibility, and to avoid any bad surprises you definitely should:

    1. Properly encrypt your wallet using the reencrypt_wallet.py script.
    2. Check that the newly generated wallet is working properly.
    3. Then delete the old wallet.
    4. Create a new wallet.
    5. Transfer your funds to the new wallet.

    Steps 4 and 5 are necessary because the fix protects your master key but doesn’t change it, and as I previously said, if a copy of your vulnerable wallet exists (created by you or by an attacker) your funds are still accessible. So don’t forget to go through them.

    Other than this, the project is very interesting and, while still immature, it has been fun to work with, so I will keep contributing some improvements in the near future.


  • Decentralized alternatives on the Internet

    This post follows a recent discussion about decentralization at the first Madeira Tech Meetup of 2018. I have already touched on these subjects a couple of times here on the blog (for example, the one about ZeroNet and the one about Mastodon), but this time I will try to summarize the key points of the issue and give a list of awesome projects and applications that will help you reduce your dependency on several centralized services.

    Nowadays many of the most popular web services and applications that “we” use are centralized, meaning that the application and its data only exist on the machines of a single entity. As soon as this was considered normal, it led to another phenomenon, in which these “apps” close down any possible interaction that is not under their control, to keep the user locked in. A good example is social media and their chat applications: closing the doors to open protocols and any form of interoperability allows the network effect to keep the users inside (why can’t someone with an account on a certain Google chat application talk with someone who uses Facebook Messenger? You know, like email works).

    The Internet infrastructure at its core is decentralized; it was initially designed this way in order to be robust and continue working when there is a problem with any member of the network. For many years the applications built on top of it followed some of these principles and were happy to have a certain level of interoperability with other software, because it provided value. From the mid-2000s to this day things changed, and many services and applications are as closed as they can be (the so-called walled gardens), because there are economic incentives to behave this way, making the users and their data “hostage”.

    Fortunately, the new rules of the European Union will start being implemented and enforced in the next couple of months. Besides giving users back control of their data, they require that users must be able to export it in a common format, which will allow them to move to other competitors more easily.

    However, this new right still threatens some comfortable walled-garden models. For example, social networks, exercise-tracking apps, and photo services will have to allow users to export their posts, rides, and photos in a common format. Smart competitors will build upload tools that recognize these formats; GDPR might therefore help to bridge the strategic moats of incumbents.

    Source: Techies Guide to GDPR

    But it doesn’t solve all the centralization problems. So a better (and complementary) approach is to use applications and services that use open protocols and were built with interoperability in mind. If they are open source, if you can self-host them, or if they are completely distributed, even better.

    So, to give you a head start, here is a list of applications that address common uses of the Internet, are open source and decentralized, and can easily replace popular software you use nowadays:

    • Matrix – An open chat protocol and a collection of ready-to-use reference implementations that is an alternative to any chat application; you can use an existing server or launch your own.
    • Mastodon – A Twitter alternative. Together with GNU Social and other software, they form an open network where, regardless of which service or software type you use, you are able to interact with everyone. You can also host your own server/instance.
    • IPFS – a content-based addressing protocol (with reference implementation) that tries to make the web more distributed and less reliant on central servers. Very useful to publish public data from your own computer.
    • ZeroNet – an alternative distributed network where websites are served by the peers in the network and not central servers. One of the nice things here is that you can easily clone and publish websites using standard templates.
    • PeerTube – The name gives you a hint: it is a YouTube replacement that doesn’t depend on any central server. It is federated, and the video is distributed to the users via WebTorrent.
    • Syncthing – A file synchronization application (alternative to Dropbox) that lets you keep all your data up to date across all your devices and doesn’t rely on central servers.
    • OwnCloud/NextCloud – Also alternatives to Dropbox and similar services, where the user can self-host a server or use one available on the Internet. These servers are federated, which means you can share and work on the same folders even with users from another server or service.

    Like these there are many others available. I just highlighted the ones above because I think they are ready/stable and can “fight” with the existing centralized services.

    The usage of decentralized applications makes the Internet more democratic, more resistant to censorship and more resilient; it makes life harder for authoritarian entities, as well as others that try to centrally manipulate public perception (by not automatically adding users to the famous “filter bubbles”), and reduces the probability of massive companies maintaining monopolies in certain markets.

    At first it might take a little bit of work, but the benefits are there and will be collected later.

  • Observations on remote work

    A few days ago I noticed that I’ve been working fully remotely for more than 2 years. To be honest, this now feels natural to me and not awkward at all, as some might think at the beginning or when they are introduced to the concept.

    Over this period, even though it was not my first experience (I had already done it for a couple of months before), one starts noticing what works and what doesn’t, how to deal with the shortcomings of the situation and how to make the most of its advantages.

    In this post I want to explore what I found out from my personal experience. There are already lots of articles and blog posts detailing strategies/tips on how to improve your (or your team’s) productivity while working remotely and describing the daily life of many remote workers. Instead of enumerating everything that has already been written, I will focus on some aspects which proved to have a huge impact.

    All or almost nothing

    This is a crucial one. With the exception of some edge cases, the most common scenario is that you need to interact and work with other people, so remote work will only be effective and achieve its true potential if everyone accepts that not every element of the team is present in the same building.

    The processes and all the communication channels should be available to every member of the team in the same way. This means that it should resemble a scenario where all members work remotely. We know people talk in person; however, work-related discussions, memos, presentations and any other kind of activity should be available to all.

    This way we don’t create a culture where the team is divided between first and second class citizens. The only way to maximize the output of the team is to make sure everyone can contribute with 100% of their skills. For that to happen, adequate processes and a matching mindset are required.

    Tools matter

    To build on the previous topic, one important issue is inadequate tooling. We need to remove friction and make sure working on a team that is spread across multiple locations requires no more effort and doesn’t cause more stress than it normally would in any other situation.

    Good tools are essential to make it happen. As an example, a common scenario is a bad video conference tool that is a true pain to work with, making people lose time at the beginning of the conference call because the connection can’t be established or nobody is able to hear the people on the other end. Merge that together with the image/sound constantly freezing and the frustration levels go through the roof.

    So good tools should make communication, data sharing and collaboration fluid and effortless, helping and not getting in the way. They should adapt to this (remote) environment and privilege this new way of working over the “standard”/local one (this sometimes requires some adjustments).

    Make the progress visible

    One of the issues people often complain about regarding remote work is the attitude of colleagues/managers who aren’t familiar with this way of doing things, struggling with the notion of not seeing you there at your desk. In many places, what counts is the time spent in your chair and not the work you deliver.

    On the other side, remote workers also struggle to be kept in the loop; there are many conversations that are never written down or recorded, so they aren’t able to be part of them.

    It is very important to fix this disconnection, and based on the first point (“All or almost nothing”), the complete solution requires an effort from both parties. They should make sure that the progress being made is visible to everyone, keeping the whole team in the loop and able to participate. It can be a log, some status updates, sending some previews or even asking for feedback regularly, as long as it is visible and easily accessible. People will be able to discuss the most recent progress and everyone will know what is going on. It might look like some extra overhead, but it makes all the difference.

    Final notes

    As we can see, working remotely requires a joint effort from everybody involved and is not immune to certain kinds of problems/challenges (you can read more in this blog post), but if handled correctly it can provide serious improvements and alternatives to a given organization (of course there are jobs that can’t be done remotely, but you get the point). At least at this point, I think the benefits generally outweigh the drawbacks.

  • Getting started with Blockchain and related technologies

    Cryptocurrencies have been all the rage in the last few months, with the price of Bitcoin and other altcoins skyrocketing and achieving new all-time highs.

    This is very interesting, and since there is the idea that the technology supporting all of these coins is revolutionary, with some saying it can be used to solve many problems that we currently struggle with (or make things more transparent), I needed to take a look at how it works internally.

    For anyone trying to understand and work with “blockchain” and its new style of “consensus”, playing with and just using the coins and wallets is not enough. All the things that happen in the background are complex and require an understanding of several mathematical properties.

    So where to start?

    There are many books (and I’ve read a few of them), but some only approach the technology from a high level and from a user perspective. Others are good and complete, but very dense.

    I took another approach to complete my understanding of the internals of the technology. I joined and recently completed the online course “Bitcoin and Cryptocurrency Technologies” from Princeton University, available on the Coursera platform.

    The content, even though the course dates from 2014/2015, is solid. It covers the basic concepts and the internals of Bitcoin very well, foundational things that do not change that often. Content isn’t just thrown at you; they build up your knowledge on the matter.

    The only issue, in my humble opinion, is the timing of the first graded assignment, which is given to you before the lessons on those concepts are taught (but you can watch those lessons and come back to the assignment later). On the remaining assignments, some context and a better explanation of the terms in the code might help, but eventually you end up understanding what they mean. Otherwise, the discussion forums will guide you in the right direction.

    Overall, I truly recommend this course (which is free at the moment) if you want to get an initial but deeper knowledge of the system, and not just the user perspective.

  • Making the most of your dusty Raspberry Pi

    The Raspberry Pi was one of those projects that were a “godsend” (just like the Arduino). A cheap and very small computer that lets the little ones and also older folks learn, explore, prototype and build lots of things that previously would be inaccessible to many.

    So in this post I want to give you some tips and ideas of things you could do with that Raspberry Pi you have unused in the drawer. The main objective is to set it up once, configure some services and utilities that are useful in several areas.

    Protect the entire household from Ads

    One awesome project you can easily install on your device is Pi-hole. You can set it up with a single command and, by changing the DNS entries of your router’s DHCP settings, have all of the devices on your network automatically “protected” against those ad networks that try to track your Internet usage.
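The single command in question is the one-line installer published on the Pi-hole project’s site (it is worth double-checking the command there before piping anything into bash, since the URL and script may change over time):

```shell
# Download and run the official Pi-hole installer
# (always review remote scripts before piping them into bash)
curl -sSL https://install.pi-hole.net | bash
```

After the installer finishes, point your router’s DHCP DNS setting at the Pi’s IP address so every device on the network uses Pi-hole as its resolver.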

    Protect your communications when not at home

    Another tool that is very easy to set up and can help you when you are abroad or using insecure networks is the PiVPN project. You will be able to create your own virtual private network that routes your communications through a network of your choosing (your home network, for example), protecting the traffic and also letting you access the devices on that network.

    Extra Tip: Use this together with the Pi-hole’s DNS server to block the advertisements and tracking requests anywhere you go, by adding:

    push "dhcp-option DNS <IP OF YOUR PI'S TUN0 INTERFACE>"

    to your “/etc/openvpn/server.conf” (a restart is needed) and changing the “Interface Listening Behavior” to “Listen on all interfaces” in Pi-hole’s administration interface.
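As a sketch of those two steps (the 10.8.0.1 address below is only the common OpenVPN default, used here as a stand-in for your Pi’s actual tun0 IP, and the restart command assumes a systemd-based system such as Raspbian):

```shell
# Tell connecting VPN clients to use Pi-hole as their DNS server.
# Replace 10.8.0.1 with the real IP of your Pi's tun0 interface.
echo 'push "dhcp-option DNS 10.8.0.1"' | sudo tee -a /etc/openvpn/server.conf

# Restart OpenVPN so the new option takes effect
sudo systemctl restart openvpn
```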

    Check who’s at Home or at the office

    This one is a little more tricky to set up, but it might be worth it when you need to know whether somebody is at home or at the office. The Pi-sensor-free-presence-detector makes use of your local network to check which devices are connected, which you can then link to a person based on who owns each device.

    Other Stuff

    For those who want to take things one step further and take control of many of the usual web applications, you can take a look at the FreedomBox project. You can install it on your Raspberry Pi and configure many applications on your own server.

  • Slack is not a good fit for community chat

    There, I said it … and a lot of other people said it before me (here, here, here, here, … and the list goes on and on).

    Before I proceed with my argument, I would like to say that Slack is a great product (even if the client consumes lots of memory) and works very well for closed teams and groups of people, since that was its original purpose. I use it every day at work and it does the job.

    However, I keep seeing it being used as the chat tool for many open online communities, and it is not fit for that purpose. Many times these communities have to resort to a bunch of hacks in order to make sure it meets the minimum needs of an open chat application.

    The main issues I see are:

    • It doesn’t let me participate without a previous individual invitation (hacks or manual labor are often used)
    • It doesn’t let me search the conversations (for previously discussed solutions) without registering first on the community.
    • For small and occasional intervention, I need to create an account.
    • Search and history limitations of the free account (this can be a problem for bigger communities)

    Of course, for some cases it could be good enough, but as a trend the final outcome is not great.

    There are many alternatives, and I will not address how an old protocol such as IRC is still a good choice for this use case, since there are lots of complaints about how it is difficult, not pretty enough or not full of little perks (such as previews, reactions, emojis, etc.).

    So which software/app do I think that could be used instead of slack? Let me go one by one:

    Gitter: Contrary to Slack, Gitter was built with this purpose in mind. It overcomes the history and search limitations of Slack, its channels are open to read, it is easy for external people to join in (no hacks) and it is open source, letting you continue using the same software even if the hosted service closes down.

    Matrix protocol: Matrix is an open chat protocol that works in a federated way (similar to email). It has many client applications (web, desktop and mobile) and several implementations of the server-side software. You can run your own server instance or host your channels on an existing one, you can set up bridges so that users on IRC, Slack, Gitter and others can interact in the same room, and it doesn’t suffer from any of the described issues of Slack.

    Discord: Discord is very similar to Slack, but many of the problems, like the limits and the hacks required to access the chat, are solved. Even though it is not as open as Matrix or IRC, you can generate a link where everyone will be able to join (even without creating a new account), they will be able to search the entire history and they can use the browser, making it a better fit for the use case of an open community.

    There are plenty of other choices for community chat, and this is just a subset. Feel free to pick any of them, but please avoid using Slack; it is not made for that purpose.

  • Managing Secrets With Vault

    I’ve been looking into this area of how to handle and manage a large quantity of secrets and users for quite a while (old post), because when an organization or infrastructure grows, the number of “secrets” required for authentication and authorization increases as well. It is at this stage that bad practices (that are no more than shortcuts), such as reusing credentials, storing them in less appropriate ways or no longer invalidating those that are no longer required, start becoming problematic.

    Yesterday at “Madeira Tech Meetup” I gave a brief introduction to this issue and explored ways to overcome it, which included a quick and basic explanation of Vault and a demo of a common use case.

    You can find the slides of the presentation here, and if you have any suggestion or something you would like to discuss about it, feel free to comment or reach out through any of the contact mediums I provided.

  • How to work asynchronously

    Just watched a talk by Jason Fried (available at the bottom of the post) about how you should take as much care of how your company works as you take of your products. More than the actual name or topic of the presentation, what I really liked to see were some of the processes and the mindset Basecamp has, which are really well thought out to fit the reality of a company that works mostly asynchronously. You can read about it in some books, but seeing it in action brings a whole new level of clarity.

    It may sound easy to apply these concepts in our day to day, but it isn’t. I know this because after 2 years of working remotely, for a company that is remote friendly and does the best it can to make the work more asynchronous, we still fall for some of the bad habits described in the talk, such as relying too much on the chat app.

    Either way, I recommend it, I learned a lot and hope you enjoy it.

    Your Company Should be Your Best Product | Jason Fried, Basecamp | BoS USA 2016

  • Search open tabs in Firefox

    A few days ago it was announced that the next versions of Firefox will have huge improvements in performance and resource usage. This is great, because browsers nowadays consume huge amounts of resources, making it harder for people with older or weaker machines to use the computer without pulling their hair out.

    I’m not a huge fan of having lots of open tabs, but I know many people who like to work this way, having dozens of them open. One question I always ask is how they find the tab they are looking for when just the favicons are shown on the tabs.

    It looks like Firefox makes it easy to handle this situation without any extra extension: just select the search bar in any tab (Ctrl+L) and then type “% <search query>” to search the open tabs.

    I hope this is useful to you.

  • Keep your dependencies under check

    Nowadays most software projects of a “decent size” rely on many software dependencies, or in other words: libraries and tools developed by other people, which are usually under constant change.

    The reasons for this are clear and range from implementing common patterns and avoiding repeating ourselves, to accelerating development, to using mature implementations and avoiding some pitfalls. Sometimes projects rely on way too many dependencies for simple things (remember the left-pad fiasco?).

    Once these dependencies are loaded, integrated and working as expected, people often forget they are there, and many times they stay untouched for long periods of time. Even when newer versions are released, unless something starts breaking, nobody remembers to keep them up to date, a situation that might lead to security vulnerabilities, not in your code but in the code your project depends on.

    Of course I’m not telling you anything new. What I intend to achieve with this post is to show that there are many tools available to help you fight this problem. When you integrate them into your CI or another step of your development process, they will keep you informed about which dependencies have known security vulnerabilities and what you should upgrade as soon as possible.

    The majority of programming languages have this sort of tool, so a little search should help you find the one that best suits your stack. Below are some examples:

    As an example here is what I needed to do in order to check the dependencies of Hawkpost (an open-source project that I’m deeply involved with at the moment):

    $ safety check --full-report -r requirements/requirements.txt
    safety report
    ---
    No known security vulnerabilities found

    For most of these tools the basic check is this simple to do and in the long run it might save you from some headaches.
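To give one more example outside the Python world: for Node.js projects, npm itself ships with a similar check (built in since npm 6), so no extra tool is needed:

```shell
# List known vulnerabilities in the project's installed dependencies
npm audit

# In CI, exit with an error only when issues of at least "high" severity exist
npm audit --audit-level=high
```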

    Update (26-06-2018): Added cargo-audit to the list

  • rip!

    It is almost a decade old, but this documentary film is still worth viewing. It addresses the same issues as the Everything is a remix series (that I also shared in previous posts, here, here and here).

  • Federated Tweets, or Toots

    Recently there has been a big fuss about “Mastodon“, an open-source project that is very similar to Twitter. The biggest difference is that it is federated. So what does that mean?

    It means that it works like “email”: there are several providers (called instances) where you can create an account (you can set up your own server if you desire), and accounts from different providers can communicate with each other, instead of all the information living in just one silo.

    Of course for someone that is in favor of an open web this is a really important “feature”.

    Another big plus is that the wheel wasn’t reinvented: this network is interoperable with the existing “GNU Social” providers (it uses the same protocol), so you can communicate and interact with people that have an account on an instance running that software. It can be seen as 2 providers of the same network running different software packages (one in PHP, the other in Ruby) but talking the same language over the network.

    I haven’t tested it much yet, but given that it is a push for a solution that is not centralized (which is a rare thing nowadays), I think it is a small step in the right direction. So I’ve set up an individual instance for myself, where I will regularly publish links to posts/articles/pages that I find interesting. Feel free to follow it at https://s.ovalerio.net and, if you know someone worth following on this network, let me know.

    Here are a few links with more information:

    List of instances where you can create an account

  • The wave of bloated desktop apps

    Some time ago I complained about a mindset that was producing a form of “bloated” software (software that uses a disproportionate amount of resources to accomplish “simple” tasks). Later I even posted an example of a popular chat application consuming a ridiculous amount of memory, given its purpose.

    One example of this phenomenon is the recent explosion of Electron desktop applications. Today the following post reached the front page of Hacker News, and it makes a good argument:

    Electron is flash for the desktop

    https://josephg.com/blog/electron-is-flash-for-the-desktop/