Category: Random Bits

  • Optimizing mastodon for a single user

    I’ve been participating in the Fediverse through my own Mastodon instance since 2017.

    What started as an experiment to test new things, focused on exploring decentralized and federated alternatives for communicating over the internet, stuck. At the end of 2024, I’m still there.

    The rhetoric on this network is that you should find an instance with a community that you like and start from there. At the time, I thought that having full control over what I publish was a more interesting approach.

    Nowadays, there are multiple alternative software implementations that you can use to join the network (this blog is a recent example) that can be more appropriate for distinct use cases. At the time, the obvious choice was Mastodon in single-user mode, but oh boy, it is heavy.

    Just to give you a glimpse, the container image surpasses 1 GB in size, and you must run at least 3 of those, plus a PostgreSQL database, a Redis broker and, optionally, Elasticsearch.

    For a multi-user instance, this might make total sense, but for a low-traffic, single-user service, it is too much overhead and can get expensive.

    A more lightweight implementation would fit my needs much better, but just thinking about the migration process gives me cold feet. I’m also very used to the apps I ended up using to interact with my instance, which might be specific to Mastodon’s API.

    So, I decided to go in a different direction and look for the available configurations that would allow me to reduce the weight of the software on my small machine; in other words, run Mastodon on the smallest machine possible.

    My config

    To achieve this goal, and after taking a closer look at the available options, these are the settings that I ended up changing over time and that produced some improvements (collected in a single snippet after the list):

    • ES_ENABLED=false — I don’t need advanced search capabilities, so I don’t need to run this extra piece of software. This was a decision I made on day 1.
    • STREAMING_CLUSTER_NUM=1 — This is an old setting that manages the number of processes that deal with web sockets and the events that are sent to the frontend. For 1 user, we don’t need more than one. In recent versions, this setting was removed, and the value is always one.
    • SIDEKIQ_CONCURRENCY=1 — Processing background tasks in a timely fashion is fundamental for how Mastodon works, but for an instance with a single user, 1 or 2 workers should be more than enough. The default value is 5, I’ve used 2 for years, but 1 should be enough.
    • WEB_CONCURRENCY=1 — Dealing with a low volume of requests doesn’t require many workers, but having at least some concurrency is important. We can achieve that with threads, so we can keep the number of processes at 1.
    • MAX_THREADS=4 — The default is 5, I reduced it to 4, and perhaps I can go even further, but I don’t think I would have any significant gains.
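
    Put together, and assuming the usual .env.production layout, these tweaks boil down to a handful of lines (a sketch, not a complete configuration file):

    # .env.production (excerpt): only the values discussed above
    ES_ENABLED=false
    SIDEKIQ_CONCURRENCY=1
    WEB_CONCURRENCY=1
    MAX_THREADS=4
    # Only relevant on older Mastodon versions (removed in recent releases)
    STREAMING_CLUSTER_NUM=1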

    To save some disk space, I also periodically remove old content from users that live on other servers. I do that in two ways:

    • Changed the media cache retention period to 2 days and the user archive retention period to 7 days, in Administration > Server Settings > Content Retention.
    • Periodically run the tootctl media remove and tootctl preview_cards remove commands (a cron sketch is shown below).
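
    A minimal sketch of how the second item could be automated with cron, assuming a non-containerized setup where Mastodon lives in /home/mastodon/live (paths and schedule are just examples):

    # /etc/cron.d/mastodon-cleanup (sketch): run the cleanup tasks weekly as the mastodon user
    0 4 * * 0   mastodon   cd /home/mastodon/live && RAILS_ENV=production bin/tootctl media remove
    30 4 * * 0  mastodon   cd /home/mastodon/live && RAILS_ENV=production bin/tootctl preview_cards remove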

    Result

    In the end, I was able to reduce the resources used by my instance and avoid many of the alerts my monitoring tools were throwing all the time. However, I wasn’t able to downsize my machine and reduce my costs.

    It still requires at least 2 GB of RAM to run well, although with these changes there’s much more breathing room.

    If there is a lesson to be learned or a recommendation to be made in this post, it is that if you want to participate in the Fediverse while having complete control, you should opt for a lighter implementation.

    Do you know any other quick tips that I could try to optimize my instance further? Let me know.

  • An attempt at creating a DIY baby monitor

    Occasionally, “modern” baby monitors hit the news, but not for the best reasons. Sometimes for a lack of security, other times for exploitative practices, and so on.

    This also applies to many other categories of products that we can include in the so-called “Internet of Things”.

    After reading a recent article about the topic, some comments with clever approaches made me wonder: how hard would it be to build a simple and reasonably secure alternative with common tools (DIY style)?

    Starting with the approach above, which uses ffmpeg and ffplay, I will describe how far I was able to go with:

    • 1 Raspberry Pi
    • 1 Standard webcam (with microphone)
    • (Optional) 1 Wi-Fi USB dongle, if the board doesn’t include one.

    The goal of the solution: non-technical people can easily and securely check on their baby or their pet from anywhere. They should be able to move the “monitor” around (unplug it and plug it in again), and it should start working right away once turned on (and a known Wi-Fi connection is available).

    Figuring it out

    After playing a bit with the solution described in the mentioned comment, I found that while it seems ok for a quick setup, it falls short in 2 points:

    • Only works for one viewer at a time
    • The viewer IP address needs to be set on the monitor, which means a config change and a restart are required every time you change the device you are using to view the video stream.

    This falls short of my main goal, so a different approach is required.

    After additional research, I tried several alternatives that can be found with a quick online search, including uv4l with WebRTC (not quite what I’m looking for), motion (no sound, but perhaps it can be used together with the final solution for extra functionality) and others. None of them were easy to set up or could achieve the defined goal.

    Later, I found another blog post describing how the author achieved the goal using picam. However, that software only supports the Raspberry Pi camera module, a hardware limitation that puts it out of scope. The same goes for libcamera.

    In the end, the easiest solution was to turn to vlc, the well-known media player. It already comes installed on Raspberry Pi OS.

    I checked the documentation, which already provides a great deal of information, and started tinkering with it. It turned out to be a good fit; however, I couldn’t get it working exactly as I wished. Fortunately, I’m not the first to try this, and someone else already wrote a great answer to a similar question.

    With everything set, the following command does exactly what is needed:

    cvlc -vvv v4l2:///dev/video0 :input-slave=alsa://hw:1,0 --sout '#transcode{deinterlace,vcodec=mpgv,acodec=mpga,ab=128,channels=2,samplerate=44100,threads=4,audio-sync=1}:standard{access=http,mux=ts,mime=video/ts,dst=device.local.ip.address:8080}'

    Then you just need to open a network stream, using VLC or other player, on any of your devices, using the following URL:

    http://device.local.ip.address:8080

    Done, phase one complete. It has a slight delay (just a couple of seconds), the quality is very reasonable, and we kept it simple (one command).

    Making it persistent

    Now 2 things are missing:

    1. Set and start everything once the device is turned on
    2. Be able to access it from anywhere (without exposing it to the internet and with proper access controls).

    To address the first, we can rely on systemd. The following “service” config will start the stream when the device is turned on and connected to a Wi-Fi network:

    [Unit]
    Description=Cam Stream
    Wants=network-online.target
    After=network-online.target
    
    [Service]
    Type=simple
    User=pi
    WorkingDirectory=/home/pi
    ExecStart=/usr/bin/bash vlc_streaming.sh
    RestartSec=5
    Restart=always
    
    [Install]
    WantedBy=multi-user.target

    This file should be placed in /etc/systemd/system/ and then enabled using the systemctl command. The vlc_streaming.sh file contains the single cvlc command I mentioned above.
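
    Assuming the unit file was saved as cam-stream.service (the name is arbitrary), placing and enabling it would look something like this:

    sudo cp cam-stream.service /etc/systemd/system/
    sudo systemctl daemon-reload
    sudo systemctl enable --now cam-stream.service

    # Confirm the stream process started correctly
    systemctl status cam-stream.service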

    Making it available

    To address the second point (be able to access it from anywhere), I opted to add the device to a Tailscale network and turn on MagicDNS.

    This way, I can authorize which devices on the VPN can access the “cam”. Once those devices connect to the network, regardless of where they are, they can access the stream. Tailscale will handle access control to this device and will encrypt all connections to it.

    A simple ACL rule for defining the right access controls could be:

    {
      "acls": [
        {"action": "accept", "src": ["tag:camviewer"], "dst": ["tag:cam:8080"]},
        ...
      ]
      ...
    }

    Where tag:camviewer represents the devices allowed to access the cam stream.

    Regarding the configuration described in the previous sections, a few changes might be required.

    The first is replacing dst=device.local.ip.address:8080 with the new interface address created by Tailscale (dst=device.tailscale.ip.address:8080), so the stream is only available on that protected network.

    You might want to edit the systemd service to only start after Tailscale is up and running:

    [Unit]
    Description=Baby Cam Streaming
    Wants=tailscaled.service
    After=network-online.target tailscaled.service sys-subsystem-net-devices-tailscale0.device
    
    ...

    Then on your other devices, you would use the following URL to connect:

    http://<device_name>:8080

    Note: Due to some timing-related issues, you might need to prepend a sleep of a couple of seconds to the vlc_streaming.sh file.
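
    For reference, the whole vlc_streaming.sh can then be a small sketch like the one below (the sleep duration may need tuning and device.tailscale.ip.address is a placeholder for the address assigned by Tailscale):

    #!/usr/bin/env bash
    # Give the network and the Tailscale interface a moment to come up
    sleep 10

    # Same cvlc command as before, now bound to the Tailscale address
    cvlc -vvv v4l2:///dev/video0 :input-slave=alsa://hw:1,0 \
      --sout '#transcode{deinterlace,vcodec=mpgv,acodec=mpga,ab=128,channels=2,samplerate=44100,threads=4,audio-sync=1}:standard{access=http,mux=ts,mime=video/ts,dst=device.tailscale.ip.address:8080}'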

    Wrapping up

    I want to finish this post by going back to the question I asked myself before starting this exploration: “how hard would it be to build something like this?”

    I have to say that the solutions are not obvious. But it also isn’t that hard for a “power user” or for someone with basic technical knowledge to leverage existing tools and build a system that works reasonably well.

    The solution I ended up assembling can look complicated, but it is actually simple, and I quite like it.

    I will just leave it here as a future reference, since I might need to return to it someday, or it might be helpful to someone else. If you think there is something wrong, vulnerable or missing, please let me know in the comments.

  • Playing with maps

    I’ve always been astonished by how well mapping apps work. Sure, when Google Maps was first released the sense of wonder was much greater than it is nowadays; nevertheless, it is still impressive.

    The number of situations when/where this kind of software becomes handy is huge, from the well-known GPS guides to even games (remember “Pongo”?).

    Today it is even easier to work and play with maps, by using free sources such as OpenStreetMap or by using many existing APIs. Even without any programming knowledge, you can build outstanding maps and experiences with services like Felt.

    Some time ago, while reading a random blog post on the web, I learned about a tool called “prettymaps”, which lets you generate beautiful images based on the map of a given location.

    It has been lost in my notebook since then. Today I decided to give it a try and generate a few renders of the capital city of Madeira Island. Here are the results:

    Funchal downtown, rendered using pretty maps.
    Funchal, as a combination of 4 rendered images of 4 different localities
    Render of a small part of Funchal

    Unfortunately, I wasn’t able to include the ocean. I tried multiple approaches and downloaded files with that information, but ultimately the results were always the same. In the package’s issue tracker on GitHub there are multiple people facing the same issue; perhaps a future version will include a fix for this.

  • Cleaning my follow list using “jacanaoesta”

    Last year we saw the rise of the Fediverse, mostly because of a series of external events that ended up pushing many people to try alternatives to their centralized platform of choice.

    Mastodon was clearly the software component that got the most attention and has been under the spotlight in the last few months. It wasn’t launched last year; in fact, Mastodon instances (servers) have been online since 2016, managed by its developers and other enthusiasts.

    I’ve been running my own instance since 2017 and, since then, I’ve seen people come and go. I started following many of them, but some are no longer active. This brings us to the real topic of this post.

    Since I couldn’t find a place in the Mastodon interface that would allow me to check which users I follow are inactive, I decided to build a small tool for that. It also served as a nice exercise to put my Rust skills into practice (a language that I’m trying to slowly learn during my spare time).

    The user just needs to specify the instance and API key, plus the number of days for an account to be considered inactive if the default (180 days) is not reasonable. Then the tool will print all the accounts you follow that fit that criterion.

    Find people that no longer are active in your Mastodon follow list.
    
    Usage: jacanaoesta [OPTIONS] <instance>
    
    Arguments:
      <instance>
    
    Options:
      -k, --api-key      Ask for API key
      -d, --days <days>  Days since last status to consider inactive [default: 180]
      -h, --help         Print help information
      -V, --version      Print version information

    And this is an example of the expected output:

    Paste API Key here:
    Found 171 users. Checking...
    veracrypt (https://mastodon.social/@veracrypt) seems to be inactive
    ...
    fsf (https://status.fsf.org/fsf) seems to be inactive
    38 of them seem to be inactive for at least 180 days

    Without the -k option, the program tries to grab the API key from the environment variables instead of asking the user for it.
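
    For illustration, invoking it could look like the sketch below; the environment variable name is hypothetical (check the project’s README for the one the tool actually reads) and the instance URL is just an example:

    # Ask for the API key interactively and use a 90-day threshold
    jacanaoesta -k -d 90 https://my-instance.example

    # Or provide the key through the environment (variable name is a placeholder)
    JACANAOESTA_API_KEY="..." jacanaoesta https://my-instance.example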

    Problem solved. If you want or need to give it a try, the code and binaries can be found here: https://github.com/dethos/jacanaoesta

    Note: After publishing this tool, someone brought to my attention that Mastodon does indeed have similar functionality in its interface. The difference is that it only considers accounts that haven’t published a status for 1 month as inactive (and it’s not configurable).

    You can find it in “Preferences → Follows and Followers → Account Activity → Dormant”

    Screenshot of where to find the "dormant" functionality.
  • My picks on open-source licenses

    Sooner or later everybody that works with computers will have to deal with software licenses. Newcomers usually assume that software is either open-source (aka free stuff) or proprietary, but this is a very simplistic view of the world and wrong most of the time.

    This topic can quickly become complex and small details really matter. You might find yourself using a piece of software in a way that the license does not allow.

    There are many types of open-source licenses with different sets of conditions: while you can use some for basically whatever you want, others might impose some limits and/or duties. If you aren’t familiar with the most common options, take a look at choosealicense.com.

    This is also a topic that was recently the source of a certain level of drama in the industry, when companies that usually released their software and source code with a very permissive license opted to change it, in order to protect their work from certain behaviors they viewed as abusive.

    In this post I share my current approach regarding the licenses of the computer programs I end up releasing as FOSS (Free and Open Source Software).

    Let’s start with libraries, that is, packages of code containing instructions to solve specific problems, aimed at being used by other software developers in their own apps and programs. In this case, my choice is MIT, a very permissive license that allows the code to be used for any purpose without creating any other implications for the end result (app/product/service). In my view, this is exactly the aim an open-source library should have.

    The next category is “apps and tools”: regular computer programs meant to be installed by the end user on their own computer. For this scenario, my choice is GPLv3. I’m providing a tool with the source code for free, which the user can use and modify as they see fit. The only thing I ask is: if you modify it in any way, to make it better or to address a different scenario, please share your changes under the same license.

    Finally, the last category is “network applications”: computer programs that can be used over the network without having to install them on the local machine. Here I think AGPLv3 is a good compromise; it basically says that if someone modifies the software and lets their users access it over the network (so they don’t distribute copies of it), they are free to do so, as long as they share their changes under the same license.

    And this is it. I think this is a good enough approach for now (even though I’m certain it isn’t a perfect fit for every scenario). What do you think?

  • Giving a new life to old phones

    Nowadays, in some “developed” countries, it is very common for people to have a bunch of old phones stored somewhere in a drawer. Ten years have passed since smartphones became ubiquitous and those devices tend to become unusable very quickly, at least for their primary purpose. Either a small component breaks, the vendor stops providing updates, newer apps don’t support those older versions, etc.

    The thing is, these phones are still powerful computers. It would be great if we could give them another life once they are no longer fit for regular day to day use or the owner just wants to try a shiny new device.

    I never had many smartphones, mine tend to last many years, but I still have one or two lying around. Recently I started thinking of new uses for them, to make them work instead of just gathering dust. A quick search on the internet tells me that many people already had the same idea (I’m quite late to the party) and have been working on cool things to do with these devices.

    However, most of these articles just throw the idea at you, without telling you how to do it. Others assume that your device is relatively recent.

    Of course, the difficulty increases with the age of the phone. In my case, the software that I will be able to run on a 10-year-old Samsung Galaxy S will not be as easy to find as the software that I can run on a device that is just one or two years old.

    Below is a list of posts I found online with cool things you can do with your old phones. What sets this list apart from other results is that the items aren’t just ideas; they contain step-by-step instructions on how to achieve the end result.

    You don’t have to follow the provided instructions rigorously; feel free to introduce variations that are more appropriate to your use case.

    Have fun and reuse your old devices.

  • Dynamic DNS using Cloudflare Workers

    In this post I’ll try to describe a simple solution that I came up with to solve the issue of dynamically updating DNS records when the IP addresses of your machines/instances change frequently.

    While Dynamic DNS isn’t a new thing and many services/tools around the internet already provide solutions to this problem (for more than 2 decades), I had a few requirements that ruled out most of them:

    • I didn’t want to sign up to a new account in one of these external services.
    • I would prefer to use a domain name under my control.
    • I don’t trust the machine/instance that executes the update agent, so according to the principle of least privilege, the client should only be able to update one DNS record.

    The first and second points rule out the usual DDNS service providers, and the third point forbids me from using the Cloudflare API as is (like it is done in other blog posts), since the permissions we are allowed to set up for a new API token aren’t granular enough to only allow access to a single DNS record; at best, I would have to give access to all records under that domain.

    My solution to the problem at hand was to put a worker in front of the API, basically delegating half of the work to this “serverless function”. The flow is the following (a rough agent-side sketch comes after the list):

    • agent gets IP address and timestamp
    • agent signs the data using a previously known key
    • agent contacts the worker
    • worker verifies signature, IP address and timestamp
    • worker fetches DNS record info of a predefined subdomain
    • If the IP address is the same, nothing needs to be done
    • If the IP address is different, worker updates DNS record
    • worker notifies the agent of the outcome
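
    Just as an illustration (this is not the actual code from the repository), the agent side of this flow could be sketched with curl and openssl; the worker URL, the shared key and the exact payload format below are placeholders:

    #!/usr/bin/env bash
    # Hypothetical agent sketch: fetch the public IP, sign it together with a
    # timestamp using a pre-shared key, and send everything to the worker.
    set -euo pipefail

    WORKER_URL="https://ddns.example.workers.dev"   # placeholder
    SHARED_KEY="change-me"                          # placeholder

    IP="$(curl -fsS https://ifconfig.me)"
    TS="$(date +%s)"

    # HMAC-SHA256 over "ip|timestamp" using the pre-shared key
    SIG="$(printf '%s|%s' "$IP" "$TS" | openssl dgst -sha256 -hmac "$SHARED_KEY" | awk '{print $NF}')"

    # The worker verifies the signature and the timestamp, then updates the record if needed
    curl -fsS -X POST "$WORKER_URL" \
      -H 'Content-Type: application/json' \
      -d "{\"ip\":\"$IP\",\"timestamp\":$TS,\"signature\":\"$SIG\"}"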

    Nothing too fancy or clever, right? But it works like a charm.

    I’ve published my implementation on GitHub with a FOSS license, so anyone can modify and reuse it. It doesn’t require any extra dependencies; it consists of only two files, and you just need to drop them in the right locations and you’re ready to go. The repository can be found here, and the README.md contains the detailed steps to deploy it.

    There are other small features that could be implemented, such as using the same worker with several agents that need to update different records, so only one of these “serverless functions” would be required. But these improvements will have to wait for another time; for now, I just needed something that worked well for this particular case and that could be easily deployed in a short time.

  • kinspect – quickly look into PGP public key details

    Sometimes I just need to look into the details of a PGP key that is provided in its “armored” form by some website (not everyone publishes their keys to the keyservers).

    Normally I would have to import that key into my keyring, or save it into a file and use gnupg to inspect it (as described in this Stack Overflow answer).
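
    That gnupg route usually boils down to something like this (a quick sketch; the exact flags vary between GnuPG versions):

    # Save the armored key to a file, then list its details without importing it
    gpg --show-keys --with-fingerprint pubkey.asc

    # On older GnuPG versions, the rough equivalent is:
    gpg --with-fingerprint pubkey.asc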

    To avoid this hassle I just created a simple page with a text area where you can paste the public key and it will display some basic information about it. Perhaps an extension would be a better approach, but for now this works for me.

    You can use it on: https://kinspect.ovalerio.net

    In case you would like to contribute in order to improve it or extend the information displayed about the keys, the source code is available on GitHub under a Free Software license: https://github.com/dethos/kinspect

  • Staying on an AirBnB? Look for the cameras

    When going on a trip, it is now common practice to consider staying in a rented apartment or house instead of a hotel or hostel, mostly thanks to AirBnB, which made it really easy and convenient for both sides of the deal. Most of the time the price is super competitive and, I would say, a great fit for many situations.

    However, as happens with almost anything, it has its own set of problems and challenges. One example of these new challenges is the reports (and confirmations) that some, let’s call them malicious hosts, have been putting cameras in place to monitor the guests during their stay.

    With a small search on the internet you can find many reports of this happening.

    Someone equipped with the right knowledge and a computer can try to check if a camera is connected to the WiFi network, like this person did:

    Toot describing a camera that was hidden inside a box
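
    A minimal sketch of that kind of check with nmap (the addresses below are examples; adjust them to the network you are actually connected to):

    # Discover the devices currently on the local network
    sudo nmap -sn 192.168.1.0/24

    # Probe a suspicious device for typical camera/streaming ports
    sudo nmap -p 80,443,554,8080,8554 192.168.1.42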

    If this is your case, the following post provides a few guidelines to look for the cameras:

    Finally, try to figure out the public IP address of the network you are on ( https://dshield.org/api/myip ) and either run a port scan from the outside to see if you find any odd open ports, or look it up in Shodan to see if Shodan found cameras on this IP in the past (but you likely will have a dynamic IP address).

    InfoSec Handlers Diary Blog

    This page even provides a script that you can execute to automatically do most of the steps explained in the above article.

    However, sometimes you don’t bring your computer with you, which means you would have to rely on your smartphone to do this search. I’m still trying to find a good, trustworthy and intuitive app to recommend, since using nmap on Android will not help the less tech-savvy people.

    Meanwhile, I hope the above links provide you with some ideas and useful tools to look for hidden cameras while you stay at a rented place.

  • Some content about remote work

    If you have already read some of my earlier posts, you will know that I currently work remotely and am part of a team that is spread across a few countries.

    For this reason, I try to read a lot about the subject, in order to continuously improve the way we work. In this post I just want to share 2 links (one video and one article) that I think can be very helpful for remote teams, even though they address subjects that are common to everyone.

    So here they are:

    Documenting Decisions in a Remote Team

    This one is very important, specifically the parts about making sure everyone is in the loop, explicitly communicating the ownership of the decision and keeping a record that can be consulted in the future.

    Read on Medium

    Building Operating Cadence With Remote Teams

    This is a more general presentation explaining how things work at Zapier (a 100% remote team). One good idea from the video that caught my attention is the small “survey” before meetings, to define the plan and allow people to get more context before the meeting starts.

    Watch the video on Business of Software

  • Observations on remote work

    A few days ago I noticed that I’ve been working fully remotely for more than 2 years. To be honest, this now feels natural to me and not awkward at all, as some might think at the beginning or when they are introduced to the concept.

    Over this period, even though it was not my first experience (since I had already done it for a couple of months before), it is expected that one might start noticing what works and what doesn’t, how to deal with the shortcomings of the situation and how to make the most of its advantages.

    In this post I want to explore what I found out from my personal experience. There are already lots of articles and blog posts detailing strategies/tips on how to improve your (or your team’s) productivity while working remotely and describing the daily life of many remote workers. Instead of enumerating everything that has already been written, I will focus on some aspects that proved to have a huge impact.

    All or almost nothing

    This is a crucial one. With the exception of some edge cases, the most common scenario is that you need to interact and work with other people, so remote work will only be effective and achieve its true potential if everyone accepts that not every member of the team is present in the same building.

    The processes and all the communication channels should be available to every member of the team in the same way. This means it should resemble the scenario where all members work remotely. We know people talk in person; however, work-related discussions, memos, presentations and any other kind of activity should be available to all.

    This way we don’t create a culture where the team is divided between first and second class citizens. The only way to maximize the output of the team is to make sure everyone can contribute 100% of their skills. For that to happen, adequate processes and the right mindset are required.

    Tools matter

    To build on the previous topic, one important issue is inadequate tooling. We need to remove friction and make sure that working on a team spread across multiple locations requires no more effort and doesn’t cause more stress than it normally would in any other situation.

    Good tools are essential to make it happen. As an example, a common scenario is a bad video conference tool that is a true pain to work with, making people lose time at the beginning of the conference call because the connection can’t be established or nobody is able to hear the people on the other end. Merge that together with the image/sound constantly freezing and the frustration levels go through the roof.

    So good tools should make communication, data sharing and collaboration fluid and effortless, helping and not getting in the way. They should adapt to this environment (remote) and privilege this new way of working over the “standard”/local one (this sometimes requires some adjustments).

    Make the progress visible

    One of the issues people often complain about regarding remote work is the attitude of colleagues/managers who aren’t familiar with this way of doing things and struggle with the notion of not seeing you there at your desk. In many places, what counts is the time spent in your chair and not the work you deliver.

    On the other side, remote workers also struggle to be kept in the loop; there are many conversations that are never written down or recorded, so they aren’t able to be part of them.

    It is very important to fix this disconnect, and based on the first point (“All or almost nothing”), the complete solution requires an effort from both parties. They should make sure that the progress being made is visible to everyone, keeping the whole team in the loop and able to participate. It can be a log, some status updates, sending some previews or even asking for feedback regularly, as long as it is visible and easily accessible. People will be able to discuss the most recent progress and everyone will know what is going on. It might look like some extra overhead, but it makes all the difference.

    Final notes

    As we can see, working remotely requires a joint effort from everybody involved and is not immune to certain kinds of problems/challenges (you can read more in this blog post), but if handled correctly it can provide serious improvements and alternatives for a given organization (of course there are jobs that can’t be done remotely, but you get the point). At least at this point, I think the benefits generally outweigh the drawbacks.

  • Slack is not a good fit for community chat

    There, I said it … and a lot of other people said it before me (here, here, here, here, … and the list goes on and on).

    Before I proceed with my argument, I would like to say that Slack is a great product (even if the client consumes lots of memory) and works very well for closed teams and groups of people, since that was its original purpose. I use it every day at work and it does the job.

    However, I keep seeing it being used as the chat tool for many open online communities, and it is not fit for that purpose. Many times these communities have to resort to a bunch of hacks in order to make sure it meets the minimum needs of an open chat application.

    The main issues I see are:

    • It doesn’t let me participate without a previous individual invitation (hacks or manual labor are often used)
    • It doesn’t let me search the conversations (for previously discussed solutions) without first registering with the community.
    • For small and occasional interventions, I need to create an account.
    • Search and history limitations of the free account (this can be a problem for bigger communities)

    Of course, for some cases it could be good enough, but as a trend the final outcome is not great.

    There are many alternatives, and I will not address how an old protocol such as IRC is still a good choice for this use case, since there are lots of complaints about how it is difficult, not pretty enough or not full of little perks (such as previews, reactions, emojis, etc.).

    So which software/apps do I think could be used instead of Slack? Let me go one by one:

    Gitter: Contrary to Slack, Gitter was built with this purpose in mind, overcoming Slack’s history and search limitations. Channels are open to read, it is easy for external people to join in (no hacks), and it is open source, letting you continue using the same software even if the hosted service closes down.

    Matrix protocol: Matrix is an open chat protocol that works in a federated way (similar to email); it has many client applications (web, desktop and mobile) and several implementations of the server-side software. You can run your own server instance or host your channels on an existing one, you can set up bridges so that users on IRC, Slack, Gitter and others can interact in the same room, and it doesn’t suffer from any of the described issues of Slack.

    Discord: Discord is very similar to Slack, but many of the problems, like the limits and the hacks required to access the chat, are solved. Even though it is not as open as Matrix or IRC, you can generate a link that everyone can use to join (even without creating a new account); they will be able to search the entire history and can use the browser, making it a better fit for the use case of an open community.

    There are plenty of other choices for community chat; this is just a subset. Feel free to pick any of them, but please avoid using Slack, since it is not made for that purpose.

  • Managing Secrets With Vault

    I’ve been looking into this area, of how to handle and manage a large quantity of secrets and users, for quite a while (old post), because when an organization or infrastructure grows, the number of “secrets” required for authentication and authorization increases as well. It is at this stage that bad practices (which are no more than shortcuts), such as reusing credentials, storing them in inappropriate ways or not invalidating those that are no longer required, start becoming problematic.

    Yesterday at “Madeira Tech Meetup” I gave a brief introduction to this issue and explored ways to overcome it, which included a quick and basic explanation of Vault and a demo of a common use case.

    You can find the slides of the presentation here, and if you have any suggestions or anything you would like to discuss, feel free to comment or reach out through any of the contact methods I provided.

  • Managing a 100% remote company

    https://www.youtube.com/watch?v=e56PbkJdmZ8

    This video about GitLab was posted recently and is a very interesting case study on how a company can function normally while having all of its employees working remotely.

  • Before the flood

    Yesterday I watched the above documentary on the National Geographic Channel. It is a good piece of work and it draws attention to very pertinent issues that have been on the agenda for many years/decades. Yet we haven’t been able to overcome the lobbies and established interests that keep the status quo and their “money machines” running with disregard for future consequences. Something we already know for sure is that there is no going back and we will pay the price. Now, the question that remains is “what will the price be?”.

    You should watch it; I definitely recommend it. It reminded me of another great documentary called “Home” (you should watch it too), released in 2009 (damn, 7 years and we are still stuck), which is less focused on climate change and addresses mankind’s impact on the planet, especially over the last 100 years.

    I really hope that we can start seeing real progress soon.