
My picks on open-source licenses

Sooner or later, everybody who works with computers will have to deal with software licenses. Newcomers usually assume that software is either open-source (aka free stuff) or proprietary, but this is a very simplistic view of the world, and one that is wrong most of the time.

This topic can quickly become complex and small details really matter. You might find yourself using a piece of software in a way that the license does not allow.

There are many types of open-source licenses with different sets of conditions. While you can use some for basically whatever you want, others impose certain limits and/or duties. If you aren’t familiar with the most common options, take a look at choosealicense.com.

This topic was also recently the source of a certain level of drama in the industry, when companies that usually released their software and source code under a very permissive license opted to change it, in order to protect their work from certain behaviors they viewed as abusive.

In this post I share my current approach regarding the licenses of the computer programs I end up releasing as FOSS (Free and Open Source Software).

Let’s start with libraries, that is, packages of code containing instructions to solve specific problems, aimed at being used by other software developers in their own apps and programs. In this case, my choice is MIT, a very permissive license which allows the code to be used for any purpose without creating any other implications for the end result (app/product/service). In my view, this is exactly the aim an open-source library should have.

The next category is “apps and tools”: regular computer programs meant to be installed by end users on their computers. For this scenario, my choice is GPLv3. So I’m providing a tool, together with its source code, for free, which users can run and modify as they see fit. The only thing I ask is this: if you modify it in any way, to make it better or to address a different scenario, please share your changes under the same license.

Finally, the last category is “network applications”: computer programs that can be used over the network without having to install them on the local machine. Here I think AGPLv3 is a good compromise. It basically says that if someone modifies the software and lets users access it over the network (so no copies are distributed), they are free to do so, as long as they share their changes under the same license.

And this is it. I think this is a good enough approach for now (even though I’m certain it isn’t a perfect fit for every scenario). What do you think?


And… the blog is back

You might have noticed that the website has been unavailable during the last week (or a bit longer than that). Well, the reason is quite simple:

OVH Strasbourg datacenter burning (10-03-2021)

It took some time, but the blog is finally online again, and new content should be flowing in soon.

And kids, don’t forget about backups, because good old Murphy’s law never disappoints:

“Anything that can go wrong will go wrong”
— Wikipedia

10 years

The first post I published on this blog is now 10 years old. This wasn’t my first website or even the first blog, but it’s the one that stuck for the longest time.

The initial goal was to have a place to share anything I might find interesting on the Web, a place that would allow me to publish my opinions on all kinds of issues (if I felt like it) and to publish information about my projects. I think you can still deduce that from the tagline, which has remained unchanged ever since.

From the start, being able to host my own content was one of the priorities, in order to control its distribution and to ensure it remains universally accessible to anyone, without any locks on how and by whom it should be consumed.

The reasoning behind this decision was related to a trend that had started a couple of years earlier: the departure from the open web and the big migration to the walled gardens.

Many people thought it was an inoffensive move, something that would improve the user experience and make life easier for everyone. But as with anything in life, with time we started to see the costs.

Today the world is different. Using closed platforms that barely interact with each other is the rule, and the downsides have become evident: users are spied on for profit, platforms decide what speech is acceptable, manipulation is more present than ever, big monopolies are now gatekeepers to many markets, and so on. Summing up, information and power are concentrated in fewer hands.

Last week, this event set the topic for the post. A “simple chat app” that uses an open protocol to interact with different servers was unilaterally excluded/blocked from the market, without any chance to defend itself. A more extensive discussion can be found here.

The message I want to leave in this commemorative post is that we need to give another shot to decentralized and interoperable software, using open protocols and technologies to put creators and users back in control.

If there is anything I would like to keep for the next 10 years, it is the capability to reach, interact and collaborate with the world without having a huge corporation acting as a middleman dictating its rules.

I will continue to put effort into making sure open standards are used on this website (such as RSS, Webmention, etc.) and that I’m reachable using decentralized protocols and tools (such as email, Matrix or the “Fediverse”). I think this is the minimum a person could ask for the next decade.


The app I’ve used for the longest period of time

What is the piece of software (app) you have used continuously for the longest period of time?

This is an interesting question. More than two decades have passed since I got my first computer. Throughout all this time my usage of computers has evolved dramatically, and most of the software I installed back then no longer exists or is so outdated that there is no point in using it.

Even the “type” of software changed: before, I didn’t rely on the many web apps and SaaS (Software as a Service) products that dominate the market nowadays.

The devices we use to run the software also changed; now it’s common for people to spend more time on certain mobile apps than on their desktop counterparts.

In the last two decades, not only have user needs changed, but also the communication protocols on the internet, the multimedia codecs and the main “algorithms” for certain tasks.

It is true that many things changed; however, others haven’t. There are apps that were relevant at the time, are still in use, and I expect will still be around for many years.

I spent some time thinking about my answer to the question, given that I have a few strong contenders.

One of them is Firefox. However, my usage of the browser was split by periods when I tried other alternatives. I installed it when it was initially launched and I still use it nowadays, but the continuous usage time doesn’t take it to first place.

I used Windows for 12/13 straight years before switching to Linux, but it is still not enough (I also don’t think operating systems should be taken into account for this question, since for most people the answer would be Windows).

VLC is another contender, but as with Firefox, I started using it early and then kept switching back and forth with other media players throughout the years. The same applies to the “office” suite.

The final answer seems to be Thunderbird. I’ve been using it daily since 2004, which means 16 years and counting. At the time I was fighting the ridiculously small storage limit I had for my “webmail” inbox, so I started using it to download the messages to my computer in order to save space. I still use it today for totally different reasons.

And you, what is the piece of software or app you have continuously used for the longest period of time?


Observations on remote work

A few days ago I noticed that I’ve been working fully remotely for more than two years. To be honest, this now feels natural to me and not awkward at all, as some might think at the beginning or when they are first introduced to the concept.

Over this period (even though it was not my first experience, since I had already done it for a couple of months before), one starts noticing what works and what doesn’t, how to deal with the shortcomings of the situation and how to make the most of its advantages.

In this post I want to explore what I found out from my personal experience. There are already lots of articles and blog posts detailing strategies/tips on how to improve your (or your team’s) productivity while working remotely and describing the daily life of many remote workers. Instead of enumerating everything that has already been written, I will focus on some aspects which proved to have a huge impact.

All or almost nothing

This is a crucial one. With the exception of some edge cases, the most common scenario is that you need to interact and work with other people. So remote work will only be effective and achieve its true potential if everyone accepts that not every member of the team is present in the same building.

The processes and all the communication channels should be available to every member of the team in the same way. This means they should resemble the scenario where all members work remotely. We know people talk in person; however, work-related discussions, memos, presentations and any other kind of activity should be available to all.

This way we don’t create a culture where the team is divided between first and second class citizens. The only way to maximize the output of the team is to make sure everyone can contribute with 100% of their skills. For that to happen, adequate processes and a matching mindset are required.

Tools matter

Building on the previous topic, one important issue is inadequate tooling. We need to remove friction and make sure working on a team that is spread across multiple locations requires no more effort, and causes no more stress, than it normally would in any other situation.

Good tools are essential to make that happen. A common counterexample is a bad video conference tool that is a true pain to work with, making people lose time at the beginning of the call because the connection can’t be established or nobody is able to hear the people on the other end. Add to that the image/sound constantly freezing, and the frustration levels go through the roof.

So good tools should make communication, data sharing and collaboration fluid and effortless, helping rather than getting in the way. They should adapt to the remote environment and privilege this new way of working over the “standard”/local one (this sometimes requires some adjustments).

Make the progress visible

One of the issues people often complain about regarding remote work is the attitude of colleagues/managers who aren’t familiar with this way of doing things and struggle with the notion of not seeing you there at your desk. In many places, what counts is the time spent in your chair and not the work you deliver.

On the other side, remote workers also struggle to be kept in the loop: there are many conversations that are never written down or recorded, so they aren’t able to be part of them.

It is very important to fix this disconnection, and, based on the first point (“All or almost nothing”), the complete solution requires an effort from both parties. They should make sure that the progress being made is visible to everyone, keeping the whole team in the loop and able to participate. It can be a log, some status updates, sending some previews or even asking for feedback regularly; as long as it is visible and easily accessible, people will be able to discuss the most recent progress and everyone will know what is going on. It might look like some extra overhead, but it makes all the difference.

Final notes

As we can see, working remotely requires a joint effort from everybody involved and is not immune to certain kinds of problems/challenges (you can read more in this blog post), but if handled correctly it can provide serious improvements and alternatives to a given organization (of course there are jobs that can’t be done remotely, but you get the point). At least at this point, I think the benefits generally outweigh the drawbacks.


Federated Tweets, or Toots

Recently there has been a big fuss about “Mastodon”, an open-source project that is very similar to Twitter. The biggest difference is that it is federated. So what does that mean?

It means that it works like “email”: there are several providers (called instances) where you can create an account (you can set up your own server if you desire), and accounts from different providers can communicate with each other, instead of all the information living in just one silo.
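To make the email analogy more concrete, here is a minimal sketch (in Python, standard library only, not anything from the project itself) of how a federated user@instance address can be resolved through the WebFinger protocol that Mastodon instances expose; the account used is a made-up example:

```python
# Illustrative sketch: resolving a federated account the way email-style
# addresses are resolved, via the WebFinger protocol exposed by Mastodon
# instances. The account below is a made-up example.
import json
import urllib.parse
import urllib.request

def webfinger(account):
    """Look up user@instance and return its WebFinger document."""
    _, instance = account.split("@")
    url = "https://{}/.well-known/webfinger?{}".format(
        instance,
        urllib.parse.urlencode({"resource": "acct:" + account}),
    )
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)

# Example: print the profile links an instance advertises for a user.
for link in webfinger("someuser@example.social")["links"]:
    print(link.get("rel"), "->", link.get("href"))
```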

Of course, for someone who is in favor of an open web, this is a really important “feature”.

Another big plus is that the wheel wasn’t reinvented: this network is interoperable with the existing “GNU Social” providers (it uses the same protocol), so you can communicate and interact with people that have an account on an instance running that software. It can be seen as two providers of the same network running different software packages (one in PHP, the other in Ruby) but talking the same language over the network.

I haven’t tested it much yet, but given that it is a push for a solution that is not centralized (a rare thing nowadays), I think it is a small step in the right direction. So I’ve set up an individual instance for myself, where I will regularly publish links to posts/articles/pages that I find interesting. Feel free to follow at https://s.ovalerio.net and, if you know someone worth following on this network, let me know.

Here is a link with more information:

List of instances where you can create an account


Starting the “1ppm Challenge”

So certain parts of the world have already entered the year 2017 (this system doesn’t sound that great, but I will leave that discussion for another occasion) and we, here in Europe, are making preparations to start the new year in a few hours.

I am not fond of those traditional New Year’s resolutions that everyone makes; they always seem destined to fail. But yesterday I found a challenge on HackerNews that is very interesting and looks like a great push to be more productive during the whole year.

This post explains it a little better, but in brief, everyone that takes on the “1ppm Challenge” must build and ship a different project every month during the next year. For it to work out in that restricted time-frame, the projects must have a clear objective and focus on solving a well-defined problem. They also must cut all the clutter and be an MVP (Minimum Viable Product), since we only have roughly 4 weeks for each one.

In the original challenge the projects can be anything, but I will restrict it to software projects (at least 10 out of 12). My goal with this is to improve my skills in shipping new products and to focus on what matters most in a given moment. I’m realistic about the challenge, and having a 100% success rate will be hard, so at the end of the year I will evaluate my performance in this simple way: number_of_finished_projects/12.

By number_of_finished_projects I mean every project that meets all the goals defined for it. Since I don’t yet have 12 good ideas I really want to work on during the next year, the project for each month will be announced in a new post before the beginning of that month, and the challenge log will be updated.

So let’s see what score I achieve at the end of the year. To get things started, here is the description of the project for the next month:


Audio and video capture monitor

Description: This project’s aim is to let people know when their computer’s camera and microphone are being used by any program. This way, every time a program starts to use these devices, the user gets an alert. For now it will be Linux only (a rough sketch of one possible detection approach follows the goals below).

Goals:

  • Must be written using Rust
  • Must detect when the camera or the mic is active (being used)
  • Must alert the user
  • Provide a log of all activity (optional)
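
The goals call for Rust, but just to illustrate one possible detection approach (an assumption on my part, not the final design), here is a quick Python sketch that polls /proc to see which processes hold /dev/video* open; it usually needs to run as root to inspect other users’ processes:

```python
# Hypothetical sketch of the detection idea: scan /proc/*/fd and alert
# when a new process opens /dev/video* (the camera devices on Linux).
# Usually requires root to read file descriptors of other processes.
import glob
import os
import time

def processes_using_camera():
    """Return {pid: program name} for processes with /dev/video* open."""
    users = {}
    for fd in glob.glob("/proc/[0-9]*/fd/*"):
        try:
            target = os.readlink(fd)
        except OSError:
            continue  # process exited, or we lack permission
        if target.startswith("/dev/video"):
            pid = fd.split("/")[2]
            try:
                with open("/proc/%s/comm" % pid) as f:
                    users[pid] = f.read().strip()
            except OSError:
                users[pid] = "?"
    return users

if __name__ == "__main__":
    known = set()
    while True:
        current = processes_using_camera()
        for pid, name in current.items():
            if pid not in known:
                print("ALERT: %s (pid %s) started using the camera" % (name, pid))
        known = set(current)
        time.sleep(1)
```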

So, I’m looking forward to seeing how this challenge will play out. Tomorrow it is time to start. Hoping for a better 2017 for you all.


Pixels Camp 2016

A few weeks ago, the first edition of Pixels Camp (aka Codebits 2.0) took place in Lisbon, an event that I try to attend whenever it happens (see previous posts about it). It is the biggest technology-focused event/conference in Portugal, with a number of attendees close to 1000.

This year the venue changed to LX Factory. Even though the place is really cool, it is not as well located as the previous venue, at least for people who don’t live in Lisbon and arrive through the airport. The venue was well decorated and had a cool atmosphere, giving you the feeling that it was the place to be. However, this year there was less room for the teams working on the projects and not everybody was able to get a table/spot (it appeared to me that the venue was a little bit smaller than the previous one).

From the dozens of great talks that were given on the 4 stages of the event, many of which I was not able to see since I was competing in the 48h programming competition, below are two that I really liked:

Chrome Dev Tools Masterclass

IPFS, The Interplanetary Filesystem

If you are curious, you can find the rest on their YouTube channel.

All this is great, but the main activity of Pixels Camp is the 48h programming competition, and this year we had another great batch of cool projects being developed (60 in total, if I remember correctly).

As usual I entered the contest, this time with fellow Whitesmithians Rui and Pedro. We chose to develop a GPS-based game, you know, since it seemed to be a popular thing this summer and we thought the medium still has great potential for really entertaining stuff.

The idea was already a few years old but had never been implemented, and at its core it was quite simple: it took some ideas from the classic game “pong” and adapted them to be played in a fun way while navigating through a real-world area.

We called it PonGO. Essentially, the players must agree on a playing field, such as a city block, a city or even bigger areas; then they connect their phones and the ball starts rolling. The players have to move around with their phones (which they use to see the map and track everyone’s position), trying to catch the ball and throw it to the other side of the map. The player that manages to do it the most times wins the game. Here is a sketch we did while discussing the project:

Initial sketch

As you can see in the above image, which would be on the phone’s screen, the player (in yellow) got close enough to the ball to play it; now they have to send it toward one of the opposite sides (marked in green). The other players (in blue) will have to run to catch the ball before it gets out. Spread across the map you can see some power-ups that give players special capabilities.
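
The core mechanic boils down to a proximity check between GPS coordinates. As an illustration (not the actual PonGO code), here is a sketch of such a check using the haversine formula, with a made-up catch radius:

```python
# Illustrative sketch (not the actual PonGO code): checking whether a
# player is close enough to the ball, using the haversine formula.
import math

EARTH_RADIUS_M = 6371000  # mean Earth radius, in meters

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points, in meters."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2
    return 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))

CATCH_RADIUS_M = 10  # hypothetical "close enough to play the ball" threshold

def can_play_ball(player_pos, ball_pos):
    """player_pos and ball_pos are (lat, lon) tuples."""
    return haversine_m(*player_pos, *ball_pos) <= CATCH_RADIUS_M
```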

That’s it. It might seem easy, but doing it in less than 48h is not. We ended up with a working version of the game, but the power-ups were not implemented due to time constraints. Here are some screenshots of the final result (we used the map view instead of the satellite view, so it might look a little different):

In-game action screenshots

The code itself is a mess (it was a hackathon, what were you expecting?) and can be found here and here.

In the end, it was a great event as usual, and I would also like to congratulate some of my coworkers at Whitesmith who took 7th place in the competition. Next year I hope to be there again (and you should too).


Browsing folders of markdown files

If you are like me, you have a bunch of notes and documents written in markdown spread across many folders. Even the documentation of some projects involving many people is done this way and stored, for example, in a git repository. While it is easy to open a text editor to read these files, it is not the most pleasant experience, since the markup language was made to later generate readable documents in other formats (e.g. HTML).

For many purposes, setting up the required configuration of documentation-generating tools (like mkdocs) is not practical, nor was it the initial intent when the documents were written. So last weekend I took a couple of hours and built a rough (and dirty) tool to help me navigate and read markdown documents with a more pleasant experience, using the browser (applying a GitHub-like style).

I called it mdvis and it is available for download through “pip”.

It does not provide many features and is somewhat “green”, but it serves my current purposes. The program is open-source, so you can check it out here in case you want to help improve it.
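
The core idea is simple enough to sketch. Here is a minimal illustration (not mdvis’s actual code) that renders a single markdown file to HTML and serves it locally, assuming the third-party “markdown” package is installed (pip install markdown):

```python
# Minimal sketch of the idea (not mdvis's actual code): render a markdown
# file to HTML and serve it in the browser.
import sys
from http.server import BaseHTTPRequestHandler, HTTPServer

import markdown  # third-party package: pip install markdown

PAGE = """<!doctype html>
<html><head><meta charset="utf-8"><title>{title}</title></head>
<body>{body}</body></html>"""

class MarkdownHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Re-read the file on every request, so edits show up on refresh.
        with open(sys.argv[1], encoding="utf-8") as f:
            body = markdown.markdown(f.read())
        html = PAGE.format(title=sys.argv[1], body=body).encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "text/html; charset=utf-8")
        self.end_headers()
        self.wfile.write(html)

if __name__ == "__main__":
    # Usage: python serve_md.py README.md ; then open http://localhost:8000
    HTTPServer(("localhost", 8000), MarkdownHandler).serve_forever()
```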


3 Months of Remote Work

Three months have passed since I left the office and started working remotely (more than 1000 km away). In this post I share the “pros and cons” of my short experience, even though many people across the Internet have already covered this topic extensively.

Whitesmith has been “remote friendly” since the first day I joined, and more recently the company has been trying to become a “remote first” business, as described in a recent blog post. What this means is that remote workers should be treated as first-class citizens and the company’s processes should assume that all employees are working remotely. This mindset gave me the possibility to move farther away for a while.

The first thing I did was rent a desk in the nearest co-working space, because staying in the same house 24/7 is not my thing. It was a good decision: this way it is possible to regularly meet and interact with new people from different backgrounds, and I have a spot where I can focus without too many distractions.

Regarding job-related issues, the asynchronous nature of remote work is both its biggest strength and, at the same time, its biggest drawback. I say this because all the liberty and flexibility come at a cost: the lack of a fast feedback loop and of that instant discussion on the spot that settles everything, without the need for more message round trips or checking my peers’ availability for a quick video call.

On the social side, one aspect that I noticed (and already expected before embracing this new experience) was a slight detachment from what’s going on in the office. Slack is more active than ever, but it is not the same as the “water cooler”, and new people are constantly joining in. Without a physical presence, it is hard to get to know the newcomers.

Even though there are these rough edges, I’m really enjoying working remotely. In 2016 I will try a few new strategies to overcome the above obstacles, such as:

  • Improve my written communication skills
  • Avoid Slack for long-running discussions and prefer more structured platforms
  • Organize some online activities/events
  • Work on small projects with the new teammates

Let’s see how it goes in the next few months.