Yarmo's blog: Blog of an Open Source developer (https://yarmo.eu/blog/atom.xml)

How not to use a password manager (2022-06-12)
https://yarmo.eu/blog/how-not-to-use-password-manager/

<p>I have just read Kev's latest blog post named <a href="https://kevq.uk/segregating-email-with-sub-domains/">Segregating Email With Subdomains</a> and it's a trick I am most definitely adopting. I enjoy using catch-all addresses over <code>+something</code> addresses as the latter are rejected by some services. But only one person per domain can enjoy this privilege. If each member of the server gets their own subdomain, they all get to enjoy a catch-all address. Simple yet clever.</p>
<p>But I feel an urge to quickly react to the post's premise and its stated "problem with the setup".</p>
<p>Both the author and a friend of theirs have found themselves in the predicament of not being able to reset their forgotten password because they also forgot the email address they used to sign up for the service in question.</p>
<p>At the risk of sounding like Captain Obvious, password managers are an all-or-nothing solution. There is nothing to be gained by "not putting the credentials to this one service in the password manager".</p>
<p>Every time you don't use a random password issued and tracked by your password manager, you are using a password you will need to remember, which means it will either be one you already use elsewhere or some derivative of the service's name. This opens your account up to a host of attacks.</p>
<p>I take pride in not knowing any of my passwords anymore — except for the big one for the password manager. Yes, it takes a bit of time to get used to the system but once you have it, it just makes no sense not to use it for every single service you sign up for.</p>
<p>And in my humble opinion, laziness is the worst reason not to register a password in the password manager. This is not the reason stated in the blog post but I am adding it as a general statement.</p>
<blockquote>
<p>it’s easy to lose track of which address you have used where</p>
</blockquote>
<p>It doesn't have to be. Password managers are not just password managers: they are identity managers. Therefore, we should extend that identity "randomness" to the email addresses we use as well.</p>
<p>Until a few weeks ago, I used a catch-all address combined with a random string generator, the result of which I stored in Keepass. Cumbersome, granted, but effective. If you don't have a catch-all address, a plus address works for those services that don't reject them.</p>
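<p>For illustration, generating such a random local part takes one line of shell (the domain below is a placeholder; length and character set are arbitrary choices):</p>

```shell
# Generate a 12-character random local part for a catch-all address
rand=$(tr -dc 'a-z0-9' < /dev/urandom | head -c 12)
echo "${rand}@example.com"
```

<p>Store the result alongside the password and you never have to guess which address you used where.</p>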
<p>But when <a href="https://bitwarden.com/blog/add-privacy-and-security-using-email-aliases-with-bitwarden/">Bitwarden recently announced support for email forwarding services</a>, I nearly immediately migrated to their service as this would streamline my process — and add a dash of anonymity as long as we trust our email forwarding service of choice.</p>
<blockquote>
<p>Yes, my password manager is supposed to remember the email used, but that’s not always reliable</p>
</blockquote>
<p>With all due respect, I can't imagine a single situation where my password manager's capacity to store the credentials I entered isn't <em>reliable</em>.</p>
<p>So, in short, we can (and should) discuss at great length how to properly use a password manager — it's a complicated matter. But skipping some accounts is for sure <strong>not</strong> how to use a password manager.</p>
Matrix Synapse: migrating from Cloudron to ansible (2022-06-11)
https://yarmo.eu/blog/synapse-cloudron-ansible/

<p>Maybe, like me, you tried to selfhost a <a href="https://matrix.org/docs/projects/server/synapse">Matrix Synapse</a> server, miserably failed because it's just not quite that easy and then settled for <a href="https://docs.cloudron.io/apps/synapse/">Cloudron's Synapse app</a> because it just works. Pay a bit more, worry a little less.</p>
<p>Sure, it works, but you still introduced a middle man into your pristine homelab setup, and the itch to get rid of it never goes away.</p>
<p>Time for round 2.</p>
<h2 id="The_ansible_playbook">The ansible playbook</h2>
<p>The last few years, I have been slowly learning how to work with <a href="https://www.ansible.com/">ansible</a> and ever since I found the <a href="https://github.com/spantaleev/matrix-docker-ansible-deploy">matrix-docker-ansible-deploy</a> project, I knew this would be my chance for redemption.</p>
<p>So, I found myself a spare machine (SBC, NUC, VPS, all should work), installed Debian 11, enabled SSH and went to work. Here's what I did to finally make the Cloudron migration happen.</p>
<h2 id="Decommissioning_Cloudron">Decommissioning Cloudron</h2>
<p>Read through the whole guide first! Understanding why you do certain things will help you do them correctly or do them differently: just because I did something a certain way doesn't mean it's the best way for everyone.</p>
<p>First, I moved some important files from the Cloudron server to my new server. There are two folders of interest, both can be obtained by visiting the Cloudron admin panel, going to the settings for the Matrix-Synapse app and clicking on <code>Storage</code>.</p>
<p>In the <code>/home/yellowtent/appsdata/<APP-ID></code> folder, you'll find the <code>postgresqldump</code> file that was generated during the last Cloudron app backup — so make sure to run a backup right before migrating to have the latest data!</p>
<p>In the <code>/mnt/data/apps/<APP-NAME>/data</code> folder, you'll find the all-important <code>media_store</code> folder.</p>
<p>Additionally, in the <code>/mnt/data/apps/<APP-NAME>/configs</code> folder, you will find the <code>homeserver.yaml</code> config file for the Matrix Synapse server — use this for inspiration for the new one but more importantly, make a note of the <code>database/args/user</code> value. There is also the <code>signing.key</code> file.</p>
<p>Using your method of choice (rsync, wormhole…), copy the <code>postgresqldump</code> file, the <code>media_store</code> folder and the <code>signing.key</code> over to the new server in — for example — the <code>/migration</code> folder.</p>
<p>This is the moment to power down the Cloudron machine (or simply the app if you wish to keep Cloudron running) and update the <a href="https://github.com/spantaleev/matrix-docker-ansible-deploy/blob/master/docs/configuring-dns.md">DNS records</a>.</p>
<h2 id="Fixing_the_database_dump">Fixing the database dump</h2>
<p>Before we touch ansible, we need to go through our <code>postgresqldump</code> file and replace all instances of the previous database user (the one you found in the <code>homeserver.yaml</code> on the Cloudron server, it should look like <code>user1a2b3c4d5e6f7g8h9i</code>) with <code>matrix</code>.</p>
<p>According to the <a href="https://github.com/spantaleev/matrix-docker-ansible-deploy/blob/master/docs/importing-postgres.md">Importing an existing Postgres database</a> guide, this should be <code>synapse</code> and not <code>matrix</code>. I ran into permission issues when using <code>synapse</code> but that may have been due to a configuration error I made elsewhere. Feel free to attempt it with <code>synapse</code>.</p>
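<p>A quick way to do that replacement is with <code>sed</code>; a sketch, where the user string is a placeholder for the value from your <code>homeserver.yaml</code> (shown here on a sample line rather than the real dump):</p>

```shell
old_user="user1a2b3c4d5e6f7g8h9i"  # placeholder; take the real value from homeserver.yaml
# On the real file you would run: sed -i "s/${old_user}/matrix/g" /migration/postgresqldump
echo "ALTER TABLE users OWNER TO ${old_user};" | sed "s/${old_user}/matrix/g"
# prints: ALTER TABLE users OWNER TO matrix;
```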
<h2 id="Setting_up_the_new_server_with_ansible">Setting up the new server with ansible</h2>
<p>I am going to skip the majority of the ansible process here, namely the <a href="https://github.com/spantaleev/matrix-docker-ansible-deploy/blob/master/docs/configuring-playbook.md">Configuring the Ansible playbook</a> part, and only focus on what is relevant when migrating from a Cloudron instance. There is a dedicated <a href="https://github.com/spantaleev/matrix-docker-ansible-deploy/blob/master/docs/importing-postgres.md">Importing an existing Postgres database</a> guide but I had to change the steps a bit to make them work for me.</p>
<p>I followed the <a href="https://github.com/spantaleev/matrix-docker-ansible-deploy/blob/master/docs/configuring-playbook.md">Configuring the Ansible playbook</a> steps to obtain my ansible configuration file.</p>
<p>In the end, I added to <code>vars.yml</code>:</p>
<pre data-lang="yml" style="background-color:#212733;color:#ccc9c2;" class="language-yml "><code class="language-yml" data-lang="yml"><span style="font-style:italic;color:#5c6773;"># Set up synapse database connection
</span><span style="color:#73d0ff;">matrix_synapse_database_user</span><span style="color:#ccc9c2cc;">: </span><span style="color:#bae67e;">'matrix'
</span><span style="color:#73d0ff;">matrix_synapse_database_password</span><span style="color:#ccc9c2cc;">: </span><span style="color:#bae67e;">'SUPERSECRETPASSWORD'
</span><span style="color:#73d0ff;">matrix_synapse_database_database</span><span style="color:#ccc9c2cc;">: </span><span style="color:#bae67e;">'matrix'
</span><span>
</span><span style="font-style:italic;color:#5c6773;"># Set up postgres
</span><span style="color:#73d0ff;">matrix_postgres_connection_username</span><span style="color:#ccc9c2cc;">: </span><span style="color:#bae67e;">'matrix'
</span><span style="color:#73d0ff;">matrix_postgres_connection_password</span><span style="color:#ccc9c2cc;">: </span><span style="color:#bae67e;">'SUPERSECRETPASSWORD'
</span><span style="color:#73d0ff;">matrix_postgres_db_name</span><span style="color:#ccc9c2cc;">: </span><span style="color:#bae67e;">'matrix'
</span></code></pre>
<p>If you chose in the previous section to import the database dump into the <code>synapse</code> database instead of the <code>matrix</code> one, make sure to update those values here.</p>
<p>Let's let ansible set up everything on the server:</p>
<pre data-lang="bash" style="background-color:#212733;color:#ccc9c2;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#ffd580;">ansible-playbook</span><span style="color:#ffcc66;"> -i</span><span> inventory/hosts setup.yml</span><span style="color:#ffcc66;"> --tags</span><span style="color:#f29e74;">=</span><span>setup-all</span><span style="color:#ffcc66;"> -K
</span></code></pre>
<p>Do not start just yet! First, let's import the database dump:</p>
<pre data-lang="bash" style="background-color:#212733;color:#ccc9c2;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#ffd580;">ansible-playbook</span><span style="color:#ffcc66;"> -i</span><span> inventory/hosts setup.yml </span><span style="color:#ccc9c2cc;">\
</span><span style="color:#ffcc66;"> --extra-vars</span><span style="color:#f29e74;">=</span><span style="color:#bae67e;">'server_path_postgres_dump=/migration/postgresqldump' </span><span style="color:#ccc9c2cc;">\
</span><span style="color:#ffcc66;"> --tags</span><span style="color:#f29e74;">=</span><span>import-postgres</span><span style="color:#ffcc66;"> -K
</span></code></pre>
<p>If there weren't any errors, let's import the <code>media_store</code> folder as well:</p>
<pre data-lang="bash" style="background-color:#212733;color:#ccc9c2;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#ffd580;">ansible-playbook</span><span style="color:#ffcc66;"> -i</span><span> inventory/hosts setup.yml </span><span style="color:#ccc9c2cc;">\
</span><span style="color:#ffcc66;"> --extra-vars</span><span style="color:#f29e74;">=</span><span style="color:#bae67e;">'server_path_media_store=/migration/media_store/' </span><span style="color:#ccc9c2cc;">\
</span><span style="color:#ffcc66;"> --tags</span><span style="color:#f29e74;">=</span><span>import-synapse-media-store</span><span style="color:#ffcc66;"> -K
</span></code></pre>
<p>If still no errors, great! Now let's take a look at that signing key we copied over. What I should have done was follow the instructions <a href="https://github.com/spantaleev/matrix-docker-ansible-deploy/issues/738">here</a> to add the previous signing key to the list of old signing keys. But I didn't know of these instructions when performing the setup, so I simply used the old key to overwrite the signing key ansible had generated and stored in <code>/matrix/synapse/config/matrix.your.domain.signing.key</code>. Not as elegant, I admit, but it works.</p>
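<p>For reference, the approach from the linked issue boils down to declaring the old key as an expired signing key in <code>vars.yml</code> and letting the playbook generate a fresh one. A sketch with hypothetical key id, key material and timestamp — verify the exact shape against the issue and Synapse's <code>old_signing_keys</code> documentation before relying on it:</p>

```yaml
matrix_synapse_configuration_extension_yaml: |
  old_signing_keys:
    "ed25519:a_OldKeyId":               # hypothetical key id from the old signing.key
      key: "base64OldPublicKeyHere"      # public part of the old key
      expired_ts: 1654905600000          # ms timestamp after which the key stops being used
```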
<p>It's time to start the server (and run the setup again for the new signing key):</p>
<pre data-lang="bash" style="background-color:#212733;color:#ccc9c2;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#ffd580;">ansible-playbook</span><span style="color:#ffcc66;"> -i</span><span> inventory/hosts setup.yml</span><span style="color:#ffcc66;"> --tags</span><span style="color:#f29e74;">=</span><span>setup-all,start</span><span style="color:#ffcc66;"> -K
</span></code></pre>
<p>Done!</p>
<p>Again, I skipped a lot of important steps like setting up a reverse proxy — this playbook includes <a href="https://github.com/spantaleev/matrix-docker-ansible-deploy/blob/master/docs/configuring-playbook-nginx.md">Nginx</a> — so make sure to read <a href="https://github.com/spantaleev/matrix-docker-ansible-deploy/tree/master/docs">all the documentation</a> to end up with a fully functional instance.</p>
<h3 id="Something_went_wrong">Something went wrong</h3>
<p>Chances are something went wrong in one of the steps above; it happens.</p>
<p>If something went wrong during the importing of the <code>postgresqldump</code>, you can't just repeat the step as postgres will now complain that some import steps were already performed (see the <a href="https://github.com/spantaleev/matrix-docker-ansible-deploy/blob/master/docs/importing-postgres.md">Importing an existing Postgres database</a> guide).</p>
<p>I followed the exact steps they propose. So, on the new server, run:</p>
<pre data-lang="bash" style="background-color:#212733;color:#ccc9c2;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#ffd580;">systemctl</span><span> stop matrix-postgres
</span><span style="color:#ffd580;">rm</span><span style="color:#ffcc66;"> -rf</span><span> /matrix/postgres/data/</span><span style="color:#f29e74;">*
</span><span style="color:#ffd580;">systemctl</span><span> start matrix-postgres
</span></code></pre>
<p>Then, back on the ansible controller, run:</p>
<pre data-lang="bash" style="background-color:#212733;color:#ccc9c2;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#ffd580;">ansible-playbook</span><span style="color:#ffcc66;"> -i</span><span> inventory/hosts setup.yml </span><span style="color:#ccc9c2cc;">\
</span><span style="color:#ffcc66;"> --tags</span><span style="color:#f29e74;">=</span><span>setup-postgres</span><span style="color:#ffcc66;"> -K
</span></code></pre>
<p>You now have an empty database ready for a fresh import!</p>
<h3 id="Checking_the_import_process_succeeded">Checking the import process succeeded</h3>
<p>To check if the data was imported correctly, here's how to log into the database and query the user table:</p>
<pre data-lang="bash" style="background-color:#212733;color:#ccc9c2;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#ffd580;">/usr/local/bin/matrix-postgres-cli
</span><span style="font-style:italic;color:#5c6773;"># List the databases
</span><span style="color:#95e6cb;">\l
</span><span style="font-style:italic;color:#5c6773;"># Connect to the matrix database
</span><span style="color:#95e6cb;">\c </span><span style="color:#ffd580;">matrix
</span><span style="font-style:italic;color:#5c6773;"># Query the users table
</span><span style="color:#ffd580;">select </span><span style="color:#f29e74;">*</span><span> from users</span><span style="color:#f29e74;">;
</span></code></pre>
<p>If that last query returns the list of users you expect to see, we should be good! Well, almost.</p>
<h2 id="Resetting_passwords">Resetting passwords</h2>
<p>For some reason that is as yet beyond me, the old passwords won't work. That is, we haven't changed the passwords during the import process, but we also cannot log in as the server will now complain the password is incorrect.</p>
<p>Luckily, we can simply reset a user's password to fix this:</p>
<pre data-lang="bash" style="background-color:#212733;color:#ccc9c2;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#ffd580;">ansible-playbook</span><span style="color:#ffcc66;"> -i</span><span> inventory/hosts setup.yml </span><span style="color:#ccc9c2cc;">\
</span><span style="color:#ffcc66;"> --extra-vars</span><span style="color:#f29e74;">=</span><span style="color:#bae67e;">'username=USER password=SUPERSECRETPASSWORD' </span><span style="color:#ccc9c2cc;">\
</span><span style="color:#ffcc66;"> --tags</span><span style="color:#f29e74;">=</span><span>update-user-password</span><span style="color:#ffcc66;"> -K
</span></code></pre>
<p>This user can now log in again.</p>
<p>If, like me, you host a Synapse server for a small number of people who trust you, this step is not an issue. If you host for a lot of people, I am not really sure how to proceed. Hopefully someone can help out here and explain what step I missed to make passwords work after the migration, and I will update the guide accordingly.</p>
<h2 id="Conclusion">Conclusion</h2>
<p>There you have it, this is what I did to obtain a working fresh Synapse server with all the data from the Cloudron server. Cloudron was nice to work with but I am glad I have everything working again without the need for a middle man.</p>
<p>So now… dendrite when?</p>
InfluxDB 2 migration (2022-05-20)
https://yarmo.eu/blog/influxdb-2-migration/

<h2 id="What_is_InfluxDB?">What is InfluxDB?</h2>
<p><a href="https://www.influxdata.com/">InfluxDB</a> is the <a href="https://github.com/influxdata/influxdb/blob/master/LICENSE">MIT licensed</a> <a href="https://en.wikipedia.org/wiki/Time_series_database">time series database</a> of my monitoring stack of choice, the so-called <a href="https://www.influxdata.com/blog/introduction-to-influxdatas-influxdb-and-tick-stack/">TICK stack</a>, consisting of <a href="https://www.influxdata.com/time-series-platform/telegraf/">Telegraf</a> (data collection agent), <a href="https://www.influxdata.com/">InfluxDB</a> (time series database), <a href="https://www.influxdata.com/time-series-platform/chronograf/">Chronograf</a> (charts and dashboard interface) and <a href="https://www.influxdata.com/time-series-platform/kapacitor/">Kapacitor</a> (data processing engine).</p>
<h2 id="My_history_with_InfluxDB">My history with InfluxDB</h2>
<p>I started my homelab somewhere in 2017-2018. About a year in, I faced some random crashes and reboots that I did not manage to understand or fix; I first turned to <a href="https://github.com/netdata/netdata">netdata</a> before settling on the TICK stack.</p>
<p>While I did eventually solve the random crashes, an obvious issue surfaced: if I want to monitor a crashing server, I probably shouldn't host the monitoring stack on that same server.</p>
<p>And so, my setup eventually converged to the one I use today: a homelab and a few VPSs, all monitored by a TICK stack installed on a dedicated VPS.</p>
<p><a href="https://docs.influxdata.com/influxdb/v2.0/reference/release-notes/influxdb/">InfluxDB 2.0</a> was released in November of 2020 and I remember this moment clearly: I updated my InfluxDB docker container, I noticed everything immediately broke, I read up on what a gargantuan update this 2.0 release was, I said "nope" and immediately reverted to 1.8.x.</p>
<p>Yesterday, I decided it was finally time to sit down and calmly move to InfluxDB 2(.2.0).</p>
<h2 id="InfluxDB_2,_or_the_Death_of_the_TICK_stack">InfluxDB 2, or the Death of the TICK stack</h2>
<p>Please refer to the <a href="https://docs.influxdata.com/influxdb/v2.0/reference/release-notes/influxdb/">release notes for InfluxDB 2</a> to see for yourself what has changed.</p>
<p>One of the most notable changes is that InfluxDB 2 now has an interface with graphs and dashboards (replacing Chronograf's functionality) and can send alerts based on data (which previously required Kapacitor). While I am sure Chronograf and Kapacitor can still be used together with InfluxDB 2, they no longer need to be, and indeed, I have now removed these services from my monitoring stack.</p>
<h2 id="InfluxDB_2,_the_migration">InfluxDB 2, the migration</h2>
<p>The <a href="https://hub.docker.com/_/influxdb">InfluxDB 2 dockerhub page</a> has clear instructions on how to migrate from 1.x to 2.x when using containers, and so I did. But while the container logs showed the migration process was successful, I ran into a weird issue: when I then tried to log in on the web interface, I was greeted with a "fresh install" screen, suggesting the InfluxDB instance was in fact not aware of the migration.</p>
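<p>For context, those instructions amount to starting the 2.x container in upgrade mode with both data directories mounted. Roughly, as a compose fragment — paths, credentials and org/bucket names below are placeholders, so check the dockerhub page for the exact variables:</p>

```yaml
# Sketch of the 1.x -> 2.x automated upgrade via the official image
services:
  influxdb:
    image: influxdb:2.2
    environment:
      DOCKER_INFLUXDB_INIT_MODE: upgrade
      DOCKER_INFLUXDB_INIT_USERNAME: admin
      DOCKER_INFLUXDB_INIT_PASSWORD: SUPERSECRETPASSWORD
      DOCKER_INFLUXDB_INIT_ORG: homelab
      DOCKER_INFLUXDB_INIT_BUCKET: telegraf
    volumes:
      - ./influxdb-v1-data:/var/lib/influxdb    # existing 1.x data
      - ./influxdb-v2-data:/var/lib/influxdb2   # new 2.x data
```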
<p>And indeed, after following the instructions of the "fresh install" screen, everything was working fine but no trace of any migrated data :(</p>
<p>If my historical InfluxDB data was in any way valuable to me, I would have put more effort into migrating all of it to InfluxDB 2.</p>
<p>But it wasn't.</p>
<h2 id="InfluxDB_2,_the_non-migration">InfluxDB 2, the non-migration</h2>
<p>And thus, in a bit of an anticlimax to this whole situation, I <code>rm -rf</code>ed the whole data directory and started anew with a fresh InfluxDB 2 instance. No more Chronograf, no more Kapacitor. All servers have updated Telegraf configurations and I am currently rebuilding the dashboards in InfluxDB's web interface.</p>
<p>And I also still need to rebuild my Grafana dashboards to make use of Flux instead of InfluxQL. I still have things to do before I can fully enjoy my monitoring solution again.</p>
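<p>For anyone facing the same dashboard rebuild, the flavor of the query language change looks roughly like this (bucket, measurement and field names are examples, not from my actual setup):</p>

```
// InfluxQL (1.x):
//   SELECT mean("usage_idle") FROM "cpu" WHERE time > now() - 1h GROUP BY time(5m)

// Flux (2.x) equivalent sketch:
from(bucket: "telegraf")
  |> range(start: -1h)
  |> filter(fn: (r) => r._measurement == "cpu" and r._field == "usage_idle")
  |> aggregateWindow(every: 5m, fn: mean)
```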
A playerctl module for polybar (2022-05-07)
https://yarmo.eu/blog/playerctl-polybar/

<h2 id="playerctl">playerctl</h2>
<p><a href="https://man.archlinux.org/man/community/playerctl/playerctl.1.en"><code>playerctl</code></a> is a simple and lightweight utility that can query and control MPRIS-enabled media players.</p>
<p>Running a simple <code>playerctl pause</code> in the terminal will pause the media player currently playing anything.</p>
<p>As effective as that is, I want something even better: a little button in my <a href="https://polybar.github.io/">polybar</a> that knows when media is playing and when pressed, pauses the player.</p>
<p>So, instead of using an existing functional script, let's write our own.</p>
<h2 id="The_polybar_module">The polybar module</h2>
<p>We need two things: the configuration of the polybar module, and a bash script that the module will call.</p>
<p>Here's the polybar configuration:</p>
<pre data-lang="ini" style="background-color:#212733;color:#ccc9c2;" class="language-ini "><code class="language-ini" data-lang="ini"><span style="color:#ffa759;">[module/playerctl]
</span><span style="color:#ffcc66;">type </span><span style="color:#f29e74;">= </span><span style="font-style:italic;color:#5ccfe6;">custom/script
</span><span style="color:#ffcc66;">exec </span><span style="color:#f29e74;">= /</span><span>home</span><span style="color:#f29e74;">/</span><span>user</span><span style="color:#f29e74;">/.</span><span>local</span><span style="color:#f29e74;">/</span><span>bin</span><span style="color:#f29e74;">/</span><span>polybar_scripts</span><span style="color:#f29e74;">/</span><span>playerctl</span><span style="color:#f29e74;">.</span><span>sh
</span><span style="color:#ffcc66;">interval </span><span style="color:#f29e74;">= </span><span style="color:#ffcc66;">0</span><span style="color:#f29e74;">.</span><span style="color:#ffcc66;">5
</span></code></pre>
<p>This module configuration simply calls the script at the defined path every 0.5 seconds.</p>
<p>Here's the bash script called by the module:</p>
<pre data-lang="bash" style="background-color:#212733;color:#ccc9c2;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="font-style:italic;color:#5c6773;">#!/usr/bin/env bash
</span><span>
</span><span>playerctlstatus</span><span style="color:#f29e74;">=</span><span style="color:#bae67e;">$</span><span>(</span><span style="color:#ffd580;">playerctl</span><span style="color:#bae67e;"> status </span><span style="color:#ffcc66;">2</span><span style="color:#f29e74;">></span><span style="color:#bae67e;"> /dev/null</span><span>)
</span><span>
</span><span style="color:#ffa759;">if </span><span style="color:#f28779;">[[ </span><span>$playerctlstatus </span><span style="color:#f29e74;">== </span><span style="color:#bae67e;">"" </span><span style="color:#f28779;">]]</span><span style="color:#f29e74;">; </span><span style="color:#ffa759;">then
</span><span> </span><span style="color:#f28779;">echo </span><span style="color:#bae67e;">""
</span><span style="color:#ffa759;">elif </span><span style="color:#f28779;">[[ </span><span>$playerctlstatus </span><span style="color:#f29e74;">=~ </span><span style="color:#bae67e;">"Playing" </span><span style="color:#f28779;">]]</span><span style="color:#f29e74;">; </span><span style="color:#ffa759;">then
</span><span> </span><span style="color:#f28779;">echo </span><span style="color:#bae67e;">"%{A1:playerctl pause:}⏸️%{A}"
</span><span style="color:#ffa759;">else
</span><span> </span><span style="color:#f28779;">echo </span><span style="color:#bae67e;">"%{A1:playerctl play:}▶️%{A}"
</span><span style="color:#ffa759;">fi
</span></code></pre>
<p>Yes, it's not a very nice script; I'm sure it can be improved in many ways, but hey, it works.</p>
<p>In short, it first gets the current status of playerctl. If the status is an empty string i.e. there are no media players, return an empty string. This basically "hides" the module.</p>
<p>If the status is the string <code>Playing</code>, return the emoji of a pause button. <code>%{A1:playerctl pause:}</code> will make the button clickable and, when clicked, will run <code>playerctl pause</code>.</p>
<p>Finally, since we know there are media players but none are playing (i.e. the status is <code>Paused</code>), show the play button, which does exactly what one would expect.</p>
<p>So, is it safe to run this script every 0.5 seconds? Here are some numbers:</p>
<pre data-lang="bash" style="background-color:#212733;color:#ccc9c2;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#ffd580;">$</span><span> time /home/user/.local/bin/polybar_scripts/playerctl.sh
</span><span style="font-style:italic;color:#5c6773;"># real 0m0,037s
</span><span style="font-style:italic;color:#5c6773;"># user 0m0,016s
</span><span style="font-style:italic;color:#5c6773;"># sys 0m0,013s
</span></code></pre>
<p>So yes, it's quite fast: even on this 2010 ThinkPad X201i, this script can easily run multiple times a second. That's handy, because you want the play/pause button to update relatively quickly after pressing it.</p>
<h2 id="Closing_remarks">Closing remarks</h2>
<p>There you have it, my playerctl module for polybar. Okay, one little detail: I do not actually use the emoji symbols in my script but rather the corresponding glyphs included in the <a href="https://www.nerdfonts.com">Mononoki nerd font</a>.</p>
<p>You could easily expand the script to show what song is playing using the command explained in my <a href="/blog/playerctl">previous blog post</a>. I might do just that soon but for now, a simple button that only appears when music is either playing or paused is exactly what I was looking for.</p>
playerctl: get currently playing music (2022-05-03)
https://yarmo.eu/blog/playerctl/

<h2 id="TLDR">TLDR</h2>
<p>To get information about music currently playing on the computer, run:</p>
<pre data-lang="bash" style="background-color:#212733;color:#ccc9c2;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#ffd580;">playerctl</span><span> metadata</span><span style="color:#ffcc66;"> --all-players --format </span><span style="color:#bae67e;">'{{ status }}: {{ artist }} - {{ title }}'
</span></code></pre>
<p>A wide variety of media players are MPRIS-enabled and can be queried using the above command, including media players running in the browser such as <a href="https://github.com/airsonic-advanced/airsonic-advanced">Airsonic</a>.</p>
<p><a href="https://man.archlinux.org/man/community/playerctl/playerctl.1.en">playerctl on man.archlinux.org</a></p>
<h2 id="Explanation">Explanation</h2>
<p>For a long time, I've been using <a href="https://github.com/airsonic-advanced/airsonic-advanced">Airsonic</a> as my media player, handy if you are into selfhosted services and despise streaming platforms with a passion.</p>
<p>Since I also do <a href="https://yarmo.live">live streaming</a> and like to play ambient music in the background, I attempted to build a little "Now playing" widget for the stream.</p>
<p>Sadly, no luck: Airsonic doesn't have an API.</p>
<p>After a lot of trying out different clients and hosting additional services, I settled on syncing my music collection from the NAS to my computer and then playing music through mopidy, as I could then query the currently playing music using MPD:</p>
<pre data-lang="bash" style="background-color:#212733;color:#ccc9c2;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="font-style:italic;color:#5c6773;"># Snippet will be added soon, currently away from computer
# (a likely candidate, untested here: mpc current -f '%artist% - %title%')
</span></code></pre>
<p>Not my favorite solution but it works.</p>
<p>In comes this guy <a href="https://deavid.wordpress.com/">DeavidSedice</a> and he tells me I can get that same information from most existing media players with a single line of bash:</p>
<pre data-lang="bash" style="background-color:#212733;color:#ccc9c2;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="font-style:italic;color:#5c6773;"># The command as he sent it to me
</span><span style="color:#ffd580;">playerctl</span><span> metadata</span><span style="color:#ffcc66;"> --all-players --format </span><span style="color:#bae67e;">'{{ status }}: {{ artist }} - {{ title }} ' </span><span style="color:#f29e74;">| </span><span style="color:#ffd580;">grep</span><span> Playing </span><span style="color:#f29e74;">| </span><span style="color:#ffd580;">tr </span><span style="color:#bae67e;">'\n' ' '
</span></code></pre>
<p>Why did I never hear about this before? A life changer! And it even works with Airsonic running in the browser.</p>
<p>This tool makes it quite trivial to write a bash script that loops the command, writes the output to a text file and have OBS display it on stream.</p>
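<p>A minimal version of such a script, assuming <code>playerctl</code> is installed and with an arbitrary output path, could look like:</p>

```shell
#!/usr/bin/env bash
# Write the currently playing track to a file an OBS text source can read.
# The output path is an arbitrary choice; point OBS at the same file.
out="/tmp/nowplaying.txt"
playerctl metadata --all-players \
  --format '{{ status }}: {{ artist }} - {{ title }}' 2> /dev/null \
  | grep Playing | tr '\n' ' ' > "$out"
```

<p>Run it in a loop (or from a systemd timer) every couple of seconds and the text file will track whatever is playing.</p>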
GPG import public key from smartcard (2022-05-03)
https://yarmo.eu/blog/gpg-import-from-smartcard/

<h2 id="TLDR">TLDR</h2>
<p>On a new computer, insert your USB OpenPGP smartcard and run:</p>
<pre data-lang="bash" style="background-color:#212733;color:#ccc9c2;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#ffd580;">gpg</span><span style="color:#ffcc66;"> --card-edit
</span><span style="color:#ffd580;">fetch
</span><span style="color:#ffd580;">quit
</span></code></pre>
<h2 id="Explanation">Explanation</h2>
<p>I have a <a href="https://www.yubico.com/products/yubikey-5-overview/">YubiKey 5</a> (still waiting on my <a href="https://www.indiegogo.com/projects/solo-v2-safety-net-against-phishing#/">Solo v2</a>) on which I store my OpenPGP secret key.</p>
<p>However, if I boot into a new system, insert my USB OpenPGP smartcard, import my public key from a keyserver:</p>
<pre data-lang="bash" style="background-color:#212733;color:#ccc9c2;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#ffd580;">gpg</span><span style="color:#ffcc66;"> --keyserver</span><span> hkps://keys.openpgp.org</span><span style="color:#ffcc66;"> --recv-keys</span><span> ABCD1234
</span></code></pre>
<p><a href="https://git-scm.com/book/en/v2/Git-Tools-Signing-Your-Work">configure git</a>:</p>
<pre data-lang="bash" style="background-color:#212733;color:#ccc9c2;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#ffd580;">git</span><span> config</span><span style="color:#ffcc66;"> --global</span><span> user.signingkey ABCD1234
</span></code></pre>
<p>and attempt to sign a commit, I'll get an error message:</p>
<pre data-lang="bash" style="background-color:#212733;color:#ccc9c2;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#ffd580;">git</span><span> commit</span><span style="color:#ffcc66;"> -S -m </span><span style="color:#bae67e;">"Signed commit"
</span><span style="font-style:italic;color:#5c6773;"># error: gpg failed to sign the data
</span><span style="font-style:italic;color:#5c6773;"># fatal: failed to write commit object
</span></code></pre>
<p>GPG doesn't yet know that it can interact with the private key stored on the USB OpenPGP smartcard!</p>
<p>So, instead of importing the public key from a keyserver, fetch it from the smartcard with the following commands:</p>
<pre data-lang="bash" style="background-color:#212733;color:#ccc9c2;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#ffd580;">gpg</span><span style="color:#ffcc66;"> --card-edit
</span><span style="color:#ffd580;">fetch
</span><span style="color:#ffd580;">quit
</span></code></pre>
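<p>A quick way to sanity-check the result (a sketch; <code>has_card_stub</code> is a helper I made up): <code>gpg --list-secret-keys</code> marks card-backed subkeys with <code>ssb&gt;</code> rather than <code>ssb</code>, the <code>&gt;</code> indicating the secret material lives on the smartcard, not on disk.</p>

```shell
# Sketch: verify the secret key is now known as an on-card stub.
# gpg marks card-backed subkeys with "ssb>" (and a stubbed primary with
# "sec#" or "sec>"); has_card_stub is a hypothetical helper checking this.
has_card_stub() {
    printf '%s\n' "$1" | grep -q '^ssb>'
}

# On a real system, after the fetch above:
#   has_card_stub "$(gpg --list-secret-keys)" && echo "subkey is on the card"
```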
Wireguard and docker: providing VPN access to arbitrary containers2022-04-16T12:00:12+00:002022-04-16T12:00:12+00:00
Unknown
https://yarmo.eu/blog/wireguard-docker/<h2 id="Your_container_might_benefit_from_VPN_access">Your container might benefit from VPN access</h2>
<p>Some containers just aren't meant to be connected directly to the internet. After all, you wouldn't want your ISP knowing which Linux distribution you download and share.</p>
<p>If, like me, you have your BitTorrent client installed as a container on a homeserver to make sure it's always connected, but you don't want to route your other containers through a VPN, you'll probably want to use a VPN-in-a-container and route your BitTorrent client through it.</p>
<p>I already had a similar solution using OpenVPN but it was time for an upgrade. Oh yes, it's <a href="https://www.wireguard.com/">Wireguard</a> time.</p>
<p>As VPN provider, I use <a href="https://mullvad.net/">Mullvad</a>.</p>
<h2 id="The_solution">The solution</h2>
<p>Our situation is this: our homeserver (could be a Linux machine, a Raspberry Pi…) runs two docker containers, one which is fine to be directly connected to the internet and one which would benefit from VPN access.</p>
<p>One could install the Wireguard client straight on the machine and route both containers through the VPN, but for various reasons, that's not what we want here.</p>
<p>Our solution will be to add another container which connects to the VPN and route our sensitive container through the VPN container.</p>
<p>With some experimenting, I got it about 90% working. The only issue was that while the BitTorrent client was perfectly shielded by the VPN, I could no longer access the client myself. Not great.</p>
<p>After two days of trying stuff out and searching the internet, I found the working solution on a blog post from 2021 which sadly already no longer exists. But thanks to the Web Archive, <a href="https://web.archive.org/web/20210207170757/https://bookstack.almueti.com/books/wireguard/page/docker-compose-with-mullvad-wireguard-arbitrary-service">its wisdom is lost no more</a>.</p>
<h2 id="PostUp_and_PreDown">PostUp and PreDown</h2>
<p>The reason I didn't get it working myself is that I knew the problem lay in the <code>PostUp/PreDown</code> commands of the Wireguard configuration. And I don't know how to read or write those :/ Mullvad provides their own but they do not work in this situation.</p>
<p>I must therefore warn you that I sadly do not fully understand the solution. I probably could fiddle with it and get it working on a different system, but I don't <em>understand</em> it. I simply took my 90%-functional implementation, copy-pasted the <code>PostUp/PreDown</code> commands from the linked blog post and voilà, success!</p>
<p>Not proud of it, and I hope I'll gain understanding of these commands in the near future, but that's the situation.</p>
<h2 id="The_implementation">The implementation</h2>
<p>You must have Wireguard installed on your system, but it doesn't need to have an active connection.</p>
<h3 id="docker-compose.yml"><code>docker-compose.yml</code></h3>
<pre data-lang="yml" style="background-color:#212733;color:#ccc9c2;" class="language-yml "><code class="language-yml" data-lang="yml"><span style="color:#73d0ff;">version</span><span style="color:#ccc9c2cc;">: </span><span style="color:#bae67e;">'2.3'
</span><span>
</span><span style="color:#73d0ff;">services</span><span style="color:#ccc9c2cc;">:
</span><span> </span><span style="color:#73d0ff;">wireguard</span><span style="color:#ccc9c2cc;">:
</span><span> </span><span style="color:#73d0ff;">image</span><span style="color:#ccc9c2cc;">: </span><span style="color:#bae67e;">linuxserver/wireguard
</span><span> </span><span style="color:#73d0ff;">hostname</span><span style="color:#ccc9c2cc;">: </span><span style="color:#bae67e;">wireguard
</span><span> </span><span style="color:#73d0ff;">container_name</span><span style="color:#ccc9c2cc;">: </span><span style="color:#bae67e;">wireguard
</span><span> </span><span style="color:#73d0ff;">cap_add</span><span style="color:#ccc9c2cc;">:
</span><span> - </span><span style="color:#bae67e;">net_admin
</span><span> - </span><span style="color:#bae67e;">sys_module
</span><span> </span><span style="color:#73d0ff;">ports</span><span style="color:#ccc9c2cc;">:
</span><span> - </span><span style="color:#bae67e;">8112:8112
</span><span> - </span><span style="color:#bae67e;">58846:58846
</span><span> </span><span style="color:#73d0ff;">volumes</span><span style="color:#ccc9c2cc;">:
</span><span> - </span><span style="color:#bae67e;">/lib/modules:/lib/modules
</span><span> - </span><span style="color:#bae67e;">./data/wireguard:/config
</span><span> </span><span style="color:#73d0ff;">sysctls</span><span style="color:#ccc9c2cc;">:
</span><span> - </span><span style="color:#bae67e;">net.ipv4.conf.all.src_valid_mark=1
</span><span>
</span><span> </span><span style="color:#73d0ff;">deluge</span><span style="color:#ccc9c2cc;">:
</span><span> </span><span style="color:#73d0ff;">image</span><span style="color:#ccc9c2cc;">: </span><span style="color:#bae67e;">linuxserver/deluge
</span><span> </span><span style="color:#73d0ff;">container_name</span><span style="color:#ccc9c2cc;">: </span><span style="color:#bae67e;">deluge
</span><span> </span><span style="color:#73d0ff;">network_mode</span><span style="color:#ccc9c2cc;">: </span><span style="color:#bae67e;">service:wireguard
</span><span> </span><span style="color:#73d0ff;">volumes</span><span style="color:#ccc9c2cc;">:
</span><span> - </span><span style="color:#bae67e;">./data/deluge:/config
</span><span> - </span><span style="color:#bae67e;">./data/downloads:/downloads
</span></code></pre>
<p>In this docker-compose setup, we use the <a href="https://hub.docker.com/r/linuxserver/wireguard">linuxserver/wireguard</a> and <a href="https://hub.docker.com/r/linuxserver/deluge">linuxserver/deluge</a> container images. Please have a look at their respective documentation for more information on their configuration.</p>
<p>A few interesting notes:</p>
<pre data-lang="yml" style="background-color:#212733;color:#ccc9c2;" class="language-yml "><code class="language-yml" data-lang="yml"><span> </span><span style="color:#73d0ff;">cap_add</span><span style="color:#ccc9c2cc;">:
</span><span> - </span><span style="color:#bae67e;">net_admin
</span><span> - </span><span style="color:#bae67e;">sys_module
</span><span> [</span><span style="color:#bae67e;">…</span><span>]
</span><span> </span><span style="color:#73d0ff;">volumes</span><span style="color:#ccc9c2cc;">:
</span><span> - </span><span style="color:#bae67e;">/lib/modules:/lib/modules
</span></code></pre>
<p>The <a href="https://hub.docker.com/r/linuxserver/wireguard">linuxserver/wireguard</a> image uses the system's Wireguard module and this configuration allows the container to access it.</p>
<pre data-lang="yml" style="background-color:#212733;color:#ccc9c2;" class="language-yml "><code class="language-yml" data-lang="yml"><span> </span><span style="color:#73d0ff;">sysctls</span><span style="color:#ccc9c2cc;">:
</span><span> - </span><span style="color:#bae67e;">net.ipv4.conf.all.src_valid_mark=1
</span></code></pre>
<p>This is important but sadly, I do not know what it does.</p>
<pre data-lang="yml" style="background-color:#212733;color:#ccc9c2;" class="language-yml "><code class="language-yml" data-lang="yml"><span> </span><span style="color:#73d0ff;">ports</span><span style="color:#ccc9c2cc;">:
</span><span> - </span><span style="color:#bae67e;">8112:8112
</span><span> - </span><span style="color:#bae67e;">58846:58846
</span></code></pre>
<p>This is the interesting part. We assign those ports to the <code>wireguard</code> container, but they are the <a href="https://raw.githubusercontent.com/linuxserver/docker-deluge/58ce900bc33b06d3c9cec24e7f17ac9d8b4433cf/Dockerfile">ports exposed by the <code>deluge</code> container</a>! Indeed, since the <code>deluge</code> container's network flows through the <code>wireguard</code> container, we can only access the <code>deluge</code> container through the <code>wireguard</code> container's network.</p>
<p>By the way, port <code>8112</code> is used for the <a href="https://deluge.readthedocs.io/en/latest/reference/web.html">Deluge WebUI</a> and port <code>58846</code> is used by <a href="https://dev.deluge-torrent.org/wiki/UserGuide/ThinClient">Deluge Thin Clients</a>. Your BitTorrent client of choice will most likely use different ports!</p>
<pre data-lang="yml" style="background-color:#212733;color:#ccc9c2;" class="language-yml "><code class="language-yml" data-lang="yml"><span> </span><span style="color:#73d0ff;">network_mode</span><span style="color:#ccc9c2cc;">: </span><span style="color:#bae67e;">service:wireguard
</span></code></pre>
<p>The trick that makes it all work: make sure that the <code>deluge</code> container connects to the internet through the <code>wireguard</code> container.</p>
<h3 id="wg0.conf"><code>wg0.conf</code></h3>
<p>This Wireguard configuration file is based on the one provided by Mullvad, but with the <code>PostUp/PreDown</code> commands found in the <a href="https://web.archive.org/web/20210207170757/https://bookstack.almueti.com/books/wireguard/page/docker-compose-with-mullvad-wireguard-arbitrary-service">blog post mentioned earlier</a>.</p>
<pre data-lang="conf" style="background-color:#212733;color:#ccc9c2;" class="language-conf "><code class="language-conf" data-lang="conf"><span style="color:#ffa759;">[Interface]
</span><span style="color:#ffcc66;">PrivateKey </span><span style="color:#f29e74;">= <</span><span>private key</span><span style="color:#f29e74;">>
</span><span style="color:#ffcc66;">Address </span><span style="color:#f29e74;">= <</span><span>ip address</span><span style="color:#f29e74;">>
</span><span style="font-style:italic;color:#f29e74;">DNS </span><span style="color:#f29e74;">= <</span><span>ip address</span><span style="color:#f29e74;">>
</span><span>
</span><span style="color:#ffcc66;">PostUp </span><span style="color:#f29e74;">= </span><span style="font-style:italic;color:#f29e74;">DROUTE</span><span style="color:#f29e74;">=</span><span>$(ip route </span><span style="color:#f29e74;">|</span><span> grep default </span><span style="color:#f29e74;">|</span><span> awk </span><span style="color:#bae67e;">'{print $3}'</span><span>); </span><span style="font-style:italic;color:#f29e74;">HOMENET</span><span style="color:#f29e74;">=</span><span style="color:#ffcc66;">192.168.0.0/16</span><span>; </span><span style="font-style:italic;color:#f29e74;">HOMENET2</span><span style="color:#f29e74;">=</span><span style="color:#ffcc66;">10.0.0.0/8</span><span>; </span><span style="font-style:italic;color:#f29e74;">HOMENET3</span><span style="color:#f29e74;">=</span><span style="color:#ffcc66;">172.16.0.0/12</span><span>; ip route add </span><span style="color:#ffa759;">$HOMENET3</span><span> via </span><span style="color:#ffa759;">$DROUTE</span><span>;ip route add </span><span style="color:#ffa759;">$HOMENET2</span><span> via </span><span style="color:#ffa759;">$DROUTE</span><span>; ip route add </span><span style="color:#ffa759;">$HOMENET</span><span> via </span><span style="color:#ffa759;">$DROUTE</span><span>;iptables </span><span style="color:#f29e74;">-</span><span style="font-style:italic;color:#f29e74;">I OUTPUT </span><span style="color:#f29e74;">-</span><span>d </span><span style="color:#ffa759;">$HOMENET </span><span style="color:#f29e74;">-</span><span>j </span><span style="font-style:italic;color:#f29e74;">ACCEPT</span><span>;iptables </span><span style="color:#f29e74;">-</span><span style="font-style:italic;color:#f29e74;">A OUTPUT </span><span style="color:#f29e74;">-</span><span>d </span><span style="color:#ffa759;">$HOMENET2 </span><span style="color:#f29e74;">-</span><span>j </span><span style="font-style:italic;color:#f29e74;">ACCEPT</span><span>; iptables </span><span style="color:#f29e74;">-</span><span 
style="font-style:italic;color:#f29e74;">A OUTPUT </span><span style="color:#f29e74;">-</span><span>d </span><span style="color:#ffa759;">$HOMENET3 </span><span style="color:#f29e74;">-</span><span>j </span><span style="font-style:italic;color:#f29e74;">ACCEPT</span><span>; iptables </span><span style="color:#f29e74;">-</span><span style="font-style:italic;color:#f29e74;">A OUTPUT </span><span style="color:#f29e74;">! -</span><span>o </span><span style="color:#ffa759;">%i </span><span style="color:#f29e74;">-</span><span>m mark </span><span style="color:#f29e74;">! --</span><span>mark $(wg show </span><span style="color:#ffa759;">%i</span><span> fwmark) </span><span style="color:#f29e74;">-</span><span>m addrtype </span><span style="color:#f29e74;">! --</span><span>dst</span><span style="color:#f29e74;">-</span><span>type </span><span style="font-style:italic;color:#f29e74;">LOCAL </span><span style="color:#f29e74;">-</span><span>j </span><span style="font-style:italic;color:#f29e74;">REJECT
</span><span style="color:#ffcc66;">PreDown </span><span style="color:#f29e74;">= </span><span style="font-style:italic;color:#f29e74;">HOMENET</span><span style="color:#f29e74;">=</span><span style="color:#ffcc66;">192.168.0.0/16</span><span>; </span><span style="font-style:italic;color:#f29e74;">HOMENET2</span><span style="color:#f29e74;">=</span><span style="color:#ffcc66;">10.0.0.0/8</span><span>; </span><span style="font-style:italic;color:#f29e74;">HOMENET3</span><span style="color:#f29e74;">=</span><span style="color:#ffcc66;">172.16.0.0/12</span><span>; ip route del </span><span style="color:#ffa759;">$HOMENET3</span><span> via </span><span style="color:#ffa759;">$DROUTE</span><span>;ip route del </span><span style="color:#ffa759;">$HOMENET2</span><span> via </span><span style="color:#ffa759;">$DROUTE</span><span>; ip route del </span><span style="color:#ffa759;">$HOMENET</span><span> via </span><span style="color:#ffa759;">$DROUTE</span><span>; iptables </span><span style="color:#f29e74;">-</span><span style="font-style:italic;color:#f29e74;">D OUTPUT </span><span style="color:#f29e74;">! -</span><span>o </span><span style="color:#ffa759;">%i </span><span style="color:#f29e74;">-</span><span>m mark </span><span style="color:#f29e74;">! --</span><span>mark $(wg show </span><span style="color:#ffa759;">%i</span><span> fwmark) </span><span style="color:#f29e74;">-</span><span>m addrtype </span><span style="color:#f29e74;">! 
--</span><span>dst</span><span style="color:#f29e74;">-</span><span>type </span><span style="font-style:italic;color:#f29e74;">LOCAL </span><span style="color:#f29e74;">-</span><span>j </span><span style="font-style:italic;color:#f29e74;">REJECT</span><span>; iptables </span><span style="color:#f29e74;">-</span><span style="font-style:italic;color:#f29e74;">D OUTPUT </span><span style="color:#f29e74;">-</span><span>d </span><span style="color:#ffa759;">$HOMENET </span><span style="color:#f29e74;">-</span><span>j </span><span style="font-style:italic;color:#f29e74;">ACCEPT</span><span>; iptables </span><span style="color:#f29e74;">-</span><span style="font-style:italic;color:#f29e74;">D OUTPUT </span><span style="color:#f29e74;">-</span><span>d </span><span style="color:#ffa759;">$HOMENET2 </span><span style="color:#f29e74;">-</span><span>j </span><span style="font-style:italic;color:#f29e74;">ACCEPT</span><span>; iptables </span><span style="color:#f29e74;">-</span><span style="font-style:italic;color:#f29e74;">D OUTPUT </span><span style="color:#f29e74;">-</span><span>d </span><span style="color:#ffa759;">$HOMENET3 </span><span style="color:#f29e74;">-</span><span>j </span><span style="font-style:italic;color:#f29e74;">ACCEPT
</span><span>
</span><span style="color:#ffa759;">[Peer]
</span><span style="color:#ffcc66;">PublicKey </span><span style="color:#f29e74;">= <</span><span>public key</span><span style="color:#f29e74;">>
</span><span style="color:#ffcc66;">AllowedIPs </span><span style="color:#f29e74;">= </span><span style="color:#ffcc66;">0.0.0.0/0
</span><span style="color:#ffcc66;">Endpoint </span><span style="color:#f29e74;">= <</span><span>ip address </span><span style="color:#ffa759;">with</span><span> port</span><span style="color:#f29e74;">>
</span></code></pre>
<h2 id="Verification">Verification</h2>
<p>We need to make sure we are in fact connected safely to Mullvad! To do this, let's use Mullvad's <code>https://am.i.mullvad.net/connected</code> API endpoint.</p>
<pre data-lang="bash" style="background-color:#212733;color:#ccc9c2;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#ffd580;">docker</span><span> exec</span><span style="color:#ffcc66;"> -t</span><span> wireguard curl https://am.i.mullvad.net/connected
</span><span style="font-style:italic;color:#5c6773;"># You are connected to Mullvad (server XXYY-wireguard). Your IP address is XYZ.XYZ.XYZ.XYZ
</span></code></pre>
<p>Success! But wait, that's the <code>wireguard</code> container; this only checks whether our config is working. What about the <code>deluge</code> container?</p>
<pre data-lang="bash" style="background-color:#212733;color:#ccc9c2;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#ffd580;">docker</span><span> exec</span><span style="color:#ffcc66;"> -t</span><span> deluge curl https://am.i.mullvad.net/connected
</span><span style="font-style:italic;color:#5c6773;"># You are connected to Mullvad (server XXYY-wireguard). Your IP address is XYZ.XYZ.XYZ.XYZ
</span></code></pre>
<p>Victory! Have fun sharing Linux distributions!</p>
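<p>Those two manual checks can also be scripted; here's a sketch (the <code>is_mullvad</code> helper is my own invention, and the container names assume the <code>docker-compose.yml</code> above):</p>

```shell
#!/bin/sh
# Sketch: leak check for the setup above. Mullvad's "connected" endpoint
# answers "You are connected to Mullvad ..." only when the request exits
# through one of their servers.
is_mullvad() {
    case "$1" in
        *"You are connected to Mullvad"*) return 0 ;;
        *) return 1 ;;
    esac
}

# On the homeserver (container names from the compose file above):
#   for c in wireguard deluge; do
#       out=$(docker exec -t "$c" curl -s https://am.i.mullvad.net/connected)
#       is_mullvad "$out" && echo "$c: behind the VPN" || echo "$c: LEAKING"
#   done
```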
My wtwitch setup2021-12-24T23:31:17+00:002021-12-24T23:31:17+00:00
Unknown
https://yarmo.eu/blog/wtwitch-setup/<h2 id="wtwitch">wtwitch</h2>
<p>wtwitch (<a href="https://github.com/krathalan/wtwitch">source repo</a>) is a neat terminal user interface for Twitch, allowing me to watch Twitch streamers without needing to ever use their website or any of their clients.</p>
<p>It manages subscriptions, has autocomplete and easily lets you start a stream in an mpv window or any player of your choosing.</p>
<p>I have a basic setup with the config file in the default place.</p>
<p><code>~/.config/wtwitch/config.json</code>:</p>
<pre data-lang="json" style="background-color:#212733;color:#ccc9c2;" class="language-json "><code class="language-json" data-lang="json"><span>{
</span><span> </span><span style="color:#bae67e;">"player"</span><span style="color:#ccc9c2cc;">: </span><span style="color:#bae67e;">"mpv"</span><span style="color:#ccc9c2cc;">,
</span><span> </span><span style="color:#bae67e;">"quality"</span><span style="color:#ccc9c2cc;">: </span><span style="color:#bae67e;">"best"</span><span style="color:#ccc9c2cc;">,
</span><span> </span><span style="color:#bae67e;">"colors"</span><span style="color:#ccc9c2cc;">: </span><span style="color:#bae67e;">"false"</span><span style="color:#ccc9c2cc;">,
</span><span> </span><span style="color:#bae67e;">"printOfflineSubscriptions"</span><span style="color:#ccc9c2cc;">: </span><span style="color:#bae67e;">"true"</span><span style="color:#ccc9c2cc;">,
</span><span> </span><span style="color:#bae67e;">"subscriptions"</span><span style="color:#ccc9c2cc;">: </span><span>[
</span><span> {
</span><span> </span><span style="color:#bae67e;">"streamer"</span><span style="color:#ccc9c2cc;">: </span><span style="color:#bae67e;">"rossmanngroup"
</span><span> }</span><span style="color:#ccc9c2cc;">,
</span><span> </span><span style="font-style:italic;color:#5c6773;">// and many more
</span><span> ]</span><span style="color:#ccc9c2cc;">,
</span><span> </span><span style="color:#bae67e;">"apiToken"</span><span style="color:#ccc9c2cc;">: </span><span style="color:#bae67e;">"..."</span><span style="color:#ccc9c2cc;">,
</span><span> </span><span style="color:#bae67e;">"apiTokenExpiry"</span><span style="color:#ccc9c2cc;">: </span><span style="color:#bae67e;">"..."</span><span style="color:#ccc9c2cc;">,
</span><span> </span><span style="color:#bae67e;">"lastSubscriptionUpdate"</span><span style="color:#ccc9c2cc;">: </span><span style="color:#bae67e;">"..."
</span><span>}
</span></code></pre>
<h2 id="wtwitch_and_rofi">wtwitch and rofi</h2>
<p>Where things get interesting is the integration with rofi (<a href="https://github.com/davatorium/rofi">source repo</a>), the application launcher and so much more.</p>
<p>All I need is a single executable file.</p>
<p><code>~/.local/bin/rofi_wtwitch.sh</code>:</p>
<pre data-lang="bash" style="background-color:#212733;color:#ccc9c2;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="font-style:italic;color:#5c6773;">#!/bin/bash
</span><span>
</span><span style="color:#ffa759;">if </span><span style="color:#f28779;">[[ </span><span>$1 </span><span style="color:#f28779;">]]</span><span style="color:#f29e74;">; </span><span style="color:#ffa759;">then
</span><span> name</span><span style="color:#f29e74;">=</span><span style="color:#bae67e;">$</span><span>(</span><span style="color:#f28779;">echo </span><span style="color:#bae67e;">$</span><span>1 </span><span style="color:#f29e74;">| </span><span style="color:#ffd580;">awk </span><span>{</span><span style="color:#bae67e;">'print $1'</span><span>} </span><span style="color:#f29e74;">| </span><span style="color:#ffd580;">sed </span><span style="color:#bae67e;">'s/\://'</span><span>)
</span><span> </span><span style="color:#ffd580;">wtwitch</span><span> w $name </span><span style="color:#f29e74;">></span><span> /dev/null
</span><span style="color:#ffa759;">else
</span><span> </span><span style="color:#ffd580;">wtwitch</span><span> check </span><span style="color:#f29e74;">| </span><span style="color:#ffd580;">sed</span><span style="color:#ffcc66;"> -n </span><span style="color:#bae67e;">'/Live/,/Offline/p' </span><span style="color:#f29e74;">| </span><span style="color:#ffd580;">sed </span><span style="color:#bae67e;">'/Live channels/d;/Offline/d' </span><span style="color:#f29e74;">| </span><span style="color:#ffd580;">sed </span><span style="color:#bae67e;">'s/\x1B\[[0-9;]\{1,\}[A-Za-z]//g;s/ //;'
</span><span style="color:#ffa759;">fi
</span></code></pre>
<h3 id="Running_straight_from_terminal">Running straight from terminal</h3>
<pre style="background-color:#212733;color:#ccc9c2;"><code><span>rofi -modi wtwitch:/home/yarmo/.local/bin/rofi_wtwitch.sh -show wtwitch
</span></code></pre>
<h3 id="For_bspwm/sxhkd_users">For bspwm/sxhkd users</h3>
<p><code>~/.config/sxhkd/sxhkdrc</code>:</p>
<pre style="background-color:#212733;color:#ccc9c2;"><code><span># Launch wtwitch in rofi
</span><span>super + shift + d
</span><span> rofi -modi wtwitch:/home/yarmo/.local/bin/rofi_wtwitch.sh -show wtwitch
</span></code></pre>
<p>Replace with your own keybinding.</p>
<h3 id="For_i3_users">For i3 users</h3>
<p><code>~/.config/i3/config</code>:</p>
<pre style="background-color:#212733;color:#ccc9c2;"><code><span># Launch wtwitch in rofi
</span><span>bindsym $mod+Shift+d exec --no-startup-id "rofi -modi wtwitch:/home/yarmo/.local/bin/rofi_wtwitch.sh -show wtwitch"
</span></code></pre>
<p>Replace with your own keybinding.</p>
The Burden of Christmas, exemplified2021-12-19T13:00:16+00:002021-12-19T13:00:16+00:00
Unknown
https://yarmo.eu/blog/burden-of-christmas-exemplified/<h2 id="Introducing_CB">Introducing CB</h2>
<p>Something happened yesterday which subsequently consumed a significant portion of my attention for the rest of the day and I decided to try and put into words what happened, what I thought, how I reacted and how I should have reacted.</p>
<p>While this post is the consequence of an interaction with a friend I'll nickname Carte Bleue (she'll understand =D) or CB for short, anything I say here is in no way a critique of her. This rant is directed towards Christmas and its commercialization. This interaction could have happened with anyone. It is my privilege to be able to consider CB a friend in my life and there is nothing I would do to jeopardize that.</p>
<h2 id="The_Jar_of_Cookies">The Jar of Cookies</h2>
<p>For no particular reason, I baked cookies yesterday. Dutch kruidnoten (<a href="https://en.wikipedia.org/wiki/Kruidnoten">wikipedia</a>) to be precise. I bought most of the spices I needed back in October and missed the date to bake the cookies on the 5th of December as I had planned (due to lacking a mortar and pestle and a few more spices).</p>
<p>So yesterday, unsure what to do with my day, I went out, treated myself to a mortar and pestle, purchased a few more spices and baking soda and got baking.</p>
<p>While the dough was in the fridge doing whatever it is dough does in a fridge, I remembered CB was dropping by later in the afternoon for a minute or less to drop something off. The proverbial lightbulb ignited.</p>
<p>I found an old glass jar, cleaned it, filled it with fresh-out-of-the-oven kruidnoten, wrote "Joyeux Noël" with a smiley on a small piece of laid paper, attached the note to the jar using some cotton string and called it a day. It looked cute AF :)</p>
<p>Time passed. The buzzer rang. I opened the door. There was CB. I handed over the jar of cookies.</p>
<p>After an expression of gratitude, she said…</p>
<p>"Well, now I have nothing for you."</p>
<p>Cue the rant.</p>
<h2 id="Hypocrisy_much">Hypocrisy much</h2>
<p>Again, this is in no way a critique of CB. Anyone I know would have reacted the same way. Hell, I would have reacted that way! So yeah, everything I am about to say applies to myself as much as it does to anybody else.</p>
<h2 id="Newton's_Third_Law_of_Christmas_Gifting">Newton's Third Law of Christmas Gifting</h2>
<blockquote>
<p>For any Christmas gift gifted, an opposite gift is expected.<br />
— Isaac Newton (somewhat paraphrased)</p>
</blockquote>
<p>This law that governs modern-age Christmas rituals is what I and many others call the Burden of Christmas.</p>
<p>How is it possible that I cannot give a jar of cookies without giving a feeling of guilt in one and the same gesture? I did not expect a gift in return. In fact, <a href="https://yarmo.eu/blog/2020s-christmas-gift-pledge/">I'd rather not receive gifts at all</a>.</p>
<h2 id="Human_nature?">Human nature?</h2>
<p>You might counter with the following argument: it's just human nature. We don't like debts. Living in a society means giving and taking. A good-natured person wouldn't want to take without giving back.</p>
<p>Yeah, I get that, but hear me out. Had I handed over the jar in August or any other month that does not start with "D" and end with "ecember", none of this would have happened. CB gave me a jar of a homemade product a year back, I did not feel an urge to immediately return the gesture.</p>
<p>What is it about Christmas that completely changes our perspective and our primary response to receiving a gift?</p>
<p>I don't know for sure. If you have interesting reading material on the subject, please <a href="https://yarmo.eu/contact/">do share with me</a>, I'm genuinely curious.</p>
<h2 id="Guilt_flows_both_ways">Guilt flows both ways</h2>
<p>So I intended to give a jar of cookies and instead I handed over a burden.</p>
<p>Could I not have foreseen that?</p>
<p>Well, as a matter of fact, I did. There was a moment in the process yesterday when I strongly considered not giving the jar and waiting until January. Because there was a chance she would express guilt for not having something to return on the spot. Because if that happened, I would feel guilty for putting her in that position. And I wanted none of that. I just had a surplus of cookies.</p>
<p>Damn you, Jar of Cookies, source of all evil!</p>
<p>Seriously though, this is messed up.</p>
<p>I'm certain many people don't have these issues. They know each other well enough to look through this reflective and hazardous haze and see the things for what they truly are.</p>
<p>But I suspect that many among us do have to contend with the Burden of Christmas throughout December. Or November. Or whenever it is the Christmas shitshow starts these days.</p>
<p>Or maybe not and I am just weird. Very plausible too.</p>
<h2 id="Game_theory_applied_to_Christmas">Game theory applied to Christmas</h2>
<p>This whole story strongly reminds me of the Prisoner's Dilemma (<a href="https://en.wikipedia.org/wiki/Prisoner%27s_dilemma">wikipedia</a>) and, according to the dozens of online articles I just found (<a href="https://duckduckgo.com/?q=christmas+prisoner+dilemma">DuckDuckGo search results</a>), it does so for many other people too.</p>
<p>Since we don't usually communicate our intention to prepare gifts for others, how are they supposed to know if they should prepare something in return? After all, you don't want to be that person that doesn't gift back, or so society tells us. So it's in your best interest to prepare a gift for the largest number of people, just in case. Which leads to all those people facing the same dilemma. And so on.</p>
<p>Huh, I wonder if the marketing industry has caught on to this phenomenon and would use it to their advantage. <a href="https://www.urbandictionary.com/define.php?term=%2FS">/S</a></p>
<h2 id="My_reaction">My reaction</h2>
<p>So how did I react?</p>
<p>Since putting her on the spot put me on the spot, I don't remember with 100% certainty, but I think I chuckled and said "That's okay!".</p>
<p>O agony, thee malignant incubus!</p>
<p>"<em>That's okay</em>"???</p>
<p>This implies a gift in return would indeed have been a desirable outcome, but "hey, it's okay, I forgive you". That was not my intended message!</p>
<p>So in the blog version of the "esprit de l'escalier" (<a href="https://en.wikipedia.org/wiki/L%27esprit_de_l%27escalier">wikipedia</a>), what should I have said?</p>
<p>I am not sure. Anything along the lines of "well, that was not the intention" may come across as disdainful, as in "I wasn't expecting you to give something in return".</p>
<p>I am most certainly overthinking at this point. Any response would probably be acceptable as long as you point out that it was not your intention to give them the Burden of Christmas.</p>
<p>Perhaps this blog post is also a half decent (if delayed) response. If I could have condensed the entirety of this post into a 30-second monologue, that might have done the trick.</p>
<h2 id="To_CB">To CB</h2>
<p>I am not sure yet if I'll send a link to this to CB. I am quite conscious of my tendency to overthink and assuming (or hoping) that others don't.</p>
<p>But if I did, well, hey there CB :)</p>
<p>Hope you enjoyed the cookies! They were a silly little thing I just wanted to give you. For fun. Made me feel happy to prepare the jar. I tried painting a message on it but that miserably failed :)</p>
<p>I don't want anything in return, even though it's December. If you know me well enough, you'll understand.</p>
<p>And if that still doesn't alleviate the Burden of Christmas, let's organize a fun movie night again. As of January. After all the madness has died down.</p>
My 2020s Christmas Gift Pledge2021-12-16T11:22:57+00:002021-12-16T11:22:57+00:00
Unknown
https://yarmo.eu/blog/2020s-christmas-gift-pledge/<h2 id="Christmas_and_consumerism">Christmas and consumerism</h2>
<p>Christmas in the 2020s = consumerism. You know it, I know it. I am not even going to give an introduction on the subject. Even the Pope has condemned the commercialization of Christmas.</p>
<p>So, you're either here because you want to read the blog post, or you haven't received a Christmas gift from me that lives up to our society's norms, asked me "what gives?" and I sent you here.</p>
<h2 id="Me_the_Consumer">Me the Consumer</h2>
<p>I live in a consumer society, ergo I am a consumer. Be that as it may, I am not your typical consumer.</p>
<p>I turn off the radio when ads come on. I look away from a TV screen during ad breaks. If I still see an ad for a product somewhere, I am instantly less prone to using it or recommending it.</p>
<p>If I see a single billboard promoting some TV show, I'll instantly know the people behind it don't have enough faith that the show would organically grow an audience and thus feel the need to shove it down our throats instead. I'll pass on said show.</p>
<p>I don't use Amazon. I condemn its use.</p>
<p>I try and support local to the best of my abilities.</p>
<p>Given all these idiosyncrasies and more, you do not expect me to do Christmas like a well-behaved consumer, do you?</p>
<h2 id="Doing_things_differently">Doing things differently</h2>
<p>So I want to write a pledge, a set of rules to live by and guide me through the yearly burden that is Christmas, so that whenever I am in doubt about a gift-related decision, I can just read the pledge and know what to do.</p>
<p>I want to make the pledge last a decade so that I won't stray from my convictions and my rejection of consumerism. I can abandon the pledge in the future but that would require me to write a new one, forcing me to re-evaluate everything in the current pledge.</p>
<p>I also don't expect Christmas to become less commercial any time soon so I'll need this pledge for the foreseeable future anyway.</p>
<p>I want to make my pledge public so that I can send people to it whenever my gift-related decisions are being questioned.</p>
<h2 id="Message_to_Future_Me">Message to Future Me</h2>
<p>Future Me, if you're here because peer pressure is causing you to doubt the pledge, don't. Just don't.<br />
— 2021 Me</p>
<h2 id="The_Pledge_on_Giving_Gifts">The Pledge on Giving Gifts</h2>
<p>My gifts will not be bought if possible. I will attempt to make something personalized, thoughtful and unique using my hands. It can be big, small, silly, helpful.</p>
<p>My gifts are disposable. Receiving a gift is a burden, therefore no hard feelings if you decide to get rid of it.</p>
<p>My gifts can be ephemeral. I'd rather show you my appreciation of our personal connection by spending quality time together than giving you something. An experience is worth more than an object.</p>
<p>The rules above aren't always suitable or desirable depending on the receiver. In case a gift is bought:</p>
<p>My gifts will not contain plastic. If unavoidable, it will contain minimal plastic wrapping. Objects made entirely of plastic are a no-no. The only exception: custom-made items containing plastic from a 3D printer (while still plastic, it wasn't mass-produced).</p>
<p>My gifts will not be tech, especially surveillance technology. The only exception: something I built myself using an Arduino or a single-board computer like a Raspberry Pi.</p>
<p>My gifts will never be obtained through Amazon. Some underpaid Amazon worker would have needed to work extra shifts and pee in a bottle to make sure the gift arrives on time. No gift of mine will ever require corporation-organized slavery to be given. Or, for that matter, any slavery of any kind.</p>
<h2 id="The_Pledge_on_Receiving_Gifts">The Pledge on Receiving Gifts</h2>
<p>I do not wish to receive bought gifts. Let's hang out for a bit and do something we both enjoy. If you truly feel an urge to give something tangible, write a poem, make a drawing, bake cookies, print out a picture of us and put it in a (non-plastic) frame.</p>
<p>Evidently, this isn't always a realistic expectation. So, a few more guidelines:</p>
<p>I will most likely be polite and accept a bought gift but please do not take it as a given that I will use the gift or, for that matter, keep it. It will depend. One of my ambitions this decade is to live smaller and possess less. Receiving stuff I don't want or need is counterproductive to said ambition.</p>
<p>An exception: surveillance technology. Any gift made by Big Tech that contains microphones, cameras or other tracking capabilities will be unceremoniously rejected on the spot.</p>
<p>Another exception: anything that goes against any of my convictions. Think objects made entirely out of plastic, objects that could cause pollution or waste of resources, objects made through corporation-organized slavery.</p>
<p>If a gift was obtained through Amazon, you and I are going to have a chat.</p>
<h2 id="Conclusion">Conclusion</h2>
<p>Hope you enjoyed the read and—if you know me personally—better understand how I think about Christmas and act when it comes to gifts.</p>
Keyoxide Project Update #52021-06-29T14:52:10+00:002021-06-29T14:52:10+00:00
Unknown
https://yarmo.eu/blog/keyoxide-project-update-5/<p>An update for all.</p>
<h2 id="Accessibility">Accessibility</h2>
<p>The latest 3.1.0 release of <a href="https://codeberg.org/keyoxide/keyoxide-web">keyoxide-web</a> greatly improves accessibility
and ensures that it works nicely together with screen readers.</p>
<p>To make sure the implementation of accessibility features was as thorough as possible, it was first run through a series
of automated tests, namely <a href="https://web.dev/measure/">Lighthouse</a> and <a href="https://wave.webaim.org/">WAVE</a>, which gave
Keyoxide a <strong>100% accessibility score</strong> and <strong>0 accessibility errors</strong>, respectively, on every page.</p>
<p>While automated tests are a decent start, nothing beats feedback from the actual target audience: good ol' human beings.</p>
<p>After <a href="https://fosstodon.org/@keyoxide/106380848176122986">posting a message</a> on the Keyoxide fediverse account
(<a href="https://fosstodon.org/@keyoxide">keyoxide@fosstodon.org</a>) to call for help from people who use accessibility tools
like screen readers, I received plenty of feedback about little quirks that went undetected by the automated tests.
These were all addressed and fixed.</p>
<p>So I can now gladly confirm that the Keyoxide website should be <strong>WAI-AA</strong> compliant, meaning all text has a contrast
ratio higher than 4.5:1, all links and images are appropriately labeled for screen readers and even the profile pages
can be navigated by keyboard alone.</p>
<p>I once again thank the people who provided the invaluable feedback, without whom the result of my efforts would
have proven insufficient.</p>
<p>If you find more quirks and/or annoyances, please do file an issue on the
<a href="https://codeberg.org/keyoxide/keyoxide-web/issues">code repository</a> so it can be fixed as quickly as possible.</p>
<h2 id="#keyoxide_on_IRC">#keyoxide on IRC</h2>
<p>Keyoxide was just about to request a channel on freenode when sadly, well, <em>that</em> happened.</p>
<p>So it is with delight that I can now invite you all to our <strong>#keyoxide</strong> channel on the great
<a href="https://libera.chat/">libera.chat</a> network. In addition to our Matrix room, this is one more place where we can hang
out and discuss the future of identity on the internet. And many other things.</p>
<p>And yes, of course I have already proven my identity on IRC using the
<a href="https://keyoxide.org/guides/irc">IRC guide on Keyoxide</a>.</p>
<h2 id="Signing_off">Signing off</h2>
<p>For all your questions and suggestions, be sure to join the conversation in the
<a href="https://matrix.to/#/#keyoxide:matrix.org">Keyoxide matrix room</a> or the #keyoxide channel on
<a href="https://libera.chat/">libera.chat</a>. Or raise an issue on <a href="https://codeberg.org/keyoxide/">Codeberg.org</a>.
All contributions (including PRs!) are welcome.</p>
<p>As always, the source code is available at the <a href="https://codeberg.org/keyoxide/keyoxide-web">Codeberg.org repo</a>.</p>
<p>All work on Keyoxide is possible thanks to donations, the project stands against VC funding. If you feel like Keyoxide
is a step in the right direction for netizens worldwide, please <a href="https://liberapay.com/Keyoxide/">become a patron</a> and
help the project do its part in the global fight against the internet corporations.</p>
<p>Until next time,<br />
Yarmo</p>
Keyoxide Project Update #42021-05-04T17:52:10+00:002021-05-04T17:52:10+00:00
Unknown
https://yarmo.eu/blog/keyoxide-project-update-4/<p>The update I have been looking forward to for months.</p>
<h2 id="Keyoxide_3.0.0">Keyoxide 3.0.0</h2>
<p>Every day I work on Keyoxide, I learn more and gain a deeper understanding of how powerful this decentralized,
OpenPGP-based identity verification can actually be. And as we are nearing the first anniversary of the Keyoxide project,
I realized all the new ideas and major improvements—most suggested by the community—were being held back by
the previous implementation of the code, restricted by my earlier understanding and imagination.</p>
<p>It was time not for a number of superficial additions and fixes, but for a big overhaul of the core code which would
then cause a chain reaction of bugs to be fixed and features to be added or improved.</p>
<p>To illustrate what is new in this version, here's my
<a href="https://keyoxide.org/9f0048ac0b23301e1f77e994909f6bd6f80f485d">Keyoxide profile</a>.</p>
<h3 id="Visuals">Visuals</h3>
<p>OK, let's start superficial, though. Keyoxide 3.0.0 has a shiny new look. I hope you will agree with me that that
was much needed. The previous design of the website was made before I even implemented the concept of decentralized
proofs.</p>
<p>The new and cleaner design has eliminated most of the clutter and puts all the emphasis on what is important: the
identity claims.</p>
<h3 id="Server_side_rendering">Server side rendering</h3>
<p>Thanks to the class-based approach of the <a href="https://codeberg.org/keyoxide/doipjs/">doip.js library (version 0.12.*)</a>,
Keyoxide will now do most of the mundane work on the server and let the browser finish the process of identity
verification. So who does exactly what now?</p>
<ul>
<li>The server will try and find the public key associated with the profile to be generated</li>
<li>The server will parse the identity claims stored inside the public key</li>
<li>The server will match the identity claims to the known library of service providers</li>
<li>The server will render the profile page, including the yet-to-be-verified identity claims, and send it to the browser</li>
<li>The browser will parse the yet-to-be-verified identity claims and verify them</li>
</ul>
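<p>As a rough illustration of the claim-matching step (the function and pattern list below are hypothetical and much simplified, not the actual doip.js API), the server essentially tests each claim URI against a library of service-provider patterns:</p>

```javascript
// Hypothetical sketch of claim matching. The real service-provider library
// in doip.js is far larger; these two entries are just for illustration.
const serviceProviders = [
  { name: "github", pattern: /^https:\/\/gist\.github\.com\/(.+)\/(.+)$/ },
  { name: "dns", pattern: /^dns:(.+)$/ },
];

// Return the names of all providers whose pattern matches the claim URI
function matchClaim(uri) {
  return serviceProviders
    .filter((sp) => sp.pattern.test(uri))
    .map((sp) => sp.name);
}

console.log(matchClaim("https://gist.github.com/alice/abc123")); // → ["github"]
console.log(matchClaim("dns:doip.rocks")); // → ["dns"]
```

The candidates found here are what the server embeds in the rendered profile page for the browser to verify.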
<p>Not only is the website now much faster to load, but the browser will also verify the identity claims in parallel! No more
waiting for that one slow identity to verify before showing the result of all the other identity verifications.</p>
<h3 id="rel="me"">rel="me"</h3>
<p>The wait is finally over! Server-side rendering means that Mastodon instances can now detect the <strong>rel="me"</strong> links
on Keyoxide profile pages and will reward you with a green tick for every Keyoxide profile you link to in your
Mastodon bio!</p>
<p>Here's an example: <a href="https://fosstodon.org/@keyoxide">@keyoxide@fosstodon.org</a>. So satisfying!</p>
<p>Yes, this means Keyoxide can now do and be as much as "any other" identity provider on Mastodon. By just using basic
web technology. Without requiring special server protocols. And no VC-funded companies needed.</p>
<p>Small web truly is beautiful, isn't it? (quote from <a href="https://small-tech.org/">Small Tech Foundation</a>)</p>
<h3 id="A_claim_failed,_what_does_that_mean?">A claim failed, what does that mean?</h3>
<p>The issue of a claim failing to verify is actually more complex than it seems, and something that the previous versions
of Keyoxide did not handle very elegantly.</p>
<p>As an example, let us claim to be Alice on Github. If it fails, it could either mean that we made a mistake somewhere,
or we are attempting to impersonate Alice—the very thing Keyoxide is designed to detect and prevent.</p>
<p>In this case, it's simple: the claim <code>https://gist.github.com/Alice/...</code> could only reference Github so the story ends
here.</p>
<p>But what if we wanted to verify <code>https://alice.tld/apps/live</code>? From the looks of it, it could be an
<a href="https://owncast.online/">Owncast</a> server, but that is just a guess.</p>
<p>When this claim fails to verify, does it fail because that Owncast server is not mine (impersonation) or because it
wasn't actually an Owncast server? This URL could also very well lead to a repo on a <a href="https://gitea.io/">Gitea</a> server.</p>
<p>And what if it also fails to verify as a Gitea account? Was it one of them that genuinely failed, or neither of them?</p>
<p>Keyoxide 3.0.0 now recognizes "ambiguity" in URLs and acts accordingly. Does a claim with an unambiguous URL (like
Github) fail? Keyoxide will let the visitor know the claim genuinely failed. Did a claim with an ambiguous URL fail?
Then Keyoxide will show a message letting the visitor know that it wasn't sure what the claim was meant to be but
regardless, it failed to verify.</p>
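<p>A minimal sketch of that decision logic (illustrative only, not the actual keyoxide-web code) could look like this, taking the list of candidate service providers that matched the claim URI:</p>

```javascript
// Hypothetical helper: phrase a verification failure differently
// depending on whether the claim URI was ambiguous.
function describeFailure(candidates) {
  if (candidates.length === 1) {
    // Unambiguous URI (e.g. a GitHub gist): the claim itself genuinely failed
    return `Claim genuinely failed to verify as a ${candidates[0]} account.`;
  }
  // Ambiguous URI (e.g. Owncast vs Gitea): we only know that none of the
  // candidate interpretations verified
  return `Ambiguous claim (could be ${candidates.join(" or ")}); none of the candidates verified.`;
}

console.log(describeFailure(["github"]));
console.log(describeFailure(["owncast", "gitea"]));
```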
<h2 id="Future_improvements">Future improvements</h2>
<p>Keyoxide 3.0.0 brings a few tweaks, but again, the biggest change is the overhaul of the core code. This will allow
a bunch more improvements to be made soon with relative ease. Here's an overview of what is in the pipeline.</p>
<h3 id="Requirement_of_JavaScript">Requirement of JavaScript</h3>
<p>Previous versions of Keyoxide said "the browser must do everything". This meant that JavaScript had to be enabled in order
for Keyoxide to be able to do anything at all.</p>
<p>As stated above, Keyoxide 3.0.0 now only lets the browser do the very last step of the whole process but this still
means JavaScript is required. However, it is not difficult to imagine now that the server could do everything and just
send the finished profile page to the browser.</p>
<p>In a future version of Keyoxide, visitors who have JavaScript disabled and do not mind waiting for up to fifteen seconds
(due to some claims taking more time to verify) will be able to request a fully server-side rendered profile page.</p>
<h3 id="a11y_and_i18n">a11y and i18n</h3>
<p>With more work being done server-side, it becomes simpler to implement internationalisation and render the website in
different languages.</p>
<p>Also, with the pages themselves becoming less dynamic, decent accessibility is simpler to achieve and currently has
the highest priority.</p>
<h2 id="Signing_off">Signing off</h2>
<p>That's about it for today. This update marks a big change that will greatly benefit future versions of Keyoxide. I can't
wait to start working on the next developments and share them with you as they come along.</p>
<p>As always, the source code is available at the <a href="https://codeberg.org/keyoxide/keyoxide-web">Codeberg.org repo</a>
(now renamed to <strong>keyoxide-web</strong>).</p>
<p>For all your questions and suggestions, be sure to join the conversation in the
<a href="https://matrix.to/#/#keyoxide:matrix.org">Keyoxide matrix room</a> or raise an issue on
<a href="https://codeberg.org/keyoxide/">Codeberg.org</a>. All contributions are welcome!</p>
<p>Until next time.</p>
A personal transformation2021-05-02T15:19:55+00:002021-05-02T15:19:55+00:00
Unknown
https://yarmo.eu/blog/personal-transformation/<p>I'd like to share a few insights I gained during the last few months.</p>
<h2 id="What_happened">What happened</h2>
<p>First, a bit of background. For a while now, I have been feeling unproductive and unsure how to improve the situation.</p>
<p>That's it, really. Whether it's adapting to a post-academia lifestyle or learning to work from home, I knew I was doing
something wrong. I could spend days waiting for my head to clear up, then start working again to finally notice after
a few days that my head was foggy once more.</p>
<p>The frustrating part was that I knew how to be productive. For a little over four years, academia pushed me up to (and
beyond) the limit of my mental capacity for working.</p>
<p>I knew the post-academia recovery was still ongoing, but I felt that didn't explain the whole picture. I lacked
understanding of what was going on, so I started making a few changes of my own and educating myself. The Dunning-Kruger
effect was not going to keep me in my state of ignorance.</p>
<h2 id="My_own_changes">My own changes</h2>
<p>I picked up intensive bullet journaling again. While this saved my mental health during my PhD, it did little for my
post-PhD predicament. Apparently, the problem was deeper than simply freeing my head from the burden of remembering
stuff.</p>
<p>I picked up more hobbies but found myself more often feeling guilty of giving in to those hobbies rather than spending
time on what I wanted to do: being productive.</p>
<p>It was also during this phase of exploring possible actions that I formulated a wish: I wanted to go abroad for a month
or two to a quiet location, take nothing but a bunch of clothes and my Thinkpad X201i and just start hammering away on
the keys. To my frustration, I could not provide arguments as to why I was yearning for this. I got no further than
"fully reset my lifestyle, go back to basic, build a new method and take those lessons back home". It sounds reasonable
but, well, knowing what I know now…</p>
<p>I scouted for a few locations where I already knew some people but a wish it remained. The pandemic. Safety above all.</p>
<p>And in the midst of fearing my situation was lightyears away from improving, I stumbled upon two little words on some
small personal blog, a drop of wisdom easily overlooked in a vast knowledge-overloaded internet.</p>
<h2 id="Educating_myself">Educating myself</h2>
<p>I had started reading a few books but so far none had resonated with me—none, until that one book. I was reading a blog post when I got puzzled by a couple of words (paraphrasing because I lost the link to the post):</p>
<blockquote>
<p>[...] compatible with the concept of <strong>deep work</strong>, [...]</p>
</blockquote>
<p>Thank all the deities that it was precisely those two words my brain decided to fixate on! A quick non-Google
search later and I found the book the words were referring to:
<strong>Deep Work: Rules for Focused Success in a Distracted World</strong> by Cal Newport. A DRM-free purchase later and I started
my read.</p>
<p>It was after barely a few pages that I decided not to go the same route as with the other books, reading a chapter
every day and steadily progressing through it—I am generally a very slow reader. No, instinct told me to stop all the
work I was doing and completely focus all my attention on this book.</p>
<p>The reason for this was simple: the book almost directly begins with talking about people choosing to travel to remote
locations to help them become productive. I suddenly realized this book might actually articulate the arguments I
couldn't when I was planning my wishful trip.</p>
<p>It still took me a couple of days, spending time in between chapters to think and re-reading a few chapters. But I came
out of the isolation a changed person.</p>
<h2 id="Deep_Work">Deep Work</h2>
<p>Let's make something clear, I am not here to sell you this book. Just as the other books didn't work for me, this one might
not work for you. I am not here to tell you how to fix your productivity, I am simply stating a few steps that have worked for me
so far.</p>
<p>The book goes on to explain how important it is to work without distractions. And, as the book correctly predicted, my
reaction was:</p>
<blockquote>
<p>Well, that doesn't really apply to me.</p>
</blockquote>
<p>A statement which I somewhat still stand by. When I worked, I did not have my phone with me. Depending on the work I was
doing, I would put on some music or a stream, but would turn that off if my brain needed the extra focus. So, I was
already golden, right?</p>
<p>Not only could I improve my handling of distractions and planning of work, but the book also filled in some large gaps in my
understanding of how work works. I will point out two of those wisdom potholes that the book has generously filled.</p>
<h2 id="Types_of_work">Types of work</h2>
<p>Not all work is created equal. Somehow, I had never stopped to consider this and use this factoid to my advantage. While
I often found myself torn between programming and responding to issues, I never considered that these two activities are not
equal and that, therefore, I should not have to choose between them. There is a time for programming (deep work) and there is a time
for responding to issues (shallow work). Shallow work is not "dumb work", it just requires my head to be in a different
state than it needs to be when programming.</p>
<h2 id="Quantity_of_deep_work_hours">Quantity of deep work hours</h2>
<p>My biggest revelation, the one that truly convinced me of the logic put forth in the book and the one that triggered all
that followed was this (paraphrasing for brevity):</p>
<blockquote>
<p>The most productive people in the world have roughly <strong>four hours</strong> of deep work every day, rarely more.</p>
</blockquote>
<p>This, I did not understand. I immediately started mentioning it to people around me and they said "yeah, makes sense".
I refused the notion that this made sense. How do you get stuff done with only four hours?</p>
<h2 id="My_take-away">My take-away</h2>
<p>Anyway, long story short: not all work is created equal, so be mindful of the deep work. Once I got these concepts
in my head, my situation improved drastically within days. </p>
<p>Out with the old "as long as I am not tired, I can work a little longer" and in with the careful planning of my days around
deep work hours.</p>
<p>I will sit down every morning with my bullet journal, draw a timeline and start planning an ideal day around the work
that needs to happen, making sure that both the shallow work gets dedicated time and the deep work hours are spaced
with sufficient breaks. I get annoyed when I find myself working a few minutes longer and stealing precious minutes of
mental rest.</p>
<p>I have even ignored my planning a few times to force work ahead of a deadline, only to notice the next day
I was feeling significantly more foggy-headed and unable to work deep. Bad me! At least, I now know what went wrong and
how I can improve it.</p>
<h2 id="Note_about_distractions">Note about distractions</h2>
<p>To deal with distractions, I took two drastic steps right after finishing the book's last page: no more fediverse, and
only use messaging apps between 9:00-10:00 and 17:00-18:00. This helped tremendously in my quest for productivity!</p>
<p>With regards to messaging apps: I still roughly follow this pattern. On resting days, I will allow a bit more
"connected" time (helpful with family across multiple countries).</p>
<p>With regards to fediverse: I have noticed that it's a part of my life that I am beginning to miss, because
(most of the time) it generated a pleasant distraction and even connection. I have had truly meaningful interactions on
the fediverse. I have even received messages from a few concerned netizens that noticed and were worried about my sudden
online disappearance, something that I have appreciated a bunch! So now, I am slowly returning to the fediverse, aware
of the negative side of distraction, and whole-heartedly embracing its benefits.</p>
<p>Thanks for reading, here's to hoping it may help another wandering soul.</p>
Keyoxide Project Update #32021-03-09T16:00:00+00:002021-03-09T16:00:00+00:00
Unknown
https://yarmo.eu/blog/keyoxide-project-update-3/<p>Two months since the last update. Two great new additions for this one.</p>
<h2 id="Keyoxide_2.5.0">Keyoxide 2.5.0</h2>
<p>The latest version of Keyoxide's web client has two very neat additions, all
thanks to the latest <a href="https://codeberg.org/keyoxide/doipjs">0.11.*</a> release of
<a href="https://js.doip.rocks">doip.js</a>: the verification of accounts on the IRC and Matrix
platforms.</p>
<p>This really is what Keyoxide was designed to do: making sure your online
correspondence is exchanged with the intended person or entity, even as both
parties use anonymous accounts with varying usernames on different platforms.</p>
<p>To this end, it was important to integrate additional communication platforms,
to join the already supported <a href="https://keyoxide.org/guides/xmpp">XMPP</a> protocol.</p>
<p>Given their popularity and decentralized nature, IRC and Matrix
were both prime candidates. So, let's see what goes into proving identities on
IRC and Matrix.</p>
<p>(All examples below use a certain OpenPGP fingerprint and a fictional user, make
sure to use your own data when replicating the steps!)</p>
<h2 id="Proving_IRC_identity_with_taxonomy">Proving IRC identity with taxonomy</h2>
<p>What is taxonomy within the context of IRC?</p>
<pre style="background-color:#212733;color:#ccc9c2;"><code><span>>> /msg NickServ help TAXONOMY
</span><span>
</span><span>***** NickServ Help *****
</span><span>Help for TAXONOMY:
</span><span>
</span><span>The taxonomy command lists metadata information associated
</span><span>with registered users.
</span><span>
</span><span>Examples:
</span><span> /msg NickServ TAXONOMY foo
</span><span>***** End of Help *****
</span></code></pre>
<p>Taxonomy is metadata information, much like the vCard data for XMPP. Let us use
this to our advantage:</p>
<pre style="background-color:#212733;color:#ccc9c2;"><code><span>>> /msg NickServ SET PROPERTY KEY openpgp4fpr:3637202523e7c1309ab79e99ef2dc5827b445f4b
</span><span>
</span><span>Metadata entry KEY added.
</span><span>
</span><span>>> /msg NickServ TAXONOMY foo
</span><span>
</span><span>Taxonomy for foo:
</span><span>KEY : openpgp4fpr:3637202523e7c1309ab79e99ef2dc5827b445f4b
</span><span>End of foo taxonomy.
</span></code></pre>
<p>And there you have it: one-directional linking from IRC to an OpenPGP key. Now,
to make that bidirectional, add the following notation to your key:</p>
<pre style="background-color:#212733;color:#ccc9c2;"><code><span>proof@metacode.biz=irc://chat.freenode.net/foo
</span></code></pre>
<p>Et voilà, foo has now cryptographically proven that that IRC nickname is theirs.</p>
<p>All steps above are explained with more detail in the
<a href="https://keyoxide.org/guides/irc">IRC guide</a>.</p>
<p>It is important to note that IRC is the slowest identity to verify to date. As
IRC servers lack API endpoints to query the taxonomy metadata (maybe one
day? ^_^), Keyoxide has to log into the IRC server like any other client,
send a message to NickServ, parse the response and log out again.</p>
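<p>A hedged sketch of the parsing part of that process: once the verifier has logged in and messaged NickServ, it only needs to fish the fingerprint out of the raw reply (the function name and regex below are illustrative, not the doip.js implementation):</p>

```javascript
// Hypothetical helper: extract an OpenPGP fingerprint from a raw
// NickServ TAXONOMY reply. Entries look like:
//   KEY : openpgp4fpr:<40 hex characters>
function parseTaxonomy(reply) {
  for (const line of reply.split("\n")) {
    const match = line.match(/^KEY\s*:\s*openpgp4fpr:([0-9a-f]{40})$/i);
    if (match) return match[1];
  }
  return null; // no openpgp4fpr entry found
}

const reply = [
  "Taxonomy for foo:",
  "KEY : openpgp4fpr:3637202523e7c1309ab79e99ef2dc5827b445f4b",
  "End of foo taxonomy.",
].join("\n");
console.log(parseTaxonomy(reply)); // → "3637202523e7c1309ab79e99ef2dc5827b445f4b"
```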
<h2 id="Proving_Matrix_identity_with...">Proving Matrix identity with...</h2>
<p><a href="https://matrix.org/">Matrix</a> is an excellent decentralized communication
platform, but sadly, for identity verification purposes, it completely lacks
any form of customisable metadata. A shortcoming we can work with, but one which
might also understandably deter some from proving Matrix identities.</p>
<p>One simply sends a message to a public room with the following content:</p>
<pre style="background-color:#212733;color:#ccc9c2;"><code><span>[Verifying my OpenPGP key: openpgp4fpr:3637202523e7c1309ab79e99ef2dc5827b445f4b]
</span></code></pre>
<p>Easy enough. The issue is: Keyoxide can only read messages in a public room via
a Matrix account that already has access to said public room.</p>
<p>Ergo, we can't just use any room, we'll all have to use the same room.</p>
<p>A dedicated room named
<a href="https://matrix.to/#/#doipver:matrix.org">#doipver:matrix.org</a> has been created
for the very purpose of receiving Matrix identity proofs. Simply join the room
and send the message with your own OpenPGP fingerprint.</p>
<p>By viewing the source of the message, you get the data needed to generate the
identity claim to be stored inside your OpenPGP key: the <code>room_id</code> (shared by
everyone) and the <code>event_id</code> (unique to your proof). The notation will look
like this:</p>
<pre style="background-color:#212733;color:#ccc9c2;"><code><span>proof@metacode.biz=matrix:u/@foo:matrix.org?org.keyoxide.r=!dBfQZxCoGVmSTujfiv:matrix.org&org.keyoxide.e=$3dVX1nv3lmwnKxc0mgto_Sf-REVr45Z6G7LWLWal10w
</span></code></pre>
<p>That is quite an unwieldy notation, but one designed to follow the
<a href="https://github.com/matrix-org/matrix-doc/pull/2312">MSC2312 Matrix URI scheme proposal</a>.</p>
<p>Please refer to the <a href="https://keyoxide.org/guides/matrix">Matrix guide</a> for
detailed instructions on how to verify your own Matrix identity.</p>
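<p>For illustration, here is how the <code>room_id</code> and <code>event_id</code> could be recovered from such a notation with plain string parsing (a hypothetical helper, not the doip.js implementation):</p>

```javascript
// Hypothetical helper: split a Keyoxide Matrix proof notation into its
// room_id and event_id query parameters.
function parseMatrixProof(notation) {
  const [, query] = notation.split("?");
  const params = {};
  for (const pair of query.split("&")) {
    const [key, value] = pair.split("=");
    params[key] = value;
  }
  return {
    roomId: params["org.keyoxide.r"], // shared by everyone (#doipver:matrix.org)
    eventId: params["org.keyoxide.e"], // unique to your proof message
  };
}

const proof =
  "matrix:u/@foo:matrix.org?org.keyoxide.r=!dBfQZxCoGVmSTujfiv:matrix.org&org.keyoxide.e=$3dVX1nv3lmwnKxc0mgto_Sf-REVr45Z6G7LWLWal10w";
console.log(parseMatrixProof(proof));
```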
<h2 id="Signing_off">Signing off</h2>
<p>As the project now supports multiple communications platforms and its
versatility increases with each update, I am confident that Keyoxide is now
ready for the next phase: focus on the user experience. Nothing to show just
yet, but I am sure the next project update will have interesting announcements
related to this.</p>
<p>For all your questions and suggestions, be sure to join the conversation in the
<a href="https://matrix.to/#/#keyoxide:matrix.org">Keyoxide matrix room</a>.</p>
<p>Until next time.</p>
Keyoxide Project Update #22021-01-11T16:30:00+00:002021-01-11T16:30:00+00:00
Unknown
https://yarmo.eu/blog/keyoxide-project-update-2/<p>A prosperous 2021 to all. Let's dive into some Keyoxide news.</p>
<h2 id="Signature_profiles">Signature profiles</h2>
<p>The Keyoxide web client just got updated to <a href="https://codeberg.org/keyoxide/web/releases/tag/2.4.0">2.4.0</a> which
introduced a few minor bug fixes as well as a robots.txt and noindex meta tags.</p>
<p>The most exciting new feature in this release is the support for "signature profiles", a new way of creating
decentralized profiles that is both simpler to generate and solves a few drawbacks that come with the traditional
method of storing identity claims as notations in cryptographic keys.</p>
<p>From the newly added <a href="https://keyoxide.org/guides/signature-profiles">signature profiles guide</a>:</p>
<blockquote>
<p>Storing claims inside the key as notations is a powerful method. Wherever the public key goes, so go the identity claims. This allows one to use the existing vast network of key sharing tools to also share these identity claims.</p>
<p>There are drawbacks to this: you lose granularity. You cannot pick and choose the claims you want to send to certain people or use for certain purposes. There is also the possibility that notations in keys could be scraped as the keys are publicly available.</p>
<p>Putting (certain) claims in a signature profile solves both drawbacks. You can choose which claims to be associated with each other and you can choose which persons can see this by only sending it to them. You can even encrypt the signature profile! Since the signature profile is not publicly available (unless you make it so), there is no possibility to scrape the contents of it.</p>
<p>Note that there is one catch: the person you send it to could publish it. Only send claims you wish to keep secret to people you trust!</p>
</blockquote>
<h3 id="What_does_a_signature_profile_look_like?">What does a signature profile look like?</h3>
<p>Here's an example:</p>
<pre style="background-color:#212733;color:#ccc9c2;"><code><span>-----BEGIN PGP SIGNED MESSAGE-----
</span><span>Hash: SHA512
</span><span>
</span><span>Hey there! Here's a signature profile with proofs related to the DOIP project (https://doip.rocks).
</span><span>
</span><span>Verify this profile at https://keyoxide.org/sig
</span><span>
</span><span>proof=dns:doip.rocks
</span><span>proof=https://fosstodon.org/@keyoxide
</span><span>-----BEGIN PGP SIGNATURE-----
</span><span>
</span><span>iQHEBAEBCgAuFiEENjcgJSPnwTCat56Z7y3FgntEX0sFAl/7L0MQHHRlc3RAZG9p
</span><span>cC5yb2NrcwAKCRDvLcWCe0RfS3iYC/0QQqz2lzSNrkApdIN9OJFfd/sP2qeGr/uH
</span><span>98YHa+ucwBxer6yrAaTYYuBJg1uyzdxQhqF2jWno7FwN4crnj15AN5XGemjpmqat
</span><span>py9wG6vCVjC81q/BWMIMZ7RJ/m8F8Kz556xHiU8KbqLNDqFVcT35/PhJsw71XVCI
</span><span>N3HgrgD7CY/vIsZ3WIH7mne3q9O7X4TJQtFoZZ/l9lKj7qk3LrSFnL6q+JxUr2Im
</span><span>xfYZKaSz6lmLf+vfPc59JuQtV1z0HSNDQkpKEjmLeIlc+ZNAdSQRjkfi+UDK7eKV
</span><span>KGOlkcslroJO6rT3ruqx9L3hHtrM8dKQFgtRSaofB51HCyhNzmipbBHnLnKQrcf6
</span><span>o8nn9OkP7F9NfbBE6xYIUCkgnv1lQbzeXsLLVuEKMW8bvZOmI7jTcthqnwzEIHj/
</span><span>G4p+zPGgO+6Pzuhn47fxH+QZ0KPA8o2vx0DvOkZT6HEqG+EqpIoC/a7wD68n789c
</span><span>K2NLCVb9oIGarPfhIdPV3QbrA5eXRRQ=
</span><span>=QyNy
</span><span>-----END PGP SIGNATURE-----
</span></code></pre>
<p>I only wrote the four lines after <code>Hash: SHA512</code>! The rest is generated by an OpenPGP-compatible library.</p>
<p>The first two lines are meant for humans. They state my intent with this signature profile and give an
instruction to whoever receives it.</p>
<p>The remaining two lines are my identity claims. They follow a specific syntax that Keyoxide and any other service using
<a href="https://doip.rocks">doip.js</a> can interpret.</p>
<p>The text around it is the signature. It makes the message both provably, beyond doubt, written by yours truly, and
tamper-evident. Try it in the next step: change any single character in the text and verification will fail. This ensures that no bad actor
could intercept my signature on its way to you and modify its contents.</p>
<h3 id="What_to_do_with_it?">What to do with it?</h3>
<p>When put into <a href="https://keyoxide.org/sig">keyoxide.org/sig</a>, the website will perform two verifications.</p>
<p>First, is the signature valid? Has the text been tampered with? If the signature is valid, a so-called 'fingerprint'
is extracted from it and displayed. Ideally, I will have already given you my fingerprint through some other channel. This ensures that you
didn't simply get a signature from someone else pretending to be me.</p>
<p>The fingerprint of the key that I used for the signature above is <code>3637202523e7c1309ab79e99ef2dc5827b445f4b</code>.</p>
<p>The second step is the verification of the identity claims. After all, I could write a perfectly valid signature profile filled with
absurd and incorrect identity claims! We don't want that.</p>
<p>The fingerprint we just extracted from the signature is now used to verify these identity claims. For example, the first
claim (doip.rocks) will have a DNS record with that value. And the second claim (fosstodon.org/@keyoxide) has the
fingerprint in the bio section of the account.</p>
<p>Both should verify. This allows you to say:</p>
<blockquote>
<p>Whoever signed this profile, holds the doip.rocks domain name and the fosstodon.org/@keyoxide account.</p>
</blockquote>
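<p>To make the DNS part of that check concrete, here is a minimal sketch in Node.js of what a verifier does with the extracted fingerprint. The <code>openpgp4fpr:</code> proof string is the format Keyoxide looks for in TXT records; the function name and the mocked records are illustrative, and the real implementation lives in <a href="https://js.doip.rocks">doip.js</a>.</p>

```javascript
// Sketch of a DNS identity-claim check (illustrative; doip.js does the
// real work). `records` stands in for the result of a TXT lookup, e.g.
// require("dns").promises.resolveTxt("doip.rocks").
function dnsClaimVerified(records, fingerprint) {
  // A DNS claim verifies when any TXT record contains the proof string
  // "openpgp4fpr:<fingerprint>".
  const proof = `openpgp4fpr:${fingerprint.toLowerCase()}`;
  return records.flat().some((txt) => txt.toLowerCase().includes(proof));
}

// Mocked records for the doip.rocks example:
const records = [
  ["v=spf1 -all"],
  ["openpgp4fpr:3637202523e7c1309ab79e99ef2dc5827b445f4b"],
];
console.log(dnsClaimVerified(records, "3637202523E7C1309AB79E99EF2DC5827B445F4B"));
// -> true
```

The same idea carries over to the fosstodon.org claim: fetch the account's bio and look for the fingerprint in its text.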
<h3 id="Granular_and_non-scrapable_identity_claims">Granular and non-scrapable identity claims</h3>
<p>There you have it. Identity claims that can be sent granularly (you pick and choose which to include) and are
non-scrapable (signature profiles are not publicly available).</p>
<p>And if you wish to go a step further, you can even encrypt the signature profile to make sure it can't be read while in
transit to the intended recipient.</p>
<p>Happy signing!</p>
<h2 id="doip.js_0.9.0">doip.js 0.9.0</h2>
<p><a href="https://codeberg.org/keyoxide/doipjs/releases/tag/0.9.0">Release 0.9.0</a> of <a href="https://js.doip.rocks">doip.js</a> introduced
support for the verification of signature profiles. In fact, Keyoxide simply relies on doip.js for all identity
verifications. This makes it possible for new projects to get started quickly with fully decentralized identity
verification and always have the same feature set that Keyoxide has.</p>
<p>This is the way.</p>
<h2 id="Signing_off">Signing off</h2>
<p>Hope you enjoy the signature profiles. Do not hesitate to get in touch for questions, comments or suggestions. There's
a <a href="https://matrix.to/#/#keyoxide:matrix.org">Keyoxide matrix room</a> as well as a
<a href="https://lists.sr.ht/~yarmo/keyoxide-devel">mailing list</a>.</p>
<p>Until next time.</p>
Keyoxide CLI released (2020-12-08T16:30:00+00:00)
https://yarmo.eu/blog/keyoxide-cli-released/
<p>Five months ago when I made <a href="https://keyoxide.org">keyoxide.org</a> public, one
specific request made by quite a few people stood out: we need the ability to
perform the identity verification locally. And given that it was a quite
technical crowd, this meant: we need a command-line interface (CLI).</p>
<h2 id="The_command-line_interface">The command-line interface</h2>
<p>Today, I'm pleased to announce the release of the CLI. Written in Node.js and
published on <a href="https://codeberg.org/keyoxide/cli">Codeberg</a> under the
<a href="https://codeberg.org/keyoxide/cli/src/branch/main/LICENSE">AGPL-v3.0-or-later</a>
license, the Keyoxide CLI uses the recently released
<a href="https://js.doip.rocks">doip.js</a> library and does all the things the Keyoxide
website does, but locally. This means you no longer need to trust the website of
the Keyoxide instance you are using, its maintainer or anything in between.</p>
<p>Your machine fetches the keys, parses them locally and then directly requests
the identity proofs from the service providers to verify the identity
claims. Here's a quick tour.</p>
<p>Assuming you already have Node.js installed, first install the CLI:</p>
<pre data-lang="bash" style="background-color:#212733;color:#ccc9c2;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#ffd580;">npm</span><span> install</span><span style="color:#ffcc66;"> -g</span><span> keyoxide
</span></code></pre>
<p>Then go and verify the identity proofs inside a cryptographic key! To get
started, try out the key I use for testing:</p>
<pre data-lang="bash" style="background-color:#212733;color:#ccc9c2;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#ffd580;">keyoxide</span><span> verify hkp:test@doip.rocks
</span></code></pre>
<p>You should now get the following result:</p>
<pre style="background-color:#212733;color:#ccc9c2;"><code><span>Verification results:
</span><span>Yarmo Mackenbach (material for test frameworks) &lt;test@doip.rocks&gt;
</span><span> ✓ doip.rocks (dns)
</span></code></pre>
<p>And there you have it! Whoever generated this key verifiably owns the
<a href="https://doip.rocks">doip.rocks</a> domain name.</p>
<p>Of course, the CLI can also fetch keys using WKD or get them from Keybase. More
information about these protocols is available on the
<a href="https://keyoxide.org/">Keyoxide</a> website and in the source code's
<a href="https://codeberg.org/keyoxide/cli">readme</a> document.</p>
<h2 id="FOSS_FTW">FOSS FTW</h2>
<p>As always, this project is fully open source and I welcome all criticism and
contributions, both issues and PRs. We all stand to benefit from solutions that
are built for and by the people. As governments worldwide push for cryptographic
backdoors, let us all keep using and promoting free and open software.</p>
<p>Many thanks to <a href="https://nlnet.nl">NLnet</a> for supporting me on this journey and
allowing me to focus on these projects while keeping them free from VC funding
and other means of monetization that could compromise the privacy of the
individual.</p>
<p>If you value my efforts and would like to donate, it's possible to do so on the
project's <a href="https://liberapay.com/Keyoxide/">Liberapay</a> page. Cheers and I'll see
you in the next Keyoxide project update post!</p>
Keyoxide Project Update #1 (2020-11-09T14:00:00+00:00)
https://yarmo.eu/blog/keyoxide-project-update-1/
<p>Time for the first big Keyoxide project update! A lot to cover, so let's get to
it.</p>
<h2 id="The_Big_Identity">The Big Identity</h2>
<p>Decentralized identity is coming. <a href="https://www.w3.org/TR/did-core/">DIDs</a> are
coming. Awesome libraries like <a href="https://idx.xyz/">IDX</a> are being published. Even
<a href="https://www.microsoft.com/en-us/security/business/identity/own-your-identity">Microsoft</a>
seems on board.</p>
<p>My point is this: decentralized identity is an exciting field to be working on
right now and I'm committed to keep learning about this domain and its technologies,
and to contribute to curing our digital society of its parasitic tech giants.
Keyoxide and the response it generated showed me this is within the realm of the
possible.</p>
<h2 id="A_wild_foundation_appears!">A wild foundation appears!</h2>
<p>To help me achieve this, I've decided to set up a foundation. Please welcome the
<a href="https://keytoidentity.foundation">Key to Identity Foundation</a>! The foundation
will serve as an umbrella for a couple of identity-related projects to come,
which I will be glad to share more about as they progress. In fact, one of these
new projects is included in this update :)</p>
<p>Having a non-profit foundation also allows me to try and fully sustain the
project on donations and grants. I truly believe this model will help the
project and give it the best chance at making a significant change out there.</p>
<p>And as it turns out, I am not the only one who wants to see that happen…</p>
<h2 id="NLnet_grant_for_Keyoxide_development">NLnet grant for Keyoxide development</h2>
<p>The awesome people over at <a href="https://nlnet.nl/">NLnet</a> have taken a good look at
the current status of the Keyoxide project, my plans for its future and it is
now my pleasure to announce they have decided to award me an
<a href="https://nlnet.nl/NGI0/">NGI Zero grant</a> and fund the development!</p>
<p>I couldn't be more excited about this news. Keyoxide generated a lot of positive
feedback and ideas on how to improve it when it launched. Getting the
possibility to work on it full-time and build on the aspects that were important
to the community is a dream come true.</p>
<p>More information available on the
<a href="https://nlnet.nl/project/Keyoxide/">NLnet website</a>.</p>
<h2 id="Keyoxide_endgame">Keyoxide endgame</h2>
<p>I would like to expand on a point that is dear to me. Today's internet is in its
precarious state because we put faith in monopolistic forces that blossomed
under a lack of competition. The endgame of this endeavor is not just to create
a successful project. It is to build an ecosystem that will thrive on
competition and ultimately deliver the best experience for netizens.</p>
<p>It is with this in mind that I am releasing a new project today that should help
new projects get started in the decentralized identity world.</p>
<h2 id="doip.js">doip.js</h2>
<p><strong>DOIP</strong> stands for Decentralized OpenPGP Identity Proofs, the technology that
enables the identity verification that Keyoxide performs.</p>
<p><strong>doip.js</strong> is a Node.js library that enables any project to perform the same
tricks. It is even able to run directly in the browser!</p>
<p>What excites me most is that any contribution, like supporting new service
providers, is immediately available to all those projects and websites, not just
Keyoxide.</p>
<p>Documentation is available at <a href="https://js.doip.rocks/#/">doip.rocks</a>.</p>
<p>Code is licensed under Apache 2.0 and hosted by
<a href="https://codeberg.org/keyoxide/doipjs">Codeberg.org</a>.</p>
<h2 id="Building_a_community">Building a community</h2>
<p>Keyoxide now has a Matrix room, come hang out and discuss related topics!</p>
<p>Invite link: <a href="https://matrix.to/#/#keyoxide:matrix.org">#keyoxide:matrix.org</a></p>
<h2 id="The_road_ahead">The road ahead</h2>
<p>We are just getting started here. A lot still needs to happen to make Keyoxide
and OpenPGP-based decentralized identity practical and useful for a larger
audience. With the NLnet grant and my newly-acquired ability to turn this
project into a full-time job, I foresee a bright future.</p>
<p>Hope to see you back for the next update!</p>
The Post-MomentJS Era (2020-09-15T09:46:54+00:00)
https://yarmo.eu/blog/post-momentjs-era/
<h2 id="The_Post-MomentJS_Era">The Post-MomentJS Era</h2>
<p>According to their <a href="https://momentjs.com/docs/#/-project-status/">own documentation</a>, new projects should no longer use <a href="https://momentjs.com">MomentJS</a>, mentioning its hefty size and its outdated architecture as the principal reasons behind this statement.</p>
<p>Although there are new libraries that they do recommend, we also have a different solution nowadays: no library.</p>
<p>Using ECMAScript <a href="https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Intl">Intl</a>, we can go a very long way formatting dates without dependencies.</p>
<pre data-lang="js" style="background-color:#212733;color:#ccc9c2;" class="language-js "><code class="language-js" data-lang="js"><span style="font-style:italic;color:#5ccfe6;">Intl</span><span style="color:#f29e74;">.</span><span style="color:#ffd580;">DateTimeFormat</span><span>(</span><span style="color:#bae67e;">"en"</span><span style="color:#ccc9c2cc;">, </span><span>{
</span><span> year</span><span style="color:#ccc9c2cc;">: </span><span style="color:#bae67e;">"numeric"</span><span style="color:#ccc9c2cc;">,
</span><span> month</span><span style="color:#ccc9c2cc;">: </span><span style="color:#bae67e;">"long"</span><span style="color:#ccc9c2cc;">,
</span><span> day</span><span style="color:#ccc9c2cc;">: </span><span style="color:#bae67e;">"numeric"</span><span style="color:#ccc9c2cc;">,
</span><span> hour</span><span style="color:#ccc9c2cc;">: </span><span style="color:#bae67e;">"numeric"</span><span style="color:#ccc9c2cc;">,
</span><span> minute</span><span style="color:#ccc9c2cc;">: </span><span style="color:#bae67e;">"numeric"</span><span style="color:#ccc9c2cc;">,
</span><span> hour12</span><span style="color:#ccc9c2cc;">: </span><span style="color:#ffcc66;">false</span><span style="color:#ccc9c2cc;">,
</span><span> timeZone</span><span style="color:#ccc9c2cc;">: </span><span style="color:#bae67e;">"CET"
</span><span>})</span><span style="color:#f29e74;">.</span><span style="color:#ffd580;">format</span><span>(</span><span style="color:#f29e74;">new </span><span style="color:#73d0ff;">Date</span><span>())</span><span style="color:#ccc9c2cc;">;
</span><span style="font-style:italic;color:#5c6773;">// -> September 15, 2020, 09:41
</span></code></pre>
<pre data-lang="js" style="background-color:#212733;color:#ccc9c2;" class="language-js "><code class="language-js" data-lang="js"><span style="font-style:italic;color:#5ccfe6;">Intl</span><span style="color:#f29e74;">.</span><span style="color:#ffd580;">DateTimeFormat</span><span>(navigator</span><span style="color:#f29e74;">.</span><span>language</span><span style="color:#ccc9c2cc;">, </span><span>{
</span><span> year</span><span style="color:#ccc9c2cc;">: </span><span style="color:#bae67e;">"numeric"</span><span style="color:#ccc9c2cc;">,
</span><span> month</span><span style="color:#ccc9c2cc;">: </span><span style="color:#bae67e;">"long"</span><span style="color:#ccc9c2cc;">,
</span><span> day</span><span style="color:#ccc9c2cc;">: </span><span style="color:#bae67e;">"numeric"</span><span style="color:#ccc9c2cc;">,
</span><span> hour</span><span style="color:#ccc9c2cc;">: </span><span style="color:#bae67e;">"numeric"</span><span style="color:#ccc9c2cc;">,
</span><span> minute</span><span style="color:#ccc9c2cc;">: </span><span style="color:#bae67e;">"numeric"</span><span style="color:#ccc9c2cc;">,
</span><span> hour12</span><span style="color:#ccc9c2cc;">: </span><span style="color:#ffcc66;">false</span><span style="color:#ccc9c2cc;">,
</span><span> timeZone</span><span style="color:#ccc9c2cc;">: </span><span style="color:#bae67e;">"CET"
</span><span>})</span><span style="color:#f29e74;">.</span><span style="color:#ffd580;">format</span><span>(</span><span style="color:#f29e74;">new </span><span style="color:#73d0ff;">Date</span><span>())</span><span style="color:#ccc9c2cc;">;
</span><span style="font-style:italic;color:#5c6773;">// -> 15 september 2020 09:41
</span><span style="font-style:italic;color:#5c6773;">// In dutch!
</span></code></pre>
<p>ISO formatting with <code>Intl</code> is tricky. But we don't need it.</p>
<pre data-lang="js" style="background-color:#212733;color:#ccc9c2;" class="language-js "><code class="language-js" data-lang="js"><span style="color:#f29e74;">new </span><span style="color:#73d0ff;">Date</span><span>()</span><span style="color:#f29e74;">.</span><span style="color:#ffd580;">toISOString</span><span>()</span><span style="color:#ccc9c2cc;">;
</span><span style="font-style:italic;color:#5c6773;">// -> 2020-09-15T07:41:41.148Z
</span></code></pre>
<pre data-lang="js" style="background-color:#212733;color:#ccc9c2;" class="language-js "><code class="language-js" data-lang="js"><span style="color:#f29e74;">new </span><span style="color:#73d0ff;">Date</span><span>()</span><span style="color:#f29e74;">.</span><span style="color:#ffd580;">toISOString</span><span>()</span><span style="color:#f29e74;">.</span><span style="color:#f28779;">split</span><span>(</span><span style="color:#bae67e;">"T"</span><span>)[</span><span style="color:#ffcc66;">0</span><span>]</span><span style="color:#ccc9c2cc;">;
</span><span style="font-style:italic;color:#5c6773;">// -> 2020-09-15
</span></code></pre>
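<p>Relative times ("3 days ago"), another common MomentJS use case, are also covered natively, via <code>Intl.RelativeTimeFormat</code>:</p>

```javascript
// Relative time formatting without a library.
// Intl.RelativeTimeFormat is available in evergreen browsers and Node.js 12+.
const rtf = new Intl.RelativeTimeFormat("en", { numeric: "auto" });

rtf.format(-3, "day");  // -> "3 days ago"
rtf.format(2, "hour");  // -> "in 2 hours"
rtf.format(-1, "day");  // -> "yesterday" (thanks to numeric: "auto")
```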
<p>Have a look at the <a href="https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Intl">MDN docs</a> for more information.</p>
<p>Even though <code>Intl</code> and <code>Date</code> are viable options, the MomentJS developers recommend a few libraries to help with some browser inconsistencies. Make sure to <a href="https://momentjs.com/docs/#/-project-status/">read the MomentJS post</a> for all the pros and cons.</p>
Flipper Zero and their "Limited" pledges (2020-07-31T11:13:30+00:00)
https://yarmo.eu/blog/flipper-zero-limited/
<h2 id="The_Flipper_Zero_project">The Flipper Zero project</h2>
<p>I'm not going to lie, Flipper Zero sounds like a cool project for hackers. Here's a <a href="https://flipperzero.one/zero">link to their website</a> which will lead you to their Kickstarter page.</p>
<h2 id="What_is_going_on_on_Kickstarter?">What is going on on Kickstarter?</h2>
<p>Something extremely scummy is going on right now! Have a look:</p>
<p><img src="/img/blog/kickstarted_counter__1a.png" alt="Flipper Zero Kickstarter" /></p>
<p>Looking good, lots of stuff to read, let's take our time.</p>
<p><img src="/img/blog/kickstarted_counter__1b.png" alt="Flipper Zero Kickstarter" /></p>
<p>My word, they're almost out of Early Birds! Please, for the love of god, if you want to save some money, pledge now, only 9 left and it clearly says "Limited"!</p>
<h3 id="One_minute_later">One minute later</h3>
<p><img src="/img/blog/kickstarted_counter__2.png" alt="Flipper Zero Kickstarter" /></p>
<p>A person has just pledged! Where's my credit card?</p>
<h3 id="Another_minute_later">Another minute later</h3>
<p><img src="/img/blog/kickstarted_counter__3.png" alt="Flipper Zero Kickstarter" /></p>
<p>Wait, 9 left? Someone bailed? Doesn't matter, I need this!</p>
<h3 id="Yet_another_minute_later">Yet another minute later</h3>
<p><img src="/img/blog/kickstarted_counter__4.png" alt="Flipper Zero Kickstarter" /></p>
<p>Wait, what?</p>
<h3 id="And_it_goes_on">And it goes on</h3>
<p><img src="/img/blog/kickstarted_counter__5.png" alt="Flipper Zero Kickstarter" /></p>
<h3 id="And_on">And on</h3>
<p><img src="/img/blog/kickstarted_counter__6.png" alt="Flipper Zero Kickstarter" /></p>
<h2 id="This_needs_to_stop">This needs to stop</h2>
<p>Well, you get the point. Flipper Zero has someone continuously adding more "Limited" pledges to perpetually give the impression they are almost out of "Early Bird" kits.</p>
<p>That's extremely deceptive and manipulative behavior and should not be tolerated. This needs to stop right now.</p>
Keyoxide 1.0.0: switched to AGPL-v3 (2020-07-30T12:48:24+00:00)
https://yarmo.eu/blog/keyoxide-agpl/
<h2 id="The_big_1.0.0">The big 1.0.0</h2>
<p>Well, yes but no. It's actually a small update but with a MAJOR (get it? Because <a href="https://semver.org/">semver</a>) change: the project has switched to the <a href="https://www.gnu.org/licenses/agpl-3.0.en.html">AGPL-3.0-or-later</a> license.</p>
<p>When I started the <a href="https://keyoxide.org">Keyoxide</a> project, it didn't have the scope and ambitions it has now. What began as a tool to bring simple PGP operations directly to the user's browser—a side project like many others—has turned into a full-blown solution to prove online identity in a decentralized manner.</p>
<p>The project has also seen quite a warm welcome among the tech-savvy and privacy-minded as a partial replacement for alternatives like Keybase. More importantly, the project has started receiving contributions from other people. From that point on, as was pointed out to me by <a href="https://social.tchncs.de/@t0k">@t0k@social.tchncs.de</a>, a permissive license like I was using before will no longer do.</p>
<p>A copyleft license like <a href="https://www.gnu.org/licenses/agpl-3.0.en.html">AGPL-3.0-or-later</a> is much better suited to protect the project and its contributors from getting the source code—including everyone's contributions—turned into a closed-source clone. Keyoxide is for the online citizenry and will remain so.</p>
<h2 id="Why_1.0.0?">Why 1.0.0?</h2>
<p>Usually, the "big 1.0" is associated with a project coming out of a beta period or more generally, becoming a product that users can use without excessive bugs. This is not the case here.</p>
<p>The versioning of this project adheres to <a href="https://semver.org/">semver</a>: MAJOR-MINOR-PATCH. A license change such as this one might put certain people or organizations off from using it (it shouldn't… but it might) and could therefore be considered a breaking change which, according to semver, triggers a MAJOR release.</p>
<p>Hence 1.0.0.</p>
Traefik: managing both HTTP and HTTPS connections separately (2020-07-28T00:00:00+00:00)
https://yarmo.eu/blog/traefik-http-https/
<h2 id="Introduction">Introduction</h2>
<p>If you have used the wonderful <a href="https://containo.us/traefik/">Traefik</a> before to route all your traffic to different docker containers or other services, you'll know how easy it is to add SSL certificates and secure HTTPS connections, simple and free thanks to their integration with <a href="https://letsencrypt.org/">Let's Encrypt</a>.</p>
<p>You will first need the following lines in the <code>traefik.toml</code> file:</p>
<pre data-lang="toml" style="background-color:#212733;color:#ccc9c2;" class="language-toml "><code class="language-toml" data-lang="toml"><span>[</span><span style="color:#73d0ff;">entryPoints</span><span>]
</span><span> [</span><span style="color:#73d0ff;">entryPoints</span><span style="color:#ccc9c2cc;">.</span><span style="color:#73d0ff;">web</span><span>]
</span><span> </span><span style="color:#73d0ff;">address </span><span>= </span><span style="color:#bae67e;">":80"
</span><span> [</span><span style="color:#73d0ff;">entryPoints</span><span style="color:#ccc9c2cc;">.</span><span style="color:#73d0ff;">web</span><span style="color:#ccc9c2cc;">.</span><span style="color:#73d0ff;">http</span><span>]
</span><span> [</span><span style="color:#73d0ff;">entryPoints</span><span style="color:#ccc9c2cc;">.</span><span style="color:#73d0ff;">web</span><span style="color:#ccc9c2cc;">.</span><span style="color:#73d0ff;">http</span><span style="color:#ccc9c2cc;">.</span><span style="color:#73d0ff;">redirections</span><span>]
</span><span> [</span><span style="color:#73d0ff;">entryPoints</span><span style="color:#ccc9c2cc;">.</span><span style="color:#73d0ff;">web</span><span style="color:#ccc9c2cc;">.</span><span style="color:#73d0ff;">http</span><span style="color:#ccc9c2cc;">.</span><span style="color:#73d0ff;">redirections</span><span style="color:#ccc9c2cc;">.</span><span style="color:#73d0ff;">entryPoint</span><span>]
</span><span> </span><span style="color:#73d0ff;">to </span><span>= </span><span style="color:#bae67e;">"websecure"
</span><span> </span><span style="color:#73d0ff;">scheme </span><span>= </span><span style="color:#bae67e;">"https"
</span><span>
</span><span> [</span><span style="color:#73d0ff;">entryPoints</span><span style="color:#ccc9c2cc;">.</span><span style="color:#73d0ff;">websecure</span><span>]
</span><span> </span><span style="color:#73d0ff;">address </span><span>= </span><span style="color:#bae67e;">":443"
</span></code></pre>
<p>To learn more about this setup using <code>traefik.yml</code> or the CLI, please check out the <a href="https://docs.traefik.io/https/acme/">documentation</a>.</p>
<p>What these lines do is make sure that Traefik listens to the correct ports, <code>80</code> for HTTP connections (or <code>web</code>) and <code>443</code> for HTTPS connections (or <code>websecure</code>). Because we only want secure connections coming from the internet, Traefik is instructed to always redirect <code>web</code> entrypoint connections to the <code>websecure</code> entrypoint. Easy and safe!</p>
<p>Now, let us get an SSL certificate and secure those HTTPS connections:</p>
<pre data-lang="toml" style="background-color:#212733;color:#ccc9c2;" class="language-toml "><code class="language-toml" data-lang="toml"><span>[</span><span style="color:#73d0ff;">certificatesResolvers</span><span style="color:#ccc9c2cc;">.</span><span style="color:#73d0ff;">myresolver</span><span style="color:#ccc9c2cc;">.</span><span style="color:#73d0ff;">acme</span><span>]
</span><span> </span><span style="color:#73d0ff;">email </span><span>= </span><span style="color:#bae67e;">"test@example.com"
</span><span> </span><span style="color:#73d0ff;">storage </span><span>= </span><span style="color:#bae67e;">"acme.json"
</span><span> [</span><span style="color:#73d0ff;">certificatesResolvers</span><span style="color:#ccc9c2cc;">.</span><span style="color:#73d0ff;">myresolver</span><span style="color:#ccc9c2cc;">.</span><span style="color:#73d0ff;">acme</span><span style="color:#ccc9c2cc;">.</span><span style="color:#73d0ff;">httpChallenge</span><span>]
</span><span> </span><span style="color:#73d0ff;">entryPoint </span><span>= </span><span style="color:#bae67e;">"web"
</span></code></pre>
<p>Setting up the <code>certificatesResolvers</code> entry is only half the process; now you need to make sure your docker container uses it by appending a few labels to <code>docker-compose.yml</code>:</p>
<pre data-lang="toml" style="background-color:#212733;color:#ccc9c2;" class="language-toml "><code class="language-toml" data-lang="toml"><span style="color:#73d0ff;">mycontainer</span><span style="color:#ff3333;">:
</span><span> </span><span style="color:#73d0ff;">image</span><span style="color:#ff3333;">: someimage
</span><span> </span><span style="color:#73d0ff;">labels</span><span style="color:#ff3333;">:
</span><span> </span><span style="color:#73d0ff;">- traefik</span><span style="color:#ccc9c2cc;">.</span><span style="color:#73d0ff;">http</span><span style="color:#ccc9c2cc;">.</span><span style="color:#73d0ff;">routers</span><span style="color:#ccc9c2cc;">.</span><span style="color:#73d0ff;">router0</span><span style="color:#ccc9c2cc;">.</span><span style="color:#73d0ff;">rule</span><span>=</span><span style="color:#ff3333;">Host(`example.com`)
</span><span> </span><span style="color:#73d0ff;">- traefik</span><span style="color:#ccc9c2cc;">.</span><span style="color:#73d0ff;">http</span><span style="color:#ccc9c2cc;">.</span><span style="color:#73d0ff;">routers</span><span style="color:#ccc9c2cc;">.</span><span style="color:#73d0ff;">router0</span><span style="color:#ccc9c2cc;">.</span><span style="color:#73d0ff;">tls</span><span>=</span><span style="color:#ffcc66;">true
</span><span> </span><span style="color:#73d0ff;">- traefik</span><span style="color:#ccc9c2cc;">.</span><span style="color:#73d0ff;">http</span><span style="color:#ccc9c2cc;">.</span><span style="color:#73d0ff;">routers</span><span style="color:#ccc9c2cc;">.</span><span style="color:#73d0ff;">router0</span><span style="color:#ccc9c2cc;">.</span><span style="color:#73d0ff;">tls</span><span style="color:#ccc9c2cc;">.</span><span style="color:#73d0ff;">certresolver</span><span>=</span><span style="color:#ff3333;">myresolver
</span></code></pre>
<p>To learn more about this setup using swarm, kubernetes or rancher, please check out the <a href="https://docs.traefik.io/https/acme/">documentation</a>.</p>
<p>You don't often get so much functionality out of a few lines of code. The service provided by <code>mycontainer</code>, whether it's a <a href="https://hub.docker.com/_/nextcloud">Nextcloud</a> container, a <a href="https://hub.docker.com/r/wallabag/wallabag">Wallabag</a> container or any other service you might enjoy using, is now available at example.com over HTTPS, and Traefik handles certificate renewal for you, ensuring a secure connection now and in the future.</p>
<p>Once you start hosting 20, 30, maybe 40 containers and making them accessible to yourself, to friends and family or the entire world through the internet, Traefik really starts to make your self-hosting life a whole lot more enjoyable.</p>
<h2 id="The_problem:_home_connections">The problem: home connections</h2>
<p>There's one use-case where the "HTTPS everything" credo becomes counter-productive: services that should only be accessible via the home network.</p>
<p>In my home, I monitor a lot of services and devices using <a href="https://www.influxdata.com/time-series-platform/telegraf/">Telegraf</a> which sends the metrics to an <a href="https://www.influxdata.com/">InfluxDB</a> database to be analyzed by <a href="https://grafana.com/">Grafana</a>, a setup also known as the <a href="https://hackernoon.com/monitor-your-infrastructure-with-tig-stack-b63971a15ccf">TIG stack</a>. All these services are hosted on a server running inside my home.</p>
<p>The thing is, I don't need access to my Grafana dashboards outside my home. In fact, I don't even <em>want</em> those dashboards and the data they display available on the internet. So, I can just make up a domain that doesn't exist, say "grafana.lan", ensure that my home's DNS resolver (this could be your router or, in my case, <a href="https://pi-hole.net/">PiHole</a>) sends any <code>grafana.lan</code> requests to my server instead of to the internet and just make my Grafana container respond to <code>grafana.lan</code> requests using Traefik labels:</p>
<pre data-lang="yaml" style="background-color:#212733;color:#ccc9c2;" class="language-yaml "><code class="language-yaml" data-lang="yaml"><span>grafana:
  image: grafana/grafana
  labels:
    - traefik.http.routers.grafana.rule=Host(`grafana.lan`)
    - traefik.http.services.grafana.loadbalancer.server.port=3000
</span></code></pre>
<p>I added a <code>server.port</code> label to make sure Traefik forwards all requests to the port Grafana actually listens on, in this case <code>3000</code>.</p>
<p>I also removed the TLS-related labels, since I won't be needing HTTPS: inside my home, I can simply visit <code>http://grafana.lan</code>, because the traffic never leaves my local network for the dangerous internet outside.</p>
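<p>As a concrete sketch of the DNS side (the IP address and file path below are made-up placeholders, adjust them to your network): with any dnsmasq-based resolver, which includes Pi-hole, a single configuration line is enough to answer all <code>grafana.lan</code> lookups with your server's address.</p>
<pre style="background-color:#212733;color:#ccc9c2;"><code><span># e.g. /etc/dnsmasq.d/10-lan.conf (path is an assumption)
# Answer every query for grafana.lan with the server's LAN IP
address=/grafana.lan/192.168.1.10
</span></code></pre>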
<p>Simple, right?</p>
<p>Well, no. We have a problem, because this setup won't work.</p>
<h2 id="Redirect_HTTP_to_HTTPS">Redirect HTTP to HTTPS</h2>
<p>Remember the first lines we added to <code>traefik.toml</code>? The ones that redirected all HTTP requests to the HTTPS entrypoint? Well, every time we try to visit <code>http://grafana.lan</code>, it will be redirected to <code>https://grafana.lan</code>. In the best-case scenario, your browser will display a few flashy warnings that you are about to enter a website over an unsecured connection, with a hidden button that allows you to proceed anyway. In the worst-case scenario, your browser will not let you anywhere near your own container because of the certificate problem.</p>
<p>So, the problem is that there is no certificate for <code>grafana.lan</code>? Well, simple! Let's just add back the TLS labels and have a secure connection within the home!</p>
<p>Again, no can do. A domain needs to exist and be reachable from the internet for an SSL certificate to be issued. Since neither criterion applies to our <code>grafana.lan</code>, we can't get an SSL certificate.</p>
<p>Luckily, there is a solution.</p>
<h2 id="The_solution:_domain-specific_HTTP_redirection">The solution: domain-specific HTTP redirection</h2>
<p>What we need to do is instruct Traefik to not redirect <strong>all</strong> HTTP connections to HTTPS, but <strong>only</strong> those that we can access from the internet. The connections that stay in the house should remain HTTP connections.</p>
<p>Remember these lines?</p>
<pre data-lang="toml" style="background-color:#212733;color:#ccc9c2;" class="language-toml "><code class="language-toml" data-lang="toml"><span>[</span><span style="color:#73d0ff;">entryPoints</span><span style="color:#ccc9c2cc;">.</span><span style="color:#73d0ff;">web</span><span style="color:#ccc9c2cc;">.</span><span style="color:#73d0ff;">http</span><span>]
</span><span> [</span><span style="color:#73d0ff;">entryPoints</span><span style="color:#ccc9c2cc;">.</span><span style="color:#73d0ff;">web</span><span style="color:#ccc9c2cc;">.</span><span style="color:#73d0ff;">http</span><span style="color:#ccc9c2cc;">.</span><span style="color:#73d0ff;">redirections</span><span>]
</span><span> [</span><span style="color:#73d0ff;">entryPoints</span><span style="color:#ccc9c2cc;">.</span><span style="color:#73d0ff;">web</span><span style="color:#ccc9c2cc;">.</span><span style="color:#73d0ff;">http</span><span style="color:#ccc9c2cc;">.</span><span style="color:#73d0ff;">redirections</span><span style="color:#ccc9c2cc;">.</span><span style="color:#73d0ff;">entryPoint</span><span>]
</span><span> </span><span style="color:#73d0ff;">to </span><span>= </span><span style="color:#bae67e;">"websecure"
</span><span> </span><span style="color:#73d0ff;">scheme </span><span>= </span><span style="color:#bae67e;">"https"
</span></code></pre>
<p>These need to be removed from <code>traefik.toml</code> so that you keep the basic entrypoint definitions:</p>
<pre data-lang="toml" style="background-color:#212733;color:#ccc9c2;" class="language-toml "><code class="language-toml" data-lang="toml"><span>[</span><span style="color:#73d0ff;">entryPoints</span><span>]
</span><span> [</span><span style="color:#73d0ff;">entryPoints</span><span style="color:#ccc9c2cc;">.</span><span style="color:#73d0ff;">web</span><span>]
</span><span> </span><span style="color:#73d0ff;">address </span><span>= </span><span style="color:#bae67e;">":80"
</span><span>
</span><span> [</span><span style="color:#73d0ff;">entryPoints</span><span style="color:#ccc9c2cc;">.</span><span style="color:#73d0ff;">websecure</span><span>]
</span><span> </span><span style="color:#73d0ff;">address </span><span>= </span><span style="color:#bae67e;">":443"
</span></code></pre>
<p>Next, we need to add a so-called <code>dynamic file</code> configuration. Add the following lines to the <code>[providers]</code> section in <code>traefik.toml</code>:</p>
<pre data-lang="toml" style="background-color:#212733;color:#ccc9c2;" class="language-toml "><code class="language-toml" data-lang="toml"><span>[</span><span style="color:#73d0ff;">providers</span><span style="color:#ccc9c2cc;">.</span><span style="color:#73d0ff;">file</span><span>]
</span><span> </span><span style="color:#73d0ff;">filename </span><span>= </span><span style="color:#bae67e;">"/path/to/traefik_dynamic.toml"
</span><span> </span><span style="color:#73d0ff;">watch </span><span>= </span><span style="color:#ffcc66;">true
</span></code></pre>
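<p>One practical note, assuming a containerized setup like the rest of this post: if Traefik itself runs in a container, <code>traefik_dynamic.toml</code> must also be mounted into that container at the path referenced above. A minimal sketch of the relevant compose lines (the host-side paths are illustrative):</p>
<pre style="background-color:#212733;color:#ccc9c2;"><code><span>traefik:
  image: traefik
  volumes:
    # Static configuration, read once at startup
    - ./traefik.toml:/traefik.toml
    # Dynamic configuration, watched for changes at runtime
    - ./traefik_dynamic.toml:/path/to/traefik_dynamic.toml
</span></code></pre>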
<p>Create the file <code>/path/to/traefik_dynamic.toml</code> and add the following content:</p>
<pre data-lang="toml" style="background-color:#212733;color:#ccc9c2;" class="language-toml "><code class="language-toml" data-lang="toml"><span>[</span><span style="color:#73d0ff;">http</span><span>]
</span><span> [</span><span style="color:#73d0ff;">http</span><span style="color:#ccc9c2cc;">.</span><span style="color:#73d0ff;">middlewares</span><span>]
</span><span> [</span><span style="color:#73d0ff;">http</span><span style="color:#ccc9c2cc;">.</span><span style="color:#73d0ff;">middlewares</span><span style="color:#ccc9c2cc;">.</span><span style="color:#73d0ff;">redirect_to_https</span><span>]
</span><span> [</span><span style="color:#73d0ff;">http</span><span style="color:#ccc9c2cc;">.</span><span style="color:#73d0ff;">middlewares</span><span style="color:#ccc9c2cc;">.</span><span style="color:#73d0ff;">redirect_to_https</span><span style="color:#ccc9c2cc;">.</span><span style="color:#73d0ff;">redirectscheme</span><span>]
</span><span> </span><span style="color:#73d0ff;">scheme </span><span>= </span><span style="color:#bae67e;">"https"
</span><span>
</span><span> [</span><span style="color:#73d0ff;">http</span><span style="color:#ccc9c2cc;">.</span><span style="color:#73d0ff;">routers</span><span>]
</span><span> [</span><span style="color:#73d0ff;">http</span><span style="color:#ccc9c2cc;">.</span><span style="color:#73d0ff;">routers</span><span style="color:#ccc9c2cc;">.</span><span style="color:#73d0ff;">web_redir</span><span>]
</span><span> </span><span style="color:#73d0ff;">rule </span><span>= </span><span style="color:#bae67e;">"HostRegexp(`example.com`, `{subdomain:[a-z]+}.example.com`)"
</span><span> </span><span style="color:#73d0ff;">entryPoints </span><span>= [</span><span style="color:#bae67e;">"web"</span><span>]
</span><span> </span><span style="color:#73d0ff;">middlewares </span><span>= [</span><span style="color:#bae67e;">"redirect_to_https"</span><span>]
</span><span> </span><span style="color:#73d0ff;">service </span><span>= </span><span style="color:#bae67e;">"api@internal"
</span></code></pre>
<p>This code instructs Traefik to add a new <a href="https://docs.traefik.io/middlewares/overview/">middleware</a> whose only responsibility is to redirect HTTP requests to HTTPS.</p>
<p>The next few lines apply this <code>middleware</code> to a <a href="https://docs.traefik.io/routing/routers/">router</a>. The trick here is that this router only matches requests that target <code>example.com</code> and its subdomains. You could list as many domains as you need to have them always redirect to HTTPS. Our home domain <code>grafana.lan</code> is not in the list, so the <code>middleware</code> will not be applied to it and the connection will remain plain HTTP!</p>
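<p>To make that matching rule concrete, here is a tiny Python sketch (purely illustrative, not Traefik code) of the decision the <code>web_redir</code> router encodes: single-level <code>[a-z]+</code> subdomains of example.com get redirected, while <code>grafana.lan</code> falls through untouched.</p>

```python
import re

# Approximation of HostRegexp(`example.com`, `{subdomain:[a-z]+}.example.com`)
PUBLIC_HOSTS = re.compile(r"([a-z]+\.)?example\.com")

def redirect_to_https(host: str) -> bool:
    """True if an HTTP request for this host hits the redirect middleware."""
    return PUBLIC_HOSTS.fullmatch(host) is not None

print(redirect_to_https("example.com"))          # True: redirected to HTTPS
print(redirect_to_https("grafana.example.com"))  # True: redirected to HTTPS
print(redirect_to_https("grafana.lan"))          # False: stays plain HTTP
```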
<p><strong>Note</strong>: the router points to the service <code>api@internal</code>. This is done because a router always needs a service, even when, as in this case, that service will never be used: the connection is redirected to HTTPS, after which this router plays no further role. If you don't like using <code>api@internal</code> for this purpose, you could always run a harmless <a href="https://hub.docker.com/r/containous/whoami">whoami</a> container and point the router to that service instead.</p>
<h2 id="Conclusion">Conclusion</h2>
<p>This Traefik setup takes a bit more effort but allows you to gracefully handle two different types of connections.</p>
<p>Connections coming from the internet are secured via SSL certificates and will always happen over HTTPS.</p>
<p>Connections coming from within the house stay within the house, they happen over HTTP and you can feel a lot safer knowing the internet no longer has direct access to that container.</p>
Keyoxide and XMPP + OMEMO2020-07-23T14:08:02+00:002020-07-23T14:08:02+00:00
Unknown
https://yarmo.eu/blog/keyoxide-xmpp-omemo/<h2 id="XMPP">XMPP</h2>
<p><a href="https://xmpp.org/">XMPP</a> is an open messaging protocol that not only drives a <strong>thriving secure communication ecosystem for the privacy-minded</strong>, but also handles the messages sent by platforms like WhatsApp and Zoom. Knowingly or not, you have most likely used XMPP at some point in your life.</p>
<p><em>For the rest of this post, we will not take into account services like WhatsApp and Zoom as their platforms are closed off from all other platforms even though they use the same XMPP protocol.</em></p>
<p>That <strong>ecosystem for the privacy-minded</strong> consists of libraries for developers, server applications for the tech-savvy service providers and clients like <a href="https://dino.im/">Dino</a> (Linux), <a href="https://gajim.org/">Gajim</a> (Windows, Mac, Linux) and <a href="https://conversations.im/">Conversations</a> (Android) for everyone.</p>
<p>Because there is no single server or client to rule them all, we call this a <em>decentralized</em> network. I could use a different server and a different client than you do, but we would still be able to communicate with each other. Also, any server or client could cease to exist the next day without impacting the rest of the network.</p>
<h2 id="Care_to_join_the_XMPP_ecosystem?">Care to join the XMPP ecosystem?</h2>
<p>Joining the XMPP ecosystem is as simple as making an account on a server and logging in using any XMPP-compatible client. But which server? Which client?</p>
<p>While not the focus of this post, here is a <a href="https://xmpp-servers.404.city/">list provided by 404.city</a> and a <a href="https://list.jabber.at/">list provided by jabber.at</a> of XMPP servers.</p>
<p>Notable mention for <a href="https://404.city/">404.city</a> itself. Not sponsored. Just a fan.</p>
<p>With regards to clients, the three mentioned above should get you started. Need a different client? Have a look at this <a href="https://xmpp.org/software/clients.html">list provided by xmpp.org</a>.</p>
<h2 id="End-to-end_encryption:_OMEMO">End-to-end encryption: OMEMO</h2>
<p>XMPP communication can be end-to-end encrypted with <a href="https://conversations.im/omemo/">OMEMO</a> (<a href="https://xmpp.org/extensions/xep-0384.html">XEP-0384</a>), the easiest and most common of <a href="https://wiki.404.city/en/XMPP_client_encryption">XMPP-compatible end-to-end encryption schemes</a>. Verifying OMEMO fingerprints is essential to trust your communication and keep it safe from Man-in-the-Middle attacks.</p>
<p>Each XMPP client you use will have its own OMEMO key, the content of which remains secured on your device but a "fingerprint" of which can be made public without a problem. These fingerprints are used to identify the different clients that have logged in on your XMPP account.</p>
<p>If you wish to secure your communication with OMEMO, make sure to choose a <a href="https://omemo.top/">client with full support on this website</a>.</p>
<h2 id="OMEMO_and_trust">OMEMO and trust</h2>
<p>When you talk with someone over XMPP and want to guarantee all communication is secured, it is recommended to use a different channel of communication to compare and trust each other's fingerprints. Ideally, you would meet in person and scan QR codes, a handy feature of the <strong>Conversations</strong> app.</p>
<h2 id="XMPP_identity_proofs_and_Keyoxide">XMPP identity proofs and Keyoxide</h2>
<p>As you can see, trusting OMEMO keys is an essential step in the process of ensuring communication is secure. Fortunately, <a href="https://keyoxide.org">Keyoxide</a> can assist you in that process.</p>
<p>As of <a href="https://codeberg.org/keyoxide/web/releases/tag/0.4.0">version 0.4</a>, Keyoxide generates QR codes for all <strong>verified</strong> XMPP accounts it detects. This makes it easy to add new contacts if your <a href="https://keyoxide.org/guides/xmpp">XMPP identity proof</a> looks like this:</p>
<pre style="background-color:#212733;color:#ccc9c2;"><code><span>proof@metacode.biz=xmpp:username@domain.org
</span></code></pre>
<p>Scan the resulting QR code on a Keyoxide profile page in the <strong>Conversations</strong> app and the contact is added. But the OMEMO keys are not yet trusted. Let's solve that!</p>
<h2 id="Integrating_OMEMO_in_the_XMPP_identity_proof">Integrating OMEMO in the XMPP identity proof</h2>
<p>It is also possible to add a more advanced <a href="https://keyoxide.org/guides/xmpp">XMPP identity proof</a> to your OpenPGP key that includes the OMEMO fingerprints:</p>
<pre style="background-color:#212733;color:#ccc9c2;"><code><span>proof@metacode.biz=xmpp:user@domain.org?omemo-sid-123456789=A1B2C3D4E5F6G7H8I9...
</span></code></pre>
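<p>To illustrate the structure of such a URI (nothing more: the SID and fingerprint values below are made up, and this is not how Keyoxide itself is implemented), a few lines of Python can pull the OMEMO fingerprints out of the query parameters:</p>

```python
from urllib.parse import parse_qs

def omemo_fingerprints(uri: str) -> dict:
    """Map OMEMO device IDs (SIDs) to fingerprints found in an XMPP proof URI."""
    if "?" not in uri:
        return {}
    query = uri.split("?", 1)[1]
    return {
        key.removeprefix("omemo-sid-"): values[0]
        for key, values in parse_qs(query).items()
        if key.startswith("omemo-sid-")
    }

# Made-up SID and fingerprint values, for illustration only
uri = "xmpp:user@domain.org?omemo-sid-123456789=aa11bb22cc33"
print(omemo_fingerprints(uri))  # {'123456789': 'aa11bb22cc33'}
```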
<p>Obtaining the correct URI for the proof can be difficult to do manually. Fortunately, the <strong>Conversations</strong> app can assist here as well. As you can tell, using <strong>Conversations</strong> brings a ton of advantages.</p>
<p>In the main menu of that app, press <strong>Manage accounts > [your account] > Share > Share as XMPP URI</strong> and add the resulting URI to your key using <a href="https://keyoxide.org/guides/xmpp">this Keyoxide guide</a>.</p>
<p>Scan the resulting QR code on a Keyoxide profile page and not only is the contact added, their OMEMO fingerprints are also fully trusted and verified.</p>
<h2 id="Why_trust_the_Keyoxide_identity_proof?">Why trust the Keyoxide identity proof?</h2>
<p>Anyone can add any XMPP proof to their OpenPGP key, whether they own it or not. So why trust the identity proof on Keyoxide?</p>
<p><strong>STEP 1</strong> The QR code is only shown if an XMPP identity proof is verified. Verifying an XMPP account requires the holder of said account to add a small line of code to their XMPP bio <a href="https://keyoxide.org/guides/xmpp">as described in this guide</a>. Only a person with access to both the OpenPGP private key and the XMPP account can verify that XMPP account.</p>
<p><strong>STEP 2</strong> While Keyoxide assists as much as possible with trusting the right proofs, a critical mind is always an asset when dealing with trusting online identities, especially when securing your communication. Do you recognize any other proofs on this person's profile page? Is this proof verified? If so, you can safely assume that the person who holds the OpenPGP key also has access to this "other proof".</p>
<p>Combining the two steps above, you can trust that you are talking to the right person and verifying the right OMEMO fingerprints.</p>
<h2 id="Example">Example</h2>
<p>You are reading this post on <a href="https://yarmo.eu">yarmo.eu</a>. Whether or not you trust me, I'm telling you that my OpenPGP fingerprint is:</p>
<pre style="background-color:#212733;color:#ccc9c2;"><code><span>9f0048ac0b23301e1f77e994909f6bd6f80f485d
</span></code></pre>
<p>So you visit <a href="https://keyoxide.org/9f0048ac0b23301e1f77e994909f6bd6f80f485d">keyoxide.org/9f0048ac0b23301e1f77e994909f6bd6f80f485d</a>. Indeed, whoever holds the key with that fingerprint also owns the <a href="https://yarmo.eu">yarmo.eu</a> domain.</p>
<p>Now, you scroll down until you reach the XMPP proof for <strong>yarmo@404.city</strong>. You read that the XMPP account is verified. Ergo, whoever holds the key with that fingerprint also has access to that XMPP account.</p>
<p>Final conclusion: whoever owns the <a href="https://yarmo.eu">yarmo.eu</a> domain also has access to the <strong>yarmo@404.city</strong> XMPP account. If you wish to talk with me securely, scan the QR code and be certain that you have just added me as a contact, and that you are verifying the right OMEMO fingerprints to ensure secure and fully encrypted communication between us.</p>
ELIUF: Explain Like I Use Facebook2020-07-18T17:51:26+00:002020-07-18T17:51:26+00:00
Unknown
https://yarmo.eu/blog/eliuf/<h2 id="ELI5">ELI5</h2>
<p>Perhaps you've seen the word <a href="https://www.urbandictionary.com/define.php?term=ELI5">ELI5</a> before. This stands for "Explain Like I'm 5". In other words, "please use simple terms and expressions to explain your point". It's not an insult to one's intelligence, it's just the process of transferring knowledge between (groups of) individuals that have different fundamental levels of understanding of the concept in question.</p>
<p>I've come to the realization that we need a similar thing for the "decentralized internet". It's not that the concept is so complicated that it can only be explained in very simple terms—it isn't.</p>
<p>But rather, it's a significant shift in how one needs to think about how the internet works, how services are provided for us and, ultimately, how our data flows on the digital highway.</p>
<h2 id="Our_internet_needs_to_change">Our internet needs to change</h2>
<p>Days rarely go by without some mention in the global news that a <a href="https://www.eureporter.co/frontpage/2020/07/14/gdpr-belgian-data-protection-authority-fines-google-e600000/">big internet corporation was fined for misuse of our data</a> or a <a href="https://time.com/5867577/twitter-breach-hack-bitcoin-trust/">report on a large security breach</a> or <a href="https://www.businessinsider.nl/google-prioritizes-youtube-videos-over-competitors-2020-7?international=true&r=US">abuse of monopoly</a>. These links are just from last week (2020-07-13 to 2020-07-19). Next week will produce new revelations on how our online existence is threatened or abused.</p>
<p>Our internet today is fundamentally flawed. But with all the attention it is getting these days, people are starting to see the cracks and will turn to a better alternative: the "decentralized internet".</p>
<h2 id="ELIUF">ELIUF</h2>
<p>In a series of posts, we will explore what it means to use this new decentralized internet, how it serves you, what its benefits and drawbacks are, and how it fixes today's internet.</p>
<p>You will recognize these posts by their mention of "ELIUF" in either the title or somewhere in the text. ELIUF stands for "Explain Like I Use Facebook". A little facetious, I admit. But it works. People who use Facebook have a good grasp of how a "centralized internet" works. With Facebook slowly crumbling under scandals and fines, it's their users and the users of similar sites (Twitter, YouTube, etc.) who will need some guidance when they choose to abandon those sites.</p>
<h2 id="ELIUF_101">ELIUF 101</h2>
<p>A first ELIUF explanation? No problem, here it is! "Decentralized" means that no single "central" authority is responsible for possessing data or knowledge. Therefore, this blog you are reading will be only one node in a larger network of blogs writing about the topic. Each blog will aim to link to a few posts on different blogs, allowing you to easily learn about these new concepts while moving around the internet.</p>
<p>Just like recommended Facebook posts or recommended YouTube videos! Except, well, decentralized :)</p>
<p>Let's all share this knowledge with each other and welcome everyone who wishes to escape today's internet and join the decentralized web.</p>
<h2 id="ELIUF_posts_elsewhere_on_the_internet">ELIUF posts elsewhere on the internet</h2>
<ul>
<li><a href="https://www.garron.blog/posts/eliuf.html">How to decentralize the Internet</a> by <a href="https://www.garron.blog">Guillermo Garron</a></li>
</ul>
Quick comparison: Plausible vs logs2020-07-13T23:19:39+00:002020-07-13T23:19:39+00:00
Unknown
https://yarmo.eu/blog/plausible-versus-logs/<p>About a month ago, I started collecting website usage data using both <a href="https://plausible.io">Plausible.io</a> and logs generated by <a href="https://caddyserver.com">Caddyserver</a>, my reverse proxy. The goal was to compare the data sources, just like <a href="https://markosaric.com/">Marko Saric</a> did in a <a href="https://plausible.io/blog/server-log-analysis">post on the Plausible blog</a>.</p>
<p>Here's a quick overview of the results. For more details, read the post mentioned above; the results are nearly identical and Marko does a great job explaining them.</p>
<h2 id="Results">Results</h2>
<h3 id="Quantitative_data">Quantitative data</h3>
<p>The table below summarizes key metrics computed by both Plausible and <a href="https://goaccess.io">GoAccess</a> (based on Caddyserver logs). Data used was collected between June 13th and July 13th.</p>
<table><thead><tr><th style="text-align: left">Metric</th><th style="text-align: left">Plausible.io</th><th style="text-align: left">Logs + GoAccess</th><th style="text-align: left">Δ factor</th></tr></thead><tbody>
<tr><td style="text-align: left">Visitors</td><td style="text-align: left">32.1k</td><td style="text-align: left">76.9k</td><td style="text-align: left">x2.4</td></tr>
<tr><td style="text-align: left">Pageviews</td><td style="text-align: left">44.5k</td><td style="text-align: left">468.6k</td><td style="text-align: left">x10.5</td></tr>
<tr><td style="text-align: left">Bandwidth</td><td style="text-align: left">-</td><td style="text-align: left">16.6 GiB</td><td style="text-align: left">-</td></tr>
</tbody></table>
<p>Just as Marko noticed, the logs show much higher numbers of visitors and pageviews, likely due to crawlers and bots that show up in the logs but do not run JavaScript and are therefore not picked up by Plausible.</p>
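<p>As a rough illustration of that gap (the user agents below are made-up samples, and real bot detection is much messier), a simple user-agent heuristic already catches the crawlers that politely identify themselves in the logs:</p>

```python
import re

# Made-up sample of user agents as they might appear in an access log
user_agents = [
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101 Firefox/78.0",
    "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)",
    "Mozilla/5.0 (compatible; bingbot/2.0; +http://www.bing.com/bingbot.htm)",
]

# Many (by no means all) crawlers self-identify with one of these words
BOT_PATTERN = re.compile(r"bot|crawl|spider|slurp", re.IGNORECASE)

human_hits = [ua for ua in user_agents if not BOT_PATTERN.search(ua)]
print(len(human_hits))  # 1: only the Firefox entry survives the filter
```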
<p>I could compare other metrics like referrers and top pages, but again, I suggest you read the <a href="https://plausible.io/blog/server-log-analysis">post on the Plausible blog</a>.</p>
<p>I'd like to add that the logs can provide information about bandwidth usage and which files are downloaded the most, which lets you make informed decisions when optimizing caching and file loading. Plausible can't help you with this data; one needs logs for it.</p>
<h3 id="Qualitative_data">Qualitative data</h3>
<p>The experience with Plausible was more convenient than with GoAccess: the former's dashboard loads in seconds, whereas the latter took three minutes to process the logs and generate the results.</p>
<h2 id="Conclusion">Conclusion</h2>
<p>Both methods have advantages and disadvantages. Plausible gives fast and precise results but potentially impacts page load (although minimally). Server logs don't impact page load, can provide bandwidth stats but inflate numbers due to traffic noise generated by search engines, crawlers and bots. Personally, I will continue using both for the foreseeable future.</p>
<h2 id="Methodology">Methodology</h2>
<h3 id="Plausible">Plausible</h3>
<p>Visit the <a href="https://plausible.io">Plausible.io</a> website and simply look at the website's stats.</p>
<h3 id="Caddy_logs">Caddy logs</h3>
<p>Logs were collected using the following snippet in the Caddyfile:</p>
<pre style="background-color:#212733;color:#ccc9c2;"><code><span>log {
</span><span> output file /var/log/caddy/access.log {
</span><span> roll_size 100MiB
</span><span> roll_keep 10
</span><span> roll_keep_for 2160h
</span><span> }
</span><span>}
</span></code></pre>
<h3 id="GoAccess">GoAccess</h3>
<p>As GoAccess cannot read Caddy logs directly, a small bash script is needed:</p>
<pre data-lang="bash" style="background-color:#212733;color:#ccc9c2;" class="language-bash "><code class="language-bash" data-lang="bash"><span>today_date</span><span style="color:#f29e74;">=</span><span style="color:#bae67e;">$</span><span>(</span><span style="color:#ffd580;">date</span><span style="color:#ffcc66;"> -u</span><span style="color:#bae67e;"> +"%Y-%m-%d"</span><span>)
</span><span>today_date</span><span style="color:#f29e74;">=</span><span style="color:#bae67e;">$</span><span>(</span><span style="color:#ffd580;">date</span><span style="color:#ffcc66;"> -u --date</span><span style="color:#f29e74;">=</span><span style="color:#bae67e;">"$</span><span>today_date</span><span style="color:#bae67e;"> -30 day" +"%Y-%m-%d"</span><span>)
</span><span>today_ts</span><span style="color:#f29e74;">=</span><span style="color:#bae67e;">$</span><span>(</span><span style="color:#ffd580;">date</span><span style="color:#ffcc66;"> -d </span><span style="color:#bae67e;">$</span><span>today_date</span><span style="color:#bae67e;"> +%</span><span>s)
</span><span>
</span><span style="color:#ffd580;">goaccess </span><span style="color:#f29e74;"><</span><span>(</span><span style="color:#ffd580;">zcat</span><span style="color:#ffcc66;"> -f</span><span> logs/access</span><span style="color:#f29e74;">* | </span><span style="color:#ffd580;">jq</span><span style="color:#ffcc66;"> --raw-output </span><span style="color:#bae67e;">'
</span><span style="color:#bae67e;"> .request.remote_addr |= .[:-6] |
</span><span style="color:#bae67e;"> select(.request.remote_addr != "1.1.1.1") |
</span><span style="color:#bae67e;"> select(.request.remote_addr != "2.2.2.2") |
</span><span style="color:#bae67e;"> select(.ts >= '</span><span>$today_ts</span><span style="color:#bae67e;">') |
</span><span style="color:#bae67e;"> [
</span><span style="color:#bae67e;"> .common_log,
</span><span style="color:#bae67e;"> .request.headers.Referer[0] // "-",
</span><span style="color:#bae67e;"> .request.headers."User-Agent"[0],
</span><span style="color:#bae67e;"> .duration
</span><span style="color:#bae67e;"> ] | @csv'</span><span>) </span><span style="color:#ccc9c2cc;">\
</span><span style="color:#ffcc66;"> --log-format</span><span style="color:#f29e74;">=</span><span style="color:#bae67e;">'"%h - - [%d:%t %^] ""%m %r %H"" %s %b","%R","%u",%T'</span><span style="color:#ffcc66;"> --time-format</span><span style="color:#f29e74;">=</span><span style="color:#bae67e;">'%H:%M:%S'</span><span style="color:#ffcc66;"> --date-format</span><span style="color:#f29e74;">=</span><span style="color:#bae67e;">'%d/%b/%Y'
</span></code></pre>
<p>This was adapted from the bash script described by <a href="https://fosstodon.org/@AlexMV12">Alessandro</a> in this <a href="https://alexmv12.xyz/blog/goaccess_caddy/">blog post</a>.</p>
The Future of Online Identity is Decentralized2020-07-12T16:23:31+00:002020-07-12T16:23:31+00:00
Unknown
https://yarmo.eu/blog/future-online-identity-decentralized/<h2 id="Online_identity">Online identity</h2>
<p><a href="https://en.wikipedia.org/wiki/Online_identity">Online identity</a> refers to the concept of "being" in the digital world. As an internet user, you exist. You create accounts on websites. You write on social media and blogs. You post photos. All this online activity has in common that one and the same person performed these actions; it defines your "online identity".</p>
<p>However, your "online identity" is not <em>per se</em> representative of your "social identity" in the physical world.</p>
<p>You may choose to use your own name or a pseudonym. You may choose to publish personally identifiable information or not. You may choose to remain truthful to your social identity or deceive. In short, you may choose for authenticity or for anonymity.</p>
<h2 id="Authenticity_versus_anonymity">Authenticity versus anonymity</h2>
<p>Authenticity and anonymity aren't mutually exclusive and that is the beauty of the internet. In the physical realm, you are (mostly) limited to a single social identity. In the digital space, there are no such restrictions. While you can't embody multiple persons in the offline world, you can have several identities online. In fact, you can even have multiple accounts on the same platform, opting for a different balance between authenticity and anonymity for each one of them.</p>
<p>Anonymity has its downsides, creating psychological artifacts like <a href="https://en.wikipedia.org/wiki/Online_disinhibition_effect">online disinhibition</a> and facilitating <a href="https://en.wikipedia.org/wiki/Cyberbullying">cyberharassment</a>. However, even though we are far from completely overcoming these challenges, an internet that allows us to remain anonymous is still the one we should want and fight for.</p>
<h2 id="Consolidation_of_identity_and_internet_corporations">Consolidation of identity and internet corporations</h2>
<p>Removing the possibility of anonymity could solve the problem of online toxicity. Large internet corporations like Google and Facebook allow anyone to create an account on the condition that some personally identifiable information is revealed, usually a phone number.</p>
<p>The benefit is that it deters most people from repeatedly creating new accounts when older accounts have been flagged or banned due to improper behavior. These companies gain the function of "identity provider": they manage an online identity that can be used to log in at different locations on the internet. We all know many websites that offer a "Google login" or "Facebook login".</p>
<p>But there is a problem: handling the entire online identity of a single person is too much responsibility for any corporation or organization, especially if it is in their interest to gain intimate individual knowledge and sell it (Google) or use it to manipulate moods and influence decision making (Facebook).</p>
<p>That phone number that was once used to prevent online toxicity is now the first of many pieces of personally identifiable information that these corporations will seek and use to figure out who you are.</p>
<p>"You have nothing to hide"? Great. The internet corporations will still make money hand over fist by selling your personality, your preferences, your buying patterns and your vote. And not just yours. That of entire populations.</p>
<p>Know that profits are just the tip of the iceberg. Governments all around the world are also interested in knowing what their citizens think, say and do for very different motives.</p>
<h2 id="Going_decentralized">Going decentralized</h2>
<p>The solution is relatively simple. When you create a new account and get to choose between "Google login", "Facebook login" and "Email login", pick "Email login".</p>
<p>The benefits of not giving away any more personal data and tracking possibilities outweigh the inconvenience of having to fill in your email address and a password, especially when using a password manager. As tempting as the alternative is, making these changes will improve your life and ultimately, when enough people join these efforts, that of the world population.</p>
<p>A different problem arises: how to prove online identity when decentralized?</p>
<h2 id="Decentralized_online_identity">Decentralized online identity</h2>
<p>When you no longer rely on an identity provider to manage your entire online identity, you lose the one thing all your accounts on different platforms had in common: if two accounts on different online platforms were created with the same Google or Facebook account, we can safely assume they belong to the same person.</p>
<p>But this "trust by proxy" is lost when the accounts on those platforms were created without an identity provider. And whether authentic or anonymous, it can sometimes be extremely useful to know and trust that separate accounts on the internet belong to the same person, even without knowing who that person is.</p>
<p>The username is not sufficient to identify accounts across platforms. If you are "Alice" on one website, chances are you might need to be "Alice123" on the next one. And what if someone close to you is contacted by an "Aliss" asking them to transfer money because they believe you are in some sort of trouble? A poor attempt at impersonation, I know… Don't worry, a real bad actor will put in more effort and make a much more convincing act.</p>
<h2 id="Proving_decentralized_online_identity">Proving decentralized online identity</h2>
<p>What if not only your online identity is decentralized, but also the tool to prove said online identity? This would mean that you wouldn't need to depend on a single company or entity to prove your identity across platforms. Decentralized identity, decentralized proofs!</p>
<p>Such solutions are already being deployed in industry, for example by firms like <a href="https://indicio.tech/">Indicio.tech</a> which focus on blockchain technology.</p>
<p>For individuals, I recently launched <a href="https://keyoxide.org">Keyoxide</a>, which uses cryptographic keypairs to accomplish decentralized identity verification. While it doesn't (and shouldn't!) link an account to a person in the physical realm, it links accounts across platforms.</p>
<p>If you trust an account on one platform, you can trust any other account on any other platform as long as they are both verified by "identity proofs" stored in the same keypair. Whether you choose authenticity or anonymity, decentralized identity proofs allow you to build a cross-platform online identity.</p>
<p>Here's my <a href="https://keyoxide.org/9f0048ac0b23301e1f77e994909f6bd6f80f485d">Keyoxide profile</a>. In this case, I link to several "authentic" accounts but I could easily generate a new keypair void of personal data that links to several anonymous accounts. The accounts don't need to be authentic to create an online persona.</p>
<p>All the accounts listed in the link above belong to me. No one else could claim these accounts. Here's how.</p>
<h2 id="Identity_proofs">Identity proofs</h2>
<p>An "identity proof" is nothing more than a link to an account A on some platform P stored inside your keypair K. If a "proof verification tool" such as Keyoxide follows this link and discovers some piece of data linking back to keypair K (which is only possible if keypair K and account A on platform P belong to the same person), the account is verified. If this proof verification is done for several accounts on different platforms, it is beyond reasonable doubt that the same person owns said accounts.</p>
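<p>A minimal sketch of that verification flow in Python (hypothetical helper names; a real verifier such as Keyoxide fetches over HTTPS and understands each platform's response format):</p>

```python
# Sketch of decentralized identity proof verification: follow the proof
# link stored in keypair K and check that account A on platform P links
# back to the same keypair.

def verify_proof(fetch, proof_url, fingerprint):
    """Return True if the fetched account content references the keypair."""
    content = fetch(proof_url)  # in reality: an HTTPS GET of the account page
    return fingerprint.lower() in content.lower()

# Stand-in for the network: the account's bio carries an "openpgp4fpr:"
# URI pointing back at the keypair's fingerprint.
def fake_fetch(url):
    return "bio: openpgp4fpr:9F0048AC0B23301E1F77E994909F6BD6F80F485D"

print(verify_proof(fake_fetch, "https://fosstodon.org/@example",
                   "9f0048ac0b23301e1f77e994909f6bd6f80f485d"))  # True
```

<p>In this sketch, the "piece of data linking back" is an <code>openpgp4fpr:</code> URI in the account's bio, which is one of the forms real proofs take.</p>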
<p>No bad actor could claim one of your accounts: the piece of data that links back is specific to your keypair, not the bad actor's keypair. And the bad actor also couldn't insert a proof inside your keypair as long as your keypair isn't compromised. Only you, the owner of the keypair, can add new proofs. But the entire world can read and verify them.</p>
<p>These identity proofs are decentralized because Keyoxide doesn't store them, your cryptographic keypair does. Keyoxide simply reads the keys and verifies the proofs. When you remove a proof from your keypair, Keyoxide will no longer have access to it. You own your proofs and your online identity.</p>
<p>In fact, the proofs are readable by everyone and are not specifically designed for Keyoxide. Anyone can use any tool or create new ones to verify these proofs and developers are encouraged to enrich this field with additional tools and services. Let's build a decentralized identity ecosystem we can all trust.</p>
<h2 id="Online_identity_beyond_today's_internet">Online identity beyond today's internet</h2>
<p>Initiatives like <a href="https://inrupt.com/solid">Solid</a> by <a href="https://en.wikipedia.org/wiki/Tim_Berners-Lee">Sir Tim Berners-Lee</a> are paving the way for a new internet where all data is owned by the user and shared with platforms with consent and restrictions. This would solve the online identity problem: you get the benefits of a "pseudo centralized" account while maintaining full ownership over all account-related data stored on a decentralized platform. Social media would be allowed to see some data, messaging platforms some other data. But there would still be one single account to rule all the platforms.</p>
<p>On today's internet, the best we can do is make fully separated accounts, link them using technologies like decentralized online identity proofs and create our own online personas, with our own open tools that ensure we maintain ownership over them.</p>
State of the Keybase.io website2020-07-03T15:23:38+00:002020-07-03T15:23:38+00:00
Unknown
https://yarmo.eu/blog/keybase-website/<h2 id="Disclaimer">Disclaimer</h2>
<p>Two days ago, I launched <a href="https://keyoxide.org">Keyoxide.org</a> which provides some of the same functions as <a href="https://keybase.io">Keybase.io</a> but in an Open Source package. I've been wanting to write this post for a while but felt it could be perceived as disingenuous if posted before making my own project public. Therefore, I post this now.</p>
<h2 id="TLDR">TLDR</h2>
<p>The Keybase.io website uses non-optimized resources resulting in a slow pageload and 5+ year old versions of libraries with known and public security vulnerabilities.</p>
<h2 id="The_Keybase.io_website">The Keybase.io website</h2>
<p>I have opinions about the Keybase service, but this post is not about that. This is about the facts behind their website, <a href="https://keybase.io">Keybase.io</a> and more specifically their <a href="https://keybase.io/encrypt">encrypt</a> page, the one you use to <strong>encrypt private and confidential messages</strong>.</p>
<p>When you load that specific page, make sure to load it in a private session or window to eliminate cached resources. What do you notice?</p>
<p>It is slow. Really slow. I noticed it too and decided to run a <a href="https://www.webpagetest.org/result/200627_0Q_044080ef3ab8a678721658c90d2f4706/">Webpagetest (link to result)</a>. Out of three runs, we analyze only the median run (so neither the best one nor the worst one).</p>
<p><img src="/img/blog/keybase_encrypt__wpt_overview.png" alt="Keybase encrypt Webpagetest overview" /><br />
<em>keybase.io/encrypt</em></p>
<h2 id="The_content_loaded">The content loaded</h2>
<p>It takes <strong>6.25 seconds</strong> to fully load the <strong>2.9 megabytes</strong> that are used on this page. That is hefty for a page that is essentially a single form. I mean, look at it:</p>
<p><img src="/img/blog/keybase_encrypt.png" alt="Keybase encrypt page" /><br />
<em>Why 2.9 megabytes?</em></p>
<p>That's a regular web form. What could possibly be <strong>2.9 megabytes</strong>? The javascript?</p>
<p><img src="/img/blog/keybase_encrypt__wpt_1.png" alt="Webpagetest run 1 overview" /><br />
<em>How many requests? How many bytes?</em></p>
<p>Most requests are fonts. That makes sense. Earlier, we saw the page only makes <strong>12 requests</strong>, so I could imagine a few of those being font files. Fortunately, fonts are only <strong>6.5%</strong> of the bytes loaded, so we'll forgive them.</p>
<p><strong>90 percent</strong> of the bytes are due to javascript and images‽ That's <strong>2.6 megabytes</strong> for a form! What images?</p>
<h2 id="Javascript_and_image(s)">Javascript and image(s)</h2>
<p>Let's grab the <a href="https://www.webpagetest.org/result/200627_0Q_044080ef3ab8a678721658c90d2f4706/1/details/#waterfall_view_step1">waterfall</a> and see what is going on:</p>
<p><img src="/img/blog/keybase_encrypt__wpt_1_waterfall.png" alt="Webpagetest run 1 waterfall" /><br />
<em>Run 1 waterfall</em></p>
<p>At two points in time, the loading of the website stalls. The first stall is <strong>2.6 seconds</strong> for the file <code>sitewide-js.js</code>. The second stall is <strong>2.5 seconds</strong> for the file <code>footprints_transp.png</code>. Let's go.</p>
<h2 id="sitewide-js.js">sitewide-js.js</h2>
<p>This file is <strong>4.7 megabytes</strong> raw and <strong>1.2 megabytes</strong> gzipped. Let us look at a random excerpt:</p>
<p><img src="/img/blog/keybase_encrypt__js_excerpt.png" alt="Javascript excerpt" /><br />
<em>Javascript excerpt</em></p>
<p>This is not really optimized for performance: one could choose to minify the javascript. Allow me to use <code>@node-minify/cli</code>.</p>
<p><code>JS_Parse_Error [SyntaxError]: Unexpected token: name «syms», expected: punc «;»</code></p>
<p>Hmm… Let's remove that one line, which simply initializes a variable (the only fix I could find). I can no longer guarantee it works, but let's assume it does.</p>
<p>Old version: <strong>4.7 megabytes</strong> raw and <strong>1.2 megabytes</strong> gzipped.<br />
New version: <strong>2.5 megabytes</strong> raw and <strong>0.7 megabytes</strong> gzipped.</p>
<p>First win!</p>
<p>Well, I need to specify one thing: the website loads a gzipped version of the original file at <strong>1.23 megabytes</strong>. When I <code>gzip</code> it on my local machine, the original file even comes down to <strong>1 megabyte</strong>. I don't know what causes this discrepancy, but while we were able to reduce the raw file by 2.2 megabytes, the reduction amounts to only 0.3 megabytes once gzipped (on my machine™).</p>
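<p>The raw-versus-gzipped effect is easy to reproduce locally. A sketch, using <code>sed</code> as a crude stand-in for a real minifier (file names are made up):</p>

```shell
# Build a repetitive "unminified" file: padding whitespace and comments.
printf 'var x = 1;   // a comment padding the file\n%.0s' $(seq 1 5000) > sample.js

# Crude stand-in for minification: drop comments, squeeze whitespace.
sed 's|//.*||' sample.js | tr -s ' ' > sample.min.js

# Gzip both, writing to new files so the originals stick around.
gzip -c sample.js > sample.js.gz
gzip -c sample.min.js > sample.min.js.gz

# Compare raw and gzipped sizes.
wc -c sample.js sample.min.js sample.js.gz sample.min.js.gz
```

<p>The numbers will obviously differ from the real <code>sitewide-js.js</code>; the point is that minification shrinks the raw file dramatically while gzip erases much of that difference.</p>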
<h2 id="footprints_transp.png">footprints_transp.png</h2>
<p>Have you found the image yet? It's the little image at the bottom of the page showing a dog (?) following footprints. Cute :)</p>
<p><img src="/img/blog/keybase_encrypt__img.png" alt="Footprints image" /><br />
<em>Footprints image</em></p>
<p>Dimensions on page: <strong>330 x 90 pixels</strong><br />
Dimensions of file: <strong>2836 x 770 pixels</strong></p>
<p>That's only <strong>8.6 times</strong> larger than it needs to be. The bigger crime is the file size: <strong>1.4 megabytes</strong>. That's half of the page's total weight. You guessed it. This could be better.</p>
<p>Using <a href="https://imagecompressor.com/">imagecompressor.com</a>, I can compress this full-sized image down to <strong>398 kilobytes</strong> (reduction of <strong>71%</strong>). And I'm even allowing the full 256 colors. And the dimensions are still <strong>8.6 times</strong> larger than they need to be.</p>
<p>Optimizing the compression and the image dimensions could yield even better results. I'm not going to bother. The devs didn't either.</p>
<p>Still, second win!</p>
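<p>For the record, the fix would be a couple of commands. A sketch, assuming ImageMagick and <code>pngquant</code> are installed (output file names are mine):</p>

```
$ convert footprints_transp.png -resize 330x90 footprints_small.png
$ pngquant --quality 65-80 --output footprints_small.min.png footprints_small.png
```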
<h2 id="Anything_else_we_can_learn?">Anything else we can learn?</h2>
<p>The source code contains this:</p>
<pre style="background-color:#212733;color:#ccc9c2;"><code><span>~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
</span><span>
</span><span> K E Y B A S E
</span><span>
</span><span> crypto for everyone
</span><span>
</span><span> because no one we know ever
</span><span> seems to have a public key. :-(
</span><span>
</span><span> No Google Analytics or other 3rd party hosted script tags on Keybase.
</span><span>
</span><span> And this has the added bonus that we'll never be able to serve ad code.
</span><span>
</span><span> \o/ \o/
</span><span> keybase team
</span><span>
</span><span> ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
</span></code></pre>
<p>I like it when devs get creative. The third paragraph is a bit alienating, but that's my opinion.</p>
<p>Anything else? Given that this is all cryptography related, maybe some security related issues?</p>
<h2 id="Security">Security</h2>
<p><img src="/img/blog/keybase_encrypt__security.png" alt="Webpagetest security score" /><br />
<em>Webpagetest security score</em></p>
<p>I've run quite a few Webpagetests on different websites, but a <strong>0</strong> security score is new to me. What does that even mean?</p>
<p>Well, <a href="https://snyk.io/test/website-scanner/?test=200627_0Q_044080ef3ab8a678721658c90d2f4706">it turns out</a> that the entire website is built using the following libraries:</p>
<ul>
<li>jquery v1.11.3 (from <a href="https://blog.jquery.com/2015/04/28/jquery-1-11-3-and-2-1-4-released-ios-fail-safe-edition/">April 2015</a>)</li>
<li>bootstrap v3.3.5 (from <a href="https://blog.getbootstrap.com/2015/06/15/bootstrap-3-3-5-released/">June 2015</a>)</li>
<li>moment v2.7.0 (from <a href="https://github.com/moment/moment/releases/tag/2.7.0">June 2014</a>)</li>
</ul>
<p>Besides the obvious aging, these libraries account for a total of six known and public security vulnerabilities, including <a href="https://snyk.io/vuln/SNYK-JS-BOOTSTRAP-72890">cross-site scripting</a>, <a href="https://snyk.io/vuln/SNYK-JS-JQUERY-174006">prototype pollution</a> and <a href="https://snyk.io/vuln/npm:moment:20161019">regular expression denial of service</a>. All six security vulnerabilities have remediations.</p>
<p>Let the <a href="https://blog.getbootstrap.com/2015/06/15/bootstrap-3-3-5-released/">bootstrap v3.3.5 announcement</a> be a painful reminder: this is <em>pushing it</em>.</p>
<h2 id="Wrapping_up">Wrapping up</h2>
<p>This should give a nice overview of what could go wrong with a non-optimized and aging website. This could happen to any website. But this is Keybase, the company that promises <strong>"secure messaging and file-sharing"</strong>. The same company that got <a href="https://www.crunchbase.com/organization/keybase">$10.8 million in Series A funding</a>. The same company that <a href="https://github.com/keybase/client/issues/24105">won't allow us to see their server code</a>.</p>
<p>To paint a full and fair picture, there has been an <a href="https://keybase.io/docs-assets/blog/NCC_Group_Keybase_KB2018_Public_Report_2019-02-27_v1.3.pdf">audit of the Keybase protocol [PDF]</a> which states that:</p>
<blockquote>
<p>[...] there were weaknesses in the Keybase implementation; these were quickly fixed.</p>
</blockquote>
<p>The audit didn't include the website. I'll just end with another quote from the same audit:</p>
<blockquote>
<p>Another common theme was the presence of legacy code. [...]<br />
This does not necessarily imply that legacy code is insecure, but complexity and security are intertwined – every new piece of code may contain a security vulnerability, and more code correlates with more risk.</p>
</blockquote>
Transcript of a strange call2020-07-01T16:14:40+00:002020-07-01T16:14:40+00:00
Unknown
https://yarmo.eu/blog/transcript-strange-call/<p><strong>Lady</strong><br />
"Hi, I'm calling you because you have shown interest in the financial market."</p>
<p><strong>Me</strong><br />
"What? No, I haven't."</p>
<p><strong>Lady</strong><br />
"You have in the past…"</p>
<p><strong>Me</strong><br />
"I think you might have the wrong person on the line."</p>
<p><strong>Lady</strong><br />
"Oh, what is your email address?"</p>
<p><strong>Me</strong><br />
"I'm not giving you my email address."</p>
<p><strong>Lady</strong><br />
"It's all right, I have it here, I just wanted to check. What is your full name?"</p>
<p><strong>Me</strong><br />
"No, I'm not… Wait… Why? I'm not interested in the financial market."</p>
<p><strong>Lady</strong><br />
"You were in the past…"</p>
<p><strong>Me</strong><br />
"Ok, bye."</p>
Launching Keyoxide.org2020-07-01T12:00:00+00:002020-07-01T12:00:00+00:00
Unknown
https://yarmo.eu/blog/keyoxide/<p>Today, I'm excited to launch <a href="https://keyoxide.org">Keyoxide.org</a>, the lightweight and FOSS solution to make basic cryptography operations accessible to regular humans.</p>
<h2 id="What_is_Keyoxide.org?">What is Keyoxide.org?</h2>
<p><a href="https://keyoxide.org">Keyoxide.org</a> offers easy encryption, signature verification and decentralized identity proof verification based on PGP keys while demanding little in-depth knowledge about the underlying encryption program from its users.</p>
<p>This project aims to offer functionality comparable to services like <a href="https://keybase.io">Keybase</a> while reducing friction and being more open.</p>
<p>The project is MIT licensed, uses <a href="https://github.com/openpgpjs/openpgpjs">openpgpjs</a> and is hosted on <a href="https://codeberg.org/yarmo/keyoxide">Codeberg</a>.</p>
<h2 id="Why_only_encryption_and_signature_verification?">Why only encryption and signature verification?</h2>
<p>These are the operations that are available when only having access to public keys instead of private keys. If you wish to decrypt messages and sign them, you need a keypair. If you have a keypair, you probably have the knowledge to use dedicated tools like the CLI or Kleopatra. And if you do, you probably won't be using <a href="https://keyoxide.org">Keyoxide.org</a> directly yourself.</p>
<p>Indeed, if you possess a PGP keypair, <a href="https://keyoxide.org">Keyoxide.org</a> is the tool you send to others to interact with your public key more easily. Allow them to encrypt a message for you, to verify one of your signatures, to verify your online identities using decentralized proofs.</p>
<h2 id="What_are_those_decentralized_identity_proofs_you_keep_mentioning?">What are those decentralized identity proofs you keep mentioning?</h2>
<p>You know how Keybase allows you to prove you have control over accounts on certain websites and services? A great function! Fortunately for you, this function can be even better and more secure by using <a href="https://keyoxide.org/guides/openpgp-proofs">decentralized OpenPGP identity proofs</a>. <a href="https://keyoxide.org">Keyoxide.org</a> will prove your identity on multiple platforms at the same time and yet, you are not required to make an account to use this function. How is that possible?</p>
<p>Well, it's called <em>decentralized</em> for a reason: <a href="https://keyoxide.org">Keyoxide.org</a> doesn't hold your proofs, your key does! Any software that can access your public key can verify these proofs for anyone. When better tooling comes around, you could verify those proofs using a mobile app, using a command-line utility, you name it. No single service holds your proof, only you do, stored inside your keypair.</p>
<p>I have written a <a href="https://keyoxide.org/guides">guide</a> on how to add a proof for every platform currently supported by this website: <a href="https://keyoxide.org/guides/dns">domains</a>, <a href="https://keyoxide.org/guides/lobsters">Lobste.rs</a>, <a href="https://keyoxide.org/guides/twitter">Twitter</a>, <a href="https://keyoxide.org/guides/github">Github</a>, a <a href="https://keyoxide.org/guides">bunch more</a> and work is in progress to support even more still. Is your beloved service not in the list? <a href="https://codeberg.org/yarmo/keyoxide">Open an issue or make a PR</a>! Free open-source software FTW!</p>
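<p>To give an idea of what those guides boil down to: an OpenPGP identity proof is just a notation stored on your key. With GnuPG, the (interactive) flow looks roughly like this — the notation name is the one the guides use, the URL is a placeholder for wherever your proof lives:</p>

```
$ gpg --edit-key <your-fingerprint>
gpg> notation
Enter the notation: proof@metacode.biz=https://your.proof/location
gpg> save
$ gpg --keyserver hkps://keys.openpgp.org --send-keys <your-fingerprint>
```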
<p>Oh, that reminds me, any <a href="https://keyoxide.org/guides/mastodon">Mastodon</a> instance can be used to prove your identity. Yes, <a href="https://github.com/keybase/keybase-issues/issues/3385">any</a>.</p>
<h2 id="So_how_does_it_compare_to_Keybase?">So how does it compare to Keybase?</h2>
<p>There's a more complete <a href="https://keyoxide.org/guides/feature-comparison-keybase">guide on the Keyoxide website</a>, but in a nutshell:</p>
<ul>
<li>more privacy-friendly by not forcing you to create an account and handing over data</li>
<li>more secure by not asking you to trust the service with your private keys</li>
<li>open-source servers (<a href="https://github.com/keybase/client/issues/24105">a must</a>)</li>
<li>encrypt/verify with every public key accessible on the internet, not just those that have been uploaded to a proprietary server</li>
<li>almost all processing is done in the browser, no data is sent to servers*</li>
<li>no vendor lock-in</li>
<li>selfhostable</li>
</ul>
<p>* Only exception is decentralized identity proof verification: some service providers do not have the correct CORS headers (like Reddit) or require APIs (like Twitter). In these rare cases, simple PHP scripts (also open-source) run the proof verification instead.</p>
<h2 id="Can_I_get_an_account?">Can I get an account?</h2>
<p>No. <a href="https://keyoxide.org">Keyoxide.org</a> doesn't need your data on its servers. There are already several ways of exposing public keys on the internet, including <a href="https://keyoxide.org/guides/web-key-directory">web key directory</a> (WKD) and dedicated servers like <a href="https://keys.openpgp.org">keys.openpgp.org</a>. Let's use those instead of making yet another service where you need to upload your keys to.</p>
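<p>For instance, anyone with a reasonably recent GnuPG can look up a key published through WKD or a keyserver directly, no account required (the address is a placeholder; exact behavior depends on your <code>auto-key-locate</code> configuration):</p>

```
$ gpg --locate-keys user@example.org                   # tries WKD among other methods
$ gpg --with-wkd-hash --fingerprint user@example.org   # shows the hashed WKD user ID
```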
<h2 id="Can_I_get_a_profile_page_then?">Can I get a profile page then?</h2>
<p>Yes! Append your PGP fingerprint or WKD id to the URL and there it is!</p>
<p>Want an example? Here's my profile at<br />
<a href="https://keyoxide.org/9f0048ac0b23301e1f77e994909f6bd6f80f485d">https://keyoxide.org/9f0048ac0b23301e1f77e994909f6bd6f80f485d</a>.</p>
<p>Now you know what accounts on various services are mine, where to follow me if you wish to get updates on the project and if you wish to send me an encrypted message, that's also just two clicks away.</p>
<h2 id="What_about_my_private_keys?">What about my private keys?</h2>
<p>Don't upload your private keys to the internet, period. If a service wants your private keys on their (proprietary) servers, say no.</p>
<h2 id="You_said_selfhostable?">You said selfhostable?</h2>
<p>Well, yes! It's not a fully supported use case just yet, but the browser does all the processing, the server is mostly just there to deliver the files to the user to perform the operations. <a href="https://codeberg.org/yarmo/keyoxide">Grab the code</a> and put it on your own PHP server!</p>
<h2 id="Any_closing_words?">Any closing words?</h2>
<p>I built this to provide better tooling around modern-day encryption programs and reduce the friction for less tech-savvy people when interacting with public keys.</p>
<p>For those who wish to use encryption programs beyond OpenPGP, <a href="https://codeberg.org/yarmo/keyoxide/issues">let's talk about this</a>. Keyoxide doesn't have any reference to PGP in its name for a reason: it could serve as a platform for easy interaction with any public key, no matter the underlying encryption program.</p>
<p>And above all, I hope you see the same benefit and potential in <a href="https://keyoxide.org">Keyoxide.org</a> as I do and would like to see it grow as an open and accessible platform to push forward the democratization of online privacy and security.</p>
<p>Privacy is not a luxury.</p>
<p>Many thanks to <a href="https://metacode.biz/@wiktor">Wiktor</a> for helping with the decentralized identity proofs.</p>
Webmentiond FTW2020-07-01T10:31:49+00:002020-07-01T10:31:49+00:00
Unknown
https://yarmo.eu/blog/webmentiond-ftw/<p>I just read <a href="https://www.garron.blog/posts/webmentiond-working.html">this post by Guillermo</a> which is a great general overview of <a href="https://indieweb.org/Webmention">webmentions</a> and in particular, the implementation by Horst Gutmann named <a href="https://zerokspot.com/weblog/2020/06/14/setting-up-webmentiond/">webmentiond</a>. An absolute delight to use! Glad you got it working, Guillermo, and happy to have been of help!</p>
Github is sinking2020-06-29T13:06:56+00:002020-06-29T13:06:56+00:00
Unknown
https://yarmo.eu/blog/github-sinking/<p><em>If you're looking for a more reasoned argumentation, see Update 3 at the bottom.</em></p>
<p>I rarely interact with <a href="https://github.com">Github</a> anymore. All my projects are either on my selfhosted <a href="https://gitea.io">Gitea</a> instance or on <a href="https://codeberg.org/">Codeberg.org</a>. That's why I missed the following on <a href="https://www.githubstatus.com/">Github Status</a>:</p>
<p><img src="/content/img/github_status.png" alt="Github status shows a lot of downtimes" /><br />
<em>Yikes</em></p>
<p>Yikes, indeed. How everyone handles this is up to them. Large projects will find it hard to move, no doubt.</p>
<p>My interpretation? The Microsoft Github ship is sinking and it's sinking faster every day. The beauty is: you don't need them. Instead of relying on Github, you could:</p>
<ul>
<li>selfhost your own <a href="https://gitea.io">Gitea</a> instance if you have the knowledge;</li>
<li>use <a href="https://codeberg.org/">Codeberg.org</a> which also uses <a href="https://gitea.io">Gitea</a>;</li>
<li>use <a href="https://sourcehut.org/">sourcehut.org</a> which takes a different but very solid approach to git hosting;</li>
<li>use any instance generously hosted by amazing people (think <a href="https://libreho.st/">libreho.st</a> and <a href="https://chatons.org/">Chatons</a>);</li>
<li>use <a href="https://gitlab.com/">gitlab.com</a> or selfhost an instance.</li>
</ul>
<p>There are so many better places to be for git hosting nowadays. For an easy performance comparison of different services, see <a href="https://forgeperf.org/">forgeperf.org</a>.</p>
<p>Abandon the corporate ship before or after it sinks, up to you.</p>
<hr />
<h2 id="Update_1">Update 1</h2>
<p>Added <a href="https://forgeperf.org/">forgeperf.org</a> link after suggestion by <a href="https://mstdn.io/@slow">@slow@mstdn.io</a>.</p>
<hr />
<h2 id="Update_2">Update 2</h2>
<p>Added <a href="https://sourcehut.org/">sourcehut.org</a> link after suggestion by <a href="https://social.privacytools.io/@freddyym">@freddyym@social.privacytools.io</a>.</p>
<hr />
<h2 id="Update_3">Update 3</h2>
<p>Don't publish on your website when you are feeling frustrated; that's what Twitter is for.</p>
<p>Let's inject some reason here. Github isn't dying anytime soon. Certainly not due to this number of outages. And all software breaks, so that's no measure; what matters is the response. And Github is on it. Like every single other time it was broken.</p>
<p>But that doesn't mean we can't change the status quo. Almost every defense of Github comes down to discoverability: if I put my project on Github, others will find it. If I put it elsewhere, others won't find it.</p>
<p>Do not forget: Github's discoverability comes from us, the userbase. We the developers make or break Github. If we all move, Github shuts its doors. This won't happen. But look at the landscape: so many alternative solutions exist, Github is no better than any other service and, in the eyes of some, me included, Github may actually provide a worse experience than most alternatives.</p>
<p>And about discoverability. Have you heard of social media? Blog posts? I discover a lot of new Github projects on a regular basis, and almost none of them did I discover via Github itself. People talk about good projects and share them, plain and simple.</p>
<p>If you simply like Github and their network and their continuously "evolving" UI, have at it. To each their own.</p>
<p>If you don't like Github, do not stay. Be the change you want to see.</p>
Set default git branch to main2020-06-25T11:37:43+00:002020-06-25T11:37:43+00:00
Unknown
https://yarmo.eu/blog/git-main/<p>For a while, we've all been seeing the "switch git default branch from master to main" posts, the earliest I recall having been <a href="https://www.hanselman.com/blog/EasilyRenameYourGitDefaultBranchFromMasterToMain.aspx">written by Scott Hanselman</a>. I've been postponing the change for a bit, but it was <a href="https://www.thorlaksson.com/im-changing-the-default-branch-name-in-my-git-repositories-and-you-should-too/">the post by Kristófer Reykjalín</a> that gave the required motivation to go out and just do it.</p>
<p>For new repositories, <a href="https://gitea.io">Gitea</a> already has <a href="https://github.com/go-gitea/gitea/pull/10803">the option to set the default branch name</a>.</p>
<p>For existing repositories, the <a href="https://www.hanselman.com/blog/EasilyRenameYourGitDefaultBranchFromMasterToMain.aspx">commands provided by Scott</a> work perfectly:</p>
<pre data-lang="bash" style="background-color:#212733;color:#ccc9c2;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#ffd580;">git</span><span> checkout master</span><span style="color:#f29e74;">; </span><span style="color:#ffd580;">git</span><span> branch</span><span style="color:#ffcc66;"> -m</span><span> master main</span><span style="color:#f29e74;">; </span><span style="color:#ffd580;">git</span><span> push</span><span style="color:#ffcc66;"> -u</span><span> origin main
</span></code></pre>
<p>Yes, a one-liner :) If you like to take things more slowly, here it goes:</p>
<pre data-lang="bash" style="background-color:#212733;color:#ccc9c2;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#ffd580;">git</span><span> checkout master </span><span style="font-style:italic;color:#5c6773;"># switch to master branch
</span><span style="color:#ffd580;">git</span><span> branch</span><span style="color:#ffcc66;"> -m</span><span> master main </span><span style="font-style:italic;color:#5c6773;"># move existing master branch to main (keeping history)
</span><span style="color:#ffd580;">git</span><span> push</span><span style="color:#ffcc66;"> -u</span><span> origin main </span><span style="font-style:italic;color:#5c6773;"># push the main branch to the server
</span></code></pre>
<p>If you use <a href="https://gitea.io">Gitea</a>, you should now go to the repository's Settings > Branches and set the main branch as the default branch. Once done, you can safely delete the obsolete master branch with the command below.</p>
<pre data-lang="bash" style="background-color:#212733;color:#ccc9c2;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#ffd580;">git</span><span> push</span><span style="color:#ffcc66;"> --delete</span><span> origin master </span><span style="font-style:italic;color:#5c6773;"># delete the master from the server
</span></code></pre>
<p>As for my website, here's the <a href="https://git.yarmo.eu/yarmo/yarmo.eu/commit/78e18c55c59c8e65e99013718cd42154ddb7ebd6">commit</a> that completed the transition, making sure my CI/CD solution knows what to listen to.</p>
<hr />
<h2 id="Update_1">Update 1</h2>
<p>Thanks to <a href="https://charlespence.net/">Charles Pence</a> for reminding me to add the <code>git push --delete origin master</code> line.</p>
<hr />
<h2 id="Update_2">Update 2</h2>
<p>Thanks to <a href="https://charlespence.net/">Charles Pence</a> for reminding me that a Gitea branch cannot be deleted as long as it's the default.</p>
Added /now, /feeds, /uses2020-06-21T00:54:10+00:002020-06-21T00:54:10+00:00
Unknown
https://yarmo.eu/blog/added-now-feeds-uses/<p>My website now has dedicated <a href="/now">/now</a>, <span class="line-through">/feeds and /uses pages</span>, inspired by several trends in blogging. A new <a href="/friends">/friends</a> (or something similar) will follow soon!</p>
No to .io, yes to .xyz!2020-06-18T13:09:19+00:002020-06-18T13:09:19+00:00
Unknown
https://yarmo.eu/blog/no-io-yes-xyz/<blockquote>
<p>TL;DR: I openly urge all FOSS projects and startups to reconsider registering .io ccTLD domains, opting instead for truly generic TLDs like .xyz</p>
</blockquote>
<p><strong>.io</strong> is dead, long live <strong>.xyz</strong>!</p>
<p><em>UPDATE: Long live .xyz, .org, and many other gTLDs! Please see <a href="https://yarmo.eu/blog/no-io-yes-xyz/#update-2">Update 2</a> below.</em></p>
<p>Is that an exaggerated statement? Yes, yes it is. But all new projects (and startups?) should reconsider their choice of TLD.</p>
<h2 id="The_case_in_favor_of_.io">The case in favor of <strong>.io</strong></h2>
<p>The <em>de-facto</em> choice is <a href="https://en.wikipedia.org/wiki/.io">.io</a>. Numerous startups use it as a way to make their offering look more legitimate, due to the long history of it being used by businesses, starting in <a href="https://en.wikipedia.org/wiki/.io#History">1998 with levi.io, registered by Levi Strauss &amp; Co.</a>. The appeal comes from the shortness of the TLD and, in the high-tech sector, it being the abbreviation for "input/output".</p>
<h2 id="The_case_against_.io">The case against <strong>.io</strong></h2>
<p>But it doesn't mean "input/output". It stands for <a href="https://en.wikipedia.org/wiki/British_Indian_Ocean_Territory">British Indian Ocean Territory</a>: it is a ccTLD (a country code TLD), not a generic TLD (gTLD).</p>
<p>Look at the <a href="https://en.wikipedia.org/wiki/.io">logo on the Wikipedia page</a>. Looks techy, right? Everyone knows what the intended use was ("entities connected with British Indian Ocean Territory") and what the actual use is ("startup companies and browser games; little if anything related to the territory itself"), and everyone is happy to play along because $$$.</p>
<p><strong>.io</strong> is one of the most expensive TLDs out there (overview on <a href="https://www.domaincompare.io/">domaincompare.io</a> and no, the irony is not lost on me ^_^). Stating the obvious, this is not due to the British Indian Ocean Territory having become such a hot property over the last decade.</p>
<p>The tech industry has appropriated the <strong>.io</strong> ccTLD and everyone is cashing in on it. Everyone?</p>
<h2 id="Colonial_history_and_.io">Colonial history and <strong>.io</strong></h2>
<p>In 2014, Gigaom reported in two separate articles (<a href="https://gigaom.com/2014/06/30/the-dark-side-of-io-how-the-u-k-is-making-web-domain-profits-from-a-shady-cold-war-land-deal/">article 1</a>, <a href="https://gigaom.com/2014/07/11/uk-government-denies-receiving-io-domain-profits/">article 2</a>) what shady practices happen behind the scenes of the <strong>.io</strong> TLD management. Afraid of not doing the story any justice with my words, I ask you to read both articles and make up your own opinion on the matter.</p>
<p>The first one describes how the UK profits from <strong>.io</strong> while denying any claims from the <a href="https://en.wikipedia.org/wiki/Chagossians">Chagossians, the people native to the Chagos Islands</a>, whom they <a href="https://en.wikipedia.org/wiki/Expulsion_of_the_Chagossians">expelled from the islands</a> (a matter which is <a href="https://en.wikipedia.org/wiki/Expulsion_of_the_Chagossians#2018_ICJ_hearing">still ongoing in 2020!</a>). The second article describes how, in response to the first article, the UK government denied receiving profits and therefore argued that no profits should be shared with the Chagossians.</p>
<p>In January 2020, <a href="https://www.theguardian.com/world/2020/jan/05/uk-forfeit-security-council-chagos-islands-dispute">The Guardian wrote</a>:</p>
<blockquote>
<p>Last February the <em><strong>International Court of Justice (ICJ)</strong></em>, the principal judicial body of the United Nations, issued an advisory opinion that <em><strong>found the UK was in unlawful occupation of the islands</strong></em> and demanded that they be returned to Mauritius as quickly as possible.</p>
<p>The <em><strong>UN general assembly endorsed the opinion</strong></em> in May and set a deadline for implementation of 22 November 2019, which the <em><strong>UK ignored</strong></em>.</p>
</blockquote>
<p>One may not agree with me, but it is my interpretation that, since the Chagossians aren't seeing any of the profits from the ccTLD that corresponds to the land they lived on but were forcibly removed from, <strong>by buying .io domains, one directly supports the ongoing behavior of the UK government in defending its colonial history and acting against human rights</strong>.</p>
<h2 id="FOSS_and_.io">FOSS and <strong>.io</strong></h2>
<p>Why do we make FOSS software? Because we believe in openness and equality. It doesn't matter who you are, you can use my software, you can modify it, you can redistribute it.</p>
<p>Everything that has happened with the Chagossians and the <strong>.io</strong> TLD is in stark opposition to the core principles of the FOSS community.</p>
<h2 id="The_case_in_favor_of_.xyz">The case in favor of .xyz</h2>
<p>The <a href="https://en.wikipedia.org/wiki/.xyz">.xyz TLD</a> is fun, small, refreshing, funky, a whole lot cheaper, and using it doesn't support colonialism.</p>
<h2 id="Final_words">Final words</h2>
<p>If you choose to make your projects FOSS, you choose to uphold and respect certain principles and human rights, such as the <a href="https://en.wikipedia.org/wiki/Free_and_open-source_software#Four_essential_freedoms_of_Free_Software">Four Essential Freedoms of Free Software</a>.</p>
<p>It is my opinion that buying a <strong>.io</strong> TLD domain directly opposes all a FOSS developer stands for.</p>
<p>I openly urge all FOSS projects and startups to reconsider registering <strong>.io</strong> ccTLD domains, opting instead for truly generic TLDs like <strong>.xyz</strong>.</p>
<h2 id="Disclaimer">Disclaimer</h2>
<p>I have bought .io domains in the past. I did not know what was going on behind the <strong>.io</strong> TLD. Now that I do, I will let them expire and NOT renew them. I will also never again buy a ccTLD whose common use strays from its intended purpose, namely representing the territory it is associated with.</p>
<hr />
<h2 id="update-1">Update 1</h2>
<p>There is also the issue of the <strong>.io</strong> TLD's <a href="https://www.prolificlondon.co.uk/marketing-tech-news/tech-news/2019/05/future-popular-io-domains-question-over-british-empire-row">future</a>:</p>
<blockquote>
<p>But the UK faces significant international pressure over the Islands, and in the event that they are returned, control over the .io TLD would likely pass to the Mauritian government.</p>
</blockquote>
<p>Who knows what will happen to your domain registration when control is passed to the Mauritian government? Why risk the future of your domain just so you can associate your brand and/or product with the words "input/output"?</p>
<hr />
<h2 id="update-2">Update 2</h2>
<p>It has been pointed out by many that this post focuses too much on the <strong>.xyz</strong> gTLD. This was not my intention. In fact, any gTLD will do just fine; after all, they are generic. A non-exhaustive list of gTLDs that could perfectly replace <strong>.io</strong> (assuming <strong>.io</strong> simply stands for "input/output"):</p>
<ul>
<li>.net</li>
<li>.org</li>
<li>.tech</li>
<li>.site</li>
<li>.link</li>
<li>.systems</li>
<li>.computer</li>
</ul>
<p>What is important is that you can identify with the TLD, be it <strong>.net</strong> or even <strong>.ooo</strong>.</p>
<p>I would also like to point out that <strong>.io</strong> is not the only ccTLD that is often "misused" (IMO) as a gTLD. Think of <strong>.ai</strong>, <strong>.tv</strong>, <strong>.to</strong> and <strong>.ly</strong>, to name just a few. The reason I single out <strong>.io</strong> is that this one in particular has a lot of controversy around it, controversy that has lasted for decades and is still active. I haven't found the same level of conflict with the other ccTLDs. If there is, do let me know; I read all the links posted in the discussion below.</p>
OPSV: Open PGP Signature Verification2020-06-17T11:51:39+00:002020-06-17T11:51:39+00:00
Unknown
https://yarmo.eu/blog/opsv/<h1 id="Introduction_to_OPSV">Introduction to OPSV</h1>
<p>I'd like to introduce a new project of mine named <a href="https://opsv.foss.guru">Open PGP Signature Verification</a> or OPSV, a FOSS solution for easy PGP signature verification. I have copy-pasted the README from the <a href="https://codeberg.org/yarmo/opsv">Codeberg repo</a> below and added a "Why make this project?" section containing opinions.</p>
<h2 id="About">About</h2>
<p>This project uses <a href="https://openpgpjs.org/">openpgp.js</a> loaded in the browser, meaning all processing is done on the device itself and no data is ever sent to the server. It supports loading public keys directly through:</p>
<ol>
<li>plaintext input</li>
<li>web key directory (WKD)</li>
<li>HTTP Keyserver Protocol (HKP)</li>
</ol>
<p>OPSV will always use the first input method it detects in the order described above.</p>
<p>It's also possible to not provide a public key. Read more about this in the <code>Using no public key at all</code> section below.</p>
<h2 id="Usage">Usage</h2>
<p>Visit https://opsv.foss.guru/. On this website, you can enter a signed message (see example below) and any of the three supported public key inputs to verify that the owner of that public key was indeed the person to have signed that message.</p>
<h2 id="Example">Example</h2>
<p>Let's say I, Yarmo, would really like the world to know that I like pineapple. Using my private key, I've signed that statement so you can verify I wrote that message.</p>
<p>The signed statement:</p>
<pre style="background-color:#212733;color:#ccc9c2;"><code><span>-----BEGIN PGP SIGNED MESSAGE-----
</span><span>Hash: SHA256
</span><span>
</span><span>I like pineapple.
</span><span>-----BEGIN PGP SIGNATURE-----
</span><span>
</span><span>iQJDBAEBCAAtFiEEog/Pt4tEmnyVrrtlNzZ/SvQIetEFAl70mVUPHHlhcm1vQHlh
</span><span>cm1vLmV1AAoJEDc2f0r0CHrRQXIP/08uza9zOtmZXv5K+uPGVzDKwkgPgZJEezX7
</span><span>6iQ358f1pjSRvYfQ5aB13k2epUHoqCKArMYu1zPqxhvLvvAvp8uOHABnr9NGL3El
</span><span>u7UUgaeUNHkr0gxCKEq3p81abrrbbWveP8OBP4RyxmaFx13Xcj7mfDluiBHmjVvv
</span><span>WU09EdH9VPlJ7WfZ+2G2ZZDHuE5XiaeP7ocugTxXXLkp33zwpDX0+ZuCIXM6fQGe
</span><span>OccSffglFPdNBnfasuuxDWxTQPsEbWGOPJV+CAPmBDeApX+TBF9bovO3hw4Uozk2
</span><span>VT7EAy8Hb0SOrUb3UNGxzoKv++5676IxyB4JXX0Tr9O4ZxhO8o9pEEHwirtn/J1+
</span><span>MWven4gVlWM/6bMeUqx6ydyNc2nqF5059yfRmwGMlp09x82G4x1bcf6aDZ+5njDG
</span><span>fS5T2OpXRIkZHJx8BhmZjsxiDR0KV44zwHpt06+96ef3EDWB0BcP6M+a5Rtc33zf
</span><span>irRmQd2M6RLyXCYtdGIiiAFRuomw802U4F0P4LwVrZdbGA6ObqBv1k8BUFCMbMz8
</span><span>Ab4hF7kO4z0Vh3JaKzcHey0pOzdNCPpAHZ51sAoAnFDM4PdMBgQxxVweCMu4KYMZ
</span><span>FN8sNn42oY/b7gDmwCelVhgD+rvUn/a8+B7CDmCp+wIquyrjrTt00voATcb+ZPMJ
</span><span>pTXJ/NcM
</span><span>=rqTX
</span><span>-----END PGP SIGNATURE-----
</span></code></pre>
<p>Use this as "Signature" on <a href="https://opsv.foss.guru/">OPSV</a>.</p>
<h3 id="Using_plaintext_public_key">Using plaintext public key</h3>
<p>Now, let's check the signature. Go to <a href="https://yarmo.eu/pgp">my personal website</a> and copy-paste the "plaintext" key in the "Public Key (1: plaintext)" field.</p>
<p>You will see a green message confirming that my key was used to sign this message. I really do like pineapple.</p>
<h3 id="Using_web_key_directory_(WKD)">Using web key directory (WKD)</h3>
<p>Remove the contents from the "Public Key (1: plaintext)" field. Now, in the "Public Key (2: web key directory)", write <code>yarmo@yarmo.eu</code> and verify the signature again. It is still verified. Try using <code>jane@doe.org</code> or any other input, it won't verify.</p>
<h3 id="Using_HTTP_Keyserver_Protocol_(HKP)">Using HTTP Keyserver Protocol (HKP)</h3>
<p>Remove the contents from the "Public Key (2: web key directory)" field. I uploaded my keys to the https://keys.openpgp.org/ HKP server, which is the default server used by OPSV. All you need to do is once again go to <a href="https://yarmo.eu/pgp">my personal website</a> and copy-paste the "Fingerprint" in the "Public Key (3: HKP)" field (the second field!). Still verified!</p>
<h3 id="Using_no_public_key_at_all">Using no public key at all</h3>
<p>Wait, what? Then what am I verifying the signature against?</p>
<p>PGP signatures can contain the <code>userId</code> of the signer. If OPSV finds a <code>userId</code>, it will use it to perform a HKP lookup.</p>
<p>Remove the contents from the "Public Key (3: HKP)" field. It again verifies BUT against the information contained within the signature itself. You should carefully check the information OPSV returns. In this case, the authenticity is confirmed because the <code>userId</code> (yarmo@yarmo.eu) matches the one I use.</p>
<p>The signature below does not contain a <code>userId</code>:</p>
<pre style="background-color:#212733;color:#ccc9c2;"><code><span>-----BEGIN PGP SIGNED MESSAGE-----
</span><span>Hash: SHA256
</span><span>
</span><span>I like pineapple.
</span><span>-----BEGIN PGP SIGNATURE-----
</span><span>
</span><span>iQIzBAEBCAAdFiEEog/Pt4tEmnyVrrtlNzZ/SvQIetEFAl70mT4ACgkQNzZ/SvQI
</span><span>etHUNBAAlswF4Q5IkPTsMELZPNHBps8CUJUeDWu3HlSz7c2U+4h2jJztHD0mDtxH
</span><span>PqKzUnqQqNF1Bot//5xoOcn+m6UaSCzDk1oQFwD6LlQA+ScnIXddoV3xLqzTRAMe
</span><span>dyuqOoDzoVeD+fWlwisnGElYX5jHRX6tgyKNh0auR3/crQUIJazAyeDwZFdJiwaL
</span><span>ntd+d8T0BcVlVVPYN7RIp1hpT+PLIcwIsr64Myfy8SOa4cjVcQgnrhR/Lfz9680T
</span><span>LCpnSohHRiA82nMGRiapEv+s+zy1NUZnVYbU2Li+Q0nYdSoDFu0xEBYmLOxwS50H
</span><span>j6kK0ZyRicNeq2T25aIlieliTmSFLHHpzi/Zw8Yt1+FtZvWf4pstA19ahk7AQK5W
</span><span>zYF2bMO2xn5D4/pRz1P4e2NTWYeIK+ZHttc7T9ZSS9Ffo03fjcJXhson3WcQZKB5
</span><span>VIGVVFnlWujNYYotmxys84OtE6ePfVRwHasIOLfknVq64RVo68Y1Pgw/KPXSb1k6
</span><span>3r+YD0mt5i/NWpwm79G/Aq54WI5JT905div88d0Bbpa3dScTZ2MiBJbP96pZBcKl
</span><span>dpm3RnjsbCFgZqEpclrEh2SD1e8eCjrNcouWK3jIfOkaWB2xk1KvNmdyQQTs3dkP
</span><span>/CpKcCJiNVvY9ogWxg9aUuQZUn4WvCvaEkmP4dfkk9s8yAKPQf8=
</span><span>=QqCq
</span><span>-----END PGP SIGNATURE-----
</span></code></pre>
<p>Once again, the signature verifies. And again, it only verifies against the information contained within itself so <strong>that doesn't prove anything about its authenticity</strong>. Anyone can write this and the signature will return verified.</p>
<p>Except now, there is no <code>userId</code> for easy manual verification. So, you need to either take the <code>keyId</code> or the <code>fingerprint</code> and find some other way of verifying it, for example by contacting the person who supposedly wrote the message.</p>
<p>In my case, you can simply visit <a href="https://yarmo.eu/pgp">my personal website</a> and compare the <code>fingerprint</code>.</p>
<h3 id="What_can_a_bad_actor_do?">What can a bad actor do?</h3>
<p>A bad actor could not sign a statement with my private key: I, and only I, have access to it.</p>
<p>A bad actor could, however, simply take any of my signed messages and change the content, like so:</p>
<pre style="background-color:#212733;color:#ccc9c2;"><code><span>-----BEGIN PGP SIGNED MESSAGE-----
</span><span>Hash: SHA256
</span><span>
</span><span>I like privacy invasion.
</span><span>-----BEGIN PGP SIGNATURE-----
</span><span>
</span><span>iQJDBAEBCAAtFiEEog/Pt4tEmnyVrrtlNzZ/SvQIetEFAl70la8PHHlhcm1vQHlh
</span><span>cm1vLmV1AAoJEDc2f0r0CHrRDcYP/R4Yo+xiBLHtrOEMAQQkwbxWyQgCbjS4h9iF
</span><span>As86o9a+t5dKSsL4gSoB3sdNAL0a1ZOhaAU8kWaR6xN1RvCQjFr878hEf631yai6
</span><span>DfF2eRZPEsjXkAzlOKlPrAvtrNwUUMSDk20rGa4A9HHdxpfrmDRIgVaz3uNr1qqc
</span><span>N/Ag3OK/2l1pZFTqjPekqDnXwblLiuTTLFlMlS80LFKoa7zZLkE5SD5O6WQFpOK7
</span><span>DwYJk1+UjWDgVugz8rSLWag0mag9Z815furPIkU9yRmE1tIjsgpCR+uJA/e0I4bn
</span><span>4Ei0M29df1QucDNv6q2WoW/7rCMz1IY796TY/BbdqbFk6vOUUHu596mQB+fJDNTX
</span><span>jGC0SpJPhzhgoZICzK8yWJMJGoLXScYj95rCAqjYdnW/LDdAgODCyjSOxnzdI1zi
</span><span>prQf4OmayHzDjI8Bo4bl22toPdSIDt3r5MmSGXcmBrNU16ea7FC9MnR8dkKfHD55
</span><span>tC3UL2Ps/iU76kqzGAei1PKvaVqKogUGi/kqWzfi2eg+useHRyZpJrJv3R2mE0Y2
</span><span>eSLMMJ5cTuM60c0GSIPOxzBBsMRwa0HmEQ3HKgpnpkVYxoA00/hq91kuNavqUqM+
</span><span>OyOgbb21woPAG+S4OCHkOINEAooeCfhpSFtmpa87sUcfvDHUuX1ivL4rYoQO3cT2
</span><span>gNfjdSiB
</span><span>=tqZV
</span><span>-----END PGP SIGNATURE-----
</span></code></pre>
<p>Given the wording of the statement, you naturally doubt that it originated from me. You run it through OPSV and indeed, this is not what I wrote!</p>
<p>You know me, "I despise privacy invasion." (hint hint).</p>
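<p>For those curious how this looks outside the browser, the same tamper check can be reproduced with GnuPG. A minimal sketch, assuming gpg >= 2.1 and GNU sed; the keyring and identity below are throwaways, not my real key:</p>

```shell
# Sign a statement, tamper with it, and watch verification fail.
# Assumes GnuPG >= 2.1 and GNU sed; keyring and identity are throwaways.
export GNUPGHOME="$(mktemp -d)"
gpg -q --batch --passphrase '' --pinentry-mode loopback \
    --quick-generate-key demo@example.invalid
echo "I like pineapple." |
  gpg -q --batch --passphrase '' --pinentry-mode loopback --clearsign > statement.asc
gpg --verify statement.asc        # reports a good signature
sed -i 's/pineapple/privacy invasion/' statement.asc
gpg --verify statement.asc || echo "tampering detected"
```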
<h2 id="Why_make_this_project?_(with_opinions)">Why make this project? (with opinions)</h2>
<p>This project directly targets a specific use-case of <a href="https://keybase.io">Keybase</a>.</p>
<p>It is possible to upload your public key (don't upload your private key…) to the Keybase servers. When you sign a message using your private key, anyone can verify that you wrote that message by simply using their <a href="https://keybase.io/verify">verify page</a>. It's really simple to use, but you'll notice something is missing: a field asking you which key to use for the verification. What Keybase does is check the message against all of the keys it knows about and then let you know which of its users wrote and signed that message.</p>
<p>It is my humble opinion that this is an anti-pattern. By not being able to verify against a single key, you open the door to impersonation: I can make an account named <code>j0hn</code> and pretend to be <code>john</code>. If I write a false statement and sign it with <code>j0hn</code>'s key, Keybase will gladly tell you that the message is legit and signed: it is, but by the wrong person. It is up to the user to then investigate <code>j0hn</code>'s Keybase account and figure out if it belongs to <code>john</code> or some bad actor.</p>
<p>Considering recent events, namely Keybase's <a href="https://keybase.io/blog/keybase-joins-zoom">acquihire by Zoom</a> and Zoom's willingness to bend to <a href="https://www.theguardian.com/technology/2020/jun/03/zoom-privacy-law-enforcement-technology-yuan">US law enforcement</a> and <a href="https://www.nytimes.com/2020/06/11/technology/zoom-china-tiananmen-square.html">Chinese</a> influence, combined with their <a href="https://github.com/keybase/client/issues/24105">unwillingness to release the server source code</a>, I strongly urge all to <a href="https://yarmo.eu/notes/deletekeybase">#deletekeybase</a>. They are not worthy of your keys and your data. It is a mystery what happens to your keys once you give them to Keybase, and with its employees now working for a company eager to please privacy-invading governments, why would you? Seriously, why would you ever give your valuable private keys to Keybase?</p>
<p>I was having this discussion on the fediverse recently, and a privacy-minded individual was still forced to use Keybase for the simple reason that it was the easiest, most beginner-friendly way of verifying PGP signatures that didn't involve installing complicated software and handling PGP keys.</p>
<p>Well now, there is <a href="https://opsv.foss.guru">OPSV</a>. It has the same intuitive copy-paste workflow as Keybase does, with the only additional step of having to copy-paste a plaintext key, email address or fingerprint (which, in my book, is a feature!). Processing is done client-side, so no data is sent to any server.</p>
<h3 id="Why_include_privacy-friendly_plausible.io_stats?">Why include privacy-friendly plausible.io stats?</h3>
<p>Well, without sounding cocky, I humbly believe this is the first project I made that could actually make a difference to people's workflow on the internet. As such, if usage suddenly spikes, I need to know if the server can handle it.</p>
<p>Because asking users to accept website statistics is asking much, in my opinion, I decided a nice compromise was to make the <a href="https://plausible.io/opsv.foss.guru">statistics public</a>.</p>
<p>If open statistics, or any statistics at all, are not to your liking, please do let me know by <a href="https://codeberg.org/yarmo/opsv/issues">opening an issue</a>.</p>
<h2 id="Final_words">Final words</h2>
<p>I hope you like this project, I know I do. OPSV allows me to use signed messages more and provide a simple and secure way to verify their authenticity without relying on big corporations. This is our web, so it's also our duty to keep it secure.</p>
<hr />
<h2 id="Update_1">Update 1</h2>
<p>Added the <code>Using no public key at all</code> section.</p>
Website is now open source!2020-06-11T16:09:40+00:002020-06-11T16:09:40+00:00
Unknown
https://yarmo.eu/blog/website-open-sourced/<p>It's finally here: the <a href="https://git.yarmo.eu/yarmo/yarmo.eu">source code of this website</a> on my selfhosted gitea instance. It was delayed because, even though the current codebase does not contain secret keys or passwords, this has been the case in the past and the git history is easily searchable. I have deleted the old git project and started afresh.</p>
<p>From now on, the source code and the <a href="https://drone.yarmo.eu/yarmo/yarmo.eu/">drone CI/CD pipelines</a> that go with the website are all open and available. This should make the content on the website more trustworthy as you can now review the code that generated the content. It is also my belief that the open-sourcing of this website is beneficial to all including myself: it gives you a chance to see the inner workings and perhaps pick up a trick or two, and if you see a blatant mistake, bad coding practices or other errors, I trust you will <a href="/contact">let me know</a>.</p>
<p>Enjoy and thanks for taking the time to be here :)</p>
Start of the Plausible experiment2020-06-11T12:01:57+00:002020-06-11T12:01:57+00:00
Unknown
https://yarmo.eu/blog/plausible-start/<p>During the roughly six months since I started this website, I have not been using any website statistics whatsoever. I did not see the point: this website was not designed to gather an audience in any fashion; it was primarily meant to be a permanently updated online CV. Given that I am leaving academia, which I had been preparing for over the last nine years, I figured I could use any means of getting my name out there.</p>
<p>Recently, I have taken an interest in blogging about selfhosting, online privacy and related technical subjects. In an attempt to understand if people see these articles or any other section of my website, I will start an experiment gathering statistics using the privacy-friendly <a href="https://plausible.io">Plausible</a>.</p>
<h2 id="The_Plausible_experiment">The Plausible experiment</h2>
<p>In a month or so, I will look back at the data gathered and see if anything of interest can be learned. The danger is that, once some articles are observed to perform better than others, the writing process gets changed to conform to what the statistics say performs best.</p>
<p>This is not my intention, for the simple reason that this blog is not made to target a specific audience but rather to serve as an outlet for things I learn and that interest me. If I notice my writing behavior change due to insights gained from statistics, the experiment ends.</p>
<h2 id="A_comparative_experiment">A comparative experiment</h2>
<p>In the near-future, I will also compare what can be learned from a "client-side" statistics solution like <a href="https://plausible.io">Plausible</a> with what can be learned from a "server-side" statistics solution like <a href="https://goaccess.io">GoAccess</a>.</p>
<p>The reason I am not performing this comparative experiment right now is because both solutions above manage to not support a single common log format. It seems it was decided a month or so ago that <a href="https://github.com/allinurl/goaccess/issues/1768#issuecomment-629652452">GoAccess should conform to Caddy's format</a> (<a href="https://github.com/caddyserver/caddy/issues/3417#issuecomment-629836804">separate issue on Caddy's side</a>). Until that happens (or until I figure out a way to parse Caddy's log format in GoAccess), this comparative experiment will have to wait.</p>
<hr />
<h2 id="Update">Update</h2>
<p>The comparative experiment is back on! Thanks to <a href="https://fosstodon.org/@AlexMV12">@AlexMV12</a> and this <a href="https://alexmv12.xyz/blog/goaccess_caddy/">blog post</a> he wrote, I now have a working bash script to analyze Caddy's log file. See you in thirty days!</p>
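<p>For the curious, the heart of such a conversion can be sketched with <code>jq</code>. This assumes Caddy v2's default JSON field names (<code>request.remote_addr</code>, <code>status</code>, <code>size</code>) and an invented sample log line; the actual script linked above may differ:</p>

```shell
# Convert one Caddy v2 JSON access-log line to Common Log Format
# for GoAccess. Field names assume Caddy v2 defaults; sample is invented.
echo '{"request":{"remote_addr":"203.0.113.7:54321","method":"GET","uri":"/","proto":"HTTP/1.1"},"status":200,"size":923}' |
  jq -r '"\(.request.remote_addr) - - \"\(.request.method) \(.request.uri) \(.request.proto)\" \(.status) \(.size)"'
# prints: 203.0.113.7:54321 - - "GET / HTTP/1.1" 200 923
```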
About my avatar2020-06-10T22:18:19+00:002020-06-10T22:18:19+00:00
Unknown
https://yarmo.eu/blog/avatar/<p>Every so often, I get asked about the origin and make of my avatar, as seen on my <a href="/">website</a> and my <a href="https://fosstodon.org/@yarmo/">Fosstodon profile</a>. So, here it is.</p>
<p>Inspired by the avatars of <a href="https://fosstodon.org/@kev">Kev@fosstodon.org</a> and <a href="https://fosstodon.org/@mike">Mike@fosstodon.org</a>, both drawn by Kev, I decided to draw my own in a similar style. The keen-eyed among you will indeed spot a few differences in design.</p>
<p>I used <a href="https://inkscape.org/">Inkscape</a> to draw over a photo of mine, simple vectors only, no special brushes required.</p>
<p>Due to the similarity, I did ask Kev to confirm he had no objections to me using this avatar as my profile picture. Other than using their avatars as stylistic references, there are no other links between my avatar and theirs, their owners or the <a href="https://fosstodon.org">Fosstodon instance</a>.</p>
Friendly reminder to clean your NUC's fan2020-06-08T13:35:15+00:002020-06-08T13:35:15+00:00
Unknown
https://yarmo.eu/blog/nuc-fan-cleaning/<p><a href="https://www.intel.com/content/www/us/en/products/boards-kits/nuc.html">Intel NUCs</a> make for some great low-entry-barrier, low-power-consumption servers and homelabs. I have three NUCs at home, two of which have played a server role. They span several generations: a 5i3, a 7i7 and an 8i5.</p>
<p>And they all have one thing in common: sooner or later, their fans clog up with dust, they heat up, they make more noise and perform worse.</p>
<p>If you haven't cleaned the fan in a while, your best bet is to open the NUC up and clean the fan and the exhaust.</p>
<p>To prevent having to open a NUC up too often, I bought a few cans of compressed air and regularly blow air through the device. I'm also looking into placing air filters near the air intake.</p>
<p><img src="/img/blog/nuc_temp_fan_cleaning.png" alt="NUC cools down when fan is cleaned" /><br />
<em>Can you tell when compressed air was applied to the NUC?</em></p>
Optimizing the website's load performance2020-06-05T22:47:21+00:002020-06-05T22:47:21+00:00
Unknown
https://yarmo.eu/blog/website-load-performance/<h2 id="My_old_webhosting">My old webhosting</h2>
<p>When I started making websites back in 2010 or so (maybe even earlier, I don't remember), I used shared hosting, as I did not have the slightest clue about how a server worked, let alone how to set one up for web hosting. About two or three years ago, I switched to <a href="https://www.cloudways.com/en/">Cloudways</a>, which lets you host websites on a virtual private server (VPS) while still not requiring any actual knowledge about the inner workings of a server.</p>
<h2 id="My_new_webhosting">My new webhosting</h2>
<p>However, I've been managing my own private server for almost two years so I felt confident I could do the hosting myself. Hip as I am (I am not), I decided to go with a <a href="https://caddyserver.com/">Caddy server</a> as a <a href="https://www.docker.com/get-started">Docker container</a> on a VPS hosted by <a href="https://www.digitalocean.com/">DigitalOcean</a>. For the low-traffic websites I currently maintain, this is largely sufficient.</p>
<p>I am in the process of moving each website one by one to the new hosting solution. It was time for this very website, <a href="https://yarmo.eu">yarmo.eu</a> and I thought to myself:</p>
<blockquote>
<p>I should actually check if I gain any website load performance by moving to this new solution.</p>
</blockquote>
<p>When asking the Fediverse, 70% predicted <a href="https://fosstodon.org/web/statuses/104285148110095796">Caddy would perform better than Cloudways</a>.</p>
<h2 id="Let's_get_testing">Let's get testing</h2>
<p>I decided to use <a href="https://www.webpagetest.org">WebPageTest.org</a> to measure load performance. For each case described below, three measurements were performed and the median measurement is displayed and analyzed.</p>
<h3 id="Cloudways">Cloudways</h3>
<p>First, a baseline measurement of my existing Cloudways solution.</p>
<p><img src="/img/blog/wpt_1_1a.png" alt="Cloudways - overview" /><br />
<em>Cloudways - overview</em></p>
<p><img src="/img/blog/wpt_1_1b.png" alt="Cloudways - rating" /><br />
<em>Cloudways - rating</em></p>
<p><img src="/img/blog/wpt_1_1c.png" alt="Cloudways - waterfall" /><br />
<em>Cloudways - waterfall</em></p>
<p>So the server returns the first byte of information after 480 milliseconds. Now, I should tell you that my website is based on <a href="https://phug-lang.com">Phug</a>, the PHP port of <a href="https://pugjs.org">pug.js templating</a>. The page is rendered in real-time and apparently, that takes a little over 300 ms.</p>
<p>It is worth noting that any other metric is then dependent on how the website is programmed. Once Cloudways has sent over the data, it no longer has any influence on load performance.</p>
<p>The website is fully loaded after 923 ms. Good to know. About a second to wait for my website to load.</p>
<p>Over on the waterfall, we see a bunch of files being downloaded simultaneously after the HTML page is loaded. The largest asset to load is the profile picture.</p>
<p>Wait, what is that <code>F</code> over on the rating? Security is NOT in order! As it turns out, Cloudways does not handle security-related HTTP headers for you… I did not know that! They <a href="https://support.cloudways.com/enable-hsts-policy/">recommend setting these headers in a .htaccess file</a>.</p>
<p>Let this be a reminder to all of you: test your websites. You might learn a trick or two.</p>
<p>Anyway, can Caddy do better?</p>
<h3 id="Caddy">Caddy</h3>
<p><img src="/img/blog/wpt_1_2a.png" alt="Caddy - overview" /><br />
<em>Caddy - overview</em></p>
<p><img src="/img/blog/wpt_1_2b.png" alt="Caddy - rating" /><br />
<em>Caddy - rating</em></p>
<p><img src="/img/blog/wpt_1_2c.png" alt="Caddy - waterfall" /><br />
<em>Caddy - waterfall</em></p>
<p>Well, as it turns out, it's largely the same performance. The first byte arrived after 459 ms, but I've run it a few times and there's really little difference between Cloudways and Caddy.</p>
<p>BUT! Learning from my previous mistakes, I configured Caddy to set up all the correct headers and won't you look at that, <code>A+</code> on the security score!</p>
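<p>For reference, such headers take only a few lines of Caddyfile. This is an illustrative Caddy v2 sketch, not my actual configuration; the domain, paths and header values are placeholders:</p>

```caddyfile
# Illustrative Caddy v2 security headers; domain and paths are placeholders.
example.com {
    root * /srv/www
    file_server
    header {
        Strict-Transport-Security "max-age=31536000; includeSubDomains"
        X-Content-Type-Options "nosniff"
        X-Frame-Options "DENY"
        Referrer-Policy "strict-origin-when-cross-origin"
    }
}
```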
<p>So that's it then?</p>
<p>Well, not really… I learned that my website wasn't running optimally because I forgot some basic HTTP headers. Did I forget more? In other words, can I do even better than this?</p>
<p>I've tried a lot of things, I'll just narrow it down to the two most important findings.</p>
<h3 id="Caddy_-_inline_most_of_it">Caddy - inline most of it</h3>
<p>As it turned out, I had a few small SVG icons and some CSS files. I tried inlining them into the HTML page, so the data would be sent in the first data transmission and no separate requests were needed. For good measure, I also minified the CSS files, which, for one file, reduced the size by 30%!</p>
<p><img src="/img/blog/wpt_1_6a.png" alt="Caddy+inline - overview" /><br />
<em>Caddy+inline - overview</em></p>
<p><img src="/img/blog/wpt_1_6b.png" alt="Caddy+inline - rating" /><br />
<em>Caddy+inline - rating</em></p>
<p><img src="/img/blog/wpt_1_6c.png" alt="Caddy+inline - waterfall" /><br />
<em>Caddy+inline - waterfall</em></p>
<p>On the waterfall above, you can clearly see the <code>dank-mono.css</code> was not inlined. I tried multiple configurations, but there was no real gain, as the image also needed to load and took longer anyway. So, all in all, inlining the SVG and CSS content did little in this case.</p>
<p>Also, note the regression from <code>A+</code> to <code>A</code> on the security score. There was one header I couldn't quite get working properly so I had to disable that one, other than that, it's working better than it ever has.</p>
<p>What drew my attention for the final step was the <code>B</code>. My server responds within 480 ms and that is still not good enough for you, WebPageTest? Ok, have it your way.</p>
<p>What takes my server so long to respond? Well, obviously, it must be the templating. Can I improve the template? Perhaps. But as it turns out, I don't have to! Ever heard of caching?</p>
<p>As described on <a href="https://phug-lang.com/#usage">their website</a>, PHUG has support for caching and even calling an optimized version of their renderer. So I applied both caching and optimized rendering.</p>
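<p>PHUG's real API differs (see their documentation for the details); the sketch below just illustrates the general idea in Python: pay the expensive compile step once, and reuse the result until the source actually changes. The class name and behavior here are my own invention, not PHUG's.</p>

```python
import hashlib
import os
import tempfile

class CachingRenderer:
    """Toy illustration of template caching: recompile a template only
    when its source changes (tracked by a content hash)."""
    def __init__(self):
        self._cache = {}          # path -> (source digest, compiled result)
        self.compilations = 0

    def _compile(self, source: str) -> str:
        # Stand-in for an expensive step (PHUG parses Pug into PHP here).
        self.compilations += 1
        return source.upper()

    def render(self, path: str) -> str:
        with open(path) as f:
            source = f.read()
        digest = hashlib.sha256(source.encode()).hexdigest()
        hit = self._cache.get(path)
        if hit and hit[0] == digest:
            return hit[1]         # cache hit: skip compilation entirely
        compiled = self._compile(source)
        self._cache[path] = (digest, compiled)
        return compiled

path = os.path.join(tempfile.mkdtemp(), "index.pug")
with open(path, "w") as f:
    f.write("h1 Hello")

renderer = CachingRenderer()
renderer.render(path)
renderer.render(path)             # second call is served from the cache
print(renderer.compilations)      # 1
```

<p>The effect is the same as what I saw on the server: the first request pays for compilation, every following request skips it, and the time-to-first-byte drops accordingly.</p>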
<h3 id="Caddy_-_PHUG_optimization">Caddy - PHUG optimization</h3>
<p><img src="/img/blog/wpt_1_7a.png" alt="Caddy+PHUG - overview" /><br />
<em>Caddy+PHUG - overview</em></p>
<p><img src="/img/blog/wpt_1_7b.png" alt="Caddy+PHUG - rating" /><br />
<em>Caddy+PHUG - rating</em></p>
<p><img src="/img/blog/wpt_1_7c.png" alt="Caddy+PHUG - waterfall" /><br />
<em>Caddy+PHUG - waterfall</em></p>
<p>Well, there it is! The first byte of data arrived after a mere 173 ms, the website is usable in less than half a second and all scores are <code>A</code>!</p>
<p>That's the result I was hoping for. Now on my todo list:</p>
<ul>
<li>Optimize the profile picture further or try SVG</li>
<li>Get all HTTP headers perfect</li>
</ul>
<p>Any comments or recommendations for further optimization? Please <a href="/contact">let me know</a>!</p>
<!-- https://www.webpagetest.org/result/200604_Z3_05019c9c3f872873bd1e964474cb0dac/ -->
<!-- https://www.webpagetest.org/result/200604_E2_026978bdd64ce5830ecf5be74b634120/ -->
<!-- https://www.webpagetest.org/result/200605_M4_b728be8e608e4807af0192cefaf55f2e/ -->
<!-- https://www.webpagetest.org/result/200605_CF_96d815e43897af2723f7a4d762d76ba3/ -->
<!-- https://www.webpagetest.org/result/200605_61_67b241dc40cbaf08ed257177e6efbef0/ -->
The Case of the Missing Entropy2020-06-05T12:14:57+00:002020-06-05T12:14:57+00:00
Unknown
https://yarmo.eu/blog/missing-entropy/<blockquote>
<p>In computing, entropy is the randomness collected by an operating system or application for use in cryptography or other uses that require random data. (<a href="https://en.wikipedia.org/wiki/Entropy_(computing)">Source: wikipedia</a>)</p>
</blockquote>
<h2 id="Docker,_are_you_still_there?">Docker, are you still there?</h2>
<p>It all started when I got myself a new VPS for serving web content. I have a more-than-capable server at home, but I'd rather not use it for "uptime-sensitive" use cases: the odd crash still takes it down from time to time. I know, a CDN…</p>
<p>Sticking with what I'm comfortable with, I decided to go with a docker setup with only a few containers:</p>
<ul>
<li><a href="https://caddyserver.com/">caddy</a> as the web server and reverse proxy</li>
<li><a href="https://hub.docker.com/_/php">php-fpm</a> as the PHP processor</li>
<li>a couple of others with minor significance</li>
</ul>
<p>As per usual, I wrote my <code>docker-compose.yaml</code> and it was all set. But not this time. Sometimes, when I changed something in the yaml file and ran <code>docker-compose up -d</code>, it would execute immediately, as I would expect from all the times I've run it on my homelab. But other times, it would wait a minute or longer before executing.</p>
<p>I accepted this behavior a few times, but at some point, it had to be dealt with.</p>
<h2 id="Investigating">Investigating</h2>
<p>I noticed a few things. First, it did not seem to be due to a lack of computing resources. My <a href="https://github.com/grafana/grafana">Grafana</a> dashboard (with <a href="https://github.com/influxdata/influxdb">InfluxDB</a> as backend and <a href="https://github.com/influxdata/telegraf">Telegraf</a> as agent) clearly showed me that CPU usage was about 1% and RAM was about 30% full. No excessive DISK IO or NETWORK IO. So we are not overwhelming the system!</p>
<p>Additionally, while it was waiting to execute, I could open a new SSH connection and do other stuff. With one exception: any docker-related command would not execute.</p>
<p>Final clue: I could not ctrl-c my way out of a pending docker command, and if I closed the terminal, opened a new one, connected via SSH and ran any new docker command, it would still wait.</p>
<p>Final final clue: a minute later, I could run docker commands left, right and center without a single problem. Another minute later, it might do the whole waiting again. It was very… "Random". Wink, wink…</p>
<p>Have you figured it out yet? I hadn't.</p>
<h2 id="Researching">Researching</h2>
<p>With this information, I was confident enough to start searching online and I came across this <a href="https://github.com/docker/compose/issues/6552">github issue</a> fairly quickly:</p>
<blockquote>
<p>"docker-compose often takes a long time to do anything"</p>
</blockquote>
<p>That sounds about right!</p>
<p>A few comments in, <a href="https://github.com/docker/compose/issues/6552#issuecomment-529787442">it was suggested</a> to run the following command: <code>cat /proc/sys/kernel/random/entropy_avail</code>.</p>
<p>On my VPS, this returned <code>52</code>. Whoopsie…</p>
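<p>If you'd rather run the same check from code, the kernel exposes its estimate as a plain text file. A small helper (the function name is mine, and the file only exists on Linux, hence the fallback):</p>

```python
def available_entropy(path="/proc/sys/kernel/random/entropy_avail"):
    """Return the kernel's entropy estimate in bits, or None when the
    file doesn't exist (e.g. on non-Linux systems)."""
    try:
        with open(path) as f:
            return int(f.read().strip())
    except OSError:
        return None

bits = available_entropy()
if bits is not None and bits < 256:
    print(f"low entropy ({bits} bits): programs reading /dev/random may stall")
else:
    print(f"entropy estimate: {bits}")
```
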
<h2 id="Entropy">Entropy</h2>
<p>For those of you who don't know what (computing) entropy is, here's the <a href="https://en.wikipedia.org/wiki/Entropy_(computing)">wikipedia article</a> for it. In short: computers are terrible at coming up with random numbers (just like humans! A topic for another day), which many applications require to work properly.</p>
<p>Our operating systems have a clever way to solve this: take all input that is NOT generated by the computer itself and use that as "randomness". For example, a computer doesn't know in advance how you are going to move the cursor or which keyboard button you will press. The operating system takes these inputs, processes them to "extract the randomness" and stores it in the <code>entropy pool</code>.</p>
<p>Any application needing some randomness can request some random data from the <code>entropy pool</code>. Maintaining sufficient entropy is therefore a challenge in itself: process enough random data to keep up with the demand.</p>
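<p>In practice, applications rarely read the pool directly: they ask the OS for random bytes, and the kernel's random number generator, seeded from the pool, serves them. In Python, for instance:</p>

```python
import os

# os.urandom asks the OS for random bytes, backed on Linux by the
# kernel generator that the entropy pool seeds.
session_key = os.urandom(16)        # 16 random bytes, e.g. for a session key
print(session_key.hex())

# Historically, /dev/random blocked when the pool estimate ran low;
# that is the same kind of wait that stalled docker-compose here.
```
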
<p>Apparently, Docker is an application that requires randomness. But how have I never encountered this issue before?</p>
<h2 id="VPS_and_entropy">VPS and entropy</h2>
<p>On both my desktop computer and homelab, the entropy available is around <code>4000</code>, which is perfect. They are able to maintain this entropy because of all the sources of randomness available to them. Mouse and keyboard inputs, processes running in the background, etc.</p>
<p>Now, let's take the VPS as a counterexample. These machines are made to be fully reproducible: every time you boot one up, it is expected to run the same way. They are also heavily sealed off from the host system for security reasons: I cannot even read core temperature values on my VPS. They don't have "true" hardware; they get slices of hardware shared with other VPS instances. Except for my SSH connection, the VPS has no mouse or keyboard inputs.</p>
<p>In other words, VPSs are severely lacking in sources of entropy. That is why the entropy available was only <code>52</code> and why Docker stalled: it had to wait for sufficient randomness to accumulate.</p>
<p><a href="https://www.digitalocean.com/community/tutorials/how-to-setup-additional-entropy-for-cloud-servers-using-haveged">More information on VPS and entropy from DigitalOcean</a>.</p>
<h2 id="The_remedy:_haveged">The remedy: haveged</h2>
<p>There is a way to remedy the situation: <a href="https://wiki.archlinux.org/index.php/Haveged">haveged</a>. Having only discovered it last night, I do not fully understand it yet, but from what I have read, it is a pseudorandom number generator (PRNG) that fills the <code>entropy pool</code> with "pseudorandomness". Installing <code>haveged</code> immediately solved my issue; all docker commands ran instantly again.</p>
<p><img src="/img/blog/entropy_haveged.png" alt="Available entropy suddenly increases after installing haveged" /><br />
<em>Can you tell when I installed haveged?</em></p>
<h2 id="Caveat:_pseudorandomness">Caveat: pseudorandomness</h2>
<p>There is a downside to this: PRNGs are NOT truly random (<a href="https://en.wikipedia.org/wiki/Pseudorandom_number_generator">Wikipedia article on PRNGs</a>). They generate numbers that appear random but are fully deterministic: run the exact same algorithm twice, and you'll get the same "random" numbers. Therefore, a VPS may not be the perfect place to perform entropy-heavy tasks such as cryptography: a cryptographic key generated from pseudorandom numbers is far less secure than one generated from truly random numbers.</p>
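<p>You can see that determinism for yourself with Python's (non-cryptographic) <code>random</code> module: seed two generators identically and they produce identical "random" streams.</p>

```python
import random

# Two generators with the same seed walk through the exact same sequence.
a = random.Random(2020)
b = random.Random(2020)
seq_a = [a.random() for _ in range(5)]
seq_b = [b.random() for _ in range(5)]
assert seq_a == seq_b    # fully deterministic, despite looking random
print(seq_a[:2])
```

<p>This is precisely why cryptography wants a generator seeded from real-world unpredictability rather than a fixed starting point.</p>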
Invidious2020-06-01T13:05:58+00:002020-06-01T13:05:58+00:00
Unknown
https://yarmo.eu/blog/invidious/<p>Small acts of resistance are all we need. Together, we make change.</p>
<h2 id="Compliance:_YouTube">Compliance: YouTube</h2>
<p>Everyone knows YouTube. It contains more than enough content to keep you entertained for a couple of lifetimes.</p>
<p>The thing is, it's owned by Google and has enough privacy-invading trackers and ads to follow and pester you during all of these lifetimes.</p>
<h2 id="Resistance:_Invidious">Resistance: Invidious</h2>
<p>Please consider using Invidious (<a href="https://github.com/omarroth/invidious">github repo</a>), a free and open source service that sits between you, the user, and the YouTube servers. It eliminates ads, does not use YouTube APIs and has many features YouTube should always have had (audio-only mode? Yes please).</p>
<p>Several <a href="https://github.com/omarroth/invidious/wiki/Invidious-Instances">instances</a> are hosted around the world; make sure to visit the one nearest to you for the best experience.</p>
<h2 id="Going_beyond">Going beyond</h2>
<p>But you can go further. When using Firefox, install the <a href="https://codeberg.org/Booteille/Invidition/issues">Invidition</a> addon to automagically redirect YouTube links to Invidious (again, make sure to select the closest instance). On Android, install <a href="https://www.f-droid.org/en/packages/app.fedilab.nitterizeme/">UntrackMe</a> to do the exact same thing, YouTube links will be opened in Invidious-compatible apps such as <a href="https://f-droid.org/en/packages/org.schabi.newpipe/">NewPipe</a>.</p>
<h2 id="Drawbacks">Drawbacks</h2>
<p>The main issue is that you are no longer supporting the content creators, which is a real problem. It's easy to say "they shouldn't be relying on YouTube and ad revenue", and I agree with that statement to some degree, but you'll still be sad when your favorite content creator quits.</p>
<p>Try to make contact with them: if they're small, this might be feasible; if they're big, you probably don't have to worry about them quitting anyway. Ask them and push them towards accepting other methods of donation.</p>
<p>And then donate.</p>
How does a textbook 'Embrace, Extend, Extinguish' operation work?2020-05-27T13:22:21+00:002020-05-27T13:22:21+00:00
Unknown
https://yarmo.eu/blog/textbook-eee/<p>I recently found out about what happened to the <a href="https://appget.net/">AppGet</a> tool for Windows made by <a href="https://keivan.io">Keivan Beigi</a>.</p>
<p>Sadly, a <a href="https://keivan.io/the-day-appget-died/">recent blog post</a> is outlining the details around the decision to cease development and shut down the service which provided an open source package manager to Windows.</p>
<p>Stories about open source services shutting down are always sad and a blow to the community, but this one in particular is noteworthy. Judging from the events as written down by Keivan in his <a href="https://keivan.io/the-day-appget-died/">post</a>, he has been the target of an absolute textbook case of <a href="https://en.wikipedia.org/wiki/Embrace,_extend,_and_extinguish">Embrace, Extend, Extinguish</a>, Microsoft's <em>modus operandi</em>.</p>
<p>As a disclaimer, I had been rooting this past year for Microsoft's apparent change in stance towards Linux and the open source community. But I was wrong, and Microsoft has been kind enough to provide all the grounds needed to distrust the corporation even more, recently with the <a href="https://itsfoss.com/microsoft-maui-kde-row/">MAUI debacle</a> (see all the <a href="https://github.com/dotnet/maui/issues/35">comments marked off-topic in this issue</a>? Flagrant censorship!) and now with AppGet.</p>
<p>So now, let's quickly examine Microsoft's emails from the blog post.</p>
<h2 id="Embrace">Embrace</h2>
<blockquote>
<p>Keivan,<br>
I run the Windows App Model engineering team and in particular the app deployment team. Just wanted to drop you a quick note to <em><strong>thank you for building appget</strong></em> — it’s a great addition to the Windows ecosystem and makes Windows developers life so much easier. We will likely be up in Vancouver in the coming weeks for meetings with other companies but if you had time we’d love to meet up with you and your team to get feedback on how we can make your life easier building appget.</p>
</blockquote>
<p><strong>Embrace</strong>: celebrate what people contribute to your ecosystem</p>
<h2 id="Extend">Extend</h2>
<blockquote>
<p>Keivan,<br>
it was a pleasure to meet you and to find out more about appget. I’m following up on the azure startup pricing for you. As you know we are big fans of package managers on Windows and <em><strong>we are looking to do more in that space</strong></em>. My team is growing and part of that is to build a team who is responsible for ensuring package managers and software distribution on Windows makes a big step forward. <em><strong>We are looking to make some significant changes</strong></em> to the way that we enable software distribution on Windows and there’s a great opportunity (well I would say that wouldn’t I?) to help define the future of Windows and app distribution throughout Azure/Microsoft 365.<br>
With that in mind <em><strong>have you considered spending more time dedicated to appget and potentially at Microsoft</strong></em>?</p>
</blockquote>
<p><strong>Extend</strong>: get a foothold in people's successful contributions to your ecosystem</p>
<h2 id="Extinguish">Extinguish</h2>
<blockquote>
<p>Hi Keivan, I hope you and your family are doing well — BC seems to have a good handle on covid compared to the us.<br>
I’m sorry that the pm position didn’t work out. I wanted to take the time to tell you how much we appreciated your input and insights. <em><strong>We have been building the windows package manager</strong></em> and the first preview will go live tomorrow at build. We give appget a call out in our blog post too since <em><strong>we believe there will be space for different package managers on windows</strong></em>. You will see our package manager is based on GitHub too but obviously with our own implementation etc. our package manager will be open source too so <em><strong>obviously we would welcome any contribution from you</strong></em>.<br>
I look forward to talking to you about our package manager once we go live tomorrow. Obviously this is confidential until tomorrow morning so please keep this to yourself. You and chocolatey are the only folks we have told about this in advance.</p>
</blockquote>
<p><strong>Extinguish</strong>: replace the people's contributions with your own products; it needn't be better because you're a big rich corporation with enormous reach</p>
<h2 id="Microsoft_Loves_Linux">Microsoft Loves Linux</h2>
<p>Make no mistake: this aggressive pattern will continue. They like the name MAUI? They take it and silence the critics. They want a package manager because Linux has them? They "get inspired", build a new one and squash the existing solutions.</p>
<p>They love Linux, right? They are certainly "embracing" it on their platform when they launched the Windows Subsystem for Linux (or WSL), a tool to run Linux distributions inside Windows. It has also been a while since they started "extending" Linux and the open source community by means of <a href="https://itsfoss.com/microsoft-open-sources-powershell/">open-sourcing Powershell</a> and <a href="https://itsfoss.com/microsoft-github/">acquiring Github</a>. Soon, WSL2 will launch with their <a href="https://github.com/microsoft/WSL2-Linux-Kernel">own Linux kernel</a>.</p>
<p>Now is the time to remain vigilant but also act. Donate to or support in any other way your favorite distribution and open source tools. Microsoft is coming.</p>
<h2 id="Final_notes">Final notes</h2>
<p>I've tried to remain respectful to the content Keivan has posted in his <a href="https://keivan.io/the-day-appget-died/">blog post</a>. I feel sorry for the situation: he's the developer hero Windows needed, but one Microsoft felt we did not deserve. I hope you'll go on to make even bigger projects, Keivan, because you absolutely nailed AppGet!</p>
Ending #100DaysToOffload2020-05-25T16:57:10+00:002020-05-25T16:57:10+00:00
Unknown
https://yarmo.eu/blog/ending-100-days-to-offload/<p><code>#100DaysToOffload >> 2020-05-25 >> 025/100</code></p>
<p>Today, I'm ending my participation in the #100DaysToOffload challenge at precisely a quarter of the way. I'm happy to have been part of it as it has given me much.</p>
<p>I only had a handful of blog posts when I started my personal website, scattered over a period of multiple months. I didn't write much, I didn't take the time for it and, more importantly, I didn't see the point. It was an interesting experience, for sure, but what else? Was it just writing for writing's sake?</p>
<p>Along came the #100DaysToOffload challenge. I joined the minute I saw the first toot by Kev and immediately wrote about, well, participating in the challenge.</p>
<h2 id="Benefits">Benefits</h2>
<p>Twenty-five posts later, I have learned a great deal. Forcing myself to post something every day taught me that writing doesn't have to be a long and tedious process. Quite the opposite: it forced my perfectionist brain to settle for "good enough" content.</p>
<p>Posting links to my blog posts (and later, notes) on the fediverse has sparked several interesting debates with people holding very different views. This must have been the most rewarding benefit of all.</p>
<p>I am grateful to have learned this and I will go forth on the path I am now walking, posting regularly about all things that interest me and having eye-opening conversations. I would also have continued the challenge, were it not for a few downsides.</p>
<h2 id="Drawbacks">Drawbacks</h2>
<p>First and foremost, the requirement to post every day. I know, I know, it didn't have to be every day. However, skipping every other day would drag this challenge out to 200 days and also goes a bit against the whole idea behind it.</p>
<p>I am already mentally exhausted from my recent PhD experience. Although the experience of writing is freeing, there is definitely the possibility of having "too much of a good thing". Not writing every day also gives a feeling of failure as I'm letting myself down for not keeping up. And that is just something I could definitely do without right now.</p>
<p>Posting this much content also dilutes the pool of topics and results in slightly lower quality content. I've talked about this before, and it is somewhat the purpose of the challenge: just write and publish; quality comes with experience, not from delaying posts for weeks while endlessly fine-tuning every word.</p>
<p>The thing is, I also have this blog for a more serious reason, to showcase my capacity for reasoning and tech skills where my educational background is somewhat lacking. Sure, having done a PhD in Neuroscience is cool but that doesn't tell you (a future employer?) that I have experience with containers and networks and FOSS and… You get the point.</p>
<p>In an unpredictable turn of events, the challenge is now holding me back in a way: I feel guilty when not writing and when I do write, it's often a simpler topic just to get something out there, leaving me with less time to dig into the stuff I now really want to write about.</p>
<h2 id="In_the_end">In the end</h2>
<p>So there you have it. I would love to post every other day and I will. But with no obligations or reasoning. Just because I want to.</p>
<p>I will now dive deeper into the stuff I am passionate about and with more vigor and regularity. And that, I owe to the #100DaysToOffload challenge.</p>
A new Projects section2020-05-23T22:51:43+00:002020-05-23T22:51:43+00:00
Unknown
https://yarmo.eu/blog/projects-section/<p><code>#100DaysToOffload >> 2020-05-23 >> 024/100</code></p>
<p>I've added a new <a href="/projects">Projects</a> section to my personal website, the new home for projects I'm either still thinking of doing or actually developing, As these projects will be open-source, so will my preparation for them.</p>
<p>The benefit of doing this is that if you look around and see a project you like or have experience with, you can <a href="/contact">contact me</a> so we can work together.</p>
<p>As of today, there are only two projects listed, I have more in my head which I will write down over the coming days.</p>
LunaSea: FOSS FTW2020-05-22T19:47:38+00:002020-05-22T19:47:38+00:00
Unknown
https://yarmo.eu/blog/lunasea/<p><code>#100DaysToOffload >> 2020-05-22 >> 023/100</code></p>
<h2 id="Out_with_the_old">Out with the old</h2>
<p>A couple of weeks ago, I finally discovered a FOSS alternative to nzb360, a great app for managing Plex, Radarr, Sonarr, etc. I wish I could have kept using nzb360, but unfortunately, the app relies too heavily on Google services, and though I have paid for it, I can no longer use it, as my LineageOS phone can't access purchases made on official Google Android phones.</p>
<h2 id="In_with_the_new">In with the new</h2>
<p>Named <a href="https://www.lunasea.app">LunaSea</a>, it can do everything it should (manage Sonarr, Radarr, Lidarr and NZB clients), it looks fantastic, it's available for both Google Android and iPhone and, of course, it's <a href="https://github.com/LunaSeaApp/LunaSea">FOSS</a>.</p>
<p>Only thing I'm missing is a donation button. And a fediverse account :)</p>
SMH2020-05-21T09:42:23+00:002020-05-21T09:42:23+00:00
Unknown
https://yarmo.eu/blog/smh/<p><code>#100DaysToOffload >> 2020-05-21 >> 022/100</code></p>
<p>SMH means "shaking my head".</p>
<p>You probably already know this, but one of my idiosyncrasies is that I just cannot remember the meaning of that acronym, no matter how hard I try.</p>
Battlefield 1 Revival2020-05-18T23:59:59+00:002020-05-18T23:59:59+00:00
Unknown
https://yarmo.eu/blog/bf1-revival/<p><code>#100DaysToOffload >> 2020-05-19 >> 021/100</code></p>
<p>Since it was announced that Battlefield V will stop receiving updates earlier than expected, the general sentiment in the Battlefield community has been to go play the older titles in the series. After all, the game is still not fun to play, and knowing there will be no brighter future, why bother?</p>
<p>I've played Battlefield 1 a few times lately, but only today did I notice that a "Back To Basics" game mode was loaded on many servers. And it is a game changer, no pun intended.</p>
<p>Battlefield games are crazy: massive amounts of infantry, vehicles and planes, all at the same time. But recently, I've been enjoying the more tactical approach to the genre best represented by Post Scriptum and Hell Let Loose. No running around in those games; it's all about teamplay, intelligence and tactical movement.</p>
<p>The new (reintroduced?) "Back To Basics" game mode in Battlefield 1 completely changes the game and almost turns it into a tactical shooter. Vehicles cannot be used and all infantry carry the one rifle that was historically used by their faction. Not only is this immersive, the lack of excessively powerful machine guns makes the game much more reliant on flanking and proper teamplay. It is hard for individuals to excel; they can't use their favorite weapon optimised for clearing an entire room. You need your teammates now.</p>
<p>Only downside: the base game obviously wasn't designed for such a game mode and after playing a few rounds of Grand Operations, I've yet to see an attacking team win.</p>
<p>For some casual teambased shooting with a tactical twist, Battlefield 1 has become an excellent choice. Unlike Battlefield V, I just cannot see this game phase out of popularity anytime soon.</p>
Mailvelope: PGP for all2020-05-18T16:29:21+00:002020-05-18T16:29:21+00:00
Unknown
https://yarmo.eu/blog/mailvelope/<p><code>#100DaysToOffload >> 2020-05-18 >> 020/100</code></p>
<p><a href="https://en.wikipedia.org/wiki/Pretty_Good_Privacy">PGP</a> is a "pretty good" way of encrypting messages and files, but it often gets criticised for being too cumbersome to work with, which is sadly true. To counter this, certain products and services use PGP internally and provide an easy-to-use interface. Take Protonmail, which uses <a href="https://protonmail.com/support/knowledge-base/how-to-use-pgp/">PGP to automatically encrypt emails between Protonmail addresses</a>.</p>
<p>Handy, but we are forgetting something. If the PGP protocol is the lock, then the PGP keys are, well, the keys. Protonmail keeps both the lock and the key on their servers. That's not secure…</p>
<p>Luckily, there are more tools, like <a href="https://www.mailvelope.com/en/">Mailvelope</a> (<a href="https://github.com/mailvelope/mailvelope">source code</a>). It's nothing more than a browser add-on, meaning it will automatically work with any webmail service out there. Encrypting your emails becomes very simple (I also have a <a href="https://yarmo.eu/contact#mailvelope">more detailed guide</a>).</p>
<ul>
<li>Load the recipient's public key in Mailvelope</li>
<li>Open your webmail service</li>
<li>Click the pink Mailvelope logo</li>
<li>Choose the key of the recipient</li>
<li>Write the email</li>
<li>Click encrypt and send the email</li>
</ul>
<p>That is actually quite easy and feasible for the less tech-savvy people.</p>
<p>But keep in mind (the usual email/PGP disclaimer): email is inherently insecure. Email metadata (including the subject line!) is not encrypted; only the body is. Information about your secret communication can be inferred from the metadata. Though PGP-encrypted emails are nice to have, truly private communication is achieved using <a href="https://www.privacytools.io/software/real-time-communication/">encrypted instant messengers</a>.</p>
Why we won't have artificial intelligence rivaling human intelligence2020-05-17T00:27:27+00:002020-05-17T00:27:27+00:00
Unknown
https://yarmo.eu/blog/ai-vs-human/<p><code>#100DaysToOffload >> 2020-05-17 >> 019/100</code></p>
<p>Someone asked why people are working on artificial intelligence "which would infinitely surpass human capabilities". Here's my answer.</p>
<p>That is not going to happen. We will never be able to produce machine learning surpassing human intelligence (though I would have personally looked forward to it).</p>
<p>Consider this: neurons have refractory periods of 1-4 ms. During the refractory period, a neuron cannot fire another signal. Thus, their firing rate cannot exceed, at the absolute best, 1000 signals or "spikes" per second. That's 1 kHz. At best. Your average neuron is a lot slower. Modern-day processors easily exceed that speed by a factor of a million. So why don't we have a cyborg Einstein yet?</p>
<p>That has everything to do with what makes us "intelligent". We have the same neurons as primates. Heck, we have the same neurons as worms. Why isn't anyone afraid the worms might kill us all soon?</p>
<p>Intelligence does not stem from the number of neurons or how fast they are. It all has to do with how they are connected.</p>
<p>Humans have extremely well-developed cortices. The reason cortices have folds is to increase the surface area, just like a radiator has folds. Sure, you gain a few neurons, but more importantly, you gain a whole lot of connections.</p>
<p>So, "researchers just need to make CPUs with more connections to increase their capability", you might say.</p>
<p>Well, you still haven't considered the single most important reason artificial intelligence will always be inferior to any animal intelligence.</p>
<p>CPUs are made of transistors. A transistor is a switch that is either on or off.</p>
<p>Brains are made of neurons. A neuron has an immense output range, from slow firing to fast firing (up to 1 kHz). That is a whole lot more nuanced than on/off. It also has extremely well-calibrated inputs: it can receive multiple excitatory inputs that increase neuronal activity, and inhibitory inputs that decrease it. It can do addition and subtraction by placing the inputs at different locations on the dendrites (what neurons use to capture inputs). Each neuron can single-handedly do what a whole CPU is designed to do.</p>
<p>And there we have it. To equal a brain with millions of neurons, you can't use a CPU with millions of transistors. You'd need a computer with millions of CPUs.</p>
<p>You, my friend, are safe.</p>
Proposal for a Distributed Content Verification System2020-05-16T14:49:39+00:002020-05-16T14:49:39+00:00
Unknown
https://yarmo.eu/blog/dcvs-proposal/<p><code>#100DaysToOffload >> 2020-05-16 >> 018/100</code></p>
<h2 id="Preamble">Preamble</h2>
<p>This is going to be a long post. In it, I will describe a system that I have been thinking of for the last week. The way I see it, there are three possible outcomes: a) it's genius, I've outdone myself and I should build it; b) it's genius but other people have already solved this issue (perhaps in a different way); c) it's a mediocre/inadequate solution to a problem that doesn't need solving. I need help in figuring out which description suits this idea the best. Let me know on the <a href="https://fosstodon.org/@yarmo">fediverse</a>.</p>
<h2 id="Background">Background</h2>
<h3 id="Story_1_-_Linux_Mint_hack">Story 1 - Linux Mint hack</h3>
<p>Two short stories are required. The first is based around Linux Mint and <a href="https://blog.linuxmint.com/?p=2994">what happened in 2016</a>. TLDR from the blog post: "Hackers made a modified Linux Mint ISO, with a backdoor in it, and managed to hack our website to point to it". In addition to just linking to the modified ISO file, they also changed the MD5 hash to match their modified version.</p>
<p>The web is fragile. If you post MD5 hashes on your website so people can trust your software, and your website gets hacked and the hashes changed, there's no trust left. This is not Linux Mint's fault; this is the way the internet works. I have had hackers on my shared hosting servers who uploaded a whole bunch of suspicious files. Because of that, the fix was easy. But what if they had just made a minor change in a single file? I would have been none the wiser.</p>
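<p>For reference, the verification step itself is trivial; the catch is that the published hash is only as trustworthy as the page serving it, which is exactly what the attackers exploited. A minimal sketch (the values here are illustrative, not from the actual incident):</p>

```python
import hashlib

def file_md5(data: bytes) -> str:
    """MD5 digest of the downloaded bytes, as a hex string."""
    return hashlib.md5(data).hexdigest()

# Hash as listed on the (possibly compromised!) download page:
published = "5d41402abc4b2a76b9719d911017c592"
downloaded = b"hello"          # stand-in for the ISO's bytes
if file_md5(downloaded) == published:
    print("hash matches - but only as trustworthy as the page it came from")
else:
    print("hash mismatch - discard the download")
```

<p>If the attacker controls both the file and the page with the hash, this check passes anyway; hence the need for an external source of truth.</p>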
<h3 id="Story_2_-_Keybase">Story 2 - Keybase</h3>
<p>You need an external source of truth, and that is what the second short story is about: Keybase. I verified my website and my accounts on various services through their platform. If you knew me through my fosstodon.org account, you could check whether that Keybase account was really mine, and if so, verify that this website was really mine, as well as some other accounts. A nifty solution for proving the authenticity of my distributed online presence.</p>
<p>But there are drawbacks. The actual content of my website is not verified. The system is centralised and not FOSS. Lastly, due to their recent acquisition, I will no longer be using Keybase.</p>
<p>So my new authenticity proof? My website. The links on my website are who I am on various online services. I curated those links. I checked for each one if they link to what I intended them to link to.</p>
<p>But that's not enough. What if my website gets hacked? And a social link gets replaced? "Well, that doesn't happen to me", some might say. Fine, let's look at a second example. Visit someone else's personal site and click their social links. How do you know if you can trust those links? "So what", you say? Let's go further. You want to donate to someone using a cryptocurrency. They have their wallet on their website. Is that really their wallet though?</p>
<p>I think we can solve this issue the way we would want to: using a distributed system.</p>
<h2 id="Proposal_for_a_Distributed_Content_Verification_System">Proposal for a Distributed Content Verification System</h2>
<h3 id="Overview">Overview</h3>
<p>The concept is based around a network of two different types of nodes: "content" nodes and "truth" nodes. The "content" nodes are websites whose content needs to be verified. The "truth" nodes are servers that periodically check all known pages for changes.</p>
<p>The idea is that a hacker needs to obtain a developer's cryptographic keypair and infiltrate both a "content" node and one or more "truth" nodes in order to get away with their malicious activity.</p>
<h3 id="Step_1_-_Linking_a_"content"_node_to_a_"truth"_node">Step 1 - Linking a "content" node to a "truth" node</h3>
<p>First, the website owner needs to make their website ("content" node) known to the network of "truth" nodes. This can either be done manually, by asking someone they trust who owns a "truth" node, or "truth" nodes could implement some sort of registration form.</p>
<p>A valid contact method like an email address is mandatory to communicate irregularities to the website owner. A public cryptographic key is also required in order to check the signature of the hashes (see below).</p>
<h3 id="Step_2_-_Updating_the_"truth"">Step 2 - Updating the "truth"</h3>
<p>During the process of uploading the updated content of their website to their server, the website owner also sends the hashes of the updated files to the "truth" node they registered with. This could be done by a command-line tool on the server or on the developer's machine; it could even be part of a CI/CD pipeline. Hashes are signed using a cryptographic keypair to prove that the website owner is the one who updated the content.</p>
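<p>As a minimal sketch (not an existing tool), the hash-and-sign step could look like this in Python. The HMAC signature here is a stdlib stand-in only; the proposal calls for an asymmetric keypair (e.g. Ed25519), so that "truth" nodes need nothing more than the owner's public key:</p>

```python
import hashlib
import hmac
import json

def build_manifest(files):
    """Map each page path to the SHA-256 hash of its content."""
    return {path: hashlib.sha256(content).hexdigest()
            for path, content in files.items()}

def sign_manifest(manifest, key):
    """Sign the serialized manifest. HMAC is a symmetric stand-in;
    a real deployment would use an asymmetric signature (e.g. Ed25519)
    so 'truth' nodes only hold the owner's public key."""
    payload = json.dumps(manifest, sort_keys=True).encode()
    return hmac.new(key, payload, hashlib.sha256).hexdigest()

# The website owner runs this after uploading new content:
files = {"/index.html": b"<html>hello</html>"}
manifest = build_manifest(files)
signature = sign_manifest(manifest, key=b"owner-secret")
```

The signed manifest and signature are what get sent to the registered "truth" node.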
<p>Alternatively, the "truth" node could have a web interface with a button to trigger the (near-)immediate download of the pages and computation of the hashes. However, this method does not easily allow for cryptographic signing of the hashes and should therefore be discouraged, or even rejected outright.</p>
<p>Once updated, the "truth" nodes exchange the updated hashes with each other.</p>
<p>One thing to consider: websites can be dynamic, for example by including posts from social networks. HTML tags that contain dynamically generated content should get a specific tag so they get excluded from the hashing.</p>
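<p>A naive sketch of that exclusion, assuming a hypothetical <code>data-dynamic</code> attribute (a real implementation would use a proper HTML parser to handle nested elements):</p>

```python
import hashlib
import re

# Strip any element marked data-dynamic before hashing, so embedded
# feeds and other generated content don't trigger false alarms.
# Non-greedy regex sketch; it does not handle nested markup.
DYNAMIC = re.compile(r"<(\w+)[^>]*\bdata-dynamic\b[^>]*>.*?</\1>", re.S)

def stable_hash(html):
    return hashlib.sha256(DYNAMIC.sub("", html).encode()).hexdigest()

page_v1 = '<p>static</p><div data-dynamic>latest toot #1</div>'
page_v2 = '<p>static</p><div data-dynamic>latest toot #2</div>'
# Both pages hash identically: the dynamic part is ignored.
```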
<h3 id="Step_3_-_Verification_of_content">Step 3 - Verification of content</h3>
<p>On a regular basis, the "truth" nodes download the pages and compute the hashes. If they match with the hashes in their database, all is well.</p>
<p>If a discrepancy is found, a "truth" node should ask other "truth" nodes if updates exist for this particular website and the updated hashes simply haven't propagated yet. If so, fetch the new hashes and run this step again.</p>
<p>If no updated hashes are found or the new hashes still don't match, contact the website owner and let them know something has changed on their website that they haven't told the "truth" nodes about.</p>
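<p>The whole of step 3 boils down to a single decision, sketched below. Everything here is hypothetical (no such API exists yet); the two callbacks stand in for querying peer "truth" nodes and notifying the owner:</p>

```python
import hashlib

def check_page(url, content, stored, fetch_peer_hash, notify_owner):
    """Compare a freshly computed hash against the stored one,
    fall back to peer 'truth' nodes in case an update simply
    hasn't propagated yet, and alert the owner otherwise."""
    observed = hashlib.sha256(content).hexdigest()
    if observed == stored.get(url):
        return "ok"
    peer = fetch_peer_hash(url)        # ask other "truth" nodes
    if peer is not None and peer == observed:
        stored[url] = peer             # late-propagating legitimate update
        return "updated"
    notify_owner(url)                  # unexplained change on the site
    return "tampered"
```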
<h3 id="Optional_step_4_-_User_benefits">Optional step 4 - User benefits</h3>
<p>In addition to the measures taken in step 3 when detecting anomalies, browser plugins could warn visitors of websites that the content they are seeing may not be what the website owner intended it to be.</p>
<h3 id="Possible_attack_surfaces">Possible attack surfaces</h3>
<p>If a "truth" node is hacked, hashes could easily be changed. However, signing the hashes using a cryptographic keypair should mitigate this problem. Other nodes will not trust the newly propagated hashes and will flag that "truth" node as corrupted.</p>
<p>If a "truth" node is hacked and the website owner's credentials are changed, they would no longer receives notifications. Credentials should also be signed by the cryptographic keypair to make changes like these detectable.</p>
<p>If a "truth" node is hacked and the stored public key is modified, we have a problem. "Truth" nodes should verify each other as well to make sure no funny business like this happens.</p>
<p>If a "truth" node is hacked and the "content verification" code is changed, we have a problem. Again, some form of collaboration between "truth" nodes should prevent hacked "truth" nodes from doing harm to the system.</p>
<p>If a "content" node is hacked and new files are uploaded, the "truth" node will not be triggered as it won't handle these files. But at least, the content displayed to visitors remains unchanged.</p>
<p>If a "content" node is hacked and existing files are modified, the "truth" nodes will be triggered and there's no code on the "content" node that could prevent this from happening.</p>
<p>If a "content" node is hacked and existing files are modified in such a way that the hashes match, we have a problem. Proper research needs to be done to correctly implement cyptographic hashing functions to avoid this issue.</p>
<h3 id="Things_that_need_to_be_worked_out">Things that need to be worked out</h3>
<ul>
<li>How exactly does a new website enter the network?</li>
<li>How to coordinate page downloading and hash computation to avoid redundancy and load on the hosting server?</li>
<li>How to measure credibility among "truth" nodes and detect corruption of individual nodes?</li>
<li>How to prevent hash collisions?</li>
</ul>
<h3 id="Federated_AND_peer-to-peer">Federated AND peer-to-peer</h3>
<p>The concept described above is technically based on federation. However, I initially imagined several websites hosting both their own websites and the hashes of websites they selected. This is still possible: the concept described above should support both a federated content verification system and a peer-to-peer content verification system.</p>
A place for notes2020-05-12T22:57:59+00:002020-05-12T22:57:59+00:00
Unknown
https://yarmo.eu/blog/notes-section/<p><code>#100DaysToOffload >> 2020-05-12 >> 017/100</code></p>
<h2 id="The_#100DaysToOffload_challenge">The #100DaysToOffload challenge</h2>
<p>Participating in the #100DaysToOffload challenge is fun and encourages you to think less and do more when it comes to blogging. That last part sounds both good and bad.</p>
<p>It's good because more content actually gets published: it discourages you from keeping a post in "draft" status for an indeterminate amount of time and, well, you know how that goes, the post never gets published. It teaches you the habit of working in a permanent cycle of thinking, writing, posting and moving on to the next cycle.</p>
<p>But the drawback is two-fold. Content quality can be diminished. I have noticed I'm not always content with the phrasing of certain sentences. I also regularly get reminded that a post lacks certain disclaimers or counter-arguments to the main rationale.</p>
<p>The other issue I'm currently facing is flooding. I see my personal website as having a professional utility as well: I'd like to point potential employers to my blog so that they can get a real sense of how I think and what I am good at. Administering a homelab, keeping DNS records, thinking about social structures on the internet, etc. I'd like for that "long-form" content not to be drowned out by waves of "short-form" posts because of a challenge.</p>
<h2 id="The_solution">The solution</h2>
<p>I considered tags and though I definitely need them, they are not the solution. The default view would still contain all the posts. Also, I'm not looking forward to making an RSS feed based on excluding tags.</p>
<p>Inspired by <a href="https://fosstodon.org/@kev">Kev</a> and a discussion with <a href="https://fosstodon.org/@murtezayesil">Ali Murteza Yesil</a> (thanks again :D), I've decided to implement a <a href="/notes">notes</a> section meant to contain all the short-form posts. Random thoughts go in the <a href="/notes">notes</a>, elaborate thoughts go in the <a href="/blog">blog</a>. A separate RSS feed will be implemented very soon. A note could also be a link to a blog post.</p>
<h2 id="Continuing_the_challenge">Continuing the challenge</h2>
<p>I will continue the challenge with posts being either a blog post or a note. I will, however, refrain from posting every day. Some days are devoid of post-worthy thoughts, some days do not allow for proper writing time. I will not write notes in advance, that defeats the purpose of the challenge.</p>
<p>I'm already noticing benefits from participating: I take more time to write, I post more and that leads to me having more interesting discussions. I am thankful for its existence but will also adapt my participation to my lifestyle and schedule.</p>
<h2 id="Update_2022-05-03">Update 2022-05-03</h2>
<p>I'm removing the notes section in favor of the <a href="/tags/short">#short</a> tag. All posts short and long are available in the <a href="/blog">/blog</a> section.</p>
Introduction to PiHole2020-05-10T23:24:04+00:002020-05-10T23:24:04+00:00
Unknown
https://yarmo.eu/blog/pihole/<p><code>#100DaysToOffload >> 2020-05-10 >> 016/100</code></p>
<p><a href="https://pi-hole.net/">PiHole</a> is almost ubiquitously present on every list of services people could/should selfhost. And rightfully so, it is easy to set up and extremely useful on a daily basis. It blocks ads on almost all websites on all the devices in your home without the necessity of installing anything on them. It will also stop some devices from communicating with their parent companies behind your back.</p>
<h2 id="How_it_works">How it works</h2>
<p>To understand how PiHole does its thing, we need a quick introduction to how DNS works: the system that makes sure we can visit websites even if they are located on the other side of the world. The problem DNS solves is that the URL you use to visit a website doesn't tell your device anything about the physical location or IP address of the server that hosts the website.</p>
<p>If you wish to visit a website, say <a href="https://yarmo.eu">yarmo.eu</a>, you enter that address in the top bar and hit enter. Your browser will then ask your router to get this website for you. If this is the first time you visit this website, your router doesn't know yet where the server is located, so it asks a DNS server in geographical proximity, usually the DNS server of your ISP.</p>
<p>If this DNS server knows the IP address of the server, it will be relayed back to your device which will now ask that server directly for the content of the website. If the DNS server doesn't have this information, it will ask another and so forth until the IP address of the host server is found.</p>
<p>As we established above, your router acts as a small DNS server. However, this role can almost always be delegated to another DNS server in your home. That's where PiHole comes in. Instead of your router trying to figure out where the website's server is located, it will ask PiHole to do so.</p>
<p>But PiHole has a trick up its sleeve: it has a built-in database of hundreds of thousands of domains associated with ads, and when they are requested, PiHole simply refuses to resolve them.</p>
<p>So you want to visit <code>coolsite.com</code>? Fine, PiHole will get you that website. Now, <code>coolsite.com</code> suddenly wants to load an ad from <code>ads.gafam.com</code>? The computer asks the router, the router asks PiHole, PiHole knows this URL is used to serve ads and will block that request, giving you a website without ads. Awesome!</p>
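<p>That decision can be modelled in a few lines. This is a toy model, not PiHole's actual code (real PiHole answers blocked queries with a configurable null response):</p>

```python
# Toy model of a blocking DNS resolver: known ad/tracking domains
# get an unroutable answer instead of being forwarded upstream.
BLOCKLIST = {"ads.gafam.com", "metrics.noson.com"}
UPSTREAM = {"coolsite.com": "93.184.216.34"}  # made-up forwarding table

def resolve(domain):
    if domain in BLOCKLIST:
        return "0.0.0.0"  # blocked: the request goes nowhere
    return UPSTREAM.get(domain, "NXDOMAIN")  # normal DNS forwarding
```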
<h2 id="Something_you_want_to_say?">Something you want to say?</h2>
<p>Meanwhile, you are listening to music using a wireless speaker in your living room from a made-up brand "NOSON". What you don't know is that this device is continuously sending messages to the company containing information about the music you play and more. PiHole knows this and as soon as the speaker requests to send a message to <code>metrics.noson.com</code>, PiHole says no.</p>
<p>That's how PiHole blocks ads AND protects your privacy.</p>
<h2 id="Dedicated_hardware">Dedicated hardware</h2>
<p>Dedicating hardware to PiHole is advised but the hardware can be as simple as a <a href="https://www.raspberrypi.org/products/raspberry-pi-zero/">Raspberry Pi Zero</a>. The reason dedicated hardware is advised is that if your PiHole crashes, there's no more internet in the home until you get the PiHole working again. A way to prevent this situation is to always have two PiHoles running on separate hardware and telling the router about both PiHoles.</p>
<h2 id="Second_DNS_server?">Second DNS server?</h2>
<p>Oh, and while we're on the subject: do not put any "fallback" DNS servers like Google's or Cloudflare's in the second DNS server field on your router. Unfortunately, it doesn't work like a fallback: most routers simply divide the workload over the two DNS servers. This means that if an outside DNS server is put in second place, it will receive DNS queries even when the PiHole is fully functional.</p>
<p>Having a proper DNS fallback server is difficult to set up, so best would be to use two different PiHole instances. Unless, of course, you don't mind a small period of internet loss and you are always nearby to fix the situation.</p>
<h2 id="Caveats">Caveats</h2>
<p>Unfortunately, ads on video platforms like YouTube will not be blocked. This is because they serve the ads from the same domains as the main content, meaning there is no <code>ads.youtube.com</code> or similar for PiHole to block. As there are a few of these edge cases, it is always recommended to use PiHole in conjunction with on-device ad blockers like <a href="https://getublock.com/">uBlock Origin</a>.</p>
<h2 id="Final_words">Final words</h2>
<p>Really, there are few reasons to not get PiHole into your home and the benefits vastly outweigh the challenges (IMHO). It is also a great start on a journey of selfhosting more services and realising that one can be independent of major corporations to some degree.</p>
Traefik migrated to v22020-05-09T23:01:34+00:002020-05-09T23:01:34+00:00
Unknown
https://yarmo.eu/blog/traefik-migration/<p><code>#100DaysToOffload >> 2020-05-09 >> 015/100</code></p>
<p>Last September, <a href="https://containo.us/traefik/">traefik</a> received its <a href="https://containo.us/blog/traefik-2-0-6531ec5196c2/">big version 2 update</a>. I was very excited about TCP routers and the newly implemented middlewares. It can't have been more than a few days later that I tried to migrate my homelab to the new version. I remember being annoyed by the lack of a proper migration guide. Sure, it's possible that I didn't look hard enough, but I searched for a few days without results. I tried using the new documentation and failed: everything crashed and I could not get it working. As I did not have the time for more extensive research and needed my selfhosted services on a daily basis, I left it.</p>
<p>Until today. The whole migration took me a little over three hours and I learned quite a bit on the way. Also, the <a href="https://docs.traefik.io/migration/v1-to-v2/">migration guide</a> has helped quite a bit. If this was updated since September, great article. If not, still a great article and I really did not take the appropriate amount of time to prepare my migration.</p>
<h2 id="Easy_steps">Easy steps</h2>
<p>The first thing I did was a general search-and-replace for the docker labels (both routers and services). What was <code>traefik.frontend.rule=Host:xyz</code> now is <code>traefik.http.routers.router0.rule=Host(``xyz``)</code>. What was <code>traefik.port=80</code> now is <code>traefik.http.services.service0.loadbalancer.server.port=80</code>. Quite a bit longer and more cumbersome, but in the end, more extensible.</p>
<p>The <code>traefik.docker.network=xyz</code> is now unnecessary in most cases as you can define a default network in the <code>traefik.toml</code> file. Speaking of which, you can now work with a YAML file. It's not to everyone's taste, but I will switch to it in the future when I have some more time.</p>
<p>The <code>traefik.toml</code> still needed quite the makeover, but everything is well explained in the <a href="https://docs.traefik.io/migration/v1-to-v2/">migration guide</a> and the <a href="https://docs.traefik.io/reference/static-configuration/file/">reference page</a>. Content-wise, I changed little; it's just that the syntax is different. Notable changes: the domains for which certificates are needed are now declared in the <code>entrypoints</code> section instead of the <code>acme</code> section, and the <code>file</code> section can no longer include the router/service declarations; they belong in a separate file.</p>
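<p>For illustration only (the domain names and resolver name are placeholders; double-check against the reference page for your setup), the certificate declaration on an entrypoint might look like this in v2:</p>

```toml
[entryPoints.websecure]
  address = ":443"
  [entryPoints.websecure.http.tls]
    certResolver = "mycertresolver"
    [[entryPoints.websecure.http.tls.domains]]
      main = "example.com"
      sans = ["*.example.com"]

[certificatesResolvers.mycertresolver.acme]
  email = "admin@example.com"
  storage = "acme.json"
  [certificatesResolvers.mycertresolver.acme.httpChallenge]
    entryPoint = "web"
```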
<h2 id="Pitfalls">Pitfalls</h2>
<p>Doing this resulted in a non-functional state, with two different failure modes: either requests could not be routed to the correct service, resulting in a 404, or the routing was correct but the certificate wasn't. That first one was mostly on me: containers are no longer exposed by default and I forgot to add <code>traefik.enable=true</code>. Mind you, I always set <code>traefik.enable=false</code> when I didn't want a container exposed, and still do.</p>
<p>However this did not solve the issue for all the containers. I suspect there's still some trickery I need to do in case of using multiple routers. I tried explicitly specifying the <code>service</code> for the different routers but that wasn't the solution.</p>
<p>As for the other issue, the solution was simple but finding the source was quite hard: as it turns out, I renamed the <code>certificateResolver</code> to something other than <code>default</code>. If such is the case, then containers will NOT automatically use it for their certificates. Adding <code>traefik.http.routers.router0.tls=true</code> and <code>traefik.http.routers.router0.tls.certresolver=mycertresolver</code> to each container solves this issue.</p>
<h2 id="Todo">Todo</h2>
<p>One thing I haven't got working yet is using the <code>providers.file</code> provider. I tried to mimic the container labels but to no avail. Yet.</p>
<hr />
<p><strong>Update 2020-05-12</strong>: I fixed the <code>providers.file</code> issue. Remember kids, always read the documentation well. It turns out, I missed the line that starts with <code>*</code> in the code below.</p>
<pre data-lang="toml" style="background-color:#212733;color:#ccc9c2;" class="language-toml "><code class="language-toml" data-lang="toml"><span>[</span><span style="color:#73d0ff;">http</span><span style="color:#ccc9c2cc;">.</span><span style="color:#73d0ff;">services</span><span>]
</span><span> [</span><span style="color:#73d0ff;">http</span><span style="color:#ccc9c2cc;">.</span><span style="color:#73d0ff;">services</span><span style="color:#ccc9c2cc;">.</span><span style="color:#73d0ff;">Service01</span><span>]
</span><span> [</span><span style="color:#73d0ff;">http</span><span style="color:#ccc9c2cc;">.</span><span style="color:#73d0ff;">services</span><span style="color:#ccc9c2cc;">.</span><span style="color:#73d0ff;">Service01</span><span style="color:#ccc9c2cc;">.</span><span style="color:#73d0ff;">loadBalancer</span><span>]
</span><span style="color:#ff3333;">* [[http.services.Service01.loadBalancer.servers]]
</span><span> </span><span style="color:#73d0ff;">url </span><span>= </span><span style="color:#bae67e;">"foobar"
</span></code></pre>
Time to #DeleteKeybase2020-05-08T11:54:54+00:002020-05-08T11:54:54+00:00
Unknown
https://yarmo.eu/blog/deletekeybase/<p><code>#100DaysToOffload >> 2020-05-08 >> 014/100</code></p>
<p>If you are reading this, there's a big chance you already heard the news: Zoom acquired Keybase. Whether you liked it from the beginning or not, I think most can agree that after the acquisition, there's no more reason to trust the platform and thus to use it. What happens to our keys now is anyone's guess.</p>
<p>Luckily, I had the precaution to never upload my private keys, so all I had to do was donate the remainder of my stellar coins to good causes (such as <a href="https://tails.boum.org/donate/">Tails</a>), press the <a href="https://keybase.io/account/delete_me">big red button</a> and remove any links to them from my website.</p>
My homelab crashed, time for a break?2020-05-07T19:53:23+00:002020-05-07T19:53:23+00:00
Unknown
https://yarmo.eu/blog/homelab-crashed/<p><code>#100DaysToOffload >> 2020-05-07 >> 013/100</code></p>
<p>It's not the first time my homelab has crashed and it won't be the last. Something with the hard drives. I'll figure it out, no doubt. But despite needing it for various services throughout my daily routine, I have decided to let the homelab rest, maybe for a few days.</p>
<p>It has been running almost non-stop since I started it about two years ago, I never made major changes, always gradually improved upon it. Now, the time may have come to take a hard look at what I started with, what I ended up with, learn a few valuable lessons and perhaps start over. My homelab could use a 2.0 moment.</p>
Search engine indexing: DDG vs Google2020-05-06T10:10:45+00:002020-05-06T10:10:45+00:00
Unknown
https://yarmo.eu/blog/search-engine-indexing/<p><code>#100DaysToOffload >> 2020-05-06 >> 012/100</code></p>
<p>Having my own website means I get to control what happens on a tiny tiny part of the internet; it's my space. More importantly, I want to have a bit of control about what people see when they decide to put my name in a search engine. This is an important reason to have a website in the first place: I don't believe anyone would want their Facebook page to be their first impression, or anything the search engine decides to put first.</p>
<p>Months ago, I did a little test, searched my name in both Google and DuckDuckGo, didn't see my website which I just started, didn't think too much of it and went on with my life. Yesterday, I checked again. Let's compare the experiences.</p>
<p>Without any input from me, DuckDuckGo had found my website and it's the first thing anyone sees when searching for my name: mission accomplished. On Google, my website was not on the first page. Or the second. Or the third. After looking around in their "Webmaster Tools", I found out they had never figured out my website existed. I had to manually request indexing, which they say will be done at some point. And I couldn't even request it without a good ol' game of finding crosswalks in a never-ending series of small images presented in a 3x3 grid.</p>
<p>In your opinion, what is the better experience?</p>
Varken: Plex monitoring solution2020-05-05T21:49:58+00:002020-05-05T21:49:58+00:00
Unknown
https://yarmo.eu/blog/varken/<p><code>#100DaysToOffload >> 2020-05-05 >> 011/100</code></p>
<p>Today, I discovered <a href="https://github.com/Boerderij/Varken">Varken</a>, a neat solution to monitor your Plex ecosystem (including Sonarr, Radarr, etc.) and store the data in your InfluxDB instance. It is a great addition as I can now make Grafana or Chronograf dashboards encompassing both server metrics and Plex metrics. The reason this matters is that I have a relatively low-power server (NUC) and a single Plex stream can have a noticeable impact on CPU usage.</p>
<p>Varken requires a <a href="https://tautulli.com/">Tautulli</a> instance to collect the data from as well as a <a href="https://www.maxmind.com">MaxMind</a> API key which unfortunately isn't optional. I run all software mentioned in this post in separate docker containers.</p>
<p>Also, 11th post for #100DaysToOffload today and 11 is my lucky number :)</p>
Taking a break from raid2020-05-04T18:43:31+00:002020-05-04T18:43:31+00:00
Unknown
https://yarmo.eu/blog/break-from-raid/<p><code>#100DaysToOffload >> 2020-05-04 >> 010/100</code></p>
<p>I have three main hard drives in a <a href="http://www.snapraid.it/">snapraid</a> setup in my NAS and a few extra drives for backup. All drives are connected to the server (NUC) via a JBOD USB drive case. I love snapraid, it has served me well and most certainly will in the future.</p>
<p>But right now, I need the drive space more than I need a solution for my data to continue being served after a drive has died. As we all know, raid is not a backup; it's a solution to keep data available while one or more drives are down. Perfect for critical applications, but let's be honest, my homelab is not one, especially with me sitting next to it 24/7.</p>
<p>Thus soon, when I have saved a bit more, I will expand my homelab to a larger array of drives, all connected directly through SATA and all raided using snapraid with ample backup capacity. That time is unfortunately not now. So out goes snapraid and in goes the full capacity of my third drive.</p>
<p>They are WD Red 6TBs. Yes, I have checked, they are CMR. And yes, these are the last drives I'll ever buy from WD.</p>
Selfhost email… But should you?2020-05-03T19:46:47+00:002020-05-03T19:46:47+00:00
Unknown
https://yarmo.eu/blog/selfhost-email-drawbacks/<p><code>#100DaysToOffload >> 2020-05-03 >> 009/100</code></p>
<p>Yesterday, I wrote about <a href="/blog/selfhost-email">how you <strong>can</strong> selfhost your very own email server</a>. Shortly after publishing the post, it was pointed out to me that there are very reasonable drawbacks to doing this. So today, let me give you my answer to the question of whether you <strong>should</strong> selfhost your email.</p>
<h2 id="Relying_on_hardware">Relying on hardware</h2>
<p>Firstly, I mentioned in that article that although I have two domains on my selfhosted server, I still fall back to a protonmail address for the most important stuff like banking and governmental services. So what's the point of selfhosting then, if I do use third-party email addresses? Well, what I failed to mention was that in the long term, yes, I do want everything selfhosted.</p>
<p>When I started my email hosting adventure, I was very cautious. Only months before had I started my own homelab and, as it turned out, it had a tendency to crash every so often, making it a no-go for email hosting. I resorted to using a VPS while getting my homelab sorted out. This worked great and still does, but at the time, you can imagine I was still discovering the DNS parameters and the reputation handling (more on this later). Also, what were the consequences of running a VPS 24/7? I could not commit to using the selfhosted email for anything more than experimentation.</p>
<p>Fast-forward about a year and the VPS has held up greatly; the email software has never crashed or acted against my expectations. It has received regular updates and never failed once during one. Meanwhile, my homelab has proven to be extremely reliable and with the upcoming hardware upgrade, I expect even fewer irregularities than I see now. Soon enough, when all the stars align and I figure out how to make recoveries as fast on my local hardware as on a VPS (spin up a new instance from a daily snapshot and voilà!), my email server will be transferred to my homelab and I will use it for everything.</p>
<h2 id="The_pain_of_administering_an_email_server">The pain of administering an email server</h2>
<p>Secondly, I've also heard of people deciding against selfhosting their email because of the fragility of the underlying processes: if one thing is slightly out of tune, the whole email server stops working. Although I toyed around with most of the individual processes like dovecot at the beginning to understand what they do and how they work, I haven't touched a single one of them in almost a year. <a href="https://mailcow.email/">mailcow.email</a> is just that good. I've played around with the settings and it won't stop working. Meanwhile, I get an antivirus, spam monitoring and those handy "+topic" email filters. I'd like to try out <a href="https://mailinabox.email/">Mail-in-a-Box</a>, mostly because it is also recommended by <a href="https://www.privacytools.io/providers/email/#selfhosting">PrivacyTools</a>, but I have no incentive to. My current solution just works great for me.</p>
<h2 id="The_reputation_of_a_server">The reputation of a server</h2>
<p>Lastly, I need to address what is IMO a bigger problem: reputation. If other servers don't trust yours, your emails may easily be thrown into the recipient's spam folder or even rejected. The main reason for this is the fight against spam: mass email spammers usually operate from unknown IP addresses. Unfortunately, this hurts selfhosters. So, before you have even installed your email server software, you are already mistrusted simply for not using big providers like Gmail and Hotmail. And indeed, when I started, most of my emails landed in spam.</p>
<p>This improved greatly simply by using an email relay; in my case, <a href="https://www.mailgun.com/">mailgun</a>. These paid-for (but often free-tier) services are a lot more trusted, since mailgun does spam prevention on their end, so letting them send your emails for you is a great improvement. And even with the <a href="https://news.ycombinator.com/item?id=22192543">recently reduced free tier</a>, I don't send nearly enough emails to come close to the quota.</p>
<p>However, it still happens that my emails are treated as spam, so I often do follow-ups via other channels of communication. Another issue may be that the IP address you were given already has a bad reputation caused by a previous owner: this is difficult to find out and even harder to fix. DDG-ing <code>improve email server reputation</code> yields many articles, but read a handful of them and you'll soon realise it's really, really hard to improve. There's no central repository, no forms. Getting a mistrusted IP address can quickly suck all the fun out of having your own email server.</p>
<h2 id="Answering_the_question">Answering the question</h2>
<p>So, <strong>should you</strong>? This depends on how willing you are to be independent of third-party email services and how much you are willing to put up with. I started naïvely and had to answer this question along the way while experimenting. By now, my personal answer to this question is: yes. I see the benefits and drawbacks. I'm not sure if it's the usage of mailgun, or me sending mails to family and friends and then asking them to tell their services to not mark it as spam, but most of my emails are properly received nowadays. Also, I have managed to improve my infrastructure, I can rely on the hardware (and soon on the emergency recovery mechanisms) and will soon migrate my email server so it's nicely at home.</p>
<p>Hosting your own email server is not easy and requires your full dedication. And with many upcoming <a href="https://www.privacytools.io/providers/email/">trusted and privacy friendly email services</a>, it may not always be the right tool for the job.</p>
Selfhost email2020-05-02T16:20:35+00:002020-05-02T16:20:35+00:00
Unknown
https://yarmo.eu/blog/selfhost-email/<p><code>#100DaysToOffload >> 2020-05-02 >> 008/100</code></p>
<p>Yes, you can selfhost email. And you should, if and ONLY if you feel comfortable with maintaining a linux server. I'm not a pro at all, but I've been doing it for almost two years, I know where to find my logs, I know how to find the correct answers on stackoverflow and troubleshoot a less-than-functional system.</p>
<p>So don't start with this, but eventually, soon enough, you can selfhost your email.</p>
<p>Because email is important to me, I have chosen to not host it at home, any network issue could prevent emails from coming in. Granted, the sending server will usually retry for 24 hours until the email is actually received on your side so small errors are forgiven, but still, I've opted for a dedicated droplet on digitalocean, though any VPS will do.</p>
<p>And then, follow the instructions on <a href="https://mailcow.email/">mailcow.email</a> and you're set. SSH in once a week on your VPS to run the updater. The administration side has plenty of features for advanced administration of the email server and the included webclient is the awesome SOGo.</p>
<p>If you want to make sure your email server is as trusted by other servers as possible, your emails are sent as securely as possible and your experience with other email clients is as smooth as possible, please check out my <a href="https://yarmo.eu/blog/email-dns">post on email server DNS settings</a>.</p>
<p>With all that being said, I still use a protonmail address for critical websites and services like governmental services and banking, because whatever happens, I need to be sure that I really receive those emails. On my selfhosted email server, I use two domains: one which I share with the world and with websites for logins, and one that I keep private and only use for direct communication with other people. I have yet to experience a single minute of outage, credit to DigitalOcean and the people behind mailcow.</p>
<p>---UPDATE---</p>
<p>After a <a href="https://fosstodon.org/@Matter/104099349377193869">fair comment</a> on the fediverse, I have written a <a href="/blog/selfhost-email-drawbacks">follow-up post</a> to address a few more critical points like server reputation and how I deal with that.</p>
A response to ICANN's refusal to sell .ORG2020-05-01T09:33:40+00:002020-05-01T09:33:40+00:00
Unknown
https://yarmo.eu/blog/icann-rejects-sale-org/<p><code>#100DaysToOffload >> 2020-05-01 >> 007/100</code></p>
<p>A response to <a href="https://www.icann.org/news/blog/icann-board-withholds-consent-for-a-change-of-control-of-the-public-interest-registry-pir">ICANN's refusal to sell .ORG</a> in 3 movements.</p>
<p>My first reaction was sarcastic when I saw the cheer on social media: "look at us celebrating like there's no tomorrow because a non-profit organisation chose to NOT sell a TLD made for non-profit organisations to a for-profit corporation".</p>
<p>But they indeed chose not to. They really chose not to. They didn't do it. The people spoke and the people won. The powers that be got greedy, misread the room and adjusted their path because and only because of the people. A great, great thanks to all who wrote letters to the California Attorney General and made their voices heard online. This is a victory for all.</p>
<p>Today, we celebrate. Unfortunately, tomorrow, we need to think about what happens next. The internet is still under threat. A group of people holds full power over what the internet looks like, and they have shown themselves to be untrustworthy. For each domain we buy, we pay an ICANN fee, yet ICANN has made it clear that they do not have our interests at heart. Stay safe outside and vigilant on the web.</p>
<p>If within your possibilities and beliefs, please support the <a href="https://www.opennic.org/">OpenNIC project</a> (no affiliation, just a fan), a "user-owned and -controlled DNS root offering an alternative to ICANN and the traditional TLD registries".</p>
The People's Web2020-04-30T08:53:37+00:002020-04-30T08:53:37+00:00
Unknown
https://yarmo.eu/blog/peoples-web/<p><code>#100DaysToOffload >> 2020-04-30 >> 006/100</code></p>
<p>The day has long passed that we should have started worrying about the openness of our web. It was only a matter of time before censorship and individual tracking would seep into the web of the western world. It pained me to see the levels of state interference in foreign countries, but at least, to me, that was something I only read about in the news; it was unimaginable then that this would happen any time soon in the "free world".</p>
<p>Now, in these trying times, states and big corporations may see it more than justified to track individuals and adjust the information we receive. I do not question their motives: "we are in this together". It takes the world to fight this pandemic, and the next one, and the one after that.</p>
<p>But the question on a lot of minds is: "how do we come back from this?" The hardest part is getting people to accept that they are being tracked and their information filtered, so big corporations have used shady, hidden tactics to avoid that confrontation. We are past this: YouTube has announced it will ban all content that does not conform to WHO guidance, and countries everywhere are building apps to track our health and social interactions. Again, this could prove to be what humanity needs right now, but what about after? Is there even an after?</p>
<p>Fortunately, we all have the power to make a few changes to improve our online well-being: change social networks, don't rely on corporations, self-host as much as you can. I will be dedicating a large number of posts to this topic. Self-hosting is not hard, it just takes a little effort to get started.</p>
<p>As a friendly reminder, this website (blog included) has no tracking whatsoever: I do not care who you are, where you are from or how many of you are reading this post.</p>
Typography · Ellipsis2020-04-29T21:21:43+00:002020-04-29T21:21:43+00:00
Unknown
https://yarmo.eu/blog/typography-ellipsis/<p><code>#100DaysToOffload >> 2020-04-29 >> 005/100</code></p>
<p>I like typography and exploring the stories behind special characters. Today, I'd like to talk about one that many use frequently, myself included, but often not in the "digitally correct" way (IMHO).</p>
<p>Yes, I'm talking about the ellipsis. Symbolised by three consecutive dots, it signals that a sentence was cut short and that the reader can finish it in his or her head from the context. Surrounded by brackets, it signals that a passage was omitted but that the meaning of the remaining sentence is unaltered by that omission. Messaging apps use it to signal that the other person is writing.</p>
<p>You may or may not know this, but both on our computers and on our phones, the ellipsis is actually a single special character which can be used instead of writing three separate dots. On a phone, it's accessible under one of the keys by long-pressing on it. On the computer, I usually just copy-paste it, but on Ubuntu, it's inserted by pressing <code>ctrl+shift+u</code>, then typing <code>2026</code> followed by an <code>enter</code>. On Windows, it's inserted by holding <code>alt</code> and typing <code>0133</code> on the numpad. In both HTML and markdown, it's inserted by writing <code>&amp;hellip;</code>.</p>
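<p>To convince yourself that the ellipsis really is a single character rather than three dots, here's a minimal Python sketch using the same codepoint (<code>2026</code>) typed on Ubuntu above:</p>

```python
# The ellipsis is one Unicode character, U+2026 (HORIZONTAL ELLIPSIS),
# not three consecutive full stops.
import unicodedata

ellipsis = "\u2026"  # same codepoint as ctrl+shift+u 2026 on Ubuntu

print(unicodedata.name(ellipsis))  # HORIZONTAL ELLIPSIS
print(len(ellipsis))               # 1 (versus len("...") == 3)
print(ellipsis == "...")           # False: entirely different strings
```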
<p><a href="https://en.wikipedia.org/wiki/Ellipsis">Wikipedia</a></p>
Missed a day2020-04-29T09:02:57+00:002020-04-29T09:02:57+00:00
Unknown
https://yarmo.eu/blog/missed-a-day/<p><code>#100DaysToOffload >> 2020-04-29 >> 004/100</code></p>
<p>Well, that was fast. I missed my first day in the #100DaysToOffload challenge. I am not one to make up excuses and reasons why this has happened.</p>
<p>Though I am not planning to share the layout of my entire day yesterday in this blog post, I will write a little bit about an issue I have been facing lately: memory problems. After talking with experts, this is apparently a common issue people face after prolonged exposure to stressful situations. As a reference, I never had problems remembering things before the PhD; sure, my memory was not the best out there, but it served me well. Nowadays, I tend to forget things on a daily basis unless I write them down immediately. Well, I will still forget them, but at least I'll have an indelible reminder. I remembered on multiple occasions yesterday to write a blog post, but I kept forgetting it a bit later and I didn't make a note of it, so…</p>
<p>I cannot wait for this to be over. Until then, I will try something new to help me specifically with #100DaysToOffload: I will leave a fully charged Thinkpad by my bedside in the evening, and first thing in the morning, I will write my blog post for that day.</p>
<p>Let's try that :)</p>
Building my first PC2020-04-27T13:56:18+00:002020-04-27T13:56:18+00:00
Unknown
https://yarmo.eu/blog/pc-build/<p><code>#100DaysToOffload >> 2020-04-27 >> 003/100</code></p>
<p>While working in the lab for my PhD, I needed a good computer. It didn't need to be exceptional, and though I did lots of biology and physics computation, I knew that GPU acceleration wasn't needed, which eliminated the need for complicated builds. I went with a NUC.</p>
<p>Two years ago, I started my homelab. All I needed was a relatively simple PC that I wouldn't mind leaving turned on permanently. I opted for a NUC.</p>
<p>Then I needed a PC I could use at home, either to do some more work or play some game. Not expecting great gaming results, I still chose a NUC.</p>
<p>Those "not great gaming results", I got! The 7i7 has a built-in GPU and games can definitely be played on it, but it struggled with reliability for competitive gaming. This year, that's all changing. I have built my own PC for the first time: not only does it allow me to play games more comfortably, it will also be my new work-at-home computer and is more than performant enough for video editing and music mixing (thank you, foam-padded case!).</p>
<p>I opted for an <a href="https://www.amd.com/en/products/cpu/amd-ryzen-5-3600">AMD Ryzen 5 3600</a> on an <a href="https://www.asus.com/Motherboards/PRIME-B450M-A/">Asus PRIME B450M-A</a> motherboard paired with an <a href="https://www.amd.com/en/products/graphics/radeon-rx-580">AMD RX 580</a> GPU. OS and software go on an NVMe m.2 drive, games on a SATA SSD, data on a 2TB HDD. 16GB of DIMM DDR4 RAM.</p>
<p>My <a href="https://www.userbenchmark.com/UserRun/27232925">userbenchmark</a>:</p>
<ul>
<li>UserBenchmarks: Game 67%, Desk 123%, Work 96%</li>
<li>CPU: AMD Ryzen 5 3600 - 92.5%</li>
<li>GPU: AMD RX 580 - 60.8%</li>
<li>SSD: Kingston SA2000M8250G 250GB - 241.9%</li>
<li>SSD: WD Green 240GB (2018) - 56.7%</li>
<li>SSD: WD Green 240GB (2018) - 51.5%</li>
<li>HDD: Seagate Barracuda 2TB (2018) - 101.7%</li>
<li>RAM: Corsair Vengeance LPX DDR4 3200 C16 2x8GB - 83.4%</li>
<li>MBD: Asus PRIME B450M-A</li>
</ul>
<p>Man, I love team red.</p>
Gaming to relax2020-04-26T16:20:27+00:002020-04-26T16:20:27+00:00
Unknown
https://yarmo.eu/blog/gaming/<p><code>#100DaysToOffload >> 2020-04-26 >> 002/100</code></p>
<p>Today hasn't been the smoothest of days and though I got ideas for a few more blog posts, I do not currently have the mental energy to work on any of them.</p>
<p>So instead, allow me to list a few games which tend to help me relax a bit, one of which I'll start up right after writing this post:</p>
<ul>
<li>Rocket League (great for both casual and competitive, usually I play with my two brothers)</li>
<li>Post Scriptum (great for "relaxation through immersion")</li>
<li>Deadside (great for "relaxation through immersion")</li>
</ul>
<p>I play others as well, though these are nowadays my go-to's. If you happen to play any of these, <a href="/contact">contact me</a> and let's play together, that always enhances the experience!</p>
#100DaysToOffload2020-04-25T11:14:33+00:002020-04-25T11:14:33+00:00
Unknown
https://yarmo.eu/blog/100-days-to-offload/<p><code>#100DaysToOffload >> 2020-04-25 >> 001/100</code></p>
<p>On <a href="https://fosstodon.org">Fosstodon</a>, <a href="https://fosstodon.org/@kev">@kev</a> wrote a <a href="https://fosstodon.org/web/statuses/104053977554016690">toot</a> which started <a href="https://fosstodon.org/tags/100DaysToOffload">#100DaysToOffload</a>, a challenge to blog for 100 days about anything. Enthusiastic about this idea, I'm starting today and decided to make a continuously updated list about the other blogs participating in the challenge.</p>
<p><a href="https://write.privacytools.io/darylsun/">Beyond the Garden Walls</a><br />
<a href="https://blog.marcg.pizza/marcg/">G's Blog</a><br />
<a href="https://write.privacytools.io/freddy/">Freddy's Blog</a><br />
<a href="https://write.as/write-as-roscoes-notebook/">Roscoe's Notebook</a><br />
<a href="https://degruchy.org/">Nathan's Musings on the Web</a><br />
<a href="https://gregoryhammond.ca/blog/">Gregory Hammond</a><br />
<a href="https://www.garron.me/en/blog/">Garron</a><br />
<a href="https://secluded.site/">Secluded Site</a></p>
<p>Want to find even more participating blogs and links to every post? Search for the <code>#100DaysToOffload</code> hashtag on the fediverse (<a href="https://fosstodon.org/tags/100DaysToOffload">Fosstodon link</a>).</p>
So you want to make it on the fediverse?2020-03-20T13:54:22+00:002020-03-20T13:54:22+00:00
Unknown
https://yarmo.eu/blog/make-it-on-fediverse/<p>That's the plan, right? A whole new world awaits you on the fediverse, and you are going to make it there! There's something you should know.</p>
<!--more-->
<h2 id="Welcome_to_the_Fediverse">Welcome to the Fediverse</h2>
<p>Whenever I talk to people in my surroundings about the fediverse and try to convince them to use it, I have a go-to story that I tell. On multiple occasions in the past, real-life events have sparked controversy on Twitter, which then responded by banning some public figure or doing something else to upset a portion of the population. In turn, this would lead either that public figure or a social movement to incite people to leave Twitter and join a safer and more open alternative: the fediverse. As citizens of the fediverse, we would notice a massive influx of new users and a stream of introductory messages on our timelines. Which, in my humble opinion, is always a welcome sight. An interesting observation is that the first message by these new citizens is often a little… "off" if you are used to being on the fediverse.</p>
<p>Allow me to elaborate. The introductory message of Twitter exiles often reads as follows: "Hello all! I am [insert name here], I am going to post messages about [insert multiple topics here]. Who are the people I should follow?".</p>
<p>The message above oozes "Twitter Mentality". To explain what I mean, let me use an analogy.</p>
<h2 id="The_analogy">The analogy</h2>
<p>Twitter is a metropolis. Its users all share the same playing field. If you want to participate, you don't talk, you shout. How else are you going to be heard in a crowd of millions? Once you start shouting, quiet people start to listen. This creates a one-to-many dynamic.</p>
<p>The fediverse is a network of well-connected villages. As part of a village, you get to know people. You talk to people because it's less crowded, there's less competition. It's still a network so you can connect to people multiple villages away with the same ease. But the noise is filtered. You are surrounded by people who share a common interest which is the reason you decided to live in that specific village in the first place, but you still get the network effect and communicate with people outside of the village because you want to, because you can. This creates a one-to-one or on a larger scale, many-to-many dynamic.</p>
<h2 id="What_makes_a_network_social?">What makes a network social?</h2>
<p>My argument is that a one-to-many network is not a social network, it's broadcasting. Having "influencers" on a network is, if anything, anti-social. This is where I need to go back to that initial message by the Twitter exile. There's no need to announce you are going to talk about a certain topic, just do it. I will not follow you because you announce that you will post messages about a certain topic. Post a message, let's debate and talk about it, and if I am truly interested in your opinion and reasoning, only then will I follow you. And don't follow people because many others do; follow them because you want to, because you get the choice.</p>
<p>This is my opinion and I have talked to many sharing the same sentiment. This many-to-many dynamic is what makes the fediverse appealing. You get a Home timeline where you will find posts from people you follow and actually want to hear from. You get a Local Timeline filled with posts from people you don't always know, but it's the same instance/village, so you already share a common ground. And for when you feel like exploring, you get a Known Network timeline filled with all sorts of posts.</p>
<p>You may argue that building this whole narrative around a simple introductory post is flimsy, and I agree. Not everyone's first post is like that, and even if it were, there are different ways of interpreting the content. But it's something I found people can relate to, it's a bridge between two mentalities, two social structures that I can use to introduce the many advantages of the fediverse.</p>
<p>If you would like actually useful information on starting with the fediverse with Mastodon, please have a look at this <a href="https://kevq.uk/how-does-mastodon-work/">blog post by KevQ</a>.</p>
Selfhosted email: DNS records2020-02-18T18:39:54+00:002020-02-18T18:39:54+00:00
Unknown
https://yarmo.eu/blog/email-dns/<p>When selfhosting email, an essential element to get right is the DNS records. Some are absolutely mandatory for email to work, some build trust and some just make life easier. Here's an overview of how I set up DNS for my personal mail server.</p>
<!--more-->
<h2 id="My_setup">My setup</h2>
<p>I have a VPS running <a href="https://mailcow.email/">mailcow</a> and two domains: one linking to the mail server, admin page and web client (let's call this <code>mail.server.domain</code>), and the other just being an email domain used in the email address (let's call this <code>public.domain</code>, so the email address would be <code>hi@public.domain</code>). This way, even if you know the email domain, you don't directly know the server domain. Granted, one could find this out by looking at a few DNS records.</p>
<p>The benefit is that I can host email for multiple domains, as long as they all point to <code>mail.server.domain</code> by using the correct DNS records.</p>
<p>Please note that using a single domain is just as easy, as <code>mail.server.domain</code> and <code>public.domain</code> will simply be the same. Another scenario for which you could use the 2-domain setup is when you want your email address to be <code>hi@public.domain</code> (without subdomain) but wish to put the mail server (and/or web client) on <code>mail.public.domain</code> (with subdomain).</p>
<p>I use <a href="https://www.digitalocean.com/docs/networking/dns/">DigitalOcean</a> as my VPS and DNS provider.</p>
<h2 id="Mandatory_DNS_records_for_server.domain">Mandatory DNS records for server.domain</h2>
<h3 id="A_records">A records</h3>
<p>A records link domain names to IP addresses. When you want to use the admin page or web client provided by your email selfhosting software, the browser needs to know the IP address of the server/VPS and that is what the A record is used for. A records are not used by mail servers when sending or receiving emails.</p>
<pre style="background-color:#212733;color:#ccc9c2;"><code><span>TYPE HOSTNAME VALUE TTL
</span><span>A mail.server.domain 1.2.3.4 3600
</span></code></pre>
<h2 id="Mandatory_DNS_records_for_public.domain">Mandatory DNS records for public.domain</h2>
<h3 id="MX_records">MX records</h3>
<p>MX records tell other mail servers where to actually send the emails. In my case, my email address is <code>hi@public.domain</code> but my mail server is located at <code>mail.server.domain</code>. Other mail servers look at the address, see <code>public.domain</code> and will assume this is our mail server. We use MX records to direct the emails to <code>mail.server.domain</code> instead.</p>
<pre style="background-color:#212733;color:#ccc9c2;"><code><span>TYPE HOSTNAME VALUE PRIORITY TTL
</span><span>MX public.domain mail.server.domain 1 14400
</span></code></pre>
<h2 id="Optional_DNS_records_for_public.domain">Optional DNS records for public.domain</h2>
<p>A and MX records are all you need to get a functional email address. However, for ease of use and good reputation/trust, a few additional DNS records are recommended.</p>
<h3 id="SRV_records_(ease_of_use)">SRV records (ease of use)</h3>
<p>SRV records are used to link specific protocols to specific domains and ports. Just like how MX records tell other mail servers to direct their mails to your mail server on a different domain, the same must be done for mail clients. Say you want to use Thunderbird (or any other mail client) to access your emails. You will log in with your address (<code>hi@public.domain</code>) and password in Thunderbird, and it will then assume your mail server must be located at <code>public.domain</code>. It will not find it there, warn you about this and you will have to manually enter your IMAP and SMTP server details. If you have set up SRV records, Thunderbird will automatically detect the correct server location (<code>mail.server.domain</code>) and save you some hassle.</p>
<pre style="background-color:#212733;color:#ccc9c2;"><code><span>TYPE HOSTNAME VALUE PORT PRIORITY WEIGHT TTL
</span><span>SRV _imap._tcp mail.server.domain 143 1 100 14400
</span><span>SRV _imaps._tcp mail.server.domain 993 1 100 14400
</span><span>SRV _pop3._tcp mail.server.domain 110 1 100 14400
</span><span>SRV _pop3s._tcp mail.server.domain 995 1 100 14400
</span><span>SRV _submission._tcp mail.server.domain 587 1 100 14400
</span><span>SRV _smtps._tcp mail.server.domain 465 1 100 14400
</span><span>SRV _sieve._tcp mail.server.domain 4190 1 100 14400
</span><span>SRV _autodiscover._tcp mail.server.domain 443 1 100 14400
</span><span>SRV _carddavs._tcp mail.server.domain 443 1 100 14400
</span><span>SRV _caldavs._tcp mail.server.domain 443 1 100 14400
</span></code></pre>
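<p>For the curious, the way clients use the PRIORITY column above is described in RFC 2782: records with the lowest priority value are tried first, and weight only breaks ties among equal priorities. A minimal Python sketch of that selection step (the <code>backup.server.domain</code> target is hypothetical, purely for illustration):</p>

```python
# Sketch of SRV target selection: the lowest priority value wins (RFC 2782).
# Records are (priority, weight, port, target) tuples.
srv_records = [
    (10, 100, 143, "backup.server.domain"),  # hypothetical fallback server
    (1, 100, 143, "mail.server.domain"),     # preferred, as in the table above
]

def pick_srv(records):
    """Return (target, port) of the most-preferred SRV record."""
    priority, weight, port, target = min(records, key=lambda r: r[0])
    return target, port

print(pick_srv(srv_records))  # ('mail.server.domain', 143)
```

<p>A full client would additionally do a weighted random pick among records sharing the lowest priority; with one record per priority, as here, that step is a no-op.</p>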
<h3 id="TXT_records_(good_reputation)">TXT records (good reputation)</h3>
<p>TXT records are simply messages that provide additional information. Here, TXT records are used to tell other mail servers more about your own mail server in order to build some trust between them: these records are a useful tool against spoofing, where bad actors impersonate you and make it look as though the bad emails they send come from you.</p>
<pre style="background-color:#212733;color:#ccc9c2;"><code><span>TYPE HOSTNAME VALUE
</span><span>TXT @ "v=spf1 mx ~all"
</span><span>TXT dkim._domainkey "v=DKIM1;k=rsa;t=s;s=email;p=..."
</span><span>TXT _dmarc "v=DMARC1;p=reject;rua=mailto:admin@public.domain"
</span></code></pre>
<p>Detailed information on these records can be found <a href="https://www.skelleton.net/2015/03/21/how-to-eliminate-spam-and-protect-your-name-with-dmarc/">here</a>, but in short:</p>
<ul>
<li>SPF records tell other mail servers that only the specified mail server (in this case, <code>mx</code> which points to <code>mail.server.domain</code> via the MX record) is allowed to send emails for your email domain (in this case, <code>public.domain</code>);</li>
<li>DKIM applies a cryptographic signature to all your sent emails and other mail servers use the second TXT record above to validate that signature;</li>
<li>DMARC records tell other mail servers what should happen to emails that fail the SPF and DKIM policies; the record above states these emails should be rejected and a notification sent to <code>admin@public.domain</code>.</li>
</ul>
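<p>The SPF record itself is just a version tag followed by space-separated mechanisms, which makes it easy to inspect programmatically. A minimal Python sketch (a toy string split, not a full SPF evaluator):</p>

```python
# Split an SPF TXT record into its mechanisms (toy parser; no validation
# beyond checking the version tag).
def parse_spf(record):
    parts = record.split()
    if parts[0] != "v=spf1":
        raise ValueError("not an SPF record")
    return parts[1:]

print(parse_spf("v=spf1 mx ~all"))  # ['mx', '~all']
# 'mx'   -> servers in the domain's MX records may send mail for it
# '~all' -> softfail any other sender
```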
<p>The DKIM record contains a cryptographic key (replaced above by <code>...</code>). In my case, this key was generated for me by mailcow and is unique for each email domain.</p>
<p>Please note that having the above TXT records does not guarantee that other servers trust you immediately: your emails are still likely to end up in spam folders at first. Using intermediaries like <a href="https://www.mailgun.com/">mailgun</a> can help with avoiding the spam folder. More on that in a later blog post.</p>
<h2 id="References">References</h2>
<p>Mailcow has their own <a href="https://mailcow.github.io/mailcow-dockerized-docs/prerequisite-dns/">recommended DNS records guide</a> which, in conjunction with their admin page, should make setting up DNS records a breeze.</p>
<p><a href="https://www.skelleton.net/2015/03/21/how-to-eliminate-spam-and-protect-your-name-with-dmarc/">This guide</a> has a lot of in-depth information about the SPF, DKIM and DMARC records (TXT records above).</p>
IMPUC #1 · Homelab overview2020-01-05T09:32:21+00:002020-01-05T09:32:21+00:00
Unknown
https://yarmo.eu/blog/homelab-overview/<p>"In My Particular Use Case" (or IMPUC) is a series of short posts describing how I set up my personal homelab, what worked, what failed and which techniques I eventually was able to transfer to an academic setting for my PhD work.</p>
<!--more-->
<h2 id="Why_a_homelab?">Why a homelab?</h2>
<p>I started my homelab about a year after I started my PhD. My academic work was challenging in a technical way, with new data generated every day and raw data, processed data and metadata to manage. I built a number of tools that would aid me daily in my work, but I needed a place to just try out every technology I could possibly need for my job. It eventually turned out that the homelab was destined for far greater things than simply serving as a testbed, but that's how it started and what provided me the knowledge and experience to solve important issues in my academic work.</p>
<h2 id="The_central_server">The central server</h2>
<p>So one day, I ordered myself an Intel NUC with a 5th generation i3 processor, 8 GB of RAM and an m.2 drive, and got started. Container solutions caught my attention before I even had the machine, so I first installed docker and later, docker-compose. This setup hasn't changed a bit to this day, as it still allows me to launch new services very easily by changing a single yaml file with minimal impact on the host machine. The first things I installed were several databases and gitea, a self-hosted git service. The services sit behind a reverse proxy (traefik) so they can be accessed via (sub)domains. Configuration of the machine is managed by a folder of dotfiles backed up in a git repo and <code>stow</code>ed as necessary, but I am currently looking into ansible for this purpose. A 4-bay JBOD USB3 device provides the storage, which the NUC then also (partly) makes available over the local network via smb.</p>
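<p>As an illustration of that "single yaml file" workflow, a service definition in docker-compose might look roughly like this; the paths and domain are placeholders and the router label assumes a traefik v2 style setup, so treat it as a sketch rather than my actual config:</p>

```yaml
# docker-compose.yml (hypothetical excerpt)
version: "3"
services:
  gitea:
    image: gitea/gitea        # the self-hosted git service mentioned above
    restart: unless-stopped
    volumes:
      - ./gitea:/data         # placeholder data path
    labels:
      - "traefik.enable=true"
      # 'git.example.org' is a placeholder (sub)domain
      - "traefik.http.routers.gitea.rule=Host(`git.example.org`)"
```

<p>Adding a new service is then a matter of appending another block like this and running <code>docker-compose up -d</code>.</p>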
<h2 id="The_peripheral_Pi's">The peripheral Pi's</h2>
<p>Floating around the central server are several Raspberry Pis. Back when I first started, the central server would sometimes crash or soft-lock, and since my entire monitoring stack (telegraf+influxdb+grafana) was also installed on it, there was not a whole lot of investigating and fixing I could immediately do. Now, the central server and the Pis all run telegraf, and a single Pi hosts the influxdb+grafana stack and only that. Another Pi acts as a media center (Kodi) and finally, two redundant Pis function as DNS forwarders (Pi-hole), one of which also hosts my VPN solution (wireguard).</p>
<h2 id="The_out-of-house_computing">The out-of-house computing</h2>
<p>I have two permanent VPSes running: a web server (Cloudways) and a mail server (mailcow). Both could be hosted on the central server, but as long as I cannot guarantee a perfectly stable internet connection (which my house does not have) nor stable computing (a personal budget issue), I choose to host these outside of the house.</p>
<h2 id="Final_words">Final words</h2>
<p>Thanks for reading this, more posts will come soon explaining with more depth some of the elements described above. If you have questions, you can find several ways to contact on <a href="https://yarmo.eu">yarmo.eu</a>.</p>
A PhD Post-Mortem2019-12-08T21:02:14+00:002019-12-08T21:02:14+00:00
Unknown
https://yarmo.eu/blog/phd-post-mortem/<p>This is one of those stories that starts with an ending.</p>
<p>As of January 1st 2020, a new challenge awaits me, a new life, because my university/academic journey will be completed. In 2010, I set out on a path that would lead me from a biology bachelor degree to a neuroscience master degree and would culminate in a four-year PhD program, resulting in a thesis and a doctorate degree to end the journey with a bang. From there, the world would have been my oyster.</p>
<p>That journey was traveled as planned.</p>
<p>Except.</p>
<p>After investing a little over four years in my PhD project, I must end this leg of the trip without achieving its ultimate goal, obtaining the doctorate degree, leaving the past nine years open-ended, unrewarded, uncelebrated.</p>
<!--more-->
<p>This post is not vindictive in nature. In the two weeks since making my decision to end my PhD project, mere weeks before the end of the contract, I have had plenty of time to come to terms with the circumstances, to accept what has happened. At the end of the day, I burdened myself with the responsibility of taking on a PhD project, therefore the eventual outcome of said project, however positive or negative, is the product of my actions and my actions alone. Any setback can be met with positive attitude and forward thinking.</p>
<p>This post is also not a cry for attention. That is not who I am. As a matter of fact, the old pre-PhD me would not have written this post. As hard as I try, I find it difficult to figure out what pre-PhD me would have done in this situation, nor will I ever be able to, as I've changed. I have changed in ways I could not have predicted four years ago.</p>
<p>This post is ultimately about opening the discussion on one of the big topics people within higher education do not like to talk about: mental health.</p>
<p>I want to talk about mental health. Though not as severe as with other people I've talked to, I now have first-hand experience in dealing with a sinking ship and have felt the psychological toll that it takes. I can no longer look at fellow graduate students without wondering: are they suffering?</p>
<p>This is my call to action.</p>
<h2 id="The_first_years">The first years</h2>
<p>The project started off pretty slowly, a lot of administrative tasks impeded experimentation. I spent time familiarizing myself with the environment, learned the workings of the existing codebase and other similar tasks. The first major setback was a slow cooker: the protocol describing the activities I was supposed to do needed to be approved by internal committees. This. Took. So. Long. If I had known then that the approval would come after over two and a half full years...</p>
<p>While waiting for approval of my protocol, I spent time learning the experiments, repeating them over and over so that when it would matter, I could eliminate myself as a factor of uncertainty. This process was far less innocent than it seemed. While my protocol was set in a scientific frame and had set goals, the experiments I was performing at first had no higher purpose other than to serve as practice. As I grew more comfortable with the experiment and the protocol approval continued to be delayed, I, along with my supervisors, started looking at small results and enjoying minor victories. Sure, I wasn't yet allowed to do what I desperately wanted to, but a number of experiments showed an unexpected and promising effect and I spent a few more experiments trying to understand it.</p>
<p>At the time, this felt exciting. I discovered stuff!</p>
<p>That's not what happened. These findings were not sought, they were stumbled upon. And after a couple more experiments, all failing to further our understanding of the observations, the ideas were dropped as soon as a new effect was found, and more time was sunk into trying to figure that one out.</p>
<p>And after that observation led to nothing, there was another one.</p>
<p>And another one.</p>
<p>I failed to see what was happening: I was chasing shiny things. The reason there is a protocol is to keep you focused. It provides a framework in which you work. It asks a question that will be satisfied by any answer, as long as this answer is obtained using the methodology prescribed by the protocol.</p>
<p>The solution would have been simple: write a different protocol with a simpler question. Set the framework. Gather the evidence.</p>
<p>I did not have this wisdom.</p>
<h2 id="The_latter_years">The latter years</h2>
<p>I remember the day I received the email approving the protocol. I am not able to describe the feeling of futility that overcame me. There I was, now allowed to do the things I came here to do, but knowing full well there was no longer time to set up the experiments. My project was set to answer a forty-year-old question. Instead, I was chasing new shiny things, grasping for any finding that could provide meaning to my presence.</p>
<p>For a while, the project seemed on track: I spent almost a year and a half investigating an interesting effect I had observed, a shiny new thing more promising than all that came before. Yet a shiny new thing nonetheless. Bad luck struck again: the interesting effect turned out to be an experimental artifact, and though I have not been able to confirm this with 100% certainty, those nine months' worth of data were thrown in the bin.</p>
<p>As a little disclaimer: we had been suspicious it could have been an artifact but there was, and still is, no proper way to test this.</p>
<p>Anyhow, in the third year, amidst all of this chaos, the effects of stress and impending doom were starting to take their toll. Fun routines became less fun. Joyful events inspired less joy. I became more isolated, first in the working environment, later in my personal life.</p>
<p>It was during one of the more difficult periods of my PhD project that the happiest event happened: meeting my girlfriend. I had felt particularly down in the days leading up to when we bumped into each other, and meeting her had an immediate positive effect on me. It gave me a reason to wake up and do something, and it gave me the spirit to keep the fight going, finish the project and claim the reward.</p>
<p>It was only a matter of time before stress caught up again. Experiments failed, stuff got delayed. The question shifted from "what is needed to finish this project" to "is there even a way to finish it at all". The last year was a race against the clock and against the requirements for handing in a thesis. Published articles were needed. More experiments were needed. Did we just throw away nine months of data? Cool, let's replace it with something new. Frustrations between my supervisors and me grew and created more stress. Weekends were spent sitting on the couch, arranged to avoid all mental activity. Hobbies vanished. I was no longer the same cheerful person to be around, and though I will always admire the strength of my then-girlfriend to put up with what she put up with, that relationship unfortunately also did not outlast the strain the project put on me.</p>
<p>A little over a month before the end of my contract, after four years and two months of work, I had to call it. The project was dead. There were still escape routes to make something out of the project but I had to decline. My body declined. My brain declined.</p>
<p>This was about two weeks ago, and I am only just writing this now as I've spent almost the entire time incapable of thinking. I had to accept what was happening. I had to make peace with the fact that I had spent my entire being on this project, that it had cost me happiness, laughter and a relationship, and that, all in all, it was only to quit right before the end.</p>
<h2 id="Science_and_me">Science and me</h2>
<p>The scientific world is not for me. Perhaps a different project could have stimulated me in a better way and inspired me to become the scientist I always dreamed of becoming. But somewhere in the last four years, I realized this was not the dream for me. Knowing that obtaining a PhD was no longer vital to my career, I persisted in my efforts to finish the job, as it was my way of concluding my nine years of studying biology, neuroscience and how to become a scientist in a satisfactory manner.</p>
<h2 id="Mental_health_and_me">Mental health and me</h2>
<p>I wanted to finish the PhD just for me, just for my own satisfaction, but in the end, I too was the reason I could not. I was becoming more susceptible to seasonal illnesses every year; the first few days of holidays were spent working through a persistent headache. I no longer spent time on my hobbies, and less and less time behind my piano. Programming, my true passion, had become a chore. And perhaps worst of all, the love for science was gone.</p>
<h2 id="Final_words">Final words</h2>
<p>So, there it is. With the contract ending in a few more weeks, I am now working to leave the project in a suitable state, as well as writing one scientific article for publication. I tried to avoid thinking too much about the future as it would only distract me from my already attention-demanding present, but the time for planning has now come. Leaving a whole lot behind, I have so much to gain.</p>
<p>But I will not simply forget about it all. I can not. The environment I worked in during the last few years did not inspire me to come forward about what was happening in my head. Much like a real-life Instagram, the scientific community celebrates success and disguises, or even denies, failure. Words embellish mistakes and misfortune. Posters filled with colorful graphs hide the hardships and perils experienced by an entire generation of upcoming scientists trying to make it in a world that does not welcome them. It inspires fraudulent behavior, judging by the stories I've heard, from PhDs to PIs, from students to technicians.</p>
<p>Come forward. Talk, and you will find people to listen. I know there is an entire population out there of people suffering through their academic career. You are not alone. Let's discuss this. I would like to talk to you.</p>
<p>#mentalhealth</p>
<p>The fediverse is a social network promoting free speech and provides a safe environment to find people in similar situations and have meaningful conversations. You'll find me there, <a href="https://fosstodon.org/@yarmo">@yarmo@fosstodon.org</a>. Let's talk.</p>