Now I joke here about starting with stuff that I should be careful not to lose, but I have learned the importance of backups, so I have a copy of this server’s file system (it’s not really a backup until you’ve tested a restore in anger, and I have been rather lax there). There are also good reasons for doing this server first. It is at home, so if I do screw it up I can easily get hold of the hardware. I run NextCloud, and while the version I was on was still supported, the next version has been out for a little while and does not support running on PHP 7.4, which is the default in Bullseye, the version of Debian the server was running. Helpfully, Bookworm comes with the much newer PHP 8.2. So I updated the sources lists and did a dist-upgrade.
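For reference, the upgrade itself is only a few commands. A sketch, assuming a stock /etc/apt/sources.list (back it up first, and check any files under sources.list.d/ too):

```shell
# point apt at the new release
sudo sed -i 's/bullseye/bookworm/g' /etc/apt/sources.list
sudo apt update
sudo apt upgrade        # upgrade packages that need no removals first
sudo apt full-upgrade   # the dist-upgrade step proper
```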
The first thing to note is that I had installed a number of PHP packages directly rather than through the virtual packages (php7.4-something instead of php-something), so a number of PHP libraries were removed because the 7.4 versions were not compatible with PHP 8.2. This was both easily fixed and entirely my own fault. Once I had dealt with that, all of the applications appeared to start up fine and function as expected. All of them except NextCloud, of course. It turns out version 25 of NextCloud doesn’t support running on PHP 8.2, which is a bit of a hiccup for me. So once again I have done something that I should be perfectly capable of doing, as I literally do this sort of thing for a living, and have been caught out by my own lack of preparation. On the plus side, the manual upgrade of NextCloud was very straightforward, but I really should have known it was needed going in.
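If you are in the same position, the version-pinned packages are easy to spot and to re-install via the virtual packages. A sketch (the package names are examples, not a list of what I actually had installed):

```shell
# list the version-pinned PHP packages — these are what gets removed
dpkg -l 'php7.4-*' | awk '/^ii/ {print $2}'
# reinstall through the version-neutral virtual packages instead
sudo apt install php-mysql php-xml php-curl php-gd
```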
So the workflow as it is involves the private git repository, which contains my site as a Jekyll site. This is checked out on my laptop, and I make edits using vim. I use Jekyll to serve a version of this site on localhost whilst I am making changes, to ensure I am happy with how it looks (well, as happy as I can be given the design; I still need to work on that). These changes are then built to a directory that is a copy of the public repository, also checked out on my laptop, but on a non-default branch. I then commit these changes, push them to my git server, and raise a pull request. Once the pull request is merged the changes are pushed to the servers. Now I recognise that this is a slightly clunky workflow, and I could probably improve it with a little effort. But it works for me, on Linux, which I am used to. Now that my laptop is broken (actually I’ve fixed it, but the fix is temporary at best) I should probably get this workflow working somewhere usable.
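Concretely, the loop looks something like this. A sketch only — the directory and branch names here are assumptions, not my actual layout:

```shell
# preview the site locally while editing
bundle exec jekyll serve                 # serves on http://localhost:4000

# build into the checkout of the public repo, on a non-default branch
bundle exec jekyll build --destination ../public-site
cd ../public-site
git checkout -b updates
git add -A && git commit -m 'Site updates'
git push origin updates                  # then raise the pull request
```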
So I have a Windows 10 desktop computer with plenty of system resources, and outside of the command line a GUI is a GUI: I can work reasonably comfortably in KDE or Windows without too much mental effort to switch (subject to the differences between the software packages in use). So all I really need for this workflow is a browser (I favour Firefox, which I use on both Windows and Debian) and a Linux command line. Windows 10 has a feature called the Windows Subsystem for Linux (WSL). I already had a basic Debian install set up there, but I had only really used it to ssh to my servers. Well, now is the time to install Ruby, Jekyll, git, and vim (oh, and tmux, but that is less important for this workflow).
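Getting the tools into the WSL Debian install is the same as on any Debian box. A sketch, assuming Ruby from the Debian packages and a per-user gem install:

```shell
sudo apt install ruby-full build-essential zlib1g-dev git vim tmux
# install jekyll for the current user rather than system-wide
gem install --user-install bundler jekyll
```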
Well, it turns out to work quite well. Running Jekyll in WSL allows you to access the site from Firefox in Windows. I have written this post on my Windows desktop, which has two high resolution monitors, so I move windows to the background less often. It’s still the same clunky workflow as it was before, just slightly less clunky. I should have tried this ages ago. I still need to configure a few things: vim’s spell checker on my desktop, and tmux. But otherwise everything works quite well.
If I’m honest with myself, I’m probably done with this. I’ve put together something that proves What Three Words doesn’t need to exist. If I’m less honest with myself, there is still some work to do. I still need to apply a license; unfortunately my employer paid for some of the time I spent building this, so I need their permission for any license I might choose. I also need to write a README that gives some useful information, and to get the app to do more useful logging (any logging, really).
The upgrade ran smoothly, the various services all appeared to start up fine, and everything appeared to work. So I played about with the Apache TLS settings and found settings that got me an A+ on the SSL Labs test again. No databases were lost or locked out, and I had backups should the worst come to pass. However, later that day I noticed I hadn’t had any emails, including the ones I normally get from Linode to say my servers had restarted. So what had I done wrong? Well, in my haste I had not read all the release notes, or checked that there were no errors. Postfix was running, so I should be getting emails, right? Well, maybe, maybe not. It turns out that with Postfix relying on Dovecot to identify users, and Dovecot not accepting secure connections, Postfix may or may not have been accepting emails. I don’t actually know; I didn’t check once I’d fixed things and got my emails (including ones that I expected to receive earlier). What was Dovecot’s issue? The upgrade had changed some of the settings that were needed, related to TLS settings in no small irony. So I needed to make a quick change to those, but what I also needed was a set of DH parameters that Dovecot didn’t think were too weak. This was not a fast thing to fix. The command
openssl dhparam 4096
generates a secure set of DH parameters, but takes a long time to run. So having generated those parameters and put them in a file for Dovecot to read (twice, because I have two mail servers), everything is working again.
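For anyone hitting the same thing, the relevant settings live in Dovecot’s SSL config. A sketch, assuming Dovecot 2.3’s setting names (the paths are illustrative):

```
# /etc/dovecot/conf.d/10-ssl.conf
ssl = required
ssl_min_protocol = TLSv1.2
# the '<' tells Dovecot to read the DH parameters from the file
ssl_dh = </etc/dovecot/dh.pem
```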
At some point I’m going to learn to do these upgrades in a more professional manner; after all, it is literally what I get paid to do day to day. Fortunately I do my job properly when I’m getting paid for it; I just hope my employer realises that if they read my blog.
I still need to review the TLS settings for Dovecot (properly this time), Postfix and Stunnel. Postfix is going to be more complicated as it listens on two separate ports that have different requirements. But more on that when I actually do the work (if I have anything worth adding).
So what I have achieved is a rudimentary LDAP authentication system and session management, with certain actions only allowed for valid sessions. More of the work on this was getting familiar again with OpenLDAP than I’d like to admit, but that is outside the scope of what I want to talk about with the app. So I don’t actually have much more to say. The next step is likely trying to get OAuth working as an authentication method, and allowing the config to select between them. Then I need to write a README, and select a license to offer this code under.
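As an illustration of the session side, the check itself can be as simple as a token lookup before any protected action. This is a minimal sketch of the pattern, not the app’s actual code; the names are illustrative, and a real store would also need expiry:

```go
package main

import (
	"fmt"
	"sync"
)

// sessions maps a session token to its validity; a real store
// would also track expiry times.
var (
	mu       sync.Mutex
	sessions = map[string]bool{}
)

// createSession records a token as valid, e.g. after a successful LDAP bind.
func createSession(token string) {
	mu.Lock()
	defer mu.Unlock()
	sessions[token] = true
}

// validSession reports whether a token belongs to a live session;
// protected actions call this before doing anything.
func validSession(token string) bool {
	mu.Lock()
	defer mu.Unlock()
	return sessions[token]
}

func main() {
	createSession("abc123")
	fmt.Println(validSession("abc123")) // true
	fmt.Println(validSession("nope"))   // false
}
```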
I also probably need some UX help.
I'd try crypto/rand. There seems to be a debate about having to create the seed for math/rand but apparently that's not something they are planning on fixing as it's not meant for true randomness.
— liam sorsby (@liamsorsby) September 2, 2019
I was also mocked by a colleague for not having anything ready to use yet, so I’ve uploaded what I have so far. Feel free to use it, Ols. But do keep in mind this is still a work in progress. The locations are now stored for 30 minutes, as are the keys.
There’s quite a bit I still need to do before I would consider this to be even close to a useful product.
But for now this is what I have, and if I look back at my initial blog post announcing this project, the title included the rather arrogant question “How hard can it be?”. So far, actually, not very.
It shouldn’t come as a huge surprise that it is a deterministic pseudo-random number generator, and you are required to provide a varying seed to get different results. Computers are pretty bad at randomness generally. Most of the advice I have found suggests using the current time (to nanosecond precision) as the seed, and most of that advice comes with the caveat that it works but isn’t suitable for cryptographic functions. So I have faith that the people giving the advice are giving advice that is good for most use cases a Go novice is likely to have, unless that use case is more nuanced, or related to security. Unfortunately for me, as a complete novice to Go, my use case is somewhat more nuanced, and is related to security.
My issue is that I need two sets of random numbers (to generate random strings) that need to be unrelated. The current time could be suitable for the ID, as it being predictable isn’t the end of the world, but the key needs to be more secure, and needs to be generated close enough to the same time as the ID to make using the current time for both of them unworkable. Now I intend to host this app on Linux, and Linux keeps a good source of what we can consider essentially random data, which can be read from /dev/random on most distributions. But reads from /dev/random block when the pool of randomness is empty, and on a server the sources of random data that can fill that pool can be limited. Now there is a solution: /dev/urandom takes some of the randomness from the pool as a seed to a pseudo-random number generator, which keeps generating random-looking numbers for as long as you need. So while the random pool has data /dev/urandom is also pretty good as a randomness source, but when it’s empty it’s less good. So I have a choice: do I make my app slow when there isn’t enough random data, or do I allow it to be less secure? I’m probably going to go with less secure, as the data isn’t designed to be long-lived enough to allow unlimited time to attack it, and because the data is only useful with context that should be external to the app (I hope). But this is a trade-off that needs to be thought about whenever you rely on randomness in computer programs.
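In Go the usual way to take that trade-off is crypto/rand, which reads from the operating system’s randomness source and needs no seeding at all. A sketch of generating an unpredictable string with it — the alphabet and lengths here are illustrative, not the app’s actual values:

```go
package main

import (
	"crypto/rand"
	"fmt"
)

const alphabet = "abcdefghijklmnopqrstuvwxyz0123456789"

// randomString returns n characters drawn from alphabet, using the
// operating system's CSPRNG via crypto/rand.
func randomString(n int) (string, error) {
	buf := make([]byte, n)
	if _, err := rand.Read(buf); err != nil {
		return "", err
	}
	for i, b := range buf {
		buf[i] = alphabet[int(b)%len(alphabet)]
	}
	return string(buf), nil
}

func main() {
	id, _ := randomString(8)   // predictable-ish is tolerable here
	key, _ := randomString(32) // this one must be unguessable
	fmt.Println(id, key)
}
```

(Note the modulo introduces a slight bias with a 36-character alphabet; negligible for an ID, worth eliminating properly for a key.)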
So the current app allows someone to look at the currently shared location by visiting this link; if that location is empty the page will try again every 5 seconds. You can set that location for 30 seconds (I’ll make that longer when the ID and key aren’t hard-coded to be “test”) by visiting this link. Do be aware, though, that there is currently no protection from anyone seeing that location. The location is stored in redis, with a separate key for latitude and longitude. There are a number of different ways I could have done this, but this one was simple. I chose redis because it stores simple key-value pairs well, and for what I am doing nothing more complicated is needed.
Next steps are, as I already said, creating the random IDs, and setting a random key to validate them. But I will also need a way to limit who can do so, and a way to make it easy to share the link for setting the location, while also opening a page to see that location when it is set.
One of the things I had to do before putting this code on the internet was to ensure I didn’t pass any user-generated data to functions outside the app in an unsafe manner. Fortunately, because Go is typed, the latitude and longitude will generate an error if you try to set them as anything that isn’t a number, so the scope for abusing them isn’t great. But the ID and key are strings, and strings that I need to pass through to the redis instance in the background, so these need to be sanitized. I’m relying on the function that checks the ID is valid to do that, as it will fail in a way that prevents further use of the strings if they are not valid. Now, obviously that check is currently that they both match the string “test” exactly, but as that section of the function is temporary I have included a regex before it to check they’re not massively long and don’t contain any funky characters. Those checks are currently overly restrictive, and will be loosened up eventually, but better to start overly restrictive and allow more as you find it safe, than to start overly permissive and let someone hack you. Especially as the source code is readily available.
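The regex check is the allow-list pattern: rather than trying to strip bad characters, reject anything that isn’t explicitly permitted. A sketch — the actual pattern and limits in the app may differ:

```go
package main

import (
	"fmt"
	"regexp"
)

// validToken permits only short alphanumeric strings, so nothing
// funky can reach the redis layer.
var validToken = regexp.MustCompile(`^[a-zA-Z0-9]{1,32}$`)

func main() {
	fmt.Println(validToken.MatchString("test"))        // true
	fmt.Println(validToken.MatchString("x; FLUSHALL")) // false
}
```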
ProxyPass / http://localhost:8080/
ProxyPassReverse / http://localhost:8080/
We then need to create a user to run the app; we want an unprivileged user
sudo adduser --system \
  --shell /bin/false \
  --gecos 'Location Finder App' \
  --group \
  --disabled-password \
  --home /opt/location location
With that we need to create a systemd unit file (I’ve added an example systemd file to the app repo) and move a built copy of the binary onto the system, along with the HTML template file. And that is what I have done.
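The unit file itself is short; a sketch along the lines of the example in the repo (the names and paths here are assumptions — check the repo’s copy for the real thing):

```
[Unit]
Description=Location Finder App
After=network.target

[Service]
User=location
Group=location
WorkingDirectory=/opt/location
ExecStart=/opt/location/location
Restart=on-failure

[Install]
WantedBy=multi-user.target
```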
The next steps will be more complicated. I need to turn this into something useful, and for that I’m going to need some server-side storage, an API that takes the location and stores it, and an API that reads it.