<p>D. Moonfire (<a href="https://d.moonfire.us/">https://d.moonfire.us/</a>), Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International</p>
<h1>NixOS and NextCloud</h1>
<p><em>2024-03-03, <a href="https://d.moonfire.us/blog/2024/03/03/nixos-and-nextcloud/">https://d.moonfire.us/blog/2024/03/03/nixos-and-nextcloud/</a></em></p>
<p>Just a few tips on getting NextCloud 28 working with NixOS, plus a copy of my <code>.nix</code> file that I got working.</p>
<p>I spent a few hours upgrading my <a href="/tags/nextcloud/">NextCloud</a> server from
version 26 to 28. Sadly, it was not a pleasant process, mainly because of
conflicting index changes: the migration code kept failing to create an index
on <code>oc_table_name</code> because an index with the same name already existed on
<code>table_name</code>. This is probably because I have a relatively old NextCloud
setup that has been migrated from OwnCloud over the years and didn't use table
prefixes.</p>
<p>Unfortunately, that meant I ended up having to do a lot of:</p>
<ul>
<li><code>DROP INDEX index_name ON table_name;</code> in <code>mysql</code></li>
<li><code>systemctl restart nextcloud-setup.service || systemctl status nextcloud-setup.service</code></li>
<li>Look at the error, go back to the beginning with the new name</li>
</ul>
<p>That took about an hour to get through, and it pretty much points out that I
also am not using the proper indexes, because half of those tables are empty and
the other half isn't. However, <code>table_name</code> in the above example was sometimes
the prefixed version (<code>oc_table_name</code>) and other times the non-prefixed one
(<code>table_name</code>), so that is just something I'm going to live with.</p>
<p>Once I finished, it complained about missing indexes, so I had to run this:</p>
<pre><code class="language-shell">nextcloud-occ db:add-missing-indices
</code></pre>
<p>That probably put all the indexes I had dropped, and the ones the migrations
had created on the wrong tables, back on the tables that are actually used.</p>
<p>Finally, there were still a few other errors about “interned_strings_buffer”
and memcached not working. I figured some of these were related to me losing my
database instance on my hosting provider, which meant I moved the database from
a dedicated server to the same one as the NextCloud server.</p>
<p>Below is the Nix file I use to set up NextCloud so it doesn't give me any
warnings and has backups properly set up.</p>
<pre><code class="language-nix"># nextcloud.nix
{ config
, pkgs
, ...
}:
let
  host = "nextcloud.example.com";
  backup-name = "restic";
in
{
  # Set up the user in case you need consistent UIDs and GIDs, and also to make
  # sure we can write out the secrets file with the proper permissions.
  users.groups.nextcloud = { };
  users.users.nextcloud = {
    isSystemUser = true;
    group = "nextcloud";
  };

  # Set up backing up the database automatically. The dump will be in
  # `/var/backups/mysql/nextcloud.gz`.
  services.mysqlBackup.databases = [ "nextcloud" ];

  # Restic is already set up to back up the mysql directory, but
  # we also set it up to back up the data.
  services.restic.backups.${backup-name}.paths = [ "/var/lib/nextcloud/data" ];

  # Set up secrets. This is a sops-nix file checked in at the same folder as
  # this file.
  sops.secrets = {
    nextcloud-admin-password = {
      sopsFile = ./secrets.yaml;
      mode = "0600";
      owner = "nextcloud";
      group = "nextcloud";
    };
    nextcloud-db-password = {
      sopsFile = ./secrets.yaml;
      mode = "0600";
      owner = "nextcloud";
      group = "nextcloud";
    };
    nextcloud-secrets = {
      sopsFile = ./secrets.yaml;
      mode = "0600";
      owner = "nextcloud";
      group = "nextcloud";
    };
  };

  # Set up Nextcloud.
  services.nextcloud = {
    enable = true;
    package = pkgs.nextcloud28;
    https = true;
    hostName = host;
    secretFile = "/var/run/secrets/nextcloud-secrets";
    phpOptions."opcache.interned_strings_buffer" = "13";

    config = {
      dbtype = "mysql";
      dbname = "nextcloud";
      dbhost = "localhost";
      dbpassFile = "/var/run/secrets/nextcloud-db-password";
      adminuser = "admin";
      adminpassFile = "/var/run/secrets/nextcloud-admin-password";
    };

    settings = {
      maintenance_window_start = 2; # 02:00
      default_phone_region = "en";
      filelocking.enabled = true;

      redis = {
        host = config.services.redis.servers.nextcloud.bind;
        port = config.services.redis.servers.nextcloud.port;
        dbindex = 0;
        timeout = 1.5;
      };
    };

    caching = {
      redis = true;
      memcached = true;
    };
  };

  # Set up Redis because the admin page was complaining about it.
  # https://discourse.nixos.org/t/nextlcoud-with-redis-distributed-cashing-and-file-locking/25321/3
  services.redis.servers.nextcloud = {
    enable = true;
    bind = "::1";
    port = 6379;
  };

  # Set up Nginx because we have multiple services on this server.
  services.nginx = {
    enable = true;
    virtualHosts."${host}" = {
      forceSSL = true;
      enableACME = true;
    };
  };

  networking.firewall.allowedTCPPorts = [ 80 443 ];
}
</code></pre>
<p>Hopefully, this will help others if they encounter the same problems I did,
since I couldn't find these answers through web searches.</p>
<h1>New Site Colors</h1>
<p><em>2024-02-24, <a href="https://d.moonfire.us/blog/2024/02/24/new-site-colors/">https://d.moonfire.us/blog/2024/02/24/new-site-colors/</a></em></p>
<p>I decided to rework the colors of my websites, both for d.moonfire.us and future work on fedran.com. Along the way, I made a library to package the theme work and got to play with CSS variables.</p>
<p>I should have been working on something else today, but a series of events led me down a path of reworking how I implemented my themes here on <a href="https://d.moonfire.us/">https://d.moonfire.us/</a> and <a href="https://fedran.com/">https://fedran.com/</a>.</p>
<p><em>I have not updated the color themes on Fedran at this point.</em></p>
<p>Specifically, it was someone mentioning that they didn't like my dark theme on their recently announced blogroll. Now, there is absolutely nothing wrong with someone not liking my sense of color or even how I code websites, but a desire to change the colors had been sitting in the back of my head for a few months. That meant the commentary reminded me of my own desire to change it, and that led to my fixating on redoing the colors.</p>
<h2>Color Preferences</h2>
<p>One of the problems is that I have a fondness for mono- and bichromatic color themes. You can see that on my covers, but I also realize that it creates a rather plain (i.e., uninteresting) site theme. I started to really notice it once I started using <a href="https://github.com/catppuccin/catppuccin">Catppuccin</a> as one of my most common UI themes. There were colors on the screen! Multiples of them!</p>
<p>And I started wanting a few distinct colors of my own.</p>
<p>At the same time, I continue to struggle with contrast. When I was younger (as a teenager and in my 20s), I loved drawing on quadrille graph paper. But somewhere in my 30s, I started having trouble because the lines were making it hard for me to see my writing. So I went on a few adventures to find lighter lines and eventually switched to dotted grid pages. Then in my 40s, even those became too dark and I had to switch to the same thing my father used: plain white paper.</p>
<h2>POV Colors</h2>
<p>Related to this is my decision that every point of view in Fedran had a different color. I like that idea, mainly because I <em>love</em> colors, I'm just not great with seeing differences. This is why <a href="https://fedran.com/sand-and-blood/">Sand and Blood</a> has a different color set than <a href="https://fedran.com/allegro/">Allegro</a>.</p>
<p>Each POV has a color hue and everything else is built off of that using <a href="https://atmos.style/blog/lch-color-space">LCH colors</a>. When I switched the covers to be bichromatic, I made the second color always be the contrast (rotate 180 degrees) from the primary color.</p>
<p>Easy huh?</p>
<p>However, this entire exercise pointed out that most people can't tell the difference between, say, a 220 and a 225 hue, so I really should spread out the colors a lot more. Even if I use a six-degree difference in hues, that would limit me to sixty POVs. (Of course, since I'm still somewhat burned out, sixty is a lot right now.)</p>
<p>So, I'm going to change it so each POV color is three colors: the primary color I had before and then two accent colors (the first of which will be the contrast color for the existing assigned colors). I'm not quite ready for the full gamut of colors, but maybe increasing them to three would give me more than enough space and still cater to my reduced color palette.</p>
<p>It also harkens back to CGA colors, which were the first colors I saw on computers.</p>
<h2>Segmenting Colors</h2>
<p>The general idea was to break the color space into ten segments, numbered from zero to nine. The first color (<code>c0</code>) would be based on the hue and the others would each be 36° from the previous so they are evenly spread across the entire range of hues.</p>
<p><img src="./color-grid-220.png" class="block" alt="Color Grid for 220°" /></p>
<p>If the hue changes (like the POV hues), then everything else rotates to fit.</p>
<p><img src="./color-grid-20.png" class="block" alt="Color Grid for 20°" /></p>
<p>As you can see in the above images, there are also ten ranges of brightness (a combination of saturation and lightness) that go from zero to nine, with zero being a not-quite-black version of the color and nine being a not-quite-white version.</p>
<p>This effectively gives me a wide range of colors that aren't very close to each other but still a small enough set that I'm not struggling with the difference between 218° and 223°.</p>
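<p>The rotation above can be sketched numerically. This is a hypothetical helper for illustration, not code from my actual library:</p>
<pre><code class="language-python"># Return the ten evenly spaced hues, c0 through c9, for a base hue.
# c0 is the base itself; each later segment rotates another 36 degrees,
# wrapping around the 360-degree hue circle.
def segment_hues(base_hue):
    return [(base_hue + 36 * n) % 360 for n in range(10)]


print(segment_hues(220))  # c1 is 256, c5 is the 180-degree contrast at 40
</code></pre>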
<h2>CSS</h2>
<p>Because I've really embraced CSS variables instead of going directly to SASS or LESS, this entire thing uses various CSS variables such as <code>--color-priduck-c0b0</code> and <code>--color-priduck-c9b9</code>. They use <code>calc()</code> and <code>lch()</code> to build the colors from the base hue of <code>--color-priduck-hue</code>.</p>
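<p>As a sketch of what that looks like (the variable names follow my naming scheme, but the lightness and chroma numbers here are made-up placeholders rather than my actual theme values):</p>
<pre><code class="language-css">:root {
  --color-priduck-hue: 220;
}

:root {
  /* c0 uses the base hue; each later column rotates another 36 degrees. */
  --color-priduck-c0b0: lch(10% 20 var(--color-priduck-hue));
  --color-priduck-c0b9: lch(95% 10 var(--color-priduck-hue));
  --color-priduck-c1b5: lch(55% 40 calc(var(--color-priduck-hue) + 36));
}
</code></pre>
<p>Changing the single <code>--color-priduck-hue</code> value then rotates every derived color at once.</p>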
<p>Eventually, I hope to create some symbolic ones like <code>--color-priduck-comment</code> or something that would allow customizations.</p>
<p>This will also lead into the <code>rgb()</code> colors that I need to do the Fedran covers.</p>
<h2>Libraries</h2>
<p>I ended up writing two little NPM packages to support these and let me try it out. These are not even remotely documented because… I would consider them alpha until I get Fedran integrated with them. By then, I should have something I feel is “right” but also expand it out into the other components I need such as the color-rotation code to make the covers and other stuff.</p>
<p>Needless to say, if I like this, it will take me about six months to a year to finish integrating it into my system. Which means I want to make sure this is the direction I want to go.</p>
<p>I also decided to call this the Priduck Color Theme, because “priduck” sounded funny, doesn't have a lot of search hits, and it reminds me a lot of the <a href="https://www.youtube.com/channel/UC1KLPSDD6JT-PptihEqAJ2Q">Useless Duck Company</a>, a set of hilarious shorts about inventing.</p>
<h3>@priduck-color-theme/base for Node</h3>
<p><a href="https://src.mfgames.com/priduck-color-theme/priduck-color-theme-base-js">https://src.mfgames.com/priduck-color-theme/priduck-color-theme-base-js</a></p>
<p>Naturally, trying to build all of these by hand was a little tedious, so I wrote an ad-hoc program to generate the color schemes using CSS variables. Then I put it into an NPM package because I don't like copy/pasting things when I know I'm going to be tweaking it over time (including adding symbolic colors so I can have default colors for language keywords, surfaces, and accents).</p>
<p>This lets me just include it into the CSS files as needed.</p>
<pre><code class="language-css">@import "~@priduck-color-theme/base/colors.css";
:root {
--color-priduck-hue: 220;
}
</code></pre>
<h3>@priduck-color-theme/theme for Node</h3>
<p><a href="https://src.mfgames.com/priduck-color-theme/priduck-color-theme-theme-js">https://src.mfgames.com/priduck-color-theme/priduck-color-theme-theme-js</a></p>
<p>I also wrote another package because there isn't a good way to write a DRY version of CSS themes that can handle default values, the browser providing <code>prefers-color-scheme</code> and <code>prefers-contrast</code>, and setting them via attributes in various combinations. So I wanted to be able to provide six files (dark/light and more/less contrast for both of those).</p>
<h1>Re: Unsolicited opinions about CLI design</h1>
<p><em>2024-01-06, <a href="https://d.moonfire.us/blog/2024/01/06/re-opinions-on-clis/">https://d.moonfire.us/blog/2024/01/06/re-opinions-on-clis/</a></em></p>
<p>Over on Gemini, there was a recent post about CLI design by Lark that caught my attention this morning, and I wanted to add my two cents.</p>
<p>This week is one of those “little things” weeks where I get to do fun things, work on the little broken things around the house, and just relax. It also means I get more verbose and start doing blog posts, because why not?</p>
<p>Over on Gemini, there was a <a href="gemini://lark.gay/posts/cli-opinions.gmi">recent post about CLI design</a> by Lark that caught my attention this morning. Well, and one about last names, but that is a much different topic.</p>
<p>Coincidentally, I just had a long conversation with one of my developers about our semi-annual goals. They wanted to document our primary CLI and asked for my opinion on their tasks. Ultimately, I suggested that their idea of creating a document that lists every option would be ultimately useless, while expanding on the help from inside the CLI would be beneficial.</p>
<p>This also gives me the impetus to talk about some of my own evolving opinions about CLI design.</p>
<h2>External Documentation</h2>
<p>I'll start with external documentation. Our application, call it <code>bob</code>, is a command-line tool that reuses the same business logic as our front end and services. It is intended to be a “big” one, which means it has nested commands for individual tasks.</p>
<p>Many of those tasks are written because we need to solve a problem Right Now™. Others exist because upper management has, in their infinite wisdom, separated our DBAs into a separate team which handles all the database woes of our company, and we no longer have the dedicated individual that we've enjoyed for thirteen years. Since there is now a 1-3 hour delay on getting a DBA, they have to follow a strict system of access, and I have to puppet them (i.e., take over their screen and type, because I can't have direct access and they don't know the system or type quickly), I've been writing tools for things that I can do programmatically through our normal users.</p>
<p>This means that any external documentation is going to go stale. In a perfect world, I would write formal documentation as I code, but I also make a point of documenting every argument and command as I go, so I'd rather have <code>bob</code> handle the documentation instead of also trying to update our internal wiki, which customers don't have access to despite the fact that we ship <code>bob</code> with our product (for the same reasons I need it).</p>
<h2>No Arguments</h2>
<p>I completely agree about the single-dash <code>-long</code> argument names that <code>find</code> and PowerShell use: they annoy the hell out of me. I also hate the <code>/v</code> style of other Windows programs.</p>
<p>Same with exit codes. Distinct codes are awesome, when they are used properly. I'm looking at you, MadCap Flare.</p>
<h2>Discovering</h2>
<p>I think it would be obvious, but <code>bob</code> is inspired by <code>git</code> and <code>az</code> (though leaning more toward Azure's CLI). It uses nested verbs, and they can go down reasonably deep.</p>
<pre><code>az account list
bob direct property calculate
</code></pre>
<p>In both cases, I want the container verbs (<code>az account</code>, <code>bob direct</code>, <code>bob direct property</code>) to provide a short help, maybe some examples, rather than perform a default behavior. This is because people who don't know the system need to be able to explore, but also need to know that there is more.</p>
<p>This is in contrast to <code>git remote</code>, which lists the remotes, while <code>git remote add</code> adds remotes. There is no <code>git remote list</code> or <code>git remote ls</code>, so it is hard to discover what the tool can do because there is no obvious indicator that there are more nested verbs.</p>
<p>The Gitea CLI, <code>tea</code>, also does this but in a slightly different manner. <code>tea repo</code> and <code>tea repo ls</code> both list repositories, while <code>tea repo create</code> will create a new one. That means if someone enters <code>tea repo</code>, there is nothing to indicate that create exists, but at least there is a way to explicitly list repositories.</p>
<p>Since I value exploration and curiosity, I strongly lean toward container verbs being for discovery, not functionality. Ideally, a command that just shows help should also have a dedicated exit code (253 in most of my programs) for “showed help”.</p>
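<p>A minimal sketch of that convention, using Python's <code>argparse</code> (the verb names and the 253 exit code follow my own convention here; this is not the real <code>bob</code>):</p>
<pre><code class="language-python">import argparse
import sys

EXIT_SHOWED_HELP = 253  # dedicated code for "showed help, did no work"


def main(argv):
    parser = argparse.ArgumentParser(prog="bob")
    sub = parser.add_subparsers(dest="verb")
    sub.add_parser("direct", help="commands that go straight to the database")

    args = parser.parse_args(argv)
    if args.verb is None:
        # Container verb: show help for discovery instead of doing work,
        # and signal that with the dedicated exit code.
        parser.print_help()
        return EXIT_SHOWED_HELP
    return 0


if __name__ == "__main__":
    sys.exit(main(sys.argv[1:]))
</code></pre>
<p>A calling script can then tell “printed help, did nothing” apart from an actual success by checking for 253.</p>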
<h3>Breaking Changes</h3>
<p>The problem with the above statement is when a command needs to be broken into separate bits. For example, we used to have <code>bob direct export</code> to export data directly from the database (our <code>direct</code> commands go straight to the DB, our <code>api</code> commands use the OpenAPI layer). However, we split export into exporting data and exporting tables (<code>bob direct export table</code>).</p>
<p>What about the existing scripts that made assumptions about <code>bob direct export</code>? I don't want to put a versioning layer on our CLI, and our customers don't understand <a href="https://semver.org/">semantic versioning</a> even if we used it (business insists on <a href="https://dafoster.net/articles/2015/03/14/semantic-versioning-vs-romantic-versioning/">romantic versioning</a>). I also don't want to limit our ability to evolve as our understanding of the tools and how they work changes.</p>
<p>Ultimately, I go with three options:</p>
<ul>
<li>Plan ahead and add the nested verb if I think we're going to need it.</li>
<li>Reorganize so we don't have to, which is why we have <code>bob direct criteria export</code> and <code>bob direct property export</code>.</li>
<li>Just break it (bump the major version if you can) and document the change.</li>
</ul>
<h2>Levels of Help</h2>
<p>I prefer three levels of help: synopsis, option, and verb.</p>
<p>Synopsis is when you just run a program without the required arguments. Just give a little summary of what is needed. Depending on the parsing library, I'd rather see the most common options and a list of verbs with what they do.</p>
<p>Option is when someone passes in <code>--help</code> (I do not like using <code>-h</code> for help). That should list all verbs, options, and arguments, with details.</p>
<p>This is where my opinions differ from Lark's. I hate when <code>git clone --help</code> opens a pager. I know what I'm doing, and either I'm already passing it through a pager of my choice or I'm scanning because PowerShell is slow enough that I can read almost as fast as it prints. I want it to dump the data; I want to be able to scroll up as I need. Needless to say, I despise that <code>git clone --help</code> opens up a web browser on Windows. Switching programs is the last thing I want to do, and it makes me feel like I'm losing control.</p>
<p>Also, if I touch the mouse, I failed. And browsers on Windows almost always require mice touching.</p>
<p>On the other hand, if it is given as a command or verb, such as <code>git clone help</code>, then I'm okay with a novel-length list of help files and throwing it into a pager is fine. I want the help verb to be the full details with examples, discussions, and links. Need more? Then make <code>help</code> have nested verbs that let me discover the detailed topics (but please no interactive exploration).</p>
<p>That means I'm also okay with the example:</p>
<pre><code class="language-shell">$ go get --help
usage: go get [-t] [-u] [-v] [build flags] [packages]
Run 'go help get' for details.
</code></pre>
<p>Though I would have preferred that <code>go get</code> did the synopsis given above, <code>go get --help</code> gave the synopsis and explained what <code>-t</code>, <code>-u</code>, and <code>-v</code> are, and <code>go help get</code> (I'd rather it be <code>go get help</code> though) gave details like modules, examples, etc.</p>
<p>I believe that <code>go get</code> does something by itself, but I'd rather require an option that says “get all” (which is what I assume it does) instead of inferring it; that is a point I feel strongly about. If <code>go get</code> has to do something, then skip that, but my opinion is that <code>go get --help</code> should at least list the verbs and explain the options.</p>
<p>Related to that, it frustrates me when <code>--help</code> does not show help screens. I don't care if I have every argument and option set, <code>--help</code> should always take priority.</p>
<h2>Arguments and Options</h2>
<p>There is one thing I struggle with the <a href="https://clig.dev/#arguments-and-flags">CLI design</a>:</p>
<blockquote>
<p>Prefer flags to args. It’s a bit more typing, but it makes it much clearer what is going on. It also makes it easier to make changes to how you accept input in the future. Sometimes when using args, it’s impossible to add new input without breaking existing behavior or creating ambiguity.</p>
</blockquote>
<p>There isn't a good way of knowing when a flag is required. There is no standard convention that says “you must have this” because we are so inconsistent with indicating optional versus required. I guess if the synopsis said <code>gary [--verbose] --input FILE</code>, maybe?</p>
<p>But this is one that I need to work out in my head, because I usually equate positional arguments with required and flags with optional. Intellectually, I agree with the statement, but it isn't what I'm doing these days.</p>
<p>After thinking about it, I think I want to change to follow this one more.</p>
<h2>Data Type of Options and Arguments</h2>
<p>This is one of my frustrations: not knowing whether a flag takes a value and, if it does, what type of value it takes. In our system, we identify files or the type of input:</p>
<pre><code>$ bob direct user list --help
...
--no-color
--table-search REGEX
--output FILENAME
--user-search LOOKUP
</code></pre>
<p>(“LOOKUP” has a special meaning for us, such as <code>starts:XXX</code>, <code>regex:XXX</code>, <code>id:999</code>, <code>key:XXXX</code>, and <code>contains:XXX</code> versus a bare <code>XXX</code>, which is a case-insensitive exact match.)</p>
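<p>As a sketch, such a LOOKUP value might be resolved like this (the prefixes come from the help text above; the function itself is hypothetical, not our actual implementation):</p>
<pre><code class="language-python">import re


def match_lookup(lookup, candidate):
    """Check a candidate string against a LOOKUP expression.

    A recognized prefix selects the match style; a bare value is a
    case-insensitive exact match.
    """
    prefix, sep, value = lookup.partition(":")
    if sep and prefix == "starts":
        return candidate.startswith(value)
    if sep and prefix == "contains":
        return value in candidate
    if sep and prefix == "regex":
        return re.search(value, candidate) is not None
    if sep and prefix in ("id", "key"):
        return candidate == value
    return candidate.lower() == lookup.lower()
</code></pre>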
<p>There are cases when I want to pass in the value, so there is a difference between <code>--foo</code> and <code>--foo yes</code>, and the help should say that. Also, using a generic placeholder like <code>value</code>, as <code>tea</code> does, is frustrating.</p>
<pre><code class="language-shell">$ tea repo create --help
...
--gitignores value, --git value list of gitignore templates (need --init)
</code></pre>
<p>Um, what is the value? And the help doesn't describe what the flag does: it doesn't list the available templates, it expects a set of template names, but it doesn't tell me where to get a list of those templates.</p>
<h2>Terminal Columns</h2>
<p>Another frustration is that I don't like fancy tables in my CLI help. It might sound strange, but a nicely formatted table is great when you have standard column widths:</p>
<pre><code class="language-shell">$ tea repo create --help
...
--branch value use custom default branch (need --init)
--description value, --desc value add description to repo
</code></pre>
<p>Now, say you are jamming the shell into a narrow column because your editor needs the bulk of the window while you handle some sidebar tasks or write code that uses that CLI:</p>
<pre><code class="language-shell">$ tea repo create --help
...
--branch value use custom def
ault branch (need --init)
--description value, --desc value add descriptio
n to repo
--gitignores value, --git value list of gitign
ore templates (need --init)
</code></pre>
<p>... yeah, that isn't really readable. Wrapping is a problem unless you take into account the number of columns currently on the screen. This is somewhere the default either needs to be screen-aware wrapping, or we need reactive formatting.</p>
<p>In a “perfect” world, I'd rather the narrow one look like:</p>
<pre><code>$ tea repo create --help --faked
--branch value
use custom default branch (need --init)
--description value, --desc value
add description to repo
--gitignores value, --git value
list of gitignore templates (need --init)
</code></pre>
<p>Also, infrequently used options should have longer names, and multiple aliases mean there is a bigger chance of word-wrapping. Overall, <code>ripgrep</code> has a nicer default format for what I'm looking for:</p>
<pre><code>$ rg --help
--no-config
When set, ripgrep will never read configuration files. When this flag
is present, ripgrep will not respect the RIPGREP_CONFIG_PATH
environment variable.
If ripgrep ever grows a feature to automatically read configuration
files in pre-defined locations, then this flag will also disable that
behavior as well.
</code></pre>
<p>But even that doesn't handle columns well. I'd love to see a better convention here, even if it comes down to when everything should just be on a single line and let the terminal wrap.</p>
<pre><code>$ bob --help
--log-level LEVEL, -l LEVEL
Sets the logging level for the
console output. Possible optio
ns are: Error, Warning, Info,
Debug, Verbose. Case-insensiti
vie and shortened values allow
ed. Default: $BOB_LOG_LEVEL, I
nfo.
</code></pre>
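<p>The column-aware wrapping I'm wishing for can be sketched with Python's <code>textwrap</code> and <code>shutil</code> (the option text is invented; a real help formatter would need to handle the two-column layout too):</p>
<pre><code class="language-python">import shutil
import textwrap


def format_option(flags, description, width=None):
    """Render one option: flags on their own line, description wrapped
    on word boundaries and indented underneath, never exceeding the
    terminal width."""
    if width is None:
        width = shutil.get_terminal_size(fallback=(80, 24)).columns
    body = textwrap.fill(
        description,
        width=width,
        initial_indent="    ",
        subsequent_indent="    ",
    )
    return flags + "\n" + body


print(format_option("--branch value", "use custom default branch (need --init)", width=30))
</code></pre>
<p>Because the wrapping happens on word boundaries at the detected width, a narrow terminal gets readable help instead of mid-word breaks.</p>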
<p>And since it usually comes up: I can't always zoom out the terminal and make the text smaller, because at a certain point, the blur radius from my eye surgery causes everything to turn into a muddled mess of colors.</p>
<h2>Environment Variables</h2>
<p>If a flag/option is driven by an environment variable (an argument/positional shouldn't be), then list that in the <code>--help</code>. I believe the Woodpecker CLI does that (but I don't have it installed right now), and seeing something like this is nice:</p>
<pre><code>$ bob direct property export --help
...
--connection DOTNET-SQL-CONNECTION The connection string to connect to
the primary database. Value comes from $BOB_CONNECTION_STRING,
"ConnectionString" from bob.config. Required.
</code></pre>
<p>That makes it a lot easier to understand, more so when the order of processing is also listed. Following my earlier distinction between <code>--help</code> and <code>help</code>, those additional details are better suited for the command instead of the option.</p>
<h2>Plural versus Singular</h2>
<p>Singular. The same with my REST opinions. <code>tea</code> uses both and it looks wrong to me:</p>
<pre><code>$ tea --help
...
issues, issue, i List, create and update issues
</code></pre>
<h2>Colors and Emojis</h2>
<p>This is a hard one because I love and hate colors. Pretty colors are great, but I use terminals with black background, ones with light ones, and PowerShell with its hideous blue background. Sooner or later, a command with color screws up one of them because after decades of writing CLIs, we haven't come up with a convention to provide color preferences via environment variables.</p>
<p>Also, a decent percentage of the population is color blind. I also find that I stop processing color in certain situations and start seeing things in desaturated colors. So, as much as I want to see all those pretty colors in output, either don't touch my colors so I can use whatever background I want or come up with a <a href="xkcd://927">standard</a> that everyone follows.</p>
<p>I also love emojis, but there is a disconnect when some are black and white and others have full color. They are also dependent on the fonts in the terminal.</p>
<h2>Logging</h2>
<p>Remarkably, logging flags cause me a lot of stress. As I see it, there are two philosophies when it comes to logging: flags or an option.</p>
<p>Flags are when you have <code>--verbose</code> and <code>--quiet</code>. The problem is the conflict. What if you provide both? Should one take priority? Should it blow up with an “invalid options selected” error? Mutually exclusive options for those libraries that handle them? And then there is <code>-vvvvvvv</code>.</p>
<p>The other is an option, such as a lot of Microsoft tools use (also <code>woodpecker-cli</code>), which is a <code>--log-level LEVEL</code>.</p>
<p>Lately, I've been leaning toward the <code>--log-level</code> approach, but give it options. Since I use <a href="https://serilog.net/">Serilog</a> heavily and my users are sloppy, I like the level to be as flexible as possible. So, while I might say “error, warning, info, debug, verbose” (I always think debug and verbose should be reversed with Serilog), I want them to be able to say <code>--log-level e</code> or <code>--log-level ERR</code> because that's how they think.</p>
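<p>That flexible matching can be sketched like this (the level names are the ones above; the resolution rules are my own guess at what sloppy input should accept, not Serilog's actual behavior):</p>
<pre><code class="language-python">LEVELS = ["error", "warning", "info", "debug", "verbose"]


def resolve_log_level(text):
    """Resolve a case-insensitive, possibly shortened level name.

    Both "e" and "ERR" resolve to "error"; unknown or ambiguous
    prefixes raise ValueError so the CLI can show its help instead.
    """
    matches = [level for level in LEVELS if level.startswith(text.lower())]
    if len(matches) != 1:
        raise ValueError("unknown or ambiguous log level: " + text)
    return matches[0]
</code></pre>
<p>Since each of the five levels starts with a different letter, even a single character resolves unambiguously.</p>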
<p>I also frequently use the same tools in a monitored environment as my local machine, so occasionally I want more details in the log messages. That is also why I want to see more <code>--log-format FORMAT</code> where format can be one-line JSON, plain, with full timestamps, extra details, etc.</p>
<p>Finally, I have a common need to have log files written out to a text file. Usually this is <code>--log-file FILENAME</code>, but that format and level should be independently configurable. <code>--log-file-format</code> and <code>--log-file-level</code> with the same options, defaulting to the top level ones if not provided.</p>
<p>Of course, if <code>--verbose</code> is just an alias for <code>--log-level verbose</code>, that would be fine also. If I had to pick common or uncommon options, I would pick one set for the common.</p>
<h2>Consistency</h2>
<p>There is a bunch of opinions in here. Not all of them make sense for everyone, but they make sense for me. That comes down to the Standards Problem, so I can't really say whether they are “good” opinions or not, just that they are my opinions: ones that I like and ones that I don't.</p>
<p>But I like talking about it because it helps refine my opinions and find new ideas that I have never thought about.</p>
<h1>Teaching NixOS about OpenTofu</h1>
<p><em>2024-01-05, <a href="https://d.moonfire.us/blog/2024/01/05/teaching-nixos-about-opentofu/">https://d.moonfire.us/blog/2024/01/05/teaching-nixos-about-opentofu/</a></em></p>
<p>In my endless quest to come up with a completely data-driven and reproducible environment, I decided to take a stab at a new instance automation tool: OpenTofu.</p>
<p>In my endless quest to come up with a completely data-driven and reproducible environment, I decided to take a stab at a new automation tool: <a href="https://opentofu.org/">OpenTofu</a>. I've already gotten a good <a href="/tags/nixos/">NixOS</a> setup, but I wanted to also be able to check in the setup for my instances (and to a smaller degree, my bare metal servers in my home lab) to expand on the functionality. It didn't hurt that work had settled on Terraform.</p>
<p>Previously, I had taken a stab at <a href="https://www.pulumi.com/">Pulumi</a> (during my wedding anniversary trip in 2022). It was fun and I liked the code but I ended up gutting it later. For some reason, it quickly ended up feeling like a chore to play with. At that point, I figured I would just do things manually. But then I saw the announcement that OpenTofu had forked from Terraform because of enshittification of licenses (develop with an open license, then switch to a more limited one once profits became important). That little thing set me off and I decided to try it out.</p>
<h2>Installing</h2>
<p>Since all my infrastructure code is in a Nix flake, getting started just required adding OpenTofu to the shell's packages.</p>
<pre><code class="language-nix">devShell.${system} = pkgs.mkShell {
buildInputs = [
pkgs.just
pkgs.opentofu
pkgs.openstackclient
];
};
</code></pre>
<p>I also grabbed the OpenStack client because it made it easier to find some of the nasty little identifiers I needed to import.</p>
<h2>Configuration Files</h2>
<p>The way tofu works is that it grabs all the <code>*.tf</code> files in the same directory. So inside my infrastructure flake, I have a <code>src/tofu</code> directory with configuration files named in a way that makes sense to me:</p>
<ul>
<li><code>000-providers.tf</code></li>
<li><code>050-secrets.tf</code> (.gitignored)</li>
<li><code>050-secrets.tf.enc</code> (SOPS encrypted based on my user key)</li>
<li><code>200-networks.tf</code></li>
<li><code>500-instance0.tf</code></li>
<li><code>500-instance1.tf</code></li>
</ul>
<p>All of them are picked up, merged together, and made into a single set of settings. I also use <code>tofu fmt</code> a lot since I like to normalize my files on every commit.</p>
<h2>OpenStack and DreamHost</h2>
<p>Fortunately, my hosting provider of choice is <a href="https://dreamhost.com/">DreamHost</a>. They aren't the cheapest or the best, but they appear to be ethical. Mostly, I stick with them because they went to court to <a href="https://techfreedom.org/victory-online-political-free-speech-dreamhost-case/">fight some overreaching gag orders</a>.</p>
<p>(I also tried DigitalOcean at the same time as Pulumi but dropped that also.)</p>
<p>OpenTofu (via Terraform plugins) does a wonderful job of supporting both OpenStack and DreamHost DNS to tie everything together.</p>
<pre><code>terraform {
  required_version = ">= 0.14.0"

  required_providers {
    dreamhost = {
      source  = "adamantal/dreamhost"
      version = "0.3.2"
    }

    openstack = {
      source  = "terraform-provider-openstack/openstack"
      version = "~> 1.53.0"
    }
  }
}
</code></pre>
<p>I added the <code>adamantal/dreamhost</code> plugin so I could also assign the DNS record directly from my script and have everything working.</p>
<h2>Secrets</h2>
<p>Even though my infrastructure flake is in a private repository, I still encrypt all my secrets. I use SOPS for this, which means setting up the <code>.sops.yaml</code> file to encrypt and then decrypting/encrypting files using <code>just</code>:</p>
<pre><code>decrypt: decrypt-clouds decrypt-secrets

decrypt-clouds:
    if [ ! -f clouds.yaml ]; then sops -d clouds.yaml.enc > clouds.yaml; fi

encrypt-clouds:
    cp clouds.yaml clouds.yaml.enc
    sops -i -e clouds.yaml.enc

decrypt-secrets:
    if [ ! -f 050-secrets.tf ]; then sops -d 050-secrets.tf.enc > 050-secrets.tf; fi

encrypt-secrets:
    cp 050-secrets.tf 050-secrets.tf.enc
    sops -i -e 050-secrets.tf.enc
</code></pre>
<p>I do the same with my <code>.env</code> file so I can get the information I need set up properly.</p>
<h2>Creating an Instance</h2>
<p>Here is a short segment for creating an instance on DreamHost (or most OpenStack providers).</p>
<pre><code>resource "openstack_compute_instance_v2" "instance1" {
  provider    = openstack.dreamhost
  name        = "instance1"      // It really isn't instance1, but just pretend
  key_pair    = "keypair1"       // This is my SSH key set up somewhere else
  flavor_name = "gp1.supersonic" // gp1.supersonic means I don't need a swap disk

  user_data = <<-EOT
    #cloud-config
    runcmd:
      - curl https://raw.githubusercontent.com/elitak/nixos-infect/master/nixos-infect | NIX_CHANNEL=nixos-unstable bash 2>&1 | tee /tmp/infect.log
  EOT

  # This sets up the boot device as /
  block_device {
    source_type           = "image"
    uuid                  = "2b2c61c6-324c-47f4-88c1-9ae8a978ddfd" # Ubuntu
    boot_index            = 0
    delete_on_termination = true
    destination_type      = "volume"
    multiattach           = false
    volume_size           = 80
  }

  network { // Also configured somewhere else
    name = openstack_networking_network_v2.public.name
  }
}

resource "dreamhost_dns_record" "instance1" {
  record = "instance1.mfgames.com"
  value  = openstack_compute_instance_v2.instance1.network[0].fixed_ip_v4
  type   = "A"
}
</code></pre>
<p>For me, the really cool part is that I can bake the <a href="https://github.com/elitak/nixos-infect">NixOS infect</a> script right in. To my surprise, it just ran the first time without errors (though I had to wait about ten minutes after OpenTofu said it was done).</p>
<p>All I had to do was either show the results:</p>
<pre><code class="language-shell">tofu plan
</code></pre>
<p>Or apply the changes:</p>
<pre><code class="language-shell">tofu apply
</code></pre>
<h2>Importing an Instance</h2>
<p>Actually, the first thing I did was import my existing instances into the system. This involves creating a <code>.tf</code> file with the same basic setup as the other instance (some fields can be skipped, but I was still learning), then importing with the ID from OpenStack. Of course, getting the IDs was the hard part. Fortunately, this is where the OpenStack client comes into play. I can use it to get the list of servers, figure out the ID, then import it into Tofu.</p>
<pre><code class="language-shell">$ openstack --os-cloud dreamhost server list
+--------------------------------------+-----------+--------+------------+--------------------------+----------------+
| ID                                   | Name      | Status | Networks   | Image                    | Flavor         |
+--------------------------------------+-----------+--------+------------+--------------------------+----------------+
| 55f8ee35-31b2-4137-af1d-b7597d348271 | instance0 | ACTIVE | public=*** | N/A (booted from volume) | gp1.supersonic |
| 1a38092b-bbc5-46bd-9092-0df979ca8fe4 | instance1 | ACTIVE | public=*** | N/A (booted from volume) | gp1.supersonic |
+--------------------------------------+-----------+--------+------------+--------------------------+----------------+
$ tofu import openstack_compute_instance_v2.instance0 55f8ee35-31b2-4137-af1d-b7597d348271
</code></pre>
<h2>NixOS</h2>
<p>Now, while this was great for setting up things, I also wanted to pull that data into my Nix infrastructure flake. Fortunately, OpenTofu has a way of exporting the data pulled from the cloud. To do that, I need to add an <code>output</code> stanza at the bottom of my <code>500-instance1.tf</code> file:</p>
<pre><code>output "instance1_ipv4" {
  value = openstack_compute_instance_v2.instance1.network[0].fixed_ip_v4
}
</code></pre>
<p>Since I'm (recently) fond of using <a href="/tags/just/">Just</a> for automation, I banged up a little stanza that automatically creates a <code>default.nix</code> file inside that directory every time I apply.</p>
<pre><code>apply: decrypt format && export
    tofu apply

format:
    tofu fmt

plan: decrypt format
    tofu plan

export:
    echo "inputs: {" > default.nix
    tofu output | sort | perl -ne 'chomp;s@_@.@g;print " $_;\n"' >> default.nix
    echo "}" >> default.nix
</code></pre>
<p>I had to use <code>_</code> in the output names since dotted notation isn't accepted, but I use <code>perl</code> to convert those underscores into a Nix-happy format.</p>
<pre><code class="language-nix">inputs: {
 instance0.ipv4 = "1.2.3.4";
 instance1.ipv4 = "2.3.4.5";
}
</code></pre>
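<p>As an illustrative sketch (my actual pipeline is the <code>just</code> recipe with <code>perl</code> above), the same conversion could be written as a small Python function; <code>tofu_output_to_nix</code> is a hypothetical name:</p>
<pre><code class="language-python">def tofu_output_to_nix(output: str) -> str:
    """Convert `tofu output` lines into a Nix attribute set.

    Mirrors the perl one-liner: sort the lines, swap underscores for
    dots, indent, and terminate each binding with a semicolon.
    """
    lines = sorted(output.strip().splitlines())
    body = "".join(" " + line.replace("_", ".") + ";\n" for line in lines)
    return "inputs: {\n" + body + "}\n"
</code></pre>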
<p>I pull this into my <code>networking.nix</code>, which drives things like configuring AdGuard and services like Maddy (for DeltaChat).</p>
<pre><code class="language-nix">inputs:
let
  tofu = import ./tofu/default.nix { };
in
{
  instance0 = tofu.instance0;
  instance1 = tofu.instance1;
  instance2.ipv4 = "192.168.0.2";
}
</code></pre>
<p>From there, I have a single place to get all my IP addresses:</p>
<pre><code class="language-nix">inputs:
let
  ip = (import ../../../../networks.nix { }).instance0.ipv4;
in
{
}
</code></pre>
<h2>Conclusion</h2>
<p>It isn't the best or most graceful way of doing things, but I'm pretty happy with how everything turned out. I made a few mistakes along the way of setting up Gitea Actions and had to drop and rebuild my instance0. That was just a matter of renaming <code>500-instance0.tf</code>, applying to drop, and then naming the file back. Then I had a nice clean slate to push out a new closure.</p>
<p>OpenTofu is much nicer than Pulumi. It didn't insist on having a cloud to maintain state; the state file is checked into Git instead. It has a declarative language instead of code, and since I really don't need a lot of that logical flow, it just works for me. Plus, I was able to inject it into my <code>just deploy</code> top-level script that pushes out changes to my home lab and all my instances in a single call.</p>
Passing NixOS Flake Inputs Through Colmena2023-12-03T06:00:00Zhttps://d.moonfire.us/blog/2023/12/03/passing-nixos-flake-inputs-through-colmena/How to pass a flake input from a top-level infrastructure flake into the individual nodes (servers) so a single lock file can handle all changes.
<p>I'm past the one-year anniversary of using <a href="/tags/nixos/">NixOS</a> and I'm still learning a lot about the language and infrastructure. For the most part, I still feel good about the results, even with my troubles playing video games and my underpowered hardware.</p>
<p>In the beginning, I also decided to go “all in” on Flakes. They covered a lot of the things I wanted in Nix and had the isolation that played well with my writing and programming environment. They were also barely understood, shiny, and I like taking the hardest path possible.</p>
<p>Once I realized I liked NixOS, I started putting it on other machines. For that, I settled on <a href="https://github.com/zhaofengli/colmena">Colmena</a>. It had pretty output and, for the most part, did what I wanted instead of the more complex systems that I couldn't understand due to my low skill in the Nix language.</p>
<h2>Flake Inputs</h2>
<p>One thing that had been vexing me for the last year is how to pass a flake input from the infrastructure project (the repository that builds out all the servers) into the individual nodes (servers). This would let me install custom tools on servers, or just use something outside of Nixpkgs without having to hunt down hashes every time the project built. Eventually, I hope to include:</p>
<ul>
<li>Author Intrusion (if it ever works)</li>
<li>Fedran CLI</li>
<li>FlakeHub (current curiosity)</li>
<li>Catppuccin theme for Gitea</li>
</ul>
<p>Since I was messing with NixOS over the last few days, I decided to make another attempt at solving the problem. A few questions on Matrix went unanswered, so I decided that real-time chat isn't the best place for this type of question, went to the NixOS Discourse, and <a href="https://discourse.nixos.org/t/colmena-how-to-push-a-flake-input-into-the-individual-nodes/36332">posted there</a>. Lo and behold, someone answered with exactly the answer I needed.</p>
<pre><code class="language-nix"># flake.nix
{
  # An example flake to pass into the node.
  inputs.fh.url = "https://flakehub.com/f/DeterminateSystems/fh/*.tar.gz";

  outputs = inputs @ {self, ...}: let
    system = "x86_64-linux";
    pkgs = import inputs.nixpkgs {inherit system;};
  in rec {
    colmena = {
      meta = {
        # Colmena needs a package set for the nodes.
        nixpkgs = pkgs;

        # Inject the inputs for the top-level flake into a variable so the
        # nodes can pull them down consistently. I renamed this to flakes
        # instead of inputs since I always use `inputs` for my parameters
        # for nested Nix files.
        specialArgs = {
          flakes = inputs;
          system = system; # I cannot figure out the "correct" way to do this yet.
        };
      };

      machine-a = import ./machine-a.nix;
    };
  };
}

# machine-a.nix
{config, pkgs, flakes, system, ...}: {
  environment.systemPackages = [
    flakes.fh.packages.${system}.default
  ];
}
</code></pre>
<h2>Nix as a Language</h2>
<p>I still don't really “like” Nix, but I'm getting better at it. It is one of those things that grows on you, even though I have a strong preference for statically compiled languages because they catch many trivial typos.</p>
<p>Automated handling of trivial things has become something I need. While I have pretty good attention to detail, I don't always have the bandwidth to fully focus on an item. So anything I can do to fix those without thinking is a good thing.</p>
<p>Trying to solve this is a good example of my problems. I spent a year trying to solve this, occasionally asking on Matrix and other places. And a GitHub issues page really isn't the best place to ask beginning questions, though I have done that in the past.</p>
<p>I also couldn't find an example of how to do things, which is one reason why I'm creating these posts. At least then, hopefully, someone who is struggling with the same things can stumble on the answer.</p>
Enforcing Standards with NixOS2023-12-02T06:00:00Zhttps://d.moonfire.us/blog/2023/12/02/enforcing-standards-with-nixos/A way of using Nix and direnv to hook up standards for formatting and conventions.
<p>Some time ago, I stumbled into <a href="https://github.com/divnix/std">std</a>, a batteries-included development framework. It looks like something I would really like to get into, mainly because it gave off notes of <a href="https://buck2.build/">Buck2</a>, which is something that interests me when dealing with microservice ecosystems and polyglot frameworks. And I know I love a polyglot solution to problems.</p>
<p>There were a few things that I fought against. I didn't care for the menu system that always shows up (noise), its instability (still alpha), and the difficulty getting it to work with my way of thinking. I could have worked on some of those, figured out how to accept what I couldn't change, and altered what I needed to be productive.</p>
<p>I don't really have that energy at the moment. I'm painfully aware that my time and attention budget has been eroded by my family, drama, and the other things going on in my life. I find that I don't have the energy to do much, and getting std to play with me was one of those things I decided to bump.</p>
<h2>Automation Tools</h2>
<p>However, there were some things I really liked about std that I wasn't aware of, namely <a href="https://github.com/nix-community/nixago">Nixago</a>, which is a way of having the shell hook of a Nix setup automatically write out the various configuration files for things like <a href="https://github.com/siderolabs/conform">Conform</a> for Git messages (I like my conventional commits), <a href="https://editorconfig.org/">EditorConfig</a> for formatting, and <a href="https://github.com/evilmartians/lefthook">Lefthook</a> to make sure everything is honored. std also taught me about <a href="https://github.com/numtide/treefmt">Treefmt</a>, which is a single command to reformat a code base.</p>
<p>There was also a way of doing arbitrary configurations, such as maybe setting up my project configuration files or handling other things, but I couldn't figure out how with a cursory look.</p>
<p>In short, all of those things I like to handle automatically instead of remembering all the little details.</p>
<p>With me getting rid of std, I wanted to keep this. Ideally in a manner that I could eventually create a flake of my common configurations and then apply them to every story or programming project.</p>
<h2>Necessity Calls</h2>
<p>Getting rid of std meant I had to figure it out. Last night, I sat down and went through the code with my growing skill at Nix (I still do not enjoy the language, but I'm getting more fluent with it). I'm also messing with <a href="https://flakehub.com/">FlakeHub</a>, so you'll see some elements from that library.</p>
<p>I already use <a href="https://direnv.net/">direnv</a> for setting up my flakes as I enter directories. That is part of my normal tool set and I plan on using that for a great deal of time.</p>
<h2>Layout</h2>
<p>I like small, individual files. Naturally, this means I would like every automated system to have its own file, but grouped together in a folder to make it obvious how they are used. (Needless to say, I don't advocate <a href="https://medium.com/lost-but-coding/in-programming-folder-structure-doesnt-matter-as-much-as-you-think-71deecca6028">coding without folders</a>, but that is also how I work.)</p>
<pre><code class="language-shell">$ find src -type d
src
src/configs
</code></pre>
<h2>Inputs</h2>
<p>The first part is pulling in the inputs for Nixago and its extensions.</p>
<pre><code class="language-nix"># flake.nix
{
inputs = {
nixpkgs.url = "https://flakehub.com/f/NixOS/nixpkgs/*.tar.gz";
nixago.url = "github:jmgilman/nixago";
nixago.inputs.nixpkgs.follows = "nixpkgs";
nixago-exts.url = "github:nix-community/nixago-extensions";
nixago-exts.inputs.nixpkgs.follows = "nixpkgs";
};
# Flake outputs that other flakes can use
outputs = inputs @ { self, nixpkgs, nixago, nixago-exts }:
let
# This bit comes from Flakehub's init and seems to be a reasonable pattern.
supportedSystems = [ "x86_64-linux" ];
forEachSupportedSystem = f: nixpkgs.lib.genAttrs supportedSystems (system: f {
inherit system;
pkgs = import nixpkgs { inherit system; };
});
in
{
devShells = forEachSupportedSystem ({ system, pkgs }:
let
# This pulls in the configurations from the configuration directory.
configs = import ./src/configs/default.nix { inherit system pkgs nixago nixago-exts; };
in
{
default = pkgs.mkShell {
# Pinned packages available in the environment
packages = with pkgs; [
git # Needed for life until I find something more awesome
nixpkgs-fmt # Needed for Lefthook
treefmt # Needed for Lefthook
lefthook # Needed for Lefthook
];
# Configuration setup
shellHook = ''
${configs.shellHook}
lefthook install
'';
};
});
};
}
</code></pre>
<h2>Configurations</h2>
<p>The basic default is just so I have a single line to configure all the libraries. This just acts as an index file.</p>
<pre><code class="language-nix"># src/configs/default.nix
inputs:
inputs.nixago.lib.${inputs.system}.makeAll [
(import ./conform.nix (inputs))
(import ./editorconfig.nix (inputs))
(import ./lefthook.nix (inputs))
(import ./prettier.nix (inputs))
(import ./treefmt.nix (inputs))
]
</code></pre>
<h3>Prettier</h3>
<p>Prettier was the first one I used, since I have very little customization in it. <code>nixago-exts</code> is an extension library that figures out a lot of the formats so I don't have to.</p>
<pre><code class="language-nix"># src/configs/prettier.nix
inputs @ { system, nixago, nixago-exts, ... }:
nixago-exts.prettier.${system} {
data = {
printWidth = 80;
proseWrap = "always";
};
}
</code></pre>
<h3>Lefthook</h3>
<p>Lefthook's configuration is the same, but you'll notice there is no <code>data =</code> element like most of the others, which threw me because it is different from the rest. I also found that I had to add <code>&& git add {staged_files}</code> to most of the examples I saw; otherwise, when I commit, it would reformat the code but then leave the files modified for the <em>next</em> check-in. Adding the files fixes that and keeps things relatively speedy.</p>
<p>You also can see how I refer to specific paths for the executables while cleaning up the code.</p>
<pre><code class="language-nix"># src/configs/lefthook.nix
inputs @ { system, pkgs, nixago, nixago-exts, ... }:
nixago-exts.lefthook.${system} {
commit-msg = {
commands = {
# Runs conform on commit-msg hook to ensure commit messages are
# compliant.
conform = {
run = "${pkgs.conform}/bin/conform enforce --commit-msg-file {1}";
};
};
};
pre-commit = {
commands = {
# Runs treefmt on pre-commit hook to ensure checked-in source code is
# properly formatted.
treefmt = {
run = "${pkgs.treefmt}/bin/treefmt {staged_files} && git add {staged_files}";
};
};
};
}
</code></pre>
<p>As a side note, I have not found a single <em>fast</em> C# reformatter that only handles a few files. It has been intensely frustrating because I really like ReSharper's “Silently Clean” feature and I don't have a way of doing it from the command line in a reasonable period of time.</p>
<h3>Conform</h3>
<p>Conform is nice because it enforces Git commit messages for conventional commits.</p>
<pre><code class="language-nix"># src/configs/conform.nix
inputs @ { system, nixago, nixago-exts, ... }:
nixago-exts.conform.${system} {
commit = {
header = { length = 89; };
conventional = {
# Only allow these types of conventional commits (inspired by Angular)
types = [
"build"
"chore"
"ci"
"docs"
"feat"
"fix"
"perf"
"refactor"
"style"
"test"
];
# If you want scopes, then add:
#scopes = ["allows" "scopes" "here"];
};
};
}
</code></pre>
<h2>Treefmt</h2>
<p>Cleaning up code is something that is tedious but really needs to be done to lower the bar of allowing others into the code. It is also something that can be “mostly” automated, which I'm also in favor of. Sadly, there are gaps in the tools that I want, like a C# or Rust formatter that will organize members (such as grouping public properties together and making them alphabetical), but I can live without those.</p>
<pre><code class="language-nix"># src/configs/treefmt.nix
inputs @ { pkgs, ... }:
let
data = {
formatter = {
prettier = {
command = "${pkgs.nodePackages.prettier}/bin/prettier";
options = [ "--write" ];
includes = [
"*.css"
"*.html"
"*.js"
"*.json"
"*.jsx"
"*.md"
"*.mdx"
"*.scss"
"*.ts"
"*.yaml"
#"*.toml"
];
excludes = [ "**.min.js" ];
};
nix = {
command = "nixpkgs-fmt";
includes = [ "*.nix" ];
};
};
};
in
{
# I don't understand the reason why many Nix examples define
# the data in the let section and then just inherit it here.
inherit data;
output = "treefmt.toml";
}
</code></pre>
<h3>EditorConfig</h3>
<p>I think one of the best things that came out of the last decade or so of coding was a slow migration to having a semi-universal file for configuring line endings, trimming whitespace, and the rest. Also, both Microsoft and JetBrains have embraced EditorConfig, so I can check in a file that reduces the trivial (but needed) rejects for pull requests. I want more tools to use this and extend it, because I don't want to bother with line indents, tabs versus spaces (tabs lost, but I've accepted spaces now), and formatting rules (braces on new lines).</p>
<pre><code class="language-nix"># src/configs/editorconfig.nix
inputs: # I don't use the inputs, I just wanted all the calls in `default.nix` to be consistent.
let
data = {
root = true;
"*" = {
end_of_line = "lf";
insert_final_newline = true;
trim_trailing_whitespace = true;
charset = "utf-8";
indent_style = "space";
indent_size = 4;
indent_brace_style = "K&R";
max_line_length = 80;
tab_width = 4;
curly_bracket_next_line = true;
};
"*.md" = {
max_line_length = "off";
};
"package.json" = {
indent_style = "space";
indent_size = 2;
tab_width = 2;
};
"{LICENSES/**,LICENSE}" = {
end_of_line = "unset";
insert_final_newline = "unset";
trim_trailing_whitespace = "unset";
charset = "unset";
indent_style = "unset";
indent_size = "unset";
};
};
in
{
inherit data;
hook.mode = "copy";
output = ".editorconfig";
format = "toml";
}
</code></pre>
<p>Obviously, the C# version is huge with lots of settings to fit my standards.</p>
<h2>Benefits</h2>
<p>The nice part about this is all I have to do is change into the directory and direnv will automatically make sure all the files are correct and up to date. Since I'm mostly on the command line, this works out beautifully for me and how I work.</p>
<pre><code class="language-shell">$ cd bakfu
direnv: loading ~/src/bakfu/.envrc
direnv: using flake
evaluating derivation 'git+file:///home/dmoonfire/src/bakfu#devShells.x86_64-linux.default'
nixago: updating repository files
nixago: '.conform.yaml' link updated
nixago: '.editorconfig' copy is up to date
nixago: 'lefthook.yml' link updated
nixago: '.prettierrc.json' link updated
nixago: 'treefmt.toml' link updated
direnv: export +AR +AS +CC +CONFIG_SHELL +CXX +HOST_PATH +IN_NIX_SHELL +LD +NIX_BINTOOLS +NIX_BINTOOLS_WRAPPER_TARGET_HOST_x86_64_unknown_linux_gnu +NIX_BUILD_CORES +NIX_BUILD_TOP +NIX_CC +NIX_CC_WRAPPER_TARGET_HOST_x86_64_unknown_linux_gnu +NIX_CFLAGS_COMPILE +NIX_ENFORCE_NO_NATIVE +NIX_HARDENING_ENABLE +NIX_LDFLAGS +NIX_STORE +NM +OBJCOPY +OBJDUMP +RANLIB +READELF +SIZE +SOURCE_DATE_EPOCH +STRINGS +STRIP +TEMP +TEMPDIR +TMP +TMPDIR +__structuredAttrs +buildInputs +buildPhase +builder +cmakeFlags +configureFlags +depsBuildBuild +depsBuildBuildPropagated +depsBuildTarget +depsBuildTargetPropagated +depsHostHost +depsHostHostPropagated +depsTargetTarget +depsTargetTargetPropagated +doCheck +doInstallCheck +dontAddDisableDepTrack +mesonFlags +name +nativeBuildInputs +out +outputs +patches +phases +preferLocalBuild +propagatedBuildInputs +propagatedNativeBuildInputs +shell +shellHook +stdenv +strictDeps +system ~PATH ~XDG_DATA_DIRS
</code></pre>
<h2>Moving Parts</h2>
<p>There is a problem with this, in that it is a lot of moving parts to basically write out a file that could easily be checked in once and be done with. I fully admit that this jumps through a lot of hoops that could be handled with simple checked-in files, with only one exception.</p>
<p>The biggest exception is Lefthook. Someone needs to run <code>lefthook install</code> after the repository is cloned to ensure the hooks are all configured, so it enforces the commit messages and makes sure the code is formatted before committing. Since Git won't ever provide that, having the shell hook from the flake enforce it cuts out a tedious step that is easily overlooked.</p>
<p>The other reason for going with this approach is my ability to update it. Each of my stories is in its own repository for a variety of reasons, but their structure and layout are typically manipulated en masse as I update a standard. I also do theme and style changes across the board, such as finding a new font or fixing the ebook generation.</p>
<p>With the flake setup, I could easily migrate these configurations to a dedicated flake that is shared across all of them, and then just update the lock for that file to enforce the latest iteration of an evolving standard.</p>
<p>And standards are evolving. While I have common patterns (braces on new lines), how I format the code, organize files, or hook up patterns changes. Sometimes it is a little incremental change; sometimes it is a sweeping change as I switch build systems or introduce a standard format. Last year, I set it up so every project would generate an EPUB and PDF file and work with my Gitea and Woodpecker CI setup.</p>
<p>If I can make those changes cut across all the projects, then it is less effort for me to get conformity but also let me work in an environment I'm comfortable with. Being able to make sure every tool I want is available (such as <a href="/tags/author-intrusion/">Author Intrusion</a> or <a href="/tags/markdown/">markdowny</a>) means I don't have to think about the plumbing and just do the part that is fun: write.</p>
Package Management - Formats and Registries2023-11-30T06:00:00Zhttps://d.moonfire.us/blog/2023/11/30/package-management-formats/Thoughts on setting up formats and layering registries for those formats on top of each other.
<p>Since my mind has been on it, I wanted to work out some of the ideas I had for formats in my packaging system. In this case, I'm going to focus on a single one, NuGet, because I have a fair amount of experience with this and it has some of the complexities that are throwing me.</p>
<h2>Series</h2>
<p>This is going to be a series of posts, but I have no idea of how fast I'll be writing them out. I want to work out my ideas, maybe have a few conversations, and then start to move to more technical concepts.</p>
<ul>
<li><a href="/blog/2023/02/07/package-management-introduction/">2023-02-07 Package Management - Introduction</a></li>
<li><a href="/blog/2023/02/08/package-management-versions/">2023-02-08 Package Management - Versions</a></li>
<li><a href="/blog/2023/02/12/package-management-identifiers/">2023-02-12 Package Management - Identifiers</a></li>
<li><a href="/blog/2023/02/13/package-management-dependencies/">2023-02-13 Package Management - Dependencies</a></li>
<li><a href="/blog/2023/09/20/package-management-identifiers-2/">2023-09-20 Package Management - Identifiers 2</a></li>
<li><a href="/blog/2023/11/30/package-management-formats/">2023-11-30 Package Management - Formats and Registries</a></li>
</ul>
<h2>Configuration Files</h2>
<p>All of the configuration for the system will be in a series of JSON5 files (or JSON, or whatever formats end up officially supported) that are merged together, with any conflict producing an error message and stopping the system.</p>
<p>Assuming <code>$GITDIR</code> is the top-level directory for a Git repository, then the configuration would be in <code>$GITDIR/.config/bakfu</code>. All the files will be gathered together, but a <code>.gitignore</code> could ignore <code>*.user.*</code> which means authentication information could be stored locally but have a common configuration on top of that.</p>
<p>In this case, the files all have the same schema which is a required component because it also identifies the version of the file.</p>
<pre><code class="language-json5">{
"$schema": "...",
}
</code></pre>
<p>Any more details on how we look for configuration files will have to wait for another post.</p>
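<p>As a sketch of the merge-with-conflicts rule described above, here is a minimal Python version (hypothetical function name; a real implementation would also track which file each value came from for better error messages):</p>
<pre><code class="language-python">def merge(base: dict, overlay: dict, path: str = "") -> dict:
    """Deep-merge two configuration mappings, raising on any conflict."""
    result = dict(base)
    for key, value in overlay.items():
        here = f"{path}.{key}" if path else key
        if key not in result:
            result[key] = value
        elif isinstance(result[key], dict) and isinstance(value, dict):
            result[key] = merge(result[key], value, here)
        elif result[key] != value:
            # Two files disagree on a scalar: stop the system.
            raise ValueError(f"conflicting values for {here}")
    return result
</code></pre>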
<h2>What is a Format?</h2>
<p>Right now, a “format” is a specific package format, such as a NuGet package or an NPM one. Inspired by SGML catalogs and how I like to see things in Git repositories, a format looks roughly like this JSON5.</p>
<pre><code class="language-json5">{
  // In this file, the various components don't have to be URL encoded.
  "formats": {
    "nuget": {
      "defaults": {
        "authority": "nuget",
      },
      "authorities": {
        "nuget": {},
      },
    },
  },
}
</code></pre>
<p>Since we merge files, that means I could create a project-specific authority for packages that aren't (and probably never will be) on the official NuGet server. For example, my personal Forgejo instance at <a href="https://src.mfgames.com/mfgames-cil">https://src.mfgames.com/mfgames-cil</a>.</p>
<pre><code class="language-json5">{
  // In this file, the various components don't have to be URL encoded.
  "formats": {
    "nuget": {
      "authorities": {
        "src.mfgames.com/mfgames-cil": {},
      },
    },
  },
}
</code></pre>
<h2>Authorities</h2>
<p>The authority itself is the complex part of the file because it needs to handle how to search for packages, how to download them, what authorization is needed, and how verification is done.</p>
<pre><code class="language-json5">// merged formats.nuget.authorities:
"nuget": {
  // "enabled": true, // Implied so it can be disabled
  "registries": {
    "nuget.org": {
      "protocol": "nuget-v3",
      "url": "https://api.nuget.org/v3/index.json",
    },
  },
}
</code></pre>
<p>Another file could merge additional registries in, much like you can have a proxy feed in DevOps.</p>
<pre><code class="language-json5">// merged formats.nuget.authorities:
"nuget": {
  "registries": {
    "nuget.org": {
      "protocol": "nuget-v3",
      "url": "https://api.nuget.org/v3/index.json",
    },
    "example": {
      "protocol": "nuget-v3",
      "url": "https://example.org/proxied-feed/v3/index.json",
    },
  },
}
</code></pre>
<h3>Controlling Order</h3>
<p>In the <code>NuGet.config</code> file, there is also the ability to clear out the list of registries and use only a single set of identified registries. In this case, it would be a combination of disabling the known ones, changing the search for other files (a later post), and using a set of ordering controls.</p>
<pre><code class="language-json5">// merged formats.nuget.authorities:
"nuget": {
  "registries": {
    "nuget.org": {
      "protocol": "nuget-v3",
      "url": "https://api.nuget.org/v3/index.json",
      "enabled": false,
      // Below implies "search": { "after": ["example"] }
    },
    "example": {
      "protocol": "nuget-v3",
      "url": "https://example.org/proxied-feed/v3/index.json",
      "search": {
        "before": ["nuget.org"],
      },
    },
  },
}
</code></pre>
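<p>A sketch of how those ordering controls might resolve, using Python's standard topological sorter; the alphabetical baseline and the exact <code>before</code>/<code>after</code> semantics are my guesses, since the schema above is still a design in progress:</p>
<pre><code class="language-python">from graphlib import TopologicalSorter


def registry_order(registries: dict) -> list:
    """Order registries by their before/after hints, then drop disabled ones."""
    ts = TopologicalSorter()
    for name, reg in sorted(registries.items()):  # alphabetical baseline
        ts.add(name)
        search = reg.get("search", {})
        for other in search.get("before", []):
            ts.add(other, name)  # name must be searched before other
        for other in search.get("after", []):
            ts.add(name, other)  # name must be searched after other
    return [
        n for n in ts.static_order()
        if registries.get(n, {}).get("enabled", True)
    ]
</code></pre>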
<h3>Additional Packages</h3>
<p>In <code>NuGet.config</code>, it is possible to <a href="https://learn.microsoft.com/en-us/nuget/consume-packages/package-source-mapping">map a set of packages</a> to a given URL, such as all <code>MfGames*</code> can only be found at a specific server. In those cases, that should be treated as a separate authority with its own set of registries (that may be a duplicate if it is also a proxy feed).</p>
<p>In the example below, <code>contoso</code> would be treated as a separate authority than the <code>nuget</code> default.</p>
<pre><code class="language-xml"><?xml version="1.0" encoding="utf-8"?>
<!-- NuGet.config -->
<configuration>
  <packageSourceMapping>
    <packageSource key="nuget.org">
      <package pattern="*" />
    </packageSource>
    <packageSource key="contoso.com">
      <package pattern="Contoso.*" />
      <package pattern="NuGet.Common" />
    </packageSource>
  </packageSourceMapping>
</configuration>
</code></pre>
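<p>In Python terms, the mapping could behave like NuGet's does: every pattern is checked and the most specific match wins (approximated here as the longest pattern). This sketch hard-codes the mapping from the example above:</p>
<pre><code class="language-python">from fnmatch import fnmatch

# Pattern -> source mapping from the NuGet.config example.
MAPPING = {
    "nuget.org": ["*"],
    "contoso.com": ["Contoso.*", "NuGet.Common"],
}


def source_for(package_id: str) -> str:
    """Pick the package source whose longest pattern matches the id."""
    best = None  # (pattern length, source name)
    for source, patterns in MAPPING.items():
        for pattern in patterns:
            if fnmatch(package_id.lower(), pattern.lower()):
                if best is None or len(pattern) > best[0]:
                    best = (len(pattern), source)
    if best is None:
        raise LookupError(f"no source maps {package_id}")
    return best[1]
</code></pre>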
<h3>Protocol</h3>
<p>The protocol determines how the registry is accessed. I could see a number of possibilities:</p>
<ul>
<li>NuGet V3 protocol, obviously NuGet-centric</li>
<li>NPM access</li>
<li>Directory location</li>
<li>gRPC proxy server</li>
</ul>
<p>Since this library needs to be implemented across a number of platforms and libraries, I would expect that unknown protocols would be filtered out (maybe with a warning) and then the ones that can be accessed are used. If there are no valid ones, then the system should blow up.</p>
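<p>As a sketch, that filtering might look like this, with the supported set obviously varying per client implementation:</p>
<pre><code class="language-python">SUPPORTED_PROTOCOLS = {"nuget-v3", "file-v1"}  # varies per implementation


def usable_registries(registries: dict) -> dict:
    """Drop registries whose protocol this client cannot speak."""
    usable = {}
    for name, reg in registries.items():
        if reg["protocol"] in SUPPORTED_PROTOCOLS:
            usable[name] = reg
        else:
            # Warn rather than fail on an unknown protocol.
            print(f"warning: skipping {name}: unknown protocol {reg['protocol']}")
    if not usable:
        # No registry can be accessed: blow up.
        raise RuntimeError("no registries with a supported protocol")
    return usable
</code></pre>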
<p>The protocol also determines what additional settings might be required, such as authentication, file system layout, or the like. Those would be obviously specific to that protocol, so they are thrown into a generic “settings” object to control those things.</p>
<pre><code class="language-json5">"local": {
"protocol": "file-v1",
"url": "file:///${env:HOME}/src/other/project",
"settings": {
// The layout for "bob" version "1.0.0" would be "b/bob/1.0.0".
"package": "${PACKAGE_NAME:0-0}/${PACKAGE_NAME}/${PACKAGE_VERSION}",
},
},
</code></pre>
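<p>The slice syntax in <code>${PACKAGE_NAME:0-0}</code> is something I'm inventing here, so here's a sketch of how an expander might interpret it, assuming the <code>start-end</code> numbers are inclusive character indexes:</p>
<pre><code class="language-python">import re

def expand_layout(template: str, name: str, version: str) -> str:
    """Expand ${PACKAGE_NAME}, ${PACKAGE_VERSION}, and the sliced form
    ${PACKAGE_NAME:start-end} (inclusive indexes, an assumed semantic)."""
    values = {"PACKAGE_NAME": name, "PACKAGE_VERSION": version}

    def substitute(match: re.Match) -> str:
        value = values[match.group("var")]
        if match.group("slice"):
            start, end = (int(n) for n in match.group("slice").split("-"))
            value = value[start:end + 1]  # inclusive end index
        return value

    return re.sub(r"\$\{(?P<var>[A-Z_]+)(?::(?P<slice>\d+-\d+))?\}",
                  substitute, template)

layout = "${PACKAGE_NAME:0-0}/${PACKAGE_NAME}/${PACKAGE_VERSION}"
print(expand_layout(layout, "bob", "1.2.0"))  # b/bob/1.2.0
</code></pre>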
<h2>Search Controls</h2>
<p>Another aspect is how to handle searching. NuGet searches all the sources at the same time and the first one that responds wins. However, in some cases, one might want only certain registries to be searched and then stop if the package isn't there (such as a full proxy feed versus a subset proxy feed).</p>
<p>I would see this as controlled by two components: at the authority level and for an individual registry.</p>
<pre><code class="language-json5">// merged formats
"nuget": {
"authorities": {
"nuget": {
"search": {
"concurrent": true,
"defaultOrder": "Alphabetical",
},
"registries": {
"nuget.org": {
"protocol": "nuget-v3",
"url": "https://api.nuget.org/v3/index.json",
"search": {
"notFound": "Stop",
"timeout": {
"time": "00:01:00",
"action": "Retry",
"maximumRetries": 3,
},
},
},
"example": {
"protocol": "nuget-v3",
"url": "https://example.org/proxied-feed/v3/index.json",
"search": {
"notFound": "Continue",
},
},
},
},
},
}
</code></pre>
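<p>Ignoring the <code>concurrent</code> flag for the moment, a sequential sketch of that search loop might look like this. The lookup function is injected so the example stays self-contained, and the setting names come from the example above:</p>
<pre><code class="language-python">class Timeout(Exception):
    pass

def search(registries: dict, package: str, lookup):
    """Query each registry in order; honor notFound and retry settings."""
    for name, cfg in registries.items():
        search_cfg = cfg.get("search", {})
        retries = search_cfg.get("timeout", {}).get("maximumRetries", 0)
        result = None
        for _ in range(retries + 1):
            try:
                result = lookup(name, package)
                break
            except Timeout:
                result = None  # try again, or give up after the last retry
        if result is not None:
            return result
        if search_cfg.get("notFound") == "Stop":
            return None  # this registry is authoritative, so stop here
    return None

registries = {
    "nuget.org": {"search": {"notFound": "Stop"}},
    "example": {"search": {"notFound": "Continue"}},
}
# A lookup stub that only knows about one package on nuget.org.
hits = {("nuget.org", "Humanizer"): "nuget.org/Humanizer"}
print(search(registries, "Humanizer", lambda reg, pkg: hits.get((reg, pkg))))
</code></pre>
<p>With <code>"notFound": "Stop"</code> on the first registry, a miss there ends the search without ever asking the proxied feed.</p>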
<h2>Conclusion</h2>
<p>Well, that's my thoughts on authorities and how to search them for packages. One thing you might notice is that I <em>don't</em> have offline packages in the above examples. I want to treat offline (cached) files as a first-class concept in the packaging system, and that requires its own discussion. Not to mention, the cache is for all formats, not just one.</p>
<p>I also want to eventually introduce the ability to have services provide opinions on packages. This way, I could set up a service that translates CVE alerts into controls of the packages found or allow project-specific settings that would hide packages that were incompatible with the current project. I just don't know what to call them.</p>
<blockquote>
<p>Computer science cannot solve two things: cache invalidation, how to name things, and off-by-one errors.</p>
</blockquote>
These Last Few Months2023-11-20T06:00:00Zhttps://d.moonfire.us/blog/2023/11/20/these-last-few-months/A lot has been going on in the last few months, much of which has been radio silence for me as I fixated on getting everything moving as smoothly as I can before things crumbled.
<p>In the last month, I have been in one of those periods of time when I'm not very communicative online or posting at all. Usually, this is because I shut down when I'm focusing on something that is driving my anxiety and the only way I know to handle it is to fixate until it gets resolved.</p>
<p>Overall, a lot has happened and this is a little retrospective of my last month.</p>
<h2>Disposition</h2>
<p>As I <a href="/blog/2023/08/21/kenneth-evans-jr/">posted in August</a>, my dad died of heart complications. As much as he and I <a href="/garden/exit-planning/">talked about the end</a>, there is a stark difference between preparing for one's death and being one of the people who has to pick up the pieces. My younger brother had to take on the majority of the burden, but I still have my share.</p>
<p>This last week, I was in Michigan to help take apart my dad's house. It was a daunting task, but I was able to help in something I'm good at: processing large amounts of data. In this case, it was going through forty years of accumulated notes, which I <a href="https://octodon.social/@dmoonfire/111411977974056290">posted about on Octodon</a> while I was going through it. There was a lot more than that, but it was a good slice of interesting things to find in his notes.</p>
<p>Along the way, I'm also getting a sizable hunk of furniture and the random debris that I could find useful, along with a number of things that evoked an emotional response or memory (like his fusion reactor or the particle accelerator diagrams).</p>
<h2>Cleanup</h2>
<p>Which led to the two weeks before when I frantically cleaned up the flood damage and everything that happened in the <a href="/tags/entanglement-2021/">entanglement</a>, my three years of bad luck. Things like dragging the water-damaged bookcases up, throwing out ruined stuff, and cleaning out the accumulated things that were shoved into any available space just to get them out of the way of danger.</p>
<p>That was a lot of work and I think I pushed myself well past my limits since I hit a point in the middle of moving and my right shoulder just gave up with a sharp pain. I've been treating it gingerly for a few days now but it is going to take at least a few weeks before I can pick up anything heavy with my right arm.</p>
<p>Fortunately, we were able to rent a dumpster again this year and filled it with everything, so I had a lot less clutter. Plus, with some money from the estate, I was able to get some shelves and organize it.</p>
<p>I was also able to finally take out the last hunks of wood and find out the full scope of the mold growing on the drywall. It wasn't pretty, but it was only bad on about a dozen 4x8 sheets instead of twenty.</p>
<h2>Laundry</h2>
<p>As my luck goes, the week I got the money was also the week that the clothes washer died completely. So, a portion of that went into replacing it, the dryer that was on its last legs, and the microwave that died in 2021. Two of them got installed last week and I'm <em>hoping</em> (but don't expect) that the washer will get in this week.</p>
<h2>Cracked Pipe</h2>
<p>Back in <a href="/blog/2021/12/18/basement-bathroom/">December 2021</a>, I found out I had a cracked pipe in the foundation. Between funding (and things like someone stealing $12k from me) and everything else going on, I couldn't do much. But with the money I got, I was able to have the plumber jackhammer the floor, find the broken pipe, and repair it. And, in his words, “there were all types of things wrong with it” including the entire bottom half of the joint being broken off and it dumping a good portion of every dishwasher run into the slab.</p>
<p>While the plumbers were cleaning up, the hydraulic closer that automatically shuts the door between the garage and the house tore itself out of the frame, taking a fist-sized hunk of wood with it, and pulled out of the steel door too. So, that is going to be something I'll have to replace at some point.</p>
<p>As things go, this was a major drain of my attention because it was always there underneath my feet, another flood waiting to happen without warning.</p>
<h2>Cleaning</h2>
<p>While I was gone, Partner went on a cleaning binge so I came home to a clean house. That was nice. And we finally figured out what to do with the closet doors that have been breaking since 2020: we took them out entirely, and it looks much better now.</p>
<p>We're also going to put a rug on the ruined carpet until we can tear that out… later.</p>
<h2>The Pause</h2>
<p>Now I'm in a brief pause where I have no immediate deadlines. My brother got me a Pod with my stuff from dad's house, so I have to finish repairing the basement by May when it shows up. I already have the bulk of the basement shuffled into one room, and I have a clear set of tasks that have to be done before that:</p>
<ul>
<li>Replace the bathroom that got demolished by repairing the cracked pipe.</li>
<li>Replace the rotted drywall (and maybe run some conduit to key rooms).</li>
<li>Replace the flooring with something other than naked concrete.</li>
</ul>
<p>Oh, and take it easy for a few days before something else time sensitive hits me. Maybe play a few video games, watch some movies, and just… not do anything.</p>
<h2>Luck</h2>
<p>I'd like to say my entanglement is over, but it is still tracking at something relatively significant about every three weeks (maybe about once every four weeks now). We are in year three right now, but this year, I've lost both parents and a dog, significantly damaged my leg and sciatic nerve, had to do a major repair to my car, and dealt with a whole slew of other things.</p>
<p>Using my dad's estate money helps with repairing and recovering from the last three years. I can't say how grateful I am that it came when it did, but I'm really hoping this is a sign that things are looking up (you know, except for losing my dad).</p>
<h2>Work</h2>
<p>Work has also been hitting me pretty hard. Not much to say about that, since I don't really give details about what I do, but we have quarterly releases and the end of February, May, August, and November are always rough times. Hopefully I'm near the end of that since we're past code cut-off and working toward QA cut-off.</p>
<h2>Moving Forward</h2>
<p>The thing is, I usually have good luck, so the only thing I can do is the same thing I always do when it comes to these things:</p>
<blockquote>
<p>Just keep swimming. - Dory, <em>Finding Nemo</em></p>
</blockquote>
ICON 48 Retrospective2023-10-17T05:00:00Zhttps://d.moonfire.us/blog/2023/10/17/icon-48-retrospective/<p>I got back from this year's <a href="https://iowa-icon.com/">ICON 48</a> in Cedar Rapids, and it was definitely an experience but not that much of an adventure this year. A lot of it was the result of the previous weekend, when we went up to the cabin, a lack of child watching (usually the children's grandmother watches them but she was exhausted), and Partner getting “peopled out.”</p>
<p>In the end, I tried to do too much and didn't really excel at most of them.</p>
<h2>Dealer Hall</h2>
<p>Like most years, I had a table in the dealer hall for <a href="https://typewriter.press/">Typewriter Press</a>. This year, I had eighteen books available, fifteen of which were on display (the others were a selection of erotic sci-fi/fantasy I've read during previous <em>Late Night Erotic Reading</em> sessions).</p>
<p>Sales were not good: I sold a single $5 book.</p>
<p>It was a nice table and I was happy with the result; there were just a few things that went wrong. For starters, ICON is still a relatively small circle of people (only a couple hundred), so most of them have seen most of my books already. The new ones didn't draw folks in either: not <a href="//fedran.com/flight-of-the-scions/">Flight of the Scions</a>, nor Shannon's <a href="https://weirdauthor.com/merger">Merger of Evil</a>.</p>
<p>It was also one of ICON's low attendance years. I'm guessing less than a hundred people, which makes it hard to sell since those who do come are already dedicated to the convention and… have seen everything Shannon and I have to offer.</p>
<p>This was also a bummer because I thought my packing and setup was pretty good, though I didn't get a proper price sheet written out so I wrote one by hand. Maybe next time.</p>
<h2>Panels</h2>
<p>All of my panels were on Saturday and that is where things got a little scattered. Because of a production issue, I had limited time to set up on Friday. On Saturday, I brought Child.0 with me but the consuite and gaming rooms didn't open until 10:00 (usually they are open all night). Not to mention, this was Child.0's first “full” convention so I walked them through the various places and tried to get them situated.</p>
<p>This meant I completely forgot about my first panel and blew it off. Which is a major bummer because being late sets off my anxiety (my third major trigger) and I don't like failing to fulfill my obligations. Not to mention, I really wanted to talk about <em>Writing Space Epics</em> because it is a topic I enjoy (though most of my readers know me for <a href="//fedran.com">Fedran</a>, not sci-fi writing).</p>
<p>The second panel was an author reading.</p>
<p>No one showed up.</p>
<p>Well, the other author did, but no one else. I spent an hour talking to Bob J. Koester about his audio books and podcasts, random topics, and basically chatting until the next authors showed up and I talked to them for a while (and realized I had to buy their books) before finding Child.0.</p>
<p>The third panel was finally a success. <em>Character Development for Aspiring Authors</em> went pretty well. I didn't realize I was the moderator at first (they didn't have programs available and the Google Doc for the schedule didn't have those details), but I like moderating panels and did so. It was a lot of fun to talk about different ways to approach building characters.</p>
<p>The fourth and final panel was <em>Writing Collaboratively</em>, which also went well. I've done a lot of commissioned writing, writing one-on-one adventures for games (some of which were novel length), and a variety of that. My experiences were much different than the other panelists but I thought it was fairly balanced and there were plenty of questions.</p>
<p>There could have been a fifth one, <em>Late Night Erotic Reading</em>, but since I dropped out of Facebook, the folks setting it up thought I had disappeared off the planet. I do like to find strange submissions (there is always a Tingler being read, not to mention silly and serious pieces), so I just have a lot of fun doing it. But, because I disappeared, I wasn't invited and I had a thirteen year old so… I had to skip it. Maybe next year?</p>
<p>I also still want to resurrect <a href="https://en.wikipedia.org/wiki/Mickey_Zucker_Reichert">Mickey's</a> writing workshop. She was a mentor for me, though she probably didn't think so, and helped me a lot with my writing and craft. It was also a fantastic two hours of doing a deep dive critique of stories. I can't cover the full range of experiences she had, but I feel that I can help others get to at least my level and maybe push them above me. I've been talking about it for a few years, but I need <a href="https://mindbridge.org/">Mindbridge</a> to help, which means I need to actually talk to them long before the next convention.</p>
<h2>Child.0</h2>
<p>Instead of listening to snippets of porn, I ended up sitting in on Child.0's first Dungeons and Dragons game. It was a pick-up game, but I wanted their first attempt to not have me as the dungeon master. Since it was a Minecraft-based adventure and the entire party was a pyromancer in <em>some</em> regard, it ended up being a game of “burn it all down,” with a remarkable number of peaceful options left on the table.</p>
<p>And apparently “let's walk the courtyard to get a lay of the land and make sure there are no surprises” was completely foreign to everyone. (In other words, everyone played their WIS attribute perfectly.)</p>
<p>It was a lot of fun and both Child.0 and Child.1 want to try it again.</p>
<p>Also, checking on Child.0 meant I abandoned my dealer table frequently which may have contributed to the lack of sales, but I have a pretty good priority system and Zero's care is higher than making a few bucks.</p>
<h2>Child.1</h2>
<p>On Sunday, Child.0 decided to sleep in, so I brought Child.1 (age eight). Now, One is a lot more effort to watch than Zero, so I wasn't really able to pay attention to the table either. So, we played video games, checked out things, and bought way too much.</p>
<p>Also, because One takes a little longer to get going (and I had the dealer's table), I missed the benefactor's brunch and a chance to fan-squeal over the guests of honor.</p>
<h2>Partner</h2>
<p>Around the middle of Sunday, Partner came by and we got a few hours of strolling around and being together. It was nice, because they also helped tear down the table and get everything packed up, all for the price of five sushi trays.</p>
<h2>Next Year</h2>
<p>This year, I had conflicts of interest: my family, the panels, and capitalism. I don't like the conflict; I was already committed to the dealer hall and the panels, but the children took priority.</p>
<p>Next year, I need to remove at least one so I'm not going to do the dealer hall and just focus on the kids and panels because Zero and One are both at an age where I can leave them alone for an hour to do the panel but I can't leave them alone for six to man a table. Not to mention, it cost me $55 to make a profit of less than a dollar (after printing costs and royalties).</p>
ICON 48 Schedule2023-09-24T05:00:00Zhttps://d.moonfire.us/blog/2023/09/24/icon-48-schedule/<p>I got my schedule for this year's <a href="https://iowa-icon.com/">ICON 48</a> in Cedar Rapids, IA, USA. The convention runs from Friday, October 13 to Sunday, October 15. This is my “home” convention as it were (it's less than two kilometers from my house) and it is a relatively focused, busy day on Saturday for me with plenty of time through the rest of the days:</p>
<ul>
<li>Saturday:
<ul>
<li>09:00 (9:00 AM) - <em>Writing Space Epics</em></li>
<li>13:00 (1:00 PM) - <em>Author Reading - Dylan Moonfire & Bob J. Koester</em></li>
<li>15:00 (3:00 PM) - <em>Character Development for Aspiring Authors</em></li>
<li>17:00 (5:00 PM) - <em>Writing Collaboratively</em></li>
</ul>
</li>
</ul>
<p>Most of these are topics I talk a lot about in general, so I feel pretty good about not making a “complete” fool of myself (unlike that Women in Sci-Fi and Fantasy panel many years ago). Though, I really need to finish getting my sci-fi website up and running before then… and maybe some business cards.</p>
<p>This year, I'm skipping the Author Meet and Greet but hopefully won't be skipping the Patrons Brunch the next day. Also, I'm going to see about running the one-on-four writer's workshop that got me going so many years ago when Mickey was running it.</p>
<p>In addition to all that, I'll have a table in the dealer's room for <a href="https://typewriter.press">Typewriter Press</a> so there will be books to sell and plenty of chances to wander by and say hi. I love talking to folks.</p>
Package Management - Identifiers 22023-09-20T05:00:00Zhttps://d.moonfire.us/blog/2023/09/20/package-management-identifiers-2/Using URNs to identify packages and some re-thinking of concepts.
<p>It's been many (seven) months since I started working on the ideas of a package management system. A lot's happened, but something sent me down this path again and I decided to take another look. This isn't a straight journey where I write every single post ahead of time; instead, it's more ramblings and thoughts.</p>
<p>Or, in the words of one of my favorite puzzle games, <em>The Thirtieth Guest</em>:</p>
<blockquote>
<p>Feeling lonely?</p>
</blockquote>
<h2>Series</h2>
<p>This is going to be a series of posts, but I have no idea of how fast I'll be writing them out. I want to work out my ideas, maybe have a few conversations, and then start to move to more technical concepts.</p>
<ul>
<li><a href="/blog/2023/02/07/package-management-introduction/">2023-02-07 Package Management - Introduction</a></li>
<li><a href="/blog/2023/02/08/package-management-versions/">2023-02-08 Package Management - Versions</a></li>
<li><a href="/blog/2023/02/12/package-management-identifiers/">2023-02-12 Package Management - Identifiers</a></li>
<li><a href="/blog/2023/02/13/package-management-dependencies/">2023-02-13 Package Management - Dependencies</a></li>
<li><a href="/blog/2023/09/20/package-management-identifiers-2/">2023-09-20 Package Management - Identifiers 2</a></li>
<li><a href="/blog/2023/11/30/package-management-formats/">2023-11-30 Package Management - Formats and Registries</a></li>
</ul>
<h2>Mistakes Were Made</h2>
<p>While I was reading <a href="/blog/2023/02/12/package-management-identifiers/">the first identifiers post</a>, I realized I made a few mistakes. One was that I was using a URL instead of a <a href="https://en.wikipedia.org/wiki/Uniform_Resource_Name">URN</a> to identify a package:</p>
<blockquote>
<p>In contrast, URNs were conceived as persistent, location-independent identifiers assigned within defined namespaces, typically by an authority responsible for the namespace, so that they are globally unique and persistent over long periods of time, even after the resource which they identify ceases to exist or becomes unavailable.</p>
</blockquote>
<p>When it comes to identifying a package, that is exactly what we want: a persistent identifier that doesn't point to a specific location. When we want Markdowny, the important part is that we don't want to mandate <em>where</em> to get it, just enough to identify it.</p>
<h2>Rehashing as URN Components</h2>
<p>URNs always start with <code>urn:</code> and a registered “code”. We're going to pretend <code>bakfu</code> is the registered code, so that means all the package identifiers would be <code>urn:bakfu:</code> then something.</p>
<p>Likewise, I think it is important to know when something is a package identifier (say <code>pkg:</code>) or a reference to a package which has ranges (<code>ref:</code>).</p>
<p>EDIT: After looking at my notes, I already figured out why I didn't need this so I removed it from the content below.</p>
<p>From earlier posts, I decided on the package format being a component with well-known versions (npm, nuget, deno) and arbitrary ones (domain-based).</p>
<ul>
<li><code>urn:bakfu:npm</code></li>
<li><code>urn:bakfu:minetest</code></li>
<li><code>urn:bakfu:authorintrusion.com/spell-check</code></li>
</ul>
<p>In the later examples below, I'm going to cut off the <code>urn:bakfu:</code> prefix as noise, so just <code>npm</code>.</p>
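<p>Parsing that shape is straightforward; here's a sketch that accepts both the full URN and the shortened form (remember, <code>bakfu</code> is just a stand-in namespace):</p>
<pre><code class="language-python">def split_urn(urn: str) -> tuple[str, str]:
    """Split a package URN into (format, rest), tolerating the shorthand
    that drops the urn:bakfu: prefix."""
    if urn.startswith("urn:"):
        _, nid, rest = urn.split(":", 2)
        if nid != "bakfu":
            raise ValueError(f"not a bakfu URN: {urn!r}")
        urn = rest
    fmt, _, rest = urn.partition(":")
    return fmt, rest

print(split_urn("urn:bakfu:npm:markdowny"))  # ('npm', 'markdowny')
print(split_urn("npm:markdowny"))            # ('npm', 'markdowny')
</code></pre>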
<h2>Authority</h2>
<p>I don't have a lot of problems or doubt with the components above. The next part, on the other hand, is a bit more complicated and fluid. As I develop more, I think we should have separate package repositories/registries instead of putting everything at npmjs.com or nuget.org. However, that leads into potential name and identifier conflicts.</p>
<p>Using the example from my life, when I started Nitride, I made all the namespaces “Nitride” and was going to buy a developer SSL certificate to push it up to nuget.org. However, by the time I got to a stable point, there was already a Nitride there and that didn't work.</p>
<p>(This is also one reason why I don't like identifiers that aren't namespaced.)</p>
<p>At the same time, I want to keep these URNs relatively “simple” for the 99% cases. In those cases, that means I want to aim for something like:</p>
<ul>
<li><code>npm:markdowny</code></li>
<li><code>npm:@mfgames-writing/format</code></li>
<li><code>nuget:Humanizer</code></li>
</ul>
<p>But, if there is a non-default location, the URN needs to have some mechanism that identifies the “authority” of a package. This authority doesn't need to be a URL, just a unique key to distinguish between two packages with the same identifier.</p>
<p>Originally I thought about something like <code>npm:///markdowny</code> based on using <code>file:///</code> to reference the local file system but allow a domain and directory to be used:</p>
<ul>
<li><code>npm://mfgames.com/markdown</code></li>
<li><code>npm://mfgames.com/@mfgames-writing/epub2</code></li>
<li><code>npm://example.org/~user/@example-organization/example-package</code></li>
</ul>
<p>The problem with that is the last one. Where does the directory structure end and the package identifier begin? If the entire URL were opaque, it would be easy to leave as-is, but because this has to be parsed, there needs to be an unequivocal way of splitting it into an authority and a package identifier.</p>
<p>URL encoding to the rescue.</p>
<p>If we treat the optional directory structure (on the optional authority domain) as a single “unit”, then we can keep the slash to separate authority from the identifier but still keep the identifier in its most common format (NPM uses slashes):</p>
<ul>
<li><code>npm:markdowny</code></li>
<li><code>npm:///markdowny</code> (same as above)</li>
<li><code>npm://npmjs.com/@mfgames-writing/epub2</code></li>
<li><code>npm://npmjs.com%2F~dmoonfire/@mfgames-writing/epub2</code></li>
</ul>
<p>I think this would work because we can have a rule: if the component after the package type starts with two slashes, it is an authority and the next slash ends that component. Everything after that is the full identifier.</p>
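<p>That rule is easy to sketch out. The <code>split_reference</code> function below is hypothetical, but it shows how the percent-encoded authority survives the split:</p>
<pre><code class="language-python">from urllib.parse import unquote

def split_reference(ref: str) -> tuple[str, str | None, str]:
    """Split "type:[//authority/]identifier": two slashes introduce an
    authority, the next slash ends it, and the authority itself may be
    percent-encoded to hide internal slashes."""
    pkg_type, _, rest = ref.partition(":")
    authority = None
    if rest.startswith("//"):
        authority, _, rest = rest[2:].partition("/")
        # npm:///x has an empty (default) authority.
        authority = unquote(authority) or None
    return pkg_type, authority, rest

print(split_reference("npm:markdowny"))
print(split_reference("npm:///markdowny"))
print(split_reference("npm://npmjs.com/@mfgames-writing/epub2"))
print(split_reference("npm://npmjs.com%2F~dmoonfire/@mfgames-writing/epub2"))
</code></pre>
<p>The last call splits into the authority <code>npmjs.com/~dmoonfire</code> and the identifier <code>@mfgames-writing/epub2</code>, which is exactly the ambiguity the encoding was meant to remove.</p>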
<h3>Well-Known URLs</h3>
<p>I'm fond of the <a href="https://en.wikipedia.org/wiki/Well-known_URI">.well-known/</a> infrastructure that has built up over the years. I could easily envision that this could translate into an actual URL to help identify the location of the packages if not known.</p>
<ul>
<li><a href="https://npmjs.com/.well-known/bakfu/npm/npmjs.com%2F%7Edmoonfire/@mfgames-writing/epub2">https://npmjs.com/.well-known/bakfu/npm/npmjs.com%2F~dmoonfire/@mfgames-writing/epub2</a></li>
</ul>
<p>The resulting JSON file would give common locations where to find it. So going to the @mfgames-writing/epub well-known URL would then give the URLs for the official servers or locations, such as npmjs.com, my local package repository, an IPFS address, or whatever makes sense.</p>
<p>The reason it won't use query strings like <a href="https://webfinger.net/">webfinger</a> is because query strings don't play well with static sites and I use static sites pretty heavily.</p>
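<p>Building that URL is mostly a matter of percent-encoding the authority. The path shape below is an assumption based on the example, not a registered convention:</p>
<pre><code class="language-python">from urllib.parse import quote

def well_known_url(domain: str, fmt: str, authority: str, identifier: str) -> str:
    """Build the hypothetical .well-known discovery URL described above:
    /.well-known/bakfu/&lt;format&gt;/&lt;encoded authority&gt;/&lt;identifier&gt;."""
    encoded = quote(authority, safe="")  # slashes become %2F
    return f"https://{domain}/.well-known/bakfu/{fmt}/{encoded}/{identifier}"

print(well_known_url("npmjs.com", "npm", "npmjs.com/~dmoonfire",
                     "@mfgames-writing/epub2"))
</code></pre>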
<h2>Qualified Identifiers</h2>
<p>I think the ideas from the original identifiers post for qualified identifiers still have merit, but without the <code>bakfu:</code> prefix because it ends up just being noise. I think these should be limited and defined ahead of time since there is flexibility on the features.</p>
<ul>
<li><code>java:org.example.hyphenated_name</code></li>
<li><code>npm:markdowny?version=1.1.0</code></li>
<li><code>cargo:serde?version=1.0.152&feature[]=derive&feature[]=rc</code></li>
<li><code>cargo:serde?version=1.0.152&platform=x86_64-unknown-linux-gnu</code></li>
</ul>
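<p>Since the qualifiers are just query strings, parsing them is standard library territory. A sketch, with the repeated <code>feature[]</code> keys collecting into a list:</p>
<pre><code class="language-python">from urllib.parse import parse_qs

def parse_qualified(ref: str) -> tuple[str, dict]:
    """Split a qualified identifier into the bare reference and its
    qualifiers; repeated keys like feature[] become lists."""
    base, _, query = ref.partition("?")
    return base, parse_qs(query)

base, qualifiers = parse_qualified(
    "cargo:serde?version=1.0.152&feature[]=derive&feature[]=rc")
print(base)        # cargo:serde
print(qualifiers)  # {'version': ['1.0.152'], 'feature[]': ['derive', 'rc']}
</code></pre>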
<h3>Additional Versions</h3>
<p>If the package version (as opposed to the content version) is needed, then <code>&package=1.0.0</code> can be used. Likewise, if the Bakfu itself needs to be bumped, then <code>&bakfu=1.2.0</code> can be used.</p>
<p>I thought about making versions arbitrary, but I couldn't imagine a case where a package would have two different versions for two purposes. Those would be two separate packages in that case.</p>
<h3>Overriding Packages</h3>
<p>One of the reasons for this exercise is figuring out how to modify a library that is already released when the users learn after the fact that it breaks semantic versioning (SlimMessageBus). One constraint to this is that, according to the <a href="https://semver.org/#what-do-i-do-if-i-accidentally-release-a-backward-incompatible-change-as-a-minor-version">specification</a>, the version of the content cannot change once published.</p>
<p>That is why the package has its own version: to indicate that the package metadata, such as the dependencies and requirements, can change independently of the contents. In most cases, <code>package=1.0.0</code>, but a proxy service could add in the modified dependency and call it <code>package=1.0.1-service</code>, which would then cause the packaging system to prefer the highest package version among packages with the same content version.</p>
<p>There are some gaps: if we had a Bakfu-aware packaging system, someone could create a package and then keep bumping the package version higher to override anyone's overrides. But I think this is a case where an upstream package modification should be blocked if there is an override given, at least until that new version can be reviewed and accepted.</p>
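<p>A sketch of that preference rule: among candidates for the same content version, take the highest package version, remembering that a pre-release tag like <code>-service</code> sorts below the same release number but above any lower one. This is a naive comparison, not full semver:</p>
<pre><code class="language-python">def pick_override(candidates: list[str]) -> str:
    """Prefer the highest package (metadata) version among candidates."""
    def key(version: str):
        release, _, prerelease = version.partition("-")
        nums = tuple(int(part) for part in release.split("."))
        # The boolean makes an untagged version beat a tagged one with
        # the same release number, while higher numbers always win.
        return (nums, prerelease == "", prerelease)
    return max(candidates, key=key)

print(pick_override(["1.0.0", "1.0.1-service"]))  # 1.0.1-service
print(pick_override(["1.0.1", "1.0.1-service"]))  # 1.0.1
</code></pre>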
<h2>Conclusion</h2>
<p>The main reason to have these package identifiers is just to distinguish a package uniquely across the entire ecosystem. I strongly believe there needs to be a decoupling of the location versus the identifier because of the other goals in this project: moving from one host to another, caching packages, being able to provide a curated list, blocking malicious packages, and adding after-the-fact changes.</p>
<p>Overall, I think this fits my need for something that is reasonably aesthetic (<code>urn:bakfu:npm:markdowny?version=1.0.1</code>), has a most-common use of something simple and readable (<code>urn:bakfu:npm:markdowny</code>), but still allows distributed packages and cases where there are name collisions (<code>urn:bakfu:nuget://mfgames.com/Nitride</code>).</p>
<p>It also can be reduced to a common form based on context such as removing the <code>urn:bakfu:</code> which makes the simplest version <code>npm:markdowny</code>.</p>
<p>Also, it shouldn't be hard to create a normalized rule to turn it into a proper C# or Rust structure for doing the next steps.</p>
The passing of Kenneth Evans Jr.2023-08-21T05:00:00Zhttps://d.moonfire.us/blog/2023/08/21/kenneth-evans-jr/On July 30, my dad died.
<p>On July 30, my dad died.</p>
<p>It wasn't exactly like I didn't know it was coming. He had been going downhill in the last year and COVID didn't do him any favors. He had a number of heart attacks over the decades, including a quadruple bypass whose repair he outlived, so he had to have it done again.</p>
<p>But, knowing it was happening and having it happen are sometimes two different things. For some of us more than others.</p>
<p>I don't know if I have a lot to say about heading over to Michigan to help my brother with getting a memorial prepared, or going through his house realizing I won't see him again, or even seeing family I haven't seen since the last time someone died. It was just a sad conclusion to a novel that has lasted my entire life.</p>
<p>I wouldn't say unsatisfactory though. He was an amazing person. I know a lot of folks would say that about their parents, but a lot would also not. As a kid, I got to say my dad designed nuclear reactors or worked on particle accelerators. I got to see his pictures and photos win awards after seeing them being created before my eyes. He built the family cabin with his father and rode almost every trail in northern Wisconsin. He did so many things in so many fields.</p>
<p>And I'm a product of that. My variety of interests comes from him (and my mom, let's be honest). I'm a son of a scientist and of an artist. The differences were superficial in some regards and infinitely large in others.</p>
<p>He is the man who gave me the first nod of approval when I picked up enough C and ANSI color codes in a single day to start coding on my own. I was six and I still can't forget that sound of wonder as I was trying to get my name to line up in cyan on the page.</p>
<p>He gave me another nod of approval when I excitedly told him I was first published. He wouldn't review it, because no parent should review their children (I also suspect it was because of the content). But he encouraged me to keep writing, even when I “statistically should be selling more” and “I wasn't as bad as some of the other books in the story bundles.”</p>
<p>He didn't really know how to say “I love you” but he tried. It came in lectures and advice. When Partner got lost in the woods one time, he sent them maps the next day with suggestions. When I needed to figure out a math problem, he taught me basic trig even though I was in second grade. He never expected me to be anything other than the best I could be.</p>
<p>I inherited that struggle to express emotions. I know how to fake them, but I don't really know how to experience them. His death hit me hard, but I didn't realize it until I was taking control of his <a href="https://github.com/KennethEvans">GitHub repositories</a>, <a href="https://kenevans.net/">website</a>, and his <a href="http://kennethevans.github.io/">software</a> and copying files off his drive. That was the point I started really crying, knowing that he wouldn't be calling me to ask for help with a merge conflict or showing off a new tool he wrote to help manage his heart.</p>
<p>In the end, he made sure we knew that he loved talking to my children and that he was proud of us. He was apologetic for dying in the same year as our mother and was struggling to make sure my brother knew the password to his BitWarden before he passed.</p>
<p>My dad is the inspiration for my <a href="/garden/exit-planning/">exit planning</a>. He was one of the most well-organized people I knew, even when it came to planning out his death. He had his paperwork gathered together, his notes distributed, and an archived drive with (almost) everything we needed.</p>
<p>He didn't leave many intentions behind. One of them is to publish his memoirs in print. Since I own a publishing company, we always intended to clean it up and get it ready. I'm probably going to do that in the next few months, with a goal of having a print version by the end of the year.</p>
<p>I'm also going to go through his repositories, add a banner to the README files saying they are no longer maintained, and make sure they have good licenses (if he didn't already).</p>
<p>At the moment, I'm not okay. I will be, but not right now.</p>
<p>One thing I'm so thankful for: he left the world on his own terms. He's told us everything he could and now it is up to us to do what we will with that.</p>
Fedran Infrastructure Redux2023-06-27T05:00:00Zhttps://d.moonfire.us/blog/2023/06/27/fedran-infrastructure-2/Just a follow up on a few things I had to do to get my Fedran infrastructure behaving better and not chewing up time and bandwidth once a day.
<p>I know I said that I was going to work on writing after working on <a href="/blog/2023/06/24/fedran-infrastructure/">the previous post</a>, but I tried to speed up the pipelines because an hour was overwhelming and bothered me.</p>
<p>Thanks to a bit of obsession, I figured it out.</p>
<p>The bulk of the pipeline was spent downloading and building Nix packages. This normally isn't a problem, but I want to be able to run an agent in my home lab, where I'm charged for bandwidth overages. Pulling down half a gig of packages every time I built my website was also not being a good steward of the Internet as a whole.</p>
<p>I had bookmarked a link to kotatsuyaki's <a href="https://blog.kotatsu.dev/posts/2023-04-21-woodpecker-nix-caching/">Locally Cached Nix CI with Woodpecker</a> which was based on <a href="https://kevincox.ca/2022/01/02/nix-in-docker-caching/">Kevin Cox's work</a> and tried it a couple of times. I usually got hung up on switching from Docker to Podman but eventually got it mostly working, hit some snags, and found a slightly-less-than-optimal approach instead.</p>
<h2>Lazy Configuration</h2>
<p>It took me a day to figure out how to get Podman working with my setup. Naturally, the systemd units for OCI containers in NixOS are named after the runtime, so it was <code>docker-woodpecker-server</code> under Docker and <code>podman-woodpecker-server</code> under Podman. This meant I also had to switch the various systemd links that tie changing secrets to restarting the OCI containers.</p>
<p>As usual, once I realized I was going back and forth (usually on the third time), I decided to automate that.</p>
<pre><code># woodpecker-agent.nix
{
  config,
  pkgs,
  lib,
  ...
}: let
  tag = "next-eaae6b44c7";
  container =
    if config.virtualisation.podman.enable
    then "podman"
    else "docker";
in {
  sops.secrets = {
    woodpecker-ci-agent = {
      restartUnits = ["${container}-woodpecker-ci-agent.service"];
    };
  };
</code></pre>
<p>I'm not fond of Nix as a language, but it was nice being able to have it pick up which configuration was set up properly and then rename the various systemd units as appropriate.</p>
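<p>Incidentally, NixOS also exposes which runtime is in use through the <code>virtualisation.oci-containers.backend</code> option, so an alternative (untested) sketch could key off that option directly instead of checking <code>podman.enable</code>:</p>
<pre><code># Hypothetical refactor: derive the unit prefix from the backend option.
{config, ...}: let
  # "docker" or "podman", whichever oci-containers is configured to use
  backend = config.virtualisation.oci-containers.backend;
in {
  sops.secrets.woodpecker-ci-agent = {
    restartUnits = ["${backend}-woodpecker-ci-agent.service"];
  };
}
</code></pre>
<p>Either way, the point is the same: compute the runtime name once and build the unit names from it.</p>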
<h2>Mounting Nix Store</h2>
<p>The crux of the problem was storing Nix packages so I didn't have to download them every time I built the package. This is really important when I start doing mass changes to my writing projects since one of those causes ~137 pipelines to be triggered. Since each one uses an identical <code>flake.lock</code> (because of the Rust CLI), caching those packages also means that it can generate each of the PDFs and EPUBs automatically when I update packages, change style, or need to rebuild things.</p>
<p>Originally I tried the example in the above link and it worked beautifully. I did have to make the pipeline trusted, but I run my own CI server and I don't build pull requests, so I have that locked down to avoid too much security exposure.</p>
<pre><code># .woodpecker.yaml
pipeline:
  run-greet:
    image: nixos/nix
    commands:
      - echo 'experimental-features = flakes nix-command' >> /etc/nix/nix.conf
      - nix run --store unix:///mnt/nix/var/nix/daemon-socket/socket?root=/mnt .#greet -L
    volumes:
      - /nix:/mnt/nix:ro
</code></pre>
<p>I do think I may have had better luck if I included Kevin's command in the above link.</p>
<blockquote>
<p>I have seen some issues when using <code>--store</code>. I found that this can be fixed by additionally passing <code>--eval-store local</code>.</p>
</blockquote>
<p>Though by the time I realized I should have tried that, I had found a working solution and needed to stop fiddling with incremental changes that offered no significant improvement over what I had.</p>
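<p>For the record, combining both flags on the <code>run-greet</code> pipeline above would have looked something like this (untested):</p>
<pre><code># .woodpecker.yaml — untested variant adding Kevin's suggested flag
pipeline:
  run-greet:
    image: nixos/nix
    commands:
      - echo 'experimental-features = flakes nix-command' >> /etc/nix/nix.conf
      # --eval-store local keeps evaluation in the container while
      # realizing outputs through the mounted daemon store
      - nix run --store unix:///mnt/nix/var/nix/daemon-socket/socket?root=/mnt --eval-store local .#greet -L
    volumes:
      - /nix:/mnt/nix:ro
</code></pre>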
<p>When I migrated my Fedran pipeline over to the above <code>--store</code> implementation, it blew up because it was unable to link shared libraries. First it was <code>sodium23</code>, then <code>lowdown</code>, then something else. Each time, I added more packages to the build until I hit <code>libnixeval.so</code>. That one… I couldn't fix.</p>
<p>The main problem was that the <code>run-greet</code> pipeline didn't need to call the <code>nix</code> executable, but I was using <a href="/tags/nix-standard/">Standard</a>, which does. That meant I couldn't use <code>std //cli/apps/default:run</code> to run anything, so I had to fall back to <code>cargo build</code>, which negated the entire purpose of using Nix packages as a cache. I also couldn't use <code>nix run .</code> to take advantage of the Nix caching.</p>
<p>In the end, I switched back to Docker but with Kevin's first suggestion: just create a volume in Docker to share the Nix store. This worked out well… but I had to make a tweak from Kevin's original suggestion:</p>
<pre><code># gitlab-runner config.toml
[[runners]]
  executor = "docker"
  [runners.docker]
    volumes = ["/nix"]
</code></pre>
<p>I found it worked better if I used a named volume:</p>
<pre><code># .woodpecker.yaml
pipeline:
  deploy:
    image: nixpkgs/nix-flakes
    commands:
      - nix develop --command ./src/website/scripts/ci.sh
    when:
      event: [push, manual, cron]
      branch: main
    volumes:
      - woodpecker-nix-store:/nix # named instead of just `- /nix`
</code></pre>
<p>And that worked beautifully. The build time went down from 48 minutes on the last run to 6 minutes on the most current one. Plus, it barely downloaded anything, thanks to using Crane for Rust in Nix and the general package caching. If I created a second store to keep the Git repositories and only did a <code>git pull</code> instead, I could shave it down even more.</p>
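<p>A sketch of that second store might look like this. This is hypothetical and untested; <code>woodpecker-git-cache</code> is a made-up volume name, and the clone step would replace the default one:</p>
<pre><code># .woodpecker.yaml — hypothetical repository cache (untested)
pipeline:
  clone:
    image: alpine/git
    commands:
      # Reuse the cached clone if it exists, otherwise create it,
      # then copy the working tree into the workspace.
      - if [ -d /cache/repo/.git ]; then git -C /cache/repo pull; else git clone https://src.mfgames.com/$DRONE_REPO.git /cache/repo; fi
      - cp -r /cache/repo/. .
    volumes:
      - woodpecker-git-cache:/cache # hypothetical named volume
</code></pre>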
<h2>Cleanup</h2>
<p>It did occur to me that if I left what I figured out as-is, sooner or later I would run out of space. (I was also reminded why I always create a dedicated <code>/var/lib/docker</code> partition for this.) So on my nightly job, I added this stanza to my <code>.woodpecker.yaml</code> file (which I haven't tested very well):</p>
<pre><code># .woodpecker.yaml
pipeline:
  clean:
    image: nixpkgs/nix-flakes
    commands:
      - nix-collect-garbage --delete-older-than 15d
    when:
      event: [cron]
      branch: main
    volumes:
      - woodpecker-nix-store:/nix
</code></pre>
<p>If I did it right, then it should keep everything cleaned up and tidy. I'll find out in a month or so if that is true.</p>
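<p>Worth noting: the named volume is separate from the host's own <code>/nix</code>, which can fill up on its own. For that side, NixOS has a built-in scheduled garbage collection option (this only cleans the host store, not the Docker volume):</p>
<pre><code># Host-side cleanup in configuration.nix (host store only)
{
  nix.gc = {
    automatic = true;
    dates = "weekly";
    options = "--delete-older-than 15d";
  };
}
</code></pre>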
<h2>Checkout</h2>
<p>One thing to be said about having so many project pipelines is that I hammer my system when I do a mass change (updating the locks for example). This meant I encountered a strange bug where Woodpecker's checkout plugin starts to fail somewhere around the tenth rapid-fire pipeline.</p>
<p>To get around it, I switched from using the built-in clone to do it manually:</p>
<pre><code># .woodpecker.yaml
skip_clone: true
pipeline:
  clone:
    image: alpine/git
    commands:
      - git clone https://fedran:$GITEA_TOKEN@src.mfgames.com/$DRONE_REPO.git . --depth 1
      - git checkout $DRONE_COMMIT
    secrets:
      - gitea_token
    when:
      event: [push, manual, cron]
      branch: main
</code></pre>
<p>Since I put it in, I haven't seen checkout failures due to being unable to get the username and password from the prompt. I could have also used <code>nixpkgs/nix-flakes</code> instead of <code>alpine/git</code>, but didn't. This works and it's fast.</p>
<h2>What's Next</h2>
<p>I really need to write. I got to a closure point: everything works, and I wrote up issues for the problems that aren't breaking anything, so I can stop obsessing and instead focus on the 10 kwd obligation I have by the end of the month. Which is to say… probably not going to happen, but I'm still going to try.</p>