BZT Thoughts
A collection of some thoughts about BZT, zero trust solutions, and zero trust in general.
Device-aided BZT
Trusted device start is an important factor in a zero trust architecture. Storing a secret key in a TPM and using that key to authenticate a device and encrypt traffic is a solid baseline to build on from the boot process onwards.
Such a process would look like storing a secret key in the TPM and using it with the IPsec daemon. IPsec policy drops non-IPsec traffic to and from devices as normal, and the client authenticates with strong auth when attempting to pass application traffic.
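As a sketch, creating and persisting such a device key with tpm2-tools might look like the following (hierarchy and handle choices are illustrative, not a hardened recipe):
# create a primary key under the owner hierarchy
tpm2_createprimary -C o -c primary.ctx
# create a device key under it, then load and persist it for the IPsec daemon
tpm2_create -C primary.ctx -u device.pub -r device.priv
tpm2_load -C primary.ctx -u device.pub -r device.priv -c device.ctx
tpm2_evictcontrol -C o -c device.ctx 0x81010001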
Arch EFI Luks
Setting up FDE with UKI (Unified Kernel Images) and Secure Boot on Arch Linux was slightly more confusing than I anticipated, so I wanted to knock out a quick how-to on actually building this the right way. The configuration seems right conceptually, but the wiki switching between tools like dracut and mkinitcpio made it hard to piece together. An opinionated Ansible playbook is hopefully coming soon.
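As a rough sketch of the mkinitcpio route (paths assume the ESP is mounted at /efi; check your mkinitcpio and sbctl versions before trusting this):
# /etc/mkinitcpio.d/linux.preset — have mkinitcpio emit a UKI
default_uki="/efi/EFI/Linux/arch-linux.efi"

# enroll your own Secure Boot keys, then sign the image
sbctl create-keys
sbctl enroll-keys -m    # -m keeps Microsoft's keys for option ROMs
sbctl sign -s /efi/EFI/Linux/arch-linux.efi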
Can Johnny Encrypt Now
To continue the legacy of the Why Johnny Can’t Encrypt research, and to generally check in on how Thunderbird is doing with its OpenPGP encryption implementation, I conducted an experiment to investigate. For those curious about how the lab was constructed, most of the code should be available on my GitHub. Please reach out with concerns or questions.
BZT Research Paper
For a class I took, I did some research into zero trust networking. As a result of thinking through the problem, I produced a PoC and a paper discussing a novel approach to zero trust networking. The code can be found at https://github.com/Peeanio/bzt.
Rust Esp32
For some time now, I have been meaning to get started with ESP32s in earnest, and wanted an excuse to get started with Rust on something that was meaningful and embedded. I took the chance, and given that the majority of my chips were ESP32 v3.0, I had some hoops to jump through to get things working properly. Below should outline the steps I needed to take to get going with Rust and this chip.
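Roughly, the hoops were these (the esp-rs tooling; exact steps differ between the Xtensa and RISC-V chips, so treat this as a sketch):
cargo install espup cargo-generate cargo-espflash
espup install                  # installs the Xtensa-enabled Rust toolchain
. $HOME/export-esp.sh          # put the toolchain into the environment
cargo generate esp-rs/esp-idf-template   # start from the template project
cargo espflash flash --monitor           # build, flash over serial, watch logs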
Conflict-Gaming
Running a wargame with just military assets feels incomplete in a landscape where intelligence is so real-time and complete, and where theatres in a non-shooting war involve so much espionage and influence through cyber campaigns that it seems important to model those facets. For the past few years, I’ve thought about different aspects of what would be needed for that complete picture: running the cyber game, controlling HUMINT sources, and directing influence ops. Now it feels like rolling all of that into one is possible. I really want to create a system where it is possible to model all kinds of assets and their capabilities, in order to run that kind of conflict. Pace will be slow, like all of these projects, but it’s something on my mind, as one could glean from my reading list from the past year.
ESP32
Dropped the ball on working on some things because my main focuses were taking a lot of time and effort. I did get some ESP32 units to start messing around with, which feel like a step up from the Arduinos I started on seven years ago. With nothing but a simple library and its example loop, I was able to get a Bluetooth speaker built and configured (with a DAC, of course). Here I was gearing up to learn C or Rust for this project and dive into some low-level work, but it was all done for me. The project itself did not mean enough to me to dig into properly, but I do plan on doing that for some sensor projects with other ESP32s.
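For a sense of how little code that was, a library in that vein (pschatzmann’s ESP32-A2DP, with an external I2S DAC on the default pins) amounts to this:
#include "BluetoothA2DPSink.h"   // pschatzmann/ESP32-A2DP Arduino library

BluetoothA2DPSink a2dp_sink;

void setup() {
  // advertise as a Bluetooth audio sink; audio goes out over I2S to the DAC
  a2dp_sink.start("ESP32 Speaker");
}

void loop() {}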
Cloud Providers
I’ve used Linode to host a mail server for a few years now. They are changing their pricing, so I thought I would try out DigitalOcean and see what it was like. I appreciated the AWS-like product offering, and spent some time writing Terraform to spin up a new email server there. I got it all spun up and, lo and behold, DigitalOcean now blocks port 25.
It is disappointing that it is getting so difficult to host an email server. Too many bad actors spoiled it for the rest of us by taking advantage of lax standards and controls to send spam. People don’t want to run their email servers themselves because of that legacy of malicious behaviour, so those of us who want to start out without static IPs that already have reputation are left wanting.
Migration
It has been a busy start to the year for me. I’ve been working on a good deal of internal projects, but I came to the conclusion that they weren’t doing much good where nobody could see them. The projects will slowly be moved to GitHub (because I am also on a public git server now), and my blog is now housed there too. People have figured out static websites to the point where it is almost silly not to manage them through providers.
Django
Recently, I wanted to try my hand at full stack web development. I thought that learning ReactJS and JavaScript would be what I needed to do; my first try was using AWS to stand up a serverless app, with the JS needed to help the user complete the application. That was cool and made sense, but then I was trying to figure out how to tie that to my own API without having the main logic show in the frontend. It turns out I should have taken some web dev classes at some point, because I’d never touched anything like this before and my web stack knowledge was pretty shaky. LAMP stack, no problem, but I had never even looked beyond /var/www/html. I know Python, so I wanted to give Django a try, and boy was I impressed by what I could do with the polls app.
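For a flavour of what impressed me, the tutorial’s polls app boils down to a model and a view, with Django generating the database plumbing:
# polls/models.py
from django.db import models

class Question(models.Model):
    question_text = models.CharField(max_length=200)
    pub_date = models.DateTimeField("date published")

# polls/views.py
from django.http import HttpResponse
from .models import Question

def index(request):
    latest = Question.objects.order_by("-pub_date")[:5]
    return HttpResponse(", ".join(q.question_text for q in latest))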
CloudFormation vs Terraform
At work, we use CloudFormation, dating back to when the org first onboarded into AWS. Since then, Terraform has come out, and I’ve used it in AWS and other provider settings. There are some interesting differences to note, but in general I stand behind Terraform.
CloudFormation tracks state better. It is easy to trust CF to tear down a stack COMPLETELY, but there are limitations. It is difficult to programmatically get resources, or sometimes even information, when they come from outside the stack. This comes up rarely as an issue for full-time CloudFormation users, but those who aren’t itch for a simple API call to fill in the gaps. Reaching outside the somewhat limited CloudFormation options is impossible. Terraform allows this, as it all sits in front of the standard SDK, but that also means the state can get away from a user who is not careful. Less so with the AWS provider (which is fully baked), but I’ve lost my state, or had resources not import correctly, meaning I no longer had an unequivocal definition of state.
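That “just give me an API call” itch is exactly what Terraform data sources scratch; a minimal sketch against the AWS provider (names illustrative):
# look up a VPC that was created entirely outside this state
data "aws_vpc" "legacy" {
  tags = {
    Name = "legacy-vpc"
  }
}

# then consume it as if it were ours
resource "aws_subnet" "app" {
  vpc_id     = data.aws_vpc.legacy.id
  cidr_block = "10.0.42.0/24"
}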
Serverless
December got away from me in terms of projects, but this time I actually seemed to learn, create, and accomplish something. I actually learned some HTML and JavaScript to create a serverless application on AWS. Using a Lambda function backed by DynamoDB, API Gateway responds to requests from the aforementioned HTML living in a public S3 bucket. The webpage is rough, but it’s not my forte or experience, so I hope to flesh it out more fully in the future. The experience was fun and definitely useful, so I hope I can build further stacks in similar ways. There are already ideas floating around in my head about what to do next, so stay tuned.
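The Lambda side of such a stack is pleasantly small; a minimal sketch of a handler (table and key names are made up for illustration):
import json
import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("guestbook")  # hypothetical table name

def handler(event, context):
    # API Gateway proxy integration: look the item up and return it as JSON
    result = table.get_item(Key={"id": event["pathParameters"]["id"]})
    return {"statusCode": 200, "body": json.dumps(result.get("Item", {}))}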
Load Overload
There’s nothing like a vacation to get away and reset some practices in order to get things back on track. I was able to take the time to get away from my work enough to get excited about what I do again. Problem is, I talk to too many people about too many different things, and get REALLY excited about them all. I am driven to be an expert about all of them because they all genuinely interest me and I love doing things, but there’s only so much time to do it all.
libvirt passthrough
Quick note on using a Windows gaming VM on a libvirt host. Using PCIe passthrough can be a pain with Nvidia drivers, requiring blacklisting of the drivers on the host. Huge pages and the CPU tweaks from the Looking Glass performance page did work, but when you are carving out RAM and doing CPU pinning, you start to wonder why you are even using a VM. It really becomes a hard resource divider rather than a dynamically allocated resource pool, because speed matters. Maybe it’s down to the age of the gear that I wasn’t really able to play the games I was trying. The software and hardware support will come into its own in the next few years regardless.
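The blacklisting dance, roughly (PCI IDs are examples; pull your card’s from lspci -nn):
# /etc/modprobe.d/vfio.conf — let vfio-pci claim the GPU before nvidia can
options vfio-pci ids=10de:1b81,10de:10f0
softdep nvidia pre: vfio-pci

# kernel command line: enable the IOMMU and reserve huge pages
intel_iommu=on iommu=pt hugepages=8192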
Terraform Rundown
I want to show off some Terraform code, and how and why decisions were made in writing the project. This is to demonstrate some features of Terraform, as well as how I’ve used it for some local infrastructure. Again, Infrastructure as Code is all about following patterns, one of which is relying on primitives to exist, like a secret or user management service. That being said, we can define primitives using IaC methods, which is an iterative pattern all of its own.
glauth and Keycloak
Finally, I have an application authenticating against Keycloak over OAuth2, which is in turn fed by LDAP. Single sign-on is more of a reality, but perhaps more important is having MFA in either Keycloak or glauth. The deployment wasn’t easy, as several of the elements weren’t plug and play.
glauth config:
[backend]
  datastore = "config"
  baseDN = "dc=bootingup,dc=net"
  nameformat = "cn"
  groupformat = "ou"

[[users]]
  name = "reader"
  uidnumber = 5001
  primarygroup = 5501
  passsha256 = ""
  mail = "[email protected]"
  [[users.capabilities]]
    action = "search"
    object = "*"

[[users]]
  name = "max"
  uidnumber = 5002
  primarygroup = 5502
  passsha256 = ""
  mail = "[email protected]"

[[groups]]
  name = "svcaccts"
  gidnumber = 5501

[[groups]]
  name = "users"
  gidnumber = 5502
LDAP federation in Keycloak:
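Roughly, the User Federation settings that line up with the config above (port and bind DN assume glauth defaults, so treat these as illustrative):
Edit mode:                READ_ONLY
Connection URL:           ldap://glauth:3893
Users DN:                 dc=bootingup,dc=net
Bind DN:                  cn=reader,ou=svcaccts,dc=bootingup,dc=net
Username LDAP attribute:  cn
RDN LDAP attribute:       cn
UUID LDAP attribute:      uidNumber
User object classes:      posixAccount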
Machine-Gunning Your Pets: When to give up eBay Builds
There is so much to consider when building infrastructure. There’s no way around that. In the pursuit of making things easier for ourselves, engineers have come up with so many good tools, methods, and patterns that provide excellent results and make things easy on us. So why is it so hard to actually stand them up sometimes? Why is there so much hand-wringing and tail-chasing to actually go out and do the thing that you set out to do? We keep going to eBay and buying some used iron to make our problem go away, right now, and end up wishing the landscape looked as pretty as our neighbours’. The answer is just to build something that you can maintain for as long as the infrastructure will exist.
Still Working
I took a bit of a step back to get some more perspective, but also because my environment seems to be struggling a bit. I have all my resources from different services in a single project, from the “primitives” to the multiple applications, and it just was not happy spinning things up. It appears that single-project IaC is a mammoth task; the other projects I’ve looked at seem to limit themselves to individual services. That seems like a decent approach: roll out each service one by one, allowing the operator to handle the dependency order. I wrote an Ansible script in the beginning to handle that (and pre-reqs), so I may split my Terraform code into primitives (or multiple pieces, if that needs to be split down further) and main application service stacks. Kubernetes seems to do this itself, but because I was rolling with Docker, it all seemed to blend together. So, my choice of tool led me into bad practices that I may have avoided if I knew what I was doing. Good to know; that’s what testing is for.
Reading Up
I stayed up late too many nights struggling over the best way to tackle some problems with IaC deployments that I could not solve myself or by reading other blogs; my brain was too focused on my small deployment, and blogs typically covered even smaller and very specific cases. Learning a quick and easy way to Terraform a container is one thing, but how does that process carry over to Terraforming a VM with Ansible managing the config? Do I use cloud-config to enroll in Ansible roles, or use Terraform actions to hit the configs? How do I go about dependencies that are crucial to the stack I am creating? I was scrabbling for best practices.
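One of the patterns I kept weighing, for reference: have Terraform hand the finished VM straight to Ansible via a provisioner. A sketch (the libvirt resource name and inventory handling are illustrative):
resource "null_resource" "configure" {
  # re-run config whenever the VM is replaced
  triggers = {
    vm_id = libvirt_domain.vm.id
  }

  provisioner "local-exec" {
    command = "ansible-playbook -i '${var.vm_ip},' site.yml"
  }
}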
IAC Epiphanies
Writing my services in IaC (Terraform and Ansible, in my case), as well as being able to for the most part start from a clean state, has been a hugely thought-provoking and ultimately rewarding exercise. I have embraced the infrastructure as code ethos, and would gladly die on that hill after seeing it work as it should. Getting away from pets, hand-crafted deployments, and hero efforts has given me the insights that I want to share.
Terraform Working
Progress on two fronts: I actually got Keycloak working behind HAProxy, something that has eluded me on and off for months, and I am using Terraform to throw up the Docker containers with everything that is needed. It’s nice to be able to run containers that are “tied” together without needing Compose, or going as far overboard as k8s. Some things that need work: moving away from strings in the resource definitions towards common vars and secret/password vaults. There is also a move towards DNS names and networks coming, but that’s the next step. But here’s the Terraform:
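(What follows is a cut-down sketch of the pattern rather than the full project: the kreuzwerker/docker provider, with names and images illustrative.)
resource "docker_network" "web" {
  name = "web"
}

resource "docker_container" "keycloak" {
  name  = "keycloak"
  image = "quay.io/keycloak/keycloak:latest"
  networks_advanced {
    name = docker_network.web.name
  }
}

resource "docker_container" "haproxy" {
  name  = "haproxy"
  image = "haproxy:latest"
  ports {
    internal = 443
    external = 443
  }
  networks_advanced {
    name = docker_network.web.name
  }
}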
Terraform Progress
Terraform has been going well; it’s been nice to use the tool to do what would otherwise take some serious scripting. The one thing I’m concerned about, especially with the libvirt provider, is how Terraform remediates minor drift in the infrastructure. That begs the question: should working around that be a core part of the deployment strategy? Using a tool as part of the process, versus using the tool and working within its confines, is something for which it has always been difficult to determine the best practice.
Roadmap: DevOps
I’ve really been feeling like stepping up my DevSecOps skills after getting comfortable working with Chef and Ansible to run configs. I found roadmap.sh, and found myself most of the way down the roadmap for DevOps. I’m halfway through the Infrastructure as Code section, still needing to do infrastructure provisioning. I’ve written my own provisioning, but perhaps using Terraform can provide value, especially as my code was libvirt-only. Being more agile with regard to deployment zone makes the code portable.
Roadmap: GoLang
Writing code has not been something I found I could just jump into and do for hours, but it has been enormously satisfying, personally and professionally. My public git works show that I have been able to write some small projects, but I have never felt comfortable enough to write anything more than some quick utilities. To me, what feels like a complete project are things like persistent daemons or backends that interface with r/w databases and the like, and learning how to do that could be yet another tool in the belt, or possibly a career change.
DevOps
It’s been a while; there was a big changeover as I prepared to leave one job and start another. Starting at this new job, I got exposed to Chef, and had the time to really bash at it until I felt comfortable with it. It is a really interesting tool compared to Ansible, as it feels a bit easier to use as a configuration management tool only, without taking advantage of Ansible’s ability to reach into anything and do things in almost any manner. Looking back, I see how some of my Ansible playbooks were more like scripts than configuration management. There’s a place for both, and the two tools are not mutually exclusive.
My Own API
Work has progressed on my own API: just messing with some data with a spy theme. I’ve been able to do GET, PUT, POST, and DELETE, which has been cool. I’ve got some more avenues to explore, along with just general tidying up: call and run a script via the API for value setting, then general edges to clean up.
After that, I will put a web client in front of the API to show off the data and how to work with it. Then I will have been a “full stack dev” and can see what the draw is. Really exciting stuff!
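For anyone wanting the flavour of it, the four verbs in action (hypothetical endpoints in the spy theme):
curl -X POST   -H 'Content-Type: application/json' \
     -d '{"codename": "CONDOR"}' http://localhost:8000/agents
curl -X GET    http://localhost:8000/agents/CONDOR
curl -X PUT    -H 'Content-Type: application/json' \
     -d '{"status": "burned"}' http://localhost:8000/agents/CONDOR
curl -X DELETE http://localhost:8000/agents/CONDOR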
Work Projects
My work projects have mostly scratched my dev itch lately, but with a good one coming to an end, it’s time to share. I’ve been enjoying writing a switch management project in Ansible, as it’s been great to have a hack at APIs, network gear, and plain old optimisation logic. The APIs have been loads of fun and really interesting to get into, after being on my to-do list for ages, and I now have a good feel for JSON structs. This project has been mainly object oriented (if I have the terminology right), in that we have an object (the switch) and are making tasks based on what is where. At home, I want to write some API servers for my own use case to get a good feel and slide yet further into dev land.
Where I've Been
It’s been a while since I was able to update here, for several reasons. First, I was accepted for a Masters at UC Berkeley, and was trying to make that work. Second, I made no progress with Keycloak behind HAProxy. Finally, there’s been no other code to show you, as it’s all been on work git servers.
Starting with UC. I applied for the MICS, or Master of Information and Cybersecurity. The school charges a princely sum of $80k USD for the course, and with everything I tried, that number never got comfortable. I’m scrapping that plan, moving forward with self-taught hard skills (Golang, CEH), and if I want a masters, I can go back to WGU.
Blog
When I started this blog, I wasn’t sure exactly what I wanted it to be. Some of my first posts were simply papers I had written for school that I thought were cool. There have been few technically explicit posts (with configs, tutorials, etc.) based on things I had done myself, as most of the posts are just summaries of things I have done or am working on. I suppose that’s because I don’t have many peers who would be interested, and I am not involved in any tech groups.
Tired of Docker?
The Docker deployment I am using is looking more promising, especially for web front ends. The Let’s Encrypt wildcard is easy to use, so using the single wildcard with HAProxy makes for a compelling single moving part. I suppose a clustered deployment would be useful, to prevent downtime with the single load balancer, but that’s okay at my size.
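The single-moving-part idea in config form, roughly (hostnames and addresses illustrative):
frontend https-in
    bind *:443 ssl crt /etc/haproxy/certs/wildcard.pem
    use_backend blog if { hdr(host) -i blog.bootingup.net }
    default_backend blog

backend blog
    server blog1 10.0.10.11:80 check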
Next, I want to get some NIPS or perhaps a WAF in place behind the SSL balancer, to keep that honest, before opening up the firewall. As I’m typing that, I’m also thinking of adding more firewall rules on the Docker host to limit what it can do if compromised, but that’s another kettle of fish.
More Docker
At work, there’s a push towards using K8s. I’ve set up a test K8s cluster and run some Docker, but I’m no expert. As I mess with all that tech, I’m starting to get behind it as a concept and want to use it in a meaningful way, and get away from “my apt packages and Debian servers work fine, thanks.”
Some of the services I run at home are now in containers. I’ve set up an HAProxy server to act as a load balancer entry point, complete with SSL. This is funky, as in the backend network everything is exposed (and some Docker containers expect the security to be on the host, implicitly trusting traffic), but it also means I need a wildcard cert. I will need to read up on Let’s Encrypt to see how that is these days.
Hashing Machines
My imagination was sparked to run a GPU-accelerated VM for hash cracking with hashcat. Having run it with CPUs before, I knew how to do that part, but I needed to get a GPU involved. I did this on my Fedora desktop, which had no problems with the drivers. But when I went to use a dedicated VM with PCI passthrough (something else I had just started doing with a fiber card for my router), I got stuck. I figured out how to do it, so I have a quick write-up to share.
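Once the passthrough works, verifying that hashcat actually sees the card is quick:
hashcat -I               # list the compute devices hashcat can see
hashcat -b -m 1000       # benchmark NTLM on whatever it found
hashcat -m 1000 -a 0 hashes.txt rockyou.txt   # then a dictionary attack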
Goodbye Opnsense
I went through a LOT of changes lately on my router system. I wanted to create a VM for it and pass through a PCIe card, but combined with a fan failure, I only just got it finished. During that time, I had to buy new 10Gb fiber cards (no drivers for cheap old ones), then had to get a new CPU for IOMMU groups, and then a new fan. I fought with two clean OPNsense installs, trying to get VLAN tagging working on a Mikrotik SFP+ port, but it was not working correctly. I decided to try pfSense instead, thinking maybe the kernel had some different modules, and while it didn’t work initially, I did get the second SFP+ port working on the Mikrotik, so maybe OPNsense would have worked after all. By then, though, I was too far into my build and had to get it all working, so here I am on pfSense.
Ethical Starts
Got serious about the CEH. Got a No Starch Press ethical hacking book which I am now working through, as I want to feel confident in hard skills in addition to the theory of the CEH. Setting up my “weapons lab” VLAN proved more difficult than it needed to be, with VLAN tagging on Linux bridges and Mikrotiks. For anyone who reads this: I had to set the guests in KVM to use macvtap (which I never use, as I want the host to talk to the guests) instead of bridge mode. Likely something to do with the MAC addresses, but I didn’t read too far into it once I saw the right traffic.
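The change amounts to swapping the guest’s interface definition over to libvirt’s “direct” (macvtap) type; the source device name here is illustrative:
<interface type='direct'>
  <source dev='eth0' mode='bridge'/>
  <model type='virtio'/>
</interface>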
RouterOS for Switches
In my home lab, I have had Mikrotik gear for a long time. It’s cheap, very adaptable, and could almost be confused for Linux. My CRS226 used to serve as my main router, but after moving to OPNsense, it’s been relegated to switch duties. As a switch, it’s something that takes getting used to for people accustomed to Cisco-like gear.
VLAN tagging is difficult to get at first, as the nomenclature is very different, using ingress and egress VLAN tags instead of native VLANs and trunks. Ports are also configured in groups of the same config, instead of defining config per port. It’s just so different from Linux and Cisco that it’s a little unappealing. I would love to get some Linux switches, but the open firmware and whitebox world is very expensive second hand, and there isn’t a quick and easy way to start. The projects seem to have totally changed hands along with what is in vogue, but hopefully we’ll see that change soon. If I’m wrong, please let me know!
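For a taste of the difference, a trunk-plus-access-port setup with RouterOS bridge VLAN filtering looks something like this (port names illustrative):
/interface bridge set bridge1 vlan-filtering=yes
/interface bridge vlan add bridge=bridge1 tagged=bridge1,sfp-sfpplus1 untagged=ether2 vlan-ids=20
/interface bridge port set [find interface=ether2] pvid=20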
Oauth Progress
Made progress with oauth2-proxy by using Okta instead of Keycloak, which was likely a partial source of much of the trouble, although I will backport some of my config in order to see what the issue is.
Some observations from using this, though: what to do with the headers or cookie for legacy apps? Should the cookie be made as minimal as possible, with the headers stripped as much as possible, or should some work be done to integrate with whatever authentication method the app uses? SSO is the end goal, so it is completely desirable to get that working throughout, but that means learning all about web auth. Oh well, that’s something to add to the CV!
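For the header question, these are the oauth2-proxy knobs in play (issuer URL illustrative):
--provider=oidc
--oidc-issuer-url=https://example.okta.com/oauth2/default
--cookie-secret=<32 random bytes, base64>
--cookie-secure=true
--set-xauthrequest=true    # expose X-Auth-Request-* headers to the app
--pass-access-token=false  # strip what a legacy app doesn't need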
Wargame Militias
Working on the wargame to use for the war between Cascadia and the IRC. I want to use Fistful of TOWs, but figure I should try some other system before that, to a) not need to buy a $75 book and b) not start a real war in the world immediately. So for now, we are going to simulate skirmishes that take place between militias on the border, using the AK47 Republic ruleset.
Fighting OAUTH
Spent the whole day today working on getting a working solution for OAuth2 with Keycloak. Started with trying to get it going with oauth2-proxy, which I got no results from. Both portions were in Docker containers, but I just could not get what seemed to be the cookies working fully. Then, with vouch-proxy, I got stuck in a redirection loop with a JWT error.
Long story short, I have a few options: ask for help, or move on from this idea. I view getting something like this working as a huge win at home and at work, as SSO is something that organisations just need now. There are few things that feel boilerplate and drop-in enough to get going easily, which is a shame. Although maybe Keycloak is just worse than LemonLDAP-NG.
Wargames
Been interested in getting some wargames going, but instead of focusing on a true historical confrontation or unique imaginations, I went and wrote a story for a world where any sort of small scale confrontation could take place.
In 1941, residents of southern Oregon and northern California organised to form their own state of Jefferson, their capital in Yreka. After a brief struggle, federal government leadership granted their independence in order to focus on something more important: Pearl Harbour and the American entrance to the Second World War.
Holidays
The holidays have been and gone. I really enjoyed having some downtime to spend away from work and other high-stress tasks, which helped give me a reset to take on the new year. Some modeling tasks got done, to a fairly decent degree, and I feel myself getting closer and closer to something that is fun there. More Python also got done: the start of an Open Library API tool that can pull info on books by ISBN, as I want to make pretty HTML pages from the ISBN to put on a “trophy” bookshelf display. Need to get going on the HTML part.
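The core of that tool is a single call to Open Library’s public ISBN endpoint; a minimal sketch:
import json
import urllib.request

def book_by_isbn(isbn: str) -> dict:
    # Open Library's ISBN endpoint needs no API key
    url = f"https://openlibrary.org/isbn/{isbn}.json"
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)

print(book_by_isbn("9780261103573").get("title"))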
Server Rebuild
Rebuilt my main server, which currently has a 2x2 striped mirror ZFS pool (8TB usable), KVM, LXC, and a few native services. I needed more space (so I am adding another 4TB), which meant a case change (from an old Supermicro workstation to a cheap 3U rackmount). Unfortunately, this new case didn’t work out: I needed more SATA ports, so I needed an HBA, so I needed more PCIe slots, so I needed to change out to a full ATX board, which doesn’t fit with the HDD bays mounted. Now I need to get a 4U chassis, move the server to that, and use the 3U for a container host to serve replicas of the ZFS pool.
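For reference, the layout described and the planned expansion look like this in zpool terms (device names illustrative):
# the existing pool: two mirrored pairs striped together
zpool create tank mirror sda sdb mirror sdc sdd
# growing it means adding another whole mirrored pair
zpool add tank mirror sde sdf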
Progress
In lieu of any real progress on my prior projects, I am just getting some more thoughts down. The goal is still to proxy from an HAProxy server on a VPS into the local network, where there will be a containerised and segmented DMZ for the servers to show off. These will all be behind some form of MFA-based SAML login server (thinking Keycloak, or one of a few SAML servers) to make sure people are who they say they are.
Identity Woes
Been looking at servers to use as identity backends for a build-out of my infrastructure in a clear and manageable way. I want to use centralised identities, and in my head I had SAML, but the servers I use do not support it natively. I could look into locking down the reverse proxy with SAML and just use Keycloak, as an easy server to set up, but going with a different tool such as Gluu or LemonLDAP-NG would give me multiple backends to work with. That raises the question: what’s really the point of setting up security with MFA if I end up using LDAP or RADIUS and disregard the MFA to begin with?
Projects
Been struggling again to keep the projects going, but I started a page of notes regarding projects and some dev ideas. I intend to build up an Okta-like universal directory, with LDAP, SAML, etc., and plug infra into it. Some who have been following may have noticed the codenames.py application, which is intended to be a part of a game I want to write about espionage.
At work, we have been going to lunch often, so I want to write a program that suggests where to go based on what people like/want, balanced against things that are important to them, such as time, distance, price, etc. Could be fun, would likely try it in Go, as the coworkers like that.
Blackarch
I have been using BlackArch Linux for some pentesting and live-USB work. It is an Arch-based distro with its own special repo for the pentesting tools. So far, it has worked out for me, really as much as I felt Kali did. It uses Fluxbox as the WM instead of XFCE like Kali live does, which is a lighter footprint, but not as easy for getting wireless up when running live. I have used wpa_supplicant before, but had issues with DHCP on BlackArch for some reason.
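For anyone in the same live-USB spot, wireless by hand goes roughly like this:
wpa_passphrase 'MySSID' 'passphrase' > /etc/wpa_supplicant.conf
wpa_supplicant -B -i wlan0 -c /etc/wpa_supplicant.conf
dhcpcd wlan0   # the step that misbehaved for me; dhclient is an alternative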
HackTheBox Day 2
I spent the day yesterday on hackthebox.eu. I breezed through the Proving Grounds sections, then got dropped in with the released labs. I’m in for some learning. I was on the right track in the one lab I did complete, but was missing some pieces to complete the pwn by myself. Looking forward to my first no-cheating attack.
I also expect to learn some skills from the challenges. I like the OSINT ones, or the ZIP-file-based ones where you have to do the analysis and get it yourself. Stay posted as I start sharpening skills there and build up a decent lab. I rebuilt my switching and intend to build up a DMZ to lock down whatever bad things I put there.
gameutils Progress
I spent a lot of time, perhaps an hour a day, working on my gameutils Frostgrave script. It is currently in a bit of a borked state, as I left something unfinished for too long while I was reading a book, and I need to put in the final stretch. I do want to finish it, as I want to get a project going for the Level1Forums Devember challenge. I have some ideas, some more grandiose than others.
Virt-Builder at Work
At work, we use a very basic KVM stack. It’s bog-standard KVM, with virt-manager to mess with VMs as needed. I would not recommend this, but it’s legacy and it’s there. VPS providers like AWS and Linode have some pretty great scripts to roll out VMs based on a distro and a “t-shirt size” of small, medium, or large, essentially. We wanted to recreate this process without using any special tools like OpenStack or Proxmox, so we ended up doing it with virt-builder and Ansible.
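The heart of the process is about two commands; a sketch of a “medium” (template availability depends on your virt-builder version, and names and sizes are illustrative):
virt-builder debian-11 \
    --size 40G \
    --hostname web01 \
    --root-password password:changeme \
    -o /var/lib/libvirt/images/web01.qcow2

virt-install --name web01 --memory 4096 --vcpus 2 \
    --disk /var/lib/libvirt/images/web01.qcow2 \
    --import --os-variant debian11 --noautoconsole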
Cyberattack
At work last week, I saw my first real cyber attack. It involved a compromised user account and our VPN, and I saw how woefully unprepared our org was for such an event. The IT team did not really know how to respond, and we had so little in place in terms of safeguards or even watchdogs. I’m glad that I have embraced security as my profession, and am able to help people in this world of cyber crime. It has helped me make my choice on career and certs for the remainder of the year.
More gameutils
I started moving a script I was writing from Go to Python, and I am now seeing the different tiers of closeness to the machine between Go, Python, and sh. In sh, I am used to using variables wherever, including stringing them together to call things by “string” + $var. That does not work in Go or Python. But with Python I do not need to worry about reflection in lists/arrays, as I can call the contents of the arrays by index number.
Go Scripting
I tried a scripting project in Go, as I had heard that the language could replace Python in the scripting world. I get that it can be compiled and run in one step, but I really struggled writing scripts as I knew them in Go. Apparently, I had taken reflection and dynamic variable assignment for granted while writing in shell or Python, so dynamically populating lists or arrays just turned into a mind-bending experience. I am probably close to getting it, but can’t see myself following through with the project I had in mind in Go.
Next Certs
With a new move fresh under my belt and a sense of relief in my new locale and life, it may be time to approach a new certification. I have some ideas of paths to follow, but as I climb the cert ladder, they all start to get pricey, which is daunting while self-funding. Still cheaper than the college I self-funded, however.
I’m looking at going up to CCNP as my next network cert; I don’t know if CompTIA does anything higher than Network+, and anything I learn there is applicable to most other vendors. Going with another bottom-tier cert seems silly, and I like learning about complex networks that I have not been able to touch myself.
Books Git
Brief announcement: I have added a git repo at https://git.bootingup.net that has a record of my books read and in progress. It only starts from roughly the beginning of August, so despite the many books I have read before, it is a short list so far. I guess now I have an incentive to re-read some things!
OpnSense Build
For the past few years I have been using Mikrotik routers, a Hex then a CRS, but I have built and installed Opnsense on an amd64 Pentium board. Using VLANs and a switch behind it, I have started to get more serious about my home network.
Essentially, it’s time in my mind to get some more practice in with security in WAN settings, but also to separate out my traffic so I am not getting any bleedover that could be dangerous. Currently, nothing is really open, but the firewall is easier on OPNsense than Mikrotik, and I can load the firewall box (which is beefy for a firewall) up with some other things like IPS and traffic sniffing.
Terrain Building
I recently got back into terrain building for miniatures and miniature wargaming. While I started playing 40k back in middle school, this time around I find myself more drawn to historicals. In that vein, I found a copy of Wargames Illustrated with a “How To” guide on building terrain, and I have gotten pretty into it again, although I think to a better standard than my previous terrain.
The Next Step: Virt Builder
I have been using Debian/Ubuntu preseed files for a while now, automating the installation of machines as defined by a preseeded config file, which works great (don’t get me started on how much I hate 20.04, though). While discussing the merits of images versus automated installation, I looked at and liked the libguestfs suite of tools, notably the virt-builder tool for standing up a VM in less time.
It was to my surprise that virt-builder builds its images from preseed and kickstart files, then just anonymises those disks to be used as templates. That was exactly what I wanted: a means of building images that works great and is easy to use, and a way to turn those into templates that are virtually on tap.
Test Environment
When gearing up to make changes on a production network, there’s almost always something that would be great to just try first; it could be because one is unsure of the exact behaviour, or there’s some ambiguity over the best approach. Keeping those tests away from a network that matters is important, and while having a separate VLAN is a pretty good approach, one of my favourites is to build a virtualisation host and use it as the router for a self-contained network segment.
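In libvirt terms, the self-contained segment is just a NAT network that the host routes for (addresses illustrative); define it with virsh net-define and attach only test guests to it:
<network>
  <name>testlab</name>
  <forward mode='nat'/>
  <ip address='192.168.100.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.100.10' end='192.168.100.100'/>
    </dhcp>
  </ip>
</network>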
PXE Booting
One of my latest projects has been to get PXE booting auto installation working in a more dynamic way. This was sparked when reading the iPXE website, where they mention dynamic booting, using a webserver with PHP to select the install files based off hardware information. I don’t know PHP, but I do know how to tweak a config file, so I set about trying to do this using just the iPXE stack.
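The config-file-only version of the idea leans on iPXE’s built-in SMBIOS variables; a sketch (URLs illustrative):
#!ipxe
# try a per-machine script first, fall back to a default installer
chain http://boot.lan/by-serial/${serial}.ipxe || goto default

:default
kernel http://boot.lan/debian/linux auto=true url=http://boot.lan/preseed.cfg
initrd http://boot.lan/debian/initrd.gz
boot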
Preseeding and Autoinstall
I have been using a script on my git site (git.bootingup.net) for headless VM installations, and it has been working so well for me that I missed the functionality at work, where VM installs have been manual. At home I use Debian, at work Ubuntu, so I wanted to port my preseed configs over. This went smoothly (after I worked out that Ubuntu shipped their netinstall kernel with different permissions than Debian) for 18.04 and older, but 20.04 moved away from the preseed architecture towards a new system called autoinstall.
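The new format is cloud-init user-data with an autoinstall section; the minimal shape looks like this (values illustrative):
#cloud-config
autoinstall:
  version: 1
  identity:
    hostname: vm01
    username: admin
    password: "$6$exDY1mhS4KUYCE/2"   # crypted hash, illustrative
  ssh:
    install-server: true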
Going Full Circle
One year ago, I did some testing to get to grips with containers versus VMs. I spun some things up, worked on some composing and automation, and came to conclusions. I realised what I wanted, and LXC was it; but somehow, I kept working with KVM VMs. Coming back to it once again (but with a sharper eye), I looked at Docker, LXC, and KVM, and for what I want, LXC wins again. Now I need to actually use it ;)
Open Source Organisation Usage
Open source applications are great, and they are truly an equaliser when it comes to productivity, efficiency, and dependability. It is strange, however, to see the landscape that exists for the usage of open source applications: they are used more, generally, by small organisations and very large organisations, and I think that is worth commenting on, in some degree, if only for my own sake.
I’ve primarily worked in small organisations (my own infrastructure being one of them), and open source applications are appealing there. There’s no cost but time, which is the primary currency that small organisations have; there’s no capital or they would be larger, and getting the best bang for one’s buck only gets you so far. Small shops are run by small teams, so they are much more flexible and able not only to keep their server cattle as pets, but to configure little, as the use cases are usually few. As the organisation starts to grow, IT teams start to struggle; their focus gets split or skill pool diluted, there’s a bit more capital that can provide quick wins to band-aid over the problem of cattle as pets, and the IT team is never quite on top of things enough to get everything they need done. Big organisations have time and money, and can usually wait for something to be built for exactly what they need, or can pay someone to get it done. They have their own expertise in house, and can demystify any aspect of a problem if they need to, or even extend something if it helps them. These organisations can refactor and shift and be more agile in the specifics because of the abstraction that comes from their size, where a smaller one would be more direct and could become more stuck.
Privacy Focused Organisations
The recent trend of organisations moving towards SaaS as their desired goal for all infrastructure, from identity provision to WiFi access, is a troubling development for those who become associated through work, education, or other means, but who must submit to those terms and policies with no recourse. The FSF has been campaigning against this, primarily against video conferencing such as Zoom, but they offer few alternatives. Most if not all of these things are built on open source projects of some form or another, but these organisations see the vendor as a quick and safe way to a) deploy a ready-for-use application, and b) take care of all their needs and security compliance obligations. I can understand the second, as security should keep everyone up at night, but these mid-to-small-sized organisations are unwilling to spend the time to get the right self-hosted application for their organisation taken care of internally before shopping around.
Infrastructure: A question of scale
It’s easy to get caught up in the intricacies of servers and deployment, overbuilding for a given problem, but it is just as easy to not plan enough and be left with something that is unworkable and unsustainable. In general, it is better to overbuild than under, but not when building the infrastructure creates paralysis that prevents things from being done. Building a whole automated provisioning stack just to get a single web page up is too much, but where does one draw the line?
Ryzen Woes
First-gen Ryzen seems like great value for money, even when compared to the Xeon builds that have been floating around the market for a few years now. I upgraded my storage server with a Ryzen 1600, but then had CPU lockup issues. These seemed to only occur on Linux while running ZFS; that turned out to be a red herring, as I found the issue (and my system has been stable for a few days since) to do with C-state handling.
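For anyone hitting the same thing, the usual mitigation is to keep the CPU out of its deepest idle state, either in firmware (“Power Supply Idle Control” set to typical current) or on the kernel command line; I’d treat the exact value as board-specific:
processor.max_cstate=5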
Enterprise and Fun
I noticed lately how sometimes at work the pursuit of stability, reliability, and repeatability took some of the fun out of servers and networking. There’s a lot of reassurance and calm that comes from these things, but that doesn’t make it fun all of the time. Having a very fast time to production and trying several different solutions is a bit of a hacker’s thrill, but can get lost the larger the enterprise. This should not be a surprise to anyone, but something that they need to consider when moving up a bracket.
World of Warcraft Servers
For better or for worse, I have always liked World of Warcraft as a game, especially the 1.12 version, known as “vanilla.” I found the CMaNGOS project after playing on private servers for years, and was excited by what I saw. CMaNGOS is an emulator for running a WoW server privately, requiring basically just a SQL database. I fired a server up with little fuss and saw I had found a new potential long-term project.
Community Servers
Recently, I created a new server for my small online community of friends: a small file storage server, only meant to serve temporary files over SSH. During the installation, I really got to tailor the setup to the needs of the community: putting user restrictions in, building tooling good for future developments, and doing some development for exact needs. While the systems I have been creating for this community have been growing over time, I am starting to reach the size where I would like the full infrastructure I am used to. Version control, monitoring and logging, centralised user management, automated deployments, and infrastructure as code, to name what immediately springs to mind, are some of the tools of the trade I would like to have in place for comfortable management. But these things take time to build up, money to maintain, and ultimately I would end up with more work long-term.
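The user restrictions came down to a few sshd_config lines; a sketch (group and path names illustrative):
Match Group filedrop
    ChrootDirectory /srv/filedrop/%u
    ForceCommand internal-sftp
    AllowTcpForwarding no
    X11Forwarding no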
Other Things
It has been a long time since I have updated this blog, and some of you may have noticed a lack of posts for a few months. I deleted my GitHub account, which included the hosting for this page, and have only just gotten around to hosting the site myself. For anyone curious, my software projects can now also be found at git.bootingup.net.
There has been much else that I have been working on. I have a few new certifications I could add to the about-me page, most of my networking/computer stacks have remained the same (although I don’t want it to stay that way for too long), and I have been fixing up some airsoft guns. For anyone considering it, although it is a stunning-looking replica, the ICS L85 is not a fun gun to work on or fix up. I’m wanting to sell another of my weapons, the Knight’s Armament Stoner LMG, in order to buy a Classic Army M249 Gen 1, as I love the Minimi look. There will be nothing stopping me from looking straight out of Bravo Two Zero.
Working From Home
I, like many other people, have been working from home more recently, and likely running into the same problems and benefits that others are.
I’m a fan of working from home for the most part; I definitely feel much more able to get some tasks done. Being able to crank some music while hacking away has usually been something I could only do while working on my own projects, but now I can whenever I like! Additionally, I feel much more able to pick and choose what and when to do any given project, without the pressures of someone nearby checking in too closely.
Inspiration to Hack
Sometimes, one gets the idea of a way to solve a problem through a series of logic that seeks to address all of the branching paths one can anticipate and hopes to deal with successfully. The most recent series all involve the Warhammer 40,000 tabletop game, and ways to streamline the experience of a player. Such ideas have struck me as ways to automatically roll the dice needed to resolve a combat or shooting step, or to total all of the points necessary to create an “army” to play with. Things such as an easy utility to search and view the stats and rules of a unit or character are equally useful, and require just as much attention.
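As a toy sketch of the dice-rolling idea (the stat name is 40k’s, the rest is invented):
import random

def shooting_step(attacks: int, ballistic_skill: int) -> int:
    """Roll one d6 per attack; a roll of BS or better is a hit."""
    return sum(random.randint(1, 6) >= ballistic_skill for _ in range(attacks))

print(shooting_step(attacks=10, ballistic_skill=3), "hits")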
My Computer History
Computers haven’t always been my interest. I really did not have much of an interest in them until quite late, or more accurately, in computing as a field in and of itself. I was able to make computers do what I needed, and interested in doing so, and that was enough, until I got my hands on Linux. This is that story.
Growing up, my mother was proficient in using computers to get work done, but my father was very much against using technology when possible, so we had a black-box Dell that was little used until they separated. It was likely a Windows 95 machine, but that is really all I can remember, as we never really used it. We put a graphics card in it to play Lego Star Wars: The Original Trilogy, the first computer game I properly played. Shortly thereafter, I began playing Runescape, a game that sparked my love for MMOs. We were given a Windows Vista laptop, a black Compaq, that we played Runescape and Sid Meier’s Pirates! on, as well as my first simulator, a Battle of Britain game I could not figure out how to play. Next we had another Dell desktop, running XP this time, that I played a good deal of Mount and Blade on, modding and breaking the game often. Most of the time, I would use one of several Mac computers (Intel iMacs and a white Macbook, but I am not sure beyond that) to play Minecraft and World of Warcraft. At the time, I really did think Macs were better, even though the performance was probably not optimal for anything I was doing, and I could play few of the games I wanted. I spent more time on a PS2 or PS3 for that, playing many of the mainstream games I found appealing. The computers did what I wanted, but they were never the interest in themselves.
Automatic Debian Installs
When embracing “devops”-style workflows as a sysadmin, one of the most important things is to reduce the time to get tasks done; this is why we use Ansible, Docker, and all manner of other tools. Creating VMs is not a quick practice most of the time. One of the common ways to get around this is to have a golden image and clone it for new VMs, but I don’t find that cloning is the best practice, as images may need to be changed or adapted to fit other workflows. It also does not help as much for physical installs. I found that the best way is to use Debian preseed configuration to do this all for me, automatically, in an extensible manner.
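A few representative lines of such a preseed.cfg (values illustrative):
d-i debian-installer/locale string en_US.UTF-8
d-i netcfg/get_hostname string unassigned
d-i partman-auto/method string lvm
d-i pkgsel/include string openssh-server sudo
d-i passwd/root-password-crypted password $6$illustrativehash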
Digital Archiving: Camera to PDF
I wanted to get a system up and running for scanning my own books and documents into digital, primarily PDF, formats, without being destructive or expensive, and I have managed to take care of that today. I set a camera up on a tripod, rested a book on a cardboard rest at a 45-degree angle, and captured every page behind a sheet of glass as I turned the pages. Once I took the pictures of the individual pages, I loaded them onto my computer and cleaned the pages up with ScanTailor, a program that can process the files to clean up lighting, orientation, margins, etc. I found that this program worked pretty well with the defaults, and dumped every page into .tif files. The images could then be converted into individual PDFs with tiff2pdf, another free tool that worked well with a script to convert it all over. Lastly, I had to combine all of the individual PDF pages into the finished document; this was the hardest step for me. I had taken pictures of every page on one side, then the other, so my pages were out of order. I then had to use pdfunite to alternate the page numbers, which I did manually. For anything longer than what I did, I would have worked on capturing the pages sequentially, or worked out a good way to script it.
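The conversion steps script up nicely; the sort of loop I mean (paths illustrative):
# one PDF per cleaned-up page out of scantailor
for tif in out/*.tif; do
    tiff2pdf -o "${tif%.tif}.pdf" "$tif"
done
# pdfunite takes pages in argument order, so file names must sort correctly
pdfunite out/*.pdf book.pdf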
RTFM Culture
RTFM, or Read The Fine Manual, is a common saying around the Linux community, among others. Some distros, such as Arch and Gentoo, take this to almost be their slogan (although Arch has their own. I am using Arch, btw), while other cultures have emerged around other distributions not quite so brow-beatingly stringent. Evaluating the benefits of one way or the other is not the purpose of this prose; instead I just want to talk briefly about RTFM.
Arma 3 Server
I set up an Arma server for my friends during the downtime of social isolation, which was fun and fairly easy. I did it on a Linux host, and then on a Linux VPS, and had some good and bad experiences. On the bad side, Arma is primarily a Windows game, and the anti-cheat, BattlEye, does not work properly even in Wine, so it has to be disabled for me to play on my Linux rigs. That means I could not use the RCON BattlEye features, which I wanted for some remote management and monitoring. Installing mods was a bit of a pain, but doable once I found some sites for downloading mods from the Steam Workshop, as steamcmd does not have that capability (wishlist!). But there is a Linux server binary, and it works fine, is stable, and uses minimal resources. The config file is pretty easy to work with, and although the documentation isn’t great, it isn’t insurmountable. Always happy to answer questions for anyone wanting to run servers on Linux, based on my experiences!
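Getting the server itself down is the easy part (install directory illustrative; 233780 is the dedicated server’s app ID):
steamcmd +force_install_dir /srv/arma3 +login anonymous \
         +app_update 233780 validate +quit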
Matrix
I have a pretty small working group of friends that I talk to, and the primary friends that I game with were using Discord. I had been using third party clients, in an effort to fix the reliability problems I was having on Linux (which didn’t help), and to keep out of what one may call “botnet.” I wanted off of that, and to move to something more usable and extensible. So I went ahead and set up a Matrix Synapse homeserver.
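The bootstrap itself is pleasantly small once the package is installed (server name illustrative):
python -m synapse.app.homeserver \
    --server-name example.com \
    --config-path homeserver.yaml \
    --generate-config \
    --report-stats=no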
Built vs Bought
As I transition to a new job, and can see how a bigger organisation handles its infrastructure, workflow, and software, I have an opportunity to reflect on a question which has followed me through my previous positions, and is really a life-defining philosophical question. The debate of building something entirely oneself versus buying parts or an entire solution is not just relevant to IT admins, and is increasingly relevant to everything as more and more things become commoditised.
Thinkpad X270
I recently got a Thinkpad X270, used, and have been using it as my daily for a few weeks. I have some impressions that I wanted to share, which may be helpful. First, I have the 1080p IPS panel, and it is a lovely laptop screen. A huge improvement over the 2010-2012 era 768p screens I was used to, and I am very happy with the extra screen real estate, viewing angles, and quality. The keyboard is a pretty standard new-era keyboard, with the TrackPoint recessed more than I would prefer. The new-type fingerprint scanner has not yet had its drivers released for Linux, and I don’t see that happening soon, so if that is a priority, it may be best to stick with an X260. One big selling feature is the inclusion of USB-C, which I plan on using with a dock. Battery life is quite excellent; with the split batteries it is difficult to estimate, but somewhere between 8-10 hours sounds reasonable. Overall a decent purchase, but waiting for the price to come down on these models, or even the Ryzen ones, is probably a safer bet, sticking with whatever old model is still kicking around.
Getting to Grips with Docker
I had a job interview recently, and I learned how this prospective employer was doing their infrastructure: using Docker and Ansible. I have been using Ansible more and more, getting to grips with how best to employ the tool. I have some work to do catching up there, but totally doable. I did feel lacking in my practical skillset with Docker; I had just docker run some things before. So I set about wrapping my head around the technology, and it just clicked.
LFCS
This week I earned the Linux Foundation Certified Sysadmin certification. This is a cert that I got in hopes of following a career path in exactly what the name denotes. For those who don’t know, the cert is very similar to the RHCSA, and probably interchangeable aside from the signing authority, plus the LFCS allows one to use Ubuntu for the exam. It was not an easy exam, being performance-based, but I was well prepared and passed with flying colours. find, and various invocations thereof, was perhaps the most useful command for me during the exam, but I had thankfully mastered most uses for it already. There were some sections on the test that I was surprised to see, given the official study material. That material is really not beneficial; I found the best cram-study reference on a GitHub page.
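For the curious, the kind of find invocations I mean:
find /var/log -type f -size +10M -mtime +30     # big, stale logs
find /etc -name '*.conf' -newer /etc/hostname   # changed since a reference file
find / -type f -perm -4000 2>/dev/null          # setuid binaries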
cyberchargen
I have been working on another actual coding project in my spare time. I enjoyed learning Python the first time, and I thought making a quick and easy character generator for the pen-and-paper RPG Cyberpunk 2020 would be a fun reentry. Plus, it’s 2020! The goal is to have a few options of character templates, get into some guts of Python, and learn to package it by the end of the year. Where it goes from there, who knows. The things that could be done with a simple character generator are pretty vast, so we will see where my ambitions end up. Anyway, follow along on my GitHub.
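The shape of the idea, boiled down (the roles and dice here are a sketch, not the actual 2020 rules):
import random

ROLES = ["Solo", "Netrunner", "Fixer", "Techie", "Media"]
STATS = ["INT", "REF", "TECH", "COOL", "LUCK"]

def roll_character() -> dict:
    # one role plus 1d10 per stat: the simplest possible template
    character = {"role": random.choice(ROLES)}
    character.update({stat: random.randint(1, 10) for stat in STATS})
    return character

print(roll_character())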
New Website
I want more control over my website than I feel I have here. I am currently hosting on github pages, and want to move towards a more complete web stack with a new static website. Hugo looks cool, but the call of raw HTML is always there. My site is just a quick blog as of now, and there are not too many plans of expansion (of the web portion, at least), so it feels manageable. Just throwing my ideas down.
Everyday Carry
This is a short post for fun. What a Lone Ranger IT professional carries to get through the day is different for everyone.
Pockets
- Live USB drive. I carry a Kali USB stick to get me out of any sticky spots. Has everything I need to troubleshoot a local computer, and look into external hosts when not at my desk.
- Keys to the kingdom. Physical and virtual keys stay on me. I have the password database, SSH keys, and old fashioned brass on me so I don’t have to run back to the desk.
- Android phone. Have a great deal of troubleshooting tools on it, but worst case: remote back into the desktop.
- Knife. Haven’t broken a Kershaw yet.
- Flashlight. Helps having a separate one from the phone.
Bag
- iFixit screwdriver kit. Has enough to get the job done.
- Toner and probe. For tracking down cables and testing them.
- Laptop. I like old Librebooted machines, or at least Thinkpads (I personally own five), so I may be biased, but anything that you will actually carry works.
The bag gets filled more the bigger the task, but that’s a good start with the bare essentials. Let me know what you carry!
Using Others' Services
In 2019, it is difficult to have a day go by where one is not frustrated by the tools one uses, or the communities surrounding them. The same goes for data breaches or personal data misuse. People complain that platforms or websites walk all over them, with little regard for what matters to the little guy. These people want the platforms to change for them, or want other people banned or removed for one thing or another. According to some, these massed platforms should be regulated like public works or goods. The people are the real ones with power, and it would take so little for them to exercise it.
Thinkpads in 2020
2020 is just around the corner, and I have been messing with my Thinkpads again, as well as recommending, shopping for, and thinking carefully about Thinkpads in the coming year. For reference, I have five Thinkpads of my own: an X1 Carbon (1st gen), X61, T61, X200, and X220. Of those, I have only daily-driven the X1 and X220 in the previous year. These are the laptops I consider the limit for where to go shopping for Thinkpads. All the Core 2 Duo and 1st-gen Intel Core devices are really starting to show their age in raw horsepower, as well as I/O capabilities. Other quality-of-life features are also missing, such as backlit keyboards, HDMI, Thunderbolt, USB 3.0+, full HD screens, and AC wifi. Going into 2020, these gaps are very noticeable, as modern low-cost devices often have some of these features at the same price as a used device. Even with the great benefits of the old Thinkpads, it is hard to justify picking them up: a better keyboard layout and typing experience, the ability to neuter the Intel Management Engine or run Coreboot, a better form factor, and availability. These older devices often do not have the battery life to keep up today (especially the older ultrabook-processor models like the X1). That doesn’t make these devices an automatic skip-over, but for similar prices, what can you get in the Thinkpad line?
Project Update 1: KVM and Preseed
I spent my first night on the project, and while it was not without its frustrations, it was successful: I was able to install Debian from a preseed file, and learned how to use KVM properly from the command line. The most difficult issue for me was finding correct and accurate examples of syntax for using the extra arguments on the virt-install script. That was, of course, after I spent at least an hour trying to boot from a correct ISO. The --location switch is very picky about the type of ISO that can be used, and to be perfectly honest, I found a working one by accident. The final stumbling block was that even though I had automated what I could in the preseed, I was still being prompted to manually intervene during the install. Some extra arguments saved me there. Overall, I am impressed by what I can do with preseed and virt-install. I already use KVM daily, so I look forward to more automation with that. The preseed can be quite basic, but there is a lot during the partition step that can be modified, which speaks to preseed’s power. Hosting a web server with a sped-up preseed.cfg to install on many systems is an interesting idea :).
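The shape of the invocation that finally behaved, roughly (ISO path and preseed URL illustrative):
virt-install --name test01 --memory 1024 --disk size=10 \
    --location /var/lib/libvirt/isos/debian-netinst.iso \
    --extra-args "auto=true priority=critical url=http://10.0.0.1/preseed.cfg" \
    --noautoconsole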
Project Update 2: Containers
Today was mostly about realising what LXC is and does, and what Docker is for. LXC is a Linux container; like a BSD jail, it is just a compartmentalised filesystem that shares the host kernel. Docker is different: it is primarily driven by the application it is intended to deliver. A Docker container pulls its config from the Dockerfile, and is meant to die when that application is done. LXC is more similar to a virtual machine, but fewer resources are wasted on recreating some of the components.
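A hedged sketch of that contrast in commands (assuming lxc and docker are already installed; the distro release here is just a placeholder):

```bash
# LXC: create and start a long-lived system container, closer to a lightweight VM.
lxc-create -n demo -t download -- -d debian -r buster -a amd64
lxc-start -n demo
lxc-attach -n demo   # a shell inside the container, sharing the host kernel

# Docker: run a single application; the container exits when the process does.
docker run --rm debian:buster echo "job done, container gone"
```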
Category: Blog Update Linux Career Project
New Project
I want to start pushing my career from network and sysadmin work towards the devops side of things. I have been working on the methodology and tools of the trade for a while now, things like scripting and automation. I have lots of experience with virtualisation, and some with containers, but I want to take it to the next level. Learning to write, manage, and hopefully cluster or fail over Docker/LXC containers is on the agenda. More experience with other scripting/programming languages will also be important to this devops ideal. I have a good handle on shell, and could sharpen up my Python skills and go from there. But that alone is not this newest project.
Category: Blog Update FreeBSD
FreeBSD
I like Linux. I like the options, the countless tweaks, and having more software than I could ever go through, all right there. So what if I don't want Linux, but still want all of these benefits? What if I want all my GNU software, but a different kernel? FreeBSD has been there for a long time, and it is still there. I have spent a cursory, surface-level amount of time using and reading about it, and come to a few conclusions about this Unix-derived operating system. Being that FreeBSD is considered modern Unix, while Linux is its own thing today, I have donned my wizard hat and robe and grown my beard a few inches after using the operating system for a time.
Category: Windows Mac Linux Blog Update
Windows vs Mac: From a GNU/Linux User
It’s not possible for many FOSS stalwarts to live without being asked or required to use a more mainstream or work-related operating system. Some people just need the ease of preinstalled OSes or integrated hardware. Windows and Macs are everywhere, but as a GNU purist or privacy-conscious user, which is the lesser of two evils? Microsoft has often been accused of, and revealed to have been, abusing user data and collecting gratuitous amounts of analytics, while Apple does far more to lock users out of repair, reuse, or full operation of their devices. Many are left questioning what to do, and there is no clear-cut answer.
Category: Linux Blog Update
Minecraft Server: Part Two
I wrote my first post about the Minecraft server I have been running eight months ago, and I have changed a few things since then. I ran the server with a bit more than default vanilla Minecraft, on an AMD FX-8350, which really is not the best CPU for the job. Keeping in mind that the Java server's main loop is single-threaded, clock speeds are a pretty big determinant of how the server will run. There were some datapacks, no mods, and no more than five concurrent players, and I managed to get it playable. Here's some of the things I did; one example is sketched below.
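As a hypothetical sketch of the kind of tuning I mean (the heap sizes here are placeholders, not my exact flags), pinning the minimum and maximum JVM heap to the same ballpark keeps the garbage collector from constantly resizing it:

```bash
# Fixed heap bounds; nogui skips the server's graphical console.
java -Xms2G -Xmx4G -jar server.jar nogui
```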
Category: Mac Blog Update
Who Are Macs For
Recently, several people who know I am in IT asked me why they should not get a Mac for a non-computer person who is already in the Apple ecosystem. I was slightly taken aback when most of my comments and critiques were answered with “I like it that way!” or “that doesn’t really bother me!” It can be difficult trying to convince people who have bought into the cult of Apple to move away to any other system, be it a *nix, Windoze, ChromeOS, or Android. So what are the big killers that people notice, the ones you should point out to help people make up their minds?
Category: Linux Update
Thoughts on Gentoo
Many people in and out of the Linux community have heard of Gentoo. There are a lot of different perceptions of the distribution. Some see it as the holy grail of customisability; some see it as a meme of bygone years, after being told on countless forums to “install Gentoo” instead of dealing with whatever problem they were having. Well, I installed Gentoo, and there’s a lot to unpack.
I’m not new to Linux, but I would not say I have a particularly long wizard beard. I’ve lived out of Arch for a few months, and manage some servers at home and work. I’ve troubleshot a lot of issues, but there’s so much that goes into Linux that it takes years to properly master. Still, I installed it following the guide and didn’t get stuck. I enjoyed the install process, which is much like the Arch one, and appreciated the options open to me. Do I want a hardened system by default, or not? Systemd? All of the choices were there, and then I compiled. It was interesting setting flags for the kernel, and although I left most at their defaults, that’s because most are good enough for the average user. Since I was just testing the waters, I stuck with the CLI and just did some everyday tasks. I installed some software through every method (ports, binaries, etc.) and removed some things; a sketch of that is below. It all felt like it was there, just like any other Linux.
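As a hedged sketch of that everyday package work with Portage (the package atom here is just an example, not a log of my session):

```bash
emerge --sync                             # refresh the Portage tree
emerge --ask app-editors/vim              # compile and install from source
emerge --ask --depclean app-editors/vim   # remove it again
```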
Learning Scripting
I should preface by saying I am in no way good, proficient, or authoritative about scripting in *nix or any other computing environments. But I have had an absolute blast getting started, and wanted to share my thoughts and discoveries. Anyone else who has already been here, feel free to call me a n00b.
It started out when I found myself installing webmin a lot on my new servers. I got tired of going through the whole thing time after time, and I thought it made sense to finally start scripting things. I mean, it is very easy to just chain commands like that together into an executable script, so that became my first: a shell script to install webmin. I was hooked after that.
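As a hypothetical reconstruction of that first script (not the original; the version number in the URL is a placeholder):

```sh
#!/bin/sh
# Fetch a Webmin .deb and let apt install it along with its dependencies.
set -e
apt-get update
apt-get install -y wget
wget -O /tmp/webmin.deb "https://prdownloads.sourceforge.net/webadmin/webmin_1.900_all.deb"
apt-get install -y /tmp/webmin.deb
```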
Learn Protocols
For someone first beginning the journey into IT or computing, in any discipline, the sheer amount of specialised knowledge required can seem incredibly daunting. There is a lot to learn, and some of it really is complicated, but there is something that one can do to make the early steps easier: learn protocols.
Much of what people see in computers is built on frameworks, and those usually rest on a number of common, standard libraries and protocols. HTTP is a prime example; it can be really easy to make a quick web site with Google Sites or Wordpress, but it can be just as easy to write up a dirty file and serve it up with Apache. By seeing the series of steps that make a simple site, using the base protocols, it is easier to distinguish all the other layers that make up modern, complex sites, adding in things like PHP or Ruby one at a time.
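To make that concrete, you can even speak HTTP by hand; a minimal sketch, assuming netcat is installed (the host is just an example):

```bash
# A raw HTTP/1.1 request, typed out byte for byte and piped to netcat.
printf 'GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n' | nc example.com 80
```

Seeing the plain-text request and response demystifies what every framework above it is actually doing.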
My First Linux Experiences
Like many of us out there, I did not start using Linux when I first started using computers, nor did I start using it when I first learned of it. Windows, which I had been using, worked fine, and there seemed to be little reason to want to switch over. The benefits were not explained properly to me, or I was too inexperienced to need anything more than what the Windows desktop offered.
What Makes a Linux Distro
Linux, or GNU+Linux, has been around as an operating system for every type of device (phone, server, desktop) since the early nineties. Because of the free nature of the software, countless different distributions for all sorts of applications have sprung up over the years, some highly specialised, others more capable across different roles. Some popular ones today, such as Ubuntu or Red Hat, have several concurrent releases at a time: a server and a desktop. Conversely, other projects like Debian and Arch produce a more generic release for end users to build up how they see fit, providing merely the framework. A Linux distribution in general terms is the design philosophy behind a project; in technical terms it is the software suite included with the Linux kernel.
Category: Homelab Update
Building a Homelab
Intro
For those not already in the know, homelabs are a staple for those in IT: a place to build, test, and play with technologies to improve understanding for use in a production or home deployment. There are some things people should know before getting into homelabbing, as it can be an expensive and draining hobby if done improperly. These are some of my notes and ideas from building my own environment.
Category: Minecraft Update
What I learned running Linux Minecraft Servers
Intro
Running Minecraft servers on Linux the manual way isn’t too bad: just execute the jar file with java and leave it running. What if you want to automate it, or need to send in commands? One needs to be able to access the running session easily, and the tool I used was screen.
Screen creates a vtty that can be attached and detached as the running tty, locally or over an SSH tunnel, making it ideal for automation and remote management. Simply start the screen session, leave it running, and forget about it. Using Debian, a systemd distro, I used screen, a systemd service, and a bash script to get things running. This was an excellent chance to experiment with these tools for a useful application.
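A hedged sketch of the screen side of that setup (the jar path, session name, and heap sizes are placeholders, not my exact setup):

```bash
#!/bin/bash
# start-minecraft.sh: launch the server inside a detached screen session.
screen -dmS minecraft java -Xms1G -Xmx2G -jar /opt/minecraft/server.jar nogui

# Reattach to the console later, locally or over SSH:
#   screen -r minecraft
# Or send a command into the running server without attaching:
#   screen -S minecraft -X stuff $'say Restarting soon!\n'
```

A simple systemd service can then point its ExecStart at this script so the session comes up at boot.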
Category: Jekyll Update
Welcome to Jekyll!
You’ll find this post in your _posts directory. Go ahead and edit it and re-build the site to see your changes. You can rebuild the site in many different ways, but the most common way is to run jekyll serve, which launches a web server and auto-regenerates your site when a file is updated.
To add new posts, simply add a file in the _posts directory that follows the convention YYYY-MM-DD-name-of-post.ext and includes the necessary front matter. Take a look at the source for this post to get an idea about how it works.
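As a quick sketch of that convention from the shell (the date, filename, and front matter values here are placeholders):

```bash
# Create a post with minimal front matter, then rebuild and preview.
cat > _posts/2019-01-01-my-new-post.md <<'EOF'
---
layout: post
title: "My New Post"
---
Post body goes here.
EOF
jekyll serve   # serves the site at http://localhost:4000 by default
```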