# Dump of Keiths notes: Proxmox

This is a dump of my notes from the last few years

# proxmox cloud init notes

<span style="white-space: pre-wrap;">Cloud init in proxmox by default offers basic functionality to create a user, add a password for that user, decide if the machine should be updated and import ssh keys and doing basic network config. This is good enough to get started using ansible but cloud init is capable of so much more including setting up repositorys, instaling packages, running arbitrary commands to configure the system, perform advanced network setup and setup many users on the system by default and much much more. </span>

<span style="white-space: pre-wrap;">I am largely using information from this article https://dustinrue.com/2020/05/going-deeper-with-proxmox-cloud-init/ to get this project off the ground. I have been struggling to understand how to get this working for the last while and I think this is going to be a great feather for my toolbox. </span>

<span style="white-space: pre-wrap;">First we started by editing the Cephfs volume storage at the datacenter level to allow for snippets. This allows the file system to use the cloud init file that we create to apply it to the cloud init disk we already have "installed" on one of our templates. </span>

<span style="white-space: pre-wrap;">Next we create a file in the /mnt/pve/Cephfs/snippets/ directory that will be our cloud init user file. This will include things like creating our user, adding ssh keys, and installing packages. If you plan to use a more complicated netplan file than just turning on DHCP for the first network interfaces you can also setup a network cloud init file. You could create a meta file as well but Im not sure why that would be nessicary. </span>

<span style="white-space: pre-wrap;">A few things I would like to know and I will test as I go though this. If I leave some fields blank can I still use the basic config information to apply things like the system hostname? That would be helpful to know. </span>

I am going to be working off examples from the cloud init documentation for this. I would like to have my systems at least have a bare minimum config of being setup to work with ansible and also to have mdns installed. below is a cloud init file that should allow that. Im also going to create an additional user to get into the system. it doesnt seem like it will use the cloud init from the default user and it also seems like the network setup is fucked as well

To select a custom cloud init file for a specific VM you use the command `qm set &lt;VMID&gt; --cicustom "user=local:snippets/user.yaml`

```yaml
#cloud-config

# Install additional packages on first boot
#
# Default: none
#
# if packages are specified, then package_update will be set to true
#
# packages may be supplied as a single package name or as a list
# with the format [<package>, <version>] wherein the specific
# package version will be installed.
packages:
 - libnss-mdns
 - qemu-guest-agent

# A common use-case for cloud-init is to bootstrap user and ssh
# settings to be managed by a remote configuration management tool,
# such as ansible.
#
# This example assumes a default Ubuntu cloud image, which should contain
# the required software to be managed remotely by Ansible.
#

ssh_pwauth: false
users:
- name: borg
  gecos: Ansible User
  groups: users,admin,wheel
  sudo: ALL=(ALL) NOPASSWD:ALL
  shell: /bin/bash
  ssh_authorized_keys:
    - "ssh-rsa YOUR KEY HERE"
```

<span style="white-space: pre-wrap;">This all ended up working pretty well with a few caveats. I don't think you can create a user named admin on an Ubuntu machine. At least the system failed to create that user. There are a few other important things I learned. </span>

First, you can only use either use the builtin cloud init tools from the proxmox UI \*\*or\*\* you can use custom cloud init tools using the `cicustom` command. This is slightly unfortunate because I cannot quite figure out how to pass in the name of the machine used by proxmox to the custom cloud init file at the buildtime for the new vm. this isnt a huge issue but it does mean a tweak in how the systems are spun up and setup. the systems all come out with the same host name if useing cicustom.

<span style="white-space: pre-wrap;">I think that this is the best solution for some projects. If a template is setup to generate specificly configured systems (like the tailscale templates) the cicustom command makes more sense so you can get a basic config and </span>

# templates

https://technotim.live/posts/cloud-init-cloud-image/

Follow the above guide!

I will use the \*\*fart\*\* net for only local system networking

<span style="white-space: pre-wrap;">The rest of this tutorial seems do-able. Ill probably want to do this at home. </span>

# SDN vs pfsense and or opensense

The process for setting up pfsense or opensense in proxmox and then messing with it looks something like this...

first install opnsense, add multiple network adapters before logging in. one of those adapters will be setup using just a default bridge to the existing lan network and one of the bridges will use the same interface but will have a vlan tag on the interface. this way all traffic is created inside this vlan.

the next step is to install an ubuntu desktop system on the proxmox host as well and setup 1 network interface there without vlan tag untill after ubuntu is installed. then add the vlan tag that matches the interface on the pfsense vms lan interface. this way youll be able to get access to the web console.

<span style="white-space: pre-wrap;">When trying to figure all this stuff out I was unclear on how connections should work between machines and software defined networks. </span>

# lxc containers

just reading though the lxc contianers documentation on proxmoxes wiki, key take aways are such

\- containers are lightweight (duh)

\- no live migration due to technical limitations (migrations are handled by restarting the contianer but this is quick because containers are light)

\- only linux systems can be emulated (containers)

<span style="white-space: pre-wrap;">- for security reasons containers run seperate namespaces and some syscalls are not allowed in containers (im not sure I fully understand this point but it sounds like podman where it locks stuff down more than docker for security) </span>

\- You can use proxmox ve firewall and high avaliblity framework with containers

\- the goal of lxc containers to to provide benifits of a vm without additional overhead. LXC containers should be thought of as system containers instead of application containers (docker syle containers (generaly))

\- Proxmox recomends if you want to run applications like you would with docker to do it in a vm with docker installed so all syscalls and other stuff like live migration is avalible along with stronger isolation from the host system.

all in all, I could be somewhat wrong about LXC containers and if we want to spin up quick crash and trash systems we could consider them as viable test platforms. the live migration being lost is a bit of a bummer.

https://pve.proxmox.com/wiki/Linux\_Container

# Tailscale templates

A bit more on the ansible stuff

\[\[Tailscale Ansible Automation\]\]

\---

<span style="white-space: pre-wrap;">The steps I have got though to setup tail-scale Templates for Proxmox. </span>

\- create fresh Ubuntu VM

\- user is called 'administrator'

\- password will be kept for admin

<span style="white-space: pre-wrap;"> users setup for guests will have sudo access only for tailscale functions</span>

XOXO- QEMU guest agent installed

\- tailscale installed

\- subnet router rules applied to system

\- udp optimizations setup

\- avahi-daemon setup for mdns id of systems

\- machine id removed

\- ssh host information and keys removed

\- cleared out Ethernet device classifications

Onboarding a user:

\- create user information for the user in the vault warden, they need to add the install script per linux server onboard steps from tailscale

\- Get them to setup tailscale account and one node on their stuff they plan to use for their microspace stuff

\- split DNS setup for yeticraft.net (thanks josh)

\- need to know what IP range they will be allowed to access

\- need to know if they need additional network interfaces to make that happen

\- will want their email address to send them the link to authorize their account

\- I should figure out how to email... https://docs.ansible.com/ansible/latest/collections/community/general/mail\_module.html

Does the user need access to the system???

Need to experiment...

<span style="white-space: pre-wrap;">I don't think so... If we put in the correct information from an admin perspective we should be able to run tailscale up --advertise routes 192.168.x.0/x with all stuff included. Then we paste them the login link. I think that there is a way to </span>

Need to talk over with the fellas if this seems like a safe and or sane way of doing stuff...

avahi-daemon (mDNS linux client) installed to get hostname. tailscale nodes will be updated based on hostname

The link below is the fix for ssh not working (when you wipe out template keys for the template they need to be setup again on the new vm)

https://www.reddit.com/r/Proxmox/comments/sgyv24/ssh\_after\_clone\_doesnt\_work/

Vm's that are full clones can be independent of the the Ceph cluster. We are probably going to go with linked clones. Not sure how that works on the backside but it does seem to be functional

<span style="white-space: pre-wrap;">Things an admin will need to do. </span>

\- create machine based on template

<span style="white-space: pre-wrap;"> </span><span style="white-space: pre-wrap;">- users should have a tag with there name on it and </span>

<span style="white-space: pre-wrap;"> </span>- select linked clone (unless putting on system not in Ceph. Probably don't do that)

<span style="white-space: pre-wrap;"> </span>- add to remote-access pool (haven't created this yet)

<span style="white-space: pre-wrap;"> </span>- wait for VM to be created

<span style="white-space: pre-wrap;"> </span>- select correct network interfaces before brining system up (this will dictate the IP address the user gets)

\- run `sudo dpkg-reconfigure openssh-server` to generate fresh keys

\- import ssh keys (gh or other)

\- change hostname to correct numerical designation `hostnamectl hostname tailscale-node#` replace # with number

\- reboot system to change hostname

\- add new host to new guest inventory section

\- set administrator user as a no password user (reworking the template may be necessary for this)

\- run ansible playbook to setup system (Crowdsec and other stuff)

\- walk user though getting authkey for a Linux server. This is not an awesome way to go about this. I have a feeling that getting input from the fellas could lead to better solutions...

<span style="white-space: pre-wrap;">- </span>

\- tailscale up --advertise-routes \*route ip based on allowed addresses\*

\- ~~give user the login link (need to figure out how long this will wait) I could potentially automate this bit with ansible and email... That would be cool~~

\- move to regular guest inventory

\- ensure nymph can get to the system for regular updating

<span style="white-space: pre-wrap;">- ensure the user can get to services they expect they should be able to get to. </span>

\- Will need to implement a full follow along guide for user.

<span style="white-space: pre-wrap;">Today I got this mostly working. I don't really know if there are better ways I could do this for now but this seems like an excellent start. </span>

<span style="white-space: pre-wrap;">I am going to do a second version. There are a few things I need to get correct </span>

\---

After review with the u-space team this is how I'm gonna tweak stuff

A better procedure generally would be such

\- user installs tailscale, enorles at least one system, setup split dns

\- enroll user in bitwarden (get auth key for user along with having them generate passwords for whatever services they will be using) we should encourage users to use the vault-warden account for all passwords they generate for the project they are in. That way resets are easier for us. This is ultimately where LDAP and SSO will eventually come into play

\- create VM for users access including correct network interfaces. There's a bunch of steps here, will outline later

<span style="white-space: pre-wrap;">- add VM to ansible creation inventory </span>

\- run initial install playbook

\- when user has their auth key in the system, paste into ansible var for auth key as part of playbook to spin up system

\- the tailscale join should be the last playbook. Should include the subnet routing to users expected resources. (Nextcloud, dev environment like coder, help desk software, redmine, oodoo, WordPress site backend, ECT.)

\- user needs to enable subnet routes in the admin console and also the host system the are using (--accept-routes for Linux. Assuming that windows is a toggle. Android auto forwards afaik)

\- check with user that they can login to their services

<span style="white-space: pre-wrap;">Steps I need to complete. </span>

<span style="white-space: pre-wrap;">- Build a fresh template. I think I'm going to try and automate the install and setup of system variables like the tailscale install, setting up the hostname, configuring to use mDNS and other items at the top of this note can all be automated and we can use the cloud init tools to create the VM. Use the techno tim video to get this VM setup. </span>

\- probably need to implement logging push up to network monitoring tools (this is something Josh/garth should be consulted on)

\- harden nymph

https://cloud-images.ubuntu.com/noble/current/noble-server-cloudimg-amd64.img

<span style="white-space: pre-wrap;">Stuff for the ansible playbook </span>

Tailscale optimizations

https://tailscale.com/kb/1320/performance-best-practices

<span style="white-space: pre-wrap;">for setting up nodes with this new setup. </span>

1\. go to template 8000

2\. create a clone

3\. target node: mss1/2

4\. VM ID: leave alone (let it auto populate)

5\. Name: tailscale-node#

6\. resource pool: tail-nodes

7\. mode: full clone

8\. target storage: same as source

9\. create clone

10\. do not turn on vm once its cloned!

11\. set up network devices as needed

12\. turn on system

13\. get ip address for host (mdns will be installed)

14\. ensure nymph can contact hosts

15\. run config ansible playbooks (what do those playbooks do?)

\---

\## Lets talk about password manager integration...

<span style="white-space: pre-wrap;">I have yet again, bounced off of working with a password manager and ansible. the issue ive got is that all the implementations I can think of seem incredibly fragile and dificult to setup. I also realised some of the limitations of the community edition of vault warden. my main issue is the lack of granularity when it comes to password user control. </span>

It seems that we would have to add users to the vaults that they need access to, (part of that is sending them an invite link) Once they are in the system they would have access to both their vault and the organization they need access to. they would then add their auth tolken to their personal account and then share it with an admin user?

\---

\## roleplay

Here is my tabletop of a user and admin interaction

<span style="white-space: pre-wrap;">In this scenario, a user needs to gain access to several development machines as well as nextcloud for document storage and redmine for project managment. the admin has access to the proxmox cluster to provision this new user a system with access to their cluster of systems. the admin also has access to bitwarden. the project lead sends a message to admin group to establish users in this group. </span>

Project Manager:

<span style="white-space: pre-wrap;"> </span>good morning, I need users1, user2, user3 and user4 to have access to my workgroup X. please enroll them. here are their email addresses

<span style="white-space: pre-wrap;">Admin Managment: </span>

<span style="white-space: pre-wrap;"> </span>Check rodger, those users will be added

<span style="white-space: pre-wrap;"></span>

Admin: \*send email to users\*

<span style="white-space: pre-wrap;"> </span>Good morning, Im going to help enroll you in our system. you will recive an invite to join the bitwarden account with access to your workgroups vault. please create your account. Once complete, i will send you an enrolement invite to join the u-space organization along with adding you to relevant password vaults for your work group

<span style="white-space: pre-wrap;"> </span><span style="white-space: pre-wrap;">Once your account is created you'll also create a tailscale account using this link https://tailscale.com/ you will need to provide an identity provider such as github or google. If you already have a tailscale account you can skip this step. </span>

<span style="white-space: pre-wrap;"> </span><span style="white-space: pre-wrap;">Once you have tailscale setup on your devices of choice, You will need to go to the settings menu in the tailscale admin dashboard. then select keys, and then generate an auth token. your key for this account should not be re-usable. do not select any of the switches. </span>

<span style="white-space: pre-wrap;"> </span>That auth-tolken will then be saved under your name in a note with the "enroll tolken vault collection"

<span style="white-space: pre-wrap;"> </span>From there, I will create your machines. Please reach out to me if you have troubles signing up for tailscale or the bitwarden account

Breif overview of steps

1\. sign up for yeticraft vault warden

3\. sign up for tailscale

4\. admin will enroll you in enroll vault collection and other relevant vaults for your work group

5\. generate an auth key and save it to your vault

6\. send that item to admin@admin.example

<span style="white-space: pre-wrap;"> </span><span style="white-space: pre-wrap;">Your remote access will be established to the systems in work group X. if you need additional access please talk to your project manager about provisioning those resources. </span>

Users: do the tasks as outlined

<span style="white-space: pre-wrap;">Admin: </span>

<span style="white-space: pre-wrap;"> </span><span style="white-space: pre-wrap;">- create systems. one for each user. add users systems network interfaces to software defined network for that work group. create entrys for those users in the ansible inventory. then get into the bitwarden account and get the auth tolkens. these should be added to an ansible vault and mapped to the system ID they belong to. </span>

<span style="white-space: pre-wrap;"> </span>- configure ansible vault with auth tolkens and ip addresses for internal ip's of the systems

<span style="white-space: pre-wrap;"> </span><span style="white-space: pre-wrap;">- running the playbook to enrole new users will run any hardening steps required, configure a strong administrator password, install tailscale and authorize the system for the users network as well as allowing the subnets needed for the workgroup. </span>

\---

\### A few notes

<span style="white-space: pre-wrap;">Im going to spend some time working on play optimization steps. for example, stripping out the gather facts step if it is not required, dont try to install shit if its already there and dont create users that already exist. </span>

This explains the check\_mode tool in the users module which is helpful

https://stackoverflow.com/questions/75211712/ansible-user-module-check-if-a-user-exists-or-not

checkout these modules for checking if stuff is already there

https://docs.ansible.com/ansible/latest/collections/ansible/builtin/package\_facts\_module.html

\---

So how did this all end up?

<span style="white-space: pre-wrap;">Well It seems that things worked pretty well. Currently this has all largely come together. What I have now is a system that will allow many users access to internal services for projects. This system still needs a bit of refinement as I had to do a good amount of tweaking to get everything working. I think that I need to run though this whole process again now that the ansible scripts are working. I may also move my access into the microspace to that method and see how it works to access internal services. Everything internal to the machine that Im on should be fairly fast given that there's no real network connections. </span>

<span style="white-space: pre-wrap;">As I finished up this project a project came to my attention to host a tailscale proxy system that proxy's all the services in its docker network. This project is fairly easy but I'm not sure if it would work for our needs over all since it seems like its to allow one user to get access to resources. I am going to do a deeper dive into this TSDProxy tool and see if it can be applied to our usecase. It could be that for users who dont need access to full virtual machines and only need access to internal services like nextcloud this could be a more appropriate tool to use. </span>

<span style="white-space: pre-wrap;">I have gotten tsdproxy to work, The next step is to see if I can get multiple tailscale accounts link up to a single tailscale proxy stack. that way it can be much more efficent to give access to services for many users we can just setup </span>

<span style="white-space: pre-wrap;">Somehow your template got all fucked up. gonna try again using </span>

https://dev.to/minerninja/create-an-ubuntu-cloud-init-template-on-proxmox-the-command-line-guide-5b61

https://blog.themaxtor.boo/post/2024-10-12-how-to-create-proxmox-template-with-cloud-image/

https://blog.themaxtor.boo/post/2024-10-19-how-to-create-vm-with-cloud-init-in-proxmox/

<span style="white-space: pre-wrap;">I was able to re-establish the Ubuntu-cloud template. this was an excellent starting point and i was very glad to have it back. I was finally able to crack being able to add customized cloud init user files. </span>

I think the problem I was running into had several causes. Mostly though I think the main issue was that I was incorrectly formatting the cloud init files and expecting things that were not possible. It seems like you can either use the values entered in the proxmox ui for cloud init or you can use the ci-custom command to overwrite those things with a customized version. You only get to use one. This means that I will end up tweaking the procedure for an admin to spin up the tailscale nodes significantly. the new procedure looks like this once all data is gathered from the user.

1\. select the Tailscale-Template and create a clone

2\. select mss1/mss2

3\. leave vmid alone

4\. give name of ts-node# (number of the tailscale node your adding)

5\. add to Tailscale-Node Resource pool

6\. select Full Clone mode

7\. leave target striagre at same as source

8\. leave format as QEMU image format (qcow2)

9\. Wait for system lock to go away

10\. Do not start system yet

<span style="white-space: pre-wrap;">11. add system to relevant network (default is simple LAN (fartnet), this may or may not be an appropriate option for each user, that system should have some sort of dhcp method for assigning ip addresses. </span>

12\. Tag each system with the tsnode tag, the users name as a tag, and the project they are on

13\. start new system

14\. you can watch the system go though its first boot process, wait for it to finish this process or at least wait for 10 minutes

15\. reboot system once cloud init has run

<span style="white-space: pre-wrap;">16. the system should now report its ip address to the summary page using the qemu-guest-agent, enter this ip address into the ansible playbook for these new machines in the new guests section. </span>

17\. enter user auth tokens into the ansible vault in the keys feild (It is very likely that this portion could be automated)

# Ansible, tailscale, proxmox automated templates

<span style="white-space: pre-wrap;">Stuff for the ansible playbook </span>

Tailscale optimizations generally, mostly for subnet routers

https://tailscale.com/kb/1320/performance-best-practices

Tailscale subnet router setup

https://tailscale.com/kb/1019/subnets

\---

\## The vault look-ups

this project helped me get a better idea on how ansible looks up variables. this diagram is how I understand this as working.

!\[\[Drawing 2024-12-11 17.47.07.excalidraw.png\]\]

<span style="white-space: pre-wrap;">basicly the way I understand this working is the vars file is used to map in variables from diffrent places including the inventory file and also the vault file. kinda brings everything together. </span>

This was the main addition that this playbook added to my knowledge of ansible. Learning to work with variables and pull information from different sources will be incredibly handy going forward. As far as I can tell this method of pulling variables works but could be turned into a oneliner and hopping over the vars file? It is possible this is wrong as Im still learning about how this all works.

\---

<span style="white-space: pre-wrap;">This is the rest of the ansible script written for this project. Beyond the use of vars files the main things I think are cool about this playbook is the use of registering facts about the packages installed using the ansible.builtin.package\_facts module to pull information about the systems targeted. I also register the outputs of certain plays to check if other plays should run. this helps when running this play against systems after they have been configured once. </span>

```YAML

\---

\- name: preliminary steps

<span style="white-space: pre-wrap;"> hosts: new\_guests</span>

<span style="white-space: pre-wrap;"> become: true</span>

<span style="white-space: pre-wrap;"> vars\_files:</span>

<span style="white-space: pre-wrap;"> - ./vars.yaml</span>

<span style="white-space: pre-wrap;"></span>

<span style="white-space: pre-wrap;"> </span>- ./vault.yaml

<span style="white-space: pre-wrap;"></span>

tasks:

<span style="white-space: pre-wrap;"></span>

<span style="white-space: pre-wrap;"> </span><span style="white-space: pre-wrap;"> </span>- name: Set a hostname

<span style="white-space: pre-wrap;"> </span><span style="white-space: pre-wrap;"></span>

<span style="white-space: pre-wrap;"> </span><span style="white-space: pre-wrap;"> </span><span style="white-space: pre-wrap;"> ansible.builtin.hostname:</span>

<span style="white-space: pre-wrap;"> </span><span style="white-space: pre-wrap;"> </span><span style="white-space: pre-wrap;"></span>

<span style="white-space: pre-wrap;"> </span><span style="white-space: pre-wrap;"> </span><span style="white-space: pre-wrap;"> </span>name: "{{ inventory\_hostname }}"

<span style="white-space: pre-wrap;"></span>

<span style="white-space: pre-wrap;"> </span><span style="white-space: pre-wrap;"></span>

<span style="white-space: pre-wrap;"></span>

<span style="white-space: pre-wrap;"> </span><span style="white-space: pre-wrap;"> </span>- name: gather facts about packages installed

<span style="white-space: pre-wrap;"> </span><span style="white-space: pre-wrap;"></span>

<span style="white-space: pre-wrap;"> </span><span style="white-space: pre-wrap;"> </span><span style="white-space: pre-wrap;"> ansible.builtin.package\_facts:</span>

<span style="white-space: pre-wrap;"> </span><span style="white-space: pre-wrap;"></span>

<span style="white-space: pre-wrap;"> </span><span style="white-space: pre-wrap;"> </span><span style="white-space: pre-wrap;"> manager: auto</span>

<span style="white-space: pre-wrap;"></span>

<span style="white-space: pre-wrap;"> </span>- name: install packages

<span style="white-space: pre-wrap;"></span>

<span style="white-space: pre-wrap;"> </span><span style="white-space: pre-wrap;"> ansible.builtin.apt:</span>

<span style="white-space: pre-wrap;"></span>

<span style="white-space: pre-wrap;"> </span><span style="white-space: pre-wrap;"> upgrade: full</span>

<span style="white-space: pre-wrap;"></span>

<span style="white-space: pre-wrap;"> </span>- name: Check if reboot is required

<span style="white-space: pre-wrap;"></span>

<span style="white-space: pre-wrap;"> </span><span style="white-space: pre-wrap;"> ansible.builtin.stat:</span>

<span style="white-space: pre-wrap;"></span>

<span style="white-space: pre-wrap;"> </span><span style="white-space: pre-wrap;"> path: /var/run/reboot-required</span>

<span style="white-space: pre-wrap;"></span>

<span style="white-space: pre-wrap;"> </span><span style="white-space: pre-wrap;"> register: reboot\_required\_file</span>

<span style="white-space: pre-wrap;"></span>

<span style="white-space: pre-wrap;"> </span><span style="white-space: pre-wrap;"></span>

<span style="white-space: pre-wrap;"></span>

<span style="white-space: pre-wrap;"> </span>- name: Reboot systems to apply kernel updates

<span style="white-space: pre-wrap;"></span>

<span style="white-space: pre-wrap;"> </span><span style="white-space: pre-wrap;"> ansible.builtin.reboot:</span>

<span style="white-space: pre-wrap;"></span>

<span style="white-space: pre-wrap;"> </span><span style="white-space: pre-wrap;"> when: reboot\_required\_file.stat.exists == true</span>

<span style="white-space: pre-wrap;"></span>

\- name: setup crowdsec

<span style="white-space: pre-wrap;"> hosts: new\_guests</span>

<span style="white-space: pre-wrap;"> become: true</span>

<span style="white-space: pre-wrap;"> vars\_files:</span>

<span style="white-space: pre-wrap;"> - ./vars.yaml</span>

<span style="white-space: pre-wrap;"> - ./vault.yaml</span>

<span style="white-space: pre-wrap;"></span>

<span style="white-space: pre-wrap;"> tasks:</span>

<span style="white-space: pre-wrap;"></span>

<span style="white-space: pre-wrap;"> </span>- name: Crowdsec repo's are installed via script

<span style="white-space: pre-wrap;"></span>

<span style="white-space: pre-wrap;"> </span><span style="white-space: pre-wrap;"> ansible.builtin.shell: curl -s https://install.crowdsec.net | sh</span>

<span style="white-space: pre-wrap;"></span>

<span style="white-space: pre-wrap;"> </span><span style="white-space: pre-wrap;"> register: my\_output # &lt;- Registers the command output.</span>

<span style="white-space: pre-wrap;"></span>

<span style="white-space: pre-wrap;"> </span><span style="white-space: pre-wrap;"> changed\_when: my\_output.rc != 0 # &lt;- Uses the return code to define when the task has changed.</span>

<span style="white-space: pre-wrap;"></span>

<span style="white-space: pre-wrap;"> </span><span style="white-space: pre-wrap;"> when: "'crowdsec' is not in ansible\_facts.packages"</span>

<span style="white-space: pre-wrap;"></span>

<span style="white-space: pre-wrap;"> </span><span style="white-space: pre-wrap;"></span>

<span style="white-space: pre-wrap;"></span>

<span style="white-space: pre-wrap;"> </span>- name: Crowdsec install

<span style="white-space: pre-wrap;"></span>

<span style="white-space: pre-wrap;"> </span><span style="white-space: pre-wrap;"> ansible.builtin.apt:</span>

<span style="white-space: pre-wrap;"></span>

<span style="white-space: pre-wrap;"> </span><span style="white-space: pre-wrap;"> package:</span>

<span style="white-space: pre-wrap;"> </span><span style="white-space: pre-wrap;"></span>

<span style="white-space: pre-wrap;"> </span><span style="white-space: pre-wrap;"> - crowdsec</span>

<span style="white-space: pre-wrap;"></span>

<span style="white-space: pre-wrap;"> </span><span style="white-space: pre-wrap;"> update\_cache: true</span>

<span style="white-space: pre-wrap;"></span>

<span style="white-space: pre-wrap;"> </span><span style="white-space: pre-wrap;"> when: "'crowdsec' is not in ansible\_facts.packages"</span>

<span style="white-space: pre-wrap;"></span>

<span style="white-space: pre-wrap;"> </span><span style="white-space: pre-wrap;"></span>

<span style="white-space: pre-wrap;"></span>

<span style="white-space: pre-wrap;"> </span>- name: Crowdsec install

<span style="white-space: pre-wrap;"></span>

<span style="white-space: pre-wrap;"> </span><span style="white-space: pre-wrap;"> ansible.builtin.apt:</span>

<span style="white-space: pre-wrap;"></span>

<span style="white-space: pre-wrap;"> </span><span style="white-space: pre-wrap;"> package:</span>

<span style="white-space: pre-wrap;"></span>

<span style="white-space: pre-wrap;"> </span><span style="white-space: pre-wrap;"> - crowdsec-firewall-bouncer-iptables</span>

<span style="white-space: pre-wrap;"></span>

<span style="white-space: pre-wrap;"> </span><span style="white-space: pre-wrap;"> update\_cache: true</span>

<span style="white-space: pre-wrap;"></span>

<span style="white-space: pre-wrap;"> </span><span style="white-space: pre-wrap;"> when: "'crowdsec' is not in ansible\_facts.packages"</span>

<span style="white-space: pre-wrap;"></span>

<span style="white-space: pre-wrap;"> </span><span style="white-space: pre-wrap;"></span>

<span style="white-space: pre-wrap;"> </span><span style="white-space: pre-wrap;"></span>

<span style="white-space: pre-wrap;"></span>

<span style="white-space: pre-wrap;"> </span>- name: Enroll in crowdsec console

<span style="white-space: pre-wrap;"></span>

<span style="white-space: pre-wrap;"> </span><span style="white-space: pre-wrap;"> ansible.builtin.command: sudo cscli console enroll -n {{ inventory\_hostname }} -e context clz8lrn840007lb085o6va59z</span>

<span style="white-space: pre-wrap;"></span>

<span style="white-space: pre-wrap;"> </span><span style="white-space: pre-wrap;"> register: my\_output # &lt;- Registers the command output.</span>

<span style="white-space: pre-wrap;"></span>

<span style="white-space: pre-wrap;"> </span><span style="white-space: pre-wrap;"> changed\_when: my\_output.rc != 0 # &lt;- Uses the return code to define when the task has changed.</span>

<span style="white-space: pre-wrap;"></span>

<span style="white-space: pre-wrap;"> </span><span style="white-space: pre-wrap;"> when: "'crowdsec' is not in ansible\_facts.packages"</span>

<span style="white-space: pre-wrap;"></span>

<span style="white-space: pre-wrap;"> </span><span style="white-space: pre-wrap;"></span>

<span style="white-space: pre-wrap;"></span>

<span style="white-space: pre-wrap;"> </span>- name: Add linux collection

<span style="white-space: pre-wrap;"></span>

<span style="white-space: pre-wrap;"> </span><span style="white-space: pre-wrap;"> ansible.builtin.command: sudo cscli collections install crowdsecurity/linux</span>

<span style="white-space: pre-wrap;"></span>

<span style="white-space: pre-wrap;"> </span><span style="white-space: pre-wrap;"> register: my\_output</span>

<span style="white-space: pre-wrap;"></span>

<span style="white-space: pre-wrap;"> </span><span style="white-space: pre-wrap;"> changed\_when: my\_output.rc != 0</span>

<span style="white-space: pre-wrap;"></span>

<span style="white-space: pre-wrap;"> </span><span style="white-space: pre-wrap;"> when: "'crowdsec' is not in ansible\_facts.packages"</span>

<span style="white-space: pre-wrap;"></span>

<span style="white-space: pre-wrap;"> </span><span style="white-space: pre-wrap;"></span>

<span style="white-space: pre-wrap;"></span>

<span style="white-space: pre-wrap;"> </span>- name: Restart crowdsec post enrollment to get stuff working

<span style="white-space: pre-wrap;"></span>

<span style="white-space: pre-wrap;"> </span><span style="white-space: pre-wrap;"> ansible.builtin.service:</span>

<span style="white-space: pre-wrap;"></span>

<span style="white-space: pre-wrap;"> </span><span style="white-space: pre-wrap;"> name: crowdsec</span>

<span style="white-space: pre-wrap;"></span>

<span style="white-space: pre-wrap;"> </span><span style="white-space: pre-wrap;"> state: restarted</span>

<span style="white-space: pre-wrap;"></span>

<span style="white-space: pre-wrap;"> </span><span style="white-space: pre-wrap;"> when: "'crowdsec' is not in ansible\_facts.packages"</span>

<span style="white-space: pre-wrap;"></span>

<span style="white-space: pre-wrap;"> </span><span style="white-space: pre-wrap;"></span>

\- name: install and enroll tailscale on hosts

<span style="white-space: pre-wrap;"> hosts: new\_guests</span>

<span style="white-space: pre-wrap;"> become: true</span>

<span style="white-space: pre-wrap;"> vars\_files:</span>

<span style="white-space: pre-wrap;"> - ./vars.yaml</span>

<span style="white-space: pre-wrap;"> - ./vault.yaml</span>

<span style="white-space: pre-wrap;"></span>

<span style="white-space: pre-wrap;"> tasks:</span>

<span style="white-space: pre-wrap;"> </span>- name: install and enroll host in tailscale

<span style="white-space: pre-wrap;"></span>

<span style="white-space: pre-wrap;"> </span><span style="white-space: pre-wrap;"> ansible.builtin.shell: curl -fsSL https://tailscale.com/install.sh | sh &amp;&amp; sudo tailscale up --auth-key="{{ tailscale\_auth\_key }}"</span>

<span style="white-space: pre-wrap;"></span>

<span style="white-space: pre-wrap;"> </span><span style="white-space: pre-wrap;"> when: "'tailscale' is not in ansible\_facts.packages"</span>

<span style="white-space: pre-wrap;"></span>

<span style="white-space: pre-wrap;"> </span><span style="white-space: pre-wrap;"></span>

<span style="white-space: pre-wrap;"></span>

<span style="white-space: pre-wrap;"> </span>- name: setup subnet routers pt 1

<span style="white-space: pre-wrap;"></span>

<span style="white-space: pre-wrap;"> </span><span style="white-space: pre-wrap;"> ansible.builtin.shell: echo 'net.ipv4.ip\_forward = 1' | sudo tee -a /etc/sysctl.d/99-tailscale.conf</span>

<span style="white-space: pre-wrap;"></span>

<span style="white-space: pre-wrap;"> </span><span style="white-space: pre-wrap;"> # when: "'tailscale' is not in ansible\_facts.packages"</span>

<span style="white-space: pre-wrap;"></span>

<span style="white-space: pre-wrap;"> </span>- name: setup subnet routers pt 2

<span style="white-space: pre-wrap;"></span>

<span style="white-space: pre-wrap;"> </span><span style="white-space: pre-wrap;"> ansible.builtin.shell: echo 'net.ipv6.conf.all.forwarding = 1' | sudo tee -a /etc/sysctl.d/99-tailscale.conf</span>

<span style="white-space: pre-wrap;"></span>

<span style="white-space: pre-wrap;"> </span><span style="white-space: pre-wrap;"> # when: "'tailscale' is not in ansible\_facts.packages"</span>

<span style="white-space: pre-wrap;"></span>

<span style="white-space: pre-wrap;"> </span>- name: setup subnet routers pt 3

<span style="white-space: pre-wrap;"></span>

<span style="white-space: pre-wrap;"> </span><span style="white-space: pre-wrap;"> ansible.builtin.shell: sudo sysctl -p /etc/sysctl.d/99-tailscale.conf</span>

<span style="white-space: pre-wrap;"></span>

<span style="white-space: pre-wrap;"> </span>\# when: "'tailscale' is not in ansible\_facts.packages"

<span style="white-space: pre-wrap;"></span>

<span style="white-space: pre-wrap;"> </span><span style="white-space: pre-wrap;"></span>

<span style="white-space: pre-wrap;"></span>

<span style="white-space: pre-wrap;"> </span>- name: subnet router optimizations

<span style="white-space: pre-wrap;"></span>

<span style="white-space: pre-wrap;"> </span><span style="white-space: pre-wrap;"> ansible.builtin.shell: printf '#!/bin/sh\\n\\nethtool -K %s rx-udp-gro-forwarding on rx-gro-list off \\n' "$(ip -o route get 1.1.1.1 | cut -f 5 -d " ")" | sudo tee /etc/networkd-dispatcher/routable.d/50-tailscale &amp;&amp; sudo chmod 755 /etc/networkd-dispatcher/routable.d/50-tailscale</span>

<span style="white-space: pre-wrap;"></span>

<span style="white-space: pre-wrap;"> </span><span style="white-space: pre-wrap;"></span>

<span style="white-space: pre-wrap;"></span>

<span style="white-space: pre-wrap;"> </span>- name: advertise routes for systems

<span style="white-space: pre-wrap;"></span>

<span style="white-space: pre-wrap;"> </span><span style="white-space: pre-wrap;"> ansible.builtin.shell: tailscale up --advertise-routes "{{ subnets }}"</span>

```

\---

This is the vars file included with this project. the commented out lines are from earlier iterations of this project that are helpful to understand how things are looked up using variables

```YAML

\# ansible\_password: "{{ server\_passwords.admin }}"

\# ansible\_become\_password: "{{ server\_passwords.admin }}"

tailscale\_auth\_key: "{{ individual\_sys\[server\_id\].key}}"

\# new\_borg\_password: "{{ individual\_sys\[server\_id\].borg\_pass }}"

```

\---

This is the vault file that I created and is useful for understanding how auth-keys are looked up while stored in this secure format. It is sanitized and the keys used are only useful once anyway

```YAML

individual\_sys:

<span style="white-space: pre-wrap;"> ts-node1:</span>

<span style="white-space: pre-wrap;"> key: tskey-auth-kfjkdeadfkeeCNTRL-ueoiruyaoieuryaiosudfyoaisudyfudh</span>

<span style="white-space: pre-wrap;"></span>

<span style="white-space: pre-wrap;"> ts-node2:</span>

<span style="white-space: pre-wrap;"> key: tskey-auth-kfjkdeadfkeeCNTRL-ueoiruyaoieuryaiosudfyoaisudyfudh</span>

<span style="white-space: pre-wrap;"> ts-node3:</span>

<span style="white-space: pre-wrap;"> key: tskey-auth-kfjkdeadfkeeCNTRL-ueoiruyaoieuryaiosudfyoaisudyfudh</span>

```

\---

This is the inventory used for this project, Useful for understanding vars and the workflow with this script

```YAML

\### uspace stuff

\## Once configured, hosts from new\_guests get moved to guests

\## you should be able to switch hosts from ip addresses to mdns names

\## Systems configured like this could also be swapped over to mainline inventorys

\##

new\_guests:

<span style="white-space: pre-wrap;"> hosts:</span>

<span style="white-space: pre-wrap;"> tailscale-node3:</span>

<span style="white-space: pre-wrap;"> # ansible\_host: tailscale-node1.local</span>

<span style="white-space: pre-wrap;"> ansible\_host: 10.0.0.13</span>

<span style="white-space: pre-wrap;"> server\_id: ts-node6</span>

<span style="white-space: pre-wrap;"> subnets: 192.168.4.0/24</span>

<span style="white-space: pre-wrap;"></span>

<span style="white-space: pre-wrap;"> vars:</span>

<span style="white-space: pre-wrap;"> #ansible\_user: administrator</span>

<span style="white-space: pre-wrap;"> ansible\_user: borg</span>

<span style="white-space: pre-wrap;"></span>

guests:

<span style="white-space: pre-wrap;"> hosts:</span>

<span style="white-space: pre-wrap;"> tailscale-node1:</span>

<span style="white-space: pre-wrap;"> ansible\_host: tailscale-node1.local</span>

<span style="white-space: pre-wrap;"> # ansible\_host: 10.0.0.11</span>

<span style="white-space: pre-wrap;"> server\_id: ts-node1</span>

<span style="white-space: pre-wrap;"> subnets: 192.168.0.0/24</span>

<span style="white-space: pre-wrap;"></span>

<span style="white-space: pre-wrap;"> tailscale-node2:</span>

<span style="white-space: pre-wrap;"> ansible\_host: tailscale-node2.local</span>

<span style="white-space: pre-wrap;"> # ansible\_host: 10.0.0.12</span>

<span style="white-space: pre-wrap;"> server\_id: ts-node2</span>

<span style="white-space: pre-wrap;"> subnets: 192.168.0.0/24</span>

<span style="white-space: pre-wrap;"> vars:</span>

<span style="white-space: pre-wrap;"> ansible\_user: borg</span>

```

<span style="white-space: pre-wrap;">--- </span>

<span style="white-space: pre-wrap;">I did do a bit of work for a previous version of this project before i figured out custom cloud init files. below are those files. at this time I was creating a borg user, setting a borg user password, and targeting an admin account that required a password for running sudo commands. I was also passing in the whole enrollment command instead of just an auth-key which is much more cumbersome. ultimately this ended up being unnecessary. </span>

vars:

```yaml

ansible\_password: "{{ server\_passwords.admin }}"

ansible\_become\_password: "{{ server\_passwords.admin }}"

tailscale\_auth\_command: "{{ individual\_sys\[server\_id\].command}}"

new\_borg\_password: "{{ individual\_sys\[server\_id\].borg\_pass }}"

```

vault:

```YAML

server\_passwords:

admin: password123456!@#$

<span style="white-space: pre-wrap;"></span>

individual\_sys:

tsn1:

command: curl -fsSL https://tailscale.com/install.sh | sh &amp;&amp; sudo tailscale up --auth-key=tskey-auth-k8iqCx4jQV11CNTRL-xW3gNm6FuKf7ra49zQ32SfTKRJnVCGbC

borg\_pass: ez8aaBkNxRyKGeR9zhdxLRQGoJfoMEEiRuptxamr

<span style="white-space: pre-wrap;"></span>

tsn2:

command: curl -fsSL https://tailscale.com/install.sh | sh &amp;&amp; sudo tailscale up --auth-key=tskey-auth-kBxMEyjHu111CNTRL-3JzrF8UmkRacSdd4yHbuKav7udHYBkeua

borg\_pass: aoQCCYphdFvBhzxxnVQW4C6XfrtDtePKUeAhUxj9

```

<span style="white-space: pre-wrap;">here's the playbook I wrote for this its basically the same as one I ultimately used with a few slight tweaks. It also includes the steps to bootstrap a borg user. </span>

```YAML

\---

\- name: setup apt-cache proxy

hosts: new\_guests

become: true

vars\_files:

\- ./vars.yaml

\- ./vault.yaml

<span style="white-space: pre-wrap;"></span>

tasks:

<span style="white-space: pre-wrap;"></span>

\- name: remove apt-cache proxy info

ansible.builtin.file:

path: /etc/apt/apt.conf.d/00proxy

state: absent

register: proxy

<span style="white-space: pre-wrap;"></span>

\- name: reboot all hosts to apply changes

ansible.builtin.reboot:

when: proxy.changed

<span style="white-space: pre-wrap;"></span>

\- name: setup mdns and install general use packages

hosts: new\_guests

become: true

vars\_files:

\- ./vars.yaml

\- ./vault.yaml

<span style="white-space: pre-wrap;"></span>

tasks:

\- name: gather facts about packages installed

ansible.builtin.package\_facts:

manager: auto

\- name: install packages

ansible.builtin.apt:

pkg:

\- libnss-mdns

\- qemu-guest-agent

update\_cache: true

when: "'libnss-mdns' is not in ansible\_facts.packages"

<span style="white-space: pre-wrap;"></span>

\- name: apply hardening

hosts: new\_guests

become: true

vars\_files:

\- ./vars.yaml

\- ./vault.yaml

<span style="white-space: pre-wrap;"></span>

tasks:

<span style="white-space: pre-wrap;"></span>

\- name: Crowdsec repo's are installed via script

ansible.builtin.shell: curl -s https://install.crowdsec.net | sh

register: my\_output # &lt;- Registers the command output.

changed\_when: my\_output.rc != 0 # &lt;- Uses the return code to define when the task has changed.

when: "'crowdsec' is not in ansible\_facts.packages"

<span style="white-space: pre-wrap;"></span>

\- name: Crowdsec install

ansible.builtin.apt:

package:

\- crowdsec

update\_cache: true

when: "'crowdsec' is not in ansible\_facts.packages"

<span style="white-space: pre-wrap;"></span>

\- name: Crowdsec install

ansible.builtin.apt:

package:

\- crowdsec-firewall-bouncer-iptables

update\_cache: true

when: "'crowdsec' is not in ansible\_facts.packages"

<span style="white-space: pre-wrap;"></span>

<span style="white-space: pre-wrap;"></span>

\- name: Enroll in crowdsec console

ansible.builtin.command: sudo cscli console enroll -n {{ inventory\_hostname }} -e context clz8lrn840007lb085o6va59z

register: my\_output # &lt;- Registers the command output.

changed\_when: my\_output.rc != 0 # &lt;- Uses the return code to define when the task has changed.

when: "'crowdsec' is not in ansible\_facts.packages"

<span style="white-space: pre-wrap;"></span>

\- name: Add linux collection

ansible.builtin.command: sudo cscli collections install crowdsecurity/linux

register: my\_output

changed\_when: my\_output.rc != 0

when: "'crowdsec' is not in ansible\_facts.packages"

<span style="white-space: pre-wrap;"></span>

\- name: Restart crowdsec post enrollment to get stuff working

ansible.builtin.service:

name: crowdsec

state: restarted

when: "'crowdsec' is not in ansible\_facts.packages"

<span style="white-space: pre-wrap;"></span>

\- name: Create borg user

hosts: new\_guests

become: true

vars\_files:

\- ./vars.yaml

\- ./vault.yaml

<span style="white-space: pre-wrap;"></span>

tasks:

\- name: Check if users exist

ansible.builtin.user:

name: borg

check\_mode: true

register: test\_users

<span style="white-space: pre-wrap;"></span>

\- name: Set user password from vault based on host index for new user borg

ansible.builtin.user:

name: borg

shell: /bin/bash

groups: sudo

password: "{{ new\_borg\_password }}"

append: true

create\_home: yes

no\_log: true

when: "test\_users is false"

<span style="white-space: pre-wrap;"></span>

\- name: Add borg user to sudoers with no password

ansible.builtin.copy:

src: /home/borg/tailscale-project/borgaddin

dest: /etc/sudoers.d/

owner: root

group: root

mode: "0600"

when: "test\_users is false"

<span style="white-space: pre-wrap;"></span>

\- name: Turn off cloud init

ansible.builtin.file:

path: /etc/cloud/cloud-init.disabled

mode: "0600"

state: touch

<span style="white-space: pre-wrap;"></span>

\- name: Restart cloud-init

ansible.builtin.service:

name: cloud-init

state: restarted

<span style="white-space: pre-wrap;"></span>

\- name: Upload SSH key

ansible.posix.authorized\_key:

user: borg

key: "{{ lookup('file', '/home/borg/.ssh/id\_rsa.pub') }}"

state: present

when: "test\_users is false"

<span style="white-space: pre-wrap;"></span>

\- name: install and enroll tailscale on hosts

hosts: new\_guests

become: true

vars\_files:

\- ./vars.yaml

\- ./vault.yaml

<span style="white-space: pre-wrap;"></span>

tasks:

\- name: install and enroll host in tailscale

ansible.builtin.shell: "{{ tailscale\_auth\_command }}"

when: "'tailscale' is not in ansible\_facts.packages"

<span style="white-space: pre-wrap;"></span>

\- name: setup subnet routers pt 1

ansible.builtin.shell: echo 'net.ipv4.ip\_forward = 1' | sudo tee -a /etc/sysctl.d/99-tailscale.conf

\# when: "'tailscale' is not in ansible\_facts.packages"

\- name: setup subnet routers pt 2

ansible.builtin.shell: echo 'net.ipv6.conf.all.forwarding = 1' | sudo tee -a /etc/sysctl.d/99-tailscale.conf

\# when: "'tailscale' is not in ansible\_facts.packages"

\- name: setup subnet routers pt 3

ansible.builtin.shell: sudo sysctl -p /etc/sysctl.d/99-tailscale.conf

\# when: "'tailscale' is not in ansible\_facts.packages"

\- name: advertise routes for systems

ansible.builtin.shell: tailscale up --advertise-routes "{{ subnets }}"

```

I also wrote a few scripts to test things...

```YAML

\### This playbook is just a tester to see how the when flag(?)

\### it also checks out the package\_facts module

\### This playbook is useful for figuring out

<span style="white-space: pre-wrap;"></span>

\- name: check if packages exist, skip or run

hosts: new\_guests

become: true

gather\_facts: false

<span style="white-space: pre-wrap;"></span>

tasks:

\- name: gather facts about packages installed

ansible.builtin.package\_facts:

manager: auto

\- name: run arbitrary command if a specific package is installed

ansible.builtin.debug:

msg: this is a test1

when: "'libnss-mdns' is in ansible\_facts.packages"

<span style="white-space: pre-wrap;"></span>

\- name: run an arbitrary command if a package is not installed

ansible.builtin.debug:

msg: this is another more diffrent test

when: "'nmap' is not in ansible\_facts.packages"

<span style="white-space: pre-wrap;"></span>

\- name: run arbitrary command if a specific package is installed

ansible.builtin.debug:

msg: this is a test2

when: "'libnss-mdns' is not in ansible\_facts.packages"

<span style="white-space: pre-wrap;"></span>

\- name: run an arbitrary command if a package is not installed

ansible.builtin.debug:

msg: this is another more diffrent test2

when: "'nmap' is in ansible\_facts.packages"

```

```YAML

\---

\### This script sets a hostname on systems targeted

\### matches hostnames to the inventory hostname because why not

\- name: set hostname for hosts

hosts: targeted systems

become: true

tasks:

\- name: Set a hostname

ansible.builtin.hostname:

name: "{{ inventory\_hostname }}"

```

# Backup server setup

<span style="white-space: pre-wrap;">Once proxmox ve is installed on a system and proxmox backup server is installed on a separate server (ideally on the same subnet) you can get the two working together by adding proxmox backup server as a storage volume in the proxmox data center. Follow along with the required information and things will get backed up. From there you can save backups directly or you can setup scheduling. The scheduling is where I got to and stopped. We will likely want to do some resource pooling and decide who and what gets backed up where. </span>

# LXC container for lightweight remote access

<span style="white-space: pre-wrap;">I am running a lxc container for my access to the makerspace. here are the steps I took to stand up the lxc container. You can use this as a guide for standing up your own personal LXC container. you dont have to do tailscale on there but this has some info on securing the container. </span>

1\. login as root/PAM auth on the proxmox cluster

2\. get to the shell on a particular system

3\. run a proxmox convience script to standup a very basic ubuntu lxc container

4\. once the container is stood up and running get into the root console and do the following

<span style="white-space: pre-wrap;"> </span>1. change password for root `passwd` to something secure and your own

<span style="white-space: pre-wrap;"> </span><span style="white-space: pre-wrap;">2. modify system to prevent auto-login using the command below and remove the `--auto-login root` portion of the line that is there. </span>

```
nano /etc/systemd/system/container-getty@1.service.d/override.conf
```

<span style="white-space: pre-wrap;"> </span>3. reboot the container

<span style="white-space: pre-wrap;"> </span>4. login to your root user

<span style="white-space: pre-wrap;"> </span>5. run `wget https://github.com/YOUR-GITHUB-USERNAME.keys` to pull down your ssh keys

<span style="white-space: pre-wrap;"> </span>6. modify your /etc/ssh/sshd\_config file to allow for root login via ssh

<span style="white-space: pre-wrap;"> </span>7. verify you can ssh to the system

<span style="white-space: pre-wrap;"> </span>8. modify system hostname using proxmox gui under container &gt; DNS &gt; hostname

<span style="white-space: pre-wrap;"> </span>9. reboot to apply hostname

<span style="white-space: pre-wrap;"> </span>10. once all that works install tailscale the normal way you do that on linux servers.

<span style="white-space: pre-wrap;"> </span>11. setup subnet routing

<span style="white-space: pre-wrap;"> </span>12. \*\*TURN OFF SUBNET ROUTING ONCE IT WORKS\*\* we should be using the wireguard VPN for access. this is a backup in the case the wireguard is acting up

<span style="white-space: pre-wrap;">5. once the system is configured, your backdoor now should work just fine. I recomend not going crazy with this system. </span>