VMware Workstation seems to have a bug: when a new SCSI disk is added, the new /dev/ block device does not show up inside the VM. For example:

NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINTS
sda      8:0    0   50G  0 disk
├─sda1   8:1    0    1M  0 part
└─sda2   8:2    0   50G  0 part /

After adding a 200G disk through the VMware console, running lsblk again shows no change at all.

You need to run the following script to "refresh" the SCSI hosts:

# rescan every SCSI host so newly attached disks show up (run as root)
for h in $(ls /sys/class/scsi_host); do
    echo '- - -' > /sys/class/scsi_host/$h/scan
done
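Note that the redirect inside the loop needs a root shell. If you are running as a regular user, an equivalent variant using tee (a minimal sketch, same effect) is:

for h in /sys/class/scsi_host/host*; do echo '- - -' | sudo tee "$h/scan"; done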

Running lsblk again now shows the new disk:

NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINTS
sda      8:0    0   50G  0 disk
├─sda1   8:1    0    1M  0 part
└─sda2   8:2    0   50G  0 part /
sdb      8:16   0  200G  0 disk

References

[SOLVED] Virtual Hard Disk is added, but not showing using lsblk -d command

I upgraded a development machine from Ubuntu Server 20.04.6 to Ubuntu Server 24.04.1 LTS without any major problems. However, podman showed up as an [installed,local] package and was not upgraded automatically. A quick check showed that containers, images, and volumes were still intact.

  1. Uninstall the current version

I was previously using the podman packages from the kubic community repository. First uninstall with sudo apt remove podman, then remove the dependencies apt no longer needs: sudo apt autoremove

If you install the new podman directly at this point, it may fail with a message that /etc/containers/containers.conf is used by the containers-common package.

Try: sudo apt remove containers-common buildah crun. Before removing them, confirm they are local versions with sudo apt list --installed | grep installed,local.

If that still does not work, remove them one by one with dpkg:

sudo dpkg --remove crun
sudo dpkg --remove podman
sudo dpkg --remove containers-common

In short, make sure every old package is removed. There is no need to run purge (and you should not), since purging would delete your data.

  2. Install the new version

sudo apt install podman golang-github-containernetworking-plugin-dnsname

I am not sure why golang-github-containernetworking-plugin-dnsname has such an odd name in the Ubuntu 24.04 repositories; according to the documentation it should be called podman-plugins or podman-dnsname. Podman 4.x removed the dnsname plugin, so containers brought up via compose can no longer address each other directly by service name.

I found the package name with sudo apt search dnsname.

Without this package installed, you may see errors like the following:

WARN[0000] Error validating CNI config file /home/kiritow/.config/cni/net.d/wgop-net-wg0.conflist: [failed to find plugin "dnsname" in path [/usr/local/libexec/cni /usr/libexec/cni /usr/local/lib/cni /usr/lib/cni /opt/cni/bin]]

After installing it, the dnsname plugin should show up under /usr/lib/cni.
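A quick way to confirm it landed there (path taken from the paragraph above):

ls -l /usr/lib/cni/dnsname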

  3. Install Docker Compose

Since Docker Compose V2 is no longer a pip package and has been merged into the docker command, we need the standalone docker-compose binary that Docker provides: Install Compose standalone
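A minimal sketch of the standalone install, following that page (the version number below is only an example; check the releases page for the current one):

sudo curl -SL https://github.com/docker/compose/releases/download/v2.29.2/docker-compose-linux-x86_64 -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose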

Note: the containernetworking-plugins issue on Ubuntu 22.04 mentioned before seems to have been fixed; installing podman now pulls in that dependency correctly. See the earlier post.

References

containers/dnsname: name resolution for containers (the plugin repository has since been archived)

The references below are of limited value; the dnsname plugin documentation is more useful: Using the dnsname plugin with Podman

Podman network dns option not working with the DNS plugin enabled #20911

Impossible to override container’s DNS with network #17499

Small issues (and fixes) when moving to Ubuntu 23.04

Podman dnsname install issues

  1. Disable swap

Run sudo vim /etc/fstab, comment out the line containing /swap.img, then reboot with sudo reboot

If you do not want to reboot, you can turn swap off immediately with sudo swapoff -a
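If you prefer a one-liner over editing the file by hand, something like the following sed command should do it (a sketch; double-check /etc/fstab afterwards):

sudo sed -i '/\/swap.img/ s/^/#/' /etc/fstab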

  2. Prepare the container runtime
# Install containerd
curl -vL https://github.com/containerd/containerd/releases/download/v1.6.33/containerd-1.6.33-linux-amd64.tar.gz -O
sudo tar -xzvf containerd-1.6.33-linux-amd64.tar.gz -C /usr/local
wget https://raw.githubusercontent.com/containerd/containerd/main/containerd.service
sudo mv containerd.service /etc/systemd/system/
sudo systemctl daemon-reload
sudo systemctl enable --now containerd

# Install runc
curl -vL https://github.com/opencontainers/runc/releases/download/v1.1.13/runc.amd64 -O
sudo install -m755 runc.amd64 /usr/local/sbin/runc

# Install the CNI plugins
sudo mkdir -p /opt/cni/bin
curl -vL https://github.com/containernetworking/plugins/releases/download/v1.5.1/cni-plugins-linux-amd64-v1.5.1.tgz -O
sudo tar -xzvf cni-plugins-linux-amd64-v1.5.1.tgz -C /opt/cni/bin/

Switch to root with sudo su and run:

mkdir /etc/containerd/
containerd config default > /etc/containerd/config.toml

Edit the config file: vim /etc/containerd/config.toml

Change the following:

...
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
BinaryName = ""
CriuImagePath = ""
CriuPath = ""
CriuWorkPath = ""
IoGid = 0
IoUid = 0
NoNewKeyring = false
NoPivotRoot = false
Root = ""
ShimCgroup = ""
SystemdCgroup = false # <--- change this false to true
...

Save the file and restart the service: sudo systemctl restart containerd
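If you would rather not edit the file by hand, the same change can be made with sed (this mirrors the consolidated script later in this post):

sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/g' /etc/containerd/config.toml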

  3. Configure networking: sudo vim /etc/sysctl.d/k8s.conf
net.ipv4.ip_forward=1

Apply it: sudo sysctl --system
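To confirm the setting took effect (it should print net.ipv4.ip_forward = 1):

sysctl net.ipv4.ip_forward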

  4. Install kubeadm

sudo apt-get update
# apt-transport-https may be a dummy package; if so, you can skip that package
sudo apt-get install -y apt-transport-https ca-certificates curl gpg

# If the directory `/etc/apt/keyrings` does not exist, it should be created before the curl command, read the note below.
# sudo mkdir -p -m 755 /etc/apt/keyrings
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.30/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg

# This overwrites any existing configuration in /etc/apt/sources.list.d/kubernetes.list
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.30/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list

sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl
  5. Start kubelet
sudo systemctl enable --now kubelet
  6. Create the cluster with kubeadm (this may take a while)

The --pod-network-cidr here must match the Flannel configuration below; Flannel's default is 10.244.0.0/16.

sudo kubeadm init --pod-network-cidr=10.77.0.0/16

Seeing the following means the cluster was created successfully:

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

If you need to join other nodes to the cluster but forgot to copy the command printed after installation, you can create a new bootstrap token (valid for one day); the command below prints the join command to use.

kubeadm token create --print-join-command
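To list the tokens that already exist (and their expiry times), kubeadm also provides:

kubeadm token list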
  7. Copy the configuration used by kubectl
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
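Alternatively, for the root user, the kubeadm init output suggests simply pointing KUBECONFIG at the admin config:

export KUBECONFIG=/etc/kubernetes/admin.conf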
  8. Install Flannel
wget https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml
sed -i s_10.244.0.0/16_10.77.0.0/16_g kube-flannel.yml
kubectl apply -f kube-flannel.yml

Check the status with kubectl get nodes

NAME        STATUS   ROLES           AGE     VERSION
lsp-k8s-1   Ready    control-plane   5m17s   v1.30.3

Note: Flannel may fail to install on newer Ubuntu Server releases (24.04), with the following error in its logs:

Failed to check br_netfilter: stat /proc/sys/net/bridge/bridge-nf-call-iptables: no such file or directory

Fix it as follows:

modprobe br_netfilter
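To make the module load persist across reboots, one common approach is a modules-load.d drop-in (my addition, not from the original notes):

echo br_netfilter | sudo tee /etc/modules-load.d/k8s.conf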

Install Istio (optional)

curl -L https://istio.io/downloadIstio | sh -

istioctl x precheck
istioctl install

Consolidated script

I only put this consolidated script together some time after writing the sections above, since I had been busy with GKE and forgot about the local cluster. The versions in it may differ slightly from those mentioned earlier.

This script needs to run on every node. The cluster-creation and cluster-join commands differ between nodes, so they are not included here.

#!/bin/bash
set -euxo pipefail
sudo swapoff -a

curl -vL https://github.com/containerd/containerd/releases/download/v1.7.24/containerd-1.7.24-linux-amd64.tar.gz -O
sudo tar -xzvf containerd-1.7.24-linux-amd64.tar.gz -C /usr/local
wget https://raw.githubusercontent.com/containerd/containerd/main/containerd.service
sudo mv containerd.service /etc/systemd/system/
sudo systemctl daemon-reload
sudo systemctl enable --now containerd

curl -vL https://github.com/opencontainers/runc/releases/download/v1.2.2/runc.amd64 -O
sudo install -m755 runc.amd64 /usr/local/sbin/runc

sudo mkdir -p /opt/cni/bin
curl -vL https://github.com/containernetworking/plugins/releases/download/v1.6.0/cni-plugins-linux-amd64-v1.6.0.tgz -O
sudo tar -xzvf cni-plugins-linux-amd64-v1.6.0.tgz -C /opt/cni/bin/

sudo mkdir -p /etc/containerd/
sudo containerd config default | sudo tee /etc/containerd/config.toml
sudo sed -i s/SystemdCgroup\ =\ false/SystemdCgroup\ =\ true/g /etc/containerd/config.toml

sudo systemctl restart containerd

echo 'net.ipv4.ip_forward=1' | sudo tee /etc/sysctl.d/k8s.conf
sudo sysctl --system

sudo apt-get update
# apt-transport-https may be a dummy package; if so, you can skip that package
sudo apt-get install -y apt-transport-https ca-certificates curl gpg

# If the directory `/etc/apt/keyrings` does not exist, it should be created before the curl command, read the note below.
# sudo mkdir -p -m 755 /etc/apt/keyrings
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.31/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg

# This overwrites any existing configuration in /etc/apt/sources.list.d/kubernetes.list
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.31/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list

sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl

sudo systemctl enable --now kubelet

Note: in a PVE environment, if you take the shortcut of cloning a VM to create more nodes, you need a script like the one below to make sure the cloned machines work properly. Run it before the script above and reboot. See the earlier post "Proxmox如何完整的复制一个VM" (how to fully clone a VM in Proxmox).

set -xe
sudo vim /etc/hostname
sudo vim /etc/hosts
echo -n | sudo tee /etc/machine-id
sudo rm /var/lib/dbus/machine-id
sudo ln -s /etc/machine-id /var/lib/dbus/machine-id
sudo rm -rf /etc/ssh/ssh_host*
sudo dpkg-reconfigure openssh-server

References

Creating a cluster with kubeadm

Fix sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables.

Ava Max - Salt (Lyrics) oh, oh oh oh oh oh↑ I’m all out of salt, I’m not gonna cry

Kelly Clarkson - Stronger (What Doesn’t Kill You) [Official Video]

突然的自我 伍佰AndChinaBlue ♫「等不完守候,如果仅有此生 又何用待从头」♫ 動態歌詞Lyrics Music ♫ White_Lyric

谭咏麟 - 卡拉永远Ok『不想归去挂念你 对影只得我自己』【動態歌詞Lyrics】

Press Play Walk Away - S3RL & Synthwulf Laptop DJ

【Future Bass】MYUKKE. - YUMEND

Will Sparks - Untouchable (feat. Aimee Dalton) [Official Music Video]

Blasterjaxx & Marnik - Heart Starts to Beat (Official Music Video)

《苹果香》狼戈 (哈族民歌) Apple Scent - Langge 六星街里还传来, 巴扬琴声吗

Recommended tracks from earlier months

June 2024

I recently bought some machines on Alibaba Cloud's lightweight servers, all 1C/0.5G. The Ubuntu 20.04 machines run perfectly fine, but the Ubuntu 22.04 ones become unresponsive after a while: they still answer ping, but SSH logins time out, and the host-side monitoring shows CPU and disk I/O spiking. Logging in over VNC shows messages like these:

[44211.553196] Out of memory: Killed process 95466 (apt-check) ... shmem-rss:0kB, UID:0 pgtables:300kB oom_score_adj:0
[44213.738115] systemd[1]: Failed to start Refresh fwupd metadata and update motd.
[63358.618014] Out of memory: Killed process 123118 (apt-check) total-vm:190376kB ... UID:0 pgtables:364kB oom_score_adj:0
[63359.212458] systemd[1]: Failed to start Daily apt download activities.
[74055.581349] Out of memory: Killed process 126756 (apt-check) total-vm:190376kB, ... UID:0 pgtables:376kB oom_score_adj:0
[121996.542525] Out of memory: Killed process 210249 (apt-check) total-vm:100788kB, ... UID:0 pgtables:228kB oom_score_adj:0
[121996.882131] systemd[1]: snapd.service: Watchdog timeout (limit 5min)
[121997.311208] systemd[1]: Failed to start Daily apt download activities.
[179235.303036] Out of memory: Killed process 292938 (apt-check) total-vm:190376kB, ... UID:0 pgtables:372kB oom_score_adj:0

Yet apt automatic updates had already been disabled on this machine. After some searching, it turned out snapd can cause this problem:

  1. List the installed snaps: snap list
Name    Version        Rev    Tracking       Publisher   Notes
core20  20240416       2318   latest/stable  canonical✓  base
lxd     5.0.3-80aeff7  29351  5.0/stable/…   canonical✓  -
snapd   2.63           21759  latest/stable  canonical✓  snapd
  2. Remove each snap package (mind the dependency order)
sudo snap remove --purge lxd
sudo snap remove --purge core20
sudo snap remove --purge snapd
  3. Remove snapd itself
sudo apt remove snapd
sudo apt purge snapd
  4. Add an apt preferences file so snapd does not get reinstalled

sudo vim /etc/apt/preferences.d/nosnap.pref

Package: snapd
Pin: release a=*
Pin-Priority: -10
  5. Clear the apt cache and update again (optional)

sudo apt clean && sudo apt update

After removing snapd completely, the machine has now been running normally for two or three days…

Follow-up

It later turned out that this was still not enough. The actual fix was to add swap to this machine with its tiny amount of memory: Alibaba Cloud does not create a swap file, and it even sets swappiness to 0! To guard against anything odd happening, I set this up as a crontab job that runs every minute.

Create and enable swap

dd if=/dev/zero of=/swap.img bs=1M count=1024
chmod 600 /swap.img
mkswap /swap.img
swapon /swap.img

Add the per-minute jobs with sudo crontab -e

@reboot swapon -s | grep -q swap || swapon /swap.img
@reboot echo 60 > /proc/sys/vm/swappiness
* * * * * swapon -s | grep -q swap || swapon /swap.img
* * * * * echo 60 > /proc/sys/vm/swappiness
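A more conventional way to persist the same settings (not what I used here, just a sketch) is an fstab entry plus a sysctl drop-in:

echo '/swap.img none swap sw 0 0' | sudo tee -a /etc/fstab
echo 'vm.swappiness=60' | sudo tee /etc/sysctl.d/99-swappiness.conf
sudo sysctl --system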

References

Terminate unattended-upgrades or whatever is using apt in ubuntu 18.04 or later editions

How to Remove Snap Packages in Ubuntu Linux

How do I configure swappiness?

How to read oom-killer syslog messages?

How can I check if swap is active from the command line?

Linux Partition HOWTO: 9. Setting Up Swap Space

How to Clear RAM Memory Cache, Buffer and Swap Space on Linux

Swappiness: What it Is, How it Works & How to Adjust

The device is an HP Ultrium 6-SCSI drive, backing up to LTO-6 tapes that hold roughly 2TB each.

Install the mt-st tool to manage the tape drive.
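On Ubuntu/Debian the package name should simply be mt-st:

sudo apt install mt-st

Then check the drive status: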

sudo mt -f /dev/nst0 status

The output looks roughly like this:

SCSI 2 tape drive:
File number=0, block number=0, partition=0.
Tape block size 0 bytes. Density code 0x5a (LTO-6).
Soft error count since last status=0
General status bits on (41010000):
BOT ONLINE IM_REP_EN

tar makes no particular guarantee about file order in the archive; the final order is whatever readdir returns, which may not be what you want. You can generate a sorted file list ahead of time and feed it to tar.

find tobackupdirname -print0 | sort -z > /tmp/filelist.txt
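Since the list is NUL-delimited, a quick way to eyeball it (my addition, not from the original notes):

tr '\0' '\n' < /tmp/filelist.txt | head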

If the total size of the files to archive exceeds the capacity of one tape, enable multi-volume support (running this inside tmux is recommended so the copy is not interrupted):

sudo tar -cvf /dev/nst0 -M --no-recursion --null -T /tmp/filelist.txt

tar will then prompt for a tape change whenever the current tape is full:

...
Prepare volume #2 for ‘/dev/nst0’ and hit return:

Swap the tape, press Enter, and tar will continue the backup.
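Restoring works the same way with the multi-volume flag; a minimal sketch (extracting into the current directory, rewinding the drive first):

sudo mt -f /dev/nst0 rewind
sudo tar -xvf /dev/nst0 -M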

You can of course also try LTFS for backups, but that is beyond the scope of this note.

References

How do I create a tar file in alphabetical order?

Tar Splitting Into Standalone Volumes

I have a Windows VM created long ago that boots with OVMF UEFI, but its main disk is attached via IDE. Because the guest has no VirtIO drivers installed, switching the disk directly to SCSI fails with INACCESSIBLE_BOOT_DEVICE; switching to SATA behaves the same way.

After some searching I found a solid answer:

  1. Shut down the VM
  2. Attach a Windows installation ISO; I used Win10 22H2 2024.07 Business Edition
  3. Attach the VirtIO driver ISO
  4. Set the boot devices to the two ISOs just attached and untick the other disks. (I ran into a snag here: the VirtIO ISO must have a higher boot priority than the Windows ISO, otherwise the driver disc cannot be found once the Windows installer is running.)
  5. Start the VM, enter the Windows setup screen, and press Shift+F10 to open a console
  6. Use commands like dir C:\ and dir D:\ to work out which drive is the system disk and which is the VirtIO driver disc.
  7. Assuming the system disk is C: and the driver disc is E:, install the driver with: dism /image:C:\ /add-driver /driver:E:\vioscsi\w10\amd64
  8. Once the command finishes, shut down with: wpeutil shutdown -s
  9. Remove the ISO mounts, detach the original IDE disk, and re-attach it via SCSI
  10. Start the VM; it now boots into the system normally.

After going through the whole process, the VM was successfully moved off the IDE disk. I also upgraded the Machine definition, and so far I have not seen anything abnormal.

References

Change disk type (IDE/SATA to SCSI) for existing Windows machine

When installing PVE 8.2.4 from the ISO on a machine whose other disk already had PVE 7.4 installed, the installer asked whether to rename the old volume group to pve-OLD, and this could not be skipped.

After installation, the old PVE on the original disk is kept, but this also means the disk cannot be repurposed (Disk Management reports it as in use and Wipe Disk fails).

Follow these steps to remove pve-OLD.

Remove the LVs

lvdisplay
lvchange -an /dev/pve-OLD-0BFE9176
lvremove /dev/pve-OLD-0BFE9176

Remove the VG

vgdisplay
vgremove pve-OLD-0BFE9176

Remove the PV

pvdisplay
pvremove /dev/nvme1n1p3
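Before wiping the disk it may be worth confirming nothing from pve-OLD is left behind (a quick check, not part of the original steps):

pvs
vgs
lvs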

The original disk can now be used for other purposes.

References

How to delete PVE Old

jordanhillis/pvekclean

After installing PVE 8.2.4 from the ISO, syslog keeps reporting errors:

...
Aug 31 22:29:31 pve-sg pve-firewall[1727]: status update error: iptables_restore_cmdlist: Try `iptables-restore -h' or 'iptables-restore --help' for more information.
Aug 31 22:29:41 pve-sg pve-firewall[1727]: status update error: iptables_restore_cmdlist: Try `iptables-restore -h' or 'iptables-restore --help' for more information.
Aug 31 22:29:51 pve-sg pve-firewall[1727]: status update error: iptables_restore_cmdlist: Try `iptables-restore -h' or 'iptables-restore --help' for more information.
Aug 31 22:30:01 pve-sg pve-firewall[1727]: status update error: iptables_restore_cmdlist: Try `iptables-restore -h' or 'iptables-restore --help' for more information.
Aug 31 22:30:11 pve-sg pve-firewall[1727]: status update error: iptables_restore_cmdlist: Try `iptables-restore -h' or 'iptables-restore --help' for more information.
Aug 31 22:30:21 pve-sg pve-firewall[1727]: status update error: iptables_restore_cmdlist: Try `iptables-restore -h' or 'iptables-restore --help' for more information.
Aug 31 22:30:31 pve-sg pve-firewall[1727]: status update error: iptables_restore_cmdlist: Try `iptables-restore -h' or 'iptables-restore --help' for more information.
Aug 31 22:30:41 pve-sg pve-firewall[1727]: status update error: iptables_restore_cmdlist: Try `iptables-restore -h' or 'iptables-restore --help' for more information.
Aug 31 22:30:51 pve-sg pve-firewall[1727]: status update error: iptables_restore_cmdlist: Try `iptables-restore -h' or 'iptables-restore --help' for more information.
Aug 31 22:31:01 pve-sg pve-firewall[1727]: status update error: iptables_restore_cmdlist: Try `iptables-restore -h' or 'iptables-restore --help' for more information.
Aug 31 22:31:11 pve-sg pve-firewall[1727]: status update error: iptables_restore_cmdlist: Try `iptables-restore -h' or 'iptables-restore --help' for more information.
Aug 31 22:31:21 pve-sg pve-firewall[1727]: status update error: iptables_restore_cmdlist: Try `iptables-restore -h' or 'iptables-restore --help' for more information.
Aug 31 22:31:31 pve-sg pve-firewall[1727]: status update error: iptables_restore_cmdlist: Try `iptables-restore -h' or 'iptables-restore --help' for more information.
...

Running ipset or iptables -vnL directly gives:

ipset v7.10: Cannot open session to kernel.
command 'ipset save' failed: exit code 1
...

iptables v1.8.9 (legacy): can't initialize iptables table...: Table does not exist (do you need to insmod?)
Perhaps iptables or your kernel needs to be upgraded.

It looks like the new version enables ebtables but something is still broken somewhere. I have not found a particularly good solution; the current workaround is Datacenter -> Firewall -> Options, set ebtables to No. After a reboot, the error log entries are gone.

There is another odd issue: with ebtables enabled (the default after installation), CIFS mounts fail, yet nothing shows up in dmesg or journalctl. I have not found a reliable fix for this either; after turning ebtables off, it works again.

References

status update error: iptables_restore_cmdlist

The web UI only provides a way to add Storage, not to remove it; removal has to be done manually from the node shell.

Assume the storage to remove is called localssd.

  1. Disable the mount unit (this also unmounts it)
systemctl status mnt-pve-localssd.mount
systemctl disable --now mnt-pve-localssd.mount
  2. Confirm the mount point is unmounted, then remove the directory
ls -al /mnt/pve/localssd
rmdir /mnt/pve/localssd
  3. Delete the mount unit file
rm /etc/systemd/system/mnt-pve-localssd.mount
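After deleting the unit file, it does not hurt to reload systemd so the stale unit definition is dropped (standard systemd housekeeping, not part of the original steps):

systemctl daemon-reload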
  4. Remove the storage configuration
nano /etc/pve/storage.cfg

Find the localssd section and delete it:

dir: localssd
path /mnt/pve/localssd
content images
is_mountpoint 1
nodes pve-sg
shared 0

References

[SOLVED] Removing old Storage from GUI

Proper way to remove old kernels from PVE 8.0.4 & which are safe to remove

Linux Quick Tip: How to Delete or Remove LVM volumes