Knowing when qube storage is reaching its limit

Hi all.
I’ve started using Qubes as my default OS, and I think Qube Manager is missing a column: it correctly displays disk usage (if refreshed), but not the usage ratio! I’d like to know when the assigned storage is reaching its limit…
Do you think it’s possible?
Thanks a lot.
Cheers.


Congrats! It’s quite a challenge. Be sure to read the documentation :slight_smile: It will save you from a lot of frustration.

Unfortunately, as far as I know it is not possible. There are already several issue reports about better informing users when this happens, so it’s a known issue.

https://github.com/QubesOS/qubes-issues/issues/5984

Including the ratio in that same column (or in another column) could be a solution, but since few people use the Qube Manager, it is probably better to have something more visible (like a notification when it’s about to be full – see the GitHub issue).

lol, imho the only reason to use Qube Manager is that it puts everything under control in a single window / point of view.
Anyway, if you already have something else planned… let’s wait for the next release!
Cheers,
M.


Do you have any reliable statistics about it? I personally use it all the time.


No statistics, just my perception. I might be wrong. I know some folks use it all the time.

Hi there.
So let’s step back to my original question…
I’ve read “somewhere in time” :wink: that even if you assign a size to an AppVM’s private disk, the space is only allocated when it’s actually used.
If that’s correct, a temporary workaround might be a saltstack automation that sets all AppVM private disks to the same size (which I could raise to the maximum available/needed, from dom0, I think…).
It would be best if the saltstack command’s effects could be limited to AppVMs only.
Do you think it’s possible?
Thanks a lot.

Perfectly possible. A simple bash script using built-in Qubes utilities like qvm-volume may even be easier than saltstack.
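If you do want to go the Salt route you asked about, a rough sketch using Salt’s generic `cmd.run` state might look like this (the file path, qube names, and 20G size are illustrative assumptions on my part, not a tested formula):

```sls
# /srv/salt/resize-private.sls — apply from dom0 with:
#   sudo qubesctl state.apply resize-private
# Sketch only: the qube names are hard-coded examples; a real state
# would select AppVMs dynamically (e.g. via a Jinja call to qvm-ls).
{% for vm in ['work', 'personal'] %}
resize-private-{{ vm }}:
  cmd.run:
    - name: qvm-volume resize {{ vm }}:private 20G
{% endfor %}
```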

It’s possible, and you can also do this with a simple bash script, iterating over `qvm-ls --raw-list` (or a list of the qubes you want to update) and then calling `qvm-volume resize`.
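A minimal sketch of that loop, run from dom0 (it assumes only the standard `qvm-ls` and `qvm-volume` tools; the `DRYRUN` guard is my own addition for safety):

```shell
# Sketch (dom0): resize every qube's private volume to the same size.
# Set DRYRUN=1 to print the commands instead of running them.
# Note: qvm-volume resize may refuse to *shrink* a volume.
resize_private_volumes() {
  local size="${1:?usage: resize_private_volumes SIZE (e.g. 20G)}"
  local run="${DRYRUN:+echo}"
  for vm in $(qvm-ls --raw-list); do
    [ "$vm" = dom0 ] && continue   # never touch dom0
    $run qvm-volume resize "$vm:private" "$size"
  done
}
```

Run `DRYRUN=1 resize_private_volumes 20G` first to review what it would do.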

But you are then at risk of hugely overcommitting your available
space, which makes it far more likely that you will overextend the LVM pool,
and that has the potential to break your whole Qubes install.
And, let’s be honest, if a user can’t keep an eye on the free space in a few
qubes, how likely is it that they will keep an eye on dom0?
Much better to extend just the few qubes that you want to use for greater
storage.

Qubes will also warn you with a pop up if disk space is short.
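If you’d rather check the pool yourself than wait for the pop-up, something like the following could be run in dom0. It assumes the default thin pool name `qubes_dom0/pool00`; adjust for your install:

```shell
# Sketch (dom0): warn when the LVM thin pool is getting full.
# Assumes the default Qubes pool name qubes_dom0/pool00.
check_pool() {
  local threshold="${1:-80}"
  local pct
  pct=$(sudo lvs --noheadings -o data_percent qubes_dom0/pool00 | tr -d ' ')
  # data_percent looks like "61.42"; compare the integer part
  if [ "${pct%%.*}" -ge "$threshold" ]; then
    echo "WARNING: thin pool at ${pct}% (threshold ${threshold}%)"
  fi
}
```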

Hi there.
Thanks for your suggestions.
I often use snaps in my qubes and, I don’t know why, I’ve never received an alert; when a snap becomes “broken”, I take it as a sign that it’s time to grow the private volume (even if the space isn’t completely exhausted yet).

As for my first request, my interest in saltstack comes from my job, so I’m trying to learn as much about it as possible.
However, the pages I found on GitHub assume a level of basic knowledge I don’t have…
That’s why I asked.

Cheers,
M.

Hi there.
Well, I think I’ve found the problem… I installed a new test qube, added snap, and:

```
[user@my-new-qube ~]$ sudo df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/xvda3      9.6G  5.6G  3.6G  61% /
none            9.6G  5.6G  3.6G  61% /usr/lib/modules
devtmpfs        1.9G     0  1.9G   0% /dev
tmpfs           1.0G     0  1.0G   0% /dev/shm
tmpfs            61M 1012K   60M   2% /run
/dev/xvdb       2.0G  1.7G  211M  90% /rw
tmpfs           1.0G   12K  1.0G   1% /tmp
tmpfs            31M   76K   31M   1% /run/user/1000
/dev/loop0       33M   33M     0 100% /var/lib/snapd/snap/snapd/12057
/dev/loop1       56M   56M     0 100% /var/lib/snapd/snap/core18/2066
/dev/loop2       66M   66M     0 100% /var/lib/snapd/snap/gtk-common-themes/1515
/dev/loop3      163M  163M     0 100% /var/lib/snapd/snap/gnome-3-28-1804/145
/dev/loop4      622M  622M     0 100% /var/lib/snapd/snap/libreoffice/218
/dev/loop5      142M  142M     0 100% /var/lib/snapd/snap/chromium/1628
/dev/loop6      129M  129M     0 100% /var/lib/snapd/snap/teams/5
/dev/loop7      241M  241M     0 100% /var/lib/snapd/snap/zoom-client/149
/dev/loop8       51M   51M     0 100% /var/lib/snapd/snap/snap-store/542
/dev/loop9      219M  219M     0 100% /var/lib/snapd/snap/gnome-3-34-1804/72
[user@my-new-qube ~]$
```

As you can see, `/rw` is 90% used.

But from dom0 it shows this ratio:

```
my-new-qube 10/2048
```

So, now what?
Thanks a lot.
Cheers,
M.

How did you get this?

As per your suggestions:

```
qvm-ls --all --running --exclude dom0 -O DISK
```

and

```
qvm-ls --all --running --exclude dom0 -O PRIV-MAX
```

Indeed, if I stop my-new-qube and restart Qube Manager, I get the correct value: 1818.62 MiB.
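As a workaround for the stale column, a dom0 script can compute the ratio on demand from `qvm-volume info`. A sketch, assuming the output contains `size` and `usage` lines in bytes (field names may vary by release):

```shell
# Sketch (dom0): print the private-volume fill ratio for every qube,
# parsing the "size" and "usage" lines of qvm-volume info.
private_usage_report() {
  local vm size usage
  for vm in $(qvm-ls --raw-list); do
    [ "$vm" = dom0 ] && continue
    size=$(qvm-volume info "$vm:private" | awk '$1=="size"{print $2}')
    usage=$(qvm-volume info "$vm:private" | awk '$1=="usage"{print $2}')
    # Skip qubes we could not query, and avoid division by zero
    [ -n "$size" ] && [ "$size" -gt 0 ] || continue
    printf '%s %d%%\n' "$vm" $((usage * 100 / size))
  done
}
```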

(added some markdown formatting for better readability)

Hi there.
Please help me understand: is there a known bug I’m facing?
Or does it only happen with snap installs?
Thanks a lot.
M.