I'm looking to organise my paper mail with the help of a scanner and some document management system for Linux.
Does anybody have any suggestions?
The paperless-ngx project is sort of what I'm looking for, but I don't really want or need to run it in a self-hosted manner. I have a self-hosted server on which I could easily add it, but since I don't really need or want this to be available online in any way (not even on my local home network), I don't really want that overhead.
I would prefer an application in the manner of what Calibre is for ebooks. That is, it operates on a locally stored library and that's it. No web server.
https://nltimes.nl/2023/08/30/dutch-residents-will-ditch-cars-sustainable-transport-system
To build a fully climate-neutral transport system in the Netherlands, many citizens will have to give up their cars, Jan Willem Erisman, the government's new chief climate adviser and chairman of the Scientific Climate Council, told the AD.
I've been using Emacs since 2010. I use Doom Emacs now, though at one point in the past I wrote my own overcomplicated config. I've grown used to it, but sometimes, when Emacs chokes on some input due to its single-threaded nature, I have time to wonder if there's something better for me out there.
I tried a few IDEs in the past, but none of them really suited me. Therefore, I put some thought into what I'm looking for and was wondering if the community knows something that fits these modest requirements:
Personally, I don't think these are particularly demanding, but surprisingly a lot of IDEs have failed me on the terminal requirement or on remote editing. I have all of this in Emacs, and to me these are must-have features.
I think VS Code ticks most of these boxes, but the telemetry puts me off.
Any suggestions? I'm okay with paid IDEs.
Does anybody have enough experience with both systems to compare them?
I'm currently using ifupdown on my Debian server as that's the default, but it seems that the modern way of managing the local network is via systemd-networkd, so I'm contemplating putting in the effort to migrate.
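For context, this is roughly what I expect the migration to look like for a simple static setup (an untested sketch; the interface name and addresses are placeholders):

```
# /etc/network/interfaces (current ifupdown style)
auto eth0
iface eth0 inet static
    address 192.168.1.10
    netmask 255.255.255.0
    gateway 192.168.1.1

# /etc/systemd/network/10-eth0.network (systemd-networkd equivalent)
[Match]
Name=eth0

[Network]
Address=192.168.1.10/24
Gateway=192.168.1.1
# DNS entries here are handed to systemd-resolved, if that's enabled
DNS=192.168.1.1
```

plus removing the corresponding stanza from /etc/network/interfaces and running systemctl enable --now systemd-networkd.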
Would those of you who have experience with it recommend it?
In my short investigation, I have made the following observations:
Note: It seems my original post from last week didn't make it to lemmy.world from kbin (I can't seem to find it), so I'm reposting it. Apologies to those who may have already seen this.
I'm looking to deploy some form of monitoring across my self-hosted servers and I'm a bit confused about the different options.
I have a small network of three machines that I would like to monitor. I am not looking for a solution that lets me monitor tens, hundreds, or thousands of nodes. Furthermore, I am more interested in being able to observe metrics for each node individually rather than in aggregate. Each of these machines performs a different task, so aggregate metrics from these machines are not particularly meaningful. However, collecting all the metrics centrally so that I can have a single dashboard to view them all in one convenient place is definitely something I would like.
With that said, I have been trying to understand the different (popular) options that are available, and I would like to hear about the community's experience with them and whether anybody has advice on any of these in light of my requirements above.
Prometheus seems like the default go-to for monitoring. This would require deploying a node_exporter on each node, a prometheus service, and a grafana dashboard. That's all fine, I can do that. However, from all that I'm reading it doesn't seem like Prometheus is optimised for my use case of monitoring each node individually. I'm sure it's possible, but I'm concerned that because this is not what it's meant for, it would take me ages to set it up such that I'm happy with it.
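For reference, my understanding is that the Prometheus side would boil down to a scrape config along these lines (a sketch; hostnames are placeholders and each node runs node_exporter on its default port 9100):

```
# /etc/prometheus/prometheus.yml (sketch; hostnames are placeholders)
global:
  scrape_interval: 15s

scrape_configs:
  - job_name: 'nodes'
    static_configs:
      - targets:
          - 'server1.lan:9100'
          - 'server2.lan:9100'
          - 'server3.lan:9100'
```

Each target gets its own instance label, so per-node Grafana dashboards should be possible by filtering on that; my worry is more about how much dashboard building it takes to get something I'm happy with.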
Netdata seems like a comprehensive single-device monitoring solution. It also appears that it is possible to run your own registry to help with distributed monitoring. Not gonna lie, the netdata dashboard looks slick. An important additional advantage is that it comes packaged on Debian (all my machines run Debian). However, it looks like it does not store the metrics for very long. To solve that I could also set up InfluxDB and Grafana for long-term metrics. I could use Prometheus instead of InfluxDB in this arrangement, but I'm more likely to deploy a bunch of IoT devices than I am to deploy servers needing monitoring, which makes InfluxDB a bit more future-proof for me as it could be reused for IoT data.
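From skimming the docs (so take this with a grain of salt), my understanding is that the long-term storage part would mean enabling one of Netdata's exporting connectors on each node and pointing it at InfluxDB's graphite-compatible listener, roughly like this (hostname and port are placeholders):

```
# /etc/netdata/exporting.conf (sketch based on my reading of the docs)
[exporting:global]
    enabled = yes

# ship metrics to InfluxDB via its graphite line-protocol listener
[graphite:influxdb]
    enabled = yes
    destination = influxdb-host:2003
    update every = 10
```

with the graphite input enabled on the InfluxDB side and Grafana then querying InfluxDB for the long-term dashboards. If anybody has done this in practice, I'd love to hear whether it's as straightforward as it looks.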
Cockpit is another single-device solution, which additionally provides direct control of the system. The direct control is probably more of a drawback than a plus for me: I would never let Cockpit be accessible from outside my home network, whereas I wouldn't mind that so much for dashboards with read-only data (still behind some authentication, of course). It's also probably not built for monitoring specifically, but I included it in the list in case somebody has something interesting to say about it.
What's everybody's experience with the above solutions, and does anybody have advice specific to my situation? I'm currently leaning towards Netdata with my own registry at first, and adding InfluxDB and Grafana for long-term metrics later.
I run a self-hosted server at home on which I run a bunch of personal stuff (like Nextcloud etc.). To avoid pointing DNS records at my home router, I run a reverse proxy on a VPS that I rent (from Scaleway, FWIW).
Today I was trying to figure out to what extent that exposes my data to my VPS provider and whether I can do something about it. Disclaimer: this is just a hobby exercise. I'm not paranoid, I just want to learn for myself how to improve the security of my setup.
My reverse proxy terminates the SSL connection and then proxies the traffic over a wireguard tunnel to my home server. This means that (a) data is decrypted in the RAM of the VPS and (b) the certificates and their private keys live unencrypted in the storage of the VPS. In other words, the VPS provider, if they want to, can read all the traffic to and from my home server unencrypted.
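Concretely, the relevant part of the nginx config on the VPS looks roughly like this (simplified; the domain and wireguard address are placeholders):

```
# VPS: terminates TLS, then proxies plain HTTP over the wireguard tunnel
server {
    listen 443 ssl;
    server_name cloud.example.org;

    ssl_certificate     /etc/letsencrypt/live/cloud.example.org/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/cloud.example.org/privkey.pem;

    location / {
        proxy_pass http://10.0.0.2;   # home server's wireguard address
        proxy_set_header Host $host;
    }
}
```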
I was thinking that I could solve both problems by using Nginx's SSL pass-through feature. This would allow me to not terminate SSL on the VPS, solving (a), and to move the certificates to my home server, solving (b).
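What I was playing around with is the stream module, something along these lines replacing the https server block above (a sketch; the wireguard address is a placeholder):

```
# VPS: forwards the raw TLS stream; the certificates now live only on the home server
stream {
    server {
        listen 443;
        proxy_pass 10.0.0.2:443;   # home server over wireguard
    }
}
```

(If I ever needed to route different hostnames to different backends, my understanding is that ssl_preread and the $ssl_preread_server_name variable would allow that without decrypting anything.)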
But just as I was playing around with it, I realised that SSL pass-through would not solve the problem of trying to protect my data from the VPS provider. As long as my DNS records point at the VPS provider's servers, the VPS provider can always get their own certificates for my domains and do a MitM attack. Therefore, I might as well keep the certificates on the VPS since I still have to trust them not to make their own behind my back.
In the end I concluded that as long as I use a VPS provider to route my traffic to my home server, there is no foolproof way to secure my data from them. Intuitively it makes sense: the data physically crosses their hardware, so they will have access to it. The only way to stop that would be to update the DNS records to point directly at my home server, which I don't want to do.
Is this correct thinking or is there some way to prevent the VPS provider from seeing my data?
Again, I'm trying to solve this problem as a hobby exercise. The most sensitive data that I have is stored encrypted at the filesystem level and I only decrypt it locally on my own machine to work on it. Therefore, the truly sensitive data that would cost me a lot if compromised is never available unencrypted on the VPS. Due to the overhead of this encryption and other complications, I don't do this for all my files.
@dr_robot@kbin.social