• 1 Post
  • 30 Comments
Joined 7 months ago
Cake day: September 25th, 2025

  • Yeah, I get that. I do it because my pangolin is segregated, so if that internet-facing layer is penetrated, there’s not much else they’ll have access to. Similarly, if my WiFi is penetrated, there are just a few devices on it. Many of my services run on Kubernetes, distributed and load-balanced across a bunch of cheap devices, so they need reverse proxying at the ingress anyway. There are a few other reasons for keeping internal-to-internal traffic off of the pangolin server, or even the router, while still being able to use a single domain name for each service. IPv6 doesn’t do static IP addresses quite the way IPv4 does, so I don’t want to hard-code IP addresses or even port assignments into services that back other services. The database server, for example, originally ran on the NAS; switching it over to another system only required changing the internal reverse proxy, not every service that used it. I like abstraction like that.
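    That indirection can be sketched in a few lines of Python (hypothetical names, not my actual config): services connect via a name, and only the proxy table knows the real backend.

```python
# Toy sketch of reverse-proxy indirection: clients look services up by name;
# only the proxy's routing table knows the current backend address.
proxy_table = {
    "db.home.example": ("192.168.10.5", 5432),   # hypothetical: DB on the NAS
}

def resolve(service_name):
    """Return the current backend (host, port) for a service name."""
    return proxy_table[service_name]

# Every service connects via the name...
assert resolve("db.home.example") == ("192.168.10.5", 5432)

# ...so migrating the database means one change at the proxy, nothing else.
proxy_table["db.home.example"] = ("192.168.20.9", 5432)  # moved off the NAS
assert resolve("db.home.example") == ("192.168.20.9", 5432)
```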


  • Yeah, I have my own DNS server that caches from multiple backing servers as needed. I’m not worried about DNS blocking; it’s never been effective. The issue is that ISP-level blocking usually isn’t just DNS blocking, it also involves IP-level blocking, much of which doesn’t work on IPv6, which is one reason (besides plain resistance to replacing old hardware) IPv6 hasn’t been adopted widely by consumer ISPs. If you have only a single IP address, unchangeable by anyone other than them, they have much more control, and your traffic is much easier to track and manipulate.

    And there is blocking at even lower layers of the network stack. ISPs can intercept and mangle packets’ destinations at any layer, because your traffic must go through them, and so your networking equipment must trust theirs to route traffic properly. They don’t do it now mostly because it would mean adding a lot more processing power to analyze every packet. I do it all the time at home to block ads and other malicious traffic. But if they’re required by law to upgrade to that level of traffic analysis, it opens the floodgates for all kinds of manipulation, politically or capitalistically nefarious in nature.
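    The ad-blocking side is simple in principle; here’s a minimal sketch (illustrative only, with made-up blocklist entries): a blocked name resolves to a sinkhole address, so the client never contacts the ad host at all.

```python
# Minimal DNS sinkhole sketch: blocked names get 0.0.0.0, everything else
# is passed to an upstream resolver.
BLOCKLIST = {"ads.example.com", "tracker.example.net"}  # hypothetical entries

def resolve(name, upstream):
    """Return a sinkhole address for blocked names, else ask upstream."""
    if name in BLOCKLIST:
        return "0.0.0.0"       # sinkhole: the ad request goes nowhere
    return upstream(name)      # normal recursive resolution

fake_upstream = lambda name: "93.184.216.34"  # stand-in for a real resolver
assert resolve("ads.example.com", fake_upstream) == "0.0.0.0"
assert resolve("example.com", fake_upstream) == "93.184.216.34"
```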


  • Yeah, I have caddy and traefik in front of most of my home-based services, except for a few web UIs like the router’s. Pangolin just receives incoming connections and routes them to the correct reverse proxy in the correct VLAN for that service.

    I have VLANs to separate services that are more public-facing from very private ones that only certain devices should be able to connect to directly. Basically: one VLAN for IoT devices that need to connect to the internet often but that only certain things should access directly; one for very private things like my NAS, database server, and 3D printer that rarely if ever need internet access; one for my personal devices (laptop, desktop, phone, TV), which sit behind a Pi-hole for ad blocking; and one guest VLAN for guests, but mostly for my work computer, which really likes to snoop.





  • Yeah, but most of the data centers recently brought online to feed the LLM/“AI” bubble have triggered the restart of a bunch of retired coal plants, as well as old “dirty” nuclear plants that generate fissile material for the new nuclear weapons Trump ordered built, plus other nuclear waste we already don’t have anywhere to store long-term. Part of the excuse is that the demand from these centers is too volatile for green energy. Plus, Musk and Trump killing off the programs to build a network of car charging stations means electric car production for the US market has been drastically cut despite gains in other countries. And cutting the incentives for heat pumps and for replacing natural gas furnaces and water heaters has dampened the boom heat pumps were having here, and are still having elsewhere.

    And the general public believes that natural gas in homes and gasoline in cars is cheaper than electric, although that is not true; it only looks that way at the point of sale because of subsidies.

    Anyway, more “dirty” energy sources are in use than a few years ago, so any gains in clean energy have been significantly outpaced by increases in dirty energy use in the US. That isn’t the case in many other countries, like China and many EU countries, that don’t have such large tax subsidies letting the general public consume fossil fuels more cheaply out of pocket.



  • More like training it wrong. It is just a mimicking engine, not intelligent. If it’s trained on data that includes bad information (like the near entirety of the internet), it will periodically include that bad information.

    Also, wrong settings. Raising the confidence threshold required before it presents something to the user would at least partly improve accuracy, but it would also increase how often it says it doesn’t know how to do something. And for corporate executives, admitting complete ignorance is unfathomable, so of course they don’t want their products admitting it.
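    The threshold idea is trivially simple in sketch form (hypothetical API, nothing like a real LLM stack): any candidate answer below the cutoff gets replaced with an explicit admission of ignorance.

```python
# Sketch of confidence-threshold gating: return the best answer only if
# its confidence clears the bar, otherwise admit ignorance.
def answer_with_threshold(candidates, threshold=0.8):
    """candidates: list of (answer, confidence) pairs."""
    best_answer, best_conf = max(candidates, key=lambda c: c[1])
    if best_conf < threshold:
        return "I don't know"
    return best_answer

assert answer_with_threshold([("Paris", 0.95), ("Lyon", 0.03)]) == "Paris"
assert answer_with_threshold([("maybe X", 0.4), ("maybe Y", 0.35)]) == "I don't know"
```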



  • If configured properly, it can usually bypass the router altogether. In my setup I have several VLANs for different traffic, so for me it’s important to have a layer 3 switch that can handle the routing between VLANs. But if you don’t use VLANs, a layer 2 switch will build a MAC address table and bypass the router once it knows where the traffic is going. That way only your DNS queries and the like get sent to the router for internal traffic on the LAN. Then the issue is just traffic going to the internet.
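    The MAC-learning behavior can be modeled in a few lines (greatly simplified toy, made-up MACs): once both ends of a conversation have been seen, frames go port-to-port without ever touching the router.

```python
# Toy model of layer-2 MAC learning: the switch remembers which port each
# source MAC arrived on, and forwards known destinations directly.
class Layer2Switch:
    def __init__(self):
        self.mac_table = {}  # MAC address -> port number

    def receive(self, src_mac, dst_mac, in_port):
        self.mac_table[src_mac] = in_port      # learn where the sender lives
        if dst_mac in self.mac_table:
            return [self.mac_table[dst_mac]]   # known: forward on one port
        return ["flood"]                       # unknown: flood all ports

sw = Layer2Switch()
assert sw.receive("aa:aa", "bb:bb", in_port=1) == ["flood"]  # bb:bb unknown yet
sw.receive("bb:bb", "aa:aa", in_port=2)                      # both ends learned
assert sw.receive("aa:aa", "bb:bb", in_port=1) == [2]        # router bypassed
```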

    For the internet side, you just need to configure the firewall to drop packets (not reject, just drop/ignore) on ports you don’t use, and use something like fail2ban or CrowdSec to make your router outright drop malicious and LLM-bot traffic to the ports you do use, which would otherwise have to be processed. That generally reduces processing load, unless you have self-hosted services that generate a real ton of traffic, in which case you can move those to VPSs outside of your network.
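    The verdict logic amounts to something like this sketch (illustrative Python, not real firewall syntax; the ports and IPs are made up): DROP means silence, so scanners just time out, while anything not explicitly allowed never gets processed.

```python
# Sketch of drop-first firewall policy: banned sources and unused ports are
# silently dropped; only real service traffic is accepted for processing.
OPEN_PORTS = {443, 51820}        # hypothetical: HTTPS and WireGuard
BANNED_IPS = {"203.0.113.7"}     # e.g. populated by fail2ban/CrowdSec

def verdict(src_ip, dst_port):
    if src_ip in BANNED_IPS:
        return "DROP"            # known-malicious source: ignore entirely
    if dst_port not in OPEN_PORTS:
        return "DROP"            # unused port: no reply, probe just times out
    return "ACCEPT"              # legitimate service traffic gets processed

assert verdict("198.51.100.2", 443) == "ACCEPT"
assert verdict("198.51.100.2", 23) == "DROP"    # telnet probe: silence
assert verdict("203.0.113.7", 443) == "DROP"    # banned even on open ports
```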

    Those are my general strategies at a very high level.


  • Wow, I run OPNsense in Proxmox along with a Pi-hole and a couple of other small services and never hit 100% CPU on an Intel N100. My mini-PC box has four 2.5-gigabit network ports, though I only use two of them, one for the LAN and one to the modem. I do also have a managed switch, though, with a couple of 10-gigabit ports, a couple of 2.5, and the rest 1-gigabit. The switch is likely taking some of the load off the router, I suppose. Might be worth trying a low-end managed switch. If you’re in the US, do it quickly, though, as a lot of networking equipment is about to spike in price since the administration banned all new foreign-made equipment and none is made in the US.


  • I use OPNsense on a mini-PC with an N100 processor. I got a decent one from HUNSN and added memory. I installed Proxmox, and OPNsense runs in that along with a Pi-hole instance and a few other services, and it is really fast compared to any router I’ve had in the past.

    I also use a RAM disk for OPNsense caching and logs, and anything I want to keep gets copied out to my NAS for permanent storage. That helps a lot with performance and SSD wear, but with memory so expensive from the LLM bubble, it might cost more now than a few years ago when I got mine.



  • Problem is that the user has to be presented that webpage, and the results have to make their way back to each component. If you have a bunch of microservices that aren’t user-facing (whether on the internet or a private network), how do those services get the user data they need to do their thing? Monolithic server applications are bad practice outside of extremely simple web apps if you want something scalable. So you still need a database of local users that the services can share privately. That means a built-in user database that is just linked to the SSO user by the user-facing service. Otherwise, every microservice has to authenticate with the user separately each time the token expires, which means lots of browser sessions somehow getting from a microservice with no web front end to the user.

    Anyway, just an example, but when a local user database is required anyway, SSO is always additional development work and can impose significant limitations on the application architecture. That’s why it’s not commonly implemented at first. There need to be better protocols that are open source and well tested. OIDC is my current favorite in many cases, but it has limitations; for example, logging out or switching between users in the same browser is a pain. Most proprietary apps use proprietary solutions because of those limitations, and they feel (often incorrectly) that their scheme is obfuscated enough not to be susceptible to attack despite its simplicity. Doing SSO right is hard, so implementing something from scratch isn’t feasible, and when it is done, it’s usually vulnerable.
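    The linking pattern I mean looks roughly like this (hypothetical schema, not any particular framework): the user-facing service authenticates via OIDC once, maps the token’s `sub` claim to a local record, and the internal services share that record, never the browser session.

```python
# Sketch of linking an SSO identity to a local user table: the OIDC "sub"
# claim is the stable key; a local record is created on first login.
local_users = {}  # oidc_sub -> local user record

def get_or_create_local_user(oidc_sub, display_name):
    """Map an OIDC subject claim to a local user, creating one on first login."""
    if oidc_sub not in local_users:
        local_users[oidc_sub] = {"id": len(local_users) + 1, "name": display_name}
    return local_users[oidc_sub]

u1 = get_or_create_local_user("idp|abc123", "alice")   # first login: created
u2 = get_or_create_local_user("idp|abc123", "alice")   # later login: same row
assert u1["id"] == u2["id"] == 1
```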



  • Problem is requiring a browser when it’s not primarily a web interface. Even if initial setup is web-based, a lot of the time background processes exist that don’t traverse the internet, especially in higher-security situations, so exposing those components to the internet just to get external credentials is not worth it, and then an additional proxying component is required. Anyway, the idea is that it can add a significant amount of complexity to anything more than a simple, single-component web application.


  • This too would likely require compromising at least one of the devices, or at the very least both users’ ISPs, or some other fairly detailed and highly targeted attack. But none of that would require compromising Signal’s servers, and the same attacks would make any system’s key exchange vulnerable, even self-hosted ones.

    Simply compromising Signal’s servers might allow disrupting key exchanges so they never succeed, making it impossible for those users to communicate at all, but that isn’t really a MITM, at least if we assume there aren’t defects in the client apps.

    The key exchange is much more complex than something like TLS and designed specifically so that the server can’t interfere. With true e2ee, the key never passes through the server. This isn’t like many other apps that claim e2ee but really mean end-to-server with one key and server-to-end with another, with decryption and re-encryption at the server so users can access older messages on new devices and the like. Signal just connects the users to each other. The apps do the rest.
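    The basic shape of such an exchange can be shown with a toy Diffie-Hellman in Python. To be clear, Signal’s actual X3DH/Double Ratchet protocols are far more elaborate, and these numbers are deliberately insecure toy values; the point is only that the server relays public values and never sees the key.

```python
# Toy Diffie-Hellman over a tiny prime: the server only ever relays the
# *public* values; the shared key is derived independently on each device.
P, G = 23, 5                       # toy public parameters (insecure on purpose)

alice_secret, bob_secret = 6, 15   # private keys: never leave each device
alice_public = pow(G, alice_secret, P)   # relayed through the server
bob_public = pow(G, bob_secret, P)       # relayed through the server

# Each side combines its own secret with the other's public value.
alice_key = pow(bob_public, alice_secret, P)
bob_key = pow(alice_public, bob_secret, P)

assert alice_key == bob_key        # same key on both ends
# The server saw only alice_public and bob_public, which don't reveal the key.
```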

    They could probably do something if they totally took over the entire Signal network infrastructure, but it’s definitely not something they could do undetected. And if a government took over the entire infrastructure, security-conscious people would stop using it immediately, so it wouldn’t really be worth the monetary and political cost. Otherwise China and others would have already done that to all secure communications. And again, this isn’t Signal-specific.



  • It’s unlikely encryption would be compromised since the keys never leave the device. The user’s device would have to be compromised for that. Decrypting messages on Signal servers without the keys takes too many resources to be feasible en masse, even for a state actor. And the current app has no method to transfer those private/decryption keys.

    But Signal is not private; it is only secure. Two totally different things. A bad actor could uniquely identify a user, which users they have communicated with, and how often, just not the content of the messages. That metadata is stored on Signal’s servers and the company has access to it. That is the tradeoff for ease of use and keeping malicious accounts to a minimum, versus an anonymous IM app.