Hacker News | iamd3vil's comments

Another option for incremental backups is Restic [0]. It supports backing up to Backblaze B2, Amazon S3, and lots of other places.

[0] https://restic.net/
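As a rough sketch of what a B2-backed Restic setup looks like (bucket name, repo path, and credentials below are placeholders, not from the comment):

```shell
# One-time setup: credentials go in environment variables,
# then initialize the repository inside the B2 bucket.
export B2_ACCOUNT_ID="<account-id>"
export B2_ACCOUNT_KEY="<account-key>"
restic -r b2:my-bucket:backups init

# Each subsequent run is an incremental snapshot.
restic -r b2:my-bucket:backups backup ~/documents
```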


Please do share the playlist. Would love to listen to it.


I think a better option would be to create a WireGuard tunnel between the Raspberry Pi and the remote server instead of an SSH tunnel. Then there is no need to add or change ports and restart the tunnel for every service.


While I think this is true (any other VPN software would work too, though), I want to point out that you can actually bring up a tun interface using SSH with the "-w local_tun[:remote_tun]" flag somewhat easily if you want to. It is also possible to make forwarding work in either direction using the integrated SOCKS proxy via the "-R" or "-D" flags.
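For concreteness, a sketch of both approaches (the server needs "PermitTunnel yes" in sshd_config, both ends need root to create tun devices, and all hostnames/addresses are placeholders):

```shell
# Layer-3 tunnel: creates tun0 on both ends over the SSH session.
sudo ssh -w 0:0 root@server.example.com
# Then assign addresses to the new interfaces, e.g.:
#   ip addr add 10.0.0.1/30 dev tun0 && ip link set tun0 up   # server
#   ip addr add 10.0.0.2/30 dev tun0 && ip link set tun0 up   # client

# SOCKS forwarding variants:
ssh -D 1080 user@server.example.com   # local dynamic SOCKS5 proxy on :1080
ssh -R 1080 user@server.example.com   # reverse dynamic SOCKS5 (OpenSSH 7.6+)
```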


> you actually can bring up a tun interface using ssh

Beware of TCP over TCP issues[0] when using SSH for tun.

[0]: http://sites.inka.de/bigred/devel/tcp-tcp.html


OP's solution sounds good enough for their needs.

Solutions can always be improved, but it's not always worth doing that.


OP has literally written in the gist about exploring a way to map the entire port range and avoiding doing this, so the non-hacky way is to set up something like a WireGuard tunnel. That's the reason I suggested doing this instead of an SSH tunnel, which has other disadvantages like TCP over TCP.
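A minimal WireGuard config sketch for the Pi side (keys, addresses, and the endpoint are placeholders): once the interface is up, every port on the Pi is reachable via its tunnel address, with no per-port forwards.

```ini
# /etc/wireguard/wg0.conf on the Raspberry Pi (illustrative values)
[Interface]
PrivateKey = <pi-private-key>
Address = 10.0.0.2/24

[Peer]
PublicKey = <server-public-key>
Endpoint = server.example.com:51820
AllowedIPs = 10.0.0.0/24
PersistentKeepalive = 25
```

Bring it up with `wg-quick up wg0` on both ends.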


SSH tunnels do not run TCP inside of them, just the bytes of the connection data itself.

The only TCP in use is the TCP connection of the SSH connection between hosts.


Ohh, TIL. But my other point still stands.


It used to be common (or at least not unheard of) to run PPP over SSH, which has this problem.


Don't ever use free VPNs, especially something like Hotspot Shield. You can check https://thatoneprivacysite.net/ for a comparison of VPN services.

I use Mullvad VPN, which supports both OpenVPN and WireGuard (which is the reason I use it) and costs 5 euros per month. You can pay with something like Bitcoin if you want anonymity.


Thanks for cautioning me about Hotspot Shield.

5 euros/month sounds expensive, but does it give a bigger bang for your buck?


I think it does. If you want a cheaper VPN, you should check out Private Internet Access (PIA). It's cheaper there if you subscribe to an annual plan. PIA doesn't support the WireGuard protocol, though.


Thanks for this.

I have already bought Adblock by Futuremind from the App Store. Went through https://thatoneprivacysite.net/ but didn't see Adblock anywhere. Reckon Futuremind's Adblock ain't that good :(

May I ask on what basis you judge the reliability of VPNs?

I can already see OpenVPN and WireGuard support, but apart from that, any other major parameter?

Any doc I could read to understand VPNs better?

Thanks again for taking interest in my issue.

PS - Are you talking about the VPN by Private Internet Access, developed by Anonymous VPN Service and provided by London Trust Media, Inc., on the App Store?


As the data shows, 99.3% of the cash was returned. This doesn't include Grameen banks, so the number may be even higher. The percentage of black money has increased. The ratio of cash transactions to digital transactions has also increased. Demonetization is a total disaster, and I can't forgive Modi for making the country suffer like this and taking 100 lives for nothing.


It also does not include cash still held in countries like Bhutan and Nepal.


We package it into a Docker image using Distillery and then deploy to Kubernetes, using Peerage with KubeDNS for automatic clustering. It was tricky to figure everything out at the start, but once figured out, it's pretty easy and works really well. The only issue we had at the start was figuring out sys.config, but we use environment variables for almost everything and set `REPLACE_OS_VARS=true` while building the Docker image, which solved most of our issues.
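An illustrative Dockerfile fragment for this pattern (base image tag, app name, and paths are placeholders, not the commenter's actual setup): with REPLACE_OS_VARS=true, ${VAR} placeholders in the release's sys.config/vm.args are substituted from the container environment when the release boots.

```dockerfile
FROM elixir:1.6-alpine
# Baked into the image so every container gets runtime substitution.
ENV REPLACE_OS_VARS=true MIX_ENV=prod
WORKDIR /app
COPY . .
RUN mix deps.get && mix release --env=prod
# Distillery release entrypoint; "myapp" is a placeholder app name.
CMD ["/app/_build/prod/rel/myapp/bin/myapp", "foreground"]
```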


> I could go into this in more detail if anyone's curious.

Please do!


We've experienced a plethora of platform issues that were exacerbated by how BEAM consumes resources on the system. Here's a few that come to mind:

- CPU utilization can differ wildly between Haswell and Skylake. On Skylake processors, our CPU utilization jumped by 20-30% due to Skylake using more cycles to spin. Luckily, all of that CPU time was spent spinning, and our actual "scheduler utilization" metric remained roughly the same (actually, on Skylake it was lower!).

- The default allocator settings can call malloc/mmap a lot, and BEAM is sensitive to latency on those calls. Under host memory bandwidth pressure, BEAM can grind to a halt. Tuning BEAM's allocator settings is imperative to avoid this. Namely, MHlmbcs, MHsbct, MHsmbcs and MMscs. This was especially noticeable after Meltdown was patched.

- Live host migrations and BEAM sometimes are not friends. Two weeks ago, we discovered a defect in GCP live migrations that would cause an 80-90% performance degradation on one of our services during the source-brownout migration phase.

GCP Support/Engineering has been excellent in helping us with these issues and taking our reports seriously.


> Live host migrations and BEAM sometimes are not friends. Two weeks ago, we discovered a defect in GCP live migrations that would cause an 80-90% performance degradation on one of our services during the source-brownout migration phase.

I thought that GCP live migrations were completely transparent for the kernel and the processes running in the VM. I'd be happy to read a bit more about the defect that made BEAM unhappy.


> Default allocator settings can call malloc/mmap a lot, and is sensitive to latency on those calls. Under host memory bandwidth pressure, BEAM can grind to a halt. Tuning BEAM's allocator settings is imperative to avoid this. Namely, MHlmbcs, MHsbct, MHsmbcs and MMscs. This was especially noticeable after meltdown was patched.

Excessive allocations and memory bandwidth are two very different things. Often they don't overlap, because to max out memory bandwidth you have to write a fairly optimized program.

Also, are the allocations because of BEAM or is it because what you are running allocates a lot of memory?


BEAM's default allocator settings will allocate often. It's just how the VM works. The Erlang allocation framework (http://erlang.org/doc/man/erts_alloc.html) is a complicated beast.
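For reference, allocator flags like the ones mentioned above are passed as emulator arguments, e.g. in a Distillery rel/vm.args file. The flag names are real erts_alloc options, but the values below are made-up placeholders, not recommendations:

```
## Hypothetical erts_alloc tuning (illustrative values only)
+MHlmbcs 5120   # eheap_alloc: largest multiblock carrier size (KB)
+MHsmbcs 1024   # eheap_alloc: smallest multiblock carrier size (KB)
+MHsbct 2048    # eheap_alloc: singleblock carrier threshold (KB)
+MMscs 4096     # mseg_alloc: super carrier size (MB)
```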

We were able to simulate this failure condition synthetically by inducing memory bandwidth pressure on the guest VM.

We noticed that during certain periods of time, not caused by any workload we ran, the time spent doing malloc/mmap would increase 10-20x, but the number of calls would not.


> We noticed that during certain periods of time, not caused by any workload we ran, the time spent doing malloc/mmap would increase 10-20x, but the number of calls would not.

I'm curious what tools you used to discover this.



Thanks! It would be great if you wrote an article on these with more details, as it would be really helpful to both the Elixir and Erlang communities.


If you don't trust PIA's or any VPN's clients, you can always use an OpenVPN client directly, provided the VPN supports OpenVPN.
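A sketch of what that looks like with the stock OpenVPN client (file names are placeholders; providers like PIA publish downloadable .ovpn profiles):

```shell
# --config loads the provider's profile; --auth-user-pass reads the
# account credentials from a file instead of prompting interactively.
sudo openvpn --config pia_us_east.ovpn --auth-user-pass pia_credentials.txt
```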


PIA works with OpenVPN; their Windows app was (maybe still is) just a pretty interface on top of OpenVPN. But the trust has nothing to do with the client, imo; it lies more in trusting them when they say they don't do any logging or eagerly cooperate with adversaries. They claim they don't log, but how do I prove that?


I do trust them, and I have used them, but I prefer “trust, but verify.” It’s also just the right thing to do, going open source.


Yeah, true. Also, by open-sourcing the Chrome extension, maybe someone can port this to Firefox. I think it should be relatively easy after Firefox's recent move to WebExtensions.


I think WebSockets are a much better fit for this if you don't want the reconnection overhead. Since WebSockets are bidirectional, you can keep the connection open and send all requests through it as well as receive responses over it. You can also send binary frames over WebSockets if you want to save bandwidth. We do this at work and it works pretty nicely.
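A rough illustration of the one-persistent-connection idea using the third-party websocat CLI (an assumption for demonstration, not something from the thread; the URL is a placeholder):

```shell
# Assumes websocat is installed. -b selects binary frames; stdin lines
# go out as requests and responses print to stdout, all over a single
# long-lived connection with no per-request reconnect.
websocat -b ws://api.example.com/socket
```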


We use Elixir, Go, and C++ for the backend and JS, Java/Kotlin, and C# for the frontend. We also use PostgreSQL, RabbitMQ, and Redis.


How are you managing deployment of this many technologies? What does your CI/CD workflow look like?


We mostly use Docker with Docker Swarm. Currently exploring a move to Rancher. We use GitLab CI for CI/CD. Our workflow looks like this: a release is tagged, CI tests and builds a Docker image which is pushed to our private registry, and then we use Ansible to deploy the new release on our servers. We mostly run on-prem in production and can't use any cloud, given the regulations and limitations we have.
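An illustrative .gitlab-ci.yml mirroring that flow (registry URL, job commands, and playbook name are placeholders, not the commenter's actual pipeline):

```yaml
stages: [test, build, deploy]

test:
  stage: test
  script:
    - mix test            # placeholder test command

build:
  stage: build
  only: [tags]            # runs when a release is tagged
  script:
    - docker build -t registry.example.com/app:$CI_COMMIT_TAG .
    - docker push registry.example.com/app:$CI_COMMIT_TAG

deploy:
  stage: deploy
  only: [tags]
  script:
    - ansible-playbook -e "tag=$CI_COMMIT_TAG" deploy.yml
```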

