Hacker News | kfogel's comments

We are happy to be providing this public service :-). I wish the term were better known outside tech; it's useful in so many contexts.


So many stories like this about Slack.

We use Zulip (https://zulip.org/) for our corporate chat, and we've never looked back. It's been good, and it's fully open source. We self-host, but paid hosting is easy to get too if you want.


I love Zulip. We used it before our small firm was purchased by a large company that moved us to Teams. Great software!


Unless I'm missing something, though, Zulip seems to be exactly the same? That is, it's a SaaS with no OSS software and no self-hostable alternative. The only difference is they haven't hiked their prices... yet.

At this point anyone looking to avoid a price hike like the one described above should probably consider something they'll have more control over.

I'd probably go with my own Mastodon server if I was a company that needed any such communication tool. I'm sure there are other alternatives out there too


It's OSS and self-hostable. And it's got a great UI and the most joyous technology I've ever had the pleasure of using. https://zulip.com/self-hosting/


Oh, so I was missing something!

That was not very obvious from their landing page!

Well in that case, carry on!


> That was not very obvious from their landing page!

It says in bold letters:

"Your data is yours!

For ultimate control and compliance, self-host Zulip’s 100% open-source software"


Well yeah but I bet slack has similar wording on their site. In this case they apparently meant it, but to me that just registers as marketing speech.

I guess I've been on the internet too long, my brain automatically blacks certain language out, like a biological spam filter.


> Well yeah but I bet slack has similar wording on their site.

...You could go to the Slack website right now and see? We're on the internet. It's all on the internet. We can literally just check.

Doesn't seem to mention anything about being open source, anything privacy-related, data, or hosting.


Sadly as with many such products, if you want SSO and the like, you'll still end up paying per user per month. That gets stupid expensive quick


Or not.

> When you self-host Zulip, you get the same software as our Zulip Cloud customers.

> Unlike the competition, you don't pay for SAML authentication, LDAP sync, or advanced roles and permissions. There is no “open core” catch — just freely available world-class software.

The optional pricing plans for self-hosted mention that you are buying email and chat support for SAML and other features, but I don't see where they're charging for access to SAML on self-hosted Zulip.


That's exciting! I didn't catch that from the pricing page, thank you for clarifying :)


Go to Product > Self-hosting.

You might notice it's 100% free software.

Now, there is always the question of how a company used Slack: just some ad-hoc fast communication channels like "general", "food", and "events", or in-depth usage including video conferences and channels for every squad/project/sprint/whatever.

But the relevant thing to realize is that there is a subtle but very relevant difference between a tool focused on social networking and one focused on workplace communication,

and Mastodon has a very clear focus on the former while Zulip has a clear focus on the latter.



It is open source and you can self host it.


Wow. This project was the cause of a very long and intense discussion about mis-use of the term "open source". See https://github.com/n8n-io/n8n/issues/40#issuecomment-5397146... for details (lands mid-thread -- you might want to scroll back to see the start, and if you read the whole thing to the end then you deserve some sort of award!).

TL;DR: The author originally tried to call n8n "open source" but while using a non-open-source license. After much discussion, he kept the license but stopped using the label "open source", to the relief of many people.

That half-decade-old thread is still what I point to when I want to explain to someone why preserving the specificity of the term "open source" matters.


Xlife

I believe it implements Bill Gosper's hashlife quadtree algorithm (already mentioned elsewhere in the comments here).

Xlife is unbelievably fast.


Most of the comments so far are about the temperature and the closeness to the sun, and, hey, I get it: those are both amazing to think about. But to me even more amazing is... 0.16% of the speed of light?? Yikes.


Pretty sure it's 0.064%; not sure why the article got it wrong. Still impressive, though.


Still. ~200,000 m/s (= ~430,000 mph) is unfathomably fast.
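Those figures are easy to sanity-check (a quick sketch, assuming the ~430,000 mph top speed quoted above):

```python
# Convert the probe's reported top speed and compare it to c.
mph = 430_000
m_per_s = mph * 0.44704             # 1 mph = 0.44704 m/s exactly
c = 299_792_458                     # speed of light in m/s

print(round(m_per_s))               # ~192,000 m/s, i.e. ~192 km/s
print(round(100 * m_per_s / c, 3))  # ~0.064% of c
```

which agrees with the 0.064% figure rather than the article's 0.16%.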


> ~200,000 m/s is unfathomably fast.

It's about 110,000 fathoms per second.


Or if you prefer leagues, at that speed it would still take 9 minutes to reach the depth in Jules Verne's book.
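For the curious, the arithmetic (a sketch, assuming the nautical league of ~5.556 km that the 9-minute figure implies; Verne's metric lieue of 4 km would give closer to 7 minutes):

```python
league_km = 5.556              # nautical league; a metric lieue is 4 km
distance_km = 20_000 * league_km   # ~111,000 km
speed_km_s = 200               # ~200,000 m/s, the probe's top speed
seconds = distance_km / speed_km_s
print(round(seconds / 60, 1))  # ~9.3 minutes
```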


The title refers to the distance the _Nautilus_ traveled while submerged, not the depth it reached.


Until I realized this, the title was quite confusing. If “20,000 leagues” were referring to depth, it would be enough to go all the way through the Earth, exit the other side, and then make it a quarter of the way to the moon.


Yeah he really needed a comma.

20,000 Leagues, Under the Sea

I think it reads cleaner.


It is, although I was still a little surprised it's on the order of a minute to go NYC to Tokyo at that speed. My intuition was it would be much less time.


Light loops around the earth ~7.5x a second IIRC, so a few orders of magnitude less makes sense.


Light is fast, but it isn't imperceptible. The original experiments to measure it in a lab involved spinning rigs and mirrors between hills. When dealing with objects the size of continents, such as phone or other communication systems, the delays are well within our abilities to detect.


Terrestrial phone and internet traffic carried by undersea cables is gated by the relays more so than by c. The ping time from the US to Australia (one way) is about 115 ms (rounding down, using the most optimistic data).

Light can travel over 34,000km in that time. The great arc distance from LA to Sydney is just over 12,000km. In all likelihood the fiber line connecting them doesn't follow that arc, but it shouldn't be too far out of limits. So about 2/3 of the latency is caused by relays and switching equipment.

It gets even worse for satellite, because (until Starlink) communications satellites were in geosynchronous orbit, 35,000km above the equator. So talking over one means a 70,000km round trip, which makes the signal's path over 5x longer than the linear distance (across the surface) between those two cities.


>> it gets even worse for satellite, because (until starlink) communications satellites are in geosynchronous orbit

No. There were and are other communications satellites in lower orbits. SpaceX did not invent the concept of low-orbit communications satellites. The first satellites of the 90+-satellite Iridium constellation were launched in the late 90s, and that system is still online in low orbit today. Before that there were various military/state-owned satellites. The Soviet Union and Russia were big on providing comms to areas the geostationary relays could not reach, specifically the far north, as far back as the 60s. See the Molniya program.

https://en.wikipedia.org/wiki/Molniya_(satellite)

https://en.wikipedia.org/wiki/Iridium_satellite_constellatio...


Iridium existed, but how much backhaul did it actually carry? It was meant for very low-bandwidth applications.


> So about 2/3 of the latency is caused by relays and switching equipment.

Not really; the speed of light in fiber optic cable is only about two-thirds of that in a vacuum. That means it takes light ~60 ms to travel the 12,000 km great arc distance.
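In numbers (a rough sketch, assuming a typical refractive index of ~1.5 for the fiber):

```python
c_km_s = 299_792          # speed of light in vacuum, km/s
n = 1.5                   # typical refractive index of optical fiber
v = c_km_s / n            # ~200,000 km/s inside the glass
great_arc_km = 12_000     # LA to Sydney, roughly
one_way_ms = great_arc_km / v * 1000
print(round(one_way_ms))  # ~60 ms, vs. the ~115 ms measured ping
```

So glass alone already accounts for roughly half of the measured one-way latency, before any relays or switching.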


That's a great point; I failed to consider c in the propagation medium. However, OP didn't specify that either; he just insinuated that we can perceive propagation delays due to c. We are both wrong about his specific case of telecom delays (though we have absolutely contrived experiments that can detect it).


>> The great arc distance from LA to Sydney is just over 12km.

12,000km ?


fixed, thank you


Right now I’m reading the Expeditionary Force series and one thing the author drives home is how incredibly slow the speed of light is.


The speed of light is the speed of causality, so obviously it can't be too fast, or everything would be in instantaneous causal contact. It's not strange that, to biological creatures, things over distances that "feel" distant actually take light some perceptible amount of time to reach.


Yeah, it always sticks in my mind that the time it takes for light to reach the top of the Eiffel tower from the ground is measured in nanoseconds. Maybe that came from a Grace Hopper talk?


Microseconds even! Light travels about 0.3 meters in a nanosecond, and the Eiffel tower has a height of ~300 meters, so it takes about a microsecond for light to get from the ground to the top.


Correct. About 18 milliseconds of time dilation per day, assuming 690,000 km/h (430,000 mph).
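The arithmetic behind that figure (a sketch using special relativity only; gravitational time dilation, which also matters this close to the Sun, is ignored here):

```python
import math

v = 690_000 / 3.6        # 690,000 km/h in m/s (~191,667 m/s)
c = 299_792_458          # speed of light, m/s
gamma = 1 / math.sqrt(1 - (v / c) ** 2)
ms_per_day = (gamma - 1) * 86_400 * 1000
print(round(ms_per_day, 1))   # ~17.7 ms of dilation per day
```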


Helios 2 was about half that speed back in 1976.

I guess everything with a sun slingshot is going to be impressive.

We'd have probes at 1% of the speed of causality by now if it meant better war machines, but this is the best they can do with the budget.

Vaguely related: they did capture light in flight with a roughly 1 trillion frames per second experiment, so femto-photography is definitely some cutting-edge stuff.


It weighs half a ton. Getting it to even 10% of the speed of light would take about 2x10^17 joules of kinetic energy, on the order of a full day of the world's electricity production.


It wouldn't be in orbit of the sun anymore


That part about "...you wouldn’t want to wing it with the configuration, because allegedly you could break your monitor with a bad Monitor setting" -- strike the "allegedly"! Or at least, let me allege it from personal experience: I did that to one monitor, in the early 1990s. You could smell the fried electronics from across the room.


For the interested: CRT monitors have a high-voltage power supply which uses an oscillator. Cheap(er) monitors allegedly reused the horizontal sync frequency for the power supply oscillation, to save an oscillator, so if the horizontal sync frequency was very different from expected, or worse, completely stopped, it could burn out the HV power supply.

Has anyone tested this hypothesis? It could also be that the horizontal sync itself burns out, although that seems less likely.

(In even more detail: Like any other switching power supply, the HV supply in a CRT runs on a two-phase cycle: first, a coil, which creates electrical inertia, is connected to the power source, allowing current to build up. Then the current is suddenly shut off, and the force of the coil attempting to keep it flowing creates a very high voltage, which is harvested. If the circuit gets stuck in phase one, the current never stops increasing, until it's limited by the circuit's resistance, much higher than it's supposed to be. The excessively high current overheats and burns out the switching component. Anyone working on switching power supplies will have encountered this failure mode many times.)
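The failure mode described above is easy to see in a toy simulation (a sketch with made-up component values, not real monitor specs): integrate the RL circuit di/dt = (V - i*R)/L and compare the current after a normal on-phase with the current when the switch never opens.

```python
# Toy RL integration: current buildup in a flyback primary.
# Component values are illustrative, not from any real monitor.
V = 150.0      # supply voltage, volts
L = 1e-3       # primary inductance, henries
R = 0.5        # winding + switch resistance, ohms

def current_after(t_on, dt=1e-7):
    """Euler-integrate di/dt = (V - i*R)/L for t_on seconds."""
    i, t = 0.0, 0.0
    while t < t_on:
        i += (V - i * R) / L * dt
        t += dt
    return i

normal = current_after(20e-6)   # switch opens after ~20 us, one sync-ish period
stuck = current_after(20e-3)    # sync lost: switch stuck closed for 20 ms

print(round(normal, 1))  # a few amps -- within the switch's rating
print(round(stuck, 1))   # approaches V/R = 300 A -- the switch burns out
```

With the switch opening on schedule, current stays at a few amps; if phase one never ends, it climbs toward the resistance-limited V/R, two orders of magnitude higher.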


It is not really about saving one oscillator, but about two things:

- saving the drive circuitry for the flyback, which is usually combined with the horizontal deflection amplifier. Such a design probably also simplifies the output filter for horizontal deflection, as the flyback primary is part of that filter.

- synchronizing the main PSU of the display to the horizontal sync, so that various interference artifacts in the image stay in place instead of slowly wandering around, which would make them more distracting and noticeable.

It is not that hard to see the whole CRT monitor as essentially one giant SMPS that produces a bunch of really weird voltages with weird waveforms. In fact, if you take apart a mid-90s CRT display (without OSD), the actual video circuitry is one IC, a few passives, and a lot of overvoltage protection; the rest of the thing is the power supply and the deflection drivers (which are themselves kind of a power supply, as the required currents are significant).


Your (parenthesized) explanation of switching power supplies made a lot of "secondhand knowledge" click in my head -- like, for instance, why there's lots of high-frequency noise in the DC output. Thank you!


I was briefly pleased with the ability to run an 8" monitor that looked like the kind on 90s cash registers at the impressively high resolution of 1024x768. Then after about 10 seconds it blinked out, smelled like burning electronics, and never worked again.


Neal Stephenson's Cryptonomicon made reference to a hacker dubbed The Digi-Bomber, who could make his victims' CRT monitors implode in front of them by remotely forcing a dangerously bad configuration.


That reminds me of using a CRT monitor to broadcast audio through radio waves: http://www.erikyyy.de/tempest/


Does anyone know why projects like this always seem to specify using a particular type of tiny, low-power computer (usually a Raspberry Pi or something similar) to drive the display?

I already have plenty of non-tiny computers that run Debian GNU/Linux. Suppose I wanted to run an e-paper display from one of those computers, using this code, just via a normal USB cable. I could do that, right? There's no reason I would have to use a Raspberry Pi or something similar?


Small computers like the RPi make it easy to access low-level peripherals such as SPI, which this small screen uses, and others like GPIOs. If your big computer has such peripherals available to the OS, you can use them too. Before small computers, you could use the parallel port (and a small program) to talk to your own peripherals via the same low-level signalling.


The other extreme would be nice. Something very low powered that can spend 99% of its time in standby. Then you could run the whole thing on a battery for months. For a weather display, waking up for a few milliseconds per minute should be enough.

The 7" E-Ink display is US$86, which is not too bad.


Got it -- I appreciate the explanation.


There's no reason at all. RPis come with lots of bootstrap documentation and code so it's comfortable for someone that's played with Linux to get one running, install some packages, and make it do something.

You could do this with a tiny microcontroller if you had the time and knowledge to do it. There's nothing magical about the displays other than strange supply voltages at times.

The more common problem is that they don't listen to USB. They take SPI or parallel digital interfaces to set the pixels. So you need some kind of intermediate interface and software to draw the display. Which is why people just slap an RPi into the mix and talk to that over more common protocols.
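As an illustration of why an intermediate step exists at all: the display wants raw framebuffer bytes clocked out over SPI, so something has to pack pixels into the wire format and drive the bus. A sketch of the packing half, assuming the 1-bit-per-pixel, MSB-first framing that many monochrome e-paper controllers use (check your panel's datasheet; this is an assumption):

```python
def pack_1bpp(pixels, width):
    """Pack a row-major list of 0/1 pixels into bytes, MSB first,
    one bit per pixel -- a common wire format for mono e-paper panels."""
    assert len(pixels) % width == 0 and width % 8 == 0
    out = bytearray()
    for i in range(0, len(pixels), 8):
        byte = 0
        for bit in pixels[i:i + 8]:
            byte = (byte << 1) | (bit & 1)
        out.append(byte)
    return bytes(out)

# On a Pi you would then clock the buffer out via the SPI peripheral,
# e.g. with the spidev package (hardware-dependent, so only sketched):
#   import spidev
#   spi = spidev.SpiDev()
#   spi.open(0, 0)                        # bus 0, chip-select 0
#   spi.xfer2(list(pack_1bpp(framebuffer, 800)))
```

On a regular desktop there is no SPI controller for spidev to open, which is exactly the gap the RPi (or a USB-to-SPI bridge) fills.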


Thank you. My idea was more the opposite: do it with a normal laptop or desktop computer driving the display, rather than a tiny microcontroller. I guess I'm assuming that either the display's USB input supplies enough voltage to run the display, or that the display has a separate power supply -- i.e., that there's nothing magical about a Raspberry Pi that makes it supply special bits or special voltages to these displays that can't be supplied by, say, my desktop computer.


Edited my response above. The answer is more about the interface that these displays require.


AHHHH, that's the key thing I didn't know (I have a Raspberry Pi sitting in a drawer and have played with it embarrassingly little -- I didn't realize how important having the SPI or other special interface is in this context). Thank you again.


With that said, though, there are also tons of inexpensive ways to output SPI or various other serial protocols from just about any device with a USB port, like your full-sized computer: https://www.adafruit.com/product/2264

The RPis and friends just optimize the workflow: there's nothing particularly magical about the way they implement SPI or GPIO, they just have it out of the box because it's such a common way to extend hobby computer boards.


Just ordered. Thank you :-).


The refresh rate of these displays is 0.03 fps.


My first thought for a project like this (grab photos/data from the internet, display them on a device) would be a Pi Zero 2 W or a Pi Pico W, for the reasons you stated.

I'm not particularly up to date with the tiny microcontroller ecosystem - if I wanted to execute this at lower cost and/or lower power, what would be some better options to consider?


Because they are cheap and run on a battery for a long time, and it is stupid to leave a full computer on just to run this display.


The most important factor in my learning Emacs was doing it in a room with experienced Emacs users. I really strongly recommend doing this if you possibly can. A few minutes of an experienced user shoulder-surfing while I worked, and giving advice on better ways to do things, was worth hours of self-directed study.

Get together with experienced users in person and have them watch you edit. That's it.


There are algorithms where I think "Sure, with enough time and attention given to the problem, I might have thought of that." And then there are algorithms where I think "Oh, wow. That came from another planet. I would never have come up with that myself."

This one is definitely in the latter category.


Agreed. Further, for most algorithms, I can read a high-level description of it and go "ohhh, I get it now", and go away and implement it myself without further information. HashLife is not one of those algorithms! While I grok the concept of it, I'm pretty lost on how I'd turn that into functional code. I'm sure I could figure it out with enough further reading, though.
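For what it's worth, the piece that makes the rest possible is hash-consing: every quadtree node is canonicalized so identical subpatterns share one object, which is what lets each node's evolved "future" be memoized once and reused everywhere. A minimal sketch of just that piece (illustrative, not the full algorithm):

```python
# Hash-consed quadtree nodes: identical subpatterns become one shared
# object, so results computed for a node apply to every occurrence of
# that subpattern. (HashLife additionally memoizes each node's future.)

_cache = {}

class Leaf:
    def __init__(self, alive):
        self.level = 0
        self.population = int(alive)

class Node:
    def __init__(self, nw, ne, sw, se):
        self.nw, self.ne, self.sw, self.se = nw, ne, sw, se
        self.level = nw.level + 1
        self.population = (nw.population + ne.population
                           + sw.population + se.population)

ALIVE, DEAD = Leaf(True), Leaf(False)

def node(nw, ne, sw, se):
    # Canonicalize: the same four quadrants always yield the same object.
    key = (id(nw), id(ne), id(sw), id(se))
    if key not in _cache:
        _cache[key] = Node(nw, ne, sw, se)
    return _cache[key]
```

Because nodes are canonical, "have I computed this region's future before?" becomes a dictionary lookup keyed on object identity, which is where the algorithm's dramatic speedups come from.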


Very happy user of a System76 Lemur Pro laptop (i7, 32 GB RAM, 1 TB SSD) for the past year, FWIW. I'm running stock Debian on it, not System76's Pop!_OS.

I get the kind of battery life the review mentions if I put the laptop into "Power Saver" mode. In "Balanced" or especially in "Performance" mode the battery doesn't last as long, of course. So when I can't be plugged in, I put it into Power Saver mode (this is super easy via the Gnome upper-right settings popup panel; I assume it would be just as easy in other window managers).

I got great customer service from System76 when I ran into a hitch at the start of my Debian installation process (TL;DR: see Debian bugs #1024346 and #1024720 -- the file ".disk/info" existed on the pre-installed Pop!_OS partition; getting rid of that enabled the installation to continue). System76 support went above and beyond the call of duty in tracking this down and solving it, considering that I was installing an OS that wasn't even officially supported by them.

Happy customer; would buy again; I get no commission for any of this -- I just want to see the company flourish so they're still there when it's time for me to upgrade my laptop!

