leojfc's comments | Hacker News

Thanks for writing this up — some great insights!

The server deciding what to replace reminds me of some old (dangerous, I think) patterns like returning actual JS from the server which the client then executes.

But it was a nice pattern to work with: for example if you made code changes you often got hot-reloading ‘for free’ because the client can just query the server again. And it was by definition infinitely flexible.
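
To make the contrast concrete, here's a rough sketch of the two shapes (purely illustrative: the attribute and function names are made up, not any particular library's API):

    // Old pattern: the server ships executable JS and the client just runs it.
    async function runServerScript(url: string): Promise<void> {
      const js = await (await fetch(url)).text();
      eval(js); // the dangerous part: the server can now do anything in the page
    }

    // Hotwire/Datastar-style pattern: the server still decides *what* changes,
    // but only by sending markup that names its own target element.
    async function applyServerFragment(url: string): Promise<void> {
      const html = await (await fetch(url)).text();
      const tpl = document.createElement("template");
      tpl.innerHTML = html;
      const fragment = tpl.content.firstElementChild as HTMLElement | null;
      const targetId = fragment?.getAttribute("data-target"); // hypothetical attribute
      if (fragment && targetId) {
        document.getElementById(targetId)?.replaceWith(fragment);
      }
    }

The second version is still server-driven, but the blast radius is limited to swapping markup rather than running arbitrary code.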

I’d be interested to hear from anyone with experience of both Datastar and Hotwire. Hotwire always seemed very similar to HTMX to me, but on reflection it’s arguably closer to Datastar because the target is denoted by the server. Of these, I’ve only used Hotwire for anything significant, and I’m considering rewriting the messy React app I’ve inherited using one of them, so it’s always useful to hear from others about how things pan out working at scale.


> The server deciding what to replace reminds me of some old (dangerous, I think) patterns like returning actual JS from the server which the client then executes.

Basically every single web page on the modern web has the server returning JS that the client then executes. I think you should clarify what's dangerous about the specific pattern you're thinking of that isn't already intrinsic to the web as a whole.


I like Hotwire but I admit it's a bit confusing to get started with and the docs don't help. Form submits + redirects are a bit weird: you can't really make the server "break out" of a frame during a redirect if the form was submitted from inside a frame (there are workarounds, see https://github.com/hotwired/turbo/issues/257).

Also, custom actions [https://turbo.hotwired.dev/handbook/streams#custom-actions] are super powerful; we use them to emit browser events, update DOM classes and attributes, and so on. Just be careful not to overuse them.
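
For example, an action that just re-emits a browser event looks something like this. This is a sketch based on my reading of the linked handbook page, so double-check the exact import and types against the docs:

    import { StreamActions } from "@hotwired/turbo";

    // Inside a custom action, `this` is the incoming <turbo-stream> element, so its
    // attributes carry the payload. Cast loosely here; the upstream types are stricter.
    const actions = StreamActions as unknown as Record<string, (this: Element) => void>;

    // Server sends: <turbo-stream action="dispatch_event" name="cart:updated"></turbo-stream>
    actions.dispatch_event = function (this: Element) {
      const eventName = this.getAttribute("name") ?? "app:unknown-event";
      window.dispatchEvent(new CustomEvent(eventName, { bubbles: true }));
    };

The same shape works for toggling classes or attributes, which is exactly where it gets tempting to overdo it.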


Absolutely. Years ago I found this book on the topic really eye-opening:

- https://www.amazon.co.uk/Technological-Revolutions-Financial...

The process of _actually_ benefitting from technological improvements is not a straight line, and often requires some external intervention.

e.g. it’s interesting to note that the rising power of specific groups of workers as a result of industrialisation + unionisation then arguably led to things like the 5-day week and the 8-hour day.

I think if (if!) there’s a positive version of what comes from all this, it’s that the same dynamic might emerge. There’s already lots more WFH of course, and some experiments with 4-day weeks. But a lot of resistance too.


My understanding is that the 40-hour work week (and similar) was talked about for centuries by workers' groups but only became a thing once governments during WWI found that longer days didn't necessarily increase output proportionally.

For a 4-day week to really happen at scale, I'd expect we similarly need the government to decide to roll it out rather than workers' groups pushing it from the bottom up.


> My understanding is that the 40 hour work week (and similar) was talked about for […]

See perhaps:

* https://en.wikipedia.org/wiki/Eight-hour_day_movement

Generally it only really started being talked about when "workers" became a thing, specifically with the Industrial Revolution. Before that a good portion of work was either agricultural or domestic, so talk of 'shifts' didn't really make much sense.


Oh sure, a standard shift doesn't make much sense unless you're an employee. My point was specifically about the 40 hour standard we use now though. We didn't get a 40-hour week because workers demanded it, we got it because wartime governments decided that was the "right" balance of labor and output.


> https://www.amazon.co.uk/Technological-Revolutions-Financial...

Yes, that is the first link of my/GP post.


Wholeheartedly agree. There’s often good performance or security reasons why it’s hard to get a debugger running in prod, but it’s still worth figuring out how to do it IMO.

Your experience sounds more sophisticated than mine, but the one time I was able to get even basic debugger support into a production Ruby app, it made fixing certain classes of bug absolutely trivial compared to what it would have been.

The main challenge was getting this considered as a requirement up front rather than after the fact.


Yeah, I do something kind of similar, using Dash [1] snippets which expand to full commands.

Since I'm almost always on my Mac, it means they're available in every shell, including remote shells, and in other situations like on Slack or when writing documentation.

I mostly use § as a prefix so I don't type them accidentally (although my git shortcuts are all `gg`-consonant which is not likely to appear in real typing).

[1] https://kapeli.com/dash


I would second the mid-size B2B option here. I found professional services a bit stressful, given what the OP is describing.

But I also think it’s really personal. Since turning 40 I tried: moving into management at a ~100-dev company; IC at a big tech firm (first time I’d worked somewhere really big as a dev); and now I’m back to running the tech side of things at a startup.

I don’t think I could have known in advance which of those was going to work for me. There were a lot of positives to the first two, even though I ultimately left. Turns out I actually do prefer a) small places and b) a mix of management and IC work. But I’m absolutely sure that’s not true for everyone.

OP might feel like they want something very different from running their own startup – I also felt pretty burnt out on that after 7 years of my own – but once they’ve had some time they might remember why they went that way in the first place!


Yes, I would buy any laptop which offered an ortholinear keyboard option, with customisable firmware.

I switched to using an Ergodox after long hours working on a MacBook Pro made my wrists start to hurt and my pinkie finger to go numb (and this was back in the day when a MBP keyboard was still decent!). I can still type full speed on a regular keyboard but it doesn’t feel as comfortable, and I think there’s a genuine health issue at least for some people.


Strategically, could this be part of a response to Apple silicon?

Or put another way, Apple and Google are both responding to Intel’s/the market’s failure to innovate enough, each in their own idiosyncratic way:

- Apple treats lower layers as core, and brings everything in-house;

- Google treats lower layers as a threat and tries to open-source and commodify them to undermine competitors.

I don’t mean this free fabbing can compete chip-for-chip with Apple silicon of course, just that this could be a building block in a strategy similar to Android vs iOS: create a broad ecosystem of good-enough, cheap, open-source alternatives to a high-value competitor, in order to ensure that competitor does not gain a stranglehold on something that matters to Google’s money-making products.


These are not related at all. The only common element is making silicon.

Apple spends $100+ million to design a high-performance microarchitecture on a high-end process for their own products.

Google gives a tiny amount of help to hobbyists so that they can make chips on legacy nodes. A nice thing to do, but nothing to do with Apple SoCs.

---

Software people on HN constantly confuse two completely different things:

(1) An optimized high-performance microarchitecture for the latest processes and large volumes. This can cost $100s of millions and the work is repeated every few years for a new process. Every design is closely optimized for the latest fab technology.

(2) A generic ASIC design for a process that is a few generations old. The software costs a few $k or $10ks and you can use the same design for a long time.


> Nice thing to do

I don't believe Google does anything because it's a "nice thing to do". There's some angle here. The angle could just be spurring general innovation in this area, which they'll benefit from indirectly down the line, but in one way or another this plays to their interests.


> I don't believe Google does anything because it's a "nice thing to do".

If only Google had this singular focus... From my external (and lay) observation - some Google departments will indulge senior engineers and let them work on their pet projects, even when the projects are only tangentially related to current focus areas.

Looking at the Google org on GitHub (https://github.com/google), it might be a failure of imagination on my part, but I fail to see an "angle" for a good chunk of them.


Google has never created a product that doesn't collect data in a way that's unique relative to its other products.


They must be some kind of genius. I don't see how they're going to be able to extract personal information out of this.


They're not doing this out of the kindness of their heart. Just because we don't know what data is being collected here (yet) does not invalidate my statement. Name a Google product and you can easily identify the unique data being collected.


Not necessarily personal. Maybe training a robot to design circuits?


> few generations old

And by old, I mean /old/. 130 nm was used on the GameCube, PPC G5, and Pentium 4.


Think of all the chips from then and before then that are becoming rare. The hobbyist and archivist communities do their best with modern replacements, keeping legacy parts alive, and things like FPGAs, but being able to fab modern drop-in replacements for rare chips would be amazing.

Things don't have to be ultra modern to offer value.


That's not terribly long ago, really. My understanding is that a sizeable chunk of performance gains since then have come from architectural improvements.


Probably the fastest processor made on 130nm was the AMD Sledgehammer, which had a single core, less than half the performance per clock of modern x64 processors, and topped out at 2.4GHz compared to 4+GHz now, with a die size basically the same as an 8-core Ryzen. So Ryzen 7 on 7nm is at least 32 times faster and uses less power (65W vs. 89W).
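
Back-of-the-envelope, that figure is just cores × per-clock × clock; the inputs below are my own rough guesses, not measured numbers:

    // Rough, illustrative ratios -- the exact values depend on which Ryzen 7
    // and which workload you compare against.
    const coreRatio = 8 / 1;          // 8 cores vs. Sledgehammer's single core
    const perClockRatio = 2.2;        // "less than half the performance per clock"
    const clockRatio = 4.4 / 2.4;     // ~4.4GHz boost vs. 2.4GHz
    const speedup = coreRatio * perClockRatio * clockRatio;
    console.log(speedup.toFixed(0));  // ~32x on these assumptions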

You could probably close some of the single thread gap with architectural improvements, but your real problems are going to be power consumption and that you'd have to quadruple the die size if you wanted so much as a quad core.

The interesting uses might be to go the other way. Give yourself like a 10W power budget and make the fastest dual core you can within that envelope, and use it for things that don't need high performance, the sort of thing where you'd use a Raspberry Pi.


You wouldn't get access to ASIC fab just to make a CPU. Fill it with tensor cores, or fft cores, plus a big memory bus. Put custom image processing algorithms on it. Then it will be competitive with modern general silicon despite the node handicap.


Your suggestion was more what I was thinking: perhaps something more limited in scope than a general processor. An application that comes to mind is an intentionally simple and auditable device for e2e encryption.


My understanding is that architectural improvements (i.e. new approaches to detect more parts in code that can be evaluated at the same time, and then do so) need more transistors, ergo a smaller process.

(Jim Keller explains in this interview how CPU designers are making use of the transistor budget: https://youtu.be/Nb2tebYAaOA)


My first reaction was that it could be a recruitment drive of sorts to help build up their hardware team. Apple have been really smart in the last decade in buying up really good chip development teams and that is experience that is really hard to find.


> Apple have been really smart in the last decade in buying up really good chip development teams and that is experience that is really hard to find.

They can outsource silicon development. Should not be a problem with their money.

In comparison to dotcom development teams, semi engineering teams are super cheap. In Taiwan, a good microelectronics PhD starting salary is USD $50k-60k...


Opportunity cost, though.

Experienced teams who have designed high performance microarchitectures aren't common, because there just isn't that much of that work done.

And when you're eventually going to spend $$$$ on the entire process, even a 1% optimization on the front end (or more importantly, a reduction of failure risk from experience!) is invaluable.


Does Google have a silicon team?


As of a year and a half ago they had 300+ people across Google working on silicon (RTL, verification, PD, etc.) that I’m aware of.


They created TPUs, right? So somewhere inside the Alphabet group they must have some expertise.


It wouldn't surprise me. They've been designing custom hardware for some time. Look at the Pluto switch and the mindset of "can we make something even higher performance?" or "can we make it simpler, cheaper, and more specialized, and save some watts?" (which in turn saves on power for computing and on cooling costs).

At the scale that Google is at, it really wouldn't surprise me if they were working on their own silicon to solve the problems that exist at that scale.


Pluto is merchant silicon in a box, like all their other switches.

"""Regularly upgrading network fabrics with the latest generation of commodity switch silicon allows us to deliver exponential growth in bandwidth capacity in a cost-effective manner."""

https://conferences.sigcomm.org/sigcomm/2015/pdf/papers/p183...


I wasn't intending to claim that Pluto is custom silicon but rather that Pluto is an example of Google looking for simplicity, more (compute) power, and less (electrical) power.

The next step in that set of goals for their data center would be custom silicon where merchant silicon doesn't provide the right combination.


Manu Gulati - a very popular silicon engineer who worked at Apple - left for Google. (He now works at Nuvia with other ex-Apple stalwarts.)


They have Norman Jouppi, who apparently was involved in the TPU design.


What are TPUs and quantum computers made of? ;)


Joel Spolsky calls this "Commoditizing your complement".


I'm guessing GP was clearly referencing that phrase, not unaware of it.


I mean, someone else said the software to design chips is 5 figures per seat, so it's probably a multi-billion-dollar industry.

My guess would be that cloud-based chip design software is in the works. This would accelerate AI quite a bit, I should think?


More like 6 figures per seat...

It's actually a big part of why some silicon companies distribute themselves around timezones - so someone in Texas can fire up the software immediately when someone in the UK finishes work.

It's not unusual to see an 'all engineering' email reminding you to close rather than minimize the software when you go to meetings.


I thought most EDA companies put a stop to that with geographic licensing restrictions.


And this is the reason some companies have shift work...

But that all means nothing for companies who buy Virtuoso copies from guys trading WaReZ in pedestrian underpasses in BJ.

A number of quite reputable SoC brands here in the PRD are known to be based on 100% pirated EDAs.

This is not a critique, but a call to think about that a bit.

In China, you can spin up a microelectronics startup for under $1m; in the USA, you will spend $1m just to buy a minimal EDA toolchain for the business.

Allwinner famously started with just $1m in capital, when disgruntled engineers from Actions decided to start their own business.


What is PRD? I’m guessing a country acronym?


Pearl River Delta


Absolutely not. "Apple Silicon" is branding for their own processor. This is a road to an open-source ecosystem in HW design.


That's the same thing parent said, so "Absolutely yes".


Doesn’t this imply the need for a new language feature? So that well-defined sections of inline code can be pulled out, initial conditions set in a testing environment, and then executed independently.

I guess this could trip things up if the compiler optimisations available when considering all the code at once mean that the out-of-context code actually does something different in testing...


Pull out well-defined sections of inline code that can be executed independently for testing? Sure, that's breaking it down into functions.
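
Right: a trivial, made-up illustration is pulling the inline rule out so that its "initial conditions" become parameters:

    // The "well-defined section" becomes a function; a test can now set up the
    // initial conditions directly and execute it independently of the rest.
    function shippingCost(subtotal: number, itemCount: number): number {
      // This rule previously lived inline in a much larger checkout routine.
      if (itemCount === 0 || subtotal >= 50) return 0;
      return 5;
    }

    // "Testing environment" = just call it with the conditions you care about.
    console.assert(shippingCost(60, 2) === 0);
    console.assert(shippingCost(20, 2) === 5);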


I think there is such a feature. In most languages, these are called "functions".


What about providing a BitTorrent link? The main server could provide a backstop seed, and presumably enough other people would seed too for any decent-sized project.


P2P downloads would help cover some of the costs, but popular projects probably need a direct download link with load balancing as well.

If the project doesn't want to manage their own infrastructure, they're probably going to want a CDN or object storage provider. The most cost-friendly I've seen is OVH's RunAbove object storage, but I'd be interested to know if there is anything else comparable.


Addictive! Would be great to have some other success metrics, like oldest cell (rather than largest), or cell that has been through most splits.

