Hacker News | YossarianFrPrez's comments

Not only do people leave the US while staying in Academia; plenty of people leave the research pipeline entirely after receiving years and years of highly specialized, expert training. As an American who used to work in Tech and is currently getting a PhD, I can say the geographic constraints on the (top-tier) academic job market are more severe than people outside of Academia typically realize. It's a shame, because if it were the norm for university-trained experts to do science in non-university institutions, we could a) fix the leaky pipeline, and b) see greater scientific progress.

What I mean is that if you don't like the company you work for in, say, SF, you can switch companies without having to switch houses. In Academia... it's akin to going to conservatory for classical music: you have to travel to where the orchestral openings are. This is a bit of a legacy problem from Wilhelm von Humboldt's idea to combine teaching and research, which led to the modern university system.

I'm far from the first person to say this, btw. Convergent Research's "Focused Research Organization" concept, as well as the Arc and Astera Institutes, are a few recent examples of people trying to provide escape routes from having to deal with the large degree of "institutional tech/systems debt" in university contexts. For a great essay on why this is necessary, see "A Vision of Metascience" (highly recommended if you are interested) [1].

The good news is that people are starting to come around to the idea that the scientific ecosystem would benefit from more diversity in the shape, size, and form of science-generating institutions. The NSF just announced a new program to fund such "independent research organizations." I think this could give people who want to go into the sciences as a second career, and who have a bit of an entrepreneurial tendency, a new kind of job opportunity [2]. We talk about Founders all the time in Tech; we should probably have some equivalent, in the best possible sense of the term, in the Sciences.

[1] https://scienceplusplus.org/metascience/ [2] https://www.nsf.gov/news/nsf-announces-new-initiative-launch...


And like any digits of Chaitin's constant.

It's seemingly quasi-semi-related to The Foundation's 'atomic knives'?


Maybe I'm just a sci-fi nerd who loves innovation, but this is so cool!

Clearly, this product is not intended for the mass market, and may find purchase with people who have tennis elbow and who can afford it, etc. <insert other critiques about practicality and applicability here>. But still, when was the last time someone tried to re-invent something as basic as a knife?


Ultrasonic knives are already used commercially; this is an attempt at a mass-market version, making it cheaper and packaging it in a more familiar form.


They also make one for home use: https://www.amazon.com/dp/B0FK9SB5KR

HOZO NeoBlade Wireless Ultrasonic Cutter

It's from a kickstarter.


> when was the last time someone tried to re-invent something as basic as a knife?

A year ago? This one is designed for woodworkers.

https://www.bourbonmoth.com/shop/p/the-bourbon-blade-origina...


I thought I recognised the name - Bourbon Moth is the guy who faked a video of oily rags self-igniting in order to advertise fireproof bins for woodworking.

This one: https://m.youtube.com/watch?v=3Gqi2cNCKQY

Debunking: https://m.youtube.com/watch?v=GEtU3bYyCtA


It's interesting that AvE devoted multiple videos to debunking oily rags but believes 9/11 was an inside job.


Expertise & ignorance alike don’t carry across domains.


Writer Jack London's mansion, The Wolf House, which he was building up in Sonoma County, was destroyed by a fire that investigators later attributed to the spontaneous combustion of oil-soaked rags in the dining room...


> but believes 9/11 was an inside job

Uh, really? I haven't been following him for a while, so I can't say for sure that you're wrong, but I can absolutely see him joking about it and maybe even taking it too far.


Never heard of either of these Youtubers but I've seen tons of cans of stains and such with warnings about potential self-ignition if left on rags in the wrong conditions...


Yes. Learn from my mistake--I almost burned down my apartment by leaving linseed oil rags unattended.


A pocket chisel is very much not the same thing as a kitchen knife.


Me, immediately: ooh Star Wars vibroblade!


40K chainswords next.


knife is already perfect


Can you explain what you mean about 'not needing to be solved'? There are versions of that kind of critique that would seem, at least on the surface, to better apply to finance or flash trading.

I ask because scaling a system that a substantial chunk of the population finds incredibly useful, including for the more efficient production of public goods (scientific research, for example), does seem like a problem that a) needs to be solved from a business point of view, and b) should be solved from a civic-minded point of view.


I think the problem I see with this type of response is that it doesn't take into account the waste of resources involved. If the 700M users per week is legitimate, then my question to you is: how many of those invocations are worth the resources spent, in the name of things that are truly productive?

And if AI were truly the holy grail it's being sold as, there wouldn't be 700M users per week burning through these resources as heavily as we are, because generative AI would have already solved for something better. It really does seem like these platforms aren't, and won't be, anywhere near as useful as they're continuously claimed to be.

Just like Tesla FSD, we keep hearing about a "breakaway" model and the broken record of AGI. Instead of getting anything exceptionally better we seem to be getting models tuned for benchmarks and only marginal improvements.

I really try to limit what I'm using an LLM for these days. And not simply because of the resource pigs they are, but because it's also often a time sink. I spent an hour today testing out GPT-5 and asking it about a specific problem I was solving for using only 2 well documented technologies. After that hour it had hallucinated about a half dozen assumptions that were completely incorrect. One so obvious that I couldn't understand how it had gotten it so wrong. This particular technology, by default, consumes raw SSE. But GPT-5, even after telling it that it was wrong, continued to give me examples that were in a lot of ways worse and kept resorting to telling me to validate my server responses were JSON formatted in a particularly odd way.

Instead of continuing to waste my time correcting the model I just went back to reading the docs and GitHub issues to figure out the problem I was solving for. And that led me down a dark chain of thought: so what happens when the "teaching" mode rethinks history, or math fundamentals?

I'm sure a lot of people think ChatGPT is incredibly useful. And a lot of people are bought into not wanting to miss the boat, especially those who don't have any clue how it works and what it takes to execute any given prompt. I actually think LLMs have a trajectory that will be similar to social media's. The curve is different, and I hope we haven't yet seen the most useful aspects of it come to fruition. But I do think that if OpenAI is serving 700M users per week then, once again, we are the product. Because if AI could actually displace workers en masse today you wouldn't have access to it for $20/month. And they wouldn't offer it to you at 50% off for the next 3 months when you go to hit the cancel button. In fact, if it could do most of the things executives are claiming then you wouldn't have access to it at all. But, again, the users are the product - in very much the same way social media played out.

Finally, I'd surmise that of those 700M weekly users, fewer than 10% of sessions are being used for anything productive of the sort you've mentioned, and I'd place a high wager that even that 10% is wildly conservative. I could be wrong, but again - we'd know about it if that were the actual truth.


> If the 700M users per week is legitimate then my question to you is: how many of those invocations are worth the cost of resources that are spent, in the name of things that are truly productive?

Is everything you spend resources on truly productive?

Who determines whether something is worth it? Is price/willingness of both parties to transact not an important factor?

I don't think ChatGPT can do most things I do. But it does eliminate drudgery.


I don't believe everything in my world is as efficient as it could be. But I genuinely think about the costs involved [0]. When doing automations that are perfectly handled by deterministic systems why would I put the outcomes of those in the hands of a non-deterministic one? And at that cost differential?

We know a few things: LLMs are not efficient, LLMs consume more water than traditional compute, the providers know this but haven't shared any tangible metrics, and the build process also involves an exceptional amount of time, wattage, and water.

For me it's: if you have access to a supercomputer do you use it to tell you a joke or work on a life saving medicine?

We didn't have these tools 5 years ago. 5 years ago you dealt with said "drudgery". On the other hand you then say it can't do "most things I do". It seems as though the lines of fatalism and paradox are in full force for a lot of the arguments around AI.

I think the real kicker for me this week (and it changes week-over-week, which is at least entertaining) is when Paul Graham told his Twitter feed [1] that a "hotshot" programmer is writing 10k LOC that are not "bug-filled crap" in 12 hours. That's roughly 14 LOC per minute, compared to industry norms of 50-150 LOC per 8-hour day. Apparently, this "hotshot" is not "naive", though, implying that it's most definitely legit.
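
Quick back-of-envelope on that comparison (the figures below are just the ones stated above, nothing new):

  # Rough check of the claimed output rate vs. the industry norm quoted above
  claimed_loc, claimed_hours = 10_000, 12
  loc_per_min = claimed_loc / (claimed_hours * 60)          # ~13.9 LOC/min

  norm_low, norm_high = 50, 150                             # LOC per 8-hour day
  norm_low_per_min = norm_low / (8 * 60)                    # ~0.10 LOC/min
  norm_high_per_min = norm_high / (8 * 60)                  # ~0.31 LOC/min

  print(f"claimed: {loc_per_min:.1f} LOC/min; "
        f"norm: {norm_low_per_min:.2f}-{norm_high_per_min:.2f} LOC/min")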

[0] https://www.sciencenews.org/article/ai-energy-carbon-emissio... [1] https://x.com/paulg/status/1953289830982664236


> When doing automations that are perfectly handled by deterministic systems why would I put the outcomes of those in the hands of a non-deterministic one?

The stuff I'm punting isn't stuff I can automate. It's stuff like, "build me a quick command line tool to model passes from this set of possible orbits" or "convert this bulleted list to a course articulation in the format preferred by the University of California" or "Tell me the 5 worst sentences in this draft and give me proposed fixes."

Human assistants that I would punt this stuff to also consume a lot of wattage and power. ;)

> We didn't have these tools 5 years ago. 5 years ago you dealt with said "drudgery". On the other hand you then say it can't do "most things I do".

I'm not sure why you think this is paradoxical.

I probably eliminate 20-30% of tasks at this point with AI. Honestly, it probably does these tasks better than I would (not better than I could, but you can't give maximum effort on everything). As a result, I get 30-40% more done, and a bigger proportion of it is higher value work.

And, AI sometimes helps me with stuff that I -can't- do, like making a good illustration of something. It doesn't surpass top humans at this stuff, but it surpasses me and probably even where I can get to with reasonable effort.


It is absolutely impossible that human assistants given those tasks would use anything even remotely within the same order of magnitude of the power that LLMs use.

I am not an anti-LLM'er here, but having models that are this power-hungry and this generalisable makes no sense economically in the long term. Why would the model you use to build a command-line tool have to be able to produce poetry? You're paying a premium for seldom-used flexibility.

Either the power drain will have to come down, prices at the consumer margin will have to go up significantly, or the whole thing comes crashing down like a house of cards.


> It is absolutely impossible that human assistants being given those tasks would use even remotely within the same order of magnitude the power that LLM’s use.

A human eats 2000 kilocalories of food per day.

Thus, sitting around for an hour to do a task takes about 350kJ of food energy. Depending on what people eat, it takes 350kJ to 7000kJ of fossil-fuel energy input to produce that much food energy. In the West, we eat a lot of meat, so expect the high end of this range.

The low end, 350kJ, is enough to answer 100-200 ChatGPT requests. It's generous, too, because humans also carry an amortized share of sleep and non-working time, plus other energy inputs to keep them alive: fancier food, recreation, the drive to work, etc.

Shoot, just lighting the part of the room they sit in is probably 90kJ.
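
If you want to sanity-check the 100-200 figure, here's the rough arithmetic. The per-request energy is an assumption on my part; published estimates vary a lot (roughly 0.3-3 Wh per query), which is what drives the spread:

  # Sanity check of the food-energy comparison above.
  # The watt-hours-per-request figures are assumptions, not measured numbers.
  kcal_per_day = 2000
  kj_per_human_hour = kcal_per_day * 4.184 / 24        # ~349 kJ of food energy per hour

  for wh_per_request in (0.3, 1.0, 3.0):               # assumed energy per ChatGPT request
      kj_per_request = wh_per_request * 3.6            # 1 Wh = 3.6 kJ
      requests = kj_per_human_hour / kj_per_request
      print(f"{wh_per_request} Wh/request -> ~{requests:.0f} requests per human-hour")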

> I am not an anti-LLM’er here but having models that are this power hungry and this generalisable makes no sense economically in the long term. Why would the model that you use to build a command tool have to be able to produce poetry? You’re paying a premium for seldom used flexibility.

Modern Mixture-of-Experts (MoE) models don't activate the parameters/do the math related to poetry, but just light up a portion of the model that the router expects to be most useful.

Of course, we've found that broader training for LLMs increases their usefulness even on loosely related tasks.

> Either the power drain will have to come down, prices at the consumer margin significantly up

I think we all expect some mixture of these: LLM usefulness goes up, LLM cost goes up, LLM efficiency goes up.


Reading your two comments in conjunction, I find your take reasonable, so I apologise for jumping the gun and going in knee-first in my previous comment. It was early where I was, but that's no excuse.

I feel like if you're going to go down the route of the energy consumption needed to sustain the entire human organism, you have to do that on the other side as well - as the actual activation cost of human neurons and articulating fingers to operate a keyboard won't be in that range - but you went for the low ball so I'm not going to argue that, as you didn't argue some of the other stuff that sustains humans.

But I will argue against the wider implication of your comment that a like-for-like comparison is easy - it's not, so staying in neuron-activation energy-cost space would probably be simpler to calculate, and there you'd arrive at a smaller ChatGPT ratio: more like 10-20, as opposed to 100-200. I will concede that economies of scale mean there's an energy efficiency in sustaining a ChatGPT workforce compared to a human workforce, if we really want to go full dystopian. But there's also an outsized energy inefficiency in needing the industry and the materials to construct a ChatGPT workforce large enough to sustain those economies of scale, compared to humans, which we kind of have and are stuck with.

There is a wider point that ChatGPT is less autonomous than an assistant: no matter your tenure with it, you'll never give it the level of autonomy a human assistant would have, because a human would self-correct to a level you'd be comfortable with. So you need a human at the wheel, which spends some of that human brain power and finger articulation, and you have to add that to the energy cost of the ChatGPT workflow.

Having said all that - you make a good point with MoE - but the router activation is inefficient, and the experts are still outsized relative to the processing required for the task at hand. What I argue is that this will get better with further distillation, specialisation, and better routing, though only for economically viable task pathways. I think we agree on this, reading between the lines.

I would argue though (but this is an assumption; I haven't seen data on neuron activation at the task level) that for writing a command-line tool, the neurons still have to activate on a sufficiently large scale to parse a natural-language input, abstract it, and construct formal-language output that will pass the parsers. So you would be spending a higher range of energy than for an average ChatGPT task.

In the end - you seem to agree with me that the current unit economics are unsustainable, and we'll need three processes to make them sustainable: cost going up, efficiency going up, and usefulness going up. Unless usefulness goes up radically (which it won't, due to the scaling limitations of LLMs), full autonomy won't be possible, so the value of the additional labour will need to be very marginal to a human, which - given the scaling laws of GPUs - doesn't seem likely.

Meanwhile - we're telling the masses at large to get on with the programme, without considering that maybe for some classes of tasks it just won't be economically viable; which creates lock-in and might be difficult to disentangle in the future.

All because we must maintain the vibes that this technology is more powerful than it actually is. And that frustrates me, because there are plenty of pathways where it's obvious it will be viable, and instead of doubling down on those, we insist on generalisability.


> There is a wider point that ChatGPT is less autonomous than an assistant, as no matter the tenure with it, you'll not give it the level of autonomy that a human assistant would have as it would self correct to a level where you'd be comfortable with that.

IDK. I didn't give human entry level employees that much autonomy. ChatGPT runs off and does things for a minute or two consuming thousands and thousands of tokens, which is a lot like letting someone junior spin for several hours.

Indeed, the cost is so low that it's better to let it "see its vision through" than to interrupt it. A lot of the reasons why I'd manage junior employees closely are to A) contain costs, and B) prevent discouragement. Neither of those apply here.

(And, you know -- getting the thing back while I remember exactly what I asked and still have some context to rapidly interpret the result-- this is qualitatively different from getting back work from a junior employee hours later).

> that maybe for some classes of tasks it just won't be economically viable;

Running an LLM is expensive. But it's expensive in the sense "serving a human costs about the same as a long distance phone call in the 90's." And the vast majority of businesses did not worry about what they were expending on long distance too much.

And the cost can be expected to decrease, even though the price will go up from "free." I don't expect it will go up too high; some players will have advantages from scale and special sauce to make things more efficient, but it's looking like the barriers to entry are not that substantial.


The unit economics are fine. Inference cost has dropped by several orders of magnitude over the last couple of years. It's pretty cheap.

OpenAI reportedly had a loss of $5B last year. That's really small for a service with hundreds of millions of users (most of whom are free and not monetized in any way). That means OpenAI could easily turn a profit with ads, however they may choose to implement them.
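
A rough per-user version of that, using the ~$5B figure and the 700M weekly users cited upthread (both reported numbers, and weekly actives aren't the same as annual uniques, so treat it as a ballpark):

  # Back-of-envelope loss per user, using reported figures from this thread
  reported_loss = 5e9            # ~$5B reported loss last year
  weekly_actives = 700e6         # ~700M weekly users cited upthread

  loss_per_user_year = reported_loss / weekly_actives
  print(f"~${loss_per_user_year:.2f} lost per weekly active user per year")   # ~$7

For comparison, large ad-supported consumer services earn well more than that per user per year, which is the intuition behind "ads could cover it."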


> so what happens when the "teaching" mode rethinks history, or math fundamentals?

The person attempting to learn either (hopefully) figures out the AI model was wrong, or sadly learns the wrong material. The level of impact is probably relative to how useful the knowledge is in one's life.

The good or bad news, depending on how you look at it, is that humans are already great at rewriting history and believing wrong facts, so I am not entirely sure an LLM can do that much worse.

Maybe ChatGPT might just kill off the ignorant like it already has? GPT already told a user to combine bleach and vinegar, which produces chlorine gas. [1]

[1] https://futurism.com/chatgpt-bleach-vinegar



[flagged]


The only solution to those people starving to death is to kill the people that benefit from them starving to death. It's a solved problem; the solution just isn't palatable. No one is starving to death because of a lack of engineering prowess.


>> People are starving to death ...

> The only solution to those people starving to death is to kill the people that benefit from them starving to death.

There are solutions other than "to kill the people that benefit", such as what have existed for many years, including but not limited to:

  - Efforts such as the recently emasculated USAID[0].
  - Humanitarian NGO's[1] such as the World Central Kitchen[2]
    and the Red Cross[3].
  - The will of those who could help to help those in need[4].
Note that none of the aforementioned require executions nor engineering prowess.

0 - https://en.wikipedia.org/wiki/United_States_Agency_for_Inter...

1 - https://en.wikipedia.org/wiki/Non-governmental_organization

2 - https://wck.org/

3 - https://en.wikipedia.org/wiki/International_Red_Cross_and_Re...

4 - https://en.wikipedia.org/wiki/Empathy


Figuring out how to align misaligned incentives is an engineering problem. Obviously I disavow what you said; I reject all forms of advocacy of violence.


> People are starving to death and the world's brightest engineers are ...

This is a political will, empathy, and leadership problem. Not an engineering problem.


Those problems might be more tractable if all of our best and brightest were working on them.


>>> People are starving to death and the world's brightest engineers are ...

>> This is a political will, empathy, and leadership problem. Not an engineering problem.

> Those problems might be more tractable if all of our best and brightest were working on them.

The ability to produce enough food for those in need already exists, so that problem is theoretically solved. Granted, logistics engineering[0] is a real thing and would benefit from "our best and brightest."

What is lacking most recently, based on empirical observation, is a commitment to benefiting those in need without expectation of remuneration. Or, in other words, empathetic acts of kindness.

Which is a "people problem" (a.k.a. the trio I previously identified).

0 - https://en.wikipedia.org/wiki/Logistics_engineering


Famine in the modern world is almost entirely caused by dysfunctional governments and/or armed conflicts. Engineers have basically nothing to do with either of those.

This sort of "there are bad things in the world, therefore focusing on anything else is bad" thinking is generally misguided.


Famine is mostly political, but engineers (not all of them) definitely have something to do with it. If you're building powerful AI for corporations that are then involved with the political entities that caused the famine, you can't claim to basically have nothing to do with it.


I totally disagree. "If A is associated with B, and B is associated with C, and C causes D, then A is responsible for D" is tortured logic.


You can disagree all you want but the exact wording used in original comment that I responded to was

> Engineers have basically nothing to do with either of those.

The logic here is “If A is actively working to develop capabilities for B, which B offers up to C who then uses it to do D, then A cannot claim to have nothing to do with D.”


The existence of poor, hungry people feeds the fear of becoming poor and hungry, which drives those brightest engineers. I.e., things work as intended, unfortunately.


They won’t be honest and explain it to you but I will. Takes like the one you’re responding to are from loathsome pessimistic anti-llm people that are so far detached from reality they can just confidently assert things that have no bearing on truth or evidence. It’s a coping mechanism and it’s basically a prolific mental illness at this point


And what does that make you? A "loathsome clueless pro-LLM zealot detached from reality"? LLMs are essentially next-word predictors marketed as oracles. And people use them as that. And that's killing them. Because LLMs don't actually "know", they don't "know that they don't know", and won't tell you they are inadequate when they are. And that's a problem left completely unsolved, one at the core of very legitimate concerns about the proliferation of LLMs. If someone here sounds irrational and "coping", it very much appears to be you.


> so far detached from reality they can just confidently assert things that have no bearing on truth or evidence

So not unlike an LLM then?


Side note: The organization that maintains Lean is a "Focused Research Organization", which is a new model for running a science/discovery based nonprofit. This might be useful knowledge for founder types who are interested in research. For more information, see: https://www.convergentresearch.org

And if you want to read why we need additional types of science organizations, see "A Vision of Metascience" (https://scienceplusplus.org/metascience/)


The concept of trying new science orgs is noble, but this is the typical Schmidt BS of saying every previous academic consortium is totally incompetent and I'm the only one who can inject the magic sauce of focus and coordination.


Unfortunately being noble or self righteous or whatever emotion you choose has nothing to do with it. If there is a pool of grant money available only to “Focused Research Organizations,” and you want some of it for your work, then you open one and do your work under that umbrella. Academic institutions themselves do this all the time. It looks politically and morally sketchy, and maybe it often is, but it’s the way it works.


To me, it seems like coming up with something more coordinated than a consortium and more flexible than a single lab or a research corporation funded by multiple universities makes sense.

It's probably a narrow set of problems with the right set of constraints and scale for this to be a win.


Having an organization maintain a software tool seems pretty unsurprising. There’s a well-defined problem with easily visible deliverables, relatively little research risk, and small organizations routinely maintain software tools all the time. Whereas broader research is full of risk and requires funders be enormously patient and willing to fund crazy ideas that don’t make sense.


Hmm. I don't know very much about Lean, and it definitely feels smaller in scope and coordination risk than the kinds of things that would generally benefit from this.

(OTOH, within the community they're effectively trying to build a massive, modern Principia Mathematica, so maybe they would...)

> Whereas broader research is full of risk and requires funders be enormously patient and willing to fund crazy ideas that don’t make sense.

Yah. I'm not a researcher, but I keep ending up tangentially involved in research communities. I've seen university labs, loose research networks, loose consortia funding research centers, FFRDC, etc.

What I've noticed is that a lot of these consortia or networks struggle to deliver anything cohesive. There are too many stakeholders, limited bandwidth, and nobody quite empowered to say "we're building this."

In the cases where there’s a clearly scoped, tractable problem that’s bigger than what a single lab can handle, and a group of stakeholders agrees it’s worth a visionary push, something like an FRO might make a lot of sense.


This is an incredibly bad take on a hard social problem, one that is hard for reasons that are well understood.

Scientific research is often not immediately applicable, but can still be valuable. The number of people who can tell you whether it's valuable is small, and as our scientific knowledge improves, the number of people who know what's going on shrinks and shrinks.

Separately, it's possible to spend many years researching something, and have very little to show for it. The scientists in that situation also want some kind of assurance that they will be able to pay their bills.

Between the high rate of failure, the conflicts of interest, and the inscrutability of the research topics, it's very hard to efficiently fund science, and all the current ways of doing it are far from optimal. There is waste, there is grift, there is politics. Any improvement here is welcome, and decreasing the dollar cost per scientific discovery is more important than the research itself in any single field.


Some days I joke that there should be a set of Nobel prizes for making machines quieter. Categories could include: air-conditioning units and mini-fridges, construction and landscaping equipment, old university buildings, pump-housings, etc. The quality of life of many would be improved if we had quieter machines. It boggles my mind that a) in many hotel rooms one can hear a good deal of machine noise and neighbors' televisions, and b) some sort of noise score (as calculated from dB-meter measurements) isn't more widely available for things like apartment rentals, conference room bookings, etc.


What about a noise tax? My city has some electric buses and some ancient buses - the difference is obviously huge, but right now the financial incentives aren't there to upgrade the whole fleet.


Noise from construction machines is actually a feature. They all have backup beepers added at this point, as required per OSHA guidelines, audible for well over a mile in normal conditions.


> Audible for well over a mile in normal conditions

That doesn't strike me as a feature.

Also a solved problem: https://www.youtube.com/watch?v=6rwJ5NCf1Vw

Tesco delivery trucks have them here in Ireland; it's pretty good stuff. Still quite loud/noticeable when you're up close, while at the same time not being completely obnoxious to everyone in a kilometre radius.


They have to be loud enough to be heard through hearing protection. The amplitude is a feature.

It's a "solved problem" in the sense that nuclear energy is a solved problem. There's no mandate to actually see widespread roll out of anything that may be a better solution.

There's a construction site near me at present, and there is always at least one machine in reverse. The utility of having a backup beeper or any noise-making device on that site is thus zero. It is the single largest source of noise pollution, larger than the roadway.


>The utility of having a backup beeper or any noise making device on that site is thus zero.

This strikes me as an odd take, maybe from someone who has never worked on a construction site.

Our auditory sense is more than just a binary “present/not present” detection. We can sense distance and direction. Just because there is a backup beeper somewhere on site does not mean there is no value to any other auditory signal.

Think about when you’re in a congested city. There’s probably a lot of ambient car noise, including horns, in the background. That doesn’t mean you’re unable to react to a honking car in your immediate vicinity.


You just believe you can sense the direction of loud noises in urban environments. Our nervous system has no "404 not found" for positional awareness. Even after severe head trauma, you have a sense of position for everything. It's so wrong as to be useless, but you have it.

Ask anyone who's been at a shooting in a city. Everyone gives a different answer for where the shooter was at. It's such a severe issue the US Army has microphone arrays they equip urban combat vehicles with. Even with bullets actually bouncing off the armor the troops cannot accurately locate the direction of the shooter(s).


As the other poster mentioned, the characteristics of sound matter. That’s why the report of a firearm is a bad example.

But there are more commonplace examples. Older phone ringtones are often hard for people to locate, but nearly everybody can pinpoint the sound of a dropped coin. Sound perception is more complex than just perception of pressure levels. To the point above, you wouldn’t confuse a car honking in front of you with one behind you even in the presence of ambiguous ambient noise.


I'm not talking about the report of a firearm. I'm talking about the physical impact of the bullet on the armored vehicle you are in.

Also I have no idea what you mean by "but nearly everybody can pinpoint the sound of a dropped coin". What sound does a coin make when it is dropped on a busy street?


Bullets are a bad example because they have multiple properties which makes them much harder to localise than many other sounds.

I'm pretty sure most people can localise a vehicle emitting broadband noise (engine or white reversing sound) in the conditions that matter.


> They have to be loud enough to be heard through hearing protection.

It's kind of a nit-pick, but this is not really true.

Very approximately, you will perceive a sound if it is above your threshold of hearing, and also not masked by other sounds.

If you're wearing the best ear defenders which attenuate all sounds by about 30dB, and you assume your threshold of hearing is 10dBSPL (conservative), any sound above 40dBSPL is above the threshold of hearing. That's the level of a quiet conversation.

And because your ear defenders attenuate all sounds, masking is not really affected -- the sounds which would be masking the reversing beepers are also quieter.

There are nuances of course (hearing damage, and all the complicated effects that wearing ear defenders cause), but none of them are to the point that loud reversing noises are required because of hearing protection -- they are required to be heard over all the other loud noises on a construction site.
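
A minimal sketch of that argument, using the 30dB attenuation and 10dBSPL threshold figures above (the example source and background levels are assumptions, just for illustration):

  # Crude audibility model: passive ear defenders attenuate the beeper and the
  # background noise by roughly the same amount, so masking is largely unchanged.
  ATTENUATION_DB = 30      # assumed attenuation of good ear defenders
  THRESHOLD_DB = 10        # assumed threshold of hearing, dB SPL

  def audible(source_db: float, background_db: float) -> bool:
      at_ear = source_db - ATTENUATION_DB
      background_at_ear = background_db - ATTENUATION_DB
      # heard if above the hearing threshold and not buried under the background
      return at_ear > THRESHOLD_DB and at_ear > background_at_ear

  print(audible(source_db=100, background_db=85))   # loud beeper over site noise -> True
  print(audible(source_db=45,  background_db=85))   # quiet source is masked -> False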

> The utility of having a backup beeper or any noise making device on that site is thus zero.

The inverse square law says otherwise; on site the distances will be much more apparent.


Those beepers should be directional. I don't need to hear the beep opposite the direction of movement.


The requirements are that it be heard in all directions, including straight up.



> making machines quieter

It's already possible, just not profitable.


Exactly. That's where the other comment's suggestion of a "noise tax", or fines for exceeding limits, is probably necessary to shift the calculus.

Japan is a good case study [1]. If nothing else, it's fun to look at the charts showing noise reductions - not just in aggregate, but for each contributing input (e.g. engine, intake, exhaust, tires, cooling) - for both passenger vehicles and heavy equipment. Unfortunately, in the US, we have a few obstacles to legislation like this, not least of which is public apathy, since the majority of voters are not exposed to high sound levels daily.

“Japan's primary legislation governing noise regulation is the Environmental Noise Regulation Act, first introduced in 1986 and subsequently amended in 1999. This act sets different noise limits for different times of the day, with the maximum allowable noise level during the day set at 55 decibels and reduced to 45 decibels at night to prevent disturbances to those who are sleeping. Violators of these standards are subject to penalties.”

[1] https://www.lios-group.com/news/noise-regulations-in-japan-o...


From what I can understand, instead of funding various causes via "matching donations," QF is a proposal for a funding body to do something like 'match in proportion to a blend of the donation amount with the number of people donating to the cause.' The point seems to be to smooth out any undue influence any one philanthropist or individual funder has, and to make the funding of public goods quasi-democratic.
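
For concreteness, here's a minimal sketch of the usual quadratic funding matching rule (as in Buterin, Hitzig & Weyl): a project's ideal funding is the square of the sum of the square roots of individual contributions, so many small donors attract a much larger match than one big donor giving the same total. The numbers are made up, and in practice the match is scaled down to fit the available pool:

  import math

  def qf_match(contributions: list[float]) -> float:
      # Ideal quadratic-funding total: (sum of sqrt of each contribution)^2.
      # The match is that total minus what donors already gave.
      ideal = sum(math.sqrt(c) for c in contributions) ** 2
      return ideal - sum(contributions)

  # Made-up example: 100 donors giving $1 each vs. one donor giving $100
  print(qf_match([1.0] * 100))   # (100 * 1)^2 - 100 = 9900.0 -> big match
  print(qf_match([100.0]))       # (sqrt(100))^2 - 100 = 0.0  -> no match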

However, compare these two problems: a) not enough people who can afford to do so engage in philanthropy, and b) philanthropic funding isn't quasi-democratically distributed. I have to imagine that (a) is a much, much bigger issue than (b).

I guess one could argue that because there isn't an analog of "a market" for public goods (cf. "The Use of Knowledge in Society"), somehow we aren't funding the important public goods "efficiently"? And maybe we should think about this more? Yet it's not clear that efficiency (in the economic sense) should be the goal, or even applies. This is because markets are great at distilling people's preferences for fungible goods they want to buy and fungible services they want to use when faced with multiple options for procuring some of each. But a) the vast majority of people don't have that same type of preference for which public goods should be funded, and b) public goods typically aren't fungible. (I.e., funding one scientist gives you a very different research output from funding another in the same subfield.)


> a) not enough people who can afford to do so engage in philanthropy, and b) philanthropic funding isn't quasi-democratically distributed. I have to imagine that (a) is a much, much bigger issue than (b)

Consider philanthropy funding as actions that terraform the future. The future is where all possibilities unfold, so shaping future landscape pays dividends to the worldview of those who materialize it.

I would propose that if (b) is miscalibrated and inequitable, it might affect everything, including (a), much more than we assume.

But also, I'm not trying to claim I know that one is more important, just that they're both quite important and very interrelated :)


Ah, interesting. So what (I think) you are saying is that if there is enough of the right type of philanthropic, in-kind donation, more people will be able to donate in the future. I will admit the possibility that there may be a clever way of doing in-kind donations that isn't widespread. Sort of akin to ranked-choice voting, but for donor matching.


> It also axed research on Covid-19, including studies that could have helped the nation respond to many infectious disease threats. Among them: a grant to Emory University and Georgia State University, where researchers had developed three potential drugs that showed promise against many RNA-based viruses, including coronaviruses, Ebola, avian influenza and measles, said George Painter, a pharmacologist at Emory who was co-leading the research.

Just to reiterate a few things: while estimates vary, every $1 spent on medical research returns multiple dollars of economic value. One study out of England suggests that for every pound invested in medical research, the return is 0.25 pounds every year after, forever. [1] The cost of these cuts, as others have said, is quite large.
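
Put another way, a perpetual 0.25-pounds-per-year return can be converted to a present value; the discount rates below are my own assumptions, not from the study:

  # Present value of a perpetuity: PV = annual_payment / discount_rate
  annual_return_per_pound = 0.25              # from the study cited in [1]
  for discount_rate in (0.03, 0.05, 0.07):    # assumed discount rates
      pv = annual_return_per_pound / discount_rate
      print(f"at {discount_rate:.0%} discount rate: ~£{pv:.1f} back per £1 invested")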

In addition, these grants are peer reviewed by expert panels, and only grants that score within a certain top percentile, determined each year, get funded. For the marquee grants, you have to score in roughly the top 10th percentile (see [2], for example). The scoring is done by panels composed of leading experts and professors from around the country. While one can adjust funding priorities, part of the price to pay for always having cutting-edge basic research available is that there will be certain things one disagrees with.

There is plenty of room for a discussion of how to increase the efficiency of scientific funding, and if the current science-funding institutions are at... 'a near-optimal position in tradeoff space.' However, taking a chainsaw to the agencies to punish them is like blaming doctors for outbreaks of diseases, the latter being sadly predictable.

[1] https://www.kcl.ac.uk/news/health-research-offers-a-big-retu...

[2] https://www.niaid.nih.gov/grants-contracts/niaid-paylines


Seconded, as not only is this an interesting idea, it might also help solve the issue of checking for reproducibility. Yet even then human evaluators would need to go over the AI-reproduced research with a fine-toothed comb.

Practically speaking, I think there are roles for current LLMs in research. One is in the peer review process. LLMs can assist in evaluating the data-processing code used by scientists. Another is for brainstorming and the first pass at lit reviews.

