I am on the free tier of Gemini 3. With some intervention on my part, I got it to build, in Emacs Lisp, a primitive-recursive function for determining if a number is prime (by primitive-recursive I mean a function built from the building blocks of the constant, successor and projection functions, plus the primitive recursion and composition operators/macros). I was impressed, as previous models (including Anthropic's and OpenAI's) could not do this.
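For anyone who hasn't seen the formalism, the toolkit is small enough to sketch in a few lines. Here's an illustrative sketch in Python rather than Emacs Lisp (all names are mine, not the model's output), showing the building blocks and how addition and multiplication fall out of them:

```python
# Primitive-recursive building blocks: zero, successor, projection,
# composition, and the primitive recursion operator. Python is used
# for readability; the actual experiments described were in Emacs Lisp.

def zero(*args):
    """Constant zero function (ignores its arguments)."""
    return 0

def succ(n):
    """Successor: n + 1."""
    return n + 1

def proj(i):
    """Projection: returns the function picking the i-th argument (0-based)."""
    return lambda *args: args[i]

def compose(f, *gs):
    """Composition: (f o (g1,...,gk))(xs) = f(g1(xs), ..., gk(xs))."""
    return lambda *xs: f(*(g(*xs) for g in gs))

def prim_rec(base, step):
    """Primitive recursion:
         h(0, xs)   = base(xs)
         h(n+1, xs) = step(n, h(n, xs), xs)
    Implemented with a bounded loop, mirroring the schema."""
    def h(n, *xs):
        acc = base(*xs)
        for i in range(n):
            acc = step(i, acc, *xs)
        return acc
    return h

# Arithmetic built purely from the blocks above:
add = prim_rec(proj(0), compose(succ, proj(1)))        # add(n, x) = n + x
mult = prim_rec(zero, compose(add, proj(1), proj(2)))  # mult(n, x) = n * x
```

A primality test is then a matter of stacking bounded sums, a sign function, and a divisibility check on top of these same pieces - which is the construction the model managed, with help.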
For the past few days I have been asking it to build a mu-recursive Ackermann function in Emacs Lisp (built on the primitive-recursive functions/operators, plus one extra operator - minimization). I told it that the prime-detector function it had already built should be able to use the same functions/operators, and to rewrite code if necessary.
So far it has been unable to do this. If I thought it could, but was stumbling over Emacs Lisp, I might ask it to try Scheme or Common Lisp or some other language. It's possible I'll get it to work in the time allotted by my daily free tier, but I have had no success so far. I am also starting with Ackermann inputs of 0,0 - 0,1 - 1,0 - 1,1 so as not to overburden the system, but it can't even handle 0,0. It also tries to redefine "and", which is a special form in Emacs Lisp, so Emacs hiccups on that.
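For context, the extra operator in question is short to state but is exactly the part that adds power. A sketch in Python (illustrative, my own names, not the model's code) - and worth noting that a genuinely mu-recursive Ackermann is a heavy construction, since the standard route goes through coding up computation histories (Kleene normal form), so it isn't surprising this is where the model struggles:

```python
def mu(f):
    """Unbounded minimization: mu(f)(xs) = the least n with f(n, xs) == 0.
    The unbounded search is what lifts primitive recursion to full
    mu-recursion; it may loop forever if no such n exists."""
    def g(*xs):
        n = 0
        while f(n, *xs) != 0:
            n += 1
        return n
    return g

# Toy use: integer square root as a least-n search. (In a strict
# development the predicate itself would be assembled from the
# primitive-recursive blocks; a plain lambda is used here for brevity.)
isqrt = mu(lambda n, x: 0 if (n + 1) * (n + 1) > x else 1)
```

For example, `isqrt(10)` searches n = 0, 1, 2, ... until (n+1)^2 exceeds 10, returning 3.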
A year ago LLMs were stumbling over Leetcode and Project Euler functions I was asking them to write. They seem to have gotten a little better, and I'm impressed Gemini 3 can handle, with help, primitive recursive functions in Emacs Lisp. It doesn't seem able to handle mu-recursive functions with minimization yet, though - and these are the trivial, toy implementations of these things. Also, as I said, it tried to redefine "and", which Emacs Lisp fell over on.
So it's a helpful assistant and tool, but definitely not ready to hand things over to. As the saying goes, the first 90% of the code takes 90% of the time, and the last 10% takes the other 90%. Or Kernighan's version: debugging is twice as hard as writing the code in the first place, so if you write code as cleverly as you can, you are by definition not smart enough to debug it. It does have its uses, though, and has been getting better.
It's the Mythical Man-Month idea. Programming is a different thing from working on an assembly line, in a call center, or in retail sales. You're much better off having four programmers who are worth paying $200k a year than ten programmers who are worth paying $75k a year.
I'm going to argue that, at scale, process beats the quality of the people you're using -- and also that there are toxic cultures, around Google and C++, where very smart people get seduced into spending all their time and effort fighting complexity, battling 45 minute builds, etc.
> and also that there are toxic cultures, around Google and C++, where very smart people get seduced into spending all their time and effort fighting complexity, battling 45 minute builds, etc.
Not sure what you mean here. "Fighting" as in "seeking to prevent", or "putting up with", or what exactly? Is this supposed to be bad because it's exploitative, or because it's a poor use of the smart person's time, or what exactly?
Essentially that the idea that people can hold 7 ± 2 things in their head simultaneously is basically true, such that when your tools make a demand on your attention, it subtracts from the attention you can put on other things.
There are many sorts of struggle. There is the struggle of managing essential complexity, and also the struggle, especially in the pre-product phase, of getting consensus over what is "essential". [1] When it comes to accidental complexity, you can just struggle following the process, or struggle to struggle less in the future through some combination of technical and social innovations - which themselves can backfire into increased complexity.
Google can afford to use management techniques that would be impossible elsewhere because of the scale and profitability of their operations. Many a young person goes there thinking they'll learn something transferable but the market monopolies are the one thing that they can't walk out with.
[1] Ashby's law https://www.edge.org/response-detail/27150 - best exemplified by the Wright flyer, which could fly without tumbling because it controlled roll, pitch and yaw.
Yes... NVDA closed at $188.15 yesterday, a price it had never reached before October. It did hit $212.19 last week, but retreated.
Since spring 2023, Nvidia stock seems to follow a pattern: it runs up prior to earnings, it beats the forecast, the forward guidance gets replaced with an even more amazing forecast, and then the stock goes down for a bit. It also has longer runs - it went up in the first half of 2024, and again from April to now.
Who knows how much longer it can go on, but I remember 1999, and things were crazier then. In some ways things were crazier three years ago, with FAANG salaries etc. There is a lot of capital spending; the question is whether these LLMs, with some tweaking, are worth that capital spending, and it's too early to tell. Of course, a big theoretical breakthrough - like the utility of deep learning, or transformers - would help, but those only come along every few years (if at all).
From spring 2019 to July of this year, I worked in IT at a Fortune 100 retailer.
The project I worked on was enormously successful in terms of revenue growth. I and the people on my team had a huge nationwide impact, starting when we were about a dozen people (the team has since grown to several dozen).
Even a manager of one of the big-box stores would only have a limited geographical impact, whereas my work always had nationwide impact (and at some companies programmers have worldwide impact). I turned on a payment option for my platform one quarter, and very quickly people were using it for a million dollars a month in purchases - a figure that kept going up.
The book Capitalism without Capital talks about this. Some aspects of it are alluded to in Fred Brooks's 1975 book The Mythical Man-Month.
To build a car, a lot of effort goes into making each car - not just final assembly, but making the glass, the tires and so forth. Whereas with programming, I write an app, or a feature for an app, and the end result is duplicated and distributed around the country (or even around the world) for free, or virtually free. I'm not making commodities one at a time like someone on an automobile line. It is something different.
> I turned on a payment option for my platform one quarter, and very quickly people were using it for one million a month in purchases.
How is that making a difference? These are sales that were already happening and you allowed them to do something like "pay with paypal" and then claim the entire sale?
As a result, one would assume the price should crash to zero and everything would balance out - because even though you're having nationwide impact, it fundamentally doesn't cost as much to make an app as to make a car... so why should people pay as much for it?
Presently, it's mostly because laws sustain the author's ability to gatekeep the software, whether or not it's even running on capital they own.
One does wonder how long such laws will last, one way or the other, if their end result is a massive inequality of outcome.
NVDA had a surprise earnings report on May 24, 2023, closing at $305 that day and opening at $385 the next (before the 10-for-1 split). Pretty much every earnings day since then has been the same - the numbers come out, they beat their own estimates, the stock goes down a little. People have been dooming and glooming it every earnings call - you can read the threads here from May 2023; people were saying it was a bubble then, and it's now over four times what it was.
Up 4x in 2 years seems like an indication it is a bubble, not the opposite. There's really no way of knowing when a bubble is going to pop, there's no reason an overvalued stock can't become even more overvalued.
I think we need to first straighten out the terms.
What really is a bubble? In my view, if we want to be strict about it, a bubble is a rise in asset prices that is not backed by fundamentals.
Now, as it stands, Nvidia's recent growth has to date been backed by fundamentals, due to increases in free cash flow. However, its market capitalization is premised on demand for what they produce continuing to rise, notwithstanding competition.
The problem is that Nvidia's customers - the ones responsible for its revenue growth - are not seeing meaningful ROI, and it is unclear how steep the barriers to entry will remain. It's questionable whether Nvidia can maintain its existing barriers to entry (let alone steepen them) and sustain the market power it enjoys over a long enough horizon to justify its present value. Therein lies the problem with Nvidia's valuation.
There is also no evidence that Nvidia's projects have real options embedded within them. Now of course, let's get real - who is really doing this level of analysis? Very few, and we are seeing this play out in jumps in the stock price with no tangible justification.
Wow, look at the crowd of NN doubters in the comments there. I see the quality of foresight in the commentariat hasn’t improved given the state of this thread, either.
A commonality to most of them (and to a lesser extent, all of them) is that they write software.
If a company not on the list, like Ford, has an F-150 truck come off the assembly line, some of that $40,000 cost is in the capital expenditure for the plant, any automation it has, the software in the truck and so on. But Ford has to pay for the aluminum, steel and glass for each truck, and for thousands of workers on the assembly line to attach and assemble parts for each one.
Meanwhile, at Apple, a team writes iOS 18, mostly based on iOS 17, and it ships with the devices. Once it's written, that's it for what goes out on the iPhone 16, apart from some additional tweaks up through iOS 18.6. The relatively small team working on iOS has its work going out on tens of millions of units. That work is not as connected to the process of production as the assembly-line workers attaching and assembling parts for the F-150. If some inessential feature isn't done as a phone is being made, it gets punted to the next release. That can't be done with an F-150 truck.
Software properly done is just much more profitable than non-software work. We can see this here. Yes, some of the latest boost is due to AI hype (which may or may not come to fruition in the near future), but these companies got to this position before all of that.
I was watching a speech by Gabe Newell about the (smaller) software industry of the 1990s, and the idea back then of outsourcing to save on salary costs. He said he and his partners went the other way and decided to look for the most expensive and best programmers they could find, and Valve has had great success with that.

Over the past 2 1/2 years we've seen a lot of outsourcing to cheaper foreign labor, FAANG layoffs (including Microsoft's recent Xbox layoffs), and, more recently, attempts to lower costs by having software produced by less experienced vibe coders using "AI". I have seen myself at Fortune 100 companies, especially non-tech ones, that the lessons of the late-1960s NATO software engineering conferences, or the lessons Fred Brooks learned managing the OS/360 project in the 1960s, haven't been absorbed. Software can be a very, very profitable enterprise, and it is sometimes done right, but companies are often still doing things the way such projects were attempted in the early 1960s. Even attempts to fix things, like agile and scrum, get twisted into window dressing over the old-fashioned corporate way.
I think that's because the announcement there actually told you something technically interesting. This just presents a result (which is cool), but the actual method is what is really cool!