Very awesome project, can't wait to get a chance to try it out.
I feel like you would benefit from having a real-life photo of "The deck" feature. Your description does it justice, but your graphic does not. (To me)
Everyone is reading this as intentional anti-competitive practices. While that may be true, isn't another reasonable explanation that the Copilot development team is moving as fast as they can and these sorts of workarounds are being forced through in the name of team velocity? It takes a lot more time/energy to push public APIs and it's probably a very different team than the team developing the copilot extension. Seems a bit like a "don't attribute to malice..." kind of moment to me
> Everyone is reading this as intentional anti-competitive practices. While that may be true, isn't another reasonable explanation that the Copilot development team is moving as fast as they can and these sorts of workarounds are being forced through in the name of team velocity?
Wouldn't another way of saying that be "the Copilot development team is leveraging their Microsoft ownership to create products in a way not available to the general marketplace?"
The goal might not be to squash competition, but blessing one client with special treatment not available to others can still be anti-competitive.
Whether that would fall afoul of any regulation is beyond my expertise. Naively, most companies have internal APIs that are not generally available. But then most companies don't have paid public marketplaces on their platform.
Is it even unavailable to competitors? VS Code is open source. Didn't Cursor fork it and build its features directly into the fork? Not doing something like this would put Copilot at a disadvantage.
Sort of. The core is, but the installable binaries with telemetry and proprietary extensions are not.
The open source, telemetry-free version of VSCode is called VSCodium: https://vscodium.com/
> Didn't Cursor fork it and build its features directly into the fork?
Yes, in their recent interview with Lex Fridman they argued that life as an extension is too limiting.
The main reason we criticise Microsoft for doing this and not them is just their size and market dominance.
Why jump through hoops to make competitors better able to hotwire their own AI into VSCode, or hotwire Copilot into their own IDE, when it's easier to iterate fast and remain unpredictable?
> Why jump through hoops to make competitors better able
Because that is the competitive philosophy that allowed VS Code to win in this space. It fits with that great quote from Bill Gates: "A platform is when the economic value of everybody that uses it, exceeds the value of the company that creates it."
By having VS Code give priority to another MS/GitHub product that they aren't willing to give competitors, they're diminishing VS Code's value as a platform, and encouraging competitors to build their own IDEs rather than building on top of it.
do you have anything respectful to say, or just this disrespectful, dismissive response?
If you want to have a discussion, then let's have one. Step one is to have the discussion in good faith. If you're not capable of that, then don't respond at all.
The fact that you can access source code allowing you to build a telemetry-free version of VSCode doesn’t magically make what’s actually distributed open source and telemetry free.
The sole thing you can actually download and run while calling it VS Code - a trademarked name - is neither open source nor telemetry-free.
But Cursor had to fork, so as a developer wanting to use them, you need to give up VS Code and install a new code editor, and you can’t just install a plugin. Very few can maintain a fork and get enough people to use their fork. Also what happens if you have two products that needed a fork? You can’t use them both.
I don’t know if it’s legal or not, IANAL, but it definitely feels anti-competitive.
Competitors compete in the same market. The market in this case is VS Code extensions, with the consumers in that market being the user base of VS Code, not the users of some fork of VS Code. You can't point your competitors to a different market and then reasonably claim to be open to competition.
Now, I'm not a big fan of VS Code as of late. I find the changes completely user-hostile: first the one that broke the Customize UI + MonkeyPatch extensions (which made it look not completely shit on macOS), and now the one that broke APC, which replaced the first two. The PM responses to that in GH issues were very poor as well. But this specific lie about what is OSS and what isn't, and how it's used, annoys me a lot. You are not helping with the problem.
Seems like the only sensible comment in this thread so far.
Here's what I imagine it's like working on the Copilot team:
> Mgmt: "We need this feature, and we need in 2 weeks."
> Devs: "That feature is not technically possible."
> Mgmt: "Well, figure out a way to make it possible. That's _your_ problem."
That is exactly the sort of management that has landed many a company in hot water before, including Microsoft.
Whether the managers remain ignorant through malice or incompetence is irrelevant. Directing your subordinates to do something that they should reasonably know would break the law or be anticompetitive is still illegal.
The "see no evil" defense is a piss-poor defense that is more likely to be used to show you knew exactly what was going on.
There isn't the remotest chance that any of this is anticompetitive in a legal sense. Microsoft doesn't have anything close to a monopoly on dev tooling or text editors.
This doesn't fly when you're a company the size of Microsoft with the kind of influence and power they have. You can't just ignore the possibility or effects of engaging in anti-competitive behavior simply because it's convenient for you. That's not how it works.
Why not? They've survived for decades just shrugging off the law and paying off whatever minor fine there is years later. They started that model, now embraced by everyone from Google to Apple to Uber. Build it fast, get rich, worry about legality later.
Microsoft: we’ve just committed to an investment of two zillion dollars in co-pilot!
Microsoft to investors: don’t worry, you’ll get two zillion dollars of “value” launching next week, AND we won’t have to pay the bill for years! There’s even a chance our lawyers will win, and we will never have to pay!
Microsoft to devs: sorry, we spent two zillion on product, so your profit sharing is going to take a big hit. Thanks for your hard work!
The few people I know in the Copilot team(s) (not necessarily VS Code) are laser focused on prioritizing features based on customer demand, not top-down guidance :)
Are other extensions like Codeium[0] allowed to publish under the same rules? I'm not saying your comment is incorrect, but unless Copilot competitors can get the same treatment, it seems extremely unfair and anti-competitive.
> VSCode is provided fully free as in beer and freedom
No, VSCode is a proprietary text editor/IDE from Microsoft. Code-OSS is provided fully free as in beer and freedom, and is currently what resides at https://github.com/microsoft/vscode.
Why would Microsoft not want other AI agent extensions to get the same benefits, which would benefit all AI agent users?
Edit: I have removed the portion of the comment which discussed the throwaway account.
Probably not. Please suggest to extension authors that they dual-publish their extensions to OpenVSX and the VS Marketplace. So far all authors I engaged with were happy to do so (except for Microsoft of course, who are the only beneficiary of this walled-garden situation).
I also have an extension that I dual-publish. I was surprised to see it’s getting as many downloads on OpenVSX as on the VSCode marketplace. I’m just glad it’s useful to more people at essentially no extra cost.
I think Cursor just mirrors the VSCode marketplace on their own servers. They used to have an ugly workaround for installing extensions, but now it just works and I see links to https://marketplace.cursorapi.com/ inside of Cursor's extension browser.
Eh not quite. Famously, you can fork VSCode, but you can't use the VSCode Extension Marketplace if you do, which loses a lot of the network effect benefits of the VSCode ecosystem. (As far as I know Cursor is flat out violating Microsoft's terms of service with respect to the extension marketplace).
And a lot of the licenses for flagship Microsoft VSCode extensions for languages like C/C++ and Python don't allow using them outside of VSCode/Extension Marketplace so open source forks are crippled by default.
I believe this also blocks you from using Microsoft's proprietary language extensions, and they have been steadily switching the default language packages from OSS to proprietary.
Yes. You famously cannot use the C/C++ language server bundled in the C/C++ extension or Pylance. Who knows what other development tools they will lock behind their fork to the detriment of open source communities. Also you can't use their Remote Extension suite.
Red Hat provides support for their packages. If you're not paying for support, you don't get access to the repos. That makes sense to me. What does Microsoft gain by creating a walled garden? They don't provide support. All that they provide is hosting. The Eclipse Foundation provides hosting for free for OpenVSX, which is an amazing service to the community of people using VSCode forks that aren't allowed to access the VSCode Marketplace. Microsoft should either relax the ToS on the Marketplace or acknowledge OpenVSX as the one and only marketplace for extensions.
>Everyone is reading this as intentional anti-competitive practices.
Even if it is anti-competitive, I don't care. Why should VS Code have to support alternative AI assistants in their software? I understand why people would want that, but I'm not sure why microsoft has some sort of ethical or legal burden to support it. Plus it's open source, competitors can take it for free and add their own co-pilots if they want.
I’m no fan of MS, but how are they leveraging their dominance in, say, OS to create dominance in editors? AFAIK it’s not like VS code is bundled with Windows.
Hanlon's razor falls apart when it's used outside of personal relationships and in situations where billions of dollars are on the line.
There is no functional difference between a Microsoft that's really excited about Copilot so that it quickly integrates it into their products and a Microsoft that's hellbent on making sure Copilot gets to use secret APIs others can't.
Anti-competitive behavior is absolutely fine though when not illegal. I don't see how vscode could be construed as having a monopoly when Cursor freely forked it.
So was IE, back in the day, when they first "embraced" the web.
Today's "embrace" is of the web dev ecosystem, which before VSCode's dominance consisted of Jetbrains, other IDEs, text editors, etc.
Now with VScode and Github, they control much of the dev ecosystem, shrink competitors' marketshares by making them free to end-users (subsidized by other Microsoft businesses), expand them with new capabilities (even before secret APIs), etc.
It is really a shame to me that everyone believes Microsoft has changed and would never behave like they did in the 90s and prior. They haven't changed. They just decided -- for a time -- that another strategy was in their best interests. They're deciding that again, and going back to their EEE playbook.
(It also occurs to me that a lot of people here probably aren't old enough to remember 20th-century Microsoft...)
>Seems a bit like a "don't attribute to malice..."
I'm not saying you are wrong or that the rest of your comment isn't pretty valid, but a lot of people attribute malice to Microsoft out of the gate because they have a history of operating out of malice.
> It takes a lot more time/energy to push public APIs
And, once an API is public, it becomes a lot harder to make changes to it. Iterating with a private API, then making it public once you've figured out the problem space, is a valid and useful approach.
Iterating on a private API is fine. Allowing your internal AI assistant to publish to the extension store while consuming those private APIs while prohibiting any competitors from doing so is not.
> Everyone is reading this as intentional anti-competitive practices.
I think it's fair to assume anticompetitive intent due to their history of anticompetitive behavior. Admittedly, I'm old enough to remember the crap they pulled all through the 90s.
While I can understand the part about hidden APIs, as they're in flux and experimental, the part that's weird about it to me is the "you can totally build it and share it just not on our marketplace" part. That just sounds to me like they're trying to bar their competitors from the VSCode Marketplace, making installing and updating a lot harder for users.
I don't care if it's malicious or not. The fact remains that this team is using their position inside Microsoft to make use of tools in another product that a competing product wouldn't get to use.
This is one of the things MS got sued for back in the 90s. They shouldn't be allowed to do this again.
Won't really help ya. As outlined at https://ghuntley.com/fracture/, as soon as you compile "VSCode" (MIT) the ecosystem fractures in a bad way (tm), including no license to run the majority of MSFT extensions (language LSPs, Copilot, Remote Development). If you are a vendor producing an MIT fork, then you need to iterate the graph and convince 3rd-party extension authors to _not use the MSFT extensions_ as dependencies _and_ to publish on open-vsx.
This is how Cursor gets wrecked in the medium/long term. Coding agent? Cool. You can't use Pylance with it etc. VSCode degrades to being notepad.exe. MSFT uses Cursor for product research and then rolls out the learnings into Copilot because only Copilot supports all of "Visual Studio Code" features that users expect (and this is by design)
Further enshittifying Windows and Office. I'd say this task must have run its course by now, but Microsoft always seems to find a way to make products worse.
> While that may be true, isn't another reasonable explanation that the Copilot development team is moving as fast as they can and these sorts of workarounds are being forced through in the name of team velocity?
this strikes me as most likely. it is anti-competitive, but it's probably not their motive.
Also, regarding the wording "Proposed API": this seems like it's just some kind of incubator for APIs before marking them as stable. So that Copilot thing may just be their incubator project. It may not be, though.
Not malicious, but still selfish. It's important to remember that the copilot extensions are an extremely effective way of monetizing VScode. So it seems more like they're kind of compromising on their API usage rules in order to get to market quicker. But allowing themselves to use the APIs before anyone else is in a way anti-competitive, because the only way one could compete would be to use the unfinished APIs. But that requires users to go through more hoops to install your extension.
I should also mention that I am a VScode extension developer and I'm one of the weirdos that actually takes the time to read about API updates. They are putting in a lot of effort in developing language model APIs. So it's not like they're outright blocking others from their marketplace.
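For a sense of what that API surface looks like from an extension author's side, here is a minimal sketch of calling a chat model via the vscode.lm API. It's written from memory of the docs, so exact names and signatures may vary between VS Code versions, and the command id is made up:

```typescript
import * as vscode from 'vscode';

export function activate(context: vscode.ExtensionContext) {
  const cmd = vscode.commands.registerCommand('demo.askModel', async () => {
    // Ask VS Code for an available chat model (Copilot is the usual vendor).
    const [model] = await vscode.lm.selectChatModels({ vendor: 'copilot' });
    if (!model) {
      vscode.window.showWarningMessage('No chat model available.');
      return;
    }

    const messages = [
      vscode.LanguageModelChatMessage.User('Explain what a proposed API is in one sentence.'),
    ];
    const tokenSource = new vscode.CancellationTokenSource();
    const response = await model.sendRequest(messages, {}, tokenSource.token);

    // The answer streams back as text fragments.
    let answer = '';
    for await (const fragment of response.text) {
      answer += fragment;
    }
    vscode.window.showInformationMessage(answer);
  });
  context.subscriptions.push(cmd);
}
```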
Your VaporView extension looks amazing! I can't even fathom how to get that far along in extension development.
Do you have any links or resources you could direct me toward that were more helpful than Microsoft's basic how-to pages for learning VS Code plugin development? I attempted to build a VS Code extension, but the attempt fizzled out. I managed to make some progress in creating the simplest of UI elements and populating them. I'm particularly interested in building a GUI-based editor of JSON / YAML where a user can select a value from a prepopulated dropdown menu, or validating a JSON / YAML file against a custom schema. Any help or advice you could provide would be appreciated!
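For the dropdown part of that, a command wired to showQuickPick usually covers it; below is a minimal sketch, with the command id and value list made up for illustration. For validating JSON against a custom schema, the contributes.jsonValidation contribution point in package.json may already be enough without writing any code.

```typescript
import * as vscode from 'vscode';

export function activate(context: vscode.ExtensionContext) {
  const cmd = vscode.commands.registerCommand('demo.pickLogLevel', async () => {
    // Show a dropdown of allowed values (the list here is made up).
    const choice = await vscode.window.showQuickPick(
      ['debug', 'info', 'warn', 'error'],
      { placeHolder: 'Select a log level' }
    );
    if (!choice) {
      return; // user dismissed the picker
    }
    // Insert the chosen value at the cursor in the active editor.
    const editor = vscode.window.activeTextEditor;
    if (editor) {
      await editor.edit(edit => edit.insert(editor.selection.active, choice));
    }
  });
  context.subscriptions.push(cmd);
}
```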
Frankly if they shipped it with `enabledApiProposals` I'd even go further and assume that they actually _intend_ to release public APIs once they've baked.
Like, why go through the extra work of gating it under `enabledApiProposals` and using the public manifest flag when you could put code in VSCode itself that is like "oh if this extension is installed let me just run some secret code here in the binary".
I would think this is less team velocity and more about LSP/etc. I am not an expert on how this is developed, but I imagine it will take at least a couple of years for the dust to settle to decide on good public API abstractions for LLM codegen, and they don’t want to introduce anything public that they have to maintain in concert with 3rd parties.
That’s not to say the general concern about GitHub-VSCode smothering competition isn’t valid, but I agree that it’s probably not what’s happening here.
Thank you. This needs to be said & should be reported.
If we want a world that isn’t massively hostile to devs, like it is for most companies, this is the kind of advocacy we need and I’d love to see more people in tech putting it out there.
Disclaimer: I used to work at Microsoft. These days I work at a competitor. All words my own and represent neither entity.
Microsoft has the culture and the technology to tell private and public APIs apart and to check code across the company to ensure that only public APIs are called. This was required for decades as part of the Department of Justice consent decree and every single product in the company had scanners to check that they weren't using any private APIs (or similar hacks to get access to them such as privately searching for symbols in Windows DLL files). This was drilled into the heads of everyone, including what I assume are 90% of VP+ people currently at the company, for a very long time.
For them to do this is a conscious decision to be anticompetitive.
What a coincidence, I was just browsing Microsoft's Go fork (for FIPS compatibility, basically replacing Go crypto with OpenSSL and whatever API Windows has, just like there's a Google's fork that uses BoringSSL), and found this patch:
> Upstream Go tricks Windows into enabling long path support by setting an undocumented flag in the PEB. The Microsoft Go fork can't use undocumented APIs, so this commit removes the hack.
So, even if they fork something, they have to strictly follow this guideline and remove undocumented API usage. I wonder if this only applies to Windows APIs though.
> Microsoft has the culture and the technology to tell private and public APIs apart and to check code across the company to ensure that only public APIs are called. This was required for decades as part of the Department of Justice consent decree and every single product in the company had scanners to check that they weren't using any private APIs (or similar hacks to get access to them such as privately searching for symbols in Windows DLL files).
I thought that only applied to private Windows APIs?
The antitrust case was about the Windows monopoly specifically, so other MS products calling Windows private APIs was in its scope. But, this is more comparable to another MS product calling a private Visual Studio API – I don't believe that was in the scope of that antitrust case. Did Microsoft have policies and processes against that scenario too?
The settlement was (presumably, I've never read it) about not using a monopoly in one area to gain influence in another, so I would not be surprised if Windows was the primary focus, but the overall message was fairly universal, and it makes sense: Microsoft builds platforms and overwhelmingly those platforms rely on other parties, so don't leverage anything internal/unfair as that hurts the platform.
This means that Office shouldn't use private Windows APIs and pin itself to the taskbar. It means that Surface shouldn't have special integrations (whether with Windows, Copilot, or whatever) that aren't available to third parties. It means that Azure shouldn't build things that are only available to Office. You build for the platform. The push was originally around a legal mandate, but it turns into a culture.
> The push was originally around a legal mandate, but it turns into a culture.
Whatever the scope of the legal mandate was, it expired over a decade ago now.
Culture can change over time. Even if Microsoft had this culture strongly when you worked there, it might have become much weaker in the years since. Within a corporation, culture can also vary a lot between different teams/divisions/etc - maybe it is still strong in some parts of the company but gone in others.
In 2011 [Erich Gamma] joined the Microsoft Visual Studio team and leads a development lab in Zürich, Switzerland that has developed the "Monaco" suite of components for browser-based development, found in products such as Azure DevOps Services [0]
>I had to spam click back like 30 times to get back to this hacker news comment thread
Click-and-holding or right-clicking the back button will give you a list of last N URLs in your tab history. This page only generates one auto-redirect, so the HN URL will show up.
Just wanted to apologise to everyone for this, this kind of stuff drives me nuts and I'm not sure how I never noticed - it seems to be a result of how we use the iframe to render the chat example. Investigating!
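If the iframe is indeed the culprit, one common cause is that every src assignment (or in-frame navigation) pushes an entry onto the parent tab's history, so each tab switch in the demo becomes one more back step. A sketch of the usual fix, assuming the demo swaps iframe content on tab clicks (the function name is made up):

```typescript
// Replace the framed document instead of navigating it, so the parent
// tab's history doesn't grow with every demo tab switch.
function showDemoTab(iframe: HTMLIFrameElement, url: string): void {
  if (iframe.contentWindow) {
    // location.replace() swaps the document without adding a history entry.
    iframe.contentWindow.location.replace(url);
  } else {
    iframe.src = url; // the very first load doesn't add a history entry
  }
}
```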
Tip for next time this happens: hold down the back button for a menu of your history. It can help get where you want faster. Although not sure it helps too much if you literally had to click 30 times
I’m also experiencing issues with the website. I went to the docs page and accidentally pressed back in my browser, after which the forward button wouldn’t work to undo the back operation.
Seems like the website breaks basic browser navigation.
On Firefox 131.0 I clicked through the tabs with the demo code, then pressed my mouse's "back" button and it didn't work. So I manually clicked the back button and it directed me back to this page.
Then I opened it again and clicked the back button and it didn't work again.
Reproduced with Firefox 131.0 on Windows 11. Happens if I click to jazz.tools. After pressing back once, I am still on jazz.tools, but have a forward arrow. It does seem related to the "chat" because the "result" window changes when I click between those back/forward arrow controls of the browser.
> the person running them gets fired or quits partway through at least half of the time
This is a good point. Or the migration appears to have been very successful to management (before it's actually complete from an engineering perspective) and they get promoted / moved onto higher priority work.
Either way: make sure you are keeping the rest of the relevant engineering organization informed about how the new system works and how the migration is going to work.
I don’t think there’s much room for promotion because migrations are fabrication and promotions favor innovation. It’s ability to save money versus ability to make money. See: Smiling curve in economics.
I am not on the bleeding edge of this stuff. I wonder though: How could a safe super intelligence out compete an unrestricted one? Assuming another company exists (maybe OpenAI) that is tackling the same goal without spending the cycles on safety, what chance do they have to compete?
That is a very good question. In a well functioning democracy a government should apply a thin layer of fair rules that are uniformly enforced. I am an old man, but when I was younger, I recall that we sort of had this in the USA.
I don’t think that corporations left on their own will make safe AGI, and I am skeptical that we will have fair and technologically sound legislation - look at some of the anti cryptography and anti privacy laws raising their ugly heads in Europe as an example of government ineptitude and corruption. I have been paid to work in the field of AI since 1982, and all of my optimism is for AI systems that function in partnership with people and I expect continued rapid development of agents based on LLMs, RL, etc. I think that AGIs as seen in the Terminator movies are far into the future, perhaps 25 years?
People spend so much time thinking about the systems (the models) themselves, and not enough about the system that builds the systems. The behaviors of the models will be driven by the competitive dynamics of the economy around them, and yeah, that's a big, big problem.
It'd be naive if it wasn't literally a standard point that is addressed and acknowledged as being a major part of the problem.
There's a reason OpenAI's charter had this clause:
“We are concerned about late-stage AGI development becoming a competitive race without time for adequate safety precautions. Therefore, if a value-aligned, safety-conscious project comes close to building AGI before we do, we commit to stop competing with and start assisting this project. We will work out specifics in case-by-case agreements, but a typical triggering condition might be “a better-than-even chance of success in the next two years.””
How does that address the issue? I would have expected them to do that anyhow. That's what a lot of businesses do: let another company take the hit developing the market, R&D, and the supply chain, then come in with industry standardization and cooperative agreements only after the money was proven to be good in this space. See electric cars. Also, they could drop that at any time. Remember when OpenAI stood for open source?
Neither mention anything about open-source, although a later update mentions publishing work (“whether as papers, blog posts, or code”), which isn't exactly a ringing endorsement of “everything will be open-source” as a fundamental principle of the organization.
Since no one knows how to build an AGI, hard to say. But you might imagine that more restricted goals could end up being easier to accomplish. A "safe" AGI is more focused on doing something useful than figuring out how to take over the world and murder all the humans.
Assuming AGI works like a braindead consulting firm, maybe. But if it worked like existing statistical tooling (which it does, today, because for an actual data scientist and not aunt cathy prompting bing, using ml is no different than using any other statistics when you are writing your python or R scripts up), you could probably generate some fancy charts that show some distributions of cars produced under different scenarios with fixed resource or power limits.
In a sense this is what is already done and why ai hasn't really made the inroads people think it will even if you can ask google questions now. For the data scientists, the black magicians of the ai age, this spell is no more powerful than other spells, many of which (including ml) were created by powerful magicians from the early 1900s.
Similar to how law-abiding citizens turn on law-breaking citizens today or more old-fashioned, how religious societies turn on heretics.
I do think the notion that humanity will be able to manage superintelligence just through engineering and conditioning alone is naive.
If anything there will be a rogue (or incompetent) human who launches an unconditioned superintelligence into the world in no time and it only has to happen once.
This is not a trivial point. Selective pressures will push AI towards unsafe directions due to arms race dynamics between companies and between nations. The only way, other than global regulation, would be to be so far ahead that you can afford to be safe without threatening your own existence.
There's a reason OpenAI had this as part of its charter:
“We are concerned about late-stage AGI development becoming a competitive race without time for adequate safety precautions. Therefore, if a value-aligned, safety-conscious project comes close to building AGI before we do, we commit to stop competing with and start assisting this project. We will work out specifics in case-by-case agreements, but a typical triggering condition might be “a better-than-even chance of success in the next two years.””
The problem is the training data. If you take care of alignment at that level the performance is as good as an unrestricted one, except for things you removed like making explosives or ways to commit suicide.
But that costs almost as much as training on the data, hundreds of millions. And I'm sure this will be the new "secret sauce" by Microsoft/Meta/etc. And sadly nobody is sharing their synthetic data.
Honestly, what does it matter. We're many lifetimes away from anything. These people are trying to define concepts that don't apply to us or what we're currently capable of.
AI safety / AGI anything is just a form of tech philosophy at this point and this is all academic grift just with mainstream attention and backing.
This goes massively against the consensus of experts in this field. The modal AI researcher believes that "high-level machine intelligence", roughly AGI, will be achieved by 2047, per the survey below. Given the rapid pace of development in this field, it's likely that timelines would be shorter if this were asked today.
I am in the field. The consensus is made up by a few loudmouths. No serious front line researcher I know believes we’re anywhere near AGI, or will be in the foreseeable future.
So the researchers at Deepmind, OpenAI, Anthropic, etc, are not "serious front line researchers"? Seems like a claim that is trivially falsified by just looking at what the staff at leading orgs believe.
Apparently not. Or maybe they are heavily incentivized by the hype cycle. I'll repeat one more time: none of the currently known approaches are going to get us to AGI. Some may end up being useful for it, but large chunks of what we think is needed (cognition, world model, ability to learn concepts from massive amounts of multimodal, primarily visual, and almost entirely unlabeled, input) are currently either nascent or missing entirely. Yann LeCun wrote a paper about this a couple of years ago, you should read it: https://openreview.net/pdf?id=BZ5a1r-kVsf. The state of the art has not changed since then.
I don't give much credit to the claim that it's impossible for current approaches to get us to any specific type or level of capabilities. We're doing program search over a very wide space of programs; what that can result in is an empirical question about both the space of possible programs and the training procedure (including the data distribution). Unfortunately it's one where we don't have a good way of making advance predictions, rather than "try it and find out".
It is in moments like these that I wish I wasn’t anonymous on here and could bet a 6-figure sum on AGI not happening in the next 10 years, which is how I define “foreseeable future”.
You disagreed that 2047 was reasonable on the basis that researchers don't think it will happen in the foreseeable future, so your definition must be at least 23 years for consistency's sake.
I'd be OK with that, too, if we adjusted the bet for inflation. This is, in a way, similar to fusion. We're at a point where we managed to ignite plasma for a few milliseconds. Predictions of when we're going to be able to generate energy have become a running joke. The same will be the case with AGI.
LeCun has his own interests at heart, works for one of the most soulless corporations I know of, and devotes a significant amount of every paper he writes to citing himself.
Fair, ad hominems are indeed not very convincing. Though I do think everyone should read his papers through a lens of "having a very high h-index seems to be a driving force behind this man".
Moving on, my main issue is that it is mostly speculation, as all such papers will be. We do not understand how intelligence works in humans and animals, and most of this paper is an attempt to pretend otherwise. We certainly don't know where the exact divide between humans and animals is and what causes it, which I think is hugely important to developing AGI.
As a concrete example, in the first few paragraphs he makes a point about how a human can learn to drive in ~20 hours, but ML models can't drive at that level after countless hours of training. First you need to take that at face value, which I am not sure you should. From what I have seen, the latest versions of Tesla FSD are indeed better at driving than many people who have only driven for 20 hours.
Even if we give him that one though, LeCun then immediately postulates this is because humans and animals have "world models". And that's true. Humans and animals do have world models, as far as we can tell. But the example he just used is a task that only humans can do, right? So the distinguishing factor is not "having a world model", because I'm not going to let a monkey drive my car even after 10,000 hours of training.
Then he proceeds to talk about how perception in humans is very sophisticated and this in part is what gives rise to said world model. However he doesn't stop to think "hey, maybe this sophisticated perception is the difference, not the fundamental world model". e.g. maybe Tesla FSD would be pretty good if it had access to taste, touch, sight, sound, smell, incredibly high definition cameras, etc. Maybe the reason it takes FSD countless training hours is because all it has are shitty cameras (relative to human vision and all our other senses). Maybe linear improvements in perception leads to exponential improvement in learning rates.
Basically he puts forward his idea, which is hard to substantiate given we don't actually understand the source of human-level intelligence, and doesn't really want to genuinely explore (i.e. steelman) alternate ideas much.
Anyway that's how I feel about the first third of the paper, which is all I've read so far. Will read the rest on my lunch break. Hopefully he invalidates the points I just made in the latter 2/3rds.
This could also just be an indication (and I think this is the case) that many Manifold bettors believe the ARC AGI Grand Prize is not a great test of AGI and that it can be solved with something less capable than AGI.
I don't understand how you got 2047. For the 2022 survey:
- "How many years until you expect: - a 90% probability of HLMI existing?"
mode: 100 years
median: 64 years
- "How likely is it that HLMI exists: - in 40 years?"
mode: 50%
median: 45%
And from the summary of results: "The aggregate forecast time to a 50% chance of HLMI was 37 years, i.e. 2059"
That’s the first step towards returning to candlelight. So it isn’t a step toward safe super intelligence, but it is a step away from any super intelligence. So I guess some people would consider that a win.
Not sure if you want to share the capitalist system with an entity that outcompetes you by definition. Chimps don't seem to do too well under capitalism.
You might be right, but that wasn't my point. Capitalism might yield a friendly AGI or an unfriendly AGI or some mix of both. Collectivism will yield no AGI.
One can already see the beginning of AI enslaving humanity through the establishment. Companies that work on AI get more investment, and those that don't get kicked out of the game. Those who employ AI get more investment, and those who pay humans lose the market's confidence. People lose jobs, birth rates fall to harshly low levels, and AI thrives. Tragic.
So far it is only people telling AI what to do. When we reach the day where it is common place for AI to tell people what to do then we are possibly in trouble.
It is a trendy but dumbass tautology used by intellectually lazy people who think they are smart. Society is based upon capitalism therefore everything bad is the fault of capitalism.