frenchie4111's comments | Hacker News

Did you win the race to be the first comment?


Next week on HN... Show HN: A GitHub Action that uses AI to answer PR quizzes


Cluely 2.0


Very awesome project, can't wait to get a chance to try it out.

I feel like you would benefit from having a real-life photo of "The deck" feature. Your description does it justice, but your graphic does not. (To me)


Thanks, just updated the README with more screenshots!


What about ages?


I vote we move to the Korean system where everyone gets a year older on January 1st


That was a fun one but I think they changed it already.

https://www.bbc.com/news/world-asia-66028606.amp


If you are interested in doing this for commercial control systems, come talk to me.


Everyone is reading this as intentional anti-competitive practices. While that may be true, isn't another reasonable explanation that the Copilot development team is moving as fast as they can and these sorts of workarounds are being forced through in the name of team velocity? It takes a lot more time/energy to push public APIs and it's probably a very different team than the team developing the copilot extension. Seems a bit like a "don't attribute to malice..." kind of moment to me


> Everyone is reading this as intentional anti-competitive practices. While that may be true, isn't another reasonable explanation that the Copilot development team is moving as fast as they can and these sorts of workarounds are being forced through in the name of team velocity?

Wouldn't another way of saying that be "the Copilot development team is leveraging their Microsoft ownership to create products in a way not available to the general marketplace?"

The goal might not be to squash competition, but blessing one client with special treatment not available to others can still be anti-competitive.

Whether that would fall afoul of any regulation is beyond my expertise. Naively, most companies have internal APIs that are not generally available. But then most companies don't have paid public marketplaces on their platform.


Is it even not available to competitors? Visual studio is open source. Didn't Cursor fork it, and isn't it building its features directly into the fork? Not doing something like this would put Copilot at a disadvantage.


> Visual studio is open source

Sort of. The core is, and the installable binaries with telemetry and proprietary extensions are not.

The open source, telemetry-free version of VSCode is called VSCodium: https://vscodium.com/

> Didn't Cursor fork it, and isn't it building its features directly into the fork?

Yes, in their recent interview with Lex Fridman they argued that life as an extension is too limiting.

The main reason we criticise Microsoft for doing this and not them is just their size and market dominance.

Why jump through hoops to make competitors better able to hotwire their own AI into VSCode, or hotwire Copilot into their own IDE, when it's easier to iterate fast and remain unpredictable?


> Why jump through hoops to make competitors better able

Because that is the competitive philosophy that allowed VS Code to win in this space. It fits with that great quote from Bill Gates: "A platform is when the economic value of everybody that uses it, exceeds the value of the company that creates it."

By having VS Code give priority to another MS/GitHub product in a way they aren't willing to give competitors, they're diminishing VS Code's value as a platform, and encouraging competitors to build their own IDEs rather than building on top of it.


That just tells you where in the EEE lifecycle you are.

    Embrace, extend, and extinguish
          `--->
https://en.wikipedia.org/wiki/Embrace%2C_extend%2C_and_extin...


Embracing, extending, and extinguishing their own tool?

Please consider what you are going to say before you say it.


No, an ecosystem and culture of open source software development tooling.


> Embracing, extending, and extinguishing their own tool?

Consider how C# support in VSCode got nerfed recently:

https://news.ycombinator.com/item?id=31760684

https://github.com/dotnet/vscode-csharp/issues/5276

There was another event only a few months ago, but I can't find the reference.


> Please consider what you are going to say before you say it.

Do as I say, not as I do?


Oh my sweet summer child...


Do you have anything respectful to say, or just this disrespectful, dismissive response?

If you want to have a discussion, then let's have one. Step one is to have the discussion in good faith. If you're not capable of that, then don't respond at all.


> The open source, telemetry-free version of VSCode is called VSCodium

The open source, telemetry-free version of VSCode is called VSCode. The VSCodium people simply build it for you and package it for you.


The fact that you can access source code allowing you to build a telemetry-free version of VSCode doesn’t magically make what’s actually distributed open source and telemetry free.

The sole thing you can actually download and run while calling it VS Code - a trademarked name - is neither open source nor telemetry-free.


Congratulations, you've won a car!

If you choose to drive it, it's full price.


You're mistaken: Visual Studio Code is open source, not Visual Studio. They're different products.


But Cursor had to fork, so as a developer wanting to use them, you need to give up VS Code and install a new code editor; you can’t just install a plugin. Very few can maintain a fork and get enough people to use it. Also, what happens if you have two products that each needed a fork? You can’t use them both.

I don’t know if it’s legal or not, IANAL, but it definitely feels anti-competitive.


> Visual studio is open source.

No it’s not. Visual Studio is a proprietary product and the latest version is Visual Studio 2022.

Visual Studio Code is open source, and it is about as close to Visual Studio as Lightning is to Lightning Bug.


Competitors compete in the same market. The market in this case is VS Code extensions, with the consumers in that market being the user base of VS Code, not the users of some fork of VS Code. You can't point your competitors to a different market and then reasonably claim to be open to competition.


Many things like C# Dev Kit are closed source. M$ is slowly but surely moving to the extinguish phase in its takeover workflow.


Sigh.

https://news.ycombinator.com/item?id=41891653

https://news.ycombinator.com/item?id=41884187

https://news.ycombinator.com/item?id=41809351

https://news.ycombinator.com/item?id=41639205

https://news.ycombinator.com/item?id=41384888

Now, I'm not a big fan of VS Code as of late. I find the changes user-hostile: first the one that broke the Customize UI + MonkeyPatch extensions that made it look not completely shit on macOS, and now the one that broke APC, which replaced the first two; the PM responses to that in GH issues have been very poor. But this specific lie about what is OSS and what isn't, and how it's used, annoys me a lot. You are not helping with the problem.


I agree. Apple has been doing this for years as well.


Seems like the only sensible comment in this thread so far.

Here's what I imagine it's like working on the Copilot team:

  > Mgmt: "We need this feature, and we need in 2 weeks."
  > Devs: "That feature is not technically possible."
  > Mgmt: "Well, figure out a way to make it possible. That's _your_ problem."


That is exactly the sort of management that has landed many a company in hot water before, including Microsoft.

Whether the managers remain ignorant by malice or incompetence is irrelevant. Directing your subordinates to do something that they should reasonably know would break the law or be anticompetitive is still illegal.

The see no evil defense is a piss poor defense that is more likely going to be used to show you knew exactly what was going on.


There isn't the remotest chance that any of this is anticompetitive in a legal sense. Microsoft doesn't have anything close to a monopoly on dev tooling or text editors.


This doesn't fly when you're a company the size of Microsoft with the kind of influence and power they have. You can't just ignore the possibility or effects of engaging in anti-competitive behavior simply because it's convenient for you. That's not how it works.

It's not sensible at all.


Why not? They've survived for decades just shrugging off the law and paying off whatever minor fine there is years later. They started that model, now embraced by everyone from Google to Apple to Uber. Build it fast, get rich, worry about legality later.


Sounds like when Slack started taking marketshare from Skype for Business and they pushed out Teams as fast as possible.


Government: "We fine you two zillion dollars. You should have listened to the dev."


Microsoft: we’ve just committed to an investment of two zillion dollars in Copilot!

Microsoft to investors: don’t worry, you’ll get two zillion dollars of “value” launching next week, AND we won’t have to pay the bill for years! There’s even a chance our lawyers will win, and we will never have to pay!

Microsoft to devs: sorry, we spent two zillion on product so your profit sharing is going to take a big hit. Thanks for your hard work!


The few people I know in the Copilot team(s) (not necessarily VS Code) are laser focused on prioritizing features based on customer demand, not top-down guidance :)


Who decides what customers demand? Is it a free-for-all environment where people just push whatever they want into the trunk?


Are other extensions like Codeium[0] allowed to publish under the same rules? I'm not saying your comment is incorrect, but unless Copilot competitors can get the same treatment, it seems extremely unfair and anti-competitive.

[0]: https://marketplace.visualstudio.com/items?itemName=Codeium....


[flagged]


> VSCode is provided fully free as in beer and freedom

No, VSCode is a proprietary text editor/IDE from Microsoft. Code-OSS is provided fully free as in beer and freedom, and is currently what resides at https://github.com/microsoft/vscode.

Why would Microsoft not want other AI agent extensions to get the same benefits, which would benefit all AI agent users?

Edit: I have removed the portion of the comment which discussed the throwaway account.


Does a throwaway account negate the arguments though?


I think there can be an inherent bias to the argument which should be known, and not hidden away. Nevertheless, I removed that portion of the comment.

Either way, no, a fork is not simple because forks cannot access the VSCode Marketplace.


Exactly. That's what Cursor did, and (I think) Microsoft will agree with that and maybe even welcome developers to do this.


Has Microsoft allowed Cursor to access the VSCode Marketplace? As far as I know, it is against the ToS for any editor other than VSCode to access it.


Probably not. Please suggest to extension authors that they dual-publish their extensions to OpenVSX and the VS Marketplace. So far all authors I engaged with were happy to do so (except for Microsoft of course, who are the only beneficiary of this walled garden situation).
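
For anyone doing this, a minimal sketch of what dual-publishing can look like, assuming the @vscode/vsce and ovsx CLI tools and a personal access token for each registry (the script names are just illustrative):

    // Hypothetical package.json "scripts" excerpt for dual-publishing.
    // Assumes @vscode/vsce and ovsx are installed; $OVSX_PAT is an
    // Open VSX personal access token.
    {
      "scripts": {
        "publish:vsmarketplace": "vsce publish",
        "publish:openvsx": "ovsx publish --pat $OVSX_PAT"
      }
    }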


I find that many of the extensions I use do dual publish. I also dual publish my own extension for people because walled gardens are not cool.


I also have an extension that I dual publish. I was surprised to see it’s getting as many downloads on OpenVSX as on the VSCode marketplace. I’m just glad it’s useful to more people at essentially no extra cost.


I think Cursor just mirrors the VSCode marketplace on their own servers. They used to have an ugly workaround for installing extensions, but now it just works and I see links to https://marketplace.cursorapi.com/ inside of Cursor's extension browser.


Any idea how they got the data? I would imagine that just downloading all the data is also against the ToS.


> I would imagine that just downloading all the data is also against the ToS.

It is.


I use both VSC and Cursor; Cursor automatically imported all my VSC extensions and settings and theme and everything.


Fortunately most ToS are not legally enforceable, but only amount to a public statement of "we are threatening to block your IP if you do this"


There's an entire mechanism to build custom VSCode-based applications right there, ready to be used. They did more than could be expected.


Eh not quite. Famously, you can fork VSCode, but you can't use the VSCode Extension Marketplace if you do, which loses a lot of the network effect benefits of the VSCode ecosystem. (As far as I know Cursor is flat out violating Microsoft's terms of service with respect to the extension marketplace).


And a lot of the licenses for flagship Microsoft VSCode extensions for languages like C/C++ and Python don't allow using them outside of VSCode/Extension Marketplace so open source forks are crippled by default.


I believe this also blocks you from using Microsoft's proprietary language extensions, and they have been steadily switching the default language packages from OSS to proprietary.


Yes. You famously cannot use the C/C++ language server bundled in the C/C++ extension or Pylance. Who knows what other development tools they will lock behind their fork to the detriment of open source communities. Also you can't use their Remote Extension suite.


Correct


OpenVSX. Again, this is just the same as RHEL repos behind a license login.


Red Hat provides support for their packages. If you're not paying for support, you don't get access to the repos. That makes sense to me. What does Microsoft gain by creating a walled garden? They don't provide support. All that they provide is hosting. The Eclipse Foundation provides hosting for free for OpenVSX, which is an amazing service to the community of people using VSCode forks that aren't allowed to access the VSCode Marketplace. Microsoft should either relax the ToS on the Marketplace or acknowledge OpenVSX as the one and only marketplace for extensions.


So basically the same way XMLHttpRequest was born[0]?

[0]: https://web.archive.org/web/20060617163047/http://www.alexho...


>Everyone is reading this as intentional anti-competitive practices.

Even if it is anti-competitive, I don't care. Why should VS Code have to support alternative AI assistants in their software? I understand why people would want that, but I'm not sure why microsoft has some sort of ethical or legal burden to support it. Plus it's open source, competitors can take it for free and add their own co-pilots if they want.


>Why should VS Code have to support alternative AI assistants in their software?

Because of the dominant position of Microsoft in various markets.


I’m no fan of MS, but how are they leveraging their dominance in, say, OS to create dominance in editors? AFAIK it’s not like VS code is bundled with Windows.


Does Microsoft have a monopoly (or large enough market share) on text editors?


It's hard to find a good answer here but there's some strong indication that Microsoft is pretty dominant with code editors.


> Plus it's open source, competitors can take it for free and add their own co-pilots if they want.

They can and they do. The process is working.


I think you’ve made a good point here; it’s not like they force you to use VSCode. I feel like it won’t be super popular here though.


It doesn't matter much whether it's "intentional" or "malicious", though. It's still anticompetitive behavior.


Hanlon's razor falls apart when it's used outside of personal relationships and in situations where billions of dollars are on the line.

There is no functional difference between a Microsoft that's really excited about Copilot so that it quickly integrates it into their products and a Microsoft that's hellbent on making sure Copilot gets to use secret APIs others can't.


> Everyone is reading this as intentional anti-competitive practices

Who cares about intention? Anti-competitive behavior is anti-competitive behavior.


Anti-competitive behavior is absolutely fine though when not illegal. I don't see how VSCode could be construed as having a monopoly when Cursor freely forked it.


Embrace.

Extend. <-- We are here.

Extinguish.

Microsoft. Microsoft never changes. https://en.wikipedia.org/wiki/Embrace,_extend,_and_extinguis...


Where's the embrace step? VSCode is their own product in the first place.


So was IE, back in the day, when they first "embraced" the web.

Today's "embrace" is of the web dev ecosystem, which before VSCode's dominance consisted of Jetbrains, other IDEs, text editors, etc.

Now with VScode and Github, they control much of the dev ecosystem, shrink competitors' marketshares by making them free to end-users (subsidized by other Microsoft businesses), expand them with new capabilities (even before secret APIs), etc.


Arguably VS Code was their way of "embracing" what GitHub were doing with Atom.


VSCode is embracing the Eclipse team and the Eclipse way of an open-source IDE ecosystem.


No. It's Eclipse that took the Microsoft-made Monaco editor and built Theia on top of it.


Microsoft hired ex-Eclipse team to build VSCode.


It is really a shame to me that everyone believes Microsoft has changed and would never behave like they did in the 90s and prior. They haven't changed. They just decided -- for a time -- that another strategy was in their best interests. They're deciding that again, and going back to their EEE playbook.

(It also occurs to me that a lot of people here probably aren't old enough to remember 20th-century Microsoft...)


>Seems a bit like a "don't attribute to malice..."

I'm not saying you are wrong or that the rest of your comment isn't pretty valid, but a lot of people attribute malice to Microsoft out of the gate because they have a history of operating out of malice.


> It takes a lot more time/energy to push public APIs

And, once an API is public, it becomes a lot harder to make changes to it. Iterating with a private API, then making it public once you've figured out the problem space, is a valid and useful approach.


Iterating on a private API is fine. Allowing your internal AI assistant to publish to the extension store while consuming those private APIs while prohibiting any competitors from doing so is not.


> Everyone is reading this as intentional anti-competitive practices.

I think it's fair to assume anticompetitive intent due to their history of anticompetitive behavior. Admittedly, I'm old enough to remember the crap they pulled all through the 90s.


While I can understand the part about hidden APIs, as they're in flux and experimental, the part that's weird about it to me is the "you can totally build it and share it just not on our marketplace" part. That just sounds to me like they're trying to bar their competitors from the VSCode Marketplace, making installing and updating a lot harder for users.


I don't care if it's malicious or not. The fact remains that this team is using their position inside Microsoft to make use of tools in another product that a competing product wouldn't get to use.

This is one of the things MS got sued for back in the 90s. They shouldn't be allowed to do this again.


I would maybe entertain that idea in a vacuum, but that's Microsoft, and they already did that in both Windows and Office before, so no.


Fork VSCode, do whatever you want. Merge back when ready.


Won't really help ya. As outlined at https://ghuntley.com/fracture/ as soon as you compile "VSCode" (MIT), the ecosystem fractures in a bad way (tm), including no license to run the majority of MSFT extensions (language LSPs, Copilot, Remote Development). If you are a vendor producing an MIT fork, then one needs to iterate the graph and convince 3rd party extension authors to _not use the MSFT extensions_ as dependencies _and_ to publish on open-vsx.

This is how Cursor gets wrecked in the medium/long term. Coding agent? Cool. You can't use Pylance with it etc. VSCode degrades to being notepad.exe. MSFT uses Cursor for product research and then rolls out the learnings into Copilot because only Copilot supports all of "Visual Studio Code" features that users expect (and this is by design)


If MS didn't own VS Code, what would they be doing?


Building VS Code :)


Further enshittifying Windows and Office. I'd say this task must have run its course by now, but Microsoft always seems to find a way to make products worse.


> intentional anti-competitive practices

> moving as fast as they can and these sorts of workarounds are being forced through in the name of team velocity

It’s not an either/or. That’s the same thing. The second part is the anticompetitive practice.

Giving advantage to your own teams so they can be there first and uncontested is approximately as anticompetitive as it can get.


> While that may be true, isn't another reasonable explanation that the Copilot development team is moving as fast as they can and these sorts of workarounds are being forced through in the name of team velocity?

This strikes me as most likely. It is anti-competitive, but it's probably not their motive.


Also, regarding the wording "Proposed API": this seems like it's just some kind of incubator for APIs before marking them as stable. So that Copilot thing may just be their incubator project. It may not be, though.


Not malicious, but still selfish. It's important to remember that the copilot extensions are an extremely effective way of monetizing VScode. So it seems more like they're kind of compromising on their API usage rules in order to get to market quicker. But allowing themselves to use the APIs before anyone else is in a way anti-competitive, because the only way one could compete would be to use the unfinished APIs. But that requires users to go through more hoops to install your extension.

I should also mention that I am a VScode extension developer and I'm one of the weirdos that actually takes the time to read about API updates. They are putting in a lot of effort in developing language model APIs. So it's not like they're outright blocking others from their marketplace.


Your VaporView extension looks amazing! I can't even fathom how to get that far along in extension development.

Do you have any links or resources you could direct me toward that were more helpful than Microsoft's basic how-to pages for learning VS Code plugin development? I attempted to build a VS Code extension, but the attempt fizzled out. I managed to make some progress in creating the simplest of UI elements and populating them. I'm particularly interested in building a GUI-based editor of JSON / YAML where a user can select a value from a prepopulated dropdown menu, or validating a JSON / YAML file against a custom schema. Any help or advice you could provide would be appreciated!


Check my comment elsewhere (it's now bobbing up and down). Some things just take time, no need to assume malicious intent.


Frankly if they shipped it with `enabledApiProposals` I'd even go further and assume that they actually _intend_ to release public APIs once they've baked.

Like, why go through the extra work of gating it under `enabledApiProposals` and using the public manifest flag when you could put code in VSCode itself that is like "oh if this extension is installed let me just run some secret code here in the binary".
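
For context, a rough sketch of how that gating works, as I understand it (the proposal name below is a placeholder, not a real one; real proposals live in vscode.proposed.*.d.ts files in the VSCode repo):

    // Sketch of an extension manifest (package.json) opting into a proposed API.
    // "somePendingProposal" is hypothetical; marketplace-published extensions
    // additionally need an allowlist entry in VSCode's product.json
    // (the "extensionEnabledApiProposals" map, if I recall the key correctly).
    {
      "name": "my-assistant",
      "publisher": "example",
      "engines": { "vscode": "^1.90.0" },
      "enabledApiProposals": ["somePendingProposal"]
    }

That allowlist is presumably where Copilot gets its pass, and it's also a public, auditable surface that "secret code in the binary" would lack.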


I think you are on the mark. And, also, it's a happy accident that this also means an advantage for CoPilot.


I would think this is less team velocity and more about LSP/etc. I am not an expert on how this is developed, but I imagine it will take at least a couple of years for the dust to settle to decide on good public API abstractions for LLM codegen, and they don’t want to introduce anything public that they have to maintain in concert with 3rd parties.

That’s not to say the general concern about GitHub-VSCode smothering competition isn’t valid, but I agree that it’s probably not what’s happening here.


Can you point me to an example where the initial maliciousness was reverted permanently later?


Seems like a false dichotomy. "Move fast" just produces a public, undocumented, unstable API.


Thank you. This needs to be said & should be reported.

If we want a world that isn’t massively hostile to devs, like it is at most companies, this is the kind of advocacy we need, and I’d love to see more people in tech putting it out there.


Yeah, the fact that they have direct access to VScode is anti-competitive. It doesn't require intent, it's baked in to the org structure.


Could be, but definitely worth flagging at the top of HN for everyone to see!


Disclaimer: I used to work at Microsoft. These days I work at a competitor. All words my own and represent neither entity.

Microsoft has the culture and the technology to tell private and public APIs apart and to check code across the company to ensure that only public APIs are called. This was required for decades as part of the Department of Justice consent decree and every single product in the company had scanners to check that they weren't using any private APIs (or similar hacks to get access to them such as privately searching for symbols in Windows DLL files). This was drilled into the heads of everyone, including what I assume are 90% of VP+ people currently at the company, for a very long time.

For them to do this is a conscious decision to be anticompetitive.


What a coincidence, I was just browsing Microsoft's Go fork (for FIPS compatibility, basically replacing Go crypto with OpenSSL and whatever API Windows has, just like there's a Google's fork that uses BoringSSL), and found this patch:

https://github.com/microsoft/go/blob/microsoft/main/patches/...

Upstream Go tricks Windows into enabling long path support by setting an undocumented flag in the PEB. The Microsoft Go fork can't use undocumented APIs, so this commit removes the hack.

So, even if they fork something, they have to strictly follow this guideline and remove undocumented API usage. I wonder if this only applies to Windows APIs though.


> Microsoft has the culture and the technology to tell private and public APIs apart and to check code across the company to ensure that only public APIs are called. This was required for decades as part of the Department of Justice consent decree and every single product in the company had scanners to check that they weren't using any private APIs (or similar hacks to get access to them such as privately searching for symbols in Windows DLL files).

I thought that only applied to private Windows APIs?

The antitrust case was about the Windows monopoly specifically, so other MS products calling Windows private APIs was in its scope. But, this is more comparable to another MS product calling a private Visual Studio API – I don't believe that was in the scope of that antitrust case. Did Microsoft have policies and processes against that scenario too?


The settlement was (presumably, I've never read it) about not using a monopoly in one area to gain influence in another, so I would not be surprised if Windows was the primary focus, but the overall message was fairly universal, and it makes sense: Microsoft builds platforms and overwhelmingly those platforms rely on other parties, so don't leverage anything internal/unfair as that hurts the platform.

This means that Office shouldn't use private Windows APIs and pin itself to the taskbar. It means that Surface shouldn't have special integrations (whether with Windows, Copilot, or whatever) that aren't available to third parties. It means that Azure shouldn't build things that are only available to Office. You build for the platform. The push was originally around a legal mandate, but it turns into a culture.


> The push was originally around a legal mandate, but it turns into a culture.

Whatever the scope of the legal mandate was, it expired over a decade ago now.

Culture can change over time. Even if Microsoft had this culture strongly when you worked there, it might have become much weaker in the years since. Within a corporation, culture can also vary a lot between different teams/divisions/etc - maybe it is still strong in some parts of the company but gone in others.


VSCode is developed by VPs borged from GitHub, no? Those wouldn't know. Not that I approve of such things, certainly not.


> vscode is developed by VPs borged from github

Other way around:

In 2011 [Erich Gamma] joined the Microsoft Visual Studio team and leads a development lab in Zürich, Switzerland that has developed the "Monaco" suite of components for browser-based development, found in products such as Azure DevOps Services [0]

0. https://en.wikipedia.org/wiki/Erich_Gamma

1. https://microsoft.github.io/monaco-editor/


VSCode predated the GitHub acquisition by several years.


Mild nit: your website hijacked the back button, I had to spam click back like 30 times to get back to this hacker news comment thread


>I had to spam click back like 30 times to get back to this hacker news comment thread

Click-and-holding or right-clicking the back button will give you a list of last N URLs in your tab history. This page only generates one auto-redirect, so the HN URL will show up.


Thank you for this. Many years of browser usage and I never knew this.


Such is the years-long removal of affordances from the UI. Netscape 4 had a small downwards arrow to indicate a submenu for the back button.


On Firefox it apparently disappeared in 2011

https://github.com/black7375/Firefox-UI-Fix/wiki/%5BArticle%...


Wow .. you've just removed a massive source of frustration - thank you.


Just tried this in Arc and Firefox...I never knew.


Just wanted to apologise to everyone for this, this kind of stuff drives me nuts and I'm not sure how I never noticed - it seems to be a result of how we use the iframe to render the chat example. Investigating!
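
For anyone curious, the usual culprit: re-assigning an iframe's src pushes an entry onto the parent page's session history on every change, while location.replace() swaps the document in place. A minimal sketch of the kind of fix involved (names here are illustrative, not our actual code):

    // Swap the document shown in an iframe without polluting the
    // parent page's history. Assigning iframe.src adds a history entry
    // per change; location.replace() does not.
    function showExample(iframe: HTMLIFrameElement, url: string): void {
      const frameWindow = iframe.contentWindow;
      if (frameWindow) {
        frameWindow.location.replace(url); // no new history entry
      } else {
        iframe.src = url; // initial load; the first load adds no entry
      }
    }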


Update: this should be fixed now


Thanks for the quick response!


Tip for next time this happens: hold down the back button for a menu of your history. It can help you get where you want faster. Although I'm not sure it helps too much if you literally had to click 30 times.


That or right click.


Founder here - do you mean the chat example? Or just the homepage itself?


I’m also experiencing issues with the website. I went to the docs page and accidentally pressed back in my browser, after which the forward button wouldn’t work to undo the back operation.

Seems like the website breaks basic browser navigation.


On Firefox 131.0 I clicked through the tabs with the demo code, then pressed my mouse's "back" button and it didn't work. So I manually clicked the back button and it directed me back to this page.

Then I opened it again and clicked the back button and it didn't work again.


I couldn't replicate this in Firefox 131 under W11.


Homepage itself, Firefox on Ubuntu.


Are the history entries all just the same URL? Thanks for reporting


Yes (encountered the same issue in Firefox on macOS after clicking on the example tabs)


Same for me, Safari on macOS 15.


Reproduced with Firefox 131.0 on Windows 11. Happens if I click to jazz.tools. After pressing back once, I am still on jazz.tools, but have a forward arrow. It does seem related to the "chat" because the "result" window changes when I click between those back/forward arrow controls of the browser.

https://imgur.com/a/nuUluX3


I wonder if they tested this at all. What a poor showing.


> the person running them gets fired or quits partway through at least half of the time

This is a good point. Or the migration appears to have been very successful to management (before it's actually complete from an engineering perspective) and they get promoted / moved onto higher priority work.

Either way: make sure you are keeping the rest of the relevant engineering organization informed about how the new system works and how the migration is going to work.


I don’t think there’s much room for promotion, because migrations are fabrication and promotions favor innovation. It’s the ability to save money versus the ability to make money. See: the smiling curve in economics.


Lots of good advice here. Some things I will throw in:

Find ways to ship smaller versions of the migration first. If possible: isolate features that can be migrated on their own.

If possible, silently run v2 in parallel with v1 for as long as it takes to be comfortable with v2 (a rough sketch of this below).

Assume that at some point you are going to have to completely halt the migration, go back to v1-only, fix something, and restart the migration.

I'd bet it's going to take 2-3x longer than you think to completely deprecate v1.
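
To make the parallel-run idea concrete, here's a minimal sketch of shadow reads: serve from v1, mirror the call to v2, and log mismatches. fetchV1, fetchV2, and logMismatch are hypothetical stand-ins for your real systems.

    // Shadow-run sketch: v1 stays the source of truth; v2 runs silently.
    declare function fetchV1(id: string): Promise<unknown>;
    declare function fetchV2(id: string): Promise<unknown>;
    declare function logMismatch(id: string, v1: unknown, v2: unknown): void;

    async function getRecord(id: string): Promise<unknown> {
      const v1Result = await fetchV1(id); // users only ever see v1

      // Fire-and-forget: a v2 failure must never affect users.
      void fetchV2(id)
        .then((v2Result) => {
          if (JSON.stringify(v1Result) !== JSON.stringify(v2Result)) {
            logMismatch(id, v1Result, v2Result);
          }
        })
        .catch((err) => logMismatch(id, v1Result, { error: String(err) }));

      return v1Result;
    }

The mismatch log is what tells you when v2 is trustworthy enough to flip the source of truth.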


I am not on the bleeding edge of this stuff. I wonder though: how could a safe superintelligence outcompete an unrestricted one? Assuming another company exists (maybe OpenAI) that is tackling the same goal without spending the cycles on safety, what chance do they have to compete?


That is a very good question. In a well functioning democracy a government should apply a thin layer of fair rules that are uniformly enforced. I am an old man, but when I was younger, I recall that we sort of had this in the USA.

I don’t think that corporations left on their own will make safe AGI, and I am skeptical that we will have fair and technologically sound legislation - look at some of the anti-cryptography and anti-privacy laws raising their ugly heads in Europe as an example of government ineptitude and corruption. I have been paid to work in the field of AI since 1982, and all of my optimism is for AI systems that function in partnership with people, and I expect continued rapid development of agents based on LLMs, RL, etc. I think that AGIs as seen in the Terminator movies are far into the future, perhaps 25 years?


It can't. Unfortunately.

People spend so much time thinking about the systems (the models) themselves, and not enough about the system that builds the systems. The behaviors of the models will be driven by the competitive dynamics of the economy around them, and yeah, that's a big, big problem.


It's probably not possible, which makes all these initiatives painfully naive.


It'd be naive if it wasn't literally a standard point that is addressed and acknowledged as being a major part of the problem.

There's a reason OpenAI's charter had this clause:

“We are concerned about late-stage AGI development becoming a competitive race without time for adequate safety precautions. Therefore, if a value-aligned, safety-conscious project comes close to building AGI before we do, we commit to stop competing with and start assisting this project. We will work out specifics in case-by-case agreements, but a typical triggering condition might be “a better-than-even chance of success in the next two years.””


Let us do anything we want now because we pinky promise to suddenly do something against our interest at a hazy point of time in the future


How does that address the issue? I would have expected them to do that anyhow. That's what a lot of businesses do: let another company take the hit developing the market, R&D, and supply chain, then come in with industry standardization and cooperative agreements only after the money has proven to be good in this space. See electric cars. Also, they could drop that at any time. Remember when OpenAI stood for open source?


Really, you think Ford is dropping their electric car manufacturing in order to assist Tesla in building more gigafactories?

> Remember when OpenAI stood for open source?

I surely don't, but maybe I missed it, can you show me?

https://web.archive.org/web/20151211215507/https://openai.co...

https://web.archive.org/web/20151213200759/https://openai.co...

Neither mention anything about open-source, although a later update mentions publishing work (“whether as papers, blog posts, or code”), which isn't exactly a ringing endorsement of “everything will be open-source” as a fundamental principle of the organization.


Automakers often do collaborate on platforms and engines. In terms of EVs we see this as well, as chargers become standardized.


I wonder if that would have a proof like the halting problem


Since no one knows how to build an AGI, hard to say. But you might imagine that more restricted goals could end up being easier to accomplish. A "safe" AGI is more focused on doing something useful than figuring out how to take over the world and murder all the humans.


Hinton's point does make sense though.

Even if you focus an AGI on producing more cars for example, it will quickly realize that if it has more power and resources it can make more cars.


Assuming AGI works like a braindead consulting firm, maybe. But if it worked like existing statistical tooling (which it does today: for an actual data scientist, as opposed to Aunt Cathy prompting Bing, using ML is no different from using any other statistics when you are writing up your Python or R scripts), you could probably generate some fancy charts that show some distributions of cars produced under different scenarios with fixed resource or power limits.

In a sense this is what is already done, and why AI hasn't really made the inroads people think it will, even if you can ask Google questions now. For the data scientists, the black magicians of the AI age, this spell is no more powerful than other spells, many of which (including ML) were created by powerful magicians from the early 1900s.


Not on its own but in numbers it could.

Similar to how law-abiding citizens turn on law-breaking citizens today or more old-fashioned, how religious societies turn on heretics.

I do think the notion that humanity will be able to manage superintelligence just through engineering and conditioning alone is naive.

If anything there will be a rogue (or incompetent) human who launches an unconditioned superintelligence into the world in no time and it only has to happen once.

It's basically Pandora's box.


This is not a trivial point. Selective pressures will push AI towards unsafe directions due to arms race dynamics between companies and between nations. The only way, other than global regulation, would be to be so far ahead that you can afford to be safe without threatening your own existence.


There's a reason OpenAI had this as part of its charter:

“We are concerned about late-stage AGI development becoming a competitive race without time for adequate safety precautions. Therefore, if a value-aligned, safety-conscious project comes close to building AGI before we do, we commit to stop competing with and start assisting this project. We will work out specifics in case-by-case agreements, but a typical triggering condition might be “a better-than-even chance of success in the next two years.””


Operative word there seems to be had.


The problem is the training data. If you take care of alignment at that level, the performance is as good as an unrestricted one's, except for the things you removed, like making explosives or ways to commit suicide.

But that costs almost as much as training on the data, hundreds of millions. And I'm sure this will be the new "secret sauce" by Microsoft/Meta/etc. And sadly nobody is sharing their synthetic data.


Safety techniques require you to understand your product and have deep observability.

This and safety techniques themselves can improve the performance of the hypothetical AGI.

RLHF was originally an alignment tool, but it improves LLMs significantly.


The goal of this company likely wouldn’t be to win against OpenAI, but to play its own game, even if much lesser


Not with that attitude


Honestly, what does it matter. We're many lifetimes away from anything. These people are trying to define concepts that don't apply to us or what we're currently capable of.

AI safety / AGI anything is just a form of tech philosophy at this point and this is all academic grift just with mainstream attention and backing.


This goes massively against the consensus of experts in this field. The modal AI researcher believes that "high-level machine intelligence", roughly AGI, will be achieved by 2047, per the survey below. Given the rapid pace of development in this field, it's likely that timelines would be shorter if this were asked today.

https://www.vox.com/future-perfect/2024/1/10/24032987/ai-imp...


I am in the field. The consensus is made up by a few loudmouths. No serious front line researcher I know believes we’re anywhere near AGI, or will be in the foreseeable future.


So the researchers at Deepmind, OpenAI, Anthropic, etc, are not "serious front line researchers"? Seems like a claim that is trivially falsified by just looking at what the staff at leading orgs believe.


Apparently not. Or maybe they are heavily incentivized by the hype cycle. I'll repeat one more time: none of the currently known approaches are going to get us to AGI. Some may end up being useful for it, but large chunks of what we think is needed (cognition, world model, ability to learn concepts from massive amounts of multimodal, primarily visual, and almost entirely unlabeled, input) are currently either nascent or missing entirely. Yann LeCun wrote a paper about this a couple of years ago, you should read it: https://openreview.net/pdf?id=BZ5a1r-kVsf. The state of the art has not changed since then.


I hope you have some advanced predictions about what capabilities the current paradigm would and would not successfully generate.

Separately, it's very clear that LLMs have "world models" in most useful senses of the term. Ex: https://www.lesswrong.com/posts/nmxzr2zsjNtjaHh7x/actually-o...

I don't give much credit to the claim that it's impossible for current approaches to get us to any specific type or level of capabilities. We're doing program search over a very wide space of programs; what that can result in is an empirical question about both the space of possible programs and the training procedure (including the data distribution). Unfortunately it's one where we don't have a good way of making advance predictions, rather than "try it and find out".


It is in moments like these that I wish I wasn’t anonymous on here and could bet a 6 figure sum on AGI not happening in then next 10 years, which is how I define “foreseeable future”.


You disagreed that 2047 was reasonable on the basis that researchers didn't think it would happen in the foreseeable future, so your definition must be at least 23 years for consistency's sake.


I'd be OK with that, too, if we adjusted the bet for inflation. This is, in a way, similar to fusion. We're at a point where we managed to ignite plasma for a few milliseconds. Predictions of when we're going to be able to generate energy have become a running joke. The same will be the case with AGI.


LeCun has his own interests at heart, works for one of the most soulless corporations I know of, and devotes a significant amount of every paper he writes to citing himself.

He is far from the best person to follow on this.


Be that as it may, do you disagree with anything concrete from this paper?


Fair, ad hominems are indeed not very convincing. Though I do think everyone should read his papers through a lens of "having a very high h-index seems to be a driving force behind this man".

Moving on, my main issue is that it is mostly speculation, as all such papers will be. We do not understand how intelligence works in humans and animals, and most of this paper is an attempt to pretend otherwise. We certainly don't know where the exact divide between humans and animals is and what causes it, which I think is hugely important to developing AGI.

As a concrete example, in the first few paragraphs he makes a point about how a human can learn to drive in ~20 hours, but ML models can't drive at that level after countless hours of training. First you need to take that at face value, which I am not sure you should. From what I have seen, the latest versions of Tesla FSD are indeed better at driving than many people who have only driven for 20 hours.

Even if we give him that one though, LeCun then immediately postulates this is because humans and animals have "world models". And that's true. Humans and animals do have world models, as far as we can tell. But the example he just used is a task that only humans can do, right? So the distinguishing factor is not "having a world model", because I'm not going to let a monkey drive my car even after 10,000 hours of training.

Then he proceeds to talk about how perception in humans is very sophisticated and this in part is what gives rise to said world model. However he doesn't stop to think "hey, maybe this sophisticated perception is the difference, not the fundamental world model". e.g. maybe Tesla FSD would be pretty good if it had access to taste, touch, sight, sound, smell, incredibly high definition cameras, etc. Maybe the reason it takes FSD countless training hours is because all it has are shitty cameras (relative to human vision and all our other senses). Maybe linear improvements in perception leads to exponential improvement in learning rates.

Basically he puts forward his idea, which is hard to substantiate given we don't actually understand the source of human-level intelligence, and doesn't really want to genuinely explore (i.e. steelman) alternate ideas much.

Anyway that's how I feel about the first third of the paper, which is all I've read so far. Will read the rest on my lunch break. Hopefully he invalidates the points I just made in the latter 2/3rds.


51% odds of the ARC AGI Grand Prize being claimed by the end of next year, on Manifold Markets.

https://manifold.markets/JacobPfau/will-the-arcagi-grand-pri...


This could also just be an indication (and I think this is the case) that many Manifold bettors believe the ARC AGI Grand Prize to be not a great test of AGI and that it can be solved with something less capable than AGI.


I don't understand how you got 2047. For the 2022 survey:

    - "How many years until you expect: - a 90% probability of HLMI existing?" 
    mode: 100 years
    median: 64 years

    - "How likely is it that HLMI exists: - in 40 years?"
    mode: 50%
    median: 45%
And from the summary of results: "The aggregate forecast time to a 50% chance of HLMI was 37 years, i.e. 2059"


Reminds me of what they've always been saying about nuclear fusion.


Many lifetimes? As in upwards of 200 years? That's wildly pessimistic if so- imagine predicting today's computer capabilities even one lifetime ago


> We're many lifetimes away from anything

ENIAC was built in 1945, that's roughly a lifetime ago. Just think about it


Ilya the grifter? That’s a take I didn’t expect to see here.


the first step of safe superintelligence is to abolish capitalism


That’s the first step towards returning to candlelight. So it isn’t a step toward safe super intelligence, but it is a step away from any super intelligence. So I guess some people would consider that a win.


Not sure if you want to share the capitalist system with an entity that outcompetes you by definition. Chimps don't seem to do too well under capitalism.


You might be right, but that wasn't my point. Capitalism might yield a friendly AGI or an unfriendly AGI or some mix of both. Collectivism will yield no AGI.


First good argument for collectivism I've seen in a long while.


One can already see the beginning of AI enslaving humanity through the establishment. Companies that work on AI get more investment and those that don't get kicked out of the game. Those who employ AI get more investment and those who pay humans lose the market's confidence. People lose jobs and birth rates sink while AI thrives. Tragic.


So far it is only people telling AI what to do. When we reach the day where it is commonplace for AI to tell people what to do, then we are possibly in trouble.


Why does everything have to do with capitalism nowadays?

Racism, unsafe roads, hunger, bad weather, good weather, stubbing toes on furniture, etc.

Don't believe me?

See https://hn.algolia.com/?dateRange=all&page=0&prefix=false&qu...

Are there any non-capitalist utopias out there without any problems like this?


To be honest, these search results being months apart show quite the opposite of what you're saying...

Even though I agree with your general point.


It is a trendy but dumbass tautology used by intellectually lazy people who think they are smart. Society is based upon capitalism therefore everything bad is the fault of capitalism.


This is literally a discussion on allocation of capital, it's not a reach to say that capitalism might be involved.


Right, so you draw a line from that to abolishing capitalism.

Is that the only solution here? We need to destroy billions of lives so that we can potentially prevent "unsafe" super intelligence?

Let me guess, your cure for cancer involves abolishing humanity?

Should we abolish governments when some random government goes bad?


"Abolish" is hyperbole.

Insufficiently regulated capitalism fails to account for negative externalities. Much like a Paperclip Maximising AI.

One could even go as far as saying AGI alignment and economic resource allocation are isomorphic problems.


Agreed. At the same time, regulators too need regulation.

From history, governments have done more physical harm (genocides, etc) than capitalist companies with advanced tech (I know Chiquita and Dow exist).


And then seize the means of production.

