
From CNN:

"Commerce Secretary Howard Lutnick told reporters on a call Friday evening that the administration came to the fee of $100,000 per year, plus vetting costs, after talking with companies.

He noted that the payment structure is still under discussion with the Department of Homeland Security, in terms of “whether we’re going to charge the $300,000 up front or $100,000 a year for the three years.”


Before this, there was no $100k/year cost for H-1Bs; see the post title.


I found this map a few years ago and had it printed on canvas through an online shop, to hang on my wall near my bike area. I recommend doing this with other old maps as well.


What is 'UQ'? I assume it's some measure of uncertainty over your model outputs?


Usually means uncertainty quantification
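
As a rough illustration (the numbers and names below are made up), the simplest version is ensemble disagreement: run the same input through several independently trained models and report the spread of their predictions as the uncertainty.

    import numpy as np

    # Hypothetical outputs of one input run through an ensemble of 5 models.
    predictions = np.array([0.71, 0.68, 0.74, 0.65, 0.70])

    point_estimate = predictions.mean()
    uncertainty = predictions.std()   # disagreement across the ensemble as a simple UQ measure
    print(f"prediction {point_estimate:.2f} +/- {uncertainty:.2f}")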


Unbiased quantifier


One thing to keep in mind with BI software is that the users are often very different from, well, those individuals who prefer to use mutt as an email client.

Many, or most, users of a BI tool will be operations, product managers, and business management, who simply will not find the interface intuitive, responsive, or well designed. At least that's my experience.


I agree. Learning admissible heuristics will retain worst-case performance, which has always been the measuring stick for these algorithms. It's not at all uncommon to find faster solutions for the average or even p99 case that cannot provide guarantees on the worst case.


How would one go about proving that a learned heuristic (something from an AI model) is in fact admissible?


For something like focal search, it doesn't even need admissibility; you just apply it as a secondary selection heuristic among the 'top k' choices your admissible heuristic returns.
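
Rough sketch of that selection step (names here are purely illustrative, not from any particular library): keep the open list ordered by f = g + h with an admissible h, form the focal set of nodes whose f is within a factor w of the best, and let the learned (possibly inadmissible) heuristic pick within that set.

    def select_from_focal(open_list, w, secondary_h):
        # open_list: iterable of (f, node) pairs, where f = g + h and h is admissible.
        # w: suboptimality bound (w >= 1); solution cost stays within w * optimal.
        # secondary_h: any ranking function, e.g. a learned model's estimate.
        f_min = min(f for f, _ in open_list)
        # Focal set: nodes whose f-value is within the bound.
        focal = [(f, node) for f, node in open_list if f <= w * f_min]
        # Choose within the focal set using the secondary heuristic.
        return min(focal, key=lambda item: secondary_h(item[1]))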


So a tiebreaker then?


This and other proposed legislation are attempting to hit the ball out of the park on the first pitch. I feel it would be a lot more sensible and effective to legislate clear and present harms, such as holding developing firms liable for deep-fake technology if used for identity theft for the purpose of fraud.


Should I be able to sue Honda because someone in their Civic ran into me?

A user's misuse of a technology shouldn't be the responsibility of the developer. You could apply this to almost every product in the world otherwise.


Yes, if the Civic had a feature that made it easier to hit you, or lacked a reasonable feature that would have prevented it from hitting you.

We have a long history of legally targeting companies that produce products targeted at criminal activity or implied criminal activity.


>Yes, if the Civic had a feature that made it easier to hit you, or lacked a reasonable feature that would have prevented it from hitting you.

Ah, but some cars today have automatic braking, so can I sue the manufacturer for not including it? Maybe a Toyota would have seen the pedestrian; is it reasonable to assume a Honda should have as well? Since this is a safety issue, why did Honda allow the car to even start without an up-to-date pedestrian detection system?

Does this help you see the issue?


The fact that one car has automatic braking means you can sue the others for not having it. Depending on why the others don't have it, you might or might not win, but once the technology exists the courts will ask why others didn't adopt it. (Sometimes the courts will accept that the patent on the technology was too expensive to license, and sometimes the technology can't be put on a given car for technical reasons - see a lawyer.)


As I said down-thread a bit... the issue with your car analogy is that we force everyone to get a license and register their car before they can drive around. Do you want to have to get a license and register your AI model before you're allowed to start generating images or text? If so, maybe that is a solution. But I doubt anyone would accept that sort of system. So, saying "you can't sue Honda if someone drives into you," while true, doesn't really get us anywhere in addressing the issues with AI.


>So, saying "you can't sue Honda if someone drives into you," while true, doesn't really get us anywhere in addressing the issues with AI.

I'm not trying to address any perceived 'issues' with AI here; I'm pointing out the flaw in holding the developer/manufacturer liable for what an end user does.

Also, I could switch the analogy to planning a robbery over WhatsApp, hacking into a bank using 'penetration testing tools', or even Windows itself for allowing users to run any software they want. Or maybe Windows enables piracy by not scanning every file against a hash of known pirated content.

You can make up a million scenarios of end users misusing products, I'm sorry you don't like the car one.


With cars we have a metric shit-tonne of regulations so that manufacturers can be relieved of some liability.

Let's do the same for A.I., right? How about you reply with the regulations that A.I. companies face today that are equivalent to what car companies face. I'll check back for your answers. If there are any gaps, then let's get to work on that legislation.

1. *Fuel Economy Standards (Corporate Average Fuel Economy, or CAFE)*: Auto manufacturers are required to meet specific fuel efficiency targets for their fleet of vehicles. These standards aim to reduce greenhouse gas emissions and promote fuel-efficient technologies.

2. *Emissions Standards*: The Environmental Protection Agency (EPA) sets emissions limits for pollutants such as nitrogen oxides (NOx), carbon monoxide (CO), and hydrocarbons (HC). Compliance with these standards ensures cleaner air and reduced health risks.

3. *Safety Regulations (National Highway Traffic Safety Administration, or NHTSA)*: Auto manufacturers must adhere to safety standards related to crashworthiness, occupant protection, airbags, seat belts, and child safety. These regulations help prevent injuries and fatalities.

4. *Recall Requirements*: Auto manufacturers are obligated to promptly address safety defects by issuing recalls. The NHTSA oversees recall processes to protect consumers from faulty components or design flaws.

5. *Consumer Protection Laws*: Regulations ensure transparency in advertising, warranties, and pricing. Auto manufacturers must provide accurate information to consumers and address any deceptive practices.

6. *Clean Air Act*: This federal law regulates emissions from vehicles and sets emission standards for pollutants. Compliance with these standards is crucial for environmental protection.

7. *Corporate Average Emission Standards (CAES)*: Similar to CAFE, CAES focuses on reducing greenhouse gas emissions. Auto manufacturers must meet specific emission targets across their fleet.

(I'm sure the list goes on a good bit longer but I feel like this is enough for now.)


Actually I take it all back, the car is a really good model for how we should handle AI safety.

With cars, we let most people use some very dangerous but also very useful tools. Our approach, as a society, to making those tools safe is multi-layered. We require driver's ed and license drivers to make sure they know how to be safe. We register cars as a tool to trace ownership. We have rules of the road that apply to drivers. We have safety rules that apply to manufacturers (and limit what they are allowed to let those tools do). If a user continues to break the rules, we revoke their license. If the manufacturer breaks the rules, we make them do a recall.

I actually agree with you 100%, this is probably a good way to think about regulating AI. Some rules apply to individual users. Some rules apply to the makers of the tools. We can come together as a society and determine where we want those lines to be. Let's do it.


Are steering wheels and gas pedals features that make it easier to hit people?


If their Civic's brakes were poorly designed or implemented, then yes, Honda should be liable. Then we get into the definition of 'poorly' - in what distance and time should the car stop? - and then we need some sophisticated regulation.


The analogue of someone using deep-fakes for fraud is for someone to purposefully hit a pedestrian with their car. Should Honda be held liable because someone tried to use their car as a weapon? The classical form of this argument is if a kitchen knife manufacturer should be held liable if someone used their knives for homicide.


This analogy is strained, because when it comes to motor vehicles, aside from the concept of "street legal" cars that limit what you can do with the vehicle, we also have cops that patrol the streets and cameras that can catch people breaking the rules based on license plate. Theoretically you can't drive around without being registered.

What's the equivalent of that for AI? Should there be a watermark so police can trace an image back to a particular person's software? If that isn't acceptable (and I don't think it would be), how do we prevent people from producing deep fakes? At the distribution level? These are hard problems, and I don't think the car analogy really gets us anywhere.


> This analogy is strained

Yes, that's why I offered the kitchen knife example instead. Cars are also a problematic analogy, because even though some people still consider their operation to be fully controlled by the driver and not the manufacturer via their software, that's apparently becoming less the case.

> If that isn't acceptable (and I don't think it would be), how do we prevent people from producing deep fakes?

You don't. The problem isn't producing deepfakes. The problem is committing fraud, regardless of the tools used. Someone using deepfakes to e.g. hide facial disfigurement from their employer isn't someone I mind using deepfakes.


> Someone using deepfakes to e.g. hide facial disfigurement from their employer isn't someone I mind using deepfakes.

I agree here. But what about the harder questions? Do you think deepfake porn of celebrities should be allowed? What about deepfake porn of an unpopular student at the local high school?

If these aren't allowed, where is the best place to prevent them, but still has minimal impact on the allowed uses? At the root level of the software capable of producing them (what seems to be proposed in TFA)? At the user level (car analogy)? At the distribution level (copyright-style)? I don't know the answer to these questions, but I think we should all be talking and thinking about them.


> Do you think deepfake porn of celebrities should be allowed? What about deepfake porn of an unpopular student at the local high school?

Should be handled by impersonation/defamation laws. For celebrities, perhaps it may be handled by copyright. That would allow them to license their likeness under their own conditions.

> If these aren't allowed, where is the best place to prevent them, but still has minimal impact on the allowed uses?

By enforcing laws against the bad behaviors themselves and not trying to come up with convoluted regulations on tools just because of their potential to be used badly.

> At the root level of the software capable of producing them (what seems to be proposed in TFA)?

> At the distribution level (copyright-style)?

You'd just be increasing the costs of producing software and distribution means (thinking of stuff like YouTube). It's just setting up already powerful companies to become even more powerful by raising the bar on what potential competition must be ready for from the get-go.


It turns out bringing a lawsuit is a lot more costly than sharing a deepfake with a classroom. Your solution is flawed.


> not trying to come up with convoluted regulations on tools just because of their potential to be used badly

and

> You'd just be increasing the costs of producing software and distribution means (thinking of stuff like YouTube). It's just setting up already powerful companies to become even more powerful by raising the bar on what potential competition must be ready for from the get-go.

These arguments could be made against any "custom" regulatory scheme like what we have for drugs, cars, airplanes, etc. But sometimes the unique harms presented by certain classes of products require unique regulatory schemes.

Maybe you're right (I hope you are) and the potential harms of AI are not really significant enough to warrant any special regulation. But I don't think that is _obviously_ the case, and I would be careful when it comes to talking with normies about this stuff - AI does seem to be really scary, and hearing a techie hand-wave their concerns over technology they don't understand has the potential to make it worse. Good luck out there buddy.


In that case, I think you have a point. However, consider these situations:

* Honda made their car poorly and there are sharp edges at the fenders, and the driver purposely used those to injure someone. I think Honda should still have some liability; their poor construction resulted in extra injury, regardless of the application.

* Honda intentionally or with recklessness built the car in a way that would serve as a useful tool for murder, in ways that served no worthwhile purpose, in ways that could be secured. I don't know the law exactly, but I expect Honda would be liable, and IMHO that would be absolutely right.

Still, if Honda builds a safe car and someone simply chooses to use its mass x acceleration to kill someone, then I wouldn't hold Honda liable.


If Honda says that the Civic is good for running into people, then yes - as that is its clear purpose. Or if Honda says you don't have to worry because it is not possible to hit someone - because they promised they took care of that.

Note that courts weigh advertising over warning labels and the manual. This is why many car ads show the text "professional driver on closed track" on screen - it makes clear that they think the car can do it, but not that most customers should try. Likewise, cutting tools shown with "guards removed for clarity" are clearly not operating (or are clearly a cartoon image and not the real tool) - if they advertise someone running the tool without the guard, they are liable.

There is also the concept of foreseeable misuse in courts. If someone could have imagined that a person would do that, you have to show the courts it isn't the intended purpose and that you tried to prevent it. If someone does something you didn't think of, then you need to show the court you put a reasonable effort into figuring out all the possible misuses; otherwise it becomes a lack of creativity on your part. Thinking of a misuse doesn't mean you have to make it impossible, just that you have to make a reasonable effort to ensure it doesn't happen (guards, warning labels, training, not selling to some customers - all are common tactics for selling something that can be misused without being liable, though even then you can't just put a warning label on something if you could have placed a guard on the danger).

The above just brushes the surface of what the courts deal with (and different countries have different laws). If you need details talk to a lawyer.


I'm suspicious of this bill, but your analogy does more to show how cars are horrifyingly unregulated than to push for individual responsibility.

The car allows you to break the law by going 2x faster than the highest speed limit in the nation. A faster car with higher ground clearance does make it easier to fatally run into someone. The Tesla Cybertruck is a killing machine in car form.

Cars are among the leading causes of death in the US. Maybe we need a similar 'pre-emptive manufacturer-side intervention' bill for cars too.


If it was found they were reckless, absolutely. I believe this is already the case.


I don't.

Software developers and researchers should not be liable for distributing information or code, even if it's used for something illegal, as long as they aren't explicitly promoting the illegal activity and don't have any involvement with it outside of creating the software.

Not only is that consistent with previous decisions, such as those regarding copyright (i.e. torrents are fine, but making a client to torrent movies specifically isn't), but also any other decision would be a violation of the social contract with regard to open-source development.


If a bridge collapses and people are hurt the engineer is at fault and should be held accountable. If software fails and people are hurt the software engineer is at fault and should be held accountable.


This is a poor analogy. A better one would be if a murderer used your bridge to escape. Should you be held liable? What if the bridge were designed to handle highway speeds so he could escape faster?


I'd agree that they shouldn't be liable in that case, since the bridge works the same for everybody (no matter why they're driving over it) and is working as designed/intended. It's really only the idea that developers shouldn't be liable for their code as long as they aren't explicitly promoting illegal activity and don't have any involvement with it outside of creating the software that I take issue with.


General AI tools also work the same for all users. It's not as if the average AI company is optimizing for celebrity deepfake nudes or spambots.

I mean, there are really two categories of software:

* Free and/or open source software. In this case, I think there is no good reason to make the developer liable, unless they're promoting illegal use. No person wants to be attacked for giving away something for free. That's why the LICENSE disclaims warranty and liability.

* Commercial/paid software. In this case, it is reasonable to argue that companies should be liable if end users are harmed by the software. For paid software especially, disclaimers cannot be absolute.

But I do not think it is acceptable to hold developers liable for second-order effects - i.e., a user doing something illegal with the software and harming a third party - unless it was obvious to them that the user was going to do something illegal.


If they are knowingly including large numbers of celebrity photos in their training data, slurping it into their models, and doing nothing to block users from abusing what is a clearly foreseeable harm? That's on the companies making the product, not on the users.

If Honda put a big spike on the front of their vehicles because they thought it looked good and would sell more cars, but the spike was good at skewering pedestrians, they'd be at fault too. It wouldn't matter that their designers thought the spike was sexy and would sell more cars. You can't make something you know to be dangerous and expect to sell it to the public without being regulated.

Want to avoid the regulation? Don't steal a bunch of celebrity photos and provide your users with a tool that creates celebrity porn deepfakes on demand.

This isn't controversial. Go to Microsoft's AI chatbot today and try to get it to create a naked image of Taylor Swift. Microsoft has spent non-trivial engineering resources making that fail. Not doing that work is irresponsible and likely to lead to a lawsuit that may or may not be winnable, but that Microsoft and others clearly want to avoid.


Counterpoint: tons of tools are dangerous yet are still sold without much if any regulation. Knives are dangerous, but you don't need an ID to buy one from the store. We sell dangerous products all the time! We just put warnings and disclaimers on them (which AI models tend to come with).

That said, I dispute the idea that these models are "dangerous" in the first place. A box that generates texts and images is not even remotely as dangerous as a sharp spike strapped to a car. Such a comparison is hyperbolic.

People act like these models are going to be the end of US when they're literally just "instant photoshop." A dangerous model would be one designed to run a military drone or automatic weapons, not a random text and image machine.

All that aside, the deepfake issue has nothing to do with the model datasets including celebrity photos (in fact, it would work fine without any of them). And no, downloading public photos is not stealing either.


The original comment explicitly said "holding developing firms ..." - so this is not about software developers, it is about corporations. The moment you start to sell stuff is the moment you become liable.


HN doesn't know how to make that distinction any more. It's so overrun with corporate bootlickers who think the software engineers ARE the company and the company IS the software engineers. I presume it's just a bunch of temporarily embarrassed billionaires planning for the future, but it's a shame that a once hacker-friendly forum is now mostly focused on compensation maximization and defending trillion dollar corporations.


Torrents are fine because there are legal purposes. Developers of torrent software are very careful to emphasize the legal uses.


There are legal purposes for generative AI tools and even deepfakes, so there should be no issues with the tools themselves.

Obviously if a site promotes "download this tool to generate infinite nude pictures of celebrities", then that is illegal, since that particular tool was only developed for illegal uses.


There are legal purposes for cars and guns and we regulate the hell out of them because there are also plenty of not-legal purposes and even just the potential for accidents. When A.I. is as heavily regulated as cars, we can revisit the "it's just a tool" argument.


We can revisit the argument when an image generation AI causes a fatal accident with 5 other vehicles.


If there was legislation that required Honda to install certain safety features and they failed to do so, then yes they should be liable.


> I feel it would be a lot more sensible and effective to legislate clear and present harms, such as holding developing firms liable for deep-fake technology if used for identity theft for the purpose of fraud.

s/deep-fake/photoshop

Deepfakes are simply more convenient photo/video/audio editing that has been around for decades[1], and we don't really need new legislation to deal with them. Fraud/defamation/etc, the actual harmful aspects of what can be accomplished with deepfakes, don't need any new updates to handle the technology. If we're going to hobble new technologies, we may as well go back and hold Adobe responsible for all the shady things people have done with Photoshop, and video/audio editing suites for all the deceptive clips people have spliced together.

[1] https://www.youtube.com/watch?v=La5jrfobfTM&t=1s


s/photoshop/airbrush

I vaguely recall seeing some fairly convincing B&W Soviet-era photos (I think they had Stalin in them) where people were removed and other people moved around to fill the gap. And document forgery for the purposes of fraud and espionage has of course been around for centuries.

But I think the issue is less the capability itself, and more that companies will make it too easy (trivial, actually) for anyone to commit mischief. The ability to mass-manipulate images on command is no longer restricted to the General Secretary of the USSR.

That doesn't necessarily mean regulation is required, though--plenty of modern technologies make it very easy to commit crimes, but only some of them require special rules.


I understood the bill to explicitly not target misuse of the AI (from the article: "Odd that ‘a model autonomously engaging in a sustained sequence of unsafe behavior’ only counts as an ‘AI safety incident’ if it is not ‘at the request of a user.’ If a user requests that, aren’t you supposed to ensure the model doesn’t do it? Sounds to me like a safety incident."). This seems to be entirely targeted at potential risk from a rogue AI. What regulation would you propose to address that risk?


Some people say that it doesn't matter if someone dies at age 89 -- after they have lived a full life and contributed all they had to give -- it's still just as sad and shocking.

Personally, I don't agree, to me it's just not as sad or shocking. People don't live forever and Wirth's life was as successful and complete as possible. It's not a "black day" where society truly lost someone before they fulfilled their potential.


Yeah, that just shows you haven't yet grasped what mourning is.


Is this an advertisement?


I don't think that's accurate; it generates novel outputs that were not observed in the training data.


It doesn't generate new tokens.

Train an LLM on text that only uses lowercase, and it will never output an uppercase letter.
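
Toy sketch of why (purely illustrative, not any real model's code): the output layer is a distribution over a fixed vocabulary built from the training data, so whatever the logits are, the sample is always drawn from that vocabulary.

    import numpy as np

    corpus = "the quick brown fox jumps over the lazy dog"   # all-lowercase training text
    vocab = sorted(set(corpus))                              # the only symbols the model can emit

    def sample_next_char(logits):
        # Softmax over the vocabulary; an uppercase letter has no index,
        # so it can never be sampled, no matter what the logits are.
        probs = np.exp(logits - logits.max())
        probs /= probs.sum()
        return np.random.choice(vocab, p=probs)

    logits = np.random.randn(len(vocab))   # stand-in for a trained model's output
    print(sample_next_char(logits))        # always a character seen in the corpus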


So the model is limited to using words and characters that already exist. I agree with you, but I don't see why that's a limitation worth pointing out.


You literally have to put every number into the training data for it to do mathematics correctly...

It's as stupid as that. Some try to get around it by having only the 10 different digits as tokens and gluing them together, but it's an illusion that this works.

An important part of generalization is, for example, that you can teach it something new. This is literally important.

'ycombinator is a website' is a prompt that is almost impossible to complete if 'ycombinator' is not in your training set.
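
Toy illustration of the two tokenization choices I mean (this matches no real tokenizer; it's just to show the trade-off):

    def whole_number_tokens(text):
        # Every distinct number string is its own token; "123" must have
        # appeared in training for the model to have learned anything about it.
        return text.split()

    def digit_tokens(text):
        # Only the 10 digits (plus other symbols) are tokens; the model has to
        # glue them together and learn place value from context.
        return [ch for ch in text if ch != " "]

    prompt = "123 + 456 ="
    print(whole_number_tokens(prompt))  # ['123', '+', '456', '=']
    print(digit_tokens(prompt))         # ['1', '2', '3', '+', '4', '5', '6', '=']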


But can it put two tokens together

10 01 = 1001?

