OP here. Many people are reacting to the title of the paper. A few thoughts:
* The paper is 35 pages long and it's hard to convey its message in any single title. We make clear in the text that our point is not that predictive optimization should never be used.
* We do want the _default_ to change from predictive optimization being seen as the obvious way to solve certain social problems to being against it until the developer can address certain objections. This is also made clear in the paper.
* The title is a nod to a famous book in this area called "Against prediction". Most people in our primary target audience are familiar with that book, so the title conveys a lot of information to those readers. That's one reason we picked it.
* Despite its flaws, when might we want to use predictive optimization? Section 4 gets into this in detail.
Is the problem really the predictive optimization algorithms themselves, or rather how we think about these problems / solutions in the first place?
I don’t know, it may just be that I’m getting old (45), but my impression is that there has been a substantial shift in our culture around these issues over my adult lifetime (at least in Sweden where I live). For example, only 20 years ago it was extremely rare that the Swedish police shot anybody, and e.g. holding a knife and being non-compliant with police instructions was not considered enough to warrant getting shot. There has been no change to the relevant law AFAIK, but there has been a cultural change where many people now think the police are responsible if such a person suddenly runs away and attacks someone with that knife. And there’s also a constant stream of cases where the police have shot someone essentially for being non-compliant (mentally ill, addicts, distraught people and so on).
I think in general 20 years ago in Sweden most people in positions of influence were a lot more principled in thinking about these issues, and the principles were the classic ones ironed out by Voltaire et al. Nowadays, not so much… The thinking seems much more primitive.
One of those principles was that it was wrong to punish (or take similar action) against someone for what they could potentially do in the future. Another was that each person is (with very few exceptions) morally responsible for their own actions. Pretty good principles in my view.
Is it naive to think that police officers can restrain someone holding a knife and being non-compliant, potentially dangerous?
I’ve seen police in Switzerland do this. Specifically once a large man with a large knife. He made a very weird, unstable, unnerving impression on me. He got surrounded and restrained by about three policemen. They had some kind of gloves on. Happened very quickly.
I think they did the right thing there. They predicted potentially life threatening danger and then handled it in a non-lethal way.
If they pulled a gun and shot the guy I would have some questions...
> Is it naive to think that police officers can restrain someone holding a knife and being non-compliant, potentially dangerous?
Swedish law does not say that the police should attempt this.
The basic idea (which today is hard to understand, I know…) is that it’s a rather minor crime to hold a knife in public and it’s not a crime at all to be non-compliant with police instructions. So the police do not have legal authority to shoot [1]. They essentially have to wait it out and see what happens.
But what has changed over the last 20 years is that the concept of self defense (“nödvärn”) has been expanded enormously when applied specifically to the police, in an extrajudicial way, so that it now more closely matches your expectation: “if he seems dangerous / could do something bad in the future, shoot!”
I think the person you are responding to is not arguing they should shoot, but quite the contrary. I think you're essentially in agreement.
My own opinion: I think shooting someone should always be the last resort. Obviously police officers must look out for their own well-being and should not expose themselves to obvious risk of death if possible. On the other hand, someone holding a knife or being non-compliant is not something that must be immediately handled by shooting at the person. Maybe waiting it out, maybe talking, maybe swarming him/her with multiple officers, maybe using tranquilizers, who knows. Every option has a risk and nothing is foolproof, but killing someone is the ultimate, irreversible option and should not be the default, even when the person is resisting or noncompliant.
I don't understand why in some countries, like the US, police officers are so eager to resort to violence or killing -- though it's certainly the easy way out.
> I don't understand why in some countries, like the US, police officers are so eager to resort to violence or killing
Police training has become heavily influenced by military tactics (take the case of Israeli training of American officers and "police exchanges"). This was not always the case. Now, I do not deny that such tactics can be useful in certain circumstances, but overemphasis on them can be a problem just as much as laxity.
In my worldview self-defense, including defending others, always uses the minimal amount of force to reduce further harm. And I would expect a police officer to have the skills and composure to do so in the general case. Bad things can happen and mistakes are human and all.
The trend you describe is disheartening. Not just for the individual ethical concerns you mentioned, but also in the context of state violence. We give the police the right to use violence. It's a massive responsibility and when it's abused, it leads to spiraling effects.
To expand on this, the expansion that has happened in courts and mainly in cases involving police is based on the legal idea of 'inbillat nödvärn', i.e. defending against an imagined threat, in contrast to an actual threat. If someone is holding a gun the threat is material, if they're gesticulating with a phone the threat would be 'inbillat', imagined.
I'd disagree that it's extrajudicially expanded; it's courts and prosecutors privileging police, probably because politicians have made it clear that they think Sweden needs many, many more police and currently very few, and ever fewer, suitable persons let themselves be recruited. The pay is bad, the job is not fun, a large portion of recruits join to serve a few years and then get a much nicer job. If they also risked getting jailed for panicking on the job even fewer would join the force.
As a side note, police commonly practice at the shooting ranges run by sports shooting clubs, where they are considered a nuisance since they are sloppy and bad at hitting the target.
> I'd disagree that it's extrajudicially expanded; it's courts and prosecutors privileging police, probably because politicians have made it clear that they think Sweden needs many, many more police and currently very few, and ever fewer, suitable persons let themselves be recruited.
It’s debatable if extrajudicial is the right word. It can be used to describe sentencing entirely outside of the formal legal system (as e.g. in “extrajudicial execution”). But that’s clearly not what I’m referring to here. In my view it can also be used to describe the case where a court makes decisions outside of the law, without legal authority or in direct conflict with law.
So in my mind what you are describing is essentially an extrajudicial expansion. We have a civil law system where it shouldn’t matter what politicians say. The only thing that should matter is when 175+ members of parliament press the green button. To the best of my knowledge they have never done so with the intent to expand police use of force in self-defense.
The example you bring up I think is a poor one (from my perspective), since the court’s verdict was not entirely unreasonable. A better example is the case where a police officer beat a very drunk unarmed man with her baton and let her dog attack him because he refused to lay down on the street. The appellate court (“hovrätten”) found her not guilty based on her statement that she could see from a tightening of the muscles in the man’s face that he was about to attack [1]. I’ve read the judgement and deem it bizarre enough to warrant the extrajudicial label.
Another reason I think it’s fair to call it an extrajudicial expansion is that several police officers I’ve talked to think of this kind of use of force as now within their authority. The line between legal use of force and what a police officer “can get away with (essentially by lying in court)” has been blurred, and for many it’s the latter that they see as relevant.
Mainly due to police work, which famously doesn't mend the cracks violent street gangs seep through; instead it just irregularly takes away the leaders and causes further instability and violence around the business they do.
> But what has changed over the last 20 years is that the concept of self defense (“nödvärn”) has been expanded enormously when applied specifically to the police, in an extrajudicial way
This is a dangerous road that leads to US-style policing.
Sweden has changed a lot since 1984 when the current police law was passed. I really like it, but it was written for a different time. It would be much better to pass a new one with more formal authority for the police to use force (“laga befogenhet”), instead of letting a culture of courts looking the other way / bending the law build up even further.
I will never understand why non-lethal force (e.g. a taser) isn't given more consideration. It's a false dichotomy! Using lethal force (guns with bullets), or risking officer harm (wrestling suspects into submission) are not the only options!
A common criticism regarding tasers is that since they're perceived by police as safe they'll be overused, including in cases where it would rather amount to torture than policing, or as a method to punish suspects that are otherwise compliant for things like name calling or spitting on the ground.
Personally I prefer that police risk something when they use force, that encourages them to have solid reason for it.
"Non-lethal" weapons are actually lethal enough (they can kill) that their use is considered similar in outcome to bullets BUT they can also be completely ineffective against intoxicated and or somehow "shielded" targets (clothes, etc.). So it's really a dice shot. Given this, the logic is that if you're gonna need a gun because you fear for your life, better pull a real one. Otherwise, just keep you distance and wait for backup.
To me this is an essential part of the problem. "Kill or be killed" is an escalation relative to the vast majority of situations. The percentage of suspects killed by police shootings who actually posed a direct, imminent, lethal threat to the arresting officers is a very small minority.
The idea that guns and tasers are considered similarly lethal is ~laughable. Consider "don't tase me, bro" vs gunfire.
Your point about potential ineffectiveness in a critical situation is well-taken. But improving the quality / effectiveness of mace and tasers (or equivalent non-lethal tech) would be a budgetary rounding error compared to the cost of the status quo.
Let's just make sure that all usage of non-lethal weapons follows rules of engagement as strict as firearms. For example, non-lethal weapons should not be a quick cop-out from proper deescalation tactics nor should they be used against non-violent political action.
If the police have a taser or other weapon that can immobilize someone, sure, but otherwise, you should not try to attack a person wielding a knife with your bare hands. Knives are incredibly dangerous.
Police also do not wear body armor to protect against knives. Their body armor protects against bullets. In places where guns are very uncommon, they could switch to body armor aimed at protecting against knives, but I am not sure if there are places where that tradeoff is common.
I don't think it is reasonable to expect the police to risk their lives to try to subdue someone with a weapon. Taser first and then a gun is preferable to someone using a knife to kill someone.
I agree. We should have a special body of people trained explicitly to deal with violent behaviour and threats to life, such as men wielding knives and similar weapons. They should be given training and equipment to disarm, or otherwise neutralise, such threats, without putting anyone's life in danger: their own, the general public's, or that of the people acting in a threatening manner.
It is something they signed up for. This is the cost of protecting your community.
Not all of us are so brave as to become a police officer. But there comes a time when you need to stand up not just for the people around you, but the values you all organize yourselves into communities for. Protecting a community's way of life is literally the job description of the police. This includes protecting your community from being dominated by a police force empowered to murder with no warning or accountability. The officers are the ones who make the decisions to ensure that their communities remain safe from threats of all kinds.
We don't give our armed forces excuses when they violate the Geneva convention and ROE, why would we do it with police?
OK. Would it be unreasonable to ask them to form a barrier between the violent person and other people in the area and defuse the situation with either patience, wit or talk?
People get very unrealistic ideas from movies and TV shows. If you go to attack someone who has a knife and you are barehanded, you almost assuredly will lose that altercation.
> Nowadays, not so much… The thinking seems much more primitive.
But the threat is so much greater.
I live in Sweden too, and there was a bombing near my apartment. A shooting at the metro station, etc.
With the massive influx of weapons from the remnants of Yugoslavia, and criminal young men imported from Syria, Afghanistan, Iraq, etc. - Sweden just isn't the same as 20 years ago, and law enforcement has to adapt to that too.
Sweden still has laughably light sentencing for gang crime compared to Denmark for example (nevermind the UAE, Singapore, etc.) so it's no wonder that the situation has spiralled out of control.
> and criminal young men imported from Syria, Afghanistan, Iraq, etc
Yes, Sweden was truly a paradise before all this immigration. When a prime minister could just wander around the streets and be killed.
Try giving equal opportunities to people instead of passing the message: "you are born poor, and such you shall remain" and see that people might be less inclined to become criminals.
> Sweden is one of the most unequal countries in the world, with its so-called Gini-coefficient, as calculated by Credit Suisse, higher than every country apart from Bahamas, Bahrain, Brunei, Botswana, Brazil, the UAE, Yemen, Laos, Russia, South Africa and Zambia.
> Sweden’s billionaires own 16 percent of Sweden’s national wealth, double the share they had in June 2016, and quadruple what they had in 1996. In 2021, the wealth of Sweden’s billionaires amounted to 68 percent of GDP, up from just 6 percent in 1996.
> The richest 0.1 percent of Swedes hold about 29 percent of total household wealth. In the US, the richest 0.1 percent hold only 19.3 percent.
> Sweden’s 542 billionaires, who Cervenka points out could all just about fit into a single Airbus 380, own as much as the poorest 6.2 million Swedes.
Source: Girig Sverige.
But sure… immigrants are the sole problem here. /s
Keep voting SD and keep avoiding stressing your brain. Immigrants=bad. Ok. Sure.
>> Try giving equal opportunities to people instead of passing the message: "you are born poor, and such you shall remain" and see that people might be less inclined to become criminals.
Why would poor people be more inclined to become criminals? I know plenty of poor people who aren't, who break their backs trying to make ends meet working multiple jobs. There are plenty of people (whom I do not know personally) who end up homeless because they can't provide for themselves for many different reasons, and who do not resort to violent crime.
If poor people are more inclined to become criminals, then most Africans, Indians, Afghans, etc., should be "more inclined to become criminals". Well, that's exactly what is argued by those who want to control migration for their own political gains. That's what Nigel Farage was doing when he stood in front of that poster with the line of refugees fleeing the war in Syria.
You're conveniently leaving out the part that the event of someone holding a knife and not being compliant has been happening much, much more frequently recently, compared to 20 years ago.
So since the situation didn't use to happen often, obviously the reaction of shooting them did not either.
I read the intro and skimmed the paper. The title is a bit misleading.
The authors are not "against" using ML to make predictions about human beings for automated decision-making (e.g., whether to approve a loan, offer college acceptance, offer a job, reduce a jail sentence, etc.). What the authors are against is using ML for such purposes without explicitly addressing multiple common-sense issues that unfairly impact groups of human beings.
Specifically, the authors recommend that those organizations which use ML for automated decision-making specify how they are dealing with these issues, instead of sweeping them under the rug. This strikes me as a sensible recommendation.
This same fundamental argument got a prominent researcher at Google's AI Ethics group, Timnit Gebru, fired. Of course the excuse for the firing was some bureaucratic bupkis that would have been swept under the rug with any other person, but because it was someone with a lot of clout in the DEI community, they were removed from their influential position.
As someone in the field, I can absolutely vouch that stakeholders generally want the issues raised swept under the rug, and junior folks often don’t even consider these issues. So I absolutely agree with the authors' recommendation that each of the points raised should be assumed to be an issue unless specifically addressed by the developers.
It's not that we can’t do this stuff, it’s just that doing better than a human is amazingly hard, and a lot of Good Decisions are AGI-hard (or harder, since philosophers through history still struggle at it). Solving it well at scale might be fundamentally expensive.
If a sales person tells you “just trust me our algo is great,” the burden of proof is on them to demonstrate that they’ve addressed these issues.
This is a fantastic summary of the potential issues by some of the most respected people at the forefront of research on bias and harms in AI.
Some people are criticizing the short page for not being more constructive - there are some constructive things you can do and the authors know them and probably discuss them in the full paper. But there are also cases where using a predictive system is inherently unjust (or it is not feasible to fix its issues) and sometimes you have to make an ethical decision not to deploy (example: automated sentencing or bail decisions).
Justice is blind, but it shouldn't be blind and mute. Until AI systems can "explain themselves" and be scrutinized as easily as rubrics, I don't trust them with life-changing decisions.
Opaque systems appear unjust whether or not they are in fact unjust. Perception of justice is as critical as actual justice in many public-facing use cases.
This is an important argument and I agree with many of the problems they mention. The problem is always identifying an alternative. You could say just don't do it but people will probably do it anyway in a black box kind of way, and so the question becomes what then? What is the implicit alternative, and can it be made explicit or replaced?
The article provided a rubric for assessing the legitimacy of decision making algorithms. While it can be used to reject inferior algorithms, it can also be used to develop and have confidence in a decision making algorithm that meets all the conditions.
One can also use an explicit rubric to automate decision making with transparency.
The problem is that ML models are somewhat inscrutable, and this inscrutability creates a problem for decision-making software. In particular, you don't know why the algorithm made the decision. As the authors note, this makes the outcomes hard to contest because it's not possible to know how the decision was made (only in a general sense). For example, you can't know whether any bias was baked into it. ML models can absolutely be biased: face recognition software, for instance, has high false positive rates for groups that are underrepresented in its training data. The authors go on to say in the paper that the outcomes from ML seem to be not much better than those of a human or a simpler statistical model doing the same role - and those alternatives don't carry the drawback of inscrutability.
Honestly, the title threw me off at first—I used to work in supply chain optimization which very much follows a predict → optimize pattern, and I think there are interesting directions[1] for preferring a different approach, but the abstract and title made it pretty clear that the work focuses on a fundamentally different area. Supply chain forecasting is qualitatively different from the predictions in this essay because it applies to aggregate behavior, not to individuals.
[1]: In particular, what we did (and what most people do?) is predict first, then optimize. Prediction and optimization were done by different systems, built and maintained by different teams, optimizing for different performance metrics. But, over time, I became increasingly convinced that this separation was not effective and that we could get substantially better performance by jointly optimizing the prediction and decision-making models. I saw a paper[2] demonstrating this idea mathematically, but now I think some variant of this would make sense from an organizational and systems-design point of view as well. If I ever get the chance to build a supply chain optimization system from scratch, I'd want to start with the forecasting and optimization components much closer together technically and organizationally, even if we don't jump into trying joint optimization from the beginning.
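To make that intuition concrete, here's a minimal numpy sketch of a newsvendor-style toy problem (my own illustration with made-up numbers, not the method from [2]): the forecast that minimizes prediction error, i.e. the conditional mean, is generally not the stocking level that minimizes the actual cost once under- and over-stocking are priced asymmetrically, which is exactly the gap a decision-aware fit closes.

```python
# Toy newsvendor-style sketch (hypothetical numbers, not from [2]): the
# MSE-optimal forecast (the conditional mean) is not the stocking level that
# minimizes the actual cost when under- and over-stocking are priced differently.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
x = rng.uniform(0, 1, n)                      # one demand driver, e.g. promo depth
demand = 50 * x + rng.gamma(2.0, 10.0, n)     # right-skewed demand noise

UNDER, OVER = 9.0, 1.0                        # cost per unit of lost sale vs. leftover

def avg_cost(stock):
    return (UNDER * np.maximum(demand - stock, 0)
            + OVER * np.maximum(stock - demand, 0)).mean()

# (1) Predict-then-optimize, done naively: stock exactly what the MSE forecast says.
coef = np.polyfit(x, demand, 1)               # least squares ~= conditional mean
mean_forecast = np.polyval(coef, x)

# (2) Decision-aware version: for this cost structure the cost-minimizing stock is
# the UNDER/(UNDER+OVER) quantile of demand, not its mean. With additive i.i.d.
# noise a residual-quantile shift gets us there; with messier demand you would fit
# the model against the decision loss itself, i.e. the joint approach above.
service_level = UNDER / (UNDER + OVER)        # 0.9 here
shift = np.quantile(demand - mean_forecast, service_level)
decision_aware = mean_forecast + shift

print("avg cost, stock = MSE forecast:       ", round(avg_cost(mean_forecast), 2))
print("avg cost, stock = cost-aware quantile:", round(avg_cost(decision_aware), 2))
```

In this additive-noise toy a simple recalibration suffices; with heteroskedastic demand, constraints, or cross-item effects you'd want the forecaster trained against the decision loss itself, which is where the organizational coupling described above starts to matter.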
I'm completely clueless about "supply chain optimisation" but I wonder whether it involves more automated processes than human beings?
I'm asking, because the article does focus on tools used to predict human behaviour (as I think you note as you say it focuses on a different area).
Predicting human behaviour is hard. I imagine that it's far easier to predict the behaviour of an artificial system, or a process, whose rules are (... more or less ... ) known. For example, I imagine that predicting, say, the rate of defects of a certain product coming out of some factory's production line, is not that hard to do - given enough data, etc.
And then, most of the time there's probably many fewer ethical and social issues to consider in such cases. I'm saying probably! Since I don't know the subject at all. But it's clear that there are social issues in, e.g., predicting recidivism or making automated decisions about taking children into foster care.
I was working in retail, which means we were trying to forecast human behavior—how much of a given item people would want to buy at some time—but we only cared about that behavior in aggregate, not at the level of individual people. That definitely makes it qualitatively different to the sorts of prediction and optimization that the paper talks about.
I think that if you read their list of "Flaws of predictive optimization" as a primer for what developers of automated advisory or decision making systems ought to be thinking about, and ultimately be accountable for, then this is very useful. But then concluding that we should be "Against predictive optimization" (their choice of headline), and providing a gauntlet of 23 ways to de-legitimize a predictive algorithm (as in their linked rubric), all of which, in all likelihood, no algorithm can survive, goes way over the top.
Predictive optimization is a necessary fact of life in many functions. The question is often not "should we do it," but "should we do it with a particular algorithm, in a particular setting?" The fundamental questions we should be asking, from the top down:
1) Does making the prediction systematically, and well, serve a useful, desirable social purpose? That is, do accurate predictions actually do us a net good? This can be a hard question: when using predictive algorithms to screen for adverse futures, given that all predictions, including human-based decisions, produce both false positives and false negatives, you have to weigh whether the harm from false predictions is outweighed by the good from true predictions.
2) Does a machine algorithm improve on human judgement for the question and system at hand? Machine algorithms don't have to be perfect to be better than human judgement, which is often abysmal when viewed on a system-wide basis.
3) Does the system into which the algorithm is being deployed provide reasonable mechanisms for detection, recourse and compensation for false predictions? Because, again, there will be false predictions.
A lot of the use of AI in social decision making that the authors criticize would flunk these questions. Not all, though.
AI is just a way of absolving a human of responsibility. Responsibility for what? Anything - just put an algorithm in front of a person's decision and suddenly it's an objective immutable truth.
Yes, but we at least try to hold humans accountable (i.e. a feedback loop). Adding these AI systems without a clear mechanism for a feedback loop is going to make things worse.
I initially rolled my eyes at the title, but then the subtitle adds “On the Legitimacy of...” and the rest of the article raises good points, so it’s just the misleading title I dislike. I don’t think there is a good argument against predictive optimization itself, but that’s not what the article is about—rather it is about how we are implementing it incorrectly and misusing its results.
I’ve posted on here before that I think we as a society need to carefully distinguish between predictive ability, causal inference, and ethical policies.
You can predict things without understanding the cause. And you can determine the cause of something without being able to make future predictions about it. But neither causation nor prediction is sufficient to inform an ethical policy.
A simple example might be a genetic abnormality that causes disease. Suppose we can use someone’s DNA to predict early in life and with high accuracy whether they are likely to eventually develop a disease that entails exorbitant medical costs. And suppose we can even identify the combination of genes that fully determines the outcome of whether a person will develop the disease or not. Health insurance companies might decide it’s not in their financial interest to insure these people, at least not without significantly raising the affected customers’ premiums.
But as a matter of morality, society decides that because this disease is not the fault of any individual or within any individual’s ability to control, insurance (and subsequently, all customers) should accept this additional cost as a necessary loss in order to ensure fairness.
To expound on the article’s other point, what is frequently claimed as “predictive” often is not actually so. As a data scientist, one of the things that I’ve noticed within my industry that is really glossed over is evaluating the accuracy of accuracy evaluation. Determining the quality of a model’s predictions is to some degree extremely difficult to do correctly. As a motivating example, consider a collection of models from different seismologists for earthquake prediction. Could you easily tell me which model is “the best” and to what extent each model is wrong?
Any ethical policy that includes predictive ability as one of its components should require intense scrutiny of the quality of the predictions. Not to mention that policies that depend upon predictive ability should already be extremely rare—I can think of perhaps one example: “what value of n to use for Blackstone’s ratio?” In this case, the true value of n (as opposed to our desired value for n) depends on how accurately we can predict whether someone is legally guilty or innocent. And even in that case, we never have ground truth data, so our ability to determine precision/recall is fundamentally limited.
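To illustrate the seismology point with a toy example (the forecasts below are fabricated, purely to show how metric choice matters): for rare events, a plain hit rate can be unable to distinguish a genuinely informative model from one that never forecasts anything, while proper scoring rules can, and different proper scores size the gap differently.

```python
# Toy, fabricated forecasts (not real seismology models): for rare events,
# which model looks "best" depends heavily on how you score it.
import numpy as np

rng = np.random.default_rng(1)
n = 50_000
y = (rng.random(n) < 0.05).astype(float)      # rare events, ~5% base rate

# "Climatology": always predicts the long-run base rate, no skill at all.
p_base = np.full(n, 0.05)
# "Informative": genuinely tracks the events, but almost never exceeds 50%.
p_info = np.where(y == 1, rng.normal(0.30, 0.10, n), rng.normal(0.04, 0.02, n))
p_info = np.clip(p_info, 0.001, 0.999)

def hit_rate(p):   # accuracy at a 0.5 alarm threshold
    return ((p >= 0.5) == (y == 1)).mean()

def brier(p):      # proper scoring rule, lower is better
    return ((p - y) ** 2).mean()

def log_loss(p):   # proper scoring rule, punishes overconfidence much harder
    return -(y * np.log(p) + (1 - y) * np.log(1 - p)).mean()

for name, p in [("climatology", p_base), ("informative", p_info)]:
    print(f"{name:12s} hit-rate={hit_rate(p):.3f}  "
          f"brier={brier(p):.4f}  log-loss={log_loss(p):.3f}")
# Hit rate can't tell the two apart (both ~0.95, same as never forecasting an
# event at all); the proper scores show a clear gap, and its apparent size
# depends on which score you choose.
```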
You want to make loans without predictive optimization? Good luck with that. This is truly one of the most childish criticisms I've ever seen published.
Anyone who works on forecasting or "predictive optimization" understands all of these issues. The question isn't "is predictive optimization perfect", the question is "Is predictive optimization better or worse than your practical alternatives?" And sometimes it comes out better, and sometimes it comes out worse.
I read the intro and skimmed the paper. The OP doesn't propose to "make loans without predictive optimization." It proposes that the people who use software to make loans specify how they are dealing with these issues, instead of sweeping them under the rug.
Exactly this. If you're going to use an algorithm to discriminate between people in order to advantage some over others (by granting a loan to them, or admitting them to a university, for example) you'd better be able to fully and independently verifiably explain how your algorithm works and doesn't discriminate among people due to their protected attributes or proxies thereof. Current machine learning technologies render this assurance all but impossible and so they must be regarded with the most extreme suspicion until this can be provably remedied. Given how difficult it is validating traditionally programmed inference systems we're a long way from it.
If a person and not an algorithm decides, there is no full and verifiable explanation. If we want that, we need to move to fully algorithmic decisions to start with.
I agree. Also, it’s worth noting that there are multiple reasonable definitions of “fair”, and following different ones will not result in equivalent outcomes. So no matter what is done, there will be room for criticism.
> due to their protected attributes or proxies thereof
By definition, if it's algorithmic, it's not discriminating due to protected (or any other) attributes - it's discriminating very strictly on ability to repay with absolutely no bias, because it's literally an unthinking, unfeeling machine. Forcing ML models to find specific results rather than actually letting the models do what they were designed and asked to do renders them useless, but that may be the goal.
First, love the presentation. I wish more papers had such a nice, succinct presentation of the main arguments.
Second, a disclaimer, I did not read the full paper but only went through the link here.
The site should be more specific that it refers to the use of predictive optimization in social setups. I thought I was about to read a finding on how predictive optimization is fundamentally flawed, but that is far from what this presents.
Even within the social scope, the argued flaws seem to be more like a list of things to check against, rather than fundamental flaws:
- Good predictions may not lead to good decisions: In a setup where the prediction model is trained independently of the optimization one, this should be obvious. How you craft the optimization model and what you feed it is what matters.
- It's hard to measure what we truly care about: This is true in way too many cases, and still we have to do something. We constantly seek proxies to be able to act instead of throwing up our hands and giving up.
- The training data rarely matches the deployment setting: A well known issue in ML, not really insightful and just something that should be taken into account when developing models.
- Social outcomes aren’t accurately predictable, with or without machine learning: This is way too broad a claim. The variance from person to person may make it hard to predict a specific outcome, but it may be possible to provide a model that gives a distribution over outcomes in an accurate way (see the toy sketch after this list). For example, if you are born in income percentile p, it is hard to predict _exactly_ the income percentile your kids will be born into, but it is feasible to predict a _distribution_ over it. Moreover, if social outcomes were truly random, that is, we can't say anything about the future, then there is zero signal between our interventions and the outcomes down the line... which again sort of implies we should never do anything.
- Disparate performance between groups can’t be fixed by algorithmic interventions: Again, not a fundamental flaw. There's nothing about an algorithmic intervention that makes it inherently biased; rather, it will simply reflect the design of the creator. This is a known issue and there is plenty of literature on the matter.
- Providing adequate contestability undercuts putative efficiency benefits: This is probably the best argument of the list. The design of predictive optimization models should take into account that they may need to be explained on individual cases, which probably reduces the model space. Or we should get better at explaining complex models!
- Predictive optimization doesn't account for strategic behavior: Again, this is just something to take care of when designing these systems, not a fundamental flaw. This afflicts human interventions as well, as the examples show.
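Here's the toy sketch promised above for the distribution-versus-point-prediction item (the mobility matrix is entirely made up; the point is only that a model can be nearly useless for guessing an individual's outcome while estimating the distribution over outcomes quite well):

```python
# Toy sketch of "predict a distribution, not a point". The transition matrix
# is entirely made up for illustration; it is not real mobility data.
import numpy as np

rng = np.random.default_rng(2)
Q = 5  # income quintiles
# Hypothetical P(child quintile | parent quintile): sticky at both ends.
true_p = np.array([
    [0.40, 0.25, 0.18, 0.10, 0.07],
    [0.25, 0.28, 0.22, 0.15, 0.10],
    [0.15, 0.22, 0.26, 0.22, 0.15],
    [0.10, 0.15, 0.22, 0.28, 0.25],
    [0.07, 0.10, 0.18, 0.25, 0.40],
])

# Simulate families, then fit the simplest possible "model": the empirical
# conditional distribution of the child's quintile given the parent's.
n = 100_000
parent = rng.integers(0, Q, n)
child = np.empty(n, dtype=int)
for q in range(Q):
    mask = parent == q
    child[mask] = rng.choice(Q, size=mask.sum(), p=true_p[q])

est = np.array([np.bincount(child[parent == q], minlength=Q) / (parent == q).sum()
                for q in range(Q)])

# Guessing a single quintile per person is mostly wrong...
point_accuracy = (est[parent].argmax(axis=1) == child).mean()
# ...but the estimated distribution is very close to the true one.
dist_error = np.abs(est - true_p).max()
print(f"best single-guess accuracy: {point_accuracy:.3f}")   # around 0.3
print(f"largest error in the estimated distribution: {dist_error:.3f}")
```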
I worry that people in public policy read papers / summaries like this, take it as absolute truths, and then sentence our systems to be fully manual for the foreseeable future.
Very annoying to read papers like this. It’s easy to write a paper pointing out that things are wrong. It’s also unhelpful. Do I care about your opinions that this is a bad idea, if predictive optimization is still the best idea we’ve got?
It’s much more useful to create a better alternative and show people how to improve.
> It’s easy to write a paper pointing out that things are wrong. It’s also unhelpful.
It's not unhelpful at all. If you're a software developer, is it unhelpful if your users point out bugs? Are you suggesting that those users should fix the bugs themselves?