Is it fast enough to stop a huge economic and biological burden hitting our children? It’s sad how short-sighted the majority of people in the US are today. Small sacrifices could lead to a hugely greater quality of life for everyone’s grandchildren, but everyone is special and deserves everything they desire right now.
Letting states choose how they hit their CO2 allotment is totally fine as long as there is a universal per-capita CO2 goal for the whole country. If there is no cap, then self-serving states will simply emit as much as is maximally profitable.
I think autonomous cars are something that must happen, and the sooner the better, but I am also a bit concerned that they could represent the ultimate weapon if a malicious actor gained control over them.
One battery fire isn't a problem, but if every 2nd car in a city or country crashes into something in a way that starts a fire, that becomes a very big problem extremely quickly.
Watch any video on YouTube that shows what happens when a cellphone battery is punctured.
Then scale that up and imagine a scenario where one idiot on drugs drives one into a supermarket or mall entrance and the car's battery gets punctured. After that, come back and tell me you still think 100 casualties from a single EV is far-fetched.
Gives me very low confidence that these people understand society at all. And they’re supposed to be the gatekeepers? According to… themselves I’m guessing?
Ah yes, a pompous letter where we list all of our impressive formal accomplishments will surely get these rubes to take their hands off of our toys.
The only reason they are the gatekeepers is that they are the ones with all the GPUs. OpenAI seems to be counting on that as their moat against all the plebs who want to use GPT-3 for "unapproved" purposes, at least.
I think they will try to ban us from the technology, or force us to be registered to use it on their system under supervision. My guess is that the future will involve thumb-printing these models in some way to determine where they came from and who developed them.
Good luck. This will require a ban on unlicensed general-purpose computation - which, incidentally, has more and more pieces dropping into place thanks to the wonders of DRM.
IMO, for far more legitimate reasons than many attempted cancellations I've seen in the past. The guy was the driver behind many initiatives, but on a personal level I'd no longer want to meet him.
There has been some easy to find coverage of all the recent controversies. This video [0] is a decent starting point, but it's really easy to find relevant information by plugging in a few keywords into any search engine.
Overall, he's not a good fit in a leadership position given what others have been seeing. I don't care enough to call for him to be cancelled, but I would support removal from any public speaking position at the FSF. If they can find someone equally hardline on software freedoms but more acceptable in interpersonal interactions, he should go immediately.
> What made you feel this way, exactly?
There wasn't any single thing that threw me off (except maybe personal hygiene habits), but the sum of everything that's going on crossed my "this is OK" threshold.
True, these people circumvent the law rather than play by it. But clearly they're doing their best to wall us off and push us toward the registered, paid model that I mentioned.
Funny, I remember warning people on various subreddits many months ago that there is no risk-free 10-20% investment and asking where this yield was originating from. Let’s just say people weren’t very receptive. Greed is a hell of a drug.
Even in games, composition is much better, and it’s what pretty much every engine pushes you toward these days. It’s a lot more flexible too: you can compose a new boss out of ten different components rather than having to refactor a crazy tree of nonsense.
I disagree. Teaching people to think about abstractions wrong is not in any way helpful. Everyone I know, including myself, had to spend years writing bad code before realizing that this way of thinking is counterproductive. It reminds me of this quote from Kung Pow:
Don’t worry about Wimp Lo, he’s an idiot. We’ve purposefully taught him wrong… as a joke.
Composition is an infinitely better method to teach if you have to teach OOP. That way you don’t need to unlearn what you thought was the right way but is actually useless at modeling almost anything.
I'm so sick of this "composition is so much better than inheritance" talking point that seems to have become dogma in the last 5 years. You use both. They are completely different things.
Look at a classically inheritance-based system: a GUI library. You'll see lots of inheritance. Buttons and Checkboxes extend ClickableThing, ClickableThing extends Control. Whatever.
And then you compose those controls on a Form.
Look at a classically composition-based system: a video game entity-component system. You'll see Monsters composed of GraphicsObjects and Animations and WeaponSlots and AIAgents.
And you implement all of those components in an inheritance hierarchy, or else your GameObjectThinger can't have a list of them that can grow or shrink at runtime.
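For what it's worth, here's a minimal Python sketch of that shape (Entity, GraphicsObject and WeaponSlot are made-up names for illustration, not from any particular engine): the components form a small inheritance hierarchy, and the entity is composed of a runtime-varying list of them.

    # Components inherit from a common base so an entity can hold a
    # heterogeneous, runtime-varying list of them.
    class Component:
        def update(self, entity):
            pass

    class GraphicsObject(Component):
        def update(self, entity):
            print(f"drawing {entity.name}")

    class WeaponSlot(Component):
        def __init__(self, weapon):
            self.weapon = weapon
        def update(self, entity):
            print(f"{entity.name} readies {self.weapon}")

    class Entity:
        def __init__(self, name, components):
            self.name = name
            self.components = list(components)  # composition: grows/shrinks at runtime
        def update(self):
            for c in self.components:
                c.update(self)

    boss = Entity("boss", [GraphicsObject(), WeaponSlot("flamethrower")])
    boss.components.append(WeaponSlot("laser"))  # add a component on the fly
    boss.update()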
So keep saying "teach composition, not inheritance". You might as well tell musicians to study "rhythm, not tempo".
Abstractions are useful because they simplify, and if you don't allow error you don't allow maximal simplification. You can formally relate this to the complexity of the learning problem formulation. When you do, you run into things like branching factors, which affect solution times, and you get natural results like "fast but finishes" beating "optimal but never terminates". This can hold even despite error in the abstraction, and there are techniques for recovering that error at runtime, because a particular problem is less general and more specific and so isn't as cursed by the branching factor.
Edit: A previous version of this comment was more wordy in stating this and mentioned that there are formal proofs to this effect.
Take the simplest problem of searching a continuous range between zero and one exhaustively. Let the value you search for be 0.1. By a diagonalization-style argument we can see that even a search carried on for an infinite amount of time and granted an infinite amount of space would not terminate. After all, to find 0.1 you need to make it past 0.01. But to reach that you have to reach 0.001. And so on. Now let an abstraction map N points to 1 over this space. As N goes up, abstraction error goes up, but the problem is no longer impossible, because the non-termination argument no longer holds.

Let b be the branching factor in a game. As b goes to infinity, the game graph becomes continuous. In learning formulations under the game-theoretic school of thought you have to search the game graph to get the average of your policies proportional to your counterfactual regret. That implies walking the continuous game graph. In reinforcement learning the Bellman equations give you your values, and these too are defined in terms of the game graph. So again we fall prey to the problem. We've shown that walking continuous things is impossible even under ideal conditions - infinite speed and memory - and we've shown that this relates to the learning problem formulations I spoke of.
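To make the first half of that concrete, a minimal Python sketch (the grid is my own toy abstraction, not from any paper): you can't enumerate the reals in [0, 1], but once you abstract the interval to a finite grid the search trivially terminates, at the cost of only locating the target to within the grid's resolution.

    def grid_search(target, n):
        """Search an n-point abstraction of [0, 1] for target.
        Returns the closest grid point; error is at most 1 / (2 * (n - 1))."""
        return min((i / (n - 1) for i in range(n)), key=lambda x: abs(x - target))

    for n in (11, 101, 1001):
        found = grid_search(0.1, n)
        print(n, "grid points ->", found, "error", abs(found - 0.1))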
Obviously this only gets worse when we impose reality: we don't actually have infinite space on our computers, and they don't compute for an infinite amount of time either. Notice that before, we were granted infinite space and time and still failed. When we move down to finite space and finite time we at least regain the property of completing after full enumeration - but we have a finite amount of computing capacity, a finite amount of storage, and the growth rate of unabstracted game trees is exponential. Let c be our computational budget.
An abstraction that maps n states to one turns a game with X states into one with roughly X/n states. Since X/n < X, there exist terminating algorithms on the abstracted game that do not terminate on the unabstracted one: whenever the budget c satisfies X/n <= c < X, the abstracted problem fits inside the budget and the original does not.
This actually understates the real-world gains. Since in learning we compute policy expectations many times over the game graph, and the convergence guarantees depend on the complexity of the graph, you get a much more worthwhile window than just the difference between X/n and X. For much tighter bounds, check out the game theory research; they get error bounds on the abstraction error too, by choosing clustering solutions with provable properties. So it really has a much stronger formal treatment than you might imagine.
You are talking about game theory without any precise definition of "abstraction", which it seems one can define to be whatever one wants it to be.
One abstraction that is very useful is linear algebra, which is an abstraction without errors. The same goes for category theory. Grothendieck's work wasn't about tolerating errors in abstractions either. Simple abstractions like generic containers are also not about ignoring errors.
Honestly, the random segues to diagonalization arguments, game theory, continuous functions, RL and Bellman equations (WTF!) sound like stream-of-consciousness ramblings and an attempt at "out-jargoning", much less a "formal proof". Reminds me of this story by Tadelis.
"We use Lagrange multipliers," one of them said. And for a second, Tadelis was astounded. What? Lagrange multipliers? But Lagrange multipliers don’t have anything to do with ..."Then it hit me," Tadelis recalled. "This guy is trying to out-jargon me!"
If error in abstraction is never right to teach, as the OP claims, then you're saying that computation using numbers shouldn't be taught. It literally doesn't matter what programming language you use; every single one of them has this fundamental problem, because Turing machines have this problem. They aren't defined for the reals; they are defined for the computable numbers. So you have an abstraction with error in it, and you were taught it. You can go on for years with this not biting you, but eventually it will and you'll have to adjust - maybe switching to arbitrary precision in a case where precision matters, maybe switching to floating point when you realize you have too much data to fit in storage.
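A minimal Python illustration of that trade-off (my own toy example, nothing from the thread): the float abstraction carries error, and switching representations just moves the error or the cost somewhere else.

    from fractions import Fraction

    # The float abstraction of the reals carries error...
    print(0.1 + 0.2 == 0.3)                     # False
    print(sum(0.1 for _ in range(10)))          # 0.9999999999999999

    # ...arbitrary-precision rationals remove it for these cases,
    # but cost more time and memory, and still can't represent e.g. sqrt(2).
    print(Fraction(1, 10) + Fraction(2, 10) == Fraction(3, 10))   # True
    print(sum(Fraction(1, 10) for _ in range(10)))                # 1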
You're claiming that if someone points out that abstractions with error - which are literally impossible to avoid - are useful despite the error, then they're just pretending to know things. But anyone complaining about abstractions having error as a basis for abstraction being wrong is fundamentally missing the point of abstraction.
We need abstraction. It isn't illogical to tolerate the error. It is suicide not to tolerate the error, because you won't terminate - which means you can't react. Haven't you ever wondered why people aren't purely rational? Why we think fast, and not just slow? These questions have answers. You can look at the foundations of learning in terms of graphs and see why it has to be so. I'm sorry it goes over your head, but it is fascinating regardless of whether others understand it. And I think it is worth sharing, because it is fundamental truth.
This is formally provable, but to put that another way - I don't care if you want to be wrong; good for you, saving yourself some time. Enjoy your day.
Having thought about this for a while - this is on you, not me.
The idea that some computer programs take too long to finish if their input is really large isn't a complicated one. You try to discredit this basic truth by complaining that I'm using jargon, but my jargon is really just vocabulary. I bring up the Bellman equations and counterfactual regret because algorithms built with respect to these things operate against the graph of the game. By invoking them, I'm not being incoherent; I'm firmly rooting my claims in the computational complexity of the algorithms.
I'm not doing this because I'm confused. I'm doing it because game graphs have very particular properties. When you add additional moves at each turn, the growth rate isn't one move more of complexity - the number goes in the exponent. So modest amounts of additional states lead to a combinatorial explosion of complexity. They make the graphs extremely big. In practice, as well as in theory, this makes it so the algorithms don't terminate before the universe is expected to.
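As a rough back-of-the-envelope of what "the number goes in the exponent" means (the branching factors and game lengths below are commonly cited approximations, not exact figures):

    import math

    # Approximate average branching factor and game length for each game.
    games = {"checkers": (2.8, 70), "chess": (35, 80), "go": (250, 150)}

    for name, (b, depth) in games.items():
        exponent = depth * math.log10(b)   # size of the naive game tree is roughly b**depth
        print(f"{name}: {b}^{depth} is about 10^{exponent:.0f} positions in the naive game tree")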
I'll give a practical example of this so you don't rant about Lagrange - but also so that everyone reading this will realize you are full of shit and only pretending to know what you claim to know.
A practical example of this is that we've managed to solve checkers, but chess is too complicated. The number of states in the graph is so high that we can't enumerate all of them in a reasonable amount of time. Go is a more extreme example than chess. Anyone who doesn't know the jargon can easily look up terms like "branching factor" and "solved checkers" and "solving chess" and "solving go". They'll quickly find that you were misleading others with regard to computational complexity not being relevant to whether error in learning is reasonable.
We need to do something to make the problems easier in order to make progress. So we do. Sometimes, when we are lucky, we can use perfect abstractions that make things simpler. One thing you seem to think I'm saying, but which I'm definitely not, is that perfect abstractions don't exist. That is you being confused about what I'm saying; it isn't something I've claimed. What I'm claiming is that sometimes perfect abstractions aren't enough. Again, chess makes this point. You can get some perfect abstraction by exploiting things like rotations in the endgame tables, yet this doesn't save you from the game tree being enormous. Despite having perfect abstractions, we still use approximation when we try to learn the best thing to do in a chess position.
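A toy sketch of that kind of lossless abstraction (my own illustration, not how real endgame tablebases are built): collapse positions that are rotations or reflections of each other onto one canonical representative, which shrinks the table without introducing any error.

    def rotate(board):
        """Rotate a square board (tuple of row tuples) by 90 degrees."""
        return tuple(zip(*board[::-1]))

    def canonical(board):
        """Pick one representative out of the 8 rotations/reflections of a board,
        so symmetric variants all map to the same table key (zero error)."""
        variants = []
        b = board
        for _ in range(4):
            b = rotate(b)
            variants.append(b)
            variants.append(tuple(row[::-1] for row in b))   # mirror image of each rotation
        return min(variants)

    a = (("K", ".", "."), (".", ".", "."), (".", ".", "k"))
    b = ((".", ".", "K"), (".", ".", "."), ("k", ".", "."))   # a rotated variant of a
    print(canonical(a) == canonical(b))                        # True: one table entry for both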
I think you don't give me enough credit. Your entire interaction with me has tried to imply that I'm just pretending to understand something in order to win an argument. You seem to think these things I'm saying are about big words, because I'm incoherent. That isn't true.
If you look up the comment chain you'll find, paraphrased, that one person said something to the effect that learning something that is wrong can be productive for someone who is learning even though it has error in it. Then another person disagreed with that claim on the basis of error existing in the abstraction and later being corrected.
So clearly, we definitely were discussing (1) abstraction and (2) learning. Fundamentally, it isn't jargon if I choose to try to make my point by talking about (1) learning theory and (2) abstraction and how it relates to learning theory. You try to act like I'm being incoherent, but really I'm just thinking from first principles. We're discussing learning and instead of thinking about it in terms of programming, a thing where there is plenty of debate about what is the right approach, I'm thinking about it from a lower level.
That means my claims are actually a lot more limited than others. More nuanced. This is in keeping with Hacker News guidelines. Our replies are supposed to be more nuanced and thoughtful as we get deeper into the comment tree. When we disagree with each other, we're supposed to be teaching each other something.
I don't think it is wrong of me to think from first principles, nor to share that thinking. I can't say what the best thing to teach is, but I can say with confidence that "an abstraction has error and later gets corrected while solving more specific problems" is not enough to prove that teaching that abstraction is bad. It might be suggestive, but it isn't sufficient.
It really is the case that there exist problems which, in their full unabstracted state, are too large to solve for certain learning algorithms even if you use a perfect abstraction. That is why we do approximation at all. That you can apply approximation to the space of inputs and reduce the size of the problem means you can connect the introduction of error via abstraction to the simplification of problem complexity.
I'm not saying this to sound smart. I'm saying this because you can actually do that. It isn't a universal result - there are some learning frameworks that aren't defined with respect to a graph. That is why I'm so careful to talk about learning algorithms that do the definition in that way. Your entire railing against me for arrogant "jargon" is actually an attack on me having been cautious to not make claims that were too bold.
Frankly, I think you should consider the opposite of my point and see if you really believe it. If you believe I'm wrong, then you believe it is possible to learn without ever learning errors - that, for example, all babies ought to be able to instantly know all things, even things our society doesn't yet know. I'm not saying this to put you in that position or imply that you believe it. I'm saying it to make it more obvious that what I'm saying isn't actually controversial. That believing the opposite leads to absurd beliefs suggests I'm right that there is real benefit in being willing to learn and teach abstractions with error.
Perhaps most importantly - I linked a paper in which an abstraction with error was better than the best results we've gotten without abstraction with error in a game theory research paper. So I have an existence proof of my claims. I'm right and you're just too conceited with regard to your presumption of my idiocy to see it. If you were less interested in being mean and more interested in actually talking to people, the conversation could have been a lot more interesting.
In my estimation of our conversation, you're struggling to avoid contending with my points. You attack me because you can't contend with my ideas. You claim I'm rambling because you can't find the flaw in my reasoning. You had a preconception that I was wrong and never bothered to really engage with what I was saying. And so you've become a creature of rhetoric, attacking others' character rather than dealing with ideas as a person ought to.
> One abstraction that is very useful is linear algebra which is an abstraction without errors.
Bullshit. Your claim that there is no abstraction error in linear algebra (when run on computers - we're in subdiscussion related to programming) is false.
Computers can't represent all numbers [1]. They can only represent the computable numbers in theory, and even then only a subset of the computable numbers can actually be computed in practice. Therefore, there is abstraction error in linear algebra on computers. This isn't at all a theoretical thing; it regularly has implications for algorithm design. You ought to have known this, because the truth about floating point numbers has been well publicized [2].
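To be concrete, here's a small NumPy sketch (my own illustration, not something from the thread): solving a mildly ill-conditioned system in float64 already loses most of the digits, even though the algebra is exact on paper.

    import numpy as np

    n = 12
    # Hilbert matrix: exactly defined, notoriously ill-conditioned.
    H = np.array([[1.0 / (i + j + 1) for j in range(n)] for i in range(n)])
    x_true = np.ones(n)
    b = H @ x_true

    x = np.linalg.solve(H, b)           # exact in the linear-algebra abstraction...
    print(np.max(np.abs(x - x_true)))   # ...but far from exact in float64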
> Same goes for category theory. Grothendiecks work wasn't about tolerating errors in abstractions either. Simple abstractions like generic containers are also not about ignoring errors.
You're just ignoring that these things, when implemented on computers, actually do have error, because you find it convenient. You're also apparently not self-aware enough to realize that this agrees with my central premise. Think about your own thinking and you'll notice that this failure is indicative of your own mind's belief in wrong abstractions being right - otherwise you wouldn't have been able to make this error.
> Honestly, the random segues to diagonalization arguments, game theory, continuous functions, RL and Bellman equations (WTF!) sound like stream-of-consciousness ramblings and an attempt at "out-jargoning", much less a "formal proof". Reminds me of this story by Tadelis.
We're talking about learning as it relates to abstraction. If you can't see why a learning problem formulation is relevant to discussion about whether learning an abstraction is appropriate that says a lot more about your reasoning than it does my articulation.
As a reminder the OP said:
> Teaching people to think about abstractions wrong is not in any way helpful. Everyone I know including myself had to spend years writing bad code before realizing that this way of thinking is counterproductive.
I've disagreed with the claim that teaching an abstraction with error that later has to be unlearned is always bad. I've shown that in learning problems, teaching a bad abstraction can actually be good. I've given proofs to the effect that there are cases where a bad abstraction yields algorithms that terminate in situations where a perfect abstraction yields algorithms that don't. I've given you links to mathematics showing that, given a bad abstraction as a starting point, you can produce agents that outcompete agents using no abstraction at all. The paper in question takes a bad abstraction and improves upon it when confronted with a more specific problem.
> "We use Lagrange multipliers," one of them said.
Hacker News guidelines call for more thoughtful points, not less thoughtful points, as threads get deeper. They also call for assuming good faith and for increasing nuance [3]. What you are doing here isn't that. At the risk of stating something painfully obvious, talking about random things other people said that you found dumb isn't actually relevant to the discussion. This is a subthread on the topic of learning and abstraction. I'm talking about both topics. You aren't.
> random ramblings
I'm sharing something deeply counterintuitive but true because I think people might find it interesting: the right algorithm plus correct data can be worse than the right algorithm plus wrong data. This is deeply counterintuitive and I find it fascinating, but it falls out of the formal definitions of utility under multiple different learning frameworks. Something isn't bad just because it has abstractions with error in it, and teaching an abstraction with error in it isn't bad either. It isn't even bad when you can later find the error: because you're then in a more specific situation, the learning problem changes. In the general case it was too hard to compute the unabstracted best thing to do, but when you get more specific there are fewer states, the learning problem becomes more tractable, and you can reduce the error in your abstraction.
A person might try to respond to this point by claiming arbitrary precision numbers exist. That person is lying to themselves. They don't exist; the abstraction error is just in a different place. Go back to the definition of Turing machines and observe again that not all numbers are computable - we are in an abstraction subject to error, and you can't escape this while staying in the Turing framework [1].
> Bullshit. Your claim that there is no abstraction error in linear algebra (when run on computers - we're in subdiscussion related to programming) is false.
The fact that you had to change my statement by adding "computers" to it already makes it clear that you know my statement is correct. Don't put words in my mouth in order to falsify statements I never made. You also failed to realize that the tweaked statement you put in my mouth is still correct!
You fail to realise that we can prove theorems about linear algebra using computers without actually using floating point numbers. Linear algebra can be infinite-dimensional and need not be defined over the complex field, and this doesn't introduce any errors into the abstraction or the theorems being proved, with or without computers. Your assumption that linear algebra on computers is exclusively about floating point number crunching tells me that you don't understand what abstraction linear algebra represents. Linear algebra is the study of linear maps. This is a good book for you to get started [1].
You also failed to address the gazillions of abstractions that don't have errors, some of which I listed along with linear algebra. For those familiar with proofs, just one counterexample can prove your sweeping statement "all abstractions have errors" false. We are discussing math, not physics. And if you want to restrict yourself to computable functions, modular arithmetic with groups, rings and fields suffices as a counterexample.
It isn't even clear what you mean by abstraction at all.
> We're talking about learning as it relates to abstraction.
The OP was not discussing learning at all. Your segue into machine learning concepts is completely unrelated to the topic being discussed. So is game theory. I am familiar with virtually all the topics you are discussing, so you can skip the citations. I find no coherence to any of your segues.
> You're just ignoring that these things when implemented in computers actually do have error, because you find it convenient.
> I am familiar with virtually all the topics you are discussing, so you can skip the citations.
I really can't skip them, because other people might believe you if I don't, and I don't hate them; I want them to know the truth. So I'll oppose you strongly, for their sake, so they can discriminate between my counterintuitive truth and your rejection of it out of confusion.
Go read page 173 of Artificial Intelligence: A Modern Approach. I'll quote Norvig here. "Because calculating optimal decisions in complex games is intractable, all algorithms must make some assumptions and approximations." Now go to page 172. I'll quote Norvig again. "One way to deal with this huge number is with abstraction: i.e. by treating similar hands as identical. For example, it is very important which aces and kings are in a hand, but whether hand has a 4 or a 5 is not as important, and can be abstracted away."
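A toy sketch of the kind of card abstraction Norvig is describing (my own illustration, not a real poker solver's bucketing): hands that differ only in their low cards collapse into the same bucket, so the learner sees far fewer distinct states, at the cost of treating a 4 and a 5 as interchangeable.

    from itertools import combinations

    RANKS = "23456789TJQKA"

    def bucket(hand):
        """Lossy abstraction of a 2-card hand: keep only which 'big' ranks
        (T or above) it contains; all low cards are treated as identical."""
        return frozenset(r for r in hand if RANKS.index(r) >= RANKS.index("T"))

    hands = list(combinations(RANKS, 2))             # 2-card rank combinations, suits ignored
    buckets = {bucket(h) for h in hands}
    print(len(hands), "distinct hands ->", len(buckets), "buckets")
    print(bucket(("A", "4")) == bucket(("A", "5")))  # True: the 4-vs-5 distinction is abstracted away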
But, the discerning might ask, what of the talk of the infinite? Why does Josh speak of such an absurd thing? Isn't it irrelevant? It is not. Go to page 611, "Non-Cooperative Game Theory". I'll quote him again for you: "With this observation in mind, the minimax trees can be thought of as having infinitely many mixed strategies the first player can choose." The thing to notice in this quote is that we have a very simple game, yet Norvig just explained that even this simple game has the quality of a tree of infinite size. This growth to infinity is actually very normal - mixed strategies are continuous, and we have proofs that mixed strategies are the solution for a variety of games involving imperfect information.
> It isn't even clear what you mean by abstraction at all.
I'm using abstraction in the sense of "blueprint abstraction" from game theory. It is basically compression of the input to your learning algorithm. There is lossless compression - a perfect one-to-one abstraction. There is also lossy compression - given one compressed form, it could correspond to any number of uncompressed forms. Abstraction with error is then compression with error. What I was trying to prove, and still believe I have proved, is that some algorithms, when given an input of unbounded size, have the property of not terminating. What I then tried to show was that abstraction breaks the proof of non-termination, because it breaks the core assumption of the diagonalization - that there is a one-to-one mapping. So the proof of non-termination doesn't hold.
> I am familiar with virtually all the topics you are discussing, so you can skip the citations.
I literally linked to a paper that uses abstraction in the sense I meant. So maybe you shouldn't skip the citations? Clearly you don't know the fields as well as you think you do.
> I find no coherence to any of your segues.
It isn't a segue; it is what you asked for. I gave you a proof that abstracted learning problems can terminate where unabstracted problems don't.
> You are completely out of touch [2][3]
This is such an ironic statement. Here we were, discussing which abstraction techniques we should teach when teaching computer programming, and you're complaining that I'm the one who is out of touch when I say computers use abstractions. They definitely do. In fact, they use abstractions with error.
> You fail to realise that we can prove theorems about linear algebra
Obviously I realize we can prove theorems. If I didn't realize it was possible to prove things, I wouldn't claim things were provable. Your claim contradicts my previous statements; your picture of me isn't consistent.
> You also failed to address the gazillions of abstractions that don't have errors, some of which I have listed along with linear algebra.
Why should I have to prove that abstractions without error don't exist? They do exist; I never claimed they didn't. My point was that abstractions with error reduce computational complexity, which gives them room to outcompete perfect abstraction on sufficiently complex problems. It honestly seems insane to me not to believe this, because it is true; I can't fathom how the concept would be impossible to grasp. Showing that we regularly use error-filled abstractions matters more than demonstrating something I never intended to show.
> The OP was not discussing learning at all.
I literally quoted him saying there was never a reason to teach bad abstractions. Teaching is related to learning, and bad abstractions are still abstractions. So abstraction and learning were both topics of discussion. Even the original post we're under is about teaching programming technique, which is about learning, because of the relationship between teaching and learning. It is also about abstraction, because problem modeling is very much about abstraction.
> "all abstractions have errors"
Ctrl+F shows no instance of this except you saying it. What do you think I'm claiming? I'm genuinely confused; you seem to think I'm saying something I'm definitely not saying.
Notice how, when I paraphrased you, I used parentheses and explained my reasoning for why I believed that was your claim? You, meanwhile, misquote me. Strict quotes imply actual attribution, but I never said what you claimed I said.
I don't want to talk to you anymore. I very much don't appreciate your comparison to someone who just makes stuff up. I found that very rude and insulting. A kind person would help correct me if my reasoning was wrong and I would appreciate it, but you haven't done that. When you quoted me, it wasn't even something I said or tried to argue. If you were trying to make someone have a worse day, congratulations, you did.
>>>> One abstraction that is very useful is linear algebra which is an abstraction without errors.
>>> Bullshit. Your claim that there is no abstraction error in linear algebra (when run on computers - we're in subdiscussion related to programming) is false
>> You fail to realise that we can prove theorems about linear algebra
USING COMPUTERS
> Obviously I realize we can prove theorems. If I didn't realize it was possible to prove things
You have been arguing in bad faith, either by putting words in my mouth ("when run on computers") or by deleting key phrases from my reply ("USING COMPUTERS" - reinserted by me).
These two statements of yours contradict each other:
"Bullshit, Your claim that there is no abstraction error in linear algebra (when run on computers - we're in subdiscussion related to programming) is false
"
"
Obviously I realize we can prove theorems" -----> USING COMPUTERS
It is clear that you thought of computers as IEEE floating point number-crunching machines, with implicit floating point errors. All of your arguments rested on this irrelevant point. You even condescended to teach me about floating point using citations not needed by anyone who has attended CS101.
You failed to realize that finite-sized representations of computable real numbers exist, by definition [1]. One such representation could be a finite sum over a surd basis instead of a binary expansion, e.g. sqrt(2) instead of 1.414..., and the most general form is a theorem prover like Lean.
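For what it's worth, a small Python illustration of that exact-representation point (sympy here is just a convenient stand-in for a surd/symbolic basis; it wasn't mentioned above):

    import math
    import sympy as sp

    # Binary/floating-point basis: sqrt(2) is truncated, error appears.
    print(math.sqrt(2) ** 2 - 2)     # ~4.44e-16, not 0

    # Surd/symbolic basis: sqrt(2) is a finite, exact object, error is 0.
    r = sp.sqrt(2)
    print(sp.simplify(r ** 2 - 2))   # 0, exactly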
All of these misunderstandings arise because you failed to grasp the gist of the Church-Turing thesis: everything that you do with your brain and paper can be duplicated on a computer.
In any case, I am glad you learned something today. Computers don't relate to math via IEEE floating point numbers; the connection is a lot deeper [1]. You won't acknowledge this, but frankly speaking, I can't stand out-jargoning pretending to be an honest discussion. I gave you an opportunity to correct yourself with my first comment, but you only doubled down - more jargon, bad paraphrasing of diagonalization, putting words in my mouth, and removing key phrases from my replies, among other bad-faith arguments.
"To establish that a function is computable by Turing machine, it is usually considered sufficient to give an informal English description of how the function can be effectively computed, and then conclude 'by the Church–Turing thesis' that the function is Turing computable."
Serious question. Putting aside all the stuff that basically makes me feel like you are just calling me a moron and wasting my time: what exactly do you disagree with? You keep treating the things I say as irrelevant jargon and not bothering to engage. Can you please try for a moment to explain why you think my argument - that error in abstraction is useful - is wrong, by addressing the actual premises within it? Literally, address one of these claims:
1. Do you disagree with my claim that the runtime of learning algorithms depends on the graph size in both game-theoretic and reinforcement learning problem formulations?
2. Do you disagree with my claim that abstraction reduces the number of states in the graph?
3. Do you disagree with my claim that, since abstraction reduces the number of states in the graph, the learning algorithms which run against it can complete more quickly because there are fewer states? (A rough sketch of this follows the list.)
4. Do you disagree with my claim that an algorithm which actually computes a solution can end up with a better solution than one which never finishes computing it?
5. Or if you don't disagree, can you admit that we agree on these things? Because when you just act like I'm not making any points it makes me feel like you are trolling me and being a jerk, not actually trying to talk to me.
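Here's a rough, self-contained sketch of claims 1-3 (synthetic random MDPs of my own construction, nothing from any paper): the same Bellman-backup loop finishes quickly on a small, abstraction-sized state space and takes visibly longer as the state count grows.

    import time
    import numpy as np

    def value_iteration(P, R, gamma=0.9, sweeps=100):
        """Bellman backups over the whole state space; cost grows with |S|."""
        V = np.zeros(P.shape[1])
        for _ in range(sweeps):
            V = (R + gamma * (P @ V)).max(axis=0)   # max over actions
        return V

    def random_mdp(n_states, n_actions=4, seed=0):
        rng = np.random.default_rng(seed)
        P = rng.random((n_actions, n_states, n_states))
        P /= P.sum(axis=2, keepdims=True)            # row-stochastic transitions
        R = rng.random(n_states)
        return P, R

    for n in (200, 2000):                            # stand-ins for abstracted vs unabstracted sizes
        P, R = random_mdp(n)
        t0 = time.perf_counter()
        value_iteration(P, R)
        print(f"{n} states: {time.perf_counter() - t0:.3f}s")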
I'm still just as convinced of the truth of the idea that it can be very wise to accept a bad abstraction, one that has error, rather than a perfect abstraction. I can't even fathom how to go about the opposite. How would a child go from knowing nothing to knowing everything perfectly without moving through areas of bad abstraction along the way?
Waiting for your rebuttal. Worth noting that the problem isn't theoretical. We actually run into this "we can't compute it fast enough" problem in practice:
- When we tried to solve chess we couldn't, the branching factor was too much.
- Go, it was horrendous there too.
- Poker, terrible there too.
But you want to dismiss me on the basis of jargon, right? So here you go: Bellman coined the term "curse of dimensionality". Combinatorial explosions happen because of branching factors in game graphs. Computational complexity for many learning algorithms is defined with respect to this graph, in both time and space. Because the games get so big, the curse of dimensionality forces problem relaxation. I used ~words~. I must be an idiot. Feel free to dismiss me, I guess. I heard you heard someone else use words once and they were ~wrong~.
Hey wait a second. You're using words too. Does that mean everything you say is wrong?
> Everything that you do with your brain and paper can be duplicated on a computer.
Can you stop pretending I'm talking about things that are computable when I'm talking about things that don't terminate? When someone says that computation is only defined for the computable numbers, responding with the claim that they don't understand "that finite sized representations of computable real numbers exist" is honestly either stupid or malicious.
I made the claim that problem simplification through an abstraction that has error can reduce computational complexity, leading to improved solution quality. You responded by talking about people who talk about Lagrange multipliers. I found that extremely insulting, and I still find you extremely insulting. I don't think my claims are that hard to understand; I find your intentional misinterpretation of my points annoying.
I despise that you lied about whether we were talking about learning and abstraction. I don't like talking with people who blatantly lie. I consider lying bad.
I also dislike that you misquoted me. You put quotes around words I didn't say. I didn't do that to you, and you lie when you say that I did. I gave you my interpretation of what I felt you were claiming, and I explained why I felt that way - and it wasn't placed under the quote symbol. Yours was inside quotes; yours was a lie. Mine had your original quote, unaltered, with my interpretation below it. I was in error in that interpretation; I admit that. I was trying to get at the heart of my point - that erroneous abstractions aren't inherently bad. Outcome error is much more important than input error.
I don't agree that you've taught me anything - you just call me incoherent because what I'm saying is true and you employ motivated reasoning to avoid having to refute it. If you actually understood what I'm saying - which you obviously don't, and that's a big part of the problem here - you would agree with me. Or at least, I think you would.
That simple problems are easier to solve and sometimes an actual solution is better than no solution really isn't that complicated a thing. Or controversial. I'm sure plenty of people understand it.
There are so many times in life where my point holds. The use of floating point is one. Perhaps you didn't notice that ML engineers frequently choose to move from float64 to float32 to float16 to float8? Perhaps you didn't notice that services all throughout the computing industry choose to meet an SLA, minimizing latency sometimes at the cost of optimal solutions whose computation isn't realistic within their computing budget? I don't know. But you're definitely not teaching me anything; you're just not understanding me. So this conversation is pointless.
I feel you are being mean. I'd rather we stop talking about this altogether if we're not going to actually engage with each other on the topic under discussion.
Yes, they were. They were discussing teaching, which is related to learning.
Moreover, the person they were responding to was also talking about learning; they talked about how it can be good to learn an abstraction that isn't perfect.
Even the article is about whether we should teach one abstraction or another.
> You are completely out of touch
Gaslighting is abusive. Stop abusing me.
> I am familiar with virtually all the topics you are discussing, so you can skip the citations. I find no coherence to any of your segues.
This is an argument from authority, and it is fallacious. If my segue is incorrect, you need to show it from the structure of the argument, not by appealing to your own authority, which is irrelevant.
You might be great; I'm not saying you're not. I'm sure you're intelligent and smart and witty and cool. But who we are doesn't matter. We're irrelevant. The ideas are all that matter.
If I'm wrong, why isn't chess solved? Why do we approximate solutions? Meanwhile, why is checkers solved? Why do games with simpler graphs get solved perfectly while more complex games with more complex graphs don't? Please back up the ideas you would be advancing if you actually disagreed with what I claim is provable. If you actually understood my point, you would know that the opposite of my point isn't "perfect abstractions exist" - it is that the runtime of specific learning algorithms isn't correlated with their input size. You're trying to get me to defend conclusions I didn't make, treating me like I'm stupid and comparing me to people who are rambling. You are being abusive. Lying. Gaslighting. Attacking me as a person rather than my ideas. Appealing to authority.
I'm sorry that I assumed you were referring to linear algebra as used as an abstraction in the sense I meant - I'm talking about computational abstractions: stuff with many dimensions, compressed into fewer dimensions. That's what happens when we do linear algebra on computers in practice. I thought it was reasonable to point this out, because my claim is closer to "abstractions with errors in them can be useful" than to "abstractions without error don't exist". But I suspect you do understand me on this, because you seem desperate not to admit it - to pretend we can represent all things in finite space when we can't, because the infinite things can't all be represented in finite space. And it seems to me the only reason you would be so desperate not to admit this, and to throw up so much confusion about it, is that you do understand me, and you know that if you concede that point, you concede that I'm right. So I think you already know I'm right, and you're just being mean intentionally. I think I probably offended you; that best explains the inconsistency in your reasoning. So I'm sorry about that - and I assume it was sometime in the past, because my first replies didn't deserve your malice.
Or go try to solve anything moderately complicated without allowing yourself abstractions with error. I won't wait on you, because your solutions won't terminate in my lifetime even for something as simple as chess, which is a hell of a lot simpler than reality.