
Please read the entire post and respond to that, rather than the tidbits that help you gain argumentative advantage. I have repeatedly said that the implementation was NOT the point and that all performance metrics are negligible, as they are under O(N). Either way, here's a response:

Honestly, regular expressions are, in general, one of the fastest implementations of string parsing available for this specific type of parsing. You would have to understand computing theory to see this. Since you didn't even bring it up, the fact of the matter is you don't have a good model of computing in your head.

The issue you're seeing here is Python-specific. So really, the only way to see this kind of thing is through benchmarks OR knowing the Python source.

> See how having a reasonable mental model can be nice?

The problem is that your posts also indicate to me that you don't have a good mental model. From what I can make out, the model is this: abstractions are bad, use fewer of them for more speed, and also know SSDs and CPU architecture because that will help you write faster Python code.

Then what you do is run benchmarks and rely on those in place of an actual working mental model. Believe it or not, if you CAN run benchmarks on every permutation of a code path, you can forgo a model altogether. Evidence is more reliable than a model, yet your genius advice was for me to learn CPU architecture.

>a) the actual string manipulation would form a negligible part of the total runtime b) regexp would not be faster (and the actual fast way to do it would be str.translate)

Two things about this. First, str.translate is Python-specific. No general mental model would assist you with it; you are using Python-specific knowledge here.
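
For reference, and to underline how Python-specific this is, str.translate works roughly like this. A minimal sketch, assuming the task is character-for-character replacement (the exact mapping here is made up for illustration):

    # str.maketrans builds a per-character mapping table;
    # str.translate applies it to the whole string in a single pass.
    table = str.maketrans({"{": "[", "}": "]", ":": ","})
    s = '{"1":"2", "3":"4", "5":"6"}'
    print(s.translate(table))  # ["1","2", "3","4", "5","6"]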

The second part is similar. How do you know eval would be non-negligible? Theoretically, the interpreter interprets Python code and eval interprets the same thing. Is eval/str accessing heap space or stack space? What is causing it to be slow, or is it the extra parsing itself?

Likely you don't know.
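
If you actually wanted the answer, the only honest route is the one I keep pointing at: measure it. A rough timeit sketch, with a hypothetical workload (round-tripping a small dict through its string representation):

    import timeit

    d = {"1": "2", "3": "4", "5": "6"}

    # eval(repr(d)) re-parses the literal; dict(d) just copies it directly.
    via_eval = timeit.timeit(lambda: eval(repr(d)), number=100_000)
    direct = timeit.timeit(lambda: dict(d), number=100_000)

    print(f"eval(repr(d)): {via_eval:.3f}s")
    print(f"dict(d):       {direct:.3f}s")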

Either way, my example served one purpose. If you understood it, you would know that the theme was basically that moving through another type space can have advantages over moving through the original type space. The string functor was just an example.

I could easily say that the goal was to convert:

   {"1":"2", "3":"4", "5":"6"} to {6:1, 2:3, 4:5}
Or essentially the ordered dict, rotated. Tell me, in which space is a rotation more intuitive? A list space. [1,2,3,4,5,6] is more readily rotated into [6,1,2,3,4,5] with one operation and converted back into a dict.

If you tried to do the above directly by manipulating the dictionary, it would not be as straightforward. Use a functor to lift the dict into a list, do the rotation, and lower it back down to a dict, as sketched below.
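
Concretely, a sketch of that lift/rotate/lower round trip (the helper name is mine, not from any library):

    def rotate_dict(d):
        # Lift: flatten the dict into its key/value sequence.
        flat = [x for pair in d.items() for x in pair]
        # Rotate the flat list by one position.
        rotated = flat[-1:] + flat[:-1]
        # Lower: re-pair the sequence and build the dict back up.
        return dict(zip(rotated[::2], rotated[1::2]))

    print(rotate_dict({"1": "2", "3": "4", "5": "6"}))
    # {'6': '1', '2': '3', '4': '5'}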

That is the point and the pattern I am trying to convey. We can argue about benchmarks all day; it serves nothing.


