We can eliminate entire classes of attacks by using certain languages and technologies.
Simple, forgotten example: You can't just patch a running OS kernel from an application program anymore. There's hardware in the CPU called an MMU, or Memory Management Unit, which inspects every attempt to access memory, read or write, and checks it against a policy the kernel sets, one that aims to disallow all unsafe memory accesses. (This is like nine kinds of oversimplified, but it's not actually wrong...) The MMU will alert the kernel if an application program attempts to access memory in a way contrary to policy, and the kernel typically kills the program dead right there. That's what a segfault is.
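To make that concrete, here's a minimal sketch of the policy in action. The specific address is arbitrary (and deliberately invalid); the point is just that the process never gets to touch memory it has no mapping for:

```c
/* Demo: the MMU enforcing the kernel's policy.
   Writing through a pointer to an unmapped page triggers a hardware
   fault; the kernel's default answer is to kill the process (SIGSEGV). */
#include <stdio.h>

int main(void) {
    int *p = (int *)0x1;   /* no mapping at this address in our process */
    *p = 42;               /* MMU faults here; the next line never runs */
    printf("unreachable\n");
    return 0;
}
```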
My point is, prior to the MMU, the only possible response to "Applications can modify running OS kernels willy-nilly" was "Don't do that, then." There was no way to enforce that policy. That's why MS-DOS, which ran pretty much exclusively on hardware either without an MMU or with the MMU disabled, had no effective security policy: any application could modify the only thing attempting to enforce that policy at any time, and nothing could stop it. In the immortal words of Bokosuka Wars, "WOW ! YOU LOSE !"
We take MMUs for granted now. We take OSes which use MMUs for granted now. We no longer have to rely on the care and kind nature of strangers to enforce the basic policy.
The tradeoff is speed: adding an MMU to the path to RAM inevitably makes accessing RAM slower. There's no way around it. We see it as such a rock-simple win that we've almost forgotten there even is a tradeoff, but our computers would run faster without MMUs. We've just, as a hardware and software culture, decided that it's worth it.
So. The discussion here is, "Which other tradeoffs are worth it?" Because there are other technologies we could adopt, hardware and software and a mix of both, which could completely seal off other classes of attack vectors, and we need to decide which of those technologies are worth implementing.
The proposal being made, as I understand it, is that by default CommonMark-compliant markdown should be evaluated in a "safe" mode--escaping raw HTML and whitelisting URL schemes--and that compliance with this behavior among interpreters should be enforced by the validation tools CommonMark provides. This would conceivably make CommonMark markdown an easy default way to eliminate the possibility of XSS on sites that accept user input in the form of comments, etc.
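For a sense of what the whitelisting half of that "safe" mode boils down to, here's a hypothetical sketch; the scheme list and the function name are my invention for illustration, not anything from the spec:

```c
/* Hypothetical sketch: the scheme whitelist a "safe" mode implies.
   Anything not on the list (javascript:, data:, vbscript:, ...) gets
   dropped from link destinations before HTML is emitted. */
#include <stdbool.h>
#include <string.h>
#include <strings.h>   /* strncasecmp */

static const char *safe_schemes[] = { "http:", "https:", "mailto:", NULL };

bool url_scheme_is_safe(const char *url) {
    for (int i = 0; safe_schemes[i] != NULL; i++) {
        if (strncasecmp(url, safe_schemes[i], strlen(safe_schemes[i])) == 0)
            return true;
    }
    /* A real implementation would also have to decide what to do with
       relative and scheme-relative URLs; this sketch just rejects them. */
    return false;
}
```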
Unfortunately, the conclusion to the discussion seems to have been, "although it seems to be possible to trivially eliminate XSS attacks from user input using 'safe' markdown, that's an implementation detail so it's not the job of the standard".
I only found this yesterday, and I'm still trying to understand the current stance the project is taking on this (I don't know if it has changed), but I think it underlines how important--and uncommon--it is for people to adopt an attitude of "if we can eliminate a class of attack through a formalism or technology, let's do it."
If CommonMark were to adopt this approach, it could make the security advice given for processing user input as simple as "use a CommonMark-compliant markdown interpreter in the default 'safe' mode", whereas now it seems to be, "pick the right markdown interpreter that has a 'safe' mode you can trust, be sure to configure it properly, and then for good measure take the computationally expensive step of running your generated HTML through a sanitizer."
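That simpler world is roughly what the cmark reference implementation ended up providing, though the details are version-dependent: recent releases escape raw HTML and strip unsafe link schemes by default unless you explicitly pass CMARK_OPT_UNSAFE, while older releases spelled the same idea as an opt-in CMARK_OPT_SAFE flag. A minimal sketch, assuming a recent cmark (compile with something like `cc demo.c -lcmark`):

```c
/* Render untrusted markdown with cmark's defaults. In recent cmark
   releases the default is "safe": raw HTML is replaced with a comment
   and javascript:/data: link destinations are stripped. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <cmark.h>

int main(void) {
    const char *input =
        "<script>alert('xss')</script>\n\n"
        "[click me](javascript:alert(1))\n";
    char *html = cmark_markdown_to_html(input, strlen(input), CMARK_OPT_DEFAULT);
    printf("%s", html);   /* no <script>, no javascript: href */
    free(html);
    return 0;
}
```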
Wasn't this exact conversation had decades ago regarding C and assembly-type languages? Tomorrow it's Lua vs. C, then Go vs. Java, then spoken language vs. whatever. (I'm sure I got the specific comparisons wrong.)
Tomorrow we will take terabytes & terahertz for granted.
I see no end. Old-school vs. new-school, ad infinitum.
Yes, once upon a time C shared a spot with the other systems programming languages, many of them safer than it, and on home micros it was seen as a "managed language", one that many used as a cheap macro assembler via its inline assembly extensions.
Now its compilers are praised for speed, after 30 years of optimization effort.