Until you notice that, in some indirect way, your livelihood (or that of others in your community, or some other actor with influence) does depend on that sea life. People are quick to argue for the seemingly simple fix when interacting with complex systems. It rarely works out in the long term.
Case in point: climate change. It was really nice to emit a lot of carbon for a few decades; now it is starting to cause real problems, conflicts, and costs. Technical innovations should not simply assume that negative externalities are "fine". Innovators should have to assume the burden of identifying and resolving the externalities before widespread distribution of the innovation.
> Innovators should have to assume the burden of identifying and resolving the externalities before widespread distribution of the innovation.
Based on what timescale, though? We can only work with what we know about. E.g. until Radium was deemed highly unsafe, it was deemed safe. No one paints Radium onto watch hands any more, because we discovered that it's a terrible idea.
Or, it might turn out in 30 years that CO2 has a large positive effect on re-greening the Earth, and that all those low-CO2 innovations were harmful.
In other words: I don't understand how your statement makes sense without hindsight, and I don't understand how any innovation can happen if hindsight (which depends on the innovation having happened) is its prerequisite.
> until Radium was deemed highly unsafe, it was deemed safe
You countered your own argument: Radium was never safe, it was just deemed safe because FAFO was in fashion. Lead, cane toads in Australia, cigarettes, DDT, thalidomide, deforestation, and asbestos, among (many) others, have had lasting negative effects simply because we pushed ahead without considering whole-system complexity.
> because we pushed ahead without considering whole-system complexity
No, because we didn't know better. My argument was: how long do we wait until we decide we know better? Do we stop everything? How long is long enough?
You're talking as though many of the things we currently think are the best course of action won't turn out to be terrible ideas. They will, because there is no way to account for "whole-system complexity" over a long enough time period to do things completely safely, or even to do anything at all, since almost anything might turn out to be a bad idea on a long enough timescale.
> how long do we wait until we decide we know better? Do we stop everything? How long is long enough?
I agree with your premise but not with the threshold.
To make an analogy: you give a kid a bike, and you can only teach them balance as they ride it.
Someone with a “nothing can hurt me” mentality will jump on, go down hills, jump through bushes, etc. It’s most likely that they will break bones, and possible that they will die, before learning what they need to know to be safe (enough).
Someone with an overcautious mentality may try to fully understand the physics of bike riding before venturing past the lawn. It’s likely they won’t get hurt at all, and it’s possible that they simply never ride outside the yard.
In my opinion, both are negative outcomes.
However, someone with the right mentality will experiment safely, seek to understand enough of the physics to know the dangers, and take on challenges appropriate to their level. They will probably get hurt, and might even break a bone, but they will be able to recover and keep riding.
In my opinion, if we put genuine care and attention into thinking about system complexity before and as we progress, we can avoid many of the “obvious” problems without having to stop everything. I agree that there’s no way to avoid all potential fallout, but I think we can keep the consequences to ones we can recover from if we accept that this means seeing ourselves as part of a much bigger picture.
As someone who has essentially unlimited access to clean drinking water, I might not feel the trade-off is worth it.
But if I were someone who didn’t have a superabundant supply of clean drinkable water, I’d probably say to hell with the sea life.