While climate change is a reality, backed by multiple metric tons of hard science, it is unclear whether these three mosquitoes were genuine colonizers driven by it, or had simply been carried over on a foreign cargo vessel at the aluminum smelter near where they were discovered.
For a few years now I have made an effort to first find a local (neighborhood, city or country) seller with a good reputation before even considering Amazon. In every case where I have found an alternative, I've had great experiences and no issues.
The only thing I've noticed is the very large price difference for academic/technical books between Amazon and a local book store (40+% in favour of Amazon). Bulk discounts presumably play a part in that.
There sadly isn't a single viable option for a Linux mobile phone out there.
- Purism runs ancient hardware, charges way too much and has questionable business ethics.
- Pine64 has equally dated hardware but reasonable prices. I don't like the Hong Kong connection, though, and I'm not sure what the security-patching situation is in practice.
The only option on the table as I see it is buying from the devil and installing GrapheneOS.
FuriLabs has shipped a usable device for going on two hardware releases now.
Yes, it currently builds on top of Halium. Anyone who thinks this should be a sticking point has their head in the sand; the device and the effort behind it are how you get a usable ecosystem rolling.
I have been running Graphene on a Pixel for a while now and I don't think Linux phones are a viable alternative. The vast majority of Android apps just work on Graphene, and there are millions of them. The UI experience is polished, everything just works with the exception of apps that require Google Play Integrity. And of course these projects aren't affected by Google's restrictions on sideloading.
Look, I love that GrapheneOS exists, and I have used it in the past (as I have Lineage).
But GrapheneOS lives at the mercy of Google. Pixels being reference devices makes it unlikely that Google will lock them down completely.
However, as can be seen with this verification move, Google is willing to go very far to accomplish its aims. They already delayed delivery of Android 16 images, causing GrapheneOS some headaches.
Waydroid exists, and a mobile distro that provided it OOTB would be as usable as a full-on Android phone. You could even build it to strip out the app-verification stuff if that found its way into AOSP.
Reading this gives me flashbacks of my ~22-year-old self who was, as most of us do at that age, trying to come to terms with how life, responsibilities and the experience of it all change in early adulthood.
At that age many of us read academic literature, philosophy and short-form texts from "thinkers", and then start believing we have had some deep, profound insight into life.
However, at that tender age most of us have not experienced enough of life to have those insights. We have not experienced profound love or heartbreak, nor hardship and relief: the life-changing events that put everything else into perspective. Someone that young has not yet formed a deep enough well of experience to pull from to give those ideas perspective.
For me this was 20 years ago, and were I to stand in front of that young man now, I would grab him by the shoulders and tell him to stop over-analysing everything and worrying about how life might turn out, and instead just get on with living it.
There is so much to see, do and experience, and so little time!
I notice that Mastodon is only mentioned in the article in terms of protocols, but to me the killer feature there is the absolute lack of an algorithm.
Nothing is ever pushed on me by the platform, so the whole experience doesn't become combative. That does mean though that each user has to do some work finding others they like, and that can take some time. But that also weeds out those that just want to be spoonfed content, which is a plus.
The last three years on there have been some of the most wholesome social media interactions I have had in the last 25 years.
Mastodon literally has a trending feed. Is that not an "algorithm"? It has algorithmic popular hashtags, a news feed, and user recommendations: a bog-standard handful of algorithmic surfaces. Why they are still pretending it's "algorithm free" is beyond me. "Absolute lack", right.
The Trending feature is not pushed into the home (or any) timeline. In the Web UI it sits unobtrusively in the corner of the window, and in some apps it simply does not exist. It can also be easily disabled.
In the discourse about social media, the term "algorithm" is exclusively used to refer to purposely malign algorithms engineered to addict and abuse people. Nothing about any of the Fediverse services is designed this way, because they are not chasing money or engagement; they are made to help people converse in a human way.
If you're not logged in, the evil algorithmic trending feed is literally the first thing pushed at you. (It seems to be a default setting, because it's that way across several different instances.) So what's the truth? It seems like an incoherent position to me, especially given that Mastodon itself advertises "no algorithms". That claim doesn't hold when you can immediately see algorithmic feeds; at its most charitable it's confused, at worst it's a barefaced lie.
So it's literally just "bad algorithms" (the ones other platforms make) versus "good algorithms" (the ones good platforms like us make). Which is kind of how it is: there are good ones and bad ones. Except both kinds of platform employ "bad" engagement-driving discovery algorithms, so it's really just 'us vs them'. The trending and news algorithms exist to drive engagement and discovery, and the top-hashtags feed proudly clamors about how much engagement there is. It doesn't seem like they're not "chasing" it.
You seem to be purposefully mixing the two opposing uses of the word "algorithm". On the non-abusive platforms, an algorithm is a fairly simplistic set of criteria that are designed to be useful to the human beings that use a service. If you want to, you can inspect the code used to generate them; the likes of Mastodon don't hide how these work because they aren't trying to harm anyone.
These sorts of algorithms tend to promote posts or people that have recently been popular, for the purpose of being useful to folk. On the likes of TikTok, Facebook and Twitter, they are the culmination of very large sums of money and an ocean of professional psychological consultants, with the aim of purposefully harming and addicting people: to manipulate public opinion and democracy, incite the suicide of transgender people, and abet the perpetration of genocide. For money. I find it difficult to believe that you're arguing, in good faith, that the two types of "algorithm" have much in common.
I am not sure how it is "evil" to show recently popular posts on a social media server's home page to logged-out people, or how that's pushing anything. It's not an agenda; it's not a series of posts picked because they are likely to addict and enrage people. I do suspect some ragebait shows up, because some people are still having to unlearn the indoctrination they're suffering from.
It would be totally fine if people just said "bad algorithms" or "good algorithms", but somehow the meaning of the word "algorithm" on its own got so twisted that it apparently just means "bad". That looks idiotic once you realize that algorithms are everywhere, even on the platforms that claim to be "no algorithm" or "algorithm-free" or whatever other meaningless, duplicitous marketing drivel they dress it up in. From where I sit, it's other people who are purposely mixing the meanings, while also overlooking how some arbitrary "good" or "unremarkable" things quietly get a pass despite being functionally the same thing. Almost to the point where you could just advertise "no algorithms" (whatever that means), have algorithms anyway, and it's kind of whatever.
It's not "evil" to be showing an algo feed per se. But mastodon and a bunch of other platforms refer to algo feeds as "bad/evil" or something of the sort, market themselves as not having them, and yet thoroughly employ multiple algo feeds. Is that not just hypocritical? It looks glaringly dishonest. They could at least have some integrity to say "we don't like the yucky algorithms, but here we only have good™ algorithms", when that's literally what it is.
> In the discourse about social media, the term "algorithm" is exclusively used to refer to purposely malign algorithms engineered to addict and abuse people.
But I feel like it misses the point. What about a service where you can design and use your own "algorithms", and it's built into the platform?
Such a platform would have thousands of algorithms, but none of them designed to chase money or engagement, just different preferences. Mastodon could still claim "we don't use The Algorithm and are therefore better than other places", while a platform with custom, user-owned algorithms could get the best of both worlds.
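As a sketch of what that could look like (purely hypothetical names, not any existing service's API), the platform itself only applies whatever ranking function each user has registered:

```python
from typing import Callable

# For this sketch a post is just a dict of attributes.
Post = dict
Ranker = Callable[[Post], float]

# Each user supplies their own ranking; the platform never picks one for them.
user_rankers: dict[str, Ranker] = {
    "alice": lambda p: p["created_at"],                 # plain chronological
    "bob":   lambda p: p["boosts"] - 100 * p["is_ad"],  # heavily downrank ads
}

def feed_for(user: str, posts: list[Post]) -> list[Post]:
    # Fall back to chronological if the user never registered a ranker.
    ranker = user_rankers.get(user, lambda p: p["created_at"])
    return sorted(posts, key=ranker, reverse=True)
```

The "algorithm" then stops being a platform-wide lever to pull and becomes a per-user preference, which sidesteps the whole good-vs-bad framing.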
I was kind of hoping it would take people longer to notice where the idea came from :) It's also kind of cheating for you to bring it up, but I understand it's hard to resist.
In this context, "algorithm" means something that gives you the endorphin hit and keeps you scrolling. Facebook is "algorithmic social media", whereas Mastodon is not.
That trending feed on Mastodon would still literally be that: ranking posts by how much they're engaged with, further driving engagement on the platform. So I'm just wondering what hairs there even are to split.
Not to mention "sort by most recent from accounts I follow" is an algorithm too.
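To make that point concrete, here is a minimal sketch (hypothetical names, not Mastodon's actual code) showing that both feeds being discussed are just sorts with different keys: a chronological home timeline, and a naive trending ranking that decays engagement over time:

```python
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    created_at: float  # unix timestamp, seconds
    boosts: int = 0
    favourites: int = 0

def home_timeline(posts, following):
    # "Most recent from accounts I follow" is itself an algorithm:
    # a filter plus a sort keyed on the timestamp.
    return sorted(
        (p for p in posts if p.author in following),
        key=lambda p: p.created_at,
        reverse=True,
    )

def trending_score(post, now, half_life_hours=6.0):
    # Naive trending: raw engagement, halved every half_life_hours.
    age_hours = (now - post.created_at) / 3600
    return (post.boosts + post.favourites) * 0.5 ** (age_hours / half_life_hours)

def trending(posts, now):
    return sorted(posts, key=lambda p: trending_score(p, now), reverse=True)
```

Both fit in a handful of lines; the disagreement upthread is really about intent and tuning (what goes into the sort key), not about whether a sort happens.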
I feel like the wording needs a bit of rework. I agree chronological order facilitates better discussions, but just saying that "Mastodon lacks algorithms" doesn't really help people understand things better.
Mastodon and the fediverse, despite not running on algorithms, sadly aren't free of spam and bots; probably nothing nowadays is. Last year in February there was a flood of messages attacking less populated instances, with spam images in the message body.
What grinds my gears after this attack is that the majority of Mastodon clients don't offer a simple way to block an instance, which would limit unwanted posts. Some don't have that feature at all.
Unfortunately, we discovered that people would rather be told what to watch than self-discover their interests, because that’s a lot of “work”.
I hope it’s not that black-and-white, and that it’s possible to have a sane social network with an algorithmic feed; we just need to design the algorithms around users’ needs first.
If you judge users’ needs by “things they’ll pay attention to and engage with”, well… that is exactly what all the current algorithms are good at. It’s just, in my opinion, bad for society at large, as rage-baiting, slop-posting and the like are great at achieving it.
I haven’t touched a lot of these cybersecurity parts of the industry, especially policy, for a while…
… but I do recall that auditing was a stronger motivator than prevention. There were policies around checking the audit logs, not being able to alter audit logs, and ensuring that nobody really knew exactly what was audited. (Except for a handful of individuals, of course.)
I could be wrong, but “observe and report” felt like the strongest security guarantee available inside the policies we followed (PCI-DSS Tier 1), with prevention as a nice-to-have on top.
As a customer I'm angry that businesses get to use "hope and pray" as their primary data protection measure without being forced to disclose it. "Motivators" only work on people who value their job more than the data they can access and I don't believe there's any organization on this planet where this is true for 100% of the employees, 100% of the time.
That strategy doesn't help a victim who's being stalked by an employee, who can use your system to find their new home address. They often don't care if they get fired (or worse), so the motivator doesn't work because they aren't behaving rationally to begin with.
This really isn’t fair. It is not simply hope and pray: it is a clearly stated and enforced deterrent that anyone who violates the policy will be terminated. You lose your income and seriously harm your future career prospects. This is more or less the same policy that governments apply to bad actors (crime happens, but perpetrators will be punished).
I get that it is best to avoid the possibility of such incidents but it is not always practical and a strong punishment mechanism is a reasonable policy in these cases.
You don't think it's fair to expect a trillion-dollar business to implement effective technical measures to stop rogue (or hacked!) employees from accessing personal information about their users?
I'm not talking about small businesses here, but large corporations that have more than enough resources to do better than just auditing.
> crime happens but perpetrators will be punished
Societies can't prevent crime without draconian measures that stifle all of our freedoms to an extreme degree. Corporations can easily put barriers in place that make it much more difficult (or impossible) to gain unauthorized access to customer information. The entire system is under their control.
Okay, how do you want to implement those technical measures? I propose that we add a checkbox, for employees to click when they have gone rogue, or have been hacked. That way, when the box is checked, we can just reject those requests as being bad/wrong/illegal. Simple as that!
There may be some details with the implementation of this, but once we've got that check box, then things will be secure.
Or maybe trillions of dollars can't change digital physics. I don't care how much money you have, you can't make water not be wet.
Facebook/Meta has shown time and time again that it can't be trusted with data privacy, full stop.
No amount of internal auditing, or externally verified, stamped-with-approval ISO-standards theater, will change the fact that, as a company, it has firebombed each and every bridge that was ever available to it, in my book.
If the data has the potential to be misused, that is enough for me to consider it not secure for use.
I just had a use case the other day: a year ago my mom sent me a photo of a handwritten recipe from my great grandmother. I only remembered asking about it, not the response, so I was happy to still have that pic in my history. Had I downloaded the pic, it would be lost among all the other crap I store all over the place. This way it was preserved with its context, and even with a voice message from my grandmother (not great grandmother) remarking on it.
I guess time will tell.