To the extent that any of this was ever true, it hasn’t been true for at least a decade. After the WiredTiger acquisition they really got their engineering shit together. You can argue it was several years too late but it did happen.
I got heavily burned pre-wiredtiger and swore to never use it again. Started a new job which uses it and it’s been… Painless, stable and fast with excellent support and good libraries. They did turn it around for sure.
A highly cited reason for using mongo is that people would rather not figure out a schema. (N=3/3 for “serious” orgs I know using mongo).
That sort of inclination to push off doing the right thing now to save yourself a headache, and pay for it down the line, probably overlaps with “let’s just make the db publicly exposed” instead of doing the work of setting up an internal network.
> A highly cited reason for using mongo is that people would rather not figure out a schema.
Which is such a cop-out, because there is always a schema. The only questions are whether it is designed and documented, and where it's implemented. Mongo requires some very explicit schema decisions, otherwise performance will quickly degrade.
Fowler describes it as Implicit vs Explicit schema, which feels right.
Kleppmann chooses "schema-on-read" vs "schema-on-write" for the same concept, which I find harder to grasp mentally, but it does capture when schema validation needs to occur.
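For what it's worth, a minimal sketch of what those explicit schema decisions can look like in Mongo itself, using pymongo and MongoDB's $jsonSchema validation (the collection and field names are invented for illustration):

```python
from pymongo import ASCENDING, MongoClient

client = MongoClient("mongodb://localhost:27017")
db = client["shop"]

# Schema-on-write: the server rejects documents that don't match.
db.create_collection(
    "orders",
    validator={
        "$jsonSchema": {
            "bsonType": "object",
            "required": ["customer_id", "total", "created_at"],
            "properties": {
                "customer_id": {"bsonType": "objectId"},
                "total": {"bsonType": ["double", "decimal"], "minimum": 0},
                "created_at": {"bsonType": "date"},
            },
        }
    },
)

# Indexing is just as much a schema decision; without it, queries on
# customer_id degrade to full collection scans as the data grows.
db.orders.create_index([("customer_id", ASCENDING), ("created_at", ASCENDING)])
```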
There is a surprising amount of important data in various Mongo instances around the world. Particularly within high finance, with multi-TB setups sprouting up here and there.
I suspect that this is in part due to historical inertia and exposure to SecDB designs.[0] Financial instruments can be hideously complex and they certainly are ever-evolving, so I can imagine a fixed schema for an essentially constantly shifting time-series universe would be challenging. When financial institutions began to adopt the SecDB model, MongoDB was available as a high-volume, "schemaless" KV store with a reasonably good scaling story.
Combine that with the relatively incestuous nature of finance (they tend to poach and hire from within their own ranks) and the average tenure of an engineer at one organisation being less than four years, and you have an osmotic process of spreading "this at least works in this type of environment" knowledge. Add the naturally risk-averse nature of finance[ß] and you can see how one successful early adoption will quickly proliferate across the industry.
ß: For an industry that loves to take financial risks - with other people's money of course, they're not stupid - the players in high finance are remarkably risk-averse when it comes to technology choices. Experimentation with something new and unknown carries a potentially unbounded downside with limited, slowly emerging upside.
I'd argue that there's a schema; it's just defined dynamically by the queries themselves. Given how much of the industry seems fine with dynamic typing in languages, it's always been weird to me how diehard people seem to be about this with databases. There have been plenty of legitimate reasons to be skeptical of mongodb over the years (especially in the early days), but this one really isn't any more of a big deal than using Python or JavaScript.
Yes there's a schema, but it's hard to maintain. You end up with 200 separate code locations rechecking that the data is in the expected shape. I've had to fix too many such messes at work after a project ground to a halt. Ironically some people will do schemaless but use a statically typed lang for regular backend code, which doesn't buy you much. I'd totally do dynamic there. But a DB schema is so little effort for the strong foundation it sets for your code.
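To make the "so little effort" point concrete, here's a rough sketch using Python's built-in sqlite3 (the table and columns are made up); the shape of the data is declared once, centrally, instead of re-checked at every call site:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE users (
        id    INTEGER PRIMARY KEY,
        email TEXT NOT NULL UNIQUE,
        age   INTEGER CHECK (age >= 0)
    )
""")

# The database rejects malformed rows, so application code doesn't have to.
try:
    conn.execute("INSERT INTO users (email, age) VALUES (?, ?)", (None, -5))
except sqlite3.IntegrityError as exc:
    print("rejected:", exc)  # NOT NULL constraint failed: users.email
```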
Sometimes it comes from a misconception that your schema should never have to change as features are added, and so you need to cover all cases with 1-2 omni tables. Often named "node" and "edge."
> Ironically some people will do schemaless but use a statically typed lang for regular backend code, which doesn't buy you much. I'd totally do dynamic there.
I honestly feel the opposite, at least if you're the only consumer of the data. I'd never really go out of my way to use a dynamically typed language, and since I'm already going to have to do something to get the data into my own language's types, it doesn't make a huge difference to me what format it used to be in. When there are a variety of clients being used, though, this logic might not apply.
If you're only consuming, yes. It might as well be a totally separate service. If it's your database that you read/write on, it's closely tied to your code.
We just sit a data persistence service in front of mongo, so we can enforce some controls for everything there if we need them, but quite often we don’t.
It’s probably better to check what you’re working on than blindly assuming this thing you’ve gotten from somewhere is the right shape anyway.
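A rough sketch of what such a persistence layer tends to look like, assuming Python and a pymongo-style collection underneath (every name here is hypothetical):

```python
from dataclasses import asdict, dataclass
from datetime import datetime

@dataclass
class Order:
    customer_id: str
    total: float
    created_at: datetime

class OrderStore:
    """The one place that knows what an order document is supposed to look like."""

    def __init__(self, collection):
        self._collection = collection  # e.g. a pymongo Collection

    def save(self, order: Order) -> None:
        # Controls enforced here, rather than in the database itself.
        if order.total < 0:
            raise ValueError("total must be non-negative")
        self._collection.insert_one(asdict(order))

    def for_customer(self, customer_id: str):
        return self._collection.find({"customer_id": customer_id})
```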
The "DAO" way like this is usually how it goes. It tends to become bloated. Best case, you're reimplementing what the schema would've done for you anyway.
The adage I always tell people is that in any successful system, the data will far outlive the code. People throw away front ends and middle layers all the time. This becomes so much harder to do if the schema is defined across a sprawling middle layer like you describe.
As someone who has done a lot of Ruby coding, I would say using a statically typed database is almost a must when using a dynamically typed language. The database enforces the data model and the Ruby code was mostly just glue on top of that data model.
That's fair, I could see an argument for "either the schema or the language needs to enforce the schema". It's not obvious to me that one of the two "only one of them does" models deserves so much more criticism than the other, though.
It's possible you didn't intend it, but your parent comment definitely came off as snarky, so I don't think you should be surprised that people responded in kind. You're honestly doing it again with the "let's stop feeling attacked" bit; whether you mean it or not, your phrasing comes across as pretty patronizing, and combined with the apparent dislike of people disagreeing with you after the snark, it reads as passive-aggressive. In general it's not going to go over well if you dish out criticism but can't take it.
In any case, you quite literally said there was a "lack of schemas", and I disagreed with that characterization. I certainly didn't feel attacked by it; I just didn't think it was the most accurate way to view things from a technical perspective.
It could be because when you leave an SQL server exposed, it often turns into something much worse. Without additional configuration, for example, PostgreSQL defaults to a setup where a superuser connection can effectively own the entire host machine: the classic example is COPY ... TO PROGRAM, which runs shell commands on the server and isn't disabled by default.
The end result is "everyone" kind of knows that if you put a PostgreSQL instance up publicly facing without a password or with a weak/default password, it will be popped in minutes, and you'll find out about it because the attackers are lazy and just running crypto-mining malware, etc.
No one. But if you aren't in the administration's good graces and something shitty happens that's unrelated to you, you've put a target on your back as suspect #1.
Because nobody uses mongo for the reasons you listed. They use redis, dynamo, scylla or any number of enriched KV stores.
Mongo has spent its entire existence pretending to be a SQL database by poorly reinventing everything you get for free in postgres or mysql or cockroach.
False. Mongo never pretended to be a SQL database. But some dimwits insisted on using it for transactions, for whatever reason, and so it got transaction support, way later in life, and only for non-sharded clusters in the initial release. People who know what they're doing have been using MongoDB for reliable, horizontally scalable document storage basically since 3.4. With proper complex indexing.
Scylla! Yes, it will store and fetch your simple data very quickly with very good operational characteristics. Not so good for complex querying and indexing.
Yeah fair, I was being a bit lazy here when writing my comment. I've used nosql professionally quite a bit, but always set up by others. When working on personal projects I reach for SQL first because I can throw something together and don't need ideal performance. You're absolutely right that they both have their place.
That being said, the question was genuine: because I don't keep up with the ecosystem, I don't know whether it's ever valid practice to have a nosql db exposed to the internet.
What they wrote was pretty benign. They just asked how common it is for Mongo to be exposed. You seem to have taken it as a completely different statement.
The "other things" is what most people seem to have problem with.
Mozilla burns a batshit amount of money on feel good fancies.
If it were focused on its core mission -- building great software in key areas -- it would see that it can't afford this, because that's the same money that, if saved, would make them financially independent of Google.
> In 2018, Baker received $2,458,350 in compensation from Mozilla.
> In 2020, after returning to the position of CEO, Baker's salary was more than $3 million.
> In 2021, her salary rose again to more than $5.5 million,
> and again to over $6.9 million in 2022.
>
> https://en.wikipedia.org/wiki/Mitchell_Baker#Mozilla_Foundation_and_Mozilla_Corporation
I wouldn't use it for consumer apps because it requires a WebSocket connection to maintain state and probably doesn't scale very cheaply... but for business applications or personal tools it's actually kind of insane how much functionality you get out of the box (at least by the standards of statically typed languages).
.NET works amazingly on the web. This is just not the UI framework you would use.
There is ASP.NET of course and Razor Pages. We all use apps built with these every day without even realizing it. There are other great frameworks as well.
I do not even see Blazor as a real web technology but of course it is positioned that way.
MAUI is a "cross-platform" and frankly mobile first UI framework. It was never meant for the web.
It's basically a way for people to externalize tasks that require a human but pay fractions of what it would cost to actually employ those humans.
Mechanical Turk was one of the early entrants into "how can we rebrand outsourcing low skill labor to impoverished people and pay them the absolute bare minimum as the gig economy".
Much of the low-skill labor was things like writing transcripts and turning receipts into plaintext, at a point when OCR wasn't reliable. There were a few specialist tasks.
The gig economy was very much a net positive here. Some people used it to quit factory work and make twice the income; some used it as negotiation terms against the more tyrannical factories. Factories were sometimes a closed ecosystem here - factory workers would live in hostels, eat the free factory food or the cheap street food that cropped up near the area. They'd meet and marry other factory workers, have kids, who'd also work there. They were a modern little serfdom. Same goes for plantations.
Things like gig work and mturk were an exit from that. Not always leaving an unhappy or dangerous life, but making their own life.
If it paid badly, just don't work there. These things push wages down for this kind of work, but this work probably shouldn't be done in service economies anyway.
> If it paid badly, just don't work there. These things push wages down for this kind of work, but this work probably shouldn't be done in service economies anyway.
This paragraph is so tantalizingly close to putting its finger on the issue. The fact that a company found someone willing to do a job for what they want to pay does not mean that it's ethical or moral for them to do so.
In this case (as in many others), one of the predicates was finding groups of people whose existing options, financial literacy, living conditions, or some combination of the three were already so bad that becoming digital serfs was a minor step up.
I got paid $11 an hour to enter handwritten applications into a database, as a temp job back in the early 2010s. It was "low-skill" inasmuch as, "Locking in and moving efficiently through entire filing cabinets of forms, often written by people whose first script was not Latin, for 6-7 hours straight, every weekday, for 2 months, with no prior training," is "low-skill" (and I apparently did it much faster than my supervisors expected). $11/hr was less than it should have paid, and yet I have to commend the company I was working with, because they sourced local labor and still paid multiple times what the job would have commanded through outsourcing via Mturk.
The conditions you're describing were caused by the systemic globalist status quo that Mturk is a part of; Mturk did not fix that, it perpetuated it.
It's not a fraction of what it would cost to actually employ those humans, since there were humans who clearly chose to do that work when presented with the opportunity.
I think this is a very first-world oriented take. It efficiently distributed low-value workloads to people who were willing to do it for the pay provided. The market was efficient, and the wages were clearly acceptable to the people doing the work, considering they did (and still do) it for the wages provided.
"Evil" feels strong? Small companies benefit from having the basic feature set subsidized by big cos. It's kind of hard for me to imagine a scenario where the pricing of a SaaS product could be _evil_. You can just choose not to do business with them!
No, but I've been coding for Microsoft platforms since MS-DOS 3.3, so one gets to know how it all works after having read so many docs, MSJ articles, MSDN, PDC and BUILD sessions, podcasts and what not.
No, we don't do anything. Theoretically we could judge several times with different orderings.
We could measure order bias really easily though; we just need to look at the average score by rollout position across many runs. I'll add that to my list of experiments!
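A minimal sketch of that measurement, assuming each judgement logs the item's position in the rollout along with the score it received (the numbers below are made up):

```python
from collections import defaultdict

# (rollout_position, score) pairs gathered across many runs.
results = [(0, 7.5), (1, 6.9), (2, 6.4), (0, 7.2), (1, 7.0), (2, 6.1)]

sums, counts = defaultdict(float), defaultdict(int)
for position, score in results:
    sums[position] += score
    counts[position] += 1

# A consistent upward or downward trend across positions indicates order bias.
for position in sorted(sums):
    print(f"position {position}: mean {sums[position] / counts[position]:.2f} "
          f"over {counts[position]} runs")
```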
> While the specific internal workings of DeepSeek LLM are still being elucidated, it appears to maintain or approximate the self-attention paradigm to some extent.
Totally nonsensical. DeepSeek's architecture is well documented, and multiple implementations are available online.
I love to hate on Google, but I suspect this is strategic enough that they won't kill it.
Like Graviton at AWS, it's as much a negotiation tool as it is a technical solution, letting them push harder with NVIDIA on pricing because they have a backup option.
Google has done stuff primarily for negotiation purposes (e.g. POWER9 chips) but TPU ain't one. It's not a backup option or presumed "inferior solution" to NVIDIA. Their entire ecosystem is TPU-first.
Pretty sure the answer is yes. I have no direct knowledge of the matter for Gemini 2.5, but in general TPUs were widely used for training at Google. Even Apple used them to train their Apple Intelligence models. It’s not some esoteric thing to train on TPU; I would consider using GPU for that inside Google esoteric.
P.S. I found an on-the-record statement re Gemini 1.0 on TPU:
"We trained Gemini 1.0 at scale on our AI-optimized infrastructure using Google’s in-house designed Tensor Processing Units (TPUs) v4 and v5e. And we designed it to be our most reliable and scalable model to train, and our most efficient to serve."