I also switched to the cheapest Mac Mini M4 Pro this year (after 20+ years of using Intel CPUs). macOS has its quirks, but it ships with zsh and it "just works" (unlike the Manjaro I used in parallel with Windows). I especially like the Preview tool - it has useful PDF and photo editing options.
The hardware is impressive - a tiny metal box, always silent, with a basic built-in speaker, and it can be left always on with minimal power consumption.
Drive size for the basic models is limited (512 GB) - I solved that by moving photos to a NAS. I don't use it for gaming, except Hello Kitty Island Adventure. Overall I'd say it's a very competitive choice for a desktop PC in 2025.
I mean, it’s hard to verify this, and that’s the main thing about reporting on Gaza that makes me question the veracity of this information, given its source. I can believe that it’s true, but I think it’s fair to trust, but verify. If only there were some independent journalists in Gaza they could have worked with to give this project more legitimacy.
I have no means to verify this at all; privately, I consider it blunt propaganda. HN is not the site to discuss politics.
I'm more surprised that it's possible to post ads on such loaded issues on YouTube. I would appreciate comments from anyone with better knowledge of the ads industry.
I found this page, but I couldn't actually view it. After clicking through a Google search to a Google support page (I passed a captcha just to do the search), I somehow failed the second(!) captcha on visiting the support page, where Google said it had detected suspicious activity on my network and wouldn't let me view it. Maybe it's enough to get you started on your end? I suspect my VPN host has me lumped in with some noisy neighbors or something.
> This page appears when Google automatically detects requests coming from your computer network which appear to be in violation of the Terms of Service. The block will expire shortly after those requests stop.
> This traffic may have been sent by malicious software, a browser plug-in, or a script that sends automated requests. If you share your network connection, ask your administrator for help — a different computer using the same IP address may be responsible. Learn more
> Sometimes you may see this page if you are using advanced terms that robots are known to use, or sending requests very quickly.
I'm not doing anything like that, except maybe the advanced search terms - but I don't know which ones bots use, and I don't use bots, so I can only assume it's the VPN in combination with some of my search terms. The first time I saw it was right after reCAPTCHA failed to load on mobile, so who knows. I blame Google. When you can't even click on the Learn More link because you get another error, and there's no actual customer support, I think I might call my Congressperson. This is an ADA violation waiting to happen.
I own both an AC and an HRV, and the HRV is not enough for cooling. A single split unit has a max airflow of around 500 m³/h for a single room. My HRV has a max airflow of 350 m³/h for the whole house - it's not able to substantially cool anything.
This, although an HRV can sometimes have a bypass mode that doesn’t harvest the exhaust heat. That said, you need much larger vents for heating/cooling than for ventilation.
AC is still viewed almost like the plague (or a microwave) in Eastern Europe.
Growing up in Poland, I'd say Eastern Europe was until recently too poor to afford AC, but the climate was also indeed less extreme just 30 years ago. Nowadays most new detached houses do have AC in at least one room (bedroom or living room) - mine included. Alternatively, many new houses have heat pumps. Apartment blocks are less consistent in that matter, but there are interesting initiatives as well - like using combined-heat-and-power citywide heating installations for cooling (by pumping cold water through them).
Lonely Planet. In 2013 we did a Morocco backpacking trip with just a Lonely Planet guide - no phones, no Internet access, no pre-booked accommodation or transport. The guide contained maps, hostel suggestions, POIs, transport options, and a lot more. This year we brought the newest LP release to Thailand, and it was mostly useless - no transport or accommodation info, bad maps, generic POI descriptions. We are not buying another LP guide.
They can be quite good during the trip planning phase, like a helpful friend who makes suggestions that you are totally free to ignore. Having multiple threads of thought is easier with books, where you can riffle around the pages, maintain multiple contexts simultaneously, etc., compared to the very linear, one-search-at-a-time model that Google search gives you.
The other small advantage is that they work without power and cell service.
Europe's problem in this area seems to be an inability to build a mechanism for realizing large, unprofitable projects. It looks like the whole continent operates on thin margins and has no resources set aside to pursue strategic initiatives.
The US is building its pile of resources by capitalising on its current advantage, but also by pushing larger and larger parts of society into poverty. Europe is maybe more fair, but ultimately unable to keep up, and very vulnerable because of that.
Even Russia, a country with a much smaller economy, is able to concentrate its resources to affect politics and develop a few important technologies (missiles). Of course, it achieves that goal even more by exploiting its own population. Few people benefit from it, but their wealth is immense. Before 2022, whole regions of Europe catered to the needs of foreign millionaires, not a few homegrown ones. The lack of large projects translates into few opportunities to become rich.
I have no idea how Europe can build institutions to develop large projects without impoverishing lots of its citizens.
The data business in the US was not developed by “building institutions” through centralized government projects but by private enterprise. There is a fundamental difference in mindset regarding the relative importance of the public and private sectors between the two regions, and I suspect that difference underlies the stark difference between the US and EU in terms of tech companies. The US has Apple, Microsoft, Amazon, Meta, X, all of which innovate to offer products appealing enough that customers are willing to use or buy them in a free and consensual transaction (which does not require any impoverishment). If they fail at innovating to appeal to customers, they go bankrupt. EU institutions funded through taxpayer money have less incentive to produce goods and services that appeal to taxpayers. I’m not sure where “impoverishment” fits into this at all.
On "impoverishment": the difference between private sectors in Europe and US are of course important, but I'm referring to a challenge on much lower level.
Lagging country that wants to increase pace, must find resources to invest. The easiest way to find these resources is to squeeze out parts of its population, to decrease consumption and rise investment. This was done historically in the Soviet Union, in Korea, in China. Europe is of course much more prosperous, but I don't think it can escape this logic. I'm not sure if either European politicians and populations are ready to implement such catch-up initiatives that would result in partial dismantling of the welfare state.
> The US has Apple, Microsoft, Amazon, Meta, X, all of which innovate to offer products appealing enough that customers are willing to use or buy them in a free and consensual transaction (which does not require any impoverishment). If they fail at innovating to appeal to customers, they go bankrupt.
I make iPhone apps.
No Apple software update has appealed to me since back when they were still named after cats. In aggregate they have added some useful stuff, but even then, what has gotten worse over the years means my only interest in updates is (1) the security issues and (2) the way all developers (including me!) are pushed towards supporting only the most recent releases, which in turn means that even modern websites, let alone apps, don't work right on older systems: https://blog.greggant.com/posts/2024/07/03/running-10.6-snow...
And, bluntly, the library updates are also unimpressive. SwiftUI has obviously been the intended way forward for a while now, but even Apple themselves have to keep updating UIKit because SwiftUI isn't good enough yet: https://developer.apple.com/videos/play/wwdc2024/10118
Microsoft has a very different set of issues, but at least they successfully diversified into gaming platforms without destroying their dominance in office work. But how much of that dominance comes from the updates, vs. from backward compatibility with so many people's old documents - something that would be extraordinarily difficult for 3rd parties to replicate? https://www.joelonsoftware.com/2008/02/19/why-are-the-micros...
And they have LinkedIn now, which has weirdly become my standard minigames(!) experience, presumably because someone finally noticed games helped Facebook get stickier back in the day.
Amazon's consumer offerings suck. Almost everything I've bought from the website has had some issue or turned out to have been more expensive than the same stuff from a local shop I just hadn't found yet. But most people tell me AWS is good, so there's that.
FB and X's main asset is network effects, not tech. The web version of FB has bugs that I'd expect from a junior with no oversight, not a $1.5T market cap giant. Meta is more than FB these days, but is that enough? Horizons was a disaster for them, and they were outraged by Apple's ad changes that only implemented what various laws already required; their foundations may turn out to be a marshy flood plain.
No, what these companies have is mindshare, branding, and deep enough pockets to make government-scale capital allocations and survive their inevitable incorrect allocations.
(And by being so big that governments listen to them, regulatory capture etc.)
I do. These companies offer me choices and compete for my business. I can deploy to AWS (Amazon), Azure (MS), Google Cloud, or other competitors. My business will go to the one who gives me the best results for the least money. If they fail to provide me the services I want at a price that seems reasonable, or if I'm unhappy with them for any other reason, I can take my business elsewhere. Freedom and consent lie at the heart of private enterprise.
On the other hand, when a government tells me that I can't use the services I want to use and cannot trade with the people I want to trade with because of politics, and that I have to use different services because they're located in a particular region and favored by the government, that's not freedom, nor is it consensual.
Despite this, do you still recognize the countless tactics businesses use to lock consumers into their ecosystems as nonconsensual, or do you view that in a different light still?
You also mention innovation with regards to companies like Meta. How do things like the network effect fit into this model? To be more explicit, suppose I want to migrate off of Messenger to Signal. Meta won't allow bridges, and the people I know don't wish to switch. Surely it is not unreasonable for me to consider my continued usage of Meta's Messenger platform as nonconsensual, and my choices as impaired?
I personally regard this the same way as when people say stuff like "freedom of speech does not imply freedom from consequences of that speech". Very clearly that betrays the expectations one would reasonably build when hearing such a phrase.
> Europe's problem in this area seems to be inability to build a mechanism to realize large, unprofitable projects.
I don't disagree with this take completely, especially in regards to IT infrastructure, but I don't think that this explanation captures the main problem.
There are clear counterexamples of large European not-for-profit infrastructure (CERN, ITER) and even currently ongoing projects (e.g. railway infrastructure like the Brenner Base or Fehmarnbelt tunnels).
My take is that there simply was no strong enough incentive to spend a lot of money/effort duplicating American offerings just to stay independent. The current US admin is providing sufficient incentives to start spending on this, and while independence is nice in general, I think this is overall just a huge waste.
There are lots of people (both inside and outside Europe) perceiving it in different ways, but is this a result of facts, or of the repetition of certain ideas in the news?
Can you give, without searching, 5 examples of large unprofitable projects in Europe and the USA?
Each time I look for examples, I find more than I knew before. And anyhow, the purpose is not for the project to be unprofitable or large! If (just as an example) Europe obtains 50% of the result with 25% of the investment, is that bad? I (as a European) also want to live a good life; I don't care that Europe isn't first at throwing money around like crazy.
And is it about size in monetary value, or impact? Does anybody talk about UK Biobank? No, while its impact on healthcare is amazing, and they don't even try to make it for-profit (of course lots of US companies are eager to use it)...
CBOR started as a complementary project to the previous decade's IoT (Internet of Things) and WSN (Wireless Sensor Networks) initiatives. It was designed together with 6LoWPAN, CoAP, RPL and other standards. The main improvement over MessagePack was discriminating between byte strings and text strings - an important use case for firmware updates etc. The reasoning is probably available in an IETF mailing list archive somewhere.
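To make that byte/text distinction concrete, here is a minimal byte-level sketch (the example bytes are my own, not from any spec text):

    #include <stdint.h>

    /* CBOR gives text strings (major type 3) and byte strings (major
       type 2) distinct type codes, so a firmware blob can never be
       mistaken for UTF-8 text: */
    static const uint8_t text_hi[]  = { 0x62, 'h', 'i' };   /* text string "hi" */
    static const uint8_t bytes_hi[] = { 0x42, 0x68, 0x69 }; /* raw bytes 68 69 */
    /* MessagePack originally had a single "raw" type for both, so
       decoders had to guess; its later "bin" family fixed this. */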
All these standards were designed as research projects and seem rather slow to gain general popularity (6LoWPAN is used by Thread, but its uptake is also quite slow - e.g. Nanoleaf announced it was dropping support for it).
I would say that if CBOR fits your purpose it's a good pick, and you shouldn't be worried by it being "not cool". Design by committee is how the IETF works, and I wouldn't call it a weakness, although in DOGE times it might sound bloated and outdated.
To be fair, CBOR proper is amazingly well designed given its constraints and design-by-committee nature. Thanks to its regular design, it's not even hard to keep the whole specification in your head. Unfortunately, I can't say that for the rest of the CBOR ecosystem; many related specs show varying signs of bloat. I recently heavily criticized the packed CBOR draft because I couldn't make any sense out of it [1], and Bormann seemed to have clearly missed most of my points.
Disclaimer: I wrote and maintain a MessagePack implementation.
To be uncharitable, that's probably because CBOR's initial design was lifted from MP, and everything Bormann added to it was pretty bad. This snippet from your great post captures it pretty well I think:
> CBOR records the number of nested items and thus has to maintain a stack to skip to a particular nested item.
> Alternatively, we can define the "processability" to only include a particular set of operations. The statement 3c implies and 3d seems to confirm that it should include a space-constrained decoding, but even that is quite vague. For example,
> - Can we assume we have enough memory to buffer the whole packed CBOR data item? If we can't, how many past bytes can we keep during the decoding process?
> To be uncharitable, that's probably because CBOR's initial design was lifted from MP, and everything Bormann added to it was pretty bad.
To be clear, I disagree and believe that Bormann did make a great addition by forking. I can explain this right away by showing how my point can be fixed entirely within CBOR itself.
CBOR tags are of course not required to be processed at all, but some common tags have useful functions that many implementations are expected to support. One example is tag 24, "Encoded CBOR data item" (Section 3.4.5.1), which indicates that the following byte string is itself encoded CBOR. Since this byte string carries its size in bytes, every array or map can be embedded in such a tag to ensure easy skippability (see the byte-level sketch after the footnote). [1] This can be made into a formal rule if the supposed processability is highly desirable. And given that those tags were defined so early, my design sketch should already have been considered in advance, which is why I believe CBOR is indeed designed better.
[1] Alternatively, RFC 8742 CBOR sequences (tag 63) can be used to emulate an array or map of indeterminate size.
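For concreteness, here is a byte-level sketch of that tag 24 wrapping (the embedded array [1, 2, 3] is my own example):

    #include <stdint.h>

    /* The array [1, 2, 3] encodes as 4 bytes: 0x83 (array of 3)
       followed by 0x01 0x02 0x03. Wrapping it in tag 24 prefixes a
       byte string header carrying the exact byte length: */
    static const uint8_t wrapped[] = {
        0xD8, 0x18,             /* tag(24): "Encoded CBOR data item" */
        0x44,                   /* byte string of length 4 */
        0x83, 0x01, 0x02, 0x03  /* the embedded array [1, 2, 3] */
    };
    /* A decoder that wants to skip this item reads the length (4) and
       advances 4 bytes - no stack needed however deeply it nests. */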
Sure, I think CBOR's "suggested" tags (or whatever they are) are probably useful to most people. The tradeoff is that they create pressure for implementations to support them, and that's not free. For example, bignum libraries are pretty heavyweight; they're not really the kind of thing you'd want to include as a dependency in a C implementation, especially when very few of your users will use them. Well, OK, now you have a choice between:
- include it anyway, bloat your library for almost everyone, maybe consider supporting different underlying implementations, manage all these dependencies forever; also, those libraries have different ways of setting precision, allocating statically or dynamically, etc., so expose all that somehow
- don't include it; you're now probably incompatible with all the dynamic-language implementations that get bignums for free, and you should note that up front
This is just one example, but it's pretty representative of Bormann's "have your cake and eat it too" design instincts where he tosses on features and doesn't consider the tradeoffs.
> One example is the tag 24 "Encoded CBOR data item" (Section 3.4.5.1), which indicates that the following byte string is encoded as CBOR. Since this string has the size in bytes, every array or map can be embedded in such tags to ensure the easy skippability.
This only works for types that aren't nested, unless you significantly complicate bookkeeping during serialization (storing the byte size of every compound object up front), which has the potential to seriously slow down serializing. My approach would be to let individual apps do that if they want (encode the size manually), because I don't think it's a common usage.
> Well OK, now you have a choice between: - include it anyway, [...] - don't include it, [...]
So I guess that's why MP doesn't have a bignum. But MP's inability to store anything more than (u)int64 and float64 does make its data model technically different from JSON's, because JSON never properly specified that its number format should be round-trippable in those native types. Even worse, if you could assume that everything is at most a float64, you would still have to write a considerable amount of subtle code to do the correct round-trip! [1] At this point your code would already contain some bignum stuff anyway. So why not support bignums then?
[1] Correct floating point formatting and parsing is very difficult and needs a non-trivial amount of precomputed tables and sometimes bignum routines (depending on the exact algorithm) - for the record, I'm the main author of Rust's floating point formatting routine. Also for this reason, most language standard libraries already have hidden support for size-limited bignums!
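A trivial C sketch of the footnote's point (example numbers are mine): printing 17 significant digits always round-trips a double, but producing the shortest round-trippable form is the hard part that needs those algorithms.

    #include <stdio.h>
    #include <stdlib.h>

    int main(void) {
        double x = 0.1;
        char buf[32];
        /* 17 significant digits are always enough to round-trip an
           IEEE-754 double... */
        snprintf(buf, sizeof buf, "%.17g", x);
        printf("%s\n", buf);  /* prints 0.10000000000000001, not "0.1" */
        /* ...but emitting the shortest form ("0.1") needs nontrivial
           algorithms (Grisu, Ryu, ...) with tables or bignum routines. */
        double y = strtod(buf, NULL);
        printf("%s\n", x == y ? "round-trips" : "lost precision");
        return 0;
    }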
> My approach to that would be to let individual apps do that if they want (encode the size manually), because I don't think it's a common usage.
I mean, the supposed processability is already a poorly defined metric, as I wrote earlier. I too suppose that it would be entirely up to the application's (or possibly the library's educated) request.
> But MP's inability to store anything more than (u)int64 and float64 does make its data model technically different from JSON....
Yeah, I don't love the MP/JSON comparison the site pushes. I don't really think they solve the same problems, but the reasons are kind of obscure, so, shrug. MP is quite different from JSON, and yeah, numbers are one of the ways.
> [1] Correct floating point formatting and parsing is very difficult and needs a non-trivial amount of precomputed tables and sometimes bignum routines (depends on the exact algorithm)---for the record I'm the main author of Rust's floating point formatting routine. Also for this reason, most language-standard libraries already have a hidden support for size-limited bignums!
Oh man, yeah, tell me about it; I attempted this way back when and gave up, lol. I was doing a bunch of research into arbitrary-precision libraries, and the benchmarks all contain "rendering a big ol' floating point number" - that's why. Wild.
> I mean, the supposed processability is already a poorly defined metric as I wrote earlier. I too suppose that it would be entirely up to the application's (or possibly library's educated) request
I think in practice implementations are either heavily spec'd (FIDO) on top of a restricted subset of CBOR, or they control both sender and receiver. This is why I think much of the additional protocol discussion in CBOR is pretty moot; if you're taking the CBOR spec's advice on protocols you're not building a good protocol.
> Oh man yeah tell me about it; I attempted this way back when and gave up lol. I was doing a bunch of research into arbitrary precision libraries and the benchmarks all contain "rendering a big 'ol floating point number" and that's why. Wild.
Yes, it's something whose existence people generally don't even realize. To my knowledge, only RapidJSON and simdjson have seriously invested in optimizing this aspect - their authors do know this stuff and its difficulty. Others tend to use a performant but not optimal library like double-conversion (which was the SOTA at the time of its release!).
> Well OK, now you have a choice between: - include it anyway, [...] - don't include it, [...]
I do not see an issue here. In the decoder, one does not need a bignum library; just pass the bignum as a memory blob to the application.
In the application, one knows the semantic restrictions on the given values, and can either reject bignums as semantically invalid (out of range) or will need a bignum processing library anyway.
You can replace "pull in MPFR" with "work any harder than just using `double`". Bignums are an obvious pain in the ass; I can think of no data representation formats that include support for them, and that's why.
I'm aware of plenty (though I have surveyed at least 20 formats in the past, so that includes more obscure ones). At the very least, you can feed it back to sscanf if you are fine with an ordinary float or double; a thoughtful API would include this as an option too. That's what I expect from the supposed bignum support: round-trippability.
Maybe an example is useful. I want to build a generic CBOR decoder in C. I have 2 options:
- link GMP/mpdecimal/whatever (or hey, provide an abstraction layer and let a user choose)
- accept function pointers to handle bignum tags
Function pointers are an irritation (I know this because my MP library uses them): they're slower than not using them, you've gotta check for NULL a lot, and you're also asking any application that uses your library and wants bignum support to include GMP itself (with all the attendant maintenance, setup, etc.)
Or, you can include it yourself, but welcome to doing all the maintenance yourself, and exposing all of GMP's knobs (ex: [0])
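To sketch what the function-pointer route (option 2) looks like - hypothetical names of my own, not the actual API of any real library:

    #include <stddef.h>
    #include <stdint.h>

    /* A handler-based decoder configuration for bignum tags: */
    typedef void (*bignum_cb)(const uint8_t *bytes, size_t len, void *user);

    struct decoder_cfg {
        bignum_cb on_bignum;  /* NULL when the app doesn't want bignums */
        void *user;           /* passed through to the callback */
    };
    /* Every bignum tag now costs a NULL check plus an indirect call,
       and the app still has to link GMP itself to do anything useful. */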
You might argue that these aren't the only options, but a deserialized value has to be understood by the application, and your suggestions aren't good tradeoffs. sscanf (also, do not use sscanf) doesn't work if the value is actually a bignum, and yielding a bespoke bignum format is just as unusable as simply returning whatever's encoded in CBOR. How would I add two such values together? How would I display it? This is what bignum libraries are for.
All this is made far worse by the fact that there are effectively no public CBOR (or MP) APIs where you expect them to be consumed entirely by generic decoders, so there's not even a need to force generic decoders to go through all this effort to support bignums (etc.). Further, unlike MP, CBOR doesn't let you use tags for application-specific purposes. Put it all together and it's uniformly worse: implementations are either more complex or have surprising holes, you can't count on generic decoders supporting tags when building an API or defining messages, and you can't even just say "for this protocol, tag 31 is a UUID".
This is probably a big reason (though I can think of others) why the only formats you can think of w/ bignum support are obscure.
> That's what I expect for the supposed bignum support: round-trippability.
Round-tripping is only meaningful if a receiver can use the values before reserializing; otherwise memcpy meets your requirements. If a sender gives me a serialized bignum, the deserializing library has to deserialize it into a value I can understand and use; that's the whole point of a deserialization library.
MP's support for timestamps is a reasonable example here: it decomposes into a time_t, and it can do this because it defines the max size. You can't do that w/ a bignum - the whole point of a bignum is that it's big beyond defining. A CBOR sender can send you an infinite series of digits, and the spec doesn't reckon with this at all.
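As a sketch of why the bounded size matters (illustrative code, not from any particular MP library): MP's timestamp 32 payload is a fixed 4-byte big-endian count of seconds, so it decodes into a time_t with no allocation at all.

    #include <stdint.h>
    #include <time.h>

    /* Decode MessagePack's timestamp 32 payload (ext type -1,
       4 bytes: big-endian seconds since the Unix epoch). */
    time_t decode_timestamp32(const uint8_t p[4]) {
        uint32_t secs = ((uint32_t)p[0] << 24) | ((uint32_t)p[1] << 16)
                      | ((uint32_t)p[2] << 8)  |  (uint32_t)p[3];
        return (time_t)secs;  /* fits because the format bounds its size */
    }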
> I have 2 options: - link GMP/mpdecimal/whatever (or hey, provide an abstraction layer and let a user choose) - accept function pointers to handle bignum tags
I would just provide two kinds of functions:
    // For each representative native type...
    cbor_read_t cbor_read_float(struct cbor *ctx, float *f);

    // And there is a generic number handling:
    struct cbor_num {
        int sign;            // -1, 0 or 1
        int base;            // 10 or 16
        int exponent;
        const char *digits;
        size_t digits_len;
    };
    cbor_read_t cbor_read_number(struct cbor *ctx, struct cbor_num *num);

    // And then someone will define the following on top of cbor_read_number:
    cbor_read_t my_cbor_read_mpz(struct cbor *ctx, mpz_t num);
Memory lifetimes and similar concerns also have to be considered here (left as an exercise), but the point is that you never need function pointers in this case. In fact, I would actively avoid them, because proper function pointer support is indeed a PITA, as you said. They can generally be avoided with a (sorta) inversion of control, which is popular in compact C APIs and to some extent in Rust APIs as well. It's just that you haven't thought of this possibility.
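A minimal sketch of such an adapter, assuming the hypothetical API above plus GMP; `CBOR_READ_OK`, the integer-only handling (exponent == 0), and the elided error handling are all my assumptions:

    #include <gmp.h>
    #include <stdlib.h>
    #include <string.h>

    cbor_read_t my_cbor_read_mpz(struct cbor *ctx, mpz_t num) {
        struct cbor_num n;
        cbor_read_t ret = cbor_read_number(ctx, &n);
        if (ret != CBOR_READ_OK)  /* CBOR_READ_OK is an assumed name */
            return ret;
        /* mpz_set_str wants a NUL-terminated string, so copy the digits
           (malloc failure handling elided for brevity): */
        char *buf = malloc(n.digits_len + 1);
        memcpy(buf, n.digits, n.digits_len);
        buf[n.digits_len] = '\0';
        mpz_set_str(num, buf, n.base);  /* parse in the reported base */
        if (n.sign < 0)
            mpz_neg(num, num);          /* apply the sign */
        free(buf);
        return ret;
    }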
> sscanf (also do not use sscanf) doesn't work if the value is actually a bignum, and yielding a bespoke bignum format is just as unusable as simply returning whatever's encoded in CBOR. How would I add two such values together? How would I display it? This is what bignum libraries are for.
In practice, many bignums are just left as is. For example, X.509 certificate serial numbers are technically bignums, but you never compute anything out of them, so you don't need any bignum library to read serial numbers. If you do need computation, then you need an adapter function like the one above, but the library proper needs no knowledge of that adapter. What's the problem now?
By the way, sscanf is fine here because the API's contract constrains sscanf's inputs enough to be safe. sscanf in general is also safe when every `char *` output is bounded. It is certainly a difficult beast, but so is everything about C.
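For instance, a generic illustration of the bounded-output rule (my own example, not code from the thread):

    #include <stdio.h>

    void parse_name(const char *input) {
        char name[32];
        /* The field width bounds the write to 31 chars plus the NUL,
           so this particular sscanf cannot overflow `name`. */
        if (sscanf(input, "%31s", name) == 1)
            printf("parsed: %s\n", name);
    }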
> and yielding a bespoke bignum format is just as unusable as simply returning whatever's encoded in CBOR. How would I add two such values together? How would I display it? This is what bignum libraries are for.
I know this is what you've been getting at. Maybe I've been unclear about why this isn't useful, but here are the main points:
- Without bignum functionality, your data structure doesn't provide any more functionality than memcpy. How do I apply the base? How do I apply the exponent? How would I add two of them together? This may as well just be a `char *`.
- Speaking of just being a `char *`, CBOR's bignums are just that, so you'd just call `mpz_init_set_str` on whatever is in the buffer (zero terminate it in a different buffer, I guess, whatever). Parsing into your struct here is counterproductive.
- Even the minimal functionality you're proposing here adds bloat to every application that doesn't care about bignums and wants to ignore the tag (probably almost all applications). Ameliorating this requires conditional compilation.
> In practice many bignums are just left as is.
I'd believe this; I'd also believe there's very little real need for them generally. This is an argument for not including them in a data serialization format.
> By the way, sscanf is fine here
The problem with sscanf isn't that it can never be safe; it's that if you're not safe every time, you blow everything up. It's better to just not use it.
If the IETF is "design by committee", almost any collaboratively developed standard could be called designed by committee. And I'm rather confident in assuming you haven't seen the ITU or IEEE in action, or you'd be singing angelic choir praises of the IETF process…
(The IETF really does not have a committee process by any reasonable definition.)
I also had an opportunity to design control systems using their products, in Poland. The sensors were very accurate and competitively priced (although maybe a bit fragile compared to Sick or other European products - it was a visibly different engineering culture), but the most remarkable thing about them was their sales team. They were ready to lend us expensive sensors for months, and they responded very quickly.