Etype 23 (rc4-hmac) gets ~3500 kH/s, 18 (aes256-cts-hmac-sha1-96) gets roughly 2500 kH/s. Big difference, but somehow I thought it would be much bigger? 2.5M guesses/second is still not so bad.
I've only done kerberoasting and AS-REP roasting a handful of times, but from what I recall, RC4 can be cracked within a reasonable time regardless of your password complexity. With AES, though, if you have a long and complex service account password, it will take decades or centuries to crack. But (!!) it is still quite common to use relatively weak passwords for service accounts, and a lot of the time the purpose of the service is included in the password, which makes guessing a bit easier.
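To put numbers on that, here's a rough worst-case estimate at the AES rate above (the hashcat modes for TGS-REP are 13100 for etype 23 and 19700 for etype 18, if I remember the mode numbers right; the charset sizes below are illustrative assumptions):

```python
def crack_time_years(charset: int, length: int, guesses_per_sec: float) -> float:
    """Worst-case time to exhaust the full keyspace at a given guess rate."""
    return charset ** length / guesses_per_sec / (3600 * 24 * 365)

RATE = 2.5e6  # ~2500 kH/s, the slower AES (etype 18) figure above

# 8 random lowercase letters: exhausted in about a day even at the AES rate
print(f"8 lowercase chars: {crack_time_years(26, 8, RATE) * 365:.1f} days")
# 25 random printable chars: astronomically long at any etype
print(f"25 mixed chars: {crack_time_years(70, 25, RATE):.1e} years")
```

Real attacks use dictionaries and rules rather than brute force, so weak human-chosen passwords fall far faster than the keyspace math suggests.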
My criticism is that Kerberos (as far as I'm aware) does not provide modern PBKDFs (keyed Argon2?) with memory-hardness. That might be asking too much, so why doesn't Microsoft alert directory administrators (and security teams) when someone is dumping tickets for kerberoasting by default? It's not normal for any user or service to request tickets for literally all your service accounts. Lastly, Microsoft has Azure Key Vault in the cloud, but they're so focused on cloud that they don't have an on-prem Key Vault solution. If a service account is compromised, you still have to find everything that uses it and change the password one by one, whereas with a Key Vault-like setup you could rotate passwords without causing outages.
Rotating the KDC/krbtgt credential is also still a nightmare.
From what bits I've heard, Microsoft expects its users to be using Entra ID instead of on-prem domains (computers joined directly to Entra ID instead of to domain controllers). That's a nice dream, but in reality 20 years from now there will still be domain controllers on enterprise networks.
Kerberos has FAST for truly addressing the offline dictionary attacks against PA-ENC-TIMESTAMP. FAST is basically tunneling: the pre-auth exchange is encrypted under a key from some other ticket. With PKINIT with anonymous clients it's pretty easy to get this to be good enough, but Windows/AD doesn't support that, so instead you have to use a computer account to get the outer FAST tunnel's ticket, which works if you're joined to the domain and doesn't work otherwise.
There's also work on a PAKE (zero-knowledge password proof protocol) which also solves the problem. Unfortunately the folks who worked on that did not also add an asymmetric PAKE, so the KDC still stores password equivalents :(
> Rotating the KDC/krbtgt credential is also still a nightmare.
I've done a bunch of work in Heimdal to make key rotation not a nightmare. But yeah, AD needs to copy that. I think the RedHat FreeIPA people are working on similar ideas.
> That's a nice dream, but in reality 20 years from now there will still be domain controllers on enterprise networks.
SSPI and Kerberos are super entrenched in the Windows architecture. IMO MSFT should build an SSP that uses JWTs over TLS, using PKI for server auth and JWT for client auth, using Kerberos principal names as claims in the JWTs and using the PKINIT SAN in server certs to keep all the naming backwards compatible. To get at the "PAC" they should just have servers turn around and ask a nearby DC via NETLOGON.
Do you know if FAST and the PAKE work are available for use in AD?
Heimdal looks very cool; I'm reading up on it to learn a bit more. Also, nice work on the SEO! On DDG, searching for "Heimdal" gives your site as the #1 result, beating even Wikipedia for the namesake.
Active Directory does support FAST. It also supports tunneling over HTTPS, which also buys protection for weak pre-authentication mechanisms.
Idk about AD and PAKE.
Heimdal is really cool, though currently a bit on the abandonware side, but I'm working on a huge PR that should lead to us doing an 8.0 release with lots of pent-up and very cool features.
What's most cool about Heimdal is the build-a-compiler-for-it ethic that its Swedish creators brought to it. That's why it has a very nice ASN.1 compiler, and why it has three other internal compilers: one for com_err-style error definition files, one for certificate selection queries, and one for sub-commands and their command-line options.
> I've only done kerberoasting and AS-REP roasting a handful of times, but from what I recall, RC4 can be cracked within a reasonable time regardless of your password complexity
That's not quite right. If the password is sufficiently strong, you won't crack it even when RC4 is used. The password space is infinite.
You might be thinking of the LM hash, where you are guaranteed to find the password within minutes, because the password space is limited to 7 character passwords.
> Rotating the KDC/krbtgt credential is also still a nightmare.
I also disagree there. Just change it exactly once every two weeks or so. Just don't do it more than once within 10 hours. See: https://adsecurity.org/?p=4597
What I wonder is why Windows isn't changing it itself every 30 days or so, just like every computer account password.
> why doesn't Microsoft alert directory administrators (and security teams) when someone is dumping tickets for kerberoasting by default?
Good question. Probably because they want you to license some Defender product which does this.
> I also disagree there. Just change it exactly once every two weeks or so. Just don't do it more than once within 10 hours. See: https://adsecurity.org/?p=4597
That link says to wait a week before the second change. There is a good reason for that: because Kerberos is so asymmetric, and because there are badly written apps out there, you'll cause failed logins for them if you rotate too fast. I normally consider this in the context of a domain compromise, where you have to rotate with a lower delay, but that always raises the controversy of causing outages. My original point is exactly what you said: the rotation should be an automatic and regular event. It should be able to change the key, track how much the old password is still being used, and after the old password hasn't been used for <configured interval>, do another rotation. It can prevent outages by tracking usage that way. I see no good reason why they made the effort to have an old/new password distinction but didn't give admins the option to auto-rotate. Although, I wonder if you can do this now with PowerShell (if old-password usage is tracked anywhere).
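The auto-rotation logic described above is just bookkeeping, sketched here with hypothetical names (AD exposes no such API, and real DCs would have to report old-key usage somehow):

```python
from datetime import datetime, timedelta
from typing import Optional

class KrbtgtRotator:
    """Hypothetical auto-rotation bookkeeping: rotate, then hold off on the
    next rotation until the *old* key has gone unused for a quiet interval."""

    def __init__(self, quiet_interval: timedelta):
        self.quiet_interval = quiet_interval
        self.old_key_last_used: Optional[datetime] = None

    def record_old_key_use(self, when: datetime) -> None:
        # Called whenever a DC decrypts a ticket with the previous key.
        self.old_key_last_used = when

    def maybe_rotate(self, now: datetime) -> bool:
        if (self.old_key_last_used is not None
                and now - self.old_key_last_used < self.quiet_interval):
            return False  # old key still in use; rotating now risks outages
        # Rotate: the current key becomes the "old" key, counted as used now.
        self.old_key_last_used = now
        return True

rotator = KrbtgtRotator(quiet_interval=timedelta(days=7))
t0 = datetime(2025, 1, 1)
print(rotator.maybe_rotate(t0))                      # first rotation goes through
print(rotator.maybe_rotate(t0 + timedelta(days=3)))  # too soon, refused
print(rotator.maybe_rotate(t0 + timedelta(days=8)))  # quiet interval elapsed
```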
> That's not quite right. If the password is sufficiently strong, you won't crack it even when RC4 is used. The password space is infinite.
You're totally right. I was thinking in terms of the passwords people usually configure, which are 12-18 characters long. But for computer accounts and well-configured service accounts, I've seen a 64-character minimum used, which should be very hard to crack even with RC4.
I think some of the book associations are wrong. It shows "The Martian Chronicles" for mentions of Andy Weir's "The Martian".
Otherwise it's nice to see so many of the books I read this year mentioned. Except "Mein Kampf" of course; interesting to see it among the top mentions. Perhaps lots of people are reading it to understand the past? I'll need to see if it's worth it. I always considered it the equivalent of drinking water from the River Thames to understand Victorian England better.
Yesterday I finished a long listen of the audiobook "The Rise and Fall of the Third Reich" by William Shirer (on Audible, 60 hours). He frequently quotes "Mein Kampf". I am not sure one can stomach the whole thing, but it's interesting to read quotes of it in context.
Interesting. I was sure at first that the title should be "JScript", but it really is JavaScript. It uses the MSHTML COM object; this isn't the modern Edge/WebView2 embedding but the legacy browser engine used by Internet Explorer. It's had lots of vulnerabilities over the years.
I always use -useb with iwr, only because it spits out lots of errors otherwise; I think most people do as well (so this isn't an issue). The "system access" in the title might be misleading: the JavaScript code can't access system resources, just the same as it couldn't if you were running it in Internet Explorer, unless of course there were an exploit.
Also, for OP: Do you mean "access to the system it runs on"? Because I'm pretty sure it doesn't run with "SYSTEM" access (as in privileged user).
It's basically the same as using headless Chrome to download or scrape things. The Invoke-WebRequest cmdlet here ('curl' is the alias for it) lets you do things like pass the response to some other cmdlet and do stuff with it. You can, for example, check the status code (even with -UseBasicParsing/-useb). I believe what full DOM rendering adds here is that it lets you access the post-render DOM for script manipulation.
There are lots of legit uses for this, especially when it involves interacting with sites that are too outdated and internal, or external sites that publish important information but don't have a proper feed or api.
Doing this with curl.exe proper (getting a fully rendered DOM) would not be possible. Even without rendering the whole DOM, parsing the HTML/XML using CLI tools or a shell script is very difficult. Invoke-WebRequest doesn't 'pipe' or output the raw text response; it outputs an object that contains the raw response ( (curl -useb https://news.ycombinator.com).RawContent ) along with the body, the headers, and other details of the response for shell scripting.
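To illustrate why structured access beats text munging, here's the same idea in Python using only the stdlib (a rough sketch; the point is pulling a value out of parsed HTML instead of grepping raw text):

```python
from html.parser import HTMLParser

class TitleGrabber(HTMLParser):
    """Extract the <title> text from an HTML document."""
    def __init__(self):
        super().__init__()
        self.in_title = False
        self.title = ""

    def handle_starttag(self, tag, attrs):
        if tag == "title":
            self.in_title = True

    def handle_endtag(self, tag):
        if tag == "title":
            self.in_title = False

    def handle_data(self, data):
        if self.in_title:
            self.title += data

html = "<html><head><title>Hacker News</title></head><body>front page</body></html>"
parser = TitleGrabber()
parser.feed(html)
print(parser.title)  # → Hacker News
```

Doing the equivalent reliably with sed/awk against real-world HTML is a known tar pit, which is exactly the gap the object output fills.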
> Also, for OP: Do you mean "access to the system it runs on"? Because I'm pretty sure it doesn't run with "SYSTEM" access (as in privileged user).
Yeah, I mean “access to the system”. It’s not the same as using headless chrome, because it gives you ActiveX and you can shell out to an arbitrary command.
Good for them, I don't see this as a big deal other than my fear of west china invading china (taiwan! :) ).
Don't get me wrong, I want the West to succeed, but competition from China is exactly what is needed. It's because of this competition that TSMC is building fabs in Arizona and India.
I really hope we get past historical political rivalry and get along with China better. Competition is good, hostility sucks.
To give some more historical context: China (ROC) planned to invade west China until the plan was given up in the 1960s. Both sides wanted reunification by force. When China's navy and air force were superior in the early 1950s, it tried to "establish blockade of trade with west China (PRC) along the Chinese coast" (1)
China eventually gave up the plan in the 1960s, not because it didn't want to, but because the balance of power was shifting toward west China. In the '80s and '90s both sides agreed to make peace on the premise that both belong to China.
TSMC was a product of industrial policy from the non-democratic China government. Its founder, Morris Chang, an American born in west China, never visited China before the age of 50.
Both China (before the '90s) and west China used to want reunification, by force or otherwise. China changed a bit later. The motivation of west China to invade China has little to do with chips, although the US thinks that's the critical incentive. In my view, west China would still let TSMC provide chips to the world even if it had successfully invaded China.
Thanks. In my view, the PRC is making a huge strategic mistake, as is Taiwan. The PRC is too focused on full control; normally they're more long-term-minded, but in this case they're rushing it too much. Establishing a trade bloc and peaceful relations first and then aiming for full reunification would be the smart play, since there isn't anything huge to gain outside of TSMC (that I know of) by way of an invasion.
Taiwan is too dependent on the West; it should know it can't actually resist an invasion, and that the West won't do much when it comes down to it. Its interests would have been served best by seeking good trade relations with the PRC, so that the PRC would continue to rely on TSMC. It should be providing west China with all the nice chips the West is forbidding it from having. It should have been more like India and less like South Korea.
> there isn't anything huge to gain outside of TSMC (that I know of) by way of an invasion.
The reunification of Taiwan is a fundamental national policy, enshrined in the Constitution of the People's Republic of China. The primary intention behind the desire for national reunification stems from the realization of reunification itself, rather than from other interests. This reflects a complex national sentiment and shared aspiration.
We consider the people of Taiwan to be our compatriots. Therefore, even though our military strength far surpasses that of Taiwan, the mainland is unwilling to resort to force and has always hoped for peaceful reunification. This is because we do not wish to harm or even kill any of our compatriots in the process of achieving it.
Essentially, it has been the United States that has been obstructing this unification process and using propaganda tools to influence public perception in Taiwan. As a result, many Taiwanese people are shocked by the stark difference between the mainland and how the propaganda portrays it when they visit. It is truly baffling that, despite living so close to the mainland, their understanding of it is almost in sync with that of Americans.
> I don't see this as a big deal other than my fear of west china invading china (taiwan! :) ).
Isn't that "other than" clause a big deal, though? I've read a survey and a number of articles from defense and foreign policy types, and the general feeling is there's a ~25% chance that China will invade Taiwan this decade. That's really damn big. If there's rollback in Taiwan then the first island chain could plausibly fall, or if not you will surely see Japan and maybe South Korea nuclearize. Why must we keep assuming the best with these security calculations instead of believing someone when they keep saying what they're going to do?
This will probably never happen. All countries are rivals, and the semblance of cooperation is really just the manifestation of a power imbalance.
China grew into their big boy pants and can hold their own on the international stage. They have no need to be cooperative because they are in the International Superpower Club. Their strategic ambitions do not align with those of their rivals, and they are strong enough to not need to play nice anymore.
Now that the US has also dropped its facade of being the benevolent world leader, there's even less reason for China to pretend to be cooperative. At this point, it's a matter of who is more apt to invade your country, the US or China? And you buy weapons from the other one.
Maybe we see more "cooperation" between China and the EU or South America. But that will be entirely because those regions are under duress.
Tibet. Their ongoing border disputes with India. Island disputes along side their bullying of nearly every maritime neighbor in the region. Stationing destroyers outside of Australian cities as a show of force.
Plus, their current antagonistic relationship with Japan, where they make direct public threats to Japanese leaders who respond by seeking nuclear weapons.
They are currently probing for weakness in their neighbors because of territorial ambitions. Just because they don't invade countries on the other side of the world like the USA does, doesn't make them pacifists. They just have different goals.
yeah they really shouldn't be blockading their neighbors while claiming every country around them is their sphere of influence and openly interfering in their allies domestic politics while leveraging their size to force other countries to accept asymmetric economic deals...
Please spare us. China invaded Vietnam to protect Pol Pot while he was mass killing millions of innocent civilians. They have territorial disputes with over 10 countries, which they've been unable to decisively act on because those neighbors either have nukes (India) or are protected by a more powerful country (US). Not because their government is some benevolent entity. They're basically an authoritarian dictatorship that's kind of cornered at the moment (like Saddam after the Gulf War) but would kill a bunch of people and expand if the US wasn't around.
China has resolved a lot of its border disputes already. The disputes with Kazakhstan, Kyrgyzstan, Laos, Mongolia, Nepal, North Korea, Russia, Vietnam, and Tajikistan have all been resolved.
The more China advances domestically, especially in this area, the less it has to gain from invading Taiwan. China is getting to the point where the conquest is finally doable (rapidly advancing and massive military, plus a weak US president), but the potential gains are diminishing year to year.
I'd speculate that if they don't invade during Trump's term, they never will, and will pursue a different course down the road. China is nothing if not patient.
The motivation to invade Taiwan is rooted in the PRC's political and historical narrative about its legitimacy and purpose, a narrative internalized by most Chinese, especially the military. It's in a sense existential, not economic or realpolitik, and I don't see that motivation diminishing anytime soon. If anything it's growing stronger, as evidenced by the suppression in Hong Kong, which made zero sense without reference to how Chinese political institutions sustain themselves. What primarily held them back was the risk of an invasion sparking a conflict with the US, and only secondarily the economic and foreign-strategic pain, but all those risks diminish by the day, leaving China's raw existential motivation unchecked.
The biggest victory for CCP will be Taiwan willingly joining PRC. Nothing else will be a better testament to the CCP model
Reunification with the mainland isn’t a completely unpopular idea in Taiwan. The economic ties are already extremely deep (largest trading partner by far).
Reunification in Taiwan has nothing to do with chips, and militarily PRC was able to do so a long time ago. The political will in PRC to "kill other Chinese" is zero.
> The political will in PRC to "kill other Chinese" is zero.
Counts for nothing, these narratives are built on sand. Russians also saw Ukrainians as "brothers", as did South/North Koreans before the war, among countless other examples.
There has always been political will in China to kill other Chinese, going back thousands of years. This works vastly differently from Western humanitarian philosophy.
Invading Taiwan isn't about chips at all, and in fact chips are actively disincentivizing invasion. Semiconductor fabs and the oodles of atomically precise ultra clean and ultra expensive equipment inside absolutely do not mix well with bombs.
I think people are wildly overreacting. There is a new CEO, and he wants to make a splash, so he throws around "AI"; that's it. Of course there will be AI-related features in Firefox; there already are! Wait and see what the actual specifics are before reacting?
Also, a minor detail here: we're not paying for Firefox! Why are so many people feeling entitled? Mozilla has to do something other than beg Google to survive. Perhaps we need a fork of Firefox that is sustained by donations and backed by a non-profit explicitly chartered to make decisions based on community feedback? I don't see a problem with that Wikipedia-like approach; I don't think any of today's forks have a good/viable org structure that is fully non-profit (as in it won't seek profit at all). Mozilla has made some bad decisions recently, but they're a far cry from deserving the end-of-the-world outcry they're getting.
If we don't donate to Mozilla and we don't pay them money, then we have to be the product at some point. Even if they don't want it to be that way, they have to placate some other business interests.
I hope the EU also pays attention, perhaps some of their OSS funding can help setup an alternate org.
Waiting until the thing is done to voice your opinions on the thing is a very poor strategy if you want to have any influence over what the thing turns out to even be.
>>> Also, a minor detail here: we're not paying for Firefox! Why are so many people feeling entitled? Mozilla has to do something other than beg Google to survive.
Because some of us have supported, donated to, advocated for, and participated in the firefox and mozilla communities over the years, and feel betrayed by the abandonment of principles, kowtowing to adtech surveillance "features", and overall enshittification of a once beloved browser that we hoped would allow for an alternative to the chrome blob, as they once were to the atrocity that was internet explorer.
It's perfectly reasonable to call out foundations and organizations that utterly abandon and fail to live up to principles. Mozilla is just a PR wing for Alphabet and whitewashing the chromification of all browsers, at this point.
Ladybird and some other alternatives will come around. I don't see any future in which Mozilla returns to its principles: the people leeching off / running the foundation won't ever be interested in returning to a principled stance, but in changing the brand, or pursuing profit, or some other outcome divergent from the expectations and considerations of the original supporters. They keep trying to commodify and branch out and waste insane amounts of money on nonsense, and hire CEOs that lose the plot before they ever start the job. Mozilla is functionally dead, for whatever vision of it a lot of us might once have had.
By the time they'd have a chance to fix anything, maybe it'll be practical to have an AI whip up a new browser engine and we'll all have bespoke, feature complete privacy respecting browsers built on the fly.
I think all of Firefox's alternatives depend on Firefox as an upstream, and Firefox itself has low adoption right now. I'm all for any approach that avoids Google/Chrome being the only driver of browser standards, and this kind of knee-jerking doesn't help with that goal. I don't care so much about Mozilla's past or how terrible an "AI browser" will be as much as having viable alternatives and not having a monopoly on browser standards. A fork and alternative browsers will do nothing to help with that. Either there is an alternative to Mozilla or there isn't.
This isn't kneejerk, it's bone deep weariness of year over year of failure and corporate schlock and weasel words and trying to rebrand firefox and sucking on the google teat while pretending they have a purpose. They're doing anything and everything but seriously putting in the time and effort to distinguish themselves and be a viable competitor. All of the good stuff happens downstream; firefox development is a continual disappointment and enshittification, and they don't even profit from it. They enshittify on behalf of google and the adtech blob.
The downstream projects constantly have to tear out features the mozilla team try to jam in, for no good reason, almost like there's an arms race and the adtech blob is just trying to slip one past all the people who want simple privacy preserving software.
Unfortunately, it’s impossible to donate to Firefox development. Donations to Mozilla expressly do not go to pay for Firefox, since Firefox is developed by the for-profit Mozilla Corporation and they’ve decided not to accept money from users. So I guess we are already “the product.”
I liked it the few times it worked but so many times it's chosen to do things like translate Japanese into Spanish when I speak English natively and never would've chosen Spanish as the target language. It just feels convoluted and poorly implemented, like most AI features in most software.
I don't give a shit about the specifics. I don't want AI in my browser period.
Yes, AI is already in Firefox. That does not in any way make more AI any less unacceptable.
I don't want to opt out. I don't want to dismiss nags. I don't want to fuck around with internal configs and hope that the options do what they say (they often don't).
I want a browser that renders websites. That's it. Anything else is detracting from Firefox's core value proposition: being a good web browser.
I want a web browser. I do not want, need, nor am I interested in entertaining an ""AI browser"", whatever the hell that even means. I want to browse the goddamn web, not interact with "AI". We've had AI shoved into literally every conceivable corner of every piece of software. Nobody, nobody needs more AI in more places. We have ten million ways to access AI in absolutely every other program.
Just give me a fucking web browser. This is not that complicated.
Because the AI implementations they’ve already done are shit? Buttons on by default, features that are annoying to remove for normal users (the context menu ‘search with chatbot’).
It’s just garbage, get this shit out of here. Stop adding things to my window without permission and stop with the popups announcing them to me.
Anyone who seriously wants this will seek it out. Leave the rest of us alone.
1. Why would I donate to Mozilla? Mozilla hates me.
2. When Mozilla was 30% of the browser market rather than 3%, they could have easily cleaned up on donations. If they had made whatever extension transition that they thought they needed to do but while protecting all contemporary extension capabilities and not using it as a power grab to limit user control, they'd still have 30% of the market. If they hadn't made the business decision to permanently be a wonky Chrome, people wouldn't think of them as a wonky Chrome.
3. Mozilla has plenty of money. If you can't create a sustainable browser with a billion and a half dollars in the bank and a fully-featured browser, it's because you don't want to. You already have the browser, you can't whine about how complicated it is to create a browser. Pay developers with the interest. Stop paying these useless weirdo executives a fortune.
But enough about Mozilla. If you're some Bitcoin or startup billionaire, I'll ask you the same thing. Firefox is sitting right there and licensed correctly. You want people to respect you and remember you nicely when you're dead? Take it, fork it, put that same billion and a half into a trust, and save an open door to the Internet at a time when it's really needed. You've won in life, it will be easy to make people trust you if your ambition is just to do good. Steal Firefox, put it on the right track, and people will flock back to it. I know Ladybird is interesting, but a bird in the hand is worth two in the bush.
I would just like to encourage all Rust devs to distribute binaries. No matter what compiler you choose, or what Rust version, users shouldn't have to build from source. I mostly see this with small projects to be fair.
When people say TV shows make hacking look too easy, I always comment that I think they're not too far off when the "hackers" are state-sponsored. Part of the benefit of compartmentalizing things like tool/exploit dev from ops is that you get good tooling that you just point and shoot, and it mostly works.
With enterprise/corporate red-teaming you have to work for it a lot, update your tooling, attacks, etc... do a lot of recon. But even then, even in companies that take security seriously and pay for it too, experienced pros spend a few days and get domain-admin (or equivalent) half the time. And I'm talking about in 2025 with everyone and their mom running EDR that have only gotten better over time (in my opinion).
The CIA's tools probably don't have flashy graphics, but even the ones that were leaked a while ago give a good insight into things.
I can imagine an experienced operator automating things quite a bit: when you give them a target, they'll just run a few commands, wait some time, and get a shell with lots of powerful capabilities.
As a matter of fact, I think they don't show enough "easy hacking" in the movies, where you take over hospitals, government agencies, courts, etc. in a matter of minutes and start snooping around, or just wipe them out. That would feel unbelievable to movie/TV audiences, so they leave it out.
I know this is a good thing, but I've struggled a lot on systems that don't have good/reliable NTP time updates.
Also, at some point in the lifetime graph, you start getting diminishing returns. There aren't many scenarios where your private keys get stolen but the bad guys can't maintain access for more than a couple of weeks.
In my humble opinion, if this is the direction the CA/B and other self-appointed leaders want to go, it is time to rethink the way PKI works. Maybe we should stop thinking of Let's Encrypt as a CA; it (and similar services) could function more as real-time trust facilitators. If all they're checking for is server control, then maybe a near-real-time protocol to validate that, issue a cert, and have the webserver use it immediately is ideal? Lots of things would need to change for this to work, of course, but it is practical.
Not so long ago, very short DNS TTLs were met with similar apprehension. Perhaps the cert expiry should be tied to the DNS TTL, with the server renewing much more frequently (e.g., if the TTL is 1 hour, the server renews every 15 minutes).
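A sketch of that hypothetical TTL-coupled policy (the 4x lifetime and TTL/4 renewal cadence are made-up numbers for illustration, not anything a CA offers today):

```python
from datetime import datetime, timedelta

def ttl_coupled_cert(now: datetime, dns_ttl_seconds: int):
    """Hypothetical policy: cert lifetime is 4x the DNS TTL, and the server
    renews every TTL/4 so a freshly issued cert is always available."""
    not_after = now + timedelta(seconds=4 * dns_ttl_seconds)
    renew_every = timedelta(seconds=max(dns_ttl_seconds // 4, 60))  # floor: 1 min
    return not_after, renew_every

# 1 hour TTL -> 4 hour cert lifetime, renewed every 15 minutes
not_after, renew_every = ttl_coupled_cert(datetime(2025, 1, 1), 3600)
print(not_after, renew_every)
```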
Point being, the current system of doing things might not be the best place to experiment with low expiry lifetimes, but new ways of doing things that can make this work could be engineered.
Nothing precise, but for example, if it's been over a day since the last time update, I start getting errors on various sites, including virtually every site behind Cloudflare (assuming you're referring to the initial issue I mentioned).
One of the setups that gives me issues is machines that are resumed from a historical snapshot and start doing things immediately; if the NTP time hasn't been updated since the last snapshot, you start getting issues (despite snapshots being updated after every daily run). Most sites won't break (especially with a 24h window, although longer gaps always cause issues), but enough sites change their certs so frequently now that it's a constant issue.
Even with a 10-year cert, if you connect at just the wrong time you'll have issues; the difference now is that it isn't a once-in-10-years event but, sometimes, a once-every-few-days one.
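Back-of-the-envelope for how often a behind-the-clock client would hit a not-yet-valid cert, assuming (as a simplification) that notBefore equals issuance time with no backdating and renewals happen on a fixed cadence:

```python
def skew_failure_fraction(clock_skew_s: float, renewal_interval_s: float) -> float:
    """Fraction of the time a client whose clock runs behind by `clock_skew_s`
    sees a cert whose notBefore is still in its (skewed) future."""
    return min(clock_skew_s / renewal_interval_s, 1.0)

DAY = 86400.0
# Client a day behind, server rotating certs every 10 days: broken ~10% of the time.
print(skew_failure_fraction(DAY, 10 * DAY))
# Same client, certs renewed yearly: broken well under 1% of the time.
print(skew_failure_fraction(DAY, 365 * DAY))
```

In practice CAs do backdate notBefore somewhat, which shrinks the failure window but doesn't change the proportionality.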
Perhaps if TLS clients requesting a time update from the OS were a standardized thing, and NTP client daemons supported that method, it would be a lot less painful?
In my case, it's more that the system still thinks it's yesterday until the NTP daemon updates the time a minute or five after resuming. Being behind by a day wasn't a huge deal before these really short cert lifespans.
This isn't something I've seen; are you running systems without an onboard RTC, or with ntpdate doing periodic updates, etc.?
The closest I've gotten to this would be something like a Raspberry Pi, but even then NTP is pretty snappy as soon as there's network access, and until there's network access I'm not hitting any TLS certs.
Windows is the fastest from my testing; even then there is about a minute or so immediately after restoration where I get TLS errors on some sites.
Honestly, I just wish that browsers used NTP directly and used that instead of the system time. If the CA/B wants to go this direction, maybe this will be a good enhancement to make it more tenable?
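The time-format side of that is trivial: converting an NTP timestamp (32-bit seconds plus 32-bit fraction, 1900 epoch, per RFC 5905) to Unix time is a one-liner. A sketch that leaves out the actual UDP exchange:

```python
NTP_UNIX_OFFSET = 2_208_988_800  # seconds from the NTP epoch (1900) to the Unix epoch (1970)

def ntp_to_unix(seconds: int, fraction: int) -> float:
    """Convert an NTP 64-bit timestamp (32.32 fixed point) to Unix time."""
    return seconds - NTP_UNIX_OFFSET + fraction / 2**32

# NTP seconds 3944678400 is 2025-01-01T00:00:00Z, i.e. Unix 1735689600
print(ntp_to_unix(3944678400, 0))
```

The hard part for browsers wouldn't be the conversion but deciding which NTP servers to trust, since an attacker who controls your clock source can resurrect expired certs.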
I think perhaps I'm doing a poor job of explaining the specific issue I encounter regularly, but if a system has been offline for a day, it will be skewed by a day. Unless you're assuming a hardware clock that's on battery power is always available, and that the time/ntp daemon checks that and updates the clock fast enough.
My use case isn't unique; if you have an embedded device, I'm sure there are even more stringent limitations. Is there really that big a difference if the notBefore is a day in the past instead of an hour, or even a week? Perhaps when shortening notAfter, the notBefore backdating should be increased.
> Unless you're assuming a hardware clock that's on battery power is always available, and that the time/ntp daemon checks that and updates the clock fast enough.
Not "and", just "or". A hardware clock is assumed, but in absence of that it's the job of the OS to fix the clock before it breaks anything.
And an hour is already generous. Extending it to a day or a week gets weird and helps almost nobody.
The overwhelming majority of consumer laptops/desktops/etc. today have an RTC, so yes, there's a battery keeping the clock chip awake that keeps the time reasonably correct after the machine has been powered off or hibernating.
You're totally right, but how big is that very small minority?
What irks me slightly is that this is the type of thinking I typically see from companies like Google, where only 0.1% of users will be affected by a change, but 0.1% of a billion is 1 million people.
I'm not saying I disagree with you; perhaps I'm the only person who might be affected, in which case who cares. But Let's Encrypt is a critical service provider at this point, and they shouldn't calculate impact like a commercial entity that can ignore people for lack of revenue implications.
How unreasonable would I be if I expected TLS client clock precision to be part of the TLS spec, and such changes should require a version bump? That's probably extreme, but how can we ensure stability and reliability when these systems billions use change? Is the CA/B making decisions for everyone, even the minority? Do browser vendors care if some IoT device stops working?
We can ensure stability and reliability with RTCs and NTP. The minority here is systems with no RTC that try to perform TLS operations before NTP is operational. The fix is to move NTP earlier in the dependency tree. Or just wait a minute.
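For a system with no RTC, one belt-and-braces option is to sanity-check the clock against NTP before trusting certificate validity windows at all. A minimal SNTP sketch in Python (the server name and 60-second threshold are arbitrary choices for illustration, not anything from this thread):

```python
import socket
import struct
import time

NTP_EPOCH_OFFSET = 2208988800  # seconds between 1900-01-01 (NTP) and 1970-01-01 (Unix)

def parse_sntp_reply(data: bytes) -> float:
    """Extract the server's transmit timestamp (bytes 40..47) as Unix time."""
    secs, frac = struct.unpack("!II", data[40:48])
    return secs - NTP_EPOCH_OFFSET + frac / 2**32

def clock_offset(server: str = "pool.ntp.org", timeout: float = 2.0) -> float:
    """Return (server time - local time) in seconds via a single SNTP query."""
    request = b"\x1b" + 47 * b"\x00"  # LI=0, VN=3, Mode=3 (client request)
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.settimeout(timeout)
        s.sendto(request, (server, 123))
        reply, _ = s.recvfrom(48)
    return parse_sntp_reply(reply) - time.time()

# Usage (requires network):
#   if abs(clock_offset()) > 60:
#       refuse to evaluate notBefore/notAfter until the clock is fixed
```

A real deployment would just run an NTP daemon early in boot, but this shows how little is needed to detect "the system still thinks it's yesterday" before TLS breaks.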
I don't want the CA/B to defer security wins for the 99% because of hardware and software trade-offs the 1% made.
I'll trust you know more than I and concede on this then. I didn't consider the security benefits to be worth even a minor convenience to even a handful of people.
Maybe a dumb but wise approach is to just code as usual without thinking about "AI", and when you hit difficulties or efficiency issues, look for tools to solve them. Think in terms of specific tools instead of "AI" or "LLMs".
Do you need better auto-completion? Do you need code auto-generation? Do you need test cases generated, and lots of them? Maybe LLMs are ideal for you, or not.
Personally, the best use I've gotten out of it so far is replacing the old pattern of googling something and clicking through a bunch of sites like Stack Overflow to figure things out: asking an LLM to generate example code for how to do something, and using that as a reference to solve the problem. Sometimes I really just need the damn answer without having a deep debate with someone on the internet, and sometimes I need holistic solution engineering; AI helps with either. But if I don't know what questions to ask to begin with, it will be forced to make assumptions, and then I can't validate the suggestions or code it generates based on those assumptions. So it's very important to me that the questions I ask an AI tool are in a subject domain I understand well, and that the answers are things I can independently validate.
I was looking at this guy's benchmark here: https://gist.github.com/Chick3nman/32e662a5bb63bc4f51b847bb4...
Etype 23 (rc4-hmac) gets ~3500 kH/s; etype 18 (aes256-cts-hmac-sha1-96) gets roughly 2500 kH/s. A big difference, but somehow I thought it would be much bigger? 2.5M guesses/second is still not so bad.
I've only done kerberoasting and ASREP-roasting a handful of times, but from what I recall, RC4 can be cracked within reasonable time regardless of your password complexity, while with AES, a long and complex service account password will take decades or centuries to crack. But (!!) it is still quite common to use relatively weak passwords for service accounts, and a lot of the time the purpose of the service is included in the password, which makes guessing a bit easier.
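To put that rate in perspective, a back-of-envelope worked example (the rate is taken from the benchmark above; the charset sizes and lengths are illustrative assumptions):

```python
# Worst-case exhaustive-search time at the etype 18 rate quoted above.
RATE = 2_500_000  # guesses per second (~2500 kH/s)

def crack_time_years(charset_size: int, length: int, rate: int = RATE) -> float:
    """Years to exhaust the full keyspace of a random password."""
    return charset_size ** length / rate / (3600 * 24 * 365)

print(crack_time_years(36, 8))   # 8 chars, lowercase+digits → ~0.04 years (about two weeks)
print(crack_time_years(62, 14))  # 14 chars, mixed-case+digits → on the order of 1e11 years
```

And real attacks do far better than brute force: a wordlist like "SqlSvc2019!" variants collapses the search space enormously, which is the weak-service-password point above.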
My criticism is that Kerberos (as far as I'm aware) does not offer modern, memory-hard PBKDFs (keyed Argon2?). That might be asking too much, but then why doesn't Microsoft alert directory administrators (and security teams) by default when someone is dumping tickets for kerberoasting? It's not normal for any user or service to request tickets for literally all your service accounts. Lastly, Microsoft has Azure Key Vault in the cloud, but they're so focused on cloud that they don't have an on-prem keyvault solution. If a service account is compromised, you still have to find everything that uses it and change the password one by one, whereas with a keyvault-like setup you could rotate passwords without causing outages.
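For reference, the AES etypes derive keys with PBKDF2-HMAC-SHA1 (RFC 3962), 4096 iterations by default, salted with realm + principal name. A sketch of just that stage, assuming made-up realm/principal values (the real etype 18 key additionally passes the PBKDF2 output through an AES-based DK() step, omitted here):

```python
import hashlib

def krb_aes_pbkdf2(password: str, realm: str, principal: str,
                   iterations: int = 4096) -> bytes:
    """PBKDF2 stage of aes256-cts-hmac-sha1-96 string-to-key (RFC 3962)."""
    salt = (realm + principal).encode()  # default salt: realm concatenated with principal
    return hashlib.pbkdf2_hmac("sha1", password.encode(), salt, iterations, dklen=32)

# Hypothetical service principal for illustration.
key = krb_aes_pbkdf2("Summer2024!", "EXAMPLE.COM", "svc_sql")
# 4096 SHA-1 iterations cost CPU but require almost no memory, which is
# exactly the property that lets GPUs grind through millions of guesses
# per second; a memory-hard KDF would hit them much harder.
```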
Rotating the KDC/krbtgt credential is also still a nightmare.
From what bits I've heard, Microsoft expects its users to move to Entra ID instead of on-prem domains (computers joined directly to Entra ID instead of to domain controllers). That's a nice dream, but in reality, 20 years from now there will still be domain controllers on enterprise networks.