> in this case it's Paramount saying "we'll up out government-blocks-the-sale fee from $2.xbn to $5bn" which is saying they have a lot of confidence the merger will go through
No.
Paramount has nothing to do with these numbers, which both come from the Plan of Merger among Netflix, Warner and others [1].
Paramount's bid constitutes an Acquisition Proposal under § 6.2(c). It is a "proposal, offer or indication of interest" from Paramount, a party who is not "Buyer and its Affiliates," which "is structured to result in such Person or group of Persons (or their stockholders), directly or indirectly, acquiring beneficial ownership of 20% or more of the Company’s consolidated total assets."
Given it "is publicly proposed" after the date of the Plan of Merger and "prior to the Company Stockholder Meeting," it is a Company Qualifying Transaction (8.3(D)(x)).
If 8.3(D)(y) is then satisfied (a condition I got bored jumping around to pin down; if thar be dragons, they be here) and Warner consummates the Company Qualifying Transaction or "enters into a definitive agreement providing for" it (8.3(D)(z)(2)), the Buyer can terminate the Plan of Merger under 8.1(b)(iii). That, in turn, triggers the Company Termination Fee of $2.8bn, which is separate from the Regulatory Termination Fee of $5.8bn Netflix would have to pay Warner if other shit happened.
Also a huge Eno fan here. Put together, I probably have listened to Music for Airports, Another Green World, Taking Tiger Mountain and Discreet Music more than any other artist. Maybe Philip Glass comes in at a close second.
Anyways, in 2016, Tero Parviainen (@teropa) shared this really cool long-form exploration called "JavaScript Systems Music – Learning Web Audio by Recreating The Works of Steve Reich and Brian Eno" that I enjoyed tremendously (and I don't even like JavaScript!).
You can relay through any other SSH server if your target is behind a firewall or subject to NAT (for example the public service ssh-j.com). This is end-to-end encrypted (SSH inside SSH):
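For example, with OpenSSH's -J/ProxyJump (hostnames below are placeholders, and ssh-j.com has its own alias-registration scheme, so check its docs for the exact form):

```
# One-off: hop through the relay to reach a host behind NAT
ssh -J youruser@relay.example.com user@target.internal

# Or persistently, in ~/.ssh/config:
Host target.internal
    ProxyJump youruser@relay.example.com
```

The relay only ever sees the ciphertext of the inner SSH session, which is what makes it end-to-end encrypted.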
With transcribing a talk by Andrej, you already picked the most challenging case possible, speed-wise. His natural talking speed is already >=1.5x that of a normal human. He's one of the few people for whom you absolutely have to set your YouTube speed back down to 1x to follow what's going on.
In the spirit of making the most of an OpenAI minute, don't send it any silence.
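One way to do that is ffmpeg's silenceremove filter; a sketch (filenames are placeholders, and this may not be the exact command, but the thresholds match the numbers I used):

```
ffmpeg -i talk.mp3 -af \
  "silenceremove=stop_periods=-1:stop_threshold=-50dB:stop_duration=0.02:stop_silence=0.02" \
  talk-short.mp3
```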
This cut the talk down from 39m31s to 31m34s, by replacing any silence (with a -50dB threshold) longer than 20ms with a 20ms pause. And to keep with the spirit of your post, I measured only that the input file got shorter; I didn't look at all at the quality of the transcription when feeding it the shorter version.
The argument is futile because the goalposts move constantly. One moment the assertion is that it's just mega copy-paste; the next, when evidence is shown that it can one-shot seemingly novel and correct answers from an API spec or grammar it has never seen before, the goalposts move to "it's unable to produce results on things it's never been trained on or seen in its context" - as if making up a fake language, asking it to write code in that language, and noting its inability to do so without a grammar is an indication of literally anything.
To anyone who has used these tools in anger, it's remarkable that, given they're only trained on large corpora of language and feedback, they're able to produce what they do. I don't claim they exist outside their weights; that's absurd. But the entire point of nonlinear activation functions with many layers and parameters is to learn highly complex nonlinear relationships. The fact that they can be trained as much as they are, with as much data as they have, without overfitting or exploding gradients means the very nature of language contains immense information in its encoding and structure, and the network, by definition of how it works and is trained, does -not- just return what it was trained on. It's able to curve-fit complex functions that interrelate semantic concepts. Those concepts are clearly not understood the way we understand them, but in some ways the result represents an "understanding" that's sometimes perhaps more complex and nuanced than even ours.
Anyway, the stochastic-parrot metaphor misses the point that parrots are incredibly intelligent animals - which is apt, since those who use the phrase are missing the point too.
Anytime Oracle is brought up is a great time to repost the famous Lawnmower quote:
> "As you know people, as you learn about things, you realize that these generalizations we have are, virtually to a generalization, false. Well, except for this one, as it turns out. What you think of Oracle, is even truer than you think it is. There has been no entity in human history with less complexity or nuance to it than Oracle. And I gotta say, as someone who has seen that complexity for my entire life, it's very hard to get used to that idea. It's like, 'surely this is more complicated!' but it's like: Wow, this is really simple! This company is very straightforward, in its defense. This company is about one man, his alter-ego, and what he wants to inflict upon humanity -- that's it! ...Ship mediocrity, inflict misery, lie our asses off, screw our customers, and make a whole shitload of money. Yeah... you talk to Oracle, it's like, 'no, we don't fucking make dreams happen -- we make money!' ...You need to think of Larry Ellison the way you think of a lawnmower. You don't anthropomorphize your lawnmower, the lawnmower just mows the lawn, you stick your hand in there and it'll chop it off, the end. You don't think 'oh, the lawnmower hates me' -- lawnmower doesn't give a shit about you, lawnmower can't hate you. Don't anthropomorphize the lawnmower. Don't fall into that trap about Oracle." - Bryan Cantrill
You must have been extremely lucky because I've had multiple apps just trigger endless SELinux warnings on RHEL8 (Rustdesk is an example) and I very much subscribe to the views in this article:
I'm not going to waste my time fighting SELinux to stop non-existent threats (I'm just using a desktop and I'm not a high profile target). Too many false positives and I'll just turn it off. And in my experience there are always too many false positives.
I parent a child with autism. The care needs are intense; the burnout is real. But I can't employ usual burnout mitigation techniques like taking time off or making career / lifestyle changes. The world relentlessly marches forward. However, I've learned human resilience is AMAZING. You'll be surprised at what you are capable of when life asks for it.
Here are a few big insights that have really helped me:
- You never have to feel like doing something to start doing it. This insight is so strangely freeing. It really got me out of my head and the loop that I was in berating myself about motivation.
- The act of doing something is usually what creates the motivation to continue. Tell yourself you'll spend 30 minutes on whatever it is; no matter what it is, I always know I can survive 30 minutes of it. However, 90% of the time, when the timer goes off, I don't feel like stopping. I've found my groove and I keep going.
- Procrastination isn't poor time management; it's poor emotional management. Be gentle with yourself. Know that you can be scared, frustrated, or angry but don't have to let those emotions define you. CBT works really well here. Don't define emotions as "good" or "bad". Define them as "useful". If the emotion you feel isn't useful, acknowledge it, but realize it's fleeting and let it pass on. Some people like to visualize emotions as clouds drifting by.
"And whatever your labors and aspirations, in the noisy confusion of life, keep peace in your soul. With all its sham, drudgery and broken dreams, it is still a beautiful world. Be cheerful. Strive to be happy."
Using some scripts/parsers to take DTrace/perf/Instruments/ETW data and transfer it to Perfetto was one of the most exciting moments of my performance-engineering career. It's such a powerful workflow compared to every single other one I've ever used.
It just shows contention in a way that is so hard to see otherwise.
If this tool packages some of that up in an easier-to-use form, it's going to be a great tool for some.
Agreed with most of this, but I'm skeptical of the rsc.io/script DSL approach. I'll try it, though, because Russ is often right.
shameless advert: do you wish testify was implemented with generics and go-cmp, and had a more understandable surface area? Check out my small library, testy https://github.com/peterldowns/testy
shameless advert: do you want to write tests against your postgres database, but each new test adds seconds to your test suite? Check out pgtestdb, the marginal cost of each test is measured in tens of milliseconds, and each test gets a unique and isolated postgres instance — with all your migrations applied. https://github.com/peterldowns/pgtestdb
You probably want the "Maple Edit" of the Hobbit (also known as "J.R.R. Tolkien's The Hobbit"). It is 247 minutes and is set up as a single film with intermissions.
>Those rates also allow me to make thousands per month, risk-free, in interest off my savings.
If you are in a state with income tax, I would take a minute, even immeasurable, increase in risk by putting it in US Treasuries (like TTTXX at Merrill), making 95%+ of your return exempt from state income taxes. In case you aren't already doing that.
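As a rough illustration with entirely hypothetical numbers (not anyone's actual balances or rates): $200k in a Treasury-heavy fund yielding 5%, in a state with a 9% income tax, where 95% of the fund's income is state-exempt:

```python
balance = 200_000     # hypothetical savings balance
yield_rate = 0.05     # hypothetical money-market yield
state_tax = 0.09      # hypothetical state income-tax rate
exempt_share = 0.95   # share of fund income from Treasuries (state-exempt)

interest = balance * yield_rate                        # $10,000/year of interest
state_tax_saved = interest * exempt_share * state_tax  # tax avoided vs. a fully taxable fund
print(round(state_tax_saved))                          # → 855
```

Not life-changing money, but it's free for roughly zero extra risk.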
I used to work in trading and I think some of the principles I learned there apply directly to salary negotiations.
Two key things I think about all the time are price discovery and competition. Price discovery means that in some illiquid markets, you don't really know what a thing will trade at unless you try to trade it (which itself reveals some information to the counterparty, but that's not important in the job case).
From that point of view, I am always surprised to hear about people who hate interviewing, are reluctant to interview, etc. Interviewing is the only real way to learn "the market": what offers to you would actually look like, among other information. If you always get offers at the low end of a range, or if you always fail interviews, that is a very good signal that you can act on.
Competition - when we traded bonds, we'd always go to multiple dealers for quotes. The dealers knew that, by rule, we always asked at least X of their competitors, so whatever number they gave us, we'd compare against others. That prompted them to strike a balance between a good deal for them and a price we'd actually accept. From that point of view, the best negotiating advice is to have multiple offers, because (1) you see what the highest one is, (2) you can see how close or dispersed they are and infer something from that, and (3) they give you leverage. E.g., say you want job X most, but job Y offered $20K more. You can always tell job X "I'd love to work with you, but it's hard to accept a lower offer - can you match?" This isn't even a game; you'd be saying it sincerely. Or, if your offers are close, you can push the highest one a bit, because you know with more confidence that they aren't far out of the norm.
It's a garden-path sentence [1]. Garden-path sentences have ambiguous parses that typically require backtracking to correct earlier misinterpretations.
It has been built instead by extending Nginx with an OpenResty-based DNS resolver. You could even extract it from Kong itself and use it standalone.
If you're looking to archive/download/whatever the images on gfycat.com, you can do the following:
0. This process only works in Chrome(-based) browsers
1. Open each of the following domains:
- gfycat.com
- api.gfycat.com
- weblogin.gfycat.com
- thumbs.gfycat.com
2. You will see a scary HTTPS warning. This is fine.
3. Type "thisisunsafe" into the page (no need to select anything or click an input field). This overrides any and all HTTPS errors that aren't technical issues.
4. After doing this for every domain, the HTTPS errors will be ignored for the rest of the session and the site will work again. If anything is still broken, hit F12 for the dev tools and check the network tab to see what domains failed and maybe try the above again.
Two days of downtime starting on a week day is not a good sign. Back up any images you want to keep!
Cache API being "useful" and "entirely client-driven" - both great points, but how does it contradict my statement about the same API being usable for server push (i.e. force-feeding the client some objects it didn't ask for), may I ask?
Remember, the "client" (assuming the user didn't disable JavaScript) is a dumb machine that executes whatever (code downloaded from) the server tells it to, within some (pretty generous) limits. Imagine the index HTML containing this:
<script type="module">
const cache = await caches.open('push');
await cache.put('/resource.json',
  new Response('{"foo": "bar"}'));
await cache.put('/resource.gif',
  new Response(atob('R0lGODlh...')));
</script>
That, of course, assumes the rest of the code would use cache.match() instead of the fetch API or XHR. Or, more realistically, a wrapper that tries both.
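Such a wrapper could be as simple as this sketch (browser-only, since `caches` exists only in secure contexts; the 'push' cache name matches the snippet above):

```
// Try the force-fed cache first, fall back to the network.
async function load(url) {
  const cache = await caches.open('push');
  const hit = await cache.match(url);
  return hit ?? fetch(url);
}
```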
Not really a script, but a `.ssh/config` to automatically deploy parts of my local cli environment to every server i connect to (if username and ip/hostname matches my rules).
On first connect to a server, this syncs all the dotfiles I want to a remote host, and on subsequent connects it updates them.
Idk if this is "special", but I haven't seen anyone else do this really and it beats for example ansible playbooks by being dead simple.
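For the curious, the core of the mechanism can be sketched in `.ssh/config` like this (the host/user patterns, paths, and the use of rsync are placeholders, not my actual rules):

```
# Only for matching hosts/user: run a local sync command on each connect
Match host "*.example.net" user "me"
    PermitLocalCommand yes
    LocalCommand rsync -au ~/.dotfiles/ %r@%h:~/ >/dev/null 2>&1
```

LocalCommand runs on the local machine after each successful connection, so the sync happens transparently every time you ssh in.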
"I treat memcached in infrastructure as a very stable service."
I run memcached at a large scale. You are totally right. Every other year we will find ONE bad memcached node down. We use nutcracker instead of mcrouter for consistent hashing to each memcached node. Once I read "We also run a control plane for the cache tier, called Mcrib. Mcrib’s role is to generate up-to-date Mcrouter configurations" -- I was like oooooh boy, here we go....
Knowing memcache is a rock comes with experience though.
My experience has been that the people opposed to types won't be convinced to start liking them by anything you can tell them or have them read. In all of the cases where I've seen Sorbet be adopted, the process looked like this:
1. Ambitious team who wants types does work to get the initial version passing in CI. Importantly, it's only checking at `# typed: false`, which basically only checks for missing constants and syntax errors.
2. That initial version sits silently in the codebase over a period of days or weeks. If new errors are introduced, it pings the enthusiastic Sorbet adoption team; they figure out whether it caught a real bug or whether the tooling could be improved. It does not ping the unsuspecting user yet.
3. Repeat until the pings are only high-signal pings.
4. Turn Sorbet on in enforcing mode in CI. It's still only checking at `# typed: false` everywhere, but now individual teams can start to put `# typed: true` or higher in the files they care about.
5. Double check that at this point it's easy to configure whatever editor(s) your team uses to have Sorbet in the editor. Sorbet exposes an LSP server behind the `--lsp` flag, and publishes a VS Code extension for people who want a one-click solution.
6. Now the important part: show them how good Sorbet is, don't tell them. Fire up Sorbet on your codebase, delete something, and watch as the error list populates instantly. Jump to definition on a constant. Try autocompleting something.
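To make step 1 concrete: even at `# typed: false`, Sorbet still resolves constants, so a hypothetical typo like this fails in CI:

```ruby
# typed: false
class Order
  def total
    PriceCalclator.compute(self) # Sorbet: "Unable to resolve constant PriceCalclator"
  end
end
```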
In my experience trying to bring static types to Ruby users, seeing is really believing, and I've seen the same story play out in just about every case.
One final note: be supportive. Advertise one place for people to ask questions and get quick responses. Admit that you will likely be overworked for a bit until it takes off. But in the long run as it spreads, other teammates will start to help out with the evangelism as the benefits spread outward.
Postgres’s transactional semantics are really useful when building a queue, because of how it interacts with the various pieces.
Connection 1
LISTEN job_updates;
Connection 2
BEGIN;
INSERT INTO jobs (id, …) VALUES ('a-uuid', …);
SELECT pg_notify('job_updates', 'json blob containing uuid and state change info');
COMMIT;
Connection 3 (used when Connection 1 is notified)
BEGIN;
SELECT id, … FROM jobs WHERE id = 'a-uuid' FOR UPDATE SKIP LOCKED;
UPDATE jobs SET state = 'step1_completed' WHERE id = 'a-uuid';
SELECT pg_notify('job_updates', 'json blob containing uuid and state change info');
-- do the thing here: computation, calling an external API, etc. If it fails, roll back.
COMMIT;
Because notify has transactional semantics, the notify only goes out at transaction commit time. You want to use a dedicated connection for the notify.
The only downsides I can immediately think of are that every worker will contend to lock that row, and you'll need to write periodic jobs to clean up/retry failures.
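The cleanup/retry part can be a periodic statement like this sketch (the state names and `updated_at` column are assumptions about your schema):

```sql
-- Re-queue jobs that were claimed too long ago and never finished
UPDATE jobs
   SET state = 'pending'
 WHERE state = 'in_progress'
   AND updated_at < now() - interval '10 minutes';
```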
The information won't be as finely detailed since it's not sensing every circuit in the home, but for $70 and no tinkering necessary I can read my home's consumption in XML format every 10 seconds.
[1] https://www.sec.gov/Archives/edgar/data/1065280/000119312525...