Yet... deploy on two clouds and you'll get taxpayers screaming at you for "wasting money" preparing for a black swan event. You can't have both; it's either reliability or lower cost.
Right, but there's no doomsday prophecies around the Year 2038 problem as far as I can tell. I think it falls in the same kind of category of known problems that are certain to happen at some point. Some other things I was thinking of were the theorized ARkStorm, and also an earthquake that could happen in the Cascadia subduction zone.
It also doesn't hit _only_ at that moment: many tasks involve dates in the future, and a system dealing with a far-enough-off date _today_ is already affected.
So it's not as if "everything works" then suddenly "everything doesn't"
And it only bites operations that care about the sign / compute deltas / use signed numbers; otherwise the wraparound happens at 2106-02-07 06:28:16 UTC.
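The two cutoffs are easy to check directly. A small illustrative sketch (Java; class and method names are made up) of how the same 32 bits give you either a 1901 wraparound or a 2106 horizon, depending on signedness:

```java
import java.time.Instant;

class Y2038Demo {
    // Treat a 64-bit epoch-seconds value as a signed 32-bit time_t.
    static long asSigned32(long epochSeconds) {
        return (int) epochSeconds; // truncates to 32 bits, then sign-extends
    }

    // Treat the same 32 bits as an unsigned counter, as some systems do.
    static long asUnsigned32(long epochSeconds) {
        return epochSeconds & 0xFFFFFFFFL;
    }

    public static void main(String[] args) {
        long limit = Integer.MAX_VALUE;
        // The last second a signed 32-bit counter can represent:
        System.out.println(Instant.ofEpochSecond(limit));                  // 2038-01-19T03:14:07Z
        // One second later, the signed counter wraps back to 1901:
        System.out.println(Instant.ofEpochSecond(asSigned32(limit + 1)));  // 1901-12-13T...
        // An unsigned 32-bit counter instead runs out in 2106:
        System.out.println(Instant.ofEpochSecond(asUnsigned32(0xFFFFFFFFL)));
    }
}
```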
Or humans make a mistake. I got burned by the MS-DOS date rollover with signed values about 20 years ago. A salesman fat-fingered a job into the 2060s. Of course, it happened while I was on the other side of the world with no phone access.
Not quite; the first attack happened at approximately UNIX time 1000210380, which isn't quite as round as "1 trillion milliseconds". (It was about 2 days after 1e9).
The St Nicholas Orthodox church sat at the base of the Twin Towers, because it was there for 100 years and they wouldn't take the money to rebuild it elsewhere. They probably served their last Divine Liturgy there on Sunday 9/9/01 as a last blessing before it was destroyed that Tuesday.
I was on a break at work reading a lot about 9/11 for some reason. I went back to fix an easy bug where our timestamps were printing wrong dates (milliseconds vs. seconds), so I became curious what dates would show up if I kept appending zeros to the 1, to get a ballpark of where the dates land. I freaked out after the ninth zero, you know, being so close to the event I was just reading about.
Heh. That's me. The "no presence" part. About the 10x part, ask my colleagues. I want to believe I'm doing some good work but who knows.
But the no presence... I've got a kid, a house, a mortgage. I've been in software since I was a teenager. Am I still fascinated by it? Sure. Will I still spend hours and hours of free time on it? Nope. It long since stopped being a hobby. Right now I like reading and listening to audiobooks, when I have a break from house chores and child rearing. I like to cook and experiment in the kitchen; the endorphin feedback cycle is so much faster (hours) than large-scale software (weeks to years). I like to watch interesting shows on TV. Write. Coding-wise, I'm invisible outside the company I work at.
Not strictly a 9-to-5, but with a kid I do try to have quality family time, so I condense as much as possible into my working day, leaving time to be with my loved ones. If there's something important I'll participate. If there's a PagerDuty alarm I'll jump. But otherwise, I'll deal with it tomorrow. I've long since learned to tell real emergencies from artificial urgency "because there's a milestone deadline!". Sure there is. Like the old saying goes, "I love deadlines. I love the whooshing sound they make as they go by". Is it a customer commitment? No? Then I'll work on it Monday morning; right now I'm out.
I value people like that. Being a hero is a young person's game. You can't be a hero for years and years and not burn out. I've seen it happen. Working every weekend? Then something is wrong with the estimation. Or the design. Or whoever is in charge of priorities.
God help me when I look for a job again. I guess I'll have to rely on references and hope to hell I pass the filtering software so someone actually looks at my application. So far I've been lucky. The last time I actually sent CVs was at the beginning of my career, as a new grad, 20 years ago. Ever since then I've been picked out, carried over, invited in by people who knew me. Really, really hoping that'll keep being the case.
I bet "I'm a lot of people". That's the point of the post. We exist, we contribute, some of us are critical. We just don't chase fame, don't care (much) about recognition (beyond peers, I guess), and have interests and ways to occupy our time other than software. :shrug:
I've accepted that I won't be a "name". Yet I have made suggestions that were adopted into Spring, I have commented on JCPs, I have talked with antirez (though I didn't contribute much there; I'm still in awe of Redis' internal design). I just... don't care much about other people knowing me beyond what I need to pay the bills and keep my immediate peers, management chain, and customers happy.
To me that's coming full circle back to lessons learned from neural networks - I long suspected we'd be heading that way again, with transformers being a computational step rather than the whole thing.
One of my best commits was removing about 60K lines of code, a whole "server" (this was the early 2000s) that had to hold all of its state in memory, and replacing it with about 5K lines of logic lightweight enough to piggyback on another service, with no in-memory state at all. It was a pure algorithmic win: figuring out that a specific guided subgraph isomorphism, where the target was a tree (a directed acyclic graph with a single root), could be done in a single walk through the origin (general) directed graph, emitting vertices and edges to the output tree while maintaining only a small in-process peekable stack of the steps taken from the root that could affect the current generation step (not necessarily just the parent path).
I still remember the behemoth of a commit that was "-60,000 (or similar) lines of code". Best commit I ever pushed.
Those were fun times. I haven't done anything algorithmically impressive since.
I’m a hobby programmer and lucky enough to script a lot of things at work. I consider myself fairly adept at some parts of programming, but comments like these make it so clear to me that I have an absolutely massive universe of unknowns that I’m not sure I have enough of a lifetime left to learn about.
I want to believe a lot of these algorithms will "come to you" if you're ever in a similar situation; only later will you learn that they have a name, or there's books written about it, etc.
But a lot of it is opportunity. Like, I had the opportunity to work on an old PHP backend with 500 ms to 1 second response times (thanks in part to it writing everything to a giant XML string, which was then parsed and converted to a JSON blob before being sent back over the line). Simply rewriting it in naive / best-practices Go brought response times down to 10 ms. In hindsight the project was far too big to rewrite on my own, and I should have spent six months to a year trying to optimize and refactor it instead, but, hindsight.
This is my experience, and my favorite way to learn: go in blindly, look things up when you get stuck/run out of ideas. I think it forces a deeper understanding of the topic, and really helps things "stick". I assume it's related to the massive dumps of dopamine that come from the "Eureka!" moments.
I've "invented" all sorts of already-patented things and all sorts of algorithms, including the PID controller. I think it helped form a very practically useful "intuition".
But, I've noticed that some people are passionately against this type of self exploration.
Yes. I invented the Trie data structure when I was 19. It was very exciting finding out it had a name, and it was indeed considered a good fit for my use case.
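For anyone who hasn't met the structure: a trie stores strings character by character along a path of nodes, so lookups and prefix queries cost O(length of the key). A minimal sketch (Java; names are illustrative):

```java
import java.util.HashMap;
import java.util.Map;

// A minimal trie (prefix tree): each node maps a character to a child node,
// and a flag marks where a complete word ends.
class Trie {
    private final Map<Character, Trie> children = new HashMap<>();
    private boolean word;

    public void insert(String s) {
        Trie node = this;
        for (char c : s.toCharArray()) {
            node = node.children.computeIfAbsent(c, k -> new Trie());
        }
        node.word = true;
    }

    public boolean contains(String s) {
        Trie node = find(s);
        return node != null && node.word;
    }

    public boolean hasPrefix(String s) {
        return find(s) != null;
    }

    private Trie find(String s) {
        Trie node = this;
        for (char c : s.toCharArray()) {
            node = node.children.get(c);
            if (node == null) return null;
        }
        return node;
    }
}
```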
That's so funny; I had the exact same experience. And when I was 16 I "invented" CSVs, because I was too lazy to set up SQL for my Discord bot. I like to think I've gotten better at searching for the correct solution to things rather than just jumping in with my best guess.
I was just going to say the same— LLMs are great for giving a name to a described concept, architecture, or phenomenon. And I would argue that hallucinations don't actually much matter for this usage as you're going to turn around and google the name anyway, once you've been told it.
Read some good books on data structures and algorithms, and you'll be catching up with this sort of comment in no time. And then realise there will always be a universe of unknowns to you. :-) Good luck, and keep going.
do try (so you get the joy of 'small' wins), also do know that it's untouchable (so you don't despair when you don't master quantum mechanics in one lifetime)
(More than?) half of the difficulty comes from the vocabulary. It’s very much a shibboleth—learn to talk the talk and people will assume you are a genius who walks the walk.
That! It took me a while to start. My education in graph theory wasn't much better than your average college grad's. But I found it fascinating and started reading. I was also very lucky to have had two great mentors, my TL and the product's architect; the former helped me expand my understanding of the field.
A lot of it is just technical jargon. Which doesn't mean it's bad; one has to have a way to talk about things. But the underlying logic, I've found, is usually graspable for most people.
It's the difference between hearing a lecture from a "bad" professor at university and watching a lecture video by Feynman, who tried to get rid of scientific terms when explaining things in simple words to the public.
As long as you get a definition for your terms, things are manageable.
You could've figured this one out with basic familiarity with how graphs are represented, constructed, and navigated, just by working through it.
One way to often arrive at it is to just draw some graphs, on paper/whiteboard, and manually step through examples, pointing with your finger/pen, drawing changes, and sometimes drawing a table. You'll get a better idea of what has to happen, and what the opportunities are.
This sounds like "then draw the rest of the owl", but it can work, once you get immersed.
Then code it up. And when you spot a clever opportunity, and find the right language to document your solution, it can sound like a brilliant insight that you could just pull out of the air, because you are so knowledgeable and smart in general. When you actually had to work through that specific problem, to the point you understood it, like Feynman would want you to.
I think Feynman would tell us to work through problems. And that Feynman would really f-ing hate Leetcode performance art interviews (like he was dismayed when he found students who'd rote-memorize the things to say). Don't let Leetcode asshattery make you think you're "not good at" algorithms.
I despise leetcode interviews. These days, with coding LLMs, I see them as even less relevant than they were before.
Yet, you ask someone "how do you build an efficient LFU" and get blank stares (I just LOVE the memcache solution of regions and probabilistic promotion/demotion).
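For reference, one classic answer to that interview question (not the memcache scheme mentioned above, which is probabilistic) is the frequency-bucket LFU: keys are grouped into buckets by access count, with insertion order inside each bucket breaking ties, giving O(1) get/put. A hedged sketch in Java, illustrative names throughout:

```java
import java.util.HashMap;
import java.util.LinkedHashSet;
import java.util.Map;

// Constant-time LFU cache sketch: keys live in frequency buckets, and
// minFreq tracks the lowest non-empty bucket, which is where we evict from.
class LfuCache<K, V> {
    private final int capacity;
    private final Map<K, V> values = new HashMap<>();
    private final Map<K, Integer> freq = new HashMap<>();
    private final Map<Integer, LinkedHashSet<K>> buckets = new HashMap<>();
    private int minFreq = 0;

    LfuCache(int capacity) { this.capacity = capacity; }

    public V get(K key) {
        if (!values.containsKey(key)) return null;
        touch(key);
        return values.get(key);
    }

    public void put(K key, V value) {
        if (capacity == 0) return;
        if (values.containsKey(key)) {
            values.put(key, value);
            touch(key);
            return;
        }
        if (values.size() == capacity) {
            // Evict the oldest key in the lowest-frequency bucket.
            K victim = buckets.get(minFreq).iterator().next();
            buckets.get(minFreq).remove(victim);
            values.remove(victim);
            freq.remove(victim);
        }
        values.put(key, value);
        freq.put(key, 1);
        buckets.computeIfAbsent(1, k -> new LinkedHashSet<>()).add(key);
        minFreq = 1;
    }

    // Move a key up one frequency bucket, advancing minFreq if its old
    // bucket just became empty.
    private void touch(K key) {
        int f = freq.get(key);
        buckets.get(f).remove(key);
        if (f == minFreq && buckets.get(f).isEmpty()) minFreq++;
        freq.put(key, f + 1);
        buckets.computeIfAbsent(f + 1, k -> new LinkedHashSet<>()).add(key);
    }
}
```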
I deleted an entire micro service of task runners and replaced it with a library that uses setTimeout as the primitive driving tasks from our main server.
It's because every task was doing a database call, but they had a whole repo and AWS Lambdas for running it. The stupidest thing I've ever seen.
> I deleted an entire micro service of task runners and replaced it with a library that uses setTimeout as the primitive driving tasks from our main server.
Your example raises some serious red flags. Did it ever dawn on you that the reason these background tasks were offloaded to a dedicated service might have been to shed this load from your main server and protect it from sudden peaks in demand?
These background tasks are all database calls. That means the CPU is just waiting on the database for the majority of each call. Most modern servers can handle 10k of these calls concurrently, and you can do it on one not-so-powerful CPU. Even half a CPU can handle this. Of course it depends on the CPU, but you get my point.
The database is the bottleneck. The database is the thing that needs to be scaled first, before you scale servers. This is the most common web-application pattern. One way is providing more compute to the database (sharding is better than increasing CPU power, as the bottleneck in the database is usually filesystem access, not CPU). Another way is to have a queue buffer the traffic spikes. Both of these address an issue with the database first.
In most web apps, all the server does is wait for the database. The database is doing the compute. You never want the server to do compute, as that becomes what we call a "blocking call." These blocking calls are the ones you offload to an external service, because they "block" entire CPU threads. Database calls do not "block", as the server will context-switch to another green thread during a database call.
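To make the "the server mostly waits" point concrete, here's a sketch (assuming Java 21+ virtual threads; the sleep stands in for a database call) where hundreds of blocking tasks run concurrently on a handful of carrier threads:

```java
import java.time.Duration;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.atomic.AtomicInteger;

// IO-bound concurrency sketch: each task "blocks" for 20 ms (simulating a
// database call), yet all of them overlap, so total wall time stays small.
class IoBoundDemo {
    static int runTasks(int n) {
        AtomicInteger done = new AtomicInteger();
        try (ExecutorService pool = Executors.newVirtualThreadPerTaskExecutor()) {
            for (int i = 0; i < n; i++) {
                pool.submit(() -> {
                    try {
                        Thread.sleep(Duration.ofMillis(20)); // the "database call"
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                    done.incrementAndGet();
                });
            }
        } // closing the executor waits for all submitted tasks to finish
        return done.get();
    }
}
```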
If you're somewhere that scales CRUD servers without first scaling the central database, it usually means you're in a company that doesn't get it and overemphasizes "architecture" over common sense. It's actually extremely common in lower-tier small companies for people to build things like this that don't make any sense. They aren't thinking coherently, and I've seen tons of people who just miss this common-sense notion.
I'll be frank: it's stupid and defies common sense. It's likely what you are doing? But it's also extremely commonplace.
If you flatten both of your trees/graphs and regard the output as strings of nodes, you reduce your task to a substring search.
Now if you want to verify that the structures, and not just the leaf nodes, are identical, you might be able to encode structure information into your strings.
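One common way to encode that structure information is a preorder serialization with explicit open/close markers, after which exact-subtree containment is literally a substring test. A sketch (Java, hypothetical names); note that swapping `String.contains` for a linear-time matcher like KMP keeps the whole thing O(n + m):

```java
// Flattening-with-structure sketch: serialize each tree in preorder with
// '(' and ')' markers so shape, not just leaf order, is encoded. Then
// "is an exact subtree of" reduces to a substring search.
class SubtreeAsSubstring {
    static final class Node {
        final String label;
        final Node[] children;
        Node(String label, Node... children) {
            this.label = label;
            this.children = children;
        }
    }

    // The markers around every subtree prevent false matches across
    // sibling or partial-label boundaries.
    static String encode(Node n) {
        StringBuilder sb = new StringBuilder();
        sb.append('(').append(n.label);
        for (Node c : n.children) sb.append(encode(c));
        sb.append(')');
        return sb.toString();
    }

    static boolean containsSubtree(Node haystack, Node needle) {
        return encode(haystack).contains(encode(needle));
    }
}
```

Note the semantics: a lone `b` node does not match a `b` that has children, because the encodings `(b)` and `(b(c))` differ. That is exactly the "structure, not just leaves" distinction above.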
I was thinking in terms of finding all subgraph isomorphisms. But this definitely is O(N) if all you need is one solution.
But then I thought about it further, and this reduces to a sliding-window problem. In that case you still need to visit each node in the window to check for a match.
So it cannot be that you traverse each node once. Not if you want to find all possible subgraph isomorphisms.
Imagine a string that is a fractal of substrings:
rrrrrrrrrrrrrrrrrrrrrrrrrrrr
And the other one:
rrrrrrr
Right? The sliding window for rrrrrrr will be 7 in length and you need to traverse that entire window every time you move it. So by that fact alone every node is traversed at least 7 times.
Hi I'm a mathematician with a background in graph theory and algorithms. I'm trying to find a job outside academia. Can you elaborate on the kind of work you were doing? Sounds like I could fruitfully apply my skills to something like that. Cheers!
Look into quantitative analyst roles at finance firms if you’re that smart.
There's also a role called algorithms engineer at standard tech companies (typically for lower-level work like networking, graphics, or embedded systems), but the lack of an engineering background may hamstring you there. Engineers working in crypto also use a fair bit of algorithms knowledge.
I do low level work at a top company, and you only use algorithms knowledge on the job a couple of times a year at best.
You can try to get a job at an investment bank, if you're okay with heavy slogging in terms of hours, which I have heard is the case, although that could be wrong.
I heard from someone who was in that field that the main qualifications for such a job are analytical ability and mathematics knowledge, apart from programming skills, of course.
That was about 20 years ago. Not much translates to today's world. I was in the algorithms team working on a CMDB product. Great tech, terrible marketing.
These days it's very different, mostly large-ish distributed systems.
I would love a little more context on this, cause it sounds super interesting and I also have zero clue what you’re talking about. But translating a stateful program into a stateless one sounds like absolute magic that I would love to know about
He has two graphs. He wants to determine if one graph is a subset of another graph.
The graph to be tested as a subset is a tree. From there, he says it can be done with an algorithm that traverses every node at most once.
I’m assuming he’s also given a starting node in the original graph and the algorithm just traverses both graphs at the same time starting from the given start node in the original graph and the root in the tree to see if they match? Standard DFS or BFS works here.
I may be mistaken. Because I don’t see any other way to do it in one walk through unless you are given a starting node in the original graph but I could be mistaken.
To your other point: the algorithm inherently has to be stateful, too. All graph-traversal algorithms have to carry long-term state, simply because if you're at a node that has, say, 40 paths to other places, you can only go down one path at a time, and you have to statefully remember that the node has another 39 paths you'll need to come back to later.
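That bookkeeping is exactly the explicit stack in an iterative depth-first search. A minimal sketch (Java, illustrative):

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

// Iterative DFS: the stack holds nodes whose outgoing edges still have to
// be explored - the "other 39 paths" the comment above talks about.
class IterativeDfs {
    static List<String> dfs(Map<String, List<String>> graph, String start) {
        List<String> order = new ArrayList<>();
        Set<String> seen = new HashSet<>();
        Deque<String> stack = new ArrayDeque<>();
        stack.push(start);
        while (!stack.isEmpty()) {
            String node = stack.pop();
            if (!seen.add(node)) continue; // already visited via another path
            order.add(node);
            // Push neighbours; they wait on the stack until we come back.
            for (String next : graph.getOrDefault(node, List.of())) {
                if (!seen.contains(next)) stack.push(next);
            }
        }
        return order;
    }
}
```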
I oversimplified the problem :). Really it was about generating an isomorphic-ish view, based on some user-defined rules, of an existing graph, itself generated by a subgraph isomorphism via a query language.
Think of a computer network as a graph, with various other configuration items like processes, attached drives, etc. (something also known as a CMDB). Query that graph to generate a subgraph of it. Then use rules to make that subgraph appear as a tree of layers (a tree, but within each layer you may have additional edges between the vertices), because trees are an efficient, non-complex representation in 2D space (i.e., on monitors).
However, a child node in that tree isn't necessarily connected directly to the parent node. E.g. one of the rules may be "display the sub network and the attached drives in a single layer", so now the parent node, the gateway, has both network nodes (directly connected to it) and attached drives (indirectly connected to it) as direct descendants.
Extend this to be able to connect through any descendant, direct or indirect (gateway -> network node -> disk -> config file -> config value - but put the config value on the level of the network node and build a link between them to represent the compound relationship).
Walk through the original subgraph while evaluating the rules, and build a "trace back" stack that lets you understand how to build each layer, even in the presence of compound links, while performing a single walkthrough instead of n*m (original vertices times generation rules).
As I said, that was a lot of fun. I miss those days.
The target being a tree is irrelevant right? It’s the “guided” part that makes a single walk through possible?
You are starting at a specific node in the graph and saying that if there’s an isomorphism the target tree root node must be equivalent to that specific starting node in the original graph.
You just walk through the original graph following the pattern of the target tree, and if something doesn't match it's false, otherwise true? Am I mistaken here? Again, the target being a tree is a bit irrelevant; this will work for any subgraph as long as you are also given starting-point nodes for both the target and the original graph?
I advise checking out the user's other comments before jumping to conclusions. It doesn't look AI-generated to me, rather just an "individual" writing style. Just because it's possible doesn't mean it's true. Maybe the user can confirm?
Otherwise just downvote or flag, I guess, but this comment of yours reads as an insult to a person who maybe didn't put the most effort into writing their comment, but seems genuine to me at least.
I've worked on a product that reinvented parts of the standard library in confusing and unexpected ways, meaning a lot of the code could easily be compacted 10-50x in many places, i.e., 20-50 lines could be turned into 1-5 or so. I argued for doing this and deleting a lot of the code base, which didn't take hold before I and every other dev but one left. Nine months after that, they had deleted half the code base out of necessity, roughly 2 MLOC down to 1 MLOC, because most of it wasn't actually used much by the customers and the lone developer just couldn't manage the mess on his own.
The iPhone does that: when you have a sleep schedule set, it will show you the alarm for the coming day, and when you go to turn it off, it turns it off for the next day only (it prompts you to confirm you want to just skip the next one rather than edit the schedule).
They've got that one figured out, works really well for me.
If you happen to be an iOS user, you can set up a bedtime. Then there are controls to change your sleep/wake times for the "next wake up only", or to skip a day.
A cursory glance at "setAccessible" usage reveals popular libraries such as serializers like gson and jaxb, class manipulation and generation like cglib, aspectj and even jdk.internal.reflect, testing frameworks and libraries including junit, mockito and other mocking libraries, lombok, groovy, spring, and the list goes on and on.
My bet is that this will be yet another "checked exceptions" or "module system" story, where many applications now need to add "--add-opens". If you use ANY of the more popular frameworks or libraries, you'll end up giving this assurance away, which means library developers won't be able to rely on it, and we're back to square one.
We've addressed that in the JEP. Serialization libraries have a special way to circumvent this (until we get Serialization 2.0), and mocking libraries may, indeed, need to set the flag, but they're rarely used in production, so if they don't enjoy some new optimisation -- no big deal.
BTW, this JEP does not apply to setAccessible generally, as that's been restricted since JDK 16, but only to the particular (and more rare) use of setAccessible to mutate instance final fields. As the JEP says, static final fields, records' internal instance fields, and final instance fields of hidden classes cannot be mutated with that approach currently, so it's never been something that's expected to work in all cases.
It would be nice to have a single "--test-mode" flag that is only meant to be set when running tests and allows all this leniency (add-opens, etc.) in one go.
We should separate the problem from the solution. The problem is that running tests may require relatively many integrity-busting flags. That is true.
There are, however, better solutions than a global test-mode flag that, invariably, will be used by some in production out of laziness, leaving no auditable record of what integrity constraints need to be violated and why. When a new team lead is appointed some years later they will have a hard time trying to reduce the entropy.
The better solutions will arrive in due course, but until then, build tools can automatically add most of the necessary flags. They should be encouraged to do that.
So make the flag remove some other feature, which is critical to production, like the ability to run main() or something.
On the other hand, I don’t think the solution to someone holding a shotgun to their foot and threatening to pull the trigger is to make everyone wear armored shoes. They’re already a lost cause, and there are a billion other ways they can shoot their foot off, if they are so inclined.
I agree with the principle of making it hard to screw things up assuming good-faith efforts (making it hard to fall into the pit of despair), so overall I like the JEP.
> On the other hand, I don’t think the solution to someone holding a shotgun to their foot and threatening to pull the trigger is to make everyone wear armored shoes.
I don't think so, either, it's just that I think there are better solutions than a test-mode flag at the level of the `java` launcher. If the mechanism that runs the tests can automatically configure the appropriate capabilities without requiring the user running the tests to do manual configuration then the problem is solved for those who just want to easily run tests just as well as a test-mode configuration.
The idea of a test-mode flag has been floated before and considered; we're not ruling it out, but if such a mode is ever added, I can't tell you now what it would mean exactly. In any event, it's better to carefully study the nature of the problem and its origins before suggesting a particular solution. As Brian Goetz likes to say, today's solutions may well become tomorrow's problems.
> They’re already a lost cause, and there are a billion other ways they can shoot their foot off, if they are so inclined.
True, but our experience shows that it's not a good idea to make the bad choice the easiest one, or people may pick it out of laziness. Let those who want to shoot themselves in the foot work for it. If nothing else, it increases the chance that they learn what their (not-entirely-trivial) configuration means, and maybe they'll realise they don't want it after all.
Someone might point out that there are still ways to do the wrong thing out of laziness by blindly copying a configuration from StackOverflow etc., but we're not done yet.
setAccessible is also used to access private fields, not just to write to final fields. Most libraries shouldn't need to set final fields, and I say this as someone who was very much against it when they deprecated sun.misc.Unsafe. I've only had to set a final field once in my career, and it was related to a workaround for some obscure MySQL/JDBC driver bug. This particular deprecation seems very sensible to me.
The theory is, go through the constructor. However, some objects are designed to go through several steps before reaching the desired state.
If GSON must deserialize {…, state:”CONFIRMED”}, it needs to call new Transaction(account1, account2, amount), then .setState(STARTED) then .setState(PENDING) then .setState(PAID) then .setState(CONFIRMED) ? That’s the theory of the constructor and mutation methods guarding the state, so that it is physically impossible to reach a wrong state.
There is a convention that deserialization is an exception to this theory: It should be able to restore the object as-is, after for example a transfer over the wire. So it was conventionally enabled to set final variables of the object, but only at initialization and only for its own good. It was assumed that, even though GSON could reach a state that was unachievable through normal means, it was, after all, the role of the programmer to add the right annotations to avoid this.
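For the curious, this is roughly what that convention looks like in practice; a sketch (Java, hypothetical class names) of a deserializer-style final-field write via setAccessible, which works on current JDKs for ordinary instance fields and is exactly what the JEP proposes to put behind a flag:

```java
import java.lang.reflect.Field;

// Sketch of the final-field mutation being discussed: reflection rewrites
// a final instance field after construction, bypassing the constructor
// and setter-based state machine, the way some deserializers do.
class FinalFieldDemo {
    static final class Transaction {
        private final String state;
        Transaction(String state) { this.state = state; }
        String state() { return state; }
    }

    static String mutate() throws Exception {
        Transaction t = new Transaction("STARTED");
        Field f = Transaction.class.getDeclaredField("state");
        f.setAccessible(true);   // the call the JEP would gate behind a flag
        f.set(t, "CONFIRMED");   // writes a final field, skipping all guards
        return t.state();
    }
}
```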
> the developers of serialization libraries should serialize and deserialize objects using the sun.reflect.ReflectionFactory class, which is supported for this purpose. Its deserialization methods can mutate final fields even if called from code in modules that are not enabled for final field mutation.
I don't know enough about the details here to say if that's sufficient, but I imagine that it at least should be, or if it's not, it will be improved to the point where it can be.
> The sun.reflect.ReflectionFactory class only supports deserialization of objects whose classes implement java.io.Serializable.
In my experience, most classes being deserialized by libraries like GSON do not implement Serializable. Implementing Serializable is mostly done by classes which want to be serialized and deserialized through Java's native serialization format (which is used by nothing outside Java, unlike cross-platform formats like JSON or CBOR).
Why would you use GSON for objects that go through state transitions? Why would you mark a field like state as final when it is actually mutable? This just sounds like poorly designed code.
Maybe I don't know of your use case, but GSON/Jackson/Json type classes are strictly data that should only represent the data coming over the wire. If you need to further manipulate that data it sounds like the classes have too much responsibility.
It strikes me that we could have a way to reflectively create an object from values for all its fields in a single step - similar to what record constructor does, but for any class (could even be Class::getCanonicalConstructor, returning a java.lang.reflect.Constructor). It would be equivalent to creating an uninitialised instance and then setting its fields one by one, but the partly-initialised object would never be visible. This should probably be restricted, because it bypasses any invariants the constructor enforces, but as you say, ultimately serialisation libraries do need to do that.
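Records already come close to this: the canonical constructor is discoverable from the record components, so a one-shot reflective construction is possible today, at least for records. A sketch (Java, illustrative names):

```java
import java.lang.reflect.Constructor;
import java.lang.reflect.RecordComponent;

// One-shot reflective construction for records: recover the canonical
// constructor's parameter types from the record components, then build
// the object in a single step - no partly-initialised instance, no
// field-by-field poking.
class RecordConstruction {
    record Point(int x, int y) { }

    static Object construct(Class<?> recordClass, Object... args) throws Exception {
        RecordComponent[] components = recordClass.getRecordComponents();
        Class<?>[] paramTypes = new Class<?>[components.length];
        for (int i = 0; i < components.length; i++) {
            paramTypes[i] = components[i].getType();
        }
        Constructor<?> canonical = recordClass.getDeclaredConstructor(paramTypes);
        return canonical.newInstance(args);
    }
}
```

The catch for generalising this to arbitrary classes is exactly the one noted above: an arbitrary constructor may enforce invariants or refuse field-for-field construction, which records by design do not.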
I don't know if Java serialization supports this kind of thing, but if object A has a pointer to object B and vice versa, there's no order in which to deserialize them without passing through a partially-initialized state that preserves the object-identity relationship. I suppose you can't construct this kind of loopy reference graph with final fields without reflection in the first place, so it's kind of a chicken-and-egg situation. For the very common case of DAG-shaped data, or formats that don't support references, I think the one-shot internal constructor works, though.
Yeah like the module system. Looks good on paper, is probably hard to deal with. There are still tons of popular libraries that have no module-info.
Java does evolve, but the direction it takes is so weird. And then the tooling is strange, and it's worse that there are basically two build tools, both with their upsides and downsides, and they still feel more complicated than the tools for other languages, like cargo, go (if you count that), or msbuild (the modern csproj/slnx stuff).
Gradle is a general build tool, while cargo/go are only for their respective languages.
The moment you need to run some code from another language on your code to generate some other code or whatever, they break down, while Gradle can be used for pretty much anything.
In other words, cargo/go only solve the cache/parallelize/resolve-task-dependencies problem for "hard coded" special cases; the moment you stray from that, you are in a world of pain.
My impression is that this will be painful for the code I work on because the libraries you mention depend on being able to modify private and/or final fields.
I asked that question so many times (for reference, see my comment on Jake's thread https://news.ycombinator.com/item?id=41163619 ). I asked it of my late wife. I asked it of my therapist. I asked it of my daughter, when she was sleeping.
"Is this my life now?"
The first few months were terrible. Then things started to get better. Before anyone jumps and says "a few months?! That's nothing!", there's a thing called "anticipatory grief". Look it up. (Besides, each grief journey is individual. Besides, who are you to criticize me?).
Then things stopped getting worse. For a while life was flat. Colorless. Dark. I moved through the motions. Dropped my daughter at preschool, worked from home, picked her up, went to the playground, went home, dinner, bedtime story, lie in bed doing nothing. Rinse and repeat. Go to sleep early to avoid feeling.
Then it started getting better. And better. And even better than that. Therapy, meds, pushing (omg so much pushing), friends, a new love. Things got continuously better. I'll never forget that year, but I also now know that I can survive what I think is the 2nd worst thing that can happen to a person. I know it cannot break me.
And I think Bess found that out too. Parts of us died with them, but new parts are growing. Parenthood parts. Discovery parts.
I remember watching my wife to make sure she was breathing. Then at the hospital. Then she wasn't. And it was terrible. A loss I cannot even describe, a part of your own soul torn out of you. Yet that part of her life was painful; she was in pain. In some sense, I was relieved she was no longer in pain. Even more relieved she didn't have to witness her mom passing away. The world turning darker, despair filling in. She missed out on milestones, but also on sadness. And, in the end, I miss her, but that part slowly became more bearable.
To Bess - I can't promise it'll be ok. No one can. But it'll get easier to bear.
I'm so happy (well... I'm something. happy is somewhat hard to come by these days) that I helped a fellow widower. We're in this alone, but together.
My personal view is to hide nothing, NOTHING, from my daughter. The tears, the grief, the pictures, the videos. Talking about death using "death" and not "passed away". Talking about the memories and feelings. About a person no longer being here (we're not spiritual, so no "heaven" for us, no waiting to be reunited). And, so far at least (just closing on 2 years; our daughter grew from 4.5 to 6.5), it seems to be working very well. She's happy, active, well adjusted, charismatic, and no more prone to tantrums or worrisome behavior than any other six-year-old. And her being happy makes me happy. I KNOW my late wife would've been proud of us both.
"Explains Git so often, probably dreams in commit messages"
"So passionate about system design, probably tries to optimize their grocery shopping with distributed algorithms"
Both are so true!
And on a more serious note, "Will develop an open-source journaling platform focused on grief support and mental health" sounds like an amazing project to dive into. Possibly, deepseek, possibly...
Lost my wife about 1.5 years ago. It was expected and unexpected at the same time. A long metastatic cancer treatment that ended all of a sudden, in a few weeks of unconsciousness ("coma") from an autoimmune brain disease, likely caused by chemo.
As the partner left behind, I have nothing but empathy for Bess. As an avid, ultra-pragmatic HN reader, though, I've gathered resources over time, so I'll list them here:
Forums / chats:
https://www.reddit.com/r/widowers/ - This one I used immediately after. Yelling into the void. Crying. Having other people cry with me. Making sure I was heard.
https://discord.gg/CFQfCdby - /r/widowers discord. This one is "good" for the first few days / weeks / months, when the pain is great and the sense of loss is overwhelming and you just need someone to talk with, someone who's been through this, right now. Everyone is friendly, and rules to keep things sane and non-triggering are in effect.
Facebook groups - I know, ugh. But it helps to see other people in the same boat. Somehow. A little. For me it was "Young and Widowed With Children" (well, that's me) and some of the black-humor groups, e.g. "Widow(er) Humor". Find your tribe. It really does help.
Books:
It's OK That You're Not OK - https://www.amazon.com/Its-That-Youre-Not-Understand/dp/1622... - This is "the book". Everyone recommends it, and it's justified. If you can't bring yourself to read, get the Audible version. I did; it was easier to lie in bed with eyes closed.
Irreverent Grief Guide - https://www.amazon.com/gp/product/B08L5RRJ9D - this one is a "how to" guide. I mean a real "how to", emotionally. I, and probably many on /r/widowers/, found it priceless.
Kids' books:
- The invisible string - https://www.amazon.com/gp/product/031648623X
- Fix-it man - https://www.amazon.com/gp/product/1925335348
- Missing mummy - https://www.amazon.com/gp/product/0230749518
- The sad dragon - https://www.amazon.com/gp/product/1948040999
- Something very sad happened - https://www.amazon.com/gp/product/1433822660
Read once or twice:
- Love is forever - https://www.amazon.com/gp/product/0615884059
- I'll See You In The Moon - https://www.amazon.com/gp/product/1989123309
- My heart will stay - https://www.amazon.com/gp/product/0578794578
- The heart and the bottle - https://www.amazon.com/gp/product/0399254528
- Always remember - https://www.amazon.com/gp/product/0399168095
- The garden of lost balls - https://www.amazon.com/gp/product/B0BLQW27XX
- Gone but never forgotten - https://www.amazon.com/gp/product/B09SNY9VF3
Therapy and meds:
Actually, therapy and meds before, if not already. Anticipatory grief is a thing, and processing it can make the later days a bit easier. Anti-anxiety meds (NDRIs) can create an "inoculation" effect to some extent; SSRIs probably can as well. Understand depression, its symptoms, its issues. Educate family and friends. Establish rapport with a therapist.
Friends and community:
Expect to lose friends. It's terrible, but it happens a lot: it's extremely common for friends to silently disappear after a few days or weeks, and not just joint friends. People are awkward around grief. Community, however, does seem to work well. Rely on it. Don't say no to food offers; it helps. DoorDash! Don't be shy about it; it's fine to eat junk food. Don't drink, though, and don't get high; it deepens and prolongs the grief symptoms.
Calls:
Don't forget your family or close friends. I've had daily calls with my sister, and it helped a ton. Scheduled daily calls.
Forgot to add: Journaling helped me a lot. I favored writing these as "letters" / "texts" to my wife. As if she's still here, just telling her about my day, feelings, emotions, what our kid did, what happened around us, family and friends. Venting, crying, blaming, being frustrated, being happy, being proud. All goes in there.
I think I will be forwarding funny animal videos to Jake on instagram for all time but the idea of them being delivered to no one is this weird minor detail that is just so hard.
That happened to me too. Then I switched to writing in "notes" and eventually to https://dayone.me/ , which has a web app as well as a mobile app, so it's easy for me to write on any device. It was less disheartening not to see that "delivered, not read" on messages.
Thanks for sharing, bironran. I ordered the irreverent guidebook, and I appreciate the suggestion. I'm avoiding all medications because I'm 7 months pregnant, but have Tetris available to try to prevent PTSD https://www.ox.ac.uk/news/2017-03-28-tetris-used-prevent-pos...