Great idea. I'd been thinking of building a LaTeX-to-PDF resume tool, but YAML to PDF is much better.
The only feature I'm missing is the ability to convert YAML to PDF from the CLI (a globally installed npm package, or a Docker image). That way, the YAML could live in a GitHub repository and be exported to PDF on each modification using GitHub Actions, so an up-to-date PDF resume would always be available on GitHub.
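The workflow would be tiny. A sketch, assuming a hypothetical CLI named `yaml-resume` (the package name, flags, and file names are made up for illustration; the tool doesn't expose a CLI today):

```yaml
# .github/workflows/resume.yml (illustrative only)
name: Build resume PDF
on:
  push:
    paths: ["resume.yaml"]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
      # Hypothetical CLI invocation: convert the YAML source to a PDF
      - run: npx yaml-resume resume.yaml -o resume.pdf
      # Keep the generated PDF as a downloadable build artifact
      - uses: actions/upload-artifact@v4
        with:
          name: resume
          path: resume.pdf
```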
Thank you. This is certainly possible. The library I'm using for rendering the PDF (https://react-pdf.org/) supports Node.js as well. This is a good point; I suppose a lot of people keep their resumes on GitHub.
I noticed that you use the @Transactional annotation on the class definition. This creates a write transaction for every public method of the annotated class, including read-only methods. You should consider using readOnly=true for read methods.
Additionally, I would consider using two data sources: one for write queries and a read-only data source for the Q part of CQRS.
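In Spring, marking read methods with method-level `@Transactional(readOnly = true)` covers the first point. The two-data-source idea can be sketched without any framework as a simple router that sends read-only work to a replica; this is an illustrative standalone sketch, not Spring's routing mechanism, and the class name and JDBC URLs are made up:

```java
import java.util.Map;

// Minimal sketch of the CQRS read/write split: pick a data source
// based on whether the operation is read-only.
class ReadWriteRouter {
    private final Map<Boolean, String> dataSources = Map.of(
            true,  "jdbc:postgresql://replica:5432/app",   // read-only replica (queries)
            false, "jdbc:postgresql://primary:5432/app");  // writable primary (commands)

    // Returns the JDBC URL to use for the given operation kind.
    public String route(boolean readOnly) {
        return dataSources.get(readOnly);
    }
}
```

In a real Spring setup the same decision point is usually implemented with `AbstractRoutingDataSource`, keyed off the current transaction's read-only flag.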
The "little trick" in the Citus approach is very inventive.
SHARE ROW EXCLUSIVE mode protects a table against concurrent data changes, and is self-exclusive so that only one session can hold it at a time.
Thus, when such a lock is obtained, we can be sure that there are no more pending transactions with uncommitted changes. It protects against losing data from the pending transactions.
Throwing the exception immediately releases the lock. Thus, the exclusive table lock is held for milliseconds.
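For reference, the sequence looks roughly like this in plain SQL (a sketch; the table name is illustrative):

```sql
BEGIN;
-- SHARE ROW EXCLUSIVE conflicts with ROW EXCLUSIVE (held by INSERT/UPDATE/DELETE),
-- so this blocks until every pending writer on the table commits or rolls back.
LOCK TABLE events IN SHARE ROW EXCLUSIVE MODE;
-- At this point there are no uncommitted writes to "events": safe to read
-- the current high-water mark. Then end the transaction immediately
-- (commit, or raise an exception) to release the lock.
COMMIT;
```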
I like the general idea, but I don't want to add plpgsql functions/procedures.
I'll see if this can be elegantly implemented in Java+SQL (without plpgsql) and perhaps add it as an alternative approach to my project. Such an approach may be even more effective because it focuses on a single table rather than on all transactions, as the one described in my project does; locks on irrelevant tables would then have no effect on event handlers.
Thanks for sharing.
plpgsql is a good language. But in my experience, Java and .NET developers tend to choose solutions that avoid plpgsql, PL/SQL, and T-SQL. And those developers are the main audience for the project.
Kafka doesn't have a way to assert the stream version on event write, which is critical for CQRS. Without it, you can't guarantee stream state when processing a command without resorting to a singleton/locks, which do not scale at all. Why Apache doesn't wish to support such a critical feature is beyond me, though.
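The missing primitive is a conditional append: write the event only if the stream is still at the version the command handler read. An illustrative in-memory sketch (class and method names are made up; a real store would persist the streams):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// Minimal sketch of optimistic concurrency on an event stream.
class EventStore {
    private final ConcurrentMap<String, List<String>> streams = new ConcurrentHashMap<>();

    // Appends only if the stream is still at expectedVersion
    // (here, version == number of events already in the stream).
    public synchronized boolean append(String streamId, long expectedVersion, String event) {
        List<String> stream = streams.computeIfAbsent(streamId, id -> new ArrayList<>());
        if (stream.size() != expectedVersion) {
            return false; // a concurrent writer got there first; the caller retries the command
        }
        stream.add(event);
        return true;
    }
}
```

Two concurrent command handlers that both read version 0 can both try to append, but only one succeeds; the loser re-reads the stream and retries. Kafka has no equivalent of the `expectedVersion` check on produce.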
No doubt audit tables are a popular alternative to event sourcing. But if the current state and the change history of the entity are stored in different tables, someone may say: "Prove to me that your audit log is correct." Because you are not using the audit table for the business logic, you may not immediately notice a problem that corrupts the audit log.
Event Sourcing provides other advantages, not only an audit log.
For example, a service command typically needs to create/update/delete aggregates in the DB (JDBC/R2DBC) and send messages to Kafka. Without two-phase commit (2PC), sending a message in the middle of a transaction is not reliable: there is no guarantee that the transaction will commit. With Event Sourcing you can instead subscribe to the event and send the message to Kafka from a listener. The delivery guarantee is "at least once".
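The listener side can be sketched in a few lines. This is an illustrative in-memory model (names are made up): a poller reads committed events past its last recorded position and hands them to the broker. If it crashes after sending but before advancing the position, the same events are re-sent on restart, which is exactly the "at least once" guarantee:

```java
import java.util.List;
import java.util.function.Consumer;

// Minimal sketch of publishing committed events to a broker after the fact.
class EventPublisher {
    private long lastPublished = 0; // in a real system this position is persisted

    // Publishes every committed event after lastPublished; returns how many were sent.
    public int poll(List<String> committedEvents, Consumer<String> broker) {
        int sent = 0;
        while (lastPublished < committedEvents.size()) {
            broker.accept(committedEvents.get((int) lastPublished)); // send first...
            lastPublished++;                                         // ...then advance the position
            sent++;
        }
        return sent;
    }
}
```

Because the send happens before the position is advanced, a crash between the two steps causes a duplicate delivery rather than a lost message, so consumers must be idempotent.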
Anyway, there is demand for Event Sourcing on the market.
Regarding publishes to a message broker, the transactional outbox pattern (mentioned in TFA, and something that can be used on its own) provides similar capabilities if you don't want to fully buy into event sourcing.