Hacker News | eugene-khyst's comments

Great idea. I've thought about building a LaTeX-to-PDF resume tool, but YAML to PDF is much better. The only feature I miss is the ability to convert YAML to PDF from the CLI (a globally installed npm package, or a Docker image). That way, you could keep the YAML in a GitHub repository and export it to PDF on every change using GitHub Actions. Then an up-to-date PDF resume would always be available on GitHub.


Thank you. This is certainly possible. The library I'm using for rendering the PDF (https://react-pdf.org/) supports Node.js as well. This is a good point; I suppose a lot of people keep their resumes on GitHub.


I can definitely recommend EventStoreDB. I used it in production and most of my colleagues liked this DB. I have a sample Java Spring Boot + EventStoreDB project: <https://github.com/eugene-khyst/eventstoredb-event-sourcing>.



I noticed that you use the @Transactional annotation on the class definition. This creates a write transaction for every public method of the annotated class, including read-only methods. You should consider using readOnly = true for read methods.

Additionally, I would consider using two data sources: one for write queries and a read-only data source for the Q part of CQRS.
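As a sketch of what that suggestion looks like in Spring (class and method names are hypothetical, and this assumes a typical Spring service bean):

```java
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

// readOnly = true at the class level makes read-only transactions the default,
// so query methods can benefit from read-only optimizations; write methods
// override the default with a plain @Transactional.
@Service
@Transactional(readOnly = true)
public class OrderService {

    public String findOrder(long id) {
        // runs in a read-only transaction
        return null; // query logic omitted
    }

    @Transactional // overrides the class-level default for this write method
    public void placeOrder(String order) {
        // create/update logic omitted
    }
}
```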


Thanks for the suggestions. I will add the @Transactional(readOnly = true) annotation, and I will mention in the README the possibility of using two data sources.


Alright, thanks. This Java stuff is pretty hard for me to follow. It looks like Java is doing the aggregating, but maybe this is some kind of ORM.


Yes, Debezium is an implementation of the transaction log tailing pattern, an alternative to the transactional outbox pattern.


Thanks for sharing.

> My linked code works with involving the MVCC snapshot's xip_list as well, to avoid this gotcha.

I will definitely take a look. It would be great to fix this problem. This problem really concerns me, although in most cases it is not critical.


The "little trick" in the Citus approach is very inventive. SHARE ROW EXCLUSIVE mode protects a table against concurrent data changes and is self-exclusive, so only one session can hold it at a time. Thus, once such a lock is obtained, we can be sure that there are no more pending transactions with uncommitted changes; it protects against losing the data of pending transactions. Throwing the exception immediately releases the lock, so the table lock is held only for milliseconds.

I like the general idea, but I don't want to add plpgsql functions/procedures. I'll see if this can be elegantly implemented in Java+SQL (without plpgsql) and perhaps add it as an alternative approach to my project. This approach may even be more effective, because it focuses on a single table rather than on all transactions like the one described in my project, so locks on irrelevant tables have no effect on event handlers. Thanks for sharing.
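A rough SQL-level sketch of that trick, with a hypothetical table name (`outbox_events`), where a plain ROLLBACK plays the role of the thrown exception that releases the lock:

```sql
BEGIN;
-- SHARE ROW EXCLUSIVE conflicts with the ROW EXCLUSIVE lock taken by
-- INSERT/UPDATE/DELETE, so acquiring it waits until every pending write
-- transaction on this table has committed or rolled back.
LOCK TABLE outbox_events IN SHARE ROW EXCLUSIVE MODE;
-- At this point there are no uncommitted rows in outbox_events:
-- everything up to the current snapshot is safe for the event handler to read.
ROLLBACK; -- release the lock immediately; we only needed the barrier
```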


Why no plpgsql? Is it because the language is bad? If so, what about something like PL/Rust (https://plrust.io/)? Or another language?


plpgsql is a good language. But in my experience, Java and .NET developers tend to choose solutions that do not use plpgsql, PL/SQL, or T-SQL. And these developers are the main audience for the project.


Thanks for noticing. I will add the Apache License 2.0.


I tried to evaluate Kafka usage for event sourcing: <https://github.com/eugene-khyst/ksqldb-event-souring>. More out of curiosity; I never tried it in production.


Kafka doesn't have a way to assert the stream version on event write, which is critical for CQRS. Without it, you can't guarantee stream state when processing a command without resorting to a singleton/locks, which does not scale at all. Why Apache doesn't wish to support such a critical feature is beyond me, though.
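To make the point concrete, here is a minimal in-memory sketch (all names hypothetical) of the expected-version check that event stores like EventStoreDB perform on append, and that Kafka's producer API has no equivalent of:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Minimal illustration of optimistic concurrency on append: the writer states
// the stream version its decision was based on, and the append is rejected if
// the stream has moved on in the meantime.
public class InMemoryEventStore {

    public static class ConcurrencyException extends RuntimeException {
        public ConcurrencyException(String message) {
            super(message);
        }
    }

    private final Map<String, List<String>> streams = new HashMap<>();

    // expectedVersion = -1 means "the stream must not exist yet".
    // Returns the new version of the stream after the append.
    public synchronized long append(String stream, long expectedVersion, String event) {
        List<String> events = streams.computeIfAbsent(stream, s -> new ArrayList<>());
        long currentVersion = events.size() - 1;
        if (currentVersion != expectedVersion) {
            throw new ConcurrencyException(
                "expected version " + expectedVersion + " but stream is at " + currentVersion);
        }
        events.add(event);
        return currentVersion + 1;
    }
}
```

With Kafka, a concurrent producer would simply have its record appended after yours; there is no built-in way to make the broker reject the write.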

https://issues.apache.org/jira/browse/KAFKA-2260


No doubt audit tables are a popular alternative to event sourcing. But if the current state and the change history of an entity are stored in different tables, someone may say: "Prove to me that your audit log is correct." Because you are not using the audit table for the business logic, you may not immediately notice a problem that corrupts the audit log.

Event Sourcing provides other advantages besides the audit log. For example, a service command typically needs to create/update/delete aggregates in the DB (JDBC/R2DBC) and send messages to Kafka. Without two-phase commit (2PC), sending a message in the middle of a transaction is not reliable; there is no guarantee that the transaction will commit. With Event Sourcing, you subscribe to the event and send the message to Kafka from a listener. The delivery guarantee is "at least once". Anyway, there is a demand for Event Sourcing on the market.


Regarding publishes to a message broker, the transactional outbox pattern (mentioned in TFA, and something that can be used on its own) provides similar capabilities if you don't want to fully buy into event sourcing.

https://microservices.io/patterns/data/transactional-outbox....
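For reference, the core of the pattern is just writing the outgoing message in the same local transaction as the state change; a separate relay (a poller, or log tailing via Debezium) publishes it to the broker later. A minimal sketch, with hypothetical table and column names:

```sql
BEGIN;

UPDATE orders SET status = 'PAID' WHERE id = 42;

-- The message is committed atomically with the state change; the relay
-- process reads this table and publishes each row to the broker,
-- giving at-least-once delivery without 2PC.
INSERT INTO outbox (aggregate_type, aggregate_id, event_type, payload)
VALUES ('Order', '42', 'OrderPaid', '{"orderId": 42}');

COMMIT;
```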


The illustrations are made with PlantUML.
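For anyone curious, PlantUML diagrams are plain text; a minimal sequence diagram (participant names are just an example) looks like this:

```plantuml
@startuml
participant "Command Handler" as CH
database "Event Store" as ES
CH -> ES : append(events, expectedVersion)
ES --> CH : new stream version
@enduml
```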


Thanks!


