
With 2ndQuadrant working on Postgres-XL (http://www.postgres-xl.org/), I think you can be confident that you will see a lot of these features being proposed for core Postgres. It will just take some time to build the necessary building blocks, like global indexes, distributed sequences, repartitioning ...

I'm quite confident that the PostgreSQL of 5 years from now will be quite different in terms of storage / server topology support. I won't be surprised if pg_bouncer's capabilities finally make their way into core once we have a coordinator.

Postgres makes steady progress (even if not fast enough for some people), but they are moving forward without compromising the robustness of the product for its users.


Some work has started in this direction. I didn't follow the whole thread closely, but I don't think it got committed in PG 10.

https://www.postgresql.org/message-id/flat/20170119213859.GA...

More info in the EDB roadmap: https://wiki.postgresql.org/wiki/EnterpriseDB_database_serve...

Postgres has been amazing at shipping the foundations required to deliver complex features. Logical replication is an example of this: all the pieces committed over the last 6 years are what made this patch achievable.



I'm always amazed by the PG community - it seems like such a constructive place.

Those patches are absolutely insane. Makes you realize how much hard work goes into building the software you use on a day-to-day basis.

https://www.postgresql.org/message-id/attachment/45478/0001-...


I've been professionally focused on PostgreSQL-based work for the last 5 years. At the height of the Big Data hype I sometimes felt a little off-track, because I never took the time to investigate NoSQL solutions...

Only recently did I realize that focusing on actual data and how to process it inside PostgreSQL was maybe the best way I could spend my working time. I really can't say what the best part of PostgreSQL is: the hyperactive community, the rock-solid and clear documentation, or the constant roll-out of efficient, non-disruptive, user-focused features...


I could see a good amount of quality engineering there, kudos.


A quick update to say that a few things have been added since version 0.1; you can find more details on the GitHub releases page. https://github.com/rach/pome/releases


(Author here) Thanks for the comment. I've used Grafana and it's a great tool. One of the motivations behind this project is to be batteries-included. A tool like Grafana requires a time-series DB (Graphite or InfluxDB), then you need something to collect the metrics, like collectd, and maybe an aggregator like statsd.

I want Pome to be simple to run as a single binary.

I did discuss the idea of allowing the web UI to be disabled and supporting pushing data to existing tools. I've been quite busy lately, but the project is not dead.
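
To give an idea of what "batteries included" means here, below is a minimal sketch (not Pome's actual code): a single Go binary that polls on a timer, keeps the latest snapshot in memory, and serves it as JSON over HTTP, with no external time-series DB, collector, or aggregator. The port, interval, and metric names are made up for illustration.

    package main

    import (
        "encoding/json"
        "log"
        "net/http"
        "sync"
        "time"
    )

    // latest holds the most recent snapshot; a real tool would keep a history.
    var (
        mu     sync.RWMutex
        latest = map[string]interface{}{}
    )

    func collect() {
        mu.Lock()
        defer mu.Unlock()
        // Placeholder: a real collector would query Postgres here.
        latest["collected_at"] = time.Now().UTC()
        latest["connections"] = 42 // dummy value
    }

    func main() {
        collect()
        go func() {
            for range time.Tick(30 * time.Second) { // poll every 30s
                collect()
            }
        }()
        http.HandleFunc("/metrics", func(w http.ResponseWriter, r *http.Request) {
            mu.RLock()
            defer mu.RUnlock()
            w.Header().Set("Content-Type", "application/json")
            json.NewEncoder(w).Encode(latest)
        })
        log.Fatal(http.ListenAndServe(":8080", nil)) // everything ships in one binary
    }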


Author here. Glad and scared that this project reached the front page of HN. The project is at a very early stage, but I had to release it at some point. I wrote some explanation in the README [1] and in a blog post [2].

TL;DR: Pome aims to be a very simple to deploy, opinionated, batteries-included tool for checking the health of your PG database. It may not be the case for anybody here, but in my career I have seen many PG databases whose health status was never tracked (often because people think RDS is magic). I figured that if a very simple tool existed, there would be fewer reasons not to track their health.

At this stage, I don't think Pome offers enough to be very useful, but I hope you will like the direction taken and where it's going.

Pome isn't aiming to be a tool for humongous Postgres instances that are already in the hands of a DBA who has the time to set up more advanced monitoring tools. Pome won't be an alternative to a more configurable tool like collectd.

[1] https://github.com/rach/pome#why-building-pome

[2] http://rachbelaid.com/introducing-pome/


Yes, it supports RDS, and I want to keep supporting it as it's the setup I've used most often. It's really basic right now, but I will add some new metrics this month.


Author here. If you dislike Go, then that makes two of us :)

But more seriously, I wrote a bit about the why in https://github.com/rach/pome#why-building-pome

To keep with the simplicity I was aiming for, I wanted a single binary. I considered Rust, Haskell, or Go (Swift was not open source yet at the time) but went with Go because of the libraries I would need (cron-like scheduler, embedding assets, etc.).

I had never written anything in Go before, but if I had already known Go, then maybe I would have picked another language, since I started this project partly as a learning exercise.
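
For illustration, a minimal sketch of the cron-like scheduler part using one such Go library, robfig/cron; the schedule and job body are made-up placeholders, not what Pome actually does.

    package main

    import (
        "log"

        "github.com/robfig/cron/v3"
    )

    func main() {
        c := cron.New()
        // Run a collection job every minute; the body is a placeholder.
        c.AddFunc("@every 1m", func() {
            log.Println("collecting metrics...")
        })
        c.Start()
        select {} // block forever so the scheduler keeps running
    }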


> went with Go because of the libraries I would need (cron-like scheduler, embedding assets, etc.)

Haskell has a cron-like scheduler. What do you mean by embedding assets?


I suppose he's referring to embedding files (usually web resources such as JS and CSS files) into the source code, using a tool such as go-bindata [1]. The assets are then accessed as variables.

[1] https://github.com/jteeuwen/go-bindata
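
As a side note, newer Go versions (1.16+) ship this in the standard library as the embed package, which covers the same use case as go-bindata. A minimal sketch, with a hypothetical assets/ directory:

    package main

    import (
        "embed"
        "fmt"
        "log"
    )

    //go:embed assets/*
    var assets embed.FS // the files end up compiled into the binary

    func main() {
        data, err := assets.ReadFile("assets/style.css") // hypothetical file
        if err != nil {
            log.Fatal(err)
        }
        fmt.Printf("embedded %d bytes\n", len(data))
    }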


Haskell has a library for this called "file-embed".


Haskell does, but I'm more comfortable in Go, and it's easier to get people to contribute.


Thank you. That was a typo.


Thanks for sharing this. I will look at which metrics I can include from this project. I can also say that CPU metrics can't happen if the project keeps gathering data only through Postgres (unless you know a way).

The goal of Pome was to be very easy to set up for people who have nothing in place, which is why I wrote it in Go. But if you have some time, there are much better / more complete tools like collectd.
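
To make the "only through Postgres" constraint concrete, here is a minimal sketch of the kind of metric that is reachable over a plain connection (RDS included): a count from pg_stat_activity. The connection string is a placeholder and the driver choice is illustrative, not Pome's actual code.

    package main

    import (
        "database/sql"
        "fmt"
        "log"

        _ "github.com/lib/pq" // Postgres driver
    )

    func main() {
        // Works against any reachable Postgres, including RDS; OS-level
        // metrics like CPU are not exposed through this interface.
        db, err := sql.Open("postgres", "postgres://user:pass@host/dbname?sslmode=require")
        if err != nil {
            log.Fatal(err)
        }
        defer db.Close()

        var connections int
        err = db.QueryRow("SELECT count(*) FROM pg_stat_activity").Scan(&connections)
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("active backends:", connections)
    }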

