
My context has been startups with engineering teams of 15-60 engineers.

In general, I've found CI/CD templates, IaC templates, and app templates to be an anti-pattern. It goes to the heart of scalable vs. replicable.

> Put simply: something replicable can be copy-pasted (with variations as needed) to grow impact linearly in relation to effort and cost. Something scalable can create impact at a rate that increases faster than the rate at which your effort and costs increase.

In the contexts I've worked in, templates enabled us to rapidly grow and expand our business and technology in unmaintainable ways.

We scaled to 100 services, but our revenue and headcount didn't scale with them, so we couldn't support those services over time.

---

In general, I lean towards solutions that facilitate distributing and updating templates over solutions that facilitate copying/pasting templates.

- Kubernetes: this meant a common Helm service chart for the organization -- now we could distribute and update Kubernetes manifests to the org

- CI/CD: this meant building plugins that we distributed to teams

- Application templates: I've historically moved folks back into monoliths, but the microservice chassis pattern looks like a solution here -- https://microservices.io/patterns/microservice-chassis.html

Your calculus might look different depending on your internal capabilities and resources. Good luck!


Perhaps I wasn't clear in my post. Templates, to me, means distributing them in a usable fashion; every template should be usable by invoking some binary and providing inputs/flags.

Similar to helm.

Right now, we are using dotnet new templates (.NET's way of creating new apps), Yeoman for npm, etc.

No copying and pasting. Copying and pasting isn't DRY, and it's currently rampant at my org since we didn't have any shared templates.


What's the deal with templates? I make new projects infrequently, so templates save me little time. I spend far more time struggling to fix broken code, fitting existing code to new requirements, etc.


> Templates, to me, means distributing them in a usable fashion; every template should be usable by invoking some binary and providing inputs/flags.

I am wondering why nobody talks about using version control tools for templates. Git lacks the ability to replace variables and handle flags, but it lets you rebase a small microservice on top of the latest version of the template.


These are version controlled. We use semver, and they are released using the language's package manager :)


And how do you update a project started with template x version 1.2.3 to template version 1.3.4?


How often do developers have to make the same changes to their dotnet new templates? How often will they have to make changes as a result of the central team making an update to the central template that breaks their setup?

When somebody needs to use an escape hatch, how painful are you gonna make it to keep that escape hatch running?


So essentially in dotnet, when you create a new project, you run

    dotnet new templateName

dotnet runs that template and out comes .NET code.

That is what we have built and we consume internal libraries we created in order to keep people updated with changes.

But at the end of the day, it's just dotnet code. Don't like what we did or how we did it? Go ahead and change it. It's just regular code.

It's similar to running "helm create {name}" or "npx create-react-app", but developed internally.

https://github.com/dotnet/templating/wiki/Available-template...


Recently I worked on a project that was using synchronous IO in an async framework -- that tanked performance immediately and effectively meant the application could service one request at a time while subsequent requests queued up.

(Agreed that synchronous IO can serve hundreds of requests per second with the right threading model)


We used RavenDB 2-4 at Leafly.

Won't go into battle scars here, but this report does not surprise me. We're much happier with Postgres and Elasticsearch.


What makes most operational sense is going to depend on your context.

From my vantage point, you’re both right in the appropriate context.


We were comfortably supporting millions of jobs per day using Postgres as a queue (with SELECT ... FOR UPDATE SKIP LOCKED semantics) at a previous role.

It scaled much, much further than I would've guessed at the time, when I called it a short-term solution :) -- now I have much more confidence in Postgres ;)
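
For the curious, the heart of the pattern is a single query. A minimal sketch (the jobs table and its columns are hypothetical, not the actual schema we ran):

    -- Claim the next queued job. SKIP LOCKED makes concurrent
    -- workers pass over rows another worker has already locked,
    -- so they never block on each other.
    UPDATE jobs
    SET status = 'running', started_at = now()
    WHERE id = (
        SELECT id
        FROM jobs
        WHERE status = 'queued'
        ORDER BY id
        LIMIT 1
        FOR UPDATE SKIP LOCKED
    )
    RETURNING id, payload;

Each worker runs this in a loop; whoever wins the lock gets the job, and everyone else moves on to the next row instead of waiting.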


> We were comfortably supporting millions of jobs per day using Postgres as a queue (with SELECT ... FOR UPDATE SKIP LOCKED semantics) at a previous role.

That's very refreshing to hear. In a previous role I was in a similar situation to yours, but I pushed for RabbitMQ instead of Postgres due to scaling concerns, with hypothetical ceilings smaller than the ones you faced. My team had to make a call without hard numbers to support any decision and no time to put together a proof of concept. The design pressure was the simplicity of Postgres versus paying in complexity for the assurance of a proven message broker. In the end I pushed for the most conservative approach and we went with RabbitMQ, because I didn't want to be the one explaining why we had problems getting an RDBMS to act as a message broker when we could get a real message broker for free with a docker pull.

I was always left wondering if that was the right call, and apparently it wasn't, because RabbitMQ also put up a fight.

If there were articles out there showcasing case studies of real-world applications implementing message brokers over an RDBMS, people like me would have an easier time pushing for saner choices.


> RabbitMQ also put up a fight.

I'm interested in hearing more about this (making a similar decision right now!). What pains did RabbitMQ give you?


> showcasing case studies of real-world applications implementing message brokers over an RDBMS

You mean "industrial scale RDBMS" that you can license for thousands of dollars? No, you can't really implement message brokers on those.

You will never see those showcase articles. Nobody paying wants them.


No, industrial-scale RDBMSes like PostgreSQL, which you can license for free. Obviously?


Those don't have money to fund studies about industry best practices, so you don't get many.

Almost everything you see on how to use a DBMS is either an amateur blog or one of those funded studies. The former is usually dismissed in any organization with more than one layer of management.


> Those don't have money to fund studies about industry best practices, so you don't get many.

Your comment reads like a strawman. I didn't need "studies". It would have been enough to have a guy with a blog saying "I used Postgres as a message broker like this and I got these numbers", with a GitLab project page providing the public with the setup and benchmark code.


Just out of curiosity (as someone who hasn't done a lot of this kind of operational stuff) how does this approach to queueing with Postgres degrade as scale increases? Is it just that your job throughput starts to hit a ceiling?


Throughput is less of an issue than queue size -- Postgres can handle a truly incredible amount of throughput as long as the jobs table is small enough that it can safely remain in memory for every operation. We can handle 800k jobs/hr with Postgres, but if you have more than 5k or 10k jobs in the table at any given time, you're in dangerous territory. It's a different way of thinking about queue design than in some other systems, but it's definitely worth it if you're interested in the benefits Postgres can bring (atomicity, reliability, etc.).
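
The mechanical consequence is that finished jobs have to leave the hot table immediately. A sketch of one way to do that (the jobs and jobs_archive tables here are hypothetical, not our actual schema):

    -- On completion, move the row out of the hot jobs table
    -- into an archive in a single statement, so finished work
    -- never accumulates in the table the workers scan.
    WITH done AS (
        DELETE FROM jobs
        WHERE id = $1
        RETURNING id, payload
    )
    INSERT INTO jobs_archive (id, payload, completed_at)
    SELECT id, payload, now()
    FROM done;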


With Postgres, you also need to worry a lot about dead tuples ("tombstoning") and your ability to keep up with the vacuums necessary to deal with highly mutable data. This can depend a lot on what else is going on with the database and whether you have more than one index on the table.
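
For a sense of the knobs involved -- a sketch only, with made-up numbers rather than recommendations -- you can make autovacuum much more aggressive on just the queue table:

    -- Vacuum the (hypothetical) jobs table after roughly 1000
    -- dead rows instead of the default 20% of the table.
    ALTER TABLE jobs SET (
        autovacuum_vacuum_scale_factor = 0,
        autovacuum_vacuum_threshold = 1000
    );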


One strategy for mitigating vacuum costs would be to adopt an append-only strategy and partition the table. Then you can just drop partitions and avoid the vacuum costs.

It really depends on your needs, but this can unlock some very impressive and sustainable throughput.
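
As a rough sketch of what that looks like (the schema is hypothetical):

    -- Append-only jobs table, partitioned by day.
    CREATE TABLE jobs (
        id         bigserial,
        created_at timestamptz NOT NULL DEFAULT now(),
        payload    jsonb
    ) PARTITION BY RANGE (created_at);

    CREATE TABLE jobs_2024_01_01 PARTITION OF jobs
        FOR VALUES FROM ('2024-01-01') TO ('2024-01-02');

    -- Retiring a day of finished jobs is a metadata operation:
    -- no row-by-row deletes, no dead tuples, nothing for
    -- autovacuum to chew on.
    DROP TABLE jobs_2024_01_01;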


This! Most haven't tried. It goes incredibly far.


Because all the popular articles are about multi-million TPS at big-tech scale, and everybody thinks they're big tech somehow.


That's the original problem, but then there are the secondary effects. Some of the people who made decisions on that basis write blog posts about what they did, then those blog posts end up on StackOverflow etc., and eventually it just becomes "this is how we do it by default" orthodoxy without much conscious reasoning involved -- it's just a safe bet to do what works for everybody else, even if it's not optimal.


Can you share some resources where I can learn about this? Thanks


I've seen this difference myself and done some profiling on it in the past with asdf.

I also know that Sam Saffron has mentioned the shim latency a bit before as well with other tools.

> We stopped benching on rbenv based systems, everyone moved to chruby or rvm cause the shims rbenv adds introduce significant delays on boot.

https://discuss.rubyonrails.org/t/why-is-rails-boot-so-slow-...


Location: Seattle, WA

Remote: Remote is not required, but I do have experience with it and like it

Willing to relocate: No

Technologies: Ruby, Python, Node, Java, C#, Elasticsearch, RDBMS, Kubernetes, AWS

Resume/CV: https://drive.google.com/file/d/1Nmzh_lXISLDnB0FjNubBeeTtngG...

Email: philip@philipbjorge.com

I'm a passionate full stack software developer with operations experience who can bounce between creating, extending, and operating deployment pipelines, cloud infrastructure, web services, and native and web apps.


It links to Amazon Kindle. I believe this is the link that their app spits out.

https://urlex.org/ can expand these for you.


On iPhone X, you can hold the power button and volume up for 5 seconds and it will lock the phone, disable Face ID, and force you to enter your passcode to unlock.


You can also tap the power button five times in a row, which brings up the emergency screen (Power Off, Medical ID, SOS call) and also disables face recognition until a PIN unlock has happened.


Note this initiates an alarm and emergency call, with a three-second window in which to cancel. Quite a surprise!!


I wish there was a Control Center toggle for this. I like to disable biometrics when I travel, so a CC option would be great.


How does that work if you are under arrest? This is the scenario I am referring to and the article is about the police, so it's on point.


The idea being that you are able to disable the biometric stuff before you expect to encounter a situation involving overreaching law enforcement. Hitting the power button 5 times in your pocket is easy enough.

Additionally, Touch ID is disabled after the device has gone several hours without being unlocked. I presume Face ID is no different. So if you refuse to comply and they have to get a court order, the time limit may be reached.

You can even have the device wipe itself after 10 failed attempts to unlock it.


If you’re doing something illegal, I’d assume you wouldn’t use Face ID because of this exact scenario, but from what I’ve seen, most criminals are dumb.


>from what I’ve seen, most criminals are dumb

Nope, just the ones you see caught.


Innocent people get arrested. Innocent people spend years in jail awaiting trial. And occasionally innocent people get convicted.

