
There are a lot of problems with the i210. Here’s a sample:

https://www.google.com/search?q=i210+proxmox+e1000e+disable

Most people don’t really use their NICs “all the time” “with many hosts.” The i210 in particular will hang after a few months of, e.g., etcd cluster traffic on 9th- and 10th-gen Intel platforms, which are common in SFF PCs.
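The usual mitigations that come up in those threads are disabling EEE and the hardware offloads on the i210. A hedged sketch of that workaround; the interface name enp1s0 is an assumption, and none of this fixes the underlying silicon/driver issue:

```shell
# Disable Energy-Efficient Ethernet, frequently implicated in i210 hangs
# (interface name enp1s0 is an assumption; check `ip link` for yours)
ethtool --set-eee enp1s0 eee off

# Turn off the hardware offloads that show up in hang reports
ethtool -K enp1s0 tso off gso off gro off

# If the NIC has already wedged, a down/up cycle sometimes recovers it
ip link set enp1s0 down && ip link set enp1s0 up
```

These settings don't persist across reboots; you'd wire them into a systemd unit or udev rule to make them stick.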

On Windows, the NDIS driver isn’t much better: many disconnects under a similar traffic load as on Linux, and features like receive-side coalescing are broken. They also don’t provide proper INFs for Windows Server editions, just because.

I assume Intel does all of this on purpose. I don’t think their functionally equivalent server SKUs are this broken.

Apparently the 10Gig patents are expiring very soon. That should make Realtek, Broadcom, and Aquantia chips a lot cheaper. IMO, motherboards should be much smaller, shipping with a BMC and far more rational I/O: SFP+, M.2 22110, OCuLink, U.2, and PCIe slots spaced for Infinity Fabric and NVLink. Everyone should be using LVFS for firmware: NVMe firmware, despite having a standardized update mechanism, is a complete mess, with bugs on every major controller.
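For vendors that do publish to LVFS, the update flow is just the stock fwupd CLI; a sketch of the usual commands (what actually shows up depends entirely on whether your vendor uploads firmware):

```shell
# Refresh update metadata from LVFS
fwupdmgr refresh

# List devices fwupd can see (NVMe drives, UEFI, docks, some NICs, ...)
fwupdmgr get-devices

# Show pending firmware updates, then apply them
fwupdmgr get-updates
fwupdmgr update
```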

I share all of this as someone with experience operating commodity hardware at scale. People are so wasteful with their hardware.



I like your better I/O idea.

Many systems that cost more than a good car still ship with the Broadcom 5719 (tg3), a design lineage dating to 1999. It has a single transmit queue and the driver is full of workarounds. It's a complete joke that these are still supplied today.
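You can see the queue limitation directly from userspace; a sketch assuming the tg3 port shows up as eno1 (the interface name is an assumption):

```shell
# Show hardware queue (channel) counts the NIC supports vs. what's in use;
# on a tg3 part the TX side is a single queue
ethtool -l eno1

# Driver and firmware identification, for comparison
ethtool -i eno1
```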

SFP would be great, but I'd settle for an onboard NIC chipset that was designed in the last 10 years.




