moneywoes's comments | Hacker News

Any suggested e-ink tablet with a higher refresh rate that this would work better on?

Many, if not all, of the current-generation Boox devices. Choose the comparison category "Refresh Time" here:

https://www.mydeepguide.com/daf-tool

Be aware that Boox runs Android apps. Many other brands do not.


I use the Boox 10.3 for reading emails, text-based sites like this, and manga. It's bliss and has replaced 80% of my iPad use. The experience of using it outside completely trounces normal screens.

As soon as they make larger, better 60 Hz panels, I will 100% switch all my monitors over. I think making videos look worse is a positive. We don't need doomscrolling. We don't need 60 fps React buttons with smooth gradients. We don't need to HDR the entire web. I primarily use text-based sites anyway, so e-ink is perfect for me.


I'm writing this on a Daylight Computer. It's been my primary mobile device (instead of a smartphone) for all of 2025. I cannot recommend it enough.

How much of a fee do you take?


We built our own billing engine, so the total cost to you is 0.65% (on top of the normal 2.9%). By comparison, Stripe Billing costs 0.70%.
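Concretely: on a $1,000 invoice that works out to roughly $29 + $6.50 = $35.50 all-in with us, versus $29 + $7.00 = $36.00 with Stripe Billing.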


What is the cheapest country for a nomad?


Vietnam.

Source: I've been to almost every country in SEA at least 3x. (Brunei was just once; I never went to Timor-Leste.)

Check the forex changes and rent prices if you don't believe me.

Harder to factor in are visa costs. In Vietnam, you need to leave every 90 days, so you need to buy a $25 USD visa plus flights/buses and hotels for 3-5 days while you get your next visa. In Thailand, you only need to leave every 6 months on the DTV.


Thailand is cracking down on visa runs and people staying quasi-permanently on short-stay visas: https://economictimes.indiatimes.com/nri/visit/thailand-step...


The parent mentions the DTV visa which is the opposite of the visa-run strategy. Realistically though, if you're a "nomad" from a country with a powerful-ish passport you can come to Thailand for 60 days, extend once for 30 days for a total stay of 90 days. After that you can do a bit of a loop between Malaysia, Vietnam, Cambodia, Laos, Indonesia, Philippines in whatever order you prefer and come back to Thailand in a year. They'll have no problem letting you in again.

It's pretty easy to spend a year in SEA without raising eyebrows at any border if you're willing to change countries somewhat often and don't mind AirAsia flights.


That is basically my life. I've visited almost every country in the region this year (+ China and Japan) on a tourist visa.

The problem for me personally is that this life is stressful on relationships, health, and personal productivity. Spending a weekend every 1-2 months dealing with travel (and arrangements) is exhausting and costly in productivity hours.


not FOSS


What's the value prop over Cursor?


What about things like rate limiting? How are those implemented? Any good reads?


What's the difference between that and those providers exposing an API?


MCP defines the API so vendors of LLM tools like Cursor, Claude Code, Codex, etc. don't all make their own bespoke, custom ways to call tools.

The main issue is the disagreement on how to declare that an MCP tool exists. Cursor, VS Code, and Claude all use basically the same mcp.json file, but Codex uses `config.toml`. There's very little uniformity for project-specific MCP tools either; they tend to be defined globally.


Maybe this is a dumb question, but isn't this solved by publishing good API docs, and then pointing the LLM to those docs as a training resource?


>but isn't this solved by publishing good API docs, and then pointing the LLM to those docs as a training resource?

Yes.

It's not a dumb question. The situation is so dumb you feel like an idiot for asking the obvious question. But it's the right question to ask.

Also, you don't need to "train" the LLM on those resources. All major models have function/tool calling built in. Either create your own readme.txt with extra context or, if possible, update the APIs with more descriptive metadata (something like Swagger/OpenAPI) to help the LLM understand how to use the API.
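
To make that concrete, here's a rough sketch of what such descriptive metadata looks like once it's packaged as a tool definition for the model. The endpoint and its parameter are made up for illustration, and field names vary a bit by provider (OpenAI calls the schema block "parameters", Anthropic calls it "input_schema"), so treat this as the general shape rather than a copy-paste spec:

    # Hypothetical billing-API endpoint described as a tool definition.
    # This is essentially Swagger/OpenAPI-style metadata restated as a
    # JSON-Schema-shaped dict that the model reads as context.
    get_invoice_tool = {
        "name": "get_invoice",
        "description": "Fetch a single invoice by ID from the billing API.",
        "input_schema": {
            "type": "object",
            "properties": {
                "invoice_id": {
                    "type": "string",
                    "description": "The invoice identifier, e.g. 'inv_123'.",
                },
            },
            "required": ["invoice_id"],
        },
    }

The "good API docs" from the question above literally become the description fields; the model uses them to decide when and how to request a call.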


You keep saying that major models have "tool calling built in", and that by giving them context about available APIs, the LLM can "use the API".

But you don't explain, in any of your comments, precisely how an LLM in practice is able to itself invoke an API function. Could you explain how?

A model is typically distributed as a set of parameters, interpreted by an inference framework (such as llama.cpp), and not as a standalone application that understands how to invoke external functions.

So I am very keen to understand how these "major models" would invoke a function in the absence of a chassis application (like Claude Code, which tells the model, via a prompt prefix, what tokens it should emit to trigger a function, and which, on detecting those tokens, invokes the function on the model's behalf - which is not at all the same thing as the model invoking the function itself).

Just a high level explanation of how you are saying it works would be most illuminating.


The LLM's output differentiates between text intended for the user to see and tool usage.

You might be thinking "but I've never seen any sort of metadata in textual output from LLMs, so how does the client/agent know?"

To which I will ask: when you loaded this page in your browser, did you see any HTML tags, CSS, etc.? No. But that's only because your browser read the HTML and rendered the page, hiding the markup from you.

Similarly, what the LLM generates looks quite different compared to what you'll see in typical, interactive usage.

See for example: https://platform.openai.com/docs/guides/function-calling

The LLM might generate something like this for text:

    {
      "content": [
        {
          "type": "text",
          "text": "Hello there!"
        }
      ],
      "role": "assistant",
      "stop_reason": "end_turn"
    }
Or this for a tool call:

    {
      "content": [
        {
          "type": "tool_use",
          "id": "toolu_abc123",
          "name": "get_current_weather",
          "input": {
            "location": "Boston, MA"
          }
        }
      ],
      "role": "assistant",
      "stop_reason": "tool_use"
    }
The schema is enforced much like end-user visible structured outputs work -- if you're not familiar, many services will let you constrain the output to validate against a given schema. See for example:

https://simonwillison.net/2025/Feb/28/llm-schemas/

https://platform.openai.com/docs/guides/structured-outputs
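
To tie this back to the "chassis" question above, here's a minimal, runnable sketch of the client-side loop. It uses a stand-in call_model() that just replays the two example responses above instead of hitting a real API; the point is that the model only emits the structured tool_use block, while this ordinary client code is what actually executes the function and feeds the result back:

    import json

    # Hypothetical registry of locally implemented tools. The model never
    # executes these; it only names one and supplies arguments.
    TOOLS = {
        "get_current_weather": lambda location: {"location": location, "temp_f": 58},
    }

    # Stand-in for a real provider SDK call: replays the two example
    # responses from above so the loop can run end to end.
    _canned = [
        {"content": [{"type": "tool_use", "id": "toolu_abc123",
                      "name": "get_current_weather",
                      "input": {"location": "Boston, MA"}}],
         "role": "assistant", "stop_reason": "tool_use"},
        {"content": [{"type": "text", "text": "It's 58F in Boston right now."}],
         "role": "assistant", "stop_reason": "end_turn"},
    ]

    def call_model(messages):
        return _canned.pop(0)

    def run_agent(user_prompt):
        messages = [{"role": "user", "content": user_prompt}]
        while True:
            reply = call_model(messages)
            messages.append({"role": "assistant", "content": reply["content"]})
            if reply["stop_reason"] != "tool_use":
                # Plain text: nothing left for the client to execute.
                return reply["content"][0]["text"]
            # The client (not the model) spots the tool_use block, runs the
            # real function, and sends the result back as a tool_result.
            for block in reply["content"]:
                if block["type"] == "tool_use":
                    result = TOOLS[block["name"]](**block["input"])
                    messages.append({"role": "user", "content": [{
                        "type": "tool_result",
                        "tool_use_id": block["id"],
                        "content": json.dumps(result),
                    }]})

    print(run_agent("What's the weather in Boston?"))

The tool_result is tied back to the tool_use id so that, when several calls are requested at once, the model can match each result to the call it made.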


It is. Anthropic builds stuff like MCP and skills to try and lock people into their ecosystem. I'm sure they were surprised when MCP totally took off (I know I was).


I don't think there is any attempt at lock-in here; it's simply that skills are superior to MCP.

See this previous discussion on "Show HN: Playwright Skill for Claude Code – Less context than playwright-MCP (github.com/lackeyjb)": https://news.ycombinator.com/item?id=45642911

MCP deficiencies are well known:

https://www.anthropic.com/engineering/code-execution-with-mc...

https://blog.cloudflare.com/code-mode/


How so? Any examples?


Personally, it’s highlighted the value of physical books and helped me spend less time getting sucked into rabbit holes on devices. I’ve been much more deliberate about what text I choose to read. Been burning through classics that have been on my shelf for decades.


> Outsourcing comes and goes in waves. Good talent in India and the Philippines tend to work for FAANG companies, often at very comparable salaries to the west.

In those locations?

Based on sheer CS grad numbers, why wouldn't companies just shift their R&D operations there, then?


> Based on sheer CS grad numbers, why wouldn't companies just shift their R&D operations there, then?

There are lots of CS grads, yes. But most colleges out there are essentially degree mills, and this carries over to the workplace, where your average software engineer or engineering manager has very little understanding of what they're actually doing (this[1] article was posted on HN and gives you a sense of the quality of engineering in India).

For anything slightly complicated, companies seem to be interested only in hiring from the best colleges and pay through the nose in the process. A friend of a friend does some hardware work at a FAANG and gets paid at almost that level.

[1] https://eaton-works.com/2025/10/28/tata-motors-hack/


Conversation about outsourcing aside, it isn’t fair to pick one example and generalize to say an entire country’s talent pool is poor.

The US has the best engineering talent pool in the world and you can find dozens of examples at major companies as bad (or worse) than the one you linked.


The FAANG I work for is trying to do just that. But while new grads are indeed a dime a dozen, you can't staff an R&D organization with only new grads, and finding and retaining skilled seniors is so tough that it has resorted to offering US-based Indians packages with US-level comp to entice them to move back for a few years to bootstrap teams.


How are games today different?

