Unfortunately, real-life problems involve a lot of context, and introducing that context into an interview is hard.
It either requires a long conversation where someone might lose track, or involves writing a largish code base and having the candidate work on that (which is a lot of work, and once again the candidate lacks context).
In my experience, if a problem is simple enough to be completed in an hour, it's simple enough for AI.
IMO, an interview question with a right answer is basically an early screening tool to check that the person is paying the slightest bit of attention. The questions that really give you useful insight into who someone is as an engineer are the ones that don't have a right answer and can't reasonably be solved within an interview. If your question has an achievable correct answer, you're narrowing the range of information you can glean from it.
I recently had a question where I was just given a bunch of tuples and told "let me know anything you can glean about this data in two hours". I think you learn much more by seeing how someone fails at a complex nebulous task than whether someone can implement a doubly linked list from memory or whatever.
Real life programming is like 80% failing at complex nebulous tasks, picking yourself up and trying again (by volume). Interviews should simulate that. If AI helps you, so be it.
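As a sketch of the kind of open-ended task described above ("let me know anything you can glean about this data"), a first pass usually starts with basic profiling: row counts, per-column types, cardinality, and numeric ranges. The sample data and the `profile_tuples` helper below are hypothetical, not from the actual interview:

```python
from collections import Counter

# Hypothetical stand-in for the tuples handed over in the interview task.
records = [("a", 1, 3.2), ("b", 2, 1.1), ("a", 5, 0.4), ("c", 2, 3.2)]

def profile_tuples(rows):
    """First-pass profiling: row count, tuple arities, and per-column stats."""
    summary = {
        "count": len(rows),
        "arities": Counter(len(r) for r in rows),  # mixed arities are a red flag
        "columns": [],
    }
    for col in zip(*rows):  # transpose tuples into columns
        info = {
            "types": Counter(type(v).__name__ for v in col),
            "distinct": len(set(col)),
        }
        if all(isinstance(v, (int, float)) for v in col):
            info["min"], info["max"] = min(col), max(col)
        summary["columns"].append(info)
    return summary

print(profile_tuples(records))
```

Even a rough summary like this surfaces the questions worth digging into next (skewed distributions, suspicious duplicates, columns that look categorical), which is the real point of the exercise.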
- We're a close-knit team of 5 in Sydney (plus several international teammates)
- We're profitable and growing
Job details:
- Full stack (Python/Django backend, React and TypeScript frontend)
- 3rd engineer - this means you will have a lot of freedom and autonomy, and the ability to work on a much broader set of tasks than at most other companies
My partner's family plays a version of Canasta where you start dealing by picking up all the cards you think you'll need to deal, and if you picked it perfectly you get another 100 points.
It’s a great addition to the game and makes it a positive to be the dealer.
Definitely give them a go; we use fine-tuned ada a bunch for classification work, for example. I personally think the smaller models are overlooked and don't get enough love. If OpenAI increased the context window of a model like babbage to 8k tokens, I feel that would be as big a deal as a marginal improvement to davinci, purely because so many use cases rely on low-latency, high-request-volume models.