
Not who you're replying to (but I'm in the same camp). I use the album art as decoration, but the music is the first selection criterion. The music has to mean something to me first, and then the album art just needs to "pass".

I have young kids also, so I try to stay away from violent or scary album art.


Why not just buy the cardboard cover?

Where can you buy just the cardboard cover?

Now go deeper! Prompt Gemini to write a prompt for itself that would write a prompt for itself that would get similar results.


Inception 2.0


Can I do it in an infinite loop and bring all the data centers down?


Don't know if this is sarcasm or not. If you have 23 req/day, then there's no tech problem to solve. Whatever you have is good enough, and increasing traffic will come from solving problems outside tech (marketing, etc.).


Thanks! Never heard of Endless OS till now. Looks very reasonable for kids.

My kids are too young for it, but it led me to find GCompris (especially with kiosk mode), which is a much better fit for my kids' ages.


I studied physics in university, and found it challenging to find null-result publications to cite, which can be useful when proposing a new experiment or as background info for a non-null paper.

I promised myself if I became ultra-wealthy I would start a "Journal of Null Science" to collect these publications. (this journal still doesn't exist)



This is really, really necessary ...

If we're really pro-science, some non-profit should fund this sort of research.

P.S. Heck, if nothing else, it'd give synthetic intellection systems somewhere not to go with their research, and their agency and such ...


Before tackling that, a non-profit should fund well-designed randomized controlled trials in areas where none exist. Which is most of them. Commit to funding and publishing the trial, regardless of outcome, once a cross-disciplinary panel of disinterested experts on trial statistics approves the pre-registered methodology. If there are too many qualified studies to fund, choose randomly.

This alone would probably kill off a lot of fraudulent science in areas like nutrition and psychology. It's what the government should be doing with NIH and NSF funding, but is not.

If you manage to get a good RCT through execution & publication, that should make your career, regardless of outcome.


> should fund well-designed randomized controlled trials in areas where none exist.

Indeed. That is the "baseline"-setting science; you are quite correct.



Could just be online for a start; then it's just time for the organization that you'd need. Sounds like a fun project, to be honest.


The Concordians believe that the Overture will be the second coming of their saviour.


From the article:

> The Concorde, which was retired in 2003, was built jointly by the British and French governments.

This is the technicality TechCrunch is using to make this claim.


Yeah, but that's not what “civil” means.


Also, Concorde wasn't "built" by governments. Funded by them, perhaps.


Can you comment on the compatibility with other 3.5mm tips like the TS80/TS80P?

Will there be other tip shapes available?

Is the tip design patented (and enforced) or will you allow for 3rd party tips?


We did not patent the tip design, anyone is welcome to make third party tips.

Tips we'll have at launch: Cone, Bevel 1.5, Wedge 1.5, Point, Bevel 2.6, Knife 2.5, Knife 1.4

We made some different electrical design decisions than they did. TS-80 tips aren't rated for the power that we're putting out, so being compatible with the TS-80 tips could be pretty sketchy.


Amazing! Thanks for clarifying. Now I'm much more interested.


I completely agree. My wife and I have our locations shared with each other. I'm not "surveilling" her. I almost never remember that we have this feature until we need it for some reason, and even then it's normally very benign (how far from home are you? should we wait before having dinner?).

Honestly, it's because I just don't care. I'm not worried about her changing plans or going somewhere without telling me (that feels dirty even to think about), and at a certain age, I also won't care what my kids do. They will also change plans, or explore off the path. So what? But that one time I _really_ need to call them or they need help, we will be glad they have a little bit of tech on them.

I also find it somewhat interesting that many of the same people who are so worried about this type of surveillance _already_ have the devices and/or technical knowledge to surveil others and choose not to for whatever reason. For example, we have home networks and could track what our families do online. We _could_ put a malicious app onto someone's phone, or a tracker on someone's car. Simply having the ability to do something does not imply that it will be done, and certainly doesn't imply that it will be done maliciously.


As someone who learned mathematics first and programming later, I think it took me about 10 years of working in data-intensive programming before I could write really "good" SQL from scratch.

I completely attribute this to SQL being difficult or "backwards" to parse. I mean backwards in the way that in SQL you start with what you want first (the SELECT) rather than what you have and widdling it down. Also in SQL (as the author states) you often need to read and understand the structure of the database before you can be 100% sure what the query is doing. SQL is very difficult to parse into a consistent symbolic language.

The turning point for me was to just accept SQL for what it is. It feels overly flexible in some areas (and then comparatively rigid in other areas), but instead of fighting against this or trying to understand it as a consistent, precise language, I instead just go "oh SQL - you are not like the other programming languages I use but you can do some pretty neat stuff so we can be on good terms".

Writing good SQL involves understanding the database, understanding exactly the end result you want, and only then constructing the subqueries or building blocks you need to get to your result. (then followed by some trial and error of course)


I feel like a foreigner in another land when I read your comment and others like it. For as long as I can remember using SQL, I can't remember ever finding it more difficult or backwards than anything else I use.

That difference might go some way towards explaining why I prefer a much more database heavy/thick approach to writing apps than my peers.


I agree. I never even thought about "select what you want first" as a problem until someone else pointed it out.

Programmers seem far too sensitive about wanting everything to work one way. SQL is a very powerful DSL. It has its quirks but nothing that ever enraged me. I don't really care that it doesn't work like some other stuff I use, I just accept that I'm learning the language of a particular domain. This doesn't mean that I don't think there is always room for improvement. Of course I think FROM first would be a little nicer, but so much nicer that I think it's worth changing a whole battle-tested standard? Not at all. The pain is so minimal I don't even feel it.


> I never even thought about "select what you want first" as a problem until someone else pointed it out.

I thought it was a problem as soon as IDEs had good SQL autocomplete. I got so used to depending on just being able to "tab my way through" autocompleting in other languages (e.g. if you do <objectVariable>.<propertyName>, it's obvious the set of property names can be narrowed down based on the type of the variable), that it immediately becomes apparent that doing select first sucks, because autocomplete has no good information until you get to the from clause. A lot of times with a good SQL editor like Datagrip I just do "SELECT * FROM foo" first, and then go back and edit the select columns because it can now autocomplete them quickly.
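
For example (table and column names made up), I'll first write something like:

    SELECT *
    FROM orders o
    JOIN customers c ON c.id = o.customer_id
    WHERE o.created_at > '2023-01-01'

and only then go back and narrow the star down to the columns I actually want, since by that point the editor knows which columns are in scope:

    SELECT c.name, o.total, o.created_at
    FROM orders o
    JOIN customers c ON c.id = o.customer_id
    WHERE o.created_at > '2023-01-01'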

I now notice this in other places too, like I hate how in JavaScript you do `import { foo } from "moduleBar"`. I'd much rather do `from "moduleBar" import { foo }`.


Hey, that's fair! I'm not a big autocomplete user so I never thought of this, but it's a good argument.

> I now notice this in other places too, like I hate how in JavaScript you do `import { foo } from "moduleBar"`. I'd much rather do `from "moduleBar" import { foo }`.

Personally I prefer languages that don't make you import at all ;)


The actual ISO standard falls well short of being useful/sufficient to anyone who isn't an incumbent player. It's effectively a moat and therefore a direct impediment to competition from teams who have novel technical ideas but don't have access to significant capital - building a SQL implementation is a long, expensive journey. This is why many startups resort to building Postgres extensions, or using Calcite or DataFusion.

If SQL weren't so (needlessly) complex we would see much more competition across the database space.


> If SQL weren't so (needlessly) complex we would see much more competition across the database space.

I think there is more competition across the database space now than back when the SQL spec was less complex (say, in 1989 with SQL-89).

Also, much complexity in the spec comes from complex features; I really like grouping sets and window functions, and sure, that adds complexity; but it does allow users to express certain concepts that allow the database to more efficiently process data than sending everything to the user and letting the user solve the computations.
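
For example (illustrative schema, names made up), a window function lets you ask for a per-customer running total in one declarative statement, which the engine can compute in a single pass instead of you pulling every row down and looping over it:

    SELECT customer_id,
           order_date,
           amount,
           SUM(amount) OVER (PARTITION BY customer_id ORDER BY order_date) AS running_total
    FROM orders;

And grouping sets express several aggregation levels in one query:

    SELECT region, product, SUM(amount) AS total
    FROM sales
    GROUP BY GROUPING SETS ((region), (product), ());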


LINQ runs with FROM being first. Definitely a trivial difference, but a bit easier.


Ya, I use Ecto, which is the de facto Elixir SQL abstraction. It's heavily inspired by LINQ, though it only works with SQL. In any event, it also starts with FROM and I always end up writing my selects last. I've just never felt particularly annoyed writing them first in SQL (and I've written a LOT of raw SQL); I'd just do it without thinking about it. Never thought of it as a big deal.

The big problem with SQL AFAIC is its poor (really complete lack of) composability.


> I feel like a foreigner in another land when I read your comment and others like it. For as long as I can remember using SQL, I can't remember ever finding it more difficult or backwards than anything else I use

Learn LINQ or query/list comprehensions and then you'll easily see why SQL is backwards.


I've been using Django almost as long as I've been using SQL and I prefer the SQL ordering more: it matches the rest of the code, making it faster/easier to read. As a crude example:

  SELECT results FROM source WHERE criteria

  results = source(criteria)

It's rare to see someone want to change assignments in code to be like:

  source(criteria) -> results

Where I see it as the same thing: the SELECT columns, like the variable assignment, are the interaction point with the following lines of code.

And yes, CTE ordering does annoy me because of this. Putting it in the middle is pretty much the worst order.


> Where I see it as the same thing: the SELECT columns, like the variable assignment, are the interaction point with the following lines of code.

Indeed, which is why source(criteria) -> results makes more sense: the results definition is right next to the code that's going to be using that definition. If you put the results definition first as with SQL, then you have to scroll up to find the context (although perhaps Python's indentation sensitivity is the tripping point in this case). Not to mention the fact that the SQL way completely destroys any chance of code completion.

I'm going to boldly state that the SQL way is literally objectively wrong, in that there is no world in which SQL's choice is superior for general querying.


Then why are you advocating for it?

> or query/list comprehensions

List comprehensions are column first.


Right, here's the nuance: list comprehensions are intended to be concise one-liners, so having the results definition far off to the right defeats the principle I was outlining. Most SQL queries are not like this; they are almost always multiline, of the form:

    select x, y, z
    from Foo
    where a or b

Here the opposite is the case: selection-first moves the return definition far from the subsequent code that uses it.

So if you're going to support list comprehensions, consider a monadic do-style notation, which lets you chain them and again places the select last:

https://wiki.haskell.org/List_comprehension#List_monad


Your first example should be more like

  return source(criteria).results

In your SQL, `results` isn't the variable you're assigning to; it's the column you're reading from source.


I learned SQL before I learned set theory. While learning set theory I remember thinking "oh this notation is just SQL backwards." Afterwards I began to find SQL much harder because I realized there are so many ways to mathematically ask for the same data, but SQL servers will computationally arrive at the end differently and with very different performance. This is a minor deal if you're just doing small transactions on the database, because if you are dealing with pages of 100 objects it's trivial to hit good-enough performance benchmarks, even with a few joins.

I was first introduced to the issue of needing hyper-optimized SQL in ETL-type tasks, dealing with very large relational databases. The company switched to a non-relational database shortly after I left, and it was the first time I professionally witnessed someone make the switch and agreed that it was obviously required for them. We were dealing with very large batch operations every night, and our Fortune 500 customers expected to have the newest data and to be able to do Business Intelligence operations on the data every morning. After acquiring bigger and bigger customers, and collecting longer and longer histories of data, our DBA team had exhausted every trick to get maximum performance from SQL. I was writing BI SQL scripts against this large pool of SQL data to white-glove some high-value customers, and constantly had to ask people for help optimizing the SQL. I did this for a year at the beginning of my career, before deciding to move cities for better opportunities.

Lately, I've begun seeing the requirements of high-performance SQL again with the wave of microservice architectures. The internal dependency chain, even of what would have been a mid-size monolith project a decade ago, can be huge. If your upstream sets a KBI for response time, it's likely you'll get asked to reduce your response time if your microservice takes up more than a few percentage points of the total end-to-end time. Often, if you are using relational SQL with an ORM, you can find performance increases in your slowest queries by hand-writing the SQL. Many ORMs have a really good library for generating SQL queries that they expose to users, but almost all ORMs will allow you to write a direct SQL query or call a stored procedure. The trick to getting performance gains is to capture the SQL your ORM is generating and show it to the best SQL expert that will agree to help you. If they can write better SQL than the ORM generated, then incorporate it into your app and have the SQL expert and a security expert on the PR. You might also need to do a SQL migration to modify indexes.
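
As a made-up illustration of the kind of rewrite I mean: an ORM will often issue one query per parent row (the classic N+1 pattern),

    -- effectively run once per order by the ORM
    SELECT * FROM order_items WHERE order_id = ?;

while the hand-written replacement fetches the same data in a single round trip that the planner can optimize:

    SELECT o.id, o.created_at, i.sku, i.quantity
    FROM orders o
    JOIN order_items i ON i.order_id = o.id
    WHERE o.created_at >= '2023-01-01';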

So in summary, I think your experience with SQL depends heavily on your mathematical background and your professional experience. It's important to look at SQL as computational steps to reach your required data and not simply as a way to describe the data you would like the SQL server to give you.


Was this before BigQuery/Presto/Trino? To me it seems like those technologies would have been a good fit.

They don't really work with indexes but instead with regular files stored in partitions (where date is typically one of the partition keys).

This means that they only have to worry about the data (e.g. dates) that you are actually querying. And they scale up to the number of CPUs that particular calculation needs. They rarely choke on big query sizes. And big tables are not really an issue as long as you query only the partitions you need.
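
For example (hypothetical table partitioned by event_date), a query like

    SELECT COUNT(*)
    FROM events
    WHERE event_date = DATE '2023-06-01';

only has to read the files for that one partition, while dropping the event_date predicate would force a scan of every partition.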


Those technologies were brand new at the time; the discussions about the problem started in 2013. The company (I had zero input) chose a more established vendor with an older product. Given the time and the institutional customers that were trusting us with their data, I suspect any cloud-based offerings were a nonstarter, and open source felt like a liability.

Of course, with 20/20 hindsight that decision is easy to criticize. I suspect their primary concerns were to minimize risk and costs while meeting our customers' requirements. Even today, making a brand-new Google product or Facebook-backed open-source project a hard dependency would be too much risk for an established business.


> I can't remember ever finding it more difficult or backwards than anything else I use

This is the major problem. SQL looks like it is not "difficult". You don't see (as a user) all of its MASSIVE, HUGE problems.

That is why:

- People barely do more than basic SQL

- People can't imagine SQL can be used for more than that, which leads to:

- Doing a lot of hacky, complex, unnecessary stuff in app code (despite the RDBMS being capable of it)

- Trying to layer something "better" in the forms of ORM

- Refusing to use advanced stuff like views, stored procedures, custom types, and the like

- Or using advanced stuff like views, stored procedures, custom types, and the like, but wrongly

- Thinking that SQL means RDBMS

- So when the RDBMS fails, it is because the RDBMS is inferior. But in fact, it is SQL that has failed (you bet the internals of the RDBMS are far more powerful than any NoSQL engine; unfortunately, they are buried forever because SQL is a bad programming interface for the true potential of the engine!)

- So dropping SQL/RDBMS for something better, like JS (seriously?)

- And they are happier with their "cloud scale" NoSQL that rarely performs better, needs major, massive hacks for queries, reimplements ACID again (poorly), is more prone to data issues, etc.

And this is not even the start of it. If you think "it is bad to make a full app, all of its code, in the relational model", that is how much brain damage SQL has caused.

---

I can count on my fingers the number of semi-proper DB/SQL usages in my niche (ERPs), and that is mostly mine! (For example: I use dates for dates, not strings, unlike many of my peers!) And that is taking into account that I only learned what the heck that "relational" thingy is after 20+ years of professional use.

Go figure!

P.S.: And then I go to my code and see "what the heck, I could have done this in a few lines of SQL" and "what the heck, if only SQL were well designed I could do this dozen lines of SQL in 3!"


The trial and error is the worst part.

In traditional languages, you can print the intermediate result iteration by iteration and understand if there is something wrong.

In SQL you sample output, and you keep changing the query until you think you get it right. And then 2 years later someone else finds that the query was wrong all this time.


Common Table Expressions (CTEs) do help a little, as you can query each "table" and inspect the output. Debugging a giant query with deeply nested subqueries is very painful indeed.
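
For example (made-up query), you can point the final SELECT at any intermediate step while building it up:

    WITH recent_orders AS (
        SELECT * FROM orders WHERE created_at > '2023-01-01'
    ),
    order_totals AS (
        SELECT customer_id, SUM(amount) AS total
        FROM recent_orders
        GROUP BY customer_id
    )
    SELECT * FROM order_totals  -- swap in recent_orders to inspect the earlier step
    ORDER BY total DESC;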


So do table variables and temp tables.
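
For example (T-SQL-style syntax, names made up), you can materialize an intermediate step into a temp table and inspect it directly:

    SELECT customer_id, SUM(amount) AS total
    INTO #order_totals
    FROM orders
    GROUP BY customer_id;

    SELECT TOP 100 * FROM #order_totals;  -- sanity-check the intermediate result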


> The trial and error is the worst part.

I don't know about anyone else, but I do this kinda naturally when writing SQL queries. Usually start with a base table, query the first 100 rows to see what the data looks like, start joining on other tables to get info I need, querying as I go to check join conditions, perhaps build out some CTEs if I need to do some more complex work, query those to check the format of the data ... And so on.

It doesn't feel that different to any other programming in that sense. Querying is printing.


> you can print iteration by iteration the intermediate result

You would not be able to do that with a multi-threaded/multi-process application.

And this is the reason why e.g. Trino/Presto is so powerful together with SQL.

Instead of telling the computer how to go about getting your result, you tell it what result you want and let it do it in the best way.

The most up-front way of telling a computer "how" is a for loop. And SQL does not have one. It may seem limiting, but avoiding explicit for loops gives freedom to the computer. If it sees fit to distribute that calculation over 200 distributed CPUs, it can do that. With an imperative language you need to tell the computer exactly how it should distribute it. And from there it gets really hairy.
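
For example (hypothetical table), this says nothing about loops or threads, so an engine like Trino is free to split it across however many workers it likes:

    SELECT country, COUNT(*) AS user_count
    FROM users
    GROUP BY country;

The imperative equivalent would be a loop over every row accumulating counts in a map, and parallelizing that becomes your problem rather than the engine's.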


In development I don't need it to be multi-threaded. 1 thread is fine, as long as I can explain, step-by-step, how the calculations produced the output.


If you don't need threads in development OR production, you might as well do SELECT * from users and do the join in your imperative code.

If you need threads in production I think you will end up getting rid of your for loops anyway (or possibly, if you really want to, end up in a mutex/semaphore quagmire).

I must say, though, that there are other benefits to a declarative approach than just avoiding threading issues. But I guess it takes some getting used to.

I would say that the same "I can't step through my code" argument also applies to functional-style code.


> If you don't need threads in development OR production, you might as well do SELECT * from users and do the join in your imperative code.

Except that it most likely will be orders of magnitude slower. Most databases are very good at what they are doing.


Yes, kind of my point too. But the OP missed the possibility of stepping through the code.


Sure you can: set the concurrency limit to 1. If you're debugging the logic and not some race condition, then this works perfectly fine. Remember to profile afterwards, though.


Trial and error is usually a bad idea in all kinds of programming.


I mean, I never build a query from front to back. Usually I build it FROM -> JOIN -> WHERE -> SELECT.


Start off with SELECT *, then once the joins are working, filter * down to the essentials.


> widdling it down

Whittling. It means to carve something out of wood, with a metaphorical extension, as here, to gradually making something smaller by shaving bits of it away.


Important distinction. "Widdling" is urination.


I always thought writing SQL from scratch was the easy part. The hard part for me was coming back to my query a few weeks later.


This is true for most programming languages.


That's why I try (but sometimes forget) to extensively comment my queries that have any kind of complexity :)


This doesn't totally solve the issue of SELECT'ing first then filtering, but for complex queries I've found CTEs very useful (whenever the database/SQL dialect supports them).


What I usually do is start with "select *", get the joins and where clause down, then refine the select.


> I completely attribute this to SQL being difficult or "backwards" to parse. I mean backwards in the way that in SQL you start with what you want first (the SELECT) rather than what you have and widdling it down.

> The turning point for me was to just accept SQL for what it is.

Or just write PRQL and compile it to SQL

https://github.com/PRQL/prql


You may like PRQL, which gives a more composable-atoms based approach. I find it far easier than SQL.


Saying what you want first rather than what you have is evidence of the von Neumann bottleneck, or it was a sign of the times when SQL was being developed on 1970s machines.

Either way, point taken that it is not like a proof.


Covey's "start with the end in mind" is not bad advice when building something complex. With procedural languages you do the same: you first write the signature, the parameters expected to go in and out, and then you start writing the way to achieve this.

