alsetmusic's comments | Hacker News

Apologies. Not trying to spam by submitting two links about the same thing, but there’s Jeff’s take, which is valuable because he pays close attention to the area. I then started reading this article, which the first one was about, and found it more compelling.

Dang, please nuke one or the other as appropriate.


The funny thing about it is how no one learns. Granted, one can’t be expected to read every thread on Reddit about LLM development by people who are out of their depth (see the person who nuked their D: drive last month and the LLM apologized). But I’m reminded of the multiple lawyers who submitted bullshit briefs to courts with made-up citations.

Those who don’t know history are doomed to repeat it. Those who know history are doomed to know that it’s repeating. It’s a personal hell that I’m in. Pull up a chair.


I work on large systems where security incidents end up on CNN. These large systems are racing toward LLM integration as fast as everyone else. The security practice at my firm has its hands basically tied by the silverbacks. To the other consultants on HN: protect yourself and keep a paper trail.

It's a bit dark, but I'm doing much better now, so happy ending. No need to wish me well or anything, I'm the happiest I've ever been (thankfully).

After reaching an age where bipolar disorder goes into full swing, I was unable to manage manic episodes; they'd spring up and I'd be awake for days and then crash horribly. I lost all hope that I'd be able to hold down a typical job ever again. I became a 24/7 alcoholic with the goal of never being conscious, trying to sleep through life until it ended.

I was at the local shop where I bought my booze buying a bunch of beer and vodka around 7-8am. A guy near me at the counter made a comment about what a great party must be coming. I looked at him, probably dead-eyed, and said, "I'm an alcoholic."

He put his hand on my shoulder. He didn't say anything. It was just a moment of compassion. It was deeply kind. What was communicated was simply that someone cared and, to this day, I wish I had a way to thank him for that profound gesture.


Came to recommend the same. Here's the podcast interview where I learned about the book: https://99percentinvisible.org/episode/invisible-women/

You can live a normal life without trading cryptocurrencies, that much is certain.

Several years ago, I wrote an angry email to loved ones about something I’d seen in national news (USA) about my city. A friend replied saying that he thought I should submit it to a local paper. Ended up as an op-ed. Not a major claim to fame, but I was still pleased that someone cared enough about my words to publish.

> Even recently, Musk fought back against the notion that Tesla relies on teleoperation for its Optimus demonstration. He specified that a new demo of Optimus doing kung-fu was “AI, not tele-operated”

The world's biggest liar, possibly. It's insane to me that laws and regulations haven't stopped him from lying to investors and the public, but that's the world in which we live.


The FCC would have a whole lot to say if he was lying about something like that at a publicly traded company.

What would the FCC do?

The SEC didn't even enforce the whole "he can't run his own Twitter account" punishment for tweeting that he took TSLA private at 420.


Sorry, I meant SEC. Just search for "Musk SEC". He's been fined and sued already for similar statements. It's pretty illegal to lie about the capabilities of the products of a publicly held company.

Just look at Nikola.


That’s what lesuorac is saying. The SEC found he violated the rules for a publicly traded company... And then could do absolutely nothing about it to enforce the rules.

He lies again and again. Occasionally gets a slap or a small fine. And then keeps doing it.

What has he lied about? With the caveat that a prediction of the future being incorrect and an estimation of a timeline being wrong is not a lie.

An example of a lie would be the topic at hand, misrepresenting current capabilities of an existing product.

Mars or self driving cars by Year X isn't a lie.


A lot. The easiest example was the Autopilot video that started off with "The car is driving itself, the driver is there for regulatory reasons". The video was created by stitching together different sessions, because in some of the sessions the car drove itself off the road into solid objects.

Or calling the Thai diver a pedophile.

People just give Elon too much benefit of the doubt. Saying Mars/self-driving cars are going to happen next year, every year for over a decade, is just a lie after the first couple of times.


One instance https://www.forbes.com/sites/willskipworth/2023/12/07/elon-m...

Though some of his future predictions are obviously things he knows will not possibly happen and are as close to a lie as you can get while still being plausibly deniable.


That’s what DOGE was/is for: the world’s richest man’s personal effort to gut the regulatory agencies pursuing him.

Anyone who can’t see that hasn’t been paying attention or is in denial, taking the lies at face value.


Nothing Electrek says can really be taken seriously. They’ve openly said they have an axe to grind.

> which seems awful penny-wise pound foolish of them.

On one of the podcasts that I listen to, which has given me many great recommends, one of the hosts has given up watching content until it hits three or four seasons because of exactly this.


Not the person you replied to, but I thought the same thing. Perl was my first as well, and it certainly shaped the way I think about coding. It made Python feel too rigid and Ruby feel familiar. There's something to be said for the restrictions of the environment you learn in; the domain where you first operate seems to shape future thinking.

I'm sure there are people who started in a language and later found something that made more sense. I'm just reflecting on what I've found in my experience.


> There's something to be said for the restrictions of the environment you learn in; the domain where you first operate seems to shape future thinking.

When I was at university, the academic running the programming-language course was adamant that the Sapir–Whorf hypothesis applied to programming languages, i.e. that language influences the way you think.


Seems somewhat related to Iverson's 1979 Turing Award lecture, "Notation as a Tool of Thought" (https://www.eecg.utoronto.ca/~jzhu/csc326/readings/iverson.p...) (https://news.ycombinator.com/item?id=25249563)

Reading the YCombinator link, there's a mention of APL and a comment by dTal [1] which says, in part:

> "A lot of the mystique of APL is because it's illegible ... nothing more than a DSL for 'numpy-like' code. .. same demo, using Julia and the result is (in my opinion) much more legible: ... let n=sum(map(

sum() in Julia is clearer and more readable at a glance than +/ in APL, but the APL version is a combination of two things: +, which is a binary addition function, and /, which is reduce, a higher-order operator or meta-function. sum() in Julia doesn't lead you to think about anything else except what other builtins exist. The APL notation leads you to wonder about combining other commands in that pattern: times-reduce is ×/ and calculates the product of an array of numbers. From the notation you can see that sum and product are structurally related operations, which you can't see from the names sum() and product(). Then you can change the other part by wondering what plus does when used with other higher-order functions, like +\ (scan), which gives a running sum across an array (i.e. "+\ 1 1 1 1" gives "1 2 3 4", the sum so far at each point).
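
To make the pairing concrete outside APL, here is a minimal Python sketch of the same reduce/scan structure (my own illustration, using functools and itertools rather than anything APL-specific):

    from functools import reduce
    from itertools import accumulate
    import operator

    nums = [1, 1, 1, 1]
    total   = reduce(operator.add, nums)            # APL +/ : plus-reduce  -> 4
    product = reduce(operator.mul, nums)            # APL ×/ : times-reduce -> 1
    running = list(accumulate(nums, operator.add))  # APL +\ : plus-scan    -> [1, 2, 3, 4]

Here reduce/accumulate play the role of / and \, and the operator argument plays the role of + and ×, so the structural relationship is still visible, just more verbosely.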

So the notation isn't just about readability, it's a tool for thinking about the operations. Different notations enable you to think about different things. If we imagine there was no sum() then you might write:

    sum = 0
    foreach (n in numbers) { sum += n }

    product = 0
    foreach (n in numbers) { product *= n }
and whoops, that doesn't work; this notation brings into focus that sum has to start with 0 and product has to start with 1 to get the right answer, and you can wonder mathematically why that is (0 and 1 are the identity elements of + and ×); APL notation hides that just like it hides the looping. Different notation is a tool for changing what people think about - what things we must attend to, cannot attend to, and what new things a notation enables us to see. dTal's next reply:

> "the power of abstraction of APL is available to any other language, with the right functions. ... there's nothing to stop anyone from aliasing array-functions to their APL equivalents in any Unicode-aware language, like Julia (oddly, nobody does)."

Maybe nobody does it because if you can't take the patterns apart and put them back together differently without an APL engine behind it, is there any benefit? Take an example from APLCart[2]:

    {⍵/⍨∨\⍵≠' '} Dv      # Remove leading blanks [from a character vector]
In C# that task is str.TrimStart(), and I assume it's a loop from the start of the string counting the spaces and then stopping: calculating length - num_of_spaces, allocating that much memory for the new string, and copying the rest of the string into the new memory. I wouldn't think it was doable using the same higher-order function (\ scan) as a running sum. What this does to achieve the answer is different:

          {⍵≠' '} '   abc   def'       # make a boolean array mask
    ┌→──────────────────────┐          # 0 for spaces, 1 for nonspaces
    │0 0 0 1 1 1 0 0 0 1 1 1│    
    └~──────────────────────┘
          {∨\⍵≠' '} '   abc   def'    # logical OR scan
    ┌→──────────────────────┐          # once a 1 starts,
    │0 0 0 1 1 1 1 1 1 1 1 1│          # carry it on to end of string
    └~──────────────────────┘
          {⍵/⍨∨\⍵≠' '} '   abc   def'
    ┌→────────┐                        # 'compress' using the boolean
    │abc   def│                        # array as a mask to select what to keep
    └─────────┘  
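
For anyone who doesn't read APL, here is roughly the same mask / OR-scan / compress pipeline sketched in Python (my own translation, not a claim about how TrimStart() or the APL engine is actually implemented):

    from itertools import accumulate, compress
    import operator

    s = '   abc   def'
    mask = [c != ' ' for c in s]            # boolean mask, like {⍵≠' '}
    keep = accumulate(mask, operator.or_)   # logical OR scan, like ∨\
    print(''.join(compress(s, keep)))       # compress by mask, like ⍵/⍨ -> 'abc   def'
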
Now how do I remove the leading 0s from a numeric array? In C# I can't reach for TrimStart() because it's a string-only method. I also can't assume that there's a named method for every task I might possibly want to do. So I have to come up with something, and I have no hints for how to do that. I have to memorise the TrimStart() name on top of separately learning how TrimStart() works. That notation gives me a clear, readable name that isn't transferable to anything else. In APL it's:

    {⍵/⍨∨\⍵≠0} Dv      # Remove leading zeroes [from a numeric vector]
That's the same pattern. Not clear and readable, but it is transferable to other similar problems - and it reveals that they can be considered similar problems. In C, where strings are arrays of characters, you aren't doing whole-array transforms. In C# strings are opaque. In APL strings are character arrays, and you can do the same transforms as with numeric arrays.
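
In the same Python sketch, the transferability shows up as a single parameter change (the name trim_leading is my own, purely for illustration):

    from itertools import accumulate, compress
    import operator

    def trim_leading(xs, blank):
        # mask -> OR-scan -> compress: the {⍵/⍨∨\⍵≠blank} pattern
        keep = accumulate((x != blank for x in xs), operator.or_)
        return list(compress(xs, keep))

    trim_leading([0, 0, 0, 7, 0, 9], 0)    # -> [7, 0, 9]
    trim_leading('   abc   def', ' ')      # -> ['a', 'b', 'c', ' ', ' ', ' ', 'd', 'e', 'f']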

Which part of that would you alias in Julia? I suspect you just wouldn't write a trimstart in this style in Julia, just as you wouldn't in C#. You wouldn't think of using an intermediate boolean array.

It's not just about "readability"; the APL notation, being concise and self-similar, reveals some computational/mathematical patterns in data transforms which "giving everything a unique English name" obscures. And APL notation hides other patterns which other notations reveal. That is, different notations are tools for thinking differently about problems: Notation as a Tool of Thought.

[1] https://news.ycombinator.com/item?id=25258819

[2] https://aplcart.info/


This is why I like how operators work in Pike:

+ - * / and other operators work not only on numbers but on strings, arrays, and other types, and all have an intuitive application.

On strings and arrays, for example, + is concatenate, / is split, * is join, and - is filter (with static values).
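
A hypothetical Python sketch of those semantics (the PikeStr class is my own illustration, not real Pike; Python's str already uses + for concatenation):

    class PikeStr(str):
        def __truediv__(self, sep):       # Pike: "a b c" / " " -> ({ "a", "b", "c" })
            return self.split(sep)
        def __sub__(self, unwanted):      # Pike: "banana" - "a" -> "bnn"
            return PikeStr(self.replace(unwanted, ''))

    print(PikeStr('a b c') / ' ')     # ['a', 'b', 'c']
    print(PikeStr('banana') - 'a')    # bnn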


I only recently learned about this, maybe a month ago. It made a lot of sense to me.

> It made Python feel too rigid and Ruby feel familiar.

That's so funny to me; I like Python, and dislike Perl & Ruby. Something about Ruby rubs me the wrong way - I could name a few things that I think are _objectively_ bad decisions in the language's design, but it's mostly about an aesthetic preference that's just a matter of taste.


> Apple did stuff like this all the time at their high point in the late 2000s and early 2010s, and it would happen often in other markets.

Interesting, in that I thought about their purchase of $1B of solid-state memory at the height of their iPod run. The difference is that Apple had a hit product that was selling as quickly as units could be produced, and there was a legitimate need if they wanted to meet the demand.

FTFA:

> No, their deals are unprecedentedly only for raw wafers — uncut, unfinished, and not even allocated to a specific DRAM standard yet. It’s not even clear if they have decided yet on how or when they will finish them into RAM sticks or HBM!

I don't consider this legitimate. It's not illegal, but it sure seems unethical and scummy, and it pisses me off. OpenAI throwing its weight around is harming ordinary people who aren't competing with them.


> The difference is that Apple had a hit product that was selling as quickly as they could be produced and there was a legitimate need if they wanted to meet the demand.

What if OpenAI expects to be in the same boat? Their "hit product" is just R&D and training for new very large models. Of course if they're wrong, they've just set a huge pile of their own cash on fire.


If there was a law against buying the supplies of materials and letting them rot in a storehouse just to deprive competitors of them, your argument would be what OpenAI would try to make in court...
