Hacker News | ZuoCen_Liu's comments

I’ve spent months digging into why dexterous manipulation (like the 'Humanoid Hand' problem Musk mentions) is still so far from reality. The culprit isn't the hardware—it's the 'Discrete Tax.'

Current solvers burn thousands of GPU cycles just to prevent numerical 'explosions' during contact. It's a brute-force patch for a math problem that should be solved with algebraic continuity. We've pushed a framework that runs at ~100W with higher stability than a 5000W H200 cluster.
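For readers unfamiliar with the failure mode being described: the instability of naive discrete time-stepping is easy to reproduce. A minimal sketch (unrelated to the framework above) comparing explicit Euler with the equally cheap symplectic variant on a harmonic oscillator:

```python
# Illustrative only: explicit Euler injects energy into a harmonic
# oscillator (x'' = -x) at every step, while symplectic Euler, at the
# same cost per step, stays bounded.

def explicit_euler(x, v, dt, steps):
    for _ in range(steps):
        x, v = x + dt * v, v - dt * x  # both updates use the old state
    return x, v

def symplectic_euler(x, v, dt, steps):
    for _ in range(steps):
        v = v - dt * x  # update velocity first...
        x = x + dt * v  # ...then position with the new velocity
    return x, v

def energy(x, v):
    return 0.5 * (x * x + v * v)

xe, ve = explicit_euler(1.0, 0.0, 0.1, 1000)
xs, vs = symplectic_euler(1.0, 0.0, 0.1, 1000)
print(energy(xe, ve))  # grows far above the initial 0.5
print(energy(xs, vs))  # stays close to 0.5
```

The explicit scheme multiplies the energy by (1 + dt^2) every step, which is exactly the kind of blow-up solvers spend iterations suppressing.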

I’ve officially challenged the PhysX core team on their forums to address this systemic waste.


The "Compute Wall" is a symptom of an efficiency bubble.

We are currently burning through H100/H200 clusters at an unprecedented scale, yet 90% of those GPU cycles are a "waste tax." We aren't calculating intelligence; we are using massive GPGPU power to "patch" 30-year-old numerical errors in discrete time-stepping (Δt).

In the race for Embodied AI, we’ve hit a wall: The Brute-Force Tax. To get high-fidelity Sim-to-Real data, we compensate for low-precision iterative solvers with massive parallelism. It’s an energetic dead-end that no amount of capital can fix—unless we change the math.

The Breakthrough: From Iteration to Hypercomplex Logic

We are introducing a New Computing Primitive based on Hypercomplex (Octonion) Manifolds. This isn't just a new algorithm; it's a structural shift in how physical state-space is represented. Unlike traditional tensors, this manifold internalizes "Time-flow" and "Interaction-coupling" into its algebraic structure.

The "One-Look" Disruption (VC Alpha):

• Current Bottleneck: Traditional Neural Networks need to "see" 10+ frames to infer velocity/force. This leads to long Transformer sequences, high KV-cache latency, and massive VRAM consumption.

• Our Paradigm: Because our state-space is inherently causal, a Transformer needs only one "look" (a single state) to understand complete motion trends.

• The Result: We drastically shorten the context window, enabling ultra-low-latency physical intuition at the edge.
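To make the claimed contrast concrete, here is a toy sketch with made-up numbers (the names and state layout are illustrative, not the actual system): frame differencing needs at least two observations to recover velocity, while a state that carries velocity explicitly exposes it in one "look":

```python
# Hedged sketch of the contrast claimed above; all values are made up.
# A frame-based model must difference at least two positional
# observations, whereas an augmented state exposes the motion trend
# from a single sample.

def velocity_from_frames(frames, dt):
    # Traditional route: needs >= 2 positional observations.
    return (frames[-1] - frames[-2]) / dt

def velocity_from_state(state):
    # "One-look" route: velocity is a first-class component of the state.
    return state["velocity"]

frames = [0.0, 0.1, 0.2]  # positions sampled every 0.1 s
state = {"position": 0.2, "velocity": 1.0}

print(velocity_from_frames(frames, 0.1))  # recovered only after two frames
print(velocity_from_state(state))         # read directly from one state
```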

Scaling to the 100W Edge (The Economic Dividend):

• The 5000W Cost: The price of "patching" bad math with GPU clusters.

• The 100W Reality: By running our Physics Algebraic Kernel on dedicated FPGA/ASIC "Causal Processors," we bypass discrete iterations entirely. We achieve data-center-level fidelity within a handheld power envelope.

The Vision: The Physics Co-Processor

We are building the "Physical Brain" for the next billion robots. This hardware-native algebraic kernel provides a high-dimensional, continuous feature space that current AI chips (Orin/Jetson) crave but cannot produce.

Deep-Dive & Technical Proof on NVIDIA Discussions: https://github.com/isaac-sim/IsaacSim/discussions/394

We are looking for architects and visionaries who understand that the next leap in AI won't come from more GPUs, but from better primitives.


The "Brute-Force" Tax

We are burning 5000W GPGPU clusters to run brute-force discrete simulations, just to "patch" the numerical gaps of Δt. This is the Discrete Cost: to get high-fidelity Sim-to-Real data, we compensate for low precision with massive parallelism. It’s an energetic dead-end.

The Breakthrough: Hypercomplex Causal Logic

We are introducing a New Computing Primitive based on Hypercomplex (Octonion) Manifolds.

Unlike traditional tensors, this state-space internalizes "Time-flow" in its real part and "Coupling-strength" in its imaginary parts.
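For background, octonions themselves are standard: they can be built as pairs of quaternions via the Cayley-Dickson construction, and their multiplication is non-associative. A minimal sketch of only the algebra (the "Time-flow"/"Coupling-strength" mapping claimed above is the post's interpretation and is not shown here):

```python
# Background sketch only: octonions as pairs of quaternions via the
# Cayley-Dickson construction. Shows the non-associativity the comment
# relies on; the physical interpretation is the author's claim.

def q_mul(p, q):
    # Hamilton product of quaternions (w, x, y, z).
    pw, px, py, pz = p
    qw, qx, qy, qz = q
    return (pw*qw - px*qx - py*qy - pz*qz,
            pw*qx + px*qw + py*qz - pz*qy,
            pw*qy - px*qz + py*qw + pz*qx,
            pw*qz + px*qy - py*qx + pz*qw)

def q_conj(q):
    w, x, y, z = q
    return (w, -x, -y, -z)

def o_mul(o1, o2):
    # Octonion as a quaternion pair: (a, b)(c, d) = (ac - d*b, da + bc*),
    # where * denotes quaternion conjugation.
    a, b = o1
    c, d = o2
    return (tuple(s - t for s, t in zip(q_mul(a, c), q_mul(q_conj(d), b))),
            tuple(s + t for s, t in zip(q_mul(d, a), q_mul(b, q_conj(c)))))

Q0 = (0.0, 0.0, 0.0, 0.0)
e1 = ((0.0, 1.0, 0.0, 0.0), Q0)   # unit i in the first quaternion half
e2 = ((0.0, 0.0, 1.0, 0.0), Q0)   # unit j
e4 = (Q0, (1.0, 0.0, 0.0, 0.0))   # first unit of the second half

# Octonion multiplication is non-associative: regrouping flips the sign.
print(o_mul(o_mul(e1, e2), e4))
print(o_mul(e1, o_mul(e2, e4)))
```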

Why this changes AI Inference (The "One-Look" Advantage):

Traditional NNs: Need to "see" 10+ frames of images to infer velocity and acceleration.

Our Paradigm: Because the state-space is inherently causal and coupled, a Transformer needs only one "look" (a single state) to understand motion trends.

Impact: This drastically shortens the Transformer sequence length, enabling ultra-low power inference on edge devices.

The Power Dividend: 5000W vs. 100W

5000W (Discrete): The cost of brute-force GPU clusters struggling to "patch" accuracy.

100W (Algebraic): A dedicated Causal Processor (FPGA/ASIC) running our Physics Algebraic Kernel. It bypasses discrete iterations entirely, delivering data-center-level fidelity at the edge.

The Hardware Vision

This isn't just software. We are positioning the Physics Algebraic Kernel as a "Co-processor." It runs on FPGA/ASIC to provide "Physical Intuition" for the adjacent AI chip (like NVIDIA Orin/Jetson), providing a higher-dimensional, continuous feature space that current neural networks crave.

Deep-Dive on NVIDIA Discussions: https://github.com/isaac-sim/IsaacSim/discussions/394#discus...


You are referring to Continuous Collision Detection (CCD), which has indeed existed for decades. However, CCD is a detection patch, not an integrator cure.

1. The Scaling Wall: While CCD avoids tunneling for a single pair of objects, solving it analytically for a system with thousands of constraints leads to a Non-linear Complementarity Problem (NCP) explosion. Most engines fall back to iterative solvers (like PGS or Jacobi), which brings us back to square one: high-frequency iterations to resolve 'shaking' constraints.

2. Integrator Drift: CCD finds the time of impact, but the integration still happens in discrete space. You still suffer from Numerical Dissipation (energy loss) because the state manifold is disconnected between steps.

3. The 'Why' of Octonions: Our approach isn't just 'detecting' the collision; it's about State Coupling. By using Non-associative algebra, we lock the causal dependency into the movement itself. We are replacing the O(n^2) geometric 'check-then-fix' loop with a single-pass O(n) algebraic update.

In short: CCD tells you when you crashed; Octonions ensure the state update respects the causal sequence without the iterative overhead.
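For readers who haven't met PGS: it solves the per-step complementarity problem by sweeping constraints one at a time and clamping impulses to be non-negative. A toy sketch on a 2-constraint linear complementarity problem (illustrative numbers, not engine code):

```python
# Minimal Projected Gauss-Seidel (PGS) sketch for a tiny Linear
# Complementarity Problem: find lam >= 0 with A lam + b >= 0 and
# lam . (A lam + b) = 0. Engines run sweeps like this every substep,
# which is the iterative overhead discussed above.

def pgs(A, b, iterations=100):
    n = len(b)
    lam = [0.0] * n
    for _ in range(iterations):
        for i in range(n):
            residual = b[i] + sum(A[i][j] * lam[j] for j in range(n))
            # Gauss-Seidel update, projected onto the non-negative cone.
            lam[i] = max(0.0, lam[i] - residual / A[i][i])
    return lam

# Two coupled contact constraints (illustrative numbers).
A = [[2.0, 1.0],
     [1.0, 2.0]]
b = [-1.0, -1.0]
lam = pgs(A, b)
print(lam)  # converges toward [1/3, 1/3], where A lam + b = 0
```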

In short: CCD is a diagnostic patch; Octonions are an algebraic cure. One checks for crashes, the other makes physics causal by design.

We are currently discussing a paradigm shift in physics simulation on the NVIDIA Isaac Sim repository. The core issue is that discrete time-stepping in GPGPU architectures is hitting a "Compute Wall": consuming 5000W+ just to "patch" numerical errors like tunneling and jitter.

The Validation: We’ve implemented an Octonion-based EKF (OEKF) that treats time and causality as an internal algebraic manifold rather than an external parameter.

Verified Results in Isaac Sim:

• Precision: >60% position error reduction (≤0.1 m vs. ≥0.25 m).

• Stability: Zero attitude jitter during high-dynamic flips (traditional filters showed ≥3° jitter).

This isn't just a software patch; we are moving into the RTL design phase for a 100W FPGA Causal Processor to replace power-hungry GPGPU heuristics with dedicated algebraic gates.

Join the technical deep-dive on NVIDIA’s GitHub Discussion: https://github.com/isaac-sim/IsaacSim/discussions/394
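The OEKF itself isn't published in that thread, but for context, here is the predict/update recursion that any EKF variant (octonion-based or otherwise) has to implement, sketched as a plain constant-velocity Kalman filter in 1-D (the linear special case; all numbers illustrative):

```python
# For context only: a constant-velocity Kalman filter (the linear special
# case of an EKF) tracking 1-D position. This is the standard
# predict/update loop, not the OEKF described above.

def kf_track(measurements, dt=0.1, q=1e-4, r=0.04):
    x = [0.0, 0.0]                # state: [position, velocity]
    P = [[1.0, 0.0], [0.0, 1.0]]  # state covariance
    for z in measurements:
        # Predict: x <- F x with F = [[1, dt], [0, 1]]; P <- F P F^T + Q.
        x = [x[0] + dt * x[1], x[1]]
        P = [[P[0][0] + dt * (P[1][0] + P[0][1]) + dt * dt * P[1][1] + q,
              P[0][1] + dt * P[1][1]],
             [P[1][0] + dt * P[1][1],
              P[1][1] + q]]
        # Update with a position measurement z (H = [1, 0]).
        S = P[0][0] + r
        K = [P[0][0] / S, P[1][0] / S]
        y = z - x[0]
        x = [x[0] + K[0] * y, x[1] + K[1] * y]
        P = [[(1 - K[0]) * P[0][0], (1 - K[0]) * P[0][1]],
             [P[1][0] - K[1] * P[0][0], P[1][1] - K[1] * P[0][1]]]
    return x

# Noise-free measurements of an object moving at 1 m/s, sampled at 10 Hz.
est = kf_track([0.1 * k for k in range(1, 51)])
print(est)  # position near 5.0, velocity near 1.0
```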

Hosting a concert to prevent workers from organizing is the ultimate 'Silicon Valley Lord' move. It’s the billionaire equivalent of 'we have pizza in the breakroom so don't ask for a raise.'


As an entrepreneur, this feels like a classic case of over-engineering for a problem you haven't earned yet.

Decentralized auth is a fascinating technical rabbit hole, but it introduces a massive friction point for your first 1,000 users. For a new, unproven project, credibility is your biggest bottleneck, not decentralized storage.

By building your own complex auth/privacy stack, you are asking users to trust you to get the crypto right—which is a huge leap of faith.

A more pragmatic approach: Outsource the trust. Use 'Sign in with Google/Apple/GitHub.' You leverage their multi-billion dollar security infrastructure and their existing trust relationship with the user. It provides immediate convenience (one-click onboarding) and shifts the perceived privacy liability to a known entity.
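The first leg of "Sign in with Google" is just a standard OpenID Connect authorization-code request. A sketch with the Python standard library; the client_id and redirect_uri are placeholders you'd register in the provider's console, and the token exchange plus ID-token verification (the part you should not hand-roll) happen after the redirect:

```python
# Sketch of the first leg of an OpenID Connect authorization-code flow.
# client_id and redirect_uri are placeholders, not real credentials.
import secrets
from urllib.parse import urlencode

AUTH_ENDPOINT = "https://accounts.google.com/o/oauth2/v2/auth"

def build_auth_url(client_id, redirect_uri):
    state = secrets.token_urlsafe(16)  # CSRF protection; verify on callback
    params = {
        "client_id": client_id,
        "redirect_uri": redirect_uri,
        "response_type": "code",
        "scope": "openid email",
        "state": state,
    }
    return AUTH_ENDPOINT + "?" + urlencode(params), state

url, state = build_auth_url("YOUR_CLIENT_ID", "https://example.com/callback")
print(url)
```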

Don't spend your innovation tokens on auth. Spend them on the core value of your information exchange. You can always 'decentralize' the back-end later once you have enough users to actually make it matter.


Yeah I think decentralization will be a stretch, especially at the beginning.

About the login, SSO is nice and it will probably be an option, but I heavily prefer good old email+password. SSO might be trickier; I haven't explored it before.

The auth/central server will be open source of course, and I'm hoping I can get feedback/auditing that way if anything's wrong (even though I feel the process is simple with encryption libraries and some knowledge). At first it will be heavily experimental and will hold just dummy data, then gradually go from there if it works out.


Single Sign-On (SSO) is not complicated, and the major identity providers all publish detailed tutorials.

Collecting authentication data isn't very useful anyway; that effort is better spent collecting data on how people experience the actual features.


If your solution to copyright infringement requires criminalizing the fundamental architecture of secure communication, your problem isn't the technology—it's your desire for absolute control.


RSS isn't just alive; it's the only remaining protocol for deterministic content delivery. There is a fundamental philosophical divide between RSS and the current AI-driven wave that people often miss:

• RSS is a subscription system: if I subscribe to a source, I get 100% of the signal. The 'algorithm' is my own intent.

• AI is a filtering system: AI is probabilistic. Its entire job is to guess what I want, which by definition means it creates a 'lossy' stream.

People claim AI can 'replace' RSS by summarizing my interests, but AI will never provide the certainty of a feed. I don't want an AI to 'guess' which security advisory or niche technical blog post I should read today; I want the raw signal that I explicitly requested.

As long as there are professionals who value information completeness over algorithmic convenience, RSS (or its successor) will be a hard requirement. You can't replace a pipe with a concierge.
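The "pipe, not concierge" point is visible in the mechanics: a feed is plain XML the subscriber polls and parses, and every item arrives unranked. A sketch using an inline sample feed so it runs offline (swap in a real feed URL via urllib.request for actual use):

```python
# An RSS feed is plain XML: the subscriber polls it and gets every <item>,
# with no ranking or filtering in between. Inline sample feed so this
# sketch runs offline; the URLs and titles are made up.
import xml.etree.ElementTree as ET

SAMPLE_FEED = """<?xml version="1.0"?>
<rss version="2.0"><channel>
  <title>Niche Security Blog</title>
  <item><title>Advisory 2024-001</title><link>https://example.com/1</link></item>
  <item><title>Advisory 2024-002</title><link>https://example.com/2</link></item>
</channel></rss>"""

def read_feed(xml_text):
    root = ET.fromstring(xml_text)
    return [(item.findtext("title"), item.findtext("link"))
            for item in root.iter("item")]

for title, link in read_feed(SAMPLE_FEED):
    print(title, link)
```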


Apple’s Japanese Compliance Strategy: Is “Safety First” Strangling Independent Educational Apps?

Apple just announced sweeping changes in Japan to comply with the Mobile Software Competition Act (MSCA). While the focus is on alternative marketplaces and third-party payments, the fine print regarding "Younger Users" is concerning. Under the new rules:

• Under 13: Apps cannot link to websites for transactions at all.

• Under 18: Mandatory "Parental Gates" for any non-IAP transactions.

• Kids Category: Zero links to external purchase methods, period.

While framed as protection against scams, this creates a massive friction tax specifically for the education sector. Most innovative educational tools are built by small teams who can't survive on Apple's 30% (or 15%) cut but also can't afford the drop-off caused by these new "security" friction points.

My concern is that this "one-size-fits-all" protection will:

• Deter independent developers from building for the Japanese student market.

• Force a consolidated market where only giant publishers with massive marketing budgets can navigate the parental friction.

• Create a "Compliance Moat" where Apple uses "safety" as a justifiable way to make third-party payments so annoying that parents and devs just give up.

Is there a way to protect children without effectively banning the economical bypass that these new laws were supposed to enable? Or is Japan’s MSCA actually a net loss for the educational software ecosystem?

