

I can second this 0.9mm transparent stuff; I've run it successfully and it's very subtle.

Depending on the media converter pair you're using, you probably want UPC instead of APC. I also found that the cheapest generic bidi media converters tend to be SC, so I went with a 30m pre-terminated SC/UPC cable. Total cost (cable plus media converters) was about £30.

Alternatively, you can order a custom 30+m white 0.9mm cable from FS: https://www.fs.com/uk/products/12285.html Lead time is fairly long.



Cool UI, and it lets anyone upload a doc, but it lacks https://github.com/opendatalab/mineru


  $ zdump -Vc 2025,2026 America/New_York
  America/New_York  Sun Mar  9 06:59:59 2025 UT = Sun Mar  9 01:59:59 2025 EST isdst=0 gmtoff=-18000
  America/New_York  Sun Mar  9 07:00:00 2025 UT = Sun Mar  9 03:00:00 2025 EDT isdst=1 gmtoff=-14400
  America/New_York  Sun Nov  2 05:59:59 2025 UT = Sun Nov  2 01:59:59 2025 EDT isdst=1 gmtoff=-14400
  America/New_York  Sun Nov  2 06:00:00 2025 UT = Sun Nov  2 01:00:00 2025 EST isdst=0 gmtoff=-18000

  $ zdump -Vc 2025,2026 Europe/London
  Europe/London  Sun Mar 30 00:59:59 2025 UT = Sun Mar 30 00:59:59 2025 GMT isdst=0 gmtoff=0
  Europe/London  Sun Mar 30 01:00:00 2025 UT = Sun Mar 30 02:00:00 2025 BST isdst=1 gmtoff=3600
  Europe/London  Sun Oct 26 00:59:59 2025 UT = Sun Oct 26 01:59:59 2025 BST isdst=1 gmtoff=3600
  Europe/London  Sun Oct 26 01:00:00 2025 UT = Sun Oct 26 01:00:00 2025 GMT isdst=0 gmtoff=0
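
For cross-checking, the same spring-forward instants can be reproduced with GNU date; the epoch seconds below are my own arithmetic for the 2025-03-09 07:00:00 UT transition above, so treat them as an assumption to verify:

  # one second before the jump: prints 01:59:59 EST
  $ TZ=America/New_York date -d @1741503599
  # the first instant after: prints 03:00:00 EDT
  $ TZ=America/New_York date -d @1741503600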


$ duckdb f.db -c "COPY table1 TO 'table1.csv'; COPY table1 TO 'table1.parquet';"
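
(If I remember DuckDB's COPY correctly, the output format is inferred from the file extension; you can make it explicit with e.g. `COPY table1 TO 'table1.parquet' (FORMAT PARQUET);`.)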


On my machine, where I did the basic run, the one in the link is much faster.

```
$ time ./duckdb_cli-linux-amd64 ./basic_batched.db -c "COPY user TO 'user.csv'"
100% (00:00:20.55 elapsed)

real    0m24.162s
user    0m22.505s
sys     0m1.988s
```

```
$ time ./duckdb_cli-linux-amd64 ./basic_batched.db -c "COPY user TO 'user.parquet'"
100% (00:00:17.11 elapsed)

real    0m20.970s
user    0m19.347s
sys     0m1.841s
```

```
$ time cargo run --bin parquet --release -- basic_batched.db user -o out.parquet
    Finished `release` profile [optimized] target(s) in 0.11s
     Running `target/release/parquet basic_batched.db user -o out.parquet`
Database opened in 14.828µs

SQLite to Parquet Exporter
==========================
Database: basic_batched.db
Page size: 4096 bytes
Text encoding: Utf8
Output: out.parquet
Batch size: 10000

Exporting table: user
Output file: out.parquet

   user: 100000000 rows (310.01 MB) - 5.85s (17095636 rows/sec)

Export completed successfully!
==========================
Table: user
Rows exported: 100000000
Time taken: 5.85s
Output file: out.parquet
Throughput: 17095564 rows/sec
File size: 310.01 MB

real    0m6.052s
user    0m10.455s
sys     0m0.537s
```

```
$ time cargo run --bin csv --release -- basic_batched.db -t user -o out.csv
    Finished `release` profile [optimized] target(s) in 0.03s
     Running `target/release/csv basic_batched.db -t user -o out.csv`

real    0m6.453s
user    0m5.252s
sys     0m1.196s
```


Nice! Thank you



`sharkd` has been around for quite a while, but until recently you had to build it from source. Now it is included in the Wireshark DMG, so it is easier to use.
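
A minimal sketch of talking to it, assuming the stdin/JSON-RPC console mode I remember from the sharkd docs (the exact method names here are an assumption worth checking):

  # start sharkd in console mode: it reads one JSON-RPC request per line on stdin
  $ sharkd -
  {"jsonrpc":"2.0","id":1,"method":"status"}
  {"jsonrpc":"2.0","id":2,"method":"load","params":{"file":"capture.pcapng"}}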




not a game on Steam? :(


If you want to treat yourself to an accounting game night, there's this one built by @patio11: https://keshikomisimulator.com/


Benchmarking: fio and iperf.
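
A sketch of representative invocations, with illustrative parameters of my own choosing rather than anything from the original comment:

  # disk: 4 KiB random reads with direct I/O against a 1 GiB test file
  $ fio --name=randread --ioengine=libaio --direct=1 --rw=randread --bs=4k --size=1g --iodepth=32

  # network: start a server on one host, then drive traffic from another for 30 seconds
  $ iperf3 -s
  $ iperf3 -c <server-ip> -t 30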

