The Nvidia DGX Spark Is a Tiny 128GB AI Mini PC Made for Scale-Out Clustering servethehome.com 14 points by PaulHoule 2 days ago
Havoc 2 days ago
The upcoming wave of APU-like mini PCs will be really cool in general. The memory throughput looks a tad on the low side, but combined with MoE-style models it will still let big models run at reasonable speeds. Prices will need to drop, though. A grand is likely closer to most people's budget than three for an AI quasi-toy.

fragmede 2 days ago
The original Apple I computer was released in 1976 and sold for $666.66, which is $3,725.38 in Feb 2025 dollars adjusting for inflation.
https://data.bls.gov/cgi-bin/cpicalc.pl?cost1=666.66&year1=1...

Havoc a day ago
Good point. I do think expectations have shifted on electronics, though. Look at big TVs, for example: they went from a really big purchase to basically just an accessory.

captaindiego a day ago
Are these just good for LLM inference, or can they be used to train things like CV models too? (Say, vs. a 5090, which is in the same ballpark price-wise.)

banderwidthdk a day ago
From my experience LLM inference really, really likes memory bandwidth, and at 1.79 TB/s the 5090 has quite a lead over the APU's 273 GB/s.
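Havoc's point about MoE models compensating for modest memory bandwidth can be made concrete with a back-of-the-envelope estimate: single-stream decoding is roughly memory-bound, so tokens/sec is capped by bandwidth divided by the bytes of weights read per token, and an MoE model only reads its active experts. A rough sketch, where the model sizes and 4-bit quantization are illustrative assumptions rather than figures from the thread:

```python
# Rough decode-throughput upper bound: tokens/s ~= bandwidth / bytes_read_per_token.
# Real throughput is lower due to KV-cache reads, compute, and scheduling overhead.

def decode_tokens_per_sec(bandwidth_gbs: float, active_params_b: float,
                          bytes_per_param: float) -> float:
    """Memory-bound upper bound on decode speed, in tokens per second."""
    bytes_per_token = active_params_b * 1e9 * bytes_per_param
    return bandwidth_gbs * 1e9 / bytes_per_token

BW = 273.0  # GB/s, the APU-class bandwidth cited in the thread

# Dense 70B model at ~4-bit (0.5 bytes/param): every weight is read per token.
dense = decode_tokens_per_sec(BW, 70, 0.5)   # ~7.8 tok/s

# Hypothetical MoE with ~13B active params at the same quantization.
moe = decode_tokens_per_sec(BW, 13, 0.5)     # ~42 tok/s

print(f"dense 70B:      ~{dense:.1f} tok/s")
print(f"MoE 13B active: ~{moe:.1f} tok/s")
```

The dense model is bandwidth-starved, while the MoE model's smaller active set brings the same hardware into usable territory, which is the trade-off the comment is pointing at.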
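The inflation adjustment in fragmede's comment is just a ratio of CPI index values. A sketch using approximate CPI-U figures (the 1976 annual average and Feb 2025 values below are rounded assumptions; the linked BLS calculator uses exact monthly data, so its result differs slightly):

```python
# Inflation adjustment via CPI ratio. Index values are approximate;
# the BLS calculator linked in the thread uses exact monthly CPI-U data.
CPI_1976 = 56.9       # approximate 1976 annual-average CPI-U
CPI_FEB_2025 = 319.1  # approximate Feb 2025 CPI-U

apple_i_1976 = 666.66
adjusted = apple_i_1976 * CPI_FEB_2025 / CPI_1976
print(f"${apple_i_1976} in 1976 is roughly ${adjusted:,.2f} in Feb 2025 dollars")
```

With these rounded indices the result lands within a few percent of the $3,725.38 figure quoted above.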
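banderwidthdk's bandwidth comparison is easy to quantify: for memory-bound token generation, throughput scales roughly linearly with bandwidth, so the ratio of the two numbers is a first-order estimate of the decode-speed gap for a model that fits on both devices:

```python
# Memory-bandwidth ratio between the two parts discussed above.
rtx_5090_gbs = 1790.0  # 1.79 TB/s
apu_gbs = 273.0        # 273 GB/s

ratio = rtx_5090_gbs / apu_gbs
print(f"The 5090 has ~{ratio:.1f}x the memory bandwidth")
```

Roughly a 6.5x gap, though the APU's 128 GB of unified memory lets it hold models that simply will not fit in the 5090's VRAM.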