You guys are actually buying these processors? I’m still running a 4770 and a 1060.
Finally, now I can afford the 5800x3D.
I look forward to watching a Gamers Nexus review of this. I hope it’s as good as they say. 😀
What game can't be run by a 5800X3D? If anything, I feel like graphics cards are the biggest bottleneck right now.
Simulators and games with mods can push the CPU. But yeah, mostly GPU limited.
Escape from Tarkov. If you want 120+ FPS on Streets you pretty much need a 7800X3D.
While the 9000 series looks decent, I honestly think Intel has a really interesting platform to build off of with the Core Ultra chips. It feels like Intel is course-correcting after the poor decisions made with the 13th and 14th gen chips. Wendell from Level1Techs made a really good video about the good things Intel put into the chips while also highlighting some of the bad: things like a built-in NPU and how they're going to use it to pull in ML-based profiles for applications and games, or the fact that performance variance between chipset makers shows up more often with the Core Ultra. It's basically a step forward in tech but a step backward in price/performance.
Work at a tech store; the technicians who build PCs for customers recently tried building with the new Core Ultra 7 265K. Two processors were dead or unstable right out of the box. Tried with known-good RAM, two different CPUs on two different motherboards. It seems Intel hasn't really fixed their stability issue, which should be their first concern.
Well I didn’t say they were perfect.
I'm an anti-fan of Apple, but the M4 Max is supposed to be faster than any x86 desktop CPU and to use a lot less power. That's per Geekbench 6. I'd be interested in seeing other measurements.
Geekbench is about as useful a metric as an umbrella on a fish. Also, the M4 Max will not consume less energy than the competition; that's a misconception arising from the lower SKUs in mobile devices. The laws of physics apply to everyone: at the same reticle size, energy consumption in nT workloads is equivalent. The great advantage of Apple is that they are usually a node ahead, and the eschewing of legacy compatibility saves space, and thus energy, in the design, which can be leveraged to reduce power consumption at idle or in 1T. Case in point: Intel's latest mobile CPUs.
The laws of physics apply to everyone
That is obviously true, but it's a ridiculous argument; there are plenty of examples of systems performing better while using less power than the competition.
For years Intel chips used twice the power for similar performance compared to AMD Ryzen, and in the Bulldozer days it was the same, just the other way around. Arm had been designing chips for efficiency for a decade before the first smartphones came out, and they've kept their eye on the ball the entire time since.
It's no wonder Arm is way more energy efficient than x86, and Apple made by far the best Arm CPU when the M1 arrived.
The great advantage of Apple is that they are usually a node ahead
Yes, that is an advantage, but so it is for the new Intel Arrow Lake compared to current Ryzen, yet Arrow Lake uses more power for similar performance, despite Arrow Lake being designed for efficiency.
It's notable that Intel was unable to match Arm on power efficiency for an entire decade, even when Intel had the better production node. So it's not just a matter of physics, it is also very much a matter of design. And Intel has never been able to match Arm on that. Arm still has the superior design for energy efficiency over x86, and AMD has the superior design over Intel.
Intel has had a node disadvantage regarding Zen since the 8700K… From then on the entire point is moot.
From then on the entire point is moot.
No it’s not, because the point is that design matters. When Ryzen came out originally, it was far more energy efficient than the Intel Skylake. And Intel had the node advantage.
https://www.techpowerup.com/review/intel-core-i7-8700k/16.html
https://www.techpowerup.com/cpu-specs/core-i7-6700k.c1825
Ryzen was not more efficient than Skylake. In fact, the 1500X was actually consuming more energy in nT workloads than Skylake while performing worse, which is consistent with what I wrote. What Ryzen was REALLY efficient at was being almost as fast as Skylake for a fraction of the price.
https://www.notebookcheck.net/Apple-M3-Max-16-Core-Processor-Benchmarks-and-Specs.781712.0.html
Will you look at that: in nT workloads the M3 Max is actually less efficient than competitors like the Ryzen 7000HS parts. The first N3 products had less-than-ideal yields, so Apple went with a less dense node, thus losing the tech advantage for one generation. That can be seen in their laughable nT performance per watt. Design does matter, however, and in 1T workloads Apple's very wide design excels by performing very well while consuming less energy, which is what I've been saying since this thread started.
Power consumption is not efficiency; PPW (performance per watt) is.
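To make that concrete with made-up numbers: if chip A scores 40,000 points while drawing 250 W and chip B scores 36,000 points while drawing 150 W, A wins on raw performance and B on power draw, but PPW is 40,000 / 250 = 160 points per watt for A versus 36,000 / 150 = 240 points per watt for B, so the slower chip is actually the more efficient one.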
Tell me you didn’t open the links without telling me you didn’t open the links. Have a nice day friend.
Not to mention ARM chips, which by and large were/are more efficient than x86 on the same node because of their design: ARM chip designers have been doing the efficiency thing since forever, owing to the mobile platform, while desktop designers only got into the game quite late. There are also some wibbles, like ARM instruction decoding being inherently simpler, but big picture that's negligible.
Intel just really, really has a talent for not seeing the writing on the wall, while AMD made a habit of reading it out of sheer necessity to even survive. Bulldozer nearly killed them (and the idea itself wasn't even bad, it just didn't work out), while Intel is tanking hit after hit after hit.
I’d consider educating yourself more on this topic.
Is 20% faster than Intel a step up, generation on generation?