I tried to warn people on Slickdeals about this and got flamed again and again. So... saving $40 instead of buying a Haswell i3 (bonus sucker points for pairing an AE with a Z-board). I guess all those suckers who ran along with the "the Pentium AE is a great budget gaming chip" meme must have been eating crow for quite a while now, and this is just another kick in the face from no-free-lunch Intel.
No, that IS the point. What if, five years down the road, some new technology pops up, and we get 10-12GHz dual-cores again? Why should this game be arbitrarily locked out from running on those theoretical future PCs?
But again, we're off into hypotheticals. They didn't draw the line at 8 cores, so until they do I don't see the point in speculating about it further. We can come back to this argument in maybe 5 years and see if the situation has changed.

Let's put it this way: what if they drew the line at an 8-core, because the consoles have 8 cores, and the minimum required CPU was an FX-series 8-core? Would that please you as well?
Well, that would suck, but... that's not what's happening. If it turns out dual cores without HT really can run this game perfectly fine, I'll have to rethink my arguments. But for now I'm looking at the date and realising it's 2016: the consoles have had octo-cores for years and quads are mainstream in the PC market, so if a game wants to demand 4 threads for genuine technical reasons, that's absolutely fine by me.

What if i5 Skylake users had adequate performance to run the game, but were arbitrarily locked out of playing it because they "didn't have enough cores"?
In the time I've been typing, RS has addressed your points in more detail and with more eloquence than I have. I'll finish by saying that I've seen you spend a lot of time defending low-end hardware on here, and sometimes I think you have a decent enough point in doing so, but here you've officially lost me. I just don't get what you think you or anyone else has to gain by clinging to dual cores in this day and age.

Maybe now you can see the point of my argument, which really has nothing to do with any personal desire to play this game or not.
Well, then why should they have to put more time and effort into making it run on everything from a low-end to a high-end graphics card? They shouldn't have to bother with ultra, high, medium, and low settings. Just design one level of graphics for a mid-range card. If your card is weak, you can't run the game. If your card is strong, you can't use all its power. After all, we have moved on from older graphics levels, right?
I am obviously exaggerating, but the point is that porting to PC entails giving the game the ability to run on a wide range of hardware. I am not a programmer, but from information about previous games with the same problem, I don't think it is a hugely burdensome task to assign CPU priorities so that the game will run on a true dual core.
You probably don't even need to do that. Just remove or update the code that blocks running on logical cores 0 and 1, which was left over from the console version.
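For illustration only, here is the shape of the leftover being guessed at. The function names and the "reserve logical cores 0 and 1" policy are assumptions about how console code typically works, not anything taken from the actual engine: a fixed reservation leaves a dual-core with zero worker cores, while a reservation that scales to the machine degrades gracefully.

```python
import os

def usable_workers_hardcoded(logical_cores: int) -> int:
    """Console-style leftover: always reserve logical cores 0 and 1
    for the OS/system, regardless of how many cores exist."""
    reserved = 2
    return max(logical_cores - reserved, 0)

def usable_workers_adaptive(logical_cores: int) -> int:
    """Scale the reservation to the machine: leave one core for the OS,
    but always keep at least one worker."""
    return max(logical_cores - 1, 1)

n = os.cpu_count() or 1
print(f"{n} logical cores -> hardcoded: {usable_workers_hardcoded(n)}, "
      f"adaptive: {usable_workers_adaptive(n)}")
```

On a dual-core, the hard-coded version yields zero workers (so the game has nothing to run on), while the adaptive version still yields one.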
It seems that people here don't understand multi-threaded programming in the context of a multi-threaded, multi-processor, time-sliced OS.
Instructions (opcodes) from a program are essentially processed on a prioritized assembly line, in packets called "timeslices".
If you have a quad-core processor, four such packets can be processed at once, in parallel.
If you have a dual-core processor, two such packets can be processed at once, in parallel.
If the dual-core is clocked at twice the clock speed of the quad-core, then twice as many instructions can be processed in each packet (or timeslice).
The only caveat is that for one program to take advantage of four packets per timeslice of throughput, the programmer has to write the program in such a way that its work can be parallelized across each of those four parallel packet "slots" per timeslice. Once this is done, it will, for the most part, trivially adapt to running on a dual-core as well, aside from possible latency issues, which can lead to frametime calculation/present problems. But if the dual-core is clocked twice as fast as the quad-core, then there shouldn't be any additional latency (well, roughly).
So you see, once the program has been properly parallelized to run on a quad-core, it will also automatically run properly on a 2X-clocked dual-core, unless it was arbitrarily locked out by the devs, as appears to be the case with this game.
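The idealized claim above (a quad-core at some clock matching a dual-core at twice that clock) can be written as a toy arithmetic model. All numbers are illustrative, and the model deliberately ignores the latency and synchronization costs the caveat mentions:

```python
def ideal_throughput(cores: int, ghz: float, ipc: float = 1.0) -> float:
    """Idealized instruction throughput for a perfectly parallel workload:
    cores * clock * instructions-per-cycle (billions of instructions/s).
    This assumes zero serial work and zero synchronization overhead."""
    return cores * ghz * ipc

# In this ideal model, a 2X-clocked dual-core matches a quad-core:
print(ideal_throughput(4, 3.0))  # quad-core at 3GHz
print(ideal_throughput(2, 6.0))  # dual-core at 6GHz, same total
```

Real workloads fall short of this equivalence to the degree that their work cannot be perfectly split, which is exactly the point of contention in this thread.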
After being flamed enough times for telling people to pass on the Pentium and get an i5 (or at least a cheap 8320E from Microcenter to overclock, for more casual gamers who won't mind the high power consumption and lower IPC), I finally started telling people that video gaming is a luxury activity, and that if one can't afford an i5, one should get a job and save up for it.

It's seriously boggling my mind how much butthurt there is over a $60 dual-core CPU not being supported by a cutting-edge game in 2016.
You make it sound like, by limiting it to an i3 or above, DICE has severely limited the hardware range. The i3 is already pretty low-end and pretty dang cheap. What more do you want? It's seriously boggling my mind how much butthurt there is over a $60 dual-core CPU not being supported by a cutting-edge game in 2016.
The number of assumptions you have made here is astounding. Not everything can be scaled, and what can be scaled is MUCH easier to scale up than down in terms of performance. Until the game is released and someone hacks it to run on a dual core, we just won't know. I very well understand this could be a superficial limit (like FC4, in which case I'd have to eat some crow), but it's reasonable to give DICE the benefit of the doubt here.
And today's dual-cores with HyperThreading are what, 30% better MT performance than straight-up dual-cores? So, in five years, if we have dual-cores that are 30% faster than today's dual-cores, is it OK that they have equivalent performance to today's dual-core plus HT, but they would be arbitrarily locked out of playing?

This is going to come across as blunt, but if this is the sort of argument you are seriously going to deploy in defence of dual cores, I think I'm going to have to join the others in considering you a lost cause. There's so much wrong with this sort of prediction that I don't know where to begin trying to deconstruct it. Are you at least aware that this game will run on current dual-core CPUs that have Hyperthreading?
I'm fine with the game benchmarking the current hardware and choosing not to run if it doesn't perform well enough. What I'm not fine with is arbitrarily locking out hardware that performs well enough but doesn't have enough cores.

I could turn this whole thing around: say I dig out an old Pentium 4 or Athlon XP rig and then get all outraged that the latest games either won't run at all or run in seconds-per-frame territory. Would that sound reasonable to you? Would it be sensible for me to claim this has happened due to ulterior motives on the part of game developers, and that they should recognise that single cores are still relevant and should be considered when it comes to writing or optimising cutting-edge game engines? Because that's what this is ultimately about: where do you draw the line for what hardware can be considered too underpowered to bother with.
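The distinction being drawn here can be sketched in a few lines. The names and thresholds are invented for illustration (no shipping game is known to do exactly this): gate on a measured score rather than on a raw core count, so a fast machine passes regardless of its topology.

```python
import time

def core_count_gate(logical_cores: int, minimum: int = 4) -> bool:
    """The check being criticized: refuse to run purely on core count."""
    return logical_cores >= minimum

def benchmark_gate(workload, min_score: float) -> bool:
    """The alternative: time a representative workload and decide on
    measured speed, whatever hardware delivers it."""
    start = time.perf_counter()
    workload()
    elapsed = time.perf_counter() - start
    return (1.0 / elapsed) >= min_score  # faster machine -> higher score

# A fast dual-core fails the first gate no matter how quick it is,
# but could still pass the second:
print(core_count_gate(2))  # False
print(benchmark_gate(lambda: sum(range(10**5)), min_score=0.001))
```

The benchmark approach has its own weaknesses (a micro-benchmark may not predict stutter under real load, which is the Digital Foundry finding quoted later in this thread), but it is the difference between "too slow" and "wrong shape".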
I think that you missed my arguments entirely, then. This isn't about me, my hardware, or current-gen dual-cores. Nor is it about performance optimization for current-gen dual-cores.

In the time I've been typing, RS has addressed your points in more detail and with more eloquence than I have. I'll finish by saying that I've seen you spend a lot of time defending low-end hardware on here, and sometimes I think you have a decent enough point in doing so, but here you've officially lost me. I just don't get what you think you or anyone else has to gain by clinging to dual cores in this day and age.
I do agree with you on many things, but now let's get back to reality.

If it's an arbitrary lockout like Far Cry 4's, you have a point. Let's see what happens once the game releases. I am all for next-gen games pushing the boundaries; if obsoleting old hardware is a side effect, I am OK with that. What I am not OK with is requiring a 5960X and a GTX 1080 to get 60fps at 1080p in a game like ARK: Survival Evolved or similar.
Also, look at Crysis 3 and Metro 2033/Last Light, etc. Those games look awesome and run decently on 2GB video cards. At the same time, there have been many modern games that look worse yet require 3-4GB of VRAM for full IQ/textures. You could make the same argument that developers are artificially locking out better graphics, since they didn't optimize the game engine as well as those older games did. I get it, but ultimately, even if you are right to an extent, what can I do as an end user?
I cannot predict how unoptimized software will be in 2017-2020, so I deal with it by buying something decent - a more practical approach. Alternatively, vote with your wallet and don't buy the game if you feel the developers are artificially gimping optimizations for lower tiers. That's a good option.
About time :thumbsup:

Screw it, I'm getting a quad core. It's worth it as a future investment for triple-A games.
Well, it is a matter of degree. I am inferring from previous games that it does not take a huge amount of effort to assign priorities so that the game will start up on a dual core, as Larry said.
Can anyone show a firm citation for where it says that fewer than 4 threads will not run? As far as I can tell, everyone is just assuming it will be locked out because they listed 4 threads as the minimum. If so, it's equally valid to assume that it will run on a 2-thread machine, just very poorly (thus not meeting their minimum spec).
And today's dual-cores with HyperThreading are what, 30% better MT performance than straight-up dual-cores? So, in five years, if we have dual-cores that are 30% faster than today's dual-cores, is it OK that they have equivalent performance to today's dual-core plus HT, but they would be arbitrarily locked out of playing?
I don't have the knowledge to dispute any of this; others might, so I can't say much more. But I'm thinking of video encoding, where you always want more cores and threads even if clock speed takes a hit. I mean, theoretically, for a given core design, a 10GHz single core should encode the same as a 5GHz dual core or a 2.5GHz quad core, etc. But it's never that simple; we never got 10GHz single cores, and we might never.

Again with the people who are ignorant about how multi-processing on a time-sliced SMP OS works, and how programs that are written for quad-cores can easily map to a dual-core at 2X the clock rate and still function properly (without being rewritten, unless, that is, it's an arbitrary lockout).
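One standard way to make the "it's never that simple" objection concrete is Amdahl's law, which neither poster names but which describes exactly this trade-off: any serial fraction of the work caps how much extra cores can help, so the cores-times-clock equivalence only holds when that fraction is zero.

```python
def amdahl_speedup(serial_fraction: float, cores: int) -> float:
    """Amdahl's law: speedup over a single core when a fixed fraction
    of the work cannot be parallelized."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores)

# With a perfectly parallel workload, 4 cores give a full 4x speedup,
# and the 2.5GHz-quad == 10GHz-single equivalence holds. With even
# 10% serial work, 4 cores give only ~3.08x, so the quad falls short.
print(amdahl_speedup(0.0, 4))
print(amdahl_speedup(0.1, 4))
```

This cuts both ways in the thread's argument: it explains why encoding on more cores disappoints slightly, and also why a 2X-clocked dual-core can actually *beat* a quad-core on workloads with a large serial fraction.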
Games have quoted minimum clock speeds, if only from a performance point of view. If they figure out how to have 100s of CPU cores at 1GHz, and games required that to run, I'd make sure I had one of those 100-core CPUs so I'd be ready to go. And here's the thing: dual-threaded games should run perfectly fine on those 100-core CPUs, assuming backwards compatibility is still a thing. I think it would be absurd, though, to say "this 100-threaded game won't run on an ancient old dual core from ages ago, how dare they arbitrarily lock it out...". That's where my concern ends; I simply can't see this future you seem to be so concerned about, where people are suffering because they can't play games on hardware that's long since obsolete.

Edit: Or would you all be fine with a game that had a minimum required MHz to play and run? What if they figure out a way to auto-multithread software in the future, and future CPUs had 100s of tiny little cores (like a GPU, sort of) at 1W each, running at 1GHz or so?
7/2014, Richard Leadbetter at Eurogamer said:

The issue is one of consistency - games are now built typically with four threads or more in mind. Dropping down to two - no matter how fast they are - causes latency and stalling issues that manifest as highly unwelcome stutter during gameplay.
Retaining ultra or very high settings in Battlefield 4 and Crysis 3 quickly exposed the frailties of the G3258. Whatever the background processing required in setting up the scene, it's just way too much, even for two 4.5GHz cores.
In the highlighted video, you can see that running the game at the high preset (that's one 'notch' down from the ultra-equivalent, very high) in combination with a GTX 760 results in a night-and-day performance differential between the i7 4790K and the Pentium. The additional fidelity in the game simulation, coupled with the immense increase in GPU set-up costs, sees the Anniversary Edition Pentium struggle horrendously to keep pace. What we're seeing here is a classic case of a lack of hardware balance: the G3258 simply can't feed the GTX 760 quickly enough to sustain a consistent frame-rate.
In both cases with BF4, the overclocked Pentium has trouble locking to 60fps, whereas the i7 sails through. Our contention is that an engine built for four or more cores simply doesn't translate well to a dual-core chip, and a 4.5GHz overclock isn't a cure-all.
The results with Mantle are very interesting: we see Mantle handing in consistently higher frame-rates, but the stutter issues are not resolved. Battlefield 4 still appears to require more threads than the Pentium provides, latencies kick in, and the experience isn't that great at all.
Crysis 3 - and to a lesser extent Battlefield 4 - demonstrate that the most advanced gaming engines cause problems that can't be fully resolved by sheer clock speed alone. The future of games development is many-core in nature.
In some respects, the G3258 Anniversary Edition feels like an anachronism - a modern-day rendition of an outdated type of processor that's had its day, bludgeoning its way to success through sheer brute force alone.
The G3258 is based around a processing concept that many game-makers have left behind, and if you're looking to run the latest and greatest titles at high frame-rates, there's the possibility that you could find yourself upgrading sooner rather than later.
Again with the people who are ignorant about how multi-processing on a time-sliced SMP OS works, and how programs that are written for quad-cores can easily map to a dual-core at 2X the clock rate and still function properly.
7/2014
The AE was already a bad idea back then, and somehow people are butthurt that it's doing worse 2 years later.
By that logic, maybe we shouldn't phase out 2G and reuse its spectrum for LTE, to the massive benefit of the rest of the population, because grandma doesn't want to ditch her 15-year-old Nokia.
Another developer too lazy to remove the console 0+1 dedication.
I expect a trashy PC port.
You must be joking, right?
DICE's PC games have always been excellently optimized, and they've set the standard on that front.
Battlefront is still the best on PC in terms of visuals vs. performance.
Gameplay-wise, they've bombed since BF4, but their engine is top-notch by far.
Larry, please post a screenshot of one of your 5.4GHz dual-cores. That is, in fact, the minimum speed needed to be twice the frequency of the slowest quad-core desktop CPU Intel sells in 2016. Your entire argument falls flat on its face without said 5.4GHz dual-core.
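The arithmetic behind that 5.4GHz figure, on the assumption (mine, consistent with the post but not stated in it) that the slowest quad-core desktop CPU Intel sells in 2016 is the 2.7GHz i5-6400:

```python
slowest_quad_ghz = 2.7                    # assumed: i5-6400 base clock, 2016
required_dual_ghz = 2 * slowest_quad_ghz  # "2X the clock rate" from above
print(required_dual_ghz)                  # 5.4 (GHz), the figure quoted
```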
