Observations with an FX-8350

Discussion in 'CPUs and Overclocking' started by Idontcare, Dec 15, 2012.

  1. guskline

    guskline Diamond Member

    Joined:
    Apr 17, 2006
    Messages:
    4,628
    Likes Received:
    11
    First, hats off to IDC for one of the few, if not the ONLY, in-depth reviews of 8350 performance on its own and in comparison to Intel's i5/i7 chips.

    This morning I checked the pricing at Newegg for the 8350, 3570k and 3770k respectively. The prices, rounded up to even dollars, are $200, $230 and $330.

    The 8350 is simply not equivalent to a processor that costs $130 more (8350 vs 3770k).

    Based on relative cost, it is closest to the 3570k. Perhaps one can find an isolated benchmark where an overclocked 8350 just beats out a stock 3770k, but the overwhelming number of benchmark victories belongs to the 3770k. Why would this surprise anyone? If you overclock the 3770k like you overclock an 8350, the gap gets wider. Again, a surprise? No! Comparing a $200 chip to a $330 chip usually results in this outcome.

    Owning both an 8150 and now an 8350 (plus two 2500Ks), I can say that AMD got realistic about pricing the 8350 Piledriver, unlike the horrible overpricing of the 8150 (glad I waited till it fell to $170 before I bought it). Moreover, AMD made a decent effort to quickly improve the 8350 over the known deficits of the 8150.

    Is the 8350 in the same league as the 3770k? No, but for what it costs it shouldn't be. On price alone, the only Intel chip it should be compared with is the 3570k ($200 vs $230).

    The Sandy Bridge, and now Ivy Bridge, CPUs are remarkable considering their price. The 8350 is not a bad chip overall considering its price.
     
    #526 guskline, Jan 16, 2013
    Last edited: Jan 16, 2013
  2. jvroig

    jvroig Platinum Member

    Joined:
    Nov 4, 2009
    Messages:
    2,397
    Likes Received:
    0
    You seem to be a bit misguided here. Intel's HT is in no way an indicator that single-threaded performance and high IPC are passé. Higher IPC is still among Intel's goals. To borrow a phrase from IDC, higher IPC is the tide that floats all boats - improving IPC improves both single-threaded and multi-threaded performance, across all types of applications. If it were true that "BD is just misunderstood, it is for multi-tasking / heavy workloads / parallel, etc!!" then the server sector would have eaten it up in gobs; you know how that ended (power hog). And even server benchmarks didn't make Bulldozer shine against Intel - not in pure performance, not in performance per watt.

    It's not wrong to try to increase MT performance. That's why Intel bothered with HT. But they went about it the right way: don't overlook IPC improvements and single-threaded performance.

    I'm not knocking CMT. CMT has a different design goal versus SMT. CMT is not the BD problem. The BD problem is that each core, even when fully possessing a module, is not that powerful to begin with. And that's something they thought they (AMD) could live with. The market did not agree with them, in any sector.


    I see absolutely nothing interesting about that, or anything that would change my mind in a positive way. BD has 1.2 billion transistors. The 2700K has 1.16 billion, slightly lower, AND it includes a GPU. Better performance, better power consumption, all in a design with a lower transistor count for the CPU? That's the CPU design to praise, not the BD design.


    Now that is a reasonable statement I can live with. No hyperbole like "it changes the way you look at it", no inventing of irrelevant throughput metrics, no trying to pass it off as a misunderstood engineering marvel that is way ahead of its time, just the plain and honest truth :thumbsup:

    I still have a Thuban rig that could be swapped for a BD. BD came and went, I was not tempted. With PD, I am at least considering it now. It is a step in the right direction.
     
    #527 jvroig, Jan 16, 2013
    Last edited: Jan 16, 2013
  3. dastral

    dastral Member

    Joined:
    May 22, 2012
    Messages:
    67
    Likes Received:
    0
    I don't think anyone would be surprised by this; we know Intel's IPC is much, much higher.
    However (and obviously), software hyper-threading is much less efficient/effective than hardware hyper-threading.
     
  4. Idontcare

    Idontcare Elite Member

    Joined:
    Oct 10, 1999
    Messages:
    21,130
    Likes Received:
    0
    Answering "what" it does for one's life is easy, answering "how" it does it entails a far lengthier response ;)

    Backstory: My life's ambitions include playing some role, however small, in enabling humankind to transcend the non-sustainable energy production ways of our ancestors.

    Be it photovoltaic (solar cells), hydrogen fuel (splitting water with sunlight), or improving energy storage devices...I've had the opportunity to professionally dabble in quite a few different thrusts of alternative energy developments.

    Computational chemistry is one of those areas which stands to provide a dual benefit to humankind.

    For starters, it can help lower the environmental footprint of ongoing materials and chemistry research in just about any industry, because researchers can cycle through many more hypothetical compounds and experiments in silico rather than in the laboratory with real chemical compounds that were synthesized with huge energy and environmental impact, and whose production resulted in equally undesirable chemical wastes.

    Secondly, when applied directly to the pursuit of developing better materials and compounds for use in the alternative energy sector (my graduate research was on the development of molecules that absorb sunlight and split water into hydrogen fuel and oxygen gas), the utility of computational chemistry cannot be overstated in terms of time-to-results, because you can let the computer run through hundreds more molecule candidates than a lab full of technicians could ever hope to do in the same time.

    The rate-limiting step in computational chemistry is the processing power of the computer, obviously, and the imagination of the computational chemist (the computer doesn't just invent molecules on its own, not yet anyway ;))...and it was this fact alone that motivated me to become a process node development engineer, because I saw that I was about 20yrs ahead of the curve in terms of the computing power I felt would be needed to reasonably enable practical computational chemistry.

    [IMG]

    ^ from my dissertation.

    So I set aside my near-term aspirations to develop alternative energies and instead elected to make a career out of doing what I could to help speed along Moore's Law in the meantime so that someday, should I live long enough to see the other side of it, the semiconductor industry would be producing CPUs that were finally fast enough to enable me to do what I truly aspired to do in life (use computational chemistry to enable a solar-based energy economy of some sort).

    In the meantime I pursued the development of computational chemistry methodologies within process node development that made real headway in speeding up the resolution of long-standing issues. For example, the elucidation of the primary mechanism of plasma damage which occurs in carbon-containing BEOL ultra low-k dielectrics (the stuff that keeps our chips from running even slower than they already do). I was able to nail that down to the creation of methane and a residual silicon dangling bond, and once we (industry-wise) realized what was going on, we were able to remedy the damaged dielectric and make chips that were some 5-8% faster than those that were not remedied.

    So when posed with the question:

    My answer is - a sense of fulfillment and accomplishment, and quite possibly a future that we might not otherwise have ;)
     
  5. guskline

    guskline Diamond Member

    Joined:
    Apr 17, 2006
    Messages:
    4,628
    Likes Received:
    11
    IDC, where did you do your graduate work? My oldest son-in-law defended his PhD in Organic Chemistry in late 2009 after receiving his B.S. in Chemistry in 2004. He went straight into the lab, and at times we wondered if he would ever finish. It was a long haul and gave me a great appreciation of the rigors of graduate programs.
     
  6. Idontcare

    Idontcare Elite Member

    Joined:
    Oct 10, 1999
    Messages:
    21,130
    Likes Received:
    0
    OK, we've covered two of my three real-world apps of personal interest (TMPGenc and Gaussian), which leaves the "money maker" application - MetaTrader 4.

    MetaTrader4 (aka MT4) is a software platform that enables the trading of foreign currencies and other commodities (oil, metals, etc) on the spot-market through a broker of one's choice (provided they support the platform of course).

    There are many aspects to MT4 when it comes to trading currencies on the foreign currency exchange (aka FOREX): one can use it simply as a data-tracking platform for graphing and analyzing market trends, or one can use it to manually place orders of varying types (market orders, limit orders, etc), and, perhaps its most noteworthy feature, it enables autonomous trading by way of a coded trading algorithm.

    It is the latter that has me involved with MT4. I write code which implements various trading strategies in algorithmic (analytical) form and which, in turn, trades foreign currencies 24 hrs a day, 5 days a week, without the aid of a human for any buying or selling decisions.

    I got into forex back in 2006 when my employer at the time, Texas Instruments, decided they no longer intended to maintain a CMOS R&D team for nodes beyond the 65nm node. That didn't mean I lost my job, but I did lose my interest in my job at TI at the time and the prospects of striking out on my own to professionally trade foreign currencies was very interesting indeed.

    Some information on the MT4 app itself - it is a multi-threaded app, but not in the way that you might think. You remember how in the early days of multi-threaded gaming a game would be advertised as multithreaded, but in reality all they did was make the video one thread and the audio another thread? Technically it was multi-threaded, but the audio thread wasn't really CPU-intensive anyway, so it hardly helped.

    That is how MT4 is multi-threaded. The rate-limiting step in the CPU-intensive portion of MT4 (the backtesting feature) is single-threaded, so for all practical purposes the app can safely be considered a single-threaded app.

    There are two usage modes when one uses MT4 as I do - as both a programmer/coder and as a user - so it is relevant to determine single-threaded (single-instance) performance as well as multi-tasking performance with multiple instances of the application. (Each currency pair can use its own core, so in practice I could fully saturate a 100+ core CPU.)
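    Since the CPU-heavy part of each instance is single-threaded, the multi-instance usage mode amounts to coarse-grained process parallelism. As a minimal sketch (not MT4's actual API - `run_backtest` and the pair list are hypothetical stand-ins), scaling one single-threaded job per currency pair across the cores looks like:

    ```python
    # Minimal sketch of coarse-grained parallelism: one independent,
    # single-threaded job per currency pair, one process per core.
    # run_backtest() is a hypothetical stand-in for one MT4 instance.
    from concurrent.futures import ProcessPoolExecutor

    def run_backtest(pair: str) -> str:
        # Placeholder for one single-threaded backtest run.
        return f"{pair}: backtest complete"

    pairs = ["EURUSD", "GBPUSD", "USDJPY", "AUDUSD"]

    if __name__ == "__main__":
        # The default worker count is the machine's core count, so
        # throughput scales with cores even though each job is serial.
        with ProcessPoolExecutor() as pool:
            for result in pool.map(run_backtest, pairs):
                print(result)
    ```

    With zero interdependence between the jobs, this kind of workload scales almost linearly with core count, which is exactly the "fully saturate a 100+ core CPU" scenario.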

    [IMG]

    ^ this is an application that I have been using since 2007 and I had the privilege of encountering fellow ATF members SlowSpyder and Peter Trend who generated the Phenom II data shown in this graph :thumbsup:

    It was this data that compelled me to buy Q6600's instead of PhII's for my quad-core farm of computers in the basement.

    What we see here is that when I am running a single instance of MT4 - a common usage mode for programming and coding, where we need to rapidly iterate through debug and verification procedures as we compile/tweak new code - the Piledriver microarchitecture is just not a good match for this application by any stretch of the imagination :(

    This is a bummer for me. I really hoped that it would be able to step in and serve as my programming workstation, but it has vastly lower IPC even in comparison to the Phenom II, let alone the 5yr old Q6600's I am looking to replace.

    On the flip side, the 3770k really shows a solid improvement in IPC, as well as additional clockspeed headroom above and beyond the Q6600; basically this appears to be an app the 3770k was made to run :)

    OK, so the FX8350 is out of contention as my MT4 programmer's station, but what about the other usage profile? Once a code is completed, it is uploaded to run in real-time. This is where the money is made, or lost, and time is money in this usage scenario. So how well can the FX8350 chew through loads of currency pairs (fully loading all 8 cores) in comparison to the other processors?

    [IMG]

    Well, at least its fortunes have markedly improved, though not enough to best the 3770k. Thanks to its 8 cores versus the 4 cores of the PhII X4 and Q6600, the aggregate throughput of the FX8350 bests the other two contenders.

    Now before anyone gets the wrong impression from the graph - as guskline continues to rightly point out - the FX8350 is a $200 processor, and not a $330 processor, for a good reason. And this is just confirmation of that.

    But look at that slope. The $330 3770k gets 6.17 passes per minute per GHz while the $200 FX8350 gets 5.14 PPM/GHz.

    Breaking that down to money, that's 0.0187 PPM/GHz/$ for the 3770K versus 0.0257 PPM/GHz/$ for the FX8350.

    Now that's not an entirely useful comparison, because we haven't diluted the performance costs with the total true costs (mobo, RAM, etc, plus power costs over a reasonable lifetime of the rigs and so forth), but there is a clear performance/$ advantage here for the FX8350 with this application when it comes to performance in "live mode".
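    For what it's worth, the per-dollar arithmetic above is easy to reproduce (the slopes are the PPM/GHz figures from the graph, the prices are the Newegg figures quoted earlier in the thread):

    ```python
    # Performance-per-dollar from the best-fit slopes (passes/min per GHz)
    # and the Newegg prices quoted earlier in the thread.
    chips = {
        "3770K":  {"ppm_per_ghz": 6.17, "price": 330},
        "FX8350": {"ppm_per_ghz": 5.14, "price": 200},
    }

    for name, c in chips.items():
        value = c["ppm_per_ghz"] / c["price"]  # PPM/GHz/$
        print(f"{name}: {value:.4f} PPM/GHz/$")
    # 3770K: 0.0187 PPM/GHz/$
    # FX8350: 0.0257 PPM/GHz/$
    ```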

    Based on the test results from my three real-world apps of interest I can tell that my particular FX8350 is destined for the basement where it will replace one of the existing Q6600's and become a forward-testing/live-trading MT4 box. I will keep the 3770K as my main desktop CPU where it will be primarily tasked with transcoding and MT4 single-app programming and debug usages.
     
  7. inf64

    inf64 Platinum Member

    Joined:
    Mar 11, 2011
    Messages:
    2,252
    Likes Received:
    7
    IDC great testing methodology and great results!
    Test setup: 10/10
    Testing: 10/10
    Presentation of results: 10/10
    Conclusion:10/10
     
  8. SlowSpyder

    SlowSpyder Lifer

    Joined:
    Jan 12, 2005
    Messages:
    10,133
    Likes Received:
    4
    It was very much my pleasure! Thanks for continuing to be AT's technical encyclopedia. :) It was a while ago, but are you sure I didn't use a Phenom I 9850 for my testing? I could be wrong here, it has been a while. :)



    Glad to see you found a use for it, something making it worth keeping. Will you overclock with an aftermarket cooler and a voltage bump now? Or if this is going to be a 24/7 machine, maybe undervolting? Looking at your graphs, I do not see another few hundred MHz changing anything... but, if you're like me you have to take those MHz just because they are there. :p
     
  9. frozentundra123456

    frozentundra123456 Diamond Member

    Joined:
    Aug 11, 2008
    Messages:
    8,996
    Likes Received:
    20
    Excellent work. However, I have one caveat. If one is using the computer for an income-producing function, as, if I understood correctly, you are, the performance per dollar would also include the extra income produced by having more timely and/or more complete data. In that case the initial price difference between two CPUs would be very minor relative to the lost income from lesser data output, although I don't know how one would quantify this. If you are using the term "money-maker" application in just a figurative sense, my apologies.

    Edit: BTW, I also have a degree in chemistry, but was never able to go past the BS level. So I really understand how difficult and complex the work you are doing really is.
     
    #534 frozentundra123456, Jan 16, 2013
    Last edited: Jan 16, 2013
  10. grimpr

    grimpr Golden Member

    Joined:
    Aug 21, 2007
    Messages:
    1,073
    Likes Received:
    0
    The classic FPU will be obsolete by the HSA era in 2014 APUs; that's why AMD is cutting down its transistor budgets on x86 cores where it matters.
     
  11. ShintaiDK

    ShintaiDK Lifer

    Joined:
    Apr 22, 2012
    Messages:
    20,102
    Likes Received:
    15
    No...
     
  12. Homeles

    Homeles Platinum Member

    Joined:
    Dec 9, 2011
    Messages:
    2,585
    Likes Received:
    0
    It definitely won't be obsolete. Latency still matters, and there's plenty of things that a GPU can't do.
     
  13. Shamrock

    Shamrock Senior member

    Joined:
    Oct 11, 1999
    Messages:
    898
    Likes Received:
    0
    Amazon has the FX-8350 for $193.79, just trying to save you a couple bucks.
     
  14. AtenRa

    AtenRa Lifer

    Joined:
    Feb 2, 2009
    Messages:
    11,276
    Likes Received:
    51
    Core i7 2700K has 995M transistors with a die size of 216mm2; take away the iGPU, and the CPU alone, with caches and memory controllers etc, is below 900M.

    Core i7 3820 has 1.27B transistors with a die size of 294mm2. It is a quad core + HT (8 threads), same as the 2700K, with 1MB L2 cache + 10MB L3 cache and quad-channel memory controllers for Socket 2011. If you compare it against the 2700K directly, you will come to the conclusion that the Core i7 3820 is inefficient, and yet it uses the same cores as the 2700K.

    Bulldozer has 1.2B transistors with a die size of 315mm2. It is a quad Module 8 Threads CPU with 8MB L2 Cache + 8MB L3 Cache with Dual Memory Controllers.

    Now technically, you CANNOT directly compare the Core i7 3820 and Bulldozer/Piledriver against the Core i7 2700K. The first two are made for server use, with larger amounts of L2 and L3 cache, and the 3820 has two more memory controllers.

    Comparing the Core i7 3820 against the FX8350 technically makes it apples to apples due to their ~same characteristics. Now compare the two, adding the price difference including a motherboard and memory, and you will find the FX8350 to have a superior performance/price ratio. The only plus for S2011 is that you can upgrade to a 6-core Intel.

    IMO AMD should have released FX on the G34 socket for the desktop/workstation with 8 to 16 cores and left Llano/Trinity for the sub-$300 market with FM1/FM2. Make two dies, one quad and one octa-core with the same iGPU (no L3), and compete in desktop against Intel with better products (higher MT and a better iGPU than your competitor).

    Also, comparing BD/PD against SB (1155) and concluding that BD/PD is inefficient is the same as comparing Trinity vs SB/IB in graphics and saying that SB/IB is inefficient. But technically, comparing Intel's HD3000 against Trinity's graphics is apples to oranges. HD3000 has more than half the transistor count of Llano's iGPU, so no wonder it's slower. Same goes for the CPU part: Llano/Trinity CPU cores are half the transistor count of a quad-core SB/IB.

    To conclude,

    What I'm saying is that AMD, due to time and resource constraints, is forced to sell a server CPU into the same market as an Intel desktop CPU, and with an older lithographic process. The combination is the worst you can put together for current desktop applications.

    Just compare (performance, die size, and efficiency) the Core i7 3820 against the IB 3770K and you'll see what I mean. ;)
     
    #539 AtenRa, Jan 17, 2013
    Last edited: Jan 17, 2013
  15. jvroig

    jvroig Platinum Member

    Joined:
    Nov 4, 2009
    Messages:
    2,397
    Likes Received:
    0
    @AtenRa:

    You apply conditions/rules that I find to be completely arbitrary and irrelevant to both consumer purchasing decisions and in reviewing / objectively trying to qualify "good design".

    There is no way I can engage you in debate that I would find reasonable, because my impression is that you are determined to pigeonhole BD into a place where it will be seen in the best light. We will never agree on this, and it seems pointless that we debate each other further. Good day, sir.


    EDIT: Just noticed that your blog's affected by a domain censor again. That sucks, but you've got it easy ;) While you sometimes get affected by domain censors, I sometimes get affected by IP censors, meaning when it happens to me, I actually cannot access the forums at all! Try beating that :D


    I had high hopes for this one. I'm a bit disappointed, but it certainly could have been worse. I wonder what would happen if a Thuban with a mild OC (say, 3.7?) were added to that graph? Just glancing at the dots of the Phenom II X4 and the y-axis figures, and using a 1.5 multiplier (best case, since multiple instances are running), it looks like it could only reach a max of 20-22, no better than the FX8350 at stock (or are my eyes deceiving me?). It could be a worthy CPU swap after all. I have similar use cases where I achieve MT workloads by simply running multiple instances, or in others by spawning multiple independent threads, each one computing its own data set with zero interdependence.

    Thanks! :thumbsup:
     
    #540 jvroig, Jan 17, 2013
    Last edited: Jan 17, 2013
  16. AtenRa

    AtenRa Lifer

    Joined:
    Feb 2, 2009
    Messages:
    11,276
    Likes Received:
    51
    I was talking technically, since people spoke about die sizes and performance/die area and efficiency. I'm not talking about consumers, and as you have seen, I have clearly said that BD/PD cannot compete against 22nm Intel desktop CPUs in efficiency, not only because of the litho process but because it has more (and slower) L2/L3 cache, making it less efficient for desktop use.

    So I'm not trying to say that BD/PD is the best CPU, but that when people compare them against SB/IB (1155) they should know and UNDERSTAND the differences.

    It is more apples to apples (technically) to compare the Core i7 3820 (same SB architecture) against the FX8350 than the 3570K/3770K.
    The consumer doesn't care about "technically", and I agree that IB is more efficient by far for desktop use for all the reasons I described above, but that doesn't make BD/PD inefficient against a comparable Intel CPU like the 3820. ;)
     
  17. Idontcare

    Idontcare Elite Member

    Joined:
    Oct 10, 1999
    Messages:
    21,130
    Likes Received:
    0
    [IMG] Thanks for the very kind words :$

    In the end we owe it to ourselves to test with the apps we use. Synthetic benchmarks are a good guide to go by in the absence of performance data with one's specific applications of interest, but there is no substitute for just getting your hands dirty with the very software you intend to use on the hardware in question.

    Yeah, you also tested the same app with your original Phenom I, but I left those data off these charts for the sake of clarity (you'll recall the PhI was slightly better than the PhII, but still underperformed the Q6600).

    I plan to test the stock HSF with non-stock TIM, then I will test with non-stock HSF's (NH-D14, H100, and a TME III), and then I will lap the FX-8350 and test again.

    Even if I don't get higher clocks, if I can lower the operating temperatures and reduce power consumption at 4GHz operation then I'll take it.

    This is for income, and definitely there are two sides to that coin. There is "operating cost" which must adhere to a budget, and then there is "revenue opportunity" which is capped and limited by the expenditures made in the operating cost window.

    You can't win the lottery if you never buy a lottery ticket ;)

    In general, yes, a faster computer ought to lead to increased revenue that would outweigh the cost outlay for the faster computer in the first place.

    However, in this specific case since I already own the FX8350 and my choices are (1) find something to do with it, or (2) resell it and recoup pennies on the dollar - I am simply trying to find something that I can do with it.

    The other thing I took away from this experience is that I really need to re-evaluate my existing software choices. Gaussian isn't the only computational chemistry app out there, nor is TMPGEnc the only transcoding app that can give me good IQ/bit-rate.

    The MT4 situation is one that does lack choice, though. I either use that app or I use no app. Thankfully, it appears to be an app that exercises a portion of the ISA for which Intel continues to optimize and improve IPC.

    At the risk of inserting my foot into my mouth by weighing into this discussion and providing my unsolicited opinion, it appears to me that you two aren't seeing eye-to-eye on this because you are both talking about different things (for different reasons).

    Atenra's comments are applicable IMO, but only for the purposes for which he is invoking them in the first place. From a product-lineup standpoint, the FX products were not meant to compete with Intel's iGPU (APU) products, that is what Llano and Trinity were for.

    It just turned out that Llano and Trinity weren't as up to the task as AMD had hoped, so they had to dig into their server lineup and pull out the consumer-grade SKUs much as Intel does with their XEON EP microarchitectures for the extreme i7 products.

    The comparisons and analogies are not invalid; these are valid comparisons to make. But making a valid comparison is not the same as making a relevant comparison, because "relevance" is like art and beauty - the relevance of any product compared to another is in the eye of the beholder (the consumer, not the manufacturer).

    So when speaking of relevance, the questions themselves must be carefully framed and robustly account for the specific conditions under which the comparison would be relevant (and not just simply valid, a much easier condition to meet).

    Atenra's comparisons are valid IMO, but not relevant when contrasted to the specific end-user perspective with which you are framing your questions. However Atenra is of course viewing the discussion with a different perspective in mind, and within the confines of that perspective his valid comparison is also relevant.

    By the same token, so too are your arguments regarding the relevance of the comparison with respect to the perspective of the demographic for which you are arguing.

    In order for there to be constructive dialogue on the topic of relevance and validity, the parties involved must have some degree of agreement in terms of why it is they are talking about what they are talking about (including the specific perspective that is being entertained as a limiting condition that bounds the relevance of the otherwise valid comparisons).

    I think I'll stop pontificating at this point; almost sprained my ankle climbing up on this soapbox :D

    Keyword is "again"...we never bothered to address your situation, it remained broken (always and forever) and we decided we were OK with that being your lot in life :p But we actually cared enough, at one point in time, to spend the time and effort needed to fix Atenra's censor issue...only to then re-double our efforts by spending time and effort to break it "again".

    You are the neglected step-child; whereas Atenra is being smothered with over-active misguided admin love from higher powers than my own :D

    In the end just trust that we strive to treat everyone equally, and if we haven't found a way to dick around with any given member as of yet then rest assured you have a number and we will get to you eventually :p

    Yeah, this is one of those cases where, owing to the coarse-grained nature of the multi-tasking approach, we are reasonably legitimate in just taking the slope of the best-fit equations (shown on the graph) and scaling for core count (or modules, or HT pairs as it were).

    A PhII quad-core scores 3.88 x GHz, which breaks down to 0.97 x GHz/core.

    So in theory a PhII hexcore (Thuban) would be expected to deliver (6 cores) x (0.97 x GHz/core) = 5.82 x GHz.

    [IMG]

    Looking at the MT-ST (multi-tasking, single-threaded) graph, we see the slope for the CMT-enabled 8-core FX8350 is 5.14 x GHz.

    Thus, at the same GHz there is no question a Thuban would best the FX-8350 (5.82 vs 5.14 -> Thuban is 13% faster).

    A ~3.53GHz Thuban would be expected to perform identically to a 4GHz FX-8350 in this scenario (both churning out ~20.5 passes per minute). A 3.7GHz Thuban (21.53 PPM) would outperform a 4GHz FX-8350 (20.56 PPM).

    Perhaps equally intriguing, though, is the fact that, as you noted, the performance of a 45nm hexcore Thuban would actually come within striking distance of a 22nm quad+HT 3770K: 5.82/GHz vs 6.17/GHz. (Not that Thuban stands any chance of scaling to the same GHz as the 3770k, nor at the same power consumption, but the price/performance would be there.)
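    The scaling estimate above is simple enough to jot down as a few lines (the slopes are the measured PPM/GHz figures from the graphs; the hexcore figure is the hypothetical extrapolation, not a measurement):

    ```python
    # Extrapolating a hypothetical Thuban from the measured PhII X4 slope
    # (all slopes in passes/min per GHz, taken from the graphs above).
    phii_x4_slope = 3.88                  # measured, 4 cores
    per_core = phii_x4_slope / 4          # 0.97 per core
    thuban_slope = 6 * per_core           # hypothetical X6: 5.82
    fx8350_slope = 5.14                   # measured, 8 CMT cores

    # Clock at which a Thuban would match a 4 GHz FX-8350:
    parity_ghz = fx8350_slope * 4.0 / thuban_slope
    print(f"Thuban slope: {thuban_slope:.2f} PPM/GHz")  # 5.82
    print(f"Parity clock: {parity_ghz:.2f} GHz")        # 3.53
    ```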
     
  18. bgt

    bgt Senior member

    Joined:
    Oct 6, 2007
    Messages:
    565
    Likes Received:
    0
    IDC, you have both the i7-3770K and the FX8350. Do you notice any difference in everyday computing between these two? I don't mean the benches or other scientific stuff. I'm still wondering if I should get a 3770 or stick with the 8350. I don't plan to OC it anyway. Delidding is too much fuss and the gain is relative. I tried OCing my 2500K, but it was boiling hot at 4.5GHz, so for me a no-go. The FX8350 box I have now is extremely quiet and cool, so that's not a problem. (Usage: CAD PCB design, lots of un/zipping huge files with WinRAR.)
     
    #543 bgt, Jan 17, 2013
    Last edited: Jan 17, 2013
  19. Idontcare

    Idontcare Elite Member

    Joined:
    Oct 10, 1999
    Messages:
    21,130
    Likes Received:
    0
    For anything that is not compute intensive (like a 3hr transcode job where 30% faster means it gets done an hour sooner, etc) the FX8350 is overkill just as the 3770k is overkill.

    If I owned just the FX8350, I would not upgrade to a 3770k.

    This isn't a case of choosing between "good vs bad", it is more the case of choosing between "better vs best". If you already have "better" then upgrading to "best" is a bit superfluous.

    Even in the example of the disparity in transcode times, where the 3770k trounces the 8350 in my specific app of choice, the reality is that if I did not already have the 3770k then I'd still use the FX8350 (I would not go and replace the 8350 by purchasing a 3770k); I'd just queue up my transcoding jobs to run overnight. Whether they get done at 5am or 2am, either way it would be done long before I woke up and got back to the computer to check on its progress.

    I find the performance of my FX8350 to be more than sufficient for my needs. I am in awe of just how much more punch the 3770k has in terms of pure bursty speed - there is no question the apps open faster and productivity stuff just happens that much faster on the 3770k - but that is expected for the price premium.

    But we aren't talking about a binary difference here, not like the comparison to, say, my laptop. There are things I just won't bother wasting my time attempting to do on my laptop - transcoding DVDs being one of them.

    I don't feel that way about the 8350; there is nothing I would do with my 3770k that I wouldn't feel like doing on the 8350. They are interchangeable in every regard.
     
  20. bgt

    bgt Senior member

    Joined:
    Oct 6, 2007
    Messages:
    565
    Likes Received:
    0
    IDC, thanx.
    Are you talking about the 3770K in OCed mode or at normal speed?

    PS IDC, did you ever compare the HD 4000 iGPU with the Fusion iGPU from the A10 Trinity chips? Any experience with them? I really like your straightforward/sober approach toward technical differences.
     
    #545 bgt, Jan 17, 2013
    Last edited: Jan 17, 2013
  21. Idontcare

    Idontcare Elite Member

    Joined:
    Oct 10, 1999
    Messages:
    21,130
    Likes Received:
    0
    Just talking about normal speed mode (3.5GHz w/turbo enabled 3770K versus 4GHz w/turbo enabled FX8350).

    What I can't speak to is the gaming aspects between these two platforms. I do game but with stuff that is so antiquated that either system would do just fine when paired with the GTX460's I own.

    Again, it is all about price/performance, though. An FX8350 with a GPU that is $130 more expensive than the GPU you would buy to go with a 3770K (so the systems were price-comparable at the end of the day) would probably give the 3770K a run for its money in terms of gaming frame rates (just a guess).

    Unfortunately I have no experience to speak to when it comes to AMD's iGPU (APUs). The Intel MIVE-Z mobo I have doesn't let me use the Intel iGPU either though, so I guess the conclusion there is that one can safely say I am quite ignorant regarding the performance viability of either company's iGPU solutions (beyond what I read in reviews of course ;)).
     
  22. bgt

    bgt Senior member

    Joined:
    Oct 6, 2007
    Messages:
    565
    Likes Received:
    0
    IDC, thanx :thumbsup:
     
  23. frozentundra123456

    frozentundra123456 Diamond Member

    Joined:
    Aug 11, 2008
    Messages:
    8,996
    Likes Received:
    20
    Of course you are correct. I was in no way implying that you should not utilize the 8350. What I really meant was that if one is building from scratch for an income-producing situation, this is something to consider.
     
  24. Idontcare

    Idontcare Elite Member

    Joined:
    Oct 10, 1999
    Messages:
    21,130
    Likes Received:
    0
    Absolutely.

    If it weren't for the fact that my intentions in buying the FX8350 were impure - tainted with the dreaded muchosinfectious enthusiasticus, where I was aiming more to kill two birds with one stone (get another rig to do work, and get another rig to play around with beforehand) - then I would have just cut straight to the chase and bought a 3930k ;) :D

    Don't hold it against me, though; sometimes we professionals like to go slumming with the cheap-and-dirty things one can find in the random eggsaver email here and there :p It's not my fault, I tells ya - the damned FX8350 retail box had that "come hither" smokey-eye thing going on :( :hangs head in shame:
     
  25. frozentundra123456

    frozentundra123456 Diamond Member

    Joined:
    Aug 11, 2008
    Messages:
    8,996
    Likes Received:
    20
    Yeah, I resisted the temptation to suggest a 3930K, but I see you have already thought about it.