
Cryptocoin Mining?

Page 343 of the AnandTech community forums.
Thanks!! Time to break 550 Kh/s!

I guess these kinds of optimizations will also result in better gaming performance?

Potentially - optimizing memory timings to the chips used could reduce latencies and lead to a slight boost overall. I'm curious to see if he can tweak profiles for reference 290s using Elpida chips (like my XFX reference 290)
 
Before The Stilt's BIOS mod, my Sapphire 7950 couldn't break 450 kh/s without going over 75C (it's the top card, dual 7950s in a case). Now she can easily pull 550 kh/s while I'm working on the computer at ~72C with I-15, and 630-640 kh/s at 74C and I-19 when not using the computer. Love it.

Now...how the hell do I get cgwatcher to behave and bring the intensity down when the computer is no longer idle? It works at intensities of less than 18, but 19-20 seem to block it from dropping the intensity down. 🙁
 

It works for me. Try updating your program. I never had problems with bouncing between intensity of 17 and 19 when idle and in use.

I'm surprised the vendors didn't optimize their memory chips and latencies... What the hell are they doing? Such a move would help them distinguish one vendor from another.
 
VICTIMLESS CRIME D:


"From December 31 to January 3 on our European sites, we served some advertisements that did not meet our editorial guidelines - specifically, they spread malware,"


:colbert:
"For an ad platform it is virtually impossible to guarantee 100% malware free ads."

How about they stop pissing people off with ads entirely, morons? No, they don't get it.
 
Hmm, could manufacturers be gnashing their teeth now that it's been exposed that certain cards ship with suboptimal memory timings? Perhaps someone should run a test with their video cards to see if they notice any differences while gaming. Maybe they thought the difference didn't matter enough...

Also, I wonder how things are on the NVIDIA end...😉
 

I'd be quite surprised if it made any difference in gaming, as graphics workloads aren't very sensitive to latency at all; they're mostly streaming data. For example, with texture sampling it might take a hundred clocks to get the first cache line of texture data back from memory, but since you've been streaming requests the entire time, the next cache line comes back on the next clock, and so on for millions of clocks. That's more than enough to mask the start-up time, which is basically why the manufacturers can get away with such variety in their memory timings... at least when it comes to gaming.
 

True. In gaming comparisons, identical GPUs with identical drivers at identical clock speeds turn in practically identical results. There may be a couple FPS difference here and there, but nothing like the ~25% variance that apparently exists when mining (600k vs. 750k, etc.). If anybody came out with a card that performed 25% worse than its peers on a common gaming benchmark, it would be called out immediately in the hardware press. Memory latency obviously just doesn't affect gaming performance much, if at all.
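The latency-masking argument above can be sketched with back-of-the-envelope arithmetic (the 100-clock latency and request count are made-up illustrative numbers, not measurements from any real card):

```shell
# Illustrative only: assume a 100-clock first-access latency, one new
# request issued per clock, one result returned per clock once the
# pipeline is full.
latency=100
n=1000000   # number of cache lines fetched

serial=$(( n * latency ))        # if every access paid the full latency
pipelined=$(( latency + n - 1 )) # only the first access pays it

echo "serial: $serial clocks"
echo "pipelined: $pipelined clocks"
```

With requests streamed back-to-back, total time is dominated by throughput rather than latency, which is why looser timings barely register in game benchmarks.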
 

The non-optimal latency doesn't seem to affect games from my testing, but it makes a massive difference mining.
 
Pretty big news here regarding official regulatory acceptance.

Singapore has given guidance on how it intends to tax bitcoin transactions for businesses and merchants, becoming one of the first governments in the world to do so.

So far, the world’s governments and central banks have put more energy into persuading the public that bitcoin is risky and not a currency, or restricting its use, than formulating rules to tax its transactions.

A few countries have issued statements where the exact tax rules hinge on whether bitcoin is classified as a virtual currency, asset, or good. Slovenia (the home of Bitstamp) has said bitcoin is a virtual currency but not a ‘monetary asset’, and bitcoin income would be taxable.

Germany has said it regards bitcoin as ‘private money’ or a ‘financial instrument’, and the Swiss parliament is “considering” a move to have bitcoin officially recognized as a currency.
 
I'm not affected by this; all mine do 600+ at .975 or lower. But you guys who benefited from his work really should donate a few coins, or fractions of coins, for his effort.
 
Well, my Sapphire cards are suffering from this, I believe. The 280X is so much faster than the 7970 GHz no matter what I do with clock/memory speeds.
 
If anyone is mining here with R9's on wemineltc, what do you set for suggested and max diff in the worker setting?

Also, I have always used -I 13 on my cards and it worked fine. Yesterday I experimented with changing it to -I 17 and got higher hash numbers as reported by cgminer, but the hashrate reported by the wemineltc website dropped to a pathetic 70 kh/s. Now, I know the wmltc site says their hashrate is approximate and to just trust cgminer, but the last time I experimented with various settings and got such a small hashrate reported on the website, my payout slowed down significantly. Any advice?
 

Are you getting a lot of HW errors in cgminer? If you are, then the intensity could be too high. If you can maintain the higher intensity while keeping HW errors and rejects low, I would trust cgminer. Wemineltc shows wonky readings for me sometimes as well.
 

Agreed, watch the "HW" field in cgminer. I turned the intensity up too high on one of my cards, and the HW error number ticked up every few seconds. These shares are computed but not submitted to or accepted by the pool because they are known errors. This will kill your payout, but cgminer's reported hash rate will still be high.

The pool has no way of knowing your actual hash rate, just the rate at which you return results, and it shows an estimated hash rate based on that. My rigs total about .25 Mhash/s, but the pool dashboard numbers might show .12 one time and .43 another time, depending on temporary luck factors. However, most of the time they show in the .20-.30 range. My guess is that the higher your hash rate, the lower the variance on the pool dashboard (assuming no hardware errors of course).
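As a rough sketch of the estimate described above (the 2^32 hashes-per-difficulty-1-share constant is the Bitcoin-style convention; scrypt pools may scale share difficulty differently, and all the input numbers here are assumed, so treat this as purely illustrative):

```shell
# Hypothetical pool-side estimate: each accepted share of difficulty D
# represents on average D * 2^32 hashes, so over a time window the pool
# can back out an approximate hashrate from the shares it received.
shares=50    # accepted shares in the window (assumed)
diff=32      # share difficulty (assumed)
secs=600     # window length in seconds (assumed)

est=$(( shares * diff * 4294967296 / secs ))
echo "estimated hashrate: $est H/s"
```

Because shares arrive randomly, a short window swings wildly around the true rate, which matches the .12 to .43 spread described above; longer windows (more shares) tighten the estimate.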
 
Couple things I found on my cards regarding hashing performance when switching from BAMT (debian based mining specialized distro) to Win7.

The switch gave me better stability, but initially it dropped my hash rate significantly.

My hash rates dropped on Win7, and I've found two repeatable solutions to bring them back to within a whisker of what I was getting on BAMT.

1) Control Panel > System and Security > System. From there I had to click on "Advanced System Settings", choose the "Advanced" tab, and set the performance settings to "Adjust for best performance" instead of "Adjust for best appearance".

This boosted hash rates by 10% and was a repeatable effect on my system.

2) Set the cgminer.exe process to High Priority from within task manager instead of leaving it at default Normal Priority.

--this is of specific concern for anyone running a Sempron 145. I switched from a Phenom X3 to the Sempron 145 and my hash rates dropped significantly in Windows 7. Cgminer.exe needs to be at high priority in Win 7 when running a Sempron 145 (or any other single-threaded CPU, I'd wager) to get the hash rates you'd expect.


Does anyone know how to get cgminer to start automatically with a high priority? Currently I can manually set it, but I have my rig setup to automatically start mining after power outages and currently it loses hash rate unless I go in and manually set cgminer.exe from normal to high priority in task manager.

I use a .bat to launch cgminer and it looks standard enough,
setx GPU_MAX_ALLOC_PERCENT 100
setx GPU_USE_SYNC_OBJECTS 1
cgminer --scrypt --url="blah blah blah"

I drop a shortcut to the .bat in my system startup folder and after a power outage or reboot it automatically runs the .bat from within my cgminer folder.

I've tried different stuff in there, but I don't understand where to control how cgminer launches, or whether it's possible to have it launch with high priority via a flag in the .bat. Or if I need to go about setting cgminer to auto-launch with high priority in a different way.
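One way to do this (an untested sketch based on the .bat shown above; the cgminer folder path is an assumption you'd adjust for your setup): cmd's start command can launch a process at a chosen priority class, so the batch file itself sets high priority instead of Task Manager:

```bat
setx GPU_MAX_ALLOC_PERCENT 100
setx GPU_USE_SYNC_OBJECTS 1
rem /high launches cgminer at high priority from the start;
rem /d sets the working directory (adjust the path for your rig)
start "cgminer" /high /d "C:\cgminer" cgminer.exe --scrypt --url="blah blah blah"
```

If you'd rather not change the launcher, running wmic process where name="cgminer.exe" call setpriority "high priority" after cgminer is up should bump an already-running instance the same way.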
 
I got my Sapphire 7970 from 660 hash to 720 after flashing the BIOS, but speeds and temps have increased from 925/1375 @74C to 1050/1500 @85C.

Still no luck with my (POS for mining) 7970 6GB, that's stuck on 550 hash.

I'm in the same boat. I got a reference Powercolor 7970 that can't break 660 running at 945/1575. My Sapphire 7950 gets 670 @ 1100/1200 with stock BIOS!
 
After reading the litecointalk thread with The Stilt, it saddens me to see how incompetent some OEMs are with their products.

Also, I have a stock Sapphire 7950 that barely breaks 600 KH/s @ 1100/1250.
 