
Ethereum GPU mining?

Page 205
Um hold on, lemme see if I can find a link:

https://gist.github.com/marcialwushu/693bcea078b27fa51daa02c1b94a7e37

It actually isn't mcache, it's --cache

You use the --cache command line option when launching geth from the CLI; the default is 128, which is a bit goofy. Most people recommend --cache 1024, but I found from testing on my SSD that if I set --cache 10000 (or so), it dramatically cut down on client writes. Enough that a decent spinner could probably catch up.
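
For reference, here's a minimal sketch of that launch; --cache takes a value in megabytes, and 10240 is just the ~10 GB setting mentioned above:

```shell
# Start geth with a 10 GB database cache instead of the 128 MB default.
# A bigger cache lets the repeated state writebacks happen in RAM
# instead of hammering the drive.
geth --cache 10240
```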

It seems like geth wants to do a whole lot of writebacks, so letting it do writebacks to cache instead of the HDD is what cuts down on all the excessive writes. The amount of actual blockchain data downloaded over any arbitrary period of time is quite small, and you can clear all your blockchain data and resync pretty quickly with (a) a fast Internet connection and (b) a huge --cache value.

Anyway, if you had a machine with 16 GB (or more) of RAM and a 4 TB drive or something, you could set a 10-12 GB --cache value and let it go for a while. When it gets a bit full, delete the whole thing and let it resync.
 
The last time I tried was just a little over a week ago. Not a full sync, but a light sync, as I just needed to check my accounts.

Light sync takes a lot less time. My XPS 12 Ivy Bridge laptop finishes it in an hour or so. Of course, it wasn't from the beginning, but from sometime last year when they introduced light sync, which improved things a lot over full sync.

I think the Constantinople hard fork has further improvements for this.
 
Light sync is good, but it relies on some other full node supporting it, basically. So if you're trying to be a fundamental part of the Ethereum infrastructure by running your own node, full sync is the way to go.

If all you're doing is trying to sync up to do send/receive/etc. from your Ethereum client then light sync is sufficient, and faster. I mean hell not all of us want to use MEW all the time, right?
 
Light sync is good, but it relies on some other full node supporting it, basically. So if you're trying to be a fundamental part of the Ethereum infrastructure by running your own node, full sync is the way to go.

Full sync is what, over a terabyte or something? And it probably takes a week or so to sync. They really need to cut that down to make it practical; not many will be doing this.

If all you're doing is trying to sync up to do send/receive/etc. from your Ethereum client then light sync is sufficient, and faster. I mean hell not all of us want to use MEW all the time, right?

Metamask is simpler IMO. MEW might be better for permanent storage. But some of us have hardware wallets.
 
I don't think full sync is a TB yet, is it? I mean, it's substantial, but I was doing a full sync to a partition of my 512 GB SATA SSD a few months ago. Maybe if you count all the host writes it adds up to that amount (or more!), but again, you can nix that with a large --cache setting.

Personally I am not all that comfortable with MetaMask or MEW. But I admit that I did use MEW to move tokens quickly for trading purposes, once.
 
So I just tried syncing to a HDD with cache at 8192 (using geth 1.8.16). It's on the latest block for "block headers" but "state entries" is still going (about 1000/second), and disk activity is pegged at 100%. How many state entries are there supposed to be?
 
DrMrLordX: It's not like Metamask is shady or anything. Yes, being a browser extension does make it less suited to permanent storage. I use it for accessing dapps. If you have enough in ETH, I would recommend a hardware wallet.

I've heard that while the full sync is only 200 GB or so, it's not really full, as it doesn't keep records all the way back, and it'll go above 1 TB if you do. Here's a quote from an article about Turbo Geth.

Compared to the 1.2 terabytes of disk space required by Geth today, Turbo Geth users only need 252.11 gigabytes of disk space to run a full archive node.
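
A quick sanity check of the quoted numbers:

```python
# Sanity-checking the quoted Turbo Geth figures: an archive node
# shrinks from ~1.2 TB under Geth to 252.11 GB under Turbo Geth.
geth_gb = 1200.0     # Geth archive node, in GB
turbo_gb = 252.11    # Turbo Geth archive node, in GB

reduction = geth_gb / turbo_gb
print(f"~{reduction:.1f}x less disk space")  # ~4.8x less disk space
```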

Also, about your reply to my last post: I can understand the skepticism, and yes, a threat exists that another crypto might take over, but that's almost part of life; you are born with inherent threats around you. Ethereum has the most established base of developers, and ease of access is improving daily through new applications.

You can already find real world usage starting to surface:
-https://www.ccn.com/consensys-has-begun-supplying-electricity-using-ethereum-to-texas/
-https://bvo.trybravo.com/?refer=141294

Bravo is a company that didn't use an ICO to fund itself; they were actually featured on Shark Tank. They have their own scaling solution.

To my knowledge, Layer 2 scaling solutions aren't official yet, but the upcoming hard fork has an EIP that eases development of solutions such as state channels.
 
I think MetaMask had some bugs in it at one point, and people lost ETH using it, or something. Hopefully that's fixed and won't happen again. But I'm skeptical.

I'm not heavy in any crypto right now, so it's not a major issue for me. Turbo Geth seems interesting, though.

So I just tried syncing to a HDD with cache at 8192 (using geth 1.8.16). It's on the latest block for "block headers" but "state entries" is still going (about 1000/second), and disk activity is pegged at 100%. How many state entries are there supposed to be?

Hmm, I would try a larger --cache value to be perfectly honest. Unless you can get through the state entries.
 
I think MetaMask had some bugs in it at one point, and people lost ETH using it, or something. Hopefully that's fixed and won't happen again. But I'm skeptical.

I think that's a harder problem to fix than anything else in crypto, including scaling. It's code, and human errors are amplified in code. Basically you expect problems to lurk somewhere, all you can do is manage the fallout in the best way possible.

The trade-off between convenience and security is almost like laws of physics. Easier to access + widespread = more security vulnerabilities. Even CPUs have security faults nowadays.
 
I think MetaMask had some bugs in it at one point, and people lost ETH using it, or something. Hopefully that's fixed and won't happen again. But I'm skeptical.
FWIW, apparently Metamask is compatible with hardware wallets, or at least the Ledger (I saw an announcement on Reddit in the last week or so). So you can now use MetaMask while your ETH stays secured behind a hardware wallet all the time.
 
FWIW, apparently Metamask is compatible with hardware wallets, or at least the Ledger (I saw an announcement on Reddit in the last week or so). So you can now use MetaMask while your ETH stays secured behind a hardware wallet all the time.

If you search for that, I think the first result is about the Trezor, and since I own a Ledger device, I know that's also supported. I still check multiple times to make sure I'm sending to the right address.
 
I just want to say that I was somewhat wrong about nodes. You don't need the archival node to contribute to the Ethereum network, which is what requires the 1.2TB of space. You only need the ~200GB that DrMrLordX was talking about for the full node. Yes, that is still quite a lot.
 
I wanted to try running a full node because I actually do have the disk space, but it was taking so long to sync (if it was even syncing) that I kinda gave up, haha. I should maybe set it up again on my mining rig so that it's more set-and-forget, and maybe eventually it will sync.
 
The devs are saying that, currently, Geth requires an SSD for a full sync. If you want to use an HDD for syncing, you have to go for light sync. Yes, 200 GB of space on an SSD is painful. Still, SSDs are fairly cheap nowadays.

Searching says it could take 2-3 weeks to finish. I think it's best you start from scratch and enable --fast sync with a cache appropriate to your RAM capacity to make it the fastest. Maybe you'll finish in a week. 😛
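
A sketch of what that launch might look like on geth 1.8.x (--syncmode "fast" is the newer spelling of the old --fast flag; the 4096 cache value is just an example sized for an 8 GB machine):

```shell
# Fast sync: download headers/bodies plus recent state only,
# with a 4 GB database cache to soak up writebacks.
geth --syncmode fast --cache 4096
```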
 
Oh wow, that explains it; I was trying to do it in a VM (lots of overhead for disk I/O) stored on a small RAID 10. I suppose I could try a RAID 10 or even RAID 0 directly on bare metal and see how that goes. I don't really want to sacrifice SSDs for something that involves a lot of small writes, as they probably won't last long.

My mining rig is probably a good candidate for that, since it's already bare-metal hardware; I just need to get a couple of 1 TB drives and do a RAID 0. I suppose it's possible to back up the DB and then restart from the backup if a drive fails?
 
I don't really want to sacrifice SSDs for something that involves a lot of small writes, as they probably won't last long.

geth from maybe 6 months ago was killing my SSD pretty quickly. I had to up my --cache value to get the client write rates down. geth ate something like 8% off the drive's life expectancy according to SMART readings.

How much RAM do you have on that mining rig?
 
geth from maybe 6 months ago was killing my SSD pretty quickly. I had to up my --cache value to get the client write rates down. geth ate something like 8% off the drive's life expectancy according to SMART readings.

How much RAM do you have on that mining rig?

I only put in like 8 GB, but I used a single 8 GB stick, so there's plenty of room to expand if I need to. I can't recall how much RAM the motherboard supports; I think up to 64 GB.
 
How much is 8%? Like, a 512 GB 760p is rated for 188 TBW; 8% of that would be something like 15 terabytes. I guess that's a cumulative number, looking at a node over many months?

It's an MX200 for what it's worth. I forget the timeframe, but it was several months. If I recall correctly, I was recording over 400 GB of host writes per day. It was kind of scary. That was using the default --cache value of 128.
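
Those two sets of numbers can be cross-checked; assuming the 188 TBW rating and the ~400 GB/day of host writes quoted above:

```python
# 8% of a 188 TBW endurance rating, and how long 400 GB/day of
# host writes would take to burn through that much.
tbw_rating = 188.0            # drive endurance in TB written
worn_tb = 0.08 * tbw_rating   # the ~8% of life geth chewed up
daily_tb = 0.4                # ~400 GB/day with the default --cache 128

days = worn_tb / daily_tb
print(f"{worn_tb:.1f} TB worn, ~{days:.0f} days at 400 GB/day")  # 15.0 TB worn, ~38 days at 400 GB/day
```

So the observed wear is roughly consistent with a month-plus of syncing at that write rate.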
 
So it basically needs a 900/905P Optane drive. 😵

Quite a dilemma there. HDDs are quite durable in terms of writes, but too slow. SSDs are fast enough but may die quickly.

How much can you lower that figure? Let's say with 8 GB of RAM. I'm going to experiment (again) with the system in my sig.
 
Doing the sync now. Cache at 4096, fast sync enabled. 12 minutes in, I'm at about 400K blocks. At that rate, syncing fully through 6.5 million blocks should only take about 180 minutes.

However, 400K blocks equate to about 1700 MB used by geth. So about a million blocks in, I'll reach the cache limit? Then it must swap to the drive, which becomes the slow part. By that logic, if you set the cache to 26 GB (and have the RAM for it), it should never swap.

Interestingly, some time in, the RAM use went down to 1530 MB at 450K blocks.
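
For what it's worth, a naive linear extrapolation from that rate (which ignores that later blocks carry far more transactions and state, so treat it as a floor):

```python
# Linear extrapolation of full-sync time from the observed rate.
blocks_done = 400_000
minutes_elapsed = 12
total_blocks = 6_500_000

est_minutes = total_blocks / blocks_done * minutes_elapsed
print(f"~{est_minutes:.0f} minutes ({est_minutes / 60:.1f} hours)")  # ~195 minutes (3.2 hours)
```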
 
Doing the sync now. Cache at 4096, fast sync enabled. 12 minutes in, I'm at about 400K blocks. At that rate, syncing fully through 6.5 million blocks should only take about 180 minutes.

However, 400K blocks equate to about 1700 MB used by geth. So about a million blocks in, I'll reach the cache limit? Then it must swap to the drive, which becomes the slow part. By that logic, if you set the cache to 26 GB (and have the RAM for it), it should never swap.

Interestingly, some time in, the RAM use went down to 1530 MB at 450K blocks.
Which drive are you using?
 
So about a million blocks in, I'll reach the cache limit? Then it must swap to the drive, which becomes the slow part. By that logic, if you set the cache to 26 GB (and have the RAM for it), it should never swap.

I don't think that's exactly how it works. From what I observed, geth likes to download some chain/block data, process it, write it to the drive, read it back, process it, rewrite it, and so on until it's done and finally writes the data to its destination. If you have a high --cache value, it does a lot of those rewrites in the cache instead; that's why it otherwise produces so many host writes. When I set --cache to something like 10240, the host write rate dropped through the floor; it was next to nothing.

You should watch a SMART utility or something to see how many host writes you pile up with --cache 4096; it should be interesting to see.
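
One way to do that on Linux is smartctl from smartmontools; the exact attribute name varies by vendor (Total_LBAs_Written, Host_Writes_32MiB, etc.), so this just greps for likely candidates:

```shell
# Dump SMART attributes and pull out the write counters.
# Run before and after a sync session and diff the numbers.
sudo smartctl -A /dev/sda | grep -i -E 'writ|lba'
```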
 