Open expanse pool
https://github.com/expanse-org/open-expanse-pool

Open Source Expanse Mining Pool

Features

This pool is being further developed to provide an easy-to-use pool for Expanse miners. The software is functional, but an optimised release is expected soon. Testing and bug submissions are welcome!
    Support for HTTP and Stratum mining
    Detailed block stats with luck percentage and full reward
    Failover geth instances: geth high availability built in
    Modern beautiful Ember.js frontend
    Separate stats for workers: can highlight timed-out workers so miners can perform maintenance on their rigs
    JSON-API for stats

Proxies

Building on Linux

Dependencies:
    go >= 1.9
    geth or parity
    redis-server >= 2.8.0
    nodejs >= 4 LTS
    nginx
I highly recommend using Ubuntu 16.04 LTS.
First install go-ethereum.
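On Ubuntu, one common route is the Ethereum PPA (an assumption; use whichever geth-compatible node your chain requires):

sudo apt-get install -y software-properties-common
sudo add-apt-repository -y ppa:ethereum/ethereum
sudo apt-get update
sudo apt-get install -y ethereum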
Clone & compile:
git config --global http.https://gopkg.in.followRedirects true
git clone https://github.com/expanse-org/open-expanse-pool.git
cd open-expanse-pool
make
Install redis-server.
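On Ubuntu 16.04 this is typically just the distribution package (an apt-based example):

sudo apt-get install -y redis-server
redis-server --version    # should report >= 2.8.0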

Running Pool

./build/bin/open-expanse-pool config.json
You can use Ubuntu upstart; check the sample config in upstart.conf.
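For illustration only, a minimal upstart stanza could look like the sketch below; the paths are placeholders, so prefer the upstart.conf shipped in the repository:

description "open-expanse-pool"
start on runlevel [2345]
stop on runlevel [016]
respawn
chdir /home/pool/open-expanse-pool
exec /home/pool/open-expanse-pool/build/bin/open-expanse-pool config.json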

Building Frontend

Install nodejs. I suggest using an LTS version >= 4.x from https://github.com/nodesource/distributions, or from your Linux distribution's packages (for example, on Ubuntu Xenial 16.04).
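For example, via the NodeSource repository (the setup_4.x script name is an assumption matching the 4.x LTS suggestion; check their distributions page for the current one):

curl -sL https://deb.nodesource.com/setup_4.x | sudo -E bash -
sudo apt-get install -y nodejs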
The frontend is a single-page Ember.js application that polls the pool API to render miner stats.
cd www
Change ApiUrl: '//example.net/' in www/config/environment.js to match your domain name. Also don't forget to adjust other options.
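For example (the domain below is a placeholder):

// www/config/environment.js (excerpt)
ApiUrl: '//pool.example.net/',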
npm install -g ember-cli
npm install -g bower
npm install
bower install
./build.sh
Configure nginx to serve the API under the /api location and to serve www/dist as a static website.
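A minimal sketch of such a server block (placeholder domain and paths; combine it with the /api proxy from the next section):

server {
    listen 80;
    server_name pool.example.net;                    # placeholder domain
    root /home/pool/open-expanse-pool/www/dist;      # built frontend
    index index.html;

    location / {
        try_files $uri $uri/ /index.html;            # single-page app fallback
    }
}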

Serving API using nginx

Create an upstream for API:
upstream api {
    server 127.0.0.1:8080;
}
and add this setting after location /:
location /api {
    proxy_pass http://api;
}

Customization

You can customize the layout using the built-in web server with live reload:
ember server --port 8082 --environment development
Don't use the built-in web server in production.
Check out the www/app/templates directory and edit those templates to customise the frontend.

Configuration

Configuration is fairly simple: just read it twice and think twice before changing the defaults.
Don't copy the config directly from this manual. Use the example config from the package; otherwise you will get errors on start because of the JSON comments.
{
    // Set to the number of CPU cores of your server
    "threads": 2,
    // Prefix for keys in redis store
    "coin": "eth",
    // Give a unique name to each instance
    "name": "main",

    "proxy": {
        "enabled": true,

        // Bind HTTP mining endpoint to this IP:PORT
        "listen": "0.0.0.0:8888",

        // Allow only this header and body size of HTTP requests from miners
        "limitHeadersSize": 1024,
        "limitBodySize": 256,

        /* Set to true if you are behind CloudFlare (not recommended) or behind an
           http-reverse proxy, to enable IP detection from the X-Forwarded-For header.
           Advanced users only. It's tricky to get right and secure.
        */
        "behindReverseProxy": false,

        // Stratum mining endpoint
        "stratum": {
            "enabled": true,
            // Bind stratum mining socket to this IP:PORT
            "listen": "0.0.0.0:8008",
            "timeout": "120s",
            "maxConn": 8192
        },

        // Try to get a new job from geth in this interval
        "blockRefreshInterval": "120ms",
        "stateUpdateInterval": "3s",
        // Require this share difficulty from miners
        "difficulty": 2000000000,

        /* Reply with an error to miners instead of a job if redis is unavailable.
           Saves miners electricity if the pool is sick and they didn't set up failovers.
        */
        "healthCheck": true,
        // Mark the pool sick after this number of redis failures
        "maxFails": 100,
        // TTL for worker stats, usually should be equal to the large hashrate window from the API section
        "hashrateExpiration": "3h",

        "policy": {
            "workers": 8,
            "resetInterval": "60m",
            "refreshInterval": "1m",

            "banning": {
                "enabled": false,
                /* Name of the ipset used for banning.
                   Check the http://ipset.netfilter.org/ documentation.
                */
                "ipset": "blacklist",
                // Remove the ban after this amount of time
                "timeout": 1800,
                // Percentage of invalid shares (of all shares) required to ban a miner
                "invalidPercent": 30,
                // Check only after a miner has submitted this number of shares
                "checkThreshold": 30,
                // Ban a miner after this number of malformed requests
                "malformedLimit": 5
            },
            // Connection rate limit
            "limits": {
                "enabled": false,
                // Number of initial connections
                "limit": 30,
                "grace": "5m",
                // Increase the allowed number of connections on each valid share
                "limitJump": 10
            }
        }
    },

    // Provides JSON data for the frontend, which is a static website
    "api": {
        "enabled": true,
        "listen": "0.0.0.0:8080",
        // Collect miner stats (hashrate, ...) in this interval
        "statsCollectInterval": "5s",
        // Purge stale stats in this interval
        "purgeInterval": "10m",
        // Fast hashrate estimation window for each miner from its shares
        "hashrateWindow": "30m",
        // Long and precise hashrate from shares, 3h is cool, keep it
        "hashrateLargeWindow": "3h",
        // Collect stats for the shares/diff ratio over this number of blocks
        "luckWindow": [64, 128, 256],
        // Max number of payments to display in the frontend
        "payments": 50,
        // Max number of blocks to display in the frontend
        "blocks": 50,

        /* If you are running an API node on a different server, where this module
           reads data from a writeable redis slave, you must run an api instance with
           this option enabled in order to purge hashrate stats from the main redis node.
           Only a writeable redis slave will work properly if you are distributing load
           using redis slaves. Very advanced. Usually all modules should share the same
           redis instance.
        */
        "purgeOnly": false
    },

    // Check the health of each geth node in this interval
    "upstreamCheckInterval": "5s",

    /* List of geth nodes to poll for new jobs. The pool will try to get work from the
       first alive one and keep checking failed nodes in the background so they can
       serve as backups. The current block template of the pool is always cached in RAM.
    */
    "upstream": [
        {
            "name": "main",
            "url": "http://127.0.0.1:8545",
            "timeout": "10s"
        },
        {
            "name": "backup",
            "url": "http://127.0.0.2:8545",
            "timeout": "10s"
        }
    ],

    // Standard redis connection options
    "redis": {
        // Where your redis instance is listening for commands
        "endpoint": "127.0.0.1:6379",
        "poolSize": 10,
        "database": 0,
        "password": ""
    },

    // This module periodically remits ether to miners
    "unlocker": {
        "enabled": false,
        // Pool fee percentage
        "poolFee": 1.0,
        // Pool fee beneficiary address (leave it blank to disable fee withdrawals)
        "poolFeeAddress": "",
        // Donate 10% of pool fees to developers
        "donate": true,
        // Unlock only if this number of blocks was mined back
        "depth": 120,
        // Simply don't touch this option
        "immatureDepth": 20,
        // Keep mined transaction fees as pool fees
        "keepTxFees": false,
        // Run the unlocker in this interval
        "interval": "10m",
        // Geth node RPC endpoint for unlocking blocks
        "daemon": "http://127.0.0.1:8545",
        // Raise an error if geth can't be reached in this amount of time
        "timeout": "10s"
    },

    // Pay out miners using this module
    "payouts": {
        "enabled": false,
        // Require this minimum number of peers on the node
        "requirePeers": 25,
        // Run payouts in this interval
        "interval": "12h",
        // Geth node RPC endpoint for payout processing
        "daemon": "http://127.0.0.1:8545",
        // Raise an error if geth can't be reached in this amount of time
        "timeout": "10s",
        // Address with the pool balance
        "address": "0x0",
        // Let geth determine gas and gasPrice
        "autoGas": true,
        // Gas amount and price for payout tx (advanced users only)
        "gas": "21000",
        "gasPrice": "50000000000",
        // Send a payment only if the miner's balance is >= 0.5 Ether
        "threshold": 500000000,
        // Perform BGSAVE on redis after a successful payout session
        "bgsave": false
    }
}
If you are distributing your pool deployment across several servers or processes, create separate configs and disable unneeded modules on each server (advanced users; see the sketch after the list below).
I recommend this deployment strategy:
    Mining instance - 1x (it depends, you can run one node for EU, one for US, one for Asia)
    Unlocker and payouts instance - 1x each (strict!)
    API instance - 1x
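As a sketch only (not a complete config): a dedicated API server could keep just the api module enabled and switch the others off, while the mining, unlocker, and payouts servers do the reverse. Keep the remaining sections (redis, upstream, etc.) as in the full example above, and remember that the real config.json must not contain comments:

{
    "name": "api-node",
    "proxy":    { "enabled": false },
    "api":      { "enabled": true, "listen": "0.0.0.0:8080" },
    "unlocker": { "enabled": false },
    "payouts":  { "enabled": false }
}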

Notes

    Unlocking and payouts are sequential: the 1st tx goes out, the 2nd waits for the 1st to confirm, and so on. You can disable that in code. Carefully read docs/PAYOUTS.md.
    Also, keep in mind that unlocking and payouts will halt in case of backend or node RPC errors. In that case, check everything and restart.
    You must restart the module if you see errors with the word suspended.
    Don't run the payouts and unlocker modules as part of the mining node. Create separate configs for both, launch them independently, and make sure you have a single instance of each module running.
    If poolFeeAddress is not specified, all pool profit will remain on the coinbase address. If it is specified, make sure to periodically send back some dust required for payments.

Alternative Expanse Implementations

This pool is tested to work with Ethcore's Parity. Mining and block unlocking work, but I am not sure about payouts and suggest running an official geth node for payments.

Credits

Made by expanse-org. Licensed under GPLv3.

Contributors

Donations

ETH/ETC: 0xb85150eb365e7df0941f0cf08235f987ba91506a
Highly appreciated.