Quick project info
- what is it?
This software is a "mining" application for a family of cryptocurrencies based on so-called Proof-of-Capacity.
The best introduction to that concept is available in the informational materials of the BURST coin. You can learn some technical bits from resources on their home page, for example this whitepaper.
- family of cryptocurrencies?
Proof-of-Capacity coins require large storage space instead of large computational power. Pre-generated datasets are stored in so-called "plotfiles", and there is not much preventing re-use of the same plotfiles for different currencies, just as ASIC miners can be shared/reused to mine different Proof-of-Work coins. At this point in time, at least BURST and BHD share the same plot file format. Rumor has it that more are coming.
- why is this miner important?
While this miner features a nicer UI than other miners of this kind, probably its most significant value is the ability to mine two coins at once (or rather, dynamically switch between chains), a feature added by andzno1. This enables smooth mining of BURST and BHD with a single installation, whereas other solutions usually need two separate miners running at the same time, losing performance to I/O collisions, which increases the probability of losing income and probably increases drive wear.
- who uses it?
Obviously, users interested in mining both BURST and BHD coins. Mining only one of them is possible with this miner as well. Right now this software is dedicated to the Windows platform, and it currently focuses on using the CPU for computations, as it lacks GPU support.
- who made it?
This is somewhat tricky, because the project has changed its 'maintainers' a few times over its history. In reverse order (newest first):
^ my fork with new features and patches; it is based on:
^ the original author of the "dual-chain" blagominer; this one is based on:
^ a famous maintainer who added POC2 support; this one is based on:
^ added important optimisations; based on:
^ I think that's the earliest version from the original author of the project; the current project name, Blagominer, takes its name after this author
As you can see, it's a mess, only worsened by the fact that some of the forks were not even made with the "fork" button available on GitHub, so it's not that easy to trace the linkage to older versions.
This is one of the reasons why I forked it and why I consider this to be development of "my" project, instead of just sending a PR. It seems to be more or less a tradition here, but I suppose it's a result of the fact that the previous authors could not actively maintain updates and release new versions, so they just published their code as-is for the benefit of the community.
Actually, I've sent a PR with these changes to the immediately previous fork: https://github.com/andzno1/blagominer/pull/5 (or actually, that's the first PR: https://github.com/andzno1/blagominer/pull/4 - but I closed it and re-submitted it properly as soon as I noticed I had accidentally sent it from the master branch). The relevant local branch in my repo is here: headers-https-leaks.
This submission is related to my work on this project between Jun 2, 2019 and Jun 3, 2019, with later socket leak bugfixes from Jun 7, 2019, an important stability update.
1. Adding custom pool-specific headers to DL submission
Feature branch for this change is feature/custom-headers.
My work on this project began with the idea of making Blagominer easier to use by small casual miners. While setting up BURST mining is pretty straightforward, setting up BHD mining often requires the use of a so-called proxy that injects some custom headers, like X-Account, which identifies the miner's account on the pool website. An example of such a BHD mining pool is The Foxy Pool. Blagominer could not be connected directly to this pool due to the X-Account HTTP header required by the pool, as otherwise the pool could not track and transfer earnings properly. A typical installation targeting Foxy Pool therefore uses the so-called Foxy Proxy as a mediating node.
Since the core of the problem was pretty simple, I added support for configuring custom headers via Blagominer's existing miner.conf file. The only thing required was setting up an X-Account header value, but I added support for any number of such headers, just in case another pool requires more of them.
Most of the work was actually contained in a single commit, because the changes for this part were pretty simple. First of all, the in-memory config store had to be extended with new keys (common.h line 154), which is a pretty trivial change. Then, I added reading of the new configuration section in blagominer.cpp lines 279..296. This code basically scans the given config section as a JSON object and treats all of its properties as key-value pairs (key = property name, value = property value), or actually, header name and header value. Since this header set is constant, the same loop that reads the headers configuration immediately builds the header text part (blagominer.cpp lines 419..422) that is cached and will later be used when HTTP requests are formed (network.cpp lines 407..408) and sent a few lines later.
Since it was a pretty easy thing to do, I also included support for custom HTTP headers for BURST connections, and I added similar support for custom HTTP query parameters for the BURST/BHD sections.
2. Support for HTTPS connections
Feature branch for this change is feature/https.
While I was testing the custom headers, another problem immediately showed up. The Foxy Pool was hidden behind Cloudflare, and its hosting was set up to enforce HTTPS connections ("Always use HTTPS", https://support.cloudflare.com/hc/en-us/articles/204144518-SSL-FAQ). Unfortunately, Blagominer was using simple direct socket operations for forming HTTP requests and reading responses. Sending an HTTP request to Foxy Pool ended up with Cloudflare returning a 301 code in an attempt to force a redirect to an HTTPS connection. This in turn was not understood by Blagominer's response parser and ended up treated as "request failed".
Adding support for such redirects made no sense, as Blagominer was incapable of opening HTTPS connections, so such a redirect couldn't be followed anyway.
To solve this, I added HTTPS support to the 3 core networking operations performed by the miner, namely poll(), send_i() and confirm_i(). Later it turned out that those are actually two logical operations, better known as getMiningInfo (poll) and submitNonce (send_i, confirm_i).
Changes for this part were far more extensive and spanned about 15 commits. Since I didn't know the codebase well, and I didn't want to accidentally damage working code, I started off by refactoring out the socket operations, so that later it would be possible to switch between the old HTTP and the new HTTPS code. I started the refactoring with the poll()/GMI (getMiningInfo) function - https://github.com/quetzalcoatl/blagominer/commit/f4c947c65c5f039b9a7e4ac9e35cdb6beff2fc8b - and it mostly boiled down to moving the code and adjusting function parameters and result/error values. Then, I added the curl library and reimplemented GMI with curl, network.cpp lines 874..912. Funny thing: I accidentally overlooked a debugging note at line 903; the fault mentioned there was related to wrong %s/%S/etc. options for sprintf_s, which I had already fixed before committing, but I forgot to remove that comment. It is removed a few commits later, where I corrected a %i/%S typo and remembered about the comment. For sending the POST to getMiningInfo I used the typical sequence of curl_easy_setopt calls followed by curl_easy_perform, as it makes the code brief and easy to read. A downside of this approach is a total lack of asynchronicity, but in this place that was OK, as the original code was blocking on this request anyway. There's almost no response-reading code here; only a json-document object is created, and all parsing was left almost entirely in its original, unrefactored location.
Adding HTTPS support for submitNonce was much harder - Blagominer handles it in a somewhat asynchronous way, sending the request on one thread, keeping handles in a queue, and receiving responses on another thread. I worked on that part in a similar way: first, refactoring out the socket-related code in send_i and confirm_i. Then, I duplicated the connection queue to have a second store dedicated to curl handles. Then finally I reimplemented [send_i](https://github.com/quetzalcoatl/blagominer/commit/074e2ad0751a24b78efff9f8c9af3b945b35d7b0) and [confirm_i](https://github.com/quetzalcoatl/blagominer/commit/882c8fb7834ed98b91581cc50a0443525384ed0c) with curl. While the concept of the change was similar to GMI (refactor, extend, reimplement), the actual changes were totally different. For one example, I couldn't fully use the handy curl_easy_perform and let it do everything, because the operation must not block here. I had to add the curl_easy_setopt(curl, CURLOPT_CONNECT_ONLY, 1L) option (line 430) to prevent curl from handling the send/recv operations for me, because it would block on them. To mimic the behavior of the old code, it should only block for the duration of setting up the connection - and this option does exactly that. Since curl_easy_perform now only performs the handshake, a bit later in the code, after some checks for a successful connection, curl_easy_send is used in a loop (lines 464..483) to send the request, and then a still-open handle to the curl object is stored in the connection queue (line 503).
Since another connection queue was added, confirm_i had to be adapted as well. The curl version of confirm_i runs on another thread; it first takes the next connection from the queue (line 784) and then tries reading the response via curl_easy_recv in a typical blocking loop (lines 823..840). That loop features some non-obvious branches that handle the curl-specific CURLE_AGAIN code, which actually only signals empty buffers, not a real error condition.
When I was refactoring this part earlier, I left some request-parsing operations here, near the response-reading code, because cleaning up this part would have required far more extensive changes, and "making it pretty" was not the goal at that point. This means that I had to adapt this part a bit as well, but the changes to the response parser in lines 887..900 and below were mostly related to error handling and cleanup (e.g. line 947).
A small trick at lines 989..999 is worth mentioning: since the connection-queue structures are separate and unrelated (session for HTTP, session2 for HTTPS), I had to find a common part so that the later response-parsing code could be left unchanged. The sessionX variable is a tiny copy of the original sessions, but it holds only the deadline parts that are analyzed below.
After those changes, I began testing Blagominer on Foxy Pool without proxy software, and early tests proved that it ran fine and cooperated with the pool properly.
3. Cleaning up socket leaks
Feature branch: get-rid-of-socket-leaks
Longer tests exposed some connection-management issues, namely a pool of dead sockets piling up (detected with ProcessHacker2, Network tab). After a few 8-hour test runs I noticed the problem, and on Jun 7, 2019 I decided to fix it.
Making this story short, it turned out that almost all of the original poll(), send_i() and confirm_i() had handle leaks. On the main path of execution flow these three cleaned up everything properly, but in many error-handling branches the sockets were not closed and released, resulting in a slow buildup of dead handles, which eventually could cause serious problems.
Since I effectively duplicated this code and adapted it to HTTPS via curl, the initial version of the HTTPS code inherited the same handle leaks. A series of 5 commits took care of them. I performed another set of 8-hour test runs later, and the number of open connections stopped growing.
Most of those patches just add the proper cleanup to a specific error-code branch that lacked it, but in some places the number of missing cleanups was so large that I leveraged smart pointers to guarantee RAII-style cleanup. An example of that can be seen here, where unique_ptr is exploited for this purpose by using a lambda with closesocket as its Deleter. Since this happens in send_i, and on the success branch the socket should eventually be moved to the connection queue, the smart pointer's release had to be used as well (line 373) to ensure that the smart pointer will not close the socket in that case.
While bugfixes usually don't qualify, these leaks actually originate from "not my code" :) and I consider them an important "icing on the cake": they had been present for a long time, and these patches significantly reduce socket/port consumption and improve the stability of the miner.
My main and only account is: https://github.com/quetzalcoatl/