Re: Serious Faults in BECI

At the start of August 2017, a series of quotes from Digiconomist surfaced on the following page: Serious faults in Digiconomist’s Bitcoin Energy Consumption Index (BECI). These quotes originated from communication between Digiconomist and Marc Bevand over the several weeks before they were published. Sadly, the quotes were taken out of context with the purpose of misrepresenting Digiconomist’s position. In several cases Digiconomist’s real position is the direct opposite of the assertions made by Bevand. The motivation behind this behavior is unknown; the private communication seemed to indicate alignment on much of the content discussed. A request to adjust the misleading information was made 24 hours prior to publishing this post, but Bevand did not provide a response. To counter and discourage this type of behavior, the full transcript of the communication between Digiconomist and Bevand is disclosed below. The parts that supply the missing context and reveal Digiconomist’s true position have been highlighted.

June 21, 2017

So I really want to get to the bottom of why you stick to “65%” despite my CSVs proving otherwise.

On Twitter you said I “derived the wrong numbers”.

Firstly, do you agree that we are both trying to calculate the “proportion of electricity costs over mining revenues”, right?

Secondly, my CSVs show you the real-world electricity costs assuming $0.05/kWh (to be consistent with your model). Then I calculate real-world mining revenues[1]. I divide the first number by the second. Explain why you think this is wrong.

[1] Actually my CSVs ignore mining fees, so the real-world percentage is even less than 6-32%. Maybe 5-25%. Even farther from your 65%…

-Marc

June 21, 2017

Hi Marc,

Okay, so first of all I should note it’s been lowered to 60% since my examples show 60-70 and I figured I should be on the most conservative side rather than in-between (even though the Antminer S7 that serves as the second benchmark hasn’t reached the end of its lifetime yet). It’s been that way for some time now.

First of all, the basics of the model say marginal costs should equal marginal revenue in the longer run (it doesn’t have to right now). In fact, it’s all about production optimization, and adding more quantity simply takes time. That doesn’t change that at the intersection you will find the optimal output, hence you expect that output will eventually be reached.

I don’t disagree on the CSV; it makes perfect sense that a miner considering buying an S9 will consider a few things. First of all the price of $2100 and an estimate of lifetime energy costs. Well, I put the expected lifetime on 700 days based on earlier machines. So the miner will say okay I have $2100 in costs + $1159 in electricity (and some other negligible costs). He’ll then determine what return he expects to get. If that’s greater than $2100+$1159 it makes sense to invest because it’s going to be a profit. IMO the CSV shows miners indeed made a profit, so that’s great.

The thing is, other miners can still make a profit too, so other miners would continue to invest all while revenues remain greater than expected costs. Obviously expected revenues fall as the # of miners increases, but this takes time as more machines aren’t added overnight.

So basically, I expect revenues to continue falling over time (for individual miners) all until there’s no more expected profit and marginal costs indeed equal marginal revenue. I know the revenues, I can calculate them based on rewards, so it’s just a matter of waiting until the network fills up.

When marginal costs do equal marginal revenues I still need to know the ratio between electricity and other costs. In the example that’s $2100 versus $1159, meaning 35% goes to electricity. Is this the target then? Well, no. The $2100 isn’t a real floor, since Bitmain can produce them for $500 and just lower the price if demand goes down (while they still make a big profit). This is why I calculate $500 versus $1159 in electricity costs (even if Bitmain produces for themselves they still have to make a profit). No more hash will be added below this point (not entirely true). That’s where you end up with a 60-70 target.
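For reference, the arithmetic behind the two ratios in this paragraph can be reproduced with a short sketch. The dollar figures are the ones quoted in the email; nothing else is assumed:

```python
# Figures quoted in the email for an Antminer S9.
retail_price = 2100    # USD, S9 retail price
production_cost = 500  # USD, assumed Bitmain production cost (the "real floor")
electricity = 1159     # USD, lifetime electricity at $0.05/kWh over ~700 days

# Share of total cost going to electricity, at retail price vs. at the cost floor.
share_at_retail = electricity / (retail_price + electricity)
share_at_floor = electricity / (production_cost + electricity)

print(f"electricity share at retail price: {share_at_retail:.1%}")  # 35.6%
print(f"electricity share at cost floor:   {share_at_floor:.1%}")   # 69.9%
```

The two outputs bracket the 35% and 60-70% figures in the email.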

IMO this is pretty solid, although just based on one case, and I do expect some trend in the ratio. The bigger question to me is, when will the target be reached? In theory pretty soon since there’s a profit opportunity, but capacity limits may put a cap on that. That’s where the lag comes in.

This is why today’s number is just $700M versus $2.1B (30%) in costs rather than the target of 60%. The revenues have been going up quick and the quantity still needs to catch up. Miners investing right now will probably make good profits.

If I’d put 6% as the target the lag would put it at even less. Implied J/GH would be well below 0.10 J/GH, so that doesn’t seem realistic at all.

Hope this helps, but I’m happy to zoom in more if required.

Kind regards,

Alex

June 22, 2017

Hi Alex,

I will write a longer response later, but in the meantime can I ask: on what exact day was the 65->60 change made? Why is there no drop in your TWh/yr estimate? 65->60 is significant. For example it should have dropped from 14 TWh to 12.92 TWh. I see no such sudden drop on your chart.

-Marc
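The expected size of the drop Bevand refers to follows from scaling the estimate by the new target; a quick check:

```python
# The 65% -> 60% target change should scale the energy estimate proportionally.
old_target, new_target = 0.65, 0.60
estimate_at_65 = 14.0  # TWh/yr, example figure from the email

expected_after_change = estimate_at_65 * new_target / old_target
print(f"{expected_after_change:.2f} TWh/yr")  # 12.92 TWh/yr
print(f"drop: {1 - new_target / old_target:.1%}")  # 7.7% single-day drop
```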

June 22, 2017

Hi Marc,

It was probably around the end of March/start of April when I added the examples of expected lifetime costs for an S9. By the way, one question I’ve been wanting to ask you but forgot to do so when we called: Suppose your estimate is correct, or even that Bitcoin is running at the absolute most efficient energy consumption of 0.10 J/GH. Today that would still mean BTC runs at 46 kWh per on-chain transaction. That powers 1 US household for 1.5 days at the very, very least. It’s not suddenly appearing a lot more sustainable if you take a lower number. Most people won’t even let me demo a BTC transaction after hearing about this. So how do you feel about this?

Kind regards,

Alex
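The household comparison in the question above can be checked, assuming an average US household draws roughly 30 kWh per day (that consumption figure is not stated in the email and is an assumption here):

```python
# Figures from the email, plus one assumption.
kwh_per_tx = 46             # kWh per on-chain transaction, quoted in the email
household_kwh_per_day = 30  # ASSUMED average US household consumption

days = kwh_per_tx / household_kwh_per_day
print(f"{days:.1f} days")  # 1.5 days
```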

July 11, 2017

As indicated, the motivations driving Bevand are unknown. There seems to be little need to provide misleading information when there’s agreement on much of the content.

Hi,

I found the time to continue this discussion. So, I agree with almost everything you write in your June 21st email. In the long term, it makes sense that we would tend toward approximately this target of 60-70%. In fact you can do the math with most ASICs released at any time, even the first ASIC (Avalon 1), and you will find that the target is around 60-70% with them too. You and I simply disagree as to when this target will be reached. You say in theory it should be “soon”. But the thing is, we have had ASICs for 4.5 years and we *still* aren’t even close to 60%! How do you justify sticking with your model, since it has been wrong for 4.5 years and counting?

> If I’d put 6% as the target the lag would put it at even less. Implied J/GH would be well below 0.10 J/GH, so that doesn’t seem realistic at all.

I never suggested putting 6%. All I pointed out is that 1 of the many ASICs has shown a ratio of 6%. As I wrote in my critique of BECI, I think the average percentage is most likely between 26-30%, and that would put the implied J/GH at roughly 0.15, which is realistic.

> It was probably around the end of March/start of April when I added the examples of expected lifetime costs for an S9.

I have been tracking your numbers since March 13th, and I’ve never seen a single-day drop of 7.7%, which is what your change from 65% to 60% would produce. The biggest drop is on March 25th (5.9%, 10.2→9.6 TWh). What about your change from averaging the price over 60 days to 120 days? When did you make this change? In theory it should also have caused a sudden drop in your TWh estimate.

> […] It’s not suddenly appearing a lot more sustainable if you take a lower number. Most people won’t even let me demo a BTC transaction after hearing about this. So how do you feel about this?

Do these people not drive cars? Do they not buy products manufactured in countries thousands of km away and shipped on polluting cargo ships? We, as the human race, waste energy left and right. Bitcoin is no exception.

I don’t feel particularly bad about it for 2 reasons. The first reason is the same reason the people you talk to don’t feel bad about driving a car that wastes 98% of its fuel’s energy [1]. You should ask them: why don’t they use a bicycle for short trips? Or public transport? Or why don’t they buy an electric car and put solar panels at home to recharge it? The reason is that we like convenience. We spend energy on things that make our lives better, and that’s (usually) OK. The second reason I don’t feel bad about it is that, as I have explained many times, miners tend to use renewable energy, mostly hydroelectric, because this type of energy is cheap and it’s vital to have cheap energy to survive in the mining business. It’s OK to “waste” renewables. They are renewables! There is a practically infinite supply. And they don’t pollute.

[1] Only about 25% of the energy from the fuel is used to move the car down the road; the rest is lost mostly as heat. Furthermore, if a vehicle that weighs 1000 kg is used to transport an 80 kg person, then only 2% of the fuel’s energy ends up being used to move this person down the road.

-Marc

July 18, 2017

Hi Marc,

Okay, so I get we’re actually almost at full agreement except for one small little detail…

> You and I simply disagree as to when this target will be reached. You say in theory it should be “soon”.

I honestly don’t even think we disagree that much here. Economic theory indeed predicts gaps would fill asap, but I recognize production lines can only handle so much (just take the massive GPU shortage atm). There’s no good way to estimate the delay based on historic data either. The past years didn’t just see the introduction of ASICs, but also the block reward halving last year. I think it also depends on revenue volatility. Large increases are harder to “catch up” with than slow increases. I’m experimenting with making this more dynamic. Right now there’s a 200(+) day lag as a result. That’s not so “soon” anymore.

> The biggest drop is on March 25th (5.9%, 10.2→9.6 TWh).

I honestly didn’t take note of the exact date, but this could be it. Intraday moves could be partially offsetting.

> What about your change from averaging the price from 60 to 120 days.

See my earlier remark about making it more dynamic. It may vary per day. Other params should probably be more dynamic too. E.g. you’d expect the average price per kWh to be trending towards the lowest point. I also expect a trend in the target, although I have too little data to say something about the direction.

> Or why don’t they buy an electric car and have solar panels at home to recharge it? The reason is because we like convenience.

I actually own an electric car, but indeed it lacks some general convenience lol. In Bitcoin I feel more like I’m paying a big price for more inconvenience (BTC sucks for point-of-sale, there’s the hassle of managing your private keys, we’ll even have to open payment channels for a simple transaction soon, etc.). The good news is we can remove most of the energy costs at no additional inconvenience penalty. PoW runs in the background, so if there’s a good way to replace it we should, right? 😉

Kind regards,

Alex

July 19, 2017

> Okay, so I get we’re actually almost at full agreement

Good. To be perfectly clear, does it mean you finally recognize the validity of my data, namely that “the energy/mining ratio seems to be around 30% (NOT 60-70%) as of today, and should theoretically tend to 60-70% in the future”? And can I quote you on that?

> I honestly don’t even think we disagree that much here.

I think it is not easy to estimate how soon we will reach this 60-70% energy/mining ratio. It could be years. But I won’t try to make a guess. Too many variables. (Hence not worth another debate.)

> Right now there’s a 200(+) day lag as a result.

What about this change from 120 to 200 days, on what day did you change your model?

So far it’s been 3 changes you claim to have made (65% → 60%, 60 days → 120 days, 120 days → 200 days). But the fact I only saw 1 daily energy estimate drop that kind of corresponds to a change (where are the other 2?), and the fact you never seem to remember (don’t even track?) when you make the changes, make your claims look suspicious (if I may be honest with you). Are you rolling out a change progressively, smoothing its effect over multiple days to “hide” the sudden energy estimate drop it would cause?

> In Bitcoin I feel more like I’m paying a big price for more inconvenience.

Perhaps it’s not convenient to you, but are you considering other Bitcoin users who have drastically different life circumstances, economically, politically, financially, and safety-wise?

Think about Venezuelans whose savings are evaporating due to hyperinflation, think about the people of Cyprus who had their bank accounts confiscated in 2014 to bail out their banks, think about online merchants who serve markets prone to high credit card fraud and have no choice but to only accept Bitcoin to reduce fraud, think about people who get taxed a 10% fee (!) to remit money to their family overseas, etc.

-Marc

July 19, 2017

> To be perfectly clear, does it mean you finally recognize the validity of my data, namely that “the energy/mining ratio seems to be around 30% (NOT 60-70%) as of today, and should theoretically tend to 60-70% in the future”? And can I quote you on that?

Uhh, you know you can calculate my ratio from my key statistics? Just take the implied J/GH divided by the break-even J/GH. It will give you 40% today. It even was below 30% some time ago. I thought this was clear in my explanation, I even bolded the “capped” part, but I guess not. So yeah, it’s trending towards that point.

I personally took it the other way. When you produced your numbers with your model I was like: cool, you’re not even that far off from my estimate (that particular day, even though the estimate is not completely independent since you’re also using economics). Mainly looking at the upper bound and “real” lower bound that is. I had to do the latter manually because you were putting an overly optimistic best guess, completely ignoring your own methodology. I mentioned this in the comments, but you didn’t reply to this.

> Are you rolling out a change progressively, smoothing its effect over multiple days to “hide” the sudden energy estimate drop it would cause?

Erm, it’s not like the 65 > 60 change is big enough to want to cover it up somehow. It may have been slightly smoothed, but that’s due to my general methodology. I’ll explain: my data is calculated on an hourly basis. The daily numbers always use an average over the past 24 hours (over these hourly calculations). The main reason for this is technical: my feed sometimes fails. The daily number is produced every day at 7 AM (my time). I did this so I could check whether it was functioning after waking up (in the early days it could break down). I’m not making changes this early, I’ve got to go to work! If I’m changing anything it’s probably in the evening, but hey, that’s halfway through the BECI’s day. This way a change is likely to be spread out over two days.

Then there’s the change in the days lagging, but if that would show clearly I would be doing a bad job. Just think about it. If you link to volatility, the natural thing that’s supposed to happen is that it will counterbalance major shifts. That’s the whole point in the first place: “catching up” takes more time if there’s more to catch up on. I’ve been updating the BECI page frequently with the best approximation, but okay, I should probably use a live indicator for that. Honestly there’s no good spot on the page currently to put that at, and the page is already too slow so I’m not considering this atm. Maybe I’ll just state it’s dynamic and leave it at that, but I’m trying to be as transparent as possible so I dislike that idea too lol.

> Perhaps it’s not convenient to you, but are you considering other Bitcoin users who have drastically different life circumstances, economically, politically, financially, and safety-wise?

I don’t question the advantages, just the need to do that with PoW. We may be able to be 99% more efficient with the exact same advantages, we only need to change PoW to something like PoS. “Waste” is required to make PoW work. But if we can achieve the same with PoS, then we’re really wasting energy.

Kind regards,

Alex

July 19, 2017

> Uhh, you know you can calculate my ratio from my key statistics? Just take the implied J/GH / BE J/GH. It will give you 40% today.

Yes I know. 36.5% today. However I mistyped and meant to write “as of 26 Feb 2017”: Do you recognize that the energy/mining ratio seems to be around 30% (NOT 60-70%) *as of 26 Feb 2017*?

I didn’t keep all your numbers for that day. Though I have them for 14 Apr, and at that time you didn’t publish a break-even J/GH, but only:

*Annualized global mining revenues $914,015,869*

*Annualized estimated global mining costs $549,585,434*

…which implied a ratio of 60%. So do you stand by this 60% as of 26 Feb 2017, or do you agree the real ratio was around 30% (as my CSVs show)?
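The implied ratio follows directly from the two published figures quoted above:

```python
# Annualized figures published on BECI for 14 Apr, as quoted in the email.
annual_costs = 549_585_434     # USD, estimated global mining costs
annual_revenues = 914_015_869  # USD, global mining revenues

ratio = annual_costs / annual_revenues
print(f"{ratio:.0%}")  # 60%
```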

> you were putting an overly optimistic best guess, completely ignoring your own methodology. I mentioned this in the comments, but you didn’t reply to this.

What comment did I ignore? AFAIK I replied to everything.

> But if we can achieve the same with PoS, then we’re really wasting energy.

Yes. But PoS hasn’t proved to be feasible (yet).

Also can you answer: on what day did you change your model from averaging the price from 120 to 200 days?

-Marc

July 24, 2017

Hi Marc,

I was intending to reply a bit sooner this time, but [private] so it got delayed again.

> …which implied a ratio of 60%. So do you stand by this 60% as of 26 Feb 2017

I don’t believe in the network as a static entity. Miner income varies, and so does the portion spent on electricity. My model shows that by reflecting dynamics in that ratio. One date is a poor measure for another one. On Feb 26 specifically I was closer to 50%. And given the 5% target adjustment I would even be somewhat lower if recalculated today. On that day you produced an upper bound of 38% (337M/900M), and a lower bound of maybe 30-34%-ish (at the very least for a best guess)? (I’ll get to this.) So overall I found it quite supportive (there is some overlap in indicated ranges). Then again, it is just one data point (I tried extending your numbers to today and found we’d probably be at something like 24% vs 34% now).

I actually commented this: “One more thing. I was wondering why the method wasn’t repeated in reverse? If you can do it for the least efficient machines why not with the most efficient ones? This would make for a more interesting “lower bound”. At least more comparable to the upper one.”

I get the absolute lower bound is indeed the best machine * hash, but it just doesn’t compare to the upper one. If you stand behind your method why not apply it to the best available machines and redo the numbers that way? Did you try this out? You would probably notice a problem too while doing this (other than the best guess being on the low side). It’s impossible for the two numbers to be very far off, since you’d be using the same numbers for the beginning part (which ultimately carries the most weight). The lack of variation here would imply that lower bound = upper bound (= perfect accuracy) for this part, even though this is the most error-prone part of the whole method (I already showed most of the energy weight is at the start based on your numbers).

> Yes. But PoS hasn’t proved to be feasible (yet).

Can agree on that. Let’s hope it will soon 🙂

> Also can you answer: on what day did you change your model from averaging the price from 120 to 200 days?

BECI hasn’t had a completely static delay since the official release. You probably want to have a look at this image: https://drive.google.com/open?id=0B7IQks_dQ92qcWVMM0diOWlUY3M. It shows my ratio versus volatility & BTC price. During the first vol peak the lag increased to over 120 days. The second peak eventually knocked it over 200 (there is no cap on the delay). Note: the first peak was during a price decrease so the ratio maxed out. You cannot observe any impact during this stage, as live prices take over at that point.

So how realistic is this? Volatility means uncertainty. Any swing, up or down, may affect the “catch up” lag. But the model lacks a “memory”. If machines get turned off and then back on again once price increases, it should at least go on from the previous consumption peak IMO. (There IS some kind of “memory” on the (maximum) volatility.) But this is kind of where I draw the line in terms of complexity. The lag is the only parameter that I consider dynamic enough to justify some additional complexity, but other variables are (for now) fine in static form (even though I expect trends in here as well). I’m pretty happy with the model as it is. I’m not looking for perfection. No model could achieve that anyway.

Awaiting the upcoming Bitcoin fork and contemplating the assumptions for the ETH index in the meanwhile. What are your thoughts on the latter? Have you seen this index page?

Kind regards,

Alex

July 25, 2017

Hi, [private].

> On that day you produced an upper bound of 38% (337M/900M), and a lower bound of maybe 30-34%-ish?

Actually my lower bound was 16%. See footnote #3 in: http://blog.zorinaq.com/serious-faults-in-beci/

> On Feb 26 specifically I was closer to 50%. And given the 5% target adjustment I would even be somewhat lower if recalculated today.

“Closer to 50%”? You can’t cite an exact percentage? Your ratio varies from day to day, but from https://drive.google.com/file/d/0B7IQks_dQ92qcWVMM0diOWlUY3M/view?usp=drive_web it looks like it was pretty consistently in the range 50-60% for the period March-April.

My goal is to edit http://blog.zorinaq.com/serious-faults-in-beci/ to provide some sort of closure to our debate. So I figured let’s pick our numbers as of 2017-04-12, since I have your data for this day (see attached file) and since it’s in a period where your ratio was not varying too much. I recomputed my model’s numbers to be as of 2017-04-12: the 10-day moving avg hash rate was 3827 PH/s, break-even 0.51 J/GH, 1 BTC = 1220 USD, and I get:

lower: 383 MW, *3.36 TWh/yr* (18% of mining revenues spent on electricity)
best guess: 540-610 MW, *4.73-5.34 TWh/yr* (26-29% of mining revenues spent on electricity)
upper: 861 MW, *7.54 TWh/yr* (41% of mining revenues spent on electricity)

By comparison your estimate for 2017-04-12 was *10.99 TWh/yr* (60% of mining revenues spent on electricity: $549,585,434/$914,015,869).

Now, you implied in this email thread already that (paraphrasing you) “reality will take some time to catch up with economic theory”, so does it mean you agree the real power consumption was probably between my lower & upper bounds, and not at 10.99 TWh/yr *as of that exact day*? If you agree then, fine, that would provide a good closure to our debate.

In any case, it looks like your model, by increasing the price lag to ~200 days, lowers the ratio and therefore *appears* to be tending toward more realistic power consumption estimates. (But I haven’t really verified how more or less accurate BECI might be these days…)

> I actually commented this: “One more thing. I was wondering why the method wasn’t repeated in reverse? If you can do it for the least efficient machines why not with the most efficient ones? This would make for a more interesting “lower bound”. At least more comparable to the upper one.”

Ah, I remember not replying because I’m interested in calculating a lower bound, and what you are suggesting (people at each phase deploying the most efficient machines) is obviously not a lower bound, because machines are often decommissioned even before they stop being profitable (see http://blog.zorinaq.com/assets/income-antminer-s5.csv: an S5 should reasonably be decommissioned between day 385 and 567). So I don’t see the point in doing the math. Nonetheless I did it for your curiosity. As of 2017-02-26: (290*.50 + 30*.50 + 50*.20 + 70*.20 + 40*.20 + 350*.20 + 670*.20 + 350*.10 + 150*.10 + 1250*.10)/3250 = 0.176 J/GH… which would correspond to a consumption of 572 MW.

About the dynamic price lag: what does it mean, “not completely static”? You make frequent manual changes? You have a formula to dynamically recompute it every day? Why are you so vague about it? It’s been 3 or 4 emails that I ask you and I still don’t have a clear answer as to how or when you change it… This is another flaw in BECI: you have a black box that determines a crucial input parameter to your model, and because it’s a black box no one else can validate/reproduce BECI’s numbers.

About the ETH index: from a purely theoretical economics viewpoint it is logical, but again I don’t think it gives a realistic energy consumption, for the same reason as the BTC index (you don’t base the estimate on hardware parameters). My napkin math shows you are probably overestimating the ETH miners’ consumption by 1.5x-2x.

-Marc
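The weighted-efficiency calculation in the email above can be reproduced. Reading each term as a hash-rate share in PH/s multiplied by an efficiency in J/GH is an interpretation of the inline expression, not something the email spells out:

```python
# Interpreted from the email: (hash-rate share in PH/s, efficiency in J/GH) pairs.
fleet = [
    (290, 0.50), (30, 0.50),
    (50, 0.20), (70, 0.20), (40, 0.20), (350, 0.20), (670, 0.20),
    (350, 0.10), (150, 0.10), (1250, 0.10),
]
total_hash = 3250  # PH/s

# PH/s * J/GH conveniently works out to MW (1 PH/s = 1e6 GH/s, 1e6 W = 1 MW).
power_mw = sum(h * e for h, e in fleet)
weighted_j_per_gh = power_mw / total_hash

print(f"{weighted_j_per_gh:.3f} J/GH")  # 0.176 J/GH
print(f"{power_mw:.0f} MW")             # 571 MW (572 in the email, from rounding)
```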

July 25, 2017

[Private]

>If you agree then, fine, that would provide a good closure to our debate.

I cannot rule out it could be. 😉 The challenge is that we’re comparing estimates to estimates, and not even completely independent ones. Deriving energy consumption from hash is interesting, but it only works well for the absolute bottom. Otherwise it’s mostly a meaningless number. You can use economics to give it some meaning, but that’s partially overlapping with what I do and also results in an (uncertain) estimate. There’s no way to pick a superior method based on this. I personally dislike hashrate based estimates because it’s more complicated, there’s no real way around economics anyway, and these estimates lack predictive properties.

>(people at each phase deploying the most efficient machines) is obviously not a lower bound, because machines are often decommissioned even before they stop being profitable

So you accept economics to calculate a top, but reject economics for calculating a reasonable (rather than absolute) bottom. I’d be more careful with calculating an objective top and using subjective arguments to reject the bottom. I can think of plenty of subjective reasons to reject the top, e.g. free (/stolen) electricity or mining in the red to support some kind of fork. I really think the application of the method should be more consistent, also with regard to potential spill-over I mentioned. It’s a bit like cherry picking.

>About the dynamic price lag: what does it mean “not completely static”?

I suppose “not completely static” is a poor way of saying it responds to (new) volatility peaks. Otherwise it’s stable.

>This is another flaw in BECI: you have a black box that determines a crucial input parameter to your model, and because it’s a black box no one else can validate/reproduce BECI’s numbers.

Huh? Everything you need to reproduce the number is there. I’ve consistently provided this. It shouldn’t take more than a minute. Just take the 200-day average price (okay, you have to get these numbers, but it’s $1600), multiply by 1800 coins per day. If you’re feeling lazy, just take yesterday’s fees (167 according to blockchain.info), and add that at the same rate per coin. Multiply by 365 to get $1.15 billion. Then multiply by 60% to get $690M, and add 5% (rough average block time adjustment). You end up with $724M in total network costs. This quick and dirty approach is sufficient to almost get the provided costs of $735M today. What’s missing on the page to be able to do this??
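The quick and dirty reproduction described above can be written out step by step, using only the figures quoted in the email:

```python
# Inputs quoted in the email.
avg_price_200d = 1600  # USD, 200-day average BTC price
coins_per_day = 1800   # newly issued coins per day
fees_per_day = 167     # BTC/day in fees (blockchain.info figure quoted)
target_ratio = 0.60    # share of revenue assumed spent on electricity
block_time_adj = 1.05  # rough +5% average block time adjustment

# Annualized revenue, then the electricity-cost estimate.
annual_revenue = (coins_per_day + fees_per_day) * avg_price_200d * 365
annual_costs = annual_revenue * target_ratio * block_time_adj

print(f"revenue: ${annual_revenue / 1e9:.2f}B")  # $1.15B
print(f"costs:   ${annual_costs / 1e6:.0f}M")    # $724M
```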

>My napkin maths shows you are probably overestimating the ETH miners consumption by 1.5x-2x.

Really? What’s the minimum for ETH? I found the most efficient GPU doing something like 4.66 J/MH (index at 7.33 J/MH).

Kind regards,

Alex

July 26, 2017

(It’s very late here, I’ll reply tomorrow.) Just wanted to ask again: when did you make that change from 120 to 200 days?

-Marc

July 26, 2017

Let’s say mostly during the first two weeks of June (see chart I sent).

Kind regards,

Alex

July 27, 2017

> Let’s say mostly during the first two weeks of June (see chart I sent). What does this even mean? The chart doesn’t make anything clearer. You don’t know the exact day? You spread the change incrementally over 2 weeks? Whenever I ask you on what day you made a specific change your answers are always vague and cryptic. This is the 4th time I ask about the 120→200change and still no clear answer… Look, if you don’t know and don’t keep track of when these changes are made, just admit it. I would advise you to put your model and code under revision control to have more traceability and be able to answer these questions. Better: *document publicly* on what day you make changes. BECI’s historical data is unverifiable if you yourself can’t answer the question of “how was the avg Bitcoin price calculated on day X”. > [Private] > > If you agree then, fine, that would provide a good closure to our debate.> I cannot rule out it could be. 😉 Ok. Well at least I can summarize your position as such: You recognized BECI didn’t appear to produce *as of 2017-04-12* an accurate energy consumption estimate. You recognized physical bottlenecks (“production lines can only handle so much”) are causing the hash rate &amp; power consumption to take some time to catch up with what economic theory predicts. To palliate with this you’ve increased the price lag from 60 to200 days, so in theory BECI should be closer to reality today. Accurate? > and these estimates lack predictive properties The goal of hardware-based estimates is not to predict *future*consumption. It’s more important to give an accurate estimation of*present* consumption. Besides, any “predictive properties” an economic model might have are still largely flawed since you don’t know exactly when these physical bottlenecks(production lines) will resolve and catch up with economic theory. > So you accept economics to calculate a top, but reject economics for calculating a reasonable (rather than absolute) bottom. 
No, I embrace economics for both. You, on the other hand, forget to take into account some economic factors. Do you know why it makes sense to decommission an S5 at day 567 when “only” 78% of the daily revenues are spent on electricity ($0.71 of $0.90)? Because $0.19 daily is not enough for an industrial miner to be able to cover the maintenance, DC space, labor, and other miscellaneous business and operational overheads of a 3U1/3rd width racked chassis of electronics running 24/7. I’ve pointed out before it is understandable you don’t grasp this (“You are not a miner. You have never talked to, researched, or studied the mining industry. You have never met and interviewed professional miners”). The mining economy, or any economy for that matter, is not a perfect-spherical-object-in-a-vacuum type of thing. There are costs other than electrical costs that matter, your simplistic model fails to capture them, so change that. As a rule of thumb these overheads account for *at least* $0.01-0.02 per kWh. Go talk to other professional miners, if you want more than 1 data point, more than just mine. Here is one who contacted me and believes the overhead is even larger: <a href=”https://www.reddit.com/r/BitcoinMarkets/comments/605loa/below_337_per_bitcoin_mining_becomes_unprofitable/df3vf3k/”>https://www.reddit.com/r/BitcoinMarkets/comments/605loa/below_337_per_bitcoin_mining_becomes_unprofitable/df3vf3k/</a> > > This is another flaw in BECI: you have a black box…> Huh? Everything you need to reproduce the number is there. … Forget that comment. Your vague and cryptic answer (“BECI hasn’t had a completely static delay”) sounded like the price lag was now completely dynamic and changed every day, hence my criticism. Don’t be vague next time. Looking at my data, I noticed you actually never changed the price lag from120 to directly 200 days, but had an intermediate step at 150 days. It looks like you forgot about it because you didn’t mention it to me. 
These frequent, random, unexplained changes make BECI’s numbers arbitrary IMHO, especially because the price lag drastically affects your numbers (57% difference between the 60-day and 200-day average)… Also, you’ve never explained exactly *how* you decide on a particular price lag. It looks like you pick the number willy-nilly.

-Marc
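The decommissioning arithmetic in the email above can be checked in a few lines. This is only an illustrative sketch: the revenue and electricity figures ($0.90/day and $0.71/day at $0.05/kWh) come from the email itself, and the overhead rates simply sweep the stated $0.01-0.02/kWh rule of thumb.

```python
# Worked version of the S5 decommissioning arithmetic from the email above.
# Revenue ($0.90/day) and electricity ($0.71/day at $0.05/kWh) are quoted
# from the email; the overhead rates are its $0.01-0.02/kWh rule of thumb.

ELECTRICITY_PRICE = 0.05   # $/kWh, consistent with both models
daily_revenue = 0.90       # $/day mined by the aging S5
daily_electricity = 0.71   # $/day spent on power

kwh_per_day = daily_electricity / ELECTRICITY_PRICE  # ~14.2 kWh/day

# Electricity alone leaves a small positive margin ($0.19/day)...
margin = daily_revenue - daily_electricity

# ...which the per-kWh overhead (maintenance, DC space, labor) erodes,
# and eliminates entirely at the high end of the rule of thumb.
for overhead_rate in (0.01, 0.02):
    net = margin - overhead_rate * kwh_per_day
    print(f"overhead ${overhead_rate:.2f}/kWh -> net ${net:+.2f}/day")
```

At $0.02/kWh of overhead the machine loses money outright, which is the email's argument for decommissioning hardware well before electricity costs reach 100% of revenues.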

July 28, 2017

One of the assertions made by Bevand is the following (from the Zorinaq blog page):

BECI’s author’s implicit acknowledgment his earlier models would be overestimating electricity consumption by 1.57×

This statement was explicitly denied, but this isn’t the only false assertion addressed in the next part. Another one is as follows:

The author’s vague answers as to when he made the changes indicate he does not track, archive, or document them.

It was actually indicated that these changes are logged in the Index page’s history. Repeated requests to specify exactly what information was needed were ignored:

> What does this even mean?

What this means is that I don’t know the purpose of your request, so I’m providing you with a general direction on where to look, since this isn’t a single-day event. Volatility peaks, this raises the delay, which decays again slowly afterward. What do you want to do? If you want to prove our methods are significantly different, you first of all need more data points, and then the BECI points are sufficient to help you out. Reproduce today’s numbers? Everything you need is on the page, as explained. Reproduce a past day? Should work just fine too for recent history and with the approach I laid out. Going way back? The required information was on the page, but I can help if you’re specific. To what purpose would we be doing this? Need the exact date 65% was changed to 60%? Give me a reason why I’d want to go through thousands of page versions to retrieve this. Since anything relevant goes on the BECI page, I suppose it’s effectively my changelog as well. Granted, this sucks for looking up things fast. I should keep these more central.

> You recognized BECI didn’t appear to produce *as of 2017-04-12* an accurate energy consumption estimate. You recognized physical bottlenecks (“production lines can only handle so much”) are causing the hash rate & power consumption to take some time to catch up with what economic theory predicts. To palliate this, you’ve increased the price lag from 60 to 200 days, so in theory BECI should be closer to reality today. Accurate?

No. I think there are some misunderstandings here. Most importantly, changes in delay should mainly affect the FUTURE. That’s the whole idea anyway. If the price goes up 30% today it would take longer to catch up than a 10% increase. It doesn’t make sense if that would influence today’s value though. Costs don’t go down because there’s more to catch up on, they should just go up slower. As far as I’m concerned, total costs reflected reality just fine, and still do today. The expected (near) future is what looks different in these cases. I find that this is working beautifully at the moment. Ever since the beta phase completed, the total costs have looked almost perfect, taking in nothing of the craziness of the market no matter what happened (except for the small bump where the ratio maxed out). This worked fine when volatility was low (and the lag was low), as well as when volatility peaked (and so did the delay).

Just compare price to costs in this picture: https://drive.google.com/open?id=0B7IQks_dQ92qN3JfTlY2QlRmOVE

There’s been one manual change that would slightly impact the cost level (being the 65 to 60), but then you also misunderstand the reason I made the change in the first place. I made an optimistic and a pessimistic scenario for the target. 65% was in-between, which is still defensible IMO. 60% provides a bit better fit given a very conservative point of view. But honestly I’d have no problem with changing it back to 65% today. I consider this change more of a cosmetic change than anything else. Non-trivial changes such as the BCC fork get the attention they deserve: https://twitter.com/DigiEconomist/status/890616835229466624 (although the exact “change” still has to take shape, stay tuned).

> I noticed you actually never changed the price lag from 120 directly to 200 days

Been trying to tell that for a while now. 😀

> It looks like you pick the number willy-nilly.

I’m way too lazy and have better things to do than running a random number generator every day and putting it in manually, especially [private]. 😉 I wouldn’t have built the BECI had it required manual updates (fun fact: untrue, some comparison data requires annual updates – I can live with that).

> As a rule of thumb these overheads account for *at least* $0.01-0.02 per kWh.

That’s accounted for in the 5 cents per kWh. Honestly, I doubt any serious miner is still paying more than 4 cents per kWh, and even that’s expensive. I actually talked to a lot of people before I got to this number, and the guy you refer to confirms this number too: “5-6c per kWh”, lol. It’s the same as saying “every 5 cents spent on operational costs includes payment for 1 kWh”. If you’re going with 5c too, then you’re already including overhead as well, so this doesn’t make an objective argument for decommissioning.

Kind regards,

Alex

July 30, 2017

> > > Let’s say mostly during the first two weeks of June (see chart I sent).
> >
> > What does this even mean?
>
> What this means is that I don’t know the purpose of your request, so I’m providing you with a general direction on where to look since this isn’t a single day event. Volatility peaks, this raises the delay, which decays again slowly afterward.

Alex, you contradict yourself all the time. Initially you said the moving average was fixed. Then you said it was “dynamic”. Then you said it was fixed (“Just take the 200 days average price”). Now you say it is “decaying”. You remain as vague and cryptic as ever. For once, answer clearly. Which is it:

1) Is the moving average *fixed*, e.g. right now it’s averaged over exactly 200 days (except you occasionally change it, e.g. previously it was averaged over exactly 150 days)?

2) Or is the moving average *dynamic*, e.g. one day it could be averaged over 200 days, the next day over 195 days, the next day over 192 days, etc.? And what is your formula for determining the averaging period?

> > I noticed you actually never changed the price lag from 120 directly to 200 days, but had an intermediate step at 150 days
>
> Been trying to tell that for a while now. 😀

Well you haven’t been trying very hard. You should have simply said there was an intermediate step at 150 days. That’s it. A 1-sentence clear explanation. Instead you give and continue to give vague and cryptic answers. It’s so bad that this is my *5th email* questioning you and I *still* don’t know if the moving average is computed as per (1) or (2) above. I would like to reply to the rest of your email, but I’ll keep it short and simple for now, in order to force you to answer once and for all if it’s (1) or (2).

-Marc

July 30, 2017

Another key assertion is the following (again from the Zorinaq blog page):

However BECI’s author did exactly what he should have never done: making random changes…

This statement was made with regard to how the period for the BECI’s moving average is determined. In reality, this is done via an approach heavily inspired by a mean-reverting GARCH model commonly used in finance (the actual method itself is too complicated to be handled by the tools used in making the BECI index), linking the resulting volatility to the BECI’s time window. Bevand was well aware of this when making his statement:

Hmm, you complain to me about not being consistent while not addressing the inconsistencies in your approach. But okay, let’s talk formulas and get this cleared up. In essence it’s as follows:

Average of ( Maximum of ( volatility(t−1), decay factor × decaying volatility(t−1) ) × reference days )

With a decay factor so low that, without a new volatility peak, it’s practically static. So here’s why I’m saying it’s fine to use a static number for most parts, and I update this on the BECI page when the difference is interesting enough.

> there will always be some fine-tuning mismatches on multiple parameters.

The information provided always allows for getting close enough in reproducing, as it has been shown to do. If the fine-tuning is actually relevant it’s a different story, but again, that really depends on the purpose.

Kind regards,

Alex
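For readers trying to follow the formula above, here is a minimal sketch of how such a decaying-volatility window could behave. The decay factor, reference-day multiplier, and volatility series are all illustrative assumptions, not the BECI’s actual parameters.

```python
# Sketch of the decaying-volatility averaging window described above.
# All concrete values (decay factor, reference days, volatility series)
# are hypothetical; the BECI's real parameters are not disclosed.

def averaging_window(volatilities, decay=0.99, reference_days=200):
    """Return the price-averaging window implied by a volatility series.

    Each day the 'effective' volatility is the maximum of the newly
    observed volatility and the previous effective volatility decayed
    by a constant factor, so a peak raises the window immediately and
    then bleeds off slowly; the result is averaged over all days.
    """
    windows = []
    effective = 0.0
    for vol in volatilities:
        effective = max(vol, decay * effective)
        windows.append(effective * reference_days)
    return sum(windows) / len(windows)

# A volatility spike lengthens the window; with a decay factor near 1
# it stays elevated long after the spike has passed.
calm = [0.5] * 30
spike = [0.5] * 15 + [1.0] + [0.5] * 14
print(averaging_window(calm) < averaging_window(spike))  # True
```

With a decay factor this close to 1, the window is "practically static" between volatility peaks, which matches the description of occasional jumps (120, 150, 200 days) rather than daily changes.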

Lastly, the mentioned intention to “keep BECI down” is a quote that wasn’t part of this (or any) conversation history.

No additional responses were received between the last email and the moment this post was published.

Part of the layout was lost due to parsing from webmail.