BIP101 - Crypto Mining Blog

The mempool hasn't gone below 10MB in over 2 days. We've officially reached the point of an ever-increasing transaction backlog.

I did some calculations here of the number of transactions per second Bitcoin can handle, but long story short: the network can only handle about 2.5 tx/s with the 1MB limit, and we're at that point now.
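For reference, here's the back-of-envelope version of that calculation as a quick Python sketch; the ~670-byte average transaction size is my own assumption chosen to match the 2.5 tx/s figure, not a measured number.

```python
# Rough throughput estimate: transactions per block / seconds per block.
# The average transaction size is an assumption, not a measured figure.
MAX_BLOCK_BYTES = 1_000_000   # 1MB consensus limit
AVG_TX_BYTES = 670            # assumed average transaction size
BLOCK_INTERVAL_SEC = 600      # one block every ~10 minutes on average

tx_per_block = MAX_BLOCK_BYTES / AVG_TX_BYTES
tx_per_second = tx_per_block / BLOCK_INTERVAL_SEC
print(f"{tx_per_second:.1f} tx/s")  # ~2.5 tx/s
```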
According to tradeblock.com, the miners haven't been able to clear out the mempool, and we've been sitting at 10MB, or 9x over capacity, for about 2 days. Now we face an ever-increasing transaction backlog, with erratic fee structures that our wallets won't be able to handle, and randomly dropped transactions.
You sent money today? Sorry, fuck you, bitcoin is over capacity, try again with a higher fee later.
The miners are entirely to blame for this situation for not running BIP101, which would fix this problem very quickly. If they don't, bitcoin could very quickly become the MySpace of cryptocurrencies.
submitted by thouliha to btc [link] [comments]

21% attack possible against BIP100?

If I interpret BIP100 correctly, the top and bottom 20% of votes are discarded, and the minimum is chosen.
Doesn't this mean that a miner with 21% of the hashing power can drop the block size to the minimum that can be specified by the votes, i.e. 1B? If the block size is calculated on entire blocks, wouldn't this permanently destroy Bitcoin unless recovered with a hard fork?
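To make the concern concrete, here's a small sketch of how that selection rule behaves under the interpretation above (votes weighted by hashrate share; the numbers are illustrative, not taken from the BIP text):

```python
# Sort the votes, discard the top and bottom 20%, and take the minimum of
# what remains (i.e. the 20th percentile). With 21% of the votes, an
# attacker's minimum-size vote survives the bottom cut and wins.
def bip100_winner(votes):
    votes = sorted(votes)
    cut = len(votes) // 5                  # 20% discarded at each end
    return min(votes[cut:len(votes) - cut])

honest = [1_000_000] * 79                  # 79% of hashpower votes 1 MB
attacker = [1] * 21                        # 21% votes the 1-byte minimum
print(bip100_winner(honest + attacker))    # -> 1
```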
Now, of course, one could argue that a miner would not want to do that since it would destroy the value of his mining equipment, but I think that's a weak argument. A year's worth of Bitcoin mining is worth
25 BTC/block * 6 blocks/h * 24*365 h/year * 230 USD/BTC 
which is roughly $300 million. eBay bought PayPal for five times this amount in 2002, so this is not an unrealistically large amount of money. If you can't amortize your equipment over more than a year, the price of mining will surely be higher than that, but not that much higher, I suspect (since miners do become obsolete quite fast anyways) - and besides that, the mining still pays for itself, at least before the attack becomes obvious, if you sell the coins immediately.
Assuming a block size limit of 1 MB, and a voting cycle of 3 months (the BIP doesn't seem to specify it, but it can be implied from the initial voting cycle), 1 year of mining would "only" allow you to drop the block size four times, e.g. from 1 MB to 500 KB, 250 KB, 125 KB, and finally 62.5 KB. That wouldn't irreversibly destroy Bitcoin on a technical level, but probably make it unaffordable to transact on it and destroy it on an economic level.
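As a sanity check on the numbers above, here's the arithmetic as a short sketch (using the post's own price and subsidy figures, and assuming each 3-month voting cycle can at most halve the cap):

```python
# Annual mining revenue at the post's figures, and the block size after
# four consecutive halving votes (one per assumed 3-month cycle).
btc_per_block = 25
blocks_per_year = 6 * 24 * 365
usd_per_btc = 230
annual_revenue_usd = btc_per_block * blocks_per_year * usd_per_btc
print(f"~${annual_revenue_usd / 1e6:.0f} million / year")  # ~$302 million

size_kb = 1000.0
for cycle in range(1, 5):
    size_kb /= 2
    print(f"after cycle {cycle}: {size_kb:g} KB")  # 500, 250, 125, 62.5
```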
Did I miss something, or should we consider some changes to BIP100 to prevent this (e.g. using the median instead of the 20th percentile)?
Note: I'm in support of a block size increase, be it through BIP100, BIP101, or otherwise. My aim is to help fix an issue if one exists, or to understand why it isn't one, not to delay or prevent the implementation of one of these BIPs.
Edit: Possible solution here - a majority of miners can always declare a certain size unacceptable and start treating all blocks that vote beyond the arbitrarily chosen limits as invalid (soft fork).
submitted by aaaaaaaarrrrrgh to Bitcoin [link] [comments]

Introducing: the Time To Visa

I propose that the "Time to Visa" be calculated to judge the worthiness of block size proposals. The TTV is calculated by estimating the block size needed to handle Visa-scale transactions per second and calculating when in the future a BIP will loosen its stranglehold on the bitcoin network enough to let that happen. I estimated the blocksize needed at 1.4 gigabytes and got these numbers.
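For example, a rough TTV for BIP101 might look like the sketch below; it assumes an 8 MB starting limit in early 2016 doubling every two years, takes the 1.4 GB figure above at face value, and ignores BIP101's linear interpolation between doublings and its final cap.

```python
import math

# Time To Visa under a "start at 8 MB, double every two years" schedule.
visa_scale_mb = 1400          # the ~1.4 GB estimate above
start_mb = 8
doublings = math.log2(visa_scale_mb / start_mb)
ttv_years = 2 * doublings
print(f"TTV ≈ {ttv_years:.1f} years (roughly {2016 + ttv_years:.0f})")
```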
submitted by Bitcoin_Chief to bitcoinxt [link] [comments]

segwit after a 2MB hardfork

Disclaimer: My preferred plan for bitcoin is soft-forking segregated witness in asap, and scheduling a 2MB hardforked blocksize increase sometime mid-2017, and I think doing a 2MB hardfork anytime soon is pretty crazy. Also, I like micropayments, and until I learnt about the lightning network proposal, bitcoin didn't really interest me because a couple of cents in fees is way too expensive, and a few minutes is way too slow. Maybe that's enough to make everything I say uninteresting to you, dear reader, in which case I hope this disclaimer has saved you some time. :)
Anyway, there's now a good explanation of what segwit does beyond increasing the blocksize via accounting tricks, or whatever you want to call it: https://bitcoincore.org/en/2016/01/26/segwit-benefits/ [0] I'm hopeful that makes it a bit easier to see why many people are more excited by segwit than by a 2MB hardfork. In any event, hopefully it's easy to see why it might be a good idea to do segwit asap, even if you do a hardfork to double the blocksize first.
If you were to do a 2MB hardfork first, and then apply segwit on top of that [1], I think there are a number of changes you'd want to consider, rather than just doing a straight merge. Number one is that with the 75% discount for witness data and a 2MB blocksize, you run the risk of worst-case 8MB blocks which seems to be too large at present [2]. The obvious solution is to change the discount rate, or limit witness data by some other mechanism. The drawback is that this removes some of the benefits of segwit in reducing UTXO growth and in moving to a simpler cost formula. Not hard, but it's a tradeoff, and exactly what to do isn't obvious (to me, anyway).
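For anyone who wants the arithmetic behind that worst case, here's a sketch of the discounted accounting, expressed as "base bytes plus a quarter of the witness bytes" (arithmetically the same accounting as segwit's weight formula, just rescaled); the 2MB budget is the hypothetical post-hardfork base limit discussed above.

```python
# With a 75% discount, each witness byte counts as 0.25 bytes toward the
# limit, so a budget sized for 2 MB of non-witness data admits up to ~8 MB
# of (almost entirely) witness data in the worst case.
def discounted_size(base_bytes, witness_bytes, discount=0.75):
    return base_bytes + witness_bytes * (1 - discount)

budget = 2_000_000                        # hypothetical 2 MB base budget
worst_case_total = budget / (1 - 0.75)    # block stuffed with witness data
print(worst_case_total)                   # 8,000,000 bytes
```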
If IBLT or weak blocks or an improved relay network or something similar comes out after deploying segwit, does it then make sense to increase the discount or otherwise raise the limit on witness data, and is it possible to do this without another hardfork and corresponding forced upgrade? For the core roadmap, I think the answer would be "do segwit as a soft-fork now so no one has to upgrade, and after IBLT/etc is ready perhaps do a hard-fork then because it will be safer" so there's only one forced upgrade for users. Is some similar plan possible if there's an "immediate" hard fork to increase the block size, to avoid users getting hit with two hardforks in quick succession?
Number two is how to deal with sighashes -- segwit allows the hash calculation to be changed, so that for 2MB of transaction data (including witness data), you only need to hash up to around 4MB of data when verifying signatures, rather than potentially gigabytes of data. Compare that to Gavin's commits to the 0.11.2 branch in Classic, which include a 1.3GB limit on sighash data to make the 2MB blocksize workable -- necessary because the quadratic scaling problem means the 1.3GB limit can already be hit with 1MB blocks. Do you keep the new limit once you've got 2MB+segwit, plan to phase it out as more transactions switch to segwit, or something else?
Again, I think with the core roadmap the plan here is straightforward -- do segwit now, get as many wallets/transactions switched over to segwit asap (whether due to all the bonus features, or just that they're cheaper in fees), and then revise the sighash limits later as part of soft-forking to increase the blocksize.
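To illustrate why the sighash issue matters, here's a toy model of the scaling difference: legacy signing re-hashes (roughly) the whole transaction once per input, so bytes hashed grow quadratically with the number of inputs, whereas a BIP143-style scheme hashes a roughly constant amount per input. The byte counts below are illustrative assumptions, not figures from either implementation.

```python
# Toy model of sighash work vs. number of inputs (sizes are assumptions).
BYTES_PER_INPUT = 150      # assumed size of one input
OTHER_TX_BYTES = 100       # assumed outputs + overhead

def legacy_sighash_bytes(n_inputs):
    tx_size = n_inputs * BYTES_PER_INPUT + OTHER_TX_BYTES
    return n_inputs * tx_size              # whole tx hashed once per input

def segwit_style_sighash_bytes(n_inputs, per_input_digest=200):
    return n_inputs * per_input_digest     # ~constant work per input

for n in (100, 1_000, 6_000):
    print(n, legacy_sighash_bytes(n), segwit_style_sighash_bytes(n))
```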
Finally, and I'm probably projecting my own ideas here, I think a 2MB hardfork in 2017 would give ample opportunity to simultaneously switch to a "validation cost metric" approach, making fees simpler to calculate and preventing people from mounting sigop attacks that force near-empty blocks and other such nonsense. I think there's even the possibility of changing the limit so that in future it can be increased by soft-forks [3], instead of needing a hard fork for increases as it does now. I.e., I think if we're clever, we can get a gradual increase to 1.8MB-2MB starting in the next few months via segwit with a soft-fork, then have a single hard-fork flag day next year that allows the blocksize to be managed in a forwards-compatible way more or less indefinitely.
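As a sketch of what a "validation cost metric" could look like: fold bytes, sigops, and sighash bytes into one linear score and check a single block budget, so no single resource can be abused on its own. The weights and budget here are placeholders I made up for illustration, not values from any actual proposal.

```python
# Hypothetical single-metric block validation cost: one weighted sum, one
# budget. Weights and budget are made-up placeholders for illustration.
WEIGHTS = {"bytes": 1.0, "sigops": 50.0, "sighash_bytes": 0.2}
BLOCK_COST_BUDGET = 4_000_000

def tx_cost(tx):
    return sum(weight * tx.get(resource, 0) for resource, weight in WEIGHTS.items())

def block_fits(txs):
    return sum(tx_cost(tx) for tx in txs) <= BLOCK_COST_BUDGET

typical_tx = {"bytes": 250, "sigops": 2, "sighash_bytes": 500}
print(tx_cost(typical_tx), block_fits([typical_tx] * 1000))
```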
Anyhoo, I'd love to see more technical discussion of classic vs core, so in the spirit of "write what you want to read", voila...
[0] I wrote most of the text for that, though the content has had a lot of corrections from people who understand how it works better than I do; see the github pull request if you care -- https://github.com/bitcoin-core/website/pull/67
[1] https://www.reddit.com/btc/comments/42mequ/jtoomim_192616_utc_my_plan_for_segwit_was_to_pull/
[2] I've done no research myself; jtoomim's talk at Hong Kong said 2MB/4MB seemed okay but 8MB/9MB was "pushing it" -- http://diyhpl.us/wiki/transcripts/scalingbitcoin/hong-kong/bip101-block-propagation-data-from-testnet/ and his talks with miners indicated that BIP101's 8MB blocks were "Too much too fast" https://docs.google.com/spreadsheets/d/1Cg9Qo9Vl5PdJYD4EiHnIGMV3G48pWmcWI3NFoKKfIzU/edit#gid=0 Tradeblock's stats also seem to suggest 8MB blocks is probably problematic for now: https://tradeblock.com/blog/bitcoin-network-capacity-analysis-part-6-data-propagation
[3] https://botbot.me/freenode/bitcoin-wizards/2015-12-09/?msg=55794797&page=4
submitted by ajtowns to btc [link] [comments]

BIP X? Decaying dynamic block size cap as a function of fullness & average fee

BIP X? Decaying dynamic block size cap as a function of fullness & average fee
Introduction & motivation
We need something fresh. I've been following this and the "other" sub-reddit from the sidelines for a while, and now I feel the need to attempt to contribute constructively by introducing some fresh ideas (fresh as far as I'm aware). Personally, I prefer BIP101 over BIP100 or the status quo, and I think BIP100 is something to avoid, as it may not be such a good idea to give any specific group (in this case, the miners) dynamic control of system parameters. I believe all parameters need to be predictable to ensure stability, utility, and a fair game for all participants. But this is just me, and it doesn't really matter what I believe. I only want to present something and then see if it sticks.
Goals
  1. Have downwards pressure on the cap so as to stimulate the fee market and address fears of the cap reaching infinity.
  2. Have the cap re-adjusted based on actual usage so as to ensure adequate capacity.
  3. Use fees as a parameter when re-adjusting the cap so as to diminish the influence of spam or low-fee transactions, and in a way which would allow a fee market to develop before raising the cap.
  4. Be predictable enough to enable all participants to plan their operations or responses to change.
  5. Have the same rules for all participants - no voting, no easy manipulation of the mechanism to achieve an increase. Any changes should be made dynamically as a response to changes in actual usage.
  6. Keep the 1MiB cap as a minimum, for safety/historic purposes.
  7. KISS (Keep It Simple Stupid)
The Proposal
Have the maximum block size be determined in the following way:
If (block_size(i-1) > max_block_size(i-1) * 0.5)
   and (avg_fee_per_kb(i-1, i-1) >= avg_fee_per_kb(i-947, i-2) * (256/259))
Then
   max_block_size(i) = max_block_size(i-1) * (259/256)
Else
   max_block_size(i) = max(max_block_size(i-1) * (4093/4096), 1 MiB)
Explanation
Two conditions need to be met for an increase.
  1. Previous block must have been more than half-full.
  2. Previous block average fee/kb must have been greater than the average fee/kb calculated over last 946 blocks (approx 1 week) multiplied by (256/259).
When both are met, the cap is increased by a factor of (259/256). As you can see, the increase is allowed only if the average fee/kb is maintained or growing. Win for all.
If the conditions for an increase were met for every block, the cap could theoretically quadruple by the end of the day (in 119 blocks). However, that would be unlikely, considering that an increase in block size would likely dilute the average fee/kb and that blocks would need to be filled more than 50%. If the average fee/kb does not get diluted and there are enough transactions to trigger increases, well, so be it - good for the miners, as it means the new space is filled with more profit (increased avg. fee/kb + more transactions). If the fees are not maintained, the block cap will keep decaying at a rate of (4093/4096) per block, which would halve the size in about 1 week (946 blocks).
With this, raising the cap would be a continuous struggle, but also a response to an actual need to raise it. Each bump in the up direction would take 16 blocks to decay back to "normal" unless another bump were triggered in the meantime. It would be hard for any entity with a limited supply of bitcoin to maintain the cap higher than actually required. However, if everybody is pushing for their place in the block (fee market), condition 2 should be regularly triggered until some equilibrium is reached.
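Here's the update rule as a small sketch, plus a check of the decay and growth rates quoted above (this is just my reading of the proposal, with the one-week fee average passed in as a precomputed input):

```python
import math

ONE_MIB = 1024 * 1024
GROW = 259 / 256
DECAY = 4093 / 4096

def next_cap(prev_cap, prev_block_size, prev_fee_per_kb, week_avg_fee_per_kb):
    half_full = prev_block_size > prev_cap * 0.5
    fees_held = prev_fee_per_kb >= week_avg_fee_per_kb * (256 / 259)
    if half_full and fees_held:
        return prev_cap * GROW
    return max(prev_cap * DECAY, ONE_MIB)

# Sanity-check the rates quoted above:
print(round(math.log(2) / math.log(1 / DECAY)))   # ~946 blocks to halve
print(round(math.log(4) / math.log(GROW)))        # ~119 blocks to quadruple
```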
Well, this wraps it up. What do you think?
End notes
I don't know the technical details of bitcoin implementation. Maybe the above would be impractical to implement, violating goal 7. Maybe it would be too fast changing, violating goal 4. I cannot say. Anyway, I had this idea and tried to formulate it a bit and give it some initial structure. Maybe it gives someone skilled something to start with, a fresh idea. Take it or leave it - I don't care. If it grows, it would make me happy. If not, well, I tried :)
submitted by Rariro to bitcoinxt [link] [comments]

Cost of voting for XT by renting mining time

Building off of the good idea that willsteel had here about how we can vote for XT by renting time on miners, I gave some thought to the cost of supporting XT through this method. Obviously if you have your own mining hardware and support XT you can do it directly, but for the rest of us, an effective way to vote is to rent hashing power that signs with the BIP101 flag. I wanted to figure out what the actual cost of voting this way is in the current market conditions, here's what I got.
Based on mining calculators, the current mining revenue from 1 TH/s averages 0.00927 BTC/day. NiceHash charges a 3% fee, and assuming you're pointed at the Slush pool, they charge a 2% fee, so you're left with about 0.0088 BTC. The cost of (1 TH/s) * 1 day on NiceHash is currently 0.0096 BTC, so per TH/s per day you're losing 0.0008 BTC, or at the current price of $226/BTC, $0.18/day. Getting to a nice round number: for a loss of $1/day, you can purchase 1/0.18 = 5.56 TH/s of hash rate.
The total hashing power of the network is currently about 400 PH/s. So, for $1/day, the fraction you can purchase is 5.56 / 400,000 ≈ 0.000014 of bitcoin's hashing power, or 0.0014%.
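The same arithmetic as a sketch, using the post's snapshot figures (revenue, fees, rental price, and BTC price as of writing; none of these are current values):

```python
# Cost of renting 1 TH/s for a day and pointing it at a BIP101-signalling
# pool, using the snapshot figures from the post above.
revenue_btc_per_th_day = 0.00927
fee_rate = 0.03 + 0.02                     # NiceHash 3% + pool 2%
rent_btc_per_th_day = 0.0096
usd_per_btc = 226
network_th = 400_000                       # ~400 PH/s

net_btc = revenue_btc_per_th_day * (1 - fee_rate)           # ~0.0088 BTC
loss_usd = (rent_btc_per_th_day - net_btc) * usd_per_btc    # ~$0.18/day
th_per_dollar = 1 / loss_usd                                # ~5.6 TH/s
print(f"${loss_usd:.2f}/day per TH/s; $1/day buys "
      f"{th_per_dollar / network_th:.4%} of the network")   # ~0.0014%
```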
tl;dr: using NiceHash and slush pool, you can pay 1 US dollar / day to convert 0.0014% of bitcoin's hash power to XT
submitted by medley_of_minds to bitcoinxt [link] [comments]

BIP proposal - Max block size | Erik | Nov 13 2015

Erik on Nov 13 2015:
Hi devs. I was discussing the BIP proposals concerning max block size
yesterday in the #bitcoin channel. I believe that BIP101, fully
utilized, will sooner or later outgrow what consumer hardware can cope
with and thereby centralize Bitcoin. I would therefore like to make a
different proposal:
Motivations:
This is very fast and may make the blockchain grow faster than
consumer hardware can cope with.
block max size will need to arise soon after it has been implemented.
max size. Although it has several cons: 1) The block max size can never
exceed 32 MiB, even if we are so far in the future that larger blocks
are needed. 2) The block max size could reach 32 MiB in a rather fast
manner if pools vote for it, even though consumer hardware today isn't
really ready for the growth that implies. 3) The block max size can be
pushed backwards, which will make TX fees higher and cause a lot of
orphaned low-fee TXes. It could make some smaller mining pools that
depend on lots of TXes with fees unprofitable. It is a serious flaw
which could damage the trust of the network.
will be storage for the larger block chain in the future.
that will be processed in that the fees will rise.
will prevent mainstream users from using the network. There will also be
a lot of orphan TXes which will cause uncertainty and fear of losses
among users that don't know how bitcoin works.
although a few, nodes still must store the complete data -> centralization.
Concepts:
There is always a growth in the block max size. Never a decrease.
The growth rate decision should be in the hands of the miners.
It's good to have limits on the block max size to keep back spam TXes.
Use rules that make for a more smooth and predictable growth.
Rules:
1) Main target growth is 2^(1/2) every second year, or a doubling of
the block max size every four years.
2) The growth rate every second year will strictly be limited by the
formula 2 > growth > linear growth.
3) The target growth could be modified with positive or negative votes,
but it will not exceed the limits of 2) in any direction. Miners could
also choose to not vote.
4) The linear y=kx+m will be formed from the genesis block date with
size 1 MiB (m) through the last retarget block date with current size.
5) Target growth is based on votes from the last 26280 blocks (half a year).
6) Block max size grows at the same time as the block difficulty
retarget (2016 blocks) with the formula 2^((1/2 + (1/2*amount positive
votes) - (1/2*amount negative votes))/52). If the votes propose a lower
growth than the linear, use the linear growth instead. Block size is
floored to byte precision.
7) The amounts of positive/negative votes are calculated as follows:
number of positive or negative votes / 26280.
8) When these rules are put in force, the block max size will
immediately be set to 4 MiB.
Notes:
52 is the number of week pairs, or difficulty retargets, per two years.
1). Also, blocks mined before the implementation count as blocks with
no votes.
Examples:
votes exists for positive and negative side. Then the next block max
size is 4096 KiB*2(1/2/52)=4123.3905 KiB (or exactly 4 222 351 bytes)
Assume a block max size of 11 MiB, 10 years and 2 weeks since the
genesis block, the next block is a retarget, and every vote is
negative. Then 2^((1/2-(1/2))/52) = 1. Since that is lower than the
linear, the next block max size will follow the linear derived from:
(11 MiB - 1 MiB) / (10.00 years) = 1 = k. The formula for a linear is
y = kx + m, where m is the genesis block max size in MiB. Then
y = 1 * (10+1/52) + 1 = 11.019 [MiB] (or exactly 11 554 500 bytes).
If every vote continues to be negative for another four years, the
block max size will then be y = 1*14 + 1 = 15 [MiB].
If everybody instead has put positive votes into the block chain for
the last 4.5 years, then the block max size now is
10 MiB * 2^(((1/2+1/2)/52) * 104) = 10 MiB * 4 = 40 MiB.
If, over the last 26280 blocks, 20% of the votes are positive and 10%
are negative, then the formula will look like:
size * 2^((1/2 + (1/2*0.2) - (1/2*0.1)) / 52)
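To make the retarget rule concrete, here's a small sketch of rule 6 (vote fractions are inputs; the linear-growth floor from rule 4 is omitted for brevity). It reproduces the first example above:

```python
# One retarget step of rule 6:
# new_size = old_size * 2^((1/2 + pos/2 - neg/2) / 52).
# The linear-growth floor (rule 4) is left out for brevity.
def next_max_size(current_size, pos_vote_fraction, neg_vote_fraction):
    exponent = (0.5 + 0.5 * pos_vote_fraction - 0.5 * neg_vote_fraction) / 52
    return current_size * 2 ** exponent

print(next_max_size(4096, 0.0, 0.0))   # no votes: 4096 KiB -> ~4123.39 KiB
print(next_max_size(4096, 0.2, 0.1))   # 20% positive, 10% negative votes
print(next_max_size(4096, 0.0, 1.0))   # all negative: no growth (floor applies)
```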
Pros:
Provides a long term solution that gives the network the opportunity
to cope by itself with the actual state and hardware limits of the
future network. There is no need for a hard fork to adapt to other
growth rates within this proposal's limits.
Provides a smooth growth rate based on a large consensus, thus making
the growth for the near future almost predictable. Having no big jumps
in block max size provides stability to the network.
Miners can choose pools that vote in a way that conforms to the
miners' interests.
Eliminates fluctuating block size, as could happen with the BIP100 proposal.
Cons:
A few single, large entities could either vote for smaller growth of
blocks for a long time, causing TX congestion and mistrust in the
bitcoin network, or, on the contrary, vote for larger growth of
blocks, causing the blockchain to become too large for consumer
hardware. That would result in fewer nodes and, in the worst case, the
closing of small pools. These cases seem extremely unlikely, partly
because of the time and mining power that would be needed, and partly
because of the limits on how much the votes can adjust the growth
rate. It would therefore not pose a large risk.
Sincerely,
Erik Fors
original: http://lists.linuxfoundation.org/pipermail/bitcoin-dev/2015-November/011734.html
submitted by dev_list_bot to bitcoin_devlist [link] [comments]
