What’s Worrying/Exciting about Bitcoin in 2017

Looking ahead to the next year and beyond for bitcoin, I see three main areas of concern, each related to the others. Let’s look at the problems, and the work going on to solve them.

  • Fungibility
  • Centralization
  • Scalability

Fungibility: Protecting Your Privacy

Fungibility technically means all coins are substitutable, but in practice it means that you can spend your bitcoins how you want. That means that nobody has the power to stop your transaction (see: Centralization), and nobody has reason not to accept your coins.

The state of fungibility in bitcoin today is poor. Services exist which aim to trace where bitcoins came from and who owns them. The fact that coins can be traced means some services are obliged to do so, and they refuse to interact with coins they see as “tainted”.

The simplest weakness of fungibility is the public ledger: everyone can try to analyze payments to see where they went. Consider transaction 3d96bcd… from April 8th 2016; one output is 3.10510875 BTC, the other is 0.05934611 BTC. If we convert them using the USD closing rate from April 7th, that’s $1307.8842 and $24.9968. It’s fair to guess that the second output is a $25 payment, and the first output is change back to the payer. I’d also guess the payer is in the United States.
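
To make that arithmetic concrete, here is a minimal sketch of the round-number heuristic in Python. The output values come from the paragraph above; the exchange rate is my rough approximation of the April 7th close, and the two-cent “roundness” threshold is an arbitrary choice, not part of any standard tool.

```python
# A minimal sketch of the round-number heuristic described above.
# USD_PER_BTC is an approximation of the April 7th 2016 close; the
# threshold for "roundness" is an arbitrary illustrative choice.
USD_PER_BTC = 421.20

outputs_btc = [3.10510875, 0.05934611]

for value in outputs_btc:
    usd = value * USD_PER_BTC
    # An output within a couple of cents of a round dollar amount is a
    # plausible payment; the other output is then likely the change.
    is_roundish = abs(usd - round(usd)) < 0.02
    tag = '<- likely payment' if is_roundish else ''
    print(f"{value:.8f} BTC = ${usd:.4f} {tag}")
```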

Addresses naturally cluster when a wallet has to use more than one input to create a transaction; when public addresses are revealed (particularly with address reuse!), analysis becomes easier. I asked someone to look at my bitcoin address, and he immediately linked me to localbitcoins.com using such techniques.
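
A sketch of that clustering idea, assuming the standard common-input-ownership heuristic (addresses that fund the same transaction are presumed to share an owner) and using made-up transactions rather than real chain data:

```python
# Common-input-ownership clustering via union-find. The transactions
# below are invented for illustration, not real chain data.
from collections import defaultdict

class UnionFind:
    def __init__(self):
        self.parent = {}
    def find(self, x):
        self.parent.setdefault(x, x)
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x
    def union(self, a, b):
        self.parent[self.find(a)] = self.find(b)

# Each transaction is represented by the addresses on its inputs.
transactions = [
    ["addrA", "addrB"],   # A and B spent together -> assumed same wallet
    ["addrB", "addrC"],   # B and C spent together -> C joins the cluster
    ["addrD"],
]

uf = UnionFind()
for inputs in transactions:
    for addr in inputs[1:]:
        uf.union(inputs[0], addr)

clusters = defaultdict(set)
for inputs in transactions:
    for addr in inputs:
        clusters[uf.find(addr)].add(addr)
print(list(clusters.values()))  # [{'addrA','addrB','addrC'}, {'addrD'}]
```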

Different software creates slightly different transactions, which can also be used to link transactions and thus addresses. Differences in fee estimation are another signal. And every transaction you can identify makes it easier to guess the remaining ones, like solving a crossword puzzle. Fungibility is a network property: other people having it helps you have it, too.

There are also active probes going on: fake bitcoin nodes which connect to as many other nodes as they can, presumably to try to nail down the original source of transactions.

What’s Being Done For Fungibility

Software is slowly improving: every Bitcoin Core release changelog seems to include tweaks to make active snooping more difficult.

We may see more uniformity in wallet implementations, too, though in the short term things like replace-by-fee will probably make wallets more different, not less.

The most promising development here is TumbleBit: a tumbler which you don’t need to trust with your coins or your privacy. A normal tumbler is where I take everyone’s coins, and then return them randomly. Of course, I might decide not to return them, or keep records so I can trace whose coins went where. TumbleBit is more complicated, but doesn’t have either of these problems. It’s in early development, but once it’s complete I look forward to quite a few TumbleBit servers muddying the waters.
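
For contrast, here is a toy model of the naive tumbler described above (this is the scheme TumbleBit is designed to replace, not TumbleBit itself), with comments marking the two trust problems:

```python
# A toy model of a naive centralized tumbler, to make the trust
# problems concrete. Participants and amounts are invented.
import random

deposits = {"alice": 1.0, "bob": 1.0, "carol": 1.0}  # coins sent in

# Problem 1: the operator now holds everyone's coins and could simply
# keep them instead of paying out.
# Problem 2: the operator sees exactly who paid in and who gets paid
# out, so it can keep a log linking old coins to new ones.
payout_order = list(deposits)
random.shuffle(payout_order)
operator_log = list(zip(deposits, payout_order))  # the privacy leak
print(operator_log)
```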

Centralization: Control of The Network

If the miners refuse to mine your transactions, your bitcoins aren’t worth anything. With better fungibility that becomes unlikely, but still possible (miners could insist on ID for every transaction, for example).

In most systems, there are economies of scale which drive centralization, and bitcoin mining is no exception. The invention of mining pools dramatically increased centralization, as small miners delegated their transaction selection to a handful of pools (this smooths out a miner’s income through profit sharing). As block sizes increased, the situation became worse: if your block is slow to get out to the other miners, it’s likely to lose a race, and if you’re slow to get blocks from other miners, you’re more likely to produce obsolete blocks. Blocks which lose out like this are called “orphan blocks”, and how often you produce them is your “orphan rate”. More than 1% and your profitability is probably shot.
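
A back-of-envelope model of why propagation delay matters, under my own simplifying assumption (not from the article) that block discovery is a Poisson process with a 600-second mean interval:

```python
# If blocks arrive as a Poisson process with a 600s mean interval, a
# block that takes t seconds to reach the rest of the network gets
# raced with probability 1 - exp(-t/600).
import math

def orphan_rate(propagation_delay_s: float, block_interval_s: float = 600.0) -> float:
    return 1.0 - math.exp(-propagation_delay_s / block_interval_s)

for delay in (1, 6, 30):
    print(f"{delay:>2}s propagation -> {orphan_rate(delay):.2%} orphan rate")
# 6 seconds of delay already gives ~1%, the profitability threshold
# mentioned above.
```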

You can drop your orphan rate by being the biggest miner (or, part of the biggest pool). If a single miner or pool gets more than 50% (which has happened), they can reliably censor the network (which hasn’t). With even less they can still profitably exploit vendors who accept unconfirmed transactions (which has happened). And it turns out that larger miners can drive up orphan rates of other miners (so-called selfish mining) and magnify their advantage.

It should be no surprise, then, that mining is fairly centralized: four groups control more than half the mining power. Fortunately, there don’t seem to be any deliberate orphaning attacks happening.

The other issue is that fear of orphaning leads miners to mine empty blocks (aka SPV mining). They watch other mining pools, and as soon as they see a block header which refers to a new previous block, they start mining an empty block themselves. It has to be an empty block, because they don’t know what transactions were in the previous block. That doesn’t help network throughput at all, and because they are not validating the previous block, it greatly weakens the security of lightweight nodes which assume miners are actually checking blocks. It turns out over 50% of mining power was doing this in 2015, and many still are.

What’s Being Done For Centralization

Fast block propagation was a big area of work last year, with Bitcoin Unlimited’s XTHIN and Bitcoin Core’s Compact Block work. Both send short summaries of the block contents which often allow a node (which usually knows all the transactions already, just not which ones are in this block) to reconstruct it.
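
A much-simplified sketch of that reconstruction idea; real compact blocks use salted SipHash-based short IDs and more machinery, but the shape is the same:

```python
# Simplified sketch of compact-block-style reconstruction: the sender
# ships short transaction IDs, and the receiver rebuilds the block from
# its own mempool, requesting only what it is missing. Plain truncated
# SHA-256 stands in for the real salted short-ID scheme.
import hashlib

def short_id(txid: str) -> bytes:
    return hashlib.sha256(txid.encode()).digest()[:6]

mempool = {short_id(t): t for t in ("tx1", "tx2", "tx3")}  # receiver's view
block_txids = ["tx1", "tx3", "tx9"]                        # sender's block
summary = [short_id(t) for t in block_txids]               # what's actually sent

reconstructed, missing = [], []
for sid in summary:
    if sid in mempool:
        reconstructed.append(mempool[sid])  # already known: no re-send needed
    else:
        missing.append(sid)                 # must be requested explicitly
print(reconstructed, missing)
```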

Matt Corallo previously ran the Bitcoin Relay Network to try to improve propagation and reduce the incentive to SPV mine; the latest version, called Bitcoin Fibre, is based on compact blocks and is even more efficient. You’re welcome to run your own Fibre network, too (I run a test one on Digital Ocean, for example). It uses UDP and error correction so you can get blocks from multiple sources at once, and handle packet loss. Matt claims that there’s no point in SPV mining any more; Fibre gets you the blocks just as fast.

There’s ongoing work on speeding up new block creation further: I’m told Bitcoin Unlimited removed the validity double-check on newly created blocks (it’s caught issues in the past, but maybe it’s time) and Bitcoin Core has worked on speeding it up so it’s no longer measurable. Combined with more significant fee income (which is lost when SPV mining), we may see SPV mining eliminated this year.

None of these addresses the core problem of centralization; this is the issue we have the fewest technical fixes for, and thus it is likely to be the least amenable to technical efforts.

Nonetheless, Roger Ver’s bitcoin.com mining pool gives me hope that we’ll see some diversity in motivations for miners. Making life easier and more convenient for small miners (especially solo mining) should be a priority for those who care about centralization. In the long term, as more businesses become dependent on bitcoin, I’d like them to start investing in mining capacity as a kind of distributed insurance policy.

Scalability: More Transactions

In the early days, bitcoin software had a 100k block limit and no transaction fees were required. Nobody cared, and blocks were never full.

When blocks passed 700k, bitcoin saw its first centralization crisis as orphan rates spiked and one pool (Ghash.io) got over 50% of the hash power. Since then, developers have scrambled to improve block propagation; in theory, it could be independent of block size, but in practice it’s not. Centralization has remained a core source of tension with hopes for enlarging the blocksize. Blocks are now full (though only at 85% of the theoretical maximum), and the transition from “free” to “user pays” is causing pain as software has to be upgraded and users proceed through the stages of mourning for free transactions (disbelief, denial, bargaining, guilt, anger, depression, and acceptance).

But other scalability issues exist: the bitcoin history has reached 100GB (that’s a lot of work for starting a new node), the size of unspent outputs each node has to remember keeps expanding (it must remember these forever), and the number of full nodes in the network is in long-term decline (though currently flat).

What’s Being Done For Scalability

There are several “20% improvement” factors on the horizon, and together they multiply to give significant improvements in scalability as software improves. Rising fees are causing wallet authors to (finally!) begin optimizing their transactions, because users are noticing.

Block propagation has gotten better (see centralization above) and slightly less coupled to blocksize, and validation has gotten much faster (thanks largely to libsecp256k1), which may see us close the gap between the theoretical 1MB blocksize and the current 850k average blocksize.

Segregated Witness should increase blocks to about 2MB, though it depends how quickly the ecosystem (wallets and other transaction businesses) start using it.

Segregated witness makes signatures (aka “witnesses”) discardable, and gives them a discount relative to the parts of transactions which must be kept (i.e. unspent outputs). This should bias wallets towards using it, so that more of each block can be discarded by nodes.
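
A sketch of the discount arithmetic as segwit defines it: non-witness bytes count four times, witness bytes once, against a 4,000,000-weight cap. The per-transaction byte counts below are illustrative round numbers, not a real transaction:

```python
# Segwit weight arithmetic: weight = 4 * non-witness bytes + witness
# bytes, capped at 4,000,000 per block. Byte counts are illustrative.
WEIGHT_LIMIT = 4_000_000

def tx_weight(base_bytes: int, witness_bytes: int) -> int:
    return 4 * base_bytes + witness_bytes

legacy = tx_weight(base_bytes=250, witness_bytes=0)    # all bytes at full price
segwit = tx_weight(base_bytes=120, witness_bytes=130)  # same 250 total bytes

for name, w in (("legacy", legacy), ("segwit", segwit)):
    print(f"{name}: {w} weight -> ~{WEIGHT_LIMIT // w} txs per block")
# The more of a transaction that is witness data, the closer blocks get
# to the ~2MB the article mentions.
```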

Replace-by-fee is becoming more common: it allows you to bump the fee on transactions which are taking too long to confirm. This not only means you can start with more aggressively low fees, it also allows you to combine multiple payments into one if you have them, which reduces your total transaction size.
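
Rough size arithmetic for that batching effect, using common rule-of-thumb sizes for pay-to-pubkey-hash transactions (roughly 148 bytes per input, 34 per output, 10 of overhead; these are approximations, not exact figures):

```python
# Rule-of-thumb P2PKH transaction sizing: ~148 bytes per input,
# ~34 bytes per output, ~10 bytes of fixed overhead.
def tx_size(n_inputs: int, n_outputs: int) -> int:
    return n_inputs * 148 + n_outputs * 34 + 10

# Two separate payments, each with one input, one payment, one change:
separate = 2 * tx_size(1, 2)
# One replacement transaction paying both recipients plus one change:
batched = tx_size(1, 3)
print(separate, batched)  # 452 vs 260 bytes: roughly 40% smaller
```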

On the horizon are Schnorr signatures, which can be combined together, reducing witness size even further: instead of a transaction with two inputs which are each a 33-byte key and 72-byte signature, we might have two 33-byte keys and a single signature. Interestingly, this also provides an incentive to adopt mixing protocols (like TumbleBit) because they are smaller and hence cheaper, helping the network’s fungibility even if you don’t care about fungibility yourself.
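
The paragraph’s arithmetic, spelled out using the key and signature sizes given above:

```python
# Witness size for a two-input transaction, before and after Schnorr
# signature aggregation, using the sizes from the paragraph above.
KEY, SIG = 33, 72

ecdsa_witness = 2 * (KEY + SIG)        # two keys, two signatures: 210 bytes
schnorr_witness = 2 * KEY + SIG        # two keys, one aggregate sig: 138 bytes
print(ecdsa_witness, schnorr_witness)  # 210 -> 138, a ~34% saving
```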

Finally, there are at least two significant efforts to create off-chain scaling for bitcoin: Lightning for microtransactions, and the proposed sidechain MimbleWimble. Lightning takes Satoshi’s original (but incomplete) ideas for payment channels on top of bitcoin, makes them bi-directional and trustless, and forms them into a network. There are at least four teams of us actively working on implementing it. MimbleWimble is more radical: it uses a cut-down, scriptless bitcoin with some amazing math to produce a blockchain which doesn’t require transmission or storage of any historical state, just the current unspent outputs, without loss of security (but with great fungibility benefits). Implemented as a sidechain, you would move bitcoins across to it, then back. It has cast its spell on Andrew Poelstra, and I look forward to seeing an alpha release this year.

Conclusion

It’s often hard to find an overview of all the different threads of development and effort going on at once in the bitcoin technical community. I haven’t even covered more speculative things like Bitcoin-NG or Confidential Transactions, nor developments which don’t directly address these three areas, such as covenants or new scripting enhancements, let alone the things which will no doubt be dropped from the sky.

But hopefully this gives you a list of things I’m looking forward to in 2017!

________________________________________________
About the Author 

This article was written by Rusty Russell. Rusty is a Linux kernel dev who wandered into Blockstream, and is currently trying to produce a prototype and spec for bitcoin lightning.


The Brittle vs. Ductile Strategy for Business


Companies and startups often pursue a path of “brittle strategy”; in its execution, it can be translated, in layman’s terms, into something like this:
Heard about the guy who fell off a skyscraper? On his way down past each floor, he kept saying to reassure himself: “So far so good… so far so good… so far so good.” How you fall doesn’t matter. It’s how you land!
– La Haine (1995)

Brittle strategy:

A brittle strategy is based on a number of conditions and assumptions which, once violated, cause it to collapse almost instantly or fail badly in some way. That does not mean a brittle strategy is weak: its conditions may well hold, and the payoff from using such a strategy tends to be higher. The danger is that it provides a false sense of security in which everything seems to work perfectly well, until everything suddenly collapses, catastrophically and in a flash, like a house of cards falling. Employing such an approach enforces a binary resolution: your strategy will break rather than bend, simply because there is no plan B.
From observation, the strategy landscape of medium to large corporations is often dominated by brittle “control” strategies as opposed to robust or ductile ones. Both approaches have their strengths and their applicability to winning the corporate competition game. The key to most brittle strategies, especially control strategies, is to learn every opponent’s options precisely and allocate the minimum resources needed to neutralize them, while accumulating a decisive advantage at the critical time and place. In larger corporations, this approach is often driven by the tendency to feed the beast within the company: that is, to allocate resources to the most successful and productive department, core product, and so on. While this seems to make sense, the perverse effect is that it becomes quite hard to shift resources in order to handle market evolution correctly. As a result, the company gets blindsided by a smaller player, which in turn uses a similar brittle strategy to take over the market.

Startups and small companies often opt for a brittle strategy out of necessity, due to economic constraints and ecosystem limitations. Because they do not have the financial firepower to compete with larger players over a long stretch of time, they need to approach things from a different angle. These entities are forced to select an approach that abuses the inertia and risk-averse behavior of the larger corporations. They count on the tendency of larger enterprises to avoid brittle counter-strategies (strategies designed specifically to defeat other brittle strategies), since such counter-strategies often fail in the bigger market ecosystem against more generic ones. Hence, small and nimble companies try to leverage the opportunity to gain enough market share before the competition is able to react.

Ductile strategy:

The counterpart of the brittle strategy is the ductile strategy. This type of strategy is designed to have fewer critical points of failure, while allowing the company to survive if some of its assumptions are violated. This does not mean the strategy is generally stronger, as the payoff is often lower than with a brittle one; it is just perceived as safer at the outset. This type of approach will fail slowly under attack while making alarming noises. To use an analogy, it is similar to a suspension bridge built with stranded cables: when such a bridge is on the brink of collapse, it makes loud noises, allowing people to escape the danger. A company can leverage similar warning signs, if the correct tools and processes are put in place, to correct course and adapt in time, mitigating or avoiding catastrophic failure.
To a certain extent, the pivot strategy for startups offers a robust option for testing the viability of different hypotheses about the product, business model, and engine of growth. It basically allows the company to iterate quickly over brittle strategies until a successful one is discovered. Once found, the company can spring out and try to take over the market using this asymmetrical approach. For a bigger structure, using the PST model combined with Mapping provides an excellent starting point, as long as you have engineered and deployed within your company the correct monitoring systems to understand where you stand at any time. Effectively, you need to build a layered strategic approach via core, strategic, and venture efforts, combined with constant monitoring of your surroundings. This allows you to take risks with calculated exposure. With a correct understanding of your situation (situational awareness), you will be able to mitigate threats and react quickly via built-in agility. However, we cannot rely solely on techniques that allow your strategy to take risks while failing gracefully; we need techniques that do so without significant added cost. The cost differential between stranded and solid cables in a bridge is small, and, like bridges, the operational cost differential between ductile and brittle strategies should be low. This topic is beyond the scope of this blog post, but I will endeavor to expand on it in a subsequent post.
Ductile vs Brittle:

The defining question between the two types of strategy is rather simple: which approach will guarantee a greater chance of success? From a market point of view, this question often turns into: is there a brittle strategy that defeats the robust strategy?

By estimating the percentage of success a brittle strategy has against the other strategies in use, weighted by how often each strategy is used by each competitor, you can determine the chances of success. Doing this analysis is a question of understanding the overall market meta-competition. There will be brittle strategies that are optimal at defeating other brittle strategies but fail against robust ones; a robust strategy will succeed against certain categories of brittle strategy but be wiped out by others. Worse still, there is a recipe for a degenerate competitive ecosystem if any one strategy is too good or counter-strategies are too weak overall.

Identifying the right strategy is an extremely difficult exercise. Companies do not openly expose their strategies, and often they do not have a clear one in the first place. As a result, if there is a perception that the brittle strategy defeats the ductile one, the brittle approach ends up dominating the landscape. Strategy consulting companies often rely on this perception in order to sell the “prêt-à-porter” strategy of the season. Furthermore, ductile strategies tend to be dismissed: not only do they require a certain amount of discipline, but the effort required to make them succeed can be daunting. They require a real-time understanding of the external and internal environment. They rely on the deployment of a fractal organisation that enables fast and risky moves while maintaining a robust back end. And finally, they require the capability, and the stomach, to take risks beyond maintaining the status quo. As a result, the brittle strategy often ends up more attractive because of its simplicity, all the more so because it benefits from an unconscious bias.

The Brittle strategy bias:

Brittle strategies have problems “in the real world”. They often fail unpredictably when unforeseen events occur. The problem is that we react and try to fix things going forward based on previous experience, but the next event is always a little different. Economists and businessmen have names for the strategy of assuming the best and bailing out if the worst happens, like “picking up pennies in front of steamrollers” and “capital decimation partners”.
It is a very profitable strategy for those who are lucky, where the “bad outcome” does not happen. Indeed, a number of “successful” companies have survived the competitive market using these strategies, and because the (hi)story is often told from the winner’s side only, we inadvertently overlook those that didn’t succeed. In turn, a lot of executives fall for the siren song of survivorship bias, dragging more and more corporations into similar strategies alongside them.
In the end, the whole lot ends up suffering from a generalized red queen effect, whereby they spend a large amount of effort standing still (or copying their neighbors’ approach). This is why, when a new successful startup emerges, you see a plethora of similar companies claiming to apply a similar business model. At the moment it’s all about “Uber for X” and its variants. If they are lucky, they will end up mildly successful. But most of them will fail, as the larger corporations have already been exposed to the approach and have probably bought into the hype.
________________________________________________________________
About the Author
This article was written by Benoit Hudzia of Reflections of the Void, a blog about life, Engineering, Business, Research, and everything else (especially everything else).

What Kills A Startup


1 – Being inflexible and not actively seeking or using customer feedback

Ignoring your users is a tried and true way to fail. Yes, that sounds obvious, but this was the #1 reason given for failure amongst the 32 startup failure post-mortems we analyzed. Tunnel vision and not gathering user feedback are fatal flaws for most startups. For instance, ecrowds, a web content management system company, said: “We spent way too much time building it for ourselves and not getting feedback from prospects — it’s easy to get tunnel vision. I’d recommend not going more than two or three months from the initial start to getting in the hands of prospects that are truly objective.”

2 – Building a solution looking for a problem, i.e., not targeting a “market need”

Choosing to tackle problems that are interesting to solve rather than those that serve a market need was often cited as a reason for failure. Sure, you can build an app and see if it will stick, but knowing there is a market need upfront is a good thing. “Companies should tackle market problems not technical problems” according to the BricaBox founder. One of the main reasons BricaBox failed was because it was solving a technical problem. The founder states that, “While it’s good to scratch itches, it’s best to scratch those you share with the greater market. If you want to solve a technical problem, get a group together and do it as open source.”

3 – Not the right team

A diverse team with different skill sets was often cited as being critical to the success of a startup. Failure post-mortems often lamented, “I wish we had a CTO from the start,” or wished that the startup had “a founder that loved the business aspect of things”. In some cases, the founding team wished they had more checks and balances. As Nouncer’s founder stated, “This brings me back to the underlying problem: I didn’t have a partner to balance me out and provide sanity checks for business and technology decisions made.” Wesabe’s founder also stated that he was the sole and quite stubborn decision maker for much of the enterprise’s life, and therefore he can blame no one but himself for the failures of Wesabe. Team deficiencies were given as a reason for startup failure almost 1/3 of the time.

4 – Poor Marketing

Knowing your target audience, and knowing how to get their attention and convert them to leads and ultimately customers, is one of the most important skills of a successful business. Yet in almost 30% of failures, ineffective marketing was a primary cause. Oftentimes, the inability to market was a function of founders who liked to code or build product but didn’t relish the idea of promoting it. The folks at Devver highlighted finding someone who enjoys creating and finding distribution channels and developing business relationships as a key need that startups should ensure they fill.

5 – Ran out of cash

Money and time are finite and need to be allocated judiciously. The question of how you should spend your money was a frequent conundrum and reason for failure cited by failed startups. The decision on whether to spend significantly upfront to get the product off the ground, or to develop gradually over time, is a tough act to balance. The team at YouCastr cited money problems as the reason for failure, but went on to highlight other reasons for shutting down vs. trying to raise more money, writing:

The single biggest reason we are closing down (a common one) is running out of cash. Despite putting the company in an EXTREMELY lean position, generating revenue, and holding out as long as we could, we didn’t have the cash to keep going. The next few reasons shed more light as to why we chose to shut down instead of finding more cash.

The old saw was that more companies are killed by poor cashflow than anything else, but factors 1, 2 and 4 are probably the main contributors to that problem: no cash, no flow. Issue No. 3 – the team – is interesting. If I take that comment “I didn’t have a partner to balance me out and provide sanity checks for business and technology decisions made” and think about some of the founders and startup CEOs I know, I can safely say that the main way any decision was made was by agreeing with them – it was “my way or the highway”. I therefore don’t “buy” the team argument; I buy more into the willingness of the key decision makers to change when things are not working (aka “pivoting” – point 9).

_________________________________________________

About the Author

This article was produced by Broadsight. Broadsight is an attempt to build a business not just to consult to the emerging Broadband Media / Quadruple Play / Web 2.0 world, but to be structured according to its open principles.
