Originally sourced from a reddit comment by Vitalik Buterin himself: https://np.reddit.com/r/btc/comments/6ldssd/so_no_worries_ethereums_long_term_value_is_still/djt6opz/
Turing completeness is bad! You need decidability for safety!
We don't agree, but if you want decidability, here's a decidable high-level language (HLL) on top of ethereum that gives you that: http://github.com/ethereum/viper
But we don't need Turing completeness because Post's theorem!
Sure, but it's not about Turing completeness; you're missing the whole point of why a richly stateful model like ethereum can do things that a UTXO model can't. Moeser-Eyal-Sirer wallets, to start off. Then we can talk about advanced state channel constructions like this, ENS auctions, and so on and so forth.
We can do smart contracts too!
Sure, if they're either (i) trivial, or (ii) reliant on a multisig of known intermediaries to actually execute then you can, but not natively.
Smart contracts are useless!
ENS auctions have users. Contracts for token sales that implement logic like funding caps have users. Multisig wallets with single-sig daily withdrawal limits have users, and have been a substantial convenience boon for the ethereum foundation itself.
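The single-sig daily withdrawal limit mentioned above is easy to express as stateful contract logic. A toy sketch in Python (illustrative names and structure, not the actual multisig wallet contract; the full-multisig path for over-limit withdrawals is not modeled):

```python
class DailyLimitWallet:
    """Toy sketch: single-sig withdrawals are allowed up to a daily
    limit; anything larger would need the full multisig path."""

    def __init__(self, balance, daily_limit):
        self.balance = balance
        self.daily_limit = daily_limit
        self.spent_today = 0
        self.window_start = 0  # day index of the current allowance window

    def withdraw(self, amount, day):
        if day != self.window_start:   # new day: reset the allowance
            self.window_start = day
            self.spent_today = 0
        if amount > self.balance:
            return False
        if self.spent_today + amount > self.daily_limit:
            return False               # over the limit: multisig path instead
        self.spent_today += amount
        self.balance -= amount
        return True
```

The point is that this kind of per-account running state (spent-so-far, window start) is natural in an account model and awkward in a stateless UTXO model.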
Ethereum can't scale!
Raises the gas limit by 40%.
No, we mean technical capacity! Like that chart that shows geth nodes take up 150 GB.
First of all, that 150 GB figure is only true if you full-sync; fast syncing gives you 15 GB. Second, this can be handled with state tree pruning, and if you want it now, the Parity client already has it and takes up only ~10 GB. And by the way, we have a really nice light client that you can run instead of a full node, with basically the same functionality; thanks to Merkle Patricia state trees it can even directly verify the state of any account, without any need for interactive protocols or centralization.
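The light-client claim rests on Merkle proofs: a client holding only the state root can check a proof supplied by a full node. A simplified sketch using a binary SHA-256 tree (the real structure is a hexary Merkle Patricia trie hashed with keccak-256; names here are illustrative):

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """Root of a simple binary Merkle tree over the given leaves."""
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])            # duplicate odd tail
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def prove(leaves, index):
    """Collect (sibling_hash, leaf_is_left) pairs from leaf to root."""
    level = [h(leaf) for leaf in leaves]
    proof = []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        proof.append((level[index ^ 1], index % 2 == 0))
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return proof

def verify(leaf, proof, root):
    """A light client needs only the root to check a full node's proof."""
    acc = h(leaf)
    for sibling, leaf_is_left in proof:
        acc = h(acc + sibling) if leaf_is_left else h(sibling + acc)
    return acc == root
```

The proof is logarithmic in the number of accounts, which is why a light client can verify any single account's state without storing the whole state.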
But it's not parallelizable!
EIP 648.
But it's not cacheable!
The version of EIP 648 with full-length address prefixes. And by the way, cacheability only helps reduce stale rates, not initial sync times, and we have an alternative technique to mitigate the practical impact of stale rates: the uncle mechanism. And when full proof of stake hits, it'll become okay to have block processing times that are much longer, as long as they reliably stay under 100% of the block time; in that case cacheability isn't really all that beneficial in the first place.
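The stale-rate vs. sync-time distinction can be made concrete with a rough Poisson-style estimate (my illustration, not a figure from the comment): the chance a competing block appears while yours is still propagating and being processed grows with that latency relative to the block time.

```python
import math

def stale_rate(latency_s, block_time_s):
    """Rough Poisson-model estimate: probability that another block
    is found during the latency window, 1 - e^(-latency/block_time)."""
    return 1 - math.exp(-latency_s / block_time_s)
```

Under an illustrative 15-second block time, cutting processing latency from 2 s to 1 s (say, via caching) roughly halves the estimated stale rate (~12.5% to ~6.5%) — but it does nothing for initial sync, which must replay every historical block regardless.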
Proof of stake is fundamentally flawed! Nothing at stake! Stake grinding!
All of those are covered in the Proof of Stake FAQ: https://github.com/ethereum/wiki/wiki/Proof-of-Stake-FAQ
It's still not scalable enough!
And so on and so forth.
But Bitcoin can do vaults with some fancy thing we're building in Elements Alpha!
Sure, but now you're the one touting hypothetical features that may or may not ever get implemented, due to politics, to counter stuff in Ethereum that works today. And by the way, I looked at the code for doing that even with the feature in place, and it's quite ugly. I'd rather have a clean, simple 15-line Solidity script with startWithdrawal, finalizeWithdrawal and cancelWithdrawal that anyone can understand and audit.
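The withdrawal flow described above can be sketched as a toy state machine (in Python rather than Solidity, with an illustrative delay and names mirroring the functions mentioned; not the actual contract):

```python
class VaultWallet:
    """Toy vault sketch: a withdrawal must sit pending for a delay
    window, during which a recovery key can cancel it, so a thief
    who steals the hot key cannot drain funds instantly."""

    DELAY = 24 * 60 * 60  # 1 day in seconds (illustrative)

    def __init__(self, balance):
        self.balance = balance
        self.pending = None  # (amount, request_time) or None

    def start_withdrawal(self, amount, now):
        assert self.pending is None and amount <= self.balance
        self.pending = (amount, now)

    def cancel_withdrawal(self):
        # Called with the recovery key if the pending withdrawal is fraudulent.
        self.pending = None

    def finalize_withdrawal(self, now):
        amount, t0 = self.pending
        assert now >= t0 + self.DELAY, "delay window not over yet"
        self.balance -= amount
        self.pending = None
        return amount
```

The whole state machine is a few dozen lines of straightforward state transitions, which is the auditability argument being made.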
Smart contracts are not for noobs, noobs will inevitably screw them up. Smart contract programming should only be done by teh elite experts with mad computer science skills, and such experts don't need that kind of user-friendliness.
Even if it is true that only teh elite experts should be coding contracts, (i) a model that's noob friendly is still superior as it'll make it easier for experts to understand too, and (ii) noob-friendliness is still crucial because it greatly expands the set of people who can audit a given contract and make sure it's doing what it says it's doing.
Something something infinite rapidly growing supply.
Casper. Transaction fee reclaiming.
But proof of stake is fundamentally insecure!
We already linked the FAQ that refutes all of that...
But Ethereum's governance is centralized!
Starts publishing recordings of the core dev meetings so people can see what the decision-making process looks like. Spoiler: no, it's not centralized.