As cryptocurrencies mature, the promise of a worldwide distributed ledger and computing platform is now taken for granted by businesses around the world. Billions of dollars have been raised in ICOs that leverage Ethereum’s computing platform. Smart contracts are no longer seen as a marvel; instead they form the core of several successful business ventures.
Ethereum’s true difference from Bitcoin is a Turing-complete contract platform, programmed in a language called Solidity. This was the major improvement that marked the transition from so-called first to second generation cryptocurrencies. First generation cryptos are based on proof-of-work mining and use a trivial transaction processing language. Second generation cryptos are also mined using proof-of-work, but boast a much more sophisticated transactional language capable of describing far more complex problems. The capacity to execute complex contracts was the big innovation Ethereum brought to the markets.
But was it really an improvement?
The Bitcoin language sits below Solidity in the computing language hierarchy. It is a stack-oriented language, which means it has a data structure called (you guessed it) a stack, onto which values can be pushed and from which they can be popped, at the top only. Think of it as a big pile of plates where you cannot pull out the ones below or you’d crash everything. With a stack, you’ve got to take all the other plates off the top in order to access the ones below. The same goes for data pushed onto a stack: to access values pushed earlier, you’ve got to finish the work on the ones at the top so you can discard them (pop them off the stack). This kind of language is cumbersome and cannot express every computable problem. And this was done on purpose, because all it was meant to do was verify transactions. As long as the last thing remaining on the stack is “true”, the Bitcoin transaction will be accepted and then committed onto the blockchain. That is all the Bitcoin language was intended for, and that’s all that it does *. Its main purpose is to verify cryptographic signatures: if the signature on a new transaction checks out against the public key specified by the output it is spending, the last value on the stack will be “true” and the transaction will be committed onto the blockchain. Otherwise it is an invalid transaction and will be rejected by miners.
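To make the plate analogy concrete, here is a tiny Python sketch of stack-based script evaluation. The opcode names and the “last value must be true” rule mimic the spirit of Bitcoin Script, but this is an illustrative toy, not the real interpreter:

```python
# Toy stack-based script evaluator in the spirit of Bitcoin Script.
# Opcode names are simplified stand-ins, not the real opcode set.

def run_script(ops):
    stack = []
    for op in ops:
        if op == "OP_ADD":            # pop two values, push their sum
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "OP_EQUAL":        # pop two values, push whether they match
            b, a = stack.pop(), stack.pop()
            stack.append(a == b)
        else:                         # anything else is data, pushed as-is
            stack.append(op)
    # the script "passes" only if a single truthy value remains on top
    return len(stack) == 1 and bool(stack[-1])

# 2 + 3 == 5, so this script validates
print(run_script([2, 3, "OP_ADD", 5, "OP_EQUAL"]))  # True
print(run_script([2, 3, "OP_ADD", 6, "OP_EQUAL"]))  # False
```

Real Bitcoin scripts work the same way, except the typical operations verify cryptographic signatures rather than add numbers.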
(* Actually, Satoshi/Finney intended the language to do more, but it was scaled back, perhaps to allow them to launch Bitcoin to the markets more quickly. Hal Finney’s health was sadly deteriorating fast at this time, which was probably part of the reason several of the Bitcoin language operations were coded as “NOP” (“no operation”, meaning “do nothing with this instruction”). )
Enter Ethereum and its fully featured Solidity language.
This language is Turing-complete, which means it is capable of running unbounded loops, drawing graphical user interfaces (GUIs), solving differential equations and so on. In essence, a Turing-complete language can express any computable problem in code. Obviously there is no sense in creating GUIs in a contract language, but the important thing to note is that this kind of language can do anything any other language can. It doesn’t run under a windowing environment like the one you’re using right now (be it on your phone or notebook/PC); it runs on a distributed virtual computer. The program runs on the ether, and nobody knows where, exactly, it will get executed. This is where the cryptocurrency gets its name. When someone asks “where do Ethereum programs run”, the only correct answer is “out there”. Solidity programs run on the ether, the empty space that fills the universe.
Ethereum’s concept is considered a step forward from Bitcoin and, rightfully so, the currency’s market value reached #2, just behind BTC itself, on the back of immense demand.
Smart contracts began to be developed for everything from online casinos, to online voting, to new cryptocurrencies built on top of Ethereum itself. The latter, via the ERC20 token standard, spawned a whole industry of ICOs where any tiny little community could create its own currency and trade it on top of Ethereum. A person living in a village in the middle of nowhere could now create a cryptocurrency on their own, using boilerplate code provided by Ethereum, and have it published in minutes. The village economy can then run on the ether as well. Goods can be traded, food can be bought, rent can be paid, all using the token invented by the villagers, just like the Disney tokens you buy to go on the rides and have an ice cream while in the park.
This is great, but there’s a problem. A big one.
The whole Ethereum system runs on top of something called the Ethereum Virtual Machine, or EVM. The name borrows from other virtual machines and from computing research done in the 1960s. Virtual machines are nothing new; in fact, the world’s first concept of a “cloud” was dumb terminals that talked to a mainframe somewhere. This mainframe would often run several virtual machines within it. As PCs gained popularity in the 1980s, the virtual machine concept slowly crept out of the mainstream and individual software became all the rage. A young entrepreneur by the name of Bill Gates published his vision of “a computer on every desk and in every home, running Microsoft software”. He became the world’s most successful businessman ever with that very simple vision. Instead of monstrous machines running in large buildings with dumb terminals spread everywhere, you’d have the terminals themselves do the computing. Decentralization made Bill Gates the world’s richest person.
With Ethereum, decentralization is also the central concept. But instead of installing a program on each machine, the programs are stored in a shared storage system called…the blockchain. Companies publish code onto the blockchain, and the EVM installed on each device/PC/mainframe can run that code as it downloads the blockchain. The programs are located via an address. When you see a “contract address” for an Ethereum token, that contract address is the location of the program code on the blockchain. For example, the Power Ledger electrical energy and appliance trading token resides at address 0x595832f8fc6bf59c85c527fec3740a1b7a361269. When someone wants to buy or sell electricity, they invoke the code located at this address on the Ethereum blockchain. Functions are likewise found by cryptic identifiers. There are no named functions written on the blockchain; all Solidity code is compiled to numerical bytecode. A piece of information called the ABI of a contract, which is not stored on the blockchain, describes each function’s signature. A call carries a four-byte selector, derived from a hash of that signature, plus the encoded arguments: “call the function whose selector is 0xa9059cbb, passing it the following data: 1, 2 and 0x123…”. This is how functions are found and executed on the Ethereum blockchain.
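The selector mechanism can be illustrated with a small Python sketch. The two hardcoded selectors below are the well-known ERC20 ones (the first four bytes of the Keccak-256 hash of each signature); the dispatch logic itself is a simplified stand-in for what compiled EVM bytecode actually does:

```python
# Sketch of EVM-style call dispatch: the first four bytes of the call
# data select which function runs. The selectors below are the standard
# ERC20 ones; the lookup table stands in for compiled bytecode.

SELECTORS = {
    bytes.fromhex("a9059cbb"): "transfer(address,uint256)",
    bytes.fromhex("70a08231"): "balanceOf(address)",
}

def dispatch(calldata: bytes) -> str:
    selector, args = calldata[:4], calldata[4:]
    name = SELECTORS.get(selector)
    if name is None:
        # a real contract would fall through to its fallback function here
        raise ValueError("no matching function selector")
    return f"{name} called with {len(args)} bytes of encoded arguments"

# a transfer() call: 4-byte selector + two 32-byte ABI-encoded arguments
calldata = bytes.fromhex("a9059cbb") + b"\x00" * 64
print(dispatch(calldata))  # transfer(address,uint256) called with 64 bytes...
```

The ABI file is what lets a wallet or web page translate a human-readable function name into this selector-plus-arguments byte string.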
But there’s no such thing as a free lunch (unless you’re Tether), so someone must be paying for all this to happen. And indeed they are.
The Price of Computing on the Ether
For every EVM in the world to be sure it contains the same code, a distributed verification must be performed on every piece of data contained in the blockchain. This is called mining. Ethereum mining was very profitable until recently, and it was made available to the masses by employing a memory-hard hashing algorithm that is tough on ASICs (the specialized mining hardware, like the Antminers used with Bitcoin). Graphics card mining was therefore given a second chance with Ethereum, after Bitcoin became impossible to mine profitably with common PC hardware. But, just like Bitcoin, Ethereum mining is very expensive and consumes enormous amounts of electrical energy.
The electrical energy consumed plus the computing power used in each Solidity computation translates into a price for Ethereum computation. And, as we mentioned before, the Ethereum computing language is Turing-complete. Completeness allows for complexity, and complexity costs money in a system like Ethereum. Lots of money.
The smallest unit of ETH is the wei. Each ETH can be broken down into 10^18 wei (a billion billion). One billion wei is a giga-wei, or gwei for short. Every operation performed by a smart contract is metered in gas, and gas prices are quoted in gwei, billionths of an ETH coin.
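In Python terms, the unit conversions look like this (a trivial sketch, nothing beyond the definitions above is assumed):

```python
# Ethereum denomination arithmetic: wei, gwei and ETH.
WEI_PER_ETH = 10**18   # 1 ETH  = 10^18 wei
WEI_PER_GWEI = 10**9   # 1 gwei = 10^9 wei (a billionth of an ETH)

def gwei_to_eth(gwei: int) -> float:
    """Convert a gwei amount to ETH."""
    return gwei * WEI_PER_GWEI / WEI_PER_ETH

# a 5 gwei gas price, as used in the examples later in this article
print(gwei_to_eth(5))  # 5e-09
```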
Ethereum abstracted the computing cost using a gas station analogy. Whereas with Bitcoin each transaction costs a “fee” charged in Bitcoin itself, with Ethereum you provide a maximum “gas tank” size and how much ETH you’d like to pay for each unit of gas. Therefore, Ethereum GAS has a value set by the person requesting the computation.
It’s a computing power auction. If someone is willing to pay 20 gwei per gas unit for a certain computation, with a maximum of 1 million gas (the “tank size”), and someone else will pay 100 gwei per gas, the latter will win the auction and their code will run first. If you set the gas limit too low and it turns out not to be enough to run the contract, the transaction fails with an out-of-gas error, any changes are rolled back, and the gas already consumed is still paid to the miner.
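The auction and the sender’s worst-case cost can be sketched in a few lines of Python (the names and bids are illustrative):

```python
# Sketch of the gas auction: the sender's worst-case fee is
# gas_limit * gas_price, and miners prefer the highest price per gas.

def max_fee_eth(gas_limit: int, gas_price_gwei: int) -> float:
    """Worst-case fee in ETH: full tank at the offered gwei-per-gas price."""
    return gas_limit * gas_price_gwei * 1e9 / 1e18  # gwei -> wei -> ETH

# (sender, gas limit a.k.a. "tank size", price offered in gwei per gas)
bids = [("alice", 1_000_000, 20), ("bob", 1_000_000, 100)]
winner = max(bids, key=lambda bid: bid[2])
print(winner[0])                    # bob: highest price per gas runs first
print(max_fee_eth(1_000_000, 100))  # 0.1 ETH at risk if bob's tank empties
```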
This is a very interesting concept, and another point where the ETH computation model differs from BTC’s. You can tell that the creators of Ethereum made a profound analysis of the Bitcoin system; they tried their best to create the ultimate Bitcoin substitute.
Back to ETH computing costs, let’s have a look at this excerpt, taken from an excellent text:
In this transaction, we specified a maximum gas allowed to be used (“Gas Limit”) as 1000000 gas. The EVM starts at this number and counts down with each operation to ensure there is enough gas remaining. If the EVM hits zero gas while in the middle of code execution, the transaction fails, changes are undone, and the fee associated with gas is still paid to the miner.
We can see in the VM Trace that we begin the constructor execution at 837872 gas remaining. This means that 162128 (1000000 - 837872), or about 16% of the total gas, has already been used when we get to the constructor. [Emphasis added by Crypto.BI]
As you can see, even before the contract proper starts to execute, over 160k gas out of one million had already been spent just to construct the Solidity object. Let’s take the current gas prices from ETH Gas Station for a simple ETH transfer (one of the simplest contracts there is, if not the simplest). Right now, for a transfer to be finished within 47 seconds, it requires a price of 5 gwei per gas. In US Dollars this is U$ 0.079, or 7.9 cents of a dollar, for a simple ETH transfer which consumes 21000 gas. (Note that, at approximately this same time 24 hours ago, a simple ETH transfer using the same parameters cost over U$ 1 due to the high Ethereum network load.)
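Here is the arithmetic behind that 7.9-cent figure, as a Python sketch. The ETH/USD rate is back-solved from the article’s own numbers, so treat it as an assumption:

```python
# Fee arithmetic for a simple ETH transfer: 21000 gas at 5 gwei per gas.
# The ETH/USD rate below is back-solved from the quoted US$0.079 figure,
# not a live market price.

GAS_TRANSFER = 21_000      # gas consumed by a plain ETH transfer
gas_price_gwei = 5         # gwei offered per unit of gas
eth_usd = 752.0            # assumed rate implied by the article's numbers

fee_eth = GAS_TRANSFER * gas_price_gwei / 1e9  # gwei total -> ETH
print(round(fee_eth, 6))                # 0.000105 (ETH)
print(round(fee_eth * eth_usd, 3))      # 0.079 (USD)
```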
At today’s rate, the 162k gas mentioned in the quoted excerpt would cost U$ 0.61. That is, 61 cents of a dollar simply to construct a Solidity language object(!!), which is absurd. As we said, today’s rate is far lower than a few days ago, not only due to the steep drop in the U$ price of cryptocurrencies, but also due to lighter network traffic.
Not only is computation in Ethereum absurdly expensive, it is not always productive either.
For instance, Solidity’s classic failure mechanism (the old throw) produces only one kind of runtime error, indistinguishable from an “Out of Gas” exception. So if any error happens throughout the execution of a contract, it will consume every last unit of gas that was allotted to it. How about that? For example, it is common for hackers to try to exploit overflow bugs in contracts. Overflow bugs happen when a number grows larger than the computer can store and it “wraps around”, starting again from the lowest possible value, similar to a car odometer that rolls back to zero after reaching 999,999 miles.
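The wraparound is easy to simulate in Python, since the EVM does all of its arithmetic modulo 2^256:

```python
# Simulating uint256 wraparound: EVM arithmetic is modulo 2**256, so
# adding 1 to the maximum value wraps to 0 (the "odometer" effect).

UINT256_MAX = 2**256 - 1

def add_uint256(a: int, b: int) -> int:
    """Add two values the way the EVM does: modulo 2**256."""
    return (a + b) % 2**256

print(add_uint256(UINT256_MAX, 1))  # 0  -- wrapped all the way around
print(add_uint256(UINT256_MAX, 2))  # 1
```

A contract that credits `balance + amount` without checking for this wraparound can be tricked into minting balances out of thin air, which is exactly the class of bug attackers hunt for.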
If you check for initial conditions and determine that an overflow would indeed happen, you must “throw” an error and close the contract transaction immediately, or you’d allow something like the DAO disaster to happen. So your contract did next to nothing: it simply checked the input values before doing anything, determined there would be an error, announced that error before it happened, and closed the computation right there. Well, this just cost you your maximum gas. Back to the gas station analogy: you spent a full tank of gas in order to throw an error.
One of the main differences between a sophisticated language like Solidity and a trivial one like Bitcoin’s is how data is stored.
As mentioned earlier, on Bitcoin the data is stored on a stack. Stacks cannot be randomly accessed. For example, you cannot simply grab the 3rd item from the bottom of a stack of 10 items. You must take the 7 items above it off the top, push them onto another stack, retrieve the 3rd item, then stack them all back up again. In Solidity, on the other hand, you have the luxury of random access memory. Create an array of N things and access any item from 0 to N-1 at will, just by addressing the item’s location within the array.
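The contrast can be demonstrated in a few lines of Python: reading the 3rd item from the bottom of a stack takes seven pops and seven re-pushes, while an array reads it in one step (a toy illustration of the access patterns, not EVM semantics):

```python
# Stack access vs. random access: the same read, two very different costs.

def stack_read_third_from_bottom(stack):
    """Read the 3rd item from the bottom, stack-style: pop everything
    above it onto a spare stack, peek, then restore the original order."""
    spare = []
    while len(stack) > 3:        # pop the 7 items sitting above the target
        spare.append(stack.pop())
    value = stack[-1]            # the target is now on top
    while spare:                 # push everything back where it was
        stack.append(spare.pop())
    return value

items = list(range(1, 11))       # bottom -> top: 1, 2, ..., 10
print(stack_read_third_from_bottom(items[:]))  # 3, after 14 stack moves
print(items[2])                                # 3, in a single array access
```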
This is great, right?
Great, but very, very expensive.
Storing a single 32-byte number on the blockchain costs 20000 gas. Almost as much as an entire ETH transfer! At today’s very discounted rate (compared to the past week), storing a single number on the blockchain would cost 7 cents of a dollar. At the gas prices seen during the recent congestion, storing one gigabyte on the blockchain would cost some 32,000 ETH which, at today’s price of U$ 913/ETH, would mean roughly U$ 29 million dollars. You read that right: storing one gigabyte on the blockchain costs tens of millions of dollars. Compare this to Amazon AWS S3 storage, which offers 99.999999999% durability and charges $0.023 (about two cents of a dollar) per gigabyte per month!
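The gigabyte arithmetic goes like this, sketched in Python. The 20000-gas figure is the EVM’s cost for storing a fresh 32-byte word; the 50 gwei gas price is an assumption chosen to roughly reproduce the congestion-era cost quoted above:

```python
# The storage arithmetic behind the gigabyte figure. SSTORE of a fresh
# 32-byte word costs 20,000 gas; the 50 gwei gas price is an assumed
# congestion-era rate, chosen to land near the ~32,000 ETH quoted above.

GAS_PER_WORD = 20_000          # gas to store one fresh 32-byte word
WORDS_PER_GB = 2**30 // 32     # 33,554,432 storage slots per gigabyte

gas_per_gb = WORDS_PER_GB * GAS_PER_WORD
eth_per_gb = gas_per_gb * 50 / 1e9   # at an assumed 50 gwei per gas

print(gas_per_gb)        # 671088640000 gas for one gigabyte
print(round(eth_per_gb)) # 33554 ETH, the same order as the quoted figure
```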
As you can see, Ethereum is a marvelous technology which proved that complex programs can be stored and executed in a completely decentralized fashion. Mining guarantees the integrity of distributed apps, and a sophisticated language guarantees that any computer program can be expressed on the Ethereum Virtual Machine.
But as we can trivially show, computation and storage costs are absolutely prohibitive. This platform simply cannot scale at current prices. With Ethereum oscillating in the U$ 1000 range, no realistically complex application can be deployed using Solidity and the EVM.