Consensus Algorithm...
 

Consensus Algorithms: a complete list/comparison (Pros & Cons) of all consensus algorithms, 21 to 26.


alberdioni8406
(@alberdioni8406)
Estimable Member
XQR:58
Joined: 7 months ago
Posts: 205
May 21, 2019 10:28 am  
21. Delegated Byzantine Fault Tolerance (dBFT)
Pros:
Fast. Scalable.
Cons:
Everyone is fighting to be the root chain. There can be several root chains.
Used by: Neo
Explanation: dBFT stands for Delegated Byzantine Fault Tolerance, a Byzantine fault-tolerant consensus mechanism that enables large-scale participation in consensus through proxy voting. Holders of the NEO token can, by voting, pick the bookkeepers they support. The selected group of bookkeepers then reaches consensus through a BFT algorithm and generates new blocks. Voting in the NEO network continues in real time, rather than according to a fixed term.
The dBFT provides fault tolerance of f = ⌊(n-1)/3⌋ for a consensus system consisting of n consensus nodes. This fault tolerance covers both security and availability, is resistant to general and Byzantine failures, and is suitable for any network environment. dBFT has good finality, meaning that once confirmations are final, the block cannot be forked and the transaction will not be revoked or rolled back.
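For intuition, here is a minimal sketch of that fault-tolerance arithmetic (the node counts below are just illustrative, not NEO's actual configuration):

```python
def dbft_fault_tolerance(n: int) -> int:
    """Maximum number of faulty nodes a dBFT network of n consensus
    nodes can tolerate: f = floor((n - 1) / 3)."""
    return (n - 1) // 3

def quorum_size(n: int) -> int:
    """Number of agreeing nodes needed to commit a block (n - f)."""
    return n - dbft_fault_tolerance(n)

# Illustrative node counts only
for n in (4, 7, 10):
    print(f"n={n}: tolerates f={dbft_fault_tolerance(n)} faulty nodes, "
          f"commit needs {quorum_size(n)} agreeing nodes")
```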
In the NEO dBFT consensus mechanism, a block is generated roughly every 15 to 20 seconds and transaction throughput is measured at about 1,000 TPS, which is excellent performance among public chains. With appropriate optimization there is potential to reach 10,000 TPS, allowing it to support large-scale commercial applications.
The dBFT incorporates digital identity technology, meaning the bookkeepers can be real-named individuals or institutions. Thus it is possible to freeze, revoke, inherit, retrieve, and transfer ownership of assets as a result of judicial decisions about them. This facilitates the registration of compliant financial assets on the NEO network. The NEO network plans to support such operations when necessary.
 
22. Raft
Pros:
Simpler model than Paxos, but offers the same safety guarantees.
Implementations are available in many languages.
Cons:
Usually used in private, permissioned networks.
Used by: IPFS Private Cluster, Quorum
Explanation: Raft is a consensus algorithm designed as an alternative to Paxos. It was meant to be more understandable than Paxos through separation of concerns, and it is also formally proven safe and offers some additional features. Raft offers a generic way to distribute a state machine across a cluster of computing systems, ensuring that each node in the cluster agrees upon the same series of state transitions. It has a number of open-source reference implementations, with full-specification implementations in Go, C++, Java, and Scala.
Raft achieves consensus via an elected leader. A server in a Raft cluster is either a leader or a follower, and can be a candidate in the specific case of an election (when the leader is unavailable). The leader is responsible for log replication to the followers. It regularly informs the followers of its existence by sending a heartbeat message. Each follower has a timeout (typically between 150 and 300 ms) within which it expects the heartbeat from the leader. The timeout is reset on receiving the heartbeat. If no heartbeat is received, the follower changes its status to candidate and starts a leader election.
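As a rough illustration of that heartbeat/timeout behaviour, here is a minimal, non-networked Python sketch (the class and method names are made up for the example, not taken from any real Raft implementation):

```python
import random
import time

class RaftNode:
    def __init__(self, node_id):
        self.node_id = node_id
        self.state = "follower"            # follower, candidate, or leader
        self.current_term = 0
        # Randomized election timeout, typically 150-300 ms
        self.election_timeout = random.uniform(0.150, 0.300)
        self.last_heartbeat = time.monotonic()

    def on_heartbeat(self, term):
        """Called when a heartbeat arrives from the leader."""
        if term >= self.current_term:
            self.current_term = term
            self.state = "follower"
            self.last_heartbeat = time.monotonic()   # reset the timeout

    def tick(self):
        """Called periodically; starts an election if the leader went quiet."""
        elapsed = time.monotonic() - self.last_heartbeat
        if self.state != "leader" and elapsed > self.election_timeout:
            self.state = "candidate"
            self.current_term += 1                   # new term, vote for self
            print(f"node {self.node_id}: timeout after {elapsed*1000:.0f} ms, "
                  f"starting election for term {self.current_term}")

node = RaftNode(1)
time.sleep(0.35)   # simulate a silent leader
node.tick()        # the follower times out and becomes a candidate
```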
 
23. Stellar Consensus
Pros:
Decentralized control
Low latency
Flexible trust
Asymptotic security
Cons:
Used by: Stellar
Explanation: It is based on federated Byzantine agreement (mentioned above).
The Stellar Consensus Protocol (SCP) provides a way to reach consensus without relying on a closed system to accurately record financial transactions. SCP has a set of provable safety properties that optimize for safety over liveness — in the event of partition or misbehaving nodes, it halts progress of the network until consensus can be reached. SCP simultaneously enjoys four key properties: decentralized control, low latency, flexible trust, and asymptotic security.
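To give a feel for what "flexible trust" means in a federated Byzantine agreement setting, here is a toy sketch in which each node picks its own quorum slices (sets of nodes it trusts) and a quorum is any set that contains at least one slice of each of its members. The node names and slices are invented for the example; this is not SCP itself.

```python
# Each node declares the slices it trusts (invented example data).
slices = {
    "A": [{"A", "B", "C"}],
    "B": [{"B", "C", "D"}],
    "C": [{"B", "C", "D"}],
    "D": [{"B", "C", "D"}],
}

def is_quorum(candidate: set) -> bool:
    """A set is a quorum if every member has at least one of its
    slices fully contained in the set."""
    return all(
        any(slice_ <= candidate for slice_ in slices[node])
        for node in candidate
    )

print(is_quorum({"B", "C", "D"}))   # True: each member finds a slice inside
print(is_quorum({"A", "B"}))        # False: A's slice {A, B, C} is not contained
```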
24. Proof of Believability
 
Pros:
More decentralized than traditional PoS, using the concept of Servi (explained below).
Faster finality compared to traditional PoS.
Used by: IOST
Explanation: A major challenge faced by traditional Proof-of-Stake consensus mechanisms is the tendency towards centralization. In order to mitigate this risk, IOST introduced Servi as both a measurement of users' contribution to the community and a way to encourage members to contribute to the continued development of the IOSChain. It has the following attributes:
Non-tradable: Since Servi is not designed as a medium of exchange, it cannot be traded or exchanged in any way.
Self-destructive: After validating a block, the system will automatically clear the Servi balance owned by the validator. In this way, nodes with high believability scores can take turns validating blocks, ensuring a fair block generation process.
Self-issuance: Servi is generated and deposited into user accounts automatically after certain contributions, such as providing community services, evaluating services provided by other entities, and/or making other special contributions.
Traditional blockchain systems have an inherent trade-off between safety and throughput, depending on shard size: a system with a large number of small shards delivers better performance but provides less resiliency against bad actors, and vice versa (a problem also faced by Casper). In order to break this trade-off in a way that keeps safety and increases throughput, IOST proposed an innovative Proof-of-Believability (PoB) consensus protocol for IOSChain. PoB guarantees that nodes have a negligible probability of misbehaving, while significantly increasing transaction throughput through size-one shards.
The Proof-of-Believability consensus protocol uses an intra-shard Believable-First approach. The protocol divides all validators into two groups: a believable league and a normal league. Believable validators process transactions quickly in the first phase. Afterwards, normal validators sample and verify the transactions in the second phase to provide finality and ensure verifiability. The chance of a node being elected into the believable league is determined by a believability score, which is calculated from multiple factors (e.g., token balance, contributions to the community, reviews, etc.). A node with a higher believability score is more likely to be elected into the believable league. Believable validators follow the procedures to decide the set of committed transactions and their order, and process them in that order. Believable validators also form smaller groups, one validator per group, and transactions are randomly distributed among them. Consequently, they produce smaller blocks with extremely low latency.
However, this may introduce a security problem, since only one node performs the verification, so some corrupted transactions might be committed by misbehaving validators. In order to solve this, the protocol specifies a sampling probability with which normal validators sample transactions and detect inconsistencies. If a validator is caught misbehaving, it loses all of its tokens and reputation in the system, while the defrauded users are compensated for any loss. The believable-first approach makes processing transactions extremely fast, as only a single (believable) validator does the verification and it is unlikely to misbehave.
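A toy sketch of the league split and sampling idea follows. The scoring weights and the sampling probability below are purely illustrative; IOST's actual formula is not given in this post.

```python
import random

# Purely illustrative weighting -- IOST's real believability formula
# is not specified here, so these factors and weights are made up.
def believability_score(token_balance, servi, reviews):
    return 0.5 * token_balance + 0.4 * servi + 0.1 * reviews

validators = {
    "v1": believability_score(1000, 50, 10),
    "v2": believability_score(200, 300, 40),
    "v3": believability_score(50, 10, 5),
    "v4": believability_score(800, 120, 25),
}

# Higher-scoring nodes are more likely to enter the believable league;
# taking the top half here is just a stand-in for that election.
ranked = sorted(validators, key=validators.get, reverse=True)
believable_league = ranked[: len(ranked) // 2]
normal_league = ranked[len(ranked) // 2:]

# Normal validators later re-sample a fraction of committed transactions
# to detect misbehaviour (the sampling probability is a protocol parameter).
SAMPLING_PROBABILITY = 0.1
sampled = [tx for tx in range(100) if random.random() < SAMPLING_PROBABILITY]

print(believable_league, normal_league, len(sampled))
```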
To know more about their sharding policy, refer to Further Reading.
 
25. Directed Acyclic Graphs
Pros:
Highly scalable due to their non-linear structure.
Fast.
Energy efficient.
Finality is achieved instantly.
Cons:
Smart contracts can only be implemented with the help of oracles.
Used by: Iota, HashGraph, Byteball, RaiBlocks/Nano
Explanation: DAGs, or Directed Acyclic Graphs, are a more general form of blockchain. They are popular for their inherently high scalability, which comes from their unique structure.
Basically, in any blockchain system we have a linear structure: blocks are added to the chain one by one. This makes blockchains inherently slow, since blocks can't be added to the chain in parallel. In a DAG, however, blocks/transactions are added in parallel, with each block/transaction confirming a number of previous ones. This makes DAGs inherently scalable.
There are a number of variations depending on:
the algorithm for choosing which previous blocks to verify, aka the tip-selection algorithm;
how the ordering of transactions is done;
how finality is reached.
Here are a few popular projects which use DAGs.
25 a) Tangle (IOTA)
 
Explanation: Tangle is the DAG consensus algorithm used by Iota. In order to send an Iota transaction, you need to validate two previous transactions you've received. This two-for-one, pay-it-forward consensus strengthens the validity of transactions as more transactions are added to the Tangle. Because consensus is established by the transactions themselves, in theory, if someone could generate 1/3 of the transactions they could convince the rest of the network that their invalid transactions are valid. Until there's enough transaction volume that creating 1/3 of the volume becomes infeasible, Iota is sort of "double-checking" all of the network's transactions on a centralized node called "The Coordinator". Iota says The Coordinator works like training wheels for the system, and will be removed once the Tangle is big enough.
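Here is a minimal sketch of the "validate two previous transactions" idea, using random tip selection as a stand-in for IOTA's actual weighted-walk algorithm (transaction names and the wave structure are invented for the example):

```python
import random

# Minimal Tangle-like DAG: every new transaction approves (up to) two
# existing "tips" -- transactions that nobody has approved yet.
approvals = {"genesis": []}          # tx -> transactions it approves

def tips():
    approved = {t for targets in approvals.values() for t in targets}
    return [tx for tx in approvals if tx not in approved]

def attach(tx_id, snapshot):
    # issuing a transaction = validating two tips seen at issue time
    approvals[tx_id] = random.sample(snapshot, k=min(2, len(snapshot)))

# issue transactions in small concurrent "waves" so the graph branches
tx_counter = 0
for _ in range(3):
    snapshot = tips()                # issuers in this wave see the same tips
    for _ in range(3):
        attach(f"tx{tx_counter}", snapshot)
        tx_counter += 1

print(approvals)   # each transaction points at the tips it validated
```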
 
25 b) Hashgraph
 
Explanation: Hashgraph is a gossip-protocol consensus developed by Leemon Baird. Nodes share their known transactions with other nodes at random, so eventually all transactions are gossiped around to all of the nodes. Hashgraph is a great option for private networks, but you're not going to see it implemented in a public network like Ethereum or Dispatch any time soon.
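A toy sketch of that random-gossip spreading (it ignores Hashgraph's "gossip about gossip" events and virtual voting, and the node/transaction names are invented):

```python
import random

# Each node starts knowing only its own transaction; in each round every
# node syncs with a random peer and both merge what they know, so
# knowledge spreads quickly through the network.
nodes = {i: {f"tx-from-{i}"} for i in range(5)}

def gossip_round():
    for node_id in nodes:
        peer = random.choice([p for p in nodes if p != node_id])
        nodes[node_id] |= nodes[peer]
        nodes[peer] |= nodes[node_id]

rounds = 0
while any(len(known) < len(nodes) for known in nodes.values()):
    gossip_round()
    rounds += 1

print(f"all nodes know all transactions after {rounds} gossip rounds")
```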
 
25 c) Holochain
Explanation: Quite similar to Hashgraph, but not Hashgraph. It provides a data structure that can be used to build decentralized apps. Each user has their own chain, which they can add data to, including financial transactions. Chains can merge, split, and interact in complex ways. The data is stored in a decentralized way (as in BitTorrent). The data has a hash, a mathematical fingerprint that corresponds to the data. If someone tampers with the data, the mismatch between the data and the hash will be noticed and the data rejected as invalid. Digital signatures guarantee the authorship of the data. It's BitTorrent plus Git plus digital signatures.
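A tiny sketch of that tamper-detection idea: content is addressed by its hash, so any modification produces a mismatch and the entry is rejected (signature checking is omitted here; the entry contents are invented):

```python
import hashlib

def content_hash(data: bytes) -> str:
    """Mathematical fingerprint of the data."""
    return hashlib.sha256(data).hexdigest()

entry = b'{"type": "transaction", "amount": 5}'
stored_hash = content_hash(entry)

tampered = b'{"type": "transaction", "amount": 500}'
print(content_hash(entry) == stored_hash)      # True: data matches its hash
print(content_hash(tampered) == stored_hash)   # False: tampering is detected
```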
 
25 d) Block-Lattice (Nano)
Explanation: Nano (formerly Raiblocks) runs with a twist on the blockchain called a Block-lattice. The Block-lattice is a structure where every user (address) gets their own chain that only they can write to, and everyone holds a copy of all of the chains.
 
Every transaction is broken down into a send block on the sender's chain and a receive block on the receiving party's chain. The Block-lattice seems almost too simple to work, but it's already out there running in the wild. The unique structure does leave the Block-lattice open to some unique attack vectors, like the penny-spend attack, where attackers inflate the number of chains nodes must keep track of by sending negligible amounts to a wide array of empty wallets.
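A minimal sketch of that send/receive split (field names and structure are illustrative only, not Nano's actual block format):

```python
# Every account has its own chain that only it can write to, and a transfer
# is split into a send block and a matching receive block.
chains = {"alice": [], "bob": []}
balances = {"alice": 100, "bob": 0}

def send(sender, recipient, amount):
    balances[sender] -= amount
    send_block = {"type": "send", "to": recipient, "amount": amount}
    chains[sender].append(send_block)      # only the sender writes here
    return send_block

def receive(recipient, send_block):
    balances[recipient] += send_block["amount"]
    chains[recipient].append({"type": "receive", "from_block": send_block})

pending = send("alice", "bob", 25)   # send block goes on Alice's chain
receive("bob", pending)              # Bob later pockets it with a receive block

print(balances)   # {'alice': 75, 'bob': 25}
print(chains)
```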
 
25 e) SPECTRE
Explanation: Serialization of Proof-of-work Events: Confirming Transactions via Recursive Elections, better known as SPECTRE, is a proposed Bitcoin scaling solution that utilizes a combination of PoW and DAGs to reach scalable consensus. In SPECTRE, blocks are mined pointing to multiple parents, not just one, so the network can potentially handle multiple blocks per second. Mining a block pointing to some parent blocks supports those blocks' validity. Compared to PoW's "longest chain wins", SPECTRE uses something like "blocks with the most children win." SPECTRE hasn't been battle-tested in the wild yet, and new attack vectors are likely to emerge, but it looks like a very clever potential way to fix Bitcoin.
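A toy illustration of that simplified "most children win" intuition, ranking two conflicting blocks by how many later blocks reference them directly or transitively. Real SPECTRE uses a pairwise voting procedure; the block names here are invented.

```python
parents = {
    "genesis": [],
    "A": ["genesis"],
    "B": ["genesis"],       # suppose A and B conflict (e.g. a double-spend)
    "C": ["A", "B"],
    "D": ["A"],
    "E": ["C", "D"],
}

def supporters(block):
    """Blocks that reference `block` directly or transitively."""
    result, frontier = set(), {block}
    while frontier:
        frontier = {b for b, ps in parents.items()
                    if any(p in frontier for p in ps) and b not in result}
        result |= frontier
    return result

print(len(supporters("A")), len(supporters("B")))   # 3 vs 2 -> A wins
```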
 
25 f) ByteBall
Explanation: Byteball uses a DAG, which establishes a partial order between transactions, and adds a main chain within the DAG.
 
The main chain (MC) allows a total order to be defined between transactions: the transaction which is included (directly or indirectly) earlier on the MC is deemed earlier in the total order. When there is a double-spend, the version of the transaction that comes earlier in the total order is deemed valid; all others are deemed void.
The main chain is defined deterministically based on the positions of transactions in the graph. Refer to the white paper for details, but as a general rule, the MC gravitates towards transactions authored by well-known users, called witnesses. The list of witnesses is defined by users themselves, as they include the list in every transaction they post. The MC then follows the path within the DAG such that:
1. the witness lists of the neighboring transactions on the chain are either identical or differ by only one mutation;
2. the chain goes through the largest number of witness-authored transactions, compared with alternative chains (a toy sketch of this rule follows below).
It is also the first platform to include oracles in the system, which are needed for adding smart contract functionality in DAGs.
The above is a very brief and sketchy description with many important details omitted, refer to the white paper for a full technical story.
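As a toy illustration of rule 2 above, here is a sketch that prefers the candidate chain passing through more witness-authored units. The unit ids, authors, and witness list are all made up; it only shows the counting idea, not Byteball's actual MC algorithm.

```python
witnesses = {"w1", "w2", "w3"}
author = {"u1": "w1", "u2": "alice", "u3": "w2", "u4": "bob", "u5": "w3"}

def witness_count(chain):
    """How many units on this candidate path were authored by witnesses."""
    return sum(1 for unit in chain if author[unit] in witnesses)

candidate_a = ["u1", "u2", "u3"]   # passes through 2 witness-authored units
candidate_b = ["u2", "u4", "u5"]   # passes through 1 witness-authored unit

main_chain = max([candidate_a, candidate_b], key=witness_count)
print(main_chain)   # candidate_a is preferred as the main chain
```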
 
26. Mokka
Mokka is a consensus protocol resistant to network splits and anonymous attacks. The main goals of the algorithm are a strict quorum, verification of each action that may lead to a state change, and a strong history (with no ability to rewrite logs). Mokka achieves this by using Shamir's Secret Sharing (SSS) and asymmetric cryptography as part of the voting and log-appending processes, and a Merkle tree for verifying log safety. In Mokka there are only three actions that may lead to a state change: voting, log append, and gossiping temporary logs.
In order to secure the voting and log-append processes, Mokka uses the following approach: during each vote, Mokka generates a special secret (based on timestamp + term), splits it into shares (through SSS), and sends each share to a specific peer. If a follower votes for the candidate, it signs its share and returns it to the candidate. The candidate validates the signatures and tries to restore the secret (if some peer sends a wrong share, its vote won't be accepted). After that, the candidate builds a proof, which is an encoded string containing the original secret plus the signatures. Then, on each state mutation (i.e. log append), the leader uses this proof. The interesting part is the SSS secret generation: the shares are created so that restoration requires more than 50% of them. For instance, in order to build the proof for a network with 5 nodes, you need to obtain at least 3 shares during the voting process.
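Here is a minimal Shamir's Secret Sharing sketch of that 3-of-5 example: the secret splits into 5 shares and any 3 of them reconstruct it, while fewer do not. Signatures and the timestamp+term derivation are omitted; this only demonstrates the threshold.

```python
import random

PRIME = 2**127 - 1   # field modulus; any prime larger than the secret works

def make_shares(secret, threshold, total):
    # random polynomial of degree threshold-1 with the secret as constant term
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(threshold - 1)]
    return [(x, sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME)
            for x in range(1, total + 1)]

def restore(shares):
    # Lagrange interpolation of the polynomial at x = 0
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

secret = 123456789
shares = make_shares(secret, threshold=3, total=5)
print(restore(shares[:3]) == secret)   # True: any 3 shares are enough
print(restore(shares[:2]) == secret)   # False (overwhelmingly likely): 2 are not
```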
A gossip protocol is used in Mokka for transmitting logs from follower nodes to the leader node (in case the client used a follower node to push a new log). The gossip protocol delivers the pending log to the leader and the other followers. This provides an additional backup: if the leader node dies, another follower may become the leader and accept this temporary log. As for the security aspect, before pushing the log to gossip, the node that received it has to sign it with its private key. The signature is then validated by the leader node; if the signature is valid the log is accepted, and if not it is simply dropped.
For the moment there is one implementation, written in TypeScript, which has already been ported to the browser (you can play with it on the website).
Also, the solution has an abstraction over the transport protocol, so you can write your own (it can be TCP, UDP, or even ZeroMQ).

https://about.me/albertosilvestre


alberdioni8406
(@alberdioni8406)
Estimable Member
XQR:58
Joined: 7 months ago
Posts: 205
May 21, 2019 10:32 am  

There are 30 consensus algorithms in total (as far as I know), and I will appreciate it if someone can complete the last 3 of them... and if there are more, I will be glad to know them and learn from them 😌

https://about.me/albertosilvestre

