Letting deep learning bid farewell to compute-intensive training: new technique can cut computation by 95%

Anshumali Shrivastava, assistant professor at Rice University, said: "It applies to any deep learning architecture, and the technique scales sublinearly, which means that the larger the deep neural network it is applied to, the more computation it saves."

The study will be presented at this year's KDD conference. It addresses one of the biggest challenges facing companies such as Google, Facebook, and Microsoft, which are racing to build, train, and deploy massive deep learning networks for products such as autonomous vehicles, translation, and smart email replies.

Shrivastava and Rice University graduate student Ryan Spring said the technique grew out of hashing, an efficient data-retrieval method that they adapted to drastically reduce the computational cost of deep learning. Hashing uses hash functions to convert data into manageable small numbers called hashes. The hashes are stored in tables, much like the index in a printed book.
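
As a rough illustration of how hashing serves as an index (not the authors' implementation), the sketch below uses random signed projections, one common form of locality-sensitive hashing, to file similar vectors into the same bucket of a hash table; every name and parameter here is illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def lsh_signature(x, projections):
    """Hash a vector to a short bit pattern: the signs of a few random
    projections. Nearby vectors tend to receive the same signature."""
    return tuple((projections @ x > 0).astype(int))

dim, n_bits = 8, 4
projections = rng.standard_normal((n_bits, dim))

# Build a hash table mapping each signature to a list of item ids,
# much like the index in a printed book.
items = rng.standard_normal((100, dim))
table = {}
for i, item in enumerate(items):
    table.setdefault(lsh_signature(item, projections), []).append(i)

# Looking up a query costs one hash plus a small bucket scan,
# instead of comparing against all 100 stored items.
query = items[0] + 0.01 * rng.standard_normal(dim)  # near-duplicate of item 0
print(table.get(lsh_signature(query, projections), []))  # very likely includes 0
```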

Spring said: "Our approach combines two techniques - subtle variant of locality-sensitive hashing and sparse backpropagation variants - to reduce computational requirements without a lot of The accuracy loss. For example, in a small-scale test, we found that we can reduce the calculation by 95%, but the accuracy obtained by the standard method is still within 1%."

The basic building blocks of a deep learning network are artificial neurons. Although they were first proposed in the 1950s as models of biological brain neurons, artificial neurons are simply mathematical functions that convert input data into output.

In machine learning, all neurons start in the same initial state, like blank sheets of paper, and take on specialized roles as they are trained. During training, the network "sees" a large amount of data, and each neuron becomes specialized at recognizing a particular pattern in that data. At the lowest layer, neurons perform simple tasks; in an image-recognition application, for example, bottom-layer neurons might recognize light and dark, or the edges of objects. The output of these neurons is passed on to the neurons in the next layer of the network, which recognize and process other patterns. With only a few layers, a neural network can identify concepts such as faces, dogs and cats, traffic signs, and school buses.
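
As a toy illustration (random weights, not a trained network), the snippet below chains two such layers: each neuron is just a weighted sum of its inputs followed by a nonlinearity, and one layer's outputs become the next layer's inputs.

```python
import numpy as np

rng = np.random.default_rng(2)

def layer(x, W, b):
    """Each row of W is one neuron: a weighted sum of its inputs plus a bias,
    passed through a ReLU nonlinearity."""
    return np.maximum(W @ x + b, 0.0)

x = rng.standard_normal(64)                           # e.g. a tiny flattened image patch
W1, b1 = rng.standard_normal((32, 64)), np.zeros(32)  # low-level detectors (light/dark, edges)
W2, b2 = rng.standard_normal((10, 32)), np.zeros(10)  # higher-level patterns built on top

hidden = layer(x, W1, b1)       # outputs of the first layer...
output = layer(hidden, W2, b2)  # ...become inputs to the next layer
print(output.shape)             # (10,)
```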

Shrivastava said: "Adding more neurons to the neural network hierarchy can extend its performance, and we hope that the neural network has no upper and lower limits. It is reported that Google is trying to train a model containing 137 billion neurons." There may be computational limitations on training and deploying such neural networks.

He said: "Most of the machine learning algorithms used today were developed 30 to 50 years ago, and the computational complexity was not considered in the design. But with big data, there are basic limitations on resources, such as calculation cycles, Energy consumption and storage. Our lab aims to address these limitations."

Spring said that in large-scale deep networks, hashing will greatly reduce computation and power consumption.

He said: "Energy savings increase with size because we use the sparsity of big data. For example, we know that a deep network has 1 billion neurons. For any given input, such as a picture of a dog. Only a few of them will become excited. According to the data term, we call it sparsity, and it is because of sparsity that our method will save more energy when the network becomes bigger. Therefore, when we show 1000 When 95% of neurons are energy efficient, mathematics shows that we can achieve more than 99% energy savings for 1 billion neurons."
