
Energy Efficiency


15 Dec 2022, by https://techxplore.com/
Deep-learning models have proven to be highly valuable tools for making predictions and solving real-world tasks that involve the analysis of data. Despite their advantages, before they are deployed in real software and devices such as cell phones, these models require extensive training in physical data centers, which can be both time and energy consuming.
 
 
Researchers at Texas A&M University, Rain Neuromorphics and Sandia National Laboratories have recently devised a new system for training deep learning models more efficiently and on a larger scale. This system, introduced in a paper published in Nature Electronics, relies on new training algorithms and memristor crossbar hardware that can carry out multiple operations at once.
 
"Most people associate AI with health monitoring in smart watches, face recognition in smart phones, etc., but most of AI, in terms of energy spent, entails the training of AI models to perform these tasks," Suhas Kumar, the senior author of the study, told TechXplore.
 
"Training happens in warehouse-sized data centers, which is very expensive both economically and in terms of carbon footprint. Only fully trained models are then downloaded onto our low-power devices."
 
Essentially, Kumar and his colleagues set out to devise an approach that could reduce the carbon footprint and financial costs associated with the training of AI models, thus making their large-scale implementation easier and more sustainable. To do this, they had to overcome two key limitations of current AI training practices.
 
The first of these challenges is the use of inefficient hardware based on graphics processing units (GPUs), which are not inherently designed to run and train deep learning models. The second is the use of inefficient, math-heavy software tools, specifically those built around the so-called backpropagation algorithm.
 
"Our objective was to use new hardware and new algorithms," Kumar explained. "We leveraged our previous 15 years of work on memristor-based hardware (a highly parallel alternative to GPUs), and recent advances in brain-like efficient algorithms (a non-backpropagation local learning technique). Though advances in hardware and software existed previously, we codesigned them to work with each other, which enabled very power efficient AI training."
 
Training a deep neural network entails continuously adapting its configuration, made up of so-called "weights," so that it can identify patterns in data with increasing accuracy. This process of adaptation requires numerous multiplications, which conventional digital processors struggle to perform efficiently, because they must fetch weight-related information from a separate memory unit.
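 
A rough picture of why this is costly on a digital processor: each training step is dominated by matrix multiplications whose weight operands must first be read out of a separate memory. The NumPy sketch below is only an illustration (not the authors' code; all names and sizes are invented) of one backpropagation-style update for a single dense layer.

```python
# Minimal sketch of one backpropagation-style weight update for a dense
# layer. Every step requires full matrix products, and on a digital
# processor the weights W must be fetched from memory before each one.
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(64, 32))   # weights, stored in memory
x = rng.normal(size=(32,))                 # layer input
target = rng.normal(size=(64,))            # desired output

for _ in range(3):
    y = W @ x                    # forward pass: fetch W, then multiply
    err = y - target             # output error
    grad = np.outer(err, x)     # gradient: another full multiplication
    W -= 0.01 * grad             # write the updated weights back
```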
 
"Nearly all training today is performed using the backpropagation algorithm, which employs significant data movement and solving math equations, and is thus suited to digital processors," Suin Yi, lead author of the study, told TechXplore.
 
"As a hardware solution, analog memristor crossbars, which emerged within the last decade, enable embedding the synaptic weight at the same place where the computing occurs, thereby minimizing data movement. However, traditional backpropagation algorithms, which are suited for high-precision digital hardware, are not compatible with memristor crossbars due to their hardware noise, errors and limited precision."
 
As conventional backpropagation algorithms were poorly suited to the system they envisioned, Kumar, Yi and their colleagues developed a new co-optimized learning algorithm that exploits the hardware parallelism of memristor crossbars. This algorithm, inspired by the differences in neuronal activity observed in neuroscience studies, is tolerant to errors and replicates the brain's ability to learn even from sparse, poorly defined and "noisy" information.
 
"Our algorithm-hardware system studies the differences in how the synthetic neurons in a neural network behave differently under two different conditions: one where it is allowed to produce any output in a free fashion, and another where we force the output to be the target pattern we want to identify," Yi explained.
 
"By studying the difference between the system's responses, we can predict the weights needed to make the system arrive at the correct answer without having to force it. In other words, we avoid the complex math equations backpropagation, making the process more noise resilient, and enabling local training, which is how the brain learns new tasks."
 
The brain-inspired and analog-hardware-compatible algorithm developed in this study could thus enable the energy-efficient implementation of AI in edge devices with small batteries, eliminating the need for large cloud servers that consume vast amounts of electrical power. This could ultimately help to make the large-scale training of deep learning algorithms more affordable and sustainable.
 
"The algorithm we use to train our neural network combines some of the best aspects of deep learning and neuroscience to create a system that can learn very efficiently and with low-precision devices," Jack Kendall, another author of the paper, told TechXplore.
 
"This has many implications. The first is that, using our approach, AI models that are currently too large to be deployed can be made to fit in cellphones, smartwatches, and other untethered devices. Another is that these networks can now learn on-the-fly, while they're deployed, for instance to account for changing environments, or to keep user data local (avoiding sending it to the cloud for training)."
 
In initial evaluations, Kumar, Yi, Kendall and their colleague Stanley Williams showed that their approach can reduce the power consumption associated with AI training by up to 100,000 times compared with even the best GPUs on the market today. In the future, it could enable training to move from massive data centers onto users' personal devices, reducing the carbon footprint associated with AI training and promoting the development of artificial neural networks that support or simplify daily human activities.
 
"We next plan to study how these systems scale to much larger networks and more difficult tasks," Kendall added. "We also plan to study a variety of brain-inspired learning algorithms for training deep neural networks and find out which of these have perform better in different networks, and with different hardware resource constraints. We believe this will not only help us understand how to best perform learning in resource constrained environments, but it may also help us understand how biological brains are able to learn with such incredible efficiency."
