Author: Aggregated News
Improved sports image classification using deep neural network and novel tuna swarm optimization | Scientific Reports – Nature.com
Abstract Sports image classification is a complex task that requires accurate and robust techniques to differentiate between various sports activities. This study introduces an approach that combines a deep neural network (DNN) with a modified metaheuristic algorithm, novel tuna swarm optimization (NTSO), for sports image classification. The DNN extracts high-level features from raw images, while the NTSO algorithm optimizes the DNN's hyperparameters, including the number of layers, neurons, and activation functions. Through the…
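The abstract describes a swarm-style metaheuristic tuning DNN hyperparameters (layer count, neurons, activation functions). The paper's actual NTSO update rules are not given in this excerpt, so the sketch below is only a generic "follow-the-leader" swarm search over a toy two-parameter space; the `fitness` function is a stand-in for validation loss, and all names and bounds are illustrative assumptions.

```python
import random

# Toy hyperparameter search space (illustrative, not from the paper).
BOUNDS = {"layers": (1, 8), "neurons": (16, 256)}

def fitness(cand):
    # Stand-in for validation loss; a real run would train a DNN with
    # these hyperparameters and return its validation error.
    return abs(cand["layers"] - 4) + abs(cand["neurons"] - 128) / 16

def random_candidate():
    return {k: random.randint(lo, hi) for k, (lo, hi) in BOUNDS.items()}

def swarm_search(pop_size=10, iters=50, seed=0):
    """Generic swarm-style search: candidates drift toward the best-so-far
    with random jitter, keeping moves only when fitness improves."""
    random.seed(seed)
    swarm = [random_candidate() for _ in range(pop_size)]
    best = min(swarm, key=fitness)
    for _ in range(iters):
        for i, cand in enumerate(swarm):
            new = {}
            for k, (lo, hi) in BOUNDS.items():
                # Step toward the current leader, plus small random jitter,
                # clamped back into the legal range.
                step = random.uniform(0, 1) * (best[k] - cand[k])
                jitter = random.randint(-2, 2)
                new[k] = max(lo, min(hi, round(cand[k] + step + jitter)))
            if fitness(new) < fitness(cand):
                swarm[i] = new
        best = min(swarm + [best], key=fitness)
    return best
```

In a real pipeline, each fitness evaluation would be one full train-and-validate cycle, which is why metaheuristics that converge in few evaluations are attractive for this problem.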
Read More
Big data and deep learning for RNA biology | Experimental & Molecular Medicine – Nature.com
Abstract The exponential growth of big data in RNA biology (RB) has led to the development of deep learning (DL) models that have driven crucial discoveries. As DL studies in other fields consistently show, the successful implementation of DL in RB depends heavily on the effective utilization of large-scale datasets from public databases. In achieving this goal, data encoding methods, learning algorithms, and techniques that align well with biological domain knowledge have played pivotal roles. In this review, we provide guiding principles for applying these DL concepts to various…
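The abstract highlights data encoding as a pivotal step for DL in RNA biology. One of the most common encodings, though the review excerpt does not name a specific one, is one-hot encoding, which maps each nucleotide to a 4-dimensional indicator vector so a sequence model can consume it. A minimal sketch:

```python
# One-hot encoding of an RNA sequence over the alphabet A, C, G, U.
ALPHABET = "ACGU"

def one_hot(seq):
    """Return a list of 4-element indicator vectors, one per nucleotide."""
    idx = {nt: i for i, nt in enumerate(ALPHABET)}
    return [[1 if i == idx[nt] else 0 for i in range(4)]
            for nt in seq.upper()]

one_hot("ACGU")
# [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]
```

Real pipelines extend this with handling for ambiguous bases (e.g. N) and batching into arrays, but the core representation is this simple.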
Read More
New Transformer architecture could enable powerful LLMs without GPUs – VentureBeat
Matrix multiplications (MatMul) are the most computationally expensive operations in large language models (LLMs) built on the Transformer architecture. As LLMs scale to larger sizes, the cost of MatMul grows significantly, increasing memory usage and latency during training and inference. In their paper, the researchers introduce MatMul-free language models that achieve performance on par with state-of-the-art Transformers while requiring far less memory during…
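One way to see how a model can avoid matrix multiplication, in the spirit of the work described above, is to constrain weights to {-1, 0, +1}: every multiply-accumulate then collapses into an addition, a subtraction, or a skip. The sketch below illustrates this idea on a plain matrix-vector product; it is a conceptual toy, not the paper's implementation, which relies on hardware-efficient kernels.

```python
def ternary_matvec(W, x):
    """Matrix-vector product where W has entries restricted to {-1, 0, +1}.

    No multiplications are performed: each weight selects whether the
    input element is added, subtracted, or ignored.
    """
    out = []
    for row in W:
        acc = 0.0
        for w, xi in zip(row, x):
            if w == 1:
                acc += xi      # addition instead of multiplication
            elif w == -1:
                acc -= xi      # subtraction instead of multiplication
            # w == 0: skip entirely
        out.append(acc)
    return out

W = [[1, -1, 0],
     [0, 1, 1]]
x = [2.0, 3.0, 5.0]
ternary_matvec(W, x)  # [-1.0, 8.0]
```

On dedicated hardware, add/subtract/skip is far cheaper than a full floating-point multiply, which is where the memory and latency savings come from.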
Read More