Our latest selection of the best AI articles for Experts

Updated every Tuesday and Thursday

  • Accelerating industrialization of Machine Learning at BMW Group using the Machine Learning Operations (MLOps) solution

    The article discusses BMW Group's collaboration with AWS to implement a Machine Learning Operations (MLOps) solution, aiming to accelerate AI/ML adoption. It outlines the components of the MLOps solution, including a reference architecture, reusable infrastructure modules, ML workflows, and deployment templates. The solution supports AI/ML use cases, improves time-to-market, and integrates BMW-specific requirements for networking, compliance, and security, driving innovation in the automotive industry.

    Marc Neumann - 3 April 2024 - Article in English

    Read more

  • Exploring Convolutional Neural Networks for the Thermal Image Classification of Volcanic Activity

    The article explores the use of Convolutional Neural Networks (CNNs) for classifying thermal images of volcanic activity, focusing on Mount Etna. It evaluates the effectiveness of eight pretrained CNN models in accurately classifying various states of volcanic activity captured by ground-based thermal cameras. The study highlights the importance of transfer learning and demonstrates impressive results, with certain models achieving total accuracy rates of approximately 90%. The research contributes to early detection and assessment of eruptive events, aiding in hazard mitigation and risk management.

    Giuseppe Nunnari - 13 April 2024 - Article in English

    Read more

  • Brain topology improved spiking neural network for efficient reinforcement learning of continuous control

    The article presents a novel approach called Brain Topology-improved Spiking Neural Network (BT-SNN) for efficient reinforcement learning. It integrates biological brain topologies into SNNs and employs an evolutionary learning algorithm for synaptic modifications. By selecting key brain regions and optimizing network topology, the BT-SNN outperforms conventional SNNs and ANNs in RL tasks. This method offers promise for achieving brain-like intelligence with efficiency, robustness, and flexibility, paving the way for future advancements in neuromorphic computing and artificial intelligence.

    Yongjian Wang - 16 April 2024 - Article in English

    Read more

  • How Chain-of-Thought Reasoning Helps Neural Networks Compute

    The article discusses how chain-of-thought reasoning, inspired by human problem-solving techniques, has enhanced the computational capabilities of large language models like transformers. Researchers have used computational complexity theory to analyze the limitations and potentials of transformers, revealing insights into their ability to perform step-by-step reasoning. While chain-of-thought reasoning offers promise in solving complex problems, it also poses computational challenges, prompting further exploration into more efficient approaches.

    Ben Brubaker - 21 March 2024 - Article in English

    Read more

  • Python Fuzzing for Trustworthy Machine Learning Frameworks

    The article discusses the importance of ensuring the security and reliability of machine learning frameworks through fuzzing, a technique used in secure software development. It proposes a dynamic analysis pipeline for Python projects using the Sydr-Fuzz toolset, focusing on fuzzing, corpus minimization, crash triaging, and coverage collection. By applying this pipeline to popular machine learning frameworks like TensorFlow and PyTorch, the authors discovered new bugs and proposed fixes, ultimately enhancing the security and trustworthiness of these frameworks. A minimal fuzz-harness sketch follows this entry.

    Ilya Yegorov - 19 March 2024 - Article in English

    Read more
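
    To give a feel for the kind of fuzz harness the entry above describes, here is a minimal sketch using Google's Atheris fuzzer rather than the authors' Sydr-Fuzz toolset; the target function (the standard-library json parser) is only a stand-in for a real ML-framework entry point.

```python
# Illustrative only: the paper uses the Sydr-Fuzz toolset; this minimal harness
# uses Google's Atheris fuzzer to show what a Python fuzz target looks like.
# `pip install atheris` is assumed; the function under test is a stand-in.
import sys
import atheris

with atheris.instrument_imports():
    import json  # stand-in for a library entry point worth fuzzing

def TestOneInput(data: bytes):
    fdp = atheris.FuzzedDataProvider(data)
    text = fdp.ConsumeUnicodeNoSurrogates(4096)
    try:
        json.loads(text)          # the code path we want to stress
    except json.JSONDecodeError:
        pass                      # expected failures are fine; crashes are not

if __name__ == "__main__":
    atheris.Setup(sys.argv, TestOneInput)
    atheris.Fuzz()
```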

  • How AI Models Think: The Key Role of Activation Functions with Code Examples

    The article discusses the importance of activation functions in neural networks, simplifying complex concepts with analogies and code examples. It explains how activation functions control data flow, help prevent vanishing gradients, and introduce non-linear decision-making. Various activation functions such as ReLU, Sigmoid, Tanh, and Leaky ReLU are explored, highlighting their roles in computational efficiency and in addressing the vanishing gradient problem. Mathematically, activation functions turn otherwise linear network structures into systems capable of learning intricate patterns. A short NumPy sketch of these functions follows this entry.

    Tiago Monteiro - 10 April 2024 - Article in English

    Read more
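
    As a quick companion to the entry above, the NumPy sketch below evaluates the four activation functions the article covers on a handful of sample inputs; it is illustrative only and relies on nothing beyond the standard definitions.

```python
import numpy as np

def relu(x):
    # ReLU passes positive values unchanged and zeroes out negatives
    return np.maximum(0.0, x)

def leaky_relu(x, alpha=0.01):
    # Leaky ReLU keeps a small slope for negative inputs to avoid "dead" units
    return np.where(x > 0, x, alpha * x)

def sigmoid(x):
    # Sigmoid squashes inputs into (0, 1); it saturates for large |x|,
    # which is one source of vanishing gradients
    return 1.0 / (1.0 + np.exp(-x))

def tanh(x):
    # Tanh squashes inputs into (-1, 1) and is zero-centred, unlike sigmoid
    return np.tanh(x)

x = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])
for name, fn in [("relu", relu), ("leaky_relu", leaky_relu),
                 ("sigmoid", sigmoid), ("tanh", tanh)]:
    print(name, fn(x))
```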

  • ViSNet: A general molecular geometry modeling framework for predicting molecular properties and simulating molecular dynamics

    The article introduces ViSNet, a novel molecular geometry modeling framework developed by Microsoft, aimed at predicting molecular properties and simulating molecular dynamics. ViSNet enhances interpretability, reduces computing costs, and improves real-world application utility. By leveraging vector-scalar interactive graph neural networks, ViSNet outperforms existing algorithms in predicting molecular properties and demonstrates promise in drug development competitions. Integrated into the PyTorch Geometric Library, ViSNet facilitates accessibility and usability for researchers, paving the way for advancements in computational chemistry and biophysics.

    Tong Wang - 29 February 2024 - Article in English

    Read more

  • Scaling AI/ML Infrastructure at Uber

    The article discusses Uber's journey in scaling its AI/ML infrastructure over the past years. It highlights the transition from on-premise to cloud infrastructure, optimization of existing infrastructure including GPU assets distribution and network upgrades, and the development of new infrastructure focusing on price-performance evaluations and efficiency improvements. Key metrics such as feasibility, reliability, efficiency, and developer velocity are emphasized throughout the discussion.

    Nav Kankani - 29 March 2024 - Article in English

    Read more

  • Towards interpretable machine learning for observational quantification of soil heavy metal concentrations under environmental constraints

    The article discusses the challenges of monitoring soil heavy metal concentrations and the limitations of traditional field surveys in addressing global dispersion of heavy metals. It emphasizes the need for advanced techniques, such as satellite observations and machine learning, to enhance quantification accuracy and inform conservation efforts. The proposed framework integrates spectral data, soil environmental factors (pH, organic carbon), and interpretable ML models to predict metal concentrations, offering insights into feature interactions and model decision-making processes. The study aims to improve understanding of heavy metal distribution and facilitate more effective environmental management strategies.

    Yishan Sun - 20 March 2024 - Article in English

    Read more

  • Deep learning modelling of manufacturing and build variations on multi-stage axial compressors aerodynamics

    The article proposes a novel approach using deep learning to analyze the effects of manufacturing variations like tip clearance and surface roughness on gas turbine performance. By training convolutional neural networks (CNNs) with CFD data, the model accurately predicts flow field behavior in real-time. This enables quick assessment of performance impacts during the manufacturing process, potentially saving costs and time associated with physical tests. The methodology is scalable and adaptable to various manufacturing variations, offering a promising solution for industrial applications in gas turbine design and optimization.

    Giuseppe Bruni - 13 March 2024 - Article in English

    Read more

  • How to Leverage KNN Algorithm in Machine Learning?

    The article provides a detailed overview of the K Nearest Neighbors (KNN) algorithm in machine learning, focusing on its applications in classification and regression problems. It discusses the importance of feature similarity in KNN, the process of selecting the optimal K value, and the advantages and limitations of the algorithm. Furthermore, it includes a practical demonstration of implementing KNN for diabetes prediction, illustrating its real-world applicability and utility in diverse fields. A minimal scikit-learn sketch of this workflow follows this entry.

    Simplilearn - 18 March 2024 - Article in English

    Read more
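
    The sketch below mirrors the KNN workflow described above using scikit-learn; the synthetic dataset is a hypothetical stand-in for the diabetes data used in the article, and the loop over odd values of k illustrates one common way of picking the optimal K by cross-validation.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for a diabetes dataset: 8 numeric features, binary outcome
X, y = make_classification(n_samples=768, n_features=8, n_informative=5,
                           random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42)

# Scaling matters for KNN because it relies on distances between feature vectors
best_k, best_score = None, -np.inf
for k in range(1, 21, 2):
    model = make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=k))
    score = cross_val_score(model, X_train, y_train, cv=5).mean()
    if score > best_score:
        best_k, best_score = k, score

final = make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=best_k))
final.fit(X_train, y_train)
print(f"best k={best_k}, test accuracy={final.score(X_test, y_test):.3f}")
```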

  • The most popular neural network styles and how they work

    The article provides an overview of various types of neural networks, including feedforward, recurrent, convolutional, transformer, and adversarial networks. It explains their structures, functions, and applications in modern AI, covering concepts such as perceptrons, loss functions, gradient descent, and backpropagation. The discussion highlights the importance of understanding neural network variants for software developers and explores their implications for machine learning, consciousness, and the future of technology. A toy training loop illustrating a single neuron, a loss function, and gradient descent follows this entry.

    Matthew Tyson - 28 February 2024 - Article in English

    Read more
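
    To make the building blocks mentioned above concrete (a perceptron-style neuron, a loss function, gradient descent, backpropagation through one layer), here is a toy NumPy training loop for a single sigmoid neuron on a synthetic binary task; it is a sketch for illustration, not code from the article.

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy binary task: the label is 1 when the sum of the two inputs is positive
X = rng.normal(size=(200, 2))
y = (X.sum(axis=1) > 0).astype(float)

w, b = np.zeros(2), 0.0
lr = 0.1

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for epoch in range(100):
    # Forward pass: linear combination followed by a sigmoid activation
    z = X @ w + b
    p = sigmoid(z)
    # Binary cross-entropy loss and its gradient w.r.t. the pre-activation
    loss = -np.mean(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9))
    grad_z = (p - y) / len(y)
    # Backpropagate to the weights and bias, then take a gradient-descent step
    w -= lr * X.T @ grad_z
    b -= lr * grad_z.sum()

acc = ((sigmoid(X @ w + b) > 0.5) == y).mean()
print(f"final loss={loss:.3f}, training accuracy={acc:.3f}")
```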

  • GADNN: a revolutionary hybrid deep learning neural network for age and sex determination utilizing cone beam computed tomography images of maxillary and frontal sinuses

    The article introduces GADNN, a hybrid deep learning neural network, for age and sex determination using cone beam computed tomography (CBCT) images of maxillary and frontal sinuses. By combining deep learning with genetic algorithms, it achieves superior accuracy in sex and age determination compared to traditional statistical and machine learning methods. This innovative approach demonstrates the potential of deep learning in forensic dentistry and contributes to the advancement of DL applications in dentistry and forensic sciences.

    Omid Hamidi - 27 February 2024 - Article in English

    Read more

  • The Learning Path to Neural Network Industrial Application in Distributed Environments

    The article discusses the implementation of neural networks in industrial distributed environments to enhance efficiency and cost reduction. It addresses challenges in data collection and processing, emphasizing the importance of standardized features. The paper presents a method for data classification using machine learning algorithms, enabling exception detection and prediction. It highlights the significance of AI and BI in industrial control processes, focusing on bridging legacy systems with modern technologies. The study showcases the application of neural networks for real-time control system data classification in the marine industry, demonstrating the potential for improved decision-making and process optimization.

    Lenka Landryová - 28 February 2024 - Article in English

    Read more

  • Revolutionizing Real-Time Data Processing: The Dawn of Edge AI

    The article discusses a breakthrough in edge computing achieved by researchers at Tokyo University of Science. They developed an optical device capable of real-time signal processing across various timescales, addressing the limitations of conventional cloud computing. The device, based on physical reservoir computing, offers efficient and cost-effective processing, as demonstrated by its high classification accuracy on the MNIST dataset. This innovation holds promise for edge computing applications requiring rapid data analysis.

    Tokyo University of Science - 23 February 2024 - Article in English

    Read more

  • Deep neural networks for crack detection inside structures

    The article discusses the application of deep neural networks (DNNs) for crack detection in structures, emphasizing seismic-wave-based techniques for plate structures. It builds upon previous work by exploring various network components and introducing a new data preprocessing approach. The study highlights the effectiveness of utilizing robust DNN architectures like DenseNet and leveraging reference wave fields for improved accuracy in detecting small cracks.

    Fatahlla Moreh - 23 February 2024 - Article in English

    Read more

  • An end-to-end machine learning approach with explanation for time series with varying lengths

    This article introduces a machine learning approach for predicting quality parameters of plastic parts during production using time series data. It addresses the challenge of variable-length time series in industrial batch processes. The method involves a 1D CNN algorithm with a masking layer and class activation mapping for interpretability. Comparative analysis with the 1NN-DTW algorithm is conducted. The study focuses on sensor signals collected during plastic part production to predict quality parameters that are difficult to measure directly. The proposed approach achieves 83.7% prediction accuracy and reduces training time significantly, contributing to quality monitoring in industrial settings. A simplified 1D-CNN sketch for variable-length series follows this entry.

    Manuel Schneider - 19 February 2024 - Article in English

    Read more
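
    The Keras sketch below is a simplified stand-in for the approach described above: the authors use a masking layer and class activation mapping, while this sketch simply zero-pads variable-length multi-sensor series to a common length and relies on global pooling; the data is random and the architecture is hypothetical.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

# Hypothetical variable-length sensor signals (3 channels), zero-padded to a
# common length; global average pooling keeps the head independent of length.
rng = np.random.default_rng(0)
series = [rng.normal(size=(rng.integers(50, 200), 3)) for _ in range(64)]
labels = rng.integers(0, 2, size=64)

max_len = max(s.shape[0] for s in series)
X = np.zeros((len(series), max_len, 3), dtype="float32")
for i, s in enumerate(series):
    X[i, :s.shape[0], :] = s

model = models.Sequential([
    layers.Input(shape=(max_len, 3)),
    layers.Conv1D(32, kernel_size=5, activation="relu", padding="same"),
    layers.Conv1D(32, kernel_size=5, activation="relu", padding="same"),
    layers.GlobalAveragePooling1D(),
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, labels, epochs=2, batch_size=16, verbose=0)
model.summary()
```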

  • ANN isn’t very useful!

    The article explores the potential and challenges of Artificial Neural Networks (ANNs) across various fields like medicine, finance, and autonomous vehicles. It delves into the complexity of designing and implementing ANNs, data requirements, computational demands, security concerns, and benefits such as automation, efficiency improvements, enhanced decision-making, and community support. Overall, it highlights ANNs' transformative impact and the need to navigate challenges for effective utilization and future innovation.

    Katy - 1 March 2024 - Article in English

    Read more

  • 5 Steps to Remove Bias From Machine Learning Algorithms

    The article delves into the challenge of bias in machine learning algorithms and presents a comprehensive approach to mitigate it effectively. It highlights the fact that despite algorithms being perceived as objective, they can inherit biases present in the data used to train them. To address this issue, the article suggests five crucial steps: prioritizing data diversity, proactively identifying edge cases, ensuring accurate data annotation, understanding model failures, and regularly evaluating the model's performance. By following these steps throughout the AI lifecycle, developers can create more equitable and reliable machine learning models that are less prone to perpetuating biases.

    Duncan Curtis - 7 February 2024 - Article in English

    Read more

  • Gas adsorption meets deep learning: voxelizing the potential energy surface of metal-organic frameworks

    The article presents a novel approach to predict gas adsorption properties in metal-organic frameworks (MOFs) using deep learning. Instead of traditional geometric descriptors, the potential energy surface (PES) of MOFs is voxelized and processed by a 3D convolutional neural network (CNN). This approach outperforms conventional methods, requires less training data, and demonstrates transferability to different host-guest systems. The framework offers a generic solution applicable beyond reticular chemistry, with potential for diverse applications in materials science and beyond.

    Antonios P. Sarikas - 26 January 2024 - Article in English

    Read more

  • A generative model of memory construction and consolidation

    The article introduces a computational model elucidating the intricate processes involved in the construction, consolidation, and distortion of episodic memories. It proposes that hippocampal replay facilitates the training of generative models, enabling the recreation of sensory experiences from latent variables distributed across brain regions. This model accounts for semantic memory, imagination, and schema-based distortions, shedding light on the complex mechanisms underlying memory formation and retrieval. By integrating insights from neuroscience and machine learning, the model provides a comprehensive framework for understanding how memories are formed, consolidated, and recalled, offering valuable insights into the workings of the human mind.

    Eleanor Spens & Neil Burgess - 19 January 2024 - Article in English

    Read more

  • Prediction of surface roughness using deep learning and data augmentation

    The article discusses the significance of surface roughness in mechanical product quality and its impact on fatigue strength and wear resistance. It focuses on predicting surface roughness in milling processing using a neural network based on deep learning and data augmentation. The study employs variational modal decomposition (VMD) for feature extraction, adaptive synthetic sampling (ADASYN) for data balancing, and a deep belief network (DBN) as the prediction model. The optimization method involves the sparrow search algorithm based on Tent chaotic mapping. The proposed approach aims to enhance the accuracy and reliability of surface roughness prediction in intelligent CNC machining, contributing to the advancement of quality diagnostics in the machinery manufacturing industry.

    Miaoxian Guo - 29 January 2024 - Article in English

    Read more

  • This AI Paper Explains the Deep Learning’s Revolutionizing Role in Mapping Genotypic Fitness Landscapes

    This article discusses the revolutionary role of deep learning in mapping genotypic fitness landscapes, a crucial concept in evolutionary biology. Traditional methods face challenges in assessing the vast array of genotypes for a given protein. The study explores the application of deep learning models, such as multilayer perceptrons and recurrent neural networks, to predict fitness based on experimental data. The findings reveal the effectiveness of these models, explaining over 90% of fitness variance with relatively small training samples. The research signifies a paradigm shift, making fitness landscape studies more scalable and efficient, emphasizing the importance of sampling strategies in optimizing model performance.

    Sana Hassan - 28 January 2024 - Article in English

    Read more

  • What are LLMs, and how are they used in generative AI?

    The article delves into the transformative role of Large Language Models (LLMs) in the landscape of generative Artificial Intelligence (AI), highlighting their application in chatbot technologies such as OpenAI's ChatGPT and Google's Bard. It elucidates the fundamental workings of LLMs, their intricate training procedures, and the manifold challenges and opportunities they present. Moreover, it examines pertinent issues encompassing privacy, security, and ethical implications that arise with the proliferation of LLM-based AI systems. Through a comprehensive analysis, the article underscores the pivotal role of LLMs in shaping the future trajectory of AI development and the imperative for responsible governance and oversight in this rapidly evolving domain.

    Lucas Mearian - 7 February 2024 - Article in English

    Read more

  • Chatbots and Large Language Models in Radiology: A Practical Primer for Clinical and Research Applications

    The article offers a practical primer on chatbots and large language models (LLMs) for radiologists. It explains, in accessible terms, how these models work, surveys potential clinical and research applications in radiology, and discusses their current limitations, stressing the need for careful evaluation and oversight before such tools are adopted in practice.

    Rajesh Bhayana - 16 January 2024 - Article in English

    Read more

  • Complex Valued Neural Networks might be the future of Deep Learning

    The article discusses the potential of Complex Valued Neural Networks (CVNNs) in revolutionizing deep learning and accelerating AI adoption in various fields. It estimates a market size of USD 2.3 billion in the next 3–5 years and USD 13 billion over the next decade. CVNNs are particularly suitable for tasks involving phasic data, such as signal communications, healthcare (medical imaging and ECG), deep fake detection, and acoustic analysis for industrial maintenance. The article delves into the theory behind CVNNs, emphasizing their advantages in terms of raw performance, expressiveness, stability, and generalization. Challenges, risks, and the need for dedicated infrastructure are also discussed. The potential of optical computing platforms for CVNNs is highlighted as a promising avenue for enhanced computational speed and energy efficiency. A small sketch of complex-valued building blocks follows this entry.

    Devansh - 18 December 2023 - Article in English

    Read more
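
    As a taste of what a complex-valued network looks like in practice, the PyTorch sketch below builds a complex linear layer from two real layers plus a modReLU-style activation; this is a generic illustration of common CVNN building blocks, not code from the article.

```python
import torch
import torch.nn as nn

class ComplexLinear(nn.Module):
    """A complex-valued affine layer built from two real linear layers.

    For weight W = A + iB and input z = x + iy:
        Wz = (Ax - By) + i(Ay + Bx)
    """
    def __init__(self, in_features, out_features):
        super().__init__()
        self.real = nn.Linear(in_features, out_features)
        self.imag = nn.Linear(in_features, out_features)

    def forward(self, x_re, x_im):
        out_re = self.real(x_re) - self.imag(x_im)
        out_im = self.real(x_im) + self.imag(x_re)
        return out_re, out_im

def mod_relu(x_re, x_im, bias=0.1):
    # modReLU: keep the phase, apply a ReLU-like threshold to the magnitude
    mag = torch.sqrt(x_re ** 2 + x_im ** 2 + 1e-9)
    scale = torch.relu(mag + bias) / mag
    return x_re * scale, x_im * scale

layer = ComplexLinear(8, 4)
z_re, z_im = torch.randn(2, 8), torch.randn(2, 8)
h_re, h_im = mod_relu(*layer(z_re, z_im))
print(h_re.shape, h_im.shape)
```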

  • Machine learning and topological data analysis identify unique features of human papillae in 3D scans

    The article discusses a groundbreaking study that utilizes machine learning and topological data analysis on 3D microscopic scans of human tongue papillae. The research reveals unique geometric and topological features of papillae, which play a crucial role in taste and textural sensation. Machine learning models trained on these features demonstrate an 85% accuracy in classifying papillae types. Notably, the study shows that papillae are distinctive across individuals, allowing for individual identification with 48% accuracy. This discovery opens new avenues for research in food preferences and oral diagnostics, emphasizing the potential use of tongue papillae as a unique identifier.

    Rayna Andreeva - 14 December 2023 - Article in English

    Read more

  • Discernment of transformer oil stray gassing anomalies using machine learning classification techniques

    This article explores the use of machine learning (ML) algorithms in assessing dissolved gas analysis (DGA) data for early fault detection in oil-immersed transformers. With transformers being critical for power distribution, faults can lead to widespread disruptions. The proposed multi-classification ML model, applied to 138 transformer oil samples, demonstrates improved diagnostic accuracy compared to conventional methods. Case reports on transformer failure analysis and comparisons with industry standards highlight the model's effectiveness. The study emphasizes the potential of ML in enhancing transformer maintenance and reliability.

    M. K. Ngwenyama - 3 January 2024 - Article in English

    Read more

  • A collaborative realist review of remote measurement technologies for depression in young people

    The article discusses the increasing use of remote measurement technologies (RMT), such as smartphones and wearables, for monitoring and managing depression in young people. It highlights the potential benefits of real-time data collection, including objective screening, symptom management, and relapse prevention. The collaborative realist review explores how, why, for whom, and in what contexts RMT may work or not work for depression in young people. The study emphasizes ethical, data protection, and methodological considerations and suggests that RMT could enhance emotional self-awareness, therapeutic relationships, and intervention effectiveness. The findings stress the need for standardized practices and further research to ensure responsible and effective implementation of RMT in mental health care for young individuals.

    Annabel E. L. Walsh - 15 January 2024 - Article in English

    Read more

  • The first use of a photogrammetry drone to estimate population abundance and predict age structure of threatened Sumatran elephants

    The article explores the use of unmanned aerial vehicles (UAVs) for wildlife monitoring in the challenging tropical rainforests of Bukit Tigapuluh Landscape (BTL), Indonesia. Traditional surveys can disrupt elusive species and are resource-intensive. The study employs a vision-based sensor on a quadcopter, utilizing hierarchical modeling and deep learning CNN to estimate the population and age structure of Sumatran elephants. The drones successfully observed 96 individuals, estimating a population of 151 elephants in the study area. The research highlights the potential of UAVs for non-invasive, efficient, and innovative large-scale wildlife surveys in complex terrains.

    Dede Aulia Rahman - 3 December - Article in English

    Read more

  • Exploring 4 Popular Machine Learning Algorithms for Industrial Applications

    The article explores the application of machine learning algorithms in industrial settings, focusing on four popular models: Linear Regression, k-means++, Neural Networks, and Decision Trees. It introduces supervised and unsupervised machine learning, highlighting their distinctions. Linear Regression is discussed for estimating variable values, while k-means++ addresses clustering challenges. Neural Networks are explored for pattern recognition and optimization, and Decision Trees for classification tasks. Practical examples, such as optimizing energy consumption in a freezer, demonstrate the real-world applicability of these algorithms in industrial applications. A brief scikit-learn sketch of these four models follows this entry.

    Anthony King Ho - 11 January 2023 - Article in English

    Read more
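
    The scikit-learn sketch below runs the four model families named above on hypothetical freezer telemetry, loosely inspired by the article's energy-consumption example; the data, features, and thresholds are all made up for illustration.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.cluster import KMeans
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
# Hypothetical freezer telemetry: ambient temperature and door-open minutes per day
ambient = rng.uniform(15, 35, size=(200, 1))
door_open = rng.uniform(0, 60, size=(200, 1))
X = np.hstack([ambient, door_open])
energy = 2.0 * ambient[:, 0] + 0.5 * door_open[:, 0] + rng.normal(0, 2, 200)

# Supervised regression: estimate energy consumption from operating conditions
print("LinearRegression R^2:", LinearRegression().fit(X, energy).score(X, energy))
print("MLP R^2:", MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000,
                               random_state=0).fit(X, energy).score(X, energy))

# Unsupervised: group operating regimes with k-means++ initialisation
km = KMeans(n_clusters=3, init="k-means++", n_init=10, random_state=0).fit(X)
print("cluster sizes:", np.bincount(km.labels_))

# Supervised classification: flag "high consumption" days with a decision tree
high = (energy > np.median(energy)).astype(int)
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, high)
print("tree accuracy:", tree.score(X, high))
```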

  • Brainoware: A breakthrough AI approach using brain organoids for advanced computation

    The article discusses "Brainoware," a novel AI hardware approach using brain organoids for advanced computation. Developed by U.S. researchers, Brainoware integrates brain organoids with a reservoir computing framework, demonstrating nonlinear dynamics, fading memory, and unsupervised learning. Brainoware's adaptive living reservoir, formed by functional brain organoids on high-density multielectrode arrays, exhibits promising results in tasks like speech recognition and chaotic equation prediction. Despite challenges, Brainoware highlights the potential of brain-inspired systems for addressing limitations in current AI hardware.

    Sushama R. Chaphalkar - 12 December 2023 - Article in English

    Read more

  • The past, current, and future of neonatal intensive care units with artificial intelligence: a systematic review

    This systematic review examines the impact of machine learning and deep learning in neonatology, specifically in neonatal intensive care units (NICUs). Analyzing 106 research articles from 1996 to 2022, the paper categorizes AI applications in areas such as survival analysis, neuroimaging, and retinopathy of prematurity diagnosis. Emphasizing the transformative role of deep learning, the review suggests a hybrid intelligence approach for integrating AI into NICUs. The paper concludes by highlighting the current status, clinic needs, and potential future directions for AI in neonatal care.

    Elif Keles - 27 November 2023 - Article in English

    Read more

  • Investigating the effect of textural properties on CO2 adsorption in porous carbons via deep neural networks using various training algorithms

    The article explores the impact of textural properties, specifically micropores, on carbon dioxide (CO2) adsorption in porous carbons using deep neural networks with various training algorithms. The study reveals the effectiveness of the Levenberg–Marquardt algorithm in achieving high accuracy. It discusses the correlation between textural properties and CO2 adsorption, emphasizing the potential of neural networks for process simulation. The research contributes insights into pressure-dependent CO2 adsorption behavior, emphasizing the importance of understanding factors influencing the adsorption process. The study utilizes extensive data, employing multilayer perceptron neural networks and radial basis function for modeling.

    Pardis Mehrmohammadi - 2 December 2023 - Article in English

    Read more

  • VEDLIoT: Next generation AIoT applications

    The VEDLIoT project focuses on developing energy-efficient Deep Learning methods for distributed AIoT applications, emphasizing algorithm optimization and addressing safety and security concerns. The project utilizes a modular and scalable cognitive IoT hardware platform with microserver technology, employing heterogeneous computing and hardware accelerators. It validates its architecture through use cases in Smart Homes, automotive (pedestrian safety), and industrial IoT (motor condition monitoring and arc fault detection). The article details the project's advancements, challenges, and outcomes across these domains, showcasing its contributions to AIoT systems.

    Jens Hagemeyer - 4 December 2023 - Article in English

    Read more

  • Automated Testing in Machine Learning Projects [Best Practices for MLOps]

    The article discusses the significance of automated testing in machine learning (ML) projects, emphasizing its role in identifying and preventing bugs throughout the development lifecycle. Automated testing involves using specialized tools to rigorously test software, and its popularity has grown with the rise of Agile development and Continuous Integration. The benefits include reduced developer effort, improved quality, faster release cycles, and easy distribution of tests across multiple devices. The article categorizes various types of automated tests in ML. The piece also introduces tools like Great Expectations, Deepchecks, and Aporia for implementing data and model testing in ML projects, providing a comprehensive guide to best practices for MLOps.

    Enes Zvorničanin - 15 November 2023 - Article in English

    Read more

  • Features extraction from multi-spectral remote sensing images based on multi-threshold binarization

    This paper introduces a novel approach for feature extraction from multi-spectral remote sensing images, addressing real-time processing challenges. The proposed method employs multi-threshold binarization to create a vector of discriminative features for classification. Comparative analysis with ResNet and Ensemble CNN models reveals the proposed method's superior accuracy for small datasets, maintaining competitive recall scores for larger datasets. Additionally, the approach demonstrates a fivefold reduction in training and inference time compared to ResNet and Ensemble CNN models, emphasizing its efficiency for remote sensing applications. The technique utilizes equidistant local thresholds along with a global threshold for effective multi-spectral image analysis. A simplified feature-extraction sketch follows this entry.

    Bohdan Rusyn - 11 November 2023 - Article in English

    Read more
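
    The NumPy sketch below is a loose, simplified reading of the idea described above: each spectral band is binarised at several equidistant thresholds plus a global one, and the fraction of "on" pixels per threshold forms a compact feature vector; the exact features used in the paper will differ.

```python
import numpy as np

def multithreshold_features(band, n_thresholds=8):
    """Binarise one spectral band at several equidistant local thresholds plus
    a global threshold, and return the fraction of 'on' pixels per threshold
    as a compact feature vector (a simplified reading of the paper's idea)."""
    lo, hi = band.min(), band.max()
    thresholds = np.linspace(lo, hi, n_thresholds + 2)[1:-1]   # equidistant levels
    global_t = band.mean()                                     # global threshold
    feats = [(band > t).mean() for t in thresholds]
    feats.append((band > global_t).mean())
    return np.array(feats)

# Hypothetical 4-band multi-spectral tile
rng = np.random.default_rng(0)
image = rng.random((4, 64, 64))
feature_vector = np.concatenate([multithreshold_features(b) for b in image])
print(feature_vector.shape)  # 4 bands x 9 features per band -> (36,)
```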

  • Anomalous behavior recognition of underwater creatures using lite 3D full-convolution network

    The article presents a real-time approach for recognizing anomalous behavior in underwater creatures, particularly focusing on marine life like tilapia and cobia. The proposed method utilizes a lightweight 3D full-convolution neural network named Lite3D, along with object detection and multitarget tracking. Lite3D stands out for its efficiency, being 50 times smaller in size and 57 times lighter in trainable parameters compared to other models. The research addresses the importance of monitoring marine life behaviors in the context of environmental challenges like global warming and pollution. The system aims to provide a valuable tool for ecological research and conservation efforts, with potential applications in underwater monitoring devices for aquaculture and marine life rehabilitation.

    Jung-Hua Wang - 16 November 2023 - Article in English

    Read more

  • Discovery of structure–property relations for molecules via hypothesis-driven active learning over the chemical space

    The article discusses a novel approach that combines physics-informed featurization and hypothesis-driven active learning for predicting material properties, specifically focusing on the formation enthalpy of molecules in the QM9 dataset. This approach aims to approximate generalized physical laws for material behavior, enabling predictions in unexplored chemical spaces with limited available data. It utilizes easily computable features and interpretable mathematical formulations to make meaningful predictions, emphasizing its potential in material design and discovery.

    Ayana Ghosh - 20 October 2023 - Article in English

    Read more

  • Training large-scale optoelectronic neural networks with dual-neuron optical-artificial learning

    The article introduces DANTE, a dual-neuron optical-artificial learning approach for training large-scale optoelectronic neural networks (ONNs). DANTE addresses challenges in diffractive ONNs by integrating optical and artificial neurons. It achieves unprecedented scalability with 150 million neurons on ImageNet, accelerates training speeds significantly on CIFAR-10, and outperforms existing methods. The study demonstrates DANTE's effectiveness through simulation experiments and physical ONN system development, highlighting its potential to advance optical computing for machine learning applications by solving large-scale practical problems.

    Xiaoyun Yuan - 4 November 2023 - Article in English

    Read more

  • Repurposing existing skeletal spatial structure (SkS) system designs using the Field Information Modeling (FIM) framework for generative decision-support in future construction projects

    The article discusses the application of Skeletal Spatial Structure (SkS) systems in construction, emphasizing their modular and sustainable nature. SkS systems, known for their high redundancy and lightweight design, have been historically used for rapid reconstruction and are now employed for aesthetic and free-form structures. The paper explores the advantages of SkS, such as mass customization, reduced weight, and ease of disassembly, along with the challenges of initial investment in training and production facilities.

    Reza Maalek - 10 November 2023 - Article in English

    Read more

  • Machine learning applied to fMRI patterns of brain activation in response to mutilation pictures predicts PTSD symptoms

    The article presents a study that uses multivariate pattern recognition methods and fMRI data to predict posttraumatic stress disorder (PTSD) symptoms in individuals exposed to traumatic events. Participants were shown unpleasant pictures in both a "real context" and a "safe context" with safety cues. Machine learning models were trained to predict PTSD symptoms based on brain activation patterns. The study found that the models could predict PTSD symptoms in the real context, particularly in occipito-parietal brain regions, but not in the safe context. This research highlights the potential for machine learning to identify biomarkers for PTSD and supports a dimensional approach to understanding mental health disorders.

    Liana Catarina - 5 October 2023 - Article in English

    Read more

  • Multi-output ensemble deep learning: A framework for simultaneous prediction of multiple electrode material properties

    The article introduces a multi-output ensemble deep learning framework for predicting multiple material properties of electrode materials used in batteries. It employs various techniques, including deep neural networks, Bayesian optimization, attention mechanisms, and deep belief networks, to enhance prediction accuracy. The model demonstrates strong performance and could expedite material discovery and battery development. Additionally, the framework's applicability extends to battery management and control throughout their lifespan.

    Hanqing Yu - October 2023 - Article in English

    Read more

  • Process operations: from models and data to digital applications

    The article discusses the use of complex digital applications (DAs) in industrial process operations. DAs are software systems that rely on mathematical models, machine learning, and data-driven techniques for decision support and control. The paper addresses the challenges, advancements, and the need for general platforms for code-free development and large-scale deployment of DAs. It also explores the role of DAs in major trends like autonomous plant operation and process plant modularization in the process industries.

    Constantinos C. Pantelides - 26 October 2023 - Article in English

    Read more

  • Adapting physiologically-based pharmacokinetic models for machine learning applications

    Both machine learning and physiologically-based pharmacokinetic models are becoming essential components of the drug development process. Integrating the predictive capabilities of physiologically-based pharmacokinetic (PBPK) models within machine learning (ML) pipelines could offer significant benefits in improving the accuracy and scope of drug screening and evaluation procedures.

    Sohaib Habiballah & Brad Reisfeld – 11 September 2023 - Article in English

    Read more

  • Enhanced antibody-antigen structure prediction from molecular docking using AlphaFold2

    This study focuses on improving the prediction of antibody-antigen complex structures, which is crucial in biomedical research. Existing methods have limitations, and even advanced models like AlphaFold2 struggle with these complexes due to a lack of specific constraints. The study employs physics-based protein docking methods to create sets of potential structures, some close to the native structure (positives) and others not (negatives). These sets serve as benchmarks for evaluating the accuracy of their predictions and aim to enhance our ability to predict antibody-antigen complex structures in real-world applications.

    Francis Gaudreault – 13 September 2023 - Article in English

    Read more

  • Getting Started with Scikit-learn in 5 Steps

    This tutorial delves into Scikit-learn, a widely used Python library for machine learning. It covers essential aspects of the machine learning workflow in five steps. Beginning with installation, it progresses to data preprocessing, emphasizing the need to clean and scale data for model accuracy. Model selection and training are explored, highlighting the importance of choosing the right algorithm for the specific problem. Model evaluation is stressed as a critical step, utilizing metrics like accuracy, precision, and recall. The tutorial culminates with performance enhancement techniques, including hyperparameter tuning and ensemble methods. Throughout, the importance of practical experience and continual learning in the evolving field of machine learning is underscored. A compact walk-through of the five steps follows this entry.

    Matthew Mayo – 16 September 2023 - Article in English

    Read more
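
    A compact version of the tutorial's five steps is sketched below on one of scikit-learn's built-in datasets; the specific model and hyperparameter grid are arbitrary choices for illustration.

```python
# A minimal run through the tutorial's five steps on a built-in dataset
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report

# 1) Install: pip install scikit-learn
# 2) Preprocess: split the data and scale the features
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

pipe = Pipeline([("scale", StandardScaler()),
                 ("model", RandomForestClassifier(random_state=0))])

# 3) Select and train a model; 5) tune hyperparameters with a small grid search
grid = GridSearchCV(pipe, {"model__n_estimators": [100, 300],
                           "model__max_depth": [None, 5]}, cv=5)
grid.fit(X_train, y_train)

# 4) Evaluate with accuracy, precision, and recall
print(grid.best_params_)
print(classification_report(y_test, grid.predict(X_test)))
```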

  • Inside DSPy: The New Language Model Programming Framework You Need to Know About

    The universe of language model programming (LMP) frameworks has been expanding rapidly over the last few months. Frameworks such as LangChain and LlamaIndex have achieved significant adoption within the LLM community, and Microsoft’s Semantic Kernel boasts an impressive set of capabilities. Recently, a new alternative known as DSPy came onto the scene with a unique approach to LMP. DSPy was created by Stanford University researchers with the goal of providing improved abstractions for different LMP tasks; it prioritizes programming over prompting as the foundation for building more sophisticated LMP apps.

    Jesus Rodriguez – 5 September 2023 - Article in English

    Read more

  • Prediction of DDoS attacks in agriculture 4.0 with the help of prairie dog optimization algorithm with IDSNet

    Integrating cutting-edge technology with conventional farming practices has been dubbed “smart agriculture” or “the agricultural internet of things.” Agriculture 4.0, made possible by the merging of Industry 4.0 and Intelligent Agriculture, is the next generation after industrial farming. Agriculture 4.0 introduces several additional risks, as thousands of IoT devices are left vulnerable after deployment. In this paper, the authors provide an IDS for DDoS attacks built on one-dimensional convolutional neural networks (IDSNet).

    Ramesh Vatambeti – 16 September 2023 - Article in English

    Read more

  • Healthcare predictive analytics using machine learning and deep learning techniques: a survey

    Healthcare prediction has been a significant factor in saving lives in recent years. In the domain of health care, intelligent systems are rapidly being developed to analyze complicated data relationships and transform them into actionable information for prediction. Consequently, artificial intelligence is transforming the healthcare industry, giving rise to machine learning and deep learning systems that diagnose and predict diseases, whether from clinical data or from images.

    Mohammed Badawy – 29 August 2023 - Article in English

    Read more

  • RNA contact prediction by data efficient deep learning

    On the path to full understanding of the structure-function relationship or even design of RNA, structure prediction would offer an intriguing complement to experimental efforts. Any deep learning on RNA structure, however, is hampered by the sparsity of labeled training data. Utilizing the limited data available, we here focus on predicting spatial adjacencies (“contact maps”) as a proxy for 3D structure. Our model, BARNACLE, combines the utilization of unlabeled data through self-supervised pre-training and efficient use of the sparse labeled data through an XGBoost classifier. BARNACLE shows a considerable improvement over both the established classical baseline and a deep neural network.

    Oskar Taubert – 6 September 2023 - Article in English

    Read more

  • An active learning machine technique based prediction of cardiovascular heart disease from UCI-repository database

    Heart disease is a significant global cause of mortality, and predicting it through clinical data analysis poses challenges. Machine learning (ML) has emerged as a valuable tool for diagnosing and predicting heart disease by analyzing healthcare data. Previous studies have extensively employed ML techniques in medical research for heart disease prediction. In this study, eight ML classifiers were utilized to identify crucial features that enhance the accuracy of heart disease prediction. Various combinations of features and well-known classification algorithms were employed to develop the prediction model.

    Saravanan Srinivasa – 21 August 2023 - Article in English

    Read more

  • Variance-capturing forward-forward autoencoder (VFFAE): A forward learning neural network for fault detection and isolation of process data

    Data-driven models have emerged as popular choices for fault detection and isolation (FDI) in process industries. However, updating these models in real time as data streams in requires significant computational resources, is tedious, and therefore poses difficulties for fault detection. To address this problem, this study develops a novel forward-learning neural network framework that can efficiently update data-driven models in real time for high-frequency data without compromising accuracy.

    Deepak Kumar – August 2023 - Article in English

    Read more

  • Scaling the Instagram Explore recommendations system

    AI plays an important role in what people see on Meta’s platforms. Every day, hundreds of millions of people visit Explore on Instagram to discover something new, making it one of the largest recommendation surfaces on Instagram. To build a large-scale system capable of recommending the most relevant content to people in real time out of billions of available options, we’ve leveraged machine learning (ML) to introduce a task-specific domain-specific language (DSL) and a multi-stage approach to ranking.

    Vladislav Vorotilov, Ilnur Shugaepov – 9 August 2023 - Article in English

    Read more

  • Bringing the world closer together with a foundational multimodal model for speech translation

    The world we live in has never been more interconnected—the global proliferation of the internet, mobile devices, social media, and communication platforms gives people access to more multilingual content than ever before. In such a context, having an on-demand ability to communicate and understand information in any language becomes increasingly important. While such a capability has long been dreamed of in science fiction, AI is on the verge of bringing this vision into technical reality.

    Meta.ai - 22 August 2023 - Article in English

    Read more

  • More metal-organic frameworks, fewer problems: A self-supervised transformer model for property prediction

    For decades, metal-organic frameworks (MOFs) have been captivating researchers because of their wide range of applications: gas adsorption, water harvesting, energy storage and desalination. Until now, quickly and inexpensively selecting the top performing MOFs for specific tasks has been challenging. Enter MOFormer, a machine learning model that can achieve higher accuracy on prediction tasks than leading models without explicitly relying on 3D atomic structures.

    Kaitlyn Landram – 31 July 2023 - Article in English

    Read more

  • Future Climate Prediction Based on Support Vector Machine Optimization in Tianjin, China

    Climate is closely related to human life, food security, and ecosystems, and forecasting future climate provides important information for agricultural production, water resources management, and other activities. In this paper, historical climate data from 1962–2001 at three sites in Tianjin (the Baodi, Tianjin, and Tanggu districts) was used as a baseline, and the model parameters were calibrated with the Long Ashton Research Station Weather Generator (LARS-WG). 2 m temperatures for 2011–2020 were then verified under two scenarios, representative concentration pathway (RCP) 4.5 and RCP 8.5, across different atmospheric circulation models, with the optimal minimum 2 m temperatures identified at the three sites.

    Yang Wang - 31 July 2023 - Article in English

    Read more

  • Application of Graph Neural Networks for Node Classification (on Cora Dataset)

    There are structural linkages between the items in many datasets used in different machine learning (ML) applications, and these interactions can be represented as graphs. Analysis of social and communication networks, traffic forecasting, and fraud detection are some of these applications. The goal of graph representation learning is to build and train models for such graph datasets that can be used for a range of ML tasks. This illustration shows a straightforward application of a Graph Neural Network (GNN) model: for the Cora dataset, the model is employed on a node prediction job to identify a paper’s topic based on its words and its network of citations. A condensed sketch of this setup follows this entry.

    Tejpal Kumawat – 14 July 2023 - Article in English

    Read more
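
    The sketch below shows the kind of node-classification setup the entry above describes, using PyTorch Geometric's Planetoid loader and a two-layer GCN; the hyperparameters follow common defaults and are not taken from the article.

```python
# Node classification on Cora with PyTorch Geometric
# (assumes torch and torch_geometric are installed)
import torch
import torch.nn.functional as F
from torch_geometric.datasets import Planetoid
from torch_geometric.nn import GCNConv

dataset = Planetoid(root="data/Cora", name="Cora")
data = dataset[0]  # a single graph: node features, edges, labels, train/test masks

class GCN(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = GCNConv(dataset.num_node_features, 16)
        self.conv2 = GCNConv(16, dataset.num_classes)

    def forward(self, x, edge_index):
        # Each GCNConv aggregates features from a node's neighbours
        x = F.relu(self.conv1(x, edge_index))
        x = F.dropout(x, p=0.5, training=self.training)
        return self.conv2(x, edge_index)

model = GCN()
optimizer = torch.optim.Adam(model.parameters(), lr=0.01, weight_decay=5e-4)

model.train()
for epoch in range(200):
    optimizer.zero_grad()
    out = model(data.x, data.edge_index)
    loss = F.cross_entropy(out[data.train_mask], data.y[data.train_mask])
    loss.backward()
    optimizer.step()

model.eval()
pred = model(data.x, data.edge_index).argmax(dim=1)
acc = (pred[data.test_mask] == data.y[data.test_mask]).float().mean()
print(f"test accuracy: {acc:.3f}")
```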

  • Research Hotspots and Trends of Deep Learning in Critical Care Medicine: A Bibliometric and Visualized Study

    Interest in the application of deep learning (DL) in critical care medicine (CCM) is growing rapidly. However, comprehensive bibliometric research that analyzes and measures the global literature is still lacking. The present study aimed to systematically evaluate the research hotspots and trends of DL in CCM worldwide based on publication output, research cooperation, citations, and the co-occurrence of keywords.

    Kaichen Zhang & Yihua Fan - 29 July 2023 - Article in English

    Read more

  • Hardware conversion of convolutional neural networks

    AI applications require massive energy consumption, often in the form of server farms or expensive field programmable gate arrays (FPGAs). The challenge lies in increasing computational power while keeping energy consumption and costs low. Now, AI applications are seeing a dramatic shift enabled by powerful Intelligent Edge computing. Compared to traditional firmware-based computation, hardware-based convolutional neural network acceleration is now ushering in a new era of computational performance with its impressive speed and power. 

    Ole Dreessen – 18 July 2023 - Article in English

    Read more

  • Continuous estimation of power system inertia using convolutional neural networks

    Inertia is a measure of a power system’s capability to counteract frequency disturbances: in conventional power networks, inertia is approximately constant over time, which contributes to network stability. We develop a framework for the continuous estimation of the inertia in an electric power system, exploiting state-of-the-art artificial intelligence techniques. We perform an in-depth investigation based on power spectra analysis and input-output correlations to explain how the artificial neural network operates in this specific realm, thus shedding light on the input features necessary for proper neural-network training.

    Daniele Linaro - 24 July 2023 - Article in English

    Read more

  • How Much Of AI Is Inspired By Biological Neural Networks?

    The biological neural network is necessary for human survival. Is there a difference between biological neural networks and artificial neural networks? Do biological neural networks act as inspiration for artificial intelligence? What factors of neural networks are mostly studied? 

    Irshad Anwar – 12 July 2023 - Article in English

    Read more

  • Can LNNs Replace Transformers?

    In liquid neural networks, the parameters change over time based on the results of a nested set of differential equations, which means the network can adapt to new tasks on its own and does not require vast amounts of training. The human brain has approximately 86 billion neurons, and it is quite a complex task to mimic it by building neural networks such as RNNs, CNNs, or Transformers, since scaling a network up to that level is not feasible. This also comes with the problem of collecting huge amounts of labelled training data.

    Mohit Pandey – 11 July 2023 - Article in English

    Read more

  • Taking the First Steps Toward Enterprise AI

    It is clear that AI is being used to power up businesses now more than ever, and companies that don’t use AI risk being outcompeted by those who do. The best AI-powered products are fueled by a diverse collection of high-quality data. The most critical and impactful step you can take towards enterprise AI today is ensuring you have a solid data foundation built on the modern data stack with mature operational pipelines, including all your most critical operational data. phData can help with this foundational strategy and platform along with your AI building needs by utilizing our teams of data engineering, analytics, and AI experts.

    Grant Henke – 7 June 2023 - Article in English

    Read more

  • A Novel Explanatory Tabular Neural Network to Predicting Traffic Incident Duration Using Traffic Safety Big Data

    Traffic incidents pose substantial hazards to public safety and wellbeing, and accurately estimating their duration is pivotal for efficient resource allocation, emergency response, and traffic management. However, existing research often faces limitations in terms of limited datasets and struggles to achieve satisfactory results in both prediction accuracy and interpretability. This paper establishes a novel prediction model of traffic incident duration using a tabular network (TabNet) model, while also investigating its interpretability. The study incorporates various novel aspects.

    Huiping Li – 29 June 2023 - Article in English

    Read more

  • Julia Vs R: Choosing The Right Language For Data Analysis

    Delve into the battle of Julia vs R as we explore the nuances of these two prominent languages in the realm of data analysis. Discover their unique strengths, performance capabilities, and ecosystem features that can shape your data-driven endeavors.

    Bahniuk Nazar - 25 June 2023 - Article in English

    Read more

  • AI Consciousness: An Exploration of Possibility, Theoretical Frameworks & Challenges

    AI consciousness is a complex and fascinating concept that has captured the interest of researchers, scientists, philosophers, and the public. As AI continues to evolve, the question inevitably arises: Can machines attain a level of consciousness comparable to human beings? With the emergence of Large Language Models (LLMs) and Generative AI, the road to achieving the replication of human consciousness is also becoming possible.

    Haziqa Sajid – 26 June 2023 - Article in English

    Read more

  • Meta releases I-JEPA, a machine learning model that learns high-level abstractions from images

    For several years, Meta’s chief AI scientist Yann LeCun has been talking about deep learning systems that can learn world models with little or no help from humans. Now, that vision is slowly coming to fruition as Meta has just released the first version of I-JEPA, a machine learning (ML) model that learns abstract representations of the world through self-supervised learning on images.

    Ben Dickson – 15 June 2023 - Article in English

    Read more

  • Cloudera Image Warehouse – End-to-End Computer Vision Use Cases in Cloudera

    Artificial Intelligence (AI), as we all know, is taking the technological world by storm, and it’s no secret that it is drastically changing how businesses are run. Through AI, companies are constantly looking for ways to enhance their operations and stay ahead of the competition in today’s fast-paced and constantly changing market. Computer vision is one of the buzziest fields in AI right now, and it involves enabling machines to interpret and understand visual data from the world around them.

    Valerio D.M – 22 June 2023 - Article in English

    Read more

  • Prediction of heavy metals adsorption by hydrochars and identification of critical factors using machine learning algorithms

    Hydrochar has become a popular product for immobilizing heavy metals in water bodies. However, the relationships between the preparation conditions, hydrochar properties, adsorption conditions, heavy metal types, and the maximum adsorption capacity (Qm) of hydrochar are not adequately explored. Four artificial intelligence models were used in this study to predict the Qm of hydrochar and identify the key influencing factors.

    Fangzhou Zhao – May 2023 - Article in English

    Read more

  • Survision: Artificial Intelligence & LPR - Evolution or Revolution

    Today, Artificial intelligence (AI) plays a critical role in License Plate Recognition (LPR) technology. LPR systems use a combination of image processing and machine learning techniques to recognize license plate numbers from images captured by cameras. One of the main challenges in LPR is to perform consistently despite external conditions such as lighting, angles, obstructions, and variations. AI-powered LPR systems are trained on large datasets of license plates captured under many different conditions, so they learn to recognize the characters even if the image is distorted, blurred, or partially obscured; they are also capable of self-improvement, automatically adjusting their parameters.

    Laura Caillot - 8 June 2023 - Article in English

    Read more

  • The pipeline for the continuous development of artificial intelligence models—Current state of research and practice

    Companies struggle to continuously develop and deploy Artificial Intelligence (AI) models to complex production systems due to AI characteristics while assuring quality. To ease the development process, continuous pipelines for AI have become an active research area where consolidated and in-depth analysis regarding the terminology, triggers, tasks, and challenges is required.

    Monika Steidl – May 2023 - Article in English

    Read more

  • Develop Physics-Informed Machine Learning Models with Graph Neural Networks

    NVIDIA Modulus is a framework for building, training, and fine-tuning deep learning models for physical systems, otherwise known as physics-informed machine learning (physics-ML) models. Modulus is available as OSS (Apache 2.0 license) to support the growing physics-ML community.

    Bhoomi Gadhia – 6 June 2023 - Article in English

    Read more

  • An overview of graph neural networks (GNNs), types and applications

    Graph neural networks (GNNs) have emerged as a powerful framework for analyzing and learning from structured data represented as graphs. GNNs operate directly on graphs, as opposed to conventional neural networks that are created for grid-like input, and they capture the dependencies and relationships between connecting nodes. GNNs are an important area of research due to their adaptability and potential, and they show significant promise for increasing graph-based learning and analysis.

    Tayyub Yaqoob – 27 May 2023 - Article in English

    Read more

  • Deep Q-Networks: Combining Neural Networks and Q-Learning for Superior Results

    Deep Q-Networks (DQNs) have emerged as a powerful fusion of neural networks and Q-learning, a popular reinforcement learning technique. This combination has led to remarkable advancements in artificial intelligence (AI) and machine learning, enabling computers to learn and adapt to new environments and tasks with unprecedented efficiency. By integrating the strengths of both neural networks and Q-learning, DQNs have demonstrated superior performance in various applications, from playing video games to controlling robots (a brief sketch of the core update follows this entry).

    André De Bonis – 25 May 2023 - Article in English

    Read more
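
    To make the DQN idea concrete, here is a minimal and purely illustrative PyTorch sketch of the core update: a small network predicts Q-values, and the training target is the reward plus the discounted maximum Q-value of the next state, taken from a separate target network. The network sizes, hyperparameters, and dummy batch are assumptions for the example, not the article's setup.

      # Minimal DQN update sketch (illustrative sizes and hyperparameters).
      import torch
      import torch.nn as nn

      class QNet(nn.Module):
          """Maps an observation to one Q-value per action."""
          def __init__(self, n_obs, n_actions):
              super().__init__()
              self.net = nn.Sequential(nn.Linear(n_obs, 64), nn.ReLU(),
                                       nn.Linear(64, n_actions))
          def forward(self, x):
              return self.net(x)

      def dqn_update(qnet, target_net, optimizer, batch, gamma=0.99):
          """One Q-learning step on a batch of (s, a, r, s_next, done) transitions."""
          s, a, r, s_next, done = batch
          q = qnet(s).gather(1, a.unsqueeze(1)).squeeze(1)        # Q(s, a)
          with torch.no_grad():                                    # bootstrapped target
              target = r + gamma * (1 - done) * target_net(s_next).max(1).values
          loss = nn.functional.mse_loss(q, target)
          optimizer.zero_grad()
          loss.backward()
          optimizer.step()
          return loss.item()

      # Dummy transitions stand in for samples drawn from a replay buffer.
      qnet, target_net = QNet(4, 2), QNet(4, 2)
      opt = torch.optim.Adam(qnet.parameters(), lr=1e-3)
      batch = (torch.rand(8, 4), torch.randint(0, 2, (8,)), torch.rand(8),
               torch.rand(8, 4), torch.zeros(8))
      print(dqn_update(qnet, target_net, opt, batch))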

  • Study presents large brain-like neural networks for AI

    In a new study in Nature Machine Intelligence, researchers Bojian Yin and Sander Bohté from the HBP partner Dutch National Research Institute for Mathematics and Computer Science (CWI) demonstrate a significant step towards artificial intelligence that can be used in local devices like smartphones and in VR-like applications, while protecting privacy. They show how brain-like neurons combined with novel learning methods enable training fast and energy-efficient spiking neural networks on a large scale.

    Human Brain Project - 8 May 2023 - Article in English

    Read more

  • How to optimize your business with AIOps for IT operations management?

    Artificial intelligence for IT operations, commonly known as AIOps, is a technology that utilizes machine learning (ML) and analytics to automate and improve IT operations management. AIOps provides IT teams with valuable insights into the performance of their systems, allowing them to proactively identify issues and resolve them quickly, ultimately reducing downtime and increasing overall efficiency.

    Sudeep Srivastava – 19 May 2023 - Article in English

    Read more

  • Self-Supervised Learning: Concepts, Examples

    Self-supervised learning is a hot topic in the world of data science and machine learning. It is an approach to training machine learning models using unlabeled data, which has recently gained significant traction due to its effectiveness in various applications. Self-supervised learning differs from supervised learning, where models are trained using labeled data, and from unsupervised learning, where models are trained using unlabeled data without any pre-defined objectives. Instead, self-supervised learning defines pretext tasks that train models to extract useful features from the data, which can later be fine-tuned for specific downstream tasks (see the pretext-task sketch after this entry).

    Ajitesh Kumar – 9 May 2023 - Article in English

    Read more
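
    To illustrate what a pretext task looks like, the sketch below (PyTorch, purely illustrative) uses rotation prediction: unlabeled images are rotated by 0, 90, 180, or 270 degrees, and the model is trained to predict which rotation was applied, so the labels come from the data itself. The tiny encoder and random tensors stand in for a real backbone and dataset.

      # Rotation-prediction pretext task: pseudo-labels are generated from the data itself.
      import torch
      import torch.nn as nn

      def make_rotation_batch(images):
          """images: (B, C, H, W) unlabeled batch -> rotated images + labels 0..3."""
          ks = torch.randint(0, 4, (images.size(0),))
          rotated = torch.stack([torch.rot90(img, int(k), dims=(1, 2))
                                 for img, k in zip(images, ks)])
          return rotated, ks

      encoder = nn.Sequential(                 # backbone whose features we want to learn
          nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
          nn.AdaptiveAvgPool2d(1), nn.Flatten(),
      )
      head = nn.Linear(16, 4)                  # predicts which of the 4 rotations was applied

      opt = torch.optim.Adam(list(encoder.parameters()) + list(head.parameters()), lr=1e-3)
      images = torch.rand(8, 3, 32, 32)        # stands in for an unlabeled batch
      x, y = make_rotation_batch(images)
      loss = nn.functional.cross_entropy(head(encoder(x)), y)
      opt.zero_grad()
      loss.backward()
      opt.step()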

  • SVSBI: sequence-based virtual screening of biomolecular interactions

    Virtual screening (VS) is a critical technique in understanding biomolecular interactions, particularly in drug design and discovery. However, the accuracy of current VS models heavily relies on three-dimensional (3D) structures obtained through molecular docking, which is often unreliable due to its low accuracy. To address this issue, we introduce sequence-based virtual screening (SVS) as a new generation of VS models that utilize advanced natural language processing (NLP) algorithms and optimized deep K-embedding strategies to encode biomolecular interactions without relying on 3D structure-based docking. SVS has the potential to transform current practices in drug discovery and protein engineering.

    Li Shen – 18 May 2023 - Article in English

    Read more

  • Implementing Deep Learning Using fastai — Image Classification

    In recent years, Artificial Intelligence (AI) has garnered a lot of attention, especially in recent months with the launch of ChatGPT. One of the foundational technologies in AI is deep learning, a machine learning technique where you use neural networks to learn the relationships between the features and labels of a dataset.

    Wei-Meng Lee – 19 April 2023 - Article in English

    Read more

  • Brain Games Reveal Clues on How the Mind Works

    Using data from the game “Ebb and Flow”, researchers are training machine learning algorithms to mimic the human ability to switch attention between tasks. The findings shed new light on cognitive control and may expand the current understanding of disorders marked by cognitive control deficits, such as bipolar disorder and schizophrenia.

    Paul Jaffe - 19 April 2023 - Article in English

    Read more

  • A New AI Research Integrates Masking into Diffusion Models to Develop Diffusion Masked Autoencoders (DiffMAE)

    There has been a long-standing desire to provide visual data in a way that allows for deeper comprehension. Early methods used generative pretraining to set up deep networks for subsequent recognition tasks, including deep belief networks and denoising autoencoders. Given that generative models may generate new samples by roughly simulating the data distribution, it makes sense that, in Feynman’s tradition, such modeling should also eventually reach a semantic grasp of the underlying visual data, which is necessary for recognition tasks.

    Aneesh Tickoo - 11 April 2023 - Article in English

    Read more

  • Neuromorphic Camera Sees the Future

    This camera captures the past and present in a single image, making future trajectory predictions much more computationally and energy efficient. As certain technologies develop, the need for dynamic and efficient computer vision systems is growing. These systems use sophisticated algorithms and machine learning techniques to analyze video streams in real time and make predictions about future movements of objects and people.

    Nick Bild – 20 April 2023 - Article in English

    Read more

  • A Real-Time Traffic Sign Recognition Method Using a New Attention-Based Deep Convolutional Neural Network for Smart Vehicles

    Artificial Intelligence (AI) in the automotive industry allows car manufacturers to produce intelligent and autonomous vehicles through the integration of AI-powered Advanced Driver Assistance Systems (ADAS) and/or Automated Driving Systems (ADS) such as the Traffic Sign Recognition (TSR) system. Existing TSR solutions recognise only a limited set of sign categories. For this reason, a TSR approach encompassing more road sign categories, such as Warning, Regulatory, Obligatory, and Priority signs, is proposed to build an intelligent, real-time system.

    Nesrine Triki - 11 April 2023 - Article in English

    Read more

  • Reinforcement learning: From board games to protein design

    Scientists have successfully applied reinforcement learning to a challenge in molecular biology. The team of researchers developed powerful new protein design software adapted from a strategy proven adept at board games like Chess and Go. In one experiment, proteins made with the new approach were found to be more effective at generating useful antibodies in mice. The findings suggest that this breakthrough may soon lead to more potent vaccines. More broadly, the approach could lead to a new era in protein design.

    Ian Haydon – 20 April 2023 - Article in English

    Read more

  • What Is Few-Shot Learning?

    Few-shot learning is a subfield of machine learning and deep learning that aims to teach AI models how to learn from only a small number of labeled training examples. The goal of few-shot learning is to enable models to generalize to new, unseen data samples based on the small number of samples provided during training. In general, few-shot learning involves training a model on a set of tasks, each of which consists of a small number of labeled samples; the model learns to recognize patterns in the data and to apply this knowledge to new tasks (see the episode-sampling sketch after this entry).

    Artem Oppermann - 6 April 2023 - Article in English

    Read more
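
    As a concrete picture of the episodic setup described above, the NumPy sketch below builds a single N-way, K-shot "task": a support set with a few labeled examples per class and a query set the model must generalize to. The data is synthetic and the sampling routine is an illustration, not the article's code.

      # Build one N-way, K-shot episode from a labeled pool (toy synthetic data).
      import numpy as np

      rng = np.random.default_rng(0)
      X = rng.normal(size=(1000, 64))           # feature vectors
      y = rng.integers(0, 20, size=1000)        # 20 classes in the base dataset

      def sample_episode(X, y, n_way=5, k_shot=3, n_query=5):
          """Pick n_way classes, then k_shot support and n_query query examples per class."""
          classes = rng.choice(np.unique(y), size=n_way, replace=False)
          support, query = [], []
          for label, c in enumerate(classes):
              idx = rng.choice(np.where(y == c)[0], size=k_shot + n_query, replace=False)
              support += [(X[i], label) for i in idx[:k_shot]]
              query += [(X[i], label) for i in idx[k_shot:]]
          return support, query

      support, query = sample_episode(X, y)
      print(len(support), len(query))           # 15 support examples, 25 query examples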

  • This AI Paper Introduces SELF-REFINE: A Framework For Improving Initial Outputs From LLMs Through Iterative Feedback And Refinement

    Iterative improvement is an essential aspect of human problem solving: a person makes a first draft and then improves it through self-feedback. Using feedback and iterative modification, the authors show in this study that large language models (LLMs) can successfully mimic this cognitive process (a minimal sketch of such a refinement loop follows this entry).

    Aneesh Tickoo - 7 April 2023 - Article in English

    Read more
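
    The loop described above can be sketched as a small driver: the same model drafts an answer, critiques it, and rewrites it until the critique says to stop. The generate() function below is a hypothetical placeholder for an LLM call, and the prompts are illustrative rather than taken from the paper.

      # Sketch of an iterate-with-self-feedback loop (generate() is a hypothetical LLM call).
      def generate(prompt: str) -> str:
          raise NotImplementedError("Plug in your preferred LLM client here.")

      def self_refine(task: str, max_rounds: int = 3) -> str:
          """Draft, critique, and rewrite until the critique says STOP or rounds run out."""
          draft = generate(f"Solve the following task:\n{task}")
          for _ in range(max_rounds):
              feedback = generate(
                  f"Task:\n{task}\n\nDraft answer:\n{draft}\n\n"
                  "Give concrete feedback, or reply STOP if the draft needs no changes."
              )
              if "STOP" in feedback:
                  break
              draft = generate(
                  f"Task:\n{task}\n\nDraft answer:\n{draft}\n\n"
                  f"Feedback:\n{feedback}\n\nRewrite the answer applying the feedback."
              )
          return draft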

  • What is a Data pipeline for Machine Learning?

    As machine learning technologies continue to advance, the need for high-quality data becomes increasingly important. Data is the lifeblood of computer vision applications, as it serves as the basis for machine learning algorithms to learn and recognize patterns in images or videos. Without high-quality data, computer vision models will not be able to effectively identify objects, recognize faces, or track motion accurately.

    TagX – 18 March 2023 - Article in English

    Read more

  • Accelerating the design of compositionally complex materials via physics-informed artificial intelligence

    The chemical space for designing materials is practically infinite. This makes disruptive progress by traditional physics-based modeling alone challenging. Yet, training data for identifying composition–structure–property relations by artificial intelligence are sparse. We discuss opportunities to discover new chemically complex materials by hybrid methods where physics laws are combined with artificial intelligence.

    Dierk Raabe - 31 March 2023 - Article in English

    Read more

  • Biological research and self-driving labs in deep space supported by artificial intelligence

    Space biology research aims to understand the fundamental effects of spaceflight on organisms, develop fundamental knowledge to support deep space exploration, and ultimately develop spacecraft and habitat bioengineering to stabilize the ecosystem of plants, crops, microbes, animals, and humans for sustainable life on multiple planets. To achieve these goals, the field leverages experiments, platforms, data, and model organisms from space and analog ground-based studies. As research extends beyond low Earth orbit, experiments and platforms must be maximally automated, lightweight, agile, and intelligent to accelerate knowledge discovery.

    Lauren M. Sanders - 23 March 2023 - Article in English

    Read more

  • An architecture that combines deep neural networks and vector-symbolic models

    Researchers at IBM Research Zürich and ETH Zürich have recently created a new architecture that combines two of the most renowned artificial intelligence approaches, namely deep neural networks and vector-symbolic models. Their architecture, presented in Nature Machine Intelligence, could overcome the limitations of both these approaches, solving progressive matrices and other reasoning tasks more effectively.

    Ingrid Fadelli - 30 March 2023 - Article in English

    Read more

  • Meet Neural Functional Networks (NFNs): An AI Framework That Can Process Neural Network Weights While Respecting Their Permutation Symmetries

    With the advent of large language models like ChatGPT, neural networks have become increasingly popular in natural language processing. The recent success of LLMs rests largely on deep neural networks and their capabilities, including the power to process and analyze huge amounts of information efficiently and precisely. With the latest neural network architectures and training methods, their applications have set new benchmarks and become extremely powerful.

    LAZYAI – 15 March 2023 - Article in English

    Read more

  • Demystifying Encoder Decoder Architecture & Neural Network

    In the field of artificial intelligence and machine learning, the encoder-decoder architecture is a widely used framework for developing neural networks capable of performing natural language processing (NLP) tasks such as language translation. This architecture involves a two-step process in which input data is first encoded into a fixed-length representation, which is then decoded to produce an output in the desired format (a toy sketch follows this entry). As a data scientist, understanding the encoder-decoder architecture and the underlying principles of neural networks is essential to developing sophisticated models capable of handling complex data sets.

    Ajitesh Kumar - 16 March 2023 - Article in English

    Read more
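
    As a toy illustration of the two-step process (encode the input into a fixed-length representation, then decode it into the output), here is a minimal sequence-to-sequence skeleton in PyTorch. The GRU layers, vocabulary sizes, and teacher-forced decoding are assumptions made for the example.

      # Toy encoder-decoder: a GRU encoder compresses the input sequence into a single
      # hidden vector, and a GRU decoder unrolls that vector into the output sequence.
      import torch
      import torch.nn as nn

      class Seq2Seq(nn.Module):
          def __init__(self, src_vocab, tgt_vocab, hidden=128):
              super().__init__()
              self.src_emb = nn.Embedding(src_vocab, hidden)
              self.tgt_emb = nn.Embedding(tgt_vocab, hidden)
              self.encoder = nn.GRU(hidden, hidden, batch_first=True)
              self.decoder = nn.GRU(hidden, hidden, batch_first=True)
              self.out = nn.Linear(hidden, tgt_vocab)

          def forward(self, src_ids, tgt_ids):
              _, state = self.encoder(self.src_emb(src_ids))   # fixed-length representation
              dec_out, _ = self.decoder(self.tgt_emb(tgt_ids), state)
              return self.out(dec_out)                          # logits for each target step

      model = Seq2Seq(src_vocab=1000, tgt_vocab=1200)
      src = torch.randint(0, 1000, (4, 12))    # batch of 4 source sequences, length 12
      tgt = torch.randint(0, 1200, (4, 10))    # teacher-forced target inputs, length 10
      logits = model(src, tgt)                  # shape: (4, 10, 1200)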

  • Energy-based analog neural network framework

    Over the past decade, a body of work has emerged demonstrating the disruptive potential of neuromorphic systems in a wide range of studies, often combining new machine learning models and nanotechnologies. Yet the scope of the research often remains limited to simple problems, as the process of building, training, and evaluating mixed-signal neural models is slow and laborious. In this paper, we present an open-source framework, called EBANA, that provides a unified, modularized, and extensible infrastructure, similar to conventional machine learning pipelines, for the construction and validation of analog neural networks (ANNs).

    Mohamed Watfa - 03 March 2023 - Article in English

    Read more

  • Generative adversarial network (GAN)

    A generative adversarial network (GAN) is a machine learning (ML) model in which two neural networks compete with each other, using deep learning methods to become more accurate in their predictions. GANs typically run unsupervised and learn through a zero-sum game, where one network's gain is the other network's loss. The two neural networks that make up a GAN are referred to as the generator and the discriminator: the generator is typically a deconvolutional (transposed-convolution) network that produces candidate samples, while the discriminator is a convolutional network that judges whether a sample is real or generated (a toy training step is sketched after this entry).

    Kinza Yasar, Sarah Lewis - March 2023 - Article in English

    Read more
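
    A minimal, purely illustrative PyTorch sketch of one adversarial training step: the generator maps noise to fake samples, the discriminator scores real versus fake, and the two are optimised with opposing objectives. The layer sizes and the random "real" batch are placeholders, not a recommended architecture.

      # Toy GAN training step on flattened 28x28 images (sizes are illustrative).
      import torch
      import torch.nn as nn

      G = nn.Sequential(nn.Linear(64, 256), nn.ReLU(), nn.Linear(256, 784), nn.Tanh())
      D = nn.Sequential(nn.Linear(784, 256), nn.LeakyReLU(0.2), nn.Linear(256, 1))

      opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
      opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
      bce = nn.BCEWithLogitsLoss()

      real = torch.rand(32, 784) * 2 - 1       # stands in for a batch of real images
      noise = torch.randn(32, 64)

      # Discriminator step: push real samples towards 1 and generated samples towards 0.
      fake = G(noise).detach()
      d_loss = bce(D(real), torch.ones(32, 1)) + bce(D(fake), torch.zeros(32, 1))
      opt_d.zero_grad()
      d_loss.backward()
      opt_d.step()

      # Generator step: fool the discriminator into predicting 1 for generated samples.
      g_loss = bce(D(G(noise)), torch.ones(32, 1))
      opt_g.zero_grad()
      g_loss.backward()
      opt_g.step()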

  • Unlocking the Secrets of Deep Learning with Tensorleap’s Explainability Platform

    Deep Learning (DL) advances have cleared the way for intriguing new applications and are influencing the future of Artificial Intelligence (AI) technology. However, a typical concern for DL models is their explainability, as experts commonly agree that Neural Networks (NNs) function as black boxes: we do not know precisely what happens inside, only that the given input is somehow processed and produces some output. Understanding why a model makes certain predictions, or how to improve it, can be challenging. This article introduces and emphasizes the importance of NN explainability and provides insights into how to achieve it.

    Daniele Lorenzi - 1 March 2023 - Article in English

    Read more

  • Social media bot detection with deep learning methods: a systematic review

    Social bots are automated social media accounts governed by software and controlled by humans at the backend. Some bots have good purposes, such as automatically posting news updates and even providing help during emergencies. Nevertheless, bots have also been used for malicious purposes, such as posting fake news, spreading rumours, or manipulating political campaigns. There are existing mechanisms that allow malicious bots to be detected and removed automatically.

    Kadhim Hayawi - 06 March 2023 - Article in English

    Read more

  • An MIT study shows how large language models can learn a new task from just a few examples, without the need for any new training data

    MIT researchers found massive neural network models similar to large language models are capable of completing a new task using simple learning algorithms. This is part of a larger effort to expand artificial intelligence (AI) and machine learning (ML) with the rise of GPT and other neural networks, which are showing their potential in helping human workers and changing how information is gathered and processed.

    ADAM ZEWE - 21 FEBRUARY 2023 - Article in English

    Read more

  • Machine learning and DevSecOps: Inside the OctoML/GitLab integration

    Machine learning can be a powerful tool in software development, but not if it has to live apart from existing engineering workflows. DevSecOps teams, including MLOps, can now integrate OctoML CLI into GitLab’s CI/CD Pipelines to unify workflows and leverage existing deployment and monitoring infrastructure. This integration makes it easier to catch bugs and model performance degradations early in the ML development cycle. The OctoML Platform is a machine learning model optimization and deployment service powered by octoml.ai.

    Sameer Farooqui – 23 February 2023 - Article in English

    Read more

  • Answering the Abstruse: ML Applications & Algorithms

    Machine learning has become a buzzword and is often used interchangeably with artificial intelligence (AI), although machine learning is only a subset of AI, which adds to the confusion about what machine learning is. Machine learning is much more nuanced than just programming, and is not just about robots! For example, machine learning can be used in medical diagnosis to determine whether a tumour is benign or malignant. Machine learning can also be used in city planning, for example to predict the risk of latent fire in a city.

    LeAnne Chan - 8 June 2020 - Article in English

    Read more

  • Cell quantification in digital contrast microscopy images with convolutional neural networks algorithm

    High Content Screening (HCS) or High Content Analysis (HCA) equipment was developed with the goal of combining the efficiency of high performance techniques with the ability to collect quantitative data from cellular images of complex biological systems. HCS is a type of automated microscopy capable of acquiring and analyzing fluorescence or light field (digital contrast) images for multiparametric evaluations of microplate-based cell assays. Possible applications of this microscopy include assessments of cell morphology, cell death, nuclear morphology, membrane protein internalization and others.

    G. F. Silveira – 14 February 2023 - Article in English

    Read more

  • End-to-End MLOps Architecture and Workflow

    Machine learning projects developed for industrial business problems aim to be developed and deployed to production quickly. Building an automated ML pipeline, however, is a challenge, and it is one reason many ML projects fail to deliver on their expectations. The problem of automated ML pipelines can be addressed by adopting the Machine Learning Operations (MLOps) concept. Many industrial ML projects fail to progress from proof of concept to production, and even today data scientists manage ML pipelines manually, which causes many issues during operation. This article addresses these traditional problems through MLOps architecture and workflow in detail.

    Sarvagya Agrawal - 9 February 2023 - Article in English

    Read more

  • A New Artificial Intelligence (AI) Research From The University of Maryland Propose A Shape-Aware Text-Driven Layered Video Editing Tool

    Video editing, which involves manipulating and rearranging video clips to achieve desired goals, has been revolutionized by the integration of artificial intelligence (AI) into computing. AI-powered video editing tools enable faster and more efficient post-production processes. Thanks to advances in deep learning algorithms, AI can now automatically perform tasks such as color correction, object tracking and even content creation. Using AI in video editing can significantly reduce the time and effort required to produce high-quality video content, while also providing new creative opportunities.

    Daniele Lorenzi - 15 February 2023 - Article in English

    Read more

  • Simplify your ML Development Cycle with Anyscale and Weights & Biases

    In this blog post, we'll review the challenges of deploying ML models in production, and how using Anyscale with Weights & Biases simplifies MLOps with built-in integration that enables scalability, repeatability, and consistency. Anyscale and Weights & Biases minimize refactoring and reduce friction throughout the ML development lifecycle. As organizations continue to expand the use of machine learning (ML) and AI workloads to improve their operations, MLOps (Machine Learning operations) has become a key initiative to ensure the smooth and efficient deployment and scaling of AI/ML workloads in production.

    Phi Nguyen - 31 January 2023 - Article in English

    Read more

  • ChatGPT demystified

    You've heard of ChatGPT, but do you know what it is in detail? Why is it in the news so much? And how does it pose the biggest threat to Google search? These simplified explanations will give you some background and allow you to form your own opinion on these questions. Learning about these powerful technologies is not just a matter of curiosity (although it is fun!). They have exciting, and scary, implications. Being informed about what they are and what they are not can help prepare you for the social changes they may bring.

    Ente.io – 31 January 2023 - Article in English

    Read more

  • AI Inference Software Fundamentals: Getting Started with Optical Character Recognition

    AI/ML opens up a world of possibilities for developers to do new and exciting things with their applications. But if you really want to become a true AI developer, you need to understand the basics first. A good place to start is to learn about optical character recognition (OCR). OCR may be a basic machine learning application (it's been around since 1965!), but it's important for several reasons.

    Raymond Lo – 19 January 2023 - Article in English

    Read more

  • The definitive guide to adversarial machine learning

    Machine learning is becoming an important component of many applications we use every day. With so many critical tasks being transferred to machine learning and deep learning models, it is natural to be a little concerned about their security. At the forefront are adversarial examples: imperceptible changes to input data that manipulate the behaviour of machine learning models (a short sketch of one such attack follows this entry). Adversarial attacks can lead to embarrassing or fatal errors. Chen and Hsieh bring together the intuition and science behind the key components of adversarial machine learning.

    Ben Dickson - 23 January 2023 - Article in English

    Read more
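
    One of the best-known attacks of this kind is the fast gradient sign method (FGSM). The sketch below (PyTorch) shows the essential idea: nudge each input pixel a tiny step in the direction that increases the model's loss. The model, image, and label arguments are assumed to be supplied by the reader; this is an illustration, not the authors' code.

      # FGSM sketch: perturb each input pixel in the direction that increases the loss.
      import torch
      import torch.nn as nn

      def fgsm(model, image, label, epsilon=0.01):
          """image: (1, C, H, W) tensor in [0, 1]; label: (1,) class index tensor."""
          image = image.clone().detach().requires_grad_(True)
          loss = nn.functional.cross_entropy(model(image), label)
          loss.backward()
          adversarial = image + epsilon * image.grad.sign()   # tiny, worst-case step
          return adversarial.clamp(0, 1).detach()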

  • Special Issue Review: Artificial Intelligence and Machine Learning Applications in Remote Sensing

    Remote sensing is used in an increasingly wide range of applications. Artificial intelligence (AI) based models and methodologies are commonly used to enhance the performance of remote sensing technologies. Deep learning (DL) models are the most studied AI-based models due to their high efficiency and performance. In this article, we review nine papers included in this special issue, most of which report on studies based on satellite data and machine learning, reflecting the most widespread trends in remote sensing research.

    Ying-Nong Chen - 18 January 2023 - Article in English

    Read more

  • Commonly used machine learning Algorithms

    Machine Learning is a sub-branch of Artificial Intelligence, used for the analysis of data. It learns from the data that is input and predicts the output from the data rather than being explicitly programmed. Machine Learning is among the fastest evolving trends in the IT industry. It has found tremendous use in sectors across industries, with its ability to solve complex problems which humans are not able to solve using traditional techniques. ML is now being used in IT, retail, insurance, government and the military. There is no end to what can be achieved with the right ML algorithm.

    Harsha Vardhan Garlapati – 24 January 2023 - Article in English

    Read more

  • Industrial Solutions For Machine-Learning-Enabled Yield Optimization And Test

    Implementing ML in in-line manufacturing testing poses its own set of challenges. ML-based applications challenge traditional test flows and infrastructures, as they require: large amounts of data, a secure and dynamic test execution IT infrastructure... However, some of these properties conflict with traditional test setups, resulting in non-standard test flows and creating additional work that impacts time-to-market and return on investment - and, in particular, slows the success and adoption of ML applications.

    Sonny Banwari – 10 January 2023 - Article in English

    Read more

  • Deep CNN-Based Materials Location and Recognition for Industrial Multi-Crane Visual Sorting System in 5G Network

    Smart manufacturing is a challenging and exciting topic in Industry 4.0. Many computer vision (CV) based applications have attracted great interest from researchers and industries worldwide. However, it is difficult to integrate visual recognition algorithms into industrial control systems. In this paper, we develop a multi-grid visual sorting system with cloud-based APIs in a 5G environment, in which character recognition based on deep convolutional neural network (CNN) and dynamic scheduling are designed for materials in smart manufacturing.

    Meixia Fu - 12 January 2023 - Article in English

    Read more

  • Optimize electric automation control using artificial intelligence (AI)

    In order to successfully address the problems facing electrical engineering today, a control system based on artificial intelligence technology is designed. The paper presents a model of an electrical automation control system based on an artificial intelligence algorithm. The control parameters are optimised by implementing a control method based on an artificial intelligence algorithm. The use of artificial intelligence algorithms for autonomous electrification control can significantly improve control response times, reduce costs, and make production more efficient.

    Farah Sami - December 2022 - Article in English

    Read more

  • Is Data Science For All?

    Data science seems to be the buzzword of the decade. The internet is talking about it, companies want it, and your friends and family are aware of how data science is changing our world. Now let's turn to aspiring data science professionals and how a comprehensive Data Science Bootcamp can answer this question. Regardless of position, industry and level of technical knowledge, would you like to understand where you fit in? Is it possible to tap into this "new oil" that everyone is talking about? The title does say "Is data science for everyone?" but I would like to focus on "Is data science for you?"

    Anish Mahapatra – 2 January 2023 - Article in English

    Read more

  • How to Run Stable Diffusion on Your PC to Generate AI Images

    The French writer and philosopher Voltaire once said that "originality is nothing but a judicious imitation", and when it comes to the use of artificial intelligence, he is absolutely right. Through a multitude of complex mathematical calculations, powerful supercomputers can be used to analyse billions of images and texts, creating a digital probability map between the two. One of these models is Stable Diffusion. And the best part is that you can use it too, with our detailed guide on how to run it.

    Nick Evanson - 28 December 2022 - Article in English

    Read more

  • Performance Evaluation of Deep Learning Algorithm Using High-End Media Processing Board in Real-Time Environment

    Implementing image-processing-based artificial intelligence algorithms is a critical task that requires careful consideration when selecting both the algorithm and the processing unit. With the advancement of technology, researchers have developed many algorithms to achieve high accuracy with minimal processing requirements. On the other hand, high-end and cost-effective graphics processing units (GPUs) are now available to handle complex processing tasks. In the proposed work, we tested a convolutional neural network (CNN) based on variants of You Only Look Once (YOLO) on the NVIDIA Jetson Xavier to assess the compatibility between the GPU and the YOLO models.

    Muhammad Asif - 07 December 2022 - Article in English

    Read more

  • Machine Learning: A Powerful Tool for Bureau Research

    Data is the lifeblood of geoscience research. Bureau of Economic Geology researchers gather reams of data from a myriad of sources. Core, cuttings, outcrops, fluids, gases, satellites, drones, seismometers, sensors, scanners, and an array of equipment all provide huge amounts of data for cataloging and analysis. This was once an incredibly daunting task, but, for several years now, Bureau scientists and engineers have utilized machine learning (ML) as an extremely powerful tool to expedite and enhance their research.

    Bureau of Economic Geology - 21 December 2022 - Article in English

    Read more

  • The Year in Computer Science

    The metaverse is becoming one of the hottest topics not only in technology, but also in the social and economic spheres. The metaverse is slowly evolving into a consumer virtual world where it is possible to work, learn, shop, be entertained and interact with others in ways never before possible. However, most of these metaverse experiences can only continue to progress with the use of deep learning (DL), as artificial intelligence (AI) and data science will be at the forefront of advancing this technology.

    Victor Dey - 22 December 2022 - Article in English

    Read more

  • Micro-Data Centers Enable Big Data, AI for Edge Computing

    Edge computing plays an essential role in the efficient implementation of several embedded applications, such as artificial intelligence (AI), machine learning, deep learning, and the Internet of Things (IoT). Today’s data centers, however, cannot currently meet the requirements of these types of applications. This is where edge micro-data centers (EMDCs) come into play.

    Maurizio Di Paolo Emilio – 5 December 2022 - Article in English

    Read more

  • Machine Learning Applications in Microgrid Management System

    The advent of renewable energy sources (RES) in the electricity industry has revolutionised the management of these systems due to the need to control their stochastic nature. Deploying RES in the microgrid (MG), as a subset of the power system, is an advantageous way to exploit their innumerable merits in addition to controlling their random nature. The management of the MG can be optimised by using machine learning (ML) techniques applied to applications.

    Lilia Tightiz, Joon Yoo – 16 December 2022 - Article in English

    Read more

  • Autonomous AI-based System Compliant with Expert Consensus for AMD Treatment

    New findings suggest the potential of a novel artificial intelligence-based system for autonomous follow-up of patients treated for neovascular age-related macular degeneration (nAMD). The research demonstrated both its safety and compliance with expert consensus as being on par with decisions made in clinical practice, with particularly low rates of false-positive classification of choroidal neovascularization (CNV) activity.

    Connor Iapoce – 4 December 2022 - Article in English

    Read more

  • AI and How It Can be Implemented

    Artificial Intelligence is said to be omnipresent nowadays (see an interesting brief history of AI). This means it is becoming fundamentally present in the technology landscape. Hardly any business will remain unaffected by it, even in the near future. It may well be wise to consider it at the outset of any entrepreneurial effort: including AI from the beginning, where it makes sense, might put your business development on a faster trajectory. But how do you implement it?

    NazimuddinAR - 10 December 2022 - Article in English

    Read more

  • Where CISOs rely on AI and machine learning to strengthen cybersecurity

    Faced with an onslaught of malware-less attacks that are increasingly hard to identify and stop, CISOs are contending with a threatscape where bad actors innovate faster than security and IT teams can keep up. However, artificial intelligence (AI) and machine learning (ML) are proving effective in strengthening cybersecurity by scaling data analysis volume while increasing response speeds and securing digital transformation projects under construction. 

    Louis Columbus - 28 November 2022 - Article in English

    Read more

  • Busy GPUs: Sampling and pipelining method speeds up deep learning on large graphs

    Graphs, potentially large networks of nodes connected by edges, can be used to express and interrogate relationships between data, such as social connections, financial transactions, traffic, energy networks, and molecular interactions. As researchers collect more data and build ever larger graphs, they need faster and more efficient methods to analyse them. This new technique dramatically reduces the time required to train and run inference on large graph datasets, keeping pace with rapidly changing data in finance, social networks, and cryptocurrency fraud detection.

    Lauren Hinkel - 29 November 2022 - Article in English

    Read more

  • Convolutional Neural Network Tutorial (CNN) – Developing An Image Classifier In Python Using TensorFlow

    Let us discuss what a Convolutional Neural Network (CNN) is and the architecture behind CNNs, which are designed to address image recognition and classification problems. Convolutional Neural Networks have wide applications in image and video recognition, recommendation systems, and natural language processing (a compact Keras example follows this entry).

    Anirudh Rao – 15 November 2022 - Article in English

    Read more
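
    For a taste of what such a tutorial typically builds, here is a compact TensorFlow/Keras sketch of a small convolutional image classifier trained on CIFAR-10. The dataset, architecture, and epoch count are illustrative choices and may differ from the article's own example.

      # Small CNN image classifier in TensorFlow/Keras (illustrative architecture).
      import tensorflow as tf
      from tensorflow.keras import layers

      (x_train, y_train), (x_test, y_test) = tf.keras.datasets.cifar10.load_data()
      x_train, x_test = x_train / 255.0, x_test / 255.0   # scale pixels to [0, 1]

      model = tf.keras.Sequential([
          layers.Conv2D(32, 3, activation="relu", input_shape=(32, 32, 3)),
          layers.MaxPooling2D(),
          layers.Conv2D(64, 3, activation="relu"),
          layers.MaxPooling2D(),
          layers.Flatten(),
          layers.Dense(64, activation="relu"),
          layers.Dense(10),                                # one logit per CIFAR-10 class
      ])

      model.compile(optimizer="adam",
                    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
                    metrics=["accuracy"])
      model.fit(x_train, y_train, epochs=3, validation_data=(x_test, y_test))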

  • Machine Learning in Financial Forecasting – A Possible Reality or a Relentless Trial and Error?

    Companies are moving away from an annual budgeting period to a quarterly and monthly financial forecasting process. However, with the average time to consolidate and submit a complete detailed budget being 4-8 weeks, depending on the size of the company, the introduction of a financial forecasting cycle does not leave much room for flexibility. Therefore, in this new reality, teams need innovative tools and techniques to reduce the time required to complete a financial forecasting cycle. That's where automation, data lineage and Machine Learning (ML) come in.

    Gizelda Ekonomi - 22 November 2022 - Article in English

    Read more

  • Deep Learning Applications in Tumor Pathology

    Scientists and developers have designed machine learning- and deep learning-based algorithms to perform various tasks related to tumor pathology, such as tumor detection, grading with variant stages, diagnostic forecasting, recognition of pathological attributes, pathogenesis, and genomic mutations. Pathologists are interested in artificial intelligence to improve diagnostic precision and impartiality and to minimize their workload.

    Alhassan Ali Ahmed, Mohamed Abouzid, Elżbieta Kaczmarek - 15 November 2022 - Article in English

    Read more

  • The Use of Artificial Intelligence in Orthopedics: Applications and Limitations of Machine Learning in Diagnosis and Prediction

    Machine Learning (ML) is driving innovation in a vast variety of fields. It is the ability of a machine to identify relationships between data without explicit criteria, emulating a human-like type of learning. Over the last decade, research efforts have also been focused on orthopedics in order to provide help and assistance to surgeons.

    Bernardo Innocenti - 24 October 2022 - Article in English

    Read more

  • MLOPS PLATFORMS OVERVIEW – KEY FEATURES

    It has become increasingly difficult for companies to remain competitive as they face multiple challenges, such as data tagging, infrastructure management, model deployment and performance evaluation. This is where MLOps comes in: a methodology covering how to put a machine learning solution into production and all the steps required to do so.

    Marco – 8 November 2022 - Article in English

    Read more

  • What Are Neural Networks?

    A key element in artificial intelligence, artificial neural networks (ANNs) operate in a manner similar to the human brain. They mimic the way actual biological neurons function in order to find answers to complex computing questions and challenges. The method, which can involve millions of artificial neurons, falls under the umbrella of machine learning. It produces mathematical algorithms that are widely used to recognize patterns and solve complex problems in science and business.

    Samuel Greengard - 9 November 2022 - Article in English

    Read more

  • The Gap Between Deep Learning and Human Cognitive Abilities

    Roosh creates ML/AI projects and invests in innovative ideas in the sector. The first lecture of the series was given by Yoshua Bengio, professor at the University of Montreal. During the lecture, he discusses his research project, which aims to bridge the gap between modern AI based on deep learning and the human intelligence characterised by creativity.

    Bohdan Ponomar – 31 October 2022 - Article in English

    Read more

  • Visionary.ai Introduces Video Denoiser for Low-light Conditions

    AI-based approaches are used in various applications across industries to monitor and optimise the operation of specific use cases. Real-time image and video processing is one such use case where AI plays an important role in analysing, identifying and extracting the required information from an image. Visionary.ai develops AI-based software algorithms that process image inputs to make them ready for analysis.

    Saumitra Jagdale -  26 October 2022 - Article in English

    Read more

  • Tax digitalisation: Not the future, but the present

    Designing a data strategy at the international, national or company level can lead us to a scenario where we can use specific modelling tools and select the most useful technologies for each tax problem. This relationship between taxation and technology is not new. However, the recent increase in the availability of data has allowed a change.

    António Queiroz Martins - 5 September 2022 - Article in English

    Read more

  • What’s Next for AI Regulations in Medical Imaging?

    Artificial intelligence (AI) is beginning to take hold in the healthcare sector, where it is increasingly used in a variety of applications, from medical imaging and ophthalmology to remote monitoring and electronic health records (EHRs). The rapid pace of innovation is challenging for regulators such as the FDA, who are responsible for ensuring that any solution used for medical purposes is effective and does not compromise health or safety.

    AJ WATSON – 3 November 2022 - Article in English

    Read more

  • How Deep Learning Facilitates Automation & Innovation and When to Use It

    In today's technological age, deep learning is at the forefront of innovation in the workplace. Studies show that 48% of companies are effectively using deep learning. In this article, we will explore the automation of deep learning, including when to use deep learning instead of machine learning.

    Mosaic - 2017 - Article in English

    Read more

  • What are Radial Basis Functions Neural Networks? Everything You Need to Know

    Radial Basis Function (RBF) Networks are a particular type of Artificial Neural Network used for function approximation problems. RBF Networks differ from other neural networks in their three-layer architecture, universal approximation, and faster learning speed. In this article, we'll describe Radial Basis Functions Neural Network, its working, architecture, and use as a non-linear classifier. 

    Simplilearn – 12 September 2022 - Article in English

    Read more

  • What is GPT-4 and how will advancements in AI help your business?

    The AI software market is growing at a rate of 21.3% per year. With such an expansion, it’s no wonder businesses of all sizes are looking to get in on the action and see how AI can help them grow. And one of the most talked-about AI applications right now is GPT-4. So, what is GPT-4? And how will it be able to help your business grow? Let’s take a closer look.

    Jason Donegan - 21 September 2022 - Article in English

    Read more

  • A new approach to overcome multi-model forgetting in deep neural networks

    Although many deep neural networks capable of performing a variety of tasks have achieved remarkable results, they are usually only good at one particular task because of what is called "catastrophic forgetting". In this article, we explain in more detail what this means.

    Ingrid Fadelli - 11 March 2019 - Article in English

    Read more

  • What Is a Support Vector Machine? Working, Types, and Examples

    A support vector machine (SVM) is a machine learning algorithm for solving complex problems. This article explains the basic principles of SVMs, how they work, their types, and some concrete examples (a short scikit-learn example follows this entry).

    Vijay Kanade – 20 September 2022 - Article in English

    Read more
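
    As a quick companion example, here is a short scikit-learn snippet that trains an SVM classifier on a built-in dataset. The dataset, kernel, and parameters are illustrative choices, not the article's.

      # Train and evaluate an SVM classifier with scikit-learn (illustrative example).
      from sklearn.datasets import load_breast_cancer
      from sklearn.model_selection import train_test_split
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler
      from sklearn.svm import SVC

      X, y = load_breast_cancer(return_X_y=True)
      X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

      clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))  # scale, then fit
      clf.fit(X_train, y_train)
      print("test accuracy:", clf.score(X_test, y_test))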

  • The Hardware Pushing AI to the Edge

    While artificial intelligence and machine learning computations are often performed on a large scale in data centers, the latest processing devices allow for a trend towards integrating AI/ML capability into IoT devices at the network edge. Apart from the two most common applications, speech processing and image recognition, machine learning can of course be applied to data from almost any type of sensor.

    Sally Ward-Foxton – 27 May 2019 - Article in English

    Read more

  • Clustering Algorithm Fundamentals and an Implementation in Python

    An article covering clustering and its importance in machine learning for understanding unlabeled data. You will see what clustering is, how clustering algorithms work, where they are used, and how to implement one (a minimal k-means example follows this entry).

    Praveen Nellihela – 1 September 2022 - Article in English

    Read more
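
    To accompany the article, here is a minimal k-means run with scikit-learn on synthetic data; the number of clusters and other parameters are illustrative choices.

      # Cluster synthetic 2-D points with k-means (illustrative parameters).
      from sklearn.cluster import KMeans
      from sklearn.datasets import make_blobs

      X, _ = make_blobs(n_samples=300, centers=4, cluster_std=0.8, random_state=42)
      kmeans = KMeans(n_clusters=4, n_init=10, random_state=42).fit(X)

      print("cluster sizes:", [int((kmeans.labels_ == k).sum()) for k in range(4)])
      print("first centroid:", kmeans.cluster_centers_[0])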

  • Machine learning interview preparation— popular topics

    Here's an article on important machine learning interview topics: data preprocessing, data augmentation, imbalanced data, precision and recall, activation functions, regularization, and loss and cost functions.

    Maryna Klokova – 29 August 2022 - Article in English

    Read more

  • Technology helps self-driving cars learn from own 'memories'

    Tom Fleischman - 22 June 2022 - Article in English

    Read more

  • What is AI? Here's everything you need to know about artificial intelligence

    Nick Heath - 23 July 2021 - Article in English

    Read more

  • Artificial Intelligence And Data Privacy – Turning A Risk Into A Benefit

    David A. Teich - 10 August 2020 - Article in English

    Read more